Search the Community

Showing results for tags 'vmware'.
Found 10 results

  1. Today at VMworld 2017 in Barcelona, VMware announced updates to its Cloud Provider Program, including: the new VMware Cloud Provider Platform, which offers rapid deployment and scale-up of environments for building value-added, differentiated services; advancements to its cloud management platform to help customers deploy, operate, and manage IT infrastructure and application services across a multi-cloud landscape; and the newest release of vSphere Integrated Containers, v1.2, which delivers new capabilities including provisioning of native Docker container hosts. VMware Announces Major Updates to its Cloud Provider Program
  2. HPE MSA 2042 vs NetApp E-2724

    Hi all, I've got an HP G7 two-node cluster running on NetApp FAS just now. The site is not compute-intensive, with some licence servers, AD, print, DNS, etc., and Exchange. However, they have a relatively large storage requirement, approx. 12TB, and it needs to be fast (not all-flash fast, but responsive). I'm contemplating renewing this with G9 hosts directly attached over 10G iSCSI to either an HP MSA 2042 or a NetApp E-2724. Both are almost identical commercially. They seem fairly similar, although the MSA offers broader drive compatibility and uses redirect-on-write. Both specs include SSD cache. Anyone have any input? Thanks.
  3. VMware sees this new class of Optane SSDs as a vehicle to increase vSAN's utility for big data analytics, business-critical apps, and VDI, among others. In these environments, though, the cache is much more active than in the use case above, so it remains to be seen just what the P4800X can do under a diverse load. The potential is promising, though, considering the P4800X is just a drop-in card with nothing new required at the node or vSAN level. For its part, VMware has done well to show its customers that new technology can be easily included in their HCI stack. VMware vSAN First HCI To Support Intel Optane
  4. Today at VMworld 2016 in Las Vegas, VMware announced VMware Integrated OpenStack 3, the latest release of its OpenStack distribution, based on the OpenStack Mitaka release. The new features, including support for VMware Cloud Foundation, are designed to make deploying OpenStack clouds simpler and more cost-effective, and to let customers use existing VMware vSphere workloads in an API-driven OpenStack cloud (see the API sketch after this results list). VMware Introduces VMware Integrated OpenStack 3
  5. Expanding on its existing Cloud Continuity Platform, version 5.0 adds Microsoft Azure as a public cloud target. The latest version of Zerto Virtual Replication provides consistency groups, block-level replication, and point-in-time recovery journaling from both VMware vSphere and Microsoft Hyper-V environments to Azure. Zerto claims that version 5.0 is easy to deploy, with the required Azure drivers pre-installed in every Windows OS newer than Vista, as well as a list of supported Linux OSes. Zerto Announces Virtual Replication 5.0
  6. The new vSphere release uses the ConnectX-4 offload engines to accelerate VXLAN virtual networks and VXLAN tunnel endpoint gateways. Benchmarks have been run showing that VDI workloads over 25Gb/s are more than twice as efficient as the same workloads over 10Gb/s. Mellanox also points out that RoCE improves storage access times by up to ten times while using 50% less CPU than traditional transports. Mellanox's ConnectX-4 supports VMware vSphere clouds with Ethernet networks operating at speeds up to 100Gb/s, and the company states that both compute and storage traffic can be run over a single wire, improving the ROI of HCI and letting multicore CPUs devote their full capacity to running applications. Mellanox Announces Driver Support For ConnectX-4 & RoCE For vSphere
  7. Today at VMworld 2016 in Las Vegas, VMware introduced VMware Cross-Cloud Architecture, a new architecture that extends its hybrid cloud strategy. Along with this, VMware is announcing new private and public cloud offerings that help customers run, manage, connect, and secure their applications across clouds and devices in a common operating environment. These new offerings include VMware's new unified software-defined data center (SDDC) platform, VMware Cloud Foundation; new DR offerings purpose-built for vCloud Air Network partners, VMware vCloud Availability; and a new release of VMware vCloud Air Hybrid Cloud Manager. VMware Introduces Cross-Cloud Architecture
  8. Hi, has anyone had any experience with EVO-RAIL/EVO-RACK by VMware? We are in the market for simplicity and ease of administration. We're running an IBM SMB blade unit today and are looking for something that can connect to a NetApp SAN. Thanks, John
  9. Hello all, I've just joined the forum as I have a couple of questions about our SAN and wonder whether we are getting the best from it. I have tried to give as much information as possible without listing every setting (I'll wait for requests for those). In short, I think I might have throughput/IOPS performance issues on my VMs; I'm not sure whether it's iSCSI-network related, SAN related, or VMware related, or maybe there is no issue and I was just expecting more.

     The problem: We recently had a company in to upgrade our infrastructure (we were still on Server 2003). We are noticing a bit of lag in one program. The program in question (an ERP system) is new to us, so we are not sure whether it's the hardware or the software, as we have no benchmarks for either.

     Information:
     • 3 hosts (HP DL380p Gen8, dual 3GHz 10-core CPUs, 64GB memory) running the latest ESXi from SD card; no disks in any host.
     • A stacked Cisco C2960X switch connects all hosts for networking, management, vMotion, etc. The iSCSI network runs on two separate, non-stacked Cisco C2960X switches.
     • Only 6 VMs are set up so far (2 DCs, 1 Exchange, 1 vCenter, 1 ERP application, 1 ERP DB); the ERP software is not used by anyone yet.
     • The SAN is one HP MSA 1040 (dual controller, dual 1GbE ports per controller, I believe 2GB cache per controller) plus one HP D2700 enclosure.

     The disk groups are as follows:

     Name     Size       RAID    Disk Type  Current Owner  Disks
     vDisk01  2398.0GB   RAID50  SAS        A              6 (10k 600GB)
     vDisk02  4196.5GB   RAID5   SAS        B              8 (10k 600GB)
     vDisk03  3597.0GB   RAID5   SAS        A              7 (10k 600GB)
     vDisk04  2996.8GB   RAID50  SAS        B              12 (15k 300GB)

     The ERP is on vDisk04 (only the application and DB VMs are on it). Here are some stats from CrystalDiskMark, run from one of the ERP VMs on vDisk04:

     Sequential Read          : 108.898 MB/s
     Sequential Write         : 107.679 MB/s
     Random Read 512KB        : 102.853 MB/s
     Random Write 512KB       :  98.401 MB/s
     Random Read 4KB (QD=1)   :   8.630 MB/s [ 2107.0 IOPS]
     Random Write 4KB (QD=1)  :   5.251 MB/s [ 1281.9 IOPS]
     Random Read 4KB (QD=32)  :  98.198 MB/s [23974.2 IOPS]
     Random Write 4KB (QD=32) :  49.038 MB/s [11972.3 IOPS]
     Test : 500 MB [D: 8.1% (16.3/200.0 GB)] (x5)
     OS   : Windows Server 2012 Standard Edition (full installation) [6.2 Build 9200] (x64)

     Running CrystalDiskMark gives almost identical results from any VM, no matter which vDisk the VM sits on; I would have expected slightly different results given the different RAID layouts. We also have one host running Server 2012 natively, and running CrystalDiskMark from it against vDisk03 gives slightly better results (shown below). So this leads me to wonder whether the VMware settings might be part of the issue. I'm also not sure whether I should be seeing better throughput/IOPS in general or whether these figures are reasonable (see the rough throughput check after this results list). This is the other result:

     Sequential Read          : 205.442 MB/s
     Sequential Write         : 189.707 MB/s
     Random Read 512KB        : 172.833 MB/s
     Random Write 512KB       : 169.600 MB/s
     Random Read 4KB (QD=1)   :   9.021 MB/s [ 2202.3 IOPS]
     Random Write 4KB (QD=1)  :   6.164 MB/s [ 1505.0 IOPS]
     Random Read 4KB (QD=32)  : 136.312 MB/s [33279.2 IOPS]
     Random Write 4KB (QD=32) :  50.407 MB/s [12306.4 IOPS]
     Test : 500 MB [E: 22.5% (752.6/3349.4 GB)] (x5)
     OS   : Windows Server 2012 Standard Edition (full installation) [6.2 Build 9200] (x64)

     Hopefully someone knowledgeable here can help me troubleshoot where any issues may be, or suggest a few settings to tweak for better performance; failing that, I can pinpoint the issue to the ERP software and know my hardware is fine. I hope the limitation isn't the new infrastructure that has just been implemented. Thank you in advance.

     EDIT: Forgot to mention I have round robin / MPIO set up. Thanks again.
  10. VMware has announced that its vSphere Mobile Watchlist is now available, allowing users to remotely monitor important virtual machines in their vSphere infrastructure from a smartphone. Users can also look up diagnostic information about alerts on their VMs via VMware Knowledge Base articles and the web, as well as remediate problems using power operations or delegate an issue to someone on their team back at the datacenter. VMware vSphere Mobile Watchlist Now Available
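
About the API-driven cloud mentioned in item 4: VMware Integrated OpenStack is a standard OpenStack distribution, so ordinary OpenStack client libraries should be able to talk to it. Below is a minimal sketch using Python's openstacksdk; the cloud name "vio" and the image, flavor, and network names are hypothetical placeholders, not values taken from the announcement.

    import openstack  # pip install openstacksdk

    # Connect using a clouds.yaml entry; the name "vio" is a placeholder.
    conn = openstack.connect(cloud="vio")

    # Look up resources by name; these names are hypothetical examples.
    image = conn.compute.find_image("ubuntu-16.04")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("private")

    # Boot a VM through the standard Nova API and wait for it to become ACTIVE.
    server = conn.compute.create_server(
        name="demo-vm",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.name, server.status)

The same connection object can list instances with conn.compute.servers() or remove one with conn.compute.delete_server(server), so existing OpenStack tooling carries over unchanged.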
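
On the MSA 1040 question in item 9: the sequential figures sit almost exactly at the ceiling of a single 1GbE iSCSI path, and the QD=32 numbers are far above what the spindles alone could deliver, which suggests the 500 MB test is largely hitting controller cache. The rough check below is a sketch only; the protocol overhead, per-disk IOPS figure, and RAID write penalty are rule-of-thumb assumptions, not measurements from this SAN.

    # Back-of-envelope check for the MSA 1040 / ESXi iSCSI post (item 9).
    # Assumptions (not from the post): ~10% protocol overhead on 1GbE iSCSI,
    # ~180 IOPS per 15k SAS disk, and a RAID 5/50 write penalty of 4.

    GBE_LINE_RATE_MBPS = 1000 / 8                     # 1GbE = 125 MB/s raw
    ISCSI_OVERHEAD = 0.10                             # assumed TCP/iSCSI overhead
    single_path_ceiling = GBE_LINE_RATE_MBPS * (1 - ISCSI_OVERHEAD)

    measured_seq_read = 108.9                         # MB/s, VM on vDisk04
    print(f"Single 1GbE path ceiling ~{single_path_ceiling:.0f} MB/s, "
          f"measured {measured_seq_read} MB/s "
          f"({measured_seq_read / single_path_ceiling:.0%} of one link)")
    # A result near 100% means large I/O is effectively using one path at a time,
    # so sequential throughput tops out around one link even with round robin.

    # Rough spindle-only IOPS for vDisk04 (12 x 15k 300GB in RAID 50).
    disks, iops_per_15k, write_penalty = 12, 180, 4
    read_iops = disks * iops_per_15k
    write_iops = disks * iops_per_15k / write_penalty
    print(f"vDisk04 spindle estimate: ~{read_iops} read IOPS, "
          f"~{write_iops:.0f} write IOPS")
    # The measured QD=32 results (about 24k read / 12k write IOPS) are well above
    # this estimate, pointing at controller cache absorbing the small test file.

If the single-link math matches, the usual places to look are the round robin path-switching settings and the number of active iSCSI paths per datastore, rather than the RAID layout.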