swinster

Home ESXi (free) using SSHD (Hybrid) drives?

Recommended Posts

Hi all,

I have a small ESXi host that runs several VMs, but the main datastore HDDs are coming to the end of their life (Western Digital Caviar RE WD2500YD). They must be getting on for 10 years old and have run pretty much non-stop. I have had a few issues recently, so I thought I would change the drives that house the main ESXi datastore and hence hold the actual VMs. The machine also has a separate RAID card, but I put that in pass-through mode with an NTFS-formatted RAID volume for my Windows SBS VM data.

So, I was looking at a couple of 1TB SSHD (Hybrid) drives, but I wondered how they might perform. To be honest, they can't be any worse than the 250GB WD drives. My main SBS 2011 VM takes around 15-20 minutes to boot, although I'm guessing there is something off in the setup. Once up, though, it sits there and does what it needs to do without much issue.

I can't afford proper SSD drives, but I'm not sure whether an SSHD would be a better choice than an inexpensive 1TB entry-level enterprise drive - they come in at around the same price.

Thoughts please.

Chris


How large are the VMs you plan on hosting on the SSHDs? You say you can't afford a proper SSD, but with a bit of tweaking you can get a long way on some of the budget SSDs on the market right now.

For example, right now one of the main hosts in the lab is using a H/W RAID6 configuration of Crucial M500 960GB SSDs which I've manually over-provisioned down to about 850GB. That gives each drive some spare space to handle more write-intensive scenarios, and has the added benefit of being easier on the NAND, increasing endurance.
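For anyone wanting to try the same trick, manual over-provisioning usually means shrinking the drive's visible capacity so the controller can treat the rest as spare area. A rough sketch, with the capacities and device name as placeholders rather than anything from this thread:

```shell
# Sketch: over-provision a "960GB" SSD down to ~850GB visible capacity.
# The trimmed-off space is never written, so the controller can use it
# as extra spare area for garbage collection and wear levelling.
TARGET_BYTES=850000000000          # desired visible capacity (decimal GB)
SECTOR_SIZE=512                    # logical sector size of the drive
VISIBLE_SECTORS=$((TARGET_BYTES / SECTOR_SIZE))
echo "limit drive to ${VISIBLE_SECTORS} sectors"
# On a blank, secure-erased drive this could then be applied with hdparm
# (destructive, and /dev/sdX is a placeholder -- check the device twice):
#   hdparm -N p${VISIBLE_SECTORS} /dev/sdX
```

Simply partitioning short of the full capacity on a fresh drive achieves much the same effect if the HPA route feels too invasive.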

Back on the SSHD topic, I've only played around with Seagate's enterprise SSHD in a VMware setting. It proved to be pretty good, although with that we're talking about a 15K SAS HDD with a serious amount of flash. I'm not sure the consumer variants will be able to help out as much as you would hope. The spread of hot data across the VMs is going to be so large that the drive will constantly be trying to cache, so you probably won't see any benefit over a normal 7,200rpm HDD. If you use the 2.5" Seagate notebook SSHD, you will probably really wish you hadn't.


Hey Kevin,

I have around 500GB in VMs, which is currently spanned across two of the WD 250GB drives (as an extent). I have had a third WD 250GB as a separate datastore for backups of the more essential VMs, but for whatever reason this hasn't worked (it was a script), so I'm just crossing my fingers I can retrieve what I can before the drive in the main datastore fails completely. I know I can reinstall the VMs, but it's still a pain reconfiguring all the server stuff.

Although I only have around 500GB in VMs at the moment, I'm always playing with new machines, so a little bit of extra room would be nice. My idea was to use two 1TB drives - but no RAID (they would be connected via the motherboard SATA connectors, and ESXi doesn't support the software RAID in any case, so RAID would mean another PCIe RAID card). One would simply be used as the datastore, the other as a backup, but this time I'm going to use something like Unitrends Free Edition or Thinware vBackup Standard (both free) with free ESXi, and make sure they are actually doing their job.
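In case the appliance route falls through again, even a small script cloning each VM's disk to the second datastore covers the basics. A dry-run sketch, assuming the ESXi shell; the datastore paths and VM directory names are made up for illustration:

```shell
# Dry-run sketch: clone each VM's vmdk to the backup datastore with
# vmkfstools. The echo prints the commands instead of running them;
# paths and VM directory names here are hypothetical.
SRC=/vmfs/volumes/datastore1
DST=/vmfs/volumes/backup1
for VM in sbs2011 testvm; do
  # -i clones a virtual disk; -d thin keeps the copy thin-provisioned
  echo vmkfstools -i "$SRC/$VM/$VM.vmdk" -d thin "$DST/$VM/$VM.vmdk"
done
```

Worth noting that a clone taken while the VM is running is only crash-consistent; shutting the VM down first gives a clean copy.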

I would rather have slower performance on my VMs and keep some RAID resilience for the main data, hence the current RAID card is used for the data volumes only.

For SSHDs, I wasn't looking to use the 2.5" drives but rather something like the Seagate 3.5" 1TB SSHD (ST1000DX001), which has an 8GB NAND cache (approx £61). The WD Black dual drive with a 1TB HDD and a more reasonable 120GB SSD looks great, but at over double the price of one normal drive (£168), I think it might be too much. From the 7,200rpm lines I was looking at either the Toshiba MG03ACA100 (£67) or the WD RE4 (£77).

How much are we looking at for the SSDs you mention?


Well, the Seagate SSHD would work, but the Black dual drive would have no chance... it's 100% not compatible with VMware.

Are those VMs thin or thick provisioned? A 500GB SSD is about $250, while a 1TB model would be around $450.


Hey Kevin,

I had originally set the VMs up as thick; however, as space became more of an issue, I switched to thin. I would like to go back to thick where possible.
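If it helps once the new drives are in: going back from thin to thick doesn't need a rebuild, as vmkfstools can inflate a thin disk in place once the datastore has the room. A dry-run sketch with a placeholder path:

```shell
# Dry-run sketch: inflate a thin-provisioned vmdk back to thick
# (eager-zeroed) with vmkfstools. The path is hypothetical, and the VM
# should be powered off first. Remove the echo to actually run it.
VMDK=/vmfs/volumes/datastore1/sbs2011/sbs2011.vmdk
echo vmkfstools --inflatedisk "$VMDK"
```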

Cheers for the heads-up on the WD Black dual drive. WRT SSDs, they are on my list, but so is a whole bunch of other stuff. I have looked at some of the SSDs around the 512GB mark in the UK, but there is a huge difference in read and write speeds across the board. I'm guessing the higher the speed on both read and write, the better. The 512GB Crucial MX100 is only around £156, which I suppose isn't bad, but getting a few of them becomes more of a stretch.

I haven't really got into SSDs yet, especially in reference to ESXi. Are the SSDs simply set up as their own datastore in the same way a HDD would be, or are they used as some kind of mega cache?

Cheers


With free ESXi you would just add the new storage device as a new datastore.
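For reference, the same job from the ESXi shell looks roughly like this; under the vSphere Client it's just the Add Storage wizard. The device ID and datastore label are placeholders:

```shell
# Dry-run sketch: create a VMFS5 datastore on a new disk from the ESXi
# shell (the GUI "Add Storage" wizard does the same thing). The naa.*
# device/partition and the label are hypothetical; a VMFS partition
# must already exist (created by the wizard, or with partedUtil).
PART=/vmfs/devices/disks/naa.5000000000000001:1   # placeholder
echo vmkfstools -C vmfs5 -S seagate_sshd_1 "$PART"
```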

For your environment it almost seems as though caching might be a good alternative, but you are going to have to get creative to make it work. Enterprise-grade caching software at the hypervisor level will be far out of your price range ($2,000-3,000 per host), and it all requires enterprise ESXi licenses, which can cost another $2,000-3,000 per CPU.

Now, one option could be SSD caching on your RAID card, if it supports it. Buying a new RAID card with that feature would cost more than the investment in consumer SSDs, so it's not the best upgrade path if you don't have the support already, but it's worth checking out.

I hate to say it, but unless you can shrink those VMs with thin provisioning, or somehow rebuild them so the I/O-intensive volumes sit on flash and the rest on a HDD, there isn't a cheaper solution than buying large consumer SSDs.

Could you post a screenshot of the advanced performance tab showing disk activity? I'm curious what you are seeing in peak access times during boot.



Yeah, at those costs I don't think that's going to happen. At this level it's all about compromise, so I'll give the SSHDs a whirl.

My current RAID card is an Adaptec 5405, currently hooked up to four SATA drives set up as RAID 5 for the main data storage, and in pass-through mode on ESXi so the main server VM has direct access to the volume. Unless I add a SAS expander, I believe this is all the card can handle.

I ran a drive check via ESXi using 'voma' on the current datastore (for some reason it's back up and working, but I don't trust it to stay that way; however, I have extracted the VMs and got them back up and running), which returned thousands (yes, 127,000+) of errors of one kind or another! I can't quite believe the thing is still going. As far as I have been able to find out, I can't actually fix these errors.
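For anyone following along, the check in question looks roughly like this. voma ships with ESXi 5.1 and later and, as noted, only reports VMFS metadata errors on these versions rather than repairing them. The device ID below is a placeholder:

```shell
# Dry-run sketch: check a VMFS datastore's metadata with voma.
# The naa.* device/partition is hypothetical; point it at the partition
# backing the datastore, ideally with all VMs on it powered off.
PART=/vmfs/devices/disks/naa.5000000000000001:1   # placeholder
echo voma -m vmfs -f check -d "$PART"
```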

I will shut down the host later and reboot the main VM to grab the performance figures for the disk activity.



What type of system is all of this installed into? Also where are you located?

Homebrew :). It's based around a Tyan S5510 motherboard, 24GB RAM and an Intel Xeon E3-1235 CPU, in a crappy case that must date from the '90s! I'm in the UK - Swansea, to be exact.


I think I might have figured out an easy solution to this problem. I have some spare gear here to throw at this problem. Would you be game to test it out and report back some of your findings?


Absolutely. I have, however, already ordered the Seagate drives, but I spend my life tinkering (personally and at work), so I have no qualms about doing this.

The only thing I ever wondered about was ESXi support for the onboard SATA ports. I believe this wasn't really supported, and in fact the latest ESXi 5.5 removed this support completely on a fresh install, although I have read you can re-add the drivers if required. Maybe I'm just making this up... :wub:


SATA support is just fine in ESXi 5.1, and I'm almost positive about 5.5. You might run into driver issues if you aren't on a fully certified platform, but SATA itself isn't chopped out.


At least the Swans look safe from relegation ;)

There's a long way to go yet. In any case, I'm originally from London/Essex way, so I am Claret and Blue through and through.

SATA support is just fine in ESXi 5.1, and I'm almost positive about 5.5. You might run into driver issues if you aren't on a fully certified platform, but SATA itself isn't chopped out.

Great. I bought the motherboard some time back, but with ESXi in mind. However, I have just looked at the Tyan/VMware compatibility matrix, and whilst the S5510 is not listed specifically, its bigger brother (the S5512) is certified for ESXi 5.0 and 5.1, and both boards use the same chipset and SATA controller - the Intel C204 - so we should have no real problems (I'm thinking). However, there is no mention of ESXi 5.5 certification (probably because the board was discontinued before ESXi 5.5 was released).

The upgrade board to both of these is the S5532, which has the C222 chipset and is certified for ESXi 5.5, but a lot of small server boards use the C204 chipset, so I'm hoping there will be no issue.


Hey all,

I am waiting for the new drives, which should be here today. However, as one of the drives disappeared from the controller again last night and crashed the machine, I restarted this morning and took a snapshot of the start-up performance. I gotta say, this looks pretty poor.

There are two WD 250GB RE 7,200rpm drives that make up this datastore (something I think I won't do again - I would rather just have separate datastores), and you can see that the main SBS server VM resides on one of these disks while the other is doing nothing, but a maximum read rate of 21 MB/s seems a little slow - the average is really bad.

One last thing, though. I run the ESXi hypervisor from a rather old USB pen drive, which in itself is probably very slow; however, I was always led to believe that this shouldn't matter too much, as once loaded the hypervisor exists in memory and only reads/writes back to the USB drive occasionally.

[Attached screenshot: disk activity during VM start-up]



Hi all,

Well, the new drives are in and I have moved the VMs from the old datastore to the new ones. Whilst starting the main SBS server isn't blisteringly fast, it certainly seems a bit quicker than from the old drives. I have created two datastores, one on each drive, taking the full capacity of the drive. Whilst these are 1TB drives, they have formatted under ESXi to 931GB.
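The 1TB-to-931GB shrink, incidentally, is just decimal versus binary units: the drive vendor counts 10^12 bytes, while ESXi reports binary gibibytes (2^30 bytes each). A quick check of the arithmetic:

```shell
# Why a "1TB" drive formats to ~931GB: 10^12 bytes divided by 2^30.
DRIVE_BYTES=1000000000000        # vendor's 1TB (decimal bytes)
GIB=$((1024 * 1024 * 1024))      # one binary GiB
echo "$((DRIVE_BYTES / GIB)) GiB"   # prints: 931 GiB
```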

I have a start-up delay of 600 seconds on the main SBS 2011 server. Remember, this was sometimes taking over 15 minutes to boot to the "Ctrl+Alt+Del" screen - it always sits on the "Applying Computer Settings" screen for ages, and whilst it still does, the server now boots in around 8 minutes. Given all the crashes, I may well have some corruption on the vHDD and other Windows problems, so I will run some in-place repairs over the coming days. It also used to blue screen quite a bit after running for a while (Stop 0x000000D1), so I'm hoping I can fix that too.

For the time being I have put most of the other VMs on the 'other' datastore, so each disk is doing a separate job. I now also have a backup VM in place based on the Unitrends FREE backup server. This is a VM-based appliance that runs as a traditional file- and image-based backup server. Whilst this doesn't snapshot and copy the VMs directly (which I can't do because I'm running free ESXi), it should be better than nothing. For now, the backups of each VM will be placed onto a vHDD created on the 'other' datastore. I might get another 1/2TB drive in the future to dump backups onto, but it'll do for the time being.

I have posted another image showing the data throughput of the new drives. Again, not blisteringly fast, but until I can afford another hardware RAID or caching SATA controller, it will have to do.

Any other thoughts or comments are still welcome.

Chris

[Attached screenshot: data throughput of the new drives]

