swinster

Member · Content Count: 36 · Community Reputation: 0 Neutral
  1. swinster

    Samsung 860 EVO SSD VS SAS HDD

    Resolving oddities == fine. Spending money on SSDs that are doomed to simply not work in this scenario != fine. Thanks for the heads-up.
  2. swinster

    Samsung 860 EVO SSD VS SAS HDD

    @brian, indeed, I have a very similar question. I have a small home ESXi server that I managed to get hold of: a Dell T430 with a Dell PERC H730 RAID adaptor and a 16-bay 2.5" SAS/SATA enclosure. Currently there are two small SAS drives in RAID 1 (1TB, mainly for the boot vHDDs of various VMs), so 14 bays are free. There is also an LSI 9260-8i RAID adaptor with 4 x 3.5" WD Red drives in RAID 6, although this keeps mucking me about and I think a drive is failing. This RAID 6 array is around 5TB, although less than 50% full. The workload for me is general IT labbing: I spin up different VMs to test various integrations, but I also run some general full-time AD workhorse stuff, such as Exchange (although the user count is negligible), a file server to hold data (mainly photos), and a couple of other VMs. In the first instance, I'm looking to use a couple of the 2.5" slots with some SATA SSDs (around 1TB each, I think) as RAID 0, but might well look to add more over time, so as to build a redundant array while keeping good speed, both to boot VMs from and to store data. What I'm interested in is: is there anything to look out for with the cheaper consumer SSDs (like the Samsung 860) when adding them to a RAID controller such as the Dell PERC and building a RAID array? Is this a no-no, or a viable option?
  3. Hi, Does anyone know if it is possible to manage RAID arrays on an LSI MegaRAID 9260-8i that is used to house a datastore on an ESXi host, from a Windows guest? Regards.
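For what it's worth, LSI's command-line tool MegaCLI can query and manage a MegaRAID controller from whichever OS can see it. A minimal sketch, assuming MegaCLI is installed in that OS and the 9260-8i is actually visible to it (e.g. passed through to the guest); these are standard MegaCLI invocations, not commands from the original post:

```shell
# Assumes LSI MegaCLI is installed (the 64-bit binary is usually MegaCli64)
# and the 9260-8i is visible to this OS.

# Show adapter details for all adapters
MegaCli64 -AdpAllInfo -aAll

# List all logical drives (the RAID arrays) on all adapters
MegaCli64 -LDInfo -Lall -aAll

# List all physical drives attached to the controller
MegaCli64 -PDList -aAll
```

Note that if the controller stays attached to the ESXi host (rather than passed through), the guest cannot talk to it directly; in that case management typically goes through LSI's host-side tooling instead.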
  4. Just one other quick question, if I may? I have never needed to take the data on an array built under one manufacturer's controller and transfer it to an array on a different manufacturer's controller, but I suspect that the drives in the array can't simply be switched to the other controller, can they? So, if I have an array that houses data, I would need to back it up to a different device, disband the array, re-attach the drives to the new controller, re-initialise, and transfer the data back? Many thanks.
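That suspicion is generally right: RAID metadata formats are vendor-specific, so moving populated disks between controller brands is not safe. A sketch of the backup/rebuild/restore workflow described above, with illustrative paths and device names (not from the original post):

```shell
# 1. Back up the data somewhere independent of the array
rsync -a /mnt/raid_volume/ /mnt/backup_disk/raid_copy/

# 2. Delete the logical drive on the old controller, move the disks to the
#    new controller, then create and initialise a fresh array there
#    (done in each controller's BIOS or management utility, not scriptable here).

# 3. Create a filesystem on the new array and restore the data
mkfs.ext4 /dev/sdb1          # /dev/sdb1 is illustrative
mount /dev/sdb1 /mnt/raid_volume
rsync -a /mnt/backup_disk/raid_copy/ /mnt/raid_volume/
```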
  5. Cheers. There is one on eBay for $150 at this moment.
  6. Hi all, I'm having a few issues with my Tyan S5510's onboard SATA controller; the motherboard forms the base of a small homebrew free ESXi server. I have a very simple setup whereby I have 3 SATA drives connected to the onboard controller, each of which is formatted as a separate VMFS volume (no RAID, as the software RAID on the onboard controller is not supported in ESXi). However, every so often I lose one of the datastores (it looks as if the controller stops functioning), and thus the VMs running on them. It appears that when I run multiple HDDs, the controller gives up after a while; rebooting the system restores drive recognition. I have replaced a drive previously, but the issue remains. I can't quite afford to replace the motherboard, CPU and memory just yet (which, apart from the SATA controller, seem OK), and so am looking to see if I can get an affordable (low-cost) hardware-based RAID card to run these disks as a single RAID 5 volume that will be recognised by ESXi. Thoughts welcome. Chris
  7. Hi all, Well, the new drives are in and I have moved the VMs from the old datastore to the new ones. Whilst starting the main SBS server isn't blisteringly fast, it certainly seems a bit quicker than from the old drives. I have created two datastores, one on each drive, taking the full capacity of the drive; whilst these are 1TB drives, they format under ESXi to 931GB. I have a start-up delay of 600 seconds on the main SBS 2011 server; remember, this was sometimes taking over 15 minutes to boot to the "Ctrl+Alt+Del" screen. It always sits on the "Applying Computer Settings" screen for ages, and whilst it still does, the server now boots in around 8 minutes. Given all the crashes, I may well have some corruption on the vHDD and other Windows problems, so I will run some in-place repairs over the coming days. It also used to blue-screen quite a bit after running for a while (Stop 0x000000D1), so I'm hoping I can fix that too. For the time being I have put most other VMs on the 'other' datastore, so each disk is doing a separate thing. I now also have a backup VM in place based on the Unitrends free backup server. This is a VM-based appliance that runs as a traditional file- and image-based backup server. Whilst this doesn't snapshot and copy the VMs directly (which I can't do due to the fact that I'm running free ESXi), it should be better than nothing. For the time being, the backups of each VM will be placed onto a vHDD created on the "other" datastore. I might get another 1/2TB drive in the future so that I can dump backups onto it, but it'll do for the time being. I have posted another image showing the data throughput of the new drives. Again, not blisteringly fast, but until I can afford another hardware RAID or caching SATA controller, it will have to do. Any other thoughts or comments are still welcome. Chris
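Incidentally, the 931 figure is nothing to worry about: it is just the decimal-versus-binary capacity convention. Drive makers quote 1TB as 10^12 bytes, while ESXi reports capacity in binary gibibytes (2^30 bytes):

```shell
# A marketed "1 TB" drive is 10^12 bytes; ESXi reports GiB (2^30 bytes each).
# Integer division shows the familiar figure:
echo $(( 10**12 / 2**30 ))   # prints 931
```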
  8. Hey all, I am waiting for the new drives that should be here today; however, as one of the drives disappeared from the controller again last night and crashed the machine again, I restarted this morning and took a snapshot of the start-up performance. I gotta say, this looks pretty poor. There are two WD 250GB RE 7200rpm drives that make up this datastore (something I think I won't do again; I would rather just have separate datastores), and you can see that the main SBS server VM resides on one of these disks while the other is doing nothing, but a maximum read rate of 21 MB/s seems a little slow, and the average is really bad. One last thing, though: I run the main ESXi hypervisor from a rather old USB pen drive, which in itself is probably very slow; however, I was always led to believe that this shouldn't matter too much, as once loaded the hypervisor exists in memory and only reads/writes back to the USB drive occasionally.
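For capturing this kind of disk performance data, ESXi's built-in esxtop can log samples from the host shell for later analysis; a sketch (the interval, sample count and output path are illustrative choices, not from the original post):

```shell
# Run esxtop in batch mode from the ESXi shell:
#   -b  batch (CSV) output
#   -d  delay between samples in seconds
#   -n  number of samples to collect
esxtop -b -d 5 -n 12 > /tmp/esxtop-boot-stats.csv
```

The resulting CSV can then be pulled off the host and inspected, which gives per-device latency and throughput counters rather than just the summary graph in the client.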
  9. There's a long way to go yet. Still, in any case, I'm originally from the London/Essex way, so am Claret and Blue through and through. Great. I did, of course, buy the motherboard some time back with ESXi in mind; however, I have just looked at the Tyan/VMware compatibility matrix, and whilst the S5510 is not listed specifically, its bigger brother (the S5512) is certified for ESXi 5.0 and 5.1, and both boards use the same chipset and SATA controller (the Intel C204), so we should have no real problems, I'm thinking. However, there is no mention of ESXi 5.5 certification (probably because the board was discontinued before ESXi 5.5 was released). The upgrade board to both of these is the S5532, which has the C222 chipset and is certified for ESXi 5.5, but a lot of small server boards use the C204 chipset, so I'm hoping there is no issue.
  10. Absolutely. I have, however, already ordered the Seagate drives, but I spend my life tinkering (personal and work), so have no qualms about doing this. The only thing I ever wondered about was the ESXi support for the onboard SATA ports. I believe this wasn't really supported, and in fact the latest ESXi 5.5 removed this support completely if you installed from new, although I have read that you can re-add the drivers if required. Maybe I'm just making this up...
  11. Homebrew. It's based around a Tyan S5510 motherboard, 24 GB RAM and an Intel Xeon E3-1235 CPU, in a crappy case that must date from the '90s! I'm in the UK, Swansea to be exact.
  12. Yeah, at those costs I don't think that's going to happen. At this level it's all about compromise, so I'll give the SSHDs a whirl. My current RAID card is an Adaptec 5405, currently hooked up to 4 SATA drives set up as RAID 5 for the main data storage, and in pass-through mode on ESXi so the main server VM has direct access to the volume. Unless I add a SAS expander, I believe this is all this card can handle. I ran a drive check via ESXi using 'voma' on the current datastore (for some reason it's back up and working, but I don't trust it to stay that way; however, I have extracted the VMs and have got them back up and running), which returned thousands (yes, 127,000+) of errors of one kind or another! I can't quite believe the thing is still going. As far as I have been able to find out, I can't actually fix these errors. I will shut down the host later and reboot the main VM to grab the performance of the disk activity.
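For anyone wanting to repeat the voma check mentioned above: voma runs against the device partition backing the datastore, not the datastore name. A sketch from the ESXi shell (the naa.* identifier below is illustrative; list your own first):

```shell
# Find which device/partition backs each VMFS datastore
esxcli storage vmfs extent list

# Run a VMFS metadata check against that partition.
# On the ESXi versions discussed here, voma can only report problems,
# not repair them, which matches the experience above.
voma -m vmfs -f check -d /vmfs/devices/disks/naa.600508b1001c9c10:1
```

The datastore should be quiesced (VMs on it powered off or moved) before checking, since voma expects the volume not to be actively in use.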
  13. Hey Kevin, I had originally been setting up the VMs as thick, however, as space became more of an issue, I switched to thin. I would like to go back to thick where possible. Cheers for the heads-up on the WD Black dual drive. With regard to SSDs, they are on my list, but so is a whole bunch of other stuff. I have looked at some of the SSDs around the 512GB mark in the UK, but there is a huge difference in read and write times across the board; I'm guessing the higher the speed on both read and write, the better. The 512GB Crucial MX100 is only around £156, which I suppose isn't bad, but getting a few of them becomes more of a stretch. I haven't really got into SSDs yet, especially in reference to ESXi. Is it that the SSDs are simply set up as their own datastore in the same way as an HDD would be, or are they used as some kind of mega cache? Cheers
  14. Hey Kevin, I have around 500GB in VMs, which is currently spanned across two of the WD 250GB drives (as an extent). I have had a third WD 250GB as a separate datastore for backup of the more essential VMs, but for whatever reason this hasn't worked (it was a script), so I'm just crossing my fingers that I can retrieve what I can before the main drive on the main datastore fails completely. I know I can reinstall the VMs, but it's still a pain reconfiguring all the server stuff. Although I only have around 500GB in VMs at the moment, I'm always playing with new machines, so a little bit of extra room would be nice. My idea was to use two 1TB drives, but no RAID (they would be connected via the motherboard SATA connectors, and ESXi doesn't support the software RAID in any case, so RAID would mean another PCIe RAID card). One would simply be used as the datastore; the other would be a backup, but this time I'm going to use something like Unitrends Free Edition with free ESXi, or Thinware vBackup Standard (both free), and make sure they are doing their job. I would rather have slower performance on my VMs but keep some resilience for the main data on its RAID, hence the current RAID card is used for the data volumes only. For SSHDs, I wasn't looking to use the 2.5" drives but rather something like the Seagate 3.5" 1TB SSHD (ST1000DX001), which has an 8GB NAND cache (approx £61). The WD Black dual drive, with a 1TB HDD and a more reasonable 120GB SSD, looks great, but at over double the price of one normal drive (£168) I think it might be too much. From the 7200rpm line I was looking at either the Toshiba MG03ACA100 (£67) or the WD RE4 (£77). How much are we looking at for the SSDs you mention?
  15. Hi all, I have a small ESXi host that runs several VMs, but the main datastore HDDs are coming to the end of their life (Western Digital Caviar RE WD2500YD); they must be getting on for 10 years old and have run pretty much non-stop. I have had a few issues recently and so thought I would change the drives that house the main datastore for ESXi, and hence hold the actual VMs. The machine actually has a separate RAID card, but I put that in pass-through mode with an NTFS-formatted RAID for my Windows SBS VM data. So, I was looking at a couple of 1TB SSHD (hybrid) drives, but I wondered how they might perform? To be honest, they can't be any worse than the 250GB WD drives. My main SBS 2011 VM server takes around 15-20 minutes to boot, although I'm guessing there is something off in the setup. Once up, though, it sits there and does what it needs to do without much issue. I can't afford proper SSD drives, but I'm not sure if an SSHD drive would be a better choice over an inexpensive 1TB entry-level enterprise drive; they come in at around the same price. Thoughts please. Chris