CougTek

Everything posted by CougTek

  1. The Eternus AF250 has lower latency numbers than the NetApp AFF A200 you tested earlier this month. It would be great if you could also test a 3PAR 8200 all-flash and maybe a PureStorage //X70 as well. I don't know what price segment the latter is in, but the 3PAR 8200 is a direct competitor of the Eternus AF250. PureStorage is currently the leading company on Gartner's chart for flash arrays. That doesn't mean much to me, but the management crowd drools when you show them Gartner graphs. Fujitsu seems to be more present in Europe than in North America. They have a local office here, but we never see them at trade shows and I've never been invited to a product information lunch the way HPE and Cisco do on a regular basis. They might have great products, but their marketing is well behind that of their competitors. Sending you a review unit seems like their best marketing move in years.
  2. We currently run a 3-node Hyper-V 2012 R2 cluster using an HP 3Par 7200 for storage. The 3Par is now out of warranty (which is too expensive to renew) and 90% full. Also, some of the nodes show memory usage spikes over 80%, so they'll have to be replaced soon too, even though they're still under their original 5-year warranty. The nodes have 256GB of RAM each and dual 10-core Xeons. We have twenty VMs serving around 350 users. Among the VMs, there's one hefty MS Exchange server and three SQL 2014 servers, two of which are quite busy. The current 3Par 7200 (capable of ~8,000 IOPS according to IOMeter) sometimes chokes under load, if I trust the Veeam ONE alerts I receive. Our data grows by over 30% per year and the VMs need 7TB today. We're looking for an upgrade that will last five years, without having to pour additional money into it before 2022 (a rough growth projection is sketched below). HPE's guys want us to get another 3Par (an 8200 with all-flash storage). I'd rather take another path. I've read a lot about SDS, and Windows 2016's Storage Spaces Direct looks quite promising. Datacore SAN Symphony also draws a lot of attention. SDS should also be simpler to manage than a proprietary system like a 3Par. Since we plan to upgrade our core switches, PFC/DCB for RoCE support on the switch side isn't a problem; the model we plan to get has it. Nutanix wants to propose a solution to us; I meet with one of their representatives tomorrow. A hyper-converged solution sounds nice, although the horror story I've read here dating back to mid-2015 isn't flattering for Nutanix. Thoughts?
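
     As a back-of-the-envelope check of that capacity requirement, here's a minimal Python sketch of my own (assuming the ~30% yearly growth compounds and starts from the 7TB the VMs need today):

         # Rough capacity projection: 7TB today, ~30% compound yearly growth.
         capacity_tb = 7.0
         growth_per_year = 0.30
         years = 5  # the upgrade is expected to last until ~2022

         for year in range(1, years + 1):
             capacity_tb *= 1 + growth_per_year
             print(f"After year {year}: ~{capacity_tb:.1f} TB")

         # Prints ~9.1, ~11.8, ~15.4, ~20.0 and ~26.0 TB, so whatever we buy
         # should comfortably hold on the order of 25TB by the end of the period.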
  3. Hyper-V Cluster Storage Revamp

    Good to know about the Synology vs. NetApp all-flash comparison. Unfortunately, none of my main suppliers are trying to push a NetApp on me, so I don't know what kind of support I would get. The closest thing I'm being offered to the NetApp AFF A200 is the 3PAR 8200 all-flash. I don't know how the two compare; they are probably direct competitors. Hopefully the 3PAR is close to the performance you've seen with the NetApp. I meet again tomorrow with my main supplier and I'll ask about the NetApp, but I doubt he'll bother to even give me a price for it. Is it simple to configure?
  4. Hyper-V Cluster Storage Revamp

    This whole project, plus the recent review of the Synology FS3617xs NAS, made me think about Synology's attempt to break into the higher-end storage segment. Consider two solutions I've been offered:

      • HPE 3PAR StoreServ 8200 with 8x 1.92TB SSDs, ~23TB usable with compression, 16Gb FC links to the servers, 70,000 IOPS advertised, on promo at ~US$55,000 (normally more) with 5-year 24x7 support
      • Nimble CS3000 with 3x ~2TB SSDs and a bunch of mechanical drives, ~25TB usable storage overall, 50,000 IOPS advertised, a bit under US$60,000 with 5-year 24x7 support

    Now take the following configuration on a Synology FS3017:

      • 24x Samsung SM863 1.92TB (MZ-7KM1T9E)
      • 2x Mellanox MCX314A-BCCT 2-port 40G QSFP+

    Add the rail kit and you end up a tad over US$39,000. You need two units for HA, so the price climbs to ~US$78,000. You need all those drives to end up with ~23TB usable (RAID 1+0), because Synology doesn't offer deduplication or compression, at least not that I'm aware of, and I prefer not to run RAID 5/6 on volumes hosting high-load databases. For a complete business proposal with a DR site, it looks like this: the unit at the backup location could get away with a single 2-port 10G adapter and 12x Samsung PM863 3.84TB SSDs, enough to run the cluster while the main site is restored. The three units (two in HA at the main site and a cheaper one at the backup site) would cost ~US$110,000 for an all-flash architecture (the usable-capacity math is sketched below).

    I have no idea how many IOPS an FS3017 with those SSDs would yield compared to the higher-end solutions from Nimble and 3PAR. Synology also doesn't offer fast support in case of emergency, unlike Nimble, HPE, Dell, IBM, etc. For a Nimble solution, you only need one unit at each location, since each includes two controllers. With an HPE 3PAR, you need to add SAN switches to get site replication, which increases cost and complexity. In the case of the 3PAR, complexity is part of the deal (a bunch of services to configure, System Reporter, the service processor, dealing with direct FC links, configuring the SAN switch, managing the LUNs, etc.). Synology offers more of a brute-force way to feed IOPS to the cluster, while the other two appear more refined (a bunch of ASICs in the 3PAR doing real-time compression, a completely different approach on the Nimble). A Nimble CS3000 (main) + CS1000 (DR) costs about the same, but with much better support; it isn't all-flash though. A dual 3PAR 8200, including all the gear and licenses for off-site replication, would cost a bit more (probably ~US$150,000) and would be significantly more complex to set up and manage. Synology's proposal doesn't look bad, but considering the options, who would dare to choose it and trust their entire SMB storage to it? Would you?
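
    For what it's worth, here's a quick Python check of the usable-capacity figures quoted above (a sketch of my own: it assumes RAID 1+0 halves the raw capacity with no deduplication or compression, and the RAID level on the DR unit is my assumption):

        # Usable capacity under RAID 1+0 (mirrored pairs): half of the raw capacity.
        def usable_raid10_tb(drive_count, drive_tb):
            return drive_count * drive_tb / 2

        main_site = usable_raid10_tb(24, 1.92)  # 24x Samsung SM863 1.92TB per FS3017
        dr_site = usable_raid10_tb(12, 3.84)    # 12x Samsung PM863 3.84TB (RAID level assumed)

        print(f"Main site: ~{main_site:.1f} TB usable")  # ~23.0 TB, matching the figure above
        print(f"DR site:   ~{dr_site:.1f} TB usable")    # ~23.0 TB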
  5. Hyper-V Cluster Storage Revamp

    A lot has happened during the past week. Long story short, I'll probably opt for two Nimble arrays: one at the main site and one at the backup site. Their CS3000 (main site) and CS1000 (backup site) are simple to configure and operate. Plus, replication between two sites is supposedly dead easy to set up and works well, so I won't need to spend on replication software like Zerto. Volume snapshots take practically no space and the number you can keep on their platform is very high (they tested up to 160,000 without issue). I'll still need Veeam, but only for periodic full backups. They guarantee a minimum of 50,000 IOPS on the CS3000 and 30,000 IOPS on the CS1000, which is enough for our needs. Since setup and management are so easy, I don't mind not moving to a hyper-converged architecture. Managing the hypervisor cluster isn't what I had problems with. Mitch had the right suggestion in the very first reply of this thread.
  6. I've received some offers related to my previous topic about improving our current servers, but I'm not completely sold on what I've been presented so far. I wasn't considering VMware before, mainly because I thought it cost too much, but I've since done my homework and it's really not that bad, at least at first glance. I read Kevin mentioning that he really liked the Dell FX2, and a recent article here about the VxRail was quite flattering too. I'm hesitating between an FX2 as the base for a vSAN installation or simply going with the VxRail. I really like the form factor of the FX2, but the integration of the VxRail is superior. For the FX2, I'd go with 4x FC430 and two FD33x storage units. I plan to go all-flash. I'm pretty sure that if I ask a Dell representative, he'll push me toward the VxRail. Sales people are often lazy, and the fewer SKUs they have to look up at a similar price point, the more likely they are to push that option. One thing to consider: I already have enough hardware for my DR site and it's all HPE (3 nodes sharing an MSA 2040 SAN). I don't know if the integration of the VxRail would prevent me from setting up the DR as a vSAN on the HPE hardware. I don't think it would be a problem with a vSAN on the FX2 though. Thoughts?
  7. vSAN on Dell FX2 vs Dell/EMC VxRail

    Thanks. That's exactly the kind of feedback I was looking for.
  8. Hyper-V Cluster Storage Revamp

    If Microsoft doesn't want to help you with a review, they sure won't help me with a small setup like the one I was considering. That's their third strike. What pisses me off about those companies is that they charge a LOT of money for their licenses and support, but the service level they provide is abysmal. If Datacore refuses a comparative review, it's probably because they have something to hide; at the very least it tells me they aren't totally confident in their product. Nutanix, at least during the pre-sale stage, put a lot of effort into convincing me to go with their solution. I know that wasn't your experience two years ago though, so I'm quite cautious with them.
  9. Hyper-V Cluster Storage Revamp

    Maybe you're thinking of Starwind Virtual SAN? BTW, reviewing and comparing those solutions (Starwind's Virtual SAN, Datacore SAN Symphony-V, VMware's vSAN, Microsoft's Storage Spaces Direct) would make a great article and I'm sure it would draw a lot of visitors. It would be a fantastic resource for everyone looking into software-defined storage. I've looked into a private OpenStack cloud, but one of the goals of the new architecture is ease of management, and troubleshooting OpenStack issues isn't easy. Being the sole network administrator of a ten-company conglomerate isn't my only task; I'm also the IT manager for all of it. I handle contracts and purchases, oversee the budget, and supervise the L1-2 technicians, and when they can't fix an issue, I'm the one who has to deal with it. The amount of time I have for my real job, which is supposed to be network administration, is quite limited. I don't need something easy to deal with because I'm a moron. I need something simple because I simply don't have the time to do deep troubleshooting.
  10. Hyper-V Cluster Storage Revamp

    Yep, I've been reminded about Veeam's lack of support for Acropolis. Same goes for Zerto, which I was eyeing for the replication part. I'll stay with Hyper-V. Regarding VMware, I'm not sure I want to add another $20K for something that more or less does the same thing as Hyper-V, just a bit better. So far, my Hyper-V cluster has been good enough. It could be better, it's certainly perfectible, but it's not worth a five-figure investment for the number of VMs I have to manage. At least in my view. Too bad I'm too busy to try Datacore SAN Symphony-V. I'm not sure it would save us money, not sure it's easier to manage either, and not even sure it plays nice with the backup/replication software. But the performance numbers posted on the SPC-1 website are amazing considering the low cost of the hardware used. Anyway, breaking benchmark records isn't the objective. The objective is providing a reliable, highly available platform with enough space to store users' data while being fast enough that they never wait for it.
  11. Hyper-V Cluster Storage Revamp

    Thank you Kevin for the Reddit warning story. Since you both put in a good word for the NetApp all-flash SAN, I'll look into it later. I have a lot of reading to do, so I probably won't post back for a few days. Thanks again for your help.
  12. Hyper-V Cluster Storage Revamp

    Sorry for the hiatus; I've been quite busy.

    Re: Mitch
    We were contacted by and received a proposal from Nimble in January. It looked good on paper, but replacing only the SAN doesn't fix our node resources problem (not really a problem now, but it will be sooner rather than later). Also, regardless of the company, if we only upgrade the SAN, then we'll be cornered into another "nodes + SAN" architecture. I'd really like the management simplicity of a hyper-converged architecture. If we go with Nutanix, we'll convert the Hyper-V VMs to Acropolis (their hypervisor).

    Re: Brian
    Yes, the 3PAR is a disk array. Our needs are a robust architecture with enough resources to support the production environment for several years and reliable replication to a DR site. We already have a DR setup, but Veeam replication leaves a lot to be desired. It's been unreliable in our environment, and that's nothing new. We've used Veeam since version 7 (which was crap for Hyper-V). Version 8 worked better, but versions 9 and 9.5 fail to take snapshots of 3 of the VMs. We call support, it gets fixed, and a few months of Windows updates later, it breaks again. Overall, Veeam simply hasn't been dependable for us. Veeam also doesn't work with Nutanix's Acropolis.

    The RAM issue can be fixed easily if I manually balance the VMs across the hosts, but a Hyper-V failover cluster doesn't efficiently redistribute the VMs when one host goes down. So if we keep using Hyper-V, we'll need to upgrade the nodes to ensure we have plenty of spare resources on each host. According to the Nutanix talking heads, their cluster does a much better and simpler job of distributing the load. They've demoed it numerous times too, but of course, salesmen always show the shiny parts. I haven't received the prices yet, but if the offers cost about the same, Nutanix's architecture looks quite good. I'd really like to find out what you found to perform poorly two years ago. I understand that you cannot disclose it due to the agreement you had with them. Depending on what doesn't work well in their solution, it might or might not affect our use case, so maybe it's a non-issue for us.

    Comparing Nutanix to a Windows Hyper-V cluster with a Storage Spaces Direct volume, Nutanix has the advantage of data locality on the nodes. S2D apparently doesn't try to move the most-used data onto the node that uses it, which is why it's a lot more demanding on the networking side (which means $$ for the switches). The nodes also all have to be identical, so no mixed-generation nodes within the cluster, which isn't the case with Nutanix. However, S2D is more of a DIY architecture, so there are more hardware choices than what Nutanix offers for its nodes. It's also possible to use more generic components, bringing the cost down. The downside is multi-vendor support, where everyone can pass the ball to the others when issues arise. I haven't considered Dell or HPE's HC380 yet and I don't think I will either. Dell's support could be better around here, and HPE's hyper-converged solution isn't what HPE's guys want to sell us, which means they won't give us a good discount on it. Regarding the budget, it's in the low six figures (~US$150K).
  13. Sorry to be quite late on this topic. I've read this story and wonder whether there have been any developments since. Any hints on whether AFS might resolve or mitigate the performance issues you had back then? I ask because I'm the decision maker for IT purchases in our company. We've been offered a Nutanix solution to modernize our architecture, and I meet with a local Nutanix representative Wednesday. The issues pointed out in your mid-2015 story, while not detailed, are worrying. The upgrade proposal will be in the six figures, so I'm quite concerned about committing such a large amount to a system with no independent testing results, having to rely solely on the word of a salesman. If this isn't posted in the appropriate place, I apologize. Any input on this topic would be appreciated.
  14. I don't see anywhere which hardware options you had for the test. Did the tested unit have the mSATA drive cache or not? Is it the Pentium, the i3, the i5 or the i7 model? How much RAM was installed? Those things matter, as they should significantly affect the results. Same complaint regarding the TVS-871 review. Maybe you've stated somewhere that you always use the highest-end model of a series, or the entry-level one, but a casual visitor has no way to know that if it isn't mentioned in the review itself.
  15. Storage Review Site Update

    Well, that person will get a "page not found" error. Linking to www.storagereview.com has a better chance of being useful. Glad to see that something will be moving here.
  16. Seagate or WD... once again :)

    One point in favor of Seagate is that they have started to incorporate Acronis software into their DiskWizard utility, and it is only available for Seagate drives. They've added the same thing to PowerMax for Maxtor drives. It's a nice bonus. Otherwise, there are pros and cons for each brand and I don't give either a clear edge. WDs are often a few bucks cheaper; Seagates have a slightly better warranty (the standard 3 years plus a small refund if the drive fails during the fourth or fifth year).
  17. Is it a money-is-no-object question?
  18. Your sentence would have been just fine if it had ended after "articles". Regarding the main topic of this thread, all of your questions should be answered in the following drive roundup: http://www.storagereview.com/articles/200601/250_1.html Not much has changed since then, and Oceanfence's 7200.10 isn't any better than other models of quarter-terabyte capacity. There have been a few other articles published on this very website that are worth (mandatory) reading for newcomers.
  19. No, really? Damn! I thought the RAM was there for aesthetic purposes. My point is that bit-flipping will also occur within the chipset anyway, where ECC RAM can't help. As for Microsoft's recommendations, well, no comment needed. They can't even get their software right, so their opinion on hardware... That said (written, in fact), you are perfectly entitled to buy ECC RAM if you feel it's what's right for you. What will you run on that system?
  20. I remember reading a paper comparing the time before a conventional RAM module produces an error with the time before an ECC module does. The conclusion was that the error rate of conventional RAM was already so low that, for desktop use, it wasn't worth spending more on ECC RAM. A server constantly reading from and writing to its RAM, and which must not BSOD and reboot, is a different story (even though the edge of ECC might still be questionable, since the non-ECC RAM error rate is already excellent). Bottom line: ECC RAM is for the paranoid or for systems that simply can't fail; otherwise, it's a waste. Server chipsets almost always require ECC RAM. They are designed from the ground up with reliability in mind, not features and/or performance. Mainstream/enthusiast chipsets are designed mainly with performance and features first and reliability second, because an average SOHO box shouldn't run a company server and doesn't need to run 24x7, 365.25 days a year without rebooting. If you want something fast and don't mind rebooting once every six months (pessimistic), forget the server chipset and ECC RAM trip. And if you can't tolerate the bi-annual hardware-caused crash, then go for a server chipset first, since that will be your weakest link reliability-wise, well before the ECC-ness of your RAM.
  21. Why bother with ECC RAM if your motherboard can live happily with standard modules? Your RAM will just be slower and almost certainly won't be any more reliable. If reliability is paramount, get yourself a server-oriented chipset. Otherwise, don't bother with ECC. My return rate with Kingston modules is nil (and that's across many hundreds of modules). I've only had one bad Corsair stick, and it was a "value" PC3200 stick; I've never had a problem with their higher-end stuff. I've had incompatibility issues with OCZ Gold modules (voltage related), but none with their Platinum series. I've never tried their server series, but when I need server RAM, I turn to Kingston: slow timings, but guaranteed reliability. Expensive though; Kingston RAM often seems overpriced.
  22. Wine of the week.

    I know one Aussie who'll be happy to read this... I'm a coffee man; I rarely drink wine. When I do, I prefer red wine, especially since it contains more antioxidants and other stuff that's good for heart health. I avoid (when I'm aware of it) wines with a marked bitterness (so unlike you, I hate Pinot). The last wine I drank was fairly good. I don't remember the name though, so that's not very helpful for other readers, sorry. When I find it, I'll share the name. It wasn't a very expensive bottle, maybe $12 CAD.
  23. SATA II doesn't exist yet. There's SATA 1.5Gbps and SATA 3Gbps, period. There are external hard drive enclosures that include a bracket to route a SATA port from your motherboard to an external eSATA connector. The Vantec NexStar 3 (NST-360SU-BK) is one of them.
  24. The Supermicro H8DCE has two x16 PCI-E slots and two x4 PCI-E slots; both x4 slots use x8 physical connectors. I don't think you'll find much better than that in terms of multiple high-bandwidth PCI-E slots on a single motherboard. There's also the Tyan Thunder K8QE, which offers the same configuration, except that its two x4 PCI-E slots use x16 physical connectors.
  25. IPEAK SPT?

    In fact, HD-Tach is so limited that it shouldn't be referred to as a tool but rather as a toy. About the only thing it can tell you is whether you set up your drive and interface mode correctly. For anything requiring accuracy, it is meaningless. And version 3 is a step back from version 2, which already wasn't very relevant. Reading someone claim that HD-Tach could possibly be better than IPEAK sucks out the little hope I have left in humanity. If I read that one more time, I might throw myself off a sidewalk or dive out my window while I'm in my basement.