CougTek

Member
  • Content count
    1007

Community Reputation

0 Neutral

About CougTek

  • Rank
    Member

Contact Methods

  • Website URL
    http://www.storageforum.net

Profile Information

  • Location
    Québec, Québec
  • Interests
    IT architectures, mountaineering, physical training and microbrewery beers.
  1. Maybe you're thinking of StarWind Virtual SAN? BTW, reviewing and comparing those solutions (StarWind Virtual SAN, DataCore SANsymphony-V, VMware's VSAN, Microsoft's Storage Spaces Direct) would be a great article and I'm sure it would draw a lot of visitors. It would be a fantastic tool for everyone looking into software-defined storage. I've looked into a private OpenStack cloud, but one of the goals of the new architecture is ease of management, and troubleshooting OpenStack issues isn't easy. Being the sole network administrator of a ten-company conglomerate isn't my only task; I'm also the IT manager of the whole thing. I handle contracts and purchases, oversee the budget and supervise the L1-2 technicians, and when they can't fix an issue, I'm the one who has to deal with it. The time left for my real job, which is supposed to be network administration, is quite limited. I don't need something easy to deal with because I'm a moron; I need something simple because I simply don't have the time to do deep troubleshooting.
  2. Yep, I've been reminded about Veeam's lack of support for Acropolis. Same goes for Zerto, which I was eyeing for the replication part. I'll stay with Hyper-V. Regarding VMware, I'm not sure I want to add another $20K for something that does more or less the same thing as Hyper-V, only a bit better. So far, my Hyper-V cluster has been good enough. It could be better, it's certainly perfectible, but not worth a five-figure investment for the number of VMs I have to manage, at least in my view. Too bad I'm too busy to try DataCore SANsymphony-V. Not sure it would save us money. Not sure it's easier to manage either. Not even sure it plays nice with the backup/replication software. But the performance numbers posted on the SPC-1 website are amazing considering the low cost of the hardware used. Anyway, breaking benchmark records isn't the objective; it's providing a reliable, high-availability platform with enough space to store users' data while being fast enough that they never wait for it.
  3. Thank you Kevin for the Reddit warning story. Since you both put in a good word for the NetApp all-flash SAN, I'll look into it later. I have a lot of reading to do, so I probably won't post back for a few days. Thanks again for your help.
  4. Sorry for the hiatus; I've been quite busy.

     Re: Mitch. We were contacted by and received a proposal from Nimble in January. It looked good on paper, but replacing only the SAN doesn't fix our node resources problem (not really a problem now, but it will be sooner rather than later). Also, regardless of the vendor, if we only upgrade the SAN, we'll be cornered into another "nodes+SAN" architecture. I'd really like the management simplicity of a hyper-converged architecture. If we go with Nutanix, we'll convert the Hyper-V VMs to Acropolis (their hypervisor).

     Re: Brian. Yes, the 3PAR is a disk array. What we need is a robust architecture with enough resources to support the production environment for several years, plus reliable replication to a DR site. We already have a DR setup, but Veeam replication leaves a lot to be desired; it's been unreliable in our environment, and that's nothing new. We've used Veeam since version 7 (which was crap for Hyper-V). Version 8 worked better, but versions 9 and 9.5 fail to take snapshots of 3 of the VMs. We call support, it gets fixed, and a few months of Windows updates later, it breaks again. Overall, Veeam simply hasn't been dependable for us. Veeam also doesn't work with Nutanix's Acropolis.

     The RAM issue can be fixed easily if I manually balance the VMs across the hosts, but a Hyper-V failover cluster doesn't efficiently redistribute the VMs when one host goes down. So if we keep using Hyper-V, we'll need to upgrade the nodes to ensure that we have plenty of spare resources on each host; see the sketch after this post. According to the Nutanix talking heads, their cluster does a much better and simpler job of distributing the load. They've demoed it numerous times too, but of course, salesmen always show the shiny parts. I haven't received the prices yet, but if the offers cost about the same, Nutanix's architecture looks quite good. I'd really like to find out what you found to perform poorly two years ago. I understand that you cannot disclose it because of the agreement you had with them. Depending on what doesn't work well in their solution, it might or might not affect our use, so maybe it's a non-issue in our case.

     Comparing Nutanix to a Windows Hyper-V cluster with a Storage Spaces Direct volume, Nutanix has the advantage of data locality on the nodes. S2D apparently doesn't try to move the most-used data onto the node that uses it, which is why it's a lot more demanding on the networking side (which means $$ for the switches). The nodes also all have to be identical, so no mixed-generation nodes within the cluster, which isn't the case with Nutanix. On the other hand, S2D is more of a DIY architecture, so there are more hardware choices than what Nutanix offers for its nodes. It's also possible to use more generic components, bringing the cost down. The downside is multi-vendor support, where everyone can throw the ball to someone else when issues arise.

     I haven't considered Dell or HPE's HC380 yet and I don't think I will. Dell's support could be better around here, and HPE's hyper-converged solution isn't what HPE's guys want to sell us, which means they won't give us a good discount on it. Regarding the budget, it's in the low six figures (~US$150K).
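To put numbers on the N+1 argument in the post above, here's a back-of-envelope sketch of my own (not anything from Nutanix or Microsoft) showing how much RAM a failover cluster can safely commit if it must survive one dead host. The 3-node/256GB figures come from our current cluster; the function name is made up for illustration.

```python
# N+1 sizing: if one host dies, the survivors must absorb every running VM,
# so the cluster can only safely commit (nodes - 1) hosts' worth of RAM.

def n_plus_1_headroom(nodes: int, ram_per_node_gb: float) -> dict:
    total = nodes * ram_per_node_gb
    safe = (nodes - 1) * ram_per_node_gb  # RAM still available after one host failure
    return {
        "total_ram_gb": total,
        "safe_committed_ram_gb": safe,
        "avg_load_per_node_gb": safe / nodes,       # average load to stay N+1 safe
        "safe_utilization_pct": round(100 * safe / total, 1),
    }

# Our cluster: 3 nodes x 256 GB each.
print(n_plus_1_headroom(3, 256))
# {'total_ram_gb': 768, 'safe_committed_ram_gb': 512,
#  'avg_load_per_node_gb': ~170.7, 'safe_utilization_pct': 66.7}
```

In other words, on a 3-node cluster each host should average no more than roughly two thirds of its RAM if a host failure is to be absorbable, which is exactly why nodes already spiking past 80% are a problem.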
  5. We currently run a 3-node Hyper-V 2012 R2 cluster using an HP 3PAR 7200 for storage. The 3PAR is now out of warranty (which is too expensive to renew) and 90% full. Also, some of the nodes show memory usage spikes over 80%, so they'll have to be replaced soon too, even though they're still under their original 5-year warranty. The nodes have 256GB of RAM each and dual 10-core Xeons. We have twenty VMs serving around 350 users. Among the VMs, there's one hefty MS Exchange server and three SQL 2014 servers, two of them quite busy. The current 3PAR 7200 (capable of ~8000 IOPS according to IOMeter) sometimes chokes under load, if I trust the Veeam ONE alerts I receive. Our data grows by over 30% per year and the VMs need 7TB today. We're looking for an upgrade that will last five years, without having to pour in more money before 2022; a quick projection follows this post. HPE's guys want us to get another 3PAR (an 8200 with all-flash storage). I'd rather take another path. I've read a lot about SDS, and Windows 2016's Storage Spaces Direct looks quite promising. DataCore SANsymphony also draws a lot of attention. SDS should also be simpler to manage than a proprietary system like a 3PAR. Since we plan to upgrade our core switches, PFC/DCB for RoCE support on the switch side isn't a problem; the model we plan to get has it. Nutanix wants to pitch us a solution; I meet with one of their representatives tomorrow. A hyper-converged solution sounds nice, although the horror story I read here, dating back to mid-2015, isn't flattering for Nutanix. Thoughts?
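Since the 30%-per-year figure does a lot of work in the sizing, here's the compounding spelled out. A minimal sketch of my own, using only the 7TB and 30% numbers from the post above:

```python
# Compound growth: capacity needed after n years at a fixed annual growth rate.

def projected_capacity_tb(tb_now: float, annual_growth: float, years: int) -> float:
    return tb_now * (1 + annual_growth) ** years

for year in range(1, 6):
    print(f"year {year}: {projected_capacity_tb(7.0, 0.30, year):.1f} TB")
# year 1: 9.1 TB ... year 5: 26.0 TB
```

So a platform meant to last until 2022 without new spending should offer on the order of 26TB usable (before RAID/replica overhead), nearly four times what the VMs consume today.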
  6. Sorry to be quite late on this topic. I've read this story and wondered whether there have been any developments since. Any indication that AFS might resolve or mitigate the performance issues you had back then? I ask because I'm the decision maker for IT purchases in our company. We've been offered a Nutanix solution to modernize our architecture and I meet with a local Nutanix representative on Wednesday. The issues pointed out in your mid-2015 story, while not detailed, are worrying. The upgrade proposal will be in the six figures, so I'm quite concerned about committing such a large amount to a system with no independent test results, having to rely solely on the good word of a salesman. If this isn't posted in the appropriate place, I apologize. Any input on this topic would be appreciated.
  7. I don't see anywhere which hardware options you had for the test. Did the tested unit have the mSATA drive cache or not? Is it the Pentium, the i3, the i5 or the i7 model? How much RAM was installed? These matter, as they should significantly affect the results. Same complaint regarding the TVS-871 review. Maybe you've stated somewhere that you always use the highest-end model of a series, or the entry-level one, but a casual visitor has no way of knowing this if it isn't mentioned in the review itself.
  8. Well, that person will get a "page not found" error. Linking to www.storagereview.com has a better chance of being useful. Glad to see that something will be moving here.
  9. One point in favor of Seagate is that it has started to bundle an Acronis version in its DiskWizard utility, and it's only available for Seagate drives. They add the same thing to PowerMax for Maxtor drives. It's a nice bonus. Otherwise, there are pros and cons for each drive and I give neither a clear edge. WD drives are often a few bucks cheaper; Seagate's have a slightly better warranty (standard 3 years + a small refund if the drive fails during the fourth or fifth year).
  10. Is it a money-is-no-object question?
  11. Your sentence would have been just fine if it had ended after "articles". Regarding the main topic of this thread, all of your questions should be answered in the following drive roundup: http://www.storagereview.com/articles/200601/250_1.html Not much has changed since then, and Oceanfence's 7200.10 isn't any better than other models of quarter-terabyte capacity. There have been a few other articles published on this very website that are worth (mandatory) reading for newcomers.
  12. No, really? Damn! I thought the RAM was there for aesthetic purposes. My point is that bit-flipping will also occur within the chipset anyway. As for Microsoft's recommendations, well, no comment needed. They can't even get their software right, so their opinion on hardware... That said (written, in fact), you are perfectly entitled to buy ECC RAM if you feel it's what's right for you. What will you do with that system?
  13. I remember reading a paper comparing the time before a conventional RAM module produces an error with the time before an ECC module does. The conclusion was that conventional RAM errors were already so rare that, for desktop use, it wasn't worth spending more on ECC RAM. A server that reads/writes to its RAM constantly and must not BSOD and reboot is a different story (even though the edge of ECC might still be questionable, since non-ECC RAM's error rate is already excellent). Bottom line: ECC RAM is for the paranoid or for systems that simply can't fail; otherwise, it's a waste. Server chipsets almost always require ECC RAM. They are designed from the ground up with reliability in mind, not features and/or performance. Mainstream/enthusiast chipsets are designed with performance and features first and reliability second because, anyway, an average SOHO box shouldn't run a company server and doesn't need to run 24x7, 365.25 days a year, without rebooting. If you want something fast and don't mind rebooting once every six months (pessimistic), forget the server chipset and ECC RAM trip. And if you can't tolerate the twice-a-year hardware-caused crash, then go for a server chipset first, since it'll be your weakest link reliability-wise, well before the ECC-ness of your RAM. A toy calculation of the scaling follows this post.
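To make the scaling part of that argument concrete, here's a toy calculation. Fair warning: the per-megabit error rate below is a number picked purely for illustration, not a measured figure; published soft-error rates vary by orders of magnitude depending on process, altitude and test methodology. Only the ratio between the two machines matters.

```python
# Toy soft-error estimate. ASSUMED_FIT_PER_MBIT is an illustrative assumption,
# NOT a real measurement (FIT = failures per 10^9 device-hours).

ASSUMED_FIT_PER_MBIT = 0.01

def expected_errors_per_year(ram_gb: float, fit_per_mbit: float) -> float:
    mbits = ram_gb * 1024 * 8          # RAM size in megabits
    hours = 24 * 365.25                # always-on, as a server would be
    return mbits * fit_per_mbit * hours / 1e9

print(expected_errors_per_year(4, ASSUMED_FIT_PER_MBIT))    # small desktop: ~0.003/yr
print(expected_errors_per_year(256, ASSUMED_FIT_PER_MBIT))  # big server node: ~0.18/yr
```

Whatever the real per-bit rate is, a 256GB always-on server sees 64 times the exposure of a 4GB desktop, which is the whole reason ECC pays off on one and not the other.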
  14. Why bother with ECC RAM if your motherboard can live happily with standard modules? Your RAM will just be slower and almost certainly won't be any more reliable. If reliability is paramount, get yourself a server-oriented chipset; otherwise, don't bother with ECC. My returns with Kingston modules are nil (and that's across many hundreds of modules). I've only had one bad Corsair stick, and it was a "value" PC3200 stick; I've never had a problem with their higher-end stuff. I've had incompatibility issues with OCZ Gold modules (voltage related), but none with their Platinum series. I've never tried their Server series; when I need server RAM, I turn to Kingston: slow timings, but guaranteed reliability. Expensive, though. Kingston RAM often seems overpriced.
  15. I know one Aussie who'll be happy to read this... I'm a coffee man; I rarely drink wine. When I do, I prefer red wine, especially since it contains more antioxidants and other things good for heart health. I avoid (when I'm aware of it) wines with a marked bitterness (so unlike you, I hate Pinot). The last wine I drank was fairly good. I don't remember the name though, so that's not very helpful for other readers, sorry. When I find it again, I'll share the name. It wasn't a very expensive bottle, maybe CAN$12.