CougTek

Member
  • Content count

    1013
  • Joined

  • Last visited

Community Reputation

0 Neutral

About CougTek

  • Rank
    Member

Contact Methods

  • Website URL
    http://www.storageforum.net
  • ICQ
    0

Profile Information

  • Location
    Québec, Québec
  • Interests
    IT architectures, mountaineering, physical training and microbrewery beers.
  1. Good to know about the Synology vs. NetApp all-flash comparison. Unfortunately, none of my main suppliers is trying to push a NetApp on me, so I don't know what kind of support I would get. The closest thing I've been offered to the NetApp AFF A200 is the 3PAR 8200 all-flash. I don't know how those compare; they are probably direct competitors. Hopefully the 3PAR is close to the performance you've seen with the NetApp. I meet again tomorrow with my main supplier and I'll ask about the NetApp, but I doubt he'll bother to even give me a price for it. Is it simple to configure?
  2. This whole project, plus the recent review of the Synology FS3617xs NAS, made me think about Synology's attempt to break into the higher-end storage segment. Consider two solutions I've been offered:
     • HPE 3PAR StoreServ 8200 with 8x 1.92TB SSD, ~23TB usable with compression, 16Gb FC links to the servers, 70,000 iops advertised, on promo at ~55,000U$ (normally more) with 5y 24x7 support
     • Nimble CS3000 with 3x ~2TB SSD and a bunch of mechanical drives, ~25TB usable storage overall, 50,000 iops advertised, less than 60,000U$ with 5y 24x7 support
     Now take the following configuration with a Synology FS3017:
     • 24x Samsung SM863 1.92TB (MZ-7KM1T9E)
     • 2x Mellanox MCX314A-BCCT 2-port 40G QSFP+
     Add the rail kit and you end up a tad over 39,000U$. You need two units for HA, so the price climbs to ~78,000U$. You need all those drives to end up with ~23TB usable (RAID 1+0) because they don't have deduplication or compression, at least not that I'm aware of, and I prefer no RAID 5/6 on volumes I put high-load databases on. (A rough cost-per-usable-TB breakdown is sketched at the end of this post list.)
     For a complete business proposal with a DR site, it looks like this: the unit at the backup location could have only a single 2-port 10G adapter and 12x Samsung PM863 3.84TB SSDs, enough to run the cluster while the main site is restored. The 3 units (two in HA at the main site and a cheaper one at the backup site) would cost ~110,000U$ for an all-flash architecture. No idea how many iops an FS3017 with that kind of SSDs would yield compared to the higher-end solutions from Nimble and 3PAR. Synology also doesn't offer fast support in case of emergency, unlike Nimble, HPE, Dell, IBM, etc.
     For a Nimble solution, you only need one unit at each location, since they include two controllers each. With an HPE 3PAR, you need to add SAN switches to get site replication, which increases cost and complexity. In the case of the 3PAR, complexity is part of the deal (a bunch of services to configure, System Reporter, the service processor, dealing with direct FC links, configuring the SAN switch, managing the LUNs, etc.). Synology offers more of a brute-force way to feed iops to the cluster, while the other two appear to be more refined (a bunch of ASICs in the 3PAR doing real-time compression, a completely different approach on the Nimble). A Nimble CS3000 (main) + CS1000 (DR) costs about the same amount, but with much better support; it isn't all-flash though. A dual 3PAR 8200, including all the gear and licenses to do off-site replication, would cost a bit more (probably ~150,000U$) and would be significantly more complex to set up and manage.
     Synology's proposal doesn't look bad, but considering the options, who would dare to choose it and trust their entire SMB storage to it? Would you?
  3. A lot has happened during the past week. Long story short, I'll probably opt for two Nimble arrays: one at the main site and one at the backup site. Their CS3000 (main site) and CS1000 (backup site) are simple to configure and operate. Plus, replication between the two sites is supposedly dead easy to set up and works well, so I won't need to spend on replication software like Zerto. Volume snapshots take practically no space and the number you can keep on their platform is very high (they've tested up to 160,000 without issue). I'll still need Veeam, but only for periodic full backups. They guarantee a minimum of 50,000 iops on the CS3000 and 30,000 iops on the CS1000, which is enough for our needs. Since setup and management are so easy, I don't mind not moving to a hyper-converged architecture; managing the hypervisor cluster isn't what I had problems with. Mitch had the right suggestion in the very first reply of this thread.
  4. Thanks. That's exactly the kind of feedback I was looking for.
  5. I've received some offers related to my previous topic about improving our current servers, but I'm not completely sold on what I've been presented so far. I wasn't considering VMWare before, mainly because I thought it cost too much, but I've since done my homework and it's really not that bad, at least at first glance. I read Kevin mentioning that he really liked the Dell FX2, and a recent article here about the VxRail was quite flattering too. I hesitate between an FX2 as a base for a vSAN installation or simply going for the VxRail. I really like the form factor of the FX2, but the integration of the VxRail is superior. For the FX2, I'd go with 4x FC430 and two FD33x storage units. I plan to go all-flash. I'm pretty sure that if I ask a Dell representative, he'll push me toward the VxRail. Sales people are often lazy, and the fewer SKUs they have to look up at a similar price point, the more likely they are to go for that option. One thing to consider: I already have enough hardware for my DR site and it's all HPE (3 nodes sharing an MSA 2040 SAN). I don't know if the integration of the VxRail would prevent me from setting up the DR with vSAN on the HPE hardware. I don't think it would be a problem with vSAN on the FX2 though. Thoughts?
  6. If Microsoft doesn't want to help you with a review, they sure won't help me with a small setup like the one I was considering. That's their third strike. What pisses me off about those companies is that they charge a LOT of money for their licenses and support, but the service level they provide is abysmal. If Datacore refuses a comparative review, it's probably because they have something to hide; at the very least it suggests they aren't totally confident in their product. Nutanix, at least during the pre-sale stage, put a lot of effort into convincing me to go with their solution. I know that wasn't your experience two years ago though, so I'm quite cautious with them.
  7. Maybe you're thinking of Starwind Virtual SAN? BTW, reviewing and comparing those solutions (Starwind's Virtual SAN, Datacore SAN Symphony-V, VMWare's VSAN, Microsoft's Storage Spaces Direct) would be a great article and I'm sure it would draw a lot of visitors. It would be a fantastic tool for everyone looking into software-defined storage. I've looked into a private OpenStack cloud, but one of the goals of the new architecture is ease of management, and troubleshooting OpenStack issues isn't easy. Being the sole network administrator of a ten-company conglomerate isn't my only task: I'm also the IT manager for all of it. I handle contracts and purchases, oversee the budget and supervise the L1-2 technicians, and when they can't fix an issue, I'm the one who has to deal with it. The amount of time I have for my real job, which is supposed to be network administration, is quite limited. I don't need something easy to deal with because I'm a moron; I need something simple because I simply don't have the time to do deep troubleshooting.
  8. Yep, I've been reminded of Veeam's lack of support for Acropolis. Same goes for Zerto, which I was eyeing for the replication part. I'll stay with Hyper-V. Regarding VMWare, I'm not sure I want to add another 20K$ for something that does more or less the same thing as Hyper-V, just a bit better. So far, my Hyper-V cluster has been good enough. It could be better, there's certainly room for improvement, but it isn't worth a five-figure investment for the number of VMs I have to manage. At least in my view. Too bad I'm too busy to try Datacore SAN Symphony-V. Not sure it would save us money. Not sure it's easier to manage either. Not even sure it plays nice with the backup/replication software. But the performance numbers posted on the SPC-1 website are amazing considering the low cost of the hardware used. Anyway, breaking benchmark records isn't the objective; providing a reliable, highly available platform with enough space to store users' data, and fast enough that they aren't waiting on it, is.
  9. Thank you Kevin for the Reddit warning story. Since you both put in a good word for the NetApp all-flash SAN, I'll look into it later. I have a lot of reading to do, so I probably won't post back for a few days. Thanks again for your help.
  10. Sorry for the hiatus; I've been quite busy.
      Re: Mitch
      We were contacted by, and received a proposal from, Nimble in January. It looked good on paper, but replacing only the SAN doesn't fix our node resources problem (not really a problem now, but it will be sooner rather than later). Also, regardless of the vendor, if we only upgrade the SAN, we'll be cornered into another "nodes+SAN" architecture. I'd really like the management simplicity of a hyper-converged architecture. If we go with Nutanix, we'll convert the Hyper-V VMs to Acropolis (their hypervisor).
      Re: Brian
      Yes, the 3PAR is a disk array. Our needs are a robust architecture with enough resources to support the production environment for several years and reliable replication to a DR site. We already have a DR setup, but Veeam replication leaves a lot to be desired; it's been unreliable in our environment. That's nothing new. We've used Veeam since version 7 (which was crap for Hyper-V). Version 8 worked better, but versions 9 and 9.5 fail to take snapshots of 3 of the VMs. We call support, it gets fixed, and a few months of Windows updates later, it breaks again. Overall, Veeam simply hasn't been dependable for us. Veeam also doesn't work on Nutanix's Acropolis.
      The RAM issue can be fixed easily if I manually balance the VMs across the hosts, but a Hyper-V failover cluster doesn't efficiently redistribute the VMs when one host goes down. So if we keep using Hyper-V, we'll need to upgrade the nodes to ensure we have plenty of spare resources on each host (see the failover headroom sketch at the end of this post list). According to the Nutanix talking heads, their cluster does a much better and simpler job of distributing the load. They demoed it numerous times too, but of course the salesmen always show the shiny parts. I've not received the prices yet, but if the offers come in at similar cost, Nutanix's architecture looks quite good.
      I'd really like to find out what you found to perform poorly two years ago. I understand that you cannot disclose it due to your agreement with them. Depending on what doesn't work well in their solution, it might or might not affect our use case, so maybe it's a non-issue for us.
      Comparing Nutanix to a Windows Hyper-V cluster with a Storage Spaces Direct volume, Nutanix has the advantage of data locality on the nodes. S2D apparently doesn't try to move the most-used data onto the node that uses it, which is why it's a lot more demanding on the networking side (which means $$ for the switches). The nodes also all have to be identical, so no mixed-generation nodes within the cluster, which isn't the case with Nutanix. On the other hand, S2D is more of a DIY architecture, so there are more hardware choices than what Nutanix offers for its nodes. It's also possible to use more generic components, bringing the cost down. The downside is multi-vendor support, so everyone can throw the ball to each other when issues arise.
      I haven't considered Dell or HPE's HC380 yet and I don't think I will. Dell's support could be better around here, and HPE's hyper-converged solution isn't what HPE's guys want to sell us, which means they won't give us a good discount on it. Regarding the budget, it's in the low six figures (~150KU$).
  11. We currently run a 3-node Hyper-V 2012 R2 cluster using an HP 3Par 7200 for storage. The 3Par is now out of warranty (too expensive to renew) and 90% full. Also, some of the nodes show memory usage spikes over 80%, so they'll have to be replaced soon too, even though they're still under their original 5-year warranty. The nodes have 256GB of RAM each and dual 10-core Xeons. We have twenty VMs serving around 350 users. Among the VMs, there's one fatty MS Exchange server and three SQL 2014 servers, two of them quite busy. The current 3Par 7200 (capable of ~8000 iops according to IOMeter) sometimes chokes under load, if I trust the Veeam One alerts I receive. Our data grows by over 30% per year and the VMs need 7TB today. We're looking for an upgrade that will last five years, without having to pour in additional money before 2022 (a simple growth projection is sketched at the end of this post list).
      HPE's guys want us to get another 3Par (an 8200 with all-flash storage). I'd rather take another path. I've read a lot about SDS, and Windows 2016's Storage Spaces Direct looks quite promising. Datacore SAN Symphony also draws a lot of attention. SDS should also be simpler to manage than a proprietary system like a 3Par. Since we plan to upgrade our core switches, PFC/DCB for RoCE support on the switch side isn't a problem; the model we plan to get has it. Nutanix wants to propose a solution; I meet with one of their representatives tomorrow. A hyper-converged solution sounds nice, although the horror story I've read here dating back to mid-2015 isn't flattering for Nutanix. Thoughts?
  12. Sorry to be quite late on this topic. I've read this story and wondered whether there have been any developments since. Any indication that AFS might resolve or mitigate the performance issues you had back then? I ask because I'm the decision maker for IT purchases in our company. We've been offered a Nutanix solution to modernize our architecture and I meet with a local Nutanix representative Wednesday. The issues pointed out in your mid-2015 story, while not detailed, are worrying. The upgrade proposal will be in the six figures, so I'm quite concerned about committing such a large amount to a system with no independent test results, relying solely on the good word of a salesman. If this isn't posted in the appropriate place, I apologize. Any input on this topic would be appreciated.
  13. I don't see anywhere which hardware options you had for the test. Did the tested unit have the mSATA drive cache or not? Is it the Pentium, the i3, the i5 or the i7 model? How much RAM was installed? Those matter, as they should significantly affect the results. Same complaint regarding the TVS-871 review. Maybe you've stated somewhere that you always use the highest-end model of a series, or the entry-level one, but a casual visitor has no way to know this if it isn't mentioned in the review itself.
  14. Well, that person will get a "can't find page" error. Linking to www.storagereview.com has a better chance of being useful. Glad to see that something will be moving here.
  15. One point in favor of Seagate is that they've started to incorporate an Acronis-based version of their DiskWizard utility, and it's only available for Seagate drives. They add the same thing to Powermax for Maxtor drives. It's a nice bonus. Otherwise, there are pros and cons for each drive and I don't give either a clear edge. WD drives are often a few bucks cheaper; Seagate's have a slightly better warranty (standard 3 years plus a small refund if the drive fails during the fourth or fifth year).
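
A rough cost-per-usable-TB comparison for the options weighed in post 2 above, sketched in Python. The capacities and prices are the approximate figures quoted in that post; the RAID 1+0 math assumes simple mirrored pairs with no deduplication, compression, or filesystem overhead, and the Synology figure treats the second HA unit as a pure mirror that adds cost but no usable space.

```python
def raid10_usable_tb(drive_count, drive_tb):
    """RAID 1+0 keeps half of the raw capacity (mirrored pairs)."""
    return drive_count * drive_tb / 2

# name: (approx. usable TB, approx. price in USD, note) -- figures from post 2
options = {
    "HPE 3PAR 8200 all-flash": (23, 55_000, "8x 1.92TB SSD, compression included"),
    "Nimble CS3000 hybrid":    (25, 60_000, "3x ~2TB SSD + spinning disks"),
    "Synology FS3017 HA pair": (raid10_usable_tb(24, 1.92), 78_000,
                                "2 units, 24x 1.92TB SSD each, RAID 1+0"),
}

for name, (usable_tb, price_usd, note) in options.items():
    print(f"{name:26s} ~{usable_tb:5.1f} TB usable, ~${price_usd:,} "
          f"-> ~${price_usd / usable_tb:,.0f} per usable TB  ({note})")
```

On these rough numbers, the Synology HA pair lands noticeably higher per usable TB than either the 3PAR promo or the Nimble, before even counting the weaker support, which lines up with the skepticism at the end of post 2.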
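
A minimal sketch of the N+1 headroom check behind the RAM concern in post 10, assuming the 3-node, 256GB-per-host cluster described in post 11; the ~600GB total VM memory figure is purely hypothetical, picked only to illustrate hosts that already spike past 80% usage.

```python
def n_plus_1_ok(host_count, ram_per_host_gb, total_vm_ram_gb, reserve_frac=0.1):
    """True if the cluster can lose one host and still fit every VM,
    keeping `reserve_frac` of each surviving host's RAM free for the
    parent partition and other overhead."""
    surviving_capacity_gb = (host_count - 1) * ram_per_host_gb * (1 - reserve_frac)
    return total_vm_ram_gb <= surviving_capacity_gb

# 3 nodes with 256 GB each; assume (hypothetically) ~600 GB of total VM RAM.
print(n_plus_1_ok(3, 256, 600))   # False: two surviving nodes can't absorb it all
print(n_plus_1_ok(4, 256, 600))   # True: a fourth node restores N+1 headroom
```

The point is simply that once a host is gone, the survivors need enough slack to take its VMs, which is what pushes the node upgrade mentioned in post 10.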
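
The capacity constraint in post 11 (7TB today, data growing by over 30% per year, no new spending before 2022) compounds quickly. A minimal projection sketch, assuming a flat 30% yearly rate:

```python
start_tb = 7.0    # VM data today (from post 11)
growth = 0.30     # annual growth rate, quoted as "over 30% per year"
years = 5         # roughly 2017 -> 2022

for year in range(years + 1):
    print(f"Year {year}: ~{start_tb * (1 + growth) ** year:.1f} TB")

# Year 5 lands around 26 TB, so usable-capacity quotes in the ~23-25 TB
# range elsewhere in this thread are already tight for a five-year horizon.
```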