mschnack

What is the best way to set up our storage architecture? Need help thinking this through


We are an innovation company looking to create a fast, simple, and easy-to-use network. We are not IT people, and we don't want to be. That's why it is so important to us that our network can be managed with ease, performs as fast as possible, and requires little maintenance.



We need help establishing our storage architecture (which servers do we need? which software? which brand? which configuration? which drives?) in a way that fits our current needs, is future-proof, and is as close to worry-free as possible.



Goals:


  • Needs to be easy to manage and worry-free (almost no maintenance).
  • Scale out easily, as we grow.
  • High performance
  • Reliable
  • Easy to manage user permissions, shares...
  • Have a simple way for us to get qualified support, when needed.
  • Work for Mac and Windows users seamlessly.
  • Allow our employees to work directly from the server, as if they were accessing the files directly from their own hard drive.


Storage Needs:


  • Virtual Environment - extremely fast storage, with high IOPS - for our VMWare hosts.
  • Business Storage - fast storage of considerable size for our creative users (Mac) and standard users (Windows), built so files can be accessed as if they were on the user's own machine.
  • Security Cameras - storage for 24 cameras recording 1920x1080 at 10fps, around 10 hours per day, with footage retained for 180 days (a rough sizing sketch follows this list).
  • DAM - database, index, caching… everything our Digital Asset Management solution will need.
  • Backup - everything we own and operate must be backed up on a daily basis (with hourly increments in some cases).
  • Archive - for files that will be read infrequently.
  • BitBucket - a big storage pool where we can dump everything we want, make sense of it, and move it to a better place (or leave it there) - external drives, old hard drives, etc.
  • Long-Term Backup - a way for us to back everything up to cheap but reliable media for the long term.
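
For the camera line above, here is the rough sizing sketch we put together ourselves; the ~2 Mbit/s per-camera H.264 bitrate is our assumption, not a measured figure:

```python
# Rough sizing: 24 cameras at 1080p/10fps, 10 h/day, 180-day retention.
# The 2 Mbit/s per-camera bitrate is an assumed H.264 figure, not measured.
CAMERAS = 24
BITRATE_MBPS = 2.0      # Mbit/s per camera (assumption)
HOURS_PER_DAY = 10
RETENTION_DAYS = 180

bytes_per_day = CAMERAS * BITRATE_MBPS * 1e6 / 8 * HOURS_PER_DAY * 3600
total_tb = bytes_per_day * RETENTION_DAYS / 1e12

print(f"~{bytes_per_day / 1e9:.0f} GB/day, ~{total_tb:.0f} TB for {RETENTION_DAYS} days")
```

At those assumptions it works out to roughly 40TB before any filesystem overhead or redundancy.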


Considerations:


  • We are willing to pay more for a solution with a brand behind it and a proven track record, if it makes sense.
  • Video storage will be dealt with later on; our current solution works for our current needs, but it will need to be revisited in the near future.


Concerns:


  • Buying custom-built servers and hard drives with no brand behind them to provide support and help when needed.
  • Every place we read says that FreeNAS should NOT be used in an enterprise environment.
  • Should we use SAS or SATA drives?




I am attaching the way we see our storage needs in terms of tiers, showing the relationship between size, speed, and cost. I am also adding a storage flow diagram that shows how we believe our data needs to move.



Here is the solution that was initially proposed to us by another consultant:



Tier 0 - Flash Storage (For VMs)


  • Chassis: 1U SuperMicro
  • CPU: 2 Intel 4-core Xeon E5-2609v2
  • RAM: 128GB
  • OS: FreeNAS 9.3
  • RAID configuration: 2 2-disk RAID 10 vdevs
  • Bays: 8 (4 available)
  • Disks: Samsung 850 PRO 1TB
  • Usable Storage: 1.5TB
  • Max Storage: 27TB (adding 2 more JBODs)
  • Price: $6,350.00 ($2,823.33 per TB)


Tier 1 - Business Storage


  • Chassis: 4U SuperMicro
  • CPU: 2 Intel 4-core Xeon E5-2609v2
  • RAM: 128GB
  • OS: FreeNAS 9.3
  • RAID configuration: 3 6-disk RAID Z1 vdevs + 2 hot spares
  • Bays: 24 (0 available)
  • Disks: WD RE 4TB
  • Cache: 2-disk read cache (L2ARC), 2-disk write log (SLOG) (Samsung 850 PRO 256GB)
  • Usable Storage: 40TB
  • Max Storage: 2.46PB (adding 8 more JBODs)
  • Price: $11,704.00 ($292.60 per TB)


Tier 3 - Backup Storage


  • Chassis: 4U SuperMicro
  • CPU: 2 Intel Xeon E5-1650
  • RAM: 256GB
  • OS: FreeNAS 9.3
  • RAID configuration: 13 2-disk RAID 10 vdevs + 4 hot spares
  • Bays: 36 (0 available)
  • Disks: WD RE 6TB
  • Cache: 2-disk read cache (L2ARC), 3-disk write log (SLOG) (Samsung 850 PRO 256GB)
  • Usable Storage: 57TB
  • Max Storage: 475TB (adding 8 more JBODs)
  • Price: $24,977.21 ($438.55 per TB)


Tier 4 - Archive Storage


  • Chassis: 4U SuperMicro
  • CPU: 2 Intel Xeon E5-1650
  • RAM: 256GB
  • OS: FreeNAS 9.3
  • RAID configuration: 13 2-disk RAID 10 vdevs + 4 hot spares
  • Bays: 36 (0 available)
  • Disks: Seagate Archive HDD 8TB
  • Cache: 2-disk write log (SLOG) (Samsung 850 PRO 256GB)
  • Usable Storage: 160TB
  • Max Storage: 1.37PB (adding 8 more JBODs)
  • Price: $17,716.75 ($110.73 per TB)
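
To sanity-check the proposal ourselves, we also ran the raw vdev math below. It ignores ZFS overhead, reserved space, and the TB-vs-TiB conversion, so treat it as a rough check only:

```python
# Raw usable-capacity math for the proposed vdev layouts, ignoring ZFS
# metadata/slop overhead and the TB-vs-TiB conversion.

def mirror_usable(vdevs, disk_tb):
    # Each 2-disk mirror vdev contributes one disk's worth of space.
    return vdevs * disk_tb

def raidz1_usable(vdevs, disks_per_vdev, disk_tb):
    # RAIDZ1 loses one disk per vdev to parity.
    return vdevs * (disks_per_vdev - 1) * disk_tb

tiers = {
    "Tier 0 (2x 2-disk mirrors, 1TB)":  (mirror_usable(2, 1.0), 6350.00),
    "Tier 1 (3x 6-disk RAIDZ1, 4TB)":   (raidz1_usable(3, 6, 4.0), 11704.00),
    "Tier 3 (13x 2-disk mirrors, 6TB)": (mirror_usable(13, 6.0), 24977.21),
    "Tier 4 (13x 2-disk mirrors, 8TB)": (mirror_usable(13, 8.0), 17716.75),
}

for name, (usable_tb, price) in tiers.items():
    print(f"{name}: {usable_tb:.0f} TB raw, ${price / usable_tb:,.0f}/TB")
```

Curiously, these raw numbers don't all line up with the quoted usable figures; the Tier 4 quote of 160TB is more than 13 mirrored pairs of 8TB disks could provide at all, which is part of why we want a second opinion.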



What do you think about this configuration? Shoot holes in it! Is there anything bad in what we're being offered? Anything we should be aware of?



What would your configuration be? What is your recommended storage solution? Drives? Servers?



Please share some insights, ideas, recommendations, advice, examples, anything - we're frustrated and willing to pay for help.


[Attachment: storage tiers diagram showing size vs. speed vs. cost]

[Attachment: storage flow diagram]


Great questions; I'll work with Brian on hitting these one by one over the next day or so. You should be very worried about buying DIY hardware for a business setting without support contracts in place. Downtime is money, lost data is more money, and someone to keep it all running is money on top of that.

To start off, FreeNAS, and by extension TrueNAS, are not products we at StorageReview recommend; there are better and more efficient options out there that will serve your needs. Next, the drives outlined really stick out to me. Never use consumer drives in an enterprise setting... ever. This includes consumer SSDs, unless you have a platform engineered from the ground up to use them properly. And those Seagate Archive HDDs are not to be used with software or hardware RAID; only bad things will happen. I would not touch this setup with a 10-foot pole if I were you.

What is the total budget you are working with? What are your current storage capacity needs, and where do you see them growing? Do you have any backup application in production right now that shows what your archival needs are?

Next, how were these compute platforms chosen? What are the VMs running on? If you want some things running on flash, are you using VMware, Hyper-V, etc.? Why isn't flash being addressed in that server? It looks like your consultant just blindly chose servers to run FreeNAS and hoped for the best.

To give you an indication of how poorly built this setup is: for the same money as what you have right here, you could get an EMC VNXe3200 plus an EMC Data Domain for backup (with deduplication), with on-site support and money left over for compute servers or other nice things like switches.


I'm afraid I have to post a largely contrary opinion. That said, although I have nothing against FreeNAS, I'd hesitate at the suggestion of using ZFS specifically for VM storage.

I do agree that a robust support contract is a must for a "non-IT people" company, and that the hardware selection is... strange. However, I have no issue with consumer drives in an enterprise setting. The largest-scale studies have repeatedly shown no difference in reliability, and many of the largest online companies (Google, Facebook, Backblaze, Akamai, etc.) use almost exclusively consumer-grade hardware. Those particular SSDs are technologically very robust and outlast many cheaper "enterprise" SSDs; only from a warranty perspective are they unsuited to extremely write-heavy VM workloads.
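
To put the warranty point in numbers, here is a quick sketch. The 150 TBW figure is the 1TB 850 PRO's rated endurance as I recall it (verify against the spec sheet), and the daily write volume is purely illustrative:

```python
# Endurance math for a consumer SSD under a write-heavy VM workload.
# RATED_TBW is the 1TB 850 PRO's rating as I recall it (verify!).
CAPACITY_TB = 1.0
RATED_TBW = 150.0
WARRANTY_YEARS = 10

dwpd = RATED_TBW / (CAPACITY_TB * WARRANTY_YEARS * 365)
print(f"Warranty covers ~{dwpd:.2f} drive writes per day")

DAILY_WRITES_TB = 0.3   # illustrative VM datastore write load
years = RATED_TBW / DAILY_WRITES_TB / 365
print(f"At {DAILY_WRITES_TB} TB/day, rated endurance lasts ~{years:.1f} years")
```

That's why I call it a warranty problem rather than a reliability problem: the drives tend to survive well beyond their rating, but a heavy VM workload would run them out of coverage quickly.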

I'd actually explicitly recommend the Seagate Archive HDDs, though again, not in RAID 10. I have found they work perfectly fine in RAID and in ZFS. The only disadvantage is slow rebuild time, but frankly, with dual (or triple) redundancy and hot spares that's really a non-issue.
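
On rebuild time specifically, a rough estimate; the resilver rate is an assumption, and sustained SMR performance can be considerably worse:

```python
# Rough time to resilver one full 8TB drive at an assumed average rate.
# Real ZFS resilvers, especially on SMR drives, are often slower.
DISK_TB = 8.0
RESILVER_MB_S = 100.0   # assumed average throughput

hours = DISK_TB * 1e6 / RESILVER_MB_S / 3600
print(f"~{hours:.0f} hours to resilver a full {DISK_TB:.0f}TB drive")
```

Even at a day or more per rebuild, dual or triple redundancy plus hot spares keeps the exposure window acceptable for an archive tier.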

Ultimately, though, the whole kit does sound rather thrown together, and it piles an excessive amount of x86 hardware onto what could be achieved by an all-in-one storage array (e.g. HP, EMC, etc.). Plus, what? Two E5-1650 CPUs on one board? That's not even possible; I'll put that down to a typo or early-draft issue... There also appears to be no redundancy, HA, or failover specified in any part of the build, so I assume it isn't uptime-critical.

I would add, though, that just buying a decent, supported array from the likes of EMC isn't a complete guarantee of safety either; it may still be better to use a different vendor for the backups. There have been recent cases of irrecoverable data loss caused by bugs in EMC software, for example; thanks to the complete vertical integration, the bug affected the entire stack, including the backups. But you're really better off with a managed appliance from the big companies than a pile of white-box servers if you want simple, manageable storage.


Yeah, well, there are plenty of other systems, not even trying hard, that could meet or exceed his requirements. It's like that guy gets a commission based on the servers he sells. If you wanted to go that route (not saying you should), you could get one high-end server plus a JBOD and stick all that storage into it. At $2,800 per TB, that consumer flash is just about the worst deal on earth. Heck, it's almost cheaper to buy DRAM than that!


Yeah, well, there are plenty of other systems, not even trying hard, that could meet or exceed his requirements. It's like that guy gets a commission based on the servers he sells.

Indeed, the more I look at it, the more it seems the machines are dreadfully over-specified and the important bit (i.e. support) hasn't been mentioned.

We "charge" nominally around $3000 per TB storage but that's including backups, replication, and 25 years of archival storage.


Yeesh. As others have said already, the real cost of a long-term storage design is in the long-term sales and support. This looks like a bunch of Supermicro white-box servers thrown together to maximize your consultant's initial profit and not much else.

I would strongly suggest you talk to a few more vendors and see what you can get, and put together a picture of your immediate, near-term, and medium-term storage needs to get an idea of a proper build-out.

It may well be a combination of vendors or product lines you end up dealing with, depending on the capacity, price, and performance needed. Keep in mind that if you are shoving lots of data around, the cost of the networking infrastructure (as well as keeping it sufficiently scalable without breaking the bank) is also going to be a consideration, as is the kind of reliability and uptime you need or don't need.
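
To illustrate the networking point with wire-speed numbers (protocol overhead will make real transfers slower; the 40TB dataset size is just borrowed from the proposed Tier 1 capacity):

```python
# Time to move a dataset at raw wire speed over common Ethernet links.
# Ignores protocol overhead, so real-world times will be longer.
DATASET_TB = 40.0   # roughly the proposed Tier 1 usable capacity

for name, gbps in [("1GbE", 1), ("10GbE", 10), ("40GbE", 40)]:
    seconds = DATASET_TB * 1e12 * 8 / (gbps * 1e9)
    print(f"{name}: ~{seconds / 3600:.1f} hours to move {DATASET_TB:.0f} TB")
```

A full backup or migration window changes dramatically with link speed, which is why the network budget belongs in the storage budget.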


Someone mentioned a VNXe and a Data Domain, and those are great examples. Don't forget HP and Dell have solid, comparable backup appliances with more capacity at less cost; even if they get poorer dedupe, you're often still getting a better deal. At scale the DD may make sense... The VNXe with FAST licensed and a little bit of flash is a pretty slick little box, IMO. Outside of that, a Nimble unit wouldn't be terrible.

There are also a number of startups with some interesting tech, like Cohesity, Rubrik, Datrium, etc.

There are even some interesting gateways for writing to the "cloud" for slower, archive-like data. EMC has CloudArray, NetApp has AltaVault, and Amazon used to offer something too.

As for building your own: unless you like building and managing all of that, you'll never get it as refined, documented, and stable as something like a Nimble, EMC, or HP box. Sure, it's fun and boosts your pride though. *oh to be in my early 20s again...*

