dcjett

200TB SAN/NAS for feature film


Hello,

Wondering if anyone can give some guidance on my storage needs for an upcoming film.

I figure I need at least 176 TB of usable storage for the raw negative of the film. Each frame is 30MB. 
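For scale, the back-of-the-envelope math works out roughly like this (a minimal sketch; the 24fps frame rate is my assumption, not stated above):

```python
# Back-of-the-envelope capacity math for the raw negative.
# 30MB/frame and 176TB are from the post; 24fps is an assumption.

FRAME_MB = 30      # per-frame size (from the post)
FPS = 24           # assumed frame rate
TARGET_TB = 176    # usable capacity target (from the post)

frames = TARGET_TB * 1e12 / (FRAME_MB * 1e6)
hours = frames / FPS / 3600
stream_mb_s = FRAME_MB * FPS  # real-time playback/ingest rate

print(f"{frames:,.0f} frames ~ {hours:.0f} hours of footage at {FPS}fps")
print(f"Real-time streaming rate: {stream_mb_s} MB/s")
```

That puts real-time playback of the raw negative around 720MB/s, which is worth keeping in mind when sizing the network links below.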

I will be copying files from the cards to the server while simultaneously reading files off the server to render proxies on an attached workstation, which would write those proxies back to the server. The proxies will be larger QuickTime or MXF files.

I anticipate needing only two workstations (one Linux, one Windows) attached to the server. I'm also considering a secondary storage tier for the proxies, which the edit station would access so it doesn't compete too much with the bulk archiving and rendering traffic; this could just be onboard storage on the workstation.

I'm looking for a solution around $25K. I'm hoping the contained nature of this work makes a cheaper solution viable, bypassing the need for any Fibre Channel or 10GbE switches by going point to point on network cards (the server would need a 4-port 10GbE or Fibre Channel card).

Any advice is greatly appreciated.

Thank you.

DC

 


20x 12TB nearline disks are what, $700 each, so $14,000 right there. Put them in two 10-drive RAID6s (or RAIDZ2 vdevs) and you end up with 192TB usable, which is about 175 TiB once you account for the decimal-to-binary conversion (1TB works out to roughly 931.3GiB). That might be cutting it pretty close against your 176TB target, since with disks that full you'd have no room to defragment effectively.
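To make the TB-versus-TiB point explicit, here's a minimal sketch of that capacity math (drive size and layout as above):

```python
# Usable capacity for two 10-drive RAID6 groups of 12TB drives.
# RAID6 / RAIDZ2 gives up two drives per group to parity.

DRIVE_TB = 12
GROUPS = 2
DRIVES_PER_GROUP = 10
PARITY_PER_GROUP = 2

data_drives = GROUPS * (DRIVES_PER_GROUP - PARITY_PER_GROUP)
usable_tb = data_drives * DRIVE_TB          # decimal terabytes (marketing units)
usable_tib = usable_tb * 1e12 / 2**40       # binary tebibytes (what the OS reports)

print(f"{data_drives} data drives -> {usable_tb} TB = {usable_tib:.1f} TiB usable")
```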

In theory you can do this, but you'd probably have to DIY it to make it cost-effective; I'm not sure any major vendor would do it, although I can imagine some smaller third-tier players being able to build it for you for $25k.


Yeah, DIY was my first thought on this project, but I thought it was worth exploring vendors who do this full time, as I'm not exactly a storage professional.

Do you know of any such third-tier vendors worth looking at?

Thanks.


With that budget, you might consider a white-box chassis with Windows Storage Spaces. It would probably be cheaper than buying a full-fledged NAS. We're happy to help you spec it out further if you go that route.


Hey Brian,

I DM'ed you, but thought this might be a good topic for the board. Just so I understand what you mean by white box: that's just a generic chassis, right?

Why Windows Storage Spaces?

So maybe a JBOD like the Supermicro CSE-847E16-RJBOD1 (black 4U rackmount chassis, 1400W redundant PSU).

$2,000 on Newegg

Seagate IronWolf 6TB NAS drives (7200 RPM, 128MB cache, SATA 6Gb/s, 3.5", model ST6000VN0041)

45 bays x 6TB ($200 each) = 270TB raw for $9,000

Then I could use some help on the server spec side of things. I assume I'd go SAS to the JBOD and 10 or 40GbE to the client workstations.

What network protocol would I use for clients? CIFS? Are there any products that offer tuning for high-volume, low-IOPS workflows like mine? I've worked with products like MetaSAN and StorNext in the past. In my experience, 10GbE can be great but is problematic without proper tuning. Fibre Channel with StorNext has been the most robust for me, but it's out of my price range.

 



Windows Server and either hardware RAID or Storage Spaces would give you the sharing aspect you are after... at that point your bottleneck would be whatever network connection you planned on leveraging between the workstations and the server.

 

I'd personally look at a Storage Server option from Supermicro... same basic chassis, but it has the motherboard as well.


So, Storage Spaces is software RAID, yeah?

How does that compare to a hardware RAID configuration?

And would this be the type of server chassis to consider?

Supermicro SuperChassis CSE-846BE1C-R1K28B 1280W 4U Rackmount Server Chassis (Black)

$1,677 on Newegg. How much would I need to spend on motherboard, CPU, RAM, etc.?

I'm estimating then that I would need 24 x 10TB drives.

Seagate IronWolf 10TB drives are available in 4-packs for $1,439.97, so about $8,640 for 24.

Plus 3x 10GbE adapters (should I consider 40GbE?):

1x ATTO NS14 = $1,495

2x ATTO NS12 = $995 each ($1,990)

$3,485 total for network adapters

$13,800 to this point. That leaves roughly $11K for motherboard, CPU, RAM, OS boot drives, and software.

What am I missing? Any recs on those last items? Sorry, I'm working this out as I post...:)
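For what it's worth, here's a quick sanity check of that drive count against the 176TB target, plus the running budget (prices from above; the two-group RAID6 layout is just an assumption):

```python
# Sanity check: does 24 x 10TB cover the 176TB target after RAID6,
# and where does the running budget land? The layout (two 12-drive
# RAID6 groups) is an assumption; prices are from the thread.

DRIVE_TB, DRIVES, PARITY_PER_GROUP, GROUPS = 10, 24, 2, 2
usable_tb = (DRIVES - GROUPS * PARITY_PER_GROUP) * DRIVE_TB
print(f"Usable: {usable_tb} TB ({usable_tb * 1e12 / 2**40:.1f} TiB) vs. 176 TB target")

parts = {"chassis": 1677, "24 drives": 6 * 1439.97, "NICs": 3485}
spent = sum(parts.values())
print(f"Spent so far: ${spent:,.0f}; remaining of $25K: ${25000 - spent:,.0f}")
```

That layout lands at 200TB usable (about 182 TiB), comfortably over the target, with roughly $11,200 left of the $25K.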


On the chassis, that model should work, although there may be better options out there; I haven't compared models and pricing in a while. Getting a full package with chassis + expander + motherboard + SAS card would be better, since you'd know you're getting matched components.

CPU load for HDD storage is pretty low; you could go with fairly low- to mid-range CPUs and get out just fine. Same with DRAM... 16-32GB would probably get the job done easily.

On NICs, it really depends on cost versus what you need. Are you going direct-attached to bypass a switch? I'd probably look at the Intel X520 or X540 NICs, which are pretty cheap, and get three dual-port cards: one in the server and the other two in the desktops, direct-attached with static IPs (the throughput sketch below is a quick way to verify those links). If you have budget for a switch in that mix, go for a Netgear model.

Do a two-SSD combo (240 or 480GB light-enterprise) in RAID1/mirror for boot drives.
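Once the point-to-point links are up, it's worth verifying you actually get 10GbE line rate before blaming the storage. A minimal sketch of a poor man's iperf (Python 3.8+; the port and duration are arbitrary assumptions):

```python
# Minimal TCP throughput check for a point-to-point 10GbE link.
# Run "python net_check.py server" on one host, then
# "python net_check.py client <server_ip>" on the other.

import socket, sys, time

PORT = 5201                   # arbitrary choice
CHUNK = b"\x00" * (1 << 20)   # 1MiB send buffer
SECONDS = 10                  # test duration on the client side

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total, start = 0, time.monotonic()
        with conn:
            # recv returns b"" when the client closes, ending the loop
            while (data := conn.recv(1 << 20)):
                total += len(data)
        elapsed = time.monotonic() - start
        print(f"Received {total / 1e9:.2f} GB at {total / 1e6 / elapsed:.0f} MB/s from {addr[0]}")

def client(host):
    with socket.create_connection((host, PORT)) as sock:
        end = time.monotonic() + SECONDS
        while time.monotonic() < end:
            sock.sendall(CHUNK)

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```

On a healthy direct-attached 10GbE link you'd expect to see on the order of 1,100-1,200 MB/s; much less than that points at NIC, driver, or MTU tuning rather than the disks.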


Something like this perhaps?

Disks seemed overpriced here, and it forced me to choose at least one.

My main question is the CPU. There were over 40 CPU choices, and I'm really out of the loop on CPUs, so any guidance there would be helpful.

[Attached screenshot: server configuration page, 2017-09-28]


The newest processor lineup naming is still new to me, but if you shoot me a link to that page I'll give you a few options that make sense. It's really a clock speed times core count question.


https://www.rackmountpro.com/product/2568/6049P-E1CR24H.html

The server will really only be handling file copies, so part of the question is what protocol is best for this kind of traffic, and whether clock speed or core count matters more.

I can't get a clear sense of whether copies hit the CPU hard, and I'm interested in a way of copying that has low overhead; for example, the way Aspera delivers faster network transfers by using its own UDP-based protocol instead of TCP. I might have that slightly wrong, so any help is appreciated.
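One way to get a feel for the CPU question is simply to measure it: time a large copy onto the share and compare wall-clock time against process CPU time. A minimal sketch, with placeholder paths:

```python
# Measure throughput and CPU overhead of a large file copy.
# If CPU time is a small fraction of wall time, the copy is
# I/O- or network-bound, not CPU-bound. Paths are placeholders.

import os, shutil, time

SRC = "card_dump/A001C001.dng"           # hypothetical source file
DST = "/mnt/server_share/A001C001.dng"   # hypothetical share path

wall0, cpu0 = time.monotonic(), time.process_time()
shutil.copyfile(SRC, DST)                # uses zero-copy sendfile on Linux (Python 3.8+)
wall, cpu = time.monotonic() - wall0, time.process_time() - cpu0

size_mb = os.path.getsize(SRC) / 1e6
print(f"{size_mb:.0f} MB in {wall:.1f}s -> {size_mb / wall:.0f} MB/s")
print(f"CPU time: {cpu:.2f}s ({100 * cpu / wall:.0f}% of wall time)")
```

On modern hardware the CPU fraction for plain file copies is typically small, which lines up with the advice below that entry-level CPUs are enough for this workload.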


To put a bit of perspective on the CPUs, I was able to hit ~2.1M IOPS at about 20GB/s (bandwidth wasn't the limiter, only IOPS) on two 2.9GHz CPUs with 8 cores each.

With the HDD platform you're looking at, nearly entry-level CPUs would probably work; file serving at this scale is very lightweight. Leveraging SMB3 as your file transfer protocol would be optimal if each side is Windows-based (Windows 8+ and Windows Server 2012+).


Mmm, that's more of a software question than I'm familiar with. Most OSes can serve both SMB/CIFS (Windows) and NFS (Linux), as well as several other protocols with the right drivers/plugins, so it's generally not a problem with anything reasonably well developed: NAS4Free, FreeNAS, Windows Server, whatever...

Again, nearly entry-level CPUs should be just fine; unfortunately, I can't supply benchmark numbers like Kevin does for the hardware I have on hand.


Ten cores each is plenty; you can go waaay lower-end than that. Even a single Xeon Bronze 3104 would be overkill. Our NAS builds are all single quad-core setups; I think the oldest is a Core 2 Quad-era rig, if you need perspective.

Your workload for the NAS is going to be so low that you're never going to push the CPUs anywhere near rated TDP except maybe when rebuilding the RAID/ZFS pool, and even then I doubt that's going to remotely stress it.


This is great feedback from everyone. Thank you.

Speaking of RAID pools, is that what Storage Spaces does? Is it traditional RAID? And is there any benefit to putting in a hardware RAID card?


I'm not particularly familiar with Storage Spaces, sorry. Maybe someone else can answer?

As for budget RAID/ZFS solutions: ZFS and Storage Spaces are both software/OS-based, so there's zero point to a hardware RAID card in those scenarios.


While some could argue ZFS has filesystem advantages, the dead-simple file sharing and speed of SMB3 make Windows the preferable option, IMO.

