About comomolo

  1. I have these items for sale:
     - Adaptec 3405 SAS/SATA RAID controller, including a MiniSAS to 4-SATA cable: US$199.99
     - MiniSAS to 4-SATA 50 cm cables (3 units): US$15.99 each
     - Kingston 4 GB 1333 MHz ECC-REG modules, KVR1333D3D4R9S-4G (4 units): US$89.99 each
     - HP NC360T dual-port Gigabit card: US$99.99
     Shipping is US$20.00 for international destinations, EUR 5.00 inside Spain (where I'm located). Combined shipping applies, of course: I'll put everything in a single parcel and charge the cost of that single package. I build computer systems for a living and all these items are mostly unused. The RAID controller was bought for testing; the server memory was bought by mistake, but since it was opened, Kingston wouldn't take it back. The dual-port Gigabit card has never even been installed (bought by mistake again, and I forgot to return it). The MiniSAS cables were too short for the NAS they were bought for and were never returned. Please don't hesitate to ask any questions or make an offer.
  2. I've used 7200 RPM SAS drives in RAID 6 with two spare drives (yes, my customer is pretty paranoid about availability). I can't remember what the stripe size is (I'm waiting for the system to arrive from my customer at this very moment), but I tried to optimize it for the average file size (about 5 MB). The file system is XFS. I can't confirm whether the partitions are aligned (it's just one huge 20 TB partition, if I recall correctly, and it was created using SUSE's graphical tools). I'm not sure about the I/O scheduler either. It looks like there's an issue with the Adaptec driver; that's why I'm getting the system back for testing. First I'll make sure the driver is OK and fix any outstanding issues, then I'll optimize the system. Thanks for your support, guys. I thought the NAS would be here this week, but it seems my customer postponed the shipment, so I'll post again when I get the NAS at my workshop.
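The alignment and scheduler questions above can be checked directly on the box once it arrives. A hedged sketch, assuming a mount point of /mnt/nas and a RAID block device of /dev/sda (both are placeholder names; substitute the real ones):

```shell
# Placeholder names: /mnt/nas is the XFS mount, /dev/sda the RAID device.

# 1. XFS stripe geometry: non-zero sunit/swidth values mean mkfs.xfs
#    was told about the array's stripe size and width.
xfs_info /mnt/nas | grep -E 'sunit|swidth'

# 2. Partition alignment: with sector units, partition start offsets
#    can be compared against the RAID stripe size by hand.
parted /dev/sda unit s print

# 3. Active I/O scheduler (the bracketed entry); 'deadline' or 'noop'
#    is often preferred over 'cfq' behind a hardware RAID controller.
cat /sys/block/sda/queue/scheduler
```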
  3. If by "you" you mean "me" (comomolo), I can't honestly say. I'm not very good at finding titles or organising forums. Sorry.
  4. Thanks! That would be great. Should I contact you privately?
  5. The system performing the tests will be the NAS itself. It's a 24 TB server with two Opteron CPUs, 8 GB RAM, an Adaptec SAS RAID card and Seagate SAS disks. I can give you full specs if needed. Yes, the workstations have identical network cards (1 GbE). The NAS connects to two different networks via two 10 GbE NICs, one for each network. This was decided to give some priority to a few workstations, but it's irrelevant for the purpose of the tests. It also uses its onboard GbE connectors as a fallback. As a matter of fact, I'll be doing the tests on the NAS itself, disconnected from the network (there are reasons for this), so I'll have to simulate the workload and make sure the NAS is able to keep up with a certain load. The OS is Windows XP 64 on the workstations and Linux on the NAS (currently SUSE Linux Enterprise 11, but I'll be trying CentOS 5.5 and Windows Server 2008 R2 as well). I have to perform these tests because the NAS seems to be underperforming and my customer wants to make sure there are no hardware issues before deciding on a hardware upgrade. I find IOmeter rather confusing. I've tried to find tutorials on the web without success (if you can point me to one, I'll be very grateful); that's why I was looking for an alternative, more user-friendly application for the task. I'm willing to learn IOmeter, though. (My field of expertise is computer graphics. I built the workstations and I also built the NAS but, obviously, I'm no expert in network storage.)
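Since the tests will run locally on the NAS, one crude way to simulate several workstations writing 5 MB frames at once is to spawn parallel writers with plain dd and time the whole batch. A minimal sketch, assuming a scratch path of /tmp/nastest (a placeholder; point TARGET at the real array's mount to test it):

```shell
# Crude multi-client write simulation: CLIENTS parallel writers, each
# streaming FRAMES files of 5 MB (the average frame size) with fsync.
TARGET=${TARGET:-/tmp/nastest}   # placeholder path; use the array mount
CLIENTS=4                        # number of simulated workstations
FRAMES=20                        # 5 MB frames per client (100 MB each here)

mkdir -p "$TARGET"
start=$(date +%s)
for c in $(seq 1 "$CLIENTS"); do
  (
    mkdir -p "$TARGET/client$c"
    for f in $(seq 1 "$FRAMES"); do
      dd if=/dev/zero of="$TARGET/client$c/frame$f" bs=1M count=5 \
         conv=fsync status=none
    done
  ) &
done
wait
end=$(date +%s)

total_mb=$(( CLIENTS * FRAMES * 5 ))
elapsed=$(( end - start ))
if [ "$elapsed" -eq 0 ]; then elapsed=1; fi   # avoid divide-by-zero on fast runs
echo "$total_mb MB in ${elapsed}s => $(( total_mb / elapsed )) MB/s aggregate"
```

This only exercises sequential writes from concurrent processes, so it is far less flexible than IOmeter or Vdbench, but it needs no setup and gives a first aggregate number to compare against the single-stream frametest result.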
  6. I vote for an enterprise storage forum. I just posted a question at the home server forum (it might help there too), but I definitely would love to see some enterprise storage love.
  7. Hi everyone, According to this post: we don't have an enterprise storage forum yet, so I'd like to ask a question here about that sort of storage. I'm pretty sure it can also apply to home servers, so here I go: I've built a 24 TB NAS for a customer. It's a studio making movies and they have around 50 workstations being served by that NAS. The server is not showing stellar performance, but in order to start my analysis I would like to have proper tools. For single-user performance I use a small utility called frametest (a closed-source utility built by Silicon Graphics). It shows how many frames of a given size the system is able to write or read. This is an OK test for most of my customers (small computer graphics boutiques), who use their NASes to store the frames generated for their films and have no more than a single user accessing the NAS at a given time. However, the test doesn't say a lot about what happens when 10, 20 or 50 people in a studio are trying to read from or write to the NAS at the same time. I've heard about a simple analysis tool from Intel (NASPT), but it seems to work only with Intel-based servers (mine is AMD-based). I've also heard about Vdbench (I haven't tried it yet). Of course I've also heard about IOmeter, but that piece of software is very hard to use and its output is not easy to interpret, except for higher-level engineers (which I'm not; I'm just a systems builder). Any advice on tools to measure the performance of a NAS server will be greatly appreciated.
  8. Thanks for the replies, guys. I'm actually considering putting more than one controller in that file server, but there are some issues with that too, and certainly the network will be the final bottleneck, which may make this point moot. However, I made a discovery today about the new SAS drives from Seagate. They're the ES.2 series with a SAS interface instead of SATA. Areca has promptly published results of their performance with the 1680ix controllers and it looks promising. I know those benchmarks should be taken with caution, but at least the SATA emulation is gone in such a configuration. Plus, those drives have a price very similar to my previous choice (Ultrastar A7K1000), so now my decision has turned into: 1280ML + 24 x Ultrastar A7K1000 vs 1680ix + 24 x Seagate ES.2 SAS 1 TB. I think I'll finally have to choose whatever is readily available from my providers at the time of purchase (that's going to be next week)... Thanks again for your reassuring replies.
  9. I e-mailed Areca, but they seem reluctant to make the decision for me here. They say the 1680ix is designed for SAS and emulates SATA, so it carries a performance penalty with SATA drives. The 1280, on the other hand, is designed for SATA, but its processor is 33% slower. They refuse to say which one will perform faster for my needs (a file server in a film studio with 40 workstations; high STR is a priority). So the questions would be: - Does SATA emulation slow the 1680ix down by as much as 33%? - Or are they just different CPUs, and the clock speed doesn't mean a thing? And the most important one: - Do you guys know of any benchmark comparing these two controllers?
  10. Thanks for all the replies. It seems clear that two controllers will be better. I've checked the pricing and it's only a small penalty, so I think I'll go that route.
  11. I must build a file server and it will have two big partitions (1/3 and 2/3 of the total available space). Would it be wiser to go with just one controller for the 24 disks and an ordinary partition scheme, or should I go with one 8-disk controller and another 16-disk controller and physically separate both "partitions"? Which solution would be faster in terms of STR and access time? RAID5 + HotSpare will be used either way, be it a single-controller or a dual-controller solution. Thanks for any help.
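For a rough comparison of the two layouts above, a back-of-the-envelope capacity calculation, assuming hypothetical 1 TB drives (a placeholder size) with one parity disk and one hot spare per RAID5 array:

```shell
# Usable capacity under RAID5 + one hot spare per array.
DISK_TB=1   # placeholder drive size

# Single 24-disk array: lose 1 disk to parity + 1 hot spare.
single=$(( (24 - 2) * DISK_TB ))

# Split: an 8-disk and a 16-disk array, each losing parity + spare.
split=$(( (8 - 2) * DISK_TB + (16 - 2) * DISK_TB ))

echo "single controller: ${single} TB usable"
echo "two controllers:   ${split} TB usable"
```

The split layout costs one extra parity disk and one extra spare, so the dual-controller route trades a little capacity for independent I/O paths.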
  12. Uhh, the above is very incorrect, unless your "in this case" is a very special case indeed. This illustrates it well: /Jesper I disagree. That illustration is very nice, but "Network" can be any type of network, including one using FibreChannel HBAs and switches. As a matter of fact, the little cloud that reads "FC/GbE" could as well read "FC/GbE/Infiniband/Fast Ethernet/etc.", and the little cloud that reads "Network" could bear exactly the same label as the other one. The drawing only illustrates where the network is placed when comparing a NAS and a SAN. It doesn't matter what technology you use for the network (of course it does for the speed and cable lengths of your installation, but it doesn't for the concept).
  13. I'm building a file server and it must serve a number of clients through a network (let's call it a "NAS"). FibreChannel is - in this case - just a network infrastructure choice, as are Gigabit, 10 Gigabit or Infiniband. I am a system builder. I am the provider. I usually build workstations, but I'm getting more and more into file servers. The choice of SATA is not about cost but about disk capacity. I can't go over 24 drives for other reasons and I must get 10 TB after formatting a RAID6 array. If you know of a SAS solution that can do this (i.e. 500 GB SAS drives), please let me know! Thanks for the advice. I wish I could go with ZFS and I might try to convince my customer, but OpenSolaris is considered "exotic" at this point in time (the "official" OpenSolaris "distro" was released only a couple of days ago...). I'll give it a try and show it to my customer, though. Thanks again.
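The 10 TB constraint above can be sanity-checked with simple arithmetic: RAID6 loses two drives to parity. A sketch assuming the hypothetical 500 GB SAS drives mentioned:

```shell
# RAID6 usable capacity check for the 24-drive, 10 TB requirement.
DRIVES=24
DRIVE_GB=500   # hypothetical 500 GB SAS drives

usable_gb=$(( (DRIVES - 2) * DRIVE_GB ))   # RAID6: two parity drives
echo "RAID6 usable: ${usable_gb} GB (target is 10000 GB)"
```

So 500 GB drives would clear the 10 TB target with about 1 TB of headroom before filesystem overhead, which is why such SAS drives would be interesting if they existed at the right price.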
  14. Thanks. Would you mind elaborating? Do you think SATA drives won't be up to the task? Why?
  15. What about connecting through InfiniBand links? An 8-port 10 Gb switch should be less than $1,000, while a 10 Gb HCA should be less than $100. That's a nice idea (can you provide any links?). However, I just learned today that the customer owns some FC switches and he definitely wants FC.