Everything posted by Quindor

  1. Sure, no problem. Knowledge exists to be shared! If you are looking to build another server or type of storage, just let us know. I and other people on here have a wide variety of experience, from simple USB to high-end 100%-uptime-guarantee SAN storage (Hitachi Data Systems offers this). I myself host a 21TB server at home which runs VMware and draws around 140 watts fully tuned. A bit tuned (which I mostly run) is 170 Wat...
  2. Quindor

    Ram Disks vs SSD vs Hard Disk

    Sure, RAM disks have been possible for years, but they come with their own dangers. They are insanely fast... but they slow down overall system performance a bit, because the data being accessed competes with your normal programs for RAM bandwidth. Also, power loss = data loss. Well, you can prevent that with a UPS... but a system crash or hang is ALSO data loss. IOPS and MB/sec are through the roof, of course, but large data volumes are next to impossible. Only things like heavy-duty websites would run awesome on it.

    There is a company that makes "RAM disk" SAN appliances: Texas Memory Systems, with their RamSan line. They use a chassis with a built-in UPS, LOADS of memory and built-in magnetic disks. In the case of a power failure or other failure, the data is immediately dumped to disk using the built-in UPS power. A bit tricky, but their devices are enterprise worthy. EVE Online uses one, for instance, because nothing else can deliver the IOPS they need for their 200 dual-quad-core database blades, which all connect to the SAME database. Unique high-performance stuff! See http://www.ramsan.com/success/ccpgames.htm for more info.

    So yes, it's possible. Using simple software in a production/enterprise environment... no, definitely not recommended. Play with caching or something. A professional setup is possible, but... it's going to cost you! Update: Seems there was news about it today too: http://www.storagereview.com/texas_memory_systems_ramsan630_released
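    For anyone who wants to toy with the software route on Linux, a tmpfs mount is the simplest way to get a RAM disk. A minimal sketch (needs root; the mount point and size are just example values):

```shell
# Create a 2 GB RAM-backed filesystem. Everything in it vanishes on
# unmount, crash or power loss -- exactly the danger described above.
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=2G tmpfs /mnt/ramdisk

# Quick sanity check: write 512 MB and let dd report the throughput.
dd if=/dev/zero of=/mnt/ramdisk/testfile bs=1M count=512

umount /mnt/ramdisk
```

    Note that tmpfs can also swap out under memory pressure, so it is a convenience RAM disk, not a performance guarantee.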
  3. Quindor


    I will give you a reply soon. It's all in the start post, but I'll try to explain with a realistic situation.
  4. Wow, okay... let's see. First, to clarify: FC HDDs are almost identical to SCSI HDDs, except that they use FC. FC and SCSI are very closely related and can be treated in almost the same way. FC is just a more flexible interface, but also more expensive, and thus both standards are still alive. SCSI/SAS has its advantages, as does FC.

Then, I do not know who pays your power bill, but don't be surprised to see it shoot through the roof. I do not know how "new" these disks are; let's say they are 2 to 3 years old. They will give you fairly decent IOPS (nowhere near SSD speeds, you'd probably need about 15 of them to reach that, and they still won't come near the access time) but only poor GBs. Meanwhile, they draw a LOT of power. Where a 1TB desktop disk idles at, say, 7 watts, these will easily do 15 to 20 watts apiece. Be warned!

So, if after heeding those warnings you still wish to use the equipment, let's continue. You could hook up a single HDD to the HBA and that should work. Not very effective, but still, it's all very close to SCSI. The best thing to do is get an array/storage box for them. This can be a box with an FC RAID controller in it, which will allow you to make LUNs and present them through your HBA to your system, or it can be a simple JBOD box, leaving your system to do the processing work.

An HP EVA is indeed an example of a storage box with RAID functionality built in; in the case of the EVA this is called "virtual" RAID. It leaves less up to the user and just presents you with disk space you can use. I am not a fan of this type of box because it hides too much and sets you up for a fall in the future. My opinion, and probably not really something you need to worry about at home. An HP MSA is a more traditional box. Without a controller you can hook it up to your system and see all the disks; with a controller it becomes an intelligent system which can RAID the disks for you and then present the result to you.
What you need all depends on your choices. Let me answer your switch question, that will clarify some more. In most cases you are not required to have a switch to interconnect, say, an EVA with your HBA. In some cases I've seen, they might refuse to work together; going by the FC standard, it should just work. Now, say you go with an EVA and a switch. Then you can buy multiple HBAs and put one in each system you wish to use the storage on. This is the traditional configuration: the most flexible, but also definitely the most expensive. Two controllers are mostly used for high availability. Since the disks are dual ported, each controller has a connection to them. Using dual paths to your host, if one controller fails, the other automatically kicks in and takes over. Not really needed at home, I think.

Cheapest would be to buy a simple OEM box which can house your disks, connect that to your HBA directly, and RAID the disks using software RAID on your host. No switch, no logic in the box, etc. For home use a fine solution, just (probably, depending on OS) not boot-drive compatible. This could be your workstation, which then lets others access the files using CIFS. Or you could build a dedicated storage box with something like FreeNAS or OpenFiler and use iSCSI to deliver the storage to the other boxes. That would allow you to use software RAID without burdening the other systems with the processing power it takes. Very many options, all depending on what you want to do with it and how.

Using these disks to hold movie files, etc.? Don't even bother... buy a 2TB disk, put it in your PC and you'll be much happier with the low power bill and low noise. I think if you really ran, say, 32 10K/15K FC disks a few years old, the power alone would cost you the price of a 2TB SATA disk every month. If you do wish to run a high-traffic webserver, start looking at SSDs. I'm not saying this type of storage is unsuited for that purpose, on the contrary, it's just not that good for a "home" situation.
Hopefully this answers some of your questions. Let me know! P.S. A warning: an HP EVA or MSA will *not* accept most disks that are not HP branded or do not have HP firmware. Also, the blanks in unused drive positions are dummy caddies, not real ones, so you cannot fill them with your own disks.
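To put a rough number on that power warning, here is a back-of-the-envelope sketch; the per-disk wattage and the 20 cents/kWh tariff are assumptions, adjust them for your situation:

```shell
# Monthly power cost of spinning 32 older 10K/15K FC disks around the clock.
DISKS=32
WATTS_EACH=18       # assumed idle draw per disk (the 15-20 W range above)
CENT_PER_KWH=20     # assumed electricity price in cents per kWh

TOTAL_W=$((DISKS * WATTS_EACH))                  # continuous draw in watts
KWH_MONTH=$((TOTAL_W * 24 * 30 / 1000))          # energy per 30-day month
COST_MONTH=$((KWH_MONTH * CENT_PER_KWH / 100))   # rough monthly cost
echo "${TOTAL_W} W continuous -> ~${KWH_MONTH} kWh -> ~${COST_MONTH} per month"
```

With these assumptions that works out to roughly 80 a month in power alone, in the ballpark of a new 2TB SATA disk every month.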
  5. I've read something about this before on Tom's Hardware when they were testing SSDs. It seems power management is too aggressive and causes the lower values. Try to Google it, I don't have time right now, sorry. Here is the article: http://www.tomshardware.com/reviews/ssd-hdd-power,2170.html Not sure if it applies to you, but you could try disabling the power management features in your BIOS (and in Windows' tools) and see if that has any effect! Let us know!
  6. Quindor

    RAID 5 Q&A

    Ouch... 20MB/sec is really, really slow. A RAID5 of USB sticks would be faster than that. But seriously, something is REALLY wrong there; that's slower than a single disk. RAID was invented for two things: data security and a speed increase.

    I use several arrays, but my newest one is a 7x2TB Samsung F3 EcoGreen array. Each of those disks does about 100MB/sec (at the beginning of the disk). For sequential transfers you (in theory) just add up the speeds of your disks, so 7x100MB/sec = 700MB/sec. If you then wish to use RAID5, this is striping as with RAID0 but with parity added. Reads are affected by "-1 disk", so that makes 600MB/sec; writes are dependent on your RAID processor, or on your CPU when using software RAID. As you can see, this is all more than enough for sequential reads or writes. In this situation it depends more on the OS you are going to use than on the RAID array or controller, I would think.

    It all depends on your usage pattern. Running 10 VMs requires a completely different setup than just downloading and reading large files. If using Windows, be sure to run Win2k8 R2 and Win 7 so that you can use SMB2; this will give you 110MB/sec over the network with file sharing, instead of the 60MB/sec to 70MB/sec you get with Win2k3 and XP, for instance.
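    The theoretical ceilings above fit in two lines of shell arithmetic; the disk count and per-disk speed are the example figures from this post:

```shell
# Theoretical sequential throughput for an n-disk stripe.
DISKS=7
MB_PER_DISK=100   # sequential speed at the start of each disk

RAID0_READ=$((DISKS * MB_PER_DISK))        # all spindles add up
RAID5_READ=$(((DISKS - 1) * MB_PER_DISK))  # one disk's worth goes to parity
echo "RAID0 read ceiling: ${RAID0_READ} MB/s, RAID5: ${RAID5_READ} MB/s"
```

    Real-world numbers land below these ceilings, and they only bound sequential transfers, not random I/O.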
  7. Quindor

    Promise SuperTrak EX8350 and Windows 7

    It might be a lack of BIOS memory "space". Try turning off everything not used (or even used) in your BIOS, like onboard LAN, onboard sound, onboard USB (when not in use), onboard FireWire, etc. These devices all claim space, which your RAID controller is trying to do as well. It might not be able to find enough, although this is more a problem of the past. What I would guess is really the problem, though, is that Windows tries to access the array found through the card, is unable to read from it, and gets stuck in a timeout or simply hangs. You could try the card without the disks and see what happens. If it does boot then, there is something wrong with the array (which Windows accesses around the time you are talking about). Hope that gets you a bit further.
  8. Here at StorageReview we are looking to make a series of frontpage articles on the topic of home servers. We of course have some ideas ourselves, but we are very much looking for input from the community to see which points you would like focused on. What would you like to see explained, what is most important for home users? We are intending to focus on everything from a simple single-drive box to a full-fledged 12-drive-bay storage server. All have a place, and everybody has a different need. To clarify, this topic is not for direct questions; that is what the rest of the forum is for. Well then, what would you like to see, what are you interested in? Complete box solutions, or custom-built boxes with low power usage and Linux or ZFS? Or WHS? Tell us!
  9. Quindor


    Awesome, thanks! Added it to the table! I seemed unable to fix my tables, so I wrote the admin. As soon as I can, I'll update! Table editing has been fixed (thank you admins!) and the information has been added!
  10. Quindor


    Sorry to say, I am not quite clear on what you mean exactly... The 7 seconds was chosen because most RAID-edition HDDs from various manufacturers are also set to 7 seconds, and their documentation mentions that RAID controllers wait for 8 seconds. So this value isn't guesswork. Sure, some RAID controllers will wait 20 or 30 seconds; that still should not pose a problem. Only if a RAID controller waits less than the drive's 7 seconds will you still run into problems. But since you can change the value to whatever you'd like, you can lower it to whatever is needed.

    The whole reason we are looking at TLER is this: if the RAID array evicts a disk because of an error (which may or may not have been preventable with TLER), it wants to rebuild onto a new disk. If during this process another disk encounters an error (and the disk does not report it in time), the rebuild would stop and kill your array altogether. But if TLER catches the error and tells the RAID controller, the RAID controller can choose what to do.

    Conclusion: we want the disk to try to fix/reallocate the sector for up to 7 seconds; if it cannot within that time, it should report the sector as bad and let the RAID controller decide what to do. In a rebuild situation such as you describe, the RAID controller will usually tell the disk to try again and again until it succeeds, because there is no other source to rebuild from.
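    On drives that expose SCT Error Recovery Control, the 7-second limit can be inspected and set from Linux with smartmontools. A sketch (the device name is a placeholder, and not every drive accepts the command):

```shell
# Show the current SCT ERC (TLER/CCTL) timeouts, if the drive supports it.
smartctl -l scterc /dev/sdX

# Set the read and write recovery timeouts to 7.0 seconds (units of 0.1 s).
# On many drives this setting resets on power cycle, so reapply it at boot.
smartctl -l scterc,70,70 /dev/sdX
```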
  11. Quindor

    Building Low Cost RAID6 (8x 2TB)

    Interesting topic. I did almost the same a month back and chose the 2TB Samsungs, as you can read in my topic. I have 7 of them on an Adaptec 5805, working just fine. Seek performance is sooo slow, though. My 4-disk RAID5 arrays of 7200RPM Seagate 7200.11s and Samsung F1s were much faster at seeks than the 7x2TB 5400RPM array is. I run VMs off them... but at best they suffice, not much more. Sequentially, each Samsung will do about 100MB/sec at the start of the disk, so sequential performance is decent. Further, no problems with the drives. I'm hoping to use a MaxIQ module in the future to bridge the performance gap. Your 3ware controller is actually ideal for smartctl/smartmontools, because it has the widest operating system support for accessing the physical disks!
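    For reference, querying the physical disks behind a 3ware card with smartmontools looks like this; the controller device node and port numbers are examples, check the smartctl man page for your exact card and kernel:

```shell
# SMART data for the physical disk on port 0 of the first 3ware controller.
smartctl -a -d 3ware,0 /dev/twa0

# Subsequent ports are addressed the same way.
smartctl -a -d 3ware,1 /dev/twa0
```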
  12. Cool! That is actually pretty smart!
  13. Quindor

    4 SSD RAID0 - For the fun of it

    Hmm, that is a bit disappointing, but for a mix of so many different SSDs, some of which aren't widely used, I guess it isn't half bad! Fun experiment indeed! Here is a screenshot of the newest CrystalDiskMark running on my Vertex 120GB SSD on an ICH10R, just for comparison.
  14. Quindor

    Home Server advice

    Reading these lines kind of says it all. In my opinion you are not looking for storage to toy and fool around with. You are looking for storage you can put into a box, that should not eat too much power (which does not rule out Windows or Linux boxes, in my opinion), and that should provide easy functionality. (Correct me if I'm wrong here!) If those are indeed your choices, the Synology box that Brain advises would be the perfect fit from what I can see. The only downside is that it only takes 5x2TB; with RAID5 you would get 8TB of net space. Beyond that, it's a versatile and fast system going by the specs. It can do torrents and newsgroups (I believe) if you need that, and also has a host of other functions, such as DLNA, ready to go when and if you want them!
  15. Quindor

    Home Server advice

    I can easily tell you that almost any fake hardware-based RAID ("fakeRAID") is almost always inferior to host-based RAID using your operating system. This is especially true in the case of ZFS or the Linux flavors; Windows also has fine ways to do it (in the server versions). The Linux implementations are quite excellent right now and rival the power and flexibility of hardware cards. They do strain your host CPU, of course, but even with that factored in they are fine for running RAID5, etc. So that is absolutely an option in my opinion. No need to invest hundreds of dollars/euros/pounds if the functionality isn't really needed. This does not mean that hardware-based cards have become useless (although don't tell Sun that, where ZFS is concerned), but they have their own appropriate scenarios.
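    As a concrete example of host-based RAID on Linux, creating a 4-disk RAID5 with mdadm looks roughly like this; the device names are placeholders, and this destroys whatever is on those disks:

```shell
# Build a software RAID5 set from four whole disks.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put a filesystem on it and watch the initial build progress.
mkfs.ext4 /dev/md0
mdadm --detail /dev/md0
cat /proc/mdstat
```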
  16. Well, that's a tough one. I've heard good stories about WHS solutions, as well as the Synology stuff. Both are good in general! I'm guessing it depends a bit on flexibility, or rather, the type of flexibility.

Windows Home Server gives you a well-built, pre-packaged system with a full operating system and, if wished, all the capabilities that brings. But as you noted, it has the downside that you need to buy a drive with it and keep an HDD running with the installation on it, etc. Some will also make the point that "it's still Windows", but let's not go into that discussion. Also, depending on what case you get, growing (past 4 disks) is easier than with a hardware-based solution.

Synology, on the other hand, also gives you a box; no storage comes with it, so you can determine that yourself, which I think is a good thing too. The system also comes with a lot of extras you can easily install and use. But that is also the downside: since they use slower processors, don't expect too much out of it. Some torrenting, FTP and DLNA, and that's about it. You do get a smaller form factor and better power efficiency, and if nothing is going on, all the disks can actually be put to rest! Everything is available through a web interface, and well, it's Linux and not Windows, so it's often seen as more robust.

So the Synology would be easier if you just want "out of the box, has to work now" functionality and accept the limits it might have, while the Windows box might be capable of a little bit more, but at the price of complexity. Anyone have more thoughts on the subject?
  17. Quindor


    Added results for my 2.0TB Samsung F3EG disks, 1TB Samsung F1 disks and for the 1.5TB 7200.11 Seagate disks!
  18. Sure, no problem! I have always been a storage enthusiast and also have my day job in that field.
  19. Well then, benchmarking is completed and over the weekend I took the array into production! Pictures of the build/upgrade can be found by clicking on the first picture, and benchmarks by clicking on the next one. The pictures themselves have a description of how and what. 1.6TB/sec within VMware from the controller cache, NICE! I ended up with 21TB of gross storage; as I am using various levels of RAID or striping on the various disks, net space is about 19TB, I believe. Questions, comments, discussions, all very welcome!
  20. Well then, I'm currently in the middle of migrating my data from my current storage to the new storage. Since this storage lives within VMware ESXi, this is a bit harder. What I ended up doing is making my workstation ESXi-compatible and running that server beside it, then transferring all the VMs I wished to keep through FTP, and the data inside the Windows storage VM over the network. When this is done, I will build the new array into the "old" server and thus have completed my data migration. Also, as promised, I have made benchmarks. For these I compiled a zip file which I meant to include in this post, but for some reason the upload feature does not function, so you can download the archive over here. The file names are in a bit of a cryptic format, but hey. I'll start making a new post which turns everything into a sort of review.
  21. I was hoping to be posting my benchmarks right now (most have been made), but I've run into a bit of a problem at a client's site; fixing that first. It will probably be tonight before I can put them online!
  22. Quindor


    Yes and yes, but I'm busy at a client right now. I will add info to this topic and fill my other topic as soon as I get a bit of time!
  23. Quindor


    Hahaha, my Linux skills are at noobish levels at best! I just try until it works. Again, I can put the zipped DD images of the stick on my FTP server for you if you really wish, but you would need an identical USB stick (exactly identical, in my case 8GB), and the stuff I made only works for ADAPTEC cards; your PERC does not work in that way, so it would be completely useless for you.

    Your best bet would be: download a Debian ISO (Debian is easier than Fedora in my opinion, though Fedora is best for Adaptec), install that to your USB stick, compile the SVN version of smartmontools (apt-get install subversion gcc automake and, uhm, the C++ modules for gcc), then follow the instructions on the smartmontools homepage. After that, try smartctl with the megaraid switch; the Dell PERC 5/i is an OEM version of an LSI card (the exact type escapes me right now). Sorry to disappoint you on this. It also took me 2 days to get it working for myself, but believe me when I say my stick will not work for you. Once you figure it out, it's not that hard though...

    To everybody else: come on people, submit those specs! Single disks connected to motherboard controllers are fine too, etc.
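    For the PERC 5/i route described above, the end result looks something like this; the package names are Debian-style assumptions and the disk ID 0 is an example:

```shell
# Tools needed to build smartmontools from source on Debian.
apt-get install subversion build-essential automake

# (check out and build smartmontools as per its homepage instructions)

# Then query a physical disk behind the LSI/PERC controller by its ID.
smartctl -a -d megaraid,0 /dev/sda
```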
  24. Yes, you are correct. The exact type is the Samsung Spinpoint F3EG HD203WI. Let me know what you would like me to test, maybe as a prelude to the official review. Since I will be connecting these to a RAID controller, the results will be influenced a bit by that (I can turn off the cache on the controller). If you would like, I'm willing to hook one up to my motherboard controller (ICH10R) and run some tests using that too. Come on people, does nobody have any standard benchmark tools you use and things you'd like to see? IOmeter pattern files, etc.? I'll start off myself. I'm planning on running the tests below on the full 7-disk array in RAID5, with NCQ, write-back caching and drive cache enabled:
    - Attobench - 32MB (within cache)
    - Attobench - 256MB (within cache)
    - Attobench - 512MB (on the cache limit)
    - Attobench - 2GB (defeats cache)
    - HDtach - read & write, burst speed
    - HDtune - various tabs, will run them all
    - IOmeter - workstation, fileserver and web server patterns
    - IOmeter - self-developed pattern
    - DD - zero to disk, disk to zero
    - CrystalDiskMark - 100MB
    - CrystalDiskMark - 1000MB
    Anything else?
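  The "DD - zero to disk, disk to zero" entry translates to something like the following. The target path and size are examples; for a real run, use a file several times larger than both the controller cache and your RAM so the OS page cache doesn't flatter the read result:

```shell
# Sequential write test: stream zeros to a scratch file and flush to disk.
TARGET=/tmp/dd_bench.bin
dd if=/dev/zero of="$TARGET" bs=1M count=256 conv=fdatasync
SIZE=$(stat -c %s "$TARGET")   # bytes actually written

# Sequential read test: stream the file back into the void.
dd if="$TARGET" of=/dev/null bs=1M

rm -f "$TARGET"
echo "wrote and read back ${SIZE} bytes"
```

  dd prints the elapsed time and throughput on stderr when each transfer finishes.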
  25. Cool article about the data recovery from a disk that spent 6 months underwater! Sadly, my comment about the effect of water on a disk was taken a bit too seriously. I meant that as a test the Mythbusters could try, to see what the effect on the disk would be. Of course a running disk would short its electronics, but the platters inside should be relatively safe, and thus the data would remain obtainable. I repeat my earlier statement: if you wish to have it 100% destroyed, slam the disk with a hammer till the top comes loose, throw it into a fire and melt the magnetic platters inside. Your data is gone, for good.

If you do not want to physically destroy your drive and want to re-use it, DBAN is your best choice. All the recovery cases Ontrack talks about involved physically damaged disks, not erased, or rather securely (DBAN) erased, disks. So recovering from that will most likely be hard indeed. That is what I also believe, because of the process it uses. There is also a pretty decent review over here which explains a bit more: the standards you can use, which US military standards they are based on, etc. Very decent stuff. Personally I always use "autonuke"; I believe that writes a pass of 0's, then a pass of 1's and then a pass of 0's again. DBAN also does not look at partitions or anything else on your disks, but uses lower-level commands to go from sector 0 to the end.

Found another interesting thread. Basically they say a single-pass wipe should be more than enough; "autonuke" is a 3-pass process, more than enough for almost any use case. And if you are still not sure about it, you could use the 35-pass wipe method, which even its author considers complete overkill...
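For a quick DBAN-style multi-pass overwrite from a normal Linux system, GNU shred is a reasonable stand-in (its pass pattern differs from autonuke's). Demonstrated on a scratch file here; pointing it at a whole device such as /dev/sdX wipes that disk irreversibly:

```shell
# Create some sample "sensitive" data.
TARGET=/tmp/secret.bin
dd if=/dev/urandom of="$TARGET" bs=1M count=8

# Overwrite it with 3 random passes, then a final pass of zeros (-z).
shred -v -n 3 -z "$TARGET"

# After the final zero pass the file contains only zero bytes.
NONZERO=$(tr -d '\0' < "$TARGET" | wc -c)
echo "non-zero bytes remaining: ${NONZERO}"
rm -f "$TARGET"
```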