
Everything posted by Quindor

  1. Euhm... AHCI? No, sorry, no clue. Could you give us some information on what kind of enclosure this is and what kind of controller is inside (probably a JMicron or Silicon Image variant)? GPT should always be readable from Win2k8/Vista and up, no matter whether you are running 32-bit or 64-bit, I believe. Do make sure you are running the latest patches and service packs. Hmmm... what OS was the partition created on? It could also be some kind of driver problem. How do you connect the case, USB or eSATA?
  2. From what I can read on the web, the software included with the drives is a limited, older version of Acronis True Image. If Seagate cannot provide you with an updated version, your only choices may be to buy the full product from Acronis, giving you the newest version in which your bug (whatever it is; you do not tell us) is fixed, or to use an open source alternative. You also do not specify what you are trying to accomplish. Myself, I am quite content using Clonezilla instead of commercial products such as Acronis in most situations. Writing in all capitals is NOT appreciated, and using several exclamation marks is also a bad way to use language. Both will put people off from answering you, so you will receive less help. Please take this under advisement; a post such as the one above comes across as rude.
  3. Server OS 64bit?

    I think I answered this question, and why you are getting "slow" performance, in your other thread here (Why i get same trx rate between ramdsk/mech disk). A 32-bit vs. 64-bit OS (x86 vs. x64/x86-64) will do nothing for performance; disk performance especially will be almost if not exactly the same. The only reason 64-bit can perform better is that it can allocate its cache more effectively and the cache can be made (a lot) bigger, since you are not bound to a 4GB total memory limit. Be warned: on a 32-bit OS, your maximum application memory is 2GB by default. There is a special switch to extend this to 3GB, but it is mostly only used for Exchange servers. Also, all memory needs to be mapped into this 4GB address space. Say you have a system with 4GB of memory but a video card with 2GB of memory; this would effectively limit Windows to using little more than 2GB of RAM. In my opinion, 32-bit/x86 is a thing of the past and has been for years. The only reason Windows 7 came out in a 32-bit/x86 edition is Intel, who would not put x64 extensions in their Atom processors. Windows 7 will be the last 32-bit version; Windows 2008 R2, which has since been released, is 64-bit only, as will be everything Microsoft releases from this point on (operating-system wise; software will follow in the years to come). So be warned if you are still on 32-bit: you should have switched a few years back (in my opinion), but if you did not, you will be forced to in the coming years. Better to do it now, at your own pace, than to get stuck in the future. At home this is less urgent, I think, but for a business it can seriously impact plans. But before this turns into a personal rant: you are correct in saying that 32-bit or 64-bit Windows 2003 will not have any impact whatsoever on file transfer performance over the network. Upgrade the servers to Windows 2008 R2 x64 and the clients to Windows 7 x64 and you will get maximum wirespeed network throughput using SMB2, granted you have no other bottlenecks of course.
  4. Several things are probably causing your "strange" results; in my eyes, they are absolutely correct. First, for testing network bandwidth, use something like netio or iperf/jperf; that tests the network itself, not other devices. For testing transfer rate, CIFS is not the right way to go; FTP has fewer limits. If you are testing real-world application performance using SMB1 (XP/2003), about 65MB/sec is the maximum you will achieve with a single stream. Even with 8x SSD in RAID0 this will not change; it is a protocol limitation. With Windows 2008/Windows Vista this should be a bit better, say 80MB/sec, and with Windows 2008 R2 x64 and Windows 7 clients you will see 100MB/sec+ transfer rates over SMB2. There is no way to achieve that using 2003/XP-level software; the protocol is just too old and too inefficient. A single 1TB disk should deliver about 100MB/sec sustained at the beginning of the disk, dropping to around 65MB/sec towards the end; this has to do with how mechanical disks are constructed. So with your current OS software, using CIFS/SMB, you will never get higher transfer rates; this is protocol bound. Multiple sessions should give you slightly higher performance, but not by much. I would recommend Windows 2008 R2 x64 and Windows 7 Pro x64. I use both (Win2k8 R2 x64 running under VMware ESX 4.0) and I easily get about 110MB/sec reading files from the Windows 7 client. Of course, it all depends on what you want to test. If you wish to test theoretical network bandwidth, bottlenecks, etc., use the tools noted above. If you wish to test pure data transfer rate, use Microsoft IIS FTP and a good FTP client. If your goal is file sharing, upgrade your OSes. Last thing: how do you measure? Best is to use Performance Monitor, but something like NetMeter (free, get the beta) or DU Meter is very handy while testing. Hopefully this explains the problems a bit. If you have any more questions, or can tell us what you are looking for, let us know!
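For raw network bandwidth the dedicated tools above are the right answer, but the idea behind them can be sketched in a few lines of Python. This is a minimal illustration (the port and payload size are arbitrary assumptions), not a replacement for netio or iperf:

```python
import socket
import time

CHUNK = 64 * 1024          # 64 KiB per send
TOTAL = 32 * 1024 * 1024   # 32 MiB test payload

def serve(port=5001):
    """Sink side: accept one connection and discard everything sent."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):  # recv() returns b"" when the peer closes
                pass

def measure(host, port=5001):
    """Push TOTAL bytes to the sink and return the rate in MB/s."""
    payload = b"\x00" * CHUNK
    start = time.monotonic()
    with socket.create_connection((host, port)) as s:
        sent = 0
        while sent < TOTAL:
            s.sendall(payload)
            sent += CHUNK
    return TOTAL / (time.monotonic() - start) / 1e6
```

Run serve() on the receiving machine and measure() on the sender; the real tools add multiple streams, bidirectional tests, UDP and more, which is why they are preferred.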
  5. Yup, RAID5 would definitely be suitable for your situation, I would think. RAID5EE with hot space is also suited, but in practice, for SATA, it's better to use RAID6, costing you just as many disks but giving you better protection overall. With RAID5(EE) you will still need to rebuild the array after a failure, and at that point the array becomes more vulnerable with every disk you add, because of the statistical MTBF (Mean Time Between Failures) rate. Add too many disks and at some point you pass the point where, statistically, you can expect a failure during every rebuild. RAID5EE does not fix that; RAID6 does. There are several articles on the internet about it; the general consensus is to keep RAID5 with SATA to about 6 disks maximum. I myself run a 7-disk RAID5, but that is a conscious choice, so just be warned. I have other disks to transfer important information to in case of a failure.
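The rebuild statistics can be made concrete. A minimal sketch, assuming the commonly quoted unrecoverable read error (URE) rate of 1 in 10^14 bits for consumer SATA drives and 2TB disks (both numbers are illustrative assumptions, not measurements):

```python
def rebuild_failure_probability(disks, disk_tb, ure_rate=1e14):
    """Chance of hitting at least one unrecoverable read error while
    rebuilding a RAID5 array: every surviving disk must be read end
    to end, so the exposure grows with array size."""
    bits_read = (disks - 1) * disk_tb * 1e12 * 8  # bits on surviving disks
    p_all_ok = (1 - 1 / ure_rate) ** bits_read    # every single bit reads cleanly
    return 1 - p_all_ok

for n in (4, 6, 8, 12):
    p = rebuild_failure_probability(n, 2)
    print(f"{n} x 2TB RAID5: {p:.0%} chance of a URE during rebuild")
```

With these (pessimistic) assumptions the risk climbs from roughly a third at 4 disks to well over three quarters at 12, which matches the "about 6 disks max" consensus; RAID6 survives exactly this case because a read error during rebuild is covered by the second parity.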
  6. How odd; we have the same controller and the same disks, but it's not working on your end. No problem with the big posts (don't post too many though); all information is welcome! Sometimes I notice that I have to wait a few minutes after booting before starting smartctl, otherwise it will just hang and not return anything. Not sure what causes that yet. What OS and driver are you using? For me it only worked using CentOS with the default or Adaptec driver. In my case the disks arrived with SMART disabled and Adaptec would report problems; after enabling SMART once, the disks started returning SMART data and the Adaptec problems disappeared. Before that it could not read any SMART data at all, and the setting stayed after reboots. Maybe give the enable-SMART command again, but this time with -d SAT? I can imagine something went wrong if it thought it was a SAS disk. Just read that you did that already. Also, take a look at this: http://linux.adaptec.com/2009/07/24/using-smartmontools-538-with-series-255z-controllers-with-firmware-17380-onwards/ Check your firmware and driver version! Hopefully we can find what your problem is!
  7. Try using the -d SAT option to force it into SATA mode. I need to do this too, because otherwise it will try to use SAS, which the disks do not support. For me the situation with SMART and StorMan was actually reversed: when I initially got the disks, they reported SMART errors in the BIOS and also in SmartMon. After enabling SMART using smartctl, those errors went away and stayed away, even through power-downs, etc. For you this seems to be the other way around? Quite odd.
  8. As with everything, it depends on what you want to do with it. I don't know the 4805, but I own a 3805 and a 5805 myself; the 4805 is a slimmed-down version of the 5805, I believe. For 5 to 6 drives I'd say RAID5 will fit your needs just fine. It will cost you one disk in total for parity, but in return it will keep your data safe if a disk should fail. Beyond 5 or 6 disks I would start to recommend at least a hot spare, but rather RAID6 if possible, because otherwise the chance of another disk failing during a rebuild becomes too great. Then there is the choice of which type of disks to use. WD and Samsung can both work on RAID controllers, but with any non-RAID-edition disk it's always a bit touch and go whether it will work or not; certain types will and certain types will not play nice with your RAID controller. Search around the web a bit to find other people using a similar setup. The WD disks often need some settings changed (not possible on all models); the Samsungs need this too, but in a different way. The last thing to consider is whether you want ECO (5400RPM) drives or regular (7200RPM) drives. ECO drives have the advantage of less power, less heat and generally less noise, but if you also want to run programs from the array, or use it as a boot drive or anything of that sort, they will be noticeably slower than non-ECO drives. For streaming purposes this does not matter much, so for streaming files for backups, DVD/ISO images, video files, etc. they are perfectly fine and don't differ much from their faster non-ECO counterparts. Running some games off it should not be too big a problem either; just realize they will be a bit slower than regular drives when seeking is involved. Answer those questions and you'll be on your way a little more, I would think. Report back here and I am sure I or other members can answer any other questions you might have.
  9. I found a Unicorn!

    Awesome! The more we can store in the size of a pinky nail the better. Imagine how much a 5.25" bay would hold with these suckers. Holy diskspace, Batman! Seriously, we should calculate it!
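Taking that invitation literally: a rough back-of-the-envelope sketch, assuming a microSD card of about 15 x 11 x 1 mm, a 5.25" bay of roughly 146 x 42 x 200 mm, and 32GB per card (all of these figures are approximations for illustration):

```python
from math import prod

def cards_per_bay(bay_mm=(146, 42, 200), card_mm=(15, 11, 1)):
    """Naive axis-aligned packing of microSD cards into a 5.25" bay,
    ignoring connectors, controllers, wiring and cooling."""
    return prod(b // c for b, c in zip(bay_mm, card_mm))

cards = cards_per_bay()
tb = cards * 32 / 1024  # at an assumed 32GB per card
print(f"{cards} cards, about {tb:.0f} TB per 5.25in bay")
```

Even allowing generous dead space for wiring, that is an absurd density compared to the hard drives of the day, which is exactly the point.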
  10. 600GB VelociRaptor RAID vs SSD

    Not really sure what you are quoting, but with RAID1 they might also have meant read instead of write, since an intelligent RAID1 can read at close to double the speed (I/O-wise) of a single disk, but can only write at the speed of a single disk.
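The read-vs-write asymmetry is easy to model. A toy sketch (the numbers are arbitrary), assuming an intelligent controller that spreads the read queue across both mirror members while every write must hit both:

```python
def raid1_iops(single_disk_iops, op):
    """Toy two-member RAID1 model: reads are split across the mirrors,
    writes are duplicated to both members and gain nothing."""
    members = 2
    if op == "read":
        return single_disk_iops * members  # each member serves half the queue
    if op == "write":
        return single_disk_iops            # both members do every write
    raise ValueError(op)

print(raid1_iops(100, "read"), raid1_iops(100, "write"))
```

In practice read scaling lands somewhere below 2x because of seek placement and controller behavior, which is why the post says "close to double."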
  11. Sadly, I know of no way to test this reproducibly. Those values you found definitely have something to do with how your controller handles a disk that is not responding. Personally, I would keep those controller values at their default of 10 seconds and set your disks to 7 or 8 seconds, to give each device enough time to respond to the other. Interesting stuff! If you are unable to configure the drives, it might be useful to set the controller value very high, to prevent a disk that hangs for 20 seconds but then succeeds from being kicked out of the array. Not really what we want (we want the intelligent situation where the controller determines what to do), but better than nothing!
  12. That the value does not stick after a reboot is, sadly, a known fact. Please give me some more information or a smartctl printout of the drive so that I can put it in the topic's opening post for others to see! I suspect that some RAID/NAS/storage vendors actually use this command to configure the drives from their drivers/firmware. That is just speculation, but I know the creator of the smartctl patch has also worked on modified HighPoint drivers which would do it automatically, suggesting it is possible at that level. I also see no technical constraint why the card or drivers could not set this themselves; it's just a simple ATA-8 SMART command. Hopefully RAID card manufacturers will start picking it up and publishing about it in the future. It could also be that the vendors which certify the drives either (a) have a bad testing procedure and do not test how a drive behaves when bad sectors occur (with age), or (b) have a mechanism in place which simply waits for the drive indefinitely. Each RAID card vendor has its own specific set of rules (known or not). The 7 seconds is more of a default value I use for my Adaptec card; from what I have read it falls below the default of all other cards and thus keeps you safe. But there are definitely cards that wait longer than 7 or 8 seconds, and even some LSI cards might be configurable, as stated above.
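Because the setting is lost on reboot, a common workaround is to reapply it from a boot script. A minimal Python sketch using smartmontools' `smartctl -l scterc` (the device list and the 7.0-second timeout are assumptions for illustration; run as root and adjust for your own system):

```python
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb"]  # assumption: replace with your array members
ERC_TENTHS = 70                    # 7.0 seconds, expressed in tenths of a second

def set_scterc(device, tenths=ERC_TENTHS):
    """Apply SCT Error Recovery Control (read and write timeouts) to one
    drive via smartctl; returns True when smartctl reports success."""
    cmd = ["smartctl", "-l", f"scterc,{tenths},{tenths}", device]
    try:
        result = subprocess.run(cmd, capture_output=True, text=True)
    except FileNotFoundError:      # smartmontools not installed
        return False
    return result.returncode == 0

if __name__ == "__main__":
    for dev in DRIVES:
        print(dev, "ok" if set_scterc(dev) else "failed")
```

Hooking this into rc.local or an init script, or adding `-d sat` for drives behind a controller, follows the same pattern as the commands in the opening post.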
  13. Hey there, and welcome to the topic. If you check the following link, it states the commands needed to access the drives' SMART values through a HighPoint controller. As far as I can see it is only supported under Linux, and you will have to make sure you are running one of the newer SVN builds. The commands are in the opening post, but for your RAID controller you will need to alter them slightly, as stated in the link above. Let us know whether it works out or not! If not, we'll try to help.
  14. AMD vs. Intel

    Various, really... they're both good at what they do.

    AMD side: My server is a 21TB storage machine running VMware ESX 4.0u1 vSphere. It is a 790GX machine with a Phenom II X4 940 (3.0GHz) and 8GB of RAM. My HTPC is a simple triple-core AMD machine, 2GB memory, 780G chipset, with an added passive Nvidia 9400GT for VDPAU.

    Intel side: My desktop is a very small case (Silverstone SG-03) with a watercooled Intel Core i7-920 @ 4.2GHz, Asus Rampage II Gene, Nvidia GTX260 and 6GB memory. My laptop is a Compal IFL90 with a 2.2GHz Intel chip and 4GB memory; a T7500, I believe.

    So I believe both AMD and Intel have a reason to exist; both are good at what they do. Over the last years AMD has been cheaper, with good performance for the money. Intel, on the other hand, is a bit more expensive, but will win you every benchmark. But... things aren't always about benchmarks. My HTPC is more than powerful enough: 1080p is still very modern and my HTPC handles it well, using either software or hardware decoding. So why buy a more expensive Intel solution for that? My server is the same story. Even when doing 200MB/sec over the network from within a VM, the host only sits at around 30% CPU utilization. Running 5 or 6 VMs, you run out of memory or disk I/O way sooner than CPU. This is also using Windows 2008 R2; in the foreseeable future (say 2 years) nothing will really change, so the server is scaled to that and does not need to cost more than it does. Very, very happy with it. At the time I built it (a year ago) it was also the most power-efficient option. For my desktop, on the other hand, I hopped on the Core i7-920 bandwagon. My desktop is also used for gaming, and my plan is for it to survive one video card upgrade, preferably two. The poor 2.66GHz chip is overclocked in a case smaller than almost anything out there, running very well and 24-hour LinX or Prime95 stable. This gives me an insane amount of processing power, so much that nothing brought to market in the next 2 years will surpass it. Sure, there will be processors with more cores, but no fundamental change in architecture that would boost performance way beyond what it is now. Looking at that, it made sense to invest some more money, because it will also last longer. Also, for gaming, I believe Intel does hold an edge on the desktop. When prices drop, the GTX260 will be replaced with a GTX470 or GTX480, and a year after that with a newer version again. So both are very good at what they do, in my opinion.
  15. Very interesting! Was this scan on the same PERC 6/i, or using something like an Intel ICH? That could make the difference! Interesting; maybe the drivers are intelligent enough to send the command to the drive themselves? I know there is a HighPoint driver/firmware in the works which can do that.
  16. Great, keep us updated. Share the knowledge!
  17. Ah, well, there is your difference. The LSI will need the pass-through mode listed here: http://sourceforge.net/apps/trac/smartmontools/wiki/Supported_RAID-Controllers Without that, it is not going to work. The ICH10R in AHCI mode should do just fine.
  18. Hmm, I am a bit confused as to which drive you would like to use. Your topic title states the HD154UI (which is listed in my topic as working), but in the post above you mention the HD153WI. Okay, after some googling I learned that the HD153WI is actually the F3EG version of my HD154UI F2EG disks. Newer line, lower number; great logic! That specific disk has not been tested, though. I also own 7 x HD203WI which I use in RAID5 on an Adaptec 5805. They have been tested using the Adaptec controller and an ICH10R in AHCI mode, and with both they accept the SCTERC value after enabling SMART on the drives (when I received them it was turned off?). So it's 98% safe to assume the HD153WI will accept the commands just the same. The WD15EARS and WD1503FYYS have both not been tested, to my knowledge. Hopefully that helps your search a bit.
  19. When did Eugene and company sell out?

    Indeed. StorageReview was a wasteland for years and is finally rebuilding a bit, something I very much welcome! That it switched owners... well, if that is what was needed, so be it.
  20. Hmm, that is odd. I also have a set of Samsung HD203WIs (7, actually) and they all work perfectly fine setting SCTERC to 7 seconds. What kind of controller did you use to connect these drives? It should work; ours even use the same firmware, so there is no reason for it not to work! Probably a controller issue. Also, try to force-enable SMART on the drives. I had to do this by hand once, and after that the drives saved the setting and remembered it after a reboot (before that, my Adaptec controller would report SMART problems). I tested mine with an Intel ICH10R in AHCI mode and with an Adaptec 5805 using physical drive pass-through, under Fedora Linux. Then for the WDC: well, because the Samsung did not work, I fear/hope you are suffering from the same problem. WDC is aware of the SCTERC values and has disabled them in some of their drives. But, as you said, you managed to change it using WDTLER and after that were able to use smartctl on the drives, which makes the whole situation a bit confusing. Yes, the Hitachi Deskstar/Ultrastar 2TB drives are indeed very interesting; I have also read good things about them combined with RAID controllers. Hopefully someone with some of those drives will post here sooner or later!
  21. DD on Solaris with HDAT2 would seem like a good testing method. DD for Windows should be able to perform the same thing and is what I would recommend. With R-Studio, though, you used a tool meant for data recovery, and I am not sure it doesn't invalidate those results. Although it probably uses the Windows drivers, you never know with such tools what kind of thresholds they use to read data from the disk; the tool knows what a bad sector is and will try to read it vigorously, in my opinion. So I am not quite sure about that. But as said, DD would indeed be a good way of testing! When/if I find a disk with a bad sector, I'll give the DD method a try and see how it affects things. Thanks!
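The DD-style surface test can also be sketched in Python: read the target sequentially in large blocks and log any read that stalls, which is where a drive retrying a bad sector shows up (the block size and the one-second threshold are arbitrary choices):

```python
import time

BLOCK = 1024 * 1024   # 1 MiB reads, like dd bs=1M
SLOW_SECONDS = 1.0    # flag any single read slower than this

def scan(path):
    """Sequentially read `path`, returning (offset, seconds) for every
    block that exceeded SLOW_SECONDS; a rough stand-in for
    `dd if=... of=/dev/null` with per-block timing."""
    slow = []
    offset = 0
    with open(path, "rb", buffering=0) as f:
        while True:
            start = time.monotonic()
            chunk = f.read(BLOCK)
            elapsed = time.monotonic() - start
            if not chunk:
                break
            if elapsed > SLOW_SECONDS:
                slow.append((offset, elapsed))
            offset += len(chunk)
    return slow
```

On Linux, scan("/dev/sdb") run as root is the moral equivalent of the DD method; on a healthy drive the returned list is empty, while a drive fighting a bad sector produces entries right at the affected offsets.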
  22. Okay, good info; your information seems plausible. Right now, though, there is nothing in the topic on whether it will or will not work for the drives currently listed. But as this thread is about collecting information about SCT, this is very good input! Sadly, as you say, testing it would be quite hard. I'll monitor my drives, and if I develop a bad sector I will try to test it as you describe. For now, I am going to keep advising people to set these values on their drives, since if it works it's beneficial, and if it does not, no harm is done. For the current drives it is unknown whether it actually does something in reality, per your assumptions. I believe it does work, but as said, your information seems very plausible, and you have done tests, which I have not, because I have no drives with bad sectors. Would you mind sharing your exact testing procedure, so people who might have bad sectors can try it?
  23. I too would like to know where you found this information, or whether you have experience with it. All my research seems to suggest that these values work when set. It is also mandatory since the ATA-8 specification, so not actually honoring it would put the drives out of spec. WD is the only vendor I know of that has disabled it; or rather, they have disabled their WDTLER tool. I don't have the drives, so I cannot test with smartctl. Any information welcome!
  24. Luckily there are other websites, so we don't have to do everything ourselves. The very well respected (at least by me) AnandTech has done a review on exactly this. A while back the Intel X25-V 40GB was $99 on Newegg; a steal, especially with two of them! Here is the review: http://www.anandtech.com/show/3618/intel-x25v-in-raid0-faster-than-x25m-g2-for-250 . Conclusion: yes, accepting the risk (either of the 2 drives failing kills your whole volume, instead of 2 separate drives where a failure kills only half the data; your occasional HDD backup seems like a fine mitigation), it is a higher-performing solution than a single 80GB SSD.
  25. While it's true that it is disabled by default, since the disks conform to the ATA-8 spec you should be able to enable it using SMART (you need a controller that can do pass-through, or the special options of smartctl/smartmontools). Please also see this topic: And fill it with information about disks when you can! Personally, I feel a lot safer running with TLER/CCTL on than without: *if* there is a problem, I will at least know it was not caused by a drive dropping out over something that was not really an error.