Everything posted by dhanson865

  1. Yes, the M225 is a Barefoot drive. The Vertex 120GB is about $2.60/GB; this is because the absolute price is higher and because the advertised capacity is lower. You could argue that the price per gig should be adjusted one way or another to account for differences in "overprovisioning", but I'm happy to just compare based on advertised capacity. Oh, and I wouldn't call that price or drive exceptional, just good enough to consider. There will be much movement in SSD prices this fall; maybe the M225 is just leading the changes to come by a few weeks.
  2. Drives I'd be willing to buy for daily use as a boot drive, priced as shown by PriceGrabber shipped to my house and divided by advertised capacity:
     Corsair Nova 32GB - ~$3.1/GB
     Intel X25-V 40GB - ~$3/GB
     Corsair Nova 64GB - ~$2.7/GB
     Crucial M225 64GB - ~$2.8/GB
     Intel X25-M 80GB - ~$2.8/GB
     Crucial M225 128GB - ~$2.25/GB
     Corsair Nova 128GB - ~$2.6/GB
     Crucial C300 128GB - ~$2.75/GB
     Intel X25-M 160GB - ~$2.7/GB
     Crucial C300 256GB - ~$2.55/GB
     The $650 C300 256GB drive is cheap per GB compared to many drives. The champ right now is the $290 M225 128GB at $2.25/GB.
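     If you want to redo the math as prices move, the arithmetic is just street price divided by advertised capacity; here's a minimal sketch using the two prices quoted above (any other price you plug in is your own assumption):

         # Price per advertised GB; only the two prices stated above are used.
         drives = [
             ("Crucial M225 128GB", 290, 128),
             ("Crucial C300 256GB", 650, 256),
         ]
         for name, price, gb in drives:
             print(f"{name}: ${price / gb:.2f}/GB")  # -> $2.27/GB and $2.54/GB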
  3. Caviar Blue out of warranty? They have a 3 year warranty if bought after August 1st, 2007, and they didn't start using the Blue name until 2008. So if it says Blue on the drive and it wasn't a refurb, you should still have some warranty left. To be sure, check http://websupport.wdc.com/warranty/serialinput.asp?aspsid=697491287&custtype=end&requesttype=warranty&lang=en and give a serial number. Of course, since I can get a 500GB WD Blue shipped to my door for under $50, I'm not sure how much trouble I'd spend on replacing an old drive if they gave me any hassle about it. I could see someone paying for new equipment and not worrying about the value of a refurb drive.
  4. Kingston SNV425-S2 SSD Review

     It's a disservice to the readers to show the Toshiba controller and not discuss the JMicron heritage. At the least, if you want to leave a neutral spin on it and not make a negative statement, you should call it a Toshiba/JMicron controller and let the reader google why the second name is attached. It'd also be nice to clearly specify how much DRAM is on the board as cache, since the Toshiba/JMicron controller has almost no cache internally and the Indilinx Barefoot drives that compete with this always advertise the cache amount. http://www.legitreviews.com/article/1237/2/ says it is 64MB.
  5. SSDs have physical advantages over tape and rotating disks for backup. My company currently backs up to LTO3 and has significant shoe-shining occurring. I'll be switching to LTO4 very soon and will be testing turning off compression to reduce shoe-shining. We also have an accounting employee carry the tapes to the bank for secure off-site storage. I wouldn't trust hard drives to survive the beating they'd probably take going in and out of the lock box at the bank. SSDs would also weigh less than LTO or hard drives (significant when the accounting employee complains about the weight of a locked briefcase for transporting backup media). SSDs are random-write, so they don't shoe-shine like LTO. SSDs are practically immune to shock damage, so they aren't vulnerable like hard drives. It'd take a big drop in cost that hasn't happened yet, but if I could get SSDs at even twice the cost per GB of LTO4 media it'd be significant, considering a Quantum Superloader 3 is about $4000 and eSATA cradles are less than 1% of that cost. Even versus a standalone (non-autoloader) LTO drive, the comparison is still several thousand dollars versus almost nothing for a cradle or enclosure. For really small businesses, backing up to SATA SSD (by way of an eSATA cradle or USB 3.0 SuperSpeed enclosure) will be cost effective well before the per-GB cost of the raw media equalizes in price.
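     To put a number on that hardware-cost argument, here's a rough break-even sketch. The $4000 loader and ~1% cradle figures are from above; the media $/GB values are placeholder assumptions purely for illustration:

         # Break-even: below this much retained backup data, the SSD route
         # (cheap cradle, pricier media) beats LTO4 (pricey loader, cheap media).
         loader_cost = 4000.0   # Quantum Superloader 3, per the post
         cradle_cost = 40.0     # ~1% of the loader cost for an eSATA cradle
         lto4_per_gb = 0.05     # assumed LTO4 media price per GB
         ssd_per_gb = 0.10      # "even twice the cost per GB of LTO4"
         breakeven_gb = (loader_cost - cradle_cost) / (ssd_per_gb - lto4_per_gb)
         print(f"SSDs win below ~{breakeven_gb:,.0f} GB of media")  # ~79,200 GB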
  6. Assuming you have a 2nd PC nearby and these are SATA drives, I'd:
     0. Back up the data from that RAID array.
     1. Power down the machine in question.
     2. Remove one of the drives.
     3. Attach that drive to another PC / non-RAID drive controller.
     4. Check SMART data with the OS/app of your choice (see the sketch below).
     5. Pursue an RMA for warranty replacement if a warranty is still in place.
     6. Replace with another drive (of a newer model, newer manufacturing date, or a different manufacturer).
     7. Wait for the RAID to synch again.
     8. Remove the other drive and repeat steps 3 through 6.
     Now it is possible that you'll test these drives outside of that PC/controller combination and find no errors, but I'm assuming that if you test you'll find enough data to lead you to replace both drives. If these aren't SATA, I'd contact someone's tech support or warranty department if support is available, or I'd just break down and buy new hardware if it is all out of warranty.
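     For step 4, a minimal sketch, assuming smartmontools is installed and the suspect drive shows up as /dev/sdb (adjust the device path for your system):

         # Pull the SMART attributes most often tied to a failing drive.
         import subprocess

         out = subprocess.run(["smartctl", "-A", "/dev/sdb"],
                              capture_output=True, text=True).stdout
         for line in out.splitlines():
             # Nonzero raw values here are a good reason to pursue the RMA.
             if any(a in line for a in ("Reallocated_Sector_Ct",
                                        "Current_Pending_Sector",
                                        "Offline_Uncorrectable")):
                 print(line)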
  7. SSD Pricing May Not Fall Until 2012

     Please, if you can give me Intel 80GB G2 speed/reliability I'm willing to pay $2/GB. Right now that drive is still ~$2.70/GB. In fact, here is a rundown of drives I'd like to buy at $2/GB and their current price, rounded to make typing the list easier:
     Corsair Nova 32GB - ~$3.1/GB
     Intel X25-V 40GB - ~$3/GB
     Corsair Nova 64GB - ~$2.7/GB
     Crucial M225 64GB - ~$2.8/GB
     Intel X25-M 80GB - ~$2.75/GB
     Corsair Nova 128GB - ~$2.6/GB
     Crucial M225 128GB - ~$2.7/GB
     Crucial C300 128GB - ~$2.9/GB
     Intel X25-M 160GB - ~$2.6/GB
     Crucial C300 256GB - ~$2.6/GB
     Try and find any of those drives anywhere near $2 x the GB on the label. Feel free to send me one of each at $2 a GB; I'm sure I'll be able to test them all, keep my favorite, and still sell the rest for a noticeable profit (assuming no DOAs). As to other drives not listed: I don't care if a Toshiba/JMicron or Samsung controller based drive gets down to $1/GB, I won't use drives with a crappy controller. I'm assuming the SandForce drives will always be as expensive as or carry a premium over the Intel/Marvell based drives, and until they get price competitive with the Indilinx Barefoot based drives I'll ignore them.
  8. RAID edition? Worth it?

     Oh, and when a heavily used RAID 5 array drops 3 drives at once, you realize how much stress was saved by having recent, viable backups. The only way it could have been any better is if I'd known the drives were bad two weeks earlier and replaced them proactively.
  9. RAID edition? Worth it?

     TLER definitely affects quality HW RAID. How about softraid built into a motherboard? Kojote, do you see SMART errors if you check the dropped drive? Or are you saying perfectly good single-use drives don't play nice with your Intel softraid? How about OS-level RAID? Fully software based RAID? In addition to those three types, will the RAID level matter? Is TLER more important in RAID 0/5/6 but less important in RAID 1/10?

     I have a couple of low end Dell servers doing RAID 1 with some WD Black WD6401AALS drives. One of the servers has a CERC SATA 6-channel RAID controller doing RAID 1; the other is doing OS-level software RAID 1. Both are Server 2003 Standard 32-bit. I suppose it is a bit early in their life to say anything significant, but so far so good.

     FWIW, the server with the CERC SATA 6 had four drives in a RAID 5 array and, after a power failure, decided to drop 3 of the drives. The old drives were Maxtor MaXLine Plus II 250GB from 2004. Moving from 700GB usable space on the old array to just under 1.2TB usable space split across two RAID 1 arrays gave me an increase in space and, I'm hoping, reliability. I figure worst case, if the controller flakes out and breaks the mirrors, I can check for drive damage and, if none is present, switch to software-only RAID 1. So far I haven't had a single issue and I'm hoping it stays that way.
  10. Anybody with an opinion on eSATA enclosures versus the Dell PowerVault DAS, or will I just have to learn the hard way by buying thousands of dollars worth of equipment and trying it myself? How about experience using SATA drives in a hot-swap bay in a PowerEdge 2900? Anyone with a preferred vendor other than Dell to buy the tray from? I'm looking for any info to help with purchasing decisions, in an attempt to improve reliability (reduce downtime) by increasing performance, without breaking the bank.
  11. Any Server 2003/2008 admins in the house to talk direct attached storage (DAS)? http://www.amug.org/amug-web/html/amug/rev...es/firmtek/5pm/ can hold 5 SATA drives. I'm thinking 150GB VelociRaptors (10,000 RPM) for a Microsoft Exchange server, but if the setup is reliable enough maybe I buy more units and fill some up with Green Power 1TB drives for a file server where performance isn't as important. If reliability becomes an issue, the VelociRaptors could be repurposed for PC rebuilds down the road, but I really don't want to go there. Assuming it could fill the role, I could use 2 disks in RAID 1 and 2 more disks in RAID 1 for the Exchange database and log files. Over time a second enclosure could be bought and then more RAID 1 pairs added, or even RAID 10 sets of 4 drives. RAID pairs could even be made between drives in separate enclosures for PSU redundancy.

     Pluses here: any server or desktop PC could use this enclosure so long as it has an eSATA port (PCIe controllers are cheap/common enough that most any PC could use one), and drives could be bought through normal retail channels at lower cost as technologies improve. Not sure if I could put SSDs in this down the road, but I'd hope that would be an option.

     The alternative to eSATA enclosures would be something like the Dell PowerVault MD1000: http://www.dell.com/content/products/produ...;l=en&s=bsd If I got the PowerVault MD1000 for the Exchange server I'd be buying it with 73GB or 146GB 15,000RPM SAS drives, with the intention of putting the mail database, log files, and SMTP queues on the direct attached storage. RAID 1 pairs for each function early on, with the possibility of RAID 10 sets later for the more demanding functions if needed. The plus side for this option is I know it'll stand up to the load. The downsides are:
     Higher cost of a SAS enclosure versus an eSATA enclosure
     Higher cost for PERC RAID controllers versus eSATA RAID controllers
     Higher cost per hard drive for SAS versus SATA

     I always read about using SATA in a SAS enclosure, but with a Dell enclosure you don't get the drive sleds unless you buy a drive to go in it. I suppose I could look for a source for trays/sleds so that I could put VelociRaptors or SSDs in it, but I'm not sure if I can mix SAS and SATA trays in the enclosure. I don't know if I could ever put SSDs in the MD1000, but again I'd hope that would be an option down the road when SLC SSDs get below the cost of 15,000 RPM SAS drives.

     Now some flavor/spin on where I'm coming from. It's a small company with about 50-75 users on the Exchange server. Reliability is more important than speed. Speed is more important than space. The Exchange server is currently using 4 hard drives (two RAID 1 arrays); the information store is under 20GB, and log files, queues, etc. only need about 20GB. If speed weren't an issue the whole mail server could run on a single 73GB drive. Assuming the cheaper option is just as reliable it would be preferred, but if anybody has a horror story about eSATA enclosures with Server 2003 or Server 2008 I'm willing to listen. If you think there is a better route to go than either of these that still stays in the under-$5000 range, let me know. I probably have plenty of time to do this. It's not something I'm going to do next week. I'm just trying to plan ahead.
  12. FWIW, http://technet.microsoft.com/en-us/library/bb124123.aspx suggests 10 drives (5 RAID 1 pairs, if you ignore the RAID 10 recommendations) or even 16 drives (2 RAID 1 pairs plus 3 RAID 10 sets of 4 drives) to fully optimize the storage on an Exchange server. Of course that is from 2006, which doesn't address the improvements in modern hard drives, not to mention the possibility of SSDs in this role some day down the road. 16 drives is probably way beyond what I need, but on the opposite end of the spectrum 4 drives isn't enough. The fun thing about servers is that until you try it you won't know exactly what you need. You can only make a good estimate. If I get the drives all at once I might try phasing them in 2 at a time just to quantify where the price-to-performance sweet spot is.
  13. It's not uncommon to see the current disk queue spike from 0 to several hundred. Looking at it closer, the first RAID array is usually at low numbers while the second RAID array is entirely responsible for the spike. Until I split the mail functions out to separate drives/arrays I can't easily say which function causes which spike, but they happen often enough that I want more drives to separate those functions. It's also a best practice on Exchange to keep the log files on different physical disks than the information store, so adding more drives kills 3 or more birds with one stone.
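     If you want to log those spikes rather than eyeball Performance Monitor, here's a minimal sketch, assuming pywin32 is installed on the Windows server (the _Total instance is a placeholder; substitute each array's instance name to see which one spikes):

         # Sample the Current Disk Queue Length counter once a second.
         import time
         import win32pdh

         query = win32pdh.OpenQuery()
         path = win32pdh.MakeCounterPath(
             (None, "PhysicalDisk", "_Total", None, -1,
              "Current Disk Queue Length"))
         counter = win32pdh.AddCounter(query, path)
         for _ in range(60):                      # one minute of samples
             win32pdh.CollectQueryData(query)
             _, value = win32pdh.GetFormattedCounterValue(
                 counter, win32pdh.PDH_FMT_LONG)
             print(value)
             time.sleep(1)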
  14. Current drives are 10,000 RPM SCSI drives in hot-swap trays/sleds:
     1 Fujitsu MAW3073NC
     3 Seagate ST373207LC
     The server is a Dell PowerEdge 2600. Disk queues are OK at idle times, but when the spam filter updates and starts scanning messages, or the server reboots, or a backup is going, the current disk queue spikes. Two of the employees have multi-GB mailboxes and it isn't uncommon to see the "connecting to server" balloon pop up when trying to access one of those obscenely large mailboxes.

     The current drives are software RAID 1. If the server locks up and has to be turned off to reboot, the drives get out of synch, and resynching takes a couple of hours. I'd like to get away from that issue, or at least make sure any synch issue can be resolved in considerably less time. So long as the drives are out of synch the Exchange services won't start and mail is completely unavailable (no send/receive for cached mode, and no access at all for non-cached mode). In general it is barely keeping up with the worst-case load so long as it is in synch, but it doesn't recover from a reboot or update as quickly as is needed to reduce downtime.

     It's also likely that the server will be repurposed at some point, and having the disks external seems like it would help with that and allow for faster recovery should a motherboard or some other critical component fail. We have no redundancy for the mail server. It might be nice to have a cold server sitting idle, ready to step in and take over the mail role. Eventually a PowerEdge 2900 will likely replace that box; it currently has:
     Fujitsu MBA3147RC
     Fujitsu MBA3147RC
     Maxtor Atlas10K5_300SAS
     Seagate ST3300555SS
     Again, these are hot-swap tray/sled mounted. The larger drives aren't needed for the Exchange situation, so it'd be better if I could move the 300GB drives from the newer server to the older server. If not, I could just add enough drives to leave those as spares and use the 300GB drives for a file share unrelated to the mail role. Assuming they aren't compatible, I'd be hesitant to buy 15K 73GB drives for the older server and then not be able to move them to the newer server when it takes on that role. I can't afford to buy drives all over the place; I need my investments to last and/or be more flexible than that. The question is: since I don't have trays/sleds for the blank slots on either server, how much will it cost to get my choice of drives instead of buying them from Dell? What compatibility gotchas am I really dealing with?
  15. The Best OCZ Core SATA SSD information

     "No, I don't have to prove it. I'm talking about this possibility as the justification for the irregular writing pattern from the 16GB onwards. The burden of proof is always on the one that claims that something is impossible."

     No, the burden of proof is on the person that makes a claim. In this case we both did, so we both have the burden of proof on our shoulders, and one of us will be proven wrong. If you don't want to contribute to the process of finding out which one of us that is, then stop posting. Eventually the answer will be known; whoever is right can say "I told you so" and the person that is wrong can admit they were wrong. It's not the most complex process in the world, but so far I haven't seen a single URL or bit of evidence to back up your claim. BTW, there was a reason I quoted the paragraph below:

     "You seem to be paying attention to irregularities in benchmarking and assuming it implies a variation in hardware inside the drive. It is entirely possible that the drive is all one type of flash and there is still variation in benchmark results."

     I still haven't seen a specific rebuttal on:
     * Apacer showing SLC + MLC in specs but OCZ not.
     * The possibility that chipset/OS combinations affect SSD performance.
     * The possibility that benchmark apps could give misleading results.
     And to this point I haven't been able to find a single article on the web that supports your supposition that the OCZ Core series uses SLC + MLC. I've read dozens of reviews, news articles, and specs, and hundreds of comments on other forums. You are the only one I see making this claim. Prove me wrong and I'll thank you for doing so.
  16. The Best OCZ Core SATA SSD information

     "Care to elaborate and prove on what I'm wrong"

     Care to prove that you are right? I mean, show me a statement from a reliable source that there is a single SSD on the market that has SLC and MLC mixed, just to at least give a baseline for the reasonability of your claim. Wait, I'll do it for you. Apacer makes a combo drive that is 96GB because it has 32GB of SLC and 64GB of MLC. But then they market that information, and besides, the 96GB gives it away as an odd size for an SSD. http://usa.apacer.com/us/news/News_05_28_2008_162.htm Notice how the specs mention a different read/write rate per type of flash. The OCZ Core series comes in expected drive sizes. It doesn't fit the mold for an SLC + MLC product. So let's revise that request: show me a link from a reliable source saying that the OCZ is using multiple flash chip types and what amount of flash is SLC versus MLC in the 32GB product, or for 64GB, or even for 128GB.
  17. The Best OCZ Core SATA SSD information

    I think you are bonkers and I think it's all MLC. Take a peek at http://www.ocztechnology.com/ssd/OCZ_Core_...es_SSD_SPEC.pdf
  18. New HD's added to database

     At 3mm the noise is measurably different, but at several feet that difference would be much less noticeable. A proper noise test should be at least 1 foot away, but 1 meter is a more common measuring distance. Unfortunately, doing so requires a quieter environment and a more expensive sound meter than some people are willing to provide. Often a tech review site will buy the first SPL meter they find under $100 and call it good. Most of those units have a noise floor in the 35 to 40 decibel range. With a little more effort you could find a better SPL meter and have more respectable numbers in the review. These are some products I googled up one night:
     Min dBA | Price | Brand/Model
     35      | $25   | SCOSCHE SPL1000
     30      | $100  | AR824 Multi-Range Sound Level Meter (unknown brand)
     30      | $100  | Nady DSM-1 Digital SPL Meter
     26      | $275  | Extech 407738 Sound Level Meter with Memory
     Since Storage Review shows a WD Green Power drive at 35.9 dBA, I'm going to guess they have an SPL meter that won't read below 35 dBA reliably. As a comparison, SPCR tested a Green Power drive from 1 meter and got 19 to 21 dBA. That requires a much quieter room and a much more expensive sound meter, but it also gives results that aren't skewed by having a mic 3mm away from the drive. I hope that makes sense. I don't mean any disrespect to Storage Review, but I thought you and the other readers here should know...
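     To see how much the measuring distance alone matters, here's a minimal sketch using the free-field point-source approximation (SPL falls about 20*log10(d2/d1) dB with distance; the 70 dBA reading is a made-up example, and 3mm is really near-field, so treat this as rough):

         # SPL at a new distance under the inverse-square (free-field) model.
         from math import log10

         def spl_at(spl_ref_db, d_ref_m, d_m):
             return spl_ref_db - 20 * log10(d_m / d_ref_m)

         # A hypothetical 70 dBA reading at 3mm extrapolates to about 19.5 dBA
         # at 1 meter, right in SPCR's 19-21 dBA range for the Green Power.
         print(f"{spl_at(70, 0.003, 1.0):.1f} dBA at 1 m")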
  19. New OCZ SSD

     http://img519.imageshack.us/my.php?image=o...hdtachqunm1.jpg and http://img519.imageshack.us/my.php?image=o...hdtachfuzd3.jpg from a thread on SPCR should give you a more accurate picture of the drive's speed than that so-called preview the guy did with the OS on the drive.
  20. New OCZ SSD

     "This is very interesting: Note how write performance starts to be very irregular at 25% of the drive. I wonder if the drive has both SLC and MLC. So, this would make it much closer to SLC-only longevity, performance and reliability, being just a bit more expensive than MLC-only."

     No, it's all one type of flash. Those read and write tests are totally invalid, as he was doing them with the OS and apps running from that drive. There is no telling what other activity was occurring during those tests that could have skewed the numbers. Wait for a proper test with no data on the drive.
     "At best it might 'know' the blocks that have never been written to. But once a block has been written it has no way of knowing whether it has since been freed - or not."

     If it can't tell when a block is empty after the first write, then wear leveling wouldn't work past the first day, once I fill the drive up with test writes from HDTune/HDTach and the like. Surely you don't think wear leveling is that worthless?
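     For what it's worth, here's a toy sketch of how dynamic wear leveling can keep working without any filesystem-level "free" hint: the controller reclaims a physical block whenever its logical address is overwritten. This illustrates the general technique, not any specific drive's firmware:

         # Toy flash translation layer: wear spreads even when the drive
         # never learns which logical blocks the filesystem considers free.
         class ToyFTL:
             def __init__(self, num_physical):
                 self.erase_counts = [0] * num_physical
                 self.mapping = {}                     # logical -> physical
                 self.free = set(range(num_physical))  # unwritten or reclaimed

             def write(self, logical):
                 # pick the least-worn free physical block for the new data
                 target = min(self.free, key=lambda p: self.erase_counts[p])
                 self.free.remove(target)
                 if logical in self.mapping:
                     old = self.mapping[logical]       # stale copy: reclaim it
                     self.erase_counts[old] += 1
                     self.free.add(old)
                 self.mapping[logical] = target

         ftl = ToyFTL(8)
         for i in range(100):
             ftl.write(i % 4)         # hammer just 4 logical blocks
         print(ftl.erase_counts)      # erases spread over all 8 physical blocks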
  22. So here is my proposed test. Take a system and put an OS and benchmark apps on a 7200 RPM drive like the WD6400AAKS, on a partition equal to the size of the SSD (effectively short-stroking the HD). Defrag it. Clone the data (by way of making the RAID 1 arrays, or by Ghosting) to another WD6400AAKS and to two SSDs that can handle sustained writes at similar or slightly higher levels than the HD. Once the data is present, so you have a stable starting point, test these three configs:
     1. RAID 1 of the WD6400AAKS + WD6400AAKS
     2. RAID 1 of the WD6400AAKS + SSD
     3. RAID 1 of the SSD + SSD
     These would all be SATA drives and software RAID. I'll stipulate Windows XP SP2 or SP3 as the OS, though I'd accept an experience with Server 2003 as being very relevant. My question is: will config 2 have more of the benefits of both, more of the disadvantages of both, average the two out into a more well-balanced performance with some improvement all around, or just be some mongrel mixed bag that randomly performs unpredictably better and worse depending on factors that can't be controlled? As in, would the mixed config still be slow at random writes of small files like the all-SSD config is? Would the mixed config behave like the random access of the SSD, the HD, or in between? If in between, would it be consistent, and would the combination lean more towards one end or the other or a balanced average? I don't have the 4 drives in hand or I'd do the test myself and report back. If anybody knows of an article or review where someone has actually tried an experiment similar to this, I'd like to see the results. If anyone is willing to do the experiment, I'd like to see the results. If you want to tell the world what your opinion is about the way config 2 would work, go for it; just qualify your statements enough so that it is obvious whether you have or haven't tested a real-world config.
  23. RAID 1 SSD + 7200 RPM HD

     Well, since each drive is the slowest at something, I guess you are saying it would be a worst-of-both-cases scenario.
  24. "A Celeron is no P2."

     http://www.anandtech.com/showdoc.html?i=277 You might want to reconsider that statement. The name Celeron is only a marketing tool, after all. While it is true that a rectangle is not always a square, and a Celeron is not always a PII, it is true that the Celeron 300A is a PII core with less cache.