Leaderboard


Popular Content

Showing most liked content since 10/12/13 in all areas

  1. 2 points
    Minimal, I'm not sure it consumed more than a couple hundred MHz during the time we had it going in the background. We've been working with the Nexenta team over the duration of that review and they are still working on it internally. Should hopefully find out more soon on that topic.
  2. 2 points
    For USB storage it means the device supports 16-byte commands (so these can be used instead of 10-byte commands, which are limited to 2 TB). Of course, you also need an OS which supports 16-byte commands itself, but it doesn't have to be a 64-bit OS. Something like Windows 7 x86 supports >2TB disks just fine. BTW: you can check for support even without having a drive >2TB available by simply checking whether the 16-byte commands are implemented. The reason you see those "tested capacities" in advertisements is that those resellers don't really have a clue what they sell. They order a container full of USB gadgets from a Chinese OEM and then "test" what works with it. Fun fact: eSATA never had any capacity limits. I still have old USB 2.0 docking stations which work fine with 4 and 6 TB drives using the eSATA ports.
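    To make the 2 TB limit concrete: READ/WRITE(10) carry a 32-bit LBA, so with 512-byte sectors they top out at 2^32 x 512 = 2,199,023,255,552 bytes (2 TiB). On Linux, one way to probe whether a bridge implements the 16-byte commands is sg_readcap from sg3_utils (a minimal sketch; /dev/sdb is a hypothetical device name):

      # Force READ CAPACITY(16); if the USB bridge implements 16-byte
      # commands this returns the full capacity, otherwise it errors out.
      sg_readcap --16 /dev/sdb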
  3. 2 points
    JEDEC spec for unpowered SSD retention isn't the same as data decay lifetime, but since I've already dug it up... http://www.anandtech.com/show/9248/the-truth-about-ssd-data-retention For drives that are worn out (estimated wear left per drive spec, as reported via SMART, is zero): Client SSD: 30C ambient, 1 year. Enterprise SSD: 40C ambient, 3 months. There are some nice charts showing temperature vs. time relationships in there as well. And we know that modern SSDs can run way, way past their specified lifetimes... (quoted from the same link). Micron states similar: http://www.micron.com/about/blogs/2015/may/addressing-data-retention-in-ssds
  4. 2 points
    Hah, they're not going to take Optimus Max to client systems any time soon, but they certainly could. I'd bet we'll have 2TB client SSDs next year though from Samsung and maybe Micron. SanDisk's client business isn't really that strong comparatively.
  5. 2 points
    Put them in a server and run some sort of SDS on top of it like Nexenta. Fun learning experience and gets you a cheap SAN.
  6. 2 points
    Note the three years in between though...they've been surprised by the interest in the platform I think. Now, if WD could just get those 2.5" Reds up to 2TB in a 9.5mm...
  7. 2 points
    Long-time watcher of StorageReview, but I registered just to be able to comment on this review. An excellent review, though your testing seems a bit high-end for the likely intended usage. I'd bet the majority of the target users for this SOHO device won't have a backbone that supports iSCSI or even dual-port aggregation. As you point out, 2-10 users in a casual / small office setting or for home use seems a likely audience. Such an audience would be much more likely to have an entry-level GbE switch as opposed to a managed backbone that costs 10x more.

    To that point, I've used the entire line of BlackArmor devices, and there are three critical issues common to them that seem to be repeated with the replacement Business Storage line... none of which are mentioned in the review, but they may not impact everyone so I'm not sure they necessarily bear mentioning up-front.

    1) Performance. You obtained okay numbers in your testbed, but as summarized above, I doubt you'd see that infrastructure in the wild. I'd suggest you at least pair it with testing results from a cheapo GbE switch using a single LAN plug and simple Windows file sharing / disk mapping. Unless the BS line has markedly improved on the BA line, you'll see performance on the order of 15 MB/s read, 10 MB/s write. Horrendous for anything but backups, really, which is all I use my BA boxes for. Also, I recognize that there's a massive disparity of price points and target audience, but I get 110 MB/s--TEN TIMES the performance--from my Synology boxes, and 50-70 MB/s from my Drobos. And that's on a cost-conscious backbone of entry-level GbE switches using one LAN port per device and simple, iSCSI-less file sharing in Windows. There's no comparison at all.

    2) Compatibility. Massively overpriced with disks, the BA and BS lines are very reasonable when purchased diskless. I've used Buffalo, Seagate, Synology, and Drobo NAS boxes in small-business and personal settings, and diskless BA/BS boxes are far and away the cheapest way of adding reliable (but not fast!) NAS storage in such contexts. But these NAS boxes only support Seagate disks. True, this is a Seagate device, but it seems as though someone had to intentionally code a rejection routine into the firmware, which is just kind of an obnoxious move. In addition, some of the compatibility notes for "certified drives" listed for the BA line are flat-out falsified--the diskless BA 400 will simply NOT work with the 1.5 TB desktop line of Seagate disks, period.

    3) Risk. For those who know what they're doing, these are fairly easy boxes to deploy, and the web-based UI is second only to Synology's in my experience. But it's easy, far too easy, to make a catastrophic mistake. For example, if you set up a BA box using one LAN port and then try to plug in a second LAN plug, it will not only not work, it has a strong chance of corrupting the entire array, forcing you to lose all data and set everything up again--and in order to even begin to do so, you must eject each disk individually and reformat it using a separate computer. Otherwise it won't set itself up.

    Now, many of my comments above are from my experience with the older BA boxes, but I'd like to know if those issues have been resolved with the replacement BS line. Anyway, as always, I love seeing info on StorageReview.com so keep up the good work!
  8. 2 points
    You're almost right here. What's missing is that copying small files, even from the same directory, will automatically include some random access too. The files being read may be spread across the disk, they may be written to different locations to fill holes in the current file structure (whatever the OS sees fit), and the MFT may be accessed. That's why multi-threaded copy at higher queue depths still improves throughput: the disk can arrange the accesses better through NCQ and can reduce access times. BTW: if the folders you're copying are often the same, I'd look into incremental syncing with e.g. DirSync (nice freeware). Not sure it can increase QD, but it certainly saves time not to transfer untouched files again. And I'm not a fan of buying large SSDs for storage; that's often a waste of money (IMO). I'd rather use the SSD for temporary storage and as an automatic cache. If you're concerned with many small files, an SSD would be ideal. And if the SSD cache also buffers writes, you may see a massive speed increase. The cache capacity would also be huge compared to the amount of storage required for small files. MrS
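    If you want to see the queue-depth effect for yourself on Linux, a crude way to get a multi-threaded copy is to fan out one cp per file (a minimal sketch assuming GNU xargs and cp; the paths are hypothetical):

      cd /source/folder
      # -P8 keeps 8 copies in flight so NCQ has something to reorder;
      # --parents recreates the directory structure at the destination.
      find . -type f -print0 | xargs -0 -P8 -I{} cp --parents {} /destination/folder/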
  9. 1 point
    From the spec sheet, the drive falls under the same drive acoustic ratings as the 4TB, 6TB, 8TB and 10TB models of the BarraCuda Pro, with a typical idle of 2.8 bels (3.0 idle max), and a typical seek of 3.2 bels (3.4 seek max).
  10. 1 point
    You'd have to email Supermicro. Unfortunately we can't discuss the pricing of this system.
  11. 1 point
    The 03 or 04 part is the series number; it gives an indication of density and possibly power efficiency. Toshiba's 05 series has just hit the market, with 04 still going strong. 03 is an older design today. I don't know what the G or D stands for.
  12. 1 point
    Apparently the bq2060A internal charge controller does not count backwards; several users have tried just changing the battery, but it was unsuccessful - relearn simply fails. I don't know what LSI module you have, but it would be interesting if you post your findings. I have only edited code for the iBBU07, so other modules might behave differently.
  13. 1 point
    Between the two I'd go with whatever is cheapest. Really though, they're quite similar mechanically. The NAS drives have more hours on them globally, so they may be a better choice from a reliability standpoint.
  14. 1 point
    I can't find your older DD2200 forum topic; the link in the review is dead. But now that there are flash-enabled DataDomain systems, do you think you can sweet-talk DellEMC into getting one in the labs? Specifically the DD6300 or a similar system? I would like to see the effects of flash used for metadata, in regards to Veeam restores, VDP performance, and general performance improvements. They are claiming big restore improvements, which has always been an issue for products like Veeam and CVLT used together with DD. Not that it was a limitation on EMC's side; it's more that those products do lots of random IO during the restore process, which punishes DataDomain, since it doesn't do random IO well at all.
  15. 1 point
    The Veritas NetBackup 5240 appliance offers a heterogeneous backup and recovery suite and can be easily deployed into an existing NetBackup environment for expansion or acceleration. The appliance by itself offers a capacity ranging from just under 5TB up to almost 14TB before deduplication. Additional storage shelves can bring total capacity up to 148TB. The unit can be used as a master server, media server, or both, and supports both VMware and Hyper-V virtualization environments. The appliance is offered in both the cost-optimized version we reviewed here and the 5300 model that is more performance-optimized. Veritas NetBackup 5240 Backup Appliance Review
  16. 1 point
    If you want to laugh out loud at my reply, please do so. I don't know how much data you need to migrate: without that knowledge, what I'm about to say may be totally inappropriate.

    Several years ago, malware hit our SOHO network and "migrated" to every machine in that network. It took 8 DAYS to re-build everything and disinfect every machine. After that burn, we decided that THE BEST WAY to keep a PC virus-free is to TURN IT OFF!! (lol here is a-ok)

    Whenever we have been faced with a similar challenge, we ALWAYS start with a FULL BACKUP of all data, including of course the operating system and all files and databases. That FULL BACKUP is copied to one of our aging "backup servers" and then we turn that backup server OFF -- COMPLETELY OFF. Because PC hardware is so cheap now, and because some databases have become invaluable e.g. mirror images of a website, we do not hesitate to maintain cheap "white boxes" with aging CPUs that do very little except XCOPY data from here to there. We have even perfected a PUTT and GETT pair of Command Prompt BATCH files that do the job very well, particularly when we only need to back up a sub-folder in our website mirror.

    Our consistent approach has also been to maintain a formal separation between C: system partitions and all other partitions. Every discrete storage device or RAID array is formatted with a primary partition exactly equal in size and contents to the Windows C: system partition. The remainder of each such storage device is formatted with a data partition e.g. D: or E: (in Windows parlance). All of our key workstations host at least 2 identical copies of the same OS. From experience, we know that it doesn't take much to completely corrupt a working OS e.g. the other day, a HDD crashed and that crash ended up corrupting the Windows Registry. So, with our dual-OS setup, we simply re-booted from the backup OS and restored a drive image of the primary C: partition: piece o' cake.

    As such, my first choice is your Option "A", making sure that you have a working "backup server" with redundant backups of all operating system and dedicated data partitions. Trying to mix HDDs and SSDs sounds like too much work: the future is solid-state, and I think you should migrate now to a new system with SSDs and a quality / compatible RAID controller. You can buy large HDDs for your backup server, the sole purpose of which is to archive multiple redundant copies of really important data. Hope this helps.

    p.s. I would be very interested to read more Comments from others who study your question.
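    For anyone curious, the PUTT/GETT idea mentioned above boils down to a one-way XCOPY push; here is a minimal sketch of such a batch file (the PUTT.BAT name matches the post, but the server and path names are hypothetical):

      @echo off
      rem PUTT.BAT -- push the website mirror to the (normally powered-off) backup server.
      rem /S /E = include subfolders, even empty ones; /D = only copy files newer than
      rem the destination copy; /C = keep going on errors; /H = include hidden/system
      rem files; /Y = don't prompt before overwriting.
      xcopy C:\mirror\website\*.* \\BACKUPBOX\archive\website\ /S /E /D /C /H /Y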
  17. 1 point
    One more thing: just because a PCIe expansion slot is mechanically x16 (full length), the chipset may be assigning a smaller number of logical PCIe lanes to any given slot. We've been circumventing that behavior by installing our RAID controllers in the first x16 expansion slot, which is normally where x16 video cards are inserted. (In our office, we have no need for super high-bandwidth video.) Since you intend to install a RAID controller with an x8 edge connector, you should be fine as long as you confirm that the chipset is also assigning x8 logical lanes to that expansion slot, NOT x4 or fewer. Your motherboard User Manual should have documentation on this point. And there may be a BIOS setting which controls how many lanes are assigned to the other x16 slots below the primary slot (closest to the CPU socket). Also, the summary specs published in motherboard marketing literature very often document the lane assignments for each PCIe slot e.g. if your motherboard is still being sold by Newegg.com, those specs should be in Newegg's description of that motherboard. Look for text like "x16 / 0" or "x8 / x8": "x16 / 0" means x16 lanes are assigned to the first expansion slot when the second expansion slot is empty; "x8 / x8" means x8 lanes are assigned to the first expansion slot and x8 lanes are also assigned to the second expansion slot when both slots are populated. And so on. You wouldn't want your upstream bandwidth cut IN HALF merely because of lane assignment decisions that were made by the chipset without your knowledge or control e.g. from x8 to x4.
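    On Linux you can also verify the negotiated width after the fact with lspci (a sketch; the 01:00.0 slot address is hypothetical -- find your controller's address with plain lspci first):

      sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
      # LnkCap shows the width the card supports (e.g. Width x8);
      # LnkSta shows what the slot actually negotiated -- if it reads x4,
      # the chipset has cut your upstream bandwidth in half.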
  18. 1 point
    FWIW, Intel and Areca are basically rebranding LSI-- if you're concerned about support for an LSI card, I would buy an LSI-branded card (instead of a 3rd party one like a Dell or Intel) directly.
  19. 1 point
    Hello! I'm torn between buying one of these 3 drives for use as my PC's boot drive:

    Seagate ST2000DX001 SSHD: 196 BGN (~113 USD or 99 EUR)
    Toshiba DT01ACA200: 144 BGN (~83 USD or 74 EUR)
    Toshiba P300 HDWD120EZSTA: 167 BGN (~96 USD or 85 EUR)

    Looking at them, you might say that the SSHD is inherently better; however, I'm worried about the reliability of Seagate drives in general. Out of the 17 "broken" PCs I was busy diagnosing and repairing last month, 5 had dead hard drives, 4 of which were Seagate drives. As for the new Toshiba P300 series, the reason for me to consider it is that the other Toshiba drive I'm looking at is a bit old, at 54-ish months. That, and the fact that I'm expecting some performance improvements with the P300, although I haven't been able to find any benchmarks to confirm this. Regarding SSDs: I'll get one about a year from now, when I'll do a complete system upgrade. It's not economically feasible for me right now. Regarding WD Blacks: they're far too expensive where I live, nearly twice the price of competing 2TB drives. And from reviews I've read, they seem too noisy and hot. Regarding why 2TB and not more: my current PC is rather old - 3.8GHz E8500-based - which means it doesn't have a UEFI BIOS, which means I can't use a drive larger than 2.2TB as a boot drive. So what's your opinion? Has anyone had any experience with these P300 drives?
  20. 1 point
    > performance is my highest priority

    Then, if you intend to install a Windows OS, be sure to format the C: system partition at ~50GB, and format the remainder as a dedicated data partition e.g. D: or E: etc. Historical research has shown that HDD linear recording densities are fairly constant: this means there is much less data on the innermost tracks and much more data on the outermost tracks. The amount of data on any given track is directly proportional to track diameter. Formatting a second data partition is also very useful for backup reasons e.g. drive images of your C: partition can be written to the data partition, and easily restored if your C: partition becomes infected with a virus or malware. A measurement from many moons ago illustrated the drop in platter transfer rates from outermost track to innermost track for a variety of HDDs popular at that time.
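    You can see the outer-to-inner fall-off yourself with two raw reads, one at the start of the disk and one near the end (a sketch; /dev/sdX is a hypothetical device, and while these commands only read, double-check the device name before running them):

      # Outermost tracks: read 1 GiB from the start of the drive
      sudo dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct
      # Innermost tracks: read 1 GiB starting ~1.8 TiB in (near the end of a 2 TB drive)
      sudo dd if=/dev/sdX of=/dev/null bs=1M count=1024 skip=1800000 iflag=direct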
  21. 1 point
    This, pretty much this. You're going to want more spindles, and most likely some 10-15k or even SSD disks in a different datastore, and have your resource pools configured correctly.
  22. 1 point
    022 is probably a newer revision of the base model (002). According to the spec sheet it uses 0.1W less power. Some firmware tweaks and firmware-level functionality bumps too, particularly for hosts that have SMR-aware optimisations. Now that you mention it, there's quite a lot of new technical information in the manual for the 022 model. They appear to have published the exact shingled and non-shingled geometries:

    1.0 Introduction
    These drives provide the following key features:
    • Host aware, optimized for SMR performance and capable of ZAC command support
    • High instantaneous (burst) data-transfer rates (up to 600MB per second).
    • Streaming video optimization - consistent command completion times & ERC support
    • Idle3 power mode support
    • TGMR recording technology provides the drives with increased areal density.
    • State-of-the-art cache and on-the-fly error-correction algorithms.
    • Native Command Queuing with command ordering to increase performance in demanding applications.
    • Full-track multiple-sector transfer capability without local processor intervention.
    • Seagate AcuTrac™ servo technology delivers dependable performance, even with hard drive track widths of only 75 nanometers.
    • Seagate SmartAlign™ technology provides a simple, transparent migration to Advanced Format 4K sectors.
    • Quiet operation.
    • Compliant with RoHS requirements in China and Europe.
    • SeaTools diagnostic software performs a drive self-test that eliminates unnecessary drive returns.
    • Support for S.M.A.R.T. drive monitoring and reporting.
    • Supports latching SATA cables and connectors.
    • Worldwide Name (WWN) capability uniquely identifies the drive.

    1.2 Zone Structure
    Archive HDD models use SMR (Shingled Magnetic Recording) technology and are physically formatted with two types of zones: 64 "Conventional Zones", which are not associated with a write pointer and are non-SMR media, and 29808 "Sequential Write Preferred Zones", which are SMR media. For the sequential write preferred zones there is a write pointer to indicate the preferred write location. For the conventional zones, writes can occur randomly at any block size. New commands for reporting the zone structure, resetting zone write pointers, and managing zone properties are available for sequential write preferred zones through ZAC commands.

    Archive HDD Conventional Zone Structure
    • There are 64 256-MiB Conventional Zones (i.e. not shingled).
    • The conventional zone region is located at the outer diameter and totals 16GB.
    • Sequential reads and writes to these zones will perform at similar data rates.
    • Random write commands can be issued in any order without any performance delay.
    • Zones designed specifically for randomly written data, for example logs and metadata.

    There are 29808 Sequential Write Preferred Zones
    • Each zone is 2^19 logical blocks in size, or 256 MiB.
    • Each zone is a shingled zone.
    • To achieve best performance, use of ZAC commands is required.
    • Resetting the write pointer of each zone is required before reuse.
    Optimal number of open sequential write preferred zones
    • Advised - the largest number of zones that should be open for best performance; reported in the Identify Device Data log, 0x30 page 0x00.

    Optimal number of non-sequentially written sequential write preferred zones
    • Advised - the largest number of sequential write preferred zones that should be randomly written for best performance; reported in the Identify Device Data log, 0x30 page 0x00.

    The T-13 standards define the new ZAC commands: REPORT ZONES EXT to query the drive on what zones exist and their current condition, RESET WRITE POINTER EXT to reset the write pointers, and OPEN ZONE EXT, CLOSE ZONE EXT, and FINISH ZONE EXT to open, close, and finish zones. To achieve optimal performance, an SMR-aware host driver will need to write sequentially to all sequential write preferred zones. See the T13 web site at http://www.t13.org for ACS-4, T13/BSR INCITS 529, for command details.
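    On a reasonably recent Linux kernel you can poke at these zones with util-linux's blkzone, which wraps the same ZAC/ZBC plumbing (a sketch; /dev/sdX is hypothetical and needs a zoned-block-device-aware kernel):

      # REPORT ZONES: prints each zone's start, length, write pointer and condition
      sudo blkzone report /dev/sdX | head
      # Reset one zone's write pointer; offsets are in 512-byte sectors, so
      # 33554432 = 64 conventional zones x 256 MiB, i.e. the first shingled zone
      sudo blkzone reset -o 33554432 -c 1 /dev/sdX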
  23. 1 point
    Outside of a cursory look, we haven't gone very deep into that particular system's capabilities. That's a pretty basic request for any backup software package, though. What's your reluctance to use an application on the server?
  24. 1 point
    Available in capacities up to 2TB, the Intel P3700 is the top drive of the family, which is designed for both mainstream applications and storage system providers. Intel has certainly introduced their new family of NVMe enterprise drives in a massive way, with three different lines and two different form factors (2.5" and PCIe add-in card) that span a grand total of 12 different capacities. There aren't many companies (besides the big three) that can afford such an impressive launch. The P3700 and its family are also vertically integrated solutions, meaning Intel produces the controller, NAND (20nm MLC) and firmware (Intel also provides driver support for operating systems). This allows Intel to better understand the drive's characteristics, letting them effectively support the drive and offer more streamlined enhancements in the future. Intel SSD DC P3700 2.5" NVMe SSD Review
  25. 1 point
    There are no RAID adapters for PCIe-connected storage devices. It would all have to be at the software level.
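    On Linux, for example, mdadm will happily mirror two PCIe/NVMe devices entirely in software (a sketch; the device names are hypothetical):

      sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1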
  26. 1 point
    I have a Dell R610 server and an extra Dell PERC H800 external RAID card. Will the Lenovo SA120 work with the PERC H800? I'm guessing it should, but I would like to know for sure.
  27. 1 point
    I don't have enough budget and I don't need more than 2TB but wow, that's cool
  28. 1 point
    I have a built-in HDD in my new laptop. I counted everything that is on it plus the free space, and the sum is 726GB, but my HDD says that 913GB is usable (1TB HDD). So my question is: where is that missing space? Yes, I have it set to show hidden folders, too. (THE 180GB OF FREE SPACE IS COUNTED IN THE CALCULATION.) Thanks for the help
  29. 1 point
    Find a new place to interview, sounds like a crazy man.
  30. 1 point
    This paper ( https://www.usenix.org/conference/fast15/technical-sessions/presentation/aghayev ) gives a more detailed explanation of the drive's behaviour, and especially its vulnerability to sustained random writes. If that's a significant part of what you want to use a drive for, then it's clear that this drive isn't for you. Whether this category covers "most end users" is an open question. One interesting idea I did see floated on another forum to improve SMR's performance even further was to make it a hybrid drive, with an onboard SSD acting as the drive's persistent cache.
  31. 1 point
    Whenever I receive a forum's notification email (I probably belong to several dozen forums), by habit I click the first link in the email, expecting it to take me to the reply. I seldom bother to read the email itself, because quite often there are multiple replies in the thread. The notifications from this forum, however, place a link to the poster's member profile first, and I always mistakenly click it. Please consider rearranging these notifications so that a link to the thread's new posts is topmost in the message. Thanks.
  32. 1 point
    I recently added a Hitachi 1TB Deskstar to my setup; it has a 32MB buffer and spins at 7,200 RPM. I moved my multimedia files onto it, and now I access them from that disk. Previously I had all the files on a Seagate 500GB. But now I am noticing that sometimes, when I change a FLAC tag or a file gets rewritten, there is a 5-second access lag. The symptom is not easily reproducible, but if you spend some time manipulating files, every now and then you will notice this lag. Is this a common characteristic of the Deskstars? I am starting to believe there might be something wrong. Before putting this unit in service, I ran two full extended tests and it passed both. Now I am getting a 250GB SSD and I will have the option to get rid of either the Hitachi or the Seagate. However, if I keep the Hitachi, I will have 1TB of storage. This is an RMA'ed drive, but it looked pretty new when it arrived, sealed. So what could be the cause for this drive to behave like this? Thanks for any input!
  33. 1 point
    At least the Swans look safe from relegation
  34. 1 point
    Personally I might lean towards the 9361-8i, since it seemed to offer the highest performance in our environment. The Areca model is based on it with some minor additions... but why not just buy the real deal instead? In terms of running hot, all the models that you listed are designed to be operated in an environment with a certain amount of airflow. If you are using them in a server, they will operate for many years just fine. In a desktop, you would need to make sure a gentle/brisk breeze is always flowing over the card's heatsink.
  35. 1 point
    Ultimately the storage-centric R730xd offers new degrees of flexibility for those who want to keep storage as close as possible to the compute layer. The chassis has a lot of potential in SDS situations too, something Dell likely had in mind as they continue to innovate generation after generation with leading storage server solutions. Dell PowerEdge 13G R730xd Review
  36. 1 point
    Sadly, reinstall. It's a massive pain if you have a lot of apps, but the end result is worthwhile.
  37. 1 point
    Specifically, they have little ability to innovate going forward and I question their ability to build a proper support network. Note that the opinion is largely based on a US-centric slant, some brands have better adoption in Europe or Asia for instance. The decision to rebrand the M550 is a pretty clear case. They had no ability to create a new product with the delays in SandForce, so the best option was to copy Micron and hope to make money by selling a slower version for $10 less? Poor business sense and not something I'd invest my money in as an SSD buyer.
  38. 1 point
    Seagate NAS is what I was referring to. HGST is a 7K design...if you're really worried about noise I'd stick with the slower spindle.
  39. 1 point
    In a case you're not going to notice much noise difference these days, but the slower NAS drive should be quieter. There are so many variables though, even within a product line, that a sample of one could vary somewhat. Sounds like the Seagate is a better fit for your needs and budget though.
  40. 1 point
    You're already unhappy with the Drobo performance, and trust me, it's not any better now, so I'd mark that off the list promptly. In terms of NAS vs DAS though, how many people or devices will need to access the storage? Do you see any benefit from the NAS features like remote file access, etc., or do you really want the performance DAS offers by comparison? Both the LaCie and G-Tech products are very nice, incidentally. There are also other options from the likes of CalDigit and others who have quality products as well.
  41. 1 point
    We've actually been working on a new web server test for some time but got busy with other benchmarks. We do aim to pick that back up again, would give web hosts a perfect benchmark to consider. Let me see how they reply and then we'll decide what to do.
  42. 1 point
    An Enterprise review of Samsung 840 Pro would be very nice! We are using 840 Pro in a couple of enterprise projects for SSD caching (w/ LSI CacheCade) in production and for Tier 1 storage pools (under DataCore SANSymphony-V) for development / testing / lab environments. So far, so good, but having more information for the top-tier consumer SSDs and especially head-to-head comparisons would be great :-) In real life, if the reliability is acceptable, it could be much more efficient to use 200-300% more consumer/prosumer SSDs instead of a bunch of enterprise SSDs (which are insanely expensive) to achieve stable performance and modest TBW. P.S. I've just tried to log in with my 2001 account, but I had to create a new one :-) I guess the old forums were not migrated to the new CMS :-)
  43. 1 point
    It's not officially supported that way, although it looks like some Supermicro resellers do list it as a separate component.
  44. 1 point
    I'd reach out to them. I'm not sure of the refurb process but it could just entail a quick drive check and wipe.
  45. 1 point
    Hi, I don't know exactly what your budget and needs are, but this would be my dream SQL box:
    - Boot/OS/binaries/pagefile/crash dump: 2 mechanical HDDs in a mirror. (Can be slow; you only boot the OS once. Pagefile swapping might be slow, but with a proper amount of memory there will barely be any.)
    - RAID (0+)1 of SSDs for SQL logs
    - RAID 6 of 4 15K SAS drives for SQL databases
    Now if the database data is small enough:
    - RAID 1 of SSDs for data
    - RAID 6 of 4 10K SAS drives for BLOB/FILESTREAM storage (optional)
    Hope that gives you ideas
    m a r c
  46. 1 point
    I've bought quite a few used SSDs; they're good for test systems, HTPCs, etc. I won't pay a lot for them though - roughly half of what a similar new one would go for. Heck, I even bought a used one just for use as a scratch disk, because I couldn't bring myself to buy a new one just to write to it non-stop and wear it out :-P I second the Samsung recommendation. For what I consider my "critical" systems, I only use Samsung. I've used plenty of SanDisk SSDs, both new and used, in other systems with no problems, however. Personally I'd repurpose the old one into another system, or sell it, and get one larger one. Some people opt to stripe multiple smaller ones, but I don't bother.
  47. 1 point
    Do you have $80-100 to spend on an SSD in addition to bulk storage? What drive do you have now?
  48. 1 point
    http://safebrowsing.clients.google.com/safebrowsing/diagnostic?site=http%3A%2F%2Fwww.storagereview.com%2F&client=googlechrome&hl=en-US That is the info I received when I attempted to load up storagereview with Google Chrome 23.0.1271.64 with adblock plus extension and do-not-track enabled.
  49. 1 point
    Hey guys, Is there any chance the leaderboard might be updated anytime soon?
  50. 1 point
    For the 1 gbyte/sec benchmark, head to the bottom.

    Subject: 12 Veliciraptors again w/x4 card (~1gbyte/sec aggregate read)!

    Each PCI-e x1 card has 1 veliciraptor on it now. Got an x4 card with 4 SATA ports. Not quite the > 1 gbyte/sec I was hoping for in regards to the reads, but pretty close! (For my RAID5.) Previously my write was limited to 400-420MiB/s; now I see an additional 120-125 MiB/s increase!

    jpiszcz@p34:/x/f$ dd if=/dev/zero of=bigfile bs=1M count=10240
    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 20.7054 s, 519 MB/s
    jpiszcz@p34:/x/f$ sync
    jpiszcz@p34:/x/f$ dd if=/dev/zero of=bigfile.1 bs=1M count=10240
    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 20.4973 s, 524 MB/s
    jpiszcz@p34:/x/f$ sync
    jpiszcz@p34:/x/f$ dd if=bigfile of=/dev/null bs=1M count=10240
    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 11.3529 s, 946 MB/s
    jpiszcz@p34:/x/f$ sync
    jpiszcz@p34:/x/f$ dd if=bigfile.1 of=/dev/null bs=1M count=10240
    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 11.2635 s, 953 MB/s
    jpiszcz@p34:/x/f$

    For all disks: something I noticed is the x1 PCI-e cards are doing around 68MiB/s each for 3 of them, where the x4 has no issue pumping out 100MiB/s+ without a problem; however, keep in mind the bus is probably already taxed from the 6 SATA drives on the southbridge.

    vmstat output (two samples per drive count):

    procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
    1 VR
     0  1 160  45220  341772 6480468   0  0  122112   0  584  2082  1  7 73 20
     0  1 160  46592  455436 6362088   0  0  113664   0  495  1968  0  4 74 21
    2 VR
     1  1 160  45540 3027724 3720340   0  0  243216   0 1006  4030  0  9 74 17
     0  2 160  44988 3262220 3480648   0  0  234480   0 1008  4134  0  8 73 19
    3 VR
     1  2 160  44816 6600068   50476   0  0  330248  16 1342  4126  0 12 70 18
     0  3 160  45440 6599812   50264   0  0  316032   8 1296  3878  0 12 72 17
    4 VR
     0  4 160  44504 6602488   47644   0  0  495232   0 1992  6081  0 20 57 23
     1  3 160  45500 6602796   45980   0  0  483968   0 1915  6207  0 20 54 26
    5 VR
     1  5 160  43932 6602972   45304   0  0  606080   0 2375  6622  0 25 56 19
     1  4 160  45412 6601852   45160   0  0  618756   0 2431  6791  0 25 53 21
    6 VR
     0  6 160  45000 6602348   44512   0  0  683904   8 2746  7880  0 31 42 27
     0  6 160  45248 6602028   44460   0  0  705792   0 2754  7564  0 31 45 24
    7 VR
     2  6 160  46744 6599020   44688   0  0  748204  17 3042  9084  0 34 40 26
     3  6 160  46592 6598824   44372   0  0  747520   8 2975  9047  1 33 31 36
    8 VR
     2  7 160  46512 6598612   44580   0  0  761184  16 3089  9937  0 36 40 24
     2  7 160  44528 6600392   44360   0  0  759720   8 2993  9522  0 36 36 28
    9 VR
     2  8 160  47152 6596824   44572   0  0  767016   0 3075  9730  1 37 39 24
     2  7 160  46576 6597728   44688   0  0  771200   0 3032  9568  0 37 40 23
    10 VR
     0 10 160  45048 6598240   44428   0  0  889072   8 3599 11561  0 47 20 33
     2 10 160  45232 6598116   44772   0  0  890112   0 3495 11547  0 46 23 31
    11 VR
     4  8 160  45536 6594716   44600   0  0  996352   0 3947 12134  1 62 13 25
     2  9 160  45348 6594912   44096   0  0 1009152   0 3949 11949  0 63 10 28
    12 VR
     6  8 160  45092 6583136   47016   0  0 1063200   0 4187 12394  1 71  9 21
     3 11 160  47080 6578492   47588   0  0 1058412   0 4224 12547  1 72  8 20

    Just about 1 gigabyte per second total aggregate read for all drives on a 965 chipset!

    Justin.

    Subject: Re: 12 Veliciraptors again w/x4 card (~1gbyte/sec aggregate read)!

    On Mon, 7 Jul 2008, Justin Piszcz wrote:
    > Each PCI-e x1 card has 1 veliciraptor on it now.
    > Got an x4 card with 4 SATA ports:
    >
    > Not quite the > 1 gbyte/sec I was hoping for in regards to the reads
    > but pretty close!

    Going to remove one of the drives from the x1 card and put it on the x4 card instead; then I will use all 4 SATA ports on the x4 and hopefully get better bw.

    If you look at 7, 8, 9 there is little improvement (PCI-e x1):

    7 VR
     2  6 160  46744 6599020   44688   0  0  748204  17 3042  9084  0 34 40 26
     3  6 160  46592 6598824   44372   0  0  747520   8 2975  9047  1 33 31 36
    8 VR
     2  7 160  46512 6598612   44580   0  0  761184  16 3089  9937  0 36 40 24
     2  7 160  44528 6600392   44360   0  0  759720   8 2993  9522  0 36 36 28
    9 VR
     2  8 160  47152 6596824   44572   0  0  767016   0 3075  9730  1 37 39 24
     2  7 160  46576 6597728   44688   0  0  771200   0 3032  9568  0 37 40 23

    But once I hit the drives on the x4 card, vroom vroom!

    10 VR
     0 10 160  45048 6598240   44428   0  0  889072   8 3599 11561  0 47 20 33
     2 10 160  45232 6598116   44772   0  0  890112   0 3495 11547  0 46 23 31
    11 VR
     4  8 160  45536 6594716   44600   0  0  996352   0 3947 12134  1 62 13 25
     2  9 160  45348 6594912   44096   0  0 1009152   0 3949 11949  0 63 10 28
    12 VR
     6  8 160  45092 6583136   47016   0  0 1063200   0 4187 12394  1 71  9 21
     3 11 160  47080 6578492   47588   0  0 1058412   0 4224 12547  1 72  8 20

    Justin.

    Subject: Re: 12 Veliciraptors again w/x4 card (1.1gbytes/sec aggregate read)!

    On Mon, 7 Jul 2008, Justin Piszcz wrote:
    >
    > On Mon, 7 Jul 2008, Justin Piszcz wrote:
    >
    >> Each PCI-e x1 card has 1 veliciraptor on it now.
    >> Got an x4 card with 4 SATA ports:
    >>
    >> Not quite the > 1 gbyte/sec I was hoping for in regards to the reads
    >> but pretty close!
    >
    > Going to remove one of the drives from the x1 card and put it on the x4
    > card instead, then I will use all 4 SATA ports on the x4 and hopefully get
    > better bw.
    >

    Four drives on the x4 card, MAX bandwidth for every disk.

    p34:~# dd if=/dev/sdi of=/dev/null bs=1M &
    [1] 4720
    p34:~# dd if=/dev/sdj of=/dev/null bs=1M &
    [2] 4721
    p34:~# dd if=/dev/sdk of=/dev/null bs=1M &
    [3] 4722
    p34:~# dd if=/dev/sdl of=/dev/null bs=1M &
    [4] 4723
    p34:~#

    120MiB/s for each one!

    Re-running the dd test with all 12 disks: 1.1 gigabytes per second read!

     r  b swpd  free    buff   cache  si so      bi  bo   in    cs us sy  id wa
     1  0 120  59104 6632220   52228   0  0       0  40  168   517  0  0 100  0
     0  0 120  59104 6632220   52228   0  0       0   0   20   291  0  0 100  0
     3 10 120  43516 6635576   51924   0  0 1051776  62 4221 12301  1 70  11 19
     6  9 160  44420 6634720   51788   0  0 1117284   0 4435 12308  1 75   5 19
     6  9 160  47436 6631100   51676   0  0 1110300   0 4449 11438  1 76   3 20
     2 10 160  46740 6632048   51948   0  0 1137920   0 4447 12251  1 75   8 17
     9  7 160  45248 6632056   52004   0  8 1127940  45 4559 13259  1 74   9 17
     3  9 160  44152 6634780   49960   0  0 1132032  12 4471 12962  0 75   8 16
     4  9 160  44160 6634960   49380   0  0 1129216   8 4430 12545  0 76   7 16

    After: about the same for write:

    $ dd if=/dev/zero of=bigfile.1 bs=1M count=10240
    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 20.4056 s, 526 MB/s

    'nuff said for read:

    $ dd if=bigfile.1 of=/dev/null bs=1M count=10240
    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 10.2841 s, 1.0 GB/s

    Justin.