Search the Community

Showing results for tags 'benchmark'.

Found 6 results

  1. Hi guys, I need some help choosing between two RAID controllers. Currently I'm using an Areca ARC-1880ix controller with 16 x 4 TB hard drives in a roughly 50 TB RAID 6 array. It's just storage for large files, and sometimes I need copy performance to other clients; so far it has been sufficient. However, I have now bought a new and cheap Broadcom 9460 8-port controller with 2 GB of cache and connected 8 new 10 TB hard drives (one RAID 6 array) for tests and some benchmarks. On an NTFS partition it could copy files internally at about 400 MB/s, which is enough for me; only the management takes some getting used to. Yesterday I deliberately pulled two hard disks out of the backplane, and the array state changed to critical. The volume was still visible in Explorer, but its contents were gone. Will the contents come back after a rebuild? And which controller is better in terms of performance, LSI or Areca? Fewer drives are better, and the 9460 gets just as hot as the Areca, but on the Areca the management and settings are very easy to configure. What advantages does Broadcom offer vs. Areca?
  2. ITMan

    Benchmark SSD

    Hi friends, I have Samsung PM1633 SSDs that I use in a 12-drive hardware RAID 10 array. I run fio on it with the options below (collected into a job file in the first sketch after this results list): direct=1, iodepth=32, numjobs=8, bs=8k, runtime=180, rw=randrw, rwmixread=70, ioengine=libaio. Running fio directly on the server, the output is: read IOPS 90.3k, write IOPS 38.7k, read latency 2059 µs, write latency 1807 µs. My questions: 1. Am I really getting the disks' full IOPS? 2. Are these IOPS figures, at this latency, realistic? As mentioned, these are local results. 3. A remote test over FC from a host shows much lower IOPS; what is the reason? Many thanks.
  3. Hi all, I want to follow a standard method for benchmarking NAS and SAN storage. How can I run a comprehensive test on my storage? Which aspects must I cover, and which tools should I use? (One possible fio coverage matrix is sketched after this results list.) Many thanks for your helpful answers. Best regards, measures
  4. So my HDN726040ALE614 has arrived, and I've filled it to 22%. My first HGST, an HDN724040ALE640, is getting so full that Defraggler is throwing out complaints. The newer drive (released Feb 2017) is faster, dealing with smaller data files more quickly, as seen at the start of the test. I took two CrystalDiskMark tests of the first HGST (0S03665) because I thought being 98% full was affecting the result; a second test at 76% full didn't change things much. The 0S04005 frankly blew it away in this test. All tests were done on a small platform using the latest drivers and run today. I'm off to userbenchmark.com. Sorry, I'm not sure how to do spoiler tags here for the attached images.

    Edit: I lied above; the ATTO test on the HGST (the one where I used MS Paint to change a 10 into a 9) was not run today, and it was not run on the same volume either. It was run on volume H, which is about 3.2 TB. I've rerun it on volume E, which is 400 GB, and the results are impressive; maybe volume H needs its free space defragmented. I've attached the ATTO HGST 2017 E 14.8.16.1063 23_08_2017.jpg result. I'm rerunning CrystalDiskMark on volume E (which, by the way, is 89% full). I just noticed my CrystalDiskMark screenshots have no identifying text: the first two are the 0S03665 on volume H, the third is the 0S04005, and the last is the 0S03665 again, this time on volume E, which for some reason benches higher than volume H.

    Just did the userbenchmark.com test:

    UserBenchmarks: Game 58%, Desk 93%, Work 60%
    CPU: Intel Core i5-4690K - 98.4%
    GPU: AMD R9 280X - 53.1%
    SSD: Samsung 850 Evo 250GB - 105.1%
    HDD: Hitachi HDN726040ALE614 4TB - 105.8%
    HDD: HGST Deskstar NAS 4TB - 52.8%
    HDD: WD Green 2TB (2011) - 54.7%
    HDD: WD WD10EACS-00ZJB0 1TB - 44.6%
    USB: SanDisk Ultra USB 3.0 64GB - 33.9%
    USB: SanDisk Extreme USB 3.0 32GB - 80%
    RAM: HyperX Savage DDR3 2400 C11 2x8GB - 89.3%
    MBD: Gigabyte GA-Z97X-Gaming 5
    http://www.userbenchmark.com/UserRun/4752792
  5. I got a Samsung 840 EVO, and when I do a benchmark test it goes off the chart. I'd like to believe it's just that fast, but it's too good to be true. I've tried a bunch of different benchmark programs, but they all get these ridiculously fast readings. Does anyone else get these results? Does anyone know what the cause could be? My PC: Asus Z87-Pro, i7-4770K @ 4.2 GHz, G.Skill 2133 CL9 8 GB, EVGA GTX 780 @ 1150 MHz, CM M2 1 kW PSU.
  6. Hi, I have three 3 TB WD Red drives on an LSI 9260CV in RAID 5.

    Read policy = Always Read Ahead
    IO Policy = Cached IO
    Write policy = Always Write Back
    OS = Win 2012R2, latest firmware and drivers

    I did some benchmarks, and I can only explain the sequential results. The rest I don't get: RAID 5 is supposed to have slower writes due to the parity calculation. The LSI card has 512 MB of cache, and it certainly influences the results: the numbers get smaller as the cache-to-file-size ratio changes. While this is normal, there is always about 50% more throughput for random writes, and this is consistent whatever the file size. I would expect that ratio to also drop as the test file grows (if the cache were the reason for this strange performance). Here are results for a tiny file that fits entirely into the board's cache, so the numbers reflect PCIe transfer speed, not disk performance. What did I miss? (The last fio sketch after this list shows one way to separate cache effects from disk speed.) m a r c
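
For reference, the fio options in result 2 can be collected into a job file. The sketch below is a reconstruction under stated assumptions, not the poster's actual file: the target /dev/sdb is a hypothetical device path, and rw=randrw writes to whatever it points at, so aim it at a scratch device or file.

    # Sketch of a fio job matching the options quoted in result 2.
    # /dev/sdb is a hypothetical target; substitute a scratch file
    # (e.g. filename=/mnt/test/fio.dat plus size=10g) to avoid
    # destroying data, since rw=randrw issues writes to the target.

    [global]
    # direct=1 bypasses the page cache so the array, not RAM, is measured
    direct=1
    ioengine=libaio
    # 32 outstanding I/Os per job, 8 jobs = up to 256 I/Os in flight
    iodepth=32
    numjobs=8
    bs=8k
    # 70% random reads, 30% random writes
    rw=randrw
    rwmixread=70
    runtime=180
    time_based
    # report the 8 jobs as one aggregate result
    group_reporting

    [pm1633-raid10]
    filename=/dev/sdb

Because direct=1 takes the host page cache out of the picture, a gap between this local run and a remote FC run points at the fabric path rather than host caching.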
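
Result 3 asks what a comprehensive test should cover. One common pattern (a suggestion, not a standard the poster names) is a four-corner fio matrix: large-block sequential read and write for throughput, and small-block random read and write for IOPS and latency. The path /mnt/nas/fio.dat below is hypothetical; point it at the mounted NAS/SAN volume under test.

    # Sketch: a minimal four-corner coverage matrix for a NAS/SAN volume.
    # Hypothetical target path; stonewall runs the jobs one at a time.

    [global]
    ioengine=libaio
    direct=1
    filename=/mnt/nas/fio.dat
    size=8g
    runtime=60
    time_based
    group_reporting

    # throughput corners: large sequential blocks
    [seq-read]
    bs=1m
    rw=read
    iodepth=8

    [seq-write]
    bs=1m
    rw=write
    iodepth=8
    stonewall

    # IOPS/latency corners: small random blocks
    [rand-read]
    bs=4k
    rw=randread
    iodepth=32
    stonewall

    [rand-write]
    bs=4k
    rw=randwrite
    iodepth=32
    stonewall

The stonewall flag makes each job wait for the previous one to finish, so the four corners are measured in isolation rather than contending with each other.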
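
Result 6 reasons that the controller's write-back cache explains the random-write numbers. One way to test that hypothesis is to run the same random-write job at two file sizes, one well inside the 512 MB controller cache and one far outside it. A minimal sketch, assuming fio's Windows build and a hypothetical test file on the array volume (the '\:' escapes the drive-letter colon for fio's parser):

    # Sketch: one random-write job at two sizes to expose write-back
    # cache effects. e:\fio.dat is a hypothetical path on the RAID 5 volume.

    [global]
    ioengine=windowsaio
    # direct=1 takes the OS cache out of the path; the controller's own
    # write-back cache remains, which is the variable under test
    direct=1
    filename=e\:\fio.dat
    bs=64k
    rw=randwrite
    iodepth=16
    runtime=120
    time_based

    # fits entirely in the controller cache: numbers reflect cache/PCIe speed
    [cache-sized]
    size=256m

    # far larger than the cache: numbers converge on true RAID 5 random-write
    # performance, including the parity read-modify-write penalty
    [disk-sized]
    size=64g
    stonewall

If the random-write advantage survives at the 64 GB size, the controller cache is probably not the explanation; if it collapses, it probably is.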