Search the Community

Showing results for tags 'fio'.

Found 4 results

  1. ITMan

    Benchmark SSD

    Hi friends, I have 12 Samsung PM1633 SSDs in a hardware RAID 10 array. I ran fio directly on the server with the following options: direct=1, iodepth=32, numjobs=8, bs=8k, runtime=180, rw=randrw, rwmixread=70, ioengine=libaio. Output: read IOPS 90.3k, write IOPS 38.7k, read latency 2059 µs, write latency 1807 µs. My questions: 1. Am I really getting the IOPS of the disks themselves? 2. Are these IOPS at this latency realistic? As mentioned, these results are from a local test. 3. Testing remotely from a host over FC shows much lower IOPS; what could be the reason? Many thanks!
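    For reference, the options quoted above can be collected into a single fio job file. This is only a sketch of the test as described; the device path is a placeholder for the actual RAID 10 volume:

    ```ini
    ; Sketch of the test described above: 70/30 random read/write mix,
    ; 8k blocks, 8 jobs at queue depth 32 for 180 seconds.
    [global]
    ioengine=libaio
    direct=1
    rw=randrw
    rwmixread=70
    bs=8k
    iodepth=32
    numjobs=8
    runtime=180
    time_based
    group_reporting

    [raid10-test]
    ; placeholder - substitute the hardware RAID 10 volume device
    filename=/dev/sdX
    ```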
  2. Hi, I have a Xeon server with 3 Intel SSDs connected to a RAID controller with RAID disabled (basically acting as a buffer). I've tried testing throughput using the fio utility with the following settings, where I vary X in numjobs across 1, 2, and 4:

    [global]
    blocksize=8K
    rw=randwrite
    fsync=1000
    numjobs=X

    [test1]
    filename=/disk1/test.file

    [test2]
    filename=/disk2/test.file

    [test3]
    filename=/disk3/test.file

    When I run this test with numjobs=1 (1 thread/disk), I see 120MB/s of throughput to each disk. However, when I use 2 and 4 threads, the throughput of two of the disks drops to 100 and 90MB/s, while the first disk always stays at 120MB/s. As far as I can see, nothing else running on this machine is using the disks or CPU. I will have an application on this machine where multiple cores write to the same disk, since they'll be CPU-bound, but these results are not encouraging. Does anyone have a suggestion on what I can do to improve the results, or different tests I could run?
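    One variant worth trying, sketched below under the assumption that the poster's defaults apply (no ioengine set means synchronous I/O): keep one job per disk and scale outstanding I/O with an asynchronous queue depth instead of extra threads, so you can see whether the imbalance follows the thread count. direct=1 bypasses the page cache, and fsync=1000 is dropped since direct I/O does not buffer writes:

    ```ini
    ; Variant of the job file above: one job per disk, parallelism via
    ; asynchronous queue depth (libaio) rather than additional threads.
    [global]
    ioengine=libaio
    direct=1
    blocksize=8K
    rw=randwrite
    iodepth=4

    [test1]
    filename=/disk1/test.file

    [test2]
    filename=/disk2/test.file

    [test3]
    filename=/disk3/test.file
    ```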
  3. I'm trying to rebuild your preconditioning test with fio. My problem is that I don't know how to run the test for 360 minutes and get a result for each minute. I tried to handle it like this:

    [global]
    runtime=60
    time_based
    ioengine=libaio
    direct=1
    filename=/dev/sdc
    thread=0
    rw=randrw
    group_reporting=1
    numjobs=16
    iodepth=16
    blocksize=4k

    [1]
    rwmixread=0
    stonewall

    [2]
    rwmixread=0
    stonewall

    [3]
    rwmixread=0
    stonewall

    ...with the sections [1], [2], [3] repeated 360 times. fio then returns an error: "maximum number of jobs (2048) reached". I need over 5000 jobs for my script. How can I fix that problem? Could you at StorageReview show how you do the preconditioning? Thanks
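    One way to sidestep the 2048-job limit entirely is to run a single time_based job for the full 360 minutes and use fio's logging options to average bandwidth and IOPS over 60-second windows, giving one data point per minute instead of 360 stonewalled sections. A sketch, keeping the poster's workload parameters:

    ```ini
    ; Single job for the whole preconditioning run; fio writes one
    ; bandwidth log and one IOPS log per job, with entries averaged
    ; over 60 s windows (one point per minute).
    [global]
    ioengine=libaio
    direct=1
    filename=/dev/sdc
    rw=randrw
    rwmixread=0
    blocksize=4k
    numjobs=16
    iodepth=16
    group_reporting=1
    time_based
    ; 360 minutes, in seconds
    runtime=21600
    write_bw_log=precond
    write_iops_log=precond
    ; average log entries over 60000 ms
    log_avg_msec=60000

    [precondition]
    ```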
  4. Fusion-io today announced updates to its ioTurbine enterprise caching software, which improves performance of existing NAS and SAN environments by utilizing server-side flash to accelerate applications and reduce latency for frequently-accessed data. ioTurbine is available for use with the company's ioMemory products for flash directly installed in servers, and can be used with the ION Data Accelerator or ioControl SPX solutions for external flash shared across servers. Fusion-io also announced a short-term Fast Flash program to facilitate technical consultations between the company and prospective ioTurbine clients. Fusion-io Announces Updated ioTurbine Software and Fast Flash Consultation Program