Search the Community

Showing results for tags 'ssd linux'.

Found 1 result

  1. Hi all, I've got 6 Crucial MX300 525GB SSDs connected in JBOD mode to a ServeRAID M5110e in an IBM x3650 M4 server. System configuration:
     - 2 x Intel Xeon E5-2690 2.90GHz
     - 96GB DDR3 1333MHz
     - Ubuntu 16.10 Server, kernel 4.8.x

     I've done some intensive testing using the fio configurations found on your website. The workloads are 4k random reads and 8k random reads/writes (70/30 mix). Instead of running parallel jobs, I used 1 job per test and swept IO depths from 1 to 32. My goal was to find the best OS settings for these SSDs in an md software RAID10 configuration. I tested single disks, RAID10, and RAID0 to see whether performance scales, and it does, so md isn't the issue. Now I'm stuck on single-disk performance: each drive tops out at around ~40k IOPS. I'm probably missing something in the whole picture, but shouldn't these disks reach 80k to 90k IOPS? Am I testing the wrong way, or does the problem lie in the driver or a bad system configuration? I decided to buy them based on your reviews of the 750GB version, but I've never exceeded 40k IOPS... Thank you all. Pietro
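     For reference, a minimal fio job file matching the workload described above might look like the sketch below. The device path, runtime, and section names are assumptions for illustration, not the poster's exact configuration:

     ```ini
     ; 4k random read test, single job, direct I/O — a sketch, not the poster's exact file
     [global]
     ioengine=libaio      ; asynchronous I/O on Linux
     direct=1             ; bypass the page cache so the SSD itself is measured
     rw=randread          ; 4k random read workload
     bs=4k
     numjobs=1            ; one job per test, as described in the post
     runtime=60           ; assumed runtime per queue-depth step
     time_based=1
     filename=/dev/sdb    ; ASSUMPTION: replace with the MX300 under test

     [qd32]
     iodepth=32           ; rerun with iodepth=1..32 to sweep queue depths
     ```

     Run it with `fio randread.fio`. Note that with `numjobs=1`, a low `iodepth` can leave the drive's internal parallelism underused, which is one common reason a single SATA SSD reads well below its rated random-read IOPS.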