Search the Community

Showing results for tags 'performance'.



Found 5 results

  1. Hi all, I've got six Crucial MX300 525GB SSDs connected in JBOD mode to a ServeRAID M5110e in an IBM x3650 M4 server. System configuration:

     - 2 x Intel Xeon E5-2690 2.90GHz
     - 96GB DDR3 1333MHz
     - Ubuntu 16.10 Server, kernel 4.8.x

     I've done some intensive testing using the fio configurations found on your website. The workloads are 4k random reads and 8k random reads/writes (70/30 mix). Instead of running parallel jobs I decided to go with one job per test and swept queue depths from 1 to 32 (a minimal job file along these lines is sketched after this list). My goal was to find the best OS settings for these SSDs in an md software RAID10 configuration. I tested single disks, RAID10 and RAID0 to see whether performance scales, and it does, so md isn't the issue. Now I'm stuck on single-disk performance, which tops out at an average of ~40k IOPS. I'm probably missing something in the whole picture, but shouldn't these disks reach 80k to 90k IOPS? Am I testing the wrong way, or does the problem lie in the driver or a bad system configuration? I decided to buy them based on your review of the 750GB version, but I've never exceeded 40k IOPS... Thank you all, Pietro
  2. The benchmarks of SSDs like the Intel 750 suggest speeds approaching those of RAM disks. The question, of course, is whether this has much value for consumers, who mostly run read-heavy applications (the most important benchmark for them would probably be 4k reads at queue depths of 1 and 2). The big advantage I see is mostly in sequential benchmarks, where we have seen huge leaps in performance. Boot speed and "application smoothness" don't seem to be improving that much. It's mostly storage-intensive work that sees the benefit, although I suppose if the workload is extremely write-intensive, then an SLC SSD or perhaps a RAM disk is still needed.
  3. Does it seem like SSD progress is beginning to level off? Most current SSDs remain planar MLC at around 16-20nm, with a few value-oriented TLC SSDs as well. But at some point the die shrinks look likely to stall; perhaps ~10nm is the end? 3D NAND doesn't seem to be an easy answer over planar, either. Samsung in particular appears stuck at 40nm with its 3D NAND for at least the next couple of generations, and I'd imagine the other vendors will run into similar technical difficulties, so the benefits may well have been somewhat overhyped. It also isn't clear to me whether 3D NAND will scale down well past 40nm. Performance-wise, NVMe drives are starting to enter the market, but whether they will have much impact outside the high end remains open to debate. So is that it, are we seeing SSD performance level off? And likewise, is the rate of price decline, in terms of cost per GB, going to level off as well?
  4. Hi all, this is my first post in this forum. I hope someone can lend me a hand, since I have run out of ideas.

     I've built a RAID 5 on an ASRock Z87 Extreme6 using six Western Digital Red 4TB drives connected to the six Intel SATA3 controller ports, with the aim of creating a 20TB RAID 5 array. The OS is Windows 8.1 x64. I created the RAID from the BIOS utility, selecting a 64KB stripe size (I could choose 64KB or 128KB, but the utility recommended 64KB for RAID 5). Once in Windows I formatted the array with a 20GB partition, and write speed was really slow (10 MB/s max), even after waiting for the RAID to finish building (it took several hours).

     After reading and looking for information, I enabled the write cache, disabled write-cache buffer flushing, and turned the equivalent setting on in the Intel Rapid Storage Technology panel. After doing this the write speed increased to 25-30 MB/s.

     I have noticed that the physical sector size is 4096 bytes (usual on these 4TB disks) but the logical sector size is 512 bytes. Shouldn't those sizes match for good performance, and if so, how do I change them? (A quick way to check both values is sketched after this list.) I've tried deleting the partition and creating it again with different cluster sizes; the best performance comes with 64KB (the stripe size), but even then the actual speed copying a big MKV file from an SSD is only 50-60 MB/s, and in any case the capture still shows 512 bytes for the logical sector size. AS SSD Benchmark seems to say the partition is correctly aligned, and its speed results look fine, but as I said, real writes never exceed 58-59 MB/s. I attach captures of fdisk (I really don't know whether it's well or badly aligned) and of ATTO Disk Benchmark. These six disks were previously installed in a NAS with a write speed higher than 80 MB/s, so where is the problem here? Many thanks in advance.
  5. Can someone please confirm what it means to "initialize" a virtual drive on an LSI MegaRAID card? I've read in some places that it writes zeroes to all the drives (Fast Init only does the first/last 10 MB; Slow and Background Init cover the entire drive). Assuming I'm correct, if I'm setting up a RAID 1 or RAID 10 with SSDs that I've just secure erased, wouldn't writing zeroes to the drives (a) be total overkill, since the drives are already consistent, and (b) severely impact performance until GC has time to clean things up? I've also heard that performance may be degraded if the RAID card isn't sure the drives are consistent. Is this true? If so, would running a consistency check shortly after setting up the virtual drive (as sketched after this list) solve that? Many thanks in advance!
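
For the queue-depth sweep described in result 1, a minimal fio job file might look like the sketch below. This is an assumption-laden reconstruction, not the actual StorageReview configuration: the device node /dev/sdb and the 60-second runtime are placeholders, and the mixed-workload section writes to the device, so don't point it at a disk with data on it.

    ; 4k random read and 8k 70/30 mixed random workloads on one raw device
    [global]
    ioengine=libaio      ; Linux asynchronous I/O
    direct=1             ; bypass the page cache so the SSD itself is measured
    numjobs=1            ; one job per test, as in the post
    runtime=60
    time_based=1
    filename=/dev/sdb    ; placeholder device node -- point at the SSD under test

    [4k-randread]
    stonewall            ; run the sections one after another, not in parallel
    rw=randread
    bs=4k
    iodepth=32           ; rerun with iodepth=1..32 for the full sweep

    [8k-randrw-70]
    stonewall
    rw=randrw
    rwmixread=70         ; 70% reads / 30% writes
    bs=8k
    iodepth=32

Run it with "fio jobfile.fio". If direct=1 is dropped, buffered I/O through the page cache can make single-disk numbers misleading.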
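
On the sector-size question in result 4: on a 512e Advanced Format drive such as the WD Red 4TB, a 512-byte logical sector on top of a 4096-byte physical sector is by design and cannot be changed; what usually matters for performance is that the partition is aligned. A quick way to inspect both values on Windows 8.1 is shown below (the drive letter D: is a placeholder for the RAID volume):

    :: Run from an elevated command prompt
    fsutil fsinfo ntfsinfo D:
    :: Lines of interest in the output (values are examples):
    ::   Bytes Per Sector          : 512    (logical sector size)
    ::   Bytes Per Physical Sector : 4096   (physical sector size)
    ::   Bytes Per Cluster         : 65536  (the NTFS allocation unit chosen at format time)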
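
For the consistency-check question in result 5, a hedged sketch using the MegaCLI utility (the exact binary name and flags vary by version and platform, and the logical drive and adapter numbers below are assumptions):

    # Start a consistency check on logical drive 0 of adapter 0
    MegaCli64 -LDCC -Start -L0 -a0

    # Poll the check's progress
    MegaCli64 -LDCC -ShowProg -L0 -a0

On StorCLI-era cards the equivalent would be along the lines of "storcli64 /c0/v0 start cc".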