pietrop


Everything posted by pietrop

  1. Hi all, I've got 6 Crucial MX300 525GB drives connected in JBOD mode to a ServeRAID M5110e in an IBM x3650 M4 server. Configuration of the system:
    - 2 x Intel Xeon E5-2690 2.90GHz
    - 96GB DDR3 1333MHz
    - Ubuntu 16.10 Server, kernel 4.8.x
    I've done some intensive testing using the fio configurations found on your website. The workloads are 4k random reads and 8k random reads/writes (70% mix). Instead of running parallel jobs I decided to go with 1 job per test and tested IO depths from 1 to 32. My goal was to find the best OS settings for these SSDs in an md software RAID10 configuration. I tested single disks, RAID10 and RAID0 to see whether performance scales, and it does, so md isn't the issue. Now I'm stuck on single-disk performance, because the drives average a maximum of ~40k IOPS. I'm probably missing something in the whole picture, but shouldn't these disks reach 80k to 90k IOPS? Am I testing the wrong way, or does the problem lie in the driver or a bad system configuration? I decided to buy them based on your reviews of the 750GB version, but they have never exceeded 40k IOPS... Thank you all Pietro
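    PS: for reference, this is the shape of the fio command I ran against each single disk (a sketch of my setup; the device name is just an example, and running fio against a raw device is destructive):

      sudo fio --name=4k-randread --filename=/dev/sdb \
        --ioengine=libaio --direct=1 --rw=randread --bs=4k \
        --iodepth=32 --numjobs=1 --runtime=60 --time_based \
        --group_reporting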
  2. Crucial MX300 - bad performance

    Will I be able to connect the 12Gb/s connectors to my 6Gb/s expander (backplane) simply by using the right cables? Are the two interfaces compatible, or should I buy a 12Gb/s backplane? PS: My purpose is to use md (Linux software RAID), which is really fast from what I've seen.
  3. Crucial MX300 - bad performance

    Hi Kevin, thank you for your response. I needed a bunch of affordable SSDs that reach the performance shown in your tests (80k to 90k IOPS each). This machine was built for testing and development with databases and computational software, and I was on a budget (Italy, plus UK Black Friday offers and an eBay coupon). I posted a parallel thread on the HBA forum where you just answered. Probably the RAID controller is the bottleneck. I ALWAYS observe half the expected IOPS on single SSDs in my tests, at any IO depth and on both the 4k and 8k workloads, compared to your benchmarks and the ones I found on the net. Here is a graph showing the average IOPS for the 4k and 8k fio workloads available on this website. The tests were run before over-provisioning. The OP process improved low-IO-depth performance, which is now a little faster, but the drives never exceeded the ~40k IOPS limit.
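    In case anyone asks how the over-provisioning was done: one way is to shrink the drive's visible capacity with hdparm (a sketch; the device name and sector count are examples for a 525GB drive, and the second command is destructive and usually needs a power cycle to take effect):

      sudo hdparm -N /dev/sdb   # show current/native max sectors first
      sudo hdparm -Np820000000 --yes-i-know-what-i-am-doing /dev/sdb   # cap at ~420GB (~20% OP)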
  4. Hi all, I recently bought a used IBM x3650 M4 for testing and development. This server ships with a built-in RAID card, the ServeRAID M5110e. I'm using it in JBOD mode on all disks (2 x HDDs and 6 x SSDs) and configured software RAID on Ubuntu using mdadm. I would like to know from experts like you whether that controller could be a bottleneck and whether I should buy a dedicated HBA card. In other words, how does that controller affect performance compared to a dedicated 6Gb/s SAS/SATA HBA? Another question: will a 12Gb/s HBA improve performance even with 6Gb/s SAS/SATA III disks? Thank you very much Pietro
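    To be concrete about the software RAID side, this is essentially the md setup I mean (a sketch; device names are examples):

      sudo mdadm --create /dev/md0 --level=10 --raid-devices=6 /dev/sd[b-g]
      cat /proc/mdstat   # watch the initial sync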
  5. They are 6 x Crucial MX300 525GB. I should get 80k to 90k IOPS from each of them, as your benchmarks (and other sites') show. This means a theoretical maximum of about 480k-540k IOPS in a RAID0 configuration and 240k-270k in RAID10. In practice they reach 120k IOPS in RAID10 (half of 240k) for 4k workloads at IO depths of 16 and 32. On the IBM website I found that these HBAs are officially supported by my server:
    - N2115 (LSI SAS2308): https://lenovopress.com/tips1061-n2115-6gb-internal-hba
    - N2215 (LSI SAS3008): https://lenovopress.com/tips1075-n2215-12gb-internal-hba
    The N2215 should support high IOPS, and that is worth considering because I have room for another 8 SSDs. These boards have 2 x 4 SATA channels, so I'd need 2 HBAs to manage 16 disks. At the moment I won't saturate the N2215, but I probably will saturate the N2115 model. I was wondering if you can suggest other boards that give good results, so that the bottleneck becomes the SSDs and not the controller itself. Also, 12Gb/s requires different cables, so I would like to know whether the N2115 should be OK (letting me use the cables I have) or whether I should go for the more powerful N2215. Thanks a lot for your time Pietro
  6. Hi there, I'm looking for a used server to build a development/testing environment for some high-performance PostgreSQL and Spark applications. After long research on the Internet, and based on my (limited) experience with such machines (I use 3 different servers of this kind, all bought before 2013), I decided to go with an IBM x3650 M4. The budget is limited to 2500€. The best offer I could get is the following:
    - 2 x Intel Xeon E5-2690 (20MB cache, 8 cores each, 2.90GHz, Turbo Boost @ 3.8GHz)
    - 96GB RAM (12 x 8GB PC3-10600 @ 1333MHz, ECC, Registered, Low Voltage)
    - IBM ServeRAID M5110e RAID controller card
    - 2 x 900W AC Platinum power supplies
    - 2 x 146GB SAS 15k rpm HDD 2.5"
    This will cost me 1200$ + 250$ shipping = 1450$ (about 1400€). I would like to add a secondary backplane (about 80€), some empty SFF HDD trays and some spare fans (34€ each).

    RAM
    Based on my typical workload (data warehousing and parallel processing), should I upgrade to PC3-12800 (1600MHz) RAM? From what I understand, I would gain bandwidth but lose 2 CAS latency points (see the quick calculation at the end of this post) and increase energy consumption. This upgrade would also cost me 300$ more for the same overall RAM size (96GB). The specifications of the two supported types of RAM are:
    - 96GB (12 x 8GB) PC3-10600 @ 1333MHz, CAS=9, Low Voltage (1.35V) (about 300$)
    - 96GB (12 x 8GB) PC3-12800 @ 1600MHz, CAS=11, Standard Voltage (1.5V) (about 600$)

    STORAGE
    This server can host 16 x 2.5" HDDs or SSDs by adding a secondary backplane. I bought 4 x Crucial MX300 525GB SSDs (best price/performance using various discounts and Black Friday offers) and the server ships with 2 x 146GB 15k rpm HDDs. My idea is to buy another 2 Crucial MX300s to reach good performance in RAID 10 (6 SSDs total), and buy 4 "garbage" HDDs to be used as a staging area for data. Will the built-in ServeRAID M5110e card be able to handle 6 SSDs and 6 HDDs? It has 512MB of RAM. Should I buy a different RAID controller? Should I use software RAID? What do you suggest?

    POWER CONSUMPTION
    At full throttle (both CPUs running at 100%) with this configuration, how can I estimate the power consumption? I know that the CPUs are 135W each and that SSDs consume about 80% less than HDDs. I also know that a 900W PSU suffices and that the minimum requirement for these CPUs is 750W.

    IS IT WORTH IT?
    Today, is it worth buying a similar configuration for less than 2000$? I compared the Dell R720 and HP ProLiant G8 (same era and generation) and this IBM model seems the most affordable and valid solution. For the next generation of servers (Intel E5-26xx v2 family) I should add at least 1500$, but RAM goes up to 1866MHz and cache size to 30MB, with better power consumption. Thank you all. I hope I've posted this in the right section of the forum. Pietro Pugni
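    PS: as a sanity check on the RAM question, the absolute CAS latency works out nearly identical for the two kits (my own back-of-the-envelope numbers, using latency in ns = CL x 2000 / MT/s):

      echo "scale=2; 9 * 2000 / 1333" | bc    # PC3-10600 CL9  -> 13.50 ns
      echo "scale=2; 11 * 2000 / 1600" | bc   # PC3-12800 CL11 -> 13.75 ns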
  7. Hi, I'm new to this forum. I've been an OS X user for years and am currently using a Mac Mini with a Promise Pegasus R4 via Thunderbolt 1 as my development environment. The main usage is intensive database operations (PostgreSQL) on healthcare data (billions of records). My system is at least 12.5% faster than a basic Dell PowerEdge T420 in terms of time spent on any kind of DB operation, even though the Mac Mini has just 8GB of RAM (128GB on the Dell). The RAID configuration I use is RAID 5 with 4 HDDs at 7.2k RPM (1TB each, 32MB of cache, Hitachi). I'm currently looking for a hardware upgrade for my development environment and was attracted by this LaCie unit: http://www.storagereview.com/lacie_8big_rack_thunderbolt_2_review This means buying a new Mac Mini with Thunderbolt 2 (and a maximum of only 16GB of RAM) and that's OK, but I'm not really inclined to buy a Mac Pro for about 5,000€ (the Mac Mini at its top configuration is about 1,400€). I was wondering if there's a better-performing solution than the LaCie (OS X or other Unix OSs) at the same price. This world is full of alternatives and I've looked in all directions, but I'm a little confused. Should I wait a few months for a new Mac Mini model and (probably) new Thunderbolt 2 solutions? Thank you in advance PS: I have no FC/SAS interface
  8. Wow! That's amazingly fast! I was thinking about using ZFS on Ubuntu, but I don't know if it's more reliable than XFS. I tried enabling/disabling HT but it didn't impact performance in any way: execution time stayed the same. I'm writing to the official PostgreSQL performance list and we are still searching for a solution (I also tried NUMA tuning for dual-socket motherboards). I found a post comparing an Intel i5 vs an Intel Xeon across different PostgreSQL versions, and the i5 came out faster: http://blog.pgaddict.com/posts/performance-since-postgresql-7-4-to-9-4-pgbench On my Mac Mini I have an i7 and on the Dell I have a Xeon, but the i7 is from late 2011 while the Xeon is from late 2013! I'll update this post as soon as I get more responses from the official pg list. Thank you so much for your benchmarks. PS: on the Dell it's a PERC H710. The H710P is the performance version, which is not installed on the machine. The main difference is cache size: 512MB for the H710 and 1GB for the H710P.
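    In case it helps, the NUMA tuning I tried was along these lines (a sketch of commonly suggested settings, not a recommendation; the data directory path is an example):

      sudo sysctl vm.zone_reclaim_mode=0   # avoid per-node reclaim stalls on dual sockets
      sudo -u postgres numactl --interleave=all pg_ctl -D /var/lib/postgresql/9.4/main start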
  9. Hi Maxtor, thank you VERY much for your help. Before taking your time I'd prefer to complete some tests on Postgres. At the moment on dba.stackexchange.com they haven't given me much help. I'm following the "recompiling way" after reading http://blog.pgaddict.com/posts/compiler-optimization-vs-postgresql because I was installing from binaries and not from source. It states that different compilers, compiler versions and compiler options can give a 15% gain in Postgres performance. There are also other parameters that determine some Postgres file and segment sizes on disk which aren't covered in that specific article. There are many possible combinations, so it will take me a lot of time to find the best one. I'm also following this blog, which has a lot of useful tests, some of which cover filesystem options and Postgres compile options: http://www.fuzzy.cz/en/articles/filesystem-vs-postgresql-block-size/. Just to say that the problem is probably on the Postgres side and not on the hardware side. One thing you might know: what's the IO performance impact of disabling Hyper-Threading? This is the one option I haven't tried yet, and I was wondering if it matters as much with HDDs as it reportedly does with SSDs (our server has HDDs in it). Again, thank you so much for your help. Pietro edit: I've found a link with some benchmarks (TPS) after disabling HT: http://www.postgresql.org/message-id/53F4F36E.6050003@agliodbs.com. He gains about 75% performance with HT disabled. Impressive.
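    For concreteness, the "recompiling way" looks roughly like this (a sketch under my assumptions; the version, flags and paths are examples, not a tested recipe):

      wget https://ftp.postgresql.org/pub/source/v9.4.0/postgresql-9.4.0.tar.bz2
      tar xjf postgresql-9.4.0.tar.bz2 && cd postgresql-9.4.0
      CFLAGS="-O2 -march=native" ./configure --prefix=/usr/local/pgsql --with-blocksize=8
      make -j8 && sudo make install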
  10. Pardon me for the long delay. In the meantime I've collected some stats. We've changed the BIOS and other configuration parameters, gaining some performance. For example, a set of queries used for data loading and initialization, which covers all the possible database activities (seq/rand read/write on small/big chunks of data), went from 240 seconds to 195 seconds. The same set of queries runs in 40 seconds on my MacMini+Promise Pegasus. That's a HUGE difference and I can't reduce it any further. I've tried a lot of different combinations in Postgres' configuration file, from RAM to disk parameters. I've also tested a transaction used in our real-world application, which does some JOINs and writes records to disk. On the Dell PE T420 the execution time was 2m21s, while on the MacMini it was 1m44s. This is more impressive if you consider that the MacMini has 6.25% of the T420's RAM. The PostgreSQL versions differ too: on the T420 I have a brand-new 9.4 installation, while the MacMini has the original 9.0 version, which is much older and slower. On the MacMini I have OS X Server 10.7.5, and the T420 runs Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-48-generic x86_64). I followed the official Postgres recommendations for kernel configuration; I've set Write Back and Read Ahead in the RAID cache configuration and disabled the disk cache (it slightly affected performance and is a recommended practice). I've also defragmented the XFS filesystem on the RAID disk array and separated the database log (the WAL - Write Ahead Log) from the data directory (the two reside on two different RAID virtual disks). What's more interesting (and paradoxical) are the results returned by a tool called pg_test_fsync, which is used to evaluate the best sync method for Postgres. The best choice is determined by the performance results returned by this tool, so one can make a "scientific" decision.

    T420:
      Compare file sync methods using one 8kB write:
      (in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync        23358.758 ops/sec      43 usecs/op
        fdatasync            21417.018 ops/sec      47 usecs/op
        fsync                21112.662 ops/sec      47 usecs/op
        fsync_writethrough         n/a
        open_sync            23082.764 ops/sec      43 usecs/op
      Compare file sync methods using two 8kB writes:
      (in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync        11737.746 ops/sec      85 usecs/op
        fdatasync            19222.074 ops/sec      52 usecs/op
        fsync                18608.405 ops/sec      54 usecs/op
        fsync_writethrough         n/a
        open_sync            11510.074 ops/sec      87 usecs/op
      Compare open_sync with different write sizes:
      (This is designed to compare the cost of writing 16kB in different write open_sync sizes.)
         1 * 16kB open_sync write    21484.546 ops/sec      47 usecs/op
         2 *  8kB open_sync writes   11478.119 ops/sec      87 usecs/op
         4 *  4kB open_sync writes    5885.149 ops/sec     170 usecs/op
         8 *  2kB open_sync writes    3027.676 ops/sec     330 usecs/op
        16 *  1kB open_sync writes    1512.922 ops/sec     661 usecs/op
      Test if fsync on non-write file descriptor is honored:
      (If the times are similar, fsync() can sync data written on a different descriptor.)
        write, fsync, close    17946.690 ops/sec      56 usecs/op
        write, close, fsync    17976.202 ops/sec      56 usecs/op
      Non-Sync'ed 8kB writes:
        write                 343202.937 ops/sec       3 usecs/op

    MacMini:
      Direct I/O is not supported on this platform.
      Compare file sync methods using one 8kB write:
      (in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync         3780.341 ops/sec     265 usecs/op
        fdatasync             3117.094 ops/sec     321 usecs/op
        fsync                 3156.298 ops/sec     317 usecs/op
        fsync_writethrough     110.300 ops/sec    9066 usecs/op
        open_sync             3077.932 ops/sec     325 usecs/op
      Compare file sync methods using two 8kB writes:
      (in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync         1522.400 ops/sec     657 usecs/op
        fdatasync             2700.055 ops/sec     370 usecs/op
        fsync                 2670.652 ops/sec     374 usecs/op
        fsync_writethrough      98.462 ops/sec   10156 usecs/op
        open_sync             1532.235 ops/sec     653 usecs/op
      Compare open_sync with different write sizes:
      (This is designed to compare the cost of writing 16kB in different write open_sync sizes.)
         1 * 16kB open_sync write     2634.754 ops/sec     380 usecs/op
         2 *  8kB open_sync writes    1547.801 ops/sec     646 usecs/op
         4 *  4kB open_sync writes     801.542 ops/sec    1248 usecs/op
         8 *  2kB open_sync writes     405.515 ops/sec    2466 usecs/op
        16 *  1kB open_sync writes     204.095 ops/sec    4900 usecs/op
      Test if fsync on non-write file descriptor is honored:
      (If the times are similar, fsync() can sync data written on a different descriptor.)
        write, fsync, close     2747.345 ops/sec     364 usecs/op
        write, close, fsync     3070.877 ops/sec     326 usecs/op
      Non-Sync'ed 8kB writes:
        write                   3275.716 ops/sec     305 usecs/op

    My objective here is to understand whether it's worth upgrading to a solution similar to the T420. If you don't mind I'll open a question on dba.stackexchange.com for the Postgres configuration part, while here I'm digging into the "disks" aspects of the problem. I think the Promise RAID controller is much more performant than the Dell PERC H710, but I may be wrong. Thank you again for your support and help. Pietro
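    PS: based on those numbers, on the T420 I would pick the winner of the single 8kB write test like this (a sketch; ALTER SYSTEM needs Postgres 9.4+, which is what the T420 runs, and wal_sync_method only needs a config reload):

      sudo -u postgres psql -c "ALTER SYSTEM SET wal_sync_method = 'open_datasync';"
      sudo -u postgres psql -c "SELECT pg_reload_conf();"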
  11. Many thanks Kevin. Do you know whether MegaCLI lets me see if NCQ (Native Command Queuing) is enabled? I've read about someone who had to disable it on a PowerEdge (I forget the model) to make it run at a satisfying speed with VMware. I've tried all possible combinations of Write Cache / Read Look Ahead Cache / NCQ on my Promise Pegasus, and the best bulk-load performance is obtained by enabling Write Cache and NCQ. I haven't done tests with huge reads yet, but I leave Read Look Ahead Cache activated. So on my machine NCQ isn't a problem, and I'd like to test it on the PowerEdge T420. The big tables are read-only and I have some indexes on them. Now I'm trying to change their architecture to be partitioned (table partitioning in PostgreSQL). This can solve huge-index issues (they don't fit in RAM and are hard to maintain) and improve sequential/random read performance, because the DB planner knows where certain values are stored (for example, dates). I'm waiting for the loading process to finish, as I am repopulating the whole DB from scratch. I've made a PHP script that generates a BASH script of about 10k lines of code to track every SQL statement's execution time. It executes partitioning, inserts, indexing, clustering and vacuuming. All of these operations stress the RAID in every aspect (and so the hard disks). I'm expecting improvements in both bulk loading and data retrieval tasks. I need to mention another issue, which is related to DB configuration. I don't know if you're used to PostgreSQL, but it can be a kind of "race car" when it comes to performance, because you have something like 100 configuration variables. As I said before, I've been using this setup for 3 years and some of these variables are still a little obscure to me. But I've read books, presentations, documentation and sites like stackexchange.com to achieve a decent knowledge of it. For example, there's an option that disables synchronization, resulting in a speed-up of RAID performance but affecting reliability (I never disable it); you can choose the sync method; and so on. So performance from a DB point of view is very relative. If I use a tool like AJA or similar, the results may be quite distant from the real ones I get using the DB (they still remain a reference point for comparing my system against others). What kind of benchmark indexes/stats can I trust, and how can I collect them? This question is from a "new buy" perspective, so I can choose a new system. I was thinking about a Dell PowerEdge Rxxx solution, but I've seen absurd hard disk costs. What's wrong with them? There are SATA HDs at about 1k$ each.
    EDIT: I've installed the MegaCLI utility on the Dell PowerEdge T420. The drives are SAS, so NCQ isn't available as an option for them, am I right? I saw it says "NCQ No" (disabled).
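    For reference, commands along these lines should show the NCQ property (a sketch; the binary path varies by install, and the property names are from the LSI documentation as far as I can tell):

      sudo /opt/MegaRAID/MegaCli/MegaCli64 -AdpGetProp NCQDsply -aALL
      sudo /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL   # per-drive details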
  12. By executing the command cat /proc/scsi/scsi I obtain the following:

      Attached devices:
      Host: scsi0 Channel: 02 Id: 00 Lun: 00
        Vendor: DELL     Model: PERC H710         Rev: 3.13
        Type:   Direct-Access                     ANSI  SCSI revision: 05
      Host: scsi0 Channel: 02 Id: 01 Lun: 00
        Vendor: DELL     Model: PERC H710         Rev: 3.13
        Type:   Direct-Access                     ANSI  SCSI revision: 05
      Host: scsi5 Channel: 00 Id: 00 Lun: 00
        Vendor: PLDS     Model: DVD-ROM DH-16D7S  Rev: WD11
        Type:   CD-ROM                            ANSI  SCSI revision: 05

    So the RAID controller is probably a PERC H710, right? It's a good starting point for performance tuning.
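    Another quick way to confirm the controller model from the shell (just for completeness):

      lspci | grep -iE "raid|megaraid|lsi"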
  13. I've asked, and yes, I can access all the administration stuff. My workplace is generally far away from the server, so I'm not used to accessing it locally (I saw it today for the first time in months) and I've never used this kind of server solution (which is simple, but I come from even simpler server/storage solutions like the Mac Mini or PCs). I'll look for proper documentation online (I don't know what iDRAC is, but I've seen a video on YouTube explaining its usage). Does it give access to the RAID configuration, or is there a dedicated BIOS/software for that? Judging from the benchmarks in your review, the maximum speed reached is around 1300MB/s in RAID 0. That's more than Thunderbolt 1 bandwidth, which is 10Gb/s. I'm looking for a solution able to reach that speed (or even more) at a reasonable price. The total size of the DB in its smallest version is around 1TB, but it will grow over time by unpredictable amounts. I surely need a minimum of 6TB (currently I have 2TB in total across the MacMini and the PowerEdge). The 48TB solution from LaCie is superb because it allows me to do a lot of experimentation. PostgreSQL takes a lot of disk space for indexes: they generally weigh a lot more than the table they belong to. There are also operations such as clustering or vacuuming that temporarily consume a lot of disk space. Many thanks to both of you for your support and suggestions!
  14. I forgot to say that PostgreSQL is single-core per query: a query runs on a single core, so it's not a core-count issue. Also, my applications generally serve a single user connection, so multiple cores aren't relevant to performance.
  15. Brian, I've made a terrible mistake about the PowerEdge. It's a PowerEdge T420, not an R720. My fault. I can't physically access this unit, but I have root privileges remotely. Here are some specs:
    - 2 x Xeon E5-2640v2 2GHz (8x2 cores + 8x2 logical cores)
    - 128GB RAM DDR3-1600
    - 12 onboard HDs, 300GB SAS 6Gb/s 10k RPM
    - RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2208 [Thunderbolt] (rev 05) [ RAID5 - XFS ]
    I'm not sure how to access/configure the RAID controller on this machine (Ubuntu 14.04.1 LTS). The Mac Mini has the following specs:
    - 1 x Intel i7 2GHz (4 cores + 4 logical cores)
    - 8GB RAM DDR3-1333
    - 2 x internal SATA III HDs, 512GB, 7.2k RPM
    - Promise Pegasus R4 (4TB: 4 x 1TB SATA III 6Gb/s 7.2k RPM HDs) on Thunderbolt 1 [ RAID5 - HFS ]
    The Promise performs better than Dell's RAID and is more stable (less variable) in write/read rates. Dell's behaviour is strange in my opinion, but I'm not able to understand why (maybe some caching/queuing problems?). I've been working with Postgres for 3 years and have used various deployment machines, which have always performed worse than the Mac Mini. With the T420 I was expecting an improvement, but as I said it's at least 12.5% slower in every database task. Probably there's some RAID configuration I'm missing (a RAID BIOS that I cannot access remotely). Upgrading disks is a good idea and I was thinking about SSDs, but after some Googling I discovered that they are not officially supported and not fully compatible with the Promise Pegasus R4 (while the new version doesn't have any problems). SSDs also cost a lot more (Samsung's top 1TB model is around 650€ per disk). So there are two questions here:
    - A Thunderbolt/OS X solution versus the alternatives. The price range should be the same as a LaCie 8big 48TB + MacMini (2500€+1400€ = 3900€) or, in the worst case, a LaCie 8big 48TB + MacPro (~7500€)
    - T420 vs MacMini+Pegasus R4
    Many thanks for your replies and patience. I hope I'm not being OT
  16. Hi, thank you for the reply. I want to stay with OS X because I develop on OS X (I use a MacBook Pro and an iMac) and having a Mac server unit is better for me (less headache and more compatibility). Also, Thunderbolt is Apple-only, if I'm not wrong. I want to upgrade to a new storage unit for performance reasons. Some tasks take a long time for the kind of applications I build (statistical data extraction and epidemiologic applications). Index creation or index clustering may take from 1 to 3 days to complete on my server, and much longer on the Dell PowerEdge. My development environment is now 2.5 years old and I thought it was time to upgrade to Thunderbolt 2 (or any other faster solution at a good price).
  17. Hi, very useful review. It's probably the first and only review I've found on the Internet about this product. Are the following benchmarks the ones provided by LaCie, or are they your results? In your review you wrote: What was the hardware configuration of the MacBook Pro used for the test? Can you repeat the test with the AJA software used by LaCie? These results differ a little from the previous benchmarks (1150MB/s and 1060MB/s). I'm looking at a RAID5 solution for a database environment. How much can these speeds vary, in your opinion? I mean, is it possible to configure many options for the disks and the logical array of the RAID? I'm used to a Promise Pegasus, where I set up the cache parameters and the block size (and probably a few other parameters). In your opinion, how reliable is this LaCie product? Have you used it for a long period of time under stress conditions? And how good is LaCie's support? I experienced three disk failures with my Promise Pegasus unit and bad support. This is why I'm asking so many questions Thank you Pietro