pietrop

Upgrading from a Promise Pegasus R4 to a LaCie 8big?

Hi, I'm new to this forum.

I've been an OS X user for years, and I'm currently using a Mac Mini with a Promise Pegasus R4 over Thunderbolt 1 as my development environment.

The main usage is intensive database work (PostgreSQL) on healthcare data (billions of records). My system is at least 12.5% faster than a basic Dell PowerEdge R720 on every kind of DB operation, even though the Mac Mini has just 8GB of RAM (the Dell has 128GB). The RAID configuration I use is RAID 5 with four 1TB Hitachi HDDs at 7,200 RPM with 32MB of cache each.
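As a quick aside on the array sizing (my own arithmetic, assuming the four 1TB disks above): RAID 5 gives up one disk's worth of capacity to parity.

```shell
# RAID 5 usable capacity: one disk's worth of space goes to parity,
# so usable = (n - 1) x disk_size.
n=4; disk_tb=1
echo "usable: $(( (n - 1) * disk_tb ))TB of $(( n * disk_tb ))TB raw"
```

So the Pegasus R4 here exposes roughly 3TB usable out of 4TB raw.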

I'm currently looking for a hardware upgrade for my development environment and was attracted by this LaCie unit: http://www.storagereview.com/lacie_8big_rack_thunderbolt_2_review

That means buying a new Mac Mini with Thunderbolt 2 (and a maximum of only 16GB of RAM :( ), which is fine, but I'm not really inclined to buy a Mac Pro for about 5.000 € (a Mac Mini in its top configuration is about 1.400 €).

I was wondering whether there's a better-performing solution than the LaCie (on OS X or another Unix OS) in the same price range. This world is full of alternatives and I've looked in all directions, but I'm a little confused. Should I wait a few months for a new Mac Mini model and (probably) new Thunderbolt 2 solutions?

Thank you in advance :)

PS: I have no FC/SAS interface


I'm guessing your limitation isn't storage throughput. Can you tell us more about the issues you're facing and why you want to stay on the Mac platform?


Hi, thank you for the reply.

I want to stay with OS X because I develop on OS X (I use a MacBook Pro and an iMac), and having a Mac server unit is better for me (fewer headaches and more compatibility). Also, Thunderbolt is Apple-only, if I'm not wrong.

I want to upgrade to a new storage unit for performance reasons. Some tasks take a long time for the kind of applications I build (statistical data extraction and epidemiologic applications). Index creation or index clustering can take from 1 to 3 days to complete on my server, and much longer on the Dell PowerEdge. My development environment is now 2.5 years old and I thought it was time to upgrade to Thunderbolt 2 (or any other faster solution at a good price).


Thunderbolt is available on PCs, but not all storage vendors support Windows yet.

I just don't think storage is your problem. The Mac is woefully underpowered for the job. I'm not sure which R720 you're comparing against, but if it has a low core count, that would be an issue. Unless you want to go to 10K disks or SSDs, I just don't think storage is the right place to invest for this issue.


Brian, I've made a terrible mistake about the PowerEdge. It's a PowerEdge T420, not an R720. My fault.

I can't physically access this unit, but I have root privileges from remote. Here are some specs:

- 2 x Xeon E5-2640 v2 @ 2GHz (2 x 8 cores + 2 x 8 logical cores)
- 128GB RAM DDR3-1600
- 12 onboard HDs: 300GB SAS 6Gb/s, 10k RPM

- RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2208 [Thunderbolt] (rev 05) [ RAID5 - XFS ]

I'm not sure how to access/configure the RAID controller on this machine (Ubuntu 14.04.1 LTS).

The Mac Mini has the following specs:

- 1 x Intel i7 2GHz (4 cores + 4 logical cores)

- 8GB RAM DDR3-1333

- 2 x internal SATAIII HD 512GB 7.2k RPM

- Promise Pegasus R4 (4TB: 4 x 1TB HD SATAIII 6Gb/s 7.2k RPM) on Thunderbolt 1 [ RAID5 - HFS ]

The Promise performs better than the Dell's RAID and is more stable (less variable) in write/read rates. The Dell's behaviour is strange in my opinion, but I'm not able to understand why (maybe some caching/queuing problem?).

I've been working with Postgres for 3 years and have used several deployment machines, which always performed worse than the Mac Mini. With the T420 I was expecting an improvement, but as I said it's at least 12.5% slower in every database task. Probably there's some RAID configuration I'm missing (a RAID BIOS that I cannot access remotely).

Upgrading the disks is a good idea and I was thinking about SSDs, but some Googling revealed that they are not officially supported and not fully compatible with the Promise Pegasus R4 (while the new version doesn't have any problem). SSDs also cost a lot more (the top-level 1TB model from Samsung is around 650€ per disk).

So there are two questions here:

- A Thunderbolt/OS X solution versus the alternatives. The price range should be the same as a LaCie 8big 48TB + Mac Mini (2500€ + 1400€ = 3900€) or, in the worst case, a LaCie 8big 48TB + Mac Pro (~7500€)

- T420 vs MacMini+Pegasus R4

Many thanks for your replies and patience. I hope I'm not going off-topic.


I forgot to say that PostgreSQL uses a single core per query: each query runs on one core, so it's not a core-count issue. Also, my applications generally serve a single user connection, so multiple cores aren't relevant to performance.


Back to the initial point though: Thunderbolt 2 won't make your current configuration go faster; there's no way you're saturating the interface. So if you want the current rig to go faster, you need faster storage. There are many SSD options. How big is the data set?


Do you have access to the iDRAC connection for that server? Through that management console you'd be able to get a quick glance at the virtual drive settings for that RAID. Something doesn't add up, since that Dell should easily out-perform the Mac Mini in a disk-intensive task with that setup.


Do you have access to the iDRAC connection for that server? Through that management console you'd be able to get a quick glance at the virtual drive settings for that RAID. Something doesn't add up, since that Dell should easily out-perform the Mac Mini in a disk-intensive task with that setup.

I've asked, and yes, I can access all the administration stuff. My workplace is generally far away from the server, so I'm not used to accessing it locally (I saw it today for the first time in months), and I've never used this kind of server solution (which is simple, but I come from even simpler server/storage setups like Mac Minis or PCs). I'll search for proper documentation online (I don't know what iDRAC is, but I've seen a video on YouTube explaining its usage). Does it give access to the RAID configuration, or is there a dedicated BIOS/software for that?

Back to the initial point though: Thunderbolt 2 won't make your current configuration go faster; there's no way you're saturating the interface. So if you want the current rig to go faster, you need faster storage. There are many SSD options. How big is the data set?

Looking at the benchmarks in your review, the maximum speed reached is around 1300MB/s in RAID 0. That's more than Thunderbolt 1 bandwidth, which is 10Gb/s. I'm looking for a solution able to reach that speed (or even more) at a reasonable price.
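As a sanity check on those numbers (my own conversion, decimal units):

```shell
# Thunderbolt line rates converted to MB/s (decimal, so 1 Gb/s = 125 MB/s).
echo "Thunderbolt 1: $((10 * 1000 / 8)) MB/s"   # 1250
echo "Thunderbolt 2: $((20 * 1000 / 8)) MB/s"   # 2500
```

So a 1300MB/s RAID 0 array would indeed just exceed a single Thunderbolt 1 link, while Thunderbolt 2, which bonds the two 10Gb/s channels into one 20Gb/s link, leaves plenty of headroom.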

The total size of the DB in its smallest version is around 1TB, but it will grow over time by unpredictable amounts. I surely need a minimum of 6TB (at the moment I have 2TB in total across the Mac Mini and the PowerEdge). The 48TB solution from LaCie is superb because it allows me to do a lot of experimentation. PostgreSQL takes a lot of disk space for indexes: they generally weigh a lot more than the table they belong to. There are also operations such as clustering or vacuuming that temporarily consume a lot of disk space.

Many thanks to both of you for your support and suggestions! ;)


By executing the command cat /proc/scsi/scsi I obtain the following:

Attached devices:
Host: scsi0 Channel: 02 Id: 00 Lun: 00
  Vendor: DELL     Model: PERC H710        Rev: 3.13
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi0 Channel: 02 Id: 01 Lun: 00
  Vendor: DELL     Model: PERC H710        Rev: 3.13
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi5 Channel: 00 Id: 00 Lun: 00
  Vendor: PLDS     Model: DVD-ROM DH-16D7S Rev: WD11
  Type:   CD-ROM                           ANSI  SCSI revision: 05

So the RAID controller is probably a PERC H710, right? That's a good starting point for performance tuning.
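The same fact can be cross-checked from the PCI bus, bypassing the SCSI layer (a hedged sketch; the exact output format varies by distro, and on this box the H710 should show up under its underlying LSI chip name):

```shell
# List PCI RAID controllers; the PERC H710 is branded LSI/Symbios
# MegaRAID SAS 2208 at this level, matching the /proc/scsi/scsi output.
lspci -nn | grep -i raid || echo "no RAID controller listed"
```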


I've asked, and yes, I can access all the administration stuff. My workplace is generally far away from the server, so I'm not used to accessing it locally (I saw it today for the first time in months), and I've never used this kind of server solution (which is simple, but I come from even simpler server/storage setups like Mac Minis or PCs). I'll search for proper documentation online (I don't know what iDRAC is, but I've seen a video on YouTube explaining its usage). Does it give access to the RAID configuration, or is there a dedicated BIOS/software for that?

It's a side-band management interface. Basically you go through a web browser, and from there you can load up a console window as if you had a monitor attached to the back of the computer. You can make some changes through it: power cycle the server and whatnot. The relevant part for you, though, is being able to restart the server and load up the PERC pre-boot menu to see how the RAID card and volumes are configured. It's also possible to install MegaCLI in Linux to gather some of that information, but that process isn't as simple as the iDRAC route, if you're able to take it.
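The MegaCLI route sketched out (hedged: the binary name and install path vary by package — MegaCli, MegaCli64, megacli — so adjust to whatever your install provides):

```shell
# Locate the MegaCli binary, falling back to the usual vendor install path.
MEGACLI=$(command -v MegaCli64 || command -v megacli || echo /opt/MegaRAID/MegaCli/MegaCli64)
if [ -x "$MEGACLI" ]; then
    "$MEGACLI" -AdpAllInfo -aALL     # adapter model, cache memory, firmware
    "$MEGACLI" -LDInfo -Lall -aALL   # virtual drives: RAID level, stripe size, cache policy
    "$MEGACLI" -PDList -aALL         # physical disks behind the controller
else
    echo "MegaCli not installed; use the iDRAC console / PERC pre-boot menu instead"
fi
```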


Looking at the benchmarks in your review, the maximum speed reached is around 1300MB/s in RAID 0. That's more than Thunderbolt 1 bandwidth, which is 10Gb/s. I'm looking for a solution able to reach that speed (or even more) at a reasonable price.

Many thanks to both of you for your support and suggestions! ;)

The issue, though, is that your workload isn't large-block sequential, I'm guessing. What sort of performance are you seeing now? I doubt you're topping out at the bandwidth headroom; it's more likely the disks are the bottleneck.


It's a side-band management interface. Basically you go through a web browser, and from there you can load up a console window as if you had a monitor attached to the back of the computer. You can make some changes through it: power cycle the server and whatnot. The relevant part for you, though, is being able to restart the server and load up the PERC pre-boot menu to see how the RAID card and volumes are configured. It's also possible to install MegaCLI in Linux to gather some of that information, but that process isn't as simple as the iDRAC route, if you're able to take it.

Many thanks, Kevin. Do you know whether MegaCLI can show me if NCQ (Native Command Queueing) is enabled? I've read about someone who had to disable it on a PowerEdge (I forget the model) to make it run at a satisfying speed with VMware. I've tried all possible combinations of Write Cache / Read Look Ahead Cache / NCQ on my Promise Pegasus, and the best bulk-load performance comes from enabling Write Cache and NCQ. I haven't yet done tests with huge reads, but I leave Read Look Ahead Cache enabled. So NCQ isn't a problem on my machine, and I'd like to test it on the PowerEdge T420.

The issue, though, is that your workload isn't large-block sequential, I'm guessing. What sort of performance are you seeing now? I doubt you're topping out at the bandwidth headroom; it's more likely the disks are the bottleneck.

The big tables are read-only and I have some indexes on them. Now I'm trying to change their architecture to a partitioned one (table partitioning in PostgreSQL). This can solve huge-index issues (they don't fit in RAM and are hard to maintain) and improve sequential/random read performance, because the planner knows where certain values are stored (for example, dates). I'm waiting for the loading process to finish, as I'm repopulating the whole DB from scratch. I've made a PHP script that generates a Bash script of about 10k lines to track every SQL statement's execution time. It executes partitioning, inserts, indexing, clustering and vacuuming. All of these operations stress the RAID (and thus the hard disks) in every way.
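For anyone following along, 9.x-era partitioning is inheritance plus CHECK constraints; a minimal sketch (the database, table and column names here are hypothetical, not my real schema):

```shell
# Write the DDL to a file for review, then run it with:
#   psql -d healthdb -f partition.sql
cat > partition.sql <<'SQL'
CREATE TABLE events (id bigint, event_date date, payload text);

-- One child table per year; the CHECK constraint is what lets the
-- planner skip irrelevant partitions (constraint_exclusion = partition).
CREATE TABLE events_2014 (
    CHECK (event_date >= DATE '2014-01-01' AND event_date < DATE '2015-01-01')
) INHERITS (events);

-- Per-partition indexes stay small enough to fit in RAM.
CREATE INDEX ON events_2014 (event_date);
SQL
grep -c 'CREATE TABLE' partition.sql
```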

I'm expecting improvements in both bulk loading and data retrieval tasks.

I need to mention another issue, which is about DB configuration. I don't know if you're used to PostgreSQL, but it can be a kind of "race car" when it comes to performance, because you have something like 100 configuration variables. As I said, I've been using this setup for 3 years and some of those variables are still a little obscure to me. But I've read books, presentations, documentation and sites like stackexchange.com to achieve a decent knowledge of it. For example, there's an option that can disable synchronization, speeding up the RAID at the cost of reliability (I never disable it); you can choose the sync method; and so on.
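For reference, the durability knobs I'm referring to look like this in postgresql.conf (these are real Postgres setting names; the values shown are just the safe defaults I keep):

```shell
# Print the relevant postgresql.conf fragment.
conf='synchronous_commit = on     # "off" trades crash-safety for commit speed
fsync = on                        # never disable on data you care about
wal_sync_method = fdatasync       # best picked empirically, e.g. with pg_test_fsync'
printf '%s\n' "$conf"
```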

So performance from a DB point of view is very relative. If I use software like AJA or similar, the results may be quite distant from the real ones I get using the DB (they still remain a reference point for comparing my system to others). What kind of benchmark figures/stats can I trust, and how do I collect them? I'm asking from a "new buy" perspective, so I can choose a new system. I was thinking about a Dell PowerEdge Rxxx solution, but I've seen absurd hard disk prices. What's wrong with them? There are SATA HDs at about $1k each.

EDIT

------------------------------------------------------------------------------------------------------------

I've installed the MegaCLI utility on the Dell PowerEdge T420. The HDs are SAS, so NCQ isn't available as an option for them. Am I right? I saw it says "NCQ No" (disabled).


What OS are you running on each system? I'm trying to take a couple of steps back, because one item might be the RAID configuration and the other the storage device configuration at the OS level.


I don't know that database at all, so I can't help on performance; maybe Kevin has ideas. On the system side, any new server will be better than what you have today, I'd say. Moving to flash inside the server will take it to night-and-day levels. All the OEMs charge a pretty penny for drives; there's good margin there. To be fair, they spend a lot of time qualifying drives, etc., but you can buy the server bare and put whatever you want in it. If you're even more price-sensitive you can build your own white-box server, or even a PC; you don't really need a "server" per se. The questions come down to budget more than anything else. This all assumes you need it to be local; there's a decent argument for cloud computing for this kind of work too.


Pardon me for taking so long. In the meantime I've collected some stats.

I don't know that database at all, so I can't help on performance; maybe Kevin has ideas. On the system side, any new server will be better than what you have today, I'd say. Moving to flash inside the server will take it to night-and-day levels. All the OEMs charge a pretty penny for drives; there's good margin there. To be fair, they spend a lot of time qualifying drives, etc., but you can buy the server bare and put whatever you want in it. If you're even more price-sensitive you can build your own white-box server, or even a PC; you don't really need a "server" per se. The questions come down to budget more than anything else. This all assumes you need it to be local; there's a decent argument for cloud computing for this kind of work too.

We've changed the BIOS and other configuration parameters, gaining some performance.

For example, a set of queries used for data loading and initialization, covering all the possible database activities (sequential/random reads/writes on small/big chunks of data), went from 240 seconds to 195 seconds. The same set of queries runs in 40 seconds on my Mac Mini + Promise Pegasus. That's a HUGE difference and I can't reduce it any further. I've tried a lot of different combinations in Postgres' configuration file, from RAM to disk parameters.

I've also tested a transaction used in our real-world application, which does some JOINs and writes records to disk. On the Dell PE T420 the execution time was 2m21s, while on the Mac Mini it was 1m44s. This is more impressive if you consider that the Mac Mini has just 6.25% of the T420's RAM. The PostgreSQL versions differ, too: on the T420 I installed a brand-new 9.4, while the Mac Mini has the original 9.0, which is much slower and less capable.
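Checking those ratios with the figures above (my own arithmetic):

```shell
# 8GB Mini vs 128GB T420, and the 195 s Dell run vs the 40 s Mini run.
awk 'BEGIN { printf "RAM: %.2f%%  slowdown: %.3fx\n", 8 / 128 * 100, 195 / 40 }'
# prints: RAM: 6.25%  slowdown: 4.875x
```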

What OS are you running on each system? I'm trying to take a couple of steps back, because one item might be the RAID configuration and the other the storage device configuration at the OS level.

On the Mac Mini I have OS X Server 10.7.5, and the T420 runs Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-48-generic x86_64).

I followed the official Postgres recommendations for kernel configuration; I've set WriteBack and ReadAhead in the RAID cache configuration and disabled the disk cache (it only slightly affected performance and it's the recommended practice). I've also defragmented the XFS file system on the RAID disk array and separated the database log (the WAL, Write-Ahead Log) from the data directory (the two reside on two different RAID virtual disks).
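For the record, those disk-side steps look roughly like this (a hedged sketch: the device path, mount point and service name are placeholders, not the real ones on the T420; it only prints the plan unless APPLY=1):

```shell
# Dry-run wrapper so nothing destructive happens by accident.
APPLY=${APPLY:-0}
run() { if [ "$APPLY" = 1 ]; then "$@"; else echo "would run: $*"; fi; }

run xfs_db -r -c frag /dev/sdb1        # report the XFS fragmentation factor
run xfs_fsr -v /data                   # reorganize (defragment) the mounted FS

# Move pg_xlog (the 9.4 WAL directory) onto its own volume:
run service postgresql stop
run mv /var/lib/postgresql/9.4/main/pg_xlog /wal/pg_xlog
run ln -s /wal/pg_xlog /var/lib/postgresql/9.4/main/pg_xlog
run service postgresql start
```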

What's more interesting (and paradoxical) are the results returned by a tool called pg_test_fsync, which is used to evaluate the best sync method for Postgres. The best choice is determined by the performance results this tool returns, so one can make a "scientific" decision.

T420

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                   23358.758 ops/sec      43 usecs/op
        fdatasync                       21417.018 ops/sec      47 usecs/op
        fsync                           21112.662 ops/sec      47 usecs/op
        fsync_writethrough                            n/a
        open_sync                       23082.764 ops/sec      43 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                   11737.746 ops/sec      85 usecs/op
        fdatasync                       19222.074 ops/sec      52 usecs/op
        fsync                           18608.405 ops/sec      54 usecs/op
        fsync_writethrough                            n/a
        open_sync                       11510.074 ops/sec      87 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
         1 * 16kB open_sync write       21484.546 ops/sec      47 usecs/op
         2 *  8kB open_sync writes      11478.119 ops/sec      87 usecs/op
         4 *  4kB open_sync writes       5885.149 ops/sec     170 usecs/op
         8 *  2kB open_sync writes       3027.676 ops/sec     330 usecs/op
        16 *  1kB open_sync writes       1512.922 ops/sec     661 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
        write, fsync, close             17946.690 ops/sec      56 usecs/op
        write, close, fsync             17976.202 ops/sec      56 usecs/op

Non-Sync'ed 8kB writes:
        write                           343202.937 ops/sec       3 usecs/op

MacMini

Direct I/O is not supported on this platform.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                      3780.341 ops/sec     265 usecs/op
        fdatasync                          3117.094 ops/sec     321 usecs/op
        fsync                              3156.298 ops/sec     317 usecs/op
        fsync_writethrough                  110.300 ops/sec    9066 usecs/op
        open_sync                          3077.932 ops/sec     325 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                      1522.400 ops/sec     657 usecs/op
        fdatasync                          2700.055 ops/sec     370 usecs/op
        fsync                              2670.652 ops/sec     374 usecs/op
        fsync_writethrough                   98.462 ops/sec   10156 usecs/op
        open_sync                          1532.235 ops/sec     653 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
         1 * 16kB open_sync write          2634.754 ops/sec     380 usecs/op
         2 *  8kB open_sync writes         1547.801 ops/sec     646 usecs/op
         4 *  4kB open_sync writes          801.542 ops/sec    1248 usecs/op
         8 *  2kB open_sync writes          405.515 ops/sec    2466 usecs/op
        16 *  1kB open_sync writes          204.095 ops/sec    4900 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
        write, fsync, close                2747.345 ops/sec     364 usecs/op
        write, close, fsync                3070.877 ops/sec     326 usecs/op

Non-Sync'ed 8kB writes:
        write                              3275.716 ops/sec     305 usecs/op

My objective here is to understand whether it's worthwhile for me to upgrade to a solution similar to the T420's.

If you don't mind, I'll open a question on dba.stackexchange.com for the Postgres configuration part, while here I'll keep digging into the "disks" side of the problem.

I think the Promise RAID controller performs much better than the Dell PERC H710, but I may be wrong.

Thank you again for your support and help.

Pietro


This is seriously strange. That Dell has excellent specifications. Something is definitely not right somewhere, and we're not able to spot it by communicating through a forum.

I am willing to dedicate time and explore this with you if we can both set time aside for it. I suggest Skype with its screen sharing so I can view your session; that way we can revise settings and go much deeper. It'll be a lot faster than trying to get help through forums at this point.

So, let's get to the bottom of this "dilemma". Send me a private message here if you're interested, and we'll get started.

Share this post


Link to post
Share on other sites

Hi Maxtor, thank you VERY much for your help. Before taking your time, I'd prefer to complete some tests on Postgres. At the moment I haven't gotten much help on dba.stackexchange.com. I'm following the "recompiling way" after reading http://blog.pgaddict.com/posts/compiler-optimization-vs-postgresql, because I was installing from binaries and not from source. It states that different compilers, different compiler versions and different compiler options can give a 15% gain in Postgres performance. There are also other parameters, determining the size of some Postgres files and segments on disk, which aren't covered in that specific article. There are many possible combinations, so it will take me a lot of time to find the best one. I'm also following this blog, which has a lot of useful tests, some covering file system options and Postgres compile options: http://www.fuzzy.cz/en/articles/filesystem-vs-postgresql-block-size/.

Just to say that the problem is probably on the Postgres side and not on the hardware side.

One thing you might know: what's the I/O performance impact of disabling Hyper-Threading? This is the one option I haven't tried yet, and I was wondering if it works the same as with SSDs (our server has HDDs).

Again, thank you so much for your help.

Pietro

edit: I've found a link with some benchmarks (TPS) run with HT disabled: http://www.postgresql.org/message-id/53F4F36E.6050003@agliodbs.com. He gains about 75% performance with HT disabled. Impressive.
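For anyone who wants to try this without a BIOS trip, HT siblings can be taken offline at runtime on Linux via sysfs; a hedged sketch (dry-run by default, needs root to actually apply):

```shell
# Offline the HT twin of each physical core. thread_siblings_list gives the
# pairing, e.g. "0,8" means cpu8 is cpu0's hyper-thread sibling.
DRY_RUN=1
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    id=${cpu##*/cpu}
    siblings=$(cat "$cpu/topology/thread_siblings_list" 2>/dev/null) || continue
    first=${siblings%%[,-]*}              # lowest CPU id in the sibling set
    if [ "$id" != "$first" ]; then
        if [ "$DRY_RUN" = 1 ]; then
            echo "would offline cpu$id (sibling of cpu$first)"
        else
            echo 0 | sudo tee "$cpu/online" >/dev/null
        fi
    fi
done
```

Disabling HT in the BIOS (via iDRAC) is the cleaner, persistent route; this is just handy for quick A/B tests.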


Well, at least according to that quick PostgreSQL benchmark, the Dell is much faster than the Mac Mini, as expected. For reference, I ran the same tool (PostgreSQL 9.4 for Solaris) on my ZFS storage server and got:

With Hyper-Threading enabled:

[attached screenshot: pg_test_fsync results with HT enabled]

With Hyper-Threading disabled:

[attached screenshot: pg_test_fsync results with HT disabled]

Higher ops/sec = Better (Operations per second)

Lower usecs/op = Better (Latency per operation)

Obviously my machine is insanely fast because it uses ZFS with system RAM for write-back and a regular LSI SAS card for disk control. I can re-test with an LSI RAID card and see what the figures look like with the card's own memory for write-back instead.

As far as HT and its impact on pure storage operations, I have always found it to have little to no impact. This holds true even for ZFS in my tests (ZFS uses the main system CPU). The attachments show that performance actually degrades by a tiny bit with HT disabled for this tool. Though I am not an expert on HT, nor do I write threaded applications, I have seen it degrade application performance by a small but noticeable amount in the past, though that was years ago. Currently, with HT enabled, the applications I use either perform the same, lose almost nothing, get a boost, or can sustain more CPU load than with HT disabled. In my case, leaving HT enabled is beneficial.

My storage server specifications for these tests are;

  • Oracle's SunOS 5.11 (Solaris 11.1)
  • Single Intel Xeon E5-2620 v2 (2.1GHz 6-core, 12 virtual with HT)
  • 64GB of RAM (16GB x4) DDR3-1333 Low-Voltage DIMMs as quad channel
  • LSI HBA 9207 card (LSI SAS-2308 Chipset)
  • LSI RAID 9260 card (LSI SAS-2108 Chipset)
  • 6x WD Raptor (SATA, 1TB, 10K RPM) in ZFS RAIDz-2 (ZFS's RAID-6 equivalent). These Raptor drives are the current latest for desktops/power users/workstations. They have newer SAS versions for servers, but I am very satisfied with these!

NEW: ZFS with no write-back, and the LSI 9260 with 7x WD Raptors in RAID-5 with write-back:

[attached screenshot: pg_test_fsync results for this configuration]

Your Dell PERC H710P RAID card is pretty high-end, as we can see from the numbers you posted. My card has an LSI SAS-2108 chipset with 512MB of DDR2-800; the PERC H710P has an LSI SAS-2208 with 1GB of DDR3 (according to Dell).


Wow! That's amazingly fast!

I was thinking about using ZFS on Ubuntu but I don't know if it's more reliable than XFS.

I tried enabling/disabling HT, but it didn't impact performance at all: execution time stayed the same. I'm writing to the official PostgreSQL performance list and we are still searching for a solution (we also tried NUMA tuning for dual-socket motherboards).

I've found a post comparing an Intel i5 with an Intel Xeon across different PostgreSQL versions, and the i5 came out ahead: http://blog.pgaddict.com/posts/performance-since-postgresql-7-4-to-9-4-pgbench

On my Mac Mini I have an i7 and on the Dell a Xeon, but the i7 is from late 2011 while the Xeon is from late 2013!

I'll update this post as soon as I get more responses from the official pg list. Thank you so much for your benchmarks.

PS: on the Dell it's a PERC H710. The H710P is the Performance version, which is not installed on the machine. The main difference is cache size: 512MB for the H710 versus 1GB for the H710P.


I can tell you that ZFS is pretty stable and just as fast on Linux, because I tried it recently. I still prefer ZFS on Illumos (the OpenSolaris fork) or Oracle Solaris, because there it is proven and mature, Solaris isn't messy like Linux for storage/network operations, and Solaris has an incredible debugging tool called DTrace that will analyze anything on the server.

I checked those comparisons, but I wouldn't take that Postgres test to heart, because it compares:

2x Xeon 5450 (code name Harpertown; old, obsolete 1,333MHz FSB; dual-channel DDR2, most likely ECC/Registered, which adds some latency; 3GHz 4-core, no HT; 12MB L2 cache)

Vs.

1x i5-2500K (code name Sandy Bridge; dual-channel DDR3; 3.3GHz 4-core, no HT on the 2500K either; 6MB L3 cache)

I am actually thinking of playing around with Postgres and its settings on Solaris. To be honest I've never used it, but I am intrigued to test my machines, and also to test it under Linux to see if I hit the same problem. You could always install Solaris on the Dell server and see whether you have performance issues there with your settings too. The Solaris OS is very easy to manage and I'd be glad to help you set it up quickly.

As for the Dell PERC controllers, I doubt there's an actual performance difference beyond the extra RAM on the card allowing more write-back buffer capacity. They use the exact same CPU and such. Either card will be just as incredibly fast.

