What is wrong with this 840P SSD mdraid-1?


#1 AndyB78

Member • 3 posts

Posted 17 April 2013 - 04:28 AM

Hello!

I am killing myself with this mdraid-1 built from two Samsung 840 PRO 512GB drives. The problem is that we get write speeds of only 10-12MB/s during peak hours, or 16-26MB/s at night. The server is usually snappy, but if I copy a file anywhere between 100MB and 7+GB in size, the load average spikes from 0.5 to over 100.

This is how I tested:

root [~]# time dd if=test.tar.gz of=test6 oflag=sync bs=1G
0+1 records in
0+1 records out
547682517 bytes (548 MB) copied, 46.8873 s, 11.7 MB/s
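(Note that `oflag=sync` with `bs=1G` forces dd to issue one huge synchronous write, which is close to a worst case. A sketch of a gentler comparison run, using smaller blocks and a single fsync at the end; the file name `ddtest.bin` is just an example, not from my setup:)

```shell
# Write 32 MiB in 1 MiB blocks, flushing to disk once at the end (conv=fsync)
# instead of syncing after every write (oflag=sync). dd reports to stderr,
# so redirect it to see the throughput line.
dd if=/dev/zero of=ddtest.bin bs=1M count=32 conv=fsync 2>&1
# Inspect the resulting test file (remove it when done):
ls -l ddtest.bin
```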

I have also run hdparm read tests on both drives, and this is what I get:

Normal (through page cache):
root [~]# hdparm -t /dev/sda
Timing buffered disk reads:  668 MB in  3.00 seconds = 222.63 MB/sec

root [~]# hdparm -t /dev/sdb
Timing buffered disk reads:  270 MB in  3.00 seconds =  89.88 MB/sec

Direct:
root [~]# hdparm --direct -t /dev/sda
Timing O_DIRECT disk reads:  804 MB in  3.01 seconds = 267.42 MB/sec

root [~]# hdparm --direct -t /dev/sdb
Timing O_DIRECT disk reads:  560 MB in  3.01 seconds = 186.29 MB/sec

Background data

Server: Supermicro 5017C-MTRF (SATA2, AHCI, ext4 for / and ext3 for /tmp, E3-1270v2, 16GB RAM)

O/S: CentOS 6.4 / 64 bits (2.6.32-358.2.1.el6.x86_64)

Matrix status:
root [~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb2[1] sda2[0]
      204736 blocks super 1.0 [2/2] [UU]

md2 : active raid1 sdb3[1] sda3[0]
      404750144 blocks super 1.0 [2/2] [UU]

md1 : active raid1 sdb1[1] sda1[0]
      2096064 blocks super 1.1 [2/2] [UU]

unused devices: <none>

System I/O:
root [~]# iostat -x
Linux 2.6.32-358.2.1.el6.x86_64 (hostname)      04/17/2013      _x86_64_        (8 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           6.36    0.06    1.47    0.91    0.00   91.20

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               7.66    54.55  789.79   79.05  6937.12  1116.37     9.27     0.24    0.28   0.04   3.84
sdb               4.72    55.01  516.12   78.58  4592.40  1116.37     9.60     1.24    2.09   0.08   4.94
md1               0.00     0.00    0.21    0.40     1.66     3.21     8.00     0.00    0.00   0.00   0.00
md2               0.00     0.00 1307.96  131.32 11278.94  1111.26     8.61     0.00    0.00   0.00   0.00
md0               0.00     0.00    0.32    0.00     2.35     0.00     7.45     0.00    0.00   0.00   0.00

Partitioning (we have left roughly 100GB unpartitioned for over-provisioning purposes):
root [~]# df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/md2      ext4    380G  236G  125G  66% /
tmpfs        tmpfs    7.8G     0  7.8G   0% /dev/shm
/dev/md0      ext4    194M   47M  137M  26% /boot
/usr/tmpDSK   ext3    3.6G  1.2G  2.3G  34% /tmp

root [~]# sfdisk -l

Disk /dev/sda: 62260 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sda1          0+    261-    262-   2097152   fd  Linux raid autodetect
/dev/sda2   *    261+    286-     26-    204800   fd  Linux raid autodetect
/dev/sda3        286+  50675-  50390- 404750336   fd  Linux raid autodetect
/dev/sda4          0       -       0          0    0  Empty

Disk /dev/sdb: 62260 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdb1          0+    261-    262-   2097152   fd  Linux raid autodetect
/dev/sdb2   *    261+    286-     26-    204800   fd  Linux raid autodetect
/dev/sdb3        286+  50675-  50390- 404750336   fd  Linux raid autodetect
/dev/sdb4          0       -       0          0    0  Empty

Disk /dev/md1: 524016 cylinders, 2 heads, 4 sectors/track
Disk /dev/md2: 101187536 cylinders, 2 heads, 4 sectors/track
Disk /dev/md0: 51184 cylinders, 2 heads, 4 sectors/track
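(Since sfdisk is printing in cylinder units here, the actual start sectors are rounded away; `sfdisk -uS -l` would show them exactly. As a sketch of how one would check alignment from those start sectors, a partition is 1 MiB-aligned when its start, in 512-byte sectors, is a multiple of 2048. The sector values below are illustrative assumptions, not taken from this box:)

```shell
# 63 is the classic misaligned DOS-era start; 2048 is the modern 1 MiB boundary.
for start in 63 2048 4196352; do
    if [ $((start % 2048)) -eq 0 ]; then
        echo "start sector $start: aligned"
    else
        echo "start sector $start: NOT aligned"
    fi
done
```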

Can you spot what I can do to improve write speeds?
Why do read speeds differ between the two identical drives?

Kind regards!

Edited by AndyB78, 17 April 2013 - 07:22 AM.

#2 rugger

Member • 723 posts

Posted 30 April 2013 - 05:15 PM

Are your partitions aligned on the SSDs?
Have you formatted your partitions with the correct RAID parameters (mkfs.ext3 -E stride=... or whatever it is)?

I can't tell that from the partition table information you have printed.
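(For reference, the ext3/ext4 knobs rugger means are the `-E stride=N,stripe-width=M` extended options. A sketch of the arithmetic, assuming a 64 KiB chunk, 4 KiB filesystem blocks, and 2 data disks; note these values are illustrative, and RAID1 has no stripe, so this mainly matters for RAID0/5/6:)

```shell
# stride       = chunk size / filesystem block size
# stripe-width = stride * number of data-bearing disks
chunk_kb=64
block_kb=4
data_disks=2
stride=$((chunk_kb / block_kb))
stripe_width=$((stride * data_disks))
# Print the mkfs invocation these values would produce (not executed here):
echo "mkfs.ext4 -E stride=$stride,stripe-width=$stripe_width /dev/mdX"
```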
