Is alignment really a thing on SSDs?


Every so often I hear people say that you must align partitions on SSDs, just as on Advanced Format hard drives, because "AS SSD" told them so. People who heard that tell other people the story, and so on, and we have created an urban legend.

But is that really the case?

All my SSDs (Samsung, SanDisk) tell me that they use a physical sector size of 512 bytes. They report a valid configuration word 106 (bit 14 is set) with bit 13 set to zero (1 = device has multiple logical sectors per physical sector). So they are unlike the early WD AF models, which simply had no valid word 106.
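For anyone who wants to check this on their own drive: `hdparm -I` (or `smartctl -x`) prints the logical/physical sector sizes, which come from IDENTIFY DEVICE word 106. A minimal sketch of the decoding per the ATA spec (the function name is mine, not from any tool):

```python
def decode_word_106(word: int) -> dict:
    """Decode ATA IDENTIFY DEVICE word 106 (physical/logical sector size).

    Bit 15 clear and bit 14 set => the word contains valid information.
    Bit 13 set => device has multiple logical sectors per physical sector.
    Bits 3:0  => log2(logical sectors per physical sector).
    """
    valid = bool(word & (1 << 14)) and not (word & (1 << 15))
    multiple = bool(word & (1 << 13))
    ratio = 1 << (word & 0xF) if multiple else 1
    return {
        "valid": valid,
        "logical_per_physical": ratio,
        "physical_bytes": 512 * ratio,  # assumes 512-byte logical sectors
    }

# A drive reporting 1:1 sectors, like the Samsung/SanDisk drives above:
print(decode_word_106(0x4000))  # valid, 1 logical per physical, 512 B
# A 512e Advanced Format drive (8 logical sectors per physical sector):
print(decode_word_106(0x6003))  # valid, 8 logical per physical, 4096 B
```

A drive with no valid word 106 at all (like the early WD AF models mentioned above) would simply report `valid: False`.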

The sector layout is completely abstracted from the underlying flash anyway, regardless of page size and erase block size, so there should be no read-modify-write cycles like on AF HDDs. SSD vendors don't tell people anything about alignment. So is this really an issue, or just hearsay?

We also have SSDs with controllers that compress data, so there can't be a fixed 4K-sector-to-flash-page relation anyway.

(Of course, you can align partitions preemptively "just to be safe", but that was not the question. I want factual data and clear examples of modern SSDs that require alignment.)

Edited by jtsn


SSDs also use AF, which needs correct alignment to reduce read and write activity. I don't know how SandForce-based drives operate, but the others use 4K sectors or multiples thereof.

Alignment is still important.


SSDs also use AF

That's the point: I would like to see some solid proof of that. I have a hard time finding the term "Advanced Format" anywhere in the datasheets of modern consumer SSDs, like the Samsung 840 Basic.


'Advanced Format' only refers to HDDs, which have traditionally used 512 bytes per sector. SSDs have been 4K from the beginning, so there is no reason to label them as AF drives.

Alignment is super important: an unaligned SSD may perform at half its capability.
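Whether a given partition is aligned is easy to check arithmetically: on Linux the start LBA is in `/sys/block/<disk>/<part>/start`. A minimal sketch, assuming the 4K internal size under dispute here:

```python
LOGICAL_SECTOR = 512          # bytes, what the drive reports to the OS
ASSUMED_PHYSICAL = 4096       # bytes, the assumed internal unit (not vendor-confirmed)

def is_aligned(start_lba: int, physical_bytes: int = ASSUMED_PHYSICAL) -> bool:
    """True if a partition starting at this LBA sits on a physical boundary."""
    return (start_lba * LOGICAL_SECTOR) % physical_bytes == 0

# The two classic partition start offsets:
print(is_aligned(63))     # False - the old CHS-era default, misaligned for 4K
print(is_aligned(2048))   # True  - the 1 MiB convention used by modern tools
```

A 1 MiB start offset (sector 2048) is aligned for any power-of-two internal size up to 1 MiB, which is why modern partitioning tools default to it regardless of what the drive reports.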

Edited by FastMHz


Just create an intentionally misaligned partition, benchmark it, and compare with an aligned partition. The difference in random writes should be obvious, whereas it may be less pronounced with sequential transfers. I don't know about those capability bits, but I've read that the physical blocks in NAND are 4K (or maybe even 8K) - just as FastMHz said.



MrS is exactly correct; here's a crude way of illustrating THE problem:

|--------|--------| <--- 2 physical records
.....|--------|...| <--- 1 logical record

When that logical record is WRITTEN randomly, it spans 2 physical records, and therefore 2 physical records must be updated.

This is not as much of a problem when all such logical records are WRITTEN or READ sequentially, because the file system should "buffer" I/O and usually send or receive one physical record at a time, e.g. to/from a SATA device. Therefore, although there is a measurable difference doing sequential I/O with UNaligned partitions, it is not as noticeable or as obvious when compared to the overhead that results from random I/O on UNaligned partitions.
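The diagram's arithmetic can be sketched directly; the function and record size here are illustrative, not from any vendor spec:

```python
def physical_records_touched(offset: int, length: int, physical: int = 4096) -> int:
    """How many physical records a write of `length` bytes at byte `offset` spans."""
    first = offset // physical
    last = (offset + length - 1) // physical
    return last - first + 1

# A 4 KiB logical write, aligned vs shifted off the boundary (as in the diagram):
print(physical_records_touched(0, 4096))     # 1 physical record updated
print(physical_records_touched(512, 4096))   # 2 physical records must be updated
```

With random 4K writes on a misaligned partition, nearly every write doubles like this, which is why the penalty shows up most clearly in random-write benchmarks.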

p.s. If you are still using rotating platters to host a Windows OS, it's a very good idea to move your swap file pagefile.sys to a "short-stroked" primary partition on a second HDD, and go the extra mile by ensuring that all physical sectors are contiguous, using the excellent CONTIG freeware program.

Under those circumstances, whenever Windows decides to WRITE an inactive program to the swap file, it should perform that WRITE in an optimal fashion.


This next measurement was what we were expecting from 4 x Samsung 840 Pro 128GB in RAID-0, a Highpoint RocketRAID 2720SGL controller, and a PCIe 2.0 chipset (i.e. twice the upstream bandwidth of a PCIe 1.0 chipset, and all SSDs properly ALIGNed):
Edited by MRFS


You're very welcome. When I was doing those measurements, I was mostly interested in confirming the difference in upstream bandwidth between PCIe 1.0 and PCIe 2.0 chipsets, using one 6G controller and one older 3G controller. As such, I didn't do every possible measurement in a proper experimental matrix, and that's why those graphics appear somewhat "spotty".

Also, because there are so many PCI-Express motherboards installed worldwide, I put a little extra focus into exploring how easy it was to achieve high speed with PCIe 1.0 chipsets. What became very clear is that 2 x modern 6G SSDs in a RAID 0 array come pretty close to reaching MAX HEADROOM with a PCIe 1.0 chipset: roughly 2 @ 500 = ~1,000 MB/second (certainly above 900). There is no performance gain to be expected from 4 x 6G SSDs with a PCIe 1.0 chipset.

The "sweet spot" was predictably 4 x 6G SSDs with a PCIe 2.0 chipset and a 6G controller like the Highpoint RocketRAID 2720SGL: that measurement was done on an ASUS P5Q Deluxe motherboard with an Intel Q6600 CPU. I really do enjoy working on that workstation, because regular file system operations are truly SNAPPY, particularly program LAUNCH.
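The headroom arithmetic above can be sketched as follows; the ceiling constants come from the post's own observed numbers, not from PCIe spec sheets:

```python
def array_throughput(n_drives: int, per_drive_mb_s: float,
                     link_ceiling_mb_s: float) -> float:
    """Sequential throughput of a RAID-0 array, capped by the upstream link."""
    return min(n_drives * per_drive_mb_s, link_ceiling_mb_s)

PCIE1_CEILING = 1000.0  # MB/s, roughly what was observed on a PCIe 1.0 chipset
PCIE2_CEILING = 2000.0  # MB/s, "twice the upstream bandwidth" of PCIe 1.0

print(array_throughput(2, 500, PCIE1_CEILING))  # ~1,000: two 6G SSDs hit the cap
print(array_throughput(4, 500, PCIE1_CEILING))  # still ~1,000: no gain from 4 drives
print(array_throughput(4, 500, PCIE2_CEILING))  # ~2,000: the PCIe 2.0 "sweet spot"
```

The min() is the whole story: once the drives saturate the upstream link, adding array members buys nothing.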

The other, less obvious issue was the lack of TRIM with these RAID 0 arrays; that's why I'm recommending that builders take a close look at Plextor's garbage collection for PCs that lack TRIM for some reason. The folks at xbitlabs.com produced a very useful comparison here:

Incidentally, in our workstations that have 2 or more PCIe slots, we've been installing RAID controllers in the primary x16 slot, and some of the older PCIe motherboards complain a little about that setup: I need to press F1 to finish POST and STARTUP. But with that minor exception, those PCIe 1.0 chipsets still work fine with a RocketRAID 2720SGL installed in the primary x16 slot.

Many less experienced Highpoint users stumble on the INT13 factory default: what they need to do is install the card withOUT any drives attached and flash the card's BIOS to disable INT13. Then the card won't interfere with chipset RAID settings. THE problem is that INT13 ENABLED on the 2720SGL has been known to DISABLE on-board RAID functionality, e.g. it's like the chipset's ICH10R isn't even there at all! The solution is to revert the BIOS to IDE or AHCI mode, and sometimes IDE is the ONLY setting that will work again after the 2720SGL is installed with INT13 ENABLED.

Also, the latest BIOS needs to be flashed in order to operate the SATA channels at 6 Gb/s (using SFF-8087 "fan-out" cables).

... all "bleeding edge" lessons, to be sure :-)

Hope this helps.

Edited by MRFS


Can you post your exact alignment offsets for the Samsung 840 Pros? I'm trying to find information on the 512GB version, but I'm hoping they'd be the same.

