Roxor McOwnage

SSDs = Defrag is a thing of the past?

If you have an SSD... with no moving parts... does that mean there's no speed penalty from having files fragmented into many different chunks? I.e., is it the same speed to read 10 chunks of data that are all "beside each other" (does that even make sense for solid-state drives?) as it is to read 10 chunks of data in random locations?

Think of all the work that has gone into drive logic to reorder I/O to reduce seeks... all the work put into OSes to try to collect reads/writes into contiguous chunks... all the work done by filesystems to try to "do the right thing" to reduce fragmentation...

...is that all wasted effort if SSDs become popular?

(Related question: does that mean I could fill an SSD to 100% and it wouldn't get slower as it fills?)

Rox

Fragmentation will still slow down an SSD, but it would be a lot harder to notice or measure. By way of example, it's faster to request one 64KB block of data than eight 8KB blocks. That said, I doubt I'd bother to defrag any SSDs I own.

Defrag is already almost a thing of the past. NTFS prevents horrible fragmentation most of the time anyhow. And if you leave your computer on long enough (like me, 24/7), the disk never gets very fragmented since, as I understand it, Windows does some tidying up when the computer sits idle for a while.

But I'm sure with SSDs, even "bad" fragmentation wouldn't cause any major problems.

Fragmentation will still slow down an SSD, but it would be a lot harder to notice or measure. By way of example, it's faster to request one 64KB block of data than eight 8KB blocks. That said, I doubt I'd bother to defrag any SSDs I own.

That's irrelevant, don't you think?

Where exactly should that delay take place?

A 64KB request will be broken down into x * blocksize requests in the SSD controller, no matter what form those blocks take on the medium.

The position of those x blocks is of no concern to the controller for reading purposes.

In fact, given wear-leveling algorithms, it's a safe bet that your blocks are pseudo-randomly distributed in any case.
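
To make the point above concrete, here is a minimal toy model of a flash translation layer. This is a sketch of the concept only (the page size, device size, and the mapping are all assumptions, not how any real controller works): a logically contiguous 64KB read is split into page-sized lookups, and the physical pages it lands on are scattered no matter how "contiguous" the file looks to the OS.

```python
# Toy flash translation layer (FTL) sketch -- illustrative only, not a real
# controller design. Page size, device size, and mapping are assumptions.
import random

PAGE_SIZE = 4096                      # assumed flash page size (4 KB)
NUM_PAGES = 1 << 16                   # toy device: 64K pages (256 MB)

# Wear leveling: logical pages end up mapped pseudo-randomly to physical pages.
physical = list(range(NUM_PAGES))
random.shuffle(physical)
l2p = dict(enumerate(physical))       # logical page -> physical page

def read(offset, length):
    """Split a host read into page lookups; physical locality never matters."""
    first = offset // PAGE_SIZE
    last = (offset + length - 1) // PAGE_SIZE
    return [l2p[page] for page in range(first, last + 1)]

# A "contiguous" 64 KB read still hits scattered physical pages, and every
# page lookup costs the same.
print(read(offset=1_000_000, length=64 * 1024))
```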

If the SSD is based on flash memory, then defrag can help write speed by keeping large contiguous blocks of open space available.

Flash is traditionally slow at writing because blocks must be erased immediately before a write operation. Having more contiguous open space available increases the odds that all the required space will already carry 1s or can be block-erased in one go, and that data will not need to be read and then rewritten into a block.
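
A rough sketch of that cost model may help. All of the timings and the block geometry below are assumed round numbers for illustration, not measurements of any real device: writing into an already-erased block is just a series of page programs, while writing into a partially used block forces reading out the live data, erasing the whole block, and rewriting everything.

```python
# Toy NAND write-cost model -- every figure is an assumed example value.
PAGES_PER_BLOCK = 64          # assumed erase-block geometry
T_PROGRAM_US = 200            # assumed time to program one page
T_READ_US = 25                # assumed time to read one page
T_ERASE_US = 1500             # assumed time to erase one block

def write_cost_us(pages_to_write, live_pages_in_block):
    """Cost of writing into one erase block, with or without live data in it."""
    if live_pages_in_block == 0:
        # Block is already erased: just program the new pages.
        return pages_to_write * T_PROGRAM_US
    # Otherwise: read the live pages, erase the block, rewrite everything.
    return (live_pages_in_block * T_READ_US
            + T_ERASE_US
            + (live_pages_in_block + pages_to_write) * T_PROGRAM_US)

print(write_cost_us(8, 0))    # clean block:  1,600 us
print(write_cost_us(8, 40))   # dirty block: 12,100 us (read-modify-erase-write)
```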

I think imsabbel is right: there's no benefit to having contiguous blocks of data. Contiguous would really only mean the chunks of data are side-by-side logically in some address space... the bits themselves could be sprayed randomly all over a collection of flash chips and the speed would be the same.

Flash is traditionally slow at writing because blocks must be erased immediately before a write operation. Having more contiguous open space available increases the odds that all the required space will already carry 1s or can be block-erased in one go, and that data will not need to be read and then rewritten into a block.

That's true, but consider:

Wear leveling means none of this is exposed to the OS at all.

If you write logical block 00001 a thousand times, it will point to a different physical block each time.

If you overwrite block 0815, the physical block the data was initially in might not be touched at all, because wear leveling will point the write to another block.

"Contiguous open space", as seen by the OS and the filesystem, has no meaning at the physical layer.

All this logic (distributing writes to the most suitable memory regions based on block health status, bursts, etc.) is entirely the domain of the controller.
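
A toy version of that write path, purely to illustrate the idea (the free-block pool and the mapping policy below are assumptions, not any vendor's wear-leveling algorithm): every write to the same logical block is redirected to a fresh physical block, so no physical cell absorbs the repeated writes and the OS-level layout never enters into it.

```python
# Toy wear-leveling write path -- a sketch of the idea, not a real algorithm.
free_physical = list(range(10_000))   # assumed pool of erased physical blocks
l2p = {}                              # logical block -> physical block
wear = {}                             # physical block -> program count

def write(logical_block):
    physical = free_physical.pop(0)   # always take a fresh erased block
    old = l2p.get(logical_block)
    if old is not None:
        # The old copy is just marked stale; it gets erased and returned to
        # the free pool later (garbage collection).
        free_physical.append(old)
    l2p[logical_block] = physical
    wear[physical] = wear.get(physical, 0) + 1

for _ in range(1000):                 # write logical block 00001 a thousand times
    write(1)

print(l2p[1])                         # maps to some physical block, not block 1
print(max(wear.values()))             # no single physical block took 1000 writes
```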

Heh heh. An SSD is fragmented by default :D, since its logic tries to spread writes through the available space, because a flash cell can only be written about a million times before it dies.

by TheR

Defrag is already almost a thing of the past. NTFS prevents horrible fragmentation most of the time anyhow.

Really? Then why did I have four exceedingly slow machines last week with horribly fragmented disks that suddenly became usable again after a mere defrag? XP with NTFS and 50% or more free space.

Defrag is already almost a thing of the past. NTFS prevents horrible fragmentation most of the time anyhow.

Really? Then why did I have four exceedingly slow machines last week with horribly fragmented disks that suddenly became usable again after a mere defrag? XP with NTFS and 50% or more free space.

Problem in front of the screen?

Problem in front of the screen?

The problem is NTFS fragmentation. If NTFS is supposed to defrag itself (without scheduled jobs, mind you) and the entire disk is fragged, the problem is NTFS. I know I have to defrag once in a while, but your typical user does not.

Defrag is already almost a thing of the past. NTFS prevents horrible fragmentation most of the time anyhow. And if you leave your computer on long enough (like me, 24/7), the disk never gets very fragmented since, as I understand it, Windows does some tidying up when the computer sits idle for a while.

But I'm sure with SSDs, even "bad" fragmentation wouldn't cause any major problems.

You're quite incorrect about NTFS preventing horrible fragmentation. XP requires regular defragmentation to keep performance at reasonable levels.

It is possible to schedule defragmentation jobs using 3rd-party tools (and the nicer tools will also defragment the pagefile and MFT, which can only be done safely when the OS itself is not actually running). This is usually referred to as "boot-time defragmentation".

IIRC, it is not possible to schedule defragmentation jobs with the built-in Disk Defragmenter utility, and I believe this behavior is intentional as an accommodation that MS reached with Executive Software.

The overriding consideration with flash devices is the limited number of writes before a cell becomes incapable of further use. That being the case, the gains that might be realized from defragmenting an SSD do not outweigh the cost of performing many writes (and thereby using up the life of the device).

Theoretically, if you had an SSD that was non-volatile and capable of "infinite" rewrites (barring physical damage of some kind), then I believe you could make a case that all the reasons for keeping files defragmented on an HDD apply to an SSD as well, albeit on a smaller scale, since access times are much shorter. You could also simplify things for the OS by not "hiding" where things actually are on the media, letting the OS do more direct caching of contiguous files and keep contiguous space available to accept writes, without the device obfuscating this through wear leveling.
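
One way to put rough numbers on that tradeoff. Everything below is an assumed example figure (capacity, cycle count, data moved per pass), not a measurement, and write amplification inside the drive is ignored; the point is only to show how the cost in life cycles of a defrag habit can be estimated against a device's total write budget.

```python
# Back-of-the-envelope endurance arithmetic -- all values are assumptions.
capacity_gb = 64                  # assumed drive capacity
pe_cycles = 10_000                # assumed program/erase cycles per cell
gb_moved_per_defrag = 20          # assumed data shuffled by one full defrag pass
defrags_per_year = 52             # assumed weekly defrag schedule

total_write_budget_gb = capacity_gb * pe_cycles                       # 640,000 GB
defrag_writes_per_year_gb = gb_moved_per_defrag * defrags_per_year    #   1,040 GB

share = defrag_writes_per_year_gb / total_write_budget_gb
print(f"A year of weekly defrags uses ~{share:.2%} of the write budget")
# -> about 0.16% per year under these assumptions (real numbers vary widely)
```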

Actually, NTFS prevents fragmentation quite a bit by storing sub-block files directly in the MFT.

Also, I have never seen a machine with a fragmentation state that would actually slow the user experience down. Even after four years, with the idle defrag process deactivated.
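
For anyone unfamiliar with that NTFS feature: files small enough to fit inside their own MFT record are stored "resident" and never get separate clusters, so they cannot fragment at all. A toy model of the idea follows; the 700-byte threshold and 4 KB cluster size are simplified assumptions, not actual NTFS internals.

```python
# Toy model of "resident" small files -- a simplification of the NTFS idea,
# not its real on-disk format. Threshold and cluster size are assumptions.
RESIDENT_LIMIT = 700      # assumed payload that fits inside an MFT record
CLUSTER_BYTES = 4096      # assumed cluster size

def allocation_for(file_size_bytes):
    """Say where a toy filesystem would put the file's data."""
    if file_size_bytes <= RESIDENT_LIMIT:
        return "resident in the MFT record (cannot fragment)"
    clusters = -(-file_size_bytes // CLUSTER_BYTES)   # ceiling division
    return f"{clusters} cluster(s) on disk (can fragment)"

print(allocation_for(300))      # resident in the MFT record (cannot fragment)
print(allocation_for(50_000))   # 13 cluster(s) on disk (can fragment)
```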

Actually, NTFS prevents fragmentation quite a bit by storing sub-block files directly in the MFT.

Also, I have never seen a machine with a fragmentation state that would actually slow the user experience down. Even after four years, with the idle defrag process deactivated.

Half a dozen this week...

Actually, NTFS prevents fragmentation quite a bit by storing sub-block files directly in the MFT.

Also, I have never seen a machine with a fragmentation state that would actually slow the user experience down. Even after four years, with the idle defrag process deactivated.

Yes, NTFS does store files that are small enough directly in the MFT. I assume that what you mean here is that NTFS has a built-in feature to try to reduce fragmentation.

Whether this will actually prevent fragmentation "quite a bit" would seem to depend entirely on the number of files and the average file size, wouldn't it?

And if you've never seen a machine that was dramatically slowed down by horrible file fragmentation, then all I can say is that right after doing a reimage of a system and installing 150+ MB of updates, I can go into Disk Defragmenter and benefit immediately from defragmenting the hard drive.

Fragmentation, particularly of the MFT and pagefile, can have a very serious impact on performance.

If you do a search on microsoft.com for "file fragmentation NTFS", you'll find a few articles relating to this.

Defrag is already almost a thing of the past. NTFS prevents horrible fragmentation most of the time anyhow.

Really? Then why did I have four exceedingly slow machines last week with horribly fragmented disks that suddenly became usable again after a mere defrag? XP with NTFS and 50% or more free space.

Because,

1. You don't leave the computer on 24/7

2. I was wrong

No need to get into a pissing match. :rolleyes:

Defrag is already almost a thing of the past. NTFS prevents horrible fragmentation most of the time anyhow. And if you leave your computer on long enough (like me, 24/7), the disk never gets very fragmented since, as I understand it, Windows does some tidying up when the computer sits idle for a while.

But I'm sure with SSDs, even "bad" fragmentation wouldn't cause any major problems.

You're quite incorrect about NTFS preventing horrible fragmentation. XP requires regular defragmentation to keep performance at reasonable levels.

It is possible to schedule defragmentation jobs using 3rd-party tools (and the nicer tools will also defragment the pagefile and MFT, which can only be done safely when the OS itself is not actually running). This is usually referred to as "boot-time defragmentation".

IIRC, it is not possible to schedule defragmentation jobs with the built-in Disk Defragmenter utility, and I believe this behavior is intentional as an accommodation that MS reached with Executive Software.

The overriding consideration with flash devices is the limited number of writes before a cell becomes incapable of further use. That being the case, the gains that might be realized from defragmenting an SSD do not outweigh the cost of performing many writes (and thereby using up the life of the device).

Theoretically, if you had an SSD that was non-volatile and capable of "infinite" rewrites (barring physical damage of some kind), then I believe you could make a case that all the reasons for keeping files defragmented on an HDD apply to an SSD as well, albeit on a smaller scale, since access times are much shorter. You could also simplify things for the OS by not "hiding" where things actually are on the media, letting the OS do more direct caching of contiguous files and keep contiguous space available to accept writes, without the device obfuscating this through wear leveling.

Whoa whoa whoa, it sounds like people assumed I knew what I was talking about.

I caught some offhand remark a couple years ago that NTFS prevents fragmentation better than FAT32. That's all.

So we all seem to agree then that:

-SSD may benefit from defragmentation, at least for writing

-There is no mechanism by which current OSes can defragment them (since there is no standard mechanism for even reporting file location to the OS)

-Microsoft has oversold the ability of NTFS to remain well defragmented by itself. Typical since they did also claim FAT32 was faster than FAT16 and NTFS was faster than both of those too, which we know to only be the case under specific circumstances.

-The cost in life cycles should be weighed against performance gained

This tells me that defragmentation should be made automatic during idle time and built into the control logic of the SSD device, to be mostly invisible to the OS and user. If SSD manufacturers wish to provide a utility to change settings ala AAM then they could do so.
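
A sketch of what "built into the control logic" could look like, purely hypothetical and much simpler than real firmware: during idle time the controller moves the few still-valid pages out of mostly-stale erase blocks, erases those blocks, and returns them to the free pool, so future writes see large erased regions without the OS being involved at all.

```python
# Hypothetical idle-time consolidation inside an SSD controller -- a concept
# sketch only; real garbage collection / wear leveling is far more involved.
PAGES_PER_BLOCK = 64

def idle_consolidate(blocks, free_blocks, valid_threshold=0.25):
    """Copy survivors out of mostly-stale blocks, then reclaim those blocks.

    `blocks` maps block id -> list of still-valid page ids; `free_blocks`
    lists already-erased block ids available for new writes.
    """
    for block_id in list(blocks):
        valid_pages = blocks[block_id]
        if valid_pages and len(valid_pages) / PAGES_PER_BLOCK <= valid_threshold:
            target = free_blocks.pop(0)              # fresh block for survivors
            blocks.setdefault(target, []).extend(valid_pages)
            blocks.pop(block_id)                     # erase the old block...
            free_blocks.append(block_id)             # ...and recycle it

# Example: blocks 0 and 2 are mostly stale, block 1 is mostly valid.
blocks = {0: list(range(5)), 1: list(range(60)), 2: list(range(3))}
free = [10, 11]
idle_consolidate(blocks, free)
print(sorted(free))   # [0, 2] -- two whole blocks reclaimed for fast writes
```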

So we all seem to agree then that:

-SSD may benefit from defragmentation, at least for writing

-There is no mechanism by which current OSes can defragment them (since there is no standard mechanism for even reporting file location to the OS)

-Microsoft has oversold the ability of NTFS to remain well defragmented by itself. Typical since they did also claim FAT32 was faster than FAT16 and NTFS was faster than both of those too, which we know to only be the case under specific circumstances.

-The cost in life cycles should be weighed against performance gained

This tells me that defragmentation should be made automatic during idle time and built into the control logic of the SSD device, to be mostly invisible to the OS and user. If SSD manufacturers wish to provide a utility to change settings ala AAM then they could do so.

I don't feel that MS has oversold the ability of NTFS to remain well defragmented by itself.

For the most part, I do agree with your other points.

I also agree that if there is to be any real "defragmentation" on an SSD, then the device itself must do it. However, as you already pointed out, the benefits of doing so would need to be predicted by an algorithm of some kind before the device performed the defragmentation, and allowing the user to control the settings, or to disable it completely, would be nice.

When memory addresses data in a different column or row, isn't there a latency penalty while the signal amplifier looks in a different place?

If so, solid-state memory would suffer a penalty for fragmented data.

When memory addresses data in a different column or row, isn't there a latency penalty while the signal amplifier looks in a different place?

If so, solid-state memory would suffer a penalty for fragmented data.

That's true, but the magnitude of difference is pretty small. Most likely unnoticeable, if not unmeasurable in normal usage.
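
To put rough orders of magnitude behind "pretty small" (the latencies below are assumed ballpark figures, not benchmarks of any specific device): jumping to an arbitrary flash page costs on the order of tens of microseconds, while each jump a fragmented file forces on a hard disk costs milliseconds of seek plus rotational latency.

```python
# Ballpark latency comparison -- both figures are assumed round numbers.
FLASH_RANDOM_READ_US = 50            # assumed: any flash page, anywhere
HDD_SEEK_PLUS_ROTATION_US = 12_000   # assumed: average seek + half a rotation

fragments = 10                       # a file split into 10 scattered pieces
flash_extra_ms = fragments * FLASH_RANDOM_READ_US / 1000
hdd_extra_ms = fragments * HDD_SEEK_PLUS_ROTATION_US / 1000

print(f"flash: ~{flash_extra_ms:.1f} ms extra, hdd: ~{hdd_extra_ms:.0f} ms extra")
# -> flash: ~0.5 ms extra, hdd: ~120 ms extra (under these assumptions)
```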

"The ability of XP to keep itself defragged..."

Are you referring to the boot files and commonly used program optimisation?

AFAIK, this happens every 3 days during idle time. You can perform it yourself by typing "defrag c: -b". The command moves boot files and frequently accessed files to the edge of the disk for better performance. It only occurs on the partition containing the OS/boot files.

It is not equivalent to a full defrag.

Source: MS, I think :P

"He, he. SSD is fragmented by default , since it's logic tries to spread writes trough the available space because flash cell can be written only about million times before it dies"

If the SSD uses MLC, only some 10,000 times. Also, if one flash cell dies, the disk dies...

Jeff

If the SSD uses MLC, only some 10,000 times. Also, if one flash cell dies, the disk dies...

Jeff

Pretty sure flash drives are designed to survive, even when bad blocks appear. At least the Intel flash SSDs are.

Well Larry,

I am not. The controller of the SSD makes sure each flash cell is used "equally x times". When one cell dies, bye bye SSD. (MLC, that is; SLC is supposed to be good for 100,000 writes per cell.)

BFG: "This tells me that defragmentation should be made automatic during idle time and built into the control logic of the SSD device, to be mostly invisible to the OS and user."

Defrag is already automatic if you use Raxco or Diskeeper, so HMTK, why do you have to defrag computers manually or wait until they are almost unusable?

Not a good idea to defrag SSDs, since the life of the cells will go down DRASTICALLY.

Jeff

Well Larry,

I am not. The controller of the SSD makes sure each flash cell is used "equally x times". When one cell dies, bye bye SSD. (MLC, that is; SLC is supposed to be good for 100,000 writes per cell.)

Are you for real?

Seriously, everything better than a USB stick has relocation blocks. IIRC, the much-hated OCZ Core drives have more than 1 GB of them. And even if a cell failed without any relocation possibility, you would just get a bad sector on your SSD.
