Does Windows need a new FS?

Do you have any benches or tests to support that? Not that I don't believe you, I'd just be interested to read what they have to say.

Find me some reproducible benchmarks from the defrag software makers which demonstrate noticeable performance gains after using their products ;) I don't believe any such data exists because the software is basically snake oil.

Here is one site with some detailed info about NTFS fragmentation from a quick google: http://www.exertusconsulting.com/techarticle/defrag.shtml

But anyway, my advice, get an SSD, install Windows on it, disable defragmentation and forget about it ;)
 
Thanks for the link.

It does make some points in favour of doing it, but perhaps only on a yearly basis. I just make a habit of doing it after a clean install; it usually goes months before I remember again.
 
NTFS is fine and is usually updated with every major Windows release. It's one of the only file systems out there that is suited to both desktop/workstation and server usage patterns.

Fragmentation usually only becomes a problem once the disk is full. The file system then has to find small segments where it can place a large file. As long as a disk always has 10-15% free space then rarely do fragmented files cause a performance issue.

NTFS uses all manner of algorithms to minimise fragmentation when it is possible to do so, just like the hip-cool alternatives like Reiser and ZFS do.

Since Vista, Windows automatically defragments all hard disks regularly as a scheduled background task.

WinFS was never going to oust NTFS. It was going to be a higher layer built on top of NTFS. In fact WinFS was just an SQL Server instance with a few tables that stored various file indexes/pointers to the real files on the underlying NTFS file system. WinFS never materialised because Microsoft decided to deliver it in a different way, by simply improving the indexing service and rolling it into Windows Search and virtual folders (a.k.a. "saved searches" in Vista and "Libraries" in Windows 7).
 
But why defrag in the first place? Really, why? Why don't they just ensure it manages disk space effectively, just like the countless other FS formats out there? Those "hip-cool" FSs don't just use algorithms to reduce fragmentation; they don't have fragmentation in the first place.
Also, fragmentation is a problem long before a disk becomes full. Fragmentation is fragmentation, regardless of available space. The head has to physically move from segment to segment if the files are not contiguous.
 

Every file system has fragmentation to some extent; I don't see how it possibly couldn't, unless you simply write files once and never delete or move any.
 

You're clearly just regurgitating tidbits you've picked up from other misinformation on the internet. You're definitely not speaking from your own knowledge, that's for sure. Any computer scientist would realise this is a complex subject and isn't simply as one-sided as "fragmentation is bad, mmm-kay".

All general purpose file systems have fragmentation. There's no avoiding it. The ONLY scenario where fragmentation can be avoided is when the exact file size is known in advance. As an example, PVR recorders (like Sky+, Humax etc.) could use fragmentation-free file systems. But these are highly specialised to their application and definitely not "general purpose".
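To make that concrete, here's a rough Win32/C sketch of the "size known in advance" case: the writer extends the file to its final size before writing anything, which gives NTFS a chance to hand out one contiguous run up front (whether it actually ends up contiguous still depends on how fragmented the free space is). The path and size are just placeholders, not how any real PVR does it.

```c
#include <windows.h>

int main(void)
{
    /* Placeholder path; the point is only the pre-allocation pattern. */
    HANDLE h = CreateFileA("C:\\temp\\recording.bin", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    /* Reserve the full expected size (e.g. 512 MB) in one go. */
    LARGE_INTEGER size;
    size.QuadPart = 512LL * 1024 * 1024;
    SetFilePointerEx(h, size, NULL, FILE_BEGIN);
    SetEndOfFile(h);

    /* ... then write the data sequentially into the reserved space ... */

    CloseHandle(h);
    return 0;
}
```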

Fragmentation is rarely a problem. If you actually look at the "report" these defragmentation programs generate you will see that the files that are split over more than one fragment are usually temporary files or other equally unimportant stuff. This is good though because it means NTFS has successfully used up a few "gap" segments on the disk with files that don't really benefit from being contiguous.

Fragmentation doesn't just happen on hard disks either. It happens in RAM as well. But Windows, like most operating systems, has measures to reduce the impact of it. Features like the "low fragmentation heap" for example, which many applications use these days.
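For what it's worth, the low fragmentation heap is something an application opts into via HeapSetInformation (on Vista and later it's on by default, so the call mainly matters on older Windows). A minimal sketch:

```c
#include <windows.h>

int main(void)
{
    /* 2 = enable the low-fragmentation heap policy for this heap. */
    ULONG heapInfo = 2;
    HeapSetInformation(GetProcessHeap(), HeapCompatibilityInformation,
                       &heapInfo, sizeof(heapInfo));

    /* Subsequent HeapAlloc/HeapFree calls on this heap use the LFH. */
    return 0;
}
```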
 
An example of a ZFS user experiencing fragmentation:
http://www.opensolaris.org/jive/thread.jspa?messageID=93437&

Surprise surprise, a blog about ZFS talks about the same 10% free space figure I just mentioned: http://uadmin.blogspot.com/2006/05/why-zfs-for-home.html (search in page for "frag")


An example of ReiserFS with a similar issue:
http://linux.derkeiler.com/Newsgroups/comp.os.linux.misc/2007-02/msg00443.html

Further reading on just how crap ReiserFS really is when confronted with the real world: http://en.wikipedia.org/wiki/ReiserFS. The one that most stands out to me is its inability to scale over multiple processor cores... but also that its main headline (and much misunderstood) feature to reduce fragmentation causes massive performance impacts during write operations... and that's ignoring the constant threat of the entire file system becoming corrupted without warning.

But yeah... the grass is always greener, isn't it?
 
I think a new file system will be needed inevitably, but there is nothing seriously wrong with NTFS at the moment. Perhaps when SSDs become the de facto standard a new file system will become an issue.
 
Microsoft has already optimised NTFS for SSDs in Windows 7. Not much needed to be done other than effectively disabling the concept of "sequential" I/O. All I/O to an SSD is now treated as random access. So even if the application provides a hint that it's going to be conducting sequential I/O, the NTFS driver will ignore this and treat it as random access anyway.
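The "hint" in question is just a flag the application passes when it opens the file, e.g. FILE_FLAG_SEQUENTIAL_SCAN; whether the OS honours it or ignores it on an SSD is entirely out of the application's hands, and the application side looks the same either way. A minimal sketch with a placeholder path:

```c
#include <windows.h>

int main(void)
{
    /* The application declares its intent to read front to back; the OS is
       free to use or ignore this hint depending on the underlying device. */
    HANDLE h = CreateFileA("C:\\temp\\bigfile.dat", GENERIC_READ,
                           FILE_SHARE_READ, NULL, OPEN_EXISTING,
                           FILE_FLAG_SEQUENTIAL_SCAN, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    /* ... read the file sequentially here ... */

    CloseHandle(h);
    return 0;
}
```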

If you run a defragger program on an NTFS SSD it will look pretty bad, even if it isn't full. But that's because NTFS is simply no longer even trying to keep fragmentation to a minimum, because it would just be wasting CPU time. When this day comes people will start to realise just how damn good a job NTFS was doing all these years for regular hard disks.
 
Fragmentation occurs when files are changed, not when they are just read. In NTFS, when files are written they are laid down one after another like this (each file is represented by one number):


11111111122222222333333334444444445555555566666666


When file 1 changes, file 2 is in the way, so the additional data has to go in the first free area:


11111111122222222333333334444444445555555566666666111


This causes the heads to move more to read the data, which slows it down a little but also causes the drive to wear out more quickly.

In modern file systems the data is not laid down this way; it is spread all over the disk:


111111111-------------------22222222-------------------------33333333
-----------------------------444444444---------------------------------
---55555555-------------------------------66666666

So if data needs to be added to file 1, it can be added directly at the end and no fragmentation takes place. At some point, as the drive gets full, even these modern file systems have to fragment the data too, as there is no choice.
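Here's a toy C sketch of the two layouts described above, just to illustrate the idea (it is not how NTFS or any real allocator actually works): "packed" places each file immediately after the previous one, "spread" leaves a gap after each file, and then 4 extra blocks are appended to file 1.

```c
#include <stdio.h>
#include <string.h>

#define DISK 64

static void show(const char *label, const char *disk)
{
    printf("%-7s %s\n", label, disk);
}

int main(void)
{
    char packed[DISK + 1], spread[DISK + 1];
    memset(packed, '-', DISK); packed[DISK] = '\0';
    memset(spread, '-', DISK); spread[DISK] = '\0';

    /* Lay down three 8-block files in each layout. */
    for (int f = 0; f < 3; f++) {
        memset(packed + f * 8,  '1' + f, 8);  /* back to back         */
        memset(spread + f * 20, '1' + f, 8);  /* with gaps in between */
    }
    show("packed:", packed);
    show("spread:", spread);

    /* Append 4 more blocks to file 1. */
    memset(packed + 24, '1', 4);  /* file 2 is in the way -> new fragment */
    memset(spread + 8,  '1', 4);  /* gap after file 1 -> still contiguous */
    show("packed:", packed);
    show("spread:", spread);
    return 0;
}
```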
 
Indeed. In fact it is because of these rules that defragmenters which place all the data at the start of the drive, rather than just making all files contiguous, can actually cause a performance decrease if you work with files that are constantly changing.

Things which are okay to defragment would be music, videos, etc (not games, they get patched). But the chances of you noticing a slowdown on videos and music due to fragmentation are very slim.
 

Actually, each FS cluster is a fixed size, so it can have an amount of padding in it, which means it's possible to add to a file without causing file fragmentation.

As an example, create a text file using Notepad, add a few lines of text (not too much), save it and look at the properties.

There are two sizes: Size (the size of the data in the file) and Size on Disk (the amount of disk space used by the file). If you then add a few more lines, the Size will increase but the Size on Disk will stay the same.
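For a non-compressed, non-sparse file that "Size on Disk" figure is just the logical size rounded up to a whole number of clusters, which you can reproduce with a small Win32/C sketch (the drive letter and file path are placeholders):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Cluster size of the volume = sectors per cluster * bytes per sector. */
    DWORD sectorsPerCluster, bytesPerSector, freeClusters, totalClusters;
    if (!GetDiskFreeSpaceA("C:\\", &sectorsPerCluster, &bytesPerSector,
                           &freeClusters, &totalClusters))
        return 1;
    ULONGLONG clusterSize = (ULONGLONG)sectorsPerCluster * bytesPerSector;

    /* Logical size of the file. */
    HANDLE h = CreateFileA("C:\\temp\\test.txt", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;
    LARGE_INTEGER size;
    GetFileSizeEx(h, &size);
    CloseHandle(h);

    /* Round the logical size up to a whole number of clusters. */
    ULONGLONG sizeOnDisk =
        ((size.QuadPart + clusterSize - 1) / clusterSize) * clusterSize;

    printf("Size:         %llu bytes\n", (unsigned long long)size.QuadPart);
    printf("Size on disk: %llu bytes (cluster size %llu bytes)\n",
           (unsigned long long)sizeOnDisk, (unsigned long long)clusterSize);
    return 0;
}
```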
 
Microsoft has already optimised NTFS for SSDs in Windows 7.

Hmm I didn't know this - wicked.

It seems that NTFS development hasn't stood still so I'd err even more on the side of not needing a new FS. I personally like the fact that it is tried and tested. The modified NTFS in Windows Home Server introduced a few bugs so even if they surprised us with a new file system tomorrow I'd stick with NTFS for a good while before switching.

In regards to defrag, I always find that computers that are a few years old and didn't have a lot of disk space to start off with respond better to a defrag. This is a subjective observation, of course. On my own Windows computers I've never done it more regularly than on a yearly basis.
 
NTFS is designed and maintained by people far more competent and versed in filesystem technology than any of the armchair experts on OcUK, that I can guarantee you.

Simply spreading files out on the disk isn't a perfect fit for every usage pattern. All filesystems need to make tradeoffs (speed vs fragmentation vs processor overhead) and NTFS does it pretty well for the range of systems it's used on.
 

Yes, and the flip side of that cluster padding is that for files smaller than the cluster size, the rest of the cluster is wasted space on disk.
 