Why is Windows (apparently) less secure than other OSes? Also re: NTFS fragmentation.

I think Oxy meant devs don't appear to be in any hurry to embrace writing software the way MS wants them to.
This.

I dual boot between Linux and Win7. I must say, with Linux OSes like Ubuntu, Mandriva, openSUSE and so on, it is no longer purely a geek playground. The beginner distros are at a stage now where I can recommend them over Windows if people want to save money.

Windows 7 is a great improvement, so much so that I actually bought 4 copies for £45 each in the summer. If Windows 8 goes back to silly pricing I will use Linux full-time and use Win7 in a VM.

Back on topic: no system is 100% safe. I think Linux has the better structure/security systems, though Windows is improving on this front. I just think Linux was thought out better way back when...

(I ain't a computer degree holder, just voicing my opinions as an enthusiast.)

:)
 
It's not that Microsoft didn't think about security, they just had to make concessions for a better user experience. Linux by comparison has only a handful of [and sorry to invoke a stereotype] nerds, who love every chance they get to type "sudo" :p

MS are making a great effort to change devs and users alike and so far they are doing a great job.
 
This sounded to me like you were confusing it with the write-back cache, so I did a few searches (in case I was mistaken) through the Microsoft knowledge base, Wikipedia and Google in general; from what I can tell, you were.

Write-back cache only uses the drive cache available. Allocate on flush uses available system memory, which is available in vastly greater quantities and thus reduces fragmentation further.

Not true. NTFS uses system memory (albeit indirectly; I will explain this later) for storing its "delayed write cache". I don't understand why you think otherwise, given that performance would be utterly dire if it did not.

The write-back cache is something totally different. For starters, writing to it is not "delayed" - to do so would defeat its purpose. It is typically something provided by SCSI / RAID controllers that have a battery-backup module along with a fairly large chip of memory. In Windows at least, this is implemented neither in the filesystem driver nor in the I/O Manager itself, but by an I/O controller driver.

Technically, on Windows at least, this functionality isn't normally credited to the filesystem driver. It is slightly strange that other platforms do this. I suppose it comes down to the better separation of concerns that NT has due to its driver model. Windows credits the delayed write caching feature to the I/O Manager (a subsystem of the kernel). It is something that Windows NT has had since the beginning, and it is something which FAT32/exFAT (and other FS implementations) can use as well, if they so wish. The basic concept is that because all I/O operations are proxied through the I/O Manager, it can build up large backlogs (how large depends on licensed edition and configuration) and then "flush" them all at once to the filesystem driver, which of course allows the FS driver to make better allocation decisions.

"Allocate on flush" as it is called in the generic platform-independant term is not really so much a 'fragmentation minimisation' feature. It is primarily a performance enhancement feature. I can't imagine any modern filesystem would go without it. The fact that it has a positive effect of the typical degree of fragmentation is merely a lucky sideaffect and almost negligable. The sideaffect isn't always present, it can be lost if the program executing the write operations is written in a certain (albeit, probably bad) way.

The I/O Manager reports the size of its current cache as the "System Cache" metric which is visible in Task Manager.

I should probably point out that this cache is also used for storing successful past read operations.

This is why, if you don't "Eject / Safely Remove" an external hard disk (or hell, even just a USB memory stick), you will almost certainly corrupt something on it. Though in more recent versions of Windows a more conservative caching policy is used for removable drives.

The system cache can operate in both Server and Workstation modes (configurable).

Moreover, let there be no doubt that Windows has one of the most advanced (here we go again!) I/O caching strategies available today. ReadyBoost, alone, (and despite its cheesy name) is an example of that: http://en.wikipedia.org/wiki/Readyboost

Hoodlum said:
.. the rest of your post ..
As for the rest, I really can't be bothered anymore. I didn't even bother to read it, sorry. It just goes round and round in circles. You keep spouting the same FUD with no reasoning or facts.

I also think it's a bit silly to call a truce on the name calling (not that I've done any) and yet use the same post to do a load more name calling.

This could have been such a good debate but you came in "all guns blazing" for no reason whatsoever.

In any case, personally I feel that your misunderstanding of Windows and NTFS in the area of "delayed write caching" is so fundamental that it undermines your whole argument against it.




Edit: Self correction... when I refer to the I/O Manager, I probably ought to refer to the Cache Manager. They're two different systems in the kernel, albeit very close to one another.
 
Windows credits the delayed write caching feature to the I/O Manager (a subsystem of the kernel). It is something that Windows NT has had since the beginning, and it is something which FAT32/exFAT (and other FS implementations) can use as well, if they so wish. The basic concept is that because all I/O operations are proxied through the I/O Manager, it can build up large backlogs (how large depends on licensed edition and configuration) and then "flush" them all at once to the filesystem driver, which of course allows the FS driver to make better allocation decisions.

Noted.

Moreover, let there be no doubt that Windows has one of the most advanced (here we go again!) I/O caching strategies available today. ReadyBoost, alone, (and despite its cheesy name) is an example of that: http://en.wikipedia.org/wiki/Readyboost
Although it hasn't been terribly useful so far (at least to me or anyone I am aware of), it should be interesting once USB 3.0 gets serious uptake.


As for the rest, I really can't be bothered anymore. I didn't even bother to read it, sorry. It just goes round and round in circles.
I felt the same (see the earlier "*sigh*") but felt I should give you a final opportunity to provide some sort of data or sources to back up your claims.

You keep spouting the same FUD with no reasoning or facts.
Same tired old BS again. Already tackled this in my previous post.

I also think it's a bit silly to call a truce on the name calling (not that I've done any) and yet use the same post to do a load more name calling.
1) There was no name calling in that post.

2) I pointed out the many errors in your statements and then alluded to the "credibility" claim you made about my education and applied the same argument to your own education with your blatant errors as the basis.

3) You can clearly see I edited the post later with the specific intention of making it clear this line of discussion is both worthless and a distraction from the topic; I did not wish to continue replying to you on that topic because of this. I will not respond to future attempts to continue this.

This could have been such a good debate but you came in "all guns blazing" for no reason whatsoever.
Judging by the other comments there are people who got something out of it. The topic was extremely one-sided previous to this, with a large number of responses being yours.

In any case, personally I feel that your misunderstanding of Windows and NTFS in the area of "delayed write caching" is so fundamental that it undermines your whole argument against it.
I have already pointed out in my previous post why I thought you were not able to discuss this topic:
"Please. Having not even used fsck, I do not think you are qualified to accurately judge the honest strengths of *nix systems.​
You require at least rudimentary knowledge of *nix to be able to have a discussion involving it. Hopefully, in future discussions you will provide data and / or sources to back up your opinions and we can have a more interesting discussion.

Edit: Self correction... when I refer to the I/O Manager, I probably ought to refer to the Cache Manager. They're two different systems in the kernel, albeit very close to one another.
Again, noted.
 
Does this mean you now accept that NTFS isn't anywhere near as badly performing or fragmenting as you once thought?

And even that allocate-on-flush implementations aren't some holy grail of preventing fragmentation? (a commonly held belief by many in the FS community)

I have already pointed out in my previous post why I thought you were not able to discuss this topic:
"Please. Having not even used fsck, I do not think you are qualified to accurately judge the honest strengths of *nix systems.
You require at least rudimentary knowledge of *nix to be able to have a discussion involving it. Hopefully, in future discussions you will provide data and / or sources to back up your opinions and we can have a more interesting discussion.

I'm not someone that practices *nix server administration. Therefore, I have no reason to know about fsck. I don't even practice Windows administration. The last time I looked at Windows Defragger was when I first set up my PC in 2007 - so perhaps you can forgive me for being unfamiliar with them on a User Interface level.

I really think it shows up the weaknesses in your own understanding and insecurities on this subject that you keep attacking me with two User Interface related items. It seems to be all you've got against me.

You don't need to know about the intricacies of any given platform's disk checker utility to know how filesystems, kernels and data structures work on a fundamental level. And that, ultimately, is what we're discussing here.

You have made far more significant errors on the actual central topic matters. For instance, in believing that NTFS couldn't perform allocations on flush (delayed write cache). This is so central to your argument of saying its performance is bad and that it fragments badly compared to the others. And yet, only a passing admission of "Noted." is provided.

I felt the same (see the earlier "*sigh*") but felt I should give you a final opportunity to provide some sort of data or sources to back up your claims.
What sources or data do you need? And for which claims?

I've not actually made all that many "claims" in all of this.

I've already backed up my claims that NTFS is performant and doesn't fragment any worse (given a comparable workload) than other filesystems by teaching you about delayed write caching. Any schmuck can do a Google to learn more on this subject so I don't see why I need to provide URLs. Of course, delayed writes aren't the be-all-end-all of performance. But given NTFS is a B+tree, like most of its rivals, it's going to be matching them in just pure data structuring as well anyway. I'm sure there are plenty of benchmarks on Google - but be careful reading vendor-sponsored reports and ensure that comparable OS editions are being compared. E.g. comparing W7 to Solaris is a no-go, as one is a desktop/workstation OS and the other is server-optimised.

The topic was extremely one-sided previous to this, with a large number of responses being yours.
Are you suggesting I was trying to sway the thread?

So is that what prompted all of this? Me giving a balanced and objective analysis of Windows' position amongst its rivals in terms of security?

Did you have issue with those postings as well? :confused:
 
Does this mean you now accept that NTFS isn't anywhere near as badly performing or fragmenting as you once thought?
It means I believe your statement regarding NTFS delayed write cache, even without a source. As the Microsoft knowledge base makes no mention of this anywhere I have no way of knowing 100% if this is true or not.

Personal experience with both systems has still shown fragmentation worse on the NTFS side. Having done no more than a handful of performance related benchmarks personally and having no access to the code makes it pretty hard for me to pinpoint why this is, though.

And even that allocate-on-flush implementations aren't some holy grail of preventing fragmentation? (a commonly held belief by many in the FS community)
Surely you must admit they have proven to be quite effective, though?

I'm not someone that practices *nix server administration. Therefore, I have no reason to know about fsck. I don't even practice Windows administration. The last time I looked at Windows Defragger was when I first set up my PC in 2007 - so perhaps you can forgive me for being unfamiliar with them on a User Interface level.
On Defrag: You brought up the UI as an example of why I was a liar. It was absolutely pivotal to the statements you were making. It was wrong and that's the only reason I brought it up.

On fsck: As this was in the context of comparing many filesystems as part of a *nix system (In comparison to Windows 7's NTFS) I think you should have at least some rudimentary knowledge of them before you speak about them, having used them at least once - for example. That is all, the bar is pretty low. I don't think I'm making an unreasonable demand.

I really think it shows up the weaknesses in your own understanding and insecurities on this subject that you keep attacking me with two User Interface related items. It seems to be all you've got against me.
Having to re-direct the argument away from the topic and onto the character assassination territory of questioning my hypothetical "insecurities" rather than providing any sources or data to back up your own points really shows what I was saying to be correct. Continually avoiding the issue.

You have made far more significant errors on the actual central topic matters. For instance, in believing that NTFS couldn't perform allocations on flush (delayed write cache).
1) I'm sorry, but not knowing how NTFS handles delayed allocation - when, strangely, even Microsoft do not seem to provide this information - is not in the same league as not knowing how to see a fragmentation percentage that is there for all to see.

2) Could I not make the same argument? I'm no system admin of any kind, period. Never have been. The majority of my career (in IT) was consultancy. Most of that time I spent working for banks. All systems at my current workplace run SL (RHEL based, if you're unfamiliar).

This is so central to your argument of saying its performance is bad and that it fragments badly compared to the others. And yet, only a passing admission of "Noted." is provided.
Firstly, you did not even acknowledge your errors, and denied one until I was forced to quote you back to yourself.

Secondly, it was what I perceived to be the reason for the lesser performance (especially as NTFS handles small reads & writes less well), which was apparently not the case. That does not change the fact that any benchmark you can find will show similar results. Earlier I went out of my way to give you a suite you could use to test it yourself, as you did not appear to trust me. This way you can see the same result completely independently.

You will find it will confirm both lower sequential and random reads & writes when compared to Reiser4, XFS, ext2 (though I omitted ext2 earlier because it is so far behind in feature set, even though it is an absolute monster in performance), ext3, ext4 and possibly JFS (I have not tested this one personally). NTFS often comes close with large reads & writes (at least on standard 7200rpm SATA 3Gb/s drives) but tends to fall behind quite a bit when dealing with many small reads & writes, more so in Windows 7 for some reason.

If this is too challenging you could run a simpler (albeit less scientific) test - simply copy the same large set of small files from one location to another on an ext4 volume (running the latest Fedora, SUSE, Ubuntu, whatever), then do the same for a large set of large files. Repeat with an NTFS volume in Windows 7; the difference is immediately clear.
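If reproducing an identical file set on both machines is a pain, a cruder variant is to write the files rather than copy them - it isn't the same test, but it exercises the same many-small-writes path. A rough sketch only (the directory name is made up and must already exist; adjust the counts to taste):

[CODE]
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Write 'count' files of 'size' bytes each into 'dir' (which must already
   exist) and return the elapsed wall-clock time in seconds. */
static double write_files(const char *dir, const char *prefix,
                          int count, size_t size)
{
    char *buf = malloc(size);
    if (!buf)
        return -1.0;
    memset(buf, 0xAB, size);

    time_t start = time(NULL);
    for (int i = 0; i < count; i++) {
        char path[512];
        snprintf(path, sizeof(path), "%s/%s%06d.bin", dir, prefix, i);
        FILE *f = fopen(path, "wb");
        if (!f) { perror(path); break; }
        fwrite(buf, 1, size, f);
        fclose(f);
    }
    double elapsed = difftime(time(NULL), start);

    free(buf);
    return elapsed;
}

int main(void)
{
    /* Same total amount of data (~640 MB), split very differently. */
    double small = write_files("testdir", "small", 10000, 64 * 1024);
    double large = write_files("testdir", "large", 10, 64 * 1024 * 1024);

    printf("many small files: %.0f s, few large files: %.0f s\n", small, large);
    return 0;
}
[/CODE]

Run it on the ext4 box and on the Windows 7 box and compare the two ratios, not the absolute numbers, since the hardware will differ.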

What sources or data do you need? And for which claims?

I've not actually made all that many "claims" in all of this.

I've already backed up my claims that NTFS is performant
No you haven't. Provide a benchmark or data to back up your claim, anything. This was a main point of dispute for me. I could keep repeating that I was the King of England but that wouldn't make it true either. You haven't even tried to detail its performance strengths and weaknesses, and based on your other comments I'm only left to assume you simply don't know them. I've pretty much given up trying to get anything hard out of you at this point.

and doesn't fragment any worse (given a comparable workload)
Again no citation. I have never experienced this to be the case and you have made no effort to provide a source that shows this to be true.

Any schmuck can do a Google to learn more on this subject so I don't see why I need to provide URLs.
If it is so "easy" to access these sources why have you not backed up your claims and instead thought up a reason as to why you shouldn't have to? From the outside this just seems like an admission that you are unable to. (as you are again avoiding it).

Example:
If I was to release a paper claiming I had used current M-theory as a basis for cracking the TOE (theory of everything - the unification of quantum mechanics and general relativity), speaking in very general terms and avoiding specifics, technical details or sources (as you mostly have), I would be a laughing stock. Can you not see why I would like you to back up what you are saying with hard facts? Especially when your "performance" claims are so clearly in doubt.

Of course, delayed writes aren't the be-all-end-all of performance. But given NTFS is a B+tree, like most of its rivals, it's going to be matching them in just pure data structuring as well anyway.
Again, I'm still not 100% sure on the delayed writes in Windows 7 (having no independent source to confirm this), so if you have a source you could provide to verify it, that would be great.

Are you suggesting I was trying to sway the thread?
No; any topic is going to be one-sided, with one person voicing their opinion more than anyone else.

So is that what prompted all of this? Me giving a balanced and objective analysis of Windows' position amongst its rivals in terms of security?
I did not believe your final post was neutral/objective and the thread was lacking an opposing voice, so why not? A forum is for discussion, no?

Did you have issue with those postings as well? :confused:
No.
 
Hoodlum said:
It means I believe your statement regarding NTFS delayed write cache, even without a source. As the Microsoft knowledge base makes no mention of this anywhere I have no way of knowing 100% if this is true or not.
My source (and where I learnt pretty much everything about how Windows works under the hood): Microsoft Windows Internals, Fourth Edition.

Page 655 relates specifically to the subject at hand.

Page 683 (and surrounding) discusses write-back caching and "lazy writing" (delayed writes) in great detail.

I quote (p683):

Windows Internals said:
The Cache manager implements a write-back cache with lazy write. This means that data written to files is first stored in memory in cache pages and then written to disk later. Thus, write operations are allowed to accumulate for a short time and are then flushed to disk all at once, reducing the overall number of disk I/O operations.

It goes on to detail some useful Performance Counters, these are (from p684):
Windows Internals said:
Cache: Lazy Write Flushes/sec - number of lazy writer flushes
Cache: Lazy Write Pages/sec - number of pages written by the lazy writer

It also details how applications which don't want any delayed write caching to occur can circumvent the default behavior by using a special option on the CreateFile Win32 API (p684):
Windows Internals said:
Because some applications can't tolerate even momentary delays between writing a file and seeing the updates on disk, the cache manager also supports write-through caching on a per-file object basis; changes are written to disk as soon as they're made. To turn on write-through caching, set the FILE_FLAG_WRITE_THROUGH flag in the call to the CreateFile function. Alternatively, a thread can explicitly flush an open file, by using the Windows FlushFileBuffers function...

I've used this flag numerous times in .NET programming.
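To illustrate the two options from that passage (just a rough sketch typed out here rather than compiler-checked, and the file paths are made up), in plain Win32 C it looks something like this:

[CODE]
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const char data[] = "example payload";
    DWORD written = 0;

    /* Option 1: open with FILE_FLAG_WRITE_THROUGH so each WriteFile reaches
       the disk before returning, bypassing the lazy writer. */
    HANDLE h = CreateFileW(L"C:\\temp\\writethrough.dat", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS,
                           FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;
    WriteFile(h, data, (DWORD)sizeof(data), &written, NULL);
    CloseHandle(h);

    /* Option 2: normal cached (lazily written) I/O, flushed explicitly when
       the application decides the data must be on disk now. */
    h = CreateFileW(L"C:\\temp\\cached.dat", GENERIC_WRITE, 0, NULL,
                    CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;
    WriteFile(h, data, (DWORD)sizeof(data), &written, NULL);
    FlushFileBuffers(h);  /* force this file's cached pages out to disk */
    CloseHandle(h);
    return 0;
}
[/CODE]

The first handle pays the write-through penalty on every WriteFile; the second lets the lazy writer batch things up and only hits the disk when FlushFileBuffers is called (or when the Cache Manager gets around to it).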

Hoodlum said:
Personal experience with both systems has still shown fragmentation worse on the NTFS side. Having done no more than a handful of performance related benchmarks personally and having no access to the code makes it pretty hard for me to pinpoint why this is, though.
Performing direct and literal comparisons between two operating systems running different file systems is always going to be difficult. This is why you must take any "prepared" benchmarks you find on the internet with a large pinch of salt. The only way to really test is to set up a test bed platform running a simulation of the type of workload you expect to put on the systems. But even then, if you *do* find a significant disparity between any two modern filesystems on modern OSes then you've probably encountered a problem, not so much a by-design behaviour.

Hoodlum said:
Surely you must admit they have proven to be quite effective, though?
Not really, not for fragmentation. For performance yes. It's been waaay too hyped up by *nix oriented filesystems. The problem is that in order to make delayed writes have a significant benefit for fragmentation the flush interval must be quite exceptionally high. In the order of several minutes, probably. Maybe more. But then if you do that it is going to seriously hurt if you had a power cut or some other type of failure which resulted in the system not having the chance to flush its caches to hard disk. As with most things, it's a trade between performance vs memory usage vs reliability vs usability. I dare say that the Solaris system you run comparisons with against W7 has far more aggressive default settings in this regard. W7 is probably running them pretty damn conservative given how likely it is to experience a power cut or other failure compared to a typical Solaris system which is likely to have both a RAID controller battery module AND a UPS for the entire machine.

Hoodlum said:
On Defrag: You brought up the UI as an example of why I was a liar. It was absolutely pivotal to the statements you were making. It was wrong and that's the only reason I brought it up.

On fsck: As this was in the context of comparing many filesystems as part of a *nix system (In comparison to Windows 7's NTFS) I think you should have at least some rudimentary knowledge of them before you speak about them, having used them at least once - for example. That is all, the bar is pretty low. I don't think I'm making an unreasonable demand.
This sub-thread is done, I believe.

Hoodlum said:
Having to re-direct the argument away from the topic and onto the character assassination territory of questioning my hypothetical "insecurities" rather than providing any sources or data to back up your own points really shows what I was saying to be correct. Continually avoiding the issue.
See above.

Hoodlum said:
1) I'm sorry, but not knowing how NTFS handles delayed allocation - when, strangely, even Microsoft do not seem to provide this information - is not in the same league as not knowing how to see a fragmentation percentage that is there for all to see.

2) Could I not make the same argument? I'm no system admin of any kind, period. Never have been. The majority of my career (in IT) was consultancy. Most of that time I spent working for banks. All systems at my current workplace run SL (RHEL based, if you're unfamiliar).
Re #1: Delayed writes are an implementation detail. They have almost nothing to do with the filesystem. For instance, I bet the NTFS-3G driver doesn't bother to implement delayed writes, as it's only intended to serve as a compatibility driver. If you read the specification of any filesystem it shouldn't mention delayed writes/lazy writes/allocate-on-flush anywhere. It has nothing to do with the "on disk" bytes stored by the filesystem, nor with any mandatory programming contract provided by it. That is why, if you read documentation on NTFS, it will mention it nowhere. Secondly, NTFS doesn't actually implement this detail at all anyway, because it is handled by Windows higher up in the stack (at the I/O and Cache Managers).

I really think you're being argumentative just for the sake of it if you seriously believe that your misunderstanding/ignorance surrounding Windows/NTFS pales in significance to some User Interface ballsup that I made. Vista didn't have the %. Windows 7 does again. Boo hoo. I made a mistake. Get over it, I did.

Hoodlum said:
Firstly, you did not even acknowledge your errors, and denied one until I was forced to quote you back to yourself.
I have acknowledged them. They were minor and barely on topic.

Hoodlum said:
Secondly, it was what I perceived to be the reason for the lesser performance (especially as NTFS handles small reads & writes less well), which was apparently not the case. That does not change the fact that any benchmark you can find will show similar results. Earlier I went out of my way to give you a suite you could use to test it yourself, as you did not appear to trust me. This way you can see the same result completely independently.
Let's see some facts and figures for these fresh claims of "NTFS handles small reads & writes less well". After all, you've pressured me enough to get my Windows Internals book off the shelf.

What test suite? You mentioned a 9TB database. That's not a test suite. It's an example of a test suite.

Hoodlum said:
You will find it will confirm both lower sequential and random reads & writes when compared to Reiser4, XFS, ext2 (though I omitted ext2 earlier because it is so far behind in feature set, even though it is an absolute monster in performance), ext3, ext4 and possibly JFS (I have not tested this one personally). NTFS often comes close with large reads & writes (at least on standard 7200rpm SATA 3Gb/s drives) but tends to fall behind quite a bit when dealing with many small reads & writes, more so in Windows 7 for some reason.
Sources please.

I do fear that (along with the previous quote) we're straying off topic once again, though.

It strikes me you heard someone discussing NTFS block size defaults compared to rival filesystems.

Hint: All my storage volumes are formatted using 64KB block size NTFS. File copying bliss. I left my OS volume at 4KB though because that tends to just be full of smallish DLL/EXE files.

I did a quick look and it appears the default block size on Solaris in general, and indeed for ZFS, is 8KB. So of course that will give it an unfair advantage over a Windows 7 machine running a 4KB block size - both in terms of performance and fragmentation!
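For anyone who wants to check what cluster (block) size a given NTFS volume is actually using before drawing comparisons, a quick sketch (example drive letter only - substitute your own):

[CODE]
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD sectorsPerCluster = 0, bytesPerSector = 0;
    DWORD freeClusters = 0, totalClusters = 0;

    /* "D:\\" is just an example volume. */
    if (GetDiskFreeSpaceW(L"D:\\", &sectorsPerCluster, &bytesPerSector,
                          &freeClusters, &totalClusters)) {
        printf("Cluster (block) size: %lu bytes\n",
               (unsigned long)(sectorsPerCluster * bytesPerSector));
    } else {
        printf("GetDiskFreeSpaceW failed: error %lu\n",
               (unsigned long)GetLastError());
    }
    return 0;
}
[/CODE]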

Moreover, how can we possibly be sure that these tests aren't benchmarking the operating system (taking into account licensed edition) more than the filesystem?

If this is too challenging you could run a simpler (albeit less scientific) test - simply copy the same large set of small files from one location to another on an ext4 volume (running the latest Fedora, SUSE, Ubuntu, whatever), then do the same for a large set of large files. Repeat with an NTFS volume in Windows 7; the difference is immediately clear.
YouTube it, or something. If the difference is huge then something is wrong. Because fundamentally the art of I/O and filesystems performance has not changed all that much in the last decade.

But still, all it may prove is that W7 is crap at file copying? I often find myself impressed by the speed of Server 2008. Even browsing network shares is shockingly faster than Vista/W7.

Hoodlum said:
No you haven't. Provide a benchmark or data to back up your claim, anything. This was a main point of dispute for me. I could keep repeating that I was the King of England but that wouldn't make it true either. You haven't even tried to detail its performance strengths and weaknesses, and based on your other comments I'm only left to assume you simply don't know them. I've pretty much given up trying to get anything hard out of you at this point.
I've never tried to claim that it is more performant than X, Y, Z - simply that it is very much comparable to rival systems and that it strikes a nice balance that few rival systems do at the moment. I do believe it was yourself that made those rather extravagant claims that X, Y, Z filesystems were faster than NTFS (with no sources or benchmarks).

If I said something like "NTFS is higher performing than X" then yes, I would probably be obliged to back that up. But if you re-read what I wrote, it was merely trying to indicate that, contrary to popular belief, NTFS is not as bad or slow as people think. Which was the status quo set by the OP in the thread until I came along to correct it. You didn't seem to like me elevating NTFS' position from where it was.

Hoodlum said:
Again no citation. I have never experienced this to be the case and you have made no effort to provide a source that shows this to be true.
See above for citation from Windows Internals. I should think that any computer scientist should be able to understand that if a filesystem implements a B[+-]tree and uses some form of write caching, that it will have comparable fragmentation characteristics to a rival filesystem that implements those same basic concepts. Sure there will be subtle differences based upon their implementation and configuration, but by and large they can be considered comparable implementations.

Hoodlum said:
If it is so "easy" to access these sources why have you not backed up your claims and instead thought up a reason as to why you shouldn't have to? From the outside this just seems like an admission that you are unable to. (as you are again avoiding it).
See above. Not avoiding it, just didn't feel the need to educate someone who, seemingly, has already been educated at least once.

Hoodlum said:
Again, I'm still not 100% sure on the delayed writes in Windows 7 (having no independent source to confirm this), so if you have a source you could provide to verify it, that would be great.
Buy the Windows Internals book. Or hell, just fire off an e-mail to Mark Russinovich. He will think that you're pretty silly though (as I do).

Hoodlum said:
No; any topic is going to be one-sided, with one person voicing their opinion more than anyone else.

I did not believe your final post was neutral/objective and the thread was lacking an opposing voice, so why not? A forum is for discussion, no?

No.
It is for discussion but as I said before, your confrontational posting style is something this thread could have done without. I don't have a problem with someone disagreeing with me. But boy I didn't expect to have a ton of bricks thrown at me.
 
I'm going to skip anything not immediately relevant to the progression of the topic.
Performing direct and literal comparisons between two operating systems running different file systems is always going to be difficult. This is why you must take any "prepared" benchmarks you find on the internet with a large pinch of salt. The only way to really test is to set up a test bed platform running a simulation of the type of workload you expect to put on the systems. But even then, if you *do* find a significant disparity between any two modern filesystems on modern OSes then you've probably encountered a problem, not so much a by-design behaviour.
I agree and I have found quite a disparity. I never claimed it was a design flaw, only that it exists (and has for a while, incidentally).

Not really, not for fragmentation. For performance yes. It's been waaay too hyped up by *nix oriented filesystems. The problem is that in order to make delayed writes have a significant benefit for fragmentation the flush interval must be quite exceptionally high. In the order of several minutes, probably. Maybe more. But then if you do that it is going to seriously hurt if you had a power cut or some other type of failure which resulted in the system not having the chance to flush its caches to hard disk.
It is with ext4. Beyond just "a few minutes" even. I didn't pick it as an example blindly. The power-loss issue is less of a concern generally for two reasons:
1) Critical systems have UPS.

2) You (either the user or the developer; OpenOffice is a good example of this) can force it to fsync or fdatasync immediately (though this will negate the advantage some of the time, as you will not always want to force an fsync/fdatasync) - see the sketch below.
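As a minimal sketch of point 2 (hypothetical path, error handling trimmed to keep it short):

[CODE]
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char data[] = "document contents";

    /* Hypothetical path. Without the explicit sync, the data would sit in the
       page cache until the kernel's delayed-allocation/flush machinery writes
       it out at the next interval. */
    int fd = open("/tmp/important.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    write(fd, data, strlen(data));

    /* Force the data to stable storage right now; fdatasync() does the same
       but skips non-essential metadata updates. */
    fsync(fd);              /* or: fdatasync(fd); */

    close(fd);
    return 0;
}
[/CODE]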

As with most things, it's a trade between performance vs memory usage vs reliability vs usability. I dare say that the Solaris system you run comparisons with against W7 has far more aggressive default settings in this regard. W7 is probably running them pretty damn conservative given how likely it is to experience a power cut or other failure compared to a typical Solaris system which is likely to have both a RAID controller battery module AND a UPS for the entire machine.
Quite possible. It's simply too difficult (and time consuming) to pinpoint what is causing ZFS' advantage with copy on write and other features most other filesystems lack. It makes direct comparisons extremely difficult.

Re #1: Delayed writes are an implementation detail. They have almost nothing to do with the filesystem. For instance, I bet the NTFS-3G driver doesn't bother to implement delayed writes, as it's only intended to serve as a compatibility driver.
I honestly couldn't tell you. I don't know much about NTFS-3G. I don't use it much personally.

If you read the specification of any filesystem it shouldn't mention delayed writes/lazy writes/allocate-on-flush anywhere.
This appears to only be true in the Windows world. At least it isn't true for Solaris, BSD, Mac OS, Linux etc. I say "only" because there aren't really any other competitors.

Let's see some facts and figures for these fresh claims of "NTFS handles small reads & writes less well". After all, you've pressured me enough to get my Windows Internals book off the shelf.
First some notes:
1) The below benchmarks were run using pre-release versions of both Ubuntu and Windows 7. Windows Vista and the latest stable Ubuntu release are also included for comparison.

2) Regarding potential bias (full disclosure): The people providing the benchmark work for Future Publishing, a company which receives large investment from Microsoft. The people involved are also the writers and editors of a Linux magazine.

3) These are at defaults directly after installation. Block size is the same across the board.

4) All Linux systems are using ext3 by default there, which is slower than ext4.
With those things in mind:

"IO testing

To test filesystem performance, we ran four tests: copying large files from USB to HD, copying large files from HD to HD, copying small files from USB to HD, and copying small files from HD to HD. The HD to HD tests copied data from one part of the disk to another as opposed to copying to a different disk. For reference, the large file test comprised 39 files in 1 folder, making 399MB in total; the small file test comprised 2,154 files in 127 folders, making 603MB in total. Each of these tests were done with write caching disabled to ensure the full write had taken place."

ubuntuvs76.png

Amount of time taken to copy the small files from a USB flash drive to hard disk. Measured in seconds; less is better.

ubuntuvs77.png

Amount of time taken to copy the small files from one place to another on a single hard disk. Measured in seconds; less is better.

ubuntuvs78.png

Amount of time taken to copy the large files from a USB flash drive to hard disk. Measured in seconds; less is better.

ubuntuvs79.png

Amount of time taken to copy the large files from one place to another on a single hard disk. Measured in seconds; less is better.

Notes: Vista and Windows 7 really seemed to struggle with copying lots of small files, but clearly it's something more than a dodgy driver because some of the large-file speeds are incredible in Windows 7."

For anyone interested ext3 vs ext4:
ubuntuvs7ext4.png


Source Article

In case you were wondering, the small-file HD to HD results are repeatable on a current, fully updated Windows 7 retail installation. Maybe you can shed some light on where in the stack this performance issue lies, or at least try something similar on Server 2008? Because at this point I'm at a loss.

I'll do a search for more tomorrow if you like; it's getting a bit late here. You should be able to find others pretty easily too.

What test suite? You mentioned a 9TB database. That's not a test suite. It's an example of a test suite.
Phoronix Test Suite. I mentioned it in a previous post. The alpha works on Windows, Linux, Solaris, BSD and others. I believe there was an earlier news post regarding the current stable 2.4 running on Windows too but that has limited functionality.

It strikes me you heard someone discussing NTFS block size defaults compared to rival filesystems.

Hint: All my storage volumes are formatted using 64KB block size NTFS. File copying bliss. I left my OS volume at 4KB though because that tends to just be full of smallish DLL/EXE files.

I did a quick look and it appears the default block size on Solaris in general, and indeed for ZFS, is 8KB. So of course that will give it an unfair advantage over a Windows 7 machine running a 4KB block size - both in terms of performance and fragmentation!
I've seen quite a few benchmarks showing the disparity, over a number of years and from different sources. It's unfortunate NTFS-3G cannot use the Windows 7 driver. The last I heard the most it can use is the XP-SP2 driver which makes for some interesting (if outdated) results.

I personally use a 64KB block size for LVM volume groups, so any experience from the last two years I have related is further complicated by this.

Moreover, how can we possibly be sure that these tests aren't benchmarking the operating system (taking into account licensed edition) more than the filesystem?
You can't, unless you can run the same filesystem on both. Even then it could be optimised with one in mind.

YouTube it, or something. If the difference is huge then something is wrong. Because fundamentally the art of I/O and filesystems performance has not changed all that much in the last decade.
Something is wrong. It may not even be NTFS, but somewhere else in the stack.

But still, all it may prove is that W7 is crap at file copying? I often find myself impressed by the speed of Server 2008. Even browsing network shares is shockingly faster than Vista/W7.
Well, as the OP was regarding the Windows desktop (and desktop-oriented distributions), I'm sticking to Windows 7 as the comparison. I also do not have a copy of Server 2008 to test, so you would have to provide any data for that.

I've never tried to claim that it is more performant than X, Y, Z - simply that it is very much comparable to rival systems and that it strikes a nice balance that few rival systems do at the moment. I do believe it was yourself that made those rather extravagant claims that X, Y, Z filesystems were faster than NTFS (with no sources or benchmarks).
They were provided as an incentive for you to source your claims. Regardless, I have provided whatever benchmarks I found above.

If I said something like "NTFS is higher performing than X" then yes, I would probably be obliged to back that up. But if you re-read what I wrote, it was merely trying to indicate that, contrary to popular belief, NTFS is not as bad or slow as people think. Which was the status quo set by the OP in the thread until I came along to correct it. You didn't seem to like me elevating NTFS' position from where it was.
I wouldn't dare to speculate what other people believe. At least in regards to XP, Vista and Windows 7 I have found it lacking in performance.

See above for citation from Windows Internals. I should think that any computer scientist should be able to understand that if a filesystem implements a B[+-]tree and uses some form of write caching, that it will have comparable fragmentation characteristics to a rival filesystem that implements those same basic concepts. Sure there will be subtle differences based upon their implementation and configuration, but by and large they can be considered comparable implementations.
Normally I would agree, but we both know it isn't that simple. There is far more to NTFS than just the filesystem itself to consider; the whole stack has to be looked at. I am a scientist; I do not deal in assumptions. We observe, measure and review.

Personal observation first alerted me to the disparity (in fragmentation and performance) many years ago. I have seen it over many years, under many different revisions of NTFS and many different workloads (and judging by how regularly everyone else seems to need to defrag, I am not alone). At that point I developed the hypothesis that NTFS was the cause, having never experienced such an issue outside of it (unless you're counting FAT, which I am not).

He will think that you're pretty silly though (as I do).
Given that you reached a conclusion based on the assumption your position was correct, without even observing - let alone measuring or reviewing - the disparity, the feeling is mutual.
 
It's a risky option,

If you suffer a power loss then personal/OS data can get corrupted. It's not worth the minimal performance gain when weighed against a potentially hefty data-integrity loss, especially on a modern drive!
 
@Hoodlum's benchmarks: It strikes me that the very nature of those benchmarks undermines their usefulness in determining which file system has the higher level of performance; moreover, we can see a large disparity between the various Windows operating systems in many of those tests, and they use the same file system. The very fact that the benchmarks use different operating systems introduces a huge confounding variable which makes it useless as an indication as to what file system is superior; it only says which operating system is better at handling file transfers with its default file system. I mean you could make the case that potentially Windows would implement the NTFS file system better than Linux systems that also can implement it, but that just means that any objective comparison between file systems you could come up with is utterly useless in terms of determining which file system is superior performance-wise for that reason, as opposed to making any one test better than the other.
 
It's a risky option,

If you suffer a power loss then personal/OS data can get corrupted. It's not worth the minimal performance gain when weighed against a potentially hefty data-integrity loss, especially on a modern drive!

All write caching is risky :) Very risky. Luckily NTFS journals its Master File Table and other metadata, so it is not possible for write caching to "corrupt" a whole NTFS volume. But it is certainly possible for write caching to corrupt standalone files on the disk which were written to before something like a power outage.

My understanding of "Enable advanced performance" is that it puts the write caching into a super-aggressive mode (at the expense of available memory).
 
I will have to run some extended HD Tune Pro tests tonight. I have weekly backups in case of data loss, so I'm confident with the risk :p

Edit*

It seems in Windows 7 it's different...

cachingdisk.JPG


The checked boxes are checked by default BTW. Win7 x64 Pro.
 
It appears to be the same option just with a better name in Windows 7.

What the new name suggests to me is that it disables (or more likely just very seriously relaxes) the lazy writer's flush interval. Instead, for a true flush to occur, the system cache would have to fill to a point where it cannot feasibly store any more without causing memory pressure for the rest of the system - or the program would have to explicitly request a flush.

The default frequency of the lazy writer is in the order of just a few seconds, I believe. It varies depending on licensed edition and, of course, these various options in Control Panel.

I might have to hook up my APC UPS serial cable so my PC auto-shuts down in a power outage. Then I can safely have a play with this option... ;)
 
The very fact that the benchmarks use different operating systems introduces a huge confounding variable which makes it useless as an indication as to what file system is superior
I take your point but in that case you could argue any comparison is pointless. Even if you were to run the same benchmark (such as the one I mentioned earlier) across both platforms the entire underlying stack would be different.

I mean you could make the case that potentially Windows would implement the NTFS file system better than Linux systems that also can implement it, but that just means that any objective comparison between file systems you could come up with is utterly useless in terms of determining which file system is superior performance-wise for that reason, as opposed to making any one test better than the other.
Agreed, I really can't see a fairer way to test as long as *nix lacks a good NTFS implementation and Windows lacks a good ext3/4 implementation. You are forced into testing on different platforms.

Edit: IIRC the thing that really confirmed the small-file copy issue with Windows 7 was backing up my Steam folder. It's an absolute nightmare - it takes nearly half a day, when I can easily back up a third or half the amount of data in large MKVs in an hour or less. The outstanding large-file performance obviously contributed greatly to this disparity too.
 
To further prove the point about the benchmarks being flawed, copying files in Windows 7 and SBS 2008 seems a lot slower using Explorer [i.e., drag and drop] than using something like robocopy. It may have just been a fluke [or the types of file] but I will do some playing around if anyone is interested! :p

I don't have access to my Windows 7 computer currently, but when I get a chance I will see if I can get some screenshots. :)
 
My own observation is that ever since Vista, Windows seems to spend more time on "Calculating remaining time" than it does actually copying the files.

I don't think blaming NTFS, or hell even just the Windows kernel, is particularly fair. It's the team that write Windows Explorer that need a kick up the arse.
 
To further prove the point about the benchmarks being flawed, copying files in Windows 7 and SBS 2008 seems a lot slower using Explorer [i.e., drag and drop] than using something like robocopy. It may have just been a fluke [or the types of file] but I will do some playing around if anyone is interested! :p
I would definitely be interested in seeing that. Infact I might give that a try myself.
 