Not true. NTFS doesn't call it "allocate on flush", though; in NTFS it is called "delayed write caching". It has been a feature of NTFS since NT 3.51, and NTFS was one of the first filesystems to implement it; hence its name, "New Technology File System". It's really not as bad and outdated as you think.
It sounded to me like you were confusing this with write-back caching, so I did a few searches (in case I was mistaken) through the Microsoft knowledge base, Wikipedia and Google in general; from what I can tell, you were.
Write-back caching only uses the cache available on the drive itself. Allocate-on-flush uses available system memory, which exists in vastly greater quantities, and thus reduces fragmentation further.
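To make the distinction concrete, here is a toy sketch (plain Python; this is not NTFS internals, and the two-file workload is invented purely for illustration) of why deferring allocation until flush time reduces fragmentation when writes to different files interleave:

```
# Toy model only: two files written in interleaved 1-block chunks.
# An eager allocator grabs the next free block as each chunk arrives,
# so the two files' blocks end up interleaved on "disk"; an
# allocate-on-flush allocator buffers each file in memory and reserves
# one contiguous extent when the file is flushed.

def eager(writes):
    """Allocate the next free block as each chunk arrives."""
    disk, layout = 0, {}
    for name, _chunk in writes:
        layout.setdefault(name, []).append(disk)
        disk += 1
    return layout

def allocate_on_flush(writes):
    """Buffer chunks in memory; allocate contiguously at flush time."""
    buffers, disk, layout = {}, 0, {}
    for name, chunk in writes:
        buffers.setdefault(name, []).append(chunk)
    for name, chunks in buffers.items():  # the flush
        layout[name] = list(range(disk, disk + len(chunks)))
        disk += len(chunks)
    return layout

def fragments(blocks):
    """Count runs of contiguous block numbers."""
    return 1 + sum(1 for a, b in zip(blocks, blocks[1:]) if b != a + 1)

# Two programs appending to their own logs in alternation.
writes = [("a.log", b"x"), ("b.log", b"y")] * 8

for strategy in (eager, allocate_on_flush):
    layout = strategy(writes)
    print(strategy.__name__,
          {name: fragments(blocks) for name, blocks in layout.items()})
# eager             -> {'a.log': 8, 'b.log': 8}
# allocate_on_flush -> {'a.log': 1, 'b.log': 1}
```

Real allocators are of course far smarter than this on both sides of the fence; the toy only demonstrates the mechanism being argued about.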
I have made no personal attacks. That is something I'm not allowed (nor do I want) to do on here. All I said is that it is clear you're not a computer scientist.
Considering my first major was computer science, I'm sure you can understand why I saw this as an attack on my professional credibility.
Simply from the points of view that you adopt and the pseudo-technical knowledge you possess about filesystems.
Please. This is coming from someone who has never used fsck and does not know the Windows 7 Disk Defragmenter has "(x% fragmented)" on the main UI. These examples are precisely why I stated that if anyone's professional credibility should be in doubt, it should be yours.
Yes a workstation storing 9TB of data. Versus a Windows 7 desktop PC probably used for web surfing and e-mail? Like I said, not really a fair comparison.
I agree. It's pretty clear that the workstation in question should accrue massive fragmentation, whereas lighter, less I/O-intensive use of a desktop should not. However, this is not the reality.
If the two systems are performing similar workloads... what block size are the NTFS partition(s) using? And what is the block size of the other filesystem you are comparing it to? If the files are particularly large, block size will play a major role in determining how much fragmentation occurs.
The default NTFS block size is 4KB, which, whilst optimised for desktop use, is not so good for workstations and definitely not so good for servers. The block size is chosen at format time.
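As a back-of-the-envelope illustration of that trade-off (the file size is made up; only the arithmetic matters): a file can never be split into more fragments than it occupies blocks, so a larger block size caps worst-case fragmentation, at the cost of slack space wasted in every file's final block.

```
# Rough arithmetic only; block (cluster) size is fixed per volume at
# format time, 4KB being the common NTFS default.
import math

def stats(file_size, block_size):
    blocks = math.ceil(file_size / block_size)
    slack = blocks * block_size - file_size  # wasted tail space
    return blocks, slack

GiB = 1024 ** 3
for block in (4 * 1024, 64 * 1024):
    blocks, slack = stats(1 * GiB + 1, block)
    print(f"{block // 1024:>3}KB blocks: {blocks:>9,} blocks "
          f"(worst-case fragments), {slack:>6,} B slack")
```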
They are not performing similar workloads. I do have an SL desktop which I use for similarly general desktop tasks as the Windows box, however, and the fragmentation theme persists there too.
The disk space isn't really the issue. Fragmentation does not discriminate by how much disk space you have; it only cares how much of it is free. I would also not really describe a 9TB database as a "daily task".
I'll refer you to my earlier post where I said:
"Both frequently fluctuate between 10-40% free space."
Calling someone a liar because they disabled a scheduled defrag is a pretty hysterical response. That requires great emotion.
By contrast, "*sigh*" comments are emotional. Along with bold and underlining.
It was an expression of how tiresome it was to have to drag you away from the personal attack on my character (calling me a liar because I changed a setting) and back to the topic.
On the bold and underlining: over the years I have found that, because my responses are lengthy, emphasising the key points gives a significantly higher chance of the other poster actually reading them. I have often found people fail to read the entire post and respond with a rebuttal I have already shown to be false.
In any case, I said you made up the 30% figure because, fundamentally, Windows 7 (and Vista) do not provide such percentage figures to the user anywhere. This is a fact.
Here's a screenshot of the dialog that does not exist:
[screenshot: the Windows 7 Disk Defragmenter dialog, showing "(x% fragmented)" against each volume]
Is it any surprise I have found your professional credibility so in doubt when you make so many obvious errors? This is the kind of mistake a new Windows user would make.
I can only assume you got this figure from a third-party defragmentation tool. But if that were the case, surely you would have mentioned it?
Nope, the one included with Windows 7 shown above.
So you've turned off NTFS's defragmenter and done a month's worth of desktop/workstation-oriented chores, and then you're surprised that 30% of your disk is fragmented? Where did this 30% figure come from? Was it calculated based upon the number of files split into at least two fragments? Because two fragments for a file of, say, 1GB isn't really so bad... if a filesystem can manage that, it is probably doing quite well.
Based on the distribution of fragmentation in previous versions, I'd say it is likely to be an even spread. As the Windows 7 defrag tool does not actually provide this information, ask me again in a month.
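For what it's worth, tools disagree on what "x% fragmented" even means, which is worth settling before arguing over any particular figure. A quick illustration, with a file list invented purely for the arithmetic, of counting by files versus by volume of data:

```
# Two common definitions of "percent fragmented" applied to the same
# hypothetical disk; the file list below is made up.
files = [  # (size in MB, number of fragments)
    (1024, 2),              # a 1GB file in just two pieces
    (10, 1), (10, 1), (10, 1),
    (1, 5),                 # one small, badly shredded file
]

frag_files = sum(1 for _size, n in files if n > 1)
by_count = 100 * frag_files / len(files)

frag_mb = sum(size for size, n in files if n > 1)
by_volume = 100 * frag_mb / sum(size for size, _n in files)

print(f"{by_count:.0f}% of files fragmented")       # 40%
print(f"{by_volume:.0f}% of file data fragmented")  # 97%
```

Under the first definition the 1GB file in two pieces barely moves the figure; under the second it dominates it, which is exactly why a bare "30%" on its own tells you very little.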
You're a bit trigger happy and premature with the "this is false" statements.
Just because there are large deployments of ZFS in desktop/workstation scenarios doesn't mean the administrator(s) care about fragmentation; nor, probably, do they have any reason to. They have likely weighed up the likelihood and decided the impact would be negligible. Or maybe they're just confident that by the time it could become a problem, ZFS will have a defragger available.
I gave you an example of why it was false, you gave me opinion. This is quite a recurring theme.
I did quantify my statement though. I gave precise reasoning to the degree of why I considered it to be the most advanced.
Because it has a decent feature set and isn't terrible in any standard use case I can think of. I can think of many cars that fit this description too; they are rarely the most desirable for any particular use. This is a strange way to assign the term "most advanced".
No it isn't contrary to my earlier post, at all. As I said, I backed up my statement with reasoning and logic. If you don't agree, that's fine.
You have provided no data and no sources to back up your claim. The most you have provided is a list of features and a few "general" use cases.
The Google FS assumption was an implication on your part, not an assumption on mine.
"Ext4 is different in that it actually supports online defragmentation. Presumably that is why Google are using it... because they're fed up with not being able to defragment their ext3 volumes without taking them offline?"
Sorry, but this is not my implication; I never mentioned nor hinted at ext3 being used at Google.
And no you haven't said NTFS is useless. You've just said it fragments badly, is slow and that it is "one of Windows' biggest weaknesses".
It does fragment badly compared to Reiser4, ext3, ext4 (especially) and XFS. I cannot say either way for JFS. Any rudimentary use will make this immediately apparent.
It is slower than Reiser4, ext3, ext4, XFS and possibly (though I am not 100% sure on this) JFS for large and small read, write, copy and move operations alike. There are plenty of benchmarks around if you want to check this. I would suggest trying the Phoronix Test Suite (it runs on both Windows and Linux) if you would like to test it yourself.
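If you want a zero-setup sanity check before reaching for a full suite, a crude sequential throughput test is easy to write. The sketch below is nowhere near as rigorous as the Phoronix Test Suite (single pass, and the read phase will largely measure the OS cache unless the file exceeds RAM); the path is just a placeholder for wherever the volume under test is mounted:

```
import os
import time

PATH = "testfile.bin"       # place this on the filesystem under test
SIZE = 256 * 1024 * 1024    # 256MB
CHUNK = b"\0" * (1024 * 1024)

start = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(SIZE // len(CHUNK)):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())    # include the flush-to-disk in the timing
write_s = time.perf_counter() - start

start = time.perf_counter()
with open(PATH, "rb") as f:
    while f.read(len(CHUNK)):
        pass
read_s = time.perf_counter() - start

mb = SIZE / (1024 * 1024)
print(f"write: {mb / write_s:.0f} MB/s, read: {mb / read_s:.0f} MB/s")
os.remove(PATH)
```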
I called it a weakness based on the relatively slow progress it has made since its introduction. Keep in mind that filesystems such as ZFS were not even an idea at that time, much less a reality.
As I said, this is the false and ignorant viewpoint that many *nix users take.
The problem with this statement is that you have done nothing either to show your own claim to be true or to show that viewpoint to be false.
There is no vendetta here, other than to dispel the FUD surrounding NTFS that is propagated by some corners of the *nix fraternity.
Please. Having never even used fsck, you are hardly qualified to accurately judge the honest strengths of *nix systems.
You pushed almost every filesystem under the sun above NTFS at one point. Quote: "ZFS, XFS, Reiser4, JFS, ext3, ext4 etc etc all perform fairly significantly better"
You're misunderstanding the quote. In raw performance they do come out above NTFS, some by quite a margin.
I don't fit the profile of a Microsoft zealot at all.
Then why did you yourself feel the need to actively point out that you did not wish to come across as one? Even you were aware you fit the profile.
"NTFS is actually, probably (and I don't want to come across as a Microsoft zealot here), the most advanced file system that exists today."
Oh and you did call me the T word. But don't let that get in the way.
Based on your statements (which seemed contrary to reality), I genuinely believed you were, for which I apologise.
This is something you didn't make clear from the beginning anyway. Or maybe your position has changed and you didn't realise.
I shouldn't have to list everything I do not believe. The statements I made were in a specific context for a reason.
I would still dispute that a filesystem which has only been around for five years is "mature". As I said, it is mature in only some areas, not all; therefore, overall, it is not mature.
Mature enough for critical use. An average desktop is certainly not critical like military use.
If that's not relevant then almost none of the ZFS / NTFS discussion is relevant.
Not at all. You just can't directly compare the usage statistics of them because the overall system that uses them is a vastly overriding issue when making the choice.
Edit:
Finally: I think this whole "you're a liar" and credibility-questioning line of debate, which you started and I have responded to and continued, is off-topic and of no value, so I am no longer going to entertain it. If you were wondering, this is one of the reasons I considered you to be a troll. I would like it if, in future, we could stick to the topic as outlined by the OP. I do not wish to be drawn again into a debate over personal and professional credibility. It was a mistake on my part to lower myself by responding to your statements on the subject. I withdraw those responses and apologise for any offence they may have caused, in the hope that the apology is reciprocated.
On that note, I hope that we can respectfully continue the debate.