Why is Windows (apparently) less secure than other OSes? Also re: NTFS fragmentation.

I feel we've gone a little off topic here, maybe we should try and get this back on topic?

Perhaps, but it was the OP's question:

Amp34 said:
Also the same question with defragging, why is the NTFS format worse with fragmentation than other types of drive formatting?

I, personally, wanted to answer that question because I wanted to correct his false preconception that NTFS was bad at handling fragmentation compared to rival filesystems.

Then I accidentally offended someone by suggesting NTFS is probably the most advanced filesystem available today (for a number of real-world reasons that I listed) and, yes, the thread got a bit polluted from what was originally just a post-script type question by the OP.
 
Yes it is mature, in the server space. But not in the desktop/workstation space.

It's understandable that the ZFS authors haven't prioritised the development of a defragger - their current user base simply doesn't need one for the usage scenarios it is currently being used for. And that's fine.

But until ZFS has proven itself more in the desktop/workstation space, it just isn't going to supersede NTFS in the minds of most. That is what gives NTFS the edge at the moment - it works well in all usage scenarios.
I'd have thought fragmentation would potentially be much more of an issue in a server environment - if ZFS can handle that scenario without any need for a defragger, surely it could handle less demanding desktop/workstation usage without breaking a sweat?

I doubt ZFS's non-dominance in the desktop space has anything to do with any intrinsic lack of technical merit compared to NTFS, or any other current filesystem for that matter - it's more to do with the licensing issues you alluded to earlier. If Apple had been able to agree on a deal with Sun, it might even now be the standard filesystem on Mac OS...

/edit: missed Burnsy2023's post while I was typing this, perhaps the thread could be split if necessary? It's all quite interesting after all...
 
You implied Google switched to ext4. I've made no errors.
I didn't "imply" anything. I said Google switched to ext4, which they did. I never mentioned ext3, which they never used. That was your error.

Ext4 isn't really an example of a file system that puts incredible efforts into minimising fragmentation. It was a bad example on your part.
Wrong again. It puts a great deal of effort into minimising fragmentation in comparison to NTFS. Allocate-on-flush is a prime example (which NTFS lacks). One of the main advantages of ext4 over ext3 is how much less of an issue fragmentation is in the first place.

FYI what I said was: "There do exist very specialist file systems that put incredible efforts into minimising fragmentation (some even simply don't allow it at all). But they lack in other areas, such as performance, scalability and hell even just reliability (non-journaling).".

I stand by that. Ext4 is a terrible example to provide to try to counteract that statement. It has almost nothing to do with what I was talking about, in fact. I was talking about specialist file systems - the sort of things they use on PVR set-top boxes, and at the other extreme: the Mars Rovers. I didn't have a problem with you misunderstanding and thinking I was taking a pot shot at Ext3/4, ZFS etc. But that was ultimately your mistake, not mine. Those filesystems are not what I'd consider specialist at all. A specialist filesystem generally makes large compromises in certain areas in order to afford gains in others.
Re-reading that quote, I agree with it. It was an honest misunderstanding. There are, however, at least 3 examples of outright dishonesty on your part (see the quote below for one).

Good. You finally concede it.
lol? I never claimed otherwise. I have never denied the existence of fragmentation. Please don't try to mislead by falsely claiming I did. This is simply being dishonest.

What on earth? That would almost be funny if you weren't being serious.


Again, what the hell?! This is getting pretty silly now, I hope your standard of posting picks up soon.
Coming from the person making massively incorrect assumptions about my professional life because I disagreed with them on a forum, this is pretty ironic. Those quotes were poking fun at the fact you had to resort to personal attacks. Lighten up ;)

And how do you know that fsck is using the same criteria for detecting fragmentation that NTFS's standard defrag tool is? Hint: there are a lot of defraggers available for Windows, and they will all return different % figures for the same volume. Therefore differences between OS and filesystem are certainly going to exaggerate the subtle differences in the way that fragmentation is calculated.
The point was you were claiming there were no tools to your knowledge that provided this information. This is wrong, there are. Instead of spending a paragraph back-pedalling you may as well just admit it. Trying to cloud the argument with the point that "they may use different criteria" without any hard, factual information that they do is simply trying to cast doubt.

What I've said is that NTFS is probably the most advanced file system available today. And it probably is, once you take into account the licensing, stability, fragmentation and scalability concerns of the alternatives (in the context of all 3 key environments - server, desktop and workstation).
So you're saying it is the most advanced jack-of-all trades, master of none? You have a funny interpretation of "most advanced". That's not what most people think of when they hear it. A decent jack-of-all-trades is, by definition, very average overall; neither especially good nor bad at any one task.

But that's a stupid way to compare file systems: comparing NTFS running a desktop/workstation environment against a server storing large databases on Ext4. It's hardly any surprise NTFS scored less favourably in this scenario. A database server is highly optimised to avoid fragmentation and generally doesn't do lots of small file allocations like a typical desktop application does.
Both are workstations. Other uses were not featured in my example because it is unlikely you will have access to a comparable cluster.

Unfortunately I don't have access to a "9TB ext4 volume" so no I can't "test this at home". But I fully expect that Ext4 would handle it fine, as would NTFS given a chance to handle the same workload.
No but I'm sure you can perform similar daily tasks on two desktops with a similar amount of space available for a few months. The difference is pretty clear.

Windows 7 defrags itself automatically (by default). So how you've got 20-30% fragmentation on it is quite some achievement. Not that Windows 7 reports a % figure anywhere regarding fragmentation - so I guess you just made it up.
I turned off the scheduler. Apparently if I do not use the default schedule I get hysterically accused of being a liar. *sigh*

This is what I meant when I referred to your "emotional responses".

ZFS is still missing pieces of its puzzle before it can be taken seriously for all environments (server, desktop, workstation). It needs a defragger.
There are fairly large deployments of it in each of those use cases already, so no, this is again false.

I've nothing against any of the filesystems you've suggested. (To do so would be ridiculous). They are all fair suggestions. However they don't really disprove my point that "NTFS is probably the most advanced file system available today". In fact, your suggestions have bolstered it, because they have given me the chance to highlight some of their shortcomings.
That isn't a point, it's your opinion. I have never claimed any filesystem to be "the best" overall, because a statement made without quantifiable criteria, such as the one you have made, is ridiculous. Just mentioning a general use-case is not an objective dataset to reach a conclusion with.

Well that is your personal opinion. I'm only interested in discussing facts and logic here.
This is contrary to your earlier post. The reason I replied was because your claims were based on opinion and not fact.

ZFS certainly has some killer features which NTFS does not. But still, it has other drawbacks that currently prevent it from hitting the big time. Until it gets a proper defragger tool, for example, it's never going to be a big hitter in the desktop/workstation space (where fragmentation is often a problem).

The reason I agreed that ZFS is probably on a par with NTFS is because whilst it has certain advantages over NTFS (in terms of potential featureset), it also has some shortcomings (which I have pointed out elsewhere). Therefore, it seems fair to balance it by saying that the two are "on par".

This is akin to saying that a car which is twice as fast and another which handles corners twice as well are on a par; they are not. They are entirely different beasts and not at all comparable.

If you are forming an opinion of which is the "most advanced" filesystem (as nebulous as that term is), surely you would base that on the underlying technology? This is where ZFS is far ahead, and why it is widely believed to be the most advanced filesystem around.

I don't think this is ridiculous and I think you would find most IT professionals would agree with me.
So you've just reversed what I have previously said in regard to ZFS? That is hardly a point.

It's just an observation. You're not the only *nix user to have blindly assumed that NTFS is useless and that everything else is better.
I never assumed anything. Firstly you are the one assuming things (see the Google FS assumption you made). Secondly I never said NTFS was useless or that everything else is better. These are the emotional responses you keep making to which I am referring. Please stick to the topic and not your personal vendetta.

At one point you were even pushing ext3 above NTFS, though you seem to have conceded on those claims now, thankfully.

You certainly "fit the profile", as it were.
This fallacy is known as an Irrelevant Conclusion: it diverts attention away from a fact in dispute rather than addressing it directly.
I have not pushed ext3 above NTFS. It does not suffer fragmentation to the degree NTFS does - that much is obvious - and that is the only way in which I compared the two.

You "fit the profile" of a Microsoft Zealot but because I find it cheap to resort to a personal attack that intentionally distracts from the argument I have not called you one.

A claim which I have backed up numerous times with reasoning and logic. All you've done is bandy about, in a rather chaotic fashion, the names of various alternative file systems. To which I have agreed are mostly good suggestions (especially ZFS, but definitely not ext3) but which I still disagree are currently in a position to supersede NTFS as the "most advanced". Watch this space though.
Straw man: A straw man argument is an informal fallacy based on misrepresentation of an opponent's position.​

I never said ext3 was more advanced. It has advantages, yes but it also has disadvantages.

Ext4 is lacking an online defragger. This prevents it from being a "serious contender" in the desktop/workstation space.
The high usage it gets in the workstation space contradicts this.


Yes it is mature, in the server space. But not in the desktop/workstation space.

It's understandable that the ZFS authors haven't prioritised the development of a defragger - their current user base simply doesn't need one for the usage scenarios it is currently being used for. And that's fine.
At first you were trying to imply it was immature (without qualification). I'm glad you have conceded this.

But until ZFS has proven itself more in the desktop/workstation space, it just isn't going to supersede NTFS in the minds of most. That is what gives NTFS the edge at the moment - it works well in all usage scenarios.
I don't think that's even relevant as NTFS does not get used outside of Windows (and ZFS outside of Solaris and some BSDs).
 
I'd have thought fragmentation would potentially be much more of an issue in a server environment - if ZFS can handle that scenario without any need for a defragger, surely it could handle less demanding desktop/workstation usage without breaking a sweat?

I doubt ZFS's non-dominance in the desktop space has anything to do with any intrinsic lack of technical merit compared to NTFS, or any other current filesystem for that matter - it's more to do with the licensing issues you alluded to earlier. If Apple had been able to agree on a deal with Sun, it might even now be the standard filesystem on Mac OS...

/edit: missed Burnsy2023's post while I was typing this, perhaps the thread could be split if necessary? It's all quite interesting after all...

I am pleased that you're finding it interesting CaptainCrash. So thank you for that.

Now to discuss your very good point.

It really depends what the server is doing. Yes there are all manner of server configurations that will suffer if fragmentation is not kept in check. But I would say that the vast majority of servers which deal largely in data storage would be running some form of database server (e.g. Oracle, MySQL, MSSQL etc). All of these database systems are highly optimised and perform file allocations in a batched and predictive way. They also provide very good "hints" to the file system which allows the filesystem to minimise fragmentation.

Fragmentation normally occurs for four reasons:

1. The volume is nearing capacity and the available "gaps" (i.e. regions of free space) are starting to run dry. Or at least, the gaps are on average becoming smaller than the average size of the files being written.

2. A program doesn't provide sufficient "hints" to the filesystem about what type of I/O operations it is performing on a file. For example: if a program is making writes to a file on a "random" basis but it has not informed the filesystem of that fact... what do you think would happen if this program made a write to a sector at the end of the file? Chances are the filesystem would assume a sequential write and, because the write is occurring near the end of the file, the filesystem might preemptively allocate another fragment for the file (if the current fragment was already close to being filled).

3. A program isn't actually sure itself whether it is likely to write to a file again. This is usually caused by things such as log files. If a program is writing out a .txt file with a trace log, quite often this can result in plenty of fragmentation occurring. Sometimes this can't be avoided, but in most cases this type of scenario is reduced by the filesystem's own built-in "delayed-write cache" (this is what it's called on NTFS; I believe ZFS/ext3/4 etc have a different name for it).

4. A program is badly written. For example, a number of the first BitTorrent clients were so bad that every time you downloaded something it would result in the file being made up of tens of thousands of fragments. Partly this is because of point #2 (BitTorrent by nature requires random writes, not sequential), but also because the program itself was not doing its part by preallocating the entire file first. Nowadays BitTorrent clients (for all platforms) tend to preallocate the entire file - there's no reason not to, given that they already know its exact size before they even start downloading! (There's a small sketch of both techniques below.)

Generally, all of these points very rarely apply to server software. Number 3 is perhaps the only one which is likely to occur. But if the author of the server software expected it to be a major issue then it is likely they would come up with some other solution.
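To make points #2 and #4 concrete, here is a minimal sketch (my own illustration, not what any particular BitTorrent client actually does) of how an application on Linux/*nix can preallocate a file and hint at its access pattern. Python's os.posix_fallocate and os.posix_fadvise wrap the underlying POSIX calls; Windows has its own equivalents, not shown here. The file name and sizes are made up.

Code:
# Minimal sketch (Linux/*nix, Python 3): how an application can help the
# filesystem avoid fragmentation, per points #2 and #4 above.
import os

PATH = "download.bin"            # hypothetical output file
TOTAL_SIZE = 700 * 1024 * 1024   # the final size is known up front

fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o644)
try:
    # Point #4: preallocate the whole file in one go, so the filesystem can
    # reserve (ideally) one contiguous run of blocks instead of growing the
    # file piecemeal as out-of-order pieces arrive.
    os.posix_fallocate(fd, 0, TOTAL_SIZE)

    # Point #2: tell the filesystem our writes will be random, not sequential,
    # so it doesn't make sequential-append assumptions about allocation.
    os.posix_fadvise(fd, 0, TOTAL_SIZE, os.POSIX_FADV_RANDOM)

    # Pieces can now be written at arbitrary offsets without each one
    # forcing a brand new fragment, e.g.:
    os.pwrite(fd, b"piece data", 512 * 1024 * 1024)
finally:
    os.close(fd)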

You're quite right that there is nothing wrong (technically) with ZFS. Technically, from a sheer engineering standpoint, it is almost certainly superior to NTFS in a number of ways. But the mere fact that its authors seem to ignore the desktop/workstation space as though it doesn't exist cannot be ignored.

It's quite humorous if you read some of Sun's literature on ZFS. They make rather bold claims about "[fragmentation not being a problem]". Which, from all the benchmarks and tests they have done on their Solaris server platform, I'm sure is not far from the truth. Microsoft could probably make similar claims about NTFS on their Server platforms. But fundamentally those are just statistics. If you put some badly written server software on Solaris (or Windows Server), you could quite easily get the filesystem to start fragmenting.
 
Wrong again. It puts a great deal of effort into minimising fragmentation in comparison to NTFS. Allocate-on-flush is a prime example (which NTFS lacks). One of the main advantages of ext4 over ext3 is how much less of an issue fragmentation is in the first place.
Not true. NTFS doesn't call it "allocate on flush" though. In NTFS it is called "delayed write caching". It has been a feature of NTFS since NT 3.51. It was one of the first filesystems to implement it. Hence its name being "New Technology File System". It's really not as bad and out-dated as you think.

Coming from the person making massively incorrect assumptions about my professional life because I disagreed with them on a forum, this is pretty ironic. Those quotes were poking fun at the fact you had to resort to personal attacks. Lighten up
I have made no personal attacks. That is something I'm not allowed (nor do I want) to do on here. All I said is that it is clear you're not a computer scientist. Simply from the points of view that you adopt and the pseudo-technical knowledge you possess about filesystems.

So you're saying it is the most advanced jack-of-all trades, master of none? You have a funny interpretation of "most advanced". That's not what most people think of when they hear it. A decent jack-of-all-trades is, by definition, very average overall; neither especially good nor bad at any one task.
I've not said that at all, no. That's your opinion. Microsoft makes subtle tweaks to NTFS' parameters for their different markets. NTFS also has a number of user-configurable options available in Device Manager to optimise for different hardware.

Both are workstations. Other uses were not featured in my example because it is unlikely you will have access to a comparable cluster.
Yes a workstation storing 9TB of data. Versus a Windows 7 desktop PC probably used for web surfing and e-mail? Like I said, not really a fair comparison.

If the two systems are performing similar workloads... What block size is the NTFS partition(s) running? And what is the block size of the other filesystem to which you are comparing it? If the files are particularly large then block size will play a major role in determining how much fragmentation might occur.

The default NTFS block size is 4KB. Which, whilst optimised for desktop usage, is not so good for workstations and definitely not so good for servers. The block size is chosen at format time.
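Just to illustrate the trade-off (rough arithmetic, not a benchmark, and the file size is just an example): the cluster size chosen at format time sets the granularity the allocator works in, so larger clusters mean far fewer allocation units per big file, at the cost of slack space wasted at the tail of every small file.

Code:
# Rough illustration: how many clusters a file of a given size occupies at
# different cluster sizes (4KB default vs a larger 64KB cluster).
import math

def clusters_needed(file_size_bytes, cluster_size_bytes):
    """Number of allocation units the filesystem must find for this file."""
    return math.ceil(file_size_bytes / cluster_size_bytes)

file_size = 2 * 1024**3  # a 2 GB file
for cluster in (4 * 1024, 64 * 1024):
    print(f"{cluster // 1024}KB clusters: {clusters_needed(file_size, cluster):,} units")
# Fewer, larger units give the allocator fewer chances to scatter a big file,
# at the cost of wasted slack space in the last cluster of every small file.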

No but I'm sure you can perform similar daily tasks on two desktops with a similar amount of space available for a few months. The difference is pretty clear.
The disk space isn't really the issue. Fragmentation doesn't care how much disk space you have in total, only how much of it is free. I would also not really describe a 9TB database as a "daily task".

I turned off the scheduler. Apparently if I do not use the default schedule I get hysterically accused of being a liar. *sigh*

This is what I meant when I referred to your "emotional responses".
How is that emotional? I don't remember having any significant emotion at all when writing that. I was just writing what I thought. By contrast, "*sigh*" comments are emotional. Along with bold and underlining.

In any case, I said you made up the 30% figure. Because, fundamentally, Windows 7 (and Vista) do not provide such % figures to the user anywhere. This is a fact. I can only assume you got this figure from a third-party defragmentation tool. But if that were the case, surely you would have mentioned it?

So you've turned off NTFS' defragger and done a month's worth of desktop/workstation-oriented chores on it. Then you're surprised that 30% of your disk is fragmented? Where did this 30% figure come from? Was it calculated based upon the number of files split into at least 2 fragments? Because 2 fragments for a file that is, say, 1GB isn't really so bad... If a filesystem can manage that, it is probably doing quite well actually.
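For what it's worth, here's a hypothetical sketch (invented numbers, not any real tool's algorithm) of two plausible ways a defragger could turn the same raw data into a "% fragmented" figure - which is exactly why different tools can report different percentages for the same volume.

Code:
# Hypothetical sketch: two plausible ways a defrag tool might turn the same
# raw data into a "% fragmented" figure. The numbers below are invented.
files = [
    # (size_in_MB, number_of_fragments)
    (1024, 2),   # a 1GB file in 2 fragments
    (1,    1),
    (1,    1),
    (200,  15),
    (5,    1),
]

# Metric A: percentage of files that are in more than one fragment.
frag_files = sum(1 for _, frags in files if frags > 1)
metric_a = 100 * frag_files / len(files)

# Metric B: percentage of used space that lives in fragmented files.
frag_space = sum(size for size, frags in files if frags > 1)
metric_b = 100 * frag_space / sum(size for size, _ in files)

print(f"by file count: {metric_a:.0f}% fragmented")   # 40%
print(f"by space:      {metric_b:.0f}% fragmented")   # ~99%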

There are fairly large deployments of it in each of those use cases already, so no, this is again false.
You're a bit trigger happy and premature with the "this is false" statements.

Just because there are large deployments of ZFS in desktop/workstation scenarios doesn't mean the administrator(s) care about fragmentation. Nor, probably, do they have any reason to. They've likely weighed up the likelihoods and decided that the impact would be negligible. Or maybe they're just confident that by the time it could become a problem ZFS will have a defragger available.

That isn't a point, it's your opinion. I have never claimed any filesystem to be "the best" overall because such a statement without quantifiable critera such as the one you have made is ridiculous. Just mentioning a general use-case is not an objective dataset to reach a conclusion with.
I did quantify my statement though. I gave precise reasoning as to why I considered it to be the most advanced.

This is contra to your earlier post. The reason I replied was because your claims were based on opinion and not fact.
No it isn't contrary to my earlier post, at all. As I said, I backed up my statement with reasoning and logic. If you don't agree, that's fine.

It's not really a surprise that a statement which contains the word "probably" isn't meant to be taken literally as a fact you'd find in an encyclopedia.

I never assumed anything. Firstly you are the one assuming things (see the Google FS assumption you made). Secondly I never said NTFS was useless or that everything else is better. These are the emotional responses you keep making to which I am referring. Please stick to the topic and not your personal vendetta.
The Google FS assumption was an implication of yours, not an assumption on my part.

And no you haven't said NTFS is useless. You've just said it fragments badly, is slow and that it is "one of Windows' biggest weaknesses". As I said, this is the false and ignorant viewpoint that many *nix users take.

There is no vendetta here, other than to dispel the FUD surrounding NTFS that is propagated by some corners of the *nix fraternity.

I have not pushed ext3 above NTFS. It does not suffer fragmentation to the degree NTFS does - that much is obvious - and that is the only way in which I compared the two.

You "fit the profile" of a Microsoft Zealot but because I find it cheap to resort to a personal attack that intentionally distracts from the argument I have not called you one.
You pushed almost every filesystem under the sun above NTFS at one point. Quote: "ZFS, XFS, Reiser4, JFS, ext3, ext4 etc etc all perform fairly significantly better"

I don't fit the profile of a Microsoft zealot at all.

Oh and you did call me the T word. But don't let that get in the way.

I never said ext3 was more advanced. It has advantages, yes but it also has disadvantages.
This is something you didn't make clear from the beginning anyway. Or maybe your position has changed and you didn't realise.

At first you were trying to imply it was immature (without qualification). I'm glad you have conceded this.
I would still dispute that a filesystem which has only been around for 5 years is "mature". As I said, it's not mature in all areas, only some. Therefore, overall, it is not mature.

I don't think that's even relevant as NTFS does not get used outside of Windows (and ZFS outside of Solaris and some BSDs).
If that's not relevant then almost none of the ZFS / NTFS discussion is relevant.
 
Perhaps, but it was the OP's question:



I, personally, wanted to answer that question because I wanted to correct his false preconception that NTFS was bad at handling fragmentation compared to rival filesystems.

Then I accidentally offended someone by suggesting NTFS is probably the most advanced filesystem available today (for a number of real-world reasons that I listed) and, yes, the thread got a bit polluted from what was originally just a post-script type question by the OP.

And it's brought up an interesting discussion; it's definitely taught me a thing or two, as I'd just assumed there were only two or three file systems, with at least the one Apple uses possibly being quite a lot better (as that's what all Mac users slate Windows for). :)
 
Not true. NTFS doesn't call it "allocate on flush" though. In NTFS it is called "delayed write caching". It has been a feature of NTFS since NT 3.51. It was one of the first filesystems to implement it. Hence its name being "New Technology File System". It's really not as bad and out-dated as you think.
This sounded to me like you were confusing it with write-back caching, so I did a few searches (in case I was mistaken) through the Microsoft knowledge base, Wikipedia and Google in general; from what I can tell, you were.

Write-back caching only uses the drive's cache. Allocate-on-flush uses available system memory, which exists in vastly greater quantities and thus reduces fragmentation further.
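To be clear about the distinction I mean, here's an application-level analogy only (this is not how either filesystem implements it, and the log-writing code is made up): holding data in memory and handing the filesystem one large write gives its allocator the full size up front, much as delayed allocation / allocate-on-flush does, rather than growing the file a few bytes at a time.

Code:
# Application-level analogy only: accumulating data in memory and writing it
# in one go gives the filesystem's allocator the whole size up front, much as
# delayed allocation does, instead of growing the file a few KB at a time.
import io

def write_log_naively(path, chunks):
    # Each small append may be allocated (and placed) separately.
    with open(path, "ab") as f:
        for chunk in chunks:
            f.write(chunk)
            f.flush()

def write_log_buffered(path, chunks):
    # Buffer in RAM, then hand the filesystem one large write to place.
    buf = io.BytesIO()
    for chunk in chunks:
        buf.write(chunk)
    with open(path, "ab") as f:
        f.write(buf.getvalue())

chunks = [b"trace line %d\n" % i for i in range(10_000)]
write_log_naively("naive.log", chunks)
write_log_buffered("buffered.log", chunks)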

I have made no personal attacks. That is something I'm not allowed (nor do I want) to do on here. All I said is that it is clear you're not a computer scientist.
Considering my first major was computer science, I'm sure you can understand why I saw this as an attack on my professional credibility.

Simply from the points of view that you adopt and the pseudo-technical knowledge you possess about filesystems.
Please. This is coming from someone who has never used fsck and does not know the Windows 7 Disk Defragmenter has "(x% fragmented)" on the main UI. These examples are precisely why I stated that if anyone's professional credibility should be in doubt, it should be yours.

Yes a workstation storing 9TB of data. Versus a Windows 7 desktop PC probably used for web surfing and e-mail? Like I said, not really a fair comparison.
I agree. It's pretty clear that the workstation in question should accrue massive fragmentation, whereas lighter, less IO-intensive use of a desktop should not. However, this is not the reality.

If the two systems are performing similar workloads... What block size is the NTFS partition(s) running? And what is the block size of the other filesystem to which you are comparing it? If the files are particularly large then block size will play a major role in determining how much fragmentation might occur.

The default NTFS block size is 4KB. Which, whilst optimised for desktop usage, is not so good for workstations and definitely not so good for servers. The block size is chosen at format time.
They are not performing similar workloads. I do have an SL desktop, however, which I use for general desktop tasks similar to the Windows box. The fragmentation theme persists here too.

The disk space isn't really the issue. Fragmentation doesn't care how much disk space you have in total, only how much of it is free. I would also not really describe a 9TB database as a "daily task".
I'll refer you to my earlier post where I said:
"Both frequently fluctuate between 10-40% free space."​

How is that emotional?
Calling someone a liar because they disabled a scheduled defrag is a pretty hysterical response. That requires great emotion.

By contrast, "*sigh*" comments are emotional. Along with bold and underlining.
It was an expression of how tiresome it was to have to drag you away from the personal attack on my character (calling me a liar because I changed a setting) and back to the topic.

On the bold and underlining: over the years I have found that, because my responses are lengthy, highlighting the key points gives a significantly higher chance of the other poster actually reading them. Often I have found that people fail to read the entire post and respond with a rebuttal I have already shown to be false.

In any case, I said you made up the 30% figure. Because, fundamentally, Windows 7 (and Vista) do not provide such % figures to the user anywhere. This is a fact.
Here's a screenshot of the dialog that does not exist:

Is it any surprise I have found your professional credibility so in doubt when you make so many obvious errors? This is the kind of mistake a new Windows user would make.

I can only assume you got this figure from a third-party defragmentation tool. But if that were the case, surely you would have mentioned it?
Nope, the one included with Windows 7 shown above.

So you've turned off NTFS' defragger and done a month's worth of desktop/workstation-oriented chores on it. Then you're surprised that 30% of your disk is fragmented? Where did this 30% figure come from? Was it calculated based upon the number of files split into at least 2 fragments? Because 2 fragments for a file that is, say, 1GB isn't really so bad... If a filesystem can manage that, it is probably doing quite well actually.
Based on the distribution of fragmentation on previous versions I'd say it would be likely to be an even spread. As the Windows 7 defrag tool does not actually provide this information ask me again in a month.

You're a bit trigger happy and premature with the "this is false" statements.

Just because there are large deployments of ZFS in desktop/workstation scenarios doesn't mean the administrator(s) care about fragmentation. Nor, probably, do they have any reason to. They've likely weighed up the likelihoods and decided that the impact would be negligible. Or maybe they're just confident that by the time it could become a problem ZFS will have a defragger available.
I gave you an example of why it was false, you gave me opinion. This is quite a recurring theme.

I did quantify my statement though. I gave precise reasoning as to why I considered it to be the most advanced.
Because it has a decent feature-set and isn't terrible in any standard use case I can think of. I can think of many cars that fit this description too. They are rarely the most desirable for any particular use. This is a strange way to assign the term "most advanced".

No it isn't contrary to my earlier post, at all. As I said, I backed up my statement with reasoning and logic. If you don't agree, that's fine.
You have provided no data and no sources to back up your claim. The most you have provided is a list of features and a mention of a few "general" use cases.

The Google FS assumption was an implication of yours, not an assumption on my part.

"Ext4 is different in that it actually supports online defragmentation. Presumably that is why Google are using it... because they're fed up with not being able to defragment their ext3 volumes without taking them offline?"

Sorry but this is not my implication. I never mentioned nor hinted at ext3 being used at Google.

And no you haven't said NTFS is useless. You've just said it fragments badly, is slow and that it is "one of Windows' biggest weaknesses".
It does fragment badly compared to Reiser4, ext3, ext4 (especially) and XFS. I cannot say either way for JFS. Any rudimentary use will make this immediately apparent.

It is slower than Reiser4, ext3, ext4, XFS and possibly (though I am not 100% sure on this) JFS for both large and small read, write, copy and move actions. There are plenty of benchmarks around if you want to check this. I would suggest trying the Phoronix Test Suite (this runs on both Windows and Linux) if you would like to test it personally.
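If you just want a feel for it without a full suite, a very crude sequential throughput sketch like the one below (my own throwaway, nothing like the Phoronix Test Suite's methodology, and heavily affected by OS caching) will at least give comparable ballpark numbers on two machines.

Code:
# Very crude sequential write/read timing sketch (nothing like a proper
# benchmark suite): results are dominated by OS caching unless you sync.
import os, time

PATH = "bench.tmp"             # hypothetical scratch file
CHUNK = b"\0" * (1024 * 1024)  # 1 MiB
COUNT = 512                    # 512 MiB total

start = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(COUNT):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())       # force the data out of the page cache
write_mbs = COUNT / (time.perf_counter() - start)

start = time.perf_counter()
with open(PATH, "rb") as f:
    while f.read(len(CHUNK)):
        pass
read_mbs = COUNT / (time.perf_counter() - start)

print(f"write: {write_mbs:.0f} MB/s, read: {read_mbs:.0f} MB/s")
os.remove(PATH)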

I called it a weakness based on the relatively slow progress it has made since introduction. Keep in mind filesystems such as ZFS were not even an idea at that time much less a reality.

As I said, this is the false and ignorant viewpoint that many *nix users take.
The problem with this statement is that you have done nothing either to show your claim to be true or to show my view to be false.

There is no vendetta here, other than to dispel the FUD surrounding NTFS that is propagated by some corners of the *nix fraternity.
Please. Having not even used fsck, I do not think you are qualified to accurately judge the honest strengths of *nix systems.

You pushed almost every filesystem under the sun above NTFS at one point. Quote: "ZFS, XFS, Reiser4, JFS, ext3, ext4 etc etc all perform fairly significantly better"
You're misunderstanding the quote. In raw performance they do come out above NTFS, some by quite a margin.


I don't fit the profile of a Microsoft zealot at all.
Then why did you, yourself feel the need to actively point out that you did not wish to come across as one? Even you were aware you fit the profile.
"NTFS is actually, probably (and I don't want to come across as a Microsoft zealot here), the most advanced file system that exists today."​

Oh and you did call me the T word. But don't let that get in the way.
Based on your statements (which seemed converse to reality), I genuinely believed you were, for which I apologise.

This is something you didn't make clear from the beginning anyway. Or maybe your position has changed and you didn't realise.
I shouldn't have to list everything I do not believe. The statements I made were in a specific context for a reason.

I would still dispute that a filesystem which has only been around for 5 years is "mature". As I said, it's not mature in all areas, only some. Therefore, overall, it is not mature.
Mature enough for critical use. An average desktop is certainly not critical like military use.

If that's not relevant then almost none of the ZFS / NTFS discussion is relevant.
Not at all. You just can't directly compare the usage statistics of them because the overall system that uses them is a vastly overriding issue when making the choice.


Edit:
Finally, I think all this "you're a liar" and credibility-questioning line of debate, which you started and I have responded to and continued, is OT and of no value. As such I am no longer going to entertain it. If you were wondering, this is one of the reasons I considered you to be a troll. I would like it if we could in future stick to the topic as outlined by the OP. I do not wish to be drawn again into a debate over personal and professional credibility. It was a mistake on my part to lower myself by responding to your statements on the subject. I withdraw those responses and apologise for any offence they may have caused, in the hope that this is reciprocated.

On that note, I hope that we can respectfully continue the debate.
 
i read the 1st page and skipped most of the 2nd.

as far as exploits go, isn't the biggest security difference between linux and windows down to the end users? linux users tend to have a clue, most windows users and the retarded masses who think it's a good idea to click on something that says you have a virus, install something and get screwed?

windows users who have a clue are very unlikely to get a virus/trojan since most methods of infection (afaik anyway) rely on the paddy virus method.

"here, run this and give yourself a virus. that'd be grand"
 
Windows has a large market share, when you have malicious motives you target the O/S with the largest market share. Also, the biggest security flaw with Windows is the user and no patch can fix that.
 
A patch over most user's eyes might stop them clicking every damn shiny, flashing banner there is! :p

The worst Windows [or any OS] users are the ones who think they know what they are doing and insist on spreading that "knowledge" about. It's usually not enough for them to make their own PCs insecure, they like to do it to all their friends too!
 
as far as exploits go, isn't the biggest security difference between linux and windows down to the end users? linux users tend to have a clue, most windows users and the retarded masses who think it's a good idea to click on something that says you have a virus, install something and get screwed?

windows users who have a clue are very unlikely to get a virus/trojan since most methods of infection (afaik anyway) rely on the paddy virus method.

If you're using Linux you're more likely to be a geek, but that can be a double edged sword. In theory you would expect Linux users to be more security conscious (or at least aware) but on the other hand a determined wannabe would expose himself to arguably as much malware as a similar Windows user.

There is also a usability aspect as well. Aside from not being an idiot, how do you make sure your Windows box is fairly secure? These days auto-update is on by default, so is the firewall, UAC and you are prompted to install antivirus software and keep it up to date. IE8 in protected mode is the default browser. So basically all the user has to do is install some AV software, which is relatively straightforward. A quick check in the 'security centre' will show you any glaring omissions.

So what about the standard desktop flavour of Linux 'for the rest of us'? The update manager is great, I really like it. The firewall is not enabled by default, and if you want a GUI it's a trip to the repo. You're given a normal user account (which is normal, and good). If you want antivirus - clamAV for example - you have to go find it with apt, and again if you want a GUI front end it's back to the repo. I actually don't bother with the GUI and just do a recursive scan of the root folder every now and again. The default browser is Firefox which has some good anti-phishing stuff in there etc. All this stuff works when it is set up, but the difficulty bar is much higher than Windows for the average user.

And OSX? Well, auto updating is on - whether Apple are fast enough at patching things is a matter of opinion. The firewall is off by default, but is quite easy to enable. The standard user account is an 'almost' root user, still requiring credentials to install stuff, which is a good thing. I must admit I've never installed AV on OSX, but I assume it is quite easy if you are familiar with installing things on Macs. The default browser is Safari, which I don't think is that hot from a security point of view. It also comes with a default setting of "open safe files after downloading" which is one of the stupidest things I've ever seen, and I can't believe it has survived the last three incarnations of OSX. Macs also have a sort of "security centre", though it is not as comprehensive as the Windows version; this could no doubt be updated easily.

So, all things being equal, you could easily argue that for the average user on the average desktop Windows is the most secure. However, the phenomenon of "security through obscurity" will completely skew the stats, and therefore common opinion.
 
This thread has been an interesting read so far. Mainly because of Nathan and Hoodlum's posts. Hopefully this can remain a debate and we can all learn some new stuff about file systems. :)
 
userbase tbh
*nix had an advantage in that most users didn't run as root/admin, so anything they ran could not install without asking permission, but Windows is like that now too.


apart from it seems that Windows software developers are very slowly writing "in this new way", so elevated permissions are needed for everything.


Feel free to shoot me down, but it just seems that the Linux way makes more sense; it may just be that I understand the Linux principle more.
 
Microsoft are pushing developers very hard to adopt better programming practices to break users out of running as Admins day to day. Developers have to change as Microsoft may not be so gracious as to include a handy elevation prompt in Windows 8.
 
Microsoft are pushing developers very hard to adopt better programming practices to break users out of running as Admins day to day. Developers have to change as Microsoft may not be so gracious as to include a handy elevation prompt in Windows 8.


I thought that UAC was Microsoft's attempt to combat this too? :)
 
The UAC is a convenience for users. It means they don't have to switch to a user with admin rights to install software and change certain system settings. Lazy devs will just abuse the fact most users will click "OK" or "Yes" on any prompt they get.

Microsoft could turn around and say in Windows 8 there will be no prompt at all. Software that needs to be elevated simply won't work in a standard user environment.
 
I've seen new, bespoke software written in the last two or three years that needs write access to Program Files just to run. I wanna throw it out of the nearest open window like a discus.
 
apart from it seems that Windows software developers are very slowly writing "in this new way", so elevated permissions are needed for everything.

I'm not quite sure I understand you. If you mean software developers are writing their software in such a way they require administrator rights all the time, it's actually now the opposite. Due to Windows having a long history of users running as an administrator since an administrator account has always been the default, software developers have always written their applications assuming they would have administrator rights.

However, since Windows Vista and the introduction of User Account Control, even though the default account is still an administrator, with User Account Control enabled, which it is by default, everyone is running with standard user rights. This means software developers will not have administrator rights by default and this in turn forces them to write their software so it works correctly with standard user rights - The primary purpose of User Account Control.
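As a trivial sketch of the kind of change this pushes developers towards (the application name and file are hypothetical, my own illustration): write settings and data to a per-user location such as %APPDATA% rather than the install directory under Program Files, so no elevation is ever needed at run time.

Code:
# Sketch of the kind of change UAC pushes developers towards: write user data
# to a per-user location instead of the (admin-only) install directory.
import os

APP_NAME = "ExampleApp"  # hypothetical application name

def user_data_dir():
    if os.name == "nt":
        # Per-user roaming application data on Windows,
        # e.g. C:\Users\<user>\AppData\Roaming
        base = os.environ.get("APPDATA", os.path.expanduser("~"))
    else:
        base = os.environ.get("XDG_CONFIG_HOME", os.path.expanduser("~/.config"))
    path = os.path.join(base, APP_NAME)
    os.makedirs(path, exist_ok=True)
    return path

# Standard user rights are enough for this; no elevation prompt required.
with open(os.path.join(user_data_dir(), "settings.ini"), "w") as f:
    f.write("[general]\nfirst_run=false\n")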

Mark Russinovich said:
Standard user accounts provide for better security and lower total cost of ownership in both home and corporate environments. When users run with standard user rights instead of administrative rights, the security configuration of the system, including antivirus and firewall, is protected. This provides users a secure area that can protect their account and the rest of the system. For enterprise deployments, the policies set by desktop IT managers cannot be overridden, and on a shared family computer, different user accounts are protected from changes made by other accounts.

However, Windows has had a long history of users running with administrative rights. As a result, software has often been developed to run in administrative accounts and take dependencies, often unintentionally, on administrative rights. To both enable more software to run with standard user rights and to help developers write applications that run correctly with standard user rights, Windows Vista introduced User Account Control (UAC). UAC is a collection of technologies that include file system and registry virtualization, the Protected Administrator (PA) account, UAC elevation prompts, and Windows Integrity levels that support these goals. I've talked about these in detail in my conference presentations and TechNet Magazine UAC internals article.

Mark Russinovich said:
The PA account was designed to encourage developers to write their applications to require only standard user rights while enabling as many applications that share state between administrative components and standard user components to continue working. By default, the first account on a Windows Vista or Windows 7 system, which was a full administrator account on previous versions of Windows, is a PA account. Any programs a PA user executes are run with standard-user rights unless the user explicitly elevates the application, which grants the application administrative rights. Elevation prompts are triggered by user activities such as installing applications and changing system settings. These elevation prompts are the most visible UAC technology, manifesting as a switch to a screen with an allow/cancel dialog and grayed snapshot of the desktop as the background.

Accounts created subsequent to the installation are standard user accounts by default that provide the ability to elevate via an "over the shoulder" prompt that asks for credentials of an administrative account that will be used to grant administrative rights. This facility enables a family member sharing a home computer or a more security-conscious user using a standard user account to run applications with administrative rights, provided they know the password to an administrative account, without having to manually switch to a different user logon session. Common examples of such applications include installers and parental control configuration.

When UAC is enabled, all user accounts—including administrative accounts—run with standard user rights. This means that application developers must consider the fact that their software won't have administrative rights by default. This should remind them to design their application to work with standard user rights. If the application or parts of its functionality require administrative rights, it can leverage the elevation mechanism to enable the user to unlock that functionality. Generally, application developers need to make only minor changes to their applications to work well with standard user rights. As the E7 blog post on UAC shows, UAC is successfully changing the way developers write software.

Inside Windows 7 User Account Control - Mark Russinovich
 