Why is Windows (apparently) less secure than other OSes? Also re: NTFS fragmentation.

There do exist very specialist file systems that put incredible efforts into minimising fragmentation (some even simply don't allow it at all). But they lack in other areas, such as performance, scalability and hell even just reliability (non-journaling).
Completely false. See ext4, which Google are using currently because of its high performance, high scalability and low fragmentation. As far as I know Google are the final word when it comes to real-world scalability.

Same as NTFS then. Except that NTFS has supported defragmentation since NT 4.0, and a native defragger has been included since Windows 2000.
This quote is sheer ignorance. It is not remotely comparable in terms of fragmentation. The mere fact you are stating this at all is proof enough you have not used ext3 or ext4 on a production system at all.

NTFS is actually, probably (and I don't want to come across as a Microsoft zealot here), the most advanced file system that exists today. No other file system provides journaling, transactional operations, fine-grained security ACLs with inheritance, shadow copies, file compression and/or encryption, user/group quotas ... whilst still maintaining high I/O performance. It works excellently for both servers and workstations/desktops with pretty much no changes between the two environments at all.
This whole quote is ludicrous. NTFS is one of the biggest weaknesses of Windows. The performance has long been poor in comparison to all modern competition and it has some of the worst fragmentation issues of any file system. It has a feature-set on par with many, however. ZFS, XFS, Reiser4, JFS, ext3, ext4 etc. all perform significantly better, with ZFS being massively superior in feature-set too.

Some links for you:

The most advanced (best is subjective) FS currently: http://en.wikipedia.org/wiki/ZFS
Its likely future replacement: http://en.wikipedia.org/wiki/Btrfs
And a basic feature overview: http://en.wikipedia.org/wiki/Comparison_of_file_systems

"Best File system that exists today?" Seriously? I thought it was supposed to be Mac users with the reality distortion field.
 
Making an assessment of all the operating systems side-by-side is extremely difficult - there are just too many factors involved. If I had to pick three, I would go with market share, average user competency and Microsoft's past coming back to haunt them as the main reasons why Windows appears less secure than its peers.

You could go on all day making arguments and counter-arguments, but the truth is that any OS will either adapt to its environment and remain successful or get binned. I actually think Windows has improved massively over the last decade and Microsoft have done a really good job.
 
Windows 7 does it automatically. User never needs to manually defrag.

EDIT: I have just read that the ext4 filesystem will come with a defrag utility. ext3 keeps fragmentation to a minimum, but it does get fragmented over time.

So does Vista in fact. I haven't manually defragged either of my installs. :)

It's just another one of those things that Mac and Linux users (Linux not as much) bring up when "comparing" the OSes. I thought about it again because of the thread asking for a defragger in the open source sub-forum yesterday, where the guy was told there was no need. So is it just another one of those fallacies that are trotted out by the fanboys? In which case, why do they not need to defrag their drives? It's just something I've been wondering for a while (and I can't be bothered to wade through a load of technical jargon that I have no idea what it means). :p
 
Completely false. See ext4, which Google are using currently because of its high performance, high scalability and low fragmentation. As far as I know Google are the final word when it comes to real-world scalability.
If you had a clue you'd know that ext4 (just like ext3) suffers from fragmentation to the same extent as NTFS. Ext4 is different in that it actually supports online defragmentation. Presumably that is why Google are using it... because they're fed up with not being able to defragment their ext3 volumes without taking them offline?


This quote is sheer ignorance. It is not remotely comparable in terms of fragmentation. The mere fact you are stating this at all is proof enough you have not used ext3 or ext4 on a production system at all.
Yes it is. NTFS uses a self balancing b-tree just like any half decent file system. The mere fact you state "It is not remotely comparable in terms of fragmentation" proves you don't have a clue. All these file systems have fragmentation effects. ZFS, ext3/4... If the volume is filled up to such an extent that the largest "gap" it has is 100MB, but you've got a program wanting to write 200MB of data... then fragmentation into at least 2 segments is inevitable. Providing the file system supports it of course.
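
To put a concrete number on that, here's a toy Python sketch (purely illustrative - this is not how NTFS, ext4 or any real filesystem actually allocates, just the point being made): once the largest free gap is smaller than the file being written, splitting into multiple extents is unavoidable whatever the filesystem.

Code:
# Toy largest-gap-first allocator, only to show the inevitability argument.
def allocate(file_mb, free_gaps_mb):
    """Place a file into free gaps, largest gap first; return the extents used."""
    extents = []
    remaining = file_mb
    for gap in sorted(free_gaps_mb, reverse=True):
        if remaining <= 0:
            break
        chunk = min(gap, remaining)
        extents.append(chunk)
        remaining -= chunk
    if remaining > 0:
        raise RuntimeError("volume too full")
    return extents

# Largest free gap is 100MB, but the program wants to write 200MB...
print(allocate(200, [100, 60, 40, 25]))  # -> [100, 60, 40]: at least 3 fragments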

This whole quote is ludicrous. NTFS is one of the biggest weaknesses of Windows. The performance has long been poor in comparison to all modern competition and it has some of the worst fragmentation issues of any file system. It has a feature-set on par with many, however. ZFS, XFS, Reiser4, JFS, ext3, ext4 etc. all perform significantly better, with ZFS being massively superior in feature-set too.
NTFS is one of Windows' strengths. Whilst other operating systems are constantly scratching their heads wondering and looking for a better file system for themselves, Windows has remained consistent with NTFS and has incrementally improved it over the years. For instance, it acquired Transactional support in NTFS 6.

Provide proof that NTFS is slow (you won't find any) and people here might believe your outrageous claims.

Some links for you:

The most advanced (best is subjective) FS currently: http://en.wikipedia.org/wiki/ZFS
Its likely future replacement: http://en.wikipedia.org/wiki/Btrfs
And a basic feature overview: http://en.wikipedia.org/wiki/Comparison_of_file_systems

"Best File system that exists today?" Seriously? I thought it was supposed to be Mac users with the reality distortion field.

ZFS is advanced, perhaps even on a par with NTFS, but as Apple and Linux are finding... its license is too restrictive.
 
If you had a clue you'd know that ext4 (just like ext3) suffers from fragmentation to the same extent as NTFS. Ext4 is different in that it actually supports online defragmentation. Presumably that is why Google are using it... because they're fed up with not being able to defragment their ext3 volumes without taking them offline?
More ignorance.
1) Google did not use ext3.
2) Online defragmentation in ext4 is not what I would call "production ready" in any case. No enterprise would use it; had you used it you would know this.

Yes it is. NTFS uses a self balancing b-tree just like any half decent file system. The mere fact you state "It is not remotely comparable in terms of fragmentation" proves you don't have a clue.
Firstly, NTFS uses a B+tree, not a B-tree.
The fact that I've used both at home and in an enterprise setting, and experienced a massive fragmentation disparity between them, shows the clear difference. Since you have not noticed this at all, I can only assume you have never used ext4. It really is blatantly obvious. I would suggest experiencing the things you are commenting on in future.

NTFS is one of Windows' strengths. Whilst other operating systems are constantly scratching their heads wondering and looking for a better file system for themselves, Windows has remained consistent with NTFS and has incrementally improved it over the years. For instance, it acquired Transactional support in NTFS 6.
That's a strange way to view progress. While other systems have overhauled their filesystems in search of improvement, NTFS has stagnated when it comes to advanced features comparable to ZFS, btrfs and similarly advanced filesystems. NTFS has a decent (but never market leading) feature-set overall (not too dissimilar to ext4, Reiser4, JFS, XFS) but it has never been as performant as the competing filesystems (see the list in the first post, pick any of them).

Provide proof that NTFS is slow (you won't find any) and people here might believe your outrageous claims.
"Best File system that exists today?" - The outrageous claim is yours, to which I was replying. I do not subscribe to your "disprove me" line of argument. It is no different than "prove god does not exist".

Back up your claims or refrain from making them.

ZFS is advanced, perhaps even on a par with NTFS
Never mind, I've misunderstood the tone, I thought your post was serious. NTFS is not even close to the feature-set of ZFS. Please read the Wiki link on ZFS for an overview.

Simple Example: You have a file in two locations on one disk.

With NTFS the file is saved in two locations.

With ZFS it is saved in one location. All free space is available to create snapshots. Think of this as system restore that isn't just tacked on because of file system limitations. It is integral to the filesystem, with no performance loss and no duplication of data outside the snapshots, saving you space and adding redundancy.​
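
If it helps, here is a minimal Python sketch of the copy-on-write idea behind snapshots (nothing like the real ZFS on-disk format, just the concept): a snapshot is only a saved block map, blocks are shared with the live file, and only blocks that get overwritten afterwards are ever duplicated.

Code:
# Minimal copy-on-write sketch (illustrative only, not the real ZFS format).
class CowVolume:
    def __init__(self, data_blocks):
        self.pool = dict(enumerate(data_blocks))  # "on-disk" blocks
        self.live = list(self.pool)               # live file's block map
        self.snapshots = {}

    def snapshot(self, name):
        # A snapshot copies only the block *map*, not the data blocks.
        self.snapshots[name] = list(self.live)

    def write(self, index, data):
        # Copy-on-write: the old block stays put (snapshots still reference it),
        # the new data goes into a freshly allocated block.
        new_id = max(self.pool) + 1
        self.pool[new_id] = data
        self.live[index] = new_id

    def read(self, block_map):
        return [self.pool[b] for b in block_map]

vol = CowVolume(["A", "B", "C"])
vol.snapshot("before")
vol.write(1, "B2")
print(vol.read(vol.live))                  # ['A', 'B2', 'C']
print(vol.read(vol.snapshots["before"]))   # ['A', 'B', 'C']
print(len(vol.pool))                       # 4 blocks stored, not 6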

On the subject of licensing issues check out btrfs. It should be both technically superior to ZFS and have no such licensing issues.

As I came to your last statement I realised that you were trolling (or genuinely know nothing of NTFS/ZFS) so I'm not sure why I bothered.
 
I stopped using IE after 6.

This new sandboxed environment, is that just a normal IE with ActiveX and Flash and everything useful disabled?

Yeah, as I thought. I agree that the other browsers are just as secure or insecure. I can't wait till everyone on the net stops using Flash, I hate the codec, but that is another topic.

I was just trying to be funny when I said "Windows 7 is even worse, they have built-in security flaws. M$ say it is a feature. I believe they call it Internet Explorer..."

But I do stand by my claim that Windows 7, or Server 2008, or whichever release, is insecure.

So just to clarify, you think that the standard out-of-the-box IE in both Vista and 7, which is sandboxed, is not capable of running Flash or any other ActiveX control?

Since that's quite a strong opinion you have about the latest version of Windows, could you please give some detail, not just googled links, as to why you feel that way. Don't be afraid of going into technical detail; I'd be interested to read why you think attack vector mitigation techniques, such as ASLR and DEP, and split token authentication are useless or easily bypassable. Maybe there is something to this that I, and the rest of the security community, haven't thought of.
 
More ignorance.
1) Google did not use ext3.
2) Online defragmentation in ext4 is not what I would call "production ready" in any case. No enterprise would use it; had you used it you would know this.
That was your implication, not mine. I don't really care what Google use.

Again though, you're missing the point. It's not that its "online defragger" isn't production ready. It's the mere fact it has one at all. It implies the authors feel their previous version (ext3) didn't address these issues. So they have started (albeit apparently only just) to do something about it in their ext4 version.

Firstly, NTFS uses a B+tree, not a B-tree.
The fact that I've used both at home and in an enterprise setting, and experienced a massive fragmentation disparity between them, shows the clear difference. Since you have not noticed this at all, I can only assume you have never used ext4. It really is blatantly obvious. I would suggest experiencing the things you are commenting on in future.
Hilarious. NTFS uses a B+tree just like Reiser and BTRFS. Why are you spinning this to be a disadvantage? B+ and B- trees are similar data structures - I didn't feel the need to distinguish between the two to someone who is clearly not a computer scientist. Is this another factoid you've picked up from Wikipedia and thought you'd have a pop at me with?

How are you supposedly measuring fragmentation on these file systems? Yes NTFS comes with a tool to do it. But as far as I'm aware the others don't. These are well documented limitations of the filesystems.

And remember: just because a filesystem doesn't provide tools to measure fragmentation or perform defragmentation, it doesn't mean the FS doesn't suffer from it. NTFS 3.51 was the same... it supported neither natively.

That's a strange way to view progress. While other systems have overhauled their filesystems in search of improvement, NTFS has stagnated when it comes to advanced features comparable to ZFS, btrfs and similarly advanced filesystems. NTFS has a decent (but never market leading) feature-set overall (not too dissimilar to ext4, Reiser4, JFS, XFS) but it has never been as performant as the competing filesystems (see the list in the first post, pick any of them).

Again, provide benchmarks which prove NTFS is slower than the "hip and cool" names you keep listing. Then people here might believe you.

NTFS development has slowed, but has by no means "stagnated". Not many file system maintainers can be bothered to implement a potentially breaking change like Transactional support at a version number as high as 6.

"Best File system that exists today?" - The outrageous claim is yours, to which I was replying. I do not subscribe to your "disprove me" line of argument. It is no different than "prove god does not exist".

It's funny that people like you just latch onto a soundbite like this. And even though you want to challenge it you can only provide links to Wikipedia about niche file systems where one doesn't even have a Stable release yet (BTRFS) and the other (ZFS) has only been around in a Stable condition for a couple years - and even though its authors acknowledge it will suffer fragmentation, they provide (at least the last time I checked) no tools to keep it in check.

Back up your claims or refrain from making them.

Never mind, I've misunderstood the tone, I thought your post was serious. NTFS is not even close to the feature-set of ZFS. Please read the Wiki link on ZFS for an overview.

Simple Example: You have a file in two locations on one disk.

With NTFS the file is saved in two locations.

With ZFS it is saved in one location. All free space is available to create snapshots. Think of this as system restore that isn't just tacked on because of file system limitations. It is integral to the filesystem, with no performance loss and no duplication of data outside the snapshots, saving you space and adding redundancy.


On the subject of licensing issues check out btrfs. It should be both technically superior to ZFS and have no such licensing issues.

As I came to your last statement I realised that you were trolling (or genuinely know nothing of NTFS/ZFS) so I'm not sure why I bothered.​


What is wrong with my last statement? Its license is too restrictive. This is a fact and it is stopping it from becoming a mainstream filesystem. But don't let that get in the way of this silly little crusade that so many from the *nix fraternity seem to embark upon.

I don't deny ZFS has certain features that make it quite special. But it's not a mainstream filesystem. NTFS is far more advanced and better than the file systems that OSX (HFS+) and Linux (ext3/4) use on a daily basis.

When ZFS is more mature, less encumbered by its license and has a defragmentation tool, it may supersede NTFS. But I doubt ZFS can ever properly mature until its license is sorted out.

PS: I would think it's pretty clear who, if anyone, is the troll here. Who was the one that turned up in the thread to throw a load of hot air around without substance? But just remember that I didn't bring up the subject of "trolling". It was you.​
 
I had a dream today, that all the little white filesystems and all the little black filesystems, held hands together ;(
 
That was your implication, not mine. I don't really care what Google use.
I didn't "imply" anything. This is your excuse for another error on your part?

I merely stated the fact that Google use ext4 which has the properties that you were claiming could not co-exist. Why don't you tell Google their data is wrong?

Again though, you're missing the point. It's not that its "online defragger" isn't production ready. It's the mere fact it has one at all. It implies the authors feel their previous version (ext3) didn't address these issues. So they have started (albeit apparently only just) to do something about it in their ext4 version.
Of course it should have an online defragmenter, file fragmentation still exists.

Hilarious. NTFS uses a B+tree just like Reiser and BTRFS. Why are you spinning this to be a disadvantage?
Please do not project your emotions onto the topic. Pointing out another factual error in your statement is not the same as "spinning" an advantage as a disadvantage (which clearly isn't possible).

B+ and B- trees are similar data structures - I didn't feel the need to distinguish between the two to someone who is clearly not a computer scientist. Is this another factoid you've picked up from Wikipedia and thought you'd have a pop at me with?
The internet detective strikes! What next? Statements about my mother? Please...

How are you supposedly measuring fragmentation on these file systems? Yes NTFS comes with a tool to do it. But as far as I'm aware the others don't. These are well documented limitations of the filesystems.

And remember: just because a filesystem doesn't provide tools to measure fragmentation or perform defragmentation, it doesn't mean the FS doesn't suffer from it. NTFS 3.51 was the same... it supported neither natively.
See, this is why I'm replying: you are not aware. On ext3 or ext4 run fsck, for example. If we should be questioning anyone's credentials it should be yours ;) This is *nix for dummies, basic.
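
For anyone wanting to try this themselves: e2fsck prints a "% non-contiguous" figure in its summary line, which is the number people usually quote for ext3/ext4 fragmentation. A rough Python wrapper might look like the sketch below (/dev/sdb1 is just a placeholder; run it as root, ideally against an unmounted volume - the -n flag is read-only, but checking a mounted filesystem can still give odd results).

Code:
# Hedged sketch: pull the "(X.X% non-contiguous)" figure out of e2fsck's
# summary line, e.g. "/dev/sdb1: 11/131072 files (9.1% non-contiguous), ...".
import re
import subprocess

def ext_fragmentation_percent(device):
    result = subprocess.run(["e2fsck", "-f", "-n", device],
                            capture_output=True, text=True)
    match = re.search(r"\(([\d.]+)% non-contiguous\)", result.stdout)
    return float(match.group(1)) if match else None

print(ext_fragmentation_percent("/dev/sdb1"))  # placeholder device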

Again, provide benchmarks which prove NTFS is slower than the "hip and cool" names you keep listing. Then people here might believe you.
So you don't have to prove your claim at all then? I have to disprove it? Brilliant logic there. What church indoctrinated you into this faith based thought?

As I said before, you clearly have not used what you are trying to discuss. I work with extremely heavy datasets, I have a 9TB ext4 volume over LVM, I have not formatted for coming up to 2 years now. I have never defragmented it and it is sitting at 2% fragmentation. My home Windows 7 PC manages to get to 20-30% fragmentation within a month or two with less use. The difference is immediately apparent to anyone with even slight experience. Both frequently fluctuate between 10-40% free space. I'm using this as the example because you can test this at home and see for yourself.

NTFS development has slowed, but has by no means "stagnated". Not many file system maintainers can be bothered to implement a potentially breaking change like Transactional support at a version number as high as 6.
In comparison to advancements in filesystems like ZFS (the context I used) it has.

You somehow managed to view massive overhauls as "constantly scratching their heads wondering and looking for a better file system for themselves". Creating != looking. It is just as easy to see gradual improvement as stagnation when stable examples of a far superior filesystem exist, in use on critical systems.

It's funny that people like you just latch onto a soundbite like this.
Yeah it's terrible that people challenge you when you say something ridiculous with no basis and a refusal to back it up.

And even though you want to challenge it you can only provide links to Wikipedia about niche file systems where one doesn't even have a Stable release yet (BTRFS) and the other (ZFS) has only been around in a Stable condition for a couple years
Apparently I have to restrict my examples because you don't like them? Surprise! Because it doesn't fit your argument.

Please. ZFS has been around for years, as you have said yourself, and I wouldn't call its use in Solaris "niche" considering the use it gets, especially in the US.

What is wrong with my last statement? Its license is too restrictive.
That was more a reference to the "ZFS is advanced, perhaps even on a par with NTFS" statement which is simply ridiculous for anyone with basic knowledge of NTFS / ZFS. Obvious trolling, nobody in the IT industry could be that oblivious.

This is a fact and it is stopping it from becoming a mainstream filesystem. But don't let that get in the way of this silly little crusade that so many from the *nix fraternity seem to embark upon.
Please keep your personal vendetta against *nix users out of this. I use Windows too.

I don't deny ZFS has certain features that make it quite special. But it's not a mainstream filesystem.
So you make a silly claim about NTFS. I provide examples which far surpass it in many ways and you don't like them because they don't fit your invisible criteria. You wouldn't need to do this if you had researched the topic first.

NTFS is far more advanced and better than the file systems that OSX (HFS+) and Linux (ext3/4) use on a daily basis.
This is pretty subjective in the case of ext4. I notice you omitted JFS / XFS.

When ZFS is more mature, less encumbered by its license and has a defragmentation tool, it may supersede NTFS. But I doubt ZFS can ever properly mature until its license is sorted out.
It's already fairly mature considering it's used for critical systems. It's pretty clear btrfs will replace it in the future, especially as Oracle bought Sun and intend to continue with btrfs development.

PS: I would think it's pretty clear who, if anyone, is the troll here. Who was the one that turned up in the thread to throw a load of hot air around without substance? But just remember that I didn't bring up the subject of "trolling". It was you.
Yes, by saying "that's ridiculous, here is why" and proving many of your statements to be riddled with a lack of knowledge and errors, that makes me a troll.
 
I had a dream today, that all the little white filesystems and all the little black filesystems, held hands together ;(

Hehe, I just had to bite after the "NTFS is actually, probably (and I don't want to come across as a Microsoft zealot here), the most advanced file system that exists today" quote. It was so ludicrous it was begging for a response :p

Gees! What have I stirred up?! It's just a file system! :p

On the subject of your OP: Macs are generally less secure than Vista/Win 7. They don't get exploited because they don't really have a presence on the scale of Windows (mostly desktop, some servers - often for Exchange) or Linux (servers; desktops only in the scientific community). The patch time for exploits alone is a huge problem. I can understand the business logic of not making security a priority if no one bothers to exploit your code, though.

You should be mostly fine as a normal desktop user as long as you:
1) Have a good firewall, on at all times.
2) Keep up to date.
3) Have a modern browser. Any will do. IE 8 is actually pretty decent in terms of security. Firefox, Opera, whatever you prefer.
4) Stick to known software.
5) Avoid piracy.
Anti-viruses are not especially successful because:
1) If you need one, often your security has already been compromised.
2) They often do not have definition updates for the latest threats until the worst of it has passed, and
3) they can often be inhibited by the viruses themselves.

I have to say I was impressed with the antivirus supplied with Security Essentials, though. Unlike the others I have tried, it does not grind the PC to a halt and it seems to offer decent protection.
 
So which filesystem is better?

[Image: harryhillfightr.jpg]
 
I feel uneasy with the notion of measuring the level of security by how quickly holes are patched.

Outside of the OpenBSDs of the world (or similarly slow-moving, highly security-focused systems), the reality is that almost any modern system is open to a plethora of exploits, so it does have importance. Look at the Java stack on a Mac for a prime example: laughably easy to exploit, incredibly out of date, with known vulnerabilities. This is not true on any other system that I am aware of. It isn't so much the time (in hours or days) that concerns me. It is when known exploits are left unfixed for months, even years on end. That really is an issue.

I think it's safe to say you win this thread :p
 
The whole issue of users using admin accounts was in large part down to poor application programmers (whose software required admin privileges to run) and to users being lazy and ill-informed when using admin accounts.

To be fair, whilst software developers have contributed to the administrative model Windows users have been stuck in, Microsoft themselves have very much played their role in it as well. Administrator accounts have always been the default in Windows, and whilst it was possible to use a standard user account, certain Windows operations required administrative privileges unnecessarily; changing the time zone immediately comes to mind. This, along with the fact that many applications simply wouldn't work as a standard user, made it very difficult to successfully use a standard user account on a daily basis.

Fortunately though, as of Windows Vista and due to the implementation of User Account Control (Cheer!!!!!! :p:D), we are now slowly progressing out of this administrative model. We are not quite there yet, but I expect this will finally change in the next couple of Windows releases.

One thing that would be nice to see in the future would be if processes were truly isolated from other processes running at different integrity levels in a single user account. In fact, this was actually an original goal in Windows Vista but was dropped in the end due to application compatibility as well as usability.

Mark Russinovich said:
The Windows Integrity Mechanism and UIPI were designed to create a protective barrier around elevated applications. One of its original goals was to prevent software developers from taking shortcuts and leveraging already-elevated applications to accomplish administrative tasks. An application running with standard user rights cannot send synthetic mouse or keyboard inputs into an elevated application to make it do its bidding or inject code into an elevated application to perform administrative operations.

Windows Integrity Mechanism and UIPI were used in Windows Vista for Protected Mode Internet Explorer, which makes it more difficult for malware that infects a running instance of IE to modify user account settings, for example, to configure itself to start every time the user logs on. While it was an early design goal of Windows Vista to use elevations with the secure desktop, Windows Integrity Mechanism, and UIPI to create an impermeable barrier—called a security boundary—between software running with standard user rights and administrative rights, two reasons prevented that goal from being achieved, and it was subsequently dropped: usability and application compatibility.

Inside Windows 7 User Account Control - (Mark Russinovich)

It will be interesting to see Microsoft's plans for future versions of Windows in this regard. I'm certainly looking forward to seeing how Windows evolves in the future.

As of Windows Vista, the default configuration of Windows is fairly good, and when Windows finally ships with a standard user account by default, the out-of-the-box configuration is going to be very good. However, at the end of the day, it doesn't matter how *technically* secure an operating system is for people who aren't particularly computer literate. Unless their system is truly locked down (at which point they probably wouldn't be able to use it as they would like, and would be calling the person who set it up every other minute), a user who simply hasn't been educated about security will always run into problems. If they want to see the naked dancing pigs, they are eventually going to see the naked dancing pigs. There needs to be a big shift in user education more than anything.
 
You can apply security to Linux systems on almost twice as many levels as on Windows, e.g. even down to the kernel level with patches such as PaX and grsecurity, or by modifying kernel parameters through sysctl, etc.
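
For the curious, those sysctl parameters are just files under /proc/sys, so they can be inspected (or, as root, changed) with nothing more than file reads and writes. A quick sketch - the two knobs below are real but picked arbitrarily as examples:

Code:
# Minimal sketch: "kernel.randomize_va_space" maps to
# /proc/sys/kernel/randomize_va_space. Reading works as a normal user;
# writing requires root.
from pathlib import Path

def get_sysctl(name):
    return Path("/proc/sys", *name.split(".")).read_text().strip()

def set_sysctl(name, value):
    Path("/proc/sys", *name.split(".")).write_text(str(value))

print("ASLR level:      ", get_sysctl("kernel.randomize_va_space"))
print("TCP SYN cookies: ", get_sysctl("net.ipv4.tcp_syncookies"))
# set_sysctl("kernel.randomize_va_space", 2)   # full ASLR; root only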

Linux security updates don't require patch days or even reboots. Updates can be installed on the fly, and even the live running kernel can be patched on the fly with Ksplice.

I still prefer Windows on the desktop, but for servers it's got to be Linux every time (unless it's Exchange!)
 
I didn't "imply" anything. This is your excuse for another error on your part?
You implied Google switched to ext4. I've made no errors. In any case, I really doubt fragmentation is a concern for Google at all. Their BigTable database system would almost certainly have a very well tuned file allocation strategy which almost entirely takes the load of making those decisions off the filesystem.

I merely stated the fact that Google use ext4 which has the properties that you were claiming could not co-exist. Why don't you tell Google their data is wrong?
Ext4 isn't really an example of a file system that puts incredible efforts into minimising fragmentation. It was a bad example on your part.

FYI what I said was: "There do exist very specialist file systems that put incredible efforts into minimising fragmentation (some even simply don't allow it at all). But they lack in other areas, such as performance, scalability and hell even just reliability (non-journaling).".

I stand by that. Ext4 is a terrible example to provide to try to counteract that statement; in fact, it has almost nothing to do with what I was talking about. I was talking about specialist file systems - the sort of thing they use in PVR set-top boxes, and at the other extreme, the Mars Rovers. I didn't have a problem with you misunderstanding and thinking I was taking a pot shot at Ext3/4, ZFS etc. But that was ultimately your mistake, not mine. Those filesystems are not what I'd consider specialist at all. A specialist filesystem generally makes large compromises in certain areas in order to afford gains in others.

Of course it should have an online defragmenter, file fragmentation still exists.
Good. You finally concede it.

Please do not project your emotions onto the topic. Pointing out another factual error in your statement is not the same as "spinning" an advantage as a disadvantage (which clearly isn't possible).
What on earth? That would almost be funny if you weren't being serious.

The internet detective strikes! What next? Statements about my mother? Please...
Again, what the hell?! This is getting pretty silly now, I hope your standard of posting picks up soon.

See, this is why I'm replying: you are not aware. On ext3 or ext4 run fsck, for example. If we should be questioning anyone's credentials it should be yours ;) This is *nix for dummies, basic.
And how do you know that fsck is using the same criteria for detecting fragmentation as NTFS's standard defrag tool? Hint: there are a lot of defraggers available for Windows, and they will all return different % figures for the same volume. Differences between OS and filesystem are therefore certainly going to exaggerate the subtle differences in the way that fragmentation is calculated.
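
To illustrate the point (with made-up numbers, not measurements from any real tool): two perfectly reasonable definitions of "% fragmentation" can disagree wildly about the exact same layout, so figures from different tools on different filesystems aren't directly comparable.

Code:
# Hypothetical volume: each entry is the number of extents one file occupies.
extents_per_file = [1, 1, 1, 1, 1, 1, 1, 1, 4, 6]

files = len(extents_per_file)
fragmented_files = sum(1 for e in extents_per_file if e > 1)
total_extents = sum(extents_per_file)

# Metric A: share of files that are in more than one piece.
metric_a = 100.0 * fragmented_files / files                  # 20.0%

# Metric B: share of extents that are "excess" (beyond one per file).
metric_b = 100.0 * (total_extents - files) / total_extents   # ~44.4%

print(f"fragmented files: {metric_a:.1f}%, excess extents: {metric_b:.1f}%")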

So you don't have to prove your claim at all then? I have to disprove it? Brilliant logic there. What church indoctrinated you into this faith based thought?
Hmm, well no? Not really, no I don't. I'm not making outlandish claims that Ext3/4 perform better than NTFS or that they don't suffer from fragmentation to the same extent that NTFS does. All I've said was that NTFS is probably the most advanced file system available today. And it probably is, once you take into account the licensing, stability, fragmentation and scalability concerns of the alternatives (in the context of all 3 key environments - server, desktop and workstation).

As I said before, you clearly have not used what you are trying to discuss. I work with extremely heavy datasets, I have a 9TB ext4 volume over LVM, I have not formatted for coming up to 2 years now. I have never defragmented it and it is sitting at 2% fragmentation. My home Windows 7 PC manages to get to 20-30% fragmentation within a month or two with less use. The difference is immediately apparent to anyone with even slight experience. Both frequently fluctuate between 10-40% free space. I'm using this as the example because you can test this at home and see for yourself.
But that's a stupid way to compare file systems: comparing NTFS running a desktop/workstation environment against a server storing large databases on Ext4. It's hardly any surprise NTFS scored less favourably in this scenario. A database server is highly optimised to avoid fragmentation and generally doesn't do lots of small file allocations like a typical desktop application does.

Unfortunately I don't have access to a "9TB ext4 volume" so no I can't "test this at home". But I fully expect that Ext4 would handle it fine, as would NTFS given a chance to handle the same workload.

Windows 7 defrags itself automatically (by default). So how you've got 20-30% fragmentation on it is quite some achievement. Not that Windows 7 reports a % figure anywhere regarding fragmentation - so I guess you just made it up.

In comparison to advancements in filesystems like ZFS (the context I used) it has.

You somehow managed to view massive overhauls as "constantly scratching their heads wondering and looking for a better file system for themselves". Creating != looking. It is just as easy to see gradual improvement as stagnation when stable examples of a far superior filesystem exist, in use on critical systems.
ZFS is still missing pieces of its puzzle before it can be taken seriously for all environments (server, desktop, workstation). It needs a defragger.

Yeah it's terrible that people challenge you when you say something ridiculous with no basis and a refusal to back it up.
I don't mind discussion and even argument but the way you have gone about discussing this is completely over the top. That is why it is funny. Because I've seen it happen all too many times over the years.

Apparently I have to restrict my examples because you don't like them? Surprise! Because it doesn't fit your argument.

Please. ZFS has been around for years, as you have said yourself, and I wouldn't call its use in Solaris "niche" considering the use it gets, especially in the US.
I've nothing against any of the filesystems you've suggested (to do so would be ridiculous). They are all fair suggestions. However, they don't really disprove my point that "NTFS is probably the most advanced file system available today". In fact, your suggestions have bolstered it, because they have given me the chance to highlight some of their shortcomings.

ZFS has been around for about 5 years. And not for all of those years was it stable or as feature-packed as it is today.

That was more a reference to the "ZFS is advanced, perhaps even on a par with NTFS" statement which is simply ridiculous for anyone with basic knowledge of NTFS / ZFS. Obvious trolling, nobody in the IT industry could be that oblivious.
Well that is your personal opinion. I'm only interested in discussing facts and logic here.

ZFS certainly has some killer features which NTFS does not. But still, it has other drawbacks that currently prevent it from hitting the big time. Until it gets a proper defragmentation tool, for example, it's never going to be a big hitter in the desktop/workstation space (where fragmentation is often a problem).

The reason I agreed that ZFS is probably on a par with NTFS is because whilst it has certain advantages over NTFS (in terms of potential featureset), it also has some shortcomings (which I have pointed out elsewhere). Therefore, it seems fair to balance it by saying that the two are "on par". I don't think this is ridiculous and I think you would find most true IT professionals would agree with me. Although I doubt any IT professionals exist that, through that profession alone, are qualified to compare filesystems in such detail. Yes they can look down a feature matrix and make a decision but they will glaze over pretty quickly if the subject of self-balancing tree data structures or delayed-write caching came up.

Please keep your personal vendetta against *nix users out of this. I use Windows too.
It's just an observation. You're not the only *nix user to have blindly assumed that NTFS is useless and that everything else is better. At one point you were even pushing ext3 above NTFS, though you seem to have conceded on those claims now, thankfully.

You certainly "fit the profile", as it were.

So you make a silly claim about NTFS. I provide examples which far surpass it in many ways and you don't like them because they don't fit your invisible criteria. You wouldn't need to do this if you had researched the topic first.
A claim which I have backed up numerous times with reasoning and logic. All you've done is bandy about, in a rather chaotic fashion, the names of various alternative file systems - which I have agreed are mostly good suggestions (especially ZFS, but definitely not ext3), but which I still disagree are currently in a position to supersede NTFS as the "most advanced". Watch this space though.

This is pretty subjective in the case of ext4. I notice you omitted JFS / XFS.
Ext4 is lacking an online defragger. This prevents it from being a "serious contender" in the desktop/workstation space.

It's already fairly mature considering it's used for critical systems. It's pretty clear btrfs will replace it in the future, especially as Oracle bought Sun and intend to continue with btrfs development.
Yes it is mature, in the server space. But not in the desktop/workstation space.

It's understandable that the ZFS authors don't very much prioritise the development of a defragger - because their current user base simply don't need one for the usage scenarios it is currently being used for. And that's fine.

But until ZFS has proven itself more in the desktop/workstation space, it just isn't going to supersede NTFS in the minds of most. That is what gives NTFS the edge at the moment - it works well in all usage scenarios.

Yes, by saying "that's ridiculous, here is why" and proving many of your statements to be riddled with a lack of knowledge and errors, that makes me a troll.
Well, you have a very confrontational posting style and come across as though you're a bit hot under the collar. That much is obvious to most observers here. I didn't say the T word. You did.
 