Why Aren't There Better User Agent String Standards?

I know why IE started using the Mozilla stamp, but why does it continue to use it?

Why don't the browser teams come together and come up with a better standard?

For example:
IE/9 (Windows 7; 64bit;)

Instead of:
Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64;)



Or
Firefox/4.0.1 (Windows 7; en-GB;)

To replace:
Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2.17) Gecko/20110420 Firefox/4.0.1
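
To make the comparison concrete, here's a rough sketch (in Python, with made-up regexes and field names, so purely illustrative) of how much simpler a parser's job would be with the proposed format versus the current one:

import re

# UA strings from the examples above.
CURRENT_UA = ("Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2.17) "
              "Gecko/20110420 Firefox/4.0.1")
PROPOSED_UA = "Firefox/4.0.1 (Windows 7; en-GB;)"

def parse_proposed(ua):
    # Proposed format: one product token, then details in parentheses.
    m = re.match(r"(?P<browser>[\w ]+)/(?P<version>[\d.]+) \((?P<details>[^)]*)\)", ua)
    return m.groupdict() if m else None

def parse_current_firefox(ua):
    # Today you need to know that the real browser token comes *after*
    # the legacy "Mozilla/5.0" stamp and the engine token.
    version = re.search(r"Firefox/(?P<version>[\d.]+)", ua)
    details = re.search(r"\((?P<details>[^)]*)\)", ua)
    if not version:
        return None
    return {"browser": "Firefox",
            "version": version.group("version"),
            "details": details.group("details") if details else ""}

print(parse_proposed(PROPOSED_UA))
print(parse_current_firefox(CURRENT_UA))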
 
Yeah, it would be nice. I hate the HTML DTD; thankfully with HTML5 they have made it much simpler.
 
What would break?

Millions of legacy web servers that would no longer see the browser as being (old) 'Mozilla' compatible.

The problem is that the browser would get the blame for failing to render a simple web page, not the server, because the end user never sees the code and the decisions the server made. All they see is a badly formatted page, or an unnecessary and plainly wrong 'oops, you're not running a good browser' response.

That's why browser makers are wary of imposing new user agent strings, and it's probably the same reason IE continues to use the Mozilla stamp, as you suspected in the first place.
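
Just to make that concrete, a lot of old server-side sniffing amounts to little more than a substring check (the snippet below is a made-up Python example, not any real server's code), so a clean UA like 'IE/9 (Windows 7; 64bit;)' would get lumped in with ancient or 'unsupported' browsers:

def legacy_browser_check(user_agent):
    # Typical 1990s-era sniffing (hypothetical example): anything that
    # doesn't announce itself as "Mozilla" gets the downgraded page.
    if "Mozilla" in user_agent:
        return "full page"
    return "'oops, you're not running a good browser' page"

print(legacy_browser_check("Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64;)"))
print(legacy_browser_check("IE/9 (Windows 7; 64bit;)"))  # wrongly downgraded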
 
Eh? You seem confused; user agent strings have nothing to do with servers or how they deliver content.

In an ideal world that would be true. Unfortunately there are plenty of HTTP servers out there which will inspect the User-Agent of a request and then form a customised response for that agent.

You can see it today with things like smartphones and iPads. Sites "detect" when you're using a mobile device and offer up a different/reduced experience.
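
As a rough illustration, that kind of UA-based content negotiation is often just a substring check on the User-Agent header before picking a template (hypothetical Python sketch; the marker list and template names are made up):

MOBILE_MARKERS = ("iPhone", "iPad", "Android", "Mobile")

def choose_template(user_agent):
    # Same URL, different response, decided purely by the User-Agent header.
    if any(marker in user_agent for marker in MOBILE_MARKERS):
        return "mobile.html"   # reduced experience
    return "desktop.html"      # full experience

print(choose_template("Mozilla/5.0 (iPad; CPU OS 4_3 like Mac OS X) AppleWebKit/533.17.9"))
print(choose_template("Mozilla/5.0 (Windows NT 6.1; Win64; rv:2.0.1) Gecko/20100101 Firefox/4.0.1"))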
 
You could take this one step further and ask when they are going to throw out the failed markup language and the separation-of-content-and-layout experiment that never worked. Many of the web's standard protocols are like natural languages that evolved over time; they did not necessarily start out as well-thought-out designs.

Why, when you request a page from a web server, does it not respond with a single, discrete bundle of data that comprises the entire page? Instead we get a token trail that requires the client to explicitly request the rest of the data, such as images, one item at a time.
 
Awesome, let's make the web even slower by not being able to cache and reuse things like images and other embedded content, and instead re-requesting them for every page of the same site, even if it's just a 2K piece of HTML that changes on each hit :D Just an example.

They don't have to fetch them one at a time, of course; traditionally the bottleneck was a limit of 2 HTTP connections at once, because of HTTP 1.0 proxy servers (and a few web servers) that might freak out with more than that, but page loading still feels and behaves quite 'serial' in practice.

Slowly though, as the browser competition between Google, Mozilla and Microsoft continues, they've been willing to be a little less cautious about this and have started to open the floodgates, allowing more concurrent requests in a bid to prove that their browser is the fastest and so gain an advantage.
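
Very roughly, the client side of a page load ends up looking like the toy Python sketch below (not how any real browser is implemented, and the asset URLs are made up): the sub-resources are fetched through a small per-host pool, and the old conservative 2-connection limit is just the pool size.

from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen
from urllib.error import URLError

# Hypothetical sub-resources; a real browser discovers these by parsing
# the HTML as it streams in.
ASSETS = [
    "http://example.com/style.css",
    "http://example.com/logo.png",
    "http://example.com/script.js",
]

MAX_CONNECTIONS_PER_HOST = 2  # the old conservative limit mentioned above

def fetch(url):
    try:
        with urlopen(url, timeout=5) as response:
            return url, f"{len(response.read())} bytes"
    except URLError as err:
        return url, f"failed: {err}"

# The pool size is what makes a page load feel more or less 'serial':
# raise it and the same assets arrive in fewer round-trip 'waves'.
with ThreadPoolExecutor(max_workers=MAX_CONNECTIONS_PER_HOST) as pool:
    for url, result in pool.map(fetch, ASSETS):
        print(url, result)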
 