A brief history of OpenGL

So with all the hot debates in the Mantle threads, I thought I'd let Mantle be and show what the score is with OpenGL. Take your time reading this, as it is a fantastic and very informative post.

http://programmers.stackexchange.com/questions/60544/why-do-game-developers-prefer-windows
Many of the answers here are really, really good. But the OpenGL and Direct3D (D3D) issue should probably be addressed. And that requires... a history lesson.

And before we begin, I know far more about OpenGL than I do about Direct3D. I've never written a line of D3D code in my life, and I've written tutorials on OpenGL. So what I'm about to say isn't a question of bias. It is simply a matter of history.

Birth of Conflict

One day, sometime in the early 90's, Microsoft looked around. They saw the SNES and Sega Genesis being awesome, running lots of action games and such. And they saw DOS. Developers coded DOS games like console games: direct to the metal. Unlike consoles however, where a developer who made an SNES game knew what hardware the user would have, DOS developers had to write for multiple possible configurations. And this is rather harder than it sounds.

And Microsoft had a bigger problem: Windows. See, Windows wanted to own the hardware, unlike DOS which pretty much let developers do whatever. Owning the hardware is necessary in order to have cooperation between applications. Cooperation is exactly what game developers hate because it takes up precious hardware resources they could be using to be awesome.

In order to promote game development on Windows, Microsoft needed a uniform API that was low-level, ran on Windows without being slowed down by it, and most of all cross-hardware. A single API for all graphics, sound, and input hardware.

Thus, DirectX was born.

3D accelerators were born a few months later. And Microsoft ran into a spot of trouble. See, DirectDraw, the graphics component of DirectX, only dealt with 2D graphics: allocating graphics memory and doing bit-blits between different allocated sections of memory.

So Microsoft purchased a bit of middleware and fashioned it into Direct3D Version 3. It was universally reviled. And with good reason; looking at D3D v3 code is like staring into the Ark of the Covenant.

Old John Carmack at Id Software took one look at that trash and said, "Screw that!" and decided to write towards another API: OpenGL.

See, another part of the many-headed-beast that is Microsoft had been busy working with SGI on an OpenGL implementation for Windows. The idea here was to court developers of typical GL applications: workstation apps. CAD tools, modelling, that sort of thing. Games were the farthest thing from their minds. This was primarily a Windows NT thing, but Microsoft decided to add it to Win95 too.

As a way to entice workstation developers to Windows, Microsoft decided to try to bribe them with access to these newfangled 3D graphics cards. Microsoft implemented the Installable Client Driver protocol: a graphics card maker could override Microsoft's software OpenGL implementation with a hardware-based one. Code could automatically just use a hardware OpenGL implementation if one was available.
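
To make that mechanism concrete, here is a minimal sketch in C of how a Windows application picked up an ICD without doing anything special. This uses the standard Win32/WGL calls; error checking is omitted, and hdc is assumed to be a device context for a window you have already created:

    #include <windows.h>
    #include <GL/gl.h>

    /* The app just asks for an OpenGL-capable pixel format and creates a
       context. If a vendor has installed a hardware ICD, opengl32.dll routes
       the GL calls to it; otherwise you get Microsoft's software renderer. */
    void create_gl_context(HDC hdc)
    {
        PIXELFORMATDESCRIPTOR pfd = {0};
        pfd.nSize      = sizeof(pfd);
        pfd.nVersion   = 1;
        pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 32;
        pfd.cDepthBits = 24;

        int fmt = ChoosePixelFormat(hdc, &pfd);
        SetPixelFormat(hdc, fmt, &pfd);
        HGLRC ctx = wglCreateContext(hdc);  /* ICD takes over here if present */
        wglMakeCurrent(hdc, ctx);
    }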

In the early days, consumer-level videocards did not have support for OpenGL though. That didn't stop Carmack from just porting Quake to OpenGL (GLQuake) on his SGI workstation. As we can read from the GLQuake readme:

Theoretically, glquake will run on any compliant OpenGL that supports the texture objects extensions, but unless it is very powerfull hardware that accelerates everything needed, the game play will not be acceptable. If it has to go through any software emulation paths, the performance will likely be well under one frame per second.

At this time (march ’97), the only standard opengl hardware that can play glquake reasonably is an intergraph realizm, which is a VERY expensive card. 3dlabs has been improving their performance significantly, but with the available drivers it still isn’t good enough to play. Some of the current 3dlabs drivers for glint and permedia boards can also crash NT when exiting from a full screen run, so I don’t recommend running glquake on 3dlabs hardware.

3dfx has provided an opengl32.dll that implements everything glquake needs, but it is not a full opengl implementation. Other opengl applications are very unlikely to work with it, so consider it basically a “glquake driver”.

This was the birth of the miniGL drivers. These evolved into full OpenGL implementations eventually, as hardware became powerful enough to implement most OpenGL functionality in hardware. nVidia was the first to offer a full OpenGL implementation. Many other vendors struggled, which is one reason why developers preferred Direct3D: it was compatible with a wider range of hardware. Eventually only nVidia and ATI (now AMD) remained, and both had a good OpenGL implementation.

OpenGL Ascendant

Thus the stage is set: Direct3D vs. OpenGL. It's really an amazing story, considering how bad D3D v3 was.

The OpenGL Architectural Review Board (ARB) is the organization responsible for maintaining OpenGL. They issue a number of extensions, maintain the extension repository, and create new versions of the API. The ARB is a committee made of many of the graphics industry players, as well as some OS makers. Apple and Microsoft have at various times been members of the ARB.

3Dfx comes out with the Voodoo2. This is the first hardware that can do multitexturing, which is something that OpenGL couldn't do before. While 3Dfx was strongly against OpenGL, NVIDIA, makers of the next multitexturing graphics chip (the TNT1), loved it. So the ARB issued an extension: GL_ARB_multitexture, which would allow access to multitexturing.
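
For flavour, this is roughly what multitexturing looked like from fixed-function GL once GL_ARB_multitexture was available. A sketch only: the texture names are hypothetical, and the ARB entry points are assumed to have been fetched at startup (e.g. via wglGetProcAddress):

    #include <GL/gl.h>
    #include <GL/glext.h>

    PFNGLACTIVETEXTUREARBPROC   glActiveTextureARB;    /* assumed loaded */
    PFNGLMULTITEXCOORD2FARBPROC glMultiTexCoord2fARB;  /* assumed loaded */

    /* Draw one triangle with a base texture on unit 0 and a lightmap on
       unit 1 - the classic use case that made the extension matter. */
    void draw_lightmapped_triangle(GLuint baseTex, GLuint lightmapTex)
    {
        glActiveTextureARB(GL_TEXTURE0_ARB);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, baseTex);

        glActiveTextureARB(GL_TEXTURE1_ARB);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, lightmapTex);

        glBegin(GL_TRIANGLES);
        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 0.0f);  /* UVs per unit */
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);
        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 0.0f);
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 0.0f);
        glVertex3f(1.0f, -1.0f, 0.0f);
        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.5f, 1.0f);
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.5f, 1.0f);
        glVertex3f(0.0f, 1.0f, 0.0f);
        glEnd();
    }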

Meanwhile, Direct3D v5 comes out. Now, D3D has become an actual API, rather than something a cat might vomit up. The problem? No multitexturing.

Oops.

Now, that one wouldn't hurt nearly as much as it should have, because people didn't use multitexturing much. Not directly. Multitexturing hurt performance quite a bit, and in many cases it wasn't worth it compared to multi-passing. And of course, game developers love to ensure that their games work on older hardware, which didn't have multitexturing, so many games shipped without it.

D3D was thus given a reprieve.

Time passes and NVIDIA deploys the GeForce 256 (not GeForce GT-250; the very first GeForce), pretty much ending competition in graphics cards for the next two years. The main selling point is the ability to do vertex transform and lighting (T&L) in hardware. Not only that, NVIDIA loved OpenGL so much that their T&L engine effectively was OpenGL. Almost literally; as I understand, some of their registers actually took OpenGL enumerators directly as values.

Direct3D v6 comes out. Multitexture at last but... no hardware T&L. OpenGL had always had a T&L pipeline, even though before the 256 it was implemented in software. So it was very easy for NVIDIA to just convert their software implementation to a hardware solution. It wouldn't be until D3D v7 that D3D finally had hardware T&L support.

Dawn of Shaders, Twilight of OpenGL

Then, GeForce 3 came out. And a lot of things happened at the same time.

Microsoft had decided that they weren't going to be late again. So instead of looking at what NVIDIA was doing and then copying it after the fact, they took the astonishing position of going to them and talking to them. And then they fell in love and had a little console together.

A messy divorce ensued later. But that's for another time.

What this meant for the PC was that GeForce 3 came out simultaneously with D3D v8. And it's not hard to see how GeForce 3 influenced D3D 8's shaders. The pixel shaders of Shader Model 1.0 were extremely specific to NVIDIA's hardware. There was no attempt made whatsoever at abstracting NVIDIA's hardware; SM 1.0 was just whatever the GeForce 3 did.

When ATI started to jump into the performance graphics card race with the Radeon 8500, there was a problem. The 8500's pixel processing pipeline was more powerful than NVIDIA's stuff. So Microsoft issued Shader Model 1.4, which basically was "Whatever the 8500 does."

That may sound like a failure on D3D's part. But failure and success are matters of degrees. And epic failure was happening in OpenGL-land.

NVIDIA loved OpenGL, so when GeForce 3 hit, they released a slew of OpenGL extensions. Proprietary OpenGL extensions: NVIDIA-only. Naturally, when the 8500 showed up, it couldn't use any of them.

See, at least in D3D 8 land, you could run your SM 1.0 shaders on ATI hardware. Sure, you had to write new shaders to take advantage of the 8500's coolness, but at least your code worked.

In order to have shaders of any kind on Radeon 8500 in OpenGL, ATI had to write a number of OpenGL extensions. Proprietary OpenGL extensions: ATI-only. So you needed an NVIDIA codepath and an ATI codepath, just to have shaders at all.

Now, you might ask, "Where was the OpenGL ARB, whose job it was to keep OpenGL current?" Where many committees often end up: off being stupid.

See, I mentioned ARB_multitexture above because it factors deeply into all of this. The ARB seemed (from an outsider's perspective) to want to avoid the idea of shaders altogether. They figured that if they slapped enough configurability onto the fixed-function pipeline, they could equal the ability of a shader pipeline.

So the ARB released extension after extension. Every extension with the words "texture_env" in it was yet another attempt to patch this aging design. Check the registry: between ARB and EXT extensions, there were eight of these extensions made. Many were promoted to OpenGL core versions.
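
To give a flavour of that patchwork, here is one of those extensions (ARB_texture_env_combine) in use - a sketch in C, configuring a single fixed-function texture unit to modulate texture and vertex colour and double the result, a common overbright-lighting trick of the era:

    #include <GL/gl.h>
    #include <GL/glext.h>   /* the ARB_texture_env_combine enums */

    /* No shader anywhere: just ever more switches bolted onto the
       fixed-function texture environment. */
    void setup_modulate_2x(void)
    {
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB,  GL_MODULATE);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB,  GL_TEXTURE);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB,  GL_PRIMARY_COLOR_ARB);
        glTexEnvi(GL_TEXTURE_ENV, GL_RGB_SCALE_ARB,    2);  /* x2 brighten */
    }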

Microsoft was a part of the ARB at this time; they left around the time D3D 9 hit. So it is entirely possible that they were working to sabotage OpenGL in some way. I personally doubt this theory for two reasons. One, they would have had to get help from other ARB members to do that, since each member only gets one vote. And most importantly two, the ARB didn't need Microsoft's help to screw things up. We'll see further evidence of that.

Eventually the ARB, likely under threat from both ATI and NVIDIA (both active members), pulled their head out long enough to provide actual assembly-style shaders.

Want something even stupider?

Hardware T&L. Something OpenGL had first. Well, it's interesting. To get the maximum possible performance from hardware T&L, you need to store your vertex data on the GPU. After all, it's the GPU that actually wants to use your vertex data.

In D3D v7, Microsoft introduced the concept of Vertex Buffers. These are allocated swaths of GPU memory for storing vertex data.

Want to know when OpenGL got their equivalent of this? Oh, NVIDIA, being a lover of all things OpenGL (so long as they are proprietary NVIDIA extensions), released the vertex array range extension when the GeForce 256 first hit. But when did the ARB decide to provide similar functionality?

Two years later. This was after they approved vertex and fragment shaders (pixel in D3D language). That's how long it took the ARB to develop a cross-platform solution for storing vertex data in GPU memory. Again, something that hardware T&L needs to achieve maximum performance.
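
What the ARB eventually shipped was the buffer object API. A minimal sketch of the idea, using the core GL 1.5 names (the original extension carried ARB suffixes; entry points assumed loaded by your extension loader):

    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Copy vertex data into GPU-side memory once, then draw from it many
       times - the thing hardware T&L had wanted all along. */
    GLuint upload_vertices(const GLfloat *verts, GLsizeiptr bytes)
    {
        GLuint vbo;
        glGenBuffers(1, &vbo);                       /* reserve a buffer name */
        glBindBuffer(GL_ARRAY_BUFFER, vbo);          /* bind as vertex data   */
        glBufferData(GL_ARRAY_BUFFER, bytes, verts,  /* upload to GPU memory  */
                     GL_STATIC_DRAW);                /* hint: write once      */
        return vbo;
    }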

One Language to Ruin Them All

So, the OpenGL development environment was fractured for a time. No cross-hardware shaders, no cross-hardware GPU vertex storage, while D3D users enjoyed both. Could it get worse?

You... you could say that. Enter 3D Labs.

Who are they, you might ask? They are a defunct company whom I consider to be the true killers of OpenGL. Sure, the ARB's general ineptness made OpenGL vulnerable when it should have been owning D3D. But 3D Labs is perhaps the single biggest reason to my mind for OpenGL's current market state. What could they have possibly done to cause that?

They designed the OpenGL Shading Language.

See, 3D Labs was a dying company. Their expensive GPUs were being marginalized by NVIDIA's increasing pressure on the workstation market. And unlike NVIDIA, 3D Labs did not have any presence in the mainstream market; if NVIDIA won, they died.

Which they did.

So, in a bid to remain relevant in a world that didn't want their products, 3D Labs showed up to a Game Developer Conference wielding presentations for something they called "OpenGL 2.0". This would be a complete, from-scratch rewrite of the OpenGL API. And that makes sense; there was a lot of cruft in OpenGL's API at the time (note: that cruft still exists). Just look at how texture loading and binding work; it's semi-arcane.
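
To see what "semi-arcane" means, here is the classic bind-to-edit texture setup in C - a sketch, where width, height and pixels stand in for image data you have already decoded:

    #include <GL/gl.h>

    /* Every gl*Tex* call operates on whatever happens to be bound, so
       binding doubles as "select this texture for editing". */
    GLuint make_texture(int width, int height, const unsigned char *pixels)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,   /* mip level 0      */
                     width, height, 0,             /* border must be 0 */
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        return tex;
    }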

Part of their proposal was a shading language. Naturally. However, unlike the current cross-platform ARB extensions, their shading language was "high-level" (C is high-level for a shading language. Yes, really).

Now, Microsoft was working on their own high-level shading language. Which they, in all of Microsoft's collective imagination, called... the High Level Shading Language (HLSL). But theirs was a fundamentally different approach to the language.

The biggest issue with 3D Labs's shader language was that it was built-in. See, HLSL was a language Microsoft defined. They released a compiler for it, and it generated Shader Model 2.0 (or later shader models) assembly code, which you would feed into D3D. In the D3D v9 days, HLSL was never touched by D3D directly. It was a nice abstraction, but it was purely optional. And a developer always had the opportunity to go behind the compiler and tweak the output for maximum performance.

The 3D Labs language had none of that. You gave the driver the C-like language, and it produced a shader. End of story. Not an assembly shader, not something you feed into something else. The actual OpenGL object representing a shader.

What this meant is that OpenGL users were open to the vagaries of driver developers who were just getting the hang of compiling assembly-like languages. Compiler bugs ran rampant in the newly christened OpenGL Shading Language (GLSL). What's worse, if you managed to get a shader to compile on multiple platforms correctly (no mean feat), you were still subjected to the optimizers of the day. Which were not as optimal as they could be.

While that was the biggest flaw in GLSL, it wasn't the only flaw. By far.

In D3D, and in the older assembly languages in OpenGL, you could mix and match vertex and fragment (pixel) shaders. So long as they communicated with the same interface, you could use any vertex shader with any compatible fragment shader. And there were even levels of incompatibility they could accept; a vertex shader could write an output that the fragment shader didn't read. And so forth.

GLSL didn't have any of that. Vertex and fragment shaders were fused together into what 3D Labs called a "program object". So if you wanted to share vertex and fragment programs, you had to build multiple program objects. And this caused the second biggest problem.

See, 3D Labs thought they were being clever. They based GLSL's compilation model on C/C++. You take a .c or .cpp and compile it into an object file. Then you take one or more object files and link them into a program. So that's how GLSL compiles: you compile your shader (vertex or fragment) into a shader object. Then you put those shader objects in a program object, and link them together to form your actual program.
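
In code, that model looks something like this - a minimal sketch with the GL 2.0 entry points (assumed loaded; error and log checking omitted), where vsSrc and fsSrc are GLSL source strings:

    #include <GL/gl.h>
    #include <GL/glext.h>

    GLuint build_program(const char *vsSrc, const char *fsSrc)
    {
        /* "compile" stage: source -> shader object, like .c -> .o */
        GLuint vs = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vs, 1, &vsSrc, NULL);
        glCompileShader(vs);

        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fs, 1, &fsSrc, NULL);
        glCompileShader(fs);

        /* "link" stage: shader objects -> program object, like .o -> a.out.
           In practice many drivers effectively compiled again right here. */
        GLuint prog = glCreateProgram();
        glAttachShader(prog, vs);
        glAttachShader(prog, fs);
        glLinkProgram(prog);
        return prog;
    }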

While this did allow potential cool ideas like having "library" shaders that contained extra code that the main shaders could call, what it meant in practice was that shaders were compiled twice. Once in the compilation stage and once in the linking stage. NVIDIA's compiler in particular was known for basically running the compile twice. It didn't generate some kind of object code intermediary; it just compiled it once and threw away the answer, then compiled it again at link time.

So even if you want to link your vertex shader to two different fragment shaders, you have to do a lot more compiling than in D3D. Especially since, in D3D, the compiling of the C-like language was done offline, not at the beginning of the program's execution.

There were other issues with GLSL. Perhaps it seems wrong to lay the blame on 3D Labs, since the ARB did eventually approve and incorporate the language (but nothing else of their "OpenGL 2.0" initiative). But it was their idea.

And here's the really sad part: 3D Labs was right (mostly). GLSL is not a vector-based shading language the way HLSL was at the time. This was because 3D Labs's hardware was scalar hardware (similar to modern NVIDIA hardware), but they were ultimately right in the direction many hardware makers went with their hardware.

They were right to go with a compile-online model for a "high-level" language. D3D even switched to that eventually.

The problem was that 3D Labs were right at the wrong time. And in trying to summon the future too early, in trying to be future-proof, they cast aside the present. It sounds similar to how OpenGL always had the possibility for T&L functionality. Except that OpenGL's T&L pipeline was still useful before hardware T&L, while GLSL was a liability before the world caught up to it.

GLSL is a good language now. But for the time? It was horrible. And OpenGL suffered for it.

Falling Towards Apotheosis

While I maintain that 3D Labs struck the fatal blow, it was the ARB itself who would drive the last nail in the coffin.

This is a story you may have heard of. By the time of OpenGL 2.1, OpenGL was running into a problem. It had a lot of legacy cruft. The API wasn't easy to use anymore. There were 5 ways to do things, and no idea which was the fastest. You could "learn" OpenGL with simple tutorials, but you didn't really learn the OpenGL API that gave you real performance and graphical power.

So the ARB decided to attempt another re-invention of OpenGL. This was similar to 3D Labs's "OpenGL 2.0", but better because the ARB was behind it. They called it "Longs Peak."

What is so bad about taking some time to improve the API? This was bad because Microsoft had left themselves vulnerable. See, this was at the time of the Vista switchover.

With Vista, Microsoft decided to institute some much-needed changes in display drivers. They forced drivers to submit to the OS for graphics memory virtualization and various other things.

While one can debate the merits of this or whether it was actually possible, the fact remains this: Microsoft deemed D3D 10 to be Vista (and above) only. Even if you had hardware that was capable of D3D 10, you couldn't run D3D 10 applications without also running Vista.

You might also remember that Vista... um, let's just say that it didn't work out well. So you had an underperforming OS, a new API that only ran on that OS, and a fresh generation of hardware that needed that API and OS to do anything more than be faster than the previous generation.

However, developers could access D3D 10-class features via OpenGL. Well, they could if the ARB hadn't been busy working on Longs Peak.

Basically, the ARB spent a good year and a half to two years' worth of work to make the API better. By the time OpenGL 3.0 actually came out, Vista adoption was up, Win7 was around the corner to put Vista behind them, and most game developers didn't care about D3D 10-class features anyway. After all, D3D 10 hardware ran D3D 9 applications just fine. And with the rise of PC-to-console ports (or PC developers jumping ship to console development, take your pick), developers didn't need D3D 10-class features.

Now, if developers had access to those features earlier via OpenGL on WinXP machines, then OpenGL development might have received a much-needed shot in the arm. But the ARB missed their opportunity. And do you want to know the worst part?

Despite spending two precious years attempting to rebuild the API from scratch... they still failed and just reverted back to the status quo (except for a deprecation mechanism).

So not only did the ARB miss a crucial window of opportunity, they didn't even get done the task that made them miss that chance. Pretty much epic fail all around.

And that's the tale of OpenGL vs. Direct3D. A tale of missed opportunities, gross stupidity, willful blindness, and simple foolishness.

I don't know the poster, but that is a very well explained history of OpenGL. Maybe something in it is wrong and will get pulled up, but that's what discussion is for.

And of course we now have SteamOS, which I for one will be going for. A dual-boot OS is a very simple thing, and when SteamOS has a little more game support, I will certainly be giving it a go. Microsoft have had me hooked for years and I am very happy with W7, but I want them to have some viable competition, and SteamOS with Linux looks a good start.

So onto SteamOS and what the future holds.

What is SteamOS?
SteamOS is a gaming-centric operating system made by Valve, the developer/publisher behind games like Half-Life 2 and the proprietor of Steam, the biggest digital games portal in the world.

The system is designed to offer an experience that bridges the divide between PC gaming and the more relaxed style of console gaming, and to try and bring back the ‘PC games’ (i.e. non-console) audience, which has gradually eroded over the last decade.

SteamOS is a free download: a complete, Linux-based system that you'll need to install on a 'Steam Machine' that hooks up to your TV (or a monitor). Any PC can run SteamOS. Steam Machines will play games natively, but they can also stream games from your high-powered desktop PC to your TV, via the Steam Machine and your Wi-Fi network.

Native games compatibility isn’t going to be anywhere near as good as it would be on a standard PC, though, as Steam Machines are based on Linux rather than Windows. You can find out which Steam games support Linux on the Steam website. They include favourites like Counter-Strike, Football Manager 2014 and Garry’s Mod, but the selection right now isn't great.

The idea, however, is that the rise of Steam Machines will eventually improve Linux support in the games industry. It has a long way to go at present, but Valve promises native support for triple-A games in the future as it attempts to woo big publishers to make their big PC releases work on Linux, and by extension SteamOS and Steam Machines.


What do I need for SteamOS?

SteamOS will see several manufacturers make a bunch of different Steam Machines – computers designed with the SteamOS specifically in mind. However, you don’t need such a machine to get up and running.

As SteamOS runs on the same x86 architecture as Windows, you can turn just about any modern PC into a Steam Machine. If you have a media box PC, it's likely you'll be able to convert it into a Steam Machine using the SteamOS download available from Valve's website. We'll be back with more on how to do this soon. EA is unlikely to invest hugely in Linux ports if the public doesn't invest in SteamOS, so the more people who do this the better.

What else can I do on SteamOS besides play games?
Although primarily a game-playing operating system, SteamOS will also support media streaming services. We expect things like Netflix, iPlayer, 4OD and LoveFilm will all eventually be accessible using a Steam Machine, although not at launch.

You won’t need to stream these from another PC elsewhere in the house, either.

What is a Steam Machine?
The term Steam Machine is a Valve creation, something that makes the hardware required to run SteamOS more marketable, and less vague. PCs not specifically labelled as ‘Steam Machines’ can also become such things with a bit of software tinkering. However, it’s likely that all computers sold as Steam Machines will have been made in association with Valve to some extent.

It’s not yet clear whether there’s a strict set of minimum requirements for any of these new SteamOS-specific boxes. However, given Valve is aiming for accessibility to some extent, it’s unlikely that a high-end graphics card will be mandated. Those things are expensive.

How will I control a Steam Machine?
Steam Machines, and SteamOS, will be operable using a mouse and keyboard. However, Valve has also devised a clever and rather unusual controller designed just for the system.

Its shape is somewhat like an Xbox One controller, but the actual control surfaces and layout are quite different.

There are two circular trackpads on the front, whose input will be similar to that of two analogue sticks (but with greater accuracy potential).

The front buttons are arranged around these pads, and there’s a third touchpad in the middle. This is clearly a pad that wants to offer a better first-person shooter experience than a console gamepad.

Like a normal gamepad, there are standard triggers on the top of the controller.

Good news for gamers – the pad won’t just work with SteamOS. It’ll also work with Steam games on a ‘normal’ PC.

When can I get a Steam Machine or Steam Controller?
Although the release date of SteamOS is 13 December, this is still early days for the system. Prototype Steam Machines and Steam Controllers are being sent out to 300 beta participants to QA the system.

You need to bear in mind that Valve is a very different company from the games parts of Sony and Microsoft. Its more open approach is great, but means there’s not quite the same style of singular release date where everything is (more or less) ready for public consumption. Early adopters of SteamOS need patience.

None of the beta units are being sent out of the US, either. So if you want to be among the first to try out SteamOS, you’ll need to get your hands dirty and install the software yourself. We’ll be back soon with more on how to do this.


How much will a Steam Machine cost?
There is no set price for a Steam Machine. Unlike an Xbox One or PS4, it’s not a singular thing. And they’re not all made by the same company.

Steam Machines will be available at all sorts of prices. But those designed for native gaming, rather than game streaming, will be fairly expensive.

The first Steam Machine to be announced was the iBuyPower box, which is set to sell for £499. It has a multi-core AMD CPU and AMD Radeon R9 270 graphics card. It costs as much as a next-gen console, but when the graphics card alone sells for around £150, that’s no surprise.

The second one we saw was a Steam Machine from Digital Storm, set to cost over $1100. You can expect to pay a grand in the UK for such a box. That’s more than twice the price of a next-gen console. And it’s not yet clear whether there’ll be a massive influx of games you’ll be able to play on it natively.

Details of these Steam boxes are obviously subject to change – their makers are clearly savvy to the idea that they can drum up interest simply by stating their intention to make SteamOS products. We'll get our hands on as many Steam boxes as we can at the CES 2014 show in January.

Is SteamOS the future of home gaming?
SteamOS arrives at a very interesting time, just as console gaming gets a shot in the arm with the release of the Xbox One and PS4.

The platform has significant issues to combat, both technical and in terms of its messaging. Will Linux support be improved enough to make a high-end Steam Machine worth buying? Is the controller any good? Will normal gamers ever really understand what a Steam Machine is?

SteamOS has a tricky time ahead of it if it wants to eat into the console audience, but Valve has a history of pulling off tricky feats like this. Many ridiculed the Steam platform when it was introduced, but it has become a huge success: it is now the place where three-quarters of all digital games are bought. This success gives Valve a lot more power and opportunity to make SteamOS and Steam Machines a success.
http://www.trustedreviews.com/opinions/steamos-and-steam-machines-faq-guide

And the current crop of games using SteamOS are:

Counter-Strike
Half-Life 2
Team Fortress 2
Left 4 Dead 2
Portal 2
Natural Selection 2
BioShock Infinite
Sid Meier’s Civilization V
Serious Sam 3: BFE
Call of Duty: Modern Warfare 3
Call of Duty: Modern Warfare 3 – Multiplayer
Metro: Last Light
DiRT 3
Borderlands 2
Amnesia: The Dark Descent
The Elder Scrolls V: Skyrim
Bastion
Stacking
Batman: Arkham City GOTY
XCOM: Enemy Unknown
Torchlight II
Tomb Raider
Crusader Kings II
Max Payne 3
Dishonored
Starbound
FTL: Faster Than Light
aerofly FS
PAYDAY 2
Don’t Starve
Kerbal Space Program
Far Cry® 3
The Cave
Beatbuddy: Tale of the Guardians
Gone Home
Prison Architect
Surgeon Simulator 2013
Organ Trail: Director’s Cut
Teleglitch: Die More Edition
War Thunder
Legend of Dungeon
Outlast
Papers, Please
Spelunky
Rogue Legacy
Assetto Corsa
Volgarr the Viking

This list is constantly being updated and remember, these games are listed as compatible with the controller. Any Linux game should work with a mouse and keyboard.
 
Good info G, well worth a read through

SteamOS plus both vendors finally giving gl the attention it deserves could be a turning point

Until Microsoft release DX11.X for PC and lazy developers stick with the status quo instead of learning something new :D
 
Doh! I meant to add that to the OP and will do now. That is where I got it from and an awesome read.

Yea Gregster, it really is an awesome post.
I never really post in this section, 'cause I'm not really knowledgeable at hardware level and my interests are at the software/api level.

Then you have the hardware drives software argument, which is true, but it works both ways as that thread demonstrates.

I used to be a directx guy, but going more towards opengl now, with the rise of android and all that.

Gone through many directx api levels over the years, but MS seem to have lost interest recently in directx (desktop apps anyway).

The quoted post shows what can happen if you drop the ball for a while.
 
Good post and some great info. I didn't realise that such a plethora of games will use SteamOS; it will be interesting to watch the development of this new operating system from Valve. I can imagine that it will become the operating system for gamers, as it will be optimised and designed essentially just to run games. Interesting times lie ahead!
 
Yea Gregster, it really is an awesome post.
I never really post in this section, 'cause I'm not really knowledgeable at hardware level and my interests are at the software/api level.

Then you have the hardware drives software argument, which is true, but it works both ways as that thread demonstrates.

I used to be a directx guy, but going more towards opengl now, with the rise of android and all that.

Gone through many directx api levels over the years, but MS seem to have lost interest recently in directx (desktop apps anyway).

The quoted post shows what can happen if you drop the ball for a while.

Yep, dropping the ball was quite apparent and such a shame. Vista didn't help of course but missing deadlines on an already long project must have been a massive blow.

I do hope SteamOS gets a fair crack of the whip and with both AMD and nVidia giving it full backing, I see it has a damned good chance. I know AMD have been slow to Linux in the past but they seem to be taking the bull by the horns and jumping on SteamOS.

So with your experience in both, would you say that devs would switch to Linux or would you say that DX is still the devs choice of weapon?
 
I know that you can run a CPU overclock from the BIOS, but what about the GPU? And what about all the drivers for the peripherals that you have? For example, my wifi adapter has no driver on Linux
 
I think in time SteamOS could become a viable alternative, if we assume that the uptake is really really good and just about every game released has a rendering path that will work on Linux. However for it to be really viable for me it needs a decent back catalogue of games, not the 300 or so that it currently has.
Basically going forward it will need to support all the new games and all the recent games of the last few years. I think that's where it will struggle.
Until then it might be ok as a dual boot option, but I'll still need Windows for some time. Also if it doesn't offer more than Windows I have no reason to change from Windows until SteamOS supports all the games Windows does.
So for me unless a lot of companies go back and convert their existing games (as I think Valve did) then it'll be around 5 years before it's even close to replacing Windows for me. That's if it gets every game Windows gets from this point onwards.
 
Nice info Greg, here's another article that gives some pretty good info on the current state and recent history of the OGL/D3D war (circa early 2010):

Why you should use OpenGL and not DirectX

Often, when we meet other game developers and say that we use OpenGL for our game Overgrowth, we're met with stares of disbelief -- why would anyone use OpenGL? DirectX is the future. When we tell graphics card representatives that we use OpenGL, the temperature of the room drops by ten degrees.

This baffles us. It's common geek wisdom that standards-based websites, for instance, trounce Silverlight, Flash, or ActiveX. Cross-platform development is laudable and smart. No self-respecting geek enjoys dealing with closed-standard Word documents or Exchange servers. What kind of bizarro world is this where engineers are not only going crazy over Microsoft's latest proprietary API, but actively denouncing its open-standard competitor?

Why does everyone use DirectX?

Everyone uses DirectX because API choice in game development is a positive feedback loop, and it was shifted in favor of DirectX in 2005.

It's a positive feedback loop because whenever one API becomes more popular, it keeps becoming more and more popular due to network effects. The most important network effects are as follows: the more popular API gets better support from graphics card vendors, and graphics programmers are more likely to already know how to use it.

API use was shifted in favor of DirectX by Microsoft's two-pronged DirectX campaign around the launch of the Xbox 360 and Windows Vista, including the spread of FUD (fear, uncertainty and doubt) about the future of OpenGL, and wild exaggeration of the merits of DirectX. Ever since then, the network effects have amplified this discrepancy until OpenGL has almost disappeared entirely from mainstream PC gaming.


1. Network effects and vicious cycles

On Windows, it's a fact that the DirectX graphics drivers are better maintained than the OpenGL graphics drivers. This is caused by the vicious cycle of vendor support. As game developers are driven from OpenGL to DirectX by other factors, the graphics card manufacturers (vendors) get fewer bug reports for their OpenGL drivers, extensions and documentation. This results in shakier OpenGL drivers, leading even more game developers to switch from OpenGL to DirectX. The cycle repeats.

Similarly, it's a fact that more gaming graphics programmers know how to use DirectX than OpenGL, so it's cheaper (less training required) to make a game using DirectX than OpenGL. This is the result of another vicious cycle: as more game projects use DirectX, more programmers have to learn how to use it. As more programmers learn to use it, it becomes cheaper for game projects to use DirectX than to use OpenGL.


2. FUD about OpenGL and Vista

Microsoft initiated a fear, uncertainty, and doubt (FUD) campaign against OpenGL around the release of Windows Vista. In 2003, Microsoft left the OpenGL Architecture Review Board -- showing that they no longer had any interest in the future of OpenGL. Then in 2005, they gave presentations at SIGGRAPH (special interest group for graphics) and WinHEC (Windows Hardware Engineering Conference) giving the impression that Windows Vista would remove support for OpenGL except to maintain back-compatibility with XP applications. This version of OpenGL would be layered on top of DirectX (as shown in the WinHEC presentation), causing a dramatic performance hit. This campaign led to panic in the OpenGL community, leading many professional graphics programmers to switch to DirectX.

When Vista was released, Microsoft backpedaled on these OpenGL claims, allowing vendors to create fast installable client drivers (ICDs) that restore native OpenGL support. The OpenGL board sent out newsletters proving that OpenGL was still a first-class citizen, and that OpenGL performance on Vista was still at least as fast as Direct3D's. Unfortunately for OpenGL, the damage had already been done -- public confidence in OpenGL was badly shaken.


3. Misleading marketing campaigns

The launch strategies for Windows Vista and Windows 7 were both accompanied by an immense marketing push by Microsoft for DirectX, in which they showed 'before' and 'after' screenshots of the different DirectX versions. Many gamers now think that switching from DirectX 9 to DirectX 10 magically transforms graphics from stupidly dark to normal, or from Halo 1 to Crysis. Game journalists proved that there was no difference between Crysis DX9 and DX10, and that its "DX10" features worked fine with DX9 by tweaking a config file. However, despite its obvious inaccuracy, the marketing has convinced many gamers that DirectX updates are the only way to access the latest graphics features.

While many games participate in Microsoft's marketing charade, more savvy graphics programmers like John Carmack refuse to be swept up in it. He put it this way, "Personally, I wouldn’t jump at something like DX10 right now. I would let things settle out a little bit and wait until there’s a really strong need for it."


So why do we use OpenGL?

Given that OpenGL has less vendor support, is no longer used in games, is being actively attacked by Microsoft, and has no marketing momentum, why should we still use it? Wouldn't it be more profitable to ditch it and use DirectX like everyone else? No, because in reality, OpenGL is more powerful than DirectX, supports more platforms, and is essential for the future of games.


1. OpenGL is more powerful than DirectX

It's common knowledge that OpenGL has faster draw calls than DirectX (see NVIDIA presentations like this one if you don't want to take my word for it), and it has first access to new GPU features via vendor extensions. OpenGL gives you direct access to all new graphics features on all platforms, while DirectX only provides occasional snapshots of them on its newest versions of Windows. The tessellation technology that Microsoft is heavily promoting for DirectX 11 has been an OpenGL extension for three years, and it was even possible for years before that, using fast instancing and vertex-texture-fetch. I don't know what new technologies will be exposed in the next couple of years, but I know they will be available first in OpenGL.

Microsoft has worked hard on DirectX 10 and 11, and they're now about as fast as OpenGL, and support almost as many features. However, there's one big problem: they don't work on Windows XP! Half of PC gamers still use XP, so using DirectX 10 or 11 is not really a viable option. If you really care about having the best possible graphics, and delivering them to as many gamers as possible, there's no choice but OpenGL.


2. OpenGL is cross-platform

More than half of our Lugaru users use Mac or Linux (as shown in this blog post), and we wouldn't be surprised if the same will be true of our new game Overgrowth. When we talk to major game developers, we hear that supporting Mac and Linux is a waste of time. However, I've never seen any evidence for this claim. Blizzard always releases Mac versions of their games simultaneously, and they're one of the most successful game companies in the world! If they're doing something in a different way from everyone else, then their way is probably right.

As John Carmack said when asked if Rage was a DirectX game, "It’s still OpenGL, although we obviously use a D3D-ish API [on the Xbox 360], and CG on the PS3. It’s interesting how little of the technology cares what API you’re using and what generation of the technology you’re on. You’ve got a small handful of files that care about what API they’re on, and millions of lines of code that are agnostic to the platform that they’re on." If you can hit every platform using OpenGL, why shoot yourself in the foot by relying on DirectX?

Even if all you care about is Windows, let me remind you again that half of Windows users still use Windows XP, and will be unable to play your game if you use the latest versions of DirectX. The only way to deliver the latest graphics to Windows XP gamers (the single biggest desktop gaming platform) is through OpenGL.


3. OpenGL is better for the future of games

OpenGL is a non-profit open standard created to allow users on any platform to experience the highest quality graphics that their hardware can provide. Its use is being crushed by a monopolistic attack from a monolithic corporate giant trying to dominate an industry that is too young to protect itself. As Direct3D becomes the only gaming graphics API supported on Windows, Microsoft is gaining a stranglehold on PC gaming.

We need competition and freedom to drive down prices and drive up quality. A Microsoft monopoly on gaming would be very bad for both gamers and game developers.

SRC: http://blog.wolfire.com/2010/01/Why-you-should-use-OpenGL-and-not-DirectX
 
as long as it represents a viable alternative I can see it gradually gaining momentum

new APIs or OSes are not really a 6 month or even 1 year make-or-break deal; from the point a new API is released, it can and does take 2-plus years before you start seeing it in the majority of games - just look at DX9 and the subsequent usage of DX10 and 11, it has taken years to get the majority of games using DX11

DX11.1 and .2 represent some good features that could improve games for gamers, yet devs are not using them as they are only supported on Win8.1, which hardly anyone is using

that is the biggest downfall of DX and its new versions: MS insisting on locking devs and gamers into an OS upgrade in order to use them = low user base = no incentive for devs to use it, or being forced to support effectively 2 APIs already - old DX and new DX

OpenGL works on SteamOS; it also works on Windows, Linux, Mac, Android etc.
so work done on one platform can contribute towards other platforms

with mobile devices getting ever more popular and powerful, I can well see a time in say 5 years where the vast majority of casual gamers ditch consoles, PC's, laptops etc. and game entirely on a tablet with a controller and attached to the TV or even totally mobile

if you start to lay the groundwork now of OpenGL and crossplatform compatibility then it makes it that much easier to have a game that can run on a mobile platform, but also run with much higher visual settings on a steam box or windows PC

the only thing holding back OpenGL on the PC is developers taking the easy route of what they know - DX

however, that is gradually changing, with many of the major developers having either already converted or actively working on converting their engine to add OpenGL support

to be honest, I don't really care if it's OpenGL or Mantle that becomes "the standard"; as long as it is an open standard and it's as easy for vendors to add their own extensions and support as it is with OpenGL, then it is all good

OpenGL already has strong cross-vendor and cross-platform support, so I question if, or how long, it will take to whip Mantle into the same shape - and if in doing so you end up adding all of the "baggage" that gives it the same kinds of problems OpenGL currently has, whether that work would be better spent improving OpenGL, which is already better placed to have those features
 
Yep, dropping the ball was quite apparent and such a shame. Vista didn't help of course but missing deadlines on an already long project must have been a massive blow.

I do hope SteamOS gets a fair crack of the whip and with both AMD and nVidia giving it full backing, I see it has a damned good chance. I know AMD have been slow to Linux in the past but they seem to be taking the bull by the horns and jumping on SteamOS.
Yea, I really hope SteamOS takes off. I heard that nVidia devs had been helping them optimize their opengl drivers, and I expect AMD are doing the same too.:)

So with your experience in both, would you say that devs would switch to Linux or would you say that DX is still the devs choice of weapon?
I wouldn't say it's a matter of switching between Linux and DX, but maybe between opengl and Dx.

Obviously opengl is cross-platform, while Dx is windows only, so that's a +1 for opengl.

DirectX is still the devs' choice of weapon if they're writing a game for Windows/Xbox only, but I'm hoping opengl will gain traction with financial/dev support from the other OS vendors and the GFX hardware companies.

Nobody pays for games on a Linux based OS of course.;)

Edit: what Andybird123 said^^^
 
A good article Uber, and I remember doing the DX10 hack to get the better visuals for Crysis on XP. I also remember reading about Vista semi-blocking OpenGL, and the scare tactics definitely worked :(

@ Andy, spot on post. I made this thread to try and open some eyes to what goes on in the industry, as so many can't see or don't care to see the results. Open standards are the way forward; I said before that an OpenGL-style API would be good for the industry, but got shut down. If Mantle is an open standard, then sweet; if OpenGL gets a resurgence, also sweet. But as we stand now, PC games are held back because of DX. I am probably the most clueless person when it comes to coding, and I appreciate when people have been there, done that, or are doing it now. Far better info from the horse's mouth.

@ CamboFrog. Yea, makes sense, and I would love to see the industry move away from the shackles of DX, but that all takes time and money. SteamOS has the potential to kick-start that, and nVidia are pushing hard on OpenGL and have pretty deep pockets. :)

Ohhh and Linux was supposed to be OpenGL....Long day.
 
another good article on the potential future of Steam / Linux / OpenGL and gaming

http://www.eurogamer.net/articles/digitalfoundry-2014-in-theory-steamos-and-the-opengl-factor

Good read, and it shows me that OpenGL has a serious future. Android is the reason, and once they get to grips with that, they will be able to improve frame rates, as learning Android is the crux of it at the moment. The Tegra K1 got bigged up at CES and deservedly so: an awesome chip with laptop-beating potential. I would be interested in a tablet using the K1 and the potential to have my Steam games on that. I detest gaming on a touch screen, though, but like the article says, "Touch screen is now and there are other possibilities".

All in all, the future is looking good for OpenGL. Steam boxes may be a pricey option for now, but give it time and they will become a viable option for the masses (hopefully).
 
bluetooth controller :)
quite a few of them for android now

a Steam box is just a prebuilt PC with SteamOS, so just install SteamOS on your PC (as dual boot if needs be), job done; a Steam box will be £75 cheaper than a self-built Windows PC of the same spec :)
but yes, for consumer adoption they will need to get the Steam boxes right, and at the right price
 