Nvidia says it's first to offer full OpenGL 3.0 support

Sitting on hardware stalls development. It's not always about features, and in this instance I'm talking specifically about the hardware itself. As good as my GTX was, it was an absolute juice guzzler, and they could have done so much more to get that power consumption down, but because of the lack of competition there was nothing to drive them to do it. You'll always see the biggest advances when either side is under pressure.

ATi, for instance, have 40nm cores taped out - that's not bad for a company that apparently isn't pushing the industry forward. Nvidia could possibly have already got there if they hadn't sat on their backsides. But again, like I said earlier, that is due in part to ATi's blunders at the time, though that isn't really an excuse.

What does 40nm really bring us right now? Or the immediate future?
 
When I look back and see that it made no difference... it's obvious I don't think that, no. I don't think Nvidia needed to include DX10 with the G80s - as you very well know, we all bought them for their DX9 performance, as a) they were untouchable in DX9 and b) there were no DX10 games out at the time. So it doesn't bother me that ATi didn't have DX10 compatibility at the time, as I never used it on my GTX anyway. Back when Nvidia were pushing SM3.0, I was on the other side of the fence, owning an ATi card at the time - it didn't bother me then either.

I tend not to get caught up and choose sides in these arguments; it's crazy really. Every so often the tables turn and company a) will be doing exactly what you've been slating company b) for doing. And the performance and price/performance crowns change hands as well.

To be honest, I'm glad I sold up when I did, in the graphics market in particular - it's frustrating reading forums sometimes, filled with juvenile arguments, although that isn't limited to the graphics forums at all.

What does 40nm really bring us right now? Or the immediate future?

Cooler, smaller, more efficient cards that are cheaper to produce. That's extremely important IMO...
 
Nothing other than cooler/less power if not clocked far higher.

You do not need to make it smaller to add more modern breakthrough extensions.

It's not as simple as that - there's only so far you can push a core at any given process size. You probably remember Nvidia had huge trouble taping out the GTX 280 cores; they couldn't pack any more into those cores as they were already huge for the process size. ATi, on the other hand, found they had the opposite problem, lol - they had to add extra shader units to the cores because of the amount of interconnects between the core and the PCB; they needed to fill the space(!!). If I remember correctly, the 4800s were supposed to have around 640 shader units but ended up with 800!
 
Not to say a smaller die isn't desirable - but I get the feeling ATI is trying to draw Nvidia into a silly race for the smallest die here, in the hope that Nvidia will take the bait and become so entrenched in fighting that corner that ATI can pull a trump card out of their sleeve...

IMO ATI would be better off slightly rejigging their SP architecture, adding a few more ROPs and pushing for a hardware physics standard.
 
It's not as simple as that - there's only so far you can push a core at any given process size. You probably remember Nvidia had huge trouble taping out the GTX 280 cores; they couldn't pack any more into those cores as they were already huge for the process size. ATi, on the other hand, found they had the opposite problem, lol - they had to add extra shader units to the cores because of the amount of interconnects between the core and the PCB; they needed to fill the space. If I remember correctly, the 4800s were supposed to have around 640 shader units but ended up with 800!

There is a point there - with the 295, Nvidia have had to drop the memory to 448-bit to fit the core + memory + PCI-e splitter and power circuitry... I still prefer Nvidia's architecture though, even though it's bigger, as it lends itself better to things like hardware physics and other GPGPU features - though time will tell how well that takes off or not. DX11 seems to be pushing support for the GPU to be able to compute more general non-graphical stuff.
 
You even try to put a negative spin on reducing the manufacturing process. I can't see how you can think that's a bad thing at all. I already mentioned why - it makes them cheaper to produce, which is a huge factor, especially considering the astronomical price of the GTX 280s at the time of launch. That wasn't Nvidia being greedy; they HAD to price them there to break even.

That's exactly why ATi have managed to undercut them so much: their cores are so much cheaper to produce. The cost to produce a core is very closely linked to how many (working) cores you can get from one wafer.
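
To give a rough feel for that relationship, here's a back-of-the-envelope sketch using the standard dies-per-wafer approximation and a simple Poisson yield model. The wafer cost, die sizes and defect density below are illustrative assumptions only, not real foundry figures - only the relative gap matters.

[code]
#include <cmath>
#include <cstdio>

// Back-of-the-envelope cost per good die. Every number fed in here is an
// illustrative assumption, not a real foundry figure.
double costPerGoodDie(double dieAreaMm2, double waferDiameterMm,
                      double waferCost, double defectsPerMm2)
{
    const double kPi = 3.14159265358979323846;
    const double r = waferDiameterMm / 2.0;

    // Standard dies-per-wafer approximation (second term accounts for edge loss).
    double diesPerWafer = (kPi * r * r) / dieAreaMm2
                        - (kPi * waferDiameterMm) / std::sqrt(2.0 * dieAreaMm2);

    // Simple Poisson yield model: a bigger die is more likely to catch a defect.
    double yield = std::exp(-dieAreaMm2 * defectsPerMm2);

    return waferCost / (diesPerWafer * yield);
}

int main()
{
    const double waferMm   = 300.0;   // 300 mm wafer
    const double waferCost = 5000.0;  // assumed cost per wafer (arbitrary)
    const double defects   = 0.002;   // assumed defects per mm^2 (arbitrary)

    // A big die versus a small one.
    std::printf("~570 mm^2 die: $%.0f per good die\n",
                costPerGoodDie(570.0, waferMm, waferCost, defects));
    std::printf("~260 mm^2 die: $%.0f per good die\n",
                costPerGoodDie(260.0, waferMm, waferCost, defects));
    return 0;
}
[/code]

The big die pays twice: fewer candidates fit on the wafer, and each one is more likely to be killed by a defect, so the cost per good die comes out several times higher even with these made-up inputs.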
 
It just depends on how well ATI's super-scalar architecture handles GPGPU-type tasks as to whether it's a good thing or not - until they start to roll out hardware physics, etc., we won't really know.
 
Nothing other than cooler/less power if not clocked far higher.

You do not need to make it smaller to add more modern breakthrough extensions.

Yes you do - core speeds are only a small part of it; extra features and shaders enabled by shrinks are a much larger and more important part. Besides, for the sake of the environment there should be an interest in developing smaller chips that don't pollute as much during production, and in reducing the frankly shocking power consumption figures on GPUs these days.
 
There is no ATI support for GL_EXT_geometry_shader4 under OGL 3.0, though, which is (currently) maintained by nVidia. There is no ARB equivalent for 3.0, so ATI refuse to support it.

http://lists.apple.com/archives/mac-opengl/2007/Nov/msg00006.html
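
For anyone wanting to cope with this in their own code, the usual approach is just to query the extension string at runtime and fall back if geometry shaders aren't advertised. A minimal sketch, assuming a GL context has already been created elsewhere (the ARB spelling is checked too, purely as a defensive measure):

[code]
#include <GL/gl.h>
#include <cstring>
#include <cstdio>

// Returns true if the space-separated extension string contains 'name'.
// Assumes a valid OpenGL context is current when this is called.
static bool hasExtension(const char* name)
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (!ext) return false;

    const size_t len = std::strlen(name);
    for (const char* p = std::strstr(ext, name); p; p = std::strstr(p + 1, name)) {
        // Guard against partial matches of longer extension names.
        const bool startOk = (p == ext) || (p[-1] == ' ');
        const bool endOk   = (p[len] == ' ') || (p[len] == '\0');
        if (startOk && endOk) return true;
    }
    return false;
}

bool geometryShadersAvailable()
{
    return hasExtension("GL_EXT_geometry_shader4")
        || hasExtension("GL_ARB_geometry_shader4");
}

int main()
{
    // In a real app this runs after context creation (GLUT/SDL/wgl/glX).
    std::printf("geometry shaders: %s\n",
                geometryShadersAvailable() ? "yes" : "no - fall back");
    return 0;
}
[/code]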

Get a Mac! :p

Seriously though guys, as soon as stuff like Maya/3DS/Pro-ENG/Solidworks etc. requires it, ATI will have it ready in their drivers. Right now it's all forum fanboy p*****g contests and pure conjecture about one or two titles.
 
They said it was not needed at the time, which is also true of DX10.1 and PhysX currently.

I disagree with the PhysX bit - game development has reached a point where you're limited as to what you can handle on the CPU, and until we have some kind of standard for hardware physics, game development is being held back.
 
I disagree with the PhysX bit - game development has reached a point where you're limited as to what you can handle on the CPU, and until we have some kind of standard for hardware physics, game development is being held back.

I'll make the point again.

There are no games of significance that need PhysX. At the moment it's just a fancy, unneeded extra that most people will pass over when choosing a GPU in favour of more tangible factors (cost, for one). Yes, it's a lovely extra, as are DX10.1, CUDA, OpenGL and Stream, but they share the same characteristic - they are not needed at the moment. When hardware physics becomes the norm (I am of the view that MS will be the driver, not Nvidia), current GPUs will be out of fashion. Buying on an infant feature for the medium/far future in computer hardware is silly, especially for a GPU.

There are many technologies and products that sound fantastic in theory and would be great if there was anything significant out there to support them.
 
Yes you do - core speeds are only a small part of it; extra features and shaders enabled by shrinks are a much larger and more important part. Besides, for the sake of the environment there should be an interest in developing smaller chips that don't pollute as much during production, and in reducing the frankly shocking power consumption figures on GPUs these days.

It's not Nvidia that are pioneering it, so to some here it's a moot point :rolleyes: You can guarantee that for certain people here, if this was ATi doing it, OpenGL and PhysX would be useless and not as supposedly ground-breaking as they are now.
 
I'll make the point again.

There are no games of significance that need PhysX. At the moment it's just a fancy, unneeded extra that most people will pass over when choosing a GPU in favour of more tangible factors (cost, for one). Yes, it's a lovely extra, as are DX10.1, CUDA, OpenGL and Stream, but they share the same characteristic - they are not needed at the moment. When hardware physics becomes the norm (I am of the view that MS will be the driver, not Nvidia), current GPUs will be out of fashion. Buying on an infant feature for the medium/far future in computer hardware is silly, especially for a GPU.

There are many technologies and products that sound fantastic in theory and would be great if there was anything significant out there to support them.
Do you work in video game development or have any experience in the development of video games?

From a game development point of view hardware physics is absolutely needed - just because you've not yet seen a game that makes extensive use of it doesn't mean it's not needed.
 
Absolutely needed? Hmmm, I would not go that far. Yes, it is going to be a nice feature to have, but that does not mean we should blindly welcome, with open arms, a proprietary piece of IP from a company with a history of trying to force their own tech on the industry. If Nvidia decided to open it up to a third-party standards committee to oversee, then that would be a different proposition entirely.

The same argument could be made that GPU power on the level of tri-SLI'd 280s is absolutely required - it's not, but it would be nice if everybody had it. Or perhaps 30" displays are absolutely required, plus 5.1 sound, 3D displays, etc.

I don't want things to be held back. I like technology and progress - we all do, and that is why we are all here posting in this forum. But overriding that, I don't want a single company to have an iron fist over the industry. Look at the stagnation we had while ATI were floundering with the 2x and 3x series, and the loltastic original pricing we saw with the GTX range. Do you want a return to those days? Because that is where we will be heading if Nvidia get their way.

PhysX = Good stuff :)
Nvidia trying to use it as leverage over the industry = Bad form :(
 
Do you work in video game development or have any experience in the development of video games?

From a game development point of view hardware physics is absolutely needed - just because you've not yet seen a game that makes extensive use of it doesn't mean it's not needed.

No I don't; I am a consumer who plays games on my PC, i.e. the end user. I don't need you to tell me that I can't formulate an opinion as an end user because I don't understand game development. Frankly, I don't care how a game is built - I enjoy the game and leave it at that.

What I do know is that PhysX is not important now. How long before you take a look at reality? I'm getting bored of copying and pasting.

What are your views on DX10.1, by the way?
 
Absolutely needed? Hmmm, I would not go that far. Yes, it is going to be a nice feature to have, but that does not mean we should blindly welcome, with open arms, a proprietary piece of IP from a company with a history of trying to force their own tech on the industry. If Nvidia decided to open it up to a third-party standards committee to oversee, then that would be a different proposition entirely.

The same argument could be made that GPU power on the level of tri-SLI'd 280s is absolutely required - it's not, but it would be nice if everybody had it. Or perhaps 30" displays are absolutely required, plus 5.1 sound, 3D displays, etc.

I don't want things to be held back. I like technology and progress - we all do, and that is why we are all here posting in this forum. But overriding that, I don't want a single company to have an iron fist over the industry. Look at the stagnation we had while ATI were floundering with the 2x and 3x series, and the loltastic original pricing we saw with the GTX range. Do you want a return to those days? Because that is where we will be heading if Nvidia get their way.

PhysX = Good stuff :)
Nvidia trying to use it as leverage over the industry = Bad form :(

I agree - maybe "absolutely needed" is putting it a bit strongly - but current CPU solutions for handling objects and so on are reaching the point where the CPU doesn't have the performance to do it at an acceptable rate. With hardware physics of any kind we would see much more immersive game environments and much more inventive games. And don't underestimate the effect even simple things, like steam/smoke interacting with a fan blade and wrapping around objects in its path, can have on the overall gameplay - once you've experienced it, going back to older games makes them look very ordinary.
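
Purely to illustrate why that kind of effect swamps a CPU (this is a toy sketch, not how PhysX or any real engine does it): even a naive smoke simulation ends up doing work proportional to particles times obstacles every single frame, and each particle's update is independent of the others - exactly the sort of load that maps onto hundreds of shader units far better than onto a couple of CPU cores.

[code]
#include <vector>
#include <cmath>

// Toy illustration only - not how PhysX or any real engine implements this.
struct Vec3 { float x, y, z; };
struct Particle { Vec3 pos, vel; };
struct Sphere  { Vec3 centre; float radius; };   // stand-in for scene geometry

// One naive simulation step: every particle tested against every obstacle.
// Cost is O(particles * obstacles) per frame, which is what chokes a CPU
// once you want thousands of smoke/steam particles reacting to the scene.
void step(std::vector<Particle>& smoke,
          const std::vector<Sphere>& obstacles, float dt)
{
    for (Particle& p : smoke) {
        // Integrate: current velocity plus a little upward drift.
        p.pos.x += p.vel.x * dt;
        p.pos.y += (p.vel.y + 0.5f) * dt;
        p.pos.z += p.vel.z * dt;

        for (const Sphere& s : obstacles) {
            float dx = p.pos.x - s.centre.x;
            float dy = p.pos.y - s.centre.y;
            float dz = p.pos.z - s.centre.z;
            float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
            if (dist < s.radius && dist > 0.0f) {
                // Push the particle out along the surface normal so the
                // smoke appears to wrap around the obstacle.
                float push = (s.radius - dist) / dist;
                p.pos.x += dx * push;
                p.pos.y += dy * push;
                p.pos.z += dz * push;
            }
        }
    }
}
[/code]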
 
No I don't; I am a consumer who plays games on my PC, i.e. the end user. I don't need you to tell me that I can't formulate an opinion as an end user because I don't understand game development. Frankly, I don't care how a game is built - I enjoy the game and leave it at that.

What I do know is that PhysX is not important now. How long before you take a look at reality? I'm getting bored of copying and pasting.

What are your views on DX10.1, by the way?

I can see your perspective from a consumer's point of view - I'm just of the opinion that once you've seen what hardware physics (when properly implemented by a good designer) can do, you'll change your mind.

The main benefits of DX10.1 right now are AA quality and performance, primarily on ATI hardware... the main problem here is throwing deferred shading into the mix, as it changes all the rules. My views are somewhat ironically jaded on this, as most of the effects developers use now can be "faked up" under even DirectX 7 given the performance of today's CPUs. I'm more sceptical of the designer here, though, than the technology - my thought is, if you're going to use this technology, at least do it justice.

Here's one example: ATI were pushing DX10.1 to make something that looked like their ping-pong tech demo, when you can do it with just DX7 and a little trickery... though I couldn't get the real-time illumination system up and running in the two hours I spent mocking up a DX7 proof of concept - with a little more time spent on it I could have the real-time illumination and environment lighting effects working properly.

http://www.youtube.com/watch?v=MJfj_mnoAwg
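
For anyone wondering what "a little trickery" looks like in general terms (this has nothing to do with the proof of concept above - it's just the classic fixed-function-era idea): lighting is precomputed offline into a lightmap and the base texture is simply modulated by it at draw time, which DX7-class hardware can do with a MODULATE texture stage. A minimal CPU-side sketch of the modulate step, assuming the lightmap has the same dimensions as the base texture:

[code]
#include <vector>
#include <cstdint>
#include <algorithm>

// A tiny RGB image: width * height * 3 bytes.
struct Image {
    int width = 0, height = 0;
    std::vector<uint8_t> rgb;
};

// Multiply the base texture by a precomputed lightmap (assumed to be the
// same size). This is the same maths a fixed-function MODULATE texture
// stage performs in hardware - done here on the CPU purely for illustration.
Image bakeLitTexture(const Image& base, const Image& lightmap)
{
    Image out;
    out.width  = base.width;
    out.height = base.height;
    out.rgb.resize(base.rgb.size());

    for (size_t i = 0; i < base.rgb.size(); ++i) {
        // result = base * light / 255, clamped to [0, 255]
        int v = base.rgb[i] * lightmap.rgb[i] / 255;
        out.rgb[i] = static_cast<uint8_t>(std::min(v, 255));
    }
    return out;
}
[/code]

There's no per-pixel shader maths at runtime, which is why static scenes lit this way can look surprisingly close to the "proper" techniques - and also why the trick falls apart the moment the lights or the geometry start moving.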
 