
What was the point of making it 15% then? Not questioning you, just general confusion. lol

They cut the VAT to get everyone out spending (which was pretty dumb tbh, as no one was out spending due to having no money). Trouble is, now that everyone's gone on major spending sprees seeing as everything's got cheaper (and spent all the money they never had in the first place :confused:), when they actually whack it back up they're going to want back all the money they lost over the past year at 15%, so it will more than likely go to 20% or something. So everyone's going to be even worse off: they'll still have **** all, but everything will have gone right up in price, making it even tougher to afford, and the one-eyed idiot will be sitting there going OWNED. :D
 
Understanding & caring are 2 different things, because on this forum more people spend time playing games than doing productivity work.

You're looking at it from your own perspective & uses that others may not share.

Give the average user here access to the world's fastest supercomputer & the first thing they will do is install Crysis.

More people will care when it has uses for what they like to do on a daily basis & not just in rare cases.

The point is that all the things people do on a daily basis can almost all be accelerated massively with CUDA-enabled cards. That's really the point; that's why it's worth the money.

Now, when people first started buying multi-core processors they knew full well that very, very few applications supported the extra cores - but they knew that soon they would. The same is true with CUDA (and OpenCL).
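For anyone wondering what 'CUDA-accelerated' actually looks like, here's a rough sketch (purely illustrative - the kernel and numbers are made up, not taken from any real application) of the kind of data-parallel work a video encoder or image filter hands to the card: one thread per element, thousands of them running at once.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// One thread per element; the GPU runs thousands of these in parallel,
// which is where the big speed-ups over a few CPU cores come from.
__global__ void scale_and_add(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 2.0f * a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host data
    float* ha = new float[n];
    float* hb = new float[n];
    float* hout = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and copies
    float *da, *db, *dout;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dout, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough blocks of 256 threads to cover all n elements
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scale_and_add<<<blocks, threads>>>(da, db, dout, n);
    cudaMemcpy(hout, dout, bytes, cudaMemcpyDeviceToHost);

    printf("out[0] = %.1f\n", hout[0]);   // expect 4.0

    cudaFree(da); cudaFree(db); cudaFree(dout);
    delete[] ha; delete[] hb; delete[] hout;
    return 0;
}
```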
 
I think that CUDA, and the emerging standard, are quite interesting ...

I used to write code that had to be optimised for each manufacturer's RISC CPU (each had its strengths ...). I also coded for vector computers, often designing a new algorithm to benefit from that architecture (parallel streams, anyone?) - there is a lot that can be done to benefit numerical modelling solutions on massively parallel and vector computers ...

Then we used BLAS and LINPACK-type libraries because they were hand-optimised by the vendors to take advantage of their hardware ... sounds familiar? I think it's great that this is coming to the mainstream ... never mind MIMD with a few paltry cores on the CPU, the real performance is in SIMD ... I personally have no use for this technology right now, but maybe that will change soon ...
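To make the parallel concrete: a rough, hypothetical sketch of calling SGEMM through NVIDIA's cuBLAS library (sizes and values invented for illustration) - the same interface the old hand-tuned vendor BLAS libraries exposed, only now the vendor-optimised implementation runs on the GPU.

```cuda
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <vector>
#include <cstdio>

int main() {
    const int n = 512;                        // multiply two n x n matrices
    const size_t bytes = n * n * sizeof(float);
    std::vector<float> hA(n * n, 1.0f), hB(n * n, 2.0f), hC(n * n, 0.0f);

    // Copy the inputs onto the card
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA.data(), bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), bytes, cudaMemcpyHostToDevice);

    // C = alpha * A * B + beta * C, exactly as in classic BLAS SGEMM
    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(hC.data(), dC, bytes, cudaMemcpyDeviceToHost);
    printf("C[0] = %.1f\n", hC[0]);           // every element should be 2 * n = 1024.0

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```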
 
The point is that all the things people do on a daily basis can almost all be accelerated massively with CUDA-enabled cards. That's really the point; that's why it's worth the money.

Now, when people first started buying multi-core processors they knew full well that very, very few applications supported the extra cores - but they knew that soon they would. The same is true with CUDA (and OpenCL).

The point is not about what could be done, but what is being done & what is seen to be done.
People only really care about what they can do & not what they may be able to do.

You don't need multi-threaded SW to make use of multi-core CPUs.

It will be worth the money to the avg user when the SW that uses it is here.
http://www.anandtech.com/video/showdoc.aspx?i=3643&p=8
 
The point is that all the things people do on a daily basis can almost all be accelerated massively with CUDA-enabled cards. That's really the point; that's why it's worth the money.

Now, when people first started buying multi-core processors they knew full well that very, very few applications supported the extra cores - but they knew that soon they would. The same is true with CUDA (and OpenCL).

Well then surely you're not arguing for CUDA, you're arguing for GPU-accelerated applications.

No one is going to argue with that, but CUDA isn't the definition of that.

It's healthier for everyone if a standard such as CUDA isn't owned by an interested party in the sense of 'CUDA works on our hardware only'.

This is the point of OpenCL. I get the impression that some people around here see that CUDA is an nVidia thing and so assume OpenCL is an ATI thing, which gets the nVidia trolls worked up about it.

ATi has its own thing called 'Stream', which again I think is wrong.

ATi appear to have caught on to this and want to support OpenCL.

We need a stream computing API that has nothing to do with the GPU manufacturers, because otherwise it just opens the way for them to play one-upmanship with their competitors:

"Oh look, we've optimised this to work on our *better, faster* hardware"

*we've gimped it so it works badly on your hardware*.

The same goes for PhysX, and it's why it gets so much bad attention.

The concept of it is great - I personally like PhysX in the concept sense - but it's going to need to become a standard before it'll live up to its potential.

With nVidia owning and marketing PhysX, it's not gonna become a standard as it's restricted to nVidia hardware.

That's why I support OpenCL even more: we need a good physics API that can be run on a GPU of any brand, because only then will there be gameplay-changing physics implementations.

I'm really looking forward to OpenCL physics. I don't care if it's PhysX or Havok, just as long as, if it's PhysX, nVidia make it open source and it's ported to OpenCL to work on any GPU, and if it's Havok, it also runs on any GPU worth running it on.
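To illustrate the vendor-neutral point with a minimal, hypothetical OpenCL host snippet (C API, nothing here taken from a real application): the code just asks the driver stack which platforms are installed, so the exact same program lists an nVidia platform, an ATI/AMD one, or both, with no vendor-specific calls anywhere.

```cpp
#include <CL/cl.h>
#include <cstdio>

int main() {
    // Ask how many OpenCL platforms (vendor driver stacks) are installed
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, nullptr, &num_platforms);

    cl_platform_id platforms[8];
    if (num_platforms > 8) num_platforms = 8;
    clGetPlatformIDs(num_platforms, platforms, nullptr);

    // Print each platform's name - the calling code neither knows
    // nor cares which vendor's hardware is behind it
    for (cl_uint i = 0; i < num_platforms; ++i) {
        char name[256] = {0};
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, nullptr);
        printf("OpenCL platform %u: %s\n", i, name);
    }
    return 0;
}
```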
 
Well then surely you're not arguing for CUDA, you're arguing for GPU-accelerated applications.

No one is going to argue with that, but CUDA isn't the definition of that.

It's healthier for everyone if a standard such as CUDA isn't owned by an interested party in the sense of 'CUDA works on our hardware only'.

This is the point of OpenCL. I get the impression that some people around here see that CUDA is an nVidia thing and so assume OpenCL is an ATI thing, which gets the nVidia trolls worked up about it.

ATi has its own thing called 'Stream', which again I think is wrong.

ATi appear to have caught on to this and want to support OpenCL.

We need a stream computing API that has nothing to do with the GPU manufacturers, because otherwise it just opens the way for them to play one-upmanship with their competitors:

"Oh look, we've optimised this to work on our *better, faster* hardware"

*we've gimped it so it works badly on your hardware*.

The same goes for PhysX, and it's why it gets so much bad attention.

The concept of it is great - I personally like PhysX in the concept sense - but it's going to need to become a standard before it'll live up to its potential.

With nVidia owning and marketing PhysX, it's not gonna become a standard as it's restricted to nVidia hardware.

That's why I support OpenCL even more: we need a good physics API that can be run on a GPU of any brand, because only then will there be gameplay-changing physics implementations.

I'm really looking forward to OpenCL physics. I don't care if it's PhysX or Havok, just as long as, if it's PhysX, nVidia make it open source and it's ported to OpenCL to work on any GPU, and if it's Havok, it also runs on any GPU worth running it on.

I can understand your argument, but then again Nvidia are a company, not a charity. I personally think Nvidia have taken the initiative with GPU processing, far more than ATI, and it's only fair for them to push CUDA. This is no different from AMD's or Intel's compilers in my view.

I'm not sure shoehorning both ATI and Nvidia chipsets into using a common framework for GPU computing is an ideal solution; IMO it takes part of the competition out of it.
 
I don't encode videos
I don't use photoshop
I don't care about CUDA

Programs I have running 99% of the time: mirc, firefox, pidgin, utorrent.

And no, I don't care about pointless 3D widgets and doodas in firefox.
 
Well then surely you're not arguing for CUDA, you're arguing for GPU-accelerated applications.

No one is going to argue with that, but CUDA isn't the definition of that.

It's healthier for everyone if a standard such as CUDA isn't owned by an interested party in the sense of 'CUDA works on our hardware only'.

This is the point of OpenCL. I get the impression that some people around here see that CUDA is an nVidia thing and so assume OpenCL is an ATI thing, which gets the nVidia trolls worked up about it.

ATi has its own thing called 'Stream', which again I think is wrong.

ATi appear to have caught on to this and want to support OpenCL.

We need a stream computing API that has nothing to do with the GPU manufacturers, because otherwise it just opens the way for them to play one-upmanship with their competitors:

"Oh look, we've optimised this to work on our *better, faster* hardware"

*we've gimped it so it works badly on your hardware*.

The same goes for PhysX, and it's why it gets so much bad attention.

The concept of it is great - I personally like PhysX in the concept sense - but it's going to need to become a standard before it'll live up to its potential.

With nVidia owning and marketing PhysX, it's not gonna become a standard as it's restricted to nVidia hardware.

That's why I support OpenCL even more: we need a good physics API that can be run on a GPU of any brand, because only then will there be gameplay-changing physics implementations.

I'm really looking forward to OpenCL physics. I don't care if it's PhysX or Havok, just as long as, if it's PhysX, nVidia make it open source and it's ported to OpenCL to work on any GPU, and if it's Havok, it also runs on any GPU worth running it on.

+1
 
I can understand your argument, but then again Nvidia are a company, not a charity. I personally think Nvidia have taken the initiative with GPU processing, far more than ATI, and it's only fair for them to push CUDA. This is no different from AMD's or Intel's compilers in my view.

I'm not sure shoehorning both ATI and Nvidia chipsets into using a common framework for GPU computing is an ideal solution; IMO it takes part of the competition out of it.

They compete on the hardware level & that's where it should stay.

And you have just done exactly what kylew was talking about: that if it's not the CUDA way then it's the ATI way, when in fact that's not the case - it's the open way.

Intel & AMD compilers don't stop code from working on the other's CPU.

How about if, besides regular Blu-ray, Sony's entertainment studios released Blu-ray films that only played on Sony players, because they really were the first to push Blu-ray?

Taking your example to the extreme would stop the PC from being an open platform.
You would end up with Intel, AMD & NV systems that are incompatible with each other. Windows & SW would need to be made for each one of them, just because all three of them have brought innovations to the PC world first at one time or another.

It's bad enough having to own both brands of console because of exclusivity.
 
I'm not sure shoehorning both ATI and Nvidia chipsets into using a common framework for GPU computing is an ideal solution; IMO it takes part of the competition out of it.

So should they have different graphics APIs as well, if they could, instead of both using DX?
 
Using Blu-ray to help make your point was a bad move, as it was just one of the two proprietary standards that fought it out.

On the whole I think OpenCL is a great idea and it's the next logical step for GPU computing, but you can't dismiss CUDA, because without it we wouldn't now be heading towards a more open standard rather than not using the GPU's computing power for anything other than graphics.

Just to touch on the whole VAT thing: do you really think that when it goes up to whatever percent it won't affect current graphics card prices as well as future ones?
 
They compete on the hardware level & that's where it should stay.

And you have just done exactly what kylew was talking about: that if it's not the CUDA way then it's the ATI way, when in fact that's not the case - it's the open way.

Intel & AMD compilers don't stop code from working on the other's CPU.

How about if, besides regular Blu-ray, Sony's entertainment studios released Blu-ray films that only played on Sony players, because they really were the first to push Blu-ray?

Taking your example to the extreme would stop the PC from being an open platform.
You would end up with Intel, AMD & NV systems that are incompatible with each other. Windows & SW would need to be made for each one of them, just because all three of them have brought innovations to the PC world first at one time or another.

It's bad enough having to own both brands of console because of exclusivity.

I'm sorry but I totally disagree; there is NO WAY NV would plough the amount of money they do into CUDA if it was an open standard. Things like their fellowship program, CUDA centers of excellence and free hardware for certain academic groups would be a thing of the past; there'd be nowhere NEAR the same incentive!

It's just like OS X: they keep it locked to Apple hardware so that people who want to run OS X have to buy a Mac.

A good example is OpenOffice - Sun massively cut their involvement in OO recently, as it's an open platform, and what happened? The OO people cried out saying they had nowhere NEAR enough developers after Sun cut their commitment.

Quite simply, open standards DO NOT encourage innovation at the rate a successful, profitable product does.

And don't get me started on AMD/Intel compilers working on each other's platforms; they've been trying to get round this for years:

"Intel has designed its compiler purposely to degrade performance when a program is run on an AMD platform. To achieve this, Intel designed the compiler to compile code along several alternate code paths. Some paths are executed when the program runs on an Intel platform and others are executed when the program is operated on a computer with an AMD microprocessor"
 
Using Blu-ray to help make your point was a bad move, as it was just one of the two proprietary standards that fought it out.

On the whole I think OpenCL is a great idea and it's the next logical step for GPU computing, but you can't dismiss CUDA, because without it we wouldn't now be heading towards a more open standard rather than not using the GPU's computing power for anything other than graphics.

Just to touch on the whole VAT thing: do you really think that when it goes up to whatever percent it won't affect current graphics card prices as well as future ones?

My point stands, as all standards have had to fight against another, open or not.
BD is still more open than CUDA, as the Blu-ray consortium drives what's what with Blu-ray; Sony really was just responsible for developing the diode & getting the ball rolling, & Sony is not keeping it all to themselves with BD only running on their hardware.

We can't dismiss CUDA getting the ball rolling sooner, but I would never think for a second that no one else would have done the same; that would be the same as saying that if it weren't for 3dfx we would still have no 3D gfx cards today, or if it were not for Matrox there would never be multi-screen gaming.

Even ATI had its Avivo converter & Folding@home way before CUDA, so the idea of things other than gfx running on the GPU was already brewing.
 
I'm sorry but I totally disagree; there is NO WAY NV would plough the amount of money they do into CUDA if it was an open standard. Things like their fellowship program, CUDA centers of excellence and free hardware for certain academic groups would be a thing of the past; there'd be nowhere NEAR the same incentive!

It's just like OS X: they keep it locked to Apple hardware so that people who want to run OS X have to buy a Mac.

A good example is OpenOffice - Sun massively cut their involvement in OO recently, as it's an open platform, and what happened? The OO people cried out saying they had nowhere NEAR enough developers after Sun cut their commitment.

Quite simply, open standards DO NOT encourage innovation at the rate a successful, profitable product does.

And don't get me started on AMD/Intel compilers working on each other's platforms; they've been trying to get round this for years:

"Intel has designed its compiler purposely to degrade performance when a program is run on an AMD platform. To achieve this, Intel designed the compiler to compile code along several alternate code paths. Some paths are executed when the program runs on an Intel platform and others are executed when the program is operated on a computer with an AMD microprocessor"

NV can plough in as much money as they like, but it only runs on their own hardware & no one else's, so it's innovation at the cost of low adoption until it runs on others' hardware.

AMD/Intel compilers don't stop the software from running on each other's CPUs.

CUDA & PhysX don't degrade on ATI gfx cards; they don't run at all.

Apple have their complete platform; NV does not.
 
Thing is... with CUDA and PhysX nVidia did initially offer other companies access, with relevant legal assurances so that they wouldn't be at nVidia's mercy... and they turned their noses up at it... ATI even went as far as to deny nVidia even offered it to them... normally I wouldn't blame them (ATI or anyone else) for it... but as both technologies are well ahead of anything else in the field, we are just going to keep on reinventing the wheel a few times over before we get anywhere... I don't see anything in DX10, 11 or even maybe 12 being on the same level.
 
Thing is... with CUDA and PhysX nVidia did initially offer other companies access, with relevant legal assurances so that they wouldn't be at nVidia's mercy... and they turned their noses up at it... ATI even went as far as to deny nVidia even offered it to them... normally I wouldn't blame them (ATI or anyone else) for it... but as both technologies are well ahead of anything else in the field, we are just going to keep on reinventing the wheel a few times over before we get anywhere... I don't see anything in DX10, 11 or even maybe 12 being on the same level.

You have no proof of that, as usual, unless you have a copy of the formal email sent to ATI.
But with NV's track record I would not trust them no matter what they say.

DX is not set by any gfx vendor & no vendor is locked out.
 
Thing is... with CUDA and PhysX nVidia did initially offer other companies access, with relevant legal assurances so that they wouldn't be at nVidia's mercy... and they turned their noses up at it... ATI even went as far as to deny nVidia even offered it to them... normally I wouldn't blame them (ATI or anyone else) for it... but as both technologies are well ahead of anything else in the field, we are just going to keep on reinventing the wheel a few times over before we get anywhere... I don't see anything in DX10, 11 or even maybe 12 being on the same level.

Can you back this up please, or is it, like most other stuff you post, rumour or from a blog? ;)
 
Sure, I have no proof... as usual... but I'll prolly be proved right, as usual... was I wrong about the number of SPs on the 5870, the performance gains, the launch price point, etc.? Surely if it was all so wild and unsubstantiated I'd be wrong more often than right...

RE: DX, maybe not - but that wasn't my point. My point was something along the lines of: games like SupCom could run silky smooth today using CUDA for a lot of the heavy parallel processing - AI especially is something that can be massively sped up in this regard... instead we will have to wait until at least DX11 gets up to speed, and probably more likely DX12, before we see anything close to the same level in what it can offer.
 