Rumours are out that Kepler GK104 cards will be $299...

And according to the present rumours, the reason NVIDIA has switched to launching mid-range first, followed by high end, is that they feel their mid-range will be competitive against the 7970 and they don't want AMD to rule for too long.

So where are these cards then? :confused: No confirmed dates/confirmed specs? Feel a bit sorry for team green tbh, they've been caught with their pants round their ankles :p
 
The first page I'll go to in reviews of the GK104 will be power consumption; that will set the tone for its success and for the rest of the lineup. ;)

Hahahahaha. I love it when people fall for that.

I did some calculations recently. If you game for three hours a day you will save about £4.49 over the space of a year by having a 7970 over a 6970.

Literally pennies a day.
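
For anyone who wants to check the sum, it works out like this (the ~30W gaming-power gap and the 13.7p/kWh tariff are my assumptions that happen to land near that figure, not numbers from measurements):

```python
# Sanity-check of the figure above. The ~30 W gaming-power gap
# between a 6970 and 7970 and the 13.7p/kWh tariff are assumed
# inputs chosen to land near the quoted number, not measured data.
watts_saved = 30            # assumed average gaming power difference (W)
hours_per_day = 3           # gaming time from the post
price_per_kwh = 0.137       # assumed UK electricity price (GBP/kWh)

kwh_per_year = watts_saved / 1000 * hours_per_day * 365
saving = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.1f} kWh/year -> GBP {saving:.2f} saved")
# ~32.9 kWh/year -> GBP 4.50 saved, i.e. just over a penny a day
```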
 
Where'd you get that from?

4xx was a "tock" and 5xx was a "tick", assuming you mean that the tocks are the architectural changes and the ticks are refreshes (à la Intel).

Kepler is clearly a "tock". 7970 is also a "tock".

Neither. I can't remember which way around the ticks and tocks are, but going by your example Intel's tocks are architectural changes and the ticks are new processes; a refresh is just a tweaked part on the same process. The 580gtx is the same part on the same process with a few tweaks. A tick would be a 580gtx shrunk to 28nm, which isn't happening.

GPUs are almost always both at the same time. The only time you really get a "split" between the two is when something needs fixing. The 3870 was a new process but mostly the same chip, there to fix power and cut costs quickly; the 4870 was a huge change but on the same process. The 580gtx was the same chip but fixed, and the 6970 was a pretty big architecture change (but without the new process).
 
Hahahahaha. I love it when people fall for that.

I did some calculations recently. If you game for three hours a day you will save about £4.49 over the space of a year by having a 7970 over a 6970.

Literally pennies a day.

That isn't what he's saying, at all. If the GK104 REALLY uses 250W or so, then big Kepler will likely be a freaking 350-400W monster, or be crazy clock-limited like the 7970 to fit within a sane power usage.

Or, more likely: if GK104 is a hair above the 7970 and uses 30-40W less, AMD has made a booboo and GK112/110/big Kepler, whatever it's called, could be epic.

If the GK104 is 20% behind an overclocked 7970, and uses more power, big Kepler will likely have issues.
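
A quick back-of-the-envelope for that 250W scenario (linear power-with-performance scaling within one lineup is the assumption here, not a law, and 250W is just the rumoured figure):

```python
# Toy sketch: assume power grows roughly in proportion to
# performance within one architecture on one process.
# 250 W is the rumoured GK104 figure, not a confirmed spec.
gk104_power_w = 250
for uplift in (1.4, 1.5, 1.6):
    est = gk104_power_w * uplift
    print(f"{uplift:.1f}x GK104 performance -> ~{est:.0f} W")
# 1.4x -> ~350 W, 1.6x -> ~400 W: the "350-400W monster" above
```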
 
Hahahahaha. I love it when people fall for that.

I did some calculations recently. If you game for three hours a day you will save about £4.49 over the space of a year by having a 7970 over a 6970.

Literally pennies a day.

Very true. I do, however, like power-reduction technologies and believe them to be environmentally responsible. I don't see this as much of a buying feature but as a natural progression of technology; the individual will save little, but society will profit more. Heh... yeah, 90% of my light bulbs are energy efficient :D
 
They're hardly mainstream, are they?

If they had a major worthwhile point, people wouldn't buy any AMD cards.

Yeah, there are some pretty cool PhysX titles out there. However, PhysX does absolutely nothing Havok can't. And Havok can do it with an API using your CPU.

The creators of Havok realised this and, rather than wasting time like Ageia, decided to market it as an API instead.

Nvidia insisted on bolting PhysX to their cards, heavily shrinking its target audience. That's why we have few decent PhysX titles: you need to be in complete cahoots with Nvidia to make one, which limits your audience and makes it a brave thing to do.

CUDA? Another server/workstation technology that is of very little use to gamers. Good for folding, mind, but again you are reaching out to a small target audience.

PhysX is also an API like Havok, and it also runs on the CPU. The difference is that the CPU is not fast enough for more complex physics effects, and this is where NVIDIA's PhysX has its advantage over something like Havok on the CPU.

Every physics framework requires someone or other to be in cahoots with someone else. This is true whether it's PhysX or Havok or Bullet or whatever else you want to use.

As for CUDA: AMD now uses GCN. Compute was a trend NVIDIA saw GPUs heading towards, and it has happened; DX11 includes DirectCompute, for example. For better or worse, it does have its advantages. It means future games will be able to leverage not just graphics power but additional compute power, and this should become more common now that AMD has also embraced compute.

And PhysX is implemented on top of CUDA. Similarly, PhysX could be implemented on AMD GPUs if AMD wanted, just as it is implemented on CPUs. Or it could be implemented on an FPGA or ASIC or whatever else you want. Ultimately it's just a software library that gains massive advantages from vectorised instructions.
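
To make the vectorisation point concrete, here's a toy data-parallel update in the style such libraries use (illustrative only, nothing like PhysX's actual internals):

```python
import numpy as np

# Toy Euler step for n particles, written as whole-array operations.
# This data-parallel shape is what lets the same physics work map
# onto SIMD units on a CPU or onto thousands of GPU threads.
n = 100_000
pos = np.zeros((n, 3))                  # particle positions
vel = np.random.randn(n, 3)             # particle velocities
gravity = np.array([0.0, -9.81, 0.0])   # constant acceleration
dt = 1.0 / 60.0                         # one 60 fps frame

vel += gravity * dt   # one vectorised update across all particles
pos += vel * dt       # no per-particle loop anywhere
```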


On another note, unifying compute and GPUs isn't without its advantages in a financial sense. Costs in microelectronics depend dramatically on economies of scale: selling X units to gamers and Y units to industry for what is essentially the same product dramatically reduces the cost per unit.
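
As a toy illustration of that (every number here is made up):

```python
# Fixed design cost amortised over more units. All figures invented.
fixed_cost = 400e6        # assumed design/mask/R&D cost for one chip
marginal_cost = 40.0      # assumed manufacturing cost per unit

def unit_cost(units_sold):
    return fixed_cost / units_sold + marginal_cost

print(unit_cost(2_000_000))              # gamers only:       240.0
print(unit_cost(2_000_000 + 500_000))    # gamers + industry: 200.0
```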
 
That isn't what he's saying, at all. If the GK104 REALLY uses 250W or so, then big Kepler will likely be a freaking 350-400W monster, or be crazy clock-limited like the 7970 to fit within a sane power usage.

Or, more likely: if GK104 is a hair above the 7970 and uses 30-40W less, AMD has made a booboo and GK112/110/big Kepler, whatever it's called, could be epic.

If the GK104 is 20% behind an overclocked 7970, and uses more power, big Kepler will likely have issues.

Exactly.
 
That isn't what he's saying, at all. If the GK104 REALLY uses 250W or so, then big Kepler will likely be a freaking 350-400W monster, or be crazy clock-limited like the 7970 to fit within a sane power usage.

Or, more likely: if GK104 is a hair above the 7970 and uses 30-40W less, AMD has made a booboo and GK112/110/big Kepler, whatever it's called, could be epic.

If the GK104 is 20% behind an overclocked 7970, and uses more power, big Kepler will likely have issues.

I've never met anyone who cared about a high-end component guzzling power. It's kind of like a Ferrari: you don't buy one with fuel economy and cheap servicing in mind.

Fermi 5xx could use just as much power as 4xx if you removed the throttles. Yet people fell for the hype. Hilarious :D
 
I've never met anyone who cared about a high-end component guzzling power. It's kind of like a Ferrari: you don't buy one with fuel economy and cheap servicing in mind.

Fermi 5xx could use just as much power as 4xx if you removed the throttles. Yet people fell for the hype. Hilarious :D

I posted and he confirmed it: it has NOTHING to do with the power itself, it's about what the architecture is capable of. Try reading what I said, not what you think I was getting at.

If the 7970 at its current stock performance were using 350W, it would mean there was likely no overclocking headroom and that something was drastically wrong with the architecture. It would mean something 30% slower (like the 7870) would still be using 225+W. It would give an indication of where the other cards and their performance would land.

It's nowhere near rocket science: when a 480gtx used 300W, it was not remotely surprising that a card 30-40% slower used around 40% less power. So knowing the 480gtx's performance AND its power usage roughly told us where the 460gtx would end up.

If the 460gtx had come first, you'd have known how much power a 480gtx would likely use to be 30-40% faster.

As I said, if the 660ti comes in at 768 shaders and uses 250W, big Kepler, to be 40+% faster, will be a well-over-300W GPU. I don't think NVIDIA will worry about that as much as AMD have; if they do, it will mean a definitely stunted big Kepler, like the 7970.

If, however, the GK104 comes in at 180W or less, it should mean a completely unlimited, fast-as-crap big Kepler.

It's what you can estimate from the data... he's not interested in the actual power usage itself, but in what it likely means for other parts in the range.
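
The whole logic boils down to a one-line estimate (proportional scaling is the working assumption; the numbers are from the examples above and the rumours, not confirmed specs):

```python
# Estimate a sibling card's power from one known part in the range,
# assuming power scales roughly with performance.
def estimate_power(known_power_w, known_perf, target_perf):
    return known_power_w * (target_perf / known_perf)

# GTX 480 at ~300 W -> a part ~35% slower (460-ish territory):
print(estimate_power(300, 1.00, 0.65))   # ~195 W, around 35% less
# A hypothetical 180 W GK104 leaves big Kepler room to be ~40%
# faster at only ~250 W, i.e. "completely unlimited":
print(estimate_power(180, 1.00, 1.40))   # ~252 W
```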
 
As I understand it Havok implements physics but it's mainly for eye candy / visual effects. PhysX implementation is very different, and requires a GPU's massively parallel processing capability to perform all the simultaneous calculations required. It's a potentially much more powerful (and demanding) solution.

PhysX requires CUDA for the calculations. This is just one use for CUDA, there are many others, mainly outside of the gaming sphere, for example weather modelling and prediction, protein mapping etc, where the spoils for nVidia to reap are tremendous (think in terms of supercomputer installations).
 
I posted and he confirmed it: it has NOTHING to do with the power itself, it's about what the architecture is capable of. Try reading what I said, not what you think I was getting at.

If the 7970 at its current stock performance were using 350W, it would mean there was likely no overclocking headroom and that something was drastically wrong with the architecture. It would mean something 30% slower (like the 7870) would still be using 225+W. It would give an indication of where the other cards and their performance would land.

It's nowhere near rocket science: when a 480gtx used 300W, it was not remotely surprising that a card 30-40% slower used around 40% less power. So knowing the 480gtx's performance AND its power usage roughly told us where the 460gtx would end up.

If the 460gtx had come first, you'd have known how much power a 480gtx would likely use to be 30-40% faster.

As I said, if the 660ti comes in at 768 shaders and uses 250W, big Kepler, to be 40+% faster, will be a well-over-300W GPU. I don't think NVIDIA will worry about that as much as AMD have; if they do, it will mean a definitely stunted big Kepler, like the 7970.

If, however, the GK104 comes in at 180W or less, it should mean a completely unlimited, fast-as-crap big Kepler.

It's what you can estimate from the data... he's not interested in the actual power usage itself, but in what it likely means for other parts in the range.

Don't confuse TDP and the power a GPU requires...they aren't the same.
 
As I understand it Havok implements physics but it's mainly for eye candy / visual effects. PhysX implementation is very different, and requires a GPU's massively parallel processing capability to perform all the simultaneous calculations required. It's a potentially much more powerful (and demanding) solution.

PhysX requires CUDA for the calculations. This is just one use for CUDA, there are many others, mainly outside of the gaming sphere, for example weather modelling and prediction, protein mapping etc, where the spoils for nVidia to reap are tremendous (think in terms of supercomputer installations).

Precisely.
 
As I understand it Havok implements physics but it's mainly for eye candy / visual effects. PhysX implementation is very different, and requires a GPU's massively parallel processing capability to perform all the simultaneous calculations required. It's a potentially much more powerful (and demanding) solution.

PhysX requires CUDA for the calculations. This is just one use for CUDA, there are many others, mainly outside of the gaming sphere, for example weather modelling and prediction, protein mapping etc, where the spoils for nVidia to reap are tremendous (think in terms of supercomputer installations).

Yeah I'm aware of what it's capable of :)

I'm also aware of what it's achieved in games, and I'm under the impression that nothing done so far could not be achieved using a CPU and an SDK.

There have been loads of "could be cool" things over the past few years. Hydra was one. Sadly it all falls back to software support (drivers and so on), and like many other "could be cool" things (SLI, Crossfire, Quadfire, Quad SLI, Eyefinity, Surround, 3D, etc.) the software support is barely there or completely non-existent. This is something Nvidia should be realising: stop putting money into "what if" items and concentrate on just getting the fastest GPUs out there.

Unless they are mainstream they won't catch on. And that brings us back to what is mainstream. Consoles. Until they catch up with what goes on in a PC none of the above can be mainstream.
 
Nvidia would be quite silly to price the GK104 at sub-£300 prices; it may sell for over £350. I can't see them letting AMD get away with the profits from inflated prices on the 79xx series, so they will probably also want to rake in as much as they can from the first few months of sales.
 
GK104 could be another 8800GT: a mainstream card that beats AMD's top card for a bargain price. It has been done before. Rather than starting with the premium "8800GTX" and then releasing the budget "8800GT", NVidia appear to be launching in reverse order this time.

The 8800GTX destroyed AMD's 2x00 series utterly. The 8800GT rubbed salt into their wounds, kicked sand in their faces, and gave them a cracking wedgie. Much like the 9700 series did to NVidia back in 2002.
 
Nvidia would be quite silly to price the GK104 at sub-£300 prices; it may sell for over £350. I can't see them letting AMD get away with the profits from inflated prices on the 79xx series, so they will probably also want to rake in as much as they can from the first few months of sales.

The 7950 is rumoured to cost around £350. Pushed, it will be up there with a stock 7970. Thus, if their new card is on par with a 580 they won't have much choice: it will have to be cheaper.

You simply can't charge an inflated price for something that has competition. The 7970 has no competition, and people are moaning it's too expensive. Had Nvidia come out with a Kepler card now that does what the 7970 does, it would have been £600.

Nvidia tried overpricing their 470 and 480 cards. They were more expensive than their ATI counterparts, louder, hotter and more power-hungry (though the latter wasn't a huge problem), and guess what? They failed to sell.

The ATI cards of the time (5870 and 5850) weren't quite as fast, but they were cheaper, cooler, quieter and used less power.

Within six months the 470 and 480 were in bargain bins everywhere. I should know, I bought one :D

Nvidia are on the back foot now. I don't care what people say about how Kepler is going to be this, that or the other, it isn't here.

So it's vaporware as far as I'm concerned, and thus I couldn't really care how good it is supposed to be.

That's why I find it crazy that people are talking about price. I mean, sheesh, at least with the 7970 we were given some charts. Real or fake, it was better than thin air. Turns out some of the charts were actually quite real.

Nvidia? People are talking about nothing. Nvidia have not released anything official at all; it's all figments ATM.

When I see how it performs I will then be curious about the price. But as I already pointed out, *IF* it can't keep pace with a 7970 I will find it rather insipid and won't give two craps what it costs.
 