• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

Nvidia Kepler wins hands down.

bru


Soldato
Joined
21 Oct 2002
Posts
7,360
Location
kent
For me the most interesting thing with this upcoming card is what they are going to call it. We know it is supposed to be the 560 replacement, so do they use the 660 name, or the 680 because of its suggested performance?
If they go down the 660 route then it is going to be hard to justify the £300-400 price that would be expected given its suggested performance, but of course if they go for the 680 naming idea then that sort of price would seem understandable.
Equally confusing: if they go for the 660 name then they leave room for the higher-end cards we expect later in the year, but what of the current range? The 660 comes in with higher than 580 performance and possibly a similar price. I think I'm right in saying they stopped producing the 580/570s, but if the replacements are still 5-6+ months away then it makes me wonder how things are going to pan out. And if they go for the 680 naming idea, then where does that leave the faster cards to come?

All in all, things are slowly hotting up in the graphics card game, and personally I'm loving it.
 
Don
Joined
20 Feb 2006
Posts
5,228
Location
Leeds
^ that.

As I said earlier, I clearly remember when I bought my 5950 Ultra they were marketed around the $499 mark.

Thankfully I got mine in a pricing error for $145 :D

But yes.. This is what I meant earlier with my turkeys waiting for Christmas comment. If people really think that Nvidia are going to release a nice cheap fast GPU..

They've always charged more than AMD. In their eyes they have more to charge for, tbh: PhysX, CUDA and so on. Thing is, 2GB of RAM is downright stingy, especially the way things are headed. Looks like they're drip-feeding.

Releasing a 2GB card, IMO, is a mistake. It can be miles faster, but your die-hard AMD fans will still have an excuse, a reason, a "better", if the 7970 has more VRAM.

TBH though, none of this matters. The 7970 is here, in our hands (if we decided to buy one), so what's going to happen a month or two down the line doesn't matter right now.




They were rushing. I just don't understand it though, I don't. They revised the hot-potato 280 and 260 with a better cooler. I'm pretty certain it was a vapor chamber too, the same one they kitted the single-PCB 295 out with.

Why would you not use it again?

Stock coolers are so lame. I just don't get it. All they need to do is have a look around and the answers are all there. Look at MSI's coolers, Zalman's. Why would you ignore that?

To save money? I dunno man, it's kinda strange tbh. I'm not going to say that it was deliberate, because even a corporation as arrogant as Nvidia wouldn't be that cocky, would they?

It was the 460 that saved them. They even released their own 460 in Nvidia-branded packaging. I read that was a last desperate attempt to stay afloat: cut out the middleman and charge a fortune, desperately trying to recoup revenue.

The 460 should enter the hall of fame, IMO. It was a superb little card that completely reversed their fortunes. Sadly it was Fermi, and it came in an era where Fermi was a dirty word.




That also. See, I have a confession to make. After my rubbish 6970 I wanted to go back to Nvidia so bad. Nothing against AMD. We all like certain products for certain reasons, but when we associate one with aggro and crap it does get kind of hard not to be swayed in another direction.

But that was the thing. I wanted a GTX 580 with 3GB of RAM. My Fallout install has 6GB of custom textures. Even the 7970 struggles.

But there was just no way on earth I could justify spending more on a slower card that basically loses in every last test even on release drivers.

Yeah, they were more. The cheapest one came in stock guise but was woefully slower, so I was considering the GTX 580 Classified. Thing is, even with its nuts revved off it was still considerably slower.

Do you honestly believe Fermi was that bad?

Sure, it could have been improved, but I do not see it as the failure that you allude to.
 
Associate
Joined
24 Jun 2009
Posts
1,545
Location
London
More incidental rubbish, if you read ...




... As for the rest of your utterly illogical nonsense.

Your post only makes sense if adding more GPGPU features dramatically reduces gaming performance per watt... yet everything Nvidia have claimed about Kepler is that it's going to be a far more efficient GPGPU.

So what you're saying is that moving to a more GPGPU-oriented architecture would naturally reduce AMD's efficiency... but it wouldn't reduce Nvidia's efficiency to go further along that path, and me suggesting this previously was somehow wrong. You're suggesting one thing for AMD and the complete opposite for Nvidia...

You're basing this all on a story that suggests the GK104 is going to use 225W, which, if it's the same rating as ANY last-gen Fermi, means it will use at least 250W, probably more than that in FurMark... while at stock the 7970 uses closer to 200W than 225W... so how is this proof Nvidia have closed the performance-per-watt gap? If the story is to be believed, you can't claim they've closed this gap AT ALL.

Likewise, you don't know where the GK104 chip is: whether it's within 10% of its max realistic clocks and needs heavy voltage increases to gain more speed.

As you should know, the voltage required for a given clock speed increases exponentially; if GK104 is near the end of the "sensible" increase range at stock, while the 7970 is nowhere near it, then you could find an overclocked GK104 using 300W while gaining 10% speed, while a 7970 gains 30% speed hitting a similar power usage.

Nothing is known yet. The only thing we do know is that the 7970 is underclocked; it overclocks by a very decent margin, and while it certainly uses a lot more power, for a 30-35% increase in performance the extra power is fairly "safe". I.e. a 35% overclock on a 6970 required a HUGE voltage increase, power usage was through the roof, and those kinds of clocks only happened on LN2.

I will ignore the first half of your rant, which is entirely conjectural and is opinion stated as fact -- with the implication that, as it is your opinion, it is somehow worthy of the undue weight you demand it be accorded. Instead I will discredit you on the statements you made that are demonstrably false and indefensible.

First. There you go again confusing TDP with power draw. I thought we'd already been through this and schooled you on this. TDP is NOT power draw. So they cannot be substituted for each other.

And it's really time you stopped pretending you know anything about electronics, because you don't. So stop trying to talk about it like you do. I'd like to see you design a passive RLC filter before trying to talk authoritatively on specialised topics like microelectronics.

Voltage does NOT vary EXPONENTIALLY with clock frequency. Do you even know what exponential growth looks like? There are very, very few things in nature that actually grow exponentially. The only area where exponential growth is commonplace is in pure mathematics and theoretical computer science, where algorithmic complexity is exponential (complexity class EXPTIME) for a large subset of problems.

The majority of CMOS power dissipation occurs during switching, and switching power is determined by P = C*V^2*f, where C is the switched capacitance, V the supply voltage and f the clock frequency. Where is the exponential increase here?

Do you even know what exponential growth is? Because it is becoming abundantly clear you have no grasp of what exponential growth looks like. The largest growth in the preceding equation is power vs voltage, which is quadratic growth. Quadratic growth is a specific instance of polynomial growth. It is NOT exponential. Nowhere near it! Polynomial growth is far, far, FAR smaller than exponential growth. The other growth factors here are LINEAR, which is smaller still (very small in fact; there are few types of functions that grow more slowly, logarithmic growth being one of them). From the CMOS power equation you can also see that voltage and frequency in fact share an inverse quadratic relationship (frequency varies as the inverse square of voltage) if power and capacitive load are held constant.
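To make the point concrete, here's a quick sketch of the P = C*V^2*f relationship. The capacitance, voltage and frequency figures below are made-up illustrative values, not specs for any real chip; the point is only the scaling behaviour:

```python
# Dynamic (switching) CMOS power: P = C * V^2 * f
# All numbers below are illustrative, not real GPU figures.

def switching_power(c, v, f):
    """Power dissipated charging/discharging capacitance c at voltage v and frequency f."""
    return c * v ** 2 * f

base = switching_power(1e-9, 1.0, 1e9)       # 1 nF, 1.0 V, 1 GHz
double_v = switching_power(1e-9, 2.0, 1e9)   # double the voltage
double_f = switching_power(1e-9, 1.0, 2e9)   # double the frequency

print(double_v / base)  # 4.0 -> quadratic in voltage
print(double_f / base)  # 2.0 -> linear in frequency
```

Doubling the voltage quadruples the power; doubling the frequency merely doubles it. Neither is anywhere close to exponential.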

However, one does not even need to know a great deal of electronics to know that what you're saying about exponential voltage is complete bull****. All one needs to do is understand what an exponential function looks like, mathematically, and reason with the following argument:
"IF clock frequency and voltage did indeed have an exponential relationship then, beginning with the first low-frequency CMOS circuits, we'd need billions of volts to power CMOS ICs today, even if we assume nominal CMOS voltage was around 1-2 volts two decades ago."


Here's a bit of mathematical background on REAL exponential growth. An exponential function in asymptotic notation takes the form:
y = O(k^n)

Now, if we take base 10 for simplicity, then:
At n=1, O(k^n) = 10, which is actually quite small.
At n=10, O(k^n) = 10^10 = 10 000 000 000 (which is big, but not that big).
At n=100, O(k^n) = 10^100 (!), which is a number so vast that there have not been that many microseconds since the beginning of time!

This is what an exponential function looks like. That is how quickly it grows. Next time you throw around a word like 'exponential' make sure you know what you're talking about. Exponential growth is rarely encountered outside theoretical problems.
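If you want to see the gap for yourself, this little comparison prints quadratic n^2 next to exponential 10^n (the base and the sample values of n are arbitrary choices for illustration):

```python
# Quadratic (polynomial) vs exponential growth, side by side.
for n in (1, 5, 10, 20):
    print(n, n ** 2, 10 ** n)
# By n = 20 the quadratic has only reached 400, while the
# exponential is already at 10^20 = 100,000,000,000,000,000,000.
```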

To drive that point further, here are things that are not exponential but are popularly called exponential: bacterial growth. NOT exponential. A few biologists may call it such because they lack a proper mathematical background and biology is not a precise science, but bacterial growth is actually logistic growth (and is intensely studied in the form of the logistic map in the mathematical discipline of Nonlinear Dynamics & Chaos).
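For anyone curious, iterating the logistic map shows the difference: with a growth parameter above 1 the population climbs quickly at first, then levels off at a fixed point instead of running away like an exponential. The parameter r and the starting value here are arbitrary illustrative choices:

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n).
# Early on (x small) growth looks roughly exponential (~r^n),
# but it saturates at the fixed point 1 - 1/r instead of exploding.
r, x = 2.0, 0.01
for n in range(10):
    x = r * x * (1 - x)
print(round(x, 4))  # -> 0.5, the fixed point 1 - 1/r
```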


Now, seriously, quit talking through your rear end like you have the answers. You often talk about topics well beyond your grasp and act like an authority on them, often greatly oversimplifying complex topics -- bringing them down to your level -- and in other cases adding unnecessary detail to simple topics to make it seem like you know more and have more background on the topic, thus inflating your self-worth.
You may convince a few ignorant readers, but dial it down, as you're massively ignorant of anything scientific or mathematical. You make me laugh.
 
Last edited:
Soldato
OP
Joined
28 May 2007
Posts
10,071
Help me out here guys: the GK104 is the 660 and will probably be 2GB, so what's the name for the 780?

It's the gk112.

In Fermi it went gf102 = GTX 470/480, gf104 = GTX 460, gf112 = GTX 580/570, gf114 = 560 Ti/560. So as there is no supposed gk102, the high-end chip for the first Kepler run is gk104/GTX 680. I think gk102 was written off because there was no way to get it to market in time for it to be called a 6 series. So the full Kepler part is gk112, a little like the full Fermi part was gf112. Seems like good sense from Nvidia, as they are not making the same mistakes this time round that they did releasing the GTX 480.
 
Last edited:
Soldato
Joined
24 Jun 2004
Posts
10,977
Location
Manchester
Stuff about Charlie Demerjian

I'm sorry, was there a point in there somewhere? I must have missed it if there was. Unless you're saying that his correct predictions are genuine, but the things he gets wrong are "planted info"? :confused:

I had to laugh at this part though:

What unbiased people will realise is........

Seriously? You think of yourself as unbiased?


Your post only makes sense if adding more GPGPU features dramatically reduces gaming performance per watt... yet everything Nvidia have claimed about Kepler is that it's going to be a far more efficient GPGPU.

?!?!?!?!?!?!?!

Kepler will be the SECOND EVOLUTION of a general compute architecture... GCN is the first.

The mass of compute features (improved thread scheduling, larger cache, faster and more flexible interconnects, more access to shader-level memory, much higher geometry throughput, etc.) were introduced in GCN over the much more transistor-efficient (for gaming) VLIW architecture. The equivalent step for Nvidia was the GTX 480... Kepler is the second iteration of the architecture. As you say, power efficiency has been the focus of their entire development cycle, whereas for AMD it's been the creation of their first full compute architecture.

Nvidia have a natural advantage this generation - just as AMD did with the 5- and 6- series, where they were not supporting the overhead of a compute architecture. Things will level out eventually, possibly as soon as the second iteration of 28nm, but for now - from a purely technical point of view, the advantages are all with Nvidia. You should know by now, I only ever look at these things from the point of view of the technology involved...

AMD's biggest win this time around will have been getting their card to market first. If they can follow up with the 8-series soon afterwards, then they could negate quite a lot of the damage. But between Kepler's release and the 8-series AMD will suffer. It's a pity, since one side being dominant is always bad for the consumer, but unfortunately this is the way that things will unfold.


...But anyway, keep spouting all this nonsense. I'm sure you'll find another creative way to backtrack on all your comments when everything is released and performance is known.
 
Last edited:
Soldato
Joined
9 Nov 2009
Posts
24,845
Location
Planet Earth
To drive that point further, here are things that are not exponential but popularly called exponential: Bacterial growth. NOT exponential. A few biologists may call it such because they lack proper mathematical background and biology is not a precise science, but bacterial growth is actually logistic growth (and is intensely studied in the form of the logistic map in the mathematical discipline of Nonlinear Dynamics & Chaos).

Never heard anyone I know say that before. OTOH, I have seen people (not biologists) try to oversimplify biological systems for modelling purposes, which is cringeworthy.
 
Soldato
Joined
10 Feb 2007
Posts
3,435
According to rumour, speculation, and just plain guesswork.
GK100 is the top card that will replace the GTX580
GK104 is GTX560 replacement that is supposed to be faster than the current GTX580
GK110 (perhaps GK112) is the dual GPU (2x GK104 or 2x GK100) card that will be faster than the GTX590

Nobody really knows anything for sure at this stage.
 
Soldato
Joined
24 Jun 2004
Posts
10,977
Location
Manchester
Do you even know what exponential growth is?

I was going to make this point months ago, but I thought it might look a bit petty :p But anyway, since you already opened the gates...

Drunkenmaster uses the word 'exponentially' as a synonym for 'much bigger', rather than to indicate exponential growth. Yes, it's a bit cringeworthy to read, as are a lot of the things he posts, but DM doesn't have a background in science and technology, so mistakes like these become a lot more understandable -- as do a lot of his other misguided assumptions about how GPUs work, or how the design process operates.

I don't have a problem with this in principle... Not everyone is an engineer or scientist, and no-one is good at everything (I'm sure as **** not!). My problem with DM is that:

a) He is unwilling to recognise where he has gaps in understanding
b) When anyone points out these errors he will become very aggressive
c) He has an implicit assumption that he 'knows best' when it comes to anything he has an opinion on


I don't want to turn this into an attack post - it really isn't supposed to be. If anything it's a plea for DM to take a step back and look at what he DOES and DOES NOT understand - and please just don't be so aggressive. It's unpleasant, and there really isn't any need for it.

I think this picture sums up the situation nicely. Yes it's a joke, but a lot of truth is said in jest:





... I could add quite a number of drunkenmaster quotes to that list.
 
Soldato
Joined
9 Nov 2009
Posts
24,845
Location
Planet Earth
According to rumour, speculation, and just plain guesswork.
GK100 is the top card that will replace the GTX580
GK104 is GTX560 replacement that is supposed to be faster than the current GTX580
GK110 (perhaps GK112) is the dual GPU (2x GK104 or 2x GK100) card that will be faster than the GTX590

Nobody really knows anything for sure at this stage.

It might be the following:

GK110 is the GPU meant to be the competitor which beats the HD7970. It is based on an MCM with two smaller GK104 chips.
GK104 is the smaller chip which will power the midrange Nvidia cards. Slower than a GTX580.

Hence Nvidia only needs to use one chip for its higher end GPUs.

GK110 picture



Having said that the easiest explanation is that the GK110 is the HD7970 competitor.
 
Soldato
OP
Joined
28 May 2007
Posts
10,071
It might be the following:

GK110 is the GPU meant to be the competitor which beats the HD7970. It is based on an MCM with two smaller GK104 chips.
GK104 is the smaller chip which will power the midrange Nvidia cards. Slower than a GTX580.

Hence Nvidia only needs to use one chip for its higher end GPUs.

GK110 picture



Having said that the easiest explanation is that the GK110 is the HD7970 competitor.

Would this GK110, with 2 x GK104 in one chip, have to rely on some sort of SLI technology? If that were the case it might put some people off.
 
Soldato
Joined
14 Oct 2004
Posts
5,223
Location
location, location
It might be the following:

GK110 is the GPU meant to be the competitor which beats the HD7970. It is based on an MCM with two smaller GK104 chips

Now, I've said this, and used the original Intel Core 2 Quad as an analogy which actually comprised two dual core CPU dies in a single package, stating that GK110 is likely to take this guise.

DM, in his infinite wisdom, said I was talking nonsense about the Intel chips and therefore couldn't possibly be right about the GK110. I, on the other hand, know I'm right! :)



However....

Duff-Man said:
Biologically, Tomato is a fruit, not a vegetable
Oh balls, I've said that before! :p I'm hiking to the top of Mount Stupid :D
 
Last edited:
Soldato
Joined
9 Nov 2009
Posts
24,845
Location
Planet Earth
TBH, I think the GK110 is a single GPU and that the GK104 and GK110 rumours have cross-pollinated each other.

Now, I've said this, and used the original Intel Core 2 Quad as an analogy which actually comprised two dual core CPU dies in a single package, stating that GK110 is likely to take this guise.

DM, in his infinite wisdom, said I was talking nonsense about the Intel chips and therefore couldn't possibly be right about the GK110. I, on the other hand, know I'm right! :)

Sorry did not see your post before my post!!

It is a possibility, but TBH it might be that there are two separate GPUs: the GK110, which is the high-end GPU, and the GK104, which is the slower one competing with the HD7870 and HD7890.
 
Last edited: