• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

Are next-gen high-end GPUs going to be priced like the Titan?

The Titan's premium pricing doesn't really matter if it stays a standalone one-off... what's worrying is if both companies start to price their "high-end" cards in that range, and make average-performing cards (a pitiful 10-20% improvement over the previous gen) the norm for the "normal" price range.

People always say the performance improvement tanking over these last two gens (compared to the gains of previous gens, from the 8800 through the HD 5000 generations) is down to "technical limitations" or "games don't need that much power because they're held back by consoles", but now that I look at the Titan and games like Crysis 3... it's a double facepalm.

Call me sceptical, but I somewhat think the huge drop in performance gains over these two gens (compared to previous gens) isn't because both camps can't design and make faster GPUs (the Titan proves they can), but because both are trying to push multi-GPU as mainstream (despite the many un-ironed-out issues such as driver support, performance scaling, micro-stuttering, etc.) for the sake of shifting more units. Because of this, rather than being able to get a £500 single-GPU card that can run Crysis 3 at max settings at a constant 45-60fps at 1900 res, people with that budget have no choice but to rely on the not-so-reliable (compared to a single GPU) multi-GPU route, getting something like a pair of 7950s, together with the cost of more heat and higher power consumption...

Don't get me wrong, I do agree that CF and SLI have improved considerably over recent years, but until the day they become as reliable as a single GPU and scale to 100% in all games, I'll still somewhat see them as "unfinished products put onto the shelves".
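To put the scaling complaint in numbers, here's a minimal sketch of why sub-100% scaling matters; the FPS and scaling figures are invented purely for illustration:

```python
# Illustrative only: the FPS and scaling figures are invented, not measured.
def multi_gpu_fps(single_fps: float, scaling: float, gpus: int = 2) -> float:
    """Effective FPS when each extra GPU contributes `scaling` of one card."""
    return single_fps * (1 + scaling * (gpus - 1))

base = 40.0                          # hypothetical single-card FPS
print(multi_gpu_fps(base, 1.0))      # 80.0 -> the ideal 100% scaling
print(multi_gpu_fps(base, 0.7))      # 68.0 -> more typical CF/SLI scaling
print(multi_gpu_fps(base, 0.0))      # 40.0 -> game with no CF/SLI profile
```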
 

Exactly the reason I'm buying a Titan.. :p

I'm fairly happy with my purchase, it's still faster than a 680 or 7970. The price is mental, but then again.. So am I! :D

I think the Titan is more of an investment. I won't need to upgrade for at least a couple of HW cycles, especially if NV and AMD keep up their tradition of '15% performance increase'. It will take three generations of new nVidia cards to breeze past the Titan.
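For what it's worth, that 'three generations' figure roughly checks out if you compound the gains; a quick sketch, assuming 15% per generation and a ~50% Titan lead over the 680 (both ballpark assumptions, not measurements):

```python
# Back-of-the-envelope compounding; both figures below are assumptions.
per_gen_gain = 0.15   # the '15% performance increase' tradition
titan_lead = 1.50     # assumed Titan lead over the current x80 card

cumulative, gens = 1.0, 0
while cumulative < titan_lead:
    cumulative *= 1 + per_gen_gain
    gens += 1
print(gens, round(cumulative, 2))  # 3 generations -> ~1.52x
```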
 
That IS censorship though; you're suggesting it by saying it's not a great idea to let people say the Titan isn't worth its price and that people shouldn't buy it.

Saying it's not a good idea means you think OcUK shouldn't allow it because it's inconvenient.

Brilliant strawman argument, but not one that I'm going to respond to.
 
@Rusty

1) In the thread I said it was 5-10%, but now I'm informed that I'm wrong again. No offence mate, but you're being selective to the point that you refuse to be educated on the topic, which comes across as 'I'm a user-data man - if it doesn't go against my way of thinking'.

I realise it's a shader test, but it was thrown in because of the shader-intensive titles that have arrived one after another in recent months - and there will only be more. The 5% ballpark figure is off the mark; I'm only educating the masses. :D

2) Are the early performance figures, regarding the topic discussed with andy, less valid because they're not 1080p?

No, they are valid; they simply dispel the claim that the 680 was only bested on the 12.11s. That's 100% false - it was beaten at 1080p back in June - but I shouldn't talk about that, should I?

3) People who enjoy a debate will read it. You're being selective over user data; you asked me to 'prove it', then ignored it when it was proved.

Why would I go to the trouble of putting up a post and results on a public forum, be told I'm wrong, and then discuss my correct results offline? :confused:

You can't be bothered posting more about it on the forum, but you wanted to take it private and discuss it further? :confused:

No offence, but that's a ridiculous suggestion mate; that's telling me your one of these stubborn guys who commit and won't change their mind regardless, which isn't what this is about.

It's not for your benefit, it's for the benefit of the folks looking for honest advice.

Greg, iirc, withdrew his results because he wrongly got stick for daring to post Lightning results as stock Lightning results, which certainly wasn't misleading if you can read. :(

Now, if someone has differing views, ALXAndy's name gets thrown into the fray, because people don't like what they read from andy - it can cut to the bone for both AMD and Nvidia fanboys.

Is that the norm now? :(

I realise it's meant as humour, but it's misleading when it gets taken the wrong way. Andy is his own man with his own opinions, nothing to do with what I post.

Finally, I didn't take it off topic; it was in response to andy, who then said 'it's taken the best part of a year to really get the most it's capable of out of it' - two totally different ways of saying things.
 
That's not the way the market works though. If Nvidia and AMD tried to sell the next-gen cards at Titan prices they would lose money; they know that keeping prices low and volumes high makes them far more profit than vice versa.
It would be business suicide to try to sell the next-gen cards for Titan prices.

I disagree. It is how the market works.

They would only lose money if not enough people bought the Titan - or, in my example, a 780 at the same price as the Titan.

Simple logic.

Like I said, if more people buy it at 800, then it's more likely future cards will sell at that price.

I don't think you understand this very simple, straightforward logic.

Nvidia and AMD are in it for the money. They exist as businesses, i.e. to get rich, and if they could get away with selling gaming GPUs for 2k to a nice, healthy volume of buyers, they would. You really think they wouldn't? You really think they would never try to sell something at the best price they can get away with?

They are in it for the money. Remember that.

Supply and demand.
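To put that price/volume argument in numbers, here's a toy revenue comparison; every price and unit count below is invented for illustration, not real sales data:

```python
# Toy figures, invented purely to illustrate the price/volume trade-off.
def revenue(price: float, units: int) -> float:
    return price * units

mass_market = revenue(500, 100_000)   # lower price, higher volume
premium = revenue(800, 70_000)        # Titan-style price, fewer buyers
print(mass_market, premium)           # 50000000.0 vs 56000000.0

# On these numbers the premium line earns more; it only loses if volume
# drops below 50_000_000 / 800 = 62,500 units. The whole argument is about
# which side of that break-even the market actually lands on.
```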
 
Exactly the reason I'm buying a Titan.. :p

I'm fairly happy with my purchase, it's still faster than a 680 or 7970. The price is mental, but then again.. So am I! :D

I think the Titan is more of an investment. I won't need to upgrade for at least a couple of HW cycles, especially if NV and AMD keep up their tradition of '15% performance increase'. It will take three generations of new nVidia cards to breeze past the Titan.

I expect the 780 to be as good as or faster than a Titan, else we'll have another issue: release the 780 with a small performance bump for 600, then six months later release a Titan Mk2 for 800, i.e. the real GTX 780?

See how another issue can occur, not just with pricing but with forcing people to either spend 800 for a proper high-end next-gen card or 500-600 for a small increase.
 
@rusty

1) In the thread I said it was 5-10%, but now I'm informed that I'm wrong again. No offence mate, but you're being selective to the point that you refuse to be educated on the topic, which comes across as 'I'm a user-data man - if it doesn't go against my way of thinking'.

I realise it's a shader test, but it was thrown in because of the shader-intensive titles that have arrived one after another in recent months - and there will only be more. The 5% ballpark figure is off the mark; I'm only educating the masses. :D

Still waiting for a game benchmark showing it.

2) Are the early performance figures, regarding the topic discussed with andy, less valid because they're not 1080p?

No, they are valid; they simply dispel the claim that the 680 was only bested on the 12.11s. That's 100% false - it was beaten at 1080p back in June - but I shouldn't talk about that, should I?

Wrong. Gregster's and my own benches said otherwise. I thought we were discussing 1080p, as that's what most people game at. If we're discussing higher resolutions then it's not even a debate, is it really? Of course the 7970 was faster/even at an earlier point in time.

3) People who enjoy a debate will read it. You're being selective over user data; you asked me to 'prove it', then ignored it when it was proved.

Well, I'm not. I was going by the OcUK benchmark thread rather than a benchmark which may or may not correlate with game performance. I'm waiting for proper proof; I'm not just going to take your word for it.

You can't be bothered posting more about it on the forum, but you wanted to take it private and discuss it further? :confused:

To stop thread soiling, like this.

No offence, but that's a ridiculous suggestion mate; that's telling me your one of these stubborn guys who commit and won't change their mind regardless, which isn't what this is about.

See above.

p.s. it's "you're" :p :D

Greg, iirc, withdrew his results because he wrongly got stick for daring to post Lightning results as stock Lightning results, which certainly wasn't misleading if you can read. :(

No, that was just humbug nitpicking.

Then somebody else posted 680 results that weren't as good as Greg's, and a few 7900 owners started implying that he was making them up, even though both my results and Gregster's were pretty much in sync based on the clock speeds used.

I realise it's meant as humour *snip*

Exactly, so stop whinging :D.

This just looks like: "now that I can't argue about what's faster any more, I still need to argue, so I'll argue about what was faster at some undefined point in the past".
 
Nope, it's a case of posting facts, not assumptions based on wrong/misleading information from being too stubborn to accept facts, or because it suits to soil a GPU maker. ;)

Most here will know that I was never in the habit of posting that the 7970 was faster, due to the fact that they perform almost the same, which I pointed out long, long ago.

:)
 
Nope, it's a case of posting facts, not assumptions based on wrong/misleading information because it suits to soil a GPU maker, to stubborn to accept fact. ;)

:)

too* stubborn :p

As I said, I'm not just going to take your word for it. You haven't posted any facts at all, really, other than the literal point that a 7970 is ~10% quicker in your benchmark, clock for clock. My problem is that you've posted a benchmark, and a conclusion drawn from it which doesn't support your original hypothesis.

It's like saying, for example, that because Heaven 3.0 scales well with a memory overclock, you'll see the same increase in performance in games. That's a fallacy, really.

I'll wait for game benchmarks showing the difference, verified by a few different sources. It's not stubbornness, it's healthy scepticism. You might be right - that's not the point here - and it wasn't the point last time you "wall of text"-ed me. When you see past this point you may actually start to see. ;)
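For clarity, the clock-for-clock normalisation we keep arguing over is simple enough to sketch; the FPS and clock figures below are placeholders, not anyone's actual results:

```python
# Placeholder numbers, purely to show the clock-for-clock comparison.
def perf_per_mhz(avg_fps: float, core_mhz: float) -> float:
    return avg_fps / core_mhz

hd7970 = perf_per_mhz(avg_fps=62.0, core_mhz=1100)  # hypothetical run
gtx680 = perf_per_mhz(avg_fps=58.0, core_mhz=1100)  # hypothetical run

gap = (hd7970 / gtx680 - 1) * 100
print(f"clock-for-clock gap: {gap:.1f}%")  # ~6.9% on these numbers

# The caveat above still applies: one synthetic benchmark can show a gap
# that games never reproduce, so verify across several titles and sources.
```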
 
As I said, I'm not just going to take your word for it. You haven't posted any facts at all, really, other than the literal point that a 7970 is ~10% quicker in your benchmark, clock for clock. My problem is that you've posted a benchmark, and a conclusion drawn from it which doesn't support your original hypothesis.

The difference is between 5-10%, game dependent, clock for clock.

And I'm the stubborn one?

Present or future, the 5-10% potential is there regardless; it's not a 5% faster/slower ballpark figure.
 
And I'm the stubborn one?

Errr, I didn't say that. That's been your buzzword all morning, not mine. Wrong side of bed this morning? :D

Present or future, the 5-10% potential is there regardless; it's not a 5% faster/slower ballpark figure.

If you say so. I'm still waiting for the proper evidence for this claim.

FYI:

PGI and I ran BF3 clock for clock prior to 12.11 and then again after. BF3 was faster on the 680 until the 12.11s. This was at the most-used resolution of 1080p.

The 12.11 drivers were the only time the 7970 pulled ahead of the 680.
 
It slowly clicked that it was the grammar-nazi queen in you. :D

Clock for clock on identical setups?

Erm, no, they were not; they are fallible results mate. I'm not questioning the results, but they're fallible nonetheless.

Whereas my data is tested on the same system/setup = proper results with no agenda, simply backing up my findings because I could and informing the masses against wrong assumptions, as that is all you have.

It's a pointless discussion with you, rusty.

Whether you think it's 5% or I've shown there's potential for up to 10%, folks can take either of us at our word - except one of us has properly compared both over months and given evidence, while the other hasn't compared anything at all and is simply going by hearsay/gullible assumptions/not wanting to admit genuine data. ;)

:)
 
Whereas my data is tested on the same system/setup = proper results with no agenda, simply backing up my findings because I could and informing the masses against wrong assumptions.

It's a pointless discussion with you, rusty; folks can take either of us at our word - except one of us has properly compared both over months and given evidence, while the other hasn't compared anything at all and is simply going by hearsay/gullible assumptions/not wanting to admit genuine data. ;)

:)

Well... it's just that the 680s were faster until 12.11, so you're wrong on this point. I don't even know why you're so bothered about it that you keep chirping on in multiple threads; what does it really matter now? Benchmarks run by people on this very forum show this. People ran these tests because of the inconsistent nature of review sites and the lack of overclocked comparisons on the latest drivers.

On your other point: benchmark != game performance

See Heaven 3.0 comparison above:

You haven't posted any facts at all, really, other than the literal point that a 7970 is ~10% quicker in your benchmark, clock for clock. My problem is that you've posted a benchmark, and a conclusion drawn from it which doesn't support your original hypothesis.

It's like saying, for example, that because Heaven 3.0 scales well with a memory overclock, you'll see the same increase in performance in games. That's a fallacy, really.

Not sure why I need to repeat myself.
 
It matters because it paints AMD in a bad light - the norm everywhere - and that's why it matters to counter misinformation.

I'm not wrong in the slightest; it's there for everyone to dig out should they feel the need. The reviews all said as much with the GHz edition paper launch in June.

Benchmark threads are all well and good, but how many of the results provided are full of artifacts from suicide runs, or from rigs sitting out the back door in the frost?
 
I'm not wrong in the slightest; it's there for everyone to dig out should they feel the need. The reviews all said as much with the GHz edition paper launch in June.

Benchmark threads are all well and good, but how many of the results provided are full of artifacts from suicide runs, or from rigs sitting out the back door in the frost?

lol - yes you are.

Gregster has confirmed it with his results. As I said, mine were very similar, but I didn't post them as they were on oldish drivers, and as I was running SLI/multi-screen at the time it was a pain in the arse to disable and re-test. I can't find them now in the plethora of text files in my benchmark folder, so I don't know what's what. :D

We're talking about overclocked results on the latest drivers (one before 12.11 and then post 12.11). The paper launch of the GHz editions has nothing to do with this. It's irrelevant to what we're discussing.

You're now making strange speculative points based on your comment about the 680/7970 being disproved. We went through all this with VRAM, when even Neil came in, having tested multiple 2GB/4GB setups, and said you were wrong, but you still dug in and it ended up as "if I'm not right now, I'll be right at some undefined point in the future".

No problem with the 5-10% thing. As I say, you may be right, you may not. There isn't any proper evidence as yet.
 
I'm discussing there being nothing in it when both cards were overclocked, more or less from launch, going by the info from the Nvidia-favouring (fact: sponsored by Galaxy) [H]; win some, lose some.

You now have no problem with the 5-10% thing, but I may be wrong; you couldn't make it up, bud.



What is alarming is the attempt to deflect that argument away (a classic move when things don't go your way) by bringing up the 2GB/4GB VRAM debate - which, I may add, coincides with greg openly admitting to running out of VRAM in a good few posts here at OcUK. ;)

Never saw the need to bring it up, but there you go, I'll bite and humour you.

Well, the undefined point in the future wasn't too long a wait, was it?

Hitman is one game that is extremely sensitive to VRAM capacity. We are seeing 2GB be an absolute bottleneck for the game, and 3GB not being enough either.

Hmm, put cards together in SLI/CrossFire and Bob's your uncle, Aunt Fanny is left with a stuttering mess because she believed 2GB would last - it's not even been a year! ;)
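For a rough sense of why resolution chews through VRAM, here's a back-of-the-envelope render-target calculation; it's deliberately simplified, since real games stack textures, shadow maps and other buffers on top (which is where most of the 2GB/3GB actually goes):

```python
# Simplified render-target maths; treat these numbers as a floor, since
# textures and shadow maps dominate real-world VRAM usage.
def framebuffer_mb(width: int, height: int, bytes_per_px: int = 4,
                   buffers: int = 3, msaa: int = 1) -> float:
    return width * height * bytes_per_px * buffers * msaa / 1024**2

print(round(framebuffer_mb(1920, 1080)))          # ~24 MB at 1080p
print(round(framebuffer_mb(5760, 1080)))          # ~71 MB, triple-screen
print(round(framebuffer_mb(5760, 1080, msaa=4)))  # ~285 MB with 4x MSAA
```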

Of course you will portray me as being wrong, yada yada yada....

... my hands are firmly in my pockets; no need to wave them about over my head. :D
 
I agree with the 5-10% clock-for-clock thing. It's as little as 5% and up to 10%, depending on the game involved and whether it's making good use of the shaders or compute units, etc.

What is the other point being discussed here? Once the 7970 GHz Edition came out in June (was it June?) it took the top spot away from the 680. There should be no doubt about that. There might not have been much in it, but it was definitely faster in the majority of titles. Plenty of reviews confirm that.

Are we talking about a stock 7970 here? Because I'll agree that didn't catch up until 12.11. But then the 680 was released after the 7970 and given a boost clock to make it faster, just the same as the 7970 GHz Edition was given a boost to enable it to beat the 680 with boost. Apologies if I have my wires crossed about what's being debated.
 
I'm discussing there being nothing in it when both cards were overclocked, more or less from launch, going by the info from the Nvidia-favouring (fact: sponsored by Galaxy) [H]; win some, lose some.

Yes, you're right, it was close, but that isn't the problem. The problem was your incorrect absolutist statements.

You now have no problem with the 5-10% thing, but I may be wrong; you couldn't make it up, bud.

Don't be simple, lol. It's quite obvious what that means: my concern has been with the conclusion you've drawn based on the evidence used. It potentially has multiple holes in it, none of which you've addressed.

What is alarming is the attempt to deflect that argument away (a classic move when things don't go your way) by bringing up the 2GB/4GB VRAM debate - which, I may add, coincides with greg openly admitting to running out of VRAM in a good few posts here at OcUK. ;)

Well, the undefined point in the future wasn't too long a wait, was it?

lmao - it's not deflection, it's just another example of you digging in contrary to what others are telling you when you're wrong.

If you actually read Greg's posts you'll see that he didn't have acceptable FPS at the points where he was running out of VRAM anyway. That's different from just running out of VRAM. I'm sorry you can't see the difference. :(

Are we talking about a stock 7970 here?

Nope - the 7970/680 while overclocked.

Everything else you said I can agree with.
 
PGI and I tested Lightnings. Remember, it was a clock-for-clock test which a few of us, including me, wanted to see. Hopefully Paul will show the chart, but I'm not fussed either way.

As for VRAM, it wouldn't matter if I had the 4GB cards, because they simply can't manage decent/acceptable frame rates at 5760*1080 with all the bells and whistles on.

Whatever way you look at it now, the 7970 GHz Edition is faster than the 680. Clock for clock, the 7970 is faster than the 680. Two 7950s are faster than two 680s at 5760*1080.

Anyways, let's get back on topic and have a group hug :p
 