
HD 7000-series GPUs are reportedly scheduled to make their entrance on December 5.


A completely new, superior architecture, more scope to offload CPU tasks onto the GPU in future, and a vast improvement in GPU power, likely to be 70%+. Were you expecting more?

What exactly?

I can't help but think "AMD vs. non power of 2 buses". :p

Why? A 512-bit bus would drastically increase cost, have more bandwidth than they need and require 4GB of memory per card, further increasing cost; or, for the same RRP, it would mean selling the cores at a lower price so AIBs can spend more on the memory.

It's completely unfeasible to stay with power-of-2 memory buses: 512-bit, then 1024-bit, then 2048-bit? It makes no logical sense. 128-bit for the lower mid-range, 256-bit for the upper mid-range, 384-bit for the high end.
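As a rough sketch of that cost argument: bandwidth and the minimum sensible memory size both scale with bus width, which is why a 512-bit bus forces expensive 4GB cards. The figures below (5.5 GT/s effective GDDR5, 2 Gbit chips, one chip per 32-bit channel) are illustrative assumptions, not confirmed specs:

```python
# Rough sketch: bandwidth and minimum memory both scale with bus width.
# All figures are illustrative assumptions.

GDDR5_GTPS = 5.5   # assumed effective GDDR5 transfer rate, GT/s
CHIP_BITS = 32     # each GDDR5 chip provides a 32-bit interface
CHIP_GBIT = 2      # assumed 2 Gbit density per chip

def bus_stats(bus_width_bits):
    bandwidth_gbs = bus_width_bits / 8 * GDDR5_GTPS  # GB/s at the assumed rate
    chips = bus_width_bits // CHIP_BITS              # one chip per 32-bit channel
    min_mem_gb = chips * CHIP_GBIT / 8               # GB (no clamshell mode)
    return bandwidth_gbs, chips, min_mem_gb

for width in (128, 256, 384, 512):
    bw, chips, mem = bus_stats(width)
    print(f"{width:3d}-bit: {bw:5.1f} GB/s, {chips:2d} chips, {mem:.1f} GB minimum")
```

At those assumed densities a 512-bit card cannot ship with less than 4GB, which is exactly the cost problem described above.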
 
Well more, just like my post said.

More shaders, more texture units, more clock speed etc etc.

GCN shaders aren't in any way comparable to VLIW4 ones. A GPU with 1536 VLIW4 shaders could be slower than a "random architecture 5" GPU with 8 shaders, or another GPU with 12400 shaders could be slower than a 6990.

GCN is, well, we'll have to see how much more efficient, but it's closer to Nvidia insofar as it's not really VLIW anymore.

Worst case with VLIW4, only 1/4 of all shaders were working on an instruction per clock; that was an efficiency increase over VLIW5, whose worst case, unsurprisingly, was 1/5. If you look at reviews, the 6970 is sometimes way out ahead of the 5870, other times right on top of it, but even then the minimums are usually anything up to 25-30% faster than the 5870's, and the architecture change is why.

Best case, VLIW4 was WORSE than VLIW5, yet it is almost never slower and is often 20-30% ahead on maximums and minimums with fewer shaders; that's purely efficiency.

Now GCN essentially has fewer shaders, but the proportion of clocks on which it fills all 4 shaders per cluster should be MASSIVELY higher. I.e. VLIW4 was probably averaging close to 2-2.5 instructions per 4-shader cluster, while GCN should be pretty much 100% used every clock, or that's the intention.

In theory that means a Cayman, if it could fill every shader every clock, could gain anything from 40-50% with the same shader count; that is what GCN is. So circa 2000 GCN shaders SHOULD work roughly as fast as 3000 Cayman shaders ;)

It's a HUGE increase in power we're talking about here, HUGE.
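The utilisation arithmetic in that post can be sketched as a back-of-envelope calculation. The 2.5-of-4 fill rate and the 2048-shader count are the post's guesses, used here purely for illustration:

```python
# Back-of-envelope: effective throughput = shader count x average slot
# utilisation. A VLIW4 cluster has 4 slots but rarely fills them all;
# GCN aims to keep every shader busy every clock. Figures are guesses.

def effective_rate(avg_slots_filled, slots_per_cluster=4):
    """Fraction of peak a cluster actually delivers on average."""
    return avg_slots_filled / slots_per_cluster

vliw4_fill = 2.5   # guess: ~2-2.5 instructions per 4-wide VLIW4 cluster
gcn_fill = 4.0     # GCN's goal: all 4 lanes busy every clock

gcn_shaders = 2048  # assumed "circa 2000" shader GCN part
equivalent_cayman = gcn_shaders * effective_rate(gcn_fill) / effective_rate(vliw4_fill)
print(f"{gcn_shaders} GCN shaders ~ {equivalent_cayman:.0f} Cayman-class shaders")
```

That lands in the same ballpark as the "circa 2000 GCN shaders working like 3000 Cayman shaders" claim above.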
 

All theory and speculation.

Look what AMD said about Bulldozer for the past god knows how many years it's been coming.
 

Tosh. AMD told us what Bulldozer was; most people knew Bulldozer wasn't going to blow away the 2600K, and anyone expecting it to was mental.

Bulldozer wasn't speculation, it was a known architecture, and it was maybe 5-10% slower than people expected due to a few cache issues, plus a little slower in single-thread due to scheduling problems, the latter of which was speculated about well before launch.

As with GCN, it's not speculation, it's known; an architecture simply works how it says it works. That's how life is: it's 0s and 1s and VERY predictable. It's pretty easy to predict roughly where performance will be; down to the last percent is almost impossible because of drivers, exactly where clock speeds end up on shipping products, and any bottleneck we don't know about.

The architecture speaks for itself. There is a DISTINCT difference between a VLIW architecture DESIGNED to use UP TO 4 or 5 shaders per cluster, where it's a known issue that it won't always fill them (that was the design), and this architecture, which simply isn't like that: each shader is essentially individual and can be accessed independently.

There is no guessing here: Cayman and GCN aren't close as architectures, the shaders aren't close, and there isn't even the possibility that a ~2000-shader GCN will perform around the level of a 2000-shader Cayman.
 
While some of the details will probably be a little different come release, what DM is saying is pretty much what AMD has been working towards. How successful they will be we won't know till nearer the time, but as far as the idea behind it goes he's correct.
 

That's TOSH. You are claiming a performance increase which must be speculation and theory, as no one outside the developers etc. knows anything about it. Also, AMD made many claims about Bulldozer through the years and none of them came true, some to do with release dates, some with performance claims against existing AMD and competing products.

I'll wait for real reviews and tests, not a 70%+ figure pulled out of the air.
 
Not true; many people are still running at 1080p, so they might be interested in a slightly faster, cooler, quieter 6900-series card for less money :D

Not if it isn't much better than my GTX 560 Tis.

I'll be wanting Keplers though, I like PhysX now.
 

Firstly, I didn't claim anything; I'm, as you said, speculating. It's an educated guess, and a very good one.

What have AMD's claims got to do with anything? You're mixing up 15 different things which aren't comparable and making a daft claim.

AMD PR didn't make many claims about Bulldozer at all, except that it was a server chip first and foremost, designed for throughput. 90% of the performance claims came from forum posters making stupid guesses out of thin air, something I did not do.

You can tell the difference. "Raven: it's 40% faster than a 580 GTX because my friend's, cousin's, barber's, sister's, boyfriend's father had a premonition" is BS. "DM: the 6970 will be at most 15% bigger than a 5870 because they won't go over 400mm2, as it's mental to do so; that puts performance somewhere in the 25-30% improvement range IF it's 15% bigger, a bit less if it's not quite that big." I posted that 5-6 months before the release, and it was patently obvious to anyone with any basic knowledge of the subject area.

Even PR marketing doesn't matter: they released info on the architecture, cold hard facts, and the PR spin was irrelevant. You could LOOK AT THE ARCHITECTURE INFO and work out where it would perform, and most people who did that got it VERY close. There were early clues, very early clues, like the reduction to 2 integer pipes (each MUCH faster than the old ones, but 33% fewer of them) and the x86 decoder being unable to provide enough instructions per clock, and all of this led to some very good guesses that were almost spot on. These were things I posted on 6 months before Bulldozer was released; Bulldozer was only a surprise, in not being 12 times as fast as a 2600K, to those who couldn't read.

This is what you keep missing: AMD officially disclosed the architecture for GCN as well. It's a KNOWN quantity, we can see EXACTLY how it works, and we can make VERY good predictions based on it that aren't out of thin air.

Stop talking about bold claims, because they're irrelevant. It was clear where Fermi would be performance-wise based on the architecture, and it was clear when it would be available despite six months of hot air from Nvidia lying about it.

PR has nothing to do with this: cold hard facts, and educated guesses based on those FACTS. Not educated guesses based on PR guff, which isn't fact, nor based on info from a random friend of a friend.
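The Bulldozer front-end point above (fewer, faster integer pipes that the decoder struggles to feed) is essentially a bottleneck argument: sustained throughput is capped by the slowest stage. A toy model, with purely illustrative numbers rather than real Bulldozer specifications:

```python
# Toy bottleneck model: sustained instructions per clock are capped by the
# slowest stage, so faster pipes don't help if decode can't feed them.
# All numbers are illustrative, not real chip specifications.

def sustained_ipc(decode_width, pipes, per_pipe_ipc):
    """IPC = min(what decode can feed, what the pipes can execute)."""
    return min(decode_width, pipes * per_pipe_ipc)

old = sustained_ipc(decode_width=3, pipes=3, per_pipe_ipc=1.0)  # three slower pipes
new = sustained_ipc(decode_width=2, pipes=2, per_pipe_ipc=1.5)  # 33% fewer, faster pipes

print(old, new)  # the new design ends up decode-bound despite faster pipes
```

This is the kind of reasoning that let people bound Bulldozer's single-thread performance from the disclosed architecture alone, before any benchmarks existed.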
 
wall of random text

AMD PR did make performance claims about Bulldozer; they are out there from months/years ago (I'm sure you can dig them up too), and thus I take anything said about a new architecture from ATI/AMD with a pinch of salt.



You can tell the difference. "Raven: it's 40% faster than a 580 GTX because my friend's, cousin's, barber's, sister's, boyfriend's father had a premonition" is BS. "DM: the 6970 will be at most 15% bigger than a 5870 because they won't go over 400mm2, as it's mental to do so; that puts performance somewhere in the 25-30% improvement range IF it's 15% bigger, a bit less if it's not quite that big." I posted that 5-6 months before the release, and it was patently obvious to anyone with any basic knowledge of the subject area.

What's this got to do with the 7 series? Or are you just wanting to pad out your post a bit?


Stop talking about bold claims, because they're irrelevant. It was clear where Fermi would be performance-wise based on the architecture, and it was clear when it would be available despite six months of hot air from Nvidia lying about it.

Why are you talking about Fermi? What's that got to do with the ATI 7 series, or anything I have said? I think you're even confusing yourself now.

and make VERY good predictions based on this that aren't out of thin air

Guessing, then.
 
FFS, you're just trolling now, but I'm bored.

You brought up PR people; they are irrelevant. I highlighted for you that both AMD and Nvidia have made claims about products, and neither changed the fact that looking at the architecture alone, ignoring all claims, gave you an incredibly good idea of where it would perform.

You brought up PR making claims out of nowhere. The SAME architecture diagram can have a PR guy next to it saying it's faster than light, or slow as a turtle; the architecture is the architecture and that doesn't change.

The architecture info on Bulldozer was complete and gave everyone an excellent idea of where performance would be, why, and where it would be slow.

I notice how you left in some of my post when you wanted to contradict it, but put in "random wall of text" when you wanted it to appear I said something I didn't.

I didn't say they made no claims, I said they didn't make MANY claims, and of the ones they did make, most were true and accurate, just not to your liking, and heavily (often purposefully) misinterpreted.

Again, this architecture info isn't PR guff. It's not false or lies; it is what it is, cold hard fact.

Making an estimate of performance based on a PR guy saying it will be 70% faster, with no other info, is both PR guff and a complete guess. Estimating performance off the cold hard facts of what the architecture is, IF it's known (which it is), is neither guessing from thin air nor anything to do with PR.
 
more text going off topic


FFS, you're trolling,

Classy, swearing.

You brought up some random conversation between you and Raven (why? I don't know), and brought up some random TOSH about Fermi (why? I don't know), while making a claim about a 70% performance increase over current cards.

I will make it real simple for you: is your educated guess of 70%+ a guess, yes or no?
 


I'll make it real simple for you: where did I say it wasn't a guess, and where did you claim that a guess was always wrong, or bad, or couldn't be good?


For the cheap seats, we'll go back to your first couple of posts, claiming everything guessed is speculation and theory, where you loosely linked in info about AMD PR people being untrustworthy and said this is why Bulldozer was a surprise.

So, what did I say after you brought up PR, theory, how you can't predict performance, AND that no one predicted Bulldozer's performance?

Firstly, I'll point out that I was talking about theory, PR, the prediction of performance of any architecture, and the predictions for Bulldozer... wow, all subjects YOU came up with, and all relevant.

Me and Raven had two different "guesses": one was based on knowing what the architecture was and the die-size limits of 40nm for AMD, and the other was based on nothing. One was accurate, almost bang on; the other wasn't even close.

So hmm, you bring up guesses, and I point out that guesses based on cold hard fact and ones pulled out of your arse aren't comparable.

Why Fermi? Again, YOU brought up PR and how it can't be used to predict anything. Once again, for Fermi AND Bulldozer, I pointed out how PR means nothing, but the cold hard specs of the architecture could give you near-perfect predictions on performance. Again, not relevant, right?

Once again, my prediction is NOT based on general PR, it's based on the architecture. AMD have never once lied when releasing architecture notes, nor have Nvidia, Intel, nor any company I've heard of; they would lose all business if they did, as developers base all their software on these architectures, and being incorrect there would simply be suicide.

Based on the architecture it is fairly easy to predict performance VERY accurately.

Now please simply explain where, when you brought up EACH of these subjects first, me responding and explaining them to you was off topic.

Let me make another educated guess: you won't prove me wrong, you'll find one word I used that is somehow off topic, incorrectly quote things I've said while pretending you never said most of what you said, and, erm, then come up with an arbitrary question that has no relevance to the discussion?
 
Have to say my money is on DM; he's usually pretty close and does seem to use known facts to make educated guesses. It's not worth arguing about, as all will be revealed very shortly...
 