680 was supposed to be the 660Ti?

I never said it went downhill in performance, I said in bus width :\ and spoffle just said the Titan cost around the same as the 580, so going off what I've read, the 780 should have been the 680.

I'm not making stuff up, all of this is off "the 680 should have been the 660Ti", like the thread says. It's not me "telling you" this and that. I'm asking a question and then telling you why I think this; I'm not stating anything other than what the cards are.
 
Based on spec differences, the PCB is also smaller than the 7970's, is it not?

Uses less power too?

Smaller die, fewer shaders, etc.

The only things that could make it more expensive are CUDA and PhysX?

Of course I don't understand, I have no idea, and I bet you don't either really, unless you work in manufacturing? I take a look at the cards and what's on them, then say what has more of this and that, so that costs more. It works 9 times out of 10 doing it this way.

Oh lawd, so you just guess?

FACEPALM

The PCB of the 680 is barely any smaller than the 7970.

The die is about 10% larger and, incidentally, it's around that much faster.

Oh lawd, you're comparing shader counts? It doesn't work like that; they are two very different architectures, so the shader counts aren't directly comparable.

Just because you don't have a clue, don't try and drag others down as not having a clue either.

The issue here is you have no idea what you're talking about, and then using that complete lack of understanding to form opinions on things.

The bottom line is that it doesn't work the way you think it works.

The main reason the 7970 costs more to produce than the GTX680 is because it has more RAM, a better PCB and a beefier power regulation system.

The higher memory bandwidth means that the 7970 is quite a bit faster at high resolution, which is also helped by more RAM.
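To put rough numbers on that bandwidth gap, here's a quick back-of-the-envelope sketch in Python (the transfer rates are approximate reference-card figures from memory, not exact):

```python
# Rough memory bandwidth: bus width (bits) / 8 * effective rate (GT/s) = GB/s

def bandwidth_gbps(bus_bits, transfer_gtps):
    return bus_bits / 8 * transfer_gtps

# Approximate reference-card figures:
print(bandwidth_gbps(384, 5.5))  # HD 7970: ~264 GB/s
print(bandwidth_gbps(256, 6.0))  # GTX 680: ~192 GB/s
```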

Kepler chips tend to tank hard with high res plus AA compared to the 7900 series.

Most 680s and 670s don't have the ability to up the voltage through software, which limits overclocking potential.

Seriously, you have no idea what you're talking about yet again, so just quit.
 
After that comment about the 7870XT and memory buses, tbh you might want to take a bit of your own advice.

Yeah, no. You were implying that chips don't work on a bus they haven't been designed to work on.

If that isn't what you meant, then you should have said what you meant. I am aware of how memory controllers work but that isn't really what you said.
 
Nice try but no.

What I said was pretty plain: it's a much bigger deal to take a design like GK104, which was built around a specific bus width (256-bit), and then produce a card that works with a wider memory bus (384-bit) than the full-fat design, than it is to produce cards with a narrower memory bus (192-bit) than the full-fat version of the design.

Which leads on to the point about where in the GK104 development timeline they would have made the decision to go with a 256-bit bus. At that point in time it would have been a very risky decision to pick 256-bit as a cost-saving measure on their biggest GeForce design. Not being able to foresee what AMD might have in store, there was a very high chance of it coming back to bite them in the rear and costing even more, since they'd have had to either rework the design for a wider memory bus or stick higher-frequency memory (which costs a TON more) on to make up the bandwidth.
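To illustrate that trade-off with a rough sketch (illustrative numbers only, not anything from nVidia's actual roadmap):

```python
# What transfer rate would a narrow bus need to match a wider bus's
# bandwidth? Illustrative figures only.

def required_gtps(target_bus_bits, target_gtps, actual_bus_bits):
    target_bw = target_bus_bits / 8 * target_gtps   # GB/s the wide bus delivers
    return target_bw / (actual_bus_bits / 8)        # GT/s the narrow bus needs

# Matching a 384-bit bus at 5.5 GT/s with a 256-bit bus:
print(required_gtps(384, 5.5, 256))  # 8.25 GT/s -- faster than any GDDR5 shipping in 2012
```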
 
Nice try? Again, yeah no.

You should really start saying what you mean because you yourself have said people misunderstand you a lot because of this.

As I said, I understand how memory controllers work; I understand that GK104 has four 64-bit memory controllers, each addressing two RAM modules.

But it's been a trend: the GTX 200 series had a 512-bit bus with 8x 64-bit memory controllers, nVidia dropped this down to 6x 64 (384-bit) on GF100/GF110, and then down again to 4x 64 (256-bit) on GK104.

Each time, I think it was related to cost. I think the reason for chopping 8x64 down to 6x64 on GF100 was GDDR5 delivering higher memory bandwidth on the same bus width, which, as above, reduced costs.
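As a rough sketch of that trend (controller counts as above; the transfer rates are approximate launch figures, so treat them as ballpark):

```python
# Bus width = number of 64-bit memory controllers x 64. GDDR5's higher
# transfer rates let the bus narrow each generation without the
# bandwidth going backwards.

chips = [
    # (chip, 64-bit controllers, memory type, approx GT/s at launch)
    ("GT200 (GTX 280)", 8, "GDDR3", 2.2),
    ("GF100 (GTX 480)", 6, "GDDR5", 3.7),
    ("GK104 (GTX 680)", 4, "GDDR5", 6.0),
]

for name, ctrls, mem, gtps in chips:
    bus_bits = ctrls * 64
    bw = bus_bits / 8 * gtps
    print(f"{name}: {ctrls}x64 = {bus_bits}-bit {mem}, ~{bw:.0f} GB/s")
```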

I believe GK104 was where it was intended to be, and designed that way on purpose. I think at the very most they could have dropped the intended clock speeds down, but I don't think there's any chance they were originally going to use a different chip for the 680. Even your example of 680s showing up as 670Tis shows this, because nVidia typically uses the same GPU for the two top cards.
 
It's pretty simple: the point in the development timeline at which they would have decided to go with 4x 64 on GK104 was too early to shackle their highest-end GeForce design (if GK104 was indeed that) to what is ostensibly a mid-range memory bus configuration as a cost-saving measure, given the very high chance it could stop them competing with AMD's next round of GPUs without going to significant extra cost.
 
This belongs in the graphics card section, not in GH. Oh, but you can't post in the graphics card section, can you weehamish!

Pull this kind of stunt again and you'll be on holiday.
 