
Which graphics card for my system - 770 v 7970

Anyway... after all this discussion, it still comes back to the fundamental question: on the subject of GPUs, what is the merit of, and why go for, a card whose 2GB VRAM and 256-bit bus (lower memory bandwidth, to be exact) can and most likely will become an issue in the future, over a card that has 3GB VRAM and a 384-bit bus (higher memory bandwidth), when both are priced the same?
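For anyone who wants the arithmetic behind those bus widths: peak bandwidth is just the bus width in bytes multiplied by the effective memory data rate. A quick sketch using the reference GDDR5 clocks (7.0 Gbps effective on the GTX 770, 6.0 Gbps on the 7970 GHz Edition):

```python
# Peak memory bandwidth = (bus width / 8 bits per byte) * effective data rate.
# Reference clocks only; factory-OC cards will differ.

def bandwidth_gb_s(bus_bits: int, data_rate_gbps: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(256, 7.0))  # GTX 770:              224.0 GB/s
print(bandwidth_gb_s(384, 6.0))  # HD 7970 GHz Edition:  288.0 GB/s
```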

I would love to toss in the good old "Nvidia drivers are better and more reliable", but their more recent drivers prove otherwise (for gamers at least) :(

I have said that if the purchaser of a 770 or 7970GE is not overclocking, or is using 1080p, then from reading many reviews it would be a 50/50 choice for the games they mostly play. The fact that the 770 is just a 680 means nothing when Nvidia markets it as a 770 (a newer card). If they are going for a higher resolution, or the 7970 performs better in a given game, then they would choose that instead.

Many people recommend a 7970 or a 7950 over Nvidia cards, as have I, but you cannot make people buy a 7970 over a 770; it's their choice, just as it was last gen and the gen before that.
 
A single GK104 chip running the Hitman: Absolution benchmark @1080p, totally maxed out.

Not the fastest run I have ever seen, but it has higher minimums than the HD 7970 seen in earlier posts. Hang on a minute: lack of VRAM really hits the minimum FPS, so why is the HD 7970 doing so badly? :D

svux.jpg

These are the same settings on an MSI HD 7950 TFS Boost at stock (960/1250):



Very similar performance to the GK104 benchmark above, though the minimums are still higher than the 7970 ones posted earlier. All this shows is that canned in-game benchmarks do not generally reflect the actual in-game experience.
 
I wouldn't bother, ubersonic.

In my opinion, some people only seem to see the world their own way.

No amount of reasoned discussion will get them to change their mind.

It is obsessive, almost extremist in nature.

Yeah, I'm beginning to get that lol.


You've already shown you don't have an understanding.

you're disagreeing about something you don't understand.

you're arguing authoritatively about something you clearly don't understand.

you don't understand this topic.

Repeatedly telling somebody they don't understand something, when they have clearly demonstrated multiple times that they do, is a tad silly. Like I have said previously, I am not saying that the HD 79xx cards are not currently the better choice; they most certainly are where value for money is concerned. Nor am I saying that the bus throughput disadvantage of the GTX is not a limitation at very high res. What I am saying is that the limitation is irrelevant in the equation, because it will never be an issue at normal resolutions (i.e. 1920x1080): by the time games become so demanding that the limitation would kick in at normal resolutions, forcing GTX owners to start lowering settings, the GTX 6xx and HD 7xxx will have hit their GPU grunt limits and owners will already be forced to turn settings down anyway, as has historically happened every time there have been close-performing GPUs with different memory throughputs.

As you seem to completely ignore my explanations every time and just make out like I'm arguing that the bus doesn't make a difference, I'll try an analogy:

The Toyota 2JZ engine is considered pretty bulletproof when looked after and will normally do well over 200,000 miles. The Toyota 1UZ engine is also considered pretty bulletproof when looked after, but will normally do well over 300,000 miles. Is this a distinct difference? Yes. Is it a reason to buy a car with a 1UZ over one with a 2JZ? No, because in practice the car will be a pile of rust in a bucket long before either engine approaches its limits.

This is the same scenario: the increased bus throughput of the HD 9xxx does exist, it is real, it is a distinct difference. But it is irrelevant for a normal user, as GPU power will force settings to be lowered on both cards before the advantage kicks in.
 
Yeah, I'm beginning to get that lol.

This is the same scenario: the increased bus throughput of the HD 9xxx does exist, it is real, it is a distinct difference. But it is irrelevant for a normal user, as GPU power will force settings to be lowered on both cards before the advantage kicks in.

You mean 7xxx, don't you? Or have you seen the 9xxx cards in action? Or are you a spy ;)
 
What I am saying is that the limitation is irrelevant in the equation, because it will never be an issue at normal resolutions (i.e. 1920x1080): by the time games become so demanding that the limitation would kick in at normal resolutions, forcing GTX owners to start lowering settings, the GTX 6xx and HD 7xxx will have hit their GPU grunt limits and owners will already be forced to turn settings down anyway, as has historically happened every time there have been close-performing GPUs with different memory throughputs.

Only time will tell, and I do believe you may very well end up being correct. Having said that, I defy anyone to say with absolute 100% certainty that a 256-bit bus won't become a problem in the much nearer future. As you already mentioned, when it comes to GPUs, deliberately picking the lesser-spec'd yet equally priced card is a potentially foolish choice. Even if there is only a 20% chance it will be a problem, is it a chance worth taking when a similarly priced and more future-proofed alternative GPU is available?

Essentially, someone who chooses the 256-bit GK104 over the 384-bit Tahiti is taking a risk, hoping with fingers crossed that VRAM bandwidth/capacity won't become an issue with the more graphically demanding games due for release in the next year.

In the past the lesser-spec'd cards at least had the advantage of being cheaper. So when people chose a 768MB 192-bit GTX 460, a 256MB 8800GT, or a 512MB 4870, they were at least getting value in return. The VRAM and/or bus-width limitation in these cards did manifest itself before the cards themselves ran out of GPU grunt. So there have been plenty of times in the past when a GPU with a lower bus width/bandwidth or less VRAM did eventually suffer accordingly. Though, as previously stated, the people who purchased them could console themselves that they had opted for the value edition.

That value advantage is simply non-existent for someone choosing a GK104-based card over an equivalently priced Tahiti alternative.

GTX 670 vs HD 7950: similar price, but less VRAM and bandwidth.
GTX 680/770 vs HD 7970/7970GE: similar price, but less VRAM and bandwidth.
 
Only time will tell, and I do believe you may very well end up being correct. Having said that, I defy anyone to say with absolute 100% certainty that a 256-bit bus won't become a problem in the much nearer future. As you already mentioned, when it comes to GPUs, deliberately picking the lesser-spec'd yet equally priced card is a potentially foolish choice. Even if there is only a 20% chance it will be a problem, is it a chance worth taking when a similarly priced and more future-proofed alternative GPU is available?

Essentially, someone who chooses the 256-bit GK104 over the 384-bit Tahiti is taking a risk, hoping with fingers crossed that VRAM bandwidth/capacity won't become an issue with the more graphically demanding games due for release in the next year. The past has proven beyond all doubt that such performance problems did manifest before the actual GPU ran out of grunt.

In the past the lesser-spec'd cards at least had the advantage of being cheaper. So when people chose a 768MB 192-bit GTX 460, a 256MB 8800GT, or a 512MB 4870, they were at least getting value in return. The VRAM and/or bus-width limitation in these cards did manifest itself before the cards themselves ran out of GPU grunt. So there have been plenty of times in the past when a GPU with a lower bus width/bandwidth or less VRAM did eventually suffer accordingly. Though, as previously stated, the people who purchased them could console themselves that they had opted for the value edition.

That value advantage is simply non-existent for someone choosing a GK104-based card over an equivalently priced Tahiti alternative.

GTX 670 vs HD 7950: similar price, but less VRAM and bandwidth.
GTX 680/770 vs HD 7970/7970GE: similar price, but less VRAM and bandwidth.

I run my GTX 690s in quad SLI; or, putting it another way, I have the GPU grunt of four GTX 680s using 2GB of VRAM. What I find is that even with that much GPU grunt, I have to back off the settings before VRAM even becomes a problem.

Or, putting it another way, even if the GTX 680 were four times as powerful as it is, on a single screen it still would not have a problem.

Having said that, on multi-screen setups GTX 690s in quad SLI are very poor and I would not recommend them.
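Worth spelling out why quad SLI doesn't multiply the VRAM: under alternate-frame rendering each GPU keeps its own full copy of every texture and buffer, so capacity is mirrored rather than pooled. A trivial sketch (the helper function is mine, just to make the point):

```python
# Under alternate-frame rendering (AFR), VRAM is mirrored across GPUs,
# not pooled, so usable capacity stays at a single GPU's amount.

def effective_vram_gb(per_gpu_vram_gb: float, gpu_count: int) -> float:
    """AFR mirrors assets across GPUs, so capacity does not stack."""
    return per_gpu_vram_gb  # NOT per_gpu_vram_gb * gpu_count

print(effective_vram_gb(2.0, 4))  # quad-SLI GTX 690s -> still 2.0 GB usable
```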
 
I run my GTX 690s in quad SLI; or, putting it another way, I have the GPU grunt of four GTX 680s using 2GB of VRAM. What I find is that even with that much GPU grunt, I have to back off the settings before VRAM even becomes a problem.

Or, putting it another way, even if the GTX 680 were four times as powerful as it is, on a single screen it still would not have a problem.

Having said that, on multi-screen setups GTX 690s in quad SLI are very poor and I would not recommend them.
I'm afraid what you're saying here is totally irrelevant to the main point that ICDP is making.
 
There's no doubt that the new consoles are going to mean higher-quality textures, so even aside from bandwidth, ultra settings in future games are extremely likely to use >2GB long before these cards are obsolete. I wouldn't be at all surprised if BF4 at ultra wants 3GB, given its AMD backing.
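To put a rough number on how fast higher-quality textures chew through 2GB: an uncompressed RGBA8 texture costs width x height x 4 bytes, plus about a third again for the mip chain. A sketch, with the texture counts purely illustrative rather than taken from any actual game:

```python
# Uncompressed RGBA8 texture sizes; the 4/3 factor is the standard
# overhead of a full mip chain. Counts below are illustrative only.

def texture_mib(width: int, height: int, bytes_per_texel: int = 4) -> float:
    """Texture size in MiB, including a full mip chain (~4/3 overhead)."""
    return width * height * bytes_per_texel * (4 / 3) / 2**20

print(round(texture_mib(2048, 2048), 1))  # ~21.3 MiB per 2K texture
print(round(texture_mib(4096, 4096), 1))  # ~85.3 MiB per 4K texture
# ~24 unique 4K textures resident at once is already ~2 GiB, before
# framebuffers, geometry, or shadow maps are counted.
```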
 
Sorry to rain on your parade, but you will get similar performance. The review posted sounds like a classic example of PEBKAC. However, as long as you're happy with the new card, that's all that matters.

Latest drivers

h3doAOK.jpg


VisUqMK.jpg

Hi,

Could you link me to the page this came from? I wanted to have a look through it, thanks :)
 
I run my GTX 690s in quad SLI; or, putting it another way, I have the GPU grunt of four GTX 680s using 2GB of VRAM. What I find is that even with that much GPU grunt, I have to back off the settings before VRAM even becomes a problem.

Or, putting it another way, even if the GTX 680 were four times as powerful as it is, on a single screen it still would not have a problem.

Having said that, on multi-screen setups GTX 690s in quad SLI are very poor and I would not recommend them.

I'm confused by this post; my apologies, but are you attempting to prove or refute my points?

My basic points.

  1. It cannot be 100% guaranteed that in the near future games will NOT require more VRAM capacity or bandwidth. This could conceivably happen long before a GTX 770 loses the GPU grunt to play those games.
  2. Why recommend a GPU with worse future proofing, given that in the past VRAM and bandwidth did indeed become a performance issue long before lack of GPU grunt rendered a card obsolete? Examples include the 512MB vs 1GB HD 4870, the 192-bit 768MB vs 256-bit 1GB GTX 460, and the 256MB vs 512MB 8800GT.
  3. I am not saying it WILL become a problem, I am saying it MAY become a very real problem. So why deliberately choose a GPU that offers less future proofing?

Is future proofing only valid if it is an Nvidia GPU that offers it?
 
I'm confused by this post; my apologies, but are you attempting to prove or refute my points?

My basic points.

  1. It cannot be 100% guaranteed that in the near future games will NOT require more VRAM capacity or bandwidth. This could conceivably happen long before a GTX 770 loses the GPU grunt to play those games.
  2. Why recommend a GPU with worse future proofing, given that in the past VRAM and bandwidth did indeed become a performance issue long before lack of GPU grunt rendered a card obsolete? Examples include the 512MB vs 1GB HD 4870, the 192-bit 768MB vs 256-bit 1GB GTX 460, and the 256MB vs 512MB 8800GT.
  3. I am not saying it WILL become a problem, I am saying it MAY become a very real problem. So why deliberately choose a GPU that offers less future proofing?

Is future proofing only valid if it is an Nvidia GPU that offers it?

On point 1, I agree future games will require more VRAM; having said that, I think this is only relevant for SLI/Crossfire setups. I think the single cards of today, including the Titans, are nowhere near powerful enough to run at acceptable FPS without having to reduce settings. Crysis 3 on a single Titan @1600p maxed out is 22 FPS, and about double that @1080p, which is pretty useless.

On point 2, I normally recommend HD 7970/7950s to anyone who is going to use multi-monitor/multi-GPU setups, as this is where people can get into trouble with a lack of VRAM.

On point 3, for a single card I don't think 2GB will be a problem, as the cards are not fast enough. As in point 2, for multi-monitor/multi-GPU setups 3GB is now advisable as the bare minimum.

I have never recommended a 2GB card for multi-monitor setups.
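As a quick sanity check on the "about double" figure in point 1 above: if a game is purely GPU-limited, frame rate scales roughly with the inverse of the pixel count. A rough sketch (the 22 FPS figure is from the post; the linear-scaling assumption is a rule of thumb, not a measurement):

```python
# FPS in a GPU-limited game scales roughly with 1 / pixel count.

pixels_1600p = 2560 * 1600   # 4,096,000 pixels
pixels_1080p = 1920 * 1080   # 2,073,600 pixels

scale = pixels_1600p / pixels_1080p   # ~1.98x fewer pixels at 1080p
print(round(22 * scale, 1))           # ~43.5 FPS, i.e. "about double"
```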
 
The PS3 has 256MB of RAM for gaming, whereas the PS4 has 4.5GB of dedicated RAM for gaming.


There is a lot of news about new console RAM on Google News at the moment; not sure if I'm allowed to link it all.

I think games are going to start using more VRAM; the Xbox One has 5.5GB of dedicated RAM for a GPU less powerful than a 7850.
 
On point 1, I agree future games will require more VRAM; having said that, I think this is only relevant for SLI/Crossfire setups. I think the single cards of today, including the Titans, are nowhere near powerful enough to run at acceptable FPS without having to reduce settings. Crysis 3 on a single Titan @1600p maxed out is 22 FPS, and about double that @1080p, which is pretty useless.

On point 2, I normally recommend HD 7970/7950s to anyone who is going to use multi-monitor/multi-GPU setups, as this is where people can get into trouble with a lack of VRAM.

On point 3, for a single card I don't think 2GB will be a problem, as the cards are not fast enough. As in point 2, for multi-monitor/multi-GPU setups 3GB is now advisable as the bare minimum.

I have never recommended a 2GB card for multi-monitor setups.
That is not the point.

It's quite simple... it is like how, for the most part, 8GB of 1600MHz and 12GB of 2400MHz DDR3 memory don't make much difference in most real-world usage (at the moment). However, if both were the same price, there would be no logic in getting the 8GB 1600MHz over the 12GB 2400MHz memory.

Also, while the 7970 and 770 might seem to be "equals", that's only up to 4xMSAA... and even that is only for now. Going with the 770 would pretty much mean giving up the option of using 8xMSAA, as it would lose so much frame rate due to its lower memory bandwidth, even with a pair of them in SLI.
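For a sense of why 8xMSAA is so much heavier on memory than 4x: each extra sample multiplies the colour and depth sample storage, and the bandwidth needed to shade and resolve it. A rough sketch, assuming uncompressed RGBA8 colour plus D24S8 depth/stencil at 4 bytes each; real GPUs compress MSAA surfaces, so treat these as upper-bound figures:

```python
# MSAA stores one colour and one depth/stencil sample per pixel per sample
# count, so render-target size grows roughly linearly with the sample count.

def msaa_buffers_mib(width: int, height: int, samples: int) -> float:
    """Colour + depth/stencil sample storage for an MSAA render target."""
    bytes_per_sample = 4 + 4   # RGBA8 colour + D24S8 depth/stencil
    return width * height * samples * bytes_per_sample / 2**20

for s in (1, 4, 8):
    print(s, round(msaa_buffers_mib(1920, 1080, s)))  # ~16, ~63, ~127 MiB
```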
 
Repeatedly telling somebody they don't understand something, when they have clearly demonstrated multiple times that they do, is a tad silly. Like I have said previously, I am not saying that the HD 79xx cards are not currently the better choice; they most certainly are where value for money is concerned. Nor am I saying that the bus throughput disadvantage of the GTX is not a limitation at very high res. What I am saying is that the limitation is irrelevant in the equation, because it will never be an issue at normal resolutions (i.e. 1920x1080): by the time games become so demanding that the limitation would kick in at normal resolutions, forcing GTX owners to start lowering settings, the GTX 6xx and HD 7xxx will have hit their GPU grunt limits and owners will already be forced to turn settings down anyway, as has historically happened every time there have been close-performing GPUs with different memory throughputs.

I'm saying you don't understand, because you're ignoring the fact that it IS happening NOW at 1920x1080, as I have said repeatedly.

It's already happening as we speak, i.e. people are having to lower settings on GK104-based cards to keep their FPS up where people with 79xx cards are not.
 
I'm saying you don't understand, because you're ignoring the fact that it IS happening NOW at 1920x1080, as I have said repeatedly.

It's already happening as we speak, i.e. people are having to lower settings on GK104-based cards to keep their FPS up where people with 79xx cards are not.

I thought I saw evidence of this with a 680 in Hitman: Absolution in another thread, but even that turned out to be a red herring, as a later review of the 770 had the 680 at 1080p running Ultra with 8xMSAA.

So the question is: what people? Where? And in what games do people with 680s have to turn settings down that 7970 owners do not?
 
I'm saying you don't understand, because you're ignoring the fact that it IS happening NOW at 1920x1080, as I have said repeatedly.

It's already happening as we speak, i.e. people are having to lower settings on GK104-based cards to keep their FPS up where people with 79xx cards are not.

Reeeeelly? :P

Okay, let's test that theory.


Crysis 3: Neither card playable at max settings, both sort of are when OC'd, and neither would be at 8x AA

OI3BEig.jpg


---------------------------------

Far Cry 3: Both cards pretty playable, and more so when OC'd, but again neither would be at 8x AA

Rjkzjrm.jpg


---------------------------------

Metro 2033: Neither card playable at 4x AA even when OC'd

yJ6Ar6m.jpg


---------------------------------

Metro Last Light: Neither card playable at 4x AA even when OC'd

bjjYi4d.jpg


---------------------------------

Tomb Raider: Both cards playable at 2x AA but doubtful they would be at higher AA

ozooGyA.jpg


---------------------------------


Well, I must say I'm surprised. I guess I was wrong when I said the bus throughput advantage of the 7xxx would only become irrelevant in the future once the GPUs ran out of grunt; it turns out it's ALREADY irrelevant, as the GPUs are running out of grunt at max settings TODAY.
 