
970s having performance issues using 4GB VRAM - Nvidia investigating

Status
Not open for further replies.
Did you read the part about false advertising?

The same could be said of the 295X2 being advertised as an 8GB card.

The GTX 970 has 4GB of RAM, just as we always thought; the only difference is that we now know 0.5GB of it is partitioned and carries a small performance impact compared to the GTX 980 (4-6%). You get a bigger performance loss than that from the reduction in CUDA cores.
 
Makes you wonder what testing is done before sale, really, as surely this would have shown up under intense testing, and a driver could have been written to help the OS decide which pool of memory to use in the correct order...
 
From AnandTech:

This in turn is why the 224GB/sec memory bandwidth number for the GTX 970 is technically correct and yet still not entirely useful as we move past the memory controllers, as it is not possible to actually get that much bandwidth at once on the read side. GTX 970 can read the 3.5GB segment at 196GB/sec (7GHz * 7 ports * 32-bits), or it can read the 512MB segment at 28GB/sec, but not both at once; it is a true XOR situation. Furthermore because the 512MB segment cannot be read at the same time as the 3.5GB segment, reading this segment blocks accessing the 3.5GB segment for that cycle, further reducing the effective memory bandwidth of the card. The larger the percentage of the time the crossbar is reading the 512MB segment, the lower the effective memory bandwidth from the 3.5GB segment.
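The arithmetic in the AnandTech quote can be sketched as a toy model. This is a back-of-the-envelope illustration, not a description of NVIDIA's actual memory scheduling: the clock rate and port counts come from the quote above, while `effective_bandwidth` is a simple linear blend I'm assuming purely for illustration of the "XOR" effect.

```python
# Toy model of the GTX 970 read bandwidth described in the quote above.
# The crossbar can read EITHER the 3.5GB segment (7 ports) OR the 0.5GB
# segment (1 port) on a given cycle, never both (the "XOR" situation).

GDDR5_RATE_GHZ = 7.0   # effective memory data rate from the quote
PORT_WIDTH_BITS = 32   # per-port width from the quote

def port_bandwidth_gbps(ports: int) -> float:
    """Peak read bandwidth for a group of ports, in GB/s."""
    return GDDR5_RATE_GHZ * ports * PORT_WIDTH_BITS / 8

FAST_SEGMENT = port_bandwidth_gbps(7)  # 3.5GB segment: 196 GB/s
SLOW_SEGMENT = port_bandwidth_gbps(1)  # 0.5GB segment: 28 GB/s

def effective_bandwidth(slow_fraction: float) -> float:
    """Assumed linear blend: cycles spent reading the slow segment
    block the fast segment, dragging the average down."""
    return (1 - slow_fraction) * FAST_SEGMENT + slow_fraction * SLOW_SEGMENT

print(FAST_SEGMENT)              # 196.0
print(SLOW_SEGMENT)              # 28.0
print(effective_bandwidth(0.1))  # 179.2 -- with 10% slow-segment traffic
```

Even a modest 10% of cycles spent in the slow segment pulls the card well below its fast-segment peak, which is the quote's point about effective bandwidth falling as slow-segment traffic grows.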
 
I bought these 970s because, in terms of VRAM and ROPs, they matched the AMD R9 290s they replaced, and now in the space of two days I find out they don't actually match at all.

What I've been saying all along... I know people don't like to compare Firestrike to real games, but it's comparable to a certain extent, and there's a consistently higher score on 290s over 970s. (It doesn't happen for no reason...)

What usually happens when reviewers compare the cards is that they run the 970 at stock, which boosts up to 1250-1300, while the 290/290X stays at 975-1050 or thereabouts. From what I've worked out (based on Firestrike, mind you), the 970 needs a 200-300MHz core clock advantage (about 25%) to stay on par, but when you start upping that 290 by 200-250MHz, the higher IPC really starts to show and you pull away with the 290. I don't know why this is so hard to comprehend for most people. It's not like the 290 can't be overclocked just like a 970 can. Hell, I run my 290 at 1270-1290 depending on my fan and pump speed, and that equals what, a 1600-1700MHz 970 at 1080p? rofl. Let's not forget I benched my 290 at 1400 core and beat every single 970 in the Firestrike thread, which I think I had already done anyway.

Don't take this post as definite fact when it comes to the performance comparisons after overclocking; it's just based on my observations and experience comparing my 290s to various 970s on this forum, so please take it with a grain of salt.
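The "about 25%" figure above is just clock arithmetic. A minimal sanity check, assuming the example clocks quoted in the post (typical values, not measured data):

```python
# Sanity check on the "~25% clock advantage" claim, using the clocks
# quoted in the post above (assumed typical, not measured results).
stock_290 = 1000   # MHz, roughly mid-range of the quoted 975-1050
boost_970 = 1250   # MHz, low end of the quoted 1250-1300 boost range
advantage = (boost_970 - stock_290) / stock_290
print(f"{advantage:.0%}")  # 25%
```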



Anyhow, now that I see your regret... how come you would defend and talk your 970s up without hesitation, even though you noticed stuttering before this was exposed? :confused::rolleyes:
 
Just read through that article, and if I owned a 970 or two, I would have no concerns. Sure, the last 0.5GB runs a bit slower (4-6%, in fact) than if it were a straight 4GB like the 980, but that 4-6% is completely negligible.


I've never noticed a thing; the cards work perfectly. But you know what it's like, the product is now tainted... mud sticks.
 
Makes you wonder what testing is done before sale, really, as surely this would have shown up under intense testing, and a driver could have been written to help the OS decide which pool of memory to use in the correct order...

AMD said the same thing regarding Bulldozer and its cache/cores... ;)
 
The same could be said of the 295X2 being advertised as an 8GB card.

The GTX 970 has 4GB of RAM, just as we always thought; the only difference is that we now know 0.5GB of it is partitioned and carries a small performance impact compared to the GTX 980 (4-6%). You get a bigger performance loss than that from the reduction in CUDA cores.

The 295X2 does technically have 8GB, and the 970 does technically have 4GB. The difference is that the 295X2 can actually make use of ALL 8GB of VRAM on the PCB at once, unlike the 970, and you suffer no performance loss on the 295X2 should it use it all.

Also, if you believe the figures that Nvidia are giving for the performance loss over 3.5GB, then you have another thing coming. :D Nvidia are just providing lip service with that "statement".
 
AnandTech is throwing numbers out there for the slowdown, and it's awful, it really is: need to access a lot of textures stored in that 512MB? You drop 68% of your memory throughput!
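For comparison, the raw peak-rate ratios from the AnandTech description work out even worse than the 68% quoted here (which presumably reflects a measured scenario rather than peak-to-peak arithmetic). A rough sketch, using only the peak numbers from the earlier quote:

```python
# Peak-rate arithmetic from the AnandTech description quoted earlier.
# These are theoretical peaks, not measured game throughput, so they
# bound the slowdown rather than match any specific benchmark figure.

ADVERTISED = 224.0  # GB/s, all eight ports counted together
FAST = 196.0        # GB/s, 3.5GB segment alone (7 ports)
SLOW = 28.0         # GB/s, 0.5GB segment alone (1 port)

drop_vs_fast = (FAST - SLOW) / FAST                    # vs the fast segment
drop_vs_advertised = (ADVERTISED - SLOW) / ADVERTISED  # vs the box number

print(f"{drop_vs_fast:.0%}")        # 86%
print(f"{drop_vs_advertised:.0%}")  # 88%
```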
 
Makes you wonder what testing is done before sale, really, as surely this would have shown up under intense testing, and a driver could have been written to help the OS decide which pool of memory to use in the correct order...

They would have known. It's nothing more than cost-cutting, and people are silly enough to believe all the hyped BS about them.
 
What I've been saying all along... I know people don't like to compare Firestrike to real games, but it's comparable to a certain extent, and there's a consistently higher score on 290s over 970s. I run my 290 at 1270-1290 depending on my fan and pump speed, and that equals what, a 1600-1700MHz 970 at 1080p? rofl. Let's not forget I benched my 290 at 1400 core and beat every single 970 in the Firestrike thread.

Whilst I'm not denying what you are saying, I have a 290 under water and those clocks are unachievable for me, so it will be the same for many others. 122* (can't remember the exact number) is my absolute maximum with +200mV. I run mine at 1100 24/7 as the best compromise between heat and speed.

What voltage did you need for 1400? That's the highest I've seen a 290 at, by quite a margin.
 
The question is why, when Nvidia's engineers saw their marketing team's gaffe, they did not correct it at that point. Clearly they knew, and were able to articulate the answer right away. I think they were hoping no one would notice.
 
So much for the monitoring tools not being able to read the VRAM correctly; I didn't think that would be the case! Just pleased my release one went back now, as I knew I wasn't imagining the stuttering. However, now I'm even more stuck deciding what to replace it with, seeing as it's not a driver issue: go back to a 290 or hold off for the Q2 cards... decisions.
 
Can't see how anyone can defend Nvidia on this. They gave out false specs, selling a card advertised as 4GB that can only use 3.5GB optimally, and they only own up when consumers see through their BS.

If this were AMD, we would be burning them at the stake already.
 
Pleased my release one went back now, as I knew I wasn't imagining the stuttering. However, now I'm even more stuck deciding what to replace it with, seeing as it's not a driver issue: go back to a 290 or hold off for the Q2 cards... decisions.

I would hold off until AMD release their next-gen cards.
 
So, out of interest, what is the returns policy in Europe for these cards now, given the possible mis-selling?

My card is less than a month old.

Is attempting a return OTT, given my aspirations for 1440p gaming at decent settings in future games with high VRAM requirements?
 
AnandTech is throwing numbers out there for the slowdown, and it's awful, it really is: need to access a lot of textures stored in that 512MB? You drop 68% of your memory throughput!

Looks like that Mordor video is about spot on then.

Mass RMA ahoy! I wouldn't want to be a retailer about now...
 
Also, if you believe the figures that Nvidia are giving for the performance loss over 3.5GB, then you have another thing coming. :D

I should think so!
I hope there are some tests done where real strain is put on it, not just the cookie-cutter tests.

It's bound to wig out in some situations/games more than others.
I don't see how you can even put an overall % on it.
 
Whilst I'm not denying what you are saying, I have a 290 under water and those clocks are unachievable for me, so it will be the same for many others. 122* (can't remember the exact number) is my absolute maximum with +200mV. I run mine at 1100 24/7 as the best compromise between heat and speed.

What voltage did you need for 1400? That's the highest I've seen a 290 at, by quite a margin.

1.63V :o

http://www.3dmark.com/fs/3691850
 