RTX 4070 12GB, is it Worth it?

29 Shader TFlops. 67 RT Tflops.

You just need to check the links I gave. It's easy to work out using the 3070.

We are looking at a slide out of context. The purpose of this slide is clearly to say that their 3rd gen RT cores and 4th gen Tensor cores are way better; they then provide the compute numbers.

In that case the 4070 Ti has 41 of those...

7688 / 5888 = 1.30
40.9 / 29 = 1.41

If it's 5888, that's not quite right:

7688 / 5500 = 1.40

Not saying you're wrong, it's just interesting.
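
To make the arithmetic above easier to follow, here is a minimal sketch of those ratios in Python, using only the figures quoted in this thread (7688 cores and 40.9 TFLOPS for the 4070 Ti, 29 TFLOPS rumoured for the 4070, 5888 cores for the 3070); the implied ~5,450-core figure assumes both cards boost to the same clock, which is an assumption on my part.

    # Ratio check using the rumoured figures quoted in this thread.
    cores_4070ti = 7688      # rumoured 4070 Ti shader count
    cores_3070 = 5888        # 3070 shader count
    tflops_4070ti = 40.9     # rumoured 4070 Ti shader TFLOPS
    tflops_4070 = 29.0       # rumoured 4070 shader TFLOPS

    print(cores_4070ti / cores_3070)    # ~1.31, the core-count ratio if the 4070 kept 5888 cores
    print(tflops_4070ti / tflops_4070)  # ~1.41, the TFLOPS ratio between the two rumoured cards
    print(cores_4070ti / 1.41)          # ~5452, the implied 4070 core count at equal clocks

    # Reminder: FP32 TFLOPS = cores * 2 ops per clock * boost clock (GHz) / 1000,
    # so at equal clocks the TFLOPS ratio should track the core-count ratio directly.

That ~5,450 result is roughly where the 5500 figure above comes from.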
 
So, we're looking at something like a 15% price increase from last gen 70 series cards for a ...% increase in performance. Anyone want to take a stab at what that performance increase will be? Because if it's not more than 15% the 4070 will be DOA.
 
So, we're looking at something like a 15% price increase from last gen 70 series cards for a ...% increase in performance. Anyone want to take a stab at what that performance increase will be? Because if it's not more than 15% the 4070 will be DOA.

Nvidia will use frame generation in their promo material to say "Our 4070 is 300% faster than our last gen 3070".
 
So, we're looking at something like a 15% price increase from last gen 70 series cards for a ...% increase in performance. Anyone want to take a stab at what that performance increase will be? Because if it's not more than 15% the 4070 will be DOA.
WCCFTech say the 4070's theoretical compute power, based on the information to hand, should be about the same as a 3080's, so around a 30% performance increase over the 3070.

My concern is that the memory bus drops from 256-bit on the 3070 to 192-bit on the 4070, and the impact this will have, but the L2 cache on the 4070 is a lot larger, 36MB compared to 4MB on the 3070, so it may offset the narrower bus.
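
As a rough sanity check on that bus concern, here is a minimal sketch of the bandwidth maths; the memory speeds used (14 Gbps GDDR6 on the 3070, 21 Gbps GDDR6X for the 4070) are my assumptions, not figures from this thread.

    # Peak memory bandwidth (GB/s) = bus width in bits / 8 * effective data rate in Gbps.
    def bandwidth_gbs(bus_bits, data_rate_gbps):
        return bus_bits / 8 * data_rate_gbps

    print(bandwidth_gbs(256, 14))  # 3070: 448 GB/s (256-bit, 14 Gbps GDDR6)
    print(bandwidth_gbs(192, 21))  # 4070: 504 GB/s (192-bit, assuming 21 Gbps GDDR6X)

So the narrower bus is not automatically a downgrade if the memory runs faster, and the much larger 36MB L2 means the card has to go out to DRAM less often in the first place.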
 
Nvidia will use frame generation in their promo material to say "Our 4070 is 300% faster than our last gen 3070".
And the real sad thing is, with a compliant media, it probably will work :mad:

Toe the line, or you won't get a review sample next time.
Don't worry, they'll argue it draws less power and therefore offers more performance per watt :cry:
It will be interesting to see if perf/watt comes back as a talking point. Last gen it was totally gone, but with Nvidia on Samsung's 8nm, team green was always going to lose that one. GDDR6X is also power hungry. This time Nvidia have paid more for the process node and AMD have gone multi-chip, which must waste some power.

If anyone starts pushing perf/watt again, they're either shills or taking all their "thinking" points from Nvidia PR.
 
WCCFTech say the 4070's theoretical compute power, based on the information to hand, should be about the same as a 3080's, so around a 30% performance increase over the 3070.
I wish I could say theory always tracked with reality. ;)

If I had to guess, it's maybe closer to 20%, so a measly 5% better price to performance.
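
For anyone wanting the price-to-performance arithmetic spelled out, a minimal sketch using the ~15% price increase and the guessed ~20% performance increase from this thread:

    # Relative value change = relative performance / relative price - 1.
    price_increase = 0.15  # ~15% dearer than the last-gen 70-series card
    perf_increase = 0.20   # guessed ~20% faster

    value_gain = (1 + perf_increase) / (1 + price_increase) - 1
    print(f"{value_gain:.1%}")  # ~4.3%, i.e. the "measly 5%" mentioned above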
 
And the real sad thing is, with a compliant media, it probably will work :mad:

Toe the line, or you won't get a review sample next time.

It will be interesting to see if perf/watt comes back as a talking point. Last gen it was totally gone, but with Nvidia on Samsung's 8nm, team green was always going to lose that one. GDDR6X is also power hungry. This time Nvidia have paid more for the process node and AMD have gone multi-chip, which must waste some power.

If anyone starts pushing perf/watt again, they're either shills or taking all their "thinking" points from Nvidia PR.

Agreed! It's a load of balls. You want a decent generational leap in performance, not a generational leap in lower power for the same damn performance. That's like buying a new mid-range sports car that's advertised as having the same horsepower as before but using 20% less fuel.
 
And the real sad thing is, with a compliant media, it probably will work :mad:

Toe the line, or you won't get a review sample next time.

It will be interesting to see if perf/watt comes back as a talking point. Last gen it was totally gone, but with Nvidia on Samsung's 8nm, team green was always going to lose that one. GDDR6X is also power hungry. This time Nvidia have paid more for the process node and AMD have gone multi-chip, which must waste some power.

If anyone starts pushing perf/watt again, they're either shills or taking all their "thinking" points from Nvidia PR.

The same people were strangely quiet when RDNA2 had the performance/watt advantage for a number of models.
 
The same people were strangely quiet when RDNA2 had the performance/watt advantage for a number of models.
That was exactly my point. Mind you, in these days of cards shipping with the OC maxed out of the box, any perf/watt advantage just gets squandered on higher clocks to remain competitive.

Both the 6900 XT and the 3090 were a lot more efficient with some tweaks. Often a lot more efficient: 100W saved for a 5% performance loss?
 
That was exactly my point. Mind you, in these days of cards shipping with the OC maxed out of the box, any perf/watt advantage just gets squandered on higher clocks to remain competitive.

Both the 6900 XT and the 3090 were a lot more efficient with some tweaks. Often a lot more efficient: 100W saved for a 5% performance loss?

Exactly, but I find it weird when people suddenly go on about how the new Nvidia dGPUs are amazing in terms of power consumption with capped FPS, yet seem ignorant of Radeon Chill.

The RX 6600 was amazing in terms of performance/watt last generation, especially when tweaked.

Even with my RTX 3060 Ti I run custom curves and save 30W~40W in certain scenarios. OTOH, Fallout 4 is so CPU limited the dGPU is falling asleep! :cry:
 
From TechPowerUp's 4K rasterization comparison, 7900XTX vs 4080:
Hitman 3: 6.5% faster
Dying Light 2: 7.3% faster
Divinity: Original Sin II: 7.6% faster
Cyberpunk 2077: 8.3% faster
Assassin's Creed Valhalla: 9.4% faster
Battlefield V: 13.9% faster
Watch Dogs: Legion: 15.3% faster
Red Dead Redemption 2: 17.7% faster
Resident Evil: Village: 18.2% faster
Far Cry 6: 21% faster

As for the physical size of the 4080, play it down as a non-issue in your headcanon all you want, but the reality is that its size IS something users have to think about when making a purchasing decision.
  • Nvidia Founders Edition – 304 x 137 x 61mm (3-slot)
  • ASUS ROG Strix – 357.6 x 149.3 x 70.1mm (4-slot)
  • ASUS TUF Gaming – 348.2 x 150 x 72.6 mm (4-slot)
  • Colorful iGame Neptune – Card: 253.5 x 170.8 x 41.5mm (2-slot), Radiator: 394 x 119.2 x 54.4mm
  • Gainward Phantom GS – 329 x 142 x 70mm (4-slot)
  • Galax SG/ KFA2 SG – 352 x 153 x 66mm (3.5-slot)
  • MSI Gaming X Trio – 337 x 140 x 67 mm (3.5-slot)
  • MSI Suprim X – 336 x 142 x 78 mm (4.5-slot)
  • MSI VENTUS 3X – 322 x 136 x 63mm (3-slot)
  • PNY XLR8 Gaming Verto Epic-X RGB – 331.7 x 136.9 x 71.1mm (4-slot)
  • Zotac AMP Extreme AIRO – 355.5 x 149.6 x 72.1mm (4-slot)

You simply cannot generalise and expect everyone to have a full-size/oversized ATX case; there are plenty of people with Micro-ATX PCs for whom the length of the card might not necessarily be the issue, but the card being 3-4 slots thick and/or 140-150mm wide is (a side-panel clearance issue). Like it or not, the reality is that an oversized card will alienate part of the userbase, and the simple reality is that smaller cards WILL fit in more cases than larger ones; that's just physics. A rough fit check against the dimensions above is sketched below.
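
The fit check is easy to do yourself; here is a minimal sketch against a few of the models listed above, with hypothetical case clearances standing in for whatever your case's spec sheet says (dimensions are length x width x thickness in mm, plus slot thickness):

    # Which of the listed RTX 4080 models fit a given case? Dimensions taken from the list above.
    cards = {
        "Nvidia Founders Edition": (304, 137, 61, 3.0),
        "ASUS ROG Strix": (357.6, 149.3, 70.1, 4.0),
        "MSI VENTUS 3X": (322, 136, 63, 3.0),
        "MSI Suprim X": (336, 142, 78, 4.5),
    }

    # Hypothetical Micro-ATX case limits: max GPU length, max width, max expansion slots.
    max_length, max_width, max_slots = 330, 145, 3

    for name, (length, width, thickness, slots) in cards.items():
        fits = length <= max_length and width <= max_width and slots <= max_slots
        print(f"{name}: {'fits' if fits else 'does not fit'}")
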
So you just list all the games where the 7900XTX is faster and leave out all the others where it is equal or slower? :cry:
The average from TechPowerUp's 4K rasterization comparison of the 7900XTX vs the 4080 shows the 7900XTX is 2% faster than the 4080 (excluding RT), and to be honest reviewers like TechPowerUp should stop excluding RT from the average.

A 4080 can fit in a midi tower no problem; you don't need a full-size tower. In fact a 4080 is better suited to a midi tower than the 7900XTX due to its lower heat output. Also, the non-reference designs are a similar size regardless of whether it is a 4080 or a 7900XTX.
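
To put a number on the cherry-picking point, a minimal sketch averaging only the titles listed in the earlier post against the ~2% full-suite figure quoted here; the per-game deltas are copied from that post, and this is just an averaging illustration, not a re-benchmark.

    # Average of only the games listed earlier (7900XTX lead over the 4080, in %).
    listed_deltas = [6.5, 7.3, 7.6, 8.3, 9.4, 13.9, 15.3, 17.7, 18.2, 21.0]
    print(sum(listed_deltas) / len(listed_deltas))  # ~12.5% across the hand-picked titles

    # TechPowerUp's full-suite 4K raster average is ~2%, because the games where the
    # 4080 matches or beats the 7900XTX are included in that figure.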
 
WCCFTech say the 4070's theoretical compute power, based on the information to hand, should be about the same as a 3080's, so around a 30% performance increase over the 3070.

Yeah, the 3080 has 29.77 TFLOPS FP32.

It's exactly 30% faster than the 3070.

The 3080 launched at $699; if this costs $599 they can argue you're getting a 3080 with 4GB more VRAM for $100 less, which to many people will sound like a bargain.
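
As a minimal sketch of that value framing, price per shader TFLOP, assuming the 3080's $699 launch MSRP and the rumoured $599 / 29 TFLOPS figures discussed earlier:

    # Dollars per FP32 shader TFLOP, using the figures discussed in this thread.
    price_3080, tflops_3080 = 699, 29.77  # 3080 launch MSRP and FP32 throughput
    price_4070, tflops_4070 = 599, 29.0   # rumoured 4070 price and shader TFLOPS

    print(price_3080 / tflops_3080)  # ~23.5 $/TFLOP
    print(price_4070 / tflops_4070)  # ~20.7 $/TFLOP, and with 12GB vs 10GB of VRAM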
 
So you just list all the games where the 7900XTX is faster and leave out all the others where it is equal or slower? :cry:
The average from TechPowerUp's 4K rasterization comparison of the 7900XTX vs the 4080 shows the 7900XTX is 2% faster than the 4080 (excluding RT), and to be honest reviewers like TechPowerUp should stop excluding RT from the average.

A 4080 can fit in a midi tower no problem; you don't need a full-size tower. In fact a 4080 is better suited to a midi tower than the 7900XTX due to its lower heat output. Also, the non-reference designs are a similar size regardless of whether it is a 4080 or a 7900XTX.

You do realise the RTX 4080 is £370 more expensive than the 7900XT?

Or to put it another way, the 4080 is 46% more expensive than the 7900XT.

You could buy an RX 6700 XT with the difference.
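
A minimal sketch of the prices implied by those two figures; the £370 gap and 46% premium are from the post above, and the derived numbers are just back-calculated from them, not quoted retail prices:

    # If the 4080 costs 370 GBP more and that gap is a 46% premium over the 7900XT:
    gap_gbp = 370
    premium = 0.46

    price_7900xt = gap_gbp / premium     # ~804 GBP implied 7900XT price
    price_4080 = price_7900xt + gap_gbp  # ~1174 GBP implied 4080 price
    print(round(price_7900xt), round(price_4080))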
 