
RTX 4070 12GB, is it Worth it?

Status
Not open for further replies.
The truth is both Nvidia and AMD are getting bored with dGPUs.

Nvidia have reached equilibrium; they can't grow in this space anymore, it has plateaued.

AMD feel the same way. Yes, they have also reached their equilibrium. They have hit rock bottom, they don't feel it could get any worse, and if it does, it's probably a good thing, because then they can wind it all down.

Nvidia are looking to AI to grow their company; AMD are looking to SoCs and licensing their IP.

Just imagine what things would be like if the two were more evenly matched. I have been saying for years to a certain sect in this community: be careful what you wish for.
Whatever happened to just making a solid profit and being happy? Damn greed just ruins everything....
 
That is still a flawed comparison. Last year's 3090 was pretty bad in terms of performance compared to this year's 4090. Again, your argument implies that if the 4090 weren't as fast as it is, the 4070 would automatically be a better product than it is.
Well, yes and no. Sure, if the 4090 were worse, the 4070 wouldn't look so out of place. But look at the typical generational performance upgrade a 70-class card brings to the table: this one is the lowest jump in years, despite the architecture being good and TSMC N4 being a considerable improvement over Samsung 8nm. So why is the performance below par when we have seen cards like the 4090 smash last gen at the high end?
 
I'm not saying I'm pleased with the 4070's performance; I'm just saying these comparisons miss the point.
 
Because of greed and profit. If they sell you cards with garbage price/performance improvements that use tiny GPU chips, they can refresh them down the road and drag this node/generation out longer. They simply do not want to give you the best they can make, because it doesn't suit their business interests.
 
The 4070 is NOT a tiny chip, though. You measure chips in transistor count. If it were built on Samsung 8nm, the 4070 would be much, much bigger than the 3090 Ti. It is definitely, definitely not tiny. Quite the contrary: it's massive.
 
You are kidding, right? That's the whole point: the node improvement they got going from Samsung 8nm (more like an advanced 10nm, lol) to TSMC's advanced 4nm-class node is HUGE, and they give us almost NONE of those gains. Sure, you get more transistors, but they cut the die sizes down significantly, so you do not benefit much, except with the 4090, and even then it's cut down more than a 3080 was relative to a 3090 Ti, lol.

The 3070's die is about a third bigger than the 4070's, the 3080's die is around two-thirds bigger than the 4080's, and the 4080 is cut down by a HUGE margin from the 4090 this time around. Yes, they are fitting more transistors onto a die and pushing clock speeds higher, but at the same time they are giving you less silicon so they can cut costs and build cheaper GPUs, because more dies per wafer means better yields, all while pushing prices UP, even though their economy of scale has improved. They turned the node jump into pure profit for themselves and zero gains for us, offering increased prices and stagnant price/performance while also skimping hard on RAM, AGAIN!

You either do not have a clue or are trolling hard...

Pick one.
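For what it's worth, the die-size claims in this post can be sanity-checked. The areas below are approximate figures from public spec databases, not from this thread, so treat them as assumptions:

```python
# Rough sanity check of the die-size claims above.
# Die areas (mm^2) are approximate public figures and may be slightly off.
dies = {
    "GA104 (RTX 3070)": 392.5,
    "AD104 (RTX 4070)": 294.5,
    "GA102 (RTX 3080)": 628.4,
    "AD103 (RTX 4080)": 378.6,
}

def pct_bigger(a, b):
    """How much larger die a is than die b, as a percentage."""
    return (dies[a] / dies[b] - 1) * 100

print(f"3070 die vs 4070 die: {pct_bigger('GA104 (RTX 3070)', 'AD104 (RTX 4070)'):.0f}% bigger")
print(f"3080 die vs 4080 die: {pct_bigger('GA102 (RTX 3080)', 'AD103 (RTX 4080)'):.0f}% bigger")
```

On those numbers the previous-generation dies really are substantially larger than their same-tier successors, which is the core of the argument here.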
 
That makes absolutely zero sense. The actual size of the die, in terms of physical dimensions, is irrelevant and useless. What matters is the transistor count. And going by transistor count, the 4070 is NOT a small chip. If physical dimensions are what count to you, go buy a 24nm GPU; those will have huge dies. Sadly they will only pack a fifth of the transistor count of a 4070, but hey, it's all about the dimensions, isn't it? I never understood all this talk about die sizes, transistors, and clock speeds; it never made sense to me. Who cares what profit margins Nvidia makes? Price to performance is what's important; how it gets there, and what profit Nvidia or AMD makes along the way, is totally irrelevant.

And yes, the 4070 sucks, but not because it's a small die, lol. It's absolutely massive: it packs DOUBLE the transistors of the 3070. It just doesn't perform, probably because a big chunk of those transistors went into the huge cache.
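The "double the transistors" point checks out against published figures, and computing density shows where the node jump actually went. Again, the counts and areas below are approximate public figures and my own assumption, not something stated in the thread:

```python
# Transistor counts (billions) and die areas (mm^2); approximate public figures.
chips = {
    "GA104 (RTX 3070)": {"transistors_b": 17.4, "area_mm2": 392.5},
    "AD104 (RTX 4070)": {"transistors_b": 35.8, "area_mm2": 294.5},
}

ga = chips["GA104 (RTX 3070)"]
ad = chips["AD104 (RTX 4070)"]

# Ratio of transistor counts: roughly 2x, as claimed above.
ratio = ad["transistors_b"] / ga["transistors_b"]
print(f"4070 has {ratio:.2f}x the transistors of the 3070")

# Density (million transistors per mm^2): the node jump shows up here,
# even though the 4070 die is physically smaller.
for name, c in chips.items():
    density = c["transistors_b"] * 1000 / c["area_mm2"]
    print(f"{name}: {density:.0f} MTr/mm^2")
```

Both sides of the argument are visible in the output: density roughly tripled (the node gain), but the physical die shrank, so the transistor count "only" doubled.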
 

Try arguing with an Apple uberfan when Apple keeps selling less for more and you will get the same excuses. But even Apple has limits now.

PCMR is now worse than Apple uberfans.
 
You simply do not get it. Clueless. I shan't waste my breath trying to convince you to see sense. Carry on :)
 
"Simply do not get it" is not an argument. I can say the same to you. The problem with the 4070 underdelivering is not the die size, that is pretty much a fact, your opinion on the matter is irrelevant. Performance depends entirely on transistor count, not on die size.

The problem with the card is that they chocked it with the memory bus, so they had to use a big part of that die for cache to alleviate the problem. The whole ada lineup is bandwidth starved, even the 4090.
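The bus point is easy to quantify: peak memory bandwidth is the bus width in bytes times the per-pin data rate. The bus widths and data rates below are the published specs as best I recall them, so treat them as assumptions:

```python
# Peak memory bandwidth = (bus width in bits / 8) * per-pin data rate (Gbps).
# Specs are approximate published figures, not taken from this thread.
cards = {
    "RTX 3070": {"bus_bits": 256, "gbps": 14.0},  # GDDR6
    "RTX 3080": {"bus_bits": 320, "gbps": 19.0},  # GDDR6X
    "RTX 4070": {"bus_bits": 192, "gbps": 21.0},  # GDDR6X, narrower bus
}

for name, c in cards.items():
    gb_s = c["bus_bits"] / 8 * c["gbps"]
    print(f"{name}: {gb_s:.0f} GB/s")
```

On these figures the 4070 edges out the 3070 on raw bandwidth thanks to faster GDDR6X, but falls well short of the 3080, which is the gap the large L2 cache is meant to paper over.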
 