
What do you think of the 4070 Ti?

I don't think there will be a cheaper 4-series card in the near future. nGreedia's thinking is that we already have mid-tier cards in the 3-series, like the 3070, 3080 and so on. That's the way nGreedia are going with "Moore's law is dead".
 
If they were paying on a per-transistor basis that would mean something, but they don't, so...
You don't price a GPU on die size either. I'd say the transistor density and count are both important factors when comparing products.

Large dies are inefficient; when they can't shrink them down, you end up with refresh products in many cases.

But it goes without saying that the transistor density is a big improvement from Samsung 8nm to TSMC 4/5nm.
 
No amount of arguing is going to bring the prices down. Don't like it... don't buy it. But most will, good for them. Now, go game!


You go game; I prefer to rip on Jensen and Lisa for the price-fixing scam that is the discrete GPU market. Then again, you bought a 4080, so that's all I need to know about your perspective on rip-off GPUs.
 
I believe wave32/wave64 is an internal, AMD-only specification. Nvidia is a superscalar architecture; do they even need to batch/queue instructions like that?
What I don't understand is that AMD said RDNA was a move to a more gaming-oriented uarch that was easier to extract performance from, unlike GCN. GCN lived on in CDNA.

Now they have gone back and made it harder to get effective gaming performance, like GCN. Makes me wonder if AMD is trying to go back to a one-size-fits-all design, with all the issues they had in the past.

It's utterly weird.
 
3090 Tis, even second-hand, are £900. If you can grab a 4070 Ti for MSRP, you're better off on electricity usage. That's also taking into consideration that you're getting a second-hand card vs a new one.

You pay your money you make your choice I guess.

Edit - I tried quoting multiple messages and failed... This was in response to the electricity use calculations against a 3090 Ti vs a 4070 Ti.
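For anyone curious, the electricity comparison is easy to ballpark. A rough sketch, assuming the official board powers (450W for the 3090 Ti, 285W for the 4070 Ti), a couple of hours of gaming a day and a ~£0.34/kWh unit rate; swap in your own numbers, this is only illustrative:

```python
# Rough sketch of the electricity argument above. Gaming hours and the unit
# price are assumptions; the board powers are the official TBPs.

HOURS_PER_DAY = 2          # assumed gaming time
DAYS_PER_YEAR = 365
PRICE_PER_KWH = 0.34       # assumed UK unit rate in GBP/kWh

def yearly_cost(watts: float) -> float:
    """Electricity cost per year for a card drawing `watts` while gaming."""
    kwh = watts / 1000 * HOURS_PER_DAY * DAYS_PER_YEAR
    return kwh * PRICE_PER_KWH

cost_3090ti = yearly_cost(450)   # used 3090 Ti
cost_4070ti = yearly_cost(285)   # new 4070 Ti
print(f"3090 Ti: £{cost_3090ti:.0f}/yr, 4070 Ti: £{cost_4070ti:.0f}/yr, "
      f"saving £{cost_3090ti - cost_4070ti:.0f}/yr")
```

At those assumptions the saving is in the region of £40 a year, so the "you pay your money, you make your choice" point stands either way.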
Why buy a 3090 Ti for £900 when you could pick up a 3090 for £650 or a 3080 for under £500, then undervolt the cards to 250-300W for no performance loss?
 
You go game; I prefer to rip on Jensen and Lisa for the price-fixing scam that is the discrete GPU market. Then again, you bought a 4080, so that's all I need to know about your perspective on rip-off GPUs.
The problem is not only die size but the whole GPU tier within the line-up, and the relative technical specs vs the top SKUs. It's the same thing Apple and Samsung do with phones, where they mess with the line-ups.

This is why the mainstream cards are getting relatively worse and worse, and why consoles are becoming more important for devs when determining base specifications. The mainstream cards are slipping lower and lower down the line-up as the top goes upwards. The PCMR whales don't get it.
 
Yes, please everyone stop buying 'new' RTX 3080s/3090s. Think of what you are indirectly communicating to Nvidia...

The MSRPs on these were already pretty high at launch...

Same thing for RTX 3070s, but the problem isn't so acute.
 
What I don't understand is that AMD said RDNA was a move to a more gaming-oriented uarch that was easier to extract performance from, unlike GCN. GCN lived on in CDNA.

Now they have gone back and made it harder to get effective gaming performance, like GCN. Makes me wonder if AMD is trying to go back to a one-size-fits-all design, with all the issues they had in the past.

It's utterly weird.
AMD's architecture is more complex than Nvidia's; they rely on SIMD units and vectorized instructions to maximize utilization and performance. I haven't really read a lot about it, just skimmed through a few white papers, so someone with a better handle on it should be able to provide more clarity on what's really going on. I remember reading somewhere that the 5700 XT got rid of wave64 because the batch was too large for optimal utilization, then I skimmed through another article which did some tests and concluded that the 7900 XTX is actually working at half the advertised TFLOPS, and then they theorized about wave32/wave64 modes.
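If it helps, the "half the advertised TFLOPS" claim is usually put down to RDNA 3's dual-issue FP32: the headline figure counts two FP32 ops per lane per clock, which ordinary shader code often can't exploit. A back-of-the-envelope sketch using the published 7900 XTX shader count and boost clock (treat the numbers as approximate):

```python
# Back-of-the-envelope for the "half the advertised TFLOPS" claim.
# 6144 stream processors and ~2.5 GHz boost are the published 7900 XTX figures;
# whether dual-issue actually gets used is workload-dependent, which is the
# whole point of the argument.

SHADERS = 6144            # 96 CUs x 64 lanes
BOOST_GHZ = 2.5
FLOPS_PER_FMA = 2         # one fused multiply-add = 2 floating-point ops

def tflops(dual_issue: bool) -> float:
    issue_width = 2 if dual_issue else 1   # RDNA 3 can co-issue a 2nd FP32 op
    return SHADERS * BOOST_GHZ * FLOPS_PER_FMA * issue_width / 1000

print(f"single-issue: {tflops(False):.1f} TFLOPS")   # ~30.7, closer to what games see
print(f"dual-issue:   {tflops(True):.1f} TFLOPS")    # ~61.4, the marketing number
```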
 
Why buy a 3090 Ti for £900 when you could pick up a 3090 for £650 or a 3080 for under £500, then undervolt the cards to 250-300W for no performance loss?
At a considerable risk. Always get a (reliable) warranty on expensive used hardware. Except for maybe CPUs, they seem pretty robust.
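On the undervolting suggestion quoted above: a simple power cap isn't a true undervolt (that's normally done on the voltage/frequency curve in something like MSI Afterburner), but it gets you most of the way to a 250-300W target. A minimal sketch that shells out to nvidia-smi, assuming it's on your PATH and you have admin rights; the 275W figure is just an example:

```python
# Power-cap a GeForce card via nvidia-smi. Not a true undervolt, just a cap on
# board power; requires admin rights and nvidia-smi on PATH.
import subprocess

TARGET_WATTS = 275   # arbitrary example within the 250-300W range suggested

def cap_power(gpu_index: int = 0, watts: int = TARGET_WATTS) -> None:
    # Query the current limit first so you know what you're changing from.
    current = subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index),
         "--query-gpu=power.limit", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(f"GPU {gpu_index} current limit: {current}")

    # Set the new board power limit in watts.
    subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)],
                   check=True)
    print(f"GPU {gpu_index} limit set to {watts} W")

if __name__ == "__main__":
    cap_power()
```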
 
Will 12GB be enough for games with HD texture packs?
Until Far Cry 7 is released :D
10GB isn't enough for FC6 at 1440p with the HD texture pack, which brings me to the comparison of a used 3090 Ti vs a 4070 Ti that someone mentioned. One is a 4K+ GPU; the other is probably targeting up to 1440p, I think. I don't think it's a case of someone buying a new 4070 Ti instead of a used 3090 Ti if their use is at 4K or higher, unless the new/improved frame generation stuff reduces the memory requirements somewhat?
 
Anyway, the price should be around £790 for the RTX 4070 Ti (but no more than this), considering the performance improvement of 50% (1% lows) vs the RTX 3070 Ti.

Or £733, considering the performance improvement of 56% (1% lows) vs the RTX 3070.

Ideally, it would be priced at £700.
That's not how prices work. If we had to pay X more for X more performance, we'd be paying millions of times more for everything: cars would cost more than the national debt of Haiti, CPUs would cost more than the national debt of Iraq, and GPUs would cost more than the national debt of India.

Not only that, it would make buying anything that improved performance pointless. When it costs 1% more for 1% more performance, there's literally zero reason to buy the new thing, as all you're doing is buying a GeForce 256 x 1000 for GeForce 256 money x 1000.
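For what it's worth, the quoted £790 and £733 figures look like the previous cards' launch prices scaled by the quoted 1%-low gains; the £529 (3070 Ti) and £469 (3070) UK launch prices below are my assumption, not something stated in the thread. A rough sketch, which also shows why scaling price with performance compounds badly over generations:

```python
# Where the quoted £790 / £733 figures appear to come from: the previous card's
# launch price scaled by the quoted 1%-low improvement. The £529 and £469
# launch prices are assumed for illustration.

def scaled_price(old_price: float, perf_gain: float) -> float:
    """Price the new card as 'old price x relative performance'."""
    return old_price * (1 + perf_gain)

print(f"vs 3070 Ti: £{scaled_price(529, 0.50):.0f}")   # ~£794, close to the £790 quoted
print(f"vs 3070:    £{scaled_price(469, 0.56):.0f}")   # ~£732, close to the £733 quoted

# The reply's objection: compound that rule over many generations and prices
# explode, e.g. ten generations of +50% performance at +50% price each time.
price = 469
for gen in range(10):
    price = scaled_price(price, 0.50)
print(f"after 10 such generations: £{price:,.0f}")     # ~£27,000 for one GPU
```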
You don't price a GPU on die size either. I'd say the transistor density and count are both important factors when comparing products.

Large dies are inefficient; when they can't shrink them down, you end up with refresh products in many cases.

But it goes without saying that the transistor density is a big improvement from Samsung 8nm to TSMC 4/5nm.
No, you don't, but I didn't say that; I said they, and you brought up transistors, implying that you were also talking about what designers (AMD, Nvidia) pay semiconductor companies. Transistor density and count are both important factors when comparing products, but we're not comparing products, we're comparing price to performance.

Yes, large dies are inefficient; not sure why you think that's relevant to a discussion on prices, but whatever. Yes, when you can shrink them down you end up with refresh products in many cases, and that refreshed product costs less to produce or is more performant, because you can either fit the same number of transistors on a smaller area of silicon or put more on the same area, thus reducing the price or increasing the performance-to-area ratio.

The only reason higher transistor density matters is that it either lowers costs for the same performance (the same number of transistors fits into a smaller area) or improves performance for the same cost (more transistors fit into the same area). That's simply not happening here, at least for end customers, because we're not getting the same performance for cheaper and we're not getting more performance for the same price; we're getting more cost and more performance, which makes the whole reason for shrinking the fabrication process entirely pointless.
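To make the "same transistors in a smaller area means cheaper dies" point concrete, here's a rough sketch using the standard dies-per-wafer approximation; the die areas and the wafer cost are assumed, purely illustrative numbers, and yield is ignored:

```python
# Sketch of the "shrink the die, get cheaper dies" argument. Wafer cost is held
# constant to isolate the geometry effect (in reality newer nodes cost more per
# wafer); the dies-per-wafer formula is the usual 300 mm approximation.
import math

WAFER_DIAMETER_MM = 300.0
WAFER_COST = 10_000          # assumed cost per wafer, same for both cases

def dies_per_wafer(die_area_mm2: float) -> int:
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def cost_per_die(die_area_mm2: float) -> float:
    return WAFER_COST / dies_per_wafer(die_area_mm2)

# Same transistor count squeezed into a smaller die after a node shrink:
big, small = 400.0, 250.0    # mm^2, assumed before/after areas
print(f"{dies_per_wafer(big)} dies at {big} mm^2 -> ${cost_per_die(big):.0f} each")
print(f"{dies_per_wafer(small)} dies at {small} mm^2 -> ${cost_per_die(small):.0f} each")
```

Under those assumptions the shrunk die comes out roughly 40% cheaper per unit of silicon, which is exactly the saving the post argues customers aren't seeing in the sticker price.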
 
That's not how prices work. If we had to pay X more for X more performance, we'd be paying millions of times more for everything: cars would cost more than the national debt of Haiti, CPUs would cost more than the national debt of Iraq, and GPUs would cost more than the national debt of India.

Not only that, it would make buying anything that improved performance pointless. When it costs 1% more for 1% more performance, there's literally zero reason to buy the new thing, as all you're doing is buying a GeForce 256 x 1000.
You don't get to decide the pricing though lol. They set it based on what they think people will pay (willingly or not).

I tend to pay based on how much performance improvement I'll get. If the price is way too high, I just wait it out for another gen (until I get a performance improvement I can afford).
 
You don't get to decide the pricing though lol. They set it based on what they think people will pay (willingly or not).

I tend to pay based on how much performance improvement I'll get. If the price is way too high, I just wait it out for another gen (until I get a performance improvement I can afford).

You will just keep paying more and more money every time something improves?
 
You don't get to decide the pricing though lol. They set it based on what they think people will pay (willingly or not).

I tend to pay based on how much performance improvement I'll get. If the price is way too high, I just wait it out for another gen (until I get a performance improvement I can afford).
I do, just like every customer does, just like you say they do when you say they set it based on what they think people will pay.

Just like when you say you tend to pay based on how much performance improvement you'll get, you're literally saying cost to performance is one of the key metrics that influences your purchasing decision, so why you think transistors are in any way relevant is beyond me. Other than what I already laid out: higher densities should get you more performance for the same price, or the same performance for a lower price, something that once again we're not seeing, because you're paying higher prices for higher performance when it should be, roughly, higher performance for the same price.

e: Let's put the logic of "I tend to pay based on how much performance improvement I'll get" to the test. An RTX 4090 is, what, at least a thousand times faster than a GeForce 256 (a very conservative estimate), so if you got a thousand times more performance and paid accordingly, you'd be willing to pay £256,000 (a quarter of a million) for a 4090, yes?
 
Each generation should be better than the previous, but each generation should occupy similar price points accounting for inflation.

It shouldn't be "this new one is better than the old one so it's only right for you to pay double"

I still can't get over Digital Foundry using a PS5 comparison against a 3k PC.

That's to be expected from DF, the biggest NV shills on the internet.
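To put a number on "similar price points accounting for inflation": take an older card's launch price and grow it by cumulative inflation rather than by the performance gain. The £529 starting price and the inflation rates below are assumptions for illustration only:

```python
# Sketch of "same price point, adjusted for inflation". The £529 launch price
# and the ~4% / ~9% annual inflation figures are assumed for illustration.

def inflation_adjusted(old_price: float, annual_rates: list[float]) -> float:
    price = old_price
    for rate in annual_rates:
        price *= 1 + rate
    return price

# e.g. a £529 card launched two years earlier, inflated across two years
print(f"£{inflation_adjusted(529, [0.04, 0.09]):.0f}")   # ~£600, well below the launch price being argued about
```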
 
You will just keep paying more and more money every time something improves?
Nope, we each have a limit to what we can afford or will pay (this is likely to be influenced by how much we were willing to pay in the past). For me, the limit for a single component at the moment would be £600. Maybe £700 if the performance improvement was huge (like 80-100% or something).

It doesn't kill people to wait, but the way some go on, you'd think it did :D

I think the prices we're seeing for GPUs are probably going to be the new normal. They'll improve as inflation improves, but you'll need to wait maybe a year or so for prices to seem more reasonable. The big warning signs are the high MSRPs and the likely lack of reference models.
 