
AMD are back to their shenanigans. One step forward, three steps back...

AMD have tried something different and the performance is their best ever. Only time will tell whether the genuine issues are fixable or inherent to the new design.

Prices suck, but that's a separate issue from performance. No one is getting screwed by anyone other than themselves when buying fancy toys.
tbh I see the new chiplet GPU thing as similar to buying Arc cards: it's something new and it's going to have issues. Anyone expecting it not to have faults was just deluded.
 
I do think it's neat that AMD are going with a new MCM design, and as with anything new there will always be a few bugs to sort out. But I thought the whole point of MCM this time around was to reduce costs, yet as a consumer I'm not seeing that; in fact the 2nd-tier card has increased in price by $250 while the top card has stayed static.
 
It was to reduce costs for AMD, not for you.
 
And it's highly likely that when Nvidia launch their version of MCM it will also have teething issues (as all new technologies do - e.g. 1st-gen Ryzen, Intel's P+E cores, etc.).
 
From the feedback of someone who owns a 7900 XTX: he prefers it over his 4080 (which he bought after owning and returning a 4090) and even thinks it makes the 4090 look excessively overpriced.

Undervolting to 1050-1070 mV and overclocking the VRAM gives him up to a ~10% performance uplift, as higher boost clocks can be reached. With the power limit increased he can get ~15% extra performance. His own tests of the 4080 and 4090 gave ~5% from overclocking. He concluded the 7900 XTX MBA was noticeably faster than a 4080 in his preferred non-RT games and nipping at the heels of the far more expensive 4090.

Not bad for a supposedly broken GPU.
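
Those numbers are easy to sanity-check. Here's a minimal back-of-the-envelope sketch in Python, assuming the two cards start level at stock in raster and taking the quoted OC headroom at face value - illustrative figures, not benchmarks:

```python
# Back-of-the-envelope comparison of the claimed tuning headroom.
# All baseline figures are illustrative assumptions, not measured benchmarks.

baseline = {"7900 XTX": 100.0, "RTX 4080": 100.0, "RTX 4090": 125.0}  # assumed stock raster index
uplift = {
    "7900 XTX": 0.15,  # ~15% claimed with undervolt + VRAM OC + raised power limit
    "RTX 4080": 0.05,  # ~5% claimed OC headroom
    "RTX 4090": 0.05,
}

for card, stock in baseline.items():
    tuned = stock * (1 + uplift[card])
    print(f"{card}: stock {stock:.0f} -> tuned {tuned:.1f}")

# With these assumptions a tuned XTX (115) lands ~10% ahead of a tuned 4080 (105)
# and within ~12% of a tuned 4090 (131.3) - consistent with "nipping at the heels".
```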
 

Indeed. Still overpriced though.
 
It's funny how people are making a big fuss about this being "beta-testing", when the entire RTX 2000 series generation was basically that (with laughable RT performance by today's standards - or even back when they launched - and DLSS 1.0 lol).

Turing was crap too though. That's why sales were poor and Jensen had to tell his Pascal friends it was now safe to upgrade :cry:
 
It was to reduce costs for AMD, not for you.

This is very true.

In AMD's own recent charts explaining to GamersNexus why they use MCM for their GPUs, they showed some CPU comparisons and, whether intentionally or not, confirmed that a 16-core Ryzen costs half as much to make as a monolithic Intel 16-core because it's MCM. Yet AMD charges a significantly higher price for its 16-core than Intel despite it costing 50% less to make, proving that any cost benefit from MCM is for AMD's investors, not for customers.

Under Lisa Su, AMD has for the last three years priced products based on their performance relative to the competition, not on how much they cost to make, and will continue to do so. Consumers can forget about MCM bringing any savings in GPU prices.
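
The underlying cost argument is easiest to see with a toy yield model. The sketch below is illustrative only - the wafer cost, defect density, and die sizes are assumptions, not figures from AMD's slides - but it shows why four small dies can come out at roughly half the cost of one big one:

```python
# Toy die-cost model: why chiplets can roughly halve manufacturing cost.
# Wafer cost, defect density, and die sizes are illustrative assumptions.
import math

WAFER_COST = 10_000     # $ per 300 mm wafer (assumption)
WAFER_DIAMETER = 300    # mm
DEFECT_DENSITY = 0.002  # defects per mm^2 (assumption, i.e. 0.2 per cm^2)

def dies_per_wafer(area_mm2: float) -> float:
    # Common approximation: gross dies minus edge losses.
    r = WAFER_DIAMETER / 2
    return math.pi * r**2 / area_mm2 - math.pi * WAFER_DIAMETER / math.sqrt(2 * area_mm2)

def die_yield(area_mm2: float) -> float:
    # Poisson yield model: small dies are far more likely to dodge defects.
    return math.exp(-area_mm2 * DEFECT_DENSITY)

def cost_per_good_die(area_mm2: float) -> float:
    return WAFER_COST / (dies_per_wafer(area_mm2) * die_yield(area_mm2))

monolithic = cost_per_good_die(400)    # one 400 mm^2 die
chiplets = 4 * cost_per_good_die(100)  # four 100 mm^2 dies; packaging cost ignored

print(f"monolithic 400 mm^2: ${monolithic:.0f} per good die")
print(f"4 x 100 mm^2 chiplets: ${chiplets:.0f} per set")
# ~$155 vs ~$76 with these numbers - about half, before packaging costs.
```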
 
There is the performance scaling benefit as well. Nvidia and Intel will go MCM too.
 
I suspect they are still lagging some way behind Intel on cash reserves. Even if they catch up, I can't see them accepting less money if the market will bear the higher prices. That said, there are much harder times around the corner; there are huge debts out there and it won't take much to tip things over into a depression. No one will be selling at high prices then, but at least AMD will have some room to drop prices and still be profitable. Intel would be in big trouble.
 
With the amount of cache the GPU now has, is there ever any reason to run the memory at high speeds for desktop use even with 3-4 high-res, high-refresh screens?

Certainly, the TPU clock measurements (e.g. the multi-monitor clock chart at https://www.techpowerup.com/review/sapphire-radeon-rx-7900-xtx-nitro/38.html) show that with multiple monitors the memory runs at full speed while the core sits idle. If possible, running both the memory and the core low and putting the load on the cache would be the way to go. Caching the "frame buffer" would make sense for mobile too.

As for A0 silicon, it's hard to know whether a future A1 would fix these things. Shipping a part on the first revision is of course the goal of every silicon vendor, but that isn't to say the A0 is actually - mostly - bug-free. (Almost no chip is ever totally bug-free, hence errata and firmware/driver fixes.)
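
For a sense of scale, here's a quick calculation of front-buffer size and scanout bandwidth against Navi 31's 96 MB Infinity Cache (the cache size is the published figure; the monitor mix and 32-bit colour are assumptions):

```python
# Rough scanout maths: could the front buffers live in Infinity Cache?
# 96 MB matches Navi 31's published cache size; the monitor mix is assumed.

INFINITY_CACHE = 96e6   # bytes
BYTES_PER_PIXEL = 4     # 32-bit colour (assumption)

monitors = [
    (3840, 2160, 144),  # 4K @ 144 Hz (assumed)
    (3840, 2160, 144),
    (2560, 1440, 165),  # 1440p @ 165 Hz (assumed)
]

total_buffer = sum(w * h * BYTES_PER_PIXEL for w, h, _ in monitors)
total_bandwidth = sum(w * h * BYTES_PER_PIXEL * hz for w, h, hz in monitors)

print(f"front buffers: {total_buffer / 1e6:.0f} MB of {INFINITY_CACHE / 1e6:.0f} MB cache")
print(f"scanout bandwidth: {total_bandwidth / 1e9:.1f} GB/s")
# ~81 MB of buffers and ~12 GB/s of scanout - the buffers (just) fit, and the
# bandwidth is trivial next to GDDR6 at full speed, so idling the memory and
# scanning out from cache looks plausible on paper.
```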
 
And AMD price that way while having a far worse brand name, generally not polishing their releases as well, and possibly offering fewer features (not that I personally care for DLSS or the video encoders, but others do) - and of course we all know they have invested far less silicon in RT*.

This seems to be a strategy to get the Radeon Group down to 10% market share!

* Approaching RT with a view to re-using and adding bits to the existing parts of the GPU is fine - wasting silicon benefits nobody - but if that leaves you with far less RT performance, you have to price lower or add more RT "bits" at the cost of a bit more die space.
 
I think a lot, if not all, of these issues are overblown and I expect most will be resolved. The power issues look like a driver problem.

It's the same over on the 4000-series side of things. We have had threads proclaiming they would burn your house down, and a few black-screen issues that were resolved with a firmware update by Nvidia.

All par for the course for new GPU releases, yet you would think Nvidia and AMD were doing it on purpose. Well, other than the extortionate pricing of course. :)
 
There is always some issue: the 3000 series were unicorn ***** , impossible to buy; the 2000 series had RTX but no games using it. tbf I don't remember anything for the 1000 series, but I definitely remember the 900 series and the 3.5GB-VRAM 970 I was lumbered with for 6 years :D
 