
Exclusive: The AMD Inside Story, Navi GPU Roadmap And The Cost Of Zen To Gamers

The usual excuse from the lazy developers who never made the effort to make it work perfectly. But it is possible. If it isn't possible for them, then they need to find something else to do, not pretend and tell us how to use, or waste, the money we have been paying.
Don't you think it is somewhat unfair to pay for an APU and not be able to use half of it because someone else is lazy and relies on you throwing even more money at the problem?!

Why not just solve the issues once and for all?!

I wouldn't buy an APU and a discrete GPU system.

When I had an Intel CPU I didn't care that the IGP wasn't doing anything. Why would I want my 4770K's IGP to work with my 290X? It would be terrible, as it was when Lucid Hydra did it.
 
I wouldn't buy an APU and a discrete GPU system.

The i7-5775C with its Iris Pro Graphics 6200 and the Ryzen 5 2400G with its Radeon Vega 11 Graphics are APUs, and their graphics performance is decent enough to make people want this additional acceleration, for example on top of their usual RX 550, RX 560, RX 570, RX 580 or GTX 1050, GTX 1060 discrete graphics.

This acceleration in itself can add enough frames per second to make the difference between an unplayable and a playable experience in a given scenario.

Why would I want my 4770K's IGP to work with my 290X?

Because you want more frames per second and you want to improve your gaming experience.

It would be terrible.

It would be amazing.
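
To put a rough number on "amazing": below is a minimal Python sketch of the pipelining pattern that DirectX 12-style explicit multi-adapter demos use, where the discrete card renders the scene while the iGPU post-processes the previous frame. The helper names and all timings are invented for illustration, not measurements.

def serial_frame_time(render_ms, post_ms):
    # One GPU does everything: the stages run back to back.
    return render_ms + post_ms

def pipelined_frame_time(render_ms, post_ms):
    # dGPU renders frame N while the iGPU post-processes frame N-1,
    # so steady-state frame time is the slower of the two stages.
    return max(render_ms, post_ms)

render_ms, post_ms = 22.0, 6.0  # hypothetical stage costs in ms
print(f"dGPU alone : {1000 / serial_frame_time(render_ms, post_ms):.0f} fps")    # ~36 fps
print(f"dGPU + iGPU: {1000 / pipelined_frame_time(render_ms, post_ms):.0f} fps")  # ~45 fps

The gain obviously depends on how much of the frame can be offloaded to the slower iGPU, which is exactly the work developers would have to do per title.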

It is in AMD's interest to drop monolithic large chips that are difficult to design and manufacture, and instead focus on small, relatively cheap-to-design-and-manufacture ones.
That way scalability returns and isn't limited by the wrong architecture.

Imagine next-gen graphics cards similar to Ryzen, tiered as in the sketch below.
The ultra-enthusiast cards get four small chips acting as one large one.
The performance-segment cards get three small chips.
Mainstream-segment cards get two small chips.
Entry-segment cards get one small chip.
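
As a toy illustration of that tiering, assuming a hypothetical 16-CU chiplet as the common building block (an invented figure, not a spec or leak):

CUS_PER_CHIPLET = 16  # hypothetical building block size

lineup = {
    "ultra-enthusiast": 4,  # four small chips acting as one large
    "performance":      3,
    "mainstream":       2,
    "entry":            1,
}

for segment, chips in lineup.items():
    print(f"{segment:>16}: {chips} chiplet(s), {chips * CUS_PER_CHIPLET} CUs total")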

Some people say the multi-GPU future will only arrive when single-GPU performance hits a solid wall.
The thing is that even today graphics card solutions lag behind the available display resolutions.
The majority of graphics cards make 4K gaming quite unpleasant, not to mention 8K, which will need 4 times as much power. Where will it come from?
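
The "4 times" figure falls straight out of the pixel counts; a quick check in Python:

resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}
four_k = resolutions["4K"][0] * resolutions["4K"][1]
for name, (w, h) in resolutions.items():
    px = w * h
    print(f"{name:>5}: {px / 1e6:5.1f} megapixels ({px / four_k:.2f}x 4K)")
# 8K has exactly 4x the pixels of 4K, so fill-rate-bound work
# roughly quadruples.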
 
Anyone taking this stuff seriously is crazy; think about where Intel are. They are announcing that their server business expects to take a $4 billion revenue hit in the next year and that 2019 is going to be terrible for them. The only good news they can put out is that new GPUs are coming in 2020, and they are pushing a media narrative that Raja had nothing to do with the failings of Vega as a way to make shareholders less worried about 2020 as well. It's protecting the share price, nothing more or less. There has been a large-scale push across the media to spin Raja as an innocent victim at the exact time Intel are talking about their GPUs.


It makes absolutely no sense. Nvidia and AMD, both before now and for years to come, have always split their engineering teams. You don't finally tape out and ship, let's say, Maxwell and start work on Pascal the same day. At any given time teams are usually working on three different GPU projects, and the number of people on each project shifts over time.

At some point most of the team would be working on the 7970 while, let's say, 25% are working on the 290X and another 5% on Fiji. As the major work on the 7970 gets closer to the end, more of the team moves to the 290X and Fiji; maybe it's now 20% on the 7970 to finish it off, 35% on the 290X, 10% on Fiji and the rest on Polaris. Then the 7970 gets completely finished and people move to the other teams.

The idea that 2/3rds of the engineers were moved off Vega, as Vega was close to done and Navi needed the work, is absolutely and completely normal. It couldn't possibly work any other way unless you want a three-year gap between new GPU releases.

The fact that these stories are so misleading and stupid is, well, worrying. The entire time, Vega was dressed up as Raja's baby, and now, after the fact, he supposedly only did the things he was bragging about in Vega because he was forced to... nonsense. He was forced to move more people to the next architecture... you mean like he had to do regardless, or he'd be failing in his job miserably.

Intel are extremely desperate at this point to find anything at all to talk up 2020, and the fact that it isn't performance, or fantastic ideas, or brilliant early samples, but instead "Raja's not as big of a screw-up as you think he is, guys"... I'd say that's troublesome.

If the GPUs were predicted to hit 1080 Ti performance in 2020 they'd be screaming that, and about how it would help them make a big impact in all segments, particularly HPC. Instead they are talking up that Raja isn't to blame for the projects under him.
 
Mac sales:
Q4 2017: 5.4 million (up 1.2 million from the previous quarter)
Q1 2018: 5.2 million
Q2 2018: 4.1 million

Believe me, there are crazy people out there who will buy a £1,800 PC for £9,000 because it is an all-in-one and it's Apple.
That iMac Pro seems to start with a $1,113 (wholesale) workstation CPU and has upgrade options from there, so it's hardly commodity parts.
 
AMD Navi GPUs Will Not Use MCM Design, Feature Single Monolithic Die Instead, Reveals RTG SVP – Yet To Conclude If MCM Can Be Used in Traditional Gaming Graphics Cards

https://wccftech.com/amd-navi-gpus-not-using-mcm-feature-monolithic-die-radeon-rx-gaming-cards/


Entirely predictable really. We are a long way from MCM being able to scale to GPU demands with the workloads gaming requires, due to latency and coherency.
For HPC applications, maybe; the patents for such technology applied to graphics are very immature and recent.
It is one thing connecting two 8-core CPUs together, and even that has issues with Zen when tested appropriately, but putting together 2 to 4 dies of 3,000-5,000 cores each is entirely different.
 
Entirely predictable really. We are a long way from MCM being able to scale to GPU demands with the workloads gaming requires, due to latency and coherency.
For HPC applications, maybe; the patents for such technology applied to graphics are very immature and recent.
It is one thing connecting two 8-core CPUs together, and even that has issues with Zen when tested appropriately, but putting together 2 to 4 dies of 3,000-5,000 cores each is entirely different.

It'll come. I strongly suspect the next (proper) generation of GPUs from each vendor will have internal architectural tweaks aimed at enabling MCM designs in the future, and then, if everything goes smoothly, we'll start to see MCM designs emerge in the next major release after that.

These MCM designs won't be what people keep talking about, i.e. Ryzen-like, with multiple monolithic blocks that talk to each other. Instead, this will be spreading out the sub-systems of a traditional GPU across the substrate, with a mixture of command and (headless) processing packages and the ability to scale certain areas up or down as desired. We aren't quite there yet, but advances in substrate technology are almost there, and the same goes for the semiconductor nodes; a refined 7nm should facilitate it.

EDIT: Though initial versions will probably be closer to resembling monolithic cores talking to each other, while later iterations will be much further removed from that.
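
A toy sketch of that disaggregated layout, with one command package plus a variable number of headless compute packages. The class, tier names and core counts below are illustrative assumptions only, not anything a vendor has described:

from dataclasses import dataclass

@dataclass
class Package:
    kind: str          # "command" (front end, display out) or "compute" (headless)
    shader_cores: int

def build_gpu(compute_packages, cores_per_package):
    # Scale a configuration up or down simply by varying how many
    # headless compute packages sit next to the one command package.
    gpu = [Package("command", 0)]
    gpu += [Package("compute", cores_per_package) for _ in range(compute_packages)]
    return gpu

for tier, n in [("entry", 1), ("ultra-enthusiast", 4)]:
    cfg = build_gpu(n, cores_per_package=1024)  # 1024 is a made-up figure
    print(f"{tier}: {len(cfg)} packages, {sum(p.shader_cores for p in cfg)} shader cores")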
 
Entirely predictable really. We are a long way from MCM being able to scale to GPU demands with the workloads gaming requires, due to latency and coherency.
For HPC applications, maybe; the patents for such technology applied to graphics are very immature and recent.
It is one thing connecting two 8-core CPUs together, and even that has issues with Zen when tested appropriately, but putting together 2 to 4 dies of 3,000-5,000 cores each is entirely different.
I wonder whether MCM (N=2) being targeted exclusively at VR would fare a bit better because there's slightly more of an inherent division when rendering for the left and right eyes. Of course the textures and shaders in use would be largely the same.
 
I wonder whether MCM (N=2) being targeted exclusively at VR would fare a bit better because there's slightly more of an inherent division when rendering for the left and right eyes. Of course the textures and shaders in use would be largely the same.


A lot of the rendering can actually be shared in VR, which is part of the idea behind Pascal's Simultaneous Multi-Projection. In any case, VR is too small a market to justify such a complex project.

MCM definitely makes sense when it is possible but we just aren't there yet.
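
A back-of-the-envelope model of why the per-eye split gains less than it looks, using invented per-frame costs: a single GPU with multi-projection pays the shared cost once, while each die in a two-die split duplicates it.

shared_ms = 5.0   # culling, shadow maps, frame setup: done once per frame
per_eye_ms = 4.0  # per-eye projection and shading

one_gpu_smp = shared_ms + 2 * per_eye_ms  # single GPU sharing work across eyes: 13 ms
two_dies    = shared_ms + per_eye_ms      # each die redoes the shared work: 9 ms
print(f"speedup from the second die: {one_gpu_smp / two_dies:.2f}x (well short of 2x)")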
 
The RX 580 is exactly what the 4890/4870 were. RX Vega 64 is what the 4870 X2 was.
The 6970 also had a 6990 variant, and the 7970 had a 7990 variant (though those were slightly above the sweet spot).

The RX 580 falls right into the old sweet-spot strategy practiced by AMD.
You're in so much denial.
 
The usual excuse from the lazy developers who never made the effort to make it work perfectly. But it is possible. If it isn't possible for them, then they need to find something else to do, not pretend and tell us how to use, or waste, the money we have been paying.
Don't you think it is somewhat unfair to pay for an APU and not be able to use half of it because someone else is lazy and relies on you throwing even more money at the problem?!

Why not just solve the issues once and for all?!


How many game developers do you know? Are you a graphics programmer? Would you, in your own profession, spend time, money and effort on a third party's technology if a small fraction of the market uses it and the third party in question no longer maintains, supports or develops it?
 
How many game developers do you know? Are you a graphics programmer? Would you, in your own profession, spend time, money and effort on a third party's technology if a small fraction of the market uses it and the third party in question no longer maintains, supports or develops it?

APUs will be one of the fastest-growing segments in the industry.
 
This is bad news.
They said Navi will be Vega 64-type performance for around $200, which means they stand no chance against Nvidia's future GTX 1160/1170/1180/Ti.

:(

Why don't they? Nobody is expecting the 1160 to match the 1080 performance-wise for $200. The first Navi chip will likely be tiny; bigger chips with much more power will follow. I will need to see a Navi chip working before writing it off. It's very doubtful they can fully catch up with Nvidia on that timescale, but it could still be good if the price/performance is right.

If Navi did come to market at $200 with GTX 1080 performance I'm sure it would fly off the shelves, but to me it's a little far-fetched, especially the price.
 
The 7970 launched at $550 and the 680 launched at $500.
It's worth noting the context there: when the HD 7970 launched it was the most powerful GPU on the planet, and notably so, so its price was justified. Later, when the GTX 680 launched, AMD dropped the price of the 7970 (and revised the BIOS/drivers to increase performance and maintain the lead).
 