
Exclusive: The AMD Inside Story, Navi GPU Roadmap And The Cost Of Zen To Gamers

Multi-GPU versus single GPU, and you're comparing the flagship AMD single GPUs to a mid-range single GPU.

It's not even close to the same thing.

I'm not sure what's "sweet" about the 580 price point given that you could get that performance from 290Xs years and years ago...

Wait, what??
Single-chip GPU vs crossfire on one card... How's that even comparable, LOL.
Bet they would make a Vega 64 X2, but the PC's case would catch fire.

The 580 is 290X performance from ALMOST 5 years ago at a £150 lower price...

If AMD made an RX 580 X2, it would be as fast as Vega 64, right?

I meant this:

Just as we saw with the Radeon HD 4870 launch last year, AMD is going for a "sweet spot" strategy in which they design a GPU for the performance segment and then address other areas based on that GPU. With the HD 4870 they released the HD 4850, 4830 and 4770 for the lower price points and also created the HD 4870 X2 dual-GPU card for the ultra-high end segment. It looks like AMD is again taking that approach as we'll see Hemlock dual-GPU cards out before the end of the year based on this roadmap.
https://www.pcper.com/reviews/Graph...Card-and-AMD-Eyefinity-Review/Radeon-HD-5800-

https://www.hardocp.com/article/2009/09/22/amds_ati_radeon_hd_5870_video_card_review/1



Crossfire on a single PCB is what Infinity Fabric is for Threadripper now.
 
I still fail to see how a £250-300 580 is priced at the sweet spot; it's stagnated performance-wise for years and years.

I find it depressing.

Also, given how poor crossfire/SLI seem these days, I'm not sure how much AMD care about a crossfire GPU.
 
Nvidia are surely beating them at that game, then.

580: 232 mm² die area, 5,700 million transistors

1060: 200 mm² die area, 4,400 million transistors



The 580 is a rebranded, overclocked 480, which is basically a 290X shrunk down.

Pascal is basically a die shrink of Maxwell.

Thing is, NV's architecture reacted much better to the node shrink than AMD's GCN.
 
What REALLY hit AMD was the 20nm process failure... Vega was supposed to be on that; by the time they had 16nm, NV had pulled WAY ahead. If Vega had come out when it was supposed to, on 20nm, the situation would have looked different, I'm sure.

Vega crushed the competition... not :/ Not when it was almost one year too late. If Vega 64 had come out, let's say, 1-2 months after the 1080, it would have looked good. But the Ti was out before Vega; that was the killer.
 
Damn, how many iMac Pros are they selling? Seems a fairly niche product.

Mac sales
Q4 2017 5.4 million. (up 1.2 million from previous quarter)
Q1 2018 5.2 million
Q2 2018 4.1 million

Believe me, there are crazy people out there who buy an £1,800 PC for £9,000 because it's an all-in-one and it's Apple.

So from the numbers above you can see that in a single year they sold millions of AMD GPUs (555, 560, 575, 580, Vega 56, Vega 64) through Apple alone. Doubt there are many with just Intel Iris sold.

No wonder Apple is now almost the first trillion-USD business.
And the cash they are sitting on is more than the GDP of Finland (the 41st richest country in the world). Actually, they could pay off the debt of all African countries and still have enough left...

I will never buy an Apple product out of principle. Similarly, I will never buy a German car or motorbike, regardless of how much I want a 1200GS. :(
Japan it is then, on both... :D
 
I am reading an article at https://www.reddit.com/r/Amd/commen..._a_traditional_monolithic_gpu/?ref=readnext_0
and despite it containing many technical truths, many people in it are going in the wrong direction, claiming crap :D

The idea around multi-chip GPUs is making the GPUs appear as a single GPU to the dev and the end user; this was mentioned at the release of DX12 and Vulkan.
The issue with traditional CF/SLI is getting the GPUs to produce perfect alternate frames.

An MCM chip does not use CrossFire or SLI; it is seen as one chip, like Ryzen is.
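For what it's worth, that distinction is visible to a DX12 app today. Rough sketch only, showing how the API reports it, not anything AMD have confirmed for an MCM part:

Code:
#include <d3d12.h>
#include <cstdio>

// Sketch: a GPU presented as "one chip" reports a single node, while classic
// driver-linked multi-GPU (CrossFire/SLI style linking) shows up as several
// nodes on one device that the application has to manage explicitly.
void ReportNodes(ID3D12Device* device)
{
    UINT nodes = device->GetNodeCount();
    std::printf("This D3D12 device exposes %u GPU node(s)\n", nodes);
}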

Same. Multi-GPU chips worry me because of how poor crossfire/SLI support is currently. Most games still primarily use DX11 and developers are slow to change. If, theoretically, AMD were to drop a multi-GPU chip today that beat the 1080 Ti in raw performance and TFLOPS, it would still be crushed in gaming due to poor optimization for multi-GPU, latency problems and poor frame times. It probably won't work well until DX12, Vulkan and explicit multi-adapter support become mainstream. And that's kind of a chicken-and-egg problem: developers don't have much incentive to make the effort to get these things working, since the vast majority of the market isn't using multi-GPU configs.
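To be clear on what "explicit multi-adapter" actually means in DX12 terms (a rough sketch, not anything either vendor has announced): the application itself enumerates every GPU and creates a device per adapter, so the work split sits in the developer's hands rather than behind driver CF/SLI profiles. Something like this:

Code:
#include <d3d12.h>
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Enumerate every hardware adapter and create one D3D12 device per GPU.
// With explicit multi-adapter the app owns the work split instead of the
// driver guessing via CrossFire/SLI profiles. Error handling trimmed.
std::vector<ComPtr<ID3D12Device>> CreateDevicePerAdapter()
{
    ComPtr<IDXGIFactory6> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device); // e.g. discrete card + IGP
    }
    return devices;
}

The catch, as above, is that once you have those devices it's entirely on the engine to split the frame and synchronise them, which is exactly the work most studios aren't doing.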

It was a pipe dream really. When David Wang at Computex 2018 talked about multi-GPU (Infinity Fabric) for their GPUs, I had no idea he was only talking about the professional market, where it is accepted easily by the software; the same goes for mining software. Unfortunately Infinity Fabric still doesn't solve the problem with multi-GPU gaming. I thought maybe there had been some kind of breakthrough with Infinity Fabric, LMAO. Oh boy, just another pipe dream.

These people completely forget that all of the current Core i7 etc. Intel processors do contain integrated graphics, which currently sits idle in 3D gaming but could be utilised in a multi-GPU combo with the discrete graphics card.
That is a huge share of the market, wow. :mad:


 
The IGP and discrete GPU working together has always been hit and miss (more miss in terms of experience)
I can't remember what the program was called (Hydra possibly?) that had the IGP and GPU working together, but while it gave humongous amounts of frames, gaming like that was impossible, as it was a stutter fest. Completely broken.
 
I really wanted it to work, but I pretty much knew it wouldn't take off like they wanted it to - it would have required every major software developer to come on board with it and work it into their products from the ground up, as well as extensive ongoing support from Lucid beyond their ability to finance :|

One of the more useful uses for the IGP would be developers programming specifically for it with explicit multi-adapter - in some cases rendering significant 2D elements, e.g. an MMO UI, can take up significant time on the GPU without really using much of its capability - farming out the UI to the IGP or another lower-powered GPU and then recomposing it on the main GPU would free up some extra 3D rendering time.
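A rough sketch of how an engine might pick out the IGP as that helper adapter. The smallest-VRAM heuristic here is just my assumption for illustration; a real engine would inspect the adapter descriptions properly and copy the composed UI back through a cross-adapter shared resource:

Code:
#include <d3d12.h>
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <cstdint>

using Microsoft::WRL::ComPtr;

// Illustrative only: treat the adapter with the least dedicated VRAM as the
// integrated GPU and use it for lightweight passes such as UI composition,
// leaving the discrete card free for the 3D scene.
bool PickHelperAdapter(IDXGIFactory6* factory, ComPtr<IDXGIAdapter1>& helper)
{
    SIZE_T smallestVram = SIZE_MAX;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;
        if (desc.DedicatedVideoMemory < smallestVram) // assume IGP = least VRAM
        {
            smallestVram = desc.DedicatedVideoMemory;
            helper = adapter;
        }
    }
    return helper.Get() != nullptr;
}
// The UI would be rendered on a device created from `helper` and copied to the
// main GPU via a cross-adapter shared resource before the final present.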
 
Didn't it also allow Nvidia and AMD/ATI GPUs to render together?

Yeah, but then you had to hope that they both supported a high enough feature level and produced identical output, otherwise you'd end up with artefacts around the seams where the rendering was recomposed for the final image :s If one used different optimisation tricks to the other for texture filtering, etc., you could get some weird oddities that wouldn't be noticeable if the same GPU was rendering the whole scene.

Again, something that could be avoided by intelligent use of such a system by the application developer, rather than the Hydra engine either indiscriminately throwing the output together and/or trying to make it work with application-specific compatibility profiles.
 
The IGP and discrete GPU working together has always been hit and miss (more miss in terms of experience)

Then we need to take the instances where it works well, unify them across all game engines and standardise as required.
I think DX12 pretty much does that.

Hybrid Crossfire works. Now it's called Dual Graphics and there are nice reviews: https://www.hardwaresecrets.com/amd-dual-graphics-technology-review/5/

Conclusions
Our tests showed clearly that the AMD Dual Graphics technology really boosts performance in games. On most of the tested games (as well as on the 3DMark benchmarks), the performance with the technology enabled was superior to that obtained with the integrated video alone or with the discrete video card.

In two games we ran, however, there was no performance difference. Our guess is that, in those tests, the performance was being limited by the CPU, which is not a high-end processor, after all.

Another important detail that became clear is that the integrated GPU of the A8-7670K is more powerful than the Radeon R7 240 video card we used. It makes sense, since the technical specs of this graphics engine are slightly superior to those of the video card used.

So, if you have an A8 or A10 CPU, there is no sense in buying an entry-level video card and disabling the integrated video of the processor: you would be burning money and maybe even reducing the video performance of your computer. However, you will get a performance gain in most games if you buy a video card that can be used together with the integrated video of your processor using the Dual Graphics technology.



When the performance uplift is small, it is still better than nothing and still counts as acceleration:



It has its own settings in the Control Centre:



 
You're living in the past through rose-tinted glasses. Hybrid Crossfire was limited and wasn't perfect at all.

You're also just throwing buzzwords around.

The usual excuse from lazy developers who never made the effort to make it work perfectly. But it is possible. If it isn't possible for them, then they need to find something else to do, not pretend otherwise and tell us how to use/waste the money we have been paying.
Don't you think it is somehow unfair to pay for an APU and not be able to use half of it because someone else is lazy and relies on you throwing even more money at the problem?!

Why not just solve the issues once and for all?!
 