
Radeon VII

Actually
https://www.4gamer.net/games/446/G044684/20190115124/

If something doesn't make sense in English, blame Google Translate from Japanese to English.




So to sum up.
a) AMD doesn't answer the question about DXR, saying "we cannot talk about this at this time". However, in Luxmark it is 62% faster than the RTX 2080 (so it is also faster than the 2080 Ti, by around 30%).


b) About DirectML (DirectX-based DLSS), they have excellent results but are still going through certification.


Food for thought.
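A rough sanity check on that "around 30%" figure, as my own back-of-envelope: it only works if you assume the 2080 Ti is roughly 25% ahead of the 2080 in this sort of OpenCL workload, which is my assumption and not a number from the article.

```python
# Back-of-envelope only; the ~1.25x 2080 Ti vs 2080 gap is an assumption.
rvii_vs_2080 = 1.62          # AMD's claimed Luxmark ratio, Radeon VII vs RTX 2080
ti_vs_2080 = 1.25            # assumed RTX 2080 Ti vs RTX 2080 gap in OpenCL

rvii_vs_ti = rvii_vs_2080 / ti_vs_2080
print(f"Radeon VII vs 2080 Ti: ~{(rvii_vs_ti - 1) * 100:.0f}% faster")  # ~30%
```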

A 64 CU version would barely offer any performance bump over the 60 CU version; the difference between Vega 56 and Vega 64 is about 10% with 2 CUs disabled, so you're looking at probably 5% for 1 CU disabled. Bit of an odd decision really to disable that 1 unit, and it seems a bit speculative that in every card that didn't make the Instinct grade a CU was the pass or fail point. Might have just been down to power draw, possibly?
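For what it's worth, here's the raw CU-count side of that argument as a quick sketch, assuming purely for illustration that performance scaled linearly with CU count (it doesn't in practice):

```python
# Illustrative only: linear-with-CU-count scaling is an optimistic upper bound.
full, cut = 64, 60
raw_gain = full / cut - 1
print(f"64 CU vs 60 CU raw shader advantage: {raw_gain * 100:.1f}%")  # ~6.7%
# Real-world scaling is worse than linear (clocks, power, front-end limits),
# so the actual gap would likely be low single digits.
```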
 
Fastest Vega 64 on the Luxmark results page scores 5478 for the Complex Hotel scene http://www.luxmark.info/node/5804 . Increase that by 1.64 and you get 8984. I just ran a test on my 2080 Ti and got the 8th highest score at 9567 http://luxmark.info/top_results/Hotel/OpenCL/GPU/1 . Something screwy going on with their maths unless I am missing something.
I am also pretty sure that an offline renderer is not utilising the specialist RT hardware in my card, and you can see what a huge difference it makes to real-time RT performance: https://www.overclock3d.net/news/gpu_displays/titan_v_vs_3dmark_port_royal_-_rt_cores_matter/1 (a familiar name there!) So if the RT cores are not used then it is 62% faster, but as you can see, they make a huge difference to real-time RT performance. I'll be surprised if it can match the 2060 in real-time ray tracing.
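Putting those numbers side by side, just reproducing the maths from the post above (nothing new, plug in your own scores if you want):

```python
# Numbers as quoted above; the 1.64 multiplier is AMD's Luxmark claim.
vega64_hotel = 5478      # fastest Vega 64 Complex Hotel score on luxmark.info
claimed_ratio = 1.64
projected = vega64_hotel * claimed_ratio
my_2080ti = 9567         # the 2080 Ti run quoted above

print(f"Vega 64 x 1.64 = {projected:.0f} vs 2080 Ti {my_2080ti}")
```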
 

The RVII is 62% faster than the RTX 2080 in Luxmark, not than the Vega 64....
 
How would you know? I don't see how you can tell simply by looking at a pic whether a fan is suitable or not for the task.

It does look nice though ... And that's what matters


I'm not saying it's not suitable for the task, I'm saying longer blades = better airflow. And on a 300 watt card you want all the airflow you can get. If you look at the fan hub compared to the blades, the blades look relatively short in comparison.
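To put a number on the "longer blades = better airflow" intuition: the area that actually moves air is the annulus between the hub and the blade tips, so a bigger hub relative to the fan diameter eats into it. A simplified geometric sketch with made-up dimensions (nobody here has measured the RVII fans), ignoring blade pitch, RPM and static pressure:

```python
import math

def swept_area(fan_diameter_mm, hub_diameter_mm):
    """Annular area between hub and blade tip, in cm^2 (geometry only)."""
    r_tip = fan_diameter_mm / 2
    r_hub = hub_diameter_mm / 2
    return math.pi * (r_tip**2 - r_hub**2) / 100  # mm^2 -> cm^2

# Hypothetical numbers for illustration only.
print(swept_area(75, 25))   # small hub: ~39 cm^2 of blade sweep
print(swept_area(75, 40))   # large hub: ~32 cm^2 of blade sweep
```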
 
https://www.extremetech.com/gaming/283514-amd-radeon-vii-details-performance-projections
And the rest of the points I raised still stand.

Not too long to go until we find out for real. :)

It's not Luxmark overall that they are saying it's faster in, it's the Luxmark LuxBall test from that image. If that's the case, Vega 7 will be up there with the 2080 Ti.

Here is the leaderboard for that test.

http://www.luxmark.info/top_results/LuxBall HDR/OpenCL/GPU/1

Somehow there is a 390 in the top 19, ahead of Vega. I think that must be wrong and really a dual-card run. Anyhow, adding 62% to the Vega result will put it up there with the big boys in this test.
 
Fair enough, I just went for the most demanding scene. Looks like more than a single performance metric is needed to get an idea of overall performance. :)

I have no doubt that the V7 will be a monster of a compute card, but without dedicated hardware it will not be able to match any of the RTX cards in DXR-enabled titles. Look at how the Titan V can only manage to snap at the heels of the 2060: even though it has the tensor cores, it lacks the RT hardware. Which is why I think AMD are being a bit ambiguous by bringing up Luxmark when pressed on DXR-enabled gaming.
 
I think you should stop thinking. It would have cost a fortune to make the changes you are suggesting. Please tell me how you were going to just double the shaders without doing a massive redesign.
To be fair I did say the same thing back in Dec 2016, and before that I was stating Fiji had a broken front end, given that AMD hadn't progressed to a 6-wide geometry engine. I was hoping for a 4608/5120-shader Vega, as I effectively predicted Vega 64 would be CrossFire RX 470 performance at 4096. It would only have required another 2 CUs on each geometry engine for 72 CUs, not a lot of work. I was underwhelmed by the die size of Vega; being a half-arsed compute card with no FP64, it really needed a separate spin-off for a leaner gaming Vega.
I think what 4k is saying is that AMD could have cut a lot out of the chip design of Vega 10 prior to transposing it onto 7nm, thereby having separate gaming and compute lines.


https://forums.overclockers.co.uk/threads/amd-vega-confirmed-for-2017-h1.18746880/page-105
 

But why? There's nothing in Vega that could magically make it a lean gaming machine. A business decision was made to build a compute architecture for the pro market, and it just so happened that it could game as well. Then Vega 20 adds even more compute stuff to it as part of the 7nm shrink, none of which improves gaming performance. AMD would not have had the money nor the inclination to try and spin off a dedicated gaming architecture version of Vega.

This is why Navi exists.
 

Navi doesn't exist yet, and secondly Navi still uses a 4x geometry engine; its purpose is to replace Vega as a cheaper-to-manufacture die, not to gain performance over Vega. We have to wait for the next architecture.
Getting rid of the HBCC and the high bandwidth would save a little die space. As Vega is modular in design and uses Infinity Fabric to communicate around the whole chip, you can effectively take out the geometry engine block and put in a tweaked engine block, which is exactly what V7 did, along with logic tweaks and the back-end memory upgrades. So redrawing this wouldn't take too much effort or expense, but then again AMD don't have the engineering capability to carry out in-house designs like they did in the earlier days. Struggling in the red led to them letting a lot of engineers go and outsourcing to third-party solutions.

Like I said, AMD need to invest in a 6-wide front end; they can then scale to 1x/2x for the lower end, 4x for mid-range and 6x for high end, and their drivers can be made for one shared family instead of 3 different architectures, one with a totally different memory subsystem.
 
I don't understand most of what you just said, but I support you and they should do that.
 

Thanks lol. From my research I think the architecture is limited to 64 CUs on a 4-wide geometry engine; I don't think they can utilise any more CUs without some clever context switching. The solution is to scale up to 6 geometry engines and then recode their command processor and workload distributor. With 7nm they have the perfect opportunity, it's just going to take them a while.
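A quick sketch of the CU-per-engine arithmetic behind that, treating the claimed 16-CUs-per-geometry-engine ceiling as an assumption rather than a confirmed hardware limit:

```python
# Assumes the claimed GCN limit of 16 CUs per geometry/shader engine.
CUS_PER_ENGINE_MAX = 16

for engines in (1, 2, 4, 6):
    print(f"{engines}-wide front end -> up to {engines * CUS_PER_ENGINE_MAX} CUs")
# A 4-wide front end tops out at 64 CUs (Fiji/Vega); 6-wide would allow up to 96.
```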
 

They should have done this with Fiji, yet they still didn't do it with Vega. My guess is that the amount of redesign required was too significant. Nvidia's architecture changes from Kepler to Maxwell to Pascal to Turing have all been significant, while AMD have been making smaller tweaks and just expanding stream processors with the fixed CU layout and 4 geometry engines, until they reached a limit at Fiji, after which they could only increase clock speeds.

7nm will give them the transistor count to do this, but Navi just seems to be more of the same.
 
Exactly! I was thinking this when I was making coffee this morning... :D Just scale to 6 geometry engines, I thought, recode their command processor and workload distributor and POOF, job done, and here you are saying the same thing! lmao
 

I agree, in that with the 28nm delays AMD could have designed it then, or at least Vega (phase 2) should have implemented it. It may have been a sudden hit financially, but long term it would have seen them benefit over 5 years.
In a couple of years we should hopefully see a rework; cost factor aside, they've reached the limit, and even in an MCM shared-data setup they'd still have scheduling, power-switching and latency issues to address. So either way there's a lot of work to do. Thankfully AMD are a little healthier these days.
 
For a change they're shooting down the BS.

There's plenty of it to go around still though. :(

In a recent interview, AMD's Adam Kozak confirmed that the Radeon VII would support DirectML, Microsoft's Machine Learning (ML) add-on to DirectX 12, opening the door to AI-powered enhancements with AMD's latest graphics card. The Radeon VII boasts a lot of potential in the world of AI, though it remains to be seen how long it will take for these AI features to become more prominently featured within new games. A DirectML alternative to DLSS is also on the cards.

The highlighted points from the article are interesting. No ray tracing from AMD at this point won't necessarily be a bad thing, as it'll give game devs some time to get used to using AMD's version of DLSS, and then they'll have it ready to use with AMD's ray-tracing tech when that comes to market. We're hearing that DLSS will complement Nvidia's ray tracing by reducing the performance impact ray tracing has on a game. I'm sure it will, but with Nvidia releasing both new techs at the same time there's a double whammy of new ways of doing things for game developers to learn. Hopefully AMD's version of DLSS becoming available earlier than their ray-tracing support will make the learning curve simpler for each.
 

A page back I posted the answer from the real article, not the WCCFTech extract.

Q: NVIDIA is trying to bring AI-based image processing called DLSS (Deep Learning Super Sampling) to game graphics processing. Please let me know what AMD thinks about this trend.

Adam Kozak: At GDC 2018 last year, Microsoft announced "Windows ML", a framework for developing machine-learning-based applications on the Windows 10 platform, and "DirectML", which makes it available from DirectX; it became usable with the Windows 10 October 2018 Update (related article).
We have obtained the evaluation SDK for DirectML and are currently experimenting with it, and the Radeon VII shows excellent results in those experiments. Incidentally, the Radeon VII scored about 1.62 times the "GeForce RTX 2080" in "Luxmark", which uses an OpenCL-based, GPGPU-style ray-tracing renderer. Based on these facts, I think something like NVIDIA's DLSS can be done with a GPGPU-style approach on our GPUs.

The difference between DLSS and DirectML is that the latter is part of DirectX (like DXR), while the former is Nvidia custom tech, like PhysX.
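For anyone wondering what a "DirectX-based DLSS" means in practice: the idea is to render the frame at a lower resolution and have an ML model reconstruct it at the target resolution, with DirectML being the vendor-neutral way to run that model inside a DirectX title. A very rough conceptual sketch only; a plain nearest-neighbour upscale stands in for the trained network, and none of this touches the real DirectML API:

```python
import numpy as np

def render_low_res(height, width):
    """Stand-in for the game rendering a frame at reduced resolution."""
    return np.random.rand(height, width, 3).astype(np.float32)

def reconstruct(frame, scale=2):
    """Placeholder for the learned upscaler: nearest-neighbour repeat.
    A real DLSS/DirectML path would run a trained network on the GPU here."""
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

low = render_low_res(540, 960)     # render internally at 960x540
high = reconstruct(low, scale=2)   # present at 1920x1080
print(low.shape, "->", high.shape)
```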
 