• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

Poll: Do you think AMD will be able to compete with Nvidia again during the next few years?

  • Total voters: 213
  • Poll closed
Soldato | Joined: 9 Nov 2009 | Posts: 24,850 | Location: Planet Earth
OFC, I am talking more about PC gaming.

Nvidia, ATM, seems to be forging ahead. Volta is out, and we are now starting to see Turing/Ampere prototypes. AMD, OTOH, is still finding it hard to match Nvidia on performance/watt in many segments, meaning they are losing out on laptop dGPUs; Vega got delayed, and if it were not for mining, it would have been uneconomic to sell at RRP against cheaper-to-make Nvidia cards.

Rumours say Raja Koduri wasn't given as much funding as he wanted to make more gaming-specific GPUs, as R&D was pushed towards non-gaming segments.

It seems we won't see a new gaming GPU from AMD this year, meaning Nvidia will probably push ahead again. Navi apparently is only a midrange chip for release in 2019, which would mean a high-end chip in 2020 at the earliest. Intel is also entering the market in 2020, meaning more competition.

So, what do you guys/gals think - will this turn out like after Bulldozer was released and Intel was reigning supreme, where the competition does the minimum to get sales and prices start to increase?? Or do you think AMD might be able to pull something out of the bag (like they did with Ryzen)??
 
Soldato (OP)
You think somebody like Crytek couldn't come along and make an engine that kills any card at 1080p?

Sadly I don't think it is going to happen anytime soon IMHO. Crytek did exactly that, we then had a huge performance jump with affordable cards like the 8800GT, and still many gamers moaned and also pirated the game. After that you saw fewer and fewer companies really push the PC on a technical level like Crytek did, as Crytek didn't make enough money. Outside of AMD/Nvidia pushing some tech to sell new cards, it's more likely consoles will be pushing technical innovations to get around their limitations, and any big jumps in image quality are most likely going to come from new consoles like the PS5, as the potato CPUs in the current consoles are a big problem. An example is the streaming tech we saw in Skyrim, which meant a largish open world without loading screens.

Why do you think so many PC gamers are throwing money at Star Citizen - the devs have promised a title which will only run on PC and push what is possible, which I really hope it does. Whether that happens is another thing. Most AAA titles are essentially console titles that look a bit prettier and run at higher FPS on PC. If you looked at Crysis and compared it to console games, the consoles looked utterly meh. The same with Far Cry, HL2 or Unreal back in the day.

Edit!!

Then you have the whole early access crap on PC. ARK really pushed hardware and was not a bad-looking game, but most of that was down to utterly rubbish optimisation, especially with the early access fad (which is there to save money for the devs). I would argue most of the really intensive PC games nowadays are usually just poorly optimised, or running on old engines which are being strung along to save money. They don't look nearly as good as the hardware you are expected to run them on would suggest.

Cyberpunk 2077 might be another game which could push things, but they got 30FPS at 4K on a single GTX1080TI, and they want to run it on current consoles (sadly), so I expect they might have dialled down some detail to make it run better. The same happened with The Witcher 3, as the product we got looked worse than what was revealed earlier, which was a shame. Still a pretty game, but it could have been the next Crysis.

There's also another issue: if AMD does not compete, the competition will string out improvements based on financials, as you can see from what has happened with dGPUs under £300.
 
Soldato (OP)
Not really, the Ti smashes the 56 in a lot of games. @LoadsaMoney posted a vid not long back with up-to-date drivers with a Ti vs a 56 (or 64) and the Ti beat it in most games.

You mean the reference Vega 56 compared by Techspot this month against a non-reference GTX1070TI, where the latter was a mahoosive 5% faster overall??

Edit!!

This is the GTX1070TI he tested:

https://www.overclockers.co.uk/msi-...ddr5-pci-express-graphics-card-gx-342-ms.html

Then he said all aftermarket Vega 56 cards were expensive, yet this is not the case, and these even include a copy of FC5:

https://www.overclockers.co.uk/giga...hbm2-pci-express-graphics-card-gx-19p-gi.html
https://www.overclockers.co.uk/asus...hbm2-pci-express-graphics-card-gx-41x-as.html

He then says reference Vega 56 cards are the same as aftermarket ones in terms of performance. TH tested the cheapest Vega 56 card in the US and UK, and that is not the case:

https://www.tomshardware.com/reviews/gigabyte-radeon-rx-vega-56-gaming-oc-8g-review,5413-3.html

In practice, the Radeon RX Vega 56 Gaming OC 8G achieves a 9- to 11%-higher clock rate than AMD's reference card, resulting in average frame rates that are anywhere from 6- to 8% better. That's respectable scaling, to be sure.

Do I think Vega 56 is overpriced?? Yep, but so is the GTX1070TI.

I would rather have a £350 GTX1070 over both and save the money. If I were paying £450 I would expect a GTX1080, and you could get them for that kind of money a while back, including a game or two.

The Vega 56 price increases and the GTX1070TI have only propped up the prices of the GTX1080 and the GTX1070, which is annoying.
 
Soldato (OP)
If it's only 15% faster - nobody.

People won't pay silly money (£300+) for 15% improvements.

But thankfully whatever nV's newest cards will be, they'll be a lot more of a jump than that. Because Pascal chips were nice and small and power efficient, so at the very least nV can just up the core count. I'm not sure AMD can realistically do the same. So 15% improvement sounds like the max they can improve Polaris 30 by over and above the 580.

It's from WCCFtech - remember the "beast mode" RX480 cards. LMAO.
 
Soldato (OP)
I was looking at that review of WoW DX12 performance and was intrigued by the DX12 and DX11 frametimes on midrange graphics cards.

[Frametime graphs from Computerbase.de: GTX1060 and RX580, DX11 vs DX12]

Even though the RX580 is registering lower FPS, look at the frametime plots - the GTX1060 graph for DX11, if magnified, seems to show it see-sawing, whereas the RX580 seems somewhat flatter. However, Computerbase.de have used different scales for the two graphs even though they are the same physical size: 70 vs 30 (a factor of 2.33) on the y axis and 1.813 vs 1.432 (a factor of 1.27) on the x axis.

If you equate the scales:

[Superimposed frametime graphs rescaled to matching axes]
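
For anyone who wants to redo that kind of comparison themselves, here is a minimal Python/matplotlib sketch of the idea - the frametime arrays below are made up, the only point is forcing one common axis range on both traces rather than letting each graph pick its own scale:

[code]
# Overlay two frametime traces on identical axes so the comparison is fair.
# The data here is illustrative, NOT the computerbase.de measurements.
import matplotlib.pyplot as plt
import numpy as np

t = np.linspace(0, 1.8, 200)                        # elapsed time in seconds
frametimes_1060 = 16 + 6 * np.abs(np.sin(40 * t))   # see-sawing trace (made up)
frametimes_580 = np.full_like(t, 22.0)              # flatter trace (made up)

fig, ax = plt.subplots()
ax.plot(t, frametimes_1060, label="GTX1060 DX11 (illustrative)")
ax.plot(t, frametimes_580, label="RX580 DX11 (illustrative)")
ax.set_xlim(0, 1.8)
ax.set_ylim(0, 70)          # one common y scale instead of 70ms vs 30ms
ax.set_xlabel("Time (s)")
ax.set_ylabel("Frametime (ms)")
ax.legend()
plt.show()
[/code]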

Also, something else even more intriguing: it seems WoW looks very CPU limited on the GTX1080 under DX11 (a Core i7 8700K is 19% faster than a G4560), whereas the Vega64 under DX12 looks mostly GPU limited (the Core i7 8700K is only 5% faster than the G4560).

I would love to see some additional testing during raids, which is where the game is mostly CPU limited.
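
The rough rule of thumb I am using there: if swapping in a much faster CPU barely moves the average FPS, the game is GPU limited; if it moves it a lot, the game is CPU limited. A tiny Python sketch of that check - the FPS numbers are placeholders picked only to match the percentages above, not computerbase.de's raw data:

[code]
def cpu_scaling(fps_fast_cpu: float, fps_slow_cpu: float) -> float:
    """Percentage FPS gain from the faster CPU at the same GPU and settings."""
    return (fps_fast_cpu / fps_slow_cpu - 1) * 100

print(cpu_scaling(119, 100))  # ~19% -> looks CPU limited (GTX1080, DX11)
print(cpu_scaling(105, 100))  # ~5%  -> looks mostly GPU limited (Vega64, DX12)
[/code]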
 
Soldato (OP)
It is not new for Blizzard to cut off support for old tech, forcing everyone to upgrade, and they were planning to push DX12 to everyone and then switch off DX11 later.
Look at their abolition of DX9 when moving to DX11: everyone on XP & Vista couldn't run the game after the patch.
The same applied to those on W7/W10 who didn't have a DX11 card - they couldn't run the game (check their forums, there were many posts like these).

Dude, it has nothing to do with who has more FPS. It is that Nvidia's performance drop forces Blizzard to keep maintaining DX11 for Nvidia cards only, while it pushes DX12 by default on all AMD owners, because there is no perf drop there and it will greatly benefit those with weaker CPUs.

I superimposed some of the graphs from the computerbase.de report earlier:

https://forums.overclockers.co.uk/posts/31973272/

It's really interesting how flat the frametime variation is in DX12 compared to DX11, on both AMD and Nvidia.

It's going to be interesting when Volta/Ampere is out.
 
Soldato (OP)
Interesting, unless Nvidia still suck at DX12 :p
Only if it is a Volta derivative with its full hardware async compute will there be benefits for DX12 in the next round of Nvidia cards.
Otherwise any derivative of the Maxwell/Pascal line is still a DX11 card in effect, and it will get performance hits.

Have a look here at what I am talking about:

https://www.hardocp.com/article/2018/03/26/nvidia_voltas_titan_v_dx12_performance_efficiency/5

Well I am making the assumption that Ampere is a derivative of Volta - well I hope it is!!
 
Soldato (OP)
NVIDIA DX12 is even faster than AMD DX11 in WOW in multiple areas according to Extremetech:

[Seething Shore benchmark chart]

[Dalaran flyby benchmark chart]


https://www.extremetech.com/gaming/273923-benchmarking-world-of-warcrafts-directx-12-support

Wowsers, how is Nvidia getting 21 FPS lower minimums during a flyby with a Core i7-4960X?? Eh??

Also, the article says this:

It’s possible that lower-end CPUs would see different results in these tests. As we’ve discussed before, DirectX 12 doesn’t really help you recover much in the way of GPU performance, though features like asynchronous compute can improve GPU perf in certain ways if supported in hardware. The major advantage of low-overhead APIs and the place where we always saw them do the most good was when paired with low-power or weaker CPUs, not GPUs. Here, they can make a significant difference, sometimes cutting CPU utilization by 10-30 percent and allowing for corresponding improvements in power consumption or giving developers more flexibility. We’ve also seen some specific cases where AMD’s DirectX 12 performance has given it better competitive standing against Nvidia, though again, the shifts here tend to be on the smaller side.

But at least in WoW, for now, the message seems clear. If you have a higher-end CPU and reasonably new GPU, DirectX 11 is the better API. We’ve got an eye on the situation and will re-test and/or revisit this question if Blizzard makes any formal announcements about improving the newer API’s performance relative to the older one.
 
Soldato (OP)
AMD could offer a competitive price since they had the tech in place, and also had experience working with both TSMC and GF for both CPU and GPU products, which would have been important for dual sourcing. For example, the Jaguar core was apparently designed for easier portability between different process nodes.
 
Soldato (OP)
The jump needs to be big enough to make people with 1080 Tis start thinking about upgrading to an 1180, so I guess 15-20% should cut it.

Also, I wonder if they will have a Titan variant or just go ALL OUT on this 12nm, so the 1180 will be a full-fat uncut gaming card. I think it can happen, since 12nm is basically a 12-month pony.

I also think the Ti of the next gen will be a 7nm card :)

If it's only 15% to 20% then you might as well not bother. It's like ditching a GTX970 to get a GTX1060.
 
Soldato (OP)
Trust me, it will be tempting enough for a lot of people. That's how you sell stuff - AMD should learn from nV. 20% is enough for many Ti buyers to get NEW AND SHINY.

The only reason I have not jumped from a 980 Ti to a 1080 was a BIOS edit and running it at a constant 1515MHz on the core. In Furmark I've seen the 980 Ti pull over 400W - that's Vega overclock level.

What I hate most about nV is the encrypted BIOS on Pascal - god damn TDP limit.....

I don't know any gamer who would upgrade for 15% to 20%, though. I think the only people I knew who ever did that just liked buying new hardware, since it was their hobby.

But then I don't know a single person in real life who bought a GTX970 and replaced it with a GTX1060.

I expect many people who buy the GTX1100/GTX2000 series will be getting more than a 15% to 20% jump, since they have older cards, or because Nvidia supports something like ray tracing that is only usable on the newer cards.

The last time I remember Nvidia pushing that level of upgrade was on the GTX900 series with the GTX960 over the GTX760, and I don't think many did that upgrade either.

Usually it's more like a 30% to 50% increase.
 
Soldato (OP)
Just did a quick check:

http://gpu.userbenchmark.com/Compare/Nvidia-GTX-980-Ti-vs-Nvidia-GTX-1080/3439vs3603
Well, the jump was 28% on average, but that was a jump from 28nm to 16nm. Now we are going from 16nm to 12nm, so I see no chance of a 28% jump because of that.
Depends how much gain GDDR6 will gieffff.

They went from an 8 billion transistor, 601mm2 GPU with the GTX980TI to a 7.2 billion transistor, 314mm2 GPU with the GTX1080, although the GTX980TI didn't use a fully enabled GPU and was apparently underclocked a bit.

The GP102 in the GTX1080TI/Titan X is actually smaller than average for Nvidia's large flagship GPUs, which tended to be 529mm2 to 601mm2, ie, it's under 500mm2.

12nm is lower leakage, so you can throw in more transistors for a similar level of power consumption, so I expect Nvidia will increase die size, which they have plenty of room to do.

An example is Maxwell. So they could increase die size from the 471mm2 of the GP102 to over 600mm2, and then use much faster GDDR6, as a replacement for the GTX1080TI. One leak of a dev board hinted at a GPU over 600mm2.

For the GTX1080 replacement, they could get another 30% improvement by increasing die size from 314mm2 to 400-450mm2 and using faster RAM.

With Maxwell that is what they did - they increased die size over the GTX770 and cut down the size of the memory controller compared to the GTX780TI.
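
Just to show where that ~30% figure comes from, here is a quick back-of-envelope in Python using the numbers above - nothing official, it simply assumes performance scales loosely with die area at a similar clock speed and transistor density:

[code]
# Die sizes in mm^2, transistor counts in billions, taken from the post above.
gm200 = {"transistors_bn": 8.0, "die_mm2": 601}   # GTX980TI-class Maxwell (28nm)
gp104 = {"transistors_bn": 7.2, "die_mm2": 314}   # GTX1080 (16nm)

print(gm200["transistors_bn"] / gm200["die_mm2"])  # ~0.013 bn transistors per mm^2
print(gp104["transistors_bn"] / gp104["die_mm2"])  # ~0.023 bn transistors per mm^2

# Growing a GTX1080-style die from 314mm^2 towards 400-450mm^2 gives roughly
# 27-43% more area (and transistor budget), before any clock or GDDR6 gains.
for target_mm2 in (400, 450):
    extra = (target_mm2 / gp104["die_mm2"] - 1) * 100
    print(f"{target_mm2}mm^2: ~{extra:.0f}% more die area than the GTX1080")
[/code]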
 
Soldato (OP)
WCCFtech and their "beast mode" RX480 cards. LMAO.

I remember in the Polaris thread I even posted one of the leaks from China, weeks before the launch, which said the RX480 would be close to an R9 290/390, and still people believed WCCFtech. AMD even said it was entry level for VR, and entry level for VR on the AMD side at the time was an R9 290.
 
Soldato (OP)
The major fail of Vega 56/64 is the production cost - apparently they had to buy Samsung HBM2 as the cheaper Hynix HBM2 was not available in time, which made the cards even more expensive to make. The chip is also quite big, and things like FP16 compute might prove useful in the future but are only used in some games.
 
Soldato (OP)
It'll be a few % better at best. A pointless card.

Better than nothing, especially if it replaces the current cards at the current price points. If the RTX2070 costs nearly £600, then unless it gets a price cut, the GTX2060 series will probably replace the GTX1070 and GTX1070TI at similar pricing.

If the RTX2080 is a tad faster than a GTX1080TI, that means the RTX2070 will be a tad faster than a GTX1080 at nearly £600. If that follows down the range to the GTX2060, that means it's probably GTX1070/GTX1070TI level for £400, unless Nvidia surprises us and gives us a very fast non-RTX card, but that would reduce RTX series sales.

Maybe Nvidia will rebrand it as a GTX2060TI at £400, so you have a GTX2060 non-TI at, say, nearly £300. It's probably only going to be a bit faster than a GTX1060 at that price.

Now OFC, Nvidia might drop RTX card pricing in due course, but then again they might be testing the water to see how much they can charge.
 