Poll: Do you think AMD will be able to compete with Nvidia again during the next few years? (poll closed; 213 voters)
This is BS. Nvidia's compression is lossless.

And if there were lossy compression you wouldn't see washed-out colours; there would be colour artifacts, banding, etc. The "washed out" image quality is just Nvidia's choice not to push the saturation and contrast to the limits like TVs in showrooms do. A few minutes setting vibrance in the settings and it will match AMD.

There is no such thing as lossless compression; it's BS marketing speak. What they mean is "perceivably lossless", which depends on what the marketing guys deem as perception. I guarantee they take liberties with that definition.
 
This is just nonsense.

Of course there is lossless compression. Ever zipped a file before?

Nvidia uses lossless color delta compression and run-length encoding amongst others, just like a PNG image.
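To make that concrete, here is a toy sketch of those two techniques (nothing like Nvidia's actual hardware implementation, just the idea of delta encoding plus run-length encoding, with a bit-exact round trip):

```python
# Toy illustration only: delta-encode a scanline, run-length encode the result,
# then invert both steps. "Lossless" means decode(encode(x)) == x, exactly.

def delta_encode(pixels):
    # Keep the first value, then store only differences to the previous value.
    return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

def rle_encode(values):
    # Collapse runs of identical values into [value, count] pairs.
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def rle_decode(runs):
    return [v for v, n in runs for _ in range(n)]

scanline = [10, 10, 10, 11, 12, 12, 12, 12, 200]
packed = rle_encode(delta_encode(scanline))
assert delta_decode(rle_decode(packed)) == scanline  # bit-exact: nothing lost
```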

What you are talking about, perceptual losslessness, is an entirely different thing, and it is not used by Nvidia GPUs internally because the behaviour would be unpredictable for developers. E.g., it is common to use render-to-texture and other render targets to store lighting information, or to use textures as large lookup tables for complex mathematical functions. If lossy compression were used, the entire display could be corrupted.

And don't forget, AMD also uses very similar delta colour compression techniques. AMD made a big fuss about this for the Polaris and Vega launches. The only difference is that Nvidia started using such technology earlier in their architectures and are now somewhat more advanced. AMD is playing catch-up, but you can bet that Navi will have very similar compression techniques to the current Pascal cards. There are also hard limits on the effectiveness of lossless compression, as given by Shannon entropy theory. Nvidia are probably not that far from those limits, with AMD right on their heels; give it a couple of generations and they will have near-identical compression techniques.
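And the Shannon bound is easy to demonstrate: the entropy of the data puts a hard floor under how small any lossless coder can make it. A back-of-envelope sketch (illustrative numbers only):

```python
import math
from collections import Counter

def shannon_entropy_bits(data):
    # H = -sum(p * log2(p)): the minimum average bits per symbol that any
    # lossless coder can achieve for this symbol distribution.
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A flat framebuffer tile with long runs compresses well; noise barely at all.
flat_tile = bytes([10] * 60 + [11] * 4)
noisy_tile = bytes(range(64))
print(shannon_entropy_bits(flat_tile))   # ~0.34 bits/byte: highly compressible
print(shannon_entropy_bits(noisy_tile))  # 6.0 bits/byte: little left to gain
```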

EDIT: and to be clear, lossy (block-compressed) textures are permitted in DX, but they must be explicitly chosen by the developer. And their advantage is mainly in VRAM size rather than bandwidth, so their use is more limited (there can actually be a performance penalty).
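The VRAM saving is easy to quantify, though: BC1/DXT1, for example, packs each 4x4 pixel block into 8 bytes, a fixed 8:1 ratio against uncompressed RGBA8 (rough sketch, ignoring mip levels and alignment):

```python
# Footprint comparison for a texture stored as RGBA8 vs BC1/DXT1.
def texture_bytes(width, height, bc1=False):
    if bc1:
        blocks = (width // 4) * (height // 4)  # assumes 4-aligned dimensions
        return blocks * 8                      # 8 bytes per 4x4 block
    return width * height * 4                  # 4 bytes per pixel (RGBA8)

print(texture_bytes(2048, 2048))            # 16,777,216 bytes (16 MiB)
print(texture_bytes(2048, 2048, bc1=True))  #  2,097,152 bytes (2 MiB): 8:1
```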


Here are the details of Nvidia's truly lossless delta colour compression:
https://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/3
 
This is BS. Nvidia's compression is lossless.

And if there were lossy compression you wouldn't see washed-out colours; there would be colour artifacts, banding, etc. The "washed out" image quality is just Nvidia's choice not to push the saturation and contrast to the limits like TVs in showrooms do. A few minutes setting vibrance in the settings and it will match AMD.

All right. Fair enough and I accept it. One can indeed play with the saturation/brightness/contrast settings in order to get the AMD picture on a GeForce card.
But what about the missing leaves on the trees?
 
That looks similar to World of Tanks... The Vega 64 renders twice as much foliage on the same settings, to the point that it is more difficult to aim behind bushes than before.
But I prefer the look of it.
 
I'd say they are within a few % of each other; Vega 56 is maybe slightly faster than the 1070, by under 5%.

As for the 1080 Ti, it's seen as this fantastical gaming monster. It's a powerful GPU, but it's only 30% faster than the 1080; that's significant but not huge. The difference between the 1060 and the 1070 is nearly 40%.

Oh I entirely agree, I was merely replying to cainer's opinion that Nvidia are light years ahead when they're not.
 
In pure numbers, Vega 56 usually beats the 1070, Vega 64 usually beats the 1080. The only thing AMD can't touch is the Ti. They're not that far behind, but it will probably take a herculean effort to bridge that gap. We'll see what happens with Navi I guess.

But the 10** range was out long before Vega; AMD were very late to the party.

Oh I entirely agree, I was merely replying to cainer's opinion that Nvidia are light years ahead when they're not.

Can you point out to me where I said that, as I don't remember saying that?

Thanks
 
But the 10** range was out long before Vega; AMD were very late to the party.

And? You're assuming that Nvidia are holding onto some massive trump card that they'll play if AMD gain ground back. They probably do, because sure as hell the GTX 1100 series isn't going to be a mind-blowing leap in performance. But that's also not to say that AMD didn't already know some of Vega's failings and address them when Navi development started.

We don't know what Turing will bring, and we don't know what Navi will bring. What we do know is that right now, in performance terms, Vega can more or less keep up with all but the halo GTX in the gaming market. That doesn't give Nvidia an insurmountable lead.
 
Well, Nvidia (and I'm no fanboi) have had much longer to prepare their next release; the 1080 Ti has been out for so long that they should have something that's significantly better, and if they don't then it's right to question why not.
But AMD (and I am a fanboi) just seem to be coming up second best for too long, and I see no change in that.
 
AMD were late to a party which continues even now. So it doesn't really matter when you arrive at the party - what matters most is how long the party goes on.
If Turing is 12nm and is released soon, but Navi is 7nm and is released 6 months later, I guess nVidia is doomed.

I see no reason why they would release a new series on the rebranded 16nm process so close to mass availability of the 7nm processes from all of GlobalFoundries, Samsung and TSMC.

That looks similar to World of Tanks... The Vega 64 renders twice as much foliage on the same settings, to the point that it is more difficult to aim behind bushes than before.
But I prefer the look of it.

Every normal human being would prefer better looks, especially in computer graphics, which in most cases are so mediocre that you wanna grab the monitor and throw it out of the window lol

This also means that all FPS reviews are wrong and should be taken with a grain of salt.
 
All right. Fair enough and I accept it. One can indeed play with the saturation/brightness/contrast settings in order to get the AMD picture on a GeForce card.
But what about the missing leaves on the trees?

Certainly nothing to do with drivers.
The different detail level is related to an LoD (level of detail) system within the game engine, which is sensitive to exact distances, FoV, angles and other parameters. Nvidia have no control over that kind of thing from the driver level.

Unless the game ships with an explicit benchmarking tool that aims to render the exact same frames, any comparison is bound to be flawed.
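To illustrate how touchy that is, here is a completely hypothetical LoD selector (made-up thresholds and FoV scaling, not any particular engine) where half a metre of camera drift flips the detail level and the leaves vanish:

```python
import math

LOD_CUTOFFS = [20.0, 60.0, 150.0]  # assumed engine-side thresholds, in metres

def select_lod(camera_pos, object_pos, fov_deg):
    distance = math.dist(camera_pos, object_pos)
    # Many engines scale the effective distance by FoV: zooming in raises detail.
    effective = distance * math.tan(math.radians(fov_deg) / 2)
    for lod, cutoff in enumerate(LOD_CUTOFFS):
        if effective < cutoff:
            return lod          # 0 = full detail, higher = less foliage
    return len(LOD_CUTOFFS)     # beyond the last cutoff: lowest detail

# Two "identical" benchmark runs whose camera paths differ by under a metre:
print(select_lod((0, 0, 0), (0, 0, 74.0), 78))  # LOD 1
print(select_lod((0, 0, 0), (0, 0, 74.7), 78))  # LOD 2: the leaves disappear
```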
 
AMD were late to a party which continues even now. So it doesn't really matter when you arrive at the party - what matters most is how long the party goes on.
If Turing is 12nm and is released soon, but Navi is 7nm and is released 6 months later, I guess nVidia is doomed.

I see no reason why they would release a new series on the rebranded 16nm process so close to mass availability of the 7nm processes from all of GlobalFoundries, Samsung and TSMC.

Every normal human being would prefer better looks, especially in computer graphics, which in most cases are so mediocre that you wanna grab the monitor and throw it out of the window lol

This also means that all FPS reviews are wrong and should be taken with a grain of salt.

Maxwell, err, Pascal, err, Turing is a 12nm card, not 7nm.
Actually, AMD is the first one who will bring both GPU and CPU to 7nm.
 
@cainer , AMD's problem isn't the tech alone, it's the entire graphics product. Vega's not that bad in its own right, but when AMD seemingly used every single die they produced regardless of quality, ramped the required power draw through the roof to ensure even the dodgiest of dies worked, and then stuck a ridiculously high price tag on the thing so the price/performance ratio was just wrong (why would you pay GTX 1080 money for GTX 1070 performance?), Vega as a gaming card was a total mess. Good thing it can mine like a mother, so it still sold well and AMD got their money back.

But look at the plethora of undervolting and tuning guides and results out there and you'll see what Vega could, nay should, have been: Vega 56 approaching 1080 performance whilst drawing less power than a 1070? That's impressive. If AMD did that with a price to match then RX Vega would have been a very different story. It'll be interesting to see how Vega Instinct performs in data centers and machine learning when put against Titan V.

But saying AMD just "seem to be coming up second best for too long, and I see no change in that" seems a little defeatist and melodramatic. AMD haven't been destroyed by Nvidia; in fact AMD are holding their own in the midfield with the RX 500s, Vega when tuned will trade blows with the bigger boys, and the super top end is just bragging rights (which AMD enjoyed for a very long time with the 295X2). I'd even argue that basing AMD's current status on the fact they can't topple the 1080 Ti is a bit short-sighted.

Let's see what happens with Turing and Navi.
 
And? You're assuming that Nvidia are holding onto some massive trump card that they'll play if AMD gain ground back. They probably do, because sure as hell the GTX 1100 series isn't going to be a mind-blowing leap in performance. But that's also not to say that AMD didn't already know some of Vega's failings and address them when Navi development started.

We don't know what Turing will bring, and we don't know what Navi will bring. What we do know is that right now, in performance terms, Vega can more or less keep up with all but the halo GTX in the gaming market. That doesn't give Nvidia an insurmountable lead.


AMD is a long way behind in performance per watt and performance per mm² of die area. Saying that the Vega 64 keeps up with a 1080 is pointless when its power consumption and die area are closer to the 1080 Ti's.

Nvidia could sell the 1080 Ti for less than the cost of a Vega 64 and still make more profit.

Vega 64 can only be compared to the 1080 Ti in terms of performance per generation. In fact, closer still would be Vega 64 vs Titan Xp and Vega 56 vs 1080 Ti. That is a level playing field in terms of die size (price-wise, of course, it is not; Nvidia enjoys much higher margins).
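Rough numbers to back that up (board power and die sizes are public specs; the relative performance figures are ballpark assumptions on my part, not benchmarks):

```python
# Relative performance normalised to GTX 1080 = 1.0 (assumed, not measured).
cards = {
    #  name:       (rel. perf, board power W, die area mm^2)
    "GTX 1080":    (1.00, 180, 314),
    "GTX 1080 Ti": (1.30, 250, 471),
    "Vega 64":     (1.00, 295, 486),
}

for name, (perf, watts, area) in cards.items():
    print(f"{name:12s} perf/W x1000: {perf / watts * 1000:.2f}"
          f"  perf/mm2 x1000: {perf / area * 1000:.2f}")
# Vega 64 lands well below both GeForces on both metrics, despite
# matching the 1080 on raw performance.
```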
 
@LePhuronn Raja himself, shortly before the launch of Vega, complained on stage that we all like numbers...
I guess this is the reason why they pushed for higher power, so the FPS meter moves north ;)
 
AMD is a long way behind in performance per watt and performance per mm² of die area. Saying that the Vega 64 keeps up with a 1080 is pointless when its power consumption and die area are closer to the 1080 Ti's.

Don't start with the "but the die is huge so we have to compare it to the Ti" argument again dude, because it's bullcrap - or do you have a lot of people around you saying "yes D.P., size DOES matter"?

True, it does illustrate where AMD are deficient because it took a big die and lots of power to get there (although it actually shouldn't have if they were more discerning with their dies, as the undervolting results show), but AMD still got there. And they can only improve upon it.

Random comparison: the USSR were seen as a legitimate threat to the US during the Cold War because their technology could more or less compete. It didn't matter that the Russians competed through sheer brute force and massive, clunky machines versus the comparatively slick and sophisticated US tech; what mattered was that the Russians could still do it and were not dismissed because of it. So yeah, Nvidia may have the slick, power-efficient US tech and AMD may have the beastly, brutish and brash Russian vibe, but AMD are still there (more or less). It doesn't matter how they do it, they do it.

Do a 7nm refresh of Vega, be more discerning with the dies and yields, set power requirements accordingly and you'd see a Vega that's much more refined. And, again, let's see what Navi can do; here's hoping it catches Nvidia napping and forces them to release the killer tech they've clearly been sitting on for these 2 years as Maxwell Reprise stagnates everything around it.
 
Maxwell, err, Pascal, err, Turing is a 12nm card, not 7nm.
Actually, AMD is the first one who will bring both GPU and CPU to 7nm.


That is completely unknown. AMD have only stated an HPC part on 7nm, shipping in non-volume quantities next year. Nvidia haven't publicly stated anything, which is nothing new. Nvidia are fighting tooth and nail for HPC domination, and we know Volta was a stop-gap required due to contractual commitments. Nvidia will have a 7nm HPC GPU available as soon as it makes commercial sense. Volta is already old. It's no coincidence that we hear about both the Ampere and Turing architectures; one of these is HPC on 7nm, without doubt.
AMD haven't really said much about consumer 7nm GPUs, and the rumours are only of a mid-range part in H2 next year.

The most current rumour is that AMD will have a 12nm version of Polaris at the end of the year, so nearly 6 months behind Nvidia. They won't follow a 12nm GPU with a 7nm consumer GPU within a year.

My bet is neither AMD nor Nvidia will have a 7nm GPU before the end of next year, and both will release within a few months of each other, which is always the case.
 
AMD have had 12nm products launched since April this year, and they were probably in their laboratories for at least 6 months before that.
If they really wanted a shrunk 12nm Polaris, they would have released it alongside Ryzen.
 
Don't start with the "but the die is huge so we have to compare it to the Ti" argument again dude, because it's bullcrap - or do you have a lot of people around you saying "yes D.P., size DOES matter"?

True, it does illustrate where AMD are deficient because it took a big die and lots of power to get there (although it actually shouldn't have if they were more discerning with their dies, as the undervolting results show), but AMD still got there. And they can only improve upon it.

Do a 7nm refresh of Vega, be more discerning with the dies and yields, set power requirements accordingly and you'd see a Vega that's much more refined. And, again, let's see what Navi can do; here's hoping it catches Nvidia napping and forces them to release the killer tech they've clearly been sitting on for these 2 years as Maxwell Reprise stagnates everything around it.


This is just more BS, because the simple fact is Nvidia is a long way ahead, so whatever AMD does, such as lowering margins or increasing voltage and clocks, Nvidia can do the same.

There is a size limit for a commercially viable GPU, and a physical limit. Same with power. If you have to produce a GPU that is far bigger or more power-hungry to compete, your technology is a long way behind. It is relatively trivial to scale upwards in size to gain performance; it is much harder to gain performance within the same transistor count or die size.

And die size is of critical importance to the commercial viability of the GPU. Nvidia make far more profit per GPU sold because smaller dies have far higher yields and give more GPUs per wafer - costs increase at least quadratically with die size.
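A back-of-envelope yield model shows why (all numbers assumed: Poisson defect model, edge losses and binning ignored):

```python
import math

WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2  # 300 mm wafer
DEFECT_DENSITY = 0.001                     # defects per mm^2 (assumed)

def cost_per_good_die(die_area_mm2, wafer_cost=8000.0):
    dies_per_wafer = WAFER_AREA_MM2 / die_area_mm2             # ignores edge loss
    yield_fraction = math.exp(-DEFECT_DENSITY * die_area_mm2)  # Poisson model
    return wafer_cost / (dies_per_wafer * yield_fraction)

print(cost_per_good_die(314))  # ~GP104 (GTX 1080) die size -> ~$49 per good die
print(cost_per_good_die(486))  # ~Vega 10 die size          -> ~$89 per good die
```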

If Nvidia ever felt threatened they could start a price war and crush AMD right now.
 