I remember, I had an R9 295X2.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
NVIDIA have been milking us for the past decade or so. I am sure they have enough cash in reserve to pull out a beast that AMD cannot beat. However, if AMD brings affordable, reliable, high-performance cards, they may win the majority of customers, rather than just the ones who always chase the latest and greatest no matter the cost. I have faith in AMD but I am lacking the patience, since my GPU upgrade is due next gen. Hopefully they are on to another Ryzen, but in the GPU industry.
Big Navi will be fast, faster than a 2080 Ti, about 40% faster. But Ampere will be even faster. Nvidia know RDNA2 is good, very good. But Nvidia never lose... they will pull out all the stops and just go mahoosive to beat AMD.
I think he's right. When you look at the performance of the new Xbox at 1.8GHz with 52 CUs, it's between a 2080 Super and a 2080 Ti, so we know the performance is there even on a relatively small, cut-down GPU. And when you look at the PS5 clocking its 36 CU GPU to over 2.2GHz, we know the thing clocks.... There is a benchmark in the wild where an unknown AMD engineering sample crushed a very highly clocked 2080 Ti by 30%.
But Nvidia will stop at nothing to beat AMD, and they will with Ampere.
It doesn't matter, the important thing is AMD will bring truly great GPUs back to the fight.
All good, just one thing, 5700... not 5700XT. The comparison they used was 5700 vs RTX 2070; they both have 2304 shaders, while the 5700 XT has 2560 shaders.
Ah yes, thanks. Corrected. It just doesn't look right unless you add the XT sometimes.
You are correct. I think he's debating semantics regarding use-case scenarios. DLSS is not limited to just RTX. However, that is neither here nor there.
So I make a point, you counter the point, I counter your counter, and now suddenly I don't make sense? One last time: RTX is Turing's enterprise and research technologies forced onto the gaming sector as nothing more than an excuse to raise prices astronomically and to nickel-and-dime the user with lacklustre specifications. Ampere is doubling down on the fallacy because Nvidia have painted themselves into a corner.
If you disagree then fine, but I fail to see how you do not understand.
Which was faster than the cards they put the IP on. I bet those PPU cards would still run circles around those PhysX effects today.
I can see the arguments already... "But, but DLSS, you should be running it at 1080p and upscaling to 1440p because bar charts!!!!!"
DLSS requires the game developer to put it in the game, and it's not as if AMD don't have their own upscaling and sharpening technologies to boost performance. They do, and no one is going to complain the bar charts are fake because X or Y reviewer didn't use them.
Just as DLSS requires developers to make use of it, so will AMD's approach when using DirectML from Microsoft.
The question a developer asks themselves is: do I use DLSS for one vendor, or do I use DirectML and use it for everything?
Apparently DLSS 3.0 with Ampere works with anything that's running TAA, but for best results there's still a game driver required.
Yeah. Let's see how that turns out. It all sounds promising, but as usual I will believe it when I see it.
If they make game drivers for modern demanding titles, Nvidia could be onto a winner. I doubt they will be able to throw out game drivers for every single title.
Still has to be supported by the game engine apparently, but this is all complete conjecture atm and may not even be there! Or at least not at launch.
They will reveal it on launch and have it working by the time the 4000 series is out.
None of the AI in games is sophisticated enough that it can't be done on the CPU.
We are now looking at well over 250 watt GPUs, yet no one bats an eye, unless it's AMD of course.
I think some are forgetting a very serious aspect of diminishing returns. Nvidia lost their efficiency per mm² a long time ago. They have already reached a peak in how large a die can become before cost and price become consumer-prohibitive. They want to offset that by marketing RTX. Based on my observation, the smaller NV GPUs are, the less efficient they become, thus needing more transistors to maintain a competitive edge. No one really questions this method (in reviews) because "faster is faster".
You what? A 2080 Ti uses 250 watts on a 775 mm² 12nm chip, compared to the 5700 XT which uses 225 watts on a 251 mm² 7nm chip. How did you come to this conclusion about efficiency per mm²?
Area, not power.
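For anyone who wants to sanity-check that exchange, the per-area and per-watt figures can be roughed out from nothing more than the numbers quoted above. A minimal sketch, assuming a ~1.3x average performance lead for the 2080 Ti over the 5700 XT (my own ballpark from typical review averages, not a figure from this thread):

```python
# Die area and board power as quoted in the thread; the ~1.3x relative
# performance of the 2080 Ti over the 5700 XT is an assumed ballpark.
cards = [
    ("RTX 2080 Ti (12nm, TU102)", 1.30, 775, 250),
    ("RX 5700 XT (7nm, Navi 10)", 1.00, 251, 225),
]

for name, rel_perf, area_mm2, watts in cards:
    print(f"{name}: {rel_perf / area_mm2 * 100:.2f} perf per 100 mm², "
          f"{rel_perf / watts * 100:.2f} perf per 100 W")
```

On those assumptions Navi 10 comes out well ahead per mm² while the two cards are much closer per watt, which is exactly the "area, not power" distinction being argued.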
In a nutshell, more working chips per wafer = less cost, and less cost trickles down to consumer pricing. Nvidia can't do that, because their die sizes and their stock price mandate a higher price point.
All AMD needs to do is get close in performance while using a smaller die, which = more dies per wafer, which = lower cost. This is how AMD is beating Nvidia: not on performance but in manufacturing. They are R&D'ing GPU dies that get close to what Nvidia offers in a much larger die, yet save more per wafer doing so. They could therefore even pay more per wafer and have it offset by the higher yields. Pretty clever if you ask me.
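Purely as a back-of-the-envelope illustration of that yield argument, here is a rough dies-per-wafer and cost-per-good-die sketch. It uses the die areas quoted in this thread, but the wafer prices, defect density and yield model are entirely my own assumptions, not published foundry figures:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Common approximation for gross dies on a round wafer (edge loss included)."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2, wafer_cost_usd, defects_per_mm2):
    """Cost per *good* die using a simple Poisson yield model (assumed numbers)."""
    gross = dies_per_wafer(die_area_mm2)
    yield_fraction = math.exp(-defects_per_mm2 * die_area_mm2)
    good = gross * yield_fraction
    return wafer_cost_usd / good, int(good)

# Assumed wafer prices and defect density -- illustrative only.
for name, area, wafer_cost in [("Navi 10 (251 mm², 7nm)", 251, 9000),
                               ("TU102 (775 mm², 12nm)", 775, 6000)]:
    cost, good = cost_per_good_die(area, wafer_cost, defects_per_mm2=0.001)
    print(f"{name}: ~{good} good dies per wafer, ~${cost:.0f} per good die")
```

Even with the 7nm wafer assumed to cost half as much again as the 12nm one, the smaller die wins comfortably on cost per good die, which is the point about winning on manufacturing rather than raw performance.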
So will Big Navi be the chip to beat the 3080 Ti? I believe it will be competitive, but at a much lower production cost and asking price. Remember, this is Big Navi. We still have the Navi Killer yet to be announced and released, as well as seeing how AMD plays their cards with those console games.