Bring back SLI and CrossFire I say!
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Graphics aren't everything; frame rate for me is far more important than looks, and PC will still be the platform of choice for this.
I will save this here and remind you of this quote the next time you mention that AMD's image quality is better than Nvidia's and that it is one of the reasons you prefer AMD cards!
Good point. AMD's HUGE market share of 20% means piles of cash for R&D. They should be miles ahead of Nvidia, shouldn't they?
As are cards with an AIO cooler or HBM? I know what you mean, but a card is a card; Nvidia cards certainly are another matter when it comes to pricing... Dual-GPU cards are another matter though, really.
Same here, wasn't going to pay the ludicrous initial asking price.
True, and I even had one!! Not til they dropped a fair bit though!
Happens. I am always happy to put my hands up and say I got it wrong.
Ah, you are wrong there TNA. Shankly always said that the difference in image quality was down to the default colour settings.
Except that the "dedicated hardware" isn't delivering such great performance either, with a performance penalty ranging from noticeable to a full halving of the frame rate.
Happens. I am always happy to put my hands up and say I got it wrong!
It is less about the shrink and more about the number of transistors. Usually you get 400+mm^2 GPUs, and generally the only way to increase performance from there is to do a node shrink so you can add in more transistors.

This go around for AMD there is no 400+mm^2 GPU, so all they really need to do to get a similar performance uplift to what a node shrink gives is release a 400+mm^2 GPU. I think the main thing stopping AMD from releasing such a card with RDNA 1 is that they would need to reduce clock speed to get it within a 300W power envelope.

With RDNA 2, provided the +50% perf/watt figure is accurate, they can get a doubling of Navi 10 into a 300W envelope at similar clock speeds, which should lead to a significant performance uplift provided workloads can scale across that many CUs.
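A rough back-of-envelope check of that reasoning, as a sketch only - the ~225W board power figure for the 5700 XT and the assumption that power scales linearly with CU count at fixed clocks are mine, not from the post above:

```python
# Back-of-envelope check of the "double Navi 10 inside ~300W" argument.
# Assumptions (illustrative): RX 5700 XT = 40 CUs at ~225W total board power,
# and power scales roughly linearly with CU count at fixed clocks.

NAVI10_CUS = 40
NAVI10_TBP_W = 225          # RX 5700 XT total board power
PERF_PER_WATT_GAIN = 1.50   # AMD's claimed +50% perf/watt for RDNA 2

doubled_cus = 2 * NAVI10_CUS
power_at_rdna1_efficiency = 2 * NAVI10_TBP_W                         # ~450W, not viable
power_at_rdna2_efficiency = power_at_rdna1_efficiency / PERF_PER_WATT_GAIN

print(f"{doubled_cus} CUs at RDNA 1 efficiency: ~{power_at_rdna1_efficiency:.0f}W")
print(f"{doubled_cus} CUs at +50% perf/watt:    ~{power_at_rdna2_efficiency:.0f}W")
# ~300W, i.e. roughly double Navi 10 inside a 300W envelope, provided the
# workload actually scales across all those CUs.
```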
For all their faults, Turing's ray tracing capabilities are a generational leap; calling it just rubbish is just rubbish, even if things are lacking gaming-wise at the moment - doing it on the shaders is at least 6x slower like for like. I'll be surprised if the consoles outperform the 2080Ti's RT performance unless AMD have some additional tricks up their sleeve - the approach is basically alleviating some of the reasons why shaders are so poor for it rather than going for the best possible solution.
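To put that ~6x figure into frame-time terms, here is a quick illustrative calculation - the 3 ms per-effect cost is a made-up example number; only the ~6x shader penalty comes from the post above:

```python
# Illustrative frame-time budget, assuming a ~6x like-for-like penalty for
# doing ray tracing on the shaders. The 3 ms effect cost is hypothetical.

FRAME_BUDGET_60FPS_MS = 1000 / 60   # ~16.7 ms per frame at 60 fps
rt_cost_dedicated_ms = 3.0          # hypothetical cost on dedicated RT hardware
shader_penalty = 6.0                # like-for-like slowdown on the shaders

rt_cost_shaders_ms = rt_cost_dedicated_ms * shader_penalty

print(f"Frame budget at 60 fps:       {FRAME_BUDGET_60FPS_MS:.1f} ms")
print(f"RT effect on dedicated HW:    {rt_cost_dedicated_ms:.1f} ms")
print(f"Same effect on shaders (~6x): {rt_cost_shaders_ms:.1f} ms")
# 18 ms for a single effect already blows the 16.7 ms budget, which is why
# shader-only ray tracing has to be cut back so heavily.
```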
It is going to be funny how quickly some people change their tune once AMD has a decent RT solution and games start making proper use of such features rather than just token use for specific effects.
AMD claim up to a 50% improvement in performance per watt. It's exactly what you said: how accurate this statement is will decide a lot. Does RDNA 2 only reach this figure in some lab test that has no real-world application?
I hope it's accurate and even erring on the cautious side.
How and why? You do realise that you are comparing an APU to a full desktop GPU.
I just don't see it happening, and released trailers have already shown that devs need to use RT on console wisely.
My point of view on all this: yes, the next generation of consoles will be a nice upgrade over the current consoles, but they will fall short of what PC GPUs now have to offer.
RDNA 2 on the console will not be the same as RDNA 2 on the desktop; the desktop will be a much higher-clocking GPU with far greater performance for both RT and normal gaming.
You must have sore arms.
Happens. I am always happy to put my hands up and say I got it wrong!
You can't compare a crappy blower, stock-clocked reference card to a non-reference custom OC'd card, unless we're doing that now?
If so, I apologise.
The PS5 shouldn't beat the 2080Ti in RT, as from what we've seen its RT is a blurry mess, as it's only doing it at 1080p.
And there's everyone slamming DLSS for its pi$$-poor, detail-losing blurry quality!
I also wonder how they can say it's a bad solution? What do they have to compare it to? Apart from cards with no Ray Tracing hardware, and even then the 2060 mops the floor with the 1080Ti in Ray Tracing.
True, and I even had one!! Not til they dropped a fair bit though!
Before I forget, as you have mentioned it a few times in this thread when it comes to RT comparisons and the consoles - I am not that fussed about it for the time being; I want to see the performance standard using the 2080Ti in fps and where the new hardware kicks in, and leave the RT element out for now.
I would be happy enough with a "better than 2080Ti by x%" figure, so that you can play at 4K or 1440p comfortably, whether it be on console or PC.
Sure, it's a lot better than having not even the slightest hardware acceleration.
Not sure how you can say that Nvidia's solution is bad?
Out of Navi 10's 250 mm^2, not everything is processing units.
Aye, adding transistors can really make a difference. If you look back at the 8800GTX vs the 7900GTX, there was no die shrink but they more than doubled the size of the chip and the number of transistors.
But performance didn't double; it's not a linear improvement. If AMD make a 500 mm^2 chip, it won't be double the performance of the 5700XT.
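A simple Amdahl's-law style sketch of why that happens - the 80% "scalable fraction" below is purely an illustrative assumption, not a measured figure:

```python
# Why doubling the shader array doesn't double frame rate: only part of the
# frame time scales with CU count; the rest (geometry, bandwidth, CPU, etc.)
# does not. The 0.80 fraction is a made-up illustrative number.

scalable_fraction = 0.80   # hypothetical share of frame time limited by the CUs
cu_scaling = 2.0           # e.g. 80 CUs vs Navi 10's 40

speedup = 1.0 / ((1.0 - scalable_fraction) + scalable_fraction / cu_scaling)
print(f"Theoretical speedup from 2x CUs: ~{speedup:.2f}x")   # ~1.67x, not 2x
```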
Quite. So unless nV did an Intel and drastically downsized their R&D, you wouldn't really expect AMD to leapfrog them.
Well, for all Nvidia's faults and failings, and people have good reason to despise them, you can't say they are bad engineers.