> The problem with waiting for AMD is that they're going for the value play again. Which means definitely lower performance for ray tracing; DLSS or an equivalent will be missing or strictly inferior (let's not forget DLSS 1.0 itself was a disaster), since there are no tensor cores; and the rest of the software stack is also going to be worse or missing (RTX Voice, CUDA, etc.). Plus some less-covered but usual omissions, like how poor their video decoding is (important for AVR + PC, or HTPC users). I just don't see them simply being cheaper, and maybe having more VRAM, as rivalling all those advantages. Hope they prove me wrong; they have 2 weeks to start leaking stuff.

They're in the consoles. Both of the main ones. That could be fairly important/crucial.
> Ah come on man. We are talking about a much better GPU than the current best for around half the price. Your post makes you seem like a bitter AMD fanboy!

And I never said anything to the contrary, but so far Ampere doesn't look like the stratospheric leap we suspected, and actually leaves itself (bar the 3090) as a very attainable, nay beatable, target for AMD.
> And I never said anything to the contrary, but so far Ampere doesn't look like the stratospheric leap we suspected, and actually leaves itself (bar the 3090) as a very attainable, nay beatable, target for AMD.

Agreed. nV might have had a clean sweep were it not for the very stingy 6/8/10 GB VRAM.
> It's about efficiency and throughput. People need to appreciate this is why SMT is a thing: it exists to maximise utilisation of the CPU core by replicating some of the front end. In RDNA2, texturing and RT operations happen on the same pipeline, but those operations happen at different stages of the pipeline. So what AMD is doing is making sure you have as much pipeline utilisation as possible. This fits in with the DXR 1.1 specification which MS recently released (inline RT), which makes sense as that will be running on the Xbox Series X.

I'm not disputing any of that, but it will certainly be worse in terms of RT performance as a result. Specialised hardware is simply better, but it has the downside of being more expensive and limited in application. No different here.
> Also, looking at die area is not really useful this generation. Samsung 8nm is a derivative of their 10nm process node; TSMC 7nm is superior in density and power characteristics, so AMD might not be as far behind in transistor count as we think. Remember, the Radeon VII and RX 5700 XT were early-generation 7nm products, so it wouldn't surprise me if RDNA2 ups density even more. It's quite clear the Xbox Series X's 56 CU GPU is relatively densely packed (those 56 CUs take up under 200 mm² of the SoC, IIRC) and it still runs at a constant 1.8 GHz or thereabouts!

The reason I mentioned the die size was to back up the judgement that they're making a value play. Smaller dies allow for that sort of thing, and in this case they wouldn't have a choice, unlike with the 5700 XT. I don't care about TSMC vs Samsung node battles.
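To put rough numbers on that density point, here's a quick back-of-envelope sketch. The 56 CUs in ~200 mm² figure is the poster's IIRC estimate, and the 150 mm² uncore allowance (memory controllers, caches, display/media blocks) is purely an assumed value for illustration:

```cpp
#include <cstdio>

int main() {
    // Poster's IIRC estimate: Xbox Series X packs 56 CUs into ~200 mm2 of the SoC.
    const double xbox_cu_area_mm2 = 200.0;
    const double xbox_cus         = 56.0;
    const double big_navi_cus     = 80.0;  // rumoured top RDNA2 configuration
    const double uncore_mm2       = 150.0; // assumed allowance for everything that isn't CUs

    double mm2_per_cu  = xbox_cu_area_mm2 / xbox_cus;  // ~3.6 mm2 per CU
    double shader_area = mm2_per_cu * big_navi_cus;    // ~286 mm2 for 80 CUs
    printf("~%.1f mm2/CU -> 80 CUs ~= %.0f mm2 of shaders, ~%.0f mm2 total die\n",
           mm2_per_cu, shader_area, shader_area + uncore_mm2);
    return 0;
}
```

That lands a hypothetical 80 CU part around the mid-400 mm² mark, i.e. a large but not extreme die, which is consistent with the density argument above.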
> They're in the consoles. Both of the main ones. That could be fairly important/crucial.

Them being in the consoles will certainly help, but up to a point. In terms of RT, everything has been happening with Nvidia's help for the past 2 years, including all the RT engine development. Devs don't really have to do much, just allow settings sliders for RT (ray counts, RT resolution, etc.), all of which is trivial to implement, and then we can put our extra hardware power to use. Consoles will still target 4K 30 fps at best (with ray tracing), so it's not like they're going to go overboard with it.
> Them being in the consoles will certainly help, but up to a point. In terms of RT, everything has been happening with Nvidia's help for the past 2 years, including all the RT engine development. Devs don't really have to do much, just allow settings sliders for RT (ray counts, RT resolution, etc.), all of which is trivial to implement, and then we can put our extra hardware power to use. Consoles will still target 4K 30 fps at best (with ray tracing), so it's not like they're going to go overboard with it.

How many devs are going to spend extra effort pushing more RT on the PC platform? If AMD-powered consoles don't have the RT grunt, how many devs are going to work extra hard to add more just for the PC version?

If it's so trivial and slider-based, then really you'll get a few extra rays being cast, a bit higher IQ and... that's about it.
They probably won't rework scenes to add extra dynamic lighting that the console version doesn't have, for instance. Unless nV pays them to.
So your extra nVidia RT hardware is going to give you - not much? - in most console ports. A bit higher IQ.
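To make the "sliders" argument concrete, here is a hypothetical sketch of the kind of scalable RT settings block a port might expose; every name and number in it is invented for illustration, not taken from any real engine:

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical RT quality knobs of the kind a PC port might expose.
struct RtQualitySettings {
    int   raysPerPixel;      // more rays -> cleaner reflections/shadows
    float rtResolutionScale; // fraction of render resolution used for RT passes
};

// Scale a console baseline by an assumed hardware budget factor
// (1.0 = console-class RT throughput, 2.0 = roughly double, and so on).
RtQualitySettings scaleForHardware(const RtQualitySettings& base, float budget) {
    RtQualitySettings s = base;
    s.raysPerPixel      = std::max(1, static_cast<int>(base.raysPerPixel * budget));
    s.rtResolutionScale = std::min(1.0f, base.rtResolutionScale * budget);
    return s;
}

int main() {
    const RtQualitySettings console{1, 0.5f}; // assumed console baseline
    const RtQualitySettings pc = scaleForHardware(console, 2.0f);
    printf("PC preset: %d rays/pixel at %.0f%% RT resolution\n",
           pc.raysPerPixel, pc.rtResolutionScale * 100.0f);
    return 0;
}
```

Which is exactly the point being made: turning those dials up buys image quality, not new lighting that the scene was never authored with.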
How do you know that, though? MS literally wrote the DXR 1.1 specification around their consoles; literally every multiplatform game will be coded around that form of RT. You do also appreciate that BOTH AMD and Nvidia use specialised hardware for parts of the RT calculations; it's only part of the graphics pipeline which is reused with AMD.
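For reference, DXR 1.1 (inline raytracing) is exposed through D3D12 as a feature tier, so a PC title can detect it at runtime. A minimal sketch, assuming you already have an ID3D12Device; the enums and structs are the real D3D12 API, but all surrounding scaffolding is omitted:

```cpp
#include <windows.h>
#include <d3d12.h>

// Returns true if the device supports DXR tier 1.1, i.e. inline raytracing
// (RayQuery usable from ordinary shader stages), the model DXR 1.1 introduced.
bool SupportsInlineRaytracing(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5)))) {
        return false; // query not understood: no DXR at all
    }
    // TIER_1_0 = DXR via dedicated raytracing pipelines only;
    // TIER_1_1 adds inline raytracing on top.
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_1;
}
```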
> Have you also not considered that Nvidia's method will use more die area, but AMD can increase RT power just by scaling up normal shaders? The problem with having parallel hardware is syncing it all. Turing could be bottlenecked by its RT pipelines holding up its rasterisation pipelines.

Well, we've already put Turing through the test and it seems to come out looking good. Can't really say that about RDNA 2 yet. The scaling issue is tricky, but right now RDNA 2's best looks to be 80 CUs, limited to GDDR6 and a 384-bit bus, which puts it somewhere around 1.7x a 5700 XT, or about 20% faster than a 2080 Ti. And that's just a best case on paper. I don't see how it grows bigger than 80 CUs, at least until the next merry-go-round. So either way it's going to be hard for it to tackle the 3080, let alone get past it.
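The "1.7x / about 20%" figures above work out as follows; note the 0.85 scaling efficiency and the ~1.42x 2080 Ti-over-5700 XT ratio are assumed values chosen to reproduce the post's maths, not measured data:

```cpp
#include <cstdio>

int main() {
    const double cus_5700xt    = 40.0;
    const double cus_big_navi  = 80.0;
    const double scaling_eff   = 0.85; // assumed: doubling CUs never doubles fps
    const double r2080ti_ratio = 1.42; // assumed 2080 Ti performance vs a 5700 XT

    double vs_5700xt = (cus_big_navi / cus_5700xt) * scaling_eff; // ~1.7x
    double vs_2080ti = vs_5700xt / r2080ti_ratio;                 // ~1.2x
    printf("~%.2fx a 5700 XT, ~%.0f%% faster than a 2080 Ti\n",
           vs_5700xt, (vs_2080ti - 1.0) * 100.0);
    return 0;
}
```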
> So either way it's going to be hard for it to tackle the 3080, let alone get past it.

Look at that Scott Herkelman tweet. That's not an accident.
Yes but the drivers for AMD cards are the sticking point for me... AMD need to be more consistent with their drivers.
> People always say this about AMD but what do you actually mean? They work fine. Better than fine actually.

Agreed. It's turned into something of a 'fake news' story for me, as I've never had any problems personally and people never seem to qualify the comment with details.
A good 5 years ago, AMD had a rep for very bad drivers, and it's something that has stuck around; even after winning the driver of the year award a good few years running, they still get a bad rep.

Sure, Navi had issues on release that are now fixed, but it wasn't every user having them either; it was a select few with certain configurations.

AMD drivers are way above what Nvidia has in every department, but people will still disagree.
> Yes and reading the forums...

I would like to know what experience that is, so I can put your comment into some proper perspective.
Matching the 3080 is all they really have to do; add some more VRAM on top and we're good.
If the consoles don't have very much RT power then perhaps we'll see another cycle on PC where only a handful of games actually use RTX, still.
No game I play uses RTX or DLSS, so all Nvidia's Jensen did was address someone else.
RTX is still a gimmick.
This type of change with RT takes a lot of time, because the big market is below $300, not at $500.

It's why consoles can change this down the line, thanks to AMD bringing Big Navi there.

Yeah, people bought the RT hype Jensen was selling; it was one of the most hard-sell presentations I've seen.
Big Dog Navi coming soon
> Can't wait for this "AMD Navi 23 'NVIDIA Killer'" card.

Why do you hate AMD so much?
> I didn't really think much of the Nvidia event. There were some interesting small tidbits and the performance of the 3080 looks good. However, so many things made me cringe. The silly "streamers" playing 8K and all you see is this fake or ignorant reaction. The 3090 out of the oven. The futuristic talk. So many cringe moments, and then all the overselling just made me want to turn it off. I've been a salesperson myself, and there is good marketing and there are the tryhards. Jensen managed to squeeze into the latter category with me.

I thought their marketing overall was very good, considering it was from his kitchen. Lol.