
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Status
Not open for further replies.
[attached images]
This is from a while ago too. 30% faster than a 2080 Ti. Nvidia late again, and hot and loud.
Looks like a fake, smells like a fake, it's fake.
 
Did someone hack 4k8k? He's... he's been sensible lately :O WHAT HAPPENED? :D

But you're right, I think the 30x0 series and its pricing are the biggest evidence that AMD has something.

Well, it's not AMD that has something. It's more like Sony and Microsoft with their Xbox and PS5 for £500.

That's the reason for the price reduction, not the 6-series :p:p:p:D:D:D Nvidia is really covering all the bases. Don't think they are bothered by Navi tbh.
 
Based on AdoredTV's video this stuff is decided as soon as they start designing these chips.
This was also confirmed on a recent MLID Broken Silicon with SemiWiki founder Daniel Nenni. He went to great lengths about how designs are node-specific and you can't jump between them at a technical level, plus there are many NDAs involved to keep each foundry's fab secrets locked away. It's also prohibitively expensive to develop for two nodes simultaneously, so Samsung 8nm was decided on a long time ago for the gaming line. It also pours cold water on the notion of a Super refresh later on with TSMC 7nm. Nvidia certainly have the money and capacity to start over and turn around quickly (the top A100 is on TSMC 7nm already, so there's a head start), but it's unlikely.

As an aside, Nenni also explained why the rumoured clock speeds of Ampere dropped as time went on, which also applies to why 5GHz Zen 2 CPUs never materialised. Essentially, early engineering samples for development work are always golden samples, because that's just what's coming off the wafer when dev yields are so low. As development continues, yields increase and you simply make more chips, so you get a broader idea of what clocks are really going to be, and those average clocks (and therefore your final specs) always come down.

It was a really good episode.
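The yield explanation above is a classic selection effect, and it can be sketched with a tiny Monte Carlo. Every number here is hypothetical, purely to illustrate the mechanism: we assume each die's maximum stable clock is normally distributed, and model low early yield as "only the best bins work".

```python
import random

random.seed(0)

# Hypothetical distribution for illustration: each die's maximum stable
# clock drawn from a normal distribution around 1.8 GHz, 150 MHz spread.
dice = sorted(random.gauss(1.8, 0.15) for _ in range(1000))

# Early in development only a small fraction of dice work at all, and those
# tend to be the best silicon, i.e. the "golden samples" rumours are built on.
early_working = dice[-50:]      # ~5% yield: only the top bins function

# On a mature process most dice work, so the observed average clock drifts
# down toward the true population mean even though nothing got "worse".
mature_working = dice[-900:]    # ~90% yield

early_avg = sum(early_working) / len(early_working)
mature_avg = sum(mature_working) / len(mature_working)
print(f"average clock, early golden samples: {early_avg:.2f} GHz")
print(f"average clock, mature yields:        {mature_avg:.2f} GHz")
```

The early average always comes out higher than the mature one, which is exactly the pattern Nenni describes: rumoured clocks drop over time without anything actually regressing.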
 
I will watch it.

@Rroff Just pinging you in case you were interested.
 

Was literally just reading this and about to reply.


Pascal existed on both Samsung 14LPP and TSMC 16FF; people vastly underestimate Nvidia's virtual design capabilities these days.
 
People looking at the 3080 just as a way to get a faster, cheaper 2080 Ti are missing the forest AND the trees.

It's two years later and Nvidia have to launch a new GPU. Let's say the 3080 were actually a 3080 Ti selling for $1,200 again, and the 3090 were instead the Ampere Titan at $2,999. How are they going to sell a 15% faster 2080 Ti for the same price while bringing almost no new features to the table? At least with the 2080 Ti they could make the b.s. claim of adding RTX and sacrificing the gains of a generation in order to advance rendering techniques. Okay, cool, then what? They can't do the same trick twice. And now they have both the consoles and RDNA 2 breathing down their necks, and they'd have a launch even ******** than Turing? LOL! Nvidia COULDN'T have sold you this 3080 for the same price as the 2080 Ti, because it's just 15% faster with an OC! It's not out of the goodness of their hearts; it's because it's a pathetic increase! Even at "$699" it's not a steal, because that's just what the baseline price of a flagship is supposed to be (which this isn't). So it only looks somewhat appetising because Turing has been so god-awful.

If you look at the proper context, all that Nvidia really delivered is the minimum not-completely-awful product. That's it.

Can AMD top this? Absolutely. Will they? Who knows; at the end of the day it's a question of whether they beat Nvidia's bottom (line) red, or make more money making more CPUs. I'm sure they have their pride, but I don't know if that will override their greed; they're still a corporation. If they go for 384-bit GDDR6 then it's just going to be another flatline generation.
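The pricing argument above is really a perf-per-dollar claim, and it can be made concrete with back-of-the-envelope arithmetic. All figures here are assumptions for illustration: the 2080 Ti at its $1,199 launch price as the 1.00x performance baseline, and the 3080 taken as 1.30x.

```python
# Back-of-the-envelope perf-per-dollar sketch; all numbers are assumptions.
cards = {
    "RTX 2080 Ti, $1199":             (1.00, 1199),
    "RTX 3080, $699":                 (1.30, 699),
    "RTX 3080, $1199 (hypothetical)": (1.30, 1199),
}

baseline = 1.00 / 1199                      # 2080 Ti performance per dollar
value = {name: (perf / price) / baseline    # normalised so 2080 Ti = 1.00x
         for name, (perf, price) in cards.items()}

for name, v in value.items():
    print(f"{name}: {v:.2f}x the 2080 Ti's perf per dollar")
```

Under these assumptions the 3080 at $699 is about 2.2x the 2080 Ti's performance per dollar, while the hypothetical $1,199 3080 would only be 1.3x, which is the "couldn't do the same trick twice" point in numbers.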
 

The RTX 3080 is 30%, not 15%, faster than the RTX 2080 Ti, and it could sell for $1,200 no problem; it's Nvidia, after all, with its endless base of loyal fans.
 
25-30% faster at 4K. OC the 2080 Ti and suddenly that gap closes dramatically; an OC on the 3080 FE is almost useless.
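How much the gap closes depends on overclocking headroom, which can be sketched with simple arithmetic. The headroom figures below are assumptions for illustration, not measured results.

```python
# Illustrative arithmetic only: the overclocking headrooms are assumptions,
# not measured figures.
stock_gap = 1.30   # assume the 3080 is 30% faster than a 2080 Ti at stock 4K
ti_oc = 1.12       # assume a manual OC gains a 2080 Ti ~12%
fe_oc = 1.04       # assume a 3080 FE OC gains only ~4% ("almost useless")

oc_gap = stock_gap * fe_oc / ti_oc
print(f"gap once both cards are overclocked: {(oc_gap - 1) * 100:.0f}%")
```

With these assumed headrooms the on-paper 30% gap shrinks to roughly 21% once both cards are overclocked, which is the "closes dramatically" point.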
 
Nobody knew if it was a 3070, 3080, and 3090 until it was leaked to high heaven. And even then, Nvidia didn't confirm it until the actual launch event...

Why would AMD do any different?

The problem is, many are assuming we will get an RX 6900 card this year. The fact AMD haven't specifically confirmed it makes me think they might be considering a delay for the top RDNA 2 GPU. Perhaps there will just be an RX 6700 and RX 6800 this year.

The RTX 3080 is out now, so there's not much reason to hold back the product names they intend to launch; if not now, then certainly once the RTX 3070 has been released.
 
Here's the episode: https://www.youtube.com/watch?v=O4DgXtxkZNg — I think the relevant part is 57 minutes in, but there are timestamps and chapters in the description.

I think only time is going to tell with some of this stuff, but it doesn't mesh with a good bit of what I know, especially when it comes to their design approach these days. Obviously there are large lead times on this stuff, and decisions would have been made a long time ago.

Some of it is covered here, including how quickly they can turn things around compared to other companies: https://youtu.be/650yVg9smfI?t=294
 