Intel Arc series unveiled with the Alchemist dGPU to arrive in Q1 2022

A Navi 22 killer you say?
Navi 22 is 335mm² on 7nm, while DG2-512 is meant to be around 400mm² on 6nm. Supposedly TSMC's cost per transistor hasn't increased for 6nm, but it is denser (about 18% or so?), so 400 * 1.18 is around 470mm² equivalent in terms of cost. So perf/area doesn't look too good.
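For illustration, a minimal back-of-the-envelope sketch of that density adjustment in Python, taking the figures above (335mm² Navi 22 on N7, ~400mm² DG2-512 on N6, ~18% density gain, flat cost per transistor) as the assumptions:

```python
# Express the 6nm DG2-512 die as a 7nm-cost-equivalent area, assuming ~18%
# better density on N6 at roughly flat cost per transistor (figures from above).
navi22_area_7nm = 335      # mm², Navi 22 die size on N7
dg2_512_area_6nm = 400     # mm², rumoured DG2-512 die size on N6
density_gain = 1.18        # assumed N6 density improvement over N7

# The same transistor count laid out on N7 would need ~18% more area, so at
# flat cost per transistor the 6nm die "costs like" this many 7nm mm².
dg2_512_cost_equiv = dg2_512_area_6nm * density_gain
print(f"DG2-512 cost-equivalent area: ~{dg2_512_cost_equiv:.0f} mm²")    # ~472 mm²
print(f"Ratio vs Navi 22: {dg2_512_cost_equiv / navi22_area_7nm:.2f}x")  # ~1.41x
```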
 
Unless they used that area for raytracing and AI.
 
Maybe.

Intel being 'rich', they don't have to do a Vega and be laden with lots of compute stuff, as they should be able to afford to run multiple designs, but fixed or semi-fixed raytracing hardware might make sense.

Although one nice thing about the console approach, where the shaders do some of that work, is that the extra transistors there could be used for pure rasterisation too.
 
My own opinion on that is that we are close to, if not at, the point where spending an ever-growing share of die space trying to make rasterisation look like raytracing, an approach that has been used for decades, needs to come to an end.

We would still be watching paper plates dangle from string if Hollywood hadn't upgraded :)
 
Salt required, but 3DCenter has made an Arc DG2-512 / GA104 / Navi 22 comparison table
https://www.3dcenter.org/news/intel...-grafikchip-design-der-xe-hpg-architektur-vor
[Image: spec comparison table of Arc DG2-512 / GA104 / Navi 22]
Spec-wise it seems to compare pretty well, although of course this isn't compared to GA102 or Navi 21, and by release time the next gen from green and red will not be far off.
Rumour is that Arc may come out in early 2022 and the next AMD and Nvidia GPUs are supposed to arrive at the end of 2022, so it may be competing with the current ones for a while at least.
 
The DG2-512 die is around 396mm² on 6nm, 4mm² more than the 392mm² GA104 die.

Yes, it will be a 6700 XT killer: DG2-512 has 16 TFLOPS of FP32 performance, 512 XMX tensor cores and 16MB of Smart Cache, whereas the Navi 2X GPUs do not have tensor cores and the Ampere GPUs do not have a smart cache.
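As a rough sanity check on that 16 TFLOPS figure, here is a minimal sketch assuming DG2-512's rumoured 512 EUs with 8 FP32 ALUs each (4096 lanes), solving for the clock that number implies:

```python
# Rough check of the quoted 16 TFLOPS FP32 figure. Assumption: DG2-512 has
# 512 EUs x 8 FP32 ALUs = 4096 lanes; the clock is the unknown solved for.
fp32_lanes = 512 * 8          # assumed shader/ALU count for DG2-512
ops_per_lane_per_clock = 2    # one FMA counts as 2 floating-point ops

target_tflops = 16
required_clock_ghz = target_tflops * 1e12 / (fp32_lanes * ops_per_lane_per_clock) / 1e9
print(f"Clock needed for {target_tflops} TFLOPS: ~{required_clock_ghz:.2f} GHz")  # ~1.95 GHz
```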
 
Okay, but my comment was also about the economics. Going by cost per transistor, DG2-512 will cost Intel close to the equivalent of 470mm², versus 335mm² for Navi 22 or 392mm² for GA104 (on a cheaper Samsung process).

Intel probably don't have any choice but to accept lower margins for at least the first few years.
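For a rough feel for the silicon economics, a minimal dies-per-wafer sketch using the standard approximation (300mm wafers, ignoring yield and the fact that N6, N7 and Samsung 8nm wafers are priced differently, which is where the real gap lies):

```python
import math

# Gross candidate dies per 300mm wafer, standard approximation.
# Ignores yield/defect density; per-wafer price also differs by node and foundry.
def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> float:
    radius = wafer_diameter_mm / 2
    return (math.pi * radius**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

for name, area in [("DG2-512 (N6)", 396), ("Navi 22 (N7)", 335), ("GA104 (Samsung 8nm)", 392)]:
    print(f"{name:22s} ~{dies_per_wafer(area):.0f} candidates per wafer")
# Roughly 145 / 175 / 147 -- similar counts, so the cost gap comes mostly from
# wafer price and yield rather than raw die count.
```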
 
Considering the mega-margins AMD/Nvidia are making on their GPUs, I expect the margins will still be OK!
 
As new entrants, they might be willing to go for console margins. If their design is inefficient or has inconsistent performance they may not have a choice.

The bad news for Intel is that they are not manufacturing these themselves.

The good news for Intel is that they are not manufacturing these themselves!

So they don't have to do the calculations AMD has to do with whatever wafers they can get from TSMC: monster margins on Zen 3 CCDs, okay margins on APUs (which also keeps some laptop OEMs happy), some margin on GPUs, or handing most of the wafers to MS/Sony at almost giveaway prices because of 'contracts'.

Although dGPUs will cut into Intel's TSMC wafer allocation for high-margin HPC projects. Plus, I think Intel are very keen to destroy some of Nvidia's high HPC margins just for "business is war" reasons.
 
Maybe, but considering Intel threw billions of USD at contra-revenue for Atom, etc., they have no issue losing a few billion USD in the short term. However, the only reason they have a chance is because Nvidia/AMD have gotten greedy now and formed a sort of cartel. It reminds me of Apple and Samsung doing the same with smartphones, until Chinese companies started competing better.
 


Very interesting from Intel

* Overclocking and undervolting are supported in the driver tools on day one.

* Intel is currently testing using the iGPU on the CPU to accelerate ML/AI, in other words using the iGPU to accelerate XeSS/DLSS so you don't have to put as many tensor/XMX cores on the dGPU. Imo this is a fantastic idea: why have the iGPU just sit there doing nothing at all for most people who have one when it can be used to give you more gaming performance?
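Purely as a conceptual sketch of that scheduling idea (toy Python threads standing in for the two GPU queues; none of these function names are real Intel APIs): the dGPU renders frame N while the iGPU upscales frame N-1, so the upscaling pass overlaps with rendering instead of stealing dGPU time.

```python
# Toy model only -- not a real driver API. Two worker threads stand in for the
# dGPU render queue and the iGPU upscale queue to show the frames overlapping.
from concurrent.futures import ThreadPoolExecutor
import time

def render_on_dgpu(frame: int) -> str:
    time.sleep(0.010)                  # pretend 10 ms render at low resolution
    return f"frame{frame}_1080p"

def upscale_on_igpu(image: str) -> str:
    time.sleep(0.004)                  # pretend 4 ms ML upscale on the iGPU
    return image.replace("1080p", "4k")

with ThreadPoolExecutor(max_workers=2) as pool:
    pending_upscale = None
    for frame in range(5):
        render_job = pool.submit(render_on_dgpu, frame)    # dGPU starts frame N
        if pending_upscale is not None:
            print("presented", pending_upscale.result())   # iGPU finishes frame N-1
        pending_upscale = pool.submit(upscale_on_igpu, render_job.result())
    print("presented", pending_upscale.result())
```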
 
Yes, using the iGPU for something rather than letting it go to waste is a pretty good idea. And some kind of AA, upscaling, or other post-processing is the obvious thing for the iGPU to do.

Would certainly shut up the constant "I'd rather they used the 20-40% of the die dedicated to the iGPU for more cores" comments.
 
I really hope Intel and Nvidia can work together on an API allowing both vendors to use their own tech, while the developers use just the one front end API.
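A hypothetical sketch of what that single front end could look like, with each vendor plugging its own tech in behind a common interface (every name here is made up for illustration, not an existing API):

```python
# Hypothetical single front-end upscaling API with vendor-specific backends.
# All class and method names are invented for illustration.
from abc import ABC, abstractmethod

class UpscalerBackend(ABC):
    @abstractmethod
    def upscale(self, color, depth, motion_vectors, target_resolution):
        """Return an upscaled frame from the low-resolution inputs."""

class XeSSBackend(UpscalerBackend):
    def upscale(self, color, depth, motion_vectors, target_resolution):
        ...  # would dispatch to Intel's XeSS runtime

class DLSSBackend(UpscalerBackend):
    def upscale(self, color, depth, motion_vectors, target_resolution):
        ...  # would dispatch to Nvidia's DLSS runtime

def pick_backend(vendor: str) -> UpscalerBackend:
    # The game only ever talks to UpscalerBackend; the vendor plumbing hides here.
    return {"intel": XeSSBackend, "nvidia": DLSSBackend}[vendor]()
```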
 
Intel's margins start at 55% (lower than Nvidia @ 65% but higher than AMD @ 45%)

Intel makes mostly CPUs, which are higher-margin products, so it shows you how much Nvidia is making. AMD is selling a ton of consoles, which are lower margin - it wouldn't surprise me if their margins are pretty decent once you discount consoles. Once they get more supply to non-console products, I'd see their margins going past 50% easily IMHO (and their net margins have gone up a decent amount). 7nm and 8nm are lagging nodes now, unlike two years ago.
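To make the console point concrete, a minimal sketch of backing the non-console gross margin out of the blended figure; the revenue split and console margin used here are assumptions for illustration, not reported numbers:

```python
# Illustrative only: back the non-console gross margin out of a blended figure.
# The revenue split and console margin below are assumed, not reported numbers.
blended_margin = 0.45         # quoted overall AMD gross margin
console_revenue_share = 0.20  # assumed share of revenue from semi-custom/consoles
console_margin = 0.20         # assumed (low) gross margin on console silicon

rest_share = 1 - console_revenue_share
rest_margin = (blended_margin - console_revenue_share * console_margin) / rest_share
print(f"Implied non-console gross margin: {rest_margin:.1%}")  # ~51% under these assumptions
```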
 
It's very challenging to find a proper breakdown by product, but AMD's margins are coming from server CPUs (that's from either TTP or GN, can't remember). Intel's margins are on the up; they were at 45% just 2 years ago. 14nm has really hurt them, but they have supply which AMD (at TSMC) doesn't have. The news though is that Intel won't be buying up GloFo, and the rumour is they are heading to Samsung's 8nm process, licensed.
 
So was the GF rumour only about that DoD Welfare Check?

Now that Intel won it, they are no longer interested, but while it wasn't a sure thing, talking about GF, Malta NY, and the old ex-IBM'er was useful.

For their latest mainframe super chip haven't IBM gone with Samsung (although 7nm) too?

re. Breakdown, I think AMD are trying to hide their low console margins by lumping that in with servers!
 