Intel leaks Xe graphics card codenames and details

Interested to see if Intel can leverage multi-GPU when the old guard have failed to make it happen. Of course, AMD will have had chiplet based GPUs in the lab for a long time. Making it seamless to the OS is a different kettle of fish.
 

Making multi-GPU invisible to the OS, with each die able to access the same pool of VRAM, is the key, true. As for chiplet-based GPU designs, I'm not 100% convinced they will work for real-time graphics: it's increasingly latency sensitive, and as soon as you need Chip A and Chip B to communicate with each other, even on the same package, that will introduce some latency.

I think Nvidia had a white paper and prototypes too.
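The latency point can be made concrete with a back-of-the-envelope sketch. All the numbers below (hop count, per-hop latency) are illustrative assumptions, not measured figures for any real chiplet GPU:

```python
# Hedged sketch: how per-frame inter-die round trips eat into a 60 fps budget.
# hops_per_frame and hop_latency_us are assumed values for illustration only.

FRAME_BUDGET_US = 1_000_000 / 60  # ~16,667 microseconds per frame at 60 fps

def frame_overhead_pct(hops_per_frame: int, hop_latency_us: float) -> float:
    """Percentage of the frame budget lost to cross-chiplet communication."""
    return 100 * (hops_per_frame * hop_latency_us) / FRAME_BUDGET_US

# e.g. 500 synchronisation hops per frame at 0.5 us each (assumed values)
print(f"{frame_overhead_pct(500, 0.5):.2f}% of the 60 fps frame budget")
# → 1.50% of the 60 fps frame budget
```

Even modest per-hop latencies add up once every frame needs thousands of cross-die synchronisations, which is why hiding the split from the OS and game is the hard part.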
 
I won't believe it until it's actually released.

Larrabee was much further along; they even had demos of it doing real-time ray tracing, and it was never released.

It would not surprise me at all if this latest effort is never available to buy either.
 
Intel is already the #1 PC graphics company because so many prebuilts and laptops use their IGP :p

Latency is a big deal for gaming, but we can already see across three generations of Ryzen that AMD are improving it a little bit at a time. The question is what the limit of those improvements is, and whether it will be suitable for a GPU application.
 
Intel's ability to create graphics hardware isn't in question and never has been, but from what I've read, getting the best out of a GPU requires much more investment in software/drivers than traditional CPUs do, and Intel isn't known for frequent driver updates for its graphics products. It's that side that will determine the fate of these new cards (it's pretty much the reason Nvidia were able to leap ahead of their competitors early on: they realised the importance of software).
 

I agree with this. Intel's drivers are crap: half the time they seem to give up on the older generation when a new generation is out, and that's being kind. How many times have they had missing features in games that they never fixed?
 