False, as far as Intel are concerned. The VAST majority of the chips they sell end up powering the video systems too, and in laptops they switch between dGPU and iGPU dynamically to increase battery life. Expect more software using GPU compute to utilise iGPU cores even with a dGPU present (see the OpenCL sketch below). And when in a low power state/gated, the iGPU most likely helps spread/dissipate heat on the CPU die*
Intel are more likely to can "enthusiast" chips like the K series...*
*I have no idea if true, but we're all just stating stuff with impunity here, right?
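On the compute point above: OpenCL already exposes the iGPU as a compute device alongside a discrete card, which is what lets software target both. A minimal sketch of enumerating them via the pyopencl bindings; this assumes pyopencl plus the OpenCL runtimes for both GPUs are installed, and the output obviously varies per machine:

```python
# Minimal sketch: list every OpenCL GPU device visible to the system.
# Assumes pyopencl and the Intel/dGPU OpenCL runtimes are installed.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices(device_type=cl.device_type.GPU):
        print(f"{platform.name}: {device.name} "
              f"({device.max_compute_units} compute units)")
```

On a laptop with switchable graphics you'd typically see both the Intel iGPU and the dGPU listed, so a compute workload can run on the iGPU even while the dGPU drives the display.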
Intel have a very large GPU on their die, but "an iGPU being required/useful" and "the iGPU they actually ship" are extremely different things. For one thing, video decode and basic desktop output can be handled by extremely small IP blocks that would take up maybe, pushing it, 5% of the size of the GPU they provide. 99% of their users have no need for more than that 5% of the GPU power currently on offer; it's a complete waste even in laptops for the vast majority of users. For those laptops that also have discrete GPUs and switch between them, an iGPU taking up 10-15% of the space the current ones do would save even more power at idle/desktop/video playback.
Since Intel went iGPU they've provided pretty much one GPU-accelerated application in QuickSync, which always had horrible quality and few options, and honestly no one even mentions it any more. Not sure if it's supported in any better encoding software, or whether quality has improved or not. Intel have spent a monumental amount developing their GPUs, allowing them to take up around 50% of the die space, but have put nearly zero effort into creating an ecosystem in which devs are actually encouraged to write GPU-accelerated code, which makes the growing size of their GPUs completely ridiculous.
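For what it's worth on the "better encoding software" question: QuickSync is exposed in ffmpeg these days through the h264_qsv encoder, so a quick way to test the quality yourself is a sketch like the one below. Filenames are placeholders, and it assumes an ffmpeg build compiled with QSV support and working Intel media drivers:

```python
# Minimal sketch: H.264 encode on the iGPU via ffmpeg's QuickSync
# (h264_qsv) encoder. Filenames are hypothetical placeholders.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "input.mp4",   # placeholder source file
    "-c:v", "h264_qsv",  # hardware H.264 encode on Intel QuickSync
    "-b:v", "5M",        # target bitrate
    "output.mp4",
], check=True)
```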
I understand what you mean; for low power systems and laptops I can see the appeal of an iGPU. But as someone who has owned several Ivy Bridge and Haswell/Devil's Canyon chips, the integrated graphics was a hindrance. I had the misfortune of having to use it once on a 4770K. Much better to keep an old discrete card spare should your main one fail. Also tried the Lucid Virtu feature that allows the iGPU to work in tandem with a discrete card; extremely buggy tbh.
I remember 8Pack stating in a thread on here prior to the release of Skylake that he had spoken with various Intel engineers and recommended the removal of the iGPU on K series chips. The savings could have been spent on improvements to the thermal issues on such CPUs.
Ignoring Lucid, just their basic driver quality is awful. Run the same game on Intel versus AMD/Nvidia, even at a similar power level and similar framerate, and Intel delivers horrible image quality and driver bugs galore. They have the hardware but put zero effort into their software: make the drivers work in a few games like WoW (I assume) and ignore everything else.
Intel haven't licensed actual GPU tech from Nvidia, just some underlying patents. Intel haven't really been struggling with their APUs, and the timing of this basically seems to be based off "Nvidia licence ends soon" type speculation. Except the licence doesn't end; they just don't have to keep paying for it.
It wouldn't really make sense for AMD to license APU tech to their biggest competitor right when we're told they finally have something competitive.
As above, for the size of Intel's GPUs their performance is horrific. Don't forget that their absolute best iGPU still frequently fails to beat an AMD APU that is effectively a 5-year-old CPU paired with a 2-year-old GPU on a significantly larger process. They have a huge number of transistors, and their actual performance for the transistor budget spent on the GPU is horrendous, as is the IQ, driver quality, basically everything about it.
If AMD had at the same time been putting out 22nm GPUs with a huge chunk of cache on package, they would basically be at least twice as fast as what Intel are providing.