Yes, sorry, to clarify guys: if you run multi-monitor then it's OK; if you're single monitor powered by a discrete card, you don't get Quick Sync.
Essentially the Intel GPU has to be actively in use.
It's not quite that simple though: the multi-monitor setup has to, AFAIK, use the Intel output from the mobo for at least one screen. But I'd assume that if you have, say, a 6970 and a triple-screen setup, you're unlikely to want to connect any of the screens through the Intel output.
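As a purely hypothetical illustration of what "using" Quick Sync from software looks like: ffmpeg builds compiled with Intel QSV support expose the fixed-function encoder as h264_qsv. The commands below are a sketch, assuming such a build and an active Intel iGPU driving a display; the filenames and bitrate are made up, and the transcode will fail on a machine where the iGPU is disabled.

```shell
# Sketch only: check whether this ffmpeg build exposes the Quick Sync encoder
ffmpeg -hide_banner -encoders | grep qsv

# Transcode on the fixed-function Quick Sync engine instead of the shaders
# (hypothetical input/output names and bitrate; needs an active Intel iGPU)
ffmpeg -i input.mp4 -c:v h264_qsv -b:v 4M output.mp4
```

If the grep comes back empty, the build has no QSV support at all, which is a separate problem from the iGPU being idle.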
Gaming is by no means the only reason for multiple screens, but it's one reason, and there will be others, that you'd want all the outputs connected to the one card.
Quick Sync is certainly in its infancy, and it doesn't really use the GPU yet, just the transcode engine, which is separate transistors from the "shader processor" type bits of the GPU. There's still little/no acceleration of most software on the GPU itself. AMD will have the same issue, though maybe slightly less, due to Nvidia/AMD pushing acceleration of Flash, video and a few other things on GPUs for much longer and much harder than Intel.
The real question is whether AMD's Fusion can seamlessly use both a discrete GPU and the on-die GPU: in some situations maybe, in some maybe not.
I think both companies are pushing in the future (could be a couple of gens, or quite a way down the line) for software to not see a GPU unless it needs to, and for the CPU/OS to decide where best to send certain instructions. We seem a pretty long and complex way from that kind of seamless usage for now though.
AMD really need to push on-die GPU usage alongside the discrete GPU ASAP, as there's only so long Intel's on-die GPUs will be far behind AMD's. Push the advantage while it's there.
Anyway, the thread is really more about Nvidia quality. It's nothing to do with it being a 3D/2D engine problem; nothing on a GPU is "3D" or "2D", it's just processors, lots of small, not very complex ones, with a ludicrous amount of bandwidth and the ability to run them all at the same time.
The thing responsible for poor quality on Nvidia GPU transcoding will be the software and the software alone. Why? Don't know. Have they sacrificed all quality for speed? Is it just poorly coded, and could it be equally as fast with much better quality? Or do they just not really care about it, as in reality not many people spend that long transcoding stuff? They made an app that uses CUDA and gets touted as a useful feature, and that's enough; 99% of users won't use it, so putting lots of time and effort into making it fantastic is just a waste of cash. Again, who the heck knows.
As I said before, it's a little irksome that the rubbishness of Nvidia's transcode quality hasn't been mentioned in other reviews.
When it's AMD vs Nvidia, reviewers just tend to carpet-bomb the reader with the idea that in any GPGPU, any professional work, any video work, anything CUDA or PhysX, or any acceleration of anything non-gaming, Nvidia are the clear leaders without question, and it's touted as another reason to go Nvidia. There are places Nvidia does lead AMD, and places AMD leads Nvidia. It's bad reviewing that so many reviewers seem to just concede that whole non-gaming arena to Nvidia without any comparison at all.