The CPU in use has nothing to do with the 'quality' of 'ports'; 99% of the code in any game/engine is written in C++ (or C), with only a small percentage going below that for performance reasons. If anything, the fact that we currently work with in-order CPUs with small caches helps the x86-compiled versions, as memory access pattern improvements, which are the main source of optimisation in games these days due to the widening CPU/memory speed gap, help both platforms.
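To give a flavour of what 'memory access pattern improvements' means in practice, here's a throwaway sketch (names and layout entirely illustrative, not engine code): the same update done against an array-of-structs layout versus a struct-of-arrays layout. The second form streams through tightly packed data, so it wins on in-order console CPUs and on x86 alike.

```cpp
// Illustrative only: the kind of access-pattern change that pays off on both
// in-order console CPUs and x86. Names and numbers are made up for this sketch.

#include <cstddef>
#include <vector>

// Array-of-structs: updating only positions drags velocity/health/etc through
// the cache with every entity touched.
struct EntityAoS { float pos[3]; float vel[3]; float health; int state; };

void UpdatePositionsAoS(std::vector<EntityAoS>& ents, float dt)
{
    for (std::size_t i = 0; i < ents.size(); ++i)
        for (int k = 0; k < 3; ++k)
            ents[i].pos[k] += ents[i].vel[k] * dt;
}

// Struct-of-arrays: the same update now streams through tightly packed arrays,
// so far more useful data arrives with every cache line fetched.
struct EntitiesSoA
{
    std::vector<float> posX, posY, posZ;
    std::vector<float> velX, velY, velZ;
};

void UpdatePositionsSoA(EntitiesSoA& e, float dt)
{
    for (std::size_t i = 0; i < e.posX.size(); ++i)
    {
        e.posX[i] += e.velX[i] * dt;
        e.posY[i] += e.velY[i] * dt;
        e.posZ[i] += e.velZ[i] * dt;
    }
}
```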
I also put 'port' in quotes because many games maintain builds for all 3 major platforms at the same time; there is no 'porting'. There is common code for the majority of the game and then platform-specific sections to deal with the various APIs (or the SPUs in the case of the PS3), which is a very small contact area indeed and generally has a team dedicated to it.
Also, in many games a fair chunk of logic can be, and is, farmed out to scripting languages such as Lua and, frankly, Lua on the consoles is a bit of a performance issue (not a critical-path one, but one all the same; it messes with memory something rotten) and an area where PCs have the edge due to larger caches, better branch prediction and out-of-order execution.
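For what it's worth, the usual first step in keeping tabs on Lua's memory behaviour is the stock Lua C API allocator hook. The sketch below simply tracks allocation traffic via lua_newstate; it is illustrative rather than what our codebase actually does (on console you would back this with a fixed pool rather than plain realloc).

```cpp
// Sketch of the standard Lua C API hook for taking control of Lua's memory
// traffic; the tracking here is a stand-in, not our actual allocator.

#include <cstdlib>
#include <lua.hpp>   // C++ wrapper header shipped with the Lua distribution

struct LuaMemStats { size_t current = 0; size_t peak = 0; };

// lua_Alloc: one function handles malloc, realloc and free for the whole VM.
static void* TrackingAlloc(void* ud, void* ptr, size_t osize, size_t nsize)
{
    LuaMemStats* stats = static_cast<LuaMemStats*>(ud);
    stats->current += nsize;
    if (ptr != nullptr)
        stats->current -= osize;
    if (stats->current > stats->peak)
        stats->peak = stats->current;

    if (nsize == 0) { std::free(ptr); return nullptr; }
    return std::realloc(ptr, nsize);   // swap for a fixed pool on console
}

lua_State* CreateGameLuaState(LuaMemStats* stats)
{
    return lua_newstate(TrackingAlloc, stats);
}
```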
--------
On a related note, I still find it cute that 'PC gamers' worry so much about the GPU when what they should be worrying about is the graphics stack on the PC.
Right now the consoles are hamstrung by their old hardware; however, if you remove that from the equation, suddenly things don't seem so rosy.
The example I have to hand is part of our rendering test bed: imagine a cube which is rotating about its Y axis. Now imagine that cube shape is made up of 50,000 smaller cubes, each also rotating about its Y axis. To remove the 'GPU problem' those cubes are flat coloured; rendering them is basically no trouble for a PS3 or 360 GPU, never mind a modern PC (a stock NV GTX 470 in this case).
Now, each of these 50,000 cubes is drawn using a single draw call. This is about the worst possible case for the system to deal with.
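For the curious, the D3D11 path of the test boils down to something like the sketch below (function and parameter names are illustrative, our engine wraps all of this): one small constant buffer update plus one DrawIndexed per cube, 50,000 times a frame. Trivial for the GPU, but 50,000 trips through the runtime and driver.

```cpp
// Roughly what the stress test amounts to on the D3D11 path. Names here are
// illustrative, not our engine's actual API.

#include <d3d11.h>
#include <DirectXMath.h>

struct PerObjectCB { DirectX::XMFLOAT4X4 world; };

void DrawCubes(ID3D11DeviceContext* ctx,
               ID3D11Buffer*         perObjectCB,  // dynamic constant buffer
               ID3D11Buffer* const*  vb, const UINT* stride, const UINT* offset,
               ID3D11Buffer*         ib,
               const PerObjectCB*    transforms,   // precomputed world matrices
               UINT                  cubeCount,    // 50,000 in the test
               UINT                  indexCount)   // 36 indices per cube
{
    ctx->IASetVertexBuffers(0, 1, vb, stride, offset);
    ctx->IASetIndexBuffer(ib, DXGI_FORMAT_R16_UINT, 0);
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    for (UINT i = 0; i < cubeCount; ++i)
    {
        // One tiny constant buffer update + one draw per cube: no problem for
        // the GPU, but every iteration is a trip through the runtime/driver.
        D3D11_MAPPED_SUBRESOURCE mapped;
        if (SUCCEEDED(ctx->Map(perObjectCB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
        {
            *static_cast<PerObjectCB*>(mapped.pData) = transforms[i];
            ctx->Unmap(perObjectCB, 0);
        }
        ctx->VSSetConstantBuffers(0, 1, &perObjectCB);
        ctx->DrawIndexed(indexCount, 0, 0);
    }
}
```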
The 360 and the PS3 will happily chug along at 16.6ms per frame (60fps) all day and all night, no problem at all, using 6 threads or an SPU-based system to set up the draw calls.
On the flip side, the PC (Intel Xeon 4C/8T @ 2.63GHz, NV GTX 470, DX11 rendering mode, multiple DX11 deferred contexts for MT rendering over 6 threads) couldn't even manage 30fps. (I believe it clocked in at around 27fps, or 37ms per frame; 30fps is 33.3ms/frame.)
Clearly the PC GPU is fast enough, so the problem must be on the CPU side.
Now, at this point I'm sure it is tempting to shout and yell about 'poorly optimised PC code'; however, the code in question is VERY light, a few hundred lines at best on the main path, and our engine API very closely mirrors the DX11 API, so much so that on the PC it is often nothing more than a very thin wrapper around the DX API call; if anything, on some paths the consoles carry more CPU workload.
To cut an increasingly long story short: after a couple of days looking into this (I was very surprised at the time; 8 months later I'm not remotely surprised), the problem turned out to be in the drivers.
At or below 15,000 draw calls per frame the PC version had no trouble at all; however, as the draw call count increased, the frame time began to climb as the driver would 'stall' at the point where the back buffer was swapped to the front buffer. At 50,000 objects this stall was massive, taking up most of the frame time by a long shot. At the time I recommended that we keep the draw calls in the PC version of our game below 15,000, which might seem like a lot until you realise that deferred rendering causes an amplification of draw calls, as the same objects are rendered into multiple buffers for the various lighting, shadow (x3 in our case) and other passes required to build the scene.
(To give you a vague idea of costs: the current game I'm working on/with is a 4-player split-screen game with a 60fps target; the estimated maximum draw calls per player is 800/frame. That gets us colour, lighting, a single shadow pass + particle effects. A depth pass might recently have been allowed on the consoles, but not on the PC, due to the overhead of how the architecture works.)
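As a back-of-envelope check on those numbers (the per-player budget and the 15,000 ceiling are from above; the extra-pass factor is purely illustrative, not a measured figure):

```cpp
// Rough arithmetic only. Per-player budget and driver ceiling are the figures
// quoted above; the extra-pass amplification factor is illustrative.

#include <cstdio>

int main()
{
    const int players            = 4;      // split screen
    const int drawsPerPlayer     = 800;    // colour + lighting + 1 shadow pass + particles
    const int driverComfortLimit = 15000;  // where the PC driver stall kicked in

    const int totalPerFrame = players * drawsPerPlayer;   // 3,200 submitted draws/frame
    std::printf("submitted draws/frame: %d (driver comfort limit %d)\n",
                totalPerFrame, driverComfortLimit);

    // The headroom evaporates quickly once you remember that each extra pass
    // (more shadow cascades, a depth pre-pass, etc.) re-submits the same objects.
    const int extraPassesPerObject = 3;    // illustrative
    std::printf("with %d extra passes: %d draws/frame\n",
                extraPassesPerObject, totalPerFrame * (1 + extraPassesPerObject));
    return 0;
}
```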
So, while things might look good right now, going forward, unless the driver architecture and interfaces change, the PC version is in for a world of hurt. On the PS3 we can use 50% of the SPU time to generate draw calls and chain them together; on the 360 we can use 6 cores for the same; on the PC, with its larger per-draw overhead, we can use deferred contexts to record command buffers (not very well, they do the work in the wrong place and AMD don't support them as well as NV does as yet), but the final submission remains a single-threaded problem and, in case you weren't paying attention, single-threaded performance stalled some time back.
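For reference, the deferred-context pattern referred to above looks roughly like the sketch below (job-system plumbing elided, names illustrative): recording spreads across worker threads just fine, but every command list still funnels through ExecuteCommandList on the one immediate context, which is exactly where the cost piles up.

```cpp
// Sketch of the D3D11 deferred-context pattern. Recording is spread across
// worker threads, but submission stays on the single immediate context.
// Threading/job-system details are elided; names are illustrative.

#include <d3d11.h>
#include <vector>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

struct WorkerContext
{
    ComPtr<ID3D11DeviceContext> deferred;
    ComPtr<ID3D11CommandList>   commandList;
};

// Called once at startup: one deferred context per worker thread.
bool CreateWorkers(ID3D11Device* device, std::vector<WorkerContext>& workers, UINT count)
{
    workers.resize(count);
    for (UINT i = 0; i < count; ++i)
        if (FAILED(device->CreateDeferredContext(0, workers[i].deferred.GetAddressOf())))
            return false;
    return true;
}

// Runs on a worker thread: record this thread's slice of the draw calls.
void RecordSlice(WorkerContext& w /*, slice of scene data */)
{
    // ... IASetVertexBuffers / VSSetConstantBuffers / DrawIndexed as usual,
    //     but issued on w.deferred instead of the immediate context ...
    w.deferred->FinishCommandList(FALSE, w.commandList.ReleaseAndGetAddressOf());
}

// Runs on the main thread: the part that cannot be parallelised.
void Submit(ID3D11DeviceContext* immediate, std::vector<WorkerContext>& workers)
{
    for (WorkerContext& w : workers)
    {
        immediate->ExecuteCommandList(w.commandList.Get(), FALSE);
        w.commandList.Reset();
    }
}
```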
Basically, next generation I expect the consoles to destroy even your high-end OMGWTFBBQ!!! PCs when it comes to object draw count (even if they stay on a PPC architecture, and I hope they do, as it's much nicer to work with than x86; the architecture is better than it was and even has out-of-order execution to cover for the bad coders out there), with only the GPU limiting how much resolution we can push and with what features enabled.
(And yes, next generation we will continue to do all the tricks we do now, including avoiding fp16 render targets to reduce bandwidth costs (my group lead's wish list includes LogLuv encoding/decoding in hardware, as it is a common method of storing HDR data), etc.; we'll just get a new box of tricks and some quality improvements on the old ones.)