
PowerVR demonstrate x2 to x16 core GPU chip for mobile devices!

LOL

Show me a laptop GPU that's even half the speed of a GTX 580 ;)

While his claims are a bit over the top, it is quite interesting how far mobile tech has come and how fast it's advancing. My phone (which has been out 2 years now) has 3D rendering hardware that's slightly faster than the original Voodoo 1, has more VRAM and simple shader capabilities - it's quite capable of running something like Quake 2 at 30fps (800x480) when properly optimised for the hardware.
 

But that's still around 15 years behind where they were back then, and 15 years is an eternity in computer tech.
 
While it's true that mobile graphics has advanced rapidly over the last few years, it's not exactly a complete surprise. All of a sudden you have this massive expansion in mobile technology with huge demand for it, so it makes sense that there will be a rapid increase in performance and features for mobile devices.

The important question, though, is whether they will be able to sustain this rate of increase, and for how long. In my opinion, at the moment mobile graphics tech is simply playing catch-up, but eventually it will hit the same heat/power limitations as its desktop counterparts. Unless I'm mistaken, the technology in mobile devices isn't radically different to normal desktop components; they are simply scaled down, and always will be unless we have a completely new computing architecture in the works.
 
A Rock Extreme X785 laptop has a GTX480M in it?

Which is massively cut down from a full GTX480: something like 352 or 384 SPs compared to 480 SPs on the desktop GTX480 and 512 SPs on the GTX580, plus a similar reduction in ROPs/TMUs and a massive reduction in clock speeds to reduce heat output.

Compared to a full-blown desktop GTX580, realistically you're looking at half the performance, and probably much less if you take into account the rest of the components in a laptop.

Plus, calling these types of laptops "mobile" is stretching the term a little.
 
There is no magic wand. Mobile chips must work within a very small thermal envelope, hence they will always be YEARS behind desktop chips. Even the best mobile GPUs are about 8-10 years behind.
 
titaniumx3 said “Unless I'm mistaken, the technology in mobile devices isn't radically different to normal desktop components, they are simply scaled down and always will be unless we have a completely new computing architecture in the works.”
That's the bit I am trying to explain but don't seem to be doing a good job of it. The architecture is different and is not just a scaled-down version of desktop parts.

I do agree with "will they be able to sustain this rate of increase and for how long" - that is the key bit. Can this different architecture keep increasing at the current pace? If it can, then at the current rate it will overtake desktops. But can it keep up the pace?


Owenb said "Even the best mobile GPUs are about 8-10 years behind."
But what happens when/if mobiles swap to a very different way of working, like swapping to ray tracing, or a hybrid of ray tracing, before desktops? Mobiles are starting to pick up new features and technologies faster than desktops.

Will mobiles then be behind desktops?
 
@Pottsey

Even if these chips are as good as you claim and can match high-end desktop GPUs, the same architecture would make its way onto PC cards but with much higher power/clocks, so the PC GPU would still be way ahead of a mobile one.
 
But how would the architecture make its way into the PC? ATI and Nvidia would never licence the technology - it would be too embarrassing, I would have thought. Who else would use it and move it over?

I would love to see it in the desktop space but I just cannot think of any company that would want to do that.

I agree that in theory the mobile chips could be moved into the desktop space with either higher clocks or more cores. But who would produce the card? If there is no one to move it into the PC space, then the advantage stays in the mobile space.
 
Am I missing something? If this chip is so good, wouldn't AMD/Nvidia have to come up with similar tech? People would expect their desktop to be more powerful than a mobile, and no one would want a PC if it wasn't; AMD/Nvidia would go out of business.
 
I think the flaw in your arguments, Pottsey, is that you are making massive assumptions and extrapolating way too much.

Just because these PowerVR chips are doing great things currently, it doesn't necessarily mean they can simply scale up in a linear fashion and match Nvidia's and AMD's current desktop top end, whilst staying within the size and power constraints of a mobile device.

I hope you realise that, as the perceived image quality/complexity of a 3D scene increases, the computing power required to render the scene increases in a non-linear fashion (possibly exponentially). It's an inescapable fact of 3D rendering, and I simply do not see PowerVR creating such technology whilst Nvidia and AMD are completely oblivious to it.
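The non-linear scaling claim above can be sketched with a toy cost model (the exponents here are hypothetical, purely for illustration): pixel count grows with the square of linear detail, and richer scenes also add shading/geometry work per pixel, so total cost grows faster than perceived quality.

```python
def render_cost(detail):
    """Toy model (hypothetical exponents): cost of rendering a scene
    at a given linear detail level."""
    pixels = detail ** 2        # pixel count grows quadratically with linear detail
    work_per_pixel = detail     # richer materials/geometry add per-pixel work
    return pixels * work_per_pixel

# Doubling the detail level (2x perceived quality) costs 8x in this model:
assert render_cost(2) / render_cost(1) == 8
```

Under any model like this, each doubling of perceived quality multiplies the required computing power by much more than two, which is the poster's point about why mobile chips cannot simply scale linearly to desktop workloads.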

Also, why are PowerVR limiting themselves to handheld mobile platforms? Why not enter the desktop and notebook market with such amazing technology?

Sure, it's a possibility, but one must then ask how probable it is - to which the rest of us are saying: not very probable at all, for reasons we have already discussed.
 
titaniumx3 said "Also why are PowerVR limiting themselves to handheld mobile platforms, why not enter the desktop and notebook market with such amazing technology?"
PowerVR are not limiting themselves to a handheld platform, as they have desktop and other chips. Do I really have to explain that one again? They are an IP company - I assume you know what an IP company is? This is the main reason I don't see the technology being moved into the desktop space, and why I don't buy the argument that the desktop can just take the same PowerVR chip and scale it up. Yes, it could happen, but it's even more unlikely to happen than PowerVR getting ray tracing into mobiles.



titaniumx3 said "Sure it's a possiblity, but one must then ask how probable is it - to which the rest of us are saying, not very probable at all, for reasons we have already discussed."
But the reasons you lot are giving are unlikely to happen. The main argument against what I say seems to be that you could just scale up the chip and clock speed in the desktop market. But that is extremely unlikely to happen, for the reasons I stated.

As metalmackey said, yes, AMD/Nvidia would have to come up with similar tech, but how fast can they do that when it appears PowerVR are generations ahead in that area?



titaniumx3 said "I hope you realise that, as perceived image quality/complexity of a 3D scene increases, the computing power required to render the scene increases in a non-linear fashion (possibly exponentially)."
Due to the way PowerVR's architecture works, the more complicated a scene gets, the more hidden data is removed and never needs to be rendered.

Take a PowerVR Kyro: in a simple scene it was the speed of a GeForce MX. In a medium-complexity scene it was the speed of a GeForce GTS/GTX, and in a highly complex scene it was the speed of a GeForce Ultra. In a super-complex scene it was too slow to use, but it could still render the scene far faster than a GeForce Ultra.

Yes, the computing power needed to render a scene increases in a non-linear fashion, but PowerVR have the advantage in highly complex scenes. For example, a scene with an overdraw rate of 2x makes PowerVR chips effectively 2x faster, as ATI and Nvidia have to render 2x more data than PowerVR. In a highly complex scene with an overdraw rate of 5, ATI and Nvidia have to render 5x more data just to match PowerVR's end result.

That's one of the most interesting things about PowerVR's architecture. It's just so much more efficient.
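The overdraw arithmetic in the post above can be sketched as follows (a simplified model, not an exact description of any GPU): an immediate-mode renderer shades every fragment it rasterises, while a tile-based deferred renderer (TBDR) such as PowerVR's resolves visibility per tile first and shades only the fragments that end up visible.

```python
def fragments_shaded(visible_pixels, overdraw, deferred):
    """Fragments actually run through the pixel shader (toy model)."""
    if deferred:
        # TBDR discards hidden fragments before shading,
        # so only the visible pixels are shaded.
        return visible_pixels
    # An immediate-mode renderer shades every overlapping fragment.
    return visible_pixels * overdraw

frame = 800 * 480  # the phone resolution mentioned earlier in the thread

# Overdraw of 2x: the immediate-mode GPU shades twice the work.
assert fragments_shaded(frame, 2, deferred=False) == 2 * fragments_shaded(frame, 2, deferred=True)

# Overdraw of 5x: five times the work, matching the post's figures.
assert fragments_shaded(frame, 5, deferred=False) == 5 * fragments_shaded(frame, 5, deferred=True)
```

In this model the TBDR's shading cost depends only on screen resolution, not on overdraw, which is why its relative advantage grows with scene complexity.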
 