
PowerVR demonstrate x2 to x16 core GPU chip for mobile devices!

Imagination Technologies (PowerVR) demonstrated a multi-core graphics processor for portable devices at CES 2011, running on a PowerVR SGX 5XT: http://www.engineeringtv.com/video/Imaginations-Portable-Device-Mu

What I find really interesting is that they can have up to 16 cores with almost no performance loss and no change to driver or application software. Everything is handled automatically by the hardware and drivers, so apps need no changes. It would be pretty useful in a desktop card; now if only they would bring one out.

Apparently Apple are using this as a dual-core SGX543MP2 GPU in the iPad 2, with rumours of a 2048x1536 resolution. The latest iPad/iPhone OS 4.3 beta includes support for the SGX543, so it's pretty much confirmed the chip will be used, although there is no evidence it will be dual core. Not that I am interested in the iPad or iPhone directly; it's the technology I find interesting.

It really is amazing how fast mobile GPUs are advancing. At this rate it's only going to take a handful of years to overtake desktop chips. Year after year, the performance increase between generations has grown far more in the mobile space than on the desktop. This year's dual-core PowerVR chip should have around 4 times the capability of last year's chip, and if the rumours are true, that's at a resolution of 2048x1536 with free FSAA.

http://www.macrumors.com/2011/01/15/ipad-2-screen-likely-to-have-2048x1536-resolution/ gives some evidence of the new iPhone and iPad resolutions doubling this year.

PS: Anyone remember when phone batteries lasted a good 16 days?
 
What's so funny? Unless PowerVR slow down their advances, and unless Nvidia speed up, then what I said is true. The mobile chips have already overtaken low-end desktop GPUs and have perhaps caught up with some mid-range desktop GPUs. How many of those desktop GPUs run at 2048x1536 with FSAA? At this rate it's only a matter of time before they catch up to high-end desktop GPUs.
 
Well, the current generation already runs Rage and the Unreal Engine well. How does Rage compare against Crysis? I see no reason why the iPad 2 GPU cannot run Crysis.

EDIT: Anyway, I never said they would overtake high-end GPUs. I said they will overtake desktop GPUs, and it looks like they already have at the low end and soon will at the mid range.
 
drunkenmaster said “As for closing on on the desktop market, rubbish, doubling the capable resolution output doesn't have any bearing at all on its available performance.”
Doubling the resolution while doubling the GPU cores, at the same time as moving to a next-generation core that improves on the last one, should have a bearing on performance. At the very minimum you would expect that setup to double the performance of the last generation. The better core should more than make up for the roughly 5% scaling loss from adding a second core.
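As a rough sketch of that claim: the expected generational speedup is per-core uplift times core count times multi-core scaling efficiency. Only the ~95% scaling figure comes from the quoted specs; the 40% per-core uplift below is an assumed number purely for illustration.

```python
# Sketch of the expected generational speedup using assumed numbers:
# only the ~95% multi-core scaling efficiency comes from the quoted specs;
# the 40% per-core uplift is a made-up figure for illustration.
per_core_uplift = 1.4   # hypothetical next-gen core improvement
cores = 2               # SGX543MP2 has two cores
scaling = 0.95          # quoted "over 95% efficiency" multi-core scaling
total = per_core_uplift * cores * scaling
print(total)  # ~2.66x the previous generation, comfortably above the 2x minimum
```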



RavenXXX2 said “A what res, what settings, got any vids.”
I only edited spelling corrections in my older posts; when I change facts I add an EDIT: and put in the new info. Anyway, the resolution depends on which device you use. The highest is 1024x768, as that's the limit of the screen. I'm not sure if it can output HD, but the device can output 1280×720 to TVs and LCD screens. Below is a nice video. Perhaps not nice from a gameplay point of view, but from a technology point of view it's good. That looks very much like a modern PC game to me.

http://gizmodo.com/5693208/download...eed&utm_campaign=Feed:+gizmodo/full+(Gizmodo)



drunkenmaster said “The guy himself in the video said an ALMOST doubling in performance with double the cores, that could mean 60% or 95% , xfire does 95% scaling in loads of games now.”
Since when is 60% almost doubling? Anyway, the figures quoted in the chip's tech specs say, and I quote: “highly linear scaling (over 95% efficiency) of performance in both geometry (vertex processing) and rasterisation (pixel/fragment processing)”. Over 95% efficiency with 16 cores is impressive. It's well known that tile-based cards scale very well, as each core renders different tiles.
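Taking the quoted 95% figure at face value, the minimum expected speedup scales almost linearly with core count. A quick sketch (`min_speedup` is a hypothetical helper for illustration, not anything from the vendor specs):

```python
def min_speedup(cores, efficiency=0.95):
    """Lower bound on multi-core speedup at the quoted 'over 95% efficiency'."""
    return cores * efficiency

# Print the lower bound for each advertised core configuration.
for n in (2, 4, 8, 16):
    print(f"{n} cores: at least {min_speedup(n):.1f}x")
```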



drunkenmaster said “IF what you mean to say is a mobile chip designed for mobile use, for use with optimised mobile engines can perform better than a desktop chip designed for none of that usage, then sure.”
The architecture is designed to scale all the way up to desktop and beyond. It can run everything a desktop chip can.



drunkenmaster “A couple game engines designed for mobile devices, that use tiny textures, and tiny res, and tiny power thats all well and good.”
You have the strangest definition of tiny that I have ever seen. How can anyone say the resolution or textures are tiny? If you class that as tiny, what on earth do you class as normal, or worse, what do you class as HD?



drunkenmaster said “What happens when that mobile chip is asked to use proper high res textures and every little high end lighting trick, with full aa?”
FSAA is effectively free due to the way the chip works, so that's hardly a problem, which is why almost all apps, if not all of them, have FSAA on. Rage appears to use every little lighting trick.

What do you mean by proper high-res textures? Surely it already has proper high-res textures?
 
drunkenmaster said “You've shown absolutely no performance comparison between desktop/mobile gpu's, but just claim how fast they are when you post threads on the subject.”
Doing a direct comparison is next to impossible. It was hard enough trying to compare a tile-based desktop card to a conventional PC card, let alone a mobile tile-based card running different software against a desktop card.

But going by the current facts and specs, it's a safe bet that the next-gen chip can run Rage at 2048x1536 with free FSAA and full lighting effects. That alone should be enough to compare against low-end and perhaps mid-range desktop chips.

How many low or mid end desktop cards can run Rage at those settings smoothly?

Anyway, my point was that mobile GPUs are increasing in speed far faster than desktop GPUs. There is no arguing against that fact. If that trend keeps going, then mobile GPUs will overtake desktop GPUs at some point. Perhaps the trend won't keep going, but so far it has.
 
Caracus2k said “Ipad 2 to run Crysis?”
Given the specs, and the quality of this video from the last gen, http://gizmodo.com/5693208/download...eed&utm_campaign=Feed:+gizmodo/full+(Gizmodo), why wouldn't the next gen, or the gen after, be able to run Crysis?

I admit I do have habit of underestimating timeframes.



wakayoda said “Crysis will never run on a mobile device without a large amount of tweaks and optimization.... my Nintendo 64 could easily play rage on low based on that screen shot!“
That's just what people said about the Unreal Engine and Rage this time last year, and the year before, yet now they run. As for the screenshot, it was taken on low settings. It looks much better on high settings.
 
I am not arguing this any more; it's basic maths. Based on the current curve it will happen. PowerVR seem to be advancing in speed faster than Nvidia. Current projections say it will happen. These projections can change over time, but you cannot deny that PowerVR's current generational speed increases are bigger than Nvidia's. Whether this keeps up is another matter, which can be argued.

EDIT: To put it another way: Nvidia's next-gen mobile chip is 10% faster than PowerVR's last gen.
PowerVR's next-gen mobile chip is over 200% faster than their last gen, with some saying over 400%.

EDIT2 (edited again to make more sense): Plus, when you add in that PowerVR are expected to have real-time ray tracing in a generation or two, it's not hard to see a mobile ray-tracing chip running a game that looks better than a desktop card without real-time ray tracing.
 
MR.B said "Theres absolutely no way a phones/tablets GPU could match a high end desktop GPU.
Theres many many reasons why this literally could not happen. Ever.
But i'll just say two very obvious ones... the cooling needed for a high end GPU, and the power draw."

Perhaps you should read up on tile-based cards before you say that. PowerVR do not use the same type of technology as ATI and Nvidia. Tile-based rendering means PowerVR can have 1/3 the raw speed of ATI/Nvidia, with 1/3 the power draw and 1/3 the cooling, yet match or overtake them in 3D. If you had a PowerVR chip with the raw power of Nvidia's high-end desktop cards, the PowerVR chip would be 3x to 5x faster. While ATI and Nvidia are struggling with bandwidth and heat, PowerVR have solved that and have comparatively little problem. On top of that, PowerVR are steps away from real-time ray tracing; if they can pull that off while ATI/Nvidia still lack real-time ray tracing, then a mobile tile-based chip with real-time ray tracing will have a very good chance of looking better than anything ATI/Nvidia desktop cards can pull off.

I am not saying it's going to happen for sure, but it's a very real possibility now, with real-time ray tracing and the other mobile advances.



MR.B said " You seem to live in a reality where the laws of physics do not apply."
Like above, you should really go and read up on tile-based cards. You seem to be under the wrong impression that PowerVR chips work the same way as ATI and Nvidia's. The technology is very interesting and is vastly more efficient than what ATI/Nvidia use. Tile-based rendering can scale to 16 cores with over 95% efficiency, which is something ATI/Nvidia currently cannot do.




MR.B said " They could not compete with Nvidia on performance and features, so they were crushed and had no choice but to leave that market."
That is completely incorrect and is not what happened. Also, the PowerVR desktop card with Nvidia TNT2-class specs ran as fast as a GeForce 2 GTS, sometimes as fast as the GeForce 2 Ultra, depending on the game.
 
MR.B said " ... yet theres absolutely no evidence of this anywhere to back it up. Infact PowerVR cards from the 90's were tile based, yet they could not even match the high end GPU's from NV at the time, let alone be 3x more powerful. "
No evidence, apart from just about every single PowerVR desktop card we had? Not to mention all the whitepapers explaining how the tile-based cards work. It seems most of the old reviews are no longer up, but here is a thread of people talking about the Kyro's speed.

http://arstechnica.com/civis/viewtopic.php?f=6&t=1008997

The Kyro, as one of many examples, had specs well below a GeForce 1 but ran at speeds around a GeForce 2 GTS or Ultra most of the time.
The GeForce 2 GTS has an 800 Mpixel/s fill rate; the Ultra has 1000 Mpixel/s. Yet a Kyro II, with a low 350 Mpixel/s fill rate and SDR RAM, would often run as fast as a GeForce 2 GTS, and in high-overdraw games run faster than an Ultra. Only in the most basic, simple games did it drop to the speeds of a GeForce MX, and even a GeForce MX has far higher specs.
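One way to see why a low fill-rate tile-based card keeps up: an immediate-mode GPU spends fill rate shading fragments that later get overdrawn, while a tile-based deferred renderer resolves visibility per tile first. A toy model, with an assumed average overdraw of 3 (the function and the overdraw figure are illustrative assumptions, not measured data):

```python
def effective_fill_rate(raw_mpixels, overdraw, tile_based):
    # An immediate-mode GPU burns fill rate on fragments that are later
    # overdrawn; a tile-based deferred renderer resolves visibility per tile
    # and shades each visible pixel roughly once.
    return raw_mpixels if tile_based else raw_mpixels / overdraw

overdraw = 3  # assumed average depth complexity, for illustration only
print(effective_fill_rate(350, overdraw, True))    # Kyro II: ~350 useful Mpixel/s
print(effective_fill_rate(800, overdraw, False))   # GeForce 2 GTS: ~267
print(effective_fill_rate(1000, overdraw, False))  # GeForce 2 Ultra: ~333
```

On these assumed numbers the Kyro II's useful throughput sits between the GTS and the Ultra, which matches the behaviour described above.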

Yes, ATI use some tile-based rendering features, but that does not make it a tile-based card. If you think putting two non-tile-based cards together makes a tile-based card, you clearly don't understand what a true, full tile-based card is, or the benefits of tile-based rendering.

ATI might render the screen in tiles, but they still have two non-tile-based cards doing the work.



MR.B said " No one bought them because of these reasons, sales were bad - another fact."
Sales were great, in the millions; the drivers were good and the card was popular. The problem was that ST, who at the time turned over more than 6 billion, decided the desktop graphics card market was too small to bother with any more compared to its other divisions.

As for "NV have demoed real-time raytracing, any CPU from Intel or AMD can raytrace. My 980X CPU can raytrace in real-time if i render a low complexity scene, infact any CPU could do this if it's at a low enough res and complexity."
I hardly call 0.63 fps real-time, and that was on a single high-end NV GPU. I guess it is technically real-time, but it's unusable. On the flip side, Imagination Technologies' previous-generation, very low-clocked 75MHz chip was demonstrated doing real-time ray tracing roughly 23x faster than the CPU or Nvidia's high-end chip.

They say the 2nd-generation chip is 14x faster, and at some point this tech will be implemented in future generations of mobile GPUs. That's far in advance of Nvidia or the CPU. I would be very surprised if PowerVR are not the first to get 30fps real-time ray tracing on a single cheap chip. All the evidence says PowerVR are massively ahead of everyone else in this area: PowerVR are in double-digit FPS while Nvidia are under 1 FPS when using a single-GPU setup.
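The double-digit figure follows directly from the numbers quoted above, taking both of them at face value (they are claims from the demos, not independent benchmarks):

```python
# Back-of-envelope using the thread's own figures (claims, not benchmarks).
nv_fps = 0.63            # quoted single high-end NV GPU ray-tracing result
powervr_vs_nv = 23       # claimed advantage of the 75MHz PowerVR test chip
powervr_fps = nv_fps * powervr_vs_nv
print(round(powervr_fps, 1))  # ~14.5 fps, i.e. already double-digit
```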



MR.B said " could not even match the high end GPU's from NV at the time, let alone be 3x more powerful."
As I recall, and looking back I am correct, PowerVR's budget card was a match for NV's mid to high-end cards. So I think it's safe to say a high-end PowerVR card would have been 3x faster than a high-end NV card. Pretty much every review talks about how you get more for less with PowerVR.


MR.B said "why do PowerVR not compete in high-end GPU/performance areas? Where are the amazing looking demo's showing off PowerVR's tech? Why dont even any of the consoles use PowerVR anymore? "
PowerVR are an IP company. They have technology that competes in high-end GPU/performance areas, but as an IP company their business model does not work like ATI's or Nvidia's. If the rumours are true, PowerVR have been chosen for the next generation of consoles over ATI and Nvidia. Time will tell if this is true. We know a licence has been taken out, but it's not been 100% confirmed.
 
VinceB1: I have always been a PowerVR fanboy; this is nothing new. I just find the technology very interesting. I am just trying to stir up some interesting graphics talk that isn't just Nvidia or ATI. I fully admit I could be wrong on some points, especially when it comes to estimated time frames.
 
Nickg said "But how does it fare vs the Tegra2? especially in things like Hd 1080p decoding and stuff?"
PowerVR are well beyond the Tegra 2. Nvidia themselves said Tegra 2 is 10 to 20% faster than the PowerVR SGX540. Various sources say a single SGX543 core is double the speed of the SGX540, and the SGX543 comes with two cores. That should mean PowerVR are around 3x to 4x faster than Tegra 2, all while running at a much lower clock speed.
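A back-of-envelope check of that 3x to 4x figure, combining the ratios quoted above with the ~95% multi-core scaling claimed elsewhere in the thread (all inputs are the thread's own figures, not measurements):

```python
# All inputs are figures quoted in the thread, not measurements.
sgx540 = 1.0                        # baseline
tegra2_low, tegra2_high = 1.1, 1.2  # "10 to 20% faster than the SGX540"
per_core = 2.0 * sgx540             # one SGX543 core ~= 2x an SGX540
mp2 = per_core * 2 * 0.95           # two cores at ~95% scaling efficiency
print(round(mp2 / tegra2_high, 2), round(mp2 / tegra2_low, 2))  # ~3.17 to ~3.45
```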

http://www.engadget.com/2011/01/17/more-details-emerge-on-apples-a5-chip-for-upcoming-ipad-2-and-i/

http://channel.*****.net/content/item.php?item=28491 has more info; both sites say the SGX543MP2 is 4x the speed of the current generation.

As for 1080p HDMI, that's the default output spec.
 
Vinni3 @H|H said "lolwat?
That's some pixel density - nearly as high as a 30" LCD! If it could be made, which it can't, it would cost more than you can imagine."

What do you call this then? http://www.screentekinc.com/Acer_As...-inch--2048x1536-qxga-laptop-lcd-screen.shtml
There are various references online to the iPad 2 replacement screen costing 3x the price of the current iPad screen. Apple are well known for doubling screen resolution, aiming for just over 300 DPI.
http://www.macrumors.com/2011/01/15/ipad-2-screen-likely-to-have-2048x1536-resolution/ explains more, with evidence for a 2048x1536 resolution. 2048x1536 is double the current iPad resolution, and double-size "x2" iPad graphics have been found in iBooks 1.2. There are also links to double-size iPhone graphics, suggesting that it too is getting another doubling in resolution.
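For what it's worth, the pixel densities implied by those resolutions can be checked directly. The panel sizes below are assumptions (a 9.7-inch iPad and a 3.5-inch iPhone); on those numbers the doubled iPhone screen lands just over 300 PPI, while the rumoured iPad panel would come in around 264 PPI:

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch from resolution and panel diagonal."""
    return math.hypot(width_px, height_px) / diagonal_inches

# Assumed panel sizes: 9.7-inch iPad, 3.5-inch iPhone.
print(round(ppi(2048, 1536, 9.7)))  # rumoured iPad 2 panel: ~264 PPI
print(round(ppi(960, 640, 3.5)))    # doubled iPhone screen: ~330 PPI
```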

EDIT: I have not done much shopping around for the best price, but the screen seems to cost around $69.99. This needs confirming.
EDIT: http://www.247laptoplcd.com/servlet/the-93395/ACER-ASPIRE-ONE-PRO/Detail Around $54, or $44 on eBay.
 
Baboonanza said “I don't know why you're so determined that mobile graphics should surpass desktop GPUs but the it's pure horse****. Mobile graphics are where they always have been (maybe slightly ahead historically since there is now more investment), which is many years behind console tech and even further behind PCs.”
It's not pure horse****. You and a lot of people don't seem to realise the world is changing. Give me one good reason why a mobile real-time ray-tracing chip cannot look better than a non-ray-tracing desktop card. Just because something has always been ahead does not mean it will always be ahead. PDAs used to always be ahead of phones; now look at how far behind PDAs are.


It's a perfectly reasonable possibility that mobile chips, at the current rate of improvement, can overtake desktops. Just look at the upcoming technology and what's being developed. I am not saying it's 100% going to happen, but it is now a very real possibility. Anyone saying it flat out cannot happen doesn't understand the technology in development. The improvement between mobile generations is vastly bigger than the improvement between PC generations.

More and more investment is being put into the mobile market, and it's predicted to vastly overtake PCs. Year after year, mobile markets have seen bigger improvements between generations than PC markets.

It is widely believed the mobile market will out-ship the desktop market by 2012, and shortly after be twice the size and still growing. Once companies' main source of income is the mobile market, and more R&D goes into mobile chips than desktop ones, it's not hard to imagine mobile chips overtaking desktops. That's why I am investing in mobile-focused companies. The future is mobile. Desktop PCs are not going to die out, but I strongly believe they will become a secondary market dwarfed by the mobile market.

Real-time ray tracing is a prime example. It strongly looks like mobiles will get it before desktops. If that happens, desktops could well fall behind graphically. How can you not see that?

As some people don't seem to get it, please take note: I said might overtake desktops, not will.
 
titaniumx3 “The problem with your arguement Pottsey is that it goes against all logic and common intuition.”
It might do at first glance, but not once you investigate. PowerVR are vastly more efficient with their architecture. Like I said before, PowerVR's cheap 75MHz card outperformed Nvidia's top-end desktop card at ray tracing by a massive amount.

Nvidia's next-gen 1000MHz chips barely outperform PowerVR's older 100MHz chip, let alone the next gen. PowerVR only need a small increase to get usable ray tracing, while Nvidia are very far away. Ray tracing is the key: once it arrives, and it looks like it will, mobiles should be able to look better than desktops. Ray tracing can pull off graphics that non-ray-tracing desktop cards cannot dream of. That's the reason lots of people call real-time ray tracing the holy grail.



FrenchTart Said “Mobile chips will never overtake desktops.
…..
You're living in a dream world sadly. “

PowerVR are not currently in the desktop space, so if they come out with a great mobile ray-tracing chip, we could end up with amazing graphics in the mobile sector and no ray tracing on the desktop. ATI and Nvidia are currently years behind PowerVR when it comes to ray tracing.

How is it a dream to believe that PowerVR will get ray tracing to usable speeds before the desktop market does? All the facts point towards it happening. Do you think PowerVR will not be the first to get usable ray tracing?
 