
PowerVR demonstrate 2- to 16-core GPU chips for mobile devices!

metalmackey "Maybe they have but you seem to think that no one else could come up with something similar or better and the fact is they would have to. "
I am not saying no one else could, just that PowerVR have a head start of years. Everyone else would pretty much have to start the technology from scratch, while PowerVR have been refining it for generations now.

Just look at Nvidia's current attempt with the Tegra 2. Nvidia's 1 GHz chip doesn't even beat PowerVR's 100 MHz last-generation chip at games. Can Nvidia or AMD really make that much advancement in such a short amount of time?

The Tegra chip is a full SoC... the PowerVR one is just graphics, isn't it?
 
That's correct. PowerVR is just graphics, well, a little more than that. Hmm, you've got me thinking now. Are the MHz speeds posted for Tegra 2 for the whole chip or just the GPU part?

Anyone know if the GPU part is running at the same speed as the rest of the chip? It's too late to research now; I'll have to look into it tomorrow.
 
Mobiles are the future, and mobiles are overtaking desktops both in units sold and, perhaps over time, even in GPU power. I am not saying the desktop market will die out, but it's becoming the secondary, smaller market.

I play games on a 25" HD monitor. Its fairly huge and is an excellent screen.

I also have a soundcard that has a highend heaphone amp and DAC that a portable does not have and could not realistically have due to the lack of interest in quality on a mass market item.

I also use a mouse and keyboard to play for comfort and accuracy

my only interest is gaming , music and nice easy on the eye full screen browsing.


Exactly how are 'mobiles the future'? The future of what? HD newspaper reading, web browsing, HD mini-golf games?

I can browse using a smartphone easily, but connection speed/stability and ease of use are still behind a PC and will always be this way (3G, 4G, 5G, 6G etc. vs stable 10-gig fibre connections?). But just because I can play games or browse on a smartphone doesn't mean it's the best way to do things.

Ultimately convergence is happening, but there are markets for both PCs and phones, and even if the phone were graphically superior to the PC... then I'd plug a keyboard and mouse into it and hook up a decent-sized HD monitor and external DAC, hence... a small PC! ;):D

Whatever replaces the PC will be my new PC, as long as it's like a PC :D
 
This is why I don't think mobile gaming will become huge, because there's always going to be a market for people who want a quality, immersive gaming experience. No matter how powerful these mobile chips become, they'll never be able to adequately replicate the experience of a proper PC gaming rig.
 
phill1978 said "Exactly how are 'mobiles the future'?"
The future in that more mobile units will sell than desktops, more advancements will arrive on mobiles than on desktops with new features appearing first on mobiles, more games and more people playing games on them, more users, more people doing email and web, and the list goes on.

Basically, companies wanting profit will move to mobiles and focus on them. I am not saying they will abandon the desktop. Just look at how many graphics chips PowerVR has sold; it dwarfs Nvidia, which is why Nvidia wants to get into the market.


Like I said before, the PC is not going to die out, but it will become smaller and less used than the mobile market.



You both said "I play games on a 25" HD monitor. It's fairly huge and an excellent screen."
"No matter how powerful these mobile chips become, they'll never be able to adequately replicate the experience of a proper PC gaming rig."

That might not be true with the way things are going. Most new-gen mobiles output the resolution a 25" HD monitor runs at, or higher. 1080p is getting common, and that's 1920×1080, with rumours saying this year's devices will be even higher. What res is that HD screen of yours?

The idea being that if you want a big screen, you pull out the mobile and plug it into the hotel room or home screen. Pull out a wireless keyboard and mouse and bingo, you have a device that rivals a PC. We are getting to that stage surprisingly fast.

When you get console graphics in a mobile, why bother with a console? No need to lug around a laptop or PC. Carry the device as you travel; get home and place it in a caddy to charge, with a keyboard and mouse plugged into the caddy or auto-detected via whatever means you choose. Kind of like how people use laptops that plug into a charging bay: once plugged in, the desktop monitor, keyboard and mouse all switch to the docked laptop. I see this getting more widespread as the years go on.

I am the same and will stick to a PC but I can see how mobiles will become the larger market.
 
Why carry the mobile device at all? Why not connect to a powerful cloud cluster that can do all the rendering and physics?
 
How do you connect to a cloud cluster without some sort of fixed or mobile device? You still need hardware to decode and display the video at HD quality.

EDIT: I am not sure full cloud-cluster devices would take off. There are too many problems and disadvantages for very little advantage. In my mind the best system is a hybrid one, like Steam: you have the local hardware and files stored locally, while the cloud keeps the important files like settings, save games or work, but you still store them locally to avoid all the downsides of the cloud. Best of both worlds.
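Something like this sketch is what I mean by hybrid; the paths and the "cloud" copy are made-up stand-ins for illustration, not any real service:

```python
import shutil
from pathlib import Path

# Hypothetical locations, purely for illustration; CLOUD_CACHE stands in for a real
# cloud upload, which would normally be an API call rather than a folder copy.
LOCAL_SAVES = Path.home() / "Games" / "saves"
CLOUD_CACHE = Path.home() / "CloudBackup" / "saves"


def save_game(name: str, data: bytes) -> None:
    """Write the save locally first, then push a copy to the 'cloud' as a backup."""
    LOCAL_SAVES.mkdir(parents=True, exist_ok=True)
    CLOUD_CACHE.mkdir(parents=True, exist_ok=True)
    local_file = LOCAL_SAVES / name
    local_file.write_bytes(data)                   # local copy is always the primary one
    shutil.copy2(local_file, CLOUD_CACHE / name)   # cloud copy only guards against loss


def load_game(name: str) -> bytes:
    """Prefer the local copy; only fall back to the cloud copy if it's missing."""
    local_file = LOCAL_SAVES / name
    if local_file.exists():
        return local_file.read_bytes()
    return (CLOUD_CACHE / name).read_bytes()


save_game("quicksave.dat", b"example save data")
print(load_game("quicksave.dat"))
```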
 
Lots more interesting news today, and it's not looking good for Nvidia or ARM. STMicroelectronics just signed up for the PowerVR Series6 Rogue chip, which is a major win and should cause a domino effect of more people swapping over to PowerVR. So it looks like no more Mali at the high end.

Source http://www.design-reuse.com/news/25654/imagination-technologies-stmicroelectronics-deal.html

http://www.stericsson.com/press_releases/NovaThor.jsp says "The Nova A9600 will bring more than a 20-fold improvement in graphics performance compared with the U8500 platform." and "The Nova A9600, built in 28nm, will deliver groundbreaking multimedia and graphics performance, featuring a dual-core ARM Cortex-A15-based processor running up to 2.5 GHz breaking the 20k DMIPS barrier, and a POWERVR Rogue GPU that delivers in excess of 210 GFLOPS. The graphics performance of the A9600 will exceed 350 million 'real' polygons per second and more than 5 gigapixels per second visible fill rate (which given POWERVR's deferred rendering architecture results in more than 13 gigapixels per second effective fill rate)." That also means an effective rate of 910 million polygons per second. Those are some impressive specs.
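To show where that 910 million figure comes from, here's my rough working from the quoted numbers (the overdraw factor is just the effective/visible fill-rate ratio):

```python
# Rough working from the quoted ST-Ericsson numbers, nothing more.
visible_fill = 5.0     # gigapixels/s visible fill rate
effective_fill = 13.0  # gigapixels/s effective fill rate (deferred rendering)
real_polys = 350e6     # 'real' polygons per second

overdraw_factor = effective_fill / visible_fill   # = 2.6
effective_polys = real_polys * overdraw_factor    # = 910 million per second

print(f"overdraw factor: {overdraw_factor:.1f}x")
print(f"effective polygons per second: {effective_polys / 1e6:.0f} million")
```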

Renesas Mobile are set to demonstrate the first POWERVR SGX MP chip this week at the show. This is over 5 times the speed of current devices. Quoting them: "Achieving a new level of graphics compute density with significantly enhanced performance per square millimetre (mm2) and per milliwatt (mW), this marks a breakthrough in mobile computing, removing the last performance barriers for smartphones"

http://www.renesas.com/press/news/2011/news20110215.jsp

http://www.globalprintmonitor.com/e...-at-mobile-world-congress-2011-#ixzz1DySvVSil
 
No, those are different chips. The Series 5 refresh is 4x to 8x the performance of older Series 5 chips, depending on the version. Series 6 is the first full new generation in years, with a 20x performance increase over Series 5 and a 100x increase over the old MBX found in the iPhone 1.

Just to be clear, the PowerVR SGX MP chip is the fast 4x to 8x chip. Anything with SGX in the name is a Series 5 chip, with the SGX MP being the refresh. The codename for Series 6 is Rogue. Looks like I was right in saying that mobiles are advancing much faster than desktops and could overtake them.
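A quick sanity check on how those multipliers fit together; the Series 5 vs MBX baseline is my own assumption, not an official figure:

```python
# Rough sanity check of the quoted multipliers. The Series 5 vs MBX baseline is my
# assumption for illustration; the 20x and 4x-8x figures are the quoted ones.
series5_vs_mbx = 5           # assumed: original Series 5 roughly 5x the old MBX
series6_vs_series5 = 20      # quoted: Series 6 (Rogue) ~20x Series 5
refresh_vs_series5 = (4, 8)  # quoted: SGX MP refresh is 4x to 8x Series 5

series6_vs_mbx = series5_vs_mbx * series6_vs_series5   # works out to ~100x, as quoted
print(f"Series 6 vs MBX: ~{series6_vs_mbx}x")
print(f"SGX MP refresh vs Series 5: {refresh_vs_series5[0]}x to {refresh_vs_series5[1]}x")
```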
 
There's a saying: "a good big one will always beat a good little one". Someone will make a larger version and probably won't need to buy any licence. There's plenty of plagiarism and downright theft of ideas in the tech industry, done in the full knowledge that they may have to pay up later but will have already made their money. The big players won't just give up their billions easily...
 
metalmackey said "Is the 210 GFLOPS double or single precision? A quick google says the 5970 is 928 GFLOPS."
No idea, and we don't know how many cores were used for that figure. If it's one core then it should scale up to 16 cores like the rest of the chips. Anyway, due to PowerVR's architecture being so massively different, we cannot directly compare raw specs like that.

We all know the performance of a 210 GFLOPS PowerVR part is far better than that of a 210 GFLOPS card from AMD or Nvidia.

I also don't see how this is anything like Cell. The 8x numbers have practically been proven already, and there are just too many demonstrations and big names saying the 20x number is true. If it wasn't true, why would so many massive companies drop their old GPUs and swap over to the new PowerVR one?


Troezar, so you think AMD and Nvidia are going to steal the technology off PowerVR and get away with using it? I don't think AMD and Nvidia will dare take on and upset that many giant players, many of which dwarf them.
 
They won't straight copy, but once a new idea is out there it isn't long before there are lots of close competitors. Look at any other manufacturing industry; it's very rare to get one uncontested product...
 
Pottsey, for the love of God use the quote button.

It's not just so that I can read your replies, it's so that I can use the jump to read what the person you're "quoting" said originally.
 
Is the 210 GFLOPS double or single precision? A quick google says the 5970 is 928 GFLOPS.

Single precision, and a 6970 is around 2.7 TFLOPS, not gigaflops.

Also, Pottsey loves, absolutely loves, to make these bold proclamations about the speed with which it's moving forward.

But Rogue isn't out yet; it's not even close, it's miles and miles away. Desktops are at 2.7 TFLOPS, the next gen will likely be pushing 5 TFLOPS and WILL be out LONG before Rogue is, and the generation after that, which could well be pushing 10 TFLOPS, will likely arrive around the same year Rogue does.

Yup, sure, mobile GPUs are catching up fast.
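Back-of-the-envelope version of that argument, if you want the numbers (the generation count shown is a guess on my part, not a roadmap):

```python
# Back-of-the-envelope projection: desktops doubling each generation versus Rogue's
# quoted 210 GFLOPS. The number of generations shown is a guess, not a roadmap.
desktop_tflops = 2.7   # roughly a 6970 today
rogue_tflops = 0.21    # quoted Rogue figure

for gen in range(3):
    ratio = desktop_tflops / rogue_tflops
    print(f"gen +{gen}: desktop ~{desktop_tflops:.1f} TFLOPS, ~{ratio:.0f}x Rogue")
    desktop_tflops *= 2  # 'double in power roughly speaking, every generation'
```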

As per usual, Pottsey tends to ignore logic and has a glorified take on things he really, really likes.

The fact is, no one was EVEN TRYING with mobile graphics before. It was tiny, it was like an Intel IGP: you can call it a GPU, but it's really just an insult to GPUs ;)

When you go from Intel not even remotely trying to Intel trying just a bit, you get a massive speed-up. OK, Intel have only caught up with AMD IGPs from a couple of generations ago with their 2600K IGP, but that's a pretty big step. Going from that to beating or competing with AMD on their IGPs is a MONUMENTAL step in power, advances and complexity.

No doubt mobile GPUs are moving along, and the power they use (if not being understated and misrepresented just a touch) is impressive to say the least.

But it's not in the ballpark of desktop GPUs; pretending so is silly, really really silly.

You're now trying to compare future generations of mobile products to current desktop stuff, which will double in power, roughly speaking, every generation.

Here's a hint: PowerVR GPUs will not increase in speed 20-fold EVERY gen at the same power envelope.
 
Well, it's looking even worse for Nvidia, who just announced the Tegra roadmap, and it looks like they have nothing to compete against PowerVR.

http://blogs.nvidia.com/2011/02/teg...-chip-worlds-first-quadcore-mobile-processor/

Assuming those performance numbers are true they will lag behind massively.

Before making such bold claims maybe you should establish the fact that the current competition is close, because if the current GPU is 20 times slower, getting 20 times faster won't even keep up with a Tegra that doubled its GPU power.

Likewise, you're basing all of this on the likely figures of a full 16-core GPU. What power will a 16-core version use, will it fit into mobiles, or will a 16-core version only be a tablet GPU?

Tegra is an SoC, PowerVR is not; Tegra could use a PowerVR GPU inside its SoC, and Nvidia could release a new core with a new name with vastly upgraded graphics for tablet use.

AMD will have 140 W (well, likely 125 or 140 W) octo-cores available in a few months; do you expect to see octo-core Bulldozers in netbooks? No, different markets. VERY clearly, to anyone not wanting to fudge the facts, a PowerVR 2-core chip, while architecturally the same, will NOT be aimed at the same market that a 16-core version would be.

I'm actually interested in it, it's a very interesting product, but there are VERY few facts right now. Die size, power usage, actual speed, actual features, none of these things are known.

Will it only be 210 GFLOPS in a 16-core version which will use 2-3 W? That would almost certainly be worse performance per FLOP than the GPU in Bobcat. Or is it 210 GFLOPS per 2-core pair, using only 0.1 W, so fitting 8 into a mobile is possible? I don't know that, you don't know that, no one seems to know that.
 
drunkenmaster said "As per usual, Pottsey tends to ignore logic and has a glorified take on things he really, really likes."
OK, I might sometimes take a slightly glorified approach to things, but I never ignore logic. Can you point out one point in this thread where I ignored logic? In fact, any thread. Just because you don't agree with me or don't understand something, it does not mean I am ignoring logic. That's the one thing I never do, and if I am wrong I admit it.



drunkenmaster said "You're now trying to compare future generations of mobile products to current desktop stuff, which will double in power, roughly speaking, every generation."
That's my point. By 2012 mobiles will have hit the power of today's mid/high-end desktop cards. Desktops are only doubling every generation while mobiles are more than doubling every generation. Mobile chips are now right behind the better mid-range desktops and look set to catch up, if not overtake.

Mobiles are advancing at a rapid rate, far faster than desktops and with a far more efficient design than desktops.
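Rough arithmetic behind that, purely illustrative; the starting gap and per-generation multipliers are my own assumptions, not official figures:

```python
# Illustrative only: how a gap closes if mobile GPUs multiply faster per generation
# than desktop GPUs. The starting gap and multipliers are assumptions, not measurements.
desktop = 1.0        # normalised desktop GPU performance today
mobile = 1.0 / 20    # assume mobile starts roughly 20x behind

desktop_growth = 2.0  # desktops roughly double each generation
mobile_growth = 5.0   # assume mobiles gain ~5x each generation

for gen in range(1, 5):
    desktop *= desktop_growth
    mobile *= mobile_growth
    print(f"gen {gen}: mobile is {mobile / desktop:.2f}x of desktop")
```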



drunkenmaster said "Before making such bold claims maybe you should establish the fact that the current competition is close, because if the current GPU is 20 times slower, getting 20 times faster won't even keep up with a Tegra that doubled its GPU power"
But the current competition isn't close. How do you work that one out? The Tegra 2 next-gen chips are barely faster than PowerVR's last generation, if you can even argue they are faster, as so far the games benchmarks show them as slower. It's only a few synthetic benchmarks that show them as a little faster. Take a closer look at Nvidia's marketing hype. When NV say they are faster, they are comparing this year's NV generation against PowerVR's old chips.

NV being 20% faster than PowerVR's old chip isn't very impressive when PowerVR's new chip is 4x to 8x faster depending on the device.



drunkenmaster said "Likewise, you're basing all of this on the likely figures of a full 16-core GPU. What power will a 16-core version use, will it fit into mobiles, or will a 16-core version only be a tablet GPU?"
No one ever said the Series 6 chip was 1 core or 16. They only gave performance numbers. The power numbers are "significantly enhanced performance per square millimetre (mm2) and per milliwatt" but no final details. Assuming this is all true, we should be talking well under 100 milliwatts. Very impressive technology.



drunkenmaster said "But it's not in the ballpark of desktop GPUs; pretending so is silly, really really silly."
How are 2,660 million polygons per second and a fill rate of 80 GPixel/s not in the ballpark of desktops? Even this year's chips, at 133 million polygons per second and a fill rate of four billion pixels per second (4 GPixel/s), are in the ballpark of desktop GPUs. Granted, not the high-end GPUs, but they have caught up with desktops instead of being years behind.



drunkenmaster said "Tegra is an SoC, PowerVR is not; Tegra could use a PowerVR GPU inside its SoC"
That would be great for me, as I would make even more money, but I don't see it happening. As for saying Nvidia could bring out a new GPU: what's the chance of them breaking the roadmap and bringing out something much better than what's on the roadmap? Have they ever even done that before?



drunkenmaster said “Die size, power usage, actual speed, actual features, none of these things are known.”
Pretty sure I posted all that for the Series 5 MP chip, although very little is confirmed for the Series 6 chip. What do you want to know for the Series 5? I will post it if I can.




drunkenmaster said "Will it only be 210 GFLOPS in a 16-core version which will use 2-3 W? That would almost certainly be worse performance per FLOP than the GPU in Bobcat. Or is it 210 GFLOPS per 2-core pair, using only 0.1 W, so fitting 8 into a mobile is possible? I don't know that, you don't know that, no one seems to know that."
Well, they said it's got improved power consumption, so worst case should be 100 mW for one core and 1,600 mW for 16 cores. Most likely much better.
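The rough working for that worst case (the per-core figure, and pairing it with the 210 GFLOPS number, are just my assumptions):

```python
# Worst-case power scaling using the assumed 100 mW per core figure above.
per_core_mw = 100   # assumed worst case per core
cores = 16
total_mw = per_core_mw * cores   # 1,600 mW = 1.6 W for 16 cores

gflops = 210  # quoted Series 6 figure; whether it applies to the 16-core config isn't confirmed
print(f"{cores} cores: {total_mw} mW ({total_mw / 1000:.1f} W)")
print(f"~{gflops / (total_mw / 1000):.0f} GFLOPS per watt at that worst case")
```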
 