
Could PowerVR be adding real-time hardware ray tracing to their 3D chips?

It's the lack of hardware physics that's holding us back. After seeing some of the hardware physics coming in EVE Online and other games, I believe I was correct in saying hardware physics is the future.

In all fairness, everyone knew hardware-accelerated physics was the future.

We just had to wait until CPUs were quick enough & had enough spare cores to allow this to happen, rather than it being tied to a specific piece of hardware.

The sooner Micro$oft bring their own physics API to DirectX the better, if only from the point of view of a common platform for developers & end users.
 
Monkeynut said "Pottsey, can you PLEASE quote people in the proper manner."
But that is what I am doing, according to the correct rules of the English language. You see these " symbols? Those are called quotation marks, used for quoting people :)



Ordokai "In fact, you would have to ramp up a whole deal of components to achieve complex and awesome-looking scenes."
Why would you? PowerVR has near-perfect hidden surface removal, unlike everyone else: PowerVR only renders what can be seen, cutting down the power needed to render a screen massively. What components would need to be ramped up? I don't see any, apart from the GPU, which would be ramped up anyway as we are talking next generation or the generation after. The point of hardware ray tracing is that the whole thing is done on the GPU. Apart from database issues there is no extra strain on the rest of the system, if the GPU can handle the workload.
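
A rough sketch of what that hidden surface removal buys, as a toy Python model (my own simplification, not Imagination's actual pipeline): depth-test every fragment in a tile first, then shade each pixel exactly once, so occluded fragments never pay the expensive shading cost.

```python
# Toy model of tile-based hidden surface removal (an illustration only,
# not PowerVR's real pipeline). Depth-test all fragments first, then
# shade only the visible fragment at each pixel.

def shade(fragment):
    """Stand-in for the expensive texturing/lighting work HSR avoids."""
    x, y, depth, material = fragment
    return material  # real hardware would sample textures, light, etc.

def render_tile(fragments):
    """fragments: iterable of (x, y, depth, material) tuples."""
    # Pass 1: depth test only; keep the nearest fragment per pixel.
    nearest = {}
    for frag in fragments:
        x, y, depth, _ = frag
        if (x, y) not in nearest or depth < nearest[(x, y)][2]:
            nearest[(x, y)] = frag
    # Pass 2: shade each pixel exactly once, whatever the overdraw was.
    return {pixel: shade(frag) for pixel, frag in nearest.items()}

# Three overlapping fragments at one pixel: only the nearest is shaded.
tile = [(5, 5, 0.9, "floor"), (5, 5, 0.3, "crate"), (5, 5, 0.6, "wall")]
print(render_tile(tile))  # {(5, 5): 'crate'}
```

With 3x or 4x overdraw, an immediate-mode renderer would run the expensive shading for every fragment; here the shading cost stays at one per visible pixel regardless.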


The devices are already using SSDs, so that's the data transfer and database access problems fixed. As far as I can see the only thing holding us back is the GPU.



Klo said " Raytracing doesn't suddenly make everything better, it makes the lighting more realistic, but if the textures etc are still low quality, it won't make that much difference."
Yes but PowerVR solved the bandwidth problem and have more than enough spare bandwidth and storage space for good textures. That's hardly a problem.

I didn't say Ageia's PhysX was the future, I said that if we all had hardware physics then games could be so much better than today. It's the lack of hardware physics that's holding us back. After seeing some of the hardware physics coming in EVE Online and other games, I believe I was correct in saying hardware physics is the future.
Honestly, the only thing that is clear from anything in this post is that you really don't know what you are talking about.
 
Even if we make a whole deal of unlikely assumptions to get this thing into a mobile by tomorrow, can the rest of the device handle complex enough environments/models for raytracing to even matter, for instance? In fact, with anything less than complex scenes/models, tracing rays would be close to pointless: why use a ridiculously calculation-intensive simulation when an approximation would yield much the same result? Even more so, I suspect, on a tiny screen.

Uh oh you said the magic words that make Pottsey explode, poor Pottsey.

The fundamental flaw in his (and Rroff's) arguments over PhysX was very basic, simple and obvious; basically exactly what you said minus the small-screen bit. PhysX does nothing better, just much more accurately, but you can't actually tell the difference because, when a box explodes from inside, the mind can't tell where each shard should be: one goes left, one goes right, neither is "right" in the mind's eye, both are fine.

What was ultimately worse with PhysX is that it doesn't do things more REALISTICALLY; it got more accurate numbers for unrealistic simulations of events, and that's even worse. Look at the Mafia 2 "physx" things: the effects are wholly unrealistic, just more power-intensive.

Here we apply the same argument: ultimately accurate lighting to a 0.1MB texture on a 2-inch screen that looks woeful but lit well; it's laughable.

I've seen some mention of the effective simplicity of using a low level of raytracing to pinpoint where everything should be, versus some very bulky shadowmaps and the like that are difficult to manage and generate. I've no idea how raytracing works, but for a tiny, tiny device I can understand the potential for simplicity and efficiency.

Then there's the other argument: "it was 2 years ago and a 75MHz card, in a proper PC".

Sorry, but the key answer here would have been: it was a 2W card two years ago and today it could be done at 0.2W in a tiny form factor giving acceptable framerates. As this wasn't mentioned, I'm going to go ahead and assume they can't.

As for whether it actually matters: yes, unfortunately devs, like anyone else, insist on involving themselves in every last form factor to the detriment of every other form factor.

First PCs got overlooked because someone managed to get 15-30 million consoles into people's hands and it felt a more attractive process, then you have phones and people started making rubbish games for them, and then tablets, and then better phones, then the new consoles, then PCs again, etc, etc.

People don't expect world-class accuracy on a tiny screen, nor do devs want the cost of developing to massively increase. The games for phones/mobile devices are cheap, short, crap and pointless, exactly like the audience they are designed for :p

Devs don't want to learn how to code for raytracing, producing ultra-quality stuff on mobiles that increases cost dramatically and increases time to market, just to produce "old" stuff on PCs and every other platform.

Now the question this thread, and several of the tech news sites, should be asking is: where ELSE might PowerVR be pushing in the future, and would that platform be somewhere to push raytracing forwards?

There's been rumours of PowerVR trying to get something into some console, handheld maybe, or, meh, who knows.

Sometimes companies buy other companies just to stop them being a threat, sometimes they want the quality engineers/coders, and most importantly lots of people buy companies and never ever use the IP they got from buying them.

Intel have bought lots of smaller companies with good intentions but ultimately went a completely different route; that's life.

Even in the slim chance they did bring raytracing to mobiles, the real question is: so the heck what? If they keep going forwards they can take over desktop 3D cards? Really? Despite not selling desktop 3D cards, and the absolute certainty that desktop and console gaming won't be based around people with a nice "phone stand" playing games in front of them? Then no, there's literally no chance it will "take over" gaming.

Crap low res textures, crap ability to game over a long time due to power, crap resolution, crap screen size, crap control methods, but super doopa ultra realistic lighting............ yeah I don't think that will be something that takes the world by storm.

Revenue-wise, taking over is possible; mobile gaming is already growing rapidly. People are much more willing to buy a 10MB game on their phone for £2, on a whim on a boring train ride to wherever they are going, than buy a 10GB game for £30. That market is growing very fast without raytracing anyway; raytracing won't have any bearing on it.
 
They could, maybe, for phones, but for PowerVR to bring ray tracing into PC gaming you need both AMD and Nvidia doing hybrid chips, or at least on board with PowerVR.

Not sure PowerVR could make a hybrid chip that competes with AMD and Nvidia in rasterisation, does ray tracing well enough for people to want it, and wins market share big enough for gaming companies to take the gamble of making ray-traced games.
 
drunkenmaster said " Sometimes companies buy other companies just to stop them being a threat, sometimes they want the quality engineers/coders, and most importantly lots of people buy companies and never ever use the IP they got from buying them."

You didn't read any of the links I posted, did you, or do any research?
" “Well, we don’t want to tip our hand but this acquisition opens up the potential for highly photorealistic imagery to reach new real-time applications and markets, including consumer, not possible previously, via its integration with POWERVR, which is the de facto standard for mobile and embedded graphics.

" “We would not have acquired this [Caustic] technology if we did not believe we could get it into handsets,” he added, commenting on the new deal.


http://www.mobilebusinessphones.com...mises-photo-realistic-3d-graphics-on-mobiles/




drunkenmaster said "rap low res textures, crap ability to game over a long time due to power, crap resolution, crap screen size, crap control methods, but super doopa ultra realistic lighting............ yeah I don't think that will be something that takes the world by storm."
You do like making stuff up, as proven many times before. You mean high-res textures; no power problems, you can play for ages as tile-based cards use much less power; and more than just super lighting. I forgot it's HD as well, so not low resolution; it's made to output to TVs. As for crap controls, that completely depends on the device and has nothing to do with the 3D chips.

http://www.cubixgpu.com/img.php?foto=i/ray_tr_home1.jpg&rat=600
http://thepriorart.files.wordpress.com/2009/08/nvidia_automotive_rt2.jpg
http://www.cadsoftsolutions.co.uk/software/sketchup/su_podium/files/page80-su_podium_01.jpg
http://www.cadsoftsolutions.co.uk/software/sketchup/su_podium/files/page80-exterior_sm.jpg

This is the type of graphics we are talking about: photo-realistic with nice curved surfaces, no or few jagged lines, no bad-looking edged polygons with curves made up of obvious big straight lines.

Where are you getting this crap resolution and low textures rubbish from? Are you aware the devices already output to TVs, getting around your screen-size problem, and as we are talking about next gen it's only going to get better? As for textures, it supports and runs high-res textures just fine. It sounds like you are just creating problems out of thin air.



drunkenmaster "Then theres the other argument "it was 2 years ago and a 75Mhz card, in a proper PC".
Sorry but the key answer here would have been, it was a 2W card two years ago and today could be done in 0.2W in a tiny form factor giving acceptable framerates, as this wasn't mentioned I'm going to go ahead and assume they can't."

So you think a company that specialises in tiny form factors, has said they are implementing this technology, and spent a lot of money on another company's technology, bought something they cannot use? And you think this because the technology, over two years ago, in a slow proof-of-concept card, ran over 50x faster than everyone else; a card not made to be as small as possible, only to see if the idea would work.

http://images.anandtech.com/reviews/shows/2002/CeBIT/part1/powervr_fpga.jpg You see that? That's a PowerVR MBX running on an FPGA board at a little over 10% of the speed of the final product (14MHz as opposed to 120MHz) and with a higher wattage than the final product. Going by your logic they could not possibly get that into a phone.
People like you said PowerVR could never pull off Xbox-level and better graphics with free FSAA in a mobile, and look at what they did. Before PowerVR, people like you were saying the state of mobiles today wouldn't happen any time soon.
 
drunkenmaster said "Phsyx does nothing better, just much more accurately, but, you can't actually tell the difference because, when a box explodes from inside, the mind can't tell where each shard should be, one goes left, one goes right, neither is "right" in the minds eye, both are fine."
Yet you and others complained when simple less accurate simulations are done and the explosions are the same with shards exploding in the same as the eye can tell. I am sure you complained the explosions are all the same in some older games. Proven the Eye can tell the difference.

Anyway, as per normal, you totally miss the point of hardware physics. There are three options: option 1, do the same physics at the same accuracy as before, only much, much faster; option 2, do the same physics at the same accuracy as before, only more of it, so you get more for no less FPS than before; option 3, do more accurate physics.
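
A toy sketch of those three options (my own illustration in Python, not PhysX or any real engine): treat the per-frame physics budget as a fixed number of integration steps and spend it three different ways.

```python
# Toy sketch of the three options (an illustration only, not PhysX or
# any real engine). The "budget" is how many integration steps the
# hardware can afford per frame; frame cost ~= bodies * substeps.

def simulate_frame(bodies, substeps, dt=1 / 60, gravity=-9.81):
    """Semi-implicit Euler; more substeps means a smaller effective
    timestep and therefore more accurate motion."""
    h = dt / substeps
    for _ in range(substeps):
        for b in bodies:
            b["vy"] += gravity * h   # integrate velocity first,
            b["y"] += b["vy"] * h    # then position (semi-implicit)

def make_debris(n):
    return [{"y": 10.0, "vy": 0.0} for _ in range(n)]

budget = 8000  # steps per frame the hardware can afford (made-up number)

# Option 1: same scene, same accuracy, the frame just finishes sooner.
simulate_frame(make_debris(1000), substeps=2)   # 2,000 steps, budget spare
# Option 2: same accuracy, but four times the debris for the same budget.
simulate_frame(make_debris(4000), substeps=2)   # 8,000 steps
# Option 3: same scene, but four times the accuracy for the same budget.
simulate_frame(make_debris(1000), substeps=8)   # 8,000 steps
```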



drunkenmaster said "Here we apply the same argument, ultimately accurate lighting to a 0.1mb texture on a 2 inch screen that looks woeful but lighted well, its laughable."
That argument is pointless because ray tracing is renowned for great textures, powervr are renowned for some of the best lossless texture compression and for having extra bandwidth and more free ram for textures then other cards. Textures are not a problem. Have you seen Rage on the Iphone? It has the Rage megatextures without any problem.

It's more a case of the same textures with proven better-looking lighting with ray tracing, or the same or worse textures with worse lighting without ray tracing.

2" screen? Current screen size's go up to 10" or more at 1024 × 768 or more with being designed to output HD to HD TV's.



drunkenmaster said " Dev's don't want to learn how to code for raytracing and producing ultra quality stuff on mobiles that increases cost dramatically and increases time to market, just to produce "old" stuff on pc's and every other platform."
Yeah just like they wouldn't want to program for a tile based card over normal cards. Just like they wouldn't want to program for all the different PowerVR features like the different compressed texture. If anything the millions upon millions of games sold for PowerVR's device shows the devs love this kind of stuff.

EDIT: Where is your evidence costs would increase? What if it's easier to program for and produces better results? Lots of people have been raytracing for years; even I have managed to make some raytraced scenes and I am no real programmer. The skills are already out there. Why would raytracing increase time to market? Are those facts you posted, or just wild guesswork you are dressing up and presenting as facts?
 
Ah - ray tracing :D

Well, ray tracing is "the future" of lighting. In principle it's very easy to implement (much easier than rasterisation or hybrid methods), leads to more realistic-looking environments, and allows a near-infinite number of reflections without too much of a performance hit.
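
To give a flavour of that simplicity, here is a complete toy ray tracer (my own minimal sketch in Python, nothing to do with Caustic's hardware): one sphere, one directional light, one primary ray per pixel, printed as ASCII shading.

```python
import math

# Toy ray tracer: one sphere, one directional light, one primary ray
# per pixel. The point is how short the core loop is compared with a
# rasteriser: intersect a ray per pixel, then shade the hit point.

WIDTH, HEIGHT = 64, 32
CENTRE, RADIUS = (0.0, 0.0, 3.0), 1.0
LIGHT = (0.577, 0.577, -0.577)  # unit vector pointing towards the light

def hit_sphere(d):
    """Distance along unit ray direction d (from the origin) to the
    sphere, or None on a miss. Solves |t*d - C|^2 = R^2 for t."""
    b = -2.0 * sum(di * ci for di, ci in zip(d, CENTRE))
    c = sum(ci * ci for ci in CENTRE) - RADIUS * RADIUS
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        # Camera at the origin looking down +z; map pixel to a view ray.
        x = (i / WIDTH - 0.5) * 2.0
        y = (0.5 - j / HEIGHT) * 2.0 * HEIGHT / WIDTH  # aspect correction
        n = math.sqrt(x * x + y * y + 1.0)
        d = (x / n, y / n, 1.0 / n)
        t = hit_sphere(d)
        if t is None:
            row += " "  # ray escaped into the background
        else:
            p = tuple(t * di for di in d)                  # hit point
            nrm = tuple((p[k] - CENTRE[k]) / RADIUS for k in range(3))
            shade = max(0.0, sum(nrm[k] * LIGHT[k] for k in range(3)))
            row += ".:-=+*#%@"[int(shade * 8.99)]          # diffuse term
    print(row)
```

The entire renderer is one intersection test and one dot product per pixel; what makes real-time ray tracing hard is doing this millions of times per frame, with many secondary bounces.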

Unfortunately it requires a massive amount of computing power to implement properly. As a ballpark estimate, I would say at least 10x the compute power of today's high-end GPUs would be required in order to implement ray-tracing effectively in complex 3D games. We can expect this kind of compute power in perhaps three or four (proper) GPU generations - assuming that we haven't all switched to purely online, cloud-compute based gaming by then!
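
As a back-of-envelope check on that timeline, assuming (my assumption) that each proper GPU generation roughly doubles floating-point throughput:

```python
import math

# Back-of-envelope check of the "three or four generations" guess,
# assuming each proper GPU generation roughly doubles FP throughput.
target_speedup = 10.0
doubling_per_generation = 2.0
generations = math.log(target_speedup, doubling_per_generation)
print(f"{generations:.1f}")  # ~3.3, i.e. three to four generations
```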


Anyway, it's interesting that PowerVR have bought a ray-tracing specialist company - I assume that they want their engineers or perhaps their patents for some reason or another. They may even be looking to move into non real-time rendering, or be looking to "future-proof" themselves. But to suggest that ray-tracing will be implemented on mobile devices is just not realistic. Ray tracing is a massively parallel compute operation, which requires huge floating-point processing power in order to operate. The reduced screen resolution of mobile devices does not come anywhere close to making up for the reduction in parallel processing power that results from the strict space- and power-consumption constraints.

In short: Ray tracing is a highly parallel operation that requires massive floating point compute capacity (which devices like GPUs are suitable to provide). It would certainly be possible to adjust the design of the GPU processing pipeline to one which is more suited to the type of computations encountered in ray tracing (rather than rasterisation), but it would still not get us close to the compute power required for real-time gaming implementation. We will not see it on mobiles first.
 
There was a proof-of-concept device back in 1999 or so that only ran at 75MHz (massively parallel) and could do 640x480@60fps, though from memory I don't think it did caustics.

Do you have any info on this? It doesn't seem realistic that a device from back then could perform real-time ray tracing... At least not on a large scale.

That said, if it was a proof of concept device back in 1999, it could well have just been a demonstration of a flexible parallel compute device (similar to the programmable GPUs of today), which implemented a cut-down ray-tracing algorithm to demonstrate the potential of such technology (...and the improvement relative to CPU rendering).
 
Duff-Man Said “But to suggest that ray-tracing will be implemented on mobile devices is just not realistic.”
The CEO of PowerVR has directly said that's what they are doing, and after seeing all the things they have pulled off I wouldn't bet against them. Saying that, it wouldn't surprise me if we didn't see it in mobiles first, but I do believe we will see it in PowerVR chips first and much sooner than most people expect, although perhaps not next generation but the generation after.

As for the 1999 number, I am wondering if that's a typo or if someone got confused and was thinking of ray-traced Quake from around that time? I know this http://www.maximumpc.com/files/u58308/CausticGraphics_RayProcessor.jpg is from 2009; it is low-speed at 75MHz and runs ray tracing at 3 to 5 FPS. According to various sources the 2010 card is 14 times faster, putting ray tracing at roughly 42 to 70 FPS.

The people at PowerVR seem confident they can incorporate that into mobile chips. It's also clear the people at Caustic have made a major breakthrough in the speed of raytracing.

There are some nice videos at http://vimeo.com/4202946 which, if anything, prove it lowers development time. He was going on about how artists no longer have to worry about lighting tricks; it's all done automatically on the fly.

Some stunning images here http://caustic.com/gallery_images.php I like the car scene.
 
Oh, I'm sure we will see it on mobiles eventually, as technology evolves sufficiently. But we certainly won't see it on mobiles first. We don't have the compute power, even in high-end GPUs, to implement effective real-time ray tracing even on mobile-resolution screens (say 800*480). As a very basic requirement, we will need more than the compute power of today's high-end GPUs to run at mobile standards for size and power draw (i.e. just a few Watts) before it can be implemented in mobile devices. That alone is many years away.

Once the parallel compute power is available to perform effective real-time ray tracing, I suspect the need for separate "ray-tracing specific" add-in boards will disappear. These massively parallel operations are ideal for processing with 'regular' GPUs, and some relatively minor modification to the pipeline process should improve ray-tracing efficiency (at the cost of reduced rasterisation efficiency of course - no such thing as a free lunch :p).


There are some interesting results on those links. It seems that for fully ray-traced renders, their hardware is capable of around 5fps at VGA resolution (600*400). So, we're looking at *at least* a factor of ten improvement to get playable framerates on that resolution, and a further factor of 8 (or so) to scale up to HD resolutions. I suspect there is a lot of room for improvement using advanced high-powered GPUs rather than specialist add-in boards though, so perhaps a factor of 10 or 20 may be realistic.
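
The arithmetic behind those factors, assuming cost scales linearly with pixel count and frame rate, and taking "playable" as 50fps and "HD" as 1920x1080 (both my choices for the example):

```python
# The arithmetic behind the factors above, assuming cost scales
# linearly with pixel count and with frame rate. "Playable" = 50fps
# and "HD" = 1920x1080 are my own choices for the sake of the example.
base_pixels, base_fps = 600 * 400, 5      # the demo numbers quoted above
fps_factor = 50 / base_fps                # 10x to reach playable framerates
res_factor = (1920 * 1080) / base_pixels  # ~8.6x more pixels for full HD
print(fps_factor, round(res_factor, 1), round(fps_factor * res_factor))
# -> 10.0 8.6 86 : roughly the "factor of ten" and "factor of 8" above
```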
 
Duff-Man Said “As a very basic requirement, we will need more than the compute power of today's high-end GPUs to run at mobile standards for size and power draw (i.e. just a few Watts) before it can be implemented in mobile devices. That alone is many years away.”
Are you sure? It's hard to get data on the 2010 card, but from what I can gather it is overall much simpler and less powerful than today's high-end GPUs, yet can run raytracing at 30+ FPS. Assuming that's true (and I haven't been able to confirm it is), it might be easy to implement in mobile devices (OK, easy might not be the best word; perhaps doable).

Overall I agree with you; it's just the timeframe I don't agree with. I believe there is a very, and I do mean very, small chance we will see ray tracing in the next-gen chip. A very high chance in the generation after, and almost guaranteed in the generation after that. Then again, I do tend to underestimate the timeframes needed.



Duff-Man Said “So, we're looking at *at least* a factor of ten improvement to get playable framerates on that resolution, and a further factor of 8 (or so) to scale up to HD resolutions.”
They say they have already achieved a factor of 14 improvement this year. Taking that into account, I believe they could get a further 4x within two years.

I guess the biggest question is can they scale down the hardware while keeping the needed performance. Only time will tell I guess.
 
Anyway, as per normal, you totally miss the point of hardware physics. There are three options: option 1, do the same physics at the same accuracy as before, only much, much faster; option 2, do the same physics at the same accuracy as before, only more of it, so you get more for no less FPS than before; option 3, do more accurate physics.

http://www.pixelux.com/dmmEngine.html

How would you class DMM then?

That argument is pointless because ray tracing is renowned for great textures, PowerVR are renowned for some of the best lossless texture compression and for having extra bandwidth and more free RAM for textures than other cards. Textures are not a problem.

Raytracing is a process for calculating lighting, reflections & refractions. It has nothing directly to do with texture generation, so saying it's renowned for great textures isn't really true.

Pixar movies have a reputation for looking beautiful, with great texture work & amazing lighting, but the PhotoRealistic RenderMan engine is primarily a scanline-based one, not a raytracing-based one.

Have you seen Rage on the iPhone? It has the Rage megatextures without any problem.

I've seen Rage on the iPhone. It's impressive, but it runs on a predetermined path through the levels, so some of the lighting & effects are pre-baked into the textures. And the textures are fairly low resolution in the iPhone version. The HD version is better, but it's still Xbox 1 territory visually.

If anything, the millions upon millions of games sold for PowerVR devices show the devs love this kind of stuff.

They don't love it because it's a PowerVR GPU that's running it.

They love it because you get millions of people buying a game that costs 59p, playing it for 5min, getting bored with it & then buying another one a day or so later.

The end consumer couldn't care less whether it runs on a PowerVR GPU or a Rolf Harris SuperDuperClocked Mega Edition GPU.
 
... stuff which is awkward to quote without spiralling into neverending complexity... (post #35)


The most basic measurement for all floating-point computations is "FLOPS" (floating-point operations per second). You can approximate the floating point requirement to render a scene at a particular resolution, using a given ray-tracing algorithm [ = #pixels * algorithmic requirement for each ray ]. Multiply this by the number of frames required per-second, and you have the FLOPS requirement.

The unknown in the above computation is the number of floating point operations required to render each ray, which is a property of the ray tracing algorithm used. However, once you have defined the mathematical algorithm to be used to render the rays, its floating point requirement is easy to compute. Unfortunately, implementing ray-tracing is extremely intensive on floating-point arithmetic - this is fundamentally the reason we haven't seen any real-time ray-tracing implementations yet.
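
A hedged worked example of that estimate in Python; every number here is an illustrative guess, the per-ray cost especially:

```python
# Worked version of the estimate above. All inputs are guesses for
# illustration; the per-ray cost is the unknown the text describes.
def raytrace_flops(width, height, fps, flops_per_ray, rays_per_pixel=1):
    """FLOPS required = pixels * rays per pixel * cost per ray * fps."""
    return width * height * rays_per_pixel * flops_per_ray * fps

# e.g. 720p at 30fps, guessing 10,000 FLOPs per ray and 4 rays per pixel
# (a primary ray plus a shadow ray and a couple of bounces):
estimate = raytrace_flops(1280, 720, 30, 10_000, rays_per_pixel=4)
print(f"{estimate / 1e12:.1f} TFLOPS")  # ~1.1 TFLOPS with these guesses
```

Even with those fairly conservative guesses, the answer lands around the full throughput of a high-end desktop GPU, which is exactly the point being made here.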

Now, the most powerful floating point powerhouses that we have available today are GPUs, offering on the order of 1000 GFLOPS of performance. Of course they only work for massively parallel operations (like pixel rendering), but luckily ray tracing falls into this category (rays can largely be rendered independently). So, in order to improve ray-tracing performance we need to improve the floating-point capacity of GPUs (the "big" technological development), and also adjust their function to more efficiently handle the types of computation required by ray-tracing algorithms (a more minor adjustment). The specialist hardware that Caustic use will likely be designed to perform ray-tracing very efficiently, but will lack the floating point power of modern GPUs. They can improve this floating point capacity very easily, up to a point, but when they begin to approach the FPU capacity of a modern GPU then massive development (of the order spent by AMD / Nvidia on developing their GPUs) would be required to progress further. So - there is a very real "brick wall" type limit not far above their heads.

As for scaling down to mobile devices, well I don't have to tell you how much power recent GPUs require (more than the rest of a high-end system combined), and how much you have to reduce performance in order to fit even the requirements of laptops, let alone mobile devices. Since floating point performance is the true limiting factor for real-time ray-tracing, I still maintain that it's going to be a long time before we see it in mobile devices.
 
The thing is, you don't seem to be factoring in that they found a software and hardware combination that allows them to compute a massive number of rays with a low number of FLOPs. That's what the breakthrough is. Before the breakthrough I would have agreed with everything you just said.

If the breakthrough is as good as it seems, then they don't need anything anywhere near the power of top-end GPUs to get fast 30fps+ ray tracing.

This might be oversimplified, but if the current PowerVR chips have way more FLOPS than the first Caustic board, and the technology is compatible (which PowerVR seem to think it is), then in theory the new PowerVR chips will have the power to ray trace far faster than desktop cards while drawing far less power.

It's looking like they don't need to increase the FLOPS up to the level of current high-end GPUs. Just how many FLOPS could the 75MHz card have had? Yet it managed to outperform top-end GPUs by a factor of what, 5x?

PowerVR have always been about smarter work, getting more for less, not the brute-force throw-as-much-processing-power-at-the-problem-as-you-can approach of AMD and Nvidia.

I don't think looking at current desktop GPUs is an indication that this cannot work at mobile level. AMD and Nvidia have always been vastly inefficient compared to PowerVR.

Anyway, we might not agree on timeframes, but I think we can both agree this will not be next year, if it arrives at all.
 
Well, breakthroughs in algorithmic efficiency (where a floating point utilisation saving can be made) are always more valuable than "brute force" hardware improvements. That being said, in my experience massive algorithmic performance improvements tend to come with quite severe restrictions on their scope of applications, in comparison to the traditional algorithm they are replacing (I'm talking in general terms about numerical methods here, not specifically about ray tracing).

Still, it's an interesting development that is well worth watching. But algorithmic improvements or not, ray tracing is still heavily reliant on floating point power, and massive floating point performance is extremely difficult to scale down to low-power devices. For that reason, I can't see it finding a home in mobile devices before it is implemented on desktops.
 
Still, it's an interesting development that is well worth watching. But algorithmic improvements or not, ray tracing is still heavily reliant on floating point power, and massive floating point performance is extremely difficult to scale down to low-power devices. For that reason, I can't see it finding a home in mobile devices before it is implemented on desktops.

Bbbbbut then PowerVR won't be winrar :( (..again..)
 