Unlimited Detail Technology - Infinite geometry 3D technology

"Unlimited Detail is a new technology for making realtime 3D graphics. Unlimited Detail is different from existing 3D graphics systems because it can process unlimited point cloud data in real time, giving the highest level of geometry ever seen."


Very impressive, at this stage. I just hope that when they add more processes, i.e. animation, effects, AI, they can maintain the same level of performance.
 
Actually I think it's a very interesting idea... they need to get better artists to do their presentation materials, though.
 
Very impressive, at this stage. I just hope that when they add more processes, i.e. animation, effects, AI, they can maintain the same level of performance.

Almost certainly not; that level of performance already looks pretty bad with just the rendering. Plus I would love to see how this works with animation: moving millions of points for any one object while keeping them in their relative positions would, I imagine, take more processing than the actual rendering, and would surely increase system load considerably for every animated object in the game world. Then throw physics into the mix, even just basic collision detection? **** that.
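To be fair, keeping the points in their relative positions is the cheap part: a rigid transform (one rotation plus one translation applied to every point) preserves all pairwise distances automatically. The worry is the sheer per-point cost. A rough NumPy sketch (all numbers illustrative, not from their system):

```python
import numpy as np

# Sketch: animating a point-cloud object means transforming every point
# each frame. Relative positions are preserved automatically because the
# same rigid transform (rotation + translation) is applied to all points.

def animate(points, angle, translation):
    """Rotate points about the z-axis and translate them: one 3x3
    matrix multiply plus an add per point, every frame, per object."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T + translation

cloud = np.random.rand(1_000_000, 3)   # a million points, one object
frame = animate(cloud, np.pi / 4, np.array([1.0, 0.0, 0.0]))
# Pairwise distances are unchanged: the object moved as a rigid body.
```

Skinned or deforming animation is where it gets really ugly, since then every point needs its own blended transform rather than one shared matrix.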
 
I like the way they try to imply Creative Assembly refused to use their system out of some sort of jealousy.

I find it more likely CA wanted to see something a bit more technical than 'polygons are bad, weee look at this' and they couldn't provide it.
 
Makes sense to me.

I am sure there will be MANY companies wishing this isn't true as well.

If it is, lots of money to be lost.
 
I really couldn't see anything like this coming into play for several generations, though. The next consoles are never going to use it, and maybe not even the ones after that. And companies are never going to design a game for this while consoles still use polygons.
 
So what he's saying is that they create the geometry on the fly by specifically selecting dynamically created point cloud data.

Interesting, but hardly unlimited :) It's as "unlimited" as any interpolated image can be. 3D space still has to be discretised into the point cloud, which in turn has to be processed into vertex data for a standard GPU.

Having said that, what it does mean is that in theory, given an unlimited amount of processing, you could discretise infinitely finely. But then, that can be said of any numerical representation... ever!
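To put the discretisation point concretely, here's a small sketch (my own illustration, nothing to do with their tech): sampling a continuous surface, a unit sphere, into a point cloud. The "detail" is exactly the resolution you pick, no more.

```python
import numpy as np

# "Unlimited" detail is bounded by the sampling resolution you choose:
# discretising a continuous surface (here a unit sphere) yields exactly
# n_theta * n_phi points, however fine you go.

def discretise_sphere(n_theta, n_phi):
    """Sample the unit sphere on an n_theta x n_phi angular grid."""
    theta = np.linspace(0, np.pi, n_theta)
    phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    t, p = np.meshgrid(theta, phi, indexing="ij")
    pts = np.stack([np.sin(t) * np.cos(p),
                    np.sin(t) * np.sin(p),
                    np.cos(t)], axis=-1)
    return pts.reshape(-1, 3)

coarse = discretise_sphere(32, 64)     # 2,048 points
fine = discretise_sphere(512, 1024)    # 524,288 points: finer, still finite
```

Double the resolution in each direction and you quadruple the point count, which is exactly the memory problem people raise below.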

Also, they must have scoured the office to find the narrator, he has to be the most patronising person ever!!
 
Interesting, hopefully this goes somewhere and, as people have said, it can be animated and have physics applied to it without destroying systems.
 
Think of the memory involved in storing all those points. Even in a limited scene of a few hundred unique objects you're looking at gigabytes of data. Each point needs at least 6 bytes just for the position offset from the model centre. Add to that the colour of each point, which will be 2 bytes if you use a texture lookup.

Let's assume each object contains 1,000,000 points (see, it's not really unlimited at all now, is it?). In a scene of 100 models this gives us 100,000,000 points multiplied by at least 8 bytes, which is 800 MB just for the model data. Add on textures and you're well above 1 GB already.

That mightn't sound like much, but that's just for a basic coloured point. For lighting you need another 6 bytes for the normal of each point; if you want normal mapping you also need the tangent and binormal, which are another 6 bytes each. Normal mapping mightn't be needed with sufficiently detailed models, but the normal certainly will be. All of a sudden that 800 MB is up to 1.4 GB just for the minimum data needed to light the points. And that's in a scene of just 100 unique models. Don't even get started on how many points an "unlimited" detail terrain would take up!
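For anyone who wants to check the sums, here's the arithmetic above as a quick script (the byte sizes per attribute are the assumptions from this post, not anything they've published):

```python
# Back-of-envelope point-cloud memory estimate, using the assumed sizes
# above: 6-byte position offset, 2-byte colour index, and 6 bytes each
# for the normal, tangent and binormal.

def scene_memory_bytes(points_per_model, num_models, bytes_per_point):
    """Total bytes for num_models models of points_per_model points each."""
    return points_per_model * num_models * bytes_per_point

POINTS = 1_000_000   # points per model
MODELS = 100         # unique models in the scene

basic = scene_memory_bytes(POINTS, MODELS, 6 + 2)           # position + colour
lit = scene_memory_bytes(POINTS, MODELS, 6 + 2 + 6)         # + normal
mapped = scene_memory_bytes(POINTS, MODELS, 6 + 2 + 6 + 6 + 6)  # + tangent/binormal

print(f"basic colour:  {basic / 1e6:.0f} MB")    # 800 MB
print(f"with normals:  {lit / 1e6:.0f} MB")      # 1400 MB
print(f"normal-mapped: {mapped / 1e6:.0f} MB")   # 2600 MB
```

And that's before textures, before the terrain, and before any acceleration structure the search algorithm itself would need.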

It'd be great if they actually had some magic way of getting this to work at the same speed as current games while looking better at the same time, but I think I'll wait until I see a peer-reviewed paper at SIGGRAPH about this.
 

Agreed, it's really not doable atm, and I can't really claim to know the details. But it's nice to know where the future could be going. Of course it depends on how long it takes until it's actually viable, and whether polygons have progressed so much by then that the improvement is insignificant.

At the end of the day, unless consoles buy into the idea it'll never really come into commercial use. Maybe just high-end graphics or something.
 
Just went to their site and downloaded the hi-res comparison video... it looks pretty crap tbh: lots of points missing per frame, low-res textures, etc. It still has a long way to go before it competes with the current age of polygons + effects.
 
Sounds like a con to me: the guy's voice, the jerky video... I don't buy it.

It's analogous to current efforts to triangulate LiDAR point-cloud data (like RADAR, but using a laser to produce a point cloud of a real environment). The difference is that they are creating the point-cloud data to suit their needs, so ultimately it is far easier to mesh.

What they are proposing is entirely feasible; it simply isn't in the context of gaming. The current push for consumer hardware is not more memory and more raw power: memory has levelled out (ish), and power is spread across many processing devices (CPUs, GPUs, etc.). While high parallelism can suit the processing of point data, the levels of detail they are suggesting are simply not feasible on current or near-future hardware, not in the way that current techniques produce games like Crysis at 60 FPS on top-end kit.

It's interesting, but it looks like it's from a small upstart tech company with nobody experienced in presenting their work. Don't start on YouTube with some shoddy-looking demos and a patronising voice-over guy; start by going to small, relevant conferences, test the water, and find out what people do and don't like so you can tailor the sales pitch...

Meh, anyway, interesting I spose :)

EDIT: Found some article by one of the two (yes two) people listed as a contact, so we are talking very small company here.

From what I can tell, they are billing it as a point-search algorithm. So to develop a model, you would create a (large) point-cloud dataset. The "algorithm" then, for each frame, finds the points relevant to the current camera position. So for a 1024x768 screen, it finds 786,432 points, one per pixel, and simply renders them (though I'm still not 100% sure how this works with current 3D libraries, given you need to calculate normals for each vertex correctly; unless we are talking per-pixel vertex shading specifically, and even then each point from the cloud would need to be processed by a shader...)

A search algorithm that can mask out, say, 2,304,000 points from a cloud of (I imagine) billions quickly enough to get > 25 FPS on a standard system would be very impressive indeed. It'll be interesting to see if they ever publish any of this :)
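Just to make the "one point per pixel" idea concrete, here's a deliberately naive sketch of my own (nothing to do with whatever their actual algorithm is): orthographically project the cloud onto the screen and keep only the nearest point per pixel, z-buffer style. The hard part they're claiming to have solved is doing this without touching every point, presumably via some hierarchical search like an octree.

```python
import numpy as np

# Naive "find the point relevant to each pixel": project a point cloud
# onto a W x H grid and keep only the nearest point per pixel (a z-buffer
# splat). This touches every point per frame, which is exactly what a
# hierarchical search structure would need to avoid.

def splat_nearest(points, w, h):
    """points: (N, 3) array with x, y in [0, 1) and z = depth.
    Returns per-pixel index of the nearest point, or -1 if none."""
    px = (points[:, 0] * w).astype(int)
    py = (points[:, 1] * h).astype(int)
    depth = np.full((h, w), np.inf)
    index = np.full((h, w), -1)
    for i in range(len(points)):
        x, y, z = px[i], py[i], points[i, 2]
        if z < depth[y, x]:
            depth[y, x] = z
            index[y, x] = i
    return index

rng = np.random.default_rng(0)
cloud = rng.random((10_000, 3))
visible = splat_nearest(cloud, 64, 48)  # at most 64 * 48 = 3,072 survivors
```

Even this toy version makes the scaling obvious: the loop is linear in the cloud size, so "billions of points" is only plausible if the search prunes most of the cloud before it ever looks at individual points.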
 
I wouldn't get excited by this; I know I'm not. They don't seem to have done anything new or exciting. How does their technique handle dynamic lights or interactive scenes with physics? Answer: it doesn't, because it relies on a lot of preprocessing of the scene to ensure the searches are efficient. Artist time and memory constraints are also gonna come into play.

They have a lot of waffle and random claims but at the end of the day there's a reason that no one invested in them.
 
You'd have thought they could have found someone better to present it. The video makes some awful simplifications and claims, and is incredibly patronising if you know even a little about graphics card technology (and even if you don't). They could also make some nicer models to test their 'incredible' innovation with.
 