Beyond3D interviewed Carmack and Sweeney about the latest ATI and NVIDIA architectures. Namely, how ATI has gone with 48 pixel shader processors against only 16 texture units, while NVIDIA is still keeping a 1:1 texture:shader ratio. Also, a bit on vertex texture fetch (VTF).
http://www.beyond3d.com/
Unfortunately, this is only a news post, not a full-fledged article, so I can only link to the main page.
There are a few key points:
1. Contrary to claims made in the past, Unreal Engine 3 does NOT CURRENTLY USE VERTEX TEXTURE FETCHING. It's official now, straight from Sweeney's mouth. Really though, the claim that UE3 used VTF was never substantiated. Why did it spread so far?
2. Going with a higher ALU:TEX ratio is beneficial. Both id and Epic have confirmed this. Sweeney makes a snide comment about X1600 in the process. Or could he be complimenting X1900? The NVIDIA-optimistic reading is that 7900 will hit Sweeney's ideal ALU:TEX ratio, but I think that's quite a stretch from what was actually said.
Since the release of ATI’s X1000 series of products we’ve seen a couple of different takes on Shader Model 3.0 from the two main vendors. The first caused a small controversy with NVIDIA clearly believing Vertex Texturing to be part of the VS3.0 specification, but ATI (and apparently Microsoft’s WHQL certification process) disagreeing such that this wasn’t included in the X1000 series, with “Render to Vertex Buffer” being provided as an alternative. Another divergence has been highlighted with the recent X1900 release and ATI keeping a comparatively low number of texture units in their high end, whilst scaling up their math processing capabilities significantly.
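As a rough illustration of the difference between the two approaches, here is a minimal CPU-side sketch in C (not real graphics code; the grid size and every name here are invented for the example, and real code would go through Direct3D). With vertex texture fetch the vertex stage reads the texture directly; with render-to-vertex-buffer a pixel-shader pass first writes the data into a buffer that is then rebound as ordinary vertex input.

    #include <stdio.h>

    /* Hypothetical CPU model of the two data paths. Nothing here is an
       actual vendor API; it only shows where the texture read happens. */

    #define GRID 4
    static float height_texture[GRID] = { 0.0f, 0.5f, 1.0f, 0.25f };

    /* Vertex texture fetch (GeForce 6/7 style): the vertex stage samples
       the texture itself while processing each vertex. */
    static float vtf_vertex_height(int vertex) {
        return height_texture[vertex];   /* fetch happens in the vertex stage */
    }

    /* Render to vertex buffer (X1000 style): a prior pixel-shader pass
       writes the heights into a buffer, which is then bound as plain
       vertex data, so the vertex stage never touches a texture. */
    static float r2vb_buffer[GRID];

    static void r2vb_prepass(void) {
        for (int i = 0; i < GRID; i++)
            r2vb_buffer[i] = height_texture[i];   /* "pixel shader" pass */
    }

    int main(void) {
        r2vb_prepass();
        for (int i = 0; i < GRID; i++)
            printf("vertex %d: vtf=%.2f  r2vb=%.2f\n",
                   i, vtf_vertex_height(i), r2vb_buffer[i]);
        return 0;
    }

Both paths end up feeding the same heights to the vertices; the disagreement is purely about which stage of the pipeline does the reading.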
We’ve quizzed both id Software’s John Carmack and Epic’s Tim Sweeney on their thoughts on these differing directions between the two vendors:
In our interview with Eric and Richard of ATI, they mentioned that they went in the direction of tripling the ALUs versus the TMUs after talking with developers like you, for instance. My question would be: do you see this as a good direction? Are you working on shaders which require a lot of ALU work while keeping TMU usage at today's levels?
John: I think it is clear that the ratio of math to texture fetches is increasing.
Tim: It's a definite trend that ALU usage in shaders is going up at a faster rate than TMU usage, so it's reasonable that the hardware should increase ALU's faster than TMU's. What ratio is ideal is debatable; it depends on a whole lot of variables, but fortunately it's easy to see whose tradeoffs win at a given price level by running some benchmarks.
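To make that ratio concrete, here is a hedged sketch of the kind of shader both are describing: one texture fetch feeding a much larger pile of lighting math, i.e. an ALU:TEX ratio well above 1:1. It's a CPU-side C model of a single pixel with invented names; a real shader would be written in HLSL or GLSL.

    #include <math.h>
    #include <stdio.h>

    /* Illustrative model of one shaded pixel: 1 texture fetch (TEX)
       followed by many math operations (ALU). All names are made up. */

    typedef struct { float x, y, z; } vec3;

    static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* Stand-in for the single TEX instruction. */
    static vec3 sample_albedo(float u, float v) {
        vec3 c = { u, v, 0.5f };
        return c;
    }

    static float shade_pixel(float u, float v, vec3 n, vec3 l, vec3 h) {
        vec3  albedo   = sample_albedo(u, v);              /* 1 TEX          */
        float diffuse  = fmaxf(dot3(n, l), 0.0f);          /* ~6 ALU ops     */
        float specular = powf(fmaxf(dot3(n, h), 0.0f),     /* many more ALU  */
                              32.0f);
        return albedo.x * diffuse + specular;              /* math dominates */
    }

    int main(void) {
        vec3 n = { 0.0f, 0.0f, 1.0f };
        vec3 l = { 0.0f, 0.0f, 1.0f };
        vec3 h = { 0.0f, 0.0f, 1.0f };
        printf("pixel = %f\n", shade_pixel(0.5f, 0.5f, n, l, h));
        return 0;
    }

Scale the lighting up (more lights, normal mapping, Fresnel terms) and the fetch count stays flat while the math count climbs, which is exactly the trend both developers point to.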
The X1000 series of ATI cards don't implement an actual texture fetch in the vertex shader, unlike NVIDIA's GeForce 6 and GeForce 7 series; instead, the texture data is written out by the pixel shader into a buffer that the programmer then binds as vertex input. Which implementation do you prefer?
John: For vertexes, I think more often about looking up data in a table rather than indexing an image, but I can see either perspective.
Tim: We don't use vertex texture fetch in UE3 right now, but I expect we'll be using it in the future for moving more of our displacement-mapped terrain logic to the GPU.
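A minimal sketch of the terrain use Tim describes, again as a CPU-side C model with an invented heightmap: each vertex fetches a height from a texture and is displaced upward. The manual bilinear filter is included because, as far as we're aware, vertex texture fetch on current hardware is point-sampled only, so filtering has to be done by hand in the shader.

    #include <stdio.h>

    #define W 4
    #define H 4
    static float heightmap[H][W] = {
        {0.0f, 0.1f, 0.2f, 0.1f},
        {0.1f, 0.4f, 0.5f, 0.2f},
        {0.2f, 0.5f, 0.8f, 0.3f},
        {0.1f, 0.2f, 0.3f, 0.1f},
    };

    /* Point-sampled fetch, clamped at the texture edges. */
    static float fetch(int x, int z) {
        if (x < 0) x = 0; if (x >= W) x = W - 1;
        if (z < 0) z = 0; if (z >= H) z = H - 1;
        return heightmap[z][x];
    }

    /* Four point samples blended into one bilinear height. */
    static float sample_height(float u, float v) {
        float fx = u * (W - 1), fz = v * (H - 1);
        int   x  = (int)fx,     z  = (int)fz;
        float tx = fx - x,      tz = fz - z;
        float top = fetch(x, z)     * (1 - tx) + fetch(x + 1, z)     * tx;
        float bot = fetch(x, z + 1) * (1 - tx) + fetch(x + 1, z + 1) * tx;
        return top * (1 - tz) + bot * tz;
    }

    int main(void) {
        const float scale = 10.0f;             /* arbitrary world scale */
        /* "Vertex shader" loop: displace a 5x5 grid of terrain vertices. */
        for (int j = 0; j <= 4; j++) {
            for (int i = 0; i <= 4; i++)
                printf("%5.2f ", sample_height(i / 4.0f, j / 4.0f) * scale);
            printf("\n");
        }
        return 0;
    }

Doing this on the GPU means the CPU never has to touch the terrain mesh when the heightfield changes, which is presumably the appeal for UE3.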
Tim also dropped the following comment to us with regard to Unreal Engine 3:
Tim: We'll be making a UE3 benchmark available several months before shipping UT2007 on PC, in order to encourage the hardware folks to optimize their drivers. We're not doing this now, because at our stage in development many aspects of our rendering pipeline aren't fully optimized, and if we encouraged IHV's to optimize for it now (by releasing a benchmark), they would end up wasting a lot of time optimizing code paths that aren't reflective of a final, shipping UE3 project. Regarding the timeline, we'll be actively developing Unreal Engine 3 throughout the current hardware generation -- all the way through 2009.
Thanks to Shadowmage at EOCF for that...