XBOX 360 GPU

Thought this was a good read...

To call the custom-built ATI GPU found inside the Xbox 360 “powerful” is like saying Muhammad Ali was a “good boxer.” Sure, it’s true, but it doesn’t even come close to the entire truth of the matter. There are subtle yet very important aspects of the Xbox 360’s GPU that, at first glance, might not strike you as impressive. But once you take a deeper look, we’re sure you’ll agree that this is one bad chip.

First off, the custom-built ATI chip runs at 500MHz, a very respectable speed for a console-based GPU. It uses 512MB of GDDR3 memory (which requires less power and runs cooler than previous memory types) running at 700MHz over a 128-bit memory interface bus, giving 22.4GBps of bandwidth. This memory is equally accessible to both the GPU and CPU, creating what is known as a Unified Memory Architecture and making graphics performance truly lightning-fast. (For more on the Xbox 360’s Xenon processor, see “Fire In The Belly” on page 137.)
Innovative use of on-die eDRAM makes sure this punch doesn’t lose its speed and impact, even at 720p.
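As a sanity check on the numbers above, peak memory bandwidth is just bus width in bytes times the effective transfer rate. A minimal sketch in Python, assuming a 128-bit bus and double-data-rate GDDR3 at 700MHz (the function name is ours; the figures are from the article):

```python
def bandwidth_gbps(bus_width_bits, clock_mhz, transfers_per_clock=2):
    """Peak bandwidth in GB/s: bytes per transfer times effective rate."""
    bytes_per_transfer = bus_width_bits / 8                  # 128-bit -> 16 bytes
    effective_rate = clock_mhz * 1e6 * transfers_per_clock   # DDR: 2 transfers/clock
    return bytes_per_transfer * effective_rate / 1e9

# 128-bit GDDR3 at 700MHz, double data rate:
print(bandwidth_gbps(128, 700))  # -> 22.4
```

Note that a 256-bit bus at the same clock would give 44.8GBps, so the quoted 22.4GBps figure implies a 128-bit path to main memory.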

Manufactured by NEC using 90nm process technology, the unique 10MB eDRAM (embedded DRAM) chip provides the most powerful benefits of the Xbox 360’s GPU. (You might note that NEC also supplied the embedded DRAM in the Nintendo GameCube, but that was an entirely different generation of eDRAM.)

Even running at a full 500MHz, the ATI GPU draws less than 35 watts of power, and that includes the eDRAM. The power management features found on the chip are impressive. They provide clock throttling at both a “macro” and “micro” level, powering down either large blocks of the chip or smaller logical units where necessary. The true wonder of this chip is the fact that there is no fan directly on the GPU, only a passive heatsink, which is cooled by the air drawn over it by the fans on the back of the Xbox 360. Together, these features define a truly efficient chip.

Sounds Cool . . . What Can It Do?

The 360’s GPU can produce up to 500 million triangles per second. Given that a triangle has three vertices, for practical purposes this means the GPU can process approximately 1.5 billion vertices per second. (In comparison, the ATI X1900 XTX processes only 1.3 billion vertices per second and runs at nearly double the clock speed.) For antialiasing, the 360 GPU pounds out a pixel fillrate of 16 gigasamples per second using 4X MSAA (Multi-Sampling Anti-Aliasing). Of course, the big claim to fame of the 360’s GPU is the stunning 48 billion shader operations per second, thanks to its innovative use of Unified Shader Architecture.
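A quick back-of-the-envelope check on those throughput figures (the 4-gigapixel raw fill number is our assumption, chosen because 4X MSAA on it yields the quoted 16 gigasamples; these are theoretical peaks that ignore vertex sharing between adjacent triangles):

```python
triangles_per_sec = 500e6                 # quoted peak triangle rate
vertices_per_sec = triangles_per_sec * 3  # 3 vertices per triangle
print(vertices_per_sec / 1e9)             # -> 1.5 (billion vertices/s)

raw_fill_gpixels = 4.0                    # assumed raw pixel fillrate (Gpixels/s)
msaa_samples = 4                          # 4X MSAA: 4 samples per pixel
print(raw_fill_gpixels * msaa_samples)    # -> 16.0 (gigasamples/s)
```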

Why is that figure so impressive? For the uninitiated, shader operations are the core of what makes a rendered graphic look the way it does. There are two types of shaders used in gaming graphics: vertex shaders and pixel shaders. Vertex shaders transform the positions and attributes of the vertices that make up a polygon. They determine how realistic the animation of polygons and wireframe models will look: the swagger of a walking character, for instance, or the rolling tread of a tank as it crushes an android skull laid to waste on a charred battleground.

Pixel shaders, on the other hand, are what determine how realistic that charred battlefield will look or the color of the dents in the tank. They alter the pixel’s color and brightness, altering the overall tone, texture, and shape of a “skin” once it’s applied to the wireframe. These shaders allow developers to create materials and surfaces that no longer look like, say, the main characters in Dire Straits’ “Money For Nothing” video. That is, they enable developers to create games with textures and environments that much more closely resemble reality.
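To make the division of labour concrete, here is a deliberately toy software model of the two stages: the vertex shader moves geometry, the pixel shader shades the surface. Everything here (the names, the flat lighting model) is invented for illustration; real shaders are small GPU programs, not Python functions:

```python
def vertex_shader(pos, offset):
    """Reposition one vertex of the wireframe -- e.g. animating a walk cycle."""
    x, y, z = pos
    dx, dy, dz = offset
    return (x + dx, y + dy, z + dz)

def pixel_shader(base_color, light_intensity):
    """Scale one pixel's color by lighting -- tone and texture, per pixel."""
    r, g, b = base_color
    k = max(0.0, min(1.0, light_intensity))  # clamp intensity to [0, 1]
    return (r * k, g * k, b * k)

moved = vertex_shader((1.0, 2.0, 3.0), (0.0, -0.5, 0.0))
lit = pixel_shader((200, 100, 50), 0.5)
print(moved, lit)  # -> (1.0, 1.5, 3.0) (100.0, 50.0, 25.0)
```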

This is a wireframe pyramid, one of the simplest models used in gaming graphics. Each point where the lines meet is a vertex. Stitch 5 million of these together, and you begin to see why vertex shaders are so important.

Each of these graphics processing functions is called and executed on a per-pixel or per-vertex basis as data passes through the pipeline. Until recently, graphics processors handled each shader type individually, with dedicated units for each. Developers used low-level assembly languages to talk directly to the chip for instructions on how to handle the shaders, or they used APIs such as OpenGL or DirectX. Unified Shader Architecture changes all that by handling both shader types at the hardware level in the same instruction process. This means the GPU can exploit the pieces the two shader types have in common while still making direct calls and relaying specific instructions to each shader. This decreases the size of the instruction sets and combines common instructions for the two shader types into one when applicable. That is how the 360’s GPU handles shader operations so quickly and efficiently: 48 billion per second, in fact.
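The scheduling win is easier to see with a toy model: one pool of identical ALUs drains a mixed queue of vertex and pixel work, while a fixed-function design leaves units idle whenever the workload is lopsided. This is purely a conceptual sketch, not the actual Xenos design:

```python
from collections import deque

def run_unified(work_queue, num_alus):
    """Unified pool: each cycle, up to num_alus jobs of ANY type retire."""
    cycles = 0
    q = deque(work_queue)
    while q:
        for _ in range(min(num_alus, len(q))):
            q.popleft()
        cycles += 1
    return cycles

def run_dedicated(work_queue, vertex_alus, pixel_alus):
    """Fixed units: vertex ALUs sit idle when only pixel work remains."""
    cycles = 0
    vq = deque(w for w in work_queue if w == "vertex")
    pq = deque(w for w in work_queue if w == "pixel")
    while vq or pq:
        for _ in range(min(vertex_alus, len(vq))):
            vq.popleft()
        for _ in range(min(pixel_alus, len(pq))):
            pq.popleft()
        cycles += 1
    return cycles

# A pixel-heavy frame: dedicated vertex units idle, unified ones don't.
work = ["vertex"] * 4 + ["pixel"] * 20
print(run_unified(work, 8))       # -> 3 cycles
print(run_dedicated(work, 4, 4))  # -> 5 cycles
```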

How Does It Stack Up?

It’s tempting to compare the GPU inside the Xbox 360 to today’s high-dollar, high-performance video cards, and some who do might scoff a little. The latest graphic cards from Nvidia and ATI, such as Nvidia’s GeForce 7800 GTX and ATI’s Radeon X1900 series, are—on paper—superior GPUs. They tout processor speeds of 550 to 625MHz and memory clock speeds of 1,500MHz and above. In terms of raw horsepower, these cards are indeed brutes. Of course, if there’s one thing we’ve all learned about clock speeds in the great processor wars between Intel and AMD, it’s that raw speed hardly translates into a real measure of processing power.

It’s not hyperbole to say that video memory bandwidth is one of the most important (if not the most important) factors in processing and rendering graphic elements. This is simply because bandwidth and speed determine how rapidly instructions can be transferred, processed, and returned to the system. Thus it directly governs a system’s overall graphics performance.

To improve video memory bandwidth, graphics card manufacturers have resorted to the typical methods of boosting speed, such as creating wider bitpaths (up to 256-bit nowadays) or raising core clock speeds. These techniques have pushed peak performance into the range of 40 to 50GBps, which is respectable when compared with other graphics processors. However, these figures still fall short of the Xbox 360’s 256GBps.
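For comparison, the same peak-bandwidth arithmetic applied to a high-end PC card of the day lands in that 40-to-50GBps window. The 256-bit width and 1,550MHz effective GDDR3 rate below are the commonly quoted X1900 XTX specs, used here as illustrative assumptions:

```python
def peak_bandwidth_gbps(bus_width_bits, effective_rate_mhz):
    """Peak bandwidth: bus width in bytes times effective transfer rate."""
    return (bus_width_bits / 8) * effective_rate_mhz * 1e6 / 1e9

# 256-bit bus, 1,550MHz effective memory rate:
print(peak_bandwidth_gbps(256, 1550))  # -> 49.6, top of the quoted range
```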

Yes, you read that right: 256GBps memory bandwidth. It’s utterly stunning, and it’s thanks to the chip’s embedded 10MB of eDRAM.

No currently available video card makes use of embedded DRAM, and even if one were available, it would be of little benefit before the end of 2006. That’s when Windows Vista arrives; until then, the operating systems PC gamers are running can’t make use of Vista’s WGF (Windows Graphics Foundation) 2.0 features. This speed of instruction handling, combined with Unified Shader Architecture, not only makes the GPU inside the Xbox 360 the current graphics powerhouse, it also means it’ll stay that way for a number of years.

And even when current PC-based GPUs start catching up, it’s going to be extremely expensive to match the performance of this dedicated gaming platform. At the time of this writing, the top-level cards by ATI and Nvidia described in this article are retailing for around $560 apiece, and that’s without Unified Shader Architecture support or eDRAM. And of course, there are other aspects of the system to consider, such as the fact that the CPU and memory were custom-built for dedicated gaming performance.

ATI and Microsoft have truly built something special in the Xbox 360’s GPU. It’s astounding to see a chip with such power run at such an efficient clock speed and generate as little heat as it does, while making use of never-before-seen technology that will surely be replicated in graphics cards and consoles for years to come. It’s comforting to know that the Xbox 360 will continue to produce visually stunning and smooth graphics well into the foreseeable future.
 
wow, didn't realise the difference was so huge! Can't wait till developers get the hang of the 360 and use all of that power :cool:
 
Baine said:
To improve video memory bandwidth, graphics card manufacturers have resorted to the typical methods of boosting speed, such as creating wider bitpaths (up to 256-bit nowadays) or raising core clock speeds. These techniques have pushed peak performance into the range of 40 to 50GBps, which is respectable when compared with other graphics processors. However, these figures still fall short of the Xbox 360’s 256GBps.
This is the bit that got me....256GBps :eek: (now all we need are some very good games that can use this)
 
yeah it's all good :). Had a guy from Rare come to give us a presentation at uni about placements etc. and getting into the gaming industry. He brought along a developer's 360 and told us a bit about why it's so powerful. Was quite amazed at how fast some of the memory on the thing is.
 
Yeah.. I reckon I'll just get a laptop that'll run COD2 in the future and just stick my diamondback in now and then, then get a 360 or 720.. whatever it is then :P for games, as they seem far superior, pound for pound.
 
Whoa, that's a bit misleading! The 256GBps bandwidth is only between the GPU and the 10MB of eDRAM. It is most definitely not the bandwidth between the GPU and system memory, and the suggested comparisons with other GPUs are totally misleading because they're measuring different things.
The PS3 actually has greater theoretical bandwidth between system memory and GPU than the Xgpu.
 
I would say that it is technically more advanced, with the unified shader architecture and embedded DRAM. It also has more unified shader/vertex pipes than Nvidia's separate shader and vertex pipe setup, so I would hazard a guess and say yes.
 
Also, I think the Nvidia PS3 chip is more or less a slightly customized relative of the 7800 graphics card (or maybe the 6800), with a TurboCache feature similar to the 6200 card, whereas the X360 GPU seems to contain a bit more innovation as graphics processors go. Unified shader pipelines and embedded DRAM are not a feature of any PC graphics card and won't be until DX10, which I would say puts the X360 closer to next-gen spec than the PS3. Also, with XNA the 360 has DX9/DX10 hybrid features, where technically the PS3 chip is a pure DX9 part.
 
The PS3 GPU is meant to have:

* Clocked at 550 MHz
* 1.8 TFLOPS floating point performance, 356 GFLOPS programmable
* Full high definition output (up to 1080p) x 2 channel
* Multi-way programmable parallel floating point shader pipelines
* 300.4 million transistors
* 280 shader operations per cycle
* 154 billion shader operations per second
* 51 billion dot products per second (When combined with CPU power.)
* 128-bit pixel precision (for rendering scenes with high dynamic range imaging)
* Vertices Performance: 1.1 billion vertices per second
* Texture bandwidth: 47.5 GB/sec


For the sake of comparison

0.4 billion fewer vertices per second, but more than 3x the shader operations per second.
 
What's also worth noting is that a lot of graphics processing can be diverted to the Cell chip, so it's a very difficult comparison to make between the 360 and PS3. I believe Sony originally intended Cell to do all of the graphics processing, and Nvidia was brought in quite late in development.
Comparing GPUs alone will not give the full picture; in fact both the architectures and the memory allocations are very dissimilar. As PC technophiles, we tend to relate everything to a PC architecture, but both of these are very, very different affairs!
 
PuncH said:
all that oomph and we still get slowdown in need for speed!


This is a good point

What's all the power of the next-gen systems, both PS3 and 360, without good developers?

Just dreams of what could have been :rolleyes:

This is the reason I bought a 360 now and didn't wait for the PS3, as I am sure we won't see the PS3 produce anything better than the 360 for ages, if at all.
 
PuncH said:
all that oomph and we still get slowdown in need for speed!

Could you drive a car instantly where there is no steering wheel and no pedals? Same principle. On PCs you can jump right in and get good (not excellent, but good) performance from new hardware. On consoles you jump in and get poor performance straight away. I can make basic 3D apps for the PC, but converting a 24-bit image to a 256-colour palettized image took me hours and hours on a GP2X because I was overlooking a unique feature of the ARM processor. Pointer maths on an x86 CPU is more or less byte-perfect; on ARM it isn't.
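For anyone curious what that palettization step involves, a naive version is just nearest-colour matching by squared RGB distance. This sketch is ours, not Boogle's code; real converters also build the palette itself (median-cut, octree) and use lookup tables for speed:

```python
def nearest_index(pixel, palette):
    """Return the index of the palette entry closest to pixel (squared RGB distance)."""
    r, g, b = pixel
    best, best_d = 0, float("inf")
    for i, (pr, pg, pb) in enumerate(palette):
        d = (r - pr) ** 2 + (g - pg) ** 2 + (b - pb) ** 2
        if d < best_d:
            best, best_d = i, d
    return best

# A tiny 5-entry palette; a real one would have 256 entries.
palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
print(nearest_index((250, 10, 10), palette))    # -> 1 (nearest to pure red)
print(nearest_index((200, 200, 210), palette))  # -> 4 (nearest to white)
```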
 
It's been this way with every console. Compare the first batch of ps2 games with what we're receiving now. The difference is huge.

Give the 360 a year and we'll have a better idea of what it can do.
 
Boogle said:
Could you drive a car instantly where there is no steering wheel and no pedals? Same principle. On PCs you can jump right in and get good (not excellent, but good) performance from new hardware. On consoles you jump in and get poor performance straight away. I can make basic 3D apps for the PC, but converting a 24-bit image to a 256-colour palettized image took me hours and hours on a GP2X because I was overlooking a unique feature of the ARM processor. Pointer maths on an x86 CPU is more or less byte-perfect; on ARM it isn't.

i see what you mean. but compare need for speed with burnout. i've played both and burnout is smooth and gives a constant frame rate all the time. need for speed doesn't. there were certain areas it would slow down (not terribly, but noticeably) even if my car was the only one on the screen!

just a couple of months more development would have ironed that out. that's all i'm getting at. developers have more power than ever at their disposal, and i appreciate that it'll take a good 3-4 years before they start to push the 360 to its limits. however, i still think it's pretty poor that any game can have frame rate issues when this amount of power is available, even if it is a release title.
 
PuncH said:
just a couple of months more development would have ironed that out. that's all i'm getting at. developers have more power than ever at their disposal, and i appreciate that it'll take a good 3-4 years before they start to push the 360 to its limits. however, i still think it's pretty poor that any game can have frame rate issues when this amount of power is available, even if it is a release title.

Definitely. I think MS are desperate to get as large a catalogue as possible to spite Sony's launch. Also, publishers know that since there aren't exactly that many games yet, theirs will sell far better than they normally would. Even though I love Full Auto, and would have bought it anyway, would many people have bought it if it came out after Burnout? Plus FA was one of only 2 or 3 games released in Feb.

I'm sure the PS3 will have a similarly mixed launch though. It's even more difficult to get performance out of it (blame Cell, the most over-hyped CPU I've ever known), yet Sony's hype machine has it labelled as faster than a supercomputer. That puts more pressure on developers for better graphics, which can easily lead to more framerate problems than the Xbox 360 has. I'm not happy with how Sony are handling the PS3; it makes things much harder on developers (i.e., make the console perform miracles, but no, we won't provide proper dev tools).
 