**XBOX ONE** Official Thread

Apparently, the Xbone and PS4 are a generation ahead of the highest end PC on the market. It must be true because some chap at EA says so.

http://www.linkedin.com/today/post/article/20130522214715-10904058-the-technology-behind-xbox-one

"Both the Xbox One and PlayStation 4 have adopted electronics and an integrated system-on-a-chip (SoC) architecture that unleashes magnitudes more compute and graphics power than the current generation of consoles. These architectures are a generation ahead of the highest end PC on the market and their unique design of the hardware, the underlying operating system and the live service layer create one of the most compelling platforms to reimagine game mechanics."


https://twitter.com/MarkRein/status/337627995323895808

:D
 
Actually Mr Men, perhaps educating you might be best. You asked the simple question:
How did 360 do 60FPS then?
which I replied:
Huh, they werent trying to throw 2Gbs worth of assets at the frame buffer within 16ms - the resolution difference between the two consoles are the reasoning for that jump in requirements...

ps3ud0 :cool:
and you seemed to get confused by the highlighted sentence, if you had remembered the conversation you had on the previous page (only a couple of hours ago) you would have remembered that your question was to this post
How is it guess work? Do you have anyidea just how slow the XBONE memory system is compared to the PS4's? 16ms isn't a lot of time to get your data to the GPU etc and it's what is required to maintain 60FPS
so its pretty obvious your own question was a comparision with the XO and the X360 and so why in context my answer is pretty self-explanatory.

I may sound condescending, but honestly I dont care much for spoon feeding people, youll just either come over as lazy or obtuse...

ps3ud0 :cool:
 
Actually Mr Men, perhaps educating you might be best. You asked the simple question:

Which I replied:

And you seemed to get confused by the highlighted sentence, if you had remembered the conversation you had on the previous page (only a couple of hours ago) you would have remembered that your question was to this post.

So its pretty obvious your own question was a comparison with the XO and the X360 and so why in context my answer is pretty self-explanatory.

I may sound condescending, but honestly I don't care much for spoon feeding people, you'll just either come over as lazy or obtuse...

ps3ud0 :cool:

Fixed that for you, see now we can both educate ;)
 
Like most, I'm disappointed by the XB1's specs, but reading the threads on Beyond3D, the difference isn't likely to be as big as the numbers suggest. Even being optimistic, though, I have no doubt the gap will still be bigger than this generation's.

It's a complex task to see how big the deficit will be. The whole ESRAM concept is to allow the GPU to do post-processing and deferred-rendering work while the CPU accesses the main RAM with low latency.
GDDR5 has amazing bandwidth, but the latency hit of the CPU and GPU both accessing it at the same time does hurt efficiency.

I will say that if the difference is noticeable, I will be buying all multiplats on the PS4..

I'm going to wait for E3; I'm sure we'll have far more valid comparisons when we start seeing actual gameplay. KZ4 looked great, but not jaw-dropping, and if the XB1 is struggling, it'll be very apparent.
 
It's a complex task to see how big the deficit will be. The whole ESRAM concept is to allow the GPU to do post-processing and deferred-rendering work while the CPU accesses the main RAM with low latency.
Got a link to that? My understanding is that, unlike the eDRAM of the X360, which sat at the end of the pipeline and gave post-processing effects like AA and motion blur, the eSRAM is going to be used like a GPU cache, so it sits at the start of the pipeline to assist the DDR3 in actually providing the assets in the first place. I just don't see how it could do any additional post-processing when in effect it's pretty damn busy keeping the GPU ticking over...

No doubt I've missed something, I just can't see a point where the eSRAM won't be used solidly to cache data between the GPU and DDR3...

ps3ud0 :cool:
 

Sure. The source was sebbbi (the Trials HD main man)...

from this post: http://forum.beyond3d.com/showthread.php?p=1738762#post1738762

sebbbi said:
On Xbox 360, the EDRAM helps a lot with backbuffer bandwidth. For example in our last Xbox 360 game we had a 2 MRT g-buffer (deferred rendering, depth + 2x8888 buffers, same bit depth as in CryEngine 3). The g-buffer writes require 12 bytes of bandwidth per pixel, and all that bandwidth is fully provided by EDRAM. For each rendered pixel we sample three textures. Textures are block compressed (2xDXT5+1xDXN), so they take a total 3 bytes per sampled texel. Assuming a coherent access pattern and trilinear filtering, we multiply that cost by 1.25 (25% extra memory touched by trilinear), and we get a texture bandwidth requirement of 3.75 bytes per rendered pixel. Without EDRAM the external memory bandwidth requirement is 12+3.75 bytes = 15.75 bytes per pixel. With EDRAM it is only 3.75 bytes. That is a 76% saving (over 4x external memory bandwidth cost without EDRAM). Deferred rendering is a widely used technique in high end AAA games. It is often criticized to be bandwidth inefficient, but developers still love to use it because it has lots of benefits. On Xbox 360, the EDRAM enables efficient usage of deferred rendering.

Also a fast read/write on chip memory scratchpad (or a big cache) would help a lot with image post processing. Most of the image post process algorithms need no (or just a little) extra memory in addition to the processed backbuffer. With large enough on chip memory (or cache), most post processing algorithms become completely free of external memory bandwidth. Examples: HDR bloom, lens flares/streaks, bokeh/DOF, motion blur (per pixel motion vectors), SSAO/SSDO, post AA filters, color correction, etc, etc. The screen space local reflection (SSLR) algorithm (in Killzone Shadow Fall) would benefit the most from fast on chip local memory, since tracing those secondary rays from the min/max quadtree acceleration structure has quite an incoherent memory access pattern. Incoherent accesses are latency sensitive (lots of cache misses) and the on chip memories tend to have smaller latencies (of course it's implementation specific, but that is usually true, since the memory is closer to the execution units, for example Haswell's 128 MB L4 should be lower latency than the external memory). I would expect to see a lot more post process effects in the future as developers are targeting cinematic rendering with their new engines. Fast on chip memory scratchpad (or a big cache) would reduce bandwidth requirement a lot.

He starts by saying how the EDRAM helped on the 360, then adds what a 'memory scratchpad (or a big cache)' would help with...
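sebbbi's arithmetic is easy to sanity-check; here's a quick sketch with the numbers straight from the quoted post, assuming the 2-MRT deferred g-buffer and three block-compressed textures he describes:

```python
# Per-pixel external bandwidth with and without EDRAM, using sebbbi's figures.

GBUFFER_BYTES = 12         # depth + 2x 8888 render targets, written per pixel
TEXEL_BYTES = 3            # 2x DXT5 + 1x DXN, block compressed
TRILINEAR_OVERHEAD = 1.25  # ~25% extra memory touched by trilinear filtering

texture_bytes = TEXEL_BYTES * TRILINEAR_OVERHEAD   # 3.75 bytes per rendered pixel

without_edram = GBUFFER_BYTES + texture_bytes      # 15.75 bytes/pixel to external RAM
with_edram = texture_bytes                         # g-buffer traffic stays on-chip

saving = 1 - with_edram / without_edram
print(f"external bandwidth: {without_edram} vs {with_edram} bytes/pixel")
print(f"saving: {saving:.0%}, ratio: {without_edram / with_edram:.1f}x")
```

The ~4.2x ratio is where his "over 4x external memory bandwidth cost without EDRAM" line comes from.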

No one actually knows the exact details of the ESRAM and how it's connected, and developers can't say anything at the moment (it's all under NDA). I expect when games are shown at E3 we'll start getting a feel for how large the gap is.

There are also other people (like ERP, a Sony developer) who go in-depth on why they see some merit in the ESRAM for keeping the CUs running more efficiently, such that the 18 vs 12 CU gap may not necessarily be as large as it looks.

Saying all that, there is no way in hell the XB1 is going to be on par with the PS4, I'm sure of that, but my expectations range from a worst case of 60 vs 30fps territory to just a little bit worse (the reverse of what we have today with the 360/PS3). I suspect the reality is somewhere in between.
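For context on that 18 vs 12 CU gap: each GCN compute unit has 64 shader ALUs doing one fused multiply-add (2 FLOPs) per cycle, so at the widely reported ~800 MHz clocks (unconfirmed at the time, so treat these as ballpark figures) the raw peaks work out roughly like this:

```python
def gcn_tflops(cus, clock_ghz, alus_per_cu=64, flops_per_alu=2):
    """Peak single-precision TFLOPS for a GCN GPU (FMA counts as 2 FLOPs/cycle)."""
    return cus * alus_per_cu * flops_per_alu * clock_ghz / 1000.0

ps4 = gcn_tflops(18, 0.8)   # ~1.84 TFLOPS
xb1 = gcn_tflops(12, 0.8)   # ~1.23 TFLOPS
print(f"PS4 {ps4:.2f} vs XB1 {xb1:.2f} TFLOPS ({ps4 / xb1:.0%} raw ratio)")
```

Raw ALU throughput is only one axis, of course, which is exactly why the efficiency arguments above matter.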
 
It gets worse: probably due to the OS, games only have access to 90% of the GPU.

1) Running: The game is loaded in memory and is fully running. The game has full access to the reserved system resources, which are six CPU cores, 90 percent of GPU processing power, and 5 GB of memory. The game is rendering full-screen and the user can interact with it.

2) Constrained: The game is loaded in memory and is still running, but it has limited access to the system resources. The game is not rendering full screen in this state; it either is rendering to a reduced area of the screen or is not visible at all. The user cannot interact with the game in this state. System resource limits in this state are four CPUs, 5 GB of memory, and 45 percent of GPU power if the game is rendering to a reduced area of the screen, or 10 percent of GPU power if the game is not visible.

3) Suspended: The game is loaded in memory but is not running, meaning that the system has stopped scheduling all threads in the game process. The game has no access to CPUs or to the GPU processing power, but it still has the same 5 GB of memory reserved.

4) NotRunning: The game is not loaded in memory and is not running, and the system has no game-history information about the previous execution of the game. A game would be in NotRunning state in any of these three scenarios:
-The game has not been run since the console started.
-The game crashed during the last execution.
-The game did not properly handle the suspend process during the last execution and was forced to exit by the system.

5) Terminated: The game is not loaded in memory and is not running, which is identical to the NotRunning state in terms of system resource usage. Terminated state, however, indicates that during the last execution of the game, the game process was successfully suspended and then terminated by the system. This means that the game had a chance to save its state as it was suspended; the next time the game is activated, it can load this previous state data and continue the user experience from the same position. A game, for example, can start from the same level and position in the player’s last session without showing any front-end menu.

http://kotaku.com/the-five-possible-states-of-xbox-one-games-are-strangel-509597078
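The five lifecycle states and their resource budgets condense into a small lookup table; the dict below is just an illustrative sketch of the figures quoted above (field names are mine, not any real API):

```python
# Resource budgets per Xbox One game state, per the quoted documentation.
# Constrained drops to 10% GPU (not 45%) when the game is not visible at all.
GAME_STATES = {
    "Running":     {"cpu_cores": 6, "gpu_pct": 90, "mem_gb": 5, "in_memory": True},
    "Constrained": {"cpu_cores": 4, "gpu_pct": 45, "mem_gb": 5, "in_memory": True},
    "Suspended":   {"cpu_cores": 0, "gpu_pct": 0,  "mem_gb": 5, "in_memory": True},
    "NotRunning":  {"cpu_cores": 0, "gpu_pct": 0,  "mem_gb": 0, "in_memory": False},
    "Terminated":  {"cpu_cores": 0, "gpu_pct": 0,  "mem_gb": 0, "in_memory": False},
}
```

Note the 5 GB reservation survives Suspended, which is what lets a suspended game resume instantly instead of reloading.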
 

I personally think that while the PS4 may have better specs on paper, differences in the level of detail the two machines can produce will probably be imperceptible to everybody but the most anal fanboys. Games are at a point where adding more polygons and detail doesn't really give anything back, other than for playing Top Trumps with.

For at least the first half of the life of these consoles we'll not see anything taking full advantage of either of their power.

I'll be getting both anyway, as arguing about which is better is pretty pointless unless you're trying to justify why somebody loves one more than the other :p
 

Differences in frame rate however are easily perceptible and the PS4 running the same/similar code to XB1 will have the higher frame rate.
 