PS3, only 256MB of RAM?

The PS3 has the Cell, which is superb, but it only has roughly a 7800 GTX in terms of graphics power, so even with a good CPU surely the GPU will restrict it? Putting a quad-core Intel with a 7800 GTX in a PC doesn't make the in-game graphics any better than the same game on a C2D.

What I'm trying to say is that surely the PS3 will be restricted by the GPU it has in it; if it had an 8800 GTX, for example, it would be something else.
 
Personally I don't care so much about graphics. The current gen of graphics is very good IMO; I want better physics and AI in games, and this is what the Cell should bring to new levels.
 
wannabedamned said:
Games are becoming more complex though, aren't they, which warrants something that can handle the complexity. The GPU is important, I admit, but CPU and RAM are becoming just as important in games.

I still see them as equal..
Looking at Beyond3D's technical console forum, it's clear that in the long term developers are using the Cell to compensate for the deficiencies of the GPU in the PS3. The net effect is that this eats into the available CPU power, negating some of the Cell advantage...
And 360 developers are saying that with the more flexible GPU on the 360 they can do some of the stuff the Cell is good at, e.g. physics, so negating some of the GPU's advantage...

I do think we should see a slight difference in games on each console that are maxing out their power, but that doesn't make one better than the other, does it? It just means two superb games for slightly different reasons...

Personally I don't care so much about graphics. The current gen of graphics is very good IMO; I want better physics and AI in games, and this is what the Cell should bring to new levels.
AI doesn't play to the strengths of the Cell; a lot of AI is done with integer algorithms, which are a strength of the 360...
Physics should be doable better on the PS3, but the problem, as mentioned, is that the Cell is 'needed' to augment the graphics... and the 360 can do physics on the GPU if really needed...

TBH, games are improving a lot on the consoles; playing GRAW2 (a nice multiplatform title) shows that they are delivering better physics and AI already... I for one don't really care about willy-waving over 360/PS3 specifications and possibilities, I'm just happy to play these next-gen games and finally feel 'happy' that AI, physics and graphics are all at a level that makes for a really good immersive experience...
 
saffyre said:
Personally I don't care so much about graphics. The current gen of graphics is very good IMO; I want better physics and AI in games, and this is what the Cell should bring to new levels.


That's a good point.
 
Jabbs said:
Putting a quad-core Intel with a 7800 GTX in a PC doesn't make the in-game graphics any better than the same game on a C2D.

It doesn't make the graphics any better, but performance will increase.
A faster CPU will reduce any CPU-side bottleneck holding the graphics back.
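A rough way to picture that bottleneck argument is below. This is purely an illustrative toy model, not real profiling data; all the millisecond figures are made up.

```python
# Toy model of a frame-time bottleneck: the frame is finished only when
# both the CPU and GPU work for that frame is done, so the slower side
# sets the frame rate. All numbers below are made-up examples.

def fps(cpu_ms, gpu_ms):
    """Frames per second when CPU and GPU work largely in parallel."""
    frame_ms = max(cpu_ms, gpu_ms)   # the slower unit limits the frame
    return 1000.0 / frame_ms

print(fps(cpu_ms=20, gpu_ms=30))   # GPU-bound: ~33 fps
print(fps(cpu_ms=10, gpu_ms=30))   # faster CPU, still GPU-bound: ~33 fps
print(fps(cpu_ms=40, gpu_ms=30))   # CPU-bound: here a faster CPU would help
```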
 
Jabbs said:
The PS3 has the Cell, which is superb, but it only has roughly a 7800 GTX in terms of graphics power, so even with a good CPU surely the GPU will restrict it? Putting a quad-core Intel with a 7800 GTX in a PC doesn't make the in-game graphics any better than the same game on a C2D.

What I'm trying to say is that surely the PS3 will be restricted by the GPU it has in it; if it had an 8800 GTX, for example, it would be something else.

On the other hand, the reason most of the really graphically impressive games on the original Xbox ran at 25/30fps (despite its GPU being far superior to the PS2's Graphics Synthesizer) was the Intel P3 CPU limiting it.
 
msmalls74 said:
The bus bandwidth for the Cell is far greater than that of the 360 and its CPU. Both GPUs' main memory runs at about the same speed, roughly 22.5 GB/s, but the Xbox 360 has a small amount of on-die memory, which can be used for things like AA and AF, and that runs at 256 GB/s. The system RAM (i.e. not the video RAM) on the PS3 runs at 3.2GHz, compared to 700MHz for the Xbox 360.

3.2GHz memory? What's that? Some sort of super DDR4 stuff?
GDDR4 only goes up to about 2.3GHz, so I doubt the PS3 has memory at 3.2GHz.
Or did you mean something else?
 
Kamakazie! said:
3.2GHz memory? What's that? Some sort of super DDR4 stuff?
GDDR4 only goes up to about 2.3GHz, so I doubt the PS3 has memory at 3.2GHz.
Or did you mean something else?

It uses Rambus technology; I think it was called Rambus Yellowstone. It's ODR, octal data rate, i.e. eight times the speed of the external clock, whereas the GPU and the Xbox 360 use DDR-type memory, i.e. double data rate.
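For what it's worth, here's how the octal-data-rate figure works out. The 400MHz external clock below is my assumption based on the usual published XDR numbers, so treat it as illustrative:

```python
# XDR (Rambus "Yellowstone") octal data rate: 8 transfers per external clock.
# DDR-style memory does 2 transfers per clock. The 400MHz base clock below
# is an assumption based on the commonly quoted XDR figure.

xdr_clock_mhz = 400            # assumed external clock
xdr_transfers_per_clock = 8    # ODR = octal data rate
ddr_transfers_per_clock = 2    # DDR = double data rate, for comparison

xdr_per_pin_gbit = xdr_clock_mhz * xdr_transfers_per_clock / 1000
print(xdr_per_pin_gbit)        # 3.2 Gbit/s per pin, matching the PS3 figure
```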
 
sunlitsix said:

That is some impressive stuff.
I presume it's only really economical at lower capacities? Otherwise why aren't graphics cards implementing something like this?


EDIT: Also, wouldn't this have been better used with the GPU rather than the CPU? Or are there latency issues or some such problem that make it unhelpful with a GPU?
 
Kamakazie! said:
That is some impressive stuff.
I presume it's only really economical at lower capacities? Otherwise why aren't graphics cards implementing something like this?


EDIT: Also, wouldn't this have been better used with the GPU rather than the CPU? Or are there latency issues or some such problem that make it unhelpful with a GPU?

I always wondered that as well; maybe it was so as not to bottleneck the Cell. As for latency, the spec quotes 1.25/2.0/2.5/3.33 ns request packets, whatever that means.
Maybe this is part of the reason: in 2004 it was revealed that Infineon, Hynix, Samsung, Micron and Elpida had entered a price-fixing scheme against Rambus during 2001 to force RDRAM out of the market. The offending parties pleaded guilty and were fined afterwards.

The motive for the price-fixing was never officially established, but one theory is that at the introduction of the Intel 820, Intel had decided to use RDRAM exclusively on future products, and the plan failed because of RDRAM's high price. Rambus officials tried to recoup the losses by filing lawsuits against major memory manufacturers starting in 2000, claiming SDRAM, DDR SDRAM (and later DDR2, GDDR2 and GDDR3 SDRAM) as Rambus's intellectual property and forcing memory makers into paying royalties. This gave the major memory manufacturers a motive to drive Rambus and its RDRAM technology out of the market.
 
msmalls74 said:
I always wondered that as well; maybe it was so as not to bottleneck the Cell. As for latency, the spec quotes 1.25/2.0/2.5/3.33 ns request packets, whatever that means.
Maybe this is part of the reason: in 2004 it was revealed that Infineon, Hynix, Samsung, Micron and Elpida had entered a price-fixing scheme against Rambus during 2001 to force RDRAM out of the market. The offending parties pleaded guilty and were fined afterwards.

The motive for the price-fixing was never officially established, but one theory is that at the introduction of the Intel 820, Intel had decided to use RDRAM exclusively on future products, and the plan failed because of RDRAM's high price. Rambus officials tried to recoup the losses by filing lawsuits against major memory manufacturers starting in 2000, claiming SDRAM, DDR SDRAM (and later DDR2, GDDR2 and GDDR3 SDRAM) as Rambus's intellectual property and forcing memory makers into paying royalties. This gave the major memory manufacturers a motive to drive Rambus and its RDRAM technology out of the market.

The Cell should be fine with the usual DDR2 speeds, I would have thought. GPUs are usually the bandwidth-hungry things.
Maybe, but RDRAM was always prohibitively expensive and that is why it failed. As far as I can remember it didn't offer enough of a performance boost over DDR to justify the price hike.
 
Kamakazie! said:
The Cell should be fine with the usual DDR2 speeds, I would have thought. GPUs are usually the bandwidth-hungry things.
Maybe, but RDRAM was always prohibitively expensive and that is why it failed. As far as I can remember it didn't offer enough of a performance boost over DDR to justify the price hike.

As it says, it was a price-fixing scam done to take Rambus out of the market! The original Rambus tech had latency issues, which have since been removed.
 
Chris1712 said:
So the same as dual-channel DDR400 was four years ago? N1 Sony.

DDR2-400 is 400Mb/s, that's why it's called DDR2-400! A module (i.e. a group of memory chips, normally 16 devices) will give a bandwidth of 6.4GB/s, whereas a single XDR chip will give a bandwidth of 6.4GB/s. The total aggregate bandwidth of the PS3 is over 65GB/sec.
 
msmalls74 said:
DDR2-400 is 400Mb/s, that's why it's called DDR2-400! A module (i.e. a group of memory chips, normally 16 devices) will give a bandwidth of 6.4GB/s, whereas a single XDR chip will give a bandwidth of 6.4GB/s. The total aggregate bandwidth of the PS3 is over 65GB/sec.

I think you'll find it was called DDR400 because of the clock speed, and referred to as PC3200 after the bandwidth (3.2GB/s); when used in dual channel (a motherboard feature) it could reach a peak bandwidth of 6.4GB/s :rolleyes:.

Proof Alert :eek:

[image: memory bandwidth comparison chart]
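A quick back-of-the-envelope check on those DDR400 numbers, assuming a standard 64-bit module (the arithmetic is the same as in the chart above):

```python
# DDR400 / PC3200 bandwidth, worked out from first principles.
# DDR400 = 200MHz bus clock, 2 transfers per clock = 400 MT/s per pin.

transfers_per_sec = 400e6      # 400 million transfers/s per pin
module_width_bytes = 8         # standard 64-bit DIMM = 8 bytes per transfer

single_channel = transfers_per_sec * module_width_bytes / 1e9
dual_channel = single_channel * 2

print(single_channel)          # 3.2 GB/s -> hence "PC3200"
print(dual_channel)            # 6.4 GB/s peak in dual channel
```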
 
Chris1712 said:
I think you'll find it was called DDR400 because of the clock speed, and referred to as PC3200 after the bandwidth (3.2GB/s); when used in dual channel (a motherboard feature) it could reach a peak bandwidth of 6.4GB/s :rolleyes:.

Proof Alert :eek:

[image: memory bandwidth comparison chart]


Agreed about DDR400 being the MHz; didn't know about the PC3200.
However, I'm pretty sure the peak bandwidth would refer to the total memory module.
I'm not sure what the XDR figure is referring to when it says "a single, 2-byte wide" module, but I presume this is the available bandwidth of a single chip?!
 
Jabbs said:
The PS3 has the Cell, which is superb, but it only has roughly a 7800 GTX in terms of graphics power, so even with a good CPU surely the GPU will restrict it? Putting a quad-core Intel with a 7800 GTX in a PC doesn't make the in-game graphics any better than the same game on a C2D.

What I'm trying to say is that surely the PS3 will be restricted by the GPU it has in it; if it had an 8800 GTX, for example, it would be something else.

I can't see it "only having a 7800 GTX" holding the Cell back :) Plus 98% of games will be 720p for a good while, and judging from my GTX, which so far plays pretty much anything maxed out at 1920x1200, I don't think this is a problem ;)

Anyway, I'm pretty sure I saw somewhere that the Cell can do some GPU jobs as well, as it was originally designed to operate without a GPU.
 
This is just a copy and paste from another site; it's easier than me trying to put it into my own words. It's where I learnt about the Cell/XDR bandwidth benefits :)


The Cell will use high-speed XDR RAM for memory. A Cell has a memory bandwidth of 25.6 Gigabytes per second, which is considerably higher than any PC but necessary as the SPEs will eat as much memory bandwidth as they can get. Even given this, the buses are not large (72 data pins in total); this is important as it keeps chip manufacturing costs down. The Cell runs its memory interface at 3.2 Gigabits per second per pin, though memory in production now is already capable of higher speeds than this. XDR is designed to scale to 6.4 Gigabits per second, so memory bandwidth has the potential to double.
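Those figures hang together if you assume 64 of the 72 quoted pins actually carry data; that split is my assumption, not something the article states:

```python
# Cell/XDR main memory bandwidth from the per-pin rate.
# Assumes 64 data pins out of the quoted 72-pin total (assumption).

per_pin_gbit = 3.2             # Gbit/s per pin, as quoted
data_pins = 64                 # assumed number of data-carrying pins

bandwidth_gbyte = per_pin_gbit * data_pins / 8
print(bandwidth_gbyte)         # 25.6 GB/s, matching the quoted figure

# If XDR scales to 6.4 Gbit/s per pin as the article says, this doubles:
print(6.4 * data_pins / 8)     # 51.2 GB/s
```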



Memory Capacity

The total memory that can be attached is variable, as the XDR interface is configurable; Rambus's site shows how 1GB can be connected [Xdimm]. Theoretically an individual Cell can be attached to many gigabytes of memory depending on the density of the RAM chips in use, and this may involve using one pin per physical memory chip, which reduces bandwidth.



SPEs may need to access memory from different Cells, especially if a long stream is set up, so the Cells also include a high-speed interconnect. This consists of a set of 12 x 8-bit buses which run at 6.4 Gigabits per second per wire (76.8 Gigabytes per second total). The buses are directional, with 7 going out and 5 going in.
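The 76.8 GB/s total checks out from the numbers given:

```python
# Cell-to-Cell interconnect: 12 buses, each 8 bits wide, 6.4 Gbit/s per wire.

buses = 12
bits_per_bus = 8
gbit_per_wire = 6.4

total_gbit = buses * bits_per_bus * gbit_per_wire
print(total_gbit)        # 614.4 Gbit/s in total
print(total_gbit / 8)    # 76.8 GB/s, as quoted (7 buses out, 5 in)
```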



The current system allows 2 Cells to be connected glueless (i.e. without additional chips). Connecting more Cells requires an additional chip. This is different from the patent, which specified that 4 Cells could be directly connected and a further 4 connected via a switch.



IBM has announced a blade system made up of a series of dual-Cell "workstations". The system is rated at up to 16 TeraFLOPS, which will require 64 Cells.
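Dividing those numbers out gives the per-Cell peak rating implied by the article, assuming the 16 TFLOPS is spread evenly across the Cells (single precision, peak):

```python
# Implied per-Cell throughput of the IBM blade system described above.

system_tflops = 16      # quoted peak for the full blade system
cells = 64              # quoted number of Cells needed

per_cell_gflops = system_tflops * 1000 / cells
print(per_cell_gflops)  # 250 GFLOPS per Cell, peak, by these quoted figures
```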
 