About 10% faster actually: http://www.madshrimps.be/vbulletin/f22/amd-radeon-hd-5870-hd-5850-performance-chart-66465/
Why is the 8800GT 7% quicker than a 9800GT according to that graph?

Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
9600GT?
If these specs turn out to be accurate, or very close to it, which GPU will you buy: ATI or Nvidia?
I do genetic algorithm research on CUDA, and I'm waiting anxiously for CUDA 3.0 on Fermi. Unlike most, I'm not too bothered (yet) about the massive double-precision performance increase; what interests me is concurrent execution of different kernels and the improved caching/memory behaviour.
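For anyone curious, the CUDA mechanism for the concurrent-kernel part is streams: work submitted to different streams is allowed to overlap, and Fermi is supposed to be the first architecture that can actually run separate kernels at the same time. A minimal sketch (the kernels and data here are placeholders, not my actual GA code):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Placeholder "mutation" kernel: one gene per thread.
__global__ void mutate(float *pop, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pop[i] += 0.01f;
}

// Placeholder "fitness" kernel: one individual per thread.
__global__ void evaluate(const float *pop, float *fitness, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) fitness[i] = -pop[i] * pop[i];
}

int main() {
    const int n = 1 << 20;
    float *popA, *popB, *fitB;
    cudaMalloc(&popA, n * sizeof(float));
    cudaMalloc(&popB, n * sizeof(float));
    cudaMalloc(&fitB, n * sizeof(float));

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    // Two independent pieces of work in different streams: pre-Fermi GPUs
    // serialise these launches, Fermi-class hardware may overlap them.
    mutate<<<(n + 255) / 256, 256, 0, s1>>>(popA, n);
    evaluate<<<(n + 255) / 256, 256, 0, s2>>>(popB, fitB, n);

    cudaDeviceSynchronize();
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(popA);
    cudaFree(popB);
    cudaFree(fitB);
    printf("done\n");
    return 0;
}
```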
Not sure if 1600MHz shaders is right - though that was originally the target... there seemed to be a change recently.
EDIT: If that's right and they really have bumped the 360 to 480SP, it's going to be a good margin faster than the 5870...
At 384SP the 360GTX would have been competitive with the 5870 - either very slightly slower or very slightly faster... but with 480SP it would be quite a bit faster.
The 9600GT was a 550-million transistor part and the 9800GTX a 750-million transistor part, so other than the 9600GT NOT being a salvaged die but a smaller core with the right number of shaders... oh wait, completely and utterly different.
If they'd made 9600GTs out of 9800GTX cores it would have been just as bad, but the point is they didn't. With the 9600GT they cut so much off, yet it still offered very good comparative performance (because the 9800GTX was heavily limited somewhere; I can't remember the exact reasoning).
Fact is, the 9600GT, supposedly midrange, was a little too close to top-end performance from what I recall, but it was a midrange GPU design - really just a cut-down 8800GTX - and that's fine: it was smaller and therefore had fewer shaders. The 350GTX could be right up on the 360GTX's performance for all I care, and I don't mind the name - call it a 220GT or a 495GTX ultra mega uber. The problem is when you make a 256-shader part out of a core twice that size, because you can't sell it for the same price as a full 512-shader part. Why is that bad? The cheaper that 256-shader card has to be sold for, the more the higher-end cores have to be sold for to even things out.
If each core that works perfectly costs $100 out of the oven, and you can only sell half of them at $50 because they only half work, then to BREAK EVEN you HAVE TO raise the price of the cores that do work. The fact that so many expensive cores look set to be sold much cheaper just pushes the 380GTX's price higher in the end, which is bad for everyone - AMD fans, Nvidia fans, gaming fans. If they were making 256-shader cards out of a 1.8-2 billion transistor core that only had 256 shaders, that would be fantastic; there might even be a price war started by a well-priced midrange card. A core with only 256 shaders, sold with all its shaders enabled, doesn't affect the price of wafers full of larger cores; this one would, dramatically.
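Quick sketch of that break-even maths, using my made-up $100/$50 figures from above (illustrative only, obviously not real foundry costs):

```cpp
#include <cstdio>

int main() {
    const double die_cost      = 100.0; // cost to produce any die, good or not
    const double salvage_share = 0.5;   // fraction of dies that only half-work
    const double salvage_price = 50.0;  // what a half-working die can be sold for

    // Break-even: average revenue per die must equal average cost per die.
    // salvage_share * salvage_price + (1 - salvage_share) * good_price = die_cost
    double good_price = (die_cost - salvage_share * salvage_price)
                        / (1.0 - salvage_share);

    printf("Fully working dies must sell for at least $%.0f to break even\n",
           good_price);  // prints $150 with these figures
    return 0;
}
```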
I already have a 5850; if Nvidia release something with the specs as listed, it will still be better value to go AMD. A second 5850, even at £200, would make a cheaper and faster setup than a 380GTX, which I can't see them launching at under £450 - they might even launch it at £550+ just so no one actually wants one. It's a great way to do business: release a card you can barely make, but price it where no one wants it, and no one really notices you can't make them because there are still a few around.
If the 360GTX turns out to be great at circa £300, then the 5850 will drop to £150 to undercut it, and two in CrossFire will still be faster by a HUGE margin. If a 350GTX is in the works to salvage yields, it won't be very fast anyway, and it will likely be beaten on price by 5770s in CrossFire as well as beaten soundly on performance.
I have little doubt that Nvidia can make a faster single-GPU card than AMD; AMD's design policy is not to chase the performance crown, while Nvidia's is. The problem is twofold: do you need that much extra juice, and can you get the same performance with two cheaper cards - OK, threefold: can they even sell it? The other big issue is that the only games that really need a 5970's level of power, for now, are games at ridiculous resolutions, which Eyefinity gives you. Then again, I guess you can say Nvidia has the 3D screens, so getting 120fps worth of performance is fairly crucial there as well.
The 5870 already has both of those features: it can execute different programs on different clusters, and it has increased caching over the last generation. This is the problem: AMD really don't want to invest millions marketing to a market that's worth less than $50 million in turnover to them, yet they basically have every GPGPU feature Nvidia are going to have with Fermi, while pushing arguably the better platform (OpenCL). The thing is, Nvidia have nowhere else to expand, so they are spending heavily to get into a market that was only worth $88 million in revenue - not profit - last year.
Pretty much everything Fermi can do, the 5870 can already do. I'm not a good enough coder to know this for sure, though: for gaming, AMD's 4+1 architecture is VERY hard to get the full power out of - that's why its peak theoretical throughput dwarfs Nvidia's, yet most of the time it's using about half its capacity. In non-gaming GPGPU work I'm not sure whether it's easier or harder to use all 4+1 parts of a shader. If a given GPGPU program can, the 5870 WILL spank Nvidia; if it can't, it won't.
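Rough numbers on what I mean by "half its capacity" (the 1600 shaders and 850MHz are the 5870's published specs; the 50% slot fill is just my ballpark guess, not a measurement):

```cpp
#include <cstdio>

int main() {
    const double shaders       = 1600;  // 320 VLIW units x 5 ALUs each
    const double clock_ghz     = 0.85;  // 850MHz core clock
    const double flops_per_alu = 2;     // one multiply-add per cycle

    double peak     = shaders * clock_ghz * flops_per_alu; // theoretical GFLOPS
    double halffill = peak * 0.5;                          // ~half the VLIW slots filled

    printf("Peak:      %.0f GFLOPS\n", peak);     // ~2720
    printf("Half-fill: %.0f GFLOPS\n", halffill); // ~1360
    return 0;
}
```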
The funny part - well, not funny, almost sad - is that Nvidia will spend millions, maybe hundreds of millions, expanding a tiny market over the next few years, while Intel and AMD simply integrate massive FPU monsters onto the CPU die. In the not-too-distant future we'll be at the point where there's an integer-heavy line of CPUs with, say, 24 integer cores and 8 FPU cores, and an FPU-monster version with 4 integer cores and 28 FPU cores, both able to fairly seamlessly (software-wise) add extra discrete GPGPUs to share the workload and offer a GPGPU platform top to bottom to the big companies. That means reliability, compatibility, complete flexibility and better pricing, with both companies producing all their own parts and cutting out the middle man in manufacturing to decrease prices. So Nvidia are set to spend money turning this tiny market into a bigger one, for Intel and AMD to step in and take it easily. AMD are keeping up with features, improving generation to generation, and when the market is big enough that the extra marketing and spending means profits (which it doesn't now), they'll go for it.
I think the 5870 doesn't have ECC memory, which is apparently a biggie for the scientific bods.
released yesterday at Tech ARP
http://www.techpowerup.com/110471/N...X_300_Series_to_Clinch_Performance_Crown.html
Old news - and even Fudzilla has agreed these are fake, and Fudo <3's Nvidia.
http://www.fudzilla.com/content/view/16843/1/
"Evergreen family of GPU chips comes with a fully-optimized memory controller for the GDDR5, and built-in ECC function into every GDDR5 chip means you can "drive the cells" to the yet unseen points in the world of DDR SDRAM."
Source: Bright Side of News
Memory Controller
The Radeon HD 5800 series memory controller has also received some optimizations. ATI has now turned on GDDR5’s Error Detection Code (EDC) similar to ECC in DRAM. This allows GDDR5 modules to reach higher frequencies while reducing errors. This could help make video cards as a whole more stable and robust, especially as you clock memory frequencies higher.
One interesting result of enabling EDC, however, is that the consequences of overclocking have changed. In the past, when you overclocked memory on a video card you would ramp up the frequency until the card started throwing out artifacts and then crashed. Now, when you raise the memory frequency, performance increases until it peaks and EDC kicks in, correcting errors caused by the frequency being too high. Because EDC fixes these errors, performance actually starts to degrade as you push the frequency further and EDC has to work harder. There won't be as many flashing polygons or artifacts any more, and no outright crashing from overclocking memory; you will only see a performance decline. The goal is therefore to overclock memory until performance starts declining, then back off to the point just before the decline - that is your maximum stable memory overclock.
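In other words, the tuning loop becomes something like this (measure_fps here is just a toy stand-in for whatever benchmark and overclocking tool you actually use, and the clock numbers are examples, not real limits):

```cpp
#include <cstdio>

// Hypothetical model of "fps at a given memory clock": gains up to a point,
// then losses once EDC has to correct errors. Stands in for a real benchmark.
double measure_fps(int mem_mhz) {
    if (mem_mhz <= 1350) return 60.0 + (mem_mhz - 1200) * 0.05;
    return 67.5 - (mem_mhz - 1350) * 0.10;
}

int main() {
    int clock = 1200;               // stock memory clock (example value)
    const int step = 25;            // raise in small increments
    double best = measure_fps(clock);

    // With EDC you no longer watch for artifacts or crashes; you stop when
    // the frame rate starts to drop, then back off one step.
    while (true) {
        double fps = measure_fps(clock + step);
        if (fps <= best) break;     // performance declined: past the sweet spot
        clock += step;
        best = fps;
    }
    printf("Best memory clock: %d MHz (%.1f fps)\n", clock, best);
    return 0;
}
```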
If that report is accurate then it's weird that more people don't realise this.
These specs fly in the face of what has already been confirmed by Nvidia. Fermi will have a 384-bit memory bus, not 512-bit. I wouldn't take anything from this list at all.
Indeed. Thanks to these threads my ignore list is looking pretty healthy
I should have known better than to expect reasoned discussion about up-and-coming tech.
According to that, the 395 breaks ATX limits by being over 300W TDP - strange.