
Tech ARP updated GTX 380, 360, 395 SPECs!

Well, based on performance, if these are the real specs (huge "if" there) then they would obviously be top dog, so from a performance point of view it would be Nvidia. However, whether the specs really are this good or (more likely) lower, I can imagine the price will be absurd, so ATi will probably be my next GPU purchase.

How much of a rough estimate are we looking at for the cost?
 
As already said, I'm inclined not to believe these anyway, but if you were to assume they're genuine then:
- you can't use the numbers to calculate performance; it's far too complex for that
- as such, there's certainly no way you can work out the price, other than estimating from Nvidia's pricing history

I still find it strange that, at a time when ATI/AMD and Intel are trying to reduce power requirements and make more efficient, quieter, cooler chips, Nvidia would be bucking the trend and releasing a card outside the ATX specification.
 
These specs fly in the face of what has already been confirmed by Nvidia: Fermi will have a 384-bit memory bus, not 512-bit. I wouldn't take anything from this list at all.
 
We all know the GTX 360 will rival the 5870 (as the 4870 did the GTX 260), the GTX 380 will beat the 5870 but fall short of the 5970, and the GTX 395 will then hold the crown.
ATI will then release other cards which beat Nvidia on price/performance.
 
We all know the GTX 360 will rival the 5870 (as the 4870 did the GTX 260), the GTX 380 will beat the 5870 but fall short of the 5970, and the GTX 395 will then hold the crown.
ATI will then release other cards which beat Nvidia on price/performance.

Can you show me your source for this, please?
 
Interesting, if true, that the GTX 360 is slightly slower than a GTX 295 and hence slower than a 5870.

The GTX 380 will obviously be faster, but on those figures I doubt it will beat a 5970.

The price is going to be the key.

Umm, a 295 is 20% faster than a 5870, so I don't know where you're getting that from.
 
We all know the GTX 360 will rival the 5870 (as the 4870 did the GTX 260), the GTX 380 will beat the 5870 but fall short of the 5970, and the GTX 395 will then hold the crown.
ATI will then release other cards which beat Nvidia on price/performance.
That may be Nvidia's plan... maybe they're even hoping for better. But as it stands, we'll have to wait and see what the latest silicon brings :)
 
I do genetic algorithm research on CUDA, and I'm waiting anxiously for CUDA 3.0 on Fermi. Unlike most, I'm not too bothered (yet) about the massive DP performance increase; I'm more interested in the concurrent execution of different kernels and the improved caching/memory behaviour.
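For anyone wondering why GAs map so well onto GPUs: the per-generation fitness evaluation is embarrassingly parallel, so that loop is exactly what becomes a kernel launch. A minimal plain-Python sketch (illustrative only - the function names and parameters are my own, not from any CUDA SDK):

```python
import random

def evolve(fitness, pop_size=64, genome_len=16, generations=50, seed=0):
    """Minimal generational GA: tournament selection, one-point
    crossover, and bit-flip mutation over bit-string genomes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # This scoring loop is the embarrassingly parallel step: each
        # individual is evaluated independently, so on a GPU it becomes
        # one kernel launch per generation (one thread per individual).
        scores = [fitness(ind) for ind in pop]

        def tournament():
            a, b = rng.randrange(pop_size), rng.randrange(pop_size)
            return pop[a] if scores[a] >= scores[b] else pop[b]

        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, genome_len)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.05:                 # bit-flip mutation
                i = rng.randrange(genome_len)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    scores = [fitness(ind) for ind in pop]
    best = max(range(pop_size), key=lambda i: scores[i])
    return scores[best], pop[best]

# OneMax toy problem: the fittest genome is all 1-bits
score, genome = evolve(sum)
```

The selection/crossover/mutation steps can stay on the host or move to the device too, but it's the fitness evaluation that dominates in most real GA workloads - which is why Fermi's memory hierarchy matters so much here.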

I was seriously looking at Larrabee, hoping it would resolve many of the headaches of CUDA programming. Unfortunately Intel have pulled it, for 2010 at least. Larrabee on 22nm might be magnificent, but what will the graphics manufacturers have at the same scale? I worked for Intel years ago; they are very cost-conscious, and I wonder whether they're reluctant to invest massively outside their primary realm. Will they just keep their investment in the on-board graphics market instead?

ATI's Close to the Metal has nothing going for it realistically, so I'm drooling in anticipation of Fermi. Now, this has nothing to do with gaming, so I can understand most people getting disappointed with NV's nonsense - but for anyone in scientific research there's nothing to compare.

My only concern at the moment is time: waiting for a decent refresh of Fermi rather than jumping on the first release, which by all accounts looks rather untrustworthy. Larrabee was scheduled for H1'10, so I suppose I can wait six months - I really wish ATI/AMD could get their GPGPU act organised.

I used to weary of Charlie D's rage against the green machine, but I've added him to my favourites lately - whatever about his personal opinions, he does tend towards semi-accuracy most of the time.
 
ATI's Close to the Metal has nothing going for it realistically, so I'm drooling in anticipation of Fermi. Now, this has nothing to do with gaming, so I can understand most people getting disappointed with NV's nonsense - but for anyone in scientific research there's nothing to compare.

Whilst I see your point, ATi dropped Close to Metal ages ago with the introduction of the Stream SDK. They now support several GPGPU languages that suit different needs (CAL, Brook+, DirectCompute and OpenCL) - Brook+ offers a higher level of abstraction than OpenCL/CUDA, while CAL is essentially making abstracted calls directly to the graphics hardware.
 
I do genetic algorithm research on CUDA, and I'm waiting anxiously for CUDA 3.0 on Fermi. Unlike most, I'm not too bothered (yet) about their massive DP performance increase, rather the simultaneous different kernel executions and increased caching/memory behaviour.


Exact same here - I'm also looking at genetic algorithms with CUDA. Fermi could potentially make our 400-core cluster redundant!
 
but for anyone in scientific research there's nothing to compare

Just reiterating lightnix's point: there is OpenCL, which is coming on in leaps and bounds. Nvidia stole a march by getting CUDA to market first, but I suspect that within the next five years the advantages OpenCL offers will see it surpass CUDA - pure speculation, mind, but ATI do have something to compare.
 