
Fermi NDA ends today

^^^ Isn't this the GTX 360 we've been seeing in these latest videos that have been getting posted, like the ray tracing one and the Far Cry 2 one??
 
The majority of vids are recorded with an HD camera pointed directly at the screen, so it's got nothing to do with the PC causing lag. Check em out. ;)

Said vids were recorded with cameras that do 25 or 30 fps recording. Doubt you will understand what that means, though, when games are rendering more frames per second than the HD camera can capture. Even if someone used a 60 fps camera it's not gonna be much use if the game is rendering above 60 fps.
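
To put a rough number on that (a throwaway sketch of my own, not anything measured from the actual videos): whatever the game is rendering, a fixed-rate camera can only ever show min(camera fps, game fps) distinct frames.

```python
# Rough sketch: how many distinct game frames a fixed-rate camera can capture.
def distinct_frames_captured(game_fps: float, camera_fps: float, seconds: float = 1.0) -> int:
    """Count unique game frames visible across the camera's captures."""
    frame_time = 1.0 / game_fps
    seen = set()
    for i in range(int(seconds * camera_fps)):
        t = i / camera_fps
        seen.add(int(t / frame_time))  # index of the game frame on screen at time t
    return len(seen)

# Example figures only (84 fps matches the Far Cry 2 video mentioned later in the thread):
print(distinct_frames_captured(game_fps=84, camera_fps=30))  # 30
print(distinct_frames_captured(game_fps=84, camera_fps=60))  # 60
print(distinct_frames_captured(game_fps=25, camera_fps=30))  # 25
```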
 
Never had any stuttering using SLI; 6800s, 7800s and 8800s were all fine. I thought the micro-stuttering thing was a major issue with the ATI 3800s? I'm sure most of it is user error tbh.

Looking forward to playing RAGE in 3D vision....
 
Some interesting stuff today then :) We know that Fermi is a killer when it comes to tessellation, and that Nvidia still hasn't decided on final clock speeds. Not as much performance info as I would have liked, but to be honest, about as much as I was expecting.

I think the most interesting thing was the description of the new "PolyMorph" engine, which takes geometry setup inside the SMs. Aside from increasing geometry throughput by 8x over GT200, it also moves more of the logic inside the modular parts of the chip. This will make it slightly easier to scale the chip down to smaller mid-range and low-end parts (although on the flip side the presence of a large global cache will make things more difficult). Anyway, given the massive geometry performance and the way that Fermi is architected for general compute type stuff, it comes as no surprise that it performs tessellation so well (as demonstrated by the Unigine benchmark that Nvidia chose to release).
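
As a rough way of seeing why per-SM geometry setup matters for throughput (my own back-of-envelope model; the rates and clocks below are made up purely to show the shape of the claim, not Nvidia's published figures):

```python
# Back-of-envelope only: made-up rates, not Nvidia's published numbers.
def single_frontend_rate(tris_per_clock: float, clock_mhz: float) -> float:
    """One fixed-function setup unit limits the whole chip to its rate."""
    return tris_per_clock * clock_mhz * 1e6

def per_sm_rate(num_sms: int, tris_per_clock_each: float, clock_mhz: float) -> float:
    """With setup inside each SM, throughput grows with the SM count."""
    return num_sms * tris_per_clock_each * clock_mhz * 1e6

old_style = single_frontend_rate(tris_per_clock=1.0, clock_mhz=600)
new_style = per_sm_rate(num_sms=16, tris_per_clock_each=0.5, clock_mhz=600)
print(new_style / old_style)  # 8.0 with these assumed figures
```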

As for real performance, they can quote whatever stats they like, but at the moment the only real benchmark information we have is from this video: http://www.youtube.com/watch?v=xCE9kG-ForQ

Here we see, in Far Cry 2, the GF100 getting 84 fps average compared to 50 fps on the GTX 285. It's hard to draw too much from this, especially since you have to expect Nvidia to have chosen the best game to show off their new toy, but at least we know that Fermi can be up to 68% faster than a GTX 285. Looking at other FC2 benchmarks, this puts the card not too far behind the 5970. I wouldn't expect it to be so close in most games though. AMD will keep the fastest-single-card crown.
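
For what it's worth, the 68% figure is just the ratio of the two averages:

```python
gf100_avg, gtx285_avg = 84, 50                    # fps averages read from the video
print(round((gf100_avg / gtx285_avg - 1) * 100))  # 68 (% faster in that Far Cry 2 run)
```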

Anyway, it's clear that there are some very interesting new features in Fermi, and that it's going to be a great platform for Nvidia the next generation around. That said, I still think that this entire line is going to end up in the toilet because of TSMC yields and general instability of the design. I guess that's what happens when your competitor releases first and you have to rush to catch up. The next generation should be a beast though, since the new and highly modular architecture will scale very efficiently to a bigger design with more SMs.


You've almost completely misunderstood it all, I'm afraid. By moving stuff inside each cluster, every lower-power version, i.e. the 448-shader and 384-256-shader parts, will have significantly LESS tessellation power than the top-end part, and it will also depend heavily on the shader clocks. We're led to believe they WON'T run at full shader clocks but at a divider, which could be 1/2, 5/6, or 1/8 for all we know. Personally I'd guess that the same core logic that was outside the cluster before won't be running much faster inside it, so expect around 1/2 the shader speed, which could be terrible if the shader clocks on available parts are much lower than expected.
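
To put some illustrative numbers on that worry (all of them assumed: the clocks, the 1/2 divider, and the SM counts beyond 512 = 16 x 32 shaders), if tessellation throughput really does scale with SM count times a divided shader clock, the cut-down parts fall away quickly:

```python
# All figures assumed for illustration: clocks, the 1/2 divider, and the SM counts
# (16 SMs x 32 shaders = 512, 14 x 32 = 448, 12 x 32 = 384).
def relative_tess_power(num_sms: int, shader_clock_mhz: float, divider: float) -> float:
    return num_sms * shader_clock_mhz * divider

top = relative_tess_power(num_sms=16, shader_clock_mhz=1400, divider=0.5)  # 512-shader part
cut = relative_tess_power(num_sms=14, shader_clock_mhz=1200, divider=0.5)  # 448-shader part
mid = relative_tess_power(num_sms=12, shader_clock_mhz=1200, divider=0.5)  # 384-shader part

print(cut / top)  # 0.75: the 448-shader part keeps ~3/4 of the top part's throughput
print(mid / top)  # ~0.64
```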

Right now a 5850 has identical tessellation performance to a 5870, and to a 5770, and so on across the range. Nvidia will lose power with every derivative. This will go for everything else they've moved across: highly scalable architecture in terms of shader clusters, sure, but they CAN'T just chop off shader clusters to produce a half-sized core easily, because the core logic arrangement will HAVE to change for each version. A 5870 essentially has everything every core requires on one side, which doesn't change much at all from top to bottom, and the shader clusters on the other side, so it can be clipped off from one end fairly easily. Nvidia have spread the core logic out far more on all sides and will need a pretty radical change for each core; not impossible at all, but personally I think it will be harder to scale down. Likewise their "we had a 10% larger core size increase than anticipated" will flow down to every single derivative. It couldn't compete on die size/performance on last gen's shrink, and this one is more complex, with lower yields and BIGGER comparative die sizes than AMD, which means their low/mid end will compete worse and cost more than their previous gens.

As for tessellation, we have no info on whether it's fixed hardware or using generic shader power for any geometry. That's bad, because theoretically it could very well lose a HUGE amount of its tessellating power if it's doing other things; the only demo that showed huge tessellation did nothing else. Its only real game-like world demo, the train PhysX thing, had smeg-all tessellation, and considering it's their biggest feature, the fact it's missing is odd.

For AMD it doesn't matter what its shaders/core logic get up to; its tessellation power is always available.

Likewise, if it's only 1.6x faster than AMD in Unigine with a 512-shader part at max theoretical clock speeds, and the only part you can buy is 448 shaders with 25% lower clock speeds, it's lost most of its advantage in tessellation already, before any possible loss in real-world performance.
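
Working that claim through under the same assumption (that tessellation throughput scales with both shader count and clock; the 1.6x and 25% figures are from the post above, not confirmed specs):

```python
claimed_lead_over_amd = 1.6      # 512-shader part at max theoretical clocks (from the post)
sm_scaling = 448 / 512           # 0.875
clock_scaling = 1 - 0.25         # 25% lower clocks

print(claimed_lead_over_amd * sm_scaling * clock_scaling)  # ~1.05: most of the lead gone
```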

So tessellation is COMPLETELY UNKNOWN at this point in time. We know where on the core it's done, and that's it; we have no idea if it's sharing hardware. Its L2 cache and core logic being spread around make it much harder to scale down, and it will lose significant power as it loses shader clusters, more than AMD do as you go down the range.

As you've said, we pretty much know the vids are real, as AnandTech say so, as do others (whether Nvidia hid or lied about what was in those rigs, we don't know). That means it's not as fast as two GTX 285s in SLI, in one of its best games (if it was one of the worst it wouldn't be on display; the only game on display is almost certainly the best-scaling game on the architecture).
 
And you're getting 60 fps minimum in Crysis with all settings maxed out at that res? I use a single GTX 260 and I get the feeling that even two of them in SLI won't do what you're saying they will do. :rolleyes:

I never claimed I was... I said that the fact that a single card doesn't get 60+ fps with AA in everything is the reason I have two, to boost performance...

Off the top of my head, 1920x with 4x AA, Very High settings, DX10 gets me 55 fps average in Crysis. I'm not too bothered, as I don't like the game and consider the engine badly written; Far Cry 2 runs much better.
 
My 5870 can't get a frame rate like that with those settings, and my last card, a 4870X2, wouldn't get anywhere close to 55 fps on Very High detail. My SLI 8800 GTXs would do like 12 fps at those settings and my single GTX 260 did about 20, so you must have some super-duper 260s there.
 
My 5870 can't get a frame rate like that with those settings, and my last card, a 4870X2, wouldn't get anywhere close to 55 fps on Very High detail. My SLI 8800 GTXs would do like 12 fps at those settings and my single GTX 260 did about 20, so you must have some super-duper 260s there.

They are running a nice overclock :D I'm not 100% sure on the figure but it's definitely above 40.
 

I didn't actually see _any_ tessellation in their physics/train demo. What the guy was showing off looked like plain old dynamic LOD terrain that rearranges a set number of polygons to reproduce terrain in tens of thousands of polys from a definition that could contain enough detail for tens of millions.
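
For anyone unfamiliar with the term, "dynamic LOD terrain" usually just means each terrain chunk picks a mesh resolution based on distance or screen-space error; a generic sketch of the distance-based flavour (nothing to do with the demo's actual code) looks like this:

```python
import math

# Generic distance-based LOD pick, not the demo's actual technique.
def pick_lod(chunk_center, camera_pos, base_resolution=256, min_resolution=8):
    """Halve the chunk's grid resolution for (roughly) every doubling of distance."""
    distance = math.dist(chunk_center, camera_pos)
    level = max(0, int(math.log2(max(distance, 1.0) / 50.0)))  # 50.0 = arbitrary near range
    return max(min_resolution, base_resolution >> level)

print(pick_lod((0, 0, 40), (0, 0, 0)))    # 256: close chunk, full detail
print(pick_lod((0, 0, 400), (0, 0, 0)))   # 32: same chunk further away, far fewer polygons
print(pick_lod((0, 0, 3200), (0, 0, 0)))  # 8: distant chunk clamped to the minimum
```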
 
tbh the most interesting thing would be the card that will challenge the 5850: whether it will be faster, and if it's only 10% or so faster, whether it will overclock well :confused:, as you could overclock a 5850 very high and surpass a 5870. Should be interesting.
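
Rough numbers for that overclocking point, assuming performance simply tracks shaders times core clock (a big simplification that ignores memory bandwidth and everything else): a 5850 has 1440 shaders at 725 MHz against the 5870's 1600 at 850 MHz.

```python
# Assumes performance tracks shaders x core clock only (ignores bandwidth, drivers, etc.).
hd5870 = 1600 * 850        # shaders * MHz
hd5850_stock = 1440 * 725

print(hd5850_stock / hd5870)   # ~0.77: a stock 5850 at roughly 77% of a 5870 on this metric
print(hd5870 / 1440)           # ~944: MHz a 5850 would need just to match a stock 5870
```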
 

So if I've read what you've said correctly, you reckon the guys at Nvidia have built this new architecture, which is going to be the basis for their next couple of generations of cards, and completely messed it up without thinking about how it will scale up or down to lower-range cards or to following generations of cards.

Sorry drunkenmaster, but in my opinion I feel it is you that has "almost completely misunderstood it all I'm afraid".

Just because Nvidia's PR and marketing keep getting it wrong and showing the company in a bad light, it doesn't mean that all the engineers who design the GPUs are clueless. I'm sure they know exactly what they are doing.
 
So if I've read what you've said correctly, you reckon the guys at Nvidia have built this new architecture, which is going to be the basis for their next couple of generations of cards, and completely messed it up without thinking about how it will scale up or down to lower-range cards or to following generations of cards.

Sorry drunkenmaster, but in my opinion I feel it is you that has "almost completely misunderstood it all I'm afraid".


Just because Nvidia's PR and marketing keep getting it wrong and showing the company in a bad light, it doesn't mean that all the engineers who design the GPUs are clueless. I'm sure they know exactly what they are doing.

Other than being 'sure' that Nvidia's engineers have their bases covered, do you have any logical explanation of why he is mistaken?
It's just that when logical people try to debunk others' theories, they use facts where necessary and don't simply 'feel' that the other person is incorrect.

Anyway cool story Bru!
 
I didn't actually see _any_ tessellation in their physics/train demo. What the guy was showing off looked like plain old dynamic LOD terrain that rearranges a set number of polygons to reproduce terrain in tens of thousands of polys from a definition that could contain enough detail for tens of millions.

It is supposedly tessellated; in one video the guy explicitly mentions it:

http://www.youtube.com/watch?v=6RdIrY6NYrM

In this case it seems like it's just used for smoothing rather than for displacement mapping, although it's hard to tell from such a low-resolution video.

Edit: Also, it's probably worth mentioning that the wireframe may be misleading, as they likely use a non-tessellated mesh for collision detection for performance reasons.
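
For clarity on the smoothing-versus-displacement distinction (an illustrative sketch only, not the demo's shaders): smoothing just refines the patch toward a curved surface, while displacement additionally pushes each new vertex along its normal by a height read from a texture.

```python
# Illustrative only, not the demo's shaders: same smoothed position, with or without
# a displacement along the normal sampled from a (hypothetical) height map.
def tessellated_vertex(smoothed_pos, normal, height_sample, displacement_scale, displace=True):
    if not displace:
        return smoothed_pos  # smoothing only: the silhouette barely changes
    return tuple(p + n * height_sample * displacement_scale
                 for p, n in zip(smoothed_pos, normal))

print(tessellated_vertex((0.0, 1.0, 0.0), (0.0, 1.0, 0.0), height_sample=0.2, displacement_scale=0.5))
# (0.0, 1.1, 0.0) -- displaced; the collision mesh would typically stay the coarse original,
# which is why a wireframe overlay can look much lower-poly than the shaded surface.
```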
 
So if I've read what you've said correctly, you reckon the guys at Nvidia have built this new architecture, which is going to be the basis for their next couple of generations of cards, and completely messed it up without thinking about how it will scale up or down to lower-range cards or to following generations of cards.

Sorry drunkenmaster, but in my opinion I feel it is you that has "almost completely misunderstood it all I'm afraid".


Just because Nvidia's PR and marketing keep getting it wrong and showing the company in a bad light, it doesn't mean that all the engineers who design the GPUs are clueless. I'm sure they know exactly what they are doing.

He never gets anything right but keeps on going. Bookmark some quotes from him or he'll deny everything he said before.
 
OK, so if my reply falls into the 'sorry, you're just wrong' category just because I don't agree with somebody's perspective on things, how does that make my response any different from drunkenmaster's response to Duff-Man?

If it came down to it, I'll take the engineers' choices as being right (seeing as they're the ones who actually designed the thing) over the ideas of a forum member, who has openly admitted he dislikes Nvidia, on why it won't work when it comes to scaling.
 
OK, so if my reply falls into the 'sorry, you're just wrong' category just because I don't agree with somebody's perspective on things, how does that make my response any different from drunkenmaster's response to Duff-Man?

If it came down to it, I'll take the engineers' choices as being right (seeing as they're the ones who actually designed the thing) over the ideas of a forum member, who has openly admitted he dislikes Nvidia, on why it won't work when it comes to scaling.

So you are basically saying you have no idea what you're talking about, and it's simply impossible that Nvidia goofed, because it just is.

This perspective differs from DM's, as he used reason and logic, derived from what are believed to be the specs of GF100, to form his opinions.

Not to worry though, you can still have an opinion like everyone else, but just get used to being wrong.
 