[hardware.info] AMD Radeon HD 7970 GHz Edition vs. Nvidia GeForce GTX 680: frametimes

Introduction

A few days ago we introduced you to the concept of frametime tests, a more accurate way of gauging graphics card performance in video games. Today we are applying the new test for the first time, and with the new data we will compare the AMD Radeon HD 7970 GHz Edition to the Nvidia GeForce GTX 680.

If you want the best of the best in the area of PC graphics capabilities, there aren't that many options out there and you will quickly narrow down your shortlist to include these two cards. The AMD Radeon HD 7970 GHz Edition is available for an average of £374 or € 400, and the Nvidia GeForce GTX 680 costs an average of £404 or € 460.

When we originally tested the 7970 GHz Edition, we concluded that the AMD card is faster in most benchmarks compared to the GeForce GTX 680. And when you take the lower price into consideration, it's clear that AMD gives you the best value for your money.

So, will this conclusion hold up in the light of new frametime data? We tested the two cards in five recent video games to find out.
http://uk.hardware.info/reviews/399...n-vs-nvidia-geforce-gtx-680-frametimes-review
 
a more accurate way of gauging graphics card performance in video games

A huge, massive disagree from me here.

Frame times help paint a BETTER picture of real-world perceived smoothness in a game; they do not paint the performance picture alone.
 
A huge, massive disagree from me here.

Frame times help paint a BETTER picture of real-world perceived smoothness in a game; they do not paint the performance picture alone.

Hmm, some may disagree. For example, a constant 50 FPS may feel smoother than 60 FPS if the frame times are more consistent. Crunching out more frames isn't always better.

That said, it's all very subjective really and to each their own :) I agree, though, that it's a nice metric to include alongside FPS; the two combined paint a much better overall picture.
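
To make that concrete, here is a minimal sketch (Python, with made-up frame-time traces) of why a steady 50 FPS can read as smoother than a jittery 60 FPS average: the average favours the second trace, but its 99th-percentile frame time and jitter are far worse.

```python
# Illustrative only: two invented frame-time traces, in milliseconds.
import statistics

steady_50fps = [20.0] * 300            # every frame takes 20 ms -> constant 50 FPS
jittery_60fps = [8.0, 25.3] * 150      # alternating fast/slow frames, ~16.7 ms average -> ~60 FPS

def summarise(name, frame_times_ms):
    avg_fps = 1000.0 / statistics.mean(frame_times_ms)
    p99 = sorted(frame_times_ms)[int(0.99 * len(frame_times_ms))]
    jitter = statistics.pstdev(frame_times_ms)
    print(f"{name}: avg {avg_fps:.0f} FPS, 99th-percentile frame time "
          f"{p99:.1f} ms, jitter (std dev) {jitter:.1f} ms")

summarise("steady 50 FPS ", steady_50fps)
summarise("jittery 60 FPS", jittery_60fps)
```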
 
I thought this was going to show more AMD stutter, like some other reviewers have been showing lately, but it seemed about even, with some games looking smoother on AMD and some on Nvidia.
 
I thought this was going to show more AMD stutter, like some other reviewers have been showing lately, but it seemed about even, with some games looking smoother on AMD and some on Nvidia.

That's the way it's always been. Sometimes you just have to look past the guff. I haven't even read the review yet, but it's a real shame they couldn't have waited a week or so, as the new 13.3 AMD memory management drivers are supposed to arrive soon and they are designed to help with this very issue.
 
That's the way it's always been. Sometimes you just have to look past the guff. I haven't even read the review yet, but it's a real shame they couldn't have waited a week or so, as the new 13.3 AMD memory management drivers are supposed to arrive soon and they are designed to help with this very issue.

But the review isn't showing AMD in a bad light? In fact, it's either neck and neck or one pulls ahead of the other and vice versa.

Still, it would be nice to see what the drivers do; AMD have really been squeezing a lot out of their cards with 'em lately and it's impressive :)
 
But the review isn't showing AMD in a bad light? In fact, it's either neck and neck or one pulls ahead of the other and vice versa.

Still, it would be nice to see what the drivers do; AMD have really been squeezing a lot out of their cards with 'em lately and it's impressive :)

Yeah, I didn't think it was. I gathered that from your post and others in this thread. Sorry if I sounded like I was jumping to conclusions.

I was just referring to the general misconception that only AMD have this massive problem with stutter whilst everything is smooth sailing for other vendors.

I agree, though, it would have been nice if they could have held the review back to see what difference, if any, these new drivers make.
 
I don't understand all this frame time stuff at all. Sorry, I should rephrase that: I don't know what all the fuss is about. It's something that's been blown out of all proportion to the actual problems it causes in games.

Even the TechReport article said the effects weren't noticeable in games at all. I am not talking about the stutter you get when you are playing games at the highest resolutions and settings that are sort of beyond what the card is capable of; I am talking about just general gaming. If things really were that bad, there would be an outcry.

Take the Assassin's Creed result: I play that game all the time at the highest settings at 1920x1080. I have the GTX 680, and if the game really was as stuttery as that graph seems to indicate, I think I would notice. No, I am sure I would notice; I am extremely sensitive to microstutter. But it's perfectly smooth for me.

I think this is all BS, just something else to talk about.
 
I think it is a fantastic way of doing reviews. FPS alone isn't always conclusive about which card wins.

Take Far Cry 3, for example. It was a mess on the 310.xx drivers, and I did the frame latency tests which proved it. Frame rates were great, with a constant 80+ FPS, but frame latency times painted a different story, with serious spikes well past 50 ms.

When Nvidia released the 313.95 drivers, I ran the same tests again and the game felt really good, and the Fraps latency viewer showed how smooth it was.

Well done, reviewers.
 
+1, all reviews should include both avg. FPS AND frametimes.
If you do frametimes, you don't need to worry so much about min/max FPS, because what you are looking for in the mins is slowdowns and stuttering that the avg. doesn't show, and this is better shown by frametimes.
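
As a rough sketch of that workflow (the file name and log format here are assumptions, modelled on a Fraps-style log with one cumulative per-frame timestamp in milliseconds), average FPS, spike counts above 50 ms, and the worst single frame can all be pulled from the same data:

```python
# Sketch only: assumes a CSV-style log whose last column is a cumulative
# per-frame timestamp in milliseconds (the header line is skipped).
def analyse(path="frametimes.csv", spike_ms=50.0):
    with open(path) as f:
        stamps = [float(line.split(",")[-1]) for line in f if line[0].isdigit()]
    frame_times = [b - a for a, b in zip(stamps, stamps[1:])]   # per-frame delta, ms
    avg_fps = 1000.0 * len(frame_times) / (stamps[-1] - stamps[0])
    spikes = sum(t > spike_ms for t in frame_times)
    print(f"average: {avg_fps:.1f} FPS")
    print(f"frames over {spike_ms:.0f} ms: {spikes} (worst {max(frame_times):.1f} ms)")

analyse()
```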
 
I think it is a fantastic way of doing reviews. FPS alone isn't always conclusive about which card wins.

Take Far Cry 3, for example. It was a mess on the 310.xx drivers, and I did the frame latency tests which proved it. Frame rates were great, with a constant 80+ FPS, but frame latency times painted a different story, with serious spikes well past 50 ms.

When Nvidia released the 313.95 drivers, I ran the same tests again and the game felt really good, and the Fraps latency viewer showed how smooth it was.

Well done, reviewers.

That was on an SLI system though. Would you have noticed on a single card?
 
That was on an SLI system though. Would you have noticed on a single card?

To be fair, SLI uses a hardware- and software-based technology called frame metering. This makes for smoother performance and is quite evident with the latest frame testing software. It has brought to light why Nvidia feels smoother 'out of the box' compared to AMD multi-card setups. I am very anal about smooth gameplay, and the slightest stutter or jerk has me dissecting why it happened.

The latest 314.22 drivers are very nice and remind me of the 306.23 and 313.95 drivers, which have always been dependable and smooth.

This is quite an interesting read from September 2011.

We didn't set out to hunt down multi-GPU micro-stuttering. We just wanted to try some new methods of measuring performance, but those methods helped us identify an interesting problem. I think that means we're on the right track, but the micro-stuttering issue complicates our task quite a bit.

Naturally, we contacted the major graphics chip vendors to see what they had to say about the issue. Somewhat to our surprise, representatives from both AMD and Nvidia quickly and forthrightly acknowledged that multi-GPU micro-stuttering is a real problem, is what we measured in our frame-time analysis, and is difficult to address. Both companies said they've been studying this problem for some time, too. That's intriguing, because neither firm saw fit to inform potential customers about the issue when introducing its most recent multi-GPU product, say the Radeon HD 6990 or the GeForce GTX 590. Hmm.

AMD's David Nalasco identified micro-stuttering as an issue with the rate at which frames are dispatched to GPUs, and he said the problem is not always an easy one to reproduce. Nalasco noted that jitter can come and go as one plays a game, because the relative timings between frames can vary.

We'd mostly agree with that assessment, but with several caveats based on our admittedly somewhat limited test data. For one, although jitter varies over time, multi-GPU setups that are prone to jitter in a given test scenario tend to return to it throughout each test run and from one run to the next. Second, the degree of jitter appears to be higher for systems that are more performance-constrained. For instance, when tested in the same game at the same settings, the mid-range Radeon HD 6870 CrossFireX config generally showed more frame-to-frame variance than the higher-end Radeon HD 6970 CrossFireX setup. The same is true of the GeForce GTX 560 Ti SLI setup versus dual GTX 580s. If this observation amounts to a trait of multi-GPU systems, it's a negative trait. Multi-GPU rigs would have the most jitter just when low frame times are most threatened. Third, in our test data, multi-GPU configs based on Radeons appear to exhibit somewhat more jitter than those based on GeForces. We can't yet say definitively that those observations will consistently hold true across different workloads, but that's where our data so far point.

Nalasco told us there are several ideas for dealing with the jitter problem. As you probably know, vsync, or vertical refresh synchronization, prevents the GPU from flipping to a different source buffer (in order to show a new frame) while the display is being painted. Instead, frame buffer flips are delayed to happen between screen redraws. Many folks prefer to play games with vsync enabled to prevent the tearing artifacts caused by frame buffer flips during display updates. Nalasco noted that enabling vsync could "probably sometimes help" with micro-stuttering. However, we think the precise impact of vsync on jitter is tough to predict; it adds another layer of timing complexity on top of several other such layers. More intriguing is another possibility Nalasco mentioned: a "smarter" version of vsync that presumably controls frame flips with an eye toward ensuring a user perception of fluid motion. We think that approach has potential, but Nalasco was talking only of a future prospect, not a currently implemented technology. He admitted AMD can't say it has "a water-tight solution yet."
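
As a toy illustration of why vsync's effect on jitter is hard to predict (numbers invented): a finished frame is held until the next refresh, so display intervals snap to multiples of the refresh period, and depending on when frames happen to finish that can either even the pacing out or double an interval.

```python
# Toy model of vsync at 60 Hz: buffer flips only happen on refresh boundaries.
import math

REFRESH_MS = 1000.0 / 60.0                         # ~16.7 ms refresh period
render_done = [0.0, 14.0, 36.0, 55.0, 70.0]        # invented frame completion times
flip_times = [math.ceil(t / REFRESH_MS) * REFRESH_MS for t in render_done]

print("render intervals :", [round(b - a, 1) for a, b in zip(render_done, render_done[1:])])
print("display intervals:", [round(b - a, 1) for a, b in zip(flip_times, flip_times[1:])])
# render intervals : [14.0, 22.0, 19.0, 15.0]
# display intervals: [16.7, 33.3, 16.7, 16.7]
```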

Nalasco did say AMD may be paying more attention to these issues going forward because of its focus on exotic multi-GPU configurations like the Dual Graphics feature attached to the Llano APU. Because such configs involve asymmetry between GPUs, they're potentially even more prone to jitter issues than symmetrical CrossFireX or SLI solutions.

Nvidia's Tom Petersen mapped things out for us with the help of a visual aid.

[nv-slide.gif: Nvidia's slide of the frame production pipeline, from the game engine through to the display]

The slide above shows the frame production pipeline, from the game engine through to the display, and it's a useful refresher in the context of this discussion. Things begin with the game engine, which has its own internal timing and tracks a host of variables, from its internal physics simulation to graphics and user input. When a frame is ready for rendering, the graphics engine hands it off to the DirectX API. According to Petersen, it's at this point that Fraps records a timestamp for each frame. Next, DirectX translates high-level API calls and shader programs into lower-level DirectX instructions and sends those to the GPU driver. The graphics driver then compiles DirectX instructions into machine-level instructions for the GPU, and the GPU renders the frame. Finally, the completed frame is displayed onscreen.

Petersen defined several terms to describe the key issues. "Stutter" is variation between the game's internal timing for a frame (t_game) and the time at which the frame is displayed onscreen (t_display). "Lag" is a long delay between the game time and frame time, and "slide show" is a large total time for each frame, where the basic illusion of motion is threatened. These definitions are generally helpful, I think. You'll notice that we've been talking quite a bit about stutter (or jitter) and the slide-show problem (or long frame times) already.
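
As a rough paraphrase of those three definitions (invented numbers, Python, not any official tool), given the game's intended time for each frame and the time it actually reached the display:

```python
# Sketch: stutter = variation in the game-to-display delay, lag = the average
# delay itself, "slide show" = frames whose display interval is very long.
def frame_delivery_metrics(t_game_ms, t_display_ms, slide_show_ms=50.0):
    delays = [d - g for g, d in zip(t_game_ms, t_display_ms)]
    frame_times = [b - a for a, b in zip(t_display_ms, t_display_ms[1:])]
    return {
        "stutter (spread of delay, ms)": max(delays) - min(delays),
        "lag (average delay, ms)": sum(delays) / len(delays),
        "slide-show frames (>50 ms)": sum(t > slide_show_ms for t in frame_times),
    }

# invented example: frames simulated ~16.7 ms apart, displayed with uneven delays
print(frame_delivery_metrics([0.0, 16.7, 33.3, 50.0], [40.0, 52.0, 78.0, 90.0]))
```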

Stutter is, in Petersen's view, "by far the most significant" of these three effects that people perceive in games.

In fact, in a bit of a shocking revelation, Petersen told us Nvidia has "lots of hardware" in its GPUs aimed at trying to fix multi-GPU stuttering. The basic technology, known as frame metering, dynamically tracks the average interval between frames. Those frames that show up "early" are delayed slightly—in other words, the GPU doesn't flip to a new buffer immediately—in order to ensure a more even pace of frames presented for display. The lengths of those delays are adapted depending on the frame rate at any particular time. Petersen told us this frame-metering capability has been present in Nvidia's GPUs since at least the G80 generation, if not earlier. (He offered to find out exactly when it was added, but we haven't heard back yet.)
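
A toy model of that idea (not Nvidia's actual hardware logic) is below: frames that finish "early" relative to the average interval are held back slightly, so the alternating short/long gaps typical of unmetered AFR come out evenly paced.

```python
# Toy frame metering: hold early frames so presentation intervals even out.
def meter(finish_times_ms):
    n = len(finish_times_ms) - 1
    # real metering tracks the average interval dynamically; this toy version
    # just uses the average interval of the whole run as the target pace
    target = (finish_times_ms[-1] - finish_times_ms[0]) / n
    display = [finish_times_ms[0]]
    for cur in finish_times_ms[1:]:
        display.append(max(cur, display[-1] + target))   # late frames shown as soon as ready
    return display

raw = [0.0, 5.0, 33.0, 38.0, 66.0, 71.0, 99.0]           # invented AFR-style completion times
metered = meter(raw)
print("raw intervals    :", [round(b - a, 1) for a, b in zip(raw, raw[1:])])          # alternating 5 / 28 ms
print("metered intervals:", [round(b - a, 1) for a, b in zip(metered, metered[1:])])  # all 16.5 ms
```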

Poof. Mind blown.

Now, take note of the implications here. Because the metering delay is presumably inserted between T_render and T_display, Fraps would miss it entirely. That means all of our SLI data on the preceding pages might not track with how frames are presented to the user. Rather than perceive an alternating series of long and short frame times, the user would see a more even flow of frames at an average latency between the two.

Frame metering sounds like a pretty cool technology, but there is a trade-off involved. To cushion jitter, Nvidia is increasing the amount of lag in the graphics subsystem as it inserts that delay between the completion of the rendered frame and its exposure to the display. In most cases, we're talking about tens of milliseconds or less; that sort of contribution to lag probably isn't perceptible. Still, this is an interesting and previously hidden trade-off in SLI systems that gamers will want to consider.

So long as the lag isn't too great, metering frame output in this fashion has the potential to alleviate perceived jitter. It's not a perfect solution, though. With Fraps, we can measure the differences between presentation times, when frames are presented to the DirectX API. A crucial and related question is how the internal timing of the game engine works. If the game engine generally assumes the same amount of time has passed between one frame and the next, metering should work beautifully. If not, then frame metering is just moving the temporal discontinuity problem around—and potentially making it worse. After all, the frames have important content, reflecting the motion of the underlying geometry in the game world. If the game engine tracks time finely enough, inserting a delay for every other frame would only exacerbate the perceived stuttering. The effect would be strange, like having a video camera that captures frames in an odd sequence, 12--34--56--78, and a projector that displays them in an even 1-2-3-4-5-6-7-8 fashion. Motion would not be smooth.
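
A tiny numeric version of that camera/projector analogy (invented numbers): if the content of each frame advances game time unevenly while the frames are presented at a uniform pace, the motion on screen still jumps around.

```python
# The game simulated these frames at uneven moments, but they are shown at a
# uniform 16.5 ms pace, so the motion encoded in them still advances in jumps.
game_time = [0.0, 5.0, 33.0, 38.0, 66.0, 71.0]             # when each frame's content "happens"
display_time = [i * 16.5 for i in range(len(game_time))]   # evenly metered presentation

print("content advances by:", [b - a for a, b in zip(game_time, game_time[1:])])                  # 5, 28, 5, 28, 5 ms
print("shown at intervals :", [round(b - a, 1) for a, b in zip(display_time, display_time[1:])])  # 16.5 ms each
```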

When we asked Petersen about this issue, he admitted metering might face challenges with different game engines. We asked him to identify a major game engine whose internal timing works well in conjunction with GeForce frame metering, but he wasn't able to provide any specific examples just yet. Still, he asserted that "most games are happy if we present frames uniformly," while acknowledging there's more work to be done. In fact, he said, echoing Nalasco, there is a whole area of study in graphics about making frame delivery uniform.
 
That was on an SLI system though. Would you have noticed on a single card?
Well, I have two older systems, one with an 8800 GTS and one with a 9800 GTX+, and I have no problem with either; slowdowns only become apparent when the frame rate dips below 30 fps. On my main system with the 5850, on the other hand, as soon as it dips below 40 fps I already pick up that feeling of "jerking along", which I usually won't feel unless the frame rate dips to around 25 fps on the two Nvidia cards mentioned above...

On the other hand, I came across someone mentioning that he feels ATI/AMD cards deliver a better sense of depth in 3D (not 3D monitors, just 3D games in general) than Nvidia... and after reading that, I decided to pay more attention... and I think he's right! The 3D depth seems much better on my 5850, while on my other systems with Nvidia cards the 3D graphics look more "flat" in comparison.
 