
GTX680 to arrive at the end of February!

Permabanned
Joined
28 Nov 2006
Posts
5,750
Location
N Ireland
No, it's just poor coding in those games. Both have dozens of issues even with single-GPU systems; some of those issues may be magnified with multi-GPU, but it's not directly a multi-GPU problem.

That's the point we tried to make about single GPUs though. I would guess the Source engine's mouse problems would be one such problem that would seem worse with SLI.

L4D2, CS:S and TF2 are culprits, and pretty big games to have had such glaring issues for such a long time. My threads about this issue go back to 2009, I think.

http://www.youtube.com/watch?v=AAnvrsKchD8 Smooth as butter using WSAD, but once you touch the mouse and strafe while holding a key it's a mess.

Recently it's 90% fixed with the raw mouse input option. I'm guessing BF3 probably has the same issue, if they added in raw mouse input. Disabling the multicore option can reduce it further, alongside using 1000Hz polling on the mouse, but for me it's still there; it's just hard to notice unless you really focus on strafing. I bet if I had SLI the game would have been unplayable when strafing.
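For what it's worth, the mismatch between a 1000Hz mouse and a 60Hz frame rate can produce uneven per-frame movement on its own. A rough illustration with hypothetical numbers (this is not the Source engine's actual input code):

```python
# Rough illustration (hypothetical numbers, not the Source engine's actual
# input path): a perfectly steady mouse sampled at 1000 Hz, accumulated
# into 60 Hz frames, still produces uneven per-frame deltas because
# 1000/60 is not an integer.
POLL_HZ = 1000
FRAME_HZ = 60

frame_deltas = []
poll = 0
for frame in range(1, 11):
    # Frame end time in microseconds (integer maths avoids float drift).
    frame_end_us = frame * 1_000_000 // FRAME_HZ
    delta = 0
    while (poll + 1) * 1_000_000 // POLL_HZ <= frame_end_us:
        poll += 1
        delta += 1  # one count per poll: constant physical hand speed
    frame_deltas.append(delta)

# Frames alternate between 16 and 17 counts even though the motion is steady.
print(frame_deltas)
```

That alternation between 16- and 17-count frames is exactly the kind of subtle strafing judder that is easy to feel but hard to screenshot.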
 
Last edited:
Permabanned
Joined
28 Nov 2006
Posts
5,750
Location
N Ireland
most games are just shoddy console ports with a higher resolution

True, but you forget that in some you can add in ambient occlusion at driver level with Nvidia, 8xMSAA, 16xAF, Vsync, PhysX, Eyefinity.

You can go 3D, and if you really have the money you can play at 2560x1600. It's a lot of money, and in some titles you may only get 3 of the above features, but I still would not want to go to 30fps 720p on my PS3.

Appreciate what you have, as PC gaming still rocks :)
 
Man of Honour
Joined
13 Oct 2006
Posts
91,694
That's the point we tried to make about single GPUs though. I would guess the Source engine's mouse problems would be one such problem that would seem worse with SLI.

L4D2, CS:S and TF2 are culprits, and pretty big games to have had such glaring issues for such a long time. My threads about this issue go back to 2009, I think.

http://www.youtube.com/watch?v=AAnvrsKchD8 Smooth as butter using WSAD, but once you touch the mouse and strafe while holding a key it's a mess.

Recently it's 90% fixed with the raw mouse input option. I'm guessing BF3 probably has the same issue, if they added in raw mouse input. Disabling the multicore option can reduce it further, alongside using 1000Hz polling on the mouse, but for me it's still there; it's just hard to notice unless you really focus on strafing. I bet if I had SLI the game would have been unplayable when strafing.

Do you play with Vsync on (I play with it off)? Because I used to play CS:S with my 2 previous SLI setups and definitely didn't get that problem; it would have stood out a mile.
 
Permabanned
Joined
28 Nov 2006
Posts
5,750
Location
N Ireland
Yeah, always Vsync on mate, and TB too, with pre-rendered frames at 3. I have tried with Vsync off and it's the same. It does it on my uncle's computer too, with a normal mouse and a quad core etc. It bugged me for years and everyone called me "mad"; it's only by luck I found the video to prove that I am not alone or insane, and that it happens on both Nvidia and ATI regardless of settings :D You really have to strafe and hold a key to see it working. My favourite part is the start of the Parish map in L4D2, where I strafe around the jetty watching Zoey's backpack or the lamppost slightly trailing like in the video. You probably would not even notice it, but I do, and it ruins the game imo. Oh, and disabling keyboard repeat keys is another small fix which makes zero difference here but can in games like Dead Island; that's just how extreme I have become, that I even disable that now.

Going to get 120Hz shortly to see if that's the final jigsaw piece to getting things as smooth as they used to be in the CS 1.5 days with my 85Hz CRT.

All games suffer slight mouse problems, with object vibration when strafing, and I suspect it's general 60Hz frame jumping I'm picking up on. I've tried formats and every fix under the sun, and it's not my mouse either, or DPI, sensitivity, polling rate or hyperthreading. Maybe a side effect of being console ported, who knows. Oddly I find Windows XP a lot smoother in L4D, and I can just instantly feel at home with a much smoother feeling, which I call the original HL feeling I remember fondly from 10 years ago.

WSAD is fine btw, so smooth it's like a CRT.
 
Last edited:
Associate
Joined
20 Jun 2009
Posts
1,101
Location
UK
Just looked at my OcUK account: back in Jan 2008 I purchased 2 AMD 3870 X2 cards, the best at the time. I paid £259 ex VAT each, lol; it's more than double that now for a 6990.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,694
Yeah, always Vsync on mate, and TB too, with pre-rendered frames at 3. I have tried with Vsync off and it's the same. It does it on my uncle's computer too, with a normal mouse and a quad core etc. It bugged me for years and everyone called me "mad"; it's only by luck I found the video to prove that I am not alone or insane, and that it happens on both Nvidia and ATI regardless of settings :D You really have to strafe and hold a key to see it working. My favourite part is the start of the Parish map in L4D2, where I strafe around the jetty watching Zoey's backpack or the lamppost slightly trailing like in the video. You probably would not even notice it, but I do, and it ruins the game imo.

Going to get 120Hz shortly to see if that's the final jigsaw piece to getting things as smooth as they used to be in the CS 1.5 days with my 85Hz CRT.

All games suffer slight mouse problems, with object vibration when strafing, and I suspect it's general 60Hz frame jumping I'm picking up on. I've tried formats and every fix under the sun, and it's not my mouse either, or DPI, sensitivity, polling rate or hyperthreading. Maybe a side effect of being console ported, who knows. Oddly I find Windows XP a lot smoother in L4D, and I can just instantly feel at home with a much smoother feeling, which I call the original HL feeling I remember fondly from 10 years ago.

WSAD is fine btw, so smooth it's like a CRT.

It's an artifact of SLI and VSync @ 60Hz when using a high number of pre-rendered frames, which is why I asked. VSync and SLI don't really work well together tbh. Not a problem for me, as I don't use VSync for multiplayer games, and singleplayer with VSync @ 120Hz doesn't exhibit the same issues.
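As a back-of-envelope model (an estimate, not a measurement), each queued pre-rendered frame under VSync adds roughly one refresh interval of input lag, which is why a deep queue feels so much worse at 60Hz than at 120Hz:

```python
# Back-of-envelope estimate (a model, not a measurement): with VSync, each
# queued pre-rendered frame adds roughly one refresh interval between
# sampling input and displaying the result.
def extra_lag_ms(queued_frames, refresh_hz):
    return queued_frames * 1000.0 / refresh_hz

for hz in (60, 120):
    for queued in (0, 1, 3):
        print(f"{queued} queued frames @ {hz} Hz -> ~{extra_lag_ms(queued, hz):.1f} ms")
```

By this model a 3-frame queue costs about 50 ms at 60Hz but only about 25 ms at 120Hz, which is consistent with 120Hz VSync feeling far more responsive.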
 
Permabanned
Joined
28 Nov 2006
Posts
5,750
Location
N Ireland
Weird, isn't it. I also tried an fps limiter with values such as 58, 59, 60 and 63, and I tried all the pre-rendered frame settings from 8 and 6 right down to 0, and it's still bloody there! Somehow that bugged out my whole PC, and I have the problem 1000x worse in games that were perfect before. I think all the chopping of mouse DPI has bugged Windows.

The COD series is much improved though. COD4, I can barely notice it in that game.

Vsync and 120fps is the holy grail, I think.
 
Permabanned
Joined
28 Nov 2006
Posts
5,750
Location
N Ireland
It's called the GTX 760, apparently...

The GTX 760 doesn't seem to be as fast as expected, though it's unknown whether it's based on the GK104 core or GK106. If this is GK104 performance then it's pretty underwhelming, considering it packs 670-768 CUDA cores as reported by 3DCenter; however, performance is nearing overclocked GTX 580 levels.

http://wccftech.com/nvidias-kepler-...orce-500-series-generation-performance-chart/

Bear in mind it shows the 770 beating the GTX 590, so it's highly unlikely.
 
Last edited:
Soldato
Joined
10 Feb 2007
Posts
3,435
I fail to see your point there; the 5870 was a great card and wasn't 70-80% ahead of a 285gtx.

This is what people seem to be entirely failing to grasp. If a 6970 is 15% behind a 580gtx in one game, then being 80% faster THAN THE 6970 makes it only ~50% faster than the 580gtx. For example, in random game G the 6970 gets 85fps, the 580gtx gets 100fps, and the 7970, at 80% higher fps than the 6970, gets 150-ish fps.
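The percentage arithmetic above can be checked directly (the fps figures are the hypothetical ones from "random game G", not benchmark results):

```python
# Hypothetical figures from "random game G" above, not real benchmarks.
fps_6970 = 85.0
fps_580 = 100.0   # so the 6970 is 15% behind the 580

fps_7970 = fps_6970 * 1.80          # a card 80% faster than the 6970
gain_vs_580 = fps_7970 / fps_580 - 1

print(f"7970: {fps_7970:.0f} fps, {gain_vs_580:.0%} faster than the 580")
```

So an 80% lead over the 6970 translates to only a ~53% lead over the 580, because the baselines differ.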

AMD are 70-100% faster than the 6970 when the 7970 is at what you would guess to be the "stock" speed if they weren't underclocking for TDP (it's hard to know without knowing the really achievable non-TDP-limited clocks it would have released at; given the 10% clock margin the 5870/6970 had on the reference cooler, the 7970 should be at least around 1125MHz at stock).

You can argue it any which way you want really, but the gain in power every gen is apparent (or we'd still be on 50W GPUs), and AMD refused to up the TDP this gen (I personally think big Kepler won't have that problem and will be a 325-350W gaming card which can hit 400W+ in crap like FurMark). We can all pretend the stock speeds weren't going to be higher; who cares really.

Top overclock vs top overclock on the reference cooler, the 7970 is anything from 70-100% faster than the 6970 (it's actually a little OVER 100% faster in Deus Ex, and likewise in Metro 2033 without tessellation; iirc the 5870 was bang on 100% faster than the 4890 in places. Some games scale great, some less so).

That is a great generational jump; anyone who can't see that is blind, or purposefully missing it. The fact is that being 70-100% faster than the 6970 was NEVER going to make it 70-100% faster than a freaking 580gtx, and why would it, and why should it?

In the days of the fabled 8800gtx, it was 70-100% faster than a 7900gt and only 50-60% faster than an x1950xtx, because in those days the 7900gt was the "small" core and the x1950xtx was comparatively a beast in size and power. These things happen; since then Nvidia/AMD have pretty much switched around in terms of who has the stupidly big, stupidly fast, stupidly expensive core and who has the cheaper, smaller core that offers 80% of the performance.

Read THIS please. The 7970 is only 30% faster than the 6970, and just 10% faster than the GTX580, when averaged across all resolutions and approximately 20 games. At 2560x1600 the 7970 pulls ahead a bit more, but it is still only 40% faster than the 6970 and 20% faster than a 1.5GB GTX580.

All this noise about the 7970 being much faster than previous-gen cards is nonsense. It is noticeably faster, for sure, but it is very far from the Holy Grail of graphics cards. For me the 7970 is disappointing and overpriced. NVidia really would have to mess up on a scale never before seen for the 7970 to remain top dog after Kepler is released.

I am not having a go at the above poster, and I actually agree with almost everything he says. I am just trying to correct the misconception that the 7970 is ~70% faster than the 6970 in anything other than very specific circumstances. Never, as far as I am aware, has ATI/AMD's current-generation card held such a small advantage over the previous gen. Its only saving grace is reasonable performance per watt.
 
Last edited:
Soldato
Joined
28 May 2007
Posts
10,091
Read THIS please. The 7970 is only 30% faster than the 6970, and just 10% faster than the GTX580, when averaged across all resolutions and approximately 20 games. At 2560x1600 the 7970 pulls ahead a bit more, but it is still only 40% faster than the 6970 and 20% faster than a 1.5GB GTX580.

All this noise about the 7970 being much faster than previous-gen cards is nonsense. It is noticeably faster, for sure, but it is very far from the Holy Grail of graphics cards. For me the 7970 is disappointing and overpriced. NVidia really would have to mess up on a scale never before seen for the 7970 to remain top dog after Kepler is released.

This all depends on which Kepler part is released, though. From what I have read, it seems it's going to be a mid-to-high-end Kepler released soonish, or what we would have called the Ti 660 or at most a GTX 670. It looks like they are going to call it the GTX 680, though. The real top-end Kepler part is ages away. It's a good move from NV though, as it should be around the same speed as a 7970.

It will be interesting to see who has the advantage when the GTX 680 and 7970 are running similar amounts of power. AMD did have the advantage there for a while, but with a new architecture maybe this round will be different.
 
Last edited:
Caporegime
Joined
18 Oct 2002
Posts
33,188
Read THIS please. The 7970 is only 30% faster than the 6970, and just 10% faster than the GTX580, when averaged across all resolutions and approximately 20 games. At 2560x1600 the 7970 pulls ahead a bit more, but it is still only 40% faster than the 6970 and 20% faster than a 1.5GB GTX580.

All this noise about the 7970 being much faster than previous-gen cards is nonsense. It is noticeably faster, for sure, but it is very far from the Holy Grail of graphics cards. For me the 7970 is disappointing and overpriced. NVidia really would have to mess up on a scale never before seen for the 7970 to remain top dog after Kepler is released.

I am not having a go at the above poster, and I actually agree with almost everything he says. I am just trying to correct the misconception that the 7970 is ~70% faster than the 6970 in anything other than very specific circumstances. Never, as far as I am aware, has ATI/AMD's current-generation card held such a small advantage over the previous gen. Its only saving grace is reasonable performance per watt.


http://www.techpowerup.com/reviews/AMD/HD_7970/20.html

Techpowerup have for AGES been Nvidia shills, and ridiculous; their comparisons mean NOTHING. By that benchmark's logic the 6990 is barely faster than a 6970; are you really saying the 6990 is no faster than a 6970?

You're using bogus results: they quite simply add in a bunch of CPU-limited games and, this is great for Techpowerup, disabled Crossfire in later reviews in the game AMD did best in earlier on, back when Crossfire worked fine, because they are guilty of that as well.


http://www.hardocp.com/article/2011/12/22/amd_radeon_hd_7970_video_card_review/5

http://www.hardocp.com/article/2012/01/09/amd_radeon_hd_7970_overclocking_performance_review/3

The 6970: 48fps. The 7970: 71fps, WITH HIGHER SETTINGS.

Yeah, it's barely any faster.

At stock speed it's closer to 30% faster than the 580gtx in Deus Ex; overclocked it's over 50% faster... yup.

It's really very simple, so try to comprehend: AMD released this card to get within a certain TDP; the 580gtx was not. Had you put a 200W TDP limit on the 580gtx and underclocked it, it would have been an underclocked card, nothing more or less.

You can NOT overclock the 6970 by 35% on the reference cooler, nor the 580gtx, nor the 5870, nor the 285gtx, nor any other top-end card ever. You can the 7970... but it's "not underclocked", is it.

Again, MAX 6970 reference-cooler overclock vs MAX 7970 reference-cooler overclock, the card is pretty much anything up to 60-90% faster than the 6970... but it's "not faster", because of the stock clock.

Let me ask you this: who cares about the stock clock? My 6950 is overclocked on a 3rd-party cooler; compared to the basic overclocks almost everyone is getting on the 7970, my card averages out 70-80% slower in the same benchmarks... but the 7970 is "not faster". For the record, mine is unlocked to a 6970 and overclocked to around 950MHz.

This isn't a new argument. Seriously, what would you have called the 580gtx if it had randomly come out at 600MHz clock speeds, but every single card could do 850MHz overclocked without an issue?

Comparing rough average "max" overclocks on a given type of cooling is useful because, not surprisingly, most users can achieve what you can, and therefore that is the speed you get if you buy the card. Stock speed hasn't affected my purchases for 15 years, on any computer parts: a T-bird 1.4GHz through to a 5850, which is 20% slower than a 5870 at stock but, overclocked to its max, is within 5%.

What I can achieve on the average 6970 vs what you can achieve on an average 7970 (and we've still really yet to see driver tweaks, and potentially more stable and better overclocking tools with more voltage adjustment, etc), the performance ends up WELL over 50% apart, and in some games 100% apart. That is ALL THAT MATTERS HERE; clock speeds mean nothing, and clock speeds on the GTX 680 won't mean anything either.

What if the GTX 680 comes out with a neutered TDP and underclocked by 200MHz? I will not randomly, arbitrarily change my stance, as I haven't here either. What the average person can achieve with default cooling is really the only measure of how good ANY card is; not marketing, not TDP, not the price points companies "aim" products at for various reasons.

The 590gtx was a cut-down joke because Nvidia didn't want it to be too high-power a card, so they didn't give it enough power to, well, go as fast as it could. Which is why everyone, myself included, said it would have been a great card if only Nvidia hadn't done that.

Had AMD left out the good VRMs and not made the average 7970 capable of hitting 1150-1250MHz, that would be one thing, but all the cards ARE doing that.

I would expect 95% of people buying a 7970 from here to overclock it; I would expect the same percentage of people buying 560tis, and 5850s, and 6970s and 580gtxs.

Let's not get into the whole thing of Nvidia releasing the 560ti to hit a certain TDP, then having ridiculously overclocked versions all over the place compared in reviews to stock 6950s... but that's OK, because it's Nvidia.

The only measure of a card and how good it is, is the speed you can get out of it, NOT the speed some BIOS sets by default.

Who buys a 2500k to run at 3.3GHz? Who buys a Phenom quad to run at 3GHz? Oh, I get it: Nvidia users have a problem with how fast the 7970 is, so now overclocking doesn't count... cry me a river.
 
Last edited:
Soldato
Joined
10 Feb 2007
Posts
3,435
If the top-end Kepler part has just 768 shaders it will comfortably beat the 7970. If there is a 1024-shader part in the making (as widely anticipated), AMD will get creamed.

All NVidia really have to do to match the 7970 is shrink the existing GTX580 to 28nm and clock it to the same speeds; clock for clock, both cards offer very similar performance. It seems very likely that AMD has a "beefier" 7970 in the pipeline with more shaders and higher clocks. Without such a backup card they will concede the top end absolutely. With such a card, competition will be closer, but early adopters of 7970s get screwed.
 