
460 1GB and 768MB

OK, well I've found a graph comparing the two cards. It's only for Metro 2033, though, so the resolution is lower than the one we're talking about, which I know is the main thing when it comes to memory size, but comparing how the numbers scale it should give an indication.

m2033-comparo.gif


As can be seen, the 768MB card still never drops below 30fps.
 
I wouldn't have thought that the 768MB card would be able to maintain a 30fps minimum in Metro, so this puzzles me slightly; I'm sure I've seen higher-end cards dip into the low 20s or even lower. Mind you, maybe they were enabling tessellation or DoF etc. :)
 
It did use tessellation, link. As I say, the one thing to note though is the lower res; maybe the others you've seen were 1920x1080 rather than 1680x1050.
 
Most people that do not own a money tree, or that have some sense of value for money, tend to gravitate towards hardware purchases that offer bang-for-buck . . . the people that gobble up expensive, bad-value, diminishing-returns items tend to be benchmarkers, e-slongers, money-tree owners, spacemen on an outward-bound voyage to Mars, or simply +1 people that do not know any better and just follow the GroupThink herd without engaging their brain and examining the facts . . .

Big.Wayne, which of those do you think best describes AnandTech? After all, in their review of the GTX460, they said:

"However NVIDIA also has the 1GB version of the GTX 460, with more RAM, more L2 cache, and more ROPs for $30 (15%) more. The 1GB GTX 460 isn’t 15% faster, but at the same time it’s difficult to ignore it. We already have games such as Crysis and Stalker that benefit from the additional capacity of the GTX 460, and this is the future of gaming. For as fantastic of a card as the 768MB GTX 460 is, it has one potential pitfall: it’s 768MB. It’s not a huge problem today, and NVIDIA will tell you it’s not a huge problem tomorrow either, but here we must disagree.

To purchase a $200 card with only 768MB of RAM today is shortsighted; it’s less RAM than last year’s $200 GTX 275 and Radeon 4890 cards had, and it’s going to come up short in tomorrow’s games. The difference is 256MB, but we’re willing to bet between that 256MB of RAM and the additional L2 cache and ROPs that the 1GB advantage will only grow from here. We would rather spend another $30 now for better performance in many of today’s games, knowing that we also will have a better shot at playing tomorrow’s games."


http://www.anandtech.com/show/3809/nvidias-geforce-gtx-460-the-200-king/19


Do you think they are foolhardy in their conclusion? Are they disreputable for passing this opinion on to their readers?

And what about the 2048MB version? :D
 
Big.Wayne, which of those do you think best describes AnandTech? After all, in their review of the GTX460, they said: [AnandTech quote snipped; quoted in full in the post above] ... Do you think they are foolhardy in their conclusion? And what about the 2048MB version? :D

I will point out something about what Anand said that's not true now: there is a much bigger price gap now than there was then. The difference is £40 on this site, which equates to about $63.40, just over twice the $30 Anand quoted in that review.

With this in mind it's a harder choice than it was then. I would still go with the 1GB version, as I do think that 768MB will lead to a less smooth experience in top games, and it can only get worse.
 
768MB is a bit low for 1920 with AA, but if you're on a budget, the 1GB is quite a bit more now. I do feel that until new consoles come out you might get away with 768MB if you drop down the AA a bit and maybe don't max out textures.
 
Hello arknor :)

i guess you missed this thread posted the other day, which has links to a foreign article on video card memory usage
Indeed I was not privy to that info, looks like quite a bit of interesting data there!

I've never heard of overclockers.ru before but it looks like donnerjack has gone to a lot of effort in collating all that info . . . do we assume that all that information is 100% accurate or do we collect lots of data from different sources before thinking we know the truth? . . . what does that data actually prove?

It seems that the premise of that article was to find out what the "maximum" possible vRam usage is if one goes into a game's control panel and cranks up every single setting to pretty much maximum . . . I'm aware there are some folks out there that, after installing a game, go into the game's graphics options and do exactly this; I call them the Max-Max brigade! . . . I don't do this myself, I pretty much let the game autodetect the hardware and choose the correct settings, then play a bit of the game and see how it looks & feels at stock settings . . . . If the auto-detect feature doesn't get the screen-res right that's the first thing I adjust, if it doesn't enable AA that also gets adjusted, and this process continues for a little while . . .

Now you may say this is a waste of time and it's better to just Max-Max everything, but I have found in the past there are a number of graphics options that basically eat resources, reduce FPS and don't really add anything to the gameplay or visual experience . . . I don't really see the point in paying out good money for extra vRam in order that I can enable features I can't see during the game (apart from the reduced FPS) and can't even see when comparing screenshots side-by-side.

Some people (mainly e-Slongers) just won't be happy unless they can type on a forum that they are able to play Meggadon III: The Return with Max-Max Ultra-High settings, and if that's the main value they get from spending extra on a card featuring more vRam then that's their call; in reality I'm not sure they could tell the difference between Max-Max settings and a good balanced selection of visual options to get the best from the game . . . It's not all about vRam with the graphics options btw, some of them require more rendering power, more shaders, faster memory etc etc (which is beyond the scope of this specific debate) . . .

Now before you make a Reductio ad absurdum argument out of what I just said, let me state I am not suggesting anyone cripples their gaming experience by selecting low graphic detail, no AA, dropping their screen res etc . . . because that's not what I am saying! ;) . . . what I am saying is that merely having the ability to go into the in-game graphics panel and Max-Max every setting possible before even playing the game is not something I would place any value on, and I certainly wouldn't part with extra cash for the pleasure of it . . . what I place value on is being able to have the best graphics options enabled that make the game the most visually rewarding and immersive without getting carried away . . . some of the Ultra-High settings (4xAA vs 8xAA vs 16xAA etc) seriously suffer from a case of "visual" diminishing returns where, as I said before, even when taking a freeze frame and examining the high, ultra-high & Max-Max screenshots side-by-side one cannot actually tell the difference . . . so why spend money if you can't see the difference? . . . well, apart from forum bragging rights of course! :D

Anyway, with that said, I have gone to the effort of collating that overclockers.ru "Maximum vRam" info into an easy-to-read & shareable screenshot; it's not as compelling proof as you may think that everyone needs to spend extra on more and more vRam, but it certainly adds something to the pot! . . .



One thing I didn't know that became immediately apparent to me looking at these results above is how much extra vRam a game uses once you move past DirectX® 9.0! :eek: (Happy Windows® XP user here!) . . . I hope the different DirectX® is visually noticeable! . . .

Colin McRae: DiRT 2
  • 714MB DirectX® 9.0
  • 1078MB DirectX® 11.0

Stalker Call of Pripyat
  • 650MB DirectX® 9.0
  • 947MB DirectX® 10.0

Unigine Heaven Benchmark v2.1
  • 688MB DirectX® 9.0
  • 800MB DirectX® 10.0

World in Conflict: Soviet Assault
  • 765MB DirectX® 9.0
  • 900MB DirectX® 10.0
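
Tallying those figures (a quick Python sketch using only the numbers quoted above; nothing here is new data):

```python
# Extra vRAM used when moving past DirectX 9, per the overclockers.ru
# figures quoted above (all values in MB).
usage = {
    "Colin McRae: DiRT 2 (DX9 -> DX11)": (714, 1078),
    "Stalker Call of Pripyat (DX9 -> DX10)": (650, 947),
    "Unigine Heaven v2.1 (DX9 -> DX10)": (688, 800),
    "World in Conflict: Soviet Assault (DX9 -> DX10)": (765, 900),
}

for title, (dx9, newer) in usage.items():
    extra = newer - dx9
    print(f"{title}: +{extra}MB ({extra / dx9:.0%} over DX9)")
```

Two of the four titles need roughly half as much vRam again under the newer API (+51% and +46%), which is exactly the kind of jump that would push a 768MB card past its limit well before a 1GB one.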

not frame skip, stutter, frame skip, stutter, frame skip any time the card reaches more than 768MB of memory
You do realise that this is a proposition you are making and not fact? . . . and without any solid evidence, I'm afraid, this is pure conjecture on your part.

What I personally would need to see would be some kind of FPS timeline that showed realtime framerate in various games using a few different graphics configs and clearly "demonstrated" the FPS line dropping down to 0FPS . . this data would totally back up your unproven "frame skip, stutter, frame skip, stutter" proposition and I would happily update my viewpoint . . . I'm not going to make the same mistake as you and just "assume" that once vRam is exceeded the gameplay is impacted to the point it becomes noticeable and makes the game less enjoyable or unplayable . . .

This timeline data cannot be that hard to produce? . . . why has no-one produced it? . . . It would make debates like these really cut and dried . . . I'm not just talking about someone Max-Max'ing every single graphics option and enabling 16xAA with the pure intent of causing a skip/stutter, but instead running a few tests with an open mind using a few different configs ranging from visually great to Max-Max and seeing what happens . . . . the last time I ever encountered anything like a skip/stutter was playing a game @ 1920x1200 and putting the AA up to a ridiculous 16x :p
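
For what it's worth, producing such a timeline really isn't hard once you have per-frame timestamps. Here is a minimal sketch, assuming a plain text log with one cumulative millisecond timestamp per line; the frametimes.csv name and one-value-per-line format are just assumptions for illustration (e.g. something massaged out of a FRAPS frame-time log):

```python
# Minimal sketch: turn per-frame timestamps (ms since the run started)
# into a crude FPS-over-time timeline, one bucket per second.

def fps_timeline(timestamps_ms, bucket_ms=1000):
    """Return frames rendered per bucket; counts[i] is the FPS in second i."""
    if not timestamps_ms:
        return []
    counts = [0] * (int(timestamps_ms[-1] // bucket_ms) + 1)
    for t in timestamps_ms:
        counts[int(t // bucket_ms)] += 1
    return counts

# Assumed input format: one cumulative timestamp (ms) per line.
with open("frametimes.csv") as f:
    stamps = [float(line) for line in f if line.strip()]

# Print a simple text graph: any second where FPS collapses towards 0
# would show up immediately as a short (or missing) bar.
for second, fps in enumerate(fps_timeline(stamps)):
    print(f"{second:3d}s {fps:4d} fps  {'#' * min(fps, 60)}")
```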

its clearly obvious even now 768MB of RAM on a video card is barely cutting it at the settings most people who buy one of these cards are likely to want
It's clearly obvious that you rush to form a conclusion based on a single set of data that proves nothing apart from how to max out vRam by forcing every graphics option, whether it makes a visual difference or not, or whether it's been well coded or not; you also assume you know what settings most people are likely to want, and indeed what O/S they are running?

[Off Topic]

I don't know if you read my earlier post where I said:

good clean debate, no punches below the belt etc! ;)

In case you didn't understand what this meant I was hoping you would avoid making posts like these! :eek:

yea i like how people go on about the 1gb card being future proofing only, oh we dont need it now do we wayne? ;)
maybe see if bigwayne offers a stutter-back guarantee ;)
I've got no problem with you, man, and I'm always happy to have a good discussion/debate, but can I politely ask you to avoid making posts like that using my name when I personally have nothing to do with the subject matter? . . . please make your posts as factual as you like and share your viewpoint and data to help other forum users learn, but please try not to drag a debate down to a personal level . . . it's not constructive in any way . . . . I hope you understand what I am saying here! :cool:
 
min/max are useless when you're stuttering along out of memory
TBH min, avg and max fps mean absolutely nothing
I think you boys need to do some serious testing . . . and depending on your findings I would write a letter of complaint to every single website that produces benchmark reviews telling them "min/max are useless" and "min avg and max fps mean absolutely nothing" :D

This whole vRam scenario needs a lot more investigation before a proper conclusion can be drawn . . . I'm not convinced that exceeding the vRam limit due to excessive graphics options being maxed causes as big a problem as is being made out . . . if it is a problem and does cause noticeable "stuttering" then it should be really easy to prove . . .

The only thing that can really give you a good idea of how one GPU compares to another is a graph showing the framerate over time, such as this:

mf2470p260.jpg
Rroff, in that screenie, what does Stutter: 10.6937 mean, or what is it based on? :confused:
 
Hello Outpost262 :)

Ryan Smith @anandtech said:
For as fantastic of a card as the 768MB GTX 460 is, it has one potential pitfall: it’s 768MB. It’s not a huge problem today, and NVIDIA will tell you it’s not a huge problem tomorrow either, but here we must disagree.

  • To purchase a $200 card with only 768MB of RAM today is shortsighted
  • it’s going to come up short in tomorrow’s games
  • The difference is 256MB, but we're willing to bet between that 256MB of RAM and the additional L2 cache and ROPs that the 1GB advantage will only grow from here
  • We would rather spend another $30 now for better performance in many of today's games, knowing that we also will have a better shot at playing tomorrow's games

Do you think they are foolhardy in their conclusion? Are they disreputable for passing this opinion on to their readers?
Do I think that Ryan Smith is foolhardy? . . . no, not really, he is just a guy who gets paid to write words about computer hardware . . . he is no more an expert than anyone here . . . I am not that interested in people's opinions because, as we know, an opinion is nothing more than an individual's interpretation of facts . . . all I am interested in is the facts, and from these facts I would prefer to draw my own conclusions and share the facts with other forum users so they too can draw their own conclusions . . . we are not sheep and do not need a shepherd to guide us and keep us safe! ;)

There are no authority figures apart from facts . . . and these speak for themselves . . . the problem is getting at them! :p

I don't think AnandTech is reputable or disreputable really. All they (or any website) need to do is benchmark the hardware as accurately as possible and pay meticulous attention to anything that could skew the facts & figures . . . once the "accurate" data is published we can do the rest! :cool:
 
I have done some serious testing :P hence that screenshot; it's from a program I made.

The stutter figure doesn't mean anything on its own; it's based on the variation in frametimes for each frame update within a short period, and a guess at how noticeable that would be to the person playing. For example, if you had 30 rendering updates within a second you would have a "shown" 30fps, but if 20 of those frame updates all came in during the first 40ms of that second (1000ms) and the last 10 came in the remaining 960ms, then the effect would be of a much lower framerate than 30fps.

When you have a graph like the above from the same benchmark, same settings but a different GPU, then the Stutter figure can be compared between those runs to see which setup is smoother for the player. As a rough guide, anything under 20 is acceptable, under 5-6 ideal; anything over 20 will produce noticeable jerkiness in the game. The figure is somewhat arbitrary in how it's arrived at, but it's loosely based on the percentage of sample groups in the data that showed high levels of variation.
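
Rroff hasn't posted his program's internals, so the following is only a guess at one way such a figure could be computed, not his actual method: split the frame times into short windows and report the percentage of windows whose worst frame-to-frame deviation is large relative to the window's mean. The 30-frame window and 50% threshold are invented for the sketch:

```python
# Hypothetical reconstruction of a "stutter" figure along the lines
# Rroff describes: the percentage of short sample windows that show
# high frame-time variation. Window size and threshold are guesses.

def stutter_score(frame_times_ms, window=30, threshold=0.5):
    groups = [frame_times_ms[i:i + window]
              for i in range(0, len(frame_times_ms) - window + 1, window)]
    if not groups:
        return 0.0
    high_variation = 0
    for g in groups:
        mean = sum(g) / len(g)
        # Worst single-frame deviation from the window mean, as a
        # fraction of that mean.
        if max(abs(t - mean) for t in g) / mean > threshold:
            high_variation += 1
    return 100.0 * high_variation / len(groups)

# Rroff's own example: a "shown" 30fps where 20 frames land in the
# first 40ms of each second and 10 frames in the remaining 960ms.
smooth = [1000 / 30] * 300                # even 30fps -> score 0.0
bursty = ([2.0] * 20 + [96.0] * 10) * 10  # bursty 30fps -> score 100.0
print(stutter_score(smooth), stutter_score(bursty))
```

Both traces average exactly 30fps, yet the sketch scores them 0 and 100; that is the gap a single min/avg/max triple cannot see.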
 
Hi jakesnake :)

these benchmarks are probably taken while playing single player mode while shooting 3 or 4 computer bots for a couple of minutes . . .
Are you sure? What are you basing this probability on?

i dont think they reflect online multiplayer with a packed 32-man server and how intensive that can be. this is where the 1gb will shine
Why don't you think the results reflect the usage you are likely to see? . . . have you got any data to show otherwise? . . . do you think the anandtech data is irrelevant to you? . . . would you be happy or unhappy if you found out the more affordable card met your needs? . . . or would you be happier to pay more either way?

im getting 2 460s in a couple of weeks. im still getting the 1Gb version :)
What GPU are you using at the moment? . . . what have SLI GTX 460's got to do with "32-man server" games? . . . don't the extra players require extra CPU processing power rather than GPU muscle? :confused: . . . or do the extra models running around and shooting guns require tons more rendering than a normal single-player game? :D

yea i could lower my res and disable AA and save a few pounds, but i dont want to :D:D 1gb powa lol
Why do you think you need to "disable AA" or "lower your res" with a 768MB card? . . . what game is it you're playing? . . . as much as you try and as much research as you put in, I'm certain you're gonna get burnt, jakesnake. I guess it's the only way to learn . . . everyone knows best after all! :cool:
 
Hey Rroff :)

I have done some serious testing :P hence that screenshot; it's from a program I made.
Well, nice one on making your own program, good for you! . . . . the sooner you can produce some "facts" that can be understood by the layman, the sooner we start buying you beers!

The stutter figure doesn't mean anything on its own
Oh?

The figure is somewhat arbitrary in how it's arrived at, but it's loosely based on the percentage of sample groups in the data that showed high levels of variation.
I know you've got a big brain on you; we just need to work on that comms-interface so simple laymen like me can understand WTF you are saying! :D

Keep up the good work mate, always interested in any good "Facts" you uncover! . . . thanks in advance! :cool:
 
To illustrate a little further why I don't like min, max, avg figures on their own, let's elaborate on the multi-GPU rendering example...

You run a minute-long benchmark with a single GPU, say a GTX 480, and a multi-GPU setup, say GTX 285 SLI. The benchmark has an intense fighting scene and then exits by panning through a scene that's mostly sky and very light on rendering load.

Both GPUs return a min fps of 18, average of 39 and max of 250.

The single GPU peaked at 250 right at the start before anything happened; it then sailed through the intensive scene at 48fps, briefly dropping to 18 when a big explosion went off, and rendered the less intensive scene at the end at 150fps.

The multi-GPU setup also peaked at 250 at the start, but struggled through the intensive scene with poor SLI scaling at 29fps, with a drop to 18fps at the same explosion; on the less intensive scene at the end, the full pixel fillrate of the multi-GPU solution came into play with maxed-out scaling and it managed 230fps.

Now on paper they look like they performed identically... but in fact throughout most of the intensive scene the multi-GPU setup was bogged down substantially.
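
A tiny sketch makes the point concrete. The per-second figures below are invented to follow the scenario above (the quoted min/avg/max triple doesn't quite fall out of the scene description, so only the min and max are kept identical):

```python
# Invented per-second FPS traces following the scenario above:
# a 1s peak, a 40s fight, a 1s explosion dip, then 18s of light sky.
single = [250] + [48] * 40 + [18] + [150] * 18   # single-GPU run
multi  = [250] + [29] * 40 + [18] + [230] * 18   # SLI run

for name, trace in (("single", single), ("multi ", multi)):
    print(f"{name}: min={min(trace)} "
          f"avg={sum(trace) / len(trace):.0f} max={max(trace)}")

# Identical min and max -- and the SLI run even posts the *higher*
# average -- yet it crawled at 29fps vs 48fps for the whole fight.
# Only a framerate-over-time graph exposes that difference.
```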
 