4850 compared to an 8800GTX? Will I be impressed?

I'm also an owner of an 8800GTX, but helmutcheese, you really need to chill out, mate. You were getting quite defensive, and calling people names (noob, lamer, wannabe, etc.) is really taking the ****. Having a high post count and being a regular contributor really doesn't mean you can talk to people like that. You can add me, or any other member of this forum, to your ignore list any day.
How would you feel if I called you an ignorant, grumpy nV fanboy?
ATI's giving the best price/performance ratio ATM, just get over it.

He's adding people to his little list because what they're saying is true. I think it's out of order to knowingly misinform people because he has issues with people buying an ATi card.

What could he possibly gain from lying about them when there's a load of evidence showing he's talking fairy-land nonsense?
 
I never said they did not. I did misread the topic and state that a GTX owner gaming at 1600s does not need a new card (he does not own a GTX).

His best bet is the new ATI, since he had an 8600GT.

I don't set peeps to ignore for no reason, but requesting a ban is a bit much.

The peeps I set to ignore are peeps I should never have removed from the list previous times (my error).

This thread has gone totally off topic with peeps making it into a 512MB v XXXMB card thread. I did not do that, but I did argue the facts you saw me argue, thinking he OWNED the GTX. :)

Now that I'm done arguing, why are you and others still baiting/throwing fuel on the fire?

That gets me most of all. :)

You're aware that you basically started the 512MB versus XXXMB argument though, right?
 
Back it up then. Only a n00b would think ATI have some super RAM tech that can do 1920 or higher + AA + AF, now and in months to come, with 512MB. It may get further than Nvidia's does, though.

Nvidia have crappy memory management; why can't you understand that?

[benchmark graph: GRID]


Oh look, ATI's 512MB card beating Nvidia's 1GB card; yeah, that 512MB is such a bottleneck.

[benchmark graph: Assassin's Creed]


[benchmark graph: COD4 @ 1920]


And since Oblivion was mentioned:

[benchmark graph: Oblivion 8xAA @ 1920]


[benchmark graph: Oblivion 8xAA @ 2560]


[benchmark graph: Oblivion with Quarl's Texture Pack, high res + AA]
 
I had an 8800GTX; now I have both a 4850 and a 4870, and to anyone considering upgrading I would say this.

The 8800GTX was an incredible card, and when I sold it at a £300 loss last week I was gutted, but if I'd held onto it, it would have been a £330 loss (paid £400, sold for £100 twelve months later).

For a long time the 8800 gave NV bragging rights on performance, but they just reaped the profits (over 4 million 8-series cards sold, according to Nvidia's Roy Taylor in recent interviews) and made only minor tech changes, stretching their R&D and retaining maximum profit from the 8 series, hence all the die shrinks like G92, etc. While this was happening, ATI were able to design the HD4xxx series and create their own equivalent of the 8-series line.

The 4850 is much better than my old 8800GTX. Forget 768MB vs 512MB; it's not an issue, as ATI's texture compression management is superior. Forget 384-bit vs 256-bit memory bandwidth; again, ATI have the better design.

Bottom line: in games I have experienced very smooth, stutter-free gameplay, compared to lots of microstutter (especially in UT3-engine games) on the 8800GTX. This is regardless of resolution.

I found the 4850 to be up to 30% quicker than an 8800GTX, and sometimes you can get almost performance-free AA as well, depending on the game.

Bioshock & Assassin's Creed showed the biggest gains for me over the 8800GTX. Assassin's Creed in particular, with DX10.1, has superior visuals & performance: where the 8800GTX struggled with any AA at all, the 4850 seems quite happy with it enabled. COD4 @ 1920x1200 struggles a little on the 4850 with 4x AA, but @ 2x AA or no AA it's just fine.

ATI still need to work on their drivers a little to unlock the full performance potential, so the gains will be a little better once the optimum drivers are released.

In the meantime, regardless of resolution, I believe the 4850 offers any Nvidia single-card owner the best upgrade unless you have a GTX280 or 9800GX2; the 4850 beats every NV card below those two quite handsomely. It's not all about benchmarks either, as an average-FPS benchmark cannot show the microstuttering most NV cards have in a lot of games.
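To illustrate that last point, here's a tiny sketch (the frame times are made up, not measurements from either card) of why average FPS can't capture microstutter, while frame-time percentiles can:

[code]
from statistics import mean, quantiles

# Two made-up frame-time traces (in ms) with the same average.
# "smooth" renders every frame in 20 ms; "stuttery" alternates short
# and long frames, which is the classic microstutter pattern.
smooth = [20.0] * 100
stuttery = [10.0, 30.0] * 50

for name, frames in (("smooth", smooth), ("stuttery", stuttery)):
    avg_fps = 1000 / mean(frames)
    p99 = quantiles(frames, n=100)[98]  # 99th-percentile frame time
    print(f"{name:8s}: avg {avg_fps:.0f} fps, worst 1% of frames ~{p99:.0f} ms")
[/code]

Both traces report 50 fps average, but one spends half its frames at 30 ms, which is exactly the judder you feel and the bar chart hides.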

The 4870 I found to be up to 20% faster than the 4850, so this is a good upgrade if you want the third-best card after the 9800GX2/GTX280; when ATI unlock more performance from drivers, the 4870 will probably be the second fastest, while still a lot cheaper.
 
NV sold over 4 million 8800s, and 3 of 'em went to me :p

Assassin's Creed can still be played with DX10.1? I thought they took it out?

I don't think ATi will unlock any more performance. Call me stupid, but I'm not so sure about this "oh, let's wait a bit, we'll see more performance through drivers" line... If anything it's only going to be a small boost, hardly noticeable IMO.

I have to say, judging by the performance today, that whilst the 4870 is still not quite there for Crysis (2xAA @ 1680x1050 is fine, however), it's the best card I have upgraded to. It's totally wowed me, even though the cooler is a POS, and I hope ATi keep up the good work.
 
Oh right, well I won't patch it then, I guess, if it lowers performance :p

Anyway, I think a lot of people will remember when I flooded the forums with my moaning about that POS called the 2900XT, what a disgrace it was and everything. Well, now they've gotta cope with me going on about how awesome this 4870 is and how far ATi has come :p. Bloody cracking card, I say! Same for the 4850!
 
[benchmark graph: Oblivion with Quarl's Texture Pack, high res + AA]


This graph speaks volumes as to why 512MB is not a bottleneck for ATI cards.

QTP is known for gobbling video memory, yet even at this high res and these AA settings, the 4850 has no problem coping.
 
He's adding people to his little list because what they're saying is true

I'm honestly surprised there's anyone left who's *not* on his ignore list. He threatens to add someone at least once per thread :p

Seriously impressed with the 4850 benchies above!

Bubby, what is QTP?

*edit*

Quarl's Texture Pack. I should read the small print ;)
 
Just so everyone remembers how good I am: I'm the one that's been pointing out the texture compression issue. helmutcheese, you are completely and utterly wrong.

As I pointed out while making that point to everyone, this has been true for several years: their X1900XT 256MB beating an 8800GTS 320MB in high-res games, or the 512MB version beating the 8800GTS 640MB version at high res, CLEARLY and 100% obviously shows this to be true, even in lots of TWIMTBP games. Nvidia clearly do not have anything close to the edge in texture compression and memory management.

I remember multiple threads a long while back about 8800GTX users constantly having to alt-tab out to Windows to clear the memory cache as games ground to a halt, and being fine after alt-tabbing out; it was a very common issue in lots of games. Early on in LOTRO I had the issue with my 8800GTX. Mind you, the X1900XTs, whatever their memory size, simply couldn't touch the bus of the 8800GTX, or its sheer shader count. But the reduced shaders, bus and memory of the GTS models made this gulf in memory capability obvious, and I've been trying, in vain apparently until the last week or two, to let everyone know so people don't waste money on a 1GB card to become future-proof.

Also worth pointing out: 1920x1200 is a SET RESOLUTION. Texture size won't change massively, except in games that keep too many unused textures in memory. The biggest reason we've needed more memory on cards ISN'T increasing game quality/complexity and memory requirements; it's that with every generation we've slowly increased the most commonly used resolution and AA/AF settings, which raise the base amount of memory you need. The reason cards were fine with 128MB five years ago is that most people gamed at 1024x768, or maybe 1280x1024, with a very few of us gaming at 1600x1200. Very few gamed at the higher resolutions or high AA/AF settings we have now.
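Rough numbers to back that up (a back-of-the-envelope sketch; the bytes-per-pixel figures and double buffering are simplifying assumptions, and texture memory comes on top of all this):

[code]
# Rough framebuffer cost: the colour and depth buffers scale with
# resolution, and MSAA multiplies the per-pixel sample storage.
BYTES_COLOUR = 4  # assuming a 32-bit RGBA colour target
BYTES_DEPTH = 4   # assuming 24-bit depth + 8-bit stencil

def framebuffer_mb(width, height, msaa=1):
    pixels = width * height
    colour = pixels * BYTES_COLOUR * msaa * 2  # double buffered
    depth = pixels * BYTES_DEPTH * msaa
    return (colour + depth) / 1024**2

for w, h in [(1024, 768), (1280, 1024), (1600, 1200), (1920, 1200), (2560, 1600)]:
    row = ", ".join(f"{aa}xAA: {framebuffer_mb(w, h, aa):5.0f} MB" for aa in (1, 4, 8))
    print(f"{w}x{h} -> {row}")
[/code]

Going from 1024x768 with no AA (~9 MB) to 1920x1200 with 8xAA (~210 MB) eats a huge slice of a 512MB card before a single texture is loaded, which is exactly why resolution and AA, not game "complexity", drove memory sizes up.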


Just to lay claim to the fame: search my posts, see when I started saying it and when other people started mentioning it as a big issue ;)


As to why the 4850/70 with a 256-bit bus can still cope: largely, when memory is compressed better it takes up less bandwidth, and the bandwidth is used very efficiently; there's the ring bus (told you it would be useful soon enough) and probably some incredibly efficient hardware compression/decompression logic on the core somewhere. Think of that game that's compressed down to a ridiculous number of kilobytes; compression can be everything, as long as you have code/hardware that can basically encode/decode it instantly on the fly. With the 4870 it's simply a case of pure and unadulterated speed: faster memory + smaller bus = same bandwidth anyway.
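Quick numbers on the "smaller bus, same bandwidth" bit. This is just the straightforward peak-bandwidth calculation, using the commonly quoted reference clocks for these cards:

[code]
# Peak memory bandwidth = (bus width in bytes) x (effective transfer rate).
def bandwidth_gb_s(bus_bits, effective_mts):
    return bus_bits / 8 * effective_mts / 1000  # MT/s -> GB/s

print(f"8800GTX, 384-bit GDDR3 @ 1800 MT/s: {bandwidth_gb_s(384, 1800):.1f} GB/s")
print(f"HD 4870, 256-bit GDDR5 @ 3600 MT/s: {bandwidth_gb_s(256, 3600):.1f} GB/s")
print(f"HD 4850, 256-bit GDDR3 @ 2000 MT/s: {bandwidth_gb_s(256, 2000):.1f} GB/s")
[/code]

So GDDR5's doubled data rate per pin is what lets the 4870's 256-bit bus (115.2 GB/s) comfortably out-run the 8800GTX's 384-bit GDDR3 (86.4 GB/s).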

Then back to the ring bus. Essentially the idea is that rather than having a single memory path where everything is passed through one tiny route and everything is held up, which is the crossbar-style memory controller (I think it's called crossbar, I forget), you have a ring bus, so data needed by shader 800 doesn't have to pass every shader to get there; it can go directly to the cluster that shader is in, then to the shader.

Last generation there wasn't quite enough demand for this to make a difference; the new Nvidia cards seem to be suffering from it quite badly. You can look at raw memory bandwidth, but you also have to look at how efficiently that bandwidth is being used.

Trying to think of a good way to explain it, I guess. Hmm, think of it like an American city with a block-type grid system, very uniform and easy. You have the external bandwidth, 125GB/s or something; imagine that coming over one of those huge bridges into New York. There's only one route in, but it's full speed. Once you hit the city, rather than all the traffic following the same path, the ring bus essentially lets the information flow along the quickest route to the individual shader that needs it, so very little of the information is ever sitting around waiting for the next piece.

The old-style crossbar method has that same bridge, but instead of a block/grid pattern there's a single motorway: all the shaders sit along one massively long road, and all information must travel down it, so a shader at the far end waits behind everything ahead of it. If any of that makes sense. In other words, the external bandwidth from memory to the core is only one part of the information pathway; internal bandwidth and efficiency are a completely different matter, one which the ring bus largely solves.
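Taking that motorway-vs-ring picture at face value, here's a toy hop-count comparison (purely illustrative; real GPU interconnects route nothing like this simply):

[code]
# Toy model of the analogy above: data enters at stop 0 and must reach
# stop k out of n. On the "motorway" (a linear chain) it passes every
# stop on the way; on a bidirectional ring it takes the shorter way round.
def chain_hops(n, k):
    return k

def ring_hops(n, k):
    return min(k, n - k)

n = 16  # pretend there are 16 shader clusters
print("avg hops, chain:", sum(chain_hops(n, k) for k in range(n)) / n)  # 7.5
print("avg hops, ring: ", sum(ring_hops(n, k) for k in range(n)) / n)   # 4.0
[/code]

The ring halves the average distance and, more to the point of the analogy, traffic headed to one stop doesn't queue behind traffic headed to every other stop.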

It's kind of a situation where the pure speed and simplicity of the single pathway works up to a point. The 2900/3800 series was like really high-latency DDR2 when it first came out: much faster speed, but with latencies so high that DDR1 was still better. Then DDR2 dropped its latency, increased speed and became much better than DDR1; this is essentially the situation. With 320 shaders, in groups of 5 which couldn't all be utilised efficiently, the improvement wasn't apparent; it clearly is with 800 shaders, more of which are used efficiently.

I've said for a year that it won't be long before Nvidia adopt a ring-bus-type system as well. It's not far off the Prescott issue of a massive pipeline; in fact it might be incredibly similar. Imagine the last shader on that motorway is just about to get its info when a frame is dropped and it needs a different piece of information: it's basically got an extremely long pipeline to flush before the data arrives, whereas the ATI ring-bus setup essentially acts as a much wider pipeline, where an incredible amount of information can move in other lanes rather than all being stuck in the same one. At some point Nvidia will have to bring out their "Core 2 Duo": a much lower-latency, shorter-pipeline internal memory setup. With the old-school 16 pipes we had in the X800 cards, and a 6800, it took no time at all to get information anywhere; at 24 it wasn't a problem; at 128 it started to be a real problem but wasn't too apparent; with 240 and 800 shaders it's so obviously a problem that it's a joke. There is zero question Nvidia will have to address this in their next major core redesign.

As for the past five years: ATi tend to jump the gun on new tech by a generation, which tends to kick them in the ass, repeatedly. I mean, can anyone remember complaining about the X1800 vs the 7800? Lack of pipeline increase, comparatively massive shader power increase, just too early, as always. Now we look at what every dev wants, at the games industry and at Nvidia's new cores, and see that yet again ATi were on the right path, doing exactly what the devs want and getting screwed by Nvidia paying the devs to do something they don't want ;) But for every generation where ATi bring something a little too early, the generation after is an incredible powerhouse that brings a pretty excellent-value card with it: the X1900 Pro, low pipeline count, cheap massive shader power just when games needed it, and it worked great. The 2900XT/HD3800: ring bus, insane shader power, a touch early. The HD 4800: the perfect combination of all the technologies needed.
 
The RV770 core dropped the ring bus, btw; that's one of the reasons the die-size increase is so small in comparison to the ALU and TMU increase.

Edit: link'd

http://anandtech.com/video/showdoc.aspx?i=3341&p=9

Hmm, I think it's more likely that they've simply kept the ring bus for shader access only, rather than dedicating so much bandwidth to the features that need it less. Crossfire being hung off the hub might be questionable, but then Crossfire has to go off-core anyway, and that connection is mostly just telling each core what the other is up to; it's not actually transferring meaningful amounts of data, so it probably doesn't have an effect. The simple fact that the shaders inside the main part of the core are still grouped in "units" rather than being individual would strongly suggest there's still a wider bus communicating with each unit and then subdividing to each shader as before, just cutting out wasted power, as UVD and the like simply don't need that kind of latency.

http://anandtech.com/video/showdoc.aspx?i=3341&p=10

This is just what I don't understand: saying they fixed AA performance, then showing graphs with INCREDIBLY SIMILAR AA scaling for normal AA and worse scaling for tent filtering, most likely because the RV770/R700 have issues getting the AA hardware to work with the software filters at the same time.

Basically, AA was pretty much fine on the 3870/2900XT. The very first 2-3 driver sets had poor AA performance, and MOST of that was the wrong settings being used. I remember tons of reviews showing the same numbers for, say, 2xAA as 8xAA, but no one cottoned on to the fact that 8xAA was being set all the time, because old games, and even current games, can't pick up new AA modes automatically. Nvidia had this issue as well: HL2/CS:S would basically set the highest-quality mode (6xAA, I think) when you selected 2xAA in the menu, because there was a whole range of new options and the game simply didn't know which was which. The 2900XT after a couple of months never had a severe issue with AA, and neither did the 3870. A couple of games showed a decrease in performance, mostly because they were TWIMTBP titles and just not tweaked. For instance, Bioshock still runs almost as well with AA as without on a 2900XT, same as on an 8800GTX. Other than some select benchmarking in certain odd games, I can't actually find this mythical AA performance hit on either the 2900XT or the 3870X2 I have; in fact, enabling most levels of AA at 1920x1200, I get the same performance hit I'd expect from any ATi card.

In fact the graph I linked shows an incredibly similar AA hit for both Nvidia and ATi, current and last gen, and, surprise surprise, a much worse performance hit on Nvidia at massive AA levels. That's worse AA scaling on a GTX 280 than on a 3870, though let's be honest, this is down to its crappy memory management, again. :p
 
OMFG, tomorrow morning I buy a 4850 =D That should hold me over nicely for the next 12 months until the proper next-gen cards are out. My 8800GT already has one bid at £50, and I expect that to climb steadily, kekeke.

Benches tomorrow!
 
Do the ATI cards support dual screen under Linux yet? I'm tempted to upgrade my 8800GT, but not if I lose dual-screen functionality...
 
I sense a lot of unhappy Nvidia 8800GTX/Ultra owners out there, because ATI have made their cards only worth £70 resale?
I feel some pain, having paid £197 for my 8800GT in Nov 07 and sold it for £90 including Crysis.
I'd have steam coming out my ears if I'd just bought a new GTX, or an expensive second-hand one at anything over the 4850's price, though.
 
Great to see helmutcheese going off on one as usual!

gotta be the grumpiest person on the board lol
 
I've been lucky enough to play around with an 8800GTX in my PC for a couple of days. The card I actually own, though, is an 8600GT. I'm attempting to save for a new graphics card.

If I go and buy a 4850, will it be better than the 8800GTX I played on? Or will it be slightly worse?


I don't think you will be impressed; not worse, but no impressive gains like you probably think you will get.

The GTX is a most impressive card and stands the test of time well. :D
 
Are there some efforts to rewrite history going on here? How can anyone claim that the 2xxx series did not have serious AA problems?!
 