• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

Do AMD provide any benefit to the retail GPU segment?

In theory, but I've never seen my 4070 Ti get anywhere close to its 280W limit even under full load; it's more like 220W. With a frame lock, which I always use, it's around 160W.
TPU tested Cyberpunk 2077 at Ultra settings at 4K (with no RT) to get that figure. The reality is that GDDR6X is a bit heavy on power consumption:

Most of the extra power consumption of the RTX3070 Ti was due to the use of GDDR6X instead of the GDDR6 in the RTX3070, which is what the RTX4070 series uses. Nvidia could have easily used a 256-bit GDDR6 memory controller and gotten similar memory bandwidth and power, but it would have made the chip a bit bigger and cost them more. It's also very telling that they are using GDDR6 on the AD104-based RTX4080 Laptop Edition.
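For anyone wanting to sanity-check that bus-width point, here is a rough back-of-the-envelope comparison. It's a sketch with indicative per-pin data rates, not exact SKU specs:

```python
# Rough peak-bandwidth comparison: a narrow bus with fast GDDR6X vs a wider bus with plain GDDR6.
# The per-pin data rates below are ballpark figures for illustration only.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps_per_pin

print(peak_bandwidth_gb_s(192, 21.0))  # 192-bit GDDR6X at ~21 Gbps -> ~504 GB/s
print(peak_bandwidth_gb_s(256, 16.0))  # 256-bit GDDR6 at ~16 Gbps  -> ~512 GB/s
```

Roughly the same bandwidth either way; the trade-off, as described above, is die area and cost versus the extra power draw of GDDR6X.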
 
OK, let's put a stop to this silliness: Middle-earth: Shadow of War. In 2017 it required over 8GB at 1080p, so I'd say we've known at least since that year that 8GB of VRAM was going to age soon.


"Soon" is six years and four generations (now, including the 1xxx series) later. Not to mention that back then, besides the 1080 Ti that launched that year with 11GB, what other consumer card had more than 8GB? And was 11GB even enough at 1440p, never mind 4K?
It's just sub-optimal optimization.
 
"Soon" is six years and four generations (now, including the 1xxx series) later. Not to mention that back then, besides the 1080 Ti that launched that year with 11GB, what other consumer card had more than 8GB? And was 11GB even enough at 1440p, never mind 4K?
It's just sub-optimal optimization.

But between 2009 and 2016 we went from 512MB/1GB to 8GB in the mainstream. We went from 1.5GB/2GB to 11GB at the high end.

Since 2016, we have mostly stagnated at 8GB (apart from two cards) in the mainstream. We went from 11GB to 24GB at the high end.

If the mainstream had kept up with the high end, we should have been at 12GB/16GB by last generation.

Yet during the period since 2009 we have also had 3.5 console generations. Also, the processing power of dGPUs has increased massively: an R9 390 8GB has much less processing power than an RTX3070 8GB. SSDs are much faster now, so PCs can read textures much quicker too.

So despite all this we are still stuck with only 8GB? It's not only a limitation on texture size and quantity, but also on the amount of detail in them. Imagine if desktop PCs hadn't increased system RAM since 2016? Or SSD size?

It's becoming a limitation now. Even as a modder, I now tend to have to mod to keep textures within 8GB!
 
Reading the "one's as bad as the other" comments :cry:


The ****housing award has been sitting in Nv headquarters since PhysX times:

Nv: We want everyone to enjoy PhysX; we even asked AMD if they wanted to license PhysX and they haven't replied.

AMD: Nobody has even picked up the phone to talk to us about PhysX; besides, we wouldn't want another company controlling their software on our hardware.

Meanwhile Nv sends a modder a cease-and-desist notice to stop hacking PhysX to run directly on AMD; he sticks two fingers up at them and mods it to run in an AMD hybrid setup using a dedicated Nv GPU.

PhysX pt 2:

Want to run Mantle on AMD and PhysX on Nv in the same PC? Yes please!

The Mantle API works outputting through the AMD GPU with an Nv card in the system.

PhysX throws a fit and is disabled, even when outputting through Nv's own GPU, if an AMD card is installed in the system with zero PCI-E cables attached; you have to remove the AMD card for PhysX to work.

AMD release ONE title, racer Dirt (something), with an AMD-exclusive AA mode; Nv users lose their **** saying AMD's just as bad as Nv's PhysX lockout.



Kepler stutter bug: Nvidia say they are looking into it, can't fix it, start deleting forum complaints, close their forum for six months blaming it on a security breach, and it reopens when they fix the stutter bug.



Nv intros G-Sync: you need this box for it to work (yes, at the time you did).

AMD: we can do it too, without proprietary hardware.

Nv: their solution won't work, you need our box.

Fast forward to FreeSync domination.

(This one's for Bencher) Nvidia: you only need 3GB, none of our titles use more than 3GB, but we have the Titan anyway; AMD have 4GB.

The Nv-sponsored Metro sequel launches alongside the 780 6GB, as it needs more than 3GB for max settings. 780 Ti, anyone?

DX12:

Nvidia: we are completely compliant, let's laugh at AMD, they are missing a feature.

DX12 titles with async compute (AS) launch; Nv and its users get mocked because Nv only had the capability to run AS via software, which could even reduce performance, so it was never enabled in the driver, despite fanboys stating AS worked great/better on Nv.

GameWorks: a great software suite, but much like RT it took a big performance hit, even bigger on AMD due to poor tessellation performance; a massive **** storm ensues with AMD claiming "we need access to optimise our end and we can't because it's a black-box API".

Taking into account that Nv had previous form, having publicly stated they needed cooperation with the dev to optimise AMD's open-source TressFX in the original Tomb Raider running on Nv, Nv and their sponsored devs maintain: nonsense, you don't need access.

Nv/devs: we don't pay/get paid for running GameWorks; meanwhile, games are bundled with GPUs and there are other instances of Nv adverts plastered over in-game billboards.

Nvidia keep patching GameWorks with Ubi on FC4 to improve GameWorks performance; every time an update comes out it breaks something on the AMD side, and by the time AMD fix it, Nvidia/Ubi patch it again. Yes, Ubi will take cash to **** anyone's performance. ;)

GameWorks and The Witcher 3:

Bombshell no. 1: pre-release, a CDPR dev states there's nothing WE can do to optimise GameWorks on AMD, it's a closed API, and we can't say anything else about it! :cry:

Bombshell no. 2: despite being developed on Kepler, it launches around the Maxwell release and kills everything that isn't Maxwell. AMD provide driver-side tessellation overrides to reduce GameWorks' overuse of tessellation (in The Witcher 3 and every other game going forward) with zero IQ reduction; Fermi users have to kick off big style to get a performance fix for this title and others that Nv stopped "optimising" once Maxwell was the lead architecture; CDPR has to introduce an in-game slider for Nv users.

GPP: hey AIBs, join the program and give us exclusivity on all your brands; you can't use them on AMD as they are **** and making us look bad, and you will be put to the bottom of the queue for allocation if you don't sign up. Yes, we want Strix, Lightning, Aorus, Windforce etc. branding to be Nv exclusive. Translated: we are just going to absorb your branding and there's nowt you can do about it.

AMD's Fine Wine: "a load of rubbish".

7970s > Titans over time.

6GB 1060: you don't need more than that; fast forward and 6GB's not enough, and the slower 480s run higher settings.
The pattern continued: surpassing equivalent lower-VRAM cards over time...


Moving on to the current climate...

Nv dominates RT, AMD has zero support; AMD users: it's **** anyway, don't want it. :cry:

AMD get RT but it's rubbish in comparison; Nv users complain about AMD RT titles because the RT is rubbish and doesn't use enough ray tracing because the hardware can't do it (pathetic hardware), and there's ZERO worldwide backlash from AMD users, as it's true.

AMD pay two devs to wreck 10GB cards: MELTDOWN. :p

Fast forward past Nv's second RT gen, Ampere, and AMD is running RT and ultra textures while most of Nv's RT hardware has to turn RT off because of the VRAM hit in today's titles. You couldn't make this up, considering Marv is not allowed to be talked about because of the grief it causes in here. :(

And that's despite DLSS 2 being a proprietary software feature you have to buy into, ONLY available on compatible NV GPUs, and DLSS 3 being 40-series locked. :cry:

AMD's open-source FSR, which works on anything, is mocked for being worse than native while DLSS is "better than native", and an outcry ensues because they locked out DLSS in a few early titles.

News breaks that 12GB is the new entry level for high-end GPUs; Nv users go absolutely tonto with excuses like it's down to bad launch performance, despite devs stating 12/16GB for given settings.

Nvidia ups the 40 series from 8GB to 12GB and from 10GB to 16GB due to today's increased VRAM requirements.


Moral of the story: the bad guy is dominating, running the show!

Edit:

What about Maxwell? Canny believe I missed that, it was the best.

970 4GB: a RIOT ensues and all hell breaks loose!

Remember the peelable "4GB" to "3.5GB+" stickers and the free flash drive memes?


Jensen: sorry but not sorry, we didn't lie about the config, we went with a NEW segmented memory system. Em, you lied again: it's the second GPU made by Nv like that, as a previous 60-series card used the same segmented system, and users punished Nv by returning them to buy the more expensive 980! :cry:


Best award goes to the wooden Fermi reveal! :p
 
But between 2009 and 2016 we went from 512MB/1GB to 8GB in the mainstream. We went from 1.5GB/2GB to 11GB at the high end.

Since 2016, we have mostly stagnated at 8GB (apart from two cards) in the mainstream. We went from 11GB to 24GB at the high end.

If the mainstream had kept up with the high end, we should have been at 12GB/16GB by last generation.

Yet during the period since 2009 we have also had 3.5 console generations. Also, the processing power of dGPUs has increased massively: an R9 390 8GB has much less processing power than an RTX3070 8GB. SSDs are much faster now, so PCs can read textures much quicker too.

So despite all this we are still stuck with only 8GB? It's not only a limitation on texture size and quantity, but also on the amount of detail in them. Imagine if desktop PCs hadn't increased system RAM since 2016? Or SSD size?

It's becoming a limitation now. Even as a modder, I now tend to have to mod to keep textures within 8GB!
It would have been nice to have extra VRAM, no doubt, but I don't want extra VRAM to overcome **** optimization, just like back then I wouldn't have wanted a stronger tessellation unit in AMD hardware just to tessellate flat surfaces or underground, invisible oceans. That's my stance on it. TLOU is the example of tessellated invisible water and flat surfaces in my book.
That Middle-earth game is the same, not properly optimized. I haven't seen such issues in the likes of AC Unity, where you had big crowds of NPCs and a huge, detailed city.
 
It would have been nice to have extra VRAM, no doubt, but I don't want extra VRAM to overcome **** optimization, just like back then I wouldn't have wanted a stronger tessellation unit in AMD hardware just to tessellate flat surfaces or underground, invisible oceans. That's my stance on it. TLOU is the example of tessellated invisible water and flat surfaces in my book.
That Middle-earth game is the same, not properly optimized. I haven't seen such issues in the likes of AC Unity, where you had big crowds of NPCs and a huge, detailed city.
The big issue is that massive jump in dGPU power over the same period. So what we are seeing is seven years of companies making more money by not including enough VRAM. We have more cores, more system RAM and bigger SSDs. Optimisation is important, but it must also be a lot of work for devs.
 
It would have been nice to have extra VRAM, no doubt, but I don't want extra VRAM to overcome **** optimization, just like back then I wouldn't have wanted a stronger tessellation unit in AMD hardware just to tessellate flat surfaces or underground, invisible oceans. That's my stance on it. TLOU is the example of tessellated invisible water and flat surfaces in my book.
That Middle-earth game is the same, not properly optimized. I haven't seen such issues in the likes of AC Unity, where you had big crowds of NPCs and a huge, detailed city.


 
Reading the "one's as bad as the other" comments :cry:

snip

And yet, you still consider only Nvidia GPUs and not AMD, and keep buying the controversial GPUs, i.e. the 970, 3070 and 3080, and are now considering the 4070... :D :p :cry:





Some valid points, but just on the FreeSync bit: FreeSync launched in a complete mess and stayed a mess for a good two years after launch (by which point G-Sync had already been out for one to two years). Now you could say that it's the monitor manufacturers' fault for skimping on the built-in scalers etc., but then again AMD should have tightened up and done a better job of QC, as they did when they moved to newer versions of FreeSync, i.e. FreeSync 2 and Premium (and even then they still aren't problem-free, but I digress...). When FreeSync launched, most of the monitors had black screening, flickering, poor FreeSync ranges, no low frame rate compensation, lack of variable pixel response overdrive (still don't have this, IIRC) and many other little things.
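On the low frame rate compensation point, for anyone unfamiliar with it: the idea is that when the game's frame rate falls below the monitor's variable refresh range, frames are multiplied so the panel stays in range. A minimal sketch of that idea, as a simplified model rather than how any particular driver actually implements it:

```python
# Simplified model of low framerate compensation (LFC) via integer frame multiplication.
# Real driver/scaler implementations are more involved; this only shows the basic idea.

def lfc_refresh_hz(fps: float, vrr_min_hz: float, vrr_max_hz: float) -> float:
    """Pick the smallest integer multiple of the game's frame rate that lands
    inside the monitor's variable refresh range (frame doubling, tripling, etc.)."""
    multiplier = 1
    while fps * multiplier < vrr_min_hz:
        multiplier += 1
    return min(fps * multiplier, vrr_max_hz)

# A 48-144 Hz panel: 30 fps is doubled to 60 Hz, 20 fps is tripled to 60 Hz.
print(lfc_refresh_hz(30, 48, 144))  # 60.0
print(lfc_refresh_hz(20, 48, 144))  # 60.0
```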

Also, FreeSync isn't really an AMD invention either; it's just AMD's marketing name for the VESA open standard, i.e. Adaptive-Sync, in the same way that Nvidia call their support for Adaptive-Sync "G-Sync Compatible".

That's generally what sets Nvidia and AMD apart: Nvidia have had somewhat decent/usable features for a good while. When you're first to market with a good solution, it's no surprise you will bank on this; AMD don't have this choice, as they arrive late and with an inferior solution. Guaranteed AMD would be doing the same if they were in Nvidia's shoes.

Again, nothing wrong with waiting years for a solution and/or a usable/good one, but it's not for me any more, hence why I personally am happy to pay a premium to get this.
 
And yet, you still consider only Nvidia GPUs and not AMD, and keep buying the controversial GPUs, i.e. the 970, 3070 and 3080, and are now considering the 4070... :D :p :cry:
Course I am; I already told you I prefer NV over AMD, because end users don't care about ****housing, so better the devil you know, with access to everything bar longevity.

I don't whine that AMD's as bad as NV though, do I? Put up more "AMD's as bad as NV" examples and prove yourself correct for once, I'll wait...
 
It would have been nice to have extra VRAM, no doubt, but I don't want extra VRAM to overcome **** optimization, just like back then I wouldn't have wanted a stronger tessellation unit in AMD hardware just to tessellate flat surfaces or underground, invisible oceans. That's my stance on it. TLOU is the example of tessellated invisible water and flat surfaces in my book.
That Middle-earth game is the same, not properly optimized. I haven't seen such issues in the likes of AC Unity, where you had big crowds of NPCs and a huge, detailed city.

Heck, look at CP2077 path tracing: it runs better on a 3070 8GB than on a 7900 XTX 24GB, in a huge, dense open world with path tracing, and it looks better "overall" than anything else currently out. Since the issues shown in Forspoken, TLOU Part 1 (although I think most of those issues have been fixed now by the devs, so much for it not being a game issue though ;)) and so on are never game issues and are 100% down to the hardware, then I guess we can say the same for the 3070 vs the 7900 XTX in CP2077 path tracing :p ;) :D
 
Heck, look at CP2077 path tracing: it runs better on a 3070 8GB than on a 7900 XTX 24GB, in a huge, dense open world with path tracing, and it looks better "overall" than anything else currently out. Since the issues shown in Forspoken, TLOU Part 1 (although I think most of those issues have been fixed now by the devs, so much for it not being a game issue though ;)) and so on are never game issues and are 100% down to the hardware, then I guess we can say the same for the 3070 vs the 7900 XTX in CP2077 path tracing :p ;) :D
Isn't that after the update which wrecked rasterisation performance on Navi? But if 8GB is enough, how come the RTX4070 series now has 12GB and the RTX4080 has 16GB? Nvidia knows something you don't! :cry:
 
Speaking of Mantle (well, mentioning Mantle), can you still run the Mantle renderer in the games that support Mantle (Thief and Beyond Earth are the 2 I remember off the top of my head, was there also a Battlefield?) or is that completely gone from AMD drivers now?
 
Course I am; I already told you I prefer NV over AMD.

I don't whine that AMD's as bad as NV though, do I? Put up more "AMD's as bad as NV" examples and prove yourself correct for once, I'll wait...

Like I said in my post, there is a big difference:

when you're first to market with a good solution, it's no surprise you will bank on this; AMD don't have this choice, as they arrive late and with an inferior solution. Guaranteed AMD would be doing the same if they were in Nvidia's shoes.

i.e. probably one of the main reasons you prefer nvidia and keep buying their products ;)

Hence why, as per this thread and what I always post, maybe start blaming AMD for not providing competition rather than blaming Nvidia and thinking they are the bad guys; we're in this position because of AMD not putting the effort in.


As for what AMD have done in terms of bad practices, I have always said Nvidia are worse, but AMD aren't the "people's champion" like many have been brainwashed into thinking, with AMD's "victim mentality":

- Nvidia offered an "open source" solution to implement all brands' upscaling tech in one go, which would have benefited all customers and developers, but nope: despite all the preaching AMD and their loyal fans do about being open-source driven and doing what is best for both gamers and developers, they refuse to go with this option, I wonder why...
- remember their launches with Polaris, how lacklustre they were and basically rebadges: "poor Volta", "overclocker's dream"
- remember the Fury X with 4GB VRAM, i.e. downgraded from previous AMD flagships with Nvidia having the lead here; VRAM amount didn't matter then though, "4GB is enough for 4K" ;)
- AMD with their "technical" sponsorships removing DLSS and Nvidia features, forcing Nvidia users to use inferior features; meanwhile, in Nvidia's technical sponsorships, AMD tech is there

Nvidia got a huge ton of **** thrown at them for their power adapter cables melting, which, as shown by Gamers Nexus and other reputable sources, ended up being user error (granted, a poor design); meanwhile RDNA 3 has a legitimate "hardware issue" and it's "oh, nothing to worry about, just return it and get a replacement" :cry:

Again, this is why you vote with your wallet: if you don't like a certain company's practices, you don't buy their products; only then will said company take note. Simple.

Jensen and the shareholders reading posts like yours while you go off to buy another one of their products:


:D

Isn't that after the update which wrecked rasterisation performance on Navi? But if 8GB is enough, how come the RTX4070 series now has 12GB and the RTX4080 has 16GB. Nvidia knows something you don't!
:cry:

*But it's not the game's/developers' fault!!!!

*Using the same logic as those who say it's not the game's/developers' fault when games have issues because of VRAM, and that it has absolutely nothing to do with the way they have designed the game to utilise the consoles' memory management system with DirectStorage.

I guess they should have added more VRAM to the 3090 to stop it ******** the bed in these newest titles too :cry:

I think we have all agreed that going forward, as of "now", 12GB is the minimum, especially if you refuse to use DLSS/FSR, due to how some of these recent games are just being straight-ported from console to PC.


Main point which no one has answered so far... what benefit are those with a large VRAM pool, i.e. 24GB, getting over console games/people with less VRAM, other than just avoiding issues? Generally, even going from the ultra to the high texture setting makes zero difference, as shown in TLOU... The only real beneficial use case is for those who mod in high-res texture packs.

Speaking of Mantle (well, mentioning Mantle), can you still run the Mantle renderer in the games that support Mantle (Thief and Beyond Earth are the 2 I remember off the top of my head, was there also a Battlefield?) or is that completely gone from AMD drivers now?

Knowing AMD, with their over-the-fence approach and wanting to be hands-off, it probably doesn't work any more. Mantle was fantastic back in the day; it really provided a nice boost on my i5 750 and 290 system in BF4 and Hardline. Shame it didn't get into many other games back then.
 
*But it's not the game's/developers' fault!!!!

*Using the same logic as those who say it's not the game's/developers' fault when games have issues because of VRAM, and that it has absolutely nothing to do with the way they have designed the game to utilise the consoles' memory management system with DirectStorage.

I guess they should have added more VRAM to the 3090 to stop it ******** the bed in these newest titles too :cry:

I think we have all agreed that going forward, as of "now", 12GB is the minimum, especially if you refuse to use DLSS/FSR, due to how some of these recent games are just being straight-ported from console to PC, of course.


Main point which no one has answered so far... what benefit are those with a large VRAM pool, i.e. 24GB, getting over console games/people with less VRAM, other than just avoiding issues? Generally, even going from the ultra to the high texture setting makes zero difference, as shown in TLOU... The only real beneficial use case is for those who mod in high-res texture packs.

But Nvidia wouldn't increase VRAM amounts unless they know something about what the next generation of engines will do. There are console refreshes incoming too. An example is UE5, which has all sorts of new tech included. That UE5 developer talked in detail about the extra VRAM being useful to improve HD textures for character models, etc. Isn't this the whole 2GB vs 4GB, or 4GB vs more-than-4GB, discussion people had all those years ago? Or when dual cores were replaced by quad cores? Or when six-core CPUs started winning over quad cores?

Lots of games end up having to do things like texture streaming to manage VRAM usage. This is why games have texture pop-in, as there are different pools of textures which need to be loaded. The bigger issue still is that PC games don't seem to really use SSD storage effectively, and that too many systems are stuck on SATA drives and low-end DRAM-less drives.
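To illustrate the texture-streaming idea, here's a toy sketch of a "drop mip levels until the pool fits the budget" policy. This is only a simplified model (real engines prioritise by distance, screen coverage and residency feedback, and stream asynchronously), and the texture names and sizes are made up:

```python
# Toy sketch of fitting a texture pool into a VRAM budget by dropping mip levels.
# Dropping one mip level roughly quarters a texture's memory footprint.

def fit_textures_to_budget(texture_sizes_mb: dict[str, float], budget_mb: float) -> dict[str, int]:
    """Return how many mip levels to drop per texture so the pool fits the budget."""
    drops = {name: 0 for name in texture_sizes_mb}

    def total_mb() -> float:
        return sum(size / (4 ** drops[name]) for name, size in texture_sizes_mb.items())

    # Greedily drop a mip from whichever texture currently takes the most memory.
    while total_mb() > budget_mb:
        largest = max(texture_sizes_mb, key=lambda n: texture_sizes_mb[n] / (4 ** drops[n]))
        drops[largest] += 1
    return drops

# Example: three hypothetical 4K textures squeezed into ~300 MB of remaining budget.
print(fit_textures_to_budget({"rock": 170.0, "cliff": 170.0, "character": 85.0}, 300.0))
```

The visible symptom of this kind of juggling is exactly the pop-in described above: the lower-mip version is shown until the full-resolution texture can be streamed in.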

Edit!!

Also not sure how you think Nvidia are doing well:

NVIDIA (NASDAQ: NVDA) today reported revenue for the fourth quarter ended January 29, 2023, of $6.05 billion, down 21% from a year ago and up 2% from the previous quarter. GAAP earnings per diluted share for the quarter were $0.57, down 52% from a year ago and up 111% from the previous quarter. Non-GAAP earnings per diluted share were $0.88, down 33% from a year ago and up 52% from the previous quarter.

Nvidia's Gaming Revenue Takes Another Hit, Falls 46% Despite RTX 4000 GPUs​


Demand for the company's GPUs slumps even with the arrival of Nvidia's first RTX 4000 graphics cards during last year's holiday season.

So for all the noise about being a gaming champion this is happening:

NVIDIA’s gaming revenue for the last quarter of 2022 came in at $1.83 billion, up from $1.57 billion in Q3. Ergo, Team Green has a lead of only $200 million over its Red rival. It’s worth noting that NVIDIA’s Gaming revenue primarily includes GeForce sales and the Nintendo Switch SoC, with a little bit from the GeForce NOW streaming service.

Most of that is from consoles, which have most of the R&D paid for by MS/Sony/Valve, so it's a much lower-risk market than dGPUs!
 
But Nvidia wouldn't increase VRAM amounts unless they know something about what the next generation of engines will do. There are console refreshes incoming too. An example is UE5, which has all sorts of new tech included. That UE5 developer talked in detail about the extra VRAM being useful to improve HD textures for character models, etc. Isn't this the whole 2GB vs 4GB, or 4GB vs more-than-4GB, discussion people had all those years ago? Or when dual cores were replaced by quad cores? Or when six-core CPUs started winning over quad cores?

Lots of games end up having to do things like texture streaming to manage VRAM usage. This is why games have texture pop-in, as there are different pools of textures which need to be loaded. The bigger issue still is that PC games don't seem to really use SSD storage effectively, and that too many systems are stuck on SATA drives and low-end DRAM-less drives.

Edit!!

Also not sure how you think Nvidia are doing well:





So for all the noise about being a gaming champion this is happening:



Most of that is from consoles, which have most of the R&D paid for by MS/Sony/Valve, so it's a much lower-risk market than dGPUs!

Again, hence the "from now onwards" where more VRAM will be needed... Funny thing is, though, isn't even a 4090 so far ******** the bed in all the UE5 demos we have seen with ray tracing? i.e. come game releases, we'll have new hardware, and anyone who wants to max settings for a smooth PC gaming experience will have to upgrade again anyway, especially if the games are poor ports designed around the consoles' hardware.

I don't disagree with the point about "progress" in tech and visual enhancements, hence why my question is:

Main point which no one has answered so far... what benefit are those with a large VRAM pool, i.e. 24GB, getting over console games/people with less VRAM, other than just avoiding issues? Generally, even going from the ultra to the high texture setting makes zero difference, as shown in TLOU... The only real beneficial use case is for those who mod in high-res texture packs.

I'm all for more VRAM if it is actually going to provide a worthwhile benefit to my gaming experience with regards to visuals, other than just avoiding "game" issues...

Would you say that, for example, the 3090 has proved to be worth the "extra" £750 over the last 2-3 years? Bearing in mind it is now ******** the bed in the same titles where a 3070 ***** the bed; of course, texture quality can be left higher than on the 3070 without crashing, but has that "really" been a worthwhile showcase for the extra VRAM? IMO, nope. Maybe in another year or two it will be more worthwhile? Possibly, but then we'll have new tech and new advancements in game visuals, and so the cycle starts all over again.



It's not surprising sales are down given there's no mining craze, no lockdown, and inflation is hitting, so people are cutting back on their spending. Still doesn't change the fact that their GPUs are selling and, from the looks of it, better than AMD's?

I don't care about the console market and how consoles are selling; it's a different market, and it's not rocket science that consoles are the more popular market...
 
Like I said in my post, there is a big difference:



i.e. probably one of the main reasons you prefer nvidia and keep buying their products ;)

Hence why, as per this thread and what I always post, maybe start blaming AMD for not providing competition rather than blaming Nvidia and thinking they are the bad guys; we're in this position because of AMD not putting the effort in.


As for what AMD have done in terms of bad practices, I have always said Nvidia are worse, but AMD aren't the "people's champion" like many have been brainwashed into thinking, with AMD's "victim mentality":



- Nvidia offered an "open source" solution to implement all brands' upscaling tech in one go, which would have benefited all customers and developers, but nope: despite all the preaching AMD and their loyal fans do about being open-source driven and doing what is best for both gamers and developers, they refuse to go with this option, I wonder why...

- remember their launches with Polaris, how lacklustre they were and basically rebadges: "poor Volta", "overclocker's dream"

- remember the Fury X with 4GB VRAM, i.e. downgraded from previous AMD flagships with Nvidia having the lead here; VRAM amount didn't matter then though, "4GB is enough for 4K" ;)
- AMD with their "technical" sponsorships removing DLSS and Nvidia features, forcing Nvidia users to use inferior features; meanwhile, in Nvidia's technical sponsorships, AMD tech is there

Nvidia got a huge ton of **** thrown at them for their power adapter cables melting, which, as shown by Gamers Nexus and other reputable sources, ended up being user error (granted, a poor design); meanwhile RDNA 3 has a legitimate "hardware issue" and it's "oh, nothing to worry about, just return it and get a replacement" :cry:

Again, this is why you vote with your wallet: if you don't like a certain company's practices, you don't buy their products; only then will said company take note. Simple.

Jensen and the shareholders reading posts like yours while you go off to buy another one of their products:




*But it's not the game's/developers' fault!!!!

*Using the same logic as those who say it's not the game's/developers' fault when games have issues because of VRAM, and that it has absolutely nothing to do with the way they have designed the game to utilise the consoles' memory management system with DirectStorage.

I guess they should have added more VRAM to the 3090 to stop it ******** the bed in these newest titles too :cry:

I think we have all agreed that going forward, as of "now", 12GB is the minimum, especially if you refuse to use DLSS/FSR, due to how some of these recent games are just being straight-ported from console to PC.

Main point which no one has answered so far... what benefit are those with a large VRAM pool, i.e. 24GB, getting over console games/people with less VRAM, other than just avoiding issues? Generally, even going from the ultra to the high texture setting makes zero difference, as shown in TLOU... The only real beneficial use case is for those who mod in high-res texture packs.



Knowing AMD, with their over-the-fence approach and wanting to be hands-off, it probably doesn't work any more. Mantle was fantastic back in the day; it really provided a nice boost on my i5 750 and 290 system in BF4 and Hardline. Shame it didn't get into many other games back then.
I've struck out your reasons people don't buy AMD, which I mostly agree with, but I asked for ****houser comparisons to even up the "as bad as each other" you're claiming.

Nvidia offered an open-source delivery system to make life easier for their paywalled proprietary solution and AMD's open-source solution, in-game features that are both selectable via an on/off switch when you pause the game, right?

Why am I seeing all these games with every solution in the options?

AMD locking out (I'll be generous here with) a dozen AAA titles that include upscaling for everyone: how can anyone remotely compare that to, say, the PhysX lockout over generations of GPUs?

4GB on the Fury X wasn't deceptive, it was lunacy, but they didn't lie twice about that 4GB and have to pay out compensation in the USA.

Besides, everyone laughed at 4GB except Matt and a handful of other users; not as if threads weren't getting closed because of **** storms from angry AMD users.

However, balls of steel for even throwing **** memory allocation into a discussion about 4K and longevity. :cry:
 
GTX 970.
GTX 1070.
RX 5700XT for about 4 weeks.
RTX 2070 Super.

I'm not going to spend my hard-earned cash on something that I just don't think is good enough. Polaris, while there was nothing at all wrong with it, was not good enough for my needs, and Vega IMO had everything wrong with it, not a single redeeming thing about it: too much power, too slow for what it was, problematic... a lot like Arc, actually exactly like Arc.

The 970, despite the shenanigans of its 3.5GB buffer, was a good GPU. I liked it.
The GTX 1070 was a great GPU; I really liked that one.
The RTX 2070 Super is also a good GPU but also flawed: it technically has RTX, but not really, as it was never a GPU with powerful enough RT, not even on day one, let alone three years later, and its lack of VRAM became a real problem about two years into its life.

However, I prefer AMD as a company, very much more. Nvidia are just the absolute worst for all the reasons @tommybhoy laid out, and more. AMD are no angels, far from it, but ##### me...

Since RDNA 2 I'm happy to buy AMD again, and am glad of it, because the less of my money Nvidia get, the more I smile.
 
I've struck out your reasons people don't buy AMD, which I mostly agree with, but I asked for ****houser comparisons to even up the "as bad as each other" you're claiming.

Nvidia offered an open-source delivery system to make life easier for their paywalled proprietary solution and AMD's open-source solution, in-game features that are both selectable via an on/off switch when you pause the game, right?

Why am I seeing all these games with every solution in the options?

AMD locking out (I'll be generous here with) a dozen AAA titles that include upscaling for everyone: how can anyone remotely compare that to, say, the PhysX lockout over generations of GPUs?

4GB on the Fury X wasn't deceptive, it was lunacy, but they didn't lie twice about that 4GB and have to pay out compensation in the USA.

Besides, everyone laughed at 4GB except Matt and a handful of other users; not as if threads weren't getting closed because of **** storms from angry AMD users.

However, balls of steel for even throwing **** memory allocation into a discussion about 4K and longevity. :cry:

Maybe I haven't had enough coffee but not sure what you're trying to say here:

Nvidia offered an open-source delivery system to make life easier for their paywalled proprietary solution and AMD's open-source solution, in-game features that are both selectable via an on/off switch when you pause the game, right?
Why am I seeing all these games with every solution in the options?

FSR uptake is still somewhat questionable, especially for the later versions. Why not support an open-source solution that benefits everyone, more so the developers? AMD have made comments before about how they want something developers want to use and that makes their lives easier, especially at the time when they were working on FSR, yet they won't support the one thing that would have done just that. As a developer myself, I know which I would rather have:

- having to implement all 3 versions separately, i.e. basically 3 times the work, especially if they aren't integrated into the engine
- being able to implement all 3 versions with the use of a single tool (a rough sketch of that idea follows below)
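As an illustration of why that second option appeals to developers, here is a rough sketch of the "integrate once, dispatch to whichever upscaler the user picks" idea. All of the class and method names are made up for illustration; none of them correspond to a real SDK:

```python
# Hypothetical sketch of a single upscaler abstraction. The names below are invented
# purely to illustrate integrating once and swapping DLSS/FSR/XeSS-style backends.

from abc import ABC, abstractmethod

class Upscaler(ABC):
    @abstractmethod
    def upscale(self, frame: bytes, render_res: tuple, output_res: tuple) -> bytes: ...

class DlssBackend(Upscaler):
    def upscale(self, frame, render_res, output_res):
        return frame  # a real backend would call the vendor SDK here

class FsrBackend(Upscaler):
    def upscale(self, frame, render_res, output_res):
        return frame

class XessBackend(Upscaler):
    def upscale(self, frame, render_res, output_res):
        return frame

def pick_backend(user_choice: str) -> Upscaler:
    """The game integrates against Upscaler once; the vendor is just a menu option."""
    return {"dlss": DlssBackend(), "fsr": FsrBackend(), "xess": XessBackend()}[user_choice]

frame_out = pick_backend("fsr").upscale(b"\x00" * 16, (1280, 720), (2560, 1440))
```

The renderer only ever touches the common interface, so adding or updating a vendor backend doesn't ripple through the rest of the game code.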

PhysX was an Nvidia feature in Nvidia technically-sponsored titles that only added to the games' visuals; it didn't lock AMD users out of playing the game with good visuals. I don't disagree that it's scummy, in the same way AMD have locked out DLSS and even RT in their technically-sponsored games (see Boundary...); my example was to counter the PhysX example you provided.

There wasn't that much of an uproar, i.e. you didn't have several threads and every other thread constantly going on about VRAM, and there were far more defenders of it back then too; I should know, as I was one of them :p Much like now, when issues did start to show, grunt was more of an issue, and neither GPU, i.e. the Fury X and the 980 Ti, was a 4K gaming card...
 
Maybe I haven't had enough coffee but not sure what you're trying to say here:




FSR uptake is still somewhat questionable, especially for the later versions. Why not support an open-source solution that benefits everyone, more so the developers? AMD have made comments before about how they want something developers want to use and that makes their lives easier, especially at the time when they were working on FSR, yet they won't support the one thing that would have done just that. As a developer myself, I know which I would rather have:

- having to implement all 3 versions separately, i.e. basically 3 times the work, especially if they aren't integrated into the engine
- being able to implement all 3 versions with the use of a single tool

PhysX was an Nvidia feature in Nvidia technically-sponsored titles that only added to the games' visuals; it didn't lock AMD users out of playing the game with good visuals. I don't disagree that it's scummy, in the same way AMD have locked out DLSS and even RT in their technically-sponsored games (see Boundary...); my example was to counter the PhysX example you provided.

There wasn't that much of an uproar, i.e. you didn't have several threads and every other thread constantly going on about VRAM, and there were far more defenders of it back then too; I should know, as I was one of them :p Much like now, when issues did start to show, grunt was more of an issue, and neither GPU, i.e. the Fury X and the 980 Ti, was a 4K gaming card...

I'm going to do you a favour: I won't even counter any of that, I'll give you a pass.


According to the above, AMD have done two brutal shockers.

To nullify both of your "just as bad as Nv" points, here are a few more of the many I haven't even mentioned yet :p

Re-releasing I don't know how many 2060s that keep getting slower.

Trying to take the **** again with two differing 4080s after they successfully launched two different 1060s; woooooooooooooooft, there goes the **** list.
 