
Do AMD provide any benefit to the retail GPU segment?

More Sapphire Pulse cards.

RX 6800, 16GB card. £500.


RX 6800 XT, £549.

 
Typically I find the game that is my "go to" is the one where my 3070 runs out of VRAM....

[Screenshot: memory.png]


.....what difference that makes I'm not sure, as it's not as though I have an 8GB / 16GB switch to throw. Otherwise the card is rock solid and has been great at either 1440p or ultrawide; it runs really well. There are some later titles it seems to run out of grunt with, to a point, Hogwarts springs to mind, I need to get back to that one. But a part of me reckons that inadequate development and optimisation plays a big part in dragging the performance down, as does any GPU limitation.

It wasn't my first choice of that gen; that went to a Red Devil Limited Edition 6800 XT. It lasted around 2 weeks before being returned for a refund. That AMD card ran as expected in the AAA titles I had to test it with, but in the indie-type games I played it was horrendous. AMD support forums were of little use, but it did remind me of the historical problems I had in the past with ATI.

With the above in mind there will probably be a resignation and acceptance of sticking with another Nvidia card, even though I would like that to be a choice made knowing an AMD one would work as expected; but it doesn't, for me.

Look at the VRAM in the OSD on this, 120Hz. Keep your eye on the system RAM as I approach the scrapyard: the VRAM is already full and it needs more, so where does it put it? System RAM. Keep your eye on it and watch the frame rates go from 100-120 down to 30.

RTX 2070 Super.
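
For context on why the frame rate collapses like that once the spill starts, here's a rough back-of-the-envelope comparison (just the commonly published figures for a 2070 Super and PCIe 3.0, not anything measured from the clip): any texture that has spilled to system RAM has to be fetched over PCIe, which is well over an order of magnitude slower than the card's own GDDR6.

```python
# Rough sketch: why VRAM overspill into system RAM tanks frame rates.
# Figures are the commonly published specs for an RTX 2070 Super and PCIe 3.0.

gddr6_bus_bits = 256                # 2070 Super memory bus width
gddr6_gbps_per_pin = 14             # GDDR6 data rate per pin
vram_bandwidth = gddr6_bus_bits * gddr6_gbps_per_pin / 8    # ~448 GB/s

pcie3_x16_bandwidth = 16 * 0.985    # ~15.8 GB/s for a PCIe 3.0 x16 link

print(f"On-card VRAM bandwidth : ~{vram_bandwidth:.0f} GB/s")
print(f"PCIe 3.0 x16 bandwidth : ~{pcie3_x16_bandwidth:.0f} GB/s")
print(f"Spilled textures fetch ~{vram_bandwidth / pcie3_x16_bandwidth:.0f}x slower")
```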

 
And that's a perfect example of terrible optimization. You think this game, the way it looks, should require more than 8GB of VRAM? The game looks like it came straight from 2008, something like Fallout 3.

It's a 2013 game.

something like Fallout 3

It looks nothing like Fallout 3.

It's built in Unity (https://unity.com/), one of the best engines out there; IMO second only to Unreal Engine.

I do wish people would stop using "optimisation" as a catch-all for whatever doesn't run well on gimped hardware; doing that just encourages said hardware vendor to keep selling you gimped low-end hardware at enthusiast-grade pricing.

Besides that, people who use the "optimisation" catch-all have no clue what it means in game development.
 
Look, the behaviour it's exhibiting, seamlessly pushing VRAM overspill into system RAM without so much as a slight stutter or reduction in texture quality, is testament to the quality of the code.

I'm sorry, but if a 2013 game that looks like that needs more than 8GB of VRAM, then what should, for example, Cyberpunk require? 40GB? If that's not awful optimization then I don't know what to tell you. The game looks way worse than games that require a quarter of the VRAM. Crysis 3 came out around the same time, and so did The Witcher 3.

Cyberpunk has nice lighting; beyond that it doesn't look especially good, just OK, in my view. Lighting is everything in that game; without it, it looks "meh".
 
I didn't say it's a good thing at all. I actually do agree with you.
But if you have 2 different GPUs that perform relatively similarly for similar prices, is the better card the one that stays roughly the same, or the one that could gain 10-20% more over the next couple of years? Sure, it's not 100% a good thing at the start, but the idea that the GPU will potentially last longer does make it more appealing.

Regardless, my point was less about "fine wine" and more that AMD have already lost the marketing game. Even if AMD released a product 20% cheaper with 20% more performance, they still wouldn't outsell Nvidia. They really need to go with the first-gen Ryzen mentality and absolutely crush the price difference. First-gen Ryzen came out almost 50% cheaper than Intel's comparable top end even if it was 20% slower, and that difference really helped with market share.

Memory costs more as the sizes get larger and takes up more PCB space, so I can see why they don't want to go to endless capacities; 16GB is, however, relatively cheap, and cheap enough that Nvidia could easily do it. I don't think it's so much "why shouldn't AMD just put more than 16GB on the cheaper models" and more that it's clearly planned obsolescence on Nvidia's side. We already know some games are being bottlenecked by GPU memory, so Nvidia should create a product to fix that issue; they just don't want to.


I have some sympathies for Nvidia, to a very limited extent....

When it comes to memory (or rather the analogue side of the silicon), Moore's law really is dead, and if that is what Nvidia are talking about they are not wrong. As a result the memory architecture is taking up more and more room within the chip, so Nvidia are saving die space by having fewer memory lanes on it, which is perhaps why we now have a 192-bit 4070/Ti, down from 256-bit on the 3070/Ti; that saves quite a bit of space. The problem with that is that even if you're using 2GB memory ICs you're still only getting 12GB, so it may be more an optimisation exercise than something cynically deliberate, though I do think the 8GB 3070 and 10GB 3080 were cynical.
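
To put rough numbers on that bus-width point, here's a quick sketch of the usual one-IC-per-32-bit-channel GDDR6 arithmetic (nothing official, just the standard configurations):

```python
# Rough sketch: VRAM capacity from bus width and memory IC density.
# Each GDDR6/GDDR6X IC normally sits on its own 32-bit channel, so:
#   capacity (GB) = (bus width / 32) * density per IC (GB)

def vram_capacity(bus_width_bits: int, ic_density_gb: int) -> int:
    channels = bus_width_bits // 32      # one IC per 32-bit channel
    return channels * ic_density_gb      # total VRAM in GB

print(vram_capacity(256, 1))   # 3070-style 256-bit bus, 1GB ICs  ->  8 GB
print(vram_capacity(256, 2))   # same bus with 2GB ICs            -> 16 GB
print(vram_capacity(192, 2))   # 4070-style 192-bit bus, 2GB ICs  -> 12 GB
```

So once the bus drops to 192-bit, the only way past 12GB with today's 2GB ICs is doubling up ICs per channel (clamshell), which costs board space and money.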

It has to be said AMD face the same problem; they are the ones who have explained this issue of analogue silicon no longer shrinking.
But AMD came up with a clever solution: move the analogue portion off the logic die. That way you can make it on a cheap, older node and it doesn't matter, and what's more you can pick and choose how many memory lanes you have, because you can "glue" them to the logic die like Lego.

For all the bluster and talk of AMD being a poor second to Nvidia...... mmmmmmm...... maybe in software, but even that's debatable.
 
I don't like either of the control panels.
Nvidia's looks really dated, but I sort of like the simplicity.
AMD's looks better and less dated, although it is starting to look a little dated now (IMO). What I dislike the most about AMD's is that it looks like it was made for a smartphone or tablet, with everything being ****-off great big buttons that look like I'm going to be dabbing at them with my fat fingers rather than a mouse pointer. Make it about a quarter or an eighth of the size and I think it would seem much more reasonable and useable.

I've not used the GeForce Experience thing so I've no idea what that's like.

Maybe what both of them need is the ability to use skins (like Afterburner) including community made ones. Maybe then everyone could find something they liked and it would take the burden off of AMD and Nvidia.

What I dislike the most about AMD's is that it looks like it was made for a smartphone or tablet with everything being ****-off great big buttons that look like I'm going to be dabbing at them with my fat fingers rather than a mouse pointer.

That's probably no coincidence, given there is an AMD smartphone control panel app, because of course there is.... Actually, jokes aside, it is pretty good: apart from monitoring tools and changing GPU settings remotely from the app, it also streams your desktop or a PC game to your smart TV, tablet, phone or laptop, and all at the same time.
 
Just reading up on it now: with v5 you can have up to 4 players streamed locally over Wi-Fi, like a 4-player LAN party.

I mean, that's ^^^^ a nice bonus.... bring the family together for a 4 way LAN party.
 
But does it need to look like that on my PC? I'm fine with the smartphone/tablet looking interface on a smartphone or tablet.

Does sound quite nifty that though, even if I doubt I'll ever use it.

No... I know what you mean and I'm with you on that; when my PC starts mimicking phones is when I lose the will to live.
 
Having used both, the AMD software is just more functional and the UI is better, because the Nvidia software hides settings under obscure menus. For example, I tend to undervolt and use custom curves, and have to use third-party tools like MSI Afterburner to do so with my current dGPU; the AMD software integrates this as standard functionality. Even their CPU software is far more functional than the Intel equivalent.

Since MSI Afterburner is coded by a Russian developer, development nearly ended due to the financial sanctions on Russia. This almost left Nvidia dGPU owners without a way to do these things with their dGPUs. This should be part of the standard driver suite. With Nvidia having higher margins than Apple, you would think that they could be bothered to add this instead of relying on MSI etc. to fund development.
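
For anyone who wants a workaround in the meantime, some of this can be scripted against NVML directly rather than through Afterburner; it doesn't expose a proper voltage/frequency curve editor, but power limits are there. A minimal sketch using the pynvml Python bindings (the 180 W figure is just a placeholder, and setting limits needs admin/root rights):

```python
# Minimal sketch: query and cap a GPU's board power limit via NVML (pynvml).
# Not a full undervolt/curve editor, just what NVML exposes directly.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)    # first GPU in the system

current_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)
print(f"Current power limit: {current_mw / 1000:.0f} W")

# Cap board power to 180 W (value in milliwatts) - placeholder figure,
# requires admin/root and must sit within the card's allowed min/max range.
pynvml.nvmlDeviceSetPowerManagementLimit(handle, 180_000)

pynvml.nvmlShutdown()
```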

Yeah, I like Ryzen Master too; everything that matters in the BIOS you find in the software, and any setting you change in it then takes effect in the BIOS.

AMD have put a lot of work into making enthusiast-level software, good software, for both CPUs and GPUs.

Not so long ago, first-party software like this was a mere dream for people like us. And here it is, now underrated and largely ignored.

[Screenshot: L1FQx4G.png]
 
I've not used Ryzen Master since the early days and it seemed to offer very little then.
I was going to ask what it's used for, but I guess I shouldn't as it'd be taking the thread off topic.

Well, there ^^^^ it is; everything you see there is settings. Your curves are here.

Not off topic; GPUs, CPUs, it's all relevant to the question. :)

[Screenshot: WhEutpr.png]
 
I think AMD are still stuck in the "Black Edition" days; they are a company of engineers, nerds, people who think we want and like these tools.

Largely that may no longer be true, but don't tell them that. I like them, I want them.
 
AMD even have a button in the GPU drivers that will tighten the timings of the memory ICs on the GPU.

Who else but AMD would think of something like that and then add it to their drivers for us to play with?
 
Have you even played Rust? It actually looks very good at 4K; it's a very good-looking game, especially for a game that old.

What are you expecting graphics-wise? With a game world that's controlled by players who can build literally anything they want, it's surprising to me that it doesn't need more memory.

A 700-player server with hundreds of buildings, some ridiculous in size, and looking as it does: it's a very well optimized game.

No he's never played it, he doesn't even know what it is, he only knows Cyberpunk.
 
That explains it. I think it's a great-looking game; I play at 1440p as I can't run it at 4K, but it looks fantastic for its age and they have significantly improved the graphics over the years.

You can't compare a single-player game to a multiplayer game, especially a multiplayer game where everything is player-built.

One of the biggest issues is lighting: you have to restrict how good it looks, as it can severely impact FPS depending on the system. Now with Cyberpunk, the only good thing about the way it looks is the lighting; everything else is a bit meh, in all honesty.

For him to compare a 500+ player game, which has to restrict lighting because that is what would hit performance hardest, with a game which literally specialises in lighting (it can be argued that's the only good-looking thing about it) is just a ridiculous argument.

It is a good-looking game, yes. I was surprised researching its release date; I thought it was much newer than 10 years old. I've only been playing it on and off for a couple of years.

I like the servers that are friendly for a couple of weeks, the ones that allow you to build your little cave or small town... before it all kicks off and it's every man/woman for themselves. A good fun game; some of the constructions people build are massive.

I agree with you on Cyberpunk, I've said it myself: it has beautiful lighting, but that is everything in that game visually. It's Nvidia's showcase for all their RTX work; if not for that, yeah, it looks pretty naff.
 
So it's just what you have in the BIOS? I'd just do it in the BIOS.
Asus have something similar in their software suite I believe (or used to, I've not installed it recently) but I didn't care for that either, I just wanted the fan control software.
I'm sure everyone always said to do tweaking in the BIOS. Not sure why that would be any different now that it's AMD software instead of Asus or whoever.

Yeah, and I agree with that, but I have used it for quick and dirty tuning to test in the desktop environment.

So it's useful, but you're right, it's not what one should be using to do it properly. More than anything I appreciate the sentiment: it costs money and time, they didn't have to do it, and others don't.
It's like 1usmus: the free tools he spends his time making come from enthusiasm for the subject. He does this because of the type of person he is, someone I would quite like to have a drink with and talk nerd to, knowing that's all good; he's on my level.
So while AMD making these tools might not be ultimately useful (and AMD probably know this themselves), they still do it, because they are that kind of people, so to me, that's all good.
 
A dev on Reddit has confirmed the practice, but he also explained it a bit more, so it's not just laziness.

As we know UE is now dominant in the AAA industry, and this engine utilises streaming of textures whilst playing.

The consoles have lots of hardware optimisations which make the overhead of loading textures really low, hence either reduced or no stutters (they do of course also have a better VRAM-to-TFLOPS balance).

On the PC this is a problem of course, but according to this dev, if they used system RAM more instead of VRAM (which is an option on PC) it would be even worse, unless of course VRAM is being saturated, in which case using system RAM would improve things.

He didn't really offer a solution, though, other than that DirectStorage will improve things.

My take from what he said is that they choose a solution that works best for cards with decent VRAM, and it's the usual "upgrade" answer for those suffering VRAM starvation.

Of course, time saving will still be a part of this in my opinion, as it's quicker to port a game if you don't have to tinker with the memory allocation code.
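
To make that VRAM-versus-system-RAM trade-off a bit more concrete, here's a toy sketch (entirely hypothetical names and budgets, not code from UE or any real engine) of a streaming pool that prefers VRAM and only spills textures to system RAM once the VRAM budget is saturated:

```python
# Toy sketch of the trade-off above: keep textures in a fixed VRAM budget and
# only spill to system RAM (slow, fetched over PCIe) once that budget is full.
# All names and sizes are hypothetical - this is not real engine code.

VRAM_BUDGET_MB = 8192        # e.g. an 8GB card
SYSRAM_BUDGET_MB = 16384     # spill-over pool in system RAM

vram_used = 0
sysram_used = 0
placement = {}               # texture name -> "vram", "sysram" or "dropped"

def stream_in(texture: str, size_mb: int) -> None:
    """Place a texture in VRAM if it fits, otherwise spill to system RAM."""
    global vram_used, sysram_used
    if vram_used + size_mb <= VRAM_BUDGET_MB:
        vram_used += size_mb
        placement[texture] = "vram"      # fast path
    elif sysram_used + size_mb <= SYSRAM_BUDGET_MB:
        sysram_used += size_mb
        placement[texture] = "sysram"    # slow path over PCIe
    else:
        placement[texture] = "dropped"   # would have to evict or drop mip levels

# Simulate streaming twenty 512MB texture sets on the 8GB card:
for i in range(20):
    stream_in(f"texture_set_{i}", 512)

print(sum(v == "vram" for v in placement.values()), "sets resident in VRAM")
print(sum(v == "sysram" for v in placement.values()), "sets spilled to system RAM")
```

On a 16GB card the same workload would stay entirely in VRAM, which is roughly the "upgrade" answer the dev is describing.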

--

Personally I wish UE would just go away, but sadly it's getting more common if anything. I have played low-end games using UE4 and they still had dodgy performance; it just seems a horrid engine.

Also, a few other devs who responded spoke about UE4, and they admitted the engine has practically no built-in optimisation. Most of these comments were on a thread about the new Star Wars game.
He is not the first dev to say this and he will not be the last; I have been saying it for at least 2 years.

UE is now dominant in the industry because it's nothing short of brilliant. And it's not going to get any better: every other engine developer is going to want to emulate the technology.

Live texture streaming and Object Container Streaming first appeared, to my knowledge, in 2016 in Star Citizen. It's the only way to get seamless transitions from space to the surface of planets, especially in a multiplayer environment where you have crewmates in the back of your ship doing their own thing, so you can't level-load.
Then Sony did it on the PS5 with Ratchet and Clank: again, seamless, no level loading.

When the CIG Frankfurt office cracked this, after working on the tech for about 2 years, they quietly put out a little celebration video.


And in game in 2020.


This has been coming for years, and frankly PCMR dGPUs are being left behind by game consoles, and game developers have just had enough of being strangled by that. Even CIG, making a PC-exclusive game, are saying you're going to have to run proper hardware to play our game, because we can't make the game we want to make for middling dGPUs. They do try to keep it running OK on 8GB hardware, but they have talked about how difficult a task that is; it's a lot of time and effort. It runs better with a 12GB, and certainly a 16GB, GPU and a fast NVMe drive.

I don't do it just because I want to hate on Nvidia; I have a project in UE5 that's on hold until I get a GPU with more VRAM, because 8GB just won't do it.
8GB isn't even low end these days; low end is 12GB, mid range 16GB, higher end 20GB at least. There is no reason for Nvidia or AMD not to do that, VRAM costs them peanuts, so the only reason they would do this is planned obsolescence. The RTX 3070 and 3080 are exactly that, and as a PC enthusiast I do not take kindly to BS like that; having to pay hundreds and hundreds of pounds for these things, I take it personally. I feel like I'm being manipulated and ripped off by some cynically disrespectful people.

We should all call them out on it and demand better, because right now PCMR is a pee-taking joke, and it's you and me they are taking the pee out of....
 
Isn't Star Citizen using an evolved version of CryEngine?

They started out using CryEngine, but it's not CryEngine anymore; it's so heavily modified there is literally nothing left of it, it's their own in-house engine now.
They haven't used the CryEngine logo on the splash screen for years.
-----------

In order for technology to progress someone has to take the lead. Developers are not waiting on PC hardware vendors anymore; they are forcing them to catch up, and if they don't, we will all be running game consoles.
 