In theory, but I've never seen my 4070 Ti get anywhere close to the 280W limit even under full load; it's more like 220W. With a frame lock, which I always use, it's around 160W.

If you compare the RTX 4070 Ti 12GB it consumes 100W more power.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
TPU tested Cyberpunk 2077 at Ultra settings at 4K (with no RT) to get that figure. The reality is that GDDR6X is a bit heavy on power consumption:

In theory, but I've never seen my 4070 Ti get anywhere close to the 280W limit even under full load; it's more like 220W. With a frame lock, which I always use, it's around 160W.
Ok, let's put a stop to this silliness. Middle Earth: Shadow of War. In 2017 it required over 8GB@1080p, so I'd say we knew at least since that year that 8GB VRAM was going to age soon.
"Soon" is six years and four generations later (now, including the 1xxx series). Not to mention that back then, besides the 1080 Ti that launched that year with 11GB, what other consumer card had more than 8GB? And was 11GB even enough at 1440p, never mind 4K?
It's just sub-optimal optimization.
It would have been nice to have extra vram, no doubt, but I don't want extra vram to overcome **** optimization, just like I wouldn't have wanted back then a stronger tessellation unit in AMD hardware just to tessellate flat surfaces or underground, invisible oceans. That's my stand on it. TLOU is the example of tessellated invisible water and flat surfaces in my book.

But between 2009 and 2016 we went from 512MB/1GB to 8GB in the mainstream. We went from 1.5GB/2GB to 11GB at the high end.
Since 2016, we mostly stagnated at 8GB (apart from two cards) in the mainstream. We went from 11GB to 24GB at the high end.
If the mainstream had kept up with the high end, we should have been at 12GB/16GB by last generation.
Yet during the period since 2009, we also had 3.5 console generations. Also, the processing power of dGPUs has increased massively. An R9 390 8GB has much less processing power than an RTX 3070 8GB. SSDs are much faster now, so PCs can read textures much quicker too.
So despite all this we are still stuck with only 8GB? It's not only a limitation of texture size and quantity, but also the amount of detail in them. Imagine if desktop PCs hadn't increased system RAM since 2016? Or SSD size?
It's becoming a limitation now. Even as a modder, I now have to mod to keep textures within 8GB!
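For what it's worth, here's a quick back-of-the-envelope version of that scaling argument in Python, using only the capacities quoted above (the card-class labels in the comments are just my shorthand, not a market survey):

```python
# Rough check of the "mainstream should have kept pace with the high end" argument.
highend_2016 = 11      # GB, 1080 Ti class
highend_now = 24       # GB, 3090/4090 class
mainstream_2016 = 8    # GB, the 8GB mainstream cards mentioned above

growth = highend_now / highend_2016            # roughly 2.2x at the high end
implied_mainstream = mainstream_2016 * growth  # roughly 17 GB

print(f"High-end VRAM grew about {growth:.1f}x since 2016")
print(f"Mainstream scaled the same way would be about {implied_mainstream:.0f} GB")
```

Scaling 8GB by the same ~2.2x the high end saw lands around 17GB, so the 12GB/16GB figure above is, if anything, on the conservative side.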
The big issue is that massive jump in dGPU power during the same time. So what we are seeing is seven years of companies making more money by not including enough VRAM. We have more cores, more system RAM and bigger SSDs. Optimisation is important, but it also must be so much work for devs too.

It would have been nice to have extra vram, no doubt, but I don't want extra vram to overcome **** optimization, just like I wouldn't have wanted back then a stronger tessellation unit in AMD hardware just to tessellate flat surfaces or underground, invisible oceans. That's my stand on it. TLOU is the example of tessellated invisible water and flat surfaces in my book.
That Middle Earth, the same, not properly optimized. I haven't seen such issues in the likes of AC Unity, where you had big crowds of NPCs and a huge, detailed city.
It would have been nice to have extra vram, no doubt, but I don't want extra vram to overcome **** optimization, just like I wouldn't have wanted back then a stronger tessellation unit in AMD hardware just to tessellate flat surfaces or underground, invisible oceans. That's my stand on it. TLOU is the example of tessellated invisible water and flat surfaces in my book.
That Middle Earth, the same, not properly optimized. I haven't seen such issues in the likes of AC Unity, where you had big crowds of NPCs and a huge, detailed city.
Reading the "one's as bad as the other" comments
snip
Course I am, already told you I prefer NV over AMD, because end users don't care about **** housing, so better the devil you know, with access to everything bar longevity.

And yet, you still consider only nvidia gpus and not amd and keep on buying the controversial gpus i.e. 970, 3070 and 3080, and now considering the 4070.....
Isn't that after the update which wrecked rasterisation performance on Navi? But if 8GB is enough, how come the RTX 4070 series now has 12GB and the RTX 4080 has 16GB? Nvidia knows something you don't!

Heck, look at cp 2077 path tracing: it runs better on a 3070 8gb than a 7900xtx 24gb, huge dense open world with path tracing, looks better "overall" than anything else currently out. Since the issues shown in Forspoken, tlou 1 (although I think most of these issues have been fixed now by the devs, so much for not being a game issue though) and so on are never game issues and 100% down to the hardware, then I guess we can say the same for 3070 vs 7900xtx cp 2077 path tracing.
Course I am, already told you I prefer NV over AMD.
I don't whine that AMD's as bad as NV though, do I? Put up more "AMD's as bad as NV" examples and prove yourself correct for once, I'll wait...
When you're first to the market with a good solution, it's no surprise they will bank on it; amd don't have this choice as they arrived late and with an inferior solution. Guaranteed amd would be doing the same if they were in nvidia's shoes.
Speaking of Mantle (well, mentioning Mantle), can you still run the Mantle renderer in the games that support Mantle (Thief and Beyond Earth are the 2 I remember off the top of my head, was there also a Battlefield?) or is that completely gone from AMD drivers now?
*But it's not the game's/developers' fault!!!!
*using the same logic as those who say it's not the game's/developers' fault when games have issues because of vram and absolutely nothing to do with the way they have designed the game to utilise the consoles' memory management system with DirectStorage
I guess they should have added more vram to the 3090 to stop it ******** the bed in these newest titles too
I think we have all agreed that going forward as of "now", 12gb is the minimum especially if you refuse to use dlss/fsr due to how some of these recent games are just being straight ported from console to work on pc, of course.
Main point which no one has answered so far.... what benefit are those with a large vram pool i.e. 24gb getting over console games/people with less vram, other than just avoiding issues? Generally, even going from ultra to high texture settings makes zero difference, as shown in tlou..... The only real beneficial use case is for those who mod in high-res texture packs.
NVIDIA (NASDAQ: NVDA) today reported revenue for the fourth quarter ended January 29, 2023, of $6.05 billion, down 21% from a year ago and up 2% from the previous quarter. GAAP earnings per diluted share for the quarter were $0.57, down 52% from a year ago and up 111% from the previous quarter. Non-GAAP earnings per diluted share were $0.88, down 33% from a year ago and up 52% from the previous quarter.
NVIDIA’s gaming revenue for the last quarter of 2022 came in at $1.83 billion, up from $1.57 billion in Q3. Ergo, Team Green has a lead of only $200 million over its Red rival. It’s worth noting that NVIDIA’s Gaming revenue primarily includes GeForce sales and the Nintendo Switch SoC, with a little bit from the GeForce NOW streaming service.
But Nvidia wouldn't increase VRAM amounts unless they knew something about what the next generation of engines will do. There are console refreshes incoming too. An example is UE5, which has all sorts of new tech included. That UE5 developer talked in detail about the extra VRAM being useful to improve HD textures for character models, etc. Isn't this the whole 2GB vs 4GB, or 4GB vs more-than-4GB, discussion people had all those years ago? Or when dual cores were replaced by quad cores? Or when six-core CPUs started winning over quad cores?
Lots of games end up having to do things like texture streaming to manage VRAM usage. This is why games have texture pop-in, as there are different pools of textures which need to be loaded. The bigger issue still is that PC games don't seem to really use SSD storage effectively, and that too many systems are stuck on SATA drives and low-end DRAM-less drives.
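To make that concrete, here's a toy sketch of the idea behind texture streaming: keep a budget, track what's resident, and evict the least-recently-used textures when a new request would blow past it. The class, budget and texture names are made up for illustration; no real engine works exactly like this:

```python
from collections import OrderedDict

class TextureStreamer:
    """Toy LRU texture pool: evicts the least-recently-used textures
    when loading a new one would exceed the VRAM budget."""

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.resident = OrderedDict()   # texture name -> size in bytes
        self.used = 0

    def request(self, name, size):
        if name in self.resident:
            self.resident.move_to_end(name)   # already resident: mark recently used
            return
        # Evict until the new texture fits; evicted textures have to be
        # re-streamed later, which is where visible pop-in comes from.
        while self.used + size > self.budget and self.resident:
            _, evicted_size = self.resident.popitem(last=False)
            self.used -= evicted_size
        self.resident[name] = size
        self.used += size

# An 8GB budget fills up fast once 4K texture sets get involved.
pool = TextureStreamer(budget_bytes=8 * 1024**3)
pool.request("city_albedo_4k", 85 * 1024**2)
```

A faster NVMe drive shrinks the penalty for every eviction, which is exactly the SSD point above.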
Edit!!
Also not sure how you think Nvidia are doing well:
Nvidia's Gaming Revenue Takes Another Hit, Falls 46% Despite RTX 4000 GPUs
Demand for the company's GPUs slumps even with the arrival of Nvidia's first RTX 4000 graphics cards during last year's holiday season.
www.pcmag.com
So for all the noise about being a gaming champion this is happening:
Console Sales Put AMD's GPU Sales Nearly on Par with NVIDIA's Gaming Revenue in Q4 2022 | Hardware Times
NVIDIA has a near monopoly in the gaming PC market with its GeForce RTX graphics cards (and some older GTXs). AMD’s GPU market share has faltered in the last 4-5 years as the company ditched the high-end enthusiast market, starting with the RDNA family. Luckily for Team Red, its GPU...
www.hardwaretimes.com
Most of that is from consoles, which have most of the R&D paid for by MS/Sony/Valve, so it is a much lower-risk market than dGPUs!
I've struck out your reasons people don't buy AMD, which I mostly agree with, but I asked for houser comparisons to even up the "as bad as each other" you're claiming.

Like I said in my post, there is a big difference:
i.e. probably one of the main reasons you prefer nvidia and keep buying their products
Hence why as per this thread and what I always post, maybe start blaming amd for not providing competition rather than blaming nvidia and thinking they are the bad guys, we're in this position because of amd not putting the effort in
For those who dislike nvidia, why do you keep buying nvidia and not amd?
Let's keep all the nvidia vs amd BS out so no baiting/trolling at all and keep purely to the thread title question. Context: Genuinely find this fascinating lately, the amount of threads that just end up with nvidia bashing is crazy now i.e. "rt/rtx is ****", "dlss is fake/crap", "frame...
forums.overclockers.co.uk
As for what amd have done in terms of bad practices, I have always said nvidia are worse, but amd aren't the "people's champion" like many have been brainwashed into thinking with amd's "victim mentality":
- nvidia offered an "open source" solution to implement all brands' upscaling tech in one go, which would have benefited all customers and developers, but nope; despite all the preaching amd and their loyal fans do about being open source driven, doing what is best for both gamers and developers, they refuse to go with this option, I wonder why....
- remember their launches with polaris, how lacklustre they were and basically rebadged, "poor volta", "overclockers dream"
- remember the fury x with 4gb vram i.e. downgraded over previous amd flagships and nvidia having the lead here; vram amount didn't matter then though, "4GB is enough for 4k"
- amd with their "technical" sponsorships removing dlss and nvidia features, forcing nvidia users to use inferior features; meanwhile, in nvidia's technical sponsorships, amd tech is there
Nvidia got a huge ton of **** thrown at them for their power adapter cable melting, which, as proven by Gamers Nexus and other reputable sources, ended up being user error (granted, a poor design); meanwhile rdna 3 had a legitimate "hardware issue" and it was "oh it's nothing to worry about, just return and get a replacement"
Again, this is why you vote with your wallet: if you don't like a certain company's practices, you don't buy their products; only then will said company take note, simple.
Jensen and the shareholders reading posts like yours whilst you go off to buy another one of their products:
Knowing amd with their over-the-fence approach and wanting to be hands-off, it probably doesn't work any more. Mantle was fantastic back in the day, really provided a nice boost on my i5 750 and 290 system in bf 4 and hardline, shame it didn't get into many other games back then.
I've struck out your reasons people don't buy AMD, which I mostly agree with, but I asked for houser comparisons to even up the "as bad as each other" you're claiming.
Nvidia offered an open source delivery system to make life easier for their paywalled proprietary solution and AMD's open source solution, in-game features that are both selectable via an on/off switch when you pause the game, right?
Why am I seeing all these games with every solution in the options?
AMD locking out (I'll be generous here with) a dozen AAA titles that include upscaling for everyone; how anyone can remotely compare that to, say, the PhysX lockout over generations of GPUs...
4GB on the Fury X wasn't deceptive, it was lunacy, but they didn't lie twice about that 4GB and have to pay out compensation in the USA.
Besides, everyone laughed at 4GB except Matt and another handful of users; not as if threads weren't getting closed because of **** storms from angry AMD users.
However balls of steel for even throwing in **** memory allocation in relation to 4K and longevity.
Nvidia offered an open source delivery system to make life easier for their paywalled proprietary solution and AMD's open source solution, in-game features that are both selectable via an on/off switch when you pause the game, right?
Why am I seeing all these games with every solution in the options?
Maybe I haven't had enough coffee but not sure what you're trying to say here:
FSR uptake is still somewhat questionable, especially for the later versions. Why not support an open source solution to benefit everyone, more so the developers? AMD have made comments before about how they want something developers want to use and that makes their lives easier, especially at the time when they were working on FSR, yet they won't support the one thing that would have done just that. As a developer myself, I know what I would rather have (see the sketch after this list):
- having to implement all 3 versions separately i.e. 3 times the work basically especially if they aren't integrated into the engine
- being able to implement all 3 versions with the use of a tool
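Purely to illustrate the difference between those two options, here's a hypothetical sketch; none of the names correspond to any real vendor SDK or tool, they're made up for the example. With a single integration point, the renderer has one call site and the backend choice is just data:

```python
from enum import Enum

class Upscaler(Enum):
    DLSS = "dlss"
    FSR = "fsr"
    XESS = "xess"

# Option 1 means wiring three separate SDKs (init / evaluate / teardown)
# through the renderer and QA-ing each path on its own.
#
# Option 2 means integrating one interface once, with a plugin layer mapping
# it onto whichever backend the hardware/driver supports:

def evaluate_upscale(backend: Upscaler, colour, motion_vectors, depth):
    """Single call site in the render loop; backend choice is just data."""
    backends = {
        Upscaler.DLSS: lambda c, mv, d: f"dlss_upscale({c})",   # placeholder
        Upscaler.FSR:  lambda c, mv, d: f"fsr_upscale({c})",    # placeholder
        Upscaler.XESS: lambda c, mv, d: f"xess_upscale({c})",   # placeholder
    }
    return backends[backend](colour, motion_vectors, depth)

print(evaluate_upscale(Upscaler.FSR, "frame0_colour", "frame0_mv", "frame0_depth"))
```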
Physx was an nvidia feature in nvidia technical sponsored titles that only added to the game's visuals; it didn't lock out amd users from playing the game with good visuals. I don't disagree, it's scummy, the same way amd have done with locking out dlss and even RT in their technical sponsored games, see boundary.... my example of this was to counteract the physx example you provided.
There wasn't that much of an uproar, i.e. you didn't have several threads and every other thread constantly going on about vram; there were far more defenders of it back then too, I should know as I was one of them back then. Much like now, when issues did start to show, grunt was more of an issue and neither gpu, i.e. fury x and 980ti, was a 4k gaming card.....