They generally launch with the latest standard as far as I'm aware. So most likely.

Did we ever get any strong hints that HDMI 2.1 was definitely coming with the next gen cards?
Or am I merely hoping?
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Maybe Nvidia will tier their HDMI support:
Low-end (< £750) = HDMI 2.0a
Mid-range (£750-1500) = HDMI 2.0b
High-end (£1500-3000) = HDMI 2.1
Ultra-megalodon (£3000+) = HDMI-X, brought to you by Nvidia in partnership with Russ Andrews.
Edit: Added 4th tier
Not a bad idea, since most people who want an HDMI 2.1 GPU want it to play games at 4K 120 Hz on their TVs, and only the highest-end GPUs will have performance anywhere close to being capable of that.
You also forgot about DisplayPort 2.0
yes but an absence of multi-threaded software unfortunately

PhysX was a thing back in 2005 because CPUs were struggling to run a game, let alone anything else. Fast forward to today and we have cores aplenty to throw at physics calcs.
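The "cores aplenty" point is easy to illustrate: per-object integration steps are independent, so they parallelise naturally across workers. A minimal sketch in Python (the function names, timestep, and body count are illustrative, not from any real engine; note CPython's GIL limits pure-Python thread gains, so a real engine would do this with native threads):

```python
# Illustrative only: spread independent per-body physics integration
# steps across a pool of workers, the kind of work a dedicated PhysX
# card once handled. All names and numbers here are made up.
from concurrent.futures import ThreadPoolExecutor
import os

DT = 1.0 / 120.0  # fixed timestep (illustrative)

def integrate(body):
    """Semi-implicit Euler step for one body: (position, velocity) tuples."""
    pos, vel = body
    vel = (vel[0], vel[1] - 9.81 * DT)                 # apply gravity
    pos = (pos[0] + vel[0] * DT, pos[1] + vel[1] * DT) # advance position
    return pos, vel

# 10,000 bodies starting at height 10 with horizontal velocity 1.
bodies = [((float(i), 10.0), (1.0, 0.0)) for i in range(10_000)]

# One worker per core; each body integrates independently of the others.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    bodies = list(pool.map(integrate, bodies, chunksize=1024))
```

The point isn't this toy code, just that nothing in the per-body step serialises it: modern 8+ core CPUs can soak up this class of work that in 2005 needed a separate card.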
You said run better, hence I was wondering exactly how. So you meant faster. I am thinking unless one has a super old CPU there won't be any difference at all, due to the way it would be coded in games. What I am saying is: if there were a difference between an AMD and an Nvidia GPU in games that use it in the last few years, you can bet that Nvidia would be making a big deal of it. It simply is not a unique selling point anymore and has not been for quite a few years now.

PhysX FAQ
Think about it. Obviously PhysX runs faster on the GPU than the CPU.
"
Does PhysX scale across the GPU and CPU? If yes, does that mean having a faster CPU enhances PhysX performance or visual quality?
PhysX uses both the CPU and GPU, but generally the most computationally intensive operations are done on the GPU. A CPU upgrade could result in some performance improvement, as would a GPU upgrade, but the relative improvement is very dependent on the initial balance of the system. An optimized PC with the right mix of CPU to GPU horsepower will be the best balanced solution.
Intel and AMD say it’s better to run physics on the CPU. What is NVIDIA’s position?
PhysX runs faster and will deliver more realism by running on the GPU. Running PhysX on a mid-to-high-end GeForce GPU will enable 10-20 times more effects and visual fidelity than physics running on a high-end CPU. Portions of PhysX processing actually run on both the CPU and GPU, leveraging the best of both architectures to deliver the best experience to the user. More importantly, PhysX can scale with the GPU hardware inside your PC. Intel and AMD solutions, which utilize the Havok API, are fixed function only and cannot scale.
"
Obviously it depends on when that was written. But I still think that PhysX, in the presence of an Nvidia card, will run on the GPU.
This is 2018
Recent Physx
+1, I have to agree: if PhysX genuinely were a USP that made Nvidia GPUs performant, then Nvidia would be singing it to the rafters each and every day.

You said run better, hence I was wondering exactly how. So you meant faster. I am thinking unless one has a super old CPU there won't be any difference at all, due to the way it would be coded in games. What I am saying is: if there were a difference between an AMD and an Nvidia GPU in games that use it in the last few years, you can bet that Nvidia would be making a big deal of it. It simply is not a unique selling point anymore and has not been for quite a few years now.
DLSS 2.0. Now that is a unique selling point
Key words being "was"

I bought two 780 Tis at random because I wanted to play Arkham Asylum and Arkham City with Batman's cape animated using PhysX, so it was certainly a selling point to me.
Fantastic cards that I only got rid of in 2014'ish due to lack of vram.
CPU PhysX sucks, but now we are stuck with it
Yes I would agree, BUT I would still argue most games use 2 threads or less if you're lucky, meaning your argument is with lazy software developers. The hardware is there, but you have to use it, just like RAM capacity.

This, and sadly people celebrate it... same with ray tracing. Yet both can provide a far superior experience when implemented properly and with the performance to back it up, but because they've never experienced it and lack the vision to see what it can bring, people largely rubbish it out of hand.
Yes I would agree, BUT I would still argue most games use 2 threads or less if you're lucky, meaning your argument is with lazy software developers. The hardware is there, but you have to use it, just like RAM capacity.
Can we all shut up about PhysX.
I'm sorry, please feel free to post about any of the other redundant topics that make their rounds every day in this thread, such as:
* RTX is expensive memes
* RTX is hot memes
* LOL ray tracing memes
* LOL DLSS memes
* Inform us which card you want to buy
* Ask again if the new cards have HDMI 2.1
* Post more made up specs
* Guess the release date
etc