• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

NVIDIA ‘Ampere’ 8nm Graphics Cards

Permabanned
Joined
28 Nov 2009
Posts
2,582
Location
İzmir
Maybe Nvidia will tier their HDMI support:

Low-end (< £750) = HDMI 2.0a
Mid-range (£750-1500) = HDMI 2.0b
High-end (£1500-3000) = HDMI 2.1
Ultra-megalodon (£3000+) = HDMI-X, brought to you by Nvidia in partnership with Russ Andrews.

Edit: Added 4th tier :o
 
Last edited:
Soldato
Joined
6 Feb 2019
Posts
17,468
Maybe Nvidia will tier their HDMI support:

Low-end (< £750) = HDMI 2.0a
Mid-range (£750-1500) = HDMI 2.0b
High-end (£1500-3000) = HDMI 2.1
Ultra-megalodon (£3000+) = HDMI-X, brought to you by Nvidia in partnership with Russ Andrews.

Edit: Added 4th tier :o

Not a bad idea, since most people who want an HDMI 2.1 GPU want it so they can play games at 4K 120Hz on their TVs, and only the highest-end GPUs will have performance anywhere close to being capable of that.

You also forgot about DisplayPort 2.0.
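As a rough, back-of-the-envelope illustration of why 4K 120Hz pushes past HDMI 2.0 - a simplified sketch that counts active pixels only, ignores blanking intervals, and assumes the commonly quoted usable link rates (HDMI 2.0 ~14.4 Gbps, HDMI 2.1 ~42.7 Gbps):

```cpp
// Simplified 4K 120Hz bandwidth estimate (active pixels only, blanking ignored).
#include <cstdio>

int main()
{
    const double width = 3840, height = 2160, refresh = 120;

    for (int bitsPerChannel : {8, 10}) {
        const double bpp  = bitsPerChannel * 3;   // RGB / 4:4:4, no chroma subsampling
        const double gbps = width * height * refresh * bpp / 1e9;
        std::printf("4K %d-bit @ %.0f Hz: ~%.1f Gbps of pixel data\n",
                    bitsPerChannel, refresh, gbps);
    }
    // Commonly quoted usable link rates after encoding overhead:
    std::printf("HDMI 2.0 usable: ~14.4 Gbps; HDMI 2.1 usable: ~42.7 Gbps\n");
}
```

Even at 8-bit that is roughly 24 Gbps of pixel data, more than HDMI 2.0 can carry but comfortably within HDMI 2.1.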
 
Soldato
Joined
6 Jan 2013
Posts
21,839
Location
Rollergirl
Not a bad idea, since most people who want an HDMI 2.1 GPU want it so they can play games at 4K 120Hz on their TVs, and only the highest-end GPUs will have performance anywhere close to being capable of that.

You also forgot about DisplayPort 2.0.

With G-Sync, 4K @ 75fps will be enough, and surely that's going to be possible with the 3080.
 

TNA

Caporegime
Joined
13 Mar 2008
Posts
27,196
Location
Greater London
PhysX FAQ

Think about it. Obviously PhysX runs faster on the GPU than on the CPU.

"
Does PhysX scale across the GPU and CPU? If yes, does that mean having a faster CPU enhances PhysX performance or visual quality?
PhysX uses both the CPU and GPU, but generally the most computationally intensive operations are done on the GPU. A CPU upgrade could result in some performance improvement, as would a GPU upgrade, but the relative improvement is very dependent on the initial balance of the system. An optimized PC with the right mix of CPU to GPU horsepower will be the best balanced solution.

Intel and AMD say it’s better to run physics on the CPU. What is NVIDIA’s position?
PhysX runs faster and will deliver more realism by running on the GPU. Running PhysX on a mid-to-high-end GeForce GPU will enable 10-20 times more effects and visual fidelity than physics running on a high-end CPU. Portions of PhysX processing actually run on both the CPU and GPU, leveraging the best of both architectures to deliver the best experience to the user. More importantly, PhysX can scale with the GPU hardware inside your PC. Intel and AMD solutions, which utilize the Havok API, are fixed function only and cannot scale.

"

Obviously it depends on when that was written, but I still think that PhysX, in the presence of an Nvidia card, will run on the GPU.

This is from 2018:


Recent PhysX
You said it runs better, hence I was wondering exactly how. So you meant faster. I am thinking that unless one has a super old CPU there won't be any difference at all, due to the way it would be coded in games. What I am saying is, if there was a difference between an AMD and an Nvidia GPU in games that have used it in the last few years, you can bet that Nvidia would be making a big deal of it. It simply is not a unique selling point anymore and has not been for quite a few years now.

DLSS 2.0. Now that is a unique selling point ;)
 
Associate
Joined
16 Jan 2010
Posts
1,415
Location
Earth
You said it runs better, hence I was wondering exactly how. So you meant faster. I am thinking that unless one has a super old CPU there won't be any difference at all, due to the way it would be coded in games. What I am saying is, if there was a difference between an AMD and an Nvidia GPU in games that have used it in the last few years, you can bet that Nvidia would be making a big deal of it. It simply is not a unique selling point anymore and has not been for quite a few years now.

DLSS 2.0. Now that is a unique selling point ;)
+1, I have to agree. If PhysX genuinely was a USP that made Nvidia GPUs more performant, then Nvidia would be singing about it to the rafters each and every day...
 
Last edited:
Soldato
Joined
7 Aug 2012
Posts
2,640
I bought two 780 Tis at random because I wanted to play Arkham Asylum and Arkham City with Batman's cape animated using PhysX, so it was certainly a selling point to me.

Fantastic cards that I only got rid of in 2015-ish due to lack of VRAM.
 
Soldato
Joined
6 Feb 2019
Posts
17,468
Compare PhysX in Borderlands 1 to PhysX in Borderlands 3 and tell me that it doesn't look far, far better in the older game, which runs it off the GPU. CPU PhysX sucks, but now we are stuck with it.
 
Last edited:
Man of Honour
Joined
13 Oct 2006
Posts
90,824
CPU PhysX sucks, but now we are stuck with it.

This - and sadly people celebrate it... Same with ray tracing: both can provide a far superior experience when implemented properly and with the performance to back it up, but because they've never experienced them and lack the vision to see what they can bring, people largely rubbish them out of hand.
 
Associate
Joined
21 Apr 2007
Posts
2,483
This - and sadly people celebrate it... Same with ray tracing: both can provide a far superior experience when implemented properly and with the performance to back it up, but because they've never experienced them and lack the vision to see what they can bring, people largely rubbish them out of hand.
Yes, I would agree, BUT I would still argue that most games use two threads or fewer if you're lucky, meaning your argument is with lazy software developers: the hardware is there, but you have to use it, just like RAM capacity.
 
Man of Honour
Joined
13 Oct 2006
Posts
90,824
Yes, I would agree, BUT I would still argue that most games use two threads or fewer if you're lucky, meaning your argument is with lazy software developers: the hardware is there, but you have to use it, just like RAM capacity.

One of the problems is the lack of unified hardware acceleration support - developers don't want to include features that only a subset of their audience can run (which would happen if you built physics that required hardware acceleration fundamentally into the application), and/or spend time supporting multiple vendor paths, etc.

As an aside to that, much of the PhysX API itself is multi-threaded and will run on separate threads to your main code - it is also trivial to thread any heavyweight batches of calls to it within your own application code, although the performance benefits aren't always worthwhile, though it can have an impact on how "responsive" your game feels.
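For anyone curious what threading a heavyweight batch yourself can look like, here is a minimal, generic C++ sketch - not the actual PhysX API; the Body struct and integrate() step are made-up stand-ins for an expensive batched simulation call - showing the work overlapped with the main thread and joined only when the results are needed:

```cpp
#include <future>
#include <vector>
#include <cstdio>

struct Body { float x, y, z, vx, vy, vz; };

// Stand-in for an expensive batched update (think: one big simulate step).
void integrate(std::vector<Body>& bodies, float dt)
{
    for (Body& b : bodies) {
        b.vy -= 9.81f * dt;   // gravity
        b.x  += b.vx * dt;
        b.y  += b.vy * dt;
        b.z  += b.vz * dt;
    }
}

int main()
{
    std::vector<Body> bodies(100000, Body{0.0f, 100.0f, 0.0f, 1.0f, 0.0f, 0.0f});
    const float dt = 1.0f / 60.0f;

    for (int frame = 0; frame < 3; ++frame) {
        // Kick the physics batch off onto a worker thread...
        auto physicsDone = std::async(std::launch::async,
                                      [&] { integrate(bodies, dt); });

        // ...while the main thread gets on with game logic, input, audio, etc.

        // Only block when the results are actually needed (e.g. before rendering).
        physicsDone.wait();
        std::printf("frame %d: body[0].y = %.2f\n", frame, bodies[0].y);
    }
}
```

Whether this is worth doing depends on how heavy the batch actually is; for small workloads the synchronisation cost can eat the gain, which is the "benefits aren't always worthwhile" caveat above.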
 
Soldato
Joined
6 Feb 2019
Posts
17,468
Can we all shut up about PhysX?

I'm sorry - please feel free to post about any of the other redundant topics that make their rounds every day in this thread, such as:

* RTX is expensive memes
* RTX is hot memes
* LOL ray tracing memes
* LOL DLSS memes
* Inform us which card you want to buy
* Ask again if the new cards have HDMI 2.1
* Post more made up specs
* Guess the release date

etc
 
Soldato
Joined
16 Jan 2006
Posts
3,020
I'm sorry - please feel free to post about any of the other redundant topics that make their rounds every day in this thread, such as:

* RTX is expensive memes
* RTX is hot memes
* LOL ray tracing memes
* LOL DLSS memes
* Inform us which card you want to buy
* Ask again if the new cards have HDMI 2.1
* Post more made up specs
* Guess the release date

etc

I don't have any Ampere rumours, so I don't post other than to try and keep the thread on topic.

Want to talk about PhysX? Start a thread. Let this thread die. I only want to see it at the top when there is a leak, rumour, etc.
 