Poll: Ray Tracing - Do you care? (183 voters; poll closed)
Caporegime
Joined
18 Oct 2002
Posts
32,617
It will also support the DXR API, according to Nvidia.
Predominantly, all ray tracing will work only on Nvidia proprietary tech, including GameWorks.


You are just making stuff up. The demos released have been using Microsoft DXR.
 
Caporegime
Joined
17 Mar 2012
Posts
47,534
Location
ARC-L1, Stanton System
Making stuff up. Hypocrite ^^^^

Nvidia have no desire to defeat open source; they are big proponents. Nvidia are pushing OpenGL and Vulkan far more than AMD is.

With ray tracing, Nvidia has already provided an open-source materials library, MDL, which is used by the likes of Adobe and Unreal.

RTX is being supported in open source projects like Blender.

lol, is that a joke? They are not, they are definitely not. They push DX11 more than any other API because that is where they are strongest vs the competition. Vulkan brings at least performance parity to AMD; if anything, Nvidia have been holding back the use of Vulkan.
 
Permabanned
Joined
2 Sep 2017
Posts
10,490
Ultra HD 4K means 8.3 megapixels x 400 rays/pixel x 60 FPS = 199 GigaRays/second, and some estimates say up to 10x that figure is needed for noise reduction.

So only 10 GigaRays/second would be extremely weak.

How many RT cores will there be, if CUDA cores go up to 4,608 and Tensor cores up to 576?
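For anyone wanting to check that arithmetic, here is a minimal sketch of the ray-budget calculation; the 400 rays/pixel and the 10x denoising margin are the assumptions quoted above, not official figures.

```python
# Back-of-envelope ray budget for 4K at 60 FPS, per the figures above.
# The 400 rays/pixel and the 10x denoising margin are the thread's assumptions,
# not official numbers.

def required_gigarays(width, height, rays_per_pixel, fps, denoise_factor=1):
    """Required ray throughput in GigaRays/second."""
    rays_per_second = width * height * rays_per_pixel * fps * denoise_factor
    return rays_per_second / 1e9

base = required_gigarays(3840, 2160, rays_per_pixel=400, fps=60)
with_margin = required_gigarays(3840, 2160, rays_per_pixel=400, fps=60, denoise_factor=10)

print(f"4K/60 at 400 rays/pixel: {base:,.0f} GigaRays/s")        # ~199
print(f"With a 10x noise-reduction margin: {with_margin:,.0f}")  # ~1,991
```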
 
Soldato
Joined
19 Dec 2010
Posts
12,026
I've seen a lot of posters quoting G-Sync module component costs like these, but I don't believe they're even close to accurate. Where are these figures from? They don't make sense.

That's the new G-Sync module that's in the HDR, 144Hz 4K monitors, not the one in any of the current non-HDR G-Sync monitors. The price of the new module came from a teardown that PCPer did. You can look up the part at online distributors; it costs $2,600 and you have to buy a minimum of three. They estimated what it costs Nvidia to buy one based on that.
 
Soldato
Joined
28 May 2007
Posts
18,233
That's the new G-Sync module that's in the HDR, 144Hz 4K monitors, not the one in any of the current non-HDR G-Sync monitors. The price of the new module came from a teardown that PCPer did. You can look up the part at online distributors; it costs $2,600 and you have to buy a minimum of three. They estimated what it costs Nvidia to buy one based on that.

So still no VRR support on the RTX cards?
 
Permabanned
Joined
12 Sep 2013
Posts
9,221
Location
Knowhere
That's the new G-Sync module that's in the HDR, 144Hz 4K monitors, not the one in any of the current non-HDR G-Sync monitors. The price of the new module came from a teardown that PCPer did. You can look up the part at online distributors; it costs $2,600 and you have to buy a minimum of three. They estimated what it costs Nvidia to buy one based on that.

That makes more sense now, Cheers.

The FPGA used on the HDR G-Sync module has a street price of $2,600. The rest of the parts on the module cost $250-300, as it's a bit more beefed up than the normal G-Sync module that costs $250. Do the maths.

My mistake I thought we were talking about regular modules. :)
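For reference, a quick sketch of "the maths" using the street prices quoted above; these are the thread's figures, not Nvidia's actual volume pricing.

```python
# Quick check of the module figures quoted above (street prices, not what
# Nvidia actually pays at volume).
fpga_street_price = 2600                       # FPGA on the HDR G-Sync module
other_parts_low, other_parts_high = 250, 300   # estimated rest of the module
standard_module = 250                          # quoted cost of the regular G-Sync module

print(f"HDR module BOM estimate: ${fpga_street_price + other_parts_low:,}"
      f" to ${fpga_street_price + other_parts_high:,}")    # $2,850 to $2,900
print(f"Standard module: ${standard_module}")               # roughly 11x cheaper
```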
 
Associate
Joined
9 Feb 2017
Posts
241
Ray tracing will be as important as (maybe more important than) transform and lighting was on the first GPU.
This will shorten development time and allow for more accurate visuals in all sorts of fields.
If you care about VR, AR, design, gaming, film or TV, then you should definitely care about ray tracing on the GPU, as all of these areas are going to benefit from the technology.
 
Soldato
Joined
28 May 2007
Posts
18,233
If there was going to be VRR support from Nvidia, do you believe they would have spent all that money to create their own HDR G-Sync module?
Or their own TV lineup?

They could offer both. Though it seems mental to keep pushing G-Sync, I think the focus group might have enough influence to keep people buying G-Sync. All the rhetoric would have to be is that G-Sync is better because AMD don't have £3,000 graphics cards. That point seems to be all that matters.

Nvidia are 5 years behind on implementing an open standard. Maybe they could just implement FreeSync monitor support on the refreshed GTX cards?
 
Soldato
Joined
22 Nov 2009
Posts
13,252
Location
Under the hot sun.
They could offer both. Though it seems mental to keep pushing G-Sync, I think the focus group might have enough influence to keep people buying G-Sync. All the rhetoric would have to be is that G-Sync is better because AMD don't have £3,000 graphics cards. That point seems to be all that matters.

Nvidia are 5 years behind on implementing an open standard. Maybe they could just implement FreeSync monitor support on the refreshed GTX cards?

Nah. NV have said numerous times, when asked about this, that they are selling "exclusive" products and want their customers to feel exclusive.
Can you imagine using FreeSync alongside consoles, AMD and Intel? Where is the exclusivity in that?
 
Soldato
Joined
28 May 2007
Posts
18,233
Nah. NV have said numerous times, when asked about this, that they are selling "exclusive" products and want their customers to feel exclusive.
Can you imagine using FreeSync alongside consoles, AMD and Intel? Where is the exclusivity in that?

Are Nvidia happy to sacrifice sales and go all lambo-boutique on the master race?
 
Soldato
Joined
22 Nov 2009
Posts
13,252
Location
Under the hot sun.
Why is this thread talking about monitors?

Dialogue; and indeed we digressed onto a completely different matter. I will steer the discussion back to the subject.

Ultra HD 4K means 8.3 megapixels x 400 rays/pixel x 60 FPS = 199 GigaRays/second, and some estimates say up to 10x that figure is needed for noise reduction.

So only 10 GigaRays/second would be extremely weak.

How many RT cores will there be, if CUDA cores go up to 4,608 and Tensor cores up to 576?

Given what we know of the top-spec RTX Quadro, to achieve the above would require 200 cards at $10,000 each, plus the NVLink licences.
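A rough sketch of that scaling estimate, assuming the ~10 GigaRays/s per card and $10,000 per card figures quoted in the thread:

```python
import math

# Scaling the earlier ray budget to hardware, using the thread's figures
# (~10 GigaRays/s per card, $10,000 per top-spec Quadro) as assumptions.
required_gigarays = 199 * 10   # 4K/60 at 400 rays/pixel with the 10x denoise margin
gigarays_per_card = 10
card_price_usd = 10_000

cards = math.ceil(required_gigarays / gigarays_per_card)
print(f"Cards needed: {cards}")                          # 199, i.e. roughly 200
print(f"Hardware cost: ${cards * card_price_usd:,}")     # $1,990,000 before NVLink licences
```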
 
Associate
Joined
9 Feb 2017
Posts
241
Dialogue; and indeed we digressed onto a completely different matter. I will steer the discussion back to the subject.

Cheers Panos.

Given what we know of the top-spec RTX Quadro, to achieve the above would require 200 cards at $10,000 each, plus the NVLink licences.

The first GeForce 256: "GeForce 256 was marketed as 'the world's first GPU, or Graphics Processing Unit', a term Nvidia defined at the time as 'a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that is capable of processing a minimum of 10 million polygons per second.'"

How many can the 1080 Ti do compared to this?
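Nobody publishes a directly comparable polygons-per-second figure for the 1080 Ti, but as a sketch of the comparison something like this works; the 1080 Ti number below is a purely illustrative placeholder, not an official spec.

```python
# Orders-of-magnitude comparison of triangle throughput.
# The GeForce 256 figure is the 1999 marketing number quoted above; the
# 1080 Ti figure is a hypothetical placeholder, since Nvidia publishes no
# directly comparable "polygons per second" spec for Pascal.
geforce_256_polys_per_s = 10e6        # quoted: 10 million polygons/second
assumed_1080ti_polys_per_s = 10e9     # illustrative assumption only

ratio = assumed_1080ti_polys_per_s / geforce_256_polys_per_s
print(f"Speed-up over the GeForce 256 under this assumption: ~{ratio:,.0f}x")  # ~1,000x
```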
 
Soldato
Joined
22 Nov 2009
Posts
13,252
Location
Under the hot sun.
I want playability in a game. Fancy graphics are all well and good, but if the game is gash then there's no point.
Amen to that.

Look at EU IV or CK2. People easily clock 2,000+ hours on them, each. But they do lack in the graphics department.

On the other side, look at something like FC5. An impressive-looking game, but how many times can you play it? There is only one way to play it.
The same applies to SW Battlefront 2 (the new one). An amazing-looking game, yet if you are not into PvP (which is crap compared to, e.g., BF2 and all the other pre-CoD games), you can spend 10 hours doing the story line and be done. Of course there is The Division, but after 100 hours or so that becomes a grind fest, even the PvP Survival mode (which I love).
 
Soldato
Joined
22 Nov 2009
Posts
13,252
Location
Under the hot sun.
Cheers Panos.



The first GeForce 256: "GeForce 256 was marketed as 'the world's first GPU, or Graphics Processing Unit', a term Nvidia defined at the time as 'a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that is capable of processing a minimum of 10 million polygons per second.'"

How many can the 1080 Ti do compared to this?

If we talk about the future, sure, in 5-7 years' time we should be able to achieve the above numbers in a single top-of-the-range GPU.
But not right now, nor next year.
Also, between the GeForce 256 (1999) and the GTX 780 there were greater leaps every year in that 14-year span than in the next four years (GTX 780 to GTX 1080 Ti, 2017).
Yet parts of Nvidia's feature set were still up to 5 years behind AMD by that point in time (DX12, hardware async compute, hardware ray tracing).
And possibly two of the above will remain behind for the next couple of years as well, if they are not implemented in the Turing cards.

So we might be comparing apples and oranges by taking the 256 and comparing it straight to the 1080 Ti as an indication of evolution over the 18 years.
 