2070 is a TU106

Wut?

If the calculations are literally done in advance on the supercomputer, then... no I won't finish that sentence it's madness.

What do you think the Turing cores are doing if not performing AA calculations? "Remembering" the answer that the supercomputer taught them at GPU school? I guess the supercomputer rendered every possible combination of scenes from every game in existence and told the Turing units so they wouldn't have to calculate anything during the game?

Computers just don't work like that... you're anthropomorphising them. I'm guessing you've never done any coding.

WUT back at you. Go read up on it. Your GPU still has work to do, of course it does, but the heavy lifting has already been done.
 
One last post before bed :p

I don't work in the field of "AI", but I can smell marketing BS as well as anyone. Firstly all of the information from nVidia about Turing is...well... designed to sell cards. This spiel is being written by their marketing depts, not their engineers. It is also true to say that the less understandable the subject matter is to non-experts, the more they will attempt to hype, spin and exaggerate, safe in the knowledge that few people would be able to challenge any of it.

Now on to my actual point. If I took a card from... let's say 6 years ago... a GTX 480, for example. I bolted some "AI cores" to it. I then claimed that this GPU could render games as fast as a 1080Ti, because a supercomputer had "trained the GPU" so it didn't need to do any calculations to render the image any more... instead it just used what it "learned" from the supercomputer.

Would you not say to me, "Foxeye, with no due respect, that sounds like utter BS."

Foxeye, I am saying it to you now, this post is pure and utter BS.
 
That's the thing people need to consider: it's a "model", and the "model" needs to be run, just like a model needs to be run for TAA, normal upsampling, etc. Those tended to be hard-baked general models made to cover a whole lot of games.

What Nvidia is doing is using their servers to make a unique upscaling model for each supported game; that is what is happening and that is the innovation. GFE then transfers that model to the host computer.

So in the end the GPU still needs to do all the calculations, but instead of a general model it runs a specific one for that supported game.
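
To put that in toy code terms, here's a minimal sketch of the idea. Everything below (the kernel sizes, the numbers, the 2x scale factor) is made up purely for illustration, not Nvidia's actual model or format; the point is just that the per-frame work still happens locally, and only the weights you load change.

```python
import numpy as np

def upscale(frame_lowres, weights):
    # Per-frame inference: nearest-neighbour 2x upscale followed by a learned
    # 3x3 filter. This runs locally every frame, whichever weights you loaded.
    up = frame_lowres.repeat(2, axis=0).repeat(2, axis=1)
    pad = np.pad(up, 1, mode="edge")
    out = np.zeros_like(up)
    for dy in range(3):
        for dx in range(3):
            out += weights[dy, dx] * pad[dy:dy + up.shape[0], dx:dx + up.shape[1]]
    return out

# A general, one-size-fits-all model vs. a per-game one trained elsewhere.
general_weights = np.full((3, 3), 1.0 / 9.0)           # bland blur, works "okay" everywhere
per_game_weights = np.array([[ 0.0, -0.1,  0.0],        # stand-in for weights a server
                             [-0.1,  1.4, -0.1],        # fitted against that game's
                             [ 0.0, -0.1,  0.0]])       # high-res reference frames

frame = np.random.rand(540, 960)                        # pretend 960x540 rendered frame
print(upscale(frame, per_game_weights).shape)           # (1080, 1920) - the per-frame step, done locally
```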
 
@melmac Except the parts that aren't, eh? Feel free to take everything nV's marketing dept says as gospel. Bothers me not. I've seen this play out many times over the years, and the end result is always the same... you believe the hype, you end up disappointed.

P.S. Do you have a background in IT melmac? You seem so eager to believe that nV have changed the whole paradigm, that they've re-invented how GPUs work.
 
One last post before bed :p

I don't work in the field of "AI", but I can smell marketing BS as well as anyone. Firstly all of the information from nVidia about Turing is...well... designed to sell cards. This spiel is being written by their marketing depts, not their engineers. It is also true to say that the less understandable the subject matter is to non-experts, the more they will attempt to hype, spin and exaggerate, safe in the knowledge that few people would be able to challenge any of it.

Now on to my actual point. If I took a card from... let's say 6 years ago... a GTX 480, for example. I bolted some "AI cores" to it. I then claimed that this GPU could render games as fast as a 1080Ti, because a supercomputer had "trained the GPU" so it didn't need to do any calculations to render the image any more... instead it just used what it "learned" from the supercomputer.

Would you not say to me, "Foxeye, with no due respect, that sounds like utter BS."

Well yup, it does sound absolutely bonkers, but that is basically what is happening, to the best of my understanding anyway.
Also, to say that the GPU doesn't need to do any calculations is, as CAT-THE-FIFTH has just said, not strictly true, but the calculations are far easier and quicker to do because the supercomputer has already done all the heavy lifting.

And yes it certainly does sound bonkers, but hell welcome to the future.:p
 
You'll have to forgive me for not believing at this point in time :p Call me an old cynic.

Don't worry, I think we were all in a similar boat when they first mentioned it, because it really does sound unbelievable.
But if they can deliver on the promises and give us better AA without the harsh penalties, and in fact boost performance as they promise, then I think it just goes to show they have some very clever people working for them.

All of the last three or four NVidia keynotes over the last few years, where they have banged on about driverless cars and training neural nets for deep learning, which at the time seemed completely irrelevant for gaming, suddenly make lots of sense. This has been their vision for a long time, hence they say it's been 10 years in the making.
 
I do have an issue with how TPU described it, which is probably causing more confusion. The upscaling is being done locally on the GPU. However, instead of a general, one-size-fits-all model, a specific model for that game is generated beforehand so the GPU does a better job.

Also, as a side note, I would not call this AI per se - it's more a neural network. I have seen stuff mates have done using software-based neural networks to train autonomous robots, even 10 years ago.
 
@FoxEye what's so hard to comprehend about glorified SLI profiles?

Developer submits a game to Nvidia
Nvidia runs the game through their game servers to generate ultra-high-res images
Nvidia runs those ultra-high-res images through their deep learning servers to create an upscaling algorithm
Upscaling algorithm is made available for download
Player downloads the algorithm
Tensor cores use this algorithm to upscale generated images.

That's how DLSS works.

Note that game devs can also generate their own DLSS algorithm if they have the kit to do so, of course available to buy from Nvidia.
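
In toy form, that whole flow looks roughly like the sketch below. The function names and the one-number "model" are invented for illustration; the real network, file formats and delivery mechanism aren't public, so treat this as the shape of the process, nothing more.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- offline (server side, done once per game) --------------------------------
def render_reference_frames(n=32):
    # Stand-in for rendering ultra-high-res ground-truth images of the game.
    return [rng.random((64, 64)) for _ in range(n)]

def downscale(img):
    # Stand-in for the lower-res frames the game will actually render.
    return img[::2, ::2]

def train_upscaling_model(references):
    # "Deep learning" reduced to fitting one gain that best maps a
    # nearest-neighbour upscale back to the reference (least squares).
    num, den = 0.0, 0.0
    for ref in references:
        guess = downscale(ref).repeat(2, axis=0).repeat(2, axis=1)
        num += (guess * ref).sum()
        den += (guess * guess).sum()
    return {"gain": num / den}            # the per-game "model" to ship out

# --- client (player's PC, every frame) -----------------------------------------
def apply_model(low_res_frame, model):
    # What the local hardware does per frame: run the shipped model.
    up = low_res_frame.repeat(2, axis=0).repeat(2, axis=1)
    return model["gain"] * up

model = train_upscaling_model(render_reference_frames())   # done once, off the PC
frame = rng.random((32, 32))                                # the game's low-res frame
print(apply_model(frame, model).shape)                      # (64, 64), every frame, locally
```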
 
Er. You might want to go and read up on what DLSS is before making such a statement.

I know how neural networks work, and that's the principle of what is happening with DLSS. Smart idea.

Just the marketing nonsense about leveraging the power of supercomputers on your GPU that's annoying...
 
I know how neural networks work, and that's the principle of what is happening with DLSS. Smart idea.

Just the marketing nonsense about leveraging the power of supercomputers on your GPU that's annoying...


But without the supercomputer doing the heavy lifting the entire idea falls flat, so if that isn't leveraging the power of supercomputers on your GPU (i.e. without you having to own one) I really don't know what is.
 
@melmac Except the parts that aren't, eh? Feel free to take everything nV's marketing dept says as gospel. Bothers me not. I've seen this play out many times over the years, and the end result is always the same... you believe the hype, you end up disappointed.

P.S. Do you have a background in IT melmac? You seem so eager to believe that nV have changed the whole paradigm, that they've re-invented how GPUs work.

More BS from you. Where did I say they re-invented how GPUs work? I explained how DLSS works, that's all.

And where am I believing the hype? LOL what an asinine comment.
 
Is all this talk saying the RTX cards don't do real-time ray tracing but something simpler? Like comparing actually making a meal from scratch with what the RTX does, which is microwaving a ready-made meal?
 
Is all this talk saying the RTX cards don't do real-time ray tracing but something simpler? Like comparing actually making a meal from scratch with what the RTX does, which is microwaving a ready-made meal?

You're mixing up Ray Tracing with DLSS. But it seems they will use DLSS to boost the frame rate of games using Ray Tracing.
 
Is all this talk saying the RTX cards don't do real-time ray tracing but something simpler? Like comparing actually making a meal from scratch with what the RTX does, which is microwaving a ready-made meal?

Ray tracing and the new super sampling are separate functions. The super sampling should boost performance because it's handled by the Tensor cores acting on an algorithm generated previously by Nvidia's deep learning servers, rather than being part of the render pipeline as we've had up until now. So the GPU core can get on with rendering the image and the Tensor cores do the anti-aliasing in parallel. Then throw in the ray tracing from the RT cores as another parallel workload.
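
A loose CPU-side analogy of that "separate parallel workloads" idea, if it helps. This is purely illustrative: the job names and timings are invented, and it says nothing about how the hardware actually schedules work, only that independent jobs can overlap instead of running back to back.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def shade_frame():     time.sleep(0.010); return "shaded frame"          # SM / CUDA cores
def trace_rays():      time.sleep(0.006); return "ray-traced lighting"   # RT cores
def upscale_and_aa():  time.sleep(0.004); return "DLSS output"           # Tensor cores

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(shade_frame), pool.submit(trace_rays), pool.submit(upscale_and_aa)]
    results = [f.result() for f in futures]
elapsed_ms = (time.perf_counter() - start) * 1000
print(results, f"{elapsed_ms:.1f} ms")   # roughly the longest job (~10 ms), not the sum (~20 ms)
```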
 
Where did I say they re-invented how GPUs work? I explained how DLSS works, that's all.
This is what makes DLSS special, the actual work IS done on the supercomputer. It creates algorithms that your Turing card uses. The calculations are already done by the supercomputer, there is no offloading necessary.
Your explanation was that the AA work was done on the supercomputer, freeing your GPU to... well, not do "actual work", I guess :p
More BS from you.
Oh, the ironing :p
 
But without the supercomputer doing the heavy lifting the entire idea falls flat, so if that isn't leveraging the power of supercomputers on your GPU (i.e. without you having to own one) I really don't know what is.

All your DLSS does is run an algorithm that was created by training a neural network, rather than one explicitly written by an engineer.

You might as well say "render web pages in your browser, leveraging the power of 1000 software engineers!".
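
For what that distinction amounts to in practice, here's a toy contrast. It has nothing to do with the actual DLSS network; it just shows "coefficients written by an engineer" versus "coefficients fitted from examples", and in both cases your machine simply runs the resulting algorithm.

```python
import numpy as np

hand_written_kernel = np.array([[ 0., -1.,  0.],
                                [-1.,  5., -1.],
                                [ 0., -1.,  0.]])        # an engineer chose these numbers

def fit_kernel_from_examples(pairs):
    # "Training": find the 3x3 kernel that best maps input patches to target
    # pixels, by ordinary least squares. A machine chooses the numbers.
    A, b = [], []
    for x, y in pairs:
        for i in range(1, x.shape[0] - 1):
            for j in range(1, x.shape[1] - 1):
                A.append(x[i - 1:i + 2, j - 1:j + 2].ravel())
                b.append(y[i, j])
    w, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return w.reshape(3, 3)

rng = np.random.default_rng(1)
targets = [rng.random((16, 16)) for _ in range(4)]       # "ground truth" images

def blur(img):                                           # the degradation we want to undo
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5

learned_kernel = fit_kernel_from_examples([(blur(t), t) for t in targets])
print(np.round(learned_kernel, 2))   # roughly a sharpening filter - learned, not hand-written
```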
 