2070 is a TU106

bru

Soldato
Joined
21 Oct 2002
Posts
7,360
Location
kent
It's all about RTX operations. So of course the TU106 is going to be faster than the Titan Xp, since the Titan Xp doesn't have the hardware to do any of those operations. I am not sure how you listened to the presentation and looked at the slides and still think he was talking about performance in normal games.

He showed some slides with normal games later and didn't mention the 2070 being faster at all. And those very slides seem to suggest that the 2080 will only be marginally faster than the 1080 Ti in normal games, so how is the 2070 going to be faster than the Titan Xp?

Second, if it is faster in normal games, why didn't Tom Petersen just say that in his interview? Because if the 2070 is faster than the Titan Xp, then the 2080 is definitely going to be faster than the 1080 Ti; you don't need any data in front of you to know that.

For the 2070 to be faster than the Titan Xp, a lot of speculative theories about Nvidia sandbagging and hiding the true performance for some unknown reason have to come true. Usually the simple explanation is the right one, and the simple explanation is that it's just not going to be faster in normal games. But we shall see.

That one line shows me that you haven't rewatched the stream, or even watched it in the first place.

It all happens in the last 9 minutes of the stream, so no, he didn't show some slides later at all.
He clearly uses three different metrics in relation to the 2070: 6 gigarays per second, which is 5 times that of a Titan Xp; then, moments later, 45 trillion RTX ops per second, which is several times the performance of a Titan Xp; and then, when he announces the prices about 2 minutes later in the stream, he clearly states that the $499 2070 is higher performance than the $1200 Titan Xp.

Now, yes, he may be referring to ray tracing and I might be wrong, but you seem utterly convinced that it cannot possibly be right and unwilling to face the possibility that it is.
 
Soldato
Joined
19 Dec 2010
Posts
12,031
That one line shows me that you haven't rewatched the stream, or even watched it in the first place.

It all happens in the last 9 minutes of the stream, so no, he didn't show some slides later at all.
He clearly uses three different metrics in relation to the 2070: 6 gigarays per second, which is 5 times that of a Titan Xp; then, moments later, 45 trillion RTX ops per second, which is several times the performance of a Titan Xp; and then, when he announces the prices about 2 minutes later in the stream, he clearly states that the $499 2070 is higher performance than the $1200 Titan Xp.

Now, yes, he may be referring to ray tracing and I might be wrong, but you seem utterly convinced that it cannot possibly be right and unwilling to face the possibility that it is.

Yes, my apologies. It's not that I haven't watched it, but I haven't watched it in order, just bits and pieces of it. And yes, I thought the DLSS slides came from a part of the presentation I didn't watch, but Nvidia only released them later. (I skipped through the DLSS section as I had already seen the demo.)

But the point still stands. The slides show that, without DLSS, the 2080 would only be a little faster than the 1080 Ti. How can the 2070 be faster than the Titan Xp?

My second point still stands as well: if the 2070 is faster than the Titan Xp, why didn't Tom Petersen come out and say so when asked the direct question of whether the 2080 is faster than the 1080 Ti?

I could say the same to you: you seem convinced that it is about normal gaming even though there is no evidence to support that. The slides don't support it, and the sections in the live stream don't support it. You say that 2 minutes after talking about 45 trillion RTX operations per second being several times the performance of the Titan Xp, he comes out and says that the $499 2070 is higher performance than the $1200 Titan Xp. And you are saying that, in an event totally dominated by RTX, he went from talking about RTX performance to performance in normal games in those 2 minutes? Isn't it more likely he was still talking about RTX performance? Come on, even Ryan Shrout from PCPer said it was about RTX operations, not normal gaming.

I know it's a new process with new memory, and TFLOPS aren't always an indicator of final performance, but with Nvidia cards the TFLOPS normally line up with expected performance. For example, going back to the 1070 you referenced earlier: it had 6.5 TFLOPS, and the Titan X (Maxwell) had 6.6 TFLOPS.

Now compare that to the current situation: the 2070 has 7.4 TFLOPS, while the Titan Xp has 12.1 TFLOPS. Yes, there is a new process, but that's still a lot of ground to make up.

For the 2070 to be faster than the Titan Xp in normal games, it would have to be something in the order of 70% faster than the 1070. The slides from Nvidia comparing the 2080 to the 1080 would seem to suggest that this won't be the case. It would be really odd for Jensen to have meant faster in normal games, then bring out slides showing the 2080 is only 50% faster than the 1080, which would prove him wrong. Unless, of course, the 2070 is going to be faster than the 2080.
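To put rough numbers on it (just back-of-the-envelope arithmetic using the TFLOPS figures above; real game performance obviously depends on far more than raw FP32 throughput):

```python
# Back-of-the-envelope comparison using the TFLOPS figures quoted above.
gtx_1070 = 6.5          # TFLOPS
titan_x_maxwell = 6.6   # TFLOPS
rtx_2070 = 7.4          # TFLOPS
titan_xp = 12.1         # TFLOPS

# Last generation the 1070 roughly matched the big Maxwell card on paper.
print(f"1070 vs Titan X (Maxwell): {gtx_1070 / titan_x_maxwell:.2f}x the TFLOPS")

# This generation the gap is far larger.
print(f"2070 vs Titan Xp: {rtx_2070 / titan_xp:.2f}x the TFLOPS")
print(f"Titan Xp has {titan_xp / rtx_2070 - 1:.0%} more TFLOPS than the 2070")
```

So on paper the Titan Xp has roughly 60-65% more raw throughput than the 2070; the 2070 would need a huge per-TFLOP uplift to close that gap in normal games.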

And yes, I don't believe that I am wrong on this. He was only talking about RTX performance, not normal gaming.
 
Man of Honour
Joined
21 May 2012
Posts
31,940
Location
Dalek flagship
That one line shows me that you haven't rewatched the stream, or even watched it in the first place.

It all happens in the last 9 minutes of the stream, so no, he didn't show some slides later at all.
He clearly uses three different metrics in relation to the 2070: 6 gigarays per second, which is 5 times that of a Titan Xp; then, moments later, 45 trillion RTX ops per second, which is several times the performance of a Titan Xp; and then, when he announces the prices about 2 minutes later in the stream, he clearly states that the $499 2070 is higher performance than the $1200 Titan Xp.

Now, yes, he may be referring to ray tracing and I might be wrong, but you seem utterly convinced that it cannot possibly be right and unwilling to face the possibility that it is.

Metrics are funny things as people can pick and choose which ones they like best.

It could be argued for example that a Titan V is many times faster than any of the Turing cards if we use DP performance to compare.

Fortunately, we should judge a card by the sum of all its capabilities rather than by picking individual metrics, and on that basis Turing will hopefully do the business for gaming.
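For example, very roughly (using approximate published peak figures; exact numbers shift with boost clocks):

```python
# Rough double-precision (FP64) comparison; approximate peak figures only.
titan_v_fp64_tflops = 7.4                    # Titan V runs FP64 at ~1/2 its FP32 rate
rtx_2080_ti_fp32_tflops = 13.4               # approximate FP32 peak
rtx_2080_ti_fp64_tflops = rtx_2080_ti_fp32_tflops / 32  # GeForce Turing does FP64 at 1/32 rate

print(f"Titan V is ~{titan_v_fp64_tflops / rtx_2080_ti_fp64_tflops:.0f}x faster at FP64")
```

Nobody buying these cards for gaming cares about FP64, which is exactly the point about cherry-picking metrics.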
 
Soldato
Joined
19 Dec 2010
Posts
12,031
Well only a few more days and then I can come back and say "Yup I was wrong".:)

Corrected that for you :p

Ah no, only joking. :D In a few days all will be revealed, hopefully, if it's not pushed back again!! Really looking forward to seeing what DLSS can do.
 
Caporegime
Joined
17 Feb 2006
Posts
29,263
Location
Cornwall
Of course nobody wants to believe it, and as I showed above even a 1080 Ti can lose to a 1070 in certain situations. So it only needs to beat the Titan Xp in one situation for Jensen to be correct.
No, not in the minds of most of us.

If you said to me, "This pushbike is faster than this Ferrari!" I would (quite reasonably) ignore it as nonsense.

You might then be correctly able to say, "If we drop both of them from an aircraft, the aerodynamics of the bike makes it fall faster than the Ferrari!"

(I appreciate this example is pure nonsense, but still...)

Just because the bike might be faster in some ultra-contrived scenario does not mean that a general statement, such as "The bike is faster" is correct.

General, non-specific statements like that convey an "overall expectation". The overall expectation, borne out by most people's experiences and real-world usage, will be that the Ferrari is faster than the pushbike.
 

bru

Soldato
Joined
21 Oct 2002
Posts
7,360
Location
kent
Corrected that for you :p

Ah no, only joking. :D In a few days all will be revealed, hopefully, if it's not pushed back again!! Really looking forward to seeing what DLSS can do.


Yup, DLSS is an amazing idea: get the supercomputers to do all the heavy lifting while the guys on their home PCs get the results. It could be AA with none of the downsides, or it might not work as we all hope it will.

And I just have to say, I still cannot believe that the 2070 is the TU106 chip, as that would be three chips with only one card from each. I suppose it does allow room for lots of in-between cards, but with 7nm looming fairly quickly, is there enough time to launch these and all the ones that could follow, a couple of months apart? They could still be launching cards at this time next year at that rate.
 
Man of Honour
Joined
21 May 2012
Posts
31,940
Location
Dalek flagship
Yup, DLSS is an amazing idea: get the supercomputers to do all the heavy lifting while the guys on their home PCs get the results. It could be AA with none of the downsides, or it might not work as we all hope it will.

And I just have to say, I still cannot believe that the 2070 is the TU106 chip, as that would be three chips with only one card from each. I suppose it does allow room for lots of in-between cards, but with 7nm looming fairly quickly, is there enough time to launch these and all the ones that could follow, a couple of months apart? They could still be launching cards at this time next year at that rate.

Neither the 2080 nor the 2080 Ti uses all the cores available on the TU104 or TU102 chips, so there is already room for higher-specced cards.

I think producing the 2070 on the smaller TU106 chip is NVidia maximising the number of chips they can get from each wafer.
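Roughly speaking (very crude arithmetic that ignores edge losses, scribe lines and yield, using the approximate die sizes that have been reported for Turing):

```python
import math

# Crude upper bound on dies per 300 mm wafer: wafer area / die area.
# Die sizes are the approximate reported Turing figures; real counts are
# lower once edge losses, scribe lines and defects are accounted for.
wafer_area_mm2 = math.pi * (300 / 2) ** 2  # ~70,700 mm^2

die_sizes_mm2 = {"TU102": 754, "TU104": 545, "TU106": 445}

for chip, area in die_sizes_mm2.items():
    print(f"{chip}: ~{wafer_area_mm2 / area:.0f} dies per wafer at most")
```

The smaller the die, the more candidate chips per wafer and the better the odds each one is defect-free, so a dedicated TU106 for the 2070 makes economic sense.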
 
Soldato
Joined
20 Dec 2004
Posts
15,840
Yup, DLSS is an amazing idea: get the supercomputers to do all the heavy lifting while the guys on their home PCs get the results. It could be AA with none of the downsides, or it might not work as we all hope it will.

And I just have to say, I still cannot believe that the 2070 is the TU106 chip, as that would be three chips with only one card from each. I suppose it does allow room for lots of in-between cards, but with 7nm looming fairly quickly, is there enough time to launch these and all the ones that could follow, a couple of months apart? They could still be launching cards at this time next year at that rate.

Who's telling people this 'supercomputer' nonsense? Grade A marketing cows testicles right there...
 
Caporegime
Joined
17 Feb 2006
Posts
29,263
Location
Cornwall
Er. You might want to go and read up on what DLSS is before making such a statement.
What you wrote sounds completely fantastical.

Yup, DLSS is an amazing idea: get the supercomputers to do all the heavy lifting while the guys on their home PCs get the results. It could be AA with none of the downsides, or it might not work as we all hope it will.
The idea that you're leveraging "supercomputers" to somehow assist in performing AA calculations in advance... I don't think that's really how it works.
 
Soldato
Joined
19 Dec 2010
Posts
12,031
What you wrote sounds completely fantastical.


The idea that you're leveraging "supercomputers" to somehow assist in performing AA calculations in advance... I don't think that's really how it works.

Well, that's actually pretty much how it works from the info we've got so far. The supercomputer "learns" how to make the low-res game look as close as possible to a super-high-res version of the game. What it has learned is used by your Turing card and its Tensor cores to make the game look as near as possible to the high-res version on your PC. Since the GPU is basically just following a set of rules instead of having to work it all out in real time, it can run games faster.
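Very roughly, the split looks something like this (just a toy sketch with made-up numbers and a plain matrix standing in for the neural network; the real thing is obviously far more involved):

```python
import numpy as np

# Toy illustration of the split described above (made-up numbers, not NVIDIA's code):
# the expensive "learning" happens once, offline; the game only runs the cheap
# apply-the-learned-mapping step per frame.
rng = np.random.default_rng(0)

def offline_training(low_res_frames, high_res_frames):
    """Done once on the 'supercomputer': fit a mapping from low-res to high-res.
    Here it's a simple least-squares fit; the real thing trains a deep network."""
    weights, *_ = np.linalg.lstsq(low_res_frames, high_res_frames, rcond=None)
    return weights  # this small blob is what would get shipped to the user

def local_inference(low_res_frame, weights):
    """Done per frame on the card: apply the pre-trained mapping."""
    return low_res_frame @ weights

# Pretend training data: pairs of low-res frames and high-res targets.
low = rng.random((1000, 64))                 # 1000 frames, 64 "pixels" each
high = low @ rng.random((64, 256))           # corresponding 256-"pixel" targets

model = offline_training(low, high)          # heavy lifting, done in advance
upscaled = local_inference(low[0], model)    # cheap step, done at render time
print(upscaled.shape)                        # (256,)
```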

We shall see how it actually works when the reviews come out.
 

bru

Soldato
Joined
21 Oct 2002
Posts
7,360
Location
kent
Yup, I know it really does sound fantastic, and as melmac says, we will see how well it actually works when the reviews hit.
 
Caporegime
Joined
17 Feb 2006
Posts
29,263
Location
Cornwall
Well, that's actually pretty much how it works from the info we've got so far. The supercomputer "learns" how to make the low-res game look as close as possible to a super-high-res version of the game. What it has learned is used by your Turing card and its Tensor cores to make the game look as near as possible to the high-res version on your PC. Since the GPU is basically just following a set of rules instead of having to work it all out in real time, it can run games faster.

We shall see how it actually works when the reviews come out.
Well, "following a set of rules" all all that computers have done from their inception until now ;)

The statement, "Since the GPU is basically just following a set of rules instead of having to work it all out in real time, it can run games faster," doesn't make a whole lot of sense to me. Instructions are rules; coding is giving the computer rules to follow.

The actual "work" to perform AA as you play is done on the GPU, not any supercomputer. This is clear. You could say that the supercomputer is "programming" the GPU to perform AA efficiently, but it's also true that none of the calculations are offloaded from the GPU to a mainframe at render time.

So the idea that you're bringing the power of a supercomputer to bear to do AA calculations on your GPU cannot be true.

Basically what it amounts to is saying that a supercomputer can create a more efficient AA algorithm than a team of human developers. Or... that the new algorithm is cheating in some way (which may mean trade-offs between speed and quality).
 
Soldato
Joined
19 Dec 2010
Posts
12,031
Well, "following a set of rules" all all that computers have done from their inception until now ;)

The statement, "Since the GPU is basically just following a set of rules instead of having to work it all out in real time, it can run games faster," doesn't make a whole lot of sense to me. Instructions are rules; coding is giving the computer rules to follow.

The actual "work" to perform AA as you play is done on the GPU, not any supercomputer. This is clear. You could say that the supercomputer is "programming" the GPU to perform AA efficiently, but it's also true that none of the calculations are offloaded from the GPU to a mainframe at render time.

So the idea that you're bringing the power of a supercomputer to bear to do AA calculations on your GPU cannot be true.

Basically what it amounts to is saying that a supercomputer can create a more efficient AA algorithm than a team of human developers. Or... that the new algorithm is cheating in some way (which may mean trade-offs between speed and quality).

This is what makes DLSS special: the actual work IS done on the supercomputer. It creates the algorithms that your Turing card uses. The calculations are already done by the supercomputer; there is no offloading necessary.

Here, read this; it explains it better than I ever could.

DLSS is, essentially, an image upscale algorithm with a Deep Neural Network (DNN) approach; it uses NVIDIA's Tensor Cores to determine the best upscale result on a per-frame basis, rendering the image at a lower resolution and then inferring the correct edges and smoothing for each pixel. But there is much magic here: it is not all being done locally on your computer.

DLSS basically works after NVIDIA has generated and sampled what it calls a "ground truth" image—the best iteration and highest image quality image you can engender in your mind, rendered at a 64x supersampling rate. The neural network goes on to work on thousands of these pre-rendered images for each game, applying AI techniques for image analysis and picture quality optimization. After a game with DLSS support (and NVIDIA NGX integration) is tested and retested by NVIDIA, a DLSS model is compiled. This model is created via a permanent back propagation process, which is essentially trial and error as to how close generated images are to the ground truth. Then, it is transferred to the user's computer (weighing in at mere MBs) and processed by the local Tensor cores in the respective game (even deeper GeForce Experience integration). It essentially trains the network to perform the steps required to take the locally generated image as close to the ground truth image as possible, which is all done via an algorithm that does not really have to be rendered.
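And if it helps, here's the gist of that training step as a toy sketch (nothing like NVIDIA's real pipeline; the "frames", the network and the numbers are all made up, it just shows the trial-and-error loop against a ground-truth target and the small model that falls out of it):

```python
import numpy as np

# Toy "training" loop: repeatedly compare the network's upscale of a low-res
# frame against a pre-rendered ground-truth target and nudge the weights to
# shrink the error. The resulting weights array is small and is what would be
# shipped to the user's machine.
rng = np.random.default_rng(42)

low_res = rng.random((500, 32))                    # pretend low-res frames
ground_truth = low_res @ rng.random((32, 128))     # pretend 64x-supersampled targets

weights = rng.random((32, 128)) * 0.01             # the "model" being trained
learning_rate = 0.05

for step in range(2000):
    upscaled = low_res @ weights                   # network's current attempt
    error = upscaled - ground_truth                # distance from ground truth
    gradient = low_res.T @ error / len(low_res)    # direction to adjust weights
    weights -= learning_rate * gradient            # trial-and-error update

print(f"final mean error: {np.abs(low_res @ weights - ground_truth).mean():.4f}")
print(f"model size: {weights.nbytes / 1024:.1f} KiB")  # tiny compared to the game
```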
 

bru

Soldato
Joined
21 Oct 2002
Posts
7,360
Location
kent
Just because you don't want it to be true doesn't make it so.

Go read up on it, there is plenty of information out there.

This one page has a reasonable explanation of what is going on.

https://www.techpowerup.com/reviews/NVIDIA/GeForce_Turing_GeForce_RTX_Architecture/9.html

Try as we might to hate on them for the pricing, which really is obscene in my view, when they say "Graphics reinvented" they really are going about the rendered image in new and hopefully wonderful ways.

A long time in the making, and certainly the future, judging by what we have seen so far.

Edit: hehe melmac beat me to it.:)
 
Caporegime
Joined
17 Feb 2006
Posts
29,263
Location
Cornwall
This is what makes DLSS special: the actual work IS done on the supercomputer. It creates the algorithms that your Turing card uses. The calculations are already done by the supercomputer; there is no offloading necessary.

Here, read this; it explains it better than I ever could.
Wut?

If the calculations are literally done in advance on the supercomputer, then... no, I won't finish that sentence; it's madness.

What do you think the Turing cores are doing if not performing AA calculations? "Remembering" the answer that the supercomputer taught them at GPU school? I guess the supercomputer rendered every possible combination of scenes from every game in existence and told the Turing units so they wouldn't have to calculate anything during the game?

Computers just don't work like that... you're anthropomorphising them. I'm guessing you've never done any coding.
 
Soldato
Joined
9 Nov 2009
Posts
24,841
Location
Planet Earth
I think people, and the tech press to a degree, are not understanding how DLSS works. They would need to train the neural net beforehand to generate a model for the game, which is then probably downloaded via GFE for that particular game - this is why the game needs to "support" DLSS. If this were happening in real time, i.e. the game was actively referring to a supercomputer over the internet before each frame was generated, it would lead to severe latency issues, as the GPU would need to first ping the supercomputer to get a new model per frame before it did the actual rendering.

Edit!!

Also, the calculations still need to be done by the card - the model is only a basic framework which the GPU still has to execute. Think of it as a blueprint.
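To put a rough number on the latency point (purely illustrative figures, assuming a 30 ms round trip to a remote server and a 10 ms local render):

```python
# Why a per-frame round trip to a remote server is a non-starter.
# Both figures below are assumptions for illustration only.
network_round_trip_ms = 30.0   # assumed ping to a remote datacentre
local_render_ms = 10.0         # assumed time the GPU needs per frame

with_server = 1000 / (network_round_trip_ms + local_render_ms)
local_only = 1000 / local_render_ms

print(f"~{with_server:.0f} fps if every frame waited on the server")
print(f"~{local_only:.0f} fps with the model already on the card")
```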
 
Caporegime
Joined
17 Feb 2006
Posts
29,263
Location
Cornwall
One last post before bed :p

I don't work in the field of "AI", but I can smell marketing BS as well as anyone. Firstly all of the information from nVidia about Turing is...well... designed to sell cards. This spiel is being written by their marketing depts, not their engineers. It is also true to say that the less understandable the subject matter is to non-experts, the more they will attempt to hype, spin and exaggerate, safe in the knowledge that few people would be able to challenge any of it.

Now on to my actual point. If I took a card from... let's say 6 years ago... a GTX 480, for example, and bolted some "AI cores" onto it. I then claimed that this GPU could render games as fast as a 1080 Ti, because a supercomputer had "trained the GPU" so it didn't need to do any calculations to render the image any more... instead it just used what it "learned" from the supercomputer.

Would you not say to me, "Foxeye, with no due respect, that sounds like utter BS."
 