
John Carmack Says to Take Nvidia's K1 Claims with Several Grains of Salt

The only decent story in this thread is that Pottsey has learned to quote :p :o

lol.

The last argument with Pottsey was over his claim that mobile had matched the last-gen consoles, when I pointed out that this had not happened at all. In 2011 Pottsey claimed it categorically had... based on one exceptionally simplified game looking okay with 2 characters on screen and lots of prebaked effects.

He also at the time claimed 200gflop mobile GPUs were less than a year away. Fast forward to today, and we have ANOTHER incoming mobile GPU again claiming to have finally matched last-gen consoles. So Pottsey claimed it had happened, as a fact, 3 years ago, yet Nvidia only think their next-gen, unreleased product can do it. Even then, the product Nvidia think will do it is by all accounts a 951MHz part that, in an actual mobile device, will likely run at a third of that performance... or, well below last gen console performance still, years after Pottsey announced it had happened.


As for ray tracing, and being better than desktop parts: ray tracing != ungodly unbeatable games.

Take a game that ray traces a simplified background and 2 characters on screen... that requires a fraction of the power of a game with 64 characters, or even hundreds (Ryse, LotRO, loads of other things), plus thousands of objects, hundreds of buildings, plants, trees, helicopters, missiles, etc. You aren't looking at the same amount of power.

A normally rendered game with thousands of objects on screen will provide a much better gaming experience than an overly simplified, low-object, low-character-count mobile game that is "ray traced". In no way will mobile graphics have surpassed desktop graphics; it's a fantasy. And given the way ray tracing obscenely basic scenes lends itself to low-powered devices, it shouldn't be ANY surprise, nor take any brain power to work out, why ray tracing COULD (yet hasn't) come to some basic mobile games before taking on huge, complex, massively more power-hungry, fully fledged desktop games.

To ignore that difference and proclaim desktop would fall behind because it lacks the all-magical ray tracing is beyond absurd.

Then we get back to the never-ending argument over your claims, the ones you were completely wrong about: quoting yourself saying common things that everyone was saying and acting like you knew something no one else did. In each of these threads you say something along the lines of "I'm not pretending I can predict anything... but here's a list of stuff I predicted", which always excludes the big claims you were wrong about and includes daft stuff like "mobiles will outship desktops". Back in 2011 that either had already happened or was close to happening; anyone with a brain could see it coming half a decade before, so quoting something like that as one of your predictions is laughable.

More to the point

I just glanced over that thread, and didn't a lot of those bold claims come true? We have 2048x1536 with FSAA in mobile, just like I said we would. I was laughed at for saying that, but it turned out true. Look at the state of current mobile GPU tech: full GPU compute, full DX11 effects.

Another one from that thread: "It is widely believed the mobile market will outship the desktop market by 2012." Guess what, that happened: mobile GPUs now outship desktop ones. Last one, as I am out of time, from that old thread: "Real-time ray tracing is a prime example. Its strongly looking like mobiles will get that before desktops. If that happens desktops could well fall behind graphically. How can you not see that?" What looks to be happening today is just what I said 2 years ago. Still feel like LOLing at me?

Why don't you link to this thread, and link specifically to where people laughed at you for claiming mobiles would have FSAA, or that mobile GPUs would outship desktop GPUs?

From what I can tell, people were laughing at your claim that mobile would surpass desktop in a handful of years. If we presume a handful to be 5 or less, that would mean the fastest upcoming "mobile" chip, which gets 365 GFLOPS at 951MHz at an unknown power draw, is due this year: 1 more year till a "handful" of years has gone past. What are high-end desktop GPUs pushing now, 5-6 TFLOPS? Yes, we were all very right to laugh at you for that.
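The gap is easy to put in numbers. A rough sketch using only the figures quoted in this thread (the 1/3 throttling factor is an assumption from the posts above, not a measurement):

```python
# Back-of-envelope gap estimate, using only figures quoted in this thread
# (the 1/3 sustained-clock factor is an assumption, not a measurement).
rated_gflops = 365.0        # quoted peak at 951 MHz
throttle_factor = 1 / 3     # assumed sustained fraction in a phone/tablet
sustained = rated_gflops * throttle_factor

desktop_gflops = 5500.0     # midpoint of the quoted 5-6 TFLOPS
gap = desktop_gflops / sustained
print(f"sustained mobile: ~{sustained:.0f} GFLOPS, desktop lead: ~{gap:.0f}x")
```

Even granting the full rated clock, the desktop lead is roughly 15x; with the assumed throttling it is roughly 45x, which is why "surpassing desktop within a handful of years" looks so far-fetched.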

The reason for scaling in mobile has been explained to you many times over; the reason their scaling is now mostly process-driven is exactly what was predicted, by me and anyone with half a brain. There was a time when AMD/Nvidia could scale on the same process, because they started off with small chips nowhere near the power/die size/cost limits. Mobile did the same: they didn't start off with 100mm^2 chips that use 2-3W.

Why did the A15 only come with 28nm? Why is everyone waiting for 20nm to make the next-gen ARM chips? It's almost like, as EVERYONE IN THE WORLD BUT YOU expected, mobile chips hit the limits everyone said they would. They are now almost exactly as reliant on new processes for vast increases in performance as AMD/Nvidia/Intel are.

Everyone told you this, everyone explained this to you, and you still believe mobile will advance faster than physics allows.

Let's list Apple and their PowerVR-using chips, shall we?

iPhone 3GS: 65nm, 1.2 GFLOPS, 71.8mm^2, Sept 2009
A4: 45nm, 1.6 GFLOPS, 53.3mm^2, March 2010
A5: 45nm, 16 GFLOPS, 122.2mm^2, March 2011
A5X: 45nm, 32 GFLOPS, 165mm^2, March 2012
A6: 32nm, 25.5 GFLOPS, 96.71mm^2, Sept 2012
A6X: 32nm, 68 GFLOPS, 123mm^2, Oct 2012
A5 (revised): 32nm, 16 GFLOPS, 69.6mm^2, March 2013 (purely financial: a shrink to cut production cost)
A7: 28nm, 115.2 GFLOPS, 102mm^2, Sept 2013
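Taking the list above (minus the 32nm A5, which was a cost shrink rather than a performance part), a few lines of Python make the scaling pattern explicit. The GFLOPS figures are simply the estimates quoted in this post, not official Apple specs:

```python
# GFLOPS scaling between successive Apple SoCs, using the figures quoted
# above (thread estimates, not official Apple specs). The 32nm A5 re-spin
# is omitted since it was a cost shrink, not a performance part.
chips = [
    ("iPhone 3GS", "65nm", 1.2),
    ("A4",  "45nm", 1.6),
    ("A5",  "45nm", 16.0),
    ("A5X", "45nm", 32.0),
    ("A6",  "32nm", 25.5),
    ("A6X", "32nm", 68.0),
    ("A7",  "28nm", 115.2),
]
for (n1, p1, g1), (n2, p2, g2) in zip(chips, chips[1:]):
    note = "same process" if p1 == p2 else f"{p1} -> {p2}"
    print(f"{n1:>10} -> {n2:<4}: {g2 / g1:4.1f}x ({note})")
```

The output shows the one 10x jump (A4 to A5) and the following doubling (A5X) both came from spending die area on the same 45nm process; after that, the big jumps track node changes, exactly the pattern the post describes.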

A4 to A5: 10 times the performance increase in one year... since then, performance has never more than doubled in the same timeframe. Why? Because they drastically increased die size to the edge of the viable limits for power/die size/cost.

On 45nm it took one year and a 70mm^2 increase (mostly GPU) to give 10x the performance, and another year and another 40mm^2 to double it again (this time almost exclusively GPU size).
It took a process node change to drop die size/power enough to enable doubling the GPU performance, and the next doubling of performance again took a new process. Since the A15/Apple's own cores, performance hasn't doubled again and won't till... 20nm.

The reason for the MASSIVE slowdown in mobile scaling in recent times, now almost entirely reliant on process changes for doubling performance, was 100% predictable from day one. You are the only person on earth who thought mobile would scale exponentially for all time until it passed desktop.

You're basing all your assumptions on the period when performance scaled on the same process, because die size/power/cost weren't yet the limiting factors. Now they are, and for the past 2 process nodes the only real performance increases have come from new processes. Mobiles "caught up" to desktops exactly like everyone told YOU: they caught up to the limits of their market and production capabilities, and are now almost exclusively reliant on process nodes for power/die size/transistor count, exactly like Intel, AMD and Nvidia have been for years.

Mobile can double performance with every process node drop; so can Intel, AMD and Nvidia. The 10x scaling was NEVER on the cards long term; everyone but you realised this, I mean literally everyone.
 
unable-to-process-wall-of-text.jpg
 
Finally something me and DM agree on

Pottsey, I'm back on your side :p

It's no different than if AMD or Nvidia were new companies whose first card this gen was their low-end card, followed by a midrange card 10 times faster on the same process (because of the headroom), then a high-end card. By that progression GPU performance would scale insanely fast. But they'd still hit that high-end limit and then be bound to the process. This is true for every piece of silicon being made, regardless of market/segment: there is a limit for every segment, and until you hit it gains will be huge; once you do, gains will be pretty much industry standard.

The difference is that processes made almost no concessions to low leakage/power until around the 65nm node, which is when, unsurprisingly, mobile took off. If mobile had been "big" since 2000, it would have hit the limit a very long time ago and performance would be pretty much where it's at today anyway.

Unfortunately, for the time being we're pretty much reliant on the silicon industry and process nodes for performance scaling, with worryingly few strides being made into alternatives, and a massive slowdown in process nodes, both in time between nodes and in exponentially increasing cost (R&D and, to a lesser degree, production).

The only area where mobile wins is power efficiency, but that is largely because one of the biggest costs in processing is moving data: moving a small amount of data exceptionally small distances is orders of magnitude cheaper in power than moving far more data over larger distances.
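To put rough numbers on "moving data is the cost": the figures below are commonly cited ballpark estimates for a ~45nm-era design (they vary a lot by process and design, so treat them as orders of magnitude only, not measurements):

```python
# Commonly cited ballpark energy costs around the ~45nm era (values vary
# widely by process and design; treat as orders of magnitude only).
ENERGY_PJ = {
    "32-bit FP operation":       0.9,
    "32-bit on-chip SRAM read":  5.0,
    "32-bit off-chip DRAM read": 640.0,
}
base = ENERGY_PJ["32-bit FP operation"]
for what, pj in ENERGY_PJ.items():
    print(f"{what:<26} {pj:7.1f} pJ (~{pj / base:.0f}x a FP op)")
```

The point stands regardless of the exact values: an off-chip memory access costs hundreds of times the energy of the arithmetic itself, which is why small, short-wire mobile designs win on efficiency.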
 
Pottsey, I'm back on your side :p
Ok, that made me smile. As much as we might disagree, I have to admit you do have a good sense of humor.

But the rest of your posts :( You are either lying or remembering wrong. You are also using your age-old tactic of making stuff up and then pretending it's fact. You made up flat-out lies about me a few weeks ago, and it looks like you're doing it again now. Most of the time you think I was wrong, it was only because you failed to understand what I was talking about or had totally the wrong end of the stick, and in actual fact I was correct. The mobiles-beating-desktops argument, for example: you just don't understand the concept I was talking about, as your rant above proves.

Scaling had nothing to do with what I was talking about. I was talking about how, if the mobile market kept increasing, it would outship the desktop market; I believe back then I said I expected that to happen in 2012. Well, I cannot remember the precise date it happened, but it did happen around 2012. I then went on to say this could lead to more R&D being focused on mobiles and less on desktop as mobiles become more important and a bigger market. That means new technology like ray tracing could well be implemented on mobiles before desktops, which leads to mobile looking better than desktops.
My old quote from that thread “Real-time ray tracing is a prime example. Its strongly looking like mobiles will get that before desktops. If that happens desktops could well fall behind graphically.”

Fast forward years, and it turns out mobiles do seem to be getting ray tracing well before desktops. We know ray-traced effects and lighting are well in advance of raster graphics on desktop cards. So just how were my old comments wrong? Surely you can agree a ray tracing GPU will have better lighting and effects than raster GPUs? A hybrid raster/ray tracing GPU will produce better effects than a raster-only GPU. Do you disagree with that?

A more important question: if mobiles do get ray tracing, why wouldn't its advantages provide graphics that raster desktop cards cannot do?

I have to ask what outdated world you live in where games are 2 characters on screen with pre-baked effects! That hasn't been true in years.

Why on earth would a ray tracing game only have a simple background and 2 characters on screen? Is that something you just decided to make up on the spot and pretend is fact?



“based on one exceptionally simplified game looking okay with 2 characters on screen and lots of prebaked effects.”
Actually, I showed lots of examples, not 1 exceptionally simplified game. I also showed you videos that had graphics effects beyond or matching what you see on consoles: no pre-baked effects, very advanced lighting, and in many cases more than 2 characters on screen. I also showed you a video of John Carmack and another video of Epic devs talking about effects beyond what you see on consoles. But it seems you ignored all that back then.


“or, well below last gen console performance still, years after Pottsey announced it had happened.”
So I am wrong because one company hasn't caught up!!! The same company who is a nobody in the mobile world and massively behind everyone else. That's hardly a good example of me being wrong.

By the way, 5 years from my old quotes would be January 2016. I still feel confident about that timeline. I still feel confident about ray tracing.


EDIT:
“He also at the time claimed 200gflop mobile gpu's less than a year away.”
I don’t recall saying less than a year away, but a G6430 at 600MHz is 153.6 GFLOPS and a GX6650 at 600MHz would be 230.4 GFLOPS. So even if I was wrong on the timeframe, I was not far out. I really do not recall that old quote; when was it?
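For what it's worth, the two figures quoted line up with the usual peak-throughput formula: peak = ALUs x 2 (one fused multiply-add counts as 2 flops) x clock. The ALU counts below (128 and 192 FP32 cores) are inferred from the arithmetic rather than taken from a datasheet:

```python
# peak = ALUs x 2 flops (fused multiply-add) x clock. The ALU counts used
# here (128 and 192 FP32 cores) are inferred from the quoted numbers, not
# taken from an official datasheet.
def peak_gflops(alus: int, clock_mhz: float) -> float:
    return alus * 2 * clock_mhz / 1000.0

print(peak_gflops(128, 600))  # G6430-class part at 600 MHz
print(peak_gflops(192, 600))  # GX6650-class part at 600 MHz
```

As always, these are theoretical peaks; sustained throughput in a phone's power envelope will be lower.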
 
Last edited:
Wall-o-text battle royale.

BTW Pottsey, how long is a 'handful of years'? I wouldn't have that any longer than three. As it stands, mobile chips are at least 7 years behind their desktop counterparts.
 
Wall-o-text battle royale.

BTW Pottsey, how long is a 'handful of years'? I wouldn't have that any longer than three. As it stands, mobile chips are at least 7 years behind their desktop counterparts.
Handful to me means 1 hand, so up to 5 fingers, so 1 to 5 years. That means by 2016, counting from my old posts. I am pretty confident we will see ray tracing in mobiles by then, in which case what I said in that old thread about mobiles overtaking desktops is true. We should start seeing a lot more news in the next 6 months, with an announcement within a month. If you read the old thread, I wasn't talking about just performance but about how more R&D will start to shift to mobiles, and new features might appear on mobiles first, letting them do graphics effects better than desktops.

In my mind mobiles are not 7 years behind. That would put us at DX9, no GPU compute, no 2048x1536 resolution with FSAA, and none of the other features mobiles have that 7-year-old desktop GPUs do not. A 7-year-old GPU didn't run at 2048x1536 with FSAA with a DX11 feature set.

Top-end mobiles can run the latest 3DMark Cloud Gate test. Can 7-year-old GeForce 7 Series GPUs do that? Pretty sure they cannot, due to the lack of DX10/11 features.

Part of the problem in saying how far mobiles are behind is: are we talking the average midrange desktop gaming card, the most expensive high-end rare cards, or even just the average desktop GPU built into the motherboard that some people game with? Are we talking raw speed, real speed or other specs?


  • Do you agree ray tracing can do certain graphics features better than raster GPUs? If not, why not?
  • Do you agree it looks like mobiles are getting real-time ray tracing before desktops? If not, why not?
  • Assuming mobiles do get ray tracing first, and it's at usable speeds, why wouldn't they be able to pull off advanced lighting and other ray-traced effects beyond what raster GPUs can do?
  • If mobiles do the above, why wouldn't they be ahead of desktop GPUs in some respects?
  • More importantly for the thread topic: considering the low specs of the Tegra K1, and if mobiles do have ray tracing in the timeframe the K1 comes out, how is the K1 going to perform in the market?
 
K1's market performance? Given how much of a letdown previous-generation Tegra processors have been, it's hard to imagine many of the big OEMs buying into Tegra again anytime soon, especially when Snapdragon is the flavour of the month for mobile devices at the moment.

I'm yet to be convinced that ray tracing is as good as the marketing suggests, especially given the performance hit you take over traditional rasterisation and z-buffer techniques, which are more efficient. I'll have to wait and see what ImgTec has developed, and see how it performs once third parties have developed software that can take advantage of the technology, before I make up my mind.
 
Ray tracing isn't an automatic win, a ray traced game DOES NOT look automatically better than a game without ray tracing. A game with crappy textures ray traced will still have crappy textures.

Most first-gen ray tracing will almost certainly have most of the engine working as it currently does, with targeted ray tracing that likely doesn't involve the entire scene: doing some prebaked stuff and just lighting up a couple of characters realistically. Plenty of desktop graphics is heading in this direction too.

http://www.eurogamer.net/articles/digitalfoundry-inside-killzone-mercenary

This is a link to a game on the Vita, covering the many things they did to scale it down for mobile.

Ray casting: not precisely ray tracing, but it's out there and being used already (on PC and mobile).

They talk about having to reduce the ray casts and limit the number of NPCs active/on screen at any one time. You seem to mistake feature parity (if and when it's there) for automatically having the same power/content, but the key is in scaling. Even with basic rendering and no post-processing effects, the difference between a low-end and a high-end GPU is always how much it can handle: how many triangles, how many pixels. Unsurprisingly this scales to everything else too: how many lighting effects, how many shadows, how many of this, that and the other.

This is the biggest issue I have: you talk as if ray tracing is one thing, the same quality and detail level applied to the whole scene that an artist doing pre-rendered work in Maya achieves. This isn't true; it wasn't the case when artists did pre-rendered work for games before they switched to ray tracing, and it isn't true now. When game devs pre-rendered stuff (FMV), it used to be done without ray tracing but at levels FAR beyond what you actually got in game. This is the same for ray tracing: the quality someone is making at 1-10 hour rendering times is not the level of quality you will find in game, nothing remotely close.

They will add SMALL amounts to parts of a scene, and by PowerVR's own info they seemingly think that adding this fixed-function hardware can save 20-30% power, by removing lighting from the normal GPU workload and shifting it over to the fixed-function block. But that also requires the companies making chips to include the fixed-function part of the IP, which they don't have to.


From one of your own links: ray tracing, making every game look epic instantly... not. The game looks better, and that is the best-case scenario, because as the guy explained there is no attempt in the old engine to simulate ANY lighting or shadows at all. In a "real" current game that already approximates lighting and shadows pretty realistically, the difference will be drastically smaller.

You can't seem to grasp scale, believing mobile graphics are close to desktop because they use some of the same effects; it's the quantity, complexity and actual power-requirement differences that dictate how much you can put on screen. Ray tracing will be introduced (on any platform) in a scaled way for use in games: aimed at a small area of the screen, probably the main character and an area close by, same as most other lighting effects being currently done, and the overall improvement will be small, potentially barely noticeable.
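The "scaled introduction" argument is really just ray-budget arithmetic. As a sketch (every parameter here is an illustrative assumption, not a measurement of any shipping chip):

```python
# Illustrative ray-budget arithmetic; every parameter is an assumption for
# the sake of argument, not a measurement of any shipping chip.
def rays_per_second(width, height, fps, samples_per_pixel, bounces):
    # primary rays plus one secondary ray per bounce, per sample
    return width * height * fps * samples_per_pixel * (1 + bounces)

full_screen = rays_per_second(1920, 1080, 60, 1, 2)  # trace the whole frame
small_patch = rays_per_second(480, 270, 60, 1, 2)    # 1/16th of the frame
print(f"full screen: {full_screen / 1e9:.2f} Grays/s, "
      f"small region: {small_patch / 1e6:.1f} Mrays/s "
      f"({full_screen // small_patch}x cheaper)")
```

Restricting tracing to a fraction of the frame, or to fewer bounces or samples, cuts the required ray throughput linearly, which is why a first hardware generation would plausibly target a character or a region rather than the whole scene.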


Likewise, games based on current methods are still improving all the time: their approximations of real lighting improve constantly, and newer attempts at estimated global illumination, or ray-traced/ray-cast versions, are finding ways to heavily limit how much of the scene gets processed, to reduce the overall amount of processing required. Read up on the UE4 engine and others for what they are doing.


So to sum up: ray tracing isn't nearly as far ahead of current methods as you think, and scale is key to what ray tracing ends up being on mobile. The hardware won't be standard on every shipping chip, and with no current content the low-margin ARM segment isn't prone to adding die area for fixed-function hardware that won't be heavily utilised.

Wait for shipping mobile "ray traced" games, before making grandiose claims, in fact, wait to see if anyone pays the extra to include the IP block first.
 
“A game with crappy textures ray traced will still have crappy textures.”
Do you realise, after saying that, you posted a link that proves that is not true? Around 1 min 31 sec into the link you posted there is no ray tracing: crappy textures, and the back of the car looks rubbish. With no dev work they turn ray tracing on and the car looks 10x better. The textures look 10x better, the windows and reflections look 10x better, all in seconds. So, conclusion: ray tracing will make crappy textures look good due to light and reflections.

I really wanted to show you a video of ray tracing on textures I had seen, but there don't seem to be any videos online yet :(. It might have completely changed your mind if only I could show you. What they did was turn lights on and off on basic textures to show the effect in a game-like scene. The difference was stunning. Once I get hold of it I will post it.



“Ray tracing isn't an automatic win, a ray traced game DOES NOT look automatically better than a game without ray tracing.”
Everything I have seen shows games do automatically look better. I have yet to see a case where they do not. I could not find the video I wanted to show you, so this will have to do:

a typical mobile game level from an FPS being edited with ray tracing. It instantly looked better with ray tracing, with the entire area rendered, not just a small section. From a dev point of view ray tracing is an automatic win from the time and money saved.






“Ray tracing will be introduced (on any platform) in a scaled way for use in games. Aimed at a small area of the screen, probably the main character and the an area close by, same as with most other lighting effects being currently done and the overall improvement will be small, potentially barely noticeable.”
That is totally wrong from what I have seen. I can grasp the scale, and what I have seen has removed my worry about it. Ray tracing won't be used only on a small part of the screen focused on 1 main character. One of the big advantages of ray tracing is how it works for the devs and lowers game-creation costs, and that only works if you move the entire screen's lighting to ray tracing. It also only works if everyone on that platform who plays the game has a ray tracing hardware chip. But if IMG have over 50% market share and all their new GPUs have ray tracing built in, that's over 50% of new smartphones with ray tracing in years to come. For some platforms, like Apple, that's 100% of iPhone X and iPad X gaining access in one generation. That's when we should start seeing ray-traced games take off: 1 generation of Apple iPhone/iPad/TV will mean tens of millions with sudden ray tracing access, with apps and games ready to go. I cannot believe anyone who has seen live demonstrations of ray tracing would say "the overall improvement will be small, potentially barely noticeable".




“Wait for shipping mobile "ray traced" games, before making grandiose claims, in fact, wait to see if anyone pays the extra to include the IP block first. “
We won't have long to wait; there is going to be a big ramp-up over the next 6 months, and some ray tracing gaming info later this month. Time will tell if my claims are really grandiose or if I really did my research well. Perhaps you are not aware, but at the mobile shows and at the investor meets at ImgTec HQ they have been showing ray-traced demos. I am not basing my so-called grandiose claims on nothing. I might be a little off due to my own interpretation, but not massively off. I already know the new IP block is coming in new products; one of the perks of being an investor is that we get to see new IP before it gets widely known. Like I said before, I expect 1 new ray tracing product announcement this month.



“This is the same for ray tracing, the quality someone is making at 1-10 hour rendering times for ray tracing is not the level of quality you will find in game, nothing remotely close.” “This is the biggest issue I have, you talk as if ray tracing is one thing, it's the same quality and detail level applied to the whole scene that an artist working on pre-rendered stuff in Maya is doing.”
I take it you haven't been following the GPU shows? Fair enough, not everyone does. You are more than a factor of 10 out, as you don't wait 1 to 10 hours for a final render. It is also not pre-rendered but live in the viewport.

That was what, 2 to 3 seconds to final render on the old 90nm 75MHz card? You keep saying mobile games are simple backgrounds and single characters, so those types of ray-traced graphics should run fine in those types of games on a modern 20nm or 16nm chip running faster than 75MHz.
I agree the first games will be hybrid, most likely using ray tracing for lights, shadows and reflections, HDR-style effects and the like.

This is a simple scene, just like you say we would get in mobile games. Surely you can agree it looks amazing from a light and reflection point of view, well beyond what desktop games look like. It runs at 1080p HD, took around 2 minutes to final render per frame, and should work in real time on future ray tracing chips. There is another demo like that with a sports car, just like what we would see in a racing game.



“So to sum up, ray tracing isn't nearly as far ahead of current methods as you think,”
To sum it up, you demonstrated you are massively outdated when it comes to ray tracing. Which is fair enough; not everyone follows it. Your comments sound like your experience is from where ray tracing was 8 years ago. No one pre-renders any more; it's all live editing and render-as-you-go. That, and saying 1 to 10 hours for a final render, just shows how outdated your knowledge in this area is. We are down to 5 to 18 seconds per frame for game quality, or 2 minutes per frame for film quality. That is going to speed up by an order of magnitude with the upcoming generations.


EDIT: That old 75MHz ray tracing card is doing shaders on the CPU. Moving shaders onto the GPU alone will give a massive speed increase. Add in shrinking the chip from 90nm down to 20nm or 16nm with a clock boost and we are talking orders of magnitude faster. That should move us from 5 to 18 seconds per frame to real frames per second at game quality.
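The "orders of magnitude" claim above can be put into a quick back-of-envelope sketch. All of the multipliers here are illustrative assumptions drawn loosely from the post (shader offload, node shrink, clock boost), not measured figures:

```python
# Back-of-envelope sketch of the claimed speedup. All factors below are
# illustrative assumptions, not benchmarked numbers.

def estimated_fps(base_seconds_per_frame, speedup_factors):
    """Divide a baseline frame time by a list of multiplicative speedups
    and return the resulting frames per second."""
    total = 1.0
    for f in speedup_factors:
        total *= f
    return total / base_seconds_per_frame

# Assumed gains: ~10x from moving shaders off the CPU onto the GPU,
# ~4x from a 90nm -> 20nm shrink, ~5x from raising the 75MHz clock.
factors = [10.0, 4.0, 5.0]

# 5s and 18s per frame were the "game quality" render times quoted above.
best = estimated_fps(5.0, factors)    # ~40 fps
worst = estimated_fps(18.0, factors)  # ~11 fps
print(f"{best:.0f} fps best case, {worst:.0f} fps worst case")
```

Even under these optimistic stacked multipliers, the worst case only just reaches playable frame rates, which is why the claim deserves scepticism.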
 
Tegra K1 SoC 4+1 Cores and 2 Cores Variant Antutu Benchmarks Surface – No Power Draw Mentioned

The benchmarks of the quad-core and dual-core variants of Nvidia's Tegra K1 SoC have surfaced, possibly for the first time. Though we already had a very impressive benchmark of one variant before, this time we have both. However, if what SemiAccurate revealed last week is true, then to put it bluntly, these numbers are not impressive.
The leak's source is Hardwareluxx, a site which is usually quite reliable. The benchmarks pit the Tegra K1 SoC (both variants) against its rivals in the mobile SoC industry and show Nvidia's SoC leading by a huge margin. In fact, if you look at the graph you will see both Tegra K1 variants in a league of their own (and the poor Exynos 5410 on its own too). But like I said before, reserve judgement on that until you learn the actual power draw.
A cursory glance reveals three tiers in the benchmarks: the first contains both Tegra K1s, the second all the other SoCs, and the third the Exynos 5410 on its own at 27,887 Antutu points. The higher-scoring Tegra K1 is the 4+1 variant, while the lower-scoring one is the dual-core.

Now here is the golden question: is this particular Tegra K1 SoC drawing anywhere near the 40W TDP mark? If so, these scores are nowhere near impressive; in fact they are downright disappointing. To be fair, SemiAccurate's revelation is based on an assumption made by looking at the specs of the charger powering the Tegra K1. I don't even need to say how large the margin of error could be in deducing wattage from something like that. Of course, the GT 740M draws 40W of juice and smokes the Tegra K1, so there's that. It is possible that the module SA saw at the demo booth was actually the vehicle module and not the mobile unit. In any case, these are very suspicious circumstances for Nvidia's flagship SoC to be in. But if the SA hype is inaccurate, pardon my pun, then these are good numbers.
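To see why deducing SoC wattage from a charger rating is so error-prone, here is a small sketch. Every number below is hypothetical (charger rating, conversion efficiency, draw from the display and the rest of the board); none of it is SemiAccurate's actual data:

```python
# Illustrative sketch of why inferring SoC power from a charger rating
# is unreliable. All numbers are hypothetical, not measured data.

def implied_soc_power(charger_watts, efficiency, other_draw_watts):
    """Rough upper bound on SoC draw: charger output after conversion
    losses, minus everything else in the device."""
    return charger_watts * efficiency - other_draw_watts

charger = 60.0  # hypothetical charger rating in watts

# The answer swings wildly depending on the assumed conversion
# efficiency and how much the display, RAM, storage and battery
# charging consume.
low  = implied_soc_power(charger, efficiency=0.75, other_draw_watts=30.0)  # 15W
high = implied_soc_power(charger, efficiency=0.90, other_draw_watts=10.0)  # 44W
print(f"SoC could draw anywhere from ~{low:.0f}W to ~{high:.0f}W")
```

A roughly 3x spread from plausible assumptions alone, which is exactly the "margin of error" the post is pointing at.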


Read more: http://wccftech.com/benchmarks-tegra-k1-soc-quad-core-dual-core-variants-surface/#ixzz2vT2e9oXT

[Attached: two Antutu benchmark chart images]


Seems an apt place to post this.
 
I can't see Nvidia giving anyone a 40 watt TDP part; it seems too ridiculous, apart from maybe Chromebooks. Is there even a way of measuring the power draw of an SoC?
 
I can't see Nvidia giving anyone a 40 watt TDP part; it seems too ridiculous, apart from maybe Chromebooks. Is there even a way of measuring the power draw of an SoC?

Yes. However, from experience (of more than just Nvidia), reviewers know there is an issue with the benchmark numbers versus the power being used when the company in question point-blank refuses to tell you how much power is being drawn. There is a reason why MULTIPLE websites refused to preview the K1 without being told the power usage, after being conned into effectively lying on Nvidia's behalf with previous generations.

Anandtech in particular got sucked in (painfully naively) to posting a preview, I forget if it was T3 or T4, in which they claimed it effectively trounced Apple's comparable chips. Only months later did they find out Nvidia had misled them over power: it was using over 10W while the Apple chip was using 2-3W. When it came to shipping products in similar devices, Nvidia's performance had dropped 3-4 times.
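The point here is really about performance per watt rather than raw scores. A quick sketch, using the post's recollected power figures (10W+ versus 2-3W) and a hypothetical normalized benchmark score for each chip:

```python
# Perf-per-watt sketch. Power figures are the post's recollection;
# the benchmark scores are hypothetical normalized values, not real data.

def perf_per_watt(score, watts):
    """Benchmark score divided by power draw."""
    return score / watts

tegra_preview = perf_per_watt(100.0, 10.0)  # hypothetical score at 10W
apple_chip    = perf_per_watt(70.0, 2.5)    # hypothetical score at 2.5W

# Even if the preview unit "wins" the raw benchmark, perf/W tells the
# real story for a battery-powered device: 28 vs 10 per watt here.
print(apple_chip > tegra_preview)
```

Which is why a headline benchmark win from a dev box with unlimited cooling says little about what ships in a phone.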

ARM chips currently have no problem running at 2-3GHz, with 800-1000MHz GPU clocks and ZERO throttling, if you stick them in a device with a huge heatsink and a huge battery.

The difference is that Apple don't give out numbers for what their chip can do with 20W of cooling capacity in an open-air dev box; they give out the numbers you'd get on the phone or iPad. Qualcomm and Samsung do the same: they give out the numbers dictated by tiny power budgets and small batteries, i.e. the devices they will actually ship in. Nvidia consistently gives out performance numbers for the chip as used in a device no one will buy, either a dev box with a huge heatsink and active cooling or a 27" AIO screen device, then tries to compare itself against chips that are massively throttled, with lowered clocks and small batteries, and declares itself unmatched.
 
What did Charlie Chalk show his dismay at? Wasn't the unit around 60W? TBH it wouldn't surprise me if the chip used around the 40W mark. I lost interest in Tegra some time ago; it's one area where NV have continually disappointed.
 
A 40 watt SoC would murder a phone, and a tablet wouldn't last much longer. In fact it's nonsense, as a phone wouldn't be able to dissipate that much heat.

There's a lot of FUD going around. Only once reviewers start to get their hands on K1-enabled devices will we be able to see how good or bad the new Tegra is, but going on past evidence I'm not expecting a great deal.
 