Nvidia Project Logan

Nvidia's Project Logan brings PC graphics to smartphones and tablets, and is said to be more powerful than the PS3.

link

Pretty awesome really.

What with control pads for phones, could we eventually see this next gen of consoles eclipsed by phones?

Personally I don't think we will, but by god I bet the manufacturers will push it.
 
I imagine that by the time the next gen consoles come to the end of their lifespan, they will have been eclipsed by phones in terms of power, just look at the progress that has been made since the current gen came out in 2005/2006. That said, I will believe in the awesomeness of Project Logan when I see it.
 
I really do not see anything impressive there when you look at the competition. NVidia seem to be constantly behind in mobile tech.

Basically what they are saying is that in years to come they will beat what is a year-old iPad today. That puts them years behind the competition.
 
Last edited:
I really do not see anything impressive there when you look at the competition. NVidia seem to be constantly behind in mobile tech.

Basically what they are saying is that in years to come they will beat what is a year-old iPad today. That puts them years behind the competition.

It's because they're more of a marketing company. They market their stuff in a way that gets people to have an emotional response to their products, and then the downsides get continually overlooked.
 
I was looking at this earlier, looks pretty neat, and Tegra 4 looks like it will be pretty good too... I've come around to thinking I really want a Shield.
 
It's because they're more of a marketing company. They market their stuff in a way that gets people to have an emotional response to their products, and then the downsides get continually overlooked.
Starting to see that. Just look at how poor Tegra 4 is, yet people want it. Tegra is years behind, but NVidia sell it like it's the best thing ever even though it has slow performance, runs hot and has bad battery life. Still not sure what the Shield is all about, as it looks like a pointless device. You have to have a PC nearby that is not being used. Why not just go on the PC and use the better screen and controls? Why not use a smartphone with a controller?
 
Last edited:
Starting to see that. Just look at how poor Tegra 4 is, yet people want it. Tegra is years behind, but NVidia sell it like it's the best thing ever even though it has slow performance, runs hot and has bad battery life. Still not sure what the Shield is all about, as it looks like a pointless device. You have to have a PC nearby that is not being used. Why not just go on the PC and use the better screen and controls?

The Shield concept is interesting in that you can stream your PC games around the house; however, they soiled it with their typical nVidia ways, whereby you must have an nVidia graphics card to use it, and then there's the price they're trying to sell it at.

Companies don't really like nVidia anyway. I can't see Google opting to use Tegra for any more Nexus devices, Sony didn't even consider them for the PS4 due to the way they acted over their PS3 involvement, and then nVidia posted comments showing high levels of cognitive dissonance over the PS4: "it's crap and low end and we weren't even interested anyway" is pretty much what they said when asked about the PS4 and their involvement.
 
In fairness to NVIDIA, Tegra 5 is likely to be as big a jump from Tegra 4 as desktop Kepler was from Fermi (i.e. huge quite frankly).

As somebody who does GPGPU programming, I am actually quite excited by the prospect of a cluster of low-power Tegra 5 devices, each offering both multiple relatively powerful ARM processors and a CUDA (and OpenCL) enabled GPU. The potential is actually quite impressive.

Tegra 4 was a pure marketing exercise so people didn't think they had stalled their development cycle; Tegra 5 is a genuinely exciting prospect: a proper, fully programmable GPU in a mobile, low-wattage package. Sure, the competition (ARM/PowerVR etc.) might have more powerful mobile GPUs by that time, but it is fair to say that NVIDIA has the most accomplished compute model, and this will finally enable mobile devices to use it.
 
From the perspective of a person using the hardware for GPGPU purposes, it was approximately a 50% improvement comparing model line to model line. The restructured shader model (i.e. CUDA Cores) meant that parallelism was effectively doubled. Important oversights of the SM 2.1 architecture were also addressed in 3.0 and have been improved upon greatly again in 3.5.
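
For anyone wondering what these SM numbers actually refer to: the shader model is the device's compute capability, which you can query with the standard CUDA runtime. A minimal sketch (just an illustration, not one of my benchmark apps):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            // prop.major/prop.minor is the compute capability:
            // 2.x is Fermi, 3.0 is GK10x Kepler, 3.5 is GK110 Kepler.
            printf("Device %d: %s (SM %d.%d)\n",
                   i, prop.name, prop.major, prop.minor);
        }
        return 0;
    }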

Granted, this may not always result in faster gaming if the games haven't been optimised for this radical change in architecture design, but that wasn't my point. Fermi and Kepler are chalk and cheese, whereas everything before that had been a very gradual increment going all the way back to the G80 chips. Kepler was not a step back; it is superior to Fermi in every way. I don't think people realise quite how radically different the two architectures are.

The Logan chip is Kepler; all other Tegra chips have used VERY cut-down versions of Fermi (and the minimally different prior architectures) and, importantly, didn't offer access to GPGPU. Logan will be a game changer in the mobile GPU market, I'm fairly sure.
 
When you say "Kepler" are you talking solely about GK110?

Because anything below GK110 has been severely crippled for GPGPU performance.
 
Admittedly they have made the whole Kepler line a bit confusing... but I am talking about anything that uses the Kepler namesake, effectively all cores that are SM 3.0 compliant onward.

I can offer a direct comparison of some fairly rigorous scientific CUDA apps on top-end SM 2.1 devices (i.e. Fermi) and then on 3.0 and 3.5 devices, and the jump between a top-end 2.1 and a top-end 3.0 (i.e. a GTX 580 and a GTX 680 if we are talking GeForce) is approximately a 45% speed-up. Admittedly, the majority of my problems are embarrassingly parallel, so they make good use of the total redesign that SM 3.0 brought.
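
To illustrate what I mean by embarrassingly parallel, here is a toy sketch (not one of the actual scientific apps I benchmarked): a SAXPY kernel where every thread handles one element with no communication between threads, which is exactly the kind of workload that scales with the extra resident threads SM 3.0 brought.

    #include <cuda_runtime.h>

    // Toy embarrassingly parallel kernel (SAXPY): each thread computes one
    // element independently, so throughput scales almost directly with the
    // number of threads the hardware can keep in flight.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMalloc(&x, n * sizeof(float));
        cudaMalloc(&y, n * sizeof(float));
        cudaMemset(x, 0, n * sizeof(float));
        cudaMemset(y, 0, n * sizeof(float));
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();
        cudaFree(x);
        cudaFree(y);
        return 0;
    }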

I'm not saying there aren't cases where a pre-SM 3.0 chip might have more horsepower per "core", but when you are effectively doubling the number of available threads, increasing memory bandwidth and reducing latency, and doing so while impressively reducing power consumption, it is hard to argue that the pre-3.0 architecture is superior.

You say that anything less than GK110 has been severely crippled; that's true when comparing to the chips above it, but when you compare to those based on Fermi (i.e. a GTX 680 vs a GTX 580), or indeed anything equivalent in the Quadro or Tesla space, Kepler still wins because of the architecture, which is effectively the same across the Kepler range, in ethos at least.
 
Logan will be a game changer in the mobile GPU market, I'm fairly sure.
Why will it be a game changer when all it does is catch up with what ARM and PowerVR have today? All it is, is NVidia playing catch-up again. I do not think it's fair to say NVIDIA has the most accomplished compute model. There is no evidence for that in mobile; in fact, so far they are way behind.
 
It depends on whether it's reliant on double precision performance or not, which is where Kepler falls over, with GK104 having roughly 10% the DP performance of GF110.
 
It depends on whether it's reliant on double precision performance or not, which is where Kepler falls over, with GK104 having roughly 10% the DP performance of GF110.

There are some Kepler devices which are class-leading for double precision; you just have to pay for them.

The double precision "issue" was clearly a marketing shift by NVIDIA rather than an architectural stumbling block. They are constantly trying to make the Tesla range a decent prospect for HPC managers; they needed a way to differentiate the products, and floating-point precision was one way. It's working well too: I know many managers who have dug deeper to secure K20s over K10s even though the majority of their workload is single precision, not double. GPU compute is becoming very lucrative for NVIDIA, so it had to go that way, unfortunately.
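
If you want to see the segmentation for yourself, here is a hypothetical micro-benchmark sketch (the names and sizes are mine, purely illustrative): run the same arithmetic-bound kernel in single and then double precision and time each; the ratio you get back is the chip's SP/DP throughput split.

    #include <cstdio>
    #include <cuda_runtime.h>

    // The same FMA-heavy loop instantiated for float and for double.
    // The result is written out so the compiler cannot remove the loop.
    template <typename T>
    __global__ void fma_loop(T *out, T a, T b, int iters) {
        T acc = a;
        for (int k = 0; k < iters; ++k)
            acc = acc * b + a;  // maps to fused multiply-add instructions
        out[blockIdx.x * blockDim.x + threadIdx.x] = acc;
    }

    template <typename T>
    float time_kernel(int blocks, int threads, int iters) {
        T *out;
        cudaMalloc(&out, blocks * threads * sizeof(T));
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        fma_loop<T><<<blocks, threads>>>(out, (T)1.0, (T)1.000001, iters);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(out);
        return ms;
    }

    int main() {
        printf("float:  %.2f ms\n", time_kernel<float>(256, 256, 1 << 16));
        printf("double: %.2f ms\n", time_kernel<double>(256, 256, 1 << 16));
        return 0;
    }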

Undoubtedly there will be just as stark a contrast in products with the next chips, that's just business unfortunately.

(edited the next-gen GPU name out as I suddenly realised it may not be officially out there quite yet and I wouldn't want the forums to get into trouble!)
 
Last edited:
Why will it be a game changer when all it does is catch up with what ARM and PowerVR have today? All it is, is NVidia playing catch-up again. I do not think it's fair to say NVIDIA has the most accomplished compute model. There is no evidence for that in mobile; in fact, so far they are way behind.

It will be a game changer as this is NVIDIA finally releasing a fully featured GPU to the mobile market.

As I said, it may not be class-leading in terms of horsepower, but it will be in terms of available features, and it's these that will lead to better graphics and physics (because CUDA is suddenly an option, and therefore so is PhysX etc.).

Bar AMD, no other company has the GPU prowess that NVIDIA has. This will result in portable devices that can be programmed for the latest DirectX, for example, and the latest OpenGL, as well as OpenGL ES and of course CUDA. Ramping up the horsepower is less of an issue than cramming in features, which is what NVIDIA has done here and other manufacturers haven't.
 
“Bar AMD, no other company has the GPU prowess that NVIDIA has. This will result in portable devices that can be programmed for the latest DirectX, for example, and the latest OpenGL, as well as OpenGL ES and of course CUDA.”
So NVidia releasing what everyone already has is a game changer? Are you aware that the only company not to have a fully featured GPGPU in the mobile market is NVidia? Everyone else is already on DX10/11; only NVidia are stuck back in DX9. By the time Logan arrives, it will very much be playing catch-up.


"Bar AMD, no other company has the GPU prowess that NVIDIA has, this will result in portable devices being able to be programmed for the latest DirectX for example and the latest OpenGL as well as OpenGL ES and of course CUDA. Ramping up the horse-power is less of an issue than cramming in features, which is what NVIDIA has done here and other manufacturers haven't. “
ImgTech have just as much GPU prowess as NVidia, if not more. Apart from CUDA, which is irrelevant in the mobile space, all that stuff you talk about can be done on PowerVR's Rogue architecture. Plus, in the coming years PowerVR will have Rogue plus ray tracing in mobiles, so again NVidia will be behind in features.
 
I think we'll have to agree to disagree on this one.

All I will say is that (as far as I am aware) the (next) PowerVR Rogue line, for example, will (still) be shipping with OpenGL 3.2 support. Tegra 5 will be out at about the same time with OpenGL 4.0 out of the box.

I think you underestimate the advanced state of CUDA compared to OpenCL and the other alternatives, and I think you also underestimate the importance of having a GPGPU-capable chip in a mobile device. At the moment, compute off-loading when programming on a current device is difficult; I know, I've done it. You end up either wrestling with some TI DSP thrown in as an afterthought, or trying to get things working on the GPU using old-fashioned shader programming. Fine if you're doing graphics, not so great if you want to offload some physics work etc.

With a CUDA-enabled GPU I can suddenly throw a few OpenACC pragmas into my code and I have some GPU off-loading.
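
Something like this, for the sceptics (a minimal sketch, assuming an OpenACC-capable compiler such as PGI's, which targets CUDA devices; the function name is mine):

    // One pragma offloads a plain loop to the GPU; the data clauses say
    // what to copy to the device and what to copy back.
    void saxpy_acc(int n, float a, const float *x, float *y) {
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }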

I stick by my statement: having an (albeit cut-down) Kepler GPU alongside 4(+1) strong ARM CPUs will be a very potent package, even if in some other ways NVIDIA have fallen short.

Edit: added a few key words I forgot!
 
Last edited: