We will set our graphics free. #SIGGRAPH2018

The graphics card state of affairs is just really sad at present.


NVIDIA are locking people into G-Sync despite there being no mainstream way to use a powerful NVIDIA card with, say, an everyday TV you can buy. If they would just stop doing that and adopt an open-ish standard, or give me a way to activate G-Sync on a new TV I buy, I'll stick with the green team.

AMD are doing everything right with FreeSync, making it cheap, affordable and open to TVs... but their cards just aren't powerful enough to hit 4K, and they constantly seem to be a year behind NVIDIA in performance/heat/noise despite not being priced competitively.


The last point is what really hurts me with AMD. You'd think that, given they arrive late to the party with cards which hardly blow NVIDIA's one-to-two-year-old cards out of the water, they'd at least price them somewhat competitively.
 
Fingers crossed for Intel on the GPU side. I hope they have found a new way of doing things and push graphics architecture forward. AMD have failed to move on from GCN and Nvidia look like they are getting lazy.

Competition is awesome.
 
Turing is a complete reworking of the raster architecture compared to Pascal, so I'm not sure how they are getting lazy.

But yes more competition can only be a good thing.
 
Nvidia aren't getting lazy, just incredibly greedy, enabled by the lack of serious competition.
 
Depending on the exact configuration, almost half the core is split between Tensor cores and ray-tracing-specific functionality, which is one of the reasons Turing cores are so big.
 
I'm just going by what it says on the NVidia website.

https://www.nvidia.co.uk/design-visualization/technologies/turing-architecture/

It certainly seems to be a complete reworking of the non-Tensor, non-RT cores as well.

Yeah, the SMs/"CUDA" cores are significantly redesigned as well. They seem to have made some significant changes to scheduling too, so you no longer end up with a lot of cores sitting idle when there is other work available to them; on Pascal and older they'd still have to wait until the other stuff was done (unrelated directly to async, etc.).
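For what it's worth, here's a rough CPU-side analogy of that scheduling idea (this is not how an SM scheduler is actually implemented, and all the names are made up): instead of every worker waiting at a barrier until its whole batch is finished, idle workers just pull the next available job from a shared queue.

```cpp
// Rough CPU-side analogy only: a pool of workers draining a shared queue,
// so nobody sits idle while unrelated work is still available.
// (Illustrative names; not a model of real GPU hardware.)
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

int main() {
    std::queue<std::function<void()>> work;  // pending, independent jobs
    std::mutex m;

    // Fill the queue with jobs of wildly different cost.
    for (int i = 0; i < 32; ++i) {
        work.push([i] {
            volatile long sink = 0;
            for (long j = 0; j < (i % 4 + 1) * 1000000L; ++j) sink = sink + j;
            std::printf("job %d done\n", i);
        });
    }

    // Each worker keeps pulling whatever job is available next, instead of
    // waiting at a barrier until every job in its original batch has finished.
    auto worker = [&] {
        for (;;) {
            std::function<void()> job;
            {
                std::lock_guard<std::mutex> lock(m);
                if (work.empty()) return;
                job = std::move(work.front());
                work.pop();
            }
            job();
        }
    };

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < 4; ++i) pool.emplace_back(worker);
    for (auto& t : pool) t.join();
}
```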
 
Do Nvidia plan to build an entirely new ecosystem of their own hardware and games?
I'm asking because the consoles run on AMD hardware and AMD doesn't currently support ray tracing...
What Nvidia are planning seems very weird.
 
It's a good point. As you say, most games nowadays are console ports and AMD do not have this tech, so it will be interesting.
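For what it's worth, on the PC side DirectX Raytracing is exposed as an optional D3D12 feature, so a cross-vendor game can probe for it at runtime and keep its raster path as the fallback. A minimal sketch of that check (the commented call-site names are hypothetical):

```cpp
// Sketch only: probe for DirectX Raytracing (DXR) support on an existing
// D3D12 device and pick a render path accordingly. Error handling omitted.
#include <d3d12.h>

bool SupportsDXR(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5)))) {
        return false;  // feature struct not recognised: definitely no DXR
    }
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}

// Hypothetical call site: an engine choosing between its two paths.
// void InitRenderer(ID3D12Device* device) {
//     if (SupportsDXR(device)) EnableRayTracedEffects();
//     else                     UseRasterOnlyPath();
// }
```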
 
I don’t see ray-traced games being made for a long time anyway, outside of a handful that may experiment with it. Maybe for the PS6 or something, and by then AMD will have it too. This tech is just a step in the right direction, nothing more, for us gamers imo. It will, however, come in handy for professionals making videos, which I think is what it is aimed at currently. For gamers it is just marketing... a bit like some of Vega’s features were.
 
On reading the initial summaries of Intel getting into GPUs, and the current Nvidia RT stuff, the thought that came into my head was whether Intel's GPU would be a newer implementation of a Project Larrabee-style card (i.e. multithreaded cores running software implementations of GPU functions), as shown in one of Linus's videos.

It could make a certain amount of sense, being able to repurpose the GPU resources for different types of functions through a software change rather than having RT cores going unused, etc.
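To make the Larrabee idea a bit more concrete, here's a toy sketch of a "GPU function" written as ordinary software and spread across CPU threads; the per-pixel function and all names are invented purely for illustration.

```cpp
// Minimal sketch of the Larrabee-style idea: a "GPU function" (here a toy
// per-pixel shader) implemented in plain software and spread across CPU
// threads, so it could be swapped for a rasteriser, a ray tracer, or
// anything else without changing the hardware.
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

struct Pixel { std::uint8_t r, g, b; };

// The "programmable stage": just ordinary code.
static Pixel Shade(int x, int y) {
    return Pixel{ static_cast<std::uint8_t>(x & 0xFF),
                  static_cast<std::uint8_t>(y & 0xFF),
                  static_cast<std::uint8_t>((x ^ y) & 0xFF) };
}

int main() {
    const int width = 1920, height = 1080;
    std::vector<Pixel> framebuffer(static_cast<std::size_t>(width) * height);

    const unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;

    // Interleave rows across threads, one band of rows per worker.
    for (unsigned t = 0; t < n; ++t) {
        workers.emplace_back([&, t] {
            for (int y = static_cast<int>(t); y < height; y += static_cast<int>(n))
                for (int x = 0; x < width; ++x)
                    framebuffer[static_cast<std::size_t>(y) * width + x] = Shade(x, y);
        });
    }
    for (auto& w : workers) w.join();
}
```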
 