Lisa Su Considers A Role Beyond AMD And Prepares A Successor

Yet they are still some way off matching the 2080 Ti despite the node advantage, and despite nVidia throwing a big slice of the die at RTX functionality, etc. nVidia, for a number of reasons, has made a more managed move to 7nm, and that will likely show when its 7nm products arrive.

I've heard this so many times with <X> technology, yet it is always nVidia who actually pushes it into the mainstream and supports it in the long term. I really hope AMD doesn't drop the ball on ray tracing, as it has huge potential when done properly.

I don't entirely agree with you on that point. Well, VRR is thanks to nVidia, I believe. I don't think we would have had FreeSync any time soon if it weren't for nVidia pushing G-Sync, but look at all the other tech? Not exactly for the mainstream. Ray tracing runs too slow on the 2060, PhysX has never really taken off due to poor support and development even though it was hyped (there were other reasons too), and DLSS is a blur fest. To me, nVidia's features more often than not require the top-tier card, and even then the performance leaves much to be desired; they're vendor-locked and therefore end up in very few titles, most with performance issues. I find it sad really, because some of the ideas and demos shown look pretty cool.
 

The list goes on - for all nVidia's sins, tessellation would be nowhere to be found if it were still down to ATI/AMD, and so on.

PhysX is an unfortunate one - a lot of games do use software PhysX and/or FleX, etc. But unless AMD (and to a lesser extent Intel) gets on board with hardware-accelerated physics, developers aren't going to make it a fundamental part of the engine while doing so cuts off a good slice of their market.

DLSS is a bit of a disappointment, but maybe they'll turn it around with 2.0 :s Personally I don't really care on that note though, as I don't have much use for it.
 
I actually think DLSS is a brilliant idea... if it were designed for the xx50/xx60 series cards, and only those cards, people on a budget could experience 4K and want to move up to a proper card. But it seems to be poorly implemented IMO.
 

Personally I think they've gone the wrong way with it entirely. They'd have been much better off using the dedicated hardware to intelligently do anti-aliasing at native resolution with minimal or no performance hit - something like MSAA x4 or better, approaching the results of super-sampling - rather than trying to rebuild information that wasn't there. Though using it to help lower-end cards handle higher resolutions would be nice as well.
 
Variable refresh rate was both AMD's work and something already in the industry (in laptops), with Nvidia jumping a mainstream feature to market to rip off its customers, again.

Look at any industry standard, and look at the design times for even simpler chips. VRR was already available in laptops; AMD was talking with monitor manufacturers to agree on support for VRR, writing a standard they could all agree with, submitting the standard, and getting all the major players working on a design cycle for scalers to support it. Nvidia knew about it and then used FPGAs, which can massively shortcut your time to market, to create G-Sync and 'beat' the standard. By beating the standard and locking their customers in to G-Sync-only support, they got to sell a bunch of massively marked-up G-Sync modules for years.

FPGAs are pretty much designed to be programmable chips, and for something basic like a scaler that is pretty easy to achieve. Making a dedicated hardware chip still takes 18-24 months to go through design, tape-out, verification, etc.

There is a reason G-Sync was launched in a rush with one screen only a few months before the FreeSync standard was submitted to VESA, and there is a reason FreeSync screens started launching a year later: discussions with monitor makers over how to support VRR likely started a year before G-Sync/FreeSync were announced. That's how the industry works. So AMD, as per usual, was working with everyone on an industry standard to push everyone forwards, and Nvidia saw a chance to profit and screw over its own customers.

You can't shortcut time to market on full hardware support or industry standards; there is no way FreeSync and Adaptive-Sync get that much support only a year later unless the monitor makers were all discussing this a LONG time before G-Sync launched. Literally no chance.

Again, remember this: G-Sync never screwed over a single AMD user, and FreeSync costs never screwed over a single AMD customer, but G-Sync costs ripped off every single Nvidia buyer. This is what Nvidia does: sees where the market is going thanks to the work of others and tries to take advantage and profit from it. They let AMD push tessellation in hardware until devs supported it and then jumped on board, rather than the other way around, with Nvidia eating the die cost until it was supported and AMD coming in later. Nvidia then pushed in enough tessellation power to win benchmarks, but again at their own customers' cost, putting in more hardware than needed just to cheat benchmarks rather than provide a meaningful experience for their users.
 
As for matching the 2080 Ti despite the node 'advantage': the 2080 Ti is huge, and 7nm is nowhere near getting viable yields on a part with a transistor count equivalent to the 2080 Ti's.

TU102 has only 18.6B transistors. That means a ~453 mm² RDNA2 chip would reach the same transistor count, and it would very likely surpass the TU102's performance.
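As a rough check on where a figure like 453 mm² comes from, here's a minimal back-of-the-envelope sketch, assuming Navi 10's published numbers (~10.3B transistors in ~251 mm² on TSMC 7nm) and that the density carries over linearly to a bigger RDNA2 die, which is an optimistic assumption:

```python
# Back-of-the-envelope: die area needed to match TU102's transistor count
# at Navi 10's 7nm transistor density. Assumes density scales linearly,
# which real large dies rarely manage.
navi10_transistors = 10.3e9   # RX 5700 XT (Navi 10), ~10.3B transistors
navi10_area_mm2    = 251.0    # ~251 mm^2 on TSMC 7nm
tu102_transistors  = 18.6e9   # RTX 2080 Ti (TU102), ~18.6B transistors

density_per_mm2 = navi10_transistors / navi10_area_mm2   # ~41M transistors per mm^2
implied_area    = tu102_transistors / density_per_mm2

print(f"Density: {density_per_mm2 / 1e6:.1f} MTr/mm^2")
print(f"Implied RDNA2 die for 18.6B transistors: {implied_area:.0f} mm^2")
# -> roughly 453 mm^2, versus TU102's ~754 mm^2 on 12nm
```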
 

Yes, it should handily beat 2080 Ti performance once yields improve and bigger dies are more viable. There is also a more practical issue: limited 7nm supply. So what's more profitable right now: a $7k Epyc using 8x 75 mm² dies (roughly 600 mm² in total), or one ~500 mm² die sold for <$1k, which also yields much worse than the 75 mm² dies?
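To put a number on the yield side of that, here's a minimal sketch using a simple Poisson defect model - the defect density below is a made-up placeholder, not a real TSMC figure:

```python
import math

def poisson_yield(area_mm2, defects_per_cm2):
    """Fraction of dies that come out defect-free under a simple Poisson model."""
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)

D0 = 0.1  # defects per cm^2 -- placeholder assumption, real 7nm numbers aren't public here

big_gpu_mm2 = 500.0   # one ~500 mm^2 monolithic GPU die
chiplet_mm2 = 75.0    # one ~75 mm^2 Zen 2 chiplet; an Epyc uses eight of them

y_gpu     = poisson_yield(big_gpu_mm2, D0)   # ~61%
y_chiplet = poisson_yield(chiplet_mm2, D0)   # ~93%

# Wafer area printed per *good* product. Bad chiplets are binned out
# individually, which is the whole point of going small.
per_good_gpu  = big_gpu_mm2 / y_gpu
per_good_epyc = 8 * chiplet_mm2 / y_chiplet

print(f"~500 mm^2 die yield:      {y_gpu:.1%}")
print(f"~75 mm^2 chiplet yield:   {y_chiplet:.1%}")
print(f"Wafer mm^2 per good GPU:  {per_good_gpu:.0f}")   # ~824 mm^2
print(f"Wafer mm^2 per good Epyc: {per_good_epyc:.0f}")  # ~647 mm^2
```

Even with a kind defect density assumed, the big die burns more good wafer area per sellable part than a whole Epyc's worth of chiplets, and the Epyc sells for several times the price.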

As TSMC moves more fabs over to 7nm, and/or a lot of production moves to 7nm+ EUV or the 6nm/5nm derivatives, with various fabs converted to handle those nodes, overall 7nm-class capacity should keep increasing.

It will be interesting to see what gets made on which node over the next couple of years. AMD also has console production to get done at TSMC, starting I'd guess around Q2 next year. They've got ever-increasing volume production going on over there and a lot of competition for the supply.
 

The production allocation looks like a problem for AMD and it's one that is probably underestimated by them, hence the limited supply of Navi 10 chips and 7nm Zen 2 chiplets.
Where are GlobalFoundries, Samsung and other foundries?
 
Tessellation was first implemented in hardware with TruForm on the ATI Radeon 8500 in 2001, and Matrox also supported tessellation with the Parhelia-512 in 2002. Hence tessellation was not used properly in games until Nvidia bothered to support it themselves, as until then at least half the target market on PC wouldn't support it:
http://rastergrid.com/blog/2010/09/history-of-hardware-tessellation/

The true successor of the original hardware tessellation feature reappeared with the Xbox 360's GPU and then, for PC, with the introduction of the AMD Radeon HD 2000 series. This hardware generation came equipped with a fixed-function hardware tessellator similar to that of the Radeon 8500, but with added programming flexibility. The functionality is accessible in OpenGL through the extension GL_AMD_vertex_shader_tessellator but, again, it didn't make its way into core OpenGL, nor into DX10, due to the lack of support on NVIDIA GPUs.
 
Tessellation: AMD.

PhysX: Ageia and NovodeX AG for the hardware. Nvidia bought the rights to PhysX and then set about butchering it so you needed high-end proprietary Nvidia hardware to run it at anything like reasonable FPS. Much like RT today.
Alternatives like Bullet and Havok are far more efficient.

Ray-traced global illumination: AMD. First seen in DiRT Showdown, Crysis 3 and Far Cry 3. They do this at good performance without the need for proprietary RT cores, as Crytek, who have had this technology in their engine for years, proved, or rather reinforced.

Ray-traced reflections: Nvidia. Having said that, traditional screen-space reflections are also ray traced; they work without RT cores and are far more efficient, culling distant reflections to various degrees of quality and, in the past, also limiting the angle and direction of the rays. That did cause some visual limitations and graphical artefacts in older implementations, however these are now completely resolved by allowing more complete ray tracing on more powerful modern hardware; again, there was just no need for proprietary hardware (see the sketch at the end of this post).

Variable refresh rate screens: Nvidia. G-Sync, again unnecessary proprietary hardware. AMD did the same thing in software, which Nvidia are now also using; I have a FreeSync screen and am running "G-Sync" on it.

DLSS: Nvidia. Image quality enhancement, it doesn't work.

AMD recently introduced FidelityFX. Which does work.

The one in the middle is FidelityFX. 2160p | 1680p + FidelityFX | 2160p + DLSS

[Image: LvqGzmE.png - screenshot comparison]
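On the screen-space reflection culling mentioned above, here's a heavily simplified, illustrative ray-march sketch - Python rather than shader code, and all names and numbers are made up, with max_steps standing in for the distance cull:

```python
import numpy as np

def trace_ssr(depth, start, step, start_z, step_z, max_steps=64, thickness=0.5):
    """March a reflection ray through a depth buffer (screen space), illustrative only.

    depth     : 2D array of linearised scene depth per pixel.
    start     : (x, y) pixel where the reflected ray begins.
    step      : (dx, dy) pixel increment per iteration along the ray.
    start_z   : ray depth at the start pixel.
    step_z    : ray depth increment per iteration.
    max_steps : the distance cull -- rays that travel too far are abandoned
                instead of being traced further.
    """
    h, w = depth.shape
    x, y, z = float(start[0]), float(start[1]), float(start_z)
    for _ in range(max_steps):
        x += step[0]; y += step[1]; z += step_z
        if not (0 <= x < w and 0 <= y < h):
            return None              # ray left the screen: nothing to reflect
        scene_z = depth[int(y), int(x)]
        if scene_z <= z <= scene_z + thickness:
            return int(x), int(y)    # ray dipped just behind a visible surface: call it a hit
    return None                      # culled: fall back to a cubemap or nothing

# Tiny smoke test against a synthetic flat wall at depth 3.0:
depth = np.full((1080, 1920), 3.0)
print(trace_ssr(depth, start=(960, 540), step=(0.0, -2.0), start_z=0.0, step_z=0.5))
# -> (960, 528) with these made-up numbers
```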
 

Eh? FreeSync scalers are slightly modified off-the-shelf scalers misusing the PSR feature to do adaptive sync. If nVidia were trying to beat AMD to market, there is no way they'd have done it using an FPGA - you still have to design and develop the system from scratch even if the FPGA itself is programmable.

If FreeSync scalers were a custom design for adaptive sync, they'd have things like adaptive overdrive to enhance adaptive sync response by default - something you can't do with a slightly modified off-the-shelf scaler. The time taken to modify an off-the-shelf scaler to support FreeSync would be, well, basically the time between AMD taking notice of G-Sync and starting their own efforts - not the timeline you are trying to push.

First-gen FreeSync scalers are not "full hardware support"; they support the bare minimum of functionality by using VESA standard features in ways they weren't originally intended, hence many of the problems with FreeSync and the standardisation of things like refresh range, and the poor support for windowed modes, etc. (albeit MS have currently ****** up windowed-mode G-Sync in Windows 10, but it still works fine in 7/8).

AMD only ever showed any interest in adaptive sync after nVidia started demonstrating it via eDP, having got no traction when trying to pursue it as a standard through VESA. Anything else, like you suggest, is trying to rewrite history.
 