Dark days: AMD's share price is at its lowest ever.

It was a golden time though: the 9000 series propelled ATI up, and the X800 series put them ahead. They were competing really well until the 2900 line flopped.

[Image: AMD share price chart]
Looking at this, particularly in the HD 5000 vs GTX 400 series era: ATI had a beast of a card in the 5870/5850 that utterly destroyed anything Nvidia had on the market for six months, and it still failed to capture majority market share. That shows how powerful people's loyalty to Nvidia was, or how effectively the "It's coming! It's coming!" line worked (the GTX 480/470 didn't arrive until half a year later).

People always quote poor power efficiency as a shortcoming of AMD's products, but the HD 5850/5870 (ATI at the time) were far more power efficient compared to the GTX 470/480, and that didn't even bring them close enough to touch Nvidia. Keep in mind that the GTX 470/480 were way overpriced, hot as the sun, and barely sold until Nvidia slashed the pricing.

Let's face it: any shortcomings held against ATI/AMD cards suddenly don't matter when they appear on Nvidia cards, as most people who would only consider buying Nvidia will buy Nvidia anyway, since for them buying AMD is not an option, full stop.
 
Let's face it: any shortcomings held against ATI/AMD cards suddenly don't matter when they appear on Nvidia cards

If the GTX 480 had trailed the 5870 on performance while using the power it does, people would have levelled the same kind of complaints against it. But it didn't: over a range of games the 480 was 10-15% faster, even though there was the odd game where the 5870 was significantly faster.

Having a slower card that uses less power isn't particularly noteworthy unless it offers significantly better performance per watt. If power were a concern, you could usually undervolt/downclock the faster card a bit and get the same power use at the same performance level - sometimes even better, as power draw often goes up massively with the last few steps of performance/voltage.
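
For what it's worth, the reason those last few steps are so expensive is that dynamic power scales roughly with voltage squared times clock (P ≈ C·V²·f). A minimal sketch of that relationship, with made-up voltage/clock numbers rather than measurements from either card:

```python
# Illustrative only: dynamic power scales roughly as P = C * V^2 * f,
# so a small voltage/clock reduction cuts power disproportionately.
# The voltage and clock figures below are invented, not measurements.

def dynamic_power(voltage: float, clock_mhz: float, c: float = 1.0) -> float:
    """Relative dynamic power, P ~ C * V^2 * f."""
    return c * voltage**2 * clock_mhz

stock = dynamic_power(voltage=1.05, clock_mhz=700)  # hypothetical stock settings
tuned = dynamic_power(voltage=0.95, clock_mhz=650)  # mild undervolt/downclock

print(f"Performance lost: ~{(1 - 650 / 700) * 100:.0f}%")     # ~7%
print(f"Power saved:      ~{(1 - tuned / stock) * 100:.0f}%")  # ~24%
```

So giving up around 7% of the clock with a modest undervolt can save roughly a quarter of the power, which is why the faster, hotter card can usually be tuned down to match the slower one's power draw.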
 
If the GTX 480 had trailed the 5870 on performance while using the power it does, people would have levelled the same kind of complaints against it. But it didn't: over a range of games the 480 was 10-15% faster, even though there was the odd game where the 5870 was significantly faster.
Yeah, but the HD 5870 launched at £320-£350 and utterly destroyed Nvidia's GTX 200 series for six months, and when the GTX 480 eventually launched it was six months late, at £450-£500 - that's a whole 45-50% higher in price.

The HD5870 was more power efficient and WAY faster than the GTX280/285 for 6 months.
 
Crossfire/SLI are about as good as it gets when it comes to that approach - even chiplets won't help there if they have to use techniques like AFR (the AMD guy was correct in that respect) - you can't just sew together GPU cores using Infinity Fabric in the style of CPUs and get the same kind of results. (EDIT: unless developers start building games around DX12 explicit multi-adapter from the ground up.)

The real trick with MCM designs comes when you can implement an entire logical GPU on the substrate without the restrictions of a monolithic package, possibly using multiple headless processing packages and multiple command processors, and arbitrarily scale any one area up or down.

I am aware of the technical difficulties in making it work all the time, in every title, and without micro-stuttering.
My view is that they don't spend enough time or resources on making it work - it is definitely possible.

How about:
  • Split Frame Rendering (SFR): one GPU renders the top (or left) half of each video frame, the other does the bottom (or right) - i.e. plane division (see the sketch after this list);
  • Some additional chip, along the lines of FreeSync;
  • And possibly some way to make titles see the dual/multi-GPU setup as a single processing unit.
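
To make the first bullet concrete, here is a minimal sketch of plane division with a stand-in render function - real SFR lives in the driver/engine, not in application code like this:

```python
# Minimal sketch of plane division: split each frame into halves and hand
# one half to each GPU. render_region is a stand-in for real rendering.
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 1920, 1080

def render_region(gpu_id: int, y_start: int, y_end: int) -> str:
    """Pretend to render scanlines [y_start, y_end) on the given GPU."""
    return f"GPU{gpu_id}: rendered rows {y_start}-{y_end - 1} of a {WIDTH}-wide frame"

# GPU 0 takes the top half of the frame, GPU 1 the bottom half.
with ThreadPoolExecutor(max_workers=2) as pool:
    top = pool.submit(render_region, 0, 0, HEIGHT // 2)
    bottom = pool.submit(render_region, 1, HEIGHT // 2, HEIGHT)
    frame = [top.result(), bottom.result()]

print(frame)
```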
 
Yeah, but the HD 5870 launched at £320-£350 and utterly destroyed Nvidia's GTX 200 series for six months, and when the GTX 480 eventually launched it was six months late, at £450-£500 - that's a whole 45-50% higher in price.

The HD5870 was more power efficient and WAY faster than the GTX280/285 for 6 months.

I think a lot of us pretty much ignored the high end cards then and bought the 470s or 5850s and clocked the nuts off them anyhow.

How about:
  • Split Frame Rendering (SFR): one GPU renders the top (or left) half of each video frame, the other does the bottom (or right) - i.e. plane division;
  • Some additional chip, along the lines of FreeSync;
  • And possibly some way to make titles see the dual/multi-GPU setup as a single processing unit.

SFR performance varies too hugely depending on what the scene is made up of. If, say, a sky box fills most of the region allocated to one GPU while the other is dealing with the complex parts of the scene, performance tanks, and there is no real solution to that - intelligent scene analysis and the like, to farm the load out more efficiently, eats up most of the performance you would gain by doing it.
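
A toy illustration of that imbalance, using invented per-half frame costs: under a static top/bottom split, the frame is only done when the slower half finishes, so a cheap sky-box half buys almost nothing.

```python
# Toy illustration of SFR load imbalance. Frame costs are invented
# milliseconds, not measurements: with a static top/bottom split, the
# frame is only finished when the slower half is.

def sfr_frame_time(top_half_ms: float, bottom_half_ms: float) -> float:
    """Frame time under a static split = the slowest half."""
    return max(top_half_ms, bottom_half_ms)

# Balanced scene: each half costs ~8 ms -> near-ideal 2x over 16 ms on one GPU.
print(sfr_frame_time(8.0, 8.0))   # 8.0

# Sky box on top, dense geometry below: one GPU idles while the other struggles.
print(sfr_frame_time(2.0, 14.0))  # 14.0 - barely faster than a single GPU
```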

There is no real software solution for making software see multiple GPUs as one without huge latency penalties, which aren't acceptable for gaming, though such setups are useful for other applications.

Quite a bit of the reason Crossfire/SLI are in the state they are is that more modern rendering features require intra-frame or inter-frame information that isn't directly available to the GPU that will need it next. There is no real way around that other than developers adopting explicit multi-adapter, or a completely new hardware architecture quite different from how current monolithic GPU cores are designed.
 
Yeah, but the HD 5870 launched at £320-£350 and utterly destroyed Nvidia's GTX 200 series for six months, and when the GTX 480 eventually launched it was six months late, at £450-£500 - that's a whole 45-50% higher in price.

The HD5870 was more power efficient and WAY faster than the GTX280/285 for 6 months.

Stock was dire for ages though, which resulted in loads of price gouging on the AMD 58xx series.
So it's not so clear cut.
 
SFR performance varies too hugely depending on what the scene is made up of. If, say, a sky box fills most of the region allocated to one GPU while the other is dealing with the complex parts of the scene, performance tanks, and there is no real solution to that - intelligent scene analysis and the like, to farm the load out more efficiently, eats up most of the performance you would gain by doing it.

There is no real software solution for making software see multiple GPUs as one without huge latency penalties, which aren't acceptable for gaming, though such setups are useful for other applications.

Quite a bit of the reason Crossfire/SLI are in the state they are is that more modern rendering features require intra-frame or inter-frame information that isn't directly available to the GPU that will need it next. There is no real way around that other than developers adopting explicit multi-adapter, or a completely new hardware architecture quite different from how current monolithic GPU cores are designed.

So the best solution is for every title to be explicit multi-GPU aware, and for AMD to finally start making chiplets.
 
So the best solution is for every title to be explicit multi-GPU aware, and for AMD to finally start making chiplets.

**I'm not an expert by any means**, but on a multi-die interposer package they could have a "front end" chiplet that presents the whole package as one coherent GPU, with multiple processing chiplets and fixed-function logic attached. From my understanding, each "stream processor" (or whatever they call their units these days) doesn't need direct access to whatever the next one is working on - it's massively parallel - so who cares which chiplet has which SP? Bandwidth to memory would be the clincher, but with an active interposer it might be possible. GPUs aren't as memory-latency sensitive as CPUs, and if the GDDR5/whatever/HBM is on the package, routing it to the dies might not even be as challenging as I imagine it would be. The OS would be unaware that it's a multi-GPU system, as the supervisor chip would dispatch all the work as it arrives. Not this coming generation, but maybe the next...
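
To illustrate the dispatch idea (purely conceptual - the supervisor/chiplet names and queue mechanics here are invented, and no real GPU is programmed like this): a single front end accepts work and fans independent items out to whichever compute chiplet is free, so callers never see the individual chiplets.

```python
# Purely conceptual sketch of the "front end chiplet" idea above. The
# supervisor/chiplet names and queue mechanics are invented for this
# illustration - no real GPU works this way.
from queue import Queue
from threading import Thread

NUM_CHIPLETS = 4
work_queue: Queue = Queue()
results: list = []

def compute_chiplet(chiplet_id: int) -> None:
    """Each chiplet drains shared work; items are independent (massively parallel)."""
    while True:
        item = work_queue.get()
        if item is None:  # shutdown signal from the supervisor
            break
        results.append((chiplet_id, item * item))  # stand-in for real shading work

chiplets = [Thread(target=compute_chiplet, args=(i,)) for i in range(NUM_CHIPLETS)]
for t in chiplets:
    t.start()

# The "supervisor" front end dispatches work as it arrives; callers only
# ever see the single queue, never the individual chiplets.
for item in range(16):
    work_queue.put(item)
for _ in chiplets:
    work_queue.put(None)
for t in chiplets:
    t.join()

print(f"{len(results)} work items completed across {NUM_CHIPLETS} chiplets")
```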

/speculation
 
Reported by AMD as fact: 7nm Vega allows 2x density, 35% improved performance, and 50% of the power consumption. 7nm Vega will launch in 2018.
Reported by TSMC: volume production of 7nm started nearly 2 months ago.

/Speculation Mode On

AMD already have a GPU that is a direct 1180ti competitor. They have made it right under our noses and so far not been called on it; that is why Nvidia are holding off on their launch.

TSMC are already mass-producing Vega at 7nm. They are actually making the next versions of an entire Vega 2 line-up.
Vega 2 is going to launch, with stock, just before Christmas, with 5 lines:

Vega2 48 with 8GB of GDDR5 - replacement for the Polaris 580 at current Vega 56 performance
Vega2 56 with 8GB of GDDR5X - well above current 1080 territory with 65% lower power consumption; GDDR brings the price point down enough to make it profitable
Vega2 64 with 16GB of GDDR5X - 5% north of current 1080 Ti performance
Vega2 82 with 16GB of HBM2 - 1180ti competitor - halo gaming product
Vega2 90 with 32GB of HBM2 - Radeon Pro card

A new 64 CU chip at 7nm, with two lower-grade cards harvested from the same silicon when it doesn't quite make the grade for the top chip. Shifting out to GDDR if that allows them to significantly shorten time to market and reduce cost - if production and price make it competitive, the whole range could be HBM2.
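
A toy model of that harvesting idea - the tier cut-offs follow the speculated line-up above, and the defect distribution is invented, purely to show how one die can populate several SKUs:

```python
# Toy model of die harvesting ("binning"): one piece of silicon populates
# several SKUs depending on how many CUs pass validation. Tier cut-offs
# mirror the speculated line-up above; the defect numbers are invented.
import random
from collections import Counter

random.seed(42)

def bin_die(working_cus: int) -> str:
    """Assign a hypothetical Vega2 SKU based on functional CU count."""
    if working_cus >= 64:
        return "Vega2 64"
    if working_cus >= 56:
        return "Vega2 56"
    if working_cus >= 48:
        return "Vega2 48"
    return "scrap"

# Simulate a wafer's worth of 64-CU dies with random defective CUs.
dies = [64 - random.choices([0, 2, 6, 10, 20], weights=[40, 25, 15, 12, 8])[0]
        for _ in range(100)]
print(Counter(bin_die(d) for d in dies))
```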

The full-fat Vega2 90 is for AI... it being a large part on an immature process makes the 82 a good secondary harvest, which is why both stay on HBM2.

In 2019 we get the 3400G, a 6-core/12-thread Ryzen 3 APU with 22 Vega2 CUs and close to RX 570 performance, and the 3200G, a 6-core/6-thread APU with 16 Vega2 CUs at RX 550/560 level performance.

/Speculation Mode Off

However, I also bought a lottery ticket over the weekend and am still working - so while I believe the above is technically possible, I have no clue if it is economically viable, or if AMD have the R&D budget or manufacturing partners to be able to make it happen.
 
Dude, if AMD put a 7nm Vega on the market it will be very expensive, at the price of a Titan. The chip is massive.

Please put away the crystal ball; we know the next AMD GPU will be based on Navi and will come out in 2019.
 
Dude, if AMD put a 7nm Vega on the market it will be very expensive, at the price of a Titan. The chip is massive.

Please put away the crystal ball; we know the next AMD GPU will be based on Navi and will come out in 2019.

Pfft I can dream - in the perfect vacuum that is the AMD GPU roadmap this seems like a perfectly valid strategy.

They are already making a 7nm Vega chip, so the mask set has already been paid for and it is just the per-unit yield costs. If they could harvest a top gaming GPU from silicon that didn't quite make the grade for their Pro cards, they could very easily create a high-end gaming GPU.

Plus, if they make a 7nm Vega 64 it will be a much smaller chip - close to what, 250mm²? They would get great yields from that, especially if they used one chip to populate 3 or 4 different product lines depending on quality.
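
As a rough sanity check on that guess - Vega 10 on 14nm is about 486mm² (the commonly cited figure), and AMD's quoted 2x density improvement would roughly halve it; the arithmetic is just illustrative:

```python
# Rough die-size estimate using AMD's claimed 2x density for 7nm.
# 486 mm^2 is the commonly cited Vega 10 (14nm) die size.
vega10_14nm_mm2 = 486
density_scaling = 2.0  # AMD's quoted 7nm density improvement

vega_7nm_mm2 = vega10_14nm_mm2 / density_scaling
print(f"Estimated 7nm Vega 64 die: ~{vega_7nm_mm2:.0f} mm^2")  # ~243 mm^2
```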

I don't expect to be right - but the actual per-unit silicon costs don't feel like a valid reason not to move. In fact, the exorbitant set-up costs already sunk into creating the Pro card make it more, not less, likely to me.

Anyhoo - this thread is about the death of AMD, not its glorious victory.
 
Anyhoo - this thread is about the death of AMD, not its glorious victory.

Quite right, humbug did start a new thread for the more uplifting news and more recent financial reports, but he posted it in the CPU section, which doesn't get used as much.

Maybe a mod could move it over to the graphics card section. ;)

Don't get me wrong, it is great to have a thread I started still being used 3 years later, but the title certainly no longer reflects AMD's current position.
 
Quite right, humbug did start a new thread for the more uplifting news and more recent financial reports, but he posted it in the CPU section, which doesn't get used as much.

Maybe a mod could move it over to the graphics card section. ;)

Don't get me wrong, it is great to have a thread I started still being used 3 years later, but the title certainly no longer reflects AMD's current position.

The thread in question is here: https://forums.overclockers.co.uk/threads/amd-on-the-road-to-recovery.18826670/

It's in the CPU section because it's primarily AMD's Ryzen CPUs that have them back on their feet. GPUs are still in the doldrums, but hopefully, with a Ryzen roadmap now laid out, they can get back to making successful GPUs.

This thread communicates just how bad AMD have had it over the past 10 years. They have very limited resources, and I think what little they had went into developing Ryzen - a smart move that seems to be paying off now. With them now making good, competitive CPUs, my hope is that they can switch some of the money they are earning to GPU R&D.

Bru, you make a good point: as important as this thread is, there is now more to this story. I'll ask the mods to move mine in here :)
 
Hopefully a higher-shader-count 7nm card.

My speculation is that, unless Nvidia is on 7nm too with the GTX 1160/GTX 2060, it's probably a TSMC 12nm-based card. TSMC 12nm is lower leakage, so a GTX 1160/GTX 2060 could target a similar TDP class to the GTX 1060 but with higher performance.

It's quite possible that, since AMD still needs to be able to compete under £300, they release a higher core count and/or higher clockspeed Polaris-based chip on GF 12nm as a stop-gap until 7nm. Hence the need for higher-speed RAM.

Of course, as this is speculation, the GDDR5 might have been ordered for some other project, but I would at least hope AMD has a stop-gap GPU until Navi. On the other hand, even if they do, it might be a 2019 release.
 
Thankfully it didn't happen.
Imagine how dire things would be if AMD's Ryzen had flopped, taking RTG with it.

Amen to that. Can you imagine the whole market relying on Intel & Nvidia only? We would still be on quad cores and a 4th generation of Maxwell.
Oops, I forgot: we are on the 3rd iteration of Maxwell already, and if rumours are correct the 11xx series will be the 4th generation...
 
My speculation is that, unless Nvidia is on 7nm too with the GTX 1160/GTX 2060, it's probably a TSMC 12nm-based card. TSMC 12nm is lower leakage, so a GTX 1160/GTX 2060 could target a similar TDP class to the GTX 1060 but with higher performance.

It's quite possible that, since AMD still needs to be able to compete under £300, they release a higher core count and/or higher clockspeed Polaris-based chip on GF 12nm as a stop-gap until 7nm. Hence the need for higher-speed RAM.

Of course, as this is speculation, the GDDR5 might have been ordered for some other project, but I would at least hope AMD has a stop-gap GPU until Navi. On the other hand, even if they do, it might be a 2019 release.

AMD are at risk of falling off the GPU map completely. Right now the RX 580 is a good £250 card - I would argue it's better than the GTX 1060 6GB - but cards at that performance level are about to become just one step up from entry-level GTX 1050 / RX 560 cards, so unless AMD have a £250 GTX 1070-level card up their sleeves they have serious problems. Vega 56 is GTX 1070 level, and it's a very expensive monstrosity in comparison. Are AMD about to make small, efficient GPUs with Pascal-like performance?
No. AMD need an architectural shift; recycling existing designs down to the lower end has reached its unsustainable limit.
 