AMD VEGA confirmed for 2017 H1

Agreed! So what is he saying then, that Nvidia will never hire anyone from AMD? That is false information from him.

@Doom112
Say a guy or girl passes all the qualifications to work for either AMD or Nvidia. They have a choice, so what makes Nvidia different? Is there an Nvidia school they need to pass? Alien school? Something else? I'm very interested in what makes you think some people can and can't work for Nvidia!! I can picture you now as the head boss at Nvidia with a moustache doing Hitler shouts, lool
That one is western - no
That one is Asian - yes

Funny


Nvidia would certainly hire top people from AMD, but I do think there is a different culture between AMD and Nvidia when it comes to staffing, just like Google sets different requirements to other tech companies. Nvidia really pride themselves on hiring the best of the best, paying them above market rates, giving them a lot of flexibility etc. AMD is more like conventional tech companies: plenty of great engineers and great working conditions, but AMD isn't fostering the same environment. You see the difference on Nvidia's website and at conferences; Nvidia is putting out a lot more scientific research. In the future, if AMD is more financially stable, they might go the same way.
 
No, I am suggesting that they will have a target % of markup they need to recover per unit, based on total expenditure including R&D, marketing, support etc. It isn't just the cost of manufacture. The total cost per unit, covering the rest of the overheads on that product line, needs to be taken into account.

I worked for a company that produced an item costing around £18 in raw materials, and we sold it for £110 a unit. However, due to R&D, marketing, testing, required approvals, distribution, support and expected damages/losses, the profit left was around £30 a unit, and that was with us distributing direct. Then you sell them via a third-party distributor, who wants to take a cut, but you can't price them higher than what you sell direct, and by then we were left with £15 profit per unit.

Unit sales increased because they held stock and had a larger distribution network in the UK than ourselves, but it doesn't leave a lot. I think people underestimate how much everything adds up, what the costs are and how they break down on a per-unit basis.

Now take into consideration that it currently costs, say, £80 per unit to do all that, leaving them £120 in profit per unit. You are on about taking £70 off that; £50 per unit then becomes quite a small number.
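To put rough numbers on that reasoning, here's a quick sketch; every figure in it is an illustrative assumption (the £200 card price is only implied by the £80 cost / £120 profit split above, and the £62 overhead is just £110 − £18 − £30), not real pricing data.

```python
# Minimal sketch of the per-unit margin arithmetic above.
# All numbers are illustrative assumptions taken from the post, not real data.

def per_unit_profit(price, material_cost, overhead_per_unit, distributor_cut=0.0):
    """Profit left per unit after materials, allocated overheads and any channel cut."""
    return price * (1 - distributor_cut) - material_cost - overhead_per_unit

# The £18 widget sold at £110: ~£62 of allocated overhead eats most of the margin.
direct = per_unit_profit(price=110, material_cost=18, overhead_per_unit=62)
via_distributor = per_unit_profit(price=110, material_cost=18, overhead_per_unit=62,
                                  distributor_cut=15 / 110)  # distributor takes ~£15
print(direct, round(via_distributor))  # ~30 and ~15, as in the post

# The hypothetical graphics card: £200 price, £80 total per-unit cost -> £120 profit.
# Knock £70 off the price and only £50 per unit is left.
card_now = per_unit_profit(price=200, material_cost=0, overhead_per_unit=80)
card_cut = per_unit_profit(price=130, material_cost=0, overhead_per_unit=80)
print(card_now, card_cut)  # 120 and 50
```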

Of course the profit margins on the lower tier units are less and on the higher tier units are more, since a lot will be the same process with the same costs, just sold at a premium for having a better binned chip. Still, profit is made through vast quantities of units sold, not small quantities with high margins.

If you want high margin profit, then clothing, specifically items such as high end suits, can have huge markups. Where my mother used to work they often had a markup of around 500%.

Side note: other notable markup items;
Popcorn in a theatre - the markup can be 1000% or more
Prescription drugs - anywhere between 500-3000%
The cost of a text message - equates to a 6000% markup on pay as you go

The tech market though is not a high markup place.
The tech market isn't high markup for retailers and distributors. High end hardware is another matter; they will have already made back the R&D costs. This is exactly how Apple operates. nVidia have been trying to become the Apple of graphics cards, in that people buy for the brand alone.

Target margins change over time, due to the volume of sales and the decreases in manufacturing cost from better yields with each stepping.

To give you some perspective, the 1070 costs just as much as the 1080 to manufacture, especially now that yields will be up and the manufacturing process will be a lot more mature.

Sure, it's a cut down chip, but that doesn't mean anything in this context. It's the same chip; they come from the same wafers. The costs are exactly the same. Near enough everything else between the 1080 and 1070 is the same, bar the chip's configuration.

The cost of the PCB and other components is pretty cheap, the only notable thing would be the VRAM. But they'll be buying in absolutely massive quantities.

So they can absolutely afford to drop the price without making a loss. It's crazy to suggest otherwise really. High margins are exactly why nVidia has been thriving. Due to a lack of competition from AMD they've been able to get away with charging more for less.

For example, the GTX 580 had a very similar bill of materials compared to the original Titan. Before you mention R&D as well, nVidia had already recouped their costs before they brought the original Titan to market as they had been selling the chips to another company to use in their supercomputer. The only real difference in costs between the 580 and Titan was the VRAM. The GPU chip itself was very similar in cost to manufacture, and the PCB of the Titan would have been about the same.
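If it helps, the "same wafer, same cost" point boils down to very simple arithmetic; the wafer price, die count and yield below are made-up placeholders rather than real foundry figures.

```python
# Rough sketch: both GP104 variants come off the same wafer, so per-die cost is
# just wafer cost divided by usable dies, regardless of how each die is binned.
# The wafer cost, die count and yield are hypothetical placeholders.

WAFER_COST = 6000      # hypothetical cost of one 16nm wafer, in dollars
DIES_PER_WAFER = 200   # hypothetical GP104 candidates per wafer
USABLE_FRACTION = 0.8  # hypothetical share of dies good enough for a 1070 OR a 1080

cost_per_die = WAFER_COST / (DIES_PER_WAFER * USABLE_FRACTION)
print(f"Cost per usable die: ${cost_per_die:.2f}")  # identical for a 1070 and a 1080

# Binning only decides which SKU a die becomes; it doesn't change what the die
# cost to make. The 1070/1080 price gap is margin, not manufacturing cost.
```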
 
So just because that link says Vega was developed in China, that means AMD laid off all the staff across the water? Looolll

You do understand companies have multiple teams across the world, don't you!! That's like saying that just because Rockstar San Diego helped make Red Dead Redemption, Rockstar laid off its staff over at Rockstar North, LOL

Doom, you really do bring a joy to this forum; it's funny to laugh at.

And life's better when you have plenty to smile about so keep it up..

There are only 2 companies in this field. You cannot compare Nvidia employees with AMD.

Gold.. Or a new signature.


500 pages soon... Think of the space on the OCUK servers just filled with pure guff

Highly volatile guff at that. Make sure nobody lights a match.
 
Thread is waaaaay off topic now - can we try and point it back in the right direction (and without any more personal insults)

Thanks
 
I know some people asked about the Doom video I linked to a few pages back; I've been pretty busy and more interested in Volta, but here is a quick response. I have seen threads on other forums talk about some rendering issues that affected both AMD and Nvidia cards in Doom, and I will try to dig them out later. Regardless of that, my point still stands: Doom has some rendering bugs that the developers haven't fixed, some of these are likely only affecting Nvidia GPUs, but Nvidia have been unable to make a driver fix since the issue is deeper within the game code.

Taking another step back, my only point is that when there are rendering artifacts in a game, by far the most common cause is the developer introducing a bug or not properly following the API. A lot less common is an actual driver bug, and these often occur because the drivers are incredibly complex in order to overcome many of the common bugs that developers create. Nvidia's drivers from Maxwell onwards have become incredibly complex in order to increase performance for DX11 with multi-threaded command lists etc. Completely avoiding rendering some part of the scene, in today's world, as some kind of performance cheat would be completely ridiculous. Even when AMD and Nvidia were at their worst, they were reducing precision or making approximations that gave visually similar but not identical results.
 
In terms of tech, and I'm talking pure hardware, is AMD actually ahead of Nvidia?

What I mean is, Vega for example is going to bring things like HBM2 to the table 6 months or more before Nvidia do with Volta.
 
In terms of tech, and I'm talking pure hardware, is AMD actually ahead of Nvidia?

What I mean is, Vega for example is going to bring things like HBM2 to the table 6 months or more before Nvidia do with Volta.

NVIDIA already have HBM2 cards in their Pascal Tesla P100 and Quadro GP100.
It's too much hassle for them on gaming cards, so they just use GDDR5X.

AMD will be the first to bring it to consumer gaming cards, but that's most likely only because they don't have the R&D budget for two separate designs.

Vega Gaming and Vega Instinct will all have the same cores, whereas NVIDIA strip out FP64 and 2:1 FP16 from their GP102, GP104 and GP106 chips usually used in GeForce/Titan cards and lower-end Quadros.
Tech-wise, Volta looks a league ahead of AMD for compute and deep learning, simply because of its stronger FP64 and these new Tensor Cores that give massive performance in deep learning and A.I.

All that will be slashed from consumer Volta though, so they'll be able to ramp up the clock speeds more and still have a massive number of CUDA cores; and most likely GDDR6.

Vega looked extremely promising for deep learning, but unfortunately it seems they were held back by supply issues for high speed HBM2. SK Hynix was supposed to have 2.0Gbps (1000MHz) HBM2 ready Q3 2016, but it's not even in their catalogue for 2017 anymore.

Only 1.6Gbps HBM2 is, and if AMD uses that it means Vega can't reach its reported 512GB/s bandwidth.

NVIDIA on the other hand has been using Samsung 1.4Gbps HBM2, which is fine for their Teslas, as they've been putting 4 modules on there, giving massive amounts of bandwidth.

AMD is only using 2 modules, so it really needs the 2.0Gbps ones to be able to compete well; and sadly it seems SK Hynix is letting them down and delaying Vega.
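The bandwidth arithmetic behind those figures is straightforward, since each HBM2 stack has a 1024-bit interface; a quick sketch using the stack counts and pin rates quoted above:

```python
# Per-stack HBM2 bandwidth: 1024-bit interface * pin rate (Gb/s) / 8 bits per byte.
def hbm2_bandwidth_gbs(stacks, gbps_per_pin):
    return stacks * 1024 * gbps_per_pin / 8

print(hbm2_bandwidth_gbs(2, 2.0))  # 512.0 GB/s -> Vega's reported figure needs 2.0Gbps parts
print(hbm2_bandwidth_gbs(2, 1.6))  # 409.6 GB/s -> what two 1.6Gbps Hynix stacks would give
print(hbm2_bandwidth_gbs(4, 1.4))  # 716.8 GB/s -> roughly Tesla P100 with four Samsung stacks
```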
 
Vega is starting to lose its appeal. Let's hope AMD can do with Vega what they did with Ryzen. Otherwise... I will just get an Nvidia card when the 1000 series (hopefully) comes down in price.

But where the 'BLEEP' are these FreeSync 2 monitors?
 
Vega is starting to lose its appeal. Let's hope AMD can do with Vega what they did with Ryzen. Otherwise... I will just get an Nvidia card when the 1000 series (hopefully) comes down in price.

But where the 'BLEEP' are these FreeSync 2 monitors?

Makes little sense to get a discounted 1000 series card if Vega is out, especially if Vega still beats it and is the reason the price has been forced down.

Just gotta buy what you need at the time. FreeSync 2 will probably be announced at Computex when AMD have their event.
 
Makes little sense to get a discounted 1000 series card if Vega is out, especially if Vega still beats it and is the reason the price has been forced down.

Just gotta buy what you need at the time. FreeSync 2 will probably be announced at Computex when AMD have their event.

My issue is the lock-in forced by the monitor decision. I may be better off with a G-Sync monitor, as that will give me access to Nvidia cards moving forwards; otherwise I am making a commitment for the lifetime of the monitor (5-10 years) to stay with AMD.
 
In terms of tech, and I'm talking pure hardware, is AMD actually ahead of Nvidia?

What I mean is, Vega for example is going to bring things like HBM2 to the table 6 months or more before Nvidia do with Volta.
Nvidia won't be bringing HBM to consumers with Volta; absolutely no point when GDDR6 will provide more bandwidth.
 
Shame HBM hasn't taken off and replaced GDDR completely, I was looking forward to cards all being the length of the R9 Nano. :(
 
Shame HBM hasn't taken off and replaced GDDR completely, I was looking forward to cards all being the length of the R9 Nano. :(

It might longer term; there's still a reason Nvidia is using it for Tesla parts. The problem is the technology is still not quite there for consumer parts given the cost, and the rate of progress still seems slower than GDDR's.

Even if Vega does get the 2Gbps chips, it will end up with only marginally more bandwidth than the 1080 Ti for a lot more cost, and the same bandwidth as the Fury X. So the best case scenario is not that exciting at all. Meanwhile, early next year the Volta Titan will release with 768GB/s of GDDR6, 50% more than best case Vega. And every time there is a fab node shrink, GDDR gets the same performance-per-watt increase, which will more or less keep pace with GPUs in the short term.
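For reference, that 768GB/s figure implies something like a 384-bit bus of 16Gbps GDDR6 (my assumption for the speculated Volta Titan, not a confirmed spec); the same bus-width arithmetic covers the 1080 Ti:

```python
# Conventional GDDR bus: bandwidth (GB/s) = bus width (bits) * data rate (Gb/s per pin) / 8.
def gddr_bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8

print(gddr_bandwidth_gbs(384, 16))  # 768.0 GB/s -- the speculated GDDR6 Volta Titan above
print(gddr_bandwidth_gbs(352, 11))  # 484.0 GB/s -- GTX 1080 Ti (352-bit, 11Gbps GDDR5X)
```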

Longer term GDDR will die and HBM gets more attractive, although the life of HBM in its current form also looks short. Nvidia had a nice presentation and it really showed HBM can only be a stop-gap. But whatever replaces it will share some similarities.
 
I'm not sure. I know there are already specifications for HBM3 with lower power.

Here is the Nvidia slide: https://www.extremetech.com/extreme...-huge-size-advantage-of-hbm-over-gddr5-memory

You see that HBM power usage still increases massively. And that slide is out of date because GDDR6 is on par with HBM2 for power and bandwidth.

Some variant of stacked memory may replace the HBM standard, or potentially actual 3D memory.
 
Nvidia would certainly hire top people from AMD, but I do think there is a different culture between AMD and Nvidia when it comes to staffing, just like Google sets different requirements to other tech companies. Nvidia really pride themselves on hiring the best of the best, paying them above market rates, giving them a lot of flexibility etc. AMD is more like conventional tech companies: plenty of great engineers and great working conditions, but AMD isn't fostering the same environment. You see the difference on Nvidia's website and at conferences; Nvidia is putting out a lot more scientific research. In the future, if AMD is more financially stable, they might go the same way.
I hope nVidia, and AMD for that matter, don't take a leaf out of Google's book and have staff living in camper vans in the car park because they've priced employees out of the local housing market.
 