AMD VEGA confirmed for 2017 H1

AMD really need to release some info about Vega soon. If they take too long, there will be few people left who haven't already purchased an Nvidia GPU.
I think most who were going to have probably done so already. There are still many like me who will wait for the new tech before making a choice.
 
[Speculation]Big Vega might be a very different chip, sort of like GP102 to GP104, so it may be harder to fabricate and take more time to refine and tune, and therefore come after.[/Speculation]

Or of course Big Vega will come first, but I imagine there will still be something between a launch in H1 2017 and Navi.
 
I should imagine that Small Vega will be competing against the 1080 (4096 shaders) and Big Vega against the Ti (6144 shaders).
Not much of a stretch either. They had 4096 cores years ago, so surely big Vega will have more. They have so many more transistors to play with on a more mature 14nm process now, no?
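To put some very rough numbers on that, here's a quick back-of-the-envelope in Python. Peak FP32 throughput is just shaders x clock x 2 ops per cycle (FMA); the shader counts are from the speculation above, and the Vega clocks are pure guesses for illustration, not leaked specs (only the Fury X figures are known):

[code]
# Peak FP32 throughput in TFLOPS: shaders x clock (GHz) x 2 ops/cycle (FMA).
def theoretical_tflops(shaders, clock_ghz):
    return shaders * clock_ghz * 2 / 1000

# Shader counts from the speculation above; Vega clocks are pure guesses.
# Only the Fury X (4096 shaders @ ~1.05 GHz) is a known quantity.
for name, shaders, clock in [
    ("Small Vega (guess)", 4096, 1.50),
    ("Big Vega (guess)",   6144, 1.50),
    ("Fury X (known)",     4096, 1.05),
]:
    print(f"{name}: {theoretical_tflops(shaders, clock):.1f} TFLOPS peak")
[/code]

Even at the same 4096 shaders, a guessed 1.5 GHz clock would put small Vega around 12 TFLOPS on paper versus the Fury X's ~8.6, so the core count alone isn't the whole story.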
 
That's what I'm assuming will appear between the Vega launch in a couple of months and Navi. I have a feeling it'll be a 'smaller' one in June and a bigger one around October-December.
I actually would not mind waiting until December for big Vega; I'm not in a huge rush for a new GPU. As long as it can beat the 1080 Ti handily and is priced cheaper by that date, I'm happy :)
 
This isn't based on anything factual, so don't hold it against me if I'm wrong :p. This is just how I expect it'll pan out. I think the June Vega will be a little slower than a 1080 Ti and the bigger Vega will be faster than anything we have right now from Nvidia or AMD.
 
As long as it's small Vega that is a little slower than the 1080 Ti and it's priced well, it will be fine and sell well. But it will be a bit disappointing if that's big Vega.
 
God, I would shove a blunt jackhammer in my ear and fire it up rather than listen to this doddering, tangential and irritating git ever again. People sook about AdoredTV, but at least he takes the time to back his theories with comprehensive, plain-English (Scottish) arguments based on logic and good visuals; even if he gets it wrong, it's down to misinterpretation of data rather than omission. This clown was all over the place like he was on crack, and his graphs weren't even synced. Correction: his use of other people's IP was badly synced.

I don't disagree with what he was saying, but I was just put off by the amateurish ad-lib crud pouring forth. YouTube has its hits & misses.

I've gone off RGT lately too. Too much content based on nothing concrete; the channel's been letting itself down for quite a while now.
 
I'd be surprised if they release small Vega first. They released the RX 480 first, then the 470 and the 460 later. They also released the big Ryzen chips first, with the lower-spec chips to follow, etc.

Good points. Let's hope they follow the trend and don't go small Vega first in the hope of getting extra sales from those same people then buying big Vega, like Nvidia do with their card releases.

The confusing thing is, I'm sure all along everyone said the bigger card would be out first, even Raja or someone at an AMD event... I'm almost 99% certain.

That's what I remember too. Too much fake news due to AMD's lack of news, I guess.

The ones I've seen have had Raja refusing to comment on whether the demos were big or small Vega when he was asked directly, so I'm not sure when that would have been.
 

There's much more to performance than the number of cores. In general it's easier to get better performance with fewer cores that are working more efficiently, as shown by Nvidia's recent architectures. Simply stacking up extra cores is easy, but keeping them well fed and efficient is very hard; it leads to many bottlenecks and sub-par performance relative to the theoretical peak. Radeon architectures have shown this over the last five years, with greatly diminishing returns highlighted by the Fury X.

What we hear from Vega is something like a move to an architecture more similar to Nvidia's Maxwell, with the transistor budget spent on increasing efficiency and utilization, removing bottlenecks and making those transistors count rather than sitting there generating heat.
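That utilization point can be sketched with a toy model in Python. The numbers here are invented for illustration (the utilization percentages especially are assumptions, not measurements), but they show how a narrower, better-fed GPU can beat a wider one despite a lower paper peak:

[code]
# Toy model: effective throughput = peak throughput x utilization.
def effective_tflops(shaders, clock_ghz, utilization):
    peak = shaders * clock_ghz * 2 / 1000  # peak FP32 TFLOPS (FMA = 2 ops)
    return peak * utilization

# Hypothetical wide GPU that struggles to keep its shaders fed...
wide = effective_tflops(4096, 1.05, 0.65)    # 8.6 TFLOPS peak
# ...versus a hypothetical narrower one with better scheduling.
narrow = effective_tflops(2816, 1.20, 0.90)  # 6.8 TFLOPS peak

print(f"wide:   {wide:.1f} effective TFLOPS")   # ~5.6
print(f"narrow: {narrow:.1f} effective TFLOPS") # ~6.1
[/code]

On those made-up numbers the narrower part wins despite nearly 2 TFLOPS less on paper, which is roughly the Fury X vs Maxwell story described above.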
 
Yeah, maybe. All I care about is the performance and the price in the end. I hope they can nail it so I can go FreeSync 2 in the near future.
 
Let's not forget something everyone has missed since the initial news back in December: the tech that comes with Vega which supposedly allows a dual-GPU card to operate like a single-GPU card, sharing resources between the cache and the HBM too, in similar fashion to how the CCX modules work on the Ryzen 7 CPUs.
 
It would be nice if such things were real and CrossFire worked properly like a single card, but I won't hold my breath.
 
Still doesn't work. Some type of layer that virtualised the whole thing as one GPU would be good for compute-heavy tasks, but it would penalise gaming performance. The way two cores have to work together, regardless of memory access etc., to provide performance benefits would still fall down where subsequent frames rely on data from the previous one - which is why SLI and Crossfire fall down: you can't dispatch two or more frames at the same time and get a performance benefit unless the data they are working with is fully independent, and increasingly with DX12 etc. it simply isn't.

The future for multi-GPU is if developers ever get up to speed with explicit multi-adapter and farm out parts of the same scene that they know can be processed independently, through knowledge of how their game/game engine works.
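Here's a minimal Python sketch of that dependency argument, with function calls standing in for GPU work (the workloads and names are hypothetical): alternate-frame rendering is forced serial by frame-to-frame dependencies, while splitting independent parts of one frame can genuinely run in parallel:

[code]
from concurrent.futures import ThreadPoolExecutor

def render_frame(frame_id, prev_frame):
    # Stand-in for GPU work that reads last frame's output
    # (temporal AA, motion blur, etc.) - it cannot start early.
    return f"frame{frame_id}(uses {prev_frame})"

def render_half(frame_id, half):
    # Two halves of the SAME frame are independent,
    # so two GPUs can genuinely overlap on them.
    return f"frame{frame_id}-{half}"

# AFR with a frame-to-frame dependency: the 'second GPU' just waits.
prev = None
for n in range(3):
    prev = render_frame(n, prev)  # forced to run serially

# Explicit multi-adapter: farm out independent parts of one frame.
with ThreadPoolExecutor(max_workers=2) as gpus:
    top, bottom = gpus.map(lambda h: render_half(0, h), ["top", "bottom"])
print(top, bottom)
[/code]

The split-frame path only works because the developer knows the two halves don't depend on each other, which is exactly the engine knowledge explicit multi-adapter requires.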
 