
AMD VEGA confirmed for 2017 H1

Bet that top card's touching a million degrees :p

Makes me jealous, wouldn't mind just two :D. Couldn't bring myself to spend that much though.

I bet that top card runs really cool, they all have waterblocks on them now. :)

That is a really old pic that NVidia used for their Facebook page. :)
 
Much easier to err on the side of caution, so if it turns out good it's a pleasant surprise. With Fury X and the RX 480 I was too optimistic and felt a little let down. More so with Fury X.

That's how I roll: cynicism. If it's terrible, I can say I told ya so, but if it's good, I can be happy to be proved wrong.

And yes, I would love AMD to have a card worth upgrading to; I'm sick and tired of these rollercoaster drivers on Nvidia GPUs. Otherwise I'll feel worse about not managing to get that 980 Ti... (same situation, just with a bit more power).
 
Not sure if this has been shared already; if not, feast your eyes :)

http://www.pcworld.com/article/3155...-freesync-2-and-the-p-in-pc.html?sf50965756=1

Raja just explained in that video how games are developed around a default setting, where the artwork and graphics were made to look good at those default settings. Ultra and the like, where the slider goes all the way to the right, isn't what the artwork was developed for, which is why it doesn't tend to look much better than the setting below it. It mostly just increases texture resolutions and piles on other techniques for minimal improvements to image quality.

Now this is something I've been trying to say for ages. Some settings do little if anything for image quality, but you can see the trade-off in performance. It's like running 4K resolution with maxed-out AA: it just doesn't improve image quality that much. It's like trying to notice the difference between 120Hz and 144Hz. But for settings that take 10-15% of your FPS for hardly any difference in IQ, is it worth it?

When you're immersed you just cannot tell, unless you're stationary and looking very hard at the screen, which means you're not actually playing the game.
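To put rough numbers on that trade-off - a quick sketch, using the 10-15% figure from above and purely illustrative base frame rates:

```python
# Rough frame-time cost of a setting that eats 10-15% of your FPS.
# Base frame rates are illustrative, not measurements.
def frame_time_ms(fps):
    """Milliseconds spent on each frame at a given frame rate."""
    return 1000.0 / fps

for base_fps in (60, 120, 144):
    for loss in (0.10, 0.15):
        new_fps = base_fps * (1 - loss)
        extra_ms = frame_time_ms(new_fps) - frame_time_ms(base_fps)
        print(f"{base_fps} fps -> {new_fps:.0f} fps "
              f"(+{extra_ms:.2f} ms per frame)")

# For comparison, 120Hz vs 144Hz is only
# 1000/120 - 1000/144 = 8.33 - 6.94 = ~1.39 ms per frame.
```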
 
But for settings that take 10-15% of your FPS for hardly any difference in IQ, is it worth it?

When you're immersed you just cannot tell, unless you're stationary and looking very hard at the screen, which means you're not actually playing the game.

I have to reduce settings to get decent frame rates with my Fury running 3440x1440, and more often than not I struggle to tell the difference between the tweaked and maxed-out versions. If I took 10 different pics of each and shuffled them all up, I wouldn't be able to place them all in the right group.
 
Traditionally many games shipped with an optional high-res texture pack on the 2nd or 3rd CD, or available to request/download, often with double or quadruple the resolution - but these days there seems to be less and less of that, other than unofficial patches that replace the original artwork.

Often game assets are made at very high res and crunched down to their target mainstream platform, i.e. models are often created at, say, 2 million polygons while the shipped form has maybe 20K, and likewise the original image data can be 20-60 million pixels but ships at 60-120K pixels per asset - which looks OK at 1080p as long as you aren't too close to something, but is a long way from making the best use of higher resolutions or close camera work.
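To illustrate the scale of that reduction - a toy calculation using the ballpark figures above (the 40M/100K midpoints are my own assumption):

```python
# How much source detail gets thrown away going from authored
# assets to shipped assets, using the ballpark numbers above.
authored_polys, shipped_polys = 2_000_000, 20_000
authored_px, shipped_px = 40_000_000, 100_000  # midpoints of 20-60M / 60-120K

print(f"geometry: {authored_polys / shipped_polys:.0f}x reduction")
print(f"texture:  {authored_px / shipped_px:.0f}x reduction")

# A ~100,000-pixel texture is roughly 316x316. Filling even a quarter
# of a 1920x1080 frame (~518K pixels) with it stretches each texel
# across ~5 screen pixels -- fine at a distance, blurry up close.
print(f"quarter-frame coverage: {1920*1080/4 / shipped_px:.1f} px per texel")
```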

That isn't all there is to game quality settings though - in fact it is one of the smaller performance hits today. Things like realtime global illumination can be a massive performance hit, and a proper high-end implementation can significantly improve how a game looks without any reliance on the developer shipping higher-resolution assets for that feature.
 
That isn't all there is to game quality settings though - in fact it is one of the smaller performance hits today. Things like realtime global illumination can be a massive performance hit, and a proper high-end implementation can significantly improve how a game looks without any reliance on the developer shipping higher-resolution assets for that feature.
High-quality texture packs don't require a lot of GPU grunt, just memory, which is why newer console games often use them to improve how their games look.
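The memory side is easy to put rough numbers on - a sketch assuming uncompressed RGBA8 textures (real games use block compression, which shrinks this by 4-8x):

```python
# VRAM cost of bumping texture resolution (uncompressed RGBA8;
# block-compressed formats like BC1/BC7 would be 4-8x smaller).
def texture_mib(width, height, bytes_per_texel=4, with_mips=True):
    base = width * height * bytes_per_texel
    # A full mip chain adds roughly one third on top of the base level.
    return (base * 4 / 3 if with_mips else base) / 2**20

for size in (1024, 2048, 4096):
    print(f"{size}x{size}: {texture_mib(size, size):.0f} MiB")

# 1024x1024: ~5 MiB, 2048x2048: ~21 MiB, 4096x4096: ~85 MiB.
# Sampling cost per frame barely changes -- the screen still has the
# same number of pixels to shade -- but the VRAM footprint quadruples
# with each doubling of resolution.
```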
 
You can run into fillrate/throughput limits depending on what you are doing though - i.e. if your shaders are interacting at the per-texel level in a situation where that isn't limited/occluded in some form, you can really bog a GPU down.
 
Vega 20 will be the 7nm die-shrink of Vega 10, not a bigger / higher-end chip.

They're saying 2H 2018 because that's when GloFo are supposed to deliver the process. It's still too early to see if things are on track for this.

I don't think we'll see another die shrink for a while, maybe late 2019 or 2020 which would also coincide with when Ryzen gets a new platform.

You can run into fillrate/throughput limits depending on what you are doing though - i.e. if your shaders are interacting at the per-texel level in a situation where that isn't limited/occluded in some form, you can really bog a GPU down.
I'm presuming you mean it can't move them about fast enough. I imagine that's always a possibility; I wonder if it happens much.
 
I don't think we'll see another die shrink for a while, maybe late 2019 or 2020 which would also coincide with when Ryzen gets a new platform.


I'm presuming you mean it can't move them about fast enough. I imagine that's always a possibility; I wonder if it happens much.

Some shaders will process at a texel level in object space rather than at a pixel level in screen space. More texels means more computations.
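A toy comparison of the two (hypothetical texture sizes; it just shows how texel count scales against a fixed screen resolution):

```python
# Screen-space shading cost is fixed by resolution; object-space
# (per-texel) shading cost grows with the texture, whether or not
# those texels ever fill more screen area.
screen_pixels = 1920 * 1080

for tex in (512, 1024, 2048, 4096):
    texels = tex * tex
    print(f"{tex}x{tex} texture: {texels / screen_pixels:.2f}x "
          f"the shading work of a full 1080p screen pass")

# Doubling texture resolution quadruples per-texel work, so an
# unoccluded per-texel effect on a 4096x4096 surface costs ~8x a
# full-screen pass at 1080p.
```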
 
I don't think we'll see another die shrink for a while, maybe late 2019 or 2020 which would also coincide with when Ryzen gets a new platform.

I was talking about the 'leaked' slides from CES referred to on multiple sites, PC Perspective for instance. I quote:

Moving into the second half of 2018, the leaked slides suggest that a Vega 20 GPU will be released based on a 7nm process node with 64 CUs and paired with four stacks of HBM2 for 16 GB or 32 GB of memory with 1TB/s of bandwidth.

This does fit into the big picture (including the Ryzen 2019/2020 timing you mentioned).

What I mean is, ever since GloFo announced they'd skip 10nm and go straight to 7nm, they've always also said in the same sentence that they expect 7nm to be ready by 2H of 2018.

Now, we all know that GloFo had one major failure before and generally have issues in first batches while they refine things. However, this time we're talking about what is really an IBM-developed process (GloFo bought that department and took on the engineers), so maybe things will run more smoothly this time around.

Either way, the best-case scenario is that AMD will do a node shrink of Vega in 2H of 2018 and follow up with Ryzen+ in 2019 (they always shrink the GPU first to get the hang of the process). Otherwise, Vega 20 will slip to 2019 and Ryzen+ will inevitably slip to 2020.
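As a sanity check on the quoted Vega 20 memory spec - the ~1 TB/s figure lines up with four HBM2 stacks at the spec's 2 Gbps-per-pin maximum:

```python
# Four HBM2 stacks at spec speed: each stack has a 1024-bit interface,
# and HBM2 tops out at 2 Gbps per pin.
pins_per_stack = 1024
gbps_per_pin = 2.0          # HBM2 spec maximum
stacks = 4

per_stack_gbs = pins_per_stack * gbps_per_pin / 8   # bits -> bytes
total_gbs = per_stack_gbs * stacks
print(f"{per_stack_gbs:.0f} GB/s per stack, {total_gbs:.0f} GB/s total")

# 256 GB/s per stack x 4 stacks = 1024 GB/s, i.e. the quoted ~1 TB/s.
# 16 GB vs 32 GB just means 4-hi vs 8-hi stacks (4 GB or 8 GB each).
```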
 
Some commentary on the Vega architecture on SemiAccurate in case anyone finds it interesting.

Reading SemiAccurate's words, IMO it's not as simple as Green Team users spouting 'Vega can't compete'!

Vega is an all-new arch replacing GCN. If AMD can avoid the launch **** up that they are famous for, it could 'possibly' be better than we think performance-wise, as I doubt the demos are running optimal Vega features when they're on Fury drivers.

On another note, current GCN support is most likely about to go on AMD's back burner in favour of the new arch, so you can soon join in with your Nv brothers and compare tales. :p
 
Reading SemiAccurate's words, IMO it's not as simple as Green Team users spouting 'Vega can't compete'!

Vega is an all-new arch replacing GCN. If AMD can avoid the launch **** up that they are famous for, it could 'possibly' be better than we think performance-wise, as I doubt the demos are running optimal Vega features when they're on Fury drivers.

On another note, current GCN support is most likely about to go on AMD's back burner in favour of the new arch, so you can soon join in with your Nv brothers and compare tales. :p

Have to agree that if the new arch is quite a bit different from current GCN, they will focus mostly/only on the new one. Understandable TBH, and not something I blame either vendor for.
 
Reading SemiAccurate's words, IMO it's not as simple as Green Team users spouting 'Vega can't compete'!

Vega is an all-new arch replacing GCN. If AMD can avoid the launch **** up that they are famous for, it could 'possibly' be better than we think performance-wise, as I doubt the demos are running optimal Vega features when they're on Fury drivers.

On another note, current GCN support is most likely about to go on AMD's back burner in favour of the new arch, so you can soon join in with your Nv brothers and compare tales. :p

A third of the chip is actually taken up by the HBCC and HBMC - new hardware and features that won't necessarily translate into gaming performance, but the pro sector will get their money's worth.
My guess is AMD is just going to expand the budget target to $500 without aiming at the top performance crown, and wait for some cash from Ryzen and the pro segment to buff up R&D.
And let's be honest, top-tier GPU performance is more about $ than anything else; both companies have the expertise, it's just a question of who can cough up more cash for R&D and die size.
And I really, really hope AMD doesn't launch a reference design this time around and lets AIBs handle it, at least at launch - that's the only way they can avoid screwing up the launch. I also hope Vega drivers will be ready at launch, not a couple of months later.
 