
AMD VEGA confirmed for 2017 H1

This is what I'm waiting for: either the lowest Vega card, or the release pushing the price of the 470/480 down to a reasonable level.

Sis's kid got a 480 4GB for €195.
Can't say that's expensive.
It's a bit better than reasonable.

Vega is early, only 4 weeks old, with no driver ready, so another 20% or so of performance is expected. It looks good for AMD: with the best drivers in graphics and RyZen incoming, it's going to be an AmaZen year for customers :D
 
No you won't... that price point of card has been the worst for improvement over the years. A good many years back I had an HD 6770 that cost me roughly £80... not only will you not find anything reasonable from either AMD or Nvidia, but the RX 460 isn't even twice as fast as that 6770 (though it is close). It barely keeps up with a 370, and that was a £120 card to begin with. Cards at the low end seem to progress just like Intel CPUs do currently: a measly improvement, for what it's worth.

At least at higher prices the cards improve. Even if you take the 1070 to be a replacement for the 980, it's still a substantial increase in performance over the 980. You'll be lucky if the 1150 Ti or RX 560 can perform like a 280, though that's what you should get: a 280 compared to a 460 is like the 1070 compared to a 980. It'll be a few years before cards at that price point offer reasonable performance.

'Best bang for buck' actually lies at a higher price point, contrary to what you might think. Right now that's the 470, but of course you've missed out on the well-priced models from a few weeks back; at current prices it's not worth buying. AMD need Vega to do what the 470 did, so it'll be interesting to see what the second-best Vega card does, since the top dog is unlikely to hit the magic 4K60 in most titles (except perhaps in Vulkan/AMD-optimised ones).

Urm twas a joke xD

Also, the GTX 1050, RX 470, RX 480 and even the GTX 1060 6GB are very well priced cards. The high end is where prices have gone insane: over £400 for the 1070 and an average of £600 for a 1080, which isn't even the full Pascal die. Then you have the Titan XP priced to the moon. If you think that's where the bang for buck is, i.e. those higher-end cards, you are wrong. Very, very wrong xD
 
I think that AMD and the associated press are making all the right noises about Zen/Vega. A couple of articles have pointed out that high-end gaming hardware is one area where the PC is seeing growth, so I'm hopeful that AMD has products to rival both Intel and nVidia in their respective segments.

I've been running a GTX 780 for a number of years now and, whilst it's still performing admirably, it struggles with some of the newer titles (e.g. The Division), so I'm looking to change within the next 6-12 months.

While the 1080 offers about double the performance of my current card, I'm not prepared to pay £650 for it. What I'm very interested to see is what performance Vega can offer, particularly in light of DX12/Vulkan benchmarks on existing AMD products.

It might be wishful thinking, but if it turns in performance similar to that of the 1080 at a price point of about £450-£500, then I would be very tempted.
 
If you think that's where the bang for buck is, i.e. those higher-end cards, you are wrong. Very, very wrong xD

Lol, sorry buddy, you've got me mixed up. I meant cards like the RX 470 and RX 480 were the bang for buck. The day I say the GTX 1070 is bang for buck? You'd better start checking to see who's hacked my forum account lol. Of course those high-end cards are terrible right now. I meant higher than the £80-150 price bracket (anything below that doesn't exist IMO). The 1070 was an example to show that even the overpriced high-end cards at least provide a significant increase in performance over the previous gen.

It depends entirely on the settings used. For instance, the Deathstar map with a Titan XP, everything maxed but with no AA, gets in the region of 120-130fps at 4K.

Such a map is no indication of how Vega will perform. Even fully maxed, I'd expect a 1080 to stay over 60fps on that map.

That was the point I was making a few pages back, when I brought up the RX 480 Hitman demo running at a solid 60 at 1440p. You can't get that at max settings on an RX 480 right now (it will drop below 60). Though here I was talking about the Doom 4K demo they showed running on Vega.

Which is why I think Vega will go the way of the Fury again. Nvidia will ruin things for AMD by releasing the 1080 Ti shortly before, and then Vega won't be able to keep up. I sure hope AMD prove me wrong... but they didn't with the RX 480.
 
Been saying this stuff for a long time, with a whole lot of resistance.

That said, he does gloss over some of the examples (apart from Doom) where AMD cards do genuinely gain from using the new low-level APIs.

But he's right that these low-level APIs are not the magic people were hoping they'd be. And I'll go beyond that and say that DX12/Vulkan will NOT be any kind of standard anytime soon, because there are just too many benefits to using DX11 as a developer. Not to mention that DX11.3 also includes some pretty significant multi-core/multi-threaded capabilities, which enable titles like The Witcher 3 and GTA V to run as well as they do, with excellent core scaling.

I'll go so far as to say that DX11, or at least some non-low-level API, will *always* exist and remain quite notable in the development scene. Even if plenty of major engines are built with DX12 in mind, there are still going to be tons of developers who simply aren't experienced enough, or don't have the resources, to get into that nitty-gritty, and who will want the convenience of the GPU driver doing its thing in order to ship a workable, optimised product.
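As a rough illustration of that trade-off (a conceptual sketch only; the class and function names below are hypothetical stand-ins, not real Direct3D calls): under a DX12-style model the application records command lists on worker threads and then submits them in order itself, which is exactly the bookkeeping a DX11 driver used to handle for you.

```python
# Conceptual sketch of DX12-style parallel command recording.
# All names here (CommandList, record_draws, submit_to_queue) are
# hypothetical illustrations, not real Direct3D API calls.
from concurrent.futures import ThreadPoolExecutor

class CommandList:
    """Stands in for a recorded bundle of GPU commands."""
    def __init__(self, chunk_id, commands):
        self.chunk_id = chunk_id
        self.commands = commands

def record_draws(chunk_id, draw_calls):
    # DX12-style: each worker thread records its own command list;
    # the expensive validation work happens here, in parallel.
    return CommandList(chunk_id, [f"draw {d}" for d in draw_calls])

def submit_to_queue(cl):
    print(f"submitting chunk {cl.chunk_id}: {len(cl.commands)} commands")

def render_frame(scene_chunks):
    # Record command lists concurrently...
    with ThreadPoolExecutor() as pool:
        lists = list(pool.map(
            lambda args: record_draws(*args),
            enumerate(scene_chunks),
        ))
    # ...then the application (not the driver) submits them in order.
    # Getting this ordering, synchronisation and memory management
    # right is the work a DX11 driver would otherwise do for you.
    for cl in sorted(lists, key=lambda c: c.chunk_id):
        submit_to_queue(cl)

if __name__ == "__main__":
    render_frame([["mesh_a", "mesh_b"], ["mesh_c"], ["mesh_d", "mesh_e"]])
```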
 

There are also plenty of games where DX12 brings a performance uplift; often the same game can be better in DX12 in some parts and worse in others, even if sometimes for AMD only.

What this guy is doing is cherry-picking to try and put a downer on the new APIs; he's an ass.
 
Haha. No, this dude is being super realistic. I specifically mentioned that there ARE examples of DX12 being better for AMD users (not for all users), so don't act like I've ignored that.

But he's definitely pointing out the reality that this DX12/Vulkan movement isn't the saviour many thought it would be. This IS a reality. And he's right that things are unlikely to change until we get games or game engines built specifically for low-level APIs, and even then I guarantee it won't necessarily be game-changing, given how good DX11.3 is and how much less work it is for devs in that situation.
 
The reality is that the majority of PC gamers don't have high-spec machines and probably have older specs, like me. If you run a heavily overclocked latest-generation i7, you are removing a lot of the bottlenecks. My experience of DX12 is pretty good so far: I just got it for The Division yesterday and my game is much smoother compared to DX11, with pretty much no fps drops. DX12 effectively saves my old i7. DX12 reviews should run older systems as well as newer ones, to give the bigger picture to all PC gamers and not just those with high-spec machines.
 

+1

Every single DX12 title I have played so far has been superb and brought a huge improvement in performance. For me it is the saving grace of PC gaming atm, especially after having so many broken and poorly optimised pieces of **** this year.

Rise of the Tomb Raider's first DX12 patch was utter dog ****, but then they added async support and there was a drastic improvement (although that game is still horribly optimised on my PC).

Even when fps are similar to DX11, the game still feels smoother.

My Division DX12 results:

I really am super impressed with this patch and with DX12's performance. I didn't see my fps drop below 50 at all tonight; it was mostly between 55-60fps the entire time, whereas with DX11 I would regularly be dropping into the 40s and hardly ever holding a constant 60 (unless inside), and that was with considerably lower settings too.

Epic gains for me :cool: :D

DX11: [benchmark screenshot]

DX12: [benchmark screenshot]

They really need to show the min fps as well. During the DX11 run, fps was dropping into the 40s; in DX12 I don't think I saw it drop below 60, and even the max fps was better, which is unusual for DX12.

Had a quick run around and fps still hasn't dropped below 50 with higher settings; the game feels stupidly smooth now.

[further benchmark screenshots]
 
I specifically mentioned that there ARE examples of DX12 being better for AMD users (not for all users), so don't act like I've ignored that.

You did; he didn't. That's my point: he deliberately ignored anything where DX12 is a benefit.

The reality is that the majority of PC gamers don't have high-spec machines and probably have older specs, like me.

This too.
 
The reality is that the majority of PC gamers don't have high-spec machines and probably have older specs, like me.
The *reality* is that it doesn't take a high-end machine to miss out on the benefits of DX12 either.

It's most useful in unbalanced systems (a weak CPU especially), but that doesn't make it the saviour of the generally poor rig. If you have an old rig with both an old CPU and an old GPU, it's not going to make much difference, because your old GPU is going to be the bottleneck all the same.

I mean, this kind of thing goes to show even more how people, even many supposed 'enthusiasts', just DON'T UNDERSTAND what DX12 actually is, nor what it entails and why it's not going to become some universally adopted standard.
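A minimal sketch of that bottleneck argument, with made-up frame timings (all numbers are assumptions for illustration): if frame time is roughly the maximum of CPU time and GPU time per frame, then the CPU-overhead reduction a low-level API brings only shows up when the CPU is the limiting stage.

```python
# Toy model of the CPU-vs-GPU bottleneck argument.
# All timings are invented for illustration, in milliseconds per frame.

def frame_time(cpu_ms, gpu_ms):
    # A frame can't finish faster than its slowest stage.
    return max(cpu_ms, gpu_ms)

def fps(cpu_ms, gpu_ms):
    return 1000.0 / frame_time(cpu_ms, gpu_ms)

# Old CPU + decent GPU: CPU-bound, so cutting API overhead helps a lot.
print(f"old i7, DX11-ish: {fps(cpu_ms=25.0, gpu_ms=14.0):.0f} fps")  # 40
print(f"old i7, DX12-ish: {fps(cpu_ms=15.0, gpu_ms=14.0):.0f} fps")  # 67

# Old CPU + old GPU: GPU-bound, so the same CPU saving barely matters.
print(f"old rig, DX11-ish: {fps(cpu_ms=25.0, gpu_ms=30.0):.0f} fps")  # 33
print(f"old rig, DX12-ish: {fps(cpu_ms=15.0, gpu_ms=30.0):.0f} fps")  # 33
```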
 
No dude. lol

We don't know this AT ALL yet.

What's wrong with you guys? What is it about AMD products that makes your brains fall out, so that you can't step back and have some cautious optimism, like I know damn well you'd be doing if this were an Nvidia product?

None of the demonstrations have *proven* anything whatsoever. Not remotely close. I'll say it again: even Bulldozer matched or bettered Sandy Bridge in the odd app/benchmark when it first came out. We all saw how that turned out.

Stop acting like people who have never dealt with this stuff before. This place is supposed to be a forum for enthusiasts, yet largely all I see is people acting like ignorant fools.

When Bulldozer was compared, it was an 8-core against a 4-core with HT. In multi-threaded apps Bulldozer did actually offer good performance; in single-threaded apps it lacked... but there is a difference here. It was 8 cores in its multithreaded state versus what is a native quad core with and without HT (it was compared to the 2500K and 2600K). In a massively multithreaded application an 8-core has a native advantage over a quad core; in a single-threaded application this changes. AMD was throwing double the cores at the problem and was much more competitive in applications that could use 8 threads.

This demonstration was a native 8 cores with 16 threads from AMD versus a native 8 cores with 16 threads from Intel. There is no inherent multithreaded advantage; the AMD chip doesn't have double the number of cores, so there is no reason to believe it will suddenly drop off massively with fewer cores.

In the Bulldozer comparisons AMD was throwing 2 cores against 1 for Intel. In the Zen comparison AMD is throwing 1 core against 1 core for Intel. That anyone is even pretending this can be similar to the Bulldozer benchmarks is beyond daft, because it's such a completely different situation.

If Zen were dramatically slower in single-threaded applications, it would absolutely show up in a benchmark pitting one 8-core chip against another 8-core chip.
 
Bulldozer was not actually 8-core. Are people still trying to peddle this misinformation? Holy crap.

But yes, you're right that it actually did well in *certain* multi-threaded apps. That was its strength, in an age when multi-threading wasn't all that prominent. And by the time it was, Intel had progressed to a point where it left AMD in the dust.
 

It was an 8-core regardless of whether you believe it to be so. The real question is why people are still peddling the claim that it isn't an 8-core, when legal cases brought against AMD by people who believed it wasn't one were thrown out, because it was accepted by practically everyone in the industry to be an 8-core chip.

If it wasn't an 8-core, it could not have had much lower single-threaded performance yet still managed to compete with 4-core Intel chips once more than 6 threads were in use.

Give a single viable explanation for an FX-8350 being uncompetitive with a quad-core Intel chip (with or without HT) at one or four threads, yet becoming competitive at 6-8 threads, if it didn't have more actual cores.

The industry by and large deems an integer block a 'core'; Bulldozer had 2 per module, making 8 integer cores in an FX-8350. The entire industry deems it an 8-core chip, the legal system deemed it an 8-core chip, and Intel never once, anywhere, stated the FX-8350 wasn't a real 8-core chip... but you're right, it's totally not an 8-core chip.

You also managed to ignore the actual point of my post, which is that you're comparing benchmarks of an 8-core versus a 4-core chip, where Bulldozer was ONLY strong in multithreading and fell behind in single threading, with the current Zen benchmark situation, in which an 8-core chip is competing against another 8-core chip. If Zen had noticeably weaker single-threaded performance (>10% difference), it could NOT compete with an 8-core Intel chip.

Let's say Bulldozer had 60-70% of the single-thread performance of a 2500K core, and that HT boosted the 2600K's performance by 5-35%. In single thread it was left behind; at 4 threads it was left behind; but at 8 threads you have 0.6 × 8 = 480% of the performance of a single Sandy Bridge core, while the 2500K had 4 × 1.0 = 400%. The 2600K had 400% × 1.05 to 400% × 1.35 = 420% to 540%. That is what we saw in those early Bulldozer benchmarks: in 8-threaded situations it was usually ahead of the 2500K, and against the 2600K it was sometimes ahead, sometimes level, sometimes behind by a relatively large margin.

For two 8-core, 16-thread chips running 16 threads to perform the same, any significant single-thread deficit would have to carry over. If you assume a Zen core has only 70% of the performance of a 6900K core, then the chip would have only around 70% of the performance when both chips were running 16 threads. The maths doesn't work: for two chips with an equal number of cores to have roughly equal performance at 16 threads, one can't be far behind in single thread. Yes, I've simplified and ignored scaling efficiency from 1 to 16 threads; in reality you don't gain 100% per extra thread, but neither AMD's nor Intel's chips will, and the same goes for HT. AMD could be 10% down in single thread and make some of it up via thread scaling, but that still puts it in the same ballpark performance-wise rather than 40% down, which would still make it a great chip.
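That back-of-the-envelope maths can be written out as a toy model (same assumptions as above: perfect core scaling, a flat HT/SMT uplift, and per-core numbers that are illustrative guesses, not measurements):

```python
# Toy throughput model for the core-scaling argument above.
# Per-core performance is normalised so one Sandy Bridge core = 1.0;
# the specific values are assumptions for illustration only.

def throughput(cores, per_core, ht_uplift=0.0):
    # Aggregate throughput = cores x per-core perf, plus a flat
    # percentage uplift when Hyper-Threading/SMT is in use.
    return cores * per_core * (1.0 + ht_uplift)

# Bulldozer era: FX-8350 (8 weaker cores) vs 2500K/2600K (4 cores).
fx8350    = throughput(cores=8, per_core=0.6)        # 4.8 -> "480%"
i2500k    = throughput(cores=4, per_core=1.0)        # 4.0 -> "400%"
i2600k_lo = throughput(4, 1.0, ht_uplift=0.05)       # 4.2 -> "420%"
i2600k_hi = throughput(4, 1.0, ht_uplift=0.35)       # 5.4 -> "540%"
print(fx8350, i2500k, i2600k_lo, i2600k_hi)

# Zen demo: 8 cores/16 threads vs 8 cores/16 threads. With equal core
# counts, a 30% single-thread deficit cannot be hidden by thread count:
zen_if_weak = throughput(cores=8, per_core=0.7, ht_uplift=0.2)
i6900k      = throughput(cores=8, per_core=1.0, ht_uplift=0.2)
print(zen_if_weak / i6900k)  # ~0.7 -- the deficit carries straight over
```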
 