
AMD VEGA confirmed for 2017 H1

Status
Not open for further replies.
Good thing you put the sarcasm tag at the end; everything you said was totally believable up to that point ...
Not sure if serious :p

Trust me, without it there would have been at least one person who would not have got it. I mean, Doom and a few others actually think this way, so... :p
 
And the cards NVidia use it on will be slower than a RX480 running DX12 in five years time.....

In the real world outside of this forum, the longevity of the card does actually play a role in cards people buy. Not everyone upgrades to the latest just because it is there.
 
In the real world outside of this forum, the longevity of the card does actually play a role in cards people buy. Not everyone upgrades to the latest just because it is there.

Rroff and I were not being serious in our last posts. :)
 
Yes, some people will find it funny because they are in denial, just like they were over Fury X and the Polaris clock speeds.
Since you are the one who started this argument by labelling it stupid, when you actually have no clue what you are talking about:
I do agree 4GB would be stupid in the traditional sense if HBCC doesn't change anything, but I also disagreed: if HBCC actually changes memory management, it wouldn't be stupid at all. You don't like it when someone brings up both possibilities; out of ignorance you only accept the negative outcome, while I base my positive outcome on actual statements from Vega's chief architect and the head of the Radeon group.
I'm not saying it will happen the way Koduri says it will, but at least my argument is factual rather than emotional.
Start the video from 4:10 (sorry, I don't seem to be able to make it start there).
 
Let's just wait and see how it works.
Traditionally 4GB is not enough, but if the HBCC works as intended then it's a different matter.
 
And the cards NVidia use it on will be slower than a RX480 running DX12 in five years time.....

That's a given. Look at the 390X with its super-wide bus; we already see it ahead of the 1070 at times* :D


Sapphire Vega 8GB HBM Toxic OC with lots of yellow bits pls.

I'd be happy with a full Vega chip with the cooler from my Fury Tri-X on it.

I've grown to like the black and mustard design. I thought the blue and yellow coolers used on the 200-series cards at release looked plastic, cheap and tacky, so I went with the Twin Frozr IV's red and black design instead, but boy did I get that wrong :( It just goes to show you shouldn't judge a book by its cover, or a graphics card by its shroud. :)


* I'm kidding; we all know all it took was reviewers turning one setting off for the 1070 to regain its lead in RE7 :o
 
That's the same sort of claim we heard for HBM1, and HBM2 will be no different regardless of the extra features they add this time around. 4GB is 4GB, and when a game goes over 4GB the experience will suffer, whether it's on a Fury or a Vega card.
I'm sure the 4GB model will do a great job at 1080p with some settings tweaks to keep memory usage down, but I agree with what others have been saying: releasing a card with less memory than lower-tier cards is not a smart move. When they did it with the Fury range, Nvidia immediately took advantage and made performance tank on AMD's flagship range, even compared to the Grenada cards, which outperformed the Fijis quite a few times over the last year. They could do that because HBM has no magic sauce in it, and anyone hoping that this time will be different is a glutton for punishment.

It's not the same at all; AMD have actually implemented features to deal with memory management this time around. Fiji with HBM1 just used its VRAM as normal, whereas Vega is supposed to have dynamic VRAM management to reduce the amount of data stored in the VRAM itself. That's why they now refer to the HBM2 memory as a high-bandwidth cache rather than VRAM: it's not used in the traditional sense where all data is simply lumped into the memory.
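AMD hasn't published how the HBCC decides what stays resident, but a rough mental model is an ordinary page cache: hot pages live in HBM2, cold pages get evicted to system RAM and paged back in on demand. A toy LRU sketch of that idea (page counts and access pattern are made up for illustration, not measured Vega behaviour):

```python
from collections import OrderedDict

class ToyHBCache:
    """Toy model: VRAM holds a fixed number of pages; on a miss the
    least-recently-used page is evicted to (simulated) system RAM."""
    def __init__(self, vram_pages):
        self.vram_pages = vram_pages
        self.resident = OrderedDict()   # page id -> True, in LRU order
        self.hits = 0
        self.misses = 0

    def access(self, page):
        if page in self.resident:
            self.hits += 1
            self.resident.move_to_end(page)        # mark recently used
        else:
            self.misses += 1                       # page-in over the bus
            if len(self.resident) >= self.vram_pages:
                self.resident.popitem(last=False)  # evict the LRU page
            self.resident[page] = True

# A game whose hot working set fits in a 4-page "4GB" card:
cache = ToyHBCache(vram_pages=4)
for frame in range(100):
    for page in [0, 1, 2, 3, 0, 1]:   # same hot pages reused every frame
        cache.access(page)
print(cache.hits, cache.misses)       # 596 4
```

The claim in the post amounts to: if the *working set* per frame is smaller than the cache, hit rates stay high and the allocated-but-cold data can live elsewhere; it only falls apart when the hot set itself exceeds VRAM.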
 
Since you are the one who started this argument by labelling it stupid, when you actually have no clue what you are talking about:
I do agree 4GB would be stupid in the traditional sense if HBCC doesn't change anything, but I also disagreed: if HBCC actually changes memory management, it wouldn't be stupid at all. You don't like it when someone brings up both possibilities; out of ignorance you only accept the negative outcome, while I base my positive outcome on actual statements from Vega's chief architect and the head of the Radeon group.
I'm not saying it will happen the way Koduri says it will, but at least my argument is factual rather than emotional.
Start the video from 4:10 (sorry, I don't seem to be able to make it start there).
Yeah, I also like and agree that the P in PC should now really stand for Performance. It makes sense: one of the main reasons to have a desktop these days is performance. It is not as constrained as laptops and tablets like the Surface. I will always have a PC for the performance and customisation.

Really looking forward to Vega. I just hope all the new tech that comes with it translates into good performance at a competitive price point. If Prey turns out to be good as well, I will definitely get it to play that :)
 
It's not the same at all; AMD have actually implemented features to deal with memory management this time around. Fiji with HBM1 just used its VRAM as normal, whereas Vega is supposed to have dynamic VRAM management to reduce the amount of data stored in the VRAM itself. That's why they now refer to the HBM2 memory as a high-bandwidth cache rather than VRAM: it's not used in the traditional sense where all data is simply lumped into the memory.

Doesn't change the problem. They are correct that when you look in depth at what many games are doing, the actual in-use memory is only around 2GB (at 1440p-4K I see more like 2.5GB), which is why I moved on from my 3GB 780 as it was getting close to the limit. However, it won't be all that long before 4GB is a limiting factor there as well. This is partly compounded by the fact that in DX11 and older APIs, programmers ostensibly had little if any control over what the API functions actually did internally with memory, i.e. a function might always make a copy of a texture for a specific feature even if your use of that function never required the copy. In newer APIs programmers have better control over that, which gives them more memory budget to work with and lets them push the boat out a bit more.
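As a toy illustration of that budget argument, suppose an old-style driver keeps its own internal copy of every texture while an explicit API (DX12/Vulkan style) lets the app upload via a staging buffer it then frees, so only one resident copy remains. The sizes and the doubling factor here are hypothetical, not measured driver behaviour:

```python
# Hypothetical texture sizes in MB for one scene
textures_mb = [512, 256, 256, 128]

# Old-style API: the driver may keep an internal shadow copy of each
# texture (e.g. so it can re-upload it after a device reset).
implicit_cost = sum(textures_mb) * 2

# Explicit API: the app uploads through a staging buffer and releases
# it afterwards, leaving a single resident copy.
explicit_cost = sum(textures_mb)

print(implicit_cost, explicit_cost)  # 2304 1152
```

Same assets, roughly half the footprint, which is the "more memory budget to work with" point above.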
 
Without quoting a few people: 4GB of VRAM is plenty for a lot of users. People have to bear in mind all those who can't afford top-end gear, or don't need to buy it. I barely PC game much, so my 1GB card is fine; a 1GB card was fine for me in RB6S even if I had to run on low settings and a lower resolution. I had a 2GB card and only The Division got close, at 1.7GB on high/ultra; hell, BF4 and FO4 didn't even use near 1.4GB on high/ultra @ 1080p. Granted, those are presets, so no real tweaking myself, as it's rare I need to. If I don't sell the computer, I will get a 2-4GB card depending on whether I get an RX 460 or something from Nvidia, as I won't need more grunt. So it's good the companies are still thinking of the average Joe.

If you're a serious hardcore gamer you wouldn't even need to question the market, as you will already have top-end stuff and will continue to only buy top-end stuff ;).
 
^^ The query is 4GB on an ostensibly higher-end card, which would be intended for 1440p or higher. It's much less of an issue at 1080p or below, though even then we are seeing ever-increasing amounts of VRAM utilisation; it certainly isn't standing still.
 
Single Pascal Titan vs quadfired Fury Xs, Shadow of Mordor maxed out.

4 Fury Xs
1080p: [screenshot wg4Omdr.jpg]
2160p: [screenshot Ftia4bp.jpg]

Single Pascal Titan
1080p: [screenshot 7feTlMy.jpg]
2160p: [screenshot v8jMQqz.jpg]

Not having enough memory is really, really bad.
 
Depends how high-end you're talking; from what I've read in the past, it's not just having the memory, it's being able to use it. I've never played beyond 1080p on PC, but I'm sure a 4GB card could run a 1440p monitor, just not maxed out, the same way you can't run a lot of cards maxed out at 1080p, unless both companies create a fantastic low-VRAM card that can (above 1080p, I mean). This logic, to me, feels less disappointing than spending 400-800 on a GPU, or 800-2k for a whole system, plus a further 400-1k on a 4K TV if you don't already own one, and then not getting the performance you'd expect. So many people going on about their 4K card not getting over 60fps, or whatever they consider the sweet spot, after spending a tonne of money, makes me, and I'm sure others like me, feel happy/privileged that our lower-cost builds get what we expect.
 
Single Pascal Titan vs quadfired Fury Xs, Shadow of Mordor maxed out. [screenshots]

Not having enough memory is really, really bad.


I'd pretty much answered this in my comment before I noticed yours, but if you're going to push a card beyond its limits then you should expect poor performance; it's like cars and other things. I wouldn't buy a 4GB Fury, if there even is one, if I wanted to max it out at a high resolution. I would have the resolution because it's good, but so long as it can be used it doesn't matter...

Fair enough, if it looks like a 360 game as a result then that's not good. I played Titanfall on my HD 4400 graphics on low settings and a lower resolution, and while it ran spot on, it did look like the 360 version lol; Crysis is the same. But if it doesn't, then all good really.
 
but I'm sure a 4gb card could run a 1440 monitor

Until recently I ran a 3GB card fine at 1440p, and it would mostly still cut it today, but that isn't going to last forever, and 4GB seems potentially a bit limited going forward if developers actually start squeezing more out of the VRAM with next-generation effects once they are less limited by having to accommodate the API's memory usage.
 
So is it correct that they mean for the 4GB to be a cache for main memory, and that texture space will mostly (in some future game that might use 12GB) live in your normal DDR4, or even DDR3? How effective this cache system can be is not known.

Not having enough memory is really, really bad.
It's a disaster for frame rates. An old card can chug along on lower settings; most games will scale. However, textures have a minimum, I guess; they aren't going to rewrite the level just for the card that's lacking space, so it ends up like a badly set up Windows machine paging to disk. It can be near zero fps.
Not all games suffer as badly; some slow-paced games might be OK. I think the point this time is that they are running some clever system to anticipate what is needed on board the graphics card, not just fetching as the frame is being rendered?
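Some rough arithmetic shows why spilling past VRAM hurts so much. Using approximate peak figures (PCIe 3.0 x16 is around 15.8 GB/s; Vega's announced HBM2 bandwidth is around 484 GB/s; real sustained throughput is lower in both cases), fetching a hypothetical 100MB of overflowed texture data costs a very different slice of a 60fps frame budget depending on where it lives:

```python
# Approximate peak bandwidth figures, GB/s
PCIE3_X16_GBPS = 15.8    # PCIe 3.0 x16 link to system RAM
HBM2_GBPS = 484.0        # announced Vega HBM2 bandwidth
FRAME_MS = 1000 / 60     # frame budget at 60 fps, ~16.7 ms

def fetch_ms(megabytes, gbps):
    """Milliseconds to move `megabytes` of data at `gbps` GB/s."""
    return megabytes / 1024 / gbps * 1000

overflow_mb = 100  # hypothetical data that missed VRAM this frame
print(round(fetch_ms(overflow_mb, HBM2_GBPS), 2))      # ~0.2 ms from HBM2
print(round(fetch_ms(overflow_mb, PCIE3_X16_GBPS), 2)) # ~6.18 ms over PCIe
```

Roughly 0.2ms from local HBM2 versus over 6ms across PCIe, i.e. more than a third of the whole 16.7ms frame gone to one transfer, which is why a blown VRAM budget can collapse frame rates the way a machine paging to disk does.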
 
Until about early last year, I felt 4GB was perfectly enough, and it still can be if you are happy lowering textures. But in the latest games, say Resident Evil 7, I found 8GB really did help. As someone who games at 4K, I would not buy a card with less than 8GB going forward unless AMD's caching technology on Vega works well. If HBM2 is fast enough to move things in and out of VRAM to RAM/SSD, then 4GB will be fine. But being first-gen tech, I would still go for 8GB anyway, just in case :)
 
I think this is how far AMD have come, that I might actually end up with a full AMD PC.

I've just gone and bought a 1700 and mobo, and now I wait for Vega before making a GPU choice.

---

AMD and their drivers.... do they have things like adaptive vsync? What about something similar to Fast Sync?

For me, adaptive vsync was a HUGE deal with Nvidia cards, as I have an older monitor and it tears like crazy.

That said, I do still actually get tearing even with adaptive vsync.
 
AMD and their drivers.... do they have things like adaptive vsync? What about something similar to Fast Sync?

For AMD it's called FreeSync; you can find them here.
 