
AMD VEGA confirmed for 2017 H1

Status
Not open for further replies.
5G is supposed to be a revolution in telecommunications. Even if it's not possible to give every device 1 ms, the latency would still be lower than what Ethernet or Wi-Fi offers us today, and you can't really compare it to 4G; it's like comparing Bluetooth with Wi-Fi. The possibilities it opens for real-life applications are far greater.
Getting off topic though, so to get back on topic: yes, AMD seem to have a very well-tuned architecture for the transition to game streaming. They have APUs, and for the servers used to stream games, HBCC with Vega and its 512 TB address space, plus scalable Navi, makes it a perfect fit.

One thing that's going to hold back 5G is the feeble data allowances mobile providers give out. In the move from 3G to 4G I've not seen any substantial increase in data allowances from any of the big providers, even though 4G is 5 to 10 times faster, so I doubt we'll see anything change when 5G finally lands.
 
Yet still, to this day, just about every game, barring the odd one here and there, runs better on Nvidia on the PC, not on AMD, whose hardware is in all the consoles. And no, Scorpio isn't going to change that either; it will be business as usual when that arrives. If being the only vendor in the consoles made any difference to AMD on the PC, then after this long every console-derived game would be running better on AMD's PC cards than on Nvidia's, no ifs, no buts. But they don't: games powered by AMD consoles run better on Nvidia's PC cards than on AMD's own.
You don't seem to have been paying attention, as the AMD-console relationship is absolutely bearing fruit on PC. Even ignoring DX12/Vulkan, the trend is very clear: modern AAA multiplatform titles tend to perform equal to or better on AMD cards, whereas before Nvidia was usually ahead, barring some exceptions.

This trend shows no signs of slowing down, nor should it. If anything, it's only going to grow, as we've still got 2-3 more years of the current generation and developers are getting better and better at maximizing the console hardware.

And in the titles where Nvidia maintains an advantage (likely to be PC-led games rather than console-led games), AMD's progress with DX11 performance, as already seen with Polaris, will likely mean AMD is rarely 'far behind' in a given title.

If you're the type who likes to keep up with modern AAA games, I think considering AMD should very much be on your mind.

That said, if Nvidia continue to maintain a general horsepower advantage via their higher-end offerings, then which software favours which vendor may be moot to many, especially those trying to get into 3440x1440 or 4K.
 
You would be surprised that some people can't physically remove their AMD-tinted glasses and believe that AMD will release some 1080 Ti beater for half the price.
But the point of my post was that, best case scenario, big Vega might be slightly faster and slightly cheaper than a 1080, so if you have recently purchased a Ti there really isn't much to worry about and you can just enjoy the card now.
Maybe you're seeing things differently through green-tinted glasses? Possibly? (That's a question and not an accusation, before I get jumped on, because I know I got banned for saying you're a fan of Nvidia technology before.) I don't remember numbers of people repeatedly saying or believing that AMD will release a 1080 Ti beater at half the price. They may have said something like that in jest, but no one believes it. I know people have said that, given what AMD have done with Ryzen to stir up the CPU market, they could do the same with Vega and stir up the GPU market by coming in competitively, as long as Vega is at 1080 Ti level or faster.
You always come across as so negative about AMD and so pro-Nvidia, and I think a lot of people on this forum would agree, so please don't accuse people of having red-tinted glasses, because that's as good as calling them AMD fanboys.
 
Come on, Nvidia is probably also the reason for third-world problems, and earthquakes, aliens...

People should look for the real reason, and the main problem is that you can't build a ground-up DX12 engine, because you lose too many sales. All these mixed DX11/DX12 engines are not great and don't really take advantage of DX12.

First, there are still way too many people on Win7/8.1. Second, you would want to build a real DX12 game against Feature Level 12_0, not 11_0 like all the current DX12 games. But there are too many graphics cards that don't support FL 12_0: the whole Kepler line, plus the 750 Ti (which was pretty popular), are only FL 11_0, and on the AMD side you still have a lot of 79x0/78x0/7x700 and R9 280/270 owners, which also don't support FL 12_0. And lastly, one big DX12 feature isn't even finished yet: Shader Model 6 has experimental support in the Creators Update and will be released this year, but it too can only be used by Feature Level 12_0 GPUs.

Only when devs kill off Win7 and Kepler/GCN1 will we see the big DX12 jump, not before.

But looking at the Steam hardware survey, with 52% on Win10 compared to 45% on Win7/Win8 and the 750 Ti as the 2nd most popular GPU, it will take at least one more year until the first real DX12 games appear.
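As a rough sketch of that timeline, here's a back-of-envelope projection in Python. The starting Win10 share is the survey figure quoted above; the FL 12_0 GPU share and the yearly migration rates are pure guesses for illustration, not data.

```python
# Back-of-envelope: when does a Feature Level 12_0 baseline become viable?
# Win10 share is from the Steam survey numbers quoted in the post; the
# FL 12_0 share and yearly gains below are invented purely for illustration.

def project_share(start, yearly_gain, years):
    """Linearly grow a market-share percentage, capped at 100%."""
    return min(100.0, start + yearly_gain * years)

win10_share = 52.0   # % of users on Win10 (can run DX12) - from the survey
fl12_share = 60.0    # assumed % of GPUs that are FL 12_0 today (a guess)
win10_gain = 10.0    # assumed percentage points of Win10 migration per year
fl12_gain = 12.0     # assumed percentage points of GPU upgrades per year

for years in range(4):
    # Addressable market = users who have BOTH Win10 and an FL 12_0 GPU,
    # assuming (crudely) the two shares are independent.
    addressable = (project_share(win10_share, win10_gain, years) / 100
                   * project_share(fl12_share, fl12_gain, years) / 100)
    print(f"year +{years}: ~{addressable:.0%} of the market addressable")
```

With these made-up rates the addressable base only passes ~75% around year three, which is roughly the "one more year at least" intuition in the post.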

I half agree with what you said. My reply to loadsamoney was tailored to loadsamoney, with a bit of sarcasm.
And I said pretty much the same as you in another post: that it's mostly down to hardware and OS migration. But some companies can abuse that. Nvidia, for example, is taking one major DX12 feature, async compute, and just doesn't want to implement it, mostly because AMD gains an edge over them with it. And they follow up with multiple generations of GPUs without it, even a couple of years after the feature was added to the API and some games and consoles already use it. It's really rare to see a GPU manufacturer drag their feet on new API features; usually they race to be the first to implement them, well before they start being used.
In two years we'll be at 70-80% DX12-capable hardware but only 20-30% async-capable, making it a tough decision for devs to implement instead of it being automatic.
 
I half agree with what you said. My reply to loadsamoney was tailored to loadsamoney, with a bit of sarcasm.
And I said pretty much the same as you in another post: that it's mostly down to hardware and OS migration. But some companies can abuse that. Nvidia, for example, is taking one major DX12 feature, async compute, and just doesn't want to implement it, mostly because AMD gains an edge over them with it. And they follow up with multiple generations of GPUs without it, even a couple of years after the feature was added to the API and some games and consoles already use it. It's really rare to see a GPU manufacturer drag their feet on new API features; usually they race to be the first to implement them, well before they start being used.
Nvidia do not have the hardware to properly implement async compute. This is not a case of them just 'not wanting' to do something. These architectures are designed many years ahead of time, and they can't just throw async-compute-capable engines onto their GPUs late in the process.
 
Nvidia do not have the hardware to properly implement async compute. This is not a case of them just 'not wanting' to do something. These architectures are designed many years ahead of time, and they can't just throw async-compute-capable engines onto their GPUs late in the process.
Yeah, but R&D... Nvidia have a lot more money and spend a lot more on R&D; they can do anything they want. They are the best... :p

/s
 
Nvidia do not have the hardware to properly implement async compute. This is not a case of them just 'not wanting' to do something. These architectures are designed many years ahead of time, and they can't just throw async-compute-capable engines onto their GPUs late in the process.
Async compute was not a secret. It couldn't happen in the 700 series, and even the 900 series I understand, but the 10 series... Besides, didn't Microsoft start working on DX12 like 7 years ago? :D

This ruins it for the potentially funny replies.
 
Been away for a while but after reading the last few pages I am guessing there is no news on Vega yet. Mind you, you guys are always entertaining....especially Loadsamoney...I think the guy is mentally cracking. :eek:

Hey dude put yer money where your mouth is and get an Nvidia card....or are you REALLY waiting for VEGA? :p
 
Isn't Nvidia's Volta architecture meant to have a lot of what's missing from the current Maxwell/Pascal architecture added back? Someone did an article I saw linked here where they said Nvidia stripped out a lot of what was, at the time, going to waste in their Maxwell architecture in order to improve efficiency. With Volta a lot of it is being added back; the article said Volta would be more GCN-like.

How factual that is is anyone's guess.

Damn, I hope I'm not quoting Wccftech :confused:
 
Been away for a while but after reading the last few pages I am guessing there is no news on Vega yet. Mind you, you guys are always entertaining....especially Loadsamoney...I think the guy is mentally cracking. :eek:

Hey dude put yer money where your mouth is and get an Nvidia card....or are you REALLY waiting for VEGA? :p

I think he has a 1080ti on order
 
Async compute was not a secret. It couldn't happen in the 700 series, and even the 900 series I understand, but the 10 series... Besides, didn't Microsoft start working on DX12 like 7 years ago? :D
Pascal was still heavily based on Maxwell, though. Of course there wasn't gonna be any massive overhaul of the GPU design.

And async compute was not a secret, but it probably wasn't considered a priority either, especially early on when no developers were using it. It wasn't until the next-gen consoles got going that it became a thing.
 
Nvidia do not have the hardware to properly implement async compute. This is not a case of them just 'not wanting' to do something. These architectures are designed many years ahead of time, and they can't just throw async-compute-capable engines onto their GPUs late in the process.

Async compute was not a secret. It couldn't happen in the 700 series, and even the 900 series I understand, but the 10 series... Besides, didn't Microsoft start working on DX12 like 7 years ago? :D

Kepler actually has a 2nd DMA engine that could be used to do compute + graphics, but it's disabled on GeForce cards.
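For anyone wondering what concurrent compute + graphics buys you in the abstract, here's a toy CPU-thread analogy in Python (not GPU code): two independent passes run back-to-back versus overlapped on two workers, which is roughly what separate hardware queues enable. The pass names and timings are invented for illustration.

```python
# Toy CPU analogy for async compute: a "graphics" task and an independent
# "compute" task, run serially vs. overlapped on two workers (roughly what
# separate GPU queues allow). Durations are arbitrary stand-ins.
import time
from concurrent.futures import ThreadPoolExecutor

def graphics_pass():
    time.sleep(0.05)   # stand-in for rasterisation work
    return "frame"

def compute_pass():
    time.sleep(0.05)   # stand-in for e.g. a post-processing compute job
    return "effects"

# Serial: the compute pass waits for the graphics pass to finish.
t0 = time.perf_counter()
graphics_pass()
compute_pass()
serial = time.perf_counter() - t0

# "Async": both in flight at once, as on separate hardware queues.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    f1, f2 = pool.submit(graphics_pass), pool.submit(compute_pass)
    f1.result(), f2.result()
overlapped = time.perf_counter() - t0

print(f"serial {serial*1000:.0f} ms, overlapped {overlapped*1000:.0f} ms")
```

The overlapped version finishes in roughly the time of the longer pass rather than the sum, which is the whole appeal, provided the two workloads really are independent.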
 
AMD really need to start making noise about Vega, building up some momentum and excitement prior to release.

This will not only increase sales but also stop people going Nvidia in the interim.
Didn't they already do this though? They were giving out Vega t-shirts recently. Look at all the attention and excitement it is getting. This thread has been on fire! :p
 