
AMD VEGA confirmed for 2017 H1

Just remembered, Gibbo said 1080s are getting whacked back up in price. Can't remember if it was now or in May, but they're going back to £500+. So why would Nvidia put their prices up instead of down when Vega's coming out?

:p

Nvidia launched the GTX980TI before the Fury X and massively undercut the Titan X(M).

The Fury X didn't beat a GTX980TI.

I suspect Vega won't be cheap, especially if it uses a largish core.
 
Both AMD and Nvidia are megacorporations, legally obligated to suck as much money out of you as possible. Stop acting like it's a battle between good and evil, guys. Neither of these companies is your friend.
It is of course true that neither is our friend, and it may be that, due to their market share, AMD cannot afford to try some of the anti-consumer things which Nvidia and Intel do.

However, while this remains the case, why would I support the bigger company known for these anti-consumer things? That, plus I'm still mostly avoiding Nvidia since I got burned by their response to the solder-defective stuff they sold, which was basically to tell people to get lost.
 
I think what you may be referring to is the incorrect assumption of some people that memory compression allows a card to use bigger textures than its VRAM size, which is totally bogus. A game allocating 5GB of VRAM will require 5GB of VRAM no matter what. Even if the card manages to compress it down to 3GB, it does not mean you could run it on a 970 (for example).
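A rough way to picture the distinction, with made-up numbers (the compression ratio below is purely illustrative, not a real figure for any card):

    # Illustration only: hypothetical numbers, not real driver behaviour.
    allocation_gb = 5.0   # VRAM the game asks the driver for
    avg_ratio = 1.7       # assumed average lossless compression ratio

    # Lossless compression is variable-rate: incompressible data must
    # still fit, so the card reserves the full allocation regardless.
    vram_required_gb = allocation_gb

    # What compression actually buys is fewer bytes moved per access,
    # i.e. higher effective bandwidth, not a smaller footprint.
    effective_bandwidth_multiplier = avg_ratio

    print(f"VRAM still required: {vram_required_gb:.1f} GB")
    print(f"Effective bandwidth: ~{effective_bandwidth_multiplier:.1f}x raw")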
Once again: lots of SPECULATION in these posts of mine...

Yeah, I was trying to get across that currently compression is for bandwidth reasons, where 4GB of textures requires 4GB of VRAM.

The question is if the Vega NCUs access VRAM via a virtualised abstraction, or whether they have direct access. I would expect direct access for latency/efficiency reasons, but who knows, I certainly don't!
 
The question is if the Vega NCUs access VRAM via a virtualised abstraction, or whether they have direct access. I would expect direct access for latency/efficiency reasons, but who knows, I certainly don't!

That's the reason (as much as many here misinterpret it) I keep saying that Vega is not really a gaming-focused card.

Clearly AMD are targeting compute and they are preparing a damn good offering:

  • Vega is a chip designed for an amazing MI-25.
  • It is not late, it is timed perfectly to coincide with the release of Naples (complete server-side solution).
  • It will have amazing software, but ... for compute! (ROCm and then Tensorflow and all the other deep learning frameworks)
  • Server-side features (virtualized unified memory access, packed math, etc.; there's a sketch of packed math after this post)
We'll also get a version of it for gaming with decent drivers, but that's beside the point...

It'll be a decent card, but I doubt it will blow away Nvidia's offerings (especially in power consumption).
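For anyone wondering what "packed math" actually means, here's a toy numpy sketch of the idea (this has nothing to do with the real GCN instruction encoding; it just shows the principle that two FP16 values share one 32-bit register, so each instruction can process two of them at once):

    import numpy as np

    def pack_fp16_pair(a, b):
        # Two half-precision floats occupy one 32-bit word.
        return np.array([a, b], dtype=np.float16).view(np.uint32)[0]

    def unpack_fp16_pair(word):
        # Reinterpret the 32-bit word as its two FP16 halves.
        return np.array([word], dtype=np.uint32).view(np.float16)

    w = pack_fp16_pair(1.5, -2.25)
    print(unpack_fp16_pair(w))  # [ 1.5  -2.25]: one ALU op can touch both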
 
I think what you may be referring to is the incorrect assumption of some people that memory compression allows a card to use bigger textures than its VRAM size, which is totally bogus.

That is a bit misleading - using the right texture compression format you can stick what would be, say, 5GB of textures from TGA data into VRAM using only a couple of GB or so, with minimal quality loss over the original raw data. But yeah, if your compressed data exceeds the amount of VRAM, nothing except streaming/tiling methods can offset the impact of not having enough VRAM, and that is still potentially inferior to just having enough VRAM in the first place.
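To put rough numbers on that (the BC1/BC3 rates are the standard fixed-rate block compression costs; the texture count is just an example picked to land near 5GB):

    # Fixed-rate block compression: bytes per pixel are format constants.
    BYTES_PER_PIXEL = {
        "RGBA8 (raw TGA)": 4.0,
        "BC3/DXT5": 1.0,   # 16 bytes per 4x4 block
        "BC1/DXT1": 0.5,   # 8 bytes per 4x4 block
    }

    def footprint_gb(width, height, bpp):
        pixels = width * height * 4 / 3   # full mip chain adds ~a third
        return pixels * bpp / 1024**3

    # Example workload: sixty 4096x4096 textures.
    for fmt, bpp in BYTES_PER_PIXEL.items():
        print(f"{fmt}: {60 * footprint_gb(4096, 4096, bpp):.2f} GB")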
 
That's the reason (as much as many here misinterpret it) I keep saying that Vega is not really a gaming-focused card.

Clearly AMD are targeting compute and they are preparing a damn good offering:

  • Vega is a chip designed for an amazing MI-25.
  • It is not late, it is timed perfectly to coincide with the release of Naples (complete server-side solution).
  • It will have amazing software, but ... for compute! (ROCm and then Tensorflow and all the other deep learning frameworks)
  • Server-side features (virtualized unified memory access, packed math etc)
We'll also get a version of it for gaming with decent drivers, but that's beside the point...

It'll be a decent card, but I doubt it will blow away Nvidia's offerings (especially in power consumption).


Yup, they can't compete with Nvidia anymore, so are now targeting VR and compute.
 
That is a bit misleading - using the right texture compression format you can stick what would be, say, 5GB of textures from TGA data into VRAM using only a couple of GB or so, with minimal quality loss over the original raw data. But yeah, if your compressed data exceeds the amount of VRAM, nothing except streaming/tiling methods can offset the impact of not having enough VRAM, and that is still potentially inferior to just having enough VRAM in the first place.


AMD confused a lot of people with the Fury X release by talking about memory management and new compression technologies such that the 4GB of VRAM wouldn't be a limit. The actual compression AMD were talking about made no difference to VRAM usage but increased effective memory bandwidth.
 
I don't understand AMD. Surely they would want Vega out ASAP. Instead they are bringing out pointless rebrands.

Which represent the vast majority of the market.

High-end £500 cards may be profitable, but they're very low volume in comparison to the mainstream.
 
Nope, came from an XFX Fury.

You missed the point, but never mind. There is nothing further to be gained arguing this. Cheers bud and a good day to you (I sincerely mean that, btw).

I think what you may be referring to is the incorrect assumption of some people that memory compression allows a card to use bigger textures than its VRAM size, which is totally bogus. A game allocating 5GB of VRAM will require 5GB of VRAM no matter what. Even if the card manages to compress it down to 3GB, it does not mean you could run it on a 970 (for example). *snip*

If this is the case then what's the point in compression? I am asking because I want to know. I thought the whole idea of memory compression would be to allow more data to be stored in VRAM. Is it to allow more data to be loaded at the same time (which would then require decompression on the fly, which seems to me at first glance counterintuitive compared to "just" increasing bandwidth)?

That is a bit misleading - using the right texture compression format you can stick what would be, say, 5GB of textures from TGA data into VRAM using only a couple of GB or so, with minimal quality loss over the original raw data. But yeah, if your compressed data exceeds the amount of VRAM, nothing except streaming/tiling methods can offset the impact of not having enough VRAM, and that is still potentially inferior to just having enough VRAM in the first place.

This is actually how I thought it worked.
 

True. But Nvidia has had the top end to themselves for years; it'd be nice to see some competition.
Indeed it would be. But if you are AMD, who have had to recover from a poor situation, it only makes sense to focus your limited resources where most of the market is, no matter how much we want some top-tier GPUs from them.
 
If this is the case then what's the point in compression? I am asking because I want to know. I thought the whole idea of memory compression would be to allow more data to be stored in VRAM. Is it to allow more data to be loaded at the same time (which would then require decompression on the fly, which seems to me at first glance counterintuitive compared to "just" increasing bandwidth)?


This is actually how I thought it worked.

Texture compression has been used in graphics cards for over 20 years; that is nothing interesting in the slightest.

The compression that Nvidia and AMD have been talking about over the last few years (which has also existed in GPUs for some time) is compression of data during processing in the shader core, which increases effective bandwidth and not memory size. Nvidia have been very aggressive with advanced delta compression, which allowed them to get away with less raw memory bandwidth than AMD cards that resorted to a more brute-force approach.

It is like if you want to email some data to someone: you can zip the files up so the transported size is smaller, effectively increasing bandwidth. The data stored on your hard disk and on the recipient's hard disk is the same size, though, once you've unzipped it.
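The analogy in runnable form, with zlib standing in for whatever the hardware actually does on the wire:

    import zlib

    # A buffer with a lot of redundancy, as framebuffer data often has.
    data = bytes(range(256)) * 4096   # 1 MiB, highly repetitive

    wire = zlib.compress(data)        # what travels over the "bus"
    back = zlib.decompress(wire)      # what exists at the other end

    print(f"transfer size: {len(wire)} bytes "
          f"({len(data) // len(wire)}x less traffic)")
    print(f"stored size unchanged: {len(back) == len(data)}")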
 
^^ Not using delta compression in these situations is kind of silly, even though it can add some complexity: you only need to transmit the changes, and in some cases the saving can be huge compared to just increasing raw bandwidth.
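A toy example of why deltas win, assuming the data varies smoothly (real GPU delta colour compression works on small pixel blocks in fixed-function hardware; this just demonstrates the principle):

    import zlib

    # Smoothly varying values, e.g. a gradient across a scanline.
    row = [1000 + 3 * i for i in range(4096)]

    # Delta encoding: keep the first value, then store only differences.
    deltas = [row[0]] + [b - a for a, b in zip(row, row[1:])]

    def packed_size(values):
        # Crude stand-in for the entropy coder: zlib over the text form.
        return len(zlib.compress(",".join(map(str, values)).encode()))

    print("raw   :", packed_size(row), "bytes")
    print("deltas:", packed_size(deltas), "bytes")  # nearly every delta is 3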
 