
Poll: ** The AMD VEGA Thread **

On or off the hype train?

  • (off) Train has derailed

    Votes: 207 39.2%
  • (on) Overcrowding, standing room only

    Votes: 100 18.9%
  • (never ever got on) Chinese escalator

    Votes: 221 41.9%

  • Total voters
    528
That video was interesting, if a little over my head.

Can the GPU itself work out mathematically what needs to be rendered and what doesn't? What I'm really asking is: does it need game dev support?
 
That video was interesting, if a little over my head.

Can the GPU itself work out mathematically what needs to be rendered and what doesn't? What I'm really asking is: does it need game dev support?

It has a process/chip before the main "GPU" process that removes everything that isn't needed, so the "main GPU" focuses on rendering only what is going to be seen.
According to the video, it runs a mathematical equation to determine this.
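
Roughly, the idea is a cheap test over every triangle before any of the expensive shading happens. A minimal CPU-side sketch of that kind of pre-pass (purely illustrative; the function and its conventions are made up, this is not how Vega's hardware actually does it):

```python
import numpy as np

def cull_pass(triangles, view_dir):
    """Return only the triangles worth sending on to the expensive shading stages.

    triangles: iterable of three vertex positions each, in view space, with the
               camera at the origin looking down +z and counter-clockwise winding.
    view_dir:  unit vector pointing from the camera into the scene, e.g. (0, 0, 1).
    """
    kept = []
    for tri in triangles:
        a, b, c = (np.asarray(v, dtype=float) for v in tri)
        normal = np.cross(b - a, c - a)
        # Back-face cull: the triangle faces away from (or is edge-on to) the camera.
        if np.dot(normal, view_dir) >= 0:
            continue
        # Trivial frustum cull: every vertex sits behind the camera.
        if a[2] <= 0 and b[2] <= 0 and c[2] <= 0:
            continue
        kept.append(tri)
    return kept
```

Real hardware does this per primitive, massively in parallel and with proper frustum and zero-area tests, but the principle is the same: throw work away before the pixel stage ever sees it.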
 
Buildzoid tested Vega Frontier Edition on LN2.

https://cxzoid.blogspot.co.uk/2017/08/first-impressions-of-vega-fe-on-ln2.html

He managed to reach 1807/1100MHz stable on LN2, which cannot be achieved on the Frontier Edition water cooler. He did try 2000/1100MHz in Time Spy, but the score was worse than at stock clocks.

http://www.3dmark.com/3dm/21393617

Vega at 1807/1100MHz on LN2 gets a graphics score of 8164, which is worse than the ASUS STRIX GTX 1080 OC'd to a 2126MHz boost on air, which scores 8665 in the Guru3D review.

http://www.guru3d.com/articles_pages/asus_geforce_gtx_1080_strix_oc_11_gbps_review,38.html
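
Just to put that gap in perspective, using the two graphics scores quoted above:

```python
vega_ln2   = 8164   # Vega FE at 1807/1100MHz on LN2 (Time Spy graphics score)
strix_1080 = 8665   # ASUS STRIX GTX 1080 at ~2126MHz boost on air (Guru3D)

gap = (strix_1080 - vega_ln2) / vega_ln2 * 100
print(f"GTX 1080 leads by {gap:.1f}%")   # roughly 6.1%
```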
 
Buildzoid tested Vega Frontier Edition on LN2.

https://cxzoid.blogspot.co.uk/2017/08/first-impressions-of-vega-fe-on-ln2.html

He managed to reach 1807/1100MHz stable on LN2, which cannot be achieved on the Frontier Edition water cooler. He did try 2000/1100MHz in Time Spy, but the score was worse than at stock clocks.

http://www.3dmark.com/3dm/21393617

Vega at 1807/1100MHz on LN2 gets a graphics score of 8164, which is worse than the ASUS STRIX GTX 1080 OC'd to a 2126MHz boost on air, which scores 8665 in the Guru3D review.

http://www.guru3d.com/articles_pages/asus_geforce_gtx_1080_strix_oc_11_gbps_review,38.html
Lol Athlon, all you ever do is post negative AMD stuff. You should consider asking the Dons to rename you TitanXP instead of AthlonXP1800 :p;)
 
This has been a roller coaster launch, with reports circulating about a 100MH/s mining throughput for Vega, all inexplicably linked to a single source.
Let's say we are looking at something more like 50MH/s... will it still be a cause for worry?

I believe this is going to be activated with the proper RX driver release for the FE on the 14th, as per the presentation last week.

I doubt that could happen... the process thingy might not be a separate chip. It basically refers to all stages in the rendering pipeline, so it must be present on all current-generation cards, including Nvidia's.
If the pipeline were hard-coded, there's no way you could do those "Detect Life" charms in Elder Scrolls.
 
That video was interesting, if a little over my head.

Can the GPU itself work out mathematically what needs to be rendered and what doesn't? What I'm really asking is: does it need game dev support?
The technology is called immediate rendering mode and works as Panos describes. It was last seen in 2001 in the Kyro II 3D.

In the AMD driver thread I questioned why this hadn't caught on, as you would think it's a good thing that the GPU isn't rendering non-visible assets, but upon further reading it looks like what you gain in command savings you can potentially lose in processor bandwidth (the time taken for the CPU to work out which parts of the scene aren't visible to the user). I guess by having primitive shaders on the GPU that problem goes away (but then you're making the GPU package bigger to accommodate those primitive shaders).
 
It is an old technology, proposed all the way back in 2001.
As far as I can tell it's not about any new culling technique per se, but about changing the traditional Vertex Shader -> Pixel Shader architecture so that primitives can be discarded earlier in the pipeline. It has to be coded for, though it's probably a fairly simple case of the dev splitting out the bit of the vertex shader that directly deals with vertex position, so that the GPU can do culling tests without initiating other bits of the pipeline. But that's a guess; I can't find enough info to be sure, and I am neither a 3D pro nor an architecture expert.

I didn't see anything in the video about dynamic culling based on visibility apart from the classic (and fast) back-face culling, frustum culling and zero-pixel culling. More sophisticated techniques are best implemented in the engine, where you have a better idea of the problem, and as mentioned it's not always going to help.
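
If that guess is right, the split might look something like this in spirit (a hand-wavy Python sketch with invented names and a single trivial reject test, not real shader code):

```python
import numpy as np

def position_only(pos, mvp):
    # Cheap pass: transform just the position to clip space, nothing else.
    return mvp @ np.append(pos, 1.0)

def full_vertex_shade(vertex):
    # Stand-in for the expensive part: normals, UVs, lighting setup, skinning...
    return {"pos": vertex["pos"], "normal": vertex["normal"], "uv": vertex["uv"]}

def shade_triangle(tri, mvp):
    clip = [position_only(v["pos"], mvp) for v in tri]
    # One trivial reject test: all three vertices behind the camera (w <= 0).
    if all(c[3] <= 0 for c in clip):
        return None                    # culled: the expensive shading never runs
    return [full_vertex_shade(v) for v in tri]
```

The point of the split is simply that the expensive per-vertex work never runs for triangles the cheap position-only pass can already throw away.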
 
As far as I can tell it's not about any new culling technique per se, but about changing the traditional Vertex Shader -> Pixel Shader architecture so that primitives can be discarded earlier in the pipeline. It has to be coded for, though it's probably a fairly simple case of the dev splitting out the bit of the vertex shader that directly deals with vertex position, so that the GPU can do culling tests without initiating other bits of the pipeline. But that's a guess; I can't find enough info to be sure, and I am neither a 3D pro nor an architecture expert.

I didn't see anything in the video about dynamic culling based on visibility apart from the classic (and fast) back-face culling, frustum culling and zero-pixel culling. More sophisticated techniques are best implemented in the engine, where you have a better idea of the problem, and as mentioned it's not always going to help.

From what I can make out, the primitive discard and raster stuff, etc. requires intimate knowledge and/or some coding from the application developer to be sure of it working properly with the most efficient implementation; then there is a compatibility deferred mode which can be forced on at driver level with lower efficiency; then finally the traditional pipeline.
 
It looks like a straightforward problem... one just needs to compare the depth of whole triangles from the viewpoint.
So basically you get [x,y,z] for all primitives.
For a fixed [x,y] we have to find the minimum "z", beyond which primitives would be discarded.
The minimum "z" can be updated using transparency checks, or alternatively by transforming the "z" value of a transparent primitive to something like "999999999". In fact this looks very efficient for the tile-based rendering methodology: you can parallelize those checks across small tiles rather than the whole scene for a faster result.

I am just a math guy, but I don't see how else this problem can be addressed.
From a pipeline standpoint, the occlusion culling can be done at a much earlier stage so that no additional effort is spent on unnecessary primitives.
This is what Vega is claiming will be done.
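
A toy version of that per-tile minimum-z idea, just to make the reasoning concrete (the dict layout and the assumption that a primitive covers its whole tile are mine, and real hardware obviously can't assume the latter):

```python
import numpy as np

FAR = 1e9   # the "999999999" trick: transparent primitives never occlude anything

def cull_by_tile_depth(prims, tile_grid_shape):
    """Toy occlusion pass. Each prim is a dict with:
         'tile'        -> (row, col) index of the tile it falls in
         'z'           -> its depth from the viewpoint
         'transparent' -> whether it lets things behind it show through
    Assumes (unrealistically) that every primitive covers its whole tile."""
    # Pass 1: nearest *opaque* depth seen in each tile.
    min_z = np.full(tile_grid_shape, FAR)
    for p in prims:
        z = FAR if p["transparent"] else p["z"]
        min_z[p["tile"]] = min(min_z[p["tile"]], z)

    # Pass 2: anything strictly behind the nearest opaque surface in its tile is dropped.
    return [p for p in prims if p["z"] <= min_z[p["tile"]]]
```

The per-tile independence is what makes it parallel-friendly: each tile's two passes can run on their own without touching any other tile.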

But, on second thought, why not implement this as a hard pipeline? It should be the default rendering mode unless the developer supplies a transparency shader. So in my earlier example, "Detect Life" would need to activate a script that attributes transparency to all primitives, and the effect would still be preserved; but if the developer does nothing, those primitives get culled. The DX and OGL APIs should not allow developers to tinker with the pipeline.

On second thought, the mining problem too could easily be solved if GPU manufacturers added this simple step to the pipeline: when the GPU hits a certain utilisation level (say 80%) over the last minute, the driver can start adding random values of small magnitude to all computations. These values will be so small that there is no impact on gaming IQ, but mining folks will wake up with pitchforks - unless they have a separate mining card :D
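
Purely as a thought experiment on that last idea (no driver does anything like this; the threshold, window and noise size are invented):

```python
import random

UTIL_THRESHOLD = 0.80      # "say 80%"
WINDOW_SECONDS = 60        # sustained over the last minute
NOISE_MAGNITUDE = 1e-6     # tiny enough to be invisible in a rendered frame

def maybe_perturb(value, sustained_utilisation):
    # If the GPU has been pegged for the whole window, nudge results by a
    # tiny random amount: imperceptible for graphics, fatal for hashing,
    # where every bit of the result has to be exact.
    if sustained_utilisation >= UTIL_THRESHOLD:
        return value + random.uniform(-NOISE_MAGNITUDE, NOISE_MAGNITUDE)
    return value
```

Of course the same nudge would break any compute workload that needs bit-exact results, not just mining, which is probably why nothing like it exists.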
 
From what I can make out, the primitive discard and raster stuff, etc. requires intimate knowledge and/or some coding from the application developer to be sure of it working properly with the most efficient implementation; then there is a compatibility deferred mode which can be forced on at driver level with lower efficiency; then finally the traditional pipeline.
I don't know, but!
Take Skyrim...
There was a mod that manually added additional visibility planes that were only rendered if the viewer looked at the scene from a certain vector. That technique was already implemented in the engine of that popular 2011 game.
As I understand him, there is now a chip (at the point where the object enters the graphics pipeline and its position is processed) that detects whether an object is seen or not and generates those planes on the fly.
Processing is stopped earlier in the Vega architecture than on other chips when it detects the object is actually not in the scene, but the object is kept in memory for later use.

These techniques could of course run on top of each other.
The game engine filters by planes which objects should be sent to the GPU, then the GPU filters by the vertices of those objects what is rendered and what is not.
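
Very loosely, that two-level split could be pictured like this (made-up data layout, just to show where each filter sits):

```python
import numpy as np

def triangle_area(tri):
    a, b, c = (np.asarray(v, dtype=float) for v in tri)
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def engine_filter(objects, view_dir):
    # Coarse CPU-side pass: only hand over objects whose visibility plane
    # faces the current view direction (the "certain vector" idea above).
    return [o for o in objects if np.dot(o["plane_normal"], view_dir) < 0.0]

def gpu_filter(submitted):
    # Fine per-primitive pass (the bit Vega is said to accelerate in hardware):
    # within the objects that survived, drop individual useless triangles.
    return [t for o in submitted for t in o["triangles"] if triangle_area(t) > 0.0]
```

The coarse pass saves bandwidth by never submitting whole objects; the fine pass saves shading work on whatever still gets through.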

I was playing around with UserBenchmark results, because it's the same test for everyone, run by actual users in real-life scenarios.
My current 390X is roughly 17% ahead of the GTX 970, or let's say 40% better quality, but the 970 is ~15% faster in multi-rendering. I can still play everything I play at 1080p on mostly ultra/high/very high settings (except Witcher 3, bricked by Nvidia HairWorks of course).
The Fury X is 12% ahead of my 390X, between -8% and 43% across the graphics tests.
The GTX 1070 is 32% ahead of my 390X, between 11% and 60% across the graphics tests.
The GTX 1080 is 69% ahead of my 390X, between 36% and 131% across the graphics tests.
The GTX 1080 Ti is 108% ahead of my 390X, between 52% and 196% across the graphics tests.

People benched the Vega Frontier Edition (3 users in 12 benchmarks) ^^
It is 59% ahead of my 390X, between 54% and 124% across the graphics tests.

Since the Frontier is not gaming-optimised, I am looking forward to the RX version. It will still be at least a 59% improvement over my current card.
 
I was playing around with UserBenchmark results, because it's the same test for everyone, run by actual users in real-life scenarios.
My current 390X is roughly 17% ahead of the GTX 970, or let's say 40% better quality, but the 970 is ~15% faster in multi-rendering. I can still play everything I play at 1080p on mostly ultra/high/very high settings (except Witcher 3, bricked by Nvidia HairWorks of course).
The Fury X is 12% ahead of my 390X, between -8% and 43% across the graphics tests.
The GTX 1070 is 32% ahead of my 390X, between 11% and 60% across the graphics tests.
The GTX 1080 is 69% ahead of my 390X, between 36% and 131% across the graphics tests.
The GTX 1080 Ti is 108% ahead of my 390X, between 52% and 196% across the graphics tests.

People benched the Vega Frontier Edition ^^
It is 59% ahead of my 390X, between 54% and 124% across the graphics tests.

Since the Frontier is not gaming-optimised, I am looking forward to the RX version. It will still be at least a 59% improvement over my current card.

I'm in the same boat as you. I'll be coming from an R9 290X 8GB model (it's literally the exact same card as your 390X, just 50MHz slower at stock), which ran at 1160MHz core and 1625MHz RAM on air for several years. I sold it during the Ethereum coin craze a few weeks ago for almost double its usual market price from May 2017 ($420 last week vs $200 in May). Now I'm sitting around waiting on Vega.

I -REALLY- don't want to buy Nvidia... not fanboyism, I just don't want to support their business practices today. What with how all of the Pascal cards for a given product line are 100% identical, even aftermarket vs reference: they all overclock to the same maximum frequency, with locked-down BIOSes we can't modify, hard-coded voltage limits we can't change, etc.

This is why I want AMD to succeed. At least with AMD all of their cards come "unlocked" and we can change core speed, memory speed and voltages, modify the BIOS, and do whatever the heck we want with our hardware.

This is why I feel AMD is superior... though it's just my personal opinion. So I'm really hoping Vega 64 is at least as powerful as the GTX 1080, and hopefully +15% or +20% more for our money. I would be perfectly happy with that, and would scoop up an MSI one with a custom PCB, extra power phases and whatnot, and then a few months down the road fit it with a full-cover EK block.

I just hope the power usage isn't substantially more than the Nvidia cards, making us Vega owners feel bad about our purchase :( Also, I'm sitting around every day literally sort of scared I won't even be able to buy an aftermarket one when they release, due to coin miners. And I can't afford, and don't want to pay, inflated post-coin-miner-buy-out prices... hope I don't end up stuck waiting to buy a Vega card 6 months after release. :/

Every day that I have to try to play games on my old backup GTX 680 2GB card makes me miss my 290X more and more :(
 