
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

There will be no other way in the future for extracting more performance than to use multiple chips, either on a single substrate, single PCB, or in parallel cards.
That is still a long, long way off and, as I said, unless they do it in consoles it will be **** on PC like before, where it works in some games and not others and you get stuttering etc.
 
Last I checked they don't, and people have to find workarounds to get it to work, and even then you get stuttering etc. **** that.

Even die-hard Nvidia guys like Gregster gave up on SLI because it is ****. That says it all. The above benchmarks are meaningless to me.
True that. SLI was great until they brought out G-Sync, and then you notice the microstutter even more. SLI can do one until they get the stutter sorted.
 
This is pure bait, though; everyone knows they will not kill Nvidia this year. By the time they get around to this, the new Nvidia series will be here and it will be a memory. The gap is big (correct me if I am wrong), but this happens every year, a bit like Liverpool for 20 years: they came close, but still technically no cigar as of yet! :D
 
Bring it on. This is my upgrade path from my Vega 64.

this.

Don't care if it doesn't beat the 2080/2080 Ti; by the time AMD get something top-end out, Nvidia will have moved on to the 30 series anyway. I just need a card twice as fast as the V64 before I upgrade again. I don't need the best GPU, just something that gives me a sensible upgrade path. The 5700 XT was nowhere near a big enough leap to upgrade to.
 
3080 Ti then? :D
 
I wonder who's going to be first to 1 TB/s bandwidth (but not on a workstation card à la the Radeon VII). I'd love to see a graphics powerhouse with that kind of grunt, but I think it's going to be AMD again. Nvidia doesn't seem to want to touch HBM for its gaming cards, while AMD is very eager to. Then again, if it's HBM2e it's unlikely they'll go beyond two stacks (2 × 8 GB), so that would put it at ~900-ish GB/s. Close enough for me, I guess.
 
HBM is expensive and I guess Nvidia won't use it until it's necessary.

GDDR6 is already reaching 770 GB/s as it is, so that's not bad (especially when you consider it can be up to 50% cheaper than HBM). We'll see if they can push it further for Ampere, or if they move to HBM, or maybe there will be a GDDR6X.
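The bandwidth figures quoted above all come from the same arithmetic: bus width divided by 8 (bits to bytes), times the per-pin data rate. A quick sketch; the pin speeds below are illustrative assumptions, not confirmed specs for any upcoming card:

```python
def mem_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s = (bus width in bytes) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# GDDR6: a 384-bit bus at 16 Gbps per pin -> 768 GB/s (the ~770 GB/s mentioned above)
gddr6 = mem_bandwidth_gbs(384, 16.0)

# HBM2e: two 1024-bit stacks at an assumed 3.6 Gbps per pin -> ~921.6 GB/s,
# i.e. the "~900-ish GB/s" for a two-stack card
hbm2e = mem_bandwidth_gbs(2 * 1024, 3.6)

print(gddr6, hbm2e)  # 768.0 921.6
```

Going past 1 TB/s on HBM2e with two stacks would need per-pin rates above 3.9 Gbps, which is why the two-stack guess lands just short.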
 
It is much cheaper and easier to develop proper explicit multi-GPU DX12 support than it is to design and develop transistors on silicon to give the same or comparable performance increase. :D



There is a thing called explicit DirectX 12 multi-GPU rendering:
https://gpuopen.com/wp-content/uploads/2017/03/GDC2017-Explicit-DirectX-12-Multi-GPU-Rendering.pdf

[Image: Explicit DX12 support]

I'm well aware of DirectX 12's different multi-GPU rendering methods; however, this is not what I meant. It still requires some developer time and, let's face it, it's not going to happen in the majority of titles. It's sort of an endless circle of sadness: few devs will/can put in the work because of the low percentage of multi-GPU users, and few will adopt a second GPU because of the **** poor multi-GPU support.
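To illustrate why explicit multi-adapter costs developer time: the application itself, not the driver, has to decide which GPU gets which work. A toy sketch of the simplest scheme, alternate-frame rendering (round-robin by frame index); the function names and thread-pool stand-in are purely illustrative, not real D3D12 calls:

```python
from concurrent.futures import ThreadPoolExecutor

def render_frame(gpu_id: int, frame: int) -> str:
    # Stand-in for recording and submitting a command list on one adapter.
    return f"frame {frame} rendered on GPU {gpu_id}"

def afr_schedule(num_frames: int, num_gpus: int = 2) -> list:
    """Alternate-frame rendering: frame i is assigned to GPU i % num_gpus."""
    with ThreadPoolExecutor(max_workers=num_gpus) as pool:
        futures = [pool.submit(render_frame, i % num_gpus, i)
                   for i in range(num_frames)]
        # Collecting results in submission order is the easy part; in a real
        # engine, cross-GPU resource copies and fences are where the pain
        # (and the microstutter) comes from.
        return [f.result() for f in futures]

print(afr_schedule(4))
```

The scheduling loop is trivial; the per-title cost is everything around it (shared resources, synchronisation, pacing), which is exactly the work most studios skip.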
 
It needs to be similar to RAID for hard disks: handled at the hardware level, ideally. It must be much more complex with GPUs. A game dev shouldn't need to handle it; some or all of a combination of drivers/DirectX/some kind of hardware controller should. I can see AI possibly helping achieve something better in future.
 
It all depends on how Nvidia manufacture the performance into their next offering, i.e. if the new stuff (30X0s) is only 10-20% faster than current models and AMD bring out something that betters or matches the highest 20X0s but for substantially less, they could be on to a winner. At the end of the day, they don't have to lead both fields to dominate them. The high end is a fraction of their profits.
 
I thought Big Navi was already baked into the Q1 schedule from the roadmap that was leaked earlier this year?

The 35% chip size bump would certainly munch a 2080ti at stock.
 

This is the problem with SLI/Crossfire. Back in the Voodoo 2 days it was literally plug and play; the game didn't have to do anything. Dual GPUs need to do it automatically, with no input from the game/software/user.
 