
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

That's what I meant; I don't need the super-ultra top-power one.

Edit - I don't do the fanboi thing, I just get whatever I deem to be great value (OK, OK, I bought a 2080 Ti, shoot me :D )

Same... I just get whatever gives me the most FPS per £.
 
Well, FreeSync is basically what laptops used for years, just not under that name. I have used all of them and prefer the hardware module; this Odyssey G7 is a buggy POS (G-Sync Compatible).
 
IMO it's an overcomplicated, specialized and expensive way of doing something. It's typical Nvidia. It was the same with PhysX: Bullet and Havok did the same thing far more efficiently and didn't require specialized instructions. Same with GameWorks. RTX is a sledgehammer approach to something that's been done more efficiently for more than a decade, and again, to run it the way Nvidia have it you need specialized acceleration hardware. G-Sync... you don't need a dedicated £150 chunk of complex hardware to do that.

It's almost as if Nvidia take what is almost always an existing technology, bastardize it into something far more complex and resource-expensive than it needs to be, and then hope this frightens any competitor off.

And then they wonder why more often than not someone like AMD comes along and makes the whole thing more efficient.

This is the point we are highlighting, and only Nvidia seem to get championed for it. If other companies tried it they would soon find their sales disappearing, as most people out there like 'free' stuff, not more proprietary, locked-down nonsense.
 
AMD already have something similar.

Native 4K on the left, AMD Image Sharpening in the middle, Nvidia's original blurry DLSS mess on the right.

Just in case you're blind... the best image quality is the AMD one in the middle.

Yes, I'm going to bring this up every time someone makes a daft "but DLSS" blurt.

[Image: LvqGzmE.png (native 4K vs AMD Image Sharpening vs DLSS comparison)]
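
For anyone curious what the "upscale then sharpen" approach boils down to, here is a minimal sketch in Python using Pillow. To be clear, this is not AMD's actual Radeon Image Sharpening / FidelityFX CAS shader (that runs as a GPU post-process); the file names, resolution and filter parameters here are made up purely for illustration.

```python
# Minimal sketch: take a frame rendered at a lower resolution, upscale it to
# the display resolution, then apply an unsharp mask to restore perceived
# detail. Hypothetical file names; requires `pip install Pillow`.
from PIL import Image, ImageFilter

def upscale_and_sharpen(path, target=(3840, 2160), amount=150, radius=2):
    img = Image.open(path)                        # e.g. a 1800p rendered frame
    upscaled = img.resize(target, Image.LANCZOS)  # scale up to 4K
    # The unsharp mask boosts local contrast, which is roughly what a
    # sharpening post-filter does to counter upscaling blur.
    return upscaled.filter(
        ImageFilter.UnsharpMask(radius=radius, percent=amount, threshold=2)
    )

# upscale_and_sharpen("frame_1800p.png").save("frame_4k_sharpened.png")
```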

That's DLSS 1.0, and if I didn't know better, I'd say you're purposely misleading. :D

I have to wonder why it is that other developers devalue DLSS, though?

For the same reasons devs did not bother with a lot of other tech (including Mantle, async compute, implicit primitive shaders, TrueAudio and other fancy stuff): (1) they're lazy, (2) it requires extra resources, or (3) both.

IMO it's an overcomplicated, specialized and expensive way of doing something. It's typical Nvidia. It was the same with PhysX: Bullet and Havok did the same thing far more efficiently

As long as it offers performance for the end users, it doesn't really matter how they're doing it. DLSS 2.0 offers better performance and image quality than AMD's approach.

Bullet and Havok didn't do the advanced stuff (fluid simulation and such); if I'm not mistaken, they only do more in specialized apps, not games. FEMFX is, I think, the latest of these "ethereal" techs. AMD bother little to help developers implement their own tech, and it shows.
 
Bullet and Havok didn't do the advanced stuff (fluid simulation and such); if I'm not mistaken, they only do more in specialized apps, not games. FEMFX is, I think, the latest of these "ethereal" techs. AMD bother little to help developers implement their own tech, and it shows.

WHAT????

CryEngine... Bullet.

Blender... Bullet.
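
Since the point here is that Bullet is a general-purpose, open-source physics engine (the one the posts above point at for CryEngine and Blender), here is a minimal rigid-body simulation using Bullet's Python bindings, pybullet. This is only an illustrative sketch, not how either engine actually integrates it; the scene and step count are arbitrary.

```python
# Minimal Bullet rigid-body sketch via pybullet (`pip install pybullet`).
# Drops a small cube onto a ground plane and steps the simulation headlessly.
import pybullet as p
import pybullet_data

client = p.connect(p.DIRECT)                              # headless physics server
p.setAdditionalSearchPath(pybullet_data.getDataPath())    # bundled example assets
p.setGravity(0, 0, -9.81)

p.loadURDF("plane.urdf")                                  # static ground plane
cube = p.loadURDF("cube_small.urdf", basePosition=[0, 0, 1])

for _ in range(240):                                      # ~1 second at the default 240 Hz step
    p.stepSimulation()

print(p.getBasePositionAndOrientation(cube)[0])           # cube should have settled near the ground
p.disconnect(client)
```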


 
Yep, AMD's iteration is sensational... Image Sharpening is fantastic tech, and I don't think people know just how good it is on the last gen!
 
Same... I just get whatever gives me the most FPS per £.

Yeah, same here. If more people did this, we'd have a much more competitive GPU market, and AMD would have had a much bigger R&D budget years ago. We'd all be winners, with fairer prices and more performance.

That said, just look at this thread. It's a Big Navi thread, and still we have people who openly admit to being financially dependent on Nvidia's success (heavily invested in its stock, or working for an Nvidia partner) making it their life's mission to downplay AMD and talk up Nvidia.
 
I am not emotionally attached to either company. I will pick the card which offers the best performance for the price. DLSS 2.0 looks fantastic to me, however.
So what you're saying is you think it is a GOOD thing that Nvidia have intentionally downgraded the rendering potential of their dies in favour of proprietary technology to fake an image that has seen little adoption? And this is not you drinking Kool-Aid? OK then.

It is not a case of being "emotionally attached" to a company, it is a case of using some common sense. You say DLSS 2.0 looks fantastic - we'll agree to disagree on that one - but would DLSS even be needed if the die space taken up by the tensor cores was actually used for generating the native image in the first place? You don't see this as a means for Nvidia to continually cheap out on their hardware yet inflate their prices further?
 
So what you're saying is you think it is a GOOD thing that Nvidia have intentionally downgraded the rendering potential of their dies in favour of proprietary technology to fake an image that has seen little adoption? And this is not you drinking Kool-Aid? OK then.

It is not a case of being "emotionally attached" to a company, it is a case of using some common sense. You say DLSS 2.0 looks fantastic, but would DLSS even be needed if the die space taken up by the tensor cores was actually used for generating the native image in the first place? You don't see this as a means for Nvidia to continually cheap out on their hardware and inflate their prices further?
To be honest I wasn't aware of this. I was merely under the impression that this was an additional tech they developed. I don't follow GPU news too much!
 
To be honest I wasn't aware of this. I was merely under the impression that this was an additional tech they developed. I don't follow GPU news too much!
Well, that's a big ol' can of worms you need to open before you come to make your purchase :p

Edit: I retract the Kool-Aid comment then if you weren't aware of the details. Marketing sucker instead? :P (I jest, I jest)
 
So what you're saying is you think it is a GOOD thing that Nvidia have intentionally downgraded the rendering potential of their dies in favour of proprietary technology to fake an image that has seen little adoption? And this is not you drinking Kool-Aid? OK then.

It is not a case of being "emotionally attached" to a company, it is a case of using some common sense. You say DLSS 2.0 looks fantastic - we'll agree to disagree on that one - but would DLSS even be needed if the die space taken up by the tensor cores was actually used for generating the native image in the first place? You don't see this as a means for Nvidia to continually cheap out on their hardware yet inflate their prices further?


If the image quality and frame rate are there, and beating the competition, does it really matter to you how that's achieved?
Why should it?
Is all optimisation "cheating"?
 