21:9? You need to get some 32:9 curved screen action; it's pretty much double monitors but without the annoying bezel and gap in between.
Yeah, I quite fancy one of those when my wallet allows (and the Mrs).
They also have a new tech which helps massively with VR rendering at no IQ cost at all. This is probably where the 2x Titan X performance thing comes from.
There's no master race in 21:9... stop fooling yourself, mate. Plenty of games have no support, new games ship without it, and others need some sort of tomfoolery to get it working.
AFAIK the entire point of the VR rendering gain is that it comes at an IQ cost. They lower the resolution of the outer edges because you aren't looking directly at them, so you won't notice it.
http://www.pcgamer.com/nvidia-gtx-1080-1070-features-detailed/
So it's not about no IQ loss, it's directly about reducing resolution to gain frame rate, it's a complete cheat and absolutely reduces IQ.
The bottom line is the same, however: fewer pixels rendered without a loss in image quality.
I think you misunderstand, there's no IQ loss/cheating going on here. It simply leverages their new multi-projection whatever to render a smaller number of pixels, equal to what you can actually see through the lenses.
I'm pretty sure valve already has a similar technique in software which Alex Vlachos talked about in his GDC VR rendering speech. He also mentioned a technique (foveated rendering?) that does lower the image quality/resolution of the visible outer portions of the screen but that is not what is going on here.
The article you linked says as much also.
The game was running at 4K but only hitting about 50 fps, just a bit below the desired 60 fps for a 60Hz display. By flipping a switch, the game renders the outer portions of the display at a lower resolution and stretches these, leaving the main section of the display—where you're most likely focused during gaming—at full quality. Frame rates jumped from 45-50 fps without multi-projection to over 60 fps, and while there was a slight loss in quality, it was only really visible if you were stationary and carefully looking for the change.
It's a choice; don't like it, don't use it.
For VR it absolutely does increase fps without reducing IQ though
You are literally wrong, the article says you're wrong, everything I've seen suggests Nvidia is lowering resolution on parts of the image to increase performance. Saying this increases FPS without reducing IQ is nothing short of ridiculous.
Our next topic is multi-resolution shading.
The basic problem that we’re trying to solve is illustrated here. The image we present on a
VR headset has to be warped to counteract the optical effects of the lenses.
In this image, everything looks curved and distorted, but when viewed through the lenses,
the viewer perceives an undistorted image.
The trouble is that GPUs can’t natively render into a distorted view like this –
it would make triangle rasterization vastly more complicated.
Current VR platforms all solve this problem by first rendering a normal image (left) and
then doing a postprocessing pass that resamples the image to the distorted view (right).
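That warp step can be sketched as a simple radial remapping of image coordinates. This is an illustrative toy: the single-coefficient model and the value of `k` are assumptions for the sketch, not any headset's actual distortion profile, which typically uses several polynomial terms per eye.

```python
def warp_toward_center(x, y, k=0.25):
    """Map a normalized image coordinate (center at 0,0, edges near +/-1)
    toward the center, so the lens's pincushion distortion cancels it out.
    Single-coefficient radial model; real HMDs use more terms (assumption)."""
    r2 = x * x + y * y            # squared distance from the image center
    scale = 1.0 / (1.0 + k * r2)  # shrink more the further out we are
    return x * scale, y * scale

# The center is unchanged, while a corner sample is pulled inward --
# this is the "edges getting squashed" effect described above.
print(warp_toward_center(0.0, 0.0))  # (0.0, 0.0)
print(warp_toward_center(1.0, 1.0))  # pulled toward the center
```

Because the edges shrink under this mapping, any pixels rendered at full density out there end up oversampled, which is exactly the waste the next paragraphs describe.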
If you look at what happens during that distortion pass, you find that while the center of
the image stays the same, the edges are getting squashed quite a bit.
This means we’re over-shading the edges of the image. We’re generating lots of pixels that
are never making it out to the display–they’re just getting thrown away during the
distortion pass. That’s wasted work and it slows you down.
The idea of multi-resolution shading is to split the image up into multiple viewports – here, a 3x3 grid of them.
We keep the center viewport the same size, but scale down all the ones around the edges.
This better approximates the warped image that we want to eventually generate, but
without so many wasted pixels. And because we shade fewer pixels, we can render faster.
Depending on how aggressive you want to be with scaling down the edges, you can save
anywhere from 25% to 50% of the pixels. That translates into a 1.3x to 2x pixel shading
speedup.
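The savings from that 3x3 split follow from a little arithmetic. In this sketch, the center viewport covers a fraction `f` of each axis and the edge strips are rendered at scale `s`; both parameter values below are illustrative picks, not Nvidia's actual defaults.

```python
def multires_pixel_ratio(f, s):
    """Fraction of pixels shaded with a 3x3 multi-res grid, relative to
    full-resolution rendering. The center f-by-f region stays at full
    resolution; the surrounding strips are scaled down by s per axis."""
    axis = f + (1.0 - f) * s  # effective resolution along one axis
    return axis * axis        # both axes are scaled the same way

# A mild setting saves roughly a quarter of the pixels...
print(1.0 - multires_pixel_ratio(0.7, 0.5))   # ~0.28
# ...and an aggressive one saves about half, matching the 25%-50% range above.
print(1.0 - multires_pixel_ratio(0.6, 0.25))  # ~0.51
```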
Another way to look at what’s going on here is as a graph of pixel density across the image.
The green line represents the ideal pixel density needed for the final warped image. With
standard rendering, we’re taking the maximum density – which occurs near the center
– and rendering the whole image at that high density.
Multi-resolution rendering allows us to reduce the resolution at the edges, to more closely approximate the ideal density while never dropping below it.
In other words, we never have less than one rendered pixel per display pixel, anywhere in the image.
This setting lets us save about 25% of the pixels, which is equivalent to a 1.3x improvement in pixel shading performance.
It’s also possible to go to a more aggressive setting, where we reduce the resolution in the
edges of the image even further. This requires some care, as it could visibly affect image
quality, depending on your scene. But in many scenes, even an aggressive setting like this
may be almost unnoticeable – and it lets you save 50% of the pixels, which translates into a
2x pixel shading speedup.
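Those 1.3x and 2x figures follow directly from the pixel savings, under the idealized assumption (for this sketch) that pixel shading cost is proportional to pixel count:

```python
def shading_speedup(pixel_savings):
    """Pixel-shading speedup from shading fewer pixels, assuming shading
    cost scales linearly with pixel count (an idealized model)."""
    return 1.0 / (1.0 - pixel_savings)

print(round(shading_speedup(0.25), 2))  # 1.33 -- the conservative setting
print(round(shading_speedup(0.50), 2))  # 2.0  -- the aggressive setting
```

In practice the realized frame-rate gain is smaller, since geometry and CPU work are unaffected by shading fewer pixels.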
But isn't that the 'old' multi-resolution shading method, and not the new 'multi-projection' feature?
Multi-projection has nothing to do with lowering resolution in the peripheral vision.
A naive VR rendering solution is to render the left-eye viewport and then independently render the right-eye viewport in a completely new rendering pass. Multi-projection rendering allows both eyes to be rendered simultaneously, sharing resources and optimizing rendering between the two views.
Resolution isn't changed in the slightest anywhere in the scene, merely redundancy is removed between rendering the left and right views.
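The distinction this post draws can be put into a toy cost model: naive stereo runs the whole pipeline twice, while a shared-geometry pass does the scene work once and only duplicates the per-eye projection and shading. The split between `geometry_cost` and `per_eye_cost` below is a made-up illustration, not measured numbers.

```python
def naive_stereo_cost(geometry_cost, per_eye_cost):
    """Two full passes: geometry work is repeated once per eye."""
    return 2 * (geometry_cost + per_eye_cost)

def multiprojection_cost(geometry_cost, per_eye_cost):
    """One pass: geometry is processed once, then projected per eye.
    Per-eye resolution (and hence per_eye_cost) is unchanged."""
    return geometry_cost + 2 * per_eye_cost

# With any nonzero shared geometry work, the single pass wins,
# without touching resolution anywhere in either eye's image.
print(naive_stereo_cost(4.0, 6.0))     # 20.0
print(multiprojection_cost(4.0, 6.0))  # 16.0
```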