Intel details game-boosting frame generation

Intel is preparing to introduce its own frame generation technology similar to DLSS 3 and FSR 3, called ExtraSS (via WCCFTech). Detailed at Siggraph Asia in Sydney, ExtraSS is not just a clone of DLSS 3 or FSR 3: instead of using frame interpolation, it uses frame extrapolation. Although these two methods are very similar, extrapolation has some advantages that could set ExtraSS apart from its competitors.

Link.
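For a concrete feel for the distinction the article draws, here is a minimal toy sketch in Python. To be clear, this is not Intel's algorithm (real pipelines work with motion vectors, optical flow and geometry data, not raw pixel arithmetic), and the function names are purely illustrative; it only shows why interpolation needs a frame that hasn't been displayed yet, while extrapolation works from frames that already exist:

```python
# Toy sketch (not Intel's actual method): interpolation vs extrapolation
# on NumPy arrays standing in for rendered frames.
import numpy as np

def interpolate(prev_frame: np.ndarray, next_frame: np.ndarray) -> np.ndarray:
    # Interpolation (DLSS 3 / FSR 3 style): needs the *next* rendered
    # frame, so the pipeline must hold frames back -> added latency.
    return (prev_frame + next_frame) / 2

def extrapolate(older_frame: np.ndarray, newer_frame: np.ndarray) -> np.ndarray:
    # Extrapolation (the ExtraSS approach, per the article): predicts
    # ahead from frames already rendered, so nothing is delayed, but a
    # wrong guess about motion shows up as artifacts.
    motion = newer_frame - older_frame
    return newer_frame + motion

f0 = np.zeros((4, 4))          # frame at t=0
f1 = np.full((4, 4), 10.0)     # frame at t=1
print(interpolate(f0, f1)[0, 0])   # 5.0  -> sits between f0 and f1
print(extrapolate(f0, f1)[0, 0])   # 20.0 -> a guess at the frame after f1
```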

 
y'know I guess this is as good a place as any to ask this, I thought frame generation and other fancy stuff relied on "tensor cores" or some kinda fancy nvidia nonsense? Why can't 3xxx series do any of it?
 
y'know I guess this is as good a place as any to ask this, I thought frame generation and other fancy stuff relied on "tensor cores" or some kinda fancy nvidia nonsense? Why can't 3xxx series do any of it?
It can, as seen with FSR3 frame gen on 3000 series cards; Nvidia just wanted to lock it behind 4000 series cards so people would upgrade.
 
That's it then, all GPU vendors have now officially declared Moore's Law dead!

With the expectation that node advances have almost stopped (and the most important part of Moore's Law was the reduction in transistor costs), they all now expect fake frames and clever upscaling to make up the shortfall.

All this while full path-traced ray tracing at 4K with decent frame rates would probably require 10x or more hardware.
 
The term "fake frames" needs to be eradicated from all places of the internet. It's a bit cringe really :

ALL frames are "fake"; they all need to be computed and rendered, whether "pure" or not.

Also, as a general overview, frame gen now produces higher-quality results than traditional rendering, which is quite evidently inefficient and heavy on computational resources. AI is the future of high-quality, high-speed rendering whether anyone likes it or not.
 
y'know I guess this is as good a place as any to ask this, I thought frame generation and other fancy stuff relied on "tensor cores"
You need Optical Flow Accelerators for Frame Gen, can I interest you in our Ada lineup?

[image: jensun.png]
 
That's it then, all GPU vendors have now officially declared Moore's Law dead!

With the expectation that node advances have almost stopped (and the most important part of Moore's Law was the reduction in transistor costs), they all now expect fake frames and clever upscaling to make up the shortfall.

All this while full path-traced ray tracing at 4K with decent frame rates would probably require 10x or more hardware.
I wouldn't mind so much, if it were not for the fact they are using frame generation/upscaling as an excuse for shrinkflation. Then next generation you are locked out of the next improvement due to "reasons".

If it were frame generation/upscaling on top of decent generational improvements, it would be fine. Instead, the generational improvements ARE frame generation/upscaling. But even frame generation/upscaling is reliant on cards having enough pure processing power in the first place.

After all, how can an RTX4060TI/RTX4060/RX7600 really be much of an upgrade over an RTX3060TI/RTX3060/RX6600XT after nearly 3 years? By extension, how can you push forward even raytracing if the most-sold cards are so dire? This is what hardware enthusiasts on tech forums don't seem to understand, because most buy higher-level products, so are affected less by shrinkflation.
 
y'know I guess this is as good a place as any to ask this, I thought frame generation and other fancy stuff relied on "tensor cores" or some kinda fancy nvidia nonsense? Why can't 3xxx series do any of it?

As Case said, it's a gimmick to get you to upgrade to a 4000 series GPU (*locked*); however, as is the case with the recent mod, it now works on the 3000 series and isn't hardware-specific after all :rolleyes: ;)
 
Yes :D :D :D


To think Nvidia lied all this time :rolleyes:
Christ.

DLSS 3 FG does not work on anything other than the 4000 series.

DLSS upscaling works with FSR 3 via that 'mod'. It's not DLSS 3 FG. Do people even read anymore?
 
Yes :D :D :D


To think Nvidia lied all this time :rolleyes:

I did think it was weird an RTX3090 couldn't run it, but an RTX4060 could.
 
Christ.

DLSS 3 FG does not work on anything other than the 4000 series.

DLSS upscaling works with FSR 3 via that 'mod'. It's not DLSS 3 FG. Do people even read anymore?
Of course people don't actually read the info; they are too busy jumping on the Nvidia hate train to read about it.
 
Extrapolation requires a higher resolution input and could still result in lots of visual glitches and artifacts, as Intel admits in its white paper. However, the benefit is that there is a reduced latency penalty compared to interpolation, which has to delay frames so it can generate new ones (otherwise, they'd show up out of order).

Requires higher resolution... than what? 1080P or 1440P? And how does that affect downscaling?

The thing with both DLSS 3 and FSR 3 is you shouldn't use them much below 60 FPS: the interpolated frame is AI-generated from the previous frame and the following frame, and at a lower FPS the differences between frames are greater, so the AI has less accurate data to work with and there's a higher chance of the result being graphically garbled.
AMD actually turns FSR 3 off and on again on the fly during fast lateral mouse movements, as just whacking your mouse from side to side stretches the differences between frames even at a high refresh rate.

With extrapolation you only have the preceding frames to work with; you're not using data from the following frame. Yes, this cuts out one frame of latency, but the reason Nvidia and AMD do it the other way is to get a more accurate AI-generated frame with fewer artefacts. It seems the reason Intel needs a higher resolution is to preserve the detail in the frame it generates the next frame from; it doesn't have the benefit of using two frames, one on either side of the AI-generated frame.

It seems to me that, either through hardware limitations or some other reason, Intel is trying to get away with doing half the AI work and marketing it as a good thing through lower latency. That's empty and misleading marketing, as both Nvidia and AMD have existing latency-lowering technology to draw on ("Nvidia Reflex" and "AMD Anti-Lag") to nullify that frame extrapolation advantage.
It's technically true only if you ignore Nvidia Reflex and AMD Anti-Lag; in practical terms, in user experience, it's not true.
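To put rough numbers on the latency point (back-of-envelope figures only, not vendor measurements, and the penalty values here are assumptions for illustration): interpolation has to hold back roughly one rendered frame before it can insert the generated one, extrapolation doesn't, and Reflex/Anti-Lag attack a different latency source (the CPU-side render queue):

```python
# Rough latency arithmetic for the point above. Illustrative numbers,
# not measurements from Nvidia, AMD or Intel.

def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

render_fps = 60.0
ft = frame_time_ms(render_fps)      # ~16.7 ms per rendered frame

interpolation_penalty = ft          # roughly one held-back frame
extrapolation_penalty = 0.0         # predicts ahead, nothing held back

print(f"frame time:            {ft:.1f} ms")
print(f"interpolation penalty: ~{interpolation_penalty:.1f} ms")
print(f"extrapolation penalty: ~{extrapolation_penalty:.1f} ms")
# Reflex / Anti-Lag reduce render-queue latency, a separate source,
# which is why they can offset, but not erase, this difference.
```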
 
Requires higher resolution... than what? 1080P or 1440P? And how does that affect downscaling?

The thing with both DLSS 3 and FSR 3 is you shouldn't use them much below 60 FPS: the interpolated frame is AI-generated from the previous frame and the following frame, and at a lower FPS the differences between frames are greater, so the AI has less accurate data to work with and there's a higher chance of the result being graphically garbled.

Most likely why it looks like **** in Alan Wake 2, but in games such as The Witcher 3 and Ratchet it's much, much better.
 
Most likely why it looks like **** in Alan Wake 2, but in games such as The Witcher 3 and Ratchet it's much, much better.

Yes, and I think a lot of people, even tech journos, just don't get that: the longer the time between rendered frames (lower FPS), the greater the difference between those frames... and the worse the AI-generated frame will look.
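A quick back-of-envelope illustration of that point, assuming a hypothetical object moving at a fixed on-screen speed (the 600 px/s figure is made up for the example):

```python
# Quick numbers behind "lower FPS = worse generated frames": for an
# object moving at a fixed on-screen speed, the gap the AI has to
# bridge between real frames grows as FPS drops. Illustrative only.

speed_px_per_s = 600.0   # hypothetical on-screen motion

for fps in (120, 60, 30):
    gap_ms = 1000.0 / fps
    pixels_moved = speed_px_per_s * gap_ms / 1000.0
    print(f"{fps:>3} FPS: {gap_ms:5.1f} ms between real frames, "
          f"object moved {pixels_moved:4.0f} px to reconstruct")
# 120 FPS ->  5 px gap; 30 FPS -> 20 px gap: four times more motion
# for the generated frame to guess, hence more garbling at low FPS.
```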
 