
Microsoft Seemingly Looking to Develop AI-based Upscaling Tech via DirectML

https://www.techpowerup.com/284265/...-develop-ai-based-upscaling-tech-via-directml

Microsoft seems to be throwing its hat into the image-upscaling battle currently raging between NVIDIA and AMD. The company has added two new job openings to its careers page: one for a Senior Software Engineer and another for a Principal Software Engineer for Graphics. Those job openings would be quite innocent by themselves; however, once we cut through the chaff, it becomes clear that the Senior Software Engineer is expected to "implement machine learning algorithms in graphics software to delight millions of gamers," while working closely with "partners" to develop software for "future machine learning hardware" - partners here could be first-party titles or even the hardware providers themselves (read: AMD). AMD themselves touted a DirectML upscaling solution back when they first introduced their FidelityFX program - and FSR clearly isn't it.



It is interesting that Microsoft posted these job openings on June 30th - a few days after AMD's reveal of their FidelityFX Super Resolution (FSR) solution for all graphics cards, which Microsoft themselves confirmed would be implemented in the Xbox product stack where applicable. Of course, the fact that one solution is already available does not mean companies should rest on their laurels - AMD is surely at work on improving its FSR tech as we speak, and Microsoft has seen the advantages of having a pure ML-powered image-upscaling solution thanks to NVIDIA's DLSS. Whether Microsoft's DirectML solution will improve on DLSS as it exists at its time of launch (if that ever comes) is, of course, unknowable at this point.
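For the unfamiliar: "pure ML-powered upscaling" at its core just means a trained network that maps a low-resolution frame to a higher-resolution one. A toy PyTorch sketch of the general shape (purely illustrative - nothing to do with how DLSS or any DirectML solution is actually built):

Code:
import torch
import torch.nn as nn

# Bare-bones SRCNN-style upscaler: low-res frame in, scale-x frame out.
class ToyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a scale-x larger image
        )

    def forward(self, low_res):
        return self.net(low_res)

frame = torch.rand(1, 3, 540, 960)   # a 960x540 "rendered" frame
print(ToyUpscaler()(frame).shape)    # -> torch.Size([1, 3, 1080, 1920])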

 
This is no surprise at all, but great to see.

MS have been heavily into graphics, image processing and ML/DL for a very long time. The Kinect used incredibly advanced machine learning (thanks Zarax) for its time. The training dataset was millions of images.

Will be really interesting to see how this playing field evolves with the giants of ML/DL battling it out. Very cool :)

Keep in mind this could be an ML/DL approach to ray tracing, physics or anything else in the context of games and graphics. There is a TON of research into accelerating physics simulations by like 1000x using ML models trained against very accurate simulators, and the results are absolutely bonkers.
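To give a feel for that surrogate-model trick: you run the slow, accurate simulator offline to generate training pairs, then train a cheap model to stand in for it at runtime. A minimal sketch with a toy "simulator" (projectile motion standing in for an expensive solver; the actual speedup obviously depends on the real workload):

Code:
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulator(v0, angle):
    """Stand-in for a slow, accurate solver: integrates a projectile's range."""
    g, dt = 9.81, 0.01
    x, y = 0.0, 0.0
    vx, vy = v0 * np.cos(angle), v0 * np.sin(angle)
    while y >= 0.0:
        x, y, vy = x + vx * dt, y + vy * dt, vy - g * dt
    return x

rng = np.random.default_rng(0)
inputs = rng.uniform([1.0, 0.1], [50.0, 1.4], size=(1000, 2))   # (v0, angle) pairs
targets = np.array([expensive_simulator(v, a) for v, a in inputs])

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(inputs, targets)

print(surrogate.predict([[30.0, 0.7]]))   # near-instant approximation
print(expensive_simulator(30.0, 0.7))     # vs re-running the integrator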

EDIT: Corrected a wrong assumption. Thanks Zarax.
 
Holy ****, I made the assumption about NNs like 10 years ago based on talk of their crazy big training datasets, and never corrected it. Thanks for pointing that out.

The paper from MS is really interesting: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/BodyPartRecognition.pdf
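The gist, massively simplified: compute a bag of depth-difference features around every pixel, then have a randomised decision forest classify that pixel into a body part (the final system used 3 trees of depth 20 over 31 body parts). A toy sklearn sketch of the pipeline's shape - synthetic features, not the paper's actual ones:

Code:
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pixels, n_features, n_body_parts = 10_000, 20, 31

# Pretend per-pixel features; in the paper these are depth differences at
# random offsets around each pixel, normalised by the pixel's own depth.
X = rng.normal(size=(n_pixels, n_features))
y = rng.integers(0, n_body_parts, size=n_pixels)           # fake labels

forest = RandomForestClassifier(n_estimators=3, max_depth=20).fit(X, y)
print(forest.predict(X[:5]))                               # body-part label per pixel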
Yes, that paper makes me wonder if the tech world jumped on deep learning too fast. Even though there are some experimental alternatives (like the deep forest paper), it appears the biggest companies are not interested, perhaps because capital-intensive deep learning helps keep competition limited...
 
Yes, that paper makes me wonder if the tech world jumped on deep learning too fast. Even though there are some experimental alternatives (like the deep forest paper), it appears the biggest companies are not interested, perhaps because capital-intensive deep learning helps keep competition limited...

100%.

I fell for the same thing, assuming the Kinect was an NN, and I used k-Nearest-Neighbour in my Thesis... :rolleyes:
 
Err, KNN and NN are two very different things though. One is an instance-based classifier/regressor that just memorises the training data; the other learns a set of weights through training.
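To make the difference concrete - both can classify, but KNN's "training" is just storing the data, while the net actually fits weights:

Code:
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)   # "fit" = memorise the points
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X, y)  # learns weights

print(knn.score(X, y), net.score(X, y))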

That's exactly my point. I could have applied a (C)NN in my Thesis, but I didn't for many, many reasons. But I still made the assumption that the Kinect used NNs. The point is, I have a rough idea about the field and I still made poor assumptions. How are business people and laymen supposed to make better assumptions?
 

Notable Restrictions of Microsoft Auto SR

  • Only supports Qualcomm Snapdragon X and Windows 11 on Arm (24H2 or newer), for now. The PC must also be a Copilot+ PC.
  • Restricted to native Arm titles and certain DirectX 11 and DirectX 12 games. DX11/DX12 games in 10-bit formats, as well as OpenGL, Vulkan, and older DirectX versions like 9 or 8, aren't supported at all.
  • For games where it can work but isn't automatically enabled, the end user has to configure each game manually.
  • Auto SR cannot be used simultaneously with HDR, which is a significant sacrifice in color vibrancy and accuracy for devices with OLED and high-end IPS panels.
  • Enabling or disabling the passive Auto SR indicator requires registry edits (see the sketch at the end of this post), for some reason. There's no reason this shouldn't be a quick toggle in Windows.
  • Auto SR doesn't support display resolutions under 1080p. Considering how many mobile devices use a sub-1080p panel - and how much they would benefit from the extra performance - this is an odd omission.

So don't be expecting Microsoft to compete with FSR, XeSS or DLSS then :cry:
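For what it's worth, the registry dance presumably boils down to something like this. The key path and value name below are placeholders I made up, not the ones Microsoft documents - check their Auto SR docs before touching anything:

Code:
import winreg

# HYPOTHETICAL key path and value name - placeholders only, not the
# documented Auto SR keys.
KEY_PATH = r"Software\Microsoft\DirectX\AutoSR"
VALUE_NAME = "ShowIndicator"

with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 1)  # 1 = show, 0 = hide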
 
That's exactly my point. I could have applied a (C)NN in my Thesis, but I didn't for many, many reasons. But I still made the assumption that the Kinect used NNs. The point is, I have a rough idea about the field and I still made poor assumptions. How are business people and laymen supposed to make better assumptions?
The Kinect actually used a random forest. While it might seem odd given the success of deep learning, we also recently found out that neural networks can be represented as decision trees, so it was ultimately a good idea...
 
Aren't Sony doing the same thing with the PS5?

Who's really driving this... it'll nullify DLSS, genius.
 
DLSS won't be going anywhere, because RT/PT is being extended through acceleration tech such as ReSTIR GI, Ray Reconstruction and SER. SER alone gives an 8-20% performance boost. These technologies only work on RTX cards, so DLSS will always have a foothold. No other vendor has even leaked a roadmap of anything similar within their RT hardware scope; they simply don't have the budget for Nvidia's level of R&D.

Once Windows integrates DirectSR as well (the initial release currently seems to target mobile chipsets from Qualcomm etc.), it will incorporate all vendor upscalers, so nothing changes on the DLSS front - just update the DLL files manually as we currently do and continue as normal. Games will just target DirectSR, and within Windows the user selects their flavour of upscaler based on what their card supports.
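Conceptually it's just an abstraction layer: the game codes against one upscaling interface and the platform plugs in whichever vendor backend the hardware (and the user) prefers. A rough Python sketch of that shape - illustrative only, the real DirectSR API is C++/D3D12 and looks nothing like this:

Code:
from typing import Protocol

class UpscalerBackend(Protocol):
    name: str
    def upscale(self, frame: bytes, out_w: int, out_h: int) -> bytes: ...

class FSRBackend:
    name = "FSR"
    def upscale(self, frame, out_w, out_h):
        return frame  # a real backend would dispatch compute shaders here

class DLSSBackend:
    name = "DLSS"
    def upscale(self, frame, out_w, out_h):
        return frame  # a real backend would run the tensor-core model here

def pick_backend(supported, user_pref):
    """Platform-side selection: honour the user's choice if supported."""
    for backend in supported:
        if backend.name == user_pref:
            return backend
    return supported[0]  # otherwise fall back to whatever works

# The game only ever sees the one interface:
backend = pick_backend([FSRBackend(), DLSSBackend()], user_pref="DLSS")
print(backend.name, backend.upscale(b"frame", 1920, 1080))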
 
DirectML upscaling has been a long time coming, but it still seems to be a long way off for us regular PC people.

It's what DLSS is; now that all GPUs have machine-learning hardware, there's no reason it can't be here next week...
 
The Kinect actually used a random forest. While it might seem odd given the success of deep learning, we also recently found out that neural networks can be represented as decision trees, so it was ultimately a good idea...
That's awesome. I've been using random forests in some research very recently; somewhat interpretable, but at the cost of memory. We used them as a first model to get a better understanding of feature importance and to tune our inputs, and are now transitioning to more memory-efficient model types, e.g. NNs.

I'm amazed Kinect used them, do you know how big the model ended up being?!?
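For anyone curious, the workflow I described above looks roughly like this (toy data throughout):

Code:
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=1000, n_features=50, n_informative=8, random_state=0)

# Step 1: fit a forest and read off which inputs actually matter.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
keep = np.argsort(rf.feature_importances_)[-8:]   # keep the 8 strongest features

# Step 2: hand the pruned inputs to a more compact model.
compact = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X[:, keep], y)
print("kept:", sorted(keep.tolist()), "R^2:", round(compact.score(X[:, keep], y), 3))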
 
That's awesome. I've been using random forests in some research very recently; somewhat interpretable, but at the cost of memory. We used them as a first model to get a better understanding of feature importance and to tune our inputs, and are now transitioning to more memory-efficient model types, e.g. NNs.

I'm amazed Kinect used them, do you know how big the model ended up being?!?
I think it had a limited number of features, but you can find the paper here: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/ks_book_2012.pdf
As for being memory-intensive, I don't find RFs bad as long as they're properly tuned. They're solid performers and relatively stable, although rarely the most accurate model around.
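On the tuning point - node count is what blows the memory up, and max_depth / n_estimators bound it directly. A quick way to see it:

Code:
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

for depth in (None, 10, 5):
    rf = RandomForestClassifier(n_estimators=100, max_depth=depth, random_state=0).fit(X, y)
    nodes = sum(tree.tree_.node_count for tree in rf.estimators_)
    print(f"max_depth={depth}: {nodes} nodes, train acc={rf.score(X, y):.3f}")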
 