AMD Acquires Xilinx

AMD seems to have done well with this acquisition. Xilinx produces many different products, but I think it's the AI products that will be relevant to GPUs.

Check out this presentation from Xilinx, particularly slides 6-9, where they show Super Resolution benchmarks against the old Nvidia V100. Bear in mind that Nvidia's newer Ampere-based A100 is around 3-4 times faster than the V100, so the comparison baseline is a generation old.
https://china.xilinx.com/content/dam/xilinx/publications/presentations/D1-02.pdf
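
As a rough back-of-the-envelope check on what that baseline means, here's a minimal sketch. The Xilinx-over-V100 figure below is a hypothetical placeholder, not a number from the deck; read the real figures off slides 6-9:

```python
# Back-of-the-envelope: rescaling a "vs V100" result against the newer A100.
# xilinx_vs_v100 is a hypothetical placeholder -- take the real figure from
# slides 6-9 of the linked deck. The 3-4x A100-over-V100 range is the rough
# figure mentioned above.

xilinx_vs_v100 = 2.0  # hypothetical speedup over V100, for illustration only

for a100_vs_v100 in (3.0, 4.0):
    relative_to_a100 = xilinx_vs_v100 / a100_vs_v100
    print(f"If A100 = {a100_vs_v100:.0f}x V100, a {xilinx_vs_v100:.0f}x-over-V100 "
          f"part is ~{relative_to_a100:.2f}x an A100")
```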

Some more info ...
https://www.xilinx.com/content/dam/xilinx/publications/product-briefs/alveo-product-brief.pdf


The ML accelerator certainly appears fast enough to compete with Nvidia in general ML, but how that translates to gaming applications remains to be seen. Hopefully AMD will draw on some of Xilinx's know-how before they finally deliver a DLSS competitor.
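
To put "fast enough for gaming" in concrete terms, here's a minimal frame-budget sketch; every number in it is an illustrative assumption, not a measured figure:

```python
# Rough frame-budget check for real-time AI upscaling.
# All inputs are illustrative assumptions, not measured numbers.

target_fps = 60
frame_budget_ms = 1000 / target_fps  # ~16.7 ms per frame at 60 fps

render_cost_ms = 12.0    # assumed time to render the lower-res frame
upscaler_cost_ms = 1.5   # assumed inference time for the upscaling network

total_ms = render_cost_ms + upscaler_cost_ms
print(f"Frame budget: {frame_budget_ms:.1f} ms")
print(f"Render + upscale: {total_ms:.1f} ms "
      f"({'fits' if total_ms <= frame_budget_ms else 'misses'} the budget)")
```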
 
Fingers crossed this means they'll be able to compete with Nvidia on AI upscaling and RT now. Maybe the purchase came a bit too late to see the benefits in time for RDNA 3, though? If so, that sadly means waiting for RDNA 4, by which time Nvidia will probably still be leading the way for another 1-2 years.

I would rather the pair of them ditched upscaling tech and concentrated on stronger hardware!
 
So like I said, stronger hardware :rolleyes:

That's the point... they aren't going to be able to just magically come up with that hardware, hence the 2-3 year wait...

Why not better hardware AND AI upscaling?

And this.

People need to realise these are massive companies; they have different divisions and teams focusing on different things. It's not just a case of "oh, put that on the back burner, we need everyone to focus on this one thing"...
 
Based on Nvidia's work, AI "needs" dedicated hardware, which means bigger chips that are more expensive and use more power (since you can't fully shut the extra silicon down when it's not in use).
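
As a toy illustration of the "bigger chips cost more" point, here's a sketch of per-die cost with and without dedicated AI blocks. All of the numbers are made-up assumptions, and it ignores yield entirely:

```python
# Toy die-area/cost sketch for adding dedicated AI blocks to a GPU.
# Every number here is an assumption for illustration only.

base_die_mm2 = 500.0      # assumed GPU die size without AI blocks
ai_block_mm2 = 50.0       # assumed extra area for dedicated matrix/AI units
wafer_cost = 10000.0      # assumed cost per wafer (USD)
wafer_area_mm2 = 70000.0  # usable area of a 300 mm wafer, roughly

def cost_per_die(die_mm2):
    # Ignores yield and edge loss -- good enough for a rough comparison.
    dies_per_wafer = wafer_area_mm2 / die_mm2
    return wafer_cost / dies_per_wafer

without_ai = cost_per_die(base_die_mm2)
with_ai = cost_per_die(base_die_mm2 + ai_block_mm2)
print(f"Without AI blocks: ${without_ai:.2f} per die")
print(f"With AI blocks:    ${with_ai:.2f} per die "
      f"(+{100 * (with_ai / without_ai - 1):.0f}%)")
```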

That's only one way of doing it, though. Graphics cards used to handle vertex and pixel shading separately before the two were eventually unified. Just because something was originally done a certain way doesn't mean it'll always be done that way.
 