
Lucid can scale close to linearly with 2 GPUs

Umm, a few games off the top of my head that can, and do, scale 100%:

COD4

Call of Juarez

FEAR.

I'm pretty sure there are more, but I can't remember any.

What I understand this chip to do is make 100% scaling easier across all games, not simply achieve it in a handful of titles, which we know has already been done.
 
The Hydra 100 then appears to the host OS as a PCIe device, with its own driver. It intercepts calls made to the most common graphics APIs—OpenGL, DirectX 9/10/10.1—and reads in all of the calls required to draw an entire frame of imagery. Lucid's driver and the Hydra 100's RISC logic then collaborate on breaking down all of the work required to produce that frame, dividing the work required into tasks, determining where the bottlenecks will likely be for this particular frame, and assigning the tasks to the available rendering resources (two or more GPUs) in real time—for graphics, that's within the span of milliseconds. The GPUs then complete the work assigned to them and return the results to the Hydra 100 via PCI Express. The Hydra streams in the images from the GPUs, combines them as appropriate via its compositing engine, and streams the results back to the GPU connected to the monitor for display.
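
To picture that per-frame flow, here's a toy sketch (Python, purely for illustration) of the kind of load balancing being described. The draw calls, their costs and the greedy assignment are all made up; Lucid hasn't published how its analysis actually works, and on the real chip this happens in RISC hardware rather than in software.

```python
# Toy sketch of per-frame load balancing in the spirit of the description
# above. The draw calls, their costs and the greedy policy are invented;
# the real Hydra 100 does its analysis in hardware with a proprietary model.

def split_frame_work(draw_calls, gpus):
    """Greedily hand each draw call to whichever GPU has the least queued cost."""
    queues = {gpu: [] for gpu in gpus}
    load = {gpu: 0.0 for gpu in gpus}
    for call, cost in draw_calls:
        target = min(gpus, key=lambda g: load[g])  # least-loaded GPU gets the task
        queues[target].append(call)
        load[target] += cost
    return queues, load

# A pretend frame: (draw call group, estimated cost in ms).
frame = [("terrain", 3.0), ("characters", 4.5), ("shadows", 2.5),
         ("particles", 1.5), ("post-processing", 2.0)]

queues, load = split_frame_work(frame, ["GPU0", "GPU1"])
for gpu, calls in queues.items():
    print(gpu, calls, f"~{load[gpu]:.1f} ms of estimated work")

# Each GPU renders its share, then the compositing engine would merge the
# partial results and hand the finished frame to the display GPU.
```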

Holy Latency Batman!
 
Does anyone have any information about how the technology actually works?

If it's another AFR (alternate frame rendering) method then I'm not really interested. Having separate GPUs render alternate frames just leads to too many problems (like uneven frame output - the dreaded 'microstutter').

We need a technology which allows multiple GPUs to work on the same frame. Of course this carries a greater communication cost, meaning faster pathways are needed between the GPUs. This needs to be the focus of the future though.
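
To see why AFR pacing is such a problem, here's a toy simulation of the microstutter effect. Every number is invented; it's only meant to show how alternating frames between two GPUs can produce uneven output even when the average frame rate looks fine.

```python
# Toy illustration of AFR microstutter. Two GPUs each take 30 ms per frame,
# but GPU1 starts only 5 ms behind GPU0, so finished frames arrive in uneven
# bursts. Every number here is invented purely to show the effect.

gpu_frame_time = 30.0  # ms each GPU needs to render one frame
offset = 5.0           # how far out of phase the second GPU runs

present_times = []
for frame in range(8):
    gpu = frame % 2                                   # AFR: even frames on GPU0, odd on GPU1
    start = (frame // 2) * gpu_frame_time + gpu * offset
    present_times.append(start + gpu_frame_time)

intervals = [later - earlier for earlier, later in zip(present_times, present_times[1:])]
print("frame-to-frame intervals (ms):", intervals)
# Prints alternating ~5 ms and ~25 ms gaps. The average is 15 ms (~66 fps),
# but it feels far worse than a steady 15 ms cadence would.
```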




edit - just read the information above. Looks like GPUs co-operate in rendering a single frame. This is fantastic :) :)
 
Does anyone have any information about how the technology actually works?

If it's another AFR (alternate frame rendering) method then I'm not really interested. Having separate GPUs render alternate frames just leads to too many problems (like uneven frame output - the dreaded 'microstutter').

We need a technology which allows multiple GPUs to work on the same frame. Of course this carries a greater communication cost, meaning faster pathways are needed between the GPUs. This needs to be the focus of the future though.

Well, the way I keep hearing it described is that the Lucid chip intercepts DirectX calls and sends them to the cards' respective drivers to get that part of the scene calculated; the second card then sends its data to the first card and the frame gets rendered. I'm assuming that method just breaks up each frame, renders different parts of it individually, puts the frame back together on the 'master' card and outputs it to the monitor. This is as opposed to AFR, which simply makes each card render every other frame.
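
If it really is splitting each frame spatially, the compositing step on the 'master' card could conceptually look something like this. It's only a toy sketch; the real chip composites images in hardware, and the split wouldn't have to be simple top/bottom halves.

```python
# Toy split-frame rendering: each GPU renders a horizontal slice of the
# frame and the "master" card stitches the slices back together.
# render_slice() is a stand-in for whatever the real driver/GPU would do.

WIDTH, HEIGHT = 8, 6

def render_slice(y_start, y_end, gpu_id):
    """Pretend to render rows y_start..y_end-1 by tagging each pixel with its GPU."""
    return [[f"gpu{gpu_id}" for _ in range(WIDTH)] for _ in range(y_start, y_end)]

def composite(slices):
    """The master card concatenates the slices into the final frame."""
    frame = []
    for s in slices:
        frame.extend(s)
    return frame

# Split the screen roughly in half between two cards.
top = render_slice(0, HEIGHT // 2, gpu_id=0)
bottom = render_slice(HEIGHT // 2, HEIGHT, gpu_id=1)
final_frame = composite([top, bottom])

for row in final_frame:
    print(" ".join(row))
```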

Edit: Just noticed your edit. :P
 
Not necessarily - not if all the "bottleneck analysis" is done in hardware. I guess we'll have to wait and see though.

I'd guess he was referring to where it says, 'in the span of milliseconds,' which frankly is a very long time in real-time graphics. For example, to get 60 FPS you need 16.666...ms frame times; if you're using 8 of those milliseconds just to figure out which card renders what, you're going to notice it.
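
To put that in numbers (the 8ms figure is hypothetical, just to illustrate the worst case where the analysis runs serially before any rendering starts):

```python
# Back-of-envelope: what a hypothetical 8 ms analysis step does to the
# 60 FPS frame budget if it runs serially before the rendering.
# The 8 ms figure is invented; Lucid hasn't published real numbers.

target_fps = 60
frame_budget_ms = 1000.0 / target_fps   # ~16.67 ms per frame at 60 FPS
analysis_ms = 8.0                       # hypothetical per-frame analysis cost

render_budget_ms = frame_budget_ms - analysis_ms
print(f"time left for actual rendering: {render_budget_ms:.2f} ms "
      f"({100 * render_budget_ms / frame_budget_ms:.0f}% of the budget)")

# Seen the other way: if the GPUs still need the full 16.67 ms to render,
# a serial 8 ms analysis pushes the frame time to ~24.7 ms, i.e. ~40 fps.
serial_frame_ms = frame_budget_ms + analysis_ms
print(f"serial worst case: {serial_frame_ms:.2f} ms -> {1000 / serial_frame_ms:.1f} fps")
```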
 
I'd guess he was referring to where it says, 'in the span of milliseconds,' which frankly is a very long time in real-time graphics. For example, to get 60 FPS you need 16.666...ms frame times; if you're using 8 of those milliseconds just to figure out which card renders what, you're going to notice it.

Yes, you're right.

I guess the key question then is about the queueing efficiency. If the (say) 10ms delay is just a lag time, then it should be okay (most LCD monitors have input lags of 20ms+ anyway). If the GPUs must sit idle waiting for the breakdown instructions for the next frame, then it's going to bring with it a big performance hit.

I imagine (hope...) they've come up with a solution that allows the GPU to work on one frame while the decomposition instructions for the next frame are being calculated. I guess we'll have to wait and see though.
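
Something like this is what I have in mind; the timings are invented, and this is only a guess at how the pipelining could be arranged so the GPUs never sit idle waiting for instructions.

```python
# Toy timeline showing how the analysis of frame N+1 can overlap with the
# rendering of frame N, so the GPUs never sit idle waiting for instructions.
# The timings are invented; this only illustrates the pipelining idea.

ANALYSIS_MS = 8.0   # hypothetical per-frame decomposition time
RENDER_MS = 16.7    # hypothetical per-frame GPU render time

analysis_done = 0.0  # when the analyser finishes its current frame
render_done = 0.0    # when the GPUs finish their current frame

for frame in range(5):
    # The analyser starts on frame N as soon as it is free.
    analysis_start = analysis_done
    analysis_done = analysis_start + ANALYSIS_MS

    # The GPUs start frame N once its analysis is done AND frame N-1 is rendered.
    render_start = max(analysis_done, render_done)
    render_done = render_start + RENDER_MS

    print(f"frame {frame}: analysis {analysis_start:5.1f}-{analysis_done:5.1f} ms, "
          f"render {render_start:5.1f}-{render_done:5.1f} ms")

# After the first frame the 8 ms analysis is hidden entirely behind the
# previous frame's 16.7 ms render, so the output settles at one frame
# every ~16.7 ms rather than every ~24.7 ms.
```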

Either way, I'm quite excited about this now :D It's nice to see some real-world improvements coming out that didn't originate in either the red or the green camps :)
 
The problem is that this technology will make SLI and XFIRE obsolete as marketing tools, and thus it's not outside the realm of possibility that ATI and Nvidia could update their drivers to stop Lucid's technology from working.

Hopefully this won't be the case though...
 
The problem is that this technology will make SLI and XFIRE obsolete as marketing tools, and thus it's not outside the realm of possibility that ATI and Nvidia could update their drivers to stop Lucid's technology from working.

Hopefully this won't be the case though...

On the other hand, it might make multi-GPU setups more desirable. Most people's issue with multi-GPU setups is that they don't always work as well as they could.

As for ATI and NV, well, both their CrossFire and SLI technologies are now little more than logos, considering they're almost an open standard, with both working on any X58 motherboard.

They could still keep SLI and CrossFire as the names for what they call multi-GPU use if they wished, but it'd still potentially result in them selling more cards and having to do less work.
 
The problem is that this technology will make SLI and XFIRE obsolete as marketing tools, and thus it's not outside the realm of possibility that ATI and Nvidia could update their drivers to stop Lucid's technology from working.

Hopefully this won't be the case though...

If the technology is good, and an improvement over existing solutions, it will live on.

Most likely the company will be bought out by either Nvidia or AMD, and the tech rebranded as SLI/CrossFire. I don't think that either company would shoot themselves in the foot over a relatively small firm, when instead they could 'get one over' on their greatest rivals.
 
ATI would LOVE this tech on motherboards; CrossFire is limiting the number of chips they can sell, as anything over 4 just doesn't scale well at all. I think Nvidia might grump a bit till they can reduce the size of their chips though.
 
So now we'll just have to wait and see which company gets their mitts on it first, I guess. No doubt the tech will be imitated by the other in short order.
 
If the technology is good, and an improvement over existing solutions, it will live on.

Most likely the company will be bought out by either Nvidia or AMD, and the tech rebranded as SLI/CrossFire. I don't think that either company would shoot themselves in the foot over a relatively small firm, when instead they could 'get one over' on their greatest rivals.

I doubt it; if my understanding of the situation is right, Intel wouldn't let them. Intel has a very strong relationship with Lucid. In fact, if I remember rightly, a lot of Lucid's R&D dosh comes from the blue team, and there've been rumours circulating that Larrabee's multi-GPU functionality will be entirely dependent on the Lucid Hydra technology.
 
I bet that Intel buys it. Given that ATI has CrossFire and Nvidia has SLI, it makes sense for Intel to buy it, because it can then dump on ATI and Nvidia from a great height :)
 