
MSI 290X LE quick review and thoughts

OK, more modern information regarding multiple GPUs.

The fourth lesson: Multi-GPU (SLI/CrossFireX) is ******* complicated. You cannot begin to conceive of the number of failure cases that are involved until you see them in person. I suspect that more than half of the total software effort within the IHVs is dedicated strictly to making multi-GPU setups work with existing games. (And I don't even know what the hardware side looks like.) If you've ever tried to independently build an app that uses multiple GPUs - especially if, god help you, you tried to do it in OpenGL - you may have discovered this insane rabbit hole. There is ONE fast path, and it's the narrowest path of all. Take lessons 2 and 3, and magnify them enormously.
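To make "one fast path" concrete, here's a toy C++ model - my own sketch, nothing from the original post, and certainly not real driver code. Two threads stand in for two GPUs doing alternate-frame rendering: frames that touch only their own resources overlap perfectly, while a temporal effect that reads the previous frame's output forces a cross-GPU stall on every frame.

// Toy AFR model: two threads stand in for two GPUs; render_frame()
// burns ~10 ms of pretend GPU time per frame.
#include <chrono>
#include <cstdio>
#include <future>
#include <thread>

static int render_frame(int frame, int prev_output) {
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    return frame + prev_output;   // a temporal effect would read prev_output
}

// temporal == false: each frame is self-contained - the fast path.
// temporal == true: frame N reads frame N-1's output, which lives on the
// OTHER "GPU", so every launch stalls on a cross-GPU wait.
static double run(int frames, bool temporal) {
    auto t0 = std::chrono::steady_clock::now();
    std::future<int> gpu[2];                    // one in-flight frame per "GPU"
    int prev = 0;
    for (int i = 0; i < frames; ++i) {
        int g = i % 2;                          // AFR: even -> GPU0, odd -> GPU1
        if (temporal && i > 0)
            prev = gpu[(i - 1) % 2].get();      // stall for the other GPU's frame
        else if (gpu[g].valid())
            gpu[g].get();                       // only drain this GPU's own backlog
        gpu[g] = std::async(std::launch::async, render_frame, i,
                            temporal ? prev : 0);
    }
    for (auto& f : gpu) if (f.valid()) f.get();
    return std::chrono::duration<double, std::milli>(
        std::chrono::steady_clock::now() - t0).count();
}

int main() {
    std::printf("independent frames: ~%.0f ms (2x scaling)\n", run(8, false));
    std::printf("temporal frames:    ~%.0f ms (no scaling)\n", run(8, true));
}

The real problem is far messier, because the driver has to infer that dependency from API calls without being told, but the arithmetic is the same: one dependency chain and the second GPU buys you nothing.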

Deep breath.

Ultimately, the new APIs are designed to cure all four of these problems.
* Why are games broken? Because the APIs are complex, and validation varies from decent (D3D 11) to poor (D3D 9) to catastrophic (OpenGL). There are lots of ways to hit slow paths without knowing anything has gone awry, and often the driver writers already know what mistakes you're going to make and are dynamically patching in workarounds for the common cases.
* Maintaining the drivers with the current wide surface area is tricky. Although AMD and NV have the resources to do it, the smaller IHVs (Intel, PowerVR, Qualcomm, etc.) simply cannot keep up with the necessary investment. More importantly, explaining to devs the correct way to write their render pipelines has become borderline impossible. There are too many failure cases. It's been understood for quite a few years now that you cannot max out the performance of any given GPU without having someone from NVIDIA or AMD physically grab your game source code, load it on a dev driver, and do a hands-on analysis. These are the vanishingly few people who have actually seen the source to a game, the driver it's running on, the Windows kernel it's running on, and the full specs for the hardware. Nobody else has that kind of access or engineering ability.
* Threading is just a catastrophe and is being rethought from the ground up. This requires a lot of the abstractions to be stripped away or retooled, because the old ones required too much driver intervention to be properly threadable in the first place.
* Multi-GPU is becoming explicit. For the last ten years, it has been AMD and NV's goal to make multi-GPU setups completely transparent to everybody, and it's become clear that for some subset of developers, this is just making our jobs harder. The driver has to apply imperfect heuristics to guess what the game is doing, and the game in turn has to do peculiar things in order to trigger the right heuristics. Again, for the big games somebody sits down and matches the two manually. (A sketch of the explicit approach follows this list.)
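Since that last bullet is the heart of this thread, here's a minimal sketch of what "explicit" means in practice. This is my illustration, not code from the quoted post: under D3D12 the application enumerates the adapters itself and creates one device per GPU, so nothing hides behind a single implicit device. Error handling is trimmed; assumes Windows, linking d3d12.lib and dxgi.lib.

// Minimal sketch: explicit multi-GPU starts with the app owning adapter
// enumeration and device creation, one device per physical GPU.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#include <vector>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue;  // skip WARP

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            std::printf("GPU %u: %ls\n", i, desc.Description);
            devices.push_back(device);
            // From here the split is the app's job: one command queue per
            // device, explicit copies between GPUs, and its own choice of
            // AFR/SFR - no driver heuristics guessing what the game meant.
        }
    }
    std::printf("usable GPUs: %zu\n", devices.size());
    return 0;
}

(Bridged SLI/CrossFire-style rigs surface slightly differently - one adapter with multiple "nodes" - but the principle is the same: the work split is visible and chosen by the application.)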


http://forums.overclockers.co.uk/showthread.php?p=27752182#post27752182

And of course actual benchmarks.

http://www.guru3d.com/articles_pages/geforce_gtx_980_sli_review,22.html

And guess what? It's still exactly the same -

So you still lose performance in certain scenarios, just as I did in 2012.

£450+ for another GPU only to lose FPS.
 
I get the same as you, Shankly - 144Hz showing as the max refresh rate. I choose 120Hz and within a few seconds it starts to flicker, and the only cure is choosing 60Hz again.
 

5960X @???
2x295x2 @1200/1650
14.12
1080P


1x295x2 (single core)

[benchmark screenshot]

1x295x2 (CrossFire)

[benchmark screenshot]

2x295x2 (QuadFire)

[benchmark screenshot]



5960X @4.9Ghz
295x2 @1190/1625Mhz
14.12
4K

295x2 (single core)

[benchmark screenshot]

295x2 (CrossFire)

[benchmark screenshot]

2x295x2 (QuadFire)

[benchmark screenshot]


And that's with a slightly broken, underperforming CrossFire profile, which will be rectified in the not-too-distant future.

Excellent 1>2>4 GPU scaling. :)
 
kk, can you select 120Hz, restart the PC and see whether it keeps the refresh rate when it boots up?

I am a bit worried about doing that, Shankly. If it does what it did before, I'll have to get one of my other monitors back out of the box and connect that as well; otherwise I just get a black screen and can't change anything.
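If it helps to narrow that down first, here's a small Win32 diagnostic sketch (my own invention, nothing official) that lists the refresh rates Windows actually exposes at your current resolution, so you can check whether 120Hz is genuinely in the mode list before committing to the restart test:

// Lists the refresh rates Windows exposes at the current resolution.
// Build as a console app; expect duplicate lines for different colour depths.
#include <windows.h>
#include <cstdio>

int main() {
    DEVMODE current = {};
    current.dmSize = sizeof(current);
    EnumDisplaySettings(nullptr, ENUM_CURRENT_SETTINGS, &current);

    DEVMODE mode = {};
    mode.dmSize = sizeof(mode);
    for (DWORD i = 0; EnumDisplaySettings(nullptr, i, &mode); ++i) {
        if (mode.dmPelsWidth == current.dmPelsWidth &&
            mode.dmPelsHeight == current.dmPelsHeight)
            std::printf("%lux%lu @ %lu Hz\n", mode.dmPelsWidth,
                        mode.dmPelsHeight, mode.dmDisplayFrequency);
    }
    return 0;
}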
 
Check our bench threads for TR and you'll find that the above chart is quite mistaken.

I'll find that the drivers were bad. Quelle surprise! So Nvidia added a bodge later.

It still doesn't change anything. Hardware support all rests on software support, which is, and always was, notoriously bad.

If games were coded properly, one Titan would run 8K. But they're not. Support is time, and time is money; at the end of the day, money in = money out. There is no money to be had in wasting time on something incredibly complicated just for 1% of 1% of PC owners.

And it'll never change.
 
Tomb Raider 4K scaling. Looks pretty good to me. It may not always work, but when it does the benefits are real.

(Each entry: user - GPU @core/memory clocks, driver version. Average FPS.)

Single GPU

1. Kaapstad - 980 @1502/2102, 347.09. 30.3
2. Besty - 980 @1560/2000, 344.16. 27.1
3. Kaapstad - 290X @1270/1625, 14.9. 25.9
4. whyscotty - 780 @1163/1852, ???. 12.9


Dual GPU

1. Gregster - Titan @1267/3758, 340.3. 51
2. Kaapstad - 290X @1230/1625, ???. 48.6
3. Clov!s - 290 @1030/1250, ???. 39.8

Triple GPU

1. Kaapstad - 290X @1230/1625, ???. 73
2. whyscotty - Titan @928/1502, ???. 70.7
3. LtMatt - 290X @1125/1500, 14.8. 69.1
4. Clov!s - 290 @1030/1250, 14.7. 57.9

Quad GPU

1. Kaapstad - 980 @1492/2002, 347.09. 112.4
2. AMDMatt - 290X @1240/1625, 14.12. 101.2
3. Besty - 980 @1530/2000, 344.16. 97.1
4. Kaapstad - 290X @1220/1625, 14.9. 95.4
5. AMDMatt - 290X @1175/1500, 14.8. 94
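
To put a number on "pretty good", take Kaapstad's 980 from the chart: 30.3 FPS on one card versus 112.4 FPS on four. A quick check, using only the figures posted above:

#include <cstdio>

int main() {
    // Figures copied from the Tomb Raider 4K chart above.
    const double single_fps = 30.3;   // Kaapstad, 1x 980
    const double quad_fps   = 112.4;  // Kaapstad, 4x 980
    const double speedup    = quad_fps / single_fps;   // ~3.71x
    const double efficiency = speedup / 4.0 * 100.0;   // ~92.7% per GPU
    std::printf("speedup: %.2fx, per-GPU efficiency: %.1f%%\n",
                speedup, efficiency);
}

Roughly 3.7x from four cards, i.e. about 93% scaling efficiency per GPU - when a profile works, it really works.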
 

I've already said that if you benchmark and want the highest scores, then it's worth it.

And it is - look at the results. You can't win there with only two GPUs.

Pete - indeed, it seems to be fingers-in-ears time, because you are letting passion cloud your judgement. If the facts were different you'd have a case, but they're not.
 

So you are saying that if they were to go and play the game, there would be no benefit to having four cards all scaling at around 100% in this case?
 

Sorry, what?

You're the one with the absolutist opinion, posting charts that back you up and ignoring the ones that prove you wrong, clinging to the whole 'it's not coded by the devs' angle while ignoring that it works just fine in the titles we're discussing.

Passion has nothing to do with anything here.
 

£450 for 2-7 FPS.

Yes, that's awesome. ZOMG, where do I sign up?
 