Borderlands 2 PhysX can be forced to run on the CPU

Yes, of course it's not actually that simple, but just as Crossfire/SLI has to be coded for and Eyefinity/Surround has to be coded for, if AMD had a working physics implementation then games would be coded for both and the whole thing wouldn't be an issue.

Or are you trying to say that if you code a game to use Eyefinity then it will automatically work with Surround? I don't think so somehow. There are differences; they may be slight, but that is why these companies have their own proprietary tech. It is only when both companies are using similar technology that these things are used by everyone, and then it becomes irrelevant which make of card you have.

Just imagine if games were coded so they only worked with Eyefinity or Crossfire, and then another game only with Surround/SLI.

Games aren't coded to work with multi-GPU, Surround, Eyefinity or stereo 3D in quite that way. In most cases games will work fine with these technologies out of the box with absolutely no specific coding for them; in some cases there may be incompatibilities if games aren't coded with these features in mind, and/or coding to take better advantage of them can enhance the experience. Probably 80% of games have no SLI- or Crossfire-targeted code in them and are enabled via driver profiles.

To be fair, Bru is correct.
Companies could code game engines to support both PhysX and an API designed by AMD; then both sides would get it. As usual, though, just like a proper driver-level 3D implementation, AMD won't bother.

Not really. They'd have to design, implement and test both APIs individually, which is going to be a lot of extra work unless both APIs behave 100% identically. For instance, a slight difference in how a solver works in one engine could result in very different behaviour of in-game objects, which, depending on the level of integration, could have very undesirable effects.
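
To put a rough number on that, here's a minimal sketch (plain C++, entirely hypothetical figures, not any real engine) of two solvers that differ only in a tiny per-step damping constant; after a few seconds of simulated bouncing, the same object ends up in noticeably different places in each "engine":

#include <cstdio>

struct Body { double y; double vy; };

// One Euler step of a ball under gravity, with an engine-specific
// per-step velocity damping factor and a simple bounce off the floor.
void step(Body& b, double damping, double dt = 1.0 / 60.0) {
    b.vy += -9.81 * dt;       // gravity
    b.vy *= damping;          // the only difference between the two "engines"
    b.y  += b.vy * dt;
    if (b.y < 0.0) {          // bounce, losing some energy
        b.y  = 0.0;
        b.vy = -b.vy * 0.8;
    }
}

int main() {
    Body a{10.0, 0.0}, b{10.0, 0.0};          // identical starting conditions
    for (int i = 0; i < 300; ++i) {           // five simulated seconds at 60 Hz
        step(a, 0.999);                       // "engine A"
        step(b, 0.998);                       // "engine B": 0.1% more damping per step
    }
    std::printf("engine A: y = %.3f   engine B: y = %.3f\n", a.y, b.y);
}

The two results drift apart over time, so gameplay that depends on where those objects end up would play differently depending on which physics backend the player happened to have.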
 
Yes, of course it's not actually that simple, but just as Crossfire/SLI has to be coded for and Eyefinity/Surround has to be coded for, if AMD had a working physics implementation then games would be coded for both and the whole thing wouldn't be an issue.

Games aren't really coded for Crossfire or SLI either. They can be optimised for it, but the vast majority of games aren't coded to work with either; all of that is set up by AMD and nVidia on the driver side. That's what the whole driver profiles thing is about, and that's why you can often rename a game's exe to something else to get SLI or Crossfire working, because that exe name already has a profile within the drivers.

At face value, it's a lot simpler than many people expect it to be.


Or are you trying to say that if you code a game to use Eyefinity then it will automatically work with Surround? I don't think so somehow. There are differences; they may be slight, but that is why these companies have their own proprietary tech.

There's practically nothing to code for Eyefinity/Surround. From the game's point of view they aren't proprietary at all; implementing multi-display support is pretty basic really. It consists of allowing any aspect ratio to be used, and then deciding where to place the HUD, menus and all that stuff when an ultra-wide resolution is detected.

As for the actual implementation of multiple displays, there's nothing proprietary there either: Windows is simply told that a group of displays is a single ultra-wide display. It's pretty simple really.

That's generally the difference between a game "optimised" for multiple displays and one that isn't: whether the menus and HUD work and are placed properly. Other than that, there's little to nothing to actually "code" for.
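
As a rough illustration (a hypothetical sketch, not any real engine's code) of how little that "optimisation" usually amounts to, the HUD placement logic can be as simple as rendering the 3D scene at whatever resolution is supplied and pinning the HUD to a centred 16:9 band whenever the display is wider than that:

#include <cstdio>

struct Rect { int x, y, w, h; };

// Clamp the HUD's usable area to a centred 16:9 band when the display is
// wider than 16:9 (e.g. a 5760x1080 Eyefinity/Surround group); otherwise
// just use the whole screen. The 3D scene itself needs no special handling.
Rect hudArea(int screenW, int screenH) {
    double aspect = double(screenW) / screenH;
    if (aspect <= 16.0 / 9.0 + 1e-6)
        return {0, 0, screenW, screenH};
    int hudW = screenH * 16 / 9;
    return {(screenW - hudW) / 2, 0, hudW, screenH};
}

int main() {
    Rect r = hudArea(5760, 1080);             // three 1920x1080 displays side by side
    std::printf("HUD band: x=%d, width=%d, height=%d\n", r.x, r.w, r.h);
}

Anything 16:9 or narrower just uses the whole screen, which is why so many games need no changes at all on a single monitor.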

It is only when both companies are using similar technology that these things are used by everyone, and then it becomes irrelevant which make of card you have.

Just imagine if games were coded so they only worked with Eyefinity or Crossfire, and then another game only with Surround/SLI.

That's essentially the problem with PhysX: no one's going to take it seriously (without some serious financial incentive from nVidia) and do any sort of proper implementation because of this.

The problem, though, is that nVidia go one step further with it and ensure that if you don't have an nVidia GPU you're going to have a poor time with PhysX, because of how they've restricted its CPU performance to make it appear as if you NEED it to run on a GPU.

To be fair, Bru is correct.
Companies could code game engines to support both PhysX and an API designed by AMD; then both sides would get it. As usual, though, just like a proper driver-level 3D implementation, AMD won't bother.

That's not really correct; it's nowhere near that simple. No developer is going to want to constantly code for two different, incompatible physics APIs.
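
To give a feel for what shipping two physics APIs would actually mean, here's a hypothetical sketch of the abstraction layer a developer would have to design, write and QA twice over; the backend names are made up and don't call any real SDK:

#include <cstdio>
#include <memory>

// Everything the game code is allowed to call goes through this interface,
// so every physics feature has to be written and tested once per backend.
struct IPhysics {
    virtual ~IPhysics() = default;
    virtual void addRigidBody(float mass) = 0;
    virtual void step(float dt) = 0;
};

struct PhysXBackend : IPhysics {              // would wrap the real PhysX SDK
    void addRigidBody(float mass) override { std::printf("PhysX: body %.1f kg\n", mass); }
    void step(float dt) override           { std::printf("PhysX: step %.4f s\n", dt); }
};

struct AmdBackend : IPhysics {                // would wrap the hypothetical AMD API
    void addRigidBody(float mass) override { std::printf("AMD: body %.1f kg\n", mass); }
    void step(float dt) override           { std::printf("AMD: step %.4f s\n", dt); }
};

int main() {
    bool hasNvidiaGpu = true;                 // stand-in for real hardware detection
    std::unique_ptr<IPhysics> physics = hasNvidiaGpu
        ? std::unique_ptr<IPhysics>(new PhysXBackend)
        : std::unique_ptr<IPhysics>(new AmdBackend);

    physics->addRigidBody(1.0f);
    physics->step(1.0f / 60.0f);              // both code paths still need separate tuning and QA
}

Every feature the game relies on has to be implemented, tuned and tested behind that interface once per backend, which is exactly the extra work no developer wants to take on.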

3D isn't really comparable because of the relative simplicity with which it works: it's essentially two different perspectives sent to different eyes. There's no actual need for a driver-level 3D implementation; the only reason nVidia do it the way they do is, again, control, because they love all things proprietary.
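
For what it's worth, the "two perspectives" part really is that simple; a minimal hypothetical sketch (renderScene is a stand-in, not a real engine call) is just two camera positions offset by the eye separation:

#include <cstdio>

struct Camera { float x, y, z; };

// Stand-in for a real engine's render call; here it just reports the viewpoint.
void renderScene(const Camera& cam, const char* eye) {
    std::printf("%s eye rendered from x = %+.3f m\n", eye, cam.x);
}

int main() {
    Camera centre{0.0f, 1.7f, 0.0f};          // the game's normal camera
    float eyeSeparation = 0.064f;             // roughly 64 mm between human eyes

    Camera left  = centre;  left.x  -= eyeSeparation / 2.0f;
    Camera right = centre;  right.x += eyeSeparation / 2.0f;

    renderScene(left,  "left");               // one image per eye, per frame
    renderScene(right, "right");
}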

I'm not sure why you're trying to paint AMD as "lazy" by constantly referencing the fact that they aren't doing any work towards it, because it's not something that would benefit many people. Think about it: the best things are open (to some degree). A system of 3D glasses that works on any 3D monitor is the ideal solution (outside of glasses-free), not 3D kits and monitors you have to buy that only work with an nVidia graphics card. It's all a marketing exercise to make people feel locked in to only being able to buy nVidia graphics cards.

I fully expect the $140 million nVidia spent on PhysX has paid for itself in that regard. I've never really thought it was about them genuinely wanting a hardware physics API, which is exactly why they've done so little with it. Think of all the people you've seen talk about PhysX as some big feature and how it influences their choice to buy an nVidia GPU, despite the severe lack of games that even use hardware PhysX.

I also think the majority of people don't realise that not all games using PhysX as their physics API use hardware PhysX; nearly all of them use no hardware PhysX at all. It's only in the games that do have hardware PhysX that the CPU path is restricted, again to exaggerate the supposed NEED for an nVidia graphics card. Look at that Borderlands 2 video from nVidia themselves, where it outright said you wouldn't get any of those physics effects unless you had an nVidia graphics card (blatant lies).

If they both had some kind of physics calculation implementation, I'm sure developers would be more inclined to use it.

I still doubt it. It's not as simple as people think to have two differing APIs work together; basically it has to be one or none to get that sort of compatibility.
 
Oh dear.

So many points to counter. Anyway, I'll just say this: yes, you are both right, a lot of these things are taken care of in the drivers, but guess what, why do you think these things are done in the drivers? To make it easier for the games developers.

Think back (if you're old enough) to when games only ran at 320x200, and then the first few started to arrive that would run in VGA, yes folks, a whole 640x480. Since then the sky's been the limit, but those first few games didn't work too well at 640x480; they defaulted back to 320x200 until the coding was changed to allow them to work. And so it has moved on with many hundreds of new innovative ideas that have transformed our gaming enjoyment, each of which has needed the coding changed to work properly.

Now back to those two proprietary bits of technology, you know, AMD Eyefinity and nVidia Surround. Yes, they have all the tricky bits of coding in the drivers, and it's been stable enough for a few years now, to the point that all the games developers have to do is tell the drivers they want to allow big resolutions, and the drivers take care of it for each maker's GPUs. So just think: if AMD had some sort of physics thing, it would eventually be the same; the game developers would just say "hey, we want this to happen" and the drivers would handle the rest for each maker's GPU.
And that is what I have been trying to say all this time, and the only one who even seems to have understood it is no1dave.
 
OpenCL can be used for physics, so AMD have it; it's just a case of game developers using it, which I'm sure they will if AMD work with them on it.
Which they are now.
 
Oh dear.

So many points to counter. Anyway, I'll just say this: yes, you are both right, a lot of these things are taken care of in the drivers, but guess what, why do you think these things are done in the drivers? To make it easier for the games developers.

I think you have a limited understanding of the things you're talking about.

Think back (if you're old enough) to when games only ran at 320x200, and then the first few started to arrive that would run in VGA, yes folks, a whole 640x480. Since then the sky's been the limit, but those first few games didn't work too well at 640x480; they defaulted back to 320x200 until the coding was changed to allow them to work. And so it has moved on with many hundreds of new innovative ideas that have transformed our gaming enjoyment, each of which has needed the coding changed to work properly.
That really doesn't apply. There are many different game engines now, and games are coded to run on many different configurations of hardware; the examples you're giving have no relevance to today, and I'm not quite sure why you think they do.

Games now have to be able to run at a range of different resolutions; there is no standard resolution that a game works at. There are various different aspect ratios too, which is why a lot of games work out of the box at ultra-wide resolutions: it's a side effect of the developers having to account for many different possibilities. You're making it out to be a lot more complex than it is.
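
For example, one common way of handling arbitrary aspect ratios (a hypothetical sketch of the usual "Hor+" approach, not any particular engine's code) is to fix the vertical field of view and derive the horizontal one from whatever aspect ratio is detected, so 16:9 and a triple-wide 48:9 setup fall out of the same code path:

#include <cmath>
#include <cstdio>

// Fix the vertical field of view and derive the horizontal one from the
// current aspect ratio ("Hor+"), so wider screens simply see more to the sides.
double horizontalFovDeg(double verticalFovDeg, double aspect) {
    const double pi = std::acos(-1.0);
    double v = verticalFovDeg * pi / 180.0;
    double h = 2.0 * std::atan(std::tan(v / 2.0) * aspect);
    return h * 180.0 / pi;
}

int main() {
    const double vfov = 60.0;                                     // degrees, kept constant
    std::printf("16:9 -> %.1f deg horizontal\n", horizontalFovDeg(vfov, 16.0 / 9.0));
    std::printf("48:9 -> %.1f deg horizontal\n", horizontalFovDeg(vfov, 48.0 / 9.0));
}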

Now back to those two proprietary bits of technology, you know, AMD Eyefinity and nVidia Surround. Yes, they have all the tricky bits of coding in the drivers, and it's been stable enough for a few years now, to the point that all the games developers have to do is tell the drivers they want to allow big resolutions, and the drivers take care of it for each maker's GPUs.

All wrong; the implementation of multiple displays is fairly simple. It was possible to play games across multiple displays for years before Eyefinity: there were Matrox's TripleHead2Go devices, and there was also a piece of software called "SoftTH". That was more complicated because the software had to do all the work of displaying the game across multiple displays.

With the current nVidia and AMD implementations, it's very simple: the OS sees all the displays as one, and that's it. The game has to do nothing special to account for it; the "coding" was never on the games developers to support multiple monitors in the way you're suggesting.

So just think: if AMD had some sort of physics thing, it would eventually be the same; the game developers would just say "hey, we want this to happen" and the drivers would handle the rest for each maker's GPU.
No, not at all. It's nowhere near that simple. Both AMD and nVidia would need to be able to run the same APIs to achieve what you're suggesting.

You're using examples that don't match. Multiple monitors, 3D, AA and all that kind of stuff are not comparable to a physics API. Again, as I keep saying, the developer needs to implement that kind of thing, and they really aren't going to want to implement multiple physics APIs; it doubles the amount of work with regard to physics processing, which they don't want or need to do.

Then you've got all the problems that come with using a different physics API depending on the hardware: what happens when they don't quite behave the same way? It's not at all practical, and I think your suggestions imply you have a very limited understanding of this.

And that is what I have been trying to say all this time, and the only one who even seems to have understood it is no1dave.

no1dave doesn't "understand" you; he's simply agreeing with the wrong assumptions you've made. It's not as simple as you're making out, and it simply doesn't work like that.
 
OpenCL can be used for physics, so AMD have it; it's just a case of game developers using it, which I'm sure they will if AMD work with them on it.
Which they are now.

The problem is that OpenCL is just a language; just because physics processing can be run through it doesn't make Bru's suggestion any more realistic.

Someone actually has to come out and produce an API that runs physics through OpenCL. As far as I know Havok have been working on it, but it doesn't really look like it's going anywhere, since it was a few years ago now that anything Havok-related was shown running through OpenCL.
 
Yes, multiple displays are now simple to deal with, but it wasn't always that way, and if both sides had working physics engines then eventually it would be the same as multiple displays, 3D, etc.

Think back to the first time multiple cards were used for gaming, back in 1998 when 3dfx introduced SLI; it needed its own API to work (Glide), but eventually DirectX and OpenGL took over, and now all these things are taken care of by the various drivers, Windows and the video card drivers, so that it all works nicely.

Technology is a wonderful thing and has become very powerful in allowing us to do things that we take for granted so easily now, but it wasn't always that way and these things have been improving all the time. It is very easy to forget the hard work and effort that goes into making these things work so seamlessly.
 
Would it be worth looking at a cheap 8800GT/GTS250/GT430 to run PhysX alongside my 7950 (if that can even be done any more)?
 
Well, AMD has Bullet as a physics engine, though not in AAA games as far as I know - http://bulletphysics.org/wordpress/?p=304

Other than that, it really looks awesome, all of that being done in real time. I suspect it will arrive in future next-gen games, including consoles, if we are to believe what VR-Zone says:
Also worth mentioning is that SI is the architecture chosen to be expanded into high-performance consoles, thus we should see quite interesting announcements regarding to vast compute and graphics capabilities carrying the next generation of console games.

Read more: http://vr-zone.com/articles/amd-nex...-2015-gpus-get-names/17154.html#ixzz28KEZ2gf6
 
Would it be worth looking at a cheap 8800GT/GTS250/GT430 to run PhysX alongside my 7950 (if that can even be done any more)?

It can still be done, and it's very simple to do, too:

http://www.ngohq.com/graphic-cards/...with-latest-physx-and-geforce-285-solved.html

Whether my 9800GT is up to the task later on in Borderlands 2 is another thing; I read that it can slow down the 7900-series cards in this title due to not being fast enough at swapping data (or however it works), but I don't know for sure yet.

If someone has a save (if that's possible in this title) in the caverns where all the PhysX happens, I could try it out and report back, as I won't be giving this title a real go until I have finished other titles.
 