Star Wars: 1313 Gameplay- PhysX Title?

There is only one reason why next gen consoles will use AMD hardware.

It's cheaper.

This is not the case. AMD are being used this gen because Nvidia has had big falling-outs with Sony and Microsoft in the past. Microsoft used Nvidia in the original Xbox, but the companies fell out, so they went with ATI for the 360. Sony and Nvidia had a falling-out which led to Nvidia not releasing drivers for any Sony desktops.

In short, Nvidia shot themselves in the foot when it came to getting their hardware into any of the next gen consoles.
 
Obviously nobody knows, but do you think a GTX 670 would handle this game well enough?

I have no idea how powerful the new cards are (7970/680/670) and am thinking of picking up a 670 in the next few days.
 
Obviously nobody knows, but do you think a GTX 670 would handle this game well enough?

I have no idea how powerful the new cards are (7970/680/670) and am thinking of picking up a 670 in the next few days.

IIRC you run at 1200p, so you'll be fine with 2GB of VRAM if going for a 670.
 
Obviously nobody knows, but do you think a GTX 670 would handle this game well enough?

Initial impressions for me on this title are that it's going to be demanding. Looking at the main character alone, there is much more detail being used, and then there is the lighting constantly changing.


I have no idea how powerful the new cards are (7970/680/670) and am thinking of picking up a 670 in the next few days.

I'm usually the last to say wait for the next cards, but considering the current pricing climate, next gen console development is definitely underway, so at last PC games will look the part again, albeit at a very demanding cost, and it's already June...

...which means we will be in for game engines that can do this:


That is another game engine, not a cut scene. PC isn't on the list, but fingers crossed that's a sign of things to come next year.
 
Initial impressions for me on this title are that it's going to be demanding. Looking at the main character alone, there is much more detail being used, and then there is the lighting constantly changing.




I'm usually the last to say wait for the next cards, but considering the current pricing climate, next gen console development is definitely underway, so at last PC games will look the part again, albeit at a very demanding cost, and it's already June...

...which means we will be in for game engines that can do this:


That is another game engine, not a cut scene. PC isn't on the list, but fingers crossed that's a sign of things to come next year.

I get what you're saying. But if they can get it to look like this on a console running an ATI 6750 GPU (or whatever it is), surely it can run on a modern PC too?

I could always add a second 670.....
 

That is another game engine, not a cut scene. PC isn't on the list, but fingers crossed that's a sign of things to come next year.

Apart from the annoying and useless motion blur effect (it ruins clarity while adding a performance hit), I think that actually looks better than the Unreal 3 Samaritan demo.
 
Thing is, they are probably running this on some silly expensive machine.

With something like 4 x 680s, or even more.

How on earth are they going to get this to play on the next gen consoles?

Aren't they only meant to have a 6750 GPU in them?
 
I get what you're saying. But if they can get it to look like this on a console running an ATI 6750 GPU (or whatever it is), surely it can run on a modern PC too?

It doesn't work like that; a console can render something like 40,000 polygons per second vs the PC's 15,000 (I don't know the exact figures), but it's along those lines.

It's an MS limitation, surprise, surprise.:(
 
Are you sure?

Yep, I'm sure.

Final8y, Rroff or DM would probably be able to advise on the exact specifics.

MS make the Xbox, so why would they limit the DX implementation on PC?

To sell more Xboxes, of course; there's much more money to be made selling hardware and dev licences/game profits writing Xbox games than just a one-off payment for a Windows licence.

Games for Windows is proof enough!:o

Microsoft have made it sooo user-unfriendly, it's unbelievable; in comparison to Xbox Live, GFW is a complete joke.
 
It doesn't work like that; a console can render something like 40,000 polygons per second vs the PC's 15,000 (I don't know the exact figures), but it's along those lines.

Sorry, but wtf. Where the hell have you got this information from?
 
If it's UE3 then it's definitely PhysX; whether it's CPU or GPU, or whether it's 2.8.x or 3.x, remains to be seen, but if it's an Nvidia title then it's very likely it will have some GPU-accelerated support.
As for it going away: unless UE4 fails, which is very unlikely, PhysX should be around for a long time, whether people want to admit it or not.

Not sure what AMD hardware being in the next consoles has to do with PC games; it won't make the slightest bit of difference.
 
Draw calls

bobvodka (Resident Game Developer) said:
The CPU in use has nothing to do with the 'quality' of 'ports'; 99% of the code in any game/engine is written in C++ (or C), with only a small percentage going below that for performance reasons. In fact, the fact that we currently work with in-order CPUs with small caches helps the x86-compiled versions, as memory access pattern improvements (which are the main source of optimisation in games these days, due to the increasing CPU/memory speed gap) help both platforms.

I also say 'port' because many games maintain builds for all 3 major platforms at the same time; there is no 'porting', there is common code for the majority of the game and then platform-specific sections to deal with the various APIs (or SPUs in the case of the PS3), which is a very small contact area indeed and generally has a team dedicated to it.

Also, in many games a fair chunk of logic can be and is farmed out to scripting languages such as Lua and, frankly, Lua on the console is a bit of a performance issue (not a critical-path one, but one all the same... messes with memory something rotten) and an area where PCs have the edge due to larger caches, better branch prediction and out-of-order operation.

--------
On a related note, I still find it cute that 'PC gamers' worry so much about the GPU when what they should be worrying about is the graphics stack on the PC.
Right now the consoles are hamstrung by their old hardware; however, if you remove that from the equation, suddenly things don't seem so rosy.

The example I have to hand is part of our rendering test bed; imagine a cube which is rotating about its Y axis. Now imagine that cube shape is made up of 50,000 other cubes, also rotating about their Y axis. To remove the 'GPU problem', those cubes are flat coloured; basically, rendering them is no trouble for a PS3 or 360 GPU, never mind a modern PC (stock NV GTX 470 in this case).

Now, each of these 50,000 cubes is drawn using a single draw call. This is about the worst possible case for the system to deal with.
The 360 and the PS3 will happily chug along at 16.6ms per frame (or 60fps) all day and all night, no problem at all, using 6 threads or an SPU-based system to set up the draw calls.
On the flip side, the PC (Xenon 4C/8T @ 2.63GHz, NV GTX 470, DX11 rendering mode, multiple DX11 deferred contexts for MT rendering over 6 threads) couldn't even manage 30fps. (I believe it clocked in around 27fps, or 37ms per frame; 30fps is 33.3ms/frame.)

Clearly the PC GPU is fast enough so the problem must be CPU side.

Now, at this point I'm sure it is tempting to shout and yell about 'poorly optimised PC code'; however, the code in question is VERY light, a few hundred lines at best on the main path, and our engine API very closely mirrors the DX11 API, so much so that for the PC it is often a very thin wrapper around the DX API call and nothing more; if anything, on some paths the consoles do more CPU work.

To cut an increasingly long story short: after a couple of days looking into this (I was very surprised at the time; 8 months later I'm not remotely surprised), the problem turned out to be in the drivers.

At or below 15,000 draw calls per frame the PC version had no trouble at all; however, as the draw call count increased, the frame time began to increase as the driver would 'stall' at the point where the back buffer was swapped to the front buffer. At 50,000 objects this stall was massive, taking up most of the frame time by a long shot. At the time I recommended that we keep the draw calls in the PC version of our game below 15,000, which might seem like a lot until you realise that deferred rendering causes an amplification of draw calls, as the same objects are rendered into multiple buffers for the various lighting, shadow (x3 in our case) and other passes required to build the scene.
(To give you a vague idea of costs: the current game I'm working on/with is a 4-player split-screen game with a 60fps target; the estimated max draw calls per player is 800/frame. This gets us a colour, lighting and single shadow pass + particle effects. Depth might have recently been allowed on the consoles, but not on the PC, due to the overhead of how the architecture works.)

So, while right now things might look good, going forward, unless driver architectures and interfaces change, the PC version is in for a world of hurt. On the PS3 we can use 50% of the SPU time to generate draw calls and chain them together; on the 360 we can use 6 cores for the same; on the PC, with the larger per-draw overhead, we can use deferred contexts to record command buffers (not very well, they do the work in the wrong place and AMD don't support them as well as NV yet), but the final submission remains a single-threaded problem and, in case you weren't paying attention, single-threaded performance stalled some time back.

Basically, next generation I expect the consoles to destroy even your high-end OMGWTFBBQ!!! PCs when it comes to object draw count (even if it remains a PPC arch (and I hope so, much nicer to work with than x86), the arch is better than it was and even has out-of-order execution to cover for the bad coders out there), with only the GPU limiting how much res we can push and with what factors enabled.

(And yes, next generation we will continue to do all the tricks we do now, including no fp16 render targets to reduce bandwidth costs (my group lead's wish list includes logLUV encoding/decoding in hardware, as it is a common method of storing HDR data) etc.; we'll just get a new box of tricks and some quality improvements on the old ones.)
http://www.rage3d.com/board/showpost.php?p=1336825638&postcount=80
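
For anyone who hasn't poked at DX11 directly, here's a rough C++ sketch of the two things that post describes: paying the per-object cost of one draw call per cube, and DX11's deferred contexts, which let worker threads record draw calls but still funnel the final submission through a single thread. This is not bobvodka's test code, just an illustration; it assumes a device, contexts, buffers and shaders already exist, and the function names are made up.

```cpp
#include <d3d11.h>
#include <DirectXMath.h>
#include <vector>

// One draw call per cube: CPU/driver cost scales with the object count,
// which is exactly the stall described above at ~50,000 calls per frame.
void DrawCubesOneCallEach(ID3D11DeviceContext* ctx,
                          ID3D11Buffer* perObjectCB,
                          const std::vector<DirectX::XMFLOAT4X4>& worlds,
                          UINT indexCountPerCube)
{
    for (const auto& world : worlds)                   // e.g. 50,000 entries
    {
        // Push this cube's rotation matrix into its constant buffer...
        ctx->UpdateSubresource(perObjectCB, 0, nullptr, &world, 0, 0);
        ctx->VSSetConstantBuffers(0, 1, &perObjectCB);
        // ...then pay the per-draw overhead once per object.
        ctx->DrawIndexed(indexCountPerCube, 0, 0);
    }
}

// DX11 'multi-threaded display lists': a worker thread records draws on a
// deferred context, but the immediate context still replays every command
// list on one thread, so the final submission stays single-threaded.
void RecordAndSubmit(ID3D11Device* device, ID3D11DeviceContext* immediate)
{
    ID3D11DeviceContext* deferred = nullptr;
    if (FAILED(device->CreateDeferredContext(0, &deferred)) || !deferred)
        return;

    // (A worker thread would issue its share of DrawIndexed calls here.)

    ID3D11CommandList* cmdList = nullptr;
    if (SUCCEEDED(deferred->FinishCommandList(FALSE, &cmdList)) && cmdList)
    {
        immediate->ExecuteCommandList(cmdList, TRUE);  // serial replay
        cmdList->Release();
    }
    deferred->Release();
}
```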
 
^
:D, Thanks for the info mate.

Sorry, but wtf. Where the hell have you got this information from?

For one, Carmack talked about it at length last year.

There was also:

The DirectX Performance Overhead

So what sort of performance-overhead are we talking about here? Is DirectX really that big a barrier to high-speed PC gaming? This, of course, depends on the nature of the game you're developing.

'It can vary from almost nothing at all to a huge overhead,' says Huddy. 'If you're just rendering a screen full of pixels which are not terribly complicated, then typically a PC will do just as good a job as a console. These days we have so much horsepower on PCs that on high-resolutions you see some pretty extraordinary-looking PC games, but one of the things that you don't see in PC gaming inside the software architecture is the kind of stuff that we see on consoles all the time.

On consoles, you can draw maybe 10,000 or 20,000 chunks of geometry in a frame, and you can do that at 30-60fps. On a PC, you can't typically draw more than 2-3,000 without getting into trouble with performance, and that's quite surprising - the PC can actually show you only a tenth of the performance if you need a separate batch for each draw call.

DirectX supports instancing, meaning that several trees can be drawn as easily as a single tree. However, Huddy says this still isn't enough to compete with the number of draw calls possible on consoles.

Now the PC software architecture – DirectX – has been kind of bent into shape to try to accommodate more and more of the batch calls in a sneaky kind of way. There are the multi-threaded display lists, which come up in DirectX 11 – that helps, but unsurprisingly it only gives you a factor of two at the very best, from what we've seen. And we also support instancing, which means that if you're going to draw a crate, you can actually draw ten crates just as fast as far as DirectX is concerned.

But it's still very hard to throw tremendous variety into a PC game. If you want each of your draw calls to be a bit different, then you can't get over about 2-3,000 draw calls typically - and certainly a maximum amount of 5,000. Games developers definitely have a need for that. Console games often use 10-20,000 draw calls per frame, and that's an easier way to let the artist's vision shine through.'

Of course, the ability to program direct-to-metal (directly to the hardware, rather than going through a standardised software API) is a no-brainer when it comes to consoles, particularly when they're nearing the end of their lifespan. When a console is first launched, you'll want an API so that you can develop good-looking and stable games quickly, but it makes sense to go direct-to-metal towards the end of the console's life, when you're looking to squeeze out as much performance as possible.

Consoles also have a major bonus over PCs here, which is their fixed architecture. If you program direct-to-metal on the PlayStation 3's GPU, then you know your code will work on every PS3. The same can't be said on the PC, where we have numerous different GPU architectures from different manufacturers that work in different ways.

For example, developers may ideally need to vectorise their code for it to run optimally on an AMD GPU's stream processor clusters, or maybe tell the GPU's stream processor clusters to split up their units into combinations of vector and scalar units. Conversely, developers will ideally need to program for a scalar architecture on Nvidia GPUs. Once you remove the API software layer, you suddenly have to really start thinking about the differences between GPU architectures.

http://www.bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/2
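
As a rough illustration of the instancing point Huddy makes (this is not code from the article), here's what 'drawing ten crates as fast as one' looks like in DX11: a second vertex stream flagged as per-instance data and a single DrawIndexedInstanced call instead of one DrawIndexed per crate. All the buffer/shader setup is assumed to already exist, and the INSTPOS semantic and function name are made up.

```cpp
#include <d3d11.h>

// Input layout: stream 0 advances per vertex, stream 1 advances once per
// instance (one position per crate), so identical geometry is reused.
// (Passed to ID3D11Device::CreateInputLayout during setup, omitted here.)
static const D3D11_INPUT_ELEMENT_DESC kCrateLayout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 0,
      D3D11_INPUT_PER_VERTEX_DATA,   0 },
    { "INSTPOS",  0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0,
      D3D11_INPUT_PER_INSTANCE_DATA, 1 },
};

// One API call draws 'crateCount' copies of the same crate mesh; as far as
// DirectX is concerned it costs roughly the same as drawing a single crate.
void DrawCratesInstanced(ID3D11DeviceContext* ctx,
                         UINT indexCountPerCrate, UINT crateCount)
{
    ctx->DrawIndexedInstanced(indexCountPerCrate, crateCount, 0, 0, 0);
}
```

The catch, as the article goes on to say, is that every crate drawn this way has to be the same mesh; anything genuinely unique still costs its own draw call.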



Not sure what AMD hardware being in the next consoles has to do with PC games; it won't make the slightest bit of difference.



~96 million Wiis sold worldwide = ~96 million AMD GPUs; add in 67 million 360 GPUs.

Then imagine what's going to happen to Nvidia's pockets when you add in the PS4.

Factor in APUs/IGPs getting more powerful and it destroys discrete GPU sales.

It would be catastrophic to PC gaming if Nvidia took a turn for the worse.
 
The New Dawn demo is made up of approx 4 million triangles. What about the insane amounts when using tessellation? PCs are not limited compared to consoles when it comes to sheer grunt; the suggestion is ridiculous.

Are you suggesting a 680 would be outperformed by the GPU in a PS3 when rendering a maximum number of polygons?:/
 
Initial impressions for me on this title are that it's going to be demanding. Looking at the main character alone, there is much more detail being used, and then there is the lighting constantly changing.




I'm usually the last to say wait for the next cards, but considering the current pricing climate, next gen console development is definitely underway, so at last PC games will look the part again, albeit at a very demanding cost, and it's already June...

...which means we will be in for game engines that can do this:


That is another game engine, not a cut scene. PC isn't on the list, but fingers crossed that's a sign of things to come next year.

I would buy the next gen console to play that... Looks awesome.
 
The New Dawn demo is made up of approx 4 million triangles. What about the insane amounts when using tessellation? PCs are not limited compared to consoles when it comes to sheer grunt; the suggestion is ridiculous.

Are you suggesting a 680 would be outperformed by the GPU in a PS3 when rendering a maximum number of polygons?:/

The Endless City demo by Nvidia pushes 1.3 billion polygons with tessellation, but raw polygon count isn't the issue; it's the number of unique geometry calls (objects) that the PC can draw per second compared to consoles, AFAIK.

Say you have a forest: a PC could render more detailed trees using tessellation, but a console could render a forest with not only more trees but more unique trees, whereas the PC would have to copy many versions of the same trees.

AFAIK this hasn't been an issue this generation because the current consoles don't really have enough grunt to fully use this to their advantage.
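
To put rough numbers on that, here's a tiny back-of-envelope model; the per-call cost is an assumed, purely illustrative figure, not a measurement, but it shows why tens of thousands of unique draw calls blow a 60fps frame budget while instanced copies of the same mesh barely register.

```cpp
#include <cstdio>

int main()
{
    // Assumed, illustrative figures only: real per-call overhead depends on
    // the driver, the API and how much state changes between draws.
    const double usPerDrawCall = 2.0;      // CPU + driver cost per draw call
    const double frameBudgetUs = 16666.0;  // one frame at 60fps, in microseconds

    const int uniqueTrees = 20000;         // every tree submitted as its own call

    double uniqueCostUs    = uniqueTrees * usPerDrawCall;  // 40,000 us: over budget
    double instancedCostUs = 1 * usPerDrawCall;            // one call for all copies

    std::printf("%d unique draws: %.0f us of a %.0f us frame\n",
                uniqueTrees, uniqueCostUs, frameBudgetUs);
    std::printf("instanced copies: %.0f us total\n", instancedCostUs);
    return 0;
}
```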
 
The New Dawn demo is made up of approx 4 million triangles. What about the insane amounts when using tessellation? PCs are not limited compared to consoles when it comes to sheer grunt; the suggestion is ridiculous.

Are you suggesting a 680 would be outperformed by the GPU in a PS3 when rendering a maximum number of polygons?:/

Did you even read the two quoted articles?

If I turn your statement around and you were to throw a 680 into a PS3, what do you think would happen if you ran the Dawn demo alone?

The Dawn demo is exactly that: it's a demo, not a game engine, and what the demo can do won't be seen in-game for years to come at best.

I don't do console gaming, but I'm not blind to the tech. Yes, the 680 can do this and that, but the PC is fundamentally held back by gimped DX code.

Carmack has said it, AMD have said it; I don't have a clue whether Nvidia got in on the discussion at the time or not.

For the true power of the 680 to shine, a custom gaming OS would have to be made, and boy would it be a sight to behold; it would absolutely annihilate the next consoles.

If AMD and Nvidia worked together and made a gaming OS (with always-on DRM), Sony, MS and Nintendo would be in for the biggest shock ever!

But that will never happen as it would kill gpu sales.

I would buy the next gen console to play that... Looks awesome.

It certainly does look the part, gregster. Considering the amount of money/time thrown into my PC since the Xbox arrived, I might just have to think about it myself.
 