
New Nvidia card: Codename "GF100"

Not sure you can compare this to AMD's Fusion or Intel's version of Fusion. You can't run Windows, or any OS for that matter, on this, not from the looks of it.

Well, you won't be running windows on intel's larrabee either. It's very much a GPU with enhanced control logic, just like this beast seems to be.
 
I thought larrabee and the thing where intel puts a gpu and cpu together were two separate things.

Ah, right. I didn't realise that intel were working on a combined GPU-CPU silicon right now. If so, then it will definitely be separate from larrabee, which is more of a pseudo-GPU. Got any links out of interest?
 

Oh, Clarkdale. That's more of a die-shrink of Nehalem. It has some on-chip graphics capabilities, but nothing on the scale of a modern GPU. It's designed to replace onboard motherboard graphics, rather than any high-end GPUs for rendering or general-purpose computing. AMD's Fusion should be more of a "true" GPU-CPU combination as I understand it.
 
http://www.youtube.com/watch?v=UlZ_IoY4XTg
Tesla S1070
RRP £1,125.85
This thing has many of the innovations that Fermi has: NVIDIA Parallel, double precision, modified kernel.
Doesn't look good for Fermi's price :(

There's always a massive markup for HPC-grade parts, even more so than for the Quadro series. You can expect the Fermi HPC parts, with 3GB and 6GB of memory, to still go for well over £1,000. The graphics / gaming retail part will still be priced to compete with ATI's cards though.

I'm a bit less skeptical than most of you guys, regarding the direction nvidia are heading. I work with a lot of people who do scientific modelling (FE / FV simulations of solid and fluid mechanics etc), and who would love to port their codes to GPUs. So far it's just been that bit too much effort for the potential rewards in most cases, but with this card running c++ natively, having a shared cache and error checking, and being able to run double-precision at "sensible" speeds... I really think that the interest could snowball rapidly. This card really does implement all the features that people have been asking for.
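For what it's worth, here's a minimal CUDA sketch of the sort of double-precision building block those FE/FV codes lean on (names and sizes are purely illustrative, not taken from any real solver):

```cuda
#include <cuda_runtime.h>

// Double-precision AXPY (y = a*x + y): the kind of kernel that turns up all
// over FE/FV linear solvers. Illustrative only.
__global__ void daxpy(int n, double a, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    double *x, *y;
    cudaMalloc((void **)&x, n * sizeof(double));
    cudaMalloc((void **)&y, n * sizeof(double));
    // ...fill x and y with real data here...
    daxpy<<<(n + 255) / 256, 256>>>(n, 2.0, x, y);
    cudaDeviceSynchronize();
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```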

Of course, if the thing can't compete with the 5800 series nvidia will lose a lot of their core business. But if it can at least compete, I think they could well be onto a winning solution, losing a small amount of their graphics share in order to dominate a new and expanding market. Only time will tell, but I can guarantee at least a few dozen will be sold at my place of work...
 
It's a fair point about the scalability of the architecture. Not sure how they will be able to get around it without producing a different, cut-down architecture for the mid-range market. And that would probably be a pretty expensive solution...

Perhaps nvidia will stick to the G200 architecture to fill up the mid-range and low-end segments? A 40nm version of the GTX285 would make a cracking mid-range card (though it wouldn't be DX11 compatible which I guess would be an issue).
 
I notice there was no mention of a dedicated tessellator. Anyone think that the GTX 3xx cards may not actually be 100% DX11 compliant? Or perhaps Nvidia is busy lobbying M$ again.

The chip should be sufficiently general-purpose that no tessellation-specific hardware would be required. It'll simply be a case of sending a few instructions to the GPU. After all, the specs include a "special function unit" for each of the 16 SMs, which is to be used for "transcendental math and interpolation". Sending a few appropriate interpolation calls to these units should be sufficient to perform tessellation.

Remember that this thing is flexible - it can even execute C++ code. The one thing we don't have to worry about any more is whether it will support specific features. It may lose some efficiency by being so general, but feature support will not be an issue.
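Just to make that concrete, here's a toy CUDA sketch (purely illustrative, not how a driver would actually implement DX11 tessellation) showing that the core operation, subdividing geometry by interpolating between vertices, is ordinary floating-point work that a general-purpose shader core can execute:

```cuda
#include <cuda_runtime.h>

struct Vec3 { float x, y, z; };

// Toy "tessellation" pass: for each input edge, emit its midpoint by plain
// linear interpolation. Real tessellation is far more involved, but the
// arithmetic is of exactly this kind.
__global__ void edgeMidpoints(int nEdges, const Vec3 *a, const Vec3 *b, Vec3 *mid)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < nEdges) {
        mid[i].x = 0.5f * (a[i].x + b[i].x);
        mid[i].y = 0.5f * (a[i].y + b[i].y);
        mid[i].z = 0.5f * (a[i].z + b[i].z);
    }
}
```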
 
No hardware tessellation is bad, bad, bad... The 5870 in an optimised DX11 + tessellation game (which is not out yet) is going to give it a spanking, with hardware tessellation built in. Nvidia doing it for themselves. Software tessellation, rofl.

I don't think you're understanding how this unit works. There are NO "specialised" units of any kind. It's all floating-point arithmetic. Tessellation will be performed in exactly the same way as rendering, or physics, or any other GPGPU computation. This is the main strength (and also possibly the main weakness) of the new architecture.

It's not a "software" solution any more than rendering or physics is software.
 
I don't think you understand. To make it compatible with DX11 tessellation, they are going to have to run it in software.

"Performance = X without tessellation. On Cypress, performance with tessellation on = X * a number greater than one. On G300, it is * a number less than one"
http://www.semiaccurate.com/forums/showthread.php?p=7186

Hence a big hit. TWIMTBP yeah right!

:confused:

That doesn't make any sense at all. Why would tessellation be run in software? It's all done as calls to the various GPU floating-point units. Which is hardware. Just the same as rendering (i.e. application of shaders).

Did you read the literature on the new chip? Why would it be less powerful on this card than on Cypress? If the total floating-point performance is higher, then subject to architectural efficiency it will be able to perform more tessellation operations, just the same as rendering operations, or any other kind of arithmetic.

I suggest you read a little more.
 
You're clearly quite confused.
This card can run a PS3 / Xbox game if the software is able to tell it what to do. You need software to tell GT300 to do DX11 tessellation. It won't do it by itself. This is just one big programmable brick with a lot of power.
You need software to tell the hardware how and what to do! Same as with any hardware, whereas the 5870 doesn't need that software but is hardware based. Hence a spanking once we get tessellation in the games.

Can't do any more than actually draw you a diagram..................................:confused:

When you want to render a shader, you send an instruction to the GPU, which executes it. When you want to tessellate a mesh, you send a different set of instructions to the GPU. In both cases, the computations are performed within the GPU. In hardware. There is no more of a software component with either piece of hardware.

ATI may have a tessellator which sits separate from the rest of the shader pipes, at the back end, but so does Fermi (check the 'special function unit'; a description is given in the AnandTech article). In the case of Cypress this performs only tessellation, but in the case of Fermi it can be configured to perform a variety of interpolation arithmetic.

Whether or not the effect of the tessellator is included in the total FPU performance number of either piece of hardware is irrelevant (especially since we don't have floating-point performance numbers for Fermi yet). In both cases the extra unit will remain inactive if tessellation is not used, and become active when it is used. There is nothing to suggest this would have an impact on performance for Fermi.

Of course, it might turn out that the tessellation unit on Cypress is more powerful than the special function units on Fermi, but then again it might turn out to be the other way around. Since Fermi is currently vapourware, there is no way to tell.


By the way, I won't rise to your flame-baiting. I know what I'm talking about; I use GPUs for coding almost every day. You can try to tell me otherwise, but it changes nothing.
 
There still has to be fixed-function hardware in there somewhere - discrete hardware can normally do a much more power-efficient job than a software-based version. For instance, texture fetching and filtering is far faster and easier in hardware than spending the time coding it, especially anisotropic filtering!
Doesn't anyone remember how slow AA was in the ATI 3xxx series since it was done in the shaders?

This is a fair point, but it applies to *everything* about the Fermi architecture. Having a more general and programmable architecture will tend to increase flexibility at the cost of reduced performance.

Consider, though, that when G80 was announced as having programmable pipes, people were worried that it would not be able to compete with the old fixed-function GPUs at traditional pixel and vertex shading (for exactly these reasons). We all know how that turned out.
 
That'd be great if tessellation were the only thing it was doing, or if the GPU had unlimited resources - unfortunately if the scene were large and complex, doing a lot of tessellation on the shaders may just bog down the GPU and hence hurt performance.

If the special function units are used for anything else in-game, then yes - it could take away from the performance. However, they are not likely to be used in rendering. Hardware physics calculations are the only other things which are likely to need interpolation or transcendental math calls.
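As a made-up example of that kind of workload: in CUDA, fast-math intrinsics such as __sinf and __expf are serviced by the special function units, so a physics-flavoured kernel along these lines is exactly the sort of thing that would compete for them:

```cuda
#include <cuda_runtime.h>

// Hypothetical physics-style kernel leaning on transcendental calls.
// __expf and __sinf are executed on the special function units.
__global__ void dampedOscillator(int n, const float *t, float *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __expf(-0.1f * t[i]) * __sinf(6.2831853f * t[i]);
}
```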
 
So given that nVidia have designed Fermi as a general-purpose microprocessor first and a 3D accelerator second, nVidia obviously feel that the future of the PC as a gaming machine is in question? I wonder what timeframe they're projecting for the death of PC gaming?

All client-side gaming has a limited lifespan.

Eventually we will all be streaming our games over the internet, with all the rendering done server-side. How long this will take to become the norm is anyone's guess, but there are already plenty of things in development which don't require any local rendering.
 
Wouldn't that mean we would have no need to upgrade our processors or graphics cards any more for gaming??? And wouldn't this also impact PC hardware sales???

Well, processors and memory etc would still be important for general everyday tasks. Graphics cards would become much less important.

It certainly would have a big impact on PC hardware sales, but at the end of the day this can't be avoided - it's the demand for hardware that determines supply. It's in game developers' best interests to have games run server-side for obvious reasons (uniform hardware spec, they can charge subscriptions for continuous income, no need for expensive distribution, everyone can have access without buying a console or expensive PC hardware, etc.). So, once games are run in this way there is much less need to produce high-end graphics hardware.

Sucks balls if you ask me - it will encourage 'lowest common denominator' gaming, and your framerate will always be at the mercy of the server speed, which will be affected by the number of people playing. But still, it's coming, and this might be one of the reasons why nvidia has chosen to go down the GPGPU route instead. There will be a market for these kinds of cards long after traditional consumer graphics cards are dead.
 
No way that's gonna happen, unless they magically develop faster-than-light communications... or gaming becomes so dumbed down that quick reactions and skill lose meaning.

Well, we can already get "good" latencies in online gaming. Sub 20ms is common with fast servers in this country, and is much faster than the human reflex time.

What we're missing is reliably high data transfer rates. But once you can reliably stream a high-def 720p video over the internet, you can also stream a game in 720p. We're not there yet, but we aren't too far off.
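A rough back-of-the-envelope check (the bits-per-pixel figure is an assumption, not a measured number) suggests 720p at 60fps with H.264-style compression lands around 5-6 Mbit/s, which is not an outrageous ask:

```cuda
#include <cstdio>

// Crude bitrate estimate for streaming a 720p game feed.
// 0.1 bits per pixel is an assumed compression ratio, nothing official.
int main()
{
    const double width = 1280, height = 720, fps = 60, bitsPerPixel = 0.1;
    const double mbps = width * height * fps * bitsPerPixel / 1e6;
    printf("Approx. %.1f Mbit/s\n", mbps); // ~5.5 Mbit/s
    return 0;
}
```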

I hate the whole ****ing idea personally, but what can you do? We can't fight technological progress.
 
You don't think consoles will make PC gaming redundant?

Consoles would be made redundant by streaming technology, in the same way that PC gaming would.

Why have expensive computation and rendering equipment in your living room when you can just have a tiny, silent box to connect you to the internet and stream the game to your TV?
 
Cloud gaming is pie in the sky. A GPU for every person playing? A datacenter nightmare, just for the costs alone.

Not necessarily - remember it will all be subscription-based, so the cost is covered by effectively renting the GPU you use for as long as you play.

We'll have to wait and see how it develops, but in principle it should be cheaper overall on the hardware side of things. By centralising the processing, each GPU can be kept in operation close to 100% of the time. With each end-user having their own GPU, it only gets used when you actually choose to play a game, which is going to be at most one or two hours per day (well, for most people anyway!). Hence the total number of GPUs needed to keep everyone gaming is less than if we all have our own.
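As a purely hypothetical worked example (every number here is invented just to show the ratio):

```cuda
#include <cstdio>

// Invented figures: 1 million subscribers, ~2 hours of play each per day,
// shared GPUs kept ~80% busy around the clock.
int main()
{
    const double players = 1e6, hoursPerPlayer = 2.0;
    const double hoursPerSharedGpu = 24.0 * 0.8;
    const double sharedGpus = players * hoursPerPlayer / hoursPerSharedGpu;
    printf("Shared GPUs needed: ~%.0f (vs %.0f if everyone owns one)\n",
           sharedGpus, players);
    return 0;
}
```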
 
The problem is really the centralisation process though - it's infeasible. If you want everyone to have a decent gaming experience, you're going to have to build a lot of local servers, which will cost a lot to maintain and staff. If you want total centralisation, or even partial centralisation (say, one server per country), the system will suck for a lot of people.

I imagine it'll work out that you have multiple server farms per country, operated by independent companies who buy massive numbers of licenses for the various games, with subscription revenue that's shared with the game publishers. But yeah, you have a point about the staffing costs. It's hard enough keeping a relatively small CPU cluster working constantly without 24/7 attention from well-trained personnel.
 