New Nvidia card: Codename "GF100"

AMD have certainly protected themselves and ATi with that acquisition, at least in the short to mid term. But if things ever get anywhere near difficult for Nvidia (and a $38m profit is not anywhere near difficult; that is still a profit, with R&D already deducted), that only leaves two companies to pair up.

Intel and Nvidia would be a formidable team.
 
I want to know how they're gonna slim this chip down to fit into every segment. They can't just cut out 3/4 of the shaders to make a low-end product, since a large amount of the chip area would still be spent on features that the targeted buyer won't need.

I reckon they'll have a second die on the go with none of the enterprise features, allowing them to make more economical use of their wafers.

Things which'll go:
- 64-bit address space (each bit added to their program counters and addresses takes up valuable space in the register file, plus the AGUs will generate extra heat)
- 64-bit float support: completely unnecessary for integrated graphics and midrange GPUs
- full IEEE 754 support: supporting all the different rounding modes, formats and exception types will add a lot of stages to their FP pipelines (see the sketch after this list)
- ECC RAM support: only necessary in enterprise products - again, this will be unnecessary for any gaming product and will take yet more die space
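As a rough illustration of the IEEE 754 point, here's a minimal CUDA sketch (my own illustration, not from any Nvidia material) exercising the four rounding modes that fully compliant FP hardware exposes per instruction; a cut-down consumer part could plausibly keep round-to-nearest only:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// The same float addition performed under each of the four IEEE 754
// rounding modes. Fully compliant hardware supports all of these per
// instruction; dropping the less-used modes is one place a cut-down
// die could save logic.
__global__ void rounding_demo(float a, float b, float *out)
{
    out[0] = __fadd_rn(a, b);  // round to nearest even
    out[1] = __fadd_rz(a, b);  // round towards zero
    out[2] = __fadd_ru(a, b);  // round towards +infinity
    out[3] = __fadd_rd(a, b);  // round towards -infinity
}

int main()
{
    float *d_out, h_out[4];
    cudaMalloc(&d_out, 4 * sizeof(float));
    // The exact sum 1.0 + 1e-8 is not representable in fp32, so the
    // rounding modes do not all agree on the result.
    rounding_demo<<<1, 1>>>(1.0f, 1e-8f, d_out);
    cudaMemcpy(h_out, d_out, 4 * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < 4; ++i)
        printf("out[%d] = %.9g\n", i, h_out[i]);
    cudaFree(d_out);
    return 0;
}
```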

I can't see them using the same architecture from top to bottom.

Thoughts?
 
It's a fair point about the scalability of the architecture. I'm not sure how they will be able to get around it without producing a different, cut-down architecture for the mid-range market. And that would probably be a pretty expensive solution...

Perhaps nvidia will stick with the G200 architecture to fill up the mid-range and low-end segments? A 40nm version of the GTX285 would make a cracking mid-range card (though it wouldn't be DX11 compatible, which I guess would be an issue).
 
...

Perhaps nvidia will stick with the G200 architecture to fill up the mid-range and low-end segments? A 40nm version of the GTX285 would make a cracking mid-range card (though it wouldn't be DX11 compatible, which I guess would be an issue).

Maybe that's why they are downplaying DX11...
 
I asked Jonah if that meant Fermi would take a while to move down to more mainstream price points. Ujesh stepped in and said that he thought I'd be pleasantly surprised once NVIDIA is ready to announce Fermi configurations and price points. If you were NVIDIA, would you say anything else?

Jonah did step in to clarify. He believes that AMD's strategy simply boils down to targeting a different price point. He believes that the correct answer isn't to target a lower price point first, but rather to build big chips efficiently, and to build them so that you can scale to different sizes/configurations without having to redo a bunch of stuff. Putting on his marketing hat for a bit, Jonah said that NVIDIA is actively making investments in that direction.

That does suggest that Nvidia have thought of this and can scale the chip down to cheaper price points and lesser cards.

Therefore I would expect to see a GTX380 on launch followed by a cheaper and cut down GTX360 later.

Downside is, it looks like we won't see anything until Q1 next year. Or have I read that wrong?
 
Also - for gamers - how are they going to flog this if they've primarily targeted it at enterprise users? Will we get some (gimmicky?) features like Eyefinity to snare gamers too? Or will they just sell it as "this is 1.5-2x faster than a GTX280"?
 
Also - for gamers - how are they going to flog this if they've primarily targeted it at enterprise users? Will we get some (gimmicky?) features like Eyefinity to snare gamers too? Or will they just sell it as "this is 1.5-2x faster than a GTX280"?

3D games, PhysX, and faster than a 5870.

What else does an Nvidia fanboi need? That will sell enough of them anyway. :p
 
There is something here that should worry everybody: the future of PC gaming looks darker than ever, unless you just buy hardware to boost your e-peen.
 
The current trend seems to be leaning towards multi-platform games, i.e. console ports coded for inferior hardware.

I guess the question is: will games publishers continue to make games solely for the PC? If not, there will be no need for dedicated high-end graphics cards :(

Personally, I can see the GPU being incorporated into, or alongside, the CPU on the motherboard.
 
Most of my facts come from the SemiAccurate forums, mostly from Charlie himself. And let's be honest, he knows a hell of a lot more than you. Go visit sometime and educate yourself. So please, in the nicest way, be quiet. It's a $600 iPod encoder. :-)
I don't like the way Nvidia is running their business anymore. And I think many people would agree.

Did I say I knew more than Charlie? lol. Besides, why go educate myself when you seem to have all the well-educated and unbiased facts right here for me, thanks :D

In any event, I can appreciate that some people may not be happy with the way in which Nvidia do things, but at the end of the day, your posts clearly demonstrate a non-objective viewpoint which serves no useful purpose whatsoever.
 
There is something here that should worry everybody: the future of PC gaming looks darker than ever, unless you just buy hardware to boost your e-peen.


What are you on about?

Buy a motherboard with 4 PCIe slots and the Lucidlogix Hydra chip included.
Buy 2x 5870X2 + 2x GTX380 + Windows 7.

A CUDA/PhysX-crunching, 3D, Eyefinity beast. Awesome!
 
I notice there was no mention of a dedicated tessellator. Anyone think the GTX3xx cards may not actually be 100% DX11 compliant? Or perhaps Nvidia is busy lobbying M$ again.
 
I notice there was no mention of a dedicated tessellator. Anyone think the GTX3xx cards may not actually be 100% DX11 compliant? Or perhaps Nvidia is busy lobbying M$ again.

There are rumours of a software trick to implement the tessellator.
 
I notice there was no mention of a dedicated tessellator. Anyone think the GTX3xx cards may not actually be 100% DX11 compliant? Or perhaps Nvidia is busy lobbying M$ again.

The chip should be sufficiently general-purpose that no tessellation-specific hardware would be required. It'll simply be a case of sending a few instructions to the GPU. After all, the specs include a "special function unit" for each of the 16 SMs, which is to be used for "transcendental math and interpolation". Sending a few appropriate interpolation calls to these units should be sufficient to perform tessellation (see the sketch below).

Remember that this thing is flexible - it can even execute C++ code. The one thing we don't have to worry about any more is whether it will support specific features. It may lose some efficiency by being so general, but feature support will not be an issue.
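To make that concrete, here's a hypothetical CUDA sketch of tessellating one edge purely with interpolation arithmetic on general-purpose units. The Vec3 struct, lerp3 helper and kernel are my own illustration of the idea, not Nvidia's actual tessellation path:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical illustration only: generating the interior vertices of one
// tessellated edge with plain interpolation arithmetic, the kind of work
// general-purpose FP units can do.
struct Vec3 { float x, y, z; };

__device__ Vec3 lerp3(Vec3 a, Vec3 b, float t)
{
    return { a.x + t * (b.x - a.x),
             a.y + t * (b.y - a.y),
             a.z + t * (b.z - a.z) };
}

// One thread per new vertex along the edge a -> b.
__global__ void tessellate_edge(Vec3 a, Vec3 b, Vec3 *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = lerp3(a, b, (float)(i + 1) / (float)(n + 1));
}

int main()
{
    const int n = 7;  // 7 interior vertices, i.e. a tessellation factor of 8
    Vec3 *d_out, h_out[n];
    cudaMalloc(&d_out, n * sizeof(Vec3));
    tessellate_edge<<<1, n>>>(Vec3{0.f, 0.f, 0.f}, Vec3{1.f, 2.f, 3.f}, d_out, n);
    cudaMemcpy(h_out, d_out, n * sizeof(Vec3), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("v%d: %f %f %f\n", i, h_out[i].x, h_out[i].y, h_out[i].z);
    cudaFree(d_out);
    return 0;
}
```

A real tessellator would also have to emit connectivity and honour per-patch tessellation factors, but the core arithmetic is exactly this sort of interpolation.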
 
I've had a brief read through the Fermi article Anandtech published yesterday.

The way I interpret the article, Fermi doesn't appear to be a games-centric chip. nVidia have built this new chip/architecture to try and boost their HPC Tesla business, and the fact that it will be faster in games than the last generation isn't the priority.

It depends on what consumer, game-related products nVidia plan to base on Fermi. The bottom line is that it's going to be expensive, and it remains to be seen how much the Fermi architecture will actually benefit 3D games.
 
The chip should be sufficiently general-purpose that no tessellation-specific hardware would be required. It'll simply be a case of sending a few instructions to the GPU. After all, the specs include a "special function unit" for each of the 16 SMs, which is to be used for "transcendental math and interpolation". Sending a few appropriate interpolation calls to these units should be sufficient to perform tessellation.

Remember that this thing is flexible - it can even execute C++ code. The one thing we don't have to worry about any more is whether it will support specific features. It may lose some efficiency by being so general, but feature support will not be an issue.

No hardware tessellation is bad, bad, bad... The 5870, with hardware tessellation built in, is going to give it a spanking in an optimised DX11 + tessellation game (which is not out yet). Nvidia are doing it for themselves. Software tessellation, rofl.
 
No hardware tessellation is bad, bad, bad... The 5870, with hardware tessellation built in, is going to give it a spanking in an optimised DX11 + tessellation game (which is not out yet). Nvidia are doing it for themselves. Software tessellation, rofl.

Can't knock it until the Fat GPU Sings.
 
No hardware tessellation is bad, bad, bad... The 5870, with hardware tessellation built in, is going to give it a spanking in an optimised DX11 + tessellation game (which is not out yet). Nvidia are doing it for themselves. Software tessellation, rofl.

I don't think you're understanding how this unit works. There are NO "specialised" units of any kind; it's all floating-point arithmetic. Tessellation will be performed in exactly the same way as rendering, or physics, or any other GPGPU computation. This is the main strength (and possibly also the main weakness) of the new architecture.

It's not a "software" solution any more than rendering or physics is software.
 