
First Nvidia GT300 Fermi pic.

Hmm, buy a 5870 now or wait until Nvidia release the GT300? I had a quick skim through that AnandTech article and it seems Nvidia will have the pricing more in line with ATI now.
 
Hmm, buy a 5870 now or wait until Nvidia release the GT300? I had a quick skim through that AnandTech article and it seems Nvidia will have the pricing more in line with ATI now.

I hope so too, but I'm still not keen on the idea of buying a card from a company run by monkeys who like to nick your money for their silk-lined pockets :mad: and sabotage games :mad:
 
Nvidia won't in any way have pricing in line with AMD. Well, OK, it's plausible, but I can't see how they can afford it: the core will be not far off twice as big, with far, far worse yields by all accounts. It's entirely possible they'll just eat the entire extra cost and sell at cost, but that only helps ATi, who have much more leeway for price drops. No matter how low Nvidia go, ATi will make more profit at any given price point on a smaller core and cheaper PCB/memory.

Not forgetting AMD basically funded the development of GDDR5, so Nvidia will likely be paying a premium there too. On top of that, the larger bus means odd memory sizes: either it goes for the same memory amount the 8800GTX had, or, as we've seen, 1.5GB, so a further increase in cost.

Likely Nvidia will come in close to AMD's current prices with no profit, and AMD will just drop prices further, probably with far higher yields as well, by Jan-March when the GT300 launches.

We've not seen performance numbers largely because, with yields this crappy, they really are months away from even knowing what final clock speeds they can launch at. I'm not convinced they will be that competitive (more so in DX11) with decent clock speeds.

Again, Nvidia are being entirely screwed by TSMC in exactly the same way ATi were on the 2900XT: leakage means clock speeds are a major issue. It's even worse for Nvidia because they run the shaders at twice the speed (or more) of the front end, so the high shader speeds make the leakage problems worse. TSMC screwed everyone this round, again.
 
The card will not be available to the HPC sector first though. I think the only reason NV are mentioning this stuff first is because it gives them a lot more to show off. Their cards can do a lot more than ATI's now; they are not just adding DX11 stuff. The new compute capabilities and C++ support mean that these GPUs can also do a lot for the home user. A lot of software can now take advantage of the GPU, and I'm not just talking about speeding up video encoding and decoding. These cards can offer way more.

Can I ask for a few (say 5?) examples of what software you think can be massively sped up by this card? It would need to be massively parallelisable and heavily FPU dependent. Yes, there are a few things, but not what I would class as 'a lot', or in fact that many very common things...

Anyway, some random thoughts about the card. Obviously this is a Tesla press release, very light on specific graphics information, but there's some good info that gives clues (good and bad) as to what to expect from the GTX3xx's.

An absolutely massive core, ~48% bigger than the 5870, plus 50% more memory (I doubt they'll go for 768MB on the high-end card) means it's going to be a power hog, or be relatively low clocked, I fear.

Obviously, raw power has been roughly doubled compared to the GT200, which is quite similar to the 5870 (and note that the 5870 didn't achieve double the performance, even given the same clocks). And they've added quite a lot of extra complexity to improve HPC that will also lead to improvements in graphics; however, there are definitely areas that are very HPC-specific that I can't see improving graphics at all, effectively being 'useless' die space as far as we're concerned.

On that note, I'm not sure about their one-chip-for-two-semi-related-purposes tactic. A pure HPC-oriented chip OR a pure graphics chip would likely be smaller/quicker than the middle ground, but at least this way they keep their foot in the graphics market whilst also getting into the HPC market. I think eventually they'll either split the chips up or go fully one way or the other though.

It's amusing they couldn't get a nice shiny card to show off; it does give fairly definitive proof that they're running 'late', with Q1 2010 seeming to be the common thought/statement. But they do have working hardware, and they do appear to have a complete working board with software even, so it's not all bad, but there's lots of work left to go.

It'll be interesting to see, and also to see what ATi have in response. It seems likely we'll see the 2GB 5870, the 5870X2 and possibly even a 5890 before we see the GTX3xx, but I'd imagine the GTX is likely to beat the 5870 and 5890 cards, maybe not the X2. But then how close will the GTX release be to whatever ATi's next chip is going to be, and is that going to be another evolution or a brand new architecture, and if so, what are they planning with that?

For the time being I think I'll keep my plans to get a nice 5870 for triple monitor goodness...
 
Just a thought - can't the 5800 series graphics cards execute up to 20 different instructions in parallel? Given this:

SIMD engines operate independently of each other, so it is possible for each array to execute different instructions.

And the 5800 series has 20 'SIMD engines', each with 80 shader cores (giving 1600 shaders in total) - see the thread processing section (1.2.2) here: http://developer.amd.com/gpu_assets/Stream_Computing_User_Guide.pdf

Given this, isn't the 5800 series actually slightly ahead in terms of the number of instruction streams it can process in parallel, given Fermi is supposed to be able to execute up to 16 different kernels at a time? Or am I misunderstanding something? I'm currently under the impression that the previous GeForce architectures basically operate as one massive SIMD array, whereas the Radeon architecture has multiple SIMD arrays that, as described in the paper, can operate independently of one another.
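
To show what I think the Fermi side of the comparison means in practice, here's a rough sketch of launching independent kernels into separate CUDA streams - which is, as far as I understand it, how you'd actually exercise the 'up to 16 kernels at once' capability (the kernels and sizes below are made up purely for illustration):

[CODE]
// Sketch only: two hypothetical kernels launched into separate CUDA
// streams. On Fermi the hardware is supposed to keep up to 16 such
// kernels in flight at once; on earlier GeForces they would run
// one after another.
#include <cuda_runtime.h>

__global__ void scaleArray(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;          // trivial FPU work per element
}

__global__ void offsetArray(float *data, float offset, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] += offset;
}

int main(void)
{
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc((void **)&a, n * sizeof(float));
    cudaMalloc((void **)&b, n * sizeof(float));

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    // Independent kernels in independent streams - the scheduler is
    // free to run them concurrently on a Fermi-class part.
    scaleArray<<<n / 256, 256, 0, s0>>>(a, 2.0f, n);
    offsetArray<<<n / 256, 256, 0, s1>>>(b, 1.0f, n);

    cudaDeviceSynchronize();
    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
[/CODE]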

So, in the interests of learning something - how misled am I here?
 
I'd be worried if you think that's real, as it doesn't look very powerful, does it? :p

Fermi that was running PhysX at Jensen's GTC keynote was real but the one that we all took pictures of was a mock-up.

The real engineering sample card is full of dangling wires. Basically, it looks like an octopus of modules and wires and the top brass at Nvidia didn't want to give that thing to Jensen. This was confirmed today by a top Nvidia VP president class chap, but the same person did confirm that the card is real and that it does exist.

http://www.fudzilla.com/content/view/15798/1/


WHAT DO YOU DO when you have a major conference planned to introduce a card, but you don't have a card? You fake it, and Nvidia did just that.

In a really pathetic display, Nvidia actually faked the introduction of its latest video card, because it simply doesn't have boards to show. Why? Because it didn't get enough parts to properly bring them up, much less make demo boards. Why do we say they are faked? If you look at the pictures, it is painfully obvious that Fermi cards don't exist. Well, painful if you happen to be Dear Leader who waved fakes around and hopes to get away with it, but hilarious if you are anyone not working at Nvidia.

http://www.semiaccurate.com/2009/10/01/nvidia-fakes-fermi-boards-gtc/
 
Fudzilla is worthless nowadays; it's pretty obvious he's taking backhanders from the green goblin when the whole site is plastered with "FERMI TO BE SECOND COMING OF CHRIST, [RANDOM NVIDIA SUIT] CONFIRMS" adverticles.

What makes me laugh are the "confirmations" from random soulless marketing drones.
 
Can I ask for a few (say 5?) examples of what software you think can be massively sped up by this card? It would need to be massively parallelisable and heavily FPU dependent. Yes, there are a few things, but not what I would class as 'a lot', or in fact that many very common things...

Well, I didn't say software would be massively sped up by the card, but that all kinds of software can now use it. Obviously certain kinds, especially in the HPC area, can see a big increase in performance. But my point was that with this card it would now be possible to run almost anything a CPU can.
But for home users, physics computing (and PhysX) will see massive increases. 3D raytracing software renderers could also use it (and there are a ton of those). A plugin for Visual Studio has been demonstrated running on the card, which can also use the card for debugging.
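
Just to give a flavour of the kind of work that maps well onto these cards, here's a toy sketch of a particle update written as a CUDA kernel - purely illustrative, not taken from PhysX or anything shipping. The point is that every particle is independent, pure floating-point work:

[CODE]
// Illustrative only: one thread integrates one particle, so the work
// scales across however many shader cores the GPU has.
#include <cuda_runtime.h>

struct Particle {
    float x, y, z;    // position
    float vx, vy, vz; // velocity
};

__global__ void integrate(Particle *p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n)
        return;

    // Simple gravity + Euler step: pure floating-point maths with no
    // dependencies between particles, which is exactly what a GPU likes.
    p[i].vy += -9.81f * dt;
    p[i].x  += p[i].vx * dt;
    p[i].y  += p[i].vy * dt;
    p[i].z  += p[i].vz * dt;
}

int main(void)
{
    const int n = 1 << 20;                 // a million particles
    Particle *d_particles;
    cudaMalloc((void **)&d_particles, n * sizeof(Particle));
    cudaMemset(d_particles, 0, n * sizeof(Particle));

    integrate<<<(n + 255) / 256, 256>>>(d_particles, n, 0.016f);
    cudaDeviceSynchronize();

    cudaFree(d_particles);
    return 0;
}
[/CODE]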

Intel's Larrabee will also be a cGPU, just like this new NV card. It's the next evolution of the GPU. If ATI want to stay in the game they will have no choice but to move in this direction, which they are doing anyway, just not as fast.
 
OK, this is a simple state of affairs. Oddly, I've been researching this market for jobs...

nVidia want to move into the CPU market as the GPU market is shrinking. The mobile market has leaders such as ARM's Mali and Imagination Technologies' PowerVR powering the majority of power-sensitive mobile devices (phones, video cameras etc). nVidia don't have the skills to produce hardware that's power-efficient enough for that market. Note that Intel and Apple have both increased their holdings in Imagination Technologies; ARM are the unprecedented 800lb gorilla in this area - even making Intel look like a kicked spaniel.

For mobile laptop devices (desktop replacements), ATI and nVidia need mobile chips that deliver performance balanced with power. In short, these chips aren't bleeding edge; they're often evolutions of existing models that have been made more power-efficient over time.

Next up is the console market: a 10-year IPR deal in which, once selected, the GPU vendor produces a top-end graphics experience. Over time the pressure to discount and reduce the cost to the console manufacturer reduces margins. Significant upfront R&D costs exist here, although once selected, cashflow is steady. Fail to be selected and you've no return on any investment unless you can take it to another market - the desktop gamer.
Game developers love consoles. It's easier to develop for one platform, and there's a return on investment for squeezing performance out of the platform by optimising for a single chip configuration. Developer skillsets are easy to train and come by.

The desktop gamer market is shrinking. Having a vast array of platforms increases costs, and getting the game experience the designer envisioned is harder.
The desktop market overall is shrinking too. There's enough power in a desktop for 90% of the domestic market to run email, web surfing and Office. People will buy for that reason and settle for a performance level that isn't the highest (even if the kids want the latest, chances are the parents will only fund something 'almost as good'). Only the 'nerd' segment is going to fuss about the latest and greatest graphics card (which game developers will not support for at least a year). That nerd segment isn't large enough to sustain the market by itself.

nVidia's corporate strategy is to be the leader in terms of performance. This means it has to spend massively on R&D to push the boundaries (including moving into the CPU area). Funding for that R&D comes from its sales, so it must capture the big-value area of mainstream GPU sales.
AMD's strategy is simple yet effective. It defined itself not as number one in performance, but as number one in the mid-range market, thus stabbing nVidia right where it hurts. Go for the money.
While nVidia is hurting financially, it places restrictions on their future R&D - across CPUs, GPUs and motherboard chipsets (remember AMD has all three areas).
Now ATI, flush with money, produce products equal in positioning and technology that undercut nVidia's high-end offerings. I'm talking about the 5 series and OpenCL, which cover the GPU and GPGPU areas - just where Tesla and Fermi are about to appear.
nVidia are forced to provide OpenCL compliance to reduce the cost of adopting the platform, although this then makes it easy to switch to AMD's GPUs. Next we'll see a war raging over the little features and standards changes in OpenCL, as AMD and nVidia attempt to secure standards support for features they have and the market wants.

Enter Fermi, with faster double precision, ECC and 64-bit addressing (you want big memory attached!), to capture that supercomputer area. These features *are* what the market requires for GPGPU, and they are how nVidia hope to win the first round of the GPGPU war. AMD will play the 'next best thing' strategy (the majority of useful features and high performance at a lower cost) with the 5 series and OpenCL, slowly eroding nVidia's financials as they push them to continuously innovate with less and less cashflow for R&D.
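
(As an aside, those three selling points are things a host program can actually see. A rough sketch using the CUDA runtime's device-properties call, assuming nothing beyond the fields the runtime already reports:)

[CODE]
// Sketch: report the properties that matter for HPC - double-precision
// capability (compute capability 1.3+), ECC status and attached memory.
#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // first GPU in the system

    printf("GPU:           %s\n", prop.name);
    printf("Compute cap.:  %d.%d (double precision: %s)\n",
           prop.major, prop.minor,
           (prop.major > 1 || prop.minor >= 3) ? "yes" : "no");
    printf("ECC enabled:   %s\n", prop.ECCEnabled ? "yes" : "no");
    printf("Global memory: %.1f GB\n",
           prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
[/CODE]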

So is nVidia doomed? Well, if they use their IPR and patents to force AMD to pay licensing for their innovations, and maintain the lead publicly (relying on the leader position to trigger the reinforcement marketing effect where people gravitate to the leader), then they may be OK. However, Intel, AMD, ARM, Imagination and Cray all hold a minefield of patents (GPGPU isn't new, so prior art exists back into the 70s/80s), and factor in that Intel and ARM hold the real leadership in their market segments, with AMD increasing GPU share... there's only one way. Down. Only a console deal or a supercomputing alliance will save them - AMD will do everything to get the next-generation Xbox, Sony have their own stuff for the PlayStation, and IBM, Fujitsu etc are the undisputed kings of supercomputing.

(hope that makes sense as I couldn't sleep and my brain is just ticking over)
 

 
.... Only a console deal or supercomputing alliance will save them - AMD will do everything to get the next generation of xbox, ...

From what I've heard, ATi are very much front runners for that, especially as MS is not at all happy with Nvidia over DX10.1, among other pressures. It could change obviously, but I think Nvidia really shot themselves in the foot talking crap about DX10.1 and DX11.
 
Hasn't Nvidia secured a deal to produce a supercomputer using Fermi that is 10 times faster than the current leading Jaguar (powered by AMD Opterons) and is being hailed as one of the most significant leaps in HPC progress?
 
I will be waiting for these to get released before I decide which card I will buy next.

Looking at the specs it should be a beast!
 
Nah, this is the real card:

[image: gtx380.jpg]

What a total waste of time. How long did that take you, a couple of hours???

I notice a lot of guys on this forum always having a dig at Nvidia, but when someone starts bashing ATI cards they get ripped apart...

If people don't like nvidia products, then they should just stay away from any threads related to them, and same goes for ATI haters...

All this fanboi crap really bugs the hell out of me; there really isn't any need for it at all.
 