First Nvidia GT300 Fermi pic.

What a total waste of time. How long did that take you, a couple of hours?

I notice a lot of guys on these forums always having a dig at Nvidia, but when someone starts bashing ATI cards, people get ripped apart...

If people don't like Nvidia products, then they should just stay away from any threads related to them, and the same goes for ATI haters...

All this fanboi crap really bugs the hell out of me; there really isn't any need for it at all.

I didn't do the picture; I found it over at nvnews and found it amusing. People need to lighten up. It's a joke poking fun at the fact that Nvidia have rebadged graphics cards multiple times.
 
It's only a bit of fun, and anyway, if the ATI fanboys get their kicks out of doing it, it just shows how bothered they are that Nvidia will be back at the top of the GPU market again for the 4th year running ;)
 
Everyone knows they rebadged several of their cards into the next generation, but that doesn't change the fact that their top-end stuff is usually pretty amazing...

Like I said though, all this fanboy stuff is pointless, and childish too.

Anyway, back on topic: I just saw a couple of other videos on YouTube with them demonstrating some of the physics capabilities of this card, and it looks pretty awesome.

I will look forward to the release of these cards, though I won't buy one when they first hit the shelves; I will wait until the prices come down a little before I decide which card to purchase.

Though I have been tempted on two different occasions to go ahead and buy a 5870, I will resist the temptation, as I would like to see how these cards compare first.
 
Well, it makes sense to wait a bit longer to see how this new GPU does. I have been the same and have been tempted to get the ATI cards, but I will also wait to see how and when they are going to release these cards before I rush out and buy either...
 
It's only a bit of fun, and anyway, if the ATI fanboys get their kicks out of doing it, it just shows how bothered they are that Nvidia will be back at the top of the GPU market again for the 4th year running ;)

Considering the new Nvidia card probably won't be out till next year, I'd hope it was faster.
 
Well, you never know, but I have never been disappointed with Nvidia cards in the past, and I'm quite happy to keep my GTX 285 XXX Extreme till they do release a card. I'm in no rush :)
 
Hasn't Nvidia secured a deal to produce a supercomputer using Fermi that is 10 times faster than the current leading Jaguar (powered by AMD Opterons), and is being hailed as one of the most significant leaps in HPC progress?

Ten years is a long time. I don't see their competitors staying relaxed about it.

Universities and companies want a mid-range, "cheap" supercomputer, as the modern desktop can't cope with the size of dataset they want to process.
Mark Harris has been working with universities to mature the GPGPU market. The result is a myriad of university projects, and now commercial research, using GPGPU to process medical scanner data, oil exploration data, etc.
This hasn't been one-sided, but first I want to break off and look at the strategy in the toolsets.


GPGPU originally worked by loading your data into OpenGL textures (yes, the same textures games use), then programming the fragment shaders with your data-processing program before rendering the texture to an off-screen buffer. Bingo: we have computation.
Researchers started picking this up, coming up with ways to minimise the limitations when doing reductions (ping-pong between buffers, etc.) and creating ways to manage data efficiently. Brook and RapidMind (acquired by Intel) were born to minimise the differences between nVidia and ATI.
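
To make the ping-pong reduction idea concrete, here's a minimal sketch - written in CUDA rather than the original GLSL fragment shaders, purely for brevity. Each pass does a pairwise sum from one buffer into the other, halving the data, and the two buffers are swapped between passes. The kernel and variable names are just illustrative, not from any particular toolkit.

```
// Illustrative sketch of a ping-pong reduction (sum of N floats).
#include <cstdio>
#include <utility>
#include <vector>
#include <cuda_runtime.h>

__global__ void reduce_pass(const float* in, float* out, int n_out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n_out)
        out[i] = in[2 * i] + in[2 * i + 1];   // pairwise sum: each pass halves the data
}

int main()
{
    const int N = 1 << 20;                    // assumes N is a power of two
    std::vector<float> host(N, 1.0f);         // all ones, so the sum should equal N

    float *a, *b;                             // the two "ping-pong" buffers
    cudaMalloc(&a, N * sizeof(float));
    cudaMalloc(&b, N * sizeof(float));
    cudaMemcpy(a, host.data(), N * sizeof(float), cudaMemcpyHostToDevice);

    for (int n = N; n > 1; n /= 2) {          // read from a, write to b, then swap
        int n_out = n / 2;
        int threads = 256;
        int blocks = (n_out + threads - 1) / threads;
        reduce_pass<<<blocks, threads>>>(a, b, n_out);
        std::swap(a, b);                      // the ping-pong buffer swap
    }

    float result = 0.0f;
    cudaMemcpy(&result, a, sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("sum = %f\n", result);        // expect 1048576.0

    cudaFree(a);
    cudaFree(b);
    return 0;
}
```

In the old texture-based approach, the same pattern was two off-screen render targets being rendered back and forth with a fragment shader doing the pairwise sum.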

Obviously it makes sense, if you can control the software toolkit, to adapt the platform to GPGPU - hence CUDA was born.
nVidia's approach is identical to Apple's: a high-cost, closed platform with a "Boom, it works" approach (we'll forget the early driver issues!).
CUDA became the language (much as Objective-C is to Apple). Nexus will become the de facto closed toolkit, akin to Xcode on Apple with Objective-C/Cocoa (yes, I know it runs GCC at the bottom - which Apple gets for free - but how many platforms actively use Objective-C instead of C/C++?).
It's a faster, more flexible but more costly toolchain to produce and maintain. It has to be free to start using; naturally they claw the cost back in the high-priced hardware (specifically, the more expensive Tesla range is required).
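
For anyone who hasn't looked at CUDA, this is roughly the shape of it: a C-like kernel plus a handful of runtime calls on the host. A minimal, illustrative SAXPY (y = a*x + y) sketch - not taken from any nVidia material, just the usual hello-world of the platform, and incidentally the multiply-and-add pattern GPUs are built for.

```
// Minimal, illustrative CUDA example: SAXPY, y = a * x + y.
// One thread per element; the host copies data to the card, launches a
// grid of thread blocks, then copies the result back.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 16;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("y[0] = %f\n", hy[0]);        // expect 4.0

    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```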

ATI follow a more Linux-style open approach, so they produced Close to the Metal (CTM), which basically allowed you to hand-code GPU assembler. Open, but so low-level that nobody really wanted to touch it (switch GPUs and prepare to recode to optimise!).
I was a registered CTM developer and looked at combining GCC and CTM; however, the GCC chain is so basic and olde-worlde that it's not suited to SPMD programming, as all the analysis and internals are geared towards small SIMD optimisations.
Anyway, I digress. The key here is that there is currently no standard GCC toolset to underpin OpenCL, so there is a small lag as products appear.

Apple could see this being a useful technology for their media processing and started work on OpenCL, eventually handing it to Khronos.
AMD could see the value and dropped CTM.
It's a slower toolchain to produce, as the standard becomes subject to design by committee. However, it's cheaper and more likely to deliver a wider range of development environments suited to university and company budgets.

So now we have CUDA/Nexus vs OpenCL/<random toolkits> as the implementation platforms. Although nVidia support OpenCL, I can see it becoming a second cousin to CUDA/Nexus.


So, back to the part about being hailed as a major leap. It's a leap forward, but not solely nVidia's. SGI were doing this before nVidia existed (Mark Harris, IIRC, was part of SGI).

The next question that needs to be asked is: what's the market size (financially)?
Universities by their nature are interested in low cost - is Fermi low cost when compared to the expected OpenCL-compliant AMD/Intel platforms?
When AMD/Intel focus on the remaining discrete GPU market, nVidia could find its cash source squeezed. The same will occur in the commercial GPGPU market, as there's money to be made in this blue-ocean market.

10 years is a long time for nVidia's competitors to come up with competitive products in the GPGPU space. In this time it has to safeguard its share - this is where the proprietary toolkit comes in, making it too costly for commercial deployments and applications to switch vendors.
This is the reason that nVidia don't like a level playing field such as DX and OpenCL.

For larger installations still - supercomputers - the entire platform is proprietary, and as long as the development toolkit delivers, it's an easy life for upgrades and expansion.
The thorn is that the majority of code is C/C++ and x86 SIMD-orientated. Again, I can see the move to supporting C++/CUDA here. The requirement for logic operations needs nVidia's CPU programme to bear fruit, although the majority of processing is multiply-and-add (the key GPU instruction). Their weakness is a lack of experience with interconnect performance between GPU nodes. They may opt, if sensible, to partner with Cray in the mid-term.

AMD have some experience in this space, and I would expect Cray to partner in the open alliance to create supercomputers using AMD CPU and GPU technology, as they have done in the past...

So although it's heralded as a "significant leap", it should really be noted as a leap in market potential. However, it is a market which borders the territory of some big companies, who may not be first to exploit it but, when they arrive, will quickly apply pressure by stretching their product portfolios down and leveraging their knowledge of the supercomputer market.

In this scenario I would be careful of an initial "success" blip for nVidia as they see sales rise in the fresh market, only to fall as the major competitors arrive. They do have a lock-in device, but it's the market cashflow that will really see them succeed or fail, and I think 10 years is an awfully long time in a tank without partnering with some sharks.
 
Sabrefox - you may want to see this thread, which shows the move happening with physics engines.

This underpins games developers' move to DX11 and OpenGL/OpenCL platforms for the likes of the Xbox and desktop systems. Thus the generation of new toolkit components for the open platforms has already started to erode nVidia's market (they'll be pushed to support these standards to maintain their market share).

It may be that Nexus supports OpenCL and DirectX in future, and CUDA slowly, quietly falls into the shadows.
 
That's not the way you launch your most advanced, company-defining product...

It is when your major rivals are getting all the publicity - well, that's my theory anyway. I just feel Nvidia were desperate to deflect attention away from AMD's DX11 products, which is actually kind of ironic, inasmuch as Nvidia have already said they don't feel DX11 is the primary driving factor behind their new range of cards.
 
It is when your major rivals are getting all the publicity - well, that's my theory anyway. I just feel Nvidia were desperate to deflect attention away from AMD's DX11 products, which is actually kind of ironic, inasmuch as Nvidia have already said they don't feel DX11 is the primary driving factor behind their new range of cards.

Of course - it did, however, take attention away from the 5850's launch (or at least the day the reviews went up) rather successfully.
 
I don't understand it at all. Surely, with all the technical knowledge at their disposal, they could have come up with a better-looking mock-up than that: a completely closed-in guard around the card front and back, or an aluminium box round the whole thing, or something. Heck, even an unlabelled GTX 285 shroud on a card would have been a better idea than what they showed us. Surely they can't have done this deliberately just to draw attention away from the opposition; it's just too bizarre for words.
 
Of course - it did, however, take attention away from the 5850's launch (or at least the day the reviews went up) rather successfully.

They needed to provide a technology update for customers, hoping the promise of what is to come stops them from switching. It's standard practice to time this for maximum impact against any competitive threat announcement.
Frankly, nobody cares about the mock-up; he could have held up a Kellogg's box model. The key is delivery of the technology. The first standard customer questions on seeing the slide of features will be: when? and how much?
Additionally, customers will now wait before spending further money, as they don't want the old platform.

If you are AMD, you have a secondary surprise after the first announcement; the DirectX Compute/OpenCL physics engine is one such follow-on, but the killer would have been if they could demonstrate supercomputing in a box, then draw back the curtain to show an even bigger box built in partnership with Cray.

I did note the almost identical copy of an Apple event, right down to the black top and relaxed clothing. I'm quite surprised he didn't mention "Boom, it just works" and "One more thing". Quite humorous considering the spat between nVidia and Apple. :D
 
Frankly, nobody cares about the mock-up

Speak for yourself. I feel sorry for the people who are waiting for this card when they could be spending money on a 5800 and enjoying the same performance TODAY. After all, the people who have the money to spend on a Fermi "GTX 380" can also afford two 5870s.
 
Speak for yourself. I feel sorry for the people who are waiting for this card when they could be spending money on a 5800 and enjoying the same performance TODAY. After all, the people who have the money to spend on a Fermi "GTX 380" can also afford two 5870s.

However, once the side is on the machine, do you look at the graphics card?

nVidia's CEO didn't state it was available 'now', though. So it could be four to six months before it comes out, as they've only just got the system up and running by the looks of it. I suspect there's obscene pressure to be ready, in quantity, for the Christmas season.

There's nothing stopping people buying 58xx card(s) today (given the current stock issues, within a week or two). So you have a choice - take a DX11 58xx now or wait for nVidia's release date. In the end it boils down to whether the games are available to make use of the cards and put pressure on consumers to upgrade over Christmas.
 
That's a dirty thing to do; showing off nothing more than a cut-up card is ridiculous. I'm waiting on my next graphics card - whether it'll be Nvidia or ATI I don't know at the minute - but I'm only waiting a few more weeks, and I'm hoping there's some choice by then.
 
One of the first things I do when buying a new card is to go back to "old" games and max them out, or run around with 60fps vsync, etc. It's down to free time and personal interest, of course, but experiencing the game as perfectly as the developers intended can be very satisfying.
 
One of the first things I do when buying a new card is to go back to "old" games and max them out, or run around with 60fps vsync, etc. It's down to free time and personal interest, of course, but experiencing the game as perfectly as the developers intended can be very satisfying.

Yeah, I'll vouch for that actually - like when I got my X1650 Pro, being able to run UT2004 on max was awesome, and when I got my 4870, BioShock, for example.
 