
The first "proper" Kepler news Fri 17th Feb?

Yeah, but it is stopping me from putting cash on the line while there is an unknown variable; 450 quid is enough to justify a two-week wait to avoid severe buyer's remorse.
 
If Kepler disappoints, how would that make the 7970 better? Many people have avoided the 7000 series thus far because the cards offer poor value for money. The potential release of something worse does not make the 7970 any better in itself.

Torres playing crap at Chelsea does not make Drogba a better player. He is what he is.

Poor analogy. A better one would be: If Torres is playing crap, Drogba is more likely to start in his place.

A lot of people are waiting to see the performance of Kepler before making a high-end upgrade. If it disappoints, then people will be more likely to choose the 7970. It's not a case of "people will only upgrade if Kepler meets a certain standard", it's more "people are waiting to see which is the best high-end option".
 
Maximum stable clocks of our card are 1085 MHz core (36% overclock) and 1785 MHz Memory (43% overclock).

We have already seen great overclocking numbers from the HD 7970 and the HD 7950 can top that (when looking at the relative increase). Essentially the card reaches the same maximum clocks as the HD 7970.

I thought the 7970 went up to 1300MHz easily.
 
1300MHz is pushing it for a 7970. Not all cards are capable of 1200MHz+, and you will certainly need to increase GPU voltage to get there. The 1085MHz figure from the TechPowerUp article is at stock volts.

If you look later on in the review, they examine the clock speed increase that can be achieved by raising the voltage. As you can see, they reach 1200MHz at 1.2V and can push a little beyond that at 1.3V+.

This is fairly typical of 7970 overclocking behaviour too. The main difference between the two cards is the 7970's slightly higher shader count (2048 vs 1792).
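
For anyone wanting to sanity-check the percentages in the quoted review, they line up with the usual AMD reference clocks for the HD 7950 (800MHz core, 1250MHz memory) - my assumption here, since the review doesn't restate the stock clocks:

1085 / 800  = 1.36 -> roughly a 36% core overclock
1785 / 1250 = 1.43 -> roughly a 43% memory overclock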
 
I would be happy to pay £399 for the GeForce LED logo alone and £50 for the card, so my new Kepler is only gonna cost me about £50.
If, however, it comes in at £399, then I get mine for nowt. Simples.
 
Apple drops Nvidia Kepler from large numbers of laptops

If you recall our last story about Intel and the Ivy Bridge non-delay, we mentioned that it happened due to a large customer order. Things get interesting when you stop and think about why they would need to make such a change at the last minute.

That customer is Apple of course, which, along with a few others, suddenly decided it needed more GPU power. This may seem a bit odd when you think about it: Apple is going to be using Nvidia GPUs in most of the MacBooks, as we exclusively told you several months ago. All was fine and dandy, and we actually understand why this decision was made; Nvidia's Kepler is probably the better GPU architecture for this round. The only question was whether, given the company's process woes, they could make enough to supply Apple.

Dark clouds started to gather on Nvidia's Q4/2012 conference call last month, when a rather nervous-sounding Jen-Hsun admitted that 28nm was going horribly for the company. This completely validated what we have been saying for months; SemiAccurate moles have been giving a diametrically opposed version of events to the official "unicorns and rainbows" line. Needless to say, we were really curious about how this supply shortage would play out in Cupertino, and about who else would get the short end of the stick to satisfy Apple, a company notoriously humour-impaired about its supply chain.

Well, the answer was given by Apple itself a few weeks ago. Just after TSMC had their mysterious production-line stoppage, Apple changed their orders at Intel. What exactly changed? Apple upped their SKUs from parts bearing awful Intel GPUs to variants with more of those awful shaders. Since those Ivy Bridge CPUs are going into laptops that have a discrete GPU, upping the shader count from 6 to 16 would be a waste; they would never be turned on.

If they are going to be turned on, that would mean the discrete GPU in those machines is either going to be much higher spec'd, or it won't be there at all. Since Nvidia can't supply enough small GPUs, what do you think the odds are of them supplying the same number of larger, lower-yielding ones? There goes that option, leaving only one possibility: the next-gen low- and mid-range MacBooks are not going to have a discrete GPU, only a GT2 Ivy Bridge.

That is exactly what SemiAccurate moles are telling us is going on. Nvidia can't supply, so Apple threw them out on their proverbial magical experience. This doesn't mean that Nvidia is completely out at Apple; the Intel GPUs are too awful to satisfy the higher-end laptops, so there will need to be something in those. What that something is, we don't definitively know yet, but the possibilities are vanishingly few.

Given how late it is in the game, and how long it takes to retool a laptop's physical parts, we don't think Apple will deviate from the current plan at the higher end of the lineup. We also doubt that AMD will get any business out of it, but you never know. Until the complete stoppage at TSMC, AMD was not having yield issues like Nvidia, but that doesn't mean wafer supply was enough to cover their existing demand, much less to take on Apple as a customer at the last minute.

Our analysis indicates that the lower-end MacBooks will simply do without a GPU, the higher-end parts will remain unchanged, and the middle ground will have some models with and some without a GPU, instead of almost all having a discrete Nvidia GPU. Those without will make up a much larger portion of the mix than they would have at this time last month, if there would have been any at all. Since the laptops are not launched yet, and specs have not leaked, presumably no one outside of Apple and Nvidia will know the difference.

For Nvidia, this hard-fought win at Apple just went up in smoke for the reasons we have been warning readers about since last summer. The largest single order for the GPU maker this year was scaled way, way back, though we can't say by exactly what percentage. By the time there are new bids for the next generation of laptops, Haswell will essentially have killed off the segment that Apple would have used, meaning this market is dead forever. It may sound dramatic, but this is the end of the mid-range GPU segment as a standalone part. This most lucrative slice of the market is now on its last legs. S|A


http://semiaccurate.com/2012/03/13/...from-large-numbers-of-laptops/comment-page-1/

More reason for concern?
 
It's just so hard to know tbh.

When Fermi launched they were apparently on death's door. I even read articles titled "The end of Nvidia?" which explained all of the problems Nvidia supposedly had.

Some of them did come to pass, and to that extent the articles were true. Nforce chipsets were one example: not long after those pieces were written, there were no more. That was apparently because Nvidia either did something to upset Intel, or Intel simply didn't want them making any more chipsets and so refused to license the newer sockets to them.

What is odd, though, is that they also stopped making AMD-based SLI/Nforce motherboards at the same time. So in reality it could have been Nvidia that decided to call it a day, rather than the shunning they supposedly got from AMD/Intel.

Then they had problems with their mobile chips, CUDA and PhysX were a large waste of money and resources, and before long they were actually packaging and marketing cards themselves.

I really thought they were in dire straits, but they pressed on and here we are today.

So it's almost impossible to know what the actual situation is, especially the financial one.
 

Sorry, but this is a load of rubbish.

Nvidia's financial data is out there for anyone to analyse, and they are quite healthy (currently); they have never been remotely near death's door or dire straits, despite what some people might claim. There may be very valid concerns about their future cash flow, but they are in a pretty stable situation at the moment and could continue to bleed for years before having to call it a day - plenty of time to get their act together and find new revenue streams, as long as they don't bury their heads in the sand.

CUDA is not a waste of money; it has a strong and rapidly growing industry following. PhysX, it could be argued, is a waste of money, but that's mostly down to the way Nvidia has handled it - there is a need for advanced physics simulation, but Nvidia's approach so far has rather stifled it. And Nvidia doing their own-brand cards was a complete one-off, not an indication of anything else.
 
I don't know about PhysX, but CUDA is far from a waste of money. It is the industry-standard massively parallel processing technology, and newer supercomputers are now being architected entirely around it.

And even MATLAB 2011b (iirc) officially supports it - all the more reason why the engineering and scientific computing communities are going to keep using it more and more.
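
For anyone who hasn't seen it, here is a minimal sketch of what CUDA code looks like - a vector add where each GPU thread handles one element. This is just an illustrative example (the array size and launch configuration are arbitrary), not anything from the posts above.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements - the "massively parallel" part.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;                     // 1M elements, arbitrary size
    const size_t bytes = n * sizeof(float);

    // Host-side data
    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device-side buffers
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // One thread per element, launched in blocks of 256 threads
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);              // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}
```

That copy-in / launch-thousands-of-threads / copy-out pattern is essentially what the higher-level tools mentioned above wrap up behind the scenes.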
 
Not sure if these have been posted:

[Images: nvidia_gtx680ch_1.jpg, nvidia_gtx680ch_2.jpg]

Source: http://www.fudzilla.com/home/item/26308-nvidia-gtx-680-pixellized-in-more-detail
 