We don't know prices yet
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
You missed the important part, although you highlighted it - the 1070, whilst replacing the 970, is priced like the 980.
Nvidia are price hiking again.
Now there are some reports that the 1080 will have fewer CUDA cores than the 980 Ti, with a similar number of transistors to top-end Maxwell. I know the clocks can go up, but will that offset the loss of CUDA cores (which will themselves be somewhat enhanced, if at all)?
I think the draw to the 970 was the sub £300 price point upon release and when OC'd it could hang in there with the 780Ti. If they release the 1070 at £400 plus it puts itself in a completely different market.
I'm not a hobbyist, I'm a price conscious gamer. I'm not too fussed about having the best. The 970 was the perfect card for me.
Personally, I have no allegiance to either Nvidia or AMD but I am hoping that AMD can be competitive with Polaris and beyond. It has to be good for the consumer. We certainly don't need Nvidia being allowed to charge inflated prices. The Nvidia shareholders won't mind though.
I would find it surprising if Nvidia priced the X70 card in the £400+ range, unless they know AMD's Polaris cards are going to be a lot slower.
The X70 card has traditionally been around the £300 mark (£280 - £330 ish).
That doesn't mean anything. The 980 had 2048 cores compared to the 780Ti's 2880.
That means the 780Ti had over 40% more cores than the 980 but was still slower.
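As a rough illustration of why core count alone doesn't decide performance, here is a quick back-of-the-envelope sketch of theoretical FP32 throughput for those two cards, computed as cores x clock x 2 FLOPs per cycle (one FMA counts as two operations). The boost clocks are approximate reference figures and purely illustrative; individual cards vary.

```cpp
#include <cstdio>

// Theoretical single-precision throughput in TFLOPs:
// cores * clock (GHz) * 2 FLOPs per cycle (fused multiply-add) / 1000.
static double tflops(int cuda_cores, double boost_clock_ghz) {
    return cuda_cores * boost_clock_ghz * 2.0 / 1000.0;
}

int main() {
    // Approximate reference boost clocks; actual cards vary.
    printf("GTX 780 Ti: %.2f TFLOPs (2880 cores @ ~0.93 GHz)\n", tflops(2880, 0.93));
    printf("GTX 980:    %.2f TFLOPs (2048 cores @ ~1.22 GHz)\n", tflops(2048, 1.22));
    return 0;
}
```

On paper the two land within a few percent of each other, yet the 980 generally came out ahead in games - which is the point being made above: clocks and per-core efficiency matter at least as much as the raw core count.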
Well, if Nvidia keep price hiking as people suggest, then people will stop buying them - either that, or we will carry on paying (as we already have been) stupid money for really low-end parts. A £300 entry-level card, anyone? I can't see that selling well.
Companies will sell at whatever the market will sustain - even AMD will charge whatever they can get. Take the Nano, for example: it didn't sell particularly well at £550, so it is now under £400, and it sells quite well at that price point.
Now all that compute stuff is coming back with Pascal, and the core count is still going down.
I do understand that each CUDA core iteration is different, but the leaked numbers do not indicate the massive leap in performance some of you are dreaming about, unless Nvidia manages to clock the new chips to kingdom come.
Just to be clear, I'm not making a case for the overall suitability of a 780 at 1440p these days.
Nonetheless, there is a huge gulf in out-of-the-box performance across the range - something like 20% from an early A1 to one of the last B1s - and what might be OK with one 780 might fall flat on its face with another.
Regarding my own setup - one aspect here is that I'm running G-Sync, which makes it a lot more tolerable when you are getting drops just below 60fps - otherwise the card would have been replaced by now, as it is certainly stretched at 1440p.
There are several factors that would likely push the price up - profitability at some of the companies that provide the other parts that make up a GPU has been hit by recent natural disasters and other economic factors, and 16nm is not the cheapest process to produce on either.
Factors that can push the price up, sure - I'd stop short of saying 'likely', though.
Fair enough. I see a lot of people make really exaggerated claims about what is required for what sometimes and I feel compelled to call it out not because I like to prove people wrong on the internet but because I fear that some people reading may get the wrong idea and base a purchase on false or misleading information.
I'd say any performance increases are far more likely to be from driver improvements than actual hardware iteration. At least the bulk of it.
Well you having G-sync would explain a lot. I'm guessing you're not actually hitting a consistent 60fps in many games. For me, I'm still sensitive to that, variable refresh rates or not.
But games like GTA V - you can't even max that out at 1080p on a 980Ti. And there's *tons* of games that won't be playable at 1440p/60fps with high/ultra settings using a 780 or a 970. Seriously, like most modern AAA games and quite a few graphics-heavy indie games, too. We're not just talking the odd game here and there.
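For anyone wondering why variable refresh makes drops just below 60fps so much more tolerable, here is a small sketch of the displayed frame cadence (illustrative numbers only, not from the posts above): with fixed 60Hz V-Sync, a frame that misses the 16.7ms deadline is held for a whole extra refresh, whereas a G-Sync/FreeSync panel simply refreshes when the frame is ready.

```cpp
#include <cstdio>
#include <cmath>

int main() {
    const double refresh_ms = 1000.0 / 60.0;  // fixed 60 Hz refresh interval
    const double render_ms  = 18.0;           // a frame that just misses 60fps (~55fps)

    // Fixed-refresh V-Sync: the frame is shown at the next refresh boundary,
    // so a late frame occupies two refresh intervals (~33.3 ms) - a visible hitch.
    double vsync_display_ms = std::ceil(render_ms / refresh_ms) * refresh_ms;

    // Variable refresh: the panel refreshes when the frame is ready,
    // so the displayed interval tracks the render time (~18 ms).
    double vrr_display_ms = render_ms;

    printf("60Hz V-Sync : frame displayed for %.1f ms\n", vsync_display_ms);
    printf("VRR         : frame displayed for %.1f ms\n", vrr_display_ms);
    return 0;
}
```

The jump from 16.7ms to 33.3ms is what reads as a stutter on a fixed-refresh screen, which is why a card that dips into the 50s feels much worse without variable refresh.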
has this been posted?
http://videocardz.com/59266/nvidia-pascal-gp104-gpu-pictured-up-close
Quite interesting, especially as the leaker said the 1080 has gddr5x
NVIDIA GeForce Pascal GP104 GPUs Are Ready For DirectX 12, Vulkan, VR and Compute Preemption
Before Pascal, on systems where compute and display tasks were run on the same GPU, long-running compute kernels could cause the OS and other visual applications to become unresponsive and non-interactive until the kernel timed out. Because of this, programmers had to either install a dedicated compute-only GPU or carefully code their applications around the limitations of prior GPUs, breaking up their workloads into smaller execution timeslices so they would not time out or be killed by the OS.
Indeed, many applications do require long-running processes, and with Compute Preemption in Pascal, those applications can now run as long as they need when processing large datasets or waiting for specific conditions to occur, while visual applications remain smooth and interactive—but not at the expense of the programmer struggling to get code to run in small timeslices.
Compute Preemption also permits interactive debugging of compute kernels on single-GPU systems. This is an important capability for developer productivity. In contrast, the Kepler GPU architecture only provided coarser-grained preemption at the level of a block of threads in a compute kernel. This block-level preemption required that all threads of a thread block complete before the hardware can context switch to a different context. via NVIDIA
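To put the 'smaller execution timeslices' workaround mentioned in the quote into concrete terms, here is a minimal CUDA sketch of that pre-Pascal pattern: rather than one long-running kernel (which risks hitting the display driver's watchdog timeout on a GPU that is also driving a monitor), the work is split into short launches that each finish well inside the timeout. The kernel, buffer size and chunk size here are made up for illustration and are not from NVIDIA's material.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// A made-up, long-running workload: repeatedly update every element of a buffer.
// 'iters' controls how much work a single launch performs.
__global__ void iterate(float* data, int n, int iters) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float v = data[i];
    for (int k = 0; k < iters; ++k)
        v = v * 0.999f + 0.001f;           // stand-in for real per-element work
    data[i] = v;
}

int main() {
    const int n = 1 << 20;
    const long long total_iters = 1LL << 22;   // the full job is far too long for one launch
    float* d = nullptr;
    cudaMalloc((void**)&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));

    // Pre-Pascal pattern: break the job into short launches so no single kernel
    // runs long enough to trip the OS/driver watchdog on a display GPU.
    const int chunk = 1 << 16;                 // iterations per launch, tuned to stay short
    for (long long done = 0; done < total_iters; done += chunk) {
        iterate<<<(n + 255) / 256, 256>>>(d, n, chunk);
        cudaDeviceSynchronize();               // let the display get serviced between launches
    }

    // With Pascal's compute preemption, a single long launch such as
    //   iterate<<<(n + 255) / 256, 256>>>(d, n, (int)total_iters);
    // can be preempted by the hardware, so the desktop stays responsive.

    cudaFree(d);
    return 0;
}
```

The chunked loop is the kind of application-level timeslicing the quoted text says Pascal makes unnecessary for long-running compute work.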