Nvidia: Next-Generation Maxwell Architecture Will Break New Grounds.

No idea. For me personally, I don't care much. I'll be buying Nvidia even if AMD have an edge, which I doubt they will given what little we already know about Maxwell.

You base this statement on what? AMD haven't released ANYTHING about next gen yet...

If Nvidia is better than AMD and cheaper, then I would also buy it. But it's been noted that lately AMD is better and cheaper o.O
 
You base this statement on what? AMD haven't released ANYTHING about next gen yet...

It's based on what we know about Maxwell. I never mentioned anything about AMD's upcoming cards, because there's nothing to mention yet. Did you not read my post properly?
 
All we know about maxwell is a pile of BS marketing

Thanks for your enlightening post. I wasn't aware that it was 'a pile of BS marketing'. You're a really smart guy. I'm so glad you're on these forums posting and letting people know what is BS and what isn't. Without people like you, we would all be wasting our money buying the wrong stuff. Thanks again! :cool:
 
All I know about maxwell is a pile of BS marketing

Fixed.

2014 – March of Maxwell

NVIDIA's 20nm Maxwell GPUs would feature more than double the performance per watt of the current-generation Kepler architecture. Maxwell GPUs would also integrate NVIDIA's Project Denver, which fuses general-purpose ARM cores alongside the GPU core. Xbitlabs recently got many details on Project Denver; Denver is basically a custom-built ARMv8 64-bit processor which would be highly beneficial for computing purposes such as workstation and server usage.

The roadmap still mentions that Maxwell GPUs would be able to churn out 14-16 GFLOPS per watt thanks to designs much more power-efficient than 28nm Kepler. However, Kepler's reign is far from over: in 2013 the company plans to launch a refresh of its Kepler lineup. Having recently shown the power of its GK110 Kepler core in the Tesla K20 and Tesla K20X parts, there's no doubt that the same chip under a different codename, such as GK114 or GK204, could enter the consumer market.


Read more: http://wccftech.com/nvidia-roadmap-...kepler-refresh-arrives-1h-2013/#ixzz2QSKDvINv
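
A quick sanity check on those figures, assuming they refer to double-precision GFLOPS per watt (which is what NVIDIA's public roadmap slide plots): a Tesla K20X delivers roughly 1310 DP GFLOPS within a 235 W TDP, or about 5.6 GFLOPS per watt, so 14-16 GFLOPS per watt really would be more than double Kepler.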

Reports suggest that the flagship of the GeForce 700 series, the GeForce GTX 780, could have over 2000 cores. A more realistic number would be 2304, as reported by Chip.de. The core count is not the only change Kepler 2 would get: NVIDIA would also bump up the memory interface, bringing a 3 GB GDDR5 buffer running on a 384-bit bus and ending the known bandwidth restrictions faced on the GTX 680. The site also reports alleged clock speeds for the card of 1100 MHz at stock and 1150 MHz boost, while the memory could operate at a 6.5 GHz effective frequency.

We have summed up details for the rest of the GeForce 700 series lineup here. Overall, if the specs of the 700 series hold any truth, then the next two GPU architectures from NVIDIA are ones to really look forward to.


Read more: http://wccftech.com/nvidia-roadmap-...kepler-refresh-arrives-1h-2013/#ixzz2QSKUvX00
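
For what it's worth, here's the bandwidth maths behind that 384-bit claim, assuming the leaked numbers are right: 384 bits is 48 bytes per transfer, and 48 bytes × 6.5 GT/s works out to roughly 312 GB/s, versus the GTX 680's 256-bit bus at 6.0 GT/s, which gives about 192 GB/s, so around a 60% jump in memory bandwidth.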
 
Thanks, Greg. It would be good if they improved the 256-bit interface and added an extra 1 GB of VRAM. Any word on whether they plan to add any compute units outside of the Titan GPUs?
 
I wonder when the refresh of Kepler is due. Hopefully not too late in the year. If they leave it too late, it won't be long until Maxwell arrives, so they would surely want to give the Kepler refresh a decent run before that.
 
Thanks, Greg. It would be good if they improved the 256-bit interface and added an extra 1 GB of VRAM. Any word on whether they plan to add any compute units outside of the Titan GPUs?

I doubt it. I think nVidia gimped desktop compute performance by design, because who's going to buy Quadro and Tesla cards if the desktop GeForce cards do the same job?

This is partly because a lot, if not most, CAD/3D software performs very well using DirectX, whereas it used to be OpenGL that gave better viewport performance.

So now the professional cards only really have an advantage when it comes to compute performance.
 
I doubt it. I think nVidia gimped desktop compute performance by design, because who's going to buy Quadro and Tesla cards if the desktop GeForce cards do the same job?

This is partly because a lot, if not most, CAD/3D software performs very well using DirectX, whereas it used to be OpenGL that gave better viewport performance.

So now the professional cards only really have an advantage when it comes to compute performance.

If that's the case then why have AMD got compute units on all of their cards?

DirectCompute is part of the Microsoft DirectX collection of APIs and was initially released with the DirectX 11 API but runs on both DirectX 10 and DirectX 11 graphics processing units.

Taken from Wiki
http://en.wikipedia.org/wiki/DirectCompute
 
Thanks, Greg. It would be good if they improved the 256-bit interface and added an extra 1 GB of VRAM. Any word on whether they plan to add any compute units outside of the Titan GPUs?

No idea, Matt. The 600 series isn't that bad at DirectCompute but could have been better. The 580 is faster than the 680 at compute :(
 
No idea, Matt. The 600 series isn't that bad at DirectCompute but could have been better. The 580 is faster than the 680 at compute :(

It doesn't have any compute abilities at all though, does it? Not one unit, as far as I understand it. I've heard (and I don't know if this is true) that the only way around it for Nvidia is to disable the compute effects via the drivers. I've also heard (again, I don't know if this is true) that this is where the massive gains came from in Tomb Raider (45% or whatever the figure was).
 
I'm excited for this tbh. If Nvidia can get the pricing right, it should make for some good competition! But then, this being Nvidia, they will probably overcharge for the GPUs for a good while.
 
If that's the case then why have AMD got compute units on all of their cards?

DirectCompute is part of the Microsoft DirectX collection of APIs and was initially released with the DirectX 11 API but runs on both DirectX 10 and DirectX 11 graphics processing units.

Taken from Wiki
http://en.wikipedia.org/wiki/DirectCompute

Because they chose not to gimp their cards, or rather the notion of gimping them never occurred to them.

However, for nVidia it'll be about CUDA. The margins on nVidia's compute-specific cards are very high, so they'll be looking after their bottom line with that.

AMD haven't got something like CUDA, i.e. a compute API that brings in revenue the way nVidia's does, so it makes sense why they'd want to do that, regardless of how people feel about it.
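
For anyone reading along who isn't sure what CUDA actually looks like to a developer, here's a minimal, purely illustrative sketch of a CUDA program (just a vector add, not any real Quadro/Tesla workload) to show the kind of GPU compute code that API is selling:

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements.
__global__ void vecAdd(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;                    // 1M elements
    const size_t bytes = n * sizeof(float);

    // Host-side buffers
    float* ha = new float[n];
    float* hb = new float[n];
    float* hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device-side buffers
    float *da, *db, *dc;
    cudaMalloc((void**)&da, bytes);
    cudaMalloc((void**)&db, bytes);
    cudaMalloc((void**)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);             // expect 3.000000

    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}

OpenCL and DirectCompute can express much the same thing; the difference people argue about is that CUDA only runs on nVidia hardware and feeds straight into their Quadro/Tesla ecosystem.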
 
Nothing wrong with a bit of speculation though, Diago, it makes things interesting.
This is certainly true, though we all know nVidia has quite the record for talking BS in their PR.

Anyone remember those really powerful wood screws? :p

That aside though, I do enjoy speculation.
 
Not fixed at all. There's no proof behind the marketing.

So it is, in turn, BS marketing.

If you have nothing useful to say, stay out of the thread. Both Nvidia and AMD have NDAs and nothing will be confirmed until those are lifted (usually just before launch). There are plenty of AMD threads for you to wallow in.

It doesn't have any compute abilities at all though, does it? Not one unit, as far as I understand it. I've heard (and I don't know if this is true) that the only way around it for Nvidia is to disable the compute effects via the drivers. I've also heard (again, I don't know if this is true) that this is where the massive gains came from in Tomb Raider (45% or whatever the figure was).

I think you are comparing Nvidia to AMD in this department, and it doesn't work like that, Matt. DirectCompute is better on the 7970/7950/580 than on the 680, but it comes down to OpenGL and OpenCL if I am not mistaken. There are a couple of games that use compute, and BF3/Civ 5 are two of them.
 
If you have nothing useful to say, stay out of the thread. Both Nvidia and AMD have NDAs and nothing will be confirmed until those are lifted (usually just before launch). There are plenty of AMD threads for you to wallow in.

Why does negativity towards nVidia always have to be related back to AMD?

Is it really that hard for you to accept that people don't like nVidia because of stuff nVidia does?
 
Why does negativity towards nVidia always have to be related back to AMD?

Is it really that hard for you to accept that people don't like nVidia because of stuff nVidia does?

This is a Maxwell thread, not an AMD one. Stay on topic or I'll start RTMing.
 
Considering how small the 670s are PCB-wise, do you reckon Maxwell will see even smaller PCBs, Greg, and maybe even high-end graphics cards requiring only a single 8-pin connector like the current 7850s and 660s?
 