
GTX 590 specs and prices

This may be a dumb question but I'll ask it anyway LOL. Instead of 2 GPUs on 1 card, why don't they just increase the power to 1 GPU and give it higher clocks/more shaders?

1) There is a limit to how high a frequency you can run a given IC at. This is limited by things like the maximum heat and clock frequency the specific IC can tolerate, dictated by its design spec.

Beyond that there are other theoretical reasons:

2) There is a limit to how high a frequency you can feasibly run any digital circuit at. This is for a number of reasons. At higher frequencies, digital signals behave increasingly like their analogue counterparts (in truth a digital signal is an analogue signal that shows "discrete-like" behaviour under certain conditions), and other effects come into play too. Low-frequency circuit design is a completely different affair from higher-frequency design: electrical & electronics engineers use different principles and theories to design low-speed circuits, and as the speed increases those principles have to be reworked from first principles. The first principles here are Maxwell's equations (written out below), a set of partial differential equations (or, equivalently, integro-differential equations) that govern the behaviour of propagating electromagnetic waves. Because partial differential equations are very difficult to solve exactly, we tend to use specific closed-form solutions that target a specific frequency range, and use those design methods after verifying that they fit our purposes in practice.
So, for example, we may use KCL, KVL etc. to do calculations on normal low-frequency electrical circuits, then use a different design formulation for, say, a circuit at microwave frequencies, a different approach again for radio telescopes, and at optical frequencies a completely different approach once more.
Another thing to keep in mind is distance. As process sizes shrink, transistors are packed closer together, which means a higher-frequency signal has to travel a shorter distance. This frequency × distance relationship is what determines which theoretical framework you can apply to a problem. For example, at low frequencies over short distances (such as a 1-inch PCB trace) you don't need to treat an interconnection as anything but an interconnection. But at very high frequencies that same 1-inch trace starts to behave like a "transmission line", i.e. instead of thinking of it as a simple interconnect you need to model it as a transmission line (there's a quick sketch of this just below). The same happens at low frequencies over huge distances (like power lines), where again the interconnect starts to behave like a transmission line.
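Since Maxwell's equations keep coming up, here they are in differential form (SI units, with charge density ρ and current density J); every one of the frequency-specific design methods above is ultimately an approximation to these:

\[
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\]

And to put a rough number on the frequency × distance point, here's a quick back-of-the-envelope sketch in Python (assuming an FR-4 board with an effective relative permittivity of about 4 and the common "longer than a tenth of a wavelength" rule of thumb; both figures are illustrative assumptions, not hard limits):

# When does a 1-inch PCB trace stop being "just a wire" and start needing
# a transmission-line model? Rough lambda/10 rule of thumb.
import math

C0 = 3.0e8               # speed of light in vacuum, m/s
EPS_R = 4.0              # assumed effective relative permittivity of FR-4
TRACE_LENGTH = 0.0254    # 1 inch, in metres

def on_board_wavelength(freq_hz):
    # Signals on the board travel slower than light in vacuum by ~1/sqrt(eps_r)
    return (C0 / math.sqrt(EPS_R)) / freq_hz

for freq in (1e6, 100e6, 1e9, 10e9):
    lam = on_board_wavelength(freq)
    needs_tline_model = TRACE_LENGTH > lam / 10
    print(f"{freq/1e6:8.0f} MHz: wavelength ~ {lam*100:7.2f} cm, "
          f"model 1-inch trace as transmission line: {needs_tline_model}")

At 1MHz or 100MHz the 1-inch trace is electrically short and can be treated as a plain interconnect; by about 1GHz the same trace is longer than a tenth of the on-board wavelength and has to be modelled as a transmission line, exactly as described above.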

It would be difficult to both increase the frequency of a circuit and move it down the VLSI process to a more compact node, because high-speed digital design is a veritable forest of theoretical frameworks, each one targeted at a different part of the frequency spectrum.


For both of these reasons, simply stepping up the power on a circuit doesn't give good enough returns.
Reason 1 is generally why you don't see GPUs clocked much higher than the tolerances for which they've been designed (overclocking stays well within this range).
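As a rough rule of thumb (this is the standard first-order CMOS figure, not anything specific to these GPUs), the dynamic power of a chip scales roughly as

\[
P_{\text{dyn}} \approx \alpha\, C\, V^{2}\, f
\]

where α is the switching activity, C the switched capacitance, V the supply voltage and f the clock frequency. Since pushing f higher usually also requires raising V, the heat to be dissipated grows much faster than the clock does, which is exactly why reason 1 bites so quickly.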

Reason 2 is generally why you're not seeing 10 GHz processors today. Beyond a certain frequency range, it is often better to come up with algorithmic optimizations supported by the hardware than to simply clock the circuit much higher.


Edit: I realised I am also grossly simplifying things here. There are a lot more complexities involved as well, e.g. second-order effects determined by the dimensions of the NMOS and PMOS transistors can no longer be neglected in nanometre-scale CMOS design. These also change the theoretical framework required to properly analyse a circuit at a given frequency. And pretty soon we'll reach the sub-11nm scale, where Maxwell's electrodynamics itself breaks down and these circuits will need to be analysed in terms of Quantum Electrodynamics (QED).

The bottom line relevant here is that arbitrarily varying the frequency/power of a circuit breaks the theoretical foundations on which it was designed. In other words, it will simply cease to work.
 

Ah I see now, very interesting post :) I guess we will see a 10GHz chip one day? I mean, when 486s were about, I'm guessing the 1GHz barrier seemed far off.
 
Actually, the 1GHz barrier wasn't that far off when we hit the 20MHz barrier, even though that was a 50× jump while going from today's clocks to 10GHz is only roughly 3×. If you want an exact answer on how difficult/feasible 10GHz will be, I'd need to consult some papers I haven't looked at in a while. But I can, with a degree of certainty, handwavingly say that the 10GHz barrier is considerably harder. We'll probably only move very slowly towards 10GHz. It's not that it can't be done; it's just that it's hard to justify doing it when there are so many other optimizations that can be performed. You may have noticed that CPU speeds have been moving upwards only slowly over the past several years (case in point: Sandy Bridge clocks aren't significantly higher than Core 2 clocks).

But on another note, once we hit sub-11nm, exploiting quantum effects for algorithmic speedups is a bigger and better challenge than trying to reach 10GHz. There are also much simpler ways of increasing performance. One of my friends who used to work at NVIDIA showed me a paper he co-wrote some years before Sandy Bridge on how simple cache optimisations could give as much as a 60% (IIRC) speed-up over contemporary methods.
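I can't reproduce that paper here, but as a toy illustration of why cache behaviour matters so much (a made-up Python example, nothing to do with the actual paper): summing a big row-major array row by row walks memory sequentially, while summing it column by column keeps jumping across cache lines, and the second version is noticeably slower purely because of locality.

# Toy cache-locality demo: same arithmetic, different memory access pattern.
import time
import numpy as np

a = np.random.rand(4096, 4096)   # C-contiguous: each row is stored contiguously

def sum_by_rows(m):
    # Sequential access: every cache line fetched is fully used.
    total = 0.0
    for i in range(m.shape[0]):
        total += m[i, :].sum()
    return total

def sum_by_cols(m):
    # Strided access: consecutive elements of a column sit a whole row apart,
    # so most of each fetched cache line is wasted.
    total = 0.0
    for j in range(m.shape[1]):
        total += m[:, j].sum()
    return total

for fn in (sum_by_rows, sum_by_cols):
    start = time.perf_counter()
    fn(a)
    print(fn.__name__, "took", round(time.perf_counter() - start, 3), "s")

The point isn't the exact numbers, just that identical work done in a cache-friendly order can be dramatically faster, which is the kind of win being described above.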

So yeah, it's all about optimisations. A lot of microelectronics design these days is about taking the developments in theoretical computer science and putting them into hardware.
 
This may be a dumb question but I'll ask it anyway LOL. Instead of 2 GPUs on 1 card, why don't they just increase the power to 1 GPU and give it higher clocks/more shaders?

When you're building the chip you have a box to fit everything in, and that's really the limiting factor for how fast you can make the GPU.

The box we have now is 300W and 40nm (the 300W ceiling comes from the PCIe spec: 75W from the slot plus 75W and 150W from the 6-pin and 8-pin connectors). Nvidia have already pushed way past the 300W limit, and Fermi only just fits on a 40nm core as it is; they've struggled to get it there. The short of it is that Nvidia started out at the end of the road with Fermi.
 
Very nice card indeed!

Very expensive indeed!

So glad I got the 580 when they first came out at £400, now clocked at 850 core. I'll get another in a year at £300, job done.
 
The box we have now is 300W and 40nm. Nvidia have already pushed way past the 300W limit, and Fermi only just fits on a 40nm core as it is.

Now, if we wanted to be picky :p it could be said that at this point in time only AMD have pushed past the 300W limit. Of course, in 2/4 days' time Nvidia will join the fun. :D
 
Now, if we wanted to be picky :p it could be said that at this point in time only AMD have pushed past the 300W limit. Of course, in 2/4 days' time Nvidia will join the fun. :D

I was talking per GPU. I think we have a good chance of Nvidia breaking 600 watts :D

Have they announced anything about the GTX 590 yet?
 
I think I would be prepared to pay around £700 for a card, but it would have to be amazingly fast before I would even consider breaking £499. For £700 Nvidia would need to come up with a perfect card, and let's face it, that's really not going to happen with a pair of Fermi cores.
 
If it uses 375 watts like what's been rumoured, will it be any faster than a 6990?

No would be my answer, as AMD have the edge ATM when it comes to performance per watt. I doubt it can match a 6990 at 607 core clocks, but I am pretty sure it will be an overclocker's card. It may match the 6990 at 830 core, but definitely IMO not at the full 6970 speeds.
 
MSI's N590GTX-P3D3GD5 supports Afterburner's super voltage function, allowing the 607MHz core clock setting to be raised to 840MHz, an overclocking increase of up to 38%.
Not sure if I can believe they can be overclocked to 840MHz that easily...

As even the single-core GTX 580s can start to have problems running at those clocks, and run hot due to the increased voltage needed...
 
I'm glad I got my 580; having just one card is much better and I prefer Nvidia. However, if I was to have more than one GPU I would go CrossFire (because of the scaling). I can see Nvidia struggling with this, especially at the £££.
 
I'm glad I got my 580; having just one card is much better and I prefer Nvidia. However, if I was to have more than one GPU I would go CrossFire (because of the scaling). I can see Nvidia struggling with this, especially at the £££.

To be fair, it's not as if Nvidia's scaling has got any worse with the 500 series; it's just as good as the 400 line, and when those came out scaling was one of the few reasons to get a gen 1 high-end Fermi. It's just that AMD have got their act together with the drivers.
 