This may be a dumb question but I'll ask it anyway LOL. Instead of 2 GPUs on 1 card, why don't they increase the power to 1 GPU and give it higher clocks/shaders?
1) There is a limit to how high a frequency you can run a given IC at. This is limited by things like the maximum heat and clock frequency of that specific IC, dictated by its design spec.
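To put a rough number on reason 1: the textbook first-order model for CMOS switching power is P ≈ α·C·V²·f, and pushing the clock higher usually also requires a voltage bump, so power (and heat) grows much faster than the clock. This is a sketch with made-up ballpark values, not any real GPU's figures:

```python
# First-order CMOS dynamic power model: P = alpha * C * V^2 * f.
# alpha = activity factor, C = switched capacitance, V = supply voltage, f = clock.
# All numbers below are illustrative assumptions, not a real chip's spec.

def dynamic_power(alpha, cap_farads, volts, freq_hz):
    """Classic switching-power estimate for a CMOS circuit."""
    return alpha * cap_farads * volts**2 * freq_hz

base = dynamic_power(alpha=0.2, cap_farads=1e-9, volts=1.0, freq_hz=1.5e9)

# A 50% clock boost typically needs extra voltage too (say +20%),
# so power rises far more than 50%:
boosted = dynamic_power(alpha=0.2, cap_farads=1e-9, volts=1.2, freq_hz=2.25e9)

print(round(boosted / base, 2))  # -> 2.16, i.e. ~2.16x the heat for 1.5x the clock
```

That super-linear heat growth is a big part of why a chip's design spec caps the usable clock range.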
Beyond that there are other theoretical reasons:
2) There is a limit to how high a frequency you can feasibly run any digital circuit, for a number of reasons. At higher frequencies, digital signals behave increasingly like their analogue counterparts (in truth a digital signal is an analogue signal that shows "discrete-like" behaviour under certain conditions), and other effects come into play too. Low-frequency circuit design is a completely different affair from high-frequency design: electrical and electronics engineers use different principles and theories to design low-speed circuits, and as the speed increases those principles have to be changed and reworked from first principles. Those first principles are Maxwell's Equations - a set of partial differential equations (equivalently, integro-differential equations) that govern the behaviour of the propagating electromagnetic wave. Because partial differential equations are very difficult to solve exactly, we tend to use specific closed-form solutions that target a specific frequency range, and apply those design methods after verifying that they fit our practical purposes.
So, for example, we may use KCL, KVL etc. to do calculations on normal low-frequency electrical circuits, then use a different design formulation for, say, a circuit at microwave frequencies, a different approach again for radio telescopes, and at optical frequencies yet another completely different approach.
Another thing to keep in mind is distance. As process sizes shrink, transistors are packed closer together, which means a high-frequency signal has to travel a shorter distance. This frequency × distance relationship is what determines which theoretical framework you can apply to a problem. For example, at low frequencies over short distances (such as a 1 inch PCB trace) you don't need to treat an interconnect as anything but an interconnect. But at very high frequencies that same 1 inch trace starts to behave like something called a "transmission line" - i.e. instead of thinking of it as a simple interconnect, you need to model it as a transmission line. The same happens at low frequencies over huge distances (like power lines), where again the interconnect starts to behave like a transmission line.
It would be difficult to increase a circuit's frequency while also moving to a more compact VLSI process, because high-speed digital design is a veritable forest of theoretical frameworks, each targeted at a different part of the frequency spectrum.
For both of these reasons, simply stepping up the power on a circuit doesn't give good enough returns.
Reason 1 is generally why you don't see GPUs clocked much beyond the tolerances for which they've been designed (overclocking stays well within this range).
Reason 2 is generally why you're not seeing 10 GHz processors today. Beyond a certain frequency range it is often better to come up with algorithmic optimizations supported by the hardware than to simply clock the circuit much higher.
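One quick way to see why 10 GHz is so awkward: in a single clock period a signal travelling at light speed covers only a few centimetres, and real on-chip signals are considerably slower, so even short distances eat the whole cycle. A back-of-the-envelope check:

```python
# How far can a signal travel in one clock cycle, at best?
C = 3e8  # m/s, hard upper bound; real on-chip propagation is notably slower

def distance_per_cycle_cm(freq_hz):
    """Distance light covers in one clock period, in centimetres."""
    return (C / freq_hz) * 100

print(round(distance_per_cycle_cm(3e9), 1))   # 3 GHz  -> 10.0 cm per cycle
print(round(distance_per_cycle_cm(10e9), 1))  # 10 GHz -> 3.0 cm per cycle
```

At 10 GHz the whole chip-plus-package starts to look like a transmission-line problem, which is exactly the kind of framework change described above.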
Edit: I realise I am also grossly simplifying things here; there are a lot more complexities involved. E.g. second-order effects that depend on the dimensions of the NMOS and PMOS transistors can no longer be neglected in nanometre-scale CMOS design, and these too change the theoretical framework required to properly analyse a circuit at a given frequency. Or, fairly soon, we'll reach the sub-11 nm scale, where Maxwell's electrodynamics itself breaks down and these circuits will need analysis in terms of Quantum Electrodynamics (QED).
The bottom line relevant here is that arbitrarily varying the frequency or power of a circuit breaks the theoretical foundations on which it was designed. In other words, it will simply cease to work.