Okay, this is the last time I try to dissect your nonsense. It simply isn't worth my time.
Here's a hint: if you can get 800MHz at 0.9V, 900MHz at 0.95V, 1000MHz at 1.05V, 1100MHz at 1.2V, 1200MHz at 1.5V, etc., etc., you'll find that is most definitely exponential.
First of all, that is not an exponential function. An exponential function is one where the variable appears in the exponent, i.e.:
f(x) = k*a^x + const
But let's pretend for a second that it is. What you're saying is that IF these were the voltages required for a stable overclock, then they would grow exponentially. Well, what evidence do you have to suggest that your numbers are reflected in reality?! There is certainly no reason, in terms of electrical physics, that you would see exponential growth of voltage with clock speed.
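For what it's worth, here's a quick numeric check on the numbers you quoted (a sketch using your claimed values, not measured data): if voltage really followed f(x) = k*a^x + const, then equal frequency steps would produce voltage increments that grow by one constant ratio.

```python
# Sanity check on the quoted OC points: for f(x) = k*a^x + const sampled
# at equal steps, successive voltage increments share a single constant
# ratio. The freq/voltage pairs below are simply the values claimed above.
freqs = [800, 900, 1000, 1100, 1200]     # MHz
volts = [0.90, 0.95, 1.05, 1.20, 1.50]   # V

increments = [round(v2 - v1, 3) for v1, v2 in zip(volts, volts[1:])]
ratios = [round(b / a, 2) for a, b in zip(increments, increments[1:])]

print(increments)  # [0.05, 0.1, 0.15, 0.3]
print(ratios)      # [2.0, 1.5, 2.0] -- no single constant ratio
```

Even taking the quoted numbers at face value, the increment ratios bounce between 1.5 and 2.0, so five hand-picked points don't pin down an exponential at all.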
His entire diatribe is both hilariously stupid and hilariously wrong, as is everything he ever writes in response to me. I called him stupid and got banned because he and his dupe account both laughed at what can really only nicely be put as a complete inability to read. I stated that I specifically DID NOT have to post a mathematical proof to be able to state that 9.8 is the value for gravity. To which, I think, there were maybe four posts of Krugga and Xsistor incorrectly assuming I said that WAS a mathematical proof.
Every single time Xsistor has said I was wrong, he's been incapable of reading what I said. Which makes basing your arguments on what he said... misguided.
I have not based anything I've said to you on anything that Xsistor, or anyone else, has said. And what has any of this got to do with anything?
As for your diatribe of more nonsense: no, shaders are both complex and very simple. You can, without question, speed up ANY calculation by adding more complex units that do several calculations in one, magnitudes faster. The issue is always: how much faster? How often is it used? Overall, what is the better option: more, smaller, wider-use compute parts, or fewer, bigger, specific, not-always-useful parts? Saying you can't proves you know absolutely nothing.
Just because better algorithms would speed up fluid dynamics calculations on GPUs DOES NOT mean that better-designed, more complex, more specific hardware couldn't do it faster.
This is the case for almost anything you do on a computer. It's really as simple as this: you can add 1 + 1 + 1, or have a more complex shader unit that does all three adds in one operation. I'd be not shocked but downright amazed if there were a single calculation that couldn't be sped up.
But that is where AMD/Intel are at with CPUs and GPUs: multifunction, simple enough that almost any software can be ported to work on them, even if many calculations take many clocks and many separate operations.
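The trade-off being argued over can be sketched with a toy cycle-count model (purely illustrative; the 4-wide unit and the one-op-per-cycle assumption are invented for the sketch): a wider unit wins on throughput, but only when there's enough work to keep it full.

```python
import math

def cycles_scalar(n_ops):
    """Simple unit: one add per cycle, always fully utilised."""
    return n_ops

def cycles_wide(n_ops, width=4):
    """Wider unit: retires `width` adds per cycle, but a partially
    filled issue slot still burns a whole cycle."""
    return math.ceil(n_ops / width)

for n in (3, 4, 100):
    print(n, cycles_scalar(n), cycles_wide(n))
# The 1 + 1 + 1 case: the wide unit finishes in 1 cycle instead of 3,
# but a quarter of it sits idle -- exactly the utilisation question
# ("how often is it used") raised above.
```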
Isn't this the entire principle of GPU compute? That you take away fixed-function units in order to allow a wider range of use?
Of course you can design processors to perform simple, predictable tasks more efficiently. But the entire point of GPGPU applications is that they're inherently unpredictable in terms of their specific needs. That's WHY you design a flexible compute architecture in the first place!
This really isn't rocket science...
You know, as far as workloads go, 3D rendering is incredibly predictable and uniform. Geometry processing comes from a triangular mesh loaded in a fixed format. Pixels are passed individually and have relatively simple queues of pre-defined multiply-add operations applied. And still, with all this uniformity, we see fairly large variations in performance between architectures (e.g. in games).
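Just to caricature how uniform that pixel workload is (a sketch only; the "shader" here is nothing but a made-up fixed list of multiply-add pairs, nothing like real hardware):

```python
# Caricature of the fixed-format pixel path: every pixel receives the
# same short, pre-defined queue of multiply-add operations. The
# coefficients are invented for the sketch.
shader = [(1.2, 0.05), (0.9, -0.1)]   # (multiply, add) pairs

def shade(pixel, ops):
    for m, a in ops:
        pixel = pixel * m + a
    return round(pixel, 4)

framebuffer = [shade(p, shader) for p in (0.0, 0.5, 1.0)]
print(framebuffer)  # [-0.055, 0.485, 1.025]
```

Every pixel takes exactly the same path; there's no data-dependent branching at all, which is what makes this kind of load so predictable compared with general compute.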
The range of dataset conditions encountered in GPGPU compute, even within a fairly tight field such as fluid dynamics, is just immense by comparison. You couldn't possibly design fixed-function hardware to cope efficiently with all the needs. Sure, you could design a GPU to perform very efficiently on a single specific code running a single specific model. BUT, as soon as you tried to run anything else, your needs would change so dramatically that the highly tuned, fixed-function hardware would be all but useless.
As an example, using the exact same commercial CFD code: what I would need from a GPU when running a 1-million-cell mesh for a laminar flow simulation is completely different to what I need when running a 20-million-cell mesh with turbulence. The balance between raw processing power and inter-cell communication is completely different. It simply isn't possible to design fixed-function hardware to cope with the entire range of requirements. Hence the need for GPGPU, and the reason AMD and Nvidia have been investing billions (of dollars and transistors) in GPGPU compute.
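That shifting balance can be illustrated with a toy surface-to-volume model (all numbers invented for the sketch; real CFD codes are far messier): per-step compute scales with cell count, while inter-partition communication scales roughly with the surface of a cubic block of cells, so the bytes-moved-per-flop ratio changes with both mesh size and model cost.

```python
# Toy model of the compute/communication balance described above.
# Assumptions (invented for illustration): a cubic block of cells,
# halo exchange over its 6 faces, one 8-byte value per halo cell,
# and a flat flops-per-cell cost for the physics model.
def comm_per_flop(n_cells, flops_per_cell):
    side = round(n_cells ** (1 / 3))
    halo_cells = 6 * side * side          # cells on the 6 faces
    bytes_moved = halo_cells * 8
    return bytes_moved / (n_cells * flops_per_cell)

laminar = comm_per_flop(1_000_000, flops_per_cell=50)      # assumed cost
turbulent = comm_per_flop(20_000_000, flops_per_cell=500)  # turbulence: assumed 10x cost
print(laminar > turbulent)  # True: the balance flips toward compute
```

Even in this crude sketch the two runs sit at very different points on the bandwidth-versus-arithmetic spectrum, which is exactly why no single fixed-function balance serves both.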