IBT is a front-end to the Intel Linpack benchmark (http://software.intel.com/en-us/articles/intel-math-kernel-library-linpack-download/). The matrix operations involved may well include trigonometric functions in addition to multiplications and additions; I haven't seen an exact specification of the Linpack subroutines included in the benchmark.
It's solving simultaneous equations, according to http://en.wikipedia.org/wiki/Linpack, as I described above. If that isn't the case I'd like to know.
It seems like it is, though, since in LinX, for example, you can choose the number of equations explicitly.
Choosing more RAM results in more simultaneous equations being converted into matrix form, i.e. the matrix will be bigger.
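As a rough illustration (this is just my assumption about how the sizing works, not IBT's actual code), the problem size N that fits in a given amount of RAM can be estimated from the fact that an N x N matrix of 8-byte doubles needs about 8*N^2 bytes:

import math

def problem_size_from_ram(ram_megabytes):
    # An N x N matrix of double-precision (8-byte) values needs ~8*N^2 bytes,
    # so N is roughly the square root of (available bytes / 8).
    bytes_available = ram_megabytes * 1024 * 1024
    return int(math.sqrt(bytes_available / 8))

print(problem_size_from_ram(1024))   # ~11585 equations for 1 GB of RAM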
Once the matrix is stored in RAM, IBT uses the Gaussian elimination method to solve the system, which is where the FLOPS carried out by the CPU come from.
When I did Gaussian elimination back at uni, there were no trigonometric steps involved. However, some division and subtraction were carried out, if I recall correctly.
So I think IBT will involve division and subtraction, but they won't be used anywhere near as much as multiply-add.
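To give a feel for the operation mix, here's a minimal sketch of Gaussian elimination with counters. It's plain Python, nothing like the hand-tuned Linpack kernels, and the boosted diagonal is only there so pivoting isn't needed; the point is that multiply-add pairs in the inner loop do almost all the work, with roughly one division per eliminated row:

import random

def gauss_solve(a, b, counts):
    n = len(b)
    # Forward elimination: reduce the matrix to upper-triangular form.
    for k in range(n):
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]            # one division per eliminated row
            counts['div'] += 1
            for j in range(k, n):
                a[i][j] -= factor * a[k][j]       # multiply-add does the bulk of the work
                counts['mul'] += 1
                counts['addsub'] += 1
            b[i] -= factor * b[k]
            counts['mul'] += 1
            counts['addsub'] += 1
    # Back substitution: solve the remaining triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i]
        for j in range(i + 1, n):
            s -= a[i][j] * x[j]
            counts['mul'] += 1
            counts['addsub'] += 1
        x[i] = s / a[i][i]
        counts['div'] += 1
    return x

n = 50
a = [[random.random() + (10.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
b = [random.random() for _ in range(n)]
counts = {'mul': 0, 'addsub': 0, 'div': 0}
gauss_solve(a, b, counts)
print(counts)   # roughly 44,000 multiplies and add/subtracts vs ~1,300 divisions

And it only gets more lopsided as the matrix grows: the multiply-add count scales as about 2/3 * n^3, while the divisions only scale as about n^2 / 2.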
Simultaneous equations can be solved by multiplying the equations by suitable numbers so that an unknown, say x, has the same coefficient in both. If one x term is positive and the other negative, adding the two equations cancels x out, leaving a simpler set of equations to solve.
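For example (made-up numbers, just to show the idea): take 2x + 3y = 8 and -x + 2y = 3. Multiply the second equation by 2 to get -2x + 4y = 6; the x terms are now +2x and -2x, so adding the two equations cancels x and leaves 7y = 14, i.e. y = 2, and substituting back gives x = 1. Gaussian elimination does exactly this, systematically, across the whole matrix.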
Multiply-add is favoured over, say, division or trigonometric functions because it completes in fewer cycles; division and trig take more cycles each, so fewer operations finish per second and the CPU isn't stressed as heavily as it is with a multiply-add-dominated workload.
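A very rough way to see this from Python (purely illustrative, nothing to do with how IBT measures anything; the exact numbers depend heavily on the CPU and the numpy build, but the divide loop usually comes out slower even though it touches less data):

import time
import numpy as np

n = 32768                        # arrays sized to sit in cache, so memory bandwidth doesn't dominate
reps = 20000
a = np.random.rand(n) + 1.0
b = np.random.rand(n) + 1.0
c = np.random.rand(n)
out = np.empty(n)

t0 = time.perf_counter()
for _ in range(reps):
    np.multiply(a, b, out=out)   # multiply...
    np.add(out, c, out=out)      # ...then add, i.e. a multiply-add pass
t1 = time.perf_counter()

t2 = time.perf_counter()
for _ in range(reps):
    np.divide(a, b, out=out)     # same element count, but divisions
t3 = time.perf_counter()

print(f"multiply-add: {t1 - t0:.2f} s   divide: {t3 - t2:.2f} s")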
Even when they quote GFLOPS figures for the PS3 or Xbox 360, they are usually referring to multiply-add instructions.