Hi guys
I hope I have this thread in the right place. I have a query about IBT FLOPS and I was wondering if someone can clear up my slight confusion about how IBT operates. I would also like to hear from people with a computer science background.
I know what FLOPS are, and as far as I know, in computing, floating-point operations are carried out by the Floating Point Unit (FPU) in the CPU, which handles multiplication, division, addition and subtraction of real numbers with decimal points.
IBT/LinX is a double-precision (64-bit) Linpack CPU stress-test program.
I have been reading that in double precision, modern processors can carry out a multiply-add operation (fused, I believe) on a 64-bit value in one CPU cycle, which amounts to 2 FLOPs.
With the inclusion of SSE2 instructions, which have 128-bit wide registers, the CPU can carry out 4 FLOPs per cycle, i.e. 2 multiplies and 2 adds.
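To sanity-check those numbers, here is a rough back-of-the-envelope calculation in Python (my own illustration with made-up example figures, not any specific CPU):

```python
# Theoretical peak GFLOPS estimate (illustrative only):
# cores x clock speed in GHz x FLOPs per cycle.
def peak_gflops(cores, clock_ghz, flops_per_cycle):
    return cores * clock_ghz * flops_per_cycle

# e.g. a hypothetical 4-core 3.0 GHz CPU doing 4 double-precision
# FLOPs per cycle via SSE2
print(peak_gflops(4, 3.0, 4))  # -> 48.0 GFLOPS theoretical peak
```

Real-world Linpack results come in below this peak, as I understand it, because of memory access and other overhead.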
Now, IBT uses RAM to store a system of linear simultaneous equations and then solves it using the Gaussian elimination method on matrices.
This involves not only multiplication and addition but also subtraction and division, if I am correct.
If CPUs can only carry out multiply-add operations on floating-point (real) numbers, then how are subtraction and division done in order to solve those equations and output results in GFLOPS?
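To show what I mean, here is my rough mental model of the algorithm as a Python sketch (my own illustration, definitely not IBT/Linpack's actual code). I have marked where each kind of operation lands:

```python
# Minimal Gaussian elimination with back-substitution (illustrative only).
# The O(n^3) inner loop is pure multiply-subtract, while the divisions are
# confined to the O(n^2) pivot factors and back-substitution steps.
def gaussian_solve(a, b):
    n = len(b)
    # forward elimination
    for k in range(n):
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]        # division: only O(n^2) of these
            for j in range(k, n):
                a[i][j] -= factor * a[k][j]   # multiply-subtract: O(n^3)
            b[i] -= factor * b[k]
    # back-substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]           # one division per row
    return x

# 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
print(gaussian_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))
```

So if I have this right, the division and subtraction operations are a tiny fraction of the total, and it is the multiply-add (or multiply-subtract) inner loop that dominates the FLOP count, but I would appreciate confirmation.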
Many thanks
