Maths: Geometry limits based on 64-bit FP

Ok, so given this recommendation in Inventor, I am trying to reverse engineer the reasoning but I'm clearly out of my depth:

Part geometry or features should be less than 100 meters in size and not farther than ±100m from the part document origin.

Assembly features, modeling features defined in the assembly which involve one or more parts, must be confined to the same parameters within the assembly document. Additionally, any model participating in an assembly feature must also be confined within ±100m from origin and must be smaller than 100m in length.

The Autodesk Shape Manager (ASM) geometric modeling kernel uses 64-bit floating point computations and Inventor sets the modeling tolerance/resolution (i.e. minimum dimension size) to 1.0e-8 meters, therefore the modeling space in Inventor has an upper limit described by a cube of ±100m x ±100m x ±100m centered at the document origin. Any piece of geometry (edge, face, or body) should be contained within those limits. When geometry exceeds these limits, any modeling operations or view constructions on the geometry can lead to unpredictable behavior due to a lack of numerical precision. It is highly recommended to keep any geometry within these limits.
I would summarise that as:

Geometrical increment = 0.00000001 m (based on the stated tolerance)
Max range = 100 m

So total possible 1D increments = 10,000,000,000 = 10^10
Total possible 3D increments = (10^10)^3 = 10^30
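Or the same arithmetic as a couple of lines of throwaway Python, just to lay it out (nothing Inventor-specific here, it's only the numbers from the quote):

Code:
tolerance = 1e-8    # Inventor's stated modelling resolution, in metres
max_range = 100.0   # the 100 m limit from the quote

increments_1d = max_range / tolerance   # distinct steps along one axis
increments_3d = increments_1d ** 3      # points in a 100 m cube at that resolution

print(f"{increments_1d:.0e}")   # 1e+10
print(f"{increments_3d:.0e}")   # 1e+30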

64-bit FP range is roughly -10^308 to +10^308

What step am I missing?
 
Is that 64-bit value just for the position information, or does it contain more fields (angle?)

I've not checked your maths, but it looks like you should be able to exceed 100m^3.

Although practically I can't imagine drawing something 100m long that required accuracy of 1/1000th of a mm. Unless you work for NASA.
Nothing wrong with 10% scale imho :)
There is definitely more to it; it must be something to do with Boolean operations and curve intersections, and then the accuracy needed when comparing two sets of numbers.

Rhino goes into more detail about it. But it's late and I'm too tired to work it out. I think the issue with the Inventor text is the part that reads:

" to 1.0e-8 meters, therefore the modeling space in Inventor has an upper limit described by a cube of ±100m x ±100m x ±100m"

The use of "therefore" is entirely inappropriate. It's like saying there is energy, therefore E=mc^2.
 
Cheers for all the replies.
The range is irrelevant; look at the precision. The mantissa (that is, the M in M*2^E) is 52 bits, which is enough to hold a number up to 10^16 ish. So if you're up at 100, you can store precision down to about 10^-14. That's a fair bit smaller than 10^-8, but I presume they've chosen the limit conservatively, so that the accumulation of error doesn't overwhelm the 10^-8 tolerance.
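A quick way to put numbers on that, if you have Python 3.9+ to hand (math.ulp gives the gap between a double and the next one up; this is only an illustration, nothing Inventor-specific):

Code:
import math

# Spacing between adjacent 64-bit doubles at a few magnitudes
for x in (1.0, 100.0, 1e6):
    print(x, math.ulp(x))

# 1.0        2.220446049250313e-16
# 100.0      1.4210854715202004e-14
# 1000000.0  1.1641532182693481e-10

So at the 100 m edge of the modelling cube, representable positions are about 1.4e-14 apart, which is where the ~10^6 of headroom below the 1e-8 tolerance comes from.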
This is starting to make more sense. I think my failing has been trying to visualise a mantissa and a base 10 exponent as part of the puzzle, when it's actually just a method of notation. I've also not considered the binary conversions properly.

So while we can talk about numbers as large as 10^308, we can't accurately combine anything that differs by more than a factor of about 10^16?

e.g. 10^308 plus 10^290 would give an answer identical to 10^308?
 
Precisely.
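You can see it directly with a throwaway check in Python (any language exposing doubles will behave the same way):

Code:
print(1e308 + 1e290 == 1e308)   # True  - the smaller number vanishes entirely
print(1e308 + 1e293 == 1e308)   # False - within ~15 digits it still registers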
Awesome.

Now to reassure myself that all those FP numbers I see in FEA matrices don't differ by more than a factor of 10^16 :D It's a massive ratio, but it feels like it could cause restrictions in some of the small-to-large unit calculations.
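For a feel of where that starts to bite, a toy check (this has nothing to do with any particular FEA package, it's just the raw double behaviour):

Code:
import math

x = 1e9               # e.g. a model expressed in micrometres, or a large stiffness term
print(math.ulp(x))    # 1.1920928955078125e-07 - the gap between adjacent doubles here
print(x + 1e-8 == x)  # True - an increment of 1e-8 is simply absorbed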
 
I wouldn't worry about it.

The point of the 10^6 margin between the 10^-8 precision the software requires and the ~10^-14 precision actually available at the 100m limit is to buffer you against this kind of error.

Floating point is never accurate*; you need only worry about whether the degree of inaccuracy is harmful.


* - To be pedantic, there are certain precisely defined circumstances where it is, but for practical purposes treat it as if it never is. Simply using 0.1 anywhere will make it inexact, for example (0.1 cannot be represented in a finite number of binary bits; it's a recurring binary fraction).
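The classic demonstration, if you want to see it for yourself (plain Python, nothing special):

Code:
from decimal import Decimal

print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

print(sum(0.1 for _ in range(10)))         # 0.9999999999999999
print(sum(0.1 for _ in range(10)) == 1.0)  # False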
I'm starting to wonder if it is limited to that in the interests of calculation.

Vectors run on x^2 for parabolas, x^3 for inflections, x^4 for loops, etc., so that takes the internal numbering to 10^14, leaving a factor of 10 for error. (NB: I'm using the 1D range of 10^10 as their stated range of precision.)
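A rough sanity check on that thought (this only shows the raw spacing of doubles; whether the kernel actually evaluates things in these terms is pure speculation on my part):

Code:
import math

# A quartic of a coordinate at the 100 m limit reaches 1e8,
# and the spacing of doubles there is already right at the 1e-8 tolerance:
print(math.ulp(100.0 ** 4))   # 1.4901161193847656e-08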
 