Maths: Geometry limits based on 64-bit FP

D3K

Ok, so given this recommendation in Inventor, I am trying to reverse engineer the reasoning but I'm clearly out of my depth:

Part geometry or features should be less than 100 meters in size and not farther than ±100m from the part document origin.

Assembly features, modeling features defined in the assembly which involve one or more parts, must be confined to the same parameters within the assembly document. Additionally, any model participating in an assembly feature must also be confined within ±100m from origin and must be smaller than 100m in length.

The Autodesk Shape Manager (ASM) geometric modeling kernel uses 64-bit floating point computations and Inventor sets the modeling tolerance/resolution (i.e. minimum dimension size) to 1.0e-8 meters, therefore the modeling space in Inventor has an upper limit described by a cube of ±100m x ±100m x ±100m centered at the document origin. Any piece of geometry (edge, face, or body) should be contained within those limits. When geometry exceeds these limits, any modeling operations or view constructions on the geometry can lead to unpredictable behavior due to a lack of numerical precision. It is highly recommended to keep any geometry within these limits.
I would summarise that as:

Geometrical increments = 0.00000001 (based on tolerance)
Max range = 100

So total possible 1d increments = 10,000,000,000
Total possible 3d increments = 1,000,000,000,000,000,000,000,000,000,000 = 10^30

64-bit FP range is -10^308 to +10^308

What step am I missing?
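For reference, here's my arithmetic above as a quick sketch (Python, purely illustrative):

Code:
resolution = 1e-8      # stated modeling tolerance, metres
max_range = 100.0      # stated size/position limit, metres

steps_1d = max_range / resolution     # distinct steps along one axis (~1e10)
steps_3d = steps_1d ** 3              # steps across the whole cube (~1e30)

print(f"1D steps: {steps_1d:.1e}")    # 1.0e+10
print(f"3D steps: {steps_3d:.1e}")    # 1.0e+30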
 
Ok, so given this recommendation in Inventor, I am trying to reverse engineer the reasoning but I'm clearly out of my depth: [...] What step am I missing?

Yes.
 
Is that 64-bit value just for the position information, or does it contain more fields (angle)?

I've not checked your maths, but it looks like you should be able to exceed 100m^3.

Although practically I can't imagine drawing something 100m long that required 1/1000th of a mm accuracy. Unless you work for NASA.
Nothing wrong with 10% scale imho :)
 
64-bit FP range is -10^308 to +10^308

What step am I missing?

The precision of a floating point number isn't constant across its range. As you reach higher numbers, the difference between the closest two numbers you can represent gets larger. If your precision is required to stay within a certain amount, this puts an upper limit on the largest number you can use.

The 100m limit is likely, well, not exact: it doesn't look like a binary threshold, but rather the closest round decimal figure beneath the true value.
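To see this concretely, here's a small sketch (Python, purely illustrative) printing the gap between a double and its nearest neighbour at a few magnitudes:

Code:
# The gap between adjacent 64-bit doubles grows with magnitude.
# math.ulp needs Python 3.9+; numpy.spacing gives the same information.
import math

for x in [0.001, 1.0, 1e6, 1e12]:
    print(f"x = {x:>8g}   gap to next double = {math.ulp(x):.3e}")

# x =    0.001   gap to next double = 2.168e-19
# x =        1   gap to next double = 2.220e-16
# x =    1e+06   gap to next double = 1.164e-10
# x =    1e+12   gap to next double = 1.221e-04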
 
The short answer is that, because of how floating point numbers work, they are very, very accurate when they are very small and progressively get less accurate as they get very large. So, as the distance from the centre of the workspace or the dimensions of an object get larger and larger, the accuracy gradually decreases until you start to get noticeable rounding errors. Whether this would cause a physical problem in the program I don't know.

*edit*: The video isn't as useful as I thought. It explains how floating point numbers work, but doesn't directly explain why they aren't as accurate for big numbers compared to small ones.
 
To give a decimal example, imagine you're storing a number of the form A x 10^B and you have 3 digits for A and 1 digit for B. You can represent a number up to 999 x 10^9, or 999,000,000,000, but then the nearest number you can represent is 1,000,000,000 away. If on the other hand you have 999 x 10^-9, the nearest number you can represent is 0.000000001 away - much higher precision.

The same thing is true in 64-bit FP but, obviously, it's in binary and the number of bits is different.
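If it helps, here's that toy format written out as a quick sketch (Python; the names are purely illustrative):

Code:
from fractions import Fraction

# Toy decimal float: 3-digit mantissa A (0..999), single-digit exponent B,
# value = A * 10^B. Fractions keep the arithmetic exact for the printout.
def toy_values(exponent):
    return [a * Fraction(10) ** exponent for a in range(1000)]

big = toy_values(9)                   # at B = 9 neighbouring values are 10^9 apart
print(big[-1], big[-1] - big[-2])     # 999000000000 1000000000

small = toy_values(-9)                # at B = -9 they are only 10^-9 apart
print(small[-1] - small[-2])          # 1/1000000000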
 
Most 3D modelling software will give you major problems if you import objects too far from the origin. Obviously, the step you're missing lies within the limitations of the program itself.
 
I've not checked your maths, but it looks like you should be able to exceed 100m^3. [...]
There is definitely more missing; it must be something to do with boolean operations and curve intersections, and the accuracy of comparisons between two sets of numbers.

Rhino goes into more detail about it. But it's late and I'm too tired to work it out. I think the issue with the Inventor text is the part that reads:

" to 1.0e-8 meters, therefore the modeling space in Inventor has an upper limit described by a cube of ±100m x ±100m x ±100m"

The use of "therefore" is entirely inappropriate. It's like saying there is energy, therefore E=mc^2.
 
How do we go from the potential of 10^308 to a limit of 10^30? That's a massive reduction in scale.

The range is irrelevant; look at the precision. The mantissa (that is, the M in M*2^E) is 52 bits, which is enough to hold a number up to roughly 10^16. So if you're up at 100, you can store precision down to about 10^-14. That is a fair bit smaller than 10^-8, but I presume they've chosen the limit conservatively, to ensure that the accumulation of error doesn't overwhelm the 10^-8 tolerance.
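A quick check of those figures (Python sketch; the 1e-8 tolerance is just the value quoted from the Inventor documentation):

Code:
import math

tolerance = 1e-8                    # Inventor's stated modeling tolerance
spacing_at_100 = math.ulp(100.0)    # gap between adjacent doubles near 100
print(spacing_at_100)               # 1.4210854715202004e-14, i.e. about 2^-46
print(tolerance / spacing_at_100)   # ~7e5: plenty of headroom for error to accumulate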
 
I consider myself a reasonably intelligent guy - this thread could have been written in Russian and I'd probably have understood more of it.
 
Here is a good example of why you always leave leeway to prevent compounding of inaccuracy. http://fas.org/spp/starwars/gao/im92026.htm

In fairness, what they've done above leaves more margin than I would expect, but in answer to your question I guess they have made a trade-off between the scale they can provide with doubles and what they perceive to be an acceptable accuracy against their desired tolerance, taking into consideration exactly how they draw the actual curve. Basically what Mr Jack is saying. The inaccuracies compound far quicker than you realise - I work at much smaller scales than this, but the principle is the same: where you would expect to maintain a consistent accuracy, you generally don't over time and with complexity, so you always have to make an allowance for that.
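As a tiny illustration of that compounding (a sketch in the spirit of the clock drift in the report above, not its actual code):

Code:
# 100 hours of 0.1 s ticks: 0.1 has no exact binary representation, and the
# tiny per-step rounding error accumulates over millions of additions.
steps = 3_600_000

clock = 0.0
for _ in range(steps):
    clock += 0.1

print(clock - 360_000.0)   # small but nonzero drift, purely from rounding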
 
The range is irrelevant; look at the precision. [...]

Pretty much this. When dealing with floating point binary you need to decide how many bits will be your mantissa and how many will be your exponent. However, you may also be missing the fact that you are most probably using two's complement floating point, so the first digit of both the mantissa and the exponent are negative values.

The rule of thumb is: the larger the mantissa, the more accurate your numbers; the larger the exponent, the wider the range of numbers you can represent.
 
...you are most probably using two's complement floating point, so the first digit of both the mantissa and the exponent are negative values.

This isn't the case for the normal storage format for floating point numbers (IEEE 754). Instead the exponent is stored with an offset (a bias), and a separate sign bit is stored for the overall value of the number. I don't know why they don't use a complement method.
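For what it's worth, you can pull the three fields out of a double directly (a sketch; the helper name is just made up):

Code:
import struct

# Unpack an IEEE 754 double into sign bit, biased exponent and 52-bit fraction.
def decompose(x):
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign     = bits >> 63                 # 1 bit
    exponent = (bits >> 52) & 0x7FF       # 11 bits, stored with a +1023 offset
    fraction = bits & ((1 << 52) - 1)     # 52 bits, implicit leading 1 not stored
    return sign, exponent - 1023, fraction

print(decompose(100.0))   # (0, 6, ...)  since 100 = 1.5625 * 2**6
print(decompose(-0.1))    # (1, -4, ...) sign bit set; exponent is biased, not complemented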
 
Cheers for all the replies.
The mantissa (that is, the M in M*2^E) is 52 bits, which is enough to hold a number up to roughly 10^16. So if you're up at 100, you can store precision down to about 10^-14. [...]
This is starting to make more sense. I think my failing has been trying to visualise a mantissa and a base 10 exponent as part of the puzzle, when it's actually just a method of notation. I've also not considered the binary conversions properly.

So while we can talk about numbers as large as 10^308, we can't accurately combine or compare two numbers whose magnitudes differ by a factor of more than about 10^16?

e.g. adding 10^290 to 10^308 would give a result identical to 10^308?
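A quick check of that example (Python doubles, purely illustrative):

Code:
print(1e308 + 1e290 == 1e308)   # True: 1e290 is smaller than the gap between doubles near 1e308
print(1e308 + 1e293 == 1e308)   # False: within ~15 orders of magnitude it still registers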
 