Maths - Raising Number To Power Of Decimal

Hey,

This is eventually to be programmed in Java for a big number (not a double, long, int, etc.), so if someone knows of some pre-written code then let me know. However, what I'm really after is the general theory of raising a number to the power of a decimal, e.g.

2 ^ 2.1

What is the maths behind this that makes the answer

4.28709385...

I can't seem to find anything on the net.

Cheers.

Jon
 
Yeah, I don't really follow that though :confused:

Does it actually say how the calculation is performed or just that you can take a fraction and approximate a real?
 
Yeah, performance is not an issue as it's just a prototype really.

I know you can do it in Java and return a double, but I really need to return a BigDecimal for precision...

If I could grasp the basic theory of how a calculation like 2 ^ 2.1 is done on paper, then I reckon I could build it...
 
Audiofreq said:
Yeah, performance is not an issue as it's just a prototype really.

I know you can do it in Java and return a double, but I really need to return a BigDecimal for precision...

If I could grasp the basic theory of how a calculation like 2 ^ 2.1 is done on paper, then I reckon I could build it...
I think this is a lot harder than you realise (unless BigDecimal already has exp, log routines, in which case it's easy). I'm not sure how to do it properly, and I have written arbitrary precision routines from first principles.

Why do you need such precision?

If you really need such precision, what is the range of possible inputs? And how much time are you happy for the algorithm to take?
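For what it's worth, the "easy" route is a sketch like the following, which splits the exponent into integer and fractional parts: the integer part goes through BigDecimal.pow exactly, but the fractional part falls back to Math.pow on doubles, so the overall result is only good to about 15 significant digits (the class and method names here are just illustrative, not an existing API):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class DecimalPow {
    // Sketch: x^y for BigDecimal, splitting y into integer and fractional parts.
    // The fractional part falls back to double, so precision is limited to roughly
    // 15 significant digits -- a prototype shortcut, not true arbitrary precision.
    static BigDecimal pow(BigDecimal x, BigDecimal y) {
        int intPart = y.intValue();                                  // integer part of exponent
        BigDecimal frac = y.subtract(BigDecimal.valueOf(intPart));   // fractional remainder
        BigDecimal intResult = x.pow(intPart, MathContext.DECIMAL64);
        double fracResult = Math.pow(x.doubleValue(), frac.doubleValue());
        return intResult.multiply(BigDecimal.valueOf(fracResult), MathContext.DECIMAL64);
    }

    public static void main(String[] args) {
        // 2^2.1 = 4.28709385...
        System.out.println(pow(new BigDecimal("2"), new BigDecimal("2.1")));
    }
}
```

If double precision in the fractional part isn't acceptable, you are back to implementing arbitrary-precision exp/ln yourself, which is the hard part being discussed.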
 
Well, given a set of numbers in some sort of array where 0 represents an empty space, e.g.

[2,0,5,8,9,3]

This can be stored as one number using prime factorisation, e.g.

2^2 x 3^0 x 5^5 x 7^8 x 11^9 x 13^3

The thing is, this is performed on a large array and the resulting number is... large.
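The encoding just described can be sketched with BigInteger (the prime table and helper names here are just illustrative):

```java
import java.math.BigInteger;
import java.util.Arrays;

public class PrimeEncode {
    static final int[] PRIMES = {2, 3, 5, 7, 11, 13};

    // Encode an array as 2^a[0] * 3^a[1] * 5^a[2] * ... (a 0 entry contributes a factor of 1).
    static BigInteger encode(int[] a) {
        BigInteger n = BigInteger.ONE;
        for (int i = 0; i < a.length; i++)
            n = n.multiply(BigInteger.valueOf(PRIMES[i]).pow(a[i]));
        return n;
    }

    // Decode by counting how many times each prime divides n.
    static int[] decode(BigInteger n, int len) {
        int[] a = new int[len];
        for (int i = 0; i < len; i++) {
            BigInteger p = BigInteger.valueOf(PRIMES[i]);
            while (n.mod(p).signum() == 0) {
                n = n.divide(p);
                a[i]++;
            }
        }
        return a;
    }

    public static void main(String[] args) {
        int[] arr = {2, 0, 5, 8, 9, 3};
        BigInteger n = encode(arr);
        System.out.println(n + " (" + n.bitLength() + " bits)");
        System.out.println(Arrays.toString(decode(n, arr.length)));
    }
}
```

Note the bit length printed: the single encoded number already takes more bits than the raw array values would, which is worth keeping in mind for the storage question.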

So I was thinking I could use logarithms, since a multiplication would be:
5 x 5 = e ^ (ln 5 + ln 5)

So for storage's sake, in this example I would store the value of ln 5 + ln 5.

On a large scale this would result in a much smaller number.
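The identity itself is easy to check numerically; note, though, that even this single round trip through ln and exp is only accurate to double precision, which matters if the original number has to be recovered exactly (a quick sketch):

```java
public class LogCheck {
    // Multiply via logs: a * b = e^(ln a + ln b). Correct in exact arithmetic,
    // but in floating point the exp/ln round trip can lose the last few bits.
    static double timesViaLogs(double a, double b) {
        return Math.exp(Math.log(a) + Math.log(b));
    }

    public static void main(String[] args) {
        // ~25, but only to double precision -- not necessarily exactly 25.0
        System.out.println(timesViaLogs(5, 5));
    }
}
```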

So now I have this number, I want to turn it back into the original number, but it's longer than a long, double, etc...

So I need BigDecimal, of which I have found a modified version, although it doesn't allow numbers to be raised to the power of a decimal, and the type of calculation I'll need to do is

e ^ 50.2, but I need it to be precise so I can do prime factorisation to get the original array back.

I need to try to make the number I'm storing as small as possible.

If you know for a fact this isn't possible, or can suggest a better way, I'd be more than grateful.

Thanks in advance.

Jon
 
Audiofreq said:
Well, given a set of numbers in some sort of array where 0 represents an empty space, e.g.

[2,0,5,8,9,3]

This can be stored as one number using prime factorisation, e.g.

2^2 x 3^0 x 5^5 x 7^8 x 11^9 x 13^3

The thing is, this is performed on a large array and the resulting number is... large.

So I was thinking I could use logarithms, since a multiplication would be:
5 x 5 = e ^ (ln 5 + ln 5)

So for storage's sake, in this example I would store the value of ln 5 + ln 5.

On a large scale this would result in a much smaller number.
Why do you think that? There is no reason to expect it to be smaller, and lots of reasons to expect it to be larger. (e.g. ln 5 is irrational, so you need an infinite number of digits to represent it accurately. Not really an improvement on the one digit you need to store 5, is it?)

Certainly, in general, what you are suggesting is nonsense: there is no way to losslessly compress arbitrary data by applying some mathematical transform. I don't know whether that is also the case for a distribution of prime powers; it is an area where mathematics doesn't have all the answers. I do think it very, very unlikely.

I need to try to make the number I'm storing as small as possible.

If you know for a fact this isn't possible, or can suggest a better way, I'd be more than grateful.
I don't know it for a fact, but I have a degree in mathematics and I would be very surprised to find this is even theoretically possible. I am far more certain that it can't be done in any "sensible" way. Do you really want to do what I think will be millions to billions of calculations for each conversion?

I think you are barking up the wrong tree here. The questions I would be asking are:

What are the possible ranges of numbers you are considering?
Why do you want to store them as one number?
Why do you want to use as little space as possible?

My guess is that you are wanting the "convenience" of using BigInteger for something like comparing between two sets. But you are going to pay a huge price for that if you try to do this log/exp thing. Far easier to use a different storage format and write your own compare routine.
 
I think you really want to look into a different type of data structure if you're worried about storage constraints, such as a sparse matrix if lots of your values are the default (blank): http://en.wikipedia.org/wiki/Sparse_matrix
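A minimal sketch of the coordinate-list (COO) form of a sparse matrix, storing only the non-empty cells (class and method names are just illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class SparseGrid {
    // One record per non-empty cell: (row, col, value) -- coordinate-list storage.
    static class Entry {
        final int row, col, value;
        Entry(int row, int col, int value) { this.row = row; this.col = col; this.value = value; }
    }

    private final List<Entry> entries = new ArrayList<>();

    void set(int row, int col, int value) {
        if (value != 0) entries.add(new Entry(row, col, value));   // blanks cost nothing
    }

    int get(int row, int col) {
        for (Entry e : entries)
            if (e.row == row && e.col == col) return e.value;
        return 0;   // absent means blank
    }

    public static void main(String[] args) {
        SparseGrid g = new SparseGrid();
        g.set(0, 0, 2);
        g.set(0, 2, 5);   // the blank at (0,1) is never stored
        System.out.println(g.get(0, 2) + " " + g.get(0, 1));   // prints "5 0"
    }
}
```

With 30% occupancy, storing (row, col, value) triples only pays off once the coordinates cost less than the blanks they skip; sorted entries or a hash map would make lookup faster than this linear scan.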

I'm no mathematician, but I would think that factoring huge numbers is not exactly practical :p
 
For the special case of 2^2.1 you can do 2^2 * 2^0.1. 2^0.1 is the 10th root of 2, which can be worked out with Newton-Raphson approximation.

Code:
#include <stdio.h>

double integer_power(double x, int power) {
        double y = 1;
        while (power--) y *= x;
        return y;
}

/* Newton-Raphson iteration for the nth root of a:
   x <- ((n-1)*x + a/x^(n-1)) / n */
double nth_root(int n, double a) {
        int i = 100;
        double x = a;
        while (i--)
                x = ((n - 1) * x + a / integer_power(x, n - 1)) / n;
        return x;
}

int main(void) {
        /* 2^2.1 = 2^2 * 2^0.1
                 = 2^2 * nth_root(10, 2) */
        printf("2^2.1 = %0.10f\n", integer_power(2, 2) * nth_root(10, 2));
        return 0;
}

I'll leave it up to you to work out the general algorithm via this method, or you could go down the e/ln path...
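One hedged sketch of that generalisation, assuming the exponent can be written as a fraction p/q: x^(p/q) is the qth root of x^p, using the same Newton-Raphson iteration as above (note x^p can overflow a double for large p, so this is illustrative only):

```java
public class FracPow {
    // Newton-Raphson for the nth root of a (a > 0), same iteration as the C code
    // above: x <- ((n-1)*x + a/x^(n-1)) / n, stopping once x settles down.
    static double nthRoot(double a, int n) {
        double x = a;
        for (int i = 0; i < 1000; i++) {
            double nx = ((n - 1) * x + a / Math.pow(x, n - 1)) / n;
            if (Math.abs(nx - x) <= 1e-12 * nx) return nx;
            x = nx;
        }
        return x;
    }

    // x^(p/q) = qth root of x^p. Beware: x^p overflows a double for large p.
    static double fracPow(double x, int p, int q) {
        return nthRoot(Math.pow(x, p), q);
    }

    public static void main(String[] args) {
        // 2^2.1 = 2^(21/10)
        System.out.println(fracPow(2, 21, 10));   // ~4.2870938501
    }
}
```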

edit: oh yeah, if you fiddle with the bits of an IEEE-754 float manually, you get powers of 2 for free :p
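That bit trick, sketched in Java via Double.longBitsToDouble: a double's exponent field is stored with a bias of 1023 in bits 52-62, so writing k + 1023 there with a zero mantissa gives exactly 2^k (valid for the normal range, roughly -1022 to 1023):

```java
public class Pow2Bits {
    // Build 2^k directly from the IEEE-754 bit layout of a double:
    // sign = 0, exponent field = k + 1023, mantissa = 0.
    static double pow2(int k) {
        return Double.longBitsToDouble((long) (k + 1023) << 52);
    }

    public static void main(String[] args) {
        System.out.println(pow2(10));    // 1024.0
        System.out.println(pow2(-3));    // 0.125
    }
}
```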
 
DaveF said:
I think you are barking up the wrong tree here. The questions I would be asking are:

What are the possible ranges of numbers you are considering?
Why do you want to store them as one number?
Why do you want to use as little space as possible?

Hey thanks for the very informative post.

You're right about barking up the wrong tree, and I've barked up a few wrong trees on the way.

Essentially what I'm trying to do is store a large 2D array where only 30% of the spaces have numbers... the rest are 0s, which represent blank.

Each 2D array that has to be saved will be different, meaning different spaces will be blank and different spaces will have numbers.

I have to store these arrays at a bit level using as few bits as possible... but I also need a way of matching numbers to their array space.

I don't know lots of maths, but I'm fairly good when it comes to learning things, so thanks for linking me to the sparse matrix...
:)
 
Audiofreq said:
Essentially what I'm trying to do is store a large 2D array where only 30% of the spaces have numbers... the rest are 0s, which represent blank.
So the next question is what are those numbers likely to be?

For example, if you are storing images that happen to have large blank areas, then even in the non-blank areas, it is likely that two adjacent pixels will either be the same or very similar. So you can just store the differences between adjacent pixels, which will typically fit into only a few bits. But if what you are storing is a lot more random, then this probably won't work well.
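The difference idea can be sketched as a simple delta encoder (assuming int values; a real codec would then pack the small deltas into fewer bits):

```java
import java.util.Arrays;

public class DeltaCode {
    // Store each value as its difference from the previous one (first value
    // is relative to an implicit 0).
    static int[] encode(int[] pixels) {
        int[] deltas = new int[pixels.length];
        int prev = 0;
        for (int i = 0; i < pixels.length; i++) {
            deltas[i] = pixels[i] - prev;
            prev = pixels[i];
        }
        return deltas;
    }

    // Reverse the process: running sum of the deltas.
    static int[] decode(int[] deltas) {
        int[] pixels = new int[deltas.length];
        int prev = 0;
        for (int i = 0; i < deltas.length; i++) {
            prev += deltas[i];
            pixels[i] = prev;
        }
        return pixels;
    }

    public static void main(String[] args) {
        int[] row = {100, 101, 101, 103, 0, 0, 0, 90};
        // [100, 1, 0, 2, -103, 0, 0, 90] -- mostly small numbers, cheap to pack
        System.out.println(Arrays.toString(encode(row)));
    }
}
```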

Similarly, if you are going to have distinct blank areas as opposed to the blank cells being scattered randomly, this is going to make a significant difference to what algorithm is most sensible.

It also makes a big difference how big your array is. What works best for a 1000 x 1000 array and what works best for a 1000000 x 1000000 array are likely to be very different.

I have to store these arrays at a bit level using as few bits as possible... but I also need a way of matching numbers to their array space.
These two things are somewhat incompatible. When you say "as few bits as possible", what do you really mean? How big is your data, and how small a space do you need it to fit in?

And when you say you need to be able to match numbers to their array space, how much overhead are you prepared to pay? Both in terms of time (e.g. "I don't mind processing 10 million pieces of data to get to the right one"), and in terms of space ("I'm prepared for the data to be 10% bigger if I only have to process 100 pieces of data to get to the right one").

You also need to think about how much effort you realistically want to spend on this. Once you factor in what your time is worth, it might well be more efficient to just buy a bigger hard disk and/or more RAM.
 