Audiofreq said:
Essentially, what I'm trying to do is store a large 2D array where only 30% of the spaces have numbers... the rest are 0s, which represent blanks.
So the next question is: what are those non-zero numbers likely to be?
For example, if you are storing images that happen to have large blank areas, then even in the non-blank areas it is likely that two adjacent pixels will be the same or very similar. In that case you can store just the differences between adjacent pixels, which will typically fit into only a few bits. But if your data is much more random, this probably won't work well.
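To make that concrete, here is a minimal sketch of delta encoding on one row; the sample values (and the observation about which deltas stay small) are illustrative assumptions, not anything from your actual data:

```python
def delta_encode(row):
    """Store the first value, then each value's difference from its neighbour."""
    deltas = [row[0]]
    for prev, cur in zip(row, row[1:]):
        deltas.append(cur - prev)
    return deltas

# Hypothetical "smooth" image row that runs into a blank region.
row = [200, 201, 201, 203, 202, 0, 0, 0]
print(delta_encode(row))  # [200, 1, 0, 2, -1, -202, 0, 0]
# Most deltas fit in a few bits; only the jump into the blank run is large.
```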
Similarly, whether the blanks form distinct contiguous regions or are scattered randomly across the array makes a significant difference to which algorithm is most sensible.
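Run-length encoding is the classic illustration: it wins when the blanks cluster and can actually lose when they are scattered. A sketch, with made-up sample rows:

```python
from itertools import groupby

def rle_encode(cells):
    """Collapse each run of equal values into a (value, run_length) pair."""
    return [(value, len(list(run))) for value, run in groupby(cells)]

clustered = [7, 9, 0, 0, 0, 0, 0, 0, 3, 5]
scattered = [7, 0, 9, 0, 3, 0, 5, 0, 2, 0]
print(rle_encode(clustered))  # [(7, 1), (9, 1), (0, 6), (3, 1), (5, 1)]
print(rle_encode(scattered))  # ten pairs -- longer than the original row
```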
The size of your array also makes a big difference: what works best for a 1000 x 1000 array and what works best for a 1,000,000 x 1,000,000 array are likely to be very different.
Audiofreq said:
I have to store these arrays at a bit level using as few bits as possible... but also need a way of matching numbers to their array space.
These two goals pull in opposite directions: the more densely you pack the bits, the harder it generally becomes to jump straight to a given cell. When you say "as few bits as possible", what do you really mean? How big is your data, and how small a space do you need it to fit in?
And when you say you need to be able to match numbers to their array space, how much overhead are you prepared to pay, both in time (e.g. "I don't mind processing 10 million pieces of data to get to the right one") and in space (e.g. "I'm prepared for the data to be 10% bigger if I only have to process 100 pieces of data to get to the right one")?
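As one illustration of that trade-off, a coordinate map spends extra space per stored cell but answers "what's at (r, c)?" immediately, whereas a tightly packed bit stream would need a linear scan. The class and names below are hypothetical, just to show the shape of it:

```python
class SparseGrid:
    """Store only the non-zero cells of a rows x cols grid, keyed by coordinate."""

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.cells = {}                    # (row, col) -> value

    def set(self, r, c, value):
        if value:                          # 0 means blank, so don't store it
            self.cells[(r, c)] = value

    def get(self, r, c):
        return self.cells.get((r, c), 0)   # blanks read back as 0, in O(1)

grid = SparseGrid(1000, 1000)
grid.set(12, 34, 56)
print(grid.get(12, 34), grid.get(0, 0))    # 56 0
```

At 30% occupancy the per-cell key overhead may well dwarf the savings from skipping the blanks, which is exactly why the size of your values and the cost you can tolerate per lookup matter.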
You also need to think about how much effort you realistically want to spend on this. Once you factor in what your time is worth, it might well be more efficient to just buy a bigger hard disk and/or more RAM.