If you go for bitfields (as in Una's example above), do be careful. Their layout is implementation-defined, and as such they are not portable between different compilers, let alone architectures.
There are a few problems with using them:
* A bitfield will always be padded out to a full 'storage unit'. This is usually the machine's word size, but isn't guaranteed to be. So if you were to define just (say) 5 bits in your data type, the actual size of the data type could be 8, 16, 32 or even 64 bits (there's a quick sizeof demo after this list). Make sure you understand how your processor and compiler work.
* Same goes if mixing other types with bitfields. Check out the following:
Code:
typedef struct myStruct
{
    unsigned five_bit_int:5;
    unsigned int next_field;  /* This will get aligned how the compiler
                               * wants. Probably to the next word
                               * boundary, but who knows?
                               */
} myStruct_t;
* Some implementations allocate the bits in the reverse order -- whether fields are filled from the most or least significant bit is also implementation-defined. This is especially problematic if you simply try to cast some memory to your bitfield type: it may have been packed in the opposite order, causing dodgy results (see the second sketch below)! This is why it's not a great idea to use bitfields to encode/decode files.
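To see the padding from the first point for yourself, try printing the sizeof. This is just a quick sketch (the struct name is my own), and the number you get is entirely up to your compiler:
Code:
#include <stdio.h>

/* Only 5 bits declared... */
struct just_five_bits
{
    unsigned five_bit_int:5;
};

int main(void)
{
    /* ...but the struct still occupies a whole storage unit.
     * Many 32-bit compilers print 4 here, but 1, 2 or 8 are
     * perfectly legal answers too.
     */
    printf("sizeof = %u\n", (unsigned)sizeof(struct just_five_bits));
    return 0;
}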
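And to see the bit-ordering problem from the last point, a union is a handy way to peek at the raw bytes. Again, only a sketch -- which answer you get is implementation-defined, which is exactly the point:
Code:
#include <stdio.h>
#include <string.h>

union peek
{
    struct
    {
        unsigned a:5;
        unsigned b:5;
    } fields;
    unsigned char raw[sizeof(unsigned)];
};

int main(void)
{
    union peek p;

    memset(&p, 0, sizeof p);  /* zero everything first */
    p.fields.a = 0x1F;        /* all five bits of 'a' set */
    p.fields.b = 0;

    /* Some compilers put the 0x1F in the low bits of the first
     * byte (prints 0x1F); others fill from the high end (prints
     * 0xF8). Read a file's bytes through this struct and you'll
     * get garbage on half your targets.
     */
    printf("first byte = 0x%02X\n", (unsigned)p.raw[0]);
    return 0;
}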
Visage's method is probably best for what you want to do -- simply pack your 5-bit integers into chars yourself with shifts and masks. You can use the remaining 3 bits of each char for part of the next integer (i.e. 5 chars can store 8 of your 5-bit integers, since 5 x 8 = 40 bits exactly). Unlike bitfields, the layout is then entirely under your control, so it's safe for files.
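Something along these lines would do it. The put5/get5 helpers are names I've made up, and I've (arbitrarily) packed low bits first within each byte -- but whatever order you pick, it's your choice, not the compiler's:
Code:
#include <stdio.h>

/* Store the n-th 5-bit value into buf, bits packed contiguously,
 * low bits first. buf must hold at least (count * 5 + 7) / 8 bytes.
 */
void put5(unsigned char *buf, unsigned n, unsigned value)
{
    unsigned bit   = n * 5;    /* absolute bit offset */
    unsigned byte  = bit / 8;
    unsigned shift = bit % 8;

    value &= 0x1F;             /* keep only 5 bits */
    buf[byte] = (unsigned char)((buf[byte] & ~(0x1F << shift))
                                | (value << shift));
    if (shift > 3)             /* value straddles a byte boundary */
        buf[byte + 1] = (unsigned char)
            ((buf[byte + 1] & ~(0x1F >> (8 - shift)))
             | (value >> (8 - shift)));
}

/* Fetch the n-th 5-bit value back out. */
unsigned get5(const unsigned char *buf, unsigned n)
{
    unsigned bit   = n * 5;
    unsigned byte  = bit / 8;
    unsigned shift = bit % 8;
    unsigned value = buf[byte] >> shift;

    if (shift > 3)
        value |= (unsigned)buf[byte + 1] << (8 - shift);
    return value & 0x1F;
}

int main(void)
{
    unsigned char buf[5] = {0};       /* 5 bytes = 40 bits = 8 values */
    unsigned i;

    for (i = 0; i < 8; i++)
        put5(buf, i, i + 20);         /* store 20..27 */
    for (i = 0; i < 8; i++)
        printf("%u ", get5(buf, i));  /* prints them back: 20..27 */
    printf("\n");
    return 0;
}
Because the layout here is spelled out in the shifts and masks, it's the same on every compiler -- which is exactly what bitfields don't give you.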