defining data types in C

Is it possible to define a data type that is a value, n bits big? For what I'm doing I basically want an unsigned 5 bit integer.
 
It's not recommended, but I think you can put a :5 after the variable when you declare it (a bitfield).
 
Just use a char and use a bitmask to get the 5 least significant bits:

Code:
char b = something;
int five_bit_value = b & 0x1F;
 
Inquisitor said:
Why do you need it to be exactly 5 bits? Why not just use a 16-bit integer? :confused:

There are many reasons why you might need a specific number of bits, for example when you are modelling memory blocks/entries.

You can also do something like the following,

Code:
typedef struct cacheEntry 
{
        unsigned valid:1; /* Validity bit (remember to initialise it; it doesn't default to 0) */
        unsigned tag:3;   /* 3 Bits for cache tag */
        unsigned data:4;  /* 4 Bits for cache data */
} cacheEntry_t;
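Just to show how Una's struct behaves in practice: stores into a narrow unsigned bitfield wrap modulo 2^width, and the fields hold garbage unless you initialise them. This is a quick sketch (store_data is my own helper name, not from the thread):

```c
typedef struct cacheEntry
{
        unsigned valid:1; /* Validity bit */
        unsigned tag:3;   /* 3 bits: values 0..7 */
        unsigned data:4;  /* 4 bits: values 0..15 */
} cacheEntry_t;

/* Store v into the 4-bit data field and read it back. */
unsigned store_data(unsigned v)
{
        cacheEntry_t e = {0};   /* initialise explicitly -- fields don't default to 0 */
        e.data = v;             /* unsigned bitfields keep only the low 4 bits */
        return e.data;
}
```

So store_data(20) gives you 4, because only the low 4 bits of 20 (10100) survive.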
 
If you go for bitfields (as in Una's example above), do be careful. They are implementation-dependent, and as such are not portable between different architectures.

There are a few problems with using them:

* A bitfield will always be padded to a 'machine unit'. This is usually the word size, but isn't guaranteed to be. So if you were to just define (say) 5 bits in your data type, the actual size of the data type could be 8, 16, 32 or even 64 bits. Make sure you understand how your processor and compiler work.

* Same goes if mixing other types with bitfields. Check out the following:

Code:
typedef struct myStruct 
{
        unsigned five_bit_int:5;
        unsigned int next_field;   /* This will get aligned how the compiler
                                    * wants. Probably to the next word
                                    * boundary, but who knows?
                                    */
} myStruct_t;
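You can see what your own compiler decided by printing sizeof and offsetof. On a typical compiler with 32-bit ints this prints 8 and 4, but nothing guarantees that; show_layout is just an illustrative helper:

```c
#include <stddef.h>
#include <stdio.h>

typedef struct myStruct
{
        unsigned five_bit_int:5;
        unsigned int next_field;  /* aligned however the compiler wants */
} myStruct_t;

void show_layout(void)
{
        /* Both values are implementation-defined -- check them on every
         * platform you care about rather than assuming. */
        printf("sizeof(myStruct_t)   = %zu\n", sizeof(myStruct_t));
        printf("offsetof(next_field) = %zu\n", offsetof(myStruct_t, next_field));
}
```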

* Some architectures will expect bits packed in the reverse order. This is especially problematic if you simply try to cast some memory to your bitfield type -- it may have been packed in the reverse order, causing dodgy results! This is why it's not a great idea to use bitfields to encode/decode files.

Visage's method is probably best for what you want to do -- simply encode your 5-bit integers into a char. You can use the remaining 3 bits for part of another one (i.e. 5 chars could store 8 of your 5-bit integers).
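If you do want to pack them tightly like that, something along these lines works (put5/get5 are hypothetical helper names, and the caller is assumed to zero-initialise the buffer first):

```c
/* Pack the i-th 5-bit value into buf. buf must hold at least
 * (count * 5 + 7) / 8 bytes and start zeroed. */
void put5(unsigned char *buf, int i, unsigned v)
{
        int bit = i * 5;
        v &= 0x1F;                              /* keep only 5 bits */
        buf[bit / 8] |= (unsigned char)(v << (bit % 8));
        if (bit % 8 > 3)                        /* value straddles a byte boundary */
                buf[bit / 8 + 1] |= (unsigned char)(v >> (8 - bit % 8));
}

/* Read the i-th 5-bit value back out. */
unsigned get5(const unsigned char *buf, int i)
{
        int bit = i * 5;
        unsigned v = buf[bit / 8] >> (bit % 8);
        if (bit % 8 > 3)
                v |= (unsigned)buf[bit / 8 + 1] << (8 - bit % 8);
        return v & 0x1F;
}
```

Unlike bitfields, this gives you the same layout on every platform, since it only uses shifts and masks.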
 
xyphic said:
If you go for bitfields (as in Una's example above), do be careful. They are implementation-dependent, and as such are not portable between different architectures.

Most compilers can be set not to align data on word boundaries. It's been a while, but on MS compilers you can use #pragma pack(1) in the source code.
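For reference, it looks something like this (the push/pop form; MSVC supports it and GCC/Clang accept it too, but it remains non-standard, so check your own compiler's docs):

```c
/* With pack(1) in effect, no padding is inserted between members. */
#pragma pack(push, 1)
typedef struct packed
{
        char c;
        int  i;   /* immediately follows c -- no alignment gap */
} packed_t;
#pragma pack(pop)     /* restore the previous packing for later structs */
```

Beware that misaligned int access is slower on some CPUs and faults outright on others, which is part of why this is off by default.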
 
Completely non-portable though! I'm used to working in an environment where portability is pretty important, especially to ensure that software can be migrated across several hardware variants with the minimum of effort.

It usually doesn't matter (like if you're just using it for storing things internally, as long as the compiler is self-consistent - and most are! - then you'll be fine), but there are occasions where you can get caught out badly by making assumptions about the packing of bitfields and structures. I came across one today: a structure had been defined that assumed a word size of 16-bits, and chars had been inserted to align ints and longs to a word boundary. Unfortunately, it was also being used on a system with word size of 32-bits and it got packed differently. In the latter case, it overran a buffer that assumed it would be packed as in the former.

Unions are another good way of getting yourself caught out.

Anyway, I digress -- I feel I've derailed this thread enough already! To the OP: if you need any more C help, I'm more than happy to offer up the benefit of my experience. My email and MSN are in my trust; feel free to get in touch.
 
Inquisitor said:
Why do you need it to be exactly 5 bits? Why not just use a 16-bit integer? :confused:
I'm writing an assembler and the instruction set takes a 5 bit register number.
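For an assembler you'll probably end up building the instruction word with shifts and masks anyway, so the 5-bit masking falls out naturally. A sketch with a made-up 16-bit instruction layout (not your real ISA -- opcode in bits 15:11, rd in 10:6, rs in 5:1):

```c
/* Encode a hypothetical register-register instruction.
 * Each field is masked to 5 bits before being shifted into place. */
unsigned encode_rr(unsigned opcode, unsigned rd, unsigned rs)
{
        return ((opcode & 0x1F) << 11)
             | ((rd     & 0x1F) << 6)
             | ((rs     & 0x1F) << 1);
}
```

Out-of-range register numbers are silently truncated here; in a real assembler you'd want to report an error instead.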

Visage said:
Just use a char and use a bitmask to get the 5 least significant bits:

Code:
char b = something;
int five_bit_value = b & 0x1F;
This is probably the best idea. Better not use something platform dependent as the lecturers seem to have a habit of not telling us what they're going to compile on.
 
Yep, the bitmask thing works great. It even works with negative numbers as long as I check they're in the right range.
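For the record, the range check plus mask looks something like this (my own helper names; assumes two's-complement integers, which is what you'll get on anything you're likely to target):

```c
/* Does v fit in a 5-bit two's-complement field (-16..15)? */
int fits5(int v)
{
        return v >= -16 && v <= 15;
}

/* Mask to the low 5 bits; e.g. -1 becomes 0x1F, -16 becomes 0x10. */
unsigned to5(int v)
{
        return (unsigned)v & 0x1F;
}
```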
 