I understand Turing machines, binary, storage, and logic gates. The part that gets me is how the system recognizes different bits of code: how does it split 01010000010101010 into, say, 01010, 000010 and 101010?
Yes, this is handled by what the computer world calls "words": 32-bit words on a 32-bit processor, 64-bit words on a 64-bit processor, and so on. The computer fetches and executes instructions word by word.
A word may contain an instruction, a memory address, or data. It could represent a number, an ASCII character, or a command to add the two following words together. Which interpretation applies is determined by the current "state" of the system: what it is doing at that moment and what it has been instructed to do next.
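To make that concrete, here is a minimal sketch in C of how a decoder splits one fixed-width word into fields. The 16-bit layout (4-bit opcode, 4-bit register number, 8-bit immediate value) is a made-up toy format, not any real chip's; the point is that the split positions are fixed by the hardware design, so the CPU never has to guess where one field ends and the next begins.

Code:
#include <stdio.h>
#include <stdint.h>

/* Hypothetical toy format: [4-bit opcode][4-bit register][8-bit immediate] */
int main(void)
{
    uint16_t word = 0x5A2B;                    /* 0101 1010 0010 1011 in binary */

    uint16_t opcode    = (word >> 12) & 0xF;   /* top 4 bits    */
    uint16_t reg       = (word >> 8)  & 0xF;   /* next 4 bits   */
    uint16_t immediate =  word        & 0xFF;  /* bottom 8 bits */

    printf("opcode=%d reg=%d imm=%d\n", opcode, reg, immediate);
    return 0;
}

Real instruction sets do the same thing with more elaborate formats (and x86 even uses variable-length instructions), but the principle holds: the format is baked into the decoder, not discovered from the bits themselves.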
There are many different kinds of operations, and they can be combined in different ways to build more complex operations, and so on and so forth.
So, for example (and this is very simplistic), your CPU will have a queue of instructions (i.e. machine language) like the following; there's a rough sketch of it in code after the list:
Add 300 to the value at memory location 005F01E3.
Add 923 to the value at memory location 168ED145.
If the value at 005F01E3 is bigger than the value at 168ED145, jump to instruction #123.
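Here is that three-step queue sketched in C, with memory modelled as a two-slot array and the two addresses and instruction #123 treated as stand-in values, just to show the fetch/execute shape:

Code:
#include <stdio.h>
#include <stdint.h>

uint32_t mem[2] = { 0, 0 };   /* slot 0 stands in for 005F01E3, slot 1 for 168ED145 */

int main(void)
{
    int pc = 0;                             /* program counter: which instruction is next */
    while (pc < 4) {
        switch (pc) {
        case 0: mem[0] += 300; pc++; break;              /* add 300 to "005F01E3" */
        case 1: mem[1] += 923; pc++; break;              /* add 923 to "168ED145" */
        case 2: pc = (mem[0] > mem[1]) ? 123 : 3; break; /* conditional jump      */
        case 3: printf("no jump taken\n"); pc++; break;
        }
        if (pc == 123) { printf("jumped to instruction #123\n"); break; }
    }
    return 0;
}

A real CPU does the same thing in hardware: a program counter picks the next word, the decoder works out which operation that word encodes, and the result of a comparison can overwrite the program counter (that's what a jump is).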
Hex (digits 0123456789ABCDEF) is just a shorter way of writing binary: "FF" in hex = 1111 1111 in binary = 255 in decimal.
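You can see that equivalence directly in C, which accepts hex literals; nothing here is architecture-specific:

Code:
#include <stdio.h>

int main(void)
{
    int value = 0xFF;                        /* hex literal */
    printf("hex FF = decimal %d\n", value);  /* prints 255  */
    printf("decimal 255 = hex %X\n", 255);   /* prints FF   */
    /* Each hex digit is exactly 4 bits, which is why it maps so neatly onto
       binary: F = 1111, so FF = 1111 1111. */
    return 0;
}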
Different architectures break down and execute "words" differently. There are several layers involved: binary -> groups of bits/flag bits -> words -> instructions -> machine language -> compiler -> programming language -> user interface -> and more.
There is no single way computers break binary into their own functions; it depends on the architecture of the chipset and CPU and the way they are designed to execute instructions (e.g. a 486 vs. a RISC design).
I learned most of this in the final year of my Comp. Sci. degree, in a module called Advanced Computer Architecture. It was very interesting, but immensely rich in material. What I've mentioned above barely scratches the surface, and that was nearly 2 years ago, so I'm a little rusty.
