Hello and welcome back to my blog!

In a kind of follow-up to 10 steps to becoming a better programmer, this time I wanted to write about binary and binary operations, because I've been rather surprised by the number of people (some even running highly successful tech businesses) who simply don't understand such a useful tool.

If you've seen this kind of thing before and been confused, this article is for you:

```c
const uint kSignal = 1<<0;
const uint kBreak  = 1<<1;
const uint kWait   = 1<<2;

uint flags = Wait(kSignal|kBreak|kWait);
flags &= ~kWait;
```

I'm going to be talking about the binary representation of unsigned integers in this article.

## What is binary?

The binary numeral system is a way of representing numbers, just like the decimal system that humans use every day. The difference between the two is the 'base': the number of unique digits the system uses, including 0. So binary is base 2 (since 0 and 1 are its full set of digits) and decimal is base 10 (since it uses 0 through 9).

We wouldn't get very far in life as humans if we could only count up to 9, though. We needed a way to count the number of tens while still being able to accumulate units, which we solved by prepending the tens count to the start of the number and resetting the units to zero. So 9 + 1 = one ten and 0 units, or 10. Obvious, right? Binary works the same way, just with a much smaller set of digits: 1 + 1 overflows the single binary digit, giving one 'two' and 0 units, written 10.

## Bits

In binary, each one of the digits is called a bit, which is short for **b**inary dig**it**. Bits are numbered from right to left, always starting from 0 at the least-significant end:
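For example, the 4-bit number 1011 with each bit numbered:

```
bit number:  3 2 1 0
bit value:   1 0 1 1
```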

Computers usually deal with bits a chunk at a time: 8, 16, 32, 64 or 128 bits, depending on the register size. For the rest of this article I'm going to assume a fictional register size of 4 bits (known as a nibble), for the sake of clarity.

## Converting from binary to decimal

It's useful to be able to convert from binary to decimal. To see how we can do this, it's helpful to explore the composition of numbers in base ten and base two.

Decimal numbers which represent tens, hundreds, thousands and so on can be called 'powers of ten':

**10^{x} in decimal:**

10^{0} = 1

10^{1} = 10

10^{2} = 100

10^{3} = 1000

Looking at them in this way, it's clear to see that when we raise ten to the power of an integer x, we can write it as: **one** appended with x **zeros**. In binary it's a very similar thing, except we're now dealing with base two, instead of base ten:

**2^{x} in binary:**

2^{0} = 1

2^{1} = 10

2^{2} = 100

2^{3} = 1000

This time, it's binary **one** appended with x binary **zeros**. If we do the same thing but convert to decimal:

**2^{x} in decimal:**

2^{0} = 1

2^{1} = 2

2^{2} = 4

2^{3} = 8

The individual powers of two above correspond to a particular bit in a binary number. The bit which is 'set' (filled with a one) is the bit numbered with the corresponding power: for example, 2^{2} is binary 0100, where bit 2 is the one set.

So, now we have a way to calculate the value of any particular binary digit. In order to convert an arbitrary binary number into decimal, all we have to do is add the relevant powers of two together.

Take binary **1101**: bits 0, 2 and 3 are set. Adding together only the powers of two which have a corresponding 1 bit:

2^{0} + 2^{2} + 2^{3} =

...we arrive at the decimal equivalent:

1 + 4 + 8 = **13**

## Bitwise operations

Since a computer is built out of transistors (which have two states, on or off), binary is the natural language of the computer. As such, bitwise operations are often among the fastest operations available, since there is no need to mess about dealing with the human's silly decimal system.

### Logical left shift

The logical left shift operator, or **<<** (in most programming languages), simply takes a binary number and moves every digit left by the number of bits given in the second argument, filling the bottom-most bit(s) with zero. Here is the binary representation of decimal one: 0001.

Shifting this left by two bits gives 0100, which is decimal four.

The astute among you will recognise that left-shifting one by two is equivalent to multiplying by two and then multiplying by two again. Indeed, every time we left-shift a number by one, it's equivalent to multiplying it by two. On older hardware this was much faster than an actual multiply; on modern processors the gap has largely closed, and compilers will usually make this substitution for you anyway.

### Logical right shift

Logical right shift, or **>>**, is the opposite of left shift. Instead of being equivalent to a multiply, it's equivalent to an (integer) divide:

12 >> 2 = 12 / 4 = 3

### Underflow and overflow

What happens when we take a binary number, shift it, and some of the digits get shifted off one end or the other? Remember, in this example we only have 4 bits of storage for our numbers.

If we right-shift decimal three (binary 0011) by one, we get 0001 and lose a digit off the end:

3>>1 = 3/2 = 1

As you can see, this is equivalent to the integer decimal operation 3/2. We lost a bit of information, hence the approximation. This is called underflow.

Overflow is arguably worse. Left-shifting decimal twelve (binary 1100) by one should give 24, but 24 needs five bits; in our 4-bit register the top bit is lost and we're left with 1000, decimal eight.

This time we don't end up with an approximate answer, it's just plain wrong! This is why it's very important to understand the **storage capacity** of the variables we are working with. It's worth noting that this result is no different to what we would get by performing the equivalent decimal multiplication in the same storage capacity.

### Bitwise Or

Bitwise or, represented by | in most languages, 'combines' two binary numbers together. The resulting binary number has a **one** in each bit where either of the two source numbers has a **one**, and only has a **zero** where both of the two source numbers have **zero**.

This is incredibly useful when dealing with binary numbers which represent a set of 'flags'. Such numbers are often called bitfields (or bit sets). Flags are pieces of state information which can either be set or clear, which is a nice fit for binary. Flags are often defined like this:

```c
const uint kSignal = 1<<0; // binary 0001
const uint kBreak  = 1<<1; // binary 0010
const uint kWait   = 1<<2; // binary 0100
```

Each flag is given a meaningful name and assigned the **unique** binary bit which represents it. Because a bitfield is designed to hold many such flags together, it's often useful to be able to set multiple flags together, which is where **bitwise or** comes into play:

```c
const uint kSignal = 1<<0; // binary 0001
const uint kBreak  = 1<<1; // binary 0010
const uint kWait   = 1<<2; // binary 0100

// 0011 = 0001 | 0010
// bitField = 0011
uint bitField = kSignal | kBreak;
```

Above we have composed a new bitfield containing the flags *kSignal* and *kBreak* but not *kWait*. Because our flags are unique, when we **or** them together we end up with the combination of both flags set in our bitfield.

### Bitwise And

Bitwise and, represented by & in most languages, combines the two input numbers in a different way. The resulting binary number has a **one** in each bit **only** where both source numbers have a **one**, and has a **zero** in all other locations.

**Bitwise and** is also very useful when dealing with flags. For example, what if we want to tell if a given bitfield contains a particular flag?

```c
const uint kSignal = 1<<0; // binary 0001
const uint kBreak  = 1<<1; // binary 0010
const uint kWait   = 1<<2; // binary 0100

// 0011 = 0001 | 0010
// bitField = 0011
uint bitField = kSignal | kBreak;

// (0011 & 0100) = 0, so
// containsWait = false
bool containsWait = (bitField & kWait) != 0;
```

Here, *containsWait* will be false because our bitfield does not have the bit represented by *kWait* set, so when it gets **bitwise anded** with the flag kWait (which of course only contains one particular bit) the resulting integer will be zero. You can also write that like this:

```c
bool containsWait = (bitField & kWait) == kWait;
```

### Bitwise not

Bitwise not, represented by ~ in most languages, simply takes each bit in a single number and flips it.

This is useful in conjunction with **bitwise and** when we'd like to remove a flag from a bitfield:

```c
const uint kSignal = 1<<0; // binary 0001
const uint kBreak  = 1<<1; // binary 0010
const uint kWait   = 1<<2; // binary 0100

// 0011 = 0001 | 0010
// bitField = 0011
uint bitField = kSignal | kBreak;

// clear kBreak
// ~kBreak = ~0010 = 1101
// 0011 & 1101 = 0001
// bitField = 0001
bitField &= ~kBreak;
```

Because ~ flips every bit, applying it to a flag forms a mask which represents every possible flag except the given one. When this mask is then **bitwise anded** with a bitfield, any bit which is zero in the mask is cleared, removing that flag from the bitfield.

Multiple flags can be cleared at the same time by combining all three operators:

```c
const uint kSignal = 1<<0; // binary 0001
const uint kBreak  = 1<<1; // binary 0010
const uint kWait   = 1<<2; // binary 0100

// 0011 = 0001 | 0010
// bitField = 0011
uint bitField = kSignal | kBreak;

// clear kBreak and kSignal
// kBreak|kSignal = 0010 | 0001 = 0011
// ~(kBreak|kSignal) = ~0011 = 1100
// 0011 & 1100 = 0
// bitField = 0
bitField &= ~(kBreak|kSignal);
```

### Exclusive or

Exclusive or, represented by ^ in most languages, takes two numbers and produces a **one** for each bit which is set in one number but **not** the other, and **zero** elsewhere.

This operator is useful with flags if we want to toggle a flag:

```c
const uint kSignal = 1<<0; // binary 0001
const uint kBreak  = 1<<1; // binary 0010
const uint kWait   = 1<<2; // binary 0100

// 0011 = 0001 | 0010
// bitField = 0011
uint bitField = kSignal | kBreak;

// toggle kSignal
// bitField = bitField ^ kSignal
// 0010 = 0011 ^ 0001
// bitField = 0010
bitField ^= kSignal;
```

Again, we can use it with a mask to toggle various flags together:

```c
const uint kSignal = 1<<0; // binary 0001
const uint kBreak  = 1<<1; // binary 0010
const uint kWait   = 1<<2; // binary 0100

// 0011 = 0001 | 0010
// bitField = 0011
uint bitField = kSignal | kBreak;

// toggle kSignal and kBreak
// bitField = bitField ^ (kSignal | kBreak)
// 0000 = 0011 ^ 0011
// bitField = 0
bitField ^= kSignal | kBreak;
```

## Maintainability

Because the true meaning of bitwise operations on a set of flags is harder to read at a glance, consider carefully whether you really need the extra compactness these techniques offer before writing new code. It might be just as easy to use multiple boolean variables, which will be much easier to read and maintain.

Understanding bitwise operations will, however, stand you in good stead should you encounter them in legacy code, and a good programmer should be able to fully understand all the tools at their disposal.

That's it for now, hope this has helped you get a better understanding of how binary works!

Until next time, have fun!

Cheers, Paul.

Very useful, thanks very much!!!!

This article is so useful I read it 0010 times 😉

There are just 10 sorts of people. The ones who understand binary and the one who don’t. 😀

This is really really helpful thanks a lot for the hard work

No problem

Apologies if this seems critical but I hope it reads as constructive criticism. Also, I only skimmed part of it because I know this stuff. Regardless, for what it is worth:

Re: “Computers usually deal with bits a chunk at a time, either 8, 16, 32, 64, 128 bits etc depending on the register size. For the rest of this article I’m going to be assuming a fictional register size of 4 bits (known as a nibble), for the sake of clarity.”

It is the WORD size. Back in the 16 bit days, a word size would be 16 because that was two bytes. A double word (dword) back then would be 32 bits, and so on. There is a technicality here with sizes but I’m going to ignore that. I think that reminding the reader that 4 bits is a nibble with regards to the word size (your wording – pardon the pun – being register size) is misleading because registers can be different sizes (64-bit CPUs still have 8, 16, 32 bit registers and others (floating point)).

Re: “The astute among you will recognise this as left shift of one by two as being equivalent of multiplying by two and then multiplying by two again.”

Another way of thinking of it is multiplying by powers of 2 (read: multiplying by 2 to the power of X).

Re: “This is incredibly useful when dealing with binary numbers which represent a set of ‘flags’. Such numbers are called bitfields.”

No, a bitfield is (going from C) an integer with a specified width. They are also less (if at all) portable (network included). A better way to put it is a bit vector or bit set or … Okay, re-reading it, I think I see what you’re getting at. You could indeed use a bitfield to hold the flags. But I think there should be a distinction here because an unsigned int is different from a bitfield (let’s say a 4-bit bitfield). An unsigned int, also, and in particular how large it is, will depend on other things, whereas a bitfield is what you specify (but again less portable, with other problems too, if I recall).

Re: “As such, bitwise operations are often the fastest possible performing operations since there is no need to mess about dealing with the human’s silly decimal system.”

Only useful when dealing with individual bits though. Otherwise you can go with decimal, hexadecimal or octal. Bitwise logic is very different from (example) boolean logic. For instance, you have the bitwise AND and the boolean AND which have different uses.

Re: “This is usually much faster than actually multiplying the number by two.”

This used to be true. It really isn’t nowadays (and hasn’t been in a long time). I mean sure there could be an exception but generally there’s been enough improvements for this to not matter. But that’s not to say that shifting is useless. It is very useful even for more than just flags.

Re: “It might be just as easy to use multiple boolean variable types, which will be much easier to read and maintain. ”

I disagree for this reason: you typically have (and if you don’t you’re relying on magic numbers, which is a huge mistake 99.99% of the time) names anyway, for each value. Also, if you go this route (your suggestion) then you have many variables to keep track of, whereas with bits you have less to keep track of. Besides, with bitwise logic you can check if multiple bits are set at the same time (and many other things that aren’t so simple as doing ‘one’ thing at a time). But how to manage this all is simple. Make your program modular (like you should always do): make a set of functions that do these tasks. Or if you’re using an OO language then you make a class with these functions (that act on the actual object which holds the data). Lastly, if you take the logic of ‘easier to read and maintain’ too far you can get yourself ignoring some very powerful but ‘more difficult to read’ features (pointers come to mind, especially advanced use of pointers). But yes, you’re right: make it easy to maintain, easy to read (the two are related). Keep in mind however, that if you have to set 10, 15, 30 or more variables (and test/set/toggle/unset/…) then you are NOT (pun intended) at all making things easier to maintain. And no, that is not a contrived example by any means.

A few other notes that came to mind. First one is what I was trying to think of yesterday but it was not coming to my mind (until I was trying to sleep… which is when I often think of programming .. I am addicted to it):

~, while sometimes called bitwise NOT, I prefer to call the complement operator (I seem to remember this is the proper name). And there is a distinction here too. It is true that !1 is 0, but !3 is also 0. However, ~3 is not 0. The difference is that ~ will change 0 to 1 and 1 to 0 in every bit, which is not the same as logical NOT. Another example: ~7 = ~0111 = 1000, but because of one’s/two’s complement, it isn’t the same sign. In C, ~7 = -8 for instance.

The other point was to do with bitwise AND (an addition that you don’t have). I’m quite distracted however, so I have a better idea. (Note however that this might confuse those who are new to this). Still, you may find it of interest:

http://graphics.stanford.edu/~seander/bithacks.html

Cheers.

I was working on something a bit (no, I cannot help puns… they just happen) ago and I somehow remembered this website. I specifically remembered my remark about word size. And something occurred to me. That something is I could very well be confusing matters for others. So although maybe no one read what I wrote, I do want to address (sorry) my point:

Yes, the registers are different sizes and indeed based on the architecture. So I can see what you mean by register size. Depending on how low level you want to go, though, you can run into the low half/hi half (ax made of ah and al for instance). I think in the end I was (or am) going too low level and I think that is also not helpful. You also refer to the register size in question, and since ‘al’ is a register itself (even if it is part of ax, and again that is going too far), you’re right in that way (you can of course only work on a single bit but you explain this even). Word size, in how I … sorry … worded it, is probably up to semantics, too, and that isn’t helpful. I apologise and all things considered, register size might be a better way of wording (…) it. What really matters is this:

It depends on the variable being used and how the computer is instructed to operate on it. Computers operate with bits, in the end, and there’s more than one way of working on the bits, some of which you describe. That is much more simplified and again I apologise for adding confusion to it all.

Kind regards.