Part 4: Bitwise Patterns

A bitwise ops cheatsheet (¬, <<, >>, ROTR/L, +, ∧, ∨, ⊕)

Cédric Bellet
Biffures
9 min read · Sep 8, 2016

--

We explained in Part 1 what bit strings are, and in Parts 2 and 3 that interesting patterns can be found in bitwise operations such as AND and OR. In Part 4, we build on these findings and present the key bitwise operations and their patterns with minimal commentary. Usual caveats: I use the notations found in FIPS 180–4; assume bit strings are representations of positive-only integers, most significant bit to the left.

In the following examples, we work with bit words of fixed length n, and we note a, b two n-bit words. For the examples and illustrations, we set n=6, so the 6-bit words a, b represent positive integers between 0 and 63.

Representing bit words by their numerical values, from 0 (black) to 63 (white)

Bitwise operations on a single bit word

Negation (¬): ones’ complement

  • The negation of a, noted ¬a, is a bit string containing zeros where a contains ones, and ones where a contains zeros.
  • The sum of a and of its negation ¬a is a bit string containing only ones, so we can write: a + ¬a = 2ⁿ − 1, or equivalently, ¬a = 2ⁿ − 1 − a
      a = 011010 (26)
¬a = 100101 (37)
--------
a + ¬a = 111111 (63)
Bottom: all possible values of a 6-bit word. Top: corresponding negated values.
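
For reference, a minimal Python sketch of the same operation on a 6-bit word (the helper name bnot is mine):

# Ones' complement of an n-bit word: flip every bit of a.
def bnot(a, n=6):
    return (2**n - 1) - a        # same as ~a & (2**n - 1)

a = 0b011010                     # 26
print(bin(bnot(a)), bnot(a))     # 0b100101 37
print(a + bnot(a))               # 63, i.e. 2**6 - 1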

Left shift (<< k): multiplication by 2

  • a << k is an n-bit string where all bits of a are shifted to the left by k slots, with zeros inserted to the right
  • As a result, a << k = a ⋅ 2ᵏ mod 2ⁿ. The modulo indicates that an n-bit string cannot hold numbers larger than 2ⁿ − 1.
  • Single left shifts are a convenient way to multiply by 2 at a ‘low’ cost, assuming no overflow: a << 1 = 2a mod 2ⁿ
     a = 011010 (26)
a << 1 = 110100 (52) # i.e., 26 ⋅ 2
a << 2 = 101000 (40) # i.e. 52 ⋅ 2 - 64 (overflow)
Bottom: all possible values of a 6-bit word. Top: corresponding value, shifted to the left by 1 to 5 slots. Above 5 shifts, all values become zeros.
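
A quick Python sketch of the same idea, with the modulo made explicit (the helper name shl is mine):

# Left shift of an n-bit word: multiply by 2**k, drop overflowing bits.
def shl(a, k, n=6):
    return (a << k) % 2**n

a = 0b011010        # 26
print(shl(a, 1))    # 52, i.e. 26 * 2
print(shl(a, 2))    # 40, i.e. 104 mod 64 (overflow)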

Right shift (>> k): division by 2

  • a >> k is an n-bit string where all existing bits of a are shifted to the right by k slots, with zeros inserted to the left
  • Note: in languages such as JavaScript, >> is a slightly different operation that copies the sign bit to the left (so it inserts ones for negative numbers); the zero-filling right shift described here is written >>> instead
  • a >> k = floor(a / 2ᵏ)
     a = 011010 (26)
a >> 1 = 001101 (13) # 26 / 2
a >> 2 = 000110 (6) # i.e., floor(13 / 2)
Bottom: all possible values of a 6-bit word. Top: corresponding value, shifted to the right by 1 to 5 slots. Above 5 shifts, all values become zeros.
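
In Python, the zero-filling right shift of a non-negative integer is just floor division by a power of 2; a small sketch (the helper name shr is mine):

# Right shift of an n-bit word: floor division by 2**k.
def shr(a, k):
    return a >> k       # equals a // 2**k for non-negative a

a = 0b011010            # 26
print(shr(a, 1))        # 13, i.e. floor(26 / 2)
print(shr(a, 2))        # 6,  i.e. floor(26 / 4)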

Rotation to the left (ROTL): left shift with a twist

  • ROTL(a, k) is a circular shift to the left of bit string a by k slots. That is, ROTL is a left shift, where overflowing bits to the left are added back to the right, instead of zeros.
  • As a result, for any value of k between 0 and n−1, ROTL can be written as: ROTL(a, k) = (a << k) + (a >> (n − k))
Which reads as “ROTL(a, k) is a << k, plus the leftmost k bits of a inserted back to the right”
         a = 011010 (26)
ROTL(a, 1) = 110100 (52) # i.e., 26⋅2
ROTL(a, 2) = 101001 (41) # i.e., 26⋅4 mod 64 + 1
ROTL(a, k) = (a << k) + δ, where δ < 2ᵏ, so ROTL looks similar to the left shift for ‘small’ values of k or a. Above 5 rotations, the pattern repeats itself.
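
Python has no built-in rotation operator, but the formula above translates directly; a sketch under the same n = 6 assumption (the helper name rotl is mine):

# Rotate left: shift left by k, re-insert the k overflowing bits on the right.
def rotl(a, k, n=6):
    k = k % n
    return ((a << k) % 2**n) + (a >> (n - k))

a = 0b011010          # 26
print(rotl(a, 1))     # 52
print(rotl(a, 2))     # 41, i.e. 26*4 mod 64 + 1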

Rotation to the right (ROTR): rotation to the other left

  • ROTR(a, k) is a circular shift to the right of bit string a by k slots. That is, ROTR is a right shift, where overflowing bits to the right are added back to the left, instead of zeros.
  • Alternatively, you can verify that, in general, ROTR(a, k) = ROTL(a, n−k), so ROTR is fully defined by ROTL.
ROTR(a, k) = ROTL(a, n-k): same pattern but inverted.
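
Reusing the rotl sketch above, ROTR follows directly from the identity ROTR(a, k) = ROTL(a, n−k); the helper name rotr is mine:

# Rotate right, defined through ROTL(a, n - k).
def rotr(a, k, n=6):
    return rotl(a, (n - k) % n, n)

a = 0b011010          # 26
print(rotr(a, 1))     # 13 (0b001101)
print(rotr(a, 2))     # 38 (0b100110)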

Bitwise operations on two bit words

Addition mod 2ⁿ (+)

  • (a + b) is an n-bit word corresponding to the numeric sum of a and b, modulo 2ⁿ. The modulo indicates that an n-bit word cannot store values larger than 2ⁿ − 1.
    a = 011010 (26)
b = 101001 (41)
--------
a + b = 000011 (3) # i.e., 26 + 41 - 64 (overflow!)
Axes: increasing values of a, b. Colors: as above, indicating the value of f(a, b), in this case a + b (mod 2ⁿ)
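
In Python, where integers are unbounded, the wrap-around must be applied explicitly; a one-line sketch (the helper name add is mine):

# Addition of two n-bit words wraps around at 2**n.
def add(a, b, n=6):
    return (a + b) % 2**n

print(add(26, 41))    # 3, i.e. 67 - 64 (overflow)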

AND (∧)

  • (a ∧ b) is an n-bit word with ones only where both a and b have ones
  • The AND graph is organized in blocks of size 2ᵏ made of four quadrants: 3 identical blocks tₖ to the left and bottom, and one dominant block Tₖ in the upper right corner, such that Tₖ = tₖ + 2ᵏ
  • Blocks of size 2ᵏ make the leftmost quadrant (tₖ₊₁) of the next bigger block of size 2ᵏ⁺¹
  • See Part 2: The beauty of bitwise AND for a detailed discussion on the properties and pattern of the function, and an explanation of what I designate under the notations Tₙ, tₙ et cetera.

Danger! Interpretation in ‘blocks’, ‘quadrants’, ‘tiers’ is my own: use with caution.

    a = 011010 (26)
b = 101001 (41)
--------
a ∧ b = 001000 (8)
The AND graph. Axes: values of a, b. Cells: a ∧ b, represented in the color gradient used above (black: 0, to white: 63)

OR (∨)

  • (a ∨ b) is an n-bit word with ones where either a or b has ones
  • Similar to AND, OR is organized in blocks of size 2ᵏ made of four quadrants. This time, there is only one low-value block, located in the bottom left and noted tₖ, and 3 identical high-value blocks Tₖ to the right and top, such that Tₖ = tₖ + 2ᵏ.
  • Blocks of size 2ᵏ make the leftmost quadrant (tₖ₊₁) of the next bigger block of size 2ᵏ⁺¹
  • Despite similarities, OR differs from AND: the initialization blocks T₁ and the arrangement of quadrants within blocks are different; and blocks are delimited by different ‘critical’ values (powers of 2 for AND, and their predecessors for OR). For more information on the nuances of OR vs. AND, see Part 3: Bitwise OR, AND’s sibling

Again, danger: interpretation in ‘blocks’, ‘quadrants’, ‘tiers’ is my own. Use with caution.

    a = 011010 (26)
b = 101001 (41)
--------
a ∨ b = 111011 (59)
Bitwise OR

Looking at the patterns, it seems clear that AND and OR must be connected by some sort of relationship. In fact, at least two are worth mentioning:

  • (a ∧ b) + (a ∨ b) = a + b. The superimposed AND and OR graphs make the Addition graph. Alternatively, (a ∨ b) = (a + b) − (a ∧ b): a or b is everything in a plus everything in b, minus the double-counted portion at the intersection of a and b.
  • (a ∨ b) = ¬(¬a ∧ ¬b). The OR graph is exactly the AND graph, with inverted color scheme and inverted axes. In alternative notation, (a ∨ b) + (¬a ∧ ¬b) = 2ⁿ − 1. Both identities are checked in the short sketch below.
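
Both identities are easy to verify exhaustively over 6-bit words; a quick sketch, reusing the bnot helper from the negation sketch above:

# Check the two AND/OR identities for every pair of 6-bit words.
n = 6
for a in range(2**n):
    for b in range(2**n):
        assert (a & b) + (a | b) == a + b
        assert (a | b) == bnot(bnot(a, n) & bnot(b, n), n)
print("both identities hold for all 6-bit pairs")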

XOR (⊕)

    a = 011010 (26)
b = 101001 (41)
--------
a ⊕ b = 110011 (51)
  • (a ⊕ b) is an n-bit word with ones where either a or b has ones, but not both.
  • By now, you already anticipate that XOR is also organized in blocks of size 2ᵏ; each made of four quadrants; 2 low-value quadrants, noted tₖ, bottom left and top right; 2 high-value ones, noted Tₖ, top left and bottom right; Tₖ = tₖ + 2ᵏ. And blocks of size 2ᵏ make the leftmost quadrant (tₖ₊₁) of the next bigger block of size 2ᵏ⁺¹.
‘nuff said — XOR chart.
  • (a ⊕ b) + (a ∧ b) = (a ∨ b). XOR has just enough light in its high-value quadrants (bottom-right, top-left) to turn the dark AND pattern into a bright OR pattern. Alternatively, (a ⊕ b) = (a ∨ b) − (a ∧ b): a XOR b is everything that is in a or in b, minus everything that is in both a and b. This identity is checked in the sketch below.
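
The same kind of exhaustive check works for this identity; a quick sketch:

# Check that XOR plus AND gives OR, for every pair of 6-bit words.
n = 6
for a in range(2**n):
    for b in range(2**n):
        assert (a ^ b) + (a & b) == (a | b)
print("(a XOR b) + (a AND b) == (a OR b) for all 6-bit pairs")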

Bitwise addition

The observant ones will have noted that addition is the only operation here that I defined numerically, without explaining what it does on a per-bit basis. In fact, addition is the only operation on this page that does not qualify as a bitwise operation according to Wikipedia, for it does not “operate on one or more bit patterns or binary numerals at the level of their individual bits”.

It is possible however to define the addition as a bitwise operation, i.e. as an operation defined first and foremost by what it does to the bits of two n-bit strings. Let’s do it.

We note a, b the two n-bit strings, and s their sum modulo 2ⁿ: s = a + b; and we note aᵢ, bᵢ, sᵢ, the i-th bits in those bit strings, so that for instance, a can be written as: aₙ₋₁aₙ₋₂…a₂a₁a₀.

  1. We set s₀ = a₀ ⊕ b₀, and note c the carry of that operation: c = a₀ ∧ b₀
  2. For any 0 < i < n−1, we define: sᵢ = (aᵢ ⊕ bᵢ) ⊕ c, and we redefine the carry c so that c = ((aᵢ ⊕ bᵢ) ∧ c) ∨ (aᵢ ∧ bᵢ)
  3. Finally, for i = n−1, we simply set sₙ₋₁ = aₙ₋₁ ⊕ bₙ₋₁ ⊕ c, and discard the final carry (which is what working modulo 2ⁿ means)

This method is very similar to the addition algorithm we’ve all learnt in elementary school. First step, compute the sum of the rightmost digits. If they add up to more than 1 (i.e., a₀ ∧ b₀ = 1), carry 1, and jot down a zero (a₀ ⊕ b₀ = 0). If not, do not carry (a₀ ∧ b₀ = 0), and jot down a one if either bit is a one, or a zero if both bits are zeros (a₀ ⊕ b₀). Repeat the steps for all bits to the left, but taking into account this time the carry if any, etc., etc.
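
For concreteness, here is the same procedure written out in Python, using only per-bit XOR, AND and OR (the function name ripple_add is mine; this is a sketch of the steps above rather than a reference implementation):

# Ripple-carry addition of two n-bit words, one bit at a time.
def ripple_add(a, b, n=6):
    s, c = 0, 0                              # running sum and carry bit
    for i in range(n):
        ai, bi = (a >> i) & 1, (b >> i) & 1  # i-th bits of a and b
        si = (ai ^ bi) ^ c                   # sum bit, as in steps 1-3
        c = ((ai ^ bi) & c) | (ai & bi)      # new carry
        s |= si << i
    return s                                 # the carry out of bit n-1 is discarded (mod 2**n)

print(ripple_add(26, 41))                    # 3, matching (26 + 41) mod 64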

And… We’ve done it! We now have a bitwise definition of the addition. The real question, though, is: do we need one, and if so, why? Well, as far as understanding what the addition does, we probably don’t need one. However, these instructions actually enable us to create a basic electronic adder using only logic gates. In turn, those adders and logic gates help create arithmetic logic units, which are building blocks in CPUs, GPUs and FPUs. And so, who knew, by examining simple bitwise operations, we got our first glimpse of the world of computing using electronic circuits.

A full adder, part of a ripple-carry adder, which implements the addition as described above. Mandatory pat on the back.

There are a few more operations that we could spend time defining here, but we have defined enough concepts for you to explore in more depth, if you wanted to; and more importantly, we have defined all the operations that we need to discuss important upcoming topics: hashing and encryption.

Thanks for making it this far, and if you spot any mistakes, please let me know in a comment. Have a great day!
