## Fundamentals of C++: Introduction to Templates


Recently I have been helping a colleague convert a series of painfully repetitive code segments into function templates and template classes. I respect this colleague very much; he is a very skilled developer, including in his use of the C++ Standard Library. However, he has never developed any templates of his own. As I was helping him learn how to develop new templates, it occurred to me that plenty of fundamental C++ concepts are given very little attention. This creates an enormous gap in the reference material needed to progress from the competent skill level to the proficient and expert levels.

Therefore, I am going to periodically write about fundamental concepts and how to actually apply them to your daily development. This post discusses templates at a very basic level; entire books have been written about them. I will revisit templates with more sophisticated applications in the future.


## Segmented Computer Memory


I wanted to start writing about secure coding practices as well as more instructive posts related to security topics such as encryption and hacking. You probably already have a conceptual understanding of things like the "stack", "heap" and "program counter". However, it's difficult to have concrete discussions regarding security unless you have a solid grasp on the computer memory model. This post is intended to provide a concrete foundation of the memory model, and my future posts related to security will build on this foundation.

It is easy to take the complexity of computer memory for granted because of the many layers of abstraction that programmers work through today. The same basic memory design has existed for all computers that use a paged memory structure since the early 60's. Knowledge of the memory layout plays a crucial role in areas such as application portability, embedded development, program security, and code optimization. The diagrams I present will also help you understand where the different activities occur in a program during runtime.


## Introduction to the Math of Computer Graphics


Three-dimensional computer graphics is an exciting aspect of computing because of the amazing visual effects that can be created for display. All of this is created from an enormous number of calculations that manipulate virtual models, which are constructed from some form of geometric definition. While the math involved for some aspects of computer graphics and animation can become quite complex, the fundamental mathematics that is required is very accessible. Beyond learning some new mathematical notation, a basic three-dimensional view can be constructed with algorithms that only require basic arithmetic and a little bit of trigonometry.

I demonstrate how to perform some basic operations with two key constructs for 3D graphics, without the rigorous mathematical introduction. Hopefully the level of detail that I use is enough for anyone who doesn't have a strong background in math but would very much like to play with 3D APIs. I introduce the math that you must be familiar with in this entry, and in two future posts I demonstrate how to manipulate geometric models and apply the math towards the display of a 3D image.

## Linear Algebra

The type of math that forms the basis of 3D graphics is called linear algebra. In general, linear algebra is quite useful for solving systems of linear equations. Rather than go into the details of linear algebra and all of its capabilities, we are going to simply focus on two related constructs that are used heavily within the topic: the matrix and the vector.

### Matrix

The Matrix provides an efficient mechanism to translate, scale, rotate, and convert between different coordinate systems. This is used extensively to manipulate the geometric models for environment calculations and display.

These are all examples of valid matrices:

$$\left\lbrack \matrix{a & b & c \cr d & e & f} \right\rbrack, \left\lbrack \matrix{10 & -27 \cr 0 & 13 \cr 7 & 17} \right\rbrack, \left\lbrack \matrix{x^2 & 2x \cr 0 & e^x} \right\rbrack$$

Square matrices, that is, matrices with the same number of rows and columns, are particularly important. There are some operations that can only be performed on a square matrix, which I will introduce shortly. The notation for matrices generally uses capital letters as the variable names. A shorthand notation specifies the matrix dimensions, and it can also identify an indexed position within the matrix. As far as I am aware, row-major indexing is always used for consistency; that is, the first index represents the row, and the second represents the column.

$$A= [a_{ij}]= \left\lbrack \matrix{a_{11} & a_{12} & \ldots & a_{1n} \cr a_{21} & a_{22} & \ldots & a_{2n} \cr \vdots & \vdots & \ddots & \vdots \cr a_{m1} & a_{m2} & \ldots & a_{mn} } \right\rbrack$$

### Vector

The vector is a special case of a matrix, where there is only a single row or a single column. It is also common practice to use lowercase letters to represent vectors:

$$u= \left\lbrack \matrix{u_1 & u_2 & u_3} \right\rbrack$$, $$v= \left\lbrack \matrix{v_1\cr v_2\cr v_3} \right\rbrack$$

## Operations

The mathematical shorthand notation for matrices is very elegant. It simplifies equations that would otherwise be quite cumbersome. There are only a few operations that we are concerned with to get started. Each operation has a basic algorithm to follow to perform a calculation, and the algorithm easily scales with the dimensions of a matrix.

Furthermore, the relationship between math and programming is not always clear. I have a number of colleagues who are excellent programmers, and yet they do not consider their math skills very strong. I think it could be helpful to many to see the conversion of these matrix operations from mathematical notation into code form. Once I demonstrate how to perform an operation with the mathematical notation and algorithmic steps, I will also show a basic C++ implementation of the concept to help you understand how these concepts map to actual code.

The operations that I implement below assume a Matrix class with the following interface:

```cpp
class Matrix
{
public:
  // Ctor and Dtor omitted

  // Calculate and return a reference to the specified element.
  double& element(size_t row, size_t column);

  // Const overload so elements can be read from a const Matrix.
  double  element(size_t row, size_t column) const;

  // Resizes this Matrix to have the specified size.
  void resize(size_t row, size_t column);

  // Returns the number of rows.
  size_t rows() const;

  // Returns the number of columns.
  size_t columns() const;

private:
  std::vector<double> data;
};
```

### Transpose

Transpose is a unary operation that is performed on a single matrix, and it is represented by adding a superscript T to the target matrix.

For example, the transpose of a matrix $$A$$ is represented with $$A^T$$.

The transpose "flips" the orientation of the matrix so that each row becomes a column, and each original column is transformed into a row. I think a few examples will make this concept clearer.

$$A= \left\lbrack \matrix{a & b & c \cr d & e & f \cr g & h & i} \right\rbrack$$

$$A^T= \left\lbrack \matrix{a & d & g \cr b & e & h \cr c & f & i} \right\rbrack$$

The resulting matrix will contain the same set of values as the original matrix, only their position in the matrix changes.

$$B= \left\lbrack \matrix{1 & 25 & 75 & 100\cr 0 & 5 & -50 & -25\cr 0 & 0 & 10 & 22} \right\rbrack$$

$$B^T= \left\lbrack \matrix{1 & 0 & 0 \cr 25 & 5 & 0 \cr 75 & -50 & 10 \cr 100 & -25 & 22} \right\rbrack$$

It is very common to transpose a matrix, including vectors, before performing other operations. The reason will become clear with matrix multiplication.

$$u= \left\lbrack \matrix{u_1 & u_2 & u_3} \right\rbrack$$

$$u^T= \left\lbrack \matrix{u_1 \cr u_2 \cr u_3} \right\rbrack$$
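Transpose maps naturally onto code as well. The sketch below is a minimal, self-contained version: since the Matrix class shown earlier omits its constructors, I use a small stand-in class here, and the free `transpose` function is my own addition rather than part of that interface:

```cpp
#include <cstddef>
#include <vector>

// Minimal stand-in for the Matrix class used throughout this post;
// the interface shown earlier omits constructors, so this sketch
// supplies just enough to be self-contained.
class Matrix
{
public:
  Matrix(size_t rows, size_t columns)
    : m_rows(rows), m_columns(columns), data(rows * columns, 0.0) { }

  // 1-based indexing, to match the math notation in this post.
  double& element(size_t row, size_t column)
  {
    return data[(row - 1) * m_columns + (column - 1)];
  }

  size_t rows() const    { return m_rows; }
  size_t columns() const { return m_columns; }

private:
  size_t m_rows;
  size_t m_columns;
  std::vector<double> data;
};

// Transpose: element (i, j) of the source becomes
// element (j, i) of the result.
Matrix transpose(Matrix& m)
{
  Matrix result(m.columns(), m.rows());

  for (size_t i = 1; i <= m.rows(); ++i)
  {
    for (size_t j = 1; j <= m.columns(); ++j)
    {
      result.element(j, i) = m.element(i, j);
    }
  }

  return result;
}
```

Note that transposing twice returns the original matrix.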

### Addition

Addition can only be performed between two matrices that are the same size. By size, we mean that the number of rows and columns are the same for each matrix. Addition is performed by adding the values in the corresponding positions of both matrices. The sum of the values creates a new matrix that is the same size as the original two matrices provided to the add operation.

If

$$A= \left\lbrack \matrix{0 & -2 & 4 \cr -6 & 8 & 0 \cr 2 & -4 & 6} \right\rbrack, B= \left\lbrack \matrix{1 & 3 & 5 \cr 7 & 9 & 11 \cr 13 & 15 & 17} \right\rbrack$$

Then

$$A+B=\left\lbrack \matrix{1 & 1 & 9 \cr 1 & 17 & 11 \cr 15 & 11 & 23} \right\rbrack$$

The sizes of these matrices do not match in their current form:

$$U= \left\lbrack \matrix{4 & -8 & 5}\right\rbrack, V= \left\lbrack \matrix{-1 \\ 5 \\ 4}\right\rbrack$$

However, if we take the transpose of one of them, their sizes will match and they can then be added together. The size of the result matrix depends upon which matrix from the original expression we transpose:

$$U^T+V= \left\lbrack \matrix{3 \\ -3 \\ 9} \right\rbrack$$

Or

$$U+V^T= \left\lbrack \matrix{3 & -3 & 9} \right\rbrack$$

Matrix addition has the same algebraic properties as with the addition of two scalar values:

#### Commutative Property:

$$A+B=B+A$$

#### Associative Property:

$$(U+V)+W=U+(V+W)$$

#### Identity Property:

$$A+0=A$$

#### Inverse Property:

$$A+(-A)=0$$

The code required to implement matrix addition is relatively simple. Here is an example for the Matrix class definition that I presented earlier:

```cpp
void Matrix::operator+=(const Matrix& rhs)
{
  if (rhs.data.size() == data.size())
  {
    // We can simply add each corresponding element
    // in the matrix element data array.
    for (size_t index = 0; index < data.size(); ++index)
    {
      data[index] += rhs.data[index];
    }
  }
}

Matrix operator+(const Matrix& lhs,
                 const Matrix& rhs)
{
  Matrix result(lhs);
  result += rhs;

  return result;
}
```

### Scalar Multiplication

Scalar multiplication allows a single scalar value to be multiplied with every entry within a matrix. The result matrix is the same size as the matrix provided to the scalar multiplication expression:

If

$$A= \left\lbrack \matrix{3 & 6 & -9 \cr 12 & -15 & -18} \right\rbrack$$

Then

$$\frac{1}{3} A= \left\lbrack \matrix{1 & 2 & -3 \cr 4 & -5 & -6} \right\rbrack, 0A= \left\lbrack \matrix{0 & 0 & 0 \cr 0 & 0 & 0} \right\rbrack, -A= \left\lbrack \matrix{-3 & -6 & 9 \cr -12 & 15 & 18} \right\rbrack$$

Scalar multiplication with a matrix exhibits these properties, where c and d are scalar values:

#### Distributive Property:

$$c(A+B)=cA+cB$$

#### Identity Property:

$$1A=A$$

The implementation for scalar multiplication is even simpler than addition.
Note: this implementation only allows the scalar value to appear before the Matrix object in multiplication expressions, which is how the operation is represented in math notation:

```cpp
void Matrix::operator*=(const double scalar)
{
  for (size_t index = 0; index < data.size(); ++index)
  {
    data[index] *= scalar;
  }
}

Matrix operator*(const double scalar,
                 const Matrix& rhs)
{
  Matrix result(rhs);
  result *= scalar;

  return result;
}
```

### Matrix Multiplication

Everything seems very simple with matrices, at least once you get used to the new structure. Then you are introduced to matrix multiplication. The algorithm for multiplication is not difficult; however, it is much more labor-intensive than the other operations that I have introduced to you. There are also a few more restrictions on the parameters for multiplication to be valid. Finally, unlike the addition operator, the matrix multiplication operator does not have all of the same properties as the multiplication operator for scalar values; specifically, the order of the operands matters.

#### Input / Output

Let's first address what you need to be able to multiply matrices, and what type of matrix you can expect as output. Once we have addressed the structure, we will move on to the process.

Given the following two matrices:

$$A= \left\lbrack \matrix{a_{11} & \ldots & a_{1n} \cr \vdots & \ddots & \vdots \cr a_{m1} & \ldots & a_{mn} } \right\rbrack, B= \left\lbrack \matrix{b_{11} & \ldots & b_{1v} \cr \vdots & \ddots & \vdots \cr b_{u1} & \ldots & b_{uv} } \right\rbrack$$

A valid product $$AB=C$$ is only possible if the number of columns $$n$$ in $$A$$ is equal to the number of rows $$u$$ in $$B$$. The resulting matrix $$C$$ will have the dimensions $$m \times v$$.

$$AB=C= \left\lbrack \matrix{c_{11} & \ldots & c_{1v} \cr \vdots & \ddots & \vdots \cr c_{m1} & \ldots & c_{mv} } \right\rbrack$$

To summarize the rules for matrix multiplication:

• The number of columns, $$n$$, in matrix $$A$$ must be equal to the number of rows, $$u$$, in matrix $$B$$:
$$n = u$$

• The output matrix $$C$$ will have the number of rows, $$m$$, in $$A$$, and the number of columns, $$v$$, in $$B$$:
$$m \times v$$

• $$m$$ and $$v$$ do not have to be equal. The only requirement is that they are both greater than zero:
$$m \gt 0,$$
$$v \gt 0$$

#### How to Multiply

To calculate a single entry in the output matrix, we multiply each element from a row in the first matrix with the corresponding element from a column in the second matrix, and add all of these products together. We use the row in the first matrix, $$A$$, that corresponds with the row of the element we are calculating in $$C$$. Similarly, we use the column in the second matrix, $$B$$, that corresponds with the column of the element we are calculating in $$C$$.

More succinctly, we can say we are multiplying rows into columns.

For example:

$$A= \left\lbrack \matrix{a_{11} & a_{12} & a_{13} \cr a_{21} & a_{22} & a_{23}} \right\rbrack, B= \left\lbrack \matrix{b_{11} & b_{12} & b_{13} & b_{14} \cr b_{21} & b_{22} & b_{23} & b_{24} \cr b_{31} & b_{32} & b_{33} & b_{34} } \right\rbrack$$

The number of columns in $$A$$ is $$3$$ and the number of rows in $$B$$ is $$3$$, therefore, we can perform this operation. The size of the output matrix will be $$2 \times 4$$.

This is the formula to calculate the element $$c_{11}$$ in $$C$$, with the row used from $$A$$ and the column used from $$B$$ marked:

$$\left\lbrack \matrix{\color{#B11D0A}{a_{11}} & \color{#B11D0A}{a_{12}} & \color{#B11D0A}{a_{13}} \cr a_{21} & a_{22} & a_{23}} \right\rbrack \times \left\lbrack \matrix{\color{#B11D0A}{b_{11}} & b_{12} & b_{13} & b_{14} \cr \color{#B11D0A}{b_{21}} & b_{22} & b_{23} & b_{24} \cr \color{#B11D0A}{b_{31}} & b_{32} & b_{33} & b_{34} } \right\rbrack = \left\lbrack \matrix{\color{#B11D0A}{c_{11}} & c_{12} & c_{13} & c_{14}\cr c_{21} & c_{22} & c_{23} & c_{24} } \right\rbrack$$

$$c_{11}= (a_{11}\times b_{11}) + (a_{12}\times b_{21}) + (a_{13}\times b_{31})$$

To complete the multiplication, we need to calculate these other seven values $$c_{12}, c_{13}, c_{14}, c_{21}, c_{22}, c_{23}, c_{24}$$. Here is another example for the element $$c_{23}$$:

$$\left\lbrack \matrix{ a_{11} & a_{12} & a_{13} \cr \color{#B11D0A}{a_{21}} & \color{#B11D0A}{a_{22}} & \color{#B11D0A}{a_{23}} } \right\rbrack \times \left\lbrack \matrix{b_{11} & b_{12} & \color{#B11D0A}{b_{13}} & b_{14} \cr b_{21} & b_{22} & \color{#B11D0A}{b_{23}} & b_{24} \cr b_{31} & b_{32} & \color{#B11D0A}{b_{33}} & b_{34} } \right\rbrack = \left\lbrack \matrix{c_{11} & c_{12} & c_{13} & c_{14}\cr c_{21} & c_{22} & \color{#B11D0A}{c_{23}} & c_{24} } \right\rbrack$$

$$c_{23}= (a_{21}\times b_{13}) + (a_{22}\times b_{23}) + (a_{23}\times b_{33})$$

Notice how the size of the output matrix changes. Based on the sizes of the input matrices, you can end up with some interesting results:

$$\left\lbrack \matrix{a_{11} \cr a_{21} \cr a_{31} } \right\rbrack \times \left\lbrack \matrix{ b_{11} & b_{12} & b_{13} } \right\rbrack = \left\lbrack \matrix{c_{11} & c_{12} & c_{13} \cr c_{21} & c_{22} & c_{23} \cr c_{31} & c_{32} & c_{33} } \right\rbrack$$

$$\left\lbrack \matrix{ a_{11} & a_{12} & a_{13} } \right\rbrack \times \left\lbrack \matrix{b_{11} \cr b_{21} \cr b_{31} } \right\rbrack = \left\lbrack \matrix{c_{11} } \right\rbrack$$

Tip:

To help you keep track of which row to use from the first matrix and which column from the second matrix, create your result matrix of the proper size, then methodically calculate the value for each individual element.
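Putting the algorithm to work on actual numbers, here is a complete $$2 \times 2$$ product that you can verify by hand:

$$\left\lbrack \matrix{1 & 2 \cr 3 & 4} \right\rbrack \times \left\lbrack \matrix{5 & 6 \cr 7 & 8} \right\rbrack = \left\lbrack \matrix{(1 \times 5 + 2 \times 7) & (1 \times 6 + 2 \times 8) \cr (3 \times 5 + 4 \times 7) & (3 \times 6 + 4 \times 8)} \right\rbrack = \left\lbrack \matrix{19 & 22 \cr 43 & 50} \right\rbrack$$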

The algebraic properties for the matrix multiplication operator do not match those of the scalar multiplication operator. These are the most notable:

#### Not Commutative:

The order of the factor matrices definitely matters.

$$AB \ne BA$$ in general

I think it is very important to illustrate this fact. Here is a simple $$2 \times 2$$ multiplication performed two times with the order of the input matrices switched. I have highlighted the only two terms that the two resulting answers have in common:

$$\left\lbrack \matrix{a & b \cr c & d } \right\rbrack \times \left\lbrack \matrix{w & x\cr y & z } \right\rbrack = \left\lbrack \matrix{(\color{red}{aw}+by) & (ax+bz)\cr (cw+dy) & (cx+\color{red}{dz}) } \right\rbrack$$

$$\left\lbrack \matrix{w & x\cr y & z } \right\rbrack \times \left\lbrack \matrix{a & b \cr c & d } \right\rbrack = \left\lbrack \matrix{(\color{red}{aw}+cx) & (bw+dx)\cr (ay+cz) & (by+\color{red}{dz}) } \right\rbrack$$

#### Product of Zero:

$$AB=0$$ does not necessarily imply $$A=0$$ or $$B=0$$

#### Scalar Multiplication is Commutative:

$$(kA)B = k(AB) = A(kB)$$

#### Associative Property:

Multiplication is associative; however, take note that the relative order of all of the matrices remains the same.

$$(AB)C=A(BC)$$

#### Transpose of a Product:

The transpose of a product is equivalent to the product of the transposed factors, multiplied in reverse order:

$$(AB)^T=B^TA^T$$

#### Code

We are going to use a two-step approach to create a general-purpose matrix multiplication solution. The first step is to create a helper function that properly calculates a single element in the output matrix:

```cpp
double multiply_element(
  const Matrix& lhs,
  const Matrix& rhs,
  const size_t i,
  const size_t j
)
{
  double product = 0;

  // Walk across the specified row, i, of the left matrix and
  // down the specified column, j, of the right matrix in
  // lock-step, accumulating the products to return as the
  // calculated result.
  for (size_t k = 1; k <= lhs.columns(); ++k)
  {
    product += lhs.element(i, k)
             * rhs.element(k, j);
  }

  return product;
}
```

Now create the outer function that performs the multiplication to populate each field of the output matrix:

```cpp
// Because we may end up creating a matrix with
// an entirely new size, it does not make sense
// to have a *= operator for this general-purpose solution.
Matrix operator*(const Matrix& lhs,
                 const Matrix& rhs)
{
  Matrix result;

  if (lhs.columns() == rhs.rows())
  {
    // Resize the result matrix to the proper size.
    result.resize(lhs.rows(), rhs.columns());

    // Calculate the value for each element
    // in the result matrix.
    for (size_t i = 1; i <= result.rows(); ++i)
    {
      for (size_t j = 1; j <= result.columns(); ++j)
      {
        result.element(i, j) = multiply_element(lhs, rhs, i, j);
      }
    }
  }

  return result;
}
```

## Summary

3D computer graphics relies heavily on concepts from the branch of math called linear algebra. I have introduced two basic constructs from linear algebra that we will need to perform the fundamental calculations for rendering a three-dimensional display. At this point I have only scratched the surface of what is possible, and I have only demonstrated the how. In future entries I will provide context and demonstrate the what and the why, to a degree, on the path to helping you begin to work with three-dimensional graphics libraries, even if math is not one of your strongest skills.



The binary, octal, and hexadecimal number systems pervade all of computing. Every command is reduced to a sequence of 1s and 0s for the computer to interpret. These sequences seem like noise, garbage, especially given the sheer length of the information. Becoming familiar with binary and other number systems can make it much simpler to interpret the data.

Once you become familiar with the relationships between the number systems, you can manipulate the data in more convenient forms. Your ability to reason, solve and implement solid programs will grow. Patterns will begin to emerge when you view the raw data in hexadecimal. Some of the most efficient algorithms are based on the powers of two. So do yourself a favor and become more familiar with hexadecimal and binary.

## Number Base Conversion and Place Value

I am going to skip this since you can refer to my previous entry for a detailed review of number system conversions.

## Continuous Data

Binary and hexadecimal are more natural number systems for use with computers because they both have a base of 2 raised to some exponent: binary is $$2^1$$ and hexadecimal is $$2^4$$. We can easily convert from binary to decimal. However, decimal is not always the most useful form, especially when we consider that we don't always have a nice organized view of our data. Learning to effectively navigate between the formats, especially in your head, increases your ability to understand programs loaded into memory, as well as streams of raw data that may be found in a debugger or in analyzers like Wireshark.

The basic unit of data is the byte. While it is true that a byte can be broken down into bits, we tend to work with these bundled collections and process bytes. Modern computer architectures process multiple bytes at a time, specifically 8 for 64-bit computers. And then there are graphics cards, which use 128-, 192-, and even 256-bit-wide memory accesses. While these large data widths could represent extremely large numbers in decimal, the values tend to have encodings that only use a fraction of the space.

What is the largest binary value that can fit in an 8-bit field?

It will be a value that has eight ones: 1111 1111. Placing a space after every four bits helps with readability.

What is the largest hexadecimal value that can fit in an 8-bit field?

We can take advantage of the power of 2 relationship between binary and hexadecimal. Each hexadecimal digit requires four binary bits. It would be very beneficial to you to commit the following table to memory:

| Dec | Bin  | Hex |
|-----|------|-----|
| 0   | 0000 | 0x0 |
| 1   | 0001 | 0x1 |
| 2   | 0010 | 0x2 |
| 3   | 0011 | 0x3 |
| 4   | 0100 | 0x4 |
| 5   | 0101 | 0x5 |
| 6   | 0110 | 0x6 |
| 7   | 0111 | 0x7 |
| 8   | 1000 | 0x8 |
| 9   | 1001 | 0x9 |
| 10  | 1010 | 0xA |
| 11  | 1011 | 0xB |
| 12  | 1100 | 0xC |
| 13  | 1101 | 0xD |
| 14  | 1110 | 0xE |
| 15  | 1111 | 0xF |

Now we can simply take each grouping of four bits, 1111 1111, and convert them into hex-digits, FF.

What is the largest decimal value that can fit in an 8-bit field? This isn't as simple; we must convert the binary value into decimal. Using the number base conversion algorithm, we know that the next bit, the ninth, would have a place value of $$2^8$$, or 256. Since zero must also be represented, the largest value is $$2^8 - 1$$, or 255.
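If you want to check your own conversions, a few lines of C++ can render all three representations of a byte at once. This is just a sketch; the `show_byte` helper is a name I made up for the example:

```cpp
#include <bitset>
#include <sstream>
#include <string>

// Render an 8-bit value in decimal, hexadecimal, and binary,
// with the binary digits split into two groups of four
// for readability, e.g. "255 = 0xFF = 1111 1111".
std::string show_byte(unsigned value)
{
  std::ostringstream out;
  std::string bits = std::bitset<8>(value).to_string();

  out << std::dec << value
      << " = 0x" << std::hex << std::uppercase << value
      << " = " << bits.substr(0, 4) << ' ' << bits.substr(4);

  return out.str();
}
```

For example, `show_byte(255)` produces `255 = 0xFF = 1111 1111`.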

### Navigating the Sea of Data

Binary and hexadecimal (OK, and octal) are all number systems whose base is a power of two. Considering that computers work with binary, the representations that fit nicely into the logical data blocks, bytes, become much more meaningful. Through practice, and because of the relatively small scale of the values that can be represented with 8 bits, converting a byte to a decimal value feels natural to me. However, when the data is represented with 2, 4, and 8 bytes, the values grow quickly, decimal form loses its meaning, and my intuition becomes useless as to what value I am dealing with.

For example:

What is the largest binary value that can fit in a 16-bit field?

It will be a value that has sixteen ones: 1111 1111 1111 1111.

What is the largest hexadecimal value that can fit in a 16-bit field? Again, let's convert each block of four bits into a hex-digit: FFFF.

What is the largest decimal value that can fit in a 16-bit field?

Let's see, it is $$2^{16} - 1$$, so that makes it 65355, 65535, or 65555; it's somewhere around there.

Here's a real-world example that I believe many people are familiar with: the RGB color encoding for pixels. You could add a fourth channel to represent an alpha channel and encode RGBA. If we use one byte per channel, we can encode all four channels in a single 32-bit word, which can be processed very efficiently.

Imagine we are looking at pixel values in memory and the bytes appear in this format: RR GG BB. It takes two hexadecimal digits to represent a single byte. Therefore, the value of pure green can be represented as 00 FF 00. Viewing this same value as a 24-bit decimal number, 65,280, is much less helpful.
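The shift-and-mask arithmetic for pulling the channels back out of a pixel is worth seeing in code. This is a sketch; the `Rgb` struct and `decode_rgb` function are illustrative names I chose, not part of any particular graphics API:

```cpp
#include <cstdint>

// Holds the three one-byte channels of a 24-bit RGB pixel.
struct Rgb
{
  uint8_t red;
  uint8_t green;
  uint8_t blue;
};

Rgb decode_rgb(uint32_t pixel)
{
  Rgb result;

  // Shift each channel down to the low byte, then mask it off.
  result.red   = (pixel >> 16) & 0xFF;
  result.green = (pixel >> 8)  & 0xFF;
  result.blue  =  pixel        & 0xFF;

  return result;
}
```

Pure green, 00 FF 00, comes back as the channels (0, 255, 0); the decimal form 65,280 offers no such hint.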

If we were to change the value to 8,388,608, what has changed? We can tell the value has changed by roughly 8.3M. Since a 16-bit value can hold ~65K, we know that the third byte has been modified, and we can guess that it has been increased to roughly 128 (8.3M / 65K). But what is held in the lower two bytes now? Our ability to deduce information is not much better than an estimate. The value in hexadecimal is 80 00 00.

The difference between 8,388,608 and 8,388,607 is enormous with respect to data encoded at the byte level:

| 8,388,608 | 8,388,607 |
|-----------|-----------|
| 80 00 00  | 7F FF FF  |

Now consider that we are dealing with 24-bit values in a stream of pixels. For every 12 bytes, we will have encoded 4 pixels. Here is a representation of what we would see in the data, since most computer views of memory are grouped into 32-bit words:

| 4,260,948,991 | 2,568,312,378 | 3,954,253,066 |
|---------------|---------------|---------------|
| FD F8 EB FF   | 99 15 56 3A   | 8C B1 1D 0A   |

## Binary

I typically try to use binary only up to 8 bits. Anything larger than that, I simply skip the first 8 bits (1 byte) and focus on the next 8 bits. For example: 1111 1111 1111 1111. As I demonstrated with the RGB color encoding, values do not always represent a single number. In fact, it is a stream of data. So whether there is one byte, four bytes, or a gigabyte's worth of data, we usually process it either one byte or one word at a time. We actually break down decimal numbers into groups by using a comma (or some other regional punctuation) to separate thousands, e.g., 4,294,967,295. 294 of what? Millions.

Binary manipulation can be found in many contexts. One of the most common is storing a collection of flags in an unsigned buffer. These flags can be flipped on and off with the Boolean operations of your programming language. Using a mask with multiple bits allows an enumerated value with more than two options to be encoded within the binary buffer. I'm not going to describe the mechanics here; I simply want to demonstrate that data is encoded in many ways, and there are many reasons for you to become proficient at picking apart byte streams down to the bit.
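For those who do want a glimpse of those flag operations, they amount to only a few lines of C++. The flag names in this sketch are hypothetical, chosen purely for illustration:

```cpp
#include <cstdint>

// Hypothetical flag values for illustration; each occupies one bit.
const uint8_t kFlagVisible  = 0x01;  // 0000 0001
const uint8_t kFlagSelected = 0x02;  // 0000 0010
const uint8_t kFlagLocked   = 0x04;  // 0000 0100

// Turn a flag on by OR-ing its bit into the buffer.
uint8_t set_flag(uint8_t flags, uint8_t flag)
{
  return flags | flag;
}

// Turn a flag off by AND-ing with the inverted mask.
uint8_t clear_flag(uint8_t flags, uint8_t flag)
{
  return flags & static_cast<uint8_t>(~flag);
}

// Test whether a flag is currently on.
bool has_flag(uint8_t flags, uint8_t flag)
{
  return (flags & flag) != 0;
}
```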

It is just as beneficial to be able to convert between hexadecimal and decimal in your head from 1 to 255, especially if you ever deal with selecting colors for web-pages or you edit images in programs like Photoshop. It only takes two hex-digits to represent an 8-bit byte. If you memorize the values that are defined by the high-order hex-digit, reading hexadecimal byte values becomes almost as easy as reading decimal values. There are two tables listed below. The table on the left indicates the value mapping for the high-order and low-order hex-digits of a byte. The table on the right contains a set of landmark values that you will most certainly encounter and find useful:

| $$16^1$$ | Decimal | $$16^0$$ | Decimal |
|----------|---------|----------|---------|
| 0x00     | 0       | 0x0      | 0       |
| 0x10     | 16      | 0x1      | 1       |
| 0x20     | 32      | 0x2      | 2       |
| 0x30     | 48      | 0x3      | 3       |
| 0x40     | 64      | 0x4      | 4       |
| 0x50     | 80      | 0x5      | 5       |
| 0x60     | 96      | 0x6      | 6       |
| 0x70     | 112     | 0x7      | 7       |
| 0x80     | 128     | 0x8      | 8       |
| 0x90     | 144     | 0x9      | 9       |
| 0xA0     | 160     | 0xA      | 10      |
| 0xB0     | 176     | 0xB      | 11      |
| 0xC0     | 192     | 0xC      | 12      |
| 0xD0     | 208     | 0xD      | 13      |
| 0xE0     | 224     | 0xE      | 14      |
| 0xF0     | 240     | 0xF      | 15      |

| Decimal | Hexadecimal |
|---------|-------------|
| 32      | 0x20        |
| 64      | 0x40        |
| 100     | 0x64        |
| 127     | 0x7F        |
| 128     | 0x80        |
| 168     | 0xA8        |
| 192     | 0xC0        |
| 200     | 0xC8        |
| 224     | 0xE0        |
| 239     | 0xEF        |
| 250     | 0xFA        |

Some of the numbers listed in the table on the right are more obvious than others as to why I chose to include them. For example, 100: that's a nice round number that we commonly deal with daily. When you run into 0x64, you can automatically map it in your mind to 100 and have a reference point for its value. Alternatively, if you have a value such as 0x6C, you could start by calculating the difference: 0x6C - 0x64 = 8; add that to 100 to know the decimal value is 108.

Some of you will recognize the values 127, 168, 192, 224, and 239. For those that do not see the relevance of these values, they are common octets found in landmark network IP addresses. The table below shows the name and common dotted-decimal form, as well as the hexadecimal form of the address in both big-endian and little-endian formats:

| Name                    | IPv4 Address    | big-endian | little-endian |
|-------------------------|-----------------|------------|---------------|
| local-host              | 127.0.0.1       | 0x7F000001 | 0x0100007F    |
| private range           | 192.168.x.x     | 0xC0A80000 | 0x0000A8C0    |
| multicast base address  | 224.0.0.0       | 0xE0000000 | 0x000000E0    |
| multicast last address  | 239.255.255.255 | 0xEFFFFFFF | 0xFFFFFFEF    |

One additional fact related to the IPv4 multicast address range: the official definition declares any IPv4 address with the leading four bits set to 1110 to be a multicast address. 0xE is the hex-digit that maps to binary 1110. This explains why the full multicast range of addresses runs from 0xE0000000 to 0xEFFFFFFF, or, written in dotted-decimal notation, from 224.0.0.0 to 239.255.255.255.
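That leading-nibble rule translates directly into a one-line test in C++. This sketch assumes the address is already in big-endian (network) byte order, as in the table above:

```cpp
#include <cstdint>

// An IPv4 address is multicast when its leading four bits are 1110,
// i.e. its top nibble is 0xE. The address is assumed to be in
// big-endian (network) byte order, as shown in the table above.
bool is_multicast(uint32_t big_endian_addr)
{
  return (big_endian_addr >> 28) == 0xE;
}
```

For example, `is_multicast(0xE0000000)` is true, while `is_multicast(0x7F000001)` is false.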

## Octal

I'm sure octal has its uses, but I have never run into a situation where I have used it.

Actually, there is one place, which is to demonstrate this equality:

Oct 31 = Dec 25

## Landmark Values

Binary has landmark values, similar to the way decimal does: 1s up to 10, 10s up to 100, then 1,000, 1 million, 1 billion, and so on. These values are focused on powers of 10 and arranged in groups that provide a significant enough range to be meaningful when you round off or estimate. It is worth becoming proficient with the scale of each binary bit up to $$2^{10}$$, which is equal to 1024, or 1K. At that point, we can use the powers of two in multiple contexts: 1) simple binary counting; 2) measuring data byte lengths.

I have constructed a table below that shows landmark values by power of 2. On the left I have indicated the name of the unit if we are discussing bits and the size of a single value; for example, 8 bits is a byte. I haven't had the need to explore data values larger than 64 bits yet, but it's possible that some of you have. To the right I have indicated the units of measure used in computers when we discuss sizes. At 1024 bytes we have a kilobyte, and 1024 kilobytes is officially a megabyte (not if you're a hard-drive manufacturer...). I continued the table up through $$2^{80}$$, which is known as a yottabyte. Becoming familiar with the landmarks up to 32 bits is probably enough for most people. To the far right I converted the major landmark values into decimal to give you a sense of the size and scale of these numbers.

| Unit (bits)  | Binary Exponent | Unit (bytes)   | Place Value          | Largest Value              |
|--------------|-----------------|----------------|----------------------|----------------------------|
| Bit (b)      | $$2^0$$         | Byte (B)       | 1                    |                            |
|              | $$2^1$$         | Word           | 2                    |                            |
|              | $$2^2$$         | Double-word    | 4                    |                            |
| Nibble       | $$2^3$$         | Quad-word      | 8                    |                            |
|              | $$2^4$$         |                | 16                   |                            |
|              | $$2^5$$         |                | 32                   |                            |
|              | $$2^6$$         |                | 64                   |                            |
| Byte (B)     | $$2^7$$         |                | 128                  | 255                        |
|              | $$2^8$$         |                | 256                  |                            |
|              | $$2^9$$         |                | 512                  |                            |
|              | $$2^{10}$$      | Kilobyte (KB)  | 1024                 |                            |
|              | $$2^{20}$$      | Megabyte (MB)  | $$1024^2$$           | 1,048,576                  |
|              | $$2^{30}$$      | Gigabyte (GB)  | $$1024^3$$           | 1,073,741,824              |
| Double-word  | $$2^{31}$$      |                | $$1024^3 \cdot 2$$   | 4,294,967,295              |
|              | $$2^{32}$$      |                | $$1024^3 \cdot 2^2$$ |                            |
|              | $$2^{40}$$      | Terabyte (TB)  | $$1024^4$$           |                            |
|              | $$2^{50}$$      | Petabyte (PB)  | $$1024^5$$           |                            |
|              | $$2^{60}$$      | Exabyte (EB)   | $$1024^6$$           |                            |
| Quad-word    | $$2^{63}$$      |                | $$1024^6 \cdot 2^3$$ | 9,223,372,036,854,775,807  |
|              | $$2^{70}$$      | Zettabyte (ZB) | $$1024^7$$           |                            |
|              | $$2^{80}$$      | Yottabyte (YB) | $$1024^8$$           |                            |

## Summary

Looking at a large mass of symbols, such as a memory dump from a computer, can be overwhelming. The tools that we use to develop and debug software help us organize and make sense of this data. However, these tools cannot always display the data in formats that are helpful for particular situations. In those cases, understanding the relationships between the numbers and their different representations can be extremely helpful. This is especially true when the data is encoded at irregular offsets. I have presented a few common forms that you are likely to encounter when looking at the values stored in computer memory. Hopefully you can use these identities and navigation tips to improve your development and debugging abilities.

## Number Base Conversion


I have started writing an entry that discusses the value of becoming familiar with the binary (base-2) and hexadecimal (base-16) number systems because they are generally more useful to a programmer than decimal (base-10). My daughter is currently in high-school and she is taking a programming course. One of the things that she is currently learning how to do is count in binary. So I decided to expand my explanation of conversion between number systems as a reference for her and all of those who would like a refresher. The entry following this one will describe how binary and hexadecimal will make you a more effective programmer.

To be able to convert numbers between different numerical bases, it is important to review the concept of place value. Each place, or column, in a number represents a value equal to the number system's base raised to the power of its place index, starting at 0. The official name of the base value is the radix. For example, consider the first three place values for a number system that has a radix of b.

#### b² + b¹ + b⁰

If we are dealing with decimal (base-10), the radix = 10 and we would have place values of:

#### 10² + 10¹ + 10⁰

Now we have the 1's column, 10's column, and 100's column:

#### 100 + 10 + 1

The number stored at each column in the number is called the coefficient. To construct the final number, we multiply the coefficient by its place-value and add the results at each place together. Here is the decimal number 237 broken down by place value:

#### 2·10² + 3·10¹ + 7·10⁰ = 200 + 30 + 7 = 237

Hopefully decimal form is so natural to you that 237 seems like a single number, rather than the sum of place values that are multiplied by their coefficient.

### The Binary Number System

There are 10 types of people: those who understand binary and those who don't.

If you are wondering what the eight other types of people are, continue reading.

Binary is a base-2 number system. This means that each column in a binary number represents a value of 2 raised to the power of its place index. A number system requires as many symbols as its base value to represent each place value, and zero is always the first symbol in that set. For instance, decimal (base ten) requires ten symbols: {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}. Therefore, binary (base two) only requires two symbols: {0, 1}.

Adding the radix as a subscript after a number is a common notation to indicate the base-value of a number when there is a possibility for confusion. Unless the context specifically indicates a different number system, decimal form is assumed. For example, the value 2 in binary:

#### 10₂ = 2₁₀

If we had used that notation in the joke at the beginning... well, it just wouldn't have been as funny.

Another place that you will see the subscript notation is when you study logarithms:

#### log_a x = log_b x / log_b a

I'll leave the details for logarithms for another time.

#### Counting in Binary

When learning anything new, it can be helpful to map something that you already know to the new topic. For a new number system, counting with both number systems can be a helpful exercise. Counting in all number systems uses the same process:

1. Start with zero in the least significant column.
2. Count up, using each symbol in increasing order, in the least significant (1's) column.
3. When the last symbol is reached, increment the value in the next column and reset the current column to zero.
4. If the next column has also used all of its symbols, increment the column after that and reset the current column.
5. Once no further columns reach their limit, return to step 2 to continue counting.

Starting with decimal, if we count up from zero to 9, we get:

#### 0 - 1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 ...roll over

We are now at step 3: we have reached the limit, so we increment the next column from an implied 0 to 1, and reset the current column, for a result of:

#### 10

Continuing to count, we increment the 1's column, rolling over the successive columns as necessary, until we eventually reach:

#### 100

Here is 0-15 in binary. When working with computers, and binary in general, you will typically see zeroes explicitly written for the more significant columns. We require 4 binary digits to represent 15.

| Binary | Sum of Columns | Decimal |
|:------:|:--------------:|:-------:|
| 0000 | 0 + 0 + 0 + 0 | 0 |
| 0001 | 0 + 0 + 0 + 1 | 1 |
| 0010 | 0 + 0 + 2 + 0 | 2 |
| 0011 | 0 + 0 + 2 + 1 | 3 |
| 0100 | 0 + 4 + 0 + 0 | 4 |
| 0101 | 0 + 4 + 0 + 1 | 5 |
| 0110 | 0 + 4 + 2 + 0 | 6 |
| 0111 | 0 + 4 + 2 + 1 | 7 |
| 1000 | 8 + 0 + 0 + 0 | 8 |
| 1001 | 8 + 0 + 0 + 1 | 9 |
| 1010 | 8 + 0 + 2 + 0 | 10 |
| 1011 | 8 + 0 + 2 + 1 | 11 |
| 1100 | 8 + 4 + 0 + 0 | 12 |
| 1101 | 8 + 4 + 0 + 1 | 13 |
| 1110 | 8 + 4 + 2 + 0 | 14 |
| 1111 | 8 + 4 + 2 + 1 | 15 |

Hexadecimal is a base-16 number system. Therefore, we will need sixteen symbols to represent the place values. We can start with the ten numbers used in decimal, and we use letters of the alphabet to represent the remaining six symbols. Although letter-case can matter in programming, the letters used in hexadecimal are case-insensitive. Here is a mapping of the hexadecimal values to decimal:

| Decimal | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Hexadecimal | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | A | B | C | D | E | F |

## Number Base Conversion

Let's discuss how to convert between number systems with different base values. Specifically, we will describe how to convert from decimal to base-x, and from base-x to decimal.

### Decimal to Base-x

Here is the algorithm to convert a number in decimal to a different base:

1. Divide the decimal number by the radix of the target base
2. The remainder from step 1 becomes the value for the current column.
3. Use the quotient (answer) from step 1 as the decimal value to calculate the next column.
4. Return to step 1 and repeat until the quotient is zero.

Let's return to the number 237 and convert it to a binary number:

decimal: 237, radix: 2

| Decimal | ÷ Radix | Quotient | Remainder | Place |
|--------:|:-------:|---------:|:---------:|:-----:|
| 237 | / 2 | 118 | 1 | (2⁰) |
| 118 | / 2 | 59  | 0 | (2¹) |
| 59  | / 2 | 29  | 1 | (2²) |
| 29  | / 2 | 14  | 1 | (2³) |
| 14  | / 2 | 7   | 0 | (2⁴) |
| 7   | / 2 | 3   | 1 | (2⁵) |
| 3   | / 2 | 1   | 1 | (2⁶) |
| 1   | / 2 | 0   | 1 | (2⁷) |

binary: 11101101

Here is 237 converted to a hexadecimal number:

decimal: 237, radix: 16

| Decimal | ÷ Radix | Quotient | Remainder | Place |
|--------:|:-------:|---------:|:---------:|:-----:|
| 237 | / 16 | 14 | D₁₆ (13) | (16⁰) |
| 14  | / 16 | 0  | E₁₆ (14) | (16¹) |

hexadecimal: ED₁₆

A common notation used to represent hexadecimal when dealing with computers, and in programming languages themselves, is to prefix the number with an 'x' (C-family languages use '0x'), like so: xED.

Here is one more decimal-to-hexadecimal example:

decimal: 3,134,243,038, radix: 16

| Decimal | ÷ Radix | Quotient | Remainder | Place |
|--------:|:-------:|---------:|:---------:|:-----:|
| 3,134,243,038 | / 16 | 195,890,189 | E (14) | (16⁰) |
| 195,890,189   | / 16 | 12,243,136  | D (13) | (16¹) |
| 12,243,136    | / 16 | 765,196     | 0 ( 0) | (16²) |
| 765,196       | / 16 | 47,824      | C (12) | (16³) |
| 47,824        | / 16 | 2,989       | 0 ( 0) | (16⁴) |
| 2,989         | / 16 | 186         | D (13) | (16⁵) |
| 186           | / 16 | 11          | A (10) | (16⁶) |
| 11            | / 16 | 0           | B (11) | (16⁷) |

hexadecimal: xBAD0C0DE

### Base-x to Decimal

Actually, I have already demonstrated how to convert a number from a base other than ten into decimal. Once again, here is the complete formula, where each cₙ represents the coefficient at its place-column.

#### cₙ·bⁿ + ... + c₂·b² + c₁·b¹ + c₀·b⁰

As an example, let's convert the binary answer, 11101101₂, back into decimal:

#### 1·2⁷ + 1·2⁶ + 1·2⁵ + 0·2⁴ + 1·2³ + 1·2² + 0·2¹ + 1·2⁰ = 128 + 64 + 32 + 8 + 4 + 1 = 237

### Base-x to Base-y

Is it possible to convert a number from base-x directly to base-y without converting to decimal (base-10) first?

Yes; however, you will need to perform all of your arithmetic operations in either base-x or base-y. The algorithms that I have presented use base-10 math, since that is the number system most people are familiar with.

## Demonstration

Here is a short demonstration program to convert a decimal number into a value of any numeral base between 2-36.

Why limit the radix to that range?

Try to imagine how a base-1 number system would work with only the symbol {0}. At the other end, we can combine the numerical set {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} with the alphabetic set {A, B, C ... X, Y, Z} to create a set of 36 symbols.

C++

```cpp
// The set of possible symbols for representing other bases.
const char symbols[] = {'0', '1', '2', '3', '4', '5',
                        '6', '7', '8', '9', 'A', 'B',
                        'C', 'D', 'E', 'F', 'G', 'H',
                        'I', 'J', 'K', 'L', 'M', 'N',
                        'O', 'P', 'Q', 'R', 'S', 'T',
                        'U', 'V', 'W', 'X', 'Y', 'Z'};
```

Here is the base-conversion algorithm from above written in C++.

C++

```cpp
#include <iostream>

using std::cout;

// Converts decimal to the specified radix and prints the result.
// Uses the symbols[] array defined above.
void ConvertToBase( const unsigned long decimal,
                    const unsigned long radix)
{
  // 64 places is enough for a 64-bit value in base 2.
  unsigned long remainder[64] = {0};
  unsigned long quotient = decimal;
  unsigned char place    = 0;

  while (0 != quotient)
  {
    unsigned long value = quotient;
    remainder[place] = value % radix;
    quotient = (value - remainder[place]) / radix;

    place++;
  }

  cout << decimal << " in base " << radix << " is ";

  if (0 == place)
    cout << symbols[0];   // the input itself was zero

  // The remainders were recorded least significant first,
  // so print them in reverse order.
  for (unsigned char index = 1; index <= place; index++)
  {
    cout << symbols[remainder[place - index]];
  }

  cout << "\n";
}
```

The values are from the examples above. You can modify the values that are used with the function calls to ConvertToBase in the program below:

C++

```cpp
int main(int argc, char* argv[])
{
  ConvertToBase(237,        2);
  ConvertToBase(237,        10);
  ConvertToBase(237,        16);
  ConvertToBase(3134243038, 16);
  ConvertToBase(3134243038, 36);

  return 0;
}
```

#### Output:

237 in base 2 is 11101101
237 in base 10 is 237
237 in base 16 is ED
3134243038 in base 16 is BAD0C0DE
3134243038 in base 36 is 1FU1PEM


## Summary

Ever since you learned to count in decimal you have been using exponents with base-10; it's just that no one ever made a big deal of this fact. To use a different numeral system, such as binary (base-2) or hexadecimal (base-16), you simply need a set of symbols to represent the values; the counting rules are the same as in base-10, except that you have a different number of symbols. Converting between any two number systems is possible; however, it is simplest to convert through decimal if you want to continue using base-10 arithmetic.

©2017 by Paul Watt.