Alchemy is a collection of independent library components that specifically relate to efficient low-level constructs used in embedded and network programming.
The latest version of Embedded Alchemy[^] can be found on GitHub. The most recent entries, as well as Alchemy topics to be posted soon:
 Steganography[^]
 Coming Soon: Alchemy: Data View
I just completed my master's degree in Cybersecurity at Johns Hopkins University, and I plan to resume Alchemy's development. I intend to use my newly acquired knowledge to add constructs that will help improve the security of devices built for the Internet of (Insecure) Things.
When you know nothing, it's easy to think you know everything
I haven't yet decided whether I think this is a universal flaw among people, or a trait more common among the practitioners of our profession; I am referring to the absolute mentality that so many people seem to use when making decisions. It is a closed-minded approach to solving problems. This is the basic concept:
"There is one absolutely best practice, and you are an idiot if you don't subscribe to the same approach that I take."
It is important to leave your comfort zone to expand your abilities. You can increase your potential by learning to spot this cognitive bias, recognizing when you are limiting your potential by falling prey to it, and learning to evaluate objectively as you make decisions in your day-to-day activities.
Ignorance
I do think Ignorance is Bliss in many ways. When you are unaware of politics, current world conflicts, or the unfair business practices of large corporations, there is less for you to worry about. In most cases, you couldn't do anything to resolve these issues by yourself anyway. Regardless, this post is about something a bit more focused: willful ignorance.
Willful Ignorance
Willful ignorance is what I believe most people mean when they call someone else ignorant. Ignorant, by itself, simply means "lacking knowledge, education, or awareness." It is not an inherently negative state of being. We are all ignorant of programming concepts when we first start our careers, along with the "best" practices, the "best" tools, and the "best" coding styles. As we learn skills and become informed, we shed that state of ignorance and become capable of making more informed decisions in our daily activities. However, reaching this point requires a conscious, active effort to increase one's knowledge. Willful ignorance is when a person actively stops acquiring new knowledge.
Expert Beginner
I will admit that I have trapped myself in that state of being before; multiple times, actually. The first occurred when I first started programming and reached a level of skill commonly referred to as "Expert Beginner". This is the level at which developers are quite dangerous. They are capable enough to accomplish things, but still quite ignorant of the world of computing. I remember feeling all-powerful. I mean, just a year earlier I knew zero programming concepts, and now I could program in multiple languages and was even commanding a computer to drive CNC milling machines. Also, consider that I had no difficulty developing code for the same types of problems disguised in different forms. I stopped running into new challenges, and mistakenly believed there wouldn't be any others.
It took a mix of observing coworkers exhibiting similar behavior and exposure to more complex concepts in my college courses to realize how little I actually knew. It's an easy trap to fall into, and it doesn't have to be permanent. It wasn't for me, because I was actively pursuing knowledge and continued to seek growth. I will describe the other time I became ignorant of better ways later in this post.
Limited Perspective
A limited perspective provides limited choices.
It's that simple.
Have you ever run across a piece of code that just didn't seem to fit? Or have you been trying to put together a solution, and everything you try feels like it should be much simpler than what you have? That may be your intuition trying to guide you to a simpler solution. Since you don't currently see any better solution, take a step back. Go for a walk, even.
While on your walk, don't think about the problem. In fact, don't think about any problems. Let your mind wander and relax. You may find that the idea your intuition had in mind bubbles to the surface before you make it back to your desk. If not, no worries. With a clear mind, return to your problem and seek out a new perspective.
Chalk Art by Kurt Wenner
I believe it is important to experiment, especially as a programmer. Hopefully you are able to find the time to explore here and there, to discover different approaches to solving problems.
Prototypes
Create a prototype solution the next time you design a feature. This is how you can explore the possibilities without giving up your nights and weekends. It will also give you a better understanding of your solution when you are ready to implement the real thing. You may think that you are too busy. Try it! You will surprise yourself.
This code truly should be a proof of concept: quick and dirty, to test the feasibility of an approach. To help with this, consider using a different context for naming your objects, to make it less tempting and convenient to paste your prototype into your production implementation. I will often create my prototypes in a unit-test harness. This allows me to create multiple, small test scenarios to exercise my prototype while I try to understand the problem space.
Prototypes also give you the opportunity to experiment with features of a language or a library that you have recently learned about. You should spend a minimal amount of time experimenting to determine whether the feature has merit for your current problem. If it turns out to be a good fit, fantastic! Even if it is not a good fit, you may gain insight into the types of problems you could solve with the feature in the future. Most importantly, don't view prototype development as wasted time and effort. Even if your prototype fails to provide value, you have still learned something and gained experience; that is invaluable.
Unit-Test Harnesses
It's no coincidence that I embraced prototyping when I started to embrace the concept of unit testing with a test framework. Unit test frameworks provide a harness to execute and observe your prototype code. If you have set up your environment so that creating new test harnesses is effortless, it becomes effortless to explore ideas in the form of prototypes. Another advantage of this approach is that the code remains segregated from your main project. Therefore, the temptation to incorporate this experimental code into your main project is minimized.
Read Someone Else's Code
Starting a new programming solution from scratch is an exciting prospect. You get to work from a clean slate and create your solution. Working in a codebase that was written by someone else? Well, that just sucks. However, there's much to be learned from an existing codebase, especially if the program works.
If the program works, that means the previous developer did something right. It's easy to jump into unknown code and criticize it.
"What the hell where they thinking?"Consider this, what are you possibly missing? Although the software may not be perfect, it probably has some redeeming qualities. For legacy code, these qualities may be hidden beneath the years of maintenance additions that have compromised it's original architectural integrity. So how does it work? Learning to read and understand new code is extremely valuable.
Pro Tip: It's easier to understand new code if you don't judge it while you are reading it.
If you find it difficult not to judge the code in which you are stuck working, then spend a little time dissecting and understanding an open-source library that you admire; for example, something from Boost.org. Take a little time to play around with the library. Possibly even create your own simplified version of it, emulating the techniques used to implement the library.
The key theme in all of these techniques is that they are different ways to practice and explore something new.
Absolutely
There are no silver bullets or killer strategies that will always work. Best practices, sure. These are strategies that usually lead to successful implementations. But remember, they may not provide the optimal solution, or even a useful solution. Best practices are the greatest common denominators among practices meant to guide developers ranging from expert-level all the way down to beginner.
Use a healthy dose of skepticism for every guideline you apply to your code. Generally, the guideline is valid and should be used. That is, until it's not.
Expert Beginner, Part 2
The other time I realized that I had become an expert beginner was just a few years ago. After nearly two decades of programming, I have had the opportunity to utilize a wide variety of algorithms. What I found was that I basically resorted to the same small set of algorithms and data structures, because they were good enough. The problem is, they were only good enough for most situations. Because I had acquired a small list of algorithms that I depended on, I ignored all of the other possibilities that existed.
I returned to college to work on a master's degree. The program I chose had a required class, "Foundation of Algorithms". I thought it would be boring because I had taken the equivalent course as an undergraduate, and I considered getting a waiver for the class. I am glad that I didn't, because it opened my mind and reacquainted me with many of the concepts that I had tossed aside. I still go to my favorite list of algorithms, but not before considering the criteria required to solve the problem at hand.
As an example, consider the binary search tree (BST). The red-black tree is a great implementation of this data structure because it is self-balancing. Therefore, you have a solid general-purpose binary tree that gives you O(log n) performance for search, insert, and delete. Many implementations of the C++ Standard Library use a red-black tree to implement std::map.
There is another type of self-balancing BST called the AVL tree (AVL are the initials of its inventors). This tree also provides O(log n) performance for the three main operations of a BST. One is just as good as the other, right? The difference between these two trees is in how each tree is balanced. The AVL tree uses a stricter algorithm that keeps the height of the tree to a minimum. Therefore, it performs better in situations that rely heavily on search operations.
The red-black tree is still the better data structure for general-purpose use, because its balancing algorithm is more efficient than the AVL's algorithm. This highlights the trade-off that we need to make when selecting between the two data structures. If our data is typically loaded once and then primarily searched, the AVL tree would be the better choice. However, if the data in the tree is fairly dynamic, with insertions and deletions at least as likely as a search, then the red-black tree will be the better choice. Of course, if the dataset remains small, then the type of tree may not matter at all. Each situation is unique, and should be analyzed.
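In practice, std::map gives you this balanced-tree behavior for free; the sketch below simply exercises the three O(log n) operations discussed above. (That the underlying tree is red-black is an assumption that holds for common implementations, but is not required by the standard.)
C++
#include <iostream>
#include <map>
#include <string>

int main()
{
  std::map<std::string, int> inventory;

  // Each operation is O(log n) on the underlying balanced tree.
  inventory["widgets"] = 42;                // insert
  auto it = inventory.find("widgets");      // search
  if (it != inventory.end())
    std::cout << it->second << "\n";
  inventory.erase("widgets");               // delete

  return 0;
}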
Just Follow the Path, It Will Take You ... Somewhere
Selecting the best algorithm becomes a set of choices between the trade-offs. In my case, I quit analyzing the situation and simply reached for my usual class of algorithm.
This led me to the realization that lists of best practices or development methodologies are designed to guide developers to a good solution in most circumstances, without the requirement of giving much thought. "Just follow the steps down the path, you'll know when you get there..."
For a while, I thought this was a good thing. Someone has distilled this list of best practices so I don't have to think about the minutiae; I can focus on the problem at hand. The problem is, we tend to lump all types of development together. What works well for website development probably will not be compatible with the development of embedded hardware in highly regulated industries. And as these two types of development diverge to their extremes, the differences only become greater, e.g., high-demand websites on distributed server farms vs. safety-critical embedded controllers for commercial aircraft.
Another way this manifests itself is with methodologies, such as the Agile method. Actually, not the Agile method itself, but how it is practiced: very rigidly. The point of being agile is being open to adapting to changes gracefully. It may be that a certain aspect of the SCRUM process does not fit well with the team or the way products are developed at your company. You should evaluate what doesn't work and modify it for your needs, or possibly even eliminate it. Strictly adhering to a process because that's what the experts say is, well... willfully ignorant.
Summary
Originally, I started this post as a rant against the absolute mentality that many people use at work, and maybe even in life. Then, as I wrote, I thought it would be more constructive to help others identify this mind trap and avoid it. It happens to all of us. The thing is, you don't have to stay in that spot. Following guidelines or a methodology is not an excuse to mentally check out. If something doesn't make sense, try to understand it and change it if necessary. Otherwise, you run the risk of becoming an expert beginner with no path for growth.
Recently I have been helping a colleague convert a series of painfully repetitive code segments into function templates and template classes. I respect this colleague very much; he is a very skilled developer, including in his use of the C++ Standard Library. However, he has never developed any templates of his own. As I was helping him learn how to develop new templates, it occurred to me that there are plenty of fundamental C++ concepts that are given very little attention. This creates an enormous gap in the reference material needed to progress from the competent skill level to the proficient and expert levels.
Therefore, I am going to periodically write about fundamental concepts and how to actually apply them to your daily development. This post discusses templates at a very basic level. Entire books have been written about them. I will revisit with more sophisticated applications of templates in the future.
What is a template?
Templates are the basis for generic programming in C++. They are commonly referred to as parameterized types, because they accept parameters. Templates are extremely powerful constructs, but they require practice to realize their full value.
Templates are also a bit tricky to work with when you first encounter them. They behave as a purely functional sub-language within C++. If you are familiar with functional programming techniques, C++ templates may feel a bit more natural to you. Regardless, they open many doors to improve the robustness and quality of your code, and they are well worth the time and effort required to use them effectively.
Similar to the usage of a template in the physical world, a C++ template is not itself compiled code; it is a pattern. An example may be the best way to explain how template instantiation works:
C++
template <typename T>
T sum(const T &lhs, const T &rhs)
{
  T result(lhs);
  result += rhs;
  return result;
}
This example is a simple function that calculates the sum of two input values of the same type, T. This type simply acts as a placeholder for when the actual function is instantiated. With this definition alone, no code is generated. It is the pattern the compiler will use to create the target code when the template is instantiated.
Instantiation
The instantiation of a template is the creation of a templateinstance for a specific type. Instantiation only requires the use of the template. In many cases the compiler can deduce the intended type for the template instantiation.
C++
int a = 10;
int b = 30;

// Instantiates the function template, sum<>().
int c = sum(a, b);
For the situations where the type may be ambiguous, or you want to be certain of the type instantiated, you can explicitly declare the type during instantiation.
C++
short a = 10;
short b = 30;

// Explicitly specifies the int type for the instantiation
// of the function template, sum<>().
int c = sum<int>(a, b);
In the previous example, the two values, a and b, are of type short. They will be implicitly converted to type int to match the type of the template instantiation.
Explicit Instantiation
It is possible to force the compiler to generate an instance of a template, even when the instantiation will not be used. This is called explicit instantiation. This is useful for forward declarations and declarations for export from shared libraries. Explicit instantiations are nothing more than declarations with the specified type.
C++
// Explicitly instantiates sum for type short.
template short sum<short>(short, short);

// Explicitly instantiates and deduces type long.
template long sum<>(long, long);

// Explicitly instantiates and deduces type double.
template double sum(double, double);
There are many uses for explicit instantiation, and the topic deserves an entire post of its own. I wanted you to be aware of the concept as well as its name. I will revisit this topic another time.
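As a quick taste of one of those uses: C++11 added the inverse declaration, extern template, which suppresses implicit instantiation in a translation unit. A minimal sketch, assuming you pair it with the definitions above (the file organization shown is an assumption, not a rule):
C++
// In exactly one source file: force the code for sum<int>
// to be generated here.
template int sum<int>(const int&, const int&);

// In the header (C++11): tell every other translation unit not to
// instantiate sum<int> itself; they link against the one above.
extern template int sum<int>(const int&, const int&);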
A Compiled Template
We've seen an example of a template, as well as the concept of template instantiation. What actually happens though?
It depends.
Through experience, I believe the simplest way to reason about template substitution is to treat the template type as a placeholder, then replace each instance of the placeholder with the instantiated type. Our sum function template instantiated with type int would look like this:
C++
// template <typename int>
int sum(const int &lhs, const int &rhs)
{
  int result(lhs);
  result += rhs;
  return result;
}
In fact, this is how I prefer to get a template working in the first place. I create a working version of the function or class with a specific type, then convert it to a template.
It is important to recognize the different operations that are required of the type used in the template instantiation. Specifically, in the sum example, support for operator += is required for the function to compile properly. Most of the basic intrinsic types in C++ support this operator; certainly the int type supports operator +=. Consider some of the classes included in the C++ Standard Library, such as std::basic_string. This class supports operator +=. Therefore, we could instantiate sum as follows:
C++
std::string first  = "Hello ";
std::string second = "World";

std::string result = sum(first, second);
std::cout << result;
The value assigned to result is "Hello World". Unfortunately, this code would not compile if the container classes, like list, map, or vector, were used with the template. They do not provide support for this operator.
Header Only
It is not a strict requirement that template definitions be located in a header file. However, it is necessary for the compiler to have access to the entire template definition when a template is instantiated. So, if a template definition is only required within a single module, it can be defined entirely in the source file. However, if a template is to be used across modules in your program, it will be necessary to define the entire implementation within a header file.
Also, take some advice that will make your life simpler: implement your template class's member functions inline, within the class definition itself. The member function definitions become much simpler to work with, because you don't have to repeatedly write the correct template declaration of the host class.
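To illustrate the difference, compare an out-of-class member definition with the inline form (the Container classes below are hypothetical examples, invented for this sketch):
C++
// Out-of-class definition: the template header and the
// Container<T>:: qualification must be repeated for every member.
template <typename T>
class Container
{
public:
  void add(const T &item);
};

template <typename T>
void Container<T>::add(const T &item)
{
  // ...
}

// Inline definition: each member is declared and defined in one place.
template <typename T>
class SimplerContainer
{
public:
  void add(const T &item)
  {
    // ...
  }
};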
Parameter Types
Templates are restricted in the kinds of template parameters they can accept. Templates can handle:
 Types
 Non-Types
 Template Template-Parameters
Types refer to any type that can be defined, including const/volatile qualifiers.
Non-types are things like specific values. Rather than declaring a type-parameter, a value parameter of a type such as int can be declared instead. Non-type values are restricted to constant integral values. This means that floating-point values and string literals cannot be used as template arguments. There are tricks to get around the limitations for string literals, which I will save for another time.
Here is a common example that calculates the value of factorial:
C++
template <unsigned int N>
struct factorial
{
  enum
  {
    value = N * factorial<N-1>::value
  };
};

template <>
struct factorial<0>
{
  enum { value = 1 };
};
Notice how a struct is used to contain the value, rather than a function. This is because functions are not executed at compile-time. However, the calculation specified in the declaration, value = N * factorial<N-1>::value, will be computed by the compiler. This is because each instantiation is recursively required until the base-case of factorial<0> is reached. There are two versions of the factorial template. This is called specialization. It allows a template to handle special cases in a different way than the general implementation. I further discuss specialization in the next section.
Template template-parameters are described in a later section.
Template Specialization
Template specialization allows you to define custom implementations of functions or objects for different types. This allows you to create a generic implementation as well as a custom version for types that deviate from the generic behavior. You can think of this as function overloading for template implementations.
There are two types of specialization, full and partial:
 Full Template Specialization: This is an implementation that specifies a concrete argument for every parameter of the original template.
 Partial Template Specialization: This type of specialization only customizes a portion of the template's parameters. Only object-templates and member-templates may be partially specialized. Regular function templates can only be fully specialized.
The factorial example from the previous section demonstrated full specialization. I use partial template specialization in the Generic Example at the end of this post, and a quick illustration of the distinction follows below.
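The pair_printer type below is invented for this sketch; it only exists to show the two forms side by side:
C++
// Generic implementation.
template <typename T, typename U>
struct pair_printer
{
  static const bool is_int_second = false;
};

// Partial specialization: only the second parameter is fixed;
// T remains open. Legal for structs, but not for functions.
template <typename T>
struct pair_printer<T, int>
{
  static const bool is_int_second = true;
};

// Full specialization: every parameter is fixed.
template <>
struct pair_printer<char, int>
{
  static const bool is_int_second = true;
};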
Keyword: class vs typename
One oddity that you may encounter is the usage of two different keywords in template syntax that are roughly equivalent: class and typename:
C++
template <typename T>
T sum(const T &lhs, const T &rhs)
{
  // ...
}

// The class keyword is interchangeable with typename, in most cases.
template <class T>
T sum(const T &lhs, const T &rhs)
{
  // ...
}
There are two exceptions where the syntax requires a specific keyword:
 Dependent Types: Require the typename keyword to be prepended to the declaration.
 Template Template-Parameters: Require the class keyword to be used in the declaration. However, typename can be used as well, as of C++17.
Let's introduce these two concepts so that you are aware of their existence. I will revisit these topics in detail at another time.
Dependent Types
A dependent type is a type whose definition depends upon another type. This can occur in both class and function definitions. There are a few cases in C++ where it becomes necessary to disambiguate the syntax of an expression; dependent types are one of those cases. Suppose we have a function template that is passed a std::basic_string<T>, and a global variable, value:
C++
int value = 0;

template <typename T>
void example(std::basic_string<T> &str)
{
  std::basic_string<T>::const_pointer * value;
}
The previous function intended to declare a new variable, value. However, const_pointer has not yet been established as a type at this point. Therefore, the compiler interprets the expression as a multiplication of 'const_pointer' by the global int 'value'. The declaration requires disambiguation to help the compiler identify the new item as a type. The typename keyword accomplishes this:
C++
int value = 0;

template <typename T>
void example(std::basic_string<T> &str)
{
  // Adding 'typename' to the declaration disambiguates
  // the expression for the compiler.
  typename std::basic_string<T>::const_pointer * value;
}
Another alternative is to use a typedef to declare a new type. This declaration also requires the use of typename:
C++
int value = 0;

template <typename T>
void example(std::basic_string<T> &str)
{
  typedef typename std::basic_string<T>::const_pointer* ptr_t;

  // Now 'ptr_t' has been established as a type to the compiler.
  // The variable can be declared as:
  ptr_t value;
}
Template Template-Parameters
Template template-parameters are the C++ version of the movie Inception. This feature allows you to build a construct that takes both a parameterized type (a template) and a regular type to complete its definition; basically, a template embedded within a template.
Here is an example of template template syntax. Notice the use of the keyword, class, in the template parameter:
C++
template< size_t index_t,
          template <size_t> class T
        >
struct accumulate_value
{
  static const size_t value = T<index_t>::value
                            + accumulate_value<index_t - 1, T>::value;
};

template< template <size_t> class T >
struct accumulate_value<0, T>
{
  static const size_t value = T<0>::value;
};
The previous code is a solution that I created to compute the sum of the values held in a generic template that contained a set of template-indexed sub-objects. This is actually a template metaprogramming solution; I will address that topic at a later time. Template template-parameters are not encountered very often. However, when they are needed, this syntax becomes very useful.
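To make the mechanics concrete, here is a hypothetical indexed template (squares is invented for this sketch) fed into accumulate_value:
C++
// A template indexed by a size_t, the shape accumulate_value expects.
template <size_t N>
struct squares
{
  static const size_t value = N * N;
};

// Sums squares<3>, squares<2>, squares<1>, and squares<0>:
// 9 + 4 + 1 + 0 == 14, computed entirely at compile-time.
static const size_t total = accumulate_value<3, squares>::value;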
A Generic Example
When working with data that must be portable across different computing platforms, the concept of byte-order, or endianness, is important to understand. Some platforms, like PowerPC and MIPS, use big-endian byte-orders, while architectures like x86 operate on little-endian byte-orders. Big-endian places the most-significant byte of a word first, at the lowest memory address.
Example:
We'll use the number 287,454,020, which is equivalent to 0x11223344 in hexadecimal. Broken up into bytes, we have: 0x11, 0x22, 0x33, 0x44. The highest-order byte in 0x11223344 is 0x11. This is how the value is stored in memory on each endian-type (the first cell holds the high-order byte on a big-endian platform; the last cell holds it on a little-endian platform):

                     Big-Endian     Little-Endian
287,454,020 stored:  11 22 33 44    44 33 22 11

Conversely, the same byte sequence 11 22 33 44 read back as a word yields 287,454,020 on a big-endian platform, but 1,144,201,745 (0x44332211) on a little-endian platform.
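A small sketch to observe this on your own machine; nothing here is specific to Alchemy:
C++
#include <cstddef>
#include <cstdint>
#include <cstdio>

int main()
{
  uint32_t value = 0x11223344;
  unsigned char *bytes = reinterpret_cast<unsigned char*>(&value);

  // A little-endian machine prints 44 33 22 11;
  // a big-endian machine prints 11 22 33 44.
  for (size_t i = 0; i < sizeof(value); ++i)
    printf("%02X ", (unsigned)bytes[i]);
  printf("\n");

  return 0;
}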
Network Data Transfer
By convention, network communication protocols usually specify data to be transferred in network byte-order, which is big-endian. The Berkeley socket implementation (as well as most other socket library implementations) provides a set of functions to help with this conversion process: htons and htonl. These functions stand for host-to-network short and host-to-network long, respectively. Some modern implementations provide a host-to-network long long, htonll, for 64-bit integers, but this function is far from standard.
If we had a data structure such as the following:
C++
struct data
{
  char                d1;
  short               d2;
  long                d3;
  long long           d4;
  unsigned char       d5;
  unsigned short      d6;
  unsigned long       d7;
  unsigned long long  d8;
};
An adequate conversion function to prepare this data for transfer on the network would look like this:
C++
void data_to_network(const data& input, data& output)
{
  output.d1 = input.d1;
  output.d2 = htons (input.d2);
  output.d3 = htonl (input.d3);
  output.d4 = htonll(input.d4);
  output.d5 = input.d5;
  output.d6 = (unsigned short)     htons ((short)input.d6);
  output.d7 = (unsigned long)      htonl ((long)input.d7);
  output.d8 = (unsigned long long) htonll((long long)input.d8);
}
Code like this is a bit fragile. A different function name is used to convert each data type; remember, these are C library calls. Also, the single-byte values do not require byte-order conversion, but if the type of these fields is ever increased in size, this code would need to be revisited to add the appropriate conversion function.
We could use function overloading in C++: simply create a set of functions with the same name for all of the different types, including the single-byte types. That's only 8 functions to implement... Oh, wait! We forgot about the int variants. Also, how do we deal with floating-point types?
This is a perfect fit for a parameterized solution (templates). The only problem is that some of the types require different conversion implementations. I previously mentioned template specialization; this is the technique we need to employ to solve this problem.
Solution
Let's first start with the base implementation for this template. That would be the conversion function that simply passes the data through to the return value.
C++
template <typename T>
T to_network_byte_order(T value)
{
  return value;
}
Simple.
Now let's create the conversion function for a short, which is two bytes in length. We start with a specialized definition for this template:
C++
template <>
short to_network_byte_order<short>(short value)
{
  return htons(value);
}
The only problem is that this looks an awful lot like the implementation we would have written with the overloaded-function solution. There are two problems: 1) the types are explicitly specified, and we wanted to avoid that; 2) we still need to address the signed vs. unsigned type specifiers.
To solve this, we can differentiate the template implementation based on the size of the data type. That is essentially what we did in the first place when we used the htons function to convert the unsigned short. This will require a slight modification to the base template.
C++
template <typename T, size_t SizeT>
T to_network_byte_order(T value)
{
  return value;
}
We also want to have something like this for our new version of the short conversion function:
C++
// This will not compile, why?
template <typename T>
T to_network_byte_order<T, 2>(T value)
{
  return htons(value);
}
The problem is, this is called partial specialization, and it is not permitted for functions. However, it is allowed for class and struct types. So we can still achieve our goal with one more adjustment to our strategy: we will encapsulate our byte-order conversion logic within a member function of a partially-specialized struct, then use a top-level template function to construct and call this conversion struct. Here is the definition of the templated structs:
C++
template <typename T, size_t SizeT>
struct Convert
{
  static T swap(T value) { return value; }
};

template <typename T>
struct Convert<T, 2>
{
  static T swap(T value) { return htons(value); }
};

template <typename T>
struct Convert<T, 4>
{
  static T swap(T value) { return htonl(value); }
};

template <typename T>
struct Convert<T, 8>
{
  static T swap(T value) { return htonll(value); }
};
Now, finally, the top-level function that dispatches to the byte-order conversion logic. Note that the extra size parameter is no longer needed on the function itself; sizeof(T) supplies it to Convert, which also allows the compiler to deduce T at the call site:
C++
template <typename T>
T to_network_byte_order(T value)
{
  return Convert<T, sizeof(T)>::swap(value);
}
What does this solution look like when it is used in our original conversion function?
C++
void data_to_network(const data& input, data& output)
{
  output.d1 = to_network_byte_order(input.d1);
  output.d2 = to_network_byte_order(input.d2);
  output.d3 = to_network_byte_order(input.d3);
  output.d4 = to_network_byte_order(input.d4);
  output.d5 = to_network_byte_order(input.d5);
  output.d6 = to_network_byte_order(input.d6);
  output.d7 = to_network_byte_order(input.d7);
  output.d8 = to_network_byte_order(input.d8);
}
Now, if the data types are changed during the life of this program, this parameterized implementation will automatically select the proper conversion the next time the code is compiled.
If only C++ supported reflection, then a function could be written to simply apply to_network_byte_order to each member of a class or struct. Many efforts are currently underway to add reflection to C++. I don't know when or if that will happen. Until then, this is the type of problem that my library Alchemy[^] solves.
Summary
Templates are a very powerful tool that is overlooked by many C++ developers. Learning to use the C++ Standard Library is a good start towards increasing your productivity and the reliability of your software. Learning to develop your own robust templates to solve problems for a variety of types will take you to the next level. However, the foreign syntax, functional behavior, and somewhat obscure rules tend to trip up beginners to this aspect of C++ development. This introduction should provide you with the knowledge required to tackle these hurdles. Continue to practice with them and improve your skills. If you have any questions, feel free to post a comment or send me an email.
Copyright Infringement
Let's start with my personal experience with copyright infringement.
Before I created my own site, I posted many articles at codeproject.com. I like to write about topics that interest me as a way to learn them better. Around 2002, I wrote a basic implementation of the game Tetris to learn DirectX and related technologies, and I posted it on CodeProject.com.
The Tetris Company is a very litigious entity, and ownership of the game itself has been surrounded by controversy throughout its existence. In 2007, I received a letter from CodeProject informing me that they had to remove my article from their site because they had received a DMCA copyright notice from The Tetris Company.
Then, about nine months later, CodeProject sent me another letter indicating that if I were to remove all references to "Tetris" from my article and code, they could repost it on the site. They also included a few legal references to help me educate myself on copyright law, which I will share in a moment.
After a bit of research, I settled on a new name for my game: Quadrino. I removed references to the name "Tetris" and cleaned up my article. CodeProject then reposted it to their site, and I haven't been harassed by The Tetris Company since.
If you are interested, you can check out Quadrino[^] at CodeProject. This version uses DirectX 7 and 8, but it still works. I have a port that I updated to use Direct2D with added support for Xbox 360 controllers; however, I never polished it up enough to release it.
What Does a Copyright Protect?
(Disclaimer: The following is my understanding of and experience with copyright law. I'm not a lawyer, and the courts and legal system do not always seem to play out logically to me. Also, it seems that what you can prove in a court of law tends to be more valuable than the truth.)
It turns out that a copyright only protects the original expression of an idea, but not the idea itself (which really does lead to a lot of confusion and misinterpretations).
For example:
 Books
 Poems
 Lyrics to a song (written or otherwise)
 Written sheet music for a melody
 A recorded version of the singer singing or a musician playing
 Paintings
 Sculptures
Other examples of creations that are protected:
 Software Source Code as well as compiled binaries
 Hardware design documents
 Research Papers
 Blog entries
 Internet forum comments
 Contracts
 Technical Manuals
 Parker Brothers' written rules to Monopoly
The name of the game Monopoly is trademarked (a different form of protection, which is also different from a "registered trademark"). The written rules to Monopoly have copyright protection; however, the concept of the game of Monopoly itself cannot be protected in any way. That is why you will see games similar to Monopoly from time to time, such as a local-city-themed version of the game with local landmarks, events, and celebrities. As long as they write their own version of the rules and avoid the name Monopoly, they can legally sell their game.
This is the aspect of copyrights that allowed me to change the name of my game and avoid any further infringement violations.
Then issues start to arise such as the "look and feel" of a product and so on.
And yes, works published on the Internet are publicly accessible; however, they are not considered to be in the public domain, which means you still hold the copyright to whatever you post. The terms of service for a website may state that, by posting content on their site, you give them ownership, a limited-use copyright license, or many other things (damn fine print).
How do you Copyright something?
Step 1: You create it
Congratulations! You have just completed the copyright process to the expression of your idea!
That's it!
There is only one step. You do not need to put a copyright symbol on the creative work, no date is required, and the "poor man's" copyright is a myth. That is, sending yourself a sealed copy of your work in the mail doesn't gain you anything (you'll actually be out the cost of the envelope and the price of shipping, not to mention the opportunity cost of what you could have done with your time instead of mailing something to yourself).
Adding the symbol, the date, and your name helps establish ownership and makes clear that you are claiming your copyright. Otherwise, if you can prove that you are the creator of a work, then you automatically own the copyright to that work (and it's what you can prove in a court of law that actually matters).
This applies to works created after 1989, because copyright notices were required before that point. For more details, look up the Berne Convention Implementation Act, which made copyright ownership automatic. If you created your work before 1989 and forgot to properly mark your creative work, you may still be able to claim it; you should consult a lawyer if it matters that much.
Fair Use
I am not going to go into full detail on this, but there is a concept of fair use for copyrights. For the purposes of reviews, references, and citations, you can use a portion of a creation that is under copyright protection. You can also use such content for satire and parodies, and to create derivative works.
Supreme Court decisions
 1994 Campbell v. Acuff-Rose Music, Inc. [Copyright - Fair Use - Parody]
 1984 Sony Corp. of Am. v. Universal City Studios, Inc. [Copyright - Fair Use - Sale and Use of VCRs]
Derivative Works
Derivative works are a sticky issue. These works can be derived from an existing work, but they must be more your work than the original. Beyond that basic notion, my understanding is limited; this is a very gray area. Hell, Google avoided a $9.2B claim in the lawsuit Oracle brought against it, which has been raging in our courts since 2011, because the jury ruled Google had a fair use right to create a derivative work. Many analysts are expecting Oracle to appeal. We'll have to wait and see what happens.
Digital Millennium Copyright Act (DMCA)
The Digital Millennium Copyright Act (DMCA) is a four-letter word for security researchers and hobbyists, especially Section 1201. It was enacted in 1998 and was aimed at protecting the creative works (distributed content) of actors, authors, artists, and musicians (more accurately: studios, publishers, and recording companies) from being illegally pirated on the Internet and the other forms of digital media that began to evolve.
One of the clauses, and a subsequent side-effect of this law (in the United States), prohibits a lawful owner from reverse engineering around anti-circumvention protections in most cases. This has brought John Deere and the auto manufacturers into the spotlight recently, as they are trying to use this law to prevent security researchers from looking for vulnerabilities in their equipment, and to maintain a monopoly on the support and repair of these complex systems.
It's some of the side-effects of the DMCA that make me a little jumpy at the threat of being sued. The penalties can reach a $5M fine and 5 years in prison. For this reason, the Electronic Frontier Foundation (EFF) is suing the federal government on behalf of Andrew Huang and Matthew Green. You can read the press release made by the EFF here: EFF Lawsuit Takes on DMCA Section 1201: Research and Technology Restrictions Violate the First Amendment[^].
Wait! What are those subclauses in section 1201?
There are a number of subclauses in Section 1201 that actually give owners of lawfully acquired (i.e., not pirated or stolen) copyrighted material the right to reverse-engineer and circumvent the copyright protection mechanism in a few select instances:
 f. Reverse Engineering for achieving interoperability
 g. Encryption Research
 i. Protection of Personally Identifying Information
 j. Security Testing (Research)
I mentioned this to Matt Green through Twitter, and his response was:
Matt wrote a blog entry that details why he is doing this. You can read that here: Matthew Green's Statement on DMCA lawsuit[^]
After I read his blog post I asked myself this question:
Even with the law on my side, do I really want to risk getting taken to court by a megacorporation with deep pockets?
My conclusion:
Nope!
Summary
Copyright and patent infringement are civil offenses, and they are likely to become a concern for hackers only if the goal is to duplicate and manufacture their own product for profit. Regardless of their moral view on whether they are entitled to hack systems, violating one of these IP legal protections is likely to affect a hacker only if their activities end in a lawsuit and a probable loss in an infringement case brought by the original manufacturer.
Otherwise, the criminal penalties for hacking are much more severe, and can include both jail time and monetary fines. When the topic moves into espionage, a death sentence is even a potential outcome. Therefore, I doubt that any hackers (with the exception of corporate reverse-engineers) even consider the legal violations of IP protection that they are committing.
A software library provides no value if it does not simplify the task of creating your application. At the very least, we would like to show that the library contains all of the tools required to complete the intended goal. Ideally, the library is complete, easy to use, and efficient. The only way to learn how well the library is designed and implemented is to use it.
Furthermore, it is useful, and sometimes necessary, to provide an exemplar for others to see how the library is intended to be used. The Steganography sample program included with Alchemy is this exemplar. I chose steganography to demonstrate that Alchemy is much more useful than just the serialization of data for networking. In the process of developing this application, I discovered some pain points with the library, and added tools to Alchemy to eliminate this pain.
Steganography
What is steganography?
Steganography is the hiding of messages in plain sight. It should not be confused with "stenography," which is the recording of dictation. Steganography can be performed in many ways. Normal words can be given special meaning and included within a message that appears to be mundane. The location of words relative to others in the message can have a significant meaning. The second letter of every other word can be extracted to form the message. The possibilities are endless.
The form of steganography that I have implemented with Alchemy embeds a text message within a bitmap image. This can be achieved by taking advantage of the fact that the low-order bits of the color channels in an image affect the final color much less than the high-order bits do.
The table below shows a sample for each color channel, with and without the two lower bits set. The binary row indicates the values of the four lower bits of each 8-bit color. For demonstration purposes, the alpha channel is represented with grayscale.

            Red          Green        Blue         Alpha
Value:      FF  | FC     FF  | FC     FF  | FC     FF  | FC
Low bits:   1111|1100    1111|1100    1111|1100    1111|1100
Compare this to the result if we substitute only the single high bit for each color channel:

            Red          Green        Blue         Alpha
Value:      7F  | FF     7F  | FF     7F  | FF     7F  | FF
High bits:  0111|1111    0111|1111    0111|1111    0111|1111
The only caveat is that the image should have a sufficient amount of entropy; otherwise, the noise added by the encoded data may become visible, if not to a human, then most certainly to a computer searching for such anomalies. Photographs with a range of gradients are good candidates for this form of steganography.
Why Use Steganography as a Sample?
Through the development of the base set of features for Alchemy, I focused solely on serializing data for network transfer protocols. However, Alchemy is a flexible serialization library that is not restricted to network communication. Portable file formats also require serialization capabilities similar to those found in Alchemy. To this end, loading and storing a bitmap from a file is a good serialization task; bitmaps are relatively easy to acquire, and the format is simple enough to be implemented in a small sample program.
I wanted to keep the program simple. Writing a portable network communication program is not simple, especially since Alchemy does not provide functionality directly related to network communication. I also felt that if I were to use a network-related exemplar, potential users of Alchemy would assume it can only be used for network-related tasks. Moreover, I did not want to add extra support code to the application that would hide or confuse the usage of Alchemy.
Strategy
In keeping with simplicity, the sample program requires 32-bit bitmaps. For this type of encoding, there are four color channels (Red, Green, Blue, and Alpha) for each pixel, where each channel is one byte in size. We will encode one byte of data within each pixel. To accomplish this, we will assign two bits of the encoded byte to the two lower bits of each color channel. This results in a 25% encoding rate within the image.
Consider an example where we combine the orange color 0xFF9915 with the letter i, 0x69:
         Channel 1    Channel 2    Channel 3    Channel 4
Input    0xFF         0x99         0x15         0x00
Value    1111 1111    1001 1001    0001 0101    0000 0000
Data     01           10           10           01
Result   1111 1101    1001 1010    0001 0110    0000 0001
Output   0xFD         0x9A         0x16         0x01

This is not a very complex encoding strategy. However, it will allow me to demonstrate the serialization of data for both input and output, as well as the packed-data bit (bit-field) functionality provided by Alchemy.
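The bit manipulation behind the table above is simple masking and combining; here is a minimal sketch. The name k_data_mask matches the constant used by the weave/extract functions later in this post, but the encode helper itself is invented for illustration (the sample program drives this through Hg::s_data instead):
C++
const unsigned char k_data_mask = 0x03;  // the two lower bits

// Replace the two low bits of a channel with two message bits.
unsigned char encode(unsigned char channel, unsigned char two_bits)
{
  return (channel & ~k_data_mask) | (two_bits & k_data_mask);
}

// 'i' is 0x69 (0110 1001); taken two bits at a time: 01 10 10 01.
// encode(0xFF, 0x01) == 0xFD, matching the first column of the table.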
Bitmap Format
The bitmap file format has many different definitions. The variety of formats is a result of its inception on IBM's OS/2 platform, migration to Windows, and evolution through the years. Additionally, the format allows for an indexed 8-bit color table, Run-Length Encoded (RLE) compression, gamma correction, color profiles, and many other features.
The sample application simply uses the bitmap format introduced with Windows 3.0. It contains a file header that indicates the file is of type BITMAP, a bitmap information section, and the pixel data. The Alchemy definitions for each section are found below. These definitions provide the fundamental structure for the data; the goal was to provide a table-based definition that looks very similar to the definition of a struct. This declaration also generates the majority of the serialization logic for Alchemy:
File Header
The bitmap file header is a short structure that is only 14 bytes in size. The first two bytes contain the letters "BM" to indicate that this is a bitmap. The length of the file and the offset to the first pixel data are also encoded in this structure:
C++
// *************************************************************  
ALCHEMY_STRUCT(bitmap_file_header_t,
  ALCHEMY_DATUM(uint16_t, type),
  ALCHEMY_DATUM(uint32_t, length),
  ALCHEMY_DATUM(uint16_t, reserved_1),
  ALCHEMY_DATUM(uint16_t, reserved_2),
  ALCHEMY_DATUM(uint32_t, offset)
)
Bitmap Information Header
The bitmap information section is 40 bytes of data that defines the dimensions and color-depth of the encoded bitmap:
C++
// *************************************************************  
ALCHEMY_STRUCT(bitmap_info_header_t,
  ALCHEMY_DATUM(uint32_t, size),
  ALCHEMY_DATUM(int32_t,  width),
  ALCHEMY_DATUM(int32_t,  height),
  ALCHEMY_DATUM(uint16_t, planes),
  ALCHEMY_DATUM(uint16_t, bit_depth),
  ALCHEMY_DATUM(uint32_t, compression),
  ALCHEMY_DATUM(uint32_t, sizeImage),
  ALCHEMY_DATUM(int32_t,  x_pixels_per_meter),
  ALCHEMY_DATUM(int32_t,  y_pixels_per_meter),
  ALCHEMY_DATUM(uint32_t, color_count),
  ALCHEMY_DATUM(uint32_t, important_color)
)
Bitmap Information
This is a utility definition that combines the information header and the color data from the buffer, for convenience:
C++
// *************************************************************  
ALCHEMY_STRUCT(bitmap_info_t,
  ALCHEMY_DATUM(bitmap_info_header_t, header),
  ALCHEMY_ALLOC(byte_t, header.sizeImage, pixels)
)
Pixel Definition
This is a convenience structure to access each color channel independently within a pixel:
C++
// *************************************************************  
ALCHEMY_STRUCT(rgba_t,
  ALCHEMY_DATUM(byte_t, blue),
  ALCHEMY_DATUM(byte_t, green),
  ALCHEMY_DATUM(byte_t, red),
  ALCHEMY_DATUM(byte_t, alpha)
)
Alchemy Declarations
Storage Buffer
Alchemy supports both static and dynamic memory management for its internal buffers; dynamic allocation is the default. However, the storage policy can easily be changed to a static policy with a new typedef. The definition below shows the static buffer definitions used by the sample program:
C++
namespace detail
{
  typedef Hg::basic_msg<Hg::bitmap_file_header_t,
                        Hg::BufferedStaticStoragePolicy>  hg_file_t;

  typedef Hg::basic_msg<Hg::bitmap_info_t,
                        Hg::BufferedStaticStoragePolicy>  hg_info_t;
}
Alchemy Message
For convenience, we also predefine a type for each message format.
C++
typedef Hg::Message<detail::hg_file_t> file_t;
typedef Hg::Message<detail::hg_info_t> info_t;
Bitmap Abstraction
As I mentioned previously, I wanted to keep this sample application as simple as possible. One of the things that I was able to do is encapsulate the bitmap data details into the following Bitmap abstraction. This class provides storage for a loaded bitmap, loads and stores the contents, and provides a generic processing function that is applied to each pixel:
C++
class Bitmap
{
public:
  bool Load (const std::string &name);
  bool Store(const std::string &name);

  void process(std::string &msg,
               pixel_ftor   ftor);

private:
  std::string m_file_name;

  file_t      m_file_header;
  info_t      m_info;
};
The processing function takes a function pointer as an argument, which specifies the operation to be performed each time the function is called. This is the definition of that function pointer:
C++
typedef void (*pixel_ftor)(Hg::rgba_t& pixel,
                           Hg::byte_t& data);
Load and Store
This section shows the implementation of both the Load and Store operations for the bitmap. The implementation uses the Standard C++ Library to open a file, and reads or writes the contents directly into the Hg::Message type with the stream operators.
C++
// *************************************************************  
bool Bitmap::Load(const std::string &name)
{
  m_file_name = name;

  std::ifstream input(m_file_name, std::ios::binary);
  if (input.bad())
  {
    return false;
  }

  input >> m_file_header;

  const size_t k_info_len = 0x36ul;
  if (k_info_len != m_file_header.offset)
  {
    return false;
  }

  input >> m_info;

  return true;
}
And the implementation for Store:
C++
// ************************************************************  
bool Bitmap::Store(const std::string &name)
{
  std::ofstream output(name, std::ios::binary);
  if (output.bad())
  {
    return false;
  }

  output << m_file_header;
  output << m_info;

  return true;
}
Process
I mentioned at the beginning that it is important to implement programs that perform real work with your libraries, to verify that your library is easy to use and provides the desired functionality. With my first-pass implementation of this program, both of those qualities were true for Alchemy, except that the performance was quite slow. The cause turned out to be the load and initialization of every single pixel into my implementation of Hg::packed_bits.
The problem is that the bytes representing the pixel data are normally read into an array as a bulk operation. Afterwards, the proper address of each pixel is indexed, rather than the data being read into an independent object that represents the pixel. When I recognized this, I came up with the idea for the data_view<T> construct. It allows a large buffer to be loaded as raw memory, and a view of the data to be mapped onto any type desired, even a complex data structure such as the rgba_t type that I defined.
The data_view is an object that provides non-owning access to the underlying raw buffer. If this sounds familiar, that is because it is very similar to the string_view construct that is slated for C++17. It was shortly after I implemented data_view that I discovered string_view existed. I was a bit shocked, and delighted, when I realized how similar the concepts and implementations are to each other. It was a bit of validation that I had chosen a good path to solve this problem.
I plan to write an entry that describes the data_view in detail at a later time. Until then, if you would like to learn more about the approach, I encourage you to check out its implementation in Alchemy, or the documentation for the string_view object.
The purpose of process is to sequentially execute the supplied operation on each message byte and source image pixel. This continues until the entire message has been processed, or there are no more available pixels.
C++
// *************************************************************  
void Bitmap::process(std::string &msg,
                     pixel_ftor   ftor)
{
  auto t    = Hg::make_view<Hg::rgba_t>(m_info.pixels.get());
  auto iter = t.begin();

  // Calculate the number of bytes that can be encoded or extracted
  // from the image, and ensure that the message buffer is large enough.
  size_t length = t.end() - iter;
  msg.resize(length);

  for (size_t index = 0; iter != t.end(); ++iter, ++index)
  {
    ftor(*iter, (Hg::byte_t&)(msg[index]));
  }
}
Weave and Extract
These are the two functions that provide the pixel-level operations to encode a message byte into a pixel with the strategy described previously. Weave combines the message byte with the supplied pixel, and Extract reconstructs the message byte from the pixel.
I am investigating the possibility of implementing a union type for Alchemy. If I end up doing so, I will most likely revisit this sample and provide an alternative implementation that incorporates the Hg::packed_bits type. This would completely eliminate the manual bit-twiddling logic that is present in both of these functions:
C++
// *************************************************************  
void weave_data(Hg::rgba_t& pixel,
                Hg::byte_t& data)
{
  using Hg::s_data;

  s_data value(data);

  pixel.blue  = (pixel.blue  & ~k_data_mask)
              | (value.d0    &  k_data_mask);
  pixel.green = (pixel.green & ~k_data_mask)
              | (value.d1    &  k_data_mask);
  pixel.red   = (pixel.red   & ~k_data_mask)
              | (value.d2    &  k_data_mask);
  pixel.alpha = (pixel.alpha & ~k_data_mask)
              | (value.d3    &  k_data_mask);
}
Extract implementation:
C++
// *************************************************************  
void extract_data(Hg::rgba_t& pixel,
                  Hg::byte_t& data)
{
  using Hg::s_data;

  s_data value;

  value.d0 = (pixel.blue  & k_data_mask);
  value.d1 = (pixel.green & k_data_mask);
  value.d2 = (pixel.red   & k_data_mask);
  value.d3 = (pixel.alpha & k_data_mask);

  data = value;
}
The Main Program
The main program body is straightforward. Input parameters are parsed to determine if an encode or decode operation should be performed, as well as the names of the files to use.
C++
// *************************************************************  
int main(int argc, char* argv[])
{
  if (!ParseCmdParams(argc, argv)) {
    PrintHelp();
    return 0;
  }

  string         message;
  sgraph::Bitmap bmp;

  bmp.Load(input_file);

  if (is_encode) {
    message = ReadFile(msg_file);
    bmp.process(message, weave_data);
    bmp.Store(output_file);
  }
  else {
    bmp.process(message, extract_data);
    WriteFile(output_file, message);
  }

  return 0;
}
Results
To demonstrate the behavior of this application, I ran sgraph to encode the readme.txt file from its own project. Here is the first portion of that file:

========================================================================
    CONSOLE APPLICATION : sgraphy Project Overview
========================================================================
AppWizard has created this sgraphy application for you. This file
contains a summary of what you will find in each of the files that
make up your sgraphy application.
Into this image:
This is the result image:
For comparison, here is a sample screencapture from a Beyond Compare diff of the two files:
Summary
I implemented a basic application that performs steganography to demonstrate how to use the serialization features of my library, Alchemy. I chose an unusual application like this to make the demonstration a bit more interesting and to show that the library can be used for much more than just serializing data for network transfer.
Twitter is an... interesting way to spend one's time. It can be quite a challenge to cram a thought into 140 characters. However, I have found there are many ways to be entertained by this microblogging site. One of them includes interacting with and learning from a wide variety of people. During an interaction with a creative writer, I was challenged by a poem.
This is a brief entry that includes my implementation of that "Poem Anagram Generator."
The Original Poem
The first thing that is required, is the poem. You can find the original at what_adri_writes[^].
Here is an excerpt:
Stanley the fishmonger
told me how to know it
when I saw it
without a poet
And we fished out drowned Phlebas,
Patron saint of Unconsidered Phoenicians
and failed Changers of Minds.
The Highlander Art double-checked the veil
of Reichenbach Falls
three days later
and found
A beekeeper.
Strategy
When I thought about it, I realized it wouldn't take much to actually write a program to reorganize the words of the poem. Especially if I were to use the algorithms
that are available in the C++ Standard Library. That is, of course, creating the naïve implementation that ignores such possibly desirable features as sentence structure and proper grammar.
Then again, this is poetry!
So let's go ahead and throw those concerns out the window. This will be a post-modern, avant-garde cyber-experience.
Tokenize
My original goal was to tokenize all of the words from the poem, then randomize them with something like next_permutation. However, when I really started to look at the text, I saw all of the punctuation. Did I want to remove the punctuation and just live with the words? Well, then there are also the newlines that give the text its form and clue the reader in that "pssst, this is probably a poem."
So I decided that I would include both the punctuation and newlines as tokens to be generated for the poem generator.
To do this, I put a space between every word, punctuation mark, and newline in the poem, like so:
C++
const std::string poem( "Oh but I too want to be a poet ! \n "
                        "and yet \n "
                        "Tell me where is meaning bred \n "
                        "In my heart or in your head ? \n "

                        // omitted for brevity

                        "And so the poem \n "
                        "was not to be . \n");
Here is a simple function to add each token into a vector
for future modification:
C++
typedef vector<string> words_t;

// ************************************************************
words_t tokenize(const string &poem)
{
  words_t words;
  size_t  next = 0;

  do
  {
    size_t cur = next;
    next = poem.find(' ', cur);

    size_t count = next == string::npos
                 ? string::npos
                 : next - cur;
    words.push_back(poem.substr(cur, count));

    if (next != string::npos)
      next += 1;

  } while (next != string::npos);

  return words;
}
If I missed an algorithm from the standard library that would perform this task, I would be interested to learn how this function could be simplified; one possibility is sketched below.
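For example, a minimal alternative built on std::getline with an explicit space delimiter behaves the same way for this single-space token format, preserving the punctuation and "\n" tokens (requires the <sstream> header):
C++
// A possible drop-in replacement for the manual search logic;
// getline with a ' ' delimiter splits on exactly the same
// single-space boundaries as the find-based loop.
words_t tokenize(const string &poem)
{
  words_t words;
  istringstream stream(poem);

  string token;
  while (getline(stream, token, ' '))
    words.push_back(token);

  return words;
}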
The Generator
The generator code is found below. It uses three pieces of the standard library: a random number generator, shuffle, and copy. Then, of course, there is the call to tokenize.
You can run the code below to generate a new poem each time.
C++
// ************************************************************
// Headers required for the complete program.
#include <algorithm>
#include <iostream>
#include <iterator>
#include <random>
#include <string>
#include <vector>

using namespace std;

int main()
{
  // Tokenize the poem.
  words_t words(tokenize(poem));

  // Jumble the words.
  random_device rdev;
  mt19937       rng(rdev());

  shuffle(words.begin(), words.end(), rng);

  // Print the results.
  copy(words.begin(), words.end(), ostream_iterator<string>(cout, " "));

  cout << "\n";

  return 0;
}
Output:
Twitter and Poetry...
Instant art!
Maybe to improve upon this I could pre-tokenize based on particular phrases.
Summary
Twitter is fun!
C++ is fun!
Combining Twitter and C++ makes poetry fun even for a left-brained analytic like myself.
If you end up generating an interesting poem, post it in the comments.
The "Little Planet" effect is the colloquial name often used to refer to the mathematical concept of a stereographic projection. The end result is quite impressive, especially considering the little amount of code that is actually required to create the image. All that is required is a panoramic image with a 360° view from side to side, or a photosphere, such as used with Google Earth to provide an immersive view of a location.
Stereographic Projection
This is a mapping from a spherical position onto a plane. You will commonly see this type of projection in cartography; two examples are maps of the earth and planispheres (celestial charts). Some type of distortion is unavoidable when mapping a sphere onto a plane; the stereographic projection preserves angles and distorts areas. This tradeoff is preferred for navigation, which is typically performed with angles.
The projection is typically performed from one of the poles. However, it can originate from any point on the sphere. For simplicity, unless otherwise stated, I will refer to projections that originate at the North Pole of the sphere. For a unit sphere located at the origin, this is the point \([\matrix{0 & 0 & 1}]\).
The distortion of the image depends on the placement of the plane relative to the sphere. The upper hemisphere exhibits most of the distortion, and it becomes more extreme as the point on the sphere's surface approaches the origin of the projection; the projection of the origin point itself is undefined, as it extends to infinity.
If the plane bisects the sphere at the equator, the lower hemisphere will be projected within the circle of the sphere's equator, and the upper hemisphere is projected onto the plane outside of that circle.
When the plane is located at the surface of the sphere, opposite from the projection's origin, the lower hemisphere will project over an area that is twice that of the sphere's equator. The image below illustrates this configuration.
We can reference any point on the sphere's surface with two angles representing the latitude \(\phi\) and longitude \(\lambda\), where:
\( -\pi \lt \lambda \lt \pi, \quad -\cfrac{\pi}{2} \lt \phi \lt \cfrac{\pi}{2} \)
The following image is a scaled down version of the panorama that I used to generate the stereographic projection of the Golden Gate Bridge at the beginning of the article. Normally we would index the pixels in this image with two variables, \(x\) (width) and \(y\) (height).
We can simplify the math to map a full-view panorama to a sphere by normalizing the dimensions of both the sphere and our surface map; that is, reducing everything to the unit scale of one. This means we will perform the surface-map operation on a unit sphere, and the dimensions of our panorama will then span: \(-1 \lt x \lt 1, -1 \lt y \lt 1\).
The following image shows how the coordinate system from the sphere maps to the image:
Project the sphere onto the Plane
We will create a ray, to calculate the projection of a single point from the sphere to the plane. This ray begins at the projective origin, passes through the sphere and points at any other point on the surface of the sphere. This ray continues and will intersect with the projective plane. This intersection is the point of projection. Alternatively, we can calculate the intersection point on the sphere's surface, given a ray that points between the projective origin and the projection plane. This demonstrates that the stereographic projection is a bijective operation; there is a onetoone correspondence for the values on both surfaces.
The diagram below depicts the projection in twodimensions:
With \(\lambda_0\) as the central longitude and \(\phi_1\) as the central latitude, this relationship can be stated mathematically as:
\( \eqalign{ u &= k \cos \phi \sin(\lambda - \lambda_0) \cr v &= k [ \cos \phi_1 \sin \phi - \sin \phi_1 \cos \phi \cos(\lambda - \lambda_0)]\cr & where \cr k &= \cfrac{2R}{1 + \sin \phi_1 \sin \phi + \cos \phi_1 \cos \phi \cos(\lambda - \lambda_0)} } \)
The term \(k\) determines where the projected plane is located.
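To make the formula concrete, here is a direct transcription into code; the function name, the parameter order, and the choice to return the plane coordinates through reference parameters are all illustrative:
C++
#include <cmath>

// Forward stereographic projection: maps (phi, lambda) on the
// sphere of radius R to (u, v) on the plane, with phi1 and
// lambda0 as the central latitude and longitude.
void Project(double phi, double lambda,
             double phi1, double lambda0, double R,
             double &u, double &v)
{
  double k = (2.0 * R)
           / (1.0 + std::sin(phi1) * std::sin(phi)
                  + std::cos(phi1) * std::cos(phi) * std::cos(lambda - lambda0));

  u = k * std::cos(phi) * std::sin(lambda - lambda0);
  v = k * (std::cos(phi1) * std::sin(phi)
         - std::sin(phi1) * std::cos(phi) * std::cos(lambda - lambda0));
}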
Codez Plz
That is enough theory and explanation. Here is the code that I used to generate the stereographic projections of the Golden Gate Bridge and Las Vegas. The code presented below is adapted from a project I recently completed that used the computer vision library OpenCV. The only important thing to note in the code below is that Mat objects are used to store images, and each pixel is represented with a short vector type. You should have no problem converting the pixel-access operations from the code below to whatever image-processing API you are using.
If you do run into problems, leave a comment and I will help you work through porting the code.
First, here are two constants defined in the code:
C++
const double k_pi = 3.1415926535897932384626433832795;  
const double k_pi_inverse = 0.31830988618379067153776752674503; 
There are three functions:
Main Projection
This function works by creating a ray between the projection origin and a pixel location on the projection plane. The intersection with the sphere's surface is calculated, which indicates the location to sample from the sphere's surface map. Because we are dealing with discrete, pixelated digital images, this sampling process creates visual artifacts. To help improve the smoothness of the image, we use a bilinear filter to average the values of the four pixels surrounding the sample location.
C++
void RenderProjection(Mat &pano, long len, Mat &output) {
  output.create(len, len, CV_16UC3);

  long half_len = len / 2;
  Size sz       = pano.size();

  for (long indexX = 0; indexX < len; ++indexX) {
    for (long indexY = 0; indexY < len; ++indexY) {
      double sphereX = (indexX - half_len) * 10.0 / len;
      double sphereY = (indexY - half_len) * 10.0 / len;
      double Qx, Qy, Qz;

      if (GetIntersection(sphereX, sphereY, Qx, Qy, Qz))
      {
        double theta = std::acos(Qz);
        double phi   = std::atan2(Qy, Qx) + k_pi;
        theta        = theta * k_pi_inverse;
        phi          = phi * (0.5 * k_pi_inverse);
        double Sx    = min(sz.width  - 2.0, sz.width  * phi);
        double Sy    = min(sz.height - 2.0, sz.height * theta);

        output.at<Vec3s>(indexY, indexX) = BilinearSample(pano, Sx, Sy);
      }
    }
  }
}
Calculate the Intersection
This calculation is an optimized reduction of the quadratic equation to calculate the intersection point on the surface of the sphere.
C++
bool GetIntersection(double u, double v,
                     double &x, double &y, double &z)
{
  // The projection originates at the north pole, N = (0, 0, 1),
  // and the plane sits opposite the origin of projection.
  double Nx = 0.0;
  double Ny = 0.0;
  double Nz = 1.0;
  double dir_x = u - Nx;
  double dir_y = v - Ny;
  double dir_z = -1.0 - Nz;

  double a = (dir_x * dir_x) + (dir_y * dir_y) + (dir_z * dir_z);
  double b = (dir_x * Nx) + (dir_y * Ny) + (dir_z * Nz);

  b *= 2;

  // The constant term of the quadratic is zero because N lies on
  // the unit sphere, so the discriminant reduces to b*b.
  double d = b * b;
  double q = -0.5 * (b - std::sqrt(d));

  double t = q / a;

  x = (dir_x * t) + Nx;
  y = (dir_y * t) + Ny;
  z = (dir_z * t) + Nz;

  return true;
}
Bilinear Filter
The bilinear filter calculates a weighted sum of the four pixels surrounding the sample location in a digital image.
C++
Vec3s BilinearSample(Mat &image, double x, double y) {
  Vec3s c00 = image.at<Vec3s>(int(y),     int(x));
  Vec3s c01 = image.at<Vec3s>(int(y),     int(x) + 1);
  Vec3s c10 = image.at<Vec3s>(int(y) + 1, int(x));
  Vec3s c11 = image.at<Vec3s>(int(y) + 1, int(x) + 1);

  // Fractional offsets of the sample within the pixel cell.
  double X0 = x - floor(x);
  double X1 = 1.0 - X0;
  double Y0 = y - floor(y);
  double Y1 = 1.0 - Y0;

  // Each neighbor is weighted by the area of the opposite sub-cell.
  double w00 = X1 * Y1;
  double w01 = X0 * Y1;
  double w10 = X1 * Y0;
  double w11 = X0 * Y0;

  short r = short(c00[2] * w00 + c01[2] * w01
                + c10[2] * w10 + c11[2] * w11);
  short g = short(c00[1] * w00 + c01[1] * w01
                + c10[1] * w10 + c11[1] * w11);
  short b = short(c00[0] * w00 + c01[0] * w01
                + c10[0] * w10 + c11[0] * w11);

  return make_BGR(b, g, r);
}
...and a helper function:
C++
Vec3s make_BGR(short blue, short green, short red)
{
  Vec3s result;
  result[0] = blue;
  result[1] = green;
  result[2] = red;

  return result;
}
Here is another sample of a stereographic projection and the panorama that I used to create it:
Summary
The stereographic projection has been known and used since the time of the ancient Greeks. It was heavily used in the Age of Exploration to create maps of the world where the distortion was applied to distances while the relative angles between local points were preserved. When it is applied to full-view panoramas, it creates a neat effect called "The Little Planet Effect." With a little bit of theory and some concepts from computer graphics, we were able to turn this idea into fewer than 100 lines of code.
I wanted to start writing about secure coding practices as well as more instructive posts related to security topics such as encryption and hacking. You probably already have a conceptual understanding of things like the "stack", "heap" and "program counter". However, it's difficult to have concrete discussions regarding security unless you have a solid grasp on the computer memory model. This post is intended to provide a concrete foundation of the memory model, and my future posts related to security will build on this foundation.
It is easy to take the complexity of computer memory for granted because of the many layers of abstraction that programmers work through today. The same basic memory design has existed for all computers that use a paged memory structure since the early 60's. Knowledge of the memory layout plays a crucial role in areas such as application portability, embedded development, program security, and code optimization. The diagrams I present will also help you understand where the different activities occur in a program during runtime.
Memory (RAM)
Random Access Memory (RAM) is where the majority of your program will live while it is actively running. As the name indicates, any position in your application is capable of being addressed by the hardware. When your program is executed, it will be loaded at some starting address. This address is typically called the Base Address. The value of the base address differs from system to system. A technique called Address Space Layout Randomization (ASLR) loads your program's modules at random addresses to make hacking a bit more difficult.
RAM addresses start at zero and reach whatever limit is imposed by the system on which you are running. However, not every address is accessible by your program; some areas are reserved for the operating system. The hardware and operating system work together to abstract the details of accessing locations in RAM. Therefore, RAM can be thought of as one continuous linear array of bytes. Let this be the basic representation of the layout of RAM for this discussion.
All of the sections described below are located in RAM.
Segmentation
Systems that utilized segmented RAM started to appear in the 60's, with operating systems written in higher-level languages. Originally, segmentation provided many services, such as the ability to address memory locations beyond the word size of the processor. For Windows developers, the FAR specifier is a remnant of that limited addressing ability on 16-bit Intel chips, such as the 8086 and 80286.
Now, the primary feature implemented with segmented memory is memory paging (segment) and virtual addressing. I do not want to get into the technical details regarding virtual memory here, but I can revisit that topic in the future. For now, realize that these features allow the system to store and order all of the active resources in the most efficient way possible while hiding the details from the programmer.
From this point on, I will call a memory segment a page. A page is a predetermined number of bytes that are treated as a single group; memory is most efficiently managed in pages. All of the modern architectures that I have worked with have used a page size of 4 KB (4096 bytes). Why stop there? Because 4097 bytes would have been just too many?!
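If you are curious, you can query the page size on your own system. A minimal sketch for POSIX systems follows (Windows exposes the same information through GetSystemInfo):
C++
#include <unistd.h>
#include <iostream>

int main()
{
  // sysconf reports the configured page size in bytes.
  long page_size = sysconf(_SC_PAGESIZE);
  std::cout << "Page size: " << page_size << " bytes\n";
  return 0;
}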
Now that memory segmentation has been introduced, let's alter the representation of RAM just a bit to simplify the illustrations. This new abstraction will help us visualize the concepts in a more manageable way:
Program structure
The descriptions that I give below are in general terms because there are subtle nuances between the different executable formats defined on each platform. For instance:
 Unix/Linux:
 COFF is an older format
 ELF is the current format
 Mac OS X:
 MachO
 Windows:
 NE, derived from DOS formats for 16bit Windows
 PE, 32bit Windows
 PE32+, introduced for 64bit Windows
While the actual file definitions vary between different platforms, executable programs are generally organized with different memory sections as described below.
Program header
There are basically two types of sections to consider when discussing executable file formats: 1) Read-Only segments, and 2) Read-Write segments. Why isn't the entire program just made Read-Only? We don't want the program to be changed, do we?
The actual file the program is stored in is not used directly. The program header directs the loader to the different segments, which are described below.
Read-Only
The Read-Only segments can be loaded into shared memory of the system. Therefore, space can be saved when multiple instances of the same module are executed.
Read-Write
Each Read-Write segment must be given its own copy for each process that uses it. These segments cannot be safely shared between processes unless special care is taken by the developer.
Section header
Each type of program segment is described below. I will indicate whether it is typically a read-only or read-write segment, as well as the role it plays in the execution of the program. Most module formats do not restrict the segment types to a single instance; I will describe some of the possibilities in the relevant sections below.
.text (RO)
Another name used for this segment is the code segment. This is where the hardware instructions are encoded sequentially to command the processor. It is important for this segment to be loaded into a read-only region of the system's RAM to prevent self-modifying code. Self-modifying code is extremely difficult to debug, and it creates a great potential for security vulnerabilities.
It is also important to mention that most systems require memory segments to be marked with the "executable" privilege in order to process the segment as executable instructions.
.rodata (RO)
Some file layouts include a read-only data section to store constants and other read-only data that the program needs to reference by address. This segment type is found in the ELF format. Again, this segment type is not used in all program file formats.
.data (RW)
The data section contains preallocated storage space for all of the global and static variables defined in your program. These values are packed efficiently for the target platform so they can be accessed on the proper memory boundary for the size of the data type. The commands generated in the .text segment reference the values in this segment at a relative offset during runtime. First the address of the segment is found, then the relative offset of the desired variable is accessed as needed.
A program may want to define multiple .data segments to create a simple mechanism to share data between two processes. This is called a shared data section. I have only used this technique in Windows, so I do not know the specifics for its use on other platforms.
The shared segment is loaded into globally accessible system memory. Any process that loads this module will have access to the same variables. If one process changes a value, this will be instantly reflected in all other processes. I have used this mechanism to synchronize unique data among different programs on a single system.
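For reference, here is a sketch of how such a shared section is declared with the Microsoft compiler; the section name ".shared" and the variable are illustrative:
C++
// Place variables in a named data section; the variable must be
// initialized, or it would land in the .bss instead.
#pragma data_seg(".shared")
volatile long g_instance_count = 0;
#pragma data_seg()

// Instruct the linker to mark the section Read-Write-Shared.
#pragma comment(linker, "/SECTION:.shared,RWS")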
.bss (RW)
The .bss section is unique compared to the other segments. There is only one .bss per module, and it is not actually included in the module. That is because the .bss is the segment where all of the uninitialized data fields of a module are located.
The size required to store the uninitialized fields is all that is needed in the definition of the .bss. The uninitialized fields defined below would be placed in the .bss address space:
C++
int classId;  
char courseName[32];  
std::vector< student > classRoster;  
 
// Static member variables and local static variables  
// will also be allocated in the BSS. 
Runtime Structure
Now that we have defined each of the major types of segments that exist in all modern computer architectures, let's describe how they are structured in a running program. There are three types of memory that are directly referenced by machine instructions.
Heap
The Heap is where all dynamically allocated memory is located. Another name for this collection of memory is the Free Store. The heap is essentially all of the remaining accessible RAM that isn't used by your program's modules or reserved for the stack.
Call-Stack
The Stack is a nickname for the call-stack. The call-stack is where parameters are passed into functions and storage is allocated for local variables. That is why it is referred to as "creating the variable on the stack." A stack-frame represents each instance of a function call. Each time a function is called, the current function state is recorded at the current location on the stack, and a new stack-frame is pushed onto the stack to track the next function. When a function call returns, the current stack-frame is popped off the stack, and the previous function state is restored to continue execution.
One call-stack is allocated for each thread that is started in the program. In order to efficiently use the available address space on the system, a stack size is usually defined for each thread's call-stack. Factors to consider when selecting the size of your call-stack are the size of the local variables created in your functions and how deeply your function calls recurse.
I am going to leave the specific details of how the call-stack is managed for a later post, because call-stack processing is a complex topic that deserves an entire post of its own. What is important to understand for now is that function-call management and local variable allocations are the primary responsibilities of the call-stack; the sketch below makes the stack-frames visible.
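Each call in this sketch allocates a new frame, and printing the address of a local variable in each frame shows where the frames land; on most platforms the addresses decrease, because the stack grows downward:
C++
#include <iostream>

// Prints the address of a local variable in each nested call.
void frame(int depth)
{
  int local = depth;
  std::cout << "frame " << local
            << " at "   << static_cast<void*>(&local) << "\n";

  if (depth > 0)
    frame(depth - 1);
}

int main()
{
  frame(3);
  return 0;
}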
System Registers
The hardware registers are memory locations that are actually housed within the CPU. The registers are the same size as the word size of the CPU. "Word" has taken on different meanings over the years with computers; in the context of hardware, word size always means the bit-width of the processing pipeline of the CPU. For instance, the word size on a 32-bit processor is 32 bits, and the word size for a 64-bit processor is 64 bits.
Registers are crucial to the structure of a computer system. They are the CPU's only way to operate upon values when a command is executed. Values are loaded into the registers from addressable RAM locations via the system bus. If the CPU has an internal memory cache, large chunks of data can be preloaded from RAM. This ensures the data is ready when the CPU is ready to process an instruction; the CPU cache can provide an enormous boost in performance.
Fundamental x86 and AMD64 Registers
The types of registers that are available depend upon the CPU's ISA (Instruction Set Architecture). I am going to briefly introduce the commonly used registers of the x86 ISA, because all three major desktop operating systems (macOS, Linux, Windows) support this platform.
There are eight general-purpose registers and the Instruction Pointer, which holds the program counter. The registers are named based upon the purpose that was originally envisioned for each one. Some operations are designed for specific registers; for example, the accumulator, EAX, has a more compact opcode encoding for many common operations. The other registers support these operations as well; their encodings are simply the longer, general-purpose forms.
16-bit  32-bit  64-bit  Purpose
AX  EAX  RAX  Accumulator
BX  EBX  RBX  Base index (arrays)
CX  ECX  RCX  Counter (loops)
DX  EDX  RDX  Extends the precision of the accumulator
SI  ESI  RSI  Source Index for string operations
DI  EDI  RDI  Destination Index for string operations
SP  ESP  RSP  Stack Pointer
BP  EBP  RBP  Base Pointer
IP  EIP  RIP  Instruction Pointer
The first four registers in the previous table have additional names defined to access the low-order and high-order bytes of the 16-bit register. (The x86 and AMD64 instruction sets use an 8-bit byte.) Here are their names:
16-bit  Low Byte  High Byte
AX  AL  AH  
BX  BL  BH  
CX  CL  CH  
DX  DL  DH 
The stack pointer typically points to the top of the call-stack to indicate the next address to use for parameter passing or local variable allocation. The instruction pointer holds the address of the current instruction to execute. This address should always point into the .text segment of your program or the system's libraries.
Interactions
Ignoring restrictions due to permissions, RAM can be addressed as one continuous sequence from the lowest address to the highest address. The system will move segments in and out of RAM one page of memory at a time. There are typically regions of the address space that are reserved for the operating system (kernel). If shared memory between processes is allowed on the system, then a region will also be reserved for globally accessible address space.
Your program itself is loaded into available address space. There is no universally common addressing scheme for the layout of the .text segment (executable code), .data segment (variables) and other program segments. However, the layout of the program itself is wellformed according to the system that will execute the program. This allows the system's program loader to navigate the program file and properly load the program into RAM. Jump tables and other internal structures are fixed up to allow the different memory segments to be properly referenced based on their final address.
The diagram below depicts a simplistic view of a single program's elements loaded into memory on a system. The CPU accesses the RAM through the system bus.
The call-stack and heap are usually situated at opposite ends of the address space, and they grow toward each other. If the program allocates too much dynamic memory, or a recursive call continues unbounded, the system will run out of address space. In the recursive function call scenario, the stack will use all of its allotted space and cause a stack overflow.
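A sketch of that runaway scenario; do not run this in a process you care about:
C++
// Each call pushes another stack-frame and none ever return, so
// the thread eventually exhausts its allotted stack space and
// triggers a stack overflow.
unsigned long long runaway(unsigned long long n)
{
  return runaway(n + 1) + n;  // the "+ n" prevents tail-call optimization
}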
The only hard boundaries in this continuous address space are typically at the page level. If an operation attempts to access memory in a page that has not been mapped for the process, a segmentation fault, or segfault, will occur. If permissions are set to restrict access to specific pages and a program attempts to access that space, some type of access violation is raised.
Summary
Most programmers do not need a deep understanding of a computer's memory architecture in order to complete their jobs. However, a solid understanding of this memory model can help you make better design decisions and improve your debugging skills. As you move closer to the hardware in your development, it becomes more necessary to truly understand this structure. Finally, there are some tasks that are simply not possible to accomplish (or at least become extremely difficult) if you do not have a clear picture of the memory structure of computers.
Security is one of the areas of computer programming where it becomes necessary to have a better understanding of this structure. Even though you may never look at a disassembled program or manually access the registers, it is important to understand what causes a security vulnerability, as well as the qualities that make a vulnerability exploitable. With this foundation of memory structure, I will be able to write about secure programming practices and also demonstrate some of the techniques used to exploit these flaws.
The criticism of strong encryption by law enforcement has been an interesting topic to follow in the news and politics for the last nine months. It became even more interesting in February, when the short-lived court battle between Apple and the FBI made headlines. Now looming on the horizon is a piece of legislation that proposes to give judges the authority to order makers of products with encryption to help law enforcement.
Why can't these companies help law enforcement and give them a backdoor, one whose keys can only be obtained with a court order?
A Word about Encryption
Message encryption has existed for millennia, primarily used for communication in the military. It is now a ubiquitous tool of the Information Age and the Internet. Encryption is a versatile tool that goes well beyond privacy services such as online banking, the protection of personal medical records, or the end-to-end message encryption recently added to WhatsApp. Encryption techniques are also used to create digital signatures to verify that our digital content has not been tampered with. Both aspects are fundamental to computer security.
How does encryption work?
Math is Hard
Specifically, it is hard for a computer to efficiently factor extremely large integers or to compute discrete logarithms. These problems are considered "intractable," or hard to deal with: they cannot be solved quickly. Other types of encryption rely on concepts from information theory, computational complexity, and statistics.
Essentially, an encryption algorithm is a set of instructions that scrambles a plaintext message so there is no discernible pattern. Statistically, it should look like a truly random sequence of numbers: noise. The scrambled message is called a cipher message. There is also an algorithm that is used to decrypt the cipher message.
To make encryption algorithms more useful, a key is used with the algorithm to modify the scrambling instructions in a way that is unique to each key.
It takes relatively little time for a computer to encode and decode a message with an encryption algorithm and the associated key. However, the intractable math problems and encryption concepts used to design the algorithm make decryption take a very long time without the key.
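As a toy illustration of the roles played by the algorithm and the key, consider a simple XOR scrambler; this is a sketch for intuition only, the function name is mine, and XOR with a repeating key is trivially breakable and NOT a real cipher:
C++
#include <string>

// Applying the same operation a second time with the same key
// restores the original message. Assumes a non-empty key.
std::string xor_crypt(const std::string &message, const std::string &key)
{
  std::string output(message);

  for (size_t i = 0; i < output.size(); ++i)
    output[i] ^= key[i % key.size()];

  return output;
}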
This reddit comment[^] calculates the time required to crack an AES 256-bit message at \(9 \times 10^{50}\) years. To put this in perspective, the universe is believed to be 14 billion (\(1.4 \times 10^{10}\)) years old.
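The order of magnitude is easy to sanity-check. A rough back-of-the-envelope sketch, assuming an extremely generous rate of \(10^{18}\) key guesses per second to search the entire keyspace:
\( \eqalign{ 2^{256} &\approx 1.2 \times 10^{77} \text{ keys} \cr \cfrac{1.2 \times 10^{77} \text{ keys}}{10^{18} \text{ keys/s}} &\approx 1.2 \times 10^{59} \text{ s} \approx 3.7 \times 10^{51} \text{ years} } \)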
What’s the takeaway from all of this?
It is far simpler and faster to search for a vulnerability in the system that employs encryption than it is to attempt to brute-force crack a message.
Well, demanding that a backdoor be engineered into products that are normally secure is also faster and simpler than a brute-force attack.
Why Can't We Give Them a Backdoor?
Vulnerabilities are regularly discovered in computer systems. Chances are that there are vulnerabilities in any given system. It may not be with the encryption algorithm itself, but it may be with how the keys are transferred, or how the data is copied. Creating a secure system is difficult even when we aren't trying to create backdoors in the system.
In fact, the National Vulnerability Database (NVD)[^] reports an average of 19 new vulnerabilities were reported each day in 2014. At the time of this writing, the NVD vulnerability workload index[^] was 4.62. This is a calculation of the number of important vulnerabilities that information technology operations staff need to address each day.
It seems as though there is no need for a backdoor.
Now, these vulnerabilities are spread across a wide variety of networked systems, software, and devices, so it is not likely that a new vulnerability is discovered for your iPhone or Android device each day. Moreover, it is likely that many vulnerabilities only become exploitable when two specific systems are combined in a specific configuration; otherwise, each would be considered secure by itself.
It is difficult enough to secure the systems that we have when we intend for them to be 100% secure. Imagine what happens when we start adding secret access methods to these designs. Once the secret access method is uncovered by less than honorable groups, the secret access feature becomes the front door… with no locks … a huge welcome mat out front… and a note on the door that says “Let yourself in and make yourself comfortable.”
Summary
So, we actually can add backdoors to these products, but that defeats the purpose of trying to secure a product in the first place. Networked computers cannot communicate securely over a public network without encryption. Adding an alternate method to access a computer that bypasses strong encryption is not an acceptable solution.
My previous two entries have presented a mathematical foundation for the development and presentation of 3D computer graphics. Basic matrix operations were presented, which are used extensively with Linear Algebra. Understanding the mechanics and limitations of matrix multiplication is fundamental to the focus of this essay. This entry demonstrates three ways to manipulate the geometry defined for your scene, and also how to construct the projection transform that provides the threedimensional effect for the final image.
Transforms
There are three types of transformations that are generally used to manipulate a geometric model: translate, scale, and rotate. Your models can be oriented anywhere in your scene with these operations. Ultimately, a single square matrix is composed to perform an entire sequence of transformations as one matrix multiplication operation. We will get to that in a moment.
For the illustration of these transforms, we will use the two-dimensional unit square shown below. However, I will demonstrate the math and algorithms for three dimensions.
\( \eqalign{ pt_{1} &= (\matrix{0 & 0})\\ pt_{2} &= (\matrix{1 & 0})\\ pt_{3} &= (\matrix{1 & 1})\\ pt_{4} &= (\matrix{0 & 1})\\ }\) 
Translate
Translating a point in 3D space is as simple as adding the desired offset to the coordinate of each corresponding axis.
\( V_{translate}=V+D \)
For instance, to move our sample unit square from the origin by \(x=2\) and \(y=3\), we simply add \(2\) to the x-component of every point in the square, and \(3\) to every y-component.
\( \eqalign{ pt_{1D} &= pt_1 + [\matrix{2 & 3}] \\ pt_{2D} &= pt_2 + [\matrix{2 & 3}] \\ pt_{3D} &= pt_3 + [\matrix{2 & 3}] \\ pt_{4D} &= pt_4 + [\matrix{2 & 3}] \\ }\) 
There is a better way, though.
That will have to wait until after I demonstrate the other two manipulation methods.
Identity Matrix
The first thing that we need to become familiar with is the identity matrix. The identity matrix has ones along the diagonal and zeros everywhere else. Here is the \(3 \times 3\) identity matrix:
\( \left\lbrack \matrix{1 & 0 & 0\cr 0 & 1 & 0\cr 0 & 0 & 1} \right\rbrack\)
When any valid input matrix is multiplied with the identity matrix, the result is the unchanged input matrix. Only square matrices can be identity matrices. However, non-square matrices can be multiplied with the identity matrix if they have compatible dimensions.
For example:
\( \left\lbrack \matrix{a & b & c \\ d & e & f \\ g & h & i} \right\rbrack \left\lbrack \matrix{1 & 0 & 0\cr 0 & 1 & 0\cr 0 & 0 & 1} \right\rbrack = \left\lbrack \matrix{a & b & c \\ d & e & f \\ g & h & i} \right\rbrack \)
\( \left\lbrack \matrix{7 & 6 & 5} \right\rbrack \left\lbrack \matrix{1 & 0 & 0\cr 0 & 1 & 0\cr 0 & 0 & 1} \right\rbrack = \left\lbrack \matrix{7 & 6 & 5} \right\rbrack \)
\( \left\lbrack \matrix{1 & 0 & 0\cr 0 & 1 & 0\cr 0 & 0 & 1} \right\rbrack \left\lbrack \matrix{7 \\ 6 \\ 5} \right\rbrack = \left\lbrack \matrix{7 \\ 6 \\ 5} \right\rbrack \)
We will use the identity matrix as the basis of our transformations.
Scale
It is possible to independently scale geometry along each of the standard axes. We first start with a \(2 \times 2\) identity matrix for our two-dimensional unit square:
\( \left\lbrack \matrix{1 & 0 \\ 0 & 1} \right\rbrack\)
As we demonstrated with the identity matrix, if we multiply any valid input matrix with the identity matrix, the output will be the same as the input matrix. This is similar to multiplying by \(1\) with scalar numbers.
\( V_{scale}=VS \)
Based upon the rules of matrix multiplication, notice how there is a one-to-one correspondence between each row/column in the identity matrix and each component of our point.
\(\ \matrix{\phantom{x} & \matrix{x & y} \\ \phantom{x} & \phantom{x} \\ \matrix{x \\ y} & \left\lbrack \matrix{1 & 0 \\ 0 & 1} \right\rbrack } \)
We can create a linear transformation to scale a point by simply multiplying the \(1\) in the identity matrix with the desired scale factor for each standard axis. For example, let's make our unit square twice as large along the \(x\)-axis and half as large along the \(y\)-axis:
\(\ S= \left\lbrack \matrix{2 & 0 \\ 0 & 0.5} \right\rbrack \)
This is the result after we multiply each point in our square with this transformation matrix; I have also broken out the calculations for one of the points, \(pt_3(1, 1)\):
\( \eqalign{ pt_{3}S &= \\ &= [\matrix{1 & 1}] \left\lbrack\matrix{2 & 0 \\ 0 & 0.5} \right\rbrack\\ &= [\matrix{(1 \cdot 2 + 1 \cdot 0) & (1 \cdot 0 + 1 \cdot 0.5)}] \\ &= [\matrix{2 & 0.5}] }\) 
Here is the scale transformation matrix for three dimensions:
\(S = \left\lbrack \matrix{S_x & 0 & 0 \\ 0 & S_y & 0 \\ 0 & 0 & S_z }\right\rbrack\)
Rotate
A rotation transformation can be created similarly to how we created a transformation for scale. However, this transformation is more complicated than the previous one; I am only going to present the formula rather than explain the details of how it works. The matrix below will perform a counter-clockwise rotation about a pivot point placed at the origin:
\(\ R= \left\lbrack \matrix{\phantom{-} \cos \Theta & \sin \Theta \\ -\sin \Theta & \cos \Theta} \right\rbrack \)
\(V'=VR\)
As I mentioned, the rotation transformation pivots the point about the origin. Therefore, if we attempt to rotate our unit square, the result will look like this:
The effects become even more dramatic if the object is not located at the origin:
In most cases, this may not be what we would like to have happen. A simpler way to reason about rotation is to place the pivot point in the center of the object, so it appears to rotate in place.
To rotate an object in place, we will need to first translate the object so the desired pivot point of the rotation is located at the origin, rotate the object, then undo the original translation to move it back to its original position.
While all of those steps must be taken in order to rotate an object about its center, there is a way to combine those three steps to create a single matrix. Then only one matrix multiplication operation will need to be performed on each point in our object. Before we get to that, let me show you how we can expand this rotation into three dimensions.
Imagine that our flat unit-square object existed in three dimensions. It is still defined to exist on the \(xy\)-plane, but it has a third component added to each of its points to mark its location along the \(z\)-axis. We simply need to adjust the definition of our points like this:
\( \eqalign{ pt_{1} &= (\matrix{0 & 0 & 0})\\ pt_{2} &= (\matrix{1 & 0 & 0})\\ pt_{3} &= (\matrix{1 & 1 & 0})\\ pt_{4} &= (\matrix{0 & 1 & 0})\\ }\) 
So, you would be correct to surmise that in order to perform this same \(xy\) rotation about the \(zaxis\) in three dimensions, we simply need to begin with a \(3 \times 3\) identity matrix before we add the \(xy\) rotation.
\( R_z = \left\lbrack \matrix{ \phantom{-}\cos \Theta & \sin \Theta & 0 \\ -\sin \Theta & \cos \Theta & 0 \\ 0 & 0 & 1 } \right\rbrack \)
We can also adjust the rotation matrix to perform a rotation about the \(x\) and \(y\) axes, respectively:
\( R_x = \left\lbrack \matrix{ 1 & 0 & 0 \\ 0 & \phantom{-}\cos \Theta & \sin \Theta \\ 0 & -\sin \Theta & \cos \Theta } \right\rbrack \)
\( R_y = \left\lbrack \matrix{ \cos \Theta & 0 & -\sin \Theta\\ 0 & 1 & 0 \\ \sin \Theta & 0 & \phantom{-}\cos \Theta } \right\rbrack \)
Composite Transform Matrices
I mentioned that it is possible to combine a sequence of matrix transforms into a single matrix. This is highly desirable, considering that every point in an object model needs to be multiplied by the transform. What I have demonstrated up to this point is how to individually create each of the transform types: translate, scale, and rotate. Technically, the translate operation, \(V'=V+D\), is not a linear transformation, because it is not composed within a matrix multiplication. Before we can continue, we must find a way to turn the translation operation from a matrix addition into a matrix multiplication.
It turns out that we can create a translation operation that is a linear transformation by adding another row to our transformation matrix. However, a transformation matrix must be square because it starts with the identity matrix. So we must also add another column, which means we now have a \(4 \times 4\) matrix:
\(T = \left\lbrack \matrix{1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ t_x & t_y & t_z & 1 }\right\rbrack\)
A general transformation matrix is in the form:
\( \left\lbrack \matrix{ a_{11} & a_{12} & a_{13} & 0\\ a_{21} & a_{22} & a_{23} & 0\\ a_{31} & a_{32} & a_{33} & 0\\ t_x & t_y & t_z & 1 }\right\rbrack\)
Where \(t_x\),\(t_y\) and \(t_z\) are the final translation offsets, and the submatrix \(a\) is a combination of the scale and rotate operations.
However, this matrix is not compatible with our existing point definitions because they only have three components.
Homogeneous Coordinates
The final piece that we need to finalize our geometry representation and coordinate management logic is called Homogeneous Coordinates. What this basically means is that we will add one more component to our vector definition. We refer to this fourth component as \(w\), and we initially set it to \(1\).
\(V(x,y,z,w) = [\matrix{x & y & z & 1}]\)
Our vector is only valid when \(w=1\). After we transform a vector, \(w\) has often changed to some other scale factor. Therefore, we must rescale our vector back to its original basis by dividing each component by \(w\). The equations below show the threedimensional Cartesian coordinate representation of a vector, where \(w \ne 0\).
\( \eqalign{ x&= X/w \\ y&= Y/w \\ z&= Z/w }\)
Creating a Composite Transform Matrix
We now possess all of the knowledge required to compose a linear transformation sequence that is stored in a single, compact transformation matrix. So let's create the rotation sequence that we discussed earlier. We will compose a transform that translates an object to the origin, rotates it about the \(z\)-axis, then translates the object back to its original position.
Each individual transform matrix is shown below in the order that it must be multiplied into the final transform.
\( T_c R_z T_o = \left\lbrack \matrix{ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ -t_x & -t_y & -t_z & 1 \\ } \right\rbrack \left\lbrack \matrix{ \phantom{-}\cos \Theta & \sin \Theta & 0 & 0 \\ -\sin \Theta & \cos \Theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ } \right\rbrack \left\lbrack \matrix{ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ t_x & t_y & t_z & 1 \\ } \right\rbrack \)
Here, values are selected and the result matrix is calculated:
\(t_x = 0.5, t_y = 0.5, t_z = 0.5, \Theta = 45°\)
\( \eqalign{ T_c R_z T_o &= \left\lbrack \matrix{ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ -0.5 & -0.5 & -0.5 & 1 \\ } \right\rbrack \left\lbrack \matrix{ \phantom{-}0.707 & 0.707 & 0 & 0 \\ -0.707 & 0.707 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ } \right\rbrack \left\lbrack \matrix{ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0.5 & 0.5 & 0.5 & 1 \\ } \right\rbrack \\ \\ &=\left\lbrack \matrix{ \phantom{-}0.707 & 0.707 & 0 & 0 \\ -0.707 & 0.707 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0.5 & -0.207 & 0 & 1 \\ } \right\rbrack }\)
It is possible to derive the final composite matrix by using algebraic manipulation to multiply the transforms with the unevaluated variables. The matrix below is the simplified definition for \(T_c R_z T_o\) from above:
\( T_c R_z T_o = \left\lbrack \matrix{ \phantom{-}\cos \Theta & \sin \Theta & 0 & 0 \\ -\sin \Theta & \cos \Theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ (-t_x \cos \Theta + t_y \sin \Theta + t_x) & (-t_x \sin \Theta - t_y \cos \Theta + t_y) & 0 & 1 \\ } \right\rbrack\)
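To make the composition concrete, here is a minimal sketch of a row-major \(4 \times 4\) matrix with translate and z-rotation builders; points are row vectors, so the transforms compose left-to-right. The mat4 name and helper functions are mine, not from any particular library:
C++
#include <cmath>

// Row-major 4x4 matrix; row vectors multiply on the left.
struct mat4
{
  double m[4][4];
};

mat4 multiply(const mat4 &a, const mat4 &b)
{
  mat4 r = {};
  for (int i = 0; i < 4; ++i)
    for (int j = 0; j < 4; ++j)
      for (int k = 0; k < 4; ++k)
        r.m[i][j] += a.m[i][k] * b.m[k][j];
  return r;
}

mat4 identity()
{
  mat4 r = {};
  r.m[0][0] = r.m[1][1] = r.m[2][2] = r.m[3][3] = 1.0;
  return r;
}

// Translation lives in the bottom row, as in the matrices above.
mat4 translate(double tx, double ty, double tz)
{
  mat4 r = identity();
  r.m[3][0] = tx;  r.m[3][1] = ty;  r.m[3][2] = tz;
  return r;
}

// Counter-clockwise rotation about the z-axis.
mat4 rotate_z(double theta)
{
  mat4 r = identity();
  r.m[0][0] =  std::cos(theta);  r.m[0][1] = std::sin(theta);
  r.m[1][0] = -std::sin(theta);  r.m[1][1] = std::cos(theta);
  return r;
}

// Rotate in place about (cx, cy, cz): translate the pivot to the
// origin, rotate about z, then translate back.
mat4 rotate_about(double theta, double cx, double cy, double cz)
{
  return multiply(multiply(translate(-cx, -cy, -cz), rotate_z(theta)),
                  translate(cx, cy, cz));
}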
Projection Transformation
The last thing to do is to convert our 3D model into an image. We have three-dimensional coordinates that must be mapped to a two-dimensional surface. To do this, we will project a view of our world space onto a flat two-dimensional screen. This is known as the "projection transformation" or "projection matrix".
Coordinate Systems
It may already be apparent, but if not I will state it anyway: a linear transformation is a mapping between two coordinate systems. Whether we translate, scale, or rotate, we are simply changing the location and orientation of the origin. We have actually used the origin as our frame of reference, so our transformations appear to be moving our geometric objects. This is a valid statement, because the frame of reference determines many things.
We need a frame of reference that can be used as the eye or camera. We call this frame of reference the view space, and the eye is located at the viewpoint. In fact, up to this point we have also used two other vector spaces: local space (also called model space) and world space. To be able to create our projection transformation, we need to introduce one last coordinate space, called screen space. Screen space is a 2D coordinate system defined by the \(uv\)-plane, and it maps directly to the display image, or screen.
The diagram below depicts all four of these different coordinate spaces:
The important concept to understand is that all of these coordinate systems are positioned, scaled, and oriented relative to each other. We construct our transforms to move between these coordinate spaces. To render a view of the scene, we need to construct a transform to go from world space to view space.
View Space
The view space is the frame of reference for the eye. Basically, our view of the scene. This coordinate space allows us to transform the world coordinates to a position relative to our view point. This makes it possible for us to create a projection of this visible scene onto our view plane.
We need a point and two vectors to define the location and orientation of the view space. The method I demonstrate here is called the "eye, at, up" method, because we specify a point for the location of the eye, a vector from the eye to the focal point, at, and a vector that indicates which direction is up. I like this method because I think it is an intuitive model for visualizing where your viewpoint is located, how it is oriented, and what you are looking at.
Right or Left Handed?
The view space is a left-handed coordinate system, which differs from the right-handed coordinate systems that we have used up to this point. In math and physics, right-handed systems are the norm. This places the \(y\)-axis up and the \(x\)-axis pointing to the right (the same as the classic 2D Cartesian grid), and the resulting \(z\)-axis points toward you. In a left-handed space, the positive \(z\)-axis points into the paper. This is very convenient in computer graphics, because \(z\) is typically used to measure and indicate depth.
You can tell which type of coordinate system you have with your hand: point your fingers along the positive \(x\)-axis with your palm facing the positive \(y\)-axis, and your thumb points along the positive \(z\)-axis. If this works with your right hand, the system is right-handed; if it works with your left hand, it is left-handed.
Let's define a simple view space transformation.
\( \eqalign{ eye&= [\matrix{2 & 3 & 3}] \\ at&= [\matrix{0 & 0 & 0}] \\ up&= [\matrix{0 & 1 & 0}] }\)
Simple Indeed.
From this, we can create a view located at \([\matrix{2 & 3 & 3}]\) that is looking at the origin and is oriented vertically (no tilt to either side). We start by using the vector operations I demonstrated in my previous post to define vectors for each of the three coordinate axes.
\( \eqalign{ z_{axis}&= \| at-eye \| \\ &= \| [\matrix{-2 & -3 & -3}] \| \\ &= [\matrix{-0.4264 & -0.6396 & -0.6396}] \\ \\ x_{axis}&= \| up \times z_{axis} \| \\ &= \| [\matrix{-3 & 0 & 2}] \| \\ &= [\matrix{-0.8321 & 0 & 0.5547}]\\ \\ y_{axis}&= z_{axis} \times x_{axis} \\ &= [\matrix{-0.3547 & 0.7687 & -0.5322}] }\)
We now take these three vectors, and plug them into this composite matrix that is designed to transform world space to a new coordinate space defined by axisaligned unit vectors.
\( \eqalign { T_{view}&=\left\lbrack \matrix{ x_{x_{axis}} & x_{y_{axis}} & x_{z_{axis}} & 0 \\ y_{x_{axis}} & y_{y_{axis}} & y_{z_{axis}} & 0 \\ z_{x_{axis}} & z_{y_{axis}} & z_{z_{axis}} & 0 \\ -(x_{axis} \cdot eye) & -(y_{axis} \cdot eye) & -(z_{axis} \cdot eye) & 1 \\ } \right\rbrack \\ \\ &=\left\lbrack \matrix{ -0.8321 & -0.3547 & -0.4264 & 0 \\ 0 & 0.7687 & -0.6396 & 0 \\ 0.5547 & -0.5322 & -0.6396 & 0 \\ 0 & 0 & 4.6904 & 1 \\ } \right\rbrack }\)
Because the eye is looking directly at the origin, \(x_{axis} \cdot eye\) and \(y_{axis} \cdot eye\) both work out to zero, and the remaining translation term is the distance from the eye to the origin.
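Continuing the mat4 sketch from the transform discussion above (again, the names and helpers are illustrative, not Alchemy APIs), the "eye, at, up" construction might look like this:
C++
#include <cmath>

struct vec3 { double x, y, z; };

vec3   subtract(const vec3 &a, const vec3 &b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
double dot(const vec3 &a, const vec3 &b)      { return a.x * b.x + a.y * b.y + a.z * b.z; }

vec3 cross(const vec3 &a, const vec3 &b)
{
  return { a.y * b.z - a.z * b.y,
           a.z * b.x - a.x * b.z,
           a.x * b.y - a.y * b.x };
}

vec3 normalize(const vec3 &v)
{
  double len = std::sqrt(dot(v, v));
  return { v.x / len, v.y / len, v.z / len };
}

// Build the view transform: the axes become the columns, and the
// bottom row re-centers the eye at the origin of view space.
mat4 look_at(const vec3 &eye, const vec3 &at, const vec3 &up)
{
  vec3 z = normalize(subtract(at, eye));
  vec3 x = normalize(cross(up, z));
  vec3 y = cross(z, x);

  mat4 r = identity();
  r.m[0][0] = x.x;  r.m[0][1] = y.x;  r.m[0][2] = z.x;
  r.m[1][0] = x.y;  r.m[1][1] = y.y;  r.m[1][2] = z.y;
  r.m[2][0] = x.z;  r.m[2][1] = y.z;  r.m[2][2] = z.z;
  r.m[3][0] = -dot(x, eye);
  r.m[3][1] = -dot(y, eye);
  r.m[3][2] = -dot(z, eye);
  return r;
}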
Gaining Perspective
The final step is to project what we can see from the \(eye\) in our view space onto the view plane. As I mentioned previously, the view plane is commonly referred to as the \(uv\)-plane, and the \(uv\) coordinate system refers to the 2D mapping on that plane. The projection mapping that I demonstrate here places the \(uv\)-plane some distance \(d\) from the \(eye\), so that the view-space \(z\)-axis is parallel with the plane's normal.
In the context of the projection transform, the \(eye\) is commonly referred to by another name: the center of projection. That is because this final step essentially projects each visible point onto the \(uv\)-plane, along a vector that originates at the center of projection and points at that point in view space.
The cue that lets us perceive perspective is called foreshortening. This effect is caused by objects appearing much smaller as they move farther from our vantage point. Therefore, we can recreate foreshortening in our image by scaling the \(xy\) values based on their \(z\)-coordinate. Basically, the farther away an object is placed from the center of projection, the smaller it will appear.
The diagram above shows a side view of the \(y\) and \(z\) axes, looking down the \(x\)-axis. Notice the similar triangles that are created by the view plane. The smaller triangle spans the distance \(d\) from the center of projection to the view plane. The larger triangle extends out to the point \(P_v\), at a distance given by the \(z\)-component of \(P_v\). Using the property of similar triangles, we have a geometric relationship that allows us to scale the location of the point to its view-plane intersection:
\(\frac{x_s}d = \frac{x_v}{z_v}\) 
\(\frac{y_s}d = \frac{y_v}{z_v}\) 
\(x_s = \frac{x_v}{z_v/d}\) 
\(y_s = \frac{y_v}{z_v/d}\) 
With this, we can define a perspectivebased projection transform:
\( T_{project}=\left\lbrack \matrix{ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1/d \\ 0 & 0 & 0 & 0} \right\rbrack \)
This is where the homogenizing component \(w\) returns. This transform has the effect of scaling the \(w\) component to create:
\( \eqalign{ X&= x_v \\ Y&= y_v \\ Z&= z_v \\ w&= z_v/d }\)
This is why we need to access these values as shown below after the transform is applied:
\( \eqalign{ [\matrix{X & Y & Z & w}] &= [\matrix{x_v & y_v & z_v & 1}] T_{project}\\ \\ x_s&= X/w \\ y_s&= Y/w \\ z_s&= Z/w }\)
Scale the Image Plane
Here is one last piece of information that I think you will appreciate. The projection transform above defines a \(uv\) coordinate system that is \(1 \times 1\). If you make the following adjustment, you will scale the unit plane to match the size of your final image in pixels:
\( T_{project}=\left\lbrack \matrix{ width & 0 & 0 & 0 \\ 0 & height & 0 & 0 \\ 0 & 0 & 1 & 1/d \\ 0 & 0 & 0 & 0} \right\rbrack \)
Summary
We made it! With a little bit of knowledge from both Linear Algebra and Trigonometry, we have been able to construct and manipulate geometric models in 3D space, concatenate transform matrices into compact and efficient transformation definitions, and finally create a projection transform to create a twodimensional image from a threedimensional definition.
As you can probably guess, this is only the beginning. There is so much more to explore in the creation of computer graphics. Some of the more interesting topics that generally follow after the 3D projection is created are:
 Shading Models: Flat, Gouraud and Phong.
 Shadows
 Reflections
 Transparency and Refraction
 Textures
 Bump-mapping ...
The list goes on and on. I imagine that I will continue to write on this topic and address polygon fill algorithms and the basic shading models. However, I think things will be more interesting if I have code to demonstrate and possibly an interactive viewer for the browser. I have a few demonstration programs written in C++ for Windows, but I would prefer to create something that is a bit more portable.
Once I decide how I want to proceed I will return to this topic. However, if enough people express interest, I wouldn't mind continuing forward demonstrating with C++ Windows programs. Let me know what you think.
References
Watt, Alan, "Three-dimensional geometry in computer graphics," in 3D Computer Graphics, Harlow, England: Addison-Wesley, 1993.
Foley, James D., et al., "Geometrical Transformations," in Computer Graphics: Principles and Practice, 2nd ed. in C, Reading, MA: Addison-Wesley, 1996.
I previously introduced how to perform the basic matrix operations commonly used in 3D computer graphics. However, we haven't yet reached anything that resembles graphics. I now introduce the concepts of geometry as they relate to computer graphics. I intend to provide you with the remaining foundation necessary to mathematically visualize the concepts at work. Unfortunately, we will still not reach the "computer graphics" stage with this post; that must wait until the next entry.
This post covers a basic form of a polygonal mesh used to represent models, as well as points and Euclidean vectors. With the previous entry and this one, we will have all of the background necessary for me to cover matrix transforms in the next post, specifically the 3D projection transform, which provides the illusion of perspective. This will be everything that we need to create a wireframe viewer. For now, let's focus on the topics of this post.
Modeling the Geometry
We will use models composed of a mesh of polygons. All of the polygons will be defined as triangles. It is important to consistently define the points of each triangle either clockwise or counterclockwise. This is because the definition order affects the direction a surface faces. If a single model mixes the order of point definitions, your rendering will have missing polygons.
The order of point definition for the triangle can be ignored if a surface normal is provided for each polygon. However, to keep things simple, I will only be demonstrating with basic models that are built from an index array structure. This data structure contains two arrays. The first array is a list of all of the vertices (points) in the model. The second array contains a threeelement structure with the index of the point that can be found in the first array. I define all of my triangles counterclockwise while looking at the front of the surface.
Generally, your models should be defined at or near the origin. The rotation transformation that we will explore in the next post requires a pivot point, which is defined at the origin. Model geometries defined near the origin are also easier to work with when building models from smaller components and using instancing to create multiple copies of a single model.
Here is a definition for a single triangle that starts at the origin and lies adjacent to the \(y\) plane. The final corner decreases in \(y\) as it moves diagonally along the \(xz\)-plane:
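A sketch of such a definition, assuming a simple vertex structure and a three-index triangle structure; the vertex_t and triangle_t names and the exact coordinates are illustrative:
C++
// Each triangle indexes three entries in the vertex array,
// listed counter-clockwise while looking at the front face.
struct vertex_t   { double x, y, z; };
struct triangle_t { size_t p1, p2, p3; };

const vertex_t points[] = {
  { 0.0,  0.0, 0.0 },   // at the origin
  { 0.0,  1.0, 0.0 },   // up the y-axis
  { 1.0, -1.0, 1.0 }    // y decreases along the diagonal of the xz-plane
};

const triangle_t triangles[] = {
  { 0, 1, 2 }
};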


This definition represents a unit cube starting at the origin. A cube has 8 corners that form six square faces, and each square requires two triangles to represent it. Therefore, there are 8 points used to define 12 triangles:
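Here is a sketch of the cube using the same hypothetical structures. The coordinates of \(pt_0\) through \(pt_7\) match the points used in the worked examples later in this post; \(pt_3\) and the exact pairing of triangles into squares are my own reconstruction:

// A unit cube with one corner at the origin.
const Point3 cube_points[] = {
    { 0.0f, 0.0f, 0.0f },   // pt_0
    { 1.0f, 0.0f, 0.0f },   // pt_1
    { 1.0f, 1.0f, 0.0f },   // pt_2
    { 0.0f, 1.0f, 0.0f },   // pt_3
    { 0.0f, 1.0f, 1.0f },   // pt_4
    { 1.0f, 1.0f, 1.0f },   // pt_5
    { 1.0f, 0.0f, 1.0f },   // pt_6
    { 0.0f, 0.0f, 1.0f }    // pt_7
};

// 12 triangles, all wound counterclockwise when viewed from outside the cube.
const Triangle cube_faces[] = {
    { 7, 6, 4 }, { 6, 5, 4 },   // front  (z = 1), includes surface A
    { 5, 1, 2 }, { 5, 6, 1 },   // right  (x = 1), includes surface D
    { 0, 3, 2 }, { 0, 2, 1 },   // back   (z = 0)
    { 0, 7, 4 }, { 0, 4, 3 },   // left   (x = 0)
    { 2, 4, 5 }, { 2, 3, 4 },   // top    (y = 1)
    { 6, 0, 1 }, { 6, 7, 0 }    // bottom (y = 0), includes surface J
};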


Vectors
Recall from my previous post that a vector is a special type of matrix, in which there is only a single row or column. You may be aware of a different definition for the term "vector"; something like
NOUN
A quantity having direction as well as magnitude, especially determining the position of one point in space relative to another.
Oxford Dictionary
From this point forward, when I use the term "vector", I mean a Euclidean vector. We will even use single-row and single-column matrices to represent these vectors; we're just going to refer to those structures as matrices to minimize the confusion.
Notational Convention
In computer graphics it is most common to see vectors represented as single-row matrices, as opposed to standard mathematical notation, which uses column matrices to represent vectors. It is important to keep this difference in mind based on the type of reference you are following.
The reason for this difference is that the row form makes composing the transformation matrix more natural, because the operations are performed left-to-right. I discuss the transformation matrix in the next entry. As you will see, properly constructing transformation matrices is crucial to achieving the expected results, and anything that we can do to simplify this sometimes complex topic will improve our chances of success.
Representation
As the definition indicates, a vector has a direction and magnitude. There is a lot of freedom in that definition for representation, because it is like saying "I don’t know where I am, but I know which direction I am heading and how far I can go." Therefore, if we think of our vector as starting at the origin of a Cartesian grid, we could then represent a vector as a single point.
This is because, starting from the origin, the direction of the vector is towards the point, and the magnitude can be calculated using Pythagoras' Theorem as the distance between the two points. All of this information can be encoded in a compact, single-row matrix. As the definition indicates, we derive a vector from two points by subtracting one point from the other, the inverse of matrix addition.
This gives us the direction to the destination point relative to the starting point. The subtraction effectively acts as a translation of the vector's starting point to the origin.
For these first few vector examples, I am going to use 2D vectors, because they are simpler to demonstrate and diagram; the only difference when moving to 3D vectors is an extra field.
Here is a simple example with two points (2, 1) and (5, 5). The first step is to encode them in a matrix:
\( {\bf S} = \left\lbrack \matrix{2 & 1} \right\rbrack, {\bf D} = \left\lbrack \matrix{5 & 5} \right\rbrack \)
Let’s create two vectors, one that determines how to get to point D starting from S, and one to get to point S starting from D. The first step is to subtract the starting point from the destination point. Notice that if we were to subtract the starting point from itself, the result is the origin, (0, 0).
\( \eqalign{ \overrightarrow{SD} &= [ \matrix{5 & 5} ] - [ \matrix{2 & 1} ] \cr &= [ \matrix{3 & 4} ] } \)
\( \eqalign{ \overrightarrow{DS} &= [ \matrix{2 & 1} ] - [ \matrix{5 & 5} ] \cr &= [ \matrix{-3 & -4} ] } \)
Notice that the only difference between the two vectors is the direction. The magnitude of each vector is the same, as expected. Here is how to calculate the magnitude of a vector:
Given:
\( \eqalign{ v &= [\matrix{a_1 & a_2 & \ldots & a_n}] \cr \|v\| &= \sqrt{a_1^2 + a_2^2 + \ldots + a_n^2} } \)
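In code, the magnitude is a direct transcription of this formula. A sketch for the 3D case, reusing the hypothetical Point3 type from the model definitions above:

#include <cmath>

// Magnitude (length) of a vector: the Pythagorean distance
// from the origin to the point that encodes the vector.
float magnitude(const Point3 &v)
{
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

For \(\overrightarrow{SD} = [\matrix{3 & 4}]\) above, treated as 3D with a zero z component, this returns \(\sqrt{3^2 + 4^2} = 5\).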
Operations
I covered scalar multiplication and matrix addition in my previous post. However, I wanted to briefly revisit the topic to demonstrate one way to think about vectors, which may help you develop a better intuition for the math. For the coordinate systems that we are working with, vectors are straight lines.
When you add two or more vectors together, you can think of them as moving the tail of each successive vector to the head of the previous vector. Then add the corresponding components.
\( \eqalign{ v &= A+B \cr \|v\| &= \sqrt{3^2 + 4^2} \cr &= 5 }\)
Regardless of the orientation of the vector, it can be decomposed into its individual contributions per axis. In our model for the vector, the contribution for each axis is the raw value defined in our matrix.
\( v = A+B+C \)
\( \eqalign{ \|v\| &= \sqrt{(3+2-3)^2 + (1+3+2)^2} \cr &= \sqrt{2^2 + 6^2} \cr &= 2 \sqrt{10} }\)
Unit Vector
We can normalize a vector to create a unit vector, which is a vector that has a length of 1. The unit vector reduces a vector to a pure statement of its direction. This type of vector can then be used as a unit of measure against which other vectors are compared, and it simplifies the task of deriving other important information when necessary.
To normalize a vector, divide the vector by its magnitude. In terms of the matrix operations that we have discussed, this is simply scalar multiplication of the vector with the inverse of its magnitude.
\( u = \cfrac{v}{\|v\|} \)
\(u\) now contains the information for the direction of \(v\). With algebraic manipulation we can also have:
\( v = \|v\| \, u \)
This says that the vector \(v\) is equal to the unit vector \(u\), which carries the direction of \(v\), multiplied by the magnitude of \(v\).
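In code, normalization is just the scalar multiplication described above. A sketch, continuing with the hypothetical types and helpers from the earlier snippets:

// Normalize a vector: scale it by the inverse of its magnitude.
// A zero-length vector has no direction, so it is returned unchanged here;
// production code would need an explicit policy for that degenerate case.
Point3 normalize(const Point3 &v)
{
    const float length = magnitude(v);
    if (length == 0.0f)
        return v;

    return Point3{ v.x / length, v.y / length, v.z / length };
}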
Dot Product
The dot product is used for detecting visibility and shading of a surface. The dot product operation is shown below:
\( \eqalign{ w &= u \cdot v \cr &= u_1 v_1 + u_2 v_2 + \cdots + u_n v_n }\)
This formula may look familiar. That is because this is a special case in matrix multiplication called the inner product, which looks like this (remember, we are representing vectors as a row matrix):
\( \eqalign{ u \cdot v &= u v^T \cr &= \matrix {[u_1 & u_2 & \cdots & u_n]} \left\lbrack \matrix{v_1 \\ v_2 \\ \vdots \\ v_n } \right\rbrack }\)
We will use the dot product on two unit vectors to help us determine the angle between these vectors; that’s right, angle. We can do this because of the cosine rule from Trigonometry. The equation that we can use is as follows:
\( \cos \Theta = \cfrac{u \cdot v} {\|u\| \|v\|} \)
To convert this to the raw angle, \(\Theta\), you can use the inverse function of cosine, arccos. In C and C++, the name of the function is acos().
\( \arccos(\cos \Theta) = \Theta \)
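Sketched in code with the same hypothetical types; the clamp guards against floating-point rounding pushing the cosine just outside the legal input range of acos():

#include <algorithm>
#include <cmath>

// Dot product of two vectors.
float dot(const Point3 &u, const Point3 &v)
{
    return u.x * v.x + u.y * v.y + u.z * v.z;
}

// Angle between two vectors, in radians, derived from the cosine rule.
float angle_between(const Point3 &u, const Point3 &v)
{
    const float c = dot(normalize(u), normalize(v));
    return std::acos(std::max(-1.0f, std::min(1.0f, c)));
}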
So what is actually occurring when we calculate the dot product that allows us to get an angle?
We are essentially calculating the length of the projection of \(u\) onto \(v\). This gives us enough information to manipulate the trigonometric relationships to derive \( \Theta \).
I would like to make one final note regarding the dot product. Many times it is only necessary to know whether the angle between two vectors is acute, right, or obtuse, and the sign of the dot product tells us this directly. The following table shows the relationship between the dot product of two vectors and the angle, \( \Theta \), which is in the range \( 0° \text{ to } 180° \). If you prefer radians, that is \( 0 \text{ to } \pi \).
\( w = u \cdot v = f(u,v) \)
\( f(u,v) = \cases{ w \gt 0 & \text{ if } \Theta \lt 90° \cr w = 0 & \text{ if } \Theta = 90° \cr w \lt 0 & \text{ if } \Theta \gt 90° }\)
Cross Product
The cross product is used to calculate a vector that is perpendicular to a surface. The name for this vector is the surface normal. The surface normal indicates which direction the surface is facing. Therefore, it is used quite extensively for shading and visibility as well.
We have reached the point where we need to start using examples and diagrams with three dimensions. We will use the three points that define our triangular surface to create two vectors. Remember, it is important to use consistent definitions when creating models.
The first step is to calculate our planar vectors. What does that mean? We want to create two vectors that are coplanar (lying in the same plane), so that we may use them to calculate the surface normal. All that we need to do this are three non-collinear points, and each triangle definition in our polygonal mesh model gives us exactly that.
If we have defined a triangle \(\triangle ABC\), we will define two vectors, \(u = \overrightarrow{AB}\) and \(v = \overrightarrow{AC}\).
If we have consistently defined the triangle surfaces in our model, we should now be able to take the cross product \(u \times v\) to obtain the surface normal, \(N_p\), of \(\triangle ABC\). Again, the normal vector is perpendicular to its surface.
In practice, this tells us which direction the surface is facing. We combine it with the dot product to determine whether we are interested in this surface for the current view.
So how do you calculate the cross product?
Here is what you need to know in order to calculate the surface normal vector:
\(\eqalign { N_p &= u \times v \cr &= [\matrix{(u_2v_3 - u_3v_2) & (u_3v_1 - u_1v_3) & (u_1v_2 - u_2v_1)}] }\)
To help you remember how to perform the calculation, I have rewritten it in parts, and used \(x\), \(y\), and \(z\) rather than the index values.
\(\eqalign { x &= u_yv_z - u_zv_y \cr y &= u_zv_x - u_xv_z \cr z &= u_xv_y - u_yv_x }\)
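Transcribed into the running C++ sketch:

// Cross product u x v: a vector perpendicular to both inputs.
// The order of the operands matters; v x u points the opposite way.
Point3 cross(const Point3 &u, const Point3 &v)
{
    return Point3{
        u.y * v.z - u.z * v.y,   // x
        u.z * v.x - u.x * v.z,   // y
        u.x * v.y - u.y * v.x    // z
    };
}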
Let's work through an example to demonstrate the cross product. We will use surface A from the axis-aligned cube model that we constructed at the beginning of this post. Surface A lies entirely in a plane parallel to the XY plane. How do we know this? Because all of its points have the same z-coordinate value of 1. Therefore, we should expect our calculation of \(N_p\) for surface A to produce a \(z\) axis-aligned vector pointing in the positive direction.
Step 1: Calculate our surface description vectors \(u\) and \(v\):
\( Surface_A = pt_7: (0,0,1), pt_6: (1,0,1), \text{ and } pt_4: (0,1,1) \)
\( \eqalign { u &= pt_6 - pt_7 \cr &= [\matrix{1 & 0 & 1}] - [\matrix{0 & 0 & 1}] \cr &= [\matrix{\color{red}{1} & \color{green}{0} & \color{blue}{0}}] }\)
\( \eqalign { v &= pt_4 - pt_7 \cr &= [\matrix{0 & 1 & 1}] - [\matrix{0 & 0 & 1}] \cr &= [\matrix{\color{red}{0} & \color{green}{1} & \color{blue}{0}}] }\)
Step 2: Calculate \(u \times v\):
\(N_p = [ \matrix{x & y & z} ]\text{, where:}\)
\( \eqalign { x &= \color{green}{u_y}\color{blue}{v_z} - \color{blue}{u_z}\color{green}{v_y} \cr &= \color{green}{0} \cdot \color{blue}{0} - \color{blue}{0} \cdot \color{green}{1} \cr &= 0 }\)
\( \eqalign { y &= \color{blue}{u_z}\color{red}{v_x} - \color{red}{u_x}\color{blue}{v_z} \cr &= \color{blue}{0} \cdot \color{red}{0} - \color{red}{1} \cdot \color{blue}{0} \cr &= 0 }\)
\( \eqalign { z &= \color{red}{u_x}\color{green}{v_y} - \color{green}{u_y}\color{red}{v_x} \cr &= \color{red}{1} \cdot \color{green}{1} - \color{green}{0} \cdot \color{red}{0} \cr &= 1 }\)
\(N_p = [ \matrix{0 & 0 & 1} ]\)
This is a \(z\) axis-aligned vector pointing in the positive direction, which is exactly what we expected.
Notation Formality
If you look at other resources to learn more about the cross product, you will find it defined as a sum of three terms, where each component calculation is a coefficient of one of the axis-aligned unit vectors \(i=[\matrix{1&0&0}]\), \(j=[\matrix{0&1&0}]\), and \(k=[\matrix{0&0&1}]\); the final vector is the sum of those scaled basis vectors. The difference between what I presented and the form with the standard vectors \(i\), \(j\), and \(k\) is that I omitted a step by placing the result calculations directly into a new vector. I wanted you to be aware of this discrepancy between what I have demonstrated and what is typically taught in the classroom.
Apply What We Have Learned
We have covered quite a few concepts between the two posts that I have written regarding the basic math and geometry involved with computer graphics. Let's apply this knowledge and solve a useful problem. Detecting whether a surface is visible from a specific point-of-view is a fundamental problem in this domain. Since we have all of the tools necessary to solve it, let's run through two examples.
For the following examples, we will specify the viewpoint as \( pt_{eye} = [\matrix{3 & 2 & 3}] \).
Also, refer to this diagram for the surfaces that we will test for visibility from the viewpoint. The side surface is surface \(D\) from the cube model, and the hidden surface is the bottom-facing surface \(J\).
Detect a visible surface
Step 1: Calculate our surface description vectors \(u\) and \(v\):
\( \eqalign{ Surface_D: &pt_5: (1,1,1), \cr &pt_1: (1,0,0), \cr &pt_2: (1,1,0) } \)
\( \eqalign { u &= pt_1 - pt_5 \cr &= [\matrix{1 & 0 & 0}] - [\matrix{1 & 1 & 1}] \cr &= [\matrix{0 & -1 & -1}] }\)
\( \eqalign { v &= pt_2 - pt_5 \cr &= [\matrix{1 & 1 & 0}] - [\matrix{1 & 1 & 1}] \cr &= [\matrix{0 & 0 & -1}] }\)
Step 2: Calculate \(u \times v\):
\(N_{pD} = [ \matrix{x & y & z} ]\text{, where:}\)
\( \eqalign { x &= (-1) \cdot (-1) - (-1) \cdot 0 \cr &= 1 }\)
\( \eqalign { y &= (-1) \cdot 0 - 0 \cdot (-1) \cr &= 0 }\)
\( \eqalign { z &= 0 \cdot 0 - (-1) \cdot 0 \cr &= 0 }\)
\(N_{pD} = [ \matrix{1 & 0 & 0} ]\)
Step 3: Calculate vector to the eye:
To calculate a vector to the eye, we need to select a point on the target surface and create a vector from it to the eye. For this example, we will select \(pt_5\):
\( \eqalign{ \overrightarrow{view_D} &= pt_{eye} - pt_5 \cr &= [ \matrix{3 & 2 & 3} ] - [ \matrix{1 & 1 & 1} ] \cr &= [ \matrix{2 & 1 & 2} ] }\)
Step 4: Normalize the viewvector:
Before we can use this vector in a dot product, we must normalize the vector:
\( \eqalign{ \|\overrightarrow{view_D}\| &= \sqrt{2^2 + 1^2 + 2^2} \cr &= \sqrt{4 + 1 + 4} \cr &= \sqrt{9} \cr &= 3 \cr \cr eye_u &= \cfrac{\overrightarrow{view_D}}{\|\overrightarrow{view_D}\|} \cr &= \cfrac{1}{3} [\matrix{2 & 1 & 2}] \cr &= \left\lbrack\matrix{\cfrac{2}{3} & \cfrac{1}{3} & \cfrac{2}{3}} \right\rbrack }\)
Step 5: Calculate the dot product of the viewvector and surface normal:
\( \eqalign{ w &= eye_{u} \cdot N_{pD} \cr &= \cfrac{2}3 \cdot 1 + \cfrac{1}3 \cdot 0 + \cfrac{2}3 \cdot 0 \cr &= \cfrac{2}3 }\)
Step 6: Test for visibility:
\(w \gt 0\), therefore \(Surface_D\) is visible. 
Detect a surface that is not visible
Step 1: Calculate our surface description vectors \(u\) and \(v\):
\( \eqalign{ Surface_J: &pt_6: (1,0,1), \cr &pt_0: (0,0,0), \cr &pt_1: (1,0,0) } \)
\( \eqalign { u &= pt_0 - pt_6 \cr &= [\matrix{0 & 0 & 0}] - [\matrix{1 & 0 & 1}] \cr &= [\matrix{-1 & 0 & -1}] }\)
\( \eqalign { v &= pt_1 - pt_6 \cr &= [\matrix{1 & 0 & 0}] - [\matrix{1 & 0 & 1}] \cr &= [\matrix{0 & 0 & -1}] }\)
Step 2: Calculate \(u \times v\):
\(N_{pJ} = [ \matrix{x & y & z} ]\text{, where:}\)
\( \eqalign { x &= 0 \cdot (-1) - (-1) \cdot 0 \cr &= 0 }\)
\( \eqalign { y &= (-1) \cdot 0 - (-1) \cdot (-1) \cr &= -1 }\)
\( \eqalign { z &= (-1) \cdot 0 - 0 \cdot 0 \cr &= 0 }\)
\(N_{pJ} = [ \matrix{0 & -1 & 0} ]\)
Step 3: Calculate vector to the eye:
For this example, we will select \(pt_6\):
\( \eqalign{ \overrightarrow{view_J} &= pt_{eye} - pt_6 \cr &= [ \matrix{3 & 2 & 3} ] - [ \matrix{1 & 0 & 1} ] \cr &= [ \matrix{2 & 2 & 2} ] }\)
Step 4: Normalize the viewvector:
Before we can use this vector in a dot product, we must normalize the vector:
\( \eqalign{ \|\overrightarrow{view_J}\| &= \sqrt{2^2 + 2^2 + 2^2} \cr &= \sqrt{4 + 4 + 4} \cr &= \sqrt{12} \cr &= 2 \sqrt{3} \cr \cr eye_u &= \cfrac{\overrightarrow{view_J}}{\|\overrightarrow{view_J}\|} \cr &= \cfrac{1}{2 \sqrt{3}} [\matrix{2 & 2 & 2}] \cr &= \left\lbrack\matrix{\cfrac{1}{\sqrt{3}} & \cfrac{1}{\sqrt{3}} & \cfrac{1}{\sqrt{3}}} \right\rbrack }\)
Step 5: Calculate the dot product of the viewvector and surface normal:
\( \eqalign{ w &= eye_{u} \cdot N_{pJ} \cr &= \cfrac{1}{\sqrt{3}} \cdot 0 + \cfrac{1}{\sqrt{3}} \cdot (-1) + \cfrac{1}{\sqrt{3}} \cdot 0 \cr &= -\cfrac{1}{\sqrt{3}} }\)
Step 6: Test for visibility:
\(w \lt 0\), therefore \(Surface_J\) is not visible. 
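All six steps collapse into just a few lines of code. Here is a sketch built from the hypothetical helpers defined earlier in this post. Note that only the sign of the final dot product matters for a visible/hidden answer, so the normalization in step 4 can safely be skipped:

// Subtract point b from point a, yielding the vector from b to a.
Point3 subtract(const Point3 &a, const Point3 &b)
{
    return Point3{ a.x - b.x, a.y - b.y, a.z - b.z };
}

// True if the triangle (a, b, c), wound counterclockwise as seen from
// its front, faces the eye point.
bool is_visible(const Point3 &a, const Point3 &b, const Point3 &c,
                const Point3 &eye)
{
    const Point3 u      = subtract(b, a);      // Step 1: planar vectors
    const Point3 v      = subtract(c, a);
    const Point3 normal = cross(u, v);         // Step 2: surface normal
    const Point3 view   = subtract(eye, a);    // Step 3: vector to the eye

    // Steps 4 through 6: only the sign of the dot product is needed.
    return dot(view, normal) > 0.0f;
}

Called with the points of \(Surface_D\) and \(Surface_J\) from the examples above, this reports surface D as visible and surface J as hidden.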
What has this example demonstrated?
Surprisingly, this basic task demonstrates every concept that we have learned from the two posts up to this point:
 Matrix Addition: Euclidean vector creation
 Scalar Multiplication: Unit vector calculation, multiplying by \(1 / \|v\| \).
 Matrix Multiplication: Dot Product calculation; square-matrix multiplication will be used in the next post.
 Geometric Model Representation: The triangle surface representation provided the points for our calculations.
 Euclidean Vector: Calculation of each surface's planar vectors, as well as the view vector.
 Cross Product: Calculation of the surface normal, \(N_p\).
 Unit Vector: Preparation of vectors for dot product calculation.
 Dot Product: Used the sign from this calculation to test for visibility of a surface.
Summary
Math can be difficult sometimes, especially with all of the foreign notation. However, with a bit of practice the notation soon becomes second nature, and many calculations can be performed in your head, much like basic arithmetic with integers. While many graphics APIs hide these calculations from you, don't be fooled: they still exist and must be performed to display three-dimensional images on your screen. By understanding the basic concepts that I have demonstrated, you will be able to better troubleshoot your graphics programs when something goes wrong, even if you rely solely on a graphics library to abstract away the complex mathematics.
There is only one more entry to go, and we will have enough basic knowledge of linear algebra, Euclidean geometry, and computer displays to be able to create, manipulate and display 3D models programmatically. The next entry discusses how to translate, scale and rotate geometric models. It also describes what must occur to create the illusion of a threedimensional image on a twodimensional display with the projection transform.
References
Watt, Alan, "Three-dimensional geometry in computer graphics," in 3D Computer Graphics, Harlow, England: Addison-Wesley, 1993.