Embedded Alchemy

Alchemy

Alchemy is a collection of independent library components that specifically relate to efficient low-level constructs used with embedded and network programming.

The latest version of Embedded Alchemy[^] can be found on GitHub.
The most recent entries as well as Alchemy topics to be posted soon:
  • Steganography[^]
  • Coming Soon: Alchemy: Data View
  • Coming Soon: Quad (Copter) of the Damned
Alchemy: Documentation[^]

Alchemy: Prototype

CodeProject, C++, maintainability, Alchemy, design

This is an entry for the continuing series of blog entries that documents the design and implementation process of a library. This library is called Network Alchemy[^]. Alchemy performs data serialization and it is written in C++.

I have written about many of the concepts that are required to implement Alchemy. However, up to this point Alchemy has only remained an idea. It's time to use the concepts that I have demonstrated in previous entries and create a prototype of the library. This will allow us to evaluate its value and determine if it has the potential to fulfill its intended goals.

Prototype

Notice how I referred to the code I am presenting in this entry as a Prototype. That's because I am careful to refer to my "proof of concept" work only as a prototype. In fact, it is common for me to develop an idea this far during the design phase of a project to determine if an idea is feasible. I want to know that a path of development is likely to succeed before I commit to it.

Building a prototype provides a plethora of information that can help complete the design phase as well as succeed in the stages that follow. For example, I can examine how difficult this approach has been, how much time I have spent to reach this point, how well the result matches my needs and expectations, and finally, I can compare this approach to similar problems that I have solved previously. Sometimes it is not even possible to understand the problem at hand unless you experiment.

I typically develop my prototypes in a test harness. However, I do not write the tests as thoroughly. The test harness acts more like a sandbox to play in, rather than a set of regression tests to protect me in the future. Individual tests can be used to logically separate and document different ideas as you develop them. With a little bit of discipline, this sandbox implementation can be a chronicle of how you reached this outcome. The best benefit of all is that because the prototype runs in a test harness, it hardly resembles a complete program that management may perceive as almost complete.

I will expand on this last point in a future post. Remember, this is just a prototype. If you do decide to follow the approach you took with this experimental code, resist the urge to start your actual project from the prototype itself. Feel free to reference it, but do not continue to build upon it.

The Plan

How can you arrive at a destination if you don't know where you're going? You need to have a plan, or at least a set of goals, to define what you are working towards.

As a proof of concept, I want to be able to demonstrate these things:

  1. Define a message format
  2. Populate the data with the same syntax as the public fields in a struct
  3. Convert the byte-order for the appropriate fields in the message
  4. All of these must require minimal effort for the end-user

This seems like a huge step compared to what I have built and presented up to this point. So let's review the pieces that we have to work with, and determine which parts of the prototype they will help complete. The titles of each section are hyperlinks to the posts in which I introduced the concepts.

Typelist

The Typelist is the foundation upon which this solution is built. The Typelist is simply a type; instantiating it as an object at run-time provides no value. There is no data in the object, nor are there any functions defined within it. The value is in the type information encoded within its definition.

The Typelist alone cannot achieve any of the goals that I laid out above. However, it is a key contributor to solving the first problem: define a message format.
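
To make that concrete, here is a minimal sketch of the idea, simplified to four entries; the actual Alchemy definition supports many more entries and differs in detail:

C++

#include <cstddef>

// A placeholder terminator type for unused slots (illustrative).
struct empty { };

// The Typelist encodes a sequence of types and nothing else.
template < typename T0,
           typename T1 = empty,
           typename T2 = empty,
           typename T3 = empty >
struct Typelist
{
  // No data members and no member functions; only the encoded
  // type information has value.
};

// The demo format used later in this entry:
typedef Typelist< char, size_t, short > DemoType;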

Message Interface

The message interface object is another important component for solving the first goal. The Message object is a template class, and the parameterized types are a Typelist and a byte-order. The Message is the object that will be instantiated and will manage the user's data. These two components combined can demonstrate the first goal.

Data

The Datum object is a proxy container that will provide the access that we need to interact with the data fields defined in our message. Value constructors, conversion operators, and assignment operators are used to allow the user to interact with the Datum as if it were the type specified in the Message. This provides the ability to both get and set the values of these fields naturally, like a struct.

This takes care of the second goal, populate the data with the same syntax as the public fields in a struct.

Byte-order Processing

When I first started this blog, I created a post demonstrating some basic template programming techniques, and even a little template meta-programming. The entry described the issue of byte-order processing when you want to have portable code. Most platforms that support network programming also have a few functions to convert the byte-order of integer values. However, the functions are designed for types of a specific size. This can lead to maintenance problems if the type of a value is changed, but the conversion function is not.
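
As a sketch of that hazard (htons is the POSIX 16-bit conversion from <arpa/inet.h>; the Packet and serialize_length names are purely illustrative):

C++

#include <arpa/inet.h>
#include <cstdint>

struct Packet
{
  // Originally declared as uint16_t and serialized with htons()...
  uint32_t length;   // ...later widened to 32 bits.
};

uint32_t serialize_length(const Packet& p)
{
  // Bug: htons() accepts a uint16_t, so the upper two bytes of
  // 'length' are silently truncated. The call should now be htonl().
  return htons(p.length);
}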

The solution that I presented will handle all of the details. You simply call the byte-order conversion function for each and every field in your message, and the compiler will select the correct function specialization for each value. Furthermore, the constructs are designed to not perform any operations if the byte-order of your variable is already in the order requested by the caller.

For example, if you call to_host with a data value that is already in host order, the function will return the same value passed in. Fields that do not require byte-order conversions, such as char, are given a no-op implementation as well. Your optimizing compiler will simply eliminate these function calls in your release build.
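
The actual implementation from that earlier entry selects a specialization by type and size; a simplified sketch of the effect, written here with plain overloads, looks like this (EndianSwap is the name used later in this entry, but these bodies are illustrative):

C++

#include <cstdint>

inline uint16_t EndianSwap(uint16_t value)
{
  return static_cast< uint16_t >((value << 8) | (value >> 8));
}

inline uint32_t EndianSwap(uint32_t value)
{
  return  (value << 24)
       | ((value <<  8) & 0x00FF0000)
       | ((value >>  8) & 0x0000FF00)
       |  (value >> 24);
}

// char has no byte-order; the optimizer eliminates this call entirely.
inline char EndianSwap(char value)
{
  return value;
}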

This byte-order conversion code can solve many of the conversion issues elegantly. But it cannot provide enough functionality by itself to demonstrate any of the goals.

Typelist Operations

I described how to navigate through the different types defined in the Typelist with meta-functions. These operations are the key that allows us to programmatically process a user-defined message. We have the ability to calculate the required size of the buffer, the offset of each field, the size of each field, and most importantly, the type.
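
For example, a TypeAt meta-function over the simplified four-entry Typelist sketched earlier could be specialized per index (illustrative; Alchemy's real meta-functions are more general):

C++

#include <cstddef>

// Primary template: declared, never defined. An out-of-range index
// fails to compile.
template < size_t IdxT, typename ContainerT >
struct TypeAt;

template < typename T0, typename T1, typename T2, typename T3 >
struct TypeAt< 0, Typelist< T0, T1, T2, T3 > >
{
  typedef T0 type;
};

template < typename T0, typename T1, typename T2, typename T3 >
struct TypeAt< 1, Typelist< T0, T1, T2, T3 > >
{
  typedef T1 type;
};

// ...and so on for each remaining index.
// TypeAt< 1, Typelist< char, size_t, short > >::type is size_t.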

The challenge will be to create a construct that knows how to loop through each of the fields, because all of the types must be known at compile-time. We cannot rely on runtime decisions to create the correct solution. Moreover, it will be most beneficial if we can keep all of the decisions static, because this will give the optimizer more information to work with, and hopefully the result will be a faster program.

The Typelist operations combined with the byte-order conversion templates should be able to easily accomplish goal 3, Convert the byte-order for the appropriate fields in the message.

The only way to verify the final goal, All of these must require minimal effort for the end-user, is to put these components together and evaluate the experience for ourselves.

Assembling the pieces

Let's start by defining a simple message to work with. I am going to use three parameters of different types and sizes for this demo to keep the examples short, and still provide variety.

C++

typedef Typelist
<
  char,
  size_t,
  short    
>  DemoType;

We are only defining the types of the fields and their positions in the message at this point; that is why there are no names associated with the fields. Otherwise, this syntax is similar to how structs are defined. So far, so good. Next we need to define the message object and populate it with the data fields.

C++

template < class MessageT >
struct DemoTypeMsg
{
  // Define an alias to provide access to this parameterized type.
  typedef MessageT        format_type;
 
  Datum< 0, char >     letter;
  Datum< 1, size_t >   count;
  Datum< 2, short >    number;
};

This will satisfy goals 1 and 2, and so far it wasn't that much work; more than a struct definition, but it seems we are on a good path. I have learned through experience, and looking through a lot of code, mostly Boost, that it is helpful to provide alias declarations for your parameterized types. Now, we need to consider: what is it going to take to use the Typelist operations on these Datum objects?

We have specified a template parameter for the Typelist, but it has not been incorporated into the implementation in any way. My entry about the Message Interface describes many utility functions, and the Message derives from a message Typelist class. This is how the connection will be made between the parameters defined by the user, the Typelist, and the main Message object.

C++

typedef Message< DemoTypeMsg, HostByteOrder >    DemoMsg;
typedef Message< DemoTypeMsg, NetByteOrder >     DemoMsgNet;

In my actual definition of the Message outer class, I have specified HostByteOrder as the default type. This is because most users of the class will only be concerned with the HostByteOrder type. The NetByteOrder messages should only be encountered right before the message is serialized and sent over the network or saved to a file.

Demonstration of goals 1 and 2

C++

// This is a simpler definition for HostByteOrder.
typedef Message< DemoTypeMsg >    DemoMsg;
 
// Usage:
DemoMsg msg;
 
msg.letter = 'A';
msg.count =  sizeof(short);
msg.number = 100;
 
// This also works.
msg.count = msg.number;
// msg.count now equals 100.

Programmatic byte-order conversion

We have the message structure, data fields, a Typelist to define the format, and even Typelist operations and a defined index into the Typelist. But there is not yet a way to associate a Datum entry with a type entry in the Typelist. Since we're prototyping, let's come up with something quick-and-dirty.

C++

template < class MessageT >
struct DemoTypeMsg
{
  typedef MessageT        format_type;
  // Template member function
  template < size_t IdxT >
  Datum< IdxT, format_type >&
    FieldAt();
};

Whoa, whoa, whoa... WTF is that? (I also said that when I previewed the code first pasted into the blog.) Let's break it down one line at a time. As the comment indicates, this is a template member function of the user-defined class, which requires an unsigned number that I have designated as an index (line 6). It returns a reference to a Datum object that requires an index and a Typelist (line 7). The function is called FieldAt() (line 8). Finally, this function does not have an implementation.

We now have a template function declaration that does not have an implementation. This is not a big change; however, this path is slowly becoming more complicated, and I am becoming a little wary of this solution. But I will cautiously continue to see what we can learn and accomplish.

Implementing FieldAt()

The FieldAt() function is intended to return a reference to the actual Datum at the specified index. This will make the connection between the Typelist operations and the actual message fields. We only provide a declaration of the template, because there will be a unique implementation of FieldAt() defined for each Datum in the message. If an invalid index is requested, the program will not compile. Here is the definition that is required for each field:

C++

Datum< 0, char >     letter;
template < >
Datum< 0, format_type >&  FieldAt< 0 >()
{
  return letter;
}

This is not much extra work, but it is definitely more complicated than defining a single field in a struct. It is also prone to copy-paste errors. We have managed to bridge the gap, and I would at least like to see if the concept works. So what would it take to programmatically process each field of the message? Keep in mind, I want everything to be determined at compile-time as much as possible. Otherwise a simple for-loop would solve the problem. I like the generic algorithms in STL, especially std::for_each. I think that it would be possible to use the standard version if I had an iterator mechanism to feed to the function. But I don't, yet. So I set out to create a static version of this function because I think it will be the simpler path.

Static ForEach

What do I mean by "Static ForEach?" I want to provide a functor to a meta-function that can compile and optimize as much of the processing as possible. The other challenge is that our code would require a lot of dispatch code to handle the variety of types that could potentially be processed if we simply used a callback function or a hand-written loop. The reason is that template functions can only be called when every parameterized value is known at compile-time. A function like this will not compile:

C++

#include <iostream>

template < size_t IdxT >
void print_t()
{
  std::cout << "Value: " << IdxT << std::endl;
}
 
int main()
{
  for (int i = 0; i < 5; ++i)
  {
    print_t< i >();  
  }
  return 0;
}

Output:

main.cpp:13:5: error: no matching function for call to 'print_t'
    print_t< i >();  
    ^~~~~~~~~~~~
main.cpp:4:6: note: candidate template ignored: invalid 
    explicitly-specified argument for template parameter 'IdxT'
void print_t()
     ^
1 error generated.

Go ahead. It's ok, give it a whirl.

Now, the code in the block below is what the compiler needs to successfully compile that code. That is why a regular for loop will not solve this problem. If you want to see it work, replace the for-loop in the block above with this code:

C++

// Explicitly defined calls.
// Now the compiler knows which values to use
// to create instantiations of the template.
  print_t< 0 >();
  print_t< 1 >();
  print_t< 2 >();
  print_t< 3 >();
  print_t< 4 >();

What we need to do is figure out a way to get the compiler to generate the code block above with explicitly specified parameterized values. The equivalent of a loop in meta-programming is recursion, and a meta-function must be implemented as a struct. With this knowledge, we can create a static and generic for_each function to process our Typelists.
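
For instance, a recursive meta-function could expand the five explicit calls above for us. This is a sketch that builds on the print_t template from the failing example:

C++

// Recursion replaces the loop: each instantiation calls the one
// below it, and a specialization terminates the chain.
template < size_t IdxT >
struct print_all
{
  static void run()
  {
    print_all< IdxT - 1 >::run();  // recurse toward zero first
    print_t< IdxT >();             // then print this index
  }
};

// Terminating specialization ends the recursion.
template < >
struct print_all< 0 >
{
  static void run()
  {
    print_t< 0 >();
  }
};

// print_all< 4 >::run() expands to print_t<0>() ... print_t<4>().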

User-based Function

We want to make the library calls as simple as possible, because if our library is too much work to set up and use, it won't be used. Therefore, the first thing to define is a template function that will be called by the user. Except in this case, we are the user, because we will be wrapping this function call in the byte-order conversion logic.

C++

template< size_t   BeginIndexT,
          size_t   EndIndexT,
          typename ContainerT,
          typename FunctorT
        >
FunctorT& ForEachType(FunctorT &fn)
{
  // A convenience typedef (extremely useful).
  typedef detail::ForEachTypeHelper< BeginIndexT,
                                     EndIndexT,
                                     ContainerT,
                                     FunctorT>     Handler;
  // Explicitly create an instance of the helper.
  // This is necessary to force the compiler instantiation.
  Handler process(fn);
  // Process the functor.
  process();
 
  return fn;
}

Meta-function Implementation

Next let's define the basic structure of our template functor, and work our way inward:

C++

template< size_t CurIndexT,
          size_t EndIndexT,
          typename ContainerT,
          typename FunctorT>
class ForEachTypeHelper
{
public:
  ForEachTypeHelper(FunctorT& fn)
    : ftor(fn)
  { }
 
  // The function operator that allows
  // this structure to operate as a functor.
  void operator()()
  {
    process< CurIndexT, EndIndexT, ContainerT >();
  }
 
private:
  // The parameterized functor
  FunctorT& ftor;
 
  // To be implemented
  // process < >();
};

The object above contains a constructor that allows us to store a reference, avoiding unnecessary copies. It also implements the function operator()(). This operator could take parameters if we desired, but in this case it is not necessary. The parameters would be placed in the second set of parentheses.

Now we need to define a function that operates like this:

C++

template< ... >
void process()
{
  // Call the functor's function operator for the current index.
 
  // If the current index is not the last,
  // recursively call this function with the next index.
}

It's actually a pretty simple and elegant solution, at least on paper or in pseudo-code. The syntax of templates in C++, and differences in how each compiler handles templates, obscure the elegance of this solution quite a bit. Nonetheless, here is the most standards-compliant version, as compiled with G++.

C++

template< size_t IndexT,
          size_t LastT,
          typename FormatT>
void process()
{
  typedef typename
    TypeAt< IndexT , FormatT >::type type_t;
  //    v--- This is interesting (i.e. Odd)
  ftor. template operator()
    < IndexT,
      type_t
    >(type_t());
 
  if (IndexT < LastT)
  {
    process< value_if< (IndexT < LastT),
                       size_t,
                       IndexT+1,
                       LastT>::value,
             LastT,
             FormatT>();
  }
}

Admittedly, that is just ugly to look at. That is why I recommend that new developers learning how to use the STL do not step into the STL containers; they are riddled with syntax like this, except with a lot of underscores and two-letter variable names. The real implementation of this in my source files has many comments to help break up the morass of compound statements and odd syntax.

How does this work?

I wish I knew, but it compiles so it must be good right?!

I am joking, mostly. The syntax originally started out much simpler. As I added more complex capabilities the syntax became stricter, and the alterations that were required by each compiler gradually morphed into the block above. Look at the block above: can you deduce what the code starting at line 9 is doing?

This block decomposes the code, including the odd use of template that seems out of place:

C++

// Original
ftor. template operator()
  < IndexT,
    typename TypeAt< IndexT , FormatT >::type
  >(type_t());
 
// Remove the template hint; type_t is the typedef for the TypeAt result.
ftor.operator()< IndexT, type_t >(type_t());
 
// Assume the compiler can deduce the parameterized types.
ftor.operator()(type_t());
 
// We are calling the function operator of our functor.
// The type_t() is a zero-initialized instance
// of the type for which we are calling.
 
ftor(type_t());

Why is that odd template there?

The template keyword is required to help disambiguate the intention of your call for the parser. Look at the tokens that would be generated without the use of template:

ftor . operator () < IndexT

After grouping a collection of the symbols, the compiler may end up with this:

ftor.operator() < IndexT

The compiler might think you are trying to perform a less-than operation between a member function call and the first template parameter. That is the reason. In this case, it may seem obvious that the only conclusion is that we are calling a parameterized function; however, I am sure examples exist that are much more ambiguous. Unfortunately I do not have any of them. Here is the list of tokens with the template hint, which tells the parser to continue looking for the right angle-bracket that closes the template argument list:

ftor. template operator() < IndexT, ... >

In case you are curious, Visual Studio would not accept the code with the template hint. This is one of the only places in Alchemy that requires a compiler #ifdef to solve the problem. I'm very proud of that.

One last thing to explain with the static functor: the terminating case. Here is the code I am referring to:

C++

// Test to determine whether to recurse or exit.
if (IndexT < LastT)
{
  // The value_if meta-function increments to the next element
  // IF IndexT is still less-than the last element.
  process< value_if< (IndexT < LastT),
                     size_t,
                     IndexT+1,
                     LastT>::value,
           LastT,
           FormatT>();
}

First, the required range to be used when instantiating this static for_each loop is the first index through the actual last index, not one past the last index as with the STL algorithms. Therefore, while the processed index is not the last index, the function will continue to recurse. The last time the process function will be called is when the index is incremented and becomes equal to the last index. The function will be called, the last index's functor will be called, then the if-statement is evaluated.

Since the current index is the same as the last index, no more recursion occurs and the chain ends. However, if the value_if statement were not in place for the call at the last index, a call would have been declared to process one past the last index, and this would be invalid. Even though there is no logical chance that function will be called, the compiler will still attempt to generate that instantiation. The value_if test prevents this illegal declaration from occurring.
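
For reference, a definition of value_if that is consistent with how it is used above might look like this (a sketch; the actual Alchemy definition may differ):

C++

// Selects between two compile-time values based on a predicate.
template < bool PredicateT, typename T, T TrueValueT, T FalseValueT >
struct value_if
{
  static const T value = TrueValueT;
};

// Specialization for the false case.
template < typename T, T TrueValueT, T FalseValueT >
struct value_if< false, T, TrueValueT, FalseValueT >
{
  static const T value = FalseValueT;
};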

Return to Byte-Order Conversion

I wasn't planning on explaining the static for-each loop to complete this entry, but it's done now. We need two things to complete the byte-order conversion functionality: a functor to pass to the for-each processor, and a user-accessible function. Here is the implementation of the functor that calls the previous byte-order swap implementation:

C++

template< typename FromMessageT,
          typename ToMessageT
        >
struct ByteOrderConversionFunctor
{
  //  Typedefs: They make template programming bearable.
  typedef FromMessageT                  from_message_type;
  typedef ToMessageT                    to_message_type;
  typedef typename
    from_message_type::message_type     message_type;
  typedef typename
    message_type::format_type           format_type;
 
  // Value Constructor
  ByteOrderConversionFunctor(const from_message_type& rhs)
    : input(rhs)
  { }
 
  from_message_type input;
  to_message_type   output;
 
  // void operator()(const value_type&) is defined below.
};

This is the function object's processing function. It processes only one item and then exits.

C++

template< size_t   Idx,
          typename value_type>
void operator()(const value_type&)
{
  value_type from_value  =
    input.template FieldAt< Idx >().get();
  value_type to_value    =
    from_value;
 
  // Create an instance of a selection template that will choose between
  // nested processing, and value conversion.
  ConvertEndianess< value_type > converter;
  converter(from_value, to_value);
  output.template FieldAt< Idx >().set(to_value);
}

This is the meta-function that finally invokes the byte-order conversion code. It is invoked through the converter(from_value, to_value) call above.

C++

template< typename T>
struct ConvertEndianess
{
  void operator()(const T &input,
                        T &output)
  {
    output = EndianSwap(input);
  }
};

User-friendly Function

Let's give the user a simple function to invoke byte-order conversion on an Alchemy message. This first block is a generic function written to reduce redundant code.

C++

template< typename MessageT,
          typename FromT,
          typename ToT
        >
Hg::Message< MessageT, ToT >
  convert_byte_order(
    const Hg::Message< MessageT, FromT >& from)
{
  typedef typename
    MessageT::format_type        format_type;
  ByteOrderConversionFunctor
    < Hg::Message< MessageT, FromT>,
      Hg::Message< MessageT, ToT>
    > ftor(from);
 
  Hg::ForEachType < 0,
                    Hg::length< format_type>::value - 1,
                    format_type
                  > (ftor);
 
  return ftor.output;
}

Here are the actual functions intended for the user.

to_network

C++

template< typename T >
Message< typename T::message_type, NetByteOrder>
  to_network(T& from)
{
  return detail::convert_byte_order
            < typename T::message_type,
              typename T::byte_order_type,
              NetByteOrder
            >(from);
}

to_host

C++

template< typename T >
Message< typename T::message_type, HostByteOrder>
  to_host(T& from)
{
  return detail::convert_byte_order
            < typename T::message_type,
              typename T::byte_order_type,
              HostByteOrder
            >(from);
}

Demonstration of goal 3

C++

// Once again we have these message types
typedef Message< DemoTypeMsg, HostByteOrder >    DemoMsg;
typedef Message< DemoTypeMsg, NetByteOrder >     DemoMsgNet;
 
DemoMsg msg;
 
msg.letter = 'A';
msg.count =  sizeof(short);
msg.number = 100;
 
// It doesn't get much simpler than this.
DemoMsgNet netMsg  = to_network(msg);
DemoMsg    hostMsg = to_host(netMsg);

When I reached this point, my enthusiasm was restored. This is exactly what I was aiming for when I set out to create Alchemy. Once the user has defined a message, the messages are natural and easy to work with. All of that hidden work for programmatic byte-order conversion is now handled automatically for the user. Again, it all depends on the user successfully defining a message, which has turned out to be more complicated than I wanted. So let's turn to another technique that I am fond of, at least for complicated definitions.

Preprocessor Code Generation

At the bottom of my preprocessor post, I mention one of my favorite techniques, which is used quite extensively in ATL and WTL to create table definitions. That is almost exactly what we need here. Let's see what it would take to simplify the work required for a user to define an Alchemy message.

As a temporary usage convention through development, whatever name is used for the Typelist definition, the text 'Msg' will be appended to it.

HG_BEGIN_FORMAT

C++

#define HG_BEGIN_FORMAT(TYPE_LIST)          \
template < class MessageT >                 \
struct TYPE_LIST ## Msg                     \
{                                           \
  typedef MessageT        format_type;      \
  template < size_t IdxT >                  \
  Datum< IdxT, format_type >& FieldAt();

HG_DATUM

C++

#define HG_DATUM(IDX, TYPE, NAME)            \
Datum< IDX, TYPE >     NAME;                 \
template < >                                 \
Datum< IDX, format_type >& FieldAt< IDX >()  \
{ return NAME; }

HG_END_FORMAT

C++

#define HG_END_FORMAT         };

Demonstration of goal 4

That was straightforward. Here is a sample message definition with this set of macros.

C++

// The new Alchemy message declaration
HG_BEGIN_FORMAT(DemoType)
  HG_DATUM(0, char,   letter)
  HG_DATUM(1, size_t, count)
  HG_DATUM(2, short,  number)
HG_END_FORMAT

I think that qualifies as simple to use. I do think having to specify the actual index is annoying; it should be possible to remove it, and in fact it is, because I have done it. However, I will save that for a future post, because this one has already passed 4000 words.

We have the type information from the Typelist; why do we have to specify the type in the HG_DATUM entry? It would be possible to remove it; however, I think the type acts as a nice reference to have right next to the name of the field. It would also be possible to keep the user 'honest' and double-check that the type specified in the macro matches the type defined in the Typelist. I am not going to worry about that for now, because I think it is possible to actually define the Typelist inside of the message definition above. There will definitely be an entry posted when I do that.

Summary

Now we have seen that it is possible to define a message and interact with it as simply as if it were a struct. It is always nice to reach a point like this and see a basic working proof-of-concept. There were some surprising challenges to overcome to reach this point. However, in creating these few pieces, my experience with the functional programming paradigm has started to improve my problem-solving skills in the other programming paradigms.

More work needs to be done before this library can be used for anything. In the near future I will describe how I implemented serialization to buffers, and I will add these additional types: packed-bits, nested fields, arrays, and vectors.

GitHub

When I started writing about Alchemy, I had intended to publish code with each posting that represented the progress up to that point. However, I have developed the library far beyond this point, and I am factoring in some of the lessons that I have learned along the way to improve the code. Even with source version control, it is a lot of work to create a downloadable set of source that matches what is demonstrated, compiles, and is generally useful.

So instead, I would like to let you know that Network Alchemy[^] is hosted on GitHub; I encourage you to take a look and send some feedback. At this point I am optimizing components and fixing a few pain points in its usage. While I program primarily on Windows, another contributor is in the process of integrating auto-tools so that an install and build setup can be created for all of the *nix-based systems.

Coliru Test

C++

This is a sample entry that I am using to integrate interactive code tutorials with the Coliru online compiler. This is currently a work in progress.

Example

C++

#include <iostream>

int main()
{
  std::cout << "Hello World";
  return 0;
}

Output:

Hello World

Evolution to the Internet of Things

general

The Internet of Things: what a great idea, because of all of the possibilities. I think the best place to test this would be in Utopia. Whenever a new invention or discovery is made, there is always a potential for misuse. For instance: fire, alcohol, software patents, YouTube; and the list becomes potentially endless when you start combining two or more seemingly innocuous things like Diet Coke and Mentos. Every business is racing to cash in on The Internet of Things, and some even want to be the leader. The reality is, this thing will come to life sooner or later. However, I think it would be best if we started out small and created many Intranets of Things (IaoT) first, then watched them evolve and develop into something valuable and safe.

The Concept

Before we go any further, let's specify exactly what we are referring to. The Internet itself is considered to be:

noun:

a vast computer network linking smaller computer networks worldwide (usually preceded by the). The Internet includes commercial, educational, governmental, and other networks, all of which use the same set of communications protocols.

Dictionary.com[^].

In the name of science, curiosity, waste reduction, profit margins and many other motivators, the next step is to start connecting other things to The Internet. Things refers to anything that we do not consider a traditional computing device; the set of items that could be considered things is everything outside the set of traditional computers. The word smart is often prefixed to these other items.

What can be a Thing?

At this point, anything is up for consideration. If you can think it, it is a candidate to become a smart thing on the IoT.

  • Household appliances (Refrigerator, stove, iron, light switches)
  • Doors and windows
  • Sensors (Moisture sensors in your plants and lawn)
  • Vehicles (Planes, Trains and Automobiles)
  • Animals (Pets/Livestock)
  • Your children
  • Your parent's children (and possibly your children's parents)
  • Tiny pebbles

What do smart things do?

Smart things report data. The more ambitious vision is for these smart things to be able to also receive data from other things and change their own behavior.

What's the purpose?

Many fascinating ideas have been proposed for what the IoT could be.

  • Household appliances interact with sensors on your doors, windows, and persons. If no one is in the house the iron or stove will shut themselves off for safety.
  • Your sprinkler system only waters the sections of your lawn that need watering according to the moisture sensors that report dry sections of your lawn.
  • Cars interact with each other on the road and help drivers avoid collisions.
  • When the previous concept has a failure, the remaining cars are notified of traffic accidents and suggested alternate routes are provided to save time.
  • Warehouse inventories are automatically restocked when the shelves detect a shortage of particular items.
  • We will be able to determine if a tree makes a sound when it falls in the forest and no one is around to hear it.

Improved safety, convenience, and efficiency, as well as answers to age-old philosophical questions: the IoT has a lot of potential.

Potential Misuse of the IoT

The history of The Internet, up to this point, has demonstrated that many companies are not capable of completely securing their data; correction, your data. There are millions of businesses in the world, and only the largest data breaches make the news, such as Target (there's some irony), Home Depot and the iCloud accounts of many celebrities.

Security

Security is an easy subject to attack. Most developers do not possess the knowledge and skills to develop secure software. Unfortunately, every developer that writes code for a product must write it securely, because even code that is not directly related to the encryption, authorization, or communication protocols may lead to a vulnerability. Simple mistakes and failures to verify correct execution can open the door for malicious users to create an exploit.

Even after the device engineers have developed The Perfectly Secure Device, another security factor still exists: the users and administrators of the equipment. When the device is not used properly, is misconfigured, or is not configured at all, leaving the default credentials, the device and what it protects is no longer secure. There are many potential points of failure when discussing security; only one weak spot must exist for a vulnerability to exist.

Privacy

Let's imagine (as opposed to the less precise assumption) that security is not an issue with the IoT. And we will also ignore the Big Brother aspect as well. There are two important elements required for the IoT to be useful: 1) Smart Things, 2) Data, lots of data. The limits are unknown as to what will be useful or relevant in the IoT. The majority of the data is innocuous by itself. However, when many pieces of data can be collected and compared to one another, patterns may emerge. Some inferences that can be made may be harmless, such as your shopping preferences. However, other patterns that can be inferred from your data may reveal quite personal facts that would cause you great embarrassment or worse.

Even though we imagine your data is "technically" secure, a problem still remains. The IoT is based upon your devices communicating with other devices on the internet, reporting data. The communication may only be restricted to trusted recipients like the original product manufacturer, service providers and your family. There's big business in data; any one of those sources could sell your data to someone else.

We could also imagine that the company behind your Smart Thing states it will keep your data private. However, the EULAs for many software products give the company the right to collect more data than is necessary for you to be able to properly use the software. This includes information like the times of day you use your computer, your internet browsing history, and your social site account names. Now consider how many conglomerate corporations exist. Even though a company states it will not sell your data, it will most likely state that it has the right to share the data with any of its subsidiaries.

Safety

The more I read about IoT, the more I read ideas about giving human control to the machines in our life. C'mon people! Haven't you seen The Terminator?! That movie is 30 years old now, or even The Matrix, which is only 15 years old. Actually, there are many more probable reasons to be concerned with this application of the IoT:

  • Sensor malfunction
  • Communication interruptions
  • Device incompatibilities
  • Design and implementation errors
  • Unintended accidents:
    • Misconfigurations
    • Making Smart Things do dumb things
    • Hobbyists
    • Groups of Hobbyists (Danger in numbers)
  • Malicious intent:
    • Hackers
    • Disgruntled employees of device manufacturers
    • Governments

People make mistakes, pure and simple. People are designing, building, installing and using this smart equipment. The importance of safety should not be overlooked. Why do airplanes cost so much to design and build? It is because of all of the regulations, restrictions and requirements the designers and manufacturers must follow. Once planes are sold, there are also strict regulations for the inspection and regular maintenance of these machines. And in most cases, people always have control of these extremely complex machines.

I believe we are very far from the point where our cars drive autonomously down the highway in constant communication with the road, traffic lights and other cars. The reason is cost. Things are not made the way they used to be; they are manufactured with much less quality now to lower the price and make profit on volume. As the price of appliances continues to drop, it becomes rarer that it is worth the money to have a repairman fix an existing appliance rather than buying a new one.

More careful designs and component redundancies will need to be added as the stakes rise for giving control to devices and machines. This will raise the price of these things. Then a cost-benefit analysis will be performed by the companies that make these devices to determine if they will sell enough things to recoup their investment. Much like pharmaceutical companies that neglect to invest in developing medicines for diseases that will not be profitable; this occurs primarily for diseases in developing countries where the patients cannot afford to pay for the medicines.

The IoT is Complex

Hopefully you have started to realize how complex the IoT really is. Up to this point in time, there is still only a vague notion of what this invention can be or will be. I think the IoT will be too complex for any single human to comprehend and understand. In a way, it may become an imitation of us and our interactions (hopefully without reaching self-awareness).

Creating the IoT

In order for the development of the IoT to be successful, many independent models of operation will need to be built. These are collections of device eco-systems that work successfully without the interference of outside influences. Design evolution can then start to take hold as the most successful models for the development and interaction of these Things are identified. This means that there will be many different microcosms of device collections that are incompatible with other device groups. Imagine two neighboring smart houses built upon different technologies, incapable of communicating with each other (or at least of interacting optimally).

The Evolution of Social Interaction for Machines

The obvious and novelty products that have been created up to this point cannot be the limit of what is created if the IoT is to succeed. More complex interactions between the devices will need to be created to allow the entire system to evolve into something more useful. The sum of the parts will need to become greater as a whole. How did humans eventually become so successful? The answer is the development of cities.

Cities became places of gathering, which provided more safety, stability, variety, and diversity. The needs of the people living in and near cities were more easily met. Trades and services sprang into existence because a family or tribe no longer needed to spend all of their time minimally meeting their needs. The citizens became proficient in their craft or trade and were able to benefit from the products offered by others that differed from their own. As cities grew larger, the social structures became more complex.

As devices become more capable, new ways to combine and apply the data will be identified. This larger, more diverse set of information that is collected in a device community, such as a smart house, will then start to provide increased value to its owner. The potential value could continue to grow with each device that is added to this mechanical community. Unfortunately the potential for exploitation will also grow. This is the point at which, in human cities, governments were developed along with laws and enforcement.

The point that I wanted to make is that we will need to teach these devices how to communicate with each other, and then interact. Most appliances and machines that we have today are already specialized, so that part of machine society will not be a challenge to develop. However, communication is an extremely complex topic to tackle. It's as if these machines have evolved to become farmers, shepherds, blacksmiths, bakers, haberdashers and pan-handlers on their own. There is no common language these machines can use to determine what trade they should learn, or more importantly, how their abilities can be of service to the other machines in their community.

Communication Protocols

Communication can range from very simple to extremely complex. Once again, look at human interaction: there are simple body-language cues, and there is the complex and precise language used in engineering designs or legal documents. And unlike engineering documents are supposed to be, legal documents are still open to precedent and interpretation. Machines are much simpler at this point in our history; they require very precise communication protocols, precise to the bit.

There are many network architectures, data routing protocols, and finally application communication protocols. At some point, machine communication protocols will need to be developed that are flexible enough for the machines to be allowed to interpret the information with a certain amount of freedom. I believe this is a scary prospect, especially if the risks are not properly assessed before giving the machines these capabilities.

I can just imagine machines developing their own Ponzi schemes that shift the electricity from some devices in the house to pay promised returns to the other machines as they build their empire. At this point it will be especially important for the smart house owner to continue to buy new devices at an exponential rate to keep the house running properly.

Complete Connectivity

I would be satisfied if the IoT ends at personal device communities, the smart house. However, the realization of autonomous cars interacting on the road to take their occupants to their destination safely will not happen unless the different device communities are taught to communicate with other communities.

This level of communication is so complex, I am not even sure what it would take to create it. The ability of a computer to comprehend abstract data is far less than a human's, and yet humans get into auto accidents every day. Yes, many times it is because of poor decisions by the human. However, this does not seem too much different from the results that could occur from a malfunctioning sensor on a machine.

Summary

The Internet of Things is very complex. Like it or not it is already here, and it is only in its infancy of development. How it is built and evolves depends quite a bit on how consumers choose to adopt it and what value they ultimately derive from this endeavor. The progression of advancement will most likely continue along the same path, where companies develop their own separate clouds. Smart-house eco-systems will be developed, as will equivalents for manufacturing plants and other environments. Eventually these systems must be taught to interoperate openly and freely for the most grandiose visions of the IoT to be realized. However, do not forget about the potential for abuse and exploitation of the IoT. Many challenges lie ahead.

Alchemy: Data

adaptability, portability, CodeProject, C++, maintainability, Alchemy, design

This is an entry for the continuing series of blog entries that documents the design and implementation process of a library. This library is called Network Alchemy[^]. Alchemy performs data serialization and it is written in C++.

By using the template construct Typelist, I have implemented a basis for navigating the individual data fields in an Alchemy message definition. The Typelist contains no data. This entry describes the foundation and concepts used to manage the data and to provide the user access to it in a natural and expressive way.

Value Semantics

I would like to briefly introduce value semantics. You probably know the core concept of value semantics by another name, copy-by-value. Value semantics places importance on the value of an object and not its identity. Value semantics often implies immutability of an object or function call. A simple metaphor for comparison is money in a bank account. We deposit and withdraw based on the value of the money. Most of us do not expect to get the exact same bills and coins back when we return to withdraw our funds. We do however, expect to get the same amount, or value, of money back (potentially adjusted by a formula that calculates interest).

Value semantics is important to Alchemy because we want to keep interaction with the library simple and natural. The caller will provide the values they would like to transfer in the message, and Alchemy will copy those values to the destination. There are many other important concepts related to value semantics; however, for now I will simply summarize the effects this will have on the design and the caller interface.

The caller will interact with the message sub-fields, as if they were the same type as defined in the message. And the value held in this field should be usable in all of the same ways the caller would expect if they were working with the field's type directly. In essence, the data fields in an Alchemy message will support:

  • Copy
  • Assignment
  • Equality Comparison
  • Relative Comparison

Datum

Datum is the singular form of the word data. I prefer to keep my object names terse yet descriptive whenever possible. Datum is perfect in this particular instance. A Datum entry will represent a single field in a message structure. The Datum object will be responsible for providing an opaque interface for the actual data, which the user is manipulating. An abstraction like Datum is required to hide the details of the data processing logic. I want the syntax to be as natural as possible, similar to using a struct.

I have attempted to write this next section a couple of times, describing the interesting details of the class that I'm about to show you. However, the class described below is not all that interesting, yet. As I learned a bit later in the development process, this becomes a pivotal class in how the entire system works. At this point we are only interested in a basic data-management object that provides value semantics. Therefore I am simply going to post the code with a few comments.

Data management will be reduced to one Datum instance for each field in a message. The Datum is ultimately necessary to provide the natural syntax to the user and hide the details of byte-order management and portable data alignment.

Class Body

I like to provide typedefs in my generic code that are consistent and generally compatible with the typedefs used in the Standard C++ Library. With small objects, you would be surprised by the many ways new solutions can be combined from an orthogonal set of compatible type definitions:

C++

template < size_t   IdxT,
           typename FormatT
         >
class Datum
{
public:
  //  Typedefs ***********************************
  typedef FormatT                        format_type;
  typedef typename
    TypeAt< IdxT, FormatT>::type         value_type;
 
  //  Member Functions ...
 
private:
  value_type         m_value;
};

The Datum object itself takes two parameters: 1) the field index in the Typelist, and 2) the Typelist itself. This is the most interesting statement from the declaration above:

C++

typedef typename
    TypeAt< IdxT, FormatT>::type         value_type;

This creates the typedef value_type, which is the type the Datum object will represent. It uses the TypeAt meta-function that I demonstrated in the previous Alchemy entry to extract the type. In the sample message declaration at the end of this entry you will see how this all comes together.
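
As a quick example of what this typedef yields, consider the eight-field format_t that is defined near the end of this entry; the index alone selects the field's type:

C++

// TypeAt< 2, format_t >::type evaluates to uint32_t, so the Datum
// at index 2 represents a uint32_t value.
typedef Datum< 2, format_t >::value_type third_field_type;  // uint32_t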

Construction

C++

// default constructor
Datum()
  : m_value(0)
{ }
 
// copy constructor
Datum(const Datum &rhs)
  : m_value(rhs.m_value)
{ }
 
// value constructor
Datum(value_type rhs)
  : m_value(rhs)
{ }

Generally it is advised to qualify all single-parameter constructors with explicit, because they can cause problems that are difficult to track down. These problems occur when the compiler is attempting to find "the best fit" for parameter types to be used in function calls. In this case, however, we do not want an explicit constructor; that would eliminate the possibility of the natural syntax that I am working towards.
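
A small sketch of why, assuming the Datum and format_t definitions from this entry are in scope (store() is a hypothetical user function, not part of Alchemy):

C++

// A hypothetical function that accepts a Datum by value.
void store(Datum< 0, format_t > d);

void example()
{
  Datum< 0, format_t > field(7);   // value construction
  store(19);                       // implicit conversion from int;
                                   // an explicit constructor would
                                   // reject this natural call
}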

Conversion and Assignment

Closely related to the value constructor is the type-conversion operator. This operator provides a means to typecast an object into the type defined by the operator. The C++ standard did not allow the explicit keyword on type-conversion operators before C++11. Regardless of which version of the language you are using, this operator will not be declared explicit in the Datum object.

C++

operator value_type() const {
  return m_value;
}
 
// Assign a Datum object
Datum& operator=(const Datum& rhs) {
  m_value =  rhs.m_value;
  return *this;
}
 
// Assign the value_type directly
Datum& operator=(value_type rhs) {
  m_value =  rhs;
  return *this;
}

Comparison operations

All of the comparison operators can be implemented in terms of less-than. Here is an example of how to define an equality test:

C++

bool operator==(const value_type& rhs) const {
  return !(m_value < rhs)
      && !(rhs < m_value);
}

I will generally implement a separate equality test because in many situations, simple data such as the length of a container could immediately rule two objects as unequal. Therefore, I use two basic functions to implement relational comparisons:

C++

bool equal(const Datum &rhs) const {
  return m_value == rhs.m_value;
}
 
bool less(const Datum &rhs) const {
  return m_value < rhs.m_value;
}

All of the comparison operators can be defined in terms of these two functions. This is a good thing, because it eliminates duplicated code, and moves maintenance into two isolated functions.

C++

bool operator==(const Datum& rhs) const {
  return  equal(rhs);
}
bool operator!=(const Datum& rhs) const {
  return !equal(rhs);
}
bool operator< (const Datum& rhs) const {
  return  less (rhs);
}
bool operator<=(const Datum& rhs) const {
  return  less (rhs) || equal(rhs);
}
bool operator>=(const Datum& rhs) const {
  return  !less (rhs);
}
bool operator> (const Datum& rhs) const {
  return  !operator<=(rhs);
}

Buffer read and write

One set of functions is still missing. These two functions are a read and a write operation into the final message buffer. I will leave these to be defined when I determine how best to handle memory buffers for these message objects.

Proof of concept message definition

Until now, I have only built up a small collection of simple objects, functions and meta-functions. It's important to test your ideas early, and analyze them often, in order to evaluate your progress and determine if corrections need to be made. So I would like to put together a small message to verify the concept is viable. First we need a message format:

C++

typedef Typelist
<
  uint8_t,
  uint16_t,
  uint32_t,
  int8_t,
  int16_t,
  int32_t,
  float,
  double
> format_t;

Next is a structure definition that defines each data field. Notice how simple our definition for a data field has become, given that we have a pre-defined Typelist entry to specify as the format. The instantiation of the Datum template will take care of the details based on the specified index:

C++

struct Msg
{
  Datum<  0, format_t > one;
  Datum<  1, format_t > two;
  Datum<  2, format_t > three;
  Datum<  3, format_t > four;
  Datum<  4, format_t > five;
  Datum<  5, format_t > six;
  Datum<  6, format_t > seven;
  Datum<  7, format_t > eight;
};

Finally, here is a sample of code that interacts with this Msg definition:

C++

Msg msg;
 
msg.one   = 1;
msg.two   = 2;
msg.three = 3;
 
// Extracts the value_type value from each Datum,
// and adds all of the values together.
uint32_t sum = msg.one
             + msg.two
             + msg.three;

Summary

All of the pieces are starting to fit together rather quickly now. There are only a few more pieces to develop before I will be able to demonstrate a working proof-of-concept Alchemy library. The library will only support the fundamental types provided by the language. However, message formats will be able to be defined, values assigned to the Datum fields, and the values written to buffers. These buffers will automatically be converted to the desired byte-order before transmitting to the destination.

To reach the working prototype, I still need to implement a memory buffer mechanism, the parent message object, and integrate the byte-order operations that I developed early on. Afterwards, I will continue to document the development, which will include support for these features:

  • Nested messages
  • Simulated bit-fields
  • Dynamically sized message fields
  • Memory access policies (allows adaptation for hardware register maps)
  • Utility functions to simplify use of the library

This feature set is called Mercury (Hg), as in Mercury, Messenger of the Gods. Afterwards, there are other feature sets that are orthogonal to Hg, which I will explore and develop. For example, adapters that will integrate Hg messages with Boost::Serialize and Boost::Asio, as well as custom-written communication objects. There is also a need for utilities to translate an incoming message format to an outgoing message format for forwarding.

Feel free to send me comments, questions and criticisms. I would like to hear your thoughts on Alchemy.

Value Semantics

general, CodeProject, C++, design

Value semantics for an object indicates that only its value is important. Its identity is irrelevant. The alternative is reference/pointer semantics; the identity of the object is at least as important as the value of the object. This terminology is closely related to pass/copy-by-value and pass-by-reference. Value semantics is a very important topic to consider when designing a library interface. These decisions ultimately affect user convenience, interface complexity, memory-management and compiler optimizations.

Regular Types

I am going to purposely keep this section light on the formal mathematics, because the rigorous proof and definition of these terms are not my goal for the essay. Throughout this entry, the word object will be used to mean any variable type that can store a value in memory. Regular types are the basis created by the set of properties which are common to all objects representable in a computer.

Huh?! Let's pare this down a little bit more. Specifically, we are interested in the set of operations that can be performed on objects in our program. Regular types promote the development of objects that are interoperable.

Value Semantics

If we choose the same syntax for operations on different types of objects, our code becomes more reusable. A perfect example is the fundamental (built-in) types in C++. Assignment, copy, equality, and address-of all use the same syntax to operate on these types. User types that are defined to support these regular operations in the same way the fundamental types do can participate in value semantics. That is, the value of the object can be the focus rather than the identity of the object.
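
For instance, a fundamental type and a regular user-defined type such as std::string can be manipulated with identical syntax:

C++

#include <string>

int main()
{
  int a = 10;
  int b = a;                    // copy
  bool same_int = (a == b);     // equality

  std::string s = "ten";
  std::string t = s;            // copy: identical syntax
  bool same_str = (s == t);     // equality: identical syntax

  return (same_int && same_str) ? 0 : 1;
}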

Reference Semantics

Reference semantics refers to when your object is always referred to indirectly. Pointers and references are an example of this. We will see in a moment how the possibility of multiple references to your object can quickly complicate your logic.

An Object's Identity

The identity of an object relates to unique information that identifies instances of an object such as its location and size in memory. Management of interactions with the object instance is often the primary focus when reference semantics are used. The reasons vary from efficiency by avoiding an expensive copy to controlled access to an object by multiple owners. Be sure to evaluate the purpose of each interaction with an object to determine what is most important.

When coding guidelines are blindly followed you will often find objects that are passed-by-reference when pass-by-value would suffice equally well. The compiler is able to perform copy elision when it detects it is safe to do so, while pass-by-reference adds an extra level of indirection. Eliminating the management of an object by identity often eliminates a resource the user must manage as well, such as a shared_ptr.

Scope of Reasoning

Global variables are considered bad because of their global scope of reasoning. Meaning that any other entity with access to the same variable could interfere with our interactions with it. Our ability to reason logically about interactions with a global variable is therefore considerably more complex, whereas a local variable's scope of reasoning is limited to the local scope.

Deterministic behavior is much easier to achieve with a smaller scope of reasoning. When value semantics are used exclusively to interact with an object, the scope of reasoning is instantly reduced to the local object's value. Reasoning becomes much simpler for both the developer and the compiler, and the code may become easier to read. Small and simple should be a goal every developer strives for.

Semantics

Let's return back to value semantics. Value semantics allows for equivalency relationships to be considered. For example, take the object x and give it the value 10. As the examples below demonstrate, there are many other value relationships that are equivalent to the object x with value 10.

a = 5 + 5;
b = 5 * 2;
c = 24 / 3 + 2;
d = 10;
e = x;

Some representations of a value are more efficient than others, such as 1/2 compared to sin(pi/6) (that is, sin(30°)). On computing hardware, some values may also be represented more accurately in certain forms than others. Therefore, one should always analyze the context of a problem to determine which object property should be the focus of the design.

Syntax

Our goal is to define regular types that adhere to the same syntax and semantics of the built-in types. We want to be able to interact with our objects using the natural syntax provided by value semantics.

C++ provides a rich set of tools to work with in order to create objects that use value semantics. A default implementation is provided for all of the operations required for an object to behave with value semantics. However, depending upon the implementation of the object, the default implementation may not be adequate.

Objects that allocate dynamic memory or manage resource handles may need to make a copy of the original resource. Otherwise two references to a single resource will exist, and when one object is destroyed the internal resources of the other object will become invalid. However, since the compiler is allowed to optimize the code by eliding copies, it is important that the copy constructor behaves just as the default copy constructor would; the default copy constructor performs a member-wise copy of each value.

This conundrum can be solved by using existing objects to internally manage the resources that must be duplicated. Standard containers such as the std::vector can manage duplication of resources. Other resources, such as system handles, will require a custom object to be implemented. This type of isolated memory management is important to allow your object to provide value semantics.
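As an illustrative sketch (the class is hypothetical), letting a standard container own the dynamic resource gives the object correct value semantics without a hand-written copy constructor:

C++

#include <vector>
 
// The compiler-generated copy operations perform a member-wise copy,
// and std::vector already duplicates its storage correctly on copy.
class Polygon
{
public:
  void add_vertex(int x, int y)
  {
    m_x.push_back(x);
    m_y.push_back(y);
  }
 
  size_t vertex_count() const { return m_x.size(); }
 
private:
  std::vector<int> m_x;  // owns its own storage
  std::vector<int> m_y;
};
 
// Polygon b = a; performs a deep copy of the vertex data;
// destroying one object leaves the other completely valid.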

Summary

This entry focused on the advantages of developing objects that use value semantics. Some of these advantages are:

  • Promotes interoperable software objects
  • Simplifies logic
  • Increases productivity
  • Increases performance

I purposely kept the discussion at a high level, with only a couple of small sketches, to introduce you to the concepts. The application of these concepts and concrete examples will appear in the next entry. I will introduce the Alchemy::Datum object, which is designed and implemented with value semantics.

Do As I Say, Not As I Do

general, CodeProject Send feedback »

How often are you given instructions by a person of authority and then at some later point in time witness them going against exactly what they just asked you to do?!

  • Your dad telling you not to drink out of the milk carton; then you catch him washing down a bite of chocolate cake with a swig directly from the milk carton.
  • You see a police car breaking the speed limit.
  • You see the host of a party that you are at double-dip, even though the host has a "No Double-Dipping" policy.

Doesn't that just irritate you?

I'm sorry to inform you that I do not follow all of the practices that I present on this site; at least not anymore. There is a good reason, though: it is to teach you a valuable lesson. I type that facetiously, but it is actually the truth. Let me go a bit deeper into my methodology, and hopefully it will help you better understand.

I was once like you

"Good looking and wise beyond your years" you ask?

No. I was only one of those.

I was also tired of fixing the same bugs over and over, especially when I would fix one in one place, only to have a colleague unintentionally re-introduce the issue at a later date. I was tired of avoiding the radioactive code that was fragile and liable to cause mutations in your code if you changed even a variable name. I was looking for a better way. I knew of Extreme Programming (XP), which was ridiculed by my co-workers. I had used the Agile development methodology with a previous employer. However, I needed something that I could start with on my own. If it turned out to be good, then I could try to get others to adopt the practice.

So there I was at the book store. I didn't know what I wanted or needed. I hoped that I would recognize it if I saw it. That's when I saw this huge book by Gerard Meszaros called xUnit Test Patterns. I had heard of unit testing before. I had heard of JUnit; however, I was a C++ programmer.

... Except I'm pretty sure I had heard of CppUnit as well, but I had ignored all of these tools, mostly out of reluctance to learn a new way; otherwise I have no idea why I had ignored the frameworks that I soon discovered would drastically change the way I develop code. At first I skimmed Meszaros' book. It looked very intriguing. There appeared to be three sections to it; the first section had details for how to work with unit tests, and the other two looked like they got deeper into the details. I held onto this book and decided to scan the shelves to see what else there was.

Next I saw Test Driven Development: By Example, by Kent Beck. Both of these books had the same type of look and were part of Martin Fowler's signature series. Flipping through this book I saw the acronym TDD over and over. I had recognized that in the xUnit book as well. This book had a small bit of text annotating the steps for each change that was made to a small currency-exchange class, along with the unit tests used to verify the work as it was designed and built. It looked over-simplified, but I was still intrigued.

One last book caught my attention, Working Effectively with Legacy Code by Michael Feathers. Legacy Code! That's what I had been working in, or so I thought. I read the first chapter right there in the bookstore. Feathers' definition of Legacy Code is:

Code without tests is bad code. It doesn't matter how well written it is; it doesn't matter how pretty or object-oriented or well encapsulated it is. With tests, we can change the behavior of our code quickly and verifiably. Without them, we really don't know if our code is getting better or worse.

That was the end of the beginning. I bought all three books, and immediately started to peruse the text gleaning whatever I could. This just happened to be the start of the Christmas holiday season, and I usually plan things so I can take the last two weeks of the year off from work. I spent that time becoming acquainted with the concepts, because I intended to put them to good use.

Practice

I started with xUnit Test Patterns, and finished the first of its three sections. Including the excellent Foreword written by Martin Fowler, this was about 200 pages. There is a lot of information in this section, along with many diagrams. However, it is very well laid out, and I could start to imagine the possibilities for how I could put this to work. I didn't want to just keep reading though; I wanted to put these concepts into practice so I could better grasp the information as I continued to read. Besides, one of the books had "by Example" in the title.

So I searched the Internet for a unit test framework. Initially I was going to go with the one that Michael Feathers wrote, CppUnit. However, I soon discovered there was a plethora of frameworks to choose from, even just for C++. Furthermore, at the time I was working on mobile devices, which included Windows CE 4.0. This was an issue because, even though it was 2009, that compiler and its libraries were based upon the Visual Studio 6.0 compiler and its IDE. CppUnit required some advanced features of C++, including RTTI. Even though the unit tests do not need to run on the hardware, I do like to use the same libraries when I test; not to mention I did not know what I was doing yet.

I searched for some comparison articles and eventually found this one: Exploring the C++ Unit Testing Framework Jungle. This article settled it for me. I would learn with CxxTest. The reasons were:

  • It did not require any advanced features of C++
  • It could be included with only using header files
  • The tests were auto-registered by a Python script, which was already on my development machine
  • The tests still looked like C++

Now I was off and running. I started by writing unit tests for a basic utility class that I had been using for two years. I wrote unit tests that aimed to verify every branch of code in my class. I did well and I even enjoyed it. I managed to discover and fix one or two bugs in the process. I went ahead and created another test suite, this time I wrote tests for a series of stand-alone utility functions. I was really seeing the value of the test suite approach and felt like I was on a good path. The only problem was, by the time I created my third test suite, I was getting tired of creating and configuring test suite projects.

That is when I took a small detour and created the unit test wizard that I use with Visual Studio, which I posted a few months ago. When I created the unit test project wizard, I also thought it would be more convenient to have the tests automatically run as part of the compile rather than as a separate step. A big part of this was that the tests themselves would probably never be of much use outside of the development environment.

Enlightenment

When I returned to practicing the development of unit tests, I decided I would apply the concepts of Test Driven Development to a new object that I was in the process of writing. Immediately I noticed a benefit. I had already written a few dozen unit tests. Therefore, I was able to benefit from some of the experience required to develop interfaces and code that could be tested.

I noticed the code that I was developing started to take on a new form. I was writing much simpler solutions. The functions that I was writing were smaller; they did not contain extraneous features that were not needed. The primary reason is that I would have had to write a test to verify each feature. A secondary reason is that I didn't always have all of the information that I needed to implement a feature; therefore there would be no way to test the extra code. Essentially it was dead code cluttering up the important code.

This change in my development style was very surprising to me. It was such a simple technique and yet it had a profound impact upon the code that I now created. I continued to discover additional benefits from the code that I now produced. I was reusing these smaller functions to handle work that I would have previously duplicated. Again, duplicating this code would mean writing duplicate tests to verify this section of code. This is the part of the TDD process that really emphasizes refactoring.

One other characteristic that I noticed was that I started to break down larger functions into small functions. Even if the entire function would end up at fewer than 100 lines, I would break it up into possibly 3 to 10 smaller functions. This allowed the intent I was trying to convey in the top-level function to become clearer. Even though these sub-routines were not going to be used in any location but this top-level function, each contained one logical piece of behavior. This type of development bled over into the size of my objects as well. I found my smaller, cohesive objects were much more reusable, not to mention easier to verify.

Evolution

As time carried on, I continued to develop with this process. However, I became much more proficient at anticipating how these smaller components would need to be structured to build what I wanted. I now build more of my object's external interface up front. This is especially true as I develop simple prototypes. I will then adapt these prototype interfaces into the production version of the object and implement a test as I develop each internal piece of logic for the object.

As I get further along, I actually get a chance to use the interface that I originally thought was a good design. Sometimes I learn that what I have created is quite cumbersome and unnatural to work with, and I discover early on that I need to restructure the interface of the object, before I have gone too far in its development. However, I also feel a bit fettered, unable to see far enough ahead to anticipate the best approach to building a member-function interface without additional context. You could say that I struggle with "the chicken or the egg" conundrum. This adapted approach that I have evolved to use still follows the tenets taught with TDD, except that I do not strictly adhere to the exact process of always writing a test before I write code.

Summary

So you see, even though TDD is well defined and appears to be a tedious process, there is much to be learned if you practice and follow its mantra: Red, Green, Refactor. Also, if you have practiced TDD, learned, and evolved your own process, both of us should be able to look back and have a good laugh about how it seemed like I was a hypocrite. And one final set of thoughts: I'll never tell you not to take a swig of milk directly from the carton (just check the expiration date before you do), or get upset if I catch you double-dipping. Those are some of my own guilty pleasures as well. As for the topic of speeding, I will let you decide how to handle that on your own.

(For your convenience and use, I have provided these two 5-minute introductions to unit testing and test-driven development as a refresher, or for you to present if you are trying to improve the practices in your own organization.)

Selling Ideas to Management

communication, CodeProject Send feedback »

Does this scenario sound familiar?

  • You have identified a piece of troublesome logic in your code-base, which has been the source of many headaches for both you and your managers.
  • You have also determined an elegant fix to make the code maintainable and easy to work with for many years to come.
  • You make a request to your project managers to schedule some time for these improvements to be implemented.
  • When you make your pitch you are sure to mention how the quality of the software will be improved, and how the company will save money because the code will be easier to work with and fewer bugs will be reported by the customers.

Management seems to agree with your ideas and replies with:

"That sounds great, we should definitely do that. However, now is not a good time. We should be able to do that a couple of months from now."

Where's the Value?

There is a reason your request is not given a higher priority: you have not been able to communicate how this work will directly make more money for the company. It is true that saving money could lead to bigger profits. However, in this instance, saving money is much like saving time.

You cannot save time, only spend it

Although you have saved time and money related to one area of the code, your time and the company's money will still be spent elsewhere to improve the product. Nothing has changed in the software to make it more compelling for a customer to spend more money on your product. Essentially, there is no marketing checkbox to associate with the work.

Internal Value

This proposed improvement may not be to refactor a troublesome section of code. It could be upgrading your product to use a new API that will make it more compatible with other software. It could be improving the software patch system so it is an automated process for your team. It could even be fixing an incompatibility that, if not addressed soon, may some day prevent your company from selling software at all. The common theme is that all of these improvements represent work that provides internal value to your company.

Internal Beauty

Here is a similar scenario in a different context. Your best friend, or their boyfriend/girlfriend, would like to set you up on a blind date. As they are describing this person the first thing they say is "you're gonna love this person; they're funny and have a great personality." If you're a patient person you let them finish; otherwise you interrupt. Either way you most likely ask, "How attractive are they?" Both inner beauty and outer beauty are important. However, outer beauty is what generally grabs our attention and helps us decide if we want to take the time to get to know a person.

Internal Value is Abstract

As a developer you can recognize and appreciate the value that will be gained by refactoring troublesome code. You understand the complex ideas that have been expressed in a computer-based dialect. You have learned how the painful code has morphed over the years into a run-on sentence that keeps repeating itself like "The 'B' Book" from Dr. Seuss. However, this is a very foreign concept to observers such as your managers and your customers (it is important to understand that your managers are also a customer to you.) Over time, the concept of the abstract internal value is even lost on managers that were once software developers themselves.

Think like the customer

You have been a customer before. What do you think about when you are about to make a purchase?

  • Will this meet my needs?
  • What features does this model have that makes it so much more expensive than that one?
  • What kind of warranty and support is there if I have problems with it?
  • Does this contain any artificial sweeteners or preservatives?
  • Are these girl scout cookies made with real girl scouts?
  • Does it come in an enormous box that will entertain my kids in the backyard for hours?
  • Can I get it in lime green?

When you go to buy software, do you wonder how well the code was written, how many automated regression tests are built for the system, or if the code reads like an abstract version of a Dr. Seuss book? It's OK if you do; however, I think the more common questions customers ask are related to the check-boxes that marketing likes to put on the back of the box, the screen-shots of how cool the program looks, and the price. Do I value all of these features enough to pay this price for this program?

Change the Pitch

What does a customer want? They want to get a good deal on something they value. Customers expect software that works. Trying to pitch to your manager (a customer) that we will make the software work (better) does not sound like a good value. As disgusting as this may sound, you will have to think like you are a salesperson and marketer when making a pitch. It's not actually that disgusting at all, you simply need to look at your software from a different perspective.

When you first look from this new perspective, do not worry about your original goals to improve quality or ease of development.

  • What innovations can you think of that would pique a customer's interest?
  • How can you expand the feature set that you already have?
  • What features do your competitors have that you don't?
  • What optimizations have you discovered and how will that improve performance?

Any ideas that you have thought of are now potential candidates to propose to your manager. These ideas directly correlate to some external feature that demonstrates more value for your product. This extra value can be used to command a higher price, or it will be a more compelling reason to buy your product (a better deal for the customer). The last item listed above is somewhat unique, in that you can describe an internal improvement that will result in a direct external improvement. For some reason marketers really like features like "2x", "5x" or "10x faster". Who am I kidding, we all like to see that.

For internal software improvements that are not optimizations, such as quality or maintenance improvements, you will need to find some way to associate the modifications with one of your innovative features.

Yes, it is that simple.

No, this is not a cop-out.

Face it, asking management to stop normal development activities to add quality or maintenance enhancements is kind of like asking for extra time to do things that you should have been doing all along. This is true even if this is legacy code that you have inherited. You and your team are responsible for the "great personality" and internal beauty of your software. If you want to pitch ideas to management and the committees in your company that hold the purse strings, you will need to pitch ideas that provide external value; value that can be directly related to increased sales and profits.

How do I get the time to improve quality?

The answer to this question is:

        "you have to make the time."

You will have to adjust how you develop. The next time you will be modifying that troublesome spot of code, incorporate a little extra time to rework it. Until then, try isolating the trouble area into its own module behind some well-defined interfaces. This will allow you to work on the replacement module independently from the product version. I have many more thoughts on this evolutionary style of development that I will save for another time.

Summary

When your company or management has an open-call for ideas, you should participate and submit your ideas. However, only submit the ideas that are externally visible to your management and customers. These are the ideas that provide direct value to the bottom-line, which is a very important concern. As developers, we are fully aware of the amount of time that we can free up by improving the quality of code in "trouble" spots. We could empirically demonstrate the amount of money that would be saved by spending the time to fix the code. However, this type of project provides an indirect benefit to the bottom-line. It is much more difficult to convince people to pay for this type of work.

  • Be innovative and pitch features that are externally visible to management.
  • Save the internal quality improvement ideas for the development phase.
  • Adjust your style of development to evolve more quality into your code over time.

The Singleton Induced Epiphany

general, portability, reliability, CodeProject, maintainability, design Send feedback »


I am not aware of a software design pattern that has been vilified more than The Singleton. Like every other design pattern, the singleton has its merits. Given the right situation, it provides a simple and clean solution; and just like every other design pattern, it can be misused.

I have always had a hard time understanding why everyone was so quick to criticize the singleton. Recently, this cognitive dissonance has forced me on a journey that led to an epiphany I would like to share. This post focuses on the singleton, its valid criticisms, misplaced criticisms, and guidelines for how it can be used appropriately.

Oh! and my epiphany.

The Pattern


The Singleton is classified as an Object Creational pattern in Design Patterns - Elements of Reusable Object-Oriented Software, the Gang-of-Four (GOF) book. This pattern provides a way to ensure that only one instance of a class exists, and it also provides a global point of access to that instance.


This design pattern gives the control of instantiation to the class itself. This eliminates the obvious contention of who is responsible for creating the sole instance. The suggested implementation for this pattern is to hide the constructors, and provide a static member function to create the object instance when it is first requested.


There are many benefits that can be realized using the Singleton:

  • Controlled access to the sole instance of the object
  • Permits refinement of operations and implementations through subclassing
  • Permits a variable number of instances
  • More flexible than class operations
  • Reduced pollution of the global namespace
    • Common variables can be grouped in a Singleton
    • Lazy allocation of a global resource

Structure


Here is the UML class-structure diagram for the Singleton:

Singleton
-----------------------------
- instance : Singleton = null
-----------------------------
+ getInstance() : Singleton
- Singleton()

Criticisms

I have heard three primary criticisms against the use of a Singleton design pattern.

  • They are overused/misused
  • Global State
  • Difficult to correctly implement a multi-threaded singleton in language X


I will explain these criticisms in depth in a moment. First, on my quest I recognized these common criticisms and the topics they focused on. In many cases, however, I don't think the focus of the criticism was placed on the right topic. In the forest of problems, not every problem is a tree, not even The Mighty Singleton. There is a bigger picture to recognize and explore here.

Overuse/Misuse


"Here's a hammer, don't hurt yourself!"

That about sums it up for me; all kidding aside, a software design pattern is simply another tool to help us develop effective solutions efficiently. Maybe you have heard this question raised about the harm technology 'X' is capable of:

'X' can be really dangerous in the wrong hands, maybe it would be best if we never used 'X'?!

Now fill in the technology for 'X': fire, nuclear energy, hammers, Facebook... the list is potentially endless.


"Give an idiot a tool, and no one is safe" seems to be the message this sort of thinking projects. At some point we are all novices with respect to these technologies, tools, and design patterns. The difference between becoming an idiot and a proficient developer is the use of good judgment. It is foolish to think sound judgment is not a required skill for a programmer.

Global State


Globally scoped variables are considered bad because they are like the Wild West of data: there is no control, and anyone can use, modify, or destroy the data at any time. Let's back up a step and rephrase that last sentence to be more helpful: caution must be used when storing state in the global scope because its access is uncontrolled. Beyond that statement, how global state is used should be decided whenever it is being considered to help solve a problem.


I believe this is the webpage that is most often cited to me when discussing why the use of Singletons is bad: Why Singletons Are Controversial [^]. The two problems asserted by the author relate to global data. The first problem claims that objects that depend on singletons are inextricably coupled together and cannot be tested in isolation. The second objection is that dependencies are hidden when they are accessed globally.

The first problem is not too difficult to solve, at least with C and C++. I know enough Java and C# to be productive, but not enough to make any bold claims regarding the Singleton; if you know how to get around this perceived limitation in a different language, please post it as a comment. Regardless, the approach I would take in any language to separate these dependencies is to add another abstraction layer. When the resources are defined for reference or linking, refer to an object that can stand in for your Singleton, as in the sketch below.
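For illustration only, here is one way that extra layer might look in C++. The Clock interface and its functions are hypothetical, not from any library; production code registers the real implementation at startup, and a test registers a fake:

C++

// An interface stands in for the concrete singleton so that the
// sole instance can be replaced for testing.
class Clock
{
public:
  virtual ~Clock() { }
  virtual long now() const = 0;
 
  static Clock& instance()             { return *s_instance; }
  static void   set_instance(Clock& c) { s_instance = &c; }  // test seam
 
private:
  static Clock* s_instance;
};
 
Clock* Clock::s_instance = 0;
 
// Code under test depends only on Clock::instance().now();
// a unit test calls Clock::set_instance(fake_clock) first.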

The second objection suggests that all dependencies should be explicitly passed as parameters. My personal preference is not to have to type in parameters, especially if the only reason for adding a parameter is to pass it another layer deeper. Carrying around a multitude of data encapsulated within a single object is my preference.

Also consider system state that is simply read, but not written. A single point of controlled access to manage the lookup of system time and network connectivity may provide a much cleaner solution than a hodge-podge collection of unrelated system calls. The collection of dependencies is now managed in the Singleton. It is true that you may not know where every access of your Singleton occurs; however, you can know for certain what is accessed through the Singleton.

Multi-thread Safe Singleton Implementations

One challenge that must be accounted for when considering the use of a Singleton is its creation in a multi-threaded environment. However, the debates that stem from this initial conversation regarding thread-safe creation of the Singleton diverge into a long list of other potential issues. The final conclusion, therefore, is that the Singleton is very difficult to get right, so it shouldn't be used at all.


This type of rationale says very little about the Singleton and much more about the opinion of the person making the argument. One thing I think is important for everyone to understand is that there is no silver-bullet solution. No solution, technique or development process is the perfect answer for every problem.

Double-Checked Lock Pattern (DCLP)

The Double-Checked Lock Pattern is an implementation pattern that reduces the overhead of providing a thread-safe solution in a location that will be called frequently, yet where the call to synchronize is needed infrequently. The pattern uses two conditional statements to determine first if the lock needs to be acquired, and second to modify the resource, if necessary, after the lock has been acquired. Here is a pseudo-code example:

C++

// class definition ...
private:
  static Object* m_instance;   // shared, guarded instance pointer
 
public:
  static
    Object* GetInstance()
    {
      // Has the resource been created yet?
      if (null == m_instance)
      {
        // No, synchronize all threads here
        // (pseudo-code for acquiring a class-level lock).
        synchronize(lock);
        // Check again; another thread may have created
        // the instance while we waited for the lock.
        if (null == m_instance)
        {
          // First thread to continue enters here,
          // and creates the object.
          m_instance = new Object;
        }
      }
 
      return m_instance;
    }
// ...


The code above looks innocuous and straight-forward. However, it is important to understand that a compiler inspects this code and attempts to perform optimizations that will improve its speed while continuing to be reliable. This white-paper, C++ and the Perils of Double-Checked Locking [^], by Scott Meyers and Andrei Alexandrescu, is an excellent read that helped me reach my epiphany. While the concrete details focus on C++, the principles described are relevant for any language.

It's about 15 pages long and a fun read. The authors describe the hidden perils of the DCLP in C++. Every step of the way they dig deeper to show potential issues that can arise and why. With each issue I was thinking, "Yeah, but what if you used 'X'?" The very next paragraph would start like, "You may be thinking of trying 'X' to resolve this issue, however..." So in this regard, it was very entertaining for me, and it also opened my eyes to some topics I had been ignorant about.


The subtle perils that exist are due to the compiler's ability to re-order the processing of statements that are deemed unrelated. In some cases, this means that the value m_instance is assigned before the object's constructor has completed. This would actually permit a second thread to continue processing on an uninitialized object if it hits the first if statement after the first thread starts the call to:

m_instance = new Object;

The primary conclusion of the investigation is that constructs with hard sequence-points, or memory barriers, are required to create a thread-safe solution. The memory barriers do not permit the compiler and processor to execute instructions on different sides of the barrier until the memory caches have been flushed.

Here is the augmented pseudo-code that indicates where the memory barriers should be applied in C++ to make this code thread-safe:

C++

static
    Object* GetInstance()
    {
      Object* tmp = m_instance;
      [INSERT MEMORY BARRIER]
 
      // Has the resource been created yet?
      if (null == tmp)
      {
        // No, synchronize all threads here.
        synchronize(lock);
        tmp = m_instance;
        // Check again.
        if (null == tmp)
        {
          // First thread to continue enters here,
          // and creates the object.
          tmp = new Object;
          // Ensure construction completes before the
          // instance pointer is published.
          [INSERT MEMORY BARRIER]
          m_instance = tmp;
        }
      }
 
      return tmp;
    }
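With C++11, these barriers can finally be expressed portably. Here is a sketch of the same pattern using std::atomic with acquire/release ordering; the Widget class is illustrative, not from the paper:

C++

#include <atomic>
#include <mutex>
 
class Widget
{
public:
  static Widget* GetInstance()
  {
    // The acquire-load pairs with the release-store below.
    Widget* p = s_instance.load(std::memory_order_acquire);
    if (nullptr == p)
    {
      std::lock_guard<std::mutex> guard(s_mutex);
      p = s_instance.load(std::memory_order_relaxed);
      if (nullptr == p)
      {
        p = new Widget;
        // The release-store guarantees the constructor's writes
        // are visible before the pointer is published.
        s_instance.store(p, std::memory_order_release);
      }
    }
    return p;
  }
 
private:
  Widget() { }
  static std::atomic<Widget*> s_instance;
  static std::mutex           s_mutex;
};
 
std::atomic<Widget*> Widget::s_instance(nullptr);
std::mutex           Widget::s_mutex;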

I was struck by the epiphany when I read this sentence from the paper:

Finally, DCLP and its problems in C++ and C exemplify the inherent difficulty in writing thread-safe code in a language with no notion of threading (or any other form of concurrency). Multithreading considerations are pervasive, because they affect the very core of code generation.


My Epiphany

Before I explain my epiphany, take a quick look at the UML structure diagram again. The Singleton is deceptively simple. There isn't much to take in while looking at that class diagram. There is an object that contains a static member function. That's it.

The devil is in the details

  • How is it implemented?
  • How is it used?
  • What is it used for?
  • Which language is used to implement it?
  • How do you like to test your software?

I believe that the Singleton may be criticized so widely because of those who are too quick to think they understand all there is to know about this design pattern. When something goes wrong, it's not their misunderstanding of the pattern; the pattern is simply broken, so don't use it. Being bitten by a software bug caused by someone else's ignorance is frustrating. To me, watching someone dismiss something they do not understand is even more frustrating.


Before my epiphany, I felt many times like I was trying to stand up and defend the Singleton when I was involved in conversations focused on this design pattern. Now I realize that is not what I was trying to accomplish; I was trying to understand what the real issue was. Of the three arguments I commonly hear from above, the multi-threaded argument is the only one that seemed to be a valid concern. The other two simply require the use of good judgment to overcome.


Now if we focus on the multi-threading issue, we can take a step back and realize that the problem does not lie with the Singleton, but with the language the Singleton is implemented in. If your programming language does not natively provide multi-threading as part of the language, it becomes very challenging to write portable and correct multi-threaded code. All of this time, blame and criticism have been placed upon the Singleton, when the fault actually lay with the tools we were using to implement it. It is foolish to eschew the Singleton for its unsafe behavior in a multi-threaded environment without considering that all of the other code written in that environment is open to the same liabilities.

Usage Guidelines

As promised, here are some guidelines to help you use the Singleton effectively. There are at least four common implementations of the Singleton in Java, and the DCLP version should not be used prior to the J2SE 5.0 release of the language. C++ has the same potential issue, which can be resolved with the native threading support in C++11. For .NET developers, the synchronization primitives provide the appropriate memory barriers.

For earlier versions of C++ it is recommended to create Singleton instances before the main function starts. This forgoes the benefit of lazy instantiation; however, it will safely create the instance before other threads begin executing. Care must still be taken if your Singleton has dependencies on other modules, as C++ does not provide a guarantee for the order in which modules are initialized.

It is possible to use lazy instantiation safely in C++ with a minor adjustment. Modify the get_instance() function to use one-check locking by simply synchronizing with a mutex. Then make a call to get_instance() at the beginning of each thread that will access the object and store a pointer to it. Thread-Local Storage mechanisms can be used to provide thread-specific access to the object. The introduction of TLS will not be portable; however, it will be thread-safe. With C++11, the language itself provides a simpler route, as the sketch below shows.
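Here is a minimal sketch of the C++11 approach (often called the Meyers Singleton); the language guarantees that the initialization of a function-local static is thread-safe:

C++

class Settings
{
public:
  static Settings& instance()
  {
    // Initialized exactly once, even with concurrent callers (C++11).
    static Settings s_instance;
    return s_instance;
  }
 
private:
  Settings() { }
  Settings(const Settings&) = delete;             // non-copyable
  Settings& operator=(const Settings&) = delete;  // non-assignable
};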

Summary


We manage complexity by abstracting the details behind interfaces. Every abstraction reduces the perceived complexity of a solution just a bit more. Sometimes the complexity is reduced until it is a single box with the name of a static member function inside. The majority of our job is to identify problems and provide solutions. Unfortunately, many times we are quick to attribute the root cause to the wrong factor. Therefore, when an issue is discovered with a software component, take a step back and look at the bigger picture. Is this a localized phenomenon, or a symptom of a more systemic issue?

Alchemy: Message Interface

portability, CodeProject, C++, maintainability, Alchemy, design Send feedback »

This is an entry for the continuing series of blog entries that documents the design and implementation process of a library. This library is called, Network Alchemy[^]. Alchemy performs data serialization and it is written in C++.

I presented the design and initial implementation of the Datum[^] object in my previous Alchemy post. A Datum object provides the user with a natural interface to access each data field. This entry focuses on the message body that will contain the Datum objects, as well as a message buffer to store the data. I prefer to get a basic prototype up and running as soon as possible in early design & development in order to observe potential design issues that were not initially considered. In fact, with a first-pass implementation that has had relatively little time invested, I am more willing to throw away work if it will lead to a better solution.

Many times we are reluctant to throw away work. Instead, we continue to throw good effort after bad, trying to force a flawed solution to work, simply because we have already invested so much time in it. One rational explanation for this could be that we believe it will take the same amount of time to reach this same point with a new solution. Spending little time on an initial prototype is a simple way to mitigate this belief. Also remember, the second time around we have more experience with solving this problem, so I would expect to move quicker with more deliberate decisions. So let's continue with the design of a message container.

Experience is invaluable

When I design an object, I always consider Item 18 from Scott Meyers' Effective C++:

Item 18: Make interfaces easy to use correctly and hard to use incorrectly.

The instructions for how to use an object's interface may be right there in the comments or documentation, but who the hell reads that stuff?! Usually it isn't even that good when we do take the time to read through it, or at least it first appears that way. For these very reasons, it is important to strive to make your object's interfaces intuitive. Don't surprise the user. And even if the user is expecting the wrong thing, try to make the object do the right thing. This is very vague advice, and yet if you can figure out how to apply it, you will be able to create award-winning APIs.

This design and development journey that I am taking you on is a record of my second iteration of this library. The first pass proved to me that it could be done, and that it was valuable. I also received some great feedback from users of my library. I fixed a few defects, and discovered some short-comings and missing features. However, the most important thing that I learned was that despite the simplicity of the message interface, I had made the memory management too complicated. I want to make sure to avoid that mistake this time.

Message Design

Although the Message object is the primary interface through which the user will interact with Alchemy, it is a very simple interface. Most of the user interaction will be focused on the Datum fields defined for the message. The remainder of the class almost resembles a shared_ptr. However, I did not recognize this until this second evaluation of the Message.

Pros/Cons of the original design

Let me walk you through the initial implementation with its rationale. I'll describe what I believe I got right, and what seemed to make it difficult for the users. Here is a simplified definition for the class, omitting constructors and duplicate functions for brevity:

C++

template < typename MessageT,  typename ByteOrderT >
class Message
  : public MessageT
{
public:
  //  General ************************
  bool       empty() const;
  size_t     size() const;
  void       clear();
  Message    clone() const;
 
  //  Buffer Management **************
  void       attach (buffer_ptr sp_buffer);
  buffer_ptr detach();
  buffer_ptr buffer();
  void       flush();
 
  //  Byte-order Management **********
  bool   is_host_order() const;
 
  Message < MessageT, HostByteOrder > to_host()    const;
  Message < MessageT, NetByteOrder >  to_network() const;
};

Pros

First the good; it's simpler to explain and sets up the context for the Cons. One of the issues I have noticed in the past is that a message is received from the network, and multiple locations in the processing code end up converting the byte-order of its parameters. For whatever reason the code ended up this way; when it did, it was a time-consuming issue to track down. Therefore I thought it would be useful to use type information to indicate the byte-order of the current message. Because of the parameterized definitions I created for byte-order management, the compiler elides any duplicate conversion attempted to the current type.

I believe that I got it right when I decided to manage memory internally for the message. However, I did not provide a convenient enough interface to make this a complete solution for the users. In the end they wrote many wrapper classes to adapt to the short-comings related to this. What is good about the internal memory management is that the Message knows all about the message definition. This allows it to create the correct size of buffer and verify that reads and writes are within bounds.

The Alchemy Datum fields have shadow buffers of the native type they represent, to use as interim storage and to help solve the data-alignment issue when accessing memory directly. At fixed points these fields write their contents to the underlying packed message buffer that will ultimately be transmitted. The member function flush was added to allow the user to force this to occur. I will keep the shadow buffers; however, I must find a better solution for synchronizing the packed buffer.

Cons

Memory Access

I did not account for memory buffers that were allocated outside of my library. I did provide attach and detach functions, but the buffers these functions used had to be created with Alchemy. I did provide a policy-based integration path that would allow users to customize the memory management; however, this feature was overlooked, and probably not well understood. Ultimately what I observed was duplicate copies of buffers that could have been eliminated, and a cumbersome interface to get direct access to the raw buffer.

This is also what caused issues for the shadow buffer design: keeping the shadow copy and packed buffer synchronized. For the most part I would write out the data to the packed buffer when I updated the shadow buffer. For reads, if an internal buffer was present I would read it into the shadow copy, and return the value of the shadow copy. The problems arose when I found I could not force the shadow copies to read or write from the underlying buffer on demand for a few types of data.

This led to the users discovering that calling flush would usually solve the problem, but not always. When I finally did resolve this issue correctly, I could not convince the users that calling flush was no longer necessary. My final conclusion was that this uncontrolled method of memory access was brittle and had become too complex.

Byte-Order Management

While I think I made a good decision to make the current byte-order of the message part of the type, I think it was a mistake to add the conversion functions, to_host and to_network, to the interface of the message. Ultimately, this complicates the interface of the message, and they can just as easily be implemented outside of the Message object. I also believe that I would have arrived at a cleaner memory management solution had I gone this route with conversion.

Finally, it seemed obvious to me that users would only want to convert between host and network byte-orders. It never occurred to me that some protocols intentionally transmit over the wire in little-endian format. Google's recently open-sourced FlatBuffers is one of the protocol libraries that does it this way. Moreover, some of my users are defining new standard protocols that transmit in little-endian format.

Message Redesign

After some analysis, deep-thought and self-reflection, here is the redesigned interface for the Message:

C++

template < typename MessageT,  typename ByteOrderT >
class Message
  : public MessageT
{
public:
  //  General ************************
  //  ... These functions remain unchanged ...
 
  //  Buffer Management **************
  void assign(const data_type* p_buffer, size_t count);
  void assign(const buffer_ptr sp_buffer, size_t count);
  void assign(InputIt first, InputIt last);
  const data_type* data()   const;
 
  //  Byte-order Management **********
  bool   is_host_order() const;
};
 
// Now global functions
Message < MessageT, HostByteOrder>    to_host(...);
Message < MessageT, NetByteOrder>     to_network(...);
Message < MessageT, BigEByteOrder>    to_big_end(...);
Message < MessageT, LittleEByteOrder> to_little_end(...);

Description of the changes

Synchronize on Input

The new interface is modelled after the standard containers with respect to buffer management. I have replaced the concept of attaching and detaching a user-managed buffer with the assign function. With this function, users will be able to specify an initial buffer with which to initialize the contents of the message. At this point, the input buffer will be processed and the data copied to the shadow buffers maintained by the individual Datum objects. Ownership of the memory will not be taken for buffers passed in by the user; the content will simply be read in order to initialize the message values. This solves half of the synchronization issue.

Synchronize on Output

The previous design was not strict enough with control of the buffers. The buffers should only be accessed by a single path between synchronization points. The old design forced me to write the data down to the buffer on every field set, and read from the buffer on every get; that is, if there was an internal buffer to reference.

Now there is a single accessor function to get access to the packed format of the message: data(). data() will behave very much like std::string::data(). A buffer will be allocated if necessary, the shadow copies will be synchronized, and the user will be given a raw pointer to the data buffer. The behavior of modifying this buffer directly is undefined. This pointer will become invalid when another synchronization function is called (to be specified once all calls are determined). Basically, do not hold onto this pointer. Access the data, and forget about the buffer.

This decision will also make a new feature for this version simpler to implement: dynamically-sized messages. This is one of the short-comings of the initial implementation; it could only operate on fixed-format message definitions. I will still be holding off the implementation of this feature until after the full prototype for this version has been developed. However, I know that the path to implement it will be simpler now that the management of memory is under stricter control. A sketch of the intended usage follows.
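Here is a hedged sketch of the workflow I have in mind; the message type follows the earlier prototype entry, and p_received, count, and transmit are placeholders for whatever the application uses:

C++

Msg msg;
 
// Synchronize on input: copy an incoming buffer into the
// shadow copies held by the Datum fields.
msg.assign(p_received, count);
 
// Interact with the fields using value semantics.
msg.two = msg.one + 5;
 
// Synchronize on output: data() packs the shadow copies into
// the internal buffer and returns a raw pointer to it.
const data_type* p_packed = msg.data();
transmit(p_packed, msg.size());   // placeholder for the actual send
 
// Do not hold p_packed; it is invalidated by the next
// synchronizing call on msg.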

Byte-Order Conversion

I'm still shaking my head over this (I understand it; it's just hard to accept). I will create a more consistent interface by moving the conversion functions to global scope rather than keeping them as member functions of the message. It will then be simpler to add new converters, such as to_big_end and to_little_end. This also removes some functions from the interface, making it even simpler.
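As a sketch of why the global form is cleaner (this is illustrative, not the final Alchemy code), the no-op case can be expressed as an overload selected entirely by the message's byte-order type:

C++

// A message already in host order requires no work; the compiler
// can inline this overload away entirely.
template < typename MessageT >
Message < MessageT, HostByteOrder >
  to_host(const Message < MessageT, HostByteOrder >& msg)
{
  return msg;
}
 
// Only a message in network order triggers actual byte-swapping.
template < typename MessageT >
Message < MessageT, HostByteOrder >
  to_host(const Message < MessageT, NetByteOrder >& msg)
{
  Message < MessageT, HostByteOrder > result;
  // ... convert the byte-order of each Datum field ...
  return result;
}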

Eliminate Flush

flush was an interim solution that overstayed its welcome. In an attempt to figure things out on their own, users discovered that it solved their problem, most of the time. When I finally solved the real problem, I couldn't pry that function from their hands, and they were still kicking and screaming. Rightly so; they did not trust the library yet.

Adding refined control over synchronization, or providing only a partial solution, is usually a recipe for disaster. I believe that I provided both with the flush member function. The new design model that I have created will allow me to remove some of the complicated code that I previously refused to throw away because it was so close to working. This is proof to me yet again that sometimes it is better to take a step back and re-evaluate when things become overly complex and fragile. Programming by force rarely leads to a successful and maintainable solution.

Summary

So now you know: this whole time I have actually been redesigning a concept I already worked through once with mediocre success. The foundation in meta-template programming you have read about thus far has remained mostly unchanged. Once we reached the Message class, I felt it would be more valuable to present the big picture of this project. Now hopefully you can learn from my previous mistakes if you find yourself designing a solution with similar choices. I believe the new design is cleaner, more robust, and definitely harder to use incorrectly. Let me know what you think. Do you see any flaws in my rationale? The next topic will be the internal message buffer, and applying the meta-template processing against the message definition.

Software Design Patterns

general, communication, CodeProject, maintainability, design Send feedback »

Software design patterns have helped us create a language to communicate concepts and leverage the skills of previous work. Design patterns are very powerful, language-agnostic descriptions of problems and solutions that have been encountered and solved many times over. However, design patterns are only a resource for solving programming problems; a tool that can help software programs be developed elegantly, efficiently, and reliably, in exactly the same way that programming languages, 3rd-party libraries, open source code, software development processes, Mountain Dew and The Internet can improve the quality of code. I would like to discuss some of my thoughts and observations regarding design patterns, with the intent to help improve the usefulness of this wildly misused resource.

What is a design pattern?

Let's clarify what we are talking about before we go any further. A design pattern is an abstract description of a solution to a common problem, along with the context in which the pattern is useful. Often it will also include a description of the trade-offs that following the pattern entails, such as the benefits you will gain and the concessions you will make in your design to use the pattern. If you are aware of software patterns then most certainly you have heard of the Gang of Four (GOF) book on software design patterns. Its actual name is Design Patterns - Elements of Reusable Object-Oriented Software; there are four authors, hence the nickname. It is a good resource to start with, as it describes 23 design patterns in great detail. The authors provide all of the information about each pattern I mentioned above, as well as sample uses, implementations, and known uses for the pattern.

Why is a design pattern a useful resource?

A design pattern can be useful to you in primarily two ways.

  1. Improve your communication with other software designers
  2. Gain the ability to leverage the experience and wisdom of previously discovered and proven solutions.

A design pattern becomes a useful resource once you learn the vocabulary and gain a small amount of experience to relate, or apply, the concepts. It becomes a shorthand way of describing a complex sequence of steps; similar to the moves in chess, Castling[^] and En passant[^]. Granted, these are moves built into the rules of the game. Consider, then, the situations that create a fork, pin, or skewer advantage.

Understanding is the key

Design patterns are useful when they are able to improve understanding of concepts and ideas. If a design pattern cannot be communicated clearly, its value is greatly diminished as the message is lost or misinterpreted. Memorizing the names of popular design patterns is simple. However, understanding the concepts, benefits, disadvantages and overall value takes time, practice, and sometimes experience.

When are we going to use this in the real-world?

Remember that one phrase that was uttered, at least once a day in your math classes?! Design patterns are very much like the models and concepts that are taught in math. Over hundreds of years, mathematicians have been developing equations to more simply represent characteristics and behaviors that we can observe:

  • Calculate the area of shapes
  • Calculate trajectories of artillery
  • Calculate the probabilities that you will lose money at a casino
  • Calculate how long it will take two trains leaving different stations at the exact same moment. Engine A is travelling at a rate of...

How many times have you felt like you completely understood a math instructor; you could follow the math and logic at each step; yet when you tried to apply the same process to the homework problems it just did not seem to work out? Did you ever figure out why? I typically discovered the cause to be that the form of a number was changed in the problem. The units of the problems were different than those in the original example, and I had to learn how to convert the numbers I was given into something useful. Most of my classmates hated the story problems, but that is where the real learning takes place.

Apply what you think you know

Practice, practice, practice. Some things we learn take only repetition or rote memorization. Most design patterns are not that simple. A design pattern is much like the 3 or 4 pages in a math book at the beginning of the section that describe a new math concept. Using one of these patterns in your program is much like solving a story problem. However, there was no set of problems to practice on before you tried to apply your new knowledge. To effectively use a design pattern, you must understand the pattern. Until you try to apply the pattern to create a solution, you only have an abstract concept floating in your mind.

Having an arsenal of chess moves and strategies in hand will not guarantee your victory. You may create vulnerabilities which you are not even aware of, and that creates opportunities that your opponent can take advantage of. This can occur even if you properly apply the moves (they were probably the wrong moves, though). Solving the story problems in math was much simpler once you got some practice working on the simple problems already set up for you to solve with the pattern or process just taught to you. Applying new concepts leads to a deeper understanding.

Follow the path to understanding

Using a pattern effectively often requires a certain level of awareness of the pattern:

  • To discover that these patterns exist
  • To know how to implement them
  • To know the restrictions on their use
  • To understand how they can be useful and harmful
  • To understand when and when not to use them

Notice that there is a progression in the list above:

Discovery -> Knowing -> Understanding.

To discover

Chances are that you are already using many design patterns that have been defined, and you are not even aware of it. That's OK! It simply means that you discovered, on your own, a common solution that generally provides good results. Hopefully you used the most appropriate pattern to solve your problem. Unless you are aware that something exists, you cannot actively search for it. In the case of a thought or concept, that search means seeking more knowledge on the subject.

To know

Now that you are aware that something exists, you can study it. You can seek more information on the topic to become more familiar with the subject. However, the blind use of knowledge is fraught with danger. For example, the outcome could be disastrous if one had a recipe for black powder, yet did not understand how to safely handle this volatile mixture. Successfully making black powder requires more knowledge than just the recipe.

To understand

To understand is to have knowledge so firmly grasped that it can be clearly communicated. Once you understand, you can more completely appreciate the significance, implications, and importance of the subject.

Anti-patterns

Falling into pitfalls is so common when applying design patterns that the pitfalls have been given their own names. These pitfalls are called Anti-patterns. Many solutions and processes exist that are ineffective, yet they continue to reappear and be applied. Here is a brief list of some well-known anti-patterns, if not by name, then by concept:

  • God Object: The Swiss-Army Knife of the object-oriented world. This object can do anything and everything. The problem is, the larger an object becomes, the more difficult it is to reuse in other contexts. The implementation also runs the risk of becoming a tiny ball-of-mud encapsulated in an object (see the sketch after this list).
  • Premature Optimization: This anti-pattern is the subject of a classic quote:

    "Premature optimization is the root of all evil (or at least most of it) in programming"
    The Art of Computer Programming, p. 671, Donald Knuth

    The important message in this quotation is that it is very difficult for humans to predict where the bottlenecks in code will be. Do not work on optimizing code until you have run benchmarks and identified a problem.
  • Shotgun Surgery: This one is painful. A new feature is added in a single change that spans many other features, files, and authors. Once all of the changes have been made and the code successfully compiles, the chances are great that some of the original features are broken, and possibly the new feature as well.
  • Searching for the Silver Bullet: This is that one trick, fix, pattern, language, process, get-rich-quick scheme... that promises to make everything easier, better, simpler. It is much more difficult to prove non-existence than existence, and since I am not aware of any silver bullets, when I am asked "What is the best ...?" I typically respond with "It depends..."
  • Cargo Cult Programming: This is when patterns and methods are used without understanding why.
    • Why would you choose the MVC when you have the MVVM?!
    • It's got less letters, you'll save time typing, duh!
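
As promised above, here is a brief sketch of what a God Object looks like at the interface level, contrasted with focused classes. All of the class and method names here are hypothetical, invented purely for illustration:

    #include <cstddef>

    // A hypothetical God Object: one class that knows how to do everything.
    // Reusing just the logging, or just the networking, drags in the rest,
    // and every feature added makes the class harder to test and maintain.
    class SystemManager
    {
    public:
      void ReadSensors();
      void LogMessage(const char* msg);
      void SendPacket(const void* data, std::size_t len);
      void RenderDisplay();
      void UpdateConfiguration();
      // ... and so on, for every feature in the program.
    };

    // The same responsibilities split into focused classes. Each one is
    // small enough to understand, test, and reuse in another context.
    class SensorReader { public: void ReadAll(); };
    class EventLog     { public: void Write(const char* msg); };
    class Transport    { public: void Send(const void* data, std::size_t len); };

Splitting the responsibilities this way follows the same reasoning as the Single Responsibility Principle: each class has one reason to change.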

Singleton

Many developers consider the Singleton to be an anti-pattern and advise never using it. I believe that absolutes are absolutely dangerous, especially if the discussion is regarding "Best Practices." Always and never are examples of absolute qualifiers. Some tools are the right tool for the job. To go out of your way to avoid a design pattern, or a feature of a language, only to recreate the same solution in another form is counter-productive, possibly anti-productive. There are some valid concerns with regards to the Singleton. One of the most important concerns to be aware of is how to use it safely in multi-threaded environments. However, this does not invalidate the value that it provides, especially when it is the right pattern for the situation.
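
To make the multi-threading concern concrete, here is a minimal sketch of one common mitigation: the Meyers singleton. It assumes C++11 or later, where the language guarantees that a function-local static is initialized exactly once, even with concurrent callers. The Logger class is hypothetical, purely for illustration:

    #include <iostream>

    // A minimal sketch of a Meyers singleton (hypothetical Logger class).
    // Since C++11, initialization of a function-local static is guaranteed
    // to be thread-safe, which addresses the most common multi-threaded
    // concern with this pattern.
    class Logger
    {
    public:
      static Logger& Instance()
      {
        // Constructed on first use; the language guarantees this occurs
        // exactly once, even if multiple threads call Instance() at once.
        static Logger instance;
        return instance;
      }

      void Write(const char* msg)  { std::cout << msg << "\n"; }

    private:
      Logger() = default;                          // No public construction
      Logger(const Logger&) = delete;              // No copies
      Logger& operator=(const Logger&) = delete;   // No assignment
    };

    // Usage:  Logger::Instance().Write("message");

Note that this sketch only addresses safe construction; if Write() is called from multiple threads, its body still needs its own synchronization.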

I will revisit the singleton in the near future to clarify some misunderstandings, and demonstrate how and when it can be used effectively.

Summary

Design patterns are another resource that can help you succeed as a software developer. In order to take advantage of this resource, you must understand the concepts behind the pattern. This is very similar to the mathematical concepts that must be understood before they can be applied to solve real-world problems. When things work out well, communication improves and development becomes more effective by leveraging the proven work of others. When the use of design patterns does not work out so well, we get anti-patterns: solutions and processes that appear to be beneficial, but actually detract from the project. Keep an open mind when designing software and searching for a solution. Be aware of what exists, and understand how and why a pattern is beneficial before you try to use it.
