## Steganography


A software library provides no value if it does not simplify the task of creating your application. At the very least, we would like to show that the library contains all of the tools required to complete the intended goal. Ideally, the library is complete, easy to use, and efficient. The only way to learn how well the library is designed and implemented is to use it.

Furthermore, it is useful and sometimes necessary to provide an exemplar for others to see how the library is intended to be used. The Steganography sample program included with Alchemy is this exemplar. I chose steganography to demonstrate that Alchemy is much more useful than the serialization of data for networking. In the process of developing this application I discovered some pain-points with the library and added tools to Alchemy to eliminate this pain.

## What is Steganography?

Steganography is the hiding of messages in plain sight. It should not be confused with "stenography," which is the recording of dictation. Steganography can be performed in many ways: normal words can be given special meaning and included within a message that appears mundane; the location of words relative to others in the message can carry significance; the second letter of every other word can be extracted to form the message. The possibilities are endless.

The form of steganography that I have implemented with Alchemy embeds a text message within a bitmap image. This can be achieved by taking advantage of the fact that the low-order bits of the color channels in an image affect the final color much less than the high-order bits do.

The table below shows a sample for each color channel, with and without the two lower bits set. The binary row shows the four lower bits of each 8-bit color value. For demonstration purposes, the alpha channel is represented with grayscale.

|               | Red         | Green       | Blue        | Alpha       |
|---------------|-------------|-------------|-------------|-------------|
| Hex           | FF / FC     | FF / FC     | FF / FC     | FF / FC     |
| Low four bits | 1111 / 1100 | 1111 / 1100 | 1111 / 1100 | 1111 / 1100 |

Compare this to the result if we substitute only the single high-bit for each color channel:

|        | Red                   | Green                 | Blue                  | Alpha                 |
|--------|-----------------------|-----------------------|-----------------------|-----------------------|
| Hex    | 7F / FF               | 7F / FF               | 7F / FF               | 7F / FF               |
| Binary | 0111 1111 / 1111 1111 | 0111 1111 / 1111 1111 | 0111 1111 / 1111 1111 | 0111 1111 / 1111 1111 |

The only caveat is that the image should have a sufficient amount of entropy; otherwise, the noise added by the encoded data may become visible, if not to a human, then most certainly to a computer searching for such anomalies. Photographs with a range of gradients are good candidates for this form of steganography.

## Why Use Steganography as a Sample?

Through the development of the base set of features for Alchemy, I focused solely on the serializing of data for network data transfer protocols. However, Alchemy is a flexible serialization library that is not restricted to network communication. Portable file formats also require serialization capabilities similar to the capabilities found in Alchemy. To this end, loading and storing a bitmap from a file is a good serialization task; bitmaps are relatively easy to acquire, and the format is simple enough to be implemented in a small sample program.

I wanted to keep the program simple. Writing a portable network communication program is not simple, especially since Alchemy does not provide functionality directly related to network communication. I also felt that if I were to use a network-related exemplar, potential users of Alchemy would assume it can only be used for network-related tasks. Moreover, I did not want to add extra support code to the application that would hide or confuse the usage of Alchemy.

## Strategy

In keeping with simplicity, the sample program requires 32-bit bitmaps. For this type of encoding, there are four color channels (Red, Green, Blue, and Alpha) for each pixel, where each channel is one byte in size. We will encode one byte of data within each pixel. To accomplish this, we will assign two bits of the encoded byte to the two lower bits of each color channel. This results in a 25% encoding rate within the image.

Consider an example where we combine the orange color 0xFF9915 with the letter 'i' (0x69):

|        | Channel 1 | Channel 2 | Channel 3 | Channel 4 |
|--------|-----------|-----------|-----------|-----------|
| Input  | 0xFF      | 0x99      | 0x15      | 0x00      |
| Value  | 1111 1111 | 1001 1001 | 0001 0101 | 0000 0000 |
| Data   | 01        | 10        | 10        | 01        |
| Result | 1111 1101 | 1001 1010 | 0001 0110 | 0000 0001 |
| Output | 0xFD      | 0x9A      | 0x16      | 0x01      |

This is not a very complex encoding strategy. However, it will allow me to demonstrate the serialization of data for both input and output, as well as the packed-data bit (bit-field) functionality provided by Alchemy.

## Bitmap Format

The bitmap file format has many different definitions. The variety of formats is a result of its inception on IBM's OS/2 platform, migration to Windows, and evolution through the years. Additionally, the format allows for an indexed 8-bit color table, Run-Length Encoded (RLE) compression, gamma correction, color profiles, and many other features.

The sample application simply uses the bitmap format introduced with Windows 3.0. It contains a file header that indicates the file is of type BITMAP, a bitmap information section, and the pixel data. The Alchemy definitions for each section are found below. These definitions provide the fundamental structure for the data; the goal was to provide a table-based definition that looks very similar to the definition of a struct. This declaration also generates the majority of the serialization logic for Alchemy:

The bitmap file header is a short structure that is only 14 bytes in size. The first two bytes contain the letters "BM" to indicate that the file is a bitmap. The length of the file and the offset to the first pixel data are also encoded in this structure:

```cpp
//  *************************************************************
ALCHEMY_STRUCT(bitmap_file_header_t,
  ALCHEMY_DATUM(uint16_t, type),
  ALCHEMY_DATUM(uint32_t, length),
  ALCHEMY_DATUM(uint16_t, reserved_1),
  ALCHEMY_DATUM(uint16_t, reserved_2),
  ALCHEMY_DATUM(uint32_t, offset)
)
```

The bitmap information section is 40-bytes of data that defines the dimensions and color-depth of the encoded bitmap:

```cpp
//  *************************************************************
ALCHEMY_STRUCT(bitmap_info_header_t,
  ALCHEMY_DATUM(uint32_t, size),
  ALCHEMY_DATUM(int32_t,  width),
  ALCHEMY_DATUM(int32_t,  height),
  ALCHEMY_DATUM(uint16_t, planes),
  ALCHEMY_DATUM(uint16_t, bit_depth),
  ALCHEMY_DATUM(uint32_t, compression),
  ALCHEMY_DATUM(uint32_t, sizeImage),
  ALCHEMY_DATUM(int32_t,  x_pixels_per_meter),
  ALCHEMY_DATUM(int32_t,  y_pixels_per_meter),
  ALCHEMY_DATUM(uint32_t, color_count),
  ALCHEMY_DATUM(uint32_t, important_color)
)
```

### Bitmap Information

This is a utility definition to combine the information header and the color data from the buffer for convenience:

```cpp
//  *************************************************************
ALCHEMY_STRUCT(bitmap_info_t,
  ALCHEMY_DATUM(bitmap_info_header_t, header),
  ALCHEMY_ALLOC(byte_t, header.sizeImage, pixels)
)
```

### Pixel Definition

This is a convenience structure to access each color-channel independently in a pixel:

```cpp
//  *************************************************************
ALCHEMY_STRUCT(rgba_t,
  ALCHEMY_DATUM(byte_t, blue),
  ALCHEMY_DATUM(byte_t, green),
  ALCHEMY_DATUM(byte_t, red),
  ALCHEMY_DATUM(byte_t, alpha)
)
```

## Alchemy Declarations

### Storage Buffer

Alchemy supports both static and dynamic memory management for its internal buffers; dynamic allocation is the default. However, the storage policy can easily be changed to a static policy with a new typedef. The definition below shows the static buffer definitions used by the sample program:

```cpp
namespace detail
{
  typedef Hg::basic_msg<bitmap_file_header_t>   hg_file_t;
  typedef Hg::basic_msg<bitmap_info_t>          hg_info_t;
}
```

### Alchemy Message

For convenience, we also pre-define a message type for each format:

```cpp
typedef Hg::Message<detail::hg_file_t>   file_t;
typedef Hg::Message<detail::hg_info_t>   info_t;
```

## Bitmap Abstraction

As I mentioned previously, I wanted to keep this sample application as simple as possible. One of the things that I was able to do is encapsulate the bitmap data details into the following Bitmap abstraction. This class provides storage for a loaded bitmap, loads and stores the contents, and provides a generic processing function on each pixel:

```cpp
class Bitmap
{
public:
  bool Load (const std::string &name);
  bool Store(const std::string &name);

  void process( std::string &msg,
                pixel_ftor   ftor);

private:
  std::string   m_file_name;

  file_t        m_file_header;
  info_t        m_info;
};
```

The processing function takes a function-pointer as an argument that specifies the processing operation to be performed each time the function is called. This is the definition for that function-pointer.

```cpp
typedef void (*pixel_ftor) ( Hg::rgba_t&  pixel,
                             Hg::byte_t&  data);
```

This section shows the implementation for both the Load and Store operations of the bitmap. The implementation uses the Standard C++ Library to open a file, and read or write the contents directly into the Hg::Message type with the stream operators.

```cpp
//  *************************************************************
bool Bitmap::Load (const std::string &name)
{
  m_file_name = name;

  std::ifstream input(m_file_name, std::ios::binary);
  if (!input)
  {
    return false;
  }

  input >> m_file_header;

  const size_t k_info_len = 0x36ul;
  if (k_info_len != m_file_header.offset)
  {
    return false;
  }

  input >> m_info;

  return true;
}
```

And the implementation for Store:

```cpp
//  *************************************************************
bool Bitmap::Store (const std::string &name)
{
  std::ofstream output(name, std::ios::binary);
  if (!output)
  {
    return false;
  }

  output << m_file_header;
  output << m_info;

  return true;
}
```

### Process

I mentioned at the beginning that it is important to implement programs that perform real work with your libraries to verify that they are easy to use and provide the desired functionality. My first-pass implementation of this program showed that both of those qualities were true for Alchemy, except that the performance was quite slow. The cause turned out to be the load and initialization of every single pixel through my implementation of Hg::packed_bits.

The problem is that the bytes that represent the pixel data are normally read into an array as a bulk operation. Afterwards, the proper address for each pixel is indexed, rather than reading the data into an independent object that represents the pixel. When I recognized this, I came up with the idea for the data_view<T> construct. This allows a large buffer to be loaded as raw memory, and a view of the data can be mapped to any type desired, even a complex data structure such as the rgba_t type that I defined.

The data_view is an object that provides non-owning access to the underlying raw buffer. If this sounds familiar, that is because it is very similar to the string_view construct that is slated for C++17. It was shortly after I implemented data_view that I discovered string_view existed. I was a bit shocked, and delighted, when I realized how similar the concepts and implementations are. It was a bit of validation that I had chosen a good path to solve this problem.

I plan to write an entry that describes the data_view in detail at a later time. Until then, if you would like to learn more about the approach, I encourage you to check out its implementation in Alchemy, or the documentation for the string_view object.

The purpose of process is to sequentially execute the supplied operation on a single message byte and source image pixel. This is continued until the entire message has been processed, or there are no more available pixels.

```cpp
//  *************************************************************
void Bitmap::process( std::string &msg,
                      pixel_ftor   ftor)
{
  auto t    = Hg::make_view(m_info.pixels.get());
  auto iter = t.begin();

  // Calculate the number of bytes that can be encoded or extracted
  // from the image, and ensure that the message buffer is large enough.
  size_t length = t.end() - iter;
  msg.resize(length);

  for (size_t index = 0; iter != t.end(); ++iter, ++index)
  {
    ftor(*iter, (Hg::byte_t&)(msg[index]));
  }
}
```

## Weave and Extract

These are the two functions that provide the pixel-level operations to encode a message byte into a pixel with the strategy that was previously mentioned. Weave combines the message byte with the supplied pixel, and Extract reconstructs the message byte from the pixel.

I am investigating the possibility of implementing a union-type for Alchemy. If I end up doing this I will most likely revisit this sample and provide an alternative implementation that incorporates the Hg::packed_bits type. This will completely eliminate the manual bit-twiddling logic that is present in both of these functions:

```cpp
//  *************************************************************
void weave_data ( Hg::rgba_t&  pixel,
                  Hg::byte_t&  data)
{
  using Hg::s_data;

  s_data value(data);

  pixel.blue  = (pixel.blue  & ~k_data_mask)
              | (value.d0    &  k_data_mask);
  pixel.green = (pixel.green & ~k_data_mask)
              | (value.d1    &  k_data_mask);
  pixel.red   = (pixel.red   & ~k_data_mask)
              | (value.d2    &  k_data_mask);
  pixel.alpha = (pixel.alpha & ~k_data_mask)
              | (value.d3    &  k_data_mask);
}
```

Extract implementation:

```cpp
//  *************************************************************
void extract_data ( Hg::rgba_t&  pixel,
                    Hg::byte_t&  data)
{
  using Hg::s_data;

  s_data value;

  value.d0  = (pixel.blue  & k_data_mask);
  value.d1  = (pixel.green & k_data_mask);
  value.d2  = (pixel.red   & k_data_mask);
  value.d3  = (pixel.alpha & k_data_mask);

  data = value;
}
```

## The Main Program

The main program body is straight-forward. Input parameters are parsed to determine if an encode or decode operation should be performed, as well as the names of the files to use.

```cpp
//  *************************************************************
int main(int argc, char* argv[])
{
  if (!ParseCmdParams(argc, argv))
  {
    PrintHelp();
    return 0;
  }

  string         message;
  sgraph::Bitmap bmp;
  bmp.Load(input_file);

  if (is_encode)
  {
    message = ReadFile(msg_file);
    bmp.process(message, weave_data);
    bmp.Store(output_file);
  }
  else
  {
    bmp.process(message, extract_data);
    WriteFile(output_file, message);
  }

  return 0;
}
```

## Results

To demonstrate the behavior of this application I ran sgraph to encode the readme.txt file from its project. Here is the first portion of the file:

========================================================================
CONSOLE APPLICATION : sgraphy Project Overview
========================================================================

AppWizard has created this sgraphy application for you.

This file contains a summary of what you will find in each of the files that


Into this image:

This is the result image:

For comparison, here is a sample screen-capture from a Beyond Compare diff of the two files:

## Summary

I implemented a basic application that performs steganography to demonstrate how to use the serialization features of my library, Alchemy. I chose a unique application like this to make the demonstration a bit more interesting, and to show that the library can be used for much more than serializing data for network transfer.

## The "Little Planet" Effect


The "Little Planet" effect is the colloquial name often used to refer to the mathematical concept of a stereographic projection. The end result is quite impressive, especially considering the little amount of code that is actually required to create the image. All that is required is a panoramic image with a 360° view from side to side, or a photo-sphere, such as used with Google Earth to provide an immersive view of a location.

## Stereographic Projection

This is a mapping from a spherical position onto a plane. You will commonly see this type of projection in cartography; two examples are mapping the earth and planispheres (celestial charts). Some distortion is unavoidable, because it is not possible to map a sphere onto a plane without it. The stereographic projection preserves angles and distorts areas. This trade-off is preferred for navigation, which is typically performed with angles.

The projection is typically performed from one of the poles. However, it can originate from any point on the sphere. For simplicity, unless otherwise stated, I will refer to projections that originate at the North Pole of the sphere. For a unit-sphere located at the origin, this is the point $$(0, 0, 1)$$.

The distortion of the image depends on the placement of the plane relative to the sphere. The upper hemisphere exhibits most of the distortion, and it becomes more extreme as a point on the sphere's surface approaches the origin of the projection. The projection origin itself has no defined image; points near it are projected out toward infinity.

If the plane bisects the sphere at the equator, the lower hemisphere is projected within the circle of the equator, and the upper hemisphere is projected onto the plane outside of that circle.

When the plane is located at the surface of the sphere, opposite the projection's origin, the lower hemisphere projects over a disc whose radius is twice that of the sphere's equator. The image below illustrates this configuration.

We can reference any point on the sphere's surface with two angles representing the latitude $$\phi$$ and longitude $$\lambda$$, where:

$$-\pi \lt \lambda \lt \pi, -\cfrac{\pi}{2} \lt \phi \lt \cfrac{\pi}{2}$$

The following image is a scaled down version of the panorama that I used to generate the stereographic projection of the Golden Gate Bridge at the beginning of the article. Normally we would index the pixels in this image with two variables, $$x$$ (width) and $$y$$ (height).

We can simplify the math to map a full-view panorama to a sphere by normalizing the dimensions for both the sphere and our surface map; that is, to reduce the scale to the unit scale of one. This means we will perform the surface map operation on a unit-sphere, and the dimensions of our panorama will then span from: $$-1 \lt x \lt 1, -1 \lt y \lt 1$$.

The following image shows how the coordinate system from the sphere maps to the image:

## Projecting the Sphere onto the Plane

We will create a ray to calculate the projection of a single point from the sphere to the plane. This ray begins at the projective origin, passes through the sphere, and exits through a point on the sphere's surface. The ray continues until it intersects the projective plane; this intersection is the projected point. Alternatively, we can calculate the intersection point on the sphere's surface, given a ray from the projective origin to a point on the projection plane. This demonstrates that the stereographic projection is a bijective operation; there is a one-to-one correspondence between the points on both surfaces.

The diagram below depicts the projection in two-dimensions:

With $$\lambda_0$$ as the central longitude and $$\phi_1$$ as the central latitude, this relationship can be stated mathematically as:

$$\eqalign{ u &= k \cos \phi \sin(\lambda - \lambda_0) \cr v &= k [\cos \phi_1 \sin \phi - \sin \phi_1 \cos \phi \cos(\lambda - \lambda_0)] \cr \text{where}\quad k &= \cfrac{2R}{1 + \sin \phi_1 \sin \phi + \cos \phi_1 \cos \phi \cos(\lambda - \lambda_0)} }$$

The term $$k$$ determines where the projected plane is located.

## Codez Plz

That is enough theory and explanation. Here is the code that I used to generate the stereographic projections of the Golden Gate Bridge and Las Vegas. The code presented below is adapted from a project I recently completed that used the computer-vision library OpenCV. The only important thing to note is that Mat objects are used to store images, and pixels are represented with the Vec3s type (a fixed-size vector of three shorts). You should have no problem converting the pixel-access operations below to whatever image-processing API you are using.

First, here are two constants defined in the code:

```cpp
const double k_pi         = 3.1415926535897932384626433832795;
const double k_pi_inverse = 0.31830988618379067153776752674503;
```

There are three functions:

### Main Projection

This function works by creating a ray between the projection origin and a pixel location on the projection plane. The intersection with the sphere's surface is calculated, which indicates the location to sample from the sphere's surface map. Because we are dealing with discrete, pixelated digital images, this sampling process creates visual artifacts. To help improve the smoothness of the image, we use a bilinear filter to average the values of the four pixels surrounding the sample location.

```cpp
void RenderProjection(Mat &pano, long len, Mat &output)
{
  output.create(len, len, CV_16UC3);
  long half_len = len / 2;
  Size sz       = pano.size();

  for (long indexX = 0; indexX < len; ++indexX)
  {
    for (long indexY = 0; indexY < len; ++indexY)
    {
      double sphereX = (indexX - half_len) * 10.0 / len;
      double sphereY = (indexY - half_len) * 10.0 / len;
      double Qx, Qy, Qz;

      if (GetIntersection(sphereX, sphereY, Qx, Qy, Qz))
      {
        double theta = std::acos(Qz);
        double phi   = std::atan2(Qy, Qx) + k_pi;
        theta        = theta * k_pi_inverse;
        phi          = phi   * (0.5 * k_pi_inverse);
        double Sx    = min(sz.width -2.0, sz.width  * phi);
        double Sy    = min(sz.height-2.0, sz.height * theta);

        output.at<Vec3s>(indexY, indexX) = BilinearSample(pano, Sx, Sy);
      }
    }
  }
}
```

### Calculate the Intersection

This calculation is an optimized reduction of the quadratic equation to calculate the intersection point on the surface of the sphere.
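The reduction works because the ray's origin lies on the unit sphere, so the constant term of the quadratic vanishes. A sketch of the algebra, for a ray $$P(t) = N + t\,\vec{d}$$ starting at the projection origin $$N$$ and requiring $$|P(t)|^2 = 1$$:

$$\eqalign{ t^2\,|\vec{d}|^2 + 2t\,(N \cdot \vec{d}) + (|N|^2 - 1) &= 0 \cr a = |\vec{d}|^2, \quad b = 2\,(N \cdot \vec{d}), \quad c = |N|^2 - 1 &= 0 }$$

Since $$c = 0$$, the roots are $$t = 0$$ (the origin $$N$$ itself) and $$t = -b/a$$, which is the value the code computes.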

```cpp
bool GetIntersection(double u, double v,
  double &x, double &y, double &z)
{
  // The ray starts at the projection origin N = (0, 0, 1),
  // the north pole of the unit sphere.
  double Nx    = 0.0;
  double Ny    = 0.0;
  double Nz    = 1.0;
  double dir_x = u - Nx;
  double dir_y = v - Ny;
  double dir_z = -1.0 - Nz;

  // Quadratic coefficients; the constant term is zero because
  // N itself lies on the unit sphere.
  double a = (dir_x * dir_x) + (dir_y * dir_y) + (dir_z * dir_z);
  double b = (dir_x * Nx) + (dir_y * Ny) + (dir_z * Nz);

  b *= 2;
  double d = b*b;
  double q = -0.5 * (b - std::sqrt(d));

  double t = q / a;

  x = (dir_x * t) + Nx;
  y = (dir_y * t) + Ny;
  z = (dir_z * t) + Nz;

  return true;
}
```

### Bilinear Filter

The bilinear filter calculates a weighted-sum of the four surrounding pixels for a digital image sample.

```cpp
Vec3s BilinearSample(Mat &image, double x, double y)
{
  Vec3s c00 = image.at<Vec3s>(int(y),   int(x));
  Vec3s c01 = image.at<Vec3s>(int(y),   int(x)+1);
  Vec3s c10 = image.at<Vec3s>(int(y)+1, int(x));
  Vec3s c11 = image.at<Vec3s>(int(y)+1, int(x)+1);

  // Fractional offsets of the sample point within the top-left pixel;
  // each neighbor is weighted by its overlap with the sample location.
  double X0 = x - floor(x);
  double X1 = 1.0 - X0;
  double Y0 = y - floor(y);
  double Y1 = 1.0 - Y0;

  double w00 = X1 * Y1;
  double w01 = X0 * Y1;
  double w10 = X1 * Y0;
  double w11 = X0 * Y0;

  short r  = short(c00[2] * w00 + c01[2] * w01
                 + c10[2] * w10 + c11[2] * w11);
  short g  = short(c00[1] * w00 + c01[1] * w01
                 + c10[1] * w10 + c11[1] * w11);
  short b  = short(c00[0] * w00 + c01[0] * w01
                 + c10[0] * w10 + c11[0] * w11);

  return make_BGR(b, g, r);
}
```

...and a helper function:

```cpp
Vec3s make_BGR(short blue, short green, short red)
{
  Vec3s result;
  result[0] = blue;
  result[1] = green;
  result[2] = red;

  return result;
}
```

Here is another sample of a stereographic projection and the panorama that I used to create it:

## Summary

The stereographic projection has been known and used since the time of the ancient Greeks. It was heavily used in the Age of Exploration to create maps of the world, where the distortion was applied to distances and areas while the relative angles between local points were preserved. When applied to full-view panoramas, it creates a neat effect called "The Little Planet Effect." With a little bit of theory and some concepts from computer graphics, we turned this concept into fewer than 100 lines of code.

## Alchemy: PackedBits (BitLists Mk3)


A continuation of a series of blog entries that documents the design and implementation process of a library. The library is called, Network Alchemy[^]. Alchemy performs low-level data serialization with compile-time reflection. It is written in C++ using template meta-programming.

My second attempt to create a bit-field type was more successful. The size of the container grew only linearly with each sub-field that was added, and the implementation was cleaner. However, I showed an image of what this implementation looked like in the debugger, and it was very inconvenient. What concerned me most was the pitiful performance revealed by my benchmark tests.

This entry describes my discoveries and the steps that I took to re-invent the bit-field type in Alchemy for the third time. This is also the current implementation in use by Alchemy, which is about 10% faster than a hand-coded collection of packed bits.


## Unit Testing a Singleton in C++


I have written about the Singleton[^] before. As a quick review of what I previously stated: I don't think the Singleton is misunderstood; I think it is the only software design pattern that most people do understand. Those who call the Singleton an anti-pattern believe that it is overused. It is simple enough in concept compared to the other patterns, which may itself explain why it is used so often. Another criticism that I hear is that it is difficult to unit-test, or at least to unit-test properly with a fresh fixture for each test. No, it's not, and I will demonstrate how.


## Alchemy: Array / Vector Serialization


A continuation of a series of blog entries that documents the design and implementation process of a library. The library is called, Network Alchemy[^]. Alchemy performs low-level data serialization with compile-time reflection. It is written in C++ using template meta-programming.

The alterations required up to this point to integrate arrays and vectors into Alchemy have been relatively minor. That does not mean the solutions were clean and simple from the beginning. The exercise of integrating serialization support for these containers was quite challenging, especially because of the possibilities they created for flexible data management.


## Alchemy: Vectors


A continuation of a series of blog entries that documents the design and implementation process of a library. The library is called, Network Alchemy[^]. Alchemy performs low-level data serialization with compile-time reflection. It is written in C++ using template meta-programming.

It's time to break a barrier that has existed within Alchemy since its inception: messages with fixed sizes. While the storage-policy concept that I use with the message buffer allows Alchemy to dynamically allocate memory for messages, the current structure of the library only allows messages whose size is known at compile-time.

There is already so much value in what Alchemy is capable of accomplishing, even with the static size limitation. However, the only way for Alchemy to expand and reach its potential is to remove this limitation and provide support for dynamically sized messages. This entry will demonstrate the changes that were required to achieve this goal.


## Alchemy: Arrays


A continuation of a series of blog entries that documents the design and implementation process of a library. The library is called, Network Alchemy[^]. Alchemy performs low-level data serialization with compile-time reflection. It is written in C++ using template meta-programming.

Once Alchemy was functional and supported a fundamental set of types, other development teams in my department approached me about using Alchemy on their products. Unfortunately, there was one type I had not given any consideration up to this point: arrays. This group needed the ability to have variable-sized messages, where the array payload started at the last byte of the fixed-format message. At that point, I had no clean solution to that problem.


## C++: Type Decay


I have previously written about code rot (code decay). This post is about decay in a different context. Essentially, there are three sets of types in C++ that will decay, that is, lose information. This entry will describe the concept, the circumstances, and in some cases ways to avoid type decay from occurring. This is an important topic for me to cover because the addition of support for arrays in Alchemy would have been much more difficult without knowledge of this concept.


## Alchemy: Nested Types


A continuation of a series of blog entries that documents the design and implementation process of a library. The library is called, Network Alchemy[^]. Alchemy performs low-level data serialization with compile-time reflection. It is written in C++ using template meta-programming.

I am almost done describing the first set of features that I was targeting when I set out to create Alchemy. The only remaining feature to be documented is the ability to have nested types. Basically, structs within structs. This entry describes the approach that I took as well as some of the challenges that I had to conquer in order to create a usable solution.


## Alchemy: BitLists Mk2


A continuation of a series of blog entries that documents the design and implementation process of a library. The library is called, Network Alchemy[^]. Alchemy performs low-level data serialization with compile-time reflection. It is written in C++ using template meta-programming.

With my first attempt at creating Alchemy, I created an object that emulated the behavior of bit-fields, yet still resulted in a packed-bit format that was ABI compatible for portable wire transfer protocols. You can read about my design and development experiences regarding the first attempt here Alchemy: BitLists Mk1[^].

My first attempt truly was the epitome of "Make it work," because I didn't even know whether what I was attempting was possible. After I released it, I quickly received feedback regarding defects and additional feature requests, and even reports of its poor performance. This pass represents the "Make it right" phase.


©2017 by Paul Watt