Embedded Alchemy

Alchemy

Alchemy is a collection of independent library components that specifically relate to efficient low-level constructs used with embedded and network programming.

The latest version of Embedded Alchemy can be found on GitHub.
The most recent entries as well as Alchemy topics to be posted soon:
  • Steganography
  • Coming Soon: Alchemy: Data View
  • Coming Soon: Quad (Copter) of the Damned
Alchemy: Documentation

What Is a Software Architect?

general, leadership, communication, CodeProject, engineering

There is a great deal of confusion surrounding the purpose of a Software Architect: the value they provide and what they are supposed to do. So much so that the title seems to be used less and less by companies, replaced with a different title such as principal or staff. I assume this is due to the perceived need for a way to distinguish a level above senior, a title which is often handed out after only about three years of experience.

Introduction

This has been a topic that I have wanted to discuss for quite some time. However, until I read the question "Are there any Software Architects here?" at CodeProject.com, I had been more compelled to focus on the more technical aspects of software development.

This is an excellent post that raises multiple questions. If you were to take a quick peek at it (it's ok, just come back), you would see about 80 responses in the discussion, with a wide variety of facts, opinions, and jokes related to software architecture and software development in general. Furthermore, many of the posts differ in opinion and even contradict each other.

I define the role of Software Architect in this entry. The definition is based upon my nearly two decades of experience developing software. Up to this point in my career, I have worked for small companies where I was the only software developer, all the way up to companies where I was one of hundreds of developers among roughly one thousand engineers of many engineering disciplines. Through the years I have performed just about every role found in the Software Development Lifecycle (SDLC), including that of software architect.

The Basics of the Role

The Software Architect is not another stepping stone above Senior Software Engineer or Software Engineer V. Architects do work closely with people in those roles, and in smaller companies they may also function as a Software Engineer V. However, the role of Software Architect is an entirely different position, with responsibilities quite different from the task of writing software.

The software architect is a technical leader for the development team, and they are also the figurehead of the product for which they are responsible. One reason they become the figurehead for the product is that the organization should hold them responsible for the technical aspects of the product.

This makes them a good point of contact for information regarding the product, especially for personnel in the organization that may not be closely tied with the product. They may not always be the best person to answer a question, but they most certainly know who is.

As a leader, the architect is responsible for fostering the unity and trust among their team that success requires. The technical aspect requires the architect to mentor and guide the team toward a maintainable, high-quality product. Moreover, they provide technical guidance and recommendations to the customer, which quite often is their own management team.

The Value

The architect adds value to the product and the organization by providing structure to both the design and implementation of the system, as well as the flow of information. If you are a fan of Fred Brooks and his book, The Mythical Man Month, you will recall that he states the implementation of a system tends to mimic the communication structure of your organization. In my experience, this has always proven to be true.

Improved Quality and Stability

Design by committee can work. However, you often need a group of like-minded individuals whose opinions do not differ too drastically for this to succeed. In extreme circumstances, where opinions diverge wildly, the group must be willing to forego their egos and accept the decisions that are made, even when a decision is the polar opposite of what they believe to be best. When all else fails, there is a final decision-maker at the top.

Thus enters the architect. They are, after all, responsible for the product that is ultimately designed and built. They help the team make decisions that are aligned with the macro-scale goals of the organization, while the development team focuses on the design decisions that are most beneficial at the micro-scale of the feature or system for which they are responsible.

Communication

A good flow of information is often required for harmony and success within a group, and it can lead to other desirable benefits such as a high level of morale and greater productivity. Effective communication is essential; technical endeavors especially suffer when teams do not communicate effectively.

Development may move along steadily during the initial stages. However, if the integration of the components was not planned well in advance, progress will screech to a halt. Adjustments will need to be made for each component to be joined correctly. The architect can monitor the progress and direction for each developer or team, and provide guidance early on to correct course for the intended target.

The management team and key stakeholders communicate with the architect to remain in touch with the technical aspects of the system that require their attention, such as clarifications on requirements, or missing resources that are necessary to continue making progress on the project.

One problem I see talented engineers struggle with is adjusting the language and content of their message to match their audience. Generally, they tend to be too technical and provide too many extraneous details for the important message to be properly conveyed. A good architect helps bridge this communication gap; they are able to choose the style of communication that best conveys the details and persuades their audience.

Roles

At this point I would like to digress briefly and discuss the different roles that exist on a software development project, primarily because I often run across the question "What does an architect do that a developer doesn't?" This will also provide some context for the final section, in which I describe what an effective software architect does.

I mentioned there are many roles in software development. Each role is responsible for performing certain tasks. The problem is, the responsibilities for each role differ based on the organization. The size, culture and industry can affect how responsibilities are organized.

A company with a small staff may assign multiple tasks to each of its employees; to complete all of the work in a timely fashion, the different roles may be responsible for overlapping tasks. A large organization, meanwhile, may have many people that each focus on performing a single task. All the while, both companies may use the same titles to describe the roles of their employees.

It is important to know what your role and responsibilities are within your organization to be able to succeed at your own job, not to mention contribute to the success of the project and company. Therefore, I think it makes more sense to discuss the basic tasks that must be accomplished, or commonly are included, within a company's SDLC.

Tasks

Here is a non-comprehensive list of the technical tasks that are required to develop software.

  1. Analysis: Determine what you want, need, or should build.
  2. Requirements Gathering: Create a definition of what is to be built.
  3. Planning: Create the budget and the schedule. This task is not entirely technical, but the schedule and budget will be more accurate with technical guidance.
  4. Coordination: This is a bit abstract, however, it basically covers all communication and resource scheduling.
  5. Design: Create plans for the structure and operation of your product.
  6. Development: Build the product.
  7. Verification: Verify that it meets requirements, specifications and quality levels.
  8. Source Control Management: Organization of the resources required to build the product.
  9. Documentation: Useful or necessary information to the creators and users.

The only task listed above that is strictly necessary is development. However, the process of development and the quality of the product suffer when the other tasks are omitted; the same effect can occur when the level of effort for any of the other tasks is reduced.

Example: Analysis

How difficult is it to build a jigsaw puzzle when you do not even know what the final image is? It is possible. However, it is much more difficult without knowing what you are aiming for.

Let's complicate the example further. How much more difficult does it become to assemble the puzzle when:

  • Extraneous pieces from another puzzle are added to your pile of loose pieces
  • The number of pieces in the puzzle is not known
  • No corner pieces exist
  • No edge pieces exist

Example: Coordination

When you run in a three-legged race, the team that is most coordinated has a great advantage. As coordination increases, team members spend less time fighting each other trying to find a rhythm.

The same concept holds true when you have more than a single person performing tasks on a project. These are the types of things that should be targeted through communication and coordination:

  • Agreement between the groups for how their systems will interoperate.
  • Scheduling project tasks to minimize conflicts and dependencies. If each task depends linearly upon the previous one, it will not make sense to have more than one person on the project at a time.
  • Good communication throughout the project helps spot potential issues and address them before they are too difficult to tackle.
  • If the schedule requires your software team to develop at the same time the hardware is being developed, some alternate and temporary solution will be required for the developers. Virtual machines or reference hardware may be acquired for software to use until the hardware is available. This is also an effective solution when there is a shortage of real hardware compared to the number of developers on your team.

Return to Roles

I will describe the set of tasks the software architect may perform for a small to moderately sized team. We will define this as about 20 people, of whom 8 or so are developers. The remainder of the team includes product managers, personnel management, marketing, sales, and quality control.

Analysis, Requirements, Planning and Documentation

The architect should be involved from the beginning. Once you know there may be a product to create, or a new engineering cycle on an existing project, bring the architect in. They can assist with the analysis, requirements gathering, and planning tasks. They can spend a fraction of their time acting as a consultant for these tasks. They can help advise and make sure the right questions get asked, the proper information is gathered, and only realistic promises are made. When it comes to creating documentation, the architect and development team should be accessible to answer questions for the technical writers.

Coordination

As a leadership role, one of the primary responsibilities of the architect is to communicate; more so listening than speaking. One of the most valuable things that an architect, or a software lead or manager for that matter, can do is to make sure their team understands what they are building, then make sure each individual understands how their work will contribute to the final project.

I have seen projects go from extremely behind schedule, to finishing on time, after a new software lead was put into place. On one particular project, the developers had a vague idea what they were building, but on the surface it only seemed like another version of the five things they had built before. The new lead spent the first few days going over the project, its goals, what made it so cool, and verified each member of the team knew what they were responsible for. At that point, they moved forward with excitement and a new understanding. When some engineers finished their work, they jumped in and helped in other areas where possible. Do not underestimate the value of proper team coordination.

Design

Design is definitely something that the software architect will be involved with. However, I think that most people start to misunderstand the responsibility of the architect when the task of design is mentioned. The perception that I see the most often is that the architect designs the entire system; and this can cause some angst among developers that believe this is the case.

The cause of any angst may be because the developers believe their control and creative freedom will be limited; hopefully that is not the case. I have a bit more to say about this later in the essay.

The architect should take responsibility for the overall structure of the program. High-level behavior, control-flow, and the technologies to be utilized will all be determined by the architect. Other software developers are given the responsibility to design smaller sections of the program at a finer level of detail. Their design should be guided by the structure the architect puts in place.

Development

Any work performed up to this point is mostly focused on making this stage run smoothly. Hopefully all of the assumptions are correct, and the requirements are fixed. If not, the team must adapt. This is where the architect's responsibilities really become important. A foundation designed to be flexible with logically separated modules will have a better chance of adapting to any surprises that appear during development.

The architect is most likely to play more of a support role during this phase than that of a primary producer. Their time will be spent guiding and mentoring the developers, inspecting the results, and providing support where it is most valuable. Moreover, they keep the technical teams aligned with one another, and the business and management teams informed of progress.

Furthermore, any surprises or new discoveries that appear along the way can be adequately managed by the architect. On the other hand, any event that requires the architect's attention could cause further delays if their time is scheduled nearly 100% toward development work items.

Verification

This is one phase I believe the architect should have as little to do with as possible, because you want an independently verified product. QA teams usually get the raw end of the deal when it comes to verifying and shipping software: if the programmers overrun their schedule, QA is often tasked with finding creative ways of making the quality of the software better, faster.

If the architect is involved, the integrity of the verification process could be compromised. QA is a control gate to verify that things work correctly before they are released to the customer. The last thing you want is the architect and developers influencing QA's findings; making excuses for why something is not a bug, or at least why it should be classified as a level 4 superficial bug, rather than a level 2 major.

The architect should only coordinate with QA in order to get QA the resources they need to properly perform this task.

Source Control Management

There is so much that can be said about software SCM, but I am not going to say it here; it is a complicated task that deserves an entire essay of its own. The bottom line is that the architect must be involved in the SCM for the product. It is crucial during development, and damn near critical that SCM is handled according to a policy the architect defines (based upon the guidelines and strategies of the organization, of course).

Some products live a single existence, and slowly evolve with each release. Others spring into existence, and core components are reused to build other projects, and the process is then repeated ad infinitum. If there is no one managing the source code appropriately, or the way it is managed does not work for everyone, you may just get forked.

An Effective Software Architect

I was going to write a section titled Qualities. However, qualities are subjective, and you could probably guess the items on the list. It would look like every job ad for a software architect, and nearly the same as every job ad for a software engineer. Just fill in the blank for the desired number of years of experience.

Because that list of qualities is so predictable and repeated everywhere, it would not add any value in conveying the purpose and value of a software architect. Therefore, I thought it would be better to describe a few actions an effective software architect performs and the potential benefits that can be realized.

Focus on the Future

Jargon litters our lives. When job ads say "work in a fast-paced environment", that's a nice way of saying, "We want it right now!" No matter how quickly a piece of software can be developed, it still isn't quick enough. That is where many projects go awry; they are too focused on just today. Tomorrow always becomes another today, and the next thing you know you're programming in a swamp of code.

The software architect should focus on the long-term direction of a product while executing toward the short-term goals of the business. It is good to have goals; they help provide direction. Every step forward may not lead directly to the goal; however, if the goal is always kept in mind when design decisions are made, it becomes more attainable.

Shortcuts will be taken, and technical debt will accrue on the product. There never seems to be enough time to go back and correct all of those blemishes. However, the design can be made to mitigate short-term decisions and still provide a stable path towards the goals of the product.

Management and Business Development need to support the Software Architect by providing them with goals and information regarding the product. This will help the architect develop a vision for the project and guide it toward long-term goals, despite all of the rash and short-term decisions that are usually made in software development.

Create Unity

This is a more focused description of the coordination task above. This action is focused on building and maintaining a level of trust between all of the development team, as well as the management and other stake-holders that are involved with the project. As I mentioned earlier, the software product tends to mimic the communication style of the organization. Therefore a more unified team is more likely to develop a program that is unified in structure and quality. There is more to unity than the final structure of the program.

Trust

You may think it is odd that I started an essay discussing computer programming, and now I am on the topic of trust. For a team to work well together, they must trust each other. They must also trust the leaders that are directing them. The software architect is in a perfect position to help foster that trust.

They are a technical leader, which means they focus their attention on the technology and the way it is used. The role itself does not manage people, although some organizations make the architect the manager of the developers as well. I believe this is a mistake, because it throws away the opportunity to have a role that can mediate situations of mistrust between development and management, especially with respect to the technical aspects of the project.

The list below briefly describes how trust can be fostered with the other roles in the organization:

  • Developers:

    Listen to the development team. Incorporate their ideas into the design where they fit, and be sure that they get credit for their ideas and work. When someone shares an idea and then watches someone else implement it and get the credit, they will tend to share less.

    That's unfortunate, because the variety of experience and ideas that come from a collective group potentially provides more ideas to choose from when searching for a solution.

  • Management:

    A software architect needs the trust of management to succeed for two simple reasons:

    1. Funding: The managers control the purse strings of the business, and they decide where money is invested. If the trust of management is lost, they may be less likely to invest in your product. Developers may be peeled off and moved to another project that is deemed a higher priority. In the worst case, they decide they no longer want to fund you or your position.

    2. Support: The software architect is the steward of the product; they take all of the responsibility yet do not own it. In many cases, you will simply do the research and present the facts, possibly providing a few options for management to choose from. However, for the topics that matter most, persuading management to support your initiatives may be crucial. It is much easier to persuade someone when you are in good favor with them. Moreover, you may find yourself in a conflict with the development team, and without the support of management, you may lose that battle.

  • Business Development and Marketing:

    This group of interested parties becomes important to the architect if their software is a product that is owned and sold by the company. Having a good line of communication with the groups that drive the business's growth is extremely important. Information is the most valuable thing in our industry; a lack of information leaves us to speculate. It's better to be in line with the initiatives set by the company, if for no other reason than you may spot problems before too many resources have been invested; change becomes much more difficult at that point.

    One other potential benefit of creating a strong relationship with these groups is they can gain a better grasp of what your product does. This is important because it allows them to consider the existing capabilities of your product when they are looking to convert a business opportunity into a sale. You flow information upstream, they flow information and potentially more sales back downstream.

  • Customers:

    The customer is the reason we write the software in the first place. The best way to earn trust with the customer is to listen to them. Sometimes it may seem like they don't know what they're talking about. However, it could simply be because you two are using different terms to mean the same things.

    Therefore, it is important to clarify what you understand the customer is telling you. A quick way to lose their trust, as well as their confidence in you, is to ignore the advice or requests of the customer. If you can't do something or decide that you are not going to do it, have that discussion with them rather than avoid them.

Let's compare trust to technology, or a software codebase. You need to maintain relationships, otherwise that bond of trust starts to weaken. I am sure there is already a fancy term to describe this, like trustfulness debt. If not, there you go. Open and transparent communication will help develop trust, and returning to visit with your contacts periodically is another way to help maintain that bond. When trust is lost, it is very difficult to earn back.

Utilize the Team's Abilities

In a way, this is the Software Architect showing trust in their development team. People expect the Software Architect to be the smartest person on the team, or the most knowledgeable. That does not have to be the case, and usually it depends on the question or topic that you are referring to. Architects are good at seeing the Big Picture. In most cases I would expect there to be a number of developers with finer skills when referring to a deep domain topic, such as device drivers.

Say the architect is responsible for the development of an operating system kernel. There are many generalizations and design decisions that can be made to create structure for the kernel. Then a bit more thought would go towards a driver model and its implementation to simplify that task. When you reach the implementation for each particular type of driver, there are different nuances between file-system drivers, network drivers, and port I/O drivers. At some point, you will reach the limit of the architect's knowledge and expertise, and you will reach the realm of the domain expert.

Some engineers like to learn every minute detail about a topic, no matter how abstruse; that is the domain expert. Typically they do not do well in the role of architect, because they tend to get caught up in the low-level details when it is not necessary; think depth-first search. Nonetheless, this is the perfect candidate for the architect to trust and depend on when knowledge and advice is required in the expert's domain of specialty.

Summary

When the word architect is used, it is usually associated with design, and every programmer already does design (whether it is formal or on-the-fly is another discussion). I think it would help clarify the position if effort were made to emphasize structure when discussing the software architect, with regards to both the team and the software product. This may help disambiguate the purpose and value a software architect provides, and distinguish the role from the next step up after senior engineer.

There is much more to this role than dictating how the program should be built; and if you have read the entire article, you now know that isn't even one of the architect's responsibilities.

C++: SFINAE

general, adaptability, CodeProject, C++, design

This post will focus on the concept of SFINAE: Substitution Failure Is Not An Error. This core concept is one of the reasons templates are even possible, and it relates exclusively to the processing of templates. The community refers to it as SFINAE, and this entry focuses on two important aspects of it:

  1. Why it is crucial to the flexibility of C++ templates and the programming of generics
  2. How you can use it to your advantage

What the hell is it?

This is a term used exclusively in C++, which specifies that an invalid substitution of a template parameter is not, by itself, an error. The situation relates specifically to overload resolution among the candidate functions under consideration. Let me restate this without the official language jargon.

If there exists a collection of potential candidates for template substitution, even if a candidate is not a valid substitution, this will not trigger an error. It is simply eliminated from the list of potential candidates. However, if there are no candidates that can successfully meet the substitution criteria, then an error will be triggered.

(Figure: Template type selection, a Venn diagram of the candidate overload sets)

SFINAE < Example >

I gave a vague description that somewhat resembled a statement in set theory, and I added the Venn diagram above to hopefully add more clarity. However, there is nothing like seeing a demonstration in action to illustrate a vague concept. Note that this concept is valid for class templates as well.

Below I have created a few overloaded function definitions. I have also created two function templates that use the same name (overloaded), but have completely different structures. This example demonstrates the reason for the original rule:

C++

#include <iostream>
using std::cout;
 
struct Field
{
  typedef double type;
};
 
template <typename T>
typename T::type Scalar(typename T::type value)  
{
  return value * 4;
}
 
template <typename T>
T Scalar(T value)
{
  return value * 3;
}

The first case below is the simpler example: it only requires, and accepts, types where the value can be used directly, such as the intrinsic types, or types that provide a value conversion operator. More details are annotated above each call.

C++

int main()
{
  // The version that requires a sub-field called "type"
  // will be excluded as a possibility for this version.
  cout << "Field type: " << Scalar<int> (5) <<"\n";
 
  // In this case, the version that contains that
  // sub-field is the only valid type.
  cout << "Field type: " << Scalar<Field>(5) << "\n";
}

Output:

Field type: 15
Field type: 20

Curiosity

SFINAE was added to the language to make templates usable for fundamental purposes. For example, it was envisioned that a string class might want to overload the operator+ function, or that something similar would be done for an unordered collection object. However, it did not take long for programmers to discover some hidden powers.

The power that was unlocked by SFINAE was the ability to programmatically determine the type of an object, and force a particular overload based on the type in use. This means that a template implementation is capable of querying for information about its type and qualifiers at compile-time. This is similar to the feature that many languages call reflection, although reflection occurs at run-time (and also incurs a run-time penalty).

Rumination

I am not aware of a name for this static form of reflection. If there is one, could someone comment and let me know what it is called? If it hasn't been named, I think the name should be something similar to reflection, yet still a separate concept.

When I think of static, I think of "fixed-in-place" or not moving. Meditation would fit quite well, it's just not that cool. Very similar to that is ponder. I thought about using introspection, but that is just a more pretentious form of reflection.

Then it hit me. Rumination! That would be perfect. It's a verb that means to meditate or muse; ponder. There is also a secondary meaning for ruminate: To chew the cud; much like the compiler does. Regardless, it's always fun to create new buzzwords. Remember, Rumination.

Innovative Uses

I make heavy use of SFINAE in my implementation of Network Alchemy, mostly through the features provided by the <type_traits> header. The construct std::enable_if is built upon the concept of SFINAE. I am ashamed to admit that I have not yet been able to understand and successfully apply std::enable_if. I have come across many situations where it seemed like it would be an elegant fit. When I figure it out, I will be sure to distill what I learn, and explain it so you can understand it too. (I understand enable_if now.)
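To give a feel for why enable_if is pure SFINAE, here is a minimal sketch of the idea. It mirrors the way the standard library defines the template, but the name enable_if_sketch and the integral-only half() function are assumptions made up for this illustration:

C++

#include <iostream>
#include <type_traits>  // std::is_integral
 
// Sketch of the idea behind std::enable_if: the nested
// "type" member only exists when the condition is true.
template <bool B, typename T = void>
struct enable_if_sketch { };
 
template <typename T>
struct enable_if_sketch<true, T>
{ typedef T type; };
 
// Hypothetical function: substitution of the return type fails,
// and the overload silently drops out of the candidate set,
// unless T is an integral type.
template <typename T>
typename enable_if_sketch<std::is_integral<T>::value, T>::type
half(T value)
{ return value / 2; }
 
int main()
{
  std::cout << half(42) << "\n";  // OK: int is integral
  // half(3.14);  // error: no matching overload; SFINAE removed it
}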

Useful applications of SFINAE

To read a book, an article, or a blog entry and find something genuinely new and useful that I have an immediate need for is fantastic. I find it extremely irritating when not enough effort is put into the examples that usually accompany the brief explanation; this makes the information in the article nearly useless. It is even more irritating when the examples are less complicated than what I could create with my limited understanding of the topic to begin with.

It is extremely frustrating when I believe that I have found a good solution, yet I cannot articulate the idea well enough to apply it. So unless you get extremely frustrated by useful examples applied to real-world problems, I hope these next few sections excite you.

Ruminate Idiom

We will create a meta-function that can make a decision based on the type of T. To start, we need to introduce the basis on which the idiom is built: a scenario where there is a set of choices, and only one of the choices is valid. Let's start with the sample, and continue to build until we reach a solution.

We will need two different types of different sizes:

C++

template < typename T >
struct yes_t
{ char buffer[2]; };
 
typedef char no_t;

We will also need two components that are common in meta-programming:

  1. the sizeof operator
  2. static-member constants

We define a meta-function template that will set up a test between the two types, using the sizeof operator to determine which type was selected. This gives us the ability to make a binary decision in the meta-function.

C++

template < typename T >
struct conditional
{
private:
  template < typename U >
  static yes_t  < /* conditional on U */ > selector(U);
 
  static no_t selector(...);
 
  static T* this_t();
 
public:
  static const bool value =
    sizeof(selector(*this_t())) != sizeof(no_t);
};

We started with static declarations of the two types defined earlier. However, there is no conditional test defined for the yes_t template yet. It is also important to understand that the inner template parameter name must be something different from the name used in the outer template's parameter; otherwise the template parameter of the enclosing object would be used, and SFINAE would not apply.

The lowest type in the order of precedence for C++ overload resolution is the ellipsis, .... At first glance this looks odd; however, think of it as the catch-all type. If the conditional statement for yes_t produces an invalid type definition, the no_t version of selector will be chosen instead.
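Here is a tiny, self-contained illustration of that precedence rule (the choose() functions are made up for this demonstration): an exact match always beats the ellipsis, which is selected only when nothing else is viable.

C++

#include <iostream>
 
void choose(int)  { std::cout << "int overload\n"; }
void choose(...)  { std::cout << "catch-all overload\n"; }
 
int main()
{
  choose(42);      // "int overload": an exact match wins
  choose("text");  // "catch-all overload": nothing better is viable
}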

It is important to note that it is not necessary to define the function bodies for selector, because they will never actually be executed; therefore, no definition is required by the linker. We also use the stub function this_t(), which returns a T*, rather than an expression that invokes T(), because T may not have a default constructor.

It is also possible to declare the selector function to take a pointer to T. However, a pointer type will allow void to become valid as a void*. Also, any type of reference will trigger an error because pointers to references are illegal. This is one area where there is no single best way to declare the types. You may need to add other specializations to cover any corner cases. Keep these alternatives in mind if you receive compiler warnings with the form I presented above.

More Detail

You were just presented a few facts, a bit of code, and another random mix of facts. Let's tie all of this information together to help you understand how it works.

  • SFINAE will not allow a template substitution error to halt the compilation process
  • Inside of the meta-function we have created two overloads of selector that accept T
  • We have selected type definitions that will help us determine if a condition is true based upon the type (an example condition is shown next)
  • We also added a catch-all declaration, selector(...), for the types that do not meet the conditional criteria
  • The stub function this_t() has been created to be used in a sizeof expression; the sizeof expression compares the selected overload's return type against no_t to determine the result of our conditional

The next section contains a concrete example conditional that is based on the type U.

is_class

Months ago I wrote about the standard header file, Type Traits. This file contains some of the most useful templates for creating templates that correctly support a wide range of types.

The classification of a type can be determined, such as differentiating between a plain-old-data (POD) struct and a full class type. You can determine if a type is const or volatile, or if it's an lvalue, pointer, or reference. Let me demonstrate how to tell if a type is a class type or not. Class types are the compound data structures: class, struct, and union.
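For reference, the standard <type_traits> header can already answer these classification questions. A few illustrative compile-time checks (C++11 names; the Pod struct is a made-up example):

C++

#include <type_traits>
 
struct Pod { int x; };
 
// Each assertion is evaluated at compile time.
static_assert( std::is_class<Pod>::value,       "Pod is a class type");
static_assert(!std::is_class<int>::value,       "int is not a class type");
static_assert( std::is_pointer<int*>::value,    "int* is a pointer");
static_assert( std::is_const<const int>::value, "const int is const-qualified");
 
int main() { }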

What we need in the conditional template parameter is something that can differentiate these types from any other type. Class types are the only types for which it is legal to form a pointer-to-member, written with the ::* syntax.

Here is the definition of this template meta-function:

C++

template < typename T >
struct is_class
{
private:
  template < typename U >
  static yes_t  < int U::* > selector(U);
 
  static no_t selector(...);
 
  static T* this_t();
 
public:
  static const bool value =
    sizeof(selector(*this_t())) != sizeof(no_t);
};
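A quick sanity check of the meta-function, assuming the yes_t and no_t helpers and the is_class template above are all in the same translation unit (Widget is a made-up type):

C++

#include <iostream>
 
struct Widget { };
 
int main()
{
  std::cout << std::boolalpha
            << is_class<Widget>::value << "\n"   // true:  class type
            << is_class<int>::value    << "\n";  // false: fundamental type
}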

Separate classes based on member declarations

Sometimes it is beneficial to determine if an object supports a certain function before you attempt to use that feature. An example is the find() member function that is part of the associative containers in the C++ Standard Library: it is recommended that you prefer the member function of a container over the generic algorithm in the library.

Let's first present an example, then I'll demonstrate how you can take advantage of it and apply the call:

C++

template < typename T >
struct has_find
{
private:
  // Identify by forming a pointer to the member function.
  // Substitution of check's second parameter succeeds only
  // when U has a find() member with the signature used by
  // the associative containers.
  template < typename U,
             typename U::iterator (U::*)(const typename U::key_type&) >
  struct check;
 
  template < typename U >
  static yes_t< U > selector(U, check< U, &U::find >* = 0);
 
  static no_t selector(...);
 
  static T* this_t();
 
public:
  static const bool value =
    sizeof(selector(*this_t())) != sizeof(no_t);
};

Applying the meta-function

The call to std::find() is very generic; however, it can be inconvenient. Imagine we want to build a generic function of our own that will allow any type of container to be used. We could encapsulate the find call itself in a more convenient form, then build a single version of the generic function, as opposed to creating specializations of the implementation.

This type of approach allows us to encapsulate the pain-point in our function that would otherwise force a specialization of the implementation for each type our generic function is intended to support.

We will need to create one instance of our meta-function for each branch that exists in the final chain of calls. However, once this is done, the same meta-function can be combined in any number of generic ways to build bigger and more complex expressions.

C++

#include <algorithm>  // std::find
#include <iostream>
 
namespace CotD
{
// Primary template: selected when hasfind is true;
// it calls the container's member function find().
template < typename C, bool hasfind >
struct call_find
{
  bool operator()(C& container, const typename C::value_type& value, typename C::iterator& result)
  {
    std::cout << "T::find() called\n";
    result = container.find(value);
    return result != container.end();
  }
};
 
} // namespace CotD

C++

namespace CotD
{
// Specialization: selected when hasfind is false;
// it falls back to the generic std::find() algorithm.
template < typename C >
struct call_find <C, false>
{
  bool operator()(C& container, const typename C::value_type& value, typename C::iterator& result)
  {
    std::cout << "std::find() called\n";
    result = std::find( container.begin(),
                        container.end(),
                        value);
 
    return result != container.end();
  }
};
 
} // namespace CotD

This is a very simple functor. In my experience, the small, generic, and cohesive functions and objects are the ones that are most likely to be reused. We can now use it in a more specific context, which should still remain generic for any type of standard container:

C++

namespace CotD
{
template < typename T >
void generic_call(T& container)
{
  typename T::value_type target = typename T::value_type();
  // ... Code that determines the value ...
 
  typename T::iterator item;
  // Note the extra (): construct the call_find functor,
  // then invoke its operator().
  if (!CotD::call_find<T, has_find<T>::value>()(container, target, item))
  {
    return;
  }
 
  // ... item is valid, continue logic ...
}
 
} // namespace CotD

C++

#include <set>
#include <vector>
 
// The container typedefs this example assumes:
typedef std::set<int>    SetInt;
typedef std::vector<int> VecInt;
 
int main()
{
  bool res = has_find<SetInt>::value;   // true: std::set has a member find()
 
  CotD::call_find<SetInt, has_find<SetInt>::value> set_call;
 
  SetInt::iterator set_iter;
  SetInt s;
  set_call(s, 0, set_iter);
 
  CotD::call_find<VecInt, has_find<VecInt>::value> vec_call;
  VecInt::iterator vec_iter;
  VecInt v;
  vec_call(v, 0, vec_iter);
}

Output:

T::find() called
std::find() called

We made it possible to create a more complex generic function with the creation of the small helper functor, CotD::call_find. The resulting CotD::generic_call is agnostic to the type of container that is passed to it.

This allowed us to avoid the duplication of code for the larger function, CotD::generic_call, due to template specializations.

There is also a great chance that the helper template will be optimized by the compiler to eliminate the unreachable branch due to the types being pre-determined and fixed when the template is instantiated.

Summary

Substitution Failure Is Not An Error (SFINAE) is a subtle addition to C++ that makes overload resolution possible with templates, just as it is for ordinary functions and classes. This subtle compiler feature opens the doors of possibility for many applications in generic C++ programming.

Are You Mocking Me?!

general, adaptability, portability, CodeProject, maintainability

It seems that every developer has their own way of doing things. I know I have my own methodologies, and some probably are not the simplest or the best (that I am aware of). I have continued to refine my design, development, test, and software support skills throughout my career.

I recognize that everyone has their own experiences, so I usually do not question or try to change someone else's process. I will make a suggestion if I think it might help. However, sometimes I just have to ask, "Are you sure you know what you are doing?" For this entry I want to focus on unit testing, specifically with mock objects.

What do you mean by "Mock"?

I want to focus seriously on one unit-test technique that is so misused I would go so far as to call it an anti-technique: the use of Mock Objects.

Mock objects and functions can fill an important gap when a unit test is attempting to eliminate dependencies or to avoid the use of an expensive resource. Many mock libraries make it pretty damn simple to mock the behavior of your code's dependencies. This is especially true when the library is integrated into your development environment and will generate much of the code for you.

There are other approaches, which I believe are a better first choice. Whatever method you ultimately choose should depend on the goal of your testing. I would like to ruminate on some of my experiences with mock objects, as well as provide some alternate possibilities for clipping dependencies in unit tests.

Unit Testing

Mock objects are only a small element of the larger topic of unit testing. Therefore, I think it's prudent to provide a brief overview of unit testing to set the context of this discussion, as well as align our understanding. A unit test is a small isolated test written by the programmer of a unit of code, which I will universally refer to as a system.

You can find a detailed explanation of what a unit test is and how it provides value in this entry: The Purpose of a Unit Test.
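To make that concrete, here is a minimal sketch of a unit test with no framework at all; the Add() function and the bare asserts are assumptions made up for illustration:

Code

#include <cassert>
 
// The system under test: one small, isolated unit.
int Add(int a, int b) { return a + b; }
 
int main()
{
  // Written by the programmer, repeatable, and free of
  // external dependencies.
  assert(Add(2, 3) == 5);
  assert(Add(-1, 1) == 0);
  return 0;
}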

It is very important to try to isolate your System Under Test (SUT) from as many dependencies as possible. This will help you differentiate between problems caused by your code and those caused by its dependencies. In the book xUnit Test Patterns, Gerard Meszaros introduces the concept of a Test Double used to stand in for these dependencies. I have seen many different names used to describe test doubles, such as dummy, fake, mock, and stub. I think it is important to clarify some vocabulary before we continue.

The best definitions that I have found, and use today, come from an excellent blog entry by Martin Fowler, Mocks Aren't Stubs. Martin defines a set of terms that I will use to differentiate the individual types of test doubles.

  • Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.
  • Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).
  • Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it 'sent', or maybe only how many messages it 'sent'.
  • Mocks are what we are talking about here: objects pre-programmed with expectations which form a specification of the calls they are expected to receive.

Martin's blog entry above is also an excellent source for developing a deeper understanding of the two general types of verification that I will talk about next.

So a Mock is just a different type of test double?!

No, not really.

Besides replacing a dependency, mock objects add assertions to their implementation. This allows a test to report if a function was called, if a set of functions was called in order, or even if a function was called that should not have been called. Compare this to simple fake objects, and fake objects look like cheap knock-offs (as opposed to high-end knock-offs). With the addition of these assertions, the result becomes a form of behavioral verification.

Mock objects can be a very valuable tool for use with software unit tests. Many unit test frameworks now include or support some form of mocking framework as well. These frameworks simplify the creation of mock objects for your unit tests. A few frameworks that I am aware of are EasyMock and jMock for Java, NMock for .NET, and Google Mock if you use Google Test to verify C++ code.
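As a taste of what these frameworks look like, here is a small Google Mock sketch; the Calculator interface and the expectation values are made-up examples, not part of any earlier code:

Code

#include <gmock/gmock.h>
#include <gtest/gtest.h>
 
using ::testing::Return;
 
class Calculator
{
public:
  virtual ~Calculator() { }
  virtual int Compute(int input) = 0;
};
 
class MockCalculator : public Calculator
{
public:
  MOCK_METHOD1(Compute, int(int));
};
 
TEST(BehaviorVerification, ComputeCalledTwice)
{
  MockCalculator mock;
 
  // Expect exactly two calls with the argument 21, and
  // specify the value returned by each call.
  EXPECT_CALL(mock, Compute(21))
      .Times(2)
      .WillOnce(Return(42))
      .WillRepeatedly(Return(43));
 
  // ... exercise the SUT with &mock; the framework fails
  // the test if the expectation is violated ...
}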

Behavior verification

Mock objects verify the behavior of software. For a particular test you may expect your object to be called twice and you can specify the values that are returned for each call. Expectations can be set within the description of your mock declaration, and if those expectations are violated, the mock will trigger an error with the framework. The expected behavior is specified directly in the definition of the mock object. This in turn will most likely dictate how the actual system must be implemented in order to pass the test. Here is a simple example in which a member function of an object registers the object for a series of callbacks:

Code

// No, I'm not a Singleton.
// I'm an Emporium, so quit asking.
class CallbackEmporium
{
  // Provides functions to register callbacks:
  // SetCallbackTypeA(fn, context) and SetCallbackTypeB(fn, context)
};
 
CallbackEmporium& TheEmporium()
{ ... }
 
// Target SUT
void Object::Register()
{
  TheEmporium().SetCallbackTypeA( Object::StaticCallA, this );
  TheEmporium().SetCallbackTypeB( Object::StaticCallB, this );
}

Clearly, the only way to validate this function is by verifying its behavior. There is no return value, so that cannot be verified. Therefore, a mock object will be required to verify the Register function. With the syntax of a mock framework, this will be a snap, because we add the assertion right in the declaration of the mock for the test, and that's it!

Data verification

Ultimately, the function Object::Register() is interested in whether the two proper callbacks were registered with TheEmporium. So if you nodded your head in agreement in the previous section when I said "Clearly the only way...", I suggest you stop after you read sentences like that and challenge the author's statement. Most certainly there are other ways to verify, and here is one of them.

1 point if you paused after that trick sentence, 2 points if you are reserving judgment for evidence to back up my statement.

It would still be best to have a stand-in object to replace TheEmporium. However, if there is some way for us to verify, after the SUT call, that the expected callback functions were registered with the correct parameters of TheEmporium, then we do not need a mock object. We have verified that the final data of the system was as expected, not that the program executed a prescribed behavior.
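Here is a minimal sketch of that approach; the FakeCallbackEmporium type and the CallbackFn typedef are assumptions made up for illustration, not part of a mock framework. The fake simply records the registrations so the test can inspect the data after the SUT returns:

Code

typedef void (*CallbackFn)(void*);
 
// A hand-rolled fake that records registrations instead of
// asserting on the sequence of calls.
class FakeCallbackEmporium
{
public:
  FakeCallbackEmporium() : callbackA(0), callbackB(0) { }
 
  void SetCallbackTypeA(CallbackFn fn, void* /*context*/) { callbackA = fn; }
  void SetCallbackTypeB(CallbackFn fn, void* /*context*/) { callbackB = fn; }
 
  CallbackFn callbackA;   // inspected by the test after the SUT runs
  CallbackFn callbackB;
};
 
// In the test, after calling object.Register(), data verification
// is two simple checks, independent of call count or order:
//   assert(fake.callbackA == &Object::StaticCallA);
//   assert(fake.callbackB == &Object::StaticCallB);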

Why does it matter?

Tight Coupling between the test and the implementation.

Suppose you wrote your mock object to verify the code in this way:

Code

// This is a mocked yet serious syntax
// for a mock-object statement to verify Register().
Mocker.call( SetCallbackTypeA()
               .with(Object::StaticCallA)).and()
      .call( SetCallbackTypeB()
               .with(Object::StaticCallB));

That will test the function as it is currently implemented. However, if Object::Register were implemented like any one of the following, the test may report a failure, even though the SUT achieved the intended and correct results.

Code

void Object::Register()
{
  TheEmporium().SetCallbackTypeB( Object::StaticCallB, this );
  TheEmporium().SetCallbackTypeA( Object::StaticCallA, this );
}

Code

// Too many calls to one of the functions
void Object::Register()
{
  TheEmporium().SetCallbackTypeA( Object::StaticCallA, this );
  TheEmporium().SetCallbackTypeB( Object::StaticCallB, this );
  TheEmporium().SetCallbackTypeB( Object::StaticCallB, this );
}

Code

// Call each function twice:
// Assign incorrect values first.
// Then call a second time with the correct values.
void Object::Register()
{
  TheEmporium().SetCallbackTypeA( Object::StaticCallB, this );
  TheEmporium().SetCallbackTypeA( Object::StaticCallA, this );
  TheEmporium().SetCallbackTypeB( Object::StaticCallA, this );
  TheEmporium().SetCallbackTypeB( Object::StaticCallB, this );
}

Code

// Assign incorrect values first.
// Then call a second time with the correct values.
void Object::Register()
{
  TheEmporium().SetCallbackTypeA( Object::StaticCallB, this );
  TheEmporium().SetCallbackTypeB( Object::StaticCallA, this );
  TheEmporium().SetCallbackTypeA( Object::StaticCallA, this );
  TheEmporium().SetCallbackTypeB( Object::StaticCallB, this );
}

All four of these implementations would remain valid under the data-validation form of the test, because the correct values were assigned to the proper parameters by the time the SUT returned.

When the Mock Object Patronizes You

Irony. Don't you love it?!

Mock objects can carry you a long way successfully. In fact, you may get to the very end of your development phase with unit tests around every object and function, and be wrapping up your component integration phase when things stop working as expected. Here are some symptoms I have personally observed:

  • The compiler complains about a missing definition
  • The linker (for the C and C++ folks) complains about undefined symbols being referenced
  • This is a network application: everything compiles and links, and the program loads and doesn't crash. It doesn't do anything else either. You connect a debugger and find it is not sending any traffic.

I have seen developers become so enthusiastic about how simple mock objects made developing tests that they virtually created an entire mock implementation. When it compiled and was executed, critical core components had a minimal or empty implementation. All of the critical logic was complete and verified; however, the glue that binds the application together, the utility classes, had not been implemented. They remained stubs.

Summary

There are many ways to solve problems. Each type of solution provides value in its own way. Some are simple, others elegant, while others sit and spin in a tight loop to use up extra processing cycles because the program doesn't work properly on faster CPUs. Just be aware of what value you are gaining from the approaches you take toward your solution. If it's the only way you know, that is a good first step. Remember to keep looking, because there may be better solutions in some way that you value more.

I Found "The Silver Bullet"!

general, leadership, communication, CodeProject, maintainability, knowledge

I found the metaphorical Silver Bullet that everyone has been searching for in software development, and it worked beautifully on my last project. Unfortunately, I only had one of them. I am pretty sure that I could create another one if I ever have to work with a beast similar to my last project. However, I don't think my bullet would be as effective if the circumstances surrounding the project vary too much from the original.

No Silver Bullet

To my knowledge, the source of this term is Fred Brooks, the author of The Mythical Man Month. He wrote a paper in 1986 called No Silver Bullet -- Essence and Accidents of Software Engineering. Brooks posits:

There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.

The concepts and thoughts that I present in this entry are deeply inspired by the assertions he states in his paper.

There have been a number of studies and papers that challenge the assertions Brooks presents in this paper. To this day, the only way programmers have increased their productivity by an order of magnitude has been by moving to a higher-level programming language. This is measured by comparing the productivity of programmers developing in assembly with those using C, then comparing C to the next level, such as C# or Java, and then on to scripting languages.

The Silver Bullet

For thoroughness

I feel obligated to let you know where the term Silver Bullet originates. The mythical creature, the werewolf, is fabled to be killable only by silver weapons. The modern-day equivalent is a bullet made from silver. Depending on which mythology, author, video game, or table-top gaming platform you are most fond of, other types of weapons may harm the creature, but only temporarily. Unless a weapon is made from pure silver, it will not completely kill the creature.

For the uninitiated

I feel obligated to interpret this metaphor, which I think is a great one. Your project is the werewolf. For decades, millions of programmers and project managers have been searching for a way to kill the beast; that is, to reliably execute the development of a software project. Software development has been notoriously plagued by cost and schedule overruns due to our poor ability to estimate the costs required to develop a project. The silver bullet is that special tool, process, programming language, programmer food, or sensory deprivation chamber that tames this beast and allows a project to be developed with some increased form of reliable predictability.

Software Engineering

I want to take a scenic detour from the primary topic of this post, to perform an exercise that may help demonstrate my point.

Ever since I started programming I have heard debates, and also wondered myself, "Is computer programming a science or an art?" I have never heard a convincing argument or definition that I could accept. Each year I find myself further away from what I believe to be a valid definition, because I have stumbled upon many other titles that could classify where a computer programmer fits in the caste of our workforce. This is a non-exhaustive list of other labels that seem to have fit at one time or another:

  • Form of Engineering
  • Science
  • Art
  • Craft
  • Trade
  • Linguistic Translator
  • Competitive Sport (both formally and informally)
  • Modern day form of a Charlatan / Snake-oil salesman / Witch-doctor
  • A way to get paid for working with my hobby

To create a precise definition for the practice of computer programming is much like trying to nail JELLO to the wall. No matter where you go, it will be practiced differently. The culture will be different. The management will be different. The process will be different. The favored framework will be different. Experience can teach us, mold us, and jade us, just to name a few ways in which we can change. I have collected a list of my experiences that span my career in an attempt to classify the computer programmer.

Experience

I can only speak from my personal research and experiences, and both of those have mostly been focused on the realm of software development. Here is a broad sampling of my experiences related to software development.

I have read scores of books that span many aspects of software development, such as...

  • ...the programming languages du jour
  • ...the programming processes du jour
  • ...the programming styles du jour
  • ...C++, lots of books related to C++
  • ...books on the sociology of software development
  • ...books only tangentially related to software development because they were right next to the software development section in the bookstore

I have worked for a variety of companies, which...

  • ...had many intelligent people
  • ...had people of much less intelligence
  • ...had enthusiastic learners
  • ...had people with passion for technology
  • ...had people I always wondered how they passed the stringent hiring process
  • ...had bosses with a variety of management styles
  • ...aimed for completely different goals
  • ...valued results that ranged from the next quarter to ten years from now

Over my career I have also...

  • ...learned many things from colleagues
  • ...spent time unlearning some things
  • ...helped others understand and become better developers
  • ...written a lot of code
  • ...learned that the amount of code I have written is too much; there are better ways
  • ...deleted a lot of code (after my thumbs, my right pinky is the digit on my hand that I value the most)
  • ...lost a lot of code due to power outages, workstation upgrades by IT, and I suspect from BigFoot
  • ...repeated myself far too much
  • ...gathered many requirements
  • ...communicated and miscommunicated with many people
  • ...rewritten someone else's code because my way is cleaner
  • ...had my cleaner code rewritten back to its original form
  • ...had many fantastic ideas
  • ...created a fewer number of those fantastic designs
  • ...implemented even fewer programs based on those designs
  • ...implemented programs based on other people's designs
  • ...criticized the poor quality of untold amounts of code
  • ...was humbled when I discovered I wrote some of that code
  • ...was shocked when I looked at old code of mine and literally said out loud "This is my code?! When did I learn how to do that, and why don't I remember, because that's something I've always wanted to learn how to do?!"
  • ...became too emotionally attached to projects (yes, multiple times)
  • ...reached a point where I lost all motivation on a project, and it was excruciating to watch how long the LEDs took to blink 3600 (or if you prefer, 0xE10) times each hour
  • ...learned how to articulate most of my ideas
  • ...become a better listener
  • ...learned that no matter how similar a situation is, my next experience will always be different

To summarize my experiences

I have worked for a half-dozen companies in a variety of roles. Although there have been similarities, each of these companies had completely unique talent levels, cultures, goals, company values, management styles, reward systems, and more. These factors played into how well the teams worked together, the quality of the products the company produced, and how pleased the customers were; in the end, this affected how each company defined the role of programmer or software engineer.

Instincts

There is a very strong observation that I have silently mulled over for a few years now. The digital world has turned the world we know upon its head. In true digital form, there are 10 types of people in this world: those that can grasp the abstract concepts encoded in digital technology, and those that can't.

It seems that most people develop some sort of intuition when it comes to the physical world. You can sense, with possibly all five of your senses, qualities about any physical object. Will it be soft and cuddly, heavy and smooth, squishy and sticky? Those of you old enough to remember TVs when they were large, heavy, and thick know Cathode Ray Tube (CRT) technology. For some reason, we have this instinct to whack the TV on the side. One or two of those usually did the trick. Why? Mechanical components like the vacuum tubes were becoming unseated. A firm jolt let them settle correctly back into place.

Now consider a digital circuit. To all but the experts, it is not possible to tell whether that circuit card is running properly. I had a flat-screen TV give out, and before I decided to throw it out, I thought I would look online to see if there was a solution, and there was. It turned out to be cheap capacitors going bad on the power board. I opened the TV up to replace the bad capacitors, and luckily for me that is all that it was. I would have had no idea if something had gone wrong on an IC or one of the more complex components. No amount of whacking on the side of a digital TV or computer monitor will resolve the issue.

Abstract Thought

The ability to grasp abstract digital concepts is a valuable talent. Other engineering disciplines, such as electrical, chemical, and astrophysics, are like magic to most people not in those fields. However, there is one sound basis on which they all rest, and that is physics. These fields are built on the laws and models that humankind has developed to approximate our best understanding of the physical world.

What is computing based upon? The physics that allow electrical circuits to switch at lightning speed in a defined pattern to create some effect, generally a calculation. It's time to consider how we define these patterns to compute.

Computer programming is an activity that articulates an amorphous abstract thought precisely enough to be interpreted and executed by a computer. We are converting this thought pulsing in our minds into a language of sorts, to communicate with a digital creature. That is fascinating.

What is troubling is that 8 developers (we'll go back to base 10) can sit in a room listening to a set of requirements, then independently recreate that list of requirements, and quite often the result is 9 distinct lists of requirements (remember the original requirements). If these 8 developers went off to converse with their digital pets, each of them would create very similar programs and results, yet they would all differ because of each programmer's interpretation. That is not even considering programmers that do not quite understand all of the nuances of the language with which they are commanding their computer.

What was the question again?

What is Software Engineering?

It's mostly an engineering discipline and also has a strong foundation in science.

It can be a form of art, but mostly only appreciated by other programmers that understand the elegance and beauty of what the program does.

I think it is definitely a trade or a craft. The more you practice, the better you become and natural talent can sure help as well.

It was only recently that I considered the communication/linguistic/translation concept, especially when abstract thoughts and concepts are factored into how those ideas are translated. Math is very similar in the abstract concepts and models that we have created. However, math is also much better defined than computer programming.

To me, programming is very much like writing an essay on a book only using mathematical notations.

The digital nature and complexity of computer programs allow us to become charlatans. It's possible to tell your managers and customers that your program does the right thing, even though it's only good enough to appear to do the right thing. If they find the bug, we'll create a patch for that; maybe.

Software Engineering is many things, most of them great. However, it is still a relatively new discipline humans are attempting to master. It does not work very well to compare this profession to other engineering professions, because we are not bound by the laws of physics. Human creativity and stupidity are the forces that limit what can be done with computers.

Back to the Silver Bullet

I think we should continue to develop new tools, processes and energy drinks that will help developers write better code. I also think that communication is an aspect that really needs to be explored to further solidify the definition of this profession.

In order to improve our processes, everyone that has a stake in the project must consider the differences between the next project and the previous project. A team that works well and communicates effectively at a company that is doing well and has a great culture will outperform that exact same team at a company with layoffs looming in the near future. It would be interesting to take the great team working at the great company and scale the project and personnel size up by 5 times.

What will still be good?

What could go wrong?

What will need to change for the new team to succeed on this project?

Success through feedback

I think it's possible for the new team to succeed. However, they have to consider the differences between their new project and their past ones. They can't expect processes that worked well for a team of four to work equally well with a team of 20 without making adjustments. When the work is underway, the team will need to be observant and use the new information they receive to adjust their processes to ensure success.

Summary

There is no Silver Bullet. Well, maybe one or two. But not every project is a werewolf. If you run into BigFoot, don't waste your silver bullet, just protect your data.

Alchemy: Message Serialization

portability, reliability, CodeProject, C++, maintainability, Alchemy, design Send feedback »

This is an entry for the continuing series of blog entries that documents the design and implementation process of a library. This library is called, Network Alchemy[^]. Alchemy performs data serialization and it is written in C++. This is an Open Source project and can be found at GitHub.

If you have read the previous Alchemy entries you know that I have now shown the structure of the Message host. I have also demonstrated how the different fields are programmatically processed to convert the byte-order of the message. In the previous Alchemy post I put together the internal memory management object. All of the pieces are in place to demonstrate the final component of the core of Alchemy: serialization.

Serialization

Serialization is a mundane and error prone task. Generally, both a read and a write operation are required to provide any value. Serialization can occur on just about any medium including: files, sockets, pipes, and consoles to name a few. The primary purpose of a serialization task is to convert a locally represented object into a data stream. The data stream can then be stored or transferred to a remote location. The stream will be read back in, and converted to an implementation defined object.

It is possible to simply pass the object exactly as you created it, but only in special situations. You must be working on the same machine as the second process. Your system will require the proper security and resource configuration between processes, such as a shared memory buffer. Even then there are issues with how memory is allocated. Are the two programs developed with the same compiler? A lot of flexibility is lost when raw pointers to objects are shared between processes. In most cases I would recommend against doing that.

Serialization Types

There are two ways that data can be serialized:

  1. Text Serialization:
    Text serialization works with basic text and symbols. This is what happens when you edit a raw text file in Notepad: when the file is saved, the text is written out as plain text. Configuration and XML files are other examples of files stored in plain text, which makes it convenient for users to hand edit them. All data is serialized to a human-readable format (usually).
  2. Binary Serialization:
    Binary serialization is simply that, a stream of binary bytes. As binary is only 1s and 0s, it is not human friendly to read or manipulate. Furthermore, if your binary serialized data will be used on multiple systems, it is important to make sure the binary formats are compatible. If they are not compatible, adapter software can be used to translate the data into a compatible format for the new system. This is one of the primary reasons Alchemy was created. A minimal illustration of both forms follows this list.
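
To make the distinction concrete, here is a minimal, self-contained sketch (not Alchemy code) that writes the same integer both ways. The binary form is the one whose byte-order compatibility Alchemy manages.

C++

#include <cstdint>
#include <fstream>

int main()
{
  const std::uint32_t count = 1000;

  // 1. Text serialization: human readable; the file contains
  //    the four characters "1000".
  std::ofstream text("count.txt");
  text << count;

  // 2. Binary serialization: the raw 4 bytes of the integer,
  //    in whatever byte-order the host machine uses.
  std::ofstream binary("count.bin", std::ios::binary);
  binary.write(reinterpret_cast<const char*>(&count), sizeof(count));

  return 0;
}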

Alchemy and Serialization

Alchemy serializes data in binary formats. The primary component in Alchemy is called Hg (Mercury, the messenger of the gods). Hg is focused solely on the correct transformation and serialization of data. On one end, Hg provides a simple object interface that behaves similarly to a struct. On the other end, the data is serialized, and you receive a buffer that is packed according to the format you have specified for the message. With this buffer, you can send the data directly to any transport medium. Hg is also capable of reading input streams and populating a Hg Message object.

Integrating the Message Buffer

The MsgBuffer will remain an internal detail of the Message object that the user interacts with. However, there is one additional definition that will need to be added to the Message template parameters: the StoragePolicy chosen by the user. This will allow the same message format implementation to be used with many different types of mediums. Here is a list of potential storage policies that could be integrated with Alchemy:

  • User-supplied buffer
  • Alchemy managed
  • Hardware memory maps

For hardware memory maps, the read/write operations could be customized to read data on the particular platform. The Hg message format would provide a simple, user-friendly interface to the fixed memory on the machine. The additional template parameter, along with some convenience typedefs, is shown below:

C++

template < class MessageT,
           class ByteOrderT = Hg::HostByteOrder,
           class StorageT   = Hg::BufferedStoragePolicy
         >
struct DemoTypeMsg
{
  // Define an alias to provide access to this parameterized type.
  typedef MessageT                            format_type;

  typedef StorageT                            storage_type;

  typedef typename
    storage_type::data_type                   data_type;
  typedef data_type*                          pointer;
  typedef const data_type*                    const_pointer;

  typedef MsgBuffer< storage_type >           buffer_type;
  typedef std::shared_ptr< buffer_type >      buffer_sptr;

  // ... Field declarations
private:
  buffer_type       m_msgBuffer;
};

The Alchemy managed storage policy, Hg::BufferedStoragePolicy, is specified by default. I have also implemented a storage policy that allows the user to supply their own buffer, called Hg::StaticStoragePolicy. This is included with the Alchemy source.
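
For example, selecting the user-supplied buffer policy is just a matter of naming it in the third template parameter. The alias below is hypothetical, but it follows the parameter order shown in the definition above:

C++

// Hypothetical alias: the same message format, serialized into a
// user-supplied buffer rather than an Alchemy-managed one.
typedef Message< DemoTypeMsg,
                 HostByteOrder,
                 Hg::StaticStoragePolicy >    DemoMsgStatic;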

Programmatic Serialization

The solution for serialization is very similar to the byte-order conversion logic that was demonstrated in the post where I introduced the basic Alchemy: Prototype[^]. Once again we will use the ForEachType static for loop that I implemented to serialize the Hg::Messages. This will require a functor to be created for both input and output serialization.

Since I have already presented the details that describe how this static for-loop processing works, I am going to present serialization from top to bottom. We will start with how the user interacts with the Hg::Message, and continue to step deeper into the processing until the programmatic serialization is performed.

User Interaction

C++

// Create typedefs for the message.
// A storage policy is provided by default.
typedef Message< DemoTypeMsg, HostByteOrder >    DemoMsg;
typedef Message< DemoTypeMsg, NetByteOrder >     DemoMsgNet;
 
// Populate the data in Host order.
DemoMsg msg;
 
msg.letter = 'A';
msg.count =  sizeof(short);
msg.number = 100;
 
// The data will be transferred over a network connection.
DemoMsgNet netMsg  = to_network(msg);
 
// Serialize the data and transfer over our open socket.
// netMsg.data() initiates the serialization,
// and returns a pointer to the buffer.
send(sock, netMsg.data(), netMsg.size(), 0);

Next is the definition of the user-accessible function, data(). This code first converts the this pointer to a non-const form in order to call a private member-function that initiates the operation. This is required so the m_msgBuffer field can be modified to store the data. There are a few other options. The first is to remove the const qualifier from this function. This is not a good solution, because it would make it impossible to get serialized data from objects declared const. The other option is to declare m_msgBuffer as mutable. However, this form provides the simplest solution, and limits the modification of m_msgBuffer to this function alone.

C++

//  *********************************************************
/// Returns a pointer to the memory buffer
/// that contains the packed message.
///
const_pointer data() const
{
  Message *pThis = const_cast< Message* >(this);
  pThis->pack_data();
 
  return m_msgBuffer.data();
}

In turn, the private member-function calls a utility function that initiates the process:

C++

//  **********************************************************
void pack_data()
{
  m_msgBuffer =  *pack_message < message_type,
                                 buffer_type,
                                 size_trait
                               >(values(), size()).get();
}

Message packing details

Now we are behind the curtain where the work begins. Again, you will notice that this first function is a global top-level parameterized function, which calls another function. The reason for this is the generality of the final implementation. When nested fields are introduced, processing will return to this point through a specialized form of this function. This is necessary to allow nested message formats to also be used as independent top-level message formats.

C++

template< class MessageT,
          class BufferT
        >
std::shared_ptr< BufferT >
  pack_message( MessageT& msg_values,
                size_t    size)
{
  return detail::pack_message < MessageT,
                                BufferT
                              >(msg_values,
                                size);
}

... And just like the line at The Hollywood Tower Hotel ride at the California Adventure theme park, the ride has started and you weren't even aware. But there's another sub-routine.

C++

template< typename MessageT,
          typename BufferT
        >
std::shared_ptr< BufferT >
  pack_message( MessageT  &msg_values,
                size_t          size)
{
  // Allocate a new buffer manager.
  std::shared_ptr< BufferT > spBuffer(new BufferT);
  // Resize the buffer.
  spBuffer->resize(size);
  // Create an instance of the
  // functor for serializing to a buffer.
  detail::PackMessageWorker
    < 0,
      Hg::length< typename MessageT::format_type >::value,
      MessageT,
      BufferT
    > pack;     // Note: pack is the instantiated functor.

  // Call the function operator in pack.
  pack(msg_values, *spBuffer.get());
  return spBuffer;
}

Here is the implementation of the pack function object:

C++

template< size_t    Idx,
          size_t    Count,
          typename  MessageT,
          typename  BufferT
         >
struct PackMessageWorker
{
  void operator()(MessageT &message,
                  BufferT  &buffer)
  {
    // Write the current value, then move to
    // the next value for the message.
    WriteDatum< Idx, MessageT, BufferT > writer;
    writer(message, buffer);

    PackMessageWorker< Idx+1, Count, MessageT, BufferT > pack;
    pack(message, buffer);
  }
};

This should start to look familiar if you read the Alchemy: Prototype entry. Hopefully repetition does not bother you, because that is what recursion is all about. This function will first invoke a template functor called WriteDatum, which performs the serialization of the current data field. Then a new instance of the PackMessageWorker functor is created to perform serialization of the type at the next index. To satisfy your curiosity, here is the implementation for WriteDatum:

C++

template< size_t   IdxT,
          typename MessageT,
          typename BufferT
        >
struct WriteDatum
{
  void operator()(MessageT &msg,
                  BufferT  &buffer)
  {
    typedef typename
      Hg::TypeAt
        < IdxT,
          typename MessageT::format_type
        >::type                                   value_type;

    value_type value  = msg.template FieldAt< IdxT >().get();
    size_t     offset =
                 Hg::OffsetOf< IdxT, typename MessageT::format_type >::value;

    buffer.set_data(value, offset);
  }
};

That is pretty much the top-to-bottom journey for the serialization path in Alchemy. However, something is not quite right. I will give you a moment to see if you notice a difference between how this version works, compared to the byte-order processing in the other method.



Brief intermission for deep reflection on the previous recursive journey...


How did you do?

There are two things that you may have noticed.

  1. The ForEachType construct I mentioned was not used.
  2. This recursive function does not contain a terminating case.

Originally I had used the ForEachType construct. However, at the point I am now with the project hosted on GitHub, I required more flexibility. Therefore, I had to create a more customized solution to work with. The code segments above are adapted from the source on GitHub. The only thing I changed was the removal of types and fields that relate to support for dynamically-sized arrays.

As for the terminating case, I have not shown that yet. Here it is:

C++

template< size_t    Idx,
          typename  MessageT,
          typename  BufferT
         >
struct PackMessageWorker< Idx, // Special case:
                          Idx, // Current Idx == End Idx
                          MessageT,
                          BufferT
                        >
{
  void operator()(MessageT& msg,
                  BufferT& buffer)
  { }
};

This specialization of the PackMessageWorker template is a more specific fit for the current types. Therefore the compiler chooses this version. The implementation of the function is empty, which breaks the recursive spiral.

Message unpacking

For the fundamental types, the process looks almost exactly the same. Alchemy verifies the input buffer is large enough to satisfy what the algorithm is expecting. Then it churns away, copying the data from the input stream into the parameters of the Hg::Message.
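
The actual unpack implementation is in the source on GitHub; as a rough sketch of the mirrored path, the read-side counterpart of WriteDatum might look like the following. This is not the actual Alchemy source: it assumes the buffer exposes a get_data call symmetric to set_data, and that a Datum can be assigned through a set accessor.

C++

// Sketch only: extracts one field from the buffer and
// stores it in the message (mirror image of WriteDatum).
template< size_t   IdxT,
          typename MessageT,
          typename BufferT
        >
struct ReadDatum
{
  void operator()(MessageT &msg,
                  const BufferT &buffer)
  {
    typedef typename
      Hg::TypeAt
        < IdxT,
          typename MessageT::format_type
        >::type                                   value_type;

    size_t offset =
             Hg::OffsetOf< IdxT, typename MessageT::format_type >::value;

    // Assumed accessor, symmetric to set_data.
    value_type value;
    buffer.get_data(value, offset);
    msg.template FieldAt< IdxT >().set(value);
  }
};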

Is all of that recursion necessary?

Yes.

Remember, this is a template meta-programming solution. Recursion is the only loop mechanism available to us at compile-time. For a run-time algorithm, all of these function stack-frames would kill performance. If you run this portion of code compiled with a debug build you will see that. However, things change once it is compiled for release mode with optimizations enabled.

Most of those function calls act as conditional statements to select the best-fit serializer for each type. After the optimizer gets ahold of the chain of calls, it is able to generate code very similar to the loop unrolling that would occur in a run-time algorithm where the size of the loop was fixed.
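
A standalone example (independent of Alchemy) makes the pattern easy to see. The recursive template below compiles, after inlining, into roughly the straight-line sequence that a hand-unrolled loop would produce:

C++

#include <cstddef>
#include <cstdio>

// Recursive case: process index Idx, then recurse on Idx+1.
template< std::size_t Idx, std::size_t Count >
struct Worker
{
  void operator()() const
  {
    std::printf("field %zu\n", Idx);   // stand-in for WriteDatum
    Worker< Idx + 1, Count >()();
  }
};

// Terminating case: Idx == Count, do nothing.
template< std::size_t Idx >
struct Worker< Idx, Idx >
{
  void operator()() const { }
};

int main()
{
  // With optimizations enabled, this typically compiles to the
  // equivalent of three consecutive printf calls; no loop and
  // no function-call chain remain.
  Worker< 0, 3 >()();
  return 0;
}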

I have just barely started the optimization process for this library as a whole. I am locating the places with unnecessary copies and other actions that kill performance. The library as a whole is performing well, and I am happy with the progress. With the exception of the nested field structures, all of the other types perform 10-30% faster than the hand-coded version that uses memcpy on the fields of the struct. The nested types are about 50% slower. However, overall, the average of the tests indicates that Hg outperforms the hand-implemented version by 5%, and I am aware of places that I can optimize. I have not had time to perform a deep analysis of the code that is generated. I will be posting an entry on the benchmarking process that I went through, and I will post plenty of samples of assembly decomposition then.

What's next?

Up to this point in the Alchemy series, I have demonstrated a full pass through the message management with simple types. This is enough to be able to pack the data buffers for just about any protocol. However, some formats would be very cumbersome to work with, and much of the work is still left to the user. My goal for Alchemy is to encapsulate all of that work within the library itself and help keep the user focused on solving their problem at hand.

Fundamental types are now supported. Here is a list of the additional types that I will add support for, as well as other features that are congruent with this library:

  • Packed-bit fields
  • Nested message formats
  • Arrays
  • Variable-sized buffers (vector)
  • Additional StoragePolicy implementations
  • Simplify the message definitions even further

What's Wrong With Code Reviews?

general, leadership, reliability, communication, CodeProject, maintainability Send feedback »

Code reviews seem to be the bane of many developers. Very few developers that I know like to participate in code reviews. Once they do participate, the criticisms about the code are superficial. Some examples are criticizing the lack of comments, violations to the naming conventions in the guidelines, and even the formatting of the code.

To top it all off, if you work in a shop that first presents an online code review to become familiar with the code, and then holds a formal meeting to discuss the code, little to no prep time is spent by the reviewers. This is an enormous waste of time. How can a code review be valuable? More importantly, what can you do to change your company's culture so these are not thought of as meetings of despair?

Eliminate the Superficial Aspects

Get rid of all of the things that sit there on the surface that make code reviews appear to be a waste of time. Think of code reviews as M&Ms. If you don't know what M&Ms are, that may make this analogy even more poignant. They are small chocolate candies with a thin candy shell, and they have an 'm' printed on them. The thin candy shell is just there to keep the chocolate from melting in your hand, people. The superficial aspects of a code review that developers tend to focus on are like that thin candy shell. There really is chocolate inside of a code review, metaphorically speaking.

Tools

I mean software tools, not your engineers. Many code analysis tools exist, both commercial and open-source. These tools can inspect the code both statically and dynamically. There should be no reason for a developer to have to point out violations of the coding guidelines, variable names, formatting, or use of forbidden language constructs. These tools can be run automatically, and they are highly configurable. I agree, if this is what your code reviews have consisted of, they have been a waste of your time.

Excuses

We all like to believe that excuses simply make everything better; therefore, we don't have to feel guilt or can't be blamed for something. An excuse is usually a misappropriation of logic to rationalize something. A person can only create a finite amount of logic. Don't believe me? Look up the definition of death. What if you applied all of that logic wasted on excuses to efforts that could improve your software?

The Meeting

Many times the meeting simply becomes a formality. If you have a good collaborative code review tool where developers can review the code, make comments and have discussions over the course of a few days, this will definitely be time better spent than having a meeting, and you could eliminate this formality altogether. The code review tool will record the entire discussion and even allow you to generate the action items that must be completed before the code is accepted. The details of the review can then be tracked and referenced later if needed.

This form can be especially beneficial for introverts. Introverts generally prefer to think about their answers before they speak, or may not arrive at an answer before the conversation has moved on to the next topic. This gives them more time to arrive at the answer, or question, they are looking for. I personally believe there are many benefits to gathering the reviewers in a meeting; however, I will address that in a different section.

Effective Reviews

Make things as simple as possible, but no simpler.
Albert Einstein

Effective reviews are both constructive and concise. The purpose is to increase the overall value of the project, the code. You increase value by ensuring quality. If your team's code reviews tend to focus on the superficial elements of the previous section, your reviews are too simple.

A note to the authors of the code under review:

Hopefully your team can hold constructive and useful reviews. Here are a few quick tips to keep in mind:

  • Do your best to avoid becoming defensive. Sometimes it may feel like a personal attack. In most cases it's not. Even if it is, take charge of the discussion and keep it focused on the code. Proactively ask for feedback. Is there anything that you could have done better? This is especially helpful when your team has not broken past the thin candy shell of code reviews.
  • There's no need to make excuses for your code when someone else points out a defect. If it was a mistake, all that's required is for you to correct it. If it's something that you don't quite understand, ask the person who pointed it out to elaborate. This is a learning opportunity.
  • I don't see people do this often: point out code that you are proud of, or that is the result of solving a difficult problem, and how you arrived at that solution. The others may not realize the work involved for you to reduce a nasty problem to such a simple and elegant solution.

Constructive

To be constructive, you must build. This isn't the time to tear apart the author's code and destroy their every sense of self-worth. Besides, you don't need an excuse like a code review to do that. You can do that anytime you want to.

Leave out "You"

It's very easy for a person to attach their identity to the work they produce. This is especially true if they are a craftsman, like a software developer. In your explanations and reasoning, try to focus on the issue itself. Discuss the issue, what effects it may cause, and potential ways to resolve it. Yes, it is a defect that just happens to be in this code. However, it's not a personal attack, and the issue itself can be rectified. This is about improving code quality, not fixing the social issues you have with a co-worker.

Make Suggestions

I have adopted this approach from Scott Meyers's Effective series of books. I try to avoid telling others how to do things, with the exception of when I am the team lead or architect and the implementation is unacceptable. Simply adding the word "consider" at the beginning of your sentence is all that it takes in most cases. The choice is the author's. Most of the time they will respond by following your suggestion. If they don't use your suggestion, remember, this is their work, not yours. There's no need to take it personally.

Add Compliments

Just like in book and movie reviews, the critic will usually try to point out any redeeming qualities, even for the worst pieces of work that they review. The same should occur with code reviews. For the most part, code is just code. Every now and then, there's a defect. Hopefully there are some highlights to point out as well. This piece of advice is especially important for the leaders and architects of the team. There are two reasons this advice is valuable:

  1. Affirm to the author that you recognize value in their work, and you are not simply looking for flaws.
  2. Highlight examples of good practices and ideas that you think others should follow. These diamonds in the rough may otherwise go unnoticed.

Concise

Assuming that you want to spend as little time as possible on a code review, this section provides some suggestions that may help your efficiency.

Independently Reviewed

Have each member of the review inspect the code independently to optimize the amount of time spent reviewing code. If your code review is a meeting where one person navigates the display, you provide the opportunity for people to tune out: one or two may participate while the others check their email or post selfies on Facebook. Reviewing independently is not only more efficient, it also often produces a different collection of issues spotted by each reviewer, so a more thorough review occurs.

Divide and Conquer

In general, I think that developers tend to write code in feature sets that are too large. This causes much more code to be inspected, and boredom sets in as each file is reviewed. Soon it all looks the same. If this review were like counting to 100, it would sound like this: "One, two, skip a few, ninety-nine, one-hundred".

Solve this problem by dividing up the responsibilities among the reviewers. One could focus on resource management, another on correct integration into the existing system, while a third focuses on overall style. These roles could be assigned to each developer for a subset of the files, and their roles change for a different set of files. There are many ways this can be done. However, assigning specific tasks will help ensure that each reviewer doesn't focus on the same element.

One thing I would definitely not recommend is splitting up the files and having only one person review each subset. You miss out on the potential benefit of the diverse backgrounds and experience levels of many people reviewing the code. We have all learned and arrived where we are by different paths, just as we have all been bitten by different "bugs" that have changed how we develop and what we value in our code.

Where's the Value?

I have often heard "The value of something is the price you are willing to pay for it." Time is a precious resource; in fact, it is the resource that I value the most in life. When I consider my work day, it is also the most valuable resource. There are only so many things that I can accomplish in a fixed amount of time. It is important to remember that you are part of a team developing this software project. This is bigger than anything you could create by yourself in a reasonable amount of time. Investing time in an effective code review is investing in both your team and the quality of the code that you work in.

Improved Quality

Hopefully it goes without saying: there is an opportunity for the quality of the code to be improved. If your code is not improving because of the code reviews that you hold, let's work under the assumption that your organization is doing it wrong. With the suggestions in this article, is there anything that could help improve that? If not, there is likely a more fundamental issue in your organization that needs to be resolved before code reviews can contribute.

Addressing issues while in development

It is often easier to fix a defect while you are still developing the code that contains it. That is one of the purposes of the code review. Why can't it be fixed later? It may be possible to fix it at a later time, when it is discovered. Hopefully that is not when you are on site with a customer trying to provide answers.

What if it's not a defect, but a fundamental implementation issue? One that relies heavily on global values and makes most of the object hierarchy friends with each other. Some things become almost impossible to resolve unless you have dedicated resources to resolve them. At this point, you do. So take advantage of them.

Transfer of knowledge

This is one benefit that I do not think many people consider at all. So much focus is placed on the author as the center of attention, the focus of criticism in fact, that many developers do not think of this time as an opportunity to learn something themselves. You may learn a new technique. You may learn about a set of utility functions that were not housed in the most convenient location, and now that you know of them, your tasks will become much simpler. Programming techniques and system architecture just scratch the surface of what you can learn.

It's a Waste of My Time

That's funny. This seems an awful lot like an excuse... Let's put it in perspective. Think about all of the hours you have spent watching reality TV over the last decade. Compare all of those hours devoted towards Honey Boo-Boo and Pregnant at 16 with time spent on code reviews. Which do you think is a bigger waste of time?

Attitude

The first thing that needs to change is the attitude and perception of the review. Rather than stating "It's a waste of my time," ask this question instead: "How can I find value in this?" Positive outlooks tend to beget positive outcomes. If you do not expect to get anything out of reading code written by someone else, you'll probably spend that time looking at pictures of kittens on the Internet.

I realize this advice may seem like something you teach your child, but as adults, we're just as prone to becoming jaded and stuck in a rut with our opinions and attitudes. There's also the chance that you consider any form of meeting or social interaction a waste of your time; that can change too. Before you even start the code review, you're criticizing the processes of the code review.

Safety precaution

First, appreciate that bit of irony.

Next, if you ever go to work in the aeronautics industry, please send me an email and let me know what company you work for, because in that industry every line of code is scrutinized multiple times, and once code has been blessed, it is very difficult to go back in and make changes. I prefer not to fly on planes where the developers believe that code reviews are a waste of time. Similar processes are also in place for DoD contractors, and wherever the potential for bodily harm or the loss of life exists, such as with medical devices and nuclear production facilities.

Summary

Code reviews can provide value if you apply your time toward constructive activities. There are many valuable aspects to a code review beyond verifying the code. It is an opportunity for all participating members of the team to learn new and better ways to solve problems. The knowledge of how the system works can be spread amongst the participants. It can also be an opportunity to discuss the non-tangible aspects of the development that do not appear in the final code.

There is value in performing code reviews, and you do not have to dig too deep to find it. Mostly it only takes a redirection of your energies, and for some, a minor attitude adjustment. Formal code reviews are not always appropriate. Sometimes a buddy check will suffice. Either way, good judgment is required.

Alchemy: Message Buffer

adaptability, portability, reliability, CodeProject, C++, maintainability, Alchemy, design Send feedback »

This is an entry for the continuing series of blog entries that documents the design and implementation process of a library. This library is called, Network Alchemy[^]. Alchemy performs data serialization and it is written in C++. This is an Open Source project and can be found at GitHub.

Previously I posted the first prototype, which demonstrates that the concept of Alchemy is both feasible and useful. However, the article ended up being much longer than I had anticipated, and I was unable to cover serializing the user object to and from a data stream. This entry will finish the prototype by adding serialization capabilities for the basic datum fields that have already been specified.

Message Buffer

One topic that has been glossed over up to this point is how the memory is going to be managed for messages that are passed around with Alchemy. The Alchemy message itself is a class object that holds a composited collection of Datum fields, convenient for a user to access just like a struct. Unfortunately, this format is not binary compatible or portable for message transfer on a network or storage to a file.

We will need a strategy to manage memory buffers. We could go with something similar to the standard BSD socket API and require that the user simply manage the memory buffer. This path is unsatisfying to me for two reasons:

  1. BSD sockets ignore the format of the data and simply set up endpoints as well as read/write capabilities.
  2. Alchemy is an API that handles the preparation of binary data formats to create ABI compatible data-streams.

Ignoring the memory buffer used to serialize the data would provide only a marginal service to the user; not enough to be compelling as a universal necessity when serializing data. Adding a memory management strategy to Alchemy would only require a small amount of extra effort on our part, yet provide enormous value to the user.

Considerations

It would be possible for us to create a solution that is completely transparent to the user with respect to memory management. The Message object could simply hide the allocations and management internally. A const shared_ptr could be given to the user once they call an accessor function like data(). However, experience has shown me that oftentimes developers have already tackled the memory management on their own.

Furthermore, even if they have not yet tackled the memory management problem, the abstractions that they have created around their sockets and other transport protocols have forced a mechanism upon the user. Therefore, I propose that we develop a generic memory buffer; one that meets our immediate needs of development, and also provides flexibility to integrate other strategies in the future.

The Basics

There are four operations that must be considered when memory management is discussed. "FOUR?! I thought there were only two!" Go ahead and silently snicker at the other readers that you know made that exclamation, because you were aware of the four operations:

  1. Allocation
  2. De-allocation
  3. Read
  4. Write

It's very easy to overlook that read and write must be considered when we discuss memory allocation, because if we simply talk in terms of malloc/free, new/delete, or simply new for Java and C#, you allocate a buffer, and reads and writes are implicitly built into the language. This is only true for the fundamental types native to the language.

However, when you create an object, you control read and write access to the data with accessor functions for the specific fields of your object. In most cases we are interested in keeping the concept of raw memory abstract inside of an object. We are managing a buffer of memory, and it is important for us to be able to provide proper access to the appropriate locations within the buffer that correspond to the values advertised to the user through the Datum interfaces.

That brings to mind one last piece of information that we will want to have readily available at all times: the size of the buffer. This is true whether we choose a strategy that uses a fixed-size block of buffers, dynamically allocates the buffers, or adapts a buffer previously defined by the user.

The Policy Design Pattern

Strictly speaking, this is better known as the Strategy design pattern. I am sure there are other names as well, probably as many as there are ways to implement it. We are developing in C++, and in C++ this solution is traditionally implemented with a policy-based design. We want to create a memory buffer object that is universal to our message implementation in Alchemy. So far we have not provided any hint of a special memory object to deal with in the Alchemy interface, and I do not plan on changing this.

However, we have already established that there are multiple ways that memory will be used to transfer and store data. A policy-based design will allow us to implement a single object to perform the details of managing a memory buffer and providing the correct read/write access, and still allow the user to integrate their own memory management system with Alchemy. This design pattern is an example of the 'O' in the SOLID object-oriented methodology: the Open/Closed Principle, open for extension, closed for modification.

In order for a user to integrate their custom component, they will be required to implement a policy class that maps the four memory management functions mentioned above to a standard form that will be accessed by our memory buffer class. A policy class is a collection of constants and static member functions. Generally a struct is used because of its public-by-default nature. The class that is extended expects a certain set of functions to be available in the policy type. The policy class is associated with the extended class as a template parameter. The only requirement is that the policy class implements all of the functions and constants accessed by the policy host; a minimal sketch of the pattern appears below.
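
Before looking at the Alchemy policy itself, here is a minimal, self-contained sketch of the general pattern; the names are invented purely for illustration:

C++

#include <iostream>

// A policy: nothing but static member functions (and constants).
struct ConsolePolicy
{
  static void write(const char* msg)
  {
    std::cout << msg << '\n';
  }
};

// The policy host: the policy is associated as a template
// parameter and its static functions are called directly;
// there is no virtual dispatch and no base class.
template< typename OutputPolicyT >
class Logger
{
public:
  void log(const char* msg) const
  {
    OutputPolicyT::write(msg);
  }
};

int main()
{
  // Any type that supplies a matching static write() will do.
  Logger< ConsolePolicy > logger;
  logger.log("policy-based design");
  return 0;
}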

Policy Declaration

Here is the declaration for an Alchemy storage policy:

C++

struct StoragePolicy
{
  // Typedefs for generalization
  typedef unsigned char                 data_type;
  typedef data_type*                    pointer;
  typedef const data_type*              const_pointer;
  typedef std::shared_ptr< data_type >  s_pointer;

  static
    s_pointer allocate(size_t size);
  static
    void deallocate(s_pointer &spBuffer);
  static
    bool read ( const_pointer   pBuffer,
                void*           pStorage,
                size_t          size,
                std::ptrdiff_t  offset);
  static
    bool write( pointer         pBuffer,
                const void*     pStorage,
                size_t          size,
                std::ptrdiff_t  offset);
};

The typedefs can be defined to any type that makes sense for the user's storage policy. The class doesn't even need to be named or derived from StoragePolicy, because it will be used as a parameterized input type. The only requirement is that the type supports all of the declarations defined above. When this is put to use, it becomes an example of static polymorphism. This is the foundation that most of the C++ Standard Library (formerly the STL) is built upon. The polymorphism is invoked implicitly, rather than explicitly by way of deriving from a base class and overriding virtual functions.

Policy Implementation

At this point, I am only concerned with leaving the door open to extensibility without major modifications in the future. That is my front-loaded excuse for why the implementations of these policy interface functions are so damn simple. Frankly, this code was originally implemented inline within the original message buffer class. I thought that it would be better to introduce this policy extension now, so that some other decisions you will see in the near future make much more sense. Don't blink as you scroll down, or you may miss the implementations of the storage policy functions below:

Allocate:

C++

static
  s_pointer allocate(size_t size)
  {
    // An array deleter is supplied because the buffer is
    // allocated with array-new; std::make_shared cannot
    // adopt an existing pointer.
    s_pointer spBuffer(new(std::nothrow) data_type[size],
                       std::default_delete< data_type[] >());
    return spBuffer;
  }

Deallocate:

C++

static
  void deallocate(s_pointer &spBuffer)
  {
    // No real action for this storage_policy.
    // Clear the pointer anyway.
    spBuffer.reset();
  }

Read:

C++

static
  bool read ( const_pointer   pBuffer,
              void*           pStorage,
              size_t          size,
              std::ptrdiff_t  offset)
  {
    ::memcpy( pStorage,
              pBuffer + offset,
              size);
    return true;
  }

Write:

C++

static
  bool write( pointer           pBuffer,
              const void*       pStorage,
              size_t            size,
              std::ptrdiff_t    offset)
  {
    ::memcpy( pBuffer + offset,
              pStorage,
              size);
    return true;
  }

Message Buffer (continued)

I have covered all of the important concepts related to the message buffer: basic needs, extensibility, and adaptability. There isn't much left except to present the class declaration and clarify anything particularly tricky within the implementation of the actual class. Keep in mind this is an actual class, but we don't intend on providing direct user access to this particular object. The Alchemy class Hg::Message will be the consumer of this object:

Class Definition and Typedefs

typedefs are extremely important when practicing generic programming techniques in C++. They provide the flexibility to substitute different types into the function declarations. In some cases the types defined may seem silly, such as the size_type fields used in the STL. However, in our case the definitions for data_type, pointer, and const_pointer become invaluable.

If it isn't obvious, the policy class that we just created is used as the template parameter below for the MsgBuffer. You will see further below, in the function implementations, how the calls are made through the policy. We declared the functions static; therefore there is no need to create an instance of the policy.

One last note: starting with C++11, alias declarations are preferred over the typedef. There are many advantages, some of which include partially defined template aliases, a more intuitive definition for function pointers, and the compiler preserving the name of the aliased type. Preservation of the type name in compiler error messages goes a long way towards improving the readability of template programming errors, especially template meta-programming errors.
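
For illustration, here are the same definitions expressed both ways, plus a partially defined template alias, which is something typedef cannot express at all (C++11 or later assumed; the names are invented for this example):

C++

#include <cstddef>
#include <map>
#include <memory>
#include <string>

// The two forms are equivalent for simple types...
typedef std::shared_ptr< unsigned char >   s_pointer_td;
using   s_pointer = std::shared_ptr< unsigned char >;

// ...and for function pointers the alias reads far more naturally.
typedef bool (*read_fn_td)(const unsigned char*, void*, std::size_t);
using   read_fn = bool (*)(const unsigned char*, void*, std::size_t);

// A partially defined template alias: not possible with typedef.
template< typename T >
using named_map = std::map< std::string, T >;

named_map< int > counters;   // std::map< std::string, int >

With that said, here is the MsgBuffer declaration: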

C++

template < typename StorageT >
class MsgBuffer
{
public:
  //  Typedefs **************************************************
  typedef StorageT                           storage_type;
  typedef typename
    storage_type::data_type                  data_type;
  typedef typename
    storage_type::s_pointer                  s_pointer;

  typedef data_type*                         pointer;
  typedef const data_type*                   const_pointer;

  // ...
};

Construction

C++

  //  Ctor ********************************************
  MsgBuffer();

  //  Fill Ctor ***************************************
  // Create a zeroed buffer with the requested size
  explicit
    MsgBuffer(size_t n);

  //  Copy Ctor ***************************************
  MsgBuffer(const MsgBuffer& rhs);

  //  Dtor ********************************************
  ~MsgBuffer();

  //  Assignment Operator ****************************
  MsgBuffer& operator=(const MsgBuffer& rhs);

Status

For a construct like the message buffer, I like to use functions that are consistent with the naming and behavior of the standard library. Alternatively, if my development fits closer in context to some other API, I will select names that closely match that primary environment.

C++

  bool empty() const;

  size_t capacity() const;

  size_t size() const;

  void clear();

  void resize(size_t n);

  void resize(size_t n, byte_t val);

  MsgBuffer clone() const;

  const_pointer data() const;

Basic Methods

There was one mistake, actually a learning experience, that I acquired during my first attempt with this library. I did not provide a simple way for users to directly initialize an Alchemy buffer from a buffer of raw memory, when in many cases that is how their memory was managed or made accessible. I encouraged and intended for users to develop StoragePolicy objects to suit their needs. Instead they would create convoluted wrappers around the main Message object to allocate and copy data into the message construct.

This time I was sure to add an assign operation that would allow the initialization of the internal buffer from raw memory.

C++

  //  *************************************************
  /// Zeroes the contents of the buffer.
  void zero();

  //  *************************************************
  /// Assigns the contents of an incoming
  /// raw memory buffer to the message buffer.
  void assign(const_pointer pBuffer, size_t n);

  //  *************************************************
  /// Returns the offset used to access the buffer.
  std::ptrdiff_t offset() const;

  //  *************************************************
  /// Assigns a new base offset for
  /// memory access to this object.
  void offset(std::ptrdiff_t new_offset);

I would like to briefly mention the offset() property. This will not be used immediately; however, it becomes useful once I add nested Datum support, which will allow a message format to contain sub-message formats. The offset property allows a single MsgBuffer to be sent to the serialization of sub-structures without requiring a distinction to be made between a top-level format and a nested format. When this becomes more relevant to the project I will elaborate further on this topic.
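
As a hypothetical usage sketch (the buffer contents and the names pRecv and recvLen are invented placeholders), adopting raw memory and re-basing access for a nested sub-format might look like this:

C++

// Hypothetical usage of the interface declared above.
MsgBuffer< StoragePolicy > buffer;

// Adopt the contents of a raw buffer received elsewhere;
// pRecv and recvLen stand in for the user's actual data.
buffer.assign(pRecv, recvLen);

// Base all further reads and writes at a nested sub-format
// that starts 8 bytes into the message.
buffer.offset(8);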

Getting Values

This function deserves an explanation. It is a template member-function; that is, a parameterized member function that requires template type-definitions. An instance of this function will be generated for every type it is called with.

This function provides two benefits beyond allowing data to be extracted.

  1. A convenient interface is created for the user to get values without a typecast.
  2. Type-safety is introduced with this type-specific function. All operations on the value can have the appropriate type associated with them up through this function call. This call performs the typecast to a void* at the final moment, when the data will be read into the data type.

C++

template < typename T >
  size_t get_data(T& value, std::ptrdiff_t pos) const
  {
    if (empty())
      return 0;

    std::ptrdiff_t total_offset = offset() + pos;

    // Verify that enough space remains in the buffer.
    size_t bytes_read = 0;
    if ( total_offset >= 0
      && total_offset + sizeof(value) <= size())
    {
      bytes_read =
        storage_type::read( data(),
                            &value,
                            sizeof(T),
                            total_offset)
        ? sizeof(T)
        : 0;
    }

    return bytes_read;
  }

Setting Values

This function is similar to get_data, and provides the same advantages. The only difference is this function writes user data to the buffer rather than reading it.

C++

template < typename T >
  size_t set_data(const T& value, std::ptrdiff_t pos)
  {
    if (empty())
      return 0;

    std::ptrdiff_t total_offset = offset() + pos;

    // Verify that enough space remains in the buffer.
    size_t bytes_written = 0;
    if ( total_offset >= 0
      && (total_offset + Hg::SizeOf< T >::value) <= size())
    {
      bytes_written =
        storage_type::write ( raw_data(),
                              &value,
                              Hg::SizeOf< T >::value,
                              total_offset)
        ? Hg::SizeOf< T >::value
        : 0;
    }

    return bytes_written;
  }

Summary

I have just presented the internal memory management construct that will be used in an Alchemy Message. We now have the final piece that will allow us to move forward and serialize the message fields programmatically into a buffer. My next entry on Alchemy will demonstrate how this is done.

Why Computers Haven't Replaced Programmers

general, CodeProject, knowledge Send feedback »

When I first started my college education to become a Computer Scientist (programmer), an ignorant acquaintance of mine told me with some uncertainty, "Computer programming, don't they have computers write the programs now?" I thought he may have been thinking of the compiler. Alas, no. He continued, becoming more certain, as he told me that computers were writing programs now, and that in ten years I wouldn't be able to find a job. I no longer know this person, and I, along with millions of other programmers, make a living writing computer programs. Why aren't computers writing these programs for us?

Information

The most basic answer to this question is information. I will try to avoid giving a completely academic answer; however, we will need to visit a few concepts studied in Information Theory and Computability Theory. A specialized combination of these two fields of study, Algorithmic Information Theory (AIT), will provide a more precise, or at least more satisfying, answer.

What is Programming?

Unfortunately, we won't be able to get very far unless we define what we mean when we refer to programming. For simplicity, let's define programming in a way that is congruent with AIT. This will make the discussion easier to associate with the relevant theories, and simplify the definition to a level that can be easily visualized and reasoned about.

Here's a dictionary definition of programming:

pro-gram-ming
noun
    The action or process of writing computer programs.

What is a Program Then?

I think that definition is actually simple enough. Let's look at a basic definition for computer program:

A computer program, or just a program, is a sequence of instructions, written to perform a specified task with a computer.

This is also simple, but not specific enough. Therefore, it's time to turn to AIT and use one of its basic constructs, which is often used to represent a program. This construct is the string. Here is an excerpt from Wikipedia regarding the relationship of a string and a program in AIT:

Wikipedia:

... the information content of a string is equivalent to the length of the most-compressed possible self-contained representation of that string. A self-contained representation is essentially a program – in some fixed but otherwise irrelevant universal programming language – that, when run, outputs the original string.

I added the extra emphasis in the text to make it more obvious that there is a relationship between these three concepts. After a long-winded and roundabout simplification, we will represent and visualize a program as a string such as this one:

1100100001100001110111101110110011111010010000100101011110010110
Or even an 8-bit string like this:
11001001

... and what does this have to do with information?

Yes, let's get back to information. AIT defines a relationship between information and a string: if the string is a self-contained representation of the information, it is a program. We have just defined our purpose for having a program, which is to reproduce the desired information encoded in the program itself.
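As a concrete, if simple, illustration of a "self-contained representation" (this example is mine, not from the AIT literature): a highly regular string, such as 64 zeros, can be reproduced by a program far shorter than the string itself. The program below is a self-contained representation of that string.

C++

#include <cstdio>

// A 64-character string of zeros carries very little information;
// this short program is a self-contained representation of it.
int main()
{
  for (int i = 0; i < 64; ++i)
    std::putchar('0');
  std::putchar('\n');
}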

Computer Programs

We have established that, for this discussion, the purpose of a computer program, or just program, is to reproduce information. Also, we will represent a program like this: 11001001. So in essence, computer programmers generate strings that, when executed, will produce the information originally encoded within the program. Of course, there are plenty of tools that programmers run over their language of choice to compile, link, interpret, and convert it, eventually generating a string that is executable by the target computer.

How do programmers know what to program?

Programmers are given a set of requirements that define the information that the program needs to produce. In the real world, this information can represent projectile trajectories, financial calculations, images, interactive processing of commands; the list is potentially endless. With the known requirements, the programmers can set out to create a program that will produce the desired information in a reasonable amount of time.

I mention time because of some of the concepts that exist in the fields of study that I mentioned. These concepts will help us reason about, and answer, the question I posed for this entry. The most obvious part of programming is writing code. It's the visible aspect to an outside observer.

    "What's he doing?"
    "Oh, he's eating cold pizza, drinking Mountain Dew, and writing code."

Again, we can think of a program as a simple string. Before the programmer can write this simple string, they have to have a concept that they are trying to implement as the program. Once they have this concept in mind, they can write the code. This is very much like expressing an idea to a person, except the concept is articulated in a language or form that is computable by the computer.

In English, at least, there are many ways to say things. Some people are verbose, others are terse, and yet others speak in innuendo. Solving a problem in a computer program can be done in many different ways as well. Sometimes the language and hardware that you are writing for will dictate how you need to solve the problem. Other times there are no strict limitations, and it is up to you to find the best way to solve the problem. The best might not always be the fastest. Sometimes it means the most portable, the most maintainable, or the one that uses the least amount of memory. Much of the study of Computer Science is focused on these concepts.

Turing machines

A Turing machine is a hypothetical device that allows computer scientists to understand the limits of machine computation. When reasoned about, the Turing machine is given a fixed instruction set, infinite memory, and infinite time to run the program. Even with unlimited resources, we discover problems that are very difficult to calculate and that push against that infinite time limit. These problems' only known solutions scale exponentially as the size of the problem increases.
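For a concrete sense of that exponential scaling, here is a sketch of my own (not tied to any particular theorem): a brute-force subset-sum search that tries all 2^n subsets, so adding a single element to the input doubles the running time.

C++

#include <cstdio>
#include <vector>

// Brute-force subset sum: tries all 2^n subsets of the input.
// Adding one element doubles the running time -- the exponential
// growth discussed above.
bool subset_sum(const std::vector<int>& values, size_t index, int target)
{
  if (target == 0) return true;            // found a subset that works
  if (index == values.size()) return false;

  // Either include values[index] in the subset, or skip it.
  return subset_sum(values, index + 1, target - values[index])
      || subset_sum(values, index + 1, target);
}

int main()
{
  std::vector<int> values = { 3, 34, 4, 12, 5, 2 };
  std::printf("%s\n", subset_sum(values, 0, 9) ? "found" : "not found");
}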

On the other hand, we can also discover problems that are quickly solvable and verifiable in polynomial time, such as AES encryption. However, if the constants chosen for these problems are large enough, the amount of time required to calculate a solution can still grow beyond any practical limit.

Computers

So we've established that programs are encoded strings that produce information when the program is executed. We mentioned a theoretical computer called a Turing machine that is used to reason about the calculability and complexity of problems solved by programs. I told you I was going to try to avoid as much academics as possible. What about real-world computers?

Real-world computers are generally fantastic. The majority of computers we interact with are general-purpose CPUs. They are very much like the Turing machine, except without access to unlimited resources, although current processing hardware gives us quite a lot to work with. We have hit a point where individual processors are no longer getting dramatically faster. In order to continue to gain processing power, the trend is now to provide multiple CPUs and perform parallel processing.

An extreme example of parallel processing is the use of Graphics Processing Units to perform general-purpose computing (GPGPU). GPGPU processing runs up to 1664 parallel processing streams on the graphics card that I own. This is consumer-grade hardware; I don't know about the high-end chips. I can't afford them, so I don't torture myself. The challenge with this path is that you must have a problem that can be broken into pieces that are solvable independently and in parallel. Graphics-related problems are natural fits for this model, as are many scientific simulations.
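For a small taste of the data-parallel model on ordinary multi-core CPUs, here is a sketch of mine using standard C++17 parallel algorithms (not GPU code; some toolchains require linking a backend such as TBB for the parallel execution policy):

C++

#include <algorithm>
#include <cstdio>
#include <execution>
#include <vector>

// Each element can be transformed independently of all the others,
// so the work can be spread across every available core -- the same
// property that makes a problem a good fit for the thousands of
// parallel streams on a GPU.
int main()
{
  std::vector<float> pixels(1000000, 0.5f);

  std::transform(std::execution::par,
                 pixels.begin(), pixels.end(), pixels.begin(),
                 [](float p) { return p * p; });   // independent per element

  std::printf("pixels[0] = %f\n", pixels[0]);      // prints 0.250000
}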

Artificial Intelligence

What is Artificial Intelligence (AI)? It is when intelligence is exhibited by a machine or software. Intelligence. Damn! More definitions. I don't actually want to go there, mostly because a great definition for intelligence is still debated. Let's simply state that AI involves giving computers and machines enough intelligence to make decisions to perform predefined tasks.

AI is far enough along that we could command it to write computer programs. However, they would be fairly simple programs. Oh, and here's the catch: a programmer would be the person to command the AI program to write the simpler program. AI will continue to improve, so the programmer will be able to command the AI to write even more complex programs. But still not one as complex as itself.

Do you see the trend?

"We can't solve problems by using the same kind of thinking we used when we created them."
Albert Einstein

I have a feeling there is a better quote that fits the concept I am trying to convey, but this one by Einstein still fits, and I like it. Technology is built upon the technologies that have come before it. When a limit is reached, a creative turn occurs, and progress continues forward. I understand how computers work, in the virtual sense, but I am not capable of building one from scratch. I take that back. I had a project in college where I had to build an 8-bit accumulator by wire-wrapping; the inputs were toggle switches, and the clock was a manual button that I pushed. For me to build a computer with the processing power we have today would be a monumental task (one that I am not capable of today).

We keep improving our technologies, both physically and virtually. We continue to use known and invented technologies to build and invent new technologies. When some people reach their limits, others may pick up research and advance it a step further by approaching it from a different direction. This is similar to the theorems mentioned from AIT, regarding the amount of information encoded in a program.

This point is:

In order for a computer to write computer programs, it will need to be at least as intelligent as the program that it is going to encode.

In AIT, the string that is defined may be the program that will generate the desired information. In order for a computer to develop programs, it will need to be more intelligent than the program that it is trying to write, which will require a program to have developed the top-level computer developer in the first place. At some point a program could use a genetic algorithm to develop this new computer that is a programmer. However, we're not there yet.

When that happens, many possibilities become available. Just imagine: a computer writing genetic algorithms. Generations of its algorithm can be adjusted at lightning speed, but hopefully it is an intelligent computer using the existing algorithms that have been mathematically proven to be the most efficient. Because if it is just let loose to try to arrive at the desired output, well, that could take forever.

There is no drop-in replacement

There's actually another point that I want to make related to this sci-fi concept of computers actually writing new programs. There is no drop-in replacement that exists for an experienced developer. There are many fields of study, and a wide range of algorithms and problems that have already been solved. These things could conceivably be added to the programming computer's database of knowledge. However, this task alone is monumental.

The same statement applies to people too

That's right. Software Engineers are not components or warm bodies that can be replaced interchangeably. Each engineer has focused on their own field of study or interests. They each have followed separate paths through their career to reach the point that they are at now. I have seen this on projects where a company manages a pool of engineers. When there is work and a need for a software engineer, one is plucked from the pool.

However, the pool may consist of network programmers, modem programmers, antenna programmers, user interface programmers, and so on. They each know their area of study very well. However, if you try to place an antenna programmer in a group that is in need of network programmers, or a UI programmer in a group developing modem software, you may have a problem. Or at least you will not get the value that you expect from placing this engineer in a misfit group. Their knowledge of the topic is not great enough to effectively develop a solution that provides the desired information efficiently.

Summary

I am not sure what spurred the idea for this topic. The incident with the person who told me I was making a poor decision about becoming a software engineer happened about 15 years ago. It's fascinating to watch and be a part of the new advances in technology that are occurring with both software and hardware. Better hardware means more things become possible in software. It can be frustrating when software engineers are treated as warm bodies; but I don't expect a computer to be doing my job anytime soon.

Devil's Advocate: TDD

adaptability, reliability, communication, CodeProject, maintainability, Devil's Advocate Send feedback »

The Devil's Advocate is often an effective role that can help uncover logical weaknesses for a point of view. For those that are unfamiliar with this term, the Devil's Advocate takes a position that they do not necessarily agree with for the sake of debate. I usually do it to learn more about the topic the proponent is advocating; I'll admit, sometimes I just do it to push buttons.

Preface

I have had many discussions with developers from a variety of backgrounds and skill levels. I read programming articles and other development blogs. Everyone has an opinion. This got me thinking about how people go about rationalizing arguments for the technologies and processes that they prefer. I want to present a dialogue where the Devil's Advocate drives the discussion with logical, and sometimes illogical, arguments. As with many arguments, some are valid points and others are distractions that hijack the discussion by changing the subject. The comments that the Devil's Advocate makes will come from any of these sources.

On the opposite side is the proponent. The answers given by the proponent will be clear and succinct. I may cite a link to some other source that expands on the idea provided in the answer.

I hope this creates a format that flows naturally (as a discussion). A discussion that primarily presents facts and arguments; sometimes opinions will be presented as well. If you have a differing opinion, I would love to hear it. Let's continue the discussion after the entry ends. If this turns out well, I will continue to write posts like this from time to time.

Test Driven Development

Test Driven Development (TDD) is a software development process that focuses on the use of short development cycles to develop robust code. The short development cycles (30 seconds to an hour or so) create a tight feedback loop to inform the developer if the most recent changes have been good or bad. The developer initially writes a failing test, then adds the code to make the test pass, and finally evaluates the solution and improves it if necessary. This process is repeated to add all of the features and functionality required by the program.
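To make the cycle concrete, here is a minimal sketch of a single red-green pass using nothing but assert; the function and test names are mine, chosen purely for illustration.

C++

#include <cassert>

// Step 1 (red): write a failing test for the behavior we want.
// Step 2 (green): write just enough code to make it pass.
// Step 3 (refactor): clean up while the test keeps us honest.

// The code under test, written after the test below was failing.
int clamp_percent(int value)
{
  if (value < 0)   return 0;
  if (value > 100) return 100;
  return value;
}

// The test, written first.
void test_clamp_percent()
{
  assert(clamp_percent(-5)  == 0);    // below range clamps to 0
  assert(clamp_percent(50)  == 50);   // in-range values pass through
  assert(clamp_percent(150) == 100);  // above range clamps to 100
}

int main()
{
  test_clamp_percent();
  return 0;   // reaching here means the cycle ended green
}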

I don't think TDD is a good process, because I am supposed to write all of the tests first. Since it is test-first development, I have split the work between my developers: one group will define the interface and write the tests, and the other group will implement the code to the provided interface and make the tests pass.
You are over-simplifying the process by referring to it as Test First Development and writing all of the tests before you start development.

The developer that writes the code should also write the tests. One at a time, gradually building up the code.

That single developer has to write much more code then. They have to write all of their normal code and the tests.

This will take twice as long, and you're telling me that the work can't be distributed?

My schedule can't afford that!

There are many benefits that occur naturally with TDD. These, in turn, will make your schedule more predictable:
  • TDD keeps your developers focused on solving the immediate problem, by adding one feature at a time.
  • This can lead to less actual production code being produced when TDD is used.
  • Your software will be testable.
  • These tests will give you confidence when entering system integration.
  • Yes, your development phase may take a little longer.
  • However, you will have confidence during system integration to make changes, and detect if it affected your system negatively.
  • This will make your overall schedule more predictable, and should shorten the length of the system integration phase.

Speaking of integration, let's not leave development just yet.

When I integrate my code with everyone else's, I have to fix all of the broken tests.

This is not a situation that is unique to TDD; it is possible with any process that develops any type of regression test system.

If there are broken tests after your changes, this could mean a few things:
  • Your tests may be too complex.
  • Your code is tightly coupled, and your programming side-effects are interfering with this other code.
  • The other developers delivered code with broken tests.
  • Your integration cycles are too long.
Here are some tips:
  • Write simple tests so they will be maintained.
  • Before you make any changes, compile your source to verify you are starting with a clean build.
  • Even if you need a large amount of time to complete a task, you should still rebase with the developer stream often.
I can't develop my UI with TDD because it depends on the control logic, which is not ready yet.

TDD isn't a Silver Bullet, a process that can solve every problem. TDD does not always fit well with your development project. Analyze your project, and use TDD when it is a good fit.

I read David Heinemeier Hansson's blog (creator of Ruby on Rails), and he wrote an entry titled "TDD is dead. Long live testing."[^].

Is this a process that is on its way out?

What's the point of learning it if it is dead?

Ok, hold on.

One needs to read the entire entry to first gain the context, and then read the conclusion he has reached and why. He explains that he adopted TDD and it taught him some things, but now he prefers to simply perform system tests, because he believes TDD creates horrible designs.

Let's address a few issues that David raises in this entry. You state the issues, and I will respond.

David Heinemeier Hansson:
"Over the years, the test-first rhetoric got louder and angrier, though. More mean-spirited. And at times I got sucked into that fundamentalist vortex, feeling bad about not following the true gospel. Then I'd try test-first for a few weeks, only to drop it again when it started hurting my designs."

I want to address something with this statement. It seems that there are many different groups of technology and process advocates professing the true way to develop.

Again, there is no silver bullet.

What works for one development group, may not work for another; it may not even be possible or appropriate to try to apply the prescribed method in all situations.

Don't ever feel like you need to be following a method prescribed by the gospel.

Every environment, developer, language, company has their own ways to do things. Success of a technology in one application does not guarantee success in any other application of it.

David Heinemeier Hansson:
"Test-first units leads to an overly complex web of intermediary objects and indirection in order to avoid doing anything that's "slow". Like hitting the database. Or file IO. Or going through the browser to test the whole system. It's given birth to some truly horrendous monstrosities of architecture. A dense jungle of service objects, command patterns, and worse."

I posit that if you simply start coding, without tests, you will also "give birth to some truly horrendous monstrosities of architecture." TDD does not alleviate you from performing any of the common steps in the software development process. The one truth stated in the entry above about TDD is:

"avoid doing anything that's 'slow'. Like hitting the database. Or file IO. Or going through the browser to test the whole system."

TDD stands for "Test Driven Development", not "Test Driven Design". You should have an overall picture of what your design and architecture should be to accomplish your goals.

TDD is a process to help direct the development to produce code that is testable, correct, robust, and complete by providing feedback quickly during development.

Yeah, but it won't find bugs during system integration.

That is correct.

And these unit tests become regression tests during system integration. Now they are used to detect whether changes made during system integration break a feature that previously existed.

There are very few tools that exist today that find bugs. These tools are designed to look at specific things that are common sources of errors, such as memory management.

I read this paper written by James Coplien, called "Why Most Unit Testing is Wasted."[^].

I found this paper very compelling. James makes many points against unit-testing in general.

If unit-testing is a waste in general, then doesn't that make TDD a waste?

I don't want to stray too far from TDD. However, unit-testing is a fundamental part of TDD.

Let's look at the context and reasoning for a few of the arguments presented in the paper.

James Coplien:

"1.3 Tests for their Own Sake and Designed Tests
I had a client in northern Europe where the developers were required to have 40% code coverage for Level 1 Software Maturity, 60% for Level 2 and 80% for Level 3, while some were aspiring to 100% code coverage.

Remember, though, that automated crap is still crap. And those of you who have a corporate (sic) Lean program might note that the foundations of the Toyota Production System, which were the foundations of Scrum, were very much against the automation of intellectual tasks

It’s more powerful to keep the human being in the loop..."

Those are some strong words, and I couldn't agree more. Testing for code coverage is a misguided endeavor that only provides a false sense of security.

All tests should provide value. If a test does not provide value, it should be removed.

Code coverage is another metric that can be used to evaluate code. However, this metric alone does not indicate how well a unit is actually tested.

I like this statement: "automated crap is still crap."

James Coplien:

"If your coders have more lines of unit tests than of code, it probably means one of several things. They may be paranoid about correctness; paranoia drives out the clear thinking and innovation that bode for high quality. "

James then continues with some pretty harsh words attacking developers' analytical design skills and cognitive abilities, as well as rigid development processes.

Most of this paper presents justified arguments. However, this section appears to be the author's opinion rather than fact.

I believe that unit tests for the sake of unit tests are bad; similar to my thoughts on code coverage metrics for tests. If a test provides value, then it is good. If you end up with more valuable test code than production code, this says nothing about the developer or the code. Hopefully the tests were well designed, and the production code is flexible and robust.

There is no quality conclusion to be drawn from the test code to production code ratio. Again I posit: the same developers that created an inflexible and low-quality system with too many tests would create the same quality system using only system-level tests.

One last point.
James Coplien:

"1.8 You Pay for Tests in Maintenance — and Quality!:
... One technique commonly confused with unit testing, and which uses unit tests as a technique, is Test-Driven Development. People believe that it improves coupling and cohesion metrics but the empirical evidence indicates otherwise (one of several papers that debunk this notion with an empirical basis is Janzen and Saiedian, “Does Test-Driven Development Really Improve Software Design Quality?” IEEE Software 25(2), March/April 2008, pp. 77 - 84.)

To make things worse, you’ve introduced coupling — coordinated change — between each module and the tests that go along with it. You need to think of tests as system modules as well. That you remove them before you ship doesn’t change their maintenance behavior."

I have not read that paper by Janzen and Saiedian. It sounds interesting. If I can get access to it, I will read it and get back to you. Or, if you read it, let me know what it says.

Otherwise, tests do not need to be that tightly coupled to the code. Furthermore, if you find that they are that coupled, and you need to ship them with your product, you are doing something wrong.

Yes, unit tests will be associated with a module, and there may be stubs, fakes, and mocks to help verify that module. However, the code in that module should not change in order to be in a "test mode".

The point is to verify the code the way it will be run in production is correct, not to create tests that pass.
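One way to keep a module free of any "test mode" is to pass its collaborators in through an interface, so a test can hand in a fake where production hands in the real thing. Here is a minimal sketch; the interface and names are mine for illustration, not from Coplien's paper.

C++

#include <cassert>
#include <string>

// The module depends on an abstract interface, not a concrete logger.
struct Logger
{
  virtual ~Logger() = default;
  virtual void write(const std::string& message) = 0;
};

// Production code: identical whether running live or under test.
void process_order(int quantity, Logger& log)
{
  if (quantity <= 0)
    log.write("rejected order");
  else
    log.write("accepted order");
}

// A fake supplied only by the test; the module never knows the difference.
struct FakeLogger : Logger
{
  std::string last;
  void write(const std::string& message) override { last = message; }
};

int main()
{
  FakeLogger fake;
  process_order(0, fake);
  assert(fake.last == "rejected order");  // verify behavior via the fake
}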

It looks like we are starting to digress into a discussion about unit testing in general.

Let's save that for another time.

Summary

There are many processes for developing quality software. Some work better than others, and many are only appropriate for certain development environments. What works for Continuous Deployment web development is neither appropriate nor allowed for Aviation and Defense development. You must always be cognizant of the requirements of the application to be developed and its industry. Then consider the processes involved in order to create high-quality software.

I have had great success in the places where I have applied TDD. I have successfully applied it to commercial software development as well as development in the Defense industry. However, I have also recognized many projects where TDD would not provide value, and therefore I went with a different process to verify my software.

I feel the same way about software development processes as I do about software technologies and tools. You select the best tool for the project. You can't always use a hammer, because some projects are delicate. Moreover, it's best not to try to use a screwdriver as a hammer, because it makes one look like an idiot.

How I Avoid Making Mistakes

general, CodeProject, knowledge Send feedback »

No one likes to be wrong, except maybe the class clown; even then, I'm sure they don't like it if their incorrect answer does not get any laughs from the others. I especially hate it when someone breaks the build, and the cause turns out to be a change that I made. I learned long ago not to chase perfection. However, I also learned there are many things that can be done to improve productivity and success.

It's Only a Mistake If You Do It Twice

That's not the actual definition of a mistake, it's simply a new frame of mind to help see a different picture. Here's the actual definition of a mistake:

mistake
noun
1. An action or judgment that is misguided or wrong.

verb
1. To be wrong about.

If you misjudge an action, but you know not to take that same action again, you have just had a learning experience. You have learned something from the first time you made the mistake, and you don't let it happen again. To continue to make the same mistake over and over means that a person is not learning from their misjudgments. This could be for a variety of reasons: they are careless, ambivalent, or distracted, or they take away the wrong lesson each time they recreate their learning experience (LE). But that's not you. You're here to learn how to reduce the number of mistakes that you make, or at least the number that others have to know about.

What Went Wrong?

This is important. If you have made a mistake, be sure to find the cause of the mistake. Not just a potential cause, but the actual cause. At least when that is possible. You will not be able to change your behavior, or improve your judgments unless you know where you were wrong.

Consider the Flight Recorder on commercial aircraft, also known as the Black Box (although we all know that it is really orange). These devices record important information about the aircraft for the purpose of accident investigation. There are two data-recording components: the Cockpit Voice Recorder (CVR) and the Flight Data Recorder (FDR). The CVR generally records the last 2 hours of audio from the cockpit. The FDR records, at a minimum, 88 data parameters many times per second. The information from both of these devices is used to analyze and help identify the cause of, or contributing factors to, the accident.

Chances are that you don't have a personal black box to analyze and reconstruct your mistake. Hopefully the mistake, I mean learning experience, occurred recently. The details will be clearer in your mind. Things like:

  • What was I thinking?
  • What caused me to believe that?
  • Was I under time pressure?
  • Was this a quick fix that I forgot to return to?
When you are programming and you make a quick change, recompile, and run, and the change does not work, you know what you last changed. That information is fresh in your mind, so you know exactly where to go to analyze and attempt to correct the problem.

Whenever a colleague states that they had to fix some code that I wrote, I ask them where, and learn what the change was. Or, if it is more convenient, I silently perform a diff with the version control software to find out what I did wrong. Then I can start analyzing my mistake.

It's much more difficult to analyze your hopeful LE when you made the change two weeks ago, or six months ago. Your change completed the immediate task at hand, but at the expense of another part of the program, and that went undetected. It becomes more difficult to remember the details as time passes. At the very least, you need to understand the details that answer What in order to avoid creating the same situation again. You don't always need to understand Why something is a problem; only that things like using the hair-dryer in the bathtub cause a problem.

A Personal Learning Experience

I remember when I was six (no I didn't use the hair-dryer in the bathtub.) I was making macaroni and cheese, although it could have been Kraft Cheese and Macaroni. I don't know for sure, but I do know that fact is not important. The noodles were cooked to perfection (probably), and I was going to drain the boiling water from the pan into a colander in the sink. Sitting underneath the colander was an ugly green 70's era glass pitcher. I poured out the water into the sink, draining over the pitcher, and the pitcher cracked and fell into a number of pieces.

I didn't understand then the reason why the glass cracked. But I did surmise that it was not a good thing to pour boiling-hot liquids over cold glass. Ever since then, I check the sink for glass items before I drain hot liquids into the sink. I am not sure, but I think it took another LE to discover that pouring ice-cold liquids over hot glass has the same effect. Also, I think it's likely that I didn't get in trouble for that LE because I was doing my mom a favor when I broke the pitcher. The point is, sometimes knowing What is enough to avoid making a mistake. Consider that half the battle.

The Value of Why?

If you can learn enough information to determine Why something went wrong, then you will be on a path that can help you recognize situations similar to the ones that led up to your previous LEs. If I had had a basic understanding of thermodynamics, and of the forces involved with the rapid expansion of molecules when energy is applied, I may have been able to deduce that something similar could happen in the opposite direction when energy is removed. However, I did not have that understanding, and simply gained another LE.

Let's imagine you are working with a developer who constantly makes mistakes, and these are their responses through a progression of mistakes.

  • So what if I don't delete my dynamic allocations? When the program exits the system will do it for me.
  • I did what you asked, other objects ask for this pointer, I am sure they call delete when they are done.
  • Fine, dynamic memory is too much trouble, I'll just put everything on the stack.
  • I can't put everything on the stack?! It's not large enough?! Fine, I'll make everything global.
Technically, this developer is not making the same mistake over and over; it just happens to be a different mistake in the same context. These mistakes haven't become LEs for him. It's as if he is using finger paints, and keeps rearranging the placement of colors. The colors slowly blend, and pretty soon the only color on the paper is brown, and this developer is content to paint with brown.

Why does he continue to make this series of mistakes?

Clearly this developer does not understand computer memory management.

Understanding Why is important for you to be able to change the circumstances and decisions that led to the LE. You will also be able to recognize similar situations that result in the negative outcome. This can even be true for situations in completely different contexts. If you cannot understand why, you are most likely doomed to continue making mistakes that will feel like déjà vu.

How to Avoid Mistakes

I was about to summarize and end this post when I realized I haven't told you how I avoid mistakes. Thinking about it, my tendency to fire off emails with an important attachment, but forgetting to add the attachment, just helped me avoid making this mistake. I sometimes still send off those emails too quickly. As for the "Reply to All" button, it's like brown paint to me, and I avoid using it as much as possible.

Learn From Other People's Mistakes

I think the best way to avoid mistakes is to make other people's LEs your own. Then you never have to experience the pain or embarrassment that you just witnessed. Unfortunately, it's not that simple. For some reason we don't like to listen to our parents, mentors, and colleagues, and we make the mistake anyway. It really is a mistake, because they "told us that would happen." It frustrates me to no end to watch my kids make the exact same mistakes that I did, even after I told them the story of how I made that mistake.

A Collection of Small Mistakes

If you seem to be making the same mistake over and over, hopefully you can at least mitigate it into small, relatively harmless mistakes, such as sending off an email without the attachment. You can usually recover by quickly sending a second email; be sure to add the attachment first, then add a little joke. If you fire off that second email without adding the attachment, you should step away from the keyboard, take a walk, and reflect on what you just did. You're escalating the original small mistake. However, making that same mistake from time to time does happen. After I made that mistake many times, I now avoid it by adding my attachment first, and then writing my email. I haven't figured out how to mitigate the overuse of the "Reply to All" button.

Controlled Experiments

As I grew older, I am certain that I cautiously experimented with boiling-hot water and different materials in the sink to see which ones couldn't handle the abrupt change in temperature. I thought it was odd when I ran into CorningWare and Pyrex. In this situation I wasn't ignorantly content with just avoiding glass. I wasn't content with only using brown paint. I was curious. I was doing Science!

I still do experiments when I learn of some misguided action that I have taken. Is it a misunderstanding of the programming language I am working with? The way the product was designed? Something non-programming related? It doesn't matter. I experiment to understand. Then I avoid repeating the experiments with negative consequences.

Write It Down

I don't mean write it in your diary for you to review periodically and dwell on the mistakes you have made in your life. Most people have plenty of those already. I mean as another way to understand what happened. You write it down on a piece of paper as if you were explaining it to someone. You want it to be more than simple notes to remind you of what the mistake was, because notes don't force you to think through all of the details. And that is what is important: those details that we tend to abstract away and over-simplify. Then throw the piece of paper away, because you found clarity in what you wrote. Alternatively, you could talk it through with a colleague or someone you trust. Clarity often appears, and you remember, when you actively fill in the details.

Summary

The idea for this entry came to me when I helped a colleague get a test tool up and running. It turned out to be something simple like a missing semi-colon that I spotted after we had been looking for about 10 minutes. I think he was embarrassed, and I said "It's not a mistake, it's a Learning Experience... Just don't let it happen again."
