Embedded Alchemy

Alchemy

Alchemy is a collection of independent library components that specifically relate to efficient low-level constructs used with embedded and network programming.

The latest version of Embedded Alchemy can be found on GitHub.
The most recent entries as well as Alchemy topics to be posted soon:
  • Steganography
  • Coming Soon: Alchemy: Data View
  • Coming Soon: Quad (Copter) of the Damned
Alchemy: Documentation

Abstraction Layers of the Human Body

general, adaptability, CodeProject

I think that almost no one would disagree that the human body is a very complex structure, and most of that complexity is hidden from our view. I would like to make a literal comparison between the human body and abstraction layers, as though the body were defined in software. I hope to connect the dots and convince you of the importance of a well-defined and protected interface.

At the outermost level there is the body itself. A small sample of its interfaces:

  • Sensory input is given in the form of the 5 senses.
  • Communication can be expressed with a variety of means:
    • Speech is expressed with the mouth
    • Signals expressed with sign-language
    • Emotions conveyed with body language
    • Pheromones and other more subtle message transports
  • Energy and medications are administered through a finite number of orifices.
  • Waste and excrement are ejected through well defined interfaces. (When things leave the body from unexpected orifices, this should be concerning.)
  • When the body is sick, it expresses symptoms in many ways. Some of them are only internally detected, others are clearly visible or audible.

Internally, the body is further abstracted into its internal systems, which are composed of discrete organs and glands that perform specific purposes. A small sample of these systems is listed below.

  • Commands are issued by the nervous system.
  • Sensory information is received by the nervous system.
  • The endocrine system helps regulate the different systems, even indirectly issuing its own commands.
  • Energy, waste, and hormones are transferred by the circulatory system.

Value of Interface Boundaries

The interface boundaries provide two very important functions:

Provide discrete functionality

Consider one of the body's organs (objects), the eye. Its purpose is to detect light and send signals to the brain for interpretation. To accomplish this task, smaller interfaces are composed together: input through the cornea, output through the optic nerve, and various other components for protection and for adapting to the environment. While the eye can be used for other purposes, such as identification or even feeling sensations, its basic purpose can be summarized as providing sight.
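
To tie the analogy to code, below is a minimal sketch of the eye as a C++ object; the type names and members are hypothetical, chosen only for illustration. The public interface exposes only input and output, while the supporting components remain hidden.

C++

// Hypothetical types, for illustration only.
struct Light  { float intensity; };
struct Signal { float magnitude; };

class Eye
{
public:
  // Input: light enters through the cornea.
  void Perceive(const Light& light)
  {
    // The hidden components cooperate to form the signal.
    m_signal.magnitude = light.intensity * m_lensFocus;
  }

  // Output: the signal leaves through the optic nerve.
  Signal OpticNerve() const
  {
    return m_signal;
  }

private:
  // Internal components; callers cannot reach these directly.
  float  m_lensFocus = 1.0f;
  Signal m_signal    = {};
};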

Access Control

Interfaces control access to both the data and the implementation.

The cornea and the optic nerve provide the input and output access for the eye. However, these are not the only components required for a properly functioning eye. The iris, ciliary muscle, lens, vitreous humor, retina, fovea, and many other components are contained within the eye. The eye works as a closed system. If the behavior or characteristics of any one of these components changes, it may affect the overall quality of vision the eye is capable of producing. For example, many people see floaters. Floaters are caused by debris or impurities that find their way into the vitreous humor and distort the signals detected by the retina.

I only have a rudimentary understanding of the eye, mostly what can be observed from the outside, plus information from a few diagrams. Even though the diagrams I have viewed are very detailed and depict all of the eye's internal components, I wouldn't know where to begin to hook in and create a new type of eye just by looking at them. There are many interconnected components of the eye that do not directly contribute to providing sight. However, if these components are compromised, the eye may no longer provide reliable information to the brain, or it may cease to function altogether.

System Integrity

Now consider what it means to violate these interfaces. I already pointed out what happens when the vitreous humor of the eye is contaminated. Let's consider the internal regulatory systems of the body. Under natural circumstances, the only way to interact with the nervous system is through the 5 senses.

Doctors have invented ways to get around that:

  • Ingest medications
    • Input validation is important. If you cannot control the input, you may no longer be able to control the output either.
  • Inject chemicals into your system with hypodermic needles.

  • Brain surgeons can stimulate regions of the brain with a probe to induce laughing, among other behaviors and actions, during surgery

  • Catheter

    • The ingenious invention of doctors and surgeons to take advantage of the orifices provided by the body's original designers.

Generally, we get to elect when, and by whom, we let our natural interfaces be violated. I wouldn't want just anybody reaching in and tickling my kidneys.

Inheritance

Inheritance is a valuable and widely abused tool. The topic is so broad that many blog entries could be devoted to it. Therefore, I will leave the specifics for another time and just give you this thought.

A parent's internals are completely protected from access, even from its children. I trust my children even less than my doctor to monkey with my internals.
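
In C++ terms, here is a minimal sketch (the class names are hypothetical): a derived class has no access to its parent's private members, only to what the parent deliberately exposes.

C++

class Parent
{
public:
  int Pulse() const { return m_heartRate; }  // deliberately exposed

private:
  int m_heartRate = 60;  // internal state, off-limits to everyone
};

class Child : public Parent
{
public:
  void Meddle()
  {
    // m_heartRate = 200;    // error: 'm_heartRate' is private in Parent
    int observed = Pulse();  // allowed: the public interface
    (void)observed;
  }
};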

Conclusion

Think of your objects as living organisms when you define the interfaces and create the implementations. Imagine you were that organism and consider what is required to guarantee the integrity of your object.

Take advantage of strict interfaces to:

  • Protect access
  • Verify input
  • Encapsulate details
  • Abstract complexity

Would you trust the invariants of your design to be left up to users that interact with you?

 

Test Driven Development

general, reliability, CodeProject

Test Driven Development (TDD) can be a very effective method to develop reliable and maintainable software. However, I have witnessed instances where the development process and results were far from ideal because the tenets of TDD were not fully understood. I will provide a brief overview of TDD, including a description of the concepts, the development process, and the potential benefits associated with TDD.

Concepts

Rapid Feedback During Development

The most basic goal of TDD is to provide the developer with the shortest development cycle possible. This is based on the concept that defects are simpler and less expensive to find and fix the closer you are to the point where they were introduced. This seems reasonable if you consider that you have all of the context and details for the change you just made floating around in your head; these extra details are forgotten over time.

Manage the Risk of Change

This rapid cycle of constant feedback informs you of the quality level of each change. The process works best when you have a unit-test framework for your development environment; unit-test frameworks are an entirely different topic. For now, let's assume that it is easy to write and run all of the tests that you develop during a TDD session. Each test should be small and only verify a tiny part of the code being developed. This is why it is important for it to be simple to create and run new tests.

Reduce Waste, Maximize Value

We want lots of tests, but no more than it takes to verify the code. As you develop within this instant feedback cycle, you are able to focus on solving the problem at hand, and the array of tests you are building the code upon provides feedback on the overall system if you make a mistake. The result of your implementation should be a testable piece of logic that is minimal and correct: the feature is built with only the statements that are required, eliminating the wasteful code that is often put in place for some cool future addition.

Red. Green. Refactor.

"Red. Green. Refactor." is the mantra of a developer working by TDD. If "Red. Green. Refactor." is not mentioned when a person describes TDD, they are most likely not describing it accurately. Simply put:

  • Red: A test is written for a small non-existent feature, then it is run and inevitably fails.
    • A set of tests that fail is called "Red"
  • Green: The feature is implemented - Rerun the test and it passes.
    • A set of tests that pass is called "Green"
  • Refactor: Inspect the code, can it be improved?
    • Is all of the functionality implemented?
    • Can the implementation be simplified, especially duplicated code?

Keep the mantra in mind; it will help you focus on the process and the goals of TDD.

Let's go into a bit more detail with an example to demonstrate the details that are often glossed over. We'll walk through building a function to convert a temperature from Celsius to Fahrenheit. This should only take two or three iterations to arrive at a complete function with the correct behavior. The process I demonstrate below is a bit exaggerated; however, the process itself scales very well for all types of development with a unit-test framework.

This is the starting point of the function, which will compile.

C++

float celsius_to_fahrenheit(float temperature)
{
  return 0;
}

The Approach

We will start with the simplest conversion that we know about Celsius: the freezing point of water, which is zero. This has the equivalent value of thirty-two in Fahrenheit. Let's write a test that will verify this fact. I will use some imaginary verification macros to verify the code.

C++

void TestCelsiusAtZero()
{
  ASSERT_EQUAL(32, celsius_to_fahrenheit(0) );
}
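
As an aside, here is a minimal sketch of what such imaginary macros could look like. Real unit-test frameworks provide far richer reporting, and a tolerance-based comparison is usually the safer choice for floating-point values.

C++

#include <cmath>
#include <cstdio>

// Hypothetical verification macros, for illustration only.
#define ASSERT_EQUAL(expected, actual)                        \
  do {                                                        \
    if (!((expected) == (actual)))                            \
      std::printf("FAIL: %s(%d)\n", __FILE__, __LINE__);      \
  } while (0)

// Floating-point comparison with an explicit tolerance.
#define ASSERT_NEAR(expected, actual, tolerance)              \
  do {                                                        \
    if (std::fabs((expected) - (actual)) > (tolerance))       \
      std::printf("FAIL: %s(%d)\n", __FILE__, __LINE__);      \
  } while (0)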

Now we initiate the tests. 

  • TestCelsiusAtZero():          Fail

This is good, because now we have verified that we have written a test that fails. Yes, it is possible to write a test that never fails, which provides no value and adds to our maintenance overhead. We have just achieved RED in our TDD development cycle. The next step is to add the feature code that will allow this test to pass. Keep in mind, we want simple. Simple code is easy to understand and easy to maintain.

C++

float celsius_to_fahrenheit(float temperature)
{
  return 32;
}

Run the tests:

  • TestCelsiusAtZero():          Pass

You might say, "Well that's cheating!"

Well, is it? When we run our single test, it indicates we have done the right thing. TestCelsiusAtZero() is only verifying one facet of our function. That one facet is correct, for the moment. This means that we have reached the next step, GREEN.

It's time to analyze our solution, or REFACTOR. Did we add all of the functionality that is required to create a correct solution? Obviously not; Fahrenheit has other temperatures than 32°. The next test will verify the conversion of the boiling point of water, 100°C.

C++

void TestCelsiusAt100()
{
  ASSERT_EQUAL(212, celsius_to_fahrenheit(100) );
}

This time there are two tests that are run.

  • TestCelsiusAtZero():           Pass
  • TestCelsiusAt100():            Fail

RED

With no changes to the implementation, we still expect the first test to pass, and we have now verified that our new test fails properly. It's time to add the implementation details to the conversion function to support the conversion from 100°C without breaking our first test.

C++

float celsius_to_fahrenheit(float temperature)
{
  return (temperature == 0) ? 32 : 212;
}

Run the tests:

  • TestCelsiusAtZero():           Pass
  • TestCelsiusAt100():            Pass

GREEN 

 

REFACTOR

Yes, this is exaggerated, but hopefully you see the point. Let's select one more temperature, the average temperature of the human body, 37°C. Implement the test:

C++

void TestCelsiusAtHumanBodyTemp()
{
  ASSERT_EQUAL(98.6f, celsius_to_fahrenheit(37.0f) );
}

Run the tests:

  • TestCelsiusAtZero():           Pass
  • TestCelsiusAt100():            Pass
  • TestCelsiusAtHumanBodyTemp():  Fail

RED

Add the implementation for this test:

C++

float celsius_to_fahrenheit(float temperature)
{
  return (temperature * 9.0f / 5.0f) + 32.0f;
}

Run the tests:

  • TestCelsiusAtZero():           Pass
  • TestCelsiusAt100():            Pass
  • TestCelsiusAtHumanBodyTemp():  Pass

GREEN

 

REFACTOR

Upon inspection this time, it appears that we have all of the functionality to complete the implementation of this function and meet the requirements. Can this function be further simplified? Possibly, by reducing 9.0 / 5.0 into a decimal. However, I believe that the fraction 9/5 is clearer. Therefore I will choose to leave it as it is, and declare this function done.

Benefits

By default, the code is written to be testable and more maintainable. The code also has unit-tests from the very beginning of development. This helps eliminate the open-ended amount of debugging time that is usually required at the end of a project. As each change is added to the code, continue to add a test before making the change. This will ensure as much code as possible is covered by a test and continues to add value to your codebase.

Creating tests helps you focus on smaller steps to develop and verify each part of the code used to build a feature. This increased focus can improve the developer's productivity. Single paths through the code are considered for each new test and change, which means exceptional and error cases can be handled in a verifiable and useful manner. Finally, no more code than is necessary is developed. Code for potential "cool" future features is left out, because it may not be verifiable, or it would require more tests for something that is not required. All of these factors contribute to a leaner and more correct codebase.

The tests become a sandbox and playground for new developers learning the project. They can make a change, run the tests, and see how the different parts are interconnected by what breaks. Undo the change, and poke into another spot. This is a much more fun and interactive approach to learning, especially when compared to tediously reading through the code in your head. Alternatively, veteran developers on the project can experiment with their changes and verify their hypotheses to determine whether a change they are considering is the best choice.

Serendipity

An unexpected benefit I have experienced many times is the early use of the objects and APIs while developing the tests. I have found it very helpful to be able to use the interfaces that I am developing as I design them. I have gotten mid-way through the development of an object and thought, "This interface is shit!" What appeared perfectly reasonable as a header file on paper and in design diagrams was actually a very cumbersome and clumsy object to use. Developing the tests gave me the chance to experience and discover this before I had completed my implementation.

Similarly, I have discovered errors in assumptions about the behavior of a feature-set critical to the system. This was a public command interface where the command variables could be set or retrieved one at a time. However, I discovered a set of parameters that had to be set in a specific order: even though the entire set of parameters would result in a valid configuration, the system could be commanded into a state where the configuration sequence had to be started over if they were sent in the wrong order. Since I discovered this early enough in development, I was able to raise the issue, and the team made the appropriate design changes to account for it. Had this been discovered in qualification testing, it would have been much more difficult to design and implement the change, not to mention how much more time it would probably have required compared to discovering the issue early in the schedule.

One last benefit I have experienced is the development of small, modular, and reusable components. The Test Driven Development process focuses on small tasks and incremental steps. This has helped me develop function and object interfaces that are more cohesive. They perform one task, and they do it very well. This lets me create a small collection of interoperable functions and components that I can use to compose more complex objects and functions that remain cohesive. Yet when I inspect their logic and tests, they still feel simple and easy to maintain. Basically, I have become much better at managing complexity with the use of Test Driven Development.

Drawbacks

Test Driven Development cannot be easily applied to all types of development. One example is user-interface testing, where full functional testing of the application may be required before many useful tasks can be verified. TDD therefore cannot be brought into the development early enough to benefit the entire project. However, it is important to note that no matter where you start in the development process, once pragmatic tests can be written, TDD can be applied to help guide additional changes.

The tests that are developed become part of the maintenance effort required by the project. This is also the case for any other type of development that creates tests. If the tests are not maintained, the value they provide is lost. It is just as important to write small, maintainable tests as it is to write small and maintainable production code. The simplest way to ensure the tests are maintained is to make running the unit-tests part of the build process: the system will not create the output binary unless all of the tests pass. Continuous Integration is an excellent process to help manage this task in a pragmatic way.

Unit Test Process in General

Most of the other drawbacks are shared with any process that is based upon large sets of automated regression tests. It does not matter whether these are unit-tests or higher-level component tests: the tests must be maintained. That is why it is important to write maintainable tests.

Management support becomes essential because of the previous drawback. A project management team that does not understand the benefits of the process may view the unit-tests as a waste of time that could be spent writing code. The entire organization that has direct input to the codebase must understand, believe in, and follow the process. Otherwise, the test-set will slowly fall into disrepair with incomplete patches of code that are vulnerable to risk when changes are made.

Misunderstandings

There are two common misinterpretations that I would like to bring to your attention to help you identify if you are moving down this path. This will help you self-correct and maximize the potential benefit of following TDD practices.

Write All of Your Tests First

The fast-feedback concept is lost when the statement that gets emphasized is "write the tests first." This has been misinterpreted as "write all of your tests upfront, and then write your code." Value can still be derived from a process like this, because much thought will be put into writing and compiling the tests, which hopefully carries over to the implementation, and the code will still be testable. However, I believe this makes the task of developing a solution much more difficult: another layer of indirection has been created before the code is developed, and the developed code must fit into this testing mold.

There is one thing that I have noticed about this interpretation of TDD that can make it successful in the short term: the use of mock objects. Mock objects are a testing tool that allows behavioral verification of a unit, verifying that certain functions are called with the correct values, a specified number of times, and in a specified order. In this case, it is not as difficult to imagine what the functional implementation should be in order to test it, because you are already thinking in terms of the behavior as you develop the test.

Behavior-driven tests are somewhat fragile, in that they use internal knowledge of the implementation to verify it is performing the correct action. If you use the same interface and return the same results, but use a different implementation, a behavior-driven test may fail, whereas a data-driven test will continue to function properly. In this case, the same data-driven unit test could verify two different implementations of an object, while a separate suite of behavior tests would need to be created for each implementation. The sketch below illustrates the difference.
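
Here is a minimal sketch of that contrast, reusing the imaginary ASSERT_EQUAL macro from earlier; the collaborator types are hypothetical and defined only for this illustration.

C++

#include <vector>

// Minimal collaborators, invented for this example.
struct Database
{
  virtual ~Database() = default;
  virtual void Insert(int value) { records.push_back(value); }
  std::vector<int> records;
};

struct MockDatabase : Database
{
  void Insert(int value) override
  {
    ++insertCalls;           // record the behavior
    Database::Insert(value);
  }
  int insertCalls = 0;
};

struct Repository
{
  explicit Repository(Database& db) : m_db(db) { }
  void Save(int value) { m_db.Insert(value); }
  Database& m_db;
};

// Data-driven: verifies only the observable result.
void TestSaveStoresRecord()
{
  Database db;
  Repository repo(db);
  repo.Save(42);
  ASSERT_EQUAL(1u, db.records.size());
}

// Behavior-driven: also pins down *how* the result was produced.
// If Save() later switched to a batched insert, this test would break
// even though the stored data is identical.
void TestSaveCallsInsertExactlyOnce()
{
  MockDatabase db;
  Repository repo(db);
  repo.Save(42);
  ASSERT_EQUAL(1, db.insertCalls);
}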

Yeah, But TDD Won't Find Bugs At Integration Testing

This misunderstanding has to do with unit testing in general just as much as with TDD itself. The comment is most often heard from a developer or manager who has not yet seen the value TDD can provide, let alone experienced it first-hand. The very first thing everyone should understand when they work at the unit-test level is that the unit test is for the developer. It is written and maintained by the developer, and it is intended to give the developer near-instant feedback on changes they make to the system.

The second thing to understand is that a unit test does not find bugs. It is written to detect a bug that the developer has already found or imagined will exist. Integration testing is an entirely different level of testing. While developers are making changes to properly integrate their software, they can still use the unit tests to perform regression testing; however, bugs will still pop up. When a developer or software tester finds a defect in integration, the developer should write a test to detect that defect before making the changes to fix the problem. Now a unit test will detect the defect before the next integration test cycle starts. Again, the unit tests and TDD are for the developers. An example of this is sketched below.
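
For example, in the same imaginary test style used earlier (the defect number is hypothetical), the failing test is written first to prove it detects the reported defect; after the fix it remains as a permanent regression guard.

C++

// Defect #1234 (hypothetical): integration reported a bad conversion
// for temperatures below zero. Write this test first and watch it
// fail; then fix the implementation.
void TestCelsiusBelowZero()
{
  ASSERT_EQUAL(-40, celsius_to_fahrenheit(-40));  // -40° is the same on both scales
}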

Conclusion

I discovered Test Driven Development out of frustration about four years ago, when I was searching a brick-and-mortar bookstore for a better way to write software. I played around with it, read some books by Martin Fowler and Kent Beck, and I have been using TDD successfully ever since. When you try to explain TDD to someone who has not seen the need for a better way than what they are already used to, your efforts may fall on deaf ears. However, I have found that sometimes the best way to convey the value of something is to simply demonstrate it.

Test Driven Development is about three things:

  1. Rapid Feedback: Red, Green, Refactor.
  2. Manage the Risk of Change: Make sure each change adds value to your code
  3. Reduce Waste, Maximize Value: Eliminate code that does not provide value, only write code that is necessary

This is in contrast to the developer who chooses to make an enormous number of changes over three weeks, until one day you hear them say in a status meeting, "I'm going to start to try and get it to compile tomorrow."

Which method of implementation do you think has the greatest chance of success? 

Code Rot

general, CodeProject

What is code rot?

A.K.A. software rot, code decay, code entropy; it goes by many similar names.

Code rot: a term used to describe the quality of source code as it changes over time and migrates further away from the original design, possibly continuing until the code is no longer viable. A passive use of the term describes the source code of an aging system that requires dependencies or tools that are no longer available; eventually the hardware fails and there is no way to update or port the software to newer tools.

This entry will focus on the former variant rather than the latter, because it describes active code rot. Each and every change to software can introduce decay. Please recognize that rot, decay, and entropy are all just other words for risk: the potential for a problem to occur.

Divide and Conquer

The way we build computer programs is with divide and conquer: take the problem at hand, break it down into a set of simpler problems, and work toward solving those simpler problems. Continue this process until you reach a level at which you can instruct the computer in your implementation language of choice. Take what is complex and rework it until it is simple. Essentially, we build a codebase one line at a time.

Every change to a program should be intentional and add value, because every change has the potential to introduce new defects into the program. Unmanaged risk in development will eventually be realized as bugs. Consider yourself lucky if the bug presents itself immediately. These potential problems will linger in the code and collect silently over time, until one day changes are added that start to expose them. If this process were given a name, I would choose something that meant the opposite of divide and conquer, possibly multiply and fail. Essentially, every change becomes a new factor multiplied into the myriad ways the code will eventually fail.

Rotten Code

Bugs aren't the only potential problem that can be introduced. The quality of the code and design can be compromised as well. This is actually what most developers are referring to when they say code rot: the point where even the most minor change cannot be made without causing one of these hidden bugs to appear, or where a relatively significant amount of code is required to implement a minor feature.

It does stink to work in rotten code. Rotten code can turn a new and exciting assignment into a nightmare. It's difficult to tell how rotten code is unless you actually jump in and navigate through the morass of structured gibberish. Unless you are in the code, you also do not know what qualities and hidden gems remain from the original design and implementation. Therefore, it is very important to trust and guide the engineers who are most familiar with the code to help you make good decisions regarding change.

Change

Change can range from the addition of a minor feature, to a major refactor, to an entire rewrite and integration of a module. Change for change's sake is never a good thing in programming. Each of these augmentations requires careful consideration of how and why the new implementation will be executed.

“Those who cannot remember the past are condemned to repeat it”

George Santayana

The continual change of a program is the driving force behind code rot. Impatience, inattentiveness, the urgency of the schedule and budget, and in many cases plain ignorance all contribute to how quickly code deteriorates. One line at a time, each feature is built. One object at a time, swaths of code are rewritten to combat the filth that seeps into the program. The only problem is that too many developers are quick to declare, "This code is hopeless," and decide to replace it with something…else.

Learning to understand an existing implementation of someone else's idea is not nearly as fun or glamorous as designing and developing your own. However, there is almost always something to learn from existing code; the lesson may just as often be an example of how not to implement something. Previous mistakes are not bad, as long as you only let them happen once. It's even better if you can develop enough wisdom to recognize a mistake that you have not made before, correct it, and avoid making the same mistake in the future.

Boundaries

Boundaries are an under-appreciated design principle in software. There are many types of boundaries, from the obvious (interfaces, objects, modules, and function calls) to the not-so-obvious concept of control scope. Consider the potential value a boundary can provide. Have you used boundaries in all of the contexts listed below?

  • Simplify implementations by enforcing the divide phase of our divide and conquer implementation process. 
  • Become a barrier to prevent rotten implementations from spreading
  • Conversely, use the barrier to isolate code so it can be repaired
  • Reduce the coupling that tends to cause code to become unnecessarily rigid and complex:
    • Hide implementation details to minimize assumptions that cause side-effects
    • Prevent direct coupling to the implementation
    • Protect the integrity of the design by protecting the integrity of the data
  • Use scope boundaries to:
    • Limit access
    • Perform automated cleanup
    • Optimize
  • Provide security or robustness with trust
  • Protect Intellectual Property

Can you think of any uses that I did not list?
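
To make a few of these items concrete, here is a minimal sketch of one such boundary, the pimpl (pointer-to-implementation) idiom in C++; the Sensor name and its internals are hypothetical. Callers compile against the interface alone, so the implementation can be repaired or replaced without spreading change.

C++

// sensor.h - the boundary: callers see only this interface.
#include <memory>

class Sensor
{
public:
  Sensor();
  ~Sensor();
  double Read() const;

private:
  struct Impl;                   // implementation hidden behind the boundary
  std::unique_ptr<Impl> m_impl;  // callers cannot couple to the internals
};

// sensor.cpp - free to change without recompiling or breaking callers.
struct Sensor::Impl
{
  double calibration = 1.0;      // internal detail, protected by the boundary
};

Sensor::Sensor() : m_impl(new Impl) { }
Sensor::~Sensor() = default;

double Sensor::Read() const
{
  return 42.0 * m_impl->calibration;  // placeholder reading
}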

Boundaries are not the only solution for preventing and fixing code rot. However, they are the simplest solution I am aware of that is so versatile and can be used in so many different contexts. Next time you consider violating a boundary so you can easily access some data, think of code rot. It is the collection of these quick fixes, and the unintended side-effects they introduce, that allows rot to fester and thrive. Hopefully this will help keep your code clean, tidy, and a joy to work in.

 

Controlled Access of Global Data

adaptability

Global variables have a way of becoming a binding element that tightly couples modules. Certain desired behaviors may only occur because of side-effects created when the value of a variable is changed. Conversely, undesired features seem to appear intermittently in ways that cannot reliably be reproduced. As more global variables are added to the codebase, the system seems to become more unstable. At this point, removing or altering a set of global variables becomes a monumental risk, rather than a safe and simple task.

It is not realistic to simply prohibit the use of global variables. There is a reason global variables still exist in programming languages: they serve a purpose. Just like every other feature a language provides, it is important to know how and when to use them appropriately. Global variables are no exception to this rule. They can be used safely and effectively with a few practices.

Although the majority of the code examples on this site are in C/C++, the concepts and principles can be applied to any language. The most important thing to remember when adding global data is to control access to the data. This rule applies regardless of whether you intend the data to be private to the current compilation unit or truly global to the entire program. The most reliable way to accomplish this is to only give access to the data through a function call. Controlling access with a function provides a control point to manage change in your application. This keeps your application flexible and able to adapt to change, rather than break and have to be reassembled.

C++

#include <cstddef>  // size_t

size_t g_count = 0;

size_t GetGlobalCount()
{
  return g_count;
}

The code above is fine, except access to the global variable is still not protected. There are three ways to protect access to the data at a global level without using classes.


1) The static qualifier

Adding the static qualifier to a global variable tells the compiler to only provide access to this variable from within the current compilation unit. This method works in both C and C++:

C++

static size_t g_count = 0;

Be aware that a subtle problem can arise with this method if two different files both declare static variables with the same name. The linker will not complain; each file silently receives its own independent variable, which can lead to confusion if the two are assumed to be the same. (It is two non-static definitions of the same name that produce a multiply defined symbol error.)
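
For example, in the following two hypothetical files, each g_count is private to its own file; they merely share a name.

C++

// file_a.cpp
static size_t g_count = 0;  // visible only within file_a.cpp

// file_b.cpp
static size_t g_count = 0;  // a completely separate variable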

2) Use a namespace

This is the recommended method for limiting the scope of access to global variables in C++. The unnamed namespace places the global variable in a scope block that is only accessible from the current file. Therefore, even if multiple files declared the same global variable and placed it in unnamed namespaces, the linker would view the data as multiple distinct variables:

C++

namespace // unnamed
{
  size_t g_count = 0;
}

3) Use a static variable within a function

This method has the advantage that the global variable is not directly accessible from other functions in the same compilation unit, as well as from other compilation units. The drawback is that either a reference or pointer to the function-static variable must be returned for it to be useful, or input parameters to the function must dictate how the value is manipulated.

C++

size_t& GetGlobalCount()
{
  static size_t g_count = 0;
  return g_count;
}
Or

C++

size_t GlobalCount(size_t value, bool isSet)
{
  static size_t g_count = 0;
  if (isSet)
    g_count = value;
 
  return g_count;
}
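
Usage of this second form might look like the following:

C++

GlobalCount(10, true);                     // set the value
size_t current = GlobalCount(0, false);    // get the value; current == 10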

Conclusion

Global data is considered by many to be evil. Oftentimes, however, it is necessary to have some sort of globally accessible data. If access to the global data is properly controlled, it can remain an asset rather than a liability. The key quality, the ability to adapt, has been preserved for the global variables: access to the data is controlled completely by you. Therefore, if a change needs to occur, you still have the flexibility to make your change without it becoming an engineering nightmare.

Introduction

general

This is a journal for those who feel they have been damned to live in a code base that has no hope. However, there is hope. Hope comes in the form of understanding how entropy enters the source code you work in, and of using discipline, experience, tools, and many other resources to keep the chaos in check. Even software systems with the most well-designed plans and solid implementations can devolve into a ball of mud as the system is maintained.

Software maintenance is definitely a misnomer. Once the system has been tested and delivered, any further changes are simply another round of design and development. Unfortunately, software maintenance is typically left to junior-level developers in order to free the senior engineers to move on and build the next shiny object, when in fact I believe the re-engineering of a software system should be led by the most skilled engineers in the organization.

In my experience, a set of newly developed code tends to hold its original design integrity for about 12 to 18 months after the original phase of development has completed. After this amount of time, the original vision seems to get lost, schedules become king, the original developers have moved on, etc. These are some of the many reasons that code seems to rot. There really is no single force to blame, just as there is no single fix to prevent it from occurring.

I had been a software engineer for more than a decade before I realized that much of the code I had written was really horrible, at least from a maintenance, or re-engineering, point of view. Up to that point in my career, I usually found myself working at a new company every two years. This seems to be a common trend in this industry, given the high demand for good talent, the volatility of technology startups, and the enormous amount of opportunity that exists. The only problem is that this never put me in the position of living in the filth I had just helped create. Once I was put in that position, I actually found it to be a great opportunity for learning and honing my skills in this craft.

I will continue to document the various experiences, tips, tricks, practices, tools, mantras, and whatever else I have found useful along the way to becoming a better engineer. Now, with very little extra effort, I write each programming statement as if I will be re-engineering this same set of source for many years to come. Please continue to visit and learn, as well as share the experiences and practices you have discovered for software maintenance. I also welcome any questions.

Best Regards,

Paul M. Watt
