The Devil's Advocate is often an effective role that can help uncover logical weaknesses in a point of view. For those who are unfamiliar with the term, the Devil's Advocate takes a position that they do not necessarily agree with for the sake of debate. I usually do it to learn more about the topic the proponent is advocating; I'll admit, sometimes I just do it to push buttons.
I have had many discussions with developers from a variety of backgrounds and skill levels. I read programming articles and other development blogs. Everyone has an opinion. This got me thinking about how people rationalize arguments for the technologies and processes that they prefer. I want to present a dialogue where the Devil's Advocate drives the discussion with logical, and sometimes illogical, arguments. As with many arguments, some are valid points and others are distractions that hijack the discussion by changing the subject. The comments that the Devil's Advocate makes will come from any of these sources.
On the opposite side is the proponent. The proponent's answers will be clear and succinct. I may cite a link to another source that expands on an idea provided in an answer.
I hope this creates a format that flows naturally, as a discussion: one that primarily presents facts and arguments, though opinions will appear as well. If you have a differing opinion, I would love to hear it; let's continue the discussion after the entry ends. If this turns out well, I will write more posts like this from time to time.
Test Driven Development
Test Driven Development (TDD) is a software development process that focuses on short development cycles to produce robust code. The short cycles (30 seconds to an hour or so) create a tight feedback loop that tells the developer whether the most recent changes were good or bad. The developer first writes a failing test, then adds the code to make the test pass, and finally evaluates the solution and improves it if necessary. This process is repeated until all of the features and functionality of the program are in place.
The developer that writes the code should also write the tests. One at a time, gradually building up the code.
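To make one cycle concrete, here is a minimal red-green-refactor sketch in Python. The `is_leap_year` function and its rules are a hypothetical example invented for illustration, not something from the entry above:

```python
# Red: the test is written first; it fails until is_leap_year exists.
def test_leap_year_rules():
    assert is_leap_year(2024)      # years divisible by 4 are leap years
    assert not is_leap_year(1900)  # century years are not leap years...
    assert is_leap_year(2000)      # ...unless they are divisible by 400

# Green: the minimal implementation that makes the test pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Refactor: with the test passing, the code can now be cleaned up safely,
# re-running the test after each small change.
test_leap_year_rules()
```

Each pass through this loop adds one small piece of behavior, which is how the code is gradually built up.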
That single developer has to write much more code then. They have to write all of their normal code and the tests.
This will take twice as long, and you're telling me that the work can't be distributed?
My schedule can't afford that!
- TDD keeps your developers focused on solving the immediate problem, by adding one feature at a time.
- This focus can lead to less production code overall, because only the code needed to pass the tests is written.
- Your software will be testable.
- These tests will give you confidence when entering system integration.
- Yes, your development phase may take a little longer.
- However, you will have confidence during system integration to make changes, and detect if it affected your system negatively.
- This will make your overall schedule more predictable, and should shorten the length of the system integration phase.
Speaking of integration, let's not leave development just yet.
When I integrate my code with everyone else's, I have to fix all of the broken tests.
This situation is not unique to TDD; it can occur with any process that builds up a regression test suite. If there are broken tests after your changes, this could mean a few things:
- Your tests may be too complex.
- Your code is tightly coupled, and your programming side-effects are interfering with this other code.
- The other developers delivered code with broken tests.
- Your integration cycles are too long.
- Write simple tests so they will be maintained.
- Before you make any changes, compile your source to verify you are starting with a clean build.
- Even if you need a large amount of time to complete a task, you should still rebase against the main development stream often.
Is this a process that is on its way out?
What's the point of learning it if it is dead?
Ok, hold on.
One needs to read the entire entry first to gain the context, then read the conclusion he has reached and why. He explains that he adopted TDD, and it taught him some things, but now he prefers to simply perform system tests, because TDD creates horrible designs.
Let's address a few issues that David raises in this entry. You state the issues, and I will respond.
"Over the years, the test-first rhetoric got louder and angrier, though. More mean-spirited. And at times I got sucked into that fundamentalist vortex, feeling bad about not following the true gospel. Then I'd try test-first for a few weeks, only to drop it again when it started hurting my designs."
I want to address something with this statement. It seems that there are many different groups of technology and process advocates professing the true way to develop.
Again, there is no silver bullet.
What works for one development group, may not work for another; it may not even be possible or appropriate to try to apply the prescribed method in all situations.
Don't ever feel like you need to be following a method prescribed by the gospel.
Every environment, developer, language, and company has its own way of doing things. Success of a technology in one application does not guarantee success in any other application of it.
"Test-first units leads to an overly complex web of intermediary objects and indirection in order to avoid doing anything that's "slow". Like hitting the database. Or file IO. Or going through the browser to test the whole system. It's given birth to some truly horrendous monstrosities of architecture. A dense jungle of service objects, command patterns, and worse."
I posit that if you simply start coding, without tests, you will also "give birth to some truly horrendous monstrosities of architecture." TDD does not alleviate you from performing any of the common steps in the software development process. The one truth stated in the entry above about TDD is:
"avoid doing anything that's 'slow'. Like hitting the database. Or file IO. Or going through the browser to test the whole system."
TDD stands for "Test Driven Development", not "Test Driven Design". You should have an overall picture of what your design and architecture should be to accomplish your goals.
TDD is a process to help direct the development to produce code that is testable, correct, robust, and complete by providing feedback quickly during development.
That is correct.
And these unit tests become regression tests during system integration. Now they are used to detect whether changes made during system integration break a feature that previously existed.
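As a small sketch of that regression role (using a hypothetical `slugify` function, not one from the entry), the same test written during development keeps guarding the behavior during integration:

```python
def slugify(title):
    # Tested behavior: trim whitespace, lowercase, spaces become hyphens.
    return title.strip().lower().replace(" ", "-")

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

test_slugify()

# If someone "optimizes" slugify during integration and drops the strip()
# call, test_slugify fails immediately, flagging the regression before
# the change ever reaches the rest of the system.
```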
Very few tools exist today that actually find bugs. The tools that do are designed to look at specific, common sources of errors, such as memory management.
I found James Coplien's paper, "Why Most Unit Testing is Waste," very compelling. James makes many points against unit testing in general.
If unit-testing is a waste in general, then doesn't that make TDD a waste?
I don't want to stray too far from TDD. However, unit-testing is a fundamental part of TDD.
Let's look at the context and reasoning for a few of the arguments presented in the paper.
"1.3 Tests for their Own Sake and Designed Tests
I had a client in northern Europe where the developers were required to have 40% code coverage for Level 1 Software Maturity, 60% for Level 2 and 80% for Level 3, while some where aspiring to 100% code coverage.
Remember, though, that automated crap is still crap. And those of you who have a corporate (sic) Lean program might note that the foundations of the Toyota Production System, which were the foundations of Scrum, were very much against the automation of intellectual tasks
It’s more powerful to keep the human being in the loop..."
Those are some strong words, and I couldn't agree more. Testing for code coverage is a misguided endeavor that only provides a false sense of security.
All tests should provide value. If a test does not provide value, it should be removed.
Code coverage is another metric that can be used to evaluate code. However, this metric alone does not indicate how well a unit is actually tested.
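To illustrate why coverage alone is a false sense of security, here is a hypothetical example (the `apply_discount` function is invented for this sketch). A test can execute every line, reporting 100% coverage, while asserting nothing:

```python
def apply_discount(price, percent):
    # Bug: the discount is added instead of subtracted.
    return price + price * percent / 100

def test_apply_discount_runs():
    # This test executes every line of apply_discount, so a coverage tool
    # reports 100% for it -- yet it makes no assertion about the result
    # and can never catch the bug.
    apply_discount(100, 10)

test_apply_discount_runs()  # passes despite the bug

# A valuable test asserts on the result, and would fail here:
#     assert apply_discount(100, 10) == 90   # actual result is 110
```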
I like this statement: "automated crap is still crap."
"If your coders have more lines of unit tests than of code, it probably means one of several things. They may be paranoid about correctness; paranoia drives out the clear thinking and innovation that bode for high quality. "
James then continues with some pretty harsh words attacking developers' analytical design skills and cognitive abilities, as well as rigid development processes.
Most of this paper presents justified arguments. However, this section appears to be the author's opinion rather than fact.
I believe that unit tests for the sake of unit tests are bad, similar to my thoughts on code coverage metrics for tests. If a test provides value, then it is good. If you end up with more valuable test code than production code, this says nothing about the developer or the code. Hopefully the tests were well designed, and the production code is flexible and robust.
The test-code to production-code ratio, by itself, is not coupled to quality. Again I posit: the same developers that created an inflexible, low-quality system with too many tests would create a system of the same quality using only system-level tests.
"1.8 You Pay for Tests in Maintenance — and Quality!:
... One technique commonly confused with unit testing, and which uses unit tests as a technique, is Test-Driven Development. People believe that it improves coupling and cohesion metrics but the empirical evidence indicates otherwise (one of several papers that debunk this notion with an empirical basis is Janzen and Saledian, “Does Test-Driven Development Really Improve Software Design Quality?” IEEE Software 25(2), March/April 2008, pp. 77 - 84.)
To make things worse, you’ve introduced coupling — coordinated change — between each module and the tests that go along with it. You need to think of tests as system modules as well. That you remove them before you ship doesn’t change their maintenance behavior."
I have not read that paper by Janzen and Saiedian. It sounds interesting. If I can get access to it, I will read it and get back to you. Or if you read it, let me know what it says.
Otherwise, tests do not need to be that tightly coupled to the code. Furthermore, if you find that they are that coupled, and you need to ship them with your product, you are doing something wrong.
Yes, unit tests will be associated with a module, and there may be stubs, fakes and mocks to help verify that module. However, the code in that module should not change in order to be in a "test mode".
The point is to verify the code the way it will be run in production is correct, not to create tests that pass.
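A minimal sketch of that idea, using hypothetical names (`InMemoryUserStore`, `register_user`): the production code receives its dependency as a parameter, so a fast in-memory fake replaces the slow database in tests while the production code itself contains no "test mode" branch at all:

```python
class InMemoryUserStore:
    # Test double: a fake standing in for a slow database-backed store.
    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def get(self, user_id):
        return self._users.get(user_id)

def register_user(store, user_id, name):
    # Production code: it depends only on the store interface passed in,
    # so it runs identically against the fake or the real database.
    if store.get(user_id) is not None:
        raise ValueError("user already exists")
    store.save(user_id, name)
    return user_id

# The unit test exercises register_user exactly as production would,
# just with the fast fake injected in place of the real database.
store = InMemoryUserStore()
register_user(store, 1, "ada")
assert store.get(1) == "ada"
```

This keeps the test fast without ever changing the code path that ships.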
It looks like we are starting to digress into a discussion about unit testing in general.
Let's save that for another time.
There are many processes for developing quality software. Some work better than others, and many are only appropriate for certain development environments. What works for continuous-deployment web development is neither appropriate nor permitted for aviation and defense development. You must always be cognizant of the requirements of the application to be developed and its industry, and then consider the processes needed to create high-quality software.
I have had great success in the places where I have applied TDD, in both commercial software development and development in the defense industry. However, I have also recognized many projects where TDD would not provide value, and for those I went with a different process to verify my software.
I feel the same way about software development processes as I do about software technologies and tools: select the best tool for the project. You can't always use a hammer, because some projects are delicate. Moreover, it's best not to try to use a screwdriver as a hammer, because it makes one look like an idiot.