January 27, 2005 - The Developer Testing Paradox

Most software development organizations have compelling reasons to improve their quality, reduce their costs, and accelerate their schedules. Time after time, and year after year, the majority of software projects are of lower quality than desired, cost more than budgeted, and are completed later than planned. Many projects can only be branded complete failures and have to be restarted from scratch.

The root cause of most software project failures, and of the poor general health of most software, is the lack of early-stage unit testing.

Every software project post-mortem I have attended – including those for successful or mostly successful projects – has reached the conclusion that earlier, more thorough, and more frequent testing would have made a huge difference. The bug databases of most software projects hold compelling evidence that an early dose of unit testing could have prevented the explosion of bugs that forces the team to decide which bugs must be fixed and which bugs can wait. And even when, through heroic testing efforts from QA and all-night debugging sessions by developers, the software finally achieves the enviable status of usable, the resulting product is not necessarily a healthy and robust system. Instead, it is a fragile artifact that people are reluctant to touch, because they fear breaking the delicate and mysterious threads of code that miraculously hold it together. It is no coincidence that customers often delay purchase of a 1.0 or other major release product until the first patch levels have been released – they’ve learned from bitter experience.

Given the alarming frequency and the magnitude of software project disasters and near-disasters, the motivation for investing in developer testing and developing a body of unit tests in parallel with the code is obvious and compelling. I have yet to encounter an experienced, smart, and honest software professional who is willing to argue against the practice of developer testing, or seriously discount its benefits on software quality and software development economics. However, today only a very small percentage of software organizations actually practice developer testing with any rigor or consistency. This discrepancy is the foundation of the developer testing paradox.

A paradox is a seemingly contradictory statement that may nonetheless be true (e.g., the paradox that standing is more tiring than walking). In our case, the paradox is that the practice of developer testing, which is so obviously right and so widely regarded as beneficial, and which could improve software quality and economics more than any other alternative, is still a rarity in software development organizations. I believe that the reason for this paradox is that starting and running a successful developer testing program is easier said than done.

Fortunately, in the past three years, I have learned a lot of valuable lessons from organizations that have managed to succeed at developer testing, and even more from those that have failed at it.

The paper is organized in two parts. In Part I, I present a compelling case for practicing developer testing. In Part II, I present The Seven Laws of Developer Testing – a set of principles and guidelines, learned the hard way, that must be followed if you want your developer testing program to succeed.

PART I

The Developer Testing Solution

Developer testing, also known as unit testing or programmer testing, is not a new concept. But since both the term and the practice have been abused and misused, it’s important to define with some precision what I mean when I use the term.

I define developer testing as:

A practice where software developers commit to unit testing their own code by creating and running thorough and automated unit tests during development.

In this context, a unit test is a test that focuses on isolated behavior. A well-written unit test does not require the whole system to be running.
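
To make this concrete, here is a minimal sketch – the class and the test are hypothetical, written in the JUnit 3.x style common on Java projects – of a unit test that exercises one behavior of one class, with no database, no network, and no running system required:

    import junit.framework.TestCase;

    // A hypothetical class under test.
    class PriceCalculator {
        double applyDiscount(double price, double discountPercent) {
            if (discountPercent < 0 || discountPercent > 100) {
                throw new IllegalArgumentException("discount must be 0-100");
            }
            return price * (1.0 - discountPercent / 100.0);
        }
    }

    public class PriceCalculatorTest extends TestCase {
        // Tests a single, isolated behavior; runs in milliseconds.
        public void testTenPercentDiscount() {
            PriceCalculator calc = new PriceCalculator();
            assertEquals(90.0, calc.applyDiscount(100.0, 10.0), 0.0001);
        }
    }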

As this definition implies, developer testing means more than contributing a few superficial unit tests as an afterthought to the development effort. The creation and execution of unit tests must be an integral and important part of development, performed by developers, as opposed to being handed off to QA. Furthermore, it should be performed in parallel with development, not postponed until after development is complete. A common mistake among newcomers to unit testing is to write mini functional tests. Functional tests are also important, but they cannot replace unit tests. When developers concentrate their efforts on unit tests, QA can focus on its real job – system and integration testing, load and stress testing, and independent verification – rather than having to find unit-level bugs that developers should have already caught and fixed.

A developer working in such an environment writes or modifies some code, writes tests to make sure that the code behaves as expected, then runs the newly written tests, as well as other relevant unit tests, to confirm that the new code does not cause regressions in previously working code. Going one step further, some developers practice test-driven development and write the tests before writing the code.
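
As a sketch of that cycle (again with hypothetical names, in JUnit 3.x style), a test-driven developer writes the test first, then just enough code to make it pass:

    import junit.framework.TestCase;

    // Step 1: write a test for behavior that does not exist yet; it fails.
    public class StringUtilsTest extends TestCase {
        public void testReverse() {
            assertEquals("cba", StringUtils.reverse("abc"));
            assertEquals("", StringUtils.reverse(""));  // corner case
        }
    }

    // Step 2: write just enough code to make the test pass, then re-run
    // this test plus all other relevant unit tests to catch regressions.
    class StringUtils {
        static String reverse(String s) {
            return new StringBuffer(s).reverse().toString();
        }
    }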

Figure 1: If an organization practices developer testing as described above, the body of unit tests grows at the same rate as the body of product code.

An implicit practice that goes along with developer testing is that developers should address any bugs discovered by the unit tests (or any other tests) as soon as possible and, ideally, before adding any new code. The objective is to keep the bug count consistently low, from the beginning and throughout the development cycle.

Figure 2: The known bug count should not increase over time; it should be kept as close to zero as possible.

It’s important to state that developer testing does not replace the need for integration or system testing, which is typically performed by QA. A good developer testing program, however, makes integration and system testing efforts much more effective and efficient. Delivering software to QA that is not riddled with unit-level bugs gives QA more time to discover and report integration and system-level bugs. More about this later.

Bugterial Infection – Why Developer Testing Is Critical

Developer testing is, by far, the best option for improving software development economics and overall software quality for two main reasons.

The first reason is already well known to most software development professionals: the cost of finding and fixing a bug increases exponentially as the software goes from the design and development phase to the integration and system testing phase, and ultimately to customers. Finding and fixing a bug discovered during development usually involves only one person (the developer) and takes a few minutes. By the time the bug slips through to QA, there are a few more people involved in reporting, reproducing, fixing, and verifying the bug; dealing with it usually takes a few hours of company time. If the bug slips past QA and into a released product, it can affect a large number of customers, involve multiple customer support calls, and require developer time to fix and retest the code, which causes the cost of resolving it to skyrocket. The cumulative effect and cost of these problems is huge. On a national level, a recent study by the US Department of Commerce puts the cost of inadequate software testing at nearly $60B/year.

Figure 3: The growing cost of fixing bugs. The longer you wait, the more you’ll pay.

The growing cost of an undiscovered bug over time should be reason enough to invest in developer testing to improve early detection, but there is a less known – and far more insidious – reason: software bugs are infectious.

The second reason for practicing developer testing is that, in some ways, bugs are like bacteria; if you let the bug count grow past a certain point, you will trigger a chain reaction and end up with a serious and hard-to-fight bugterial infection. Each undiscovered unit-level bug – even a trivial one – that survives and becomes part of the code during the development phase has the potential to cause very serious damage.

Left unchecked, a relatively small number of unit-level bugs can easily multiply and impair a software system so insidiously that bringing it back to a reasonable level of quality and usability may be impossible or, at best, horrendously painful and expensive. If you don’t pay attention to bugs early enough, you can easily end up with a system that is such a nightmare to maintain, evolve, and yes, even test, that the best option may be to throw it away and start over. I have heard of, personally witnessed, and even participated in such software horror stories and I am sure I am not alone.

Unfortunately, just as it took medicine a long time to recognize the role of bacteria in disease, it seems to be taking the software world a while to realize that unit-level bugs are often the primary cause of large-scale software disasters. A major objective of this white paper is to raise awareness of the bugterial infection problem.

From Bugs to Bugterial Infection

There are many mechanisms by which unchecked unit-level bugs can develop into full-blown bugterial infections. Here are a few of the major ones.

Reuse of buggy code: The most obvious mechanism for spreading bug infections is through multiple uses of defective code. A simple bug introduced in the code of a well-used mathematical library function, for example, could manifest itself as bugs in a salary calculation, which could affect payroll processing, which could then affect the overall financial reporting system, etc. More than once, I have seen a single root cause unit-level bug in a well-used module, which eventually caused dozens of hard-to-track-down bugs in other modules, and hundreds of customer support calls.
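
Here is a hypothetical sketch of this mechanism in Java: a one-line bug in a shared money-rounding utility silently corrupts every calculation built on top of it.

    // A shared utility with a subtle unit-level bug: the cast truncates
    // instead of rounding, so roundToCents(10.999) quietly loses a cent.
    class MoneyMath {
        static long roundToCents(double dollars) {
            return (long) (dollars * 100);  // BUG: should be Math.round(dollars * 100)
        }
    }

    // A distant caller inherits the bug: every payroll run is now slightly
    // wrong, and the error surfaces in financial reports, nowhere near the
    // line of code that caused it.
    class Payroll {
        static long monthlySalaryInCents(double annualSalary) {
            return MoneyMath.roundToCents(annualSalary / 12.0);
        }
    }

A single unit-level assertion on MoneyMath – for example, assertEquals(1100, MoneyMath.roundToCents(10.999)) – would have caught the bug at its source, before it had a chance to spread.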

Low expectations: An environment where developers are not expected to test for, and quickly fix, unit-level bugs creates a hospitable habitat for bugs of all types. And since bugs are a natural and inevitable by-product of programming, the bug population will naturally grow out of hand in the absence of any natural predators (i.e., unit tests).

Schedule compression due to unexpected buggy code: If unit-level bugs are not systematically found and fixed through developer testing, they often surface accidentally during later coding. This multiplies their impact, affects other developers, and slows down overall development until they get fixed. This steals valuable time from the schedule, which can cause rushed and careless coding after the bug is fixed and work can resume, ultimately resulting in even more bugs.

Compensating through workarounds: Sometimes, developers prefer to, or have to, come up with temporary workarounds for bugs rather than waiting for them to be fixed. Just as two wrongs don’t make a right, a bug and a workaround are not a good idea. In many cases the workarounds tend to be buggy, or bug-promoting, themselves.

Low-testability code: If you delay testing, you might discover that the system you have built is not easily testable, and low testability inevitably leads to less testing and more bugs.

Bug indigestion for QA: If the code delivered to QA for integration or system testing is riddled with unit-level bugs, then QA will get bug indigestion. As we have seen before, a single unit-level bug can infect the system and manifest itself in many other modules, forcing the QA organization to submit multiple bug reports – each of which takes valuable time to reproduce, file, verify, etc. A significant portion of the QA time that should be spent on integration and system-level bugs is wasted on unit-level bugs that could have been caught and fixed during development at a fraction of the cost. And while QA is busy dealing with the multiplicative effects of unit-level bugs that could have easily been squashed with developer testing, many critical integration and system-level bugs will go undetected.

These are just a few of the ways that bugs can multiply. To make matters worse, these mechanisms also interact with each other, causing even more bugs and eventually triggering a serious bugterial infection.

The Law of Bugterial Infection

The law of bugterial infection states that:

If the bug count is not kept in check during the development stage, the interaction and compounded effect of unresolved bugs will trigger a chain reaction and cause a bugterial infection. Once triggered, a bugterial infection will negatively impact the overall quality of the system, causing permanent and disproportionate increases in further development and testing costs.

In many cases, the effects of a bugterial infection will result in a system riddled with so many intrinsic and hard-to-address problems that the most effective course of action will be to scrap the system and start over.

Waiting until the software is ready for system and integration testing and leaving all, or most, of the testing responsibility to QA practically guarantees a bugterial infection. Without a way to actively discover and quickly fix bugs, they will go forth and multiply.

Figure 4: Once the bug count gets past a certain threshold, the Law of Bugterial Infection begins to take effect and the bug count starts to increase dramatically.

It’s important to stress the fact that a bugterial infection indicates more than just a dramatic increase in bug count. A bugterial infection indicates a state change for the software system; it means that software has crossed the threshold from being a healthy, solid, manageable, predictable, easy to evolve and modify system, to a system that is inherently unhealthy, fragile, and hard to modify.

Figure 5: The correlation between the health of a software system and the bug count. A quantitative increase in bug count leads to qualitative changes in the overall health of a system.

Bugterial infections are the primary cause of most software project failures and problems. The only way to prevent bugterial infections is to keep the bug count low throughout development, and the best way to accomplish that is by arming your body of code with the equivalent of an immune system.

Unit Tests – A Critical Part of the Immune System Against Bugterial Infections

Unit tests, written at the right time, providing thorough coverage, and executed frequently, are the best line of defense against bugterial infections.

Properly timed test creation, thorough tests, and frequent test execution are all necessary to effectively boost this immune system.

These requirements (timing, thoroughness, and frequency of execution) make it essential for developers to be involved in test creation, execution, and analysis. While it’s best and most practical to have an external QA organization develop, execute, and analyze integration and system tests, the primary creators, users, and analyzers of unit tests should be the developers themselves.

Developing and executing unit tests in parallel with the development effort is the only way to give the QA organization adequate time and a fighting chance when it has to address system-level bugs.

PART II

The Seven Laws of Developer Testing

The case for practicing developer testing is very compelling but, as I have already mentioned, starting and running a successful developer testing program is easier said than done. However, it can be done. In the past three years I have had the opportunity to observe and talk to dozens of software development organizations and individuals about their experiences with developer testing. On top of that, I have personally started and managed developer testing programs during my tenure at Google as Director of Engineering and, of course, at Agitar as VP of Engineering. From all these experiences, I have distilled seven basic laws which must be followed in order to have a successful, efficient, and effective developer testing program:

  1. The Law of Management Commitment
  2. The Law of Team Buy-In
  3. The Law of Metrics
  4. The Law of Targets
  5. The Law of Training and Coaching
  6. The Law of Automation
  7. The Law of Failure

Let’s go through each of these laws in detail.

1. The Law of Management Commitment

A successful developer testing practice requires initial and ongoing management support and commitment.

The practice of developer testing requires an initial dose of courage, conviction, and investment to get started, and an ongoing dose of the same to keep it going successfully. I have seen developer testing programs succeed without the team’s management as the driving force, but I haven’t seen any of them succeed without at least a strong dose of management support. If management does not believe in the benefits of developer testing, they are not going to be very supportive of engineers “wasting” time writing tests when they should be coding more features.

Management commitment to developer testing should be evident in concrete ways: time for writing, running, and maintaining unit tests built into the project schedule, budget allocated for training and tools, and a willingness to accept a slower apparent pace of feature delivery while the practice takes hold.

Management commitment is essential but not sufficient. The rest of the team must buy in, which leads us to the second law.

2. The Law of Team Buy-In

A successful developer testing practice requires full team buy-in.

Management commitment is necessary, but not sufficient. Even if the team’s management is 100% sold on the idea of developer testing, and willing to commit and make the necessary investment, training, etc., the program will not succeed if the rest of the team, or at least most of the team (and especially the thought leaders), is not on-board.

This law is obvious and does not require much explanation, but one of the most common challenges is having the team’s management and leaders committed to developer testing while many developers are skeptical or downright hostile to the idea. One of my favorite quotes: “As developers, we successfully avoided having to do testing for decades. I don’t see why we should start now.” A major culprit for this lack of motivation is that, historically, poor code quality has not affected developers as much as managers: “I have never seen developers fired for poor quality software or a late delivery, but I have seen their managers fired for those reasons.”

This is one situation where the management courage, conviction, and commitment from the previous law come in. I don’t believe a developer testing program can be called truly successful if some developers are exempted just because they don’t like the idea of having to write unit tests.

The first step in dealing with developers who are skeptical of, or hostile to, the preposterous idea of developer testing is to ask them to give it a try for a couple of weeks, or on a specific small or medium development project. If a developer accepts the task and acts upon it in good faith and with an open mind, they will most likely see the benefit and, perhaps still with some reluctance, agree that it’s the right thing to do going forward. When I started the developer testing programs at my current and previous companies, I was certain that a couple of developers on each team would stubbornly oppose the practice and find every possible excuse and rationalization to avoid the testing in one way or another. Since in each of these cases these developers were top contributors and valuable team members, I was not looking forward to the prospect of losing them over this. Much to my surprise, after some initial grumbling and resistance, each of those developers became a top contributor in terms of unit test quality and quantity. As it turns out, the qualities and skills that make a person a great software developer can, with some encouragement, make them great at developer testing.

3. The Law of Metrics

A successful developer testing practice requires a carefully chosen set of metrics.

If you cannot measure, you cannot manage.
If you cannot manage, you cannot improve.

As we have seen, a developer testing program requires initial as well as ongoing investment. Since developer time is a very valuable resource, we need to direct and manage the effort to ensure that the investment is focused on the right objectives and is achieving the desired results. The best way to achieve and maintain the right focus is to select and apply an appropriate set of metrics.

Identifying the right metrics for developer testing can be tricky. No metric is perfect; all have weaknesses, and no major decision should be based purely on metrics data without taking into account common sense and circumstances that might be hard to quantify. Having said that, I have identified a set of developer testing metrics that are relatively simple, ideal for the initial phase of a developer testing practice, and proven very effective for us and other teams that have adopted them. You can use these simple metrics to get going, then add to or refine them over time to meet your specific needs.

The three metrics I recommend for getting started are:

  1. Total test points (one test point for each test assertion)
  2. Percentage of classes with test points
  3. Percentage of code covered by unit tests
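
For illustration, here is what test points look like in practice – a hypothetical JUnit 3.x example in which a single test method scores three test points, one per assertion:

    import junit.framework.TestCase;

    // Hypothetical class under test.
    class Account {
        private int balance;
        Account(int opening) { balance = opening; }
        void deposit(int amount) { balance += amount; }
        int getBalance() { return balance; }
        boolean isOverdrawn() { return balance < 0; }
    }

    public class AccountTest extends TestCase {
        // Three assertions = three test points toward the quarterly target.
        public void testDepositIncreasesBalance() {
            Account account = new Account(100);
            account.deposit(50);
            assertEquals(150, account.getBalance());
            assertFalse(account.isOverdrawn());
            assertTrue(account.getBalance() > 0);
        }
    }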

Now that you have some metrics, you are ready to set some targets for the team, but setting the right developer testing targets is important enough and tricky enough that it deserves its own law.

4. The Law of Targets

A successful developer testing practice has to have long-term objectives as well as frequently updated short-term targets.

The long-term objective of a developer testing program should be to make the creation and execution of unit tests by developers a routine and implicit part of development. Test creation should go hand-in-hand with code creation (code a little, test a little – or the other way around), and test execution should happen as frequently as compilation (ideally, each compile should be automatically followed by a run of the unit tests). In an organization where the practice of developer testing has fully evolved and matured, the act of writing and executing tests should be so commonplace that questions such as: “Did you write and run the unit tests for this code?” are as unnecessary as asking: “Did you compile this code?” The existence of tests to accompany each unit of code should be taken for granted, and the exception should be code that does not have unit tests rather than the other way around. This is Developer Testing Utopia, and it’s a great long-term objective. Unfortunately, you cannot reach this state overnight; it may take a year or more, depending on the history and legacies of your team and project, to even get close to it. Therefore, having a set of objective and achievable intermediate targets is critical to ensure consistent progress and provide feedback and encouragement.

When it comes to setting targets for a fledgling developer testing practice, the biggest danger is being too ambitious too soon. I recommend using the metrics I just introduced and starting slowly, to give the team time to learn the ropes of developer testing and, more importantly, gain appreciation for their benefits.

Below is a table with some reasonable sample targets for a medium-sized team/project.

Table 1: Developer Testing Targets

                                       Q1      Q2      Q3      Q4
  Total test points                 1,000   4,000  10,000  20,000
  % of classes w/ test points          5%     20%     50%     90%
  % of code covered by unit tests      5%     20%     40%     80%

Notice the easy start in the first quarter and the significant increase in each of the metrics each successive quarter. These targets take into account that: 1) scoring test points and getting good code coverage gets considerably easier/faster with experience, 2) the last 10-20% of code coverage is usually harder to achieve, and 3) there may be some legitimate reasons for not testing 100% of your code. You should change these targets based on your situation and not hesitate to change them again as you gain experience.

5. The Law of Training and Coaching

A successful developer testing practice requires initial training and ongoing coaching.

It’s unfortunate, but today most computer science education programs don’t include software testing in the curriculum. A few developers, driven by internal motivation or naturally inclined toward testing, can be very creative and effective at thinking of, and writing, unit tests; but the majority needs initial direction and ongoing coaching until they achieve a basic understanding of the core principles and good proficiency with the basic skills.

The good news is that most developers are quite smart and quick learners. If you have the budget, it’s hard to beat hiring a developer testing coach or trainer for a week or so; but you can also take the do-it-yourself approach: use one of the many books now available on the subject of unit testing and test-driven development (coupled with the many resources available online) and groom one or two in-house gurus who can then train the rest of the team.

In either case, it’s a good idea to eventually develop a couple of in-house experts on developer testing and the associated tools and technology, because it’s unlikely that the initial training will cover all bases, and it’s important to make sure that there is someone who can answer questions and provide high-level oversight as the practice grows and evolves. Fortunately, my experience is that in a group of 10 or 20 developers there are always a couple of people who are (or will become) test infected and will be more than happy to serve as the in-house developer testing gurus and evangelists.

6. The Law of Automation

A successful developer testing practice must take advantage of automation in test creation, execution, and reporting.

Many tasks associated with the developer testing cycle are combinatorial and repetitive in nature. In order to be considered adequate, for example, unit tests should cover a wide range of inputs and input conditions to ensure that all the possible code behaviors are exercised, but creating all the necessary inputs and accompanying test code by hand can be very tedious and inefficient. As a result, most manually written developer tests fall short in terms of coverage, tend to concentrate on a few positive test cases, and ignore most corner and exception/error cases. Our experience (based on analyzing hundreds of manually written unit tests from dozens of Java-based projects) is that the ratio of test code to code under test required to achieve at least 90% code coverage is between 2/1 and 4/1. This means that to thoroughly test a 100-line Java class requires 200 to 400 lines of test code. This can be done; but it’s very inefficient, most developers don’t like the idea of spending so much time writing test code, and, as a result, most tests fail to achieve the desired/required coverage.
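
One way to reduce the tedium when writing such tests by hand is a table-driven test that packs many input cases – including boundary and extreme values – into a few lines. This is only a sketch with hypothetical names, and it is exactly the kind of combinatorial work that automated test-generation tools are designed to take over:

    import junit.framework.TestCase;

    // Hypothetical function under test.
    class MathUtils {
        static int clamp(int value, int lo, int hi) {
            return Math.max(lo, Math.min(hi, value));
        }
    }

    public class ClampTest extends TestCase {
        public void testClampAcrossInputTable() {
            // Each row is { value, lo, hi, expected }: in-range, boundary,
            // degenerate, and extreme cases in one compact table.
            int[][] cases = {
                {  5, 0, 10,  5 },                // in range
                { -1, 0, 10,  0 },                // below range
                { 11, 0, 10, 10 },                // above range
                {  0, 0, 10,  0 },                // lower boundary
                { 10, 0, 10, 10 },                // upper boundary
                {  7, 7,  7,  7 },                // degenerate range
                { Integer.MIN_VALUE, 0, 10,  0 }, // extreme input
                { Integer.MAX_VALUE, 0, 10, 10 }, // extreme input
            };
            for (int i = 0; i < cases.length; i++) {
                int[] c = cases[i];
                assertEquals("case " + i, c[3], MathUtils.clamp(c[0], c[1], c[2]));
            }
        }
    }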

Similarly, test execution, reporting, and analysis must be as automated as possible. If the test execution is not automated (e.g., by automatically running the tests after each build, or at the very least nightly) the tests will not be run as frequently as necessary, greatly reducing their benefit. By the same token, if the results of the tests are not filtered and reported to the right people as soon as possible, so they can take action as soon as possible, they will lose a lot of their value.
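
As a minimal illustration of automated execution (hypothetical class names, JUnit 3.x style), an aggregate suite gives the build script a single entry point to run after every build, or at the very least nightly:

    import junit.framework.Test;
    import junit.framework.TestSuite;

    // Aggregates the team's unit tests so the build can run them in one shot,
    // e.g. "java junit.textui.TestRunner AllTests" as the last build step.
    public class AllTests {
        public static Test suite() {
            TestSuite suite = new TestSuite("All unit tests");
            suite.addTestSuite(PriceCalculatorTest.class);  // hypothetical
            suite.addTestSuite(AccountTest.class);          // hypothetical
            suite.addTestSuite(ClampTest.class);            // hypothetical
            return suite;
        }

        public static void main(String[] args) {
            junit.textui.TestRunner.run(suite());
        }
    }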

Fortunately, today there is a lot of technology available to automate all the major testing tasks.

One of the key lessons I have learned about software development in general, and developer testing in particular, is that the flexibility of software makes it easy to design and offer a huge variety of options, configurations, behaviors, etc. When it comes to managing and testing all the possible combinations you must take advantage of automation. Computers created the problem, and you need computer assistance to address it.

7. The Law of Failure

A successful developer testing program must take into account that good tests fail.

This last law is a bit strange, but I had to include it because a successful developer testing program will result in a lot of test cases. When combined with frequent code changes/additions, as well as frequent builds and test executions, you will be faced with at least a few failures on most test runs – especially if the project is in the middle of major new development or refactoring. I have noticed that this level of testing thoroughness, and the associated level of test failures, is new and disconcerting to many people and organizations, and that it requires some adjustments.

Test failures are a very good thing. They are evidence that the tests are thorough and are doing their job of detecting bugs and changes in the code behavior which, left unaddressed, could lead to further and more serious bugs. In terms of perception and day-to-day operations, however, test failures can be seen as a major challenge in a fledgling developer testing program. They are a constant reminder that the code has bugs, or that the code and the tests are out of sync in one way or another. In most cases, fixing the bugs, or resynchronizing the code and the tests, should take priority over adding new functionality (because you should not be building on a shaky foundation), but this can be hard to do – especially at the beginning.

Taking care of failing tests is hard to do at first because it will seem to slow you down, and it will make you wonder if the extra work is really worthwhile. Instead of adding, say, 10 new features a week, you are now adding just 7 and spending the rest of the time tracking down those pesky unit-level bugs and keeping the tests up to date. This is another time where commitment to developer testing and confidence in its benefits is required. The apparent reduction in the velocity of feature implementation is caused by a different interpretation of what it takes to be done with that feature.

If you have only implemented the code for a feature or a change, you are not really done. In a successful developer testing program, a feature or change is considered done only if all of the following requirements are met:

  1. The code for the feature or change is complete.
  2. Unit tests that thoroughly exercise the new or changed code have been written.
  3. All unit tests – new and pre-existing – pass.

At the beginning of a project, adding features without taking care of writing or maintaining tests may seem faster, but the lack of testing will inevitably come back to haunt you and slow your progress down (as I have described in Part I). It’s easy to move fast at the beginning of a project, but as the body of code grows, it becomes harder and harder to add functionality and make changes without causing other components to break. This is where your investment in developer testing starts to pay off. Any sacrifice in initial project velocity will be repaid many times over as the body of code grows. The tests will repay you not only in terms of increased software quality and stability, but in your ability to make changes quickly and with confidence.

Conclusion

The software industry is slowly waking up, and warming up, to the idea of developer testing as a way to improve software quality and software development economics. But there is a lack of urgency, and a serious underestimation of how deep an impact developer testing can have on a project. The attitude of most software professionals is that developer testing is a promising and interesting approach worth investigating – eventually.

The arguments I have made in the first part of this paper hopefully convinced you that the practice of developer testing should not be an optional add-on, but a core and essential part of every professional software development organization worthy of its name. To create healthy software, you need unit testing. To keep your software healthy, you need the immune system that only thorough unit tests can provide. But as we have also seen, this is easier said than done. Many fledgling developer testing efforts that started with great enthusiasm have ended up with less than stellar results, partial adoption, or total abandonment. This is why regular, ongoing developer testing practices are the exception rather than the rule, and why there is a developer testing paradox in the first place. Fortunately, we have learned valuable lessons from the hard-gained experience (including both successes and failures) of the many software development organizations that have gone down the developer testing path, and the Seven Laws of Developer Testing in Part II of this paper summarize the most important lessons and address the most common reasons why developer testing practices fail.

The idea and practice of developer testing has had an incredibly positive effect on my team’s ability to deliver high-quality software within an aggressive schedule. There is nothing like the feeling of confidence and protection that we get by having tens of thousands of test points spread across numerous tests that are run several times a day. System and integration tests are still necessary and useful, but they can’t match the fine granularity, resolution, and assurance of having specialized tests for each unit of code. I can’t even imagine going back to the days when the only way to know the status of our code was to wait for QA to run a set of integration/system tests that might take several days or weeks.

I sincerely hope that the information, ideas, and experiences I shared with you in this paper give you the motivation, knowledge, and resolve to make developer testing an imperative in your software organization.


Posted by Alberto Savoia at January 27, 2005 05:46 PM




Comments

Alberto,
Great Article!
With respect to achieving better quality software, I wonder if it is valuable to think about the above suggestions the same way you would advise someone looking to lose some weight: eat less & exercise more... Seems a very simple formula... but as we know, there are thousands of diets and gimmicks to 'help' people achieve the goal of losing weight.... and a great % never achieve their goals because they simply do not execute the formula.... The psychology of creating that discipline is a whole other topic.... That aside... the most important thing to do in any change is to baseline where you are today with respect to your goal... For the diet analogy it would be getting on the scale.... I was wondering if you have such a scale to help people understand where they are in terms of developer testing... and also to help them track progress over time....
/Cheers Andrew
PS:
I was also wondering if you had any thoughts on applying Agitar to performance testing?

Posted by: Andrew Sliwkowski on February 3, 2005 06:06 AM

Andrew, funny that you mention the weight/diet analogy, since we use it a lot in-house. If you go and look at our dashboard

http://www.agitar.com/products/000023.html

you'll see that we use a baseline as well as targets to help people make gradual progress. In-house, we typically update the targets for every release cycle. Our code coverage goals, for example, have been going up steadily from 50%, to 60%, ..., all the way to 90%+ for the dashboard itself. Same thing with test points (you get a test point for each test assertion): the original objective was 10,000 test points for Agitator, and we are now close to 30,000. I find the Dashboard targets and graphs indispensable and look at them at least daily to see if we are on track.

Regarding applying Agitator to performance testing, we already do internally - sort of. We track and graph the time it takes to go through our entire suite of Agitator tests (~30,000) and if we notice dramatic changes from one build to the next (or creeping performance deterioration) we take steps. We thought of having finer granularity performance analysis in Agitator but we haven't done that yet.

Thanks a bunch for your feedback and questions.

Alberto

Posted by: Alberto Savoia on February 11, 2005 01:48 PM

Could you create named anchors for the sections of this document currently headed with h4's? E.g. "From Bugs To Bugterial Infection" and "The Seven Laws Of Developer Testing"? Your topic is large, and I have cases where I'd like to cite sections of your argument without telling folks "go to [URL] and scroll down to [heading]."

Posted by: Tom Roche [TypeKey Profile Page] on July 7, 2005 06:26 PM

Great article! I wrote a post on my blog on the Cost Benefits of Unit Testing. See http://dlsthoughts.blogspot.com/2005/08/cost-benefits-of-unit-testing.html

Posted by: David Le Strat on August 8, 2005 07:46 AM

I agree with what was written here and in the JUnit group regarding the following:
Having no, or just a few, automated test cases at hand to ensure the quality of the product is the worst case.
For me, it doesn't really matter whether one writes tests first or after the code. What's important is the will to write 'em and to execute them.
In commercial software projects, writing "unproductive" test cases often is not officially supported (please understand what I mean with the quotation marks). So it's best to do whatever helps reach the target, be it philosophy number one or two (and I claim this despite the opinion of Robert Martin, who sees test-first as superior to code-first).

Posted by: Klaus Meffert on June 3, 2006 01:28 AM
