User:Legaia/Sandbox

From Wikipedia, the free encyclopedia

Why TDD

The basic idea behind good test-driven development is to make code work before making it elegant. This isn't to say we will never have time to write good code, but we focus our energy on solving the problem correctly first. Isn't that the whole goal at the end of the day? If it doesn't work, it doesn't matter how elegant it is! With that in mind, the cycle is: (1) write a test, (2) make it run, then (3) make it right.
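As a minimal sketch of that cycle (Dog and bark() here are hypothetical stand-ins, written as plain Java so the example is self-contained rather than requiring JUnit):

```java
// Step 1: write the test first (it fails to compile because Dog doesn't exist yet).
// Step 2: write the simplest Dog that makes it pass.
// Step 3: refactor, keeping the test green.

class Dog {
    private final String name;

    Dog(String name) { this.name = name; }

    // Simplest implementation that makes the test pass (step 2).
    boolean bark() { return true; }

    String getName() { return name; }
}

public class BarkTest {
    // Stands in for a JUnit @Test method; plain Java keeps the sketch runnable on its own.
    static void barkShouldSucceed() {
        Dog feefee = new Dog("FeeFee");
        if (!feefee.bark()) throw new AssertionError("bark() should return true");
        if (!"FeeFee".equals(feefee.getName())) throw new AssertionError("name mismatch");
    }

    public static void main(String[] args) {
        barkShouldSucceed();
        System.out.println("All tests passed");
    }
}
```

The point is the ordering, not the code: the test existed (and failed) before Dog did, so every line of Dog is justified by a test.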

If you remain unconvinced, there are other reasons to test first, and to test at all (for the cowboys out in the crowd). When we write tests as we go, we prove our code works as we go. The tests represent assumptions we've made about valid inputs and outputs; while they succeed, they are proof that those assumptions still hold. As we add new features or refactor existing code, the old tests automatically re-check our past assumptions against the current feature set. And handily, when you create a bug, especially when you accidentally break existing code, the tests tell you right away. If you run your tests often enough, you'll know immediately which lines introduced the bug, because you just wrote them. This is far more effective than writing tests two weeks later and having to rehash "Now what was I thinking here?"

Testing first also helps reduce "fluff" architecture: code for code's sake. If you only write code to pass a test or to remove duplication, you'll find yourself adding architecture only as it's justified. Designs grow organically as they're needed, rather than from intuition about best practices. My favorite part of all: whenever our tests are passing, we know we have produced something. It may not yet be 100% of what the customer or business asked for, but it's a set of functionality that actually works. With this in mind, it's possible to release stable functionality on a very short time scale, potentially daily.

Assumptions that make TDD work

Agile, like any other methodology, is not a golden ticket to success in and of itself. It takes a team that's educated in how to make it work and supportive in working through problems. This discussion won't cover all the details of Agile, but these are the basic assumptions you need to support a healthy TDD environment.

Communication between developers is cheap

Don't expect to become effective with TDD overnight. Good communication is critical, especially between developers experienced in writing tests and TDD and those who are newer to it. Questions come up, such as: how do you write a test for an abstract class? How do you fake a database connection? How do you decouple two classes?
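One common answer to the database question is to decouple the class under test from the real database behind an interface, then substitute a hand-rolled fake in the test. A sketch with made-up names (DogRepository, DogService), in plain Java so it stands alone:

```java
// The class under test depends on an interface, not on a concrete database.
interface DogRepository {
    String findBreed(String name);
}

// Production code would implement DogRepository against a real database;
// the test supplies this in-memory fake instead.
class FakeDogRepository implements DogRepository {
    public String findBreed(String name) {
        return "FeeFee".equals(name) ? "Poodle" : null;
    }
}

class DogService {
    private final DogRepository repository;

    DogService(DogRepository repository) { this.repository = repository; }

    boolean isPoodle(String name) {
        return "Poodle".equals(repository.findBreed(name));
    }
}

public class DogServiceTest {
    public static void main(String[] args) {
        // Inject the fake; no database, no network, no setup scripts.
        DogService service = new DogService(new FakeDogRepository());
        if (!service.isPoodle("FeeFee")) throw new AssertionError("expected a poodle");
        if (service.isPoodle("Rex")) throw new AssertionError("Rex is unknown");
        System.out.println("All tests passed");
    }
}
```

The same interface-plus-fake move answers the decoupling question in general: any collaborator hidden behind an interface can be swapped for a test double.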

Communication with the customer is cheap

Developers document assumptions about the requirements as they go. When they are blocked by a question and the customer isn't available to unblock them, the developer either has to guess the right answer or stop work on that feature. The implications of having easy communication channels to the customer are obvious. No one will ever write a requirements document so perfect that questions never come up. Even if they could, how much money would it cost to write such a document, and what would be the ROI, versus the time it takes to just ask some questions?

Refactoring is cheap

When you have regression tests, refactoring is cheap. You can confidently make a small change, even one with broad implications, because you trust that your tests rigorously exercise your code. TDD also focuses refactoring efforts primarily on removing duplication, which is a simple task.

Testing is cheap

Automated tests are very fast compared to manual tests. Well-written unit tests can run through hundreds or even thousands of scenarios in seconds or minutes, compared to the hours manual testing would take. Essentially, you can run your tests as often as you like, because the overhead is so close to zero. Plugins like Infinitest make this even easier by automatically executing your unit tests as you write code, much like compilation as you type.

Requirements as Tests

Anti-Patterns

No Tests

Anti-pattern
@Test 
public void bark() {
  // TODO
}
Solution

Use TDD or code coverage to identify and generate reports for untested code.

Always fails

Anti-pattern
@Test
public void bark() {
  fail("Bark test has not been implemented");
}
Solution

Use TDD or code coverage to identify and generate reports for untested code.

JUnit 3 confusion (forgot the annotation)

Anti-pattern
public class TestDog {

  public void testBark() {
    ...
  }
}
AssertionFailedError: No tests found in TestDog
Solution
public class TestDog {
  @Test
  public void testBark() {
    ...
  }
}

Test via debugger

Anti-pattern
@Test
public void bark() {
  Dog feefee = new Poodle("FeeFee");
  feefee.bark(); // Use a debugger
}
Solution
@Test
public void bark() {
  Dog feefee = new Poodle("FeeFee");
  Assert.assertTrue(feefee.bark()); // Check bark was successful
}

Test via logging

Anti-pattern
@Test 
public void bark() {
  try {     
    Dog feefee = new Poodle("FeeFee");
    log.debug(feefee);
    log.debug(feefee.bark());
  } catch (DogException e) {
    log.error("DogException has occurred " + e.getMessage()); 
  }
}
Solution
@Test 
public void bark() {    
  Dog feefee = new Poodle("FeeFee");
  Assert.assertTrue(feefee.bark());
}

Multiple asserts

Anti-pattern
@Test 
public void bark() {
  Dog feefee = new Poodle("FeeFee"); 
  Assert.assertNotNull(feefee.getName());
  Assert.assertEquals("Poodle", feefee.getBreed());
  Assert.assertNotNull(feefee.bark());
}
Tests should always have an assert/fail statement (PMD).
Solution
public class DogTest {
  private Dog feefee;

  @Before
  public void initialize() {
    feefee = new Poodle("FeeFee");
  }

  @Test 
  public void name() {
    Assert.assertNotNull(feefee.getName());
  }

  @Test 
  public void breed() {
    Assert.assertEquals("Poodle", feefee.getBreed());
  }

  @Test 
  public void bark() {
    Assert.assertNotNull(feefee.bark());
  }
}


JUnit is designed to work best with a number of small tests. It executes each test within a separate instance of the test class. It reports failure on each test. Shared setup code is most natural when sharing between tests. This is a design decision that permeates JUnit, and when you decide to report multiple failures per test, you begin to fight against JUnit. This is not recommended. [1]
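That instance-per-test design can be sketched with plain reflection. This is an illustration of the idea, not JUnit's actual implementation: each test method runs against a fresh instance, so fixture fields never leak between tests.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// A toy test class: the fixture field is initialized per instance.
class CounterTest {
    List<Integer> fixture = new ArrayList<>();

    public void firstTest() {
        fixture.add(1);
        if (fixture.size() != 1) throw new AssertionError("fixture leaked");
    }

    public void secondTest() {
        fixture.add(2);
        // Still 1, because the runner below created a brand-new instance.
        if (fixture.size() != 1) throw new AssertionError("fixture leaked");
    }
}

public class MiniRunner {
    public static void main(String[] args) throws Exception {
        for (Method m : CounterTest.class.getDeclaredMethods()) {
            if (!m.getName().endsWith("Test")) continue;
            // One fresh instance per test method, as JUnit does.
            Object freshInstance = CounterTest.class.getDeclaredConstructor().newInstance();
            m.invoke(freshInstance);
        }
        System.out.println("All tests passed");
    }
}
```

This is why the @Before/@Test split in the solution above works: initialize() runs against a new, untouched instance before every single test.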

Redundant assertion

@Test
public void puppy() {
  Assert.assertEquals(true, new Poodle("FeeFee").isPuppy());
}


UnnecessaryBooleanAssertion rule (PMD)

Brittle tests

@Test
public void age() {
  Dog feefee = new Poodle("FeeFee");
  Assert.assertTrue(feefee.getAge() == 12);
}

Superficial coverage (edge cases)

@Test
public void perimeter() {
  Assert.assertTrue(square.getPerimeter() > 0);
}


Code coverage tools like EclEmma for Eclipse and Cobertura for Maven can help you identify areas that need additional tests.

Overly complex tests

public class RecordTest extends TestCase {
  public void testRecordContainsCorrectCustomerData() {
    // setup
    String expectedName = "Estragon";
    int expectedId = 1001;
    String [] expectedItemNames = {"A man", "A plan", "A canal", "Suez"};
    Customer customer = new Customer(expectedId, expectedName, 
    expectedItemNames);
    // execute
    BillingCenter.processCustomer(customer);
    // assert results
    File file = new File("customer.rec");
    assertTrue(file.exists());
    FileInputStream fis = new FileInputStream(file);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte [] buffer = new byte[16];
    int numRead;
    while ((numRead = fis.read(buffer)) >= 0) {
      baos.write(buffer, 0, numRead);
    }
    byte [] record = baos.toByteArray();
    assertEquals(128, record.length); // exactly one record
    String actualName = new String(record, 0, 15).trim();
    assertEquals(expectedName, actualName);
    int [] temp = new int[4];
    temp[0] = record[15];
    temp[1] = record[16];
    temp[2] = record[17];
    temp[3] = record[18];
    int actualId = (temp[0] << 24) & (temp[1] << 16) & 
    (temp[2] << 8) & temp[3]; 
    assertEquals(expectedId, actualId);
    int itemFieldLength = 16;
    int itemFieldOffset = 19; 
    for(int i = 0; i < 4; ++i) {
      String actualItemName = new String(record, 
        itemFieldOffset + itemFieldLength * i, itemFieldLength);
      assertEquals(expectedItemNames[i], actualItemName.trim());
    }
  }
}

Integration tests as unit tests

If you have external dependencies, you have created an integration test, not a unit test. The following are some examples of such dependencies:

  • Specific date or time
  • Third-party library jar
  • File
  • Database
  • Network connection
  • Web container
  • Application container
  • Randomness
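As one sketch, the date/time dependency can be removed by injecting java.time.Clock rather than calling Instant.now() directly (AgeCalculator is a made-up class):

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.temporal.ChronoUnit;

// Production code passes Clock.systemUTC(); tests pass a fixed clock.
class AgeCalculator {
    private final Clock clock;

    AgeCalculator(Clock clock) { this.clock = clock; }

    long yearsSince(Instant birth) {
        return ChronoUnit.YEARS.between(
            birth.atZone(ZoneOffset.UTC).toLocalDate(),
            Instant.now(clock).atZone(ZoneOffset.UTC).toLocalDate());
    }
}

public class AgeCalculatorTest {
    public static void main(String[] args) {
        // A fixed clock makes the test deterministic: "now" never drifts.
        Clock fixed = Clock.fixed(Instant.parse("2010-09-01T00:00:00Z"), ZoneOffset.UTC);
        AgeCalculator calc = new AgeCalculator(fixed);
        long years = calc.yearsSince(Instant.parse("1998-09-01T00:00:00Z"));
        if (years != 12) throw new AssertionError("expected 12, got " + years);
        System.out.println("All tests passed");
    }
}
```

The same injection move works for randomness (pass in a seeded Random) and for files, databases, and network connections (hide them behind an interface and fake it).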

Masking failures

@Test
public void calculate() {
  try {
    deepThought.calculate();
    assertEquals("Calculation wrong", 42, deepThought.getResult());
  } catch(CalculationException ex) {
    Log.error("Calculation caused exception", ex);
  }
}

Masking stack traces

@Test
public void calculate() {
  try {
    deepThought.calculate();
    assertEquals("Calculation wrong", 42, deepThought.getResult());
  } catch(CalculationException ex) {
    fail("Calculation caused exception");
  }
}

Empty catch blocks

@Test
public void indexOutOfBoundsException() {
  try {
    ArrayList emptyList = new ArrayList();
    Object o = emptyList.get(0);
    fail("Exception was not thrown");
  } catch(IndexOutOfBoundsException ex) {
    // Success!
  }
}
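The try/fail/catch idiom shown above works, but it is easy to get wrong: drop the fail() and an empty catch silently passes no matter what the code does. JUnit 4 offers @Test(expected = IndexOutOfBoundsException.class) for exactly this case; the same idea can be sketched in plain Java with a small (hypothetical) assertThrows-style helper:

```java
import java.util.ArrayList;

public class ExpectedExceptionSketch {
    // Fails if the body throws the wrong exception, or nothing at all.
    static <T extends Throwable> T assertThrows(Class<T> expected, Runnable body) {
        try {
            body.run();
        } catch (Throwable t) {
            if (expected.isInstance(t)) return expected.cast(t);
            throw new AssertionError("expected " + expected.getSimpleName()
                    + " but got " + t.getClass().getSimpleName(), t);
        }
        throw new AssertionError("expected " + expected.getSimpleName()
                + " but nothing was thrown");
    }

    public static void main(String[] args) {
        assertThrows(IndexOutOfBoundsException.class,
                () -> new ArrayList<Object>().get(0));
        System.out.println("All tests passed");
    }
}
```

Centralizing the check in one helper means the "forgot the fail()" mistake can only be made once, not once per test.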

FAQ

Do you have to write a test for everything?
No, just code that might reasonably break. Especially if you get stuck in a test-after situation, such as adding tests for legacy code, look for the highest ROI. Try to focus your efforts on the most complex and the most critical code.
How simple is "too simple to break?"
If it can't break on its own, it's too simple to break.
Why test before and not just test after?
You can find a more extensive answer above. The main points are: (a) less over-designed code, (b) more stable builds, (c) constant regression testing, (d) immediate feedback when bugs are introduced, and (e) features developed iteratively on a short time scale.
I have a deadline. How do I find time to write tests?
This may be a discussion point where you need support from your management chain. However, sticking to the facts: using TDD saves money over the life of the project, even on the first project where you use it. The most expensive part of software development is maintenance, and good testing and good design are our best weapons against maintenance cost. Another hard fact to keep in mind is that bugs become dramatically more expensive to fix the later they are found in the SDLC. Bottom line: TDD reduces the total cost of ownership.
How do I integrate JUnit?
Automated build tools such as Ant and Maven have JUnit support. Eclipse and IntelliJ have native JUnit support, as well as plugins such as Infinitest to continuously run your tests. There are additional reporting tools such as EMMA and Cobertura that do line-by-line analysis to help identify coverage and code complexity. As mentioned above, PMD has rule sets available to help you avoid common missteps like the anti-patterns listed above.

Bibliography

  1. "About GCN". Global Computer Networks. 2005. Retrieved 2010-09-01.
  2. Beck, Kent (2003). Test-Driven Development: By Example. Addison-Wesley Signature Series. ISBN 0321146530.
  3. Big Oak SEO. "Software Maintenance Implications on Cost and Schedule". Retrieved 2010-09-01.
  4. Clark, Mike (2006-02-26). "JUnit FAQ". Retrieved 2010-08-31.
  5. Helper, Dror (2009-03). "The Cost of Test Driven Development". http://blog.typemock.com/2009/03/cost-of-test-driven-development.html. Retrieved 2010-09-01.
  6. Garrett, Alex (2005-07-15). "JUnit antipatterns: How to resolve". Retrieved 2010-08-31.
  7. GreatZero (2008-06-28). "JUnit 5". Retrieved 2010-08-31.
  8. Koskinen, Jussi (2003-09-12). "Software Maintenance Costs". Retrieved 2010-09-01.
  9. Kruger, Jon (2008-11-20). "The relative cost of fixing defects". Retrieved 2010-09-01.
  10. Meszaros, Gerard (2008-07-24). "XUnit Patterns: Test Smells". Retrieved 2010-08-31.
  11. "PMD: JUnit Rules". InfoEther. 2002. Retrieved 2010-08-31.
  12. Schmetzer, Joe (2009-02-06). "JUnit Anti-Patterns". Retrieved 2010-08-31.
  13. "Test Infected: Programmers Love Writing Tests". Retrieved 2010-08-31.
  14. Turk, Daniel (2003-05-29). "Assumptions Underlying Agile Software Development Processes" (PDF). Submitted to Journal of Database Management. Retrieved 2010-08-31.