Testing Your Programs
CS 300 Lecture 5-2
There are three basic methods of verifying and validating code:
- Inspection
- Formal methods
- Testing
We've already done inspection.
Today we will briefly cover formal methods and begin covering testing.
Formal methods involve
- Writing down a mathematical model of a system
- Using that model to verify system properties
- Not a validation technique
- Not related to "formal inspection"
- Can be used for requirements, design, implementation
Today we will look at Z ("zed"), a formal notation for requirements specification
A Test Is An Input-Output Pair
This includes "should fail" negative tests
How to get correct outputs? (the "effective oracle" problem)
- Slow but correct code (but then you can afford only a few tests)
- Working backward from a chosen output to an input
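The "slow but correct" approach can be sketched in Python (a hypothetical example, not from the lecture): a fast implementation is checked against a slow, obviously-correct oracle over a small input range.

```python
def fast_isqrt(n):
    """Fast integer square root via Newton's method."""
    if n < 2:
        return n
    x, y = n, (n + 1) // 2
    while y < x:
        x, y = y, (y + n // y) // 2
    return x

def slow_isqrt(n):
    """Slow but obviously correct oracle: count up until (i+1)^2 exceeds n."""
    i = 0
    while (i + 1) * (i + 1) <= n:
        i += 1
    return i

# The oracle is slow, so only a modest number of tests is affordable.
for n in range(1000):
    assert fast_isqrt(n) == slow_isqrt(n), n
```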
From Glenford Myers "The Art of Software Testing"
Let's write a program that prints whether a triangle is scalene, isosceles, or equilateral
- The program should take three numeric arguments
- It should print the appropriate message after analysis
Write as many tests as you can think of that might be useful here
Let's look at an implementation
Write as many more tests as you can think of
Now we iterate a bit
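For concreteness, here is one possible Python implementation of Myers' triangle classifier, with a few useful tests including negative ones (a sketch; the implementation shown in lecture may differ).

```python
def triangle_kind(a, b, c):
    """Classify a triangle given three side lengths."""
    sides = sorted((a, b, c))
    # Reject non-positive sides and violations of the triangle inequality.
    if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# A few tests, including "should fail" negative cases.
assert triangle_kind(3, 4, 5) == "scalene"
assert triangle_kind(2, 2, 3) == "isosceles"
assert triangle_kind(1, 1, 1) == "equilateral"
assert triangle_kind(1, 2, 3) == "not a triangle"   # degenerate: a + b == c
assert triangle_kind(0, 0, 0) == "not a triangle"
assert triangle_kind(-1, 2, 2) == "not a triangle"
```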
Testing Doesn't Work
The space of possible tests is ridiculously big (think 10^1000 possible inputs even for small programs)
Tests can only show the presence of failures, never their absence
Test failures usually give minimal insight into the underlying fault
The best we can hope for is to test enough of the cases a user might encounter that the program is known to "kind of" work
Black Box vs White Box
Black Box: Tests given without knowledge of implementation
White Box: Use implementation knowledge to construct tests
Both are valuable
Idea: divide the input or output space into domains such that only one representative of each domain need be tested ("equivalence partitioning")
- How to draw the domain boundaries?
- There are still a lot of domains
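A tiny Python illustration of the partitioning idea, using leap-year classification as a stand-in (a hypothetical example, not from the lecture): the input space splits into four domains, and one representative per domain suffices if the boundaries are drawn correctly.

```python
def is_leap(year):
    """Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# One representative per domain; each stands in for its whole
# equivalence class.
cases = [
    (2024, True),    # divisible by 4, not by 100
    (2023, False),   # not divisible by 4
    (1900, False),   # divisible by 100, not by 400
    (2000, True),    # divisible by 400
]
for year, expected in cases:
    assert is_leap(year) == expected, year
```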
Better idea: formal methods plus testing
What should be tested?
- Unit
- Integration
- System (acceptance)
Testing can be random (e.g. "fuzz testing")
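A minimal fuzz-testing sketch in Python (the `clamp` function is a made-up example): generate random inputs and check properties that must hold for any input, rather than exact expected outputs.

```python
import random

def clamp(x, lo, hi):
    """Clamp x into the interval [lo, hi]."""
    return max(lo, min(x, hi))

random.seed(0)  # seeded so the fuzz run is reproducible
for _ in range(10000):
    x = random.uniform(-1e6, 1e6)
    lo = random.uniform(-1e3, 1e3)
    hi = lo + random.uniform(0, 1e3)   # guarantees lo <= hi
    y = clamp(x, lo, hi)
    # Property checks: result is in range, and clamping is idempotent.
    assert lo <= y <= hi
    assert clamp(y, lo, hi) == y
```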
Bugs can come back, and tests are expensive
- Run tests after every change
- When debugging: write a test for every fix
- Add these tests to the regression test suite
"Test-driven development" says write tests first, then code to pass tests
- Typically unit tests and/or system tests
Idea is to always run tests ("regression test")
- So build environment + code is always full of tests
- Without automatic support from tools, this gets ugly fast
Integrate with the build environment for "continuous integration"
Lots and lots of tools for this out there
Flavors of the week: "JUnit" and friends, "Travis"
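Python's standard library ships `unittest`, a JUnit-style framework; a minimal sketch (the `word_count` function is a made-up example):

```python
import unittest

def word_count(s):
    """Count whitespace-separated words."""
    return len(s.split())

class TestWordCount(unittest.TestCase):
    def test_empty(self):
        self.assertEqual(word_count(""), 0)

    def test_simple(self):
        self.assertEqual(word_count("hello world"), 2)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  a  b  "), 2)

# A test runner or CI service would normally discover and run these;
# here the suite is run programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestWordCount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```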
How much of the program has been tested?
- All statements?
- All branches (each way)?
- All code paths?
- All data patterns?
Automated tools (e.g. gcov, coverage.py) can measure coverage
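To see why statement coverage and branch coverage differ, consider this hypothetical Python function: a single test can execute every statement while still missing half the branch directions.

```python
def f(a, b):
    x = 0
    if a:
        x = 1
    if b:
        x += 2
    return x

# One test hits every statement (100% statement coverage)...
assert f(True, True) == 3
# ...but the "false" direction of each branch needs a second test.
assert f(False, False) == 0
```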
100% coverage is impossible in practice
Untested code is broken code
Fault seeding: attempt to find out how good the tests are by deliberately planting bugs
Use the SCMS to reliably remove the seeded faults afterward (!)
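A sketch of the fault-seeding idea in Python (hypothetical example): plant a known bug and check that the test suite detects it; a suite that misses seeded faults probably misses real ones too.

```python
def mean(xs):
    """The real code under test."""
    return sum(xs) / len(xs)

def mean_seeded(xs):
    """Copy of mean() with a seeded fault: off-by-one denominator."""
    return sum(xs) / (len(xs) + 1)

def passes_suite(f):
    """Return True iff f passes the (small) test suite."""
    try:
        assert f([2, 4]) == 3
        assert f([5]) == 5
        return True
    except AssertionError:
        return False

assert passes_suite(mean)             # the real code passes
assert not passes_suite(mean_seeded)  # a good suite catches the seeded fault
```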
Tests need to be maintained with code
Tests need to be runnable automatically
Test failures need to be logged as tickets until fixed