Testing Concluded; Git
CS 300 Lecture 6-1
Reminder: last week we covered test cases, where they come from, and principles of testing
Today we pick up where we left off
Black Box vs White Box
Black Box: Tests given without knowledge of implementation
White Box: Use implementation knowledge to construct tests
Both are valuable
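A small sketch of the difference (the function `clamp` and its defensive branch are made-up examples, not from the lecture): black-box tests come from the specification alone, while a white-box test targets a branch you can only see in the implementation.

```python
def clamp(x, lo, hi):
    """Clamp x into the range [lo, hi]."""
    if lo > hi:          # defensive swap: an implementation detail, not in the spec
        lo, hi = hi, lo
    return max(lo, min(x, hi))

# Black-box: derived from the documented behavior alone
assert clamp(5, 0, 10) == 5      # in range
assert clamp(-1, 0, 10) == 0     # below range
assert clamp(99, 0, 10) == 10    # above range

# White-box: exercises the swap branch we only know about from reading the code
assert clamp(5, 10, 0) == 5
```

Both kinds catch real bugs: the black-box tests would survive a rewrite of `clamp`, while the white-box test would be impossible to invent without seeing the source.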
Idea: divide the input or output space into domains such that only one representative of each domain needs to be tested
- How do we draw the domain boundaries?
- Even then, there may still be many domains
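The domain idea in miniature (the tiered `shipping_cost` function is an invented example): one representative per domain, plus the boundaries where domains meet, since boundaries are where the bugs live.

```python
def shipping_cost(weight_kg):
    """Flat-rate price tiers by weight. Example domains, not from the lecture."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5
    if weight_kg <= 10:
        return 12
    return 30

# One representative per domain...
assert shipping_cost(0.5) == 5    # light tier
assert shipping_cost(5) == 12     # medium tier
assert shipping_cost(50) == 30    # heavy tier
# ...plus the boundaries between domains
assert shipping_cost(1) == 5
assert shipping_cost(10) == 12
```

Five tests stand in for an infinite input space; the open question from the slide remains how to choose the boundaries in the first place.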
Better idea: formal methods plus testing
What should be tested?
- Units (individual functions or modules)
- Integration (modules combined)
- System (acceptance)
Integration testing can be top-down (using stubs) or bottom-up (using drivers)
Stubs and Drivers
Stub: piece of code that "simulates" a function or module so that others can call it during testing
Driver: piece of code that calls a function or module to test it
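Both in one small sketch (the `greeting` module and names are hypothetical): the stub stands in for a lookup function that may not exist yet, and the driver calls the module under test.

```python
# Module under test: depends on a name-lookup function supplied by a caller.
def greeting(user_id, lookup_name):
    name = lookup_name(user_id)
    return f"Hello, {name}!"

# Stub: "simulates" the real lookup so greeting() can be tested in isolation
def stub_lookup(user_id):
    return {1: "Ada", 2: "Grace"}.get(user_id, "stranger")

# Driver: calls the module under test and checks its results
def driver():
    assert greeting(1, stub_lookup) == "Hello, Ada!"
    assert greeting(99, stub_lookup) == "Hello, stranger!"
    return "ok"

assert driver() == "ok"
```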
Random testing (e.g. "fuzz testing")
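A minimal fuzz-style sketch (the `absolute` function and the properties checked are assumptions for illustration): generate many random inputs and check that properties hold, rather than comparing against hand-picked expected values.

```python
import random

def absolute(x):
    return -x if x < 0 else x

random.seed(0)  # seed makes the fuzz run reproducible when a failure is found
for _ in range(1000):
    x = random.randint(-10**9, 10**9)
    r = absolute(x)
    # Properties that must hold for ANY input
    assert r >= 0
    assert r == x or r == -x
```

Real fuzzers are far more sophisticated about generating inputs, but the shape is the same: random inputs, checked properties.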
Bugs can come back, and re-running tests by hand is expensive
- Run the tests after every change
Debugging: Write tests for every fix
Add tests to regression test suite
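A sketch of a test written for a fix (the `median` function and its past bug are invented): when a bug is fixed, a test pinning the fix joins the regression suite so the bug cannot silently return.

```python
def median(values):
    """Median of a non-empty list.
    (Hypothetical history: an earlier version forgot to sort first.)"""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

# Regression tests added when the "unsorted input" bug was fixed;
# they now run after every change.
assert median([3, 1, 2]) == 2
assert median([4, 1, 3, 2]) == 2.5
```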
How much of the program has been tested?
- All statements?
- All branches (each way)?
- All code paths?
- All data patterns?
Automated tools can measure coverage
100% coverage (especially of all code paths) is generally impossible
Untested code is broken code
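The coverage criteria above really are different, as this small sketch shows (the `discount` function is invented): a single test can execute every statement yet still miss a branch direction.

```python
def discount(price, member):
    total = price
    if member:
        total *= 0.9
    return round(total, 2)

# This one test executes every statement (100% statement coverage)...
assert discount(100, True) == 90.0
# ...but only the branch-taken direction. Covering the branch "each way"
# requires a second test where the condition is false.
assert discount(100, False) == 100
```

Path and data-pattern coverage are stricter still, which is why the criteria form a hierarchy rather than one number.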
Fault seeding: attempt to find out how good the tests are by planting known faults and counting how many the tests catch
Use the SCMS (source code management system) to reliably remove the seeded faults afterward (!)
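Fault seeding in miniature (the functions and the tiny "suite" are illustrative assumptions): plant a deliberate bug next to the real code and check whether the test suite detects it; the fraction of seeded faults caught estimates the suite's quality.

```python
def maximum(a, b):
    return a if a >= b else b

def maximum_seeded(a, b):      # seeded fault: comparison flipped
    return a if a <= b else b

def suite(f):
    """Return True iff f passes every test in the suite."""
    try:
        assert f(3, 5) == 5
        assert f(5, 3) == 5
        assert f(4, 4) == 4
        return True
    except AssertionError:
        return False

assert suite(maximum) is True          # the real code passes
assert suite(maximum_seeded) is False  # a good suite catches the seeded fault
```

This is why the slide's warning matters: seeded faults must be removed reliably afterward, which is exactly what version control is for.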
"Test-driven development" says write tests first, then code to pass tests
- Typically unit tests and/or system tests
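The TDD ordering, compressed into one file (the `slugify` example is invented; the lecture names JUnit, but the same shape works in Python's unittest): the test is written first and the code is written to make it pass.

```python
import unittest

# Step 1: write the test first (it fails until slugify exists and works)
class TestSlugify(unittest.TestCase):
    def test_spaces_become_dashes(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2: write just enough code to make the test pass
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Run the suite programmatically
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```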
The idea is to always run the tests ("regression testing")
- So the build environment and code base always carry a full test suite
- Without automated tool support, this gets ugly fast
Integrated with build environment for "continuous integration"
Lots and lots of this out there
Flavors of the week: "JUnit" and friends, "Travis"
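A CI setup can be tiny; this is a hedged sketch of a Travis-style config (the Python version and the `pytest` invocation under `tests/` are assumptions, not from the lecture):

```yaml
# .travis.yml — run the regression suite on every push
language: python
python:
  - "3.10"
script:
  - python -m pytest tests/
```

The service watches the repository and runs `script` on every push, so test failures surface immediately instead of at release time.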
Tests need to be maintained with code
Tests need to be runnable automatically
Test failures need to be logged as tickets until fixed
A Git hosting service provides:
- Push, pull
- Repo storage
- Issue tracker
- Email list