Testing involves more than simply discovering bugs. Testing helps discover the causes of errors and eliminate them. Testing helps make explicit the assumptions and requirements that customers have without getting overly technical. Testing ensures code integrity and compatibility with other code modules. Testing minimizes the risks caused by humans, machines, and the environment.
Planning the testing strategy should also begin as early as possible. This requires careful consideration: what will be tested, how much can be automated and what must still be done manually, what the testing environment(s) will include, who will do the testing, what they need to know, and what standards they will follow.
Acceptance testing helps the team answer the question, “Does the code do the right thing?” Acceptance testing is required by Lean-Agile.
Acceptance tests come from the point of view of the user of the system. Factors to consider include:
- Up-front acceptance test specifications improve understanding of the requirement.
- Acceptance testing can be used to integrate development and test, increasing the efficiency of development.
- Acceptance testing can be used to improve the process being used to develop the system by looking at how to avoid errors from occurring in the first place.
- Automating acceptance tests can lower regression costs.
Acceptance tests are as much about understanding and defining the requirement as they are about validating that the system properly implements it. Ask the question, “How will I know I’ve done that?” This question opens the door to an answer in the form of an example, which can (and should) be a test specification. This process of pairing an abstract statement (the requirement) with a corresponding concrete example (the test specification) improves the conversation among developers, analysts, testers, and customers.
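As a sketch of this idea, consider a hypothetical requirement, “orders of $100 or more ship free.” Asking “How will I know I’ve done that?” yields concrete examples that can be written directly as an executable test specification. The rule, the threshold, and the function name below are all invented for illustration; PyUnit (`unittest`) stands in for whatever acceptance-test framework the team actually uses.

```python
import unittest

# Hypothetical unit under test: a shipping-cost rule for an online store.
def shipping_cost(order_total):
    """Orders of $100 or more ship free; otherwise a flat $5.99 fee applies."""
    return 0.0 if order_total >= 100.0 else 5.99

class FreeShippingAcceptanceTest(unittest.TestCase):
    """Concrete examples answering: 'How will I know the rule is done?'"""

    def test_order_at_threshold_ships_free(self):
        self.assertEqual(shipping_cost(100.0), 0.0)

    def test_order_below_threshold_pays_flat_fee(self):
        self.assertEqual(shipping_cost(99.99), 5.99)

# Run the acceptance examples programmatically.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(FreeShippingAcceptanceTest))
```

Each test method is one concrete example of the abstract requirement, which is exactly the conversation piece the analyst, tester, and developer can agree on.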
Every feature and every story must have an acceptance test. These tests can involve both manual and automated methods. The outcome is well-tested code in which any remaining defects have no serious consequences.
Automated Testing is the use of software and hardware to verify and validate the behavior and qualities of the software under test. It involves software to control the execution of tests, to compare actual outcomes with predicted outcomes, to set up test preconditions, and to control reporting functions. Usually, test automation involves automating manual testing processes that are already being used.
Planning for test automation happens at the earliest stages and should include all levels of testing: unit, integration, system, system integration, and functional/acceptance.
As a general rule,
All tests, including related setup, configuration, and evaluation steps, that can be automated should be automated, with priority given to those aspects that are the most time-consuming, error-prone, and tedious.
Common automated test frameworks include xUnit-based approaches (CPPUnit, JUnit, PyUnit), Boost.Test, and FIT.
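The xUnit family shares a common shape: per-test setup of preconditions, test methods that compare actual outcomes against predicted outcomes, and a runner that executes and reports. The minimal PyUnit example below illustrates that shape; the stack class is a stand-in unit under test invented for this sketch.

```python
import unittest

class StackUnderTest:
    # Minimal stack used as a stand-in "unit under test".
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def size(self):
        return len(self._items)

class StackTest(unittest.TestCase):
    def setUp(self):
        # xUnit precondition hook: runs before every test method.
        self.stack = StackUnderTest()

    def test_push_then_pop_returns_last_item(self):
        self.stack.push(42)
        self.assertEqual(self.stack.pop(), 42)  # actual vs. predicted outcome

    def test_new_stack_is_empty(self):
        self.assertEqual(self.stack.size(), 0)

# The runner controls execution and reporting of the whole suite.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(StackTest))
```

CPPUnit and JUnit follow the same setUp/assert/run structure in C++ and Java respectively.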
When selecting an automated testing framework, consider the following:
- An adequate range of available interfaces: GUI-driven, plus interfaces friendly to automation via scripts and command lines
- Hardware-software dependencies and system requirements
- Procurement difficulty and cost
- Community support
- Feature mismatch between automatable and manual interfaces
- Available API library
- Test data post-processing
- Fit in the overall Agile life-cycle tool suite
- Report generation
The following steps offer a good procedure for automated testing.
1. Check out the software from the repository. Checkout should be handled by scripting or other automated means in order to avoid manual error.
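A scripted checkout might look like the sketch below. Git, the URL, and the destination path are all assumptions standing in for whatever version-control system the project actually uses; the point is that the commands are built and run by code, not typed by hand.

```python
import subprocess

def checkout_commands(repo_url, revision, dest):
    """Build the checkout command sequence: clone, then pin a revision.
    (git is an assumption; substitute the project's real VCS commands.)"""
    return [
        ["git", "clone", repo_url, dest],
        ["git", "-C", dest, "checkout", revision],
    ]

def run_checkout(repo_url, revision, dest):
    # check=True turns a failed step into an exception instead of a silent error.
    for cmd in checkout_commands(repo_url, revision, dest):
        subprocess.run(cmd, check=True)
```

Keeping command construction separate from execution also makes the script itself testable without touching the network.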
2. Build the software into its deployable or executable form. Compile (and link) the code in order to detect errors in grammar and the like. The automated tool should report compile errors to the user in an effective manner, such as a scoreboard. Examine errors in the full context that was present when the compiler or linker emitted them. In the case of an automated build server, use a full-fledged, fully automated build-system infrastructure.
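A minimal sketch of the build step: invoke the compiler, capture its output, and extract each error with its file and line so the scoreboard can point back to the full context. The compiler name, flags, and gcc-style error format are assumptions; adapt both to the project's real toolchain.

```python
import re
import subprocess

# Matches gcc-style diagnostics such as "main.c:12: error: expected ';'"
ERROR_RE = re.compile(r"^(?P<file>[^:]+):(?P<line>\d+):.*error:", re.MULTILINE)

def summarize_errors(compiler_output):
    """Extract (file, line) pairs so each error can be shown in full context."""
    return [(m.group("file"), int(m.group("line")))
            for m in ERROR_RE.finditer(compiler_output)]

def build(sources):
    """Compile the sources; return the exit code and a list of error locations.
    ('cc' is a placeholder for the project's real build command.)"""
    proc = subprocess.run(["cc", "-c"] + sources,
                          capture_output=True, text=True)
    return proc.returncode, summarize_errors(proc.stderr)
```

The summarized (file, line) pairs are what a scoreboard display or editor integration would consume.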
3. Set up test preconditions. Establish the dependencies required by the system or unit under test. This can involve many activities:
- Simple initialization of a variable or instantiation of a class under test
- The setup routine or table of a testing framework such as CPPUnit or FIT
- Initialization of target nodes
- Setting up environment variables
- Connections to other systems, such as databases
The degree to which developers can properly handle dependencies is the degree to which they can easily and properly test their software in a self-contained and automated fashion.
Distributed, real-time, and embedded (DRE) and network systems introduce unique complexities for establishing full testing preconditions: setting up COTS network test equipment via third-party automation APIs and command-line scripts, accessing and modifying a database of network topologies, and automating the configuration of COTS network connectivity devices.
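In an xUnit framework, the setup routine is where these preconditions are established and the teardown routine is where they are undone, so each test starts from a known state. The sketch below uses PyUnit's setUp/tearDown hooks; the environment variable and the fake database are invented stand-ins for a system's real dependencies.

```python
import os
import unittest

class FakeDatabase:
    """Stand-in dependency; a real suite might open an actual connection."""
    def __init__(self, url):
        self.url = url
        self.open = True
    def close(self):
        self.open = False

class PreconditionExample(unittest.TestCase):
    def setUp(self):
        # Establish every dependency the unit under test needs.
        os.environ["APP_MODE"] = "test"           # environment variable
        self.db = FakeDatabase("db://test-node")  # connection to another system

    def tearDown(self):
        # Undo the preconditions so tests stay independent of one another.
        self.db.close()
        del os.environ["APP_MODE"]

    def test_runs_with_dependencies_in_place(self):
        self.assertEqual(os.environ["APP_MODE"], "test")
        self.assertTrue(self.db.open)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(PreconditionExample))
```

Because setUp runs before every test method, no test depends on the residue left behind by another.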
4. Deploy the software. This step may be straightforward when deploying on a single workstation, or quite complex when deploying on DRE and network systems involving multiple levels of security.
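For the simple single-workstation end of that spectrum, deployment can be as small as one scripted copy per target node, as in this sketch. The use of scp, the node names, and the remote directory are all assumptions; a DRE system would more likely use a dedicated, security-aware deployment tool.

```python
def deploy_commands(artifact, nodes, remote_dir="/opt/app"):
    """Build one copy command per target node.
    (scp and the paths are placeholders for the real deployment mechanism.)"""
    return [["scp", artifact, f"{node}:{remote_dir}"] for node in nodes]
```

As with checkout, separating command construction from execution keeps the deployment script itself testable.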
5. Run the tests. If there are only a few tests, a script can automate running them. With a larger number of tests, use a testing framework to register and run the suite of tests. Use “test runners” or “test conductors” to orchestrate the control and actions of many pieces of test equipment, test software, and test-result displays in order to execute a set of complete functional tests.
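The registration-and-run pattern looks like the PyUnit sketch below: individual test cases (here, two invented placeholder suites) are gathered into one suite and handed to a single runner, which is the simplest form of a test conductor.

```python
import unittest

class SensorTests(unittest.TestCase):
    # Placeholder suite standing in for one subsystem's tests.
    def test_reading_in_range(self):
        self.assertTrue(0 <= 42 <= 100)

class LinkTests(unittest.TestCase):
    # Placeholder suite standing in for another subsystem's tests.
    def test_link_is_up(self):
        self.assertEqual("up", "up")

# A simple "test conductor": register suites from many sources, run them as one.
loader = unittest.defaultTestLoader
suite = unittest.TestSuite([
    loader.loadTestsFromTestCase(SensorTests),
    loader.loadTestsFromTestCase(LinkTests),
])
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a full functional-test environment, the same orchestration role extends to driving test equipment and collecting results from many nodes.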
6. Control the software and resources that the unit under test requires. Whether testing a module in isolation or testing it with its actual collaborators, the test strategy needs to control the other software entities: their lifetime (creation, initialization, configuration, destruction) and their behavior during the tests. A simple technique is to use a mock object that is part of a runtime polymorphic hierarchy and is hidden behind an interface. This approach allows substituting either the mock or the real entity, depending on which kind of test is being run. More complex domains require more sophisticated approaches; for example, controlling the network traffic entering the nodes of a large, heterogeneous, mobile ad hoc network.
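The mock-behind-an-interface technique can be sketched as follows. The radio/beacon domain and all class names are invented for illustration: the unit under test depends only on the interface, so the test substitutes a recording mock where the real entity would need hardware.

```python
import unittest

class Radio:
    """Interface for the collaborating entity."""
    def send(self, message):
        raise NotImplementedError

class RealRadio(Radio):
    def send(self, message):
        # Would transmit over actual hardware; unavailable in unit tests.
        raise RuntimeError("no hardware attached")

class MockRadio(Radio):
    def __init__(self):
        self.sent = []
    def send(self, message):
        self.sent.append(message)  # record behavior for later assertions

class Beacon:
    """Unit under test; depends only on the Radio interface."""
    def __init__(self, radio):
        self.radio = radio
    def announce(self, node_id):
        self.radio.send(f"HELLO {node_id}")

class BeaconTest(unittest.TestCase):
    def test_announce_sends_hello(self):
        mock = MockRadio()  # substitute the mock for the real entity
        Beacon(mock).announce("n7")
        self.assertEqual(mock.sent, ["HELLO n7"])

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(BeaconTest))
```

An integration test would pass a RealRadio through the same interface, which is exactly the substitution the polymorphic hierarchy enables.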
7. Retrieve and display the results. The testing framework should enable viewing test results, ideally in a simple scoreboard display or other standardized report. As far as possible, it should help developers find which tests failed, which errors occurred, and where.
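A scoreboard can be as simple as the sketch below: a one-line summary followed by one line per failure, pointing straight at what broke. The (name, passed, message) result shape is an assumption; a real framework would supply its own result objects.

```python
def scoreboard(results):
    """Render a scoreboard from a list of (test_name, passed, message) tuples."""
    failed = [(name, msg) for name, ok, msg in results if not ok]
    lines = [f"ran {len(results)}, passed {len(results) - len(failed)}, "
             f"failed {len(failed)}"]
    for name, msg in failed:
        lines.append(f"  FAIL {name}: {msg}")  # point straight at the failure
    return "\n".join(lines)
```

Passing tests stay out of the detail lines, so a green run reads as a single summary line.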
8. Notify team members. Rather than requiring the team to monitor the tests, the testing framework should actively notify the team of results so that problems can be isolated and fixed quickly and effectively.
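The notification step can be kept independent of the transport, as in this sketch: the summary shape and the idea of a pluggable `send` callable (standing in for e-mail, chat, or a build-light display) are assumptions for illustration.

```python
def notify(result_summary, send):
    """Push results to the team; 'send' abstracts the delivery channel."""
    if result_summary["failed"]:
        send(f"BUILD BROKEN: {result_summary['failed']} test(s) failed")
    else:
        send(f"All {result_summary['passed']} tests passed")

# Usage with a fake transport that just records messages:
messages = []
notify({"passed": 10, "failed": 2}, messages.append)
```

Because the transport is injected, the notification logic itself can be tested without sending anything.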
9. Analyze test statistics. The test strategy should be informed by statistical correlations of test results and other metrics that show how testing is progressing over time; for example, code coverage over the course of a week, or the growth rate of the number of test cases.
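Both of those example metrics reduce to simple arithmetic over a time series of daily measurements, as this sketch shows; the input formats (daily counts and daily coverage percentages) are assumptions about what the test infrastructure records.

```python
def growth_rate(daily_counts):
    """Average per-day growth in the number of test cases over the window."""
    if len(daily_counts) < 2:
        return 0.0
    return (daily_counts[-1] - daily_counts[0]) / (len(daily_counts) - 1)

def coverage_trend(daily_coverage):
    """Simple trend signal over the window: rising, falling, or flat."""
    delta = daily_coverage[-1] - daily_coverage[0]
    return "rising" if delta > 0 else "falling" if delta < 0 else "flat"
```

A flat or falling trend while features are still being added is the kind of early-warning signal this analysis is meant to surface.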