In the last post, we discussed software testing: why testing is important, the testing principles, software testing levels and their types, the software testing life cycle, the software development life cycle, the bug life cycle, and static techniques. With the static techniques, we saw how we can perform testing by examining documents and code without executing the code. In this section, we will look at testing done by executing tests against the running software.
The Test Development Process
Before executing the tests, we should know what we are testing, the inputs, the expected outputs, and how we get ready for and run the tests. Test conditions are documented in a test design specification and test cases are documented in a test case specification. Similarly, test procedures are documented in a test procedure specification (also known as a test script or manual test script).
The formality of test documentation
Testing can vary from very formal to very informal. Very formal testing includes extensive, well-controlled documentation that records every detail of the tests, including the set of inputs and the expected outputs. Very informal testing may not be documented at all; the testers have to keep in mind what they will test and what outcome they expect.
The level of formality differs based on the organization, the people working there, the culture, how mature the development process is, how mature the testing process is, and so on.
Test analysis is the process of looking at something that can be used to derive test information. This basis for the tests is called the test basis. The test basis can be the system requirements, a technical specification, the code itself, or a business process. Sometimes tests can be based on an experienced user's knowledge of the system, which may not be documented.
Test basis: All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based.
If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.
The test basis is basically the documentation on which the test cases are based. It is the information we start from when doing test analysis and creating our own test cases. Test basis documents can be used to understand what the system should do once built.
From a testing perspective, we look at the test basis to see what could be tested; these items and events are called test conditions.
Test condition: An item or event of the component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute or structural element.
A test condition is something that we could test. If we are looking to measure coverage of code decisions (branches), then the test basis would be the code itself, and the list of test conditions would be the decision outcomes (true or false). If we have a requirements specification, its table of contents can serve as our initial list of test conditions.
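As a small sketch of the first case, suppose the test basis is the code below (a hypothetical discount function, invented for illustration): the single decision has two outcomes, giving two test conditions, each covered by one test case.

```python
# Hypothetical function used as the test basis; the discount rule
# is an assumption for illustration, not from the article.
def apply_discount(total):
    """Give a 10% discount on orders of 100 or more."""
    if total >= 100:           # one decision, two outcomes
        return total * 0.9     # 'true' outcome
    return total               # 'false' outcome

# One test case per decision outcome gives full decision coverage:
assert apply_discount(150) == 135.0   # exercises the true branch
assert apply_discount(50) == 50       # exercises the false branch
```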
A good way to understand requirements better is to try to define tests to meet those requirements.
Since exhaustive testing (testing everything) is an impractical goal, we have to select a subset of all possible tests. Practically, we want a small subset that is likely to find the most defects. To guide our selection of that subset, we need an intelligent thought process: test techniques.
A testing technique helps us to select a good set of tests from the total number of all possible sets for a given system. Different techniques offer different ways of looking at the software under test. Each technique provides a set of rules or guidelines for a tester to identify the test conditions and the test cases.
The test conditions we choose will depend upon the test strategy. Once we have identified the test conditions, it is important to prioritize them, so that the most important ones are identified and covered first.
Test conditions should be able to link back to their sources in the test basis – this is called traceability.
Traceability can be either horizontal or vertical. It can be vertical through all the layers of development documentation, e.g., from requirements down to components, or horizontal through all the test documentation for a given test level, e.g., for system testing, from test conditions through test cases to test scripts.
Why is traceability important?
Let’s look into some examples that will help us understand why we are focusing on traceability.
- A set of tests that have run OK in the past has now started creating serious problems. What functionality do these tests actually exercise? Traceability between the tests and the requirement being tested enables the functions or features affected to be identified more easily.
- Before delivering a new release, we want to know whether or not we have tested all of the specified requirements in the requirements specification. We have the list of the tests that have passed – was every requirement tested?
- The requirements for a given function or feature have changed. Some of the fields now have different ranges that can be entered. Which tests were looking at those boundaries? They now need to be changed. How many tests will actually be affected by this change in the requirements? These questions can be answered easily if the requirements can easily be traced to the tests.
Now that we have prioritized our test conditions, we don't want to spend time implementing tests for the low-priority conditions; we will focus on the high-priority ones.
Test design: Specifying test cases
Test design is basically the act of creating and specifying test cases for testing the software. Test analysis and identifying test conditions give us a general idea of what to test, covering quite a range of possibilities.
When we make a test case, we need to be very specific: we need exact input values, not general descriptions. A test condition can be rather vague, covering quite a range of possibilities, whereas a test case is specific in every detail.
A test case needs to have input values but just having some input values to a system is not a test. If we don’t know what the system is supposed to do with the inputs, we can’t tell whether our test has passed or failed.
Test cases can be formally documented as per the IEEE 829 standard for test documentation.
One of the most important aspects of a test is that it checks that the system does what it is supposed to do. To know what the system should do, we need a source of information about its correct behavior; this is called an 'oracle' or a 'test oracle'.
Once a given input value has been chosen, the tester needs to determine the expected result and document it in the test case. Expected results include not only the output for a given input but also changes to data and/or states, and any other consequences of the test.
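As a minimal sketch (the VAT function and the 20% rate are assumptions, not from the article), a test case pairs a chosen input with an expected result worked out from the oracle before execution:

```python
def add_vat(net_price):
    """Hypothetical system under test: add 20% VAT to a net price."""
    return round(net_price * 1.20, 2)

# Test case: the input is chosen first, and the expected result is
# determined from the specification (the oracle), not from the code.
test_input = 100.00
expected_result = 120.00

actual = add_vat(test_input)
assert actual == expected_result, f"expected {expected_result}, got {actual}"
```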
If we haven’t decided on the expected result before running the test, we can still look at what the system produces and would probably notice if something goes wrong.
Ideally, expected results should be defined before the test is run; the assessment of whether the software passed or failed will then be more objective.
For some applications, it may not be possible to predict the expected results before running the software; in that case, we can only do a 'reasonable check'. We have a 'partial oracle': we can tell when something is badly wrong, but would probably accept anything that looked reasonable.
In addition to expected results, the test case also specifies the environment and other things that must be in place before the test can be run (the pre-conditions) and any things that should apply after the test completes (the post-conditions).
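A sketch of pre- and post-conditions using Python's unittest (the config-file scenario is invented): setUp establishes what must be in place before the test runs, and tearDown restores the environment afterwards.

```python
import os
import tempfile
import unittest

class TestConfigFile(unittest.TestCase):
    def setUp(self):
        # Pre-condition: a config file must exist before the test runs.
        handle, self.path = tempfile.mkstemp(suffix=".cfg")
        with os.fdopen(handle, "w") as f:
            f.write("mode=fast\n")

    def test_reads_mode(self):
        with open(self.path) as f:
            self.assertIn("mode=fast", f.read())

    def tearDown(self):
        # Post-condition: the temporary file is removed again.
        os.remove(self.path)

# Run the test programmatically:
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(TestConfigFile).run(result)
print("passed" if result.wasSuccessful() else "failed")
```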
The test case should also say why it exists – i.e., the objective of the test or its traceability back to the test basis. Test cases need to be prioritized from high to low priority so that execution can follow that order.
Test cases need to be detailed so that we can accurately check the results and know that we have exactly the right response from the system.
The next step is to group the test cases in a sensible way for execution and to put them in sequence. Some test cases may need to run in a particular order, otherwise they won't test what they are meant to test.
The document that describes the steps to be taken in running a set of tests, and specifies their executable order, is called a test procedure in IEEE 829; it is also known as a test script. Preparing the test procedure specification so that it is ready to run is known as test implementation.
'Test script' is also used to describe the instructions to a test execution tool; an automation script is written in a programming language that the tool can interpret. Tests that are intended to be run manually rather than by a test execution tool are called manual test scripts.
The test procedures, or test scripts, are then formed into a test execution schedule that specifies which procedures are to be run first – a kind of superscript.
The test execution schedule says when a given test should be run and by whom. The schedule can vary depending on newly perceived risks that affect the priority of a script addressing that risk.
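A toy sketch of such a schedule (the procedure names and priorities are invented): test procedures ordered by priority, which could be re-sorted whenever a newly perceived risk changes a script's priority.

```python
# Hypothetical test procedures with priorities (1 = run first).
test_procedures = [
    {"name": "TP-03 monthly reports", "priority": 3},
    {"name": "TP-01 create account",  "priority": 1},
    {"name": "TP-02 place order",     "priority": 2},
]

def execution_schedule(procedures):
    """Order the procedures for execution; a stand-in for a real schedule."""
    return [p["name"] for p in sorted(procedures, key=lambda p: p["priority"])]

print(execution_schedule(test_procedures))
# TP-01 runs first, then TP-02, then TP-03.
```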
Categories of Test Design Technique
A test design technique basically helps us select a good set of tests from all the possible test cases for a system. There are two main categories of testing techniques: static and dynamic. We have already learned about static testing in the previous article.
In this section, we are going to focus on dynamic testing. Dynamic test techniques come in three types:
- Specification-based (black-box or behavioral techniques)
- Structure-based (white-box or structural techniques)
- Experience-based techniques
Static testing techniques
These techniques do not execute the code being examined and are used before any tests are executed on the software. Most static testing techniques can be used to 'test' any form of document, including source code, design documents, models, functional specifications and requirement specifications.
Static testing techniques have two types: Reviews and Static analysis. We have already seen these techniques in our last section which talked about Static Testing entirely.
Dynamic testing techniques
This testing is performed by executing the code, and it has three types:
Specification-based (black-box) testing techniques
A black-box (specification-based) testing technique is a procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system, without reference to its internal structure.
These are known as black-box or input/output-driven testing techniques because they view the software without any knowledge of how the system or component is structured. The tester concentrates on what the software does, not how it does it. These techniques cover both functional and non-functional testing.
Functional testing, as we know, is concerned with what the system does, its features and functions. Non-functional testing, on the other hand, is mainly concerned with how well the system performs.
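A small black-box sketch (the eligibility rule, ages 18–65 inclusive, is an assumed specification): the tests are derived from the specification's boundaries alone, with no reference to how the check is implemented.

```python
def is_eligible(age):
    # The implementation is opaque to a black-box tester;
    # only the specified behavior matters.
    return 18 <= age <= 65

# Boundary values come from the specification, not from the code:
assert is_eligible(17) is False   # just below the lower boundary
assert is_eligible(18) is True    # lower boundary
assert is_eligible(65) is True    # upper boundary
assert is_eligible(66) is False   # just above the upper boundary
```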
Structure-based (white-box) testing techniques
A structure-based (white-box) test design technique is a procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.
These are called white-box or glass-box techniques because they require knowledge of how the software is implemented, i.e., how it works. Structure-based testing techniques use the internal structure of the software to derive test cases.
Experience-based testing techniques
Experience-based test design technique is a procedure to derive and/or select test cases based on the tester’s experience, knowledge and intuition.
In these techniques, people's knowledge, skills and background are used to derive the test conditions and test cases. The experience of both technical and business people is important, as they bring different perspectives to the test analysis and design process.
An advantage of this technique is the insight gained from previous experience with similar systems: knowing what went wrong before is useful for testing.
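A sketch of error guessing, one common experience-based technique (the normalize_name function is a hypothetical system under test): inputs are chosen from experience of what typically breaks, such as empty strings, stray whitespace, and odd casing.

```python
def normalize_name(name):
    """Hypothetical system under test: tidy up a person's name."""
    return " ".join(name.split()).title()

# Inputs an experienced tester would 'guess' at:
assert normalize_name("") == ""                             # empty input
assert normalize_name("   ") == ""                          # whitespace only
assert normalize_name("ada   lovelace") == "Ada Lovelace"   # extra spaces
assert normalize_name("GRACE HOPPER") == "Grace Hopper"     # odd casing
```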
Where to apply the different testing techniques?
Black-box test techniques are applicable at all levels of testing wherever a specification exists. When performing system or acceptance testing, the requirement specification or the functional specification may form the test basis. For component or integration testing, a design document or a low-level specification forms the basis of the tests.
White-box test techniques are also applicable at all levels of the testing. Developers use these techniques in the component and the component-integration testing levels, where there is good tool support for code coverage. These techniques are also used in system and acceptance testing levels, but the structures are different.
Experience-based test techniques are used to complement white-box and black-box test techniques and are also used when there is no specification, or if the specification is inadequate or out-of-date.
This may be the only type of testing used for low-risk systems, and the approach is also useful under extreme time pressure.