Imagine you’ve developed an electric scooter and now you want to see whether it works. Above all, you need to check whether it functions according to your client’s wishes. So you start testing. You determine that it drives and that the brakes and the steering work well. However, the battery display doesn’t indicate the drive time remaining. That’s an unfortunate mistake, as now drivers won’t know how far they can go.
What could be responsible for this missing feature? There are a few possible causes to consider:
- There was no requirement/no use case scenario for the display
- There was an erroneous requirement
- There was a consistent requirement, but it wasn’t implemented
In the first case, there's no basis for testing; in the second, the basis is erroneous. Without a concrete, correct requirement, there's no way to check whether something has been implemented properly. Verification and validation therefore presuppose that requirements, users, use cases, and user goals have been specified. For software development there are precise guidelines for specifying requirements, for example IEEE 830, the Recommended Practice for Software Requirements Specifications (SRS), or the newer ISO/IEC/IEEE 29148:2011, Systems and Software Engineering – Life Cycle Processes – Requirements Engineering.
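What does a requirement that is concrete enough to test look like? As a hypothetical sketch for the scooter's missing feature (the `BatteryDisplay` class, its interface, and the figures are invented for illustration), the requirement "the battery display shall show the estimated remaining drive time" could be pinned down like this:

```python
# Hypothetical requirement: "The battery display shall show the estimated
# remaining drive time in minutes." All names and numbers are illustrative.

class BatteryDisplay:
    def __init__(self, capacity_wh: float, consumption_wh_per_min: float):
        self.capacity_wh = capacity_wh
        self.consumption_wh_per_min = consumption_wh_per_min

    def remaining_minutes(self, charge_fraction: float) -> float:
        """Estimated drive time left at the current charge level."""
        return (self.capacity_wh * charge_fraction) / self.consumption_wh_per_min

# A requirement phrased this concretely can be checked directly:
display = BatteryDisplay(capacity_wh=360, consumption_wh_per_min=6)
assert display.remaining_minutes(1.0) == 60.0   # full charge -> 60 minutes
assert display.remaining_minutes(0.5) == 30.0   # half charge -> 30 minutes
```

With no requirement at all, or an erroneous one, there is nothing to put on the right-hand side of such a check.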
Both verification and validation are quality management methods, and both disciplines are covered in ISO 9000. However, because complex software can't be tested for every eventuality, guidelines from risk management are increasingly coming into play as well. Imagine a medical app for measuring vital signs. Can a manufacturer today actually guarantee that the software will run stably on every mobile and non-mobile device and operating system, in every version? Certainly not. To test all possible scenarios, they would first all have to be defined – these days, an endless task. Instead, the risks that may arise in the development context are identified and evaluated, and measures are taken to minimize the probability of their occurrence.
Was the product built correctly?
ISO 9000 defines verification as follows:
Confirmation, based on objective evidence, that requirements are implemented.
Verification determines whether the results of development conform to the specification. In the software industry, verification is typically accomplished through testing (e.g. module tests, integration tests, and system tests). Nevertheless, testing alone can't establish that a product is free of errors – a test can reveal a defect, but it doesn't fix it.
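A module test is the smallest form of such verification: it checks one implementation unit against its specification. The following sketch assumes a hypothetical spec ("the brake shall reduce speed by at least 5 km/h per second"); the `Brake` class and its figures are invented for illustration.

```python
# Verification against a hypothetical spec: "The brake shall reduce speed
# by at least 5 km/h per second." The Brake class is invented for illustration.

class Brake:
    DECELERATION_KMH_PER_S = 6.0  # what this implementation actually delivers

    def apply(self, speed_kmh: float, seconds: float) -> float:
        """Speed after braking for the given duration; never below zero."""
        return max(0.0, speed_kmh - self.DECELERATION_KMH_PER_S * seconds)

# A module test checks conformance to the spec, not user satisfaction:
def test_brake_meets_spec():
    assert Brake().apply(20.0, 1.0) <= 15.0   # at least 5 km/h shed per second
    assert Brake().apply(3.0, 2.0) == 0.0     # speed never goes negative

test_brake_meets_spec()
```

Passing this test says the brake was built as specified – it says nothing about whether that specification actually serves the rider.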
Was the right product developed?
ISO 9000 defines validation as follows:
Confirmation, based on objective evidence, that requirements are implemented for a particular use or application.
Validation determines whether the requirement definitions are suited to the stakeholders' goals – in other words, whether the product offers what the user needs. In classical approaches, validation usually takes the form of an acceptance test. This test also requires a good basis, for example scenarios with use cases. The main difference from verification lies in the interaction with the user. Possible instruments for validation are reviews, prototypes, A/B tests, and usability tests.
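An acceptance check is anchored in a user scenario rather than in the specification. As a sketch, assuming a hypothetical scenario ("a commuter needs to reach work 8 km away and back on a single charge"; the `Scooter` model and numbers are invented):

```python
# Validation asks whether the product serves the user's goal, not whether it
# matches the spec. Scenario and Scooter model are invented for illustration.

class Scooter:
    def __init__(self, range_km: float, top_speed_kmh: float):
        self.range_km = range_km
        self.top_speed_kmh = top_speed_kmh

def commuter_scenario_passes(scooter: Scooter, commute_km: float = 8.0) -> bool:
    # Acceptance criterion: the round trip must fit into one charge.
    return scooter.range_km >= 2 * commute_km

assert commuter_scenario_passes(Scooter(range_km=20, top_speed_kmh=20))
assert not commuter_scenario_passes(Scooter(range_km=12, top_speed_kmh=25))
```

Note that the second scooter might pass every verification test against its spec and still fail this scenario – which is exactly the gap validation is meant to expose.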
Verification isn’t validation
It can happen that verification of a product succeeds while validation fails, and vice versa. For example, the electric scooter might be equipped, according to specification, with a 200 W motor (verification successful) and accelerate to 20 km/h (intended purpose) – but not when the driver weighs over 100 kg (validation failed).
Conversely, the scooter could be built with a 150 W motor (verification failed), yet reach 20 km/h (intended purpose) even with a driver weighing over 100 kg (validation successful).
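The contrast can be expressed as two independent checks. The figures come from the article's example; the function names are invented for illustration.

```python
# Verification and validation as two independent checks (illustrative names).

SPEC_MOTOR_W = 200          # what the specification demands
TARGET_SPEED_KMH = 20.0     # the intended purpose

def verified(motor_w: int) -> bool:
    # Verification: was the product built according to spec?
    return motor_w == SPEC_MOTOR_W

def validated(speed_with_heavy_driver_kmh: float) -> bool:
    # Validation: is the intended purpose met even with a driver over 100 kg?
    return speed_with_heavy_driver_kmh >= TARGET_SPEED_KMH

# Scooter 1: spec-conforming 200 W motor, but too slow under heavy load.
assert verified(200) and not validated(17.0)
# Scooter 2: non-conforming 150 W motor, yet the user's goal is met.
assert not verified(150) and validated(20.0)
```

Neither check implies the other – which is the point of keeping the two disciplines distinct.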
With validation it’s also possible to differentiate between whether an intended use was achieved and, beyond that, whether the intended use was implemented in such a way that users are convinced of its benefits. After all, that’s the goal of product development: not just usability, but usability that interests people and leads to purchases.
First verify and then validate?
In the classical V model, validation takes place after verification. Requirements are refined for implementation in successive phases and subsequently verified, from the fine-grained module level up to testing at the system level. Only after verification is complete is validation carried out. This is a logical method, but it often takes too long.
Continuous verification and validation
What’s the agile way? Agile teams take responsibility for the product, including quality. In an agile environment it’s not feasible to set clear boundaries and handover points between development, quality management and IT operations. Instead, testing inside the team is distinguished from testing outside the team. Testing within an autonomous team could also be called verification. Testing outside the team for usability and technical quality could be called validation. Within the team, one should differentiate between quality requirements and the technical quality of the implementation. The product owner determines acceptance criteria (sometimes in cooperation with the team and stakeholders) and the team takes responsibility for the implementation. Usually there’s no external quality manager (for example, in Scrum). It’s becoming more common, however, for teams to have developers with testing skills. Their role is to offer support with unit tests and test automation, rather than to monitor or supervise.
Because there are no fixed, finalized specifications that serve as a basis for development, verification and validation are carried out continuously. Products are delivered in increments, which ensures up-to-date feedback on real usage. DevOps, test automation, test-driven development, and continuous integration and delivery are all methods used to evaluate interim results and avoid failure due to unsuccessful validation. The earlier mistakes are discovered, the less time and money it takes to fix them: the Rule of Ten states that the cost of an undiscovered mistake increases by a factor of ten at each stage of the value chain.
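The Rule of Ten is simple arithmetic. Taking the usual textbook stages as an example (the stage names and the baseline cost of 1 unit are illustrative):

```python
# Rule of Ten: a defect that costs 1 unit to fix during requirements analysis
# costs ten times more at each later stage. Stage names are illustrative.

stages = ["requirements", "design", "implementation", "testing", "operation"]
cost = {stage: 10 ** i for i, stage in enumerate(stages)}

assert cost["requirements"] == 1
assert cost["testing"] == 1_000
assert cost["operation"] == 10_000   # four stages later: 10^4 times the cost
```

This is why continuous verification and validation pay off: every increment that gets real feedback moves defect discovery toward the cheap end of this scale.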
Requirements must be tip top
What’s the best way to discover and correct mistakes? No matter whether you’re developing classically or agilely, up-to-date, precise, unambiguous, comprehensible and complete requirements play the decisive role. The more requirements change, the easier it is to lose track. That’s why we’ve created software that gives you what you need to ensure quality through traceability, verification and validation.