
Avoiding test confusion – V&V and DoD
There can be some confusion between the different types of testing which are discussed in Agile development. This article clarifies three of the key areas and how they differ.
- Definition of Done
- Validation
- Functional Verification

Definition of Done
The Definition of Done (DoD) is a standard set of activities that applies to every backlog item. Completing these activities for each item ensures quality and completeness.
Before our backlog item can be marked as complete (or “Done”), we must be able to tick off all of the items on the Definition of Done.
Think of the Definition of Done as a universal quality checklist which applies to every backlog item without adding new checks for each individual item.
An example DoD item might be “Code has been peer reviewed”. This rule applies to all code, although each work item has a separate review, comments and updates.
Validation
Validation looks at whether the code delivers the intended user value. Delivering value is central to Agile development, so we would expect every work item to be validated before it is considered complete.
Ron Jeffries summarised the concept of developing a User Story as “Card, Conversation and Confirmation”. The “Confirmation” part of this is one or more tests which ensure that the work item delivers value.
When the conversation about a card gets down to the details of the acceptance test, the customer and programmer settle the final details of what needs to be done.
“Essential XP: Card, Conversation, Confirmation” – Ron Jeffries
Typically validation tests are defined and performed by someone representing the customer. In most product organizations this will be a Product Owner or equivalent role. In the quote above, Validation is called “Acceptance Tests”. For product development, I prefer “Validation” as a name. This avoids the suggestion that the Product Owner is “accepting” or “rejecting” the work, when both should be collaborating to maximise customer value.
Validation is generally performed as an end-to-end test on the full system, replicating the user experience. These tests are typically agreed when work on the item starts and checked when the item is ready to be declared complete.
An example validation item might be “When a user attempts to submit the form without completing all the mandatory fields a warning is given and the submission does not occur”. Note that this directly references the user experience with the software.
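The validation item above can be expressed as an executable check. This is a minimal sketch: the `submit_form` function, `FormResult` type and the mandatory field names are invented for illustration, not part of any real application.

```python
from dataclasses import dataclass, field

MANDATORY_FIELDS = {"name", "email"}  # assumed mandatory fields for this sketch

@dataclass
class FormResult:
    submitted: bool
    warnings: list = field(default_factory=list)

def submit_form(data: dict) -> FormResult:
    """Reject the submission when any mandatory field is missing or blank."""
    missing = sorted(f for f in MANDATORY_FIELDS if not data.get(f))
    if missing:
        return FormResult(submitted=False,
                          warnings=[f"'{f}' is required" for f in missing])
    return FormResult(submitted=True)

# Validation check: a partial form must warn the user and must not submit.
partial = submit_form({"name": "Ada"})          # 'email' is missing
assert partial.submitted is False
assert partial.warnings                         # at least one warning shown

# A complete form submits without warnings.
complete = submit_form({"name": "Ada", "email": "ada@example.com"})
assert complete.submitted is True and not complete.warnings
```

In a real product this check would run end-to-end through the user interface (for example with a browser automation tool); the point here is that the test is phrased in terms of what the user experiences, not how the code is structured.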


Verification
Verification is testing for functional correctness. The name is similar enough to Validation to cause confusion. Both areas of testing are key to quality, and they are often discussed together as “Verification and Validation”, or “V&V” for short.
Verification tests that the code behaves according to the specification and design. It is possible for a solution to be implemented without error (verification) but not deliver value (validation).
Verification tests will usually be created by developers, test experts working with the developers, or both. They are created before, during and after the creation of code. Traditional project approaches focussed on coding first and then testing against the specification once complete. Good practice now promotes “shifting left”: creating tests in parallel with, or before, the code.
Inspection does not improve the quality, nor guarantee quality. Inspection is too late. The quality, good or bad, is already in the product.
“Out of the Crisis” – W.E.Deming
Verification tests are typically developed and executed by the developers or by test specialists, sometimes independently of the person who wrote the code. While validation tests are end-to-end tests exercising the whole application, verification tests usually isolate specific functionality. Unit testing exercises a specific API call or procedure in isolation. Integration testing exercises functionality at the API level, generally combining multiple modules but excluding the user interface.
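The unit-level end of this spectrum can be sketched as follows. The `discount` function and its specification are invented for illustration; the verification tests check the code against that specification in isolation, without any user interface or surrounding system.

```python
import unittest

def discount(price: float, percent: float) -> float:
    """Spec (assumed): return price reduced by percent; percent must be 0-100."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountVerification(unittest.TestCase):
    """Unit tests verifying discount() against its specification."""

    def test_applies_percentage(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_zero_discount_returns_price(self):
        self.assertEqual(discount(99.99, 0), 99.99)

    def test_rejects_out_of_range_percent(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)

unittest.main(argv=["verify"], exit=False)
```

Note that these tests could all pass (the code is verified) while the feature still fails validation, for example if the customer actually wanted the discount applied per order rather than per item.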