Inspection as the Path to Good Specifications
“Check out the specification – see if you notice anything,” your co-worker says, and then disappears out the door. With instructions like that, you could find yourself staring at the document for ages, meditating over it, wondering whether you have noticed anything. Eventually something will probably stick out. But what sticks out, and how helpful it is, is a matter of coincidence. Some 15% of all software errors end up in the delivered product, and many of them were already present in the specifications: requirements get misunderstood, special cases are insufficiently described, different terms are used for the same thing, and so on. Besides requirements documents, you can also inspect architecture drafts, source code, test cases and all other written interim results in software development.
The Quality of the Inspection
As with so many things in life, the quality of the results depends on the quality of the questions asked. The instruction to “read over the document and see if you notice anything” is obviously subjective and depends on the situation at hand. Scientific experiments have shown that the results of an inspection are never quite complete: you will never find every error, and even the same reader discovers new problems with each reading.
That doesn’t mean you can skip inspections. They reveal errors early that you would otherwise carry through the whole project. I recommend the following to get the best possible results:
Six Best Practices for a Successful Inspection
- Goal definition: The goal needs to be clear first of all. Should the inspection be the final piece of quality control before implementation? Or, in a first inspection, do you merely want to ascertain whether the document is broadly complete, without paying too much attention to the details?
- Checklist: Clear criteria make searching for errors easier, and checklists are time-tested. Typical yes/no questions are “Is a use case diagram present?”, “Are the stakeholder names in chapter 2 consistent with the actors’ names in chapter 3?” or “Have all the errors been dealt with?” If the answer is yes, everything is okay; otherwise there is room for improvement. The quality criteria can be derived from the relevant standards. For requirements I use ISO/IEC/IEEE 29148:2011, “Systems and software engineering — Life cycle processes — Requirements engineering”. This standard specifies the following quality characteristics for requirements: necessary, implementation-free, unambiguous, consistent, complete, singular, feasible, traceable and verifiable. A checklist like this is a one-dimensional representation of a two-dimensional process: a list of quality criteria has to be compared against several chapters (text and images) of the document. I recommend working chapter by chapter and finishing the assessment of each chapter before moving on to the next. The diagram chapters are generally so complex that you need to think about them very carefully before you can make an evaluation. The literature recommends that checklists be no longer than a page or two. I disagree. My checklists for specification templates with ten chapters are five to ten pages long. Why? A list of abstract quality criteria such as “complete” or “consistent” is better than nothing, but it makes the inspection results random and subjective. It is better to make the criteria concrete for the template at hand, which inevitably increases their number.
Take consistency, for example: the content within a chapter must be internally consistent, and it must also be consistent with several other chapters. For an expert it is easy to break this down: it is well established in the standards and the literature what a complete set of textual requirements or a complete use case diagram looks like, and which consistency relationships must hold between them. I have consolidated this knowledge in my checklist and can apply it efficiently in any inspection. If a criterion is formulated concretely enough, it is easy to work through. One would hope that the results are then repeatable and independent of the inspector. This has proven not to be the case: even seemingly clear criteria are often subjective. Take, for example, a question about understandability such as “Are the names of all use cases self-explanatory?”
- No hope that the error list will be complete: Are we just trying to find the major errors? Then a quick inspection by individual experts is enough. But even half a dozen independent analysts could not find all the errors. This is no reason to fire anyone: a specification is an extremely complex document with so many cross-references in its content that it is humanly impossible to find every error. That is why further quality-control measures are needed after the inspection.
- Multiple inspectors: There are semantic and syntactic inspection criteria. Syntax refers to the elements used (words, model symbols) and their permitted combinations, that is, the grammar of the textual and graphical notations. An external specification expert can assess this very well, often without fully understanding the content. The semantic criteria concern the quality of the content, e.g. whether the requirements are complete. Because specification expertise and domain expertise are spread across different people, multiple inspectors are necessary for this reason alone if all the criteria are to be covered as well as possible. Another reason is completeness of the results: several readers find more errors than one.
- Repetition: A document should go through quality assurance more than once. Not just because continual changes threaten the quality and consistency of the document, but also because different questions pop up at different points in time, and you will probably have overlooked errors on previous occasions. Repetition gives you another go.
- Automation: Getting help from automated tools sounds tempting: press a button and receive a list of the errors. Unfortunately, the tools I have tested so far have not impressed me. Not all quality criteria can be checked automatically, only the syntactic ones for which a rule can be defined. And even then, too many false positives occur: findings that fit the definition of an error but are nevertheless correct. Even if you receive a list of a hundred findings, you still have to inspect manually whether they really need to be fixed or whether they are false alarms. Only very lean evaluations are accurate enough to be more efficient than a manual check, e.g. searching for sentences with too many words or for weak words to be avoided, such as “any time”. Lists of such terms can be found in the literature, e.g. in ISO/IEC/IEEE 29148:2011 and in the CPRE Foundation Level¹. My recommendation: define the checklist first and then find out whether you can delegate individual criteria to a tool. It is more expensive to go through all the evaluation options a tool offers first and then try to figure out which of that information is useful to you.
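The lean, automatable evaluations mentioned above can be sketched in a few lines. The word list, the length threshold and the function name below are illustrative assumptions, not part of any real tool; as the text notes, every finding still needs a manual look, because it may be a false alarm.

```python
import re

# A minimal sketch of two lean, automatable checks on requirement text:
# flag sentences that are too long and sentences containing weak words.
# WEAK_WORDS and MAX_WORDS are illustrative choices, not a standard list.

WEAK_WORDS = {"any time", "as appropriate", "if possible"}
MAX_WORDS = 25

def scan_requirement(text):
    """Return a list of findings for one requirement text.
    Every finding is only a candidate error and needs manual review."""
    findings = []
    # Split naively on sentence-ending punctuation followed by whitespace.
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = sentence.split()
        if len(words) > MAX_WORDS:
            findings.append(f"Sentence too long ({len(words)} words): {sentence[:40]}...")
        lowered = sentence.lower()
        for weak in WEAK_WORDS:
            if weak in lowered:
                findings.append(f"Weak word '{weak}' in: {sentence[:40]}...")
    return findings

for finding in scan_requirement("The system shall respond at any time."):
    print(finding)
```

Even this toy version shows why the manual follow-up matters: a phrase like “any time” may be perfectly precise in context, yet the rule flags it regardless.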
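The cross-chapter consistency question from the checklist above (“Are the stakeholder names in chapter 2 consistent with the actors’ names in chapter 3?”) is one of the few criteria simple enough to delegate to a tool, since it reduces to comparing two name lists. A minimal sketch, with hypothetical names and chapter numbers:

```python
# Sketch of one automatable syntactic consistency check: comparing the
# stakeholder names of one chapter with the actor names of another.
# The example names and the chapter numbers are hypothetical.

def consistency_findings(stakeholders, actors):
    """Report names that appear in only one of the two chapters."""
    stakeholders, actors = set(stakeholders), set(actors)
    findings = []
    for name in sorted(stakeholders - actors):
        findings.append(f"Stakeholder '{name}' (ch. 2) has no matching actor in ch. 3")
    for name in sorted(actors - stakeholders):
        findings.append(f"Actor '{name}' (ch. 3) is not listed as a stakeholder in ch. 2")
    return findings

for finding in consistency_findings(
    stakeholders=["Customer", "Warehouse Clerk", "Accountant"],
    actors=["Customer", "Warehouse Clerk", "Auditor"],
):
    print(finding)
```

A set comparison like this only catches exact-name mismatches; whether “Accountant” and “Auditor” are the same role in different words is exactly the semantic judgment that stays with the human inspector.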
And finally, a slightly selfish tip to end things off: have an external professional read the document and judge whether they would be happy with it. They will develop a checklist according to your working question and add their own suggestions for improvement. An inspection like this is definitely worth your time, even if the expert cannot find every single error.
Checklists definitely help make an inspection complete, at least as complete as the checklist itself is, and they can help bring back under control an inspection that has gone down a rat-hole. But they pose a chicken-and-egg problem, with a double yolk.

The first part: if the checklist isn’t in the requirements writer’s hands from the beginning of the project, then the requirements documents that get written are guaranteed to fail the checklist in the reviewer’s hand. The checklist is a set of requirements for the requirements document, and the document review is the test. The second part: the checklist will only contain items that weed out problems you foresee or have seen in the past. Neither covers all the bases, especially in new organizations. Every new set of requirements is an opportunity for a new problem, or worse a new global problem affecting the whole document, and therefore for a new checklist item.

The other thing about checklists is that their terminology has to be absolutely clear to everyone, or the requirement writers and reviewers who didn’t write the checklist will quickly learn to substitute their own interpretation or criticality for items, or whole checklists, to keep projects on schedule. So writers and reviewers need to be trained on the checklist, item by item, to ensure that it is interpreted as intended, and references and examples should be readily available to disambiguate anything arcane in the wording.
Either you do checklist training for the inspectors and requirements authors, or you use a standard terminology which they already know. Or you do both.
The checklist will probably never be complete, but it is still more complete than no list at all.
Are we trying to make system engineering and specification preparation into a ‘cook-book’ process? That has been tried for years, without success. After over 40 years of that kind of work, I am convinced that the best specifications are truly a ‘work of art’, created by artisans. There are a few guidelines that give good insight into the process but they are just the beginning of the effort.
I did not see any mention of the role that verification plays in the development of good requirements. If you can’t verify it, it is not a good requirement and needs to be reworked. That is a check and balance that should never be ignored.
I have only seen one tool that helped create a good set of requirements and unfortunately, it has been retired in favor of far more expensive tools that never get the job done.
Even in the most informal software development process, there will probably come a day when someone has a look at the requirements and judges their quality, or at least verifies whether they are ready for implementation. Then a checklist can help. “Testable/verifiable” is one of the quality criteria defined in the IEEE standard, and I agree with you that it is an important quality of a requirement.
Software engineering will never be a cookbook process and cannot be Taylorized into small steps executed by people who understand only their own work step. Most IT systems are too complex for this, and their development is therefore necessarily iterative. The inspection checklist cannot replace knowledge and experience, so not everyone can do an inspection, even with a checklist. But it is a helpful tool.
Speaking of art: even artwork must satisfy quality criteria. For instance, there are inspection checklists for other kinds of texts, too. But the experienced artist knows these criteria from experience and does not need a checklist. A checklist rather makes explicit the knowledge of the professionals.