“Check out the specification – see if you notice anything,” your co-worker says, and then disappears out the door. With an instruction like that you could find yourself staring at the document for ages, meditating over whether or not you have noticed anything. Eventually something will probably stick out, but what sticks out, and how helpful it is, is a matter of coincidence. 15% of all software errors make it into the delivered product, and many of them were already present in the specification: requirements get misunderstood, special cases are insufficiently described, different terms are used for the same thing, and so on. Besides requirements documents, you can also inspect architecture drafts, source code, test cases and every other written interim result of software development.
The Quality of the Inspection
As with so many things in life, the quality of the results depends on the quality of the questions asked. The instruction to “read over the document and see if you notice anything” is obviously subjective and dependent on the situation at hand. Scientific experiments have shown that the results of an inspection are never complete: you will never find every error, and even the same reader discovers new problems on each reading.
That doesn’t mean you can skip inspections. They reveal errors early that you would otherwise carry through the rest of the project. I recommend the following to get the best possible results:
Six Best Practices for a Successful Inspection
- Goal definition: The goal needs to be clear first of all. Should the inspection be the final piece of quality control before implementation? Or is it a first inspection meant to ascertain whether the document is broadly complete, without paying too much attention to the details?
- Checklist: Clear criteria make the search for errors easier, and checklists are time-tested. Yes/no questions could be, e.g., “Is a use case diagram present?”, “Are the stakeholder names in chapter 2 consistent with the actors’ names in chapter 3?” or “Have all known errors been dealt with?” If the answer is yes, everything is okay; otherwise there is room for improvement. The quality criteria can be derived from the relevant standards. For requirements I use ISO/IEC/IEEE 29148:2011, “Systems and software engineering — Life cycle processes — Requirements engineering”, which specifies the following quality characteristics for requirements: necessary, implementation-free, unambiguous, consistent, complete, singular, feasible, traceable and verifiable. A checklist like this is a one-dimensional representation of a two-dimensional task: each quality criterion has to be checked against several chapters (text and images) of the document. I recommend working chapter by chapter and fully assessing each chapter before moving on to the next. The diagram chapters are generally so complex that you need to think about them very carefully before you can make an evaluation. The literature recommends that checklists be no longer than a page or two. I disagree: my checklists for specification templates with ten chapters are five to ten pages long. Why? A list of abstract quality criteria such as “complete” or “consistent” is better than nothing, but it makes the inspection results random and subjective. It is better to concretize the criteria for the template at hand, and that inevitably makes them more numerous.
Take consistency, for example: the content of a chapter must be consistent in itself and with several other chapters. For an expert it is easy to break this down; it is well known, from standards and the literature, what complete textual requirements or a complete use case diagram look like, and which consistency relationships must hold between them. I have consolidated this knowledge in my checklist and can use it efficiently in any inspection. If a criterion is formulated concretely enough, it is easy to work through. One would hope that the results are then repeatable and independent of the inspector, but this has proven not to be the case: even seemingly clear criteria are often subjective. Take, for example, a question about understandability such as “Are the names of all use cases self-explanatory?”
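To make the idea of concrete, chapter-by-chapter criteria tangible, here is a minimal Python sketch of a checklist as data plus evaluation. The criteria, chapter names and the `inspect` helper are invented for illustration; a real checklist would be far longer and specific to your template.

```python
# Sketch: a checklist is two-dimensional — each concrete yes/no
# criterion is checked against every chapter it applies to.
from dataclasses import dataclass, field

@dataclass
class Criterion:
    question: str                      # concrete yes/no question
    applies_to: set = field(default_factory=set)  # chapters to check it against

CHECKLIST = [
    Criterion("Is a use case diagram present?", {"use cases"}),
    Criterion("Are stakeholder names consistent with actor names?",
              {"stakeholders", "use cases"}),
]

def inspect(answers):
    """answers maps (question, chapter) -> True/False from the inspector;
    anything unanswered or answered 'no' becomes a finding."""
    findings = []
    for c in CHECKLIST:
        for chapter in sorted(c.applies_to):
            if not answers.get((c.question, chapter), False):
                findings.append(f"{chapter}: {c.question}")
    return findings

# Example run: one criterion not yet confirmed for the use case chapter.
answers = {
    ("Is a use case diagram present?", "use cases"): True,
    ("Are stakeholder names consistent with actor names?", "stakeholders"): True,
}
print(inspect(answers))
# → ['use cases: Are stakeholder names consistent with actor names?']
```

The point of the sketch is the data shape, not the tooling: each criterion carries the set of chapters it must be checked against, which is exactly the “one-dimensional list, two-dimensional task” problem described above.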
- No hope that the error list will be complete: Are we just trying to find the major errors? Then a quick inspection by individual experts is enough. But even half a dozen independent analysts could not find all the errors. This is no reason to fire anyone: a specification is an extremely complex document with so many cross-references in its content that it is humanly impossible to find every error. That is why other quality control measures are needed after the inspection.
- Multiple inspectors: There are semantic and syntactical inspection criteria. Syntax refers to the elements used (words, model symbols) and their permitted combinations, that is, the grammar of the textual and graphical notations. An external specification expert can assess this very well, often without fully understanding the content. The semantic criteria look at the quality of the content, e.g. whether the requirements are complete. Because specification expertise and domain expertise are usually spread across different people, multiple inspectors are necessary for this reason alone to cover all the criteria as well as possible. Another reason is completeness: several inspectors together find more than any one of them alone.
- Repetition: A document should go through quality assurance more than once, not just because continual changes threaten its quality and consistency, but also because different questions pop up at different points in time. And since you have probably overlooked errors on previous occasions, repetition gives you another go at them.
- Automation: Getting help from automated tools sounds tempting: just press a button and get a list of the errors. Unfortunately, the tools I have tested so far have not impressed me. Not all quality criteria can be checked automatically, only the syntactical ones for which a rule can be defined. And even then, too many false positives occur: findings that match the error pattern but are in fact correct. Even if a tool hands you a list of a hundred errors, you still have to check manually whether each one really needs to be fixed or is a false alarm. Only very lean evaluations are accurate enough to be more efficient than a manual check, e.g. the search for sentences with too many words, or for words to be avoided such as “any time”. Lists of terms to avoid can be found in the literature, e.g. in the standard ISO/IEC/IEEE 29148:2011 and in the CPRE Foundation Level¹. My recommendation: define the checklist first, and only then find out whether you can delegate individual criteria to a tool. It is more expensive to first go through all the evaluation possibilities a tool offers and then try to figure out which of that information is useful to you.
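The kind of lean evaluation mentioned above can be sketched in a few lines of Python. The word list and the 30-word sentence limit below are illustrative assumptions of mine, not values taken verbatim from the standard:

```python
# Sketch of a "lean" automated check: flag vague phrases and overly
# long sentences in a requirement. Word list and limit are examples.
import re

WEAK_WORDS = {"any time", "as appropriate", "if possible", "etc."}
MAX_WORDS = 30  # illustrative threshold, tune per template

def lint_requirement(text):
    findings = []
    lowered = text.lower()
    for phrase in sorted(WEAK_WORDS):          # deterministic order
        if phrase in lowered:
            findings.append(f"weak word: '{phrase}'")
    # naive sentence split on whitespace after ., ! or ?
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        words = sentence.split()
        if len(words) > MAX_WORDS:
            findings.append(f"long sentence ({len(words)} words): {sentence[:40]}...")
    return findings

print(lint_requirement("The system shall respond at any time, if possible."))
# → ["weak word: 'any time'", "weak word: 'if possible'"]
```

Even a toy linter like this illustrates the false-positive problem: “any time” may be perfectly precise in one requirement and vague in another, so every finding still needs a human decision.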
And finally, a slightly selfish tip to end things off: have an external professional read the document over and see whether he or she would be happy with it. He or she will develop a checklist based on your working question and add his or her own suggestions for improvement. An inspection like this is definitely worth your time, even if the expert isn’t able to find every single error.