Session 3.9 — Static Testing: Reviews, Walkthroughs & Inspections
Module 3: Static Testing | Duration: 1 hour
Learning Objectives
- Define static testing and distinguish it from dynamic testing.
- Explain the economic and quality benefits of early defect detection through static techniques.
- Describe the four types of reviews: informal review, walkthrough, technical review, and formal inspection.
- Identify the phases of a Fagan inspection and the roles involved.
- Select the appropriate review type for a given document, risk level, and project context.
What Is Static Testing?
Static testing examines software artefacts — source code, requirements documents, design specifications, test plans, and user stories — without executing the software. Defects are found by reading, analysing, and reasoning about the work product rather than by running it.
Static testing can be applied to any artefact that can be read: requirements specs, design diagrams, code, test cases, configuration files, and contracts.
The cost of fixing a defect rises exponentially the later it is found. A requirement ambiguity fixed in review costs a fraction of the same defect fixed after system testing.
Static testing takes two main forms:
- Reviews — human-led examination of documents (this session).
- Static analysis — automated tool analysis of code structure and patterns (Session 3.10).
Artefacts commonly reviewed include:
- Requirements specifications and user stories
- Architecture and design documents
- Source code (logic, style, security)
- Test plans, test cases, and test scripts
- User manuals, help documentation
- Web pages, configuration scripts, database schemas
Static vs Dynamic Testing
Static testing:
- No software execution required.
- Applicable to all artefacts (requirements, design, code, tests).
- Finds defects before code is written.
- Detects: ambiguity, inconsistency, omissions, logic errors, coding standards violations.
- Methods: reviews, walkthroughs, inspections, static analysis tools.
- Cost of defect removal: very low (often 10x–100x cheaper than a post-test fix).
Dynamic testing:
- Requires a running, executable software system.
- Applicable only to executable artefacts.
- Finds defects by observing failures during execution.
- Detects: incorrect outputs, performance bottlenecks, security vulnerabilities at runtime.
- Methods: unit, integration, system, acceptance testing.
- Cost of defect removal: higher (fix, rebuild, re-deploy, re-test).
Static testing eliminates entire classes of defects before they reach code, dramatically reducing dynamic test failures. Dynamic testing then validates that the correctly specified (post-review) system behaves as intended at runtime. Industry data shows that a requirement defect found in review costs roughly 1x to fix; the same defect found in system testing costs 10x–100x.
Benefits of Static Testing
- Early defect detection: Defects in requirements and design are found before any code is written, preventing them from propagating to later phases where they are far more expensive to fix.
- Author learning: Authors receive structured feedback, improving their understanding of quality standards and reducing the number of defects they introduce in future work.
- Knowledge sharing: Reviews expose team members to parts of the system they would not otherwise see, distributing knowledge and reducing single points of failure in the team.
- Untestable requirements exposed: Reviewers can identify requirements that are untestable as written — vague, unmeasurable, or contradictory — which dynamic testing could never directly detect.
- Reduced rework: Fewer late-stage defects mean less rework, fewer emergency patches, and more predictable release schedules.
- Standards compliance: Reviews can verify adherence to coding standards, naming conventions, regulatory requirements, and documentation templates.
| Phase defect is found | Relative cost to fix |
|---|---|
| Requirements review | 1x |
| Design review | 2x–5x |
| Code review / unit test | 5x–10x |
| System / integration test | 10x–25x |
| Acceptance test | 25x–50x |
| Production (post-release) | 50x–200x |
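These multipliers compound quickly in practice. Below is a minimal arithmetic sketch of the effect; the multipliers are midpoints of the ranges in the table above, and the defect count and unit cost are hypothetical assumptions chosen for illustration.

```python
# Illustrative only: multipliers are midpoints of the table ranges;
# the defect count and unit cost are hypothetical assumptions.
COST_MULTIPLIER = {
    "requirements_review": 1,
    "design_review": 3,
    "code_review": 7,
    "system_test": 17,
    "acceptance_test": 37,
    "production": 125,
}

defects = 40        # hypothetical requirement defects in one project
unit_cost = 200     # assumed cost of a single 1x fix (e.g., EUR)

# Scenario A: all defects caught in requirements review.
cost_early = defects * COST_MULTIPLIER["requirements_review"] * unit_cost
# Scenario B: the same defects slip through to system testing.
cost_late = defects * COST_MULTIPLIER["system_test"] * unit_cost

print(f"Fixed in requirements review: {cost_early}")  # 8000
print(f"Fixed in system testing:      {cost_late}")   # 136000
```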
Types of Reviews
Reviews range from completely informal (a quick check by a colleague) to highly formal (a structured Fagan inspection with defined roles and entry/exit criteria). The appropriate formality level depends on the criticality of the artefact, available time, and team maturity.
| Type | Formality | Typical artefact | Key characteristic |
|---|---|---|---|
| Informal Review | Lowest | Draft code, early requirements | No defined process; quick feedback from one or two colleagues. |
| Walkthrough | Low–Medium | Requirements, design specs | Author leads; focus on education and finding defects together. |
| Technical Review | Medium | Design, architecture, test plans | Peer-driven; documented defects; no author leading. |
| Formal Inspection | Highest | Critical requirements, safety-critical code | Structured process (Fagan); defined roles; entry/exit criteria; metrics collected. |
Informal Review
An informal review has no defined process, roles, or documentation requirements. It is the most common review type in practice — often as simple as asking a colleague to "take a look" at a document or code snippet.
- No formal meeting structure or agenda required.
- No defined roles (author shows, reviewer comments).
- Defects are communicated verbally or via informal comments (chat, email, inline code comments).
- No entry or exit criteria; no formal defect log.
- Very low overhead; results depend heavily on reviewer's skill and time commitment.
When to use: Early drafts, prototypes, small code changes, low-risk documentation. Useful when a quick second opinion is needed without scheduling overhead.
Limitations: No documentation means defects can be forgotten. No systematic process means important areas may be skipped. Not suitable for safety-critical or compliance-mandated artefacts.
Walkthrough
In a walkthrough, the author presents their work product to a group of reviewers, guiding them step-by-step through the document. The primary goals are defect detection, knowledge sharing, and consensus building.
- The author leads the session and explains the work product.
- Reviewers ask questions and raise concerns as the author walks through.
- Focus is on understanding and education as well as defect finding.
- A scribe records defects or open issues during the session.
- May or may not have formal entry/exit criteria; documentation is light.
- No formal follow-up process; author decides how to handle raised issues.
When to use: Requirements specs, design documents, complex algorithms, and familiarising new team members with a system module.
Limitations: Author-led presentation can create bias (the author may unconsciously skip over weak areas). Without preparation time, reviewers may not find deep defects. Less systematic than an inspection.
A typical walkthrough agenda (roughly one hour):
- Brief introduction of the document and its context (5 min).
- Author walks through each section, explaining purpose and logic (30–40 min).
- Open questions, issues, and alternative suggestions raised by reviewers (10–15 min).
- Scribe reads back recorded issues; author confirms (5 min).
- Author takes action on raised issues and follows up.
Technical Review
A technical review is a peer-led examination of a work product by technically qualified reviewers. Unlike a walkthrough, the author does not lead the session. The focus is on finding technical defects, deviations from standards, and evaluating fitness for purpose.
- Led by a moderator, not the author.
- Reviewers prepare individually before the meeting (pre-review checklist).
- Defects are documented in a formal defect list during and after the session.
- May have entry criteria (e.g., "document must pass spell-check and conform to template").
- Output: a documented list of issues categorised by severity and type.
- Author may or may not be present in the meeting.
When to use: Architecture documents, interface specifications, high-level design, test plans, and configuration documents where technical correctness matters more than education.
Strengths: The moderator-led structure prevents author bias, pre-prepared reviewers find more defects, and documentation enables tracking and follow-up.
Formal Inspection
A formal inspection (also called a Fagan inspection, after Michael Fagan, who formalised it at IBM in 1976) is the most rigorous and structured review type. It has a defined six-phase process, mandatory roles, and entry and exit criteria, and it produces measurable metrics.
- Strictly defined process with mandatory phases (planning, overview, preparation, meeting, rework, follow-up).
- Formal roles with specific responsibilities (moderator, author, reader, reviewer, scribe).
- Entry criteria must be met before the inspection starts (e.g., document is complete, reviewers have enough time to prepare).
- Exit criteria must be met before the inspection is considered complete (e.g., all major defects reworked and verified).
- Metrics collected: defect density, review rate, defect type distribution, re-inspection rate.
- Author is present but does not present — the reader reads the document aloud or paraphrases it.
- Process improvement feedback is provided to the authoring team after completion.
When to use: Safety-critical requirements, medical device software specifications, avionics documents, security-sensitive code, and any artefact where defects have high cost or regulatory impact.
Evidence: IBM studies showed formal inspections found 82% of all defects before execution, and organisations using inspections consistently reported a 10:1 return on inspection effort versus late defect-fix cost.
Fagan Inspection Process — Six Phases
1. Planning. The moderator checks entry criteria, selects reviewers, distributes the work product and supporting materials, and schedules the inspection meeting. Entry criteria typically include: the document is complete, conforms to the template, the author has self-reviewed, and reviewers have sufficient preparation time (typically 1 hour per 10 pages).
2. Overview. The author provides a brief orientation to the work product — its purpose, scope, context, and relationship to other documents. This is not the author leading the review; it is background context only. Reviewers who are already familiar with the material may skip this phase.
3. Preparation. Each reviewer independently examines the work product against checklists, standards, and their domain knowledge, logging potential defects, questions, and issues. The optimal review rate is approximately 100–200 lines of code per hour or 5–10 pages of requirements per hour.
4. Inspection meeting. The moderator leads the meeting. The reader presents the document (paraphrasing, not reading verbatim), reviewers raise their logged issues, and the scribe records all defects in an inspection log with type and severity. The team does not try to fix defects during the meeting — only identify and record them. The maximum recommended meeting duration is 2 hours.
5. Rework. The author corrects all defects identified in the inspection log, addressing each one systematically. The author does not decide unilaterally to ignore a defect — all open items must be resolved or explicitly deferred with justification.
6. Follow-up. The moderator verifies that all defects have been correctly resolved. If major defects were found or significant portions were rewritten, a re-inspection of the modified sections may be required. Metrics are compiled (defects found per page/hour, defect type distribution) for process improvement.
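Entry and exit criteria act as simple pass/fail gates around the process. The sketch below shows one minimal way a team might encode them as a checklist; the field names and thresholds are assumptions for illustration, not part of Fagan's published method.

```python
from dataclasses import dataclass

@dataclass
class InspectionState:
    # Entry-side facts, checked during Planning (illustrative fields).
    document_complete: bool
    conforms_to_template: bool
    author_self_reviewed: bool
    prep_hours_scheduled: float
    pages: int
    # Exit-side facts, checked during Follow-up.
    open_major_defects: int
    rework_verified: bool

def entry_criteria_met(s: InspectionState) -> bool:
    # Rule of thumb from above: roughly 1 hour of preparation per 10 pages.
    enough_prep = s.prep_hours_scheduled >= s.pages / 10
    return (s.document_complete and s.conforms_to_template
            and s.author_self_reviewed and enough_prep)

def exit_criteria_met(s: InspectionState) -> bool:
    # All major defects reworked and verified before the inspection closes.
    return s.open_major_defects == 0 and s.rework_verified
```

The metrics compiled during follow-up are summarised in the table below.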
| Metric | Formula / Description | Purpose |
|---|---|---|
| Defect density | Defects found ÷ pages (or KLOC) | Baseline for future inspections; process improvement indicator. |
| Preparation rate | Pages reviewed ÷ preparation hours | Ensures reviewers don't rush; recommended ≤ 10 pages/hour. |
| Inspection rate | Pages covered ÷ meeting hours | Recommended ≤ 5 pages/hour in the meeting. |
| Defect type distribution | % of defects by category (omission, wrong, extra) | Identifies systemic process problems in requirements or design. |
| Re-inspection rate | % of inspections requiring a second pass | High rate signals poor author preparation or complex artefacts. |
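Because these metrics are simple ratios, they are easy to automate. The sketch below computes them from a hypothetical inspection log; the tuple format and category names are assumptions for illustration, not part of any standard tool.

```python
from collections import Counter

# Hypothetical inspection log: (defect category, severity) pairs
# as the scribe might record them.
inspection_log = [
    ("omission", "major"), ("ambiguity", "minor"), ("omission", "major"),
    ("wrong", "major"), ("ambiguity", "minor"), ("extra", "minor"),
]
pages = 10           # size of the inspected document
prep_hours = 2.0     # individual preparation time per reviewer
meeting_hours = 2.0  # inspection meeting duration

defect_density = len(inspection_log) / pages   # 0.60 defects/page
preparation_rate = pages / prep_hours          # 5.0 pages/hour (target <= 10)
inspection_rate = pages / meeting_hours        # 5.0 pages/hour (target <= 5)
type_distribution = Counter(cat for cat, _ in inspection_log)

print(f"Defect density:    {defect_density:.2f} defects/page")
print(f"Preparation rate:  {preparation_rate:.1f} pages/hour")
print(f"Inspection rate:   {inspection_rate:.1f} pages/hour")
print(f"Type distribution: {dict(type_distribution)}")
```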
Review Types Compared
| Criterion | Informal | Walkthrough | Technical Review | Formal Inspection |
|---|---|---|---|---|
| Led by | Author / Anyone | Author | Moderator (not author) | Moderator (not author) |
| Preparation required | No | Optional | Yes | Mandatory (logged) |
| Defect log | No | Optional | Yes | Formal, mandatory |
| Entry / Exit criteria | None | None | Optional | Mandatory |
| Metrics collected | No | No | Partially | Yes (mandatory) |
| Roles defined | No | Author + Reviewers | Moderator + Reviewers | All 5 roles mandatory |
| Follow-up | Informal / none | Author decides | Moderator verifies | Formal verification step |
| Cost & overhead | Very low | Low | Medium | High |
| Defect yield | Low | Low–Medium | Medium | Highest |
| Best for | Drafts, low risk | Knowledge sharing | Design & architecture | Safety-critical artefacts |
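As a study aid, the "Best for" row can be condensed into a selection heuristic. The function below is a rough sketch of this session's guidance, not an authoritative algorithm; the category strings are assumptions chosen for the example.

```python
def choose_review_type(criticality: str, goal: str, time_available: str) -> str:
    """Rough heuristic distilled from the comparison table above."""
    if criticality == "safety-critical":
        return "Formal Inspection"   # highest rigour and defect yield
    if goal == "knowledge-sharing":
        return "Walkthrough"         # author-led, education-focused
    if criticality == "high" and time_available != "minimal":
        return "Technical Review"    # moderator-led, documented defect list
    return "Informal Review"         # quick, low-overhead second opinion

print(choose_review_type("high", "defect-finding", "normal"))
# Technical Review
```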
Roles in a Formal Review
Moderator: Plans and leads the review meeting. Ensures the process is followed, manages time, keeps the discussion focused on defect identification (not fixing), and performs follow-up verification. Must be trained in the review process.
Author: Wrote the work product under review. Answers questions about intent, clarifies ambiguities during the meeting, and performs rework after the inspection. Should not defend the document — must remain receptive to feedback.
Reader: In a formal inspection, reads or paraphrases the work product during the meeting. Separating presentation from authorship reduces defensive reactions and forces the document to stand on its own.
Reviewer: A subject-matter expert who prepares individually, logs potential defects, and raises them during the meeting. One reviewer may be the customer or user representative; multiple reviewers examine the document from different perspectives.
Scribe: Records all defects, issues, and decisions raised during the meeting in the inspection log. Must be accurate and neutral — not judging whether an issue is valid, only recording it. The moderator or author may also serve as scribe in less formal reviews.
Manager: Ensures reviews are scheduled, resourced, and planned into the project, and reviews the process-improvement metrics. Does NOT participate in the inspection meeting itself, to avoid status-hierarchy pressure on reviewers.
Defect Types Found by Reviews
Reviews find defect types that dynamic testing cannot detect, particularly at the requirements and design level:
| Defect Type | Definition | Example |
|---|---|---|
| Omission | A required behaviour, constraint, or scenario is missing from the artefact. | Requirement says "user can log in" but does not specify what happens after 5 failed attempts. |
| Ambiguity | A statement can be interpreted in more than one way by different readers. | "The system shall process requests quickly" — "quickly" is not measurable. |
| Inconsistency | Two statements in the same or different documents contradict each other. | Requirement 3.1 says timeout = 30s; requirement 5.4 says timeout = 60s for the same feature. |
| Incorrect fact | A statement that is factually wrong or contradicts established domain knowledge. | The formula for VAT calculation in the spec is wrong. |
| Extra (gold-plating) | A feature or constraint that was added but is not required by any stakeholder. | Developer added an unrequested export-to-PDF function that is never tested. |
| Untestability | A requirement that cannot be verified by any test or measurable criterion. | "The system shall be user-friendly" — no measurable criterion defined. |
| Standards violation | Code or document does not conform to agreed coding standards, naming conventions, or templates. | Variable named x instead of following the agreed camelCase convention. |
| Interface mismatch | Caller and callee have mismatched parameter types, order, or units. | Module A sends distance in miles; Module B expects kilometres. |
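The last row, interface mismatch, is one defect type that code conventions can also help surface. A minimal sketch, assuming two hypothetical modules and using distinct wrapper types so that a units mix-up is visible to a reviewer (and to a static type checker):

```python
from typing import NewType

Miles = NewType("Miles", float)
Kilometres = NewType("Kilometres", float)

def miles_to_km(d: Miles) -> Kilometres:
    return Kilometres(d * 1.60934)

# Hypothetical "Module B": expects kilometres, and its signature says so.
def schedule_delivery(distance: Kilometres) -> None:
    print(f"Routing over {distance:.1f} km")

# Hypothetical "Module A": produces miles. Passing the raw reading would be
# exactly the mismatch from the table; the explicit conversion makes the
# units visible at the call site.
reading = Miles(12.0)
schedule_delivery(miles_to_km(reading))   # Routing over 19.3 km
```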
Common Mistakes
- Fixing defects during the meeting: The inspection meeting is for identification only. Fixing during the meeting wastes everyone's time, derails the agenda, and produces unreviewed fixes.
- Author defensiveness: Authors who argue against every defect raised discourage reviewers from raising issues. The moderator must enforce a no-defence rule.
- Skipping individual preparation: Reviewers who attend without preparing individually find far fewer defects. Preparation is the most defect-productive phase of an inspection.
- Reviewing too fast: Reviewing more than 10 pages per hour causes reviewer fatigue and misses defects. Limit inspection scope to what can be reviewed thoroughly in a 2-hour meeting.
- Skipping follow-up: Rework without verification means corrected defects are never confirmed, and some rework introduces new defects that are never caught.
- Dropping reviews under schedule pressure: Cutting inspections "to save time" is a false economy. Defects found later cost exponentially more to fix than the inspection time saved.
Class Activity — Mini Inspection
You are given the following requirements extract for a library book reservation system:
REQ-01: The system shall allow registered users to reserve available books online.
REQ-02: A user may hold a maximum of 3 reservations at any time. A premium user may hold a maximum of 5 reservations at any time.
REQ-03: The system shall send a notification when the reserved book becomes available quickly.
REQ-04: Reservations shall expire after 7 days. Reservations for premium users shall expire after 14 days.
REQ-05: If a user fails to collect the reserved book, the reservation is cancelled and the user is penalised appropriately.
- In groups of 4–5, assign roles: Moderator, Author, Reader, Reviewer × 2, Scribe.
- Each reviewer independently identifies defects in the requirements above (5 min individual prep).
- Conduct a 10-minute inspection meeting: Reader reads each requirement, Reviewers raise defects, Scribe logs them.
- Classify each defect by type (Omission, Ambiguity, Inconsistency, Untestability, etc.).
- Moderator computes: total defects found, defects per requirement, and the most common defect type (a computation sketch follows the sample defects below).
Marking (10 marks total):
- 2 marks: Correct role assignment and adherence to the Fagan process during the mini-inspection.
- 4 marks: Defects correctly identified (at least 5 defects expected in REQ-01 to REQ-05).
- 2 marks: Correct defect type classification for each found defect.
- 2 marks: Inspection metrics computed (defect count, density, most common type).
Sample defects (instructor reference):
- REQ-03: "quickly" is ambiguous and untestable. What notification channel? What time limit?
- REQ-05: "penalised appropriately" is ambiguous. What is the penalty? Duration? Number of offences?
- REQ-01: Omission — no specification of what "registered" means or how registration is validated.
- REQ-02 & REQ-04: Implicit dependency on user type (regular vs premium) — is "premium user" defined anywhere? Omission of definition.
- REQ-04: Omission — what happens when a reservation expires? System notifies user? Auto-returns to available pool?
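The moderator's metrics can be computed mechanically. A short sketch using the sample defects above as the scribe's log; the (requirement, type) tuple format is an assumption for illustration.

```python
from collections import Counter

# (requirement, defect type) pairs based on the sample defects above.
scribe_log = [
    ("REQ-03", "Ambiguity"), ("REQ-03", "Untestability"),
    ("REQ-05", "Ambiguity"),
    ("REQ-01", "Omission"),
    ("REQ-02", "Omission"),
    ("REQ-04", "Omission"),
]

total = len(scribe_log)
per_requirement = Counter(req for req, _ in scribe_log)
most_common_type, count = Counter(t for _, t in scribe_log).most_common(1)[0]

print(f"Total defects:    {total}")                        # 6
print(f"Per requirement:  {dict(per_requirement)}")
print(f"Most common type: {most_common_type} ({count})")   # Omission (3)
```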
Exit Ticket
- What is the fundamental difference between static testing and dynamic testing? Give one example of a defect each finds that the other cannot.
- A software team says: "We found 15 defects in a 10-page requirements document during inspection. Is this good or bad?" What additional information would you need to evaluate this?
- During an inspection meeting, a reviewer proposes rewriting a requirement on the spot to fix an ambiguity. What should the moderator do, and why?
Summary & Assignment
Static testing examines artefacts without execution, finding defects that dynamic testing cannot reach — especially at the requirements and design level. Reviews range from quick informal checks to rigorous Fagan inspections with defined roles, phases, entry/exit criteria, and metrics. The appropriate review type depends on artefact criticality, team maturity, and project constraints. Defect cost amplification makes early static testing one of the highest-ROI activities in software development.