Session 3.10 — Static Analysis, Review Success Factors & Module Summary
Module 3: Static Testing | Duration: 1 hour
Learning Objectives
- Define static analysis and distinguish it from manual reviews.
- Classify the main categories of static analysis checks and give examples of each.
- Name and compare common static analysis tools by language and check category.
- Identify the critical success factors for effective reviews and the common failure modes.
- Apply a requirements review checklist to a real document.
- Summarise the complete static testing module and relate it to the overall testing strategy.
Concept Overview
Static testing has two complementary pillars: manual reviews (human examination of artefacts, covered in Session 3.9) and static analysis (automated tool-based examination of code without execution). This session covers static analysis in depth, then addresses the organisational and human factors that determine whether a review programme succeeds or fails.
- Static analysis (definition): Automated tool examination of source code, byte-code, or models to detect patterns associated with defects, vulnerabilities, and standards violations — without running the code.
- Review success factors: Reviews only deliver their full value when the team has the right culture, a defined process, trained participants, and management support. Human factors are as important as process.
- The complete strategy: Static testing (reviews + analysis) and dynamic testing (black-box + white-box) together form the complete testing strategy. Static techniques shift defect detection left, reducing dynamic test cost.
Static Analysis
Static analysis is the automated examination of software artefacts (primarily source code) to detect defects, coding standard violations, security vulnerabilities, and quality metrics — without executing the program. It is performed by tools that parse, model, and reason about the code structure.
- Fully automated — no human reviewer required to run it, though results must be interpreted.
- Executes much faster than a manual review: a 10,000-line codebase can be analysed in seconds.
- Consistent and repeatable — the same code always produces the same findings.
- Produces both true positives (real defects) and false positives (reported issues that are not actual defects).
- Cannot understand intent, business context, or domain-specific correctness — these require human reviewers.
- Most effective when integrated into the CI/CD pipeline so it runs automatically on every commit.
- True positive: A real defect correctly reported by the tool. Needs developer attention and a fix.
- False positive: A warning raised by the tool that is not actually a defect. Must be reviewed, suppressed, or configured away. High false-positive rates cause teams to ignore tool output.
- False negative: A real defect not detected by the tool. No static analysis tool achieves zero false negatives — this is why manual reviews remain necessary.
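In practice, a false positive is usually handled with a narrowly scoped inline suppression rather than by silencing the check globally. A minimal sketch using Pylint's inline-disable comment syntax (the `retry` function and its parameters are invented for illustration):

```python
def retry(action, attempts=3):
    """Call 'action' until it returns True, at most 'attempts' times."""
    # Pylint typically reports W0612 (unused-variable) for 'i' here. The loop
    # counter is intentionally unused, so for this code the warning is a false
    # positive; the trailing comment suppresses it on this line only instead
    # of disabling the check project-wide. Renaming 'i' to '_' is the other
    # conventional way to configure the warning away.
    for i in range(attempts):  # pylint: disable=unused-variable
        if action():
            return True
    return False
```

Local suppression leaves an auditable trace in the code, so a later reviewer can see that the warning was considered and judged harmless.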
Categories of Static Analysis
| Category | What it checks | Example findings |
|---|---|---|
| Control Flow Analysis | Detects structural anomalies in program flow: unreachable code, missing return paths, infinite loops. | A function that can exit without returning a value; a loop with no termination condition. |
| Data Flow Analysis | Tracks how variables are defined, used, and killed. Finds: use before define, define-define (overwritten before use), define-kill (defined but never used). | Variable result assigned but never read; variable count used before initialisation. |
| Information Flow Analysis | Tracks how data flows from inputs to outputs. Detects taint propagation (unvalidated external data reaching sensitive operations). | User-supplied string passed directly to a SQL query without sanitisation (SQL injection risk). |
| Coding Standards Checking | Verifies adherence to naming conventions, indentation, comment requirements, and forbidden language features. | Function name not in camelCase; missing Javadoc for public method; use of goto statement. |
| Security Analysis (SAST) | Static Application Security Testing scans for vulnerability patterns from OWASP Top 10, CWE/SANS Top 25. | Hardcoded credentials, buffer overflow risk, XSS injection point, insecure random number generation. |
| Complexity Metrics | Computes cyclomatic complexity, coupling, cohesion, lines of code, depth of nesting. | Function with CC > 15 flagged for refactoring; class with 30+ dependencies flagged for decomposition. |
| Dependency Analysis | Examines import/include structures for circular dependencies, unused imports, and outdated libraries. | Circular dependency between modules A and B; imported library with known CVE vulnerability. |
| Clone Detection | Identifies copy-pasted code blocks that should be refactored into shared functions. | Same 15-line validation logic duplicated across 6 files — a fix in one location may be missed in others. |
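The three data-flow anomalies from the table above can all appear in a few lines. A contrived sketch (all identifiers are invented):

```python
def report(items):
    # Define-define anomaly: 'status' is assigned twice with no read in
    # between, so the first value can never matter.
    status = "pending"
    status = "running"

    # Define-kill anomaly: 'unused_total' is computed but never read anywhere.
    unused_total = sum(items)

    # Use-before-define anomaly: on the empty-list path, 'label' is read
    # before any assignment, raising UnboundLocalError at runtime.
    if items:
        label = "ok"
    return f"{label} {status}"
```

A data-flow analyser reports all three statically; a dynamic test only exposes the third one, and only when exercised with an empty list.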
Static Analysis Worked Example
Consider the following Python function with several embedded defects:
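A minimal sketch consistent with the findings table below — the exact original code is not shown here, so function and variable names are assumptions, and the numbered comments mark the lines the table refers to:

```python
def calculate_average(scores):
    """Return the average score, or -1 if it is not a valid percentage."""
    total = 0
    result = 0                   # line 4 (W1): assigned but never read
    for score in scores:
        total += score
    count = len(scores)
    average = total / count      # line 8 (E1): ZeroDivisionError when scores is empty
    if average < 0:
        return -1
    if average > 100:
        return -1
    # line 13 (W2): falls through here, implicitly returning None
    # for the entire valid range 0 <= average <= 100
```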
| ID | Line | Category | Severity | Finding |
|---|---|---|---|---|
| W1 | 4 | Data Flow (define-kill) | Warning | Variable result is assigned but never read. Likely dead code or missing logic. |
| E1 | 8 | Control Flow / Runtime Error | Error | Division by count when scores is empty (count=0). Will raise ZeroDivisionError at runtime. |
| W2 | 13 | Control Flow (missing return) | Warning | Function returns None implicitly for the range 0 ≤ average ≤ 100. Should return average explicitly. |
A dynamic test with a non-empty list of valid scores would achieve 100% statement coverage and pass all assertions — yet E1 (division by zero for empty input) would remain undetected until a user submits an empty score list in production. Static analysis finds this structural risk without any test execution.
Static Analysis Tools
| Language | Tool | Primary Focus |
|---|---|---|
| Python | Pylint, Flake8, Bandit, Mypy | Standards, data flow, security (Bandit), type checking (Mypy) |
| Java | SpotBugs, Checkstyle, PMD, SonarQube | Bug patterns (SpotBugs), standards (Checkstyle), complexity (PMD), all-in-one (SonarQube) |
| JavaScript | ESLint, JSHint, Semgrep | Standards, best practices, security patterns |
| C / C++ | Cppcheck, Clang-Tidy, PC-lint | Memory errors, undefined behaviour, MISRA compliance |
| Multi-language | SonarQube, Semgrep, CodeQL, Coverity | Comprehensive SAST, security, compliance, CI/CD integration |
| Security-focused | OWASP Dependency-Check, Snyk, Veracode | Known CVE detection in dependencies, OWASP Top 10 patterns |
- Pre-commit hooks: Run fast linters (Flake8, ESLint) locally before code is committed. Catches trivial issues instantly.
- Pull request checks: Run comprehensive analysis (SonarQube, Semgrep) on every PR. Block merges that drop quality below threshold.
- Nightly / release scans: Run full SAST and dependency checks. Slower tools that don't fit in the PR pipeline.
- Quality gates: Define thresholds: e.g., no new critical issues, coverage ≥ 80%, CC ≤ 15 per function. Pipeline fails if gates are not met.
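Stripped of tooling, a quality gate is just a threshold check over the summary of one analysis run. The sketch below is illustrative only: the findings format and the default thresholds are assumptions, not the API of any particular tool.

```python
def quality_gate(run, max_new_critical=0, min_coverage=0.80, max_cc=15):
    """Return (passed, reasons) for one analysis run summarised as a dict.

    'run' is assumed to have keys: new_critical, coverage, worst_cc.
    """
    reasons = []
    if run["new_critical"] > max_new_critical:
        reasons.append(f"{run['new_critical']} new critical issues "
                       f"(max {max_new_critical})")
    if run["coverage"] < min_coverage:
        reasons.append(f"coverage {run['coverage']:.0%} "
                       f"below {min_coverage:.0%}")
    if run["worst_cc"] > max_cc:
        reasons.append(f"cyclomatic complexity {run['worst_cc']} "
                       f"exceeds {max_cc}")
    # An empty reasons list means every gate passed; the CI job would
    # fail the pipeline whenever reasons is non-empty.
    return (not reasons, reasons)
```

In a CI pipeline the returned reasons would be printed to the build log and a non-empty list would fail the job.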
Manual Reviews vs Static Analysis
| Aspect | Manual Review | Static Analysis Tool |
|---|---|---|
| Speed | Slow (hours per document) | Fast (seconds to minutes) |
| Consistency | Varies by reviewer skill and attention | Fully consistent, same result every run |
| Finds ambiguity / omissions | Yes (human language understanding) | No (tools cannot understand intent) |
| Finds code logic errors | Yes (with preparation and expertise) | Yes (data flow, control flow analysis) |
| Finds security vulnerabilities | Partially (depends on reviewer expertise) | Yes (SAST tools specialise in this) |
| Applies to non-code artefacts | Yes (requirements, design, test plans) | No (mostly code and models) |
| False positives | Low (humans understand context) | Can be high (requires tuning) |
| Knowledge sharing | High (team discussions) | None |
| Cost per run | High (human time) | Very low after initial setup |
| Best used for | Requirements, design, complex logic, architecture | Code standards, security, complexity metrics |
Use static analysis tools to automatically handle standards compliance, common bug patterns, and security checks — freeing human reviewers to focus on what only humans can do: understanding intent, detecting architectural flaws, evaluating requirements completeness, and assessing testability.
Review Success Factors
Research into industrial review programmes identifies eight factors that consistently predict review effectiveness and adoption:
Why Reviews Fail
Review Checklists
Checklists are the most effective tool for improving review yield. They encode accumulated experience about defect patterns into a structured reminder list that every reviewer applies to each document.
| # | Check | Defect type if violated |
|---|---|---|
| 1 | Is each requirement statement unambiguous (one interpretation only)? | Ambiguity |
| 2 | Is each requirement verifiable (can a test confirm it)? | Untestability |
| 3 | Are all terms defined in a glossary or the document itself? | Ambiguity / Omission |
| 4 | Does any requirement contradict another in this or related documents? | Inconsistency |
| 5 | Is there a requirement for every stated user need / stakeholder expectation? | Omission |
| 6 | Does every requirement have a unique identifier? | Traceability omission |
| 7 | Are performance requirements expressed with measurable criteria (units, thresholds)? | Ambiguity / Untestability |
| 8 | Are error handling and exception scenarios specified? | Omission |
| 9 | Are requirements free of implementation detail (what, not how)? | Gold-plating / Design leak |
| 10 | Is each requirement consistent with applicable laws, regulations, and standards? | Compliance omission |
| # | Check | Defect type if violated |
|---|---|---|
| 1 | Are all variables initialised before use? | Data flow (use before define) |
| 2 | Is every function return value checked or explicitly ignored? | Error handling omission |
| 3 | Are division operations protected against zero denominators? | Runtime error risk |
| 4 | Are all external inputs validated before use? | Security / robustness |
| 5 | Are resources (files, connections, memory) always released in all paths? | Resource leak |
| 6 | Are loop termination conditions correct and always reachable? | Infinite loop / off-by-one |
| 7 | Do function and variable names follow the agreed naming convention? | Standards violation |
| 8 | Is cyclomatic complexity ≤ 10 for all functions? | Complexity / maintainability |
| 9 | Are hardcoded values replaced with named constants or configuration? | Maintainability |
| 10 | Are error messages informative and do they avoid leaking security details? | Security / usability |
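Several of the checklist items above can be seen failing in one short function. A contrived sketch (names and the config format are invented); the comments map each flaw to its checklist item:

```python
import json

def load_ratio(p):                # item 7: 'p' is not a descriptive name
    f = open(p)                   # item 5: file never closed if an error occurs below
    data = json.load(f)           # item 2: no handling of invalid or missing JSON
    ratio = data["hits"] / data["misses"]  # item 3: no guard for a zero denominator
    f.close()
    return ratio
```

A disciplined pass over items 1–10 would turn this into a `with open(...)` block, validate the parsed data, and guard the division before it executes.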
Static Testing Module Summary
| Technique | Type | Formality | Best artefact | Key output |
|---|---|---|---|---|
| Informal Review | Manual | None | Drafts, code snippets | Verbal / informal comments |
| Walkthrough | Manual | Low | Requirements, design | Issue list (informally logged) |
| Technical Review | Manual | Medium | Architecture, test plans | Formal defect log |
| Formal Inspection | Manual | High | Safety-critical specs, code | Inspection log + metrics |
| Linting / Style Check | Automated | Tool | Source code | Standards violation report |
| Data Flow Analysis | Automated | Tool | Source code | Variable anomaly report |
| Control Flow Analysis | Automated | Tool | Source code | Unreachable code, missing returns |
| SAST | Automated | Tool | Source code | Security vulnerability report |
| Complexity Metrics | Automated | Tool | Source code | CC, coupling, cohesion report |
| Phase | Primary Technique | Purpose |
|---|---|---|
| Requirements phase | Walkthrough / Technical Review | Eliminate ambiguity, omissions, and untestable requirements before any code is written. |
| Design phase | Technical Review / Inspection | Verify architectural soundness, interface consistency, and compliance with non-functional requirements. |
| Coding phase | Static analysis + Code Review | Enforce standards, detect logic errors, identify security risks, measure complexity. |
| Unit testing phase | White-box dynamic testing | Verify code correctness for all paths, branches, and conditions. |
| Integration/System testing | Black-box dynamic testing | Verify end-to-end behaviour against requirements. |
| Acceptance testing | Black-box + Exploratory | Validate fitness for purpose from the user/customer perspective. |
Common Mistakes
- Untuned tools: Teams that don't tune their static analysis tools accumulate thousands of false-positive warnings. Developers learn to ignore all tool output — including real defects.
- Over-reliance on tools: Static analysis cannot find requirement ambiguity, architectural unsuitability, or business logic incorrectness. Tools complement, not replace, human judgment.
- Reviewing without checklists: Reviewers without checklists tend to focus on surface issues (typos, formatting) and miss systematic defect patterns (missing error handling, interface mismatches).
- Weaponised review metrics: Publishing defect-found counts per author causes authors to avoid reviews. Metrics should only be used for process quality measurement, never individual evaluation.
Class Activity — Static Analysis + Checklist Review
Part A — Static Analysis (15 min): You are given the following Python function. Apply the static analysis categories manually (as if you were the tool) and identify all findings:
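A sketch of the kind of function intended here, consistent with the model answers at the end of this session. The names, the credential value, and the SQL query are invented; the numbered comments match the line references in the model answers, not literal line positions:

```python
import sqlite3

DB_PASSWORD = "admin123"              # answer line 2: hardcoded credential

def apply_discount(user_id, amount, code):
    discount = 0                      # answer line 3: see the define-kill finding
    if code == "FLAT50":
        discount = 50
    total = amount - discount
    conn = sqlite3.connect(":memory:")  # in-memory DB is an assumption for this sketch
    cursor = conn.cursor()
    # answer line 10: user_id concatenated straight into SQL (injection risk)
    cursor.execute("SELECT balance FROM users WHERE id = " + user_id)
    if amount == 0:                   # answer lines 11-12: guard placed after the work
        return None
    return total
```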
- Identify all static analysis findings by category (security, data flow, control flow, coding standards).
- Classify each finding as Error (E), Warning (W), or Info (I) by severity.
- State which tool category would detect each finding.
Part B — Checklist Review (10 min): Apply the Requirements Review Checklist (10 items) to the following requirement (representative wording): "REQ-12: The system shall process card payments securely and return the result to the calling application."
- Work through each checklist item and mark Pass / Fail / N-A.
- Log each Fail as a defect with type classification.
- Rewrite REQ-12 to fix all identified defects.
- Part A (4 marks): At least 4 findings correctly identified, classified by category and severity.
- Part A (2 marks): Correct tool category mapped to each finding.
- Part B (2 marks): At least 3 checklist failures correctly identified and classified.
- Part B (2 marks): Rewritten requirement addresses all identified defects without introducing new ones.
- Line 2: Hardcoded credential (SAST / Security) — DB_PASSWORD in source code. Severity: Error.
- Line 10: SQL Injection (SAST / Information Flow) — user_id concatenated directly into SQL. Severity: Error.
- Line 3: Define-kill (Data Flow) — `discount` initialised to 0 but reassigned before use only conditionally; if `code == "FLAT50"` is never true, `discount` is defined-and-killed. Severity: Warning.
- Line 11–12: Unreachable guard (Control Flow) — the `if amount == 0` check comes after the calculation; it should be at function entry. Severity: Warning / logic error.
- REQ-12 defects: "securely" is ambiguous/untestable; "calling application" is not defined (omission); no performance criterion; no error-handling specification (omission); no supported card types (omission).
Exit Ticket
- Name two defect types that static analysis tools can find that manual code reviews frequently miss, and two defect types that manual reviews find that tools cannot.
- A team runs their static analysis tool and receives 800 warnings on a legacy codebase. They decide to suppress all warnings. What is the risk of this decision, and what should they do instead?
- A project manager says: "We don't have time for reviews. We'll just test more." Provide two specific arguments, with data or examples, that counter this position.
Summary & Assignment
Static analysis automates detection of structural defects, coding standard violations, security vulnerabilities, and complexity issues — consistently and at speed. Manual reviews complement tools by capturing what tools cannot: intent, ambiguity, architecture, and business correctness. Together they form the static testing pillar of a comprehensive quality strategy. The effectiveness of both depends on human factors: clear objectives, trained participants, checklists, constructive culture, and management commitment.
This session completes the Module 3 Static Testing block. Combined with the white-box dynamic testing sessions (3.5–3.8) and black-box techniques (3.1–3.4), you now have the full testing technique toolkit for software quality assurance.