Session 3.10 — Static Analysis, Review Success Factors & Module Summary

Module 3: Static Testing | Duration: 1 hour

Learning Objectives
  • Define static analysis and distinguish it from manual reviews.
  • Classify the main categories of static analysis checks and give examples of each.
  • Name and compare common static analysis tools by language and check category.
  • Identify the critical success factors for effective reviews and the common failure modes.
  • Apply a requirements review checklist to a real document.
  • Summarise the complete static testing module and relate it to the overall testing strategy.

Concept Overview

Static testing has two complementary pillars: manual reviews (human examination of artefacts, covered in Session 3.9) and static analysis (automated tool-based examination of code without execution). This session covers static analysis in depth, then addresses the organisational and human factors that determine whether a review programme succeeds or fails.

Static Analysis
Automated tool examination of source code, byte-code, or models to detect patterns associated with defects, vulnerabilities, and standards violations — without running the code.
Success Factors
Reviews only deliver their full value when the team has the right culture, defined process, trained participants, and management support. Human factors are as important as process.
Module Integration
Static testing (reviews + analysis) + dynamic testing (black-box + white-box) form the complete testing strategy. Static techniques shift defect detection left, reducing dynamic test cost.

Static Analysis

Static analysis is the automated examination of software artefacts (primarily source code) to detect defects, coding standard violations, security vulnerabilities, and quality metrics — without executing the program. It is performed by tools that parse, model, and reason about the code structure.

Key characteristics
  • Fully automated — no human reviewer required to run it, though results must be interpreted.
  • Executes much faster than a manual review: a 10,000-line codebase can be analysed in seconds.
  • Consistent and repeatable — the same code always produces the same findings.
  • Produces both true positives (real defects) and false positives (reported issues that are not actual defects).
  • Cannot understand intent, business context, or domain-specific correctness — these require human reviewers.
  • Most effective when integrated into the CI/CD pipeline so it runs automatically on every commit.
True positives
A real defect correctly reported by the tool. Needs developer attention and a fix.
False positives
A warning raised by the tool that is not actually a defect. Must be reviewed, suppressed, or configured away. High false-positive rates cause teams to ignore tool output.
False negatives
A real defect not detected by the tool. No static analysis tool achieves zero false negatives — this is why manual reviews remain necessary.

Categories of Static Analysis

Category | What it checks | Example findings
Control Flow Analysis | Detects structural anomalies in program flow: unreachable code, missing return paths, infinite loops. | A function that can exit without returning a value; a loop with no termination condition.
Data Flow Analysis | Tracks how variables are defined, used, and killed. Finds: use before define, define-define (overwritten before use), define-kill (defined but never used). | Variable result assigned but never read; variable count used before initialisation.
Information Flow Analysis | Tracks how data flows from inputs to outputs. Detects taint propagation (unvalidated external data reaching sensitive operations). | User-supplied string passed directly to a SQL query without sanitisation (SQL injection risk).
Coding Standards Checking | Verifies adherence to naming conventions, indentation, comment requirements, and forbidden language features. | Function name not in camelCase; missing Javadoc for public method; use of goto statement.
Security Analysis (SAST) | Static Application Security Testing scans for vulnerability patterns from the OWASP Top 10 and CWE/SANS Top 25. | Hardcoded credentials, buffer overflow risk, XSS injection point, insecure random number generation.
Complexity Metrics | Computes cyclomatic complexity, coupling, cohesion, lines of code, depth of nesting. | Function with CC > 15 flagged for refactoring; class with 30+ dependencies flagged for decomposition.
Dependency Analysis | Examines import/include structures for circular dependencies, unused imports, and outdated libraries. | Circular dependency between modules A and B; imported library with known CVE vulnerability.
Clone Detection | Identifies copy-pasted code blocks that should be refactored into shared functions. | Same 15-line validation logic duplicated across 6 files; a fix in one location may be missed in the others.
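To make the data flow category concrete, here is a deliberately minimal sketch of a define-kill check, written with Python's standard ast module. This is an illustration of the mechanism only, not any real tool's implementation: a production tool such as Pylint additionally handles function scopes, loops, augmented assignments, and many edge cases this sketch ignores.

```python
import ast

def find_unused_assignments(source: str) -> list[str]:
    """Toy define-kill check: report names assigned but never read.

    Walks the module AST once, collecting names that appear in a
    Store context (assignments) and a Load context (reads). Any name
    that is stored but never loaded is a define-kill candidate.
    """
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                used.add(node.id)
    return sorted(assigned - used)

snippet = """
total = 0
result = 0
total = total + 1
print(total)
"""
print(find_unused_assignments(snippet))  # ['result']
```

Note that the check never executes the analysed snippet; it reasons purely about the code's structure, which is exactly what makes static analysis fast and side-effect free.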

Static Analysis Worked Example

Consider the following Python function with several embedded defects:

def calculate_average(scores):
    total = 0
    count = 0
    result = 0                 # W1: defined but never used (define-kill)
    for score in scores:
        total += score
        count += 1
    average = total / count    # E1: ZeroDivisionError if scores is empty (data flow)
    if average > 100:
        return 100
    if average < 0:
        return 0
    # W2: implicit return None for 0 ≤ average ≤ 100 (missing return)
Static analysis findings
ID | Line | Category | Severity | Finding
W1 | 4 | Data Flow (define-kill) | Warning | Variable result is assigned but never read. Likely dead code or missing logic.
E1 | 8 | Control Flow / Runtime Error | Error | Division by count when scores is empty (count = 0). Will raise ZeroDivisionError at runtime.
W2 | 13 | Control Flow (missing return) | Warning | Function returns None implicitly for the range 0 ≤ average ≤ 100. Should return average explicitly.
Key observation

A dynamic test suite built only from non-empty score lists could achieve full statement coverage and pass every assertion, yet E1 (division by zero for empty input) would remain undetected until a user submits an empty score list in production. Static analysis finds this structural risk without any test execution.
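The gap can be demonstrated directly. The sketch below assumes W2 has already been fixed (the function returns average explicitly); a happy-path unit test then passes and gives false confidence, while E1 still waits in the code for the first empty input:

```python
def calculate_average(scores):
    # Version with W2 fixed; E1 (empty input) is still present.
    total = 0
    count = 0
    for score in scores:
        total += score
        count += 1
    average = total / count  # E1: ZeroDivisionError when scores == []
    if average > 100:
        return 100
    if average < 0:
        return 0
    return average

# The happy-path test passes, so dynamic testing reports success:
assert calculate_average([80, 90, 100]) == 90

# The defect only appears when the untested input finally arrives:
try:
    calculate_average([])
except ZeroDivisionError:
    print("E1 reproduced: ZeroDivisionError on empty input")
```

A data flow or control flow analyser flags the unguarded division on every run, regardless of which test inputs anyone thought to write.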

Static Analysis Tools

Tools by language and focus
Language | Tools | Primary Focus
Python | Pylint, Flake8, Bandit, Mypy | Standards, data flow, security (Bandit), type checking (Mypy)
Java | SpotBugs, Checkstyle, PMD, SonarQube | Bug patterns (SpotBugs), standards (Checkstyle), complexity (PMD), all-in-one (SonarQube)
JavaScript | ESLint, JSHint, Semgrep | Standards, best practices, security patterns
C / C++ | Cppcheck, Clang-Tidy, PC-lint | Memory errors, undefined behaviour, MISRA compliance
Multi-language | SonarQube, Semgrep, CodeQL, Coverity | Comprehensive SAST, security, compliance, CI/CD integration
Security-focused | OWASP Dependency-Check, Snyk, Veracode | Known CVE detection in dependencies, OWASP Top 10 patterns
Integrating static analysis into CI/CD
  1. Pre-commit hooks: Run fast linters (Flake8, ESLint) locally before code is committed. Catches trivial issues instantly.
  2. Pull request checks: Run comprehensive analysis (SonarQube, Semgrep) on every PR. Block merges that drop quality below threshold.
  3. Nightly / release scans: Run full SAST and dependency checks. Slower tools that don't fit in the PR pipeline.
  4. Quality gates: Define thresholds: e.g., no new critical issues, coverage ≥ 80%, CC ≤ 15 per function. Pipeline fails if gates are not met.
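Step 4 can be sketched as a small gate function that a pipeline script would call after parsing the tool's report. Everything here is illustrative (the finding tuples, the threshold values, the function name); it is not any particular CI system's API:

```python
def quality_gate(findings, coverage):
    """Decide pass/fail for a simple CI quality gate.

    findings: list of (severity, is_new) tuples parsed from a tool report.
    coverage: fraction of statements covered by the test suite.
    Thresholds mirror the examples above: no new critical issues,
    coverage >= 80%.
    """
    reasons = []
    new_critical = sum(1 for sev, is_new in findings
                       if sev == "critical" and is_new)
    if new_critical:
        reasons.append(f"{new_critical} new critical issue(s)")
    if coverage < 0.80:
        reasons.append(f"coverage {coverage:.0%} is below the 80% gate")
    return (not reasons, reasons)

# A pre-existing critical issue does not block the merge; a new one would.
passed, reasons = quality_gate([("critical", False), ("minor", True)],
                               coverage=0.85)
print(passed, reasons)  # True []
```

Gating on new issues only (rather than all issues) is a common way to adopt static analysis on a legacy codebase without forcing an immediate cleanup of every historical warning.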

Manual Reviews vs Static Analysis

Aspect | Manual Review | Static Analysis Tool
Speed | Slow (hours per document) | Fast (seconds to minutes)
Consistency | Varies by reviewer skill and attention | Fully consistent, same result every run
Finds ambiguity / omissions | Yes (human language understanding) | No (tools cannot understand intent)
Finds code logic errors | Yes (with preparation and expertise) | Yes (data flow, control flow analysis)
Finds security vulnerabilities | Partially (depends on reviewer expertise) | Yes (SAST tools specialise in this)
Applies to non-code artefacts | Yes (requirements, design, test plans) | No (mostly code and models)
False positives | Low (humans understand context) | Can be high (requires tuning)
Knowledge sharing | High (team discussions) | None
Cost per run | High (human time) | Very low after initial setup
Best used for | Requirements, design, complex logic, architecture | Code standards, security, complexity metrics
Optimal hybrid strategy

Use static analysis tools to automatically handle standards compliance, common bug patterns, and security checks — freeing human reviewers to focus on what only humans can do: understanding intent, detecting architectural flaws, evaluating requirements completeness, and assessing testability.

Review Success Factors

Research into industrial review programmes identifies eight factors that consistently predict review effectiveness and adoption:

1. Clear objectives: The review team knows exactly what type of defects they are looking for. A requirements review targets ambiguity and omissions; a code review targets logic errors and standards violations. Without clear objectives, reviewers lose focus.
2. Trained participants: Both moderators and reviewers receive formal training in the review process. Untrained reviewers tend to comment on style rather than substance and miss systematic defect patterns.
3. Defined checklists: Reviewers prepare using role-specific checklists tailored to the artefact type. Checklists ensure consistent coverage and help inexperienced reviewers find defects they would otherwise miss.
4. Management support: Management allocates time for reviews, treats them as a required project activity (not optional), and uses review metrics for process improvement rather than individual performance evaluation.
5. Constructive culture: Reviews focus on the work product, not the person. The team understands that finding defects is a success, not an embarrassment. A blame-free environment maximises defect reporting.
6. Right-sized review scope: Each review session covers a manageable amount of material (5–10 pages of requirements, 200–400 lines of code). Reviewing too much causes fatigue and misses defects.
7. Individual preparation: Reviewers examine the document individually before the meeting and come prepared with logged issues. Meeting time is used for discussion, not first reading.
8. Actionable follow-up: Defects are tracked to resolution. The moderator verifies rework. Metrics from completed reviews feed back into the team's process improvement cycle.

Why Reviews Fail

Schedule pressure: Reviews are skipped or compressed when deadlines approach. Ironically, this creates the extra work downstream (more defects in testing and production) that caused the pressure in the first place.
Ego and blame culture: Authors become defensive; reviewers avoid raising issues to spare feelings. Reviews become rubber-stamp exercises with no real defect detection.
No preparation: Reviewers attend the meeting without reading the document. The session becomes a first reading, not a structured defect-finding exercise. Defect yield drops by 50–70%.
Fixing during meetings: The team spends meeting time redesigning rather than recording defects. The review runs overtime, participants lose focus, and later sections of the document are rushed or skipped.
Misusing metrics: If managers use defect-found counts to evaluate individual authors punitively, authors will under-prepare to avoid having defects found. Metrics must be used for process improvement only.
Wrong participants: Reviewers without sufficient domain knowledge or technical depth cannot identify relevant defects. Including too many reviewers increases meeting time without proportional benefit.

Review Checklists

Checklists are the most effective tool for improving review yield. They encode accumulated experience about defect patterns into a structured reminder list that every reviewer applies to each document.

Requirements Review Checklist
# | Check | Defect type if violated
1 | Is each requirement statement unambiguous (one interpretation only)? | Ambiguity
2 | Is each requirement verifiable (can a test confirm it)? | Untestability
3 | Are all terms defined in a glossary or the document itself? | Ambiguity / Omission
4 | Does any requirement contradict another in this or related documents? | Inconsistency
5 | Is there a requirement for every stated user need / stakeholder expectation? | Omission
6 | Does every requirement have a unique identifier? | Traceability omission
7 | Are performance requirements expressed with measurable criteria (units, thresholds)? | Ambiguity / Untestability
8 | Are error handling and exception scenarios specified? | Omission
9 | Are requirements free of implementation detail (what, not how)? | Gold-plating / Design leak
10 | Is each requirement consistent with applicable laws, regulations, and standards? | Compliance omission
Code Review Checklist
# | Check | Defect type if violated
1 | Are all variables initialised before use? | Data flow (use before define)
2 | Is every function return value checked or explicitly ignored? | Error handling omission
3 | Are division operations protected against zero denominators? | Runtime error risk
4 | Are all external inputs validated before use? | Security / robustness
5 | Are resources (files, connections, memory) always released on all paths? | Resource leak
6 | Are loop termination conditions correct and always reachable? | Infinite loop / off-by-one
7 | Do function and variable names follow the agreed naming convention? | Standards violation
8 | Is cyclomatic complexity ≤ 10 for all functions? | Complexity / maintainability
9 | Are hardcoded values replaced with named constants or configuration? | Maintainability
10 | Are error messages informative, and do they avoid leaking security details? | Security / usability

Static Testing Module Summary

Complete static testing techniques — Sessions 3.9 & 3.10
Technique | Type | Formality | Best artefact | Key output
Informal Review | Manual | None | Drafts, code snippets | Verbal / informal comments
Walkthrough | Manual | Low | Requirements, design | Issue list (informally logged)
Technical Review | Manual | Medium | Architecture, test plans | Formal defect log
Formal Inspection | Manual | High | Safety-critical specs, code | Inspection log + metrics
Linting / Style Check | Automated | Tool | Source code | Standards violation report
Data Flow Analysis | Automated | Tool | Source code | Variable anomaly report
Control Flow Analysis | Automated | Tool | Source code | Unreachable code, missing returns
SAST | Automated | Tool | Source code | Security vulnerability report
Complexity Metrics | Automated | Tool | Source code | CC, coupling, cohesion report
How static and dynamic testing fit together
Phase | Primary Technique | Purpose
Requirements phase | Walkthrough / Technical Review | Eliminate ambiguity, omissions, and untestable requirements before any code is written.
Design phase | Technical Review / Inspection | Verify architectural soundness, interface consistency, and compliance with non-functional requirements.
Coding phase | Static analysis + Code Review | Enforce standards, detect logic errors, identify security risks, measure complexity.
Unit testing phase | White-box dynamic testing | Verify code correctness for all paths, branches, and conditions.
Integration/System testing | Black-box dynamic testing | Verify end-to-end behaviour against requirements.
Acceptance testing | Black-box + Exploratory | Validate fitness for purpose from the user/customer perspective.

Common Mistakes

Ignoring false positives
Teams that don't tune their static analysis tools accumulate thousands of false positive warnings. Developers learn to ignore all tool output — including real defects.
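The alternative to blanket suppression is targeted, justified suppression plus configuration. Pylint, for example, accepts an inline disable comment scoped to a single line, so the check stays active everywhere else; better still, following the tool's naming convention avoids the suppression entirely. (The retry loop below is just a placeholder to trigger the warning.)

```python
MAX_RETRIES = 3

attempts = 0
# Scoped inline suppression: the loop index is intentionally unused,
# and the disable applies to this line only, not the whole file.
for attempt in range(MAX_RETRIES):  # pylint: disable=unused-variable
    attempts += 1

# Often better: rename instead of suppressing. By default Pylint
# treats underscore-prefixed names as intentional dummy variables.
for _attempt in range(MAX_RETRIES):
    pass

print(attempts)  # 3
```

Each surviving suppression then documents a deliberate decision a reviewer can audit, instead of silently hiding a class of real defects.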
Treating analysis as a replacement for reviews
Static analysis cannot find requirement ambiguity, architectural unsuitability, or business logic incorrectness. Tools complement, not replace, human judgment.
No checklist for review preparation
Reviewers without checklists tend to focus on surface issues (typos, formatting) and miss systematic defect patterns (missing error handling, interface mismatches).
Metrics used punitively
Publishing defect-found counts per author causes authors to avoid reviews. Metrics should only be used for process quality measurement, never individual evaluation.

Class Activity — Static Analysis + Checklist Review

Part A — Static Analysis (15 min): You are given the following Python function. Apply the static analysis categories manually (as if you were the tool) and identify all findings:

def process_payment(amount, user_id, discount_code):
    DB_PASSWORD = "admin123"    # hardcoded credential
    discount = 0
    tax = amount * 0.18
    if discount_code == "FLAT50":
        discount = 50
    total = (amount - discount) + tax
    if total < 0:
        total = 0
    query = "SELECT * FROM payments WHERE user='" + user_id + "'"
    if amount == 0:
        return 0
    return total
  1. Identify all static analysis findings by category (security, data flow, control flow, coding standards).
  2. Classify each finding as Error (E), Warning (W), or Info (I) by severity.
  3. State which tool category would detect each finding.

Part B — Checklist Review (10 min): Apply the Requirements Review Checklist (10 items) to the following requirement:

REQ-12: The payment gateway shall process credit card transactions securely and return the result to the calling application.
  1. Work through each checklist item and mark Pass / Fail / N-A.
  2. Log each Fail as a defect with type classification.
  3. Rewrite REQ-12 to fix all identified defects.
Evaluation rubric (10 marks)
  • 4 marks Part A: At least 4 findings correctly identified, classified by category and severity.
  • 2 marks Part A: Correct tool category mapped to each finding.
  • 2 marks Part B: At least 3 checklist failures correctly identified and classified.
  • 2 marks Part B: Rewritten requirement addresses all identified defects without introducing new ones.
Expected findings guide (instructor reference)
  • Line 2: Hardcoded credential (SAST / Security) — DB_PASSWORD in source code. Severity: Error.
  • Line 10: SQL Injection (SAST / Information Flow) — user_id concatenated directly into SQL. Severity: Error.
  • Line 3: Define-define (Data Flow) — discount is initialised to 0 and, on the path where discount_code == "FLAT50" is true, overwritten with 50 before any use. A harmless default pattern, but many tools flag it. Severity: Warning.
  • Lines 11–12: Misplaced guard (Control Flow) — the if amount == 0 check runs only after the total has been calculated; it belongs at function entry. Severity: Warning / logic error.
  • REQ-12 defects: "securely" is ambiguous/untestable; "calling application" is not defined (omission); no performance criterion; no error handling specification (omission); no supported card types (omission).

Exit Ticket

  1. Name two defect types that static analysis tools can find that manual code reviews frequently miss, and two defect types that manual reviews find that tools cannot.
  2. A team runs their static analysis tool and receives 800 warnings on a legacy codebase. They decide to suppress all warnings. What is the risk of this decision, and what should they do instead?
  3. A project manager says: "We don't have time for reviews. We'll just test more." Provide two specific arguments, with data or examples, that counter this position.

Summary & Assignment

Static analysis automates detection of structural defects, coding standard violations, security vulnerabilities, and complexity issues — consistently and at speed. Manual reviews complement tools by capturing what tools cannot: intent, ambiguity, architecture, and business correctness. Together they form the static testing pillar of a comprehensive quality strategy. The effectiveness of both depends on human factors: clear objectives, trained participants, checklists, constructive culture, and management commitment.

This session completes the Module 3 Static Testing block. Combined with the white-box dynamic testing sessions (3.5–3.8) and black-box techniques (3.1–3.4), you now have the full testing technique toolkit for software quality assurance.

Final Module Assignment: Integrate static testing into your mini-project: (1) Configure and run a static analysis tool on your codebase. Record all findings, triage them (true positive vs false positive), and fix all true-positive Errors. (2) Conduct a technical review of your main requirements document using the 10-item requirements checklist. Produce a defect log, perform rework, and get moderator sign-off. (3) Submit: static analysis tool report (before/after), requirements defect log, revised requirements document, and a half-page reflection comparing what static testing found vs what your dynamic tests would have found.