5.2 — Security Testing Automation
Learning Objectives
- ✓ Explain the capabilities, limitations, and appropriate use cases for SAST, DAST, SCA, and IAST.
- ✓ Design a security testing pipeline with scan points at every stage of the SDLC.
- ✓ Evaluate AI-powered security testing tools and understand how they differ from traditional approaches.
- ✓ Implement EPSS-based vulnerability prioritization to reduce remediation workload by 60-80%.
- ✓ Manage false positives through triage workflows, suppression rules, and baseline management.
1. CIS Control 16.12 — Implement Code-Level Security Checks
CIS Safeguard 16.12 is an Implementation Group 3 (IG3) control that states:
"Apply static and dynamic analysis tools within the application lifecycle to verify that security practices are being adhered to."
This is a comprehensive mandate. "Within the application lifecycle" means at every stage, not just before release. "Static AND dynamic" means both — they catch fundamentally different classes of vulnerabilities. "Verify security practices" means the tools are checking that the security requirements defined in your SSDLC (Module 2.1) are actually implemented.
The control aligns directly with NIST SSDF practices PW.7 (review human-readable code to identify vulnerabilities) and PW.4 (reuse existing well-secured software when feasible and verify all other software). This module covers the full spectrum of automated security testing tools that implement this control.
2. SAST — Static Application Security Testing
2.1 What SAST Does
SAST analyzes source code, bytecode, or compiled binaries without executing the application. It is white-box testing — the tool has full visibility into the code structure, data flows, and control flows.
SAST tools build an abstract model of the application (Abstract Syntax Tree, Control Flow Graph, Data Flow Graph) and apply rules to identify patterns that indicate vulnerabilities. The most sophisticated tools perform taint analysis — tracking untrusted input from its entry point (source) through all transformations to where it is used (sink), flagging paths where sanitization is missing.
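To make the source-to-sink idea concrete, here is a deliberately minimal Python sketch (hypothetical function names) of the tainted path a SAST tool flags, next to the parameterized version that breaks it:

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # `username` is the taint SOURCE (untrusted input). Concatenating it
    # into the query string is the SINK -- this is the unsanitized
    # source-to-sink path that taint analysis reports.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterization breaks the taint path: the driver treats the
    # value strictly as data, never as SQL text.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

With the classic payload `' OR '1'='1`, the first function returns every row in the table while the second returns nothing, which is exactly the behavioral difference the taint rule encodes.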
2.2 When to Run SAST
- Every commit: IDE plugins and pre-commit hooks provide immediate developer feedback.
- Every build: CI pipeline stage runs the full rule set.
- Every PR: Incremental analysis on changed files, blocking merge on critical findings.
- Scheduled full scans: Weekly or monthly complete codebase analysis to catch issues in unchanged code with new rules.
2.3 What SAST Catches
| Vulnerability Class | How SAST Detects It |
|---|---|
| SQL Injection | Taint analysis: user input reaches SQL query without parameterization |
| Cross-Site Scripting | Taint analysis: user input rendered in HTML without encoding |
| Buffer Overflows | Bounds checking analysis on array/buffer operations |
| Insecure Cryptography | Pattern matching: weak algorithms (MD5, SHA1, DES), small key sizes |
| Hardcoded Secrets | Entropy analysis + pattern matching: API keys, passwords, tokens |
| Path Traversal | Taint analysis: user input in file system operations |
| Command Injection | Taint analysis: user input in system calls |
| Insecure Deserialization | Pattern matching: deserialization of untrusted data |
| XXE | Configuration analysis: XML parser settings |
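The entropy-analysis approach from the Hardcoded Secrets row can be sketched in a few lines of Python. The token regex and the 4.0-bit threshold are illustrative assumptions, not tuned production values:

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of s."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

# Assumed heuristics: long base64-ish tokens, flagged when their
# per-character entropy exceeds what ordinary identifiers reach.
TOKEN_RE = re.compile(r"[A-Za-z0-9+/=_\-]{20,}")
ENTROPY_THRESHOLD = 4.0

def find_secret_candidates(source_line: str):
    """Flag high-entropy tokens that may be hardcoded credentials."""
    return [t for t in TOKEN_RE.findall(source_line)
            if shannon_entropy(t) > ENTROPY_THRESHOLD]
```

Real scanners combine this with pattern matching (known key prefixes, variable names like `password`) because entropy alone misfires on hashes and UUIDs in test fixtures.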
2.4 SAST Tools Landscape
Commercial:
| Tool | Strengths |
|---|---|
| Checkmarx SAST | Deep taint analysis, 30+ languages, enterprise scale |
| Fortify (OpenText) | Comprehensive rule set, government/DoD adoption |
| Veracode | SaaS-first, binary analysis (no source code required) |
| SonarQube Enterprise | Quality + security in one platform, strong IDE plugins |
Open Source:
| Tool | Language Focus | Notes |
|---|---|---|
| Semgrep | 30+ languages | Custom rules in YAML, fast, low false positives |
| Bandit | Python | AST-based, well-maintained, pip install |
| Brakeman | Ruby on Rails | Framework-aware, fast, low config |
| SpotBugs + FindSecBugs | Java | Bytecode analysis, extensive security plugins |
| ESLint Security | JavaScript | eslint-plugin-security, integrates with existing linting |
2.5 AI-Powered SAST
Traditional SAST relies on pattern matching and predefined rules. AI-powered SAST uses semantic understanding to identify vulnerabilities that no rule could catch.
Claude Opus 4.6 vulnerability discovery: In 2025, Anthropic demonstrated its Claude model finding over 500 high-severity vulnerabilities in open-source software. What made this significant was not the volume — it was the nature of the findings:
- Zero-day vulnerabilities: Issues that no existing SAST tool had rules for, because the vulnerability patterns had never been cataloged.
- Complex logic flaws: Multi-step vulnerabilities where the bug only manifests through a specific sequence of operations across multiple functions.
- Context-aware analysis: Understanding not just what the code does, but what it was intended to do, and identifying where those diverge.
This represents a fundamental capability shift. Traditional SAST asks: "Does this code match a known vulnerable pattern?" AI-powered SAST asks: "Is this code doing something dangerous?" — a question that encompasses both known and unknown vulnerability classes.
Practical implications for teams:
- AI SAST supplements but does not replace traditional SAST. Pattern-based tools are faster, more consistent, and have lower compute costs for known vulnerability classes.
- AI SAST is most valuable for complex business logic, custom security controls, and novel code patterns that fall outside standard rule sets.
- Results from AI SAST require experienced human review — the model can identify genuine zero-days, but it can also flag correct code that looks suspicious.
2.6 NIST SSDF PW.7 Alignment
NIST SSDF Practice PW.7 states: "Review and/or analyze human-readable code to identify vulnerabilities and verify compliance with security requirements." SAST directly implements PW.7 by:
- Automatically reviewing all code changes for known vulnerability patterns.
- Verifying compliance with coding standards (banned functions, required error handling, mandatory input validation).
- Providing evidence of review for audit purposes.
3. DAST — Dynamic Application Security Testing
3.1 What DAST Does
DAST tests running applications by simulating attacks from the outside. It is black-box testing — the tool has no knowledge of the source code. It interacts with the application the same way an attacker would: through HTTP requests, form submissions, and API calls.
DAST tools crawl the application to discover endpoints, then systematically send malicious payloads to each endpoint and analyze the responses for signs of vulnerability.
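The detect step of that crawl-and-attack loop can be illustrated offline. This Python sketch (hypothetical names; real scanners work over HTTP against a live target) checks whether an injected probe string comes back unencoded in a response body — the signature of reflected XSS:

```python
import html

# Hypothetical marker a scanner would inject into each parameter.
PROBE = '<script>alert("dast-probe-1337")</script>'

def response_indicates_xss(response_body: str) -> bool:
    """Detection step of a DAST XSS check: the probe came back
    verbatim (unencoded), so the page reflects input into HTML."""
    return PROBE in response_body

def render_vulnerable(user_input: str) -> str:
    # A page that reflects input without encoding.
    return f"<p>Hello {user_input}</p>"

def render_safe(user_input: str) -> str:
    # A correctly encoding page: the probe survives only escaped.
    return f"<p>Hello {html.escape(user_input)}</p>"
```

Note the tool never sees `render_vulnerable`'s source — it infers the flaw purely from the response, which is what makes DAST black-box.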
3.2 When to Run DAST
- Against staging/QA environments: After deployment, before promotion to production.
- Post-deployment CI/CD stages: Automated DAST scan of the deployed artifact in a test environment.
- Scheduled scans: Weekly against production (with care — DAST can be noisy and potentially disruptive).
- After infrastructure changes: New load balancers, WAF rule changes, TLS configuration changes.
3.3 What DAST Catches That SAST Cannot
| Vulnerability Class | Why DAST Is Required |
|---|---|
| Server misconfiguration | TLS version, HTTP headers, directory listing, default credentials |
| Authentication flaws | Session fixation, cookie attributes, login bypass |
| Runtime injection | Server-side template injection, header injection |
| CORS misconfiguration | Runtime header analysis |
| WAF bypass | Tests the actual deployed defense, not the code behind it |
| Infrastructure vulnerabilities | Web server, reverse proxy, load balancer issues |
SAST sees the code. DAST sees the deployed application. Many vulnerabilities exist in the gap between the two: server configuration, deployment settings, infrastructure components, and runtime behavior that cannot be predicted from source code alone.
3.4 DAST Tools Landscape
Commercial:
| Tool | Strengths |
|---|---|
| Burp Suite Pro | Gold standard for manual + automated web app testing |
| Qualys WAS | Cloud-based, strong asset discovery, compliance reporting |
| Rapid7 InsightAppSec | Crawl + attack engine, CI/CD integration |
Open Source:
| Tool | Strengths |
|---|---|
| OWASP ZAP | Full-featured proxy, active/passive scanning, scripting engine |
| Nuclei | Template-based scanning, community-maintained templates, fast |
| Nikto | Web server scanner, 7,000+ tests, quick infrastructure audit |
3.5 AI-Enhanced DAST
AI improves DAST in three critical areas:
- Smarter crawling: Traditional DAST crawlers struggle with JavaScript-heavy applications, single-page apps, and complex navigation flows. AI-enhanced crawlers understand application structure semantically, discovering endpoints that traditional crawlers miss.
- Intelligent payload generation: Instead of brute-forcing payloads from a static list, AI generates context-aware payloads based on the application's technology stack, input validation patterns, and observed responses.
- Context-aware testing: AI can understand the business context of an endpoint (e.g., "this is a password reset flow") and apply relevant attack patterns (token prediction, race conditions, workflow bypass) rather than generic tests.
4. SCA — Software Composition Analysis
4.1 What SCA Does
SCA analyzes third-party and open-source components in your application for known vulnerabilities (CVEs), license compliance risks, and version currency.
4.2 Why SCA Is Critical
The numbers make the case unambiguously:
- ~85% of security vulnerabilities are in third-party dependencies, not in code your team wrote.
- 10,000+ malicious packages are published to public registries per quarter (npm, PyPI, RubyGems, Maven Central).
- The average application has 200-500 direct and transitive dependencies. Each is a potential attack surface.
- Supply chain attacks (SolarWinds, Log4Shell, XZ Utils) demonstrate that compromised dependencies can bypass every other security control.
4.3 When to Run SCA
- Every build: Check dependencies against vulnerability databases.
- Every dependency change: `package.json`, `requirements.txt`, `pom.xml`, `go.mod` changes trigger SCA.
- Continuous monitoring: New CVEs are published daily. A dependency that was safe yesterday may have a critical CVE today.
- Pre-merge: Block PRs that introduce dependencies with known critical/high vulnerabilities.
4.4 SCA Capabilities
CVE matching: Cross-referencing your dependency tree against multiple vulnerability databases:
- NVD (National Vulnerability Database)
- OSV (Open Source Vulnerabilities)
- GitHub Advisory Database
- Vendor-specific advisories (e.g., Red Hat, Ubuntu)
License risk analysis: Identifying dependencies with licenses incompatible with your distribution model. GPL in a proprietary product. AGPL in a SaaS. Unlicensed code with no terms at all.
Transitive dependency analysis: Your code depends on Library A. Library A depends on Library B. Library B depends on Library C, which has a critical CVE. Without transitive analysis, you would never know Library C is in your supply chain.
Reachability analysis: Not all CVEs in your dependencies are exploitable. Reachability analysis determines whether your code actually calls the vulnerable function. A CVE in a library function you never invoke is a lower priority than one in a function you call on every request.
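At its core, reachability analysis is a graph search. A toy Python sketch, assuming the call graph has already been extracted (real tools derive it from the AST or bytecode; all names below are hypothetical):

```python
from collections import deque

def is_reachable(call_graph, entry_points, vulnerable_func) -> bool:
    """BFS over a caller -> callees mapping: does any application
    entry point transitively call the vulnerable dependency function?"""
    seen = set(entry_points)
    queue = deque(entry_points)
    while queue:
        fn = queue.popleft()
        if fn == vulnerable_func:
            return True
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False
```

A CVE in a function that this search cannot reach from any entry point drops sharply in priority — it is present in the supply chain but not on any executable path.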
Malicious package detection: Identifying packages that are intentionally malicious — typosquatting (e.g., `lodas` instead of `lodash`), dependency confusion attacks, or compromised maintainer accounts publishing backdoored versions.
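A minimal typosquatting heuristic is edit distance against popular package names. The allow-list here is an illustrative assumption — real detectors use registry download statistics and many additional signals:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical popularity allow-list for illustration only.
POPULAR = {"lodash", "requests", "express", "numpy"}

def typosquat_suspects(package_name: str):
    """Names one edit away from a popular package (and not that
    package itself) are classic typosquatting candidates."""
    return sorted(p for p in POPULAR
                  if p != package_name
                  and levenshtein(package_name, p) == 1)
```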
4.5 SCA Tools Landscape
Commercial:
| Tool | Strengths |
|---|---|
| Snyk | Developer-first UX, auto-fix PRs, extensive ecosystem |
| Mend.io (WhiteSource) | Deep license analysis, policy engine, reachability |
| Black Duck (Synopsys) | Enterprise-grade, binary analysis, M&A due diligence |
| FOSSA | License compliance focus, policy-as-code, SBOM generation |
Open Source:
| Tool | Strengths |
|---|---|
| OWASP Dependency-Check | Mature, multi-language, NVD integration |
| OSV-Scanner (Google) | Uses OSV database, fast, Go-based |
| Trivy (Aqua) | Container + filesystem + IaC scanning, comprehensive |
4.6 AI-Powered SCA
AI is transforming SCA from a "list of CVEs" tool into an intelligent risk prioritization engine:
Mend.io: Combines CVSS 4.0 scoring with EPSS (Exploit Prediction Scoring System) probabilities and reachability analysis. Instead of saying "you have 47 critical CVEs," it says "you have 3 CVEs that are reachable in your code AND have a high probability of exploitation in the wild." This reduces noise by 80-90%.
Sonatype (Nexus): AI-powered model and library governance. Automatically identifies when AI-generated code introduces dependencies that are unmaintained, malicious, or license-incompatible. Specifically designed for the era where developers accept AI suggestions without checking what packages they pull in.
Arnica: Combines OpenSSF Scorecards (open-source project health metrics) with EPSS to assess not just "is this vulnerable?" but "is this project likely to have MORE vulnerabilities?" Projects with no security policy, no code review, no signed releases, and declining maintainer activity are flagged as high-risk regardless of current CVE count.
4.7 NIST SSDF PW.4 Alignment
NIST SSDF Practice PW.4 states: "Reuse existing, well-secured software when feasible instead of duplicating functionality, which may lead to the introduction of new vulnerabilities. Verify that all other software used is current and well-maintained."
SCA implements PW.4 by:
- Verifying dependencies are well-maintained (update frequency, maintainer activity).
- Identifying known vulnerabilities in reused software.
- Ensuring license compliance (legal risk is also a security risk).
- Monitoring for supply chain compromise.
5. IAST — Interactive Application Security Testing
IAST combines the strengths of SAST and DAST by instrumenting the running application. An agent deployed within the application runtime observes code execution in real-time while the application is being tested.
5.1 How IAST Works
- An instrumentation agent is deployed alongside the application (Java agent, .NET profiler, Node.js require hook).
- As functional tests or DAST scans exercise the application, the agent observes:
- Which code paths are executed
- How data flows from input to output
- Where security controls (validation, encoding, parameterization) are applied or missing
- When a vulnerability is detected, the agent reports the exact source code location, the data flow, and the HTTP request that triggered it.
5.2 IAST Advantages
- Lower false positive rate: IAST sees actual runtime behavior, not theoretical code paths. If the code is instrumented and the vulnerability is triggered, it is real.
- Precise location: IAST pinpoints the exact line of code and the exact HTTP request. Developers get actionable findings, not abstract warnings.
- Runtime context: IAST understands that a potential SQL injection is parameterized at runtime (false positive in SAST) or that a variable marked "safe" by SAST is actually user-controlled through a framework binding (false negative in SAST).
5.3 IAST Limitations
- Requires application instrumentation — deployment and performance overhead.
- Only detects vulnerabilities on exercised code paths — coverage depends on test quality.
- Language and framework support varies.
- Not suitable for production deployment (performance impact).
6. EPSS-Based Vulnerability Prioritization
6.1 The Prioritization Problem
A typical enterprise application has hundreds to thousands of known vulnerabilities across its dependency tree. CVSS scores tell you the theoretical severity of each vulnerability. They do not tell you which ones will actually be exploited.
The result: teams waste enormous effort patching CVSS 9.8 vulnerabilities that have no known exploit, no proof of concept, and no attacker interest — while a CVSS 6.5 vulnerability with an active exploit in the wild goes unpatched.
6.2 What EPSS Is
The Exploit Prediction Scoring System (EPSS), maintained by FIRST.org, uses machine learning to predict the probability that a vulnerability will be exploited in the wild within the next 30 days. It is a probability score from 0 to 1.
The critical insight:
| Scenario | CVSS | EPSS | Priority |
|---|---|---|---|
| A | 6.5 | 0.94 | HIGH — 94% chance of exploitation in 30 days |
| B | 9.8 | 0.003 | LOW — 0.3% chance of exploitation in 30 days |
Under CVSS-only prioritization, Scenario B is patched first. Under EPSS-informed prioritization, Scenario A is patched first. The second approach prevents actual breaches; the first prevents theoretical ones.
6.3 Real-World Impact
Organizations that have migrated to EPSS-informed prioritization consistently report 60-80% reduction in effective remediation workload. They patch fewer vulnerabilities but prevent more exploits.
6.4 Key Platforms Using AI/EPSS for Prioritization
Microsoft Vuln.AI: AI-driven vulnerability assessment achieving 50%+ faster triage. Correlates EPSS, threat intelligence, asset criticality, and exploit availability to produce actionable priority rankings.
CrowdStrike ExPRT.AI: Expert Prediction Rating system that identifies the 5% of vulnerabilities that pose 95% of the actual risk. Reduces remediation scope by an order of magnitude while increasing security posture.
Tenable VPR (Vulnerability Priority Rating): Analyzes the full CVE landscape and concludes that only 1.6% of vulnerabilities represent actual exploitable risk at any given time. VPR combines CVSS, EPSS, exploit maturity, and threat intelligence to focus remediation on that 1.6%.
6.5 Implementing EPSS in Your Workflow
1. SCA scan produces list of CVEs in dependencies
2. Enrich each CVE with:
- CVSS score (severity)
- EPSS score (exploitation probability)
- Reachability analysis (is the vulnerable code actually called?)
- Asset criticality (how important is this application?)
3. Prioritize:
- EPSS > 0.5 AND reachable AND critical asset → Immediate remediation
- EPSS > 0.1 AND reachable → Next sprint remediation
- EPSS > 0.1 AND NOT reachable → Monitor, patch in next maintenance window
- EPSS < 0.1 → Batch remediation in scheduled updates
4. Track and validate: compare predicted vs. actual exploitation rates
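Steps 2-3 can be sketched as a small Python triage function. The thresholds are the ones listed above, and the input shape is an assumption for illustration:

```python
def triage_bucket(epss: float, reachable: bool, critical_asset: bool) -> str:
    """Map one enriched CVE onto the remediation buckets above."""
    if epss > 0.5 and reachable and critical_asset:
        return "immediate"
    if epss > 0.1 and reachable:
        return "next-sprint"
    if epss > 0.1:
        return "maintenance-window"
    return "batch"

def prioritize(cves):
    """cves: iterable of (cve_id, epss, reachable, critical_asset).
    Returns bucket -> CVE ids, highest EPSS first within each bucket."""
    buckets = {}
    for cve_id, epss, reachable, critical in sorted(cves, key=lambda c: -c[1]):
        bucket = triage_bucket(epss, reachable, critical)
        buckets.setdefault(bucket, []).append(cve_id)
    return buckets
```

Running the two scenarios from the table in section 6.2 through this function puts the CVSS 6.5 / EPSS 0.94 finding in `immediate` and the CVSS 9.8 / EPSS 0.003 finding in `batch`.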
7. Pipeline Integration — Scan Points
Security testing must be integrated at every stage of the pipeline, not bolted on at the end.
7.1 Recommended Scan Points
```
Developer Workstation
├── IDE Plugin: SAST (real-time feedback as code is written)
└── Pre-commit hook: secrets detection, basic SAST
        ↓
Commit / Pull Request
├── SAST: incremental scan on changed files
├── SCA: dependency check on lockfile changes
└── License check: new dependency license validation
        ↓
Build
├── SAST: full scan if not done at commit
├── SCA: full dependency tree analysis
└── Container scan: base image vulnerabilities (if applicable)
        ↓
Test Environment Deploy
├── DAST: automated scan against deployed application
└── IAST: instrumented during functional test execution
        ↓
Package / Artifact
├── Container scan: final image scan
├── SBOM generation: complete software bill of materials
└── Signature: artifact signing for integrity verification
        ↓
Production Deploy
├── Pre-deployment gate: all critical/high findings resolved
├── Runtime: RASP (Runtime Application Self-Protection)
└── Continuous SCA: new CVE monitoring for deployed dependencies
        ↓
Runtime / Monitoring
├── RASP: real-time attack detection and blocking
├── WAF: web application firewall rules
├── SCA continuous monitoring: alerts on new CVEs
└── EPSS monitoring: re-prioritization as EPSS scores change
```
7.2 Gate Policies
| Gate | Policy |
|---|---|
| PR merge | Zero critical SAST findings, zero critical/high SCA findings |
| Build promotion | All SAST findings triaged, SCA findings within policy |
| Deploy to staging | DAST scan complete, zero critical findings |
| Deploy to production | All critical/high findings resolved, SBOM generated, artifact signed |
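Gate policies work best encoded as code rather than wiki text, so the pipeline can evaluate them mechanically. A Python sketch for two of the gates above, assuming a simple severity-count dict produced by the scanners (the dict shape is an assumption):

```python
# Policies mirror the gate table: PR merge requires zero critical SAST
# and zero critical/high SCA findings; staging deploy requires a clean
# critical-level DAST result.
GATE_POLICIES = {
    "pr_merge": lambda f: (f["sast"]["critical"] == 0
                           and f["sca"]["critical"] == 0
                           and f["sca"]["high"] == 0),
    "deploy_staging": lambda f: f["dast"]["critical"] == 0,
}

def gate_allows(gate: str, findings: dict) -> bool:
    """Evaluate one pipeline gate against current finding counts."""
    return GATE_POLICIES[gate](findings)
```

Keeping the policy in version control gives you an auditable record of what each gate enforced and when it changed.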
8. False Positive Management
False positives are the silent killer of security testing programs. When 40-60% of findings are false positives (common for SAST), developers learn to ignore all findings β including the real ones.
8.1 Triage Workflows
Every finding must be triaged into one of four categories:
- True Positive → Fix: Genuine vulnerability. Assigned to a developer. Tracked to resolution.
- True Positive → Accept Risk: Genuine vulnerability but business decision to accept (documented, approved by security, time-limited, reviewed periodically).
- False Positive → Suppress: Not a real vulnerability. Suppressed with documentation of why. Reviewed periodically.
- Needs Investigation: Requires more context. Time-boxed — must be resolved within one sprint.
8.2 Suppression Rules
Suppression rules prevent known false positives from reappearing:
```yaml
# Suppression example (Semgrep-style rule file; the metadata keys are
# illustrative and exact suppression syntax varies by tool)
- id: suppress-false-positive
  pattern: ...
  paths:
    exclude:
      - "tests/**"  # Test code has a different risk profile
  metadata:
    suppress:
      - finding_id: "sql-injection-in-test-helper"
        reason: "Test helper uses parameterized queries internally"
        approved_by: "security-team"
        expires: "2027-01-01"
```
Key rules for suppression management:
- All suppressions must have a documented reason.
- All suppressions must have an expiration date (re-evaluate periodically).
- All suppressions must be approved by security team (not self-approved by developers).
- Suppressions are stored in version control and reviewed in PRs.
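The expiration rule is easy to enforce mechanically. A Python sketch that audits suppression entries for expiry (the entry shape is an assumption mirroring the example config):

```python
from datetime import date

def expired_suppressions(suppressions, today=None):
    """Return finding ids whose 'expires' date has passed, so the
    suppressed finding resurfaces for re-triage."""
    today = today or date.today()
    return [s["finding_id"] for s in suppressions
            if date.fromisoformat(s["expires"]) <= today]
```

A nightly CI job that fails when this list is non-empty turns "re-evaluate periodically" from a policy statement into an enforced behavior.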
8.3 Baseline Management
When introducing SAST/DAST to an existing codebase, you will have hundreds or thousands of initial findings. Do not attempt to fix them all before starting — you will never start.
Instead:
- Run initial scan and capture all findings as the baseline.
- Triage the baseline: prioritize by severity and EPSS.
- Set policy: zero new findings going forward (all new findings must be resolved before merge).
- Remediate baseline findings in scheduled sprints (burn-down approach).
- Track baseline reduction over time as a security program metric.
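The zero-new-findings policy in step 3 depends on stable finding fingerprints. A Python sketch, assuming findings carry rule, path, and snippet fields (an illustrative shape; scanners differ):

```python
import hashlib

def fingerprint(finding: dict) -> str:
    """Stable id for a finding: rule + file + flagged snippet.
    Line numbers are deliberately excluded because they shift on
    every unrelated edit and would break baseline matching."""
    key = f"{finding['rule']}|{finding['path']}|{finding['snippet']}"
    return hashlib.sha256(key.encode()).hexdigest()

def new_findings(baseline_fps: set, scan: list) -> list:
    """Only findings absent from the baseline block the merge;
    baseline items burn down in scheduled sprints instead."""
    return [f for f in scan if fingerprint(f) not in baseline_fps]
```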
9. Metrics
Effective security testing programs track these metrics:
| Metric | What It Measures | Target |
|---|---|---|
| Findings by severity | Volume of open vulnerabilities | Trending downward |
| Fix rate | % of findings remediated within SLA | >90% for critical, >80% high |
| Mean time to remediate (MTTR) | Average time from finding to fix | <7 days critical, <30 days high |
| False positive rate | % of findings that are false positives | <20% (tool tuning indicator) |
| Scan coverage | % of applications with automated security testing | 100% for production apps |
| Escaped vulnerabilities | Vulnerabilities found in production that scans missed | Zero (aspirational) |
| SCA currency | % of dependencies within N versions of latest | >80% within 2 major versions |
| EPSS-weighted risk exposure | Sum of EPSS scores for open, reachable vulnerabilities | Trending downward |
10. Key Takeaways
- CIS 16.12 requires BOTH static and dynamic analysis. SAST catches code-level flaws. DAST catches deployment and configuration flaws. Neither is sufficient alone.
- SCA is not optional. With 85% of vulnerabilities in dependencies, skipping SCA is like locking the front door while leaving the back wall missing.
- AI-powered tools are a capability multiplier, not a replacement. AI SAST finds zero-days. AI SCA reduces noise. AI DAST improves coverage. But all require human oversight and judgment.
- EPSS transforms vulnerability management. Switching from CVSS-only to EPSS-informed prioritization reduces workload 60-80% while improving actual security posture.
- False positive management determines program success. A tool that generates 50% false positives will be ignored. Invest in tuning, suppression, and baseline management.
- Security testing is continuous, not a phase. Scan at every stage of the pipeline. Monitor in production. Re-prioritize as threat landscape changes.
Review Questions
1. A developer says "We already do SAST, so we don't need DAST." Provide three specific vulnerability classes that DAST catches and SAST cannot.
2. Your SCA tool reports 230 CVEs across your dependency tree. You have capacity to remediate 15 per sprint. Describe the prioritization methodology you would use, referencing EPSS, reachability analysis, and asset criticality.
3. Explain how AI-powered SAST differs from traditional rule-based SAST in its approach to vulnerability detection. What types of vulnerabilities can AI SAST find that traditional tools miss?
4. Your organization is introducing SAST to a 500,000-line legacy codebase. The initial scan produces 3,200 findings. Describe your baseline management strategy.
5. A QA engineer notices that the DAST tool is not finding any vulnerabilities. Is this good news? What are three possible explanations, and how would you investigate each?
References
- CIS Controls v8, Safeguard 16.12 — Implement Code-Level Security Checks
- NIST SSDF PW.4 — Reuse Existing, Well-Secured Software
- NIST SSDF PW.7 — Review and/or Analyze Human-Readable Code
- OWASP Testing Guide v4.2
- FIRST.org EPSS — https://www.first.org/epss/
- Anthropic, "Claude AI Finds Novel Vulnerabilities in Open-Source Software" (2025)
- Semgrep Documentation — https://semgrep.dev/docs/
- OWASP ZAP Documentation — https://www.zaproxy.org/docs/
- OWASP Dependency-Check — https://owasp.org/www-project-dependency-check/
- Snyk State of Open Source Security Report (2025)
Study Guide
Key Takeaways
- CIS 16.12 requires both SAST and DAST — They catch fundamentally different vulnerability classes; neither alone is sufficient.
- 85% of vulnerabilities are in dependencies — SCA is critical since the average app has 200-500 direct and transitive dependencies.
- EPSS transforms prioritization — Predicts exploitation probability within 30 days; organizations report 60-80% reduction in remediation workload.
- AI-powered SAST finds zero-days — Asks "is this code dangerous?" vs. traditional "does this match a known pattern?"
- Only 1.6% of vulnerabilities pose actual risk — Tenable VPR analysis shows most CVEs are never exploited in practice.
- IAST combines SAST and DAST strengths — Instruments the running application for lower false positives and precise code location.
- False positive management determines program success — 40-60% false positive rates train developers to ignore all findings including real ones.
- Security testing is continuous, not a phase — Scan at every pipeline stage: commit, build, test, package, deploy, runtime.
Important Definitions
| Term | Definition |
|---|---|
| SAST | Static Application Security Testing — analyzes source code without execution (white-box) |
| DAST | Dynamic Application Security Testing — tests running applications from outside (black-box) |
| SCA | Software Composition Analysis — identifies known vulnerabilities in third-party dependencies |
| IAST | Interactive Application Security Testing — instruments runtime to observe code execution during testing |
| EPSS | Exploit Prediction Scoring System — ML-based probability of exploitation within 30 days (0-1) |
| Taint Analysis | SAST technique tracking untrusted input from source through transformations to sink |
| VPR | Vulnerability Priority Rating — Tenable's risk-based scoring combining CVSS, EPSS, and threat intel |
| Reachability Analysis | Determines whether vulnerable code in a dependency is actually called by the application |
| Baseline Management | Capturing initial scan findings and enforcing zero-new-findings going forward |
| RASP | Runtime Application Self-Protection — monitors and blocks attacks at runtime |
Quick Reference
- EPSS Prioritization: >0.5 + reachable + critical asset = immediate; >0.1 + reachable = next sprint; <0.1 = batch
- Gate Policies: PR merge (zero critical SAST/SCA), Build promotion (all triaged), Staging (DAST clean), Production (all critical/high resolved + SBOM + signed)
- MTTR Targets: Critical <7 days, High <30 days
- False Positive Rate Target: <20%
- Common Pitfalls: CVSS-only prioritization, skipping SCA, not tuning scanners, treating security testing as a single phase, ignoring false positive management
Review Questions
- Provide three specific vulnerability classes that DAST catches and SAST cannot, and explain why SAST misses them.
- With 230 CVEs and capacity for 15 per sprint, describe an EPSS-informed prioritization methodology including reachability and asset criticality.
- How would you implement a baseline management strategy for introducing SAST to a 500,000-line legacy codebase with 3,200 initial findings?
- What distinguishes AI-powered SAST from traditional rule-based SAST, and what vulnerability types can AI find that traditional tools miss?
- A DAST tool reports zero vulnerabilities β is this good news? What are three possible explanations and how would you investigate each?