5.2 — Security Testing Automation

Testing & Verification · 90 min · QA & Security

Learning Objectives

  • Explain the capabilities, limitations, and appropriate use cases for SAST, DAST, SCA, and IAST.
  • Design a security testing pipeline with scan points at every stage of the SDLC.
  • Evaluate AI-powered security testing tools and understand how they differ from traditional approaches.
  • Implement EPSS-based vulnerability prioritization to reduce remediation workload by 60-80%.
  • Manage false positives through triage workflows, suppression rules, and baseline management.

1. CIS Control 16.12 — Implement Code-Level Security Checks

CIS Safeguard 16.12 is an Implementation Group 3 (IG3) control that states:

"Apply static and dynamic analysis tools within the application lifecycle to verify that security practices are being adhered to."

This is a comprehensive mandate. "Within the application lifecycle" means at every stage, not just before release. "Static AND dynamic" means both — they catch fundamentally different classes of vulnerabilities. "Verify security practices" means the tools are checking that the security requirements defined in your SSDLC (Module 2.1) are actually implemented.

The control aligns directly with NIST SSDF practices PW.7 (review human-readable code to identify vulnerabilities) and PW.4 (reuse existing well-secured software when feasible and verify all other software). This module covers the full spectrum of automated security testing tools that implement this control.


2. SAST — Static Application Security Testing

2.1 What SAST Does

SAST analyzes source code, bytecode, or compiled binaries without executing the application. It is white-box testing — the tool has full visibility into the code structure, data flows, and control flows.

SAST tools build an abstract model of the application (Abstract Syntax Tree, Control Flow Graph, Data Flow Graph) and apply rules to identify patterns that indicate vulnerabilities. The most sophisticated tools perform taint analysis — tracking untrusted input from its entry point (source) through all transformations to where it is used (sink), flagging paths where sanitization is missing.
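The source-to-sink idea can be illustrated with a deliberately tiny checker built on Python's ast module. This is a sketch, not real taint analysis: it only flags calls to a few dangerous sinks whose first argument is not a literal, performs no dataflow tracking, and the SINKS set and find_risky_sinks name are invented for the example.

```python
import ast

# Illustrative sink names; real SAST engines track dataflow from sources
# (e.g. input(), request parameters) all the way to sinks like these.
SINKS = {"system", "popen", "eval", "exec"}

def find_risky_sinks(source: str) -> list[int]:
    """Return line numbers of sink calls whose first argument is not a constant."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", "")
        if name in SINKS and node.args and not isinstance(node.args[0], ast.Constant):
            findings.append(node.lineno)
    return findings

sample = """import os
cmd = input()
os.system(cmd)
os.system("ls -l")
"""
print(find_risky_sinks(sample))  # [3]: os.system(cmd) takes a variable, line 4 takes a literal
```

A real engine would additionally verify that `cmd` is actually derived from an untrusted source; this sketch shows only the sink-matching half of the pattern.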

2.2 When to Run SAST

  • Every commit: IDE plugins and pre-commit hooks provide immediate developer feedback.
  • Every build: CI pipeline stage runs the full rule set.
  • Every PR: Incremental analysis on changed files, blocking merge on critical findings.
  • Scheduled full scans: Weekly or monthly complete codebase analysis to catch issues in unchanged code with new rules.

2.3 What SAST Catches

| Vulnerability Class | How SAST Detects It |
| --- | --- |
| SQL Injection | Taint analysis: user input reaches SQL query without parameterization |
| Cross-Site Scripting | Taint analysis: user input rendered in HTML without encoding |
| Buffer Overflows | Bounds checking analysis on array/buffer operations |
| Insecure Cryptography | Pattern matching: weak algorithms (MD5, SHA1, DES), small key sizes |
| Hardcoded Secrets | Entropy analysis + pattern matching: API keys, passwords, tokens |
| Path Traversal | Taint analysis: user input in file system operations |
| Command Injection | Taint analysis: user input in system calls |
| Insecure Deserialization | Pattern matching: deserialization of untrusted data |
| XXE | Configuration analysis: XML parser settings |
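The hardcoded-secrets row combines two techniques, and they are easy to demonstrate together. The sketch below assumes an illustrative keyword regex and a 3.5-bits-per-character Shannon entropy threshold; real secret scanners ship hundreds of provider-specific rules and tune thresholds per credential type.

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character: high for random-looking strings."""
    if not s:
        return 0.0
    counts = {c: s.count(c) for c in set(s)}
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

# Illustrative pattern: assignments to secret-sounding names.
KEY_PATTERN = re.compile(
    r'(?:api[_-]?key|secret|token|passwd|password)\s*=\s*["\']([^"\']+)["\']', re.I)

def find_secrets(source: str, entropy_threshold: float = 3.5) -> list[str]:
    """Flag values that are both secret-named (pattern) and high-entropy."""
    return [m.group(1) for m in KEY_PATTERN.finditer(source)
            if shannon_entropy(m.group(1)) >= entropy_threshold]

config = 'api_key = "A9f3kZ72qLp0XsT1bV8wRd4N"\npassword = "changeme"\n'
print(find_secrets(config))  # ['A9f3kZ72qLp0XsT1bV8wRd4N']
```

Note the design choice: pattern matching alone flags `password = "changeme"` (a placeholder, not a leaked credential), while the entropy check filters it out.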

2.4 SAST Tools Landscape

Commercial:

| Tool | Strengths |
| --- | --- |
| Checkmarx SAST | Deep taint analysis, 30+ languages, enterprise scale |
| Fortify (OpenText) | Comprehensive rule set, government/DoD adoption |
| Veracode | SaaS-first, binary analysis (no source code required) |
| SonarQube Enterprise | Quality + security in one platform, strong IDE plugins |

Open Source:

| Tool | Language Focus | Notes |
| --- | --- | --- |
| Semgrep | 30+ languages | Custom rules in YAML, fast, low false positives |
| Bandit | Python | AST-based, well-maintained, pip install |
| Brakeman | Ruby on Rails | Framework-aware, fast, low config |
| SpotBugs + FindSecBugs | Java | Bytecode analysis, extensive security plugins |
| ESLint Security | JavaScript | eslint-plugin-security, integrates with existing linting |

2.5 AI-Powered SAST

Traditional SAST relies on pattern matching and predefined rules. AI-powered SAST uses semantic understanding to identify vulnerabilities that no rule could catch.

Claude Opus 4.6 vulnerability discovery: In 2025, Anthropic's Claude model was used to find over 500 high-severity vulnerabilities in open-source software. What made this significant was not the volume — it was the nature of the findings:

  • Zero-day vulnerabilities: Issues that no existing SAST tool had rules for, because the vulnerability patterns had never been cataloged.
  • Complex logic flaws: Multi-step vulnerabilities where the bug only manifests through a specific sequence of operations across multiple functions.
  • Context-aware analysis: Understanding not just what the code does, but what it was intended to do, and identifying where those diverge.

This represents a fundamental capability shift. Traditional SAST asks: "Does this code match a known vulnerable pattern?" AI-powered SAST asks: "Is this code doing something dangerous?" — a question that encompasses both known and unknown vulnerability classes.

Practical implications for teams:

  • AI SAST supplements but does not replace traditional SAST. Pattern-based tools are faster, more consistent, and have lower compute costs for known vulnerability classes.
  • AI SAST is most valuable for complex business logic, custom security controls, and novel code patterns that fall outside standard rule sets.
  • Results from AI SAST require experienced human review — the model can identify genuine zero-days, but it can also flag correct code that looks suspicious.

2.6 NIST SSDF PW.7 Alignment

NIST SSDF Practice PW.7 states: "Review and/or analyze human-readable code to identify vulnerabilities and verify compliance with security requirements." SAST directly implements PW.7 by:

  • Automatically reviewing all code changes for known vulnerability patterns.
  • Verifying compliance with coding standards (banned functions, required error handling, mandatory input validation).
  • Providing evidence of review for audit purposes.

3. DAST — Dynamic Application Security Testing

3.1 What DAST Does

DAST tests running applications by simulating attacks from the outside. It is black-box testing — the tool has no knowledge of the source code. It interacts with the application the same way an attacker would: through HTTP requests, form submissions, and API calls.

DAST tools crawl the application to discover endpoints, then systematically send malicious payloads to each endpoint and analyze the responses for signs of vulnerability.
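The "send payloads, analyze responses" loop can be sketched in a few lines. Everything here is a simplified stand-in: the payload list is a tiny illustrative sample, the injected `fetch` callable abstracts away the HTTP client, and checking for verbatim reflection is only a crude approximation of how scanners detect XSS.

```python
from typing import Callable

# Two illustrative probes; real DAST engines maintain large, context-aware payload sets.
XSS_PROBES = ['<script>alert(1)</script>', '"><img src=x onerror=alert(1)>']

def probe_reflection(fetch: Callable[[str], str], payloads: list[str]) -> list[str]:
    """Send each payload through fetch(payload) -> response body, and report
    payloads that come back reflected verbatim (i.e. unencoded)."""
    return [p for p in payloads if p in fetch(p)]

# Usage against a fake vulnerable endpoint that echoes input without encoding:
def fake_endpoint(q: str) -> str:
    return f"<html>You searched for: {q}</html>"

print(probe_reflection(fake_endpoint, XSS_PROBES))  # both probes reflected
```

An endpoint that HTML-encodes its output would return an empty list, which is exactly the signal a scanner uses to distinguish vulnerable from safe reflection points.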

3.2 When to Run DAST

  • Against staging/QA environments: After deployment, before promotion to production.
  • Post-deployment CI/CD stages: Automated DAST scan of the deployed artifact in a test environment.
  • Scheduled scans: Weekly against production (with care — DAST can be noisy and potentially disruptive).
  • After infrastructure changes: New load balancers, WAF rule changes, TLS configuration changes.

3.3 What DAST Catches That SAST Cannot

| Vulnerability Class | Why DAST Is Required |
| --- | --- |
| Server misconfiguration | TLS version, HTTP headers, directory listing, default credentials |
| Authentication flaws | Session fixation, cookie attributes, login bypass |
| Runtime injection | Server-side template injection, header injection |
| CORS misconfiguration | Runtime header analysis |
| WAF bypass | Tests the actual deployed defense, not the code behind it |
| Infrastructure vulnerabilities | Web server, reverse proxy, load balancer issues |

SAST sees the code. DAST sees the deployed application. Many vulnerabilities exist in the gap between the two: server configuration, deployment settings, infrastructure components, and runtime behavior that cannot be predicted from source code alone.

3.4 DAST Tools Landscape

Commercial:

| Tool | Strengths |
| --- | --- |
| Burp Suite Pro | Gold standard for manual + automated web app testing |
| Qualys WAS | Cloud-based, strong asset discovery, compliance reporting |
| Rapid7 InsightAppSec | Crawl + attack engine, CI/CD integration |

Open Source:

| Tool | Strengths |
| --- | --- |
| OWASP ZAP | Full-featured proxy, active/passive scanning, scripting engine |
| Nuclei | Template-based scanning, community-maintained templates, fast |
| Nikto | Web server scanner, 7,000+ tests, quick infrastructure audit |

3.5 AI-Enhanced DAST

AI improves DAST in three critical areas:

  1. Smarter crawling: Traditional DAST crawlers struggle with JavaScript-heavy applications, single-page apps, and complex navigation flows. AI-enhanced crawlers understand application structure semantically, discovering endpoints that traditional crawlers miss.

  2. Intelligent payload generation: Instead of brute-forcing payloads from a static list, AI generates context-aware payloads based on the application's technology stack, input validation patterns, and observed responses.

  3. Context-aware testing: AI can understand the business context of an endpoint (e.g., "this is a password reset flow") and apply relevant attack patterns (token prediction, race conditions, workflow bypass) rather than generic tests.


4. SCA — Software Composition Analysis

4.1 What SCA Does

SCA analyzes third-party and open-source components in your application for known vulnerabilities (CVEs), license compliance risks, and version currency.

4.2 Why SCA Is Critical

The numbers make the case unambiguously:

  • ~85% of security vulnerabilities are in third-party dependencies, not in code your team wrote.
  • 10,000+ malicious packages are published to public registries per quarter (npm, PyPI, RubyGems, Maven Central).
  • The average application has 200-500 direct and transitive dependencies. Each is a potential attack surface.
  • Supply chain attacks (SolarWinds, Log4Shell, XZ Utils) demonstrate that compromised dependencies can bypass every other security control.

4.3 When to Run SCA

  • Every build: Check dependencies against vulnerability databases.
  • Every dependency change: package.json, requirements.txt, pom.xml, go.mod changes trigger SCA.
  • Continuous monitoring: New CVEs are published daily. A dependency that was safe yesterday may have a critical CVE today.
  • Pre-merge: Block PRs that introduce dependencies with known critical/high vulnerabilities.

4.4 SCA Capabilities

CVE matching: Cross-referencing your dependency tree against multiple vulnerability databases:

  • NVD (National Vulnerability Database)
  • OSV (Open Source Vulnerabilities)
  • GitHub Advisory Database
  • Vendor-specific advisories (e.g., Red Hat, Ubuntu)

License risk analysis: Identifying dependencies with licenses incompatible with your distribution model. GPL in a proprietary product. AGPL in a SaaS. Unlicensed code with no terms at all.

Transitive dependency analysis: Your code depends on Library A. Library A depends on Library B. Library B depends on Library C, which has a critical CVE. Without transitive analysis, you would never know Library C is in your supply chain.

Reachability analysis: Not all CVEs in your dependencies are exploitable. Reachability analysis determines whether your code actually calls the vulnerable function. A CVE in a library function you never invoke is a lower priority than one in a function you call on every request.
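One way to picture reachability analysis is as a graph search from the application's entry point through the call graph. The call graph, function names, and the BFS itself below are a hypothetical sketch; production tools build the graph from static analysis of your code plus each dependency's internals.

```python
from collections import deque

def is_reachable(call_graph: dict[str, list[str]], entry: str, vulnerable: str) -> bool:
    """BFS from the application entry point: a CVE matters far more when the
    vulnerable function sits on a path the application can actually execute."""
    seen, queue = {entry}, deque([entry])
    while queue:
        fn = queue.popleft()
        if fn == vulnerable:
            return True
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

# Hypothetical app: libB.unsafe_eval exists in the dependency but is never called.
graph = {
    "main": ["app.handle"],
    "app.handle": ["libA.parse"],
    "libA.parse": ["libB.render"],
}
print(is_reachable(graph, "main", "libB.render"))       # True: prioritize
print(is_reachable(graph, "main", "libB.unsafe_eval"))  # False: lower priority
```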

Malicious package detection: Identifying packages that are intentionally malicious — typosquatting (e.g., lodas instead of lodash), dependency confusion attacks, or compromised maintainer accounts publishing backdoored versions.
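Typosquatting detection can be approximated with string-similarity checks against popular package names. The allow-list and the 0.85 similarity cutoff below are illustrative; real detectors compare against registry download statistics and many more signals.

```python
import difflib

# Illustrative allow-list of popular package names.
POPULAR = ["lodash", "requests", "express", "numpy", "react"]

def typosquat_suspects(package: str, cutoff: float = 0.85) -> list[str]:
    """Flag names suspiciously close to, but not equal to, a popular package."""
    if package in POPULAR:
        return []  # exact match: the legitimate package itself
    return difflib.get_close_matches(package, POPULAR, n=3, cutoff=cutoff)

print(typosquat_suspects("lodas"))     # ['lodash']: likely typosquat
print(typosquat_suspects("requests"))  # []: exact match, not suspicious
```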

4.5 SCA Tools Landscape

Commercial:

| Tool | Strengths |
| --- | --- |
| Snyk | Developer-first UX, auto-fix PRs, extensive ecosystem |
| Mend.io (WhiteSource) | Deep license analysis, policy engine, reachability |
| Black Duck (Synopsys) | Enterprise-grade, binary analysis, M&A due diligence |
| FOSSA | License compliance focus, policy-as-code, SBOM generation |

Open Source:

| Tool | Strengths |
| --- | --- |
| OWASP Dependency-Check | Mature, multi-language, NVD integration |
| OSV-Scanner (Google) | Uses OSV database, fast, Go-based |
| Trivy (Aqua) | Container + filesystem + IaC scanning, comprehensive |

4.6 AI-Powered SCA

AI is transforming SCA from a "list of CVEs" tool into an intelligent risk prioritization engine:

Mend.io: Combines CVSS 4.0 scoring with EPSS (Exploit Prediction Scoring System) probabilities and reachability analysis. Instead of saying "you have 47 critical CVEs," it says "you have 3 CVEs that are reachable in your code AND have a high probability of exploitation in the wild." This reduces noise by 80-90%.

Sonatype (Nexus): AI-powered model and library governance. Automatically identifies when AI-generated code introduces dependencies that are unmaintained, malicious, or license-incompatible. Specifically designed for the era where developers accept AI suggestions without checking what packages they pull in.

Arnica: Combines OpenSSF Scorecards (open-source project health metrics) with EPSS to assess not just "is this vulnerable?" but "is this project likely to have MORE vulnerabilities?" Projects with no security policy, no code review, no signed releases, and declining maintainer activity are flagged as high-risk regardless of current CVE count.

4.7 NIST SSDF PW.4 Alignment

NIST SSDF Practice PW.4 states: "Reuse existing, well-secured software when feasible instead of duplicating functionality, which may lead to the introduction of new vulnerabilities. Verify that all other software used is current and well-maintained."

SCA implements PW.4 by:

  • Verifying dependencies are well-maintained (update frequency, maintainer activity).
  • Identifying known vulnerabilities in reused software.
  • Ensuring license compliance (legal risk is also a security risk).
  • Monitoring for supply chain compromise.

5. IAST — Interactive Application Security Testing

IAST combines the strengths of SAST and DAST by instrumenting the running application. An agent deployed within the application runtime observes code execution in real-time while the application is being tested.

5.1 How IAST Works

  1. An instrumentation agent is deployed alongside the application (Java agent, .NET profiler, Node.js require hook).
  2. As functional tests or DAST scans exercise the application, the agent observes:
    • Which code paths are executed
    • How data flows from input to output
    • Where security controls (validation, encoding, parameterization) are applied or missing
  3. When a vulnerability is detected, the agent reports the exact source code location, the data flow, and the HTTP request that triggered it.
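The agent idea can be caricatured in a few lines of Python: wrap a sink function so every real call is observed at runtime. This is a toy: the Tainted marker class, instrument_sink decorator, and findings list are invented for the sketch, and real agents hook bytecode/profilers and propagate taint through string operations rather than requiring an explicit marker type.

```python
import functools

findings: list[str] = []

class Tainted(str):
    """Marker type for values derived from untrusted input (the 'source')."""

def instrument_sink(sink_name: str):
    """IAST-style wrapper: observe live calls and record when tainted data
    reaches the sink; in a real agent this fires only during test traffic."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for a in args:
                if isinstance(a, Tainted):
                    findings.append(f"{sink_name}: tainted value {a!r}")
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@instrument_sink("sql.execute")
def execute(query: str) -> None:  # stand-in for a DB driver call
    pass

user_input = Tainted("1 OR 1=1")
execute(user_input)   # tainted value reaches the sink: recorded
execute("SELECT 1")   # constant query: no finding
print(findings)       # one finding, for the tainted call
```

Because the finding is produced at the moment of a real call, the agent can report the exact sink, the offending value, and (in a real deployment) the HTTP request that carried it, which is the precision advantage described in 5.2 below.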

5.2 IAST Advantages

  • Lower false positive rate: IAST sees actual runtime behavior, not theoretical code paths. If the code is instrumented and the vulnerability is triggered, it is real.
  • Precise location: IAST pinpoints the exact line of code and the exact HTTP request. Developers get actionable findings, not abstract warnings.
  • Runtime context: IAST understands that a potential SQL injection is parameterized at runtime (false positive in SAST) or that a variable marked "safe" by SAST is actually user-controlled through a framework binding (false negative in SAST).

5.3 IAST Limitations

  • Requires application instrumentation — deployment and performance overhead.
  • Only detects vulnerabilities on exercised code paths — coverage depends on test quality.
  • Language and framework support varies.
  • Not suitable for production deployment (performance impact).

6. EPSS-Based Vulnerability Prioritization

6.1 The Prioritization Problem

A typical enterprise application has hundreds to thousands of known vulnerabilities across its dependency tree. CVSS scores tell you the theoretical severity of each vulnerability. They do not tell you which ones will actually be exploited.

The result: teams waste enormous effort patching CVSS 9.8 vulnerabilities that have no known exploit, no proof of concept, and no attacker interest — while a CVSS 6.5 vulnerability with an active exploit in the wild goes unpatched.

6.2 What EPSS Is

The Exploit Prediction Scoring System (EPSS), maintained by FIRST.org, uses machine learning to predict the probability that a vulnerability will be exploited in the wild within the next 30 days. It is a probability score from 0 to 1.
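EPSS scores are published by FIRST.org through a public JSON API. A sketch of fetching and parsing them is below; the endpoint and response shape match FIRST's documented API as I understand it, but verify against their current docs, and the sample scores in the offline example are made up, not current values.

```python
import json
import urllib.request

EPSS_API = "https://api.first.org/data/v1/epss"  # FIRST.org public EPSS API

def parse_epss(payload: dict) -> dict[str, float]:
    """Map CVE id -> EPSS probability from a FIRST API response body."""
    return {row["cve"]: float(row["epss"]) for row in payload.get("data", [])}

def fetch_epss(cve_ids: list[str]) -> dict[str, float]:
    """Query current EPSS scores for a batch of CVEs (network call)."""
    url = f"{EPSS_API}?cve={','.join(cve_ids)}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_epss(json.load(resp))

# Offline illustration of the response shape the parser expects:
sample = {"data": [{"cve": "CVE-2021-44228", "epss": "0.944", "percentile": "0.999"}]}
print(parse_epss(sample))  # {'CVE-2021-44228': 0.944}
```

Because scores are re-estimated as new exploit intelligence arrives, pipelines typically re-fetch them on a schedule rather than caching them with the scan results.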

The critical insight:

| Scenario | CVSS | EPSS | Priority |
| --- | --- | --- | --- |
| A | 6.5 | 0.94 | HIGH — 94% chance of exploitation in 30 days |
| B | 9.8 | 0.003 | LOW — 0.3% chance of exploitation in 30 days |

Under CVSS-only prioritization, Scenario B is patched first. Under EPSS-informed prioritization, Scenario A is patched first. The second approach prevents actual breaches; the first prevents theoretical ones.

6.3 Real-World Impact

Organizations that have migrated to EPSS-informed prioritization consistently report 60-80% reduction in effective remediation workload. They patch fewer vulnerabilities but prevent more exploits.

6.4 Key Platforms Using AI/EPSS for Prioritization

Microsoft Vuln.AI: AI-driven vulnerability assessment achieving 50%+ faster triage. Correlates EPSS, threat intelligence, asset criticality, and exploit availability to produce actionable priority rankings.

CrowdStrike ExPRT.AI: Expert Prediction Rating system that identifies the 5% of vulnerabilities that pose 95% of the actual risk. Reduces remediation scope by an order of magnitude while increasing security posture.

Tenable VPR (Vulnerability Priority Rating): Analyzes the full CVE landscape and concludes that only 1.6% of vulnerabilities represent actual exploitable risk at any given time. VPR combines CVSS, EPSS, exploit maturity, and threat intelligence to focus remediation on that 1.6%.

6.5 Implementing EPSS in Your Workflow

1. SCA scan produces list of CVEs in dependencies
2. Enrich each CVE with:
   - CVSS score (severity)
   - EPSS score (exploitation probability)
   - Reachability analysis (is the vulnerable code actually called?)
   - Asset criticality (how important is this application?)
3. Prioritize:
   - EPSS > 0.5 AND reachable AND critical asset → Immediate remediation
   - EPSS > 0.1 AND reachable → Next sprint remediation
   - EPSS > 0.1 AND NOT reachable → Monitor, patch in next maintenance window
   - EPSS < 0.1 → Batch remediation in scheduled updates
4. Track and validate: compare predicted vs. actual exploitation rates
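The tiering rules in step 3 translate directly into code. A minimal sketch, where the Finding fields and tier names simply mirror the workflow above:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    epss: float         # exploitation probability, 0-1
    reachable: bool     # does our code actually call the vulnerable function?
    critical_asset: bool

def priority(f: Finding) -> str:
    """Apply the tiering rules from the workflow above, top tier first."""
    if f.epss > 0.5 and f.reachable and f.critical_asset:
        return "immediate"
    if f.epss > 0.1 and f.reachable:
        return "next-sprint"
    if f.epss > 0.1:
        return "maintenance-window"
    return "batch"

findings = [
    Finding("CVE-A", 0.94, True, True),    # the Scenario A case above
    Finding("CVE-B", 0.30, True, False),
    Finding("CVE-C", 0.30, False, False),
    Finding("CVE-D", 0.003, True, True),   # high CVSS alone does not raise this
]
for f in sorted(findings, key=lambda f: f.epss, reverse=True):
    print(f.cve, priority(f))
```

Note that CVE-D lands in the batch tier even on a critical asset: without exploitation probability, severity alone never reaches the top tiers in this scheme.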

7. Pipeline Integration — Scan Points

Security testing must be integrated at every stage of the pipeline, not bolted on at the end.

7.1 Scan Points by Stage

Developer Workstation
├── IDE Plugin: SAST (real-time feedback as code is written)
├── Pre-commit hook: secrets detection, basic SAST
│
Commit / Pull Request
├── SAST: incremental scan on changed files
├── SCA: dependency check on lockfile changes
├── License check: new dependency license validation
│
Build
├── SAST: full scan if not done at commit
├── SCA: full dependency tree analysis
├── Container scan: base image vulnerabilities (if applicable)
│
Test Environment Deploy
├── DAST: automated scan against deployed application
├── IAST: instrumented during functional test execution
│
Package / Artifact
├── Container scan: final image scan
├── SBOM generation: complete software bill of materials
├── Signature: artifact signing for integrity verification
│
Production Deploy
├── Pre-deployment gate: all critical/high findings resolved
├── Runtime: RASP (Runtime Application Self-Protection)
├── Continuous SCA: new CVE monitoring for deployed dependencies
│
Runtime / Monitoring
├── RASP: real-time attack detection and blocking
├── WAF: web application firewall rules
├── SCA continuous monitoring: alerts on new CVEs
├── EPSS monitoring: re-prioritization as EPSS scores change

7.2 Gate Policies

| Gate | Policy |
| --- | --- |
| PR merge | Zero critical SAST findings, zero critical/high SCA findings |
| Build promotion | All SAST findings triaged, SCA findings within policy |
| Deploy to staging | DAST scan complete, zero critical findings |
| Deploy to production | All critical/high findings resolved, SBOM generated, artifact signed |

8. False Positive Management

False positives are the silent killer of security testing programs. When 40-60% of findings are false positives (common for SAST), developers learn to ignore all findings — including the real ones.

8.1 Triage Workflows

Every finding must be triaged into one of four categories:

  1. True Positive — Fix: Genuine vulnerability. Assigned to a developer. Tracked to resolution.
  2. True Positive — Accept Risk: Genuine vulnerability, but a business decision to accept it (documented, approved by security, time-limited, reviewed periodically).
  3. False Positive — Suppress: Not a real vulnerability. Suppressed with documentation of why. Reviewed periodically.
  4. Needs Investigation: Requires more context. Time-boxed — must be resolved within one sprint.

8.2 Suppression Rules

Suppression rules prevent known false positives from reappearing:

# Semgrep-style suppression example (illustrative schema)
- id: suppress-false-positive
  pattern: ...
  paths:
    exclude:
      - "tests/**"  # Test code has different risk profile
  metadata:
    suppress:
      - finding_id: "sql-injection-in-test-helper"
        reason: "Test helper uses parameterized queries internally"
        approved_by: "security-team"
        expires: "2027-01-01"

Key rules for suppression management:

  • All suppressions must have a documented reason.
  • All suppressions must have an expiration date (re-evaluate periodically).
  • All suppressions must be approved by security team (not self-approved by developers).
  • Suppressions are stored in version control and reviewed in PRs.
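The expiration rule only works if something enforces it. A CI sketch that fails the build when a suppression outlives its expiry date; the suppression records below are illustrative, reusing the fields from the example above:

```python
from datetime import date

def expired_suppressions(suppressions: list[dict], today: date) -> list[str]:
    """Return finding ids whose suppression has passed its expiry date.
    Run in CI so stale suppressions fail the build and force a re-review."""
    return [s["finding_id"] for s in suppressions
            if date.fromisoformat(s["expires"]) <= today]

suppressions = [
    {"finding_id": "sql-injection-in-test-helper", "expires": "2027-01-01",
     "reason": "Test helper uses parameterized queries internally"},
    {"finding_id": "old-xss-suppression", "expires": "2024-06-01",
     "reason": "Legacy page removed from routing"},  # hypothetical stale entry
]
print(expired_suppressions(suppressions, date(2026, 3, 1)))  # ['old-xss-suppression']
```

Failing the pipeline on a non-empty result turns "reviewed periodically" from a policy statement into an enforced invariant.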

8.3 Baseline Management

When introducing SAST/DAST to an existing codebase, you will have hundreds or thousands of initial findings. Do not attempt to fix them all before starting — you will never start.

Instead:

  1. Run initial scan and capture all findings as the baseline.
  2. Triage the baseline: prioritize by severity and EPSS.
  3. Set policy: zero new findings going forward (all new findings must be resolved before merge).
  4. Remediate baseline findings in scheduled sprints (burn-down approach).
  5. Track baseline reduction over time as a security program metric.
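The zero-new-findings policy in step 3 reduces to a set difference between the baseline and the current scan. A sketch, where the finding fingerprints (rule id, file, line) are hypothetical:

```python
def gate_on_baseline(baseline: set[str], current: set[str]) -> tuple[set[str], set[str]]:
    """Split the current scan into (new findings, remaining baseline findings).
    Policy: fail the build if `new` is non-empty; track `remaining` shrinking
    over time as the burn-down metric from step 5."""
    new = current - baseline
    remaining = current & baseline
    return new, remaining

# Hypothetical fingerprints in "rule:file:line" form.
baseline = {"sqli:auth.py:42", "xss:views.py:88", "path:io.py:7"}
current  = {"sqli:auth.py:42", "xss:views.py:88", "cmdi:tasks.py:19"}

new, remaining = gate_on_baseline(baseline, current)
print(sorted(new))        # ['cmdi:tasks.py:19'] -> block the merge
print(sorted(remaining))  # baseline shrank: 'path:io.py:7' was fixed
```

In practice the baseline set lives in version control so the burn-down is auditable, and fingerprints need to survive line-number drift (e.g. by hashing the surrounding code rather than the exact line).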

9. Metrics

Effective security testing programs track these metrics:

| Metric | What It Measures | Target |
| --- | --- | --- |
| Findings by severity | Volume of open vulnerabilities | Trending downward |
| Fix rate | % of findings remediated within SLA | >90% for critical, >80% for high |
| Mean time to remediate (MTTR) | Average time from finding to fix | <7 days critical, <30 days high |
| False positive rate | % of findings that are false positives | <20% (tool tuning indicator) |
| Scan coverage | % of applications with automated security testing | 100% for production apps |
| Escaped vulnerabilities | Vulnerabilities found in production that scans missed | Zero (aspirational) |
| SCA currency | % of dependencies within N versions of latest | >80% within 2 major versions |
| EPSS-weighted risk exposure | Sum of EPSS scores for open, reachable vulnerabilities | Trending downward |

10. Key Takeaways

  1. CIS 16.12 requires BOTH static and dynamic analysis. SAST catches code-level flaws. DAST catches deployment and configuration flaws. Neither is sufficient alone.
  2. SCA is not optional. With 85% of vulnerabilities in dependencies, skipping SCA is like locking the front door while leaving the back wall missing.
  3. AI-powered tools are a capability multiplier, not a replacement. AI SAST finds zero-days. AI SCA reduces noise. AI DAST improves coverage. But all require human oversight and judgment.
  4. EPSS transforms vulnerability management. Switching from CVSS-only to EPSS-informed prioritization reduces workload 60-80% while improving actual security posture.
  5. False positive management determines program success. A tool that generates 50% false positives will be ignored. Invest in tuning, suppression, and baseline management.
  6. Security testing is continuous, not a phase. Scan at every stage of the pipeline. Monitor in production. Re-prioritize as threat landscape changes.

Review Questions

  1. A developer says "We already do SAST, so we don't need DAST." Provide three specific vulnerability classes that DAST catches and SAST cannot.

  2. Your SCA tool reports 230 CVEs across your dependency tree. You have capacity to remediate 15 per sprint. Describe the prioritization methodology you would use, referencing EPSS, reachability analysis, and asset criticality.

  3. Explain how AI-powered SAST differs from traditional rule-based SAST in its approach to vulnerability detection. What types of vulnerabilities can AI SAST find that traditional tools miss?

  4. Your organization is introducing SAST to a 500,000-line legacy codebase. The initial scan produces 3,200 findings. Describe your baseline management strategy.

  5. A QA engineer notices that the DAST tool is not finding any vulnerabilities. Is this good news? What are three possible explanations, and how would you investigate each?


Study Guide

Key Takeaways

  1. CIS 16.12 requires both SAST and DAST — They catch fundamentally different vulnerability classes; neither alone is sufficient.
  2. 85% of vulnerabilities are in dependencies — SCA is critical since the average app has 200-500 direct and transitive dependencies.
  3. EPSS transforms prioritization — Predicts exploitation probability within 30 days; organizations report 60-80% reduction in remediation workload.
  4. AI-powered SAST finds zero-days — Asks "is this code dangerous?" vs. traditional "does this match a known pattern?"
  5. Only 1.6% of vulnerabilities pose actual risk — Tenable VPR analysis shows most CVEs are never exploited in practice.
  6. IAST combines SAST and DAST strengths — Instruments the running application for lower false positives and precise code location.
  7. False positive management determines program success — 40-60% false positive rates train developers to ignore all findings including real ones.
  8. Security testing is continuous, not a phase — Scan at every pipeline stage: commit, build, test, package, deploy, runtime.

Important Definitions

| Term | Definition |
| --- | --- |
| SAST | Static Application Security Testing — analyzes source code without execution (white-box) |
| DAST | Dynamic Application Security Testing — tests running applications from outside (black-box) |
| SCA | Software Composition Analysis — identifies known vulnerabilities in third-party dependencies |
| IAST | Interactive Application Security Testing — instruments the runtime to observe code execution during testing |
| EPSS | Exploit Prediction Scoring System — ML-based probability of exploitation within 30 days (0-1) |
| Taint Analysis | SAST technique tracking untrusted input from source through transformations to sink |
| VPR | Vulnerability Priority Rating — Tenable's risk-based scoring combining CVSS, EPSS, and threat intel |
| Reachability Analysis | Determines whether vulnerable code in a dependency is actually called by the application |
| Baseline Management | Capturing initial scan findings and enforcing zero-new-findings going forward |
| RASP | Runtime Application Self-Protection — monitors and blocks attacks at runtime |

Quick Reference

  • EPSS Prioritization: >0.5 + reachable + critical asset = immediate; >0.1 + reachable = next sprint; <0.1 = batch
  • Gate Policies: PR merge (zero critical SAST/SCA), Build promotion (all triaged), Staging (DAST clean), Production (all critical/high resolved + SBOM + signed)
  • MTTR Targets: Critical <7 days, High <30 days
  • False Positive Rate Target: <20%
  • Common Pitfalls: CVSS-only prioritization, skipping SCA, not tuning scanners, treating security testing as a single phase, ignoring false positive management

Review Questions

  1. Provide three specific vulnerability classes that DAST catches and SAST cannot, and explain why SAST misses them.
  2. With 230 CVEs and capacity for 15 per sprint, describe an EPSS-informed prioritization methodology including reachability and asset criticality.
  3. How would you implement a baseline management strategy for introducing SAST to a 500,000-line legacy codebase with 3,200 initial findings?
  4. What distinguishes AI-powered SAST from traditional rule-based SAST, and what vulnerability types can AI find that traditional tools miss?
  5. A DAST tool reports zero vulnerabilities β€” is this good news? What are three possible explanations and how would you investigate each?
Knowledge Check
Q1. CIS Safeguard 16.12 requires which types of analysis tools within the application lifecycle?

Q2. What technique does SAST use to track untrusted input from entry point through transformations to where it is used?

Q3. What percentage of security vulnerabilities are found in third-party dependencies rather than custom code?

Q4. What distinguishes AI-powered SAST from traditional rule-based SAST?

Q5. What does EPSS measure and what is its scoring range?

Q6. Organizations that migrate to EPSS-informed prioritization consistently report what level of reduction in remediation workload?

Q7. What is IAST and how does it differ from SAST and DAST?

Q8. When introducing SAST to an existing legacy codebase with thousands of initial findings, what is the recommended baseline management approach?

Q9. According to the recommended gate policies, what must be true before deploying to production?

Q10. Tenable VPR analysis concludes that what percentage of vulnerabilities represent actual exploitable risk at any given time?
