4.2 — Change Management & Release Control
Learning Objectives
- ✓ Implement a structured change management workflow from request through closure
- ✓ Classify changes by risk level and route them through appropriate approval paths
- ✓ Design and operate a Change Advisory Board with defined decision criteria
- ✓ Establish release criteria checklists that enforce minimum security acceptability
- ✓ Apply severity rating systems (CVSS, EPSS) for risk-based vulnerability prioritization
- ✓ Define and enforce remediation SLAs tied to severity
- ✓ Implement policy-as-code for compliance controls in deployment pipelines
- ✓ Evaluate the capabilities and risks of AI-assisted change management and triage
1. CIS Control 16.6 — Severity Rating System
CIS Control 16.6 requires organizations to establish and maintain a severity rating system and process to facilitate prioritizing the remediation of discovered software vulnerabilities. This is not a suggestion to “prioritize things.” It is a mandate to establish a formal, documented, consistently applied system with the following characteristics:
- Defined severity levels with unambiguous criteria for classification
- Prioritization logic that determines remediation order based on severity, exploitability, and business impact
- Minimum security acceptability — a defined threshold below which software cannot be released or must be remediated before the next release
- Annual review — the severity rating system must be reviewed and updated at least annually to reflect changes in the threat landscape, tooling, and organizational risk appetite
Organizations that lack a severity rating system make ad hoc decisions about which vulnerabilities to fix. Ad hoc decisions under pressure consistently favor speed over security.
2. Change Management Fundamentals
Change management is the discipline of ensuring that every modification to a system — code, configuration, infrastructure, data — is intentional, authorized, tested, and reversible. In the context of secure software development, change management is the process that prevents “it works on my machine” from becoming “it broke in production.”
2.1 Standard Change Workflow
Every change follows a defined lifecycle. Skipping steps is how incidents happen.
Step 1 — Request
A change request (CR) is submitted with: description of the change, business justification, affected systems and components, estimated risk, proposed implementation window, rollback plan, and testing evidence. The CR is the starting document for the audit trail.
Step 2 — Risk Classification
The change is classified into one of three categories (detailed in Section 2.2). Classification determines the approval path, testing requirements, and deployment constraints.
Step 3 — Review and Approval
Based on risk classification: standard changes proceed with pre-approved authority, normal changes go through CAB review, emergency changes receive expedited single-approver authorization. The approver verifies that the change has been tested, the risk is acceptable, and the rollback plan is viable.
Step 4 — Testing
Testing is not optional, and “I tested it locally” is not testing. The change must pass through the full test pipeline: unit tests, integration tests, security scans (SAST, DAST, SCA), and environment-specific validation (staging, pre-production). Testing evidence is attached to the CR.
Step 5 — Deployment Window
Changes are deployed within approved maintenance windows unless classified as emergency. The deployment window accounts for team availability, monitoring coverage, and rollback time. Deploying at 4:55 PM on a Friday before a holiday weekend is not a deployment window — it is a career-limiting decision.
Step 6 — Execution
The change is deployed following the documented implementation plan. Deployment should be automated (CI/CD pipeline), not manual. Manual deployment steps are error-prone and not reproducible.
Step 7 — Post-Implementation Review (PIR)
After deployment, verify: Did the change achieve its objective? Are there unexpected side effects? Are monitoring systems showing normal behavior? Is the rollback plan still viable if issues emerge in the coming hours/days? The PIR is documented in the CR.
Step 8 — Documentation and Closure
The CR is updated with: actual implementation details (vs. planned), any deviations, PIR results, and final status. The CR is closed. This completed record is the audit artifact.
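The eight steps above form a strict sequence, which can be modeled as a simple state machine. The sketch below (state names and class shape are illustrative, not a prescribed tool) enforces the "no skipping steps" rule and keeps a history for the audit trail:

```python
from enum import Enum, auto

class CRState(Enum):
    REQUESTED = auto()
    CLASSIFIED = auto()
    APPROVED = auto()
    TESTED = auto()
    SCHEDULED = auto()   # deployment window assigned
    DEPLOYED = auto()
    REVIEWED = auto()    # post-implementation review complete
    CLOSED = auto()

# Allowed forward transitions; attempting to skip a step raises an error.
TRANSITIONS = {
    CRState.REQUESTED: CRState.CLASSIFIED,
    CRState.CLASSIFIED: CRState.APPROVED,
    CRState.APPROVED: CRState.TESTED,
    CRState.TESTED: CRState.SCHEDULED,
    CRState.SCHEDULED: CRState.DEPLOYED,
    CRState.DEPLOYED: CRState.REVIEWED,
    CRState.REVIEWED: CRState.CLOSED,
}

class ChangeRequest:
    def __init__(self, description: str):
        self.description = description
        self.state = CRState.REQUESTED
        self.history: list[CRState] = [CRState.REQUESTED]  # audit record

    def advance(self, target: CRState) -> None:
        expected = TRANSITIONS.get(self.state)
        if target is not expected:
            raise ValueError(
                f"cannot skip from {self.state.name} to {target.name}")
        self.state = target
        self.history.append(target)
```

A real ticketing workflow adds approvers, timestamps, and evidence attachments at each transition; the point here is that the transition table, not reviewer discipline, enforces the lifecycle.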
2.2 Risk Classification
Not all changes carry equal risk. The classification system determines the rigor of the approval process.
Standard Changes (Pre-Approved)
- Definition: low-risk, well-understood changes with documented procedures and established success history
- Examples: dependency patch updates (non-breaking), configuration changes within approved parameters, routine certificate renewals, adding monitoring alerts
- Approval: pre-approved by policy, no CAB review required
- Constraints: must follow documented procedure exactly; any deviation reclassifies as Normal
- Audit: logged and available for review, but no individual approval record required
Normal Changes (CAB Review)
- Definition: changes with moderate risk, changes to production systems, changes affecting multiple components, or changes without established precedent
- Examples: new feature deployments, infrastructure changes, database schema modifications, dependency major version upgrades, security architecture changes
- Approval: submitted to Change Advisory Board with full documentation, approved prior to implementation
- Lead time: typically 3-5 business days from submission to CAB review
- Audit: full approval chain documented
Emergency Changes (Expedited)
- Definition: changes required to resolve a production incident or address an actively exploited vulnerability
- Examples: hotfixes for production outages, patches for zero-day vulnerabilities, configuration changes to mitigate active attacks
- Approval: single authorized approver (on-call manager, security lead, or designated emergency approver)
- Constraint: mandatory post-implementation review within 24 hours — the expedited approval is not a blank check, it is a loan against future accountability
- Documentation: retroactive CR created within 24 hours with full details
- Audit: emergency change records are flagged and included in the next CAB meeting for review
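The routing logic for the three categories is small enough to state directly. A minimal sketch, assuming boolean flags that a real system would derive from CR metadata (the flag names are illustrative):

```python
def classify_change(
    has_documented_procedure: bool,
    deviates_from_procedure: bool,
    resolves_active_incident: bool,
) -> str:
    """Route a change to its approval path per the three categories above."""
    if resolves_active_incident:
        return "Emergency"   # expedited single approver + 24-hour review debt
    if has_documented_procedure and not deviates_from_procedure:
        return "Standard"    # pre-approved by policy
    return "Normal"          # full CAB review
```

Note that any deviation from a documented procedure falls through to Normal, matching the constraint on Standard changes.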
2.3 Change Advisory Board (CAB)
The CAB is the governance body responsible for reviewing and approving Normal changes and retrospectively reviewing Emergency changes.
Composition:
- Change manager (chair)
- Engineering leads (application, platform, infrastructure)
- Security representative (mandatory, not optional)
- QA/testing representative
- Business stakeholder(s) for impacted areas
- Operations/SRE representative
Decision criteria:
- Is the risk assessment accurate and complete?
- Has testing been adequate for the scope of change?
- Is the rollback plan viable and tested?
- Are there conflicting changes in the same deployment window?
- Is monitoring in place to detect issues post-deployment?
- Does the change comply with security policy and regulatory requirements?
Meeting cadence:
- Weekly for organizations with regular change volume
- Bi-weekly for smaller organizations
- On-demand for urgent Normal changes that cannot wait for the next scheduled meeting
- Asynchronous approval (via ticketing system) for changes with clear documentation and no objections — not every change requires a synchronous meeting
2.4 Audit Trail Requirements
Every change must be traceable from request through approval, implementation, and verification. The audit trail answers five questions for any change:
- Who requested the change?
- Who approved the change?
- What was changed (specifically)?
- When was the change implemented?
- What was the result (success, partial, rollback)?
This trail must be tamper-evident. Use ticketing systems with audit logs (Jira, ServiceNow, Linear), not email threads or Slack messages that can be edited or deleted.
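One common way to make a change log tamper-evident is to hash-chain the entries: each record includes the hash of its predecessor, so editing or deleting any earlier entry breaks every later link. A minimal sketch (not a substitute for an audited ticketing system, just the underlying idea):

```python
import hashlib
import json

def append_entry(chain: list[dict], entry: dict) -> None:
    """Append an audit entry linked to the previous one by SHA-256."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"prev": prev_hash, "entry": entry}, sort_keys=True)
    chain.append({"entry": entry, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain: list[dict]) -> bool:
    """Recompute every link; any edited or removed entry fails verification."""
    prev_hash = "0" * 64
    for link in chain:
        payload = json.dumps({"prev": prev_hash, "entry": link["entry"]},
                             sort_keys=True)
        if (link["prev"] != prev_hash or
                hashlib.sha256(payload.encode()).hexdigest() != link["hash"]):
            return False
        prev_hash = link["hash"]
    return True
```

Commercial ticketing systems implement equivalent protections internally; the sketch shows why an editable Slack thread cannot serve as an audit trail.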
3. Release Management
Release management is the process of getting approved changes from the repository to production in a controlled, repeatable, and verifiable manner.
3.1 Release Criteria Checklist
A release candidate must satisfy all of the following criteria before it is eligible for production deployment. This is a gate, not a guideline.
Automated Testing:
- All unit tests passing (100% pass rate, not “mostly passing”)
- Integration tests passing
- End-to-end (E2E) tests passing
- Performance/load testing results within defined thresholds (response time p95, throughput, error rate)
- No test regressions from previous release
Security Scanning:
- SAST (Static Application Security Testing) findings: zero critical, zero high, all medium findings triaged with documented accept/defer/fix decisions
- DAST (Dynamic Application Security Testing) findings: zero critical, zero high
- SCA (Software Composition Analysis) findings: zero critical CVEs in dependencies, no unapproved licenses
- Secret detection scan: zero findings
- Container image scan (if applicable): zero critical, zero high
Quality Gates:
- Code coverage meets or exceeds minimum threshold (organization-defined, typically 70-80%)
- No known regression bugs
- UAT (User Acceptance Testing) sign-off obtained from business stakeholders
Documentation:
- SBOM (Software Bill of Materials) generated and archived
- Release notes documented: new features, bug fixes, security-relevant changes, dependency updates, known issues, breaking changes
- Rollback plan documented and tested (not just written — actually tested)
- API documentation updated (if API changes included)
Approvals:
- Security review sign-off for high-risk changes (changes to auth, crypto, data handling, infrastructure)
- Change request approved (per Section 2)
- Release manager sign-off
3.2 Go/No-Go Decision Matrix
The go/no-go decision is binary — not “mostly go” or “go with known issues.” The decision matrix defines what blocks a release.
| Condition | Decision |
|---|---|
| All criteria met | GO |
| Non-critical test failure with documented workaround and acceptance | GO with risk acceptance sign-off from business owner |
| Any critical or high security finding unresolved | NO-GO — no exceptions |
| Rollback plan not tested | NO-GO |
| SBOM not generated | NO-GO |
| UAT not signed off | NO-GO |
| Performance regression >10% from baseline | NO-GO — investigate before proceeding |
| Emergency security patch for critical vulnerability | GO with expedited process (Section 6) |
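Because the decision is binary, the matrix translates directly into a gate function. A sketch with illustrative key names (hard blockers are checked first, so no other condition can override them):

```python
def go_no_go(criteria: dict) -> str:
    """Evaluate the go/no-go matrix above. Missing keys default to
    the blocking interpretation for the boolean gates."""
    if criteria.get("unresolved_critical_or_high_security"):
        return "NO-GO"                       # no exceptions
    if not criteria.get("rollback_plan_tested"):
        return "NO-GO"
    if not criteria.get("sbom_generated"):
        return "NO-GO"
    if not criteria.get("uat_signed_off"):
        return "NO-GO"
    if criteria.get("perf_regression_pct", 0.0) > 10.0:
        return "NO-GO"                       # investigate before proceeding
    if criteria.get("noncritical_test_failure"):
        # requires documented workaround + business-owner sign-off
        if criteria.get("risk_acceptance_signed"):
            return "GO (with risk acceptance)"
        return "NO-GO"
    return "GO"
```

The emergency-patch row is deliberately absent: that path goes through the expedited process rather than this gate.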
3.3 Release Notes
Release notes are not just for users. They are a security and compliance artifact. Release notes must include:
- Security-relevant changes: any changes to authentication, authorization, encryption, data handling, or security configuration. This does not mean disclosing vulnerability details to attackers — it means documenting that “authentication flow updated” or “TLS configuration hardened.”
- Known issues: any issues discovered during testing that were accepted for release. Each known issue must have a tracking ticket and a remediation timeline.
- Dependency updates: list all dependency changes with version numbers. Highlight any dependency with a CVE that was updated.
- Breaking changes: any changes that require consumer action (API changes, configuration changes, migration steps).
4. Severity Rating Systems
Severity ratings determine how urgently a vulnerability must be fixed. The wrong severity system — or no system at all — leads to either panic-driven patching or complacent neglect.
4.1 CVSS — Common Vulnerability Scoring System
CVSS is the industry-standard vulnerability scoring framework. The current version is 4.0 (released November 2023), though v3.1 remains widely deployed; the metric structure below follows v3.1 (v4.0 removes the Scope metric in favor of separate Vulnerable/Subsequent System impacts and replaces the Temporal group with Threat metrics).
Score components:
Base Score (0.0-10.0): inherent characteristics of the vulnerability that do not change over time.
- Attack Vector (Network, Adjacent, Local, Physical)
- Attack Complexity (Low, High)
- Privileges Required (None, Low, High)
- User Interaction (None, Required)
- Scope (Unchanged, Changed)
- Impact: Confidentiality, Integrity, Availability (None, Low, High)
Temporal Score: characteristics that change over time.
- Exploit Code Maturity (Not Defined, Unproven, Proof-of-Concept, Functional, High)
- Remediation Level (Not Defined, Official Fix, Temporary Fix, Workaround, Unavailable)
- Report Confidence (Not Defined, Unknown, Reasonable, Confirmed)
Environmental Score: characteristics specific to your organization’s environment.
- Modified base metrics reflecting your specific deployment
- Confidentiality/Integrity/Availability Requirements (Low, Medium, High) based on the asset’s importance to your organization
CVSS limitations:
- Base scores are static and context-free — a CVSS 9.8 in an air-gapped internal tool is not the same as a CVSS 9.8 in a public-facing API
- CVSS does not measure likelihood of exploitation
- Approximately 50-60% of all CVEs score between 7.0 and 9.9, creating a “wall of criticals” that is impossible to prioritize on CVSS alone
4.2 EPSS — Exploit Prediction Scoring System
EPSS, developed by FIRST (Forum of Incident Response and Security Teams), provides the missing dimension: probability that a vulnerability will be exploited in the wild within the next 30 days.
EPSS uses machine learning on historical exploitation data, vulnerability characteristics, social media mentions, exploit database entries, and other signals to produce a probability score between 0.0 (0%) and 1.0 (100%).
Why EPSS matters:
- Of the more than 200,000 CVEs in the NVD, only about 2-5% are ever exploited in the wild
- CVSS alone cannot distinguish between the 95% that will never be exploited and the 5% that will
- EPSS provides that distinction
4.3 Risk-Based Prioritization: Combining CVSS and EPSS
The power of modern vulnerability management comes from combining severity (CVSS) with exploitability (EPSS).
Critical insight: a CVSS 6.5 vulnerability with an EPSS score of 0.94 is more urgent than a CVSS 9.8 vulnerability with an EPSS score of 0.003.
The CVSS 6.5/EPSS 0.94 vulnerability has a 94% probability of exploitation in the next 30 days. It is being actively exploited or has weaponized exploits readily available. The CVSS 9.8/EPSS 0.003 vulnerability, while theoretically devastating, has a 0.3% chance of exploitation — it may require exotic conditions, have no public exploit, or target an obscure protocol.
Prioritization matrix:
| EPSS | CVSS Critical (9.0-10.0) | CVSS High (7.0-8.9) | CVSS Medium (4.0-6.9) | CVSS Low (0.1-3.9) |
|---|---|---|---|---|
| >0.7 (High) | P1 — Immediate | P1 — Immediate | P2 — Urgent | P3 — Planned |
| 0.3-0.7 (Medium) | P1 — Immediate | P2 — Urgent | P3 — Planned | P4 — Backlog |
| <0.3 (Low) | P2 — Urgent | P3 — Planned | P4 — Backlog | P4 — Backlog |
This matrix is a starting point. Organizations should calibrate based on their asset criticality, threat model, and risk appetite.
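The matrix is mechanical enough to encode directly, which is how it typically ends up in a vulnerability-management pipeline. A sketch using the table's thresholds (band boundaries should be recalibrated to your own risk appetite):

```python
def priority(cvss: float, epss: float) -> str:
    """Map a (CVSS base score, EPSS probability) pair to a priority
    bucket per the CVSS/EPSS matrix above."""
    cvss_band = (3 if cvss >= 9.0 else
                 2 if cvss >= 7.0 else
                 1 if cvss >= 4.0 else 0)
    epss_band = 2 if epss > 0.7 else 1 if epss >= 0.3 else 0
    matrix = {
        # (epss_band, cvss_band) -> priority
        (2, 3): "P1", (2, 2): "P1", (2, 1): "P2", (2, 0): "P3",
        (1, 3): "P1", (1, 2): "P2", (1, 1): "P3", (1, 0): "P4",
        (0, 3): "P2", (0, 2): "P3", (0, 1): "P4", (0, 0): "P4",
    }
    return matrix[(epss_band, cvss_band)]
```

Applying it to the earlier example: the CVSS 6.5/EPSS 0.94 finding lands in P2, ahead of nothing higher than the CVSS 9.8/EPSS 0.003 finding, which also lands in P2 despite its far higher base score.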
4.4 Organizational Severity Definitions
Map the scoring systems to organizational severity levels with explicit definitions and SLAs.
Critical
- CVSS 9.0+ with EPSS >0.3, OR any actively exploited vulnerability (CISA KEV catalog), OR any vulnerability in authentication/authorization that allows unauthorized access to sensitive data
- Business impact: data breach, service outage, regulatory violation, reputational damage
- Examples: unauthenticated RCE, SQL injection in production, authentication bypass
- Remediation SLA: 24 hours
High
- CVSS 7.0-8.9 with EPSS >0.3, OR CVSS 9.0+ with EPSS <0.3, OR vulnerabilities requiring low-complexity attack paths
- Business impact: potential data exposure, service degradation, compliance gap
- Examples: authenticated RCE, SSRF with internal access, privilege escalation
- Remediation SLA: 7 days
Medium
- CVSS 4.0-6.9, OR CVSS 7.0+ with EPSS <0.1, OR vulnerabilities requiring significant preconditions
- Business impact: limited exposure, defense-in-depth bypass, information disclosure
- Examples: reflected XSS, CSRF on non-critical functions, verbose error messages
- Remediation SLA: 30 days
Low
- CVSS 0.1-3.9, OR informational findings, OR best-practice deviations
- Business impact: minimal direct impact, hardening opportunity
- Examples: missing security headers on non-sensitive pages, server version disclosure
- Remediation SLA: 90 days or next release cycle
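The quantitative parts of these definitions can be encoded; the qualitative triggers (auth bypass, "significant preconditions") still require human triage. One ambiguity worth noting: the definitions above do not explicitly cover CVSS 7.0-8.9 with EPSS between 0.1 and 0.3 — the sketch below resolves that gap upward to High, which is an assumption, not something the definitions state:

```python
def org_severity(cvss: float, epss: float, in_kev: bool = False) -> str:
    """Map scores to the organizational severity levels above.
    `in_kev` = listed in the CISA Known Exploited Vulnerabilities catalog."""
    if in_kev or (cvss >= 9.0 and epss > 0.3):
        return "Critical"                      # 24-hour SLA
    if cvss >= 9.0:
        return "High"                          # high CVSS, low EPSS
    if cvss >= 7.0:
        # EPSS < 0.1 is explicitly Medium; 0.1-0.3 resolved upward (assumption)
        return "Medium" if epss < 0.1 else "High"
    if cvss >= 4.0:
        return "Medium"                        # 30-day SLA
    return "Low"                               # 90 days / next release
```

Any vulnerability matching a qualitative Critical trigger (e.g., an authentication bypass) should be escalated regardless of what this function returns.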
4.5 Minimum Security Bar for Release
The minimum security bar defines what severity levels block a release. This must be documented, approved by security leadership, and enforced in the pipeline — not negotiated at release time.
Recommended minimum bar:
- Critical findings: block release, no exceptions
- High findings: block release unless risk acceptance signed by CISO/security director with documented compensating controls and remediation commitment
- Medium findings: do not block release, but must have tracking tickets with SLA commitments
- Low findings: do not block release, tracked in backlog
5. Remediation SLAs
SLAs without enforcement are aspirations. Enforcement requires:
| Severity | Remediation SLA | Escalation at | Escalation to |
|---|---|---|---|
| Critical | 24 hours | 12 hours | VP Engineering + CISO |
| High | 7 days | 5 days | Engineering Director + Security Lead |
| Medium | 30 days | 25 days | Engineering Manager |
| Low | 90 days | 75 days | Team Lead |
SLA measurement:
- Clock starts when the finding is triaged and assigned (not when the scanner detects it — triage must happen within 24 hours of detection)
- Clock stops when the fix is deployed to production (not when the PR is merged)
- Track SLA compliance as a team and organizational metric
- Report SLA breaches in monthly security reviews
SLA exceptions:
- Must be documented with business justification
- Must include compensating controls
- Must have a revised remediation date
- Must be approved by the appropriate authority (security lead for High, CISO for Critical)
- Exceptions are tracked and reported separately
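The SLA table translates directly into deadline arithmetic, with the clock anchored at triage time as specified above. A sketch:

```python
from datetime import datetime, timedelta

# Remediation and escalation windows from the SLA table above.
SLA = {
    "Critical": (timedelta(hours=24), timedelta(hours=12)),
    "High":     (timedelta(days=7),   timedelta(days=5)),
    "Medium":   (timedelta(days=30),  timedelta(days=25)),
    "Low":      (timedelta(days=90),  timedelta(days=75)),
}

def sla_deadlines(severity: str, triaged_at: datetime) -> dict:
    """Clock starts at triage (not scanner detection) and stops only
    when the fix reaches production. Returns breach and escalation times."""
    remediate_by, escalate_at = SLA[severity]
    return {
        "remediate_by": triaged_at + remediate_by,
        "escalate_at": triaged_at + escalate_at,
    }
```

A tracking system would compare these timestamps against the production deployment time to compute SLA compliance and fire escalations.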
6. Emergency Change Process
Emergency changes bypass the normal approval process but do not bypass accountability.
Process:
- Declaration: the change is declared an emergency by an authorized individual (on-call manager, incident commander, security lead)
- Single-approver authorization: one authorized approver reviews and approves the change (not self-approval — the implementer cannot approve their own emergency change)
- Implementation: the change is deployed with available testing (at minimum: the fix works, it does not break existing functionality)
- Monitoring: enhanced monitoring for 24-48 hours post-deployment
- Mandatory post-review within 24 hours: a full review is conducted as if the change were a Normal change. This includes: was the emergency classification justified? Was the implementation appropriate? Are there follow-up actions required? Does the change need to be re-implemented through the normal process?
- Documentation: full CR created retroactively within 24 hours
- CAB review: the emergency change is reviewed at the next CAB meeting
Abuse prevention:
- Track the percentage of changes classified as “emergency” — if more than 5-10% of changes are emergencies, the process is being abused or the system is unstable (both require intervention)
- Emergency changes that are later determined to be non-emergencies are flagged as process violations
- Repeated emergency changes for the same root cause indicate a systemic problem, not an emergency
7. Policy-as-Code
Policy-as-code embeds compliance controls directly into the CI/CD pipeline. Instead of a document that says “all releases must pass security scanning,” the pipeline enforces it — the build fails if the scan fails. Policy-as-code converts audit findings from “they didn’t follow the process” to “the process cannot be bypassed.”
7.1 Compliance Frameworks as Pipeline Gates
PCI-DSS (Payment Card Industry Data Security Standard):
- Requirement 6.3: Address known security vulnerabilities in custom code → SAST/DAST gate
- Requirement 6.5: Protect against common coding vulnerabilities → SAST rules for OWASP Top 10
- Requirement 6.4: Change management procedures → CR required for deployment
- Requirement 11.3: Penetration testing → DAST gate, periodic manual testing evidence
SOC 2 (Service Organization Control 2):
- CC6.1: Logical access controls → branch protection, signed commits
- CC7.1: Monitoring for security events → deployment logging, change detection
- CC8.1: Change management → CR workflow, approval records, testing evidence
GDPR (General Data Protection Regulation):
- Article 25: Data protection by design → privacy impact assessment gate for changes touching personal data
- Article 32: Security of processing → encryption verification, access control checks
HIPAA (Health Insurance Portability and Accountability Act):
- Section 164.312: Technical safeguards → encryption validation, access logging, audit controls
- Section 164.308: Administrative safeguards → change management, risk analysis
7.2 Implementation Patterns
Open Policy Agent (OPA):

```rego
package release.gate

# Deny deployment if critical vulnerabilities exist
deny[msg] {
    input.scan_results.critical_count > 0
    msg := sprintf("Deployment blocked: %d critical vulnerabilities found", [input.scan_results.critical_count])
}

# Deny deployment without approved change request
deny[msg] {
    not input.change_request.approved
    msg := "Deployment blocked: no approved change request"
}

# Deny deployment if SBOM not generated
deny[msg] {
    not input.artifacts.sbom_generated
    msg := "Deployment blocked: SBOM not generated"
}
```
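To make the expected input document concrete, here is a Python mirror of the three OPA deny rules, plus a sample input in the shape the policy reads (the structure is taken from the rules themselves; a pipeline would supply it as JSON to OPA):

```python
def release_denials(input_doc: dict) -> list[str]:
    """Python equivalent of the three deny rules, for illustration only;
    in the pipeline, OPA evaluates the Rego policy against this document."""
    denials = []
    scan = input_doc.get("scan_results", {})
    if scan.get("critical_count", 0) > 0:
        denials.append(
            f"Deployment blocked: {scan['critical_count']} "
            "critical vulnerabilities found")
    if not input_doc.get("change_request", {}).get("approved"):
        denials.append("Deployment blocked: no approved change request")
    if not input_doc.get("artifacts", {}).get("sbom_generated"):
        denials.append("Deployment blocked: SBOM not generated")
    return denials

example_input = {
    "scan_results": {"critical_count": 2},
    "change_request": {"approved": True},
    "artifacts": {"sbom_generated": True},
}
```

An empty denial set means the deployment may proceed; any non-empty set fails the gate.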
GitHub Actions example:

```yaml
name: Release Gate
on:
  pull_request:
    branches: [main]

jobs:
  security-gate:
    runs-on: ubuntu-latest
    steps:
      - name: Check SAST results
        run: |
          CRITICAL=$(jq '.critical' sast-results.json)
          HIGH=$(jq '.high' sast-results.json)
          if [ "$CRITICAL" -gt 0 ] || [ "$HIGH" -gt 0 ]; then
            echo "BLOCKED: $CRITICAL critical, $HIGH high findings"
            exit 1
          fi
      - name: Check SCA results
        run: |
          BLOCKED_LICENSES=$(jq '.blocked_licenses | length' sca-results.json)
          CRITICAL_CVES=$(jq '.critical_cves | length' sca-results.json)
          if [ "$BLOCKED_LICENSES" -gt 0 ] || [ "$CRITICAL_CVES" -gt 0 ]; then
            echo "BLOCKED: license or CVE violations"
            exit 1
          fi
      - name: Verify SBOM
        run: |
          if [ ! -f sbom.json ]; then
            echo "BLOCKED: SBOM not generated"
            exit 1
          fi
```
8. AI in Change Management
AI is increasingly used to augment change management processes. Like all AI applications in security-critical workflows, the capabilities are real but so are the risks.
8.1 AI-Generated Change Risk Assessments
AI can analyze a change request and its associated code changes to produce a risk assessment that considers:
- Historical data: what happened when similar changes were deployed in the past?
- Blast radius analysis: how many systems, services, and users are affected?
- Dependency mapping: what downstream systems could be impacted?
- Timing factors: is this change going out before a high-traffic period?
- Code complexity metrics: cyclomatic complexity, change coupling, churn rate
This does not replace human judgment. It provides a structured starting point that reduces the time spent on routine risk assessment and highlights factors that humans might miss.
8.2 Automated Release Note Generation
AI can generate release notes from commit messages, PR descriptions, and ticket metadata. This eliminates the tedious process of assembling release notes manually and reduces the risk of omissions.
Effective approach:
- AI generates the initial draft from structured data (commits, PRs, tickets)
- Human reviews and edits for accuracy, completeness, and appropriate disclosure level
- Security-sensitive changes are flagged for manual review of the release note wording (do not let AI decide how much to disclose about a vulnerability fix)
8.3 AI-Assisted Severity Triage
This is one of the most impactful applications of AI in vulnerability management.
Microsoft Vuln.AI: Microsoft’s internal AI triage system has demonstrated the ability to triage vulnerabilities more than 50% faster than manual processes while maintaining comparable accuracy. The system analyzes vulnerability reports, code context, and historical patterns to assign severity ratings.
CrowdStrike ExPRT.AI: CrowdStrike’s Exploit Prediction Rating Tool uses AI to predict which vulnerabilities are most likely to be exploited. Their data shows that focusing on the top 5% of vulnerabilities identified by ExPRT.AI captures approximately 95% of actual exploited vulnerabilities. This means security teams can focus their limited remediation capacity on the vulnerabilities that matter most.
How AI triage works in practice:
- New vulnerability detected by scanner
- AI system ingests: CVE details, CVSS score, EPSS score, affected component, deployment context, historical exploitation data, threat intelligence feeds
- AI produces: recommended severity, recommended priority, estimated blast radius, suggested remediation approach, confidence score
- Human reviewer: validates AI recommendation, adjusts based on organizational context, approves final severity and priority
8.4 AI in CAB Decision Support
AI can support CAB decisions by:
- Summarizing change requests and highlighting risk factors
- Identifying conflicts between changes scheduled for the same window
- Predicting deployment success probability based on historical data
- Flagging changes that resemble past incidents (pattern matching against post-incident reviews)
8.5 Risks of AI in Change Management
AI hallucinating low risk for high-risk changes: this is the most dangerous failure mode. If the AI system consistently underestimates risk, the CAB may develop misplaced confidence in AI assessments and reduce their own scrutiny. Mitigations:
- AI risk assessments are advisory, not authoritative — human approval is always required
- Track AI assessment accuracy over time (compare AI predictions against actual outcomes)
- Red team the AI system: submit known high-risk changes and verify the AI flags them correctly
- Maintain human expertise — if the team stops understanding how to assess risk because “the AI does it,” the organization has created a single point of failure
Bias in training data: AI triage systems trained on historical data may perpetuate biases (e.g., underestimating risk for novel vulnerability classes that were not in the training data).
Over-automation: automating the decision is different from automating the analysis. AI should inform the decision. Humans should make it.
9. Key Takeaways
- Change management is the bridge between “we wrote secure code” and “we deployed secure code.” Without it, secure development practices are negated by chaotic deployment.
- Risk classification (Standard/Normal/Emergency) routes changes through appropriate rigor. Emergency changes are not exempt from accountability — they carry a 24-hour review debt.
- Release criteria are gates, not guidelines. If the criteria are not met, the release does not ship. Define the minimum security bar and enforce it.
- CVSS alone is insufficient for prioritization. Combine CVSS severity with EPSS exploitability for risk-based prioritization. A likely-exploited medium is more urgent than a theoretical critical.
- Remediation SLAs must be defined, measured, enforced, and escalated. SLAs without teeth are fiction.
- Policy-as-code converts compliance requirements from documents into pipeline enforcement. You cannot bypass what the pipeline will not allow.
- AI augments change management through faster triage, risk assessment, and decision support — but AI risk assessments must remain advisory. The human approver is accountable for the decision.
Review Questions
- A developer submits a change request classified as “Standard” that modifies the authentication module. Is this classification correct? What would you do?
- Your SAST scan reports 3 high-severity findings on a release candidate. The release is scheduled for tomorrow and the business is pressuring the team to ship. Walk through the decision process.
- You have 47 open vulnerabilities: 5 critical, 12 high, 18 medium, and 12 low. Using the CVSS/EPSS prioritization matrix, explain how you would determine remediation order. What additional data would you need?
- Your organization’s emergency change rate has risen from 3% to 15% over the past quarter. What does this indicate, and what would you investigate?
- The AI triage system rates a vulnerability as “Low — no action required.” A junior analyst notices that the affected component handles payment processing. What should happen next?
Module 4.2 of the SSDLC + CIS Controls v8 CG16 + AI-Augmented Development Training Program
Track 4: Version Control & Change Management (Dev + DevOps)
Study Guide
Key Takeaways
- Change management bridges secure development and secure deployment — Without it, secure coding practices are negated by chaotic deployment processes.
- Three risk classifications route changes through appropriate rigor — Standard (pre-approved), Normal (CAB review), Emergency (expedited with 24-hour review debt).
- CVSS alone is insufficient for prioritization — Combine CVSS severity with EPSS exploitability; a CVSS 6.5/EPSS 0.94 is more urgent than CVSS 9.8/EPSS 0.003.
- Release criteria are gates, not guidelines — Any critical/high security finding unresolved is an absolute NO-GO with no exceptions.
- Remediation SLAs must be defined, measured, enforced, and escalated — Critical: 24 hours; High: 7 days; Medium: 30 days; Low: 90 days.
- Policy-as-code converts compliance into pipeline enforcement — OPA Rego deny rules block deployments with critical vulnerabilities; cannot be bypassed.
- AI risk assessments must remain advisory — Most dangerous failure mode is AI hallucinating low risk for high-risk changes, creating misplaced CAB confidence.
Important Definitions
| Term | Definition |
|---|---|
| CAB | Change Advisory Board — governance body reviewing and approving Normal changes |
| CVSS | Common Vulnerability Scoring System — severity scoring (0-10) based on vulnerability characteristics |
| EPSS | Exploit Prediction Scoring System — probability of exploitation within 30 days (0.0-1.0) |
| Emergency Change | Change required for production incidents or active exploits; expedited single-approver with mandatory 24-hour post-review |
| Policy-as-Code | Encoding compliance policies as executable rules (e.g., OPA Rego) enforced in CI/CD pipelines |
| Minimum Security Bar | Defined threshold below which software cannot be released; critical findings always block |
| PIR | Post-Implementation Review — verification after deployment that change achieved objectives without unexpected effects |
| CISA KEV | CISA Known Exploited Vulnerabilities catalog — actively exploited vulnerabilities requiring immediate remediation |
Quick Reference
- Framework/Process: Eight-step change workflow; three risk classifications; go/no-go decision matrix; CVSS+EPSS prioritization matrix; OPA policy-as-code
- Key Numbers: 24-hour Critical SLA; 7-day High SLA; 5-10% max emergency rate before process investigation; EPSS >0.7 = High probability; top 5% of ExPRT.AI captures ~95% of exploited vulns
- Common Pitfalls: Classifying Normal changes as Emergency to skip CAB; using CVSS alone without EPSS for prioritization; negotiating release criteria at release time instead of defining them in advance; deploying at 4:55 PM Friday before a holiday
Review Questions
- Why is a CVSS 6.5 vulnerability with EPSS 0.94 more urgent to patch than a CVSS 9.8 with EPSS 0.003?
- What does an emergency change rate above 5-10% indicate, and what should you investigate?
- How does policy-as-code (OPA Rego) differ from a compliance document that says “all releases must pass security scanning”?
- What is the most dangerous failure mode of AI in change management, and how should it be mitigated?
- A developer classifies a change to the authentication module as “Standard” — what should happen?