1.3 — CIS Controls v8: CG16 Deep Dive
Learning Objectives
- ✓ Explain the purpose and scope of CIS Controls v8 Control Group 16
- ✓ Describe all 14 safeguards, their asset types, security functions, and Implementation Group assignments
- ✓ Identify how AI tools interact with each safeguard as both enabler and risk vector
- ✓ Map CIS CG16 safeguards to NIST CSF 2.0 functions
- ✓ Map CIS CG16 safeguards to NIST 800-53 Rev 5 control families
- ✓ Map CIS CG16 safeguards to ISO 27001:2022 controls
- ✓ Map CIS CG16 safeguards to OWASP frameworks
- ✓ Articulate why CG16 is absent from Implementation Group 1
1. CG16 Overview
Official Description
"Manage the security life cycle of in-house developed, hosted, or acquired software to prevent, detect, and remediate security weaknesses before they can impact the enterprise."
Control Group 16 (Application Software Security) is among the most comprehensive control groups in CIS Controls v8. With 14 individual safeguards spanning governance, identification, protection, and detection, CG16 establishes a complete framework for securing the software development lifecycle.
Why CG16 Is Absent from Implementation Group 1
CIS Controls v8 organizes safeguards into three Implementation Groups (IGs) representing increasing organizational maturity:
- IG1 (Essential Cyber Hygiene): 56 safeguards applicable to every organization regardless of size or resources. Focused on foundational controls that provide maximum risk reduction with minimum complexity.
- IG2 (Managed): IG1 + 74 additional safeguards for organizations with moderate resources and technical capability.
- IG3 (Mature): IG1 + IG2 + 23 additional safeguards for organizations with significant security resources and complex environments.
Not a single CG16 safeguard appears in IG1. This is deliberate. Application software security requires:
- Organizational maturity: You need established development processes before you can secure them
- Technical capability: SAST, DAST, SCA, and threat modeling require specialized tools and expertise
- Resource investment: Secure development programs require dedicated security personnel, tooling budgets, and training programs
- Process foundation: CG16 builds on controls from other groups (asset management, access control, audit logging) that must be in place first
This does not mean IG1 organizations can ignore application security. It means that CIS recognizes application security is a more advanced discipline that requires foundational controls to be in place first. Organizations aspiring to IG2 or IG3 maturity must implement CG16 safeguards as a priority.
Distribution Across Implementation Groups
- IG2 Safeguards (11): 16.1, 16.2, 16.3, 16.4, 16.5, 16.6, 16.7, 16.9, 16.10, 16.11, 16.12
- IG3 Safeguards (3): 16.8, 16.13, 16.14
Security Function Distribution
Each safeguard is assigned a primary security function aligned with the NIST Cybersecurity Framework:
| Security Function | Count | Safeguards |
|---|---|---|
| Govern | 3 | 16.1, 16.6, 16.9 |
| Identify | 1 | 16.4 |
| Protect | 8 | 16.2, 16.3, 16.5, 16.7, 16.10, 16.11, 16.12, 16.14 |
| Detect | 2 | 16.8, 16.13 |
The heavy weighting toward Protect (8 of 14) reflects CG16's focus on preventing vulnerabilities from entering production software. The Govern safeguards establish the foundation, Identify ensures visibility, Protect implements controls, and Detect catches what escapes prevention.
2. All 14 Safeguards – Detailed Analysis
Safeguard 16.1 – Establish and Maintain a Secure Application Development Process
Full Description: Establish and maintain a secure application development process. In the process, address such items as: secure application design standards, secure coding practices, developer training, vulnerability management, security of third-party code, and application security testing procedures. Review and update documentation annually, or when significant enterprise changes occur that could impact this Safeguard.
| Attribute | Value |
|---|---|
| Asset Type | Applications |
| Security Function | Govern |
| Implementation Group | IG2 |
Key Requirements:
- Documented process covering all six areas (design standards, coding practices, training, vulnerability management, third-party code, security testing)
- Annual review cycle at minimum
- Update trigger: significant enterprise changes
AI Augmentation: AI tools can assist in drafting and maintaining process documentation, generating coding standards from framework templates, and tracking compliance with documented processes.
AI Risk: AI tools must be explicitly addressed in the documented process: their acceptable use, data handling requirements, and the review standards for AI-generated code. Failure to govern AI tools within the SSDLC process creates an uncontrolled attack surface.
Safeguard 16.2 – Establish and Maintain a Process to Accept and Address Software Vulnerabilities
Full Description: Establish and maintain a process to accept and address reports of software vulnerabilities, including providing a means for external entities to report. The process is to include such items as: a vulnerability handling policy that identifies reporting process, responsible party for handling vulnerability reports, and a process for intake, assignment, remediation, and remediation verification. As part of the process, use a vulnerability tracking system to report and track vulnerability status. Review and update documentation annually, or when significant enterprise changes occur that could impact this Safeguard.
| Attribute | Value |
|---|---|
| Asset Type | Applications |
| Security Function | Protect |
| Implementation Group | IG2 |
Key Requirements:
- External vulnerability reporting mechanism (security.txt, vulnerability disclosure policy)
- Vulnerability handling policy defining: the reporting process, responsible parties, and processes for intake, assignment, remediation, and remediation verification
- Vulnerability tracking system
- Annual review
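The external reporting mechanism listed above is commonly implemented as a `security.txt` file (RFC 9116) served at `/.well-known/security.txt`. A minimal illustrative example follows; the contact address, URLs, and expiry date are placeholders, not recommendations:

```text
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/vulnerability-disclosure-policy
Preferred-Languages: en
```

`Contact` and `Expires` are the two fields RFC 9116 requires; the policy URL should point at the documented handling process this safeguard calls for.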
AI Augmentation: AI can automate vulnerability triage (severity classification, exploitability assessment), suggest remediation approaches based on vulnerability type and codebase context, and track remediation progress.
AI Risk: AI-generated remediation suggestions must be validated: AI may propose "fixes" that introduce new vulnerabilities or that do not actually address the root cause. AI triage models may misclassify severity, leading to incorrect prioritization.
Safeguard 16.3 – Perform Root Cause Analysis on Security Vulnerabilities
Full Description: Perform root cause analysis on security vulnerabilities. When reviewing vulnerabilities, root cause analysis is the task of evaluating underlying issues that create vulnerabilities in code, and allows development teams to move beyond just fixing individual vulnerabilities as they arise.
| Attribute | Value |
|---|---|
| Asset Type | Applications |
| Security Function | Protect |
| Implementation Group | IG2 |
Key Requirements:
- Root cause analysis performed on discovered vulnerabilities (not just fix-and-forget)
- Analysis identifies underlying issues (patterns, training gaps, tooling gaps, process failures)
- Findings feed back into process improvement
AI Augmentation: AI excels at pattern recognition across vulnerability data. Given a corpus of historical vulnerability findings, AI can identify systemic patterns, e.g. "72% of SQLi findings originate from the reporting module, suggesting the team responsible for that module needs additional training on parameterized queries."
AI Risk: AI root cause analysis may identify correlations that are not causation, leading to misdirected improvement efforts. Human review of AI-generated root cause analysis is essential.
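The pattern-recognition step above does not require AI at all for a first pass; a minimal sketch of the clustering (field names `module` and `category` are hypothetical, and the sample data is invented to mirror the 72% example):

```python
from collections import Counter

def cluster_findings(findings):
    """Group vulnerability findings by (category, module) to surface
    systemic patterns worth a root cause analysis. Returns, for each
    category, its dominant module and that module's share of the
    category's findings."""
    by_category = Counter(f["category"] for f in findings)
    by_pair = Counter((f["category"], f["module"]) for f in findings)
    report = {}
    for (category, module), count in by_pair.items():
        share = count / by_category[category]
        best = report.get(category)
        if best is None or share > best[1]:
            report[category] = (module, share)
    return report

# Invented sample corpus: 18 of 25 SQLi findings sit in one module
findings = (
    [{"category": "SQLi", "module": "reporting"}] * 18
    + [{"category": "SQLi", "module": "billing"}] * 7
    + [{"category": "XSS", "module": "ui"}] * 5
)
hotspots = cluster_findings(findings)
module, share = hotspots["SQLi"]
print(f"{share:.0%} of SQLi findings originate from the {module} module")
```

A real RCA pipeline would feed SAST and pen-test findings into this kind of grouping, then hand the clusters to a human analyst to judge causation.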
Safeguard 16.4 – Establish and Manage an Inventory of Third-Party Software Components
Full Description: Establish and manage an updated inventory of third-party components used in development, often referred to as a "bill of materials," as well as components scheduled for future use. This inventory is to include any risks that each third-party component could pose. Evaluate the list at least monthly to identify any changes or updates to these components, and validate that the component is still supported.
| Attribute | Value |
|---|---|
| Asset Type | Applications |
| Security Function | Identify |
| Implementation Group | IG2 |
Key Requirements:
- Maintained inventory of all third-party components (SBOM)
- Risk assessment for each component
- Monthly evaluation for changes, updates, and support status
- Coverage of components in use AND components scheduled for future use
AI Augmentation: AI-powered SCA tools can automatically generate and maintain SBOMs, assess risk based on vulnerability history, maintenance activity, and license terms, and alert on components that become unsupported or unmaintained.
AI Risk: AI coding assistants routinely introduce third-party dependencies, sometimes correctly, sometimes hallucinating non-existent packages (slopsquatting). Every dependency suggested by an AI tool must be verified to exist, be appropriate, and be added to the inventory. AI tools themselves are third-party components that must be inventoried with their own risk assessments.
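To make the inventory requirement concrete, here is a simplified sketch of building a CycloneDX-style SBOM record that also captures the per-component risk note this safeguard requires. It is not a conformant CycloneDX generator (a real SBOM should come from an SCA tool and carry package URLs, hashes, and licenses), and the component names are invented:

```python
import json
from datetime import date

def minimal_sbom(components):
    """Build a minimal CycloneDX-style SBOM dict from a list of
    (name, version, risk_note) tuples. Field names follow the
    CycloneDX shape but are deliberately simplified."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "metadata": {"timestamp": date.today().isoformat()},
        "components": [
            {
                "type": "library",
                "name": name,
                "version": version,
                # Safeguard 16.4: record the risk each component poses
                "properties": [{"name": "risk-note", "value": risk}],
            }
            for name, version, risk in components
        ],
    }

sbom = minimal_sbom([
    ("requests", "2.31.0", "widely used; monitor CVE feed"),
    ("leftpad-ai", "0.0.1", "suggested by AI assistant; verify it exists"),
])
print(json.dumps(sbom, indent=2))
```

The second entry illustrates the slopsquatting concern: a dependency an AI assistant proposed should enter the inventory flagged for verification, not silently.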
Safeguard 16.5 – Use Up-to-Date and Trusted Third-Party Software Components
Full Description: Use up-to-date and trusted third-party software components. When possible, choose established and proven frameworks and libraries that provide adequate security. Acquire these components from trusted sources or evaluate the software for vulnerabilities before use.
| Attribute | Value |
|---|---|
| Asset Type | Applications |
| Security Function | Protect |
| Implementation Group | IG2 |
Key Requirements:
- Third-party components must be up-to-date
- Components must be from trusted sources
- Preference for established, proven frameworks with adequate security
- Evaluation for vulnerabilities before use (or acquisition from trusted sources)
AI Augmentation: AI tools can monitor dependency freshness across the portfolio, recommend updates with compatibility assessment, and evaluate new component choices against security criteria.
AI Risk: AI tools may recommend specific versions of dependencies based on training data that is outdated. They may recommend components that were trusted at training time but have since been compromised, abandoned, or had licenses changed. The "trusted source" evaluation must be current, not based on AI training data.
Safeguard 16.6 – Establish and Maintain a Severity Rating System and Process for Application Vulnerabilities
Full Description: Establish and maintain a severity rating system and process for application vulnerabilities that facilitates prioritizing the order in which discovered vulnerabilities are fixed. This process includes setting a minimum level of security acceptable for releasing code or applications. Revisit on an annual basis, or when significant enterprise changes occur that could impact this Safeguard.
| Attribute | Value |
|---|---|
| Asset Type | Applications |
| Security Function | Govern |
| Implementation Group | IG2 |
Key Requirements:
- Formal severity rating system (CVSS, SSVC, or organizational risk-based system)
- Prioritization process for remediation ordering
- Minimum security threshold for code/application release (quality gate)
- Annual review
AI Augmentation: AI can enhance vulnerability severity assessment by correlating CVSS scores with exploit availability, environmental factors, and business context to produce more actionable prioritization (similar to what SSVC does systematically).
AI Risk: AI-specific vulnerabilities (prompt injection, model poisoning) do not map cleanly to CVSS. Organizations must extend their severity rating systems to account for AI-specific vulnerability classes with appropriate impact and likelihood assessments.
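The idea of enriching a base score with context, and enforcing a release threshold, can be sketched in a few lines. The multipliers and threshold below are purely illustrative (they come from no standard); SSVC-style decision trees are a more rigorous alternative:

```python
def prioritized_severity(cvss_base, exploit_public, internet_facing):
    """Adjust a CVSS base score (0-10) with environmental context.
    The 1.5x / 1.2x multipliers are invented for illustration."""
    score = cvss_base
    if exploit_public:
        score *= 1.5
    if internet_facing:
        score *= 1.2
    return min(score, 10.0)

def release_gate(findings, max_allowed=6.9):
    """Quality gate (safeguard 16.6): block release if any open
    finding's adjusted score exceeds the threshold. Each finding is
    a (cvss_base, exploit_public, internet_facing) tuple."""
    blocking = [f for f in findings
                if prioritized_severity(*f) > max_allowed]
    return len(blocking) == 0, blocking

ok, blockers = release_gate([
    (5.3, True, True),    # medium base score, but exploited and exposed
    (4.0, False, False),  # passes the gate
])
```

Note how a nominally medium CVSS 5.3 finding blocks the release once exploit availability and exposure are factored in, which is exactly the prioritization shift this safeguard is after.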
Safeguard 16.7 – Use Standard Hardening Configuration Templates for Application Infrastructure
Full Description: Use standard, industry-recommended hardening configuration templates for application infrastructure components. This includes underlying servers, databases, and web servers, and applies to cloud containers, Platform as a Service (PaaS) components, and SaaS components. Do not allow in-house developed software to weaken configuration hardening.
| Attribute | Value |
|---|---|
| Asset Type | Applications |
| Security Function | Protect |
| Implementation Group | IG2 |
Key Requirements:
- Use industry-standard hardening templates (CIS Benchmarks, vendor security guides)
- Apply to all infrastructure: servers, databases, web servers, cloud containers, PaaS, SaaS
- In-house software must not weaken configuration hardening
- Coverage of the full application infrastructure stack
AI Augmentation: AI tools can generate infrastructure-as-code (IaC) templates pre-configured with hardening baselines, scan existing configurations against benchmarks, and recommend remediations for drift.
AI Risk: AI-generated infrastructure configurations may appear hardened but contain subtle misconfigurations. AI coding assistants may generate Docker files, Kubernetes manifests, or Terraform configurations that bypass security controls (running as root, exposing unnecessary ports, disabling TLS verification) unless explicitly prompted to apply hardening standards.
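As a contrast to the risky defaults described above, here is a hedged sketch of a container build that applies common hardening practices (non-root user, minimal base image, single exposed port). The base image tag and the `myservice` module are placeholders, and a real baseline should follow the relevant CIS Benchmark:

```dockerfile
# Pin a minimal base image by tag (better still: by digest)
FROM python:3.12-slim

# Run as an unprivileged user, never as root (safeguard 16.7)
RUN useradd --create-home --shell /usr/sbin/nologin app
WORKDIR /home/app
COPY --chown=app:app requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY --chown=app:app . .
USER app

# Expose only the single port the service needs
EXPOSE 8000
CMD ["python", "-m", "myservice"]
```

Each line here is exactly the kind of thing an AI assistant omits unless prompted: ask for "a Dockerfile" and you will often get `USER root` implicitly and no pinned base.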
Safeguard 16.8 – Separate Production and Non-Production Systems
Full Description: Maintain separate environments for production and non-production systems. Developers should not have unmonitored access to production environments.
| Attribute | Value |
|---|---|
| Asset Type | Applications |
| Security Function | Detect |
| Implementation Group | IG3 |
Key Requirements:
- Separate environments for production and non-production
- Developer access to production must be monitored (not necessarily prohibited, but monitored)
- Clear boundary between environments
AI Augmentation: AI can monitor access patterns to detect when development activities are occurring in production environments, and can help enforce environment separation through policy-as-code.
AI Risk: AI coding assistants that connect to live systems for context (databases, APIs, logs) may inadvertently connect to production rather than non-production environments. If an AI tool has credentials for production systems, a prompt injection attack could result in production data access or modification. AI tool configurations must enforce environment separation.
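One cheap mitigation is a guard in the AI tool integration layer that refuses production-looking connection targets. The hostname patterns below are an assumption for this sketch; a real deployment should rely on explicit environment tagging rather than naming conventions:

```python
import re

# Treating these name patterns as "production" is an assumption of
# this sketch, not a reliable classifier.
PRODUCTION_PATTERNS = [r"\bprod\b", r"\bproduction\b"]

def assert_non_production(connection_host):
    """Refuse to hand a connection target to an AI tool integration
    if it looks like production (safeguard 16.8)."""
    for pattern in PRODUCTION_PATTERNS:
        if re.search(pattern, connection_host, re.IGNORECASE):
            raise PermissionError(
                f"AI tooling may not connect to {connection_host!r}")
    return connection_host

assert_non_production("db.staging.internal")  # allowed
try:
    assert_non_production("db.prod.internal")  # refused
except PermissionError:
    blocked = True
```

The stronger control remains credential separation: if the AI tool never holds production credentials, a prompt injection cannot spend them.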
Safeguard 16.9 – Train Developers in Application Security Concepts and Secure Coding
Full Description: Ensure that all software development personnel receive training in writing secure code for their specific development environment and responsibilities. Training can include general security principles and application security standard practices. Conduct training at least annually and design in a way that promotes security within the development team, and build a culture of security among the developers.
| Attribute | Value |
|---|---|
| Asset Type | N/A |
| Security Function | Govern |
| Implementation Group | IG2 |
Key Requirements:
- All software development personnel receive training
- Training covers secure code writing for their specific environment and responsibilities
- Includes general security principles AND application security standard practices
- At minimum annual frequency
- Designed to promote security culture within development teams
AI Augmentation: AI-powered training platforms can provide personalized, adaptive training based on individual developer weaknesses (identified through their SAST/code review findings). AI can generate realistic, contextual exercises using the organizationβs actual technology stack.
AI Risk: Developers must be trained specifically on AI tool security: understanding the limitations of AI-generated code, recognizing AI hallucinations, knowing what data can and cannot be shared with AI tools, and understanding AI-specific vulnerability classes. Traditional security training that ignores AI tools is increasingly insufficient.
Safeguard 16.10 – Apply Secure Design Principles in Application Architectures
Full Description: Apply secure design principles in application architectures. Secure design principles include the concept of least privilege and enforcing mediation to validate every operation that the user makes, promoting the concept of "never trust user input." Examples include ensuring that explicit error checking is performed and documented for all input, including for size, data type, and acceptable ranges or formats. Secure design also means minimizing the application infrastructure attack surface, such as turning off unprotected ports and services, removing unnecessary programs and files, and renaming or removing default accounts.
| Attribute | Value |
|---|---|
| Asset Type | Applications |
| Security Function | Protect |
| Implementation Group | IG2 |
Key Requirements:
- Least privilege implementation
- Complete mediation (validate every operation)
- Input validation for all input: size, data type, acceptable ranges, formats
- Explicit error checking for all input, documented
- Attack surface minimization (disable unnecessary ports, services, programs, files)
- Removal/renaming of default accounts
- "Never trust user input" principle
AI Augmentation: AI can analyze application architectures against secure design principles, identify violations of least privilege, flag missing input validation, and suggest attack surface reduction opportunities.
AI Risk: AI-generated code frequently violates secure design principles unless explicitly instructed otherwise. Common violations include: missing input validation, overly broad error handling (catch-all exceptions), running with elevated privileges, leaving debug endpoints active, and trusting external input without validation. The "never trust user input" principle must extend to "never trust AI-generated code": review all AI output against secure design principles.
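The "explicit error checking for all input" requirement translates directly into code. A sketch for a single field, checking data type, size, and format in that order; the length limits and character set are illustrative, not a recommendation:

```python
import re

def validate_username(raw):
    """Explicit input validation per safeguard 16.10: data type,
    size, and acceptable format are each checked and each failure
    is reported distinctly. Limits here are illustrative."""
    if not isinstance(raw, str):
        raise TypeError("username must be a string")
    if not (3 <= len(raw) <= 32):
        raise ValueError("username must be 3-32 characters")
    if not re.fullmatch(r"[a-z][a-z0-9_]*", raw):
        raise ValueError(
            "username must be lowercase letters, digits, or "
            "underscores, starting with a letter")
    return raw

validate_username("alice_01")  # passes all three checks
```

The point is the structure, not the field: every input crossing a trust boundary gets the same explicit, documented treatment, rather than a catch-all exception handler downstream.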
Safeguard 16.11 – Leverage Vetted Modules or Services for Application Security Components
Full Description: Leverage vetted modules or services for application security components, such as identity management, encryption, and auditing and logging. Using platform features in critical security functions will reduce developers' workload and minimize the likelihood of design or implementation errors. Modern operating systems provide effective mechanisms for identification, authentication, and authorization and make those mechanisms available to applications. Use only standardized, currently accepted, and extensively reviewed encryption algorithms. Operating systems also provide mechanisms to create and maintain secure audit logs.
| Attribute | Value |
|---|---|
| Asset Type | Applications |
| Security Function | Protect |
| Implementation Group | IG2 |
Key Requirements:
- Use vetted, proven modules for: identity management, encryption, auditing and logging
- Leverage platform-provided security mechanisms
- Use standardized, currently accepted, extensively reviewed encryption algorithms
- Do not create custom implementations of security-critical functions
AI Augmentation: AI tools can recommend established security libraries and frameworks appropriate to the technology stack, detect when developers are implementing custom cryptography or authentication, and suggest migration to vetted alternatives.
AI Risk: AI coding assistants frequently generate custom implementations of security functions. Left unchecked, AI may generate: custom password hashing (instead of bcrypt/scrypt/Argon2), custom JWT validation (instead of vetted JWT libraries), custom encryption (instead of NaCl/libsodium/platform crypto), custom session management (instead of framework-provided). These custom implementations are almost always weaker than vetted alternatives.
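For contrast with the custom-hashing anti-pattern, here is a sketch of password hashing built entirely on vetted primitives from the Python standard library (scrypt as the memory-hard KDF, a constant-time comparison for verification). The n/r/p parameters follow commonly cited guidance but should be tuned; dedicated libraries such as argon2-cffi or bcrypt are often preferred in practice:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a password hash with the vetted scrypt KDF (safeguard
    16.11) instead of a hand-rolled scheme. Returns (salt, digest)."""
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, expected):
    """Re-derive and compare in constant time via the standard
    library, never with ==."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)

salt, digest = hash_password("correct horse battery staple")
```

Every piece an AI assistant might be tempted to invent (the KDF, the salt generation, the comparison) is delegated to a reviewed implementation here.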
Safeguard 16.12 – Implement Code-Level Security Checks
Full Description: Apply static and dynamic analysis tools within the application life cycle to verify that secure coding practices are being adhered to. Most modern tools can be applied during coding, at build, or in production.
| Attribute | Value |
|---|---|
| Asset Type | Applications |
| Security Function | Protect |
| Implementation Group | IG2 |
Key Requirements:
- Static analysis (SAST) integrated into the development lifecycle
- Dynamic analysis (DAST) integrated into the development lifecycle
- Applied at multiple points: during coding (IDE), at build (CI), in production (runtime)
- Purpose: verify adherence to secure coding practices
AI Augmentation: AI-powered SAST tools (e.g., GitHub CodeQL, Snyk Code, Amazon CodeGuru) can achieve lower false-positive rates and detect complex vulnerability patterns that purely rule-based tools miss. AI can also prioritize findings by exploitability and impact.
AI Risk: AI-generated code must pass the same SAST/DAST quality gates as human-written code: there should be no exceptions or reduced standards for "AI-assisted" code. In fact, some organizations apply additional scrutiny to AI-generated code due to the known tendency of AI tools to generate subtly insecure patterns. AI security scanning tools themselves may have blind spots, particularly for novel vulnerability classes.
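Wiring SAST into CI is often a small amount of configuration. A hedged example using GitHub's CodeQL actions; action versions and the analyzed language are assumptions for this sketch, and note the workflow runs on every push and pull request with no carve-out for AI-generated code:

```yaml
# .github/workflows/codeql.yml
name: CodeQL
on: [push, pull_request]
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: python
      - uses: github/codeql-action/analyze@v3
```

The same gate applies at the other points the safeguard names: IDE plugins during coding and runtime protection in production complement, not replace, the build-time scan.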
Safeguard 16.13 – Conduct Application Penetration Testing
Full Description: Conduct application penetration testing. For critical applications, authenticated penetration testing is better suited to finding business logic vulnerabilities than code scanning and automated security testing. Penetration testing relies on the skill of the tester to manually manipulate an application as an authenticated and unauthenticated user.
| Attribute | Value |
|---|---|
| Asset Type | Applications |
| Security Function | Detect |
| Implementation Group | IG3 |
Key Requirements:
- Penetration testing of applications
- Critical applications require authenticated penetration testing
- Focus on business logic vulnerabilities that automated tools miss
- Both authenticated and unauthenticated testing perspectives
- Relies on skilled human testers (not fully automatable)
AI Augmentation: AI-assisted penetration testing tools can automate reconnaissance, generate test payloads, identify potential attack paths, and prioritize testing targets. AI can also analyze penetration test results to identify patterns and recommend focused areas for manual testing.
AI Risk: Applications that incorporate AI features require specialized penetration testing that includes: prompt injection testing, system prompt extraction attempts, privilege escalation through AI tool chaining, data exfiltration through AI outputs, and testing of AI-specific business logic. Traditional penetration testers may not have the skills to test AI features; specialized training or AI-aware testing teams may be required.
Safeguard 16.14 – Conduct Threat Modeling
Full Description: Conduct threat modeling. Use threat modeling and attack surface analysis to help identify threats and prioritize risk. Threat modeling uses a structured approach to identify threats, characterize an attack surface, and prioritize defensive efforts. Modern threat modeling requires understanding the data flows within an application, understanding trust boundaries, and using frameworks like STRIDE or PASTA to enumerate possible threats.
| Attribute | Value |
|---|---|
| Asset Type | Applications |
| Security Function | Protect |
| Implementation Group | IG3 |
Key Requirements:
- Structured threat modeling methodology (STRIDE, PASTA, or equivalent)
- Attack surface analysis
- Understanding of data flows within the application
- Understanding of trust boundaries
- Threat enumeration and prioritization
- Output feeds into defensive priorities
AI Augmentation: AI can dramatically accelerate threat modeling by analyzing architecture diagrams, data flow descriptions, and technical documentation to generate initial threat models. AI can identify common threat patterns for specific technology stacks and suggest mitigations based on known effective controls.
AI Risk: Applications that use AI components introduce new trust boundaries, data flows, and threat categories that traditional threat modeling may not capture. Threat models must explicitly include: AI model inputs and outputs as data flows, AI tool access to systems as trust boundary crossings, AI-specific threats (prompt injection, data poisoning, model theft, excessive agency), and the AI supply chain (model providers, training data, plugins/tools). AI-generated threat models may also miss organization-specific threats and should always be reviewed and augmented by human analysts.
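The STRIDE-per-element starting point can be expressed as data and applied mechanically to a data-flow diagram's element list. The sketch below follows the commonly published per-element chart (variants differ, e.g. on whether Repudiation applies to data stores), and the element names, including the AI components, are hypothetical:

```python
# Which STRIDE categories typically apply to each DFD element type
STRIDE_BY_ELEMENT = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information Disclosure", "Denial of Service",
                "Elevation of Privilege"],
    "data_flow": ["Tampering", "Information Disclosure",
                  "Denial of Service"],
    "data_store": ["Tampering", "Information Disclosure",
                   "Denial of Service"],
}

def enumerate_threats(elements):
    """Seed a threat model from a DFD element list. `elements` maps
    an element name to its DFD type. AI components (model endpoints,
    tool plugins, prompt logs) belong here too, since they add trust
    boundaries and data flows of their own."""
    return {name: STRIDE_BY_ELEMENT[kind]
            for name, kind in elements.items()}

threats = enumerate_threats({
    "user": "external_entity",
    "llm_api": "external_entity",   # hypothetical AI model endpoint
    "app_server": "process",
    "prompt_log": "data_store",
})
```

This only seeds the model; the per-element candidates still need human analysts to add organization-specific threats (and the AI-specific categories the paragraph above lists, which STRIDE does not enumerate).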
3. Summary Table – All 14 Safeguards
| ID | Safeguard Title | Asset Type | Security Function | IG |
|---|---|---|---|---|
| 16.1 | Establish and Maintain a Secure Application Development Process | Applications | Govern | IG2 |
| 16.2 | Establish and Maintain a Process to Accept and Address Software Vulnerabilities | Applications | Protect | IG2 |
| 16.3 | Perform Root Cause Analysis on Security Vulnerabilities | Applications | Protect | IG2 |
| 16.4 | Establish and Manage an Inventory of Third-Party Software Components | Applications | Identify | IG2 |
| 16.5 | Use Up-to-Date and Trusted Third-Party Software Components | Applications | Protect | IG2 |
| 16.6 | Establish and Maintain a Severity Rating System and Process for Application Vulnerabilities | Applications | Govern | IG2 |
| 16.7 | Use Standard Hardening Configuration Templates for Application Infrastructure | Applications | Protect | IG2 |
| 16.8 | Separate Production and Non-Production Systems | Applications | Detect | IG3 |
| 16.9 | Train Developers in Application Security Concepts and Secure Coding | N/A | Govern | IG2 |
| 16.10 | Apply Secure Design Principles in Application Architectures | Applications | Protect | IG2 |
| 16.11 | Leverage Vetted Modules or Services for Application Security Components | Applications | Protect | IG2 |
| 16.12 | Implement Code-Level Security Checks | Applications | Protect | IG2 |
| 16.13 | Conduct Application Penetration Testing | Applications | Detect | IG3 |
| 16.14 | Conduct Threat Modeling | Applications | Protect | IG3 |
4. Framework Cross-Mappings
CIS CG16 to NIST Cybersecurity Framework (CSF) 2.0
| CIS Safeguard | NIST CSF 2.0 Function | CSF Category / Subcategory |
|---|---|---|
| 16.1 | GOVERN | GV.PO – Policy |
| 16.2 | RESPOND | RS.MA – Incident Management |
| 16.3 | IDENTIFY | ID.RA – Risk Assessment |
| 16.4 | IDENTIFY | ID.AM – Asset Management |
| 16.5 | PROTECT | PR.DS – Data Security |
| 16.6 | GOVERN | GV.RM – Risk Management Strategy |
| 16.7 | PROTECT | PR.PS – Platform Security |
| 16.8 | PROTECT | PR.AA – Identity Management, Authentication, and Access Control |
| 16.9 | PROTECT | PR.AT – Awareness and Training |
| 16.10 | PROTECT | PR.PS – Platform Security |
| 16.11 | PROTECT | PR.PS – Platform Security |
| 16.12 | DETECT | DE.CM – Continuous Monitoring |
| 16.13 | DETECT | DE.CM – Continuous Monitoring |
| 16.14 | IDENTIFY | ID.RA – Risk Assessment |
CIS CG16 to NIST SP 800-53 Rev 5
| CIS Safeguard | NIST 800-53 Controls |
|---|---|
| 16.1 | SA-3 (System Development Life Cycle), SA-8 (Security and Privacy Engineering Principles), SA-15 (Development Process, Standards, and Tools) |
| 16.2 | SI-2 (Flaw Remediation), SI-5 (Security Alerts, Advisories, and Directives), SR-3 (Supply Chain Controls and Processes) |
| 16.3 | SI-2 (Flaw Remediation), CA-7 (Continuous Monitoring), RA-5 (Vulnerability Monitoring and Scanning) |
| 16.4 | SA-4 (Acquisition Process), SR-4 (Provenance), CM-8 (System Component Inventory) |
| 16.5 | SA-4 (Acquisition Process), SA-22 (Unsupported System Components), SI-2 (Flaw Remediation) |
| 16.6 | RA-3 (Risk Assessment), RA-5 (Vulnerability Monitoring and Scanning), PM-16 (Threat Awareness Program) |
| 16.7 | CM-6 (Configuration Settings), CM-7 (Least Functionality), SA-8 (Security and Privacy Engineering Principles) |
| 16.8 | CM-4 (Impact Analyses), SA-11 (Developer Testing and Evaluation), SC-32 (System Partitioning) |
| 16.9 | AT-2 (Literacy Training and Awareness), AT-3 (Role-Based Training), SA-16 (Developer-Provided Training) |
| 16.10 | SA-8 (Security and Privacy Engineering Principles), SA-17 (Developer Security and Privacy Architecture and Design), SC-7 (Boundary Protection) |
| 16.11 | SA-4 (Acquisition Process), SA-8 (Security and Privacy Engineering Principles), SC-13 (Cryptographic Protection) |
| 16.12 | SA-11 (Developer Testing and Evaluation), SA-15 (Development Process, Standards, and Tools), SI-7 (Software, Firmware, and Information Integrity) |
| 16.13 | CA-8 (Penetration Testing), SA-11 (Developer Testing and Evaluation), RA-5 (Vulnerability Monitoring and Scanning) |
| 16.14 | RA-3 (Risk Assessment), RA-5 (Vulnerability Monitoring and Scanning), SA-11 (Developer Testing and Evaluation) |
CIS CG16 to ISO 27001:2022
| CIS Safeguard | ISO 27001:2022 Controls |
|---|---|
| 16.1 | A.8.25 (Secure development life cycle), A.8.26 (Application security requirements) |
| 16.2 | A.8.8 (Management of technical vulnerabilities), A.6.8 (Information security event reporting) |
| 16.3 | A.8.8 (Management of technical vulnerabilities), A.5.27 (Learning from information security incidents) |
| 16.4 | A.8.9 (Configuration management), A.5.23 (Information security for use of cloud services) |
| 16.5 | A.8.8 (Management of technical vulnerabilities), A.8.9 (Configuration management) |
| 16.6 | A.8.8 (Management of technical vulnerabilities), A.5.12 (Classification of information) |
| 16.7 | A.8.9 (Configuration management), A.8.27 (Secure system architecture and engineering principles) |
| 16.8 | A.8.31 (Separation of development, test and production environments) |
| 16.9 | A.6.3 (Information security awareness, education and training) |
| 16.10 | A.8.27 (Secure system architecture and engineering principles), A.8.26 (Application security requirements) |
| 16.11 | A.8.28 (Secure coding), A.8.25 (Secure development life cycle) |
| 16.12 | A.8.29 (Security testing in development and acceptance), A.8.28 (Secure coding) |
| 16.13 | A.8.29 (Security testing in development and acceptance), A.8.30 (Outsourced development) |
| 16.14 | A.8.25 (Secure development life cycle), A.8.27 (Secure system architecture and engineering principles) |
CIS CG16 to OWASP Frameworks
| CIS Safeguard | OWASP SAMM | OWASP ASVS | OWASP Testing Guide | OWASP Threat Modeling |
|---|---|---|---|---|
| 16.1 | Governance – Strategy & Metrics, Policy & Compliance | Chapter 1 (Architecture) | Section 2 (Introduction) | – |
| 16.2 | Implementation – Defect Management | – | – | – |
| 16.3 | Implementation – Defect Management | – | – | – |
| 16.4 | Implementation – Secure Build | V14 (Configuration) | – | – |
| 16.5 | Implementation – Secure Build | V14 (Configuration) | – | – |
| 16.6 | Implementation – Defect Management | – | – | – |
| 16.7 | Implementation – Secure Deployment | V14 (Configuration) | Section 4.10 (Config Testing) | – |
| 16.8 | Implementation – Secure Deployment | V14 (Configuration) | – | – |
| 16.9 | Governance – Education & Guidance | – | – | – |
| 16.10 | Design – Security Architecture | V1 (Architecture) | Section 4 (Assessment) | Full methodology |
| 16.11 | Design – Security Architecture | V6 (Cryptography), V2 (Authentication) | – | – |
| 16.12 | Verification – Security Testing | All chapters | Sections 4.1–4.11 | – |
| 16.13 | Verification – Security Testing | – | Section 3 (Methodology) | – |
| 16.14 | Design – Threat Assessment | V1 (Architecture) | Section 4.1 (Info Gathering) | Full methodology |
5. Implementation Guidance
Phased Implementation Approach
For organizations building a CG16 program from scratch, the following phased approach aligns with IG progression:
Phase 1 – Foundation (Months 1–3): Implement the governance safeguards first.
- 16.1: Document the secure development process
- 16.6: Establish severity rating system
- 16.9: Initiate developer training program
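A severity rating system for 16.6 can start as a shared, deterministic mapping from CVSS base scores to qualitative bands. The sketch below uses the CVSS v3.1 qualitative severity scale; the band boundaries come from the CVSS v3.1 specification, while the function name and its place in your triage workflow are illustrative choices, not prescribed by CG16:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score (0.0-10.0) to a qualitative rating.

    Bands follow the CVSS v3.1 qualitative severity scale:
    None 0.0, Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0.
    """
    if not 0.0 <= score <= 10.0:
        raise ValueError(f"CVSS score out of range: {score}")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

Codifying the bands in one shared function (rather than per-team spreadsheets) is what makes the 16.6 metric in the table below ("percentage of vulnerabilities rated using standard system") achievable.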
Phase 2 – Visibility (Months 3–6): Gain visibility into the current state.
- 16.4: Build third-party component inventory (SBOM)
- 16.5: Address known vulnerable and outdated components
- 16.12: Deploy SAST/DAST tools (start with SAST in CI/CD)
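The SBOM work in 16.4 and 16.5 ultimately reduces to comparing inventoried components against known advisories. A minimal sketch, assuming a CycloneDX-style JSON SBOM and an invented in-memory advisory set; in practice the lookup would query a vulnerability database such as OSV or the NVD rather than a hard-coded set:

```python
import json

# Hypothetical advisory data for illustration only -- real pipelines
# would pull (name, version) matches from OSV, NVD, or a commercial feed.
KNOWN_VULNERABLE = {("lodash", "4.17.20"), ("log4j-core", "2.14.1")}

def flag_vulnerable_components(sbom_json: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs from a CycloneDX-style SBOM
    that match the advisory set."""
    sbom = json.loads(sbom_json)
    return [
        (c["name"], c["version"])
        for c in sbom.get("components", [])
        if (c["name"], c["version"]) in KNOWN_VULNERABLE
    ]

# Trimmed example SBOM with only the fields this sketch reads.
sample = json.dumps({
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "lodash", "version": "4.17.20"},
        {"name": "requests", "version": "2.31.0"},
    ],
})
```

Running this monthly against every application's SBOM is one lightweight way to honor the 16.4 evaluation cadence while feeding the 16.5 remediation queue.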
Phase 3 – Protection (Months 6–12): Implement protective controls.
- 16.2: Establish vulnerability handling process
- 16.3: Begin root cause analysis practice
- 16.7: Apply hardening configuration templates
- 16.10: Formalize secure design principles
- 16.11: Standardize security component usage
Phase 4 – Maturity (Months 12–18): Advance to IG3 safeguards.
- 16.8: Formalize environment separation with monitoring
- 16.13: Establish penetration testing program
- 16.14: Implement threat modeling practice
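For 16.14, even a lightweight threat-modeling practice benefits from a consistent record shape. A minimal sketch that validates entries against the six STRIDE categories; the component and threat text are invented examples, and the record fields are an assumed structure, not a CG16 requirement:

```python
# The six STRIDE threat categories (Microsoft's STRIDE model).
STRIDE = (
    "Spoofing", "Tampering", "Repudiation",
    "Information Disclosure", "Denial of Service", "Elevation of Privilege",
)

def record_threat(model: dict, component: str, category: str,
                  description: str, mitigation: str) -> dict:
    """Append a validated threat entry for a component to the model."""
    if category not in STRIDE:
        raise ValueError(f"Not a STRIDE category: {category}")
    model.setdefault(component, []).append(
        {"category": category, "threat": description, "mitigation": mitigation}
    )
    return model

# Invented example entry.
model = record_threat({}, "login API", "Spoofing",
                      "Credential stuffing against the login endpoint",
                      "Rate limiting plus MFA")
```

Keeping threat models as structured data rather than prose documents also makes the 16.14 metric below ("percentage of critical applications with current threat model") directly countable.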
Measuring Progress
For each safeguard, define measurable indicators:
| Safeguard | Example Metric | Target |
|---|---|---|
| 16.1 | Process documentation exists, reviewed within 12 months | 100% of required areas documented |
| 16.2 | Mean time to acknowledge external vulnerability reports | <48 hours |
| 16.3 | Percentage of critical/high vulnerabilities with RCA completed | >90% |
| 16.4 | Percentage of applications with current SBOM | 100% |
| 16.5 | Percentage of dependencies with no known critical/high CVEs | >95% |
| 16.6 | Percentage of vulnerabilities rated using standard system | 100% |
| 16.7 | Percentage of infrastructure components meeting hardening baselines | >95% |
| 16.8 | Number of unauthorized production access events | 0 |
| 16.9 | Developer training completion rate (annual) | >95% |
| 16.10 | Percentage of new applications with documented secure design review | 100% |
| 16.11 | Percentage of applications using vetted security components | >95% |
| 16.12 | Percentage of repositories with SAST/DAST in CI/CD | 100% |
| 16.13 | Percentage of critical applications pen tested annually | 100% |
| 16.14 | Percentage of critical applications with current threat model | 100% |
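Several of these metrics are simple ratios over scan output. A sketch of the 16.5 metric, assuming a hypothetical scan-result shape (a list of dependencies, each carrying its CVE severity labels); real SCA tools emit richer formats, so the field names here are illustrative:

```python
def clean_dependency_pct(scan_results: list[dict]) -> float:
    """Percentage of dependencies with no known critical/high CVEs."""
    if not scan_results:
        return 100.0
    clean = sum(
        1 for dep in scan_results
        if not any(sev in ("critical", "high") for sev in dep["cve_severities"])
    )
    return round(100.0 * clean / len(scan_results), 1)

# Invented scan output: two of three dependencies are clean
# (medium-severity findings do not count against this metric).
results = [
    {"name": "flask", "cve_severities": []},
    {"name": "pyyaml", "cve_severities": ["medium"]},
    {"name": "pillow", "cve_severities": ["high"]},
]
```

Wiring a computation like this into the CI pipeline turns the >95% target from a quarterly audit exercise into a continuously tracked number.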
Summary
CIS Controls v8 Control Group 16 provides a comprehensive, structured approach to application software security. Its 14 safeguards span the full software lifecycle – from governance and training through design, implementation, testing, and operational vulnerability management.
Key takeaways:
- CG16 is not for beginners: Its complete absence from IG1 reflects the organizational maturity required. Build foundational controls first.
- Governance before tooling: Safeguards 16.1, 16.6, and 16.9 establish the process, rating system, and training that make all other safeguards effective.
- AI changes every safeguard: Each of the 14 safeguards is affected by AI – both as an enabler (AI tools that help implement the safeguard) and as a risk vector (AI tools that introduce new threats the safeguard must address).
- Cross-framework mapping enables efficiency: Organizations subject to multiple compliance frameworks can use CG16 as a common denominator – implementing CG16 addresses requirements across NIST CSF, NIST 800-53, ISO 27001, and OWASP simultaneously.
- Measurement matters: Each safeguard must have defined metrics. What gets measured gets managed.
Assessment Questions
- Why is CG16 completely absent from Implementation Group 1? What does this tell us about the prerequisites for an effective application security program?
- For Safeguard 16.4 (Third-Party Component Inventory), describe how the introduction of AI coding assistants changes both the opportunity and the risk. Include the concept of slopsquatting in your answer.
- Map the following scenario to specific CG16 safeguards: "A developer uses an AI coding assistant to generate a REST API endpoint. The AI suggests using a custom JWT implementation instead of a vetted library, introduces a dependency on a package that has a known critical CVE, and the endpoint lacks input validation."
- Compare the security function distribution across CG16 (Govern: 3, Identify: 1, Protect: 8, Detect: 2). Why is Protect so heavily weighted? What would you recommend to balance this distribution?
- Select three CIS CG16 safeguards and for each, identify the corresponding NIST 800-53, ISO 27001:2022, and OWASP SAMM controls. Explain how addressing the CIS safeguard simultaneously satisfies requirements across all three frameworks.
References
- CIS Controls v8 β Control Group 16: Application Software Security
- CIS Controls v8 Implementation Guide
- NIST Cybersecurity Framework (CSF) 2.0
- NIST SP 800-53 Rev 5: Security and Privacy Controls for Information Systems and Organizations
- ISO/IEC 27001:2022: Information Security, Cybersecurity and Privacy Protection
- OWASP Software Assurance Maturity Model (SAMM) v2.0
- OWASP Application Security Verification Standard (ASVS) v4.0
- OWASP Web Security Testing Guide v4.2
- OWASP Threat Modeling Community
Study Guide
Key Takeaways
- CG16 is absent from IG1 – Application security requires organizational maturity, specialized tools, and foundational controls before CG16 safeguards are effective.
- Protect dominates the security functions – 8 of 14 safeguards focus on prevention, with Govern (3), Identify (1), and Detect (2) filling out the framework.
- Governance before tooling – Phase 1 implementation starts with 16.1 (process), 16.6 (severity), and 16.9 (training) before deploying any security tools.
- AI changes every safeguard – Each of the 14 safeguards is affected by AI as both an enabler and a new risk vector that the safeguard must address.
- Cross-framework mapping enables efficiency – Implementing CG16 addresses requirements across NIST CSF, NIST 800-53, ISO 27001, and OWASP simultaneously.
- Three safeguards are IG3 – 16.8 (environment separation), 16.13 (pen testing), and 16.14 (threat modeling) require the highest organizational maturity.
- Monthly component evaluation – Safeguard 16.4 requires third-party component inventory evaluation at least monthly, not annually.
Important Definitions
| Term | Definition |
|---|---|
| Implementation Group (IG) | CIS maturity tiers: IG1 (Essential), IG2 (Managed), IG3 (Mature) |
| CG16 | CIS Controls v8 Control Group 16 – Application Software Security, containing 14 safeguards |
| Safeguard 16.1 | Establish and maintain a documented secure application development process covering six areas |
| Safeguard 16.11 | Leverage vetted modules for security components; use only standardized encryption algorithms |
| Safeguard 16.14 | Conduct threat modeling using structured approaches like STRIDE or PASTA |
| SBOM | Software Bill of Materials – inventory maintained per Safeguard 16.4 |
| Root Cause Analysis | Safeguard 16.3 requirement to evaluate underlying issues creating vulnerabilities, not just fix individual instances |
Quick Reference
- Framework/Process: 14 safeguards spanning Govern/Identify/Protect/Detect; phased implementation over 18 months across four phases
- Key Numbers: 11 IG2 safeguards, 3 IG3 safeguards; monthly evaluation cadence for 16.4; 100% target for SBOM coverage, training completion, and SAST/DAST coverage
- Common Pitfalls: Deploying tools before establishing governance; treating AI-generated code with reduced scrutiny; forgetting that 16.8 says "unmonitored" not "prohibited" for developer production access; implementing safeguards in isolation without cross-framework mapping
Review Questions
- Why is CG16 completely absent from IG1, and what does this tell you about prerequisites for an effective application security program?
- How does AI change both the opportunity and the risk for Safeguard 16.11 (vetted modules)?
- If you could only implement three CG16 safeguards first, which would you choose and why?
- How would you use the NIST CSF and ISO 27001 cross-mappings to satisfy multiple compliance frameworks with a single CG16 implementation?
- What metrics would you define for Safeguard 16.13 (penetration testing) to demonstrate effectiveness to leadership?