1.3 — CIS Controls v8: CG16 Deep Dive

Foundations & Governance · 120 min · All Roles

Learning Objectives

  • Explain the purpose and scope of CIS Controls v8 Control Group 16
  • Describe all 14 safeguards, their asset types, security functions, and Implementation Group assignments
  • Identify how AI tools interact with each safeguard as both enabler and risk vector
  • Map CIS CG16 safeguards to NIST CSF 2.0 functions
  • Map CIS CG16 safeguards to NIST 800-53 Rev 5 control families
  • Map CIS CG16 safeguards to ISO 27001:2022 controls
  • Map CIS CG16 safeguards to OWASP frameworks
  • Articulate why CG16 is absent from Implementation Group 1

1. CG16 Overview

Official Description

“Manage the security life cycle of in-house developed, hosted, or acquired software to prevent, detect, and remediate security weaknesses before they can impact the enterprise.”

Control Group 16 — Application Software Security — is among the most comprehensive control groups in CIS Controls v8. With 14 individual safeguards spanning governance, identification, protection, and detection, CG16 establishes a complete framework for securing the software development lifecycle.

Why CG16 Is Absent from Implementation Group 1

CIS Controls v8 organizes safeguards into three Implementation Groups (IGs) representing increasing organizational maturity:

  • IG1 (Essential Cyber Hygiene): 56 safeguards applicable to every organization regardless of size or resources. Focused on foundational controls that provide maximum risk reduction with minimum complexity.
  • IG2 (Managed): IG1 + 74 additional safeguards for organizations with moderate resources and technical capability.
  • IG3 (Mature): IG1 + IG2 + 23 additional safeguards for organizations with significant security resources and complex environments.

Not a single CG16 safeguard appears in IG1. This is deliberate. Application software security requires:

  1. Organizational maturity: You need established development processes before you can secure them
  2. Technical capability: SAST, DAST, SCA, and threat modeling require specialized tools and expertise
  3. Resource investment: Secure development programs require dedicated security personnel, tooling budgets, and training programs
  4. Process foundation: CG16 builds on controls from other groups (asset management, access control, audit logging) that must be in place first

This does not mean IG1 organizations can ignore application security. It means that CIS recognizes application security is a more advanced discipline that requires foundational controls to be in place first. Organizations aspiring to IG2 or IG3 maturity must implement CG16 safeguards as a priority.

Distribution Across Implementation Groups

  • IG2 Safeguards (11): 16.1, 16.2, 16.3, 16.4, 16.5, 16.6, 16.7, 16.9, 16.10, 16.11, 16.12
  • IG3 Safeguards (3): 16.8, 16.13, 16.14

Security Function Distribution

Each safeguard is assigned a primary security function aligned with the NIST Cybersecurity Framework (the Govern function was introduced in the CIS Controls v8.1 update to align with NIST CSF 2.0):

| Security Function | Count | Safeguards |
|---|---|---|
| Govern | 3 | 16.1, 16.6, 16.9 |
| Identify | 1 | 16.4 |
| Protect | 8 | 16.2, 16.3, 16.5, 16.7, 16.10, 16.11, 16.12, 16.14 |
| Detect | 2 | 16.8, 16.13 |

The heavy weighting toward Protect (8 of 14) reflects CG16’s focus on preventing vulnerabilities from entering production software. The Govern safeguards establish the foundation, Identify ensures visibility, Protect implements controls, and Detect catches what escapes prevention.


2. All 14 Safeguards — Detailed Analysis

Safeguard 16.1 — Establish and Maintain a Secure Application Development Process

Full Description: Establish and maintain a secure application development process. In the process, address such items as: secure application design standards, secure coding practices, developer training, vulnerability management, security of third-party code, and application security testing procedures. Review and update documentation annually, or when significant enterprise changes occur that could impact this Safeguard.

Asset Type: Applications · Security Function: Govern · Implementation Group: IG2

Key Requirements:

  • Documented process covering all six areas (design standards, coding practices, training, vulnerability management, third-party code, security testing)
  • Annual review cycle at minimum
  • Update trigger: significant enterprise changes

AI Augmentation: AI tools can assist in drafting and maintaining process documentation, generating coding standards from framework templates, and tracking compliance with documented processes.

AI Risk: AI tools must be explicitly addressed in the documented process — their acceptable use, data handling requirements, and the review standards for AI-generated code. Failure to govern AI tools within the SSDLC process creates an uncontrolled attack surface.


Safeguard 16.2 — Establish and Maintain a Process to Accept and Address Software Vulnerabilities

Full Description: Establish and maintain a process to accept and address reports of software vulnerabilities, including providing a means for external entities to report. The process is to include such items as: a vulnerability handling policy that identifies reporting process, responsible party for handling vulnerability reports, and a process for intake, assignment, remediation, and remediation verification. As part of the process, use a vulnerability tracking system to report and track vulnerability status. Review and update documentation annually, or when significant enterprise changes occur that could impact this Safeguard.

Asset Type: Applications · Security Function: Protect · Implementation Group: IG2

Key Requirements:

  • External vulnerability reporting mechanism (security.txt, vulnerability disclosure policy)
  • Vulnerability handling policy with defined: reporting process, responsible parties, intake process, assignment process, remediation process, verification process
  • Vulnerability tracking system
  • Annual review
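A minimal security.txt file, served at /.well-known/security.txt per RFC 9116, is one common way to provide the external reporting mechanism. The contact address, URLs, and domain below are placeholders, not recommendations:

```text
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/security/disclosure-policy
Preferred-Languages: en
Canonical: https://example.com/.well-known/security.txt
```

Each report arriving through this channel should land in the vulnerability tracking system so intake, assignment, and verification can be audited.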

AI Augmentation: AI can automate vulnerability triage (severity classification, exploitability assessment), suggest remediation approaches based on vulnerability type and codebase context, and track remediation progress.

AI Risk: AI-generated remediation suggestions must be validated — AI may propose “fixes” that introduce new vulnerabilities or that do not actually address the root cause. AI triage models may misclassify severity, leading to incorrect prioritization.


Safeguard 16.3 — Perform Root Cause Analysis on Security Vulnerabilities

Full Description: Perform root cause analysis on security vulnerabilities. When reviewing vulnerabilities, root cause analysis is the task of evaluating underlying issues that create vulnerabilities in code, and allows development teams to move beyond just fixing individual vulnerabilities as they arise.

Asset Type: Applications · Security Function: Protect · Implementation Group: IG2

Key Requirements:

  • Root cause analysis performed on discovered vulnerabilities (not just fix-and-forget)
  • Analysis identifies underlying issues (patterns, training gaps, tooling gaps, process failures)
  • Findings feed back into process improvement

AI Augmentation: AI excels at pattern recognition across vulnerability data. Given a corpus of historical vulnerability findings, AI can identify systemic patterns — “72% of SQLi findings originate from the reporting module, suggesting the team responsible for that module needs additional training on parameterized queries.”

AI Risk: AI root cause analysis may identify correlations that are not causation, leading to misdirected improvement efforts. Human review of AI-generated root cause analysis is essential.
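As a sketch of what such pattern analysis looks like mechanically, the snippet below groups findings by module and vulnerability category and flags any module that accounts for a majority of a category. The record fields and the threshold are illustrative assumptions, not a product API:

```python
from collections import Counter

def top_vulnerability_patterns(findings, min_share=0.5):
    """Flag (category, module) pairs where one module produces a
    dominant share of a vulnerability category's findings.

    `findings` records use hypothetical 'module'/'category' keys;
    a real feed (SAST export, bug tracker) would need an adapter.
    """
    by_category = Counter(f["category"] for f in findings)
    by_pair = Counter((f["module"], f["category"]) for f in findings)
    patterns = []
    for (module, category), count in by_pair.items():
        share = count / by_category[category]
        if share >= min_share:
            patterns.append((category, module, round(share, 2)))
    return sorted(patterns, key=lambda p: -p[2])

findings = [
    {"module": "reporting", "category": "SQLi"},
    {"module": "reporting", "category": "SQLi"},
    {"module": "reporting", "category": "SQLi"},
    {"module": "billing",   "category": "SQLi"},
    {"module": "auth",      "category": "XSS"},
]
print(top_vulnerability_patterns(findings))
# [('XSS', 'auth', 1.0), ('SQLi', 'reporting', 0.75)]
```

A flagged pair is a lead for human investigation (training gap? shared helper library?), not a conclusion — which is exactly the correlation-versus-causation caution above.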


Safeguard 16.4 — Establish and Manage an Inventory of Third-Party Software Components

Full Description: Establish and manage an updated inventory of third-party components used in development, often referred to as a “bill of materials,” as well as components scheduled for future use. This inventory is to include any risks that each third-party component could pose. Evaluate the list at least monthly to identify any changes or updates to these components, and validate that the component is still supported.

Asset Type: Applications · Security Function: Identify · Implementation Group: IG2

Key Requirements:

  • Maintained inventory of all third-party components (SBOM)
  • Risk assessment for each component
  • Monthly evaluation for changes, updates, and support status
  • Coverage of components in use AND components scheduled for future use

AI Augmentation: AI-powered SCA tools can automatically generate and maintain SBOMs, assess risk based on vulnerability history, maintenance activity, and license terms, and alert on components that become unsupported or unmaintained.

AI Risk: AI coding assistants routinely introduce third-party dependencies — sometimes correctly, sometimes hallucinating non-existent packages (slopsquatting). Every dependency suggested by an AI tool must be verified to exist, be appropriate, and be added to the inventory. AI tools themselves are third-party components that must be inventoried with their own risk assessments.
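A hedged sketch of that verification step: before an AI-suggested dependency is adopted, check that it exists in the package registry at all and that it appears in the approved inventory. Both collections are stand-in sets here; a real implementation would query the live package index and the SBOM system:

```python
def vet_dependency(name, inventory, registry):
    """Return (approved, reason) for a dependency an AI assistant
    suggested. `inventory` is the org's approved-component list
    (16.4); `registry` stands in for a live package-index lookup.
    """
    if name not in registry:
        return False, "not found in registry (possible hallucination)"
    if name not in inventory:
        return False, "not in approved inventory; submit for risk assessment"
    return True, "approved"

# Stand-in data; real checks would hit PyPI/npm and the SBOM tool.
registry = {"requests", "flask", "cryptography"}
inventory = {"requests", "cryptography"}

print(vet_dependency("requests", inventory, registry))   # approved
print(vet_dependency("flask", inventory, registry))      # needs assessment
print(vet_dependency("reqeusts", inventory, registry))   # typo / hallucination
```

The ordering matters: existence first (slopsquatting), then governance (inventory), so a hallucinated name never reaches the risk-assessment queue.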


Safeguard 16.5 — Use Up-to-Date and Trusted Third-Party Software Components

Full Description: Use up-to-date and trusted third-party software components. When possible, choose established and proven frameworks and libraries that provide adequate security. Acquire these components from trusted sources or evaluate the software for vulnerabilities before use.

Asset Type: Applications · Security Function: Protect · Implementation Group: IG2

Key Requirements:

  • Third-party components must be up-to-date
  • Components must be from trusted sources
  • Preference for established, proven frameworks with adequate security
  • Evaluation for vulnerabilities before use (or acquisition from trusted sources)

AI Augmentation: AI tools can monitor dependency freshness across the portfolio, recommend updates with compatibility assessment, and evaluate new component choices against security criteria.

AI Risk: AI tools may recommend specific versions of dependencies based on training data that is outdated. They may recommend components that were trusted at training time but have since been compromised, abandoned, or had licenses changed. The “trusted source” evaluation must be current, not based on AI training data.


Safeguard 16.6 — Establish and Maintain a Severity Rating System and Process for Application Vulnerabilities

Full Description: Establish and maintain a severity rating system and process for application vulnerabilities that facilitates prioritizing the order in which discovered vulnerabilities are fixed. This process includes setting a minimum level of security acceptable for releasing code or applications. Revisit on an annual basis, or when significant enterprise changes occur that could impact this Safeguard.

Asset Type: Applications · Security Function: Govern · Implementation Group: IG2

Key Requirements:

  • Formal severity rating system (CVSS, SSVC, or organizational risk-based system)
  • Prioritization process for remediation ordering
  • Minimum security threshold for code/application release (quality gate)
  • Annual review

AI Augmentation: AI can enhance vulnerability severity assessment by correlating CVSS scores with exploit availability, environmental factors, and business context to produce more actionable prioritization (similar to what SSVC does systematically).

AI Risk: AI-specific vulnerabilities (prompt injection, model poisoning) do not map cleanly to CVSS. Organizations must extend their severity rating systems to account for AI-specific vulnerability classes with appropriate impact and likelihood assessments.
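The minimum-security release threshold can be expressed as a simple quality gate. The four-level scale below is illustrative; CVSS bands or SSVC decisions would slot in the same way:

```python
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def release_gate(findings, max_allowed="medium"):
    """Return (passed, blockers): block release when any open finding
    exceeds the organization's minimum acceptable severity (16.6).

    `findings` is a list of (id, severity) pairs; the scale and the
    default threshold are illustrative assumptions.
    """
    threshold = SEVERITY_RANK[max_allowed]
    blockers = [fid for fid, sev in findings if SEVERITY_RANK[sev] > threshold]
    return (not blockers, blockers)

open_findings = [("VULN-101", "low"), ("VULN-102", "high")]
print(release_gate(open_findings))  # (False, ['VULN-102'])
```

Wiring this into CI makes the “minimum level of security acceptable for releasing code” an enforced gate rather than a policy statement.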


Safeguard 16.7 — Use Standard Hardening Configuration Templates for Application Infrastructure

Full Description: Use standard, industry-recommended hardening configuration templates for application infrastructure components. This includes underlying servers, databases, and web servers, and applies to cloud containers, Platform as a Service (PaaS) components, and SaaS components. Do not allow in-house developed software to weaken configuration hardening.

Asset Type: Applications · Security Function: Protect · Implementation Group: IG2

Key Requirements:

  • Use industry-standard hardening templates (CIS Benchmarks, vendor security guides)
  • Apply to all infrastructure: servers, databases, web servers, cloud containers, PaaS, SaaS
  • In-house software must not weaken configuration hardening
  • Coverage of the full application infrastructure stack

AI Augmentation: AI tools can generate infrastructure-as-code (IaC) templates pre-configured with hardening baselines, scan existing configurations against benchmarks, and recommend remediations for drift.

AI Risk: AI-generated infrastructure configurations may appear hardened but contain subtle misconfigurations. AI coding assistants may generate Dockerfiles, Kubernetes manifests, or Terraform configurations that bypass security controls (running as root, exposing unnecessary ports, disabling TLS verification) unless explicitly prompted to apply hardening standards.
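As an illustration of catching such patterns before deployment, here is a toy lint over Dockerfile text. The three checks are examples only; real scanning should rely on maintained benchmark tooling, not this sketch:

```python
import re

# Illustrative checks only, not a hardening baseline.
CHECKS = [
    (re.compile(r"^USER\s+root\b", re.M), "runs as root"),
    (re.compile(r":latest\b"), "uses a mutable :latest base tag"),
    (re.compile(r"^EXPOSE\s+22\b", re.M), "exposes SSH from a container"),
]

def lint_dockerfile(text):
    """Return the list of hardening-check messages that fire."""
    return [msg for pattern, msg in CHECKS if pattern.search(text)]

dockerfile = """\
FROM python:latest
EXPOSE 22
USER root
"""
print(lint_dockerfile(dockerfile))  # all three checks fire
```

The same pattern generalizes to IaC drift detection: a list of benchmark-derived rules applied in CI before any AI-generated configuration reaches an environment.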


Safeguard 16.8 — Separate Production and Non-Production Systems

Full Description: Maintain separate environments for production and non-production systems. Developers should not have unmonitored access to production environments.

Asset Type: Applications · Security Function: Detect · Implementation Group: IG3

Key Requirements:

  • Separate environments for production and non-production
  • Developer access to production must be monitored (not necessarily prohibited, but monitored)
  • Clear boundary between environments

AI Augmentation: AI can monitor access patterns to detect when development activities are occurring in production environments, and can help enforce environment separation through policy-as-code.

AI Risk: AI coding assistants that connect to live systems for context (databases, APIs, logs) may inadvertently connect to production rather than non-production environments. If an AI tool has credentials for production systems, a prompt injection attack could result in production data access or modification. AI tool configurations must enforce environment separation.
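One fail-closed pattern is to have AI-connected tooling refuse production hostnames outright before opening any connection. The substring convention below is an assumption; adapt it to whatever naming scheme your environments actually use:

```python
# Hypothetical naming convention: production hosts contain these fragments.
PRODUCTION_PATTERNS = ("prod", "prd")

def assert_non_production(host):
    """Fail closed before an AI-assisted tool touches a system:
    raise rather than let tooling connect to a production host."""
    name = host.lower()
    if any(p in name for p in PRODUCTION_PATTERNS):
        raise PermissionError(f"refusing to connect tooling to {host!r}")
    return host

assert_non_production("db-staging-01")  # allowed
try:
    assert_non_production("db-prod-01")
except PermissionError as e:
    print(e)
```

A guard like this belongs inside the tool's connection wrapper, so a prompt-injected instruction cannot simply skip it.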


Safeguard 16.9 — Train Developers in Application Security Concepts and Secure Coding

Full Description: Ensure that all software development personnel receive training in writing secure code for their specific development environment and responsibilities. Training can include general security principles and application security standard practices. Conduct training at least annually and design in a way that promotes security within the development team, and build a culture of security among the developers.

Asset Type: N/A · Security Function: Govern · Implementation Group: IG2

Key Requirements:

  • All software development personnel receive training
  • Training covers secure code writing for their specific environment and responsibilities
  • Includes general security principles AND application security standard practices
  • At minimum annual frequency
  • Designed to promote security culture within development teams

AI Augmentation: AI-powered training platforms can provide personalized, adaptive training based on individual developer weaknesses (identified through their SAST/code review findings). AI can generate realistic, contextual exercises using the organization’s actual technology stack.

AI Risk: Developers must be trained specifically on AI tool security — understanding the limitations of AI-generated code, recognizing AI hallucinations, knowing what data can and cannot be shared with AI tools, and understanding AI-specific vulnerability classes. Traditional security training that ignores AI tools is increasingly insufficient.


Safeguard 16.10 — Apply Secure Design Principles in Application Architectures

Full Description: Apply secure design principles in application architectures. Secure design principles include the concept of least privilege and enforcing mediation to validate every operation that the user makes, promoting the concept of “never trust user input.” Examples include ensuring that explicit error checking is performed and documented for all input, including for size, data type, and acceptable ranges or formats. Secure design also means minimizing the application infrastructure attack surface, such as turning off unprotected ports and services, removing unnecessary programs and files, and renaming or removing default accounts.

Asset Type: Applications · Security Function: Protect · Implementation Group: IG2

Key Requirements:

  • Least privilege implementation
  • Complete mediation (validate every operation)
  • Input validation for all input: size, data type, acceptable ranges, formats
  • Explicit error checking for all input, documented
  • Attack surface minimization (disable unnecessary ports, services, programs, files)
  • Removal/renaming of default accounts
  • “Never trust user input” principle

AI Augmentation: AI can analyze application architectures against secure design principles, identify violations of least privilege, flag missing input validation, and suggest attack surface reduction opportunities.

AI Risk: AI-generated code frequently violates secure design principles unless explicitly instructed otherwise. Common violations include: missing input validation, overly broad error handling (catch-all exceptions), running with elevated privileges, leaving debug endpoints active, and trusting external input without validation. The “never trust user input” principle must extend to “never trust AI-generated code” — review all AI output against secure design principles.
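A small example of the explicit size, type, and format checking the safeguard describes, for a single hypothetical username field (the field name, length limits, and character set are illustrative):

```python
import re

# Hypothetical rule: 3-32 char lowercase identifier starting with a letter.
USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,31}$")

def validate_username(value):
    """Explicit, documented checks for type, size, and format (16.10);
    reject everything that does not match the allow-list pattern."""
    if not isinstance(value, str):
        raise TypeError("username must be a string")
    if not 3 <= len(value) <= 32:
        raise ValueError("username length out of range")
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("username contains disallowed characters")
    return value

print(validate_username("alice_01"))  # alice_01
```

Note the allow-list style: the validator names what is acceptable rather than trying to enumerate attack strings, which is the practical reading of “never trust user input.”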


Safeguard 16.11 — Leverage Vetted Modules or Services for Application Security Components

Full Description: Leverage vetted modules or services for application security components, such as identity management, encryption, and auditing and logging. Using platform features in critical security functions will reduce developers’ workload and minimize the likelihood of design or implementation errors. Modern operating systems provide effective mechanisms for identification, authentication, and authorization and make those mechanisms available to applications. Use only standardized, currently accepted, and extensively reviewed encryption algorithms. Operating systems also provide mechanisms to create and maintain secure audit logs.

Asset Type: Applications · Security Function: Protect · Implementation Group: IG2

Key Requirements:

  • Use vetted, proven modules for: identity management, encryption, auditing and logging
  • Leverage platform-provided security mechanisms
  • Use standardized, currently accepted, extensively reviewed encryption algorithms
  • Do not create custom implementations of security-critical functions

AI Augmentation: AI tools can recommend established security libraries and frameworks appropriate to the technology stack, detect when developers are implementing custom cryptography or authentication, and suggest migration to vetted alternatives.

AI Risk: AI coding assistants frequently generate custom implementations of security functions. Left unchecked, AI may generate: custom password hashing (instead of bcrypt/scrypt/Argon2), custom JWT validation (instead of vetted JWT libraries), custom encryption (instead of NaCl/libsodium/platform crypto), custom session management (instead of framework-provided). These custom implementations are almost always weaker than vetted alternatives.
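As a contrast to hand-rolled schemes, here is password hashing using the platform-provided scrypt KDF from Python's standard library. The work-factor parameters are commonly cited defaults, not an organizational standard:

```python
import hashlib
import hmac
import secrets

def hash_password(password: str):
    """Hash with the vetted, platform-provided scrypt KDF
    (hashlib.scrypt) instead of any custom scheme (16.11)."""
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1)
    # Constant-time comparison, also platform-provided.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Every security-relevant piece here (random salt, KDF, constant-time compare) comes from the standard library, which is exactly the safeguard's point: nothing in this path is custom cryptography.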


Safeguard 16.12 — Implement Code-Level Security Checks

Full Description: Apply static and dynamic analysis tools within the application life cycle to verify that secure coding practices are being adhered to. Most modern tools can be applied during coding, at build, or in production.

Asset Type: Applications · Security Function: Protect · Implementation Group: IG2

Key Requirements:

  • Static analysis (SAST) integrated into the development lifecycle
  • Dynamic analysis (DAST) integrated into the development lifecycle
  • Applied at multiple points: during coding (IDE), at build (CI), in production (runtime)
  • Purpose: verify adherence to secure coding practices

AI Augmentation: AI-powered SAST tools (e.g., GitHub CodeQL, Snyk Code (formerly DeepCode), Amazon CodeGuru) can reduce false-positive rates and detect complex vulnerability patterns that purely rule-based tools miss. AI can also prioritize findings by exploitability and impact.

AI Risk: AI-generated code must pass the same SAST/DAST quality gates as human-written code — there should be no exceptions or reduced standards for “AI-assisted” code. In fact, some organizations apply additional scrutiny to AI-generated code due to the known tendency of AI tools to generate subtly insecure patterns. AI security scanning tools themselves may have blind spots, particularly for novel vulnerability classes.
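To make the idea of a code-level check concrete, here is a toy static check built on Python's ast module, flagging eval/exec calls and hardcoded password-like constants. It is nowhere near a real SAST tool, just an illustration of the mechanism those tools automate at scale:

```python
import ast

def scan_source(source):
    """Return (lineno, message) findings for two toy rules."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # Rule 1: calls to eval()/exec() by name.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in {"eval", "exec"}):
            findings.append((node.lineno, f"use of {node.func.id}()"))
        # Rule 2: string constants assigned to password-like names.
        if (isinstance(node, ast.Assign)
                and isinstance(node.value, ast.Constant)
                and isinstance(node.value.value, str)
                and any(isinstance(t, ast.Name) and "password" in t.id.lower()
                        for t in node.targets)):
            findings.append((node.lineno, "hardcoded credential"))
    return findings

sample = "password = 'hunter2'\nresult = eval(user_input)\n"
print(scan_source(sample))
```

Because the check operates on the parsed AST rather than raw text, it sees through formatting differences, which is the same reason production SAST tools work on program representations instead of string matching.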


Safeguard 16.13 — Conduct Application Penetration Testing

Full Description: Conduct application penetration testing. For critical applications, authenticated penetration testing is better suited to finding business logic vulnerabilities than code scanning and automated security testing. Penetration testing relies on the skill of the tester to manually manipulate an application as an authenticated and unauthenticated user.

Asset Type: Applications · Security Function: Detect · Implementation Group: IG3

Key Requirements:

  • Penetration testing of applications
  • Critical applications require authenticated penetration testing
  • Focus on business logic vulnerabilities that automated tools miss
  • Both authenticated and unauthenticated testing perspectives
  • Relies on skilled human testers (not fully automatable)

AI Augmentation: AI-assisted penetration testing tools can automate reconnaissance, generate test payloads, identify potential attack paths, and prioritize testing targets. AI can also analyze penetration test results to identify patterns and recommend focused areas for manual testing.

AI Risk: Applications that incorporate AI features require specialized penetration testing that includes: prompt injection testing, system prompt extraction attempts, privilege escalation through AI tool chaining, data exfiltration through AI outputs, and testing of AI-specific business logic. Traditional penetration testers may not have the skills to test AI features — specialized training or AI-aware testing teams may be required.


Safeguard 16.14 — Conduct Threat Modeling

Full Description: Conduct threat modeling. Use threat modeling and attack surface analysis to help identify threats and prioritize risk. Threat modeling uses a structured approach to identify threats, characterize an attack surface, and prioritize defensive efforts. Modern threat modeling requires understanding the data flows within an application, understanding trust boundaries, and using frameworks like STRIDE or PASTA to enumerate possible threats.

Asset Type: Applications · Security Function: Protect · Implementation Group: IG3

Key Requirements:

  • Structured threat modeling methodology (STRIDE, PASTA, or equivalent)
  • Attack surface analysis
  • Understanding of data flows within the application
  • Understanding of trust boundaries
  • Threat enumeration and prioritization
  • Output feeds into defensive priorities

AI Augmentation: AI can dramatically accelerate threat modeling by analyzing architecture diagrams, data flow descriptions, and technical documentation to generate initial threat models. AI can identify common threat patterns for specific technology stacks and suggest mitigations based on known effective controls.

AI Risk: Applications that use AI components introduce new trust boundaries, data flows, and threat categories that traditional threat modeling may not capture. Threat models must explicitly include: AI model inputs and outputs as data flows, AI tool access to systems as trust boundary crossings, AI-specific threats (prompt injection, data poisoning, model theft, excessive agency), and the AI supply chain (model providers, training data, plugins/tools). AI-generated threat models may also miss organization-specific threats and should always be reviewed and augmented by human analysts.
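A minimal sketch of STRIDE-style enumeration over a data-flow diagram, extended with an illustrative ai_model element type carrying the AI-specific threats listed above. The element taxonomy and the per-type threat mappings are simplified assumptions, not the full STRIDE methodology:

```python
# Simplified mapping of DFD element types to candidate threats.
STRIDE_BY_ELEMENT = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"],
    "data_store": ["Tampering", "Information disclosure",
                   "Denial of service"],
    "data_flow": ["Tampering", "Information disclosure",
                  "Denial of service"],
    # AI components carry threats classic STRIDE does not name.
    "ai_model": ["Prompt injection", "Data poisoning",
                 "Model theft", "Excessive agency"],
}

def enumerate_threats(elements):
    """elements: (name, element_type) pairs from a data-flow diagram;
    returns candidate (element, threat) pairs for analyst review."""
    return [(name, threat)
            for name, etype in elements
            for threat in STRIDE_BY_ELEMENT.get(etype, [])]

dfd = [("user", "external_entity"),
       ("chat endpoint", "process"),
       ("LLM", "ai_model")]
threats = enumerate_threats(dfd)
print(len(threats))  # 12 candidate threats to triage
```

The output is a starting checklist, not a threat model: prioritization, trust-boundary reasoning, and organization-specific threats remain human work, as the paragraph above stresses.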


3. Summary Table — All 14 Safeguards

| ID | Safeguard Title | Asset Type | Security Function | IG |
|---|---|---|---|---|
| 16.1 | Establish and Maintain a Secure Application Development Process | Applications | Govern | IG2 |
| 16.2 | Establish and Maintain a Process to Accept and Address Software Vulnerabilities | Applications | Protect | IG2 |
| 16.3 | Perform Root Cause Analysis on Security Vulnerabilities | Applications | Protect | IG2 |
| 16.4 | Establish and Manage an Inventory of Third-Party Software Components | Applications | Identify | IG2 |
| 16.5 | Use Up-to-Date and Trusted Third-Party Software Components | Applications | Protect | IG2 |
| 16.6 | Establish and Maintain a Severity Rating System and Process for Application Vulnerabilities | Applications | Govern | IG2 |
| 16.7 | Use Standard Hardening Configuration Templates for Application Infrastructure | Applications | Protect | IG2 |
| 16.8 | Separate Production and Non-Production Systems | Applications | Detect | IG3 |
| 16.9 | Train Developers in Application Security Concepts and Secure Coding | N/A | Govern | IG2 |
| 16.10 | Apply Secure Design Principles in Application Architectures | Applications | Protect | IG2 |
| 16.11 | Leverage Vetted Modules or Services for Application Security Components | Applications | Protect | IG2 |
| 16.12 | Implement Code-Level Security Checks | Applications | Protect | IG2 |
| 16.13 | Conduct Application Penetration Testing | Applications | Detect | IG3 |
| 16.14 | Conduct Threat Modeling | Applications | Protect | IG3 |

4. Framework Cross-Mappings

CIS CG16 to NIST Cybersecurity Framework (CSF) 2.0

| CIS Safeguard | NIST CSF 2.0 Function | CSF Category |
|---|---|---|
| 16.1 | GOVERN | GV.PO — Policy |
| 16.2 | RESPOND | RS.MA — Incident Management |
| 16.3 | IDENTIFY | ID.RA — Risk Assessment |
| 16.4 | IDENTIFY | ID.AM — Asset Management |
| 16.5 | PROTECT | PR.DS — Data Security |
| 16.6 | GOVERN | GV.RM — Risk Management Strategy |
| 16.7 | PROTECT | PR.PS — Platform Security |
| 16.8 | PROTECT | PR.AA — Identity Management, Authentication, and Access Control |
| 16.9 | PROTECT | PR.AT — Awareness and Training |
| 16.10 | PROTECT | PR.PS — Platform Security |
| 16.11 | PROTECT | PR.PS — Platform Security |
| 16.12 | DETECT | DE.CM — Continuous Monitoring |
| 16.13 | DETECT | DE.CM — Continuous Monitoring |
| 16.14 | IDENTIFY | ID.RA — Risk Assessment |

CIS CG16 to NIST SP 800-53 Rev 5

| CIS Safeguard | NIST 800-53 Rev 5 Controls |
|---|---|
| 16.1 | SA-3 (System Development Life Cycle), SA-8 (Security and Privacy Engineering Principles), SA-15 (Development Process, Standards, and Tools) |
| 16.2 | SI-2 (Flaw Remediation), SI-5 (Security Alerts, Advisories, and Directives), SR-3 (Supply Chain Controls and Processes) |
| 16.3 | SI-2 (Flaw Remediation), CA-7 (Continuous Monitoring), RA-5 (Vulnerability Monitoring and Scanning) |
| 16.4 | SA-4 (Acquisition Process), SR-4 (Provenance), CM-8 (System Component Inventory) |
| 16.5 | SA-4 (Acquisition Process), SA-22 (Unsupported System Components), SI-2 (Flaw Remediation) |
| 16.6 | RA-3 (Risk Assessment), RA-5 (Vulnerability Monitoring and Scanning), PM-16 (Threat Awareness Program) |
| 16.7 | CM-6 (Configuration Settings), CM-7 (Least Functionality), SA-8 (Security and Privacy Engineering Principles) |
| 16.8 | CM-4 (Impact Analyses), SA-11 (Developer Testing and Evaluation), SC-32 (System Partitioning) |
| 16.9 | AT-2 (Literacy Training and Awareness), AT-3 (Role-Based Training), SA-16 (Developer-Provided Training) |
| 16.10 | SA-8 (Security and Privacy Engineering Principles), SA-17 (Developer Security and Privacy Architecture and Design), SC-7 (Boundary Protection) |
| 16.11 | SA-4 (Acquisition Process), SA-8 (Security and Privacy Engineering Principles), SC-13 (Cryptographic Protection) |
| 16.12 | SA-11 (Developer Testing and Evaluation), SA-15 (Development Process, Standards, and Tools), SI-7 (Software, Firmware, and Information Integrity) |
| 16.13 | CA-8 (Penetration Testing), SA-11 (Developer Testing and Evaluation), RA-5 (Vulnerability Monitoring and Scanning) |
| 16.14 | RA-3 (Risk Assessment), RA-5 (Vulnerability Monitoring and Scanning), SA-11 (Developer Testing and Evaluation) |

CIS CG16 to ISO 27001:2022

| CIS Safeguard | ISO 27001:2022 Controls |
|---|---|
| 16.1 | A.8.25 (Secure development life cycle), A.8.26 (Application security requirements) |
| 16.2 | A.8.8 (Management of technical vulnerabilities), A.6.8 (Information security event reporting) |
| 16.3 | A.8.8 (Management of technical vulnerabilities), A.5.27 (Learning from information security incidents) |
| 16.4 | A.8.9 (Configuration management), A.5.23 (Information security for use of cloud services) |
| 16.5 | A.8.8 (Management of technical vulnerabilities), A.8.9 (Configuration management) |
| 16.6 | A.8.8 (Management of technical vulnerabilities), A.5.12 (Classification of information) |
| 16.7 | A.8.9 (Configuration management), A.8.27 (Secure system architecture and engineering principles) |
| 16.8 | A.8.31 (Separation of development, test and production environments) |
| 16.9 | A.6.3 (Information security awareness, education and training) |
| 16.10 | A.8.27 (Secure system architecture and engineering principles), A.8.26 (Application security requirements) |
| 16.11 | A.8.28 (Secure coding), A.8.25 (Secure development life cycle) |
| 16.12 | A.8.29 (Security testing in development and acceptance), A.8.28 (Secure coding) |
| 16.13 | A.8.29 (Security testing in development and acceptance), A.8.30 (Outsourced development) |
| 16.14 | A.8.25 (Secure development life cycle), A.8.27 (Secure system architecture and engineering principles) |

CIS CG16 to OWASP Frameworks

| CIS Safeguard | OWASP SAMM | OWASP ASVS | OWASP Testing Guide | OWASP Threat Modeling |
|---|---|---|---|---|
| 16.1 | Governance — Strategy & Metrics, Policy & Compliance | Chapter 1 (Architecture) | Section 2 (Introduction) | — |
| 16.2 | Implementation — Defect Management | — | — | — |
| 16.3 | Implementation — Defect Management | — | — | — |
| 16.4 | Implementation — Secure Build | V14 (Configuration) | — | — |
| 16.5 | Implementation — Secure Build | V14 (Configuration) | — | — |
| 16.6 | Implementation — Defect Management | — | — | — |
| 16.7 | Implementation — Secure Deployment | V14 (Configuration) | Section 4.10 (Config Testing) | — |
| 16.8 | Implementation — Secure Deployment | V14 (Configuration) | — | — |
| 16.9 | Governance — Education & Guidance | — | — | — |
| 16.10 | Design — Security Architecture | V1 (Architecture) | Section 4 (Assessment) | Full methodology |
| 16.11 | Design — Security Architecture | V6 (Cryptography), V2 (Authentication) | — | — |
| 16.12 | Verification — Security Testing | All chapters | Sections 4.1–4.11 | — |
| 16.13 | Verification — Security Testing | — | Section 3 (Methodology) | — |
| 16.14 | Design — Threat Assessment | V1 (Architecture) | Section 4.1 (Info Gathering) | Full methodology |

5. Implementation Guidance

Phased Implementation Approach

For organizations building a CG16 program from scratch, the following phased approach aligns with IG progression:

Phase 1 — Foundation (Months 1–3): Implement the governance safeguards first.

  • 16.1: Document the secure development process
  • 16.6: Establish severity rating system
  • 16.9: Initiate developer training program

Phase 2 — Visibility (Months 3–6): Gain visibility into the current state.

  • 16.4: Build third-party component inventory (SBOM)
  • 16.5: Address known vulnerable and outdated components
  • 16.12: Deploy SAST/DAST tools (start with SAST in CI/CD)

Phase 3 — Protection (Months 6–12): Implement protective controls.

  • 16.2: Establish vulnerability handling process
  • 16.3: Begin root cause analysis practice
  • 16.7: Apply hardening configuration templates
  • 16.10: Formalize secure design principles
  • 16.11: Standardize security component usage

Phase 4 — Maturity (Months 12–18): Advance to IG3 safeguards.

  • 16.8: Formalize environment separation with monitoring
  • 16.13: Establish penetration testing program
  • 16.14: Implement threat modeling practice

Measuring Progress

For each safeguard, define measurable indicators:

| Safeguard | Example Metric | Target |
|---|---|---|
| 16.1 | Process documentation exists, reviewed within 12 months | 100% of required areas documented |
| 16.2 | Mean time to acknowledge external vulnerability reports | < 48 hours |
| 16.3 | Percentage of critical/high vulnerabilities with RCA completed | > 90% |
| 16.4 | Percentage of applications with current SBOM | 100% |
| 16.5 | Percentage of dependencies with no known critical/high CVEs | > 95% |
| 16.6 | Percentage of vulnerabilities rated using standard system | 100% |
| 16.7 | Percentage of infrastructure components meeting hardening baselines | > 95% |
| 16.8 | Number of unauthorized production access events | 0 |
| 16.9 | Developer training completion rate (annual) | > 95% |
| 16.10 | Percentage of new applications with documented secure design review | 100% |
| 16.11 | Percentage of applications using vetted security components | > 95% |
| 16.12 | Percentage of repositories with SAST/DAST in CI/CD | 100% |
| 16.13 | Percentage of critical applications pen tested annually | 100% |
| 16.14 | Percentage of critical applications with current threat model | 100% |

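Most of the metrics above reduce to a coverage percentage over a set of per-item status flags. A minimal sketch of that calculation (the data and names are hypothetical):

```python
# Sketch: compute a coverage metric from per-item status flags,
# e.g. Safeguard 16.4's "percentage of applications with a
# current SBOM". The input data here is hypothetical.

def coverage_pct(items):
    """Percentage of items meeting the criterion."""
    if not items:
        return 0.0
    return 100.0 * sum(items) / len(items)

apps_with_sbom = [True, True, False, True]  # hypothetical per-app status
print(f"SBOM coverage: {coverage_pct(apps_with_sbom):.0f}% (target 100%)")
# SBOM coverage: 75% (target 100%)
```
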
Summary

CIS Controls v8 Control Group 16 provides a comprehensive, structured approach to application software security. Its 14 safeguards span the full software lifecycle — from governance and training through design, implementation, testing, and operational vulnerability management.

Key takeaways:

  1. CG16 is not for beginners: Its complete absence from IG1 reflects the organizational maturity required. Build foundational controls first.
  2. Governance before tooling: Safeguards 16.1, 16.6, and 16.9 establish the process, rating system, and training that make all other safeguards effective.
  3. AI changes every safeguard: Each of the 14 safeguards is affected by AI — both as an enabler (AI tools that help implement the safeguard) and as a risk vector (AI tools that introduce new threats the safeguard must address).
  4. Cross-framework mapping enables efficiency: Organizations subject to multiple compliance frameworks can use CG16 as a common denominator — implementing CG16 addresses requirements across NIST CSF, NIST 800-53, ISO 27001, and OWASP simultaneously.
  5. Measurement matters: Each safeguard must have defined metrics. What gets measured gets managed.

Assessment Questions

  1. Why is CG16 completely absent from Implementation Group 1? What does this tell us about the prerequisites for an effective application security program?

  2. For Safeguard 16.4 (Third-Party Component Inventory), describe how the introduction of AI coding assistants changes both the opportunity and the risk. Include the concept of slopsquatting in your answer.

  3. Map the following scenario to specific CG16 safeguards: "A developer uses an AI coding assistant to generate a REST API endpoint. The AI suggests using a custom JWT implementation instead of a vetted library, introduces a dependency on a package that has a known critical CVE, and the endpoint lacks input validation."

  4. Compare the security function distribution across CG16 (Govern: 3, Identify: 1, Protect: 8, Detect: 2). Why is Protect so heavily weighted? What would you recommend to balance this distribution?

  5. Select three CIS CG16 safeguards and for each, identify the corresponding NIST 800-53, ISO 27001:2022, and OWASP SAMM controls. Explain how addressing the CIS safeguard simultaneously satisfies requirements across all three frameworks.


References

  • CIS Controls v8 β€” Control Group 16: Application Software Security
  • CIS Controls v8 Implementation Guide
  • NIST Cybersecurity Framework (CSF) 2.0
  • NIST SP 800-53 Rev 5: Security and Privacy Controls for Information Systems and Organizations
  • ISO/IEC 27001:2022: Information Security, Cybersecurity and Privacy Protection
  • OWASP Software Assurance Maturity Model (SAMM) v2.0
  • OWASP Application Security Verification Standard (ASVS) v4.0
  • OWASP Web Security Testing Guide v4.2
  • OWASP Threat Modeling Community

Study Guide

Key Takeaways

  1. CG16 is absent from IG1 — Application security requires organizational maturity, specialized tools, and foundational controls before CG16 safeguards are effective.
  2. Protect dominates the security functions — 8 of 14 safeguards focus on prevention, with Govern (3), Identify (1), and Detect (2) filling out the framework.
  3. Governance before tooling — Phase 1 implementation starts with 16.1 (process), 16.6 (severity), and 16.9 (training) before deploying any security tools.
  4. AI changes every safeguard — Each of the 14 safeguards is affected by AI as both an enabler and a new risk vector that the safeguard must address.
  5. Cross-framework mapping enables efficiency — Implementing CG16 addresses requirements across NIST CSF, NIST 800-53, ISO 27001, and OWASP simultaneously.
  6. Three safeguards are IG3 — 16.8 (environment separation), 16.13 (pen testing), and 16.14 (threat modeling) require the highest organizational maturity.
  7. Monthly component evaluation — Safeguard 16.4 requires third-party component inventory evaluation at least monthly, not annually.

Important Definitions

Term | Definition
Implementation Group (IG) | CIS maturity tiers: IG1 (Essential), IG2 (Managed), IG3 (Mature)
CG16 | CIS Controls v8 Control Group 16 — Application Software Security, containing 14 safeguards
Safeguard 16.1 | Establish and maintain a documented secure application development process covering six areas
Safeguard 16.11 | Leverage vetted modules for security components; use only standardized encryption algorithms
Safeguard 16.14 | Conduct threat modeling using structured approaches like STRIDE or PASTA
SBOM | Software Bill of Materials — inventory maintained per Safeguard 16.4
Root Cause Analysis | Safeguard 16.3 requirement to evaluate underlying issues creating vulnerabilities, not just fix individual instances

Quick Reference

  • Framework/Process: 14 safeguards spanning Govern/Identify/Protect/Detect; phased implementation over 18 months across four phases
  • Key Numbers: 11 IG2 safeguards, 3 IG3 safeguards; monthly evaluation cadence for 16.4; 100% target for SBOM coverage, training completion, and SAST/DAST coverage
  • Common Pitfalls: Deploying tools before establishing governance; treating AI-generated code with reduced scrutiny; forgetting that 16.8 says β€œunmonitored” not β€œprohibited” for developer production access; implementing safeguards in isolation without cross-framework mapping

Review Questions

  1. Why is CG16 completely absent from IG1, and what does this tell you about prerequisites for an effective application security program?
  2. How does AI change both the opportunity and the risk for Safeguard 16.11 (vetted modules)?
  3. If you could only implement three CG16 safeguards first, which would you choose and why?
  4. How would you use the NIST CSF and ISO 27001 cross-mappings to satisfy multiple compliance frameworks with a single CG16 implementation?
  5. What metrics would you define for Safeguard 16.13 (penetration testing) to demonstrate effectiveness to leadership?
Knowledge Check

Q1. Why is CIS Control Group 16 completely absent from Implementation Group 1 (IG1)?

Q2. How many of the 14 CG16 safeguards are assigned to Implementation Group 3 (IG3)?

Q3. Which security function is most heavily represented among the 14 CG16 safeguards?

Q4. Safeguard 16.4 requires organizations to evaluate their third-party component inventory at what minimum frequency?

Q5. Which CG16 safeguard specifically states that developers should not have unmonitored access to production environments?

Q6. According to the phased implementation approach, which safeguards should be implemented first (Phase 1, Months 1-3)?

Q7. Safeguard 16.11 specifically warns against which common AI coding assistant behavior?

Q8. Which CG16 safeguard maps to the NIST CSF 2.0 IDENTIFY function's Asset Management (ID.AM) category?

Q9. What is the key AI-specific risk identified for Safeguard 16.14 (Conduct Threat Modeling)?

Q10. Safeguard 16.13 (Conduct Application Penetration Testing) emphasizes that authenticated penetration testing is better suited than automated tools for finding which type of vulnerability?
