1.4 — Regulatory & Compliance Framework
Learning Objectives
- ✓ Articulate PCI-DSS 4.0 Requirement 6.2.2 training requirements and why generic OWASP Top 10 training alone is insufficient
- ✓ Map SOC 2 Trust Services Criteria to SSDLC activities
- ✓ Describe FDA cybersecurity guidance requirements for medical device software
- ✓ Identify HIPAA technical safeguards relevant to application development
- ✓ Explain FedRAMP and NIST 800-53 software assurance requirements
- ✓ Navigate the EU AI Act timeline, risk classifications, and obligations
- ✓ Apply the NIST AI RMF to organizations using AI development tools
- ✓ Describe ISO/IEC 42001 requirements and their implications for AI-augmented development
- ✓ Design compliance evidence automation using CI/CD pipeline data
- ✓ Use the compliance matrix to determine training frequency, content, and evidence requirements per regulation
1. PCI-DSS 4.0 — Requirement 6.2.2
Full Requirement Text
“Software development personnel working on bespoke and custom software are trained at least once every 12 months as follows:
- On software security relevant to their job function and development languages.
- Including secure software design and secure coding techniques.
- Including, if security testing tools are used in the development process, how to use the tools for detecting vulnerabilities in software.”
Why Generic OWASP Top 10 Training Does Not Satisfy 6.2.2
A common misconception is that an annual one-hour presentation on the OWASP Top 10 satisfies PCI-DSS training requirements. It does not. Requirement 6.2.2 explicitly demands:
Language and framework specificity. Training must be relevant to “development languages” in use. A Java developer must receive Java-specific secure coding training. A Python developer must receive Python-specific training. A generic language-agnostic overview is insufficient.
Role specificity. Training must be relevant to the individual’s “job function.” An architect needs different training than a frontend developer, who needs different training than a DevOps engineer. One-size-fits-all training does not satisfy this requirement.
Secure design AND coding. Both design principles (threat modeling, secure architecture patterns) and coding practices (input validation, parameterized queries, output encoding) must be covered. Many training programs focus solely on coding and neglect design.
Security tool usage. If the organization uses security testing tools (SAST, DAST, SCA), developers must be trained on how to use those specific tools to detect vulnerabilities. “We have tools” is not the same as “developers know how to use the tools effectively.”
Attack type coverage. The related Requirement 6.2.4 expects protection against attacks relevant to the specific application type and data handled, so training must cover those attack classes. A payment processing application requires training on attacks against payment flows, not just generic web security.
Evidence Requirements
- Training records with dates, attendees, content covered
- Content mapping to development languages and frameworks in use
- Role-specific training tracks documented
- Tool-specific training materials and completion records
- Assessment results demonstrating comprehension (not just attendance)
- Annual training plan and execution records
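The evidence dimensions above can be captured in a structured record per attendee. The sketch below is illustrative only — the field names and the 80% passing threshold are assumptions, not anything PCI-DSS prescribes:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch: field names and thresholds are assumptions,
# not mandated by PCI-DSS 4.0.
@dataclass
class TrainingRecord:
    attendee: str
    role: str                      # job function, e.g. "backend developer"
    training_date: date
    languages_covered: list = field(default_factory=list)   # e.g. ["Java"]
    topics: list = field(default_factory=list)              # design + coding topics
    tools_covered: list = field(default_factory=list)       # SAST/DAST/SCA tools
    assessment_score: float = 0.0  # comprehension evidence, not just attendance

    def satisfies_6_2_2(self, required_languages, required_tools) -> bool:
        """Rough completeness check against the 6.2.2 evidence dimensions."""
        return (
            set(required_languages) <= set(self.languages_covered)
            and set(required_tools) <= set(self.tools_covered)
            and self.assessment_score >= 0.8   # assumed passing threshold
        )
```

A record like this makes the "content mapping" and "assessment results" evidence queryable rather than buried in slide decks.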
AI Implications for PCI-DSS 4.0
PCI-DSS 4.0 Requirement 6.2.2 must now extend to cover AI tool usage in development:
- How AI coding assistants handle cardholder data (CHD) and sensitive authentication data (SAD)
- Data classification rules for AI tool input (no CHD/SAD may be shared with external AI tools)
- Review requirements for AI-generated payment processing code
- Secure use of AI tools within the cardholder data environment (CDE)
- AI tool security testing (if AI tools are used in security testing, training on their limitations)
2. SOC 2 Trust Services Criteria
SOC 2 examinations evaluate an organization’s controls against the Trust Services Criteria. Four criteria categories directly affect SSDLC:
CC1 — Control Environment
The set of standards, processes, and structures that provide the basis for carrying out internal control across the organization.
SSDLC relevance:
- Management commitment to security as evidenced by SSDLC policy
- Organizational structure supporting security (security champions, security team)
- Personnel policies including developer training requirements
- Board and executive oversight of the security program
Evidence: Secure development policy, organizational chart showing security reporting lines, training program documentation, board security briefing materials.
CC2 — Communication and Information
The processes and controls to support the generation, use, and communication of quality information to support internal control.
SSDLC relevance:
- Communication of security policies to development teams
- Security requirements communicated and accessible
- Incident and vulnerability information communicated to affected parties
- Change management communication processes
Evidence: Policy distribution records, security requirements in project documentation, vulnerability notification procedures, change advisory board minutes.
CC3 — Risk Assessment
The processes for identifying, analyzing, and managing risks relevant to the achievement of objectives.
SSDLC relevance:
- Application risk assessment process
- Threat modeling as a risk identification activity
- Vulnerability management as ongoing risk assessment
- Third-party component risk assessment
Evidence: Application risk registers, threat model documents, vulnerability scan reports, third-party risk assessments.
CC8 — Change Management
The processes and controls related to changes to infrastructure, data, software, and procedures.
SSDLC relevance:
- Change management process for all code changes
- Security review as a change management gate
- Version control and access controls on source code
- Deployment processes with approval workflows
- Rollback procedures and testing
Evidence: Change management policy, pull request / merge request records with security review, deployment approval records, version control access logs.
SOC 2 and AI Development
SOC 2 auditors are increasingly asking about AI tool usage in development. Key questions include:
- What AI tools are used in the development process?
- What data is shared with AI tool providers?
- How is AI-generated code reviewed for quality and security?
- What controls prevent sensitive data from being shared with AI tools?
- How are AI tool access and usage logged?
Organizations should proactively document AI tool governance as part of their SOC 2 control environment (CC1) to avoid findings.
3. FDA Cybersecurity Guidance
Final Guidance (June 2025)
The FDA’s final guidance on cybersecurity for medical devices, which implements the statutory requirements of Section 524B of the FD&C Act (added by the PATCH Act), establishes mandatory cybersecurity requirements for premarket submissions.
Required SSDLC Activities
Code Review: All code in medical device software must undergo security-focused code review. This includes both manual review and automated analysis. The FDA expects evidence that code review processes are documented, consistently applied, and that findings are tracked to resolution.
Static Application Security Testing (SAST): SAST must be applied to identify vulnerability patterns in source code. The FDA expects coverage of the entire codebase, not just security-critical modules. SAST findings must be triaged, and the disposition of each finding must be documented.
Dynamic Application Security Testing (DAST): DAST must be performed on running software to identify vulnerabilities not detectable through static analysis. This is particularly important for medical devices with web interfaces, APIs, or network services.
Penetration Testing: Independent penetration testing is expected for Class II and Class III medical devices. The testing must cover the device’s attack surface including network interfaces, wireless protocols, USB/serial interfaces, web interfaces, and APIs.
Threat Modeling: Threat modeling is mandatory. The FDA expects threat models that identify: the device’s attack surface, potential threat actors and their motivations, vulnerability classes relevant to the device’s technology stack, and mitigations for identified threats.
Software Bill of Materials (SBOM): The FDA requires an SBOM for all medical device software. The SBOM must include all open-source and commercial third-party components, their versions, and known vulnerabilities. The SBOM must be maintained throughout the product lifecycle and provided to customers and the FDA.
Coordinated Vulnerability Disclosure: Manufacturers must have a coordinated vulnerability disclosure policy and process. This must include a publicly accessible mechanism for reporting vulnerabilities, a defined timeline for acknowledgment and response, and a process for issuing patches.
Patch Timelines: The FDA establishes expectations for patch timelines:
- Critical vulnerabilities with active exploitation: Immediate mitigation with patch within days
- Critical vulnerabilities without active exploitation: Patch within a defined short timeline
- Non-critical vulnerabilities: Reasonable patch timeline with risk-based prioritization
- All patches must be validated through the device’s quality management system
AI Implications for Medical Devices
The FDA has issued separate guidance on AI/ML-based Software as a Medical Device (SaMD). Key requirements:
- Algorithm change protocol (predetermined change control plan)
- Real-world performance monitoring
- Bias detection and mitigation
- Transparency in AI decision-making
- Training data integrity verification
- If AI tools are used in developing medical device software, the development process must account for AI tool limitations and mandate human review of all AI-generated code in safety-critical components
4. HIPAA Technical Safeguards
The HIPAA Security Rule’s Technical Safeguards (45 CFR 164.312) establish security requirements for electronic protected health information (ePHI). These directly impact application development for healthcare systems.
Access Controls (164.312(a)(1))
- Unique User Identification (Required): Assign a unique name and/or number for identifying and tracking user identity
- Emergency Access Procedure (Required): Establish procedures for obtaining necessary ePHI during an emergency
- Automatic Logoff (Addressable): Implement electronic procedures that terminate a session after a predetermined time of inactivity
- Encryption and Decryption (Addressable): Implement a mechanism to encrypt and decrypt ePHI
SSDLC impact: Applications handling ePHI must implement proper authentication, authorization, session management, and encryption. These must be security requirements captured early in the SDLC.
Audit Controls (164.312(b))
Implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use ePHI.
SSDLC impact: Applications must log all access to ePHI — who accessed what data, when, from where, and what action was performed. Logging must be immutable, retained per policy, and available for review. This must be designed into the application architecture, not added as an afterthought.
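One common way to make such a log tamper-evident is hash chaining: each entry embeds the hash of the previous one, so any retroactive edit breaks the chain. The sketch below illustrates the pattern; the field names are illustrative, not prescribed by 45 CFR 164.312(b):

```python
import hashlib
import json
from datetime import datetime, timezone

# Tamper-evident ePHI access log sketch. Each entry records who, what,
# when, from where, and what action, then chains to the prior entry's hash.
def append_audit_entry(log, *, user_id, patient_id, action, source_ip):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,          # who
        "patient_id": patient_id,    # what data
        "action": action,            # what was done
        "source_ip": source_ip,      # from where
        "prev_hash": log[-1]["hash"] if log else "GENESIS",
    }
    payload = json.dumps({k: v for k, v in entry.items() if k != "hash"},
                         sort_keys=True)
    entry["hash"] = hashlib.sha256(payload.encode()).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every hash; return False if any entry was altered."""
    prev = "GENESIS"
    for entry in log:
        if entry["prev_hash"] != prev:
            return False
        payload = json.dumps({k: v for k, v in entry.items() if k != "hash"},
                             sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

In production the log would also go to write-once storage; the chain alone detects tampering but does not prevent deletion of the tail.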
Integrity Controls (164.312(c)(1))
Implement policies and procedures to protect ePHI from improper alteration or destruction, including a mechanism to authenticate ePHI (addressable).
SSDLC impact: Applications must protect data integrity through checksums, digital signatures, or equivalent mechanisms. Database integrity controls, backup verification, and tamper detection must be designed into the system.
Transmission Security (164.312(e)(1))
Implement technical security measures to guard against unauthorized access to ePHI transmitted over electronic communications networks. Includes integrity controls (addressable) and encryption (addressable).
SSDLC impact: TLS for all data in transit, certificate validation, secure API communication, VPN for remote access to ePHI systems. Applications must enforce transport security and reject insecure connections.
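At the application level, "enforce transport security" can be concretized as a strict TLS policy: certificate validation on, hostname checking on, legacy protocol versions refused. A minimal sketch using Python's standard library:

```python
import ssl

# Client-side TLS policy sketch: certificate validation, hostname checking,
# and a TLS 1.2 floor. The context would be handed to an HTTPS client or
# socket wrapper in a real application.
def make_strict_tls_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()            # verifies certs by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0/1.1 and SSLv3
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

The key design point is fail-closed: a connection that cannot satisfy the policy is rejected rather than silently downgraded.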
Training Requirements
HIPAA requires workforce training on security policies and procedures (164.308(a)(5)). For development teams working on ePHI systems, this must include:
- HIPAA Security Rule awareness
- Minimum necessary access principle
- ePHI handling in development environments (de-identification, synthetic data)
- Incident reporting procedures
- Specific application security training relevant to ePHI protection
AI and HIPAA
Using AI tools with ePHI is highly regulated:
- ePHI must not be shared with AI tools unless a Business Associate Agreement (BAA) is in place with the AI provider
- Very few AI tool providers currently offer BAAs
- De-identified data (per the Safe Harbor or Expert Determination methods) may be used with AI tools, but de-identification must be verified
- AI tools used within healthcare applications that process ePHI must themselves comply with HIPAA technical safeguards
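The de-identification precondition above can be sketched as a field-stripping step. This is deliberately incomplete — the Safe Harbor method enumerates 18 identifier categories, and the subset below only demonstrates the pattern, not a compliant implementation:

```python
# Illustrative (incomplete) de-identification sketch: strips a few direct
# identifiers before data could even be considered for AI tool use. NOT a
# full Safe Harbor implementation; the field list is a hypothetical subset.
DIRECT_IDENTIFIER_FIELDS = {"name", "ssn", "email", "phone", "address", "mrn"}

def strip_direct_identifiers(record: dict) -> dict:
    return {k: v for k, v in record.items()
            if k not in DIRECT_IDENTIFIER_FIELDS}
```

Real de-identification must be verified (Safe Harbor or Expert Determination) before any output reaches an AI tool without a BAA.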
5. FedRAMP / NIST 800-53
SA-11 — Developer Testing and Evaluation
Requires the organization to require the developer of information systems, system components, or information system services to:
- Create and implement a security assessment plan
- Perform unit, integration, system, and regression testing/evaluation
- Produce evidence of the execution of the security assessment plan and the results of the testing/evaluation
- Implement a verifiable flaw remediation process
- Correct flaws identified during security testing/evaluation
Enhancements:
- SA-11(1): Static Code Analysis — require the developer to employ static code analysis tools
- SA-11(2): Threat Modeling / Vulnerability Analysis — require the developer to perform threat modeling and vulnerability analysis
- SA-11(4): Manual Code Reviews — require the developer to perform manual code review
- SA-11(5): Penetration Testing — require the developer to perform penetration testing
- SA-11(8): Dynamic Code Analysis — require the developer to employ dynamic code analysis tools
SA-15 — Development Process, Standards, and Tools
Requires the organization to require the developer of information systems, system components, or services to follow a documented development process that:
- Explicitly addresses security and privacy requirements
- Identifies the standards and tools used in the development process
- Documents the specific tool options and tool configurations used
- Documents, manages, and ensures the integrity of changes to the process and/or tools
SA-16 — Developer-Provided Training
Requires the organization to require the developer of information systems, system components, or services to provide training on the correct use and operation of implemented security functions, controls, and/or mechanisms.
AT Family — Awareness and Training
- AT-2: Literacy Training and Awareness — security awareness training for all users
- AT-3: Role-Based Training — role-based security training for personnel with assigned security roles
- AT-4: Training Records — document and monitor training activities
Rev 5.2 Software Resilience Requirements
NIST 800-53 Rev 5.2 (draft) introduces enhanced requirements for software resilience:
- Software diversity requirements to reduce monoculture risk
- System component recovery procedures
- Software integrity verification at runtime
- Supply chain risk management integration with SSDLC
FedRAMP and AI
FedRAMP is actively developing guidance for AI services in federal cloud environments. Key emerging requirements:
- AI model transparency documentation
- AI system boundary documentation (what data the AI accesses)
- AI-specific continuous monitoring
- AI tool authorization within the FedRAMP authorization boundary
- Training data provenance for AI systems processing federal data
6. EU AI Act
The EU AI Act is the world’s first comprehensive regulation of artificial intelligence. Its phased implementation creates immediate compliance obligations for organizations developing or using AI systems in the EU market.
Implementation Timeline
| Date | Milestone |
|---|---|
| August 1, 2024 | AI Act enters into force |
| February 2, 2025 | Prohibited AI practices ban takes effect. AI literacy obligations take effect (Article 4) |
| August 2, 2025 | Governance rules for GPAI with systemic risk. Obligations for providers of GPAI models. Notified bodies designated |
| August 2, 2026 | Full application of rules for high-risk AI systems listed in Annex III. Most provisions become enforceable |
| August 2, 2027 | Rules for high-risk AI systems that are safety components of products (Annex I). Full enforcement |
Risk Classification
The AI Act classifies AI systems into four risk tiers with corresponding regulatory treatment:
Unacceptable Risk (Prohibited)
AI systems that are considered a clear threat to safety, livelihoods, and rights:
- Social scoring by governments
- Real-time biometric identification in public spaces (with limited law enforcement exceptions)
- Emotion recognition in workplace and educational institutions
- Cognitive behavioral manipulation of vulnerable groups
- AI that exploits vulnerabilities of specific groups (age, disability)
- Untargeted scraping of facial images from internet or CCTV
High Risk
AI systems used in critical areas that must comply with strict requirements before market placement:
- Biometric identification and categorization of natural persons
- Management and operation of critical infrastructure
- Education and vocational training (admission, assessment)
- Employment, worker management, access to self-employment
- Access to essential private and public services (credit scoring, insurance)
- Law enforcement (risk assessments, lie detection, profiling)
- Migration, asylum, and border control management
- Administration of justice and democratic processes
Mandatory requirements for high-risk AI:
- Risk management system throughout lifecycle
- Data governance and management practices
- Technical documentation
- Record-keeping (logging)
- Transparency and information to deployers
- Human oversight measures
- Accuracy, robustness, and cybersecurity
- Quality management system
- Conformity assessment before market placement
Limited Risk
AI systems with specific transparency obligations:
- AI systems that interact with natural persons (chatbots) must disclose they are AI
- AI-generated content (deepfakes, synthetic text) must be labeled
- Emotion recognition systems must inform subjects
- Biometric categorization systems must inform subjects
Minimal Risk
All other AI systems, which may be developed and used without additional legal requirements beyond existing law. The AI Act encourages voluntary codes of conduct for minimal-risk AI.
General-Purpose AI (GPAI) Models
GPAI models (foundation models like GPT, Claude, Gemini) face specific obligations:
All GPAI providers must:
- Maintain and make available technical documentation
- Provide information and documentation to downstream providers
- Comply with EU copyright law
- Publish a summary of training data content
GPAI models with systemic risk (>10^25 FLOPs threshold) must additionally:
- Perform model evaluations and adversarial testing
- Track, document, and report serious incidents
- Ensure adequate cybersecurity protection
- Report energy consumption
Code of Practice: The European Commission has established Codes of Practice for GPAI providers. These are voluntary but provide a safe harbor for demonstrating compliance.
Penalties
| Violation | Maximum Fine |
|---|---|
| Prohibited AI practices | EUR 35 million or 7% of global annual turnover |
| High-risk AI system non-compliance | EUR 15 million or 3% of global annual turnover |
| Incorrect information to authorities | EUR 7.5 million or 1.5% of global annual turnover |
For SMEs and startups, fines are proportionally reduced.
AI Literacy (Article 4) — Already Active
As of February 2, 2025, Article 4 requires:
“Providers and deployers of AI systems shall take measures to ensure, to the best extent possible, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf.”
This is an immediate obligation. Organizations using AI development tools must ensure their development teams have adequate AI literacy — understanding of AI capabilities, limitations, risks, and appropriate use.
Implications for Development Organizations
Even organizations not developing AI products but using AI coding assistants are affected:
- AI literacy training for all developers using AI tools (Article 4 — already active)
- Data handling awareness for any data processed by AI tools used in EU market products
- Transparency requirements if the organization deploys AI-powered products
- Documentation requirements if AI is used in high-risk application domains
7. NIST AI RMF 1.0
Overview
The NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0), published in January 2023, provides a voluntary framework for managing risks associated with AI systems. Unlike the EU AI Act (regulatory), the AI RMF is guidance-based but is increasingly referenced in procurement requirements, contracts, and organizational policies.
Four Core Functions
GOVERN
Cultivate and implement a culture of risk management within organizations designing, developing, deploying, evaluating, or acquiring AI systems.
19 subcategories covering:
- Organizational policies for AI risk management
- Roles and responsibilities for AI governance
- Workforce diversity, equity, inclusion, and accessibility
- Organizational risk tolerance for AI
- Processes for mapping, measuring, and managing AI risks
- Integration with broader enterprise risk management
Application to AI development tools: Organizations must establish governance policies for AI tools used in development, including acceptable use policies, data handling requirements, and risk tolerance for AI-generated code.
MAP
Contextualizing risks related to an AI system through understanding its context of use, potential harms, and stakeholders.
Categories covering:
- Intended purpose, context of use, and benefits
- Interdisciplinary expertise in AI risk mapping
- AI system categorization and impact assessment
- Risks and benefits mapping for all stakeholders
- Organizational risk tolerance for specific AI applications
Application to AI development tools: Map the specific risks of each AI tool used in development — what data it accesses, what actions it can take, what outputs it produces, who is affected by those outputs, and what the consequences of AI errors are.
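The MAP dimensions above lend themselves to a simple structured register per tool. The entry below is hypothetical throughout — tool name, risk values, and the triage heuristic are all assumptions for illustration:

```python
# Hypothetical risk-mapping entry for one AI development tool, following
# the MAP dimensions: data accessed, actions, outputs, affected parties,
# and consequences of error.
ai_tool_risk_map = {
    "tool": "example-coding-assistant",      # hypothetical tool
    "data_accessed": ["source code", "config files (secrets risk)"],
    "actions": ["suggests code", "edits files on approval"],
    "outputs": ["code completions", "refactoring diffs"],
    "affected_parties": ["developers", "downstream users of shipped code"],
    "error_consequences": ["insecure code merged", "license contamination"],
    "risk_tier": "moderate",                 # assumed organizational scale
}

def highest_priority_risks(risk_map):
    """Naive triage sketch: security-flavored consequences float to the top."""
    keywords = ("insecure", "secret", "leak")
    return [c for c in risk_map["error_consequences"]
            if any(k in c.lower() for k in keywords)]
```

Even this crude structure makes the MEASURE and MANAGE functions actionable: each mapped consequence becomes something to measure and assign a control to.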
MEASURE
Employ quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, benchmark, and monitor AI risk and related impacts.
Categories covering:
- AI system performance metrics
- Trustworthiness metrics (accuracy, fairness, security, resilience, privacy, transparency)
- AI risk assessment methodologies
- Evaluation of AI system behavior in deployment
- Tracking of known risks and emergent risks
Application to AI development tools: Measure the quality and security of AI-generated code (defect rates, vulnerability rates compared to human-written code), AI tool accuracy for security recommendations, and false positive/negative rates for AI security scanning tools.
MANAGE
Allocate risk management resources to mapped and measured risks on a regular basis.
Categories covering:
- Risk response planning and implementation
- Mechanisms for continuous improvement
- Pre-deployment risk management
- Post-deployment monitoring and response
- Documentation and accountability
Application to AI development tools: Manage identified risks through controls (code review requirements for AI-generated code, data classification restrictions, tool access controls), monitor for emerging risks, and continuously improve AI governance based on experience and incidents.
AI RMF for Tool Users (Not Just Tool Builders)
A critical distinction: the AI RMF applies not just to organizations building AI systems but also to organizations deploying and using AI systems. A development team using GitHub Copilot, Claude Code, or Cursor is deploying AI and must manage the associated risks. The GOVERN function’s organizational policies, the MAP function’s risk identification, the MEASURE function’s performance monitoring, and the MANAGE function’s risk response all apply to AI tool usage in development.
8. NIST SP 800-218A — SSDF Supplement for AI/ML
Overview
NIST SP 800-218A extends the Secure Software Development Framework (SSDF, covered in Module 1.1) to address the unique security challenges of AI and ML systems. This supplement adds AI/ML-specific tasks to existing SSDF practices.
New Practice: PW.3 — Training Data Integrity
The most significant addition is explicit requirements for training data integrity:
- Data provenance: Maintain records of where training data originated, how it was collected, and what transformations were applied
- Data validation: Verify training data quality, accuracy, and absence of poisoning
- Data access controls: Protect training data from unauthorized modification
- Data retention: Maintain training data for reproducibility and audit
- Bias assessment: Evaluate training data for biases that could affect AI system behavior
Data Provenance Requirements
SP 800-218A requires organizations to maintain data provenance records including:
- Origin of data (source, collection method, date)
- Data transformations (cleaning, normalization, augmentation)
- Data labeling process (who labeled, quality control)
- Data splits (training, validation, test)
- Data version control (changes over time)
- Data access logs (who accessed, when, for what purpose)
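The record types above can be combined into one provenance schema per dataset version. SP 800-218A prescribes the information, not a format, so the field names below are assumptions:

```python
from dataclasses import dataclass, field

# Sketch of a data provenance record covering the dimensions listed above.
# Field names are illustrative; the standard mandates the content, not a schema.
@dataclass
class DataProvenanceRecord:
    source: str                 # origin and collection method
    collected_on: str           # ISO date of collection
    transformations: list = field(default_factory=list)  # cleaning, augmentation
    labeling: dict = field(default_factory=dict)         # who labeled, QC process
    split: str = "training"     # training / validation / test
    version: str = "v1"         # dataset version identifier
    access_log: list = field(default_factory=list)

    def record_access(self, who, when, purpose):
        self.access_log.append({"who": who, "when": when, "purpose": purpose})
```

Keeping the access log on the record itself ties the "who accessed, when, for what purpose" requirement to the dataset version it concerns.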
Implications for Development
For organizations using AI development tools, SP 800-218A implies:
- Understanding the training data provenance of AI models used in development (what code was the model trained on?)
- Ensuring that organizational code used for fine-tuning AI models is properly governed
- Protecting organizational training data and fine-tuning data from poisoning
- Maintaining records of AI model versions used in development for reproducibility
9. ISO/IEC 42001 — AI Management System Standard
Overview
ISO/IEC 42001:2023 is the first international standard for Artificial Intelligence Management Systems (AIMS). It provides a structured approach for organizations to manage AI-related risks and opportunities. Published in December 2023, it is rapidly becoming the benchmark for AI governance.
10 Clauses
Like other ISO management system standards (27001, 9001), ISO 42001 follows the Harmonized Structure (Annex SL):
1. Scope — Defines the standard’s applicability
2. Normative References — Referenced standards and documents
3. Terms and Definitions — Key terminology (aligned with ISO/IEC 22989)
4. Context of the Organization — Understanding the organization and its needs
5. Leadership — Top management commitment, policy, roles and responsibilities
6. Planning — AI risk assessment, AI objectives, planning of changes
7. Support — Resources, competence, awareness, communication, documented information
8. Operation — AI system impact assessment, AI system lifecycle, data management, third-party and customer relationships
9. Performance Evaluation — Monitoring, measurement, analysis, evaluation, internal audit, management review
10. Improvement — Nonconformity, corrective action, continual improvement
38 Annex A Controls
ISO 42001 Annex A provides 38 controls organized into the following control domains:
A.2 — AI Policies (3 controls)
- AI policy establishment
- AI policy communication
- AI policy review
A.3 — Internal Organization (2 controls)
- AI roles and responsibilities
- AI reporting
A.4 — Resources for AI Systems (5 controls)
- Computing resources
- Data resources and management
- AI system tooling
- AI system components
- AI system documentation
A.5 — Assessing Impacts of AI Systems (4 controls)
- AI impact assessment methodology
- Impact assessment execution
- Impact assessment documentation
- Impact assessment review
A.6 — AI System Lifecycle (8 controls)
- AI system design and development
- Data management for AI
- AI model management
- AI system verification and validation
- AI system deployment
- AI system operation and monitoring
- AI system retirement
- AI supply chain management
A.7 — Data for AI Systems (5 controls)
- Data quality
- Data provenance
- Data preparation
- Data labeling
- Data protection
A.8 — Information for Interested Parties (4 controls)
- Transparency about AI use
- User interaction with AI systems
- AI system documentation for users
- AI system accessibility
A.9 — Use of AI Systems (3 controls)
- Responsible use guidelines
- Monitoring AI system use
- Handling AI system misuse
A.10 — Third-Party and Customer Relationships (4 controls)
- Third-party AI provider management
- AI-related customer communication
- AI-related supply chain management
- Customer AI system monitoring
AIMS Requirements
Organizations seeking ISO 42001 certification must:
- Establish, implement, maintain, and continually improve an AIMS
- Conduct AI risk assessments considering both organizational and societal risks
- Define AI objectives and measurement criteria
- Implement controls from Annex A (or justify exclusions)
- Conduct internal audits and management reviews
- Pursue continuous improvement
Industry Adoption: First AI Coding Assistant Certification
Augment Code became the first AI coding assistant to achieve ISO 42001 certification, demonstrating that certification is achievable and that the market is moving toward requiring it. This certification provides independently verified assurance of:
- AI governance policies and processes
- Risk management for AI systems
- Data management practices
- Transparency about AI capabilities and limitations
- Continuous monitoring and improvement
Organizations selecting AI development tools should consider ISO 42001 certification as a selection criterion.
10. Compliance Evidence Automation
Policy-as-Code
Instead of maintaining security policies as static documents that drift from implementation, policy-as-code encodes policies as executable rules that are enforced automatically:
Open Policy Agent (OPA): Define access control, admission control, and security policies as Rego rules that are evaluated at decision points (API gateways, Kubernetes admission controllers, CI/CD pipelines).
Sentinel (HashiCorp): Define infrastructure policies that are enforced during Terraform plan/apply, preventing non-compliant infrastructure from being provisioned.
Checkov / tfsec / kics: Encode security policies as configuration scanning rules that run in CI/CD pipelines.
Custom policy engines: Organizations can build policy-as-code engines that encode their specific security policies (from the 10 governance documents in Module 1.1) as automated checks.
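A custom engine can be as simple as a list of named predicates evaluated as a CI gate. The two rules below are example policies, not mandated ones, and the change-record fields are assumptions:

```python
# Minimal policy-as-code sketch: each policy is a (name, predicate) pair
# over a change record; the gate fails if any predicate is violated.
# Rule names and record fields are illustrative assumptions.
POLICIES = [
    ("ai-code-needs-review",
     lambda change: not change.get("ai_generated")
                    or change.get("security_reviewed")),
    ("no-direct-push-to-main",
     lambda change: change.get("via_pull_request", False)),
]

def evaluate(change: dict) -> list:
    """Return names of violated policies; an empty list means the gate passes."""
    return [name for name, rule in POLICIES if not rule(change)]
```

In a pipeline, a nonempty result fails the build, and the result itself is compliance evidence: a machine-readable record that the policy was enforced on that change.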
Automated Evidence Collection from Pipelines
CI/CD pipelines naturally produce compliance evidence. Organizations should systematically collect and retain this evidence:
| Pipeline Stage | Evidence Produced | Compliance Mapping |
|---|---|---|
| Source Control | Code review records, approval records, branch protection logs | SOC 2 CC8, PCI 6.5.1, ISO A.8.28 |
| Build | SAST scan results, SCA scan results, SBOM | CIS 16.12, FDA SBOM, NIST SA-11 |
| Test | DAST scan results, security test results, test coverage | CIS 16.12, CIS 16.13, PCI 6.5.3 |
| Deploy | Deployment approval records, environment configuration scans | SOC 2 CC8, CIS 16.7, NIST CM-6 |
| Runtime | Monitoring data, vulnerability scan results, incident records | CIS 16.2, SOC 2 CC3, HIPAA 164.312(b) |
| Training | LMS completion records, assessment scores, content mapping | CIS 16.9, PCI 6.2.2, NIST AT-3 |
Evidence Retention and Format
Evidence must be:
- Timestamped: Provably generated at a specific point in time
- Immutable: Cannot be modified after generation (write-once storage, cryptographic signatures)
- Accessible: Available for audit within a reasonable timeframe
- Retained: Kept for the period required by applicable regulations (typically 1–7 years)
- Mapped: Explicitly linked to the compliance requirement it satisfies
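The immutability property in particular can be made verifiable rather than merely procedural. One common technique is a hash chain: each evidence record is serialized canonically and linked to the previous record's authentication code, so any later modification breaks every subsequent link. This is a sketch under stated assumptions: `SIGNING_KEY` is a placeholder (a real system would keep the key in an HSM or KMS), and the record fields are illustrative.

```python
import hashlib
import hmac
import json

# Placeholder key for illustration only; use an HSM- or KMS-held key in practice.
SIGNING_KEY = b"replace-with-kms-managed-key"

def append(chain, record):
    """Append a record, linking it to the previous entry via keyed hash."""
    prev = chain[-1]["link"] if chain else "genesis"
    payload = json.dumps(record, sort_keys=True) + prev
    link = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    chain.append({"record": record, "link": link})
    return chain

def verify(chain):
    """Recompute every link; any tampered record invalidates the chain."""
    prev = "genesis"
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True) + prev
        expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
        if entry["link"] != expected:
            return False
        prev = entry["link"]
    return True

chain = []
append(chain, {"stage": "build", "artifact": "sbom.json", "requirement": "FDA SBOM"})
append(chain, {"stage": "test", "artifact": "dast.json", "requirement": "PCI 6.5.3"})
print(verify(chain))   # True  — chain intact
chain[0]["record"]["artifact"] = "tampered.json"
print(verify(chain))   # False — tampering detected
```

Note how the example record also carries its compliance mapping, satisfying the "Mapped" property alongside tamper evidence.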
11. Summary Compliance Matrix
| Regulation | Training Frequency | Content Requirements | Assessment Required | Evidence |
|---|---|---|---|---|
| PCI-DSS 4.0 | Annual (minimum) | Language-specific, role-specific, secure design + coding, tool usage, attack types for application context | Yes — comprehension verification | Training records, content mapping, assessment results, tool training materials |
| SOC 2 | Per organizational policy (typically annual) | Security awareness, role-specific security, policy awareness | Not explicitly required but strongly recommended | Training completion records, policy distribution records, organizational chart |
| FDA | Per QMS requirements | Secure development practices, code review, SAST/DAST, threat modeling, SBOM management, vulnerability disclosure | Per QMS | QMS records, training records, security test results, SBOMs, vulnerability disclosure policy |
| HIPAA | Periodic (typically annual) | Security awareness, ePHI handling, access controls, incident reporting | Not explicitly required | Training records, policy acknowledgments |
| FedRAMP / 800-53 | Annual (AT-2), role-based (AT-3) | Security awareness (AT-2), role-based security (AT-3), developer security (SA-16) | Yes — knowledge testing | Training records (AT-4), assessment results, role-based content documentation |
| EU AI Act | Ongoing (Article 4) | AI literacy — capabilities, limitations, risks, appropriate use | Not specified (measures to ensure “sufficient level”) | Training materials, completion records, demonstrated organizational AI literacy |
| NIST AI RMF | Per organizational policy | AI risk management awareness, AI governance, responsible AI use | Recommended | Training records, governance documentation, risk assessment records |
| ISO 42001 | Per AIMS requirements | AI management system awareness, AI risk, AI policy, AI-specific competencies | Required for relevant roles | Competence records (Clause 7.2), training records, awareness records (Clause 7.3) |
Summary
The regulatory landscape for software development is increasingly complex, and the introduction of AI tools adds new compliance dimensions to every framework. Key takeaways:
- PCI-DSS 6.2.2 requires specificity: Generic OWASP Top 10 training is not sufficient. Language-specific, role-specific, tool-specific training with assessment is required.
- SOC 2 auditors are asking about AI: Proactively document AI governance as part of your control environment.
- FDA mandates SSDLC for medical devices: SBOM, threat modeling, penetration testing, and coordinated vulnerability disclosure are all mandatory.
- HIPAA restricts AI tool usage with ePHI: No AI tool without a BAA may process ePHI.
- EU AI Act is already partially active: AI literacy requirements (Article 4) apply now. Full enforcement comes in August 2026.
- NIST AI RMF applies to tool users, not just builders: Organizations using AI development tools must govern, map, measure, and manage AI risks.
- ISO 42001 is becoming a selection criterion: AI tool procurement should consider ISO 42001 certification status.
- Automate evidence collection: CI/CD pipelines naturally produce compliance evidence — collect it systematically.
Organizations operating under multiple regulatory frameworks should use the compliance matrix to identify common requirements and build unified training programs that satisfy all applicable regulations simultaneously.
Assessment Questions
- Your organization uses GitHub Copilot for Java development of a PCI-DSS in-scope payment application. Describe the specific training requirements under PCI-DSS 4.0 Requirement 6.2.2, including how AI tool usage must be addressed.
- Explain why an annual one-hour OWASP Top 10 webinar does not satisfy PCI-DSS 6.2.2. What specific elements are missing?
- A healthcare organization wants to use an AI coding assistant for developing an ePHI-processing application. What HIPAA requirements apply to the AI tool itself? What must the organization verify before proceeding?
- The EU AI Act’s Article 4 (AI Literacy) is already in effect. What concrete actions must a development organization take today to comply?
- Using the NIST AI RMF’s four functions (Govern, Map, Measure, Manage), describe how an organization should manage the risks of adopting Claude Code as a development tool.
- Design a compliance evidence automation strategy for a CI/CD pipeline that must satisfy PCI-DSS, SOC 2, and CIS CG16 simultaneously. What evidence is collected at each pipeline stage?
References
- PCI-DSS v4.0: Requirement 6.2 — Bespoke and Custom Software Development
- AICPA: SOC 2 Trust Services Criteria (2017, updated)
- FDA: Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions (Final Guidance, 2025)
- HIPAA Security Rule: 45 CFR Part 164, Subpart C
- NIST SP 800-53 Rev 5: Security and Privacy Controls
- EU Artificial Intelligence Act (Regulation (EU) 2024/1689)
- NIST AI 100-1: Artificial Intelligence Risk Management Framework (AI RMF 1.0)
- NIST SP 800-218A: Secure Software Development Practices for Generative AI and Dual-Use Foundation Models
- ISO/IEC 42001:2023: Artificial Intelligence Management System
- NIST SP 800-218: Secure Software Development Framework (SSDF) v1.1
Study Guide
Key Takeaways
- PCI-DSS 6.2.2 demands specificity — Language-specific, role-specific training covering secure design, coding, and security tool usage; generic OWASP Top 10 presentations are insufficient.
- SOC 2 auditors now ask about AI tools — Proactively document AI governance as part of CC1 (Control Environment) to avoid audit findings.
- HIPAA requires a BAA before AI tools touch ePHI — Very few AI providers currently offer Business Associate Agreements.
- EU AI Act Article 4 (AI Literacy) is already active — As of February 2, 2025, organizations using AI development tools must ensure staff AI literacy.
- NIST AI RMF applies to tool users, not just builders — Govern, Map, Measure, and Manage functions apply to organizations deploying AI coding assistants.
- ISO 42001 is the first AI management system standard — 38 Annex A controls across 10 domains; becoming a procurement criterion for AI tools.
- Compliance evidence should be automated — CI/CD pipelines naturally produce evidence (SAST results, SBOMs, approval records) that maps to multiple compliance frameworks.
Important Definitions
| Term | Definition |
|---|---|
| PCI-DSS 6.2.2 | Payment Card Industry requirement for annual, language-specific, role-specific developer security training |
| SOC 2 CC8 | Trust Services Criteria for Change Management — governs code change processes |
| DPIA | Data Protection Impact Assessment — mandatory under GDPR Article 35 for high-risk processing |
| NIST AI RMF | AI Risk Management Framework with four functions: Govern, Map, Measure, Manage |
| SP 800-218A | NIST SSDF supplement for AI/ML adding training data integrity requirements |
| ISO 42001 | First international standard for AI Management Systems with 38 Annex A controls |
| EU AI Act | World’s first comprehensive AI regulation with risk-based classification and penalties up to EUR 35M or 7% turnover |
| Policy-as-Code | Encoding compliance policies as executable rules enforced automatically in CI/CD pipelines |
Quick Reference
- Framework/Process: Eight regulatory frameworks (PCI-DSS, SOC 2, FDA, HIPAA, FedRAMP/800-53, EU AI Act, NIST AI RMF, ISO 42001) mapped to SSDLC activities
- Key Numbers: EUR 35M or 7% turnover max fine (EU AI Act); February 2, 2025 (AI Literacy effective); 38 controls in ISO 42001; SA-11(1) through SA-11(8) NIST testing enhancements
- Common Pitfalls: Using generic OWASP training for PCI compliance; sharing ePHI with AI tools lacking BAAs; ignoring EU AI Act obligations because “we don’t build AI products”; treating compliance as annual rather than continuous
Review Questions
- Why does a one-hour annual OWASP Top 10 webinar fail to satisfy PCI-DSS 6.2.2, and what specific elements are missing?
- How do the four functions of the NIST AI RMF (Govern, Map, Measure, Manage) apply to an organization using GitHub Copilot?
- What concrete actions must a development organization take today to comply with EU AI Act Article 4 (AI Literacy)?
- Design a compliance evidence automation strategy for a CI/CD pipeline that must satisfy PCI-DSS, SOC 2, and CIS CG16 simultaneously.
- How would you determine whether a specific AI tool can be used in a HIPAA-regulated development environment?