6.7 — Secure Development Environment
Introduction
The software development environment is where code is born. Developer workstations, IDEs, source code repositories, build systems, and the tools that connect them — these form the foundation on which all application security is built. If this foundation is compromised, every downstream control is potentially undermined. An attacker who compromises a developer workstation can inject malicious code, steal credentials, exfiltrate source code, and pivot into production systems. An attacker who compromises the source code repository can modify code that will be built, tested, and deployed through the trusted pipeline.
NIST SSDF practices PS.1 and PS.2 require organizations to protect all forms of code from unauthorized access and tampering. Microsoft’s SDL Practice 6 mandates securing the engineering environment. CIS Control 16.1 requires a secure application development process, and CIS Control 16.8 requires separation of production and non-production systems — including the development environment itself.
With the rapid adoption of AI coding assistants in 2025-2026, the development environment has gained a new dimension of risk. AI tools that read project files, execute commands, and make network requests introduce data exposure, prompt injection, and supply chain vectors that did not exist two years ago. This module covers the full spectrum of development environment security, with particular depth on AI tool permission controls and secure AI integration.
NIST SSDF and Microsoft SDL Alignment
NIST SSDF PS.1: Protect All Forms of Code
- Protect source code repositories from unauthorized access.
- Use access controls, encryption, and monitoring.
- Ensure that code changes are authorized, reviewed, and tracked.
NIST SSDF PS.2: Protect All Forms of Code from Tampering
- Verify the integrity of code at each stage (commit, build, test, deploy).
- Use cryptographic signing and verification.
- Detect unauthorized modifications.
Microsoft SDL Practice 6: Secure the Engineering Environment
- Harden developer workstations and build systems.
- Enforce MFA on all development tools and services.
- Monitor for anomalous activity in the engineering environment.
- Protect the integrity of the build and release pipeline.
Developer Workstation Security
Developer workstations are high-value targets because they contain source code, credentials, access tokens, and direct paths to internal systems.
Full Disk Encryption
macOS (FileVault):
- Full disk encryption using XTS-AES-128.
- Enforced through MDM (Mobile Device Management) configuration profiles.
- Recovery key escrowed with the organization.
Windows (BitLocker):
- Full disk encryption using AES-256.
- TPM-backed key storage for transparent operation.
- Recovery key escrowed in Active Directory or Intune.
Linux (LUKS):
- Linux Unified Key Setup for full disk encryption.
- Configured at OS installation time.
Disk encryption protects against physical theft. If a developer laptop is stolen from a hotel room, airport, or coffee shop, disk encryption prevents the thief from accessing source code, credentials, and tokens on the device.
OS and Application Patching
- Automated patch management: OS patches applied within 14 days of release (7 days for critical security patches).
- Application updates: Development tools (IDEs, Git clients, terminal emulators, container runtimes) kept current. Many development tool vulnerabilities (VS Code extension vulnerabilities, Git client path traversal) have been exploited in the wild.
- Zero-day response: For actively exploited vulnerabilities, patching SLA is 48 hours or immediate compensating controls.
Endpoint Detection and Response (EDR)
- Continuous monitoring: EDR agents (CrowdStrike, SentinelOne, Microsoft Defender for Endpoint, Carbon Black) monitor for malicious activity, process injection, credential theft, lateral movement.
- Developer-aware policies: EDR policies must account for legitimate developer activities (compilers, debuggers, network tools) without creating excessive false positives that lead developers to disable the tool.
- Tampering protection: EDR agents should resist tampering — a compromised process should not be able to disable the EDR.
Screen Lock Policies
- Automatic lock after inactivity: 5 minutes maximum. Enforced through MDM.
- Lock on lid close: Immediate screen lock when laptop lid is closed.
- Lock on Bluetooth disconnect: If using a Bluetooth device for presence, lock when it disconnects (macOS: Near Lock, Windows: Dynamic Lock).
USB Device Restrictions
- Block unauthorized USB mass storage: Prevent unauthorized USB drives from connecting to developer workstations. Enforced through MDM or endpoint management.
- Allow authorized peripherals: Keyboards, mice, displays, audio devices — whitelisted by vendor/device class.
- USB Rubber Ducky defense: EDR monitors for rapid keystroke injection from USB devices.
Approved Software Inventories
- Application allowlisting: Only approved software runs on developer workstations. Managed through MDM (Intune, Jamf, Kandji).
- Developer tool catalog: An organizational catalog of approved IDEs, extensions, CLI tools, container runtimes, and AI assistants. Tools not in the catalog require a request and security review before installation.
- Browser extension policies: Chrome/Edge/Firefox managed policies restrict which extensions can be installed.
MFA on Development Tools
Every tool in the development chain is a potential entry point. MFA is mandatory on all of them.
Git Hosting
- GitHub: Enforce organization-wide MFA requirement. Use hardware security keys (WebAuthn/FIDO2) or TOTP. SMS-based MFA is insufficient for this context.
- GitLab: Enforce group-level MFA requirement.
- Bitbucket: Enforce workspace-level MFA through Atlassian Access.
- SSH keys: Protected with passphrases. Consider SSH certificate authorities (Smallstep, Teleport) for short-lived SSH certificates instead of long-lived keys.
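Passphrase protection on SSH keys can be audited mechanically. The sketch below (a hedged illustration, not an official tool) checks whether a private key is passphrase-protected: legacy PEM keys carry a `Proc-Type: 4,ENCRYPTED` header, while the modern openssh-key-v1 format records a cipher name that is "none" for unprotected keys. The `id_*` file-name convention used by the directory scan is an assumption.

```python
import base64
import struct
from pathlib import Path

def is_encrypted(key_text: str) -> bool:
    """Return True if a private key appears to be passphrase-protected.

    Handles the legacy PEM format (Proc-Type: 4,ENCRYPTED header) and the
    modern openssh-key-v1 format, whose cipher name is "none" when the key
    has no passphrase.
    """
    if "Proc-Type: 4,ENCRYPTED" in key_text:
        return True
    if "BEGIN OPENSSH PRIVATE KEY" in key_text:
        body = "".join(l for l in key_text.splitlines() if "-----" not in l)
        blob = base64.b64decode(body)
        magic = b"openssh-key-v1\x00"
        if not blob.startswith(magic):
            return False
        off = len(magic)
        (cipher_len,) = struct.unpack(">I", blob[off:off + 4])
        cipher = blob[off + 4:off + 4 + cipher_len].decode()
        return cipher != "none"
    return False  # unknown format: flag separately in a real audit

def unprotected_keys(ssh_dir: Path) -> list[Path]:
    """List private key files under ssh_dir that lack a passphrase."""
    findings = []
    for path in sorted(ssh_dir.glob("id_*")):
        if path.suffix == ".pub":
            continue
        text = path.read_text(errors="ignore")
        if "PRIVATE KEY" in text and not is_encrypted(text):
            findings.append(path)
    return findings
```

A check like this can run as part of endpoint compliance scanning; short-lived SSH certificates remove the problem entirely by eliminating long-lived key files.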
CI/CD Platforms
- MFA + SSO: Integrate CI/CD platforms (GitHub Actions, GitLab CI, Jenkins, CircleCI) with the organization’s SSO provider (Okta, Azure AD, Google Workspace).
- Session management: Short session timeouts, re-authentication for sensitive operations (deployment approvals, secret management).
Cloud Provider Consoles
- MFA mandatory: All cloud provider access (AWS, Azure, GCP) requires MFA. No exceptions.
- Privileged operations: Additional MFA challenge for destructive operations (delete resources, modify IAM, change billing).
Artifact Registries
- Authentication: All artifact registry access (push and pull) requires authentication. Anonymous pull should be disabled for private registries.
- Token-based: Use scoped tokens or OIDC federation rather than username/password.
Secrets Management Tools
- Vault access: MFA required for all Vault access, whether through UI, CLI, or API.
- Privileged operations: Admin operations (create policies, manage auth methods) require additional approval.
Source Code Repository Access Control
Least Privilege
- Repository-level access: Developers access only the repositories they need for their current work. Not “all repos in the org.”
- Role-based access: Read (can view and clone), Write (can push to branches), Maintain (can manage settings), Admin (full control). Most developers need Write access to their team’s repos and Read access to shared libraries. Few need Admin.
- Team-based management: Access managed through teams/groups, not individual user assignments. When someone joins a team, they get the team’s access. When they leave, access is automatically revoked.
Regular Access Reviews
- Quarterly minimum: Review who has access to which repositories at least every quarter.
- Focus areas: Former team members who still have access, contractors whose engagement has ended, service accounts with unused permissions.
- Automated tools: GitHub’s access review features, GitLab’s member management, or dedicated tools like Vanta, Drata, or custom scripts that compare HR systems with repository access lists.
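The custom-script approach mentioned above reduces to a set difference: pull the active roster from the HR system, pull collaborator lists from the Git host's API, and diff them. A minimal sketch, with both data sources stubbed as in-memory collections:

```python
def access_review(hr_active: set[str], repo_access: dict[str, set[str]]) -> dict[str, set[str]]:
    """Flag repository access held by identities missing from the HR roster.

    hr_active: usernames of currently employed staff (from the HR system).
    repo_access: repo name -> usernames with any access (from the Git host API).
    Returns repo -> stale usernames that should be revoked or justified.
    """
    findings = {}
    for repo, members in repo_access.items():
        stale = members - hr_active
        if stale:
            findings[repo] = stale
    return findings
```

Service accounts need their own allowlist in practice, since they never appear in HR data.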
Immediate Access Revocation
- Role change: When a developer moves to a different team, old team access is removed on the day of transition.
- Departure: When a developer leaves the organization, all repository access is revoked immediately — before the exit interview. SSH keys deauthorized, personal access tokens revoked, SSO session terminated.
- Automation: HR system integration that triggers access revocation automatically on employment status change.
Audit Logging
- All access logged: Clone, push, pull request, merge, branch creation/deletion, settings changes — all logged with identity, timestamp, and action.
- Admin actions: Repository creation, deletion, visibility changes, branch protection rule modifications — all logged and reviewed.
- Alerting: Alert on anomalous access patterns — bulk repository cloning (data exfiltration), access from unusual geographies, access outside business hours to sensitive repositories.
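Bulk-clone detection from the audit log can be sketched as a sliding-window count of distinct repositories per actor. The threshold and window below are illustrative and would be tuned per organization; in production the event feed comes from the Git host's audit-log API.

```python
from datetime import datetime, timedelta

def bulk_clone_alerts(events, threshold=10, window=timedelta(hours=1)):
    """Flag actors who clone many distinct repositories in a short window.

    events: (actor, repo, timestamp) tuples for clone audit events, assumed
    sorted by timestamp. Emits an alert each time an actor's distinct-repo
    count within the window reaches the threshold.
    """
    alerts = []
    history: dict[str, list[tuple[datetime, str]]] = {}
    for actor, repo, ts in events:
        # Keep only this actor's events inside the sliding window.
        recent = [(t, r) for t, r in history.get(actor, []) if ts - t <= window]
        recent.append((ts, repo))
        history[actor] = recent
        distinct = {r for _, r in recent}
        if len(distinct) >= threshold:
            alerts.append((actor, ts, len(distinct)))
    return alerts
```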
IP Restrictions
- Sensitive repositories: High-sensitivity repos (infrastructure code, security tooling, credential configuration) can be restricted to specific IP ranges (corporate VPN, office IPs).
- GitHub Enterprise: IP allow lists at the organization level.
- GitLab: IP restriction at the group level.
Build Environment Security
Isolated Build Environments
- Separate from development: Build systems run on dedicated infrastructure, not on developer workstations.
- Separate from production: Build systems cannot reach production. They output artifacts to a registry; deployment systems pull from the registry to production.
- Network segmentation: Build agents in a dedicated network segment with restricted egress.
Ephemeral Build Agents
- Destroyed after each build: No persistent state between builds. No leftover credentials, artifacts, or modified configurations.
- Fresh from a known-good image: Each build starts from an immutable, scanned, hardened base image.
- Implementation: Container-based runners (GitHub Actions, GitLab runners on Kubernetes), cloud auto-scaling groups with terminate-on-complete.
Hardened Build Images
- Minimal tooling: Only the tools needed for the build. No SSH server, no general-purpose utilities.
- Scanned: Build images scanned for vulnerabilities on a regular schedule and after any modification.
- Version-controlled: Build image definitions (Dockerfile, Packer template) in version control with the same review process as application code.
No Internet Access from Build Agents
- Artifact proxy: Build agents pull dependencies from a private artifact proxy/cache, not directly from the internet.
- Egress filtering: Only the artifact proxy, container registry, and CI/CD control plane are reachable from build agents.
- Why: If a build agent can reach the internet, a compromised build step can download and execute arbitrary payloads, exfiltrate source code, or communicate with command-and-control servers.
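Egress filtering can be verified from inside the build agent itself. A hedged sketch of an early pipeline step that probes TCP endpoints and reports anything reachable that should not be (the host names in real use would come from the organization's egress allowlist; none are assumed here):

```python
import socket

def egress_violations(checks, timeout=2.0):
    """Probe TCP endpoints from a build agent and report policy violations.

    checks: (host, port, expected_reachable) triples. The artifact proxy,
    container registry, and CI control plane should be reachable; everything
    else should not. Returns endpoints that were reachable unexpectedly.
    """
    violations = []
    for host, port, expected in checks:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                reachable = True
        except OSError:
            reachable = False
        if reachable and not expected:
            violations.append((host, port))
    return violations
```

Failing the build when this returns a non-empty list turns the network policy into a continuously tested control rather than a one-time firewall configuration.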
Build Service Accounts
- Minimal permissions: Build service accounts can read source code, write to artifact registries, and read from secrets managers — nothing more.
- Per-pipeline scoping: Each pipeline has its own service account with permissions scoped to its specific needs.
- No human-usable credentials: Build service accounts authenticate through platform identity (OIDC) or machine certificates, not username/password.
AI Tool Permission Controls
AI coding assistants are powerful tools that require careful permission management. Each tool provides different control mechanisms with different granularity.
Claude Code: Three-Tier Permission System
Claude Code provides the most granular permission model available among current AI coding assistants:
Deny rules (highest precedence): Operations that are always blocked. Cannot be overridden by ask or allow rules.
{
"deny": [
"Read(.env*)",
"Read(*.key)",
"Read(*.pem)",
"Read(config/secrets/**)",
"Read(credentials.*)",
"Read(.aws/**)",
"Bash(rm -rf *)",
"Bash(curl * | bash)",
"Bash(*> /dev/sd*)",
"Bash(chmod 777 *)",
"Bash(ssh *)",
"WebFetch(*.internal.company.com/*)"
]
}
Ask rules: Operations that require manual confirmation from the developer before execution. This is the default for potentially risky operations.
{
"ask": [
"Bash(git push *)",
"Bash(docker *)",
"Bash(npm publish *)",
"Write(Dockerfile)",
"Write(*.yml)",
"Write(*.yaml)"
]
}
Allow rules (lowest precedence): Operations that proceed without confirmation. Used for safe, routine operations.
{
"allow": [
"Read(**/*.ts)",
"Read(**/*.py)",
"Read(**/*.go)",
"Write(src/**)",
"Write(tests/**)",
"Bash(npm test)",
"Bash(npm run lint)",
"Bash(pytest *)"
]
}
Control levels:
- File-level: Block reading, writing, or modifying specific files or patterns.
- Command-level: Block execution of specific shell commands or command patterns.
- Network-level: Block fetch requests to specific URLs or domains.
Settings hierarchy: Enterprise managed settings take precedence over project-level settings (.claude/settings.json), which in turn take precedence over user-level settings. Deny rules at any level take precedence over ask and allow rules.
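The precedence logic of the three tiers can be modeled in a few lines. This is a simplified illustration of deny > ask > allow resolution only, not Claude Code's actual matching engine; the patterns and the conservative "ask" fallback for unmatched operations are assumptions for the sketch.

```python
from fnmatch import fnmatchcase

# Illustrative rules in the Tool(pattern) style shown above.
RULES = {
    "deny":  ["Read(.env*)", "Bash(curl * | bash)"],
    "ask":   ["Bash(git push *)", "Write(*.yml)"],
    "allow": ["Read(**/*.py)", "Bash(npm test)"],
}

def decide(operation: str, rules=RULES) -> str:
    """Resolve an operation against deny > ask > allow tiers."""
    for tier in ("deny", "ask", "allow"):  # highest precedence first
        if any(fnmatchcase(operation, p) for p in rules[tier]):
            return tier
    return "ask"  # conservative default for unmatched operations
```

The key property this models: a deny match wins even if an allow pattern also matches, so broad allow rules cannot accidentally expose a denied file.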
GitHub Copilot: Organization Content Exclusions
GitHub Copilot (Business and Enterprise tiers) provides centralized content exclusion:
Configuration: Through the GitHub web interface at the organization level (Settings > Copilot > Content exclusions).
Capabilities:
- Exclude entire repositories from Copilot context.
- Exclude specific file paths across all repositories.
- Path patterns use gitignore syntax.
# Organization-level Copilot content exclusions
- "**/.*env*"
- "**/config/secrets/**"
- "**/*.key"
- "**/*.pem"
- "**/credentials.*"
- "**/infrastructure/terraform/**"
Strengths:
- Centralized management — one configuration applies to all org members.
- Scales across large organizations without per-developer configuration.
- Managed by security team, not individual developers.
Limitations:
- File/path-based only — no command-level or network-level controls.
- No ask/confirm tier — exclusions are binary (included or excluded).
- Cannot prevent Copilot from suggesting code patterns based on its training data (only from reading excluded files).
Cursor: Privacy Mode and Workspace Trust
Cursor provides several security mechanisms:
Privacy Mode:
- When enabled, no code is stored by Cursor or used for training.
- Must be explicitly enabled in settings.
- Critical for any commercial or sensitive development.
Workspace Trust:
- Cursor prompts to verify workspace trust when opening a new project.
- Untrusted workspaces have restricted AI capabilities.
.cursorignore:
# .cursorignore — same syntax as .gitignore
.env*
*.key
*.pem
config/secrets/
credentials.*
*.p12
*.pfx
infrastructure/terraform/
Limitations compared to Claude Code:
- No command-level controls (cannot restrict which shell commands AI can execute).
- No tiered permission system (deny/ask/allow).
- Less comprehensive overall — privacy mode is the primary control.
Codex CLI: OS-Level Sandboxing
OpenAI’s Codex CLI takes a fundamentally different approach — OS-level isolation:
macOS Seatbelt:
- Uses macOS sandbox profiles to restrict file system access, network access, and system calls.
- The AI agent operates within a sandbox that the agent itself cannot modify.
Linux Landlock + seccomp:
- Landlock restricts filesystem access to specific paths.
- seccomp restricts which system calls the process can make.
- Combined, they create a robust sandbox that limits what the AI can access and what operations it can perform.
Network access disabled by default:
- Codex CLI cannot make network requests unless explicitly enabled.
- This is the strongest network isolation available — other tools restrict by URL pattern, Codex blocks all network by default.
Strengths:
- Strongest isolation available among AI coding assistants.
- Cannot be bypassed by the AI itself (enforced at OS level, not at application level).
- Network-off default prevents data exfiltration.
Limitations:
- Most restrictive — some development workflows that require network access (downloading dependencies, accessing documentation) are blocked.
- Less granular for file access (sandboxed vs. not sandboxed, rather than per-file deny/ask/allow).
Roo Code: .rooignore
Roo Code uses .rooignore with gitignore syntax:
# .rooignore
.env*
*.key
*.pem
config/secrets/
credentials.*
node_modules/
Permission-based approval gates: Roo Code implements an approval system for potentially risky operations.
Known risk: symlink bypass: Roo Code’s .rooignore can be bypassed through symbolic links. If a developer (or attacker) creates a symlink from an allowed directory to a denied file, Roo Code may follow the symlink and access the denied file. Organizations using Roo Code should:
- Monitor for symlink creation in development directories.
- Test .rooignore exclusions with symlinks to verify effectiveness.
- Consider additional controls (filesystem permissions, EDR monitoring) to compensate.
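The symlink bypass can be tested for directly: ignore checks that match on the link's path miss cases where the link's target is a protected file, so a detector must resolve each link and re-check the target. A minimal sketch (the denied-pattern list is illustrative):

```python
from pathlib import Path

def escaped_symlinks(project_root: str, denied_globs: list[str]) -> list[Path]:
    """Find symlinks whose resolved target matches a denied pattern.

    Path-based ignore files can be bypassed when a link inside an allowed
    directory points at a protected file, so each link's target is resolved
    and matched against the deny patterns.
    """
    root = Path(project_root)
    findings = []
    for path in root.rglob("*"):
        if not path.is_symlink():
            continue
        target = path.resolve()
        if any(target.match(glob) for glob in denied_globs):
            findings.append(path)
    return findings
```

Running a check like this in CI or a pre-commit hook catches both accidental and malicious symlinks before an AI tool can follow them.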
Data Retention Policies by Tool
Understanding what each tool retains and for how long is critical for compliance and risk management:
| Tool | Tier | Training Use | Retention | Notes |
|---|---|---|---|---|
| Claude | Enterprise / API | Not used for training | 30-day configurable, can be set to 0 | Enterprise agreements may specify custom retention |
| Claude | Pro / Free | May be used for training | Standard retention | Opt-out available but verify current terms |
| Copilot | Business / Enterprise | Not used for training | Immediate discard of code snippets | Telemetry data may be retained separately |
| Copilot | Individual | May be used for improvement | Variable | Check current terms |
| Cursor | Privacy Mode ON | Not used | Zero retention | Must be explicitly enabled |
| Cursor | Privacy Mode OFF | May be used | Standard retention | Default state — ensure Privacy Mode is enabled |
| Codex CLI | Default | Not used for training | No retention by default | Telemetry can be disabled |
Critical: Organizational/enterprise tiers have fundamentally different data handling than individual tiers. Verify that all developers are using the organizational plan. A single developer on a personal plan can expose organizational code to training data pipelines.
IDE Security
Extension/Plugin Vetting
IDE extensions run with the same permissions as the IDE itself. A malicious extension has full access to all open files, the terminal, and network:
- Approved extension list: Maintain an organizational list of approved IDE extensions. New extensions require security review before use.
- Review criteria: Publisher reputation, installation count, source code availability, permissions requested, update frequency, vulnerability history.
- VS Code: Manage through settings.json (extensions.allowed, extensions.blocked) or through an MDM profile.
- JetBrains: Plugin management through JetBrains Marketplace with organizational restrictions.
AI Extension Security
AI extensions deserve extra scrutiny:
- Data access: What project files does the extension read? Can it be scoped to specific directories?
- Network access: Where does the extension send data? To the vendor’s API? To third-party analytics? Can network destinations be audited?
- Execution: Can the extension execute commands or modify files? Under what conditions? With what approval?
- Update behavior: Can the extension update itself? Auto-updates can introduce new capabilities or vulnerabilities without review.
IDE Telemetry Controls
Most IDEs collect usage telemetry by default:
- VS Code: Setting telemetry.telemetryLevel to "off" disables telemetry.
- JetBrains: Settings > Appearance & Behavior > System Settings > Data Sharing.
- Cursor: Settings > Privacy > Telemetry.
For sensitive development (security tools, proprietary algorithms, financial systems), telemetry should be disabled or carefully evaluated to ensure it does not transmit code or behavioral data to the vendor.
Workspace Trust Settings
Modern IDEs implement workspace trust to protect against malicious repositories:
- VS Code Workspace Trust: When opening an untrusted folder, VS Code restricts extensions, terminal access, and task execution. Developers should not blindly trust repositories they have cloned.
- JetBrains Safe Mode: Similar concept — restricted mode when opening untrusted projects.
- Why this matters: A repository can contain .vscode/settings.json, .vscode/tasks.json, or .idea/ configuration that executes arbitrary commands when the project is opened. Workspace trust prevents this.
Development Network Security
VPN for Remote Development
- All remote development through VPN: Developers working from home, hotels, coffee shops, or airports must connect through the organizational VPN before accessing internal resources (repositories, CI/CD, artifact registries, cloud consoles).
- Split-tunnel considerations: Split tunneling (routing only internal traffic through VPN) reduces VPN load but means external traffic is unprotected. Full tunneling routes all traffic through VPN for maximum control. The choice depends on organizational risk tolerance and VPN capacity.
- Always-on VPN: For high-security environments, configure the VPN client to connect automatically and prevent disconnection.
Network Segmentation
- Development networks isolated from production: Developer VLANs or VPCs cannot reach production systems directly. All production interaction flows through the CI/CD pipeline and designated management interfaces.
- Separate networks for different security tiers: The network segment for developers working on internet-facing applications may have different controls than the segment for developers working on internal tools.
- Monitoring: Network traffic from development segments is monitored for anomalies — large data transfers, connections to unusual destinations, use of unauthorized protocols.
DNS Filtering
- Block known-malicious domains: DNS filtering (Cisco Umbrella, Cloudflare Gateway, Zscaler) blocks developer workstations from resolving known-malicious domains.
- Block categories: Malware, phishing, command-and-control, newly registered domains (commonly used for attacks).
- Developer exceptions: Some categories that are blocked for general users (developer tools, code hosting) may need to be allowed for developer workstations.
TLS Inspection
- Organizational CA: For environments that require TLS inspection, the organization deploys a trusted CA to developer workstations and inspects TLS traffic at the network boundary.
- Exceptions: Some traffic should not be inspected (banking, healthcare portals, personal authentication). Define exemption lists.
- Developer trust: Be transparent about TLS inspection. Developers who discover unexpected certificate substitution without prior disclosure lose trust in the organization’s security program.
- Alternative: For organizations that do not implement TLS inspection, use EDR-based file analysis and DNS filtering as compensating controls.
Comprehensive AI Tool Security Configuration Template
For organizations deploying AI coding assistants, here is a comprehensive configuration template:
Pre-Deployment Checklist
- Tier verification: All developers on organizational/enterprise plans. No personal plans for work use.
- Privacy settings: Privacy mode / training opt-out enabled.
- File exclusions: Standard deny patterns configured (see Module 6.4 for complete list).
- Command restrictions: Dangerous commands blocked (force push, production SSH, destructive operations).
- Network restrictions: AI tools cannot reach unauthorized endpoints.
- Data classification: Developers trained on what data can and cannot be included in AI context.
- Monitoring: Logging of AI tool interactions for audit and incident investigation.
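Parts of this checklist can be enforced automatically in CI. A hedged sketch that verifies a repository checkout carries the expected guardrail files and a baseline of deny rules (the required-pattern list is illustrative; the file names match the tools discussed in this module):

```python
import json
from pathlib import Path

# Minimal baseline for illustration; a real policy would require more.
REQUIRED_DENY = ["Read(.env*)", "Read(*.key)", "Read(*.pem)"]

def check_repo(repo: Path) -> list[str]:
    """Report missing AI-tool guardrails in a repository checkout."""
    problems = []
    settings = repo / ".claude" / "settings.json"
    if not settings.exists():
        problems.append("missing .claude/settings.json")
    else:
        deny = json.loads(settings.read_text()).get("deny", [])
        for pattern in REQUIRED_DENY:
            if pattern not in deny:
                problems.append(f"deny rule not configured: {pattern}")
    if not (repo / ".cursorignore").exists():
        problems.append("missing .cursorignore")
    return problems
```

Failing the pipeline when check_repo returns findings keeps the configuration from silently drifting after initial deployment.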
Per-Tool Configuration
Claude Code (.claude/settings.json — committed to repository):
{
"deny": [
"Read(.env*)",
"Read(*.key)",
"Read(*.pem)",
"Read(*.p12)",
"Read(*.pfx)",
"Read(config/secrets/**)",
"Read(credentials.*)",
"Read(.aws/**)",
"Read(.ssh/**)",
"Read(id_rsa*)",
"Read(id_ed25519*)",
"Bash(rm -rf /)",
"Bash(git push --force *)",
"Bash(ssh * production*)",
"Bash(curl * | bash)",
"Bash(wget * | sh)",
"Bash(*DROP TABLE*)",
"Bash(*DROP DATABASE*)",
"WebFetch(*.internal.corp.com/*)"
],
"ask": [
"Bash(git push *)",
"Bash(docker *)",
"Bash(terraform *)",
"Bash(kubectl *)",
"Write(Dockerfile*)",
"Write(*.yml)",
"Write(*.yaml)",
"Write(Makefile)",
"Write(*.sh)"
],
"allow": [
"Read(src/**)",
"Read(tests/**)",
"Read(docs/**)",
"Write(src/**)",
"Write(tests/**)",
"Bash(npm test)",
"Bash(npm run lint)",
"Bash(pytest *)",
"Bash(go test *)"
]
}
GitHub Copilot (Organization settings > Copilot > Content exclusions):
- "**/.*env*"
- "**/*.key"
- "**/*.pem"
- "**/*.p12"
- "**/*.pfx"
- "**/config/secrets/**"
- "**/credentials.*"
- "**/.aws/**"
- "**/.ssh/**"
- "**/infrastructure/terraform/state/**"
Cursor (.cursorignore — committed to repository):
.env*
*.key
*.pem
*.p12
*.pfx
config/secrets/
credentials.*
.aws/
.ssh/
infrastructure/terraform/state/
Roo Code (.rooignore — committed to repository, with symlink monitoring):
.env*
*.key
*.pem
*.p12
*.pfx
config/secrets/
credentials.*
.aws/
.ssh/
Implementation Checklist
| Control | Priority | Status |
|---|---|---|
| Full disk encryption on all developer workstations | Critical | |
| EDR deployed on all developer workstations | Critical | |
| MFA enforced on all development tools | Critical | |
| Repository access follows least privilege | Critical | |
| AI tool deny rules configured for all tools | Critical | |
| AI tools on organizational/enterprise plans only | Critical | |
| Build environments isolated and ephemeral | High | |
| Approved software inventory enforced via MDM | High | |
| Quarterly access reviews for repositories | High | |
| IDE extension vetting process | High | |
| VPN required for remote development | High | |
| OS patching SLA: 14 days (7 for critical) | High | |
| Screen lock: 5 minutes max inactivity | Medium | |
| USB mass storage restricted | Medium | |
| DNS filtering on development networks | Medium | |
| Network segmentation (dev isolated from prod) | High | |
| AI tool data retention verified | Medium | |
| Workspace trust enabled in IDEs | Medium | |
| IDE telemetry controlled | Low |
Key Takeaways
- The development environment is the supply chain origin: If the environment where code is written is compromised, every artifact it produces is suspect. Secure the foundation first.
- MFA everywhere, no exceptions: Every tool in the development chain — Git hosting, CI/CD, cloud consoles, artifact registries, secrets managers — requires MFA. A single tool without MFA is the weakest link.
- AI tools require explicit permission controls: Each AI coding assistant has different control mechanisms with different granularity. Claude Code’s three-tier deny/ask/allow system provides the most comprehensive control. Copilot provides centralized organization-wide exclusions. Codex CLI provides the strongest isolation. Know what your tools offer and configure them fully before use.
- Organizational plans matter more than you think: The difference between a personal and organizational AI tool plan can be the difference between your code being used for model training and not. Verify every developer’s plan tier.
- Build environments are not development environments: Build systems should be isolated, ephemeral, and hardened. No internet access, no developer login, no persistent state.
- Access is not binary: Developers sometimes need production access. The answer is not “never” — it is “structured, justified, time-limited, audited, and automatically revoked.”
- Defense in depth for the environment: Encryption + EDR + patching + network segmentation + access control + MFA + monitoring. No single control protects the development environment; the combination does.
References
- NIST SP 800-218 (SSDF): Practices PS.1, PS.2
- Microsoft Security Development Lifecycle: Practice 6
- CIS Controls v8, Controls 16.1 and 16.8
- Claude Code Documentation: https://docs.anthropic.com/en/docs/claude-code
- GitHub Copilot Content Exclusions: https://docs.github.com/en/copilot/managing-copilot/managing-github-copilot-in-your-organization
- OpenSSF Secure Software Development Fundamentals: https://openssf.org/
- OWASP Developer Guide: https://owasp.org/www-project-developer-guide/
Study Guide
Key Takeaways
- Development environment is the supply chain origin — If where code is written is compromised, every artifact it produces is suspect.
- MFA everywhere, no exceptions — Git hosting, CI/CD, cloud consoles, artifact registries, secrets managers; single tool without MFA is weakest link.
- AI tools require explicit permission controls — Claude Code 3-tier deny/ask/allow, Copilot org exclusions, Codex CLI OS sandbox, Cursor privacy mode.
- Organizational AI plans matter critically — Personal vs. organizational plan can mean the difference between code being used for training or not.
- Build environments are not development environments — Isolated, ephemeral, hardened, no internet access, no developer login, no persistent state.
- Screen lock at 5 minutes max — Enforced through MDM; 7-day SLA for critical OS patches, 14-day for standard.
- IDE extensions run with full IDE permissions — Malicious extension has access to all files, terminal, and network; maintain approved extension list.
Important Definitions
| Term | Definition |
|---|---|
| NIST SSDF PS.1 | Protect source code from unauthorized access using access controls, encryption, monitoring |
| NIST SSDF PS.2 | Protect code from tampering through cryptographic signing and verification |
| Claude Code Deny Rules | Highest-precedence rules blocking file, command, and network operations; cannot be overridden |
| Workspace Trust | IDE feature restricting extensions and execution when opening untrusted folders |
| EDR | Endpoint Detection and Response — continuous monitoring for malicious activity on workstations |
| Full Disk Encryption | FileVault (macOS), BitLocker (Windows), LUKS (Linux) protecting against physical theft |
| Shadow AI | Unapproved AI tools adopted by developers without organizational approval or security review |
| Codex CLI Sandboxing | OS-level isolation using macOS Seatbelt or Linux Landlock+seccomp with network disabled by default |
| Split Tunneling | VPN routing only internal traffic through VPN; full tunneling routes all traffic |
| Symlink Bypass | Known vulnerability where symbolic links circumvent Roo Code .rooignore file exclusions |
Quick Reference
- Patching SLAs: Critical security = 7 days, Standard OS = 14 days, Zero-day = 48 hours
- AI Tool Tiers: Claude Code (deny/ask/allow, most granular), Copilot (org-wide file exclusions), Cursor (.cursorignore + privacy mode), Codex CLI (OS-level sandbox, strongest isolation), Roo Code (.rooignore, symlink risk)
- Access Reviews: Quarterly minimum for repository access; immediate revocation on departure
- Build Agent Rules: Ephemeral, no internet, no SSH/exec during builds, minimal permissions, separate per-pipeline service accounts
- Common Pitfalls: Personal AI plans for work use, unvetted IDE extensions, persistent build agents with internet, no VPN for remote dev, trusting cloned repos without workspace trust
Review Questions
- Compare the security control mechanisms of Claude Code, Copilot, Cursor, and Codex CLI — rank them by granularity and explain the tradeoffs.
- Why must build agents have no direct internet access, and what architecture enables dependency resolution without it?
- A developer clones a repository containing a malicious .vscode/tasks.json — what IDE feature prevents execution and how does it work?
- Why is the distinction between personal and organizational AI tool plans a critical security decision?
- Design a comprehensive developer workstation security policy covering encryption, EDR, patching, screen lock, and AI tool controls.