2.2 — Secure Design Principles
Learning Objectives
- ✓ Explain CIS 16.10 requirements in full and map them to design decisions
- ✓ Apply all ten core secure design principles to system architecture
- ✓ Perform attack surface analysis and identify reduction opportunities
- ✓ Implement secure design patterns for common security functions
- ✓ Identify and avoid common anti-patterns in secure design
- ✓ Evaluate AI-generated architecture reviews and understand their current accuracy limitations
- ✓ Identify how AI coding assistants frequently violate secure design principles
1. CIS Control 16.10 — Apply Secure Design Principles
CIS Control 16.10 mandates five specific secure design principles for application architectures:
1.1 Least Privilege
Every component, user, process, and service operates with the minimum set of permissions necessary to accomplish its legitimate function. No more, no less, no default-to-admin.
In practice:
- Database accounts for applications use read-only connections for queries, separate write accounts for mutations
- Service accounts have scoped IAM roles, not wildcard permissions
- Container processes run as non-root users
- API tokens carry only the scopes required for their specific function
- File system permissions follow the principle: nothing world-readable, nothing world-writable
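These rules apply even within a single process. The sketch below (using SQLite's read-only URI mode; the `app.db` filename is hypothetical) gives query code a connection that physically cannot write, so a bug in a read path fails loudly instead of mutating data:

```python
import sqlite3

DB_PATH = "app.db"  # hypothetical database file

def open_read_connection() -> sqlite3.Connection:
    """Read-only connection for query paths: any write raises an error."""
    return sqlite3.connect(f"file:{DB_PATH}?mode=ro", uri=True)

def open_write_connection() -> sqlite3.Connection:
    """Separate connection used only by mutation code paths."""
    return sqlite3.connect(DB_PATH)

# Setup (normally done by migrations, not application code)
with open_write_connection() as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# Query paths get the read-only handle; an accidental write fails loudly
ro = open_read_connection()
try:
    ro.execute("DELETE FROM users")  # raises sqlite3.OperationalError
except sqlite3.OperationalError as exc:
    print("write blocked:", exc)
```

The same pattern scales up: a managed database would use two accounts with different grants, but the principle is identical.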
1.2 Complete Mediation
Every access to every resource is checked for authorization every time. There is no caching of authorization decisions that could become stale. There is no bypassing the authorization check because “this path was already validated.”
In practice:
- Every API endpoint validates authorization on every request
- Authorization is checked at the server/service level, never relying on client-side enforcement
- Middleware or interceptors enforce authorization consistently, not per-handler ad hoc checks
- Object-level authorization: verifying the requester can access this specific resource, not just resources of this type
- No “back door” administrative interfaces that bypass normal authorization
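A minimal sketch of complete mediation as a decorator, assuming a hypothetical ownership table (`DOCUMENT_OWNERS`). Every call re-checks object-level authorization; there is no caching and no "already validated" shortcut:

```python
from functools import wraps

# Hypothetical ownership store: resource id -> owning user
DOCUMENT_OWNERS = {"doc-1": "alice", "doc-2": "bob"}

class Forbidden(Exception):
    pass

def authorize_document(handler):
    """Checks object-level authorization on EVERY call - no cached
    decisions, no 'validated on a previous request' shortcuts."""
    @wraps(handler)
    def wrapper(user: str, doc_id: str, *args, **kwargs):
        if DOCUMENT_OWNERS.get(doc_id) != user:
            raise Forbidden(f"{user} may not access {doc_id}")
        return handler(user, doc_id, *args, **kwargs)
    return wrapper

@authorize_document
def read_document(user: str, doc_id: str) -> str:
    return f"contents of {doc_id}"

print(read_document("alice", "doc-1"))   # allowed: alice owns doc-1
# read_document("alice", "doc-2")        # raises Forbidden
```

Applying the check as middleware (here, a decorator) rather than inside each handler is what makes the enforcement consistent.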
1.3 Never Trust User Input
All input from all external sources — users, APIs, files, environment variables, DNS responses, HTTP headers — is treated as potentially malicious until validated. This is not paranoia; it is engineering discipline.
In practice:
- Server-side validation for all input, regardless of client-side validation
- Allowlisting (what is permitted) over denylisting (what is forbidden)
- Type checking, range checking, length checking, format checking
- Encoding output to prevent injection (HTML encoding, SQL parameterization, command escaping)
- Content-type enforcement on all request bodies
- No direct use of user input in file paths, SQL queries, shell commands, or LDAP queries
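A small illustration combining two of these rules: allowlist validation first, then SQL parameterization, so user input never becomes query text. The username pattern is an assumed example policy:

```python
import re
import sqlite3

USERNAME_RE = re.compile(r"^[a-z0-9_]{3,32}$")  # allowlist: what IS permitted

def validate_username(raw: str) -> str:
    """Reject anything outside the allowlist; never try to 'clean' bad input."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError(f"invalid username: {raw!r}")
    return raw

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (username TEXT)")
conn.execute("INSERT INTO accounts VALUES ('alice')")

def find_account(raw_input: str):
    username = validate_username(raw_input)  # layer 1: allowlist validation
    # layer 2: parameterization - input is bound as data, never as SQL text
    return conn.execute(
        "SELECT username FROM accounts WHERE username = ?", (username,)
    ).fetchone()

print(find_account("alice"))
# find_account("alice' OR '1'='1")  # rejected by validation, never reaches SQL
```

Note the layering: even if the validator had a gap, the parameterized query would still prevent injection, which is defense in depth applied at the statement level.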
1.4 Explicit Error Checking
Every operation that can fail must have its failure mode explicitly handled. No swallowed exceptions. No assumed success. No “this should never happen” without a code path for when it does happen.
In practice:
- Every external call (database, API, file system) has error handling
- Error messages to users reveal no internal system details (no stack traces, no database errors, no file paths)
- Error messages to logs contain full diagnostic detail for troubleshooting
- Failed security operations (authentication, authorization, encryption) default to deny
- Return values from security-critical functions (crypto operations, permission checks) are always checked
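A sketch of fail-to-deny error handling around a signature check. The `SECRET` constant is illustrative; real keys belong in a KMS:

```python
import hashlib
import hmac

SECRET = b"demo-secret"  # illustrative only; real keys come from a KMS

def verify_signature(message: bytes, signature: str) -> bool:
    expected = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    # compare_digest's return value is always checked, never assumed
    return hmac.compare_digest(expected, signature)

def handle_request(message: bytes, signature: str) -> str:
    try:
        if not verify_signature(message, signature):
            return "403 Forbidden"  # explicit deny path, not an afterthought
        return "200 OK"
    except Exception:
        # any unexpected failure inside a security check defaults to deny;
        # full diagnostic detail goes to logs (omitted here), none to the user
        return "403 Forbidden"

good = hmac.new(SECRET, b"hello", hashlib.sha256).hexdigest()
print(handle_request(b"hello", good))      # 200 OK
print(handle_request(b"hello", "bogus"))   # 403 Forbidden
```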
1.5 Attack Surface Minimization
The attack surface is the sum of all points where an attacker can try to interact with the system. Minimization means reducing these points to the absolute minimum required for functionality.
In practice:
- Disable or remove unnecessary features, endpoints, services, and ports
- Remove default accounts, default credentials, sample applications, and documentation endpoints from production
- Limit exposed HTTP methods to those actually used (disable TRACE and unused verbs such as DELETE; note that OPTIONS may still be required for CORS preflight)
- Minimize information disclosure in headers, error messages, and API responses
- Reduce dependencies: every library is an attack surface expansion
- Network segmentation: limit what can communicate with what
2. Core Secure Design Principles in Depth
Beyond CIS 16.10’s five principles, secure design draws on Saltzer and Schroeder’s classic security design principles (1975) and modern extensions. These principles form the theoretical foundation for every design decision.
2.1 Least Privilege
(Covered in CIS 16.10 above.) The principle that every program and every user of the system should operate using the least set of privileges necessary to complete the job.
Design implications:
- Default-deny posture: start with no access, grant explicitly
- Temporal least privilege: elevate only when needed, drop when done (just-in-time access)
- Granular permission models: avoid coarse “admin/user” dichotomies
- Separate duties: no single role should have unchecked power over critical operations
2.2 Defense in Depth
No single security control is sufficient. Multiple layers of defense ensure that if one control fails, others prevent or detect the breach.
Design implications:
- WAF + input validation + parameterized queries (three layers protecting against injection)
- Network segmentation + service authentication + application authorization (three layers protecting against lateral movement)
- Encryption at rest + encryption in transit + access control (three layers protecting data)
- Prevention + detection + response (three phases of security operations)
Common failure: Relying on a single perimeter firewall as the sole security control. Once the firewall is bypassed (via VPN, compromised insider, or application-layer attack), there is no further defense.
2.3 Fail Secure / Fail Closed
When a component fails, it should default to a secure state, not an open one. Failure should deny access, not grant it.
Design implications:
- If the authorization service is unavailable, deny all requests (do not default to allow)
- If TLS negotiation fails, drop the connection (do not fall back to plaintext)
- If input validation cannot determine whether input is safe, reject it
- If a WAF fails open (passes traffic when it cannot inspect), that is a design flaw
- Exception handlers should not grant elevated access during error recovery
Tension: Fail-closed can cause availability issues. The design must balance security with availability requirements. For critical systems, this means redundant security controls that maintain fail-closed behavior without single points of failure.
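The fail-closed rule for an unreachable authorization service fits in a few lines. The service stub below is hypothetical and always fails, purely to exercise the deny path:

```python
class AuthServiceUnavailable(Exception):
    pass

def query_authorization_service(user: str, action: str) -> bool:
    """Stand-in for a remote policy service that may time out or be down."""
    raise AuthServiceUnavailable("policy service timeout")

def is_allowed(user: str, action: str) -> bool:
    try:
        return query_authorization_service(user, action)
    except AuthServiceUnavailable:
        # Fail CLOSED: an unreachable authorization service means deny,
        # never "assume allowed and reconcile later".
        return False

assert is_allowed("alice", "delete") is False
```

A fail-open variant would return `True` in the except branch, which is exactly the WAF design flaw called out above.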
2.4 Complete Mediation
(Covered in CIS 16.10 above.) Every access to every object must be checked for authority. The system must never rely on cached permissions or assume that because a user was authorized for the previous request, they are authorized for the current one.
Design implications:
- Stateless authorization checks per request
- No authorization decision caching without explicit invalidation mechanisms
- Indirect object references: users specify references that the system resolves and authorizes, never allowing direct object access by internal ID alone
- API gateways that enforce authorization before routing to backend services
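Indirect object references can be illustrated with a small per-user handle map. This is a simplified sketch; real implementations typically scope handles to a session and back them with durable storage:

```python
import secrets

class ReferenceMap:
    """Per-user table mapping opaque handles to internal IDs. The client
    only ever sees the handle, never the raw database key."""

    def __init__(self):
        self._by_user = {}  # user -> {handle: internal_id}

    def issue(self, user: str, internal_id: int) -> str:
        handle = secrets.token_urlsafe(8)
        self._by_user.setdefault(user, {})[handle] = internal_id
        return handle

    def resolve(self, user: str, handle: str) -> int:
        """Resolution implies authorization: a handle issued to one user
        cannot be redeemed by another."""
        try:
            return self._by_user[user][handle]
        except KeyError:
            raise PermissionError(f"{user} holds no reference {handle!r}")

refs = ReferenceMap()
h = refs.issue("alice", 42)
assert refs.resolve("alice", h) == 42
# refs.resolve("bob", h)  # raises PermissionError: the handle is alice's
```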
2.5 Economy of Mechanism (Keep It Simple)
Security mechanisms should be as simple as possible. Complexity is the enemy of security. Every additional line of code, configuration option, and integration point is a potential vulnerability.
Design implications:
- Prefer well-tested, widely-used security libraries over custom implementations
- Minimize the number of security-critical code paths
- Centralize security functions (one authentication module, one authorization module, one crypto module)
- Reduce the number of external dependencies
- Simpler designs are easier to review, test, and audit
Anti-pattern: Building a custom authentication system with SAML, OIDC, LDAP, certificate-based, and API key authentication all implemented in-house. Each implementation is a security surface. Use a proven identity provider and delegate.
2.6 Open Design
The security of a system should not depend on the secrecy of its design or implementation. Security through obscurity is not security. A system should be secure even if everything about it, except the cryptographic keys, is public knowledge.
Design implications:
- Use published, peer-reviewed cryptographic algorithms (AES, RSA, Ed25519), not custom or secret ones
- Assume attackers know your architecture, technology stack, and source code
- Security controls must work even when their implementation is known
- Open-source security tools are generally preferable because they receive broader scrutiny
- Password hashing algorithms (bcrypt, Argon2id) are designed to be secure even when the algorithm is fully known
This does not mean: Publishing your production credentials, encryption keys, or vulnerability scan results. Open design means the mechanism is public. The keys and secrets remain protected.
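A standard-library illustration of open design: the hashing algorithm is fully public, and the only secrets are the password and the per-user random salt. Argon2id is the usual production recommendation; scrypt is used here only because it ships with Python:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Public algorithm, secret inputs: returns (salt, digest)."""
    salt = os.urandom(16)  # unique per user; stored alongside the digest
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

An attacker who knows the algorithm, the cost parameters, and even the salt still faces the full work factor per guess, which is exactly what open design demands.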
2.7 Separation of Privilege
No single condition should be sufficient to grant access to a critical function. Multiple independent conditions should be required.
Design implications:
- Multi-factor authentication: knowledge + possession + biometrics
- Dual authorization for high-risk operations (two-person rule for production deployments, key ceremonies)
- Separation of duties: the person who writes code should not approve their own code for production
- Segregated environments: developers cannot directly access production data
- Break-glass procedures require multiple approvals
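The two-person rule can be sketched as a small state machine; the class and method names are illustrative:

```python
class DualApproval:
    """Two-person rule: a change proceeds only after two DISTINCT
    approvers sign off, and the requester cannot approve their own."""

    def __init__(self, requester: str):
        self.requester = requester
        self.approvers = set()

    def approve(self, approver: str) -> None:
        if approver == self.requester:
            raise PermissionError("requester cannot approve their own change")
        self.approvers.add(approver)

    def is_authorized(self) -> bool:
        return len(self.approvers) >= 2

deploy = DualApproval(requester="alice")
deploy.approve("bob")
assert not deploy.is_authorized()   # one approval is not sufficient
deploy.approve("carol")
assert deploy.is_authorized()       # two independent conditions met
```

Using a set (rather than a counter) is the important detail: the same approver clicking twice still counts once.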
2.8 Least Common Mechanism
Minimize the mechanisms shared between users or between components with different trust levels. Shared mechanisms create covert channels and common failure points.
Design implications:
- Separate databases for different tenants (or at minimum, strong row-level security)
- Process isolation between components with different privilege levels
- Separate network segments for different trust zones
- Do not share service accounts across services
- Avoid shared file systems between components with different trust levels
2.9 Psychological Acceptability (Usable Security)
Security mechanisms must be easy enough to use that users will actually use them correctly. If security is too burdensome, users invent workarounds that undermine the control entirely.
Design implications:
- Authentication flows should be as smooth as possible (biometrics, passwordless, SSO)
- Error messages should help users correct their behavior, not just reject them
- Security defaults should be secure without requiring user configuration
- Security policies should be enforceable through system design, not user compliance
- MFA enrollment should be a guided flow, not a link to documentation that users abandon
Classic failure: Requiring passwords with uppercase, lowercase, numbers, symbols, minimum 16 characters, changed every 30 days, never repeated. Users respond by writing passwords on sticky notes. NIST SP 800-63B now recommends length over complexity, and eliminating mandatory rotation.
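A sketch of an SP 800-63B-style check: a length floor plus a breached-password screen, with deliberately no composition or rotation rules. The denylist here is a tiny stand-in for a real breach corpus:

```python
# Tiny stand-in for a real breached-password corpus (e.g., HIBP data)
COMMON_PASSWORDS = {"password", "12345678", "qwertyuiop"}

def check_password(candidate: str) -> list:
    """Return a list of problems; empty list means acceptable."""
    problems = []
    if len(candidate) < 8:
        problems.append("must be at least 8 characters")
    if candidate.lower() in COMMON_PASSWORDS:
        problems.append("appears in breached-password list")
    # Deliberately absent: uppercase/symbol composition rules and
    # mandatory expiry - both discouraged by NIST SP 800-63B.
    return problems

assert check_password("correct horse battery staple") == []
assert check_password("password") == ["appears in breached-password list"]
```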
2.10 Zero Trust Architecture Principles
Zero trust extends traditional secure design principles to modern distributed environments. The core tenet: never trust, always verify. No implicit trust based on network location, device type, or previous authentication.
Core principles:
- Verify explicitly: Authenticate and authorize based on all available data points (identity, location, device health, data classification, anomalies)
- Use least-privilege access: Just-in-time and just-enough access. Adaptive policies based on risk
- Assume breach: Design as if the network is already compromised. Minimize blast radius. Segment access. Encrypt everything. Verify end-to-end
Design implications:
- mTLS between all services, even within the “internal” network
- Identity-based access control (not network-based)
- Microsegmentation: every service communicates only with its declared dependencies
- Continuous authentication: re-validate identity based on behavior, not just initial login
- Every data flow encrypted, every request authenticated, every action authorized
3. OWASP Secure Design Principles Reference
OWASP maintains a Secure Design Principles reference that aligns with the principles above and adds practical web application context. Key additions:
- Keep security simple: Avoid complex security architectures when simple ones suffice
- Fix security issues correctly: When a vulnerability is found, understand the root cause and fix the pattern, not just the instance
- Establish secure defaults: Out of the box, the application should be in its most secure configuration
- Avoid security by obscurity: Do not rely on hidden URLs, undocumented APIs, or secret parameters
4. Attack Surface Analysis and Reduction
4.1 Identifying the Attack Surface
The attack surface consists of:
- Entry points: Every point where data enters the system (HTTP endpoints, API routes, message queues, file upload, email intake, CLI arguments, environment variables)
- Exit points: Every point where data leaves the system (API responses, logs, emails, reports, error messages) — these can leak information
- Trust boundaries: Every point where data crosses from one trust level to another (user → application, application → database, service → service, internal → external)
- Data flows: Every path data takes through the system, especially paths involving sensitive data
- Assets: What the system protects (user data, credentials, business logic, financial transactions)
4.2 Attack Surface Reduction Strategies
| Strategy | Implementation |
|---|---|
| Remove unused features | Disable or delete endpoints, services, and functions not required for current functionality |
| Minimize entry points | Consolidate APIs, reduce the number of exposed endpoints, use API gateways |
| Restrict access methods | Limit HTTP methods, restrict content types, enforce authentication on all endpoints |
| Reduce privilege | Run services as non-root, use read-only file systems, drop capabilities |
| Minimize dependencies | Audit and remove unused libraries, prefer standard library over third-party |
| Network segmentation | Place components in appropriate network zones, restrict inter-zone communication |
| Data minimization | Collect, store, and expose only the data necessary for the function |
4.3 Measuring Attack Surface
Microsoft’s Relative Attack Surface Quotient (RASQ) provides a quantitative method:
- Count entry points weighted by privilege level
- Count exit points weighted by data sensitivity
- Count trust boundaries and their permeability
- Track changes over time
A useful shortcut: count the number of exposed endpoints, the number of external dependencies, and the number of privileged operations. Track these metrics per release. If they increase without justification, the attack surface is growing.
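That shortcut can be automated as a per-release snapshot; the field names below are illustrative:

```python
from dataclasses import dataclass

@dataclass
class AttackSurfaceSnapshot:
    """The shortcut metrics from the text, captured once per release."""
    release: str
    exposed_endpoints: int
    external_dependencies: int
    privileged_operations: int

def surface_growth(prev: AttackSurfaceSnapshot, curr: AttackSurfaceSnapshot) -> dict:
    """Deltas that should each be justified during release review."""
    return {
        "endpoints": curr.exposed_endpoints - prev.exposed_endpoints,
        "dependencies": curr.external_dependencies - prev.external_dependencies,
        "privileged_ops": curr.privileged_operations - prev.privileged_operations,
    }

v1 = AttackSurfaceSnapshot("1.0", exposed_endpoints=12,
                           external_dependencies=40, privileged_operations=3)
v2 = AttackSurfaceSnapshot("1.1", exposed_endpoints=15,
                           external_dependencies=44, privileged_operations=3)
assert surface_growth(v1, v2) == {"endpoints": 3, "dependencies": 4, "privileged_ops": 0}
```

Wiring this into CI (counting routes from the framework, dependencies from the lockfile) turns "is the attack surface growing?" into a reviewable diff.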
5. Secure Design Patterns
5.1 Input Validation Gateway
Centralize all input validation in a single gateway layer before data reaches business logic.
[User Input] → [Validation Gateway] → [Business Logic]
                      ↓ (invalid)
                [Error Response]
Implementation: API gateway or middleware that validates all requests against schema definitions (OpenAPI/Swagger), enforces type constraints, length limits, format rules, and content-type requirements. Business logic never receives unvalidated input.
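A minimal gateway sketch follows; the schema format is invented for brevity, whereas real systems derive validation from an OpenAPI document:

```python
# Declared schema: field -> rules. Invented format for illustration only.
SCHEMA = {
    "username": {"type": str, "max_len": 32},
    "age": {"type": int, "min": 0, "max": 150},
}

def validation_gateway(request: dict) -> dict:
    """Every request passes through here before any business logic runs."""
    if set(request) != set(SCHEMA):
        raise ValueError("unexpected or missing fields")
    for field, rules in SCHEMA.items():
        value = request[field]
        if not isinstance(value, rules["type"]):
            raise ValueError(f"{field}: wrong type")
        if "max_len" in rules and len(value) > rules["max_len"]:
            raise ValueError(f"{field}: too long")
        if "min" in rules and value < rules["min"]:
            raise ValueError(f"{field}: below minimum")
        if "max" in rules and value > rules["max"]:
            raise ValueError(f"{field}: above maximum")
    return request  # business logic only ever sees validated data

assert validation_gateway({"username": "alice", "age": 30})
# validation_gateway({"username": "alice", "age": -1})  # raises ValueError
```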
5.2 Authentication Broker
Centralize authentication in a dedicated service or identity provider. Application services never implement authentication logic directly.
[User] → [Authentication Broker (IdP)] → [Token/Session] → [Application Services]
Implementation: OAuth 2.0 / OIDC identity provider (Keycloak, Auth0, Azure AD, Okta). Application services validate tokens but never handle credential verification. Single point of authentication policy enforcement.
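The division of labor can be sketched with an HMAC-signed token, a deliberately simplified stand-in for JWT/OIDC. The shared key is illustrative; production services verify against the IdP's published signing keys (JWKS) and never hold credential-verification logic:

```python
import base64
import hashlib
import hmac
import json

BROKER_KEY = b"shared-with-idp"  # illustrative; real services use JWKS keys

def issue_token(subject: str) -> str:
    """Done only by the broker - application services never see credentials."""
    payload = base64.urlsafe_b64encode(json.dumps({"sub": subject}).encode())
    sig = hmac.new(BROKER_KEY, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{sig}"

def validate_token(token: str) -> dict:
    """Done by every application service: verify, never re-authenticate."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(BROKER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        raise PermissionError("invalid token signature")
    return json.loads(base64.urlsafe_b64decode(payload))

token = issue_token("alice")
assert validate_token(token) == {"sub": "alice"}
```

The structural point is the asymmetry: one component issues, many components verify, and credential handling exists in exactly one place.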
5.3 Authorization Enforcer
Centralize authorization logic in a policy engine separate from business logic.
[Request] → [Authorization Enforcer] → [Allow/Deny] → [Business Logic]
Implementation: Open Policy Agent (OPA), AWS IAM, RBAC middleware. Policies defined declaratively, evaluated consistently, audited centrally. Business logic does not contain `if user.role == "admin"` checks scattered throughout.
5.4 Secure Session Management
Sessions are created, maintained, validated, and destroyed through a single session management module.
Pattern requirements:
- Cryptographically random session identifiers (minimum 128 bits of entropy)
- Session identifier regeneration on authentication state change
- Server-side session storage (not client-side)
- Absolute timeout (maximum session lifetime) and idle timeout (inactivity limit)
- Secure session transmission (Secure, HttpOnly, SameSite cookie flags)
- Complete session invalidation on logout (server-side deletion)
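These requirements can be sketched as a single server-side session module; the timeout values are illustrative:

```python
import secrets
import time

class SessionStore:
    """Server-side sessions with 128-bit random IDs, regeneration on
    authentication state change, and absolute + idle timeouts."""
    ABSOLUTE_TIMEOUT = 8 * 3600   # max session lifetime (seconds)
    IDLE_TIMEOUT = 30 * 60        # inactivity limit (seconds)

    def __init__(self):
        self._sessions = {}

    def create(self, user: str) -> str:
        sid = secrets.token_hex(16)  # 128 bits of entropy
        now = time.time()
        self._sessions[sid] = {"user": user, "created": now, "last_seen": now}
        return sid

    def regenerate(self, old_sid: str) -> str:
        """Call on any authentication state change (e.g., login)."""
        data = self._sessions.pop(old_sid)  # old ID is dead immediately
        new_sid = secrets.token_hex(16)
        self._sessions[new_sid] = data
        return new_sid

    def validate(self, sid: str) -> str:
        sess = self._sessions.get(sid)
        now = time.time()
        if (sess is None
                or now - sess["created"] > self.ABSOLUTE_TIMEOUT
                or now - sess["last_seen"] > self.IDLE_TIMEOUT):
            self._sessions.pop(sid, None)  # fail secure: deny and purge
            raise PermissionError("invalid or expired session")
        sess["last_seen"] = now
        return sess["user"]

    def destroy(self, sid: str) -> None:
        self._sessions.pop(sid, None)  # server-side deletion on logout

store = SessionStore()
sid = store.create("alice")
sid = store.regenerate(sid)  # e.g., after login or privilege change
assert store.validate(sid) == "alice"
```

Cookie transmission (Secure, HttpOnly, SameSite) happens in the web layer; the store itself never trusts anything client-side beyond the opaque ID.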
5.5 Cryptographic Key Management
Keys are generated, stored, used, rotated, and destroyed through a dedicated key management service.
[Application] → [Key Management Service] → [HSM/KMS]
                          ↓
[Key Lifecycle: Generate → Store → Use → Rotate → Revoke → Destroy]
Implementation: AWS KMS, Azure Key Vault, HashiCorp Vault, hardware security modules (HSMs). Application code never contains key material. Keys are referenced by identifier, not value.
5.6 Secure Logging Pattern
Logging is centralized, immutable, and security-aware.
Pattern requirements:
- Log security-relevant events: authentication, authorization, data access, configuration changes, errors
- Never log sensitive data: passwords, tokens, credit card numbers, PII
- Structured logging format for machine parsing (JSON)
- Tamper-evident storage (write-once, append-only)
- Centralized aggregation for correlation and alerting
- Time synchronization across all logging sources
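A minimal sketch of the redaction and structured-format requirements; the sensitive-key list is an assumed example and would be broader in practice:

```python
import json

# Assumed example list; real deployments maintain a broader taxonomy
SENSITIVE_KEYS = {"password", "token", "card_number", "ssn"}

def log_event(event: str, **fields) -> str:
    """Emit one structured JSON log line with sensitive fields redacted
    BEFORE serialization, so secrets never reach storage."""
    safe = {k: ("[REDACTED]" if k in SENSITIVE_KEYS else v)
            for k, v in fields.items()}
    return json.dumps({"event": event, **safe}, sort_keys=True)

line = log_event("login_failed", user="alice", password="hunter2")
assert json.loads(line) == {
    "event": "login_failed", "user": "alice", "password": "[REDACTED]"
}
```

Redacting at the logging call site (rather than scrubbing stored logs later) is the fail-secure choice: a secret that is never written cannot leak.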
6. Anti-Patterns
6.1 Security Through Obscurity
The mistake: Relying on hidden URLs (/admin-secret-panel), undocumented API parameters, obfuscated client-side code, or “nobody will find this” as security controls.
Why it fails: Attackers use automated discovery tools (directory brute-forcing, parameter fuzzing, JavaScript analysis) that find these “hidden” resources in minutes.
The fix: Every resource is authenticated and authorized. Discovery is irrelevant because access is controlled.
6.2 Client-Side Trust
The mistake: Performing security validation only in client-side code (JavaScript validation, hidden form fields for authorization, client-side role checks).
Why it fails: Attackers bypass the client entirely. They send requests directly to the server using tools like curl, Burp Suite, or custom scripts. Client-side code is suggestion, not enforcement.
The fix: All security validation occurs server-side. Client-side validation is a UX convenience, not a security control.
6.3 Mixed Trust Contexts
The mistake: Processing data from different trust levels in the same execution context without proper isolation. Running user-uploaded code in the same process as the application. Storing user data and system configuration in the same database with the same credentials.
Why it fails: A compromise in the lower-trust context (user input, user-uploaded content) directly compromises the higher-trust context (system configuration, other users’ data).
The fix: Isolation boundaries between trust levels. Sandboxing for untrusted code execution. Separate credentials and storage for different trust zones.
6.4 Excessive Functionality
The mistake: Shipping features “just in case,” leaving debug endpoints in production, including administrative tools accessible from the public interface.
Why it fails: Every feature is attack surface. Unused features are unmonitored attack surface — the most dangerous kind.
The fix: Remove what is not needed. Disable what cannot be removed. Monitor what cannot be disabled.
7. AI-Generated Architecture Reviews
7.1 Current State of Accuracy
As of 2025-2026, AI-assisted architecture review tools demonstrate approximately 50-55% accuracy in identifying security design flaws: roughly half of the issues they flag are genuine concerns, while the other half are false positives, misreadings of context, or irrelevant observations.
What AI architecture review does well:
- Identifies common anti-patterns (hardcoded credentials, missing TLS, SQL injection patterns)
- Checks configuration files against known-good baselines
- Detects missing authentication on endpoints
- Identifies overly permissive IAM policies
- Flags known-vulnerable dependencies
What AI architecture review does poorly:
- Understanding business context (which data is actually sensitive, what constitutes a legitimate access pattern)
- Evaluating complex authorization models (multi-tenant, hierarchical RBAC)
- Identifying logic flaws in business workflows
- Assessing architectural decisions that involve tradeoffs (security vs. performance, security vs. usability)
- Understanding custom protocols or non-standard architectures
7.2 Available Tools
| Tool | Type | Use Case |
|---|---|---|
| GitHub Copilot (code review mode) | AI coding assistant | Inline code review with security annotations |
| Snyk Code / DeepCode | AI-powered SAST | Identifies security patterns in code |
| Amazon CodeGuru | ML-based review | Identifies security issues and performance problems |
| Semgrep (with AI rules) | Pattern matching + AI | Custom security rules with AI-generated suggestions |
| Claude / GPT-4 (manual review) | General LLM | Architecture document review and threat identification |
7.3 Human-in-the-Loop Requirements
AI architecture reviews must always operate in a human-in-the-loop model:
- AI generates findings: Initial scan produces candidate issues
- Security engineer triages: Each finding is validated as true positive, false positive, or needs investigation
- Engineer adds context: True positives get severity ratings, remediation guidance, and business impact analysis
- AI assists with remediation: Once a human confirms the issue, AI can suggest remediation approaches
- Human approves remediation: All design changes require human approval
7.4 How AI Assistants Violate Secure Design Principles
AI coding assistants frequently generate code that violates the principles in this module:
| Principle Violated | Common AI Behavior |
|---|---|
| Least privilege | Generating IAM policies with wildcard permissions (*), suggesting admin-level database credentials |
| Complete mediation | Generating CRUD endpoints without authorization checks, assuming the caller is authorized |
| Never trust input | Using user input directly in queries, file paths, or shell commands without validation |
| Fail secure | Generating catch-all exception handlers that default to permissive behavior |
| Economy of mechanism | Generating complex custom security solutions when proven libraries exist |
| Open design | Suggesting “security” through hidden URLs or obfuscated parameters |
Mitigation: Security-focused code review (Module 3.4) must specifically check for these AI-generated violations. Automated security linting rules (Semgrep, ESLint security plugins) should catch the most common patterns.
8. Design Review Checklists and Processes
8.1 Secure Design Review Checklist
The following checklist should be evaluated during design review for every significant feature or component:
Authentication:
- All entry points require authentication (or are explicitly documented as public)
- Authentication mechanism is centralized (not reimplemented per endpoint)
- Multi-factor authentication supported for sensitive operations
- Session management follows secure patterns (Section 5.4)
Authorization:
- Authorization is enforced at every access point (complete mediation)
- Least privilege applied to all roles and service accounts
- Object-level authorization verified (not just role-level)
- Separation of duties enforced for critical operations
Data Protection:
- Sensitive data encrypted at rest and in transit
- Data classification applied and reflected in controls
- Key management centralized and follows lifecycle practices
- Data retention and disposal policies defined and implemented
Input Handling:
- All input validated server-side using allowlists
- Output encoding applied to prevent injection
- File uploads restricted, validated, and sandboxed
- API request size limits enforced
Error Handling:
- Errors fail secure (deny access on failure)
- Error messages reveal no internal details to users
- All exceptions are explicitly handled
- Error conditions are logged with diagnostic detail
Attack Surface:
- Unnecessary features and endpoints removed
- Network exposure minimized (segmentation, firewalls)
- Dependencies audited and minimized
- Default credentials and configurations removed
Logging and Monitoring:
- Security-relevant events are logged
- Sensitive data is never logged
- Logs are tamper-evident and centrally aggregated
- Alerting rules defined for security events
8.2 Design Review Process
- Preparation: Reviewer receives design documentation, threat model, and security requirements at least 3 business days before review
- Independent review: Reviewer evaluates the design against the checklist, secure design principles, and identified threats
- Review meeting: Designer presents, reviewer asks questions, team discusses findings
- Finding documentation: Each finding gets a severity, description, and recommended remediation
- Remediation tracking: Findings are tracked to closure. Critical and high findings block progression to implementation
- Re-review: Significant design changes trigger re-review
9. NIST SSDF PW.1 Alignment
This module aligns with NIST Secure Software Development Framework practice PW.1:
PW.1: Design Software to Meet Security Requirements and Mitigate Security Risks
- PW.1.1: Use forms of risk modeling — such as threat modeling, attack modeling, or attack surface mapping — to help assess the security risk for the software
- PW.1.2: Track and maintain the software’s security requirements, risks, and design decisions
- PW.1.3: Where appropriate, build in support for using the software securely by default
All design principles in this module directly support PW.1 by providing the framework within which security requirements are translated into secure designs.
Summary
Secure design principles are not academic abstractions. They are the engineering rules that determine whether software can be defended or will be breached. CIS 16.10 mandates five of them; Saltzer and Schroeder's classic principles, with modern extensions, supply the rest; zero trust carries them into distributed systems.
Key takeaways:
- CIS 16.10 requires least privilege, complete mediation, never trusting user input, explicit error checking, and attack surface minimization. These are non-negotiable.
- Defense in depth ensures no single control failure results in a breach.
- Fail secure means failure defaults to deny, not allow.
- Economy of mechanism means simpler systems are more secure systems.
- Zero trust extends these principles to environments where the network is no longer a trust boundary.
- Secure design patterns (input validation gateway, authentication broker, authorization enforcer) provide reusable solutions.
- Anti-patterns (security through obscurity, client-side trust, mixed trust contexts) must be actively identified and eliminated.
- AI architecture reviews are approximately 50-55% accurate and require human validation for every finding.
- AI coding assistants routinely violate secure design principles — security review must specifically check for these violations.
References
- CIS Controls v8, Control 16.10
- Saltzer, J.H. and Schroeder, M.D. “The Protection of Information in Computer Systems” (1975)
- OWASP Secure Design Principles
- NIST SP 800-160 Vol. 1: Systems Security Engineering
- NIST SP 800-207: Zero Trust Architecture
- NIST Secure Software Development Framework (SSDF) v1.1
- Microsoft SDL Practice 4: Threat Modeling and Practice 5: Secure Design Review
- BSIMM: Architecture Analysis
- OWASP SAMM: Design — Security Architecture
Study Guide
Key Takeaways
- CIS 16.10 mandates five principles — Least privilege, complete mediation, never trust user input, explicit error checking, and attack surface minimization.
- Defense in depth prevents single-point failures — Multiple security layers (WAF + validation + parameterized queries) ensure one failure does not cause breach.
- Fail secure means failure defaults to deny — When authorization is unavailable, deny all requests; when TLS fails, drop the connection.
- Economy of mechanism means simpler is more secure — Centralize security functions; prefer proven libraries over custom implementations.
- Zero trust extends principles to modern environments — Never trust, always verify; assume breach; minimize blast radius through microsegmentation.
- Secure design patterns provide reusable solutions — Authentication broker, authorization enforcer, input validation gateway, secure logging pattern.
- AI architecture reviews are ~50-55% accurate — Every AI finding requires human validation; AI assistants routinely violate secure design principles.
Important Definitions
| Term | Definition |
|---|---|
| Complete Mediation | Every access to every resource is checked for authorization every time — no cached decisions |
| Fail Secure | When a component fails, it defaults to a secure state (deny access) rather than an open state |
| Economy of Mechanism | Security mechanisms should be as simple as possible — complexity is the enemy of security |
| Open Design | Security should not depend on secrecy of design; use published, peer-reviewed algorithms |
| Separation of Privilege | Multiple independent conditions required for critical access (e.g., MFA, dual authorization) |
| Attack Surface | Sum of all points where an attacker can interact with the system |
| RASQ | Relative Attack Surface Quotient — Microsoft’s quantitative method for measuring attack surface |
| Zero Trust | Never trust, always verify; no implicit trust based on network location |
Quick Reference
- Framework/Process: CIS 16.10 five principles + Saltzer/Schroeder eight principles + zero trust three principles; NIST SSDF PW.1 alignment
- Key Numbers: 50-55% accuracy for AI architecture reviews; 10 secure design principles total; 6 secure design patterns (input validation gateway, auth broker, authz enforcer, session management, key management, secure logging)
- Common Pitfalls: Security through obscurity (hidden URLs are not access controls); client-side trust (attackers bypass clients); mixed trust contexts (no isolation between trust levels); AI generating wildcard IAM policies
Review Questions
- How does complete mediation differ from simply checking authentication at login time?
- When should fail-secure behavior be balanced against availability requirements, and how?
- Why does the open design principle not mean publishing your encryption keys?
- What specific secure design principle violations do AI coding assistants most commonly generate?
- How would you apply zero trust principles to a microservices architecture with 20+ services?