3.3 — Security Libraries and Vetted Components
Learning Objectives
- ✓ Explain CIS Control 16.11 requirements and apply them to component selection decisions
- ✓ Identify the categories of security functionality that must never be custom-built
- ✓ Evaluate third-party components using a structured security assessment framework
- ✓ Detect and mitigate AI-hallucinated package attacks (slopsquatting)
- ✓ Implement an approved component registry and lifecycle management process
- ✓ Apply NIST SSDF PW.4 guidance for reusing existing well-secured software
1. CIS Control 16.11 – Leverage Vetted Modules or Services
CIS Control 16.11 is classified at Implementation Group 2 (IG2) and states:
Leverage vetted modules or services for application security components, such as identity management, encryption, and auditing and logging. Using platform features in critical security functions will reduce developers' workload and minimize the likelihood of design or implementation errors. Modern operating systems provide effective mechanisms for identification, authentication, and authorization and make those mechanisms available to applications. Use only standardized, currently accepted, and extensively reviewed encryption algorithms. Operating systems also provide mechanisms to create and maintain secure audit logs.
Breaking Down the Requirements
Use platform/OS-provided security mechanisms: Every major platform (JVM, .NET, Node.js, Python, Go, Rust) provides security primitives that have been:
- Reviewed by thousands of security researchers
- Battle-tested in millions of production deployments
- Patched rapidly when vulnerabilities are discovered
- Validated against cryptographic standards (FIPS 140-2/3)
Using these instead of custom implementations is not laziness – it is engineering discipline.
Applies to identity management, encryption, auditing, and logging: These are the four domains where custom implementations are most dangerous:
- Identity management: Authentication, authorization, session management, credential storage, MFA. These are complex, have subtle security requirements (timing attacks, enumeration, brute force), and are attacked constantly.
- Encryption: Cryptographic implementation is notoriously unforgiving. A single implementation error – incorrect IV reuse, missing padding validation, an unauthenticated encryption mode – renders the entire system insecure.
- Auditing: Audit logs must be tamper-evident, complete, and correctly formatted for compliance. Custom implementations miss requirements that auditors and regulators expect.
- Logging: Security logging requires careful decisions about what to log and what not to log. Frameworks handle structured output, sensitive data masking, and centralized forwarding.
Reduces developer burden: Developers should focus on business logic, not reimplementing security primitives. Every hour spent building a custom authentication system is an hour not spent on the application's core value proposition – and it produces a worse outcome than using an established framework.
Standardized, accepted, and extensively reviewed algorithms only: This explicitly prohibits:
- Custom encryption algorithms ("our proprietary cipher")
- Modified standard algorithms ("AES with our custom key schedule")
- Deprecated algorithms (MD5, SHA-1, DES, 3DES, RC4)
- Obscure algorithms not reviewed by the cryptographic community
- Unvetted implementations of standard algorithms (using a random GitHub library instead of the platformβs crypto module)
2. Why Vetted Components Matter
The Math of Security Review
A custom authentication implementation might be reviewed by:
- The developer who wrote it (1 person)
- A peer reviewer (2 people)
- A security review, if the organization has the resources (3 people)
The Spring Security framework has been reviewed by:
- Hundreds of core contributors
- Thousands of security researchers
- Millions of production deployments that surface bugs
- Professional security audit firms
- The entire CVE ecosystem monitoring for vulnerabilities
The attack surface of security code is enormous. Authentication alone must handle: credential storage, timing attacks, brute force protection, session management, CSRF, MFA, password reset, account lockout, enumeration prevention, SSO integration, token management, and dozens of edge cases. No small team can match the review depth that established frameworks receive.
Leverage Community Response
When a vulnerability is found in a major security framework:
- It receives a CVE within hours
- A patch is typically available within days
- Automated tools (Dependabot, Renovate, Snyk) alert and create PRs
- Upgrade guides are published
- The community validates the fix
When a vulnerability is found in custom security code:
- It may not be recognized as a vulnerability
- There is no CVE, so automated tools do not detect it
- The fix depends on the original developer's availability and understanding
- There is no community to validate the fix
- Other organizations with the same pattern do not benefit
Faster Security Patches
The Log4Shell incident (CVE-2021-44228) demonstrated this clearly. Organizations using Log4j2 through their framework's dependency management received automated alerts and patches within days. Organizations with custom logging that happened to use Log4j2 directly often did not discover their exposure for weeks.
3. Categories of Security Components to Never Build Custom
Authentication
What to use:
- OAuth 2.0 / OIDC providers: Auth0, Okta, AWS Cognito, Azure AD B2C, Keycloak, Google Identity Platform
- SAML: For enterprise SSO integration, use your platformβs SAML library (Spring Security SAML, passport-saml, python3-saml)
- Platform auth frameworks: Spring Security (Java), ASP.NET Identity (.NET), Passport.js (Node.js), Django Auth (Python), Authboss (Go)
- Passwordless: WebAuthn/FIDO2 libraries (SimpleWebAuthn, java-webauthn-server)
What NOT to build:
- Custom password hashing (use bcrypt/Argon2id library)
- Custom session management (use framework sessions)
- Custom JWT validation (use jose, jsonwebtoken, PyJWT)
- Custom MFA implementation (use TOTP libraries – pyotp, otpauth)
- Custom OAuth flows (use certified client libraries from the OpenID Foundation)
Authorization
What to use:
- RBAC/ABAC frameworks: OPA (Open Policy Agent), Cedar (AWS), Casbin, Spring Security ACLs, CASL (JavaScript)
- Policy engines: OPA/Rego for microservices, Cedar for fine-grained permissions, Zanzibar-based systems (SpiceDB, Authzed) for relationship-based access
- Framework authorization:
`@PreAuthorize` (Spring), `[Authorize]` (.NET), middleware patterns (Express, FastAPI)
What NOT to build:
- Custom role-permission mapping systems (you will miss edge cases)
- Custom policy evaluation engines (combinatorial explosion of policies)
- Custom attribute-based access control from scratch (policy conflict resolution is hard)
Cryptography
This is the single most dangerous area for custom implementation. The history of cryptographic failures is almost entirely the history of custom implementations.
What to use:
- Language/platform crypto libraries: `java.security`/`javax.crypto` (Java), `System.Security.Cryptography` (.NET), `crypto` (Node.js), `cryptography` (Python), `crypto/aes`/`crypto/tls` (Go), `ring`/RustCrypto (Rust)
- High-level wrappers: libsodium/NaCl (all languages), Tink (Google, multiple languages)
- TLS: Platform TLS implementations, never custom TLS handshake code
What ABSOLUTELY NOT to build:
- Custom encryption algorithms – never, under any circumstances
- Custom key derivation functions
- Custom random number generators for security use
- Custom TLS implementations
- Custom certificate validation logic
- "Improved" versions of standard algorithms
- Custom padding schemes
The rule is absolute: if the word "crypto" appears in the task description, use a vetted library. If you are tempted to write a single line of cryptographic code that is not a call to a vetted library, stop and reconsider.
Session Management
What to use:
- Framework session handlers: Express session, Flask session, Spring Session, ASP.NET Session State
- Token-based: JWT libraries with framework integration
- Redis/Memcached session stores for distributed sessions
What NOT to build: Custom session ID generation, custom session storage, custom session rotation logic.
Input Validation
What to use:
- JavaScript/TypeScript: Zod, Joi, Yup, class-validator
- Python: Pydantic, marshmallow, cerberus, attrs with validators
- Java: Jakarta Bean Validation (Hibernate Validator), Spring Validation
- Go: go-playground/validator
- Rust: validator crate
- API validation: OpenAPI schema validation middleware (express-openapi-validator, connexion)
What NOT to build: Custom regex-based validation frameworks, custom sanitization libraries, custom encoding/escaping utilities.
Logging
What to use:
- Java: SLF4J + Logback or Log4j2 (properly configured – disable message lookups)
- Python: structlog, python-json-logger with stdlib logging
- .NET: Serilog, NLog
- JavaScript: Winston, Pino, Bunyan
- Go: zap, zerolog
- Rust: tracing, env_logger
Security configuration for logging frameworks:
# Log4j2 – disable the feature that caused Log4Shell
log4j2.formatMsgNoLookups=true
# The same mitigation can be applied via the environment variable
# LOG4J_FORMAT_MSG_NO_LOOKUPS=true. This flag is a stopgap:
# upgrading to Log4j 2.17+, where message lookups were removed
# entirely, is the actual fix.
Password Storage
What to use: Argon2id (preferred), bcrypt (cost factor 12+), or scrypt. Always through a vetted library:
- Python: `argon2-cffi`, `bcrypt`
- Node.js: `argon2`, `bcryptjs`
- Java: Bouncy Castle, Spring Security Crypto
- Go: `golang.org/x/crypto/argon2`, `golang.org/x/crypto/bcrypt`
- Rust: `argon2`, `bcrypt`
What NOT to use: MD5, SHA-1, SHA-256 (even with salt – too fast for password hashing), custom key stretching.
4. Evaluating Third-Party Components
Not all libraries are equal. A structured evaluation framework prevents adopting components that become liabilities.
Security Assessment Criteria
| Criterion | Green | Yellow | Red |
|---|---|---|---|
| Maintenance | Active commits in last 3 months | Commits in last 12 months | No commits in 12+ months |
| Security track record | Quick CVE response, security policy published | Occasional slow responses | History of unpatched vulnerabilities |
| Vulnerability history | Few CVEs, all patched promptly | Multiple CVEs, all eventually patched | CVEs with no patches or extended exposure windows |
| Community size | 1000+ GitHub stars, multiple maintainers | 100+ stars, 2+ maintainers | Single maintainer, low community |
| License | MIT, Apache 2.0, BSD | LGPL, MPL | GPL (for proprietary projects), AGPL, unknown |
| Download count | Millions of weekly downloads | Thousands of weekly downloads | Hundreds or fewer |
| Documentation | Comprehensive, with security guidance | Adequate for basic use | Missing or outdated |
| SBOM available | Yes, published with releases | Partial | No |
The Evaluation Process
- Check for known vulnerabilities: Search NVD (nvd.nist.gov), Snyk vulnerability database, GitHub Security Advisories
- Review the security policy: Does the project have a SECURITY.md? Does it describe how to report vulnerabilities? Does it have a disclosure timeline?
- Assess dependency health: What does the component itself depend on? A library is only as secure as its weakest transitive dependency
- License audit: Is the license compatible with your use case? GPL in a proprietary codebase creates legal exposure
- Code quality review: For security-critical components, review the source. Look for test coverage, fuzzing, CI/CD security scanning
- Alternatives assessment: Are there better-maintained or more secure alternatives? Is this the community's preferred solution?
OWASP Dependency-Check and Similar Tools
Automated tools that scan your dependency tree for known vulnerabilities:
- OWASP Dependency-Check: Free, open-source, supports Java, .NET, Ruby, Node.js, Python. Checks against NVD
- Snyk: Commercial with free tier, broad language support, monitors continuously, provides fix PRs
- GitHub Dependabot: Built into GitHub, creates PRs for vulnerable dependencies
- npm audit / pip-audit / cargo audit: Language-specific audit tools built into package managers
- Trivy: Comprehensive scanner for dependencies, container images, and IaC
- Grype: Fast vulnerability scanner for container images and filesystems
Integration requirement: At least one dependency scanning tool must run in CI on every PR and block merges when critical vulnerabilities are detected.
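One concrete shape for this gate, sketched as a GitHub Actions job for a Node project; the workflow name, Node version, and audit level are placeholder choices, not a mandated configuration:

```yaml
name: dependency-scan
on: [pull_request]

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # --audit-level=critical makes npm audit exit non-zero
      # (failing the job) when critical-severity advisories exist
      - run: npm audit --audit-level=critical
```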
5. AI and Component Selection
AI coding tools introduce three specific risks to component selection that developers must recognize and mitigate.
AI-Hallucinated Package Names – Slopsquatting
Research published in 2025 found that AI models hallucinate package names at a rate of 19.7% – nearly one in five package recommendations refers to a package that does not exist.
The attack (slopsquatting):
- Researchers identify the most commonly hallucinated package names by querying AI models thousands of times
- Attackers register these hallucinated names on public package registries (npm, PyPI, RubyGems)
- The registered packages contain malicious code: credential stealers, cryptocurrency miners, reverse shells, supply chain backdoors
- When a developer uses an AI-suggested package name and installs it, they get the malicious package
- The malicious package may include actual functionality (to avoid suspicion) alongside its malicious payload
Real examples:
- AI frequently suggests `python-jwt` instead of `PyJWT` – if `python-jwt` were registered with malicious code, thousands of developers would install it
- AI suggests `flask-cors-handler` instead of `flask-cors` – a plausible-sounding but nonexistent package
- AI suggests `express-rate-limiter` instead of `express-rate-limit` – one character difference
Mitigation workflow:
1. AI suggests a package – DO NOT install immediately
2. Verify the package exists on the official registry (npm, PyPI, etc.)
3. Check download counts – legitimate packages have thousands of weekly downloads or more
4. Verify the publisher – check their other packages and profile
5. Check the package's GitHub repository – ensure it is real and maintained
6. Compare with known alternatives – is this the standard solution?
7. If anything seems off, search for the package name plus "malware" or "malicious"
8. Only then install, and pin to a specific version with hash verification
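Steps 2–6 require checking the registry itself, but the name-similarity red flag can be screened locally before any install. A sketch, assuming a team-maintained list of known-good names; the list and the edit-distance threshold here are illustrative, not a real policy:

```javascript
// Sketch: screen AI-suggested package names against known-good packages.
function editDistance(a, b) {
  // Classic Levenshtein dynamic program
  const dp = Array.from({ length: a.length + 1 }, () => new Array(b.length + 1).fill(0));
  for (let i = 0; i <= a.length; i++) dp[i][0] = i;
  for (let j = 0; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

const knownPackages = ['express-rate-limit', 'flask-cors', 'pyjwt', 'lodash'];

function checkSuggestedName(name) {
  const lower = name.toLowerCase();
  if (knownPackages.includes(lower)) return { verdict: 'known' };
  for (const pkg of knownPackages) {
    if (editDistance(lower, pkg) <= 2) {
      // One or two edits away from a popular package: classic typosquat shape
      return { verdict: 'suspicious', similarTo: pkg };
    }
  }
  return { verdict: 'unknown' }; // still requires full registry verification
}
```

A "suspicious" verdict sends the name back to step 2; "unknown" means the full workflow above still applies.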
AI Tendency to Suggest Custom Crypto/Auth
AI coding assistants have a documented tendency to generate custom implementations of security functionality instead of recommending established libraries.
Example: When asked "How do I encrypt user data in my Node.js application?", AI tools frequently generate code like:
// AI-GENERATED – DO NOT USE
const crypto = require('crypto');
function encrypt(text, key) {
const iv = crypto.randomBytes(16);
const cipher = crypto.createCipheriv('aes-256-cbc', key, iv); // CBC mode, not GCM
let encrypted = cipher.update(text, 'utf8', 'hex');
encrypted += cipher.final('hex');
return iv.toString('hex') + ':' + encrypted; // No authentication tag
}
This code uses AES-256-CBC (unauthenticated – vulnerable to padding-oracle attacks) instead of AES-256-GCM (authenticated). A developer without cryptographic expertise would accept this and deploy a vulnerable system.
What should be suggested:
// CORRECT: use a high-level library that handles the details.
// libsodium's secretbox (tweetnacl offers an equivalent) provides
// authenticated encryption by default.
const sodium = require('libsodium-wrappers');

async function encrypt(plaintext, key) {
  await sodium.ready;
  const nonce = sodium.randombytes_buf(sodium.crypto_secretbox_NONCEBYTES);
  const ciphertext = sodium.crypto_secretbox_easy(plaintext, nonce, key);
  // Authenticated encryption – handles all the details correctly
  return { nonce: sodium.to_base64(nonce), ciphertext: sodium.to_base64(ciphertext) };
}
AI Suggesting Outdated or Deprecated Libraries
AI models have a training data cutoff. They may suggest:
- `request` (Node.js HTTP client) – deprecated since 2020; recommend `undici` or `node-fetch`
- `urllib2` (Python) – a Python 2 library; use `urllib3` or `httpx`
- `log4j` 1.x – end of life since 2015; use Log4j2 or Logback
- `moment.js` – in maintenance mode; recommend `date-fns` or the Temporal API
- old versions of `serialize-javascript` with known RCE vulnerabilities
Mitigation: Always check the package's current status before adopting:
- Is it deprecated or in maintenance mode?
- When was the last release?
- Are there open security advisories?
- What does the community recommend as the current best choice?
6. Approved Component Registries
Organizations should maintain internal registries of approved components to prevent ad-hoc dependency adoption.
Registry Architecture
┌──────────────────────────────────────────────────────┐
│                  APPROVED REGISTRY                   │
│    (Artifactory / Nexus / private npm/PyPI/Maven)    │
├──────────────────────────────────────────────────────┤
│                                                      │
│  Tier 1: Pre-Approved (auto-update permitted)        │
│  ├─ Security frameworks (Spring Security, etc.)      │
│  ├─ Platform crypto libraries                        │
│  ├─ Logging frameworks                               │
│  └─ Validation libraries                             │
│                                                      │
│  Tier 2: Approved (manual update review)             │
│  ├─ Database drivers                                 │
│  ├─ HTTP clients                                     │
│  ├─ Serialization libraries                          │
│  └─ Testing frameworks                               │
│                                                      │
│  Tier 3: Conditional (requires justification)        │
│  ├─ New/emerging libraries                           │
│  ├─ Libraries with complex licenses                  │
│  └─ Libraries with limited community                 │
│                                                      │
│  Blocked: known-vulnerable, deprecated,              │
│           license-incompatible, abandoned            │
│                                                      │
└──────────────────────────────────────────────────────┘
Registry Implementation
JFrog Artifactory / Sonatype Nexus:
- Proxy public registries (npm, PyPI, Maven Central)
- Cache approved packages locally
- Block packages not in the approved list
- Automatically scan proxied packages for vulnerabilities
- Enforce license policies
Private registries:
- Private npm registry (`npm config set registry https://registry.internal.company.com`)
- Private PyPI (devpi, AWS CodeArtifact, Artifactory)
- Private Maven repository (Nexus, Artifactory)
- Private Go module proxy (`GOPROXY=https://proxy.internal.company.com`)
Package lockfile enforcement: Require lockfiles (package-lock.json, poetry.lock, Cargo.lock, go.sum) in version control and verify integrity during CI builds.
7. Component Lifecycle Management
Components are not "set and forget." They require active lifecycle management.
The Four Phases
Phase 1: Adoption
- Security assessment using the evaluation criteria in Section 4
- License review by legal/compliance
- Architecture review for integration patterns
- Documentation of the component's purpose and alternatives
- Addition to the approved registry
Phase 2: Monitoring
- Continuous vulnerability scanning (Dependabot, Snyk, Trivy)
- Track release notes for security-relevant changes
- Monitor the component's health: commit frequency, maintainer activity, open issues
- Subscribe to security advisories for critical components
Phase 3: Deprecation
Triggered when:
- A critical vulnerability is found with no patch available
- The project is abandoned (no commits in 12+ months, unresponsive maintainers)
- A significantly better alternative emerges
- License changes make the component incompatible
- The component's functionality is now available in the platform/framework
Deprecation process:
- Identify all usages across the codebase (`grep`, SCA tools)
- Select and evaluate the replacement component
- Create a migration plan with timeline
- Update the approved registry: move the component to "Deprecated" with replacement guidance
- Set a hard deadline for removal
Phase 4: Replacement
- Execute the migration plan
- Remove the deprecated component from all codebases
- Remove from the approved registry
- Update documentation and training materials
- Verify no remaining references in any codebase
Emergency Response
When a critical zero-day vulnerability is discovered in a component (like Log4Shell):
- Immediate: Determine exposure – which applications use the affected component and version?
- Within 4 hours: Apply available mitigations (configuration changes, WAF rules)
- Within 24 hours: Upgrade to patched version or implement compensating controls
- Within 72 hours: Verify all instances are patched through automated scanning
- Post-incident: Review how the component was adopted, whether monitoring was adequate, and whether the response was fast enough
8. NIST SSDF PW.4 – Reuse Existing Well-Secured Software
NIST Special Publication 800-218, the Secure Software Development Framework (SSDF), includes practice PW.4:
Reuse existing, well-secured software when feasible instead of duplicating functionality, which may introduce additional security risks.
PW.4 Tasks
PW.4.1: Acquire and maintain well-secured software components (e.g., software libraries, modules, middleware, frameworks) from commercial, open-source, and other third-party developers for use by the organization's software.
PW.4.2: Create and maintain a list of all third-party software components used by the organization's software, including name, version, license, and source.
PW.4.3: Verify the provenance of all acquired components (e.g., validate digital signatures, checksums).
PW.4.4: Evaluate the security posture of acquired components before use and monitor after adoption.
SBOM Requirements
Software Bill of Materials (SBOM) generation is increasingly mandated:
- Executive Order 14028 (US): Requires SBOMs for software sold to the federal government
- EU Cyber Resilience Act: Requires SBOMs for products with digital elements sold in the EU
- NTIA Minimum Elements: Defines the minimum fields an SBOM must contain
SBOM formats:
- SPDX (ISO/IEC 5962:2021): Industry standard, supports multiple serialization formats
- CycloneDX (OWASP): Designed for security use cases, includes vulnerability tracking
SBOM generation tools:
- `syft` (Anchore): generates SBOMs from container images and filesystems
- `cdxgen` (OWASP): CycloneDX SBOM generator for multiple ecosystems
- `spdx-sbom-generator`: official SPDX tooling
- `npm sbom` / `pip-licenses`: language-specific tools
9. Practical Decision Framework
When a developer needs a security-related component, follow this decision tree:
Does the platform/framework provide this functionality?
├─ YES → Use it. Stop here.
│
└─ NO → Is there a well-established, vetted library?
   ├─ YES → Is it in our approved registry?
   │   ├─ YES → Use it. Stop here.
   │   └─ NO → Submit for evaluation. Wait for approval.
   │
   └─ NO → Is this security-critical (auth, crypto, etc.)?
       ├─ YES → STOP. Consult the security team.
       │        Do not build custom. Find an alternative.
       │
       └─ NO → Build custom with:
               - Full security review
               - Comprehensive testing
               - Documentation
               - Plan to replace when a vetted
                 alternative becomes available
Red Flags That Should Trigger Investigation
- AI suggests a package you have never heard of
- A package has fewer than 1,000 weekly downloads
- A package was published within the last 6 months with no clear provenance
- A package has a single maintainer with no other public work
- A package's name is very similar to a well-known package (typosquatting)
- A package does not have a linked GitHub/GitLab repository
- AI generates custom cryptographic code instead of using a library
- AI suggests implementing your own authentication/session management
- A component requires disabling security features to integrate
10. Summary and Key Takeaways
- CIS 16.11 is explicit: Use vetted, platform-provided security components for identity management, encryption, auditing, and logging. This is not a suggestion.
- Never build custom crypto, auth, or session management: The risk-reward ratio is catastrophically unfavorable. Established libraries have thousands of person-years of security review. Your custom implementation has a few person-days.
- AI hallucinates package names at 19.7%: Nearly one in five AI package suggestions refers to a nonexistent package. Slopsquatting attacks exploit this by registering malicious packages under hallucinated names.
- AI defaults to custom implementations: AI coding tools frequently generate custom security code instead of recommending established libraries. Always ask: "Is there an established library for this?"
- Component evaluation is a structured process: Security track record, maintenance activity, license compatibility, community size, and vulnerability history all matter.
- Lifecycle management is mandatory: Adoption, monitoring, deprecation, and replacement. Components are not "set and forget."
- SBOMs are increasingly required: Executive orders and regulations mandate software bills of materials. Integrate SBOM generation into your build pipeline now.
Lab Exercise
Exercise 3.3: Component Security Assessment
Part A: Hallucination Detection (30 minutes)
You will be given a list of 20 package names "recommended by AI" across npm, PyPI, and Maven Central. For each:
- Determine if the package is real or hallucinated
- If real, assess its security posture using the evaluation criteria
- If hallucinated, find the correct/intended package
- For 3 of the hallucinated names, check if they have been registered with suspicious content
Part B: Crypto Library Audit (30 minutes)
Review three code samples that use cryptographic operations:
- Identify which use vetted libraries correctly
- Identify which use vetted libraries incorrectly (wrong mode, bad parameters)
- Identify which use custom cryptographic implementations
- Rewrite all three using the recommended vetted library approach
Part C: Component Lifecycle Plan (30 minutes)
Given a simulated dependency tree with:
- 2 deprecated libraries
- 1 library with a critical CVE
- 1 library with a license change from MIT to AGPL
- 1 library that has been abandoned (no commits in 18 months)
Create a lifecycle management plan addressing each, including:
- Immediate actions
- Replacement candidates (evaluated against the criteria in Section 4)
- Migration timeline
- Verification approach
Deliverable: Hallucination analysis table, crypto audit report, lifecycle management plan.
Time: 1.5 hours total
Module 3.3 Complete. Next: Module 3.4 – Secure Code Review
Study Guide
Key Takeaways
- CIS 16.11 is absolute – "Use only standardized, currently accepted, and extensively reviewed encryption algorithms." No exceptions for custom or internally tested algorithms.
- Never build custom auth, crypto, or session management – Established frameworks have thousands of person-years of security review; your custom implementation has a few person-days.
- AI hallucinates package names at 19.7% – Nearly one in five AI package suggestions is nonexistent; slopsquatting attacks exploit this by registering malicious packages under hallucinated names.
- AI defaults to custom implementations – AI coding assistants frequently generate custom security code (e.g., AES-CBC without authentication) instead of recommending established libraries like libsodium.
- Component evaluation is structured – Green/Yellow/Red criteria across maintenance, security track record, vulnerability history, community size, license, and downloads.
- Lifecycle management has four phases – Adoption, Monitoring, Deprecation, Replacement; components are not "set and forget."
- SBOMs are increasingly mandated – Executive Order 14028, EU Cyber Resilience Act; CycloneDX (security-focused) and SPDX (ISO standard) are the two formats.
Important Definitions
| Term | Definition |
|---|---|
| CIS 16.11 | Control mandating use of vetted, standardized security modules for identity, encryption, auditing, and logging |
| Slopsquatting | Supply chain attack exploiting AI-hallucinated package names registered with malicious code on public registries |
| SBOM | Software Bill of Materials – inventory of all components with name, version, license, and source |
| CycloneDX | OWASP SBOM format designed for security use cases including vulnerability tracking |
| NIST SSDF PW.4 | Practice: "Reuse existing, well-secured software when feasible instead of duplicating functionality" |
| Approved Component Registry | Tiered internal registry: Tier 1 (pre-approved, auto-update), Tier 2 (approved, manual review), Tier 3 (conditional) |
| Transitive Dependency | A dependency of your dependency; must be managed because vulnerabilities propagate |
| SCA | Software Composition Analysis β automated scanning of dependencies for known vulnerabilities |
Quick Reference
- Framework/Process: CIS 16.11; NIST SSDF PW.4; three-tier approved registry; four-phase component lifecycle; decision tree for component selection
- Key Numbers: 19.7% AI package hallucination rate; zero tolerance for custom crypto; 12+ months without commits = Red flag; SCA must run in CI on every PR
- Common Pitfalls: Installing AI-suggested packages without verifying they exist; building custom JWT validation instead of using vetted libraries; using deprecated libraries suggested by AI (e.g., `request` in Node.js); no lockfile enforcement in CI
Review Questions
- Why does the math of security review favor established frameworks over custom implementations for authentication?
- What is the complete mitigation workflow when AI suggests a package name you have never heard of?
- How does the four-phase component lifecycle management process handle an abandoned library with no commits in 18 months?
- When is it acceptable to build a custom security component, according to the decision framework?
- How do SBOMs and SCA tools work together to provide supply chain visibility and vulnerability management?