7.6 — Software Decommissioning
Learning Objectives
- ✓ Identify when software should be decommissioned and justify the decision with documented criteria.
- ✓ Plan a complete decommissioning project covering dependencies, data, credentials, infrastructure, and communication.
- ✓ Execute data management during decommissioning, including migration, archival, and destruction per regulatory requirements.
- ✓ Perform credential and access cleanup to eliminate residual attack surface.
- ✓ Decommission AI models and AI development tools with appropriate data destruction and inventory updates.
- ✓ Verify that decommissioning is complete through systematic post-decommissioning checks.
1. Why Decommissioning Matters
Software decommissioning is the forgotten phase of the software development lifecycle. Organizations invest heavily in designing, building, testing, deploying, and operating software but routinely fail to plan for its retirement. The result is a growing inventory of unmaintained software that becomes an increasingly attractive target for attackers.
Unmaintained software becomes a liability. Software that is no longer actively maintained does not receive security patches. Vulnerabilities discovered after maintenance ends remain permanently unpatched. The software’s dependencies become increasingly outdated, accumulating known vulnerabilities that will never be remediated. Every day an unmaintained application runs in production, the attack surface grows.
Zombie applications consume resources. Decommissioned-in-name-but-not-in-practice applications continue to consume compute, storage, network, and licensing resources. They appear in vulnerability scans, requiring triage time to confirm they are “not our problem.” They hold credentials, service accounts, and network access that expand the blast radius of any compromise in the environment.
Compliance exposure. Regulations require organizations to know what software they operate and what data it processes. Software that nobody owns, nobody maintains, and nobody can authoritatively describe is a compliance risk. Auditors ask “who is responsible for this system?” and “what data does it hold?” and zombie applications have no good answers.
CSSLP Domain 7 Alignment
The (ISC)² CSSLP (Certified Secure Software Lifecycle Professional) certification includes software end-of-life in Domain 7: Software Deployment, Operations & Maintenance. The CSSLP recognizes that the secure lifecycle does not end at deployment — it ends when the software is properly retired.
Key CSSLP requirements for end-of-life:
- Define end-of-life criteria during initial software planning.
- Create a decommissioning plan before the software reaches end-of-life.
- Execute data migration, archival, and destruction per policy.
- Remove all access, credentials, and infrastructure.
- Verify complete decommissioning.
CIS Control 16.1
CIS 16.1 (Establish and Maintain a Secure Application Development Process) implicitly includes decommissioning. A secure application development process must address the entire lifecycle, including retirement. An organization that cannot securely retire software does not have a complete secure development process.
2. When to Decommission
The decision to decommission software should be proactive, not reactive. Waiting until a system fails or is breached is the most expensive and risky approach.
Decommissioning Triggers
End of support (vendor or internal): When a vendor announces end of support for a product (e.g., Windows Server end of life, Java version end of support, SaaS vendor sunsetting a product), the organization must either migrate to a supported version or decommission the software. Running unsupported software is explicitly called out by multiple compliance frameworks as a risk that must be documented and accepted if it continues.
For internally developed software, end of support means the organization has decided to stop investing engineering time in maintaining the software. This decision should be explicit and documented, not the gradual result of team attrition and priority shifts.
Replaced by successor system: When new software is deployed to replace existing functionality, the old system must be decommissioned. The most common failure mode is “we deployed the new system but never turned off the old one.” This results in two systems performing the same function, with the old system receiving no maintenance while remaining accessible.
Business function no longer needed: Business requirements change. Products are discontinued. Markets are exited. When the business function a software system supports is no longer needed, the system must be decommissioned.
Cost of maintenance exceeds value: Every system has a maintenance cost (infrastructure, licensing, developer time for patches and updates, security scanning, compliance audit). When this cost exceeds the value the system provides, decommissioning is the rational choice.
Security vulnerabilities cannot be adequately remediated: Some systems reach a state where security vulnerabilities cannot be fixed without a fundamental rewrite. The technology stack may be too old, the codebase too fragile, or the original developers long gone. When the security risk of continued operation exceeds the organization’s risk tolerance and cannot be mitigated to an acceptable level, decommissioning is required.
Compliance requirements can no longer be met: Regulatory changes may impose requirements that a legacy system cannot satisfy (e.g., encryption standards, access control requirements, audit logging capabilities). If the system cannot be brought into compliance and the cost of compliance modifications exceeds the value of the system, decommission.
3. Decommissioning Planning
Decommissioning is a project, not a task. It requires the same planning discipline as any other engineering project.
Dependency Inventory
Before anything else, map every dependency on the software being decommissioned.
Upstream consumers (who depends on this system):
- Other applications that call this system’s APIs.
- Batch processes that read from this system’s databases.
- Users who access this system directly.
- Reporting systems that query this system’s data.
- Monitoring systems that watch this system.
- Third parties that integrate with this system.
Downstream dependencies (what does this system depend on):
- Databases and data stores.
- Message queues and event streams.
- Authentication and authorization services.
- External APIs and third-party services.
- Shared libraries and common services.
- Infrastructure (servers, containers, load balancers, DNS, certificates).
Data flows:
- What data enters this system? From where?
- What data exits this system? To where?
- What data is stored? Where? In what format?
- What data is processed? For what purpose?
Use application dependency mapping tools (ServiceNow, Dynatrace, Datadog Service Map) to supplement manual discovery. Manual discovery alone misses dependencies that are not documented.
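As a minimal illustration of log-based consumer discovery, counting distinct clients in the system's access logs can surface upstream consumers that documentation missed. This is a sketch; the log format (combined log format) and the addresses are hypothetical:

```python
import re
from collections import Counter

# Hypothetical combined-log-format lines; in practice these come from the
# system's web server or API gateway access logs.
LOG_LINE = re.compile(r'^(?P<client>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)')

def upstream_consumers(log_lines):
    """Count requests per client address to surface consumers that the
    dependency documentation may have missed."""
    counts = Counter()
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m:
            counts[m.group("client")] += 1
    return counts.most_common()

sample = [
    '10.0.4.7 - - [01/Mar/2025:10:00:00 +0000] "GET /api/orders HTTP/1.1" 200 512',
    '10.0.4.7 - - [01/Mar/2025:10:00:05 +0000] "GET /api/orders HTTP/1.1" 200 512',
    '10.0.9.2 - - [01/Mar/2025:10:01:00 +0000] "POST /api/reports HTTP/1.1" 201 64',
]
print(upstream_consumers(sample))  # [('10.0.4.7', 2), ('10.0.9.2', 1)]
```

Any client that appears in the logs but not in the dependency inventory is a missed upstream consumer that must be contacted before cutoff.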
Stakeholder Identification
Identify every stakeholder who must be involved in or informed about the decommissioning:
- System owner: Accountable for the decommissioning decision and resource allocation.
- Development team: Responsible for technical decommissioning activities.
- Operations team: Responsible for infrastructure decommissioning.
- Security team: Responsible for credential cleanup, access revocation, and security verification.
- Data governance: Responsible for data classification, migration, archival, and destruction decisions.
- Compliance/Legal: Responsible for regulatory retention requirements and legal hold obligations.
- Business stakeholders: Affected business units, product managers, customer-facing teams.
- Customer support: May receive queries about discontinued functionality.
- External partners/customers: If the system provided external-facing services.
Timeline and Milestones
A typical decommissioning project follows this timeline:
| Phase | Duration | Activities |
|---|---|---|
| Planning | 2-4 weeks | Dependency mapping, stakeholder identification, data inventory, migration planning |
| Notification | 4-12 weeks before cutoff | Stakeholder notification, deprecation warnings, migration guidance |
| Migration | 4-12 weeks | Data migration, consumer migration, parallel running |
| Soft decommission | 2-4 weeks | Read-only mode, reduced traffic, monitoring for stragglers |
| Hard decommission | 1 week | Service shutdown, access revocation, infrastructure removal |
| Cleanup | 2-4 weeks | Data destruction, credential cleanup, infrastructure teardown, verification |
| Verification | 1 week | Post-decommissioning verification checklist |
Rollback Plan
Despite best efforts, decommissioning may reveal missed dependencies or unforeseen impacts. A rollback plan must exist:
- Define the rollback window (typically 30-90 days after hard decommission).
- Maintain the ability to restore the system from backup during the rollback window.
- Keep infrastructure configuration (IaC templates) available for re-provisioning.
- Do not destroy data or revoke credentials until the rollback window expires.
- Define criteria for triggering a rollback (e.g., critical business process affected).
4. Data Management During Decommissioning
Data management is the most complex and risk-laden aspect of decommissioning. The organization must decide, for every data element, whether to migrate, archive, or destroy it — and then execute that decision correctly.
Data Classification Review
Before making data management decisions, classify all data held by the system:
- What types of data does the system hold? Personal data (PII), financial data, health data (PHI), intellectual property, business-critical data, operational data, temporary/cache data.
- What is the sensitivity level of each data type? Restricted, confidential, internal, public.
- What regulatory requirements apply? PCI-DSS for cardholder data, HIPAA for health data, GDPR for EU personal data, SOX for financial records.
- Is any data subject to legal hold? Litigation hold, regulatory investigation, e-discovery obligations.
Data Migration
Data that is needed by successor systems or must be preserved for business purposes must be migrated.
Migration requirements:
- Completeness: Verify all required data is migrated. Use record counts, checksums, and spot checks.
- Integrity: Verify data is not corrupted during migration. Compare source and destination records.
- Confidentiality: Protect data in transit during migration. Use encrypted transfer channels. Do not stage data in temporary locations without appropriate access controls.
- Validation: Run validation queries against the destination to confirm data accuracy.
- Reconciliation: Generate a reconciliation report comparing source and destination counts, sums, and sample records.
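The reconciliation step above can be sketched as a count plus an order-independent content fingerprint (the row data and shape are illustrative; real reconciliation would also compare column sums and sampled records):

```python
import hashlib

def table_fingerprint(rows):
    """Order-independent fingerprint: hash each row, XOR the digests so the
    result does not depend on row order."""
    acc = 0
    for row in rows:
        digest = hashlib.sha256(repr(row).encode()).digest()
        acc ^= int.from_bytes(digest, "big")
    return len(rows), acc

def reconcile(source_rows, dest_rows):
    src_count, src_hash = table_fingerprint(source_rows)
    dst_count, dst_hash = table_fingerprint(dest_rows)
    return {
        "source_count": src_count,
        "dest_count": dst_count,
        "counts_match": src_count == dst_count,
        "content_match": src_hash == dst_hash,
    }

src = [("A-1001", 49.99), ("A-1002", 12.50)]
dst = [("A-1002", 12.50), ("A-1001", 49.99)]  # same rows, different order
print(reconcile(src, dst))  # counts_match and content_match both True
```

A mismatch in either field blocks sign-off on the migration until the discrepancy is explained.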
Data Archival
Data that is not needed for active operations but must be retained for regulatory or business purposes must be archived.
Regulatory retention requirements:
| Regulation | Retention Requirement |
|---|---|
| PCI-DSS | 1 year for audit logs, varies for transaction data |
| SOX | 7 years for financial records |
| HIPAA | 6 years for security-related records, varies by state for medical records |
| GDPR | As long as necessary for the stated purpose (minimize) |
| SEC Rule 17a-4 | 6 years for securities records |
| IRS requirements | 3-7 years depending on record type |
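The retention periods above can be turned into a simple destroy-after calculator. This is a sketch: the year values are illustrative simplifications of the table, and real obligations vary by record type and jurisdiction, so confirm with compliance before scheduling destruction:

```python
from datetime import date

# Illustrative retention periods in years, simplified from the table above;
# actual obligations depend on record type and jurisdiction.
RETENTION_YEARS = {
    "SOX": 7,
    "HIPAA": 6,
    "SEC 17a-4": 6,
    "PCI-DSS audit logs": 1,
}

def earliest_destruction_date(archived_on: date, regulations) -> date:
    """Data may only be destroyed once the LONGEST applicable retention
    period has expired."""
    years = max(RETENTION_YEARS[r] for r in regulations)
    return archived_on.replace(year=archived_on.year + years)

print(earliest_destruction_date(date(2025, 3, 1), ["SOX", "HIPAA"]))  # 2032-03-01
```

Taking the maximum across all applicable regulations is the key point: a record covered by both SOX and HIPAA is held for seven years, not six.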
Archival requirements:
- Immutable storage: Archived data must be stored in a way that prevents modification or deletion during the retention period (WORM storage, S3 Object Lock, Azure Immutable Blob Storage).
- Encryption: Archived data must be encrypted at rest. Ensure encryption keys are managed separately and will remain available for the duration of the retention period.
- Access control: Restrict access to archived data to authorized personnel only. Access for regulatory or legal purposes should require documented approval.
- Retrievability: Ensure archived data can be retrieved if needed (regulatory audit, legal discovery, business need). Test retrieval procedures.
- Format preservation: Ensure archived data remains readable. If the data format is proprietary, export to an open format before archival.
Data Destruction
Data that is not needed for migration, archival, or retention must be destroyed. Data destruction is a security-critical activity.
NIST SP 800-88 Guidelines:
NIST SP 800-88 Rev. 1 (Guidelines for Media Sanitization) defines three levels of sanitization:
| Level | Method | Use Case |
|---|---|---|
| Clear | Logical techniques (overwrite, reset to factory) | Non-sensitive data on reusable media |
| Purge | Physical or logical techniques rendering data unrecoverable | Sensitive data on reusable media |
| Destroy | Physical destruction (shredding, melting, incinerating) | Highest sensitivity data, media leaving organizational control |
Cryptographic erasure: For encrypted data, destroying the encryption keys renders the data unrecoverable without physically destroying the media. This is effective and efficient for cloud and virtualized environments where physical media destruction is not feasible.
Requirements for cryptographic erasure:
- Verify the encryption is strong (AES-256 or equivalent).
- Destroy all copies of the encryption key, including backups and key escrow.
- Verify no key recovery mechanism can restore the keys.
- Document the key destruction for audit purposes.
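The principle behind cryptographic erasure can be shown with a toy one-time pad using only the standard library (production systems would use AES-256 via a KMS or a library such as `cryptography`; the principle is the same — once every copy of the key is destroyed, the ciphertext is unrecoverable):

```python
import secrets

# Toy illustration only: a one-time pad, not a production cipher.
def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR plaintext with an equal-length random key. Applying the same
    function with the same key decrypts."""
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

data = b"customer record 42"
key = secrets.token_bytes(len(data))      # random key, the only copy
ciphertext = encrypt(data, key)

assert encrypt(ciphertext, key) == data   # recoverable while the key exists

key = None  # "destroy" the key: in practice, delete it from the KMS, key
            # escrow, and every backup, then document the destruction
# With the key gone, the ciphertext alone reveals nothing about the plaintext.
```

The hard part in practice is not the cryptography but the inventory: finding and destroying every copy of the key, including the ones in backups and escrow.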
Database destruction:
- Drop databases after verifying data migration/archival is complete.
- Overwrite database files on disk (not just deleting the database, which leaves data recoverable).
- Verify destruction by attempting to recover data from the storage media.
Backup destruction: This is frequently overlooked. Destroying the production database while retaining backups that contain the same data does not achieve data destruction. All backups must be addressed:
- Local backups on production servers.
- Off-site backup copies.
- Cloud-based backups (S3, Azure Blob, GCS).
- Backup tapes (physical destruction may be required).
- Replication copies (disaster recovery sites).
- Snapshots (VM snapshots, database snapshots, storage snapshots).
Certificate of destruction: For regulated data, generate a certificate of destruction that documents:
- What data was destroyed.
- When it was destroyed.
- How it was destroyed (method and standard).
- Who performed the destruction.
- Who verified the destruction.
- The regulatory requirement satisfied.
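A certificate of destruction can be emitted as a structured record covering the fields above. The field names and example values here are hypothetical:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical record structure mirroring the certificate fields listed above.
@dataclass
class DestructionCertificate:
    data_description: str   # what was destroyed
    destroyed_on: str       # when
    method: str             # how (method and standard)
    performed_by: str       # who performed it
    verified_by: str        # who verified it
    regulatory_basis: str   # the requirement satisfied

cert = DestructionCertificate(
    data_description="customer_portal production DB plus all backups and snapshots",
    destroyed_on="2025-06-30",
    method="Purge (cryptographic erasure per NIST SP 800-88 Rev. 1)",
    performed_by="ops-team",
    verified_by="security-team",
    regulatory_basis="GDPR Art. 5(1)(e) storage limitation",
)
print(json.dumps(asdict(cert), indent=2))
```

Storing the certificate as structured data (rather than free text) makes it searchable during later audits.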
5. Credential and Access Cleanup
Every credential, account, and access path associated with the decommissioned system must be revoked. Residual credentials are the number one post-decommissioning security risk.
Service Account Revocation
- Identify all service accounts used by the system (database accounts, API service accounts, message queue accounts, cloud IAM roles).
- Disable each service account (do not delete immediately — keep disabled for the rollback window).
- After the rollback window expires, delete the service accounts.
- Verify no other systems are using the same service accounts (shared service accounts are a common antipattern that makes decommissioning difficult).
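The disable-then-delete sequence above can be expressed as a simple date-gated state check (the 90-day window is an assumption; use whatever rollback window the plan defines):

```python
from datetime import date, timedelta

ROLLBACK_WINDOW = timedelta(days=90)  # illustrative; match your rollback plan

def account_action(hard_decommission: date, today: date) -> str:
    """Disable service accounts at hard decommission; delete them only once
    the rollback window has expired."""
    if today < hard_decommission:
        return "active"
    if today < hard_decommission + ROLLBACK_WINDOW:
        return "disabled"
    return "delete"

cutoff = date(2025, 6, 1)
print(account_action(cutoff, date(2025, 5, 20)))  # active
print(account_action(cutoff, date(2025, 7, 1)))   # disabled
print(account_action(cutoff, date(2025, 9, 15)))  # delete
```

Encoding the window as data rather than tribal knowledge prevents the common failure of deleting accounts prematurely or, worse, never deleting them at all.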
Secret Rotation
- Any shared secret that was known to the decommissioned system must be rotated. This includes:
- Database passwords shared with other systems.
- Encryption keys used by multiple services.
- API keys shared across applications.
- Certificates used for mutual TLS with other services.
- Rotation ensures that even if the decommissioned system’s secrets are compromised (e.g., from old backups or configuration files), they cannot be used to access other systems.
API Key and Token Revocation
- Revoke all API keys issued to the system.
- Revoke all OAuth client credentials.
- Revoke all JWT signing keys (and rotate the issuer’s signing key if the decommissioned system had access to it).
- Remove the system from any API gateway or service mesh configurations.
SSH Key and Certificate Removal
- Remove the system’s SSH keys from all authorized_keys files on other systems.
- Revoke the system’s TLS certificates.
- Remove the system’s client certificates from certificate stores.
- Update certificate revocation lists (CRLs) or OCSP responders.
Network Access Cleanup
- Update firewall rules to close ports and remove rules that allowed traffic to/from the decommissioned system.
- Remove DNS records (A, AAAA, CNAME, SRV) for the system. Consider pointing DNS records to a tombstone page for a period before deletion, to catch any clients still attempting to reach the system.
- Update load balancer configurations to remove the system from pools.
- Remove the system from service discovery registries (Consul, Eureka, Kubernetes services).
- Remove VPN or network peering configurations specific to the system.
6. Infrastructure Decommissioning
Compute Resources
- Servers/VMs: Power off, then decommission after the rollback window. For cloud VMs, terminate the instances and verify no EBS volumes or other attached storage remain.
- Containers: Remove container images from registries (after the rollback window). Delete Kubernetes deployments, services, config maps, and secrets. Remove Helm releases.
- Serverless functions: Delete Lambda/Cloud Functions/Azure Functions. Remove triggers (API Gateway routes, event subscriptions, queue bindings).
Monitoring and Alerting Removal
- Remove the system from monitoring dashboards and alert configurations.
- Remove synthetic monitors and health checks.
- Remove log collection agents and log forwarding configurations.
- Remove APM instrumentation.
- Remove the system from on-call rotation coverage.
Failure to remove monitoring produces phantom alerts — alerts for a system that no longer exists. These contribute to alert fatigue and waste on-call responders’ time.
Backup Schedule Removal
- Remove the system from backup schedules.
- Destroy existing backups per the data destruction plan (after the retention period expires, if applicable).
- Remove the system from disaster recovery plans and runbooks.
Infrastructure as Code Cleanup
- Remove Terraform/CloudFormation/Pulumi resources for the decommissioned system.
- Run `terraform plan` / `pulumi preview` to verify no orphaned resources remain.
- Remove the system’s IaC module from the repository (archive to a separate repository if needed for reference).
- Verify no other IaC modules reference the decommissioned system’s resources.
Orphaned Resource Check
After infrastructure decommissioning, verify no orphaned resources remain:
- Storage: Unattached disks, empty S3 buckets, orphaned file shares.
- Networking: Unused security groups, unused VPCs/subnets, dangling elastic IPs, unused NAT gateways.
- IAM: Unused roles, unused policies, unused groups.
- Certificates: Unused ACM/Let’s Encrypt certificates.
- Secrets: Unused secrets in Secrets Manager/Vault/Parameter Store.
Cloud cost optimization tools (AWS Cost Explorer, Azure Cost Management, GCP Billing) can help identify orphaned resources that are still accruing charges.
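As one example of an orphaned-resource check, unattached block storage volumes can be filtered out of an inventory. The filtering logic below is a sketch over plain dictionaries; in practice the input would come from a cloud API call such as boto3's `ec2.describe_volumes()`:

```python
def orphaned_volumes(volumes):
    """Unattached ('available') volumes are cleanup candidates: they accrue
    storage charges while attached to nothing."""
    return [v["VolumeId"] for v in volumes if v.get("State") == "available"]

# Hypothetical inventory, shaped like the boto3 describe_volumes response.
inventory = [
    {"VolumeId": "vol-111", "State": "in-use"},
    {"VolumeId": "vol-222", "State": "available"},   # orphan
    {"VolumeId": "vol-333", "State": "available"},   # orphan
]
print(orphaned_volumes(inventory))  # ['vol-222', 'vol-333']
```

The same filter-by-state pattern applies to unused security groups, elastic IPs, and secrets.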
7. Code and Repository Management
Repository Archival
Repositories for decommissioned software should be archived, not deleted.
Why archive instead of delete:
- Audit trail: The code history may be needed for forensic investigation, legal discovery, or compliance audit.
- Knowledge preservation: Future developers may need to understand how a business process was previously implemented.
- License compliance: Open-source license obligations may require preserving attribution and source availability.
Archive procedure:
- Set the repository to read-only/archived status (GitHub “Archive repository” feature, GitLab “Archive project”).
- Add a README notice indicating the repository is archived, the date of archival, and the reason.
- Remove the repository from CI/CD pipeline triggers.
- Remove branch protection rules (no longer needed for an archived repository).
- Verify the repository does not contain secrets (scan with truffleHog, git-secrets, or Gitleaks before archiving).
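The pre-archive secret scan can be approximated with pattern matching. This is a minimal sketch with two well-known patterns; real scans should use a dedicated tool (Gitleaks, truffleHog, git-secrets) with a full ruleset:

```python
import re

# Two illustrative detectors; production scanners ship hundreds of rules.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text):
    """Return the names of any secret patterns found in the text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

print(scan("aws_key = 'AKIAABCDEFGHIJKLMNOP'"))  # ['AWS access key ID']
print(scan("nothing to see here"))               # []
```

Any hit must be remediated (rotate the secret, rewrite history if policy requires) before the repository is frozen in its archived state.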
CI/CD Pipeline Removal
- Remove or disable CI/CD pipeline configurations for the decommissioned system.
- Remove the system from deployment targets.
- Remove pipeline secrets and variables.
- Remove webhook integrations.
- Remove artifact storage (container images, build artifacts) after the rollback window.
Documentation Update
- Mark the system as decommissioned in all documentation systems (Confluence, internal wikis, architecture documents).
- Update architecture diagrams to remove the system.
- Update service catalogs and CMDB.
- Preserve the final architecture documentation with the archived repository.
- Update runbooks and playbooks that referenced the system.
Release Artifact Preservation
Preserve final release artifacts (binaries, container images, deployment packages) for a defined retention period:
- Minimum: duration of the rollback window.
- Recommended: 1 year after decommissioning.
- Required: per regulatory requirements for auditable systems.
8. Dependency Sunset
When the decommissioned system provided services consumed by other systems (APIs, data feeds, shared libraries), those consumers must be migrated.
Deprecation Communication
Deprecation notice (6-12 months before decommission):
- Announce the system will be decommissioned.
- Provide the planned decommission date.
- Describe the successor system (if applicable) and migration path.
- Offer migration support and resources.
Deprecation headers (for APIs): Add HTTP deprecation headers to API responses:
Deprecation: true
Sunset: Sat, 19 Jun 2027 00:00:00 GMT
Link: <https://docs.example.com/migration-guide>; rel="successor-version"
These headers allow automated tooling to detect and alert on deprecated API usage. The Sunset header is standardized in RFC 8594; the Deprecation header originated in draft-dalal-deprecation-header.
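A small helper can build these headers with a correctly formatted HTTP-date for the Sunset value (the URL is the example one from above; the function name is our own):

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def deprecation_headers(sunset: datetime, guide_url: str) -> dict:
    """Build deprecation response headers; Sunset must be an HTTP-date
    (IMF-fixdate), which format_datetime(usegmt=True) produces."""
    return {
        "Deprecation": "true",
        "Sunset": format_datetime(sunset, usegmt=True),
        "Link": f'<{guide_url}>; rel="successor-version"',
    }

hdrs = deprecation_headers(
    datetime(2027, 6, 19, tzinfo=timezone.utc),
    "https://docs.example.com/migration-guide",
)
print(hdrs["Sunset"])  # Sat, 19 Jun 2027 00:00:00 GMT
```

Generating the date rather than hand-writing it avoids the easy mistake of a malformed HTTP-date that clients silently ignore.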
Gradual Traffic Reduction
For high-traffic services:
- Announce deprecation and publish the migration guide.
- Monitor consumer migration progress. Identify consumers who are not migrating and reach out directly.
- Reduce availability gradually:
- Remove from documentation and discovery services.
- Return deprecation warnings in responses.
- Reduce SLA commitments.
- Tighten rate limits progressively.
- Final cutoff with monitoring for any remaining consumers.
- Grace period with error responses pointing to the successor system.
- Complete shutdown after the grace period.
API Version Deprecation
For versioned APIs:
- Mark the old version as deprecated in API documentation.
- Set a sunset date.
- Monitor usage of the deprecated version.
- Provide migration guides and tooling (code mod scripts, compatibility layers).
- Add deprecation warnings to responses.
- Shut down the old version after the sunset date.
9. AI Model and Tool Decommissioning
AI systems introduce unique decommissioning requirements that go beyond traditional software. The data used to train models, the model weights themselves, and the tools used during development all require careful retirement procedures.
AI Model Lifecycle
AI models follow a lifecycle that parallels traditional software but includes additional phases:
Training → Validation → Deployment → Monitoring → Drift Detection → Retraining → Retirement
Model Retirement Triggers
| Trigger | Description | Example |
|---|---|---|
| Accuracy degradation | Model performance drops below acceptable thresholds | Classification accuracy drops from 95% to 80% due to data distribution changes |
| Concept drift | The relationship between inputs and outputs changes | Customer behavior patterns shift post-pandemic, making pre-pandemic models unreliable |
| Bias detection | Model exhibits unacceptable bias in production | Hiring model shows demographic bias not detected in training |
| Newer model available | A superior model is available | GPT-4o replaced by a more capable model for the same use case |
| Regulatory change | New regulations prohibit the model’s use or require changes the model cannot support | EU AI Act reclassifies the use case as high-risk, requiring transparency the model cannot provide |
| Cost/benefit | Maintaining the model costs more than the value it provides | Model requires expensive GPU infrastructure for diminishing returns |
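The accuracy-degradation trigger from the table can be operationalized as a simple threshold check. The threshold here is an assumption; in practice it would come from the model's documented SLOs:

```python
def retirement_check(baseline_acc, current_acc, max_drop=0.05):
    """Flag the model as a retirement candidate when production accuracy
    falls more than max_drop below the validated baseline (threshold is an
    illustrative assumption)."""
    drop = baseline_acc - current_acc
    return {
        "accuracy_drop": round(drop, 3),
        "retire_candidate": drop > max_drop,
    }

print(retirement_check(0.95, 0.80))  # drop 0.15 -> retire_candidate True
print(retirement_check(0.95, 0.93))  # drop 0.02 -> retire_candidate False
```

Automating this check turns "the model seems worse" into a documented, auditable retirement trigger.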
Training Data Destruction
When an AI model is decommissioned, the training data must be addressed:
- Data classification: What data was used to train the model? Did it contain PII, proprietary information, or licensed content?
- Retention requirements: Are there regulatory or contractual requirements to retain or destroy the training data?
- Destruction method: Apply NIST SP 800-88 guidelines based on data sensitivity.
- GDPR considerations: If training data contained EU personal data, GDPR’s right to erasure may require destruction. This is complicated when personal data has been embedded into model weights (the “machine unlearning” problem).
Model Weight Destruction
Model weights (the parameters learned during training) are intellectual property and may encode information from the training data. When a model is retired:
- Delete model weight files from all locations (model registries, deployment systems, backup storage).
- Verify deletion — model weights may be large files stored across multiple locations.
- Consider whether the model weights could be used to extract training data (model inversion attacks). If the training data was sensitive, model weight destruction is a security requirement, not just cleanup.
- Document the destruction for audit purposes.
AI Tool Removal
When an AI development tool is decommissioned (e.g., switching from one AI coding assistant to another, or discontinuing an AI tool):
- Uninstall: Remove the tool from all developer machines, CI/CD systems, and IDE configurations.
- Revoke API keys: Revoke all API keys and authentication tokens used by the tool.
- Audit data shared: Determine what code, data, and context was shared with the AI tool’s API during its use. If the tool’s provider retains this data, evaluate the risk and consider requesting deletion under their data processing agreement.
- Configuration cleanup: Remove tool configuration files, cached data, and local model files.
- License termination: Cancel subscriptions and terminate licensing agreements.
- Policy update: Update the organization’s approved AI tools list to remove the decommissioned tool and note the decommission date.
Shadow AI Cleanup
Decommissioning is an opportunity to discover and address shadow AI — unapproved AI tools that developers have adopted without organizational approval.
- Discovery: Use network monitoring, CASB logs, and DLP tools to identify AI services being accessed.
- Assessment: For each discovered shadow AI tool, assess what data was shared with it and what risk it poses.
- Cleanup: Revoke access, block network access, remove installed software.
- Root cause: Why did developers adopt shadow AI? Was the approved tool inadequate? Were developers unaware of the policy? Address the root cause to prevent recurrence.
- Amnesty period: Consider an amnesty period where developers can disclose shadow AI usage without consequences, enabling thorough cleanup.
AI BOM (Bill of Materials) Update
Maintain an AI Bill of Materials that documents all AI models and tools in use. When decommissioning AI components:
- Remove the decommissioned model/tool from the AI BOM.
- Document the decommission date and reason.
- Document what data was destroyed.
- Archive the AI BOM entry for audit purposes.
- Update any AI governance documentation that referenced the decommissioned component.
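The update-not-delete pattern for the AI BOM can be sketched as follows (the entry structure and component names are hypothetical):

```python
from datetime import date

# Hypothetical AI BOM, keyed by component name.
ai_bom = {
    "fraud-scorer-v2": {"type": "model", "status": "active"},
    "legacy-churn-model": {"type": "model", "status": "active"},
}

def decommission_entry(bom, name, reason, data_destroyed):
    """Mark the entry decommissioned rather than deleting it, so the BOM
    retains an audit trail of what existed and why it was retired."""
    entry = bom[name]
    entry.update(
        status="decommissioned",
        decommissioned_on=date.today().isoformat(),
        reason=reason,
        data_destroyed=data_destroyed,
    )
    return entry

decommission_entry(ai_bom, "legacy-churn-model",
                   reason="replaced by fraud-scorer-v2",
                   data_destroyed="training set purged per NIST SP 800-88")
print(ai_bom["legacy-churn-model"]["status"])  # decommissioned
```

The retained entry answers the auditor's later question "what happened to that model?" without archaeology.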
10. Communication and Documentation
Stakeholder Notification Schedule
| Milestone | Notification | Audience |
|---|---|---|
| Decommission decision | Formal announcement with timeline and rationale | All stakeholders |
| 6 months before cutoff | Migration guide published, support offered | Consumers, partners |
| 3 months before cutoff | Reminder with migration progress report | All stakeholders |
| 1 month before cutoff | Final warning, migration deadline | Consumers still on old system |
| 1 week before cutoff | Imminent shutdown notice | All stakeholders |
| Cutoff day | Confirmation of shutdown | All stakeholders |
| Post-decommission | Completion report with verification results | System owner, security, compliance |
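The notification schedule above can be generated mechanically from the cutoff date (the day offsets approximate the table's "6 months / 3 months / 1 month / 1 week" milestones):

```python
from datetime import date, timedelta

# Offsets in days before cutoff, mirroring the notification table above.
MILESTONES = [
    (180, "Migration guide published, support offered"),
    (90,  "Reminder with migration progress report"),
    (30,  "Final warning, migration deadline"),
    (7,   "Imminent shutdown notice"),
    (0,   "Confirmation of shutdown"),
]

def notification_schedule(cutoff: date):
    """Return (send_date, message) pairs ordered from earliest to cutoff."""
    return [(cutoff - timedelta(days=d), msg) for d, msg in MILESTONES]

for when, msg in notification_schedule(date(2025, 12, 1)):
    print(when, "-", msg)
```

Computed dates can then be loaded straight into a ticketing or calendar system so no milestone depends on someone remembering it.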
Documentation Updates
- Architecture diagrams: Remove the system from all architecture diagrams. Archive the final state diagram.
- CMDB/asset inventory: Update the system’s status to “Decommissioned” with the decommission date. Do not delete the CMDB entry — maintain it for audit trail.
- Service catalog: Remove the system from active service listings.
- Runbooks and playbooks: Remove or archive runbooks. Update any runbooks for other systems that referenced the decommissioned system.
- Dependency maps: Update dependency maps for all systems that previously depended on the decommissioned system.
- Training materials: Update training materials that referenced the system.
Lessons Learned
After decommissioning, conduct a brief retrospective:
- What went well in the decommissioning process?
- What was harder than expected?
- Were there missed dependencies that caused problems?
- Were there data management challenges?
- Were there credential or access cleanup gaps?
- What would we do differently next time?
Feed these lessons into the decommissioning procedure for future projects.
Compliance Evidence
For regulated systems, maintain a decommissioning evidence package:
- Decommissioning plan (approved by system owner and security).
- Data migration reconciliation reports.
- Data destruction certificates.
- Credential revocation evidence.
- Infrastructure decommissioning verification.
- Post-decommissioning verification results.
- Stakeholder notification records.
- Sign-off from system owner, security, and compliance.
11. Post-Decommissioning Verification
The final step is systematic verification that the decommissioning is complete. This checklist must be executed and signed off before the decommissioning project is closed.
Verification Checklist
System Accessibility:
- The system’s URLs/endpoints return no response or a tombstone page (not the old application).
- DNS records for the system are removed or point to a tombstone.
- The system is not accessible from the internal network.
- The system is not accessible from the internet.
- Load balancer configurations no longer route traffic to the system.
- Service discovery no longer lists the system.
Data Verification:
- All required data has been migrated to the successor system (with reconciliation evidence).
- All required data has been archived per retention requirements (with immutable storage verification).
- All data requiring destruction has been destroyed (with certificates of destruction).
- Backups have been destroyed or placed under retention management.
- No copies of the data exist in unexpected locations (developer machines, staging environments, temporary storage).
Credential Verification:
- All service accounts are disabled or deleted.
- All API keys are revoked.
- All OAuth client credentials are revoked.
- All SSH keys are removed from other systems’ authorized_keys.
- All TLS certificates are revoked.
- Shared secrets have been rotated.
- Firewall rules have been updated to remove the system.
- No credentials associated with the system appear in secret scanning results.
Infrastructure Verification:
- Servers/VMs are terminated.
- Containers and container images are removed.
- Serverless functions are deleted.
- Storage volumes are deleted.
- No orphaned cloud resources remain (use cloud cost tools to verify).
- IaC resources are removed.
- Monitoring and alerting are removed.
- Backup schedules are removed.
Code and Documentation Verification:
- Repository is archived (read-only).
- CI/CD pipelines are disabled.
- Architecture diagrams are updated.
- CMDB/asset inventory is updated.
- Service catalog is updated.
- Runbooks are archived or updated.
AI-Specific Verification (if applicable):
- AI models are removed from model registries and deployment systems.
- Training data is destroyed per policy.
- Model weights are destroyed.
- AI tool API keys are revoked.
- AI BOM is updated.
- Shadow AI usage has been addressed.
Final Sign-Off:
- System owner sign-off.
- Security team sign-off.
- Compliance team sign-off (for regulated systems).
- Decommissioning evidence package assembled and stored.
Key Takeaways
- Decommissioning is a security-critical activity, not an afterthought. Unmaintained software accumulates vulnerabilities, consumes resources, and expands the attack surface. Proactive decommissioning is a security control.
- Dependency mapping is the foundation of safe decommissioning. Before you turn anything off, you must know everything that depends on it. Missed dependencies cause outages and data loss.
- Data management is the highest-risk aspect of decommissioning. Migration, archival, and destruction must be executed with the same care as data processing during normal operations. Regulatory retention requirements add complexity.
- Credential cleanup eliminates residual attack surface. Every service account, API key, SSH key, certificate, and shared secret associated with the decommissioned system must be revoked. This is the most commonly missed step.
- AI decommissioning adds new dimensions. Training data destruction, model weight destruction, AI tool data audit, and AI BOM updates are requirements that did not exist in traditional decommissioning.
- Verification is not optional. The post-decommissioning verification checklist must be executed and signed off. Trust but verify — confirm the system is unreachable, data is destroyed, credentials are revoked, and infrastructure is cleaned up.
- Communication prevents surprises. A structured notification timeline with clear milestones, migration guidance, and support ensures stakeholders are prepared and no one is caught off guard.
Practical Exercise
Scenario: Your organization is decommissioning a legacy customer portal application. The application:
- Has been running for 8 years.
- Stores customer PII (names, emails, phone numbers, addresses) and transaction history.
- Uses a PostgreSQL database with 2TB of data.
- Has REST APIs consumed by 3 internal services and 2 partner integrations.
- Uses a service account to access the shared authentication service.
- Has an AI-powered chatbot feature that was added 2 years ago using a third-party LLM API.
- Is hosted on 4 VMs behind a load balancer.
- Is subject to PCI-DSS (transaction data) and GDPR (EU customer data).
- Is being replaced by a new customer portal built on modern architecture.
Tasks:
- Create a complete dependency map (upstream consumers, downstream dependencies, data flows).
- Develop a data management plan: for each data type, specify whether it will be migrated, archived, or destroyed, with justification and regulatory reference.
- Create a credential and access cleanup checklist specific to this system.
- Develop a stakeholder notification timeline.
- Address the AI chatbot decommissioning: what data was shared with the LLM API? What cleanup is needed?
- Complete the post-decommissioning verification checklist.
References
- CIS Controls v8, Safeguard 16.1: Establish and Maintain a Secure Application Development Process
- NIST SP 800-88 Rev. 1: Guidelines for Media Sanitization
- (ISC)² CSSLP Domain 7: Software Deployment, Operations & Maintenance
- NIST SP 800-53 Rev. 5: SA-22 (Unsupported System Components)
- PCI-DSS v4.0: Requirement 3.2.1 (Account data storage kept to a minimum through data retention and disposal policies)
- GDPR Article 17: Right to Erasure
- RFC 8594: The Sunset HTTP Header Field
- OWASP Software Component Verification Standard (SCVS): Component End-of-Life
- Cloud Security Alliance: Cloud Data Lifecycle
Study Guide
Key Takeaways
- Decommissioning is a security-critical activity — Unmaintained software accumulates vulnerabilities, consumes resources, and expands attack surface.
- Residual credentials are the #1 post-decommissioning risk — Every service account, API key, SSH key, and certificate must be revoked.
- Dependency mapping before shutdown — Know everything that depends on the system before turning anything off; missed dependencies cause outages.
- Data management is highest-risk — Migration, archival, and destruction per regulatory requirements; backup destruction is frequently overlooked.
- Cryptographic erasure for cloud environments — Destroy all encryption key copies to render data unrecoverable without physical media destruction.
- AI decommissioning adds new dimensions — Training data destruction, model weight deletion, machine unlearning problem for GDPR compliance.
- Archive repositories, do not delete — Code history needed for forensic investigation, legal discovery, compliance audit, and knowledge preservation.
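The cryptographic-erasure idea can be illustrated with a toy stream cipher: once every copy of the key is destroyed, only the ciphertext remains and the plaintext is unrecoverable. This sketch uses a SHA-256-derived keystream purely for illustration, not as a production cipher:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher keyed by SHA-256(key || counter).
    Illustrative only; real systems use vetted ciphers (e.g. AES-GCM)."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

key = secrets.token_bytes(32)
record = b"customer PII destined for destruction"
ciphertext = keystream_xor(key, record)

# While the key exists, the data is recoverable:
assert keystream_xor(key, ciphertext) == record

# Cryptographic erasure = destroying every copy of `key`.
# Without it, the ciphertext is just noise:
assert keystream_xor(secrets.token_bytes(32), ciphertext) != record
print("key destroyed -> data unrecoverable")
```

This is why key management matters during decommissioning: erasure is only as complete as the inventory of key copies (KMS, backups, escrow) that get destroyed.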
Important Definitions
| Term | Definition |
|---|---|
| Cryptographic Erasure | Destroying all encryption key copies, rendering encrypted data unrecoverable |
| NIST SP 800-88 | Guidelines for Media Sanitization — three levels: Clear, Purge, Destroy |
| Machine Unlearning | Problem of removing specific data from trained AI model weights for GDPR compliance |
| Soft Decommission | Read-only mode, reduced traffic, monitoring for stragglers before final shutdown |
| Hard Decommission | Service shutdown, access revocation, infrastructure removal |
| Rollback Window | Period after decommissioning (typically 30-90 days) during which the system can still be restored |
| Sunset Header | RFC 8594 HTTP header communicating planned API decommission date |
| Certificate of Destruction | Audit document recording what, when, how, and who for data destruction |
| Zombie Application | Decommissioned-in-name system still running, consuming resources, unpatched |
| Expand-Contract (API) | Phased API retirement: deprecation headers, gradual traffic reduction, grace period, then complete shutdown |
Quick Reference
- Decommission Timeline: Planning (2-4 wk) -> Notification (4-12 wk) -> Migration (4-12 wk) -> Soft (2-4 wk) -> Hard (1 wk) -> Cleanup (2-4 wk) -> Verification (1 wk)
- Data Retention: PCI-DSS 1 year, SOX 7 years, HIPAA 6 years, GDPR minimize, SEC 17a-4 6 years
- NIST 800-88 Levels: Clear (overwrite, non-sensitive), Purge (unrecoverable, sensitive), Destroy (physical, highest sensitivity)
- API Sunset Headers: Deprecation: true + Sunset: &lt;date&gt; + Link: &lt;migration-guide&gt;; rel="successor-version"
- Common Pitfalls: Not revoking credentials, forgetting backup destruction, deleting repos instead of archiving, no rollback window, ignoring AI model data destruction
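The sunset headers can be emitted programmatically. A sketch of building them (the migration-guide URL is a placeholder; "Deprecation: true" follows the IETF draft usage referenced above, and Sunset takes an HTTP-date per RFC 8594):

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def sunset_headers(sunset_at: datetime, migration_guide: str) -> dict:
    """Response headers announcing a planned API decommission date."""
    return {
        "Deprecation": "true",
        "Sunset": format_datetime(sunset_at.astimezone(timezone.utc), usegmt=True),
        "Link": f'<{migration_guide}>; rel="successor-version"',
    }

headers = sunset_headers(
    datetime(2025, 12, 31, tzinfo=timezone.utc),
    "https://example.com/docs/v2-migration",  # placeholder URL
)
print(headers["Sunset"])  # Wed, 31 Dec 2025 00:00:00 GMT
```

Attaching these headers to every response during the notification and soft-decommission phases lets API consumers detect the retirement mechanically rather than relying on email announcements.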
Review Questions
- Explain why residual credentials are the number one post-decommissioning security risk, and design a complete credential cleanup checklist.
- Explain cryptographic erasure, its requirements, and when you would choose it over physical media destruction.
- An 8-year-old customer portal with PII and PCI data needs decommissioning — design the data management plan specifying migration, archival, and destruction.
- What unique challenges does AI model decommissioning present for GDPR’s right to erasure, and how does the machine unlearning problem complicate this?
- Design a stakeholder notification timeline for decommissioning an API consumed by 3 internal services and 2 external partners.