5.3 — UAT & Acceptance Testing

Testing & Verification · 90 min · QA & Security

Learning Objectives

  • Explain CIS 16.8 and the security rationale for environment separation.
  • Design a UAT environment that mirrors production while maintaining data compliance.
  • Construct a complete UAT test plan with entry/exit criteria, traceability, and formal sign-off.
  • Implement data anonymization and synthetic data strategies compliant with GDPR, PCI-DSS, and HIPAA.
  • Evaluate AI-assisted test plan generation and understand its benefits and risks.

1. CIS Control 16.8 — Maintain Separate Environments

CIS Safeguard 16.8 mandates environment separation:

“Maintain separate environments for production and non-production systems. Developers should not have unmonitored access to production environments.”

This control addresses three fundamental security risks:

  1. Accidental production impact: A developer testing a schema migration runs it against the wrong database. Environment separation makes this structurally impossible.
  2. Unauthorized data access: Developers with production access can view customer PII, financial data, and health records. This violates the principle of least privilege and creates compliance risk (GDPR, PCI-DSS, HIPAA).
  3. Audit trail integrity: If developers can directly modify production systems, the audit trail cannot distinguish between authorized changes (deployed through the pipeline) and unauthorized changes (made directly by a developer at 2 AM).

The control explicitly states “unmonitored access.” Break-glass access with logging, approval, and time-boxing is acceptable for incident response. Standing access without monitoring is not.


2. Environment Management

2.1 Environment Topology

A mature SSDLC maintains these distinct environments:

| Environment | Purpose | Who Has Access |
| --- | --- | --- |
| Development | Developer testing, feature branches, experimentation | Developers |
| CI/CD | Automated build and test execution | CI system only (no human access) |
| Staging | Integration testing, pre-UAT verification | QA, DevOps |
| UAT | Business acceptance testing by stakeholders | QA, Business Users, PM |
| Pre-Production | Final validation, production mirror | DevOps, Security, QA (read-only) |
| Production | Live customer-facing system | Operations (monitored, audited) |

2.2 Production Parity

The UAT environment must mirror production. Every difference between UAT and production is a location where bugs hide and security gaps form.

What must match:

  • Hardware/compute: Same instance types, same memory, same CPU allocation. A test that passes on a 4-core dev box and fails on a 2-core production container is a deployment surprise.
  • Network topology: Same load balancers, same firewall rules, same DNS configuration, same TLS certificates (or equivalent). Network-dependent behavior (timeouts, retries, connection pooling) must be tested in a matching topology.
  • Operating system and runtime: Same OS version, same language runtime, same system libraries. An application that works on Ubuntu 22.04 in dev and deploys to Amazon Linux 2023 in production has an untested gap.
  • Database schemas: Exact same schema, same indexes, same constraints. Schema drift between environments is one of the most common sources of production failures.
  • Integrations: Same third-party service endpoints (or faithful mocks with identical response schemas). If UAT uses a payment gateway sandbox but production uses the live gateway, any behavioral difference between sandbox and live is untested.
  • Security controls: Same WAF rules, same rate limiting, same authentication providers, same session management. Testing in an environment without production security controls means those controls are untested.

Infrastructure as Code (IaC) is the enforcement mechanism. The same Terraform/Pulumi/CloudFormation templates that define production define UAT. Differences are limited to environment-specific parameters (scaling, domain names, credentials) — not architecture.

# Same module, different parameters
module "app_environment" {
  source = "./modules/app-stack"

  environment     = var.environment  # "uat" or "production"
  instance_count  = var.environment == "production" ? 4 : 2
  domain          = var.environment == "production" ? "app.example.com" : "uat.example.com"

  # Architecture is IDENTICAL
  db_engine       = "postgres"
  db_version      = "16"
  cache_engine    = "redis"
  cache_version   = "7"
  mq_engine       = "rabbitmq"
}

2.3 Separate Credentials and Secrets

UAT must use completely separate credentials from production:

  • Separate database credentials (different passwords, different accounts)
  • Separate API keys for third-party services
  • Separate TLS certificates
  • Separate encryption keys
  • Separate service accounts

This is not optional. If UAT uses the same credentials as production, a compromised UAT environment grants production access. It also means anyone with UAT access has production credentials — violating CIS 16.8.
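One lightweight way to enforce this separation in application code is to namespace every secret lookup by environment. The sketch below is illustrative, not a real vault client: the path layout, `ALLOWED_ENVIRONMENTS` set, and `secret_path` helper are all assumptions.

```python
# Hypothetical helper: resolve secrets from an environment-scoped path so UAT
# and production credentials can never be confused. Path layout is illustrative.
ALLOWED_ENVIRONMENTS = {"development", "staging", "uat", "production"}

def secret_path(environment: str, service: str, name: str) -> str:
    """Build a secret path that is always namespaced by environment."""
    if environment not in ALLOWED_ENVIRONMENTS:
        raise ValueError(f"unknown environment: {environment!r}")
    # e.g. "secret/uat/billing-api/db-password" vs "secret/production/..."
    return f"secret/{environment}/{service}/{name}"

# UAT and production resolve to different paths, so a leaked UAT secret
# grants nothing in production.
assert secret_path("uat", "billing-api", "db-password") != \
       secret_path("production", "billing-api", "db-password")
```

Because the environment name is part of the path, granting a tester read access to `secret/uat/*` cannot accidentally expose anything under `secret/production/*`.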

2.4 Automated Provisioning and Teardown

Environment drift is the gradual divergence between environments caused by manual changes, hotfixes applied to one environment but not the other, and configuration changes that accumulate over time.

The solution: automated provisioning and teardown.

  • UAT environments are created from IaC for each release cycle.
  • After UAT sign-off, the environment is destroyed.
  • For the next release, a fresh environment is provisioned from the same IaC.
  • This ensures every UAT cycle starts from a known, production-parity state.

Persistent UAT environments that are never torn down will inevitably drift from production. Ephemeral environments provisioned from IaC cannot drift because they do not persist long enough to accumulate changes.


3. Data Management

3.1 The Data Problem

UAT requires realistic data to test realistic scenarios. Production data is realistic. Production data also contains real customer PII, financial records, health information, and other regulated data.

Using production data in non-production environments violates:

  • GDPR: Personal data processed only for stated purposes, with data minimization (Article 5). Copying production data to UAT is a new processing activity that may lack legal basis.
  • PCI-DSS: Live cardholder data (PANs) must not be used in pre-production environments (Requirement 6.5.5). Test data must be anonymized or synthetic.
  • HIPAA: Protected health information requires the same safeguards in test environments as in production. Most organizations cannot (and should not) extend full HIPAA controls to UAT.
  • SOC 2: Logical access controls must prevent unauthorized access to data. Developers with UAT access should not see production customer data.

3.2 Data Anonymization and Masking

Data anonymization transforms production data to remove or obscure identifying information while preserving data characteristics necessary for testing.

Techniques:

| Technique | Description | Use Case |
| --- | --- | --- |
| Substitution | Replace real values with fake but realistic values | Names, emails, phone numbers |
| Shuffling | Randomly reassign values within a column | Preserves distribution |
| Masking | Partially obscure values (e.g., ****-****-****-1234) | Credit card numbers, SSNs |
| Generalization | Reduce precision (exact age → age range) | Demographic data |
| Perturbation | Add random noise to numerical values | Financial amounts, dates |
| Tokenization | Replace sensitive data with non-reversible tokens | IDs, account numbers |
| Encryption | Encrypt sensitive fields with UAT-specific keys | Reversible if needed for testing |

Critical requirements for anonymization:

  • Irreversibility: It must be computationally infeasible to reverse the anonymization and recover original data.
  • Referential integrity: Relationships between tables must be preserved. If Customer 123 has Orders 456 and 789, the anonymized data must maintain those relationships (even though Customer 123 is now Customer ABC).
  • Data type consistency: Anonymized data must pass the same validation rules as real data. A masked email must still be a valid email format. A masked phone number must still be a valid phone format.
  • Statistical properties: For performance testing, anonymized data should have similar distributions (number of orders per customer, transaction amounts, geographic distribution).
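The requirements above can be sketched with standard-library keyed hashing. This is a minimal illustration, not a production anonymizer: the per-cycle key, token format, and email scheme are assumptions.

```python
import hashlib
import hmac
import re

# Illustrative per-UAT-cycle key; never derived from or shared with production keys.
ANON_KEY = b"uat-cycle-2024-07"

def pseudonymize_id(value: str) -> str:
    """Deterministic, non-reversible token: the same customer ID always maps
    to the same token, so foreign keys across tables stay consistent."""
    digest = hmac.new(ANON_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"CUST-{digest[:12].upper()}"

def mask_card(pan: str) -> str:
    """Keep only the last four digits: ****-****-****-1234."""
    last4 = pan.replace("-", "")[-4:]
    return f"****-****-****-{last4}"

def fake_email(value: str) -> str:
    """Substitute a format-valid email derived from the pseudonym, so
    downstream validation rules still pass (data type consistency)."""
    token = pseudonymize_id(value).lower().replace("cust-", "")
    return f"user.{token}@example.test"

# Referential integrity: same input, same token, in every table.
assert pseudonymize_id("123") == pseudonymize_id("123")
assert re.fullmatch(r"[^@]+@[^@]+\.[^@]+", fake_email("123"))
```

Keyed hashing (rather than plain hashing) matters: without the secret key, an attacker could hash candidate IDs and match tokens, reversing the anonymization by brute force.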

3.3 Synthetic Data Generation

Synthetic data is generated from scratch based on data models and statistical properties. It has no connection to real individuals.

Advantages over anonymized data:

  • Zero re-identification risk (there are no real individuals to re-identify)
  • Can generate edge cases that may not exist in production data
  • Can generate data at any scale (need 10x production volume for load testing? Generate it.)
  • No dependency on production data pipelines

Challenges:

  • May not capture real-world data anomalies (corrupt records, encoding issues, legacy format inconsistencies)
  • Complex referential integrity across many tables is hard to generate correctly
  • Statistical distributions may not match production, causing tests to pass on synthetic data and fail on production data

Tools for synthetic data:

  • Faker libraries (Python Faker, JavaScript Faker) for basic fake data
  • Gretel.ai for AI-powered synthetic data generation
  • Tonic.ai for production-like synthetic data
  • Mostly AI for privacy-preserving synthetic data
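A stdlib-only sketch of what these generators do, including the field correlations mentioned above. The city/state pairs, value ranges, and income formula are illustrative assumptions, not properties of any real dataset.

```python
import random
import string

# Minimal stand-in for a Faker-style generator; pairs/ranges are illustrative.
CITY_STATE = [("Austin", "TX"), ("Portland", "OR"), ("Columbus", "OH")]

def synthetic_customer(rng: random.Random) -> dict:
    """Generate one synthetic customer with internally consistent fields."""
    city, state = rng.choice(CITY_STATE)  # city and state stay correlated
    age = rng.randint(18, 90)
    handle = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "name": handle.title(),
        "email": f"{handle}@example.test",
        "city": city,
        "state": state,
        "age": age,
        # income loosely correlated with age — the kind of cross-field
        # relationship naive random generation misses
        "income": rng.randint(20_000, 40_000) + age * rng.randint(200, 800),
    }

rng = random.Random(42)  # seeded: the same UAT data set is reproducible
customers = [synthetic_customer(rng) for _ in range(1000)]
```

Seeding the generator is the detail that makes this usable for UAT: a failing test can be reproduced against the exact same data in the next ephemeral environment.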

3.4 Data Subset Strategies

For large production datasets, even anonymized copies may be impractical. Data subsetting creates a representative, relationally consistent subset.

  1. Identify core entities (customers, accounts, transactions).
  2. Select a representative sample (e.g., 1% of customers across all segments).
  3. Follow all foreign key relationships to include related data.
  4. Anonymize the subset.
  5. Validate referential integrity of the result.

A 1% subset of a 100TB production database yields a 1TB UAT database that is orders of magnitude faster to provision, query, and test against, while still containing realistic data distributions.
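The five steps above can be illustrated with toy in-memory tables; in practice this runs as SQL against a read replica, and the table and column names here are assumptions.

```python
# Toy tables: 100 customers, 5 orders each, linked by customer_id.
customers = [{"id": i, "segment": "retail" if i % 2 else "business"}
             for i in range(100)]
orders = [{"id": 1000 + i, "customer_id": i % 100} for i in range(500)]

# Steps 1-2: select a representative sample of the core entity (here, 10%).
sample_ids = {c["id"] for c in customers if c["id"] % 10 == 0}
subset_customers = [c for c in customers if c["id"] in sample_ids]

# Step 3: follow foreign keys so related rows come along.
subset_orders = [o for o in orders if o["customer_id"] in sample_ids]

# Step 4 (anonymize the subset) is omitted here; see Section 3.2.

# Step 5: validate referential integrity of the result.
assert all(o["customer_id"] in sample_ids for o in subset_orders)
```

Real subsetting tools do the same thing transitively across every foreign key chain (orders → line items → shipments), which is where hand-rolled scripts usually break down.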


4. UAT Test Plan Structure

A UAT test plan is a formal document that defines what will be tested, how, by whom, and what constitutes success. It is not a suggestion — it is a contract between the development team and the business stakeholders.

4.1 Scope and Objectives

Define precisely what business requirements are being validated and, equally important, what is NOT in scope.

## Scope
- User registration flow (Story US-1234 through US-1240)
- Two-factor authentication enrollment (Story US-1245)
- Account dashboard data display (Story US-1250, US-1251)

## Out of Scope
- Admin portal (covered in separate UAT cycle)
- Third-party payment integration (tested via mocked sandbox)
- Performance characteristics (covered in load testing phase)

4.2 Entry Criteria

UAT does not begin until all entry criteria are met. Starting UAT prematurely wastes stakeholder time and erodes trust.

| Criterion | Verification Method |
| --- | --- |
| All system/integration tests passing | CI dashboard green for release branch |
| UAT environment provisioned | IaC deployment verified, health checks passing |
| Test data loaded and verified | Data validation scripts confirm integrity |
| Known defects documented | All open defects listed with severity and workarounds |
| UAT test plan reviewed and approved | Sign-off from QA lead and PM |
| User accounts provisioned | All UAT testers have credentials and access |
| Rollback plan documented | Documented procedure to revert if UAT is blocked |

4.3 Test Cases

UAT test cases are written in business language, not technical language. They describe user actions and expected business outcomes.

## TC-001: New User Registration
**Requirement**: US-1234 (User can create account with email and password)
**Preconditions**: User has a valid email address not previously registered

### Steps:
1. Navigate to registration page
2. Enter valid email address
3. Enter password meeting complexity requirements (8+ chars, upper, lower, digit, special)
4. Confirm password
5. Accept terms of service
6. Click "Create Account"

### Expected Results:
- Account is created successfully
- Welcome email is received within 5 minutes
- User is redirected to the account dashboard
- Dashboard displays the user's email address
- Two-factor authentication enrollment prompt is displayed

### Acceptance Criteria:
- [ ] Account created (Pass/Fail)
- [ ] Welcome email received (Pass/Fail)
- [ ] Dashboard redirect (Pass/Fail)
- [ ] Email displayed correctly (Pass/Fail)
- [ ] 2FA prompt displayed (Pass/Fail)

4.4 Acceptance Criteria

Every test case has measurable, binary pass/fail acceptance criteria. No ambiguity. No “looks about right.” Either the system meets the requirement or it does not.

Criteria must be:

  • Specific: “Response time under 3 seconds” not “response is fast.”
  • Measurable: Can be objectively verified by any tester.
  • Traceable: Maps directly to a documented requirement.
  • Binary: Pass or fail, no partial credit.

4.5 Test Data Requirements

For each test case, document the specific data needed:

  • Normal scenarios: Standard data within expected ranges.
  • Boundary scenarios: Minimum length name (1 char), maximum length name (255 chars), password exactly at minimum length.
  • Negative scenarios: Invalid email format, duplicate email, SQL injection payload in name field, XSS payload in address field.

4.6 Roles and Responsibilities

| Role | Responsibility |
| --- | --- |
| QA Lead | Test plan authoring, test execution coordination, defect triage |
| Business Testers | Execute test cases, report defects, validate fixes |
| Product Manager | Clarify requirements, prioritize defects, approve sign-off |
| Business Owner | Final sign-off authority, risk acceptance for deferred defects |
| Development Team | Fix defects, support environment issues, available for questions |
| Security Team | Review security-relevant test results, approve security findings |

4.7 Defect Management

| Severity | Definition | Response |
| --- | --- | --- |
| Critical | System crash, data loss, security breach, total block | Testing halted. Fix immediately. |
| High | Major feature not working, no workaround | Fix before sign-off. May continue testing other areas. |
| Medium | Feature works but with significant issues | Fix or accept risk before sign-off. |
| Low | Minor issues, cosmetic, workaround available | Document and defer if needed. |

Every defect must be:

  1. Documented with reproduction steps, screenshots, and expected vs. actual results.
  2. Severity-classified using the definitions above.
  3. Linked to the test case that found it and the requirement it violates.
  4. Assigned to a developer with a target fix date.
  5. Re-tested after fix to confirm resolution.
  6. Regression-tested to confirm the fix did not break other functionality.

4.8 Exit Criteria

UAT is complete when ALL exit criteria are met:

| Criterion | Threshold |
| --- | --- |
| Test execution completion | 100% of test cases executed |
| Test pass rate | ≥95% (zero critical/high failures) |
| Critical defects | Zero open |
| High defects | Zero open |
| Medium defects | All documented with disposition (fix or accept risk) |
| Low defects | All documented |
| Regression testing | All fixes re-tested, no new failures |
| Performance acceptance | Response times within acceptable thresholds |
| Formal sign-off | PM, Business Owner, and QA Lead signatures |
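The quantitative criteria above lend themselves to a mechanical gate check. This sketch covers only the measurable subset (execution, pass rate, critical/high defects); the function name and input shapes are assumptions, and the human sign-offs remain manual.

```python
# Hypothetical gate check over the measurable UAT exit criteria.
def uat_exit_ok(results: list[str], open_defects: dict[str, int]) -> bool:
    """results: per-test outcomes ('pass'/'fail') for all planned cases;
    open_defects: count of open defects by severity."""
    executed = len(results) > 0  # assumes the list covers 100% of planned cases
    pass_rate = results.count("pass") / len(results) if executed else 0.0
    return (
        executed
        and pass_rate >= 0.95                      # >= 95% pass rate
        and open_defects.get("critical", 0) == 0   # zero open critical
        and open_defects.get("high", 0) == 0       # zero open high
    )

# 19/20 passing with only medium defects open clears the measurable gates.
assert uat_exit_ok(["pass"] * 19 + ["fail"], {"medium": 2})
# Any open high-severity defect blocks sign-off regardless of pass rate.
assert not uat_exit_ok(["pass"] * 19 + ["fail"], {"high": 1})
```

Running a check like this in CI keeps the gate honest under schedule pressure: the release pipeline simply cannot proceed while the thresholds are unmet.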

4.9 Timeline and Schedule

A realistic UAT schedule includes time for:

  • Environment provisioning and verification (1-2 days)
  • Test data preparation and loading (1-2 days)
  • Test execution (depends on scope — typically 5-10 business days)
  • Defect fix cycles (allow 2-3 rounds of fix-retest)
  • Regression testing after fixes (2-3 days)
  • Sign-off and documentation (1 day)

5. Sign-Off Process

5.1 Formal Gate

UAT sign-off is a formal governance gate, not a casual thumbs-up. It requires:

  1. Written approval from designated signatories: Product Manager, Business Owner/Sponsor, and QA Lead at minimum.
  2. All critical and high defects resolved: Resolved means fixed AND re-tested AND regression-verified. Not “in progress.” Not “developer says it’s done.”
  3. Medium and low defects documented: Each with a disposition — either “will fix before release” or “accepted risk with documented justification and deferred to future release.”
  4. Digital signature or auditable approval record: Email approval with timestamp, digital signature in the project management tool, or electronic sign-off in TestRail/Zephyr. NOT verbal approval. NOT a Slack message saying “looks good.”
  5. Sign-off resets if scope changes: If ANY requirements change after sign-off, the sign-off is invalidated. Impacted test cases must be re-executed and sign-off must be re-obtained. This prevents scope changes from slipping through without validation.

5.2 What Invalidates Sign-Off

  • Any code change to the signed-off release (even “just a config change”)
  • New requirements added to the release scope
  • Environment changes between UAT sign-off and production deployment
  • Discovery of a defect not covered by existing test cases

5.3 Audit Trail

The complete UAT record must be preserved for audit purposes:

  • Final test plan (version-controlled)
  • All test case results with timestamps and tester identity
  • All defect records with full lifecycle (open → assign → fix → retest → close)
  • Sign-off records with signatories and timestamps
  • Environment configuration at time of testing
  • Test data used (or its generation parameters)

6. Bidirectional Traceability Matrix

The traceability matrix is the document that proves every requirement has been tested and every test traces back to a requirement. It is the primary audit artifact for regulatory compliance.

6.1 Structure

| Requirement ID | Requirement Description | Test Case(s) | Test Result | Defect(s) |
| --- | --- | --- | --- | --- |
| US-1234 | User registration with email | TC-001, TC-002 | Pass | — |
| US-1235 | Password complexity enforcement | TC-003, TC-004 | Pass | DEF-089 (fixed) |
| US-1245 | Two-factor authentication | TC-010, TC-011, TC-012 | Pass | — |
| US-1250 | Dashboard data display | TC-020 | Fail | DEF-095 (open, medium) |

6.2 Coverage Gap Analysis

The matrix immediately reveals gaps:

  • Requirements with no test cases: Untested requirements. These are release risks.
  • Test cases with no requirements: Orphan tests. Either the requirement is undocumented (fix the requirements) or the test is unnecessary (remove it).
  • Requirements with all test cases failing: Features that are not working. Release blockers unless deferred.
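The first two gap checks are simple set operations once the matrix is data. The mapping below is illustrative (US-1260 and TC-099 are invented IDs for the example), and the data would normally be exported from the test management tool.

```python
# Requirement -> test-case mapping, as exported from a test management tool.
req_to_tests = {
    "US-1234": ["TC-001", "TC-002"],
    "US-1250": ["TC-020"],
    "US-1260": [],  # requirement with no test cases: a release risk
}
all_tests = {"TC-001", "TC-002", "TC-020", "TC-099"}  # TC-099 is an orphan

# Forward gap: requirements no test case traces to.
untested_reqs = [r for r, tests in req_to_tests.items() if not tests]

# Backward gap: test cases no requirement traces to.
covered = {t for tests in req_to_tests.values() for t in tests}
orphan_tests = sorted(all_tests - covered)

assert untested_reqs == ["US-1260"]
assert orphan_tests == ["TC-099"]
```

Failing the build when either list is non-empty turns traceability from a spreadsheet exercise into an enforced control.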

6.3 Bidirectional Navigation

The matrix must be navigable in both directions:

  • Forward traceability (Requirement → Test): “Has this requirement been tested?” For every requirement, you can find all test cases that validate it.
  • Backward traceability (Test → Requirement): “Why does this test exist?” For every test case, you can identify the requirement it validates.
  • Defect traceability: Every defect links to the test case that found it and the requirement it violates. This enables impact analysis when requirements change.

7. Tools

| Tool | Type | Strengths |
| --- | --- | --- |
| Jira + Zephyr Scale | Test management + project | Tight Jira integration, traceability, reporting |
| Azure DevOps Test Plans | Integrated test management | Full ALM integration, Microsoft ecosystem |
| TestRail | Dedicated test management | Comprehensive, flexible, API-first |
| qTest | Enterprise test management | Scalable, strong reporting, compliance focus |
| Xray for Jira | Test management plugin | BDD support, CI integration, traceability |

Tool selection criteria:

  • Bidirectional requirement traceability (mandatory)
  • Test case versioning and history
  • Defect linking and lifecycle tracking
  • Sign-off workflow support
  • Audit trail and export capabilities
  • CI/CD integration for automated test result import
  • Reporting (coverage, progress, defect trends)

8. AI in UAT

8.1 AI-Assisted Test Plan Generation

AI can accelerate test plan creation by analyzing requirements documents and generating test cases:

How it works:

  1. Feed requirements documents (user stories, BRDs, PRDs) to an LLM.
  2. The LLM generates test cases with steps, expected results, and acceptance criteria.
  3. QA engineers review, modify, and approve the generated test cases.

Benefits:

  • Reduces test plan creation time from days to hours.
  • Identifies edge cases and boundary conditions that human testers might overlook.
  • Provides consistent test case format and structure.
  • Can generate test cases for multiple user personas simultaneously.

Example prompt and output:

Input: "US-1234: As a user, I can register with my email address and a
password that meets complexity requirements (minimum 8 characters, at least
one uppercase letter, one lowercase letter, one digit, one special character)."

AI-Generated Test Cases:
- TC-001: Register with valid email and compliant password → Success
- TC-002: Register with password of exactly 8 characters (boundary) → Success
- TC-003: Register with password of 7 characters (below minimum) → Reject
- TC-004: Register with password missing uppercase → Reject
- TC-005: Register with password missing digit → Reject
- TC-006: Register with password missing special character → Reject
- TC-007: Register with already-registered email → Reject with appropriate message
- TC-008: Register with invalid email format → Reject
- TC-009: Register with empty email → Reject
- TC-010: Register with password of maximum allowed length → Success
- TC-011: Register with SQL injection in email field → Reject safely
- TC-012: Register with XSS payload in name field → Reject/sanitize

8.2 AI for Test Data Generation and Anonymization

AI enhances test data management in three ways:

  1. Intelligent synthetic data generation: AI generates data that mimics real-world distributions, including realistic correlations between fields (e.g., city-state-zip consistency, age-appropriate income ranges).

  2. Context-aware anonymization: AI identifies sensitive data patterns that rule-based systems miss. For example, a free-text “notes” field that contains embedded SSNs, phone numbers, or medical terminology.

  3. Edge case data generation: AI generates data specifically designed to trigger edge cases — Unicode characters, extremely long strings, boundary dates, negative amounts, null bytes, and combinations that are statistically rare but important for testing.
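Whether produced by AI or by hand, a corpus like point 3 describes can be captured as a small fixture. The specific payloads below are illustrative choices, not a standard corpus.

```python
from datetime import date

def edge_case_strings(max_len: int = 255) -> list[str]:
    """Inputs chosen to probe validation boundaries and encoding handling."""
    return [
        "",                       # empty input
        "a" * max_len,            # exactly at the length boundary
        "a" * (max_len + 1),      # one past the boundary (should be rejected)
        "Zoë Müller-O'Brien",     # accented and punctuated name
        "名前テスト",              # non-Latin script
        "line1\nline2",           # embedded newline
        "null\x00byte",           # embedded null byte
    ]

def edge_case_dates() -> list[date]:
    """Boundary dates: leap day, century rollover eve, 32-bit epoch limit."""
    return [date(2000, 2, 29), date(1999, 12, 31), date(2038, 1, 19)]
```

An AI assistant's value here is breadth: it can propose category entries (bidirectional text, emoji, locale-specific decimal separators) that a fixture author might not think of, which a reviewer then vets before adding.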

8.3 Risks of AI in UAT

Hallucinated test cases: AI may generate test cases for features that do not exist or requirements that were never stated. Every AI-generated test case MUST be validated against actual requirements.

Missing edge cases: AI reflects its training data. If the training data lacks examples of certain business rules, industry regulations, or domain-specific edge cases, the AI will miss them. AI-generated test plans are a starting point, not a finished product.

Over-reliance on AI coverage: Teams may trust AI-generated test plans without reviewing them, creating a false sense of completeness. The traceability matrix (Section 6) is the check — if a requirement has no test case, the gap is visible regardless of whether tests were AI-generated or human-written.

Data compliance risks: AI-generated anonymization rules must be validated against specific regulatory requirements. “Looks anonymized” is not the same as “meets GDPR Article 89 standards for anonymization.”


9. OWASP SAMM Alignment

The OWASP Software Assurance Maturity Model (SAMM) defines maturity levels for requirements-driven testing:

Level 1: Ad hoc testing based on developer knowledge. No formal test plan. No traceability.

Level 2 (target for this module): Formal test plan derived from requirements. Bidirectional traceability. Defined entry/exit criteria. Formal sign-off process. Data anonymization for test environments.

Level 3: Automated test case generation from requirements. Risk-based test prioritization. Continuous validation with production telemetry feedback. AI-assisted test plan optimization.

This module targets Level 2 maturity as the baseline, with AI capabilities as acceleration toward Level 3.


10. NIST SSDF PW.5 Alignment

NIST SSDF Practice PW.5 states: “Create source code by adhering to secure coding practices.” While PW.5 focuses on coding practices, the UAT process validates that those practices resulted in software that meets security requirements.

Specifically:

  • UAT verifies that security requirements (Module 2.1) are implemented correctly.
  • Environment separation (CIS 16.8) ensures testing occurs without exposing production data.
  • Data anonymization ensures compliance with privacy regulations during testing.
  • The traceability matrix provides evidence that security requirements were tested.

11. Key Takeaways

  1. Environment separation is a security control, not a convenience. CIS 16.8 mandates it. Violations create compliance risk, data exposure risk, and audit findings.
  2. Production parity eliminates environment-specific bugs. Use IaC to ensure UAT matches production in architecture, configuration, and security controls.
  3. Never use real PII in test environments. Anonymize, mask, or generate synthetic data. No exceptions.
  4. The test plan is a contract. Entry criteria, exit criteria, and sign-off are formal gates. They are not negotiable or skippable under schedule pressure.
  5. Bidirectional traceability proves coverage. Every requirement has test cases. Every test case has a requirement. Gaps are immediately visible.
  6. Sign-off is auditable or it did not happen. Digital records, timestamps, signatories. Verbal approvals and Slack messages do not count.
  7. AI accelerates test plan creation but does not replace validation. Review every AI-generated test case against actual requirements. Trust but verify.

Review Questions

  1. A project manager wants to use production data in UAT to save time on test data preparation. Construct a compliance-based argument against this, referencing specific regulations.

  2. Your UAT environment uses a different database version than production. During UAT, all tests pass. After deployment to production, a critical query fails. What process failure allowed this, and how would you prevent it?

  3. Design entry criteria for a UAT cycle for a financial application that processes wire transfers. Include at minimum 8 criteria.

  4. A business owner verbally approves the release during a meeting but does not provide written sign-off. A defect is discovered in production. During the post-mortem, the business owner denies approving the release. What governance gap does this expose, and how does the sign-off process in this module prevent it?

  5. Describe how you would use AI to generate a test plan for a healthcare patient portal, including the specific validations you would perform on the AI output to ensure compliance with HIPAA.


References

  • CIS Controls v8, Safeguard 16.8 — Maintain Separate Environments
  • NIST SSDF PW.5 — Create Source Code Adhering to Secure Coding Practices
  • OWASP SAMM — Verification: Requirements-Driven Testing
  • GDPR Article 5 — Principles Relating to Processing of Personal Data
  • GDPR Article 89 — Safeguards for Processing for Archiving, Research, or Statistical Purposes
  • PCI-DSS v4.0, Requirement 6.5.5 — Live PANs Not Used in Pre-Production Environments
  • HIPAA Security Rule — Technical Safeguards
  • ISTQB Foundation Level Syllabus — Acceptance Testing
  • Gretel.ai — Synthetic Data Platform
  • Tonic.ai — Test Data Management

Study Guide

Key Takeaways

  1. CIS 16.8 mandates environment separation — Developers must not have unmonitored access to production; break-glass with logging is acceptable.
  2. Never use production data in UAT — Violates GDPR, PCI-DSS, HIPAA, and SOC 2; use anonymized or synthetic data instead.
  3. UAT environment must mirror production — Use IaC to ensure same architecture, configuration, and security controls; ephemeral environments prevent drift.
  4. Sign-off is a formal governance gate — Requires written approval from PM, Business Owner, and QA Lead; verbal and Slack approvals do not count.
  5. Sign-off resets if any code changes — Even “just a config change” invalidates sign-off; impacted test cases must be re-executed.
  6. Bidirectional traceability proves coverage — Forward (requirement to test) and backward (test to requirement) navigation reveals gaps immediately.
  7. AI accelerates but does not replace validation — AI may hallucinate test cases for features that do not exist; every generated case must map to actual requirements.

Important Definitions

| Term | Definition |
| --- | --- |
| UAT | User Acceptance Testing — validates the system meets business requirements from the user perspective |
| CIS 16.8 | Mandates separate environments for production and non-production with controlled developer access |
| Bidirectional Traceability | Matrix linking requirements to tests (forward) and tests to requirements (backward) |
| Data Anonymization | Transforms production data to remove identifying info while preserving characteristics |
| Synthetic Data | Generated from scratch based on data models with zero connection to real individuals |
| Entry Criteria | Conditions that must be met before UAT begins (all tests passing, environment ready, data loaded) |
| Exit Criteria | Conditions for UAT completion (≥95% pass rate, zero critical/high defects, formal sign-off) |
| Production Parity | Non-production environments mirroring production in architecture, configuration, and security |
| OWASP SAMM Level 2 | Formal test plan with traceability, entry/exit criteria, sign-off, and data anonymization |
| Ephemeral Environments | Created from IaC for each release cycle, destroyed after sign-off to prevent drift |

Quick Reference

  • Exit Criteria: 100% executed, ≥95% pass rate, zero critical/high open, all medium documented, formal sign-off
  • Required Signatories: Product Manager, Business Owner/Sponsor, QA Lead (minimum)
  • Data Solutions: Anonymization (transforms real data), Synthetic (generated from scratch), Subset (representative sample)
  • SAMM Target: Level 2 baseline, Level 3 with AI acceleration
  • Common Pitfalls: Using production data in test environments, verbal-only sign-off, skipping entry criteria under pressure, not maintaining traceability matrix, environment drift from persistent environments

Review Questions

  1. Construct a compliance-based argument against using production data in UAT, referencing GDPR, PCI-DSS, and HIPAA specifically.
  2. Your UAT uses a different database version than production and a critical query fails after deployment — what process failure allowed this and how would you prevent it?
  3. A business owner verbally approves release but later denies it after a production defect — what governance gap does this expose?
  4. How would you use AI to generate a test plan for a healthcare portal while ensuring HIPAA compliance in the AI output?
  5. Design entry criteria for a UAT cycle for a financial wire transfer application with at minimum 8 criteria.
Knowledge Check
Q1. What does CIS Safeguard 16.8 mandate regarding environments?

Q2. Why should UAT environments never use real production data?

Q3. What is the key difference between data anonymization and synthetic data generation?

Q4. What happens to UAT sign-off if any code changes are made to the signed-off release?

Q5. Which three roles are required as minimum signatories for formal UAT sign-off?

Q6. What does forward traceability in a bidirectional traceability matrix answer?

Q7. According to the UAT exit criteria, what is the minimum required test pass rate?

Q8. What is the primary risk of using AI to generate UAT test plans?

Q9. What is the recommended approach to prevent environment drift between UAT and production?

Q10. At what OWASP SAMM maturity level does this module set the baseline target?
