System Design Space

Updated: March 15, 2026 at 8:01 PM

Threat Modeling: STRIDE and LINDDUN


Practical threat modeling for security and privacy: DFD, STRIDE/LINDDUN, and prioritization of architectural controls.

This Theme 12 chapter focuses on STRIDE/LINDDUN threat-modeling methods and risk prioritization.

In real-world system design, this material supports security-by-design: explicit trust boundaries, control requirements, and operational response patterns.

For system design interviews, the chapter provides structured security reasoning: how threats are identified, why controls are chosen, and how residual risk is evaluated.

Practical value of this chapter

Design in practice

Use guidance on STRIDE/LINDDUN threat-modeling methods and risk prioritization to define architectural security requirements before implementation starts.

Decision quality

Validate solutions through threat model, security invariants, and production control operability, not compliance checklists alone.

Interview articulation

Frame answers as threat -> control -> residual risk, linking business scenario to concrete protection mechanisms.

Trade-off framing

Make trade-offs explicit for STRIDE/LINDDUN threat-modeling methods and risk prioritization: UX friction, latency overhead, operational cost, and compliance constraints.

Context

OWASP Top 10 in the context of System Design

OWASP tells you what to protect, while STRIDE/LINDDUN structures how to find and prioritize threats.


Threat modeling is an engineering practice that turns security and privacy discussion into concrete architecture decisions. In this chapter we use STRIDE for security risks and LINDDUN for privacy risks to produce an actionable control backlog.

When to use STRIDE, LINDDUN, or both

STRIDE for security threats

Best for classic security risks to authenticity, integrity, and availability: spoofing, tampering, repudiation, information disclosure, DoS, and privilege escalation.

LINDDUN for privacy threats

Focused on privacy risk categories: linkability, identifiability, detectability, personal-data disclosure, user unawareness, and policy non-compliance.

STRIDE + LINDDUN for products with PII

Use both when you need to control business security risks and personal-data privacy risks at the same time.

STRIDE: threat classes and architecture controls

S: Spoofing

Key question: Who can impersonate a trusted principal?

Threat example: Service account impersonation to call internal APIs.

Controls: MFA, mTLS, signed tokens, device/workload identity.

T: Tampering

Key question: Where can data be modified without authorization?

Threat example: Payload manipulation between BFF and backend service.

Controls: Message signatures, checksums, immutable logs, strict authz.

R: Repudiation

Key question: Who can deny a performed action?

Threat example: User disputes a payment event without a provable audit trail.

Controls: Correlated audit logs, timestamping, request IDs, non-repudiation evidence.
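A hash-chained audit trail is one way to produce non-repudiation evidence: each entry commits to its predecessor, so silent edits or deletions break the chain. This is a self-contained sketch, not a specific product's format:

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only audit trail; each entry's hash covers the previous hash,
    making after-the-fact tampering detectable."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, request_id: str, actor: str, action: str) -> dict:
        entry = {
            "request_id": request_id,
            "actor": actor,
            "action": action,
            "ts": datetime.now(timezone.utc).isoformat(),
            "prev": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify_chain(self) -> bool:
        """Recompute every hash; False means an entry was edited or removed."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev:
                return False
            if hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

With this in place, a disputed payment event can be tied to a request ID, actor, and timestamp whose integrity is checkable.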

I: Information Disclosure

Key question: Which data can leak?

Threat example: PII exposed in logs and traces with broad access.

Controls: Data classification, masking, encryption, policy-based access control.
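Masking classified fields before they reach logs can be as simple as the sketch below. The field names are illustrative; in practice the sensitive set would come from your data classification, not a hardcoded list:

```python
# Illustrative field names; a real system derives these from a data classification.
SENSITIVE_KEYS = {"email", "card_number", "ssn", "phone"}

def mask_record(record: dict) -> dict:
    """Return a copy of a structured log record with classified fields masked."""
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_KEYS and isinstance(value, str):
            # Keep a short prefix for debuggability, mask the rest.
            masked[key] = value[:2] + "***" if len(value) > 2 else "***"
        else:
            masked[key] = value
    return masked
```

Applying this at the logging layer (rather than in each handler) prevents the "PII in logs and traces" leak by default.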

D: Denial of Service

Key question: What can exhaust or disrupt the service?

Threat example: Public API overwhelmed by bot traffic without rate limiting.

Controls: Rate limits, queueing, autoscaling bounds, circuit breakers, WAF.
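The standard building block for the rate-limit control is a token bucket per client. A minimal in-process sketch (a real deployment would keep these counters in a shared store such as Redis, and the rate/burst numbers below are arbitrary):

```python
import time

class TokenBucket:
    """Per-client token bucket: requests beyond the burst budget are refused
    until tokens refill at the configured rate."""

    def __init__(self, rate_per_s: float, burst: int) -> None:
        self.rate = rate_per_s
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Refused requests should get a cheap 429 response so bot floods exhaust the limiter, not the backend.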

E: Elevation of Privilege

Key question: How can an attacker gain higher privileges?

Threat example: IDOR path escalating from user role to admin capabilities.

Controls: Least privilege, hop-by-hop policy checks, permission boundaries, regular reviews.
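The IDOR example above is blocked by a deny-by-default, object-level check rather than a route-level role check. A minimal sketch with illustrative roles:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Principal:
    user_id: str
    role: str  # "user" or "admin"; illustrative role set

def can_read_profile(principal: Principal, profile_owner_id: str) -> bool:
    """Deny by default: a plain user may only read their own profile,
    so swapping the profile ID in the URL does not escalate access."""
    if principal.role == "admin":
        return True
    return principal.user_id == profile_owner_id
```

The key design choice is that the check takes the *target object's* owner, not just the caller's role, which is exactly what route-level RBAC misses.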

LINDDUN: privacy threats and mitigations

L: Linkability

Key question: Can separate user actions be linked together?

Risk example: Single device identifier links behavior across contexts.

Controls: Pseudonymization, rotating identifiers, dataset separation.
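Rotating, context-scoped identifiers can be derived with a keyed hash, so the same user gets different IDs per context and per rotation window. A sketch assuming a daily rotation and a managed secret (both are illustrative choices):

```python
import hashlib
import hmac
from datetime import date

SECRET = b"pseudonym-key"  # illustrative; real systems load a managed secret

def pseudonym(user_id: str, context: str, day: date) -> str:
    """Derive a pseudonymous ID that differs per context and rotates daily,
    so events cannot be linked across contexts or long time windows."""
    msg = f"{user_id}|{context}|{day.isoformat()}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]
```

Without the key, an observer cannot join the analytics stream to the ads stream, which is the linkability threat in the device-identifier example.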

I: Identifiability

Key question: Can identity be reconstructed from available data?

Risk example: ZIP + birth date + gender combination re-identifies users.

Controls: Minimization, aggregation, k-anonymity approaches, differential privacy where relevant.
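The ZIP + birth date + gender example is exactly what a k-anonymity check guards against: every combination of quasi-identifiers must be shared by at least k records. A minimal checker (column names are illustrative):

```python
from collections import Counter

def is_k_anonymous(rows: list[dict], quasi_identifiers: list[str], k: int) -> bool:
    """True if every quasi-identifier combination occurs in at least k rows,
    i.e. no record is unique enough to re-identify on those attributes."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in groups.values())
```

A release pipeline can run this before publishing a dataset and generalize or suppress values (e.g. truncate ZIP, bucket birth year) until the check passes.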

N: Non-repudiation

Key question: Does strict provability of user actions conflict with privacy expectations?

Risk example: Detailed operation logs reveal complete user behavior history.

Controls: Policy-driven retention, scoped audit access, purpose limitation.

D: Detectability

Key question: Can an observer detect user presence or sensitive state?

Risk example: Response timing reveals whether an account exists.

Controls: Constant-time responses, generic errors, traffic padding for sensitive flows.
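The account-existence leak is usually closed by doing the same work and returning the same generic error on every failure path. A simplified login sketch (the fixed salt and in-memory user store are illustrative; real systems use per-user salts and a password-hashing KDF such as bcrypt or Argon2):

```python
import hashlib
import hmac

# Illustrative store: username -> salted password hash.
_USERS = {"alice": hashlib.sha256(b"salt" + b"correct-horse").hexdigest()}
_DUMMY_HASH = hashlib.sha256(b"salt" + b"dummy").hexdigest()

def login(username: str, password: str) -> str:
    """Always hash and compare, even for unknown users, and return one
    generic message, so neither timing nor wording reveals whether the
    account exists."""
    stored = _USERS.get(username, _DUMMY_HASH)  # dummy work for unknown users
    candidate = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    if username in _USERS and hmac.compare_digest(stored, candidate):
        return "ok"
    return "invalid credentials"  # identical for bad user and bad password
```

`hmac.compare_digest` is the constant-time comparison; the dummy hash keeps the unknown-user path from returning measurably faster.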

D: Disclosure of Information

Key question: Where can personal data be exposed?

Risk example: Data export endpoint leaks sensitive attributes by default.

Controls: Encryption, redaction, DLP controls, least-privilege data access.

U: Unawareness

Key question: Do users understand what data is collected and why?

Risk example: Silent telemetry collection without clear consent UX.

Controls: Consent UX, just-in-time notices, transparent data usage communication.

N: Non-compliance

Key question: Do processes satisfy legal and policy requirements?

Risk example: Data retained beyond regulatory retention limits.

Controls: Retention policies, legal basis tracking, automated compliance checks.
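Retention automation starts with a sweep that flags records held past their category's limit. A sketch with illustrative categories and limits (a real job would read these from a policy registry, then delete or anonymize the hits and emit a compliance report):

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention limits per data category.
RETENTION = {"profile": timedelta(days=365), "telemetry": timedelta(days=90)}

def overdue_records(records: list[dict], now: datetime) -> list[dict]:
    """Return records held longer than their category's retention limit."""
    return [
        r for r in records
        if now - r["created_at"] > RETENTION[r["category"]]
    ]
```

Running this on a schedule and alerting on a non-empty result turns the "retained beyond limits" risk into an automated check instead of an annual audit finding.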

Threat modeling workflow

1. Define scope and architecture context

Mark system boundaries, user types, trusted/untrusted zones, and business assets with highest impact.

Output: Asset list and trust boundaries.

2. Build DFD and mark trust-boundary crossings

Map data flow between client, API gateway, services, queues, and storage. Mark external dependencies explicitly.

Output: Data Flow Diagram with explicit trust boundaries.

3. Run STRIDE on each DFD element

For process, data store, data flow, and external entity, ask STRIDE questions and log concrete security threats.

Output: Draft security threat register.

4. Run LINDDUN on PII and identity flows

Evaluate privacy risks for identifiers, profiles, analytics events, and user-level data processing flows.

Output: Privacy threat register and compliance gaps.

5. Prioritize risks and assign owners

Score likelihood x impact, choose mitigation strategy, and assign a clear owner for each threat item.

Output: Prioritized remediation backlog.
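The likelihood x impact scoring in step 5 can be made concrete with a small register model (the 1-5 scales and example threats below are illustrative, not a standard):

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (minor) .. 5 (critical)
    owner: str       # every item needs a clear owner

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def prioritized_backlog(threats: list[Threat]) -> list[Threat]:
    """Order the register by likelihood x impact so the riskiest items
    are remediated first."""
    return sorted(threats, key=lambda t: t.score, reverse=True)
```

Keeping owner on the record enforces the "assign owners" part of the step: an unowned threat never enters the backlog.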

6. Embed controls into delivery lifecycle

Connect controls to ADRs, CI/CD gates, observability signals, and incident response runbooks.

Output: Implementation plan with acceptance criteria.

Practical example: checkout + user profile

Checkout API

STRIDE: Spoofing: forged customer token; Tampering: modified payment amount; DoS: endpoint flooding.

LINDDUN: Detectability: account existence leakage via error behavior; Disclosure: payment PII in logs.

Controls: mTLS + signed JWT, idempotency keys, rate limits, masked logs, consistent error responses.
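Of these controls, idempotency keys are the one most often hand-waved, so here is a minimal in-memory sketch (a real checkout would persist keys with a TTL in a shared store and scope them per customer):

```python
class IdempotentCheckout:
    """Deduplicate checkout requests by a client-supplied idempotency key,
    so a retried or replayed request cannot charge the customer twice."""

    def __init__(self) -> None:
        self._seen: dict[str, dict] = {}

    def charge(self, idempotency_key: str, amount_cents: int) -> dict:
        if idempotency_key in self._seen:
            # Replay or retry: return the stored result, perform no new charge.
            return self._seen[idempotency_key]
        result = {"status": "charged", "amount_cents": amount_cents}
        self._seen[idempotency_key] = result
        return result
```

Note that the replay returns the *original* result even if the attacker tampered with the amount, which also narrows the STRIDE tampering threat listed above.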

User Profile Service

STRIDE: Elevation of Privilege: cross-user profile read; Repudiation: missing provable audit trail for updates.

LINDDUN: Linkability: behavioral event correlation; Non-compliance: retention-limit violations for profile data.

Controls: ABAC/ReBAC checks, immutable audit trail, pseudonymous analytics IDs, retention automation.

Required session outputs

  • DFD with explicit trust boundaries and external dependency list.
  • Threat register (STRIDE + LINDDUN) with severity, owner, and remediation deadline.
  • Mandatory control list for architecture and CI/CD integration.
  • Security/privacy test set linked to concrete threat items.
  • Scheduled threat-model review plan for architecture changes.

Typical antipatterns and recommendations

Typical antipatterns

Running threat modeling once as a pre-release checklist ritual.

Using only STRIDE and ignoring privacy risks in products with personal data.

No clear ownership or deadlines, leaving the threat list non-executable.

No model refresh after introducing new integrations or data flows.

Recommendations

Include threat modeling in Definition of Ready for major architecture changes.

Start with a simple DFD and short sessions instead of trying to model everything at once.

Map each high-risk threat to a concrete control and CI/CD test.

Combine STRIDE and LINDDUN when both security and privacy outcomes matter.



© 2026 Alexander Polomodov