Privacy in digital identity is protected through multiple mechanisms: statutory requirements, technical safeguards, civil liberties frameworks, and governance structures. Together, these create a system of overlapping protections that ensure digital identity serves individuals rather than enabling surveillance.
Statutory protections
Laws establish binding privacy requirements. Utah Code § 63A-16-1202 provides the clearest example, enshrining specific protections into statute: no phone-home tracking (issuers cannot log when credentials are used), no forced device handover (holders never surrender their phones to verifiers), selective disclosure (holders choose what attributes to share), physical credentials remain valid (digital is always optional), and no remote kill-switch (credentials cannot be disabled without due process).
State consumer privacy laws like CCPA, CPRA, VCDPA, and Utah's Consumer Privacy Act create rights to know what data is collected, request deletion, opt out of sales, and correct inaccurate information. These laws apply to digital identity systems and create legal accountability for privacy violations.
Technical enforcement
Technology can enforce privacy protections in ways that policy promises cannot. The distinction between systems that cannot track and systems that merely promise not to track is fundamental. For example:
Selective disclosure through cryptographic techniques like BBS+ signatures allows holders to reveal only specific attributes while keeping everything else hidden. The bartender sees "over 21" but not your name, address, or birthdate.
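The idea behind selective disclosure can be sketched without BBS+ machinery. The toy below uses salted hashes in the style of SD-JWT rather than BBS+ signatures, and all function names are illustrative: the issuer signs only per-attribute digests, the holder reveals one attribute plus its salt, and the verifier recomputes the digest. Unrevealed attributes stay hidden.

```python
import hashlib
import secrets

def commit(attributes):
    """Issuer side: salt and hash each attribute; only the digests are signed."""
    salts = {k: secrets.token_hex(16) for k in attributes}
    digests = {
        k: hashlib.sha256(f"{salts[k]}|{k}|{v}".encode()).hexdigest()
        for k, v in attributes.items()
    }
    return salts, digests

def disclose(attributes, salts, keys):
    """Holder side: reveal only the chosen attributes, with their salts."""
    return {k: (attributes[k], salts[k]) for k in keys}

def verify(disclosed, digests):
    """Verifier side: recompute each digest against the signed credential."""
    return all(
        hashlib.sha256(f"{salt}|{k}|{v}".encode()).hexdigest() == digests[k]
        for k, (v, salt) in disclosed.items()
    )

attrs = {"over_21": "true", "name": "Alice", "address": "123 Main St"}
salts, digests = commit(attrs)
shared = disclose(attrs, salts, ["over_21"])  # the bartender sees only this
print(verify(shared, digests))   # True
print("name" in shared)          # False: name was never transmitted
```

Real deployments replace the plain hash commitment with a signature scheme such as BBS+, which additionally makes each presentation unlinkable; the data-hiding property illustrated here is the same.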
Zero-knowledge proofs go further, allowing holders to prove statements without revealing underlying data. You can prove you're not on a sanctions list without giving anyone your full identity record.
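A minimal concrete example of this "prove without revealing" pattern is a Schnorr proof of knowledge of a discrete logarithm, made non-interactive with the Fiat-Shamir transform. The sketch below uses deliberately tiny, insecure parameters for readability (p = 23); credential systems use large vetted groups and far richer proof systems, but the shape is the same: the verifier checks an equation involving only public values and never sees the secret.

```python
import hashlib
import secrets

# Toy parameters: p = 23 is a safe prime (p = 2q + 1, q = 11) and g = 2
# generates the order-q subgroup. Wildly insecure sizes, for illustration only.
p, q, g = 23, 11, 2

def prove(x):
    """Prove knowledge of x with y = g^x mod p, revealing only y."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)       # one-time random nonce
    t = pow(g, r, p)               # commitment
    c = int(hashlib.sha256(f"{y}:{t}".encode()).hexdigest(), 16) % q  # Fiat-Shamir challenge
    s = (r + c * x) % q            # response binds nonce, challenge, and secret
    return y, t, s

def verify(y, t, s):
    c = int(hashlib.sha256(f"{y}:{t}".encode()).hexdigest(), 16) % q
    # Holds iff the prover knew x, since g^s = g^(r + c*x) = t * y^c
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(q)   # the holder's secret
y, t, s = prove(x)
print(verify(y, t, s))     # True: the verifier learns y, never x
```

A sanctions-list check works the same way in spirit: the statement being proved is "my identifier is not in this set," and the verifier checks the proof without ever receiving the identity record itself.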
Pairwise identifiers prevent verifiers from correlating your credentials across contexts. Each verifier interaction uses different key material, making it impossible to build profiles by linking presentations.
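Pairwise identifiers can be derived deterministically from a single wallet-held secret, so the holder manages one key while each verifier sees a different, stable pseudonym. The sketch below uses an HMAC over the verifier's identifier; the secret value and verifier URLs are hypothetical.

```python
import hashlib
import hmac

def pairwise_id(master_secret: bytes, verifier_id: str) -> str:
    """Derive a stable, verifier-specific identifier from one master secret.

    The same verifier always sees the same identifier, but two verifiers
    cannot link their views without the master secret.
    """
    return hmac.new(master_secret, verifier_id.encode(), hashlib.sha256).hexdigest()

secret = b"holder-master-secret"  # hypothetical wallet-held secret
bar = pairwise_id(secret, "https://bar.example")
bank = pairwise_id(secret, "https://bank.example")
print(bar == pairwise_id(secret, "https://bar.example"))  # True: stable per verifier
print(bar == bank)                                        # False: unlinkable across verifiers
```

Because the derivation is one-way, even colluding verifiers who pool their logs see only unrelated pseudonyms, which is what prevents cross-context profile building.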
Civil liberties frameworks
Organizations such as the ACLU, EFF, and EPIC have developed frameworks that define what privacy protection should entail in digital identity systems. The ACLU's twelve essential safeguards for state digital ID programs have become a reference point for policymakers.
These frameworks emphasize that privacy is neither optional nor secondary; it must be embedded from the start. Once surveillance capabilities are built into technical standards, they become nearly impossible to remove. The "No Phone Home" campaign, urging DHS and TSA to avoid centralized logging, reflects this principle: design for privacy first.
Data minimization
The principle of data minimization pervades privacy protection. Credentials should contain only necessary attributes. Verifiers should request only what they need. Systems should retain only what is required and discard everything else.
Machine-readable personal data licenses can attach usage terms to credential presentations, specifying "one-time use only" or "no retention beyond this session," creating enforceable limits on how data can be used.
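What "machine-readable" means in practice is that the usage terms travel with the presentation as structured data a verifier's systems can evaluate automatically. The field names and structure below are hypothetical, not drawn from any published license vocabulary; the point is that a retention check becomes a program, not a policy document.

```python
import time

# Hypothetical license terms attached to a credential presentation.
presentation = {
    "disclosed": {"over_21": True},
    "license": {
        "purpose": "age-verification",
        "one_time_use": True,
        "retain_until": time.time() + 300,  # discard within five minutes
        "resale_permitted": False,
    },
}

def may_retain(presentation, now=None):
    """Verifier-side check: consult the attached license before storing data."""
    terms = presentation["license"]
    now = time.time() if now is None else now
    return now <= terms["retain_until"] and not terms["resale_permitted"] is None

print(may_retain(presentation))                          # True within the window
print(may_retain(presentation, now=time.time() + 600))   # False once expired
```

Paired with the verifier obligations discussed below, checks like this turn "no retention beyond this session" from a promise into an auditable control.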
Verifier obligations, including requirements for minimal collection, short retention, and breach insurance, align financial incentives with privacy protection. Organizations that hold less data face less liability.
The right to choice
Perhaps the most fundamental privacy protection is the right to choose. Digital identity must remain voluntary. Physical credentials must remain valid alternatives. No one should be forced into a digital-only world.
This principle recognizes that privacy protection isn't just about limiting what systems can do; it's about ensuring individuals retain agency over their own identity. You decide whether to use verifiable digital credentials, what to share when you do, and with whom. The technology and policy exist to serve that choice, not to override it.
