#5 Privacy Is a Spectrum, Not a Switch

Privacy breaks at enforcement, not at encryption

Most privacy debates still ask for a verdict: is the system private or not, anonymous or not. That framing works right up until the moment privacy stops being a product claim and becomes an enforcement question.

A withdrawal gets blocked. An issuer gets pressured. A wallet provider gets a request. A relay gets asked to “do the right thing.” From that point on, the relevant question is not what the system can hide. It’s what the system needs to know to decide what happens next.

That’s why privacy systems often disappoint users and alarm regulators at the same time. They hide what is easy to hide while leaking what is structurally necessary to operate. They protect content and expose context.

When the guarantee doesn’t match the use case, the system compensates the only way it can: bigger pools, more intermediaries, more monitoring, more exceptions, more friction. Privacy becomes a product story when it should have been a specification.

The spectrum is three dimensions, not one slider

Most systems treat privacy as a set of techniques: shield the transaction, encrypt the state, obfuscate the route. Those are mechanisms. They aren’t a model.

A model starts with two questions: what can observers learn, and what must the system decide? In practice, three different problems keep getting called “privacy,” and the confusion is where the failures begin.

Identity privacy (who): can actions be tied to a person, or even to a stable pseudonym?

When this leaks, optional disclosure becomes permanent labeling.

State privacy (what): can observers learn balances, positions, holdings, or private state?

When this leaks, holdings become an attack surface: economic, social, and sometimes physical.

Activity privacy (how/when): can observers learn timing, sequences, counterparties, routing, retries?

When this leaks, behavior becomes governable. Patterns become enough.

A system can be strong on one dimension and weak on another. Many are.
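
To make that concrete, here is a minimal sketch in TypeScript. The type names and the ratings assigned to each system are illustrative assumptions, not measurements; the point is that each dimension gets its own rating instead of one slider.

```typescript
// Illustrative exposure levels only; real systems need finer-grained analysis.
type Exposure = "protected" | "partial" | "exposed";

interface PrivacyProfile {
  identity: Exposure; // who: can actions be tied to a person or stable pseudonym?
  state: Exposure;    // what: are balances, positions, holdings visible?
  activity: Exposure; // how/when: are timing, counterparties, routing visible?
}

// A shielded pool, rated per dimension rather than "private or not":
const shieldedPool: PrivacyProfile = {
  identity: "partial",  // entry and exit still touch public addresses
  state: "protected",   // balances inside the pool are hidden
  activity: "exposed",  // when and how the pool is used stays readable
};

// A mixer scores differently on each axis:
const mixer: PrivacyProfile = {
  identity: "protected", // direct input-output linkage is broken
  state: "exposed",      // fixed denominations keep amounts visible
  activity: "partial",   // timing and retries can still correlate users
};
```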

Shielded pools often protect state while activity stays readable. Mixers often break linkage while timing still leaks. Private RPCs can hide origin while concentrating knowledge in new intermediaries. Wallets can preserve custody while leaking intent through submission paths, because when the rest of the stack is underspecified, wallets become the trust boundary by default.
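
The mixer case is worth unpacking, because "timing still leaks" is the least intuitive of these. Here is a hedged sketch of what an observer can do with nothing but public timestamps. The data shapes are hypothetical, and no cryptography is broken anywhere in it:

```typescript
interface PoolEvent {
  id: string;
  timestamp: number; // epoch milliseconds, publicly visible on-chain
}

// Rank likely deposit-withdrawal links using only public timing.
function likelyLinks(
  deposits: PoolEvent[],
  withdrawals: PoolEvent[],
  windowMs: number,
): Array<[string, string]> {
  const links: Array<[string, string]> = [];
  for (const w of withdrawals) {
    // Candidate deposits: those landing within the window before this withdrawal.
    const candidates = deposits.filter(
      (d) => d.timestamp <= w.timestamp && w.timestamp - d.timestamp <= windowMs,
    );
    // If exactly one deposit is plausible, the anonymity set has collapsed
    // to one: linkage is broken on paper but restored by behavior.
    if (candidates.length === 1) {
      links.push([candidates[0].id, w.id]);
    }
  }
  return links;
}
```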

The failure mode is not that privacy is “hard.” It’s that we treat these dimensions as interchangeable and then act surprised when different stakeholders evaluate different things.

Users usually want confidentiality: don’t expose what I have or what I’m doing.

Protocols want composability: interoperate without importing new trust.

Compliance wants verifiability: prove constraints without learning everything else.

Those goals can coexist. They only collide when the architecture is vague about what is being protected, and where enforcement is allowed to live.

Ambiguity is what makes privacy collapse under pressure

Privacy fails first as an undefined promise.

When “privacy” isn’t specified, stakeholders project their own expectations onto it. The system fills the gap with whatever survives pressure.

That pressure is always the same. Someone, somewhere, needs to make a decision: allow, deny, delay, freeze, restrict, report. If the system can’t support that decision cleanly, it reaches for the fallback.

Observation is that fallback.

If constraints can’t be proven at the moment they matter, they get inferred from behavior. If they get inferred from behavior, somebody has to watch.

Can’t prove a withdrawal is allowed? Watch behavior and infer.

Can’t prove a flow fits policy? Watch the graph and infer.

Can’t prove a counterparty is safe? Watch patterns and infer.
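
In code, the contrast between the two decision paths looks something like the sketch below. The heuristic is deliberately crude and every name is hypothetical; the structural point is what each function needs as input.

```typescript
type Decision = "allow" | "deny" | "review";

interface Observed {
  counterparty: string;
  amount: number;
  hourOfDay: number;
}

// Fallback path: no proof exists, so the system absorbs context and infers.
function decideByInference(history: Observed[]): Decision {
  // A toy risk heuristic. The details don't matter; the input does:
  // it needs the whole behavioral history to produce a guess.
  const offHours = history.filter((o) => o.hourOfDay < 6).length;
  const largeFlows = history.filter((o) => o.amount > 10_000).length;
  return offHours + largeFlows > 3 ? "review" : "allow";
}

// Enforcement path: the constraint is proven at execution time.
function decideByProof(proofValid: boolean): Decision {
  // One bit in, one decision out. No history, no graph, no patterns.
  return proofValid ? "allow" : "deny";
}
```

The first function's appetite for context grows with its uncertainty. The second needs a single bit.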

Over time, privacy stops being a property the system defends and becomes a risk the system prices. It turns into friction, tiering, and exceptions. Not because anyone set out to punish privacy, but because the system cannot separate “privacy for participants” from “uncertainty for operators” without a better enforcement primitive.

This is the point where people start talking about privacy as if it’s a moral debate. It isn’t. It’s an architecture dispute over what the system is allowed to learn in order to function.

Legibility becomes leverage

Once behavior is legible, it stops being neutral.

If others can see your intent, they can price against it.

If they can see your history, they can profile it.

If they can see your flows, they can gate access around them.

This is the part many privacy discussions miss because it sounds abstract until you’ve lived it. Markets don’t just observe information. They use it. Visibility turns into advantage. Advantage turns into extraction. Then extraction gets rebranded as “efficiency.”

That’s why “more transparency” is not automatically “more integrity.” In many contexts it is simply more surface area for intermediaries, sophisticated traders, and infrastructure providers to act on what less sophisticated participants cannot see in time.

And it’s why anonymity is often a workaround, not the goal.

A crowd is what you need when the system can’t enforce constraints without interpretation. You hide inside statistical cover because the system has no other way to let you participate without learning too much about you.

That’s not a stable foundation for a financial rail. It’s an admission that the enforcement layer is missing.

Verifiability stabilizes the boundary

The right starting point is not “what can we hide?” It’s “what can the system avoid learning?”

Privacy scales when systems need less information to make the decisions that matter. This isn’t preference. It’s system integrity. If a rail can’t make decisions without absorbing context, it will accumulate context until it can.

Verifiability is how you stop that drift.

When constraints are enforceable at execution, interpretation stops being the system’s crutch. The system no longer needs to learn from patterns in order to decide what to allow.

That changes the economics immediately. If the rules can be enforced without broad disclosure, participants don’t need to expose themselves to get access. Flows can remain confidential without becoming suspicious by default. Integrations become simpler because “what must be known” is explicit, and “what must remain private” is defensible.
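
As a sketch of what “explicit” means here: a request type that separates what the rail must learn from what stays with the prover. Everything below is hypothetical, and verifyProof is a placeholder for whatever proof system a real rail would bind to, not an actual API.

```typescript
// Hypothetical request shape for a shielded withdrawal.
interface ShieldedWithdrawal {
  // What must be known: the only public facts the rail needs to decide.
  publicInputs: {
    nullifier: string;  // prevents double-spends without revealing the source note
    policyRoot: string; // commitment to the rule set the withdrawal satisfies
  };
  // What must remain private: the proof attests to a hidden witness
  // (identity, balance, history) without disclosing any of it.
  proof: Uint8Array;
}

// Placeholder verifier; a real rail would bind this to its proof system.
declare function verifyProof(
  proof: Uint8Array,
  publicInputs: { nullifier: string; policyRoot: string },
): boolean;

function decide(req: ShieldedWithdrawal): "allow" | "deny" {
  // The decision consumes the proof, not the participant's context.
  return verifyProof(req.proof, req.publicInputs) ? "allow" : "deny";
}
```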

This is what makes a point on the spectrum stable: proof replaces inference. Boundaries stay clean. Privacy stops decaying into process and discretion.

It also clarifies where durable power will accumulate if you do not do this work. Moats form around decision points, not cryptography. Authority concentrates where denial is possible: interfaces, relays, issuers, off-ramps. If privacy isn’t specified and enforceable, it will be quietly redefined by whoever sits at those chokepoints.

A spectrum isn’t a compromise. It’s control.

If you don’t choose which dimension you’re optimizing for, enforcement will choose for you.

One thing to remember

Privacy isn’t “more” or “less.” It’s which dimension you reveal.

When privacy isn’t specified, it gets filled in by enforcement.

Forward this to someone still treating privacy like a switch.
