Systems effects: what the models do not capture¶
Threat models are reductive by design. They name assets, adversaries, objectives, and impacts in order to make the problem tractable. The cost is that they miss effects that are diffuse, slow-moving, or emergent, or that operate across multiple layers simultaneously.
This page is an attempt to name some of those effects. It does not offer mitigations for all of them. Some are structural conditions, not problems that any individual actor or organisation can solve. Naming them matters because ignoring them produces a map of the problem that is too small, and decisions made on too small a map tend to optimise locally while the larger system continues to degrade.
The chilling effect as a systemic output¶
Each of the three threat models produces a chilling effect. Surveillance of activists changes what activists say and where they say it. Deanonymisation risk changes what people post, what they search for, and what groups they join. The risk of partner surveillance changes what survivors disclose and to whom.
These are not separate chilling effects. They compound. A civil society researcher who is also a member of a marginalised group, who lives with a partner, and who uses commercial platforms is not experiencing three separate threat models. They are experiencing a single environment in which the cost of visibility is high and the cost of caution accumulates into a progressively smaller life.
The aggregate effect of this across a population is not measurable in individual incident statistics. It shows up in what does not happen: the disclosures not made, the organisations not joined, the research not published, the campaigns not started. The absence is the harm, and absence is very difficult to count.
The epidemiological framing applies: surveillance at population scale produces population-level behavioural effects that are invisible in any single case but visible in aggregate trends over time. These effects are documented in the research literature and routinely excluded from cost-benefit analyses of surveillance programmes.
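One way to see why this is true is a toy simulation. The sketch below uses invented parameters, not figures from the research literature; its point is only the shape of the effect: a per-person behavioural shift indistinguishable from noise in any single case produces a clear signal at population scale.

```python
import random

# Toy sketch with invented parameters: a per-person chilling effect too
# small to detect in any individual case still produces a visible
# population-level trend.
random.seed(0)

N = 100_000          # population size
BASELINE_P = 0.30    # chance a person makes a sensitive post in a month
CHILL = 0.02         # per-person reduction under perceived surveillance

def monthly_posts(p: float) -> int:
    """Count how many of the N people post in one simulated month."""
    return sum(random.random() < p for _ in range(N))

before = monthly_posts(BASELINE_P)
after = monthly_posts(BASELINE_P - CHILL)

# A two-point shift is invisible in any one person's behaviour; across
# the population it is thousands of posts that never happen.
print(f"posts before: {before}, after: {after}, missing: {before - after}")
```

The missing posts are the absence described above: real in aggregate, uncountable as individual incidents.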
Trust as infrastructure¶
Every functional system depends on trust. Not as a sentiment, but as the condition under which information flows, cooperation is possible, and action can be coordinated.
The surveillance threat model describes a dynamic in which the mere perception of surveillance capability, independent of whether it is actively exercised, produces behaviour change. Systems under perceived threat shift their communication patterns in ways that reduce the quality and completeness of information flow.
This applies at every scale. Citizens who do not trust that their communications are private will not share information that enables collective action. NGOs that suspect their internal communications are monitored will not plan effectively. Journalists who believe sources may be identifiable through metadata will lose sources. Research collaborators who face legal uncertainty will avoid the most sensitive topics.
Trust, once damaged, recovers slowly and unevenly. Once a surveillance programme is revealed, trust does not simply reset to the pre-revelation baseline when the programme is ended or reformed. The revealed fact of capability changes the rational assessment of what is possible, and that assessment does not go away when the specific programme does. Snowden’s 2013 disclosures changed how a significant portion of the technology-using public thinks about communications infrastructure. That change persists.
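The asymmetry is easy to state as a toy model. In the sketch below, with constants invented purely for illustration, trust drops sharply at the moment of revelation and then recovers slowly toward a ceiling below the original baseline, because the revealed capability stays in the rational assessment.

```python
# Toy asymmetric trust dynamics (constants invented for illustration):
# a sharp drop on revelation, slow recovery afterwards, and a lowered
# ceiling because the capability is now known to exist.

def simulate(periods: int, revelation_at: int) -> list[float]:
    trust = 1.0        # pre-revelation baseline
    ceiling = 1.0      # the best trust can recover to
    history = []
    for t in range(periods):
        if t == revelation_at:
            trust *= 0.4     # sharp drop when the programme is revealed
            ceiling = 0.8    # full reset is off the table
        # slow recovery: close a small fraction of the gap to the ceiling
        trust += 0.05 * (ceiling - trust)
        history.append(round(trust, 3))
    return history

print(simulate(periods=40, revelation_at=5))
```

The point of the sketch is the shape, not the numbers: recovery is gradual, and complete recovery is unavailable once the capability is public knowledge.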
Epistemic effects¶
The cumulative result of self-censorship at scale is a distortion of what is known, what is published, what is researched, and what is publicly argued.
Academic researchers who face surveillance risk from funders, governments, or corporate interests do not simply continue their work unchanged and hope for the best. They make decisions about what to study, where to publish, and what to say at conferences based on a rational assessment of risk. The research that does not get done because the risk calculus is unfavourable is invisible as a gap unless you can identify what should have been there.
Journalism is a sharper case. The chilling effect on journalism from surveillance of sources is better documented, because the absence of source-based journalism in some contexts is more visible than the absence of a particular type of academic paper. But the mechanism is the same: surveillance capability shapes what can be reported, which shapes what is known, which shapes what can be debated and decided.
Democratic deliberation depends on the quality of information available. Surveillance that degrades information quality or reduces the range of perspectives in circulation is not only a privacy problem. It is a structural problem for the systems that depend on informed public discourse to function.
Algorithmic amplification as a compounding layer¶
The commercial data layer does not only collect data. It processes that data and uses it to shape what people see: what is surfaced, what is amplified, and what is suppressed.
The interaction between surveillance infrastructure and algorithmic curation produces effects that neither model captures alone. Profiles built from behavioural surveillance are used to determine what content each person sees. This creates feedback loops in which the data collected shapes the environment that generates more data.
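A minimal sketch makes the loop concrete. This is not any platform's actual ranking algorithm; it is the simplest possible reinforcement dynamic, with invented weights, showing how a profile that starts with no preference at all still hardens into one.

```python
import random

# Minimal feedback-loop sketch (not any platform's actual algorithm):
# the profile decides what is amplified, the amplification generates
# the engagement data, and that data updates the profile.
random.seed(1)

TOPICS = ["a", "b", "c", "d"]
profile = {t: 0.25 for t in TOPICS}   # start with no inferred preference

def pick_amplified(profile: dict[str, float]) -> str:
    """Choose a topic to surface, weighted by the current profile."""
    topics, weights = zip(*profile.items())
    return random.choices(topics, weights=weights)[0]

for _ in range(200):
    shown = pick_amplified(profile)
    profile[shown] += 0.05            # engagement reinforces what was shown
    total = sum(profile.values())
    profile = {t: w / total for t, w in profile.items()}   # renormalise

# An initially uniform profile ends up dominated by whichever topic was
# amplified early: the data collected shaped the environment that
# generated more of the same data.
print({t: round(w, 2) for t, w in profile.items()})
```

This is a rich-get-richer process: small, random early amplification compounds over iterations, which is the feedback loop in miniature.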
The political implications of this are substantial. Targeted amplification of content based on inferred psychological profiles is not a neutral advertising technology. It is an infrastructure for shaping political beliefs and behaviour at scale. The fact that this infrastructure is commercially operated and commercially motivated does not make its political effects less real.
The homeostatic trap¶
A recurring pattern in complex systems applies directly here: systems resist change that would reduce the function they are optimised for, even when that function is harmful.
The surveillance economy has become a fundamental layer of the commercial internet. Advertising models built on behavioural data drive revenue for platforms that have become essential infrastructure. Disrupting the surveillance model disrupts the revenue model, which disrupts the services. This is the homeostatic trap: the system resists reform not because of any individual bad actor but because reform threatens the equilibrium on which many actors depend.
Working at leverage points matters more than working at symptoms. Regulatory action that changes what the system is rewarded for, rather than action that addresses individual harms case by case, is the intervention that can actually move the equilibrium.
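The difference between the two kinds of intervention can be sketched in a few lines. In the toy model below (all numbers invented), the system drifts toward whatever its reward structure rewards: suppressing the symptom decays back to the old equilibrium, while changing the reward moves the equilibrium itself.

```python
# Toy homeostasis sketch (numbers invented): the system drifts toward
# whatever its reward structure rewards. Patching the state ("fixing a
# symptom") decays back; changing the reward moves the equilibrium.

def step(state: float, rewarded_target: float) -> float:
    """One unit of system behaviour: drift 10% of the way to the target."""
    return state + 0.1 * (rewarded_target - state)

rewarded_target = 100.0   # e.g. volume of behavioural data collected
state = 100.0             # system sits at its rewarded equilibrium

# Symptom-level intervention: force the state down, leave the reward alone.
state = 50.0
for _ in range(30):
    state = step(state, rewarded_target)
print(f"after symptom fix: {state:.1f}")    # drifts back toward 100

# Leverage-point intervention: change what the system is rewarded for.
rewarded_target = 20.0
for _ in range(30):
    state = step(state, rewarded_target)
print(f"after reward change: {state:.1f}")  # settles near 20
```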
Cascading model failures¶
At the scale of surveillance governance, technically sound recommendations are systematically blocked by the political interests that benefit from the status quo.
This produces cascading model failures. Policy is made on models that are incomplete or outdated. Decisions based on those models produce outcomes that the models predicted were unlikely. The outcomes are attributed to implementation failures or external factors rather than to the models themselves. The models are not updated. The cycle continues.
The corrective is to treat recurring policy failures as evidence that the model is wrong rather than as evidence that the implementation was insufficient. If data protection regulation has repeatedly failed to contain the commercial surveillance market, the question is not how to write better regulation of the same design. The question is what assumption in the current regulatory model is producing this outcome consistently.
The question of who benefits¶
Every system analysis eventually arrives at the question of who benefits from the current state, because that is where the resistance to change is located.
The commercial data economy produces significant revenue for a small number of very large platforms. State surveillance programmes produce intelligence that is of value to security services and to the political principals they serve. The data broker market provides services that are purchased by actors ranging from advertisers to law enforcement to private investigators.
The people who bear the costs of these systems (those whose data is extracted, whose behaviour is shaped, and whose autonomy is constrained) have less structural influence over the continuation of those systems than the people who benefit from them. This asymmetry is not an accident. It is a feature of the political economy of data, and addressing it requires changing who has standing, who has access to information, and who has the capacity to act at the scale where the decisions are made.
That is, ultimately, a political project. The greenhouse metaphor has a limit here. You can tend your own greenhouse carefully. You can share what you grow with your neighbours. But the landscaping decisions for the larger garden are made somewhere else, by people with different interests, and the best individual greenhouse management in the world does not change what happens to the soil.