Impacts¶
Even the most carefully tended greenhouse can be ruined if someone tampers with the soil, overfeeds the aphids, or simply barges in and starts tagging your petunias. The impacts of privacy breaches and data misuse aren’t just theoretical — they’re personal, structural, and often quietly devastating.
Here’s what can go wrong when the data compost heap is turned over too eagerly:
↑ Data¶
Data is the new fertiliser, and everyone wants more of it. Companies scrape, hoard, and analyse as much as they can, convinced that quantity equals quality. But adding more data to a bad model is like pouring Miracle-Gro on knotweed — the problem just grows faster.
Yes, recommending the right shoe size is handy. But context is everything. Imagine you’re permanently in a wheelchair and, out of curiosity or kindness, browse a product you’d never use yourself. That brief detour now blooms into an advertising campaign across your digital life, pushing products you can’t use. Or you’re a teenage girl, and your browsing of a baby product site triggers unsolicited catalogues sent to your home. A little click, a lot of consequence.
Big data can feel less like insight and more like an invasive species, clogging the ecosystem. The root of the issue? Many assume that more data equals better data. It doesn't.
Sometimes less is more: clean your data, prune the outliers, sample wisely.
Add different features, not just more of the same weeds.
Models with high bias don’t benefit from more training data — they need complexity.
Models with high variance? They're overgrown. Cut features, add regularisation, or feed them more data (see the learning-curve sketch below).
Complex models can become so entangled they can’t scale — the digital equivalent of a vine strangling its own trellis.
In short: if your analytics garden is full of noise, don’t keep turning up the volume. Learn to listen better.
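If you want to check which of those camps a model falls into, the usual diagnostic is a learning curve: plot training and validation scores against training-set size and see whether they converge low (bias) or stay far apart (variance). Below is a minimal sketch with scikit-learn; the synthetic dataset and the plain logistic regression are stand-ins for whatever you are actually growing, not a recommendation.

```python
# Sketch: diagnosing high bias vs high variance with a learning curve.
# The synthetic dataset and simple model are stand-ins for your own.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5, scoring="accuracy",
)

for n, tr, va in zip(sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"n={int(n):5d}  train={tr:.3f}  validation={va:.3f}")

# Rough reading of the two curves:
#  - both scores low and close together -> high bias: more data will not help,
#    the model needs more complexity or better features
#  - a large gap between train and validation -> high variance: prune features,
#    regularise, or let more data close the gap
```

Read that way, the curves tell you whether to reach for more data, better features, or a simpler model before buying another tankerful of fertiliser.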
↑ Bias and discrimination¶
AI was meant to be the rational gardener — pruning with precision, impartial and tireless. Instead, we built topiary nightmares that reflect our own worst habits.
Bias isn’t just a side effect — it’s often built in. The systems are opaque, the data’s messy, and the consequences are real:
Tay, Microsoft’s chatbot, famously turned racist in less than 24 hours after being exposed to the internet. Like a greenhouse fungus, it thrived on what it was fed.
HR algorithms that downgrade CVs from women, minorities, or older applicants because their profiles don’t fit the mould — as if hiring decisions were a matter of preferred soil pH.
Loan models that exclude students from poor areas because their postcodes are “risky”. No loan, no education, no way out — a vicious composting cycle.
DNA test companies handing anonymised health data to insurers. The averages may stay the same, but the premiums certainly don’t — especially for subgroups already under strain.
Models reward the lucky, punish the rest, and rarely apologise. Worse, the solution often presented is to “add a human in the loop”. But humans are where the biases came from. They just taught the machine to be more efficient about it.
Avoiding bias isn’t easy, but it’s not rocket science either:
We have the techniques.
We have the tests (a minimal sketch follows below).
What we lack, apparently, is the incentive. Bias, like poor security in the past, isn't profitable to fix, so we don't. At least not yet.
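By way of illustration of "the tests": a demographic parity check simply compares positive-outcome rates across groups. The sketch below uses invented decisions and group labels; a real audit would look at several metrics and at the pipeline that produced the data, not just one ratio.

```python
# Sketch: a basic demographic parity check on a model's decisions.
# The decisions and group labels are invented for illustration.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1])   # 1 = approved
group = np.array(["A", "A", "A", "A", "A", "A",
                  "B", "B", "B", "B", "B", "B"])

rates = {g: decisions[group == g].mean() for g in np.unique(group)}
parity_gap = max(rates.values()) - min(rates.values())
impact_ratio = min(rates.values()) / max(rates.values())

print(rates)                                 # approval rate per group
print(f"parity gap:   {parity_gap:.2f}")     # absolute difference in rates
print(f"impact ratio: {impact_ratio:.2f}")   # the informal "80% rule" flags < 0.8
```

A dozen lines, no research programme required; the hard part, as ever, is wanting to run it.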
↓ Competition¶
Data has become the prized orchid of the tech world — valuable, delicate, and aggressively protected.
Microsoft bought LinkedIn for $26.2 billion.
IBM acquired Truven Health for $2.6 billion, buying records for over 200 million patients.
Google scooped up résumés and job data from over 200 million people to fine-tune its employment tools.
This isn’t just capitalism in bloom — it’s an arms race. The more data you have, the more you can learn, the more you dominate. It’s the network effect, where the biggest platforms pull in more users, more advertisers, and more data — until moving elsewhere feels impossible.
The result? A landscape where privacy is a luxury, not a standard — because you’re not really the customer. You’re the crop.
↑ Surveillance and tracking¶
Once upon a time, surveillance was about trench coats and binoculars. Now it’s about cookies, clickstreams, and whether your smart toaster knows you skipped breakfast.
We are all being tracked. Always. Some examples are explicit — you click “Accept Cookies” and move on. Others are implicit — no action needed, just existing in digital space is enough:
Implicit data includes your order history, page views, search terms — all logged and repurposed.
First-party tracking is done by the site you’re on. Amazon uses it to suggest products based on what you (and others like you) have browsed.
Third-party tracking follows you across sites, devices, even locations. That Like button? It reports home, even if you don't click it (a toy sketch of the mechanism follows this list).
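To make the mechanism concrete, here is a deliberately toy sketch of a third-party pixel in Python with Flask. The endpoint, cookie name, and in-memory list are hypothetical, but the shape is the real one: an embedded resource that reads its own cookie and the Referer header, so every embedding site reports your visit back to the same tracker.

```python
# Toy sketch of a third-party tracking pixel (hypothetical names throughout).
# Any page embedding <img src="https://tracker.example/pixel.gif"> triggers it.
import uuid

from flask import Flask, make_response, request

app = Flask(__name__)
visits = []  # a real tracker would write to a profile database instead


@app.route("/pixel.gif")
def pixel():
    # The tracker's own cookie identifies you across every site that embeds it.
    visitor_id = request.cookies.get("tracker_id") or str(uuid.uuid4())

    # The Referer header reveals which page you were reading when it fired.
    visits.append({"visitor": visitor_id, "page": request.headers.get("Referer")})

    # Respond with (in a real tracker) a 1x1 transparent GIF and refresh the
    # cookie, so the same identifier follows you to the next embedding site.
    resp = make_response(b"", 200)
    resp.headers["Content-Type"] = "image/gif"
    resp.set_cookie("tracker_id", visitor_id, max_age=60 * 60 * 24 * 365)
    return resp
```

Note there is no click anywhere in that flow: loading the page is enough, which is exactly the point about the Like button.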
With mobile phones, it gets worse:
Your location is constantly tracked. For directions, sure — but also for marketing.
Smart devices watch your habits: what you watch, when you sleep, how often you boil your kettle.
Somatic surveillance tracks your body — heart rates, sleep cycles, even fertility. Insurance companies are taking notes, rest assured.
Surveillance exists because it’s cheap and easy — and because fear pays:
Governments surveil to control.
Corporations surveil to profit.
Both say it’s for your safety.
But when every aspect of your life can be monitored, logged, and monetised, privacy isn’t just under threat — it’s being composted.
↑ Regulation¶
Laws exist. Sort of. But the weeds grow faster than the hedges.
The GDPR tries to impose order on the private sector, but it leaves member states room to carve out their own derogations. Law enforcement runs under a separate directive, intelligence agencies sit largely outside EU law altogether, and the garden gate doesn't close neatly across borders.
Social media adds another complication:
It blurs the line between public and private. Is your tweet public? What about your location data?
There’s a long history of disproportionate surveillance on marginalised communities — profiling, infiltration, covert tactics.
Social media content is sometimes used as evidence, but nuance is lost. Your snarky comment may not survive data mining intact.
Legal scholars have started distinguishing OSINT (open source intelligence) from SOCMINT (social media intelligence), the latter being murkier and more prone to misuse.
As companies entrench their position with proprietary datasets, competition policy is starting to notice. Who controls the data controls the future — and the regulators are finally waking up.
Meanwhile, the EU’s ePrivacy regulation is promising tighter control of cookies and trackers — possibly more stringent than GDPR. Time will tell whether it’s a new hedge… or just another decorative trellis.
↑ Datafication of the self¶
Personal worth becomes a metric: engagement rate, credit score, productivity level, likes, followers, sleep cycles. You’re no longer a person — you’re a performance indicator.
This creates stress, alienation, and a strange new kind of inequality — algorithmic precarity. If your data profile doesn’t match what the system thinks is “successful,” your options quietly shrink.
It’s not just surveillance — it’s data feudalism.
↑ Loss of context and consent¶
Data, once collected, is decontextualised and repurposed in ways the user never agreed to.
You gave your postcode for delivery. Now it's used to decide your insurance premium. You liked a tweet joking about depression. Now an algorithm thinks you're vulnerable. You uploaded a face to an app. Now it's training someone else's facial recognition system.
Consent is not just “I clicked agree.” Real consent is informed, specific, and revocable. The current system pretends at consent but delivers functional coercion.
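To see how far the current system is from that, it helps to picture what a consent record would need to contain to be informed, specific, and revocable. The sketch below is purely hypothetical and not drawn from any real consent-management framework; it just makes the three properties concrete.

```python
# Hypothetical sketch of a consent record that is informed, specific and revocable.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str                 # one purpose per record: nothing blanket
    explanation_shown: str       # what the person was actually told ("informed")
    granted_at: datetime
    revoked_at: Optional[datetime] = None

    def revoke(self) -> None:
        self.revoked_at = datetime.now(timezone.utc)

    def permits(self, purpose: str) -> bool:
        # Valid only for the exact purpose it was granted for, and only until revoked.
        return self.purpose == purpose and self.revoked_at is None


consent = ConsentRecord(
    subject_id="user-123",
    purpose="delivery",
    explanation_shown="Your postcode will be used to deliver this order.",
    granted_at=datetime.now(timezone.utc),
)
print(consent.permits("delivery"))           # True
print(consent.permits("insurance_pricing"))  # False: a different purpose
consent.revoke()
print(consent.permits("delivery"))           # False: revoked
```

When consent is stored as a single boolean on an account, none of those three properties can even be expressed, which is how functional coercion hides behind the word.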
↑ Global inequity¶
Most regulation, like the GDPR, is regional. Meanwhile, many developing countries lack the infrastructure, legal frameworks, or economic leverage to push back against major data-hungry platforms.
Result? The digital colonialism dynamic. Rich countries export surveillance tools, extract data, and impose their standards. Poorer regions become both test beds and resource mines, with little say in what happens.
↓ Accountability black holes¶
When something goes wrong (e.g. a wrongful arrest, job rejection, or loan denial due to automated profiling), it is hard to trace where the failure lies:
Was it the model?
Was it the data?
Was it the person who trained it?
Or the person who used the output?
These systems create accountability gaps, where no one is clearly at fault — and the harmed person has no clear path to recourse.