EPISODE 57: SUSPICION BY DESIGN: INSIDE DWP’S UNIVERSAL CREDIT AI FRAUD SYSTEM
What happens when the welfare state designs its technology to side-eye first and ask questions later?
In this episode of Compromising Positions, we get hands-on with Big Brother Watch’s “Suspicion by Design” report, unpacking how the UK Department for Work and Pensions (DWP) uses algorithmic profiling and AI systems to detect Universal Credit fraud, and why defaulting to suspicion is a dangerous position for any government to take.
This episode is a measured examination of welfare AI, algorithmic decision-making, and what happens to trust, consent, and dignity when systems are built to watch first and explain never.
Expect socio-technical theory, legal realities, real-world harms, and the kind of uncomfortable questions policymakers really don’t like being asked.
In This Episode, We Discuss:
Suspicion Architecture: What happens when suspicion is a design choice.
The Algorithmic Gaze Meets Dataveillance: What happens when you can’t opt out of AI-led services that are inherently biased against you.
Why “Security Through Obscurity” Fails: We show why secrecy doesn’t equal safety.
Fraud Detection that Punishes the Many, Not the Few: How to design AI systems that protect public funds without criminalising the people who need support most.
Show Notes
Suspicion by Design: What we know about the DWP’s algorithmic black box, and what it tries to hide by Big Brother Watch (2025)
Surveillance as Social Sorting: Privacy, Risk and Digital Discrimination by David Lyon (Ed) (2003)
Information Technology and Dataveillance by Roger Clarke, Communications of the ACM, 31(5) (1988)