FireShonks - Try this at home

Chat Control: Mass Screenings, Massive Dangers
27.12.2023, FireShonks
Language: German

As technological changes including digitalization and AI increase infrastructural capacities to deliver services, new mass screenings for low-prevalence problems (MaSLoPPs) appear to improve on old ways of advancing public interests. Their high accuracy and low false positive rates – probabilities – can sound dazzling. But translating the identical statistical information into frequency formats – body counts – shows they tend to backfire. The common (false positives) overwhelms the rare (true positives) – with serious possible consequences. Ignoring this fact is known as the base rate fallacy – a common cognitive bias. Due to pervasive cognitive biases such as this, as well as perverse structural incentives, society needs a regulatory framework governing programs that share this dangerous structure. This framework must work at the system rather than individual level. It should include better mechanisms for evidence-based policymaking that holds interventions to basic scientific evidentiary standards, and a right to deliberate ignorance where relevant. These solutions may help combat perverse incentives and cognitive biases, mitigating the damage from these dangerous programs. But we should expect ongoing sociopolitical struggle to articulate and address the problem of likely net damage from this type of program under common conditions.


New technology seems to herald progress toward improving public safety in relation to old threats, from heinous crimes like child sexual abuse and terrorism, to illnesses like cancer and heart disease. Enter "Chat Control," a mass scanning program designed to flag potential child sexual abuse material in digital communications. While the goal of protecting children from exploitation is laudable, the statistical and social implications of such a mass screening program are scary. An empirical demonstration of Bayes’ rule in this context shows that, under relevant conditions of rarity, persistent inferential uncertainty, and substantial secondary screening harms, Chat Control and programs like it backfire, net degrading the very safety they’re intended to advance.
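To make the Bayes’ rule demonstration concrete, here is a minimal sketch in Python. The prevalence, sensitivity, and false positive rate are purely illustrative assumptions chosen for the sake of the calculation, not figures from the talk or from any real Chat Control evaluation:

    # Positive predictive value (PPV) via Bayes' rule:
    #   P(abuse | flag) = sens * prev / (sens * prev + fpr * (1 - prev))
    # All inputs below are hypothetical, chosen only to illustrate the base rate effect.

    prevalence = 1e-5    # assumption: 1 in 100,000 scanned messages is actually abusive
    sensitivity = 0.99   # assumption: the scanner flags 99% of truly abusive messages
    fpr = 0.005          # assumption: the scanner wrongly flags 0.5% of innocent messages

    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + fpr * (1 - prevalence)
    )
    print(f"P(actual abuse | message flagged) = {ppv:.4f}")  # prints ~0.0020, i.e. ~0.2%

Even with a scanner that sounds dazzling on paper, roughly 99.8% of flags in this toy scenario would point at innocent people – the probability-format result that the talk contrasts with body counts.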

Starting from the inescapable accuracy-error dilemma in probability theory, we'll journey through the nuances of the base rate fallacy, showing how mass screening programs’ real-world efficacy is often not what it seems. When screenings involve entire populations, even high "accuracy" translates into huge numbers of false positives. Additionally, proponents of such screenings have perverse incentives to inflate accuracy estimates – and real-world validation to mitigate such inflation is often impossible. Dedicated attackers can also game the system, inflating false negatives. Meanwhile, secondary screening harms accrue to the very people we’re trying to protect. So, under certain common conditions, net harm can result from well-intentioned mass screenings.
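The same toy numbers, recast in the frequency format the talk recommends – counting messages and people instead of quoting probabilities – make the imbalance hard to miss. The daily message volume below is likewise an assumed, illustrative figure:

    # Frequency-format ("body count") version of the same toy scenario.
    # Every number is an illustrative assumption, not a real Chat Control figure.

    messages_per_day = 100_000_000  # assumption: daily volume of scanned messages
    prevalence = 1e-5               # assumption: rate of truly abusive messages
    sensitivity = 0.99              # assumption: detection rate
    fpr = 0.005                     # assumption: false positive rate on innocent messages

    abusive = messages_per_day * prevalence   # 1,000 truly abusive messages
    innocent = messages_per_day - abusive     # 99,999,000 innocent messages

    true_positives = abusive * sensitivity    # ~990 correctly flagged
    false_positives = innocent * fpr          # ~500,000 innocent messages flagged

    print(f"True positives per day:    {true_positives:,.0f}")
    print(f"False positives per day:   {false_positives:,.0f}")
    print(f"False flags per real case: {false_positives / true_positives:,.0f}")

In this sketch, each genuine case arrives bundled with roughly 500 false accusations, every one of which demands secondary screening whose harms fall on the very people the program is meant to protect.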

These problems extend well beyond this particular program. The structure and challenges faced by Chat Control parallel those faced by other programs that share the same mathematical structure across diverse domains, from healthcare screenings for numerous diseases, to educational screenings for plagiarism and LLM use, to digital platform screenings for misinformation. Numerous additional case studies are discussed in brief. But the pattern is the point. The laws of statistics don’t change. Maybe policy-level understanding of their implications can.

Solutions to the complex, system-level problem of mass screenings for low-prevalence problems (MaSLoPPs) must themselves work at the level of the system. This focus differs from the individual-level solutions often proposed, particularly in the health context, such as better risk communication and informed consent. Across contexts, we need evidence-based policy that holds interventions to basic scientific evidentiary standards. The burden of proof that new programs do more good than harm must rest on their proponents. Independent reviewers should evaluate the evidence against that standard. Transparency is a prerequisite of such independent review.

In addition to enhancing policymaker and public understanding of these statistical realities, and adopting widely accepted scientific evidentiary standards, society has to grapple with another set of perverse incentives: Politicians and policymakers may benefit from being seen as taking visible action on emotionally powerful issues – even if that action is likely to have bad consequences. This implicates the ancient tension between democratic participation and expertise that Plato satirized in “Gorgias.” Just as children might rather have their illnesses treated by pastry chefs than by doctors, so too majorities in democratic publics might rather have their politicians “just do something” against horrible problems like child abuse, terror, and cancer – than not, even if those efforts net harm people in exactly the feared contexts (e.g., by degrading security and health). But if we care about outcomes, then critically evaluating interventions by explaining their statistical implications, and actually measuring outcomes of interest empirically, seems like a good start toward improving evidence-based policymaking – and one way to mitigate the problem of short-term perverse political incentives.

Due to such perverse incentives and cognitive biases, we should expect political institutions to continue to struggle to formulate and implement a regulatory structure governing MaSLoPPs. Another facet of such a structure might stipulate deliberate ignorance as an opt-in/opt-out patient right. That way, medical information that is overwhelmingly likely to lead to needless anxiety and hassle at best – and to unnecessary, harmful intervention at worst – such as incidental growth findings on imaging, doesn’t have to filter down to patients whom their immediate healthcare providers may have financial incentives to overdiagnose.

Together we can clean up MaSLoPP!

See also: Slides (3.4 MB)

Dr. Vera Wilde is a political scientist with 15 years’ experience studying bias, technology, and health. After completing National Science Foundation-supported PhD and postdoctoral research at the University of Virginia, UCLA, and Harvard, she co-created the website iBorderCtrl.no.