Ian Bogost and Charlie Warzel in The Atlantic with a very interesting look at how everything known about you (which is basically everything) could become a problem.

A worst-case scenario is easy to imagine. Some of this information could be useful simply for blackmail—medical diagnoses and notes, federal taxes paid, cancellation of debt. In a kleptocracy, such data could be used against members of Congress and governors, or anyone disfavored by the state. Think of it as a domesticated, systematized version of kompromat—like opposition research on steroids: Hey, Wisconsin is considering legislation that would be harmful to us. There are four legislators on the fence. Query the database; tell me what we’ve got on them.

Say you want to arrest or detain somebody—activists, journalists, anyone seen as a political enemy—even if just to intimidate them. An endless data set is an excellent way to find some retroactive justification. Meyer told us that the CFPB keeps detailed data on consumer complaints—which could also double as a fantastic list of the citizens already successfully targeted for scams, or people whose financial problems could help bad actors compromise them or recruit them for dirty work. Similarly, FTC, SEC, or CFPB data, which include subpoenaed trade secrets gathered during long investigations, could enable motivated actors to conduct insider trading at previously unthinkable scale. The world’s richest man may now have access to that information.

An authoritarian, surveillance-control state could be supercharged by mating exfiltrated, cleaned, and correlated government information with data from private stores: corporations that share their own data willingly or by force, data brokers, or other sources. What kind of actions could the government perform if it could combine, say, license plates seen at specific locations, airline passenger records, purchase histories from supermarket or drug-store loyalty cards, health-care patient records, DNS-lookup histories showing a person’s online activities, and tax-return data?
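The mechanics of "combining" such datasets are not exotic: once records from unrelated systems share a common identifier, a single join collapses them into one queryable profile. A toy sketch of that record linkage—every dataset, identifier, and field here is fabricated for illustration:

```python
# Toy record linkage: three datasets collected for unrelated purposes,
# each keyed on the same identifier, merge into unified profiles.
# All identifiers and records below are fabricated.

tax_returns = {
    "id-001": {"charitable_deductions": ["relief-fund-a"]},
    "id-002": {"charitable_deductions": []},
}

plate_reads = {
    "id-001": {"locations": ["site-x", "site-y"]},
    "id-002": {"locations": ["site-z"]},
}

loyalty_purchases = {
    "id-001": {"items": ["product-a"]},
    "id-002": {"items": ["product-b"]},
}

def merge_profiles(*datasets):
    """Union every dataset's fields under each shared identifier."""
    profiles = {}
    for dataset in datasets:
        for uid, fields in dataset.items():
            profiles.setdefault(uid, {}).update(fields)
    return profiles

profiles = merge_profiles(tax_returns, plate_reads, loyalty_purchases)

# One query now spans records no single agency ever held together.
flagged = [
    uid for uid, p in profiles.items()
    if p["charitable_deductions"] and "site-x" in p["locations"]
]
print(flagged)  # ['id-001']
```

The point of the sketch is how little machinery is required: the hard part in practice is entity resolution (matching identifiers across systems), and the article's premise is that the government already holds the authoritative identifiers.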

It could, for example, target for harassment people who deducted charitable contributions to the Palestine Children’s Relief Fund, drove or parked near mosques, and bought Halal-certified shampoos. It could intimidate citizens who reported income from Trump-antagonistic competitors or visited queer pornography websites. It could identify people who have traveled to Ukraine and also rely on prescription insulin, and then lean on insurance companies to deny their claims. These examples are all speculative and hypothetical, but they help demonstrate why Americans should care deeply about how the government intends to manage their private data.

A future American version of the Chinese panopticon is not unimaginable, either: If the government could stop protests or dissent from happening in the first place by carrying out occasional crackdowns and arrests using available data, it could create a chilling effect. But even worse than a mirror of this particular flavor of authoritarianism is the possibility that it might never even need to be well built or accurate. These systems do not need to work properly to cause harm. Poorly combined data or hasty analysis by AI systems could upend the lives of people the government didn’t even mean to target.