Image caption: When low-income people fall victim to an online fraud or a data breach, … (Mary Madden/New York Times)

One of the tragic ironies of the digital age is this: Although low-income Americans have long been subject to disproportionate surveillance, their distinct privacy and security concerns are rarely visible in Washington or Silicon Valley. As my colleague Michele Gilman has noted, the poor often bear the burden of both ends of the spectrum of privacy harms: they are subjected to greater suspicion and monitoring when they apply for government benefits and live in heavily policed neighborhoods, but they can also lose out on education and job opportunities when their online profiles and employment histories aren't visible (or curated) enough.

The poor experience these two extremes — hypervisibility and invisibility — while often lacking the agency or resources to challenge unfair outcomes. For instance, they may be unfairly targeted by predictive policing tools built on biased training data, or unfairly excluded by hiring algorithms that scour social media to evaluate potential candidates. In this increasingly complex ecosystem of “networked privacy harms,” one-size-fits-all privacy solutions will not serve all communities equally. Efforts to create a more ethical technology sector must take the unique experiences of vulnerable and marginalized users into account.

I led Pew Research Center’s