Our personal data, and the ways private companies harvest and monetize it, plays an increasingly powerful role in modern life. Corporate databases are vast, interconnected, and opaque. The movement and use of our data is difficult to understand, let alone trace. Yet companies use it to draw inferences about us, leading to lost employment, credit, and other opportunities.

One unifying thread in this pervasive system is the collection of personal information from marginalized communities and its subsequent discriminatory use by corporations and government agencies, which exacerbates existing structural inequalities across society. Data surveillance is a civil rights problem, and legislation to protect data privacy can help protect civil rights.

Discriminatory collection of data

Our phones and other devices process a vast amount of highly sensitive personal information that corporations collect and sell for astonishing profits. This incentivizes online actors to collect as much of our behavioral information as possible. In some circumstances, every mouse click and screen swipe is tracked and then sold to ad tech companies and the data brokers that service them. 

Where mobile apps are used disparately by specific groups, the collection and sharing of personal data can aggravate civil rights problems. For example, a Muslim prayer app (Muslim Pro) sold geolocation data about its users to a company called X-Mode, which in turn provided access to this data to the U.S. military through defense contractors. Although Muslim Pro stopped selling data to X-Mode, the awful truth remains: the widespread collection and sale of this data by many companies leaves users vulnerable to discrimination. Far too many companies that collect geolocation data can make a quick buck by selling it, and law enforcement and other government agencies are regular buyers.

In 2016, Twitter, Facebook, Instagram, and nine other social media platforms were found to have provided the software company Geofeedia with social media information and location data from their users. This data was subsequently used by police departments across the U.S. to track down and identify individuals attending Black Lives Matter protests. The FBI has also been a Geofeedia client, and one report by The Intercept disclosed that the CIA’s venture firm, In-Q-Tel, has invested in Geofeedia. These examples demonstrate how social media monitoring, excessive data collection, and disclosures by digital platforms can have far-reaching inequitable consequences for Black people.

Moreover, lower-income people are often less able to avoid corporate harvesting of their data. Some lower-priced technologies collect more data than others; inexpensive smartphones, for example, come with preinstalled apps that leak data and can’t be deleted. Likewise, some tech companies require customers to pay extra to avoid data surveillance: AT&T, for instance, charged broadband customers an extra $29 per month to keep their browsing history from being tracked. Similarly, some tech companies require customers to pay extra for basic security features that protect them from data theft, such as Twitter’s new plan to charge $11 per month for two-factor authentication. Sadly, data privacy is often a luxury that lower-income people cannot afford.

Discriminatory use of data in ad delivery 

Once personal data is collected, highly sensitive information about millions of people is broadly up for sale. Corporations and governments use it in ways that target some vulnerable groups in society for disfavored treatment and exclude others from important opportunities. Despite legal rules against discrimination based on ethnicity, gender, and other characteristics, many corporations have used algorithms that target advertisements based on those very characteristics.

Many platforms and advertisers use personal data to target ads to some people and not others. For example, Twitter’s Tailored Audiences tool enables advertisers to target users based on keywords, interests, and geographic location, while Google’s Customer Match tool lets advertisers combine their own customer data with Google’s user data.
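
Customer-matching tools of this kind typically work by having the advertiser upload hashed identifiers, commonly SHA-256 hashes of customer email addresses, which the platform then matches against its own signed-in users. The Python sketch below is a minimal, hypothetical illustration of that flow; the function names and sample data are invented and do not reflect any platform’s actual API.

```python
import hashlib

def normalize_email(email: str) -> str:
    """Lowercase and trim the address, the usual step before hashing."""
    return email.strip().lower()

def hash_for_upload(email: str) -> str:
    """Return a SHA-256 digest of the normalized email address."""
    return hashlib.sha256(normalize_email(email).encode("utf-8")).hexdigest()

# Hypothetical advertiser customer list (illustrative only).
customer_emails = ["alice@example.com", " Bob@Example.com "]

# The advertiser uploads hashes rather than raw addresses; the platform then
# compares them to hashed identifiers of its own users, linking the
# advertiser's offline profile of a person to the platform's behavioral
# profile of that same person.
upload_list = [hash_for_upload(e) for e in customer_emails]
print(upload_list)
```

Note that hashing only protects the raw address in transit; it does not prevent the join itself. Once matched, the advertiser’s data and the platform’s behavioral data describe the same identified person.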

Such targeting is often discriminatory. The Federal Reserve Board found that “even consumers who seek out information to make informed decisions may be thwarted from making the best choices for themselves or their families and instead may be subject to digital redlining or steering.”

Companies have directed risky advertisements to vulnerable groups. Thousands of seniors have been targeted with ads for investment scams by subprime lenders. Likewise, political ads have been targeted at minority ethnic groups, leading to voter suppression. This is made possible through the mass harvesting of personal information and its compilation into dossiers that identify characteristics like ethnicity. One targeted ad from the 2016 Trump campaign included an animated graphic of Hillary Clinton that sought to convince Black voters not to vote on Election Day.

Personal data is also used to prevent certain groups from receiving ads for positive opportunities. In 2016, for example, ProPublica revealed that Facebook allowed advertisers to exclude protected racial groups from viewing their content. One academic study found that women receive fewer online ads for high-paying jobs than men. Discriminatory impact can occur even when the advertiser does not intend to discriminate. In 2018, Upturn found that Facebook distributed its ad for a bus driver job to an audience that was 80 percent men, even though Upturn did not intend to target the ad based on gender.

Housing ads have also been distributed in a racially discriminatory manner. In 2019, Facebook faced a lawsuit in federal court alleging that the platform maintained a “pre-populated list of demographics, behaviors and interests” for real estate brokers and landlords to exclude certain buyers or renters from seeing their ads. The lawsuit further alleged that this allowed “the placement of housing ads that excluded women, those with disabilities, and those of certain national origins.” Facebook’s system has since evolved following an agreement with the U.S. Department of Justice. Announcing the settlement, the government explained that Facebook’s algorithms violated federal fair housing laws.

The widespread system of businesses harvesting and monetizing personal information leads in many cases to discriminatory ad delivery. As a result, protected groups miss out on important opportunities for jobs and housing. To avoid such discrimination in ad delivery, we need laws that limit the initial collection of personal information.  

Discriminatory use of data in automated decision-making

Banks and landlords use automated decision-making systems to help decide whether or not to provide services to potential customers. Likewise, employers use these systems to help select employees, and colleges use them to help select students. Such systems too often discriminate against vulnerable groups. There are many solutions to this problem, including algorithmic transparency and rigorous enforcement of laws against organizational policies that disparately impact vulnerable groups.

Part of the problem is that automated decision-making systems have easy access to the vast reservoir of personal data that businesses have collected from us and sell to each other. This data fuels algorithmic bias. So part of the solution is to drain these reservoirs by limiting how businesses collect our data in the first place.

Special concerns arise when brick-and-mortar stores use face recognition technology to screen everyone who enters and exclude supposedly unwanted customers. Many stores have long used this technology to try to detect potential shoplifters, often relying on error-prone, racially biased criminal justice data. Madison Square Garden was recently caught using this technology to exclude employees of a law firm that sued the venue’s parent company. A business could easily extend this kind of “enemies list” to people who, online or on the sidewalk outside, protest a venue’s discriminatory policies.

In addition, face recognition all too often does not work, particularly for Black people and women. The technology was used to erroneously expel Black teenager Lamya Robinson from a Detroit-area skating rink after misidentifying her as a person who’d allegedly gotten into a fight there. Again, there is a data privacy solution to this civil rights problem: prohibit businesses from collecting faceprints from anyone without first obtaining their voluntary, informed, opt-in consent. This must include consent to use someone’s face (or a similar identifier like a tattoo) in training data for algorithms.
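
To make that consent rule concrete, here is a minimal Python sketch, assuming a hypothetical ConsentRecord type. It illustrates the opt-in requirement described above; it is not any existing law’s or vendor’s implementation, and every name in it is invented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConsentRecord:
    """Hypothetical record of a person's faceprint consent (illustrative only)."""
    person_id: str
    informed: bool         # the person was told how the faceprint will be used
    opted_in: bool         # the person affirmatively agreed, not merely failed to object
    covers_training: bool  # consent extends to use of the faceprint in training data

def may_collect_faceprint(consent: Optional[ConsentRecord]) -> bool:
    """Collect a faceprint only with voluntary, informed, opt-in consent."""
    return consent is not None and consent.informed and consent.opted_in

def may_use_in_training(consent: Optional[ConsentRecord]) -> bool:
    """Using a faceprint in training data requires separate, explicit consent."""
    return may_collect_faceprint(consent) and consent.covers_training
```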

Discrimination in data breach and misuse

The collection and storage of massive quantities of personal information also creates the risk that corporate employees will abuse the data in ways that violate civil rights. For example, in 2014 and 2015, 52 Facebook employees were fired for exploiting their access to user data. One engineer used Facebook’s repository of private Messenger conversations, location data, and personal photographs to figure out why a woman he dated had stopped replying to his messages. Another engineer used Facebook’s data to track a woman to her hotel. The company’s overcollection of data enabled this harassment.

Overcollection also creates risk of data breach, which can disparately impact lower-income people. Data theft creates collateral risk of identity theft, ransomware attacks, and unwanted spam. To avoid these attacks, breach victims must spend time and money to freeze and unfreeze their credit reports, to monitor their credit reports, and to obtain identity theft prevention services. These financial costs can often be more burdensome for low-income and marginalized communities. Further, housing instability might make it harder to alert vulnerable people that a breach occurred.

An important way to reduce these kinds of civil rights risks is for businesses to collect and store less personal data.

Disclosure of data by corporations to governments, which use it in discriminatory ways

Discriminatory government practices can be fueled by the purchase of personal data from corporations. Governments use automated decision-making systems to help make a multitude of choices about people’s lives, including whether police should scrutinize a person or neighborhood, whether child welfare officials should investigate a home, and whether a judge should release a person awaiting trial. Such systems “automate inequality,” in the words of Virginia Eubanks. Governments increasingly purchase data from businesses for use in these decisions.

Likewise, since the U.S. Supreme Court overturned Roe v. Wade, reproductive health has become an increasingly important battleground for digital rights. For example, data from Google Maps can inform police that you searched for the address of a clinic. This expanded threat to digital rights is especially dangerous for BIPOC, lower-income, immigrant, and LGBTQ+ people, other traditionally marginalized communities, and the healthcare providers serving these communities. We should reduce the supply of personal data that anti-choice sheriffs and bounty hunters can acquire from businesses. And we should also limit police access to this data.

Moreover, police acquire face surveillance services from companies like Clearview, which extracts faceprints from billions of people without their permission and then uses its faceprint database to help police identify unknown suspects in photos. For example, Clearview helped police in Miami identify a protester for Black lives.

Police use of this kind of corporate data service is inherently dangerous. False positives from face recognition have caused the wrongful arrest of at least four Black men. In January 2020, Detroit police used face recognition software to arrest Robert Williams for allegedly stealing watches. Williams was held by police for 30 hours. After a long interrogation, police admitted that “the computer must have gotten it wrong.” One year earlier, the same Detroit detective had arrested another man, Michael Oliver, after face recognition software misidentified him as a match. Nijeer Parks was accused of shoplifting snacks in New Jersey and wrongfully arrested after a misidentification. Parks spent 10 days in jail and almost a year with charges hanging over him. Most recently, the Baton Rouge Police Department arrested and jailed Randal Reid for almost a week after an incorrect match to a theft.

Next steps

Corporations, governments, and others use personal data in many kinds of discriminatory ways. One necessary approach to solving this problem is to reduce the amount of data that these entities can use to discriminate. To resist these civil rights abuses at their source, we must limit the ways that businesses collect and harvest our personal data. 

EFF has repeatedly called for such privacy legislation. To be effective, it must include strong private enforcement and prohibit “pay for privacy” schemes that hurt lower-income people. Legislation at the federal level must not preempt state legislation.
