A leaked memo has revealed that the Supreme Court plans to overturn the landmark Roe v. Wade decision. If this does occur, so-called trigger laws already passed in 13 states—along with other laws on the way—would immediately prohibit abortions in a large portion of the country. And one of the ways law enforcement could find people to prosecute is through the data that our phones produce every day.
A smartphone can be a massive storehouse of personal information. Most people carry one at all times, and it automatically registers their daily activities through Internet searches, browsing, location data, payment history, phone records, chat apps, contact lists and calendars. “Your phone knows more about you than you do. There is data on your phone that could show how many times a day you go to the bathroom, things that are incredibly intimate,” says Evan Greer, director of the nonprofit digital rights organization Fight for the Future. “If, because of these draconian laws, basic activities like seeking or providing reproductive health care become criminalized in a manner that would allow law enforcement to get an actual warrant for your device, it could reveal incredibly sensitive information—not just about that person but about everyone that they communicate with.”
Even with Roe intact, this type of digital footprint has already been used to prosecute those seeking to terminate pregnancies. In 2017 a woman in Mississippi experienced an at-home pregnancy loss. A grand jury later indicted her for second-degree murder, based in part on her online search history—which recorded that she had looked up how to induce a miscarriage. (The charge against the woman was eventually dropped.)
Such information can be extracted directly from a phone. But doing so legally requires a judge to issue a warrant. And for this, law enforcement officials must show probable cause to believe the search will turn up evidence of a crime. This requirement can deter frivolous searches—but it can also be evaded with relative ease. In particular, privacy activists warn that law enforcement agencies can sidestep the need for a warrant by obtaining much of the same information from private companies. “A little-known treasure trove of information about Americans is held by data brokers, who sell their digital dossiers about people to whoever will pay their fee,” explains Riana Pfefferkorn, a research scholar at the Stanford Internet Observatory. “Law enforcement agencies have used data brokers to do an end run around the Fourth Amendment’s warrant requirement. They just buy the information they’d otherwise need a warrant to get.”
Law enforcement agencies can also access these data by presenting a tech company with a subpoena, which is easier to obtain than a warrant because it only requires “reasonable suspicion” of the need for a search, Greer explains, not the higher bar of probable cause. “We also have seen law enforcement in the past issue [subpoenas for] incredibly broad requests,” Greer says. “For example, requesting that a search engine hand over the IP addresses of everyone who has searched for a specific term or requesting that a cell phone company hand over what’s considered ‘geofence data,’ [which reveal] all of the cell phones that were in a certain area at a certain time.”
By obtaining these data in bulk—whether through purchase or subpoena—an agency can crack down on a large number of people at once. And geofence and other location data can easily reveal who has visited a clinic that provides abortion care. Greer’s worry is not merely theoretical: Vice’s online tech news outlet Motherboard recently reported two cases of location data brokers selling or freely sharing information about people who had visited abortion clinics, including where they traveled before and after these visits. Although both companies claimed they had stopped selling or sharing this information in the wake of the news coverage, other data brokers are free to continue this type of tracking.
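In practical terms, a geofence-style request amounts to a simple filter over location records. The sketch below, written in Python purely for illustration, assumes a hypothetical broker-style table of (device ID, latitude, longitude, timestamp) entries; the device IDs, coordinates and field names are invented, and real requests involve far larger datasets and legal process, but the underlying query is essentially this bounding-box-and-time-window filter.

```python
from datetime import datetime

# Hypothetical location records of the kind a data broker might hold:
# (device_id, latitude, longitude, timestamp). All values are invented.
records = [
    ("device-001", 38.6270, -90.1994, datetime(2022, 5, 3, 10, 15)),
    ("device-002", 38.6279, -90.1985, datetime(2022, 5, 3, 11, 40)),
    ("device-003", 40.7128, -74.0060, datetime(2022, 5, 3, 10, 30)),
]

def geofence_query(records, lat_range, lon_range, start, end):
    """Return the device IDs seen inside a bounding box during a time window."""
    return {
        device_id
        for device_id, lat, lon, ts in records
        if lat_range[0] <= lat <= lat_range[1]
        and lon_range[0] <= lon <= lon_range[1]
        and start <= ts <= end
    }

# Every device observed near a given address on the morning of May 3.
hits = geofence_query(
    records,
    lat_range=(38.626, 38.628),
    lon_range=(-90.200, -90.198),
    start=datetime(2022, 5, 3, 8, 0),
    end=datetime(2022, 5, 3, 12, 0),
)
print(hits)  # {'device-001', 'device-002'}
```

Any device that falls inside the box during the window is swept into the result, regardless of why its owner was there.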
Such information can be even more revealing when combined with health data. For that reason, some privacy advocates warn against period-tracking apps, which many use to stay on top of their menstrual cycles and track their fertility. When software is “tracking your period, and your period’s regular, then your period is late, [the app] could certainly identify a pregnancy before someone might be aware of it,” says Daniel Grossman, a professor of obstetrics, gynecology and reproductive science at the University of California, San Francisco. Government officials have in fact already charted periods to determine a person’s pregnancy status. For example, in 2019 a Missouri state official said his office had created a spreadsheet to track the periods of patients who had visited the state’s lone Planned Parenthood facility. In that case, the government did not obtain its information from an app, but the incident demonstrates the interest that authorities might have in such data.
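To make the inference Grossman describes concrete, here is a minimal sketch, again in Python and purely hypothetical: it assumes nothing more than a list of past period start dates and flags a cycle as late once the gap since the last recorded period clearly exceeds the user’s average cycle length. The grace window and function names are assumptions for illustration, not a description of how any particular app works.

```python
from datetime import date

def is_cycle_late(period_start_dates, today, grace_days=5):
    """Flag a late period by comparing the days since the last recorded
    period against the user's average cycle length plus a grace window."""
    starts = sorted(period_start_dates)
    if len(starts) < 2:
        return False  # not enough history to estimate a typical cycle
    cycle_lengths = [
        (later - earlier).days for earlier, later in zip(starts, starts[1:])
    ]
    average_cycle = sum(cycle_lengths) / len(cycle_lengths)
    days_since_last = (today - starts[-1]).days
    return days_since_last > average_cycle + grace_days

# Example: regular ~28-day cycles, and no period recorded for 36 days.
history = [date(2022, 2, 1), date(2022, 3, 1), date(2022, 3, 29)]
print(is_cycle_late(history, today=date(2022, 5, 4)))  # True
```

Even this toy calculation shows how a handful of self-reported dates can yield a sensitive inference that a user may not yet have drawn.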
Although policies vary depending on the app involved, experts say companies that produce menstrual-cycle programs generally have no obligation to keep these data private. “If it’s not part of a health system, which I think most of these [apps] are not, I don’t think there would necessarily be any [privacy] requirement,” Grossman says. These data concern personal health, but they are not protected by the Health Insurance Portability and Accountability Act of 1996 (HIPAA), which bars covered entities such as health care providers and insurers from sharing a patient’s health information without consent. “Everyone needs to understand that HIPAA, the federal health privacy law, is not the huge magic shield that many people seem to believe it is,” Pfefferkorn warns. “HIPAA is actually fairly limited in terms of which entities it applies to—and your period-tracking app is not one of them. Plus, HIPAA has exceptions for law enforcement and judicial proceedings. So even if an entity (such as an abortion clinic) is covered by HIPAA, that law doesn’t provide absolute protection against having your reproductive health care records disclosed to the police.”
Ultimately, the vulnerability of users’ phone data depends on the choices made by the companies that develop the software and apps they use. For instance, when contacted with a request for comment, a representative of the period-tracking app Clue responded, “Keeping Clue users’ sensitive data safe is fundamental to our mission of self-empowerment, and it is fundamental to our business model, too—because that depends on earning our community’s trust. In addition, as a European company, Clue is obligated under European law (the General Data Protection Regulation, GDPR) to apply special protections to our users’ reproductive health data. We will not disclose it.” In the U.S., however, many companies are not subject to GDPR’s requirements—and plenty of them take advantage of their free rein to sell data on to third parties. Experts recommend that users read the privacy policies and terms of service of any given app before entrusting it with their data.
“What this exposes is that the entire tech industry’s business model of vacuuming up essentially as much data as possible, in the hopes that it can be turned into profits, has created this vast attack surface for surveillance and crackdowns on people’s basic rights,” Greer says. “And when we start thinking about how activities that are perfectly legal right now could be criminalized in the very near future, it exposes how even very seemingly mundane or innocuous data collection or storage could put people in danger.” Lawmakers have introduced privacy legislation such as the Fourth Amendment Is Not For Sale Act, which would prevent law enforcement from sidestepping the need for a warrant by purchasing information from data brokers. But that bill has not passed into law.
Instead of relying on the government to protect privacy, some advocates suggest it would be more effective to pressure companies directly. “I think that our best bet for carrying out systemic change now is to call on companies that are gathering this data to simply stop collecting it and to stop sharing it and to make plans for what is going to happen when the government demands it,” says Eva Galperin, director of cybersecurity at the nonprofit Electronic Frontier Foundation, which promotes digital rights.
Individuals can also take steps to maintain their privacy now rather than waiting on action from either the government or the tech industry. As a first line of defense, Greer recommends locking down accounts: protecting phones and computers with strong passwords, using a password manager for other accounts and turning on two-factor authentication. “These three steps will protect you from most non-law-enforcement attacks,” Greer says. For those worried about law enforcement, organizations such as the Digital Defense Fund have published security guides on how to further hide one’s information. Potential steps include using encrypted chat apps, privacy-centric browsers such as Tor or Brave, and virtual private networks to shield one’s online communications and activity. Additionally, disabling location tracking or leaving a phone at home while visiting a clinic can protect information about one’s whereabouts.
Such measures may seem unnecessary now, but Galperin warns that, without the protection of Roe v. Wade, the fear that our most personal information can be weaponized against us is justified. “I have spent more than a decade working with journalists and activists, people in vulnerable populations all over the world and especially in authoritarian regimes,” she says. “And the most important lesson that I have learned from this work is that when rights are curtailed, it happens very quickly. And when that happens, you need to have all of your privacy and security plans in place already, because if you are making those changes after your rights have already been taken away, it is already too late.”