Disinfo Defense League Comment

FTC’s Commercial Surveillance and Data Security Advance Notice of Proposed Rulemaking

November 21, 2022

Executive Summary

The undersigned 21 members of the Disinfo Defense League (DDL) are pleased to submit this comment to the Federal Trade Commission (FTC) on the Advance Notice of Proposed Rulemaking (ANPR) regarding Commercial Surveillance and Data Security.

DDL is a distributed national network of over 230 grassroots, community-based organizations building a collective defense against disinformation campaigns that deliberately target Black, Latinx, Asian American, and other communities of color. We are deeply concerned by the systemic problems posed by the complex set of digital tactics, extractive data practices, and manipulative tech platform and app designs used to undermine confidence in our democracy, sow distrust in our public health institutions, disenfranchise voters, and chill engagement in our communities. All of these practices contribute to the weaponization of online narratives that target our communities.

Since its inception in 2020, DDL has sounded the alarm about nefarious data practices that violate our fundamental civil rights online. The DDL Policy Platform codifies policy principles designed to rein in technology companies’ extractive data practices and to safeguard privacy and civil rights on social media platforms through comprehensive digital-privacy measures [1]. The platform outlines steps Congress and other regulatory bodies, including the FTC, should take to adopt comprehensive digital-privacy protections that safeguard digital civil rights.

The platform includes several priorities for the FTC:

The FTC should have the power and resources to conduct rulemakings and to effectively enforce against and prevent data abuses and other unfair or deceptive practices. Congress cannot anticipate and legislate against every future use and abuse of data that companies may engage in, so lawmakers should enable the FTC to oversee and respond to future violations. For instance, users shouldn’t have to waive their privacy, quality of service, or other rights just to access a given service when there’s no need for that data to deliver the promised goods [2].

DDL is pleased that the FTC has opened this comment period to consider rules on commercial surveillance and data security. Data about what we do, with whom, and where is in the hands of often unscrupulous tech companies, data brokers, and other private entities. Many of these companies engage in a widespread pattern of unfair and deceptive practices, embedded throughout society and especially harmful to historically disadvantaged communities.

How data is collected, processed, retained, and sold has a direct impact on civil rights and economic opportunities. These issues fall squarely within the FTC’s authority, bolstered by its history of advising on complex privacy issues.

While existing federal and state statutes ostensibly provide a measure of liability for companies’ discriminatory practices, and Congress has been actively debating robust privacy and consumer data legislation, the FTC’s action and leadership are desperately needed in the present. They are needed to set a federal benchmark of protection against commercial surveillance and unfair data practices for which companies currently face no accountability.

DDL’s comments center on remedying the harm caused to real people by companies’ practices that strip users of equal opportunity, access, and agency. We argue that robust rulemaking by the FTC should address the need for companies to:

  • Meet exacting limits on commercial surveillance of users that narrow the scope of collection, retention, sale, and security of people’s data;

  • Conduct ongoing and regular company auditing of algorithmic impact as a way of mitigating discriminatory treatment across protected classes; and

  • Employ transparent notice and accessible opt-out and consent mechanisms for users regarding their data.

Part I: Limits on Commercial Surveillance and Data Collection

Online companies – ranging from social media platforms to consumer goods and other sales and services – collect troves of private demographic and behavioral data about us all. The ways companies collect and leverage that data vary. In many instances, companies use the data they collect to target us with ads, recommendations, and other content based on our perceived interests and vulnerabilities. These practices have been well-documented [3].

That data collection is pervasive, and in numerous instances also invasive and extractive. Years of research, investigative journalism, activism, and whistleblower revelations have shown that the companies collecting that data have little to no accountability to the American people. Indeed, they have failed outright to fulfill even the most basic requests for transparency about how their systems work, what they know about us, and how they collect and store data gathered from their own users or acquired on individuals with whom they have no customer relationship. Nor have they attempted to redress the harms caused by their business models, which promote content to maximize engagement and profit handsomely from amplifying conspiracy theories and bigotry [4].

In response to ANPR Questions 3, 7, 43, 47, and 51, the FTC’s rulemaking efforts should focus on deterring the harms that companies employing these data practices knowingly or negligently cause. Companies need to disclose not just what information they collect, but where they get the information; who shares data with them, and with whom they share data; how they analyze data to profile us; how else they use our information; how they make decisions about what content, goods, or services to offer us; and how they secure our data.

Additionally, new rules should include prohibitions on harmful and unnecessary data collection and use. Users shouldn’t have to waive their privacy, quality of service, or other rights by surrendering unnecessary data just to access a given service when there’s no need for that extraneous data to deliver the promised service.

Part II: Algorithmic Auditing to Mitigate Digital Discrimination

To date, we know that tech companies have facilitated, profited from, and sometimes even participated in activities that harm our democracy and voting rights, public health and safety, and other civil and human rights. Digital discrimination and digital redlining are particularly troublesome. Digital redlining means “the creation and maintenance of technology practices that further entrench discriminatory practices against already marginalized groups” [5], and includes social media advertising that intentionally targets, or excludes information and opportunities from, members of protected classes in a harmful and discriminatory fashion.

Through the use of discriminatory algorithms and other machine learning that segments online users, digital discrimination causes harm similar to offline discrimination [6]. Civil rights laws have long proscribed discriminatory advertising that makes it harder for some classes to access opportunities. Such discrimination causes both economic and stigmatic harms, as the barriers created by disparate and discriminatory treatment do not allow those targeted to “compete on equal footing” [7]. The FTC has also found that digital redlining, including the use of “racially biased algorithms,” constitutes an unlawful unfair or deceptive practice [8].

Algorithms make inferences about users to create efficiencies. The data that allows algorithms to make those inferences – data about our geography, location history, employment history, and credit history – is built on decades of institutionalized discrimination and segregation. Ultimately, the algorithms mistake this history for user preference, giving different consumers different (and inequitable) experiences and content. Thus, the impact of these methods is not abstract. They have real-world civil rights consequences. To cite a few of the most prominent examples:

  • In the healthcare realm: researchers from the University of California found that an algorithm widely used in U.S. hospitals to allocate healthcare to patients systematically discriminated against Black people, who were less likely than equally sick white counterparts to be referred for additional care [9].

  • Dangerous COVID-19 and vaccine conspiracy theories have proliferated over social media, exacerbating the public health crisis, impacting our hospitals, and leading to far more death and serious illness than might otherwise have occurred. On Facebook, African Americans, Native Americans, Latinx people, and other people of color were less likely to be shown credible public health information than white people [10].

  • Twitter and Meta have consistently failed to remove, or flag as false, content that discourages people from voting [11]. This content includes deception (lying about the time, place, and manner of voting); calls for boycotts from individuals with alleged sponsorship ties to foreign state actors; and voter intimidation or threats, such as claims that people will show up to polling locations with guns [12].

  • Voter suppression campaigns over social media included surgical efforts to dissuade Black, Indigenous, and Latinx voters from turning out to the polls. False election and polling messages in non-English languages were much less likely to be taken down or flagged than similar messages in English [13]. The nonpartisan organization Protect Democracy found that “[s]ocial media platforms were plagued by false content about various candidates for office, patently untrue information about electoral processes, systematic efforts to amplify bogus claims about voter fraud, and coercive political messaging tied to COVID-19 conspiracy theories. A great deal of this content targeted marginalized communities and, in particular, communities of color” [14].

  • Internal Meta research from 2019, brought to light by whistleblower Frances Haugen, found a concerted effort to discourage Latinx people from participating in the U.S. Census [15]. The company’s research summarized posts “telling Hispanic[s] to not fill out the form; telling Hispanics not to participate in answering questions about citizenship; saying that people would be in danger of being deported if they participated; implying the government would ‘get’ immigrants who participated; and discouraging ethnic groups from participating” [16].

  • Meta has allowed allegedly discriminatory employment ad targeting on the basis of gender and age [17].

  • Meta recently settled a lawsuit with the Department of Justice over its housing advertising system, based on evidence that it engaged in discriminatory practices that meant Black users saw fewer housing ads, or none at all, on Facebook [18].

  • Google’s algorithms drive discriminatory search results, pushing users to image search results that under-represent women and women of color [19]. Research published in 2018 showed that search terms related to Black girls mostly led to pornography, even when terms like “‘porn,’ ‘pornography,’ or ‘sex’ were not included in the search box” [20].

In response to Questions 3, 7, 65-70, and 92, the FTC’s rulemaking should require companies to conduct ongoing and regular auditing of their algorithms’ impact on consumers. New trade rules should also incentivize all companies to invest in compliance with civil rights statutes by affirming that their algorithms do not discriminate against consumers. The FTC may work in collaboration with other federal agencies, including but not limited to the Department of Justice, the Department of Health and Human Services, and the Domestic Policy Council, to pursue compliance with civil rights and other federal statutes.

Where companies are found to be engaging in discrimination, the FTC should, pursuant to the FTC Act, impose civil penalties for first-time violations, because such discriminatory commercial practices clearly constitute, and should be readily understood as, “unfair . . . acts or practices in or affecting commerce” under Section 5.

Part III: Transparent Notice, Consent, and Opt-Out

As discussed above in both Parts I and II, the ways that companies use, analyze, sell, and retain consumer data have invasive, discriminatory, real-world consequences. The United States also currently has no comprehensive legislation guaranteeing consumers notice, consent rights, and other insight into how companies use their data. That means there is no effective limit on companies’ ability to continue the extractive practices by which they gather consumer information, retain it without proper security mechanisms, and even sell troves of it to government agencies, data brokers, and other actors, often without our knowledge or consent.

While it is necessary to promulgate new regulations that address companies’ practices once data has been collected (see Part I), as well as mitigating measures to stem the harms of digital and algorithmic redlining (see Part II), it is equally necessary for companies to inform people about the data practices they engage in. These efforts relate to Questions 73, 89, and 90. Through this process, companies should give people opportunities to minimize and tailor the information collected about them, and to opt out of companies’ collection and retention of their data altogether.

New trade regulation rules must also account for the languages consumers use and require companies to provide information in the consumer’s own language through any company site, app, or service. The rules should also require companies to describe their data practices and consumers’ opt-out opportunities in plain language. The design of these notice and opt-out/tailoring mechanisms should be clean, with as few steps as possible for consumers to reach critical information and opt-out processes.

This comment was prepared by the Disinfo Defense League. Member signatories include:

Access Humboldt

Access Now

Arab American Institute

Asian Americans Advancing Justice | AAJC

Center for Countering Digital Hate

Center on Race and Digital Justice

Fight for the Future

Filipino Young Leaders Program

Free Press

GLAAD

Global Exchange

Indivisible Plus Washington

MediaJustice

Muslim Advocates

New Georgia Project Action Fund

NYU - Cybersecurity for Democracy

SIECUS: Sex Ed for Social Change

South Lake Tahoe Indivisible

Stop Online Violence Against Women

The Greenlining Institute

United We Dream

    1. Disinfo Defense League, Policy Platform (Dec. 7, 2021), https://www.disinfodefenseleague.org/policy-platform. 

    2. Id.

    3. Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019); see also Dipayan Ghosh, “What is Microtargeting and What is it Doing in our Politics?,” Mozilla (Oct. 4, 2018), https://blog.mozilla.org/en/products/firefox/microtargeting-dipayan-ghosh/; Tech Transparency Project, “Amazon’s Data Dragnet” (Jan. 22, 2021), https://www.techtransparencyproject.org/articles/amazons-data-dragnet; Ashley Capoot, “Apple Reportedly Plans to Put Ads in More Apps on Your iPhone,” CNBC (Aug. 15, 2022), https://www.cnbc.com/2022/08/15/apple-reportedly-plans-to-put-ads-in-more-apps-on-your-iphone.html.

    4. Jeremy B. Merrill and Will Oremus, “Five points for anger, one for a ‘like’: How Facebook’s formula fostered rage and misinformation,” The Washington Post (Oct. 26, 2021), https://www.washingtonpost.com/technology/2021/10/26/facebook-angry-emoji-algorithm/; It’s the Business Model: How Big Tech’s Profit Machine is Distorting the Public Sphere and Threatening Democracy, Ranking Digital Rights (last accessed Jan. 26, 2022), https://rankingdigitalrights.org/its-the-business-model/.

    5. Banking on Your Data: the Role of Big Data in Financial Services: Hearing before Task Force on Fin. Tech. of the House Comm. on Fin. Serv., 116th Cong., at 9 (Nov. 21, 2019) (statement of Dr. Christopher Gilliard), https://financialservices.house.gov/uploadedfiles/chrg-116hhrg42477.pdf. 

    6. Yeshimabeit Milner and Amy Traub, “Data Capitalism and Algorithmic Racism,” Data for Black Lives & Demos (2021), https://www.demos.org/research/data-capitalism-and-algorithmic-racism.

    7. Northeastern Florida Chapter of Assoc. Gen. Contractors of Am. v. City of Jacksonville, Fla., 508 U.S. 656, 666 (1993).

    8. Elisa Jillson, “Aiming for truth, fairness, and equity in your company’s use of AI,” FTC (Apr. 19, 2021), https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truthfairness-equity-your-companys-use-ai; see also FTC, A Look At What ISPs Know About You: Examining the Privacy Practices of Six Major Internet Service Providers, at iii (Oct. 21, 2021), https://www.ftc.gov/system/files/documents/reports/look-what-isps-know-about-you-examining-privacy-practices-six-major-internet-service-providers/p195402_isp_6b_staff_report.pdf (many ISPs “allo[w] advertisers to target consumers by their race, ethnicity, sexual orientation, economic status, political affiliations, or religious beliefs.”).

    9. Rachel Courtland, “Bias detectives: the researchers striving to make algorithms fair,” Nature (June 20, 2018), https://www.nature.com/articles/d41586-018-05469-3.

    10. Corin Faife and Dara Kerr, “Official Information about COVID-19 Is Reaching Fewer Black People on Facebook,” The Markup (Mar. 4, 2021), https://themarkup.org/citizen-browser/2021/03/04/official-information-about-covid-19-is-reaching-fewer-black-people-on-facebook. 

    11. See, e.g., Young Mie Kim, “Voter Suppression Has Gone Digital,” The Brennan Center for Justice (Nov. 20, 2018), https://www.brennancenter.org/our-work/analysis-opinion/voter-suppression-has-gone-digital; see also Ian Vandewalker, “Digital Disinformation and Vote Suppression,” The Brennan Center for Justice (Sep. 2, 2020), https://www.brennancenter.org/our-work/research-reports/digital-disinformation-and-vote-suppression.

    12. See Kim, supra note 11.

    13. Samuel Woolley and Mark Kumleben, “At The Epicenter: Electoral Propaganda in Targeted Communities of Color,” Protect Democracy (Nov. 2021), https://protectdemocracy.org/project/understanding-disinformation-targeting-communities-of-color/#section-1 (“In Georgia, African Americans and Hispanic Americans were on the receiving end of sophisticated microtargeting efforts erroneously claiming that then-Senate candidate Raphael Warnock ‘celebrated’ Fidel Castro. In Arizona, Hispanic American and Native American communities faced a cascade of untrue digital messaging over Twitter about the voting process. In Wisconsin, multiple communities of color from Madison to Milwaukee were targeted with lies about mail-in ballot fraud and ballot dumping.” (internal citations omitted)).

    14. Id.

    15. Brian Contreras and Maloy Moore, “What Facebook knew about its Latino-aimed disinformation problem,” Los Angeles Times (Nov. 16, 2021), https://www.latimes.com/business/technology/story/2021-11-16/facebook-struggled-with-disinformation-targeted-at-latinos-leaked-documents-show.

    16. Id. (internal quotation marks omitted).

    17. Noam Scheiber, “Facebook Accused of Allowing Bias Against Women in Job Ads,” The New York Times (Sep. 18, 2018), https://www.nytimes.com/2018/09/18/business/economy/facebook-job-ads.html.

    18. U.S. Department of Justice, “United States Attorney Resolves Groundbreaking Suit Against Meta Platforms, Inc., Formerly Known As Facebook, To Address Discriminatory Advertising For Housing,” (June 21, 2022), https://www.justice.gov/usao-sdny/pr/united-states-attorney-resolves-groundbreaking-suit-against-meta-platforms-inc-formerly. 

    19. Xavier Harding, “Breaking Bias: Search Engine Discrimination? Sounds About White,” Mozilla Foundation (Sep. 28, 2021), https://foundation.mozilla.org/en/blog/breaking-bias-search-engine-discrimination-sounds-about-white/. 

    20. Dr. Safiya Noble, “Google Has a Striking Bias Against Black Girls,” TIME (Mar. 26, 2018), https://time.com/5209144/google-search-engine-algorithm-bias-racism/.