Social Media Platforms Archives - Just Security
https://www.justsecurity.org/tag/social-media-platforms/
A Forum on Law, Rights, and U.S. National Security

Montana is Trying to Ban TikTok. What Does the First Amendment Have to Say?
May 4, 2023
https://www.justsecurity.org/86435/montana-is-trying-to-ban-tiktok-what-does-the-first-amendment-have-to-say/
The current debate’s failure to engage a complex reality serves neither the interests of national security nor freedom of expression.

Last month, Montana became the first U.S. state to pass a bill banning TikTok from operating within its borders. If Governor Greg Gianforte signs some version of the bill, it will become the first statewide ban in the country to take direct aim at the popular social media app, which various U.S. government officials have warned poses a serious national security threat. But while Montana may be the first to act, significant gaps remain in the public debate surrounding both the nature of the threat that TikTok presents, and the constitutional questions that trying to regulate it might create.

On the security side, state and federal lawmakers have spoken with great concern about the seriousness of the threat posed by the Chinese-owned app. But they have remained unfortunately general in describing the exact nature of their concerns – officials often cite one or more worries about the proliferation of disinformation, the compromise of personal data, or threats to U.S. national security more broadly – and have done little to explain what distinguishes TikTok’s dangers from those of any number of other online platforms and data collection operations that channel disinformation and vast quantities of similar data to the open market (where it is accessible to Beijing or anyone else).

On the speech side, civil liberties advocates and scholars have, in contrast, been remarkably clear: “banning TikTok” would violate the First Amendment (see here, here, and here). Yet given the variety of forms that legislative or executive action against the platform might eventually take, the constitutional question is quite a bit more complicated than it initially appears or is currently acknowledged. The outcome of any court challenges to a ban will depend in key part on the factual strength of the government’s claims. As it stands, the failure of current public debate to engage the complex reality serves neither the interests of national security nor freedom of expression. This piece explains why.

The First Amendment Question Depends on Whose Speech is Regulated and Why

Start with the First Amendment. On the text of the Amendment itself, it is easy to imagine that it categorically bans any government rule that even vaguely burdens “the freedom of speech.” But that is not how the First Amendment works. The speech universe has long been divided into “protected” and “unprotected” or “less protected” forms of speech. The Supreme Court has said that the government only needs to provide a modest justification when it regulates less protected forms of speech (such as defamation, incitement of violence, and commercial fraud). Even within the realm of protected speech, regulations that aim merely at the time, place, or manner of speaking – rather than the speech’s content – have regularly passed First Amendment muster, so long as the government can show that its interest in regulating the speech is significant, that the regulation is no more restrictive than necessary, and that a potential speaker has ample opportunity to convey the same message at some other time or place. In short, the availability of First Amendment protection may greatly depend on whose speech would be regulated, why, and how.

Three Models for Thinking About TikTok and the First Amendment

To understand how the First Amendment applies to TikTok, it might help to consider three theories about whose free speech rights are implicated: (1) TikTok’s rights as a platform; (2) the rights of TikTok’s U.S. users to speak on a major social media platform; and (3) the rights of TikTok’s U.S. users to access content available on the platform.

Option 1: TikTok’s Rights as a Platform

As the Supreme Court has made clear in many contexts, corporations have speech rights just like (or almost like) individuals. It’s easy to imagine TikTok’s lawyers arguing that their client has the same right to speak in the United States as the New York Times, Verizon, or any other U.S. company. But two important issues make this claim problematic. First is that foreign individuals or corporations outside the United States may not have any cognizable rights under the First Amendment. It is in part for that reason that the Court has generally treated cases involving the restriction of foreign speech from outside the United States as implicating the First Amendment rights of U.S. listeners, rather than foreign speakers (more on that below).

Foreign nationals inside the United States of course have all kinds of constitutional rights, and one might argue that to the extent TikTok is operating within the territorial United States, it should enjoy the same First Amendment protections that any other U.S. publisher or speaker enjoys. But that brings us to the second issue: whether social media apps are indeed speakers or publishers at all. That debate is at the heart of the giant, looming question that overhangs all social media regulation in the United States at the moment – namely, what is the status of social media companies for First Amendment purposes? The Supreme Court hasn’t weighed in yet, but if it treats social media companies as publishers or speakers, they may well enjoy roughly the same First Amendment rights available to other corporate speakers in the United States. On the other hand, if the Court considers them more like, say, common carriers or conduits – an equally live possibility – then the government would have much more room to regulate. Or, the Court could conclude, social media companies are publishers for some purposes, and conduits for others. But this simply re-poses the question: what is TikTok’s status here?

Option 2: TikTok’s U.S. Users’ Right to Speak on a Major Public Social Media Platform

The doctrinal hurdles to TikTok itself asserting a First Amendment claim make it far more tempting, then, to revert to Option 2: TikTok’s U.S. users’ right to speak on a major public social media platform (of which TikTok, with more than 150 million U.S. users, is surely one). Here, the theory is far more intuitively attractive: TikTok (like Twitter was, or Instagram perhaps remains) is the modern-day equivalent of the town square – a traditional public forum in First Amendment terms, in which any government restriction on the content of speech must be justified by a compelling government interest and must be narrowly tailored to be the least speech-restrictive means of achieving that compelling interest. It was some version of this town-square idea that a federal district court in California embraced when it stopped the Trump administration from banning WeChat, a Chinese-owned instant messaging and social media app widely used by the Chinese-speaking and Chinese American community in the United States. But the district court’s opinion in the WeChat case offered little discussion (or, for that matter, citation) to support its vague public-forum analogy. The court instead rested its willingness to temporarily suspend the effect of the proposed ban almost entirely on its factual finding that WeChat was “effectively the only means of communication” with friends and relatives in China among the community that used it – not only “because China bans other apps,” but also because, as the court repeatedly emphasized, “Chinese speakers with limited English proficiency” have no other options. The situation for TikTok’s U.S. users – including millions of teens who are equally active and adept at using closely analogous and widely popular U.S.-owned platforms like Instagram – hardly seems the same.

In any case, TikTok and platforms like it are not in any traditional sense a “public” forum, in that they are not owned or maintained by the government; they are privately owned services that other courts may yet conclude are not any kind of “forum” at all, but are rather, again, private speakers or publishers, common carriers, or some combination of the two – the answer to which has potentially conclusive implications for whether U.S. users have any “right” to speak on them at all. We don’t mean for a moment to suggest that U.S. users’ speech rights are not implicated here. A law banning TikTok could indeed limit U.S. users’ speech on something that functions much like a public forum. Montana’s law is especially vulnerable under almost any First Amendment analysis, aimed as it is at a single platform and expressly identifying the content on the platform it finds objectionable, rather than focusing solely on the manner in which the platform collects and secures data (more on that below). But where a lawmaker could design a ban not aimed squarely at the site’s content, and where ample alternative channels for the same speech on other platforms remain, existing doctrine offers no guarantee that the First Amendment would be offended.

Option 3: TikTok’s U.S. Users’ First Amendment Rights to Access TikTok Content

A third theory goes as follows: TikTok’s U.S. users’ First Amendment rights are separately burdened when they are deprived of the ability to access the content otherwise available on the platform. Indeed, because one of the core purposes of the First Amendment has long been understood to be protecting a diverse “marketplace of ideas” sufficient to sustain democratic governance, the Supreme Court has repeatedly recognized the First Amendment right of listeners to access information in that marketplace. More than half a century ago, the Court struck down a federal law barring the mail delivery of “communist propaganda” from abroad unless the intended recipient specifically asked the postal service to deliver it. The Court held that it was wrong for the government to interfere with the mail and attempt “to control the flow of ideas to the public.” This right of access was also part of the Court’s rationale in a more recent decision striking down a sweeping North Carolina law that barred convicted sex offenders from “accessing” any “commercial social networking website” with child members. In that case, the Court wrote at length about the indisputable importance of the government’s interest in passing the law in the first place, mainly to protect children from sex offenders. But a ban of “such staggering reach” – involving a law that could be read to block access to everything from Facebook to WebMD to Amazon – was not remotely tailored enough to survive any degree of constitutional scrutiny from the Court.

The Court reached the right answer in both of those right-of-access cases. But it’s easy to imagine a “TikTok ban” written much more carefully than Montana’s initial version, with a far more targeted scope. Consider, for instance, a time, place, and manner regulation that precludes U.S. users from accessing foreign-owned platforms on U.S.-based mobile devices only for so long as those platforms offer insufficient safeguards to prevent, for example, the geolocation and facial recognition data of U.S. users from being shared with foreign adversaries. In this form, a “ban” starts to look far less problematic than longstanding regulations on foreign speech that still operate in the United States. For better or worse, the First Amendment rights of U.S. listeners have never before posed an obstacle to federal restrictions on foreign involvement (through speech or otherwise) in federal elections, or even to federal regulations surrounding the distribution of “foreign political propaganda.” As it stands, U.S. copyright law already precludes a foreign broadcaster from directing copyright-infringing performances into the United States from abroad. The Court has never suggested that such restrictions run afoul of U.S. listeners’ right to access that information. If the government were actually able to demonstrate a genuine threat to U.S. national security, and if the content available on TikTok could also be accessed elsewhere, it is not obvious how a court would reconcile the speech and security interests on all sides.

The Need for Deeper Engagement 

All of this uncertainty highlights why it is important for the public conversation to engage more deeply with the questions of both security and speech around a TikTok ban. It is entirely right to assume in the first instance that an outright government ban of a major social media platform violates the First Amendment. There is no question that any proposed government restriction on the operation of a social media platform available in the United States raises First Amendment concerns, especially when a ban targets a single platform by name. But the issue, should it eventually make its way to court – like so many of the current questions in the online speech space – will inevitably present a novel question of constitutional law. Suggesting otherwise makes it more likely that speech advocates will be unprepared for the serious litigation battle they are sure to face if, and when, any TikTok-specific (or, as is even more likely, non-TikTok-specific) regulation is enacted. The illusion that the resolution of an issue is settled or entirely certain likewise tends to relieve scholars of the burden of helping judges think through the genuine range of interests and issues at play. (“If the question is easy, why write an amicus brief or article about it?”)

That over-simplification also disserves security policymakers, who need to understand the full landscape of arguments and litigation risks that potential legislation or administrative action is likely to face. Blanket statements that a TikTok ban would violate the First Amendment suggest to policymakers that there is little point in setting forth in detail – for litigation, or even in their own minds – the specific, evidence-backed reasons why the government’s interest is compelling, or how that interest might most narrowly be achieved. (“If the case is a sure loser,” they may assume, “why even go to the trouble?”)

Perhaps most important, the expectation that the courts will step in to correct any constitutional failings may make lawmakers or policymakers more likely to take action they fear is constitutionally defective. Legislators get to “take the political win,” show constituents they are acting to address a problem, and then wait for the courts to sort it out. As one Senator put it in a different context, “the Court will clean it up” later. But depending on the actual terms of any federal action, it is simply unclear whether the courts will.

The current information ecosystem is new, the global threats are complicated, and the facts on the ground are changing quickly. There is reason to worry about the First Amendment implications of many of the proposed online speech regulations circulating these days. But those are not the only worry. We also worry that the confidence that often comes with deep expertise – whether in legal training, security experience, or technical know-how – often promotes a false sense of certainty. That confidence may prove more of a hindrance than a help when trying to solve problems that require collaboration across disciplines.

The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the U.S. Government.

IMAGE: An 11-year-old boy looks at the TikTok app on a smartphone screen in the village of St Jean d’Aulps on April 4, 2023, near Morzine, France. (Photo by Matt Cardy via Getty Images)

Introduction to Expert Statements on Democracy and Political Violence, submitted to January 6th House select committee
May 1, 2023
https://www.justsecurity.org/86298/introduction-to-expert-statements-on-democracy-and-political-violence-submitted-to-january-6th-house-select-committee/
33 statements from leading experts in law, academia, and other research organizations

During the course of the January 6th House Select Committee’s work, investigative staff received dozens of statements from leading experts in law, academia, and other research organizations. Although only some of these expert statements were ultimately cited in the Select Committee’s hearings and final report, many others helped to contextualize our work as we sought to uncover the full truth behind the attack on our democracy. The individuals and organizations who submitted these statements came from a broad range of disciplines and backgrounds, and therefore approached the events of January 6, 2021 from vastly different angles. Nevertheless, their statements coalesce into a single, frightening alarm: former President Donald Trump’s attack on the rule of law and the ensuing insurrection were not isolated events. Instead, the experts show that the insurrection should be seen as an inflection point in a violent, anti-democratic movement that has deep roots in America’s own history of racist violence and far-right extremism and fits within global patterns of political violence and lurches toward authoritarianism.

In collecting some of these statements and launching this collection, Just Security is providing an invaluable resource to all Americans, and others beyond, who still seek a more holistic understanding of January 6th, and who want to explore what the sobering conclusions of the Select Committee might mean for the future of our democracy. 

First and foremost, these statements help to place the insurrection within a dark American tradition of mob violence that has repeatedly tried to nullify the electoral triumphs of multiracial coalitions and attack governments that support equal rights for Black Americans. Statements such as those from Professors Carol Anderson, Kellie Carter Jackson, Kate Masur, Gregory Downs, and Kathleen Belew provide historical analysis and specific examples—ranging from Reconstruction to the modern white power movement—that demonstrate the continuity between January 6th and previous vigilante attempts to beat back progress toward a more inclusive and racially equitable America.

Other statements, like those from leaders at prominent, nonpartisan institutes like the NAACP Legal Defense Fund, Brennan Center, States United Democracy Center, and Campaign Legal Center, explain how this history of racial violence and disenfranchisement is intimately bound up in President Trump’s Big Lie, which singles out largely non-white cities as centers of voter fraud and has since been used as a justification for further restrictions on voting rights that disproportionately impact Black and Brown citizens. Related analyses we received explained how key actors in the insurrection were motivated by a toxic brew of racism, homophobia, misogyny, xenophobia, and conspiracy – the same beliefs that continue to motivate acts of mass violence and intimidation across the country. In a statement from the Institute for Constitutional Advocacy and Protection (where I now work), Professor Mary McCord explains how January 6th also fits into a yearslong trend of increased mobilization by unlawful private paramilitary groups, which have continued to evolve since the attack.

Even more broadly, these assorted statements give a global perspective on the anti-democratic coalition that burst forth on January 6th. Leading experts on authoritarianism and fascism, such as Professors Ruth Ben-Ghiat, Jason Stanley, and Federico Finchelstein, remind us of the stakes of January 6th as a moment when vigilante violence and authoritarian schemes converged to assert control over democratic society, as we have seen replicated, in one form or another, throughout history to catastrophic effect. This moment of autocratic consolidation was enabled by a broader acceptance of political violence by mainstream politicians and their supporters, a phenomenon that is elucidated by experts like Rachel Kleinfeld and Professors Liliana Mason and Nathan Kalmoe.

These statements can also help to shine a light on some of the less-examined elements of the broader story of January 6th, such as explanations of the role of Christian Nationalism and anti-government extremism in the attack, the FBI’s persistent failures to adequately address the threat of far-right violence, the crisis of extremist radicalization within the U.S. military, and the proliferation of violent, conspiratorial content on alternative social media platforms like Parler. Taken together, these expert analyses should help us reject narrow explanations for the insurrection, especially the kind that attempts to whitewash the violent extremism we saw on that day and to paper over the true, violent potential of the movements that fueled it.

The legacy of January 6th remains a fiercely contested issue, and it is vitally important that supporters of American democracy still speak loudly and clearly about the realities of that day. This collection will help us do just that, by providing explanations about why the insurrectionist forces have lingered on in our national life, through continued threats of political violence and anti-democratic instability. Over two years after the attack, groups like the Proud Boys continue to menace local governments and LGBTQ+ individuals, while an openly vengeful Trump embraces the insurrectionists and demonizes the same minority communities that are now in their crosshairs. 

Seen in this light, January 6th never ended. 

We are in the midst of the latest retelling of a very old, very dangerous story of authoritarianism and violence that both America and the world have seen before. That makes it all the more important for us to push for accountability whenever and wherever we can, and to guard against the resurgence of political violence as the next national election looms ever closer.

I hope that experts will continue to submit their statements to Just Security (email address) so that it can create as complete a repository as possible. Although they were not all incorporated into the work of the Select Committee, these statements give essential context to complement the factual narrative contained in the committee’s final report and underlying documents. As shocking as that narrative remains, it is even more terrifying when examined in this wider lens. Because of this, I know the collection will foster a deeper understanding of the insurrection and illuminate its most difficult lessons, which is the best way to ensure that January 6th is remembered as a wake-up call to the bipartisan alliance that saved American democracy, and not as the triumphant first chapter of an extreme coalition eager to destroy it.

Editor’s note: The expert statements on this topic are listed below and also available at Just Security’s January 6th Clearinghouse

  1. Carol Anderson (Charles Howard Candler Professor, African American Studies, Emory University)
    “The Role of White Rage and Voter Suppression in the Insurrection on January 6, 2021″
    Expert Statement
  2. Anti-Defamation League
    “Extremist Movements and the January 6, 2021 Insurrection”
    Expert Statement 
  3. Heidi Beirich (Co-Founder and Executive Vice President, Global Project Against Hate and Extremism)
    “The Role of the Proud Boys in the January 6th Capitol Attack and Beyond”
    Expert Statement
  4. Kathleen Belew (Associate Professor of History, University of Chicago)
    Expert Statement
  5. Ruth Ben-Ghiat (Professor of History, New York University)
    “Strongmen Don’t Accept Defeat: The January 6th, 2021, Assault on the Capitol as an Outcome of Donald J. Trump’s Authoritarian Presidency”
    Expert Statement
  6. Bright Line Watch
    John Carey (John Wentworth Professor in the Social Sciences, Dartmouth College), Gretchen Helmke (Thomas H. Jackson Distinguished University Professor, University of Rochester), Brendan Nyhan (James O. Freedman Presidential Professor, Dartmouth College) and Susan Stokes (Tiffany and Margaret Blake Distinguished Service Professor, University of Chicago)
    “The Destructive Effects of President Trump’s Effort to Overturn the 2020 Election”
    Expert Statement 
  7. Anthea Butler (Geraldine R. Segal Professor of American Social Thought, University of Pennsylvania)
    “What is White Christian Nationalism?”
    Expert Statement
  8. Kellie Carter Jackson (Michael and Denise Kellen ‘68 Associate Professor of Africana Studies, Wellesley College)
    “Understanding the Historical Context for White Supremacist Violence in America in Tandem with the Events of January 6, 2021”
    Expert Statement 
  9. Katherine Clayton (Ph.D. Candidate, Stanford University), Nicholas T. Davis (Assistant Professor, The University of Alabama), Brendan Nyhan (James O. Freedman Presidential Professor, Dartmouth College), Ethan Porter (Assistant Professor, George Washington University), Timothy J. Ryan (Associate Professor, The University of North Carolina at Chapel Hill) and Thomas J. Wood (Assistant Professor, The Ohio State University)
    “President Trump’s Rhetoric Undermined Confidence in Elections Among His Supporters”
    Expert Statement
  10. Michael German (Fellow, Brennan Center for Justice, New York University School of Law)
    “Why the FBI Failed to Anticipate Violence at the U.S. Capitol on January 6th, and How to Prevent it From Happening Again”
    Expert Statement 
  11. Philip Gorski (Frederick and Laura Goff Professor of Sociology and Religious Studies, Yale University)
    “White Christian Nationalism: The What, When, How and Where.”
    Expert Statement 
  12. Jared Holt (Resident Fellow, Digital Forensic Research Lab, Atlantic Council)
    Expert Statement
  13. Aziz Huq (Professor of Law, University of Chicago Law School) and Tom Ginsburg (Professor of Law, University of Chicago Law School)
    “Statement on the January 6, 2021 Attacks and the Threat to American Democracy”
    Expert Statement
  14. Michael Jensen (Associate Research Scientist, START), Elizabeth Yates (Assistant Research Scientist, START) and Sheehan Kane (Senior Researcher, START)
    “Radicalization in the Ranks: An Assessment of the Scope and Nature of Criminal Extremism in the United States Military”
    Expert Statement 
  15. Rachel Kleinfeld (Senior Fellow, Carnegie Endowment for International Peace)
    “The Rise in Political Violence in the United States and Damage to Our Democracy”
    Expert Statement
  16. Samantha Kutner (Proud Boys Research Lead, Khalifa Ihler Institute), Bjørn Ihler (Co-Founder, Khalifa Ihler Institute), and C.L. Murray (Khalifa Ihler Institute and Lecturer in Criminology, University of North Carolina Wilmington)
    “Function Over Appearance; Examining the Role of the Proud Boys in American Politics Before and After January 6th”
    Expert Statement
  17. Liliana Mason (Associate Professor of Political Science, Johns Hopkins University), Nathan Kalmoe (Associate Professor of Political Communication, Louisiana State University), Julie Wronski (Associate Professor of American Politics, University of Mississippi) and John Kane (Clinical Assistant Professor, Center for Global Affairs, New York University)
    Expert Statement
  18. Kate Masur (Professor of History, Northwestern University) and Gregory Downs (Professor of History, University of California, Davis)
    “Our Fragile Democracy: Political Violence, White Supremacy, and Disenfranchisement in American History”
    Expert Statement
  19. Mary McCord (Executive Director and Visiting Professor of Law, Institute for Constitutional Advocacy and Protection, Georgetown University Law Center)
    Expert Statement
  20. Jennifer Mercieca (Professor, Department of Communication, Texas A&M University)
    Expert Statement
  21. Suzanne Mettler (John L. Senior Professor of American Institutions, Cornell University) and Robert C. Lieberman (Krieger-Eisenhower Professor of Political Science, Johns Hopkins University)
    “How Four Historic Threats to Democracy Fueled the January 6, 2021 Attack on the United States Capitol”
    Expert Statement 
  22. Janai Nelson (President and Director-Counsel, NAACP Legal Defense and Education Fund, Inc.)
    Expert Statement
  23. Trevor Potter (Founder and President, Campaign Legal Center)
    Expert Statement
  24. Candace Rondeaux (Director, Future Frontlines, New America), Ben Dalton (Open Source Fellow, Future Frontlines, New America), Cuong Nguyen (Social Science and Data Analytics Fellow, Future Frontlines, New America), Michael Simeone (Associate Research Professor, School for Complex Adaptive Systems, Arizona State University), Thomas Taylor (Senior Fellow, New America) and Shawn Walker (Senior Research Fellow, Future Frontlines, New America)
    “Investigating Alt-Tech Ties to January 6”
    Expert Statement
  25. Mike Rothschild (Journalist and Author)
    “Regarding The Role of QAnon in the Events of January 6th and Beyond”
    Expert Statement
  26. Andrew Seidel (Constitutional Attorney, Freedom From Religion Foundation)
    “Events, People, and Networks Leading Up to January 6” and “Attack on the Capitol: Evidence of the Role of White Christian Nationalism”
    Expert Statement
  27. Peter Simi (Professor of Sociology, Chapman University)
    “Understanding Far-Right Extremism: The Roots of the January 6th Attack and Why More is Coming”
    Expert Statement
  28. Southern Poverty Law Center
    Michael Edison Hayden (Senior Investigative Reporter and Spokesperson, Intelligence Project), Megan Squire (Senior Fellow, Intelligence Project), Hannah Gais (Senior Research Analyst, Intelligence Project) and Susan Corke (Director, Intelligence Project)
    Expert Statement 1
    Cassie Miller (Senior Research Analyst, Intelligence Project) and Susan Corke (Director, Intelligence Project)
    Expert Statement 2
    Michael Edison Hayden (Senior Investigative Reporter and Spokesperson, Intelligence Project) and Megan Squire (Deputy Director for Data Analytics and OSINT, Intelligence Project)
    Expert Statement 3
  29. Jason Stanley (Jacob Urowsky Professor of Philosophy, Yale University) and Federico Finchelstein (Professor of History, The New School)
    “The Fascist Danger to Democracy Represented by the Events of January 6, 2021”
    Expert Statement
  30. Amanda Tyler et al (Executive Director, Baptist Joint Committee for Religious Liberty, and Leader, Christians Against Christian Nationalism Initiative)
    “Christian Nationalism and the January 6, 2021 Insurrection” – Report
    Expert Statement
  31. Wendy Weiser (Vice President for Democracy, Brennan Center for Justice, New York University School of Law)
    Expert Statement
  32. Andrew Whitehead (Associate Professor of Sociology, Indiana University–Purdue University Indianapolis) and Samuel Perry (Associate Professor of Sociology, University of Oklahoma)
    “What is Christian Nationalism?”
    Expert Statement
  33. Christine Whitman (Former Governor, New Jersey), Steve Bullock (Former Governor, Montana), Jim Hood (Former Attorney General, Mississippi), Tom Rath (Former Attorney General, New Hampshire), Trey Grayson (Former Secretary of State, Kentucky) and Frankie Sue Del Papa (Former Secretary of State, Nevada)
    Expert Statement
IMAGE: Pro-Trump protesters gather in front of the U.S. Capitol Building on January 6, 2021 in Washington, DC. (Photo by Jon Cherry/Getty Images)

Trump’s Reinstatement on Social Media Platforms and Coded Forms of Incitement
April 11, 2023
https://www.justsecurity.org/85902/trumps-reinstatement-on-social-media-platforms-and-coded-forms-of-incitement/

Co-published with Tech Policy Press

Over the past few weeks, major social media companies including Facebook, Twitter and YouTube reinstated former President Donald Trump’s social media accounts and privileges. Now, in the aftermath of his indictment in Manhattan’s Criminal Court and likely future indictment elsewhere, their decisions will be put to the test. 

After his day in court, Trump was back at Mar-a-Lago, where he addressed the media and streamed his remarks on Facebook Live. He used his platform to lay out a list of grievances against his perceived political opponents, including doubling down on unfounded conspiracy theories about the 2020 election and framing his legal troubles as “political persecution” designed to “interfere with the upcoming 2024 election.”

As a whistleblower from inside one of those major social media companies, I can say with conviction that the path we are on is dangerous. I know first hand. As I testified to Congress, while an employee at Twitter I spent months warning the company’s leadership that the coded language Trump and his followers were using was going to lead to violence on Jan. 6, 2021. I am also the person who argued to Twitter executives that they would have more blood on their hands if they did not follow my team’s recommendation to permanently suspend Trump’s account on Jan. 8, 2021. 

Just weeks after that, former Twitter CEO Jack Dorsey told Congress that Twitter played a role in the violence of Jan. 6. However, the exact role that social media played in the violent attack on the Capitol has never been fully disclosed, though it was investigated by the House Select Committee. 

The committee heard days of detailed accounts from me, from another brave former Twitter employee, and from employees of other social media companies about the failings we saw with our own eyes. But the committee’s almost 900-page final report, released in early January 2023, did not present the findings from the team tasked with looking into social media.

An unpublished draft of this team’s findings was leaked in late January. It painted a damning picture of the culpability of social media companies in the Capitol attack. Chief among its key findings was that “social media platforms’ delayed response to the rise of far-right extremism—and President Trump’s incitement of his supporters—helped to facilitate the attack on January 6th.”

But the report didn’t stop there. It went on to detail critical failings within specific social media companies. It said, “key decisions at Twitter were bungled by incompetence and poor judgement,” and “Twitter failed to take actions that could have prevented the spread of incitement to violence after the election.” By the time these findings were shared, however, the former president’s account had been reinstated at Twitter by its new owner, Elon Musk, following a Twitter poll. 

According to the committee’s social media team’s findings, Twitter was not alone in sharing responsibility for allowing violence to be inspired on its platform in the runup to Jan. 6. Rather, the committee’s investigators found: “Facebook did not fail to grapple with election delegitimization after the election so much as it did not even try.” The investigators also noted that Facebook was due to review the former President’s suspension. The draft report clearly states, “President Trump could soon return to social media—but the risk of violence has not abated.” 

Yet, within days of the committee’s draft social media report publicly leaking, Meta announced that it would reinstate the former President’s accounts. Nick Clegg, the company’s President of Global Affairs, boldly proclaimed that after assessing the serious risk to public safety and the current security environment, “our determination is that the risk has sufficiently receded.” He then hedged, “Mr. Trump is subject to our Community Standards.” 

The January 6th committee’s draft social media report also singled out YouTube in its key findings. It detailed the company’s “failure to take significant proactive steps against content related to election disinformation or Stop the Steal.” It also concluded that “YouTube’s policies relevant to election integrity were inadequate to the moment.” 

Last month YouTube also decided to reinstate Donald Trump’s posting privileges. YouTube’s vice president of public policy, Leslie Miller, said the platform’s determination was made after it carefully evaluated “the continued risk of real-world violence” and “the importance of preserving the opportunity for voters to hear equally from major national candidates in the run up to an election.” Like Meta, YouTube also promised that the former president’s account would still be subject to company content moderation policies.

How does this happen? How do social media companies come to the exact opposite conclusion of a yearslong congressional investigation?

It’s like January 6th never happened. It’s like we haven’t learned our lessons. Or maybe we just want to forget. 

But I haven’t forgotten. In February, I was a witness at a congressional hearing that highlighted the extreme political polarization that our country is currently undergoing. 

During the hearing, I was called by representatives an “American hero” and a “sinister overlord.” I was told by members of the United States Congress that I should be celebrated for speaking the truth, and that my arrest for unspecified crimes was imminent. Throughout the hearing, people on the internet posted images of nooses directed toward me. 

Since then, members of Congress who swore oaths to uphold the Constitution have continued their veiled calls for an American civil war on Twitter. As Donald Trump faced his indictment in New York City, he posted on Truth Social with language that directly mirrored the dog whistles he used in the days leading up to January 6th, 2021, and he gathered his followers in Waco, where he glamorized the Capitol attack.

These repetitions of history did not go unnoticed. During Trump’s court appearance this week, the prosecutor raised concerns over the former President’s threatening statements and social media posts, such as Trump’s warning of “potential death and destruction” that he said would follow his indictment. While the judge did not impose a gag order, he noted his serious concerns about this activity, requesting that defense counsel tell their client to “please refrain from making comments or engaging in conduct that has the potential to incite violence, create civil unrest, or jeopardize the safety or well-being of any individuals.”

While there thankfully was no immediate political violence in the aftermath of Trump’s arraignment, the threat of violence is nowhere near over. 

Trump’s New York indictment, ongoing criminal investigations at both the state and federal level, and his political campaign will be a pressure test of companies’ decisions to reinstate the former president. While these companies have promised that the former president will now be subject to their rules, the truth is, we’ve heard this promise before. As I testified, companies previously bent and broke their own rules behind closed doors in order to protect Trump’s dangerous speech. After Trump’s remarks on social media this week, which reportedly led to an increase in death threats against the judge and prosecutor, what indication do we have that this time will be different?

Even if platforms do decide to enforce their rules, the reality remains that these baseline policies are insufficient. As I testified to Congress, in 2020 my team at Twitter advocated for the creation of a new, nuanced policy that would prohibit coded language, like dog whistles, that leads to the incitement of violence. Despite seeing how Trump’s base interpreted his statements, the company did not allow us to implement the policy until after the Capitol had been attacked on Jan. 6, 2021.

Both the other Twitter whistleblower and I testified that we left the company in part after this policy was eliminated and we realized that the rolled-back enforcement would inevitably lead to more political violence. The riots on Jan. 8, 2023 in Brazil’s capital showed us that companies have still not created policies that address nuanced or coded language, and that world leaders and their followers can still employ anti-democratic campaigns that incite violence. It has become an off-the-shelf playbook.

The normalization of hate, dehumanization and harmful misinformation within political discourse on social media has put us on a cataclysmic course. Politicians skirting the lines of content moderation policies under the guise of open communication with constituents has fueled lawless actions. And companies have not only failed to update their policies to address these gaps, many have scaled back or wholly removed the teams who were responsible for moderation.

As social media companies now reevaluate whether to stay the course, I encourage company leaders to learn from our not too distant history. Allowing former President Donald Trump to retake his algorithmically amplified megaphone on the largest social media platforms poses a threat to our democracy. Not only does it lead us down the exact path to violence we have already walked, it signals to would-be authoritarians all over the world that there is safe harbor for dangerous speech at American technology companies. 

I challenge my former colleagues and peers at other platforms to ask themselves: Do you really want to bear responsibility when violence happens again?

IMAGE: Supporters hold “Witch Hunt” signs as former US President Donald Trump speaks during a 2024 election campaign rally in Waco, Texas, March 25, 2023.  (Photo by Suzanne Cordeiro/AFP via Getty Images)

Digital Privacy Legislation is Civil Rights Legislation
April 10, 2023
https://www.justsecurity.org/85810/digital-privacy-legislation-is-civil-rights-legislation/
Seven must-have provisions for a comprehensive federal consumer data privacy law - without such a law, America can’t have “liberty and justice for all.”

As Congress ponders legislation to reform “big tech,” it must view comprehensive digital privacy legislation as desperately needed civil rights legislation, because data abuses often disproportionately harm communities already bearing the brunt of other inequalities.

Harvesting and monetizing personal data whenever anyone uses social media or even vital online services has become ubiquitous, yet it shouldn’t be accepted as normal or necessary. Corporate databases are vast, interconnected, and opaque, making the movement and use of our data difficult to understand or trace. Companies use this data to draw inferences about us, leading to lost opportunities for employment, credit, and more.

But those already marginalized lose even more in this predatory data ecosystem.

Data is highly personal. Where we go online or in the real world, who we communicate with in our communities and how, how and when we pay for things, our faces, our voices: All these data points represent aspects of individuals’ lives that should be protected. Even when our data supposedly is stripped of “personally identifying” characteristics, companies often can reassemble it into information that leads right to our doorsteps.

Consider our phones and tablets — apps harvest our personal and behavioral information, which is subsequently purchased and sold by data brokers, businesses, and governments. A Muslim prayer app, for example, sold users’ geolocation data to a company which in turn gave it to defense contractors serving the U.S. military.

It’s also harder for lower-income people to avoid corporate harvesting of their data. Some lower-cost technologies collect more data than more expensive options, such as cheaper smartphones with preinstalled apps that leak data and can’t be deleted. Some companies charge customers extra to avoid surveillance; AT&T once sought $29 per month from ISP customers to avoid tracking their browsing history. And some companies require customers to pay extra for basic security features that protect them from data theft, such as Twitter’s recent plan to charge $11 per month for two-factor authentication via SMS.

Once collected, highly sensitive information about millions of people is up for sale. Despite laws against discrimination based on ethnicity, gender, and other protected characteristics — the Fair Housing Act, for example — many corporations have used algorithms that target advertisements along exactly these lines, singling out some vulnerable groups for disfavored treatment while excluding others from important opportunities. For example, seniors have been targeted by subprime lenders with ads for investment scams, while political ads have been targeted at minority ethnic groups in order to suppress votes.

Personal data also is used to prevent certain groups from learning about positive opportunities. ProPublica revealed in 2016 that Facebook let advertisers exclude protected racial groups from viewing their content. And one academic journal reported that women receive fewer online ads for high paying jobs than men.

Moreover, automated decision-making systems often rely on the vast reservoirs of personal data that businesses have collected from us. Banks and landlords use such systems to help decide whether to engage potential customers, employers use them to help select workers, and colleges use them to help select students. Such systems invariably discriminate against vulnerable groups, as organizations like the Greenlining Institute and the ACLU have documented. Imagine, as the Greenlining Institute has, an algorithm that uses a loan applicant’s age, income, and ZIP code to predict that borrower’s likely outcome — payment or default — according to a set of rules. But algorithms often learn their rules by first analyzing “training data” for useful patterns and relationships between variables, and if that training data is biased — perhaps showing that the lender historically gave higher interest rates to residents in a ZIP code that’s predominately Black — the algorithm learns to discriminate.
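To make the training-data point concrete, here is a minimal, hypothetical sketch in Python. It is not drawn from the article or from any real lender; the ZIP codes, loan records, rates, and function names are invented for illustration. It shows how a toy pricing model that merely “learns” historical default rates per ZIP code will quote different interest rates to otherwise identical applicants when the historical labels themselves encode discrimination.

```python
# Hypothetical illustration only: a toy "lender model" that learns one rule
# (the historical default rate per ZIP code) from biased training data and
# then prices new applicants off that rule. All values are invented.

from collections import defaultdict

# Biased historical records: applicants from ZIP "60620" were labeled as
# defaults far more often, regardless of income.
training_data = [
    {"zip": "60614", "income": 85_000, "defaulted": False},
    {"zip": "60614", "income": 42_000, "defaulted": False},
    {"zip": "60614", "income": 39_000, "defaulted": True},
    {"zip": "60620", "income": 88_000, "defaulted": True},
    {"zip": "60620", "income": 44_000, "defaulted": True},
    {"zip": "60620", "income": 41_000, "defaulted": False},
]

def learn_default_rates(records):
    """'Training': estimate a default rate for each ZIP code in the data."""
    totals, defaults = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record["zip"]] += 1
        defaults[record["zip"]] += record["defaulted"]  # True counts as 1
    return {z: defaults[z] / totals[z] for z in totals}

def quote_interest_rate(applicant, zip_default_rates, base_rate=0.05):
    """'Prediction': add a risk premium proportional to the learned ZIP rate."""
    learned_risk = zip_default_rates.get(applicant["zip"], 0.5)
    return base_rate + 0.10 * learned_risk

rates = learn_default_rates(training_data)

# Two applicants identical except for ZIP code receive different quotes,
# because the model has absorbed the bias embedded in the historical labels.
for zip_code in ("60614", "60620"):
    applicant = {"zip": zip_code, "income": 60_000}
    print(zip_code, round(quote_interest_rate(applicant, rates), 4))
```

Run as written, the sketch quotes roughly an 8.3 percent rate to the applicant from one ZIP code and 11.7 percent to the identical applicant from the other, purely because of the skew in the training labels.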

Like the private sector, government buys data and uses automated decision-making systems to help make choices about people’s lives, such as whether police should scrutinize a person or neighborhood, whether child welfare officials should investigate a home, and whether a judge should release a person who’s awaiting trial. Such systems “automate inequality,” in the words of political scientist and tech scholar Virginia Eubanks, exacerbating existing biases. There are also surveillance concerns; Twitter, Facebook, Instagram, and nine other social media platforms provided software company Geofeedia with information and location data from their users that was later used by police to identify people at Black Lives Matter protests. There is a data privacy solution to this civil rights problem: prohibit businesses from collecting faceprints from anyone without first obtaining their voluntary, informed, opt-in consent. This must include consent to use someone’s face (or a similar identifier like a tattoo) in training data for algorithms.

Addressing Overcollection and Retention of Personal Data

Part of the solution is to drain the data reservoirs on which these systems feed by passing laws to limit how businesses collect our data in the first place. Collecting and storing massive quantities of personal information also creates the risk that corporate employees will abuse the data in ways that violate civil rights. For example, 52 Facebook employees were fired for exploiting access to user data; one used the company’s repository of private Messenger conversations, location data, and personal photographs to probe why a woman he dated had stopped replying to his messages.

And overcollection amplifies the harm caused by data breaches, which disproportionately impact lower-income people. Data theft can lead to identity theft, ransomware attacks, and unwanted spam, so victims must spend time and money to freeze and unfreeze their credit reports, to monitor their credit, and to obtain identity theft prevention services. Such costs are more burdensome for low-income and marginalized communities.

A comprehensive federal consumer data privacy law must include several must-have provisions.

First, no pre-emption. Federal privacy law must be a floor and not a ceiling; states must be free to enact privacy laws that are stronger than the federal baseline, and to meet the challenges of tomorrow that are not foreseeable today. California, Colorado, Connecticut, Utah, and Virginia, for example, have passed laws in the past few years, demonstrating state legislators’ commitment to protect their constituents’ data privacy. A federal law must not drag them backward.

Second, strong enforcement requires that people have a private right of action to sue the corporations that violate their statutory privacy rights. Remedies must include liquidated damages, injunctive and declaratory relief, and attorney fees. People must be able to bring their claim to a judge, and not be forced into the kangaroo court of forced arbitration.

Third, a comprehensive federal data privacy law must include strong minimization, prohibiting companies from processing a person’s data except as strictly necessary to provide them what they asked for.

Fourth, the law must prohibit companies from processing a person’s data, except with their informed, voluntary, specific, opt-in consent.

Fifth, the law can’t allow pay-for-privacy schemes. When a person declines to waive their privacy rights, companies must be prohibited from refusing to do business with them, charging a higher price, or providing lower quality. Otherwise, privacy will be a commodity that only the wealthy can afford. This safeguard is necessary to ensure that “consent” is truly voluntary.

Sixth, the law must ban deceptive design. Companies must be prohibited from presenting people with user interfaces (sometimes called “dark patterns”) that have the intent or substantial effect of impairing autonomy and choice. This protection is also necessary to ensure that consent is genuine.

And seventh, the law must ban online behavioral ads. Companies must be prohibited from targeting ads to a person based on their online behavior. These ads are especially dangerous because they incentivize all businesses to harvest as much consumer data as possible, either to use it to target ads or to sell it to someone who will.

Some signs of progress

Sometimes the news pushes progress. Since the U.S. Supreme Court’s decision last year in Dobbs v. Jackson Women’s Health Organization ended the protection for abortion rights that had existed for half a century under Roe v. Wade, reproductive health has become a digital rights attack vector. This is especially dangerous for BIPOC, lower-income, immigrant, and LGBTQ+ people, as well as healthcare providers serving them. The My Body, My Data Act — expected to be reintroduced in Congress this year — would create a new national standard to protect personal reproductive health data, minimizing its collection and retention while creating a private right of action for violations and a non-preemption clause to protect stricter state statutes. And in California, AB 793 would protect safe access to reproductive and gender-affirming care by prohibiting “reverse warrants,” in which law enforcement requests identities of people whose digital data shows they’ve spent time near an abortion clinic or searched online for information about certain types of health care — a digital surveillance dragnet that lets bounty hunters and anti-choice prosecutors target these people.

But legislating to protect a specific set of vulnerable people is no substitute for comprehensive reform that protects all vulnerable people. Left unchecked, data privacy abuses affecting us all will grow more numerous and onerous, and disproportionate impacts upon the marginalized will widen.

Without a strong comprehensive data privacy law, America simply can’t have “liberty and justice for all.”

IMAGE: People data communication network on infographic background. (via Getty Images)

How Lawmakers Hope to Sidestep Existing National Security Reviews to Target Foreign Investment
April 3, 2023
https://www.justsecurity.org/85836/how-lawmakers-hope-to-sidestep-existing-national-security-reviews-to-target-foreign-investment/
Though regulatory efforts have worked to monitor the app’s potential national security threats so far, politicians are growing impatient.

Last month, lawmakers peppered TikTok CEO Shou Chew with a barrage of questions as he testified before the House Committee on Energy and Commerce. The hearing was the latest development in a crescendo of concern over the social media platform’s national security implications, and the larger political and diplomatic competition between the United States and China. Though regulatory efforts have worked to curb and monitor the app’s potential as a national security threat so far, politicians on both sides of the aisle are growing impatient.

TikTok’s ties to China, combined with recent international and domestic political developments, have pushed the Biden administration to threaten to completely ban the app in the United States (an effort which failed during the Trump administration). Depending on how the Committee on Foreign Investment in the United States (CFIUS) responds to this threat and simultaneous pressure from Congress to wrap up its ongoing review sooner rather than later, a new bill may allow the Biden administration to circumvent traditional regulatory avenues while instituting a marked expansion of presidential powers over national security and foreign direct investment (FDI).

That bill, the Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act, is poised to expand the executive branch’s authority to order the sale, or potentially the complete ban, of technology assets tied to “foreign adversaries.” The bill explicitly lists China, Cuba, Russia, Iran, North Korea, and Venezuela as “foreign adversaries,” but allows the Secretary of Commerce to add or remove “any foreign government or regime” that acts adversely to U.S. national security interests. The law’s grant of authority to the Commerce Department and the president would sidestep the more comprehensive, cooperative, multi-agency CFIUS process. Perhaps more chillingly, rules issued under the Act would be exempt from the Administrative Procedure Act’s standard rulemaking procedures, which require federal agencies to provide the public with an opportunity to comment on proposed regulations.

However, even if Congress enacts this new law, the executive branch will encounter significant hurdles if it chooses to ban or force the sale of TikTok, including Chinese technology export regulations and potential political pushback from TikTok users in the United States.

The CFIUS Review Process

CFIUS is a multi-agency committee, chaired by the Department of the Treasury, composed of trade, national security, and technology policy experts from key executive branch agencies. It is empowered to review certain foreign investment and real estate transactions involving U.S. assets to determine and mitigate their potential negative effects on national security interests.

CFIUS’ baseline grant of authority, Section 721 of the 1950 Defense Production Act (DPA), provides discretionary authority to review “covered transactions.” Initially, this only included transactions that could be defined as a “merger, acquisition, or takeover” that results in “foreign control” of “entities engaged in interstate commerce in the United States.” However, after the 2018 enactment of the Foreign Investment Risk Review Modernization Act (FIRRMA), a law modernizing and expanding the CFIUS review process and its authority, the scope of covered transactions expanded to include non-controlling interests acquired by foreign actors. Specifically, FIRRMA directed CFIUS to review transactions involving critical technology, infrastructure, and sensitive personal data regardless of whether an investment resulted in complete foreign control of the entity under review.

Companies engaging in transactions that may be flagged for, and thus hindered by, CFIUS investigation may provide the committee with a “declaration.” This process, which is generally voluntary, allows for an expedited review process that may ultimately earn “safe harbor” status for the transaction, meaning that CFIUS is legally restricted from taking future regulatory actions against transactions it has already cleared.

Some transactions, particularly after FIRRMA reforms regarding tech and data transactions, are subject to mandatory reporting requirements, meaning companies face a duty to file a “declaration” with CFIUS if a foreign purchaser plans to acquire a “substantial interest” in a company dealing with sensitive U.S. technology.

Upon closing its investigations, CFIUS either publishes “findings” clearing a transaction of national security concerns or enters into an agreement with the company or companies involved, including certain limitations on transactions or subsequent investment operations. CFIUS can institute compliance monitoring measures to ensure investments follow these agreements and can penalize infractions. If an agreement is reached among the interested parties, the original transaction earns safe harbor status.

Alternatively, instead of an agreement, CFIUS can refer a matter to the president to block the transaction or take other measures. Past CFIUS-referred Presidential Orders have ultimately demanded, and achieved, divestment.

CFIUS Review of ByteDance

In 2019, CFIUS reportedly opened its investigation into the security implications of the acquisition of Musical.ly, a Shanghai-based social media company with a significant U.S. presence, by TikTok’s Beijing-headquartered parent company, ByteDance Ltd.

When ByteDance first acquired Musical.ly, FIRRMA’s reporting requirements were not yet in effect, and thus the parties were not required to notify CFIUS of the deal. However, by choosing not to file a declaration, TikTok passed up an opportunity for the expedited review process and, perhaps, reduced scrutiny.

FIRRMA implemented updated directives for CFIUS on the tail of Musical.ly’s rise in popularity among U.S. users. After TikTok and Musical.ly’s apps merged, the unitary TikTok platform saw meteoric growth in the U.S. in 2019 that coincided with both U.S. and Chinese efforts to strengthen domestic cybersecurity and data privacy laws.

Ultimately, these developments drew increased scrutiny from not only CFIUS, but also the Trump administration, which unsuccessfully tried to ban the app from U.S. app stores outside the CFIUS referral process.

CFIUS itself chose to act on its post-FIRRMA mandate to open an investigation into several already-completed tech acquisitions, including the post-closure review and eventual forced sale of the dating app Grindr.

CFIUS’ review process is opaque, particularly while matters remain under investigation. That’s because Section 721 of the DPA binds CFIUS to confidentiality during the transaction notification and review process to protect national security and industry interests. CFIUS is even prohibited from announcing the commencement of an investigation; news of the TikTok investigation was first reported in 2019 based on anonymous sources.

Therefore, for the time being, little is known about whether TikTok’s mitigation negotiations are helping to clear the transaction or how much longer the committee might take to complete the review process.

TikTok’s Potential Risks to U.S. National Security

Several factors have ostensibly influenced the bipartisan push to attack TikTok regardless of whether doing so would intrude on CFIUS’ regulatory domain. One issue explicitly addressed in the Mar. 23 congressional hearing with Chew was the safety of minors. However, two other concerns are most prominent in rallying policymakers: data protection and the app’s powerful recommendation algorithm.

ByteDance’s Ownership of TikTok and China’s Control Over User Data

First, because ByteDance is headquartered in China, there is a risk that user data collected by the TikTok app will be subject to Chinese law and accessible to the Chinese Communist Party (CCP). New Chinese national security laws affecting data storage, export, and access give legal grounds for concern over Beijing’s ability to monitor and use data held by companies under its jurisdiction in a manner opposed to U.S. interests on the global stage. In some cases, those laws may even expose U.S. citizens to Chinese criminal liability for taking actions deemed contrary to Chinese national security.

Though the U.S. has undertaken similar policy initiatives with extraterritorial reach, data storage and access is at the center of negotiations between CFIUS and TikTok. Policymakers are concerned by laws potentially allowing the Chinese government access to general user data including, but not limited to, contact information, location, facial recognition data, viewership and engagement history, and related device data such as clipboard contents (text, images, and other files copied in one app to be pasted elsewhere) and the IP address of the device running the app.

These kinds of data are integral to the operation and continuing development of TikTok’s recommendation algorithm and, as U.S. tech firms demonstrate, can be valuable to third parties who may buy or otherwise access and abuse poorly protected data. Furthermore, in 2020, CFIUS forced the sale of dating app Grindr after it was acquired by a Chinese company. It determined the sale posed a national security risk because of the potential for Chinese actors to access and leverage identified users’ sexual orientation, HIV status, communications, and private photos as blackmail. Though TikTok does not necessarily pose the same risk in terms of data subject matter, there is still some risk regarding leverageable private information that may be shared and stored through the app.

TikTok’s Recommendation Algorithm

Second, TikTok’s powerful recommendation algorithm drives the app’s popularity and inspires fear that malicious (or negligent) manipulation of the algorithm could spread misinformation to both U.S. and global users. TikTok has been accused of censorship by its user base and U.S. policymakers alike, particularly concerning politically divisive content or content directly critical of the CCP. However, in his testimony, Chew asserted that TikTok’s U.S. algorithm is stored and operated separately on servers run and monitored by a U.S. partner, Oracle.

Project Texas is a proposed initiative to mitigate CFIUS’ national security concerns over TikTok’s U.S. operations. According to Chew, it entails storing and operating the U.S. algorithm and user data on Oracle Cloud servers located on U.S. soil. TikTok reports that, since the summer of 2022, all new U.S. data flows have been handled by Oracle’s cloud, and that limited user data is stored as a backup in the U.S. and Singapore, but not in China. Furthermore, this backup data is allegedly in the process of being deleted in favor of full reliance on the U.S.-based Oracle Cloud systems. Chew projects this process will be completed later this year.

Project Texas’ storage and third-party monitoring schemes are designed to alleviate some national security concerns, such as the use of U.S. user data to further develop TikTok’s valuable algorithm or direct Chinese access to user data for intelligence purposes. Furthermore, Chew promised, both in his congressional testimony and again in a video posted to the @tiktok account, that the company “will ensure that TikTok remains a platform for free expression and that it cannot be manipulated by any government.” Still, U.S. policymakers worry that government-mandated, CCP-affiliated corporate actors within ByteDance or direct coercion by the CCP could lead to future abuse of the algorithm to promote Chinese interests.

Sidestepping the CFIUS Process?

While the CFIUS process continues in the shadows, a bipartisan coalition in Congress is poised to expand presidential powers to circumvent CFIUS’ authority and address TikTok’s potential risks directly. Congress passed a law in December banning TikTok from government devices. Officials now threaten either a full public ban, previously tried and failed under the Trump administration, or the forced severance of TikTok’s U.S. operations from ByteDance.

However, the White House could not sidestep the CFIUS process to unilaterally ban TikTok without new legislation; courts blocked President Trump from forcibly removing the app from the marketplace in 2020, and similar measures to snuff out the company’s foothold in the U.S. market would require Congress to provide the Biden administration with new emergency economic authority.

Recent events may motivate Congress to do exactly that. The Chinese spy balloon fiasco and the revelation that rogue, now-terminated ByteDance employees accessed the data of multiple journalists covering the company are pushing a bipartisan bill, the RESTRICT Act, that would grant the Secretary of Commerce and the president the ability to order divestment of certain security-sensitive investments after a limited review outside the multi-agency CFIUS framework.

The bill’s sponsor, Senator Mark Warner (D-VA), who also chairs the Senate Intelligence Committee, said the bill is designed to give the Commerce Department tools beyond CFIUS review, ranging from divestment “up to and including a ban,” to address technology that could harm the United States, though how a ban would operate is not spelled out in the bill.

Meanwhile, China asserts that a forced sale of TikTok to a U.S. firm would violate its own export control regulations. Similar to U.S. export regulations on advanced technologies, China treats algorithms and related intellectual property as a vital economic and national security interest and, therefore, imposes licensing requirements on transactions like the potential forced sale of TikTok.

Finally, China is not alone in resisting policy action against TikTok. Many in the United States, including creators on the app, are criticizing policymakers for antagonizing the social media company and organizing in support of it. Comparisons have been drawn to other tech giants, such as Google and Meta, which have established histories of mishandling and selling user data. Others see the attack on the Chinese-based tech company as an expression of xenophobia, a new front for Sino-American economic warfare, or a step toward quashing free expression and political mobilization among young voters. And, especially after Thursday’s hearing, many accuse lawmakers and administration officials of being out of touch and under-informed.

Because of the committee’s confidentiality requirements, only time will tell what may come of the ongoing CFIUS investigation. A complete failure to reach an agreement with TikTok could yield a forced divestment without a new law, but the social media company’s active steps to mitigate data privacy concerns seem to signal progress toward a future agreement with CFIUS, which would cement the 2017 acquisition into safe harbor status and shield it from future review by the committee. However, preclusion from further CFIUS review may not protect TikTok from the RESTRICT Act if enacted.

Even with the expanded authority the Act would provide, the Biden administration would ultimately still face significant legal obstacles, both international and domestic, if it seeks an outright ban of the app.

IMAGE: TikTok CEO Shou Zi Chew prepares to testify before the House Energy and Commerce Committee in the Rayburn House Office Building on Capitol Hill on Mar. 23, 2023, in Washington, D.C. (Photo by Chip Somodevilla via Getty Images)

The post How Lawmakers Hope to Sidestep Existing National Security Reviews to Target Foreign Investment appeared first on Just Security.

TikTok and the First Amendment https://www.justsecurity.org/85683/tiktok-and-the-first-amendment/?utm_source=rss&utm_medium=rss&utm_campaign=tiktok-and-the-first-amendment Fri, 24 Mar 2023 13:59:31 +0000 https://www.justsecurity.org/?p=85683 It’s unfortunately commonplace around the world for governments to invoke national security as a pretext for denying their citizens access to media. Historically, the United States has been a vocal critic of this practice. During the Cold War, U.S. opposition to restrictions on the international flow of information and ideas helped define the United States […]

It’s unfortunately commonplace around the world for governments to invoke national security as a pretext for denying their citizens access to media. Historically, the United States has been a vocal critic of this practice. During the Cold War, U.S. opposition to restrictions on the international flow of information and ideas helped define the United States as a free society in the eyes of the world. In more recent years, the United States has often condemned governments that deny their citizens access to American social media and messaging platforms. 

Against this background, it is disconcerting, at least, to see the U.S. government threatening to ban TikTok, an app used by more than 150 million Americans. Fortunately, the United States has something that many other countries don’t: strong constitutional protections for free speech that extend to the right to access social media as well as the right to receive information from abroad. Those protections don’t necessarily mean the government won’t ultimately be able to ban TikTok. But if it’s going to shut down a major social media platform, it will have to come up with better reasons than it’s offered so far. 

In an article published by the New York Times this morning, I explain why this is so—and why it is an important feature of our system, not a bug, that the government can’t interfere with Americans’ access to social media without carrying a heavy justificatory burden. As I argue: 

The First Amendment has so far played only a bit part in the debate about banning TikTok. This may change. If the U.S. government actually tries to shut down this major communications platform, the First Amendment will certainly have something to say about it.

Perhaps the reason First Amendment rights haven’t received more attention in this debate already is that TikTok is a subsidiary of ByteDance, a Chinese corporation that doesn’t have constitutional free speech rights to assert. But setting aside the question of TikTok’s own rights, the platform’s users include more than 150 million Americans, as TikTok’s chief executive testified at a contentious congressional hearing on Thursday. TikTok’s American users are indisputably exercising First Amendment rights when they post and consume content on the platform.

Read the rest of the article here.

IMAGE: TikTok CEO Shou Zi Chew testifies before the House Energy and Commerce Committee in the Rayburn House Office Building on Capitol Hill on March 23, 2023 in Washington, DC.

The post TikTok and the First Amendment appeared first on Just Security.

7 Experts on Trump’s Call for Protests and Social Media Threat Models https://www.justsecurity.org/85653/7-experts-on-trumps-call-for-protests-and-social-media-threat-models/?utm_source=rss&utm_medium=rss&utm_campaign=7-experts-on-trumps-call-for-protests-and-social-media-threat-models Thu, 23 Mar 2023 14:16:33 +0000 https://www.justsecurity.org/?p=85653 Comparing expert analyses of the threat of domestic extremist violence with assessments by social media platforms.

Cross-published at Tech Policy Press.

Social media platforms’ recent decisions to reinstate former President Donald Trump’s accounts were based on empirical claims about the threat landscape for political violence in the United States. These are the kinds of assessments the social media companies could apply in other countries as well. Their evaluation of the threat in the United States may face its first test in the days and weeks ahead. We asked several experts to evaluate the companies’ assessments of the threat of political violence.

On Saturday, the former U.S. president indicated in a post on his social media network, Truth Social, that he anticipates he will be arrested on charges stemming from Manhattan District Attorney Alvin Bragg’s investigation into hush money payments to adult film star Stormy Daniels. Trump urged his followers to protest and “take our nation back!”

The timing of any potential indictment is unknown, but reports suggest a grand jury could decide as early as this week. So far, Trump’s call for protest has drawn only small crowds near his Mar-A-Lago residence in Florida and at the Manhattan Criminal Court in New York. But whether his arrest may spur larger crowds in the future remains to be seen. On fringe sites such as The Donald, Gab and 4Chan, there is talk of “civil war” and in some instances threats of violence, though some far-right activists are apparently too concerned about a supposed law enforcement ‘trap’ to demonstrate.

Nevertheless, Trump’s call for protests “echoed his rhetoric before his supporters stormed the U.S. Capitol on Jan. 6, 2021,” reported The Washington Post. Multiple analyses, including that of the House Select Committee that investigated the January 6 attack on the Capitol, noted the importance of Trump’s appeals on social media to summon the crowds that day, and to propagate the false claims that motivated many to violence. 

In recent weeks, Facebook and YouTube reinstated Trump’s accounts citing a reduction in the risk of political violence, the threat of which served as part of the rationale for their decisions to suspend him following the events of Jan. 6, 2021. 

“We carefully evaluated the continued risk of real-world violence, balancing that with the importance of preserving the opportunity for voters to hear equally from major national candidates in the run up to an election,” said YouTube vice president of public policy Leslie Miller last Friday, a day before Trump’s latest call for protests.

“To assess whether the serious risk to public safety that existed in January 2021 has sufficiently receded, we have evaluated the current environment according to our Crisis Policy Protocol, which included looking at the conduct of the US 2022 midterm elections, and expert assessments on the current security environment,” wrote Meta president of global affairs Nick Clegg in January. “Our determination is that the risk has sufficiently receded,” and thus Trump was reinstated on Facebook.

(Shortly after acquiring the platform in November last year, Elon Musk reinstated Trump’s Twitter account after running a Twitter poll.)

If Trump is indicted, the criminal process that follows may represent the first true test of the platforms’ threat assessment. And that test may have implications beyond the narrow question of whether it was prudent to reinstate Trump’s accounts. It may indicate whether the platforms are prepared to take swift action in the case of future demagogues, in the U.S. and abroad, who use their accounts to incite violence or propagate false claims about the result of an election.

In order to understand whether their more relaxed posture is consistent with independent analyses of domestic extremism and the potential for civil unrest, we put the following question to seven experts:

Is your assessment of the current threat of domestic extremist violence related to Donald Trump congruent with the assessment of these social media platforms?

Below, find responses from:

  • Jacob Glick: Glick is Policy Counsel with the Institute for Constitutional Advocacy and Protection at the Georgetown University Law Center. He previously served as Investigative Counsel on the House Select Committee to Investigate the January 6th Attack on the U.S. Capitol, where he was a lead counsel on the Committee’s investigations into domestic extremism and social media’s role in the attempted insurrection.
  • Donell Harvin, DrPH: Harvin is on the faculty at Georgetown University, where he teaches on the subjects of homeland security and terrorism. He is the former Executive Director for the Washington, DC Fusion Intelligence Center and oversaw it during the insurrection on January 6th. He met with and testified before the House Select Committee investigating January 6 on several occasions. 
  • Jared Holt: Holt is a Senior Research Manager at the Institute for Strategic Dialogue, working on topics of hate and extremism in the United States. Prior to joining ISD, he worked at The Atlantic Council’s DFRLab, Right Wing Watch and Media Matters for America.
  • Tom Joscelyn: Joscelyn was a senior professional staff member on the House Select Committee to Investigate the January 6th Attack on the U.S. Capitol and has testified before Congress on more than 20 occasions.
  • Mary B. McCord: McCord is Executive Director of the Institute for Constitutional Advocacy and Protection (ICAP) and a Visiting Professor of Law at Georgetown University Law Center. She served as legal counsel to the U.S. House of Representatives Task Force Capitol Security Review appointed by Speaker Nancy Pelosi after the January 6 attack. 
  • Candace Rondeaux: Rondeaux is director of the Future Frontlines program at New America, a professor of practice at the School of Politics and Global Studies, a senior fellow with the Center on the Future of War at Arizona State University, and the author of an investigative report into the role of alt-tech platforms such as Parler in the attack on the U.S. Capitol.
  • Peter Simi: Simi is a Professor of Sociology at Chapman University. He has studied extremist groups and violence for the past 20 years, is coauthor of American Swastika: Inside the White Power Movement’s Hidden Spaces of Hate, and frequently serves as an expert legal consultant on criminal cases related to political extremism.

* * *

Jacob Glick

The Select Committee’s evidence showcased the crucial importance of mainstream social media networks in President Trump’s attempt to incite his supporters and topple American democracy. In deposition after deposition, witnesses described how they learned about the President’s summons to “be there, will be wild” as it circulated on Twitter and Facebook, which is how many of them decided to travel to D.C. for January 6th. We also collected testimony from employees inside Twitter and Meta that illustrated how these companies were blindsided by Trump’s willingness to embrace political violence and extremism. Ahead of January 6th, they hesitated to act against his brazenly authoritarian conduct and gave him an extraordinary amount of leeway as he railed against the results of the 2020 election.

 By refusing to decisively confront pro-Trump extremism, these companies helped to enable the insurrection at the U.S. Capitol, and only belatedly acted to ban his accounts once the damage had already been done. Now, as Trump calls for protests ahead of his expected indictment, it’s clear that he is once again preparing to leverage his social media megaphone to incite his most fringe supporters. Over the weekend, his initial post echoed the bellicose language he deployed prior to the Capitol attack. He also made his first post on Facebook since his account was reactivated, in a clear signal that he plans to take advantage of Meta’s decision to allow him back on the platform.

 This is the dangerous – and entirely predictable – result of the decision to re-platform Trump. Since the insurrection, the former president has only tightened his embrace of the Big Lie, violent conspiracies like QAnon, and even political violence. The evidence for this should have been plainly recognizable to major social media companies. Over the past year, we’ve seen Trump incite an attack against the FBI for their search of Mar-a-Lago, dismiss a brutal attack on Paul Pelosi that was fueled by the Big Lie, and amplify a message that called for his supporters to be “locked and loaded” ahead of the 2024 election. His verbal attacks on the LGBTQ+ community also illustrate his enduring symbiosis with violent extremist groups like the Proud Boys. This all should make it obvious that Trump remains aware of his ability to rally his supporters to engage in intimidation and violence when it suits his political needs.

Despite these clear signals, major social media companies have decided to act as if the threat has passed. This places all Americans at great risk, despite these companies’ promises to keep Trump in check this time around. There is no reason to believe that the political considerations that convinced Meta, Twitter, and other companies to tiptoe around Trump in 2020 will be any different now, as he attempts to re-energize his followers with a sense of conspiracy and grievance. In failing to learn the lessons of January 6th, these companies have paved the way for Trump to launch another, even more embittered assault on our system of democratic self-government. Let’s hope that American democracy can survive their mistake.

Donell Harvin

The current threat of extremist violence associated with Trump is incongruent with the assessment of the social media (SM) platforms, but it is a complicated situation that these companies find themselves in. There are several important factors that must be considered in discussing how social media companies engage in content moderation of the former President:

  1. The assessment that went into the decision to re-platform Trump likely unfolded over a period of time, and those engaged in the process would not be expected to predict the recent events associated with the former President. The question is, are they committed to reevaluating their decision should the need arise, and have they developed a fair and transparent mechanism to deplatform him for future incidents of incitement? Multiple studies have shown that deplatforming those that spread hate speech and violent rhetoric is highly effective in decreasing its spread. 
  2. Trump supporters and those on the right often decry their deplatforming as a violation of “freedom of speech”; however, the First Amendment does not apply to these social media companies. Private entities can create user agreements and remove users and their posts without running afoul of the Constitution. Yet, while they may have legal, moral, and ethical grounds for deplatforming, they may assess that doing so is inconsistent with their business model. Twitter has made the decision to replatform individuals, including the former president, who spread mis- and disinformation and other hateful and unsavory views online. Since Elon Musk took over the platform, there has been an exponential rise in antisemitic, anti-minority, misogynistic, homophobic, and anti-LGBTQ+ rhetoric. 
  3. Trump’s latest calls for protests were made on his own platform, Truth Social, and not posted on his official accounts on other social media platforms. Social media companies would be hard-pressed to deplatform a national figure for views expressed on another platform, unless there is a clear violation of their user terms of agreement. This makes it difficult for social media companies to take action, while also providing them cover for failure to do so.
  4. The ability for social media companies to accurately determine the current extremist threat environment is questionable, and not necessarily their responsibility. The government is tasked with homeland security, and considering that multiple federal intelligence and law enforcement entities were unsuccessful in recognizing the threat that Trump’s supporters posed in the lead up to January 6th, it is unreasonable to expect private companies to assume that responsibility or be more successful at threat analysis. That said, the social media companies play an outsized role in online extremist radicalization and should be held accountable for the consequences that their lack of content moderation and algorithms play in contributing to the explosion of online violent extremism in this country. 
  5. Lastly, OSINT (open-source intelligence) has become limited in accurately detecting violent actors or predicting widespread violence. OSINT entails the collection and analysis of online content to determine if individuals or groups pose a threat. The collection and analysis of OSINT is resource intensive, and when performed by the government, is fraught with legitimate civil rights and civil liberties concerns. Post January 6th, many domestic extremists and potential violent lone actors have abandoned (or been deplatformed) from the sites that OSINT is routinely gleaned from. These malign actors now share violent ideologies, rhetoric, memes and conspiracies on platforms and encrypted chat rooms that do little to no moderation, such as Reddit, 4chan, 8kun and online video games. Detecting violent intent through “leakage” from online actors across multiple platforms has become a daunting enterprise for the government, and it is not the responsibility of social media companies to police sites other than those that they control.

Hate is a profitable enterprise in the US and the reality is that the public should not expect social media companies to accurately assess and respond to evolving threats, especially if that response is inconsistent with their financial interests. 

Jared Holt

The landscape around domestic extremist violence has changed in major ways since the Capitol riot and the legal, social, and political fallout afterward that fell upon far-right groups that were supportive of Trump. There are also valid questions as to whether Trump is still able to wield influence over the spectrum of far-right movements as he was in 2020–something I would argue is not the case, at least to the degree he once did. De-platforming Trump certainly played some sort of role in those shifts, though I don’t know that kicking Trump off big platforms did a whole lot to actually change the trajectory of extremist and political violence in the United States. Banning Trump and other movements that were most visible on January 6 probably disrupted those organizing spaces enough to prevent further wreckage, but extremist movements adapted and overcame those hurdles, like they always do. It’s a fluid problem.

For Meta and Google, I think what ultimately matters here is Trump’s behavior, which I’d argue hasn’t changed at all since he lost the 2020 election. (I’m not going to pretend Elon Musk is interested in content moderation policy.) Trump is living his own Groundhog Day, waking up every morning and stirring up his most loyal followers with forms of election denialism, hate, and conspiracy theories. Meta and Google might believe the broader cultural conditions have changed, but I can’t imagine any coherent argument to claim Trump will behave better once he starts using the platforms again. Trump’s behavior is a crucial part of assessing the risk here, especially considering he is the widely presumed front-runner for the Republican Party’s 2024 presidential nomination and that whatever loss of influence he may have suffered is theoretically still up for grabs in the years ahead.

Tom Joscelyn

On Dec. 19, 2020, then President Trump announced via Twitter that there would be a “Big protest in D.C. on January 6th.” He added: “Be there, will be wild!” As demonstrated in the January 6th Select Committee’s hearings and final report, rightwing extremists from around the country read this tweet as a call to arms. Within hours, they began planning for violence on January 6th. Within days, the Proud Boys and others made plans to storm the U.S. Capitol. There is no material dispute over these facts. For the first time in American history, there was no peaceful transfer of power on January 6, 2021. Trump’s incendiary use of social media caused the violence we witnessed. So we should not be surprised if Trump’s tweets and other social media posts incite violence once again. 

Mary McCord

It’s hard to comprehend what the social media platforms were considering when determining that the risk to public safety from Trump’s presence on their platforms has receded.  Knowing his history of calling on his base whenever he feels threatened, and knowing he is the subject of multiple ongoing criminal investigations–one of which he already used to publicly put a target on the backs of federal law enforcement, DOJ officials, and judges–the social media platforms had more than enough reason to continue their suspensions of Trump. The recent escalating calls to “TAKE OUR NATION BACK!” and “PROTEST, PROTEST, PROTEST!!!” along with the veiled threats against the Manhattan District Attorney affirm that. The platforms should answer now how they will treat posts like those on Truth Social over the last several days.

Candace Rondeaux

It appears America is facing another déjà vu moment as Donald Trump once again whistles his dogs onto the streets and tech platform companies are set for yet another rude awakening. It’s clear the corrective lies with Congress, but few are likely to be motivated to take action in the run-up to the 2024 elections.

Peter Simi

No, these social media platforms once again seem to be putting their bottom line ahead of public safety and democracy. Their assessments lack credibility and, of course, transparency, so there is no way for experts or anyone else to evaluate how these companies made their determinations. What is clear, however, in terms of the threat landscape is that threats to public officials are at all-time highs and many of those threats are communicated on these very same platforms. The threat environment is not receding as some of the social media officials claim, and most experts I am aware of have grave concerns about the current threat level and a rapidly worsening threat landscape as we inch closer to the 2024 presidential election.

IMAGE: People show their support for former President Donald Trump near his Mar-a-Lago home on March 21, 2023 in Palm Beach, Florida. (Photo by Joe Raedle/Getty Images)

The post 7 Experts on Trump’s Call for Protests and Social Media Threat Models appeared first on Just Security.

Two Supreme Court Cases Could “Break the Internet”: What Role Should Free Speech Play? https://www.justsecurity.org/85633/two-supreme-court-cases-could-break-the-internet-what-role-should-free-speech-play/?utm_source=rss&utm_medium=rss&utm_campaign=two-supreme-court-cases-could-break-the-internet-what-role-should-free-speech-play Wed, 22 Mar 2023 13:30:15 +0000 https://www.justsecurity.org/?p=85633 Instead of demonstrating eagerness to reconsider Section 230, the Justices appeared unsure about how exactly the law should be interpreted.

Last month, the Supreme Court heard five hours of oral arguments in two terrorism cases, Gonzalez v. Google and Twitter v. Taamneh. Many fear the outcome of these cases could “break the internet.” Perhaps aware of the far-reaching implications of the wrong outcome in these cases, and the limitations of their technical expertise – as Justice Kagan noted, Supreme Court justices aren’t “the nine greatest experts on the internet” – the Court seemed hesitant to break the internet just yet.

But while the Justices showed reluctance to take steps that could radically change how content curation and content moderation work, they mostly ignored the free speech and user rights considerations that were also up for debate. Yet, the decisions in Gonzalez and Taamneh could potentially have severe implications for free speech online. Should the Supreme Court rule for the plaintiffs, online platforms would effectively be compelled to remove vast amounts of content that is protected under international freedom of expression standards to shield themselves from liability. In addition, platforms would be encouraged to increasingly rely on automated content moderation tools in a manner that is likely to over-restrict speech.

The Cases Have Broad Implications for How Social Media Platforms Moderate Content

The facts and legal questions underlying both cases, which are closely related, have been discussed extensively. In a nutshell, both cases were initiated by families whose relatives were killed in ISIS attacks in Paris and Istanbul. In each case, the plaintiffs argue that the defendant platforms have aided and abetted ISIS and thus violated U.S. antiterrorism statutes.

However, the questions before the Court in each case revolve around the interpretation of different laws. In Taamneh, the question is whether a platform that provides a widely available service that is also used by terrorist entities for propaganda and recruitment can be liable for aiding and abetting international terrorism under the Anti-Terrorism Act (ATA). In Gonzalez, the Court must determine whether Section 230 of the 1996 Communications Decency Act covers recommendation systems. Section 230 grants legal immunity to online platforms for content posted by third parties and allows platforms to remove objectionable content without exposing themselves to liability. The provision has been fundamental in protecting free speech and fostering innovation online.

If the Court interprets aiding and abetting liability broadly and also narrows Section 230 protections for recommender systems, which organize, rank, and display third-party content, it could mean that platforms would need to fundamentally change how they operate. Indeed, although the plaintiffs formally seek liability for recommendation systems, it is hard to see how siding with the plaintiffs would not result in liability for content posted by users. The plaintiffs did not claim that recommender systems were designed to push ISIS content or that they singled out ISIS content in any way. If the fact that an algorithm has sorted, ranked, or prioritized content is sufficient to restrict immunity under Section 230, this would render Section 230 inapplicable to virtually all content on the major platforms. To avoid liability, platforms might either completely abandon the use of recommender systems or – more likely – increase their reliance on automated tools and remove protected speech in a precautionary and likely overbroad manner. 

In Gonzalez, the Justices questioned how immunity could be maintained for third-party content if immunity restrictions applied to content-neutral algorithms that recommend the same content. Justice Kagan observed that “every time anybody looks at anything on the Internet, there is an algorithm involved” and that “in trying to separate the content from the choices that are being made, whether it’s by YouTube or anyone else, you can’t present this content without making choices.” Even Justice Thomas, who was previously eager for the Court to review Section 230, questioned why Google should be held liable if it applies its algorithm in a content-neutral way, showing “cooking videos to people who are interested in cooking and ISIS videos to people who are interested in ISIS, [and] racing videos to people who are interested in racing.” As Justice Roberts noted, limiting Section 230 immunity in that way would have consequences well beyond liability under the ATA, exposing platforms to all sorts of legal actions, such as defamation or discrimination claims.

Similarly, in Taamneh, the Justices sought to find some limiting principle or middle ground approach to constrain the scope of the ATA. Many of their questions were aimed at understanding when someone could be considered to “knowingly” provide assistance to a terrorist group and what sort of assistance could be considered “substantial.” Justice Gorsuch also observed that there was very little in the plaintiffs’ arguments that linked Twitter to the ISIS attack and that it was important to “preven[t] secondary liability from becoming liability for just doing business.” And indeed, plaintiffs’ counsel, Eric Schnapper, insisted that a defendant platform could face ATA liability even if it lacks any knowledge or awareness of a particular attack and did not assist the attack in any way. But Justice Thomas appeared concerned about such a wide interpretation of the ATA when he said, “If we’re not pinpointing cause-and-effect or proximate cause for specific things, and you’re focused on infrastructure or just the availability of these platforms, then it would seem that every terrorist act that uses this platform would also mean that Twitter is an aider and abettor in those instances.” At the same time, Justice Kagan seemed to suggest that a platform might be held liable if it did not have any content moderation policy in place or failed to take any actions to remove terrorist content.

The Supreme Court Largely Ignored Threats to Free Speech

While the Justices seemed generally aware of the ramifications of a decision for the future of the internet, they mostly ignored the implications their ruling could have for free speech and user rights. The sheer scale of the content potentially giving rise to liability, which amounts to millions of posts per day, also did not play a major role in the discussion.

A ruling limiting Section 230 protections could mean – in the words of Justice Kavanaugh – that lawsuits against media platforms would become “non-stop.” Platforms would be forced to monitor everything posted on their platforms worldwide, censoring large swaths of content. There is no other conceivable way to avoid liability. It takes little imagination to predict that in such circumstances controversial, shocking, or offensive speech would be generously removed as companies seek to shield themselves from liability and prevent lawsuits, even though these types of speech are protected under international freedom of expression standards. Increased reliance on automated content moderation tools is a further likely consequence, despite their limitations. These tools are unable to make complex assessments of whether speech is illegal or qualifies as “hate speech,” “extremist,” or “terrorist” content – particularly in languages other than English. These tools are also unable to detect nuances, irony, or whether the content is in the public interest, and would likely restrict all sorts of lawful speech.

The Justices also made analogies to other industries during the Taamneh hearing, namely the hypothetical liability of gun dealers, banks, or pharmaceutical companies. But there was no recognition that online platforms are very different. They do not merely offer services to potentially problematic or dangerous actors. They also enable public discourse and expression online and host content generated by hundreds of millions of users in the case of Twitter and billions of users in the case of Google.

Google’s counsel, Lisa Blatt, did specifically point to the amici briefs raising free speech concerns. She argued that if the Supreme Court sided with the plaintiffs, then platforms would either over-moderate and turn into The Truman Show or not moderate at all and turn into a “horror show.” Surprisingly, Twitter’s counsel, Seth Waxman, did not bring up free speech. In a somewhat unfortunate example, he even suggested that platforms could be considered to have the level of knowledge that could give rise to liability if they were notified by law enforcement authorities, such as the Turkish police, of certain posts or accounts but ignored takedown requests. This is a problematic position to take, given that many governments ask for removal of content as a way to control speech and stifle dissent, rather than as a tool to prevent terrorist attacks.

In any case, it should be up to independent and impartial judicial authorities, not the executive, to make decisions on removal of speech in accordance with due process and the international human rights law standards of legality, legitimacy, necessity and proportionality. Complex legal and factual issues should also not be delegated to private actors, including online platforms. Allowing private actors whose motives are primarily economic, and who have the incentive to limit their liability exposure, to make decisions on the legality of users’ speech will inevitably lead to undue restriction of free speech online.

Low thresholds for aiding and abetting liability could also impact freedom of expression beyond questions of liability of internet intermediaries. If the Court were to follow the plaintiffs’ theory of liability, it might also have chilling effects in other areas, such as public interest reporting, as Justice Kavanaugh raised. More concretely, Justice Kavanaugh asked Schnapper whether CNN should have been sued for aiding and abetting the 9/11 attacks by airing an interview with Osama bin Laden. Schnapper rightly suggested that CNN has a First Amendment right to show the interview. In fact, free speech protections should play a larger role when assessing liability, whether the conduct involves CNN, other news outlets, or online platforms that host the speech of millions.

Will the Supreme Court Defer to Congress for Section 230 Reform?

For those worried that the Supreme Court could upend the legal structure of intermediary liability, the oral arguments are cause for optimism. Instead of demonstrating eagerness to reconsider Section 230, the Justices appeared unsure about how exactly the law should be interpreted, where to draw the line between intermediary conduct and user-generated content, and whether they had the necessary technical expertise to do so. The Court also seemed conscious that the outcome of these cases, in particular Gonzalez, could have serious economic consequences. Justice Kavanaugh cited warnings put forward by amici that a decision narrowing Section 230’s immunity could “crash the digital economy.”

Some Justices contended that the Supreme Court may not be the best venue to decide whether and how Section 230 should be reformed as they are not equipped to account for the consequences their decision might entail. Justice Kagan asked whether reforming Section 230 is “something for Congress to do, not the Court?” She joins several amici who argued that any changes to Section 230 or the broader regulatory framework should come from Congress. And indeed, as ARTICLE 19 and others have argued, the complexity of policymaking and lawmaking in this area, which affects the human rights of billions of users, in the United States and beyond, “requires careful legislative fact-finding and drafting” that is not amenable to judicial decision-making.

Many governments around the world are grappling with the question of how to regulate major platforms to prevent the amplification of radical, hateful, or extremist content online. In the United States, regulations of online platforms and recommender systems might take the form of changes to Section 230, but they can also occur through other means. Regardless of whether it is the Supreme Court or Congress, any institution considering and reviewing platform regulation needs to ensure that human rights lie at the heart of their considerations. This means that the principles of legality, legitimacy, necessity and proportionality must be applied throughout. Any framework that imposes limitations on free expression must be grounded in robust evidence and prioritize the least censorial and restrictive measures to address online harms.

Instead of asking platforms to exercise even more powers over our speech by screening and assessing all user-generated content, regulators should focus on less intrusive methods that are specifically tailored to tackling some of the negative effects of the platforms’ recommendation systems. For example, regulatory solutions should require companies to be more transparent towards regulators, researchers and users about how their recommendation systems work, set clear limits on the amount of user data that platforms are allowed to collect, and mandate the performance of human rights due diligence. They should also address the dominant position of the biggest online platforms through regulatory tools that would increase competition in the market and enhance users’ choice over what content they get to see online. Some of these regulatory solutions were adopted in the European Union last year with the EU Digital Services Act and the Digital Markets Act. While these regulations could have been more ambitious in protecting human rights online (for example by establishing an explicit right for users to encryption and anonymity), they do correctly focus on rebalancing digital markets and regulating the content moderation and curation systems applied by online platforms rather than mandating the restriction of undesirable types of users’ speech.

These policy considerations go beyond what the Justices will consider in Gonzalez and Taamneh. Should the Justices decide to reinterpret Section 230 in any way, at the very least, they will have to carefully consider the impact of any limitations to Section 230 on freedom of expression online and whether such limitations can be compatible with Section 230’s underlying purpose and the online ecosystem it has created.

IMAGE: A close up image of a woman’s hand typing on a backlit computer keyboard in the dark. (via Getty Images)

The post Two Supreme Court Cases Could “Break the Internet”: What Role Should Free Speech Play? appeared first on Just Security.

Is Meta Up for the Challenge Now That It’s Reinstated Trump? https://www.justsecurity.org/85450/is-meta-up-for-the-challenge-now-that-its-reinstated-trump/?utm_source=rss&utm_medium=rss&utm_campaign=is-meta-up-for-the-challenge-now-that-its-reinstated-trump Tue, 14 Mar 2023 13:02:36 +0000 https://www.justsecurity.org/?p=85450 Meta has struggled to articulate clear, accessible policies on content moderation that are sufficiently flexible to respond to evolving threats.

Earlier this year, Meta reinstated former President Donald Trump’s accounts on Facebook and Instagram, following his two-year suspension for praising rioters as they stormed the U.S. Capitol on Jan. 6, 2021. According to Meta, the risk to public safety from Trump had “sufficiently receded” to allow him back on its platforms, and it had introduced new guardrails to deter repeat offenses.

As with Twitter, where he was reinstated in November 2022, Trump has not yet posted on Facebook and Instagram. These platforms were key to his previous campaigns, however, and as election season heats up, it may be hard to resist their lure. Twitter under Elon Musk has moved away from robust content moderation. In contrast, Facebook under pressure from its Oversight Board has instituted a suite of safeguards meant to prevent a repeat of Trump’s 2020 tactics and to improve transparency about its content moderation processes. What has changed and will it really help?

Proliferating Policies

Facebook relies on several overlapping policies to make content moderation decisions. Some (e.g., its Community Standards) have long been public, others have only recently come to light (e.g., cross-check), and still others have undergone significant changes (e.g., newsworthiness). It has also rolled out new policies in the past two years. Since this thicket of old, new, and revamped policies is found in separate blog posts, we have summarized the main ones relevant to the Trump reinstatement below.

Prevailing Policies

  • Facebook’s Community Standards, which apply to all users, prohibit violence and incitement, hate speech, and bullying and harassment. Despite the plethora of Trump posts that seem to violate Facebook’s Community Standards (e.g., “when the looting starts, the shooting starts”), the company maintains that the former president violated the standards on only one occasion, when he fat-shamed an attendee at one of his rallies.
  • Facebook’s Dangerous Individuals and Organizations policy prohibits “praise” and “support” of designated individuals and organizations and events that the company deems to be “violating.” This was the primary basis of the company’s decision to boot Trump off its platforms and was upheld by the Board on the theory that the Capitol attack was a “violating event.” The overall policy has long been criticized, including by the Oversight Board, due to the ambiguity of terms like “praise” and “support,” and the lack of clarity on how individuals or organizations are deemed to be “dangerous.”
  • The Board’s decision on Trump’s suspension brought to light Facebook’s “cross-check” system, which diverts posts by high-reach users from the company’s normal content moderation system and shuttles them over to a team of senior officials. While it may make sense to have such a special system, it can result in overly deferential treatment for users who drive engagement on the platform and extended delays in removing posts (on average, more than five days). In response to the Oversight Board’s recommendations, Meta recently committed to taking immediate action on “potentially severely violating” content and to reducing the program’s backlog, but rejected several other important recommendations.

Revamped and New Policies

  • Facebook’s “newsworthiness” exemption previously presumed that a public figure’s speech is inherently of public interest. It is now a balancing test that aims to determine whether the public interest value of content outweighs the risk of harm by leaving it up, an approach the Oversight Board recently criticized as “vague and leaves significant discretion.”
  • In June 2021, Meta issued a policy on public figures’ accounts during civil unrest, which recognized that because of the influence exercised by such people standard restrictions may be insufficient. For a public figure who violated policies “in ways that incite or celebrate ongoing violent events or civil unrest,” Meta specified that it could restrict their accounts for up to two years.
  • The 2021 policy promised that Meta would conduct a public safety risk assessment with experts (weighing factors such as instances of violence, restrictions on peaceful assembly, and other markers of global or civil unrest) to decide whether to lift or extend the restriction. In August 2022, the company announced a new crisis protocol to weigh the “risks of imminent harm both on and off of our platform,” which it used in letting Trump back on Facebook. Although the policy is not public and it’s not clear what factors Meta will consider, Meta may use the types of factors listed in its civil unrest policy. For Trump, these included “the conduct of the U.S. 2022 midterm elections” and “expert assessments on the current security environment.”
  • Finally, under another new policy, when a public figure is reinstated, Meta may now impose penalties for content that does not violate its Community Standards “but that contributes to the sort of risk that led to the public figure’s initial suspension.” These penalties are largely similar to those that could apply for violations of Meta’s “more severe policies” under its Community Standards.

There are three main takeaways from this web of overlapping policies. First, the Oversight Board’s scrutiny and the bright spotlight on social media companies generally has obliged Meta to provide a fair amount of transparency about its processes. The company’s cross-check system, for example, was not public knowledge until it was raised in the Board’s review of cases.

Second, many of Meta’s changes are procedurally oriented, seemingly designed to address the legality principle of the international human rights law framework typically used by the Oversight Board to evaluate Facebook’s content moderation decisions. There is no doubt that the Board has consistently—and rightly—pushed the company to take a rules-based approach (e.g., taking Facebook to task for imposing an indefinite suspension on Trump when its rules included no provision for such a penalty). Meta’s new policies also nod to the framework’s necessity and proportionality principles by articulating a sliding scale of penalties.

Third, despite all the new policies and the Oversight Board’s push for more clear and accessible rules, Meta has just as much discretion as ever about how to respond—and in some cases has granted itself latitude to act even when no substantive rules are violated.

Content Moderation Policies in Practice

Imagine, if you will, a scenario in which Biden and Trump face each other again in 2024. As he did in 2020—and continues to do on the Truth Social platform—Trump shares a post casting doubt on the fairness of the upcoming election. Even if the post did not violate Meta’s Community Standards, under its new guardrails for public figures returning from a suspension, the company could limit the reach of Trump’s posts because they relate to the reason for his initial suspension. The same would be true if he promoted QAnon content. If Trump were undeterred and continued such posts, Meta could go further—restricting access to its advertising tools and even disabling his account.

All these decisions are discretionary and ultimately depend on how the company weighs the risks posed by Trump. The fact that they are untethered from Facebook’s Community Standards creates an additional layer of uncertainty about the basis for the decisions, although the risk of potential abuses is somewhat ameliorated by the required link to past transgressions. In the case of Trump, his history of encouraging political violence may lead Meta to respond more forcefully and quickly than it did in the 2020 election season—at least if he continues to rely on the same narrative.

If Trump were to move away from the rigged election/QAnon narrative but violate the company’s policies in other ways “that incite or celebrate ongoing violent events or civil unrest,” Meta’s rubric for public figures in times of civil unrest would come into play. In deciding whether to impose penalties under that framework, the company would evaluate (1) the severity of the violation and the person’s history of violations; (2) their “potential influence over, and relationship to, the individuals engaged in violence”; and (3) “the severity of the violence and any related physical harm.”
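Purely as a thought experiment, that rubric can be imagined as a simple scoring exercise. Everything in the sketch below (the zero-to-three scale, the equal weighting, the cutoffs) is an invented assumption meant to show how much judgment the framework leaves to Meta; it is not how the company actually scores cases.

```python
from dataclasses import dataclass


@dataclass
class CivilUnrestAssessment:
    # Each factor scored 0-3 by a human reviewer; the scale itself is invented here.
    violation_severity_and_history: int   # factor (1)
    influence_over_violent_actors: int    # factor (2)
    severity_of_violence_and_harm: int    # factor (3)


def penalty_tier(assessment: CivilUnrestAssessment) -> str:
    """Map the three factors to an escalating penalty. The tiers loosely track
    the ladder Meta has described publicly (restrictions of increasing length,
    up to two years), but the cutoffs are arbitrary illustrations."""
    total = (
        assessment.violation_severity_and_history
        + assessment.influence_over_violent_actors
        + assessment.severity_of_violence_and_harm
    )
    if total >= 7:
        return "account restriction (up to two years)"
    if total >= 4:
        return "feature limits, such as loss of advertising tools"
    if total >= 2:
        return "reduced distribution of the violating content"
    return "content removal or warning only"
```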

But it is unclear how this content will be reviewed, particularly considering Meta’s cross-check policy. The operation of cross-check vis-à-vis Trump seems an obvious failure—though Meta’s new commitment to taking immediate action on “severely violating” content may alleviate some issues. And of course, this all hinges on Meta deciding that a particular situation amounts to “civil unrest” and does not meet its “newsworthiness” exemption.

It is worth considering other applications of this “civil unrest” model. Facebook has been taken to task for failing to act against senior military officials in Myanmar who spread hate speech and incited violence against Rohingya. The “civil unrest” rubric could be very useful in those types of situations, but also needs to be accompanied by the allocation of sufficient resources to provide decision-makers at Meta with context and language expertise—a recommendation Meta recently committed to implementing for its cross-check policy.

What about contexts such as the summer of racial justice protests following the killing of George Floyd, which at times involved instances of property damage? These were painted by the Trump administration as a threat to national security requiring the deployment of homeland security officers and the activation of counterterrorism measures. Would Meta consider itself entitled to shut down the accounts of protest leaders on the theory that they were celebrating “civil unrest”? Given that civil rights groups have long complained about asymmetries in the company’s enforcement that have disadvantaged minority communities, the scenario is one worth considering.

Ultimately, Meta, like other social media platforms, has struggled to articulate clear and accessible policies surrounding content moderation that are sufficiently flexible to respond to rapidly evolving threats. Its new and rejiggered policies (like the old ones) leave the company with copious discretion. Their efficacy will depend on how they are enforced. And on that score, Meta’s record leaves much to be desired.

IMAGE: The logos of applications, WhatsApp, Messenger, Instagram and Facebook belonging to the company Meta are displayed on the screen of an iPhone in front of a Meta logo on February 03, 2022. (Photo illustration by Chesnot/Getty Images)

Mind the Gap: The UK is About to Set Problematic Precedents on Content Regulation
https://www.justsecurity.org/85358/mind-the-gap-the-uk-is-about-to-set-problematic-precedents-on-content-regulation/
Mon, 06 Mar 2023 13:47:36 +0000

Potentially positive elements of the UK's Online Safety Bill "are overshadowed and at risk of being negated by some of the more politically-motivated, hyperbolic aspects. The House of Lords must take advantage of its review and opportunity to amend the Bill."

Issues like the EU’s latest regulatory push or recent U.S. Supreme Court hearings may have dominated recent tech policy headlines, but less attention has been paid to serious legislative tinkering in the United Kingdom that could have global impacts. Despite a high rate of ministerial turnover, the government and Parliament have made a series of significant amendments to the already highly controversial Online Safety Bill (OSB). Taken together, these changes could significantly alter the way the internet is experienced in the United Kingdom and establish deeply problematic precedents that would further embolden governments around the world.

The OSB is the result of five years of on-again, off-again government attention to the issue of “online harms,” having wound its way through a green and a white paper, the re-branding of the Department for Digital, Culture, Media and Sport, and now the creation of a new digital ministry, developments that together have spanned the tenures of four prime ministers. The original “online harms” approach, initially set out in 2019, centered on an ill-defined “duty of care” standard requiring companies to prevent certain harmful user-generated content, including so-called “lawful but awful” content. Through recent amendments, the government has simultaneously narrowed and expanded the Bill’s scope by limiting liability for failure to remove content that is not illegal, while broadening the range of priority offenses (illegal content) covered and expanding the severity, scope, and reach of enforcement powers. The Bill now sits with the sometimes feckless House of Lords, which represents the last chance for further amendments.

Of the many questions and concerns that the OSB raises, several aspects deserve attention for how they resemble tactics traditionally associated with regimes seeking to increase their leverage over tech companies in order to surveil their populations and censor information – efforts that are also often “justified” in the name of “online safety.” These provisions unnecessarily risk validating and encouraging further government repression and should be reconsidered by the House of Lords. Three aspects in particular deserve to be called out and considered carefully against the UK’s “constitutional” framework and international commitments: extraterritorial application, individual liability, and the use of digital laws to regulate offline behavior.

The Sun Will Never Set on British Online Enforcement

Governments around the world have expressed frustration for years about their inability to get the attention of (not to mention obedience from) tech companies. Until recently, the countries that typically acted most aggressively to expand leverage over tech companies were those seeking to exert more control over data and information for repressive purposes. With the OSB, the UK would join several other democratic countries that have recently added themselves to that list by expanding enforcement powers and enacting personnel localization (aka “hostage taking”) provisions. While the instinct to establish jurisdiction and enforcement power is understandable, the Peers should ask themselves if such an approach is actually, in the language of human rights, “necessary and proportionate” to the legitimate objectives that the law is seeking to address. 

The OSB’s “safety duties” apply to any internet user-to-user or search service, regardless of its location, that targets UK users or has a significant number of UK users, as well as to any service where “there are reasonable grounds to believe that there is a material risk of significant harm to individuals in the [UK]” presented by the service. While the user-base and targeting criteria are relatively uncontroversial (if underdefined), the “material risk of significant harm” prong raises questions as to how such a determination might be made. The Bill also makes clear that certain “information offenses” will apply regardless of whether they take place in the UK “or elsewhere.” These “information offenses” include failure to comply with a requirement included in an “information notice” from Ofcom, the OSB-empowered regulator, as well as providing information in response to a notice that is “encrypted such that it is not possible for Ofcom to understand it.” While the latter offense contains a mens rea provision (“intent”), that may be cold comfort since the innate purpose of end-to-end encryption (E2EE) is to make underlying content unreadable to anyone but the sender and recipient. (It is worth noting that other provisions in the Bill have been criticized for potentially undermining the ability of private messaging services to deploy E2EE.) For good measure, the Bill makes clear that it also applies extraterritorially to any officer for any offense committed by a covered entity that “is attributable to any neglect on the part of [the] officer of the entity.”
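Reduced to its logical skeleton, the scope test just described is a three-pronged disjunction. The sketch below is only an illustrative paraphrase of the criteria as summarized above; the field names and the numeric threshold are invented, and the genuinely hard questions (what counts as a “significant” number of users, or a “material risk” of “significant harm”) are precisely the ones a few lines of code cannot answer.

```python
from dataclasses import dataclass


@dataclass
class Service:
    targets_uk_users: bool
    uk_user_count: int
    material_risk_of_significant_harm_in_uk: bool  # the Bill does not say how to assess this


# "Significant number" is not quantified in the Bill; this threshold is a placeholder.
SIGNIFICANT_UK_USERS = 1_000_000


def within_osb_scope(service: Service) -> bool:
    """Hypothetical reading of the scope test: a user-to-user or search
    service anywhere in the world is in scope if ANY prong is satisfied."""
    return (
        service.targets_uk_users
        or service.uk_user_count >= SIGNIFICANT_UK_USERS
        or service.material_risk_of_significant_harm_in_uk
    )
```

Because any single prong suffices, the undefined “material risk” prong ends up doing most of the jurisdictional work.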

While it may feel good and be politically savvy for the government to assert that it will reach to the ends of the earth (i.e., Silicon Valley) to hold companies accountable for harms that occur in the UK, the likelihood that it will have to go after individual employees located abroad is incredibly slim. That is in large part because doing so will be unnecessary: the rest of the OSB already delivers extensive, effective penalties to ensure compliance. Specifically, the Bill allows Ofcom to assess fines of up to 10 percent of global revenue on covered entities (which, based on last year’s reported revenues, would amount to fines of $120 million, $16 billion, and $28 billion for Twitter, Meta, and Google respectively), as well as to obtain court orders against third parties to deny non-compliant entities business facilities or, as a last resort, to block them within the UK.

The more likely impact of this flexing of jurisdictional muscle will be to empower compliance lawyers within companies, whose job is to minimize legal risks, however slim. These unheralded corporate actors will increasingly bring experience with financial and export-control regulations to bear on content regulations and are likely to recommend aggressive approaches to complying with the broad provisions in the OSB. It will also contribute to the ongoing competition among countries around the world, regardless of their democratic credentials, to take unnecessarily expansive approaches to their own regulatory powers. Collectively, these efforts make conflicts of law between jurisdictions more likely and could force companies to make radical decisions, such as providing significantly different versions of products and services, or even withholding them altogether, in distinct countries. This in turn will limit benefits that we take for granted, such as the way the internet has been used to foster the development of cross-border communities, trends, and commerce.

Getting a Bit Too Personal

Beyond extraterritorial application, in what appears to be another cathartic but unnecessary move, recently proposed amendments would extend personal, criminal liability to “senior managers” for persistent breaches of their duty of care to children. The text of the Bill that passed out of the House of Commons earlier this year already included provisions holding any “officer” (defined as “a director, manager, associate, secretary or other similar officer”) accountable for any information offenses by the entities they work for, if it is committed with their “consent or connivance” or “attributable to any neglect” on their part. This includes criminal liability for “failure to take all reasonable steps” to avoid presenting false or encrypted information, or to avoid destruction of documents.

On top of this, a new amendment proposed by the government would provide criminal sentences of up to two years for senior managers who fail to comply with the child protection duties set out in the Bill. This amendment was apparently made to satisfy Conservative party “back benchers” upset with the removal of “lawful but harmful” liability. Given this political context, it is not surprising but still disappointing that the government has failed to articulate a clear case for why institutional liability is insufficient or why personal, criminal sanctions are necessary. As Jacqueline Rowe of Global Partners Digital has recently pointed out, the lack of any requirement to exhaust alternative remedies before pursuing criminal sanctions is out of keeping with other countries’ approaches; the focus on content targeting children could lead to disproportionate impacts on children’s free expression rights; and the lack of definitional clarity around the conduct being criminalized raises questions about its consistency with the UK’s international commitments.

Locally-based, tech company staff have long been targets for non-democratic governments seeking to compel compliance with their censorship or surveillance orders. While authoritarian governments are likely to continue to engage in such “hostage taking” regardless of what the UK does, having a leading, democratic country ratify this approach undermines the moral weight of arguments made in response to such pressure. 

Regulating the Analog World, One Post at a Time

Late last year, in what it framed as a safeguard for freedom of expression, the UK government removed provisions imposing liability for failure to remove “lawful but harmful” content. Since then, however, the government has significantly expanded the list of “priority offences” covered by the Bill (and the expanded enforcement mechanisms associated with them). In addition to creating new categories of illegal content, the OSB would sweep a wide range of existing crimes into its regulatory and enforcement scheme.

The Council of Europe’s Convention on Cybercrime (the “Budapest Convention”) is a 20-year-old treaty that defines and seeks to harmonize the scope of “cybercrime,” as well as related criminal procedures and mechanisms for intergovernmental cooperation. The UK ratified the Convention in 2011 and has been a staunch advocate for its approach since then, including most recently as part of the UK’s pushback against the Russia-led efforts to define cybercrime more broadly through the U.N. Ad-Hoc Committee on Cybercrime. The OSB incorporates a range of cybercrimes that fall into categories covered in the Budapest Convention, such as misuse of data or computer systems, computer-related crimes, and child pornography.

In addition to these crimes committed through the use of computers, the government has been stuffing the Bill with more and more “analog” crimes that cannot themselves be committed online. Most controversially, the government recently introduced amendments that designate “Modern Slavery” and “immigration offenses” as “priority offenses.” As Secretary of State for Digital, Culture, Media and Sport Michelle Donelan has explained, “[a]lthough the offences . . . cannot be carried out online . . . aiding, abetting, counselling, conspiring etc those offences by posting videos of people crossing the channel which show that activity in a positive light could be an offence that is committed online and therefore falls within what is priority illegal content.”

As the Budapest Convention makes clear, there is nothing wrong with using digital evidence to convict people who break the law. However, it is harder to justify making a website or platform legally responsible for removing or prohibiting content related to border crossings or sex trafficking. At the same time, it isn’t hard to imagine how such an expansive approach would result in companies erring on the side of caution, which could limit journalistic content, pro-immigration content, or even anti-trafficking content that gets caught up in imprecise filters. As observers in the United States have pointed out, the last effort by Congress to impose criminal liability on intermediaries for sex trafficking (through FOSTA/SESTA) has resulted in a range of unintended consequences.

This application of digital law to analog behavior is even more problematic considering that the OSB introduces an apparently novel and legally untested threshold for companies to remove content: “reasonable grounds to infer” illegality. This, combined with the limited defenses provided, exacerbates the risk that covered entities will choose to be very conservative about what content they allow UK users to see. This in turn means that UK users may have less opportunity to understand and participate in global conversations about sensitive topics that are mediated online, which amounts to quite a few conversations these days.

While such an approach would likely be read as the kind of “prior restraint” prohibited under the U.S. First Amendment (or “prior censorship” prohibited under Article 13 of the American Convention on Human Rights), it is not clear how this will be interpreted under UK law, which grants wider latitude for limitations on expression. What is clear is that authoritarians have long coveted similar powers to repress politically inconvenient or contentious content and would welcome the opportunity to respond to any critiques thereof with claims of hypocrisy or moral equivalence.

Time to Focus

Governments should consider how best to incentivize tech companies to be responsible for the harms that occur on, through, and as a result of their products, services, and processes. However, if governments are serious about their commitments to protecting freedom of expression, privacy, and the open, interoperable, secure, and global internet, they must act thoughtfully and responsibly. 

The provisions addressed above are neither necessary nor proportionate to the valid purposes that the OSB seeks to address. The UK regularly imposes significant monetary penalties on companies in other regulatory contexts, including on foreign tech companies, and there is no evidence that UK regulators face systemic problems in securing compliance. For a country like the UK, in which all the major tech companies have staff and which has been granted privileged access to data held by U.S.-based tech companies, using the OSB to grab unnecessary powers is unseemly and, in the long run, counterproductive. Similarly, the extension of the Bill’s scope to regulate offline conduct is a step too far that the Peers should walk back.

There is much in the OSB that is both thoughtful and responsible, including its provisions around transparency, non-discrimination, and fairness. These aspects are in line with the kinds of human rights-compliant approaches that digital rights groups have advocated. But those are overshadowed and at risk of being negated by some of the more politically-motivated, hyperbolic aspects. The House of Lords must take advantage of its review and opportunity to amend the Bill to focus it and strip away unnecessary and problematic provisions. This will not only result in a more efficient, less legally vulnerable regulatory framework, but also ensure that the UK government can continue to hold its head high and assert itself as a global leader on matters of international human rights law, due process principles, and economic and technological progress.

This post is written in the author’s personal capacity.

IMAGE: In this photo illustration, a teenage child looks at a screen of age-restricted content on a laptop screen on January 17, 2023 in London, England. The Online Safety Bill aims to protect young and vulnerable viewers by introducing new rules for social media companies which host user-generated content, and for search engines, which will have tailored duties focused on minimizing the presentation of harmful search results. (Photo by Leon Neal/Getty Images)
