Cybersecurity Archives - Just Security

Attorney General Merrick Garland and Intelligence Community Leaders Testify on the Reauthorization of Section 702 of FISA

Editor’s Note: This article, originally published on March 2, has been updated to reflect the testimony of top intelligence leaders before the Senate Select Committee on Intelligence on March 8.

U.S. Attorney General Merrick Garland testified on March 1 for the first time before the new Congress at a Senate Judiciary Committee hearing, “Oversight of the Department of Justice.” Buried in wide-ranging testimony was an exchange about the reauthorization of Section 702 of the Foreign Intelligence Surveillance Act (FISA). A week later, on March 8, during a rare public Senate Select Committee on Intelligence hearing, “Worldwide Threats,” Director of National Intelligence Avril Haines and other intelligence community leaders reaffirmed Section 702’s importance amid discussion of great power competition, the origins of the coronavirus pandemic, and nuclear proliferation. Just Security recently ran a series featuring pieces by Elizabeth Goitein and Ashley Gorski on surveillance under the Act, and published an article on the topic by George Croner, who has criticized proposed FISA reforms.

The Biden administration has amplified efforts to push for Section 702 reauthorization prior to its expiration on December 31, 2023. In a January keynote speech at the Privacy and Civil Liberties Oversight Board Public Forum, National Security Agency Director General Paul M. Nakasone underscored the important role Section 702 plays in allowing the U.S. government to collect intelligence on foreign governments’ plans and strategic intentions. On Feb. 28, National Security Adviser Jake Sullivan released a statement reaffirming that the Biden administration considers the reauthorization of Section 702, along with other expiring FISA provisions, to be a “top priority.” That same day, Garland and Haines sent a letter to Senate and House leadership regarding Title VII of FISA, and in particular Section 702. The letter outlines the statute’s role in ensuring national and cyber security, and strongly urges its prompt reauthorization.

Garland Testifies Before the Senate Judiciary Committee

In his congressional testimony on March 1, Garland outlined his reasons for supporting reauthorization. He stated that “an enormously large percentage of the threats information” he receives in daily “all threats” briefings with the FBI and the Department of Justice (DOJ)’s National Security Division (NSD) stems from intelligence collected under Section 702 authority. Examples he gave include materials relevant to threats related to Ukraine, as well as to threats posed by foreign terrorist organizations and by state adversaries such as China, North Korea, Iran, and Russia. Additionally, he credited Section 702 collection with contributing to the DOJ’s cybersecurity enforcement efforts, including by providing information vital to ransomware investigations and to obtaining decryption keys. He concluded that, without Section 702, “we would be intentionally blinding” both the United States and our allies “to extraordinary danger.”

In support of Garland’s testimony, Sen. Richard Blumenthal (D-CT) said that “without going into any classified information,” Section 702 “was instrumental in preventing major catastrophic aggression against our nation and also helping our allies like Ukrainians with intelligence that was extremely critical to pushing back the Russians.” Sen. Mike Lee (R-UT), meanwhile, cited skepticism of the Department’s independence — a common refrain in Republican questioning of Garland during the hearing — as a basis to withhold reauthorization until the Department undertook “major reforms.” He asserted that “the current standard for a warrantless backdoor search of the content of communications of American persons is reasonably likely to return evidence of a crime.” Referencing a recently declassified report from the Office of the Director of National Intelligence (ODNI), Lee expressed concern about non-compliant searches, including “searches of prospective FBI employees, members of a political party, individuals recommended to participate in the FBI citizens’ academy, journalists, and even a Congressman.”

In contemporaneously submitted written testimony and responses to senators’ queries for the record, Garland and the DOJ again highlighted that Section 702 surveillance is crucial to the state’s ability to gather foreign intelligence information about non-U.S. persons reasonably believed to be outside the United States. The Department emphasized the guardrails in place to prevent abuses, including “robust targeting, minimization, and querying procedures to protect the privacy and civil liberties of U.S. persons.” DOJ further underscored its corrective measures to strengthen accuracy in FISA applications through new mechanisms, including providing NSD attorneys with information that could “undermine a probable cause determination” and enforcing new training requirements for FBI personnel and DOJ lawyers. Lastly, oversight of FISA applications has been expanded to include “completeness reviews” intended to ensure the Foreign Intelligence Surveillance Court (FISC, or colloquially, the “FISA Court”) is provided with accurate and complete FBI case files for a probable cause determination. The FISC is charged with issuing relevant warrants under FISA, including those that authorize electronic surveillance or a physical search.

Top Intelligence Community Leaders Testify Before the Senate Select Committee on Intelligence

Top intelligence officials’ testimony at the March 8 Senate Intelligence Committee hearing reaffirmed Section 702’s crucial role in mitigating national security threats. Haines noted that Section 702 permits intelligence-gathering against foreign targets “at a speed and reliability” that the United States “cannot replicate with any other authority.” The authority helped ensure that the ODNI delivered an accurate 2023 Annual Threat Assessment of the U.S. Intelligence Community, she testified, and played a “key role” in the U.S. government’s operations against former Al-Qaeda leader Ayman al-Zawahiri. Haines also told the Committee that the authority is crucial for countering malicious cyber actors targeting critical U.S. infrastructure and the proliferation of weapons of mass destruction. FBI Director Christopher Wray similarly testified that Section 702 is vital to efforts to stymie China’s hacking program, which is “bigger than [those of] every major nation combined;” Russia’s “significant” use of cyber “as an asymmetric weapon;” and Iran’s “efforts to conduct destructive [cyber] attacks even in the United States.” He suggested that each of these powers is “trying to build pre-positioned capabilities” they might deploy during “a much more serious conflict.”

The officials also underscored that Section 702 does not authorize collection of information on American citizens. Gen. Paul Nakasone, the Director of the National Security Agency, agreed with Sen. Rounds’ (R-SD) characterization that Section 702 targets non-U.S. persons outside of the country who are using platforms with a nexus to U.S. communications systems. However, Nakasone added, when foreigners operating outside the United States reference someone in the United States in their communications, the intelligence community follows protocols to “minimize” and “hide that data” in order to protect the civil liberties and privacy of U.S. persons. Wray similarly testified that he was “very pleased to be able to share with the Committee…publicly for the first time that we saw in 2022 a 93% year-over-year drop in U.S. person queries.”

Section 702, along with other expiring FISA provisions, is set to expire in December 2023 unless reauthorized.

IMAGE: U.S. Attorney General Merrick Garland testifies before the Senate Judiciary Committee in the Hart Senate Office Building on Capitol Hill on March 01, 2023 in Washington, DC. (Photo by Chip Somodevilla/Getty Images)

The Year of Section 702 Reform, Part I: Backdoor Searches

Editor’s Note: This is part one in a multi-part series on foreign intelligence surveillance reform.

This year’s reauthorization of Section 702 of the Foreign Intelligence Surveillance Act (FISA) — a law that authorizes broad surveillance of foreigners outside the United States to acquire foreign intelligence information — will be unlike any previous one. In the past, reauthorization was a foregone conclusion, and civil liberties advocates struggled to secure even minor procedural safeguards. But a series of recent government reports and FISA Court opinions have demonstrated that Section 702 has become a go-to domestic spying tool for the FBI, and that FBI agents are routinely violating statutory and court-ordered limits on accessing Americans’ data “incidentally” collected under Section 702. At the same time, conservative lawmakers have turned against FISA in light of the government’s flawed applications to conduct surveillance of Trump associate Carter Page. With House Judiciary Committee Chairman Jim Jordan on record opposing reauthorization, it’s clear that Section 702 will not be renewed without a major overhaul.

In public, at least thus far, the Biden administration is acting as if this year’s reauthorization is business as usual. At a recent Privacy and Civil Liberties Oversight Board (PCLOB) hearing on Section 702, NSA Director Paul Nakasone’s opening remarks struck a tone-deaf note, reciting boilerplate talking points about balancing national security and civil liberties without any mention of the extensive violations revealed since the last reauthorization. Behind the scenes, though, the government’s anxiety is evident. Intelligence officials have been setting up Hill briefings since at least last fall — several months before this type of advocacy usually begins. They are also endeavoring to rebrand Section 702 as a cybersecurity authority, recognizing that the specter of terrorism no longer serves as a trump card in any conversation about reforms. For their part, lawmakers who support reauthorization are attempting to distinguish Section 702 of FISA from Title I (the part of the law used in the investigation of Carter Page), suggesting — wrongly, as discussed below — that Section 702 is used only against foreigners.

At bottom, intelligence officials and other defenders of broad surveillance authorities are aware that a straight reauthorization is out of the question, and so they are attempting to level-set around a small number of modest oversight provisions. This approach is evident in a recent Lawfare post by Adam Klein, President Trump’s appointee to chair the PCLOB, who has occasionally advocated strengthening oversight mechanisms but generally eschews substantive reforms. Klein’s post does not even mention the most controversial aspect of Section 702, namely, backdoor searches (discussed below). Instead, Klein focuses on improvements to the FISA Court process that would apply mainly in the area of Title I applications. Mike Herrington, an FBI official who spoke at the recent PCLOB hearing, similarly focused on ways in which the FBI is supposedly strengthening its internal oversight processes.

This time, however, lawmakers’ concerns are unlikely to be allayed by a mere bolstering of oversight requirements. For one thing, it’s doubtful that adding new layers of internal oversight will accomplish much, given the government’s 15-year cycle of violations, followed by the adoption of new administrative oversight measures, followed by more violations. At a more fundamental level, though, oversight — whether internal, in the form of FBI training or audits, or external, in the form of FISA Court review — is not an end in itself; it is a means to ensure that the government is following the rules. Where, as here, the rules themselves have been interpreted to permit warrantless searches of Americans’ private communications, all the oversight in the world won’t solve the problem.

Congress must rewrite the rules to ensure that the government cannot rely on its foreign intelligence surveillance authorities to conduct warrantless surveillance of Americans. This article is the first in a series that will examine the key reforms Congress should implement, including: (1) imposing a warrant requirement before the government searches Section 702-acquired data for Americans’ communications; (2) closing gaps in the law that permit the collection and use of Americans’ communications and other Fourth Amendment-protected information without any statutory limits or judicial oversight; (3) limiting the permissible pool of Section 702 targets to those who might reasonably have information about foreign threats, which would in turn limit the “incidental” collection of Americans’ communications; and (4) removing artificial barriers to existing judicial review mechanisms established by Congress.

Closing the Backdoor Search Loophole

Congress enacted Section 702 in 2008 to make it easier for the government to conduct surveillance of suspected foreign terrorists. Previously, FISA required the government to obtain an individualized order from the FISA Court in order to acquire communications inside the United States or from a U.S. company, even if the target was a foreigner overseas. The government also had to show probable cause that the target was a foreign power or agent of a foreign power. Under Section 702, no individualized order or probable cause showing is needed. The government may target any foreigner abroad to obtain foreign intelligence, and the FISA Court’s role is limited to approving general procedures for the surveillance on an annual basis.

Although Section 702 surveillance must be targeted at foreigners abroad, it inevitably sweeps in large volumes of Americans’ communications — e.g., calls and emails between foreigners and their American friends, relatives, or colleagues. If the government’s intent were to eavesdrop on those Americans, it would have to obtain a warrant (for a criminal investigation) or a FISA Title I order (for a foreign intelligence investigation) to comply with the Fourth Amendment. Accordingly, Congress required the government to “minimize” the sharing, retention, and use of this “incidentally” collected U.S. person information, and to certify that it is not engaging in “reverse targeting” — i.e., using Section 702 surveillance to spy on Americans.

These protections for Americans’ constitutional rights are simply not working. Rather than “minimize” the sharing and retention of U.S. person information, the National Security Agency (NSA) routinely shares raw Section 702 data — which includes Americans’ communications — with the FBI, CIA, and National Counterterrorism Center (NCTC). All agencies retain the data for a functional minimum of five years. (Agency policies describe the 5-year period as a ceiling, not a floor. However, these same policies include several exceptions to this limit, and the PCLOB has reported that agencies rarely if ever delete information before the 5-year trigger.)

Worse, all of these agencies have policies in place that allow them to search through Section 702 data for Americans’ communications. In other words, after certifying to the FISA Court that it is not seeking the communications of any particular, known Americans (which would be “reverse targeting”), the government searches through the warrantlessly acquired data for the communications of . . . particular, known Americans. This is a bait and switch that violates the spirit, if not the letter, of the prohibition on reverse targeting, and it drives a gaping hole through Americans’ Fourth Amendment rights.

The FBI routinely performs these “backdoor searches” in ordinary domestic investigations that have nothing to do with national security or foreign intelligence. Until recently, though, the full extent of this practice was unknown. Although Congress has long required the NSA and CIA to report how many such searches they perform annually (the number is in the thousands for both agencies), the FBI for years claimed it had no ability to track this information. In early 2018, however, Congress required the FBI to begin keeping records of U.S. person queries. The FBI failed to comply for over two years, advancing an absurd legal argument that it could satisfy the requirement by simply counting all queries (i.e., including queries of non-U.S. persons). It finally began keeping the required records in 2020 after the FISA Court and FISA Court of Review rejected that argument.

Thus, the Office of the Director of National Intelligence (ODNI) included the number of FBI backdoor searches for the first time in its 2022 annual statistical report. The report revealed that the FBI performed up to 3.4 million U.S. person queries of Section 702 data in 2021 alone. ODNI cautioned that this number likely overcounts the number of Americans affected, in part because the FBI might use multiple identifiers for, or run multiple queries on, the same individual. But even if the number is off by an order of magnitude, that still represents nearly 1,000 warrantless searches for Americans’ communications each day: 340,000 queries spread across a year averages out to more than 900 per day.

In light of this new information, the government cannot plausibly maintain that Section 702 is solely foreign-focused. Instead, it has become something Congress never intended: a domestic spying tool — one that enables the government to routinely search for and review Americans’ phone calls, emails, and text messages without obtaining a warrant.

Both Congress and the FISA Court have attempted to place limits, albeit modest ones, on backdoor searches. In 2018, Congress required the FBI to show probable cause and obtain a court order for a very small subset of U.S. person queries: those conducted in predicated criminal investigations unrelated to national security. (The subset is small in part because the FBI generally runs U.S. person queries at early stages of the investigation, i.e., before they qualify as “predicated.”) According to the government’s figures, this requirement to obtain a court order has been triggered on more than 100 occasions since 2018. By the government’s own admission, however, the FBI has never once complied with it. Some of these non-compliance incidents can be traced to a technical issue with how the FBI’s systems display data — a problem the FBI notably failed to fix for nearly two years. But the FISA Court made clear that the violations could not all be explained by this technical issue.

In cases not subject to this statutory court-order requirement — i.e., the vast majority of cases — the only limitation on backdoor searches is a FISA Court-approved requirement that U.S. person queries must be reasonably likely to retrieve foreign intelligence or evidence of a crime. This is a low bar, and it’s been in place, in one form or another, for longer than Section 702 itself (as it has long been part of more general FISA minimization rules). Nonetheless, FISA Court opinions from 2018, 2019, and 2020 reveal that the FBI has engaged in “widespread violations” of this rule. To name just a few examples, FBI agents searched for the communications of people who came to the FBI to perform repairs; victims who approached the FBI to report crimes; business, religious, and community leaders who applied to participate in the FBI’s “Citizens Academy”; college students participating in a “Collegiate Academy”; police officer candidates; and colleagues and relatives of the FBI agent performing the search. The FBI also engages in “batch queries,” querying thousands or even tens of thousands of Americans’ communications at one time using a single justification.

Government reports released in 2022 reveal even more disturbing violations. In one instance, an FBI agent conducted U.S. person queries of Section 702 data because a witness had reported seeing two “Middle Eastern” men loading boxes labeled “Drano” into a vehicle. In another case, an agent conducted several queries using the name of a U.S. congressman, and reviewed information that these queries returned. Another agent conducted queries using the names of “a local political party.” And one agent conducted a batch query that included “multiple current and former United States Government officials, journalists, and political commentators.” These incidents raise the specter of backdoor searches being used to target individuals based on race, religion, politics, and journalistic activity. That’s alarming, but it should not be surprising. When government officials are not required to show probable cause of criminal activity to a court, it dramatically increases the risk that searches will be driven by improper considerations — including officials’ conscious or subconscious prejudices or political leanings.

Finally, while the most flagrant recent violations were committed by the FBI, the NSA has similarly violated the rules limiting access to Americans’ communications. Most notably, in 2011, the FISA Court prohibited the NSA from conducting any U.S. person queries of data obtained through “upstream” collection — a method of Section 702 collection that is more likely to sweep in purely domestic communications. The Court made clear that this prohibition was necessary to preserve the constitutionality of the program. Several years later, the NSA reported to the FISA Court that its agents had not been complying with this rule. The agency blamed the violations on “human error” and “system design issues”; the NSA’s Inspector General found that “the problem was widespread during all periods under review.” In a 2017 opinion, the FISA Court chided the NSA, not only for its failure to comply with the querying prohibition, but for its “institutional lack of candor” in failing to timely report the violations.

Given this background, the only way to fully protect Americans’ Fourth Amendment rights and prevent abuses is to require the government to obtain a probable-cause court order before performing U.S. person queries. In law enforcement investigations, the government should be required to obtain a warrant from a magistrate judge. In foreign intelligence investigations, it should be required to obtain a FISA Title I order from the FISA Court, which means showing probable cause that the subject is an “agent of a foreign power.” (FISA defines this term, as applied to U.S. persons, in a way that requires involvement in espionage, terrorism, identity fraud, or other illegal activity.)

This requirement would prevent the government from using Section 702 as an end-run around the Fourth Amendment. And while this mandate, too, might well be violated, “widespread violations” like those we’re seeing now — or, at least, the FISA Court’s willingness to continue approving the program despite such violations — would be far less likely. The FBI has claimed that some agents simply didn’t understand existing limits on conducting U.S. person queries. A requirement to obtain a probable-cause order for all U.S. person queries, however, is as clear and simple as any rule could be. The FBI would be hard pressed to claim confusion over such a requirement.

Many lawmakers have already embraced this approach. Senators Dianne Feinstein, Mike Lee, Patrick Leahy, and Kamala Harris cosponsored an amendment requiring the government to obtain a probable-cause order for U.S. person queries the last time Section 702 was reauthorized, although it didn’t receive a vote. And the House has twice passed a similar amendment (in 2014 and 2015) with both Democratic and Republican support.

The FBI’s Arguments

The government, predictably, opposes closing off the backdoor search loophole. It leads with the assertion that these searches are lawful. That is indeed the view of the FISA Court. But among the handful of federal courts outside the FISA Court that have had the opportunity to weigh in on this question, a divide has emerged, with several judges — including a unanimous panel of the Second Circuit Court of Appeals — raising constitutional concerns. (Notably, judges on the other side of this divide have relied heavily on a misrepresentation that the Department of Justice made in litigation, namely, that government officials are required to review Americans’ communications anyway as part of the minimization process.) Outside of the courts, constitutional scholars have argued that backdoor searches must be treated as a Fourth Amendment event separate from the underlying collection, thus triggering the warrant requirement. In short, the constitutionality of backdoor searches is anything but settled.

The FBI next argues that requiring a warrant would interfere with efforts to protect Americans. At the PCLOB hearing, Herrington identified hypothetical scenarios in which backdoor searches could be used to identify victims of cyberattacks and targets of espionage. Indeed, ODNI has stated that 1.9 million of the U.S. person queries conducted in 2021 were for the purpose of identifying potential cyberattack victims. Herrington expressed concern that the government would not be able to obtain a warrant for such searches.

The fundamental problem with this argument is that there is no “cybersecurity” or “victim” exception to the Fourth Amendment. If the FBI were investigating a cyberattack perpetrated by a purely domestic actor, it could not simply help itself to the communications of 1.9 million Americans to identify victims. It would have to use other investigative techniques. The Fourth Amendment doesn’t afford lesser protection to American victims simply because the perpetrator happens to be foreign. The foreign suspect may not have Fourth Amendment rights, but the American victims most certainly do.

In any event, if protecting victims were the sole or even primary purpose of backdoor searches, the government would not oppose a warrant requirement outright. It would instead propose a narrow and rigorously overseen carveout — e.g., one that would not involve accessing communications content and that would require FISA Court approval on a case-by-case basis — for situations in which the government has reason to believe someone is a victim or target of malign foreign activity.

Instead, the government is flatly opposing a warrant requirement on the ground that it would recreate “the wall.” That’s nonsense, and the government knows it. “The wall” refers to pre-9/11 rules that governed how law enforcement officials could use foreign intelligence information acquired with a FISA Title I order. In other words, these were rules that (in practice, if not on paper) limited the use of foreign intelligence information for law enforcement purposes even after the government made the probable-cause showing required by the statute. Requiring a warrant or FISA Title I order for U.S. person queries would involve no such restrictions or distinctions. It would constitute a “wall” only in the sense that the Fourth Amendment’s warrant requirement establishes a wall between the government and the private communications of Americans.

As for the FBI’s widespread violations of existing limits on U.S. person queries, the government told the FISA Court that FBI agents didn’t understand those limits. To address that problem, the FBI is bolstering its training requirements and imposing new internal oversight measures. This would be a more convincing argument if the rule the FBI has been violating (i.e., that queries must be designed to retrieve foreign intelligence or evidence of a crime) were a new one. But the notion that FBI agents didn’t understand the relevant standard — and that they simply need better training and oversight — is hard to accept when that standard has been in place for at least 14 years, and when the government has been touting its rigorous training and oversight throughout that period. As the FISA Court suggested, there’s an alternative explanation for the FBI’s behavior: not just a misunderstanding of the standard, but “indifference toward it.”

Indeed, it’s important to recognize that the recent FISA Court opinions are only the latest in a string of opinions dating back to 2009 that reveal an unbroken pattern of violations — by the FBI, NSA, and CIA — of the rules designed to protect Americans’ privacy. (See here for a compilation of Section 702 violations as of 2017.) In written comments to the PCLOB, I documented the FISA Court’s rising frustration with these violations and the government’s failure to timely disclose them. On multiple occasions, the government has responded by pledging to improve its training and/or bolster internal oversight. None of these efforts has been sufficient to disrupt the pattern. In the words of surveillance expert Julian Sanchez, the FISA Court and the government have been engaged in a game of “compliance whackamole.”

Ultimately, though, even if the FBI could ensure perfect compliance with the existing rules, it wouldn’t obviate the need for a warrant. Communications are collected without a warrant under Section 702 based on the premise that the subjects of the government’s investigative activity are foreigners abroad. If that premise changes, so does the constitutional calculus. Requiring a warrant for U.S. person queries honors the balance between security and liberty struck in the Fourth Amendment and ensures that Section 702 can’t be used to get around Americans’ constitutional rights. That essential reform should be the starting point for any reauthorization of the law.

IMAGE: Futuristic data screen and hologram world map. (Getty) 

Poland’s Position on International Law and Cyber Operations: Sovereignty and Third-Party Countermeasures

On December 29, 2022, Poland published its position on how international law applies to cyberspace. The 8-page document delivers a well-crafted and nuanced position on the current issues regarding the applicability of international law to cyber operations and puts Poland firmly into the mainstream of opinion on many of them, while offering useful examples to further the debate. The paper also includes bold propositions and arguments on issues such as third-party countermeasures and non-intervention, which will push the discussion forward on contested areas of law. This article offers a brief overview and discussion of the main points of the position paper.

Sovereignty at the core of international obligations in cyberspace

The position paper starts by reiterating Poland’s position that international law applies to cyber operations and that respect for international law and norms is a necessary precondition for the preservation of international peace and security in cyberspace. Furthermore,

[i]n Poland’s view, the practice of publicly presenting positions in key matters concerning international law increases the level of legal certainty and transparency, at the same time contributing to strengthening respect for international law commitments, and offers an opportunity to develop customary law.

In this spirit, the paper’s discussion of legal issues opens with sovereignty, which Poland regards as a fundamental principle of international law, giving rise to other rights and obligations such as the principle of non-intervention, as well as norms on jurisdiction and immunities. Referring to the Palmas case, Poland sees the core of sovereignty in independence, equality and the inviolability of a State’s territorial integrity and political independence. Consequently, States exercise supreme authority over their own territory, which includes persons and objects, such as information and communication technology (ICT) infrastructure. From this supreme authority also stems the right to protect persons and objects within a State’s territory.

As a result, the Republic of Poland takes the position that the violation of a state's sovereignty may occur both in the event of an attack against state infrastructure and against private infrastructure. A mere fact that IT infrastructure is linked in a number of ways with an international network does not result in the state's losing any of its rights with respect to such infrastructure.

At the same time, external sovereignty implies that

a state is independent in its external relations and is capable of freely engaging in any actions in cyberspace, also outside its own territory, subject to restrictions under international law.

Here, Poland explicitly affirms the view that sovereignty is not only a principle of international law, but a right in itself, requiring States to respect the boundaries of sovereignty both offline and online.

The principle of sovereignty requires other states to refrain from any actions that would violate sovereignty, and in particular states are obliged not to knowingly make their territory available for the purposes of acts that would violate the rights of other states. Poland is of the opinion that in the event of a hostile operation conducted in cyberspace, causing serious adverse effects within the territory of a state, such actions should be considered a violation of the principle of sovereignty, irrespective of whether such effects are of kinetic nature or are limited to cyberspace.

This passage is notable for two reasons. First, Poland argues that cyber operations causing “serious adverse effects” within a State’s territory would qualify as violations of sovereignty. This would imply that non-consensual cyber operations conducted in foreign networks might not violate another State’s sovereignty if they are low-intensity and produce no effects, or only negligible ones. Such a position would reject the French penetration-based approach, whereby any penetration of a State’s ICT infrastructure qualifies as a violation of sovereignty, and be more in line with the Tallinn Manual 2.0 approach, which finds support with States such as the Netherlands or Germany. However, the examples Poland gives to show which cyber operations it would qualify as sovereignty violations do not fully square with the Manual’s approach either:

The violation of the principle of sovereignty may be exemplified by a conduct attributable to a third country that consists in interfering with the functioning of state organs, for instance by preventing the proper functioning of ICT networks, services or systems of public entities, or by a theft, erasure or public disclosure of data belonging to such entities.

The Tallinn Manual 2.0 and its supporters argue that a remote cyber operation may violate sovereignty in two situations: (1) where there is a significant infringement of the target State’s territorial integrity that causes damage or a serious loss of functionality; and (2) when there is an interference with or usurpation of a State’s inherently governmental functions. Poland’s example of “preventing the proper functioning of ICT networks” would fall into the first category as a “loss of functionality,” consistent with the view of the Tallinn experts, although they could not find consensus on the precise threshold. Here, Poland seems to position itself on the progressive side of the argument, as it does not specifically require that the loss of functionality involve the need to repair or replace physical components in order to find a violation of sovereignty.

At the same time, the example of “theft, erasure or public disclosure of data” belonging to State organs goes much further than “interference with or usurpation of inherently governmental functions,” as data theft usually is clandestine and does not impact the targeted State’s ability to exercise its governmental authority or produce serious adverse effects. This, in turn, brings Poland closer to the French penetration-based approach. 

There is, therefore, an unresolved tension in Poland’s position and further clarification would be highly welcome. By way of background, it should be added that in June 2021 Poland was hit by a hack-and-leak operation targeting, amongst others, Minister Michał Dworczyk, then head of the Office of the Prime Minister. The revelations from his e-mails, which have been published on Telegram and continue to be leaked to this day, shook the Polish political scene. Poland has subsequently attributed the hack to the “Ghostwriter” campaign allegedly conducted from the territory of the Russian Federation by hackers from the UNC1151 group. This may help to explain why the current administration in Poland would want to qualify such operations as violations of sovereignty.

Cyber due diligence

Second, Poland elegantly links the core of sovereignty (exclusive control and authority over territory) to the question of cyber due diligence, i.e., the obligation not to knowingly allow one’s territory to be used for internationally wrongful acts which violate the rights of other States. A logical consequence of the exclusive authority over territory is the obligation to prevent its malicious use. As such, cyber due diligence already exists as an obligation under international law as a corollary of sovereignty and does not require new and extensive State practice to emerge, contrary to the position of, for instance, the United Kingdom (here and here, at p. 117), the United States (here, at p. 141), and Israel (here).

In Poland’s view

States should exercise due care to ensure that the IT infrastructure located within their territory is not used for unauthorised actions targeted at third countries. The same applies to persons staying within the territory of the state. An assessment of whether the state exercised due care or not should be contingent upon its technological advancement, expertise/resources and knowledge about actions in cyberspace initiated within its territory.

Thus, Poland implicitly agrees that due diligence is an obligation of conduct, not of result, and that the extent of the obligation depends on the actual knowledge, capacities, and capabilities of the territorial State. This is again generally consistent with the approach taken by the Tallinn experts, who concluded that a rule of due diligence requires a territorial state to take “feasible” measures to terminate ongoing hostile cyber operations “that are causing serious adverse consequences for another state’s legal right.” It is also generally consistent with views expressed by Germany and Japan. Unfortunately, Poland does not address the question of whether there is a threshold of severity of the malicious cyber operation for the cyber due diligence obligation to be triggered, as has been proposed for instance by Japan.

Non-intervention

On the duty not to interfere in the internal affairs of other States, Poland places itself in the mainstream of opinion, affirming the International Court of Justice’s definition of intervention:

The threshold for considering a specific operation in cyberspace to be in breach of the principle of non-intervention is higher than in the case of deeming it solely a violation of the principle of sovereignty. To be in breach of international law, an intervention must include the element of coercion that aims at influencing the state’s decisions belonging to its domaine réservé, i.e. the area of state activity that remains its exclusive competence under the principle of sovereignty.

The position paper does not dive into a dogmatic discussion of the concept of coercion, simply noting that there is no universally applicable definition, but nevertheless gives some useful examples of what Poland would consider as constituting a prohibited intervention.

In particular, any action in cyberspace that would prevent the filing of tax returns online or any interference with ICT systems that would prevent a reliable and timely conduct of democratic elections would be a violation of international law. Similarly, depriving the parliament working remotely of the possibility of voting online to adopt a law or modifying the outcome of such voting would also be such a violation.

In furthering a line of argumentation first developed by Germany, the paper also notes that 

a wide-scale and targeted disinformation campaign may also contravene the principle of non-intervention, in particular when it results in civil unrest that requires specific responses on the part of the state.

In its national position, Germany argued that “spreading disinformation via the internet, may deliberately incite violent political upheaval, riots and/or civil strife in a foreign country, thereby significantly impeding the orderly conduct of an election and the casting of ballots [and thus] may be comparable in scale and effect to the support of insurgents and may hence be akin to coercion.” Germany thus used a scale-and-effects test to compare the results of online disinformation campaigns to other examples held to constitute coercion (support for insurgents) in order to apply the non-intervention rule. 

Poland offers a slightly different route, instead looking at the type of actions a State may be forced to undertake in response to the unrest caused by disinformation campaigns. If a disinformation campaign forces a State to undertake actions or make choices on matters falling within the domaine réservé, then these choices are no longer free, but coerced.

Use of force and self-defence

Like most other States, Poland affirms that cyber operations may, under certain circumstances, qualify as a use of force and even an armed attack, thereby triggering the right of self-defence. Consistent with the mainstream view, Poland applies an effects-based test to the question of when a cyber operation crosses the force threshold:

Perceiving a cyberattack as the use of force is supported by the possibility of it causing similar effects to those caused by a classic armed attack executed with the use of conventional weapons. […] An action in cyberspace that leads to: a permanent and significant damage of a power plant, a missile defence system deactivation or taking control over an aircraft or a passenger ship and causing an accident with significant effects may be considered the use of force.

On self-defence, the position paper stresses that the attacked State is not limited to actions in kind, i.e., to the cyber realm when responding to a cyber operation. Such a limitation would effectively deprive the State of its right to self-defence, if it were attacked by a State with a lesser degree of dependence on ICTs. 

The Polish position paper is very brief on international humanitarian law, simply noting that “[t]he requirements of international humanitarian law apply also to actions carried out in cyberspace during an armed conflict.”

State responsibility and third-party countermeasures

The last parts of the Polish position paper are devoted to issues relating to the law of State responsibility and contain perhaps the most interesting and novel, albeit contentious, proposition. First, Poland affirms the applicability to cyber operations of the customary rules on State responsibility as laid down in the International Law Commission’s (ILC) Articles. This applies both to the question of attribution and to countermeasures, which the paper discusses under the topic of “response options.”

Second, on the issue of countermeasures the paper follows the established position that any action taken as a countermeasure must in essence be limited to the non-performance of international obligations, be limited in time, aimed at inducing the responsible State to fulfil its obligations, be proportionate, and must not affect norms pertaining to fundamental human rights, IHL obligations, and peremptory norms. Sadly, Poland does not address the issue of procedural preconditions for the taking of countermeasures, especially whether a State is required to call upon the responsible State to cease its violating conduct and announce the intent to take countermeasures.

Next, the position paper addresses the issue of third-party countermeasures, that is countermeasures taken by a State other than the State which had been the target of the malicious cyber operation. It should be recalled that under the law of State responsibility measures of redress against violations of legal obligations are usually confined to bilateral action. Typically, only the injured State may, for example, institute legal proceedings against the responsible (perpetrator) State or take countermeasures. Whether and to what extent third States, which have not been injured by the primary violation, may also take action against the responsible State has long been a matter of intensive debate, with many States and academics saying that collective action should be limited to action through the UN Security Council. 

With regard to the question of collective action against cyber operations, it was Estonia who first proposed that “States which are not directly injured may apply countermeasures to support the state directly affected by the malicious cyber operation.” As explanation, it argued that it is important for States to “respond collectively to unlawful cyber operations where diplomatic action is insufficient, but no lawful recourse to use of force exists.” However, this view has been rejected by France and Canada due to a lack of sufficient State practice and opinio iuris to support this position, while New Zealand professed to be “open to the proposition.” 

In its national position Poland makes a short but powerful argument in support of the notion of third-party countermeasures:

the Republic of Poland expresses the view that the evolution of customary international law over the last two decades provides grounds for recognising that a state may take countermeasures in pursuit of general interest as well. In particular, the possibility of taking such measures materialise itself in response to states’ violations of peremptory norms, such as the prohibition of aggression.

In its 2001 Articles on State Responsibility, the ILC concluded that third States (other than the injured one) may invoke the responsibility of the perpetrator State if the obligation that has been breached is established for the protection of a collective interest of a group of States or is owed to the international community as a whole (Article 48). However, it left open whether this would also include the right to take countermeasures (Article 54). Poland now argues that, since the drafting of the 2001 ILC Articles, international law has evolved to a point where there is sufficient State practice and opinio iuris to argue for the emergence of a customary right to collective – or third-party – countermeasures in those cases where the norm that has been violated protects not only the rights of the injured State, but also the interests of a group of States or the international community as a whole. This argument is not new and has already been put forward in academic studies (for instance here and here), while others even argue that “no clear prohibition on collective countermeasures has crystallized to unequivocally preclude a state position, such as the one Estonia took” (here).

Furthermore, recent events such as Russia’s aggression against Ukraine have once again highlighted the limits of the United Nations as the forum for collective action to protect community interests. If the Security Council cannot fulfil its responsibility to protect international peace and security and to act against violators of peremptory norms, such as the prohibition of aggression, then it falls to collectives of like-minded States willing to protect the international rules-based order by way of, inter alia, countermeasures. The Polish position paper will therefore surely prove a valuable contribution to the debate surrounding third-party countermeasures in cyberspace and beyond.

Summing up, Poland’s national position on the application of international law to cyber operations is a welcome and necessary addition to the growing list of States’ views on this issue. It reflects the existing consensus on the use of force and non-intervention, strengthens the case of those States arguing that sovereignty is a rule, rather than only a principle, of international law, and offers strong arguments for the international discussion of third-party countermeasures, disinformation campaigns, and cyber due diligence.

IMAGE: via Getty Images. 

Regulating Artificial Intelligence Requires Balancing Rights, Innovation

"The United States should do what it has done for generations now when it comes to innovative thought and be a world leader ensuring AI supports society by providing the most benefits while producing the least possible harm."

Across the technology industry, artificial intelligence (AI) has boomed over the last year. Lensa went viral creating artistic avatar artwork generated from real-life photos. The OpenAI chatbot ChatGPT garnered praise as a revolutionary leap in generative AI with the ability to provide answers to complex questions in natural language text. Such innovations have ignited an outpouring of investments even as the tech sector continues to experience major losses in stock value along with massive job cuts. And there is no indication the development of these AI-powered capabilities will slow down from their record pace. Governments and corporations are projected to invest hundreds of billions of dollars in associated technologies globally in the next year.

With this unprecedented growth, however, communities have grown more concerned about the potential risks that accompany AI. Reports indicate ChatGPT is already being leveraged by criminals to perpetrate fraud against unsuspecting victims. The Lensa app generated explicit images of individuals without their consent. Georgetown Law’s Center on Privacy & Technology recently released a report highlighting long-held concerns about the use of face recognition in criminal investigations. Jurisdictions often lack the proper policies and procedures necessary to govern the use of face recognition evidence, and that has led to rights violations and wrongful arrests.

Existing Regulatory Frameworks

Faced with these concerns of privacy and safety, a patchwork of state and local regulation has begun to form in the United States. In 2020, Madison, Wisconsin, outright banned the use of facial recognition and associated computer vision AI algorithms by any entity. In 2021, the city of Baltimore banned the use of face recognition technology with a limited exception for some use by police. That ban expired in December 2022, as council members continue to determine how to best address the privacy and data collection concerns of the community. Three states – Illinois, Texas, and Washington – have all enacted strict laws pertaining to data and privacy with face recognition. Illinois’s Biometric Information Privacy Act, or BIPA, remains one of the country’s strictest sets of AI-associated privacy regulations, drawing regular challenges from tech companies over compliance issues. In recent years, a host of states from Alabama to California enacted legislation intended to regulate the use of AI. However, domestic regulation of AI remains a patchwork, with the U.S. Chamber of Commerce estimating that less than one-third of states have at least one law that specifically addresses the use of AI technologies. Most of the existing laws focus on privacy, data collection, data protection, and data sharing.

Federally, there currently is no comprehensive law that governs AI development or use. The American Data Privacy and Protection Act, which would have created a national standard and safeguards for personal information collection and addressed algorithmic bias, failed to pass last year, and divided party control of an arguably hyper-partisan landscape doesn’t immediately give rise to the comity needed to pass new legislation.

The international regulatory landscape is just as uneven, with the European Union and China taking action to protect rights. Last year, the Chinese government’s first major AI regulatory initiative focused on informed consent, in which companies had to inform users whether or not an algorithm was being used to display certain information about them and provide them an opportunity to opt out. The government has since focused on a variety of policy initiatives with different government entities aimed at impacting international development of AI technologies and governance. However, the Chinese government’s own use of AI in privacy-invasive ways remains a deep concern. The European Union’s AI Act is much broader, designed as an all-encompassing framework focused on specific levels of risk associated with the use of AI technology.

Thus far, it has mostly been up to the tech industry to self-regulate when it comes to AI, yet in a 2021 survey conducted by the consulting firm McKinsey, only fifty-six percent of responding companies reported having AI ethics policies and practices in place. Although countries are beginning to establish governance standards, without a unified approach or model guidance, industry will still be required to self-regulate to the requirements of the most arduous laws to which it is subject, while simultaneously attempting to understand how its business may be affected by pending global legislation.

Toward a Consistent Regulatory Approach

AI presents many potentially sweeping benefits through its ability to enhance the capabilities of current technology. When algorithms are properly trained, they can make unbiased decisions, reduce human error by making processes faster and more efficient, solve complex problems, and support a host of other potential improvements to society. Conversely, AI can present challenges and risks, from cyberattacks to the aforementioned support of criminal conduct, the potential misuse of autonomous weapons, general misuse and unforeseen consequences due to poorly or improperly trained models, and a host of other potential threats.

Given the disparities in regulation both domestically and internationally, and the inherent risks associated with its use, the United States must pass formal regulation that provides clear guidance for industry and proper protections for society while leaving room for continued innovation. The government will need to address concerns such as the protection of privacy rights and the use, aggregation, and security of personal data, while closing loopholes that could enable unforeseen abuses and misuse of associated technologies. Achieving this will take a comprehensive framework of measured policies that provide protections rather than draconian blanket prohibitions. Outright bans leave no room for industry to collaborate with governments and academia to find thoughtful, sustainable answers to ongoing concerns. Additionally, companies will likely avoid doing business in jurisdictions that prohibit all use, forgoing the investments, infrastructure, and training that will be crucial for the American workforce moving forward. Finally, setting proper regulations on the development and use of AI will make the United States safer. It will be paramount to ensure that all AI technologies used in the country meet baseline safety standards and protocols set by agencies such as the National Institute of Standards and Technology, the Department of Defense, and the Department of Homeland Security as they relate to cybersecurity, the protection of the Internet of Things, the amplification of misinformation and disinformation online, and other threats that may disrupt security operations.

Drafting and passing a legislative framework will be difficult in this Congress, but not necessarily impossible, as legislators on both sides of the aisle have indicated strong interest in, and often concerns about, the capabilities and enhancements AI presents. The Biden administration has provided a model blueprint for an AI Bill of Rights that could serve as a good foundation for federal and state officials to build on. The AI Bill of Rights focuses on five key principles – Safe and Effective Systems, Algorithmic Discrimination Protections, Data Privacy, Notice and Explanation, and Human Alternatives, Consideration, and Fallback – each paired with its own technical guidance.

U.S. legislators could also look abroad for models. The Council of the EU, which represents member state governments, adopted a common position (or "general approach") on the bloc's AI Act. Like the AI Bill of Rights, the proposed legislation aims to balance the rights of citizens against continued growth and innovation in the sector. Both documents seek to reduce and prevent unsafe practices while allowing industry to succeed and governments to become more efficient. The Council's text takes a risk-assessment-based approach while spelling out specific prohibitions, establishes an AI Board for oversight, and provides for conformity assessments, a governance framework, and enforcement and penalties for violations. The European Parliament has its own separate legislative process, and its version of the AI Act is in committee. While the Council's text takes a more nuanced, risk-based approach to governing the technology, the current Parliament draft contains many prohibitions of AI technology, including a blanket ban on "remote biometric systems." The two bodies will enter negotiations known as a trilogue – similar to a conference committee in Congress – in hopes of reaching agreement on the proposed legislation by the end of this year.

Both the AI Bill of Rights and the Council's AI Act text could serve as good starting points for comprehensive American legislation, as both documents seek to strike the challenging balance between protection and innovation. Interested parties will keep a keen eye on the legislative process in the EU, as the two competing approaches – sweeping bans versus risk mitigation – will have to be resolved during the trilogue. The resulting legislation could set a new standard for how nations address these combined concerns.

If most legislative efforts stall at the federal level, AI regulation could still present a rare opportunity for both parties to work with stakeholders at the state and local levels in a win for bipartisanship. Government and the tech industry can work together with community leaders and subject matter experts to shape AI regulation intelligently so that it doesn't have a chilling effect on innovation or unforeseen consequences for positive uses of the technology. In the meantime, industry leaders should provide reasonable transparency about company actions in the absence of stronger regulation to help put government and societal concerns at ease.

Government officials must recognize that the AI industry has led the development of this technology and has long endeavored to self-regulate. I've seen this personally as a member of the industry in a Government Affairs and Public Policy position. Working with companies to find reasonable protections for privacy and other concerns is paramount to maintaining trust and safety among society, government, and industry, and such a collaborative effort ensures that the best possible practices are established and healthy, reasonable safeguards are put in place. Without such an effort, society runs the risk of creating policies that allow unconscious bias within algorithms, loopholes within otherwise acceptable business cases that permit abuse and misuse by third-party actors, and other negative unforeseen consequences of AI technology. These failures would erode societal trust in the technology as well as in the institutions meant to serve and protect it.

All interested parties are working toward the same goal: the protection of the rights and safety of American citizens and allies. Clear frameworks exist as models for congressional legislation that can provide much-needed guidance and regulation for the tech industry as the world witnesses an evolutionary leap in AI technologies. 2023 could prove to be a major inflection point for the policy, law, and regulation that govern this industry. The U.S. government must also work with communities and industry leaders to draft protections that won't have a chilling effect on innovation. This is a historic opportunity to shape the future of the world through a pivotal and powerful technology. The United States should do what it has done for generations when it comes to innovative thought: be a world leader in ensuring AI supports society by providing the most benefits while producing the least possible harm.

IMAGE: Futuristic digital circuit background. (Getty Images)

The post Regulating Artificial Intelligence Requires Balancing Rights, Innovation appeared first on Just Security.

UN Counterterrorism and Technology: What Role for Human Rights in Security? https://www.justsecurity.org/84246/un-counterterrorism-and-technology-what-role-for-human-rights-in-security/?utm_source=rss&utm_medium=rss&utm_campaign=un-counterterrorism-and-technology-what-role-for-human-rights-in-security Wed, 23 Nov 2022 15:31:04 +0000 https://www.justsecurity.org/?p=84246 A key UN committee opened its doors to civil society and experts, but the resulting Delhi Declaration contains little of that input thus far.

The first meeting of the United Nations Security Council Counter-Terrorism Committee (CTC) held outside of U.N. headquarters in New York since 2015 marked important advances in engaging with civil society and experts who have questioned the embrace of counterterrorism approaches that too often backfire or result in human rights violations. But the Delhi Declaration that emerged reflected little of that input. Prepared mostly in advance, the meeting’s outcome document merely confirmed the trend towards further expansion of the U.N. counterterrorism agenda related to the use of new and emerging technologies, with little attention to the abuses committed by governments in the process. 

The Oct. 28-29 meeting, conducted in India, was convened to discuss a specific element of the fight against terrorism: “Countering the use of new and emerging technologies for terrorist purposes.” The Security Council has sought for more than 20 years to address the perceived exploitation of information and communications technology (ICT) and related technologies for terrorist purposes. Adopted in the wake of the 9/11 attacks of 2001, Security Council resolution 1373 referred from the beginning to ICT and the abuse of communications technologies by terrorists.

Most notably in the last five years, the Security Council has adopted a series of resolutions under Chapter VII of the U.N. Charter, imposing legally binding obligations on all U.N. member states to introduce biometrics technologies at borders for counterterrorism purposes; to develop capabilities to collect, process and analyze travelers' data, such as Advance Passenger Information (API) and passenger name record (PNR) data; and to take action to prevent, investigate, and prosecute terrorism financing.

These resolutions have led to a significant expansion of surveillance powers by member states, often introduced without an adequate domestic legal framework and human rights safeguards. For example, Privacy International (Tomaso’s organization) has documented the human rights abuses stemming from biometric data identification systems across the globe since the Security Council adopted resolution 2396 in 2017. In its December 2021 analytical briefing on biometrics and counter-terrorism, the U.N. Counter-terrorism Committee Executive Directorate (CTED) noted how “the use of biometrics for counter-terrorism purposes – notably in the context of border management and security – has become increasingly widespread.” CTED’s own assessments show that many states “still lack sufficient legal and regulatory frameworks, data management and processing protocols, risk and impact assessment practices, and rigorous access controls and records for technology-based systems, including those based on AI.”

In addition, the Security Council has sought to address incitement to terrorism, including through the use of ICTs. Resolution 1624 (2005) contains a clear obligation for States to prohibit incitement to commit terrorist acts. At least in that case, it also contains a strong human rights clause, including an explicit reference to the right to freedom of expression as reflected in Article 19 of the Universal Declaration of Human Rights and in the International Covenant on Civil and Political Rights. In 2010, the Security Council turned its attention for the first time to the use of the internet, including to incite terrorism as well as to finance, plan, and prepare terrorist actions.

These resolutions then served as building blocks for several subsequent resolutions, which expanded attempts to regulate terrorist narratives and instructed member States on how to counter the use of the internet, other ICTs, and other emerging technologies for terrorist purposes. ARTICLE 19 (Anna's organization) has documented how these broad, sweeping provisions have led governments to adopt laws criminalizing "extremist" speech that does not amount to incitement; to block, filter, or ban websites; and to remove or restrict certain online content in a manner that violates freedom of expression and the right to privacy.

The CTC’s special meeting in India and the preparatory technical meetings that preceded it proved no different in the committee’s approach. The focus of the debate in most of the panels was on the real and perceived threats of abuses of ICT by terrorists; human rights got short shrift, as did even legitimate questions about the credibility of evidence cited for the alleged terrorist abuses.

A notable exception to this trend was a civil society roundtable organized by CTED on Oct. 12, in which participants pointed to some of the most significant human rights concerns raised by government counterterrorism measures. At the special meeting itself, the CTC also for the first time allowed the U.N. Special Rapporteur on Counter-Terrorism and Human Rights to address one of its formal meetings and invited civil society representatives to participate. This allowed CSOs, including ours (Privacy International and ARTICLE 19, respectively), to deliver their messages directly to members of the CTC, other member States, CTED, the U.N. Office of Counter-Terrorism, and other U.N. representatives. This level of access is highly welcomed, and it hopefully will set a precedent for future engagement between the CTC and civil society.

Key Takeaways and Shortcomings of the Declaration

In terms of substance, the Delhi Declaration covers three topics in relatively broad fashion: “Countering terrorist exploitation of ICT and emerging technologies,” “Threats and opportunities related to new payment technologies and fundraising methods,” and “Threats posed by misuse of unmanned aerial systems (UAS) by terrorists.”

Of these three topics, the abuse of ICT and emerging technologies by terrorists is the least defined. It could be interpreted to cover a vast range of measures — anything from content moderation to social media monitoring, from limiting the use of encryption to resorting to hacking for surveillance – all in the name of countering terrorism. It essentially encourages governments to introduce counterterrorism measures with a view to addressing abuses of ICT that will likely end up undermining human rights, particularly the right to privacy and the right to freedom of expression.

Ironically, such abuses ultimately will undermine national security itself. In our intervention at the special session, Privacy International expressed its concerns at the expansion of new technologies employed for the surveillance of public spaces, whether online or offline, in the name of countering terrorism. It noted that social media monitoring is often justified as a form of content moderation for counterterrorism purposes, but that it is also abused to surveil peaceful assemblies and profile people’s social conduct. Privacy International also noted how attempts by governments to access encrypted communications to identify potential terrorist threats risk introducing vulnerabilities into the systems; the result could be indiscriminate surveillance of digital communications, compromising the privacy and security of potentially all users of digital communication services.

ARTICLE 19 offered eight concrete recommendations to the CTC and the international community at large on how to ensure human rights safeguards and the rule of law while countering terrorism. These include key tenets for a successful fight against terrorism: 1)  security without rights is meaningless and rights inherently advance security; and 2) international human rights law dictates that the same rights apply online as offline. States can only restrict free speech on the basis of national security if the principles of legality, legitimacy, proportionality, and necessity have all been met. In terms of addressing terrorist content online, States should take a careful approach when regulating how social media platforms should undertake content moderation. State regulation should not prescribe what content to restrict, but rather focus platform accountability on processes, access to effective remedies, transparency, and human rights protections, including through impact assessments.

Unsurprisingly, given that the text was negotiated by the 15 CTC member States behind closed doors and well ahead of the meeting in India, the Delhi Declaration does not seek to address these or other concerns raised by civil society organizations and human rights experts.

The declaration does include the reaffirmation that “Member States must ensure that any measures taken to counter terrorism, including the use of new and emerging technologies for terrorist purposes, respect the Charter of the United Nations and comply with their obligations under international law, including international human rights law, international humanitarian law and international refugee law, as applicable” (paragraph 14 of the Delhi Declaration). However, it fails to refer to the principles of legality, legitimacy, necessity and proportionality, non-discrimination, accountability, transparency and effective redress for human rights violations. All of those issues have been raised consistently by human rights experts and civil society.

Little Guidance on Rights Obligations

The Delhi Declaration offers hardly any guidance on how States' counterterrorism measures should comply with existing human rights obligations. And it offers only a vague blueprint for the next steps the U.N. CTC and CTED will take in this field. CTED is expected to play a leading role in the follow-up to the declaration, including in the identification of trends, the development of threat analysis, the collation of good practices, and the production of a gap analysis on the capacities of member States to counter the use of new and emerging technologies for terrorist purposes. In doing so, CTED is expected "to deepen its engagement and cooperation with civil society, including women and women's organizations."

As many civil society organizations noted during their presentations, there cannot be security without putting human rights at the center of any and all counterterrorism efforts. And counterterrorism policies or principles cannot be effective if they are developed in a vacuum, without the meaningful participation of human rights experts and diverse and independent civil society organizations. Failing to do so will lead to the criminalization of speech not amounting to incitement, based on the vagaries of potentially abusive governments, and it will spur content restrictions that violate freedom of expression.

It will not result in effective counterterrorism, and it is highly likely to lead to further erosion of human rights and the rule of law. The planned follow-up to the Delhi Declaration, in particular the drafting of “non-binding guiding principles” by the committee on ICT and terrorism, offers an important opportunity for the U.N. CTC and CTED to prove their commitment to meaningfully include civil society organizations and human rights experts in the development of counterterrorism policies.

IMAGE: Large surveillance desk with someone watching a wall of monitors. (Photo: Getty Images)

The post UN Counterterrorism and Technology: What Role for Human Rights in Security? appeared first on Just Security.

More Turbulence Ahead for Twitter as the EU’s Digital Services Act Tests Musk’s Vision https://www.justsecurity.org/84234/more-turbulence-ahead-for-twitter-as-the-eus-digital-services-act-tests-musks-vision/?utm_source=rss&utm_medium=rss&utm_campaign=more-turbulence-ahead-for-twitter-as-the-eus-digital-services-act-tests-musks-vision Wed, 23 Nov 2022 14:06:37 +0000 https://www.justsecurity.org/?p=84234 Much depends on how social media platforms interpret their obligations under the new regulation, and how EU authorities enforce it.

Elon Musk’s promise to European regulators that Twitter would follow the rules set forth in the European Union’s new Digital Services Act (DSA) will finally be put to the test. The first few weeks of Musk’s reign at the company suggest he could be in for a brawl.

Within hours of Musk taking over, racist language previously blocked on the platform surged. The billionaire praised targeted ads, which is the focus of an EU crackdown to better protect users from pervasive online surveillance. He personally retweeted misinformation about the assault and attempted kidnapping that left U.S. House Speaker Nancy Pelosi’s husband gravely injured. And he has now slashed Twitter’s workforce by half and jettisoned the majority of the company’s thousands of contractors, who traditionally handle content moderation.

The DSA, which came into force on Nov. 16, is a bold set of sweeping regulations about online-content governance and responsibility for digital services that make Twitter, Facebook, and other platforms subject in many ways to the European Commission and national authorities. These bodies will oversee compliance and enforcement and seem determined to make the internet a fairer place by reining in the power of Big Tech. Many expect this EU flagship legislation to be a new gold standard for other regulators in the world.

Concerns for Safety and Respect

The DSA’s content-moderation rules could be a real challenge for Musk. While the DSA doesn’t tell social media platforms what speech they can and can’t publish, it sets up new standards for terms of service and regulates the process of content moderation to make it more transparent for both users and regulators. The DSA is as much animated by concerns for safety and the respect of fundamental rights as it is by market interest. It draws attention to the impact of algorithmic decision-making (and there’s a fine line between Twitter’s new pay-to-play subscription model and algorithmic discrimination), and gives stronger teeth to EU codes of conduct, such as the Code of Practice on Disinformation, which Twitter has signed onto.

And it does require platforms to remove illegal content. EU regulators say the DSA will ensure that whatever is illegal offline will also be illegal online. But what is considered illegal varies among EU member states. For example, an offensive statement flagged as illegal in Austria might be perfectly legal in France. It is true that the DSA took some steps to avoid imposing the law of one country on all other countries, but this fragmentation across the EU has required, and will continue to require, significant compliance efforts by platforms. There are also new due-diligence obligations, some of which could increase the chance that lawful speech will be swept up as platforms err on the side of removal to avoid fines.

Perhaps working in Musk’s favor, the DSA does not require platforms to remove all harmful content, such as “awful but lawful” hate speech. In fact, the DSA sets some limits on what can be taken down, which means Musk may find it easier to remove as little as possible. But platforms are obliged to act responsibly and work together with trusted flaggers and public authorities to ensure that their services are not misused for illegal activities. And while platform operators have a lot of leeway to set out their own speech standards, those standards must be clear and unambiguous, which is something Musk vowed to do anyway.

Sometimes it's hard to take Musk at his word. He said in late October that no major content decisions or account reinstatements would occur until a content moderation council comprised of "widely diverse viewpoints" was formed. He hasn't announced that such a council exists, but on Nov. 20 he reinstated several people kicked off the platform for hate speech and misinformation, including former President Donald Trump – who was allowed back on Twitter after Musk polled users – and Kanye West.

Twitter’s trust-and-safety and content-moderation teams have taken huge hits since Musk took over as “Chief Twit,” as he calls himself. Half of Twitter’s 7,500 employees were let go, with trust and safety departments hit the hardest. Twitter’s top trust-and-safety, human rights, and compliance officers left the company, as have some EU policy leads. That Musk fired the entire human rights team may have a boomerang effect. That team was tasked with ensuring Twitter adheres to U.N. principles establishing corporate responsibility to protect human rights.

Media outlets reported that content-moderation staffers at the company were blocked from accessing their enforcement tools. Meanwhile, a second wave of global job cuts hit 4,400 of Twitter’s 5,500 outside contractors last week, and many of them had worked as content moderators battling misinformation on the platform in the U.S. and abroad.

Far-Reaching Obligations for Large Platforms

All this raises questions about whether Twitter has the technical and policy muscle to comply with and implement DSA obligations, especially if Twitter is designated as a "very large online platform," or VLOP (more than 45 million users), under the DSA. If so designated, the company will have to comply with far-reaching obligations and responsibly tackle systemic risks and abuse on its platform. It will be held accountable through independent annual compliance audits and public scrutiny, and it must provide a public repository of the online ads it displayed in the past year. It's unlikely that Twitter will be able to meet these commitments if it doesn't have enough qualified staff to understand the human rights impact of its operations.

What’s more, Musk is a self-described “free speech absolutist,” who said he acquired Twitter because civilization needs “a common digital town square.” He has criticized Twitter’s content-moderation policies and said he opposes “censorship” that “goes far beyond the law.” Yet, after widespread criticism, he also said Twitter can’t become a “free-for-all hellscape” where anything can be said with no consequences.

Sadly, Twitter descended into pretty much that kind of landscape shortly after Musk took over. A surge in racist slurs appeared on the platform in the first few days, while a revised paid-subscription program that gives users a blue check mark – which used to denote that the account holder's identity had been verified – was reintroduced without the verification step. Anyone paying $7.99 could buy a blue check, and many who did set up fake accounts impersonating people and companies and tweeted false information, from an Eli Lilly account announcing its insulin product was now free to multiple accounts impersonating and parodying Musk himself.

Musk paused the rollout, but the damage had been done. Margrethe Vestager, executive vice president of the European Commission, told CNBC that such a practice indicates “your business model is fundamentally flawed.”

If Twitter is considered a "very large online platform," it will need to assess all sorts of risks – including disinformation, discrimination, and the degradation of civic discourse stemming from the use of the service – and take mitigating actions to reduce societal harm. If Musk succeeds in his plans to increase Twitter's user base in the next few years, Twitter will certainly face tougher scrutiny under the DSA and could be required to reverse course and staff up its content moderation department.

This is also the opinion of Thierry Breton. Commenting on the reduction in moderators, the EU’s internal market commissioner warned Musk that “he will have to increase them in Europe.”

“He will have to open his algorithms. We will have control, we will have access, people will no longer be able to say rubbish,” Breton said.

Restricting User Information for Ads

Targeted advertising is another area where Musk's plans could clash with DSA obligations. Twitter's ad business is its primary source of revenue, and with the company losing money, Musk is keen to boost ad revenue. The DSA restricts platforms from using sensitive user information, such as ethnicity or sexual orientation, for advertising purposes: advertisements can no longer be targeted based on such data.

More broadly, the DSA increases transparency about the ads users see on their feeds: platforms must clearly differentiate content from advertising; ads must be labeled accordingly. It’s hard to see how all of these requirements will square perfectly with Musk’s plans. He’s told advertisers that what’s needed on Twitter are ads that are as relevant as possible to users’ needs. Highly relevant ads, he says, will serve as “actual content.”

One of the most concerning things the DSA does not do is fully protect anonymous speech. Provisions give government authorities alarming powers to flag controversial content and to uncover data about anonymous speakers — and everyone else — without adequate procedural safeguards. Pseudonymity and anonymity are essential to protecting users who may have opinions, identities, or interests that do not align with those in power.

Marginalized groups and human rights defenders may be in grave danger if those in power are able to discover their true identities. Musk has vowed to "authenticate all real humans" on Twitter; unfortunately, the DSA does nothing to help them if Musk follows through on his promise.

The DSA is an important tool to make the internet a fairer place, and it’s going to cause turbulence for Twitter as Musk seeks to carry out his vision for the platform. Much will depend on how social media platforms interpret their obligations under the DSA, and how EU authorities enforce the regulation. Breton, the internal market commissioner, vowed that Twitter “will fly by our rules.”  For Musk, the seatbelt sign is on. It’s going to be a bumpy ride.

IMAGE: The Twitter account of Elon Musk is displayed on a smartphone with a Twitter logo in the background on November 21, 2022. (Photo by Nathan Stirk/Getty Images)

The post More Turbulence Ahead for Twitter as the EU’s Digital Services Act Tests Musk’s Vision appeared first on Just Security.

Encryption Helps Ukrainians Resist Russia’s Invasion, but a European Plan Threatens the Underlying Trust Any Tech User Needs https://www.justsecurity.org/84156/encryption-helps-ukrainians-resist-russias-invasion-but-a-european-plan-threatens-the-underlying-trust-any-tech-user-needs/?utm_source=rss&utm_medium=rss&utm_campaign=encryption-helps-ukrainians-resist-russias-invasion-but-a-european-plan-threatens-the-underlying-trust-any-tech-user-needs Thu, 17 Nov 2022 13:57:02 +0000 https://www.justsecurity.org/?p=84156 The intended crime-fighting proposals could force encrypted-messaging services to abandon basic confidentiality or pull out of the market.

Western military assistance to Ukraine has clearly been an essential part of its ability to defend itself against Russian attacks. Another critical factor, highlighted during a series of recent transatlantic dialogues organized by the Munich Security Conference, has been the capability of widely available technology that has empowered everyday Ukrainians to participate in a truly whole-of-nation effort to repel the Russian invaders. After Russia began its full-scale assault in February, downloads of encrypted messaging apps such as Signal surged, as users looked for safe channels to communicate. Major technology companies responded by further expanding their use of encrypted technologies.

That is why it is worrying to also learn that European governments are considering proposals that, while technically not affecting encryption itself, could unintentionally undermine the very reason the Ukrainian people – and others around the world, including endangered human rights defenders — put their trust in that technology: the ability to keep their communications confidential. These policies could force some providers of encrypted messaging services to abandon the basic guarantee of confidentiality or pull out of the market.

Ukraine has a vibrant tech sector and a tech savvy population, with many highly skilled technology workers in Kyiv and other major cities employed by domestic companies or Silicon Valley giants. Immediately upon the Russian invasion, this community sprang to life. Encrypted communication tools empowered Ukrainians to combat disinformation, organize relief efforts, and protect evacuees. Some apps helped protect citizens and soldiers, dramatically improving Ukraine’s Cold War-era warning system to alert mobile devices of incoming missile attacks. Others allowed anyone to monitor Russian troop movements and send the information to the authorities, crowdsourcing military intelligence. Without internet communication and encrypted messaging and tracking apps, it is unlikely the Ukrainians would have been able to resist the Russian war machine as effectively as they have.

At the same time, the European Union is pressing ahead with a mechanism to combat child sexual abuse and exploitation online that has prompted strong criticism from privacy experts. The proposed legislation would first assess which technology companies are at high risk of being used to transmit or store child sexual abuse material, and then establish an obligation that such material be detected and reported. The categories of material would be known images, new images, and grooming activities or soliciting children for sexual abuse, meaning it covers both pictures and text. The proposal does not mandate that providers of communications and hosting services use specific technical approaches, and it allows significant discretion in how the law would be implemented and how the obligation to detect this material is met.

Potential Serious Privacy Concerns

The juxtaposition of the critical role encryption has played in the war in Ukraine with the concern about this proposal’s potential unintended consequences was another major takeaway from these discussions among transatlantic security officials and experts. For encrypted communications and storage capabilities, the means to detect this material could raise serious privacy concerns for all users, even though it would not affect the actual encryption.

Even though the EU proposal does not specify exactly how the material would be detected, it is likely that some form of content scanning will be used. In the context of child sexual abuse material, images are compared to a central database of such known material through a process called hashing or fingerprinting. A hash – in effect a compact digital fingerprint derived from the content at issue – is compared to hashes in the database, which under the EU proposal would be maintained by a new European Center to Prevent and Counter Child Sexual Abuse that would be independent of the EU yet funded by it. When notified of a match, the European Center would then determine whether to transmit it to law enforcement for possible further action. It is unclear what method would be used to detect grooming activities, but it would likely require scanning the contents of messages for keywords.
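
To make the mechanics concrete, here is a minimal sketch of hash-based matching, with the hash list, function names, and flagged bytes all hypothetical. It uses an exact cryptographic digest for simplicity; production systems such as PhotoDNA instead use proprietary perceptual hashes that tolerate resizing and re-encoding.

```python
import hashlib

# Hypothetical database of digests of known material. Under the EU proposal,
# such a list would be centrally maintained, not hard-coded like this.
KNOWN_HASHES = {hashlib.sha256(b"previously identified image bytes").hexdigest()}

def fingerprint(content: bytes) -> str:
    """Reduce content to a fixed-length digest: its hash, or fingerprint."""
    return hashlib.sha256(content).hexdigest()

def scan_content(content: bytes) -> bool:
    """Compare the content's fingerprint against the database of known hashes."""
    return fingerprint(content) in KNOWN_HASHES

# A match would be forwarded for review, not acted on automatically.
if scan_content(b"previously identified image bytes"):
    print("hash match: forward for review")
```

One property of the exact digest above: changing a single pixel changes the hash completely, which is why real deployments rely on perceptual hashing instead. The policy debate, however, turns less on the hash function than on where the comparison runs.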

Of critical significance is where this scanning takes place. One way the EU legislation could be implemented is by scanning the content on a provider’s servers. Users agree, as they do now when they accept a provider’s terms of service, to only transmit or store certain types of material on a provider’s servers, so providers can scan material on their servers for illegal content or that which violates their terms of service. This is how most of this type of scanning is done currently, whether for comparatively innocuous material like email spam or existing voluntary efforts to detect child sexual abuse material. An example of this is Microsoft’s PhotoDNA, which scans hashes of user uploads against the hash database of known child sexual abuse images maintained by the National Center for Missing and Exploited Children.

For encrypted communications and storage, however, this option of scanning on the server is not available, because the content is encrypted and the provider cannot access it. If any content scanning is to occur, it must be done on each individual user's device, in a process called client-side scanning. The method is the same – hashes are compared to those in a database – but the location of the scan shifts to the user's device. Such a potentially generalized search of all content that takes place on a user's device is what triggers the concerns from privacy advocates and others.
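
To see why the location matters, consider a minimal sketch of the client-side sequence, with every function a hypothetical stand-in (in particular, `encrypt` below is a trivial placeholder, not a real cipher): the scan inspects the plaintext on the device before encryption, so the transport and the provider's servers still only ever see ciphertext.

```python
import hashlib

KNOWN_HASHES = {hashlib.sha256(b"previously identified image bytes").hexdigest()}

def matches_known_material(plaintext: bytes) -> bool:
    # Same hash comparison as server-side scanning; only the location changes.
    return hashlib.sha256(plaintext).hexdigest() in KNOWN_HASHES

def encrypt(plaintext: bytes) -> bytes:
    # Trivial placeholder standing in for real end-to-end encryption.
    return bytes(reversed(plaintext))

def report_match() -> None:
    # In a deployed system this would notify the provider or a central body.
    print("match reported from the user's device")

def send_message(plaintext: bytes) -> bytes:
    # Client-side scanning runs on the device BEFORE the message is encrypted,
    # which is why critics say it undermines confidentiality even though the
    # encryption itself is untouched.
    if matches_known_material(plaintext):
        report_match()
    return encrypt(plaintext)

ciphertext = send_message(b"hello")  # scanned in the clear, then encrypted
```

This sequencing is the heart of the dispute: the encryption algorithm is unchanged, but the content is no longer confidential to the sender and recipient alone.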

U.K. Pursues This Route Too

It's not just the EU going in this direction: the U.K.'s Online Safety Bill makes a similar, yet broader, proposal that is primarily focused on protecting children from sexual abuse online but also covers detection of content related to terrorism, threats, stalking, drugs, weapons sales, and disclosure of private sexual images. This bill also does not specify a defined method of detecting this material, but U.K. authorities appear to support client-side scanning after two of Britain's top cyber officials wrote a paper in favor of it.

A growing cohort of encryption experts, computer scientists, human rights advocates, privacy experts, and civil libertarians are warning about serious privacy concerns in addition to the uncertain effectiveness of client-side scanning.

The EU’s own European Data Protection Supervisor and the European Data Protection Board issued a joint statement about the EU proposal, stating that, “measures permitting the public authorities to have access on a generalised (sic) basis to the content of a communication in order to detect solicitation of children are more likely to affect the essence of the rights guaranteed in Articles 7 and 8 of the Charter” of Fundamental Rights of the European Union.

A report on the right to privacy in the digital age by the United Nations High Commissioner for Human Rights, released in August, warned that “mandating general client side scanning would inevitably affect everyone using modern means of communications,” and that “frequent false positives cannot be avoided.” It concluded that client-side scanning “would constitute a paradigm shift that raises a host of serious problems with potentially dire consequences.”

The High Commissioner also specifically identified risks to human rights defenders if the confidentiality of encryption is undermined: “In specific instances, journalists and human rights defenders cannot do their work without the protection of robust encryption, shielding their sources and sheltering them from the powerful actors under investigation.”

Writing earlier this year about the U.K. proposal, a group of 45 organizations and experts highlighted the “possibility of similar approaches being taken to infiltrate private communications channels for other purposes both in the UK and around the world, including to further violate human rights.”

Law Professor Jeffrey Vagle wrote in Just Security last year, raising important questions about client-side scanning and noting that forced implementation would essentially require everyone to relinquish control over what technologies have access to sensitive data. Vagle argued that, to maintain trust, users should be able to choose who has access to their data and what technologies have access to their devices, and that, in his view, maintaining trust "means ensuring users retain control over the devices they own."

A “General Mass-Surveillance Tool?”

Additionally, more than a dozen prominent computer scientists, privacy experts, and academics wrote a detailed paper in October 2021 identifying the risks that client-side scanning could be abused as “a general mass-surveillance tool,” and that implementing it “would be much more privacy invasive than previous proposals.” Their conclusion:

“Plainly put, [client-side scanning] is a dangerous technology. Even if deployed initially to scan for child sex-abuse material, content that is clearly illegal, there would be enormous pressure to expand its scope. We would then be hard-pressed to find any way to resist its expansion or to control abuse of the system.”

Will Cathcart, the chief executive officer of encrypted messaging app WhatsApp, told the BBC's Tech Tent podcast that WhatsApp is a global product and that the company would not change its technology so that it could be used under any new regulation. If either the EU or U.K. proposal were to become law, the result could be that those governments choose to block the app's use. As Cathcart says, WhatsApp offers "a global service and some countries choose to block it. I hope that never happens in a liberal democracy."

We are currently witnessing a significant real-world example of the power of widely available commercial encryption technology helping a democratic nation defend itself against a more powerful invader. Western democracies should support effective means of secure communications, particularly when used to preserve democratic institutions. Policymakers should guard against the unintended consequences of well-meaning proposals that would require embedding tools today that could be used tomorrow in ways that go far beyond their original purpose. The ability to choose tools like encryption to keep communications confidential gives individuals confidence that they can take steps to control the technology they use. It is a key driver of a future digital ecosystem based on trust.

IMAGE: (Photo by NICOLAS ASFOURI/AFP via Getty Images)

The post Encryption Helps Ukrainians Resist Russia’s Invasion, but a European Plan Threatens the Underlying Trust Any Tech User Needs appeared first on Just Security.

Emerging Tech Has a Front-Row Seat at India-Hosted UN Counterterrorism Meeting. What About Human Rights? https://www.justsecurity.org/83837/emerging-tech-has-a-front-row-seat-at-india-hosted-un-counterterrorism-meeting-what-about-human-rights/?utm_source=rss&utm_medium=rss&utm_campaign=emerging-tech-has-a-front-row-seat-at-india-hosted-un-counterterrorism-meeting-what-about-human-rights Fri, 28 Oct 2022 10:02:54 +0000 https://www.justsecurity.org/?p=83837 Hype and untested promises have accelerated deployment of artificial intelligence, biometrics, and more, in the dubious name of security.

A special meeting of the U.N. Security Council Counter-Terrorism Committee in India this week marks a disturbing new twist on an already stained record of global security initiatives. Civil society organizations worldwide – especially in the Middle East/North Africa – as well as various U.N. human rights mechanisms and even Secretary-General António Guterres have decried the long years of harm done in the name of countering terrorism and ensuring security more broadly. Abuses accelerated in the past few years, especially as governments used the cover of responding to the COVID-19 pandemic to impose a gamut of emergency measures and repressive regulations.

Now, the U.N. and member States are increasingly focused on the role of emerging technology such as artificial intelligence (AI), biometric systems, and information and communications technology (ICT) to facilitate terrorism. Moreover, they are using assumptions about such threats to justify calls for broad and unrestricted counterterrorism responses, including the use of the very same technologies that are ripe for abuse by those same authorities.

So-called AI systems have become today's shiny new toys for preventing terrorist attacks. Uses range from tracking individuals and predicting their actions to moderating terrorist-related content online. In other words, a person's movements, payments, behaviors, and social networks may be monitored in hopes of predicting future terrorist activity.

The premise is that terrorists exploit technology such as new payment mechanisms and fundraising methods, for example, and must therefore be stopped at all costs. Yet the U.N. and member States provide little evidence for how terrorists are using technology in practice, and a nuanced understanding of the threat is still lacking.

The U.N. Security Council has played a significant role in accelerating the use of technology for counterterrorism purposes. In Security Council Resolution 2396 (2017), member States were instructed to collect biometric data and encouraged to develop and use biometric systems. Other resolutions that followed further emphasized the need for increased focus and collaboration on preventing the misuse of technologies, including emerging ones, for terrorist acts. Of note is the recent and unanimously approved Security Council Resolution 2617 (2021), which extended the mandate of the Counter-Terrorism Committee Executive Directorate (CTED) and highlighted terrorism financing and information and communications technologies. In June 2021, member states also flagged technology as a key concern when reviewing the U.N. Global Counter-Terrorism Strategy, based on U.N. General Assembly Resolution 60/288 (2006).

Efficacy vs. Harm

This week’s meeting in Mumbai and New Delhi is focused on precisely this issue of new and emerging technologies in terrorism and counterterrorism. This is certainly a cause for alarm, not least given the ongoing attack on human rights defenders and civil society organizations in India in the name of countering terrorism.  While there is little to no evidence of the efficacy of emerging technologies for preventing terrorism, there is evidence of harm to the rights and democratic freedoms of individuals and groups in their own countries.

Soon-to-be-published research undertaken by my organization, the European Center for Not-for-Profit Law (ECNL), in partnership with seven civil society organizations based in the Global South, illustrates how governments use technology under the guise of counterterrorism to suppress legitimate dissent and infringe upon activists' freedoms of speech, assembly, and privacy. This is especially true in countries where individuals, organizations, and civil liberties such as freedom of expression and association are already under attack.

Official justifications for the use of biometric technologies, for instance, rely on tech developers’ claims that they can identify perpetrators of terrorist offences. In 2021, Privacy International investigated such use in Iraq, Afghanistan, and Israel/Palestine and concluded that many claims of effectiveness weren’t substantiated. Moreover, they documented harms and restrictions to civil liberties and human rights resulting from the use of such technologies. ECNL’s new research shows that one of the most concerning trends for civic liberties is the use of biometrics to surveil protestors and dissidents. The partners we collaborated with exposed such use in countries including India, Mexico, Turkey, Uganda, and Thailand. Beyond direct human rights violations, the mere existence of surveillance technology can have a chilling effect on political expression and civic engagement, as individuals can self-censor and refrain from organizing due to the fear of being identified.

Other alarming impacts to freedom of expression and assembly stem from over-broad efforts by social media companies. This is further exacerbated by the short deadlines imposed by policymakers to remove terrorist content, which may not give platforms enough time to discern whether content violates the law or their internal policies. This can inadvertently result in the suppression of legitimate content, especially content shared by members of marginalized and vulnerable groups, such as Muslim and Arab users. This is partly due to social media companies’ lack of contextual understanding and investment when moderating content in these regions and languages, as well as the challenge of adequately enforcing policies at scale. Content exposing human rights abuses or criticizing powerful actors can be erroneously flagged as violative and thus removed, as seen in the recent human rights impact assessment of Meta’s activities in Israel/Palestine.

Furthermore, as data is increasingly collected and processed by private companies, issues arise when they disclose this content to law enforcement. Yet users are often left in the dark about the modalities of such disclosure, and hardly ever have a say in the matter. In Mexico, for example, mobile phone companies were required by law to collect biometric information from phone users as a tool to combat organized crime. Given the severe risks to privacy, the Mexican Supreme Court struck this policy down in August 2022, declaring it unconstitutional.

Importantly, even when governments do not intend to use technology maliciously, there is little to no evidence that the technology is effective to achieve a broadly defined goal. And yet, these technologies are often designed and deployed without robust safeguards and consultation with affected communities.

Proportionate Approaches and Meaningful Engagement

Lessons learned from counterterrorism-related abuses in the past unequivocally show the importance of proportionate approaches and meaningful engagement with civil society prior to any use of technology in combating terrorism. After 20 years of applying a preventive (and pre-emptive) counterterrorism agenda – and finally admitting the harm it caused to freedoms worldwide –  the U.N. and member States can no longer justify taking hasty, immediate action without considering the potentially severe damage to human rights. When technology is thrown into the mix, the risk is exacerbated by a common “move fast, break things” approach championed by reckless technology companies, as they race for innovation while disregarding their impact on human rights.

More scrutiny and better safeguards begin with better understanding the limits of the technologies themselves and, through evidence-based research, assessing whether they are indeed fit for purpose and can prevent terrorism in practice. A corollary of that is the need to investigate how technologies introduced in the name of security and counterterrorism, including in financing of terrorism, respond to the actual threats and how they will impact human rights and civic freedoms.

To be proportionate, tech-based responses to terrorism must be based on a full risk assessment of their impact on human rights and civic space, and deployed in a way that mitigates the identified risks. Only when grounded in a multistakeholder approach can such assessments and actions be sufficiently informed, legitimate, and effective. This begins – and ends – with meaningfully engaging various sectors, including relevant national authorities, companies, civil society, and academia. Members of historically marginalized and vulnerable groups must have a privileged seat at the table, because they often are the most immediately and/or severely impacted. This especially includes racialized persons, women and gender non-binary persons, LGBTQIA+, religious minorities, migrants and refugees, disabled persons, children, the elderly, and those of lower socio-economic status. Voices and representatives from the Global South must not only be included, but also elevated throughout the process.

Once human rights risks and impacts are properly assessed, policymakers at the U.N. and in member States should consider existing guidance and resolutions related to counterterrorism and financing. How do current instruments and responses address the threat? Are more regulation and action really needed? Importantly, what has been done to address the unintended consequences of counterterrorism measures to date, and what lessons can be learned in crafting future responses?

In the meantime, a global coalition of civil society organizations has been pushing for a ban on biometric surveillance technologies. And U.N. special rapporteurs called for a moratorium on the sale of surveillance technology in August 2021, given the severe risk to human rights.

Any future policy intervention must be risk-based, targeted and in full compliance with the wider international human rights framework. This includes not only binding instruments, but also the U.N. Guiding Principles on Business and Human Rights and the U.N. systemwide guidance on human rights diligence in developing, deploying, and using new technologies. Recent Human Rights Council resolutions are also relevant, such as Resolution 41/11 on new and emerging technologies (2019), and the new 51/L.25 on the use of technology in the military domain. Restrictions to human rights, including civic freedoms, must always meet the three-part test of legality, legitimate aim, and proportionality and necessity. No blanket exemption for the use of technology for counterterrorism or national security could ever meet that test.

As the U.N. and member States engage during the meeting in India, they must pause, listen, and take this opportunity to scrutinize the use and impact of emerging technologies, with wide and meaningful consultation of civil society.

IMAGE: A sign on Queen Street in the city center of Cardiff, United Kingdom, on August 25, 2022, warns that South Wales Police are using facial recognition. (Photo by Matthew Horwood/Getty Images)

The post Emerging Tech Has a Front-Row Seat at India-Hosted UN Counterterrorism Meeting. What About Human Rights? appeared first on Just Security.

The UN Cybercrime Treaty Has a Cybersecurity Problem In It https://www.justsecurity.org/83582/the-un-cybercrime-treaty-has-a-cybersecurity-problem-in-it/?utm_source=rss&utm_medium=rss&utm_campaign=the-un-cybercrime-treaty-has-a-cybersecurity-problem-in-it Mon, 17 Oct 2022 12:51:23 +0000 https://www.justsecurity.org/?p=83582 Proposals for an international cyber crime treaty could have unintended consequences that undermine the very purpose for its existence.

The United Nations is engaged in a landmark effort to establish a new global cybercrime treaty. The goal is laudable. Cybercrime does not respect borders, nor is it limited by them. And, as we have seen, cyberattacks that begin with one target can quickly spill into the broader digital ecosystem, causing widespread damage.  But this initiative at the U.N. – if not carefully curated – could also serve as a vehicle for countries to criminally prosecute security researchers, technology companies, and others for activities that are essential to the overall security of our global digital community.

The estimated economic cost of cyberattacks is staggering and seems to grow each year. The expansion of the cyber insurance industry is a natural consequence as more companies look to protect themselves against these attacks. The damage wrought by cybercrime has a nontrivial human component too. When a cyberattack targets the healthcare industry – a common victim – the impact on individual lives is stark: prescriptions don't get filled, surgeries are delayed, and an individual's health can rest in the hands of a cybercriminal thousands of miles away and out of reach of local and allied law enforcement agencies. Innovative approaches to combatting cybercrime, including drawing on all elements of geopolitical power, are needed if the international community hopes to put a dent in the seemingly unbounded growth of this malicious enterprise. But while the goal of increased global cooperation in the prosecution of cybercrime is worthwhile, current proposals from various countries, discussed during the summer's U.N. Ad Hoc Committee's Second Session, raise concerns.

As it currently stands, the most influential and important international cybercrime treaty is the Council of Europe Convention on Cybercrime, more commonly referred to as the "Budapest Convention." That Convention was the first international cybercrime treaty and has been adopted by 67 countries, including Australia, Canada, Japan, the U.K., and the U.S., as well as members of the Council of Europe (which extends beyond the European Union). The goal of the Budapest Convention was to establish a global approach to cybercrime that would involve harmonizing national law, improving investigative abilities, and enabling international cooperation. Among other things, the Budapest Convention defined criminal offenses for cybercrimes such as illegal access to a computer system, fraud and forgery, and illegal data interception. While the Budapest Convention has been the subject of controversy over the years, including concerns that it undermines individual privacy rights, it is generally regarded as a useful instrument setting an international standard for addressing cybercrime.

In 2019, the U.N. General Assembly adopted a resolution that initiated a multi-year process of negotiating what could become a global cybercrime treaty more widely adopted and influential than the Budapest Convention.  Negotiations for this treaty are wide-ranging and illustrate a lack of unanimity concerning what should be defined as “cybercrime.” Where some proposed crimes mirror the language and approach of the Budapest Convention, such as prohibitions against illegal access to a computer system, others include new provisions, such as those that criminalize the receipt of “any stolen computer resource.”  The competing proposals also raise the specter of significant human rights concerns with sweeping concepts of criminalized conduct,  especially since the countries driving the movement toward the new treaty are among those with the most restrictive laws concerning the free and open use of the internet.

While human rights concerns are the most significant danger in some of the proposals, they are not the only problem. Ironically, one of the potential flaws in many of the proposed crimes is that they may undermine the treaty's own goal of bolstering global cybersecurity. This concern manifests most notably in the number of proposals calling for the criminalization of computer-enabled conduct without any requirement to show "intent."

Intent is a common element in many global cybercrime legal frameworks – and criminal law, generally. The crimes outlined in the Budapest Convention, Articles 2-11, specify some element of intent as a prerequisite to the criminal prohibitions, such as illegal access, illegal interception, and data interference. While some of the parties participating in the negotiation of the new U.N. Cybercrime Treaty have proposed cybercrimes that are consistent with the language of the Budapest Convention, many other countries have proposed crimes without any intent element. That's ill-advised and dangerous. For instance, with respect to the crime of "[c]omputer interference," Proposal 5 from India states:

Each State party shall adopt such legislative and other measures as are necessary to establish as an offence under its domestic law, if any person without permission of the owner or any other person who is in charge of a computer, computer system or computer network – (d) damages or causes to be damaged any computer, computer system or computer network, data, computer data base or any other programmes residing in such computer, computer system or computer network…

Another example is Egypt’s Proposal 1 for an offense relating to “[a]ttack on a site design,” which states:

Each State party shall also adopt such legislative and other measures as are necessary to criminalize the following acts:

The unlawful damaging, disruption, slowing, distortion, concealment or modification of the site design of a company, institution, establishment or natural person.

While many proposals omit intent, other countries seek to maintain it as an important element of the proposed crimes in the new treaty. For instance, Canada's Proposal 3 for an offense relating to "data interference" states that countries shall:

Establish as a criminal offence to, intentionally and without right, seriously hinder the functioning of a computer system by inputting, transmitting, damaging, deleting, deteriorating, altering, or suppressing computer data.

When intent is removed from a criminal prohibition, it increases the likelihood that innocent individuals who inadvertently produce certain effects from their conduct will be subjected to the full weight of criminal prosecution and the threat of significant penalties, including, potentially, the loss of their freedom. This is a danger that is well-recognized in the field of cybersecurity. To be sure, security research does not always implicate cybercrime laws, since such research does not necessarily involve conduct that might constitute "interfering" with a system or circumventing security measures. Omitting intent as an element of a cybercrime may, however, criminalize such research in those circumstances where its effects are less clear.

By maintaining the intent element in cybercrime laws, jurisdictions can avoid discouraging or chilling the activities of security researchers: those who are legitimately acting in good faith should generally not have to worry about being prosecuted for inadvertent effects that different parties might debate as "accessing" or "interfering" with a system. There should be no room for such ambiguity.

Through its enforcement of the Computer Fraud and Abuse Act (CFAA), the United States itself has struggled to draw the line between legitimate computer research and criminal access to a computer system. In particular, in the case of vulnerability research, some identification and testing of vulnerabilities could potentially, if inadvertently, cause effects that some might argue constitute "interfering" with a computer system in violation of the CFAA. This has led many critics to argue that vital cybersecurity research, including vulnerability research, is threatened unnecessarily by the specter of potential federal criminal prosecution. Many technology companies that offer cybersecurity services or products, as well as corporate security departments, depend on the ability to obtain and use actionable intelligence concerning cybersecurity vulnerabilities to protect their systems, the many consumers they serve, and the broader cybersecurity ecosystem. The importance of insulating "good faith" security researchers from cybercrime laws was recognized recently by the U.S. Department of Justice, which announced a new policy for federal prosecutors investigating potential violations of the CFAA. That policy explicitly discourages prosecutors from pursuing "good faith" security researchers for violations of the law.

To the extent that any of the current proposals lacking an intent requirement survive in the final version of the U.N. Cybercrime Treaty, they could significantly alter the landscape for cybersecurity researchers, discouraging their work and potentially exposing them to criminal prosecution.

A new global cybercrime treaty, especially one that aspires to something closer to universal adoption among countries that are not parties to the Budapest Convention, could have significant positive effects on the fight against global cybercrime. An instrument that enables more extensive international cooperation in cybercrime investigations could mean, among other things, more favorable conditions for the extradition of cybercriminals from countries currently unwilling to extradite them. It could also shrink the number of "friendly" jurisdictions where cybercriminals can act with relative impunity. But when significant human rights concerns are coupled with blind spots that could endanger cybersecurity research, it is apparent that an international instrument that is not carefully crafted could have unintended consequences, including undermining the very purpose for its existence.


Photo: Third session of the Ad Hoc Committee to Elaborate a Comprehensive International Convention on Countering the Use of Information and Communications Technologies for Criminal Purposes, New York, Aug. 30, 2022.

The post The UN Cybercrime Treaty Has a Cybersecurity Problem In It appeared first on Just Security.

]]>
83582
A Different Kind of Russian Threat – Seeking to Install Its Candidate Atop Telecommunications Standards Body https://www.justsecurity.org/83286/a-different-kind-of-russian-threat-seeking-to-install-its-candidate-atop-telecommunications-standards-body/?utm_source=rss&utm_medium=rss&utm_campaign=a-different-kind-of-russian-threat-seeking-to-install-its-candidate-atop-telecommunications-standards-body Wed, 28 Sep 2022 13:15:32 +0000 https://www.justsecurity.org/?p=83286 The new secretary-general of the standard-setting body will have global impact on whether the digital sphere will be beneficial for all.

The post A Different Kind of Russian Threat – Seeking to Install Its Candidate Atop Telecommunications Standards Body appeared first on Just Security.

]]>
For the billions of digital devices that people the world over use each day, technical standards provide rules that ensure a device produced in one country can run software developed in a second country, on networks located in a third country. These rules are established in international standards organizations (ISOs), and the leadership of these organizations is critical to ensuring that a transparent, values-based set of principles is in effect, and that a level playing field exists for all companies to participate. Authoritarian states, like China and Russia, have worked to take control of such organizations over the past two decades. To wit, the International Telecommunication Union (ITU) is holding leadership elections this week at its Plenipotentiary Conference in Bucharest, and as Russia floats a candidate to take over the top position from a Chinese official, the United States and its allies need to ensure that the candidate who is more likely to enforce the right principles wins.

The ITU was established more than 150 years ago after the invention of the telegraph to facilitate communications between disparate national communication systems. It quickly grew to encompass broader radio and telephone network issues, and eventually all forms of telecommunications. Ensuring independent control of the ITU is critical for two reasons.

First, consistent international standards help to facilitate international trade and contribute to economic growth. Industrialized and “informationalized” countries like the United States have historically been leaders in standards setting to ensure a free and competitive environment for the development and sale of products. This effort significantly impacts national security — the United States’ ability to pursue strategic objectives is inextricably linked to its economic competitiveness, and to the extent that standards participation benefits U.S. economic power, it benefits U.S. national security interests.

Second, standards participation provides a venue for countries to promote or discourage certain values. Though scientific in nature, standards significantly affect the values that are protected in technical designs. China’s growing influence in standards bodies, for instance, threatens human rights and privacy in the information and communications technology (ICT) ecosystem, as can be seen in efforts by Chinese technology firms to shape technical standards related to facial recognition and surveillance. These surveillance technologies have been fundamental to China’s repression of ethnic Muslim minorities in the Xinjiang region, and China looks to export these technologies to other authoritarian states. While countries could import Chinese technology regardless of the ITU’s adoption of any specific standard, the inclusion of repressive technologies in internationally adopted standards would certainly help these technologies become the norm.

The ITU was initially a boring place, doing boring (but important) work. Over the past two decades, however, China has worked to gain control of the ITU and other ISOs in order to promote Chinese companies and interests. Lately, China has also tried to expand the ITU's remit to include internet standards, as these are currently controlled by a non-governmental body that China has struggled to manipulate.

From Chinese Secretary-General to Russian?

The current leader of the ITU is a Chinese official named Houlin Zhao, who has used his role to promote Chinese companies and policies. His actions are so brazen that analysts wrote even two years ago that “[i]t is extraordinary for an international civil servant to shill blatantly for a company from his home country the way Zhao is doing for Huawei, or to so boldly endorse initiatives of his home country the way Zhao has championed China’s Belt and Road Initiative. It is even rarer when those statements involve official responsibilities.”

As Zhao’s term comes to an end, it would be unusual for another Chinese leader to gain support, as the organization generally seeks to vary the representation in its leadership among the membership. But a similarly authoritarian regime, Russia, is looking to take the secretary-general post with its nominee, Rashid Ismailov. He has worked in both the Russian government and in Russian and international telecommunications companies. Running against him is Doreen Bogdan-Martin, an American who is a career ITU official.

While Ismailov has worked with Russian companies and international companies like Nokia, Ericsson, and Huawei, it is his work for the Russian government that should prompt questions about his commitment to international principles like transparency and rules-based systems. In 2014, Ismailov was appointed vice-minister of telecom and mass communications of the Russian Federation. Ismailov has said he intends to lead the ITU in creating initiatives to prioritize individuals' well-being, but that would run counter to both apparent Russian policy and his own record. The Russian government, to say the least, has a spotty record in terms of how it treats information and personal privacy, and Ismailov's own experience includes serving as CEO of a company that installed devices ensuring all Russian internet traffic was filtered through sovereign internet infrastructure.

In fact, Russia is notorious for its opposition to free speech and for regulations that violate privacy and tighten control over online content. In 2019, Putin signed the "sovereign internet law" to protect Russia's "digital sovereignty" – legislation that allows the Kremlin to further restrict social media platforms, and the influence of American social media specifically. The new laws theoretically allow Russia to impose fines on platforms that do not block forbidden content such as calls for suicide, child pornography, or information on drug use, but in reality, these laws allow Putin to remove content that contradicts Russian interests or fails to parrot the Kremlin's talking points.

Moscow also banned virtual private networks, which serve to protect the privacy of their users. The Russian government plans to cut Russia off from the global internet and use the homegrown "Ru-Net" instead, following the China model. The Russians have previously made it clear they would like to use the ITU to establish favorable controls over the internet. There is not a long history of Russian officials serving in international organizations and working at cross purposes with Putin's principles, so it is hard to imagine Ismailov having the inclination, or opportunity, to push for transparency and rules-based systems that prioritize personal rights. Any Russian's candidacy must be seen as a package deal with Vladimir Putin.

Alternate Candidate

On the other hand, the alternative candidate, Bogdan-Martin, is a career international official who has served in the ITU's Development Bureau since the early 2000s, including as its director, and her work has focused on global equity initiatives. She has spearheaded both the ITU's contributions to the EQUALS Global Partnership for Gender Equality in the Digital Age and the ITU's partnership with UNICEF on its Giga project for connecting schools internationally. The United States has come out strongly in support of her candidacy.

Given the criticality of the ITU to national security and to U.S. values of transparency and freedom, and the stark choice in candidates, the U.S. needs to take action to ensure the election of Bogdan-Martin – not because she is an American, but because her campaign is based on exactly the necessary principles of transparency and fact-based leadership.

When confronted with a similarly stark choice in 2020, the Trump administration corralled U.S. allies and partners to ensure the pro-transparency candidate was elected. In that case, it was a vote for the directorship of the World Intellectual Property Organization, and the United States was able to ensure that Daren Tang of Singapore defeated Wang Binying of China. The United States did a superb job of working overseas capitals and successfully wrangled votes, with ambassadors and cabinet members gathering a coalition of allies and partners to support the candidate most committed to transparency and a rules-based system.

As the United States and its like-minded allies look at this impending ITU election, a similarly aggressive effort is needed, and it should be the number one priority this week for every U.S. overseas mission. At the same time, the Biden administration should continue to insist that allies do the same.

While ITU elections rarely make international headlines, the power wielded by whoever is elected this week will have global impact. If the United States and its allies do not stand by principles of transparency now, defending them later will only prove more difficult. As technology continues to advance, it has never been more important to preserve a digital space that is beneficial for all.

IMAGES: (Left image) Russian nominee Rashid Ismailov (right) shaking hands with current ITU Secretary-General Houlin Zhao; (right image) American alternative candidate Doreen Bogdan-Martin (right) with Zhao. (Courtesy ITU Pictures via Flickr)

The post A Different Kind of Russian Threat – Seeking to Install Its Candidate Atop Telecommunications Standards Body appeared first on Just Security.

]]>
83286