STANFORD INTERNET OBSERVATORY
The first two years. 2019-2021

Internet Observatory | Cyber Policy Center
io.stanford.edu

CONTENTS

04/05  Letter from the Directors
06/07  SIO by the Numbers
08/09  Research
10/11  Spotlight: Potemkin Pages and Personas
12/13  Spotlight: Platform Takedowns
14/15  Teaching
16/17  Spotlight: Telling China's Story
18/19  Policy
20/21  Spotlight: The Election Integrity Partnership
22/23  Acknowledgements

Two Year Review: June 6, 2019 – June 6, 2021


LETTER FROM THE DIRECTORS

Two years ago, we launched the Stanford Internet Observatory as a cross-disciplinary laboratory for the study of abuse in current information technologies, with a focus on the misuse of social media. The Observatory was created to learn about these abuses in real time and to translate our research discoveries into education for the next generation of engineers and entrepreneurs and into policy innovations for the public good. The term “Observatory” was not an accident: for centuries, physicists and astronomers have coordinated resources to build the massive technological infrastructure necessary to research the universe. The internet is similarly an ecosystem constantly in flux as new apps, emerging technologies, and new communities of users transform the space; researchers need innovative capabilities to research this new information frontier.

When we launched, we knew our work would be important because of the extent to which online activity increasingly shapes public perception of our society’s most important issues. We did not anticipate some of the specific forms this activity would take. The global pandemic moved human interaction from substantially online to near-completely online. As our team adapted to working from home, the spread of online misinformation intensified: an organized marketing campaign to launch the conspiratorial “Plandemic” video; manipulation of livestreams to push fear during Black Lives Matter protests; global superpowers using health diplomacy as concerted soft-power moves in the Global South; and the 2020 US election, culminating in the unprecedented, although perhaps not unanticipated, Capitol insurrection on January 6, 2021.

We launched on June 6, 2019, with an initial team of three and have since grown to a full-time team of 10, working with 76 student research assistants over the past two years. SIO’s success relies on the tireless efforts of the students and staff whose work is highlighted in this report.

As we embark on our third year, we reflect deeply on our research and refine our path forward as a research center. In addition to highlighting the output of our team, this report details our focus areas and goals for the coming year.

We would like to extend our gratitude to our faculty leads Nate Persily and Dan Boneh at the Stanford Cyber Policy Center; Michael McFaul, the director of the Freeman Spogli Institute; and our generous supporters, including Craig Newmark Philanthropies, the William and Flora Hewlett Foundation, the Omidyar Network, the Charles Koch Foundation, and Felicis Ventures.

Alex Stamos, Director
Elena Cryst, Associate Director


Building the infrastructure to study the internet.


STANFORD INTERNET OBSERVATORY BY THE NUMBERS

3 collaborative research initiatives: Attribution.News, the Election Integrity Partnership, and the Virality Project

76 student research assistants
479 students enrolled in SIO courses

9 courses taught: Trust & Safety Engineering (x3), Online Open-Source Investigations (x3), Hack Lab (x2), Current Topics in Technology Platform Policy (x1)

5 end-to-end encryption workshops
200+ workshop participants

PUBLICATIONS
58 blog posts
33 articles and op-eds
20 reports
2 working papers under review
1 published peer-reviewed journal article

RESEARCH

The Stanford Internet Observatory (SIO) started with intentionally broad research interests—including disinformation, trust and safety, and encryption policy—to see where the inquiry areas would grow. Our focus was on creating short-term impact through policy briefs, blog posts, and media engagement that would allow our research to inform decision makers in real time. That flexibility enabled us to adapt our research and embrace new projects. Our technical team built tools to ingest, process, and analyze data from a dozen different social media platforms. These shared research tools have greatly reduced the manual effort necessary to understand and analyze online conversation trends, and have empowered individual social science researchers to pursue their own lines of inquiry without the need for specialized data science skills.

Our researchers, from students to postdocs and young professionals to established experts, leveraged this capacity to understand topics such as social media manipulation around the 2020 Taiwan election, the influence of Russian cyber mercenaries in Libya, and the dynamics and behavior of Parler’s first 13 million users. In addition, we spearheaded two inter-institutional collaborations to study misinformation around specific domestic US events: the Election Integrity Partnership and the Virality Project. These collaborations, along with our takedown analyses and three pivotal reports published in the past two years, are detailed in this report’s Research Spotlights.

The pathologies of information ecosystems extend beyond mis- and disinformation, so we continue to reevaluate the scope of our investigation. In looking at our work from the past two years—over 100 blog posts, op-eds, reports, and papers—we have refined our research into four distinct but interrelated areas: Trust and Safety, Platform Policy, Information Interference, and Emerging Technology.

Trust and Safety

At the core of the Stanford Internet Observatory’s vision is the desire for the future of the consumer internet to be informed by the mistakes of its past. The Trust and Safety research area focuses on the ways consumer internet services are abused to cause real human harm and on the potential operational, design, and engineering responses. Our research will continue to investigate technical weaknesses on platforms such as Clubhouse, share innovative open-source research techniques such as our published work about Wikipedia, and examine the accessibility of self-harm prevention content on platforms across the internet. Our key outputs in the next year will be our open-access textbook on Trust and Safety Engineering, and a new cross-disciplinary journal incentivizing research collaboration in this space.

Platform Policy

How well do technology companies’ published policies address user and content concerns? Is government regulation grounded in technical reality? An unexpected outcome of our Election Integrity Partnership was the power of our comprehensive platform policy analysis for creating a rubric to assess policy. Since publishing that analysis, we have continued pressure-testing platform policies in light of the ever-evolving geopolitical climate. This research area works in tandem with the Trust and Safety track but focuses strictly on policy as a way to strengthen trust and safety. Projects examine ways to limit abuse on end-to-end encrypted platforms, the impact of platform policy on access to self-harm content, the role of consent in online intimate imagery, and other questions.

Emerging Technology

We identified our fourth research area with an eye to the future. It includes new generative machine learning technologies such as deepfakes and the Generative Pre-trained Transformer (GPT) autoregressive language models capable of emulating near-perfect human writing. Our research will look at both the potential and the harm inherent in these technologies, and how they will shape our online world. We will also explore the ways new machine learning techniques can be deployed in protective situations, and look at policy frameworks that can appropriately regulate AI advancements. Our technical infrastructure will play an integral role in creating toolkits for researchers interested in further exploring this new technology.

Information Interference

Information manipulation and disinformation research to date has primarily focused on finding and moderating away networks of inauthentic actors on social media platforms. We believe this scopes the problem too narrowly. Campaigns that deliberately spread false and misleading information no longer rely on inauthentic actors; they originate from domestic as well as foreign actors, who blend traditional media as well as social media manipulation into their strategic efforts. In our assessments of campaigns we increasingly examine full-spectrum activity: both overt/attributable and covert/inauthentic activity spanning broadcast, print, and social media. Information interference, more so than mis- and disinformation, encompasses the variety of operations to manipulate public consensus within the information environment writ large. This area of our work increasingly intersects with the others, as cryptocurrencies, emerging technologies, and new platforms are incorporated into manipulation efforts. At times, these efforts incorporate hacked and leaked material as well, which serves as an intersection point with more traditional cybersecurity research.


RESEARCH SPOTLIGHT

The Stanford Internet Observatory published its first detailed whitepaper in 2019 at the request of the United States Senate Select Committee on Intelligence (SSCI). The report was based on a dataset of social media posts and Pages that Facebook provided to the Committee and attributed to the Main Directorate of the General Staff of the Armed Forces of the Russian Federation, known by its prior acronym, GRU. A substantial amount of this content had not previously been publicly attributed to the GRU. Although the initial leads came from the Facebook dataset, many of these Pages had ties to material that remained accessible on the broader internet; our report aggregated and archived that broader expanse of data for public viewing and in service of further academic research.

POTEMKIN PAGES AND PERSONAS: ASSESSING GRU ONLINE OPERATIONS, 2014-2019
Renée DiResta and Shelby Grossman • November 12, 2019


Key takeaways

Traditional narrative laundering operations have been updated for the internet age.

The GRU disseminates the results of its hack-and-leak operations through narrative laundering.

Operators use a two-pronged approach of narrative and memetic propaganda run through different media entities.

Narrative laundering is an “active-measures” tactic with a long history. Updating its tactics for the social media era, the GRU created think tanks and media outlets to serve as initial content drops, and fabricated personas to serve as authors. A network of accounts distributed the content to platforms such as Twitter and Reddit. In this way, GRU-created content could make its way from a GRU media property to a real, ideologically aligned independent media website—a process designed to reduce skepticism about the original little-known blog.

One of the salient characteristics of the GRU’s well-known hack-and-leak tactic is the need for a second party (such as WikiLeaks) to create an audience for the operation. While the GRU’s attempts to leak through its own social media accounts were generally ineffective, it did have success in generating media attention, which in turn led to wider coverage of the results of these operations. A GRU group’s Facebook posts about its hack-and-leak attack on the World Anti-Doping Agency, for example, received relatively little engagement, but write-ups in Wired and the Guardian ensured that its operations got wider attention.

The GRU fed its narratives into the wider mass-media ecosystem with the help of think tanks, affiliated websites, and fake personas. This strategy is distinct from that of the Internet Research Agency, which invested primarily in a social-first meme-based approach to maximize online engagement. Although the GRU conducted operations on Facebook, it either did not view maximizing social audience engagement as a priority or did not have the wherewithal to do so. To the contrary, it appears to have designed its operation to achieve influence in other ways.

This whitepaper is available online at io.stanford.edu.


RESEARCH SPOTLIGHT

INDEPENDENT INVESTIGATIONS OF PLATFORM TAKEDOWNS

Platforms periodically ask the Stanford Internet Observatory to independently analyze networks of assets identified and suspended through internal investigations on social media platforms. Additionally, we occasionally identify operations in the course of our own analysis. To date, we have analyzed and shared our findings on 29 networks originating from or targeting 32 countries around the world. Below is a selection of our investigations. The full archive of reports is accessible on our website, io.stanford.edu.

OCTOBER 2019: RUSSIANS TARGET AFRICA
An operation linked to a Russian businessman targeted people in Libya, Sudan, the Central African Republic, Madagascar, Mozambique, and the Democratic Republic of the Congo, creating credible social media presences that furthered his interests on the ground. The operation appeared to have involved, wittingly or unwittingly, Sudanese individuals who had studied in Russia.

SEPTEMBER 2020: US FIRM TARGETS BOLIVIAN ELECTIONS
Facebook assets linked to a US strategy firm engaged in coordinated inauthentic behavior targeting people in Bolivia and Venezuela. This is the first documented case of a platform suspending assets linked to a US firm targeting a foreign country.

SEPTEMBER 2020: MASS REPORTING FROM PAKISTAN
A network of Pakistan-based Facebook assets engaged in mass reporting to silence critics of Islam in Pakistan. The network used a Chrome extension to expedite reporting.

OCTOBER 2020: RALLY FORGE
An astroturfing operation involving fake accounts left thousands of comments on Facebook, Twitter, and Instagram. The accounts, linked to the Rally Forge marketing agency, posted comments that appeared grassroots but were in fact paid commentary, much of it from people who did not exist.

DECEMBER 2020: ASSETS TARGET LIBYA, SUDAN, SYRIA
A large network cultivated over 5 million followers on Facebook. The operation involved participation by Syrians, and possibly Libyan and Sudanese individuals, living in Russia. The Libyan assets had developed media brands and supported the insurgent Libyan National Army.

TEACHING

Over the last two years, SIO has developed and taught four new courses at Stanford, reaching students in programs in international policy, computer science, business, law, political science, and more. Outside of the classroom, SIO has been fortunate to continue our long-term engagement with students through our research assistant and research analyst positions. In just the past two years, we have trained 76 student research analysts to join our technical and research projects, and students have coauthored 67% of our publications. In January 2021, we hosted the first Trust and Safety Job Fair at Stanford, attended by 11 companies and 121 students.

Looking Forward

SIO will continue to expand our teaching impact through new courses and open-source training material, beginning with five initiatives:

- A new training handbook and feedback process to improve the orientation and professional development of SIO research assistants.
- Publication of the Trust and Safety Engineering textbook, based on the course lectures from the Trust and Safety Engineering course and ongoing research by the SIO team.
- New MOOCs adapted from the Hack Lab and Trust and Safety course lectures.
- A new social science course to be taught in parallel with Trust and Safety Engineering. This course will explore political science research on topics such as the ways foreign and domestic actors promote disinformation, the use of the internet for extremist recruiting, and the use of chat apps to incite violence, as well as how online platforms currently respond to these threats. Students will gain an understanding of the most pressing challenges in global communication platforms and a strong foundation for future research and work on mitigating these harms. For the final project, students will collaborate with those in the Trust and Safety course to design policy guidelines that respond to harmful content.
- An update of the Hack Lab course material to incorporate new developments in relevant case law and to integrate recent significant cybersecurity incidents, such as the SolarWinds and Microsoft Exchange hacks.

Riana Pfefferkorn and Alex Stamos delivering a lecture in Hack Lab.


COURSES

TRUST AND SAFETY ENGINEERING
Trust and Safety Engineering, taught by Alex Stamos, is designed to expose computer science undergraduates to the ways consumer internet services are abused to cause real human harm, along with potential operational, product, and engineering responses. Abuses covered in the course include spam, fraud, account takeovers, the use of social media by terrorists, misinformation, child exploitation, and harassment. Students explore both the technical and sociological roots of these harms and the ways various online providers have responded; the students then complete a practical group project addressing a real-world example of harmful behavior. This course is designed to give future entrepreneurs and product developers an understanding of how technology can be unintentionally misused and to expose them to the trade-offs and skills necessary to proactively mitigate future abuses. The course was piloted in the fall of 2019 with a dozen selected students, and has since enrolled 180 students.

ONLINE OPEN-SOURCE INVESTIGATIONS
Online Open-Source Investigations was developed by SIO research scholar Shelby Grossman as a practical introduction to internet research using free and publicly available information. Dr. Grossman’s syllabus blends best-in-practice open-source intelligence (OSINT) tools developed by organizations such as Bellingcat with SIO’s own research practices. The course prepares students for online open-source research in jobs in the public sector, with technology companies and human rights organizations, and with other research and advocacy groups. SIO has also adapted the course into onboarding training for our research analysts. The small seminar has been taught three times and has enrolled an average of 15 students per quarter over the past two years.

CURRENT TOPICS IN TECH PLATFORM POLICY
Current Topics in Technology Platform Policy was created by SIO associate director Elena Cryst in the spring of 2021 to expose students to a broader set of outside experts and practitioners on the frontiers of emerging technology issues. Speakers included Chris Krebs, former director of the Cybersecurity and Infrastructure Security Agency; Julie Owono, executive director of Internet Sans Frontières and a member of the Oversight Board; Olga Belogolova, Facebook’s policy lead for countering influence operations; and Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation. In its first quarter, the course had an enrollment of 32 students.

HACK LAB
Hack Lab, cotaught by SIO director Alex Stamos and research scholar Riana Pfefferkorn, aims to give students a solid understanding of the most common types of attacks used in cybercrime and cyberwarfare. Lectures cover the basics of an area of technology, how that technology has been misused in the past, and the legal and policy implications of those hacks. In lab sections, students apply this background to attack vulnerable, simulated targets using techniques and tools seen in the field. Students going into leadership positions will understand how events like ransomware attacks occur, how they as leaders can protect cyber assets, and what legal responsibilities they have. In two years, 210 students have completed the course.


RESEARCH SPOTLIGHT

TELLING CHINA’S STORY: THE CHINESE COMMUNIST PARTY’S CAMPAIGN TO SHAPE GLOBAL NARRATIVES
Renée DiResta, Carly Miller, Vanessa Molter, John Pomfret, and Glenn Tiffert • July 20, 2020

Much of the attention to state-sponsored influence practices in recent years has focused on social media activity, particularly as companies such as Facebook and Twitter announced takedowns of accounts linked to state-backed operations. However, state-sponsored operations are broader than social media. This report looks at China’s demonstrated ability to operate a full-spectrum capability set that spans both traditional and social media ecosystems.

The Chinese Communist Party (CCP) relies on an extensive influence apparatus that spans a range of print and broadcast media to advance both its domestic monopoly on power and its claims to global leadership. This apparatus draws on nearly a century of experience running information operations. What is the scope and nature of China’s overt and covert capabilities, and how do they complement one another? China’s longstanding commitment to managing narratives means that it will likely continue to learn, iterate, and adapt. Our report analyzes its capabilities, how these capabilities are leveraged, and how they continue to evolve.


Key takeaways

China has a robust overt propaganda apparatus, and managing both inward- and outward-facing messaging remains a top priority for the CCP. Under President Xi Jinping, this work has taken on a new urgency, with the CCP tightening control of the media and elevating its propaganda machine to the top tier of party organs.

China has less-attributable communication options that it uses to influence opinions. These include content farms and fabricated accounts and personas on social media channels. Perhaps the most famous of China’s more covert domestic influence capabilities is the digital commenter brigade known as Wumao, or the “50 Cent Party.” In August 2019, several technology companies, including Facebook, Twitter, and YouTube, concretely attributed clusters of fake accounts and content to the CCP.

China is growing its social media and broadcast influence abroad. Internationally, overt infrastructure includes regionalized and language-specific traditional media channels, social media pages and accounts, and prominent-figure influencer accounts with millions of followers on Western social media platforms.

This report was prepared with colleagues at the Hoover Institution's China Global Sharp Power Project. The full report is available on our website, io.stanford.edu.

POLICY

Over the last two years, Stanford Internet Observatory scholars and experts have worked to keep technology policy grounded in technical reality. We have hosted events with policy makers, given expert congressional testimony, held private briefings for government staff, organized a public event at the European Parliament, released policy briefs, and provided feedback to online platforms on their policies and practices.

Our launch two years ago coincided with the release of the Freeman Spogli Institute’s report, “Securing American Elections: Prescriptions for Enhancing the Integrity and Independence of the 2020 U.S. Presidential Election and Beyond.” This report, coauthored by Alex Stamos, laid out concrete steps that election authorities could take to improve the security of election infrastructure, combat state-sponsored disinformation campaigns, and regulate online advertising around the elections.

In March 2021, after six months of monitoring, research, and analysis, SIO and the Election Integrity Partnership published our report on misinformation in the 2020 presidential election, “The Long Fuse.” Those findings are detailed in the research spotlight on pages 19-20 of this report. We found that both government and platforms took many appropriate steps to mitigate the risks seen in the 2016 election, but were still unprepared for the depth of grassroots and domestic misinformation spread in the leadup to the 2020 election and fed by the global pandemic and unprecedented rhetoric of the sitting president.

Looking Forward

Our team has been prolific during our year of sheltering in place. As we return to the office and to bicoastal travel this summer, we will bring over 100 publications to ongoing global policy debates. Strategically, we have identified several goals that will accelerate our policy impact in the coming years.

OPEN A NEW OFFICE IN WASHINGTON, DC
The Stanford Internet Observatory plans to establish a persistent presence in DC as we locate several full-time staff positions in the nation’s capital. This will facilitate engagement with all branches of the federal government, and with our colleagues and counterparts at think tanks in Washington.

INFORM THE DEBATE AROUND SECTION 230 REFORM AND END-TO-END ENCRYPTION REGULATION
The regulation debate persists both in government and within technology companies. We will continue to publish commentary on proposals to reform Section 230 of the Communications Decency Act. At the same time, we will use the convening power of our popular end-to-end encryption workshops to further the discussion of how to improve user safety while preserving privacy on encrypted networks.

TAILOR RESEARCH OUTPUTS TOWARD POLICY RECOMMENDATIONS
Last year we piloted a research approach that combines short-term, policy-relevant research with longer-term research for peer-reviewed publication. For example, we published an evaluation of internet platforms’ self-harm policies in order to have an immediate effect on those policies. We then used that whitepaper as a jumping-off point for a larger project evaluating the implementation of these policies, which we will submit to a peer-reviewed journal. This model enables us to publish findings and recommendations while they are still relevant and able to have an impact.

EXPAND OUR INTERNATIONAL ENGAGEMENT
Although we started 2020 with a visit to the European Commission and Parliament, over the past year we concentrated most of our resources on two major domestic research projects: the 2020 election and the US COVID-19 vaccine rollout. In the coming year we will reinvigorate our international research and seek opportunities to expand engagement overseas, with a renewed focus on developing democracies.

Comparative Policy Analysis

SIO publishes tables comparing social media platforms’ policies on a specific topic, based on a published rubric. The table below shows our final comparison of platforms’ election-related policies, as of October 28, 2020.

Platform  | Procedural Interference | Participation Interference | Fraud             | Delegitimization of Election Results
Facebook  | Comprehensive           | Comprehensive              | Comprehensive     | Comprehensive
Twitter   | Comprehensive           | Comprehensive              | Non-Comprehensive | Comprehensive
YouTube   | Comprehensive           | Comprehensive              | Non-Comprehensive | Non-Comprehensive
Pinterest | Comprehensive           | Comprehensive              | Comprehensive     | Comprehensive
Nextdoor  | Non-Comprehensive       | Non-Comprehensive          | Non-Comprehensive | Non-Comprehensive
TikTok    | Non-Comprehensive       | Non-Comprehensive          | Non-Comprehensive | Non-Comprehensive
Snapchat  | Non-Comprehensive       | Non-Comprehensive          | Non-Comprehensive | Non-Comprehensive
Parler*, Gab*, Discord*, WhatsApp*, Telegram*, Reddit*, Twitch*

*No election-related policies

POLICY OUTPUTS 2019–2021

June 6, 2019: Securing American Elections: Prescriptions for Enhancing the Integrity and Independence of the 2020 U.S. Presidential Election and Beyond, published report

June 25, 2019: “Artificial Intelligence and Counterterrorism: Possibilities and Limitations,” testimony by Alex Stamos at a hearing before the Subcommittee on Intelligence and Counterterrorism of the Committee on Homeland Security, House of Representatives

February 17, 2020: “The Dilemma of Disinformation: How should democracies respond?” lecture by Alex Stamos at the European Parliament Research Service

February 18, 2020: “Safety, Privacy and Internet Policy in Europe,” event at the European Parliament at the invitation of Polish MEP Radosław Sikorski

March 5, 2020: SIO report “Evidence of Russia-Linked Influence Operations in Africa” cited in Senate Hearing 116-275

May 11, 2020: “Coronavirus And Homeland Security Part Seven: Flattening The Misinformation Curve,” House Committee on Homeland Security virtual forum

October 13, 2020: Launch of Election Integrity Partnership weekly briefing calls

January 19, 2021: Launch of the Virality Project weekly policy briefs

March 3, 2021: The Long Fuse: Misinformation and the 2020 Election, published report


RESEARCH SPOTLIGHT

THE LONG FUSE: MISINFORMATION AND THE 2020 ELECTION
The Election Integrity Partnership • March 3, 2021

On January 6, 2021, an armed mob stormed the US Capitol to prevent the certification of what they claimed was a “fraudulent election.” The insurrection was the culmination of months of online mis- and disinformation directed toward eroding American faith in the 2020 election.

US elections are decentralized: almost 10,000 state and local election offices are primarily responsible for the operation of elections. Dozens of federal agencies support this effort. However, none of these federal agencies has a focus on, or authority regarding, election misinformation originating from domestic sources. This limited federal role reveals a critical gap for non-governmental entities to fill. Increasingly pervasive mis- and disinformation, both foreign and domestic, creates an urgent need for collaboration across government, civil society, media, and social media platforms.

The Election Integrity Partnership, a coalition of four organizations that specialize in understanding those information dynamics, aimed to create a model for whole-of-society collaboration and facilitate cooperation among partners dedicated to a free and fair election. With the narrow aim of defending the 2020 election against voting-related mis- and disinformation, it bridged the gap between government and civil society, helped to strengthen platform standards for combating election-related misinformation, and shared its findings with its stakeholders, media, and the American public. The report details our process and findings, and provides recommendations for future actions.

The Election Integrity Partnership was conceived of and initiated by the Stanford Internet Observatory in partnership with the Atlantic Council’s Digital Forensic Research Lab, Graphika, and the University of Washington’s Center for an Informed Public. Report available at eipartnership.net.

The 2020 election demonstrated that actors—both foreign and domestic—remain committed to weaponizing viral false and misleading narratives to undermine confidence in the US electoral system and erode Americans’ faith in our democracy. While the Partnership was intended to meet an immediate need, the conditions that necessitated its creation have not abated. Academia, platforms, civil society, and all levels of government must be committed to predicting and pre-bunking false narratives, detecting mis- and disinformation as it occurs, and countering it whenever appropriate.

Key takeaways

Misleading and false claims and narratives coalesced into the metanarrative of a “stolen election,” which later propelled the January 6 insurrection.

The primary repeat spreaders of false and misleading narratives were verified, blue-check accounts belonging to partisan media outlets, social media influencers, and political figures, including President Trump and his family.

The production and spread of misinformation was multidirectional and participatory.

Many platforms expanded their election-related policies during the 2020 election cycle. However, application of moderation policies was inconsistent or unclear.

Narrative spread was cross-platform: repeat spreaders leveraged the specific features of each platform for maximum amplification.


...to our staff...

ACKNOWLEDGEMENTS

Samantha Bradshaw
Daniel Bush
Elena Cryst
Renée DiResta
Isabella Garcia-Camargo
Josh Goldstein
Shelby Grossman
Matt Masterson
Carly Miller
Riana Pfefferkorn
Alex Stamos
David Thiel

...and to over a hundred students, colleagues and friends who have been part of our team in our first two years.

thank you...

...to our donors...

Craig Newmark Philanthropies
Hewlett Foundation
Omidyar Network
Charles Koch Foundation
Felicis Ventures


Internet Observatory • Cyber Policy Center

The Stanford Internet Observatory is a cross-disciplinary program of research, teaching and policy engagement for the study of abuse in current information technologies, with a focus on social media. The Stanford Internet Observatory was founded in 2019 to research the misuse of the internet to cause harm, formulate technical and policy responses, and teach the next generation how to avoid the mistakes of the past.

io.stanford.edu • @stanfordio • internetobservatory@stanford.edu

Photo Credits: p.2 – Stanford Dish/Stanford News Service; p.4 – Alex Stamos and Elena Cryst/Rod Searcy; p.10 – Potemkin Village, Carson City, Vårgårda, Sweden, 2016/Greg Sailer; p.14 – Screenshot from Hack Lab Course Recording/Stanford Center for Professional Development; p.16 – NASA Earth Observatory/Wikimedia Commons; p.20 – Original illustration for the Election Integrity Partnership/Alex Atkins Design, Inc.

This page spread L-R: SIO election night warroom; SIO E2EE Workshop; Carly Miller presenting at Cyber Center Seminar; Hack Lab TA with lab material; Alex Stamos delivering congressional testimony; students in a classroom; Renée DiResta presenting; SIO team members collaborating.

Editing provided by Eden Beck.
