Government censorship—internet shutdowns, blockages, firewalls—imposes significant barriers to the transnational flow of information despite the connective power of digital technologies. In this paper, we examine whether and how information flows across borders despite government censorship. We develop a semi-automated system that combines deep learning and human annotation to find co-occurring content across different social media platforms and languages. We use this system to detect co-occurring content between Twitter and Sina Weibo as Covid-19 spread globally, and we conduct in-depth investigations of co-occurring content to identify instances that constitute an inflow of information from the global information ecosystem into China. We find that approximately one-fourth of content with relevance for China that gains widespread public attention on Twitter makes its way to Weibo. Unsurprisingly, Chinese state-controlled media and commercialized domestic media play a dominant role in facilitating these inflows of information. However, we find that Weibo users without traditional media or government affiliations are also an important mechanism for transmitting information into China. These results imply that while censorship combined with media control provides substantial leeway for the government to set the agenda, social media provides opportunities for non-institutional actors to influence the information environment. Methodologically, the system we develop offers a new approach for the quantitative analysis of cross-platform and cross-lingual communication.
As audiences have moved to digital media, so too have governments around the world. While previous research has focused on how authoritarian regimes employ strategies such as the use of fabricated accounts and content to boost their reach, this paper reveals two different tactics the Chinese government uses on Douyin, the Chinese version of the video-sharing platform TikTok, to compete for audience attention. We use a multi-modal approach that combines analysis of video, text, and meta-data to examine a novel dataset of Douyin videos. We find that a large share of trending videos are produced by accounts affiliated with the Chinese government. These videos contain visual characteristics designed to maximize attention such as high levels of brightness and entropy and very short duration, and are more visually similar to content produced by celebrities and ordinary users than to content from non-official media accounts. We also find that the majority of videos produced by regime-affiliated accounts do not fit traditional definitions of propaganda but rather contain stories and topics unrelated to any aspect of the government, the Chinese Communist Party, policies, or politics.
Muise, D., Lu, Y., Pan, J., & Reeves, B. (2022). Selectively Localized: Temporal and Visual Structure of Smartphone Screen Activity across Media Environments. Mobile Media & Communication. 10(3), 487–509. (DOI)
This study demonstrates how localization and homogenization can co-occur in different aspects of smartphone usage. Smartphones afford individualization of media behavior: users can begin, end, or switch between countless tasks anytime, but this individualization is shaped by shared environments such that smartphone usage may be similar among those who share such environments but contain differences, or localization, across environments or regions. Yet for all users, smartphone screen interactions are bounded and guided by nearly identical smartphone interfaces, suggesting that smartphone usage may be similar or homogenized across all individuals regardless of environment. We study homogenization and localization by comparing the temporal, visual, and experiential composition of screen activity among individuals in three dissimilar media environments—the United States, China, and Myanmar—using one week of screenshot data captured passively every 5 s by the novel Screenomics framework. We find that overall usage levels are consistently dissimilar across media environments, while metrics that depend more on moment-level decisions and user-interface design do not vary significantly across media environments. These results suggest that quantitative research on homogenization and localization should analyze behavior driven by user interfaces and by contextually determined parameters, respectively.
When COVID-19 first emerged in China, there was speculation that the outbreak would trigger public anger and weaken the Chinese regime. By analyzing millions of social media posts from Sina Weibo made between December 2019 and February 2020, we describe the contours of public, online discussions pertaining to COVID-19 in China. We find that discussions of COVID-19 became widespread on January 20, 2020, consisting primarily of personal reflections, opinions, updates, and appeals. We find that the largest bursts of discussion, which contain simultaneous spikes of criticism and support targeting the Chinese government, coincide with the January 23 lockdown of Wuhan and the February 7 death of Dr. Li Wenliang. Criticisms are directed at the government for perceived lack of action, incompetence, and wrongdoing—in particular, censoring information relevant to public welfare. Support is directed at the government for aggressive action and positive outcomes. As the crisis unfolds, the same events are interpreted differently by different people, with those who criticize focusing on the government’s shortcomings and those who praise focusing on the government’s actions.
The proliferation of social media and digital technologies has made it necessary for governments to expand their focus beyond propaganda content in order to disseminate propaganda effectively. We identify a strategy of using clickbait to increase the visibility of political propaganda. We show that such a strategy is used across China by combining ethnography with a computational analysis of a novel dataset of the titles of 197,303 propaganda posts made by 213 Chinese city-level governments on WeChat. We find that Chinese propagandists face intense pressures to demonstrate their effectiveness on social media because their work is heavily quantified (measured, analyzed, and ranked) with metrics such as views and likes. Propagandists use both clickbait and non-propaganda content (e.g., lifestyle tips) to capture clicks, but rely more heavily on clickbait because it does not decrease the space available for political propaganda. Government propagandists use clickbait at a rate commensurate with commercial and celebrity social media accounts. The use of clickbait is associated with more views and likes, as well as greater reach of government propaganda outlets and messages. These results reveal how the advertising-based business model and affordances of social media influence political propaganda, and how government strategies to control information are moving beyond censorship, propaganda, and disinformation.
Reeves, B., Ram, N., Robinson, T. N., Cummings, J. J., Giles, L., Pan, J., Chiatti, A., Cho, M., Roehrick, K., Yang, X., Gagneja, A., Brinberg, M., Muise, D., Lu, Y., Luo, M., Fitzgerald, A., & Yeykelis, L. (2021). Screenomics: A Framework to Capture and Analyze Personal Life Experiences and the Ways that Technology Shapes Them. Human-Computer Interaction. 36(2), 150–201. (DOI; New York Times report)
Digital experiences capture an increasingly large part of life, making them a preferred, if not required, method to describe and theorize about human behavior. Digital media also shape behavior by enabling people to switch between different content easily, and create unique threads of experiences that pass quickly through numerous information categories. Current methods of recording digital experiences provide only partial reconstructions of digital lives that weave – often within seconds – among multiple applications, locations, functions, and media. We describe an end-to-end system for capturing and analyzing the “screenome” of life in media, i.e., the record of individual experiences represented as a sequence of screens that people view and interact with over time. The system includes software that collects screenshots, extracts text and images, and allows searching of a screenshot database. We discuss how the system can be used to elaborate current theories about psychological processing of technology, and suggest new theoretical questions that are enabled by multiple timescale analyses. Capabilities of the system are highlighted with eight research examples that analyze screens from adults who have generated data within the system. We end with a discussion of future uses, limitations, theory, and privacy.
Lu, Y., & Shen, C. (Revise and Resubmit). Unpacking Multimodal Fact-checking: Features and Engagement of Fact-checking Videos on Chinese TikTok (Douyin). Social Media + Society.
Chen, K., Lu, Y.*, & Wang, Y. (Revise and Resubmit). Toward an Evidence-Driven Understanding of Digital Trace Research on China. Information, Communication & Society. [∗co-first author]
Chen, A., Lu, Y.*, Chen, K., & Ng, A. (Revise and Resubmit). Pandemic Nationalism: Use of Government Social Media for Political Information and Belief in COVID-19 Conspiracy Theories in China. The International Journal of Press/Politics. [∗co-first author]
Christin, A., & Lu, Y. (Revise and Resubmit). The Influencer Pay Gap: Platform Labor Meets Racial Capitalism. New Media & Society.
Lu, Y., & Peng, Y. (Extended abstract accepted, full paper under review). The Mobilizing Power of Visual Media across Cycles of Social Movements. Political Communication.
Peng, Y., Lu, Y.*, & Shen, C. (Under Review). An Agenda for Studying Credibility Perceptions of Visual Misinformation. [∗co-first author]
Pan, J., Lu, Y.*, & Chen, A. (Under Review). The Chilling Effect of Decreasing Anonymity on Chinese Social Media. [∗co-first author]
Selected Working Papers
Lu, Y. (Book project). Fandom, Propaganda, and State Mobilization on Chinese Social Media.
Lu, Y., Pan, J., Xu, X., & Xu, Y. (Manuscript in preparation). The Evolution of Propaganda in the Digital Age: Chinese Government Mobilization on Douyin.
Lu, Y., Liu, S., & Hancock, J. (Manuscript in preparation). Computational Approaches to Understanding Credibility in Video-Based Misinformation: An Analysis of Covid-19 Content on TikTok.