Around midnight on January 18, 2025, TikTok went dark. For 170 million Americans, the endlessly scrolling videos vanished overnight in a stunning enforcement of a new U.S. law. Then, much to the surprise of users, it was reported that “TikTok begins restoring service in the U.S. after a dramatic shutdown over a ‘divest-or-ban’ law.” TikTok came back online on the strength of a temporary reprieve promised by incoming President Donald Trump. Although it is unknown whether TikTok’s service will be terminated in the future, it is clear that the protectionist fears that led to its shutdown have not subsided and, in fact, are increasingly shaping America’s views about the media.
Protectionism is an approach to media literacy rooted in the belief that humans are too vulnerable to resist problematic, false, or misleading information. From documentaries like The Social Dilemma to authors like Jonathan Haidt, the protectionist perspective is shaping how Americans perceive media, often shifting focus away from the true problem: Big Tech.
In the U.S., media literacy is defined as “the ability to access, analyze, evaluate, create, and act using all forms of communication.” Protectionism represents the oldest and most rigid framework of media literacy pedagogy. Emerging in the early 20th century from fears of propaganda, this approach taught people to spot and avoid harmful media. It assumes the media is inherently dangerous and that audiences are passive, powerless to resist its influence. Critics argue otherwise, claiming that individuals are active participants capable of interpreting and rejecting harmful content. For example, reading excerpts from Adolf Hitler’s writings does not automatically turn someone into a white supremacist—people are not mindless receptors.
For those who view users as ignorant or powerless, protectionism offers a simplistic solution: shield people from media. This mindset played a key role in the TikTok ban. Many lawmakers feared foreign governments—particularly China—might use the platform to harvest data and manipulate American users. Others worried about TikTok content showcasing the plight of Palestinians in Gaza during Israeli bombing campaigns, labeling such posts as “fake news” or propaganda. Combined with pressure from American social media competitors lobbying lawmakers, these fears culminated in TikTok’s forced shutdown.
But TikTok is not alone in this dynamic. Increasingly, documentaries like The Social Dilemma and authors such as Jean Twenge and Jonathan Haidt raise legitimate concerns about the effects of technology on mental health, including bullying, body dysmorphia, anxiety, depression, and loneliness. Yet the dominant response has been to blame the technology itself, as seen in the rise of school cellphone bans.
This protectionist stance overlooks a critical point: technology itself is not inherently the problem. Innovations like pencils, light bulbs, calculators, and computers have historically enhanced learning and productivity. Indeed, social media, like any tool, is neither inherently good nor bad; it depends on how we use it. Pete Etchells, in Unlocked: The Real Science of Screen Time (and How to Spend It Better), draws on a vast body of recent scientific literature to argue that it is not the amount of time spent on screens that harms us but rather how we use that time.
A problem arises when these tools are monopolized by corporations that elevate the profit motive over all competing considerations. Time and time again, whistleblowers from Google, Meta, and other Big Tech companies have revealed internal studies showing that their products are associated with mental and physical harm to users and with the spread of false information that threatens democratic processes. Yet these companies consistently choose to protect their profits rather than alter their products to be less harmful.
The same is true for AI. While AI has transformative potential, such as correcting essays, summarizing texts, or organizing information, the corporate profit model threatens to reduce AI to an invention that enriches the few while exploiting the many. Indeed, the monetization of people’s privacy (which is erased in the process), the theft of intellectual property, and the creation of digital doppelgangers of humans without their consent are just the beginning of how corporate AI chooses profits over the interests of users. In addition, analysts predict job losses and the use of data to economically exploit people through their labor, insurance costs, education, and more. These companies collect and analyze massive amounts of data, eroding privacy and putting vulnerable individuals, such as undocumented migrants or victims of stalking, at risk.
An alternative vision for digital spaces comes from journalist Julia Angwin, who, in The Right’s Triumph Over Social Media, imagines a decentralized ecosystem. In this model, users set their own content moderation standards, creating a more diverse and democratic online space. While radical, such an approach addresses the structural issues in social media rather than relying on bans or heavy-handed regulations.
The protectionist framework dominating today’s discourse is too reactionary. Instead, we should focus on empowering users through critical media literacy, an approach that shows people how to think, not what to think. It encourages users to negotiate their relationship with media; to ask questions; to learn who owns the media and what conflicts of interest those owners may have; and to explore which messages the media will share and which they will censor, particularly those concerning vulnerable communities.
As we grapple with AI’s ethical implications, we must also rethink how we approach our relationship with social media. Rather than repeating the mistakes of protectionist alarmism, we must focus on addressing the real problem: the profit motives of Big Tech. This will force a discussion about prioritization, in which the citizenry must ask, “Do you think democracy, economic viability, truth, and mental and physical health are more important than Big Tech’s profits?”
Sydney Sullivan is a full-time lecturer at San Diego State University and a PhD candidate at the University of California, Davis, specializing in education and digital rhetoric with an emphasis on well-being. Sydney’s teaching centers on guiding her students to learn critical media literacy skills while diving into the critical concerns and opportunities surrounding artificial intelligence and writing. Her remaining time in graduate school is spent developing administrative skills to design writing curricula that integrate critical media literacy with well-being. She has recently been published with Routledge and Composition Studies, with chapters entitled “Rethinking Curriculums: How Critical Digital Literacy and Mandatory Composition Courses Collide” (2024) and “Self-Determination Theory and Authenticity: A Response to Power Inequities within Higher Education” (2022).
Nolan Higdon is a political analyst, author, lecturer at Merrill College and the Education Department at the University of California, Santa Cruz, and Project Censored National Judge. Higdon’s areas of concentration include critical AI literacy, podcasting, digital culture, news media history and propaganda, and critical media literacy. All of Higdon’s work is available at Substack (https://nolanhigdon.substack.com/). He is the author of The Anatomy of Fake News: A Critical News Literacy Education (2020); Let’s Agree to Disagree: A Critical Thinking Guide to Communication, Conflict Management, and Critical Media Literacy (2022); The Media and Me: A Guide to Critical Media Literacy for Young People (2022); and Surveillance Education: Navigating the Conspicuous Absence of Privacy in Schools (Routledge). Higdon is a founding member of the Critical Media Literacy Conference of the Americas. He is a regular source of expertise for CBS, NBC, The New York Times, and The San Francisco Chronicle.
Photos for header image by Nik on Unsplash and Patrick Hoesly on flickr.