Elon Musk’s takeover of Twitter and his controversial decisions as its owner have fueled a new wave of calls for regulating social media companies. Elected officials and policy scholars have argued for years that companies such as Twitter Inc. and Facebook – now Meta Platforms Inc. – have immense power over public discussions and can use that power to elevate some views and suppress others.
I wonder what that regulation would look like. There are many regulatory models in use around the world, but few seem to fit the realities of social media.
The central ideas behind economic regulation – safe, reliable service at fair and reasonable rates – have been around for centuries. The U.S. has a rich history of regulation.
The first federal economic regulator in the U.S. was the Interstate Commerce Commission, which was created by the Interstate Commerce Act of 1887. This law required railroads to operate safely and fairly and to charge reasonable rates for service.
The law reflected concerns that railroads were monopolies that could behave in any manner they chose and charge any price they wanted. This power threatened people who relied on rail service.
Individual social media companies don’t really fit this traditional mold of economic regulation.
While internet access is becoming an essential service, it’s debatable whether social media platforms are essential. And companies such as Facebook and Twitter aren’t monopolies and don’t directly charge people. So the traditional focus of economic regulation – fear of exorbitant rates – doesn’t apply.
A more relevant model might be the regulation of the electricity grid and pipelines. These industries fall under the Federal Energy Regulatory Commission and state utility regulators. Like those networks, social media carries a commodity, in this case information, and the public’s primary concern is that companies deliver it safely and fairly.
In this context, regulation would mean establishing standards for safety and equity, with fines for companies that violate them. But setting up such a system for social media would be complicated.
First, establishing these standards requires a careful definition of the regulated company’s roles and responsibilities. Since social media companies continuously adapt to the needs and wants of their users, establishing roles and responsibilities could prove challenging.
Texas attempted to do this in 2021 with a law that barred social media companies from banning users based on their political views. Social media trade groups sued, arguing that the measure infringed upon their members’ First Amendment rights. Federal courts blocked enforcement of the law, and the dispute is likely headed to the Supreme Court.
Second, laws can limit how energetically regulatory agencies police the industries they oversee.
For example, the Office of Enforcement at the Federal Energy Regulatory Commission is responsible for the safety and security of U.S. energy markets. But under the Energy Policy Act of 2005, the office can’t levy civil penalties higher than $1 million per violation per day. By comparison, the cost to customers of the California power crisis of 2000-2001, fueled partly by energy market manipulation, has been estimated at roughly $40 billion. Clearly, the prospect of a fine from the regulator is not a sufficient deterrent in every instance.
If a new law withstands legal challenges, a regulatory agency such as the Federal Communications Commission or the Federal Trade Commission, or perhaps a newly created agency, would have to write regulations establishing social media companies’ roles and responsibilities and creating enforcement mechanisms. In doing so, regulators would need to be mindful that changes in social preferences and tastes could render these roles moot.
We can assume that social media companies would evolve quickly, so regulators would likely be assessing a moving target. As I see it, even if bipartisan support develops for regulating social media, it will be easier said than done.
Theodore J. Kury is the director of energy studies at the University of Florida’s Public Utility Research Center. Distributed by The Associated Press.