Calls to regulate social media in the public interest fail to grapple with the messy details of policymaking or with the disparate desires of internet users


Will Rinehart is Director of Technology and Innovation Policy at the American Action Forum, where he specializes in telecommunication, Internet, and data policy, with a focus on emerging technologies, innovation, and increasingly, algorithmic decision-making.

The public option. The public interest. Public utility regulation. Something about laws made in the “public” spirit seems attractive. And with the tech industry facing an avalanche of bad news over the last year, regulation advanced in the name of the public appears to be gaining momentum.

For example, just before he left his post, White House advisor Stephen Bannon pushed policymakers to consider regulating the largest social media platforms as public utilities. While radical at the time, the idea has gained traction, even among those on the right. Republican Representative Steve King floated the idea when he asked Facebook CEO Mark Zuckerberg, “What about converting the large behemoth organizations that we’re talking about here into public utilities?” Fox News host Tucker Carlson also contends that “Google should be regulated like the public utility that it is.” Perhaps surprisingly, Carlson’s view is shared by many academics.

Behind the public-spirited label of proposed new regulation lie two distinct legal courses. On one side, public-interest and public-utility regulation follow the regulatory path and impose a set of must-carry requirements on private companies. On the other, governments in Canada, Western Europe, and Australia have chosen to fund a public option in addition to other regulations on the tech and media sectors. The Canadian Broadcasting Corporation (CBC), the British Broadcasting Corporation (BBC), and the Australian Broadcasting Corporation (ABC) are the most notable (and most successful) examples of publicly funded alternatives to private media and tech companies.

Both approaches have been floated for tech platforms: regulating companies in the interest of the public, and having the government create an alternative outright. Each is pitched as a solution to a litany of problems, including fake news, a perceived lack of competition, and the reliance on advertising. Unfortunately for would-be regulators and the public at large, neither route is a panacea.

Distinguishing Public-Interest Regulations

The most obvious path under consideration is direct regulation of the largest tech companies, but regulation can take a variety of forms. For one, “public utility” is a catchall term for organizations that perform services for the public and are subject to government regulation in return. Prices and contracts might be regulated, or the company might be subject to common carriage mandates.

Common carriage requires that a company not discriminate when it serves the public. It is a much older form of regulation, one that stretches back to the Roman Empire and finds its current iteration in English common law. As far back as 1701, English courts had ruled that:

If a man takes upon him a public employment, he is bound to serve the public as far as the employment extends; and for refusal an action lies, as against a farrier refusing to shoe a horse…Against an innkeeper refusing a guest when he has room…Against a carrier refusing to carry goods when he has convenience, his wagon not being full.

The first application of common carriage to a network industry pops up in the late 1840s, when New York state required telegraph companies to serve all individuals. The Interstate Commerce Act followed in 1887, creating a duty for rail companies to serve everyone. These regulations normalized the imposition of common carriage rules, which were in turn applied to telephone networks by the newly formed FCC in the 1930s.

Price regulation, on the other hand, generally stems from the Supreme Court case Munn v. Illinois. In 1871, after an agricultural advocacy group lobbied the state legislature, Illinois set the maximum price that could be charged to farmers for storing their grain in silos. The Chicago grain warehouse firm Munn and Scott was found guilty of violating the law and appealed its case to the Supreme Court. The Court upheld the law, prophetically pronouncing that “a government may regulate the conduct of its citizens toward each other, and, when necessary for the public good, the manner in which each shall use his own property.”

Regulating private companies in the public interest thus leads to two broad kinds of regulations. Price controls determine the prices that users pay, while common carriage rules mandate openness. In practice, laws that regulate telecommunication, electricity, and gas services mix these two categories.

The Problems with Regulating Tech

Getting price regulation right is difficult: its effects can be positive, negative, or ambiguous, depending on the industry, the type of regulation, the time period, and the method of comparison.

Turning Facebook and the other large tech platforms into price-regulated public utilities could be devastating. Some, such as New York Times columnist Zeynep Tufekci, have called for Facebook to charge users for its service, because a fee-based model would allow customers to opt out of advertising. But mandating that social media platforms charge their users a price could seriously hamstring these services and price millions out of a service they value. In a recent study on the topic, economist Caleb Fuller estimated that even under generous assumptions, Google could hope to make somewhere between $14 and $15 million per year if it charged a fee. To put that in perspective, the 2017 total revenue for Google’s parent company, Alphabet, was $111 billion. In a survey conducted by Cass Sunstein, a legal scholar best known for his work applying behavioral economics to public policy, nearly half of participants said that they would pay $0 if Facebook tried to charge them a fee.

Adding a common carriage requirement to platforms brings its own set of problems. Compelling platforms to carry certain kinds of speech would face a very serious challenge under the First Amendment. The government could compel platforms if it designated them as public forums, a suggestion pushed by Senator Ted Cruz. The traditional notion of a public forum encompasses government property such as streets or parks, which have been devoted to public expressive use “by long tradition or by government fiat.”

In rare cases, however, private property can be designated a public forum. The governing Supreme Court decision here is PruneYard Shopping Center v. Robins. In PruneYard, some high school students set up a table in a shopping center to distribute literature and solicit signatures for a petition against a United Nations resolution. A security guard told them to leave, since their table violated the shopping center’s rules against public expressive activities. In its decision, the Supreme Court allowed California to prevent the private owner of a shopping center from using state trespass law to exclude peaceful expressive activity in the center’s open areas. A crucial distinction here is that the Court merely permitted California to designate the mall a public forum because the state constitution includes the affirmative right to “freely speak, write and publish … sentiments on all subjects.” Since few state constitutions have a speech provision like California’s, the public forum designation is largely limited to that state.

Public forum proponents have also tried to reach back to the 1946 Supreme Court case Marsh v. Alabama to ground legal action against platforms. In Marsh, the Court determined that a trespassing statute could not be used to stop the distribution of religious tracts in a privately owned company town. Earlier this year, Prager University brought a case against Google on similar grounds. But District Court Judge Lucy Koh rejected the claim that YouTube “somehow engaged in one of the ‘very few’ functions that were traditionally ‘exclusively reserved to the State.’”

Successful court cases on online public forums have typically been limited to the actions that government officials can legally take in their official capacity. In Knight First Amendment Institute v. Trump, for example, a federal district court in New York held that the First Amendment prohibited President Trump from blocking Twitter users solely based on those users’ expression of their political views. He could, however, mute people as much as he wanted. In a Virginia case, a court held that a local official’s Facebook page was a limited public forum because the government had invited citizens’ speech on that page. Even so, a decision to delete an off-topic comment did not violate the Constitution, the court found, because the comment violated an announced policy for social media comments. That same federal judge reached a slightly different holding in another case: a local official had deleted political criticism from a Facebook page that was otherwise held open for discussion, and for that reason the deletion was found to be unconstitutional. On the other hand, in 2015 an Illinois district court agreed with a municipality that had prohibited political messages on its website and Facebook page, because those pages were limited-purpose forums intended to act only “as small business forums.”

What unites the court cases is that they focus on public officials limiting political speech on their own online pages. They aren’t instances where the government mandated that a platform transmit a certain kind of speech. As legal scholar Eugene Volokh noted, the Supreme Court has shot down many of these kinds of mandated speech laws, finding that there isn’t a compelling government interest in “equalizing the relative ability of individuals and groups to influence the outcome of elections,” in “reducing the allegedly skyrocketing costs of political campaigns,” in “preserving party unity during a primary,” or in protecting speakers who “are incapable of deciding for themselves the most effective way to exercise their First Amendment rights.”

Applying a public forum designation to Facebook, Twitter, or YouTube would effectively nullify those platforms’ terms of use and community standards. The platforms would be severely limited in their ability to kick people off of their sites for hate speech or speech that is “misleading, malicious, or discriminatory.” The result would be a massive spike in spam, leading people to exit the platforms and find better alternatives.

Whether the government took a public-utilities approach or a common-carriage approach to regulating the tech industry, the result would be massive disruption and destruction of value. Such sweeping direct regulation would almost certainly create far more problems than it would solve.

Minitel & Quaero

On the other hand, some have called for the wholesale displacement of these technologies through the funding of a government alternative. Most of the advocates come from Europe, including Diane Coyle, a public policy professor at the University of Cambridge, who argued that:

Creating a genuine alternative to the tech titans will have to involve a different model. A direct approach would be to set up a public service alternative. The big tech groups can hardly complain (at any rate openly) about new competition. They always claim they do not need to be regulated because they could be overturned by a newcomer at any time, just as Facebook powered past MySpace in 2009.

There’s a lot to that initial phrase, a genuine alternative. A genuine alternative wouldn’t include advertisements, the author suggests. A genuine alternative would involve a different means of monetizing. And so, a genuine alternative would serve consumers better. But the French and the Germans have tried to create genuine alternatives to private communication networks before.

Started by the French in 1978, Minitel allowed for an early form of electronic communication in the days before the Internet. Users could make purchases, secure train reservations, check stock prices, search the telephone directory, or simply chat on a terminal provided by the nationalized French telecom service, PTT. After a trial run in rural Brittany in the late 1970s, the system spread throughout the rest of the country. With the expansion of users came the rise of new companies to provide services as well, creating a tech boomlet. At its peak in the mid-1990s, 9 million Minitel devices were being used by 25 million users in France to access 26,000 different services.

Meanwhile, what would become the Internet was starting to take hold, and it pushed out competing communications platforms that relied on dedicated terminals. In short order, Minitel became an artifact of a past age, even though it was clearly the most successful version of a certain model of communication technology, interactive videotex. In the United States, Time Warner’s Full Service Network promised the same full range of services as Minitel, but after nearly $18 billion in debt load, it shut its doors. The PLATO system out of the University of Illinois was the first of these systems to build a community behind it, but it too shut down in the early 1990s, just as the campus became wired for the Internet. Competition swept away each of these projects.

Minitel suffered from a number of defects, but the most relevant flaws stem from the project’s political aims. As Karin Lefevre of France Telecom explained to the BBC when the program shut down, “As well as being a technological project, it was political. The aim was to computerise French society and ensure France’s technological independence.” Fundamentally, the Internet isn’t built on technological independence but on technological connection. While its openness is sometimes derided, the Internet creates a common space for people to interact. The French telecom service hoped to control the direction of computerization, but its technocratic vision lost out to the dynamism of the open Internet.

Still, Minitel wasn’t the only project pushed in the name of technological independence. Quaero, a failed multimedia search engine, was supported by the French and German governments throughout 2005 and 2006. Like Minitel, the project wasn’t about connecting people or providing a globally competitive service, but about creating a public alternative to big tech. German Chancellor Gerhard Schröder made this aim abundantly clear when he said, “we need to have a European answer to the dominance of Google in the search area.” Quaero eschewed metadata tags and leapt straight toward direct image search, while competitors used image metadata to train classifiers that would eventually be capable of direct search, providing a useful service in the short term along the way. The project faltered when it became clear that a top-down approach could hardly stand up to the competitive forces set in motion by the Internet.

In the United States, a government-backed platform would face steep competition and numerous hurdles. For one, there is a lot more competition going on behind the scenes than first meets the eye. Google changes its search algorithm around 500 to 600 times a year to combat spam and the actions of people trying to game the system. Facebook also changes its algorithm regularly. A government alternative would have to match that speed of evolution to keep up.

Creating a government alternative would also necessitate an alternative ecosystem to support the company, with back-end applications, efficient information storage, internal performance analysis, and data centers. Billions would have to be dumped into the project. Microsoft, Apple, Amazon, Facebook, and Google constantly trot out new product lines to expand and deepen their customer base while also going after rivals. In 2017 alone, Google spent $45 billion and Facebook spent $5 billion to keep everything running, spending a government would have to replicate.

And then there is the issue of content moderation. Content moderation at the scale of a Facebook or a Twitter or a Google goes largely unnoticed but is a critical component of maintaining a platform. By the end of this year, for example, YouTube will have 10,000 people on its “trust & safety teams,” while Facebook’s “safety and security team” will grow to 20,000 people. The thousands employed to moderate a government site day after day would face numerous lawsuits like those in Virginia and Illinois mentioned above. And speaking of legal issues, all of these problems would be compounded by the hoops that government agencies have to jump through to procure software.

In 2007, predicting the demise of Quaero, UC Berkeley economist Hal Varian argued that the project would fail in the face of politicized content moderation: “How will Quaero handle searches for erotica, Nazis, politics, tax avoidance, al-Qaeda, Basque separatists … the temptation to intervene in such controversial topics will be irresistible, I think.” Creating a government alternative to Facebook or Google is a nice idea for an op-ed. But in laying out a specific plan to execute the idea, the issues pile up.

On the idea of public interest, media scholar Benjamin Compaine rightly noted, “In democracies, there is no universal ‘public interest.’ Rather there are numerous and changing ‘interested publics’ which fight battles issue by issue, in legislative corridors, regulatory commission hearings and ultimately at the ballot box.” His lens is an important one to adopt. There is no singular public interest. Rather, interest groups take on the mantle of the public to underpin their cause. Attempts to create a platform that serves the public interest would end up satisfying no one. The result wouldn’t be a better-served public but a more energetic use of government power to control the platform.

Mandating that a company work for the public doesn’t solve problems. It creates a myriad of other problems to solve.