E346 -

Trevor and Aaron are joined by Cato’s Matthew Feeney and John Samples (a new member of the Facebook Oversight Board), to talk about the pros and cons of content moderation.

Hosts
Aaron Ross Powell
Director and Editor
Trevor Burrus
Research Fellow, Constitutional Studies
Guests

Matthew Feeney is the director of Cato's Project on Emerging Technologies, where he works on issues concerning the intersection of new technologies and civil liberties. Before coming to Cato, Matthew worked at Reason magazine as assistant editor of Reason.com. He has also worked at The American Conservative, the Liberal Democrats, and the Institute of Economic Affairs. Matthew is a dual British/American citizen and received both his B.A. and M.A. in philosophy from the University of Reading in England.

John Samples directs Cato’s Center for Representative Government, which studies campaign finance regulation, delegation of legislative authority, term limits, and the political culture of limited government and the civic virtues necessary for liberty. He is an adjunct professor at Johns Hopkins University. Samples is the author of The Struggle to Limit Government: A Modern Political History and The Fallacy of Campaign Finance Reform. Prior to joining Cato, Samples served eight years as director of Georgetown University Press, and before that, as vice president of the Twentieth Century Fund. He has published scholarly articles in Society, History of Political Thought, and Telos. Samples has also been featured in mainstream publications like USA Today, the New York Times, and the Los Angeles Times. He has appeared on NPR, Fox News Channel, and MSNBC. Samples received his Ph.D. in political science from Rutgers University.

Matthew Feeney and John Samples join the show today to talk about how private companies moderate their vast social networks. Recently, Facebook announced its new Oversight Board, and the Cato Institute's very own John Samples is one of its members. The Board will make final and binding decisions on whether specific content should be allowed on or removed from Facebook and Instagram.

Are big tech companies censoring conservative viewpoints? How should we talk about conservative bias? Can governments censor private companies? Does Facebook have to be transparent about what content they moderate?

Further Reading:

What Made the Internet Possible?, Building Tomorrow Podcast

Free Speech Online: Unfriended, Free Thoughts Podcast

Social Media’s Moral Panic (with Milton Mueller), Free Thoughts Podcast

Transcript

[music]

00:07 Trevor Burrus: Welcome to Free Thoughts. I’m Trevor Burrus.

00:09 Aaron Ross Powell: And I’m Aaron Powell.

00:10 Trevor Burrus: Joining us today are our colleagues Matthew Feeney and John Samples. Matthew is the Director of Cato’s Project on Emerging Technologies, and John is Vice President at Cato and Director of the Center for Representative Government. He was recently named to the Oversight Board for Facebook and Instagram, something we’ll get to later in the show. But I’d like to start with Matthew, and John, of course, you can chime in. Why are big tech companies censoring conservative viewpoints?

[chuckle]

00:36 Matthew Feeney: Well, I suppose it depends who you ask. I would say, as I've written on the Cato blog and elsewhere, that I'm actually somewhat skeptical of a lot of the claims you hear from conservative activists that their accounts or their views and their content are being systematically taken down by so-called big tech. When people say big tech, they usually mean these California-based companies such as Twitter, Facebook, Google, and YouTube. And despite the fact that I find a lot of these claims unconvincing, I nonetheless think it's interesting that these claims are so pronounced these days. Many of the listeners will be familiar with claims made by PragerU, run by the conservative commentator Dennis Prager. But we shouldn't ignore the fact that these complaints have been leveled from Capitol Hill and the White House. The President has certainly mentioned that he's upset about how Silicon Valley is treating conservatives, and there are also prominent senators such as Ted Cruz and Josh Hawley who are concerned about this alleged bias. And all of this, I think, raises questions about what the role of government is when it comes to regulating the internet and internet speech.

01:55 Matthew Feeney: And my claim has been for a while that even if I were convinced that these companies all hate conservatives and are taking down their content, that behavior nonetheless doesn't call for increased regulation and oversight, despite what some conservatives these days seem to think.

02:14 John Samples: So I would add to what Matthew said, but first of all, the Oversight Board role actually has some requirements to it, one of which is that I say now that anything I say in the next hour or so is my own opinion; it does not reflect the opinion of the Oversight Board. I don't even know what it would mean at this point for the Oversight Board to have opinions, 'cause it's just starting up, but certainly I have opinions about stuff, and I don't think everyone else on the board shares them. So there's a number of ways to talk about conservative bias, and I agree with what Matthew said. One thing I keep coming back to is that part of it's anecdotes and experience: somebody gets something taken down. Sometimes it's an error and it's put back up immediately. There's a sense of being persecuted, and that sort of thing, unfortunately, has for some time in our politics been a highly effective kind of political claim.

03:26 John Samples: However, I have my own experiences too. I use Facebook, and in the last couple of weeks, for example, one of my friends from an earlier stage in my life referred to the Governor of Kentucky as the Antichrist. Now, it seems to me that if you're going to censor something in a religious area of the country, referring to the Governor himself as the Antichrist would be something that would get hit, and yet it showed up in my newsfeed. The other thing is that I read a great deal of material in my newsfeed about COVID-19, about studies or reflections on the various studies about it. A lot of it's quite skeptical, quite skeptical about the original study and the results it had for policy. And again, I don't even get warnings about that, and it's certainly not taken down, because the people who put it up would let everyone know.

04:29 John Samples: So if there is censorship, there's also a great deal that gets through it. But let's pause for a moment on the idea of censorship. I think that, in terms of suppression, only governments can censor, and it's a good distinction to make. Facebook, as Matthew mentioned, can take things down because it's a private company, just as Cato's not required to bring anyone into the Hayek Auditorium; we can choose who we want to be there when it's open. So there's all of that. There is certainly a reality that Facebook can do what it wants to, but I think that for business reasons they also want to have as many people on the platform as possible, and they have to have content moderation to achieve that goal. So they also want the content moderation to be legitimate.

05:30 John Samples: Now, one way to do that is to publish your rules, which the community standards are, and then have them applied in a consistent, neutral fashion, and that's what conservatives have denied. I think they believe that when they get taken down, they haven't necessarily violated a rule, but that there's in fact a lot of discretion involved and that essentially the people here in Menlo Park or nearby are all a bunch of liberals who don't like the speech and take it down. So the Oversight Board is an attempt to provide a bit of process, so that the community standards perhaps will be interpreted more consistently, the process itself will be more transparent, and appeals will be open to anyone who feels that their speech has been taken down wrongly.

06:26 John Samples: But there's this idea that someone's going to reach into the process, not just at Facebook but at Google and elsewhere, and manipulate it for political reasons. The last thing I would say about this for a while is that people out here aren't terribly political; they seem to me to be concerned mostly with business and with building stuff. The term "standing up" is ubiquitous here, which usually means standing up a business. So I think it's vital not to think about what goes on here in the same way as what goes on in DC, where everything is much more political and people think about political tactics and so on in the work they do. The critics are correct that this is a very liberal place, Mark Zuckerberg himself has said that, but politics is not such a dominant activity for everyone in everyday life.

07:34 Aaron Ross Powell: As we talk about conservatives saying that they have been censored online, it makes me think of a joke that went around Twitter. I just pulled up the tweet, and it goes: the conservative says, "I have been censored for my conservative views." "You were censored for wanting lower taxes?" "No, not those views." "So, deregulation?" "Ha, no, not those views either." "So, which views exactly?" "Oh, you know the ones." And that seems to fit almost every instance where there's been a high-profile "conservative," and I say that in quotes, talking about how they were censored on Facebook or Twitter. When you peel it back, in a lot of cases they were saying racist or homophobic or sexist stuff, or they were doxing people, stuff that's not conservatism; it's more like borderline hate speech. So is that fair, in the sense of an accurate characterization of a lot of what gets pulled off of platforms or gets people kicked off of platforms? And if it is, is it just the case that among people who call themselves conservatives, that kind of speech tends to be more common than among people on the left? So this is less about censoring political views and more about simply saying, "Look, you can't be racist on our platforms."

08:54 Matthew Feeney: I think it's fair to say that a number of the most high-profile takedowns from the most prominent platforms have included people who were not taken down because they were arguing for increased localism, lower taxes, and less regulation. A number of people come to mind, such as Milo Yiannopoulos or Alex Jones. And when people hear these kinds of voices, they don't think of libertarian economics; they think about derogatory things said about minorities, as well as conspiracy theories. Now, Aaron has made an interesting point, which is that you can look at that and think it's saying something kind of interesting about modern conservatism: that modern conservatism is so broad that it includes people who make these kinds of pronouncements and claims. However, something I do want to stress is that the left has its own version of complaints about these companies too.

09:57 Matthew Feeney: When I talk about this to Cato interns and other students, I cite a letter that the World Socialist Web Site sent to Google saying that Google was systematically taking down left-wing and socialist views. There have been complaints about Facebook and its treatment of Black Lives Matter and other groups. And I think that reveals something kind of interesting, which is that it's easy to spot perceived bias or discrimination when you're on the receiving end of it. But, importantly, I think we should stress that these companies are put between a rock and a hard place here: the more they try to explain how they go about making these decisions, the more opportunities their critics have to criticize them, but of course the less they talk about this, the more they're accused of not being transparent. And, Aaron mentioned hate speech, but even rules that you would think everyone would agree to can raise difficult issues.

11:00 Matthew Feeney: So one brief example, and then I know John will want to jump in. If we were all sitting together and trying to come up with a new website that was going to compete with Facebook, we might say, "Well, we want it to be family-friendly. We want as many visitors as possible, but we don't want pornography or beheading videos or things like that. So let's come up with a guide for content." We might say, "Well, let's ban images of nude children." And I think most people would intuitively think, "Yeah, that sounds like a good policy for some kind of Facebook competitor." So you implement the policy, but then I'm sure listeners will be familiar with that photograph from the Vietnam War, taken in the wake of a napalm attack, of the children running toward the photographer, one of them naked. I think it's fair to say it's one of the most famous images of the 20th century. And this isn't a hypothetical: Facebook was tasked with dealing with this issue, and initially, I believe, the photo was taken down. So more than anything, I try to stress to people that content moderation is hard, that even good-sounding rules can result in gray areas, and that ultimately, decisions have to be made.

12:03 John Samples: The other thing that should be added to this, particularly in the United States, is that even though it's not correct according to the Supreme Court, much in our political culture starts out from a First Amendment point of view: if speech is protected under the First Amendment, then it should be protected on these platforms. And that's just not true. Now, it may well be the case, as I said, that for business reasons and other reasons the platforms may wish to have very broad protections for speech, but it's not going to be the same as what the Supreme Court has supported more generally. I will say that I don't know if there's been bias against conservatives, I don't know if there's been bias against liberals, and I don't know if there's been bias against people on the left.

13:00 John Samples: I mean, I have opinions about that, but what I would note is that this board, and I presume other efforts at other places, are an attempt to provide some kind of process, so that if there is a takedown that's wrong, or a takedown that runs against the community standards, or, as Matthew was mentioning, a tough decision that has to be made, then it's not just going to be made by Facebook staff. It's going to be made publicly, by people who, as you look at the board members, come from a wide variety of backgrounds. And so there is a real appeal to that, and I would hope that people who are persuadable will look at that and say, maybe this process can work, in particular because I think the alternative is for the US Congress or US regulatory agencies to become involved, and European agencies already are involved. And that's a libertarian insight that I think and hope is widely shared: government involvement in freedom of expression is just not going to end well.

14:34 Trevor Burrus: In general, though, it's interesting, because going back to some of your original points, John, you said a few minutes ago that sure, they might be biased, they might not be biased. But given the size and effect that these companies have on our discourse, are conservatives, and I guess, as we said, some people on the left, correct to be concerned about that power? Basically, if Google decides to de-rank your business, they can destroy your business, and if they decide to de-rank your page, with its effect on political speech, they can drastically affect the American political conversation. And I think conservatives might be thinking, as you pointed out, and as we said in the episode that went up last week with our colleague Paul Matzko on right-wing radio in the '60s, that conservatism is sort of a persecution movement, and they definitely feel that Silicon Valley is not on their side. So even though it's not as political as you said, John, which I completely buy, we do live in the age of Trump, where there are definitely people in these organizations who probably view Trump as essentially Hitler and his re-election as an apocalyptic event. So is it really beyond imagination that, say, Google or Twitter or Facebook would just suddenly de-rank pro-Trump speech in an effort to do what they perceive as saving the country?

15:58 John Samples: No, it's not unimaginable. I guess the way I would put it is: do people across the political spectrum have reasonable concerns? Some concerns are unreasonable, maybe paranoid, but taking the whole situation, do they have reasonable concerns? The answer is yes. Consider just one element of all of this. If you look at Elizabeth Warren or Donald Trump or several other presidential candidates, they were sharp critics of the tech companies, not so much on content moderation, though some of that, but on antitrust issues and so on. Now, these people are making these cases, they're running for office, and it then becomes plausible to think that one of these tech companies, with its interests so deeply involved down the line, would use its power over the platform to intervene, to make it harder for those criticisms to be heard.

17:06 John Samples: Now, on the other hand, look at the Facebook advertising policy for candidates for office, which is very controversial; people wanted these ads to be cleansed of all falsehoods and so on. Facebook decided to follow the federal broadcasting standard: just as TV executives have no right to "censor" (the term is used in broadcasting a lot) ads from Donald Trump or anybody else running for president, and have to show them, that's the Facebook policy too. So it strikes me that companies can do things to try to build confidence that these kinds of potential problems actually are mitigated, for reasonable people, for people who are persuadable. And I think the Oversight Board is like that. Or look at Reddit: what you have there is a decentralized system, and that helps with these kinds of issues. There are centralized standards and centralized powers for the company, but they are rarely used, and it would be very hard for the company to actually act on its economic interest in the content moderation area.

18:34 John Samples: They'd have to get rid of a bunch of content moderators, or it would be fairly clear what they were doing. So yes, there are potential issues. These kinds of issues run throughout human life, in a way. But the other side of this is that people call for political accountability as a response to those issues, and my question, and I think I'm talking to a favorable audience here, is: what are you going to get if politics dominates this, if government acts? It'll be accountable in a certain way, but I don't think it's necessarily going to be accountable to everyone, to the general user of Facebook, Google, or Reddit. So that's where we are. It's not a perfect situation, but we're beginning to see ways of responding to these kinds of potential problems.

19:32 Aaron Ross Powell: As we’ve said, it sounds like lots and lots of people across the ideological spectrum, and lots of people in Washington have grievances, have aired many grievances about these tech companies and platforms and censorship and imagined censorship and so on, and the thing that they all seem to point to is Section 230, which is to blame for everything wrong with digital speech. So what is Section 230 and should we get rid of it?

20:02 Matthew Feeney: I'm happy to take this, but I'm sure John will be able to fill in the gaps that I miss here. So yes, as Aaron pointed out, the piece of legislation here that is often cited is Section 230 of the Communications Decency Act, which was passed by Congress in 1996. It's widely cited but often misunderstood, and I think the best way to understand it is to understand what came before it and why Representative Chris Cox and his colleague Representative Ron Wyden felt the need to write it. As the internet was developing, and some listeners may be old enough to remember some of these names, there were internet service providers like CompuServe, Prodigy, and AOL. And as they were developing, there were numerous fora, bulletins, and newsletters; you could subscribe to a bunch of these as well as take part by offering content.

21:00 Matthew Feeney: And I suppose it was only a matter of time before there were allegations of defamatory content and who was responsible for that. So in 1991, a federal judge considered a case called Cubby v. CompuServe where there was an allegation made that defamatory content had been posted on one of these newsletters, I believe it was, run by the internet service provider CompuServe. And there the judge said, “Well, these internet service providers are basically the digital equivalents of news vendors, they’re kind of like newsstands.” And because they don’t engage in much content moderation at all, you can’t hold the news vendor or the bookstore liable; similarly here, you can’t hold CompuServe liable for third‐​party content.

21:48 Matthew Feeney: A few years later, though, a New York Supreme Court case comes to a different outcome. This was a case involving Stratton Oakmont, which is of Wolf of Wall Street fame, and Prodigy. And here, because Prodigy actually did engage in some content moderation, the judge in that case said, "Well, here, when it comes to defamatory content, Prodigy is actually the publisher of that content and can be held liable." These two cases gave rise to what's called the moderator's dilemma: people in this burgeoning internet industry had to consider, well, do we take a hands-off approach and be considered some kind of distributor, or do we engage in content moderation and then face potential liability for content posted by third parties? Neither one of these is a great option, for what I hope are obvious reasons. Firstly, as we've discussed before, internet sites don't want to adopt a free-for-all, anything-the-First-Amendment-allows approach; they do want to moderate things like pornography and images of violence, content that's protected by the First Amendment but nonetheless might not be family-friendly.

23:08 Matthew Feeney: But secondly, of course, the Prodigy approach, the publisher approach, isn't great, because it means that to avoid liability, companies are going to have to spend many, many resources screening every piece of content to make sure that they can't be sued. So what rose out of this is Section 230 of the Communications Decency Act, and it provides what's called a sword and a shield, and I'll briefly mention both. The first is a liability protection, Section 230(c)(1) for those keeping along, which is basically an explicit rejection of the Prodigy case, saying that no provider or user of an interactive computer service shall be treated as the publisher or speaker of third-party content. But an important part of the law is also the protection for content moderation, saying that no provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to content, for whatever reason you want, even if that content is First Amendment protected.

24:14 Matthew Feeney: So the sword is this Section 230 permission to go ahead and moderate content as much as you want, and the shield is the liability protection saying that companies such as YouTube and Facebook aren't considered publishers of third-party content. I want to stress before wrapping up, though, that this is not just a piece of law for big tech, as it's called. It applies, of course, to companies that are household names, but anyone who runs a little blog with a comments section also enjoys Section 230 protection.

24:52 John Samples: So, what I would add to that… First of all, that was an excellent account. I always get those cases messed up; I can't remember who's involved in which, so Matthew did a very good job on that. The thing I would add is the part about the sword: Section 230 empowered the companies to deal with a wide range of speech. And what I think you see here over time, and it took a while, is these companies trying to develop legitimate institutions for carrying that out. They certainly have the power to do so, but in time, particularly at Facebook, though others will face this issue, they wanted the wielding of the sword to have a certain amount of legitimacy. And the great thing I hope to see with that is a kind of Hayekian process of trial and error, trying different kinds of institutions, not just at Facebook but at other places also.

25:57 John Samples: So we can learn something about how to do this process, which is actually somewhat new, I think, in the history. So the great threat to that, I would say, and it’ll be no surprise to the listeners of this podcast, I don’t want to see government regulation coming in and having a kind of effect of creating only one model, even if it’s a Facebook model, even if it’s the Oversight Board, I don’t want to see one model imposed because people are negotiating with regulators. And I think that is something we really need to keep in mind in part because the people that usually worry about regulation are maybe not going to be represented in policy discussions about these kinds of issues.

26:48 Trevor Burrus: There's a book by Jeff Kosseff called The Twenty-Six Words That Created the Internet, which is about Section 230, and it's an interesting title because it's a very strong statement. What does the internet look like if Section 230 had never been created?

27:04 Matthew Feeney: Yeah, it's a good question, and I think Jeff's book might be the victim of this publisher tendency to try and come up with catchy but maybe inaccurate titles. Here, though, I think the title's fairly accurate. The 26 words that Trevor alludes to in the title are basically Section 230(c)(1). And what the internet looks like without it is an internet where social media as we know it doesn't exist, in part because there aren't enough resources to screen every single piece of content that is uploaded to these websites. I don't have the exact citation in front of me, but when I last looked at this, I think it was accurate to say that something like 400 hours of footage are uploaded to YouTube every minute.

28:05 Matthew Feeney: Even with billions and billions of dollars, it's just not feasible to curate that content before it goes live. I want to stress that there are some scholars out there, Jennifer Huddleston, who, with her colleague at the time, Brent Skorup, at Mercatus, did write a paper arguing that something like Section 230 might have emerged even absent its being passed, and that's an interesting hypothesis, but I think it can be criticized. But if you want to think of a world today without Section 230, just imagine a world in which an internet company, or anyone who hosts content on the internet, could potentially be held liable for content posted by third parties, and it becomes…

28:55 Trevor Burrus: So Reddit would not exist, basically.

28:57 Matthew Feeney: Yeah, so there's no YouTube, there's no Twitter, there's no Facebook. Look, you can have criticisms of all of these companies, I'm sure, and Lord knows that I have my concerns with some of these companies too, but I would say that what we've experienced in the last couple of decades with the emergence of the internet, and especially the ability of people all over the world to access it, is a revolution in speech that hasn't been paralleled since the invention of the movable-type printing press. It's an incredibly liberating force, and we should accept that these companies will sometimes make mistakes with their own content moderation rules. Section 230 may upset some people, but I would argue that it's far better than any alternative I see.

29:45 John Samples: So I would add to the point Matthew made. We talked about bias, we talked about suppressing speech, but what needs to be kept in mind by everybody is that, whatever happens, I think we'll have a legitimate process. In the big picture, the amount of speech that is actually controversial in terms of community standards, as the work of Josh Tucker at NYU has shown, is really immensely small. So even if the nightmares are right, it's going to be a relatively small amount of speech that's affected, and we would hope it will be legitimately taken down because it violates the community standards. Gatekeeping is not going to return in any strong way; it's going to be very much at the margins of these platforms. Again, I come back to my own experience: my friend from childhood is on there, talking about churches being closed in Kentucky. He doesn't like this because of the COVID regulations, and the Governor is the Antichrist.

31:01 John Samples: I mean, he's getting his say, right? And we're still going to have a fantastic… One of the reasons I'm engaged with this Oversight Board is that we get a lot of benefits from all this. This is new and great. And we've got to deal with the costs, or you're going to get a lot of government regulation that I think will really take away a lot of the benefits. I just don't trust that kind of process for dealing with freedom of expression.

31:29 Trevor Burrus: I want to ask about something else, just to clarify again, 'cause if you read Matthew's work in particular and follow him on Twitter, Matthew loves the myths about Section 230. They're his favorite thing on the planet. Probably one of the biggest is the subsidy myth: Section 230 is called a subsidy by people like Josh Hawley and, I think, Ted Cruz. What is that argument, and is it correct?

31:53 Matthew Feeney: Well, thank you for making me sound like I’m really fun at parties, Trevor. Yeah…

[laughter]

31:57 Trevor Burrus: He is fun at parties, I can attest to that.

32:00 Matthew Feeney: I do have real hobbies, I promise. Yes, so anyone who's been paying attention to this will know that there are a few persistent myths about Section 230. One is that there's this big difference between platforms and publishers, but the one that Trevor cited is, I would argue, probably the most clever, and I wouldn't be surprised if whoever came up with it enjoyed some salary bump. There is an argument that you'll hear, especially among those on the right but sometimes on the left, that Section 230 is the equivalent of a government handout or a subsidy. The argument goes something like this: absent Section 230, these companies would have to spend millions, if not billions, of dollars on litigation, and because Section 230 protects them from that, we should consider it a big, tech-industry-friendly government subsidy. I think that's a clever turn of phrase, but it's mistaken, for the obvious reason that you can look up subsidies in bills and budgets, where they are outlined item by item: who's getting what.

33:13 Matthew Feeney: But secondly, and this is one of the reasons I talk about this, it’s not that absent Section 230 Twitter and Facebook would have to spend a lot of money on court cases; it’s that they wouldn’t exist. And I think you can actually make the argument, which others have made, that Section 230 is, if you think about it, a rather conservative piece of law in the sense that it just says, “Look, you’re responsible for what you post.” It doesn’t get rid of libel law, and it doesn’t get rid of any of the content people are worried about.

33:55 Matthew Feeney: Look, if you post illegal content or if you defame someone, you certainly can get in trouble for that. Section 230 doesn’t change that; you are responsible, and I think that’s probably the right approach to it. But nonetheless, you’ll still see many people making this kind of argument.

34:11 John Samples: So, I’d like to make a point that follows from that a bit. It’s about Section 230, and for a lib​er​tar​i​an​ism​.org podcast you gotta say something positive and hopeful for libertarians. Here’s the point I would like to make: think about Section 230 and compare it to broadcasting law. Broadcasting law comes around, and it’s about radio, in the 1920s. It’s a collectivist age; the government’s doing all sorts of great things in the ’20s and ’30s and thereafter. So it was very possible in the 1920s for the government to say, “Look, we own the airwaves, and these airwaves have to be regulated to make them work in the public interest,” and so you end up with the Fairness Doctrine and various other things, which produce heavy regulation of broadcasting and all sorts of problems that go with that.

35:03 John Samples: 1995 is an interesting moment, because in my view the 1980 to 2000 period is an era of liberal reform in which both parties came to the view that, “You know what, maybe we want the economy to work better, and if it’s left to itself more, maybe it will.” And so instead of the government claiming to own the internet, or claiming heavy regulatory authority over it as it did in the 1920s, what you get is Section 230, and Section 230 says, “Well, we want the internet to really develop, and we want to take away some of those claims. We want a nice framework so that people can create these companies and do what they want on there, within the normal responsibilities of defamation and so on,” that Matthew mentions.

35:51 John Samples: My point here is that 230 is an indication of what can actually happen when you get some loosening up, when you have an era of libertarian or liberal reform, which we may well have again, and it’s crucial. We’re having a debate now, but we’re not starting from the 1920s. It’s a different debate than starting with government ownership of media and then trying to deal with regulations at the margin. That’s an impossible thing, and you end up with the Supreme Court saying, “Yeah, you can have broadcasting licenses. Yeah, you can have speech licenses.”

36:28 John Samples: The starting point, I would say, is there because libertarians and others who saw the value in free markets in the 1980s had a powerful effect on everyone, even before you get 230. So keep your chin up; there’ll be a good time again, and Section 230 and the internet you have before you show the real effect of arguing for liberty, I think.

36:52 Aaron Ross Powell: We touched a bit on one of the concerns, but I want to go back to it for a second to tee up this question. It’s one thing if some small zine decides they don’t want to publish your article because it doesn’t align with their views, or even if a major newspaper won’t run your op‐​ed because it doesn’t fit the take of their editorial board. But Google controls almost all internet search. And if you want to distribute videos on the internet, unless you can somehow get onto Netflix, YouTube is basically it. If you can’t be on YouTube, your video is not going to get much distribution.

37:33 Aaron Ross Powell: Social media, which is the dominant way the world communicates with each other now, is controlled by Twitter or the various platforms that Facebook owns: Facebook, Instagram, WhatsApp and so on. And so if these companies decide, as we touched on earlier, to pull the plug on you, there are alternatives, but in many cases they’re not meaningful alternatives. They’re alternatives that are quite bad compared to the big platforms as far as the reach you can actually have. And so it does seem to be the case that getting government involved in setting their internal content moderation policies is a recipe for disaster. It’s a recipe for politicization and for people using the law and these regs and rules to punish political enemies and shut down political speech they’re not a fan of. So it’d be every bit as bad as what people imagine the platforms are already doing now.

38:33 Aaron Ross Powell: But another way that people are increasingly approaching it is to say, “Okay, maybe we don’t want to tell Facebook how to manage what content’s on its platform but Facebook doesn’t need to be as dominant as it is. And so, if we institute antitrust proceedings against them or we force these platforms to essentially be smaller and create more diversity, then we get back to a world that looks more like the op‐​eds in the newspapers where you can find another platform that might not be as good but is at least close to as good to put your speech on.” Is that a better way to approach these sorts of worries, to just say like, “Maybe these platforms have gotten too big”?

39:13 Matthew Feeney: I wouldn’t take that approach. And you could probably dedicate a whole podcast to talk about the antitrust issues, but first, I would just say, I find a lot of the antitrust concerns just slightly misguided, because they seem to mistake market dominance for monopoly. And I think Aaron’s right to say, look, you’d have to be naive to not know that Facebook is the social media giant, but it’s not the only one. And there were a lot of people who were upset that certain views were being stifled online, so they set up their own.

39:53 Matthew Feeney: You can go to a white supremacist social media site if you want; they’re out there. Dare I say, you could actually Google these companies and find ways to get there. BitChute is a similar competitor to YouTube. So firstly, I don’t think we should confuse market dominance with monopoly. This is like saying we should bring antitrust action against Starbucks because it’s the most dominant coffee chain in America. More importantly, I think these concerns are fundamentally confused, because Facebook is competing with Google; these are competitors, this is a market, and you have to think about how these companies are making their money. But if you’re worried about content moderation, let’s accept for the sake of argument that, yes, they’re monopolies and there should be antitrust.

40:41 Matthew Feeney: It takes certain resources to do content moderation at scale, and we should anticipate more mistakes if these companies don’t have the same resources they do now. Just one example: many listeners will be familiar with the Christchurch, New Zealand shooting, where a gunman committed a mass shooting in some mosques and live streamed the atrocity with a helmet camera. Social media companies all over the world were rushing to try and take this content down, and of course there were false positives in that. But I believe it was someone at YouTube who said they were just willing to embrace false positives to get this video down.

41:28 Matthew Feeney: Now, look, maybe AI will get cheaper and cheaper and better and better. But at the moment dealing with content like that is something that requires resources. And if you break these companies up I think you’ll just make a lot of the content moderation debate worse, because companies will struggle to be as good at it as they are now.

41:48 John Samples: So again, I’ll go in a slightly different direction from Matthew’s good comments there. I wonder to what extent, at this point, companies can actually take things down. It’s an open question, right? I’ll give you an example. The other night I was reading, as I do occasionally, President Trump’s Twitter feed, and he referred and linked to a video that I followed, and the video had been taken down. So naturally I turned to Matthew and said, “Did you download it, or did you screenshot it before it was taken down?” Instead of sending me that, Matthew sent me another link on the same platform where the video had been put back up. So there’s that issue. The broader issue, which is complicated, I think, is the question of whether anything can actually be taken off the internet. Things show up somewhere else.

42:51 John Samples: Now, the cost there, I think, or the cost to the speaker, or maybe it’s a benefit for the public, is that it’ll show up at a site that contains generally stigmatized speech, and there’s often a reason speech is stigmatized. What would worry me, though, is the false positives, ‘cause I know of one case at the beginning of the pandemic that was up on Twitter, I guess, and was taken down and ended up on a site that was somewhat plausible, somewhat legitimate, but that also hosts conspiracy theories. In moving it from Twitter to this site, a real stigma attached to the piece, and I think a lot of people thought, yeah, this is probably incorrect or inexpert or whatever, but it’s not speech that should be suppressed. So my question here is, why isn’t there an intermediary that can make money, a website, easy entry, an intermediary that takes these kinds of false positives and keeps them up, attracts attention, and pays for it with advertising? Is it that there’s not enough non‐​stigmatized, or not properly stigmatized, material to support the site, or will there be in the future, and so on?

44:16 John Samples: I just wonder, like the libertarians we are: when people are left to their own devices, will they come up with answers that deal with this false positive problem, this improper stigmatization of speech?

44:31 Matthew Feeney: Yeah, a thought occurred to me while John was speaking, as sometimes happens. And I think this is an interesting question for liberals to consider. And we at Cato are, of course, part of the liberal family. I think we should view this debate as part of the larger debate about the best way to deal with speech we don’t like. And as classical liberals, I think we are loath to embrace government. But if Trevor and I were walking down the street, which I hope will happen before too long, and we saw someone wearing an SS uniform handing out copies of Mein Kampf on the sidewalk, there are a couple of options available to us. We could just ignore him and keep on going. We could stop to debate the Nazi, we could try to shame the Nazi, we could take photos and send them around to try to make it dangerous or unpleasant for the Nazi to speak out in public. But what we couldn’t do is call the cops. As long as this Nazi is standing on a public sidewalk, he’s not engaged in anything illegal.

45:47 Trevor Burrus: Not in America at least, I think you could in Germany.

45:48 Matthew Feeney: Not in the United States, yes. This would not be happening for very long in parts of Europe, that’s for sure. Now, if you were an atheist, say, and you walked into a Christian bookstore with copies of The God Delusion or whatever and started harassing customers, there would be absolutely no legal issue whatsoever with the shop saying, “Look, get out of here.” And we have to mention that a lot of people, I think mistakenly, think of Facebook like the sidewalk, not the Christian bookstore. That’s a problem we need to address and be better at clarifying. But nonetheless, you’ll see some people saying, “Yeah, but maybe people should be more tolerant of views they don’t like on private platforms.” And look, what I wish people would appreciate is that people are going to come to different decisions about that. Some people are more tolerant of people who disagree with them than others. And I think John’s right that there does seem to be an opening, perhaps, for some kind of intermediary, although the inner economist in me thinks that if there were such an opportunity and demand, it would have happened already. So there’s probably a reason why not, but that’s something to consider.

47:04 Trevor Burrus: John, I have a question mostly for you, though Matthew might want to weigh in, given that we’ve worked on campaign finance issues for a while, and you for much longer than me. You wrote an excellent book on campaign finance, and now you’re working on this content moderation stuff. In general, over your career looking at political speech and how people’s viewpoints on it fit together, how does the current debate regarding internet and social media platforms relate philosophically to the campaign finance debate and the work you’ve done on that?

47:40 John Samples: So the crucial distinction here is the fundamental one we’ve discussed: campaign finance always involved increasingly intricate regulations of spending money on speech, which have the tendency to suppress speech, or actually to censor it. This debate involves private companies, and that’s the gate I think of as mattering; once it’s private, things change. I will say there are some cultural matters, I would put it, that in my career up to now have concerned me, and I will mention two of them from, as Matthew properly says, a liberal perspective. One is that there’s not a lot of self‐​consciousness about talking about falsehoods in this regard. You would see that in campaign finance: people seemed to be concerned about money, but when they started talking, you realized they were outraged because somebody had said something false, which often was just something controversial that disagreed with their views. In this context, because there are no First Amendment protections in the private realm, people feel free to talk a great deal about falsehoods.

49:00 John Samples: Now, I think the actual distinction here, for somebody like Mark Zuckerberg, is the difference between hoaxes and outlandish conspiracy theories on the one hand, and other political speech that’s merely controversial on the other. So when he, or the company, or other companies talk about fake news or whatever, they think they’re talking about conspiracies, things that are really out there. But I think the conversation drifts, and you can’t really tell what’s being talked about: everything that’s false, or that someone deems false, all of the disinformation and misinformation. Too often, in my experience at least, what I find a little troubling is that people talk about these matters in ways that don’t recognize that there’s a speech element here.

50:00 John Samples: We’re talking about something that doesn’t have to be protected in every instance, but it’s something very valuable. It’s not that the companies or anyone else is supposed to go around getting rid of false stuff. I would like to see more self‐​awareness from everyone: I might be wrong; the stuff I think are lies could be not lies, or at least could be more controversial. So that’s part of it. The second thing is the kind of bleeding and slippery slopes involved in speech here. Again, I’m talking about my own opinion and not the Oversight Board or Facebook, but in general, I would like to see more awareness of the dangers of legislating, or having community standards or whatever, against hate speech. Certainly the companies are well within their powers to drive it off the platform, but then the question becomes, where are the borders, what are the lines to be drawn, and there should be a bit more awareness that these lines can be blurry, that we can be on a slippery slope.

51:15 John Samples: And then people aren’t thinking five and 10 years down the line: what are these standards going to look like, perhaps, down the line, when we end up taking down speech that people right now would say, “Well, I don’t like that speech, but I think it should be heard”? And I should be clear about this. I’m not saying that the American Supreme Court standard is the right one here at all. I’m just saying that the thing you so often saw in campaign finance was people saying “money is not speech,” right? There was no awareness that speech was even implicated. I think because of the situation after 2016, because people became very concerned about democracy and authoritarianism and a bunch of issues, you were in a situation where people were looking at the things that were making them concerned and they weren’t drawing balances.

52:16 John Samples: And I think, as we go forward, not just at Facebook but anywhere, we need to try to keep those balances and those trade‐​offs in mind. Government regulation, or regulation in general, tends to slide as time goes on. And just as you might guess, I’m worried that at the end of the day nobody will be particularly evil or particularly malicious, and we’ll end up regulating speech that should be heard.

52:51 Aaron Ross Powell: In our last bit of time left, John, I wanted to ask specifically about the Facebook Oversight Board. Maybe you could tell us briefly what that board is intended to do, what you specifically as a member of the board will be doing, and then what the view of all this that we’ve talked about for the last 50 minutes looks like in practice, the philosophy you’ll use when you’re making decisions as part of this board.

53:29 John Samples: So, let me personalize this between me and you. You’re on Facebook. Let’s say, and this would never happen, actually, by the way, seriously, but let’s say you put something up, and you turn around and it gets taken down, and you’re informed that it’s been taken down because it violates the community standards. Well, you also know that in a few months you’ll have a right to appeal that, and the appeal doesn’t go to Facebook staff but to this new Oversight Board. So you do that: you exhaust your appeals with Facebook, because the first stopping point is them, but they refuse your appeal. And you say, that’s not right. So you go to the Oversight Board with your appeal, ‘cause you want to make the case that whatever you posted didn’t violate the community standards.

54:30 John Samples: So you end up interacting with the staff of the Oversight Board. They ask you to write a small statement about why you think the appeal is justified and your material should be put back up. They add some material, they get some material from Facebook, they look at what you’ve done and what the case is, and if the case is significant and difficult, they focus on it and send it to the Oversight Board itself, which has four co‐​chairs. And ultimately, if the case is going to be heard, a panel of five members is chosen at random, one of whom would be, in your case, from North America. So that might be me, let’s just say. Well, actually, in your case, I would never be on your panel, because I know you and would obviously have to recuse myself, but if I didn’t know you, you could get me. And then the decision has to be made by this panel of five members: whether in fact your material violated the community standards or not.

55:51 John Samples: Now, the interesting thing is that the hope going in is that this would be a unanimous decision, that there will be deliberation that produces consensus. There’s a sort of thought that if there are dissents, they would be included in the statement and perhaps answered, I would guess. So that would then become a kind of common law, at least for Facebook’s community standards and how they work. And if you were right, and your material was wrongly taken down, it’ll be put back up, and perhaps similar material, if it can be identified and it is technically possible, will also be put back up, depending on the situation. But certainly Facebook is bound by its agreements with us, and by its agreements in setting up the board, to recognize that these decisions are binding. The only other thing I would say is that the entire board can review every decision, and if a majority wishes to have the decision looked at by another panel, or changes made to make the decision acceptable to a majority, that too can happen.

57:06 John Samples: So you’ve got, the way I would put it when I was reading through the pile, a lot of procedures here, right? This is process; “this is the way to legitimacy” would be the banner above the Oversight Board’s room or its building. So there would be all of that, and you could also lose, and the material would stay down. But it’s an attempt, through process, to get legitimacy, and I would say also, probably, consistency. I think some of the problems people complain about are not so much bias or anything like that; they’re a result of inconsistency, and these kinds of decisions may provide the process that will yield more consistency across time.

[music]

58:00 Aaron Ross Powell: Thank you for listening. If you enjoy Free Thoughts, make sure to rate and review us on Apple Podcasts, or on your favorite podcast app. Free Thoughts is produced by Landry Ayres. If you’d like to learn more about libertarianism, visit us on the web at www​.lib​er​tar​i​an​ism​.org.