
From Brandenburg to Britain: Rethinking Free Speech in the Digital Era with Eric Heinze | Eugene Volokh and Jane Bambauer | Hoover Institution

Episode Transcript

- Hello.

Well, welcome to Free Speech Unmuted.

I'm joined by my co-host Jane Bambauer from the University of Florida.

I'm Eugene Volokh.

I'm a Senior Fellow at the Hoover Institution at Stanford, and a professor of law emeritus at UCLA.

And we're delighted to have a special guest, Eric Heinze, who is a professor of Law and Humanities at the School of Law, Queen Mary University of London.

He specializes in, among other things, comparative free speech law.

And we've been hearing a lot about free speech controversies in England, and perhaps in Great Britain more broadly, just as we've been hearing a lot about free speech controversies in America as well.

But we've talked about those.

We wanted today to talk about the British controversies with someone who actually knows this and who sees it from ground level.

So, Eric, tell us what's been going on by way of free speech law and free speech practice in England today.

And to the extent there's a distinction between, say, England and Scotland and such, tell us about that too.

- Yeah, well, I mean, I guess if it all has to be summarized because in fact there have been very distinctive controversies, I think the common denominator for Britain and for most Western democracies is that all of them are still trying to keep pace with developments in technology.

And in my view, this is something that every democracy is struggling with right now.

No democracy is getting it right, nor will that happen in any foreseeable future.

Every democracy is going to make mistakes with whatever rules it comes up with.

And so, you know, my recent work has been much more interested in, yeah, how do we minimize the margin of error, not eliminate it?

And so in many ways we're in what I would call a kind of post-classical era. Particularly from an American point of view, in the pre-internet era you could literally have conversations, which some people are, charmingly, still having, you know, about getting a kind of ideal regime, right?

And so, you know, I guess a stellar example in the United States would be something like the Brandenburg case of 1969, right?

Where you come up with an almost perfect understanding of what can and cannot be said with, you know, very few borderline cases, right?

And so you agree with it or you disagree with it, but the paradigm, you know, is still clear.

And I think a real problem for experts in free speech is they're still in that world, right?

And, and they, they still think that we can basically get the right rules.

And I'm very suspicious of this.

I just think there are going to be a lot of mistakes.

I think there are some rules that are not yet amenable to a simple formulation.

Moreover, the technology is going to keep evolving often in response to whatever rules are adopted, whether the rules be more favorable to the speaker or more favorable to state regulation.

So that's my, that's my kind of view of, of the universe.

And so I would call Britain just one of many examples of this struggle that's going on.

So, to name a few examples, in my opinion, probably one of the most interesting from a legal perspective is a case that you might've heard about.

I know it had some press in the US and abroad.

It was the case of Lucy Connolly.

She is the wife of a local Tory, a local conservative party official.

So not a nobody, but not really a household name either.

Just really, you know, a local politician. Now, she had a run-in with the law.

You might have heard about riots that took place in the city of Southport in Britain just over a year ago.

Now, what had happened was basically this, there was a gathering of young children, just a kind of, you know, a musical class of some sort.

And three young girls were stabbed, right?

Basically a depraved killer came and stabbed three young girls.

He ended up, you know, getting, as you can imagine, a life sentence.

But in the city of Southport, where this happened, there was also a migrant hotel.

There are several migrant hotels in Britain.

In other words, we have, you know, problems of immigration, as, again, most Western countries do; we don't always know where to put people who are waiting for decisions on their status.

They were put in a migrant hotel, and there was a lot of anti-immigrant sentiment, again, as we see in many countries now.

And so what happened was this: the stabbing took place, and immediately, again, before any of the facts are out, I mean, literally in no time at all, the internet is ablaze with accusations.

Yeah, it's, you know, one of the migrants has come and killed these three girls.

That turned out to be entirely false.

In fact, it was someone who had been born and bred in Britain.

It had nothing to do with the migration system at all.

But again, this was going online.

So what happens to Lucy Connolly? Again, the wife of this local Tory councillor, she had tweeted something to the effect of, you know, I'm fed up with these immigrants, you know, go burn down that hotel, or something to that effect.

Okay?

Moreover, you know, she had a few thousand followers, but again, that morphed into tens, hundreds of thousands very, very quickly.

Now, what I find fascinating about this case: she was arrested, and she got a jail term of 31 months.

Everyone including me thought that that was far too long to be in prison for this.

Fortunately, the hotel was not burned, no migrants were attacked.

And so, and many of us thought that the prison term was too long.

And in fact, she is free now, the prison term was reduced.

Nevertheless, for some people, she became a kind of free speech martyr.

And indeed, many of them were kind of trying out the Brandenburg test, saying, ah, in America this wouldn't have happened.

She would not have been tossed in prison just for her words.

Now, of course, anyone who knows the precise wording of Brandenburg knows it's borderline.

It's not so clear.

It's not so clear.

I think that at least some courts, or let me put it this way, some localities in America might well have prosecuted someone like Lucy Connolly.

I'm not saying that they would've done it.

I'm saying that, strictly speaking, it's by no means clear that such a law, even under Brandenburg, would have been unconstitutional in America.

And yet, nevertheless, again, we tend to be more in a free speech mythology than in a free speech world.

So in the free speech mythology, right, in the United States you can say, quote unquote, whatever you like; one hears this all the time, right?

Whereas, as JD Vance would have it, you know, Europe, and Britain in particular, which he seems to like to single out, is just half a step away from arrant totalitarianism.

And of course, this is all false.

And you know, again, Lucy Connolly, I think is a very good example of a borderline case, right?

I mean, this was incitement to imminent violence; that was its intent or purpose.

And as you know, under Brandenburg, what matters is not the actual harm that results, but the level of risk.

And we're always looking at risk.

And this is why the internet is so hard to regulate, right?

Again, in the pre-internet world, right?

In, you know, the kind of classical world of free speech jurisprudence, harms and risks were much more known quantities; risks, certainly.

And Brandenburg was always a risk test.

It was never a harm test.

And so here in Britain, there's nothing like a Brandenburg standard.

But again, it was very easy to caricature what had happened in Lucy Connolly's case.

In fact, the reason why I like to discuss it is because I don't think there's an obvious or simple solution, particularly if this is a solution that needs to be generalized onto a nation of many millions of people in an area which is rapidly evolving, namely internet technologies.

So the Lucy Connolly case. I can name some other examples, and I can see that you are already onto things going on in Scotland.

Of course, that's another favorite topic of JD Vance.

But if you want, I can stop there and catch my breath before going on to more examples. - Right?

Not stop, pause.

We've got lots more time.

Let me read the text of the tweet, just to lay it out there for the listeners; at least, this is the text as I read it from the sentencing remarks.

In that case: mass deportation now, set fire to all the fucking hotels full of the bastards for all I care.

While you're at it, take the treacherous government and politicians with them. I feel physically sick knowing what these families will now have to endure.

If that makes me racist, so be it.

So it does sound like, at least in America, we'd probably say that there's at least a question as to whether this is a command, burn down this hotel, which would probably be punishable solicitation in America if it specifically identifies the hotel.

This doesn't quite, but you could imagine a command or advocacy instruction, you should burn down the hotels, or whether it is hyperbole or something that's much more kind of, not even conditional, but maybe approval: set fire to all the fucking hotels full of the bastards for all I care.

Usually, I think in American law, if you say, do something for all I care, you're not really saying do it.

You're saying, I don't mind if you do it, it would be justified if you did it.

So do you think that ought to make a difference under some correct legal system, or for that matter under English law?

Or is this just an illustration of what you're talking about with the new technology, which is that people don't parse these instructions that finely, and maybe courts shouldn't say, oh, well, she said for all I care, that must mean this and that, but should be more concerned about the risk that people will take it seriously.

- Yeah, I mean, these are very contextual. You know, in this case, I don't think, for residents of Southport, there would've been any doubt about it being a very specific hotel.

Again, if you want to, you know, compare it to your Brandenburg standard, Brandenburg doesn't require a command, - Right?

- Right.

Only an intent or a purpose, though it does require advocacy.

I think, but I agree with your broader point, which is, yeah, one can go down the rabbit hole of casuistry, as might happen in a courtroom, right?

And then kind of, you know, slice the sausage endlessly as to possible meanings, right?

But again, you know, ultimately we have to figure out what kind of regime we generally want.

Now, for example, you know, again, I considered her case a borderline one.

In other words, you know, in some of my recent writing, I've also considered cases that were far looser in the phrasing.

It wasn't, you know, go get that hotel or, you know, go get these people.

It was just kind of, you know, fucking immigrants or whatever, right?

And that's, yeah, that's just viewpoint.

That's just viewpoint.

Again, it's, you know, rather hideous.

But that, that's the sort of thing that I think cannot be prosecuted.

But then we have another distinction, which I think many people in the free speech community have missed.

So, just to take a step back: in the pre-internet world, you know, as I'm sure both of you know as well as anyone, and certainly in the United States, there was already an enormous literature on free speech before the internet was really even underway, right?

There was already a solid jurisprudence, you know, loads of scholarly writing, loads of public debate.

You know, if you go back to the Skokie case or any number of other examples, the flag burning cases, right?

So this was all rather in place before, you know, before the internet, you know, was really underway.

And so there were already, you know, viewpoints on free speech one way or the other.

And people were already taking pretty firm views, right?

So you had people who were, you know, virulently against things like hate speech bans, and others very much militating for them.

And I think one of the problems is that many people are still stuck in whatever those views were.

In other words, whatever views they had in, I don't know, 1970 or 1980, right?

When the internet came along, the idea is, well, okay, we're just gonna extend that now onto the internet, right?

So if you were pro-ban, okay, it has to be banned on the internet.

If you were anti-ban, okay, free speech on the internet.

And, you know, again, I think more and more we're seeing that that's just too simple.

I think, because again, back in those days, things like harm were much easier to ascertain.

Things like risk were much easier to ascertain. In any given speaking event, like the one that took place in Ohio, or, you know, the burning of the flag, or name whatever examples you like, you could find the speaker, you could find the place, you could find the audience, right?

These things, the essential ingredients of a case, were highly determinate, or if it was a newspaper, what have you, right?

And therefore harms were often rather predictable, at least compared to today's standards.

And so I think what happened was that you had the free speech community, you know, with which I generally associate myself, saying, you know, there's really no evidence of harm simply from hateful expression.

Then you had the pro-ban community saying, look, we only want narrow exceptions.

Both of those are now not entirely right.

Once we move into the internet, on the one hand, we have loads of evidence now about speech being part of lots of people dying around the world, in democracies and outside democracies, right?

We do have evidence of harm.

We, the free speech advocates have to address this.

And I think the ways in which some of them address it are rather paltry, right?

So, you know, that's the price we pay.

Well, okay, how many deaths then?

How many deaths?

Then you have to say, you have to give a number.

Is it a thousand a year, 10,000 a year?

How many children? You have to be precise.

You can't just say that's the price as if it's an abstraction.

We're dealing with human life.

But on the pro ban side, there also has to be a recognition.

We're not talking about narrow exceptions; once we bring in bans and regulations, we're often talking about millions of speech acts that can be and have been censored.

So both sides have to radically rethink their notions of harm and risk.

And this is why this brings me back to people other than Lucy Connolly, you know, the people who are just speaking in more general terms, what I refer to as generalized hate speech; I think we have to distinguish.

Are we talking about regulating the speaker or are we talking about regulating large, wealthy, powerful platforms?

In other words, I think we have to move away from this idea that whatever idea you used to have about speakers, that now applies to the internet.

And so, in my opinion, the internet should be part of a more democratic process of regulation, which, again, is going to make mistakes no matter how it proceeds.

But just because we might need to regulate the internet more, that doesn't necessarily mean that we should be punishing speakers.

And so this is where I point to some other examples in Britain where people, again, were engaging in hate speech in more general language, where I think there would've been a legitimate role for this relationship between two very powerful entities, the state and the multinational platform companies, as opposed to the individual speaker, whom I'm still much more interested in protecting.

- So, happy to have you, Jane; you should jump in whenever you want.

But I wanted to just ask one question about the death point.

'cause I do think this is a very important point, that a lot of times people argue for free speech protection by saying, well, you know, speech isn't really harmful.

Well, if it's effective for good, it could be effective for ill.

And I think, I mean, you're saying, how many deaths should it take to justify restricting speech?

You're saying, I want a number; hundreds, thousands?

I mean, it's easily in the millions, probably tens of millions, right?

It's been said that communist regimes have killed a hundred million people, not in the US, thankfully, but in the world.

Over basically the 20th century.

Obviously the Nazis killed tens of millions as well.

And likewise, religious terrorist attacks, which I think at this point have not reached anywhere near that level.

At least for centuries.

Still, virtually every political crime, every politically motivated crime, stems from speech.

And in fact, this was an argument: yes, of course we should restrict communist advocacy, let's say.

Because, after all, we know that it leads to many millions of deaths.

Now it leads to it not just through speech.

What happens is the speech creates political environment where revolution becomes more possible.

Then the government takes over, immediately suppresses free speech, and then kills people.

Nonetheless, indirectly, we do see that that speech is part of the cause.

And you can imagine that for a wide range of other regimes; every revolution, including ones that we might think, in retrospect, were justified or at least important steps forward, has been fomented by speech, often speech that advocates for violence.

So part of the problem, it seems to me, with saying, look, speech, especially in the era of the internet, we know it happened even before the internet, but especially in the era of the internet, can lead to all these deaths.

You know, the answer is, certainly it can.

To take another example, the printing press, speaking of religious violence, probably caused in part the wars of religion in Europe, which killed many millions of people.

But so part of the problem with this, oh, we need to prevent violence caused by extremism argument, is that it seems so ubiquitous, right?

I mean, it could be used to justify restricting publication of religious works, publication of all sorts of political advocacy.

Not just of the communist advocacy.

Because, after all, a lot of those revolutions were fomented in part by kind of much more borderline speech that then eventually brought about conditions in which a tyrannical regime could take over.

So, so part of the problem is just what's the limit?

- Yeah.

Yeah.

You raised so many points that are relevant to a lot of my recent writing.

There must be some telepathy going on there.

If you don't mind, just before I respond, I hope you don't mind that I make a very pedantic point.

- Pedantic? - For what it's worth, - We're all in favor of pedantry.

You are, I take it, pedantic - As it turns out.

'cause I've researched this, you know, quite thoroughly, to be quite pedantic.

The 100 million number has actually been very much questioned.

But again, who cares?

Tens of millions.

I don't think anybody could dispute tens of millions. - Easily, between communist China primarily, and - Yeah, no, I know - the Soviet Union.

- Yeah.

- At the very least, tens of millions.

- Yeah.

Okay, let's agree on that.

It's heinous. And indeed, I've written, I mean, I wrote a piece for the Human Rights Quarterly and then wrote something similar in my last book, making this very point, and indeed not to advocate that all of the speech be shut down.

'cause I think that's the real point that you're getting to, and it's an important one.

That all speech can be dangerous; any kind of speech on anything that's meaningful and controversial can be, so to speak, dangerous.

So I think you and I agree on, on your fundamental point there.

And in fact, what I did was make the very point you've just made in some of this recent writing, you know, actually to challenge what I call the Eurocentrism, you know, of free speech and indeed of human rights generally.

I was using this to illustrate how international human rights law, which was created by people very much in the name of anti-Eurocentrism, became quite Eurocentric in its notion of who the victims are.

And so this is what I call Eurocentric anti-Eurocentrism: the very people who are insisting that we have to take the center, you know, our focus, away from the West

were using purely Western paradigms of hate speech and of dangerous speech.

And in fact, far from, you know, getting away from Eurocentrism, they keep reaffirming Eurocentrism.

And so I did this much more to challenge existing conceptions of hate speech than to advocate for lots more regulation.

So I think you and I certainly agree on that point, that, yeah, these classifications are very, very arbitrary, and we can't draw easy lines between safe and dangerous speech, at least when we're talking about speech which is not trivial, which has anything to do with any issue that might be controversial.

So I absolutely take that point on board.

But, you know, the question, and again, I've been doing this for quite a few years now, I designate my philosophy as viewpoint absolutism, right?

It's not free speech absolutism, right?

As I've written many times, there's no such thing as free speech absolutism.

- I agree with you on that.

- Even the most libertarian regime, true, even the most libertarian regime would collapse within hours if we were free speech absolutists.

And any critically minded libertarian will notice that immediately.

So good.

We agree on that.

And instead, viewpoint absolutism, in other words: no individual citizen should incur a legal penalty, and certainly not a criminal penalty, solely on grounds of the philosophy or worldview that is spoken.

And so this is my point of departure. But what still remains, then, is, while I will protect the individual speaker to that degree, to the degree of viewpoint, the question is whether this translates into a blanket protection for the large providers, for the large platforms.

In other words, if we regulate the platforms without penalizing the individual speakers, we're certainly narrowing the window of opportunity for speech.

But free speech does not require any minimum threshold of opportunity.

In other words, no government is required to supply a platform.

It just can't shut it down on the basis of a worldview.

And so regulation of online platforms could, pro tanto, diminish windows of opportunity, but that does not necessarily equate with penalizing individual speakers solely on grounds of their viewpoint.

And indeed, I would extend that same provision, that same protection, to, for example, smaller platforms that don't have the size, the scope, of X. - Like the printing press.

- Yes.

- Like the printing press.

You would, okay, so newspapers too could be... - Yeah, yeah.

Oh yeah.

I think newspapers are infinitely different; again, that's why I was very much, you know, in the American free speech line insofar as it applies to the pre-internet world.

Yeah.

You know, I just think the internet has not simply a power and reach and wealth, but also an opacity, which has no match, which has no correlation to the traditional print media.

- So, but I'm sorry, I just wanna make sure I understand.

So you'd say that newspapers - Yeah, - are protected, that individuals couldn't be, say, thrown in prison for their views, but that the government could say, platforms, you must block certain views.

We're not gonna put individuals in prison, but we're just basically gonna make it impossible for them, or much more difficult for them, to convey views that we think are dangerous.

- Possibly.

Possibly, yeah.

- Got it.

So why couldn't that same logic apply to a printing press then?

A book press?

It's a... - It's a calculation of risk. - Which, and again, oh, okay.

So it's - a calculation of risk, which we've always done. - Well, always? Well, okay.

I wanna press on that a bit too.

- We've always been involved in the calculation of risk.

It's just that the calculation of risk is far more complicated.

Sorry, go ahead.

- Okay.

But let me, yeah, let me try to recharacterize the more, like, sort of naive free speech, you know, American view here.

It's true that we have exceptions.

We're not free speech absolutists, even here in the US, and we have exceptions.

But the philosophy behind even the exceptions is one that is not trying to optimize; it is not trying to be proportionate to the risk involved.

It is instead quite deeply skeptical: once there is an opportunity for the government to start tinkering with what is heard, in order to shape what is thought and then eventually what is done, that kind of attempt to go as far upstream as possible, pre-thought and pre-public-debate, is a tool that's so dangerous that, rather than trying to optimize risk, we look for things like fraud, things like incitement, where we can have confidence that they can be administered in a viewpoint-neutral way and that they address real risks.

And not, to quote the famous Brandeis line, to let men fear witches and burn women.

Right?

So I'm wondering if you could react to that kind of skeptical, negative-liberty viewpoint and how much it differs, if at all, from what I hear as more of a risk-management approach, kind of like, you know, what a regulatory agency would do with medical risks or something like that.

- Yeah.

Where, again, I'm underlining: as to the largest, most powerful, and wealthiest platforms, not as to individual speakers.

Again, - I think, but I don't think that matters; I think Eugene and I are reacting because ultimately it means that individual speakers are not reaching willing listeners.

And so I still think you have a philosophy.

Go ahead.

- I'm sorry.

Sorry.

No, sorry.

I didn't mean to interrupt.

- No.

So I still think it takes me to, you know, what is the philosophy, the justification, for government involvement?

I get that, I like that you are recognizing that maybe the speakers per se should not be in legal risk, legal turmoil, and especially not at criminal risk.

Right?

But in terms of the actual access to content, the viewpoint from the libertarian version of the First Amendment is that it's true that not everyone gets a platform.

Not everyone gets a megaphone, but the government doesn't pick and choose who gets one and who doesn't.

It's the private, you know, industry.

You know, and this was just as true in the 18th century as it is now: those who, you know, command the audience or manage the infrastructure that reaches the audience get to make these decisions without government interference.

- Yeah.

And I mean, I do actually think that Brandenburg was about managing risk. - But, I admit, I agree with that to some extent, but it took the approach of, okay, these risks we're quite confident in because, A, we have some evidence that the defendant wanted it, you know, there's an intent element.

So within all the risks that a perfect, omniscient God could manage, it took just a subset, just a slice of those that we thought the government would be able to competently administer without running the risk of viewpoint corruption. Does that, - But I think, - First of all, do you think that's a fair articulation of what's going on?

And it sounds like you think that, as a policy matter, that's just too little risk that's being managed.

Is that right?

- Yeah, well, I mean, again, I think, let me put it this way.

I'm not sure Brandenburg would've been decided the way it was if we had been talking about the internet and not a farm in Ohio.

I think it's really that simple.

In other words, I don't mean to say that Brandenburg was only about managing risk, but I just think that element is undeniable; otherwise, it didn't even need to stop with, you know, a likelihood of imminent lawless action, because that's just a risk calculus.

So it was really about managing risk, but it was about managing risk at a time, in a world, and for a domain of activity where risk was something very, very different, right?

And so, up until recently, people have barely spoken about Brandenburg as managing risk.

They've rather spoken about it as, you know, a guarantee of a quasi-absolute freedom.

And again, in the offline world, I still adhere to it.

I still think it makes sense insofar as there is still an offline world.

But I think if Brandenburg was only partly about managing risk, it was necessarily about it; it was not incidentally about managing risk.

That's fundamentally what the Brandenburg principle is telling us: how to manage risk.

And so I'm not sure why that would suddenly disappear, again, as to the largest platforms particularly. Because, you know, everyone in Ohio, everyone on trial in Brandenburg or in any similar case, these are individual citizens and voters.

The farm was not on trial.

Right, Mr. Brandenburg was a citizen and a voter.

Twitter doesn't vote, Facebook doesn't vote.

They are not democratic citizens.

And so I think another problem in our understandings of free speech is that we're only ever doing a liberty calculus.

And even though we often pay lip service to democracy, democracy actually falls by the wayside.

- But lemme ask, - Sorry.

And I don't think that democracy is just a tool for freedom.

Democracy is a political community.

It's a participatory community.

And, you know, therefore I don't give Twitter and X the status, the standing in the political community, that I give any given individual speaker, assuming we're talking about an individual speaker who might communicate through those media.

- But if you restrict, say, Twitter, if you restrict Facebook, especially if you say, well, they can't post certain kinds of views because they're too risky.

You are restricting the ability of citizens and voters to speak to each other, right?

So imagine that the Trump administration has this power and is allowed to order, or it doesn't have to be just the executive.

Imagine that, in alliance with Congress, it's allowed to order only these large platforms not to allow posts that support or defend the killing of Charlie Kirk, or that defend transgender ideology, or that sharply criticize the American military, or sharply criticize the police or immigration enforcement.

Or imagine that a future Newsom or Harris administration orders the suppression of other speech that it doesn't want to see.

That may not involve individuals going to jail, but it will involve individuals being unable to convey their views on those subjects while their political rivals - No, no -

are free to convey theirs. - No.

Or unable to at least do this through this very important medium of communication, - Less able, less able.

Yeah.

They get a narrow window.

- So, I'm sorry, so you say that there would be no free speech objection in America, we'd call it a First Amendment objection, obviously not elsewhere, to the government doing that, even in a viewpoint-based way, even in a way that is probably aimed at swinging the next election, to try to make sure that some views are advantaged in public debate and some are disadvantaged; that, you think, is fine, so long as individuals aren't... - Yeah, yeah.

But, yeah, the platforms are doing this anyway.

I mean, right.

Surely you're not starting from an assumption that the platforms are wholly neutral.

- Well, but the New York Times is not a platform; what if the government says, order the New York Times... - This is why.

This is where I would distinguish between the large platforms and just a website that A or B might set up at home.

As to a website that A or B might set up at home, I would give it the same protection that I give to that citizen, right?

But what about the New York Times?

This is why we have a relationship, right?

This is why we have, you know, what Jack Balkin has called a triangular relationship.

We're dealing essentially with one state dealing with an entity which has the power of another state, right?

Hopefully a democracy, though not always; hopefully a democracy dealing with an entity which is not a democratic entity, which doesn't even purport to be one, no matter what kind of mission statement it posts on its website, right?

Twitter and Facebook in no way give any guarantee of any loyalty to democracy.

They don't give any guarantee.

Again, they might have nice phrases about democracy, but they give no guarantee, right?

And so basically, you know, asking whether, and you're not gonna like this, asking whether, you know, Washington should have the power to regulate Twitter is pretty much like asking whether Washington should have the power to regulate Russia in terms of communications.

- I'm having trouble understanding how this differs from Washington having the power to regulate the New York Times, not any individual author.

But it shouldn't, yeah, it shouldn't.

What shouldn't? Because Brandenburg was ultimately... but why shouldn't it, why shouldn't the government be able to regulate the New York Times if it can regulate Twitter?

What's the difference?

- 'Cause Brandenburg was about risk, because Brandenburg was, - But the New York Times also has risk.

If the New York Times decided to completely sway it, decided to have an agenda of, you know, furthering a white supremacist, you know, nation, then it would pose some risk, I think.

Yeah.

And you're saying, but nevertheless, - Yeah.

Okay, we perfectly agree.

Yes, it would, and that risk would be nothing compared to the risk of Twitter doing it.

So we really - No, I, I, oh, absolutely.

Okay.

Well, I don't agree about the risk assessment.

I guess it's funny, I'm actually sympathetic, though, to this idea that in this era risks are different.

That a provocative comment at one time to one audience has a life that is different, you know, that goes on.

However, I do find it challenging; you know, even when I try to think through how regulation of these types of new risks would work, I find it challenging to distinguish them from the types of stochastic risks that could be attributed to rap lyrics, to, you know, news, to factually accurate news that nevertheless aggravates, you know, totally foreseeably aggravates, and produces a riot or something like that.

And so the nature of the risk analysis is something where, I guess, that's where I come back to the First Amendment deciding to be extremely conservative in its approach to recognizing risks that are appropriate for the government to get involved in.

And that, I think, sounds like a distinction, but - Yeah, and, you see, when I recognize that Brandenburg was a risk calculation, my point is not that we conclude from this that government can therefore adopt a policy of zero risk.

And that's why the New York Times, right; of course, there's never zero risk in any situation, never zero risk, right?

And that's why we simply need to look at history, to look at the dangers of the New York Times and the dangers inherent in Twitter, to look at the transparency of each operation, to look at the potential for accountability, and so forth.

So these are wildly different, the New York Times and Twitter, X, sorry, I shouldn't be saying X, right?

And so these are wildly different. Now, if we've already agreed that there has indeed always been a risk calculus in fact at play, well then we have to either take that seriously or explain why we should suspend it, but it clearly gives us clear water between Twitter and the New York Times.

- So I guess I'm still not sure how this fits within what I think has long been understood as a different facet of risk management, which is the risk of allowing the government to use its coercive power to control public debate.

Now, to be sure you point out that Twitter and Facebook may use their own power to do that, and it's an interesting question whether they should or should not be allowed to do that.

But it's generally been understood as especially risky for the government to be able to say, essentially, even in particular media only, you can't say certain things, because that means that the next election won't really be a result of the free choice of the people.

It'll be a result of the government essentially shutting down, in some very important media, certain voices. Just to, again, turn to immigration.

You know, I'm an immigrant into the US; I generally support immigration.

I think there's a lot to be said for it, especially in the US; it may actually be different for smaller countries, but I think the US profits greatly from immigration.

But it seems to me that America, to remain a democracy, must have those decisions made by a democratic process.

And if the government, again, some future Newsom administration, says any posts that are sharply critical of immigration, or any posts that spread what we consider misinformation about immigration as decided by us, or any posts that might have a bad tendency of leading to attacks or criminal attacks on immigrants, must be removed, will the following election, where immigration will be a topic for the voters to think about, really be free and fair? If the government was able to shut down, not everywhere, but at least on the most massive platforms, the places that so many people rely on much more than the street corner, was able to shut down one view, is that really democracy?

- And I would say, if you don't mind me sounding a bit pedantic again, Eugene, I think that you just gave an absolutely perfect articulation of the pre-internet world, where we had a strictly bipolar relationship, the almighty state and the isolated individual citizen, right?

And within that world, I think your articulation is flawless, and I agree with every word. But we simply cannot assume that on the internet we're dealing with an open arena into which the government then comes in and starts regulating; we are dealing already, in other words, with regulation regardless, with deep, constant regulation and manipulation.

And the only question then is do we have a democratic say over it?

Now, maybe I don't have much of a democratic say in how my government regulates it, but I have no democratic say over how Elon Musk does it.

So I'd, - But that sounds like a good argument.

- I'd rather have at least some say over how my government does it than absolutely no say over how Elon Musk does it.

So, because those are my only two choices.

Those are my only choices; there is no third choice in which it's simply the government and the citizens, as in the pre-internet world. That one choice, which you perfectly articulate for the pre-internet world, does not exist online.

We have those two options, right?

Right.

Do we take the Russian option or do we take the democratic option?

The democratic option, you are right, is far from ideal, but it's better than the Russian one, which is Elon Musk's option.

I don't mean that Elon Musk agrees with everything Putin does.

I mean that his running of it, that Mark Zuckerberg's running of it, that the large platforms are completely opaque as to how they run.

They're completely unaccountable as to how they run.

We have no idea what algorithms are used, when they're changed, how they're applied.

So the idea that there's somehow a problem and government marches into that, right?

There's a problem and government marches into a genuinely open and free and transparent arena.

And that's what you've just articulated really well, and I agree with it, but that's not what government is marching into when it marches into Twitter or Facebook.

- But I'm sorry, I'm still not sure about two related questions.

One is, even in the pre-internet world, - Yeah, - the media had extraordinary power compared to ordinary citizens, who really weren't able to speak to a large group unless they had a lot of money or access to the media.

Yeah.

Likewise, the New York Times had its own editorial line.

The New York Times controlled what was published in it through an opaque process.

I mean, they didn't have formal rules about which op-eds, let's say, they accept. Yet, I take it, we wouldn't say, well, because, you know, the media titans already restrict what is printed in their pages.

Well, might as well have the government restrict that too.

But the second point is, I can see what you're saying as an argument against, say, the result in NetChoice v. Moody, where the Supreme Court held that the government may not mandate neutrality on the part of platforms.

So you might imagine saying, look, because the platforms are so powerful and so opaque, we need to have non-discrimination rules for the platforms, notwithstanding the court's arguments in NetChoice v. Moody; as to opacity, we need to have disclosure rules, for which maybe there's still some room post-NetChoice.

Or, again, you might just say the court got it wrong in NetChoice. But it sounds to me like you're saying, because these platforms have so much power, rather than trying to diminish the platforms' power, we want to increase the government's power to prohibit certain views on those platforms.

It just seems that the solution doesn't get at the problem.

- What was that first option about diminishing the platform's power?

Yeah, - You uphold neutrality rules, say that the platforms cannot discriminate based on viewpoint; as to opacity, require them to have various disclosure requirements, the sorts of things that Florida and Texas tried to do in some measure. I'm not a fan of all aspects, by any means, of the Florida and Texas models.

Yeah.

So maybe NetChoice got it right.

But if your concern is about the platforms having all this unaccountable power, you might say, okay, limit the platforms' power.

But it sounds to me like you're saying instead, increase the government's power to control what is said on the platforms.

- Oh, no, I love your proposal.

Let's do it.

Yeah, no, I love your proposal.

I would infinitely prefer to keep the government out, except, I mean, as to content. - But how does the government's power to regulate viewpoint solve the problem, the problem that the platforms' power raises?

It doesn't. - No, it doesn't solve the problem.

As I said at the very beginning, we're not going to get a solution.

Well, it's always gonna be a bad regime.

And then the question is how do we minimize that?

But yes, Eugene, I agree entirely.

I would much rather, if the government is going to get involved, simply have it police neutrality, police transparency, make it into a genuinely democratic arena, verifiably and accountably.

So that we can stay out of the business of policing viewpoints.

You've got my vote on that.

Let's... - Well, so I would love to end this on this wonderful point of agreement.

However, I do think that the problem will be that in this new neutrality regime, there will still be content that some think is very risky and likely to cause these stochastic events, you know, violence and whatnot.

And, you know, so I don't think we've solved the problem of figuring out how to either incentivize or compel the social media companies to deal with that problem.

But maybe that will have to be an ongoing years and decades long discussion.

- But that anxiety, oh, I will be far less worried about content, if at all, if we could genuinely have a transparency regime. And everything I've been saying and defending for the last hour has been on the assumption of opacity, of non-transparency, of the large platforms.

'cause that's the world we're living in.

But yes, 100%, if we can turn them into genuinely democratic arenas, then, yeah, for me even the very concept of risk changes again.

- Interesting.

Okay.

- Well good.

Absolutely.

- Let's close it there before we add more confusion.

- Eric, thank you so much for joining us.

I very much appreciate your perspective, and I think we had a fun discussion as a result.

Jane, thank you so much.

As always, great pleasure.

And our listeners should expect some more interesting content on a lot of the exciting things happening in the free speech world just in the weeks and maybe even days to come.

- Well, thank you very much.

It's been my absolute pleasure.

It's been a pleasure to join you today.

Thank you all.

Thanks.

- This podcast is a production of the Hoover Institution, where we generate and promote ideas advancing freedom.

For more information about our work, to hear more of our podcasts or view our video content, please visit hoover.org.
