Mis- and disinformation will always exist in free democracies, but communities can still find ways to push back against the most harmful conspiracy theories

One of the most challenging problems of the current moment is determining how communities should respond to false information spreading online. Legal precedent isn’t always helpful, since many landmark First Amendment cases were decided in the 1960s and ‘70s, argues Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University.

“There’s this big question about whether that framework makes sense, given the challenges that we’re facing today,” he said during a conversation at Unfinished Live. “Part of the misinformation problem, I think, is a result of new technology, new communications technology, which didn’t exist when the Supreme Court was thinking about the lines that ought to define free speech in the United States.”

Jaffer is part of the Commission on Information Disorder, an effort organized by The Aspen Institute to find solutions to the global misinformation crisis. He said that it’s difficult to find fixes that don’t give government officials or tech companies undue power over public discourse. “Misinformation is at least, in large part, a function of our collective commitment to autonomy and the free trade of ideas,” he explained.

Yasmin Green, another member of the commission and the director of research and development at Google’s Jigsaw, said people need more resources to navigate the internet. “We don’t really give you any tools to understand where the information that you are seeing came from,” she said. “I’m really interested in that aspect.”

Watch the full conversation below, and scroll for a written transcript. The transcript has been lightly edited for clarity. Statements by these speakers have not been fact-checked by Unfinished and represent their individual opinions.

Vivian Schiller

My name is Vivian Schiller and I am the Executive Director of Aspen Digital. Aspen Digital is a program of the Aspen Institute, which is a partner at this conference. But when we’re not speaking in front of audiences like yourself, Aspen Digital focuses on all things at the intersection of media, technology, and democracy. So just minor things like that. One of the most exciting projects we’re working on right now is something called the Aspen Commission on Information Disorder. It is a very short, six-month commission, and we pulled together an extraordinary group, two of whom I’m going to introduce you to in a minute, to look at solutions and recommendations to help solve some of the biggest issues of our time: misinformation, disinformation, and malinformation, and the harms they are creating in this country.

And so our group of 16 commissioners has been meeting a lot over the last six months, mostly by Zoom, though we did get together once in person, and in just a couple of weeks the commission is going to release its recommendations. [Editor’s note: That report was released this week. You can read it here.] But we’re really, really excited today to have two of our commissioners with us. Yasmin Green is the Director of Research and Development for Jigsaw, a unit of Google that’s building technology to make the world safer from global security challenges. Yasmin leads an interdisciplinary team to forecast threats and validate tech interventions. She has pioneered new approaches to countering violent extremism and state-sponsored disinformation, which are issues the commission talked about quite a bit.

Also with us is Jameel Jaffer. He is the inaugural Executive Director of the Knight First Amendment Institute at Columbia, which promotes freedom of speech and the press through litigation, research, and public education. They are involved with a number of major projects that advocate on behalf of journalists and researchers who study social media platforms. Jameel was previously the deputy legal director of the ACLU, where he oversaw work related to free speech, privacy, national security, and international human rights. So First Amendment issues, as you might imagine, were also a topic that wove through all of the commission’s deliberations. Anyway, I wanted to ask each of you first: as members of the commission, what were the challenges you saw out in the world that led you to take on this work? Not just the work of the commission, but broadly, in your day jobs. And Jameel, I’ll start with you.

Jameel Jaffer

All right, thanks Vivian. And great to be here. I run this institute at Columbia whose focus is free speech in the digital age. And the big question we think about all the time is whether the First Amendment framework that we’ve inherited is adequate to the challenges that we’re facing today. If you think about the landmark First Amendment decisions, the ones that define free speech in the United States today, they were all decided in the 1960s and ’70s: a case like the Pentagon Papers case, or New York Times v. Sullivan, which many people think of as almost synonymous with press freedom in the United States, or Brandenburg, which is a case about advocacy of violence and the question of at what point advocacy of violence can be regulated. Those kinds of questions were decided in the 1960s and ’70s, long before the internet or social media or smartphones.

And there’s this big question about whether that framework makes sense, given the challenges that we’re facing today, and I see misinformation as a big piece of that. Part of the misinformation problem, I think, is a result of new technology, new communications technology, which didn’t exist when the Supreme Court was thinking about the lines that ought to define free speech in the United States. So that’s really the question that I had in my head when you approached me about the commission. And I feel like I’ve learned a ton just from interacting with so many people from different disciplinary backgrounds, because most of the people I spend my time with are lawyers who think about the First Amendment, but people think about this issue from all sorts of different perspectives.

Vivian Schiller

Yeah, and we certainly had a lot of debate. The commission had a lot of debate around that balance between the First Amendment and this country’s tradition of free expression, and the actual real-world harms that result from things people say online.

Jameel Jaffer

Yeah. I mean, I think that there are, in my mind, two really big challenges in this sphere. One is how you address misinformation in a way that doesn’t invest government officials with the authority to decide what’s true and what’s false. The other is how you deal with this set of problems without investing very powerful technology companies with even more power over public discourse than they have right now. Coming up with ways to address or mitigate the problem of misinformation with those two challenges in mind, I think, is a tough thing to do.

Vivian Schiller

Yeah, all right. We’re going to come back to that. Yasmin, what were the challenges that you saw that you wanted to bring to the commission when we first started out?

Yasmin Green

Yeah, it resonates with the point about harms that Jameel mentioned. My group at Google has been around for 10 years, half engineers, half other disciplines, researchers. And we’re trying to understand how motivated bad actors are using the internet to exploit people in ways that we didn’t imagine when the internet was built. So 10 years ago, when we started, we were looking at radicalization. At the time, the perspective on radicalization from the people who are experts in extremism and violent Islamism was that the internet had nothing to do with why anyone would go join the Mujahideen in Afghanistan. And the perspective of people working on the internet was that it’s kind of a digression, and kind of odd, that we have a group thinking about online radicalization, because it doesn’t really happen. Then you fast forward five years, and you had ISIS, and it was, “If there was no internet, there would be no ISIS. It’s the internet’s fault.”

And so throughout our decade of existing as a group, we’ve watched issues that used to feel very fringe and remote become really prevalent and really worrying for everyone, including the major Western countries. That was our entry point to misinformation: thinking about how conspiracy theories can be used to mobilize people to take some kind of violent action. There are no extremist groups of any ideology in history who haven’t used conspiracy theories to justify taking action against out-groups and to create a sense of urgency around that. So we were really interested in misinformation coming from a harms angle, and in the pandemic era we’ve even extended that to thinking about more diffuse harms.

I’ll tip my hat to my colleague Beth Goldberg, who’s here, and who has been doing work on how we can think of harms beyond the more traditional notions of incitement to violence or mobilizing people to be violent, a suicide bomber or something like that. So I was really interested to come to the commission, because we had a lot of people with different lived experiences of what harm could mean. Everyone was interested in tackling pretty much the most harmful type of misinformation; it’s just that we each had a different idea of what that was, which made for very lively commission meetings.

Vivian Schiller

Yeah, they certainly did. Yasmin, I’ll just stay with you for another minute. I think everybody in this room, and so much of the conversation that has been happening over the last two days, is focused on the harms and the problems stemming from a lot of the products and services that come from big tech. So let’s move to talking a little bit about fixes. You’re part of greater Google, and your group is focusing on solutions. What are the responsibilities of the big tech companies toward mitigating these harms?

Yasmin Green

I mean, they’re responsible for keeping their users safe and for keeping society functioning well. And if they were to deliver in any way on their missions, that would qualify, for me, as being responsible actors. I think, and we found this in the commission too, there was a real desire to see systemic fixes. What could we advocate for that would make, not just platforms, because we were also looking at other types of media and cable news, what would make them just behave differently? How could you change the incentives? So there was a big focus on that. It’s interesting that, from the inside of big tech, I see a lot of focus on algorithms: how can you change the way algorithms recommend information? And there has actually been a lot of movement on that, to make algorithms offer less damaging, sensational, radicalizing content.

And so some of the work that my team at Jigsaw has been doing starts from the fact that we’ve invested so much in algorithms, so much of our time and energy and people, and actually relatively little in how we can help people be savvier about how they navigate the internet. We don’t really give you any tools to understand where the information that you are seeing came from: how old it is, its provenance, whether the sources are themselves reputable or have been debunked many times. And so I’m really interested in that aspect, actually. I suppose it’s an attribute of responsibility: really seeing innovation in ways that we can help people make better decisions for themselves.

Vivian Schiller

So using good tech to solve some of the problems caused by some of the bad tech, so to speak.

Yasmin Green

Does that make me sound naive?

Vivian Schiller

Jameel, let’s stay with that topic because one of the things that the commission has talked about quite a bit is the issue of transparency. I know some of the work that you’re doing at your program at Columbia is to … You’re representing the group at NYU, which has been shut down by Facebook …

Jameel Jaffer

Yeah. Well, we’ve represented a whole bunch of journalists and researchers who study social media platforms, including the NYU team that’s behind Ad Observer. I won’t talk specifically about that project, but the problem that journalists and researchers sometimes run into when they study the social media platforms is that the platforms rely on their terms of service to squelch research and journalism that is inconvenient to the platforms. At a high level of generality, the terms of service make sense: it makes sense that Facebook would restrict people from scraping the platform, for example, or collecting information from the platform by automated means. After the Cambridge Analytica scandal, Facebook was under a lot of pressure to crack down on third parties’ use of information from the platform and, again at a high level, it kind of makes sense that they would have terms of service like this.

But one effect of these terms of service is that journalism and research about Facebook, and more generally journalism and research about the social media platforms, become possible only with the permission of the platforms. And this situation, in which the platforms are themselves the gatekeepers to journalism and research that the public needs, I think is unacceptable and something that somebody has got to do something about. So we’re trying to do something about it. There’s definitely a role for regulators and legislators in this context as well. But Vivian, while you were talking to Yasmin, I thought about your earlier question. The one thing I wanted to say about this problem of misinformation is that I think it’s a mistake to think this is a problem that can be solved. Misinformation is at least, in large part, a function of our collective commitment to autonomy and the free trade of ideas, the marketplace of ideas.

If you have a system in which you’re going to let people decide for themselves what to believe and what not to believe, in other words, if you have a system that’s not a totalitarian system, then some people are going to get things wrong. And to that extent, misinformation is here to stay. I don’t think that means there’s nothing we can do about it. There are things we can do, you mentioned structural or systemic fixes, to reduce misinformation or to mitigate the harms it causes. One area where I think there’s a lot of low-hanging fruit is transparency: protecting journalists and researchers, for example, who study the platforms.

Requiring the platforms to disclose more information to researchers who want to study, for example, how misinformation spreads on the platforms. Or even legislation that requires the disclosure of certain categories of information to the public, like ad targeting data, so that people who want to understand how advertising is or isn’t a vector for misinformation online can better understand it. I think those are low-hanging fruit, and it’s a little bit frustrating that Congress hasn’t already acted on those kinds of things, but maybe our report, if those recommendations end up in it, can help a little bit.

Vivian Schiller

Yeah, yeah. Just as a bit of a tease: there certainly will be recommendations in the report for Congress to act on certain measures. And certainly, if there’s one area of agreement, it is that misinformation and disinformation are not a problem that can be solved. Like you said, it cannot be eradicated. The only way to eradicate it would bring consequences that I don’t think anyone in a democratic society would choose. But thanks for raising those points. Let’s stay on the subject of the responsibility of the government. Some of the things the commission has talked about call for Congress to take certain actions that will help increase this transparency.

Where else can the federal government make a change? One of the things that came up very often, and that I know the commissioners are mindful of, is that whatever change you seek from the federal government, you need to be comfortable with how it works no matter who’s in office. So what is the role the government can play? Not just in regulating big tech, necessarily, but in general, to help mitigate, not eradicate, but mitigate some of the harms of mis- and disinformation. Go for it, Jameel.

Jameel Jaffer

Yeah, well, okay. I think you’re absolutely right, Vivian, that one of the questions we should all have in our heads when we think about this set of issues is: who is going to be in charge of this new authority? If we’re giving somebody the authority to act against misinformation, who’s going to be in charge of that authority? Because you may trust the way the current decision makers use that authority, but the next set of decision makers might use it in a very different way. And I think that whatever your politics, you can just look at the last eight years and see how dramatically, and how quickly, things can change in the political landscape. If you had, for example, a law that allowed the government to suppress misinformation, well, President Obama might have interpreted that law quite differently than President Trump. Or their justice departments might’ve interpreted those laws very differently.

That said, there are lots of places where Congress regulates misinformation even now. Congress regulates commercial speech, for example: there are consumer fraud laws, and securities regulation is, for the most part, regulation of lies. There are all sorts of places where we accept government regulation of lies, and I think it’s worth thinking about whether there are other places where we should be more accepting of that kind of regulation as well. But there are also ways the government can act without regulating what people can and can’t say.

I mean, I think the transparency ones are maybe the best example, but certainly not the only example. You can imagine, for example, a fund meant to protect certain kinds of journalism or certain kinds of academic research. Those more systemic interventions, I think, are less likely to run up against this problem. You can describe it as a First Amendment problem, and I think there are good reasons why it is one. But those systemic changes are less likely to end up giving decision makers authority that can be used to suppress not just misinformation, as we might all define it, but also dissent.

Vivian Schiller

Yasmin, what are the other levers? In this conversation we’ve been talking about what the big tech companies can do and what the government could do. But mis- and disinformation is a whole-of-society issue, and the commission spent a lot of time talking about other kinds of solutions or actions, whether it’s about journalism, education, or other parts of civil society. What are some of the other areas that you came out of the commission work thinking were important to examine, aside from actions by big tech and actions by government?

Yasmin Green

That’s a really good one. We were so focused on those two.

Vivian Schiller

We did talk about media, the crisis in local media.

Yasmin Green

Yeah. Yeah. I think the question we asked ourselves a lot, the phrase that kept coming up, is: where’s the teeth in this proposal? We can say what we’d like to happen, but why will things differ from how they are today? And it’s interesting: with platforms, the idea was that regulatory pressure, the public climate, where the public discourse is, even where the energies of their employees are, all of those things, I think, are creating momentum in the right direction for change. Then when you think about civil society, it’s not an incentives issue. I think we felt largely that it’s a funding issue. So we were thinking a lot about how we could support funding for civil society without strings attached.

So for example, if there were a fund that we could create to support journalism or researchers or even local libraries, how could we ensure that it would have the autonomy to operate independently? And there was a consensus view that we need to build back institutions that are responsible for promoting education and literacy skills, but that could also contribute to bringing us closer together across divides, now that we’re all sorted into our homogeneous groups. However you cut it, it’s really easy to believe harmful disinformation, really easy to believe that other people are bad or less than human, and those types of recommendations or discussions felt a lot more profound.

Vivian Schiller

Yeah. Trust was a big topic of conversation, building trust. Interestingly, for those who saw Rashad Robinson speak yesterday, he specifically talked about his problem with the word resilience. I think there was consensus across the commission that putting the blame solely on the recipients, who are maybe the victims, of mis- and disinformation is not where it needs to be; a whole-of-society approach is needed. And we have to wrap up in a minute, so we didn’t get enough of a chance to talk about it, but I think everyone will see in the report a very strong emphasis on underrepresented communities and equity, and the specific kinds of harms that are caused, particularly, to communities of color. Jameel, Yasmin, thank you so much.