Salespeople often promote the tech as mysterious and all-powerful, obscuring biases and preventing people from questioning its purpose

There’s perhaps no buzzword that’s become more ubiquitous in the technology industry than “artificial intelligence.” It typically refers to automated systems designed to conduct some form of decision-making, from choosing what videos you see on TikTok to determining whether a defendant should be released on bail.

But companies often exaggerate how much their products rely on A.I. to make them seem more powerful or obscure how they actually function. “It’s aspirational, it’s a narrative, an ideology that can really mobilize vast amounts of funding,” said Mona Sloane, a senior research scientist at the NYU Center for Responsible A.I. and a professor at NYU’s Tandon School of Engineering, during a conversation at Unfinished Live.

“It’s a good way to frankly hide poorly designed systems or systems that are designed to incriminate or adversely impact certain communities,” said Rumman Chowdhury, the director of ethics, transparency, and accountability at Twitter and the founder of the algorithmic audit company Parity. “So it does, in a sense, provide a shield.”

To solve the problem, communities should demand that new technologies be made available for the public to inspect, evaluate, and understand. “We have to have the tools to make the choices collectively about what we’re allowing to govern our lives,” said Albert Fox Cahn, the executive director of the nonprofit Surveillance Technology Oversight Project. “Or we lose that control that is indispensable to having a republic.”

This talk was moderated by Damon Beres, editor in chief of Unfinished Media.

Watch the full conversation below, and scroll for a written transcript. The transcript has been lightly edited for clarity. Statements by these speakers have not been fact-checked by Unfinished and represent their individual opinions.

Damon Beres

I’d like to introduce our guests first. So we have Albert Fox Cahn here, he is the founder and executive director of the Surveillance Technology Oversight Project and an Ashoka fellow. Welcome Albert. We also have Hilary Mason, co-founder of the narrative A.I. company Hidden Door and founder of Fast Forward Labs. Welcome.

And on our video panel here, we have two amazing guests dialing in. We have Dr. Rumman Chowdhury, director of the ML Ethics, Transparency, and Accountability team at Twitter and founder of Parity, an algorithmic audit platform company. And then we have Dr. Mona Sloane, senior research scientist at the Center for Responsible A.I. at New York University’s Tandon School of Engineering. Hello, Mona.

And I’m Damon. I am the editor in chief of Unfinished Media here at Unfinished, and I am just delighted to see you all here today. 

Alright, jumping right in here, I want to start from a very foundational place, actually. Because artificial intelligence, as many of us in the audience and of course on the panel know, is incorporated into so many aspects of modern life that the term probably means very different things to different people. And I thought it would be great just to get a foundational, brief individual definition of A.I. from each of you. And I’d love to start with Rumman. Can you give us a definition of A.I. from a technical perspective?

Rumman Chowdhury

Oh, that’s a hard question, because it has come to mean everything and nothing in application. I think of it now as an algorithmic decision-making system: systems that use data and algorithms to make some sort of a decision. The one clarifying factor being that there may or may not be a human engaged in it.

Damon Beres

Okay. Hilary, also from the technical perspective, I’d be curious to hear your definition.

Hilary Mason

Yeah. This is a term that’s so difficult to define because it has a very rich history in one context, in the academic context, and then the way it’s been used most recently has been more on the marketing side of things. And so I want to build on what Rumman said, which is absolutely correct, and just say that from a technical point of view, this is a shift from writing deterministic software, where you’re saying, “If this happens, do this thing,” to building probabilistic models off of data through code, and it looks the same.

But I’ve found that the important thing to get out of a definition of A.I., versus traditional software engineering systems, is that you are shifting from that deterministic to a probabilistic mindset. And then you may deploy that probabilistic thing to predict things you can’t observe directly, to do so in an automated context, as she said, or to inform a person who’s making a decision. And so if we want to really pin down what A.I. is for this discussion today, we say it’s in that space.
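(To make that distinction concrete, here is a minimal illustrative sketch; the lending scenario, numbers, and threshold are hypothetical, not an example from the panel. The deterministic rule is written by hand, while the probabilistic version learns a model from data and outputs a probability rather than a certainty.)

```python
from sklearn.linear_model import LogisticRegression

# Deterministic software: "If this happens, do this thing."
def approve_deterministic(credit_score: int) -> bool:
    return credit_score >= 650  # hand-written rule

# Probabilistic software: the rule is learned from (hypothetical) past data.
X = [[520], [580], [640], [700], [760], [810]]  # applicant credit scores
y = [0, 0, 0, 1, 1, 1]                          # past outcomes (0 = default, 1 = repaid)
model = LogisticRegression().fit(X, y)

def approve_probabilistic(credit_score: int) -> bool:
    # The model outputs a probability; the 0.5 cutoff is itself a design choice.
    return model.predict_proba([[credit_score]])[0][1] >= 0.5

print(approve_deterministic(660), approve_probabilistic(660))
```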

Damon Beres

Got it. And Mona, I’d like to throw it to you for a definition that maybe touches on some of the social implications of A.I.

Mona Sloane

Yeah. Thank you… As a sociologist, I’m really interested in how A.I. intervenes into the fabric of our social lives. And so when we look at how A.I. works, then I would say that A.I. systems are essentially scaling technologies that analyze large sets of data to detect patterns that then serve as a basis for making or suggesting a decision.

I think what’s really important to be mindful of is that there can be high- or low-stakes situations in which these decisions are suggested or made. There is a difference between automated recommendation systems that are suggesting a product to you versus systems that have really profound impacts on individuals’ lives, for example in the criminal justice system, the financial industry, or the public sector. And we really know from a growing body of work and research that the negative impacts of high-stakes decisions that are made with and through A.I. systems disproportionately affect historically marginalized populations.

And so as a scaling technology, then, A.I. can end up scaling up these inequalities. So against that backdrop, I would actually say that we have a threefold definition of A.I. It’s aspirational: it’s a narrative, an ideology that can really mobilize vast amounts of funding. It’s technical, as we’ve just heard. And it’s social, in that it has become infrastructural for social life. It’s scaling, it’s amplifying, and it’s also changing how we relate to one another.

Damon Beres

And I think that leads quite nicely — well, Albert, I don’t want to presume what you’re going to say, but I would love to hear your definition just from the perspective of policy and surveillance.

Albert Fox Cahn

Well, I come to this work from the perspective of someone who’s trying to outlaw A.I. most of the time. So it’s a pretty skeptical outlook. And I would say that rather than starting with the technology and defining A.I., I start with the human beings that A.I. is displacing. So my definition of A.I., when I’m looking at governance techniques, when I’m looking at what I’m trying to outlaw at the state and local level, I ask the question, is this technology displacing human decision-making?

Because you can have a really simple technology that is making very profound choices about our lives. You don’t need advanced machine learning models in order to have a system that is putting people in prison, that’s choosing who gets to keep their homes. A lot of that stuff is running in Excel. And so when we’re looking at the frameworks for addressing these social harms and creating new laws, and in my case, suing these companies on occasion, I’m really looking at, what is the technology that is taking away the decision-making power that human beings used to have?

Rumman Chowdhury

I think you’ve made a really great point here that the term A.I. … Often when we are asked this question, we think about, “Oh, there’s all these …” The term A.I. is used for things that are non-technical or not even a predictive system or a probabilistic system. And I think that’s a very dangerous trend, because the term comes with, as Mona said, this laden socio-technical understanding that, “Oh, it’s smarter than people,” or “Oh, it’s too confusing to understand.” And it’s a good way to frankly hide poorly designed systems or systems that are designed to incriminate or adversely impact certain communities. So it does, in a sense, provide a shield.

Albert Fox Cahn

And I think it’s important to note that when you’re creating rules for all this stuff, you need rules that can actually be implemented. And when you have highly technical standards and definitions, judges don’t get it, lawmakers don’t get it. And so you end up with something that might be technically accurate on paper but ends up falling short in practice.

Damon Beres

Hilary, I see you nodding. Do you have thoughts about this or reactions to anything that’s been said?

Hilary Mason

I do. I love to hear it. And also I think there’s a really important insight in the discussion, which is that a lot of the framing of A.I. has been around, how do we replace human decision-making? That is very much the dominant narrative, whereas there’s an alternative framing around, how do we augment decision-making? How do we give better insight through the lens of data, or tools for understanding that data? That framing, I think, is under-appreciated, certainly on the marketing-hype side of the A.I. world.

Damon Beres

So that brings me to a question I’d like to ask Mona, actually, which is: we now have a very good understanding, I think, of the ways in which A.I. is multifaceted. At the same time, however, A.I., especially in consumer contexts, is strongly associated with relatively few companies, like Nvidia or IBM or Facebook, Google, Amazon, these giant tech companies that we’re familiar with. And a lot of consumer interactions with A.I. are predicated on, I think, the vast consumption and processing of personal data. In these respects, A.I. has been something of a centralized technology thus far.

And I’d be curious to hear what some of the issues are that you see in this current paradigm.

Mona Sloane

Thank you for that, Damon. I just want to make clear that I don’t take issue with that paradigm; I think it’s a very accurate observation, because A.I. is by design centralizing power, because it’s centralizing decisions that affect a lot of people at the same time. We’re no longer dealing with social situations or interactions that are one on one, for example a welfare-system case worker and an individual. It is all about scale. It only works as a centralized system at the moment.

And so when we stick with that, and we stick with the observation of centralized power in and through that system, and that system can only work as centralized power, then we need to look at: what is the narrative that underpins the logic? What’s the end game, as it were? And what we see really is large-scale deployment of A.I. systems for the sake of increased efficiency, particularly in the public sector. And now that we are still in a global health emergency and we see strapped resources among public institutions, that becomes even more pronounced.

So what is behind the system is a question we should always ask against that backdrop. What is the problem we are actually solving with technology? And who is defining that problem? These are the kinds of questions that we can use to address the situation of centralized power in and through A.I. But I think we need to acknowledge that we are looking at A.I. as centralizing power, and then we can also talk about corporate power. As a design logic, that very much underpins these systems.

Damon Beres

Does anyone have a reaction to that? Do we agree with that?

Hilary Mason

I think there’s something to observe here, that those companies have a particular way of framing the technology and the data for the technology, and certainly a kind of access that other folks don’t have. And they use it for their own purposes, but there are other pieces of the ecosystem that are worth drawing attention to as well. There’s, of course, a whole academic research community in machine learning that does a ton of amazing work, but their work also is biased toward the interests of those large companies, because of the funding and the data and the resources.

I have a startup. This is my fourth startup; I’ve been doing A.I. startups for 20 years at this point. Startups have yet another set of constraints and limitations, and their own ways of building products and playing in that space as well. And so I think if we draw anything from that to build on, it’s really observing that this ecosystem has non-optimal incentives for humans. And those incentives are pushed by the folks who have power in that ecosystem, whether it’s technical power or business power or political power.

Albert Fox Cahn

I was going to say, well, the process of developing this technology can often be quite centralized. The process of selling it can be quite centralized by a relatively small number of companies, but it’s being purchased in a highly decentralized way by small cities, towns, counties, state governments. That process is unfolding in parallel around the world. And so what you see is thousands and thousands of communities having the exact same debate at the exact same time, but not realizing that this is unfolding with all their neighbors.

And I think part of why that’s been so dangerous is you end up with this repeat-player dynamic, where the companies that are selling this have a really lopsided advantage. And they keep coming up with these pitches, “This tool is what you need for this problem,” and then oftentimes the states and local governments don’t have the resources to actually push back and know if that’s really an accurate pitch.

Damon Beres

Can you give us a couple of specific examples? So we’ve seen, for example, certain municipalities take action against facial recognition, which is maybe the most obvious example to use in this context. What else should we be thinking about? You’re talking about local governments. What specifically are you really addressing?

Albert Fox Cahn

Yeah. And whenever we look at new technologies being rolled out, of course it is poor communities, it’s BIPOC communities, it’s undocumented communities that are the ones that are targeted. So where do we see new forms of A.I. being rolled out? It’s for things like detecting benefits fraud. We saw a project called MiDAS in Michigan, which was supposed to streamline their system for identifying fraud in unemployment benefits. The only problem is, over 90% of the fraud that they identified never happened.

Thousands of lives were upended: people who went bankrupt, people who committed suicide, because they had had their entire financial life ruined by an algorithm that was there to solve a problem that really wasn’t that pronounced to begin with. There wasn’t this acute need to deal with benefits fraud. What there was, was a software vendor with a technology that they thought they could make money on by selling it to the state, and that they were able to sell to the state and have the public bear the cost. And we see this in immigration enforcement, with new software for tracking individuals who are waiting for their asylum hearings.

We see this in new forms of policing tech, like predictive policing technology, which claims to have a crystal ball to see the future. But in reality it is just giving us a crude map of where we’ve abused our police powers in the past, and replicating a lot of that historical human injustice. And so there are ways that A.I. can solve real problems. There are ways it can be a tool for good. But a lot of the most profitable ways of selling A.I. right now are selling us a solution to a problem we don’t have, and doing it in a way that costs us a lot of money.

Damon Beres

Those are excellent points that bring me to a question for Rumman, around algorithmic auditing and this entire process. There have been issues with the technology you’re describing, and Mona has written in the past about these bogus audits that are supposed to give a seal of approval to algorithms in some of the contexts you’re describing. Functionally, there are some problems there. So I would love to ask Rumman: What can we do about this issue of auditing, and what is the proper way to get oversight into some of these algorithmic functions?

Rumman Chowdhury

What a great question. And I feel like both Hilary and Albert have raised very relevant points, and I will try my best to systematically talk to all of them. So first, I want to speak to Hilary’s point about funding. That is, frankly, an under-discussed issue: where companies are getting their funding from. And frankly, as a startup founder myself, your hands are a little bit tied. You’re limited by the vision of the VCs that are funding you and their incentives. Partly because of that, I’ve actually started a fund of my own. It’s called the Parity Responsible Innovation Fund, and the goal is actually to encourage the next wave of responsible innovation and ethical-use technologies. Just a little plug right there. Because I ran into this issue as well, trying to start an algorithmic audit company.

But to your specific point, Damon, there are a lot of ways that one can define the term “audit.” And I have seen everything from a checklist to what I was building, which is literally a patented natural language processing technology designed for code review and data analysis.

And frankly, on the surface, you can’t tell who’s selling which, and we have no government regulation or guidelines. One thing I worry about quite a bit is regulatory capture: regulation being passed that calls for audits without actually telling us what an audit is, which is what we have seen nearly universally. I will also then ask Mona, because Mona and I have worked a little bit on this, thinking specifically through public use of technology and government procurement processes. Because this is something that’ll be very important in the public sector, and it’ll probably be the first place we see significant and major audits happening.

Damon Beres

Yeah. Mona, do you have a response to that?

Mona Sloane

I do. I have a response to both the auditing question and the procurement question, and because Rumman gave such a wonderful cue, I’m going to talk about procurement just for a second. Albert has alluded to this as well: we actually see a lack of literacy among public agencies when it comes to both the technical mechanics of these systems and their social impact, and how those intersect, and what impacts these technologies can have. And that’s a very acute problem.

As I said earlier, we’re seeing public agencies facing strapped resources even more than they used to, and a need for increased efficiency. And we see a need for procurement officers to actually know a little more about these systems. Rumman and I ran a project together with the IEEE over the last year, the A.I. procurement project, which convened a group of wonderful experts who talked about how we can actually kick off processes by which procurement officers can generate new knowledge and capabilities in that space.

And procurement actually is a really great opportunity to give more definition and body and shape to the question, “What can an audit be?” Because it could be baked into the procurement requirements that regular audits happen for any technology that’s being purchased. And then we can talk about what this should look like beyond a purely technical audit that perhaps only checks if the algorithm works as intended, which is very often insufficient, because we face these scaled-up social impacts. And we can talk about it a little more if you want.

Damon Beres

Yes. I would love to talk about that a little bit more. Please, tell us.

Mona Sloane

Sure. So let me talk about how we should audit systems, and how we should perhaps develop a more holistic approach to that. So not only ask, “Does the system work as intended? What’s the goal of the algorithm, and does the system actually deliver on the goal or the brief?” but also, “What are the assumptions that are baked into the technology?” And so Rumman and I, together with a colleague, Dr. Emanuel Moss, have sat down and thought about how we can do that, specifically with regard to hiring algorithms, which are increasingly used in both the public sector and the private sector.

And a question we should ask is, what assumptions are we making, for example, around constructs such as personality? Is personality something stable enough that a system can actually reliably detect it? And is personality something that we can actually use to predict job fit? Those are considerations we need to integrate smartly and effectively into audits, because if we don’t, and we just check if the system is used as intended or works as intended, we circumvent these more thorny questions and we infrastructuralize hugely problematic technologies that can have eugenicist underpinnings, for example.

Damon Beres

Yes.

Albert Fox Cahn

I think Mona’s point about the personality types is so crucial. There are so many A.I. products out there that are being sold as a way to detect something about human beings that just doesn’t exist. This is high-tech phrenology. It’s saying, “Well, we can tell you what sort of human being you are within X number of categories.” Well, human beings don’t fit into that number of boxes. Look at Apple’s recent announcement that it’s going to try to do mental health screening using A.I. as part of the iPhone, bringing this same assumption that this technology can predict some of the most subjective, unique human characteristics, in a way that is just profoundly dehumanizing.

It also maps onto so many historical legacies of bigotry and oppression. And so I think that we see this not just with employment, but with criminal justice algorithms that try to say, “Well, we believe you are at risk of committing a crime in the future.” And we’ve seen cases where people were visited by the police, because an algorithm said that they were likely to pose a risk in the future. 

And as a result, not only did people think that they were cooperating with the police, not only were there people pushing back, there’s an individual who (there’s a long piece about it) was shot twice, because of an algorithm that told police, “Go to his home. We think he is going to commit a crime in the future.” And for that, he is now paying this horrific price. I just really think that when we’re talking about which algorithmic systems we should be using, these sorts of audits are crucial, but there’s a whole swath of A.I. products out there where we should just categorically be saying, “This should not be allowed.”

Rumman Chowdhury

You’re raising a really great point. There are certain flaws with even the construct of how we think about audits today. One is that we tend to audit a model. And as Mona mentioned, in our work on hiring, a lot of these models do not exist in a bubble; they exist in a larger chain of models. So we have very little discussion of systems-level bias, because that would require understanding how we use or interact with a platform, or what other models may run beforehand and influence the output of this model. And then, to Albert’s point, there is rarely the fundamental question being asked: “Should this thing even exist?”

Usually, audit functionalities are brought in after something has been built. And I know I have absolutely pushed for this in all of my roles, and I know Liz has as well. Liz O’Sullivan, the new CEO of Parity, the startup I founded last year, is also pushing for integrated bias assessments and audits in the product development process. And those are two very different things: auditing a model that’s already been built, and creating the right process assessment and process audit, which would enable us to ask the fundamental question, “Should this technology be built?”

Damon Beres

Hilary, I’d love to hear your perspective.

Hilary Mason

I’m just going to build on that. So eight years ago, I founded a company called Fast Forward Labs, which I ended up selling to Cloudera. Part of the service we provided there was to be the nerd best friend to folks making A.I. procurement decisions. And so we were down in the weeds, including your colleague Emanuel Moss, who was a fellow with us for quite a long time, sitting there in the room while folks at these Fortune 500 companies were evaluating systems, hearing vendor pitches, and making technical recommendations.

But to agree here: there’s a set of products out in the market that frankly everyone knows are bullshit. And people buy them anyway, for complex reasons. They’re not wrong to look for tools and techniques to help them, but these products were never going to work.

Damon Beres

Can you be more specific about that? Tell us about some of these bullshit products.

Hilary Mason

I’m trying to think of some of the notable ones we actually looked at. Let me think if I can come up with a good one. So we spent a lot of time working on machine learning systems to support call center processes for three different large companies, one telecom, two banks. There were vendors out there who would come in and be like, “We have an A.I. call routing algorithm that evaluates each of your customer service agents and then routes the call to the one who can get the person off the phone as fast as possible.”

Damon Beres

Oh.

Hilary Mason

And then they could use that to evaluate the quality of those customer service agents. And so you see very quickly that there are lots of confounding factors and confounding issues there. There are also other issues: companies like Palantir would come in with a gloss over what is open-source software and a bunch of forward-deployed engineers, and make claims like, “You’re never going to understand how this works, but we’re going to find fraud.” And if you, as the buyer of the system, can’t even be told how it works, much less engage in a technical audit process, you have no hope.

And so I don’t have an answer. I have a lot of war stories that are better suited to a beer than a stage. And I just want to agree with the point running through all of this discussion, which is that it is very tempting, certainly as a technologist, to think this is a problem of technology. It is really not. It’s so far upstream from that. It’s about the market: why people are even looking for these solutions, why people are selling these solutions with no accountability and no requirement to even explain how they work. It’s a lot of that, and what we reward, too.

And I’ve seen this as well. I have a long history on both sides of the venture capital table, and so I see this in startup pitches all the time. “It’s an A.I.,” and I’m like, “Cool. Is it a person back there?” “Oh yeah.” “Okay.” There’s a ton of fraud out there.

Damon Beres

Right.

Albert Fox Cahn

And I really do think it’s amazing how much of the A.I. tech that’s bought is bought just to have this veneer of objectivity for what you know is a really questionable set of human decisions, so you don’t get sued. Well, now you can point to the A.I. You have—

Damon Beres

There’s data.

Albert Fox Cahn

Yeah. Now—

Hilary Mason

The point, I think, of the morning. Just, yes.

Albert Fox Cahn

Yeah. But there are so many lawyers out there … And I say this as a lawyer … who love the idea of having this really opaque system that you can hide behind: anytime someone sues you, you can just point to it and say, “That made the decision. That recommended the decision. Don’t sue us. We were just integrating its feedback.” And no one has figured out, in our increasingly bizarre system, how to navigate all of those issues when A.I. comes into the mix.

Rumman Chowdhury

It adds a layer of obfuscation, Albert, as you’re pointing out. And when we think about the balance of power, especially in the criminal justice system, what are we doing? We are adding another layer such that, if you wanted to contest some sort of an outcome, you now have the burden of proof to show that the thing is not working for you. So maybe this is no longer about a biased judge or not having adequate legal representation.

Now, somehow you have to be able to get the resources to prove that an algorithm … And again, going back to the point that people seem to defer to A.I. and algorithmic systems, because they see them as being so brilliant and so smart and so much better than all of us. I don’t know how the average person … to do that. And we don’t even currently have the public resources for people to be able to contest an algorithmic decision. And none of these issues are really, as far as I know, being considered as these systems are being put into place, especially in the public sector.

Damon Beres

Yeah, absolutely. And that brings me a little bit to a question that I want to make sure I ask. So we’re moving in a slightly different direction. So we’ve heard so much that is completely on target about the social implications of all this, the policy implications, and that there are many bigger issues than just the technology itself, per se. And insofar as it is a technology issue, though, as Rumman just said, there’s a black box issue. It’s hard to peer in and know precisely how decisions are being made.

Additionally, as I was gesturing toward earlier, there is this centralization issue, where some of these functions happen simply by ingesting a ton of an individual’s data and making some judgment based on that data, whether it’s the call center thing or something else. We could have a whole panel about that, because you can think of a million different reasons why that’s just not even a logical thing to do. But in any event, it does raise the question, in this context anyway, of whether we could move away from this “centralized” model of A.I. and consider a decentralized model of developing this technology.

So Unfinished, for those of you who weren’t here yesterday, so much of our project is based around decentralization, but specifically building new ethical models for technology, enabling an equitable society, a fair economy, a stronger democracy. And so we’re very interested in decentralized applications of technology. And Mona, maybe I can ask you, since we haven’t heard from you in a second, not to put you on the spot: How do we think about developing decentralized artificial intelligence technology? Is that possible when we’re talking about a function that has to do with so much data and the collection of data?

Mona Sloane

That is a very tricky question. And you did put me on the spot, literally. I’m going to come out and say, I don’t think that that is necessarily possible, at least when we take the definition of artificial intelligence in terms of the systems that we see today, which are all about scale, all about large-scale data collection, about what Shoshana Zuboff has called surveillance capitalism. In those terms, I think it is not really possible.

However, what I do think is possible is stepping back a few steps, as everybody has said on this panel today, and thinking about: What is the problem that we’re actually trying to solve, with a technology or without a technology? Who participates in the definition of that problem? What kinds of competencies, literacies, rights, and mechanisms for refusal and for recourse can be developed as part of developing a system? I think all of these strategies are important for thinking about how we can support infrastructures for predictive technologies that could be used in a perhaps not decentralized, but more socially responsible, way.

The reason why I’m saying that is because if you look, for example, at the manufacturing industry, there are certain problems that lend themselves very well to being solved with and through artificial intelligence, for example on the assembly line. Or if you look at renewable energy: there are A.I. systems that detect flocks of birds and supposedly stop wind turbines ahead of time, before the flock crashes into them. And there are systems integrated into agricultural processes, for example around grapes for wine, to make them more efficient.

So there are ways to use these systems smartly, sustainably, and in a socially responsible way, but we really need to look at how we can do that better.

Damon Beres

Any other thoughts on—

Rumman Chowdhury

I have some thoughts on this.

Damon Beres

Yes.

Rumman Chowdhury

So two things. One is, from a funding perspective, as I mentioned earlier with the fund, we’re really fascinated by decentralized tech, especially thinking through things like federated learning and encrypted ML. So one way to think about the term “decentralizing” is to ask: how can we share information in a way that is secure? Things like privacy-preserving machine learning technologies are one fascinating way to think about it. Another way, more of a grassroots way: in my role at Twitter, we hosted the first algorithmic bias bounty, where we drew on the history of security vulnerability bug bounties.
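(For readers unfamiliar with the idea: federated learning keeps raw data on each participant’s device and shares only model updates with a coordinating server. Here is a minimal illustrative sketch of federated averaging; the data, model, and numbers are hypothetical, not the systems Chowdhury describes.)

```python
import numpy as np

# Hypothetical setup: three parties each hold private data for the same
# underlying linear relationship y = 3x, plus a little noise.
rng = np.random.default_rng(0)
local_datasets = []
for _ in range(3):
    x = rng.normal(size=50)
    y = 3.0 * x + rng.normal(scale=0.1, size=50)
    local_datasets.append((x, y))

w_global = 0.0  # shared model weight, held by the coordinating server
for _ in range(20):  # communication rounds
    local_weights = []
    for x, y in local_datasets:
        w = w_global
        for _ in range(10):  # local gradient steps on private data
            grad = 2 * np.mean((w * x - y) * x)  # d/dw of mean squared error
            w -= 0.05 * grad
        local_weights.append(w)  # only the updated weight leaves the "device"
    w_global = float(np.mean(local_weights))  # federated averaging step

print(w_global)  # approaches 3.0; no raw data was ever centralized
```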

To be clear, this is not saying that algorithmic bias is a bug; frankly, often it’s a feature. But what we did in that challenge was open up an algorithm that Twitter had been using to public scrutiny. We provided a rubric and a way of grading and assessing people’s inputs, and we rewarded people who identified harms for us. It is not perfect, but it is certainly a step forward. And hopefully, with the folks that are in this room and listening today: I want to point out that one of my goals is to create a public community. This is not something I think Twitter should own, or any corporation should own.

I want to be part of building that, and I’m looking for folks who are also interested in building this public community, to enable these kinds of bias bounties moving forward.

Damon Beres

That’s excellent. Where should people go if they want to learn more about that?

Rumman Chowdhury

The best place to hit me up, as everybody knows, is always on Twitter.

Damon Beres

Yeah. I got her on the panel by DM-ing her on Twitter, true story. Well, actually, before we move on, and we just have a few minutes left: Hilary, we talked a little bit about the idea of decentralized A.I., and I was just curious for your thoughts.

Hilary Mason

Well, I’m going to echo what Rumman said, and also, just for everyone listening: the challenge that she and her team opened up really did spark a conversation across the machine learning community, and got a lot of people thinking … It gave other people permission to think about doing the same thing. And so they deserve a ton of credit for that. And I was just going to add that, yes, there are technical approaches, like differential privacy and federated learning, that give us the ability to get some of the benefits of a large amount of data without actually centralizing and controlling that data.

But mostly, you can’t think about the issue of centralized A.I. and the power of centralized A.I. without recognizing it is intrinsically tied to privacy and rights over personal data. And it’s tied to security, which hasn’t really come up on this panel, because there are also huge issues with security around inscrutable models that can be made to do things you may not want them to do. So I’ll stop there.
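(Differential privacy, which Mason mentions here, works by adding calibrated noise to an aggregate result so that no single person’s record meaningfully changes what gets released. A minimal illustrative sketch of the core mechanism, with hypothetical data and parameters:)

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_count(values, threshold, epsilon):
    """Release how many values exceed a threshold, with Laplace noise.

    A count has sensitivity 1 (adding or removing one person changes it
    by at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this query.
    """
    true_count = sum(v > threshold for v in values)
    return true_count + rng.laplace(scale=1.0 / epsilon)

incomes = [32_000, 45_000, 58_000, 61_000, 120_000]  # hypothetical records
print(dp_count(incomes, threshold=50_000, epsilon=0.5))  # noisy answer near 3
```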

Damon Beres

By way of a clumsy transition, security leads me to wonder about something else. Everyone in this room, I would assume, since you’re at Unfinished Live, has a certain level of vested interest in a conversation like this one. It is critically important, especially as we think about building equity, that these conversations are actually open to as many people as possible, to make sure that there is some measure of technical literacy. Or maybe, to flip the script on that a little bit, to develop technology in such a way that anyone can understand and engage with it.

And Albert, maybe I’d like to start with you with this question, which is just: in your view, as someone who thinks about policy, how do we open all of this up and make a more participatory system for decisions such as this?

Albert Fox Cahn

Well, I think part of the problem is that oftentimes, when you have these conversations in industry spaces, people are trying to be as … They obfuscate as much as possible, because the more buzzwords you throw out, the more you make your technology hard to understand, the less you show the secret sauce, oftentimes the further you can go in your sales pitch. And this is the exact opposite of what we need for democracy. But the nice thing is that, with a lot of these technologies, we can make them accessible to large cohorts and large audiences.

We can partner with civil society groups to come up with community-centered education programs. We do that a lot at the Surveillance Technology Oversight Project; we do partnerships with The New York Public Library, with schools. And this is just part of digital civics in this current moment, because these systems are making decisions that impact each and every one of us. So as a democracy, we have to have the tools to make the choices collectively about what we’re allowing to govern our lives, or we lose that control that is indispensable to having a republic.

And so I think it’s about having that investment in digital civics, having a much more simplified, straightforward discussion that avoids the buzzwords, and, really above all else, mapping this onto the existing forms of human bias and human harm. Because as Ruha Benjamin and so many others have written, technology is not neutral. It is replicating the biases and the power dynamics of the society we live in. So we don’t need to start from scratch. We just have to show the connections between how these systems work and the society that we already understand.

Hilary Mason

And make it boring again. That was just an idea, but the incentives in the marketplace are such that many companies get leverage out of obfuscation. We’re not all like that, but change those incentives. Make the technology the least interesting piece of the thing you’re trying to build and build it for a good purpose.

Damon Beres

Rumman, I think you’re going to get the last word here. So jump on in.

Rumman Chowdhury

Sure. I’ll also add, building on Albert’s point, that I’ve seen some really good use and leverage of human rights principles. So I guess, if there’s one place we should specifically draw from and understand as a good starting point for algorithmic ethics, something which should be core to it, it’s all of the work that’s been done in the human rights space. These are questions they’ve been tackling for literally decades: thinking through the use of technologies in a way that upholds our basic human rights, which have been codified and agreed upon.

So I’d encourage my colleagues in the community to do a little bit more work and research into some of the really wonderful insights from that space.

Damon Beres

Thank you. I want to say thank you to everyone for joining us here for this conversation this morning, because these are critical issues. And I hope that we can continue to develop engagement in these thorny topics and to continue to delve into A.I. as a critically important technology moving into the future. I would invite all of you to check out the work from all four of these amazing panelists. And thank you so much for joining us here this morning. I really appreciate it.