Professors Tejas N. Narechania and Rebecca Wexler on Governing Artificial Intelligence

Berkeley Law Voices Carry podcast

In this episode, host Gwyneth Shaw talks with Professors Tejas N. Narechania and Rebecca Wexler about artificial intelligence from two very different perspectives. 

Both are faculty co-directors of the Berkeley Center for Law & Technology (BCLT), Berkeley Law’s tech law hub and a leader in the field for more than a quarter-century. 

Narechania’s research focus is on the institutions of technology law and policy, including telecommunications regulation, platform governance, and intellectual property. He has advised the Federal Communications Commission, where he spent a year as special counsel, on network neutrality matters, and is the co-director of the Artificial Intelligence, Platforms, and Society Project, a collaboration between BCLT and the CITRIS Policy Lab. 

In January, he participated in a White House meeting of experts on competition policy and AI, and his recent work on AI was cited in the 2024 Economic Report of the President.

Wexler’s teaching and research focus on data, technology, and secrecy in the criminal legal system, with a particular emphasis on evidence law, trade secret law, and data privacy. Her scholarly theories have twice been proposed for codification into federal law and litigated in multiple courts.

In the spring of 2023, she was a senior policy advisor at the White House Office of Science and Technology Policy, and earlier this year she testified before a U.S. Senate Judiciary Committee subcommittee hearing on the use of artificial intelligence in criminal investigations and prosecutions. 

Here’s a selection of their recent work. 

Narechania:

An Antimonopoly Approach to Governing Artificial Intelligence

Inside the Internet

Machine Learning as Natural Monopoly

Internet Federalism

Wexler:

Life, Liberty, and Data Privacy: The Global Cloud and the Criminally Accused

Privacy as Privilege: The Stored Communications Act and Internet Evidence

Privacy Asymmetries: Access to Data in Criminal Defense Investigations

About:

“Berkeley Law Voices Carry” is a podcast hosted by Gwyneth Shaw about how the school’s faculty, students, and staff are making an impact — in California, across the country, and around the world — through pathbreaking scholarship, hands-on legal training, and advocacy. 

 

Production by Yellow Armadillo Studios. 


Episode Transcript

[MUSIC PLAYING] GWYNETH SHAW: Hi, listeners. I'm Gwyneth Shaw. And this is Berkeley Law Voices Carry, a podcast about how our faculty, students, and staff are making an impact through pathbreaking scholarship, hands-on legal training, and advocacy. I have two guests for this episode: Berkeley Law Professors Tejas Narechania and Rebecca Wexler.

We’re going to talk about artificial intelligence from two very different perspectives. Both of these scholars are faculty co-directors of the Berkeley Center for Law and Technology or BCLT, our tech law hub and a leader in the field for more than a quarter century. Narechania’s focus is on the institutions of technology law and policy, including telecommunications regulation, platform governance, and intellectual property.

He has advised the Federal Communications Commission, where he spent a year as special counsel on network neutrality matters, and is the co-leader of the Artificial Intelligence, Platforms, and Society Project, a collaboration between BCLT and the CITRIS Policy Lab at the Center for Information Technology Research in the Interest of Society and the Banatao Institute, which draws from expertise on the UC campuses at Berkeley, Davis, Merced, and Santa Cruz.

In January, he participated in a White House meeting of experts on competition policy and AI. Wexler's teaching and research focuses on data, technology, and secrecy in the criminal legal system, with a particular emphasis on evidence law, trade secret law, and data privacy. Her scholarly theories have twice been proposed for codification into federal law and litigated in multiple courts.

In the spring of 2023, she was a senior policy advisor at the White House Office of Science and Technology Policy. And earlier this year, she testified before a US Senate Judiciary Committee subcommittee hearing on the use of artificial intelligence in criminal investigations and prosecutions. Welcome, Tejas and Rebecca. And thanks for joining me.

TEJAS N. NARECHANIA: Thanks.

REBECCA WEXLER: Thanks for having us, Gwyneth. I’m excited for the conversation.

GWYNETH SHAW: It feels like artificial intelligence is everywhere these days. And also that there's a Wild West quality to its growth. Let's start with the regulatory angle. Tejas, what are some of the concerns about managing AI from a policy perspective? And what are some of the potential solutions?

TEJAS N. NARECHANIA: So that’s a big question. And I think there’s lots of different angles to the regulatory questions that are at issue when it comes to AI. So I’m just going to rattle off a few of these, and then maybe I’ll emphasize a couple that I’m most interested in. So I think one of the areas that’s really big right now is people are thinking about copyright and artificial intelligence.

That is, what are the copyrights and copyright ownership interests that attend to the data that are used to train large AI models or language models, diffusion models. So that’s one, copyright. I think another that relates is privacy. So again, there’s a lot of data that’s input into these AI models. How should we think about the privacy implications of that data? Where it comes from, who owns it, what it means for surveillance, commercial surveillance.

There are long-standing risks of bias and discrimination in AI. Whether and how the data that are used to train these AI models are representative of the populations that they are supposed to serve, and whether or not they serve up results that are discriminatory in one way or another.

And then there's a competition angle. And I think more people are starting to pay attention to the competition angle, which is that if you take a look at which companies are playing in this space, there are fewer and fewer of them, they tend to be more concentrated, they tend to be overlapping. And that, I think, has implications for both competition and innovation.

And one thing that the EU has, I think, introduced is a notion of risk also, which is how do we think about the risks that attend to certain uses of AI. So there’s a lot of different ways that you can think about this problem. From copyright to privacy to bias to competition to risk.

I would say that most of my work most recently is focused on competition. And how we think about the structure of the industry that is generating these technologies. What we can do to make sure that that industry remains competitive and that we get the benefits of competition. That we don’t have a single provider that is just going to give us whatever AI system they decide we ought to get. And is going to use our data in whatever way they decide they want to use it.

And that we have multiple providers. Or if we can’t get multiple providers, we have regulations that ensure that we as the public get the public benefit of these systems.

REBECCA WEXLER: Can I just jump in and say that this question about multiple providers and multiple systems is also super pertinent in the criminal legal system, if we’re trying to start relying on AI systems for evidence, for instance. There have been examples where two automated systems, not AI, but manually coded automated systems came to different results, different conclusions about whether or not a suspect’s DNA was found in a crime scene.

And so the fact that there were two systems available, and they could each be tested, exposed some concerns we might have about the unreliability of those systems. Because, well, if you've got two systems and they come to opposite or at least different conclusions, then which one's right? Or is neither of them right? And if we only had one system, you wouldn't have the opportunity to have that kind of comparison.

So just identifying points of overlap where I think this issue of competition actually has super important downstream effects for the criminal legal system as well as other places.

GWYNETH SHAW: Rebecca, you’ve spent years raising concerns about how technology and the criminal justice system can put defendants at a huge disadvantage and you just highlighted one of those. And newly developed AI tools seem to be following that same trajectory. What are some of the issues that you see with what’s happening now?

As I mentioned in the introduction, you testified before a Senate Judiciary subcommittee about this issue just a few weeks ago. What are some of the tools that are continuing to be problematic based on some of the things you've been thinking about for quite a while?

REBECCA WEXLER: Well, I actually thought the way you framed it was really helpful, which is: what is continuing to be problematic, that was a problem before and is still a problem when AI is introduced? Introducing AI has raised the salience of those issues, has exposed them, put a new spotlight on them. It has brought out the fault lines that have been there but that we haven't yet fixed, and increased the urgency, the salience and the urgency, of resolving them both for AI and for other, prior technologies.

Manually coded technologies, or just technologies that don't even use software, or concerns we might have about expert testimony, even if it's purely human-based. So AI is, in a way, an opportunity, and it's shining a spotlight on these issues that are relevant to AI but not necessarily unique to AI.

So a number of issues that I’ve talked about before. One is are technologies used to generate or analyze evidence in the criminal system subject to peer review? You would think that this was a basic requirement. And in fact, the Supreme Court told judges doing gatekeeping that peer review was one factor they should consider in whether to admit expert testimony based on an expert system like an AI system. But it turns out that some vendors of software technologies that are currently deployed in criminal cases have used contract law to block peer review.

So as an example, colleagues here at UC Berkeley, Professor Rediet Abebe in the Computer Science Department and Professor Andrea Roth in the Law School, and also other members of a team of researchers that we put together, including forensic genetic genealogists and other people who are qualified with relevant expertise to analyze some of these tools, reached out to a vendor and said, look, NIST, the GAO, the PCAST report, the National Academy of Sciences have all identified a dearth of independent peer review by people who don't have a stake in the outcome, who aren't the developers themselves reviewing the technologies.

Here we are. We’d like to do that independent peer review and quality assurance. And one of the vendors wrote back and just gave us the answer, quote, “We do not provide research licenses,” end quote. So the risk that vendors will block truly independent quality assurance validation or review is a risk that has existed for a long time, but increases the more that we rely on private developers who have incentives to keep things close to the chest. And AI is increasing that reliance. That’s one example, but there’s lots of others.

GWYNETH SHAW: You mentioned the difficulties with this and the problems with this can go all the way back to just human testing– human expert testimony. Like probably most people, I listen to too many true crime podcasts. And it’s certainly true that an expert who’s not particularly good at their job can contaminate a whole case.

But are there specific new challenges to using technology-based evidence that– I know you’ve been thinking about this for quite a while. What are some of the ways in which introducing technology exacerbates that? Because my sense is that you put information like this in front of a jury, and they might be more likely to think it has merit or is true because it is using technology, which we’ve all been bred to feel like is a gold standard or somehow better than humans.

Are there specific things that are more difficult to unpack because they’re using technology? Not just because companies aren’t willing to share them but because they might have more credibility with juries or law enforcement.

REBECCA WEXLER: That's a great point. I think there is a concern that jurors are going to say, oh, this went through some fancy tech. It's a computer system or it's an AI system, it has a shiny mystery to it. And they're going to just defer their own judgment to that system. Whereas with an expert human, they might be worried, maybe is this person a charlatan, a purportedly expert human?

Or they might be somehow a little bit more skeptical and use independent judgment. But I do think there's a risk of undue deference from a juror, even for experts. Because they come on, and they say, I've got all these credentials. But again, these risks may be things that have been happening for a long time; the technology then increases the concern, and it should increase the concern, and serve as a call to say, we've got to fix this now.

There are some things that are unique with technology. And so an example would be now you have situations where human users of a system might be testifying about the system’s output but not really understand themselves how the system works. So if you had a human expert applying some methodology that they’ve become trained in, like they’re extremely good at comparing and contrasting manually just with their own eyesight and understanding, say, patterns of a fingerprint from a crime scene and from a suspect sample.

They can explain or they should be able to explain, I'm looking over here at this pattern and that pattern. And now you can see it, and this is how I do it. And this is my method. And so if that person is up on the witness stand, the other side can cross-examine them. Ask about where they might have gone wrong or where they might have biases or where they might have made a mistake or where an alternate judgment could have been made by a different examiner and could they explain why they chose it this way or that way.

And they should have an answer to that. Or if they don't have an answer to that, expose that to the jury, and it supposedly undermines their credibility. But when a human witness is relying on technology that was designed by somebody else, who might be keeping that methodological information as a trade secret, for instance, and not telling their users exactly how it works, then you can have a situation where you get a user up on the stand. And there are cases like this.

Police officer witnesses, say, using Cellebrite machines. These are sophisticated machines that extract data from a mobile device or a tablet. And they get up on the stand, and they say, well, I used this system. I put the device in, I pushed a button, out came the data. And you're like, well, how did that work? How do I know that's a reliable copy of the data, that it didn't alter any of it, didn't tamper with it?

And they say, well, I used it properly. I can tell you all about the user instructions. And I went to training, and I say, push this button, not that button. But they don't know anything about how the machine works inside. And courts, and this is also drawing on work from my wonderful colleague Andrea Roth, who's written about this in terms of machine testimony, but courts, over time, have said, you really don't have a right to cross-examine the developer of the system the way you do to cross-examine the user of the system.

The Sixth Amendment right to confront the witness really only reaches the human who finally deploys and attests to the authenticity of data coming out of a machine. And it doesn't apply to the designer or the developer of the machine, because courts say that the machine's process itself isn't an assertion that would trigger the hearsay rules or the Sixth Amendment confrontation right.

So you have these cases now where literally, you've got users of a Cellebrite machine getting up, and they say, here's the thing that it spit out. And there's nobody to cross-examine about how the machine actually worked. Again, the more that we interject machine systems, automation or any kind of AI would be included in this, AI systems that do the forensic analysis for the human user, the less we are going to be able to cross-examine that whole process on the stand.

GWYNETH SHAW: I want to come back to the competition angle for just a second. Because Tejas, you wrote a piece for Politico recently and have a forthcoming paper, I think, expanding on that. Talking about the need to preserve that competition that you already mentioned. And I wanted you to talk a little bit about this concept of the AI stack.

And how that works, and pull that apart a little bit for people who might not really see where the competition concerns are here because, I think, most of us, if you use a cell phone or you've got a cable provider, are accustomed to different companies working in different places, and especially where cable or internet is concerned, maybe having something of a monopoly. What are the downstream implications of that for AI? Because there are some big companies that are really taking up huge parts of the whole process at this point. Right?

TEJAS N. NARECHANIA: Yeah. Absolutely. And it’s a great question. There’s a lot in there. So one of the things that Rebecca talked about was this provider that says blanket, we don’t provide research licenses. And I think one of the things that’s really interesting about that is that the provider has the power to say that with, essentially, no repercussion.

That is, you might think that the government or the government acting on behalf of the public would have an interest in having a vendor that subjected its systems to peer review, opened it up to providers. So how is it that this entity is able to resist that requirement? And I think in this particular context, one of the things that we see is that the government’s choice to pick a vendor entrenches that vendor.

And so when the government chooses a vendor, they have to be, they being the government, the government entity has to be really careful: what is the contract that they set out? What are the requirements that they're going to impose on that vendor as a consequence of being the selected, favored single provider of this set of services to government, to law enforcement?

And so the government should think about, what are the data use requirements? Are we going to require that the provider share the data that it obtained from law enforcement uses with other would-be providers in order to make sure that we have more competition? Are we going to require, we the government are going to require, that the provider offer these research licenses?

And so I think one of the things that's important to pay attention to in this particular context is that our colleagues Ken Bamberger and Deirdre Mulligan, who's now the deputy CTO of the United States, at the White House's Office of Science and Technology Policy, have written a paper about procurement as policy.

When the government procures these technologies, it is setting regulatory policy for these technologies. And I think that's really important to understand, particularly because the government will often only select one. It picks a monopolist to be the provider. And so when it sets procurement terms, it is setting price and service standards for a would-be monopolist.

And so I think that's the competition angle that I'm most interested in, which is that for a lot of these technologies, and I'll talk about the stack in a moment, we are only going to have one or two or a handful. And so when competition is limited, we're going to end up with price effects. We're going to end up with cost effects. We're going to end up with quality effects.

We have this vision of a market. And it’s a little bit of a caricature. But the vision of the market is lots of providers are all competing to offer you the best widget. And so they’re going to compete on quality by giving you high quality widgets with the most bells and whistles. And they’re going to compete on price. So they’re going to keep undercutting each other on price. And you’re going to get the highest quality widget with the most bells and whistles at the least possible price.

And that doesn't work when you don't have the competitors. Because if you don't have the competitors pushing each other, then you just have the one single entity that's going to give you whatever widget they want, and they're going to charge you whatever price they want. And that's going to price some people out of the market, and that's problematic.

So now, how does that relate to AI? AI appears to us as a magic technology. You go to ChatGPT in your browser, type something in, get a response, it’s fun. But that’s just the top layer. So ChatGPT is just an application. And there are admittedly lots of applications. Lots of people are building lots of technologies that use AI, and AI in the broadest conception of the term, to develop these new applications.

Once you peek under the hood and look at what the technology stack looks like underneath it, you see a funnel that narrows pretty quickly. So lots and lots of applications. ChatGPT is one. There’s the Chevrolet chat bot, there’s the other customer service chat bot. There’s all of these applications.

But a bunch of them are all sitting on top of GPT. That is, there is only one model of language. And there are others, there’s Llama, and there’s a couple others, but we only have a few representations of language. Now, that should give us some pause to think about all of the ways that people think about writing and speaking and communicating.

And we have distilled all of that down into three statistical representations of the English language. And that's what we're going to use to generate a whole bunch of text going forward. I mean, that's a little worrisome, I think, that we only have three of those models. And then if you look beneath the stack, so ChatGPT is the application, GPT-4 is the model, GPT-4 sits on Azure's compute infrastructure. Azure is provisioned by Microsoft. Microsoft and OpenAI have a well-known partnership.

Azure’s compute infrastructure uses chips. So microchips. Those microchips are all essentially designed by NVIDIA. And we only have one real manufacturer of those chips. So then at the very bottom of this technology stack, from chips to computational infrastructure, to models, to applications, you have one provider that’s building the technology that all of this sits atop.

And that funnel, that lack of competition below the application layer, I think, has implications for what the quality of these models is going to be. To the extent we're worried about bias or discrimination or risk: what's the data that are input into these models? Who's getting it? Where is it coming from?

If Meta is one of the entities that's building one of these models, that gives Meta a huge leg up in terms of building the model, but also locking out other would-be competitors. We only have a few companies with computational infrastructure, Amazon, Microsoft, Google, that's robust enough to really train one of these super large language models on the scale of GPT.

So as the resource requirements go up, competition goes down, and consumers end up with less choice, potentially higher prices or, I think more precisely, higher costs imposed on them: costs in terms of the prices you pay, but more realistically, costs in terms of quality loss, costs in terms of losses to your privacy.

And so given that, we either have to think of ways to induce more competition into the market or think of ways to regulate these providers so we can mimic the benefits of competition. That is, if we think competition will give us high-quality products, then we need to set really rigorous quality standards.

We need to maybe do more auditing for bias and discrimination. We need to do more risk assessments. We should have better privacy regulation that ensures that companies don’t just say if you’re going to use this thing, we’re going to take all this data from you and use it to develop the next version or the next new thing. And that consumers have some volition in what they are charged for using these technologies, and that the technologies actually do what we expect them to do.

REBECCA WEXLER: Can I jump in with a naive question about this, Tejas? Which is, is some of what you're concerned about, might it be an inevitable side effect of something we might think of as a good thing, which is standardization? And so when I think about, for instance, just like the outlet in the wall here, there's the three-pronged plug. And it really doesn't matter how many companies create wall outlets.

They're all going to look alike. And so if there's some bias built in there, like it will only allow products from one region versus another, or some limitations on performance because of that, that's just something we deal with because it's so useful to have identical wall outlets.

TEJAS N. NARECHANIA: So that’s an interesting question. I think we typically think of standardization as a way to improve competition in that once you’ve developed a standard, if you have a standard wall outlet or you have a standard API or you have a standard protocol for internet transmission, then you invite more competitors to use that protocol.

And ideally, those competitors can interconnect or interoperate in a way that allows more participants to flourish in the market. I think the concern that you’re alluding to is a slightly different one, which is that when we are creating those standards, we should be extra vigilant to make sure that we don’t build bias into the standard itself.

That is that the standard doesn't undercut the regulatory benefit that we're trying to achieve. And I think it's a good and cogent question, which is, I assume that it's a regulator, a government regulator, that's going to set the standard. Now, we have to ask a question, which is, are we more worried about a risk of error coming from the regulator? That is, the regulator sets the wrong standard or they make the wrong procurement decision. Or are we more worried about the risk of error from the market?

Are we more worried about some government entity or NIST in an audit making a wrong call? Or are we more worried about what OpenAI or Microsoft might, OpenAI or Microsoft or Amazon or Llama or whatever, what they might do wrong? And I think in general, when we have a market with lots of competitors that's thriving, we tend to say the risk of error in the market is low, because the market will correct: consumers will discover the error, they'll switch to different providers, or the providers will have to fix that problem in order to stay competitive.

But in a market with fewer competitors, we should be much more worried about risk of error in the market. Because it is much harder to course correct. And so maybe that's a reason to favor the regulator in that context and say, we can lean on the democratic processes, the participatory processes that inform the regulatory process. We're going to have lots of people, lots of subject matter experts, lots of computer scientists and peer reviewers involved in the processes at NIST or OSTP or at the FTC and all these agencies that are going to help the agency get it right.

And they won't get it perfect. But I think, and the hope is, that they will do better than an unregulated, non-competitive market.

GWYNETH SHAW: What are some of the solutions that each of you think are possible? So Rebecca, you were in Washington, you were working at the Office of Science and Technology Policy last spring. What are some of the things that you think are realistic to expect? Because I take your point, Tejas, this has been a relatively unregulated environment. And I think particularly with things like generative AI, which are moving very quickly and doing all kinds of really fascinating things, but you can very easily see how quickly they could do not great things.

What are federal regulators and policymakers and scholars like yourselves who are participating in these discussions, what’s percolating?

REBECCA WEXLER: I mean, I'm happy to chat a little bit about some of the things that are coming down the pipeline from what I was working on at OSTP. And actually yesterday, I was speaking at a National Academy of Sciences panel on DNA probabilistic genotyping software, which is maybe a precursor to some of the ways that AI is going to be used in the forensic space.

But first, I just want to say, I’m skeptical. I’m skeptical because this idea that technological change happens really fast undermines the authority of slower-moving oversight bodies like regulators or even courts and the adjudicative process to provide the guardrails that we need to properly govern risk cases for the technologies.

This idea maybe comes from Moore's law, which is the only quantifiable version of it that I've ever heard. More generally, I don't think it's quantifiable, the pace of technological change over time. How do you say that AI is moving faster than railroads were moving or electricity was moving? I think it's a non-falsifiable claim that, politically speaking, undermines law.

So it undermines legal intervention to control the development of the technology. And circling back to the beginning of our conversation, where you were asking, well, what are some of the issues with AI in the criminal system? And my response is, there are a lot of issues with introducing AI in the criminal system. Guess what? These are issues that have existed in the system for quite some time.

That means that fixing those issues is not reactive to some fast-moving problem. These are issues about, for instance, how should we approach the possibility that police investigative methods, or the methods for investigating or analyzing forensic evidence of guilt, are racially biased.

One of the concerns that Tejas raised is, well, maybe we should be concerned about racial bias in AI. We absolutely should be. Yes, there have been studies showing that face recognition technologies perform worse on people with darker skin. That matters. And guess what? Thinking about this, I'm realizing that this is not a new problem at all.

And so if you're relying on DNA technologies, DNA is a very accurate technology seen from one perspective. It's very accurate. But in the whole holistic system of the criminal investigative space, we collect DNA more from certain communities, Black and Brown communities, who are subject to disproportionate arrest. And so our DNA databases are disproportionately filled with people's DNA from certain communities.

So if you run purportedly racially neutral DNA forensic software on crime scene samples and you match it against databases that are disproportionately filled with DNA from some communities, then those communities are going to bear a disproportionate burden of false positives from that technology.
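To make that base-rate point concrete, here is a minimal sketch in Python. The numbers are entirely hypothetical, chosen only to illustrate the arithmetic Wexler is describing: even if the matcher has an identical per-comparison false-positive rate for every profile, whichever community is over-represented in the database absorbs more of the false hits.

```python
# Purely hypothetical numbers, used only to illustrate the base-rate point.
# A uniform per-comparison false-positive rate still produces a skewed burden
# when the database over-represents one community.

false_positive_rate = 1e-6          # same rate for every profile in the database
database = {
    "community_a": 800_000,          # over-represented due to disproportionate arrests
    "community_b": 200_000,
}
population_share = {"community_a": 0.3, "community_b": 0.7}

# Expected false hits per database search scale with how many profiles
# from each community are in the database, not with population share.
expected_false_hits = {
    group: count * false_positive_rate for group, count in database.items()
}

for group, hits in expected_false_hits.items():
    print(
        f"{group}: ~{hits:.2f} expected false hits per search, "
        f"despite being {population_share[group]:.0%} of the population"
    )
```

The skew in this toy example comes entirely from the database's composition, not from any bias in the matching software itself.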

This is not a new problem. So yes, we should be very concerned about racial bias in face recognition before we use it in investigations or as evidence of guilt at trial. And the raised awareness of that around AI should also start to provoke these other questions. How comfortable are we with the racial bias embedded in, perhaps, all or at least many, many, many of our investigative and evidentiary methods?

Part of this is pessimistic, and part of this is optimistic. The pessimism is to say, these problems have been around for a long time. They're hard problems to solve. The optimistic version is, we have time to think about them. We have time, we've had time, we have experience with it. And in the legal community, the regulatory community, the judicial community, we have the expertise.

We do. These are our questions to solve. How should the evidence rules treat racially biased investigative methods? That is our domain of authority; we can do it. And don't let somebody tell you that the technology is moving too fast for you to do your job. OK. That's my spiel about speed. In terms of things coming down the pipeline, when I was at OSTP, I was working specifically on helping to implement President Biden's executive order on effective, accountable policing.

And one of the things that that massive executive order did was require OSTP to work with other agencies and in interagency processes to develop best practices and guidelines for law enforcement use of certain advanced technologies, including biometric technologies, face recognition, DNA, all of which can incorporate AI inside of them.

And that process is developing. So the National Academy of Sciences workshop I was participating in yesterday is part of what Tejas talked about: agencies gathering expert views from lots of different perspectives, bringing them into a report, and then developing what, hopefully, we'll see soon, best practices and guidelines for law enforcement use of the technologies.

Now, federal guidelines can be super influential just because they're out there and they're an authoritative source, and the federal government has the resources to gather all those expert views. And so even if it's not binding on, say, state, local, territorial, or tribal law enforcement, those folks might turn to the federal guidelines and adopt them voluntarily. Because they say, hey, they put a lot of resources into this. We don't have those resources to do it from the ground up. Let's rely on them.

So it's influential anyway. But on top of it, the federal government can also condition grants that it provides to local, state, territorial, and tribal law enforcement agencies to help them purchase these technologies, AI and other technologies. And it could provide conditions on the gifting of those grants that do what Tejas was talking about earlier: use the procurement process and say, if you're going to use federal dollars to procure these technologies, the technologies are going to have to comply with some of these best practices and guidelines.

And the best practices guidelines could include things like, hey. You can only purchase from vendors that subject their tools to peer review and don’t use contract law to block it. Or you can only use federal dollars to procure technologies from vendors who agree that they’re not going to assert trade secret claims to impede discovery of relevant evidence in a case. So those are some levers that I think can be helpful solutions.

GWYNETH SHAW: Tejas, I want to hear your answer to that question. But after listening to Rebecca, I'm thinking that there are some legacy, if not mistakes, then lapses, in a lot of facets of this, in that I'm old enough to remember when telephone companies were very heavily regulated. I'm old enough to remember breaking up those companies and the changes that were supposed to spur innovation and spur competition, which is a mixed bag.

I know you've written a lot about broadband regulation and access. Are there some overhanging similarities from your side of the equation, too? That kind of, if you address problems that have been flagged by researchers like yourself for quite a while, it has the effect of also impacting the AI space?

TEJAS N. NARECHANIA: Huh. That's interesting. It's a really interesting long-view arc, which is that in this space, we have had a pretty consistent trend toward deregulation over time. That is, we had a very highly regulated, natural monopoly style communications platform.

And we deregulated that in the Telecommunications Act of 1996, but maintained some amount of regulation in order to make sure that the telephone network continued to work. And then on top of that telephone network and the cable networks and other networks, we built an internet. And that the internet as a communications platform has been comparatively less regulated than the telephone network was.

And then connected to the internet on the edges are these computers and servers and this computational infrastructure. And that computational infrastructure is even less regulated. And then there are applications that are being built on that computational infrastructure that now are these large language models, and these other large AI systems that have the risks that Rebecca was talking about.

Risks of bias and discrimination that profoundly affect us as humans, our relationship to technology. We feel the burdens of surveillance all the time. We are being watched, our data is being collected, what is it being used for? There are so many companies that have profiles of who we are, and use that information to sell us things that maybe we don’t want or that we didn’t know that we wanted. And sometimes that’s good, and sometimes it’s not.

And so I think it’s a really interesting arc. Maybe one thread that you’ll see in my work has been to try to push back against that arc and to try to find the places where our usual forms of market governance have not been working. And we need to do things, I think, to try to address some of those market failures.

And I think we can identify them in places. We can identify them in places in the AI stack. Some of my other work looks at broadband and says, OK. When we get connected to the internet, many of us only have one viable choice. And as a consequence, those people pay more. And where are they?

Well, they happen to be disproportionately located in rural communities and poorer communities and communities of color. And so here you have a set of people that are being made to pay monopoly prices for a foundational communications technology. Should we fix that? Should we address that?

I think the answer is yes. And we can think of tools to address those concerns. So it's a really interesting question. And I guess one of the tools that I've proposed in the broadband context is, where there is a monopolist and it's a foundational technology like access to the internet, we should regulate the rates. And we should make sure that those people who are facing a monopoly provider don't pay any more than those of us that have the benefits of competition.

And tying this back to your original question about solutions, I think that's one of the things that I also think about in the context of AI as one of the solutions. Again, I think in terms of cost regulation and service standards. Can we make sure that all of us get a basic standard of service? And what that standard is, I think, is really complex to define.

But I think with enough democratic input, we can come up with some reasonable standards: this system is not going to discriminate disproportionately against Black and Brown populations, or this system is going to be reasonably accurate when it's used for law enforcement purposes, and this system is not going to extract too much from us in terms of data and privacy.

That is, we can think in terms of cost regulation and service quality. I think another thing that you might consider, if you're trying to induce more competition into the market, is interoperability. And interoperability, I think, is especially operationalized through federated systems of learning, or federated AI models, where you have multiple providers, all of whom collect data and then use that data to train a single model.

Now, there are concerns about consolidation in that context. But I think what's nice is that you get the network effects of the data, which is everyone's data are used to train this model, without accruing the benefit to any one single provider. That is, you reduce the risk that the market will tip in favor of a single provider.
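As a rough sketch of the federated idea Narechania describes, assuming a simple federated-averaging scheme with invented providers and data: each provider fits a model on data it keeps locally, and only the fitted parameters are pooled into one shared model.

```python
# A minimal federated-averaging sketch (illustrative only): several providers
# each fit a tiny linear model on their own local data, and only the model
# parameters are pooled, so no single provider hands over its raw data or
# captures the whole data advantage for itself.
import numpy as np

def local_fit(x, y):
    """Least-squares slope and intercept on one provider's local data."""
    slope, intercept = np.polyfit(x, y, deg=1)
    return np.array([slope, intercept])

rng = np.random.default_rng(0)
providers = []
for _ in range(3):                      # three hypothetical providers
    x = rng.uniform(0, 10, size=200)
    y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=200)   # same underlying signal
    providers.append((x, y))

# Each provider computes an update locally; only parameters leave the provider.
local_params = [local_fit(x, y) for x, y in providers]

# Federated averaging: the shared model is the mean of the local updates.
shared_model = np.mean(local_params, axis=0)
print("shared slope, intercept:", shared_model)
```

In this simplified picture, every provider's data improves the shared model, but none of them walks away holding the pooled dataset.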

That way, you can sustain competition for longer. Another concern we see in the AI technology stack is that concentration at one layer will allow a provider to leverage that power into another layer. That is, if you have control of the model, then you can give your application favored access to that model, or you might discriminate against a different application that competes against yours.

This is a familiar story. We’ve seen it in net neutrality, you see lots of complaints about it with Apple and iPhone and which apps get access to the App Store and which ones don’t. The ones that compete with Apple’s apps seem, in some cases, to be disfavored. And so you could imagine a nondiscrimination regime that says, OK. If you’re going to offer a model or if you’re going to offer a computational infrastructure, we’re going to impose rules of non-discrimination.

So that way, you don't favor your own offerings and disadvantage competitors that sit on top of your platform. So those are some of the solutions I'm thinking of, in terms of cost and service standards, interoperability, and non-discrimination rules, among others.

GWYNETH SHAW: Listening to the two of you talk, I’m, once again, struck by how valuable it is to have scholars like yourselves thinking about these things from a more objective perspective. You’re not a developer, you’re not looking to sell your product, and you’re also not a politician or a regulator who has to answer to other different incentives.

It's really, really interesting to listen to you talk about it not from the perspective of, well, this is technology that's automatically amazing because it's technology. And I think it's one of the things that inspired BCLT from its beginnings: to be a check of sorts on some of these things that we're developing. It's just really interesting to listen to your perspective.

So I hope our listeners can really take away that value too in raising these issues and pushing regulators and policymakers and developers to be thinking about the ways in which secrecy and those closed boxes really aren’t necessarily the best thing. Thanks so much to both of you for being here. And if you want to know more about Professors Wexler and Narechania and their work, check the show notes for links to their papers and projects.

And thanks to you, listeners, for tuning in. Be sure to subscribe to Voices Carry wherever you get your podcasts. Until next time, I’m Gwyneth Shaw.

REBECCA WEXLER: Gwyneth, thank you so much.

TEJAS N. NARECHANIA: Thank you so much, Gwyneth.

REBECCA WEXLER: It was really a pleasure to join.

[MUSIC PLAYING]