
By Gwyneth K. Shaw
As a scholar focused on the intersection of law and technology, Berkeley Law Professor Tejas N. Narechania has had a front-row seat to profound shifts fostered by Silicon Valley’s innovations.
“The technologies have changed dramatically: One day we’re talking about internet access and network neutrality. Then the next day we’re talking about NFTs,” he says. “And now we’re talking about AI.”
But as each hot new idea or gadget has grabbed funders and headlines — from broadband to bored apes to ChatGPT — Narechania has kept his eye on the big-picture questions. Those, he says, have remained fairly steady.
“How do we balance incentives and structures that give rise to new innovation with our desire to make sure that these new technologies are available to — and work for the benefit of — everyone?” he asks. “That’s still the question, no matter whether we’re talking about patents and copyrights, broadband policy, or AI.”
And Narechania, a faculty co-director of the law school’s Berkeley Center for Law & Technology (BCLT), has stayed at the forefront with up-to-the-minute research and teaching. Two of his most recent works highlight that reach, offering up a prescription for restoring the promise of the early internet and a framework for regulating AI using longstanding antitrust laws.
“I think innovation is incredibly important. It’s why I love technology law and policy,” he says. “It is important, I think, to be mindful of how innovation can work with the other values prized by our society and our legal systems, and how innovation might work against them.
“That is, innovation can thrive in a market that is shaped to protect other values too.”
Enhancing the ‘Enhanced Internet’
In “How to Save the Internet,” which will be published soon in the Berkeley Technology Law Journal, Narechania and his co-author, UC Berkeley Electrical Engineering and Computer Science Professor Emeritus Scott Shenker, argue that the internet’s evolution has effectively allowed the current system to escape its initial regulatory bounds.
These behind-the-scenes upgrades, built as internet-based applications rose to prominence and demanded new functionality and better security, gave us streaming video on demand. But they also caused a drift away from the early principles of interconnectedness, generality, and neutrality, producing what the authors call an “Enhanced Internet.” Without a new effort to bring the internet back into line, they argue, big industry players could stifle the next wave of innovation and hurt consumers.

“That original internet was not designed to meet the security demands of our modern commercial internet. And it was not designed for the volume of speed-sensitive applications we use — things like video streaming services,” Narechania says. “So large, private companies built new infrastructure on top of the internet to deliver faster, safer online experiences. But as that infrastructure replaces the open, interconnected networks that made up the original internet, it threatens to kill off what made the original internet wonderful: its innovative and democratizing spirit.”
Narechania and Shenker propose a revitalized set of technical standards, which they call the “InterEdge,” that researchers and developers are already building. They also urge a shift to requiring interconnection among carriers — a critical component of telephone service regulation, for example, that never carried over to internet service providers — as well as a codification of the principle of net neutrality, so that no provider can refuse to transmit content to gain a competitive advantage.
Finally, they emphasize ensuring that the generality of the internet be preserved, making it useful to the constantly evolving network of users, applications, and content without requiring major changes.
Interconnected providers would turn the “Enhanced Internet 2.0” into essentially a unified public infrastructure, aiding consumers, Narechania and Shenker argue. This proposal would also improve the internet’s resilience, bolster competition, and help close the so-called “digital divide” while continuing to foster innovation, they conclude.
“Our choice is stark. We can continue along the current path, allowing the internet to become a fragmented and concentrated system that privileges incumbents and stifles innovation,” Narechania and Shenker write. “Or we can chart a new course — one that preserves the performance benefits of the Enhanced Internet while safeguarding the principles that made the internet revolutionary in the first place.”
Putting the antitrust in AI
Another recent paper, “An Antimonopoly Approach to Governing Artificial Intelligence,” was published in the Yale Law & Policy Review in fall 2024. In that article, Narechania and Vanderbilt Law School Professor Ganesh Sitaraman show that AI’s industrial organization, rooted in the technology’s structure, “evinces market concentration at and across a number of layers.”
Important elements of AI infrastructure, they argue, are robust enough to warrant regulation, and applying an antimonopoly lens would offer consumers technical safeguards as well as enhanced competition.
“I think we have to start with the premise: Using antimonopoly tools presupposes some degree of market concentration. So is there market concentration in AI? I think the answer is clearly yes,” Narechania says. “Sure, there are lots and lots of companies building new AI tools and applications. But if we peek below that surface, we see that all those applications are built on a handful of models; those models rely on a concentrated set of cloud computing infrastructure providers; and those cloud systems are built using microprocessing hardware that emerges out of a very narrow supply chain.”

The risks of that concentration, he adds, include traditional monopoly concerns about prices and quality, as well as worries that monopolists will be content to make only marginal improvements — and be quick to choke off new innovations that could supplant their dominance while favoring affiliated applications. OpenAI, for example, might favor a legal research-and-writing application that it has a close relationship with while offering worse pricing terms to a competing legal application, Narechania explains.
“So that’s where antimonopoly tools come in,” he says. “In other contexts, we’ve developed a range of tools, such as tariffed pricing or nondiscrimination rules, to address, ex ante, the risks of concentration. We should do the same in AI, especially because concentration in AI seems to be structural.”
A previous paper, “Machine Learning as Natural Monopoly,” mentioned some of these concerns, Narechania says, and researchers at RAND and the Brookings Institution have reached similar conclusions.
“These risks are somewhat more hidden than, say, the risks of deepfakes, or of the use of biased models in, say, law enforcement contexts,” he says. “Those risks have rightfully received a lot of attention. But we should pay attention to risks of industry concentration, too. That is, it’s not an either/or scenario; it’s a both/and one.”
From the White House to the classroom
Narechania, who worked at Microsoft before getting a J.D. from Columbia Law School, has seen the rulemaking process up close from multiple vantage points. He served as a special counsel at the Federal Communications Commission, focusing on net neutrality issues, and clerked for Judge Diane Wood of the U.S. Court of Appeals for the Seventh Circuit and U.S. Supreme Court Justice Stephen Breyer, both now retired, before joining the Berkeley Law faculty in 2016.
In recent years, he advised the Biden administration on questions of competition and AI and became the co-leader of the Artificial Intelligence, Platforms, and Society Project, a collaboration between BCLT and the CITRIS Policy Lab at the Center for Information Technology Research in the Interest of Society and the Banatao Institute, which draws from expertise on the UC campuses at Berkeley, Davis, Merced, and Santa Cruz.
Since President Donald Trump took office in January, Narechania says, the administration has embraced the mindset of many tech moguls to “move fast and break things.”
“But I think we have to be sensitive to the fact that some things are really valuable, and really difficult to put back together once broken,” he says. “Some state regulation, for example, is aimed at making sure we protect some core, foundational values, such as protecting against — and limiting the perpetuation of — discrimination in our legal systems. I think those values are paramount and should not be sacrificed at the altar of innovation. The Trump White House seems to prefer preempting such state regulation in order to create what it says is an environment more conducive to innovation.”
Already, some states, including California, are at odds with the administration on certain policy issues. For example, while the White House is pushing Congress to embrace a ban on state regulations of AI, Gov. Gavin Newsom recently signed legislation aimed at reining in some AI technologies.
As the landscape continues to change, Narechania is one of Berkeley Law’s many professors helping students understand the frontiers of technology law and training them to be tomorrow’s leaders in the sector. His Regulated Digital Industries: Telecommunications Law & Policy for a Modern Era course traces the broad arc of the telecom industry — from concerns about who moderates content to the big question of whether TikTok’s Chinese parent company can operate it in the U.S. — using a blend of administrative law, antitrust law, and constitutional law as a foundation.
“I love teaching this class,” Narechania says. “Every time I teach it, I always have at least one moment where a student says something so brilliant and insightful that it changes my thinking about a specific policy problem or solution.”
Just this fall, he says, a student comment about his work in progress on network neutrality made him go back and rethink an aspect of his argument. The course has been a great way for him to stay connected to his research while sharing his knowledge with students.
“The students also always hunger to learn about the newest, latest technologies: AI, 5G, and more,” Narechania says. “So teaching this class is great discipline: I can’t get lazy, and I have to make sure I’m staying on top of the latest technologies, and thinking through the legal and policy dimensions of their development.”