26th Berkeley-Stanford APLI, Day 2, Panel 6 (Prosecution): AI Tools and Patent Prosecution
December 5, 2025

B-CLE Recording (CLE: FREE) | YouTube Recording | Agenda | Event Resources | Speaker Biographies | Download Panel Transcript



Speakers
Steve Gong, Google
Michelle Lee, Obsidian Strategies
Ayan Roy-Chowdhury, Fish & Richardson
Ian Schick, Paximal

Panel Resources

VALs AI Legal AI Report, Feb. 2025

AI-Powered Lawyering

ABA Formal Opinion 512, Jul. 29, 2024


Panel Summary

Wayne Stacy opened the session by framing it as a unique effort to examine the specific AI tools being built for patent prosecutors — a distinct category from AI tools built for litigators. Ayan Roy-Chowdhury, moderating, noted that while roughly 30–40% of attendees were already using generative AI for prosecution tasks, very few had updated their engagement letters or outside counsel guidelines to address AI usage — highlighting a governance gap the panel set out to explore. Michelle Lee emphasized that AI adoption in the legal field has tripled from 11% in 2023 to 30% in 2024, and framed the urgency clearly: flat fee structures, increasing technological complexity, growing prior art volumes, and a shrinking pool of registered practitioners together make efficiency through AI not a luxury but a necessity. She outlined a wide range of practical use cases, from AI-assisted invention disclosure generation to claims drafting, office action responses, prior art searching, freedom-to-operate analysis, and portfolio renewal decisions.

Ian Schick drew a critical distinction between “copilot” AI — where a human drives the process and AI assists — and “agentic” AI, where AI drives a multi-step process and the expert is brought in at key checkpoints. He argued that copilot tools create inconsistent results across a firm because output quality depends heavily on individual attorneys’ prompting skills, whereas agentic systems enforce best practices and produce more uniform, high-quality work product at scale. Steve Gong added hard data from Google’s structured experiments, reporting a measured 20% efficiency gain from their in-house AI drafting tool deployed across their full outside counsel panel, including Fish & Richardson. Even more striking, Google’s internal tests showed that AI-generated office action response strategies matched or exceeded outside counsel strategy over 80% of the time — a signal that AI’s role is not merely clerical but potentially strategic. All panelists agreed, however, that full operationalization of AI tools requires navigating client consent, governing confidentiality and privilege issues, and carefully vetting vendor terms to ensure client data is not used to train models.


Key Learning Points:

  • Agentic AI vs. Copilot AI: Copilot tools require attorneys to be skilled prompt engineers and produce inconsistent results firm-wide, while agentic AI systems drive the process autonomously with human review at critical junctures — enforcing quality standards across all experience levels (Ian Schick, Paximal).

  • AI Efficiency is a Business Imperative, Not Optional: Google’s large-scale A/B testing showed a 20% efficiency gain from AI-assisted drafting, and in-house clients like Google are already planning to pass those savings back as fee reductions — making AI adoption a survival issue for outside counsel firms (Steve Gong, Google).

  • Confidentiality and Governance Must Come First: Before operationalizing any AI tool, firms must obtain explicit client consent, vet vendor data terms to confirm client information won’t be used for model training, and distinguish between general-purpose AI platforms and purpose-built legal AI systems designed with confidentiality as a baseline requirement (Michelle Lee; Steve Gong).


Program Transcript

Key Terms: Generative AI, Agentic AI, Copilot AI, Large Language Models, Human-in-the-loop, AI-in-the-loop, Expert-in-the-loop, Prompt Engineering, Invention Disclosure Form, Patent Drafting, Office Action Response, Prior Art Search, Freedom-to-Operate Analysis, Claims Drafting, Portfolio Renewal, Prosecution Lifecycle, Client Confidentiality, Attorney-Client Privilege, Duty of Candor, Outside Counsel Guidelines, Engagement Letters, Informed Consent, AI Disclosure Obligations, Data Governance, Human Conception Requirement, AI-Augmented Inventorship, USPTO AI Guidance, Inventorship Disclosure, Section 132 Declarations, Subject Matter Eligibility, Ex Parte Examination, Patent Backlog, Efficiency Gains, A/B Testing, Capped Fee Structures, Operationalization, AI Adoption Metrics, Benchmarking, On-Premises Solutions, AI Governance Frameworks, Data Confidentiality, Model Training Data, Cloud Storage Risk, Cross-Contamination Risk, Vendor Terms of Service, Trust and Safety Protocols, Data Privilege, USPTO, Google Gemini, ChatGPT, OpenAI, Claude, PatentLytics, Paximal, Fish & Richardson, ABA, Stanford CODEX Center

This panel, AI Tools and Patent Prosecution, was the last panel of the second day of the 26th Annual Berkeley-Stanford Advanced Patent Law Institute, hosted by the Berkeley Center for Law & Technology, UC Berkeley School of Law, and Stanford Law.

[WAYNE]
So last session of the day everyone this will be your technology credit for the year, so you have that piece going for you. I’m not sure if this will be any more uplifting than the prior session, but it definitely can’t be any more terrifying.

We’ll see. At least some optimism to start, Steve. Come on.

Don’t scare people yet. So this session, we’re really lucky. Mark and I really looked around to try to figure out how to start teaching this session, and this is the first one.

We looked at all the other events in the country, really couldn’t find that many that were breaking this up and saying, “Hey, wait a minute. Let’s look at, instead of just AI for lawyers, you’ve got AI for litigators over there, let’s talk about that technology.” Because it’s not the same as the technology that’s coming from this group.

But what is that technology? And what we found, when we polled people and emailed around, the answers were probably the most varied answers you could ever get. Some people saying there’s great technology, and other really sophisticated law firms saying they’re not aware of a single product out there that they would use. So with that in mind, we’re like, “Well, let’s go find the experts and put this to the test.” So with that, I want to introduce our moderator, Ayan Roy-Chowdhury from Fish & Richardson.

So you all know Fish. Both litigation and patent prosecution. Been experimenting with these tools, but I thought this would be a great way to see the people building the tools and the firms that are actually buying them and rolling them out.

So I’ll turn it over to you.

[AYAN ROY-CHOWDHURY]
Thank you, Wayne. Thank you, everyone, for staying for the very last stage of this APLI. As Wayne mentioned, I am a principal at Fish, and I’m honored to moderate this panel.

I have got a rather big all-star lineup here. First, the Honorable Michelle Lee. Many of us know her as the former Director of the USPTO and Under Secretary of Commerce, but she’s also a technologist from the MIT AI and HP Research Labs, and she has led AI strategy at Google and Amazon. She brings a unique triple perspective covering technical, corporate, and government.

Next, Ian Schick. He’s the founder of Paximal. He’s a former practitioner and a fellow at Stanford Law’s CODEX Center. He has been a vocal advocate for agentic AI, and we are going to find out what that is, and how lawyers should be architects of strategy and not just operators of chatbots, right?

And finally, Steve Gong. He’s the head of the data science, technology, and operations team at Google Global Patents. Steve sits at the intersection of legal infrastructure and machine learning, giving him the best view of the client perspective and the hard data on adoption.

So we plan to cover a few topics today, including practical AI tool use in the prosecution lifecycle, AI adoption and governance metrics, ethics issues and USPTO guidance around AI, and finally, if time permits, the USPTO’s AI efforts. But before we get into the details, I would like to get a quick read on the room.

How many of you are currently using gen AI tools for patent prosecution, like drafting, search, or office action responses? Okay, that’s around, I would say, 30 to 40%.

I would have thought there would be more. And how many of you have updated your engagement letters or your outside counsel guidelines, in that context, to explicitly address AI usage? Same person. It’s a lot less. So this gap, between the usage of AI, which seems to be increasing every day, and the governance issues related to it, is what we are going to discuss today.

So the first topic we’ll talk about is practical AI use across the lifecycle. Michelle, you have spent your career immersed in almost every aspect of this field from law firm to deputy GC at Google to leading the USPTO and then at Amazon.

So, given your unique background, can you explain, why should patent attorneys care about AI now?

[MICHELLE LEE]
Yeah. So thank you so much for having me here. Thank you all for being here on the very last panel on a Friday afternoon, and to Wayne for pulling together this panel. Look, this is a really special time. I spent, as you know, my entire career focused on patents and intellectual property.

And then I went and ran and built AI services at Amazon, and I now advise companies on applying AI and machine learning with measurable impact to improve operational efficiencies, to enhance the customer experience, and to drive revenue. And the field from which I come, patents, is now so ripe for the application of this technology. I mean, if you think about it, most of what lawyers do is pore over large bodies of textual information, find the relevant cases, you know, citations and so forth, analyze, summarize, and present it back to the client, right, for analysis.

And so, that is what generative AI is so perfectly suited to do. So this is a very exciting opportunity. And I want to share a stat, and it’s consistent with the straw poll that we had here.

According to the ABA, the adoption of AI in legal has tripled from 11% in 2023 to about 30% in 2024. We saw a rough show of hands here of roughly 30 to 40%, so you guys are, you know, within the margins of what the ABA says. But that’s 2024. I bet if you look at the number for 2025, that number has increased significantly, because a lot has happened within the year.

That means if you look to your left and you look to your right, one in three of you is probably experimenting with AI in your practice and in your workflow. And, like I said, that makes sense, and it makes sense in particular in the area of patent prep and pros. This is the session on patent prep and pros. And why is that the case?

Because if you think about it, I mean, when I managed my large portfolio of patent assets, filing lots of applications at Google, when I was head of patents and patent strategy, a lot of my work was on a capped fee basis, and that cap has not increased significantly in the past several years. So, that’s macroeconomic issue number one. Number two is, the complexity of the technology has increased. The volume of the prior art has increased.

And if you look at the number of practitioners admitted to practice before the United States Patent and Trademark Office to do prosecution, those numbers have decreased. So, overall in the system, there is a strong incentive to increase efficiencies no matter where you play in the ecosystem. So, with these generative AI tools that have been recently developed, including those developed by PatentLytics (full disclosure: I’m an advisor to that company), there are tremendous opportunities to increase operational efficiencies, to enhance accuracy, and to increase the speed at which you deliver your products and services. So, what are the possible things you can do with gen AI?

Some of you are using some of these services. Most clearly, it’s an easier way to generate invention disclosures if you’re in-house.

You can now plunk into these AI platforms a slide deck, a conference paper, a transcription of the invention disclosure session, or a video recording of it, to get an invention disclosure form for review by your committee. Then you can take that and generate a first draft of the patent application, claims and specification, in your client’s preferred style, for whatever jurisdiction you would like (US, Japan, Europe) and in whatever language you would like: English, Japanese, German, French, or what have you.

So, these are all real possibilities now. And then, once you have the draft patent application, you can engage in more strategic drafting.

You can conduct a prior art search, 102, 103, understand the prior art. Based upon that analysis, refine your claims for quicker allowance or to cover your own products and services for defensive purposes or to target your competitors’ products and services for offensive purposes.

That’s on the application side. In terms of responses to office actions, I remember, when I was a practitioner, I submitted mine, the patent office chewed on it, and maybe 12-plus months later, it came back.

I’m like, “What was that case about?” So, that cold start, where you had to dig out that file, read and re-review everything, look at the many bases for rejection, 102, 103, multiple references, that takes a lot of time. And now, with these AI tools, you can more quickly get up to speed, understand, and refresh on the prior art references, whether they’re 102 or 103.

You can better understand you know, the rejections, and you can even ask the AI systems to prepare a draft response, a first draft. Obviously, you’re not gonna hand that in and submit it. No one would do that. You would go, check everything.

But it pinpoints you, and it clues you in, and it makes you much more efficient. And then, if you’re on the licensing side of things, or you’re doing analysis for licensing programs, there’s the ability to understand hundreds of patents at scale, and to find, through publicly available information, the potentially infringing products, and therefore your potential targets for monetization, increasing your efficiency and the speed with which you can drive revenue. If you have a large portfolio, as I did in the later phases when I was at Google, and you have to determine which patents to renew and which to let expire, that whole analysis I used to farm out to patent law firms. And, you know, maybe a lot of that can be done in-house, especially in an environment where the business priorities are changing so quickly.

So, your business goals are no longer the same as when you filed that application, or, and/or the competitive landscape is changing very quickly. So, your targets now are different, and your competitors are now different, and you may wanna target them. And then, of course, the freedom to operate analysis. Before I launch a product maybe I do a search.

I have the AI help me. It comes up with a list of patents.

I determine whether or not my potential design is going to infringe any of those. If it is, maybe I do a little redesign before the thing gets launched. So, all of these things are indications of capabilities that could potentially be leveraged, not to replace the patent attorney, but to make the patent attorney more efficient, allowing you to focus on the strategic, right, rather than the tedious; there’s a lot of grunt work in patents.

It’s technical. There’s a lot of volume of work.

And that way, you can focus in on the key pieces, and also, you can focus in on the advocacy. So, those are some of the applications and potential applications of the technology.

[AYAN ROY-CHOWDHURY]
Thank you. And that segues into the next question I had, about agentic AI tools, right? So, Ian, you’ve said that lawyers shouldn’t just be prompt engineers; they should be the architects. So, what is the difference between agentic AI and copilot tools, and how should practitioners be using them?

[IAN SCHICK]
Yeah. So, I like to think about the use of AI in one of two ways. There’s human-in-the-loop and AI-in-the-loop. So AI-in-the-loop would be a human-driven process, say, writing a patent application with AI in the loop.

So, AI is there, ready to assist. That is, you know, like a copilot, for example. It is still a human-driven process, but it’s aided by AI. A human-in-the-loop process would be one where the process itself is driven by AI and the human is brought into the process at certain critical points.

And so agentic AI would fit into that second category. I think that, you know, I run a company called Paximal. We build agentic AI, so I’ve got strong feelings on this. But you know, I think that there’s really pros and cons to both.

So with copilots a pro would be you know, the attorney is still very engaged. The attorney has a lot of control about what’s going on.

There’s a lot of individuality in their work product. The downside is that, you know, in my view, you’re trading one low-value task, which is keying in words, for another low-value task, which is prompt engineering. Prompt engineering is not super easy, and it’s also a moving target, you know, as new models come out and new techniques for prompting are introduced.

And so tasking attorneys with, you know, gaining this whole new skillset that they need to be continuously trained on, I think, is a poor use of attorney time and attorney talent. Let’s see. So with agentic AI, you know, you’ve got an AI-driven... let me go back to the copilots really quick.

So another issue that I have with copilots is that, because it’s largely driven by the skills of the individual, if you’ve got a large practice, say 50 attorneys, all different ages, all different backgrounds, all different levels of enthusiasm about using AI, the results are gonna be all over the place. And so it’s really hard, at a firm level, to enforce quality control.

You’ll have some users that are very good at prompting and get a ton of efficiency and kick out really good work product. You’ll have other users where it actually takes longer if they use AI than if they just write it themselves. And so I think that kind of, you know, inconsistent results across a practice group is problematic.

[AUDIENCE MEMBER]
Hey, could you just define agentic AI for us briefly?

[IAN SCHICK]
Agentic AI, I think, is understood to be AI that will do a task for you. So do a task, a multi-step task.

So not like I write a prompt, it gives me a response. More like I ask it to go buy a plane ticket and it goes and shops for me and buys a plane ticket. Something like that.

So in this context, you know, agentic AI for what I’m working on is writing patent applications. A ton of tasks all handled by AI.

[AYAN ROY-CHOWDHURY]
So are we saying the AI has to write the full application? Or will it be, like, where will the practitioner be involved here?

[IAN SCHICK]
Yeah. So agentic AI is a human-in-the-loop approach. So the process is driven by AI.

[AYAN ROY-CHOWDHURY]
I think one of your articles talks about expert-in-the-loop.

[IAN SCHICK]
Expert in the loop.

[AYAN ROY-CHOWDHURY]
Right.

[IAN SCHICK]
Human in the loop. Patent attorney in the loop.

So, where was I going with that? So with agentic AI, you have a defined process. The AI is running the overall process.

So you can build in best practices. You can build in guardrails.

You can enforce certain outputs. So, if agentic AI is implemented across a practice, your best applications will be really good.

Your worst applications will be really good. Your juniors will be putting out the same level of work product as the senior attorneys. So there’s a lot more control from the practice level.

The downside would be that there’s less individuality in the work product.

[AYAN ROY-CHOWDHURY]
Okay.

[IAN SCHICK]
So that’s an argument against it.

[AYAN ROY-CHOWDHURY]
Right. And there is also the argument that junior practitioners, why do you even need them now, right? Because the AI can do what? 80%, 90% of the task.

[IAN SCHICK]
Yeah. I mean, so we’re generating patent applications. We go through an alignment process. So it’s a prompt-free platform that simulates a traditional interaction between a partner and an associate.

So this is how I was trained. This is how I trained other people.

I was in big law for many years. The senior attorney will have a stack of disclosure materials, hand it to the junior, say, “Go read this. Come back and tell me what the invention is. Let’s get aligned on that.”

Once that’s done, now the associate can go do the claims and figures, then come back, let’s get aligned on that. And only after that preliminary alignment has been done can the associate go and draft the complete application. And so the idea is that, you know, by doing that alignment, you’re avoiding wholesale rewrites. You’re gonna get something back that you expect to see.

So in our system, everybody is in that senior attorney role, and the AI is in the associate’s role. So we go through that, what I just described.

We go through that process with the user and then generate the entire document top to bottom with figures and

[AYAN ROY-CHOWDHURY]
Steve, so I understand Google is effectively a power user here. So, from an operational standpoint, where do you see the most practical use and value of these AI tools?

[STEVE GONG]
Let me take a step back to what Michelle said about efficiency, and give a little more hard data on that. So, Google, obviously, is an AI-first company. We are very much invested in the impact AI has on the economy, on the workforce.

So, Google actually has built an in-house patent drafting tool, right? We may never release it. Who knows, right?

But what we have done is, we have released that tool in a very structured way to our entire panel, including Fish & Richardson, right? And in a very structured, statistically relevant experiment to test out the efficiency gain to the work stream when that tool is deployed, both in terms of initial deployment and once the user gets more familiar with the tool, to see what the efficiency is, right?

It’s a large dataset, right? So, hundreds of lawyers have gone through this, right?

I don’t know of any other industry study comparable to that. Initial finding: 20% increase in efficiency, right? That’s a pretty big number in terms of, you know, the potential efficiency gain of using a tool on such a large scale, and that’s actually measured in a, you know, scientific way. And from our own practice, right?

We obviously are power users of Gemini, our own internal models, and all the agentic layer on top of that as well, right? We have seen AI can add tremendous value, not only on demanding tasks like drafting, but on the strategy itself, right? One of our own internal experiments shows that if you were to use AI to produce a first strategy for how to respond to an office action, it has matched or exceeded our outside counsel’s strategy over 80% of the time. Right.

So, it’s important to see that AI is not just a tool that will essentially do the basic work that you don’t want to do. There are opportunities where it can fully disrupt or transform how the practice is done. Now, that raises the question of efficiency, right? Is that a good thing or a bad thing, right?

There is significant impact on the profession that’s upcoming. I think we’ll have some discussion around that, right?

So, Michelle mentioned that the fee is capped and hasn’t changed in many years, right? Well, I can tell you, the fee’s going to go down, right? Essentially, many in-house counsel, including Google, are gonna start seeing that efficiency gain as a potential way to decrease our spend, right? Our chief legal officer asked us to do that.

And this is not gonna be unique to Google. So, it’s important for the profession to navigate both the tool itself and its implications, and the business aspect of it, right? This is not a static thing where, okay, the tool will make us more efficient, therefore we’ll drive more revenue at the firm level. It may be a necessity for you to survive on these new terms.

I promise it’s gonna be an uplifting panel, right? So, it is important to see that efficiency not merely as a necessity, but as something that itself has tremendous impact on the profession.

[IAN SCHICK]
Yeah. I’d just like to add, I mean, on efficiency: it is not a nice-to-have. I mean, we, as a field, need the efficiency. Michelle mentioned some macros.

You know, new entrants to the patent bar have dropped off every year for almost the past 15 years. So, right now, the average active practitioner has, like, 20-plus years of experience.

[STEVE GONG]
That should make us more valuable.

[IAN SCHICK]
So yeah. So you’ve got all these super experienced people, but there’s nobody to fill their shoes once they move on and retire. So, you know, we’re kind of looking down the barrel of a huge collapse in the numbers of the patent bar. So, that’s one thing.

That’s the supply. You can think of it as supply, the drafting supply. Who does this work? On the–

[STEVE GONG]
But yeah, just on that point, right? If you look at how AI is used, like when Michelle mentioned, at the IDF level, right? Inventors can do a lot of the work themselves, right? So, I fully recognize the study you mentioned.

I think ADAPT, the organization that I co-founded, published that SCOUT stat, like, a few years ago. But it’s really important. This is a multifactor situation that’s happening.

[IAN SCHICK]
Absolutely. So there’s the collapse in the patent bar. There’s the, you know, flat fees across the board. I mean, it’s going down, but even if it’s, you know, $10,000, it’s been $10,000 since I started, you know, and–

[MICHELLE LEE]
Since I was head of patents and patent strategy at Google, right, I mean–

[STEVE GONG]
And it’s gone up since your days.

[IAN SCHICK]
If you adjust that for inflation, then it’s like, you know, probably 50% down.

[STEVE GONG]
But you have to recognize, as a business, the chief legal officer, as Michelle said, this is a ripe area for AI disruption, right? The chief legal officer runs the department like a business. They will say, “No. Now there’s efficiency to be gained. You’re already at $10,000. Now you go to $7,000.”

[MICHELLE LEE]
Should I jump in? Should I jump in with the business?

[AYAN ROY-CHOWDHURY]
That’s another point under governance, right?

[IAN SCHICK]
I’ve got one other–

[AYAN ROY-CHOWDHURY]
I think you heard me say, “Objection.”

[IAN SCHICK]
Yeah.

(laughing)

[AYAN ROY-CHOWDHURY]
Anybody got any questions?

[IAN SCHICK]
One other macro on this, real quick. So, we looked at cost, we looked at supply; what about on the demand side? So, if you look at annual patent filings since 1790, you know, in every little window you look at, it’s exponential growth, and, you know, everybody appreciates Moore’s Law. Innovation is exponential.

It seems, I mean, you could expect that patents would track with that. So, if we are at a point where we’re about to have, you know, explosive innovation driven by AI, are we gonna have explosive patent filings? Not at these prices, but what if they were, like, a hundred bucks each?

You know?

[STEVE GONG]
Well, is that a good thing or a bad thing?

[MICHELLE LEE]
So, Ian, maybe there’s a question over there that we get to the governance issue.

[AUDIENCE MEMBER]
Yes, I have a couple of questions. Thank you for this. One is that you just said that these are things that we would give to a junior associate or a first-year person. If you’re replacing all of those people with some software, then inevitably they’re all gonna be six feet under, and there will be nobody else who can train anybody else, including AI, for that matter.

The other one is more philosophical, because you keep talking about innovation. I know huge companies in the Valley, not Google, I’m not talking about Google, whose entire businesses are selling junk advertisements. How is that innovation? That is not innovation.

That’s just practically sales. I’m not a used car salesman. And then, when we are talking about efficiency, to me, that resonates with the whole worker-productivity issue, where worker productivity goes up, they don’t make any more money, they still have to work the same number of hours, but companies make money and do stock buybacks.

So, at some point in time, and I apologize, you guys are not the policymakers of the world in that sense, so I’m not putting the burden on you. But some of that balance has to be discussed at some point, and I love technology, I’ve been an engineer before I became a patent attorney 30 years ago, so I love tech, but the point is that there’s some level of, I don’t know, balancing that has to happen, and I don’t know who’s gonna do that. Thank you.

[AYAN ROY-CHOWDHURY]
Thank you. You raised some, I would say, policy issues. We are going to discuss a few of them, but maybe not at the level which gets more into politics, right? So we are talking about efficiencies, about, at the C-suite level, what are they looking for?

[AUDIENCE MEMBER]
Yeah.

[AYAN ROY-CHOWDHURY]
Sorry.

[AUDIENCE MEMBER]
Question, comment from an in-house perspective. I’ll just remain anonymous since we’re being recorded.

[AYAN ROY-CHOWDHURY]
I think we know who you are.

[AUDIENCE MEMBER]
I’m in-house at a SaaS company, right? Steve, like, we are all getting pressure from the CEO level, via the GC level, to cut costs. But should we, especially the folks in this room, right, be pushing back to educate our non-patent management on patent prosecution and our fee structure?

It hasn’t grown in years, right? I’m seeing litigation counsel charging $1,500, $2,000 an hour. Our prosecution counsel probably averages $400 an hour, right?

This is not the area to be cutting fees, right? Patent prosecution, for drafting. So I guess the question is, is the technology even ripe for patent prosecution? Especially, right, to actually, you know, have the operational efficiency where the quality of the application drafted by the AI is, you know, even able to replace a human, right?

[MICHELLE LEE]
So, I think if you ask the question, Ayan, we can probably address her question, right? So why don’t you ask the question, and I’ll answer it, and if I don’t, then my panelists can add.

[STEVE GONG]
Yeah, sure, I can.

[MICHELLE LEE]
Yeah, you raised a bunch of good issues, but why don’t you go?

[IAN SCHICK]
So how is the C-suite looking at this risk/reward balance?

[MICHELLE LEE]
Yeah. So, look, I sit on the boards of directors of Fortune 100 companies now, and I was the deputy general counsel and head of patents and patent strategy at Google, and I was a partner in a law firm, and an associate before then, right? And what I am seeing now in the boardroom, at the very highest levels, is that people get that this technology is the most transformative technology of our generation. So, boards of directors are asking their CEOs and their management teams, the C-suite, “What are you doing with this technology to remain competitive?

How are you folding it in? How is it going to change our business and our strategy, and how are you increasing operational efficiencies?” And they’re expecting the management team, all the business leaders, to be extremely knowledgeable, forward-leaning, and, you know, open to implementing these ideas. Weighing against that, of course, are the risks and the governance issues, which you have to manage with generative AI in particular, which hallucinates.

And that includes the general counsel as well, and those of you in the room, right? I mean, companies are looking for ways to increase operational efficiencies wherever they may occur. It may be on the litigation side, and there are lots of things that AI can improve. I mean, patent litigation is super expensive, super time-consuming, right?

Super slow, in addition to prosecution. But it’s always the job of counsel, right, to understand where to apply it with measurable ROI impact. So, if you’re not getting the accuracy that you need, that’s not the right tool to be applying. And, you know, as for the distribution between prosecution and litigation, keep in mind that of the many patents you file, most are never litigated. So yes, it’s always a judgment call: how much should I spend on my patent application, given that I’m not really sure which one’s going to be litigated?

Well, I like to do, you know, a good enough job so that when it is litigated, it can hold water, but it’s not going to be the gold-plated patent for every one. I had 8,500 patents in my portfolio at Google. I’m not gonna give the, you know, maximum budget to all of them. So, that is your judgment call as counsel: determining which are your crown jewel patents, which are the bulk that protects your business, and which are the ones where, you know what, when IBM comes to you and asks for a cross-license, you can say, ‘Okay, you’ve got 400 patents.

I’ve got 200. Let’s do a cross-license.’ So, that is still your judgment call: where, when, and how you choose to apply AI to your prosecution, to your prosecution analysis, to the pruning of your portfolio, to the maintenance of your portfolio, to the licensing of your portfolio. Those are decisions that AI can help you with, but you are the decision-makers.

And so what I would say is that because the business leaders are being asked to lean in on AI technology, and to have measurable ROI evidence of where they will invest more and double down, the legal team is no different. The IP counsel is no different.

The patent prosecutor is no different, and neither is outside legal counsel. If you want to make your in-house counsel look like a hero, or if I’m in-house counsel and I want to make my general counsel look like a hero, right? You’ve got to be applying it smartly.

Keeping in mind security, governance, and all the other issues that you absolutely must handle in order to deploy this technology successfully.

[STEVE GONG]
I just wanna keep answering your question, adding to what Michelle said, right? So, you know, I think one of your points is that the hourly rate for our prosecution attorneys has not gone up, where litigation has, right?

But if you look at the patent budget for a tech company, it is still one of the largest parts of the legal budget, right? I mean, the Google patent portfolio is way bigger now than in Michelle’s day, right?

92,000, right? So we are one of the top five spends within Google Legal.

Google Legal is, you know, probably the largest law firm in the world at this point, right? So, the reality is, the efficiency and process improvements we have gone through in the last five, ten years actually make us more susceptible to AI disruption.

That’s why Michelle’s opening statement was that in patents, the data is available, more so than in litigation, and the process is much more standardized, right? And here we are. So, just to share directly what’s going on at Google, right?

Kent Walker, our chief legal officer, has asked our team to essentially cut our outside counsel service fees by 30%. It’s a huge number. Kind of scary, right?

And, you know, there is immense impact on our panel, in terms of what we’re gonna do. The question now is: do you rise to the challenge and look for a way to transform how you practice, with empathy for your outside counsel, and with the disruption that is gonna happen to the profession in mind?

Or are you gonna wait, right? It’s gonna come for everybody at some point.

Now is our chance to be in the driver’s seat to shape that future as best that we can. And knowing that there’s gonna be, you know, things that happen that may not be positive for everybody, right?

At least we can try to control this in terms of how we manage talent growth for first-year associates, for other associates, and how we approach this with grace.

[AYAN ROY-CHOWDHURY]
So, Steve, on that: we are talking about practitioners and AI tools, but during the prep calls we had, you mentioned the invention disclosure forms that you are seeing, right? I think you said 17% are now AI generated?

[STEVE GONG]
Yep.

[AYAN ROY-CHOWDHURY]
So, how are you dealing with that?

[STEVE GONG]
Well, so not AI generated. Just so you know, this stat’s been published by our general counsel already. We started measuring the percentage of our inventions that were produced with AI assistance.

[AYAN ROY-CHOWDHURY]
AI augmented.

[STEVE GONG]
Yeah, augmented. Second, which I don’t think is published, but I’m happy to share here as well, is how much of our invention disclosure forms were produced using AI as an assistant, right? So, I mean, the 17% is what we have seen in terms of the percentage of Google inventions where some AI was involved in the process, whether in ideation, in testing, could be many, many things. The reason we started tracking is that we started to see hallucination in the ideas, right?

So, there are things like, “Hey, you know, this looks great. It’s all nicely formatted.” Or some citation doesn’t make any sense, right?

So shit, we gotta start tracking so we can actually start dealing with this issue, right? The way we have done it: number one, you do have to track. If you don’t track, you don’t know what’s gonna happen. That’s very basic, right?

What we have done is work with our outside counsel, such as Fish & Richardson, to make sure that when they do invention disclosure calls, those questions are fleshed out: exactly how AI was used, what level of usage there was. So we can make sure, because our policy is that the invention has to have a human contribution. Absolutely, that’s true, right?

So that will all get fleshed out, but you have to measure first. Otherwise, you don’t even know what you’re dealing with.

On the IDF form, right, which piggybacks on some of the efficiency questions we had, it’s super interesting how that has evolved. So, two things have happened. One is, you know, inventors started translating the technical language into lawyer language, right?

You can see IDFs have started coming in looking almost like a patent application with claims, right?

[AYAN ROY-CHOWDHURY]
Seen that, and, like, 80-page IDFs coming back now.

[STEVE GONG]
Yeah.

[AYAN ROY-CHOWDHURY]
I mean, you have to ask, “Did you conceive of this?”

[STEVE GONG]
Yeah.

[AYAN ROY-CHOWDHURY]
Then, can you explain everything?

[STEVE GONG]
Right. So that’s the potential of where our technology can go. The second one we have seen is even more fun.

It’s just, you know, I manage our home portfolio in addition to operations, right? Inventors started making their IDFs into, like, a children’s storybook, right?

It’s like, “Hey, you know, I’ll explain this to the lawyer in the simplest way possible.” Literally, right?

So, a lot of things are happening; a lot of innovation in how this process works is also happening along the way, right? The other thing about this entire lifecycle we’re talking about is: yes, we can focus on the drafting tasks, we can focus on office action tasks, but there are a lot of steps in between that actually consume a lot of time where AI can add a lot of value, right?

For example, if you get that perfect IDF that has, you know, the point of novelty already highlighted, right? It will make your drafting so much easier. I assume that resonates, even though it rarely happens, right? So there’s tremendous value AI can add across the entire workflow.

[AYAN ROY-CHOWDHURY]
So are you expecting your outside counsel to be fully conversant with AI tools now? Or are you restricting them for security reasons?

[STEVE GONG]
So right now, you know, we are an AI-first company, so we started allowing our outside counsel to use Gemini as a first cut, right? There’s still a lot of stuff we gotta go through to make sure we’re comfortable with the different tools we’re allowing outside counsel to use.

But it still has tremendous value, right? At least in-house counsel and outside counsel speak the same language, because in-house has been using Gemini for over a year at this point, right? That has added tremendous value for us.

[AYAN ROY-CHOWDHURY]
So are you expecting outside counsel to make full disclosure before they use AI tools, or...?

[STEVE GONG]
Yes.

[AYAN ROY-CHOWDHURY]
Okay.

[STEVE GONG]
Absolutely.

[AYAN ROY-CHOWDHURY]
And Ian, anything to add here? Like, do you see any friction with your customers, who are probably law firms?

[IAN SCHICK]
You know, there’s a lot of spread, really. I spend my days talking to law firms, typically, big firms, little firms, and the stages are all over the place, really. There are firms that are thinking about their AI policy now, and then there are other firms that have had AI implemented for a couple of years, and everything in between.

[AYAN ROY-CHOWDHURY]
And you had mentioned that most of them are at the stage of piloting tools, right?

[IAN SCHICK]
A lot of them, especially a lot of the bigger firms. You know, then all of 2023 happened, and all of a sudden there are many companies in this space. I mean, I focus on patent drafting.

I launched the very first company in this space in 2017. There was one company for a while, then two, then three, then there was like a hundred.

(laughing)

And so it’s really, really tough for law firms. I mean, there are just so many choices out there, and nobody knows what’s best.

There’s no clear winner. And, you know, I think the approach for a lot of big firms is to try them all. And, you know, we’re starting to see some longer-term subscriptions. But even in those cases, they’re still looking at other tools.

So I think the dust is yet to settle.

[MICHELLE LEE]
Yeah, on the “try them all”: try them, and take a case that has been litigated, or where you had outside counsel and paid them a lot of money, and run that case across multiple platforms and see how they do.

[AYAN ROY-CHOWDHURY]
What do you mean by trying that case?

[MICHELLE LEE]
Okay, so let’s say you’ve got a patent you were trying to invalidate; you paid outside counsel a ton of money, and they found a bunch of prior art. So if you’re evaluating a number of AI solutions, why not put that same request into each of the systems?

Now, that’s just one piece of it, which is how thorough and how accurate they were. Of course, you have to pay attention, as I said, to the security and, right, all those must-haves.

But that’s one way of doing it: benchmark against a known case where you spent a lot of money, you had humans, they looked and they found, and then see what these AI systems find.

[STEVE GONG]
Yeah, one approach we take, and we do a lot of this on drafting, is we just do full A/B testing.

[MICHELLE LEE]
Yeah.

[STEVE GONG]
Right? So, like, I’ll have a human draft a case, and I’ll have AI draft a case, and I’ll have human-plus-AI draft a case. Let’s see how they rank.

[MICHELLE LEE]
Exactly.
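
The blinded A/B evaluation Steve describes can be sketched as a small harness, purely as a hypothetical illustration and not Google’s actual tooling; the arm names, the `blinded_ab_test` helper, and the toy scoring function are all invented for the example:

```python
"""Blinded A/B comparison of draft quality: human, AI, and human+AI
drafts of the same case are shuffled, scored blind, then ranked.
Illustrative sketch only; not any panelist's real system."""
import random
from statistics import mean

def blinded_ab_test(drafts, score_fn, seed=0):
    """drafts: {arm_name: [draft_text, ...]}.
    score_fn: a reviewer scoring one draft without knowing its arm.
    Returns (arm, mean_score) pairs, best-scoring arm first."""
    rng = random.Random(seed)
    labeled = [(arm, d) for arm, ds in drafts.items() for d in ds]
    rng.shuffle(labeled)                  # blind the review order
    scores = {}
    for arm, draft in labeled:
        scores.setdefault(arm, []).append(score_fn(draft))
    ranking = sorted(((mean(v), k) for k, v in scores.items()), reverse=True)
    return [(arm, s) for s, arm in ranking]

# Toy stand-in reviewer: longer, claim-bearing drafts score higher.
def toy_score(draft):
    return len(draft.split()) + (10 if "claim" in draft else 0)

result = blinded_ab_test({
    "human": ["a method claim reciting the novel step"],
    "ai": ["some text"],
    "human+ai": ["a refined method claim reciting the novel step clearly"],
}, toy_score)
print(result)
```

The point of shuffling before scoring is that the reviewer never knows which arm produced a draft, so the final ranking reflects quality rather than expectation.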

[STEVE GONG]
Right? So you have to get comfortable at some level with the output. Actually, I’m curious. I know a lot of people raised their hands for the initial questions, but how many people are just piloting, and how many have fully operationalized AI in their firm or practice?

[MICHELLE LEE]
Pilot? Hands?

[STEVE GONG]
Operational? Come on.

[MICHELLE LEE]
Fewer.

[STEVE GONG]
It’s a lot fewer.

[MICHELLE LEE]
Still, not bad.

[STEVE GONG]
Not bad, okay.

[MICHELLE LEE]
Yeah.

[IAN SCHICK]
Can I ask you a really narrow question? Yet all of us use AI–

[STEVE GONG]
Yeah.

[IAN SCHICK]
Google to search, you know–

[STEVE GONG]
Yeah.

[IAN SCHICK]
The AI results are really useful, and everyone uses them.

[AYAN ROY-CHOWDHURY]
Oh, so then let’s just say for your patent prosecution practice–

[STEVE GONG]
Yeah.

[AYAN ROY-CHOWDHURY]
Have you operationalized AI? To be more specific, patent drafting. Okay, I mean, two aspects, right? There’s patent drafting, and then there are office actions and all those things. Prior art searching. Prior art searching, yeah.

[LUCA MELCHIONDA]
One thing to keep in mind is we need client consent. So if you have–

[AYAN ROY-CHOWDHURY]
Exactly.

[LUCA MELCHIONDA]
Hundreds of clients, you can’t operationalize it just by snapping your fingers. You have to go through and get the consent of your clients.

[AYAN ROY-CHOWDHURY]
Yep.

[MICHELLE LEE]
Yeah.

[AYAN ROY-CHOWDHURY]
Yes.

[LUCA MELCHIONDA]
And so it happens in stages. And then, I don’t remember which one of you was saying it, but some attorneys are more efficient and some less efficient with different tools. The reason you use multiple tools is ’cause different attorneys work well with different tools.

[STEVE GONG]
Yeah.

[LUCA MELCHIONDA]
And let them pick what they want. But it takes time because not all clients are on board with even allowing us–

[STEVE GONG]
Yeah.

[LUCA MELCHIONDA]
Trying to do it.

[STEVE GONG]
Yeah.

[LUCA MELCHIONDA]
And some of them send out surveys saying, “You don’t use it, do you?” You know? So we’re all over the place, heeding those questions.

[AYAN ROY-CHOWDHURY]
So we are running a bit short on time now. I wanted to touch on the USPTO guidance and ethics issues.

And so we know, as the previous sessions discussed, who the inventor can be: each and every claim limitation, each and every claim, has to be human conceived. But nowadays we are seeing IDFs that are AI augmented. So as you mentioned, Steve, during disclosure calls you are asking outside counsel to confirm–

[STEVE GONG]
Yeah.

[AYAN ROY-CHOWDHURY]
–a human is involved, right? Yeah. So that’s the check you have in place.

[STEVE GONG]
Yeah, absolutely. We will not file a patent otherwise.

[AYAN ROY-CHOWDHURY]
So this puts outside counsel in a tough spot to some extent, like, asking inventors, “Did you really conceive of this idea?” And if the inventors say yes, but you think that maybe they didn’t conceive the entire 80 pages of the idea they have given you. So what do we do here?

Like, the ABA rules talk about the duty of candor to the patent office, right? No lying. And you should ask; you shouldn’t just keep your eyes shut and submit something.

So, Ian, I wanted to ask you about this. What do you do in such situations? Like, are you asking your tools?

Like, do you have guardrails to ensure that the claims come only from the human-conceived sections, and things like that?

[IAN SCHICK]
Yeah. So the input to our system is invention disclosure materials. So, you know, assuming those were made by a human, we’ve designed our system to approach patent drafting like a patent attorney would. We want to stay within the scope of what was disclosed, but we want to fill in the gaps, abstract it, have different levels of granularity.

And so, you know, we are generating content that did not exist in the invention disclosure, but I think it is within the scope of the disclosure. It’s just like what a patent attorney does. You know, an inventor says, “I invented this,” and you protect it.

[IAN SCHICK]
We do, we do that, this and everything around it. So that’s our approach. I mean, we definitely did not intend to develop an inventing machine.

[AYAN ROY-CHOWDHURY]
Okay. And so, the USPTO just last week came out with new guidance about inventorship, right?

It replaced the earlier guidance and the factors that we were trying to discuss earlier. So, Michelle, anything to add on this? Does it provide enough clarity to the industry? Does it make it easier for us practitioners to decide how much AI involvement is acceptable?

[MICHELLE LEE]
Yeah. I don’t think that was a question for me, but I’m glad to offer some thoughts on the Patent Office side, not necessarily on that issue. Look, as many of you know, ever since I was director, in every administration since I left, regardless of whether it’s a Republican appointee or a Democratic appointee, AI has been a priority for the Patent Office.

And when you look at the businesses, all the businesses are looking to use AI to improve their efficiencies, so it is absolutely right that the Patent Office is looking for ways to streamline and reduce its ever-growing backlog. On top of that, the White House has an executive order saying that federal agencies should be looking for ways to apply AI to improve the quality of the services they provide. So, what all of this means is that, like, last summer, the Patent Office sent out a request for information on ways it could use AI technology to improve its operations and so forth.

So, it all makes sense. When I was the director, generative AI was not a twinkle in anybody’s eye.

I mean, this was 2013 to ’17, well before generative AI. And we had a lot of data on the reasons why examiners reject, the circumstances in which we reject.

We have 8,300 examiners. We have 600,000 annual patent filings.

We issue 300,000 patents. Each examiner touches an application three times. That’s a lot of data. And so, what we did was use data analytics to identify which art units, which examiners, could perhaps benefit from more training.

So, rather than taking the entire examining corps offline to train everybody on Section 101 for software or what have you, we spot-trained, which increased operational efficiency. But we didn’t have ChatGPT or anything nearly that powerful, or any generative AI.

So now, like, the possibility of prior art searching, of drafting an office action, it makes a lot of sense. And if you think about it, in the broad patent ecosystem, regardless of where you sit: if you’re the USPTO or a foreign patent office and you’re using AI to help you examine patents more efficiently, that’s a plus. If you’re the inventor and you’re able to get a patent issued more cheaply, more quickly, that’s a plus. If you’re a litigator, on the plaintiff or defendant side, and you’re able to litigate cases for lower dollar amounts, that’s a plus for you and your clients.

And if you’re a prosecutor and you can capture the essence of the invention more quickly, that’s a plus. And if you’re a licensor and you get access to and can monetize your inventions sooner, that’s a plus.

So, as I look at it, as you squeeze out all of these unnecessary costs, what you do is make available legal services that were previously unavailable to a lot of people. Only the most well-funded, deepest pockets, honestly, had a shot, for the most part, at getting intellectual property and monetizing it or defending it, because that whole process is very expensive. So, if every piece of it becomes ever more efficient, including the US Patent and Trademark Office, I think we’re better off. I think innovation wins.

And the other thing, too, is that it’s not gonna replace the humans. It is nowhere near ready to replace the humans.

It just eliminates, I think, the more mundane, the tedious. And we have to have the judgment: where is it good enough, where is it not good enough, and am I spending more time reviewing the slop produced by these AI systems than I would reviewing output that is, you know, better than what my associate would do, or what I would do, as a first draft?

[STEVE GONG]
I would just add, you know, AI is a hammer and everything’s a nail, right? It’s like what Molly said: there are a lot more problems inside the USPTO that need to be fixed before AI can do anything, right? The other thing, contrary to that point, is that, you know, OpenAI is not here to make a tool for anybody. They’re here to build AGI.

Right? That’s why they exist.

Google DeepMind is kinda here to do the same thing. If you look at this on a five-year horizon, what’s gonna happen in five years is a whole different question. Right? Yes, we’re not gonna replace anybody now.

That’s not the goal. But what about next year? Right?

So, the technology is advancing very rapidly. And, you know, like I said, right, the ecosystem as a whole needs to navigate this together.

[AYAN ROY-CHOWDHURY]
So, on those points, talking about the USPTO programs: they have a few pilots now, right? One is what is called ASAP, the AI search pilot program, where applicants can, I think, opt in to getting an AI-generated search result before examination. And–

[AUDIENCE MEMBER]
PFE.

[AYAN ROY-CHOWDHURY]
And PFE, right, of course. So, I wanted to ask: is that something you think is valuable? I mean, the examiners haven’t really looked at the application yet, but we are getting search results saying, “Okay, this prior art reads on your claims,” entirely AI generated. Should we, as applicants, even consider opting in to the program by paying a fee?

And then, assuming we get a result, how much weight should we give it? Maybe file claim amendments? Any takers on that?

[STEVE GONG]
I mean, I assume people are searching before they file an application, right?

[MICHELLE LEE]
Not necessarily.

[STEVE GONG]
Yeah. I mean, we do. We do. But if you don’t, then, you know, why not, right? You should just search and file a better patent, because you’re essentially gonna ask the Patent Office to search for you anyway. So–

[MICHELLE LEE]
So let me offer a thought here. So if applicants search, right, before they file, then why would you pay the fee to have–

[STEVE GONG]
Exactly.

[MICHELLE LEE]
The office do it, et cetera? But when I was at Google, we did not search for all the applications. Maybe that’s different now.

[STEVE GONG]
It’s definitely different now.

[MICHELLE LEE]
Right? So, you know, different strategies have pros and cons. If I’m in the pharmaceutical industry and I file one patent on my drug, and that patent has to hold water, I’m searching everything to make sure that patent application is rock solid, airtight.

But if I’m playing a numbers game and building a portfolio, I may not search. So when I was at the patent office, we did consider providing incentives to applicants where, before we picked up the patent application, we’d encourage them to do some searching.

The notion was that the claims you bring to us, when our examiners pick up the application, would be in better form. So, I ran the patent office; I try to think macroscopically about what’s good for the innovation economy, as you can hear from the themes I keep returning to. And generally speaking, the better the quality of the patent application that comes in, the more narrowly crafted, the better quality, more defensible patent I think you’re gonna get, right?

Every time the patent examiner picks up the application and doesn’t have to wade through all this other stuff because it’s way too broad, and you then narrow it, you get closer to the ideal intended end point faster. I think that’s a good thing. Now, whether or not you pay a fee and get this AI-generated report, these were issues we thought about at the agency when I ran it: how do you get the best prior art in front of the examiner? How do you get the best claims in front of the examiner as soon as possible, so you can get to appropriately scoped claims as soon as possible?

[STEVE GONG]
Yeah, 100% agree with Michelle, right? But I feel like, as a patent bar, there’s still this sentiment that we’re trying to hide the ball, right?

You want, you know, some ambiguity in your claims. You want some ambiguity about what your invention points are, just so you have optionality down the road, right? So that is a mindset that needs to change, right?

And that’s a nice mechanism that USPTO is offering. But like I said, you know, if you really care, you can easily search today, right? If you don’t care, then that’s a different issue.

[AYAN ROY-CHOWDHURY]
But from what I’ve seen, those AI-generated search reports will essentially be looking at 102. I haven’t seen AI tools yet which can really think about 103 obviousness combinations.

[STEVE GONG]
Yeah, but most people are not even searching for 102 today, right? So, I mean, what’s the USPTO’s motivation for this? Is it just to validate their search tool so they can get away from human searching? I’m looking at you.

(laughing)

[MICHELLE LEE]
The pilot was introduced after I left, so I don’t really know. They can’t actually make the tool accessible to the public, because it isn’t the USPTO’s to do that with. So what they are trying to do is give the public insight into what the tool does and give them results from that perspective, I think.

[STEVE GONG]
But if, I mean, if the office is moving to automated examination, this would be a step in that direction.

[AYAN ROY-CHOWDHURY]
Right. I think they’re trying, as Michelle mentioned, to have the claims in sufficiently good order before the examiners pick them up.

[MICHELLE LEE]
Or give people the opportunity to say, ‘Oh my God,’ you know, ’cause you get ten results out of the search, and to say, ‘Okay, this one reads directly on the idea I was gonna file.’ And now I won’t file. So it’s almost also a weeding-out mechanism.

[AYAN ROY-CHOWDHURY]
Right.

[AUDIENCE MEMBER]
And what’s the fee for it?

[MICHELLE LEE]
$400, I think.

[AYAN ROY-CHOWDHURY]
I think something like that.

[STEVE GONG]
And the examiner still searches and those AI results just kind of get added to their results? Is that how it works or?

[AYAN ROY-CHOWDHURY]
So, maybe.

(laughing)

The AI results... I mean, I was looking at this earlier today. If the examiners note them in an 892, or the applicant files an IDS, then they will be of record. Otherwise, not. I mean, they will be in the file history, but they wouldn’t be on the face of the patent.

[MICHELLE LEE]
That’s so interesting.

(laughing)

[AYAN ROY-CHOWDHURY]
So we have a few minutes left. Questions from the audience? Yeah the gentleman in the back. And then you please.

[AUDIENCE MEMBER]
Oh, okay. Yeah, so one thing: a lot of us are in practice, and we have client confidentiality.

[AUDIENCE MEMBER]
In California, we have to maintain client confidences even at our peril. And the thing we really look at, and we’re sensitive to, is: what happens to a patent application that we’re trying to draft?

And maybe you have thoughts on this. You know, Gemini, I would love to take my Google search tool, put it in, give it some instructions, and see what it can do.

[STEVE GONG]
Yeah.

[AUDIENCE MEMBER]
But I have a lot of pause. Some of these tools, you know. At our firm, we have TypingMind, so we can run Gemini, ChatGPT, Claude, and a few others.

But what are the panel’s thoughts on client confidentiality, and where does this data ultimately end up?

And with all due respect to Gemini, I’ve looked at Google’s terms, and it quickly turns into spaghetti, in my view.

[STEVE GONG]
I don’t think our users are ready to.

(audience laughing)

[AUDIENCE MEMBER]
Anyway, I would love to hear your thoughts on this, ’cause this has been very pro-AI, which is great. Two years ago, when I was at this conference, it was very much, “You put something into the cloud, it’s a public disclosure.

It’s gonna come back and bite you.” Which I don’t think anyone believes today. But still, a lot of us in practice who handle invention disclosures are very concerned about client confidentiality. If you could comment on that.

[AYAN ROY-CHOWDHURY]
I think that was a question I’d asked at the beginning: how many of you have updated your engagement letters to get explicit consent?

[IAN SCHICK]
That’s not enough, right? I mean, you can say to your client,

[MICHELLE LEE]
Yeah, yeah.

[IAN SCHICK]
“I’m going to use AI.”

[AYAN ROY-CHOWDHURY]
Knowing all the risks.

[IAN SCHICK]
No, but how far do you go with the informed consent? It’s, you know, trickier than that.

So, I mean, what’s the specific fear then? ’Cause we trust technology with our clients’ data all the time. We email things, we store them in the cloud. Yeah, Facebook, Cambridge Analytica, data breaches, right?

I mean, these are companies, and some of them have a history of data breaches. The standard is reasonableness, though, right?

Like, reasonable care?

[STEVE GONG]
Well, let me ask you this. Like, do you not store any of your client materials, like, in the cloud at all?

[AUDIENCE MEMBER]
No. I’m at FisherBroyles, a cloud-based law firm; NetDocs, all in the cloud. So there’s a lot of sensitivity, partly because AI is so new, right? You have models.

You know, whether what I upload goes into training, to a competitor of my client’s, on how to do this, I think really isn’t the issue as much. But some of my clients have misconceptions about AI tools, right?

[STEVE GONG]
Yeah.

[AUDIENCE MEMBER]
They think ChatGPT is great, but if you do something in Claude, then, you know, they own the invention, which is not right. So some of us have to deal with these issues on a regular basis with clients. It is a new frontier. But just in terms of confidentiality, and Google, you know, giving this, if I heard correctly, to Fish & Richardson to look at.

You know, what are the things, as a panel, you’re looking at that you see as sensitivities there?

[STEVE GONG]
Yeah, so, there are two issues.

[IAN SCHICK]
Thank you.

[STEVE GONG]
One’s confidentiality and one’s privilege, right? I think you mentioned the first one, not the second. The first one, which I think is what Ian is getting at: I don’t see it as any different than any of the other systems out there.

There really isn’t anything that makes AI different. It’s just computing in the cloud, right?

If you trust storing your email, I don’t know why you would not trust storing–

[AYAN ROY-CHOWDHURY]
I think–

[MICHELLE LEE]
Could I–

[STEVE GONG]
Yeah?

[AYAN ROY-CHOWDHURY]
The concern is that Gen AI tools will use your data to train their models.

[STEVE GONG]
Yeah. So that’s the second issue, right? You do have to look at the terms of your engagement. If you use the public version of Gemini, the public version of ChatGPT, then yeah, of course.

[AYAN ROY-CHOWDHURY]
So–

[STEVE GONG]
That’s at your own peril. But you do have to look at your engagement with the specific vendor, whether Patlytics, or Paximal, or Google, and look at those terms. The issue of privilege is different, right? The privilege issue hinges not only on confidentiality, but also on who has access to that data otherwise.

So if you look at some of the terms OpenAI has, or even Google has, right? If there’s a term saying their trust and safety team can access your data for whatever reason, and there’s a period in which they can do that, then even if they don’t, that is an issue for the privilege of the data going through the model, right?

But you do have to look at all that. If nobody has access to the data who would not otherwise have access anyway, I don’t see how that’s different from any other system.

[MICHELLE LEE]
So that’s why I really do like patent AI systems that were targeted, geared, and developed for lawyers, that have the security and confidentiality–

[STEVE GONG]
Yeah.

[MICHELLE LEE]
Like, if we breach confidentiality, we are done. That is it, right? I mean, you cannot disclose confidences, right? So, you know, my colleague from Patlytics can confirm this, but for the models that we use, no other client gets exposed to any information that you enter.

[STEVE GONG]
Yeah.

[MICHELLE LEE]
It does not leak. Moreover, even if you’re in-house and you don’t want another part of the company to know about it, that can be another wall. And I am a hundred percent sure, well, I don’t run the patent office, I’m not there anymore, but I’m sure they’re considering on-prem solutions, right?

’Cause there’s no way around it: the office holds the world’s confidential information, all the product, all the proprietary information before it’s launched, right? I mean, that is a big issue. So I am guessing that the IT department there, which I used to work with, is looking at all those issues in connection with their solutions, the evaluation of those solutions, and what that means in terms of security for every one of the applicants. That’s what I would be looking for, right?

If I’m gonna bring in an AI system, there is no way a search done by a patent examiner can cross-contaminate or be disclosed to anybody else. That’s just not an acceptable solution. So whether or not they achieved that, I don’t know, but like I said, security and confidentiality are table stakes. And then we start talking about: are you faster and more efficient?

Are you accurate? Do you save time? Are you easy to use?

Those are all after you have nailed that down.

[AYAN ROY-CHOWDHURY]
We are on time, but I guess we have time for maybe one or two more questions.

[AUDIENCE MEMBER]
Have you heard whether the Patent Office has launched a pilot for having AI write 101 rejections? An examiner told us recently that they were facing that, and the AI was overruling them.

[STEVE GONG]
I’ve never heard that.

[AYAN ROY-CHOWDHURY]
So, AI was giving 101 rejections?

[STEVE GONG]
Writing the 101 rejections?

[AUDIENCE MEMBER]
Well, we were advised of it. I’ve got a patent examiner who lives in Alexandria, right, and one of the people handling our applications called him. He said that the AI had written the rejection: “Please don’t hold it against me.”

[STEVE GONG]
Oh, I don’t know. I don’t know about that.

[AYAN ROY-CHOWDHURY]
Yeah. But I know that yesterday the Patent Office came out with new guidance on subject matter eligibility, right, on 101 issues specifically: applicants can submit 132 declarations explaining, I believe, a significant technological contribution or things like that, and examiners should consider those in reconsidering the 101 issues.

[STEVE GONG]
I think there are just suspicions right now, right? I’ve also just heard, and this is treading on rumor now, that European examiners are now using gen AI to essentially write the rejections.

[AYAN ROY-CHOWDHURY]
So, going back to confidentiality, the RFI that the patent office gave out was asking for private vendors to do that.

[STEVE GONG]
Yeah, so last summer they put out a request for information on the various ways the public thought the patent office could apply AI, across various categories, to improve. And they did ask for comments on, as I recall, security, confidentiality, and so forth. Right. I mean, those are table stakes. So, we have one more question. I guess we aren’t–

[AUDIENCE MEMBER]
Mine is more about the infrastructure. Just one more.

[STEVE GONG]
Okay.

[AUDIENCE MEMBER]
Yeah, I was gonna ask– it’s fine. Oh, there we go. So, right, AI, especially gen AI, takes a lot of computing power, which I think is actually one of the limiting factors for the USPTO in terms of updating its IT infrastructure.

[STEVE GONG]
But they’ve asked the vendors to provide it at low or no cost.

[AUDIENCE MEMBER]
Yeah, but that’s also– low or no cost, yeah. My question is for everybody, because it seems like that could be a bottleneck or a choke point in terms of providing access to gen AI.

So, what is going on with making sure that the infrastructure is available and accessible to all?

[STEVE GONG]
I think we’re gonna start building data centers on the Moon.

[AYAN ROY-CHOWDHURY]
They should come to Northern Virginia. It’s now data center country.

(laughter)

[STEVE GONG]
Yeah. I think there are different approaches to AI right now. Right now the focus is on big data centers, building and building and building. There could be a more narrow focus at some point, right?

But right now, it is a race toward AGI, and that’s just where we’re at, right?

We’ll see how that goes. But there are other ways we can optimize how that is done. And the data center on the Moon is not a joke.

We actually have a patent on it. So it is something that is gonna be explored.

[WAYNE]
I wanted to thank the four of you for coming. This has been fantastic, and we could run questions for another hour. And I look forward to hearing this discussion next year, ’cause as somebody mentioned, it’s evolved over two years from “don’t” to “do.”

[STEVE GONG]
Yeah.

[WAYNE]
So, it’s been quite a transformation. So, thank you.

(applause)