Berkeley Informatics Lab

Law is the prime network. 

It links us—by enabling commerce, property, justice—and through those abstractions (for they are abstractions), society.  Law is the network that defines or bounds both foundational and transactional human relations.  Law is a self-referencing and organically evolving construct.  Through its elements we “model” and improve society across time. 

Were reality so neat.  In fact, as opposed to in theory, law is a dark thicket.  It is inaccessible to, or misunderstood by, the vast majority of people.  It is effectively (or ineffectively) delivered and administered by a priesthood or “guild” system.  Legal systems produce injustice too frequently, and they compound complexity as human actions exceed venue, subject matter, and expectation.  Costs increase accordingly.

The legal profession is frequently criticized for the billable hour.  But the problem is far more complex.  Even ostensibly “simple” contracts like NDAs often require bespoke articulation.  How much more so corporate acquisitions and complex litigation?  Such complexities require indefinite amounts of review to encompass—and then greater amounts to apply zealously to a real person, in a real situation, against a real adversary.  As an attorney, I control my own doctrinal and empirical knowledge base; but I do not control my client’s decisions, aggressiveness, or behavioral span.  Thus, I cannot control the volume of salient legal information.  An indefinite (if bounded) number of hours has therefore been a reasonable response to an indefinite amount of data and conflict.

But that is changing.

There is another branch of academic study that squarely and practically addresses information complexity: computer science.  One may argue that law and computer science share the same goal: The accurate classification and speedy resolution of real-life states and conflicts.  There is no hard division between civil procedure and algorithmic theory, between tax and AI-based classification, between rules v. standards and logic v. probability.  The archetypes of law and computer science are the same.

Advanced computational tools have materially improved the applicability of legal scholarship to real-life practice.  For example, empirical scholarship in intellectual property employing artificial intelligence tools has vastly improved our understanding of the patent system—and arguably its performance.  More relevant to practitioners, it has improved their performance.  Improved knowledge means better laws and better practice.  We can instantly measure time to summary judgment in patent cases.  We can assess and compare judicial workloads.  We can find better examples of legal and scientific art—instantaneously.  We can analyze past events to model future outcomes.  We can illustrate and align the paths of legal doctrine and practice.
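To make the first of those measurements concrete, here is a minimal sketch of computing time to summary judgment.  The docket records below are invented for illustration (real studies would draw on actual court docket data); the computation simply averages the days from filing to summary-judgment ruling.

```python
from datetime import date

# Hypothetical docket records: (case number, filing date, summary-judgment
# ruling date).  These rows are invented; a real analysis would parse
# docket data for actual cases.
cases = [
    ("2:10-cv-01001", date(2010, 3, 1), date(2011, 9, 15)),
    ("2:10-cv-01002", date(2010, 5, 20), date(2012, 1, 10)),
    ("2:10-cv-01003", date(2010, 7, 4), date(2011, 6, 30)),
]

# Days from filing to summary judgment for each case.
durations = [(ruled - filed).days for _, filed, ruled in cases]

# A simple aggregate: mean time to summary judgment, in days.
mean_days = sum(durations) / len(durations)
print(mean_days)  # → 508.0
```

The same pattern—structured records plus a one-line aggregate—extends directly to comparing judicial workloads or modeling outcome distributions.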

That is new.

This was not the case ten years ago in the United States.  None of these capabilities existed.  Ten years from now, we may be vastly more adept at employing the legal informational tools of computer science.  Similarly, computer science may be better applied through greater inclusion of legal constructs and realities.  For example, data inefficiencies in the administration of U.S. health care are ripe, worthy targets of legally compliant AI solutions.  To be effective, those computational constructs must be isomorphic with legal data constraints (e.g., HIPAA).  Any worthy solution requires intense interdisciplinary collaboration.  And while it is an enormous market, health care is just one example of the prospective academic and commercial opportunity.  We should seek revolutionary advances.

The mission of this lab is to place Berkeley at the center of that revolution.  It is not an abandonment of the legacy we have inherited.  On the contrary: The lab represents an opportunity to scale, democratize, and judiciously improve upon the best of that legacy.

Law is the prime network.

J. H. Walker

Palo Alto, 2013