BOOK REVIEW

EXAMINING TRADITIONAL LEGAL PARADIGMS IN A NON-PHYSICAL ENVIRONMENT: NEED WE INVENT NEW RULES OF THE ROAD FOR THE INFORMATION SUPERHIGHWAY?

SCOTT E. BAIN

LAW AND THE INFORMATION SUPERHIGHWAY. By Henry H. Perritt, Jr. New York: Wiley Law Publications. 1996. Pp. xxiii, 730. $135.

1997 Supplement. Pp. viii, 102. $57.

Table of Contents

I. INTRODUCTION

II. A TELLING METAPHOR

III. NEW RULES OF THE ROAD?

A. Access to Public Information and Networks

B. Protection of Intellectual Property Online

IV. CONCLUSION

I. INTRODUCTION

Scarcely a decade has passed since author William Gibson described in his acclaimed science fiction novels Neuromancer and Count Zero a fantasy world of computer-generated data matrices which had no correlation to any physical reality, but to which people could "plug in" via a "brain-computer link" and have the illusion of physically moving about to obtain information.1 Gibson dubbed this virtual world cyberspace, "the space that wasn't space."2 In Gibson's portrayal of cyberspace, inhabitants could meet, converse, carry on business and recreation, and do everything else that was possible in a physical world, including breaking the law.3

Today it is apparent that Gibson's vision was more than a science fiction dream. It is strikingly similar to existing information and communications networks, and the term cyberspace has even caught on as a name for the virtual space in which users interact via these networks. Politicians, the media, business leaders, and technologists have sensationalized this burgeoning information infrastructure,4 especially the "network of networks" known as the Internet, and its potential to change the way we communicate, learn, work, and play.5 Already, this "Information Superhighway" is having a phenomenal effect on the way we live. Cellular phones and fax machines, email and the World Wide Web, satellite broadcasting and cable television, video conferencing and personal communications systems are just a few of the building blocks of this Superhighway that are helping to create a society and economy in which time and geography are no longer formidable obstacles to human interaction, and information is the most valuable commodity.6

History suggests that when new technology is a catalyst for sweeping social and economic change, the law struggles to keep up.7 Thus, not surprisingly, the perceived changes produced by the proliferation of networks and digital technology have led several notable commentators to suggest that traditional legal paradigms are an uncomfortable fit in this new environment. For example, in Being Digital, Nicholas Negroponte describes the law, in its struggle to adapt to digital technologies, as "behaving like an almost dead fish flopping on the dock. It is gasping for air because the digital world is a different place."8 Similarly, John Perry Barlow, noting the "problem of digitized property" and lack of physical boundaries in cyberspace, has compared our continued reliance on traditional legal models to "sailing into the future on a sinking ship . . . developed to convey forms and methods of expression entirely different from the vaporous cargo it is now being asked to carry."9 Barlow argues that instead of trying to make the old legal models work through a "grotesque expansion" of existing laws or through the "brute force" application of these laws to the digital world, we "need to develop an entirely new set of methods as befits this entirely new set of circumstances."10

Are existing legal principles and paradigms truly insufficient to address the problems presented by the new information and communications technologies? Does a special body of law need to be carved out for online issues? In Law and the Information Superhighway, Henry Perritt attempts to answer these questions.11 And from the outset his answer is clearly a resounding "No!"

Professor Perritt is eminently qualified to address the legal issues presented by new information technologies. Perritt has provided information law and policy guidance to the Clinton administration, the European Commission and other international bodies, and the Board of Governors of the American Bar Association. He has been a professor of computer and information law at Villanova University School of Law for fifteen years, and is an instrumental figure in the Villanova Center for Information Law and Policy. The author of 11 books and more than 35 journal and law review articles, Perritt has been cited in more than 400 journal and law review articles and 30 cases.12

Perritt squarely acknowledges in the first chapter of Law and the Information Superhighway that the Information Superhighway presents several novel phenomena that call for thoughtful examination of existing legal paradigms.13 However, he asserts that these phenomena are merely "interesting," not "revolutionary," and that we do not need to scrap the traditional legal doctrines developed in other contexts. What we need, according to Perritt, is "a clear understanding of the core legal principles . . . and a clear understanding of how the various NII [National Information Infrastructure] technologies actually work," so that we can properly adapt the existing doctrines to the NII.14

Perritt provides ample support for this position throughout his text, exhibiting a keen understanding of the underlying network technology as well as a thorough knowledge of traditional American and international legal principles. He addresses an extremely broad range of issues, many of which books of this sort often omit for the sake of manageability,15 including criminal, regulatory, and international issues, as well as problems related to NII technologies other than the Internet, such as telephone and cable. Although the ambitious undertaking of analyzing the law over this wide range of topics limits his opportunity to provide detailed policy arguments in all of the areas, he effectively picks his spots, dropping suggestions where the law appears to him to be unclear or headed in the wrong direction.16 His analysis of existing case law, and his extension of the principles established therein to the online context, are consistently sound, making a convincing case for the continued use of traditional paradigms and frameworks to address the legal problems on the Information Superhighway.

II. A TELLING METAPHOR

Perritt enlists the commonly used term "Information Superhighway" as a metaphor for the information infrastructure in order to emphasize his point that a new legal structure is unnecessary. He compares the Information Superhighway to the interstate highway system which, like its electronic counterpart, requires various rules to ensure continued order, safety, and utility for those who use it. He notes that on a physical highway, one must have rules establishing tolls for use of the highway (analogous to NII regulation policy); payment systems for bus rides and automobile rentals and purchases (E-commerce); and rules for determining who gets to use which lanes and when (NII access policy).17 Likewise, the highway must have rules for allocating risk of loss for accidents (liability for harmful electronic communications); rules for assigning responsibility for fixing potholes (liability for information service failures); standards to ensure passable interconnections between roads (interoperability and standard setting); and safeguards to constrain police and others from unreasonable searches of vehicles (E-privacy).18

Perritt is not the first author to utilize linguistic devices such as metaphors and analogies to facilitate understanding of the online world. A description of an abstract, non-physical entity such as cyberspace (much like the description of abstract, non-physical emotions or experiences such as anger, love, or even the creative process of writing) often requires the use of familiar language based on known, tangible things discernible by the traditional five senses.19 Thus, the cyberspace literature is rich with symbolism and comparisons to things common in real-world existence. For example, copyright law professor Paul Goldstein coined the term "Celestial Jukebox" to describe the satellite and fiber optic network that will deliver a nearly limitless selection of videos, movies, and texts to our desktop or living room.20 In Netlaw, Lance Rose examined the effectiveness of various metaphors for the Internet, including "local bar," "wild west frontier," "supermarket," and "adult bookstore."21 Some metaphors have even become an explicit part of the Internet language: for example, we can "surf" the World Wide "Web" (WWW), guided by Netscape "Navigator"(tm) software.22

While Perritt's use of the Information Superhighway metaphor is effective in calling immediate attention to his point that the development of the NII does not necessitate drastic changes to the structure of the law, he wisely refrains from carrying the Superhighway metaphor too far. The mere fact that we can recognize similarities between the virtual world and something (like a highway) in the physical world does not mean prima facie that the same legal paradigms apply to both. Consequently, Perritt devotes adequate individual attention to each of the separate legal subjects,23 to determine which legal paradigms are most appropriate for that subject in the online context, and which principles are best applied to resolve the problems presented.

III. NEW RULES OF THE ROAD?

Some online legal issues lend themselves fairly easily to traditional paradigms. For example, someone who uses a computer to gain unauthorized access to a bank's computer system, cracks the security codes with the help of software tools, and transfers funds to an account of his own is as easily characterized as a criminal as someone who breaks into the same bank at night, cracks open the safe with a crowbar, and steals printed money.24 In other contexts, though, how existing legal principles provide tenable solutions to online problems is less clear.

In addition to resolving the more straightforward problems, Perritt offers some creative applications of existing principles in analyzing the more complex problems. Without attempting to force traditional legal paradigms where they do not fit in the online context, Perritt illustrates that even the seemingly troublesome online problems can be addressed effectively within the existing legal framework, staying faithful to his premise that creating new categories for NII issues is unnecessary. While it would be impractical to summarize his treatment of each of the many issues presented in the book, an examination of the following two particular areas provides a representative example of Perritt's legal and policy analysis: (1) access rights to public information and to online networks and facilities; and (2) private intellectual property rights.

A. Access to Public Information and Networks

Perritt considers the availability of access to public information to be a key to the development of the NII: "In order for the full potential of the NII as a conduit for public information to be realized . . . private sector electronic publishers and individual citizens must have access to basic governmental data collected by public entities, particularly including primary legal information."25 He supports the extension of traditional Freedom of Information (FOI) doctrine into the online environment, stressing two principles that he believes are essential to developing fair, effective information policy for the NII: first, if information is requested in electronic format rather than paper, it should be supplied electronically if available;26 and second, the government should promote a diversity of channels and sources of public information (which necessarily coincides with principles of access to networks and facilities).27 Although the copyright law,28 the First Amendment,29 and state freedom of information laws all provide some degree of protection for information access rights, Perritt believes that the most important source of that right is incorporated in Freedom of Information Acts (FOIAs);30 thus, he focuses his analysis on the application of FOI doctrine to the online environment.

Perritt's first principle of effective information policy is sound. Anyone who has used electronic information tools realizes the electronic format has significant search and retrieval advantages, saving the user time and money.31 Thus, as a purely practical matter, supplying documents in paper form rather than electronically impairs public access to information. Perritt alertly recognizes that the inability to obtain electronic formats of information would present a problem for electronic publishers who wished to attach value-added features to the information and resell it, because electronic format greatly facilitates the addition of features like links and tags, and significantly lowers barriers to entry in the market.32 He criticizes the position taken by those who claim that mandating electronic disclosure of public records to private publishers planning to resell the records for profit constitutes the use of public funds for private purposes. Perritt argues that:

the mere fact that an individual or entity may obtain income from an activity that serves a public purpose does not negate the public nature of the activity. When a commercial publisher disseminates public information, it is serving a public purpose, the same purpose that is the central justification for enactment of the Freedom of Information statutes: increasing access to government information.33

Providing public electronic records to private publishers for resale does, however, present some interesting problems of ownership and property protection. Raw public information such as judicial opinions, the text of statutes, basic land records, and agency rules is not copyrightable.34 Thus, pirates could extract the publicly supplied information (omitting the value-added features such as tags, links, and headers) from the private publisher's product and reuse it in competition with the publisher, incurring neither the publisher's cost of assembling the product nor liability for copyright infringement. Of course, this problem is nothing new, as it appeared in the context of paper records more than a century ago. In the 1834 case of Wheaton v. Peters, the Supreme Court confirmed the right of competing reporters to publish the text of its opinions.35 The problem is even more significant in the electronic medium, which greatly facilitates the "lifting" or extracting of such unprotected content from a publisher's product.36

Perritt suggests several strategies that private publishers can use to protect their investments, without necessitating the reach of copyright into material belonging in the public domain. Publishers could design the product to make effective pirating difficult by using fine "granularity of information" (dividing content into many small parts to make it difficult for each individual element to be extracted and reassembled by a pirate) or utilizing a "planned obsolescence" strategy (providing frequent updates to the product and thus rendering older material worthless).37 Creative legal solutions are also possible. Perritt suggests that publishers brand electronic information products with trademarks, which prevent competitors from appropriating the name and goodwill built up by the trademark owner.38 This is an excellent suggestion, because it is difficult to effectively protect digital works, particularly works encompassing some degree of public, non-copyrightable information, solely by copyright law.39 Also, trademark infringement is sometimes easier to prove than other theories of misappropriation or infringement, as illustrated by several recent cases.40

In addition to providing some degree of protection for the information products of private publishers, Perritt indicates that trademarks can be used by public entities which directly supply information:

[C]onceivably, a local government could obtain a trademark for the 'official version' of a land records database and deny use of the trademark to unofficial sources. This form of intellectual property permits public agencies to reduce risks of poor quality information that might endanger the public, while also permitting a diversity of channels and sources to exist.41

Perritt's second principle of effective information policy, promoting diversity of channels and sources, is also sound. He observes that

The need for a diversity of sources and channels of information . . . is based on the reality that no one supplier can design modern information products to suit the needs of all users. The diversity principle is inimical to any state-maintained or state-granted monopoly over public information.42

Perritt notes that the present architecture of the infrastructure provides for a wide variety of channels and conduits for information, in the form of various choices of service providers to access the network43 and a huge network of cables and wires providing a wide range of paths between any two points.44 However, in concluding that this architecture likely precludes problems of access diversity from developing, Perritt overlooks one significant bottleneck to network access and use of online services, the potential problems this bottleneck presents, and the possible role of antitrust law in addressing those problems.45

Perritt devotes substantial analysis to the relationships and markets of World Wide Web content providers and Internet Service Providers (ISPs),46 and concludes that the wide variety of access points and paths of information flow likely precludes any online market from being characterized under antitrust law as an essential facility and, furthermore, appears to obviate the need for antitrust intervention to ensure competition in online markets.47 However, he appears to have missed the importance of the final physical "link" in the "chain" of information flow across the network from content provider to consumer: the user's desktop personal computer (PC) and modem.

Web browser software such as Microsoft(tm) Internet Explorer(tm) and Netscape Navigator(tm), running on top of the operating system (OS), currently provides the interface between the user and the network. However, many industry observers agree that browser software and OS software are on an inevitable crash course: they will soon become one and the same.48 Netscape Navigator, for example, already has the capability of running "add-on" applications,49 much like an OS runs application programs. The potential problem, then, is that one company (Microsoft Corp.) already has a virtual monopoly on the OS market,50 making it quite possible that it could soon control the user's complete software interface with the web.51

A monopoly in the area of desktop web interface software (which, for the sake of simplicity, will be referred to as OS software) could have several detrimental effects on access to public information, as well as chilling effects on the free speech rights of content providers and consumers. First, it could limit the variety of forms in which a user may receive information. The variety of available value-added features that Perritt considers so important could significantly decline in the event of a monopoly OS, because any feature not supported by that OS would not likely be developed by content providers. Second, the bottleneck at the desktop could be used by the producer of the monopoly OS to regulate the actual substance of information users are able to receive, the ease or difficulty with which it can be retrieved, and the speed with which it is accessible. In other words, the OS can be designed so that the content produced by sources favored by the OS producer (probably as part of a financial arrangement between the parties) appears more attractive, or has additional features, or is accessed faster than the content of another source.52 Not only is this harmful to public information policy in that it limits the diversity of sources and channels, but it also allows the OS producer to filter out specific kinds of information, giving it the power of "virtual censorship." Likewise, the OS producer could design the system to favor certain ISPs over others, thereby limiting users' practical access to the physical network.53

There are several theories under which antitrust law could help address the problem. First, section 2 of the Sherman Act could be applied to prevent a dominant OS producer from tying other information products or services to the sale of the OS, or from engaging in other anticompetitive practices.54 Presently, this theory would help to preserve competition in the market for web browsers, thereby allowing competing browsers to develop into viable competing OSs for the future. Second, vertical relationships between OS producers and content providers or ISPs should be scrutinized carefully under section 1 of the Sherman Act, in order to prevent certain content providers or ISPs from gaining a favored status with the OS producer.55 Potential mergers and acquisitions of firms in other network or computing markets by the OS producer should likewise be closely scrutinized under the Sherman and Clayton Acts.56 Finally, some lawyers and economists view online markets as natural monopolies, meaning that the existence of one dominant OS would actually be the most market-efficient scenario.57 In that case, viewing the operating system as a bottleneck to the network, the essential facilities doctrine could be applied to ensure that all content providers have the ability to send information to users without any filtering mechanisms, or discrimination at the OS level as to how fast the file is received or the way it is displayed.58 This would alleviate potential access policy problems as well as constitutional problems.

In light of the foregoing, it seems that antitrust law may have a more important role to play in access problems than Perritt realizes, and that the application of existing antitrust doctrine to a non-physical "bottleneck" like a software interface is not truly analogous to the physical examples he cites. Nevertheless, his premise that existing legal principles are sufficient to address NII problems still holds true in the area of public information and network access.

B. Protection of Intellectual Property Online

Digital technology and networks create a myriad of problems for the protection of intellectual property. The many levels of expression in computer programs, which can possess literary qualities,59 artistic qualities,60 and functional qualities,61 make it difficult to ascertain the appropriate scope of protection for programs, particularly copyright protection. Unique problems for intellectual property are also presented by characteristics of digital networks, such as the ability to make perfect copies and to do so in potentially unlimited numbers, the ability to instantaneously transmit copies of digital works to any number of users (and the uncertainty as to whether such a transmission satisfies the "fixation in a tangible medium" requirement for establishing copyright),62 and the caching of files downloaded from remote sites.

Perritt recognizes the importance of protecting intellectual property, and the gravity of the piracy problem on the Internet: "The NII can realize its potential only if it protects private property and makes it possible to offer something for sale or license in open networks like the Internet without it being misappropriated by a competitor."63 Consistent with his oft-stated premise, though, he believes that the combination of existing legal, technological, and business schemes can adequately protect property interests online, while properly balancing such private interests with the public domain.

A great deal of literature addresses the problem of intellectual property protection in cyberspace. The approaches taken can be put into three general categories: (1) arguments for the creation of entirely new regime(s) to protect software or digitized property (sui generis approach); (2) arguments for rewriting the copyright laws to make them fit the digital environment; and (3) arguments for a more conservative approach, based on existing legal principles and on the promise of technology to help address the problems it created. Several notable authors have favored the first category. In A Manifesto Concerning the Legal Protection of Computer Programs, Pamela Samuelson, Randall Davis, Mitchell Kapor, and J.H. Reichman argue that the unique properties of computer software, including the fact that programs "behave," make software expensive to develop and easy to imitate, an ill-suited combination for protection under traditional intellectual property regimes such as patent and copyright.64 They argue that the use of traditional regimes will lead to cycles of overprotection or underprotection of software, and outline the general principles of a sui generis, market-based approach that features a three-year "blockage period" for software clones, during which development but not distribution would be allowed.65 In a separate article, Reichman proposes another hybrid regime loosely based on antitrust and trade secret (but not property) principles that is designed to give innovators adequate lead time to recover their investment, while allowing others to build socially desirable derivatives of the innovation.66

The Clinton Administration's Information Infrastructure Task Force favors the second category. In The Report of the Working Group on Intellectual Property Rights (commonly referred to as the "NII White Paper"), the Task Force's Working Group, led by Patent and Trademark Office (PTO) Commissioner Bruce Lehman, outlines several proposals for revising copyright law for the digital age, including provisions clarifying that transmission of a copyrighted work is the exclusive right of a copyright holder, imposing liability on service providers for copyright infringement perpetrated via their systems, and prohibiting the unauthorized removal or alteration of copyright management information.67 Some of the proposals have already been enacted into law, such as the right of performance in digital works.68

Professor Perritt, on the other hand, favors the conservative approach of the third category. He asserts:

[M]uch of the concern about protecting intellectual property through new statutes and elaborate encryption structures is overblown. As with other threats to property and personal interests through the NII, a combination of existing law and effective entrepreneurial mobilization of the particular attributes of new technologies should suffice to strike a reasonable balance between competing interests.69

For example, Perritt states that the problem of determining the copyrightable elements of a computer program is adequately addressed by the "filtration method" used by the court in Computer Associates Int'l, Inc. v. Altai, Inc.70 The filtration approach becomes difficult to apply, however, when a product becomes so successful in the marketplace that competitors can succeed only by copying certain of its features.71 Perritt therefore suggests that when standard intellectual property doctrines seem to fail or create confusion in the digital environment, innovators can use "alternative protection methods" based on existing legal, entrepreneurial, and technological principles to protect their intellectual property.72

For example, the non-copyrightable elements of a compilation such as a database can be protected by contract law, in the form of shrinkwrap licenses. The Seventh Circuit recently upheld the enforceability of such licenses in ProCD v. Zeidenberg.73 On the web, shrinkwrap licenses (called "point-and-click," "click-on," or "click-through" licenses in that medium) arguably would have even greater enforceability, since the web site can force the user's browser to display a license before the user enters the site, requiring the user to take the affirmative action of clicking on a link to indicate consent to the license terms.74 Perritt also suggests a "reverse passing off" theory for protection of non-copyrightable, sweat-of-the-brow digital works that could otherwise be misappropriated by pirates,75 and he notes that content providers can use business strategies such as planned obsolescence, fine granularity of information, and marking and tagging to increase practical protection for digital works.76 In addition, technology such as encryption and password protection will become more effective as appropriate mechanisms are developed for their use, and trademark law will play a much greater role in the protection of works displayed by a computer, since product branding can effectively show the source of a product.77 Thus, Perritt argues, digital property rights can be protected in many ways without creating new legal doctrines.

Of the three approaches discussed, Perritt's approach is the wisest course of action, at least for the present time. The sui generis approach is attractive on some fundamental level because it attacks the problem head-on, taking a fresh look at the problems presented by digital technology and the market structures affecting incentives to produce intellectual property in the digital era. Arguably, such a fresh look will result in the fairest and most efficient rules for protection. However, resorting to a sui generis approach creates the danger of an intellectual property regime consisting more of special rules for various technologies than general rules. Various pockets of specialized law have already been carved out for plants,78 semiconductor chips,79 and pharmaceuticals.80 As more and more technologies are added to the fray, such a regime may rapidly become cumbersome and impractical to work with. There will likely be technologies that straddle the line between special categories, and lawyers and courts will struggle to characterize whether a technology is more like prior technology A or prior technology B, rather than focusing on the underlying intellectual property doctrine. More importantly, though, the time has not yet arrived to implement a sui generis approach. Technology is just beginning to address the problem it has created, and technical solutions such as encryption and trusted systems may prove adequate to address the easy pirating of digital products.81 Once new legal regimes are created, they are quite difficult to "undo." The approach suggested by Perritt lends itself to a continued surveillance of the issue so that, for example, the technological protection capabilities that are currently in testing and development stages in the laboratory82 will have the opportunity to be tested in the marketplace, where their strengths and weaknesses will be more readily apparent. If market failure is evident (i.e., if holes in the protection scheme continue to create a disincentive for creative and inventive works to be produced), then the more drastic measure of crafting a new regime can be undertaken.

Perritt's approach is also preferable to that suggested in the second category (a revision, expansion, and in some cases reinterpretation of the copyright law, as proposed by the Working Group). The difficulty implicit in relying heavily on copyright law83 to solve the piracy problem is doing so without also encroaching on the public's fair use rights. Collectively, the proposals set forth in the NII White Paper do not overcome this difficulty. For example, the Working Group's proposals for an exclusive "transmission right" and its interpretation of a temporary RAM copy as a "copy" for copyright purposes seem to signal the end of the "first sale doctrine" as applied to digitally transmitted documents.84 The first sale doctrine normally gives a user who lawfully purchases a copy the right to sell or dispose of that particular copy as he wishes, without liability for copyright infringement.85 However, under the Working Group's proposal, forwarding your copy of an electronic file to a friend would be an infringement in two ways: the transmission itself would be unlawful, and the temporary copy created in RAM on your computer would also be an unlawful copy. The Working Group also supports the expansion of copyright law to impose strict liability on ISPs for acts of infringement committed by their users,86 and the outlawing of decryption technology that may have substantial noninfringing uses.87 Again, these policies place unnecessary burdens on the public where none should rightfully exist.

Perritt's approach, on the other hand, balances the public interest with the rights of property owners. He urges that caching should not be considered an infringement for copyright purposes, and that, in the alternative, fair use should clearly apply.88 He also notes that the defense of an implied license could be utilized in the online scenario. If a content provider puts material on the World Wide Web, he is granting an implied license for the file to be cached, since it can only be viewed using web browser software, which creates cached copies of the files it accesses. As a final protection of the public domain, Perritt suggests that the "copyright misuse" doctrine, recently recognized by the Fifth Circuit, can prevent copyright owners from usurping the fair use privileges of the public.89

Thus, in the vigorously debated area of online intellectual property, Perritt presents a sound argument that existing legal principles, business strategies, and developing technology strike the most appropriate balance between the rights of intellectual property owners and the rights of the public. Once again, he shows that our traditional legal framework need not be discarded in evaluating online issues.

IV. CONCLUSION

It is perhaps a paradox that, while Perritt insists that the legal issues presented by the NII are not new issues requiring special legal treatment, he has written this book on the subject generically called "Cyberlaw" or "Online Law." The grouping of cyberspace issues together in a single book may lead casual observers to believe that the author has made such a categorization for the purposes of establishing a distinct body of law for this technology. Perritt, of course, clearly intended quite the opposite effect and, as discussed above, his cogent analysis amply supports that position.

Law and the Information Superhighway may also be viewed as a paradox because, while it addresses a medium in which instant access to information and fast-paced change are obvious qualities, the book is published in slow-to-deliver and cumbersome-to-update paper form, rather than electronically. Thus, although the book will be supplemented annually via pocket parts,90 it will consistently lag behind in this rapidly changing field. One alternative would be to publish the book in digital form, with supplements available online from the publisher, which would allow the seamless integration of supplements with the original text.91 However, no matter how often the author or publisher updates a book on the subject of the Information Superhighway, whether in paper or digital form, the book will never keep up with the almost-daily developments in this field. Thus, perhaps the task of reporting on these daily changes is best left to web sites such as that of the author's own Center for Information Law and Policy.92 Certainly plenty of sources of information are available online, most of them for free.93

In light of Professor Perritt's thorough treatment of NII legal history and case law, and his convincing comparisons of NII issues to analogous legal problems in other, traditional contexts, any potential paradoxes presented by the book are mere sidebars which do little to distract the reader's attention. Law and the Information Superhighway is a comprehensive, accurate, and insightful survey of the application of law to the developing information infrastructure, and is a welcome addition to the growing body of literature in this field. Its broad coverage of all of the converging communications technologies, not only the Internet, makes it a unique and valuable roadmap of the law for any lawyer or non-lawyer who may venture onto the Information Superhighway. Whether Perritt is correct in his view that existing legal categories and paradigms are adequate to resolve the problems presented by the development of the information infrastructure remains to be seen, but his clear presentation of the issues in Law and the Information Superhighway will remain useful no matter what unpredictable turns technology and the law may take.