"Open Access, Open Archives And Open Source in Higher Education"

 

National Scholarly Communications Forum : "Open Access, Open Archives And Open Source In Higher Education: The Changing Nature Of Scholarly Communication."

State Library Of New South Wales, Sydney, Australia.

9.30–10.45, Tuesday 27 September 2005.

 

Running through what follows there are three related concepts: unbundling, interoperability, and non-exclusivity. Unbundling is a matter of commercial practice; interoperability is a matter of technical practice; non-exclusivity is a matter of legal practice. The most important of these is non-exclusivity, but taken together, these three are necessary conditions for achieving independence and sustainability in a very broad range of activities within higher education, from scholarly publishing to teaching to administration to outreach. In fact, I would argue that the greatest impact of open X, when all is said and done, will result from opening the academy itself to the public, by making all its forms of creative and intellectual endeavor freely available.

 

If the nineties were the E generation (e-commerce, e-business, e-publishing, eBay, E*Trade, etc.), the aughties are the O generation (open source, open systems, open standards, open access, open archives, open ...everything). This trend, now unfolding with special force in higher education, reasserts an ideology, a meme, that has a continuous tradition traceable all the way back to the beginning of networked computing (in fact, as far back as the Enlightenment, for example in Thomas Jefferson's defense of the principle that "ideas should freely spread from one to another over the globe"). Call this meme "Liberation Technology." Recently, it has been adopted by some venerable institutions--not only by some of the great public and private universities, but also by major private foundations--and it means business.

 

Since the beginning of Internet time and before, Liberation Technology has been intertwined with and opposed to another ideology, which we'll call "Command and Control." You see Command and Control at work in the military roots of the Internet, in the Recording Industry Association of America's prosecution of file-sharing college students, and in Microsoft's doubly possessive and oddly revealing slogan ("your potential, our passion"). Liberation Technology wants to keep information free; Command and Control wants to make the Internet safe for private property.

 

To be sure, not all proprietary operations oppose open inquiry--but the key to the business success of open-source products like Linux is that they allow people to make money by selling them, without allowing the seller exclusive control. Especially with information goods, the notion of non-exclusive commercial rights is key.

 

By the early 1990s, the Internet was expanding rapidly--from one thousand hosts in 1984 to one million in 1992--and new, more sophisticated applications were appearing, like Gopher (1991) and the World Wide Web (the first Web server in the United States was set up in 1991, with Mosaic, the first graphical Web browser for personal computers, coming along in 1993). Throughout the 1990s, university faculty members and students outside of computer science were gradually becoming aware of the existence of the Internet, largely because of the Web; so was the rest of the world, for the same reason.

 

In retrospect, it's difficult to comprehend the rapidity with which the Web went from an obscure science experiment to a fact of daily life, but it took only about three years. In late 1994, the World Wide Web Consortium was founded to take over managing Web protocols and their development, and to ensure that the Web would remain a nonproprietary public good. XML 1.0 (Extensible Markup Language) became an official W3C Recommendation in 1998.

 

In distinct contrast to that ethos, with its focus on the public good, an aggressive campaign began in the late 1990s for expansion of the property rights of "content providers," in legislation like the Digital Millennium Copyright Act and the Sonny Bono Copyright Term Extension Act (both passed in 1998) and in case law arising out of the Recording Industry Association of America's suit against Napster in 2000. Mixed in there was the Microsoft antitrust case, initiated in 1998 under the Clinton administration, decided against Microsoft in the lower court, overturned on appeal, and eventually settled, quite favorably for Microsoft, by the Bush administration in 2001.

 

Against that backdrop, during the 1990s universities all over the United States became big IT consumers--not just in computer science or the sciences, but increasingly in all disciplines, on every part of campus, for all kinds of services. As they came to rely more, and more broadly, on networked information in teaching, research, and administration, universities turned away from the strategy of meeting their own specialized needs with homegrown software and began to license more commercial products. They also began to be seen, for the first time, as a profitable market for commercial IT products and services. WebCT and Blackboard, for example, both appeared on the scene in 1997, and over the next few years they signed up hundreds of university clients for "e-learning" systems to put courses online, do grading online, accept homework assignments online, and so on. On the administrative side, beginning in the mid-1990s, Enterprise Resource Planning (ERP) systems from vendors like PeopleSoft and Oracle--for managing payroll, student records, human resources, purchasing, and the like--began to find a market in universities, a market built partly on the fear that Y2K would wreak havoc on the older, usually homegrown, systems that had hitherto performed those functions, often successfully, and often for years.

 

Universities themselves also got caught up in the Internet bubble--that combination of greed, optimism, and willful ignorance of history that led us to believe that information technology would create a permanent bull market. In the heady days at the turn of the millennium, Columbia University, to take only one of many possible examples, plowed millions into launching Fathom, a for-profit online content-provider for e-learning, confident that such a foray into the commercial sector would turn a handsome profit for the stakeholders, who included not just Columbia, but the London School of Economics and Political Science, the New York Public Library, the University of Chicago, the University of Michigan, and others.

 

Sometime in 2000, though, the pendulum started swinging the other way, beginning, perhaps, in reaction to failures such as Fathom's. In his annual report for 2000-2001, the president of the Massachusetts Institute of Technology, Charles M. Vest, succinctly articulated a return to the original ideology of the Net when he announced MIT's OpenCourseWare project to make the primary materials of its courses available online, for free. As he noted, "inherent to the Internet and the Web is a force for openness and opportunity that should be the bedrock of its use by universities." Vest's report is not the source of the trend that is now unfolding, but it is certainly a document that crystallizes a historical moment. It is significant for another reason, too: it is emblematic of what's changed in this iteration of Liberation Technology. This time around, the ideas are being advanced not by ragtag communitarians but by major institutions--not just MIT, but a number of other universities as well--with substantial backing not just from universities, but from corporations, foundations, government agencies, and governments.

 

In MIT's case, support comes from the institution itself and also from two major private foundations, the William and Flora Hewlett Foundation and the Andrew W. Mellon Foundation. On a first visit, the MIT OpenCourseWare site looks a little long on structure and short on substance. If you dip at random into courses, you may see mostly syllabi, perhaps some exercises, and a list of assigned readings, but not the readings themselves (leading you to wonder how the effort is going to provide new educational opportunities in the developing world, as claimed). But on further investigation, you'll find that some courses have the complete text of every lecture (in PDF), and others have full-length videos of every lecture (at three different resolutions, for slow, medium, and fast connections). At that point, MIT's claim to be the first open-source university begins to seem more plausible.

 

MIT can't give away the readings in its courses--in most cases, textbooks and articles that come from commercial publishers--but think back to this case when we come to the discussion of open access and institutional self-archiving, later on: open courseware linked to open archives would be a powerful combination indeed. Still, even at present, MIT can give away the intellectual property created by its own faculty members, and that's what it's doing. As with the open-source-software movement from which it drew inspiration, it permits the reuse, modification, and redistribution of content. Unlike open-source software, however, it prohibits doing any of those things for commercial purposes.

 

That prohibition might actually be a mistake, in the long run, but in any case the distinction is important, and it is key to understanding the doctrinal differences among open-source sects. Beginning in the early '80s, the innovation of the open-source-software movement was to argue that users should have the freedom to modify source code and to sell the results, as long as the source code for the modified version was made available on the same terms. Those terms are codified in the GNU General Public License; since then, other variants of open-source licensing have emerged. MIT's materials in OpenCourseWare are covered by a different, newer license, developed by Creative Commons, an effort led by Lawrence Lessig, who set up Stanford University's Center for Internet and Society, with support from Hewlett, the Stanford and Harvard law schools, and others (including the Center for the Public Domain, founded by Red Hat's CEO). The Creative Commons license allows copying and redistribution, but also gives the content creator a set of options with respect to attribution, commercial use, and modification of the work. It is inspired by the GNU license, but informed by a somewhat broader perspective, in that it is intended to cover creative work other than software.
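To make that option space concrete, here is a small sketch, in Python, of how a creator's three choices combine into the familiar Creative Commons license names. It is illustrative only, not an official Creative Commons tool; the function and its argument names are invented for this talk.

    # A toy model of the Creative Commons option space; illustrative only.
    def license_name(attribution=True, commercial=True, derivatives="yes"):
        """Map a creator's choices onto a CC license code.
        derivatives may be "yes", "no", or "share-alike" (ND and SA
        are mutually exclusive, so they are treated as one choice)."""
        parts = ["BY"] if attribution else []
        if not commercial:
            parts.append("NC")            # no commercial use
        if derivatives == "no":
            parts.append("ND")            # no derivative works
        elif derivatives == "share-alike":
            parts.append("SA")            # GNU-style reciprocal sharing
        return "CC " + "-".join(parts) if parts else "no restrictions"

    # The combination described above for MIT's OpenCourseWare materials
    # (attribution, no commercial use, share-alike derivatives):
    print(license_name(commercial=False, derivatives="share-alike"))  # CC BY-NC-SA

The point of the toy is simply that the license is a menu, not a single doctrine: the creator, not the distributor, selects the terms.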

 

Though legal variants of open-source licenses do exist, at a technical level open systems require interoperability, and that in turn requires that everyone who designs or modifies the systems does so under the same set of rules. In the case of online courseware, content, and tools, the IMS Global Learning Consortium is providing some important common ground on which to coordinate a very broad range of specifications. One of the partners in that effort is another "open" entity, called the Open Knowledge Initiative, or OKI. That effort, financed by Mellon at MIT with Stanford as a principal partner and supported by a number of major universities, describes itself as "an open and extensible architecture that specifies how the components of an educational software environment communicate with each other and with other enterprise systems." The goal is to liberate universities from having to choose a single software solution for managing online instruction and/or online components of classroom instruction. The result would be greater portability of content, greater flexibility in choosing and assembling elements of a learning-management system, and a shift in the balance of power between the client (the university) and the software vendor, in favor of the client.

 

Universities--or open-source developers at large--could choose to produce and share their own modules for things like calendars, gradebooks, and so on. Commercial vendors could also continue to build and sell proprietary solutions that adhered to the architectural specification (and that, therefore, allowed users to unplug some of the vendor's modules and plug in some of their own, or some from another vendor). That speaks directly to the practice of monopolistic "bundling" that was at the heart of the antitrust case against Microsoft.
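To illustrate the idea--and only the idea, since the real OKI specifications are Java service interfaces, and the Gradebook contract below is invented for this talk--here is a minimal Python sketch of how a shared interface lets a university unplug a vendor's module and plug in its own:

    # Illustrative sketch only: a hypothetical Gradebook contract, not the
    # actual OKI API. Any module that honors it can be plugged in.
    from abc import ABC, abstractmethod

    class Gradebook(ABC):
        @abstractmethod
        def record_grade(self, student_id: str, assignment: str, score: float) -> None: ...
        @abstractmethod
        def grades_for(self, student_id: str) -> dict[str, float]: ...

    class VendorGradebook(Gradebook):
        """Stands in for a commercial implementation."""
        def __init__(self) -> None:
            self._store: dict[str, dict[str, float]] = {}
        def record_grade(self, student_id, assignment, score):
            self._store.setdefault(student_id, {})[assignment] = score
        def grades_for(self, student_id):
            return dict(self._store.get(student_id, {}))

    class HomegrownGradebook(VendorGradebook):
        """A university-built drop-in replacement: callers never notice
        the swap, because they depend only on the Gradebook interface."""

    def post_grades(gradebook: Gradebook) -> None:
        # Application code is written against the contract, not the vendor.
        gradebook.record_grade("s1001", "midterm", 88.5)

    post_grades(VendorGradebook())     # the vendor's module, plugged in...
    post_grades(HomegrownGradebook())  # ...or unplugged and replaced

The contract, not the vendor, is what the rest of the system depends on: that is unbundling in miniature.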

 

As with any standard, success will depend on whether both vendors and users buy into it. That is not yet a certainty with OKI, but in May 2002 Blackboard announced its intention to adopt the OKI architecture; in October 2002 OKI announced that it had joined in an informal consortium with other "leading organizations developing specifications for e-learning technology in higher education ...to coordinate strategy and conduct common activities." This past July, "Apple Educational Marketing and MacLearningEnvironments.org (MLE.org) have been working with HarvestRoad, an Australian software developer specializing in content management solutions for eLearning ...[to] build a Mac OS X application (SearchParty) that could search content in the HarvestRoad Hive, a content store," using OKI's Repository Open Service Interface Definition (http://www.okiproject.org/documents/CaseStudy-HarvestRoadAndMacLearningEnv.pdf).

 

While the OKI project aims at specifying an architecture for online learning systems, and MIT's OpenCourseWare is focused on content for such systems, another open-source project, Sakai, focuses on educational software tools. According to the Sakai Web site, the project hopes to "demonstrate the compelling economics of 'software code mobility' for higher education, and it will provide a clear road map for others to become part of an open-source community." Sakai began as a collaboration among Indiana University, MIT, the University of Michigan, and Stanford; "[t]o date, the Sakai Project has put out three major software releases (1.0, 1.5 and 2.0), developed an Educational Partner's Program which now has around 80 members around the world . . . and five commercial affiliates, organized three highly successful SEPP conferences, and successfully demonstrated a model for community source software development among colleges and universities."

 

Another partner in Sakai is the open-source project uPortal. A number of other universities (in the United States and abroad) and for-profit companies (Sun Microsystems, SCT, Interactive Business Solutions) are involved in developing uPortal. Once again, the Mellon Foundation is helping to support the project.

 

Portals can do more than integrate news and weather, or library and course information: they can also integrate the administrative computing functions of the university--student records, payroll and human resources, purchasing, and so on. Interestingly, but perhaps not surprisingly, one of the corporate sponsors of uPortal is SCT, a company whose interests could be threatened, or at least significantly reoriented, if uPortal achieves the success for which it seems destined. At the University of Illinois, SCT provides a "solution" called Banner, one of those ERP products mentioned above.

 

Over the past few years, universities have spent many hundreds of millions of dollars to acquire, customize, and make the transition to such systems, often with very mixed results. The university that now employs me and the one where I last worked are both in the throes of such a transition--probably too far in to get out, but probably wishing they could. Admittedly, it's a huge undertaking to retool an entire university's administrative-computing infrastructure and workflow, and it requires long-range planning and commitments. An institution makes those plans and commitments based on the best choices available at the time: several years ago, when decisions were being made at the Universities of Illinois and Virginia, there were no plausible open-source/open-standards ERP alternatives, so those universities bought into monolithic proprietary systems. Now, just a few years later, alternatives are beginning to come into view. It will be years before the current generation of university ERP adopters can switch to open-source alternatives, but their experience will certainly help to make the case for such alternatives as they emerge.

 

There are a number of other pressing IT challenges facing higher education as well, and at or near the top of the list are digital libraries (or, more generally, data repositories). Those could include data held in an institution's library (licensed or locally produced scholarly information), data held outside the library (by an office of management information, for example), and/or data published by a university press. The case for institutional repositories was laid out convincingly in an article by Clifford A. Lynch, executive director of the Coalition for Networked Information, published in the February 2003 newsletter of the Association of Research Libraries. Lynch argues that "an institutional repository is a recognition that the intellectual life and scholarship of our universities will increasingly be represented, documented, and shared in digital form, and that a primary responsibility of our universities is to exercise stewardship over these riches: both to make them available and to preserve them."

 

There are a number of noteworthy "open" initiatives in this area as well, with familiar home institutions and financial supporters. Four very different, possibly complementary, open-source frameworks for institutional repositories and/or digital libraries are MIT's DSpace (supported by Hewlett-Packard), the Cornell/Virginia FEDORA project (supported by the Mellon Foundation), EPrints (supported by the National Science Foundation and the United Kingdom's Joint Information Systems Committee, or JISC), and Greenstone (produced by the University of Waikato and developed and distributed in cooperation with UNESCO). Recently, JISC declared that it was

. . . investing heavily in an infrastructure to enable innovative research to take place, including interoperable repositories, preservation best practice and user-oriented services, and open access remains an important cornerstone of this infrastructure.

In fact, it went so far as to argue that

There is no substantive evidence that a mandatory requirement on researchers to deposit a copy of their final, peer-reviewed manuscript into an open access repository will impact negatively on journal publishers' business--indeed, such evidence as exists . . . suggests the opposite . . . . Furthermore, the evidence shows that the mandatory requirement is an essential component of an effective position on open access. (Professor Sir Ron Cooke, "JISC Response to RCUK Position Statement on Access to Research Outputs," August 2005. http://www.jisc.ac.uk/index.cfm?name=jisc_response_rcuk_open_access)

In the United States, it would still be considered a radical notion that a university or a funding agency might require faculty to submit research publications to an open-access institutional repository, but perhaps the British are the revolutionaries, this time around.

 

Beyond the individual repository, there is the problem of federated collections: searching across repositories has long been a dream of the digital-library world. The Open Archives Initiative (OAI, at http://www.openarchives.org/) is a project aimed at achieving that goal by developing and maintaining standards that facilitate the sharing of information. You can see a nice working example of sample searches across many such repositories on the Web site for the Perseus Project at Tufts University (http://www.perseus.tufts.edu/PR/oai.ann.html).
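The workhorse standard here is the OAI Protocol for Metadata Harvesting (OAI-PMH), in which a repository answers simple HTTP requests with XML metadata records that a federated service can aggregate and index. As a hedged illustration, here is a minimal Python harvester; the repository address below is hypothetical, and a real harvester would add error handling, date-range arguments, and polite rate limiting:

    # A minimal OAI-PMH harvester sketch. The endpoint is hypothetical; any
    # OAI-PMH 2.0 repository exposing Dublin Core should answer similarly.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    def harvest(base_url, metadata_prefix="oai_dc"):
        """Yield (identifier, title) for every record, following
        resumptionTokens until the repository reports no more."""
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
        while True:
            url = base_url + "?" + urllib.parse.urlencode(params)
            with urllib.request.urlopen(url) as response:
                tree = ET.parse(response)
            for record in tree.iter(OAI + "record"):
                identifier = record.findtext(".//" + OAI + "identifier")
                title = record.findtext(".//" + DC + "title")
                yield identifier, title
            # Large result sets are paged; the token continues the request.
            token = tree.findtext(".//" + OAI + "resumptionToken")
            if not token:
                break
            params = {"verb": "ListRecords", "resumptionToken": token}

    for identifier, title in harvest("https://repository.example.edu/oai"):
        print(identifier, title)

Because every conforming repository answers the same six verbs in the same XML vocabulary, a federated search service can be built without any bilateral arrangements between institutions--interoperability doing the work that contracts would otherwise have to do.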

 

The EPrints software mentioned above is the self-archiving component of a larger project on open access, supported by the Soros Foundation and marching under the banner of the Budapest Open Access Initiative, whose purpose is "to make research articles in all academic fields freely available on the Internet"--either by institutional self-archiving of articles that also appear in for-fee journals, or by authors publishing in open-access (free) journals. In many debates over a number of years now, Stevan Harnad and others have succeeded in shifting the basic assumptions, the ground truths, of the debate over access to the outcomes of university research. That debate has been attracting considerable attention around the world, both within and beyond the academy, and Stevan is to be congratulated for helping to bring this to pass. In the remainder of this talk, I'd like to discuss possible futures for open-access scholarly communication, some of which I feel certain Stevan will view as dystopian, at least in part.

 

Scholarly publishing is a system that includes content producers, end users, publishers, and libraries. Content producers and end users are, generally, the same people at different moments (though it would be a good thing if there were a more general audience for scholarship than there is). Publishers are of two types: non-profit (university presses and scholarly societies) and commercial (mostly science/technical/medical and reference publishers). Three economies are at work in this system: a prestige economy, primary for content producers, important but secondary for the other players; a cash economy, primary for publishers, not very important to content producers in most cases, and important but not actually primary for libraries; and a subsidy economy, primary for libraries, which are subsidized by universities as a public good, and more important to content producers than they generally know. It shouldn't come as a surprise that a system with three different economies at work inside it is difficult to operate successfully, but when it does work, it has a certain elegance: each party contributes from its own sense of mission, and each gets paid in its own currency. At present, though, there seems to be general agreement that the system of scholarly communication is not working--that it is broken, or breaking.

 

In thinking about how to fix it, I'd begin by proposing that "digital by default" is the future of scholarly communication: almost all scholarship is already born digital, no matter how it is eventually published. Moreover, a decade from now, I think it's quite reasonable to assume that computational methods will have penetrated the humanities and social sciences to the point that there will be many research projects that require electronic dissemination. A better system of scholarly communication, then, would begin from the premise that the form in which research results are conveyed is a question of appropriateness and convenience. In some cases, print is to be preferred as a matter of convenience, but in general, the digital medium offers the most convenience to end users, as well as offering the most expressivity to authors, and access to the largest audience. As an aside, on the question of audience, the simplest analysis of the "crisis in scholarly publishing" is that it's a problem of audience: you can't afford to physically manufacture anything--books, televisions, or widgets--in lots of 500 or 1000. On that subject, in a talk given at the 2003 annual meeting of the American Council of Learned Societies, I suggested that

we could enlarge the audience for humanities scholarship, not by dumbing it down, but by making it more readily available. Maybe if we did that, scholars would find an audience first, and a publisher second, instead of the other way around. And maybe in that world, the risk to publishers would be less, because the demand would already be demonstrated.

Second, since much of the discussion about the "crisis in scholarly publishing" is really about tenure and promotion, let's stipulate that tenure would be awarded on the basis of the quality and impact of a scholar's work, rather than on its quantity or its form. Let's further stipulate that the criterion of "impact" would include impact on readers outside of one's discipline--in other words, that an ability to convey the significance of specialized research to a general public would count for at least as much as citations resulting from internecine feuds. But tenure is not the whole story: if I were to offer a comprehensive description of the ideal system of scholarly communication, it would go like this:

In a better world, high quality, peer reviewed scholarship would be freely available soon after its creation; it would be digital by default, but optionally available in print for a price; it would be easy to find, and it would be available long after its creation, at a stable address, in a stable form.

 

Starting with this more general set of desiderata, let's work backwards from ends to means. Let's suppose that the most important characteristic of a future system of scholarly communication is that information should be available long after its creation, at a stable address, in a stable form. I would argue, strongly, that digital information isn't going to be easy to find, or available long after its creation at a stable address in a stable form, unless it is held by libraries--and yet libraries do not hold most of the digital information that we would consider important to scholarship. Some of it is out there in the wild, on the Web, not collected or preserved: consider the study reported in 2003 in the Washington Post, which found that "40 percent to 50 percent of the URLs referenced in articles in two computing journals were inaccessible within four years." In the area of journal literature, publishers own much of it, and libraries rent access to their holdings, though some libraries do ask for the right to archive the content they license. If you wonder why libraries might ask for this right, see the January 10, 2003 article in the Chronicle of Higher Education pointing out Elsevier's practice of silently deleting articles from its database.
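That kind of decay figure is easy to reproduce in outline: collect the URLs cited in a batch of articles and count how many still answer. A hedged sketch, with a hypothetical URL list standing in for the real citation data:

    # Sketch of a link-rot check like the study cited above; the URLs are
    # placeholders, and a real study would also classify the failure modes.
    import urllib.error
    import urllib.request

    def still_resolves(url, timeout=10.0):
        """True if the URL currently answers with a success status."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return 200 <= response.status < 400
        except (urllib.error.URLError, ValueError):
            return False

    cited_urls = ["https://example.org/paper1", "https://example.org/paper2"]
    dead = [u for u in cited_urls if not still_resolves(u)]
    print(f"{len(dead)} of {len(cited_urls)} cited URLs no longer resolve")

Note what such a check cannot tell you: a URL that still resolves may no longer serve the content that was cited, which is exactly why stable addresses require institutional stewardship, not just working servers.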

 

It's not just preservation that's at issue here, but also collection development: increasingly, commercial publishers of journal literature in science, medicine, and engineering aim to dictate collection policy to libraries, through bundling schemes with penalties for choosing not to buy into the bundle and for opting out of subscriptions to individual titles.

 

So, what if libraries actually collected the important digital information, rather than renting it? To do this, libraries would have to mount and maintain digital object repositories, which (with very few exceptions) they don't now do. Building and maintaining digital object repositories will cost money, and it will be new money. But I think it could be demonstrated that, if building in-house digital collections were actually the price of freedom from commercial publishing, the cost of building those collections would be less than the millions every major university now pays for subscriptions to commercial STM journals. I also think it could be demonstrated that there's a significant role for university presses in building these collections, and that university presses could thrive in a university economy that rewards the lowering of costs to the system of scholarly communication as a whole. More on both those points later: for now--just for a moment--let's assume that we can pay for this collection-building, and let's assume that libraries carry it out.

 

This still doesn't explain how the first part of the better world of scholarly communication comes about--the part where high-quality, peer-reviewed information is freely available soon after its creation, online and optionally in print. This is where the Budapest Open Access Initiative comes in. Its mission statement reads as follows:

An old tradition and a new technology have converged to make possible an unprecedented public good. The old tradition is the willingness of scientists and scholars to publish the fruits of their research in scholarly journals without payment, for the sake of inquiry and knowledge. The new technology is the internet. The public good they make possible is the world-wide electronic distribution of the peer-reviewed journal literature and completely free and unrestricted access to it by all scientists, scholars, teachers, students, and other curious minds. Removing access barriers to this literature will accelerate research, enrich education, share the learning of the rich with the poor and the poor with the rich, make this literature as useful as it can be, and lay the foundation for uniting humanity in a common intellectual conversation and quest for knowledge.

-- http://www.soros.org/openaccess/read.shtml

 

The Initiative goes on to "recommend two complementary strategies" for accomplishing this: "self-archiving," in which scholars "deposit their refereed journal articles in open electronic archives," and "open-access journals" which "use copyright and other tools to ensure permanent open access to all the articles they publish. Because price is a barrier to access, these new journals will not charge subscription or access fees, and will turn to other methods for covering their expenses." As to how these journals will pay for themselves, the Initiative offers some business plans and suggests that there are many alternative sources of funds for this purpose, including the foundations and governments that fund research, the universities and laboratories that employ researchers, endowments set up by discipline or institution, friends of the cause of open access, profits from the sale of add-ons to the basic texts, funds freed up by the demise or cancellation of journals charging traditional subscription or access fees, or even contributions from the researchers themselves.

 

I agree with what this initiative is trying to do, and I applaud Stevan for his dedication to the cause that it represents, and for the practical intervention of EPrints, free software that enables the self-archiving he recommends. And it's true, as he suggests, that even without this software the Internet has made it possible for scholars to self-publish, and for peer-reviewed journals to distribute their contents widely and quickly--in other words, to make high-quality, peer-reviewed information freely available soon after its creation. A journal I've been associated with for nearly fifteen years now has been doing just that, on terms compatible with the Budapest Open Access Initiative: Postmodern Culture is a peer-reviewed journal that distributes its current issue (and text-only versions of all back issues) free of charge, while also licensing the whole collection as part of Project Muse. It uses copyright to ensure access by leaving copyright with the author and asking only for a non-exclusive right to publish. That strategy allows authors to self-archive, and it also allows them to republish the material elsewhere, as they please. But I also know from the experience of PMC that none of the sources of funding enumerated in the Initiative proved sustainable. What has sustained the journal is its relationship with a publisher, and the steady editorial stipend that comes from Project Muse: it's not a lot of money, but it pays people who should be paid, for doing things that are important to the quality of the journal but that have little or no prestige credit attached to them. By the same token, on the library side, I'd also note that self-archiving and open-access journals, by themselves, do not guarantee "permanent open access." Only libraries can do that (and even then, "permanent" is stretching it).

 

Part of the problem with the current system is that authors are insulated from the pressures that are shaping their world: they don't pay directly for the costly commercial journals they use. In light of that, the Budapest Open Access Initiative is a step in the right direction, because it engages authors in the fray. It also appears to have struck a responsive chord internationally, as evidenced in the "Berlin Declaration" issued a year or so back by a number of German research organizations, at the conclusion of a conference on open access to knowledge in the sciences and the humanities at the Max Planck Institute.

 

Open Access is part of a better system, and its significance is that it addresses the content producers, who have had little incentive, hitherto, to change their publishing behavior. If we can make Open Access prestigious, or perhaps simply "cool," then an important battle has been won. That's a marketing problem, and it's one that universities, funding agencies, foundations, and other interested parties would be well advised to address. Faculty are fashion-conscious, despite what you might assume from looking at us: the fashions that matter are more political and ideological than sartorial, and Open Access is a fashion statement that faculty will embrace, if it is promoted.

 

The Budapest Open Access Initiative presents a solution focused on the author/user end of the problem--but that's not to say that some publishers and libraries haven't seen the possibilities here. The California Digital Library hosts an "eScholarship Repository," which archives papers produced by faculty from more than 100 departments and units across the University of California system. Oxford University Press and Oxford University Library Services are partnering on an Open Archives project called SHERPA (Securing a Hybrid Environment for Research Preservation and Access), which aims to "investigate the IPR, quality control and other key management issues associated with making the research literature freely available to the research community. It will also investigate technical questions, including interoperability between repositories and digital preservation of e-prints." It's encouraging to see library-press collaboration around the concept of open archives, because libraries and university presses represent the perspectives, expertise, and missions on which the open-archives movement, despite its strong appeal, is weakest.

 

A while back, I asked you to set aside for a moment the question of how libraries would address the requirement that they collect and preserve digital information, rather than just renting access to it. It's time now to return to that question, and in answering it, to bring publishers into the picture.

 

In a statement on "The Value of University Presses," commissioned by Bill Regier as president of the Association of American University Presses, a committee of university press publishers enumerated the things that university presses contribute to society, to scholarship, and to the university community. Self-publishing and self-archiving would moot many of the things on that list (for example, the claim to "make available to the broader public the full range and value of research generated by university faculty"), but even in such a world a number of these things would still need to be done, and would probably not be done by anyone other than a publisher--for example, adding value to scholarly work "through rigorous editorial development, professional copyediting and design," or committing resources "to long-term scholarly editions and multivolume research projects, assuring publication for works with completion dates far in the future." Some of the other things on this list should be done far more than they are done, and would be central to a world in which university presses and open access co-exist--for example, making "common cause with libraries and other cultural institutions to promote engagement with ideas and sustain a literate culture," or collaborating "with learned societies, scholarly associations, and librarians to explore how new technologies can benefit and advance scholarship." Here are some forms of common cause and collaboration that university presses (and, for that matter, scholarly societies) might take on, in an open-access world:

 

I'd submit that all of these are things worth doing, in at least some circumstances, and that many of them contribute directly to supporting authors or to lowering the cost of collecting and preserving digital content. As such, they might qualify university presses for more subsidy than they have been getting from universities lately, though even without that, I think many of these activities would produce enough value for libraries that they could be paid for in the cash economy in which publishers now largely operate.

 

A word on subsidies, while we're on that point: for obvious reasons, institutional subsidies work best when the public good they create is consumed locally, within the institution. This has been the case with libraries, and it has not been the case with presses. University presses don't publish local authors exclusively, or even in the main, and the good they produce by publishing is produced for a global, not a local, market. Still, it may be time for institutions to think more broadly about the system of scholarly communication as something cooperatively subsidized across localities.

 

Even if presses are subsidized to a greater extent, and even if they cooperate as suggested above with libraries and with authors, and even if they act in various ways to lower the cost of collecting digital content in libraries, there is still a significant new cost attached to developing those digital collections, and an even greater cost to maintaining them over time. Universities--by which I mean provosts--need to recognize that these collections are, in fact, the key that unlocks the problem of scholarly communication. If universities don't own the content they produce, if they don't actually collect it, hold it, and preserve it, then they'll be at the mercy of those who do. If universities do collect, preserve, and provide open access to the content they produce, then the entire balance of power shifts away from commercial publishing and toward university presses and university libraries. Bill Clinton used to say (those were the days), "it's the economy, stupid." He was right. We might remind ourselves, in the same spirit, "it's the content, stupid." We should be using subsidies to both libraries and presses, and perhaps other means as well, to encourage (even require!) substantive collaboration, with the goal of creating a system in which there are incentives to lower costs across the entire system, including authoring at one end and preservation at the other. University presses would have a vital role in this process, and university libraries would be the linchpin, because that's where the content would reside.

 

So, what do we have? Several interlocking elements that make up a better system of scholarly communication, provided that:

 

In fact, I think the last point is the most difficult. Libraries have been better at collaborating with their own kind than publishers have been, but libraries and publishers have not been very good at collaborating with each other. Authors, once they've been politicized on the topic of scholarly communication, may see both libraries and publishers as unnecessary, as some of the emphasis of the Budapest Initiative suggests. In short, there's a good deal of work to do just to make it clear to authors, university press publishers, and libraries that we're all on the same team, and that the enemy, while real, is elsewhere. The case will have to be made, publicly and repeatedly, as well as privately and pointedly, that collaboration, mutual respect, and close cooperation are absolutely necessary in order for the system of scholarly communication to survive and prosper. What makes that especially challenging, institution by institution, is that there will have to be (local) cash subsidies to encourage this change in behavior, but those subsidies will have to be justified, in part, with reference to a trans-institutional problem and its solution. In other words, just as each author wants to know he's not out there alone in choosing to publish in a new way, and just as deans want to know that their standards of excellence are the same ones other schools apply, provosts need to know that there are other universities making the same choices, providing the same subsidies, working toward the same ends.

 

The efforts to promote open access to scholarly research, to build interoperable digital libraries, and to create institutional repositories coincide with the broadening university revolt against the monopolistic bundling strategy not of Microsoft, this time, but of commercial publishers that require university libraries to make multiyear commitments to packages of titles. In the U.S., faculty members and libraries at Cornell University, Harvard, North Carolina State University, the University of California system, and the University of North Carolina at Chapel Hill have all rejected those tactics in recent years. University-press publishers have a golden opportunity here to distinguish themselves from commercial publishers and join with libraries and scholars to create a new model of scholarly communication based on unbundling, interoperability, and non-exclusivity--principles inherent in open-source software development, but applicable in other domains as well. These principles are compatible with commercial activity, as the success of Linux demonstrates: they are hostile only to monopolistic practices.

 

In technical contexts, unbundling and interoperability are important because they provide choice and flexibility. Instead of being locked into a single application or suite of applications from a single vendor, you can choose to mix different applications to achieve the best performance for your particular purposes, at the best price. In the context of scholarship and teaching, interoperability and unbundling are important because they are preconditions for survival and self-determination, liberating the University from IP monopolies and IT monocultures by emphasizing long-term sustainability of projects and by adopting and promoting the open-source ethos of shared goals, shared work, and shared results.

 

Over the coming decade we're certain to see this new mode of production locked in mortal combat with older methods and the legal and ideological commitments that they entail--and it's critical, at this juncture, that universities should come down on the side of non-exclusivity, of open access to ideas--regardless of whether those ideas take the form of software, scholarship, or instruction.