Public Networks, Vernacular Computing

The Second Biennial Wisbey Lecture

Wednesday, March 23, 2005

John Unsworth

University of Illinois, Urbana-Champaign

17:30, Room 3B20, Strand Building, Strand Campus

Centre for Computing in the Humanities

King’s College, London -- University of London

“When forming, utopia's hairy”

(anonymous, anagram for “information superhighway,” from Elon/Pew “Imagining the Internet, the Early 1990s Predictions Database,” on the Web (3/19/2005) at http://www.elon.edu/predictions/20infohighwaypredictions.aspx)

First of all, I want to thank King’s College and Vodafone for making this visit possible, and for sponsoring the Wisbey lecture.  It’s an honor to deliver that lecture, named for one of the pioneers of humanities computing, Roy Wisbey, and it is a pleasure, in doing so, to have the opportunity to see so many friends and colleagues from the Centre for Computing in the Humanities.  

A word before I begin, concerning my use of the term “vernacular.”  This adjective usually refers to something that uses “the native or indigenous language of a country or district.”  And you do sometimes see the term, used this way and paired with computing, as for example in “Global Culture, Local Culture, and Vernacular Computing: The Excluded 95% in South Asia,” a panel presented by Kenneth Keniston (MIT) and Pat Hall (Open University) in Sydney, in 1998 (in C. Ess and F. Sudweeks, eds., Proceedings of Cultural Attitudes Towards Communication and Technology ’98, University of Sydney, Australia, 329-331).   But my topic here today is not computing in local languages other than English, but rather a kind of locally situated computing in everyday life—so my use of the term “vernacular” is closer to its use in the phrase “vernacular architecture,” which refers to “architecture concerned with ordinary domestic and functional buildings rather than the essentially monumental” (OED).   My topic, then, is “ordinary domestic and functional” computing, rather than computing as a specialized activity involving monumental resources.   There is, of course, a potential overlap between the two senses of “vernacular computing,” since locally situated computing in everyday life will necessarily involve computing in the indigenous language, and even in the United States and the United Kingdom some vernacular computing occurs in districts where English is not the vernacular language.  But my main interest today is not in the language issues as such, but in the impact of a new kind of “everyday computing” on the future of higher education, especially as it is organized in colleges and universities like King’s College or the University of Illinois, Urbana-Champaign, and especially in the disciplines of the humanities.  

One other introductory caveat: I recognize that there are significant differences between the UK and US in terms of penetration of the internet/web into daily life, in terms of the economics of network access and the pricing of various services (DSL, dialup, cable modems, cellphones, etc.).  There are also important differences in terms of the organization of higher education in the two countries.  The combination of these differences may in some cases be offsetting, and in others multiplicative: I accept in advance, and will be interested to discuss, the variations that these differences are likely to produce with respect to the impact of new communications technologies on higher education in each country.  That such extra-technological factors can have profound determining influence on the adoption, diffusion, and use of new technologies is evident from past experience, such as that summed up in Ithiel de Sola Pool’s 1983 book Forecasting the Telephone: A Retrospective Technology Assessment.  So, that’s just to say at the outset that (in the words of the wisest of all yogis, Yogi Berra)  “our similarities are different.”  

In the 2003 Wisbey lecture (“Why Technology Matters: the Humanities in the 21st Century”), Stan Katz “reviewed some fairly recent [1999] library scholarship on use of computing and the Internet” and reported that “several knowledgeable American librarians, surveying usage by their university faculty and graduate student patrons, find that academic humanists are not much interested in the broader digital humanities or even in the use of routine online scholarly resources” (final draft of talk, 10-15-2003, shared by the author).   I’m interested in comparing that state of affairs in academia with what’s happening in the rest of the world, and particularly with what’s been happening since Stan reported to you in 2003.  

Looking back on the 1999 research from 2003, Stan found it hard to imagine that there were very many academics in the humanities who still resisted the use of online resources like Google, and I think by 2005 that battle has been won, albeit mostly on grounds of convenience rather than conviction.   Still, it’s a little bit sad to be claiming this as a victory, if you consider the fact that the internet and the World-Wide Web are the inventions of universities and scientific research centers, and they have been with us for more than 25 years (in the case of the internet) and more than 10 (in the case of the Web).  The Web, in particular, has led to an extremely rapid and unforeseen uptake of internet-based communication by the general public, in the space of a few years.   But a decade later, the force of innovation seems to be coming from outside, rather than inside, the university.

Some of that innovation is technical in nature, some involves standards, but much of it has to do with new forms of communication, new genres, new uses of the technology.  These are areas in which the humanities, in particular, should have a great deal to contribute, not only in terms of critical or historical perspective, but in the direct engagement and exploration of the technologies of communication.  And yet our humanities programs and departments, for the most part, remain blissfully—and sometimes willfully—ignorant of the changed world around them.  

In the late 20th century, experimental networks emerged from the university and were, at first gingerly, picked up by the general public.  The most interesting applications for these networks came out of the university world, at this stage: the Ethernet protocol was developed in Bob Metcalfe’s (initially unsuccessful) Harvard dissertation (1973); the first MUD (multi-user dungeon) was developed at the University of Essex (1979); DNS—name-server technology—was developed at the University of Wisconsin (1983); Archie (anyone remember Archie? An early system for listing software available on file-servers) was developed at McGill University (1990); Gopher (1991) came out of the University of Minnesota (and is named after their athletics mascot); Veronica (a search tool for gopherspace) came out of the University of Nevada (1992); Mosaic, the first graphical web browser and the mother of Netscape, IE, and all other browsers that we use today, was released on April 22nd, 1993 from NCSA, at the University of Illinois, Urbana-Champaign.  In the year following, Web traffic grew at an annual rate of 341,634% (all facts cf. Hobbes’ Internet Timeline v8.0, http://www.zakon.org/robert/internet/timeline/).

Fast-forward to 2005:  The networks are thoroughly public in nature, and they are now thoroughly naturalized by the public.   According to the Pew Internet & American Life Project, more than 60% of Americans are online:  

 On a typical day at the end of 2004, some 70 million American adults logged onto the internet to use email, get news, access government information, check out health and medical information, participate in auctions, book travel reservations, research their genealogy, gamble, seek out romantic partners and engage in countless other activities. That represents a 37% increase from the 52 million adults who were online on an average day in 2000 when the Pew Internet & American Life Project began its study of online life. . . . The Web has become the “new normal” in the American way of life; those who don’t go online constitute an ever-shrinking minority.

Speaking of minorities, though, we are accustomed to thinking that access to the internet is differentiated by race, class, and gender, and so it has been:

[G]aps in internet usage still persist along multiple demographic lines. These include age (younger Americans are much more likely to be online than senior citizens), socio-economic status (richer and better educated Americans are more likely to use the internet than those with less income and education), disability status (only 38% of those with disabilities use the internet), community type (rural Americans are less likely to be online than suburban and urban Americans), and race and ethnicity (blacks are less likely to use the internet than whites).

But according to the Pew study, these truths are not trends:

As the online population has grown rapidly, its composition has changed rapidly. At the infant stage, the Internet’s user population was dominated by young, white men who had high incomes and plenty of education.  As it passed into its childhood years in 1999 and 2000, the population went mainstream; women reached parity and then overtook men online, lots more minority families joined the party, and more people with modest levels of income and education came online.

Broadband is the leading edge of internet adoption in everyday life—so much so, in fact, that (at least in the United States) traditional phone companies are looking to DSL to save their land-line business, which otherwise would be outmoded by the consumer’s preference for cellphones.   And unlike local land-line phone service, broadband is a growth industry.  It’s also transformative, according to the Pew study—once people experience it, they require it:

[m]ore than half of Americans who go online now have access to always-on connections at home or work, and they are different kinds of users than those with dial-up connections. They spend more time online. They do more online activities, especially those that exploit bigger information “pipelines,” such as accessing streaming video. They are much more likely to create content and share it with the rest of the online population. And they report greater levels of satisfaction with the role of the internet in their lives.

In fact, according to the Pew report, the national average for broadband at home was 13% in 2002 and nearly doubled to 25% in 2004.  During that same period, the percentage of those with broadband at home went from 38% to 55% among those with incomes over $100K, while for those with incomes under $20K it more than doubled, from 4% ($10-20K) and 5% (under $10K) to 10 and 11 percent, respectively, slightly beating the national average rate of increase.  In other words, in proportional terms, broadband adoption is now moving faster at the low end of the income scale than at the top.  
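
Since the claim turns on proportional rather than absolute growth, it may be worth making the arithmetic explicit.  Here is a minimal sketch in Python, using only the figures just cited (the group labels are mine):

```python
# Growth multiples implied by the Pew home-broadband figures cited above.
# The pairing of 4% -> 10% ($10-20K) and 5% -> 11% (under $10K) follows
# the text's "respectively."
home_broadband = {
    "national average":  (13, 25),   # 2002 -> 2004, percent with broadband at home
    "income over $100K": (38, 55),
    "income $10-20K":    (4, 10),
    "income under $10K": (5, 11),
}

for group, (pct_2002, pct_2004) in home_broadband.items():
    print(f"{group}: {pct_2002}% -> {pct_2004}% (x{pct_2004 / pct_2002:.2f})")
```

The multiples come out to roughly 1.9 (national), 1.4 (over $100K), and 2.5 and 2.2 at the low end of the scale: smaller absolute numbers, but faster proportional growth.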

And what do all these people do online?  Pew says:

[E]mail is still the killer app. It is the No. 1 activity and time consumer for the vast majority of internet users. Next comes information searching, then entertainment, then e-commerce. So even as internet use has grown exponentially, the hierarchy of metaphors that describe it has remained constant: The internet is most of all a mail pigeon, then a library, then an amusement park, then a shopping center.

One of the most interesting findings of the Pew study, for me, is the net effect of being online, as reported by the individuals surveyed:

People use email to deepen their connection to the people they like and love and increase the volume of communication they have with them. Email users, especially women, feel they are working on relationships and tending to their social networks as they exchange email. . . . 84% of internet users, or close to 100 million people, belong to groups that have an online presence. More than half have joined those groups since getting internet access; those who were group members before getting access say their use of the internet has bound them closer to the group. Members of online groups also say the internet increases the chances that they will interact with people outside their social class, racial group or generational cohort.   (http://www.pewinternet.org/PPF/r/148/report_display.asp)

In my own (family) experience, children growing up now value the internet more, even, than their cell phones (which are, in many cases, just handy input devices for networked communication).  In fact, these children already live in a world far different from the one we, born in the middle of the 20th century, grew up in.  It’s a world with a single interface—and that’s the web.  Shopping?  Amazon.  TV?  Red vs. Blue.  Newspapers? Delivered by RSS.  Phones?  IM, or texting.   Digital cameras.  Digital video.  Digital music.   And the world these children now inhabit will change more rapidly in the next five decades than ours did in the last five—far more rapidly.  Keyboards and monitors will disappear and will be replaced with voice recognition and heads-up displays of various sorts, video walls, video scrolls, and the like.   I expect my children to be the last generation who know how to type, even if only with their thumbs.  

So, does anyone believe that the university is leading the way, at this point?  Can we have a show of hands?  Perhaps in deep technical arcana, but not, by any means, in discovering and disseminating new uses for the network.  And are we academicians present in any sense on the same channels, in the same media, in the same world of ideas, practices, and cultural phenomena that our students inhabit?  Most of us, not at all.  Meanwhile, the train on which the rest of culture is riding is about to leave the station.  

Fifteen years ago, the challenge before us was to imagine how new technology might provide a new platform for the practice of scholarship in the humanities, but today our challenge is the reverse.  It is no longer about opening the university and inviting the public in: it's about getting out where they already live, and meeting the public in the information commons, on the same terms that everyone else does.  

In fact, it's almost too late for us.  We will find that hard to believe, ensconced (as we all are) in solid-seeming residential universities, with long histories and the expectation of a long future—but older institutions on more solid foundations have been swept away or radically transformed in cultural upheavals of the past.  In spite of the inertia of these institutions, which we all know so well, the forces of change outside the institution have much greater momentum, and all of this—the practical furniture of our daily academic lives—could easily be gone, or changed beyond recognition, in a generation.  

So, let’s consider some of the modalities of vernacular computing, and think about how they are affecting the rest of the culture, and how they might affect higher education.  

For starters, think of the cellphone and WiFi.  Each has had a profound effect on how we live our daily lives—the cellphone more broadly so, at present, but WiFi is increasingly taken for granted, to the point where one finds it advertised at truck stops in the United States—which means that long-distance truckers are carrying laptops and doing networked computing in large enough numbers to make WiFi a drawing card.   We take it for granted, now, in everyday life, that we can call one another at any time, and not just when we’re sitting by the phone (“sitting by the phone” is a concept that will be difficult to explain to our grandchildren); we take it for granted that we can pick up our laptops and take them into the next room or the coffee shop across the street or wherever, without losing touch with the network.  That mobility means we can be where we want to be, or need to be, without being out of touch with family, with work, or with friends.  

Ten years ago, it was not uncommon to hear predictions like that of Gerald Celente, whose contribution to the Pew Internet Predictions project was this bon mot:

The computer will increasingly replace the commuter.

http://www.elon.edu/predictions/25briefbitingprediction.aspx

Today, we probably all know someone who telecommutes at least part of the time, and some of us teach regularly at a distance, and whereas we once “surfed” the web (conjuring images of leisure and fun in the sun), we now increasingly “serf” the web (think servitude, instead of surf boards).  For example, a week from today, while I’m still here in London, I’ll be teaching a graduate seminar in digital humanities, in real time, online, to students in our online masters program (called LEEP).  Many of my students are in the Chicago area—just a couple of hours away from campus—though one is in Germany, and others are scattered around the United States.  The class meets once a semester for a day, face-to-face; the rest of the time we conduct real-time online classes once a week for a couple of hours (no video—just streaming audio from the lecturer, and text chat back and forth among students and between student and lecturer).   Half of our graduate students are enrolled in the online program, and it has the highest retention rate (95%), highest sense of belonging, and highest morale of any of our programs.   Students find this delivery option compelling because they have jobs, families, and lives that they cannot or will not leave in order to undertake a couple of years of residential education.  

As we enter the 21st century, lifespans are lengthening, people are continuing to work into what once would have been retirement years, and they routinely have more than one career in a lifetime—sometimes three, four, or five.  In a world that’s characterized by multiple sequential careers, individuals will need to retrain and re-educate themselves several times in their adult lives, and they’ll need to do it while working, maintaining a family, and so on.  Will this kill off residential undergraduate education?  No, but as that becomes more and more expensive and less and less representative of education as a whole, it will probably decrease in importance, except perhaps as a way of establishing new social networks or reproducing old ones in new generations.  Will online education be solitary, by comparison to the residential experience?  I expect, on the contrary, that it will be characterized increasingly by real-time collaborative work, and it will have a greater intensity than “real” classrooms.  And while we tend to think of information technology as replacing or actively militating against social interaction, in fact I think we can already see that in many ways the mobility that IT already offers us has important benefits on the social side, not least of which is the ability to choose your place of residence based on criteria other than the location of work or school.  

    One of the truisms about technology is that as it matures, it tends to disappear from view, simply becoming part of the fabric of everyday life.   With information technology, this is borne out by what’s now called “embedded computing.” As the home page for the Journal of Embedded Computing tells us,

In today's world, embedded computers are everywhere: automobiles, airplanes, home appliances, medical devices, cell phones, network processors, handhelds, digital audio/video, imaging, game consoles, graphics and other system-on-a-chip (SoCs). Embedded computers, even sophisticated ones, have been used in academic and commercial products and systems, for over twenty years.

So what difference will this make to us?   In a strictly practical sense, to begin with, it will make the same difference in higher education that it does in general: it will offer us a tradeoff between privacy and convenience.   As one wag put it, in the 1994 Elon/Pew survey of internet predictions,

In the world of the future, people will use low-cost Radio Shack equipment to spy on themselves to find out who they are.

-    Eric Hughes, 1992

http://www.elon.edu/predictions/25briefbitingprediction.aspx

In 2005, though, it’s not cheap Radio Shack equipment, but embedded Radio Frequency Identification chips, now about the size of a flake of pepper, that allow surveillance and identification.   In an online publication called “The Register” (motto: “Biting the Hand that Feeds IT”), in 2003, Scott Granneman wrote:

Right now, you can buy a hammer, a pair of jeans, or a razor blade with anonymity. With RFID tags, that may be a thing of the past. Some manufacturers are planning to tag just the packaging, but others will also tag their products. There is no law requiring a label indicating that an RFID chip is in a product. Once you buy your RFID-tagged jeans at The Gap with RFID-tagged money, walk out of the store wearing RFID-tagged shoes, and get into your car with its RFID-tagged tires, you could be tracked anywhere you travel. Bar codes are usually scanned at the store, but not after purchase. But RFID transponders are, in many cases, forever part of the product, and designed to respond when they receive a signal.  Imagine everything you own is ‘numbered, identified, catalogued, and tracked.’  Anonymity and privacy?  Gone in a hailstorm of invisible communication, betrayed by your very property.

Scott Granneman, “RFID Chips Are Here,” The Register, June 27, 2003.

http://www.theregister.co.uk/2003/06/27/rfid_chips_are_here/

For those in the humanities (and the social sciences) who worry about issues of identity, issues of privacy, issues of surveillance, there is much to be discussed and examined in the advent of RFID, and it is important that these humanists and social scientists should be involved in that discussion.  For those who work on the organization of information—for example, in library and information science—RFID represents an enormous onslaught of data: really, it is the administrative impulse of empire taken to its logical conclusion, with every consumer item (and every consumer) uniquely identified, tracked through interactions with other goods and services, and catalogued in some mad Borgesian database of everything everywhere.    
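
To make the scale of that onslaught concrete, consider the kind of data involved: a log of read events, each pairing a tag with a reader’s location and a timestamp.  A minimal sketch (every tag, location, and event below is invented for illustration):

```python
# Reconstructing an object's path from an RFID read-event log.
# All tag IDs, reader locations, and timestamps here are hypothetical.
from collections import defaultdict
from datetime import datetime

read_events = [
    ("tag-4711", "shop exit",     datetime(2005, 3, 23, 10, 2)),
    ("tag-4711", "car park",      datetime(2005, 3, 23, 10, 9)),
    ("tag-0815", "shop exit",     datetime(2005, 3, 23, 10, 5)),
    ("tag-4711", "motorway toll", datetime(2005, 3, 23, 10, 40)),
]

# Group reads by tag and sort by time: each tagged object's itinerary.
paths = defaultdict(list)
for tag, location, ts in read_events:
    paths[tag].append((ts, location))

for tag, stops in paths.items():
    route = " -> ".join(loc for _, loc in sorted(stops))
    print(f"{tag}: {route}")
```

A four-line log already yields an itinerary; multiply that by every tagged object in every shop, library, and household, and both the cataloguing problem and the privacy problem come into focus.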

Nonetheless, there are potentially some positive educational implications for RFID, embedded computing, and the rest of this brave new world: tracking unique objects through systems, either by checkpointing or by using RFID in combination with GPS, could be the basis for many interesting educational scenarios.  An obvious one is tracking students as they explore museum exhibits (Hsi, S., Semper, R., Brunette, W., Rea, A., and Borriello, G., “eXspot: A Wireless RFID Transceiver for Recording and Extending Museum Visits,” under review, Ubicomp 2004).   “Mediated immersive” experiences have also been developed, in scenarios like this one, from a class actually taught at MIT:

Students role-play environmental scientists investigating a rash of health concerns on the MIT campus linked to the release of toxins in the water supply. Working in teams, players attempt to identify the contaminant, chart its path through the environment, and devise possible plans for remediation. As participants physically move about campus, their handheld devices respond to their location, allowing them to collect simulated field data from the water and soil, interview virtual characters, and perform desktop research using mini-webs of data. At the end of the exercise, teams compile their data using peer-to-peer communication, and synthesize their findings.

Described in “Planning for ‘Neomillennial’ Learning Styles: Implications for Investments in Technology and Faculty,” Chris Dede, Harvard Graduate School of Education, 2004.  

http://www.gse.harvard.edu/~dedech/DedeNeoMillennial.pdf

See http://education.mit.edu/ar/ for more information on this and other “augmented reality” learning games.

It’s not too difficult, I think, to translate this “environmental detectives” scenario into a literary or historical context: think of a guided exploration of London in connection with Defoe’s Journal of the Plague Year, perhaps delivering period newspapers, ballads, and broadsides, paintings and drawings, all keyed to the students’ physical location at any given moment.   Or think of students exploring Civil War battlefields in Virginia or Maryland, reviewing accounts of the battles as well as the military records and perhaps even the diaries of those who took part in the battles.  
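
The technical core of such a guided exploration is surprisingly small: given the student’s position, find the materials keyed to nearby places.  A minimal sketch, assuming GPS-equipped handhelds and a hand-built gazetteer of geotagged materials (all coordinates and items below are invented):

```python
# Location-keyed content delivery for a Plague Year walking tour (hypothetical data).
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Period materials keyed to locations in the City of London (invented entries).
items = [
    (51.5138, -0.0984, "broadside: Bills of Mortality, St. Paul's parish"),
    (51.5115, -0.0919, "ballad: 'London's Lamentation'"),
    (51.5186, -0.0813, "engraving: plague pits at Moorfields"),
]

def nearby(lat, lon, radius_m=300):
    """Materials within radius_m of the student's current position."""
    return [title for ilat, ilon, title in items
            if haversine_m(lat, lon, ilat, ilon) <= radius_m]

print(nearby(51.5136, -0.0981))  # standing near St. Paul's
```

As the student walks, the handheld re-runs the query and the relevant broadsides and ballads surface of their own accord.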

And although at present we think the Web is somehow self-contained—something we use or visit, not something we inhabit—as Bill Mitchell has pointed out,

Networks at ... different levels will all have to link up somehow; the body net will be connected to the building net, the building net to the community net, and the community net to the global net. From gesture sensors worn on our bodies to the worldwide infrastructure of communications satellites and long-distance fiber, the elements of the bitsphere will finally come together to form one densely interwoven system within which the knee bone is connected to the I-bahn.

–    William Mitchell, 1994

From http://www.elon.edu/predictions/20edgyincisivepredictions.aspx

This sort of prediction of what’s now called “ubiquitous computing” is far more plausible today than when it was originally made, more than ten years ago.  It depends on technologies all of which are now in production, and though not all of those technologies are ubiquitous at present, their diffusion is only a matter of time.   The transition from the kinds of mobility that we now take for granted (WiFi, cellphones) and the kind of network speeds that we are increasingly coming to expect (broadband), to the kind of ubiquitous and pervasive (and even invasive) computing that we’ve been discussing here is just a step, and at this point not even a very big one.  The consequences of that step, though, will be profound, and I think particularly so for university education, which will have difficulty reconciling its place-bound, classroom-based, lecture-oriented instruction with the mobile, situated, and need-driven information behavior of its future students.  If this seems a far-fetched thing to worry about, consider how long we’ve had cellphones, and how rapidly they have become a fixture of everyone’s everyday life.  Then ask whether you would like to be in the business of selling corded phones on land-lines.  Roughly speaking, that’s the situation education is now in.  

    In order to understand its situation a little better, we might compare education to other sectors of the culture, and look at how each of these sectors has dealt with the rising internet- and computer-literacy of the average citizen in everyday life.  

Commerce seems obviously to have responded most quickly and most enthusiastically: the web is widely used for price-comparison and product research in advance of purchasing (by 78% of those with internet access, according to the Pew Internet project), and 67% of those with internet access purchase online.  The most successful applications and services of the web, for the general public, have been things like Ebay, the online flea-market, Amazon.com, the online department store, and self-service travel sites like Travelocity.   In fact, commerce is increasingly globalized to the level of the individual consumer.  Services to the body are perhaps the exception and a long-term holdout: I still don’t buy haircuts on the internet, or restaurant meals.   But Google (in the US) now has a new top-level feature (“Local”) which will give you listings of places near you where you might go to get a haircut.  

    Government has also responded fairly quickly to the changing connectedness and literacies of citizens: here again, we see more self-service and more transparency, as well—at least for those who are connected.   Government services like renewing a driver’s license are increasingly available online, and no longer require a visit to a government office.  The US Post Office itself now provides to online customers almost all of the services you can get by visiting the Post Office in person.  In the related area of political campaigns, the assertion of agency and authorship by voters has made some remarkable inroads into party politics, for example in the most recent US elections, where bloggers and internet communities had a significant effect in the primaries.   In fact, nearly half of those with internet access use that access to find political information; nearly three-quarters use it to find news; and 66% use it to find information on government web sites.  

Healthcare is an interesting sector in comparison to education, more like it in some ways than commerce, where the consumer is in control of the transaction, or government, where democratic institutions are tasked with disseminating their information to citizens as broadly and effectively as possible.  In healthcare, as in education, the client is not in control of the transaction, for the most part—and perhaps not coincidentally, in these sectors the results of pervasive internet access are more mixed, so far.  On the one hand, the web empowers patients with respect to research on their own conditions, so that patients now come to consultations having more recent and specific research in hand than do their physicians.  Internet access also has begun to give patients more oversight of their own records, online.   On the other hand, most doctors still do not make themselves available by email, self-service only extends to things like WebMD and over-the-counter remedies, and telemedicine is still mostly at the experimental stage, though I note this news item from about ten days ago:

Microsoft, Intel Partner in Telemedicine Venture

Microsoft Corp. and Intel Corp. have partnered to create an E-Health Integration Platform that currently is set to cover Germany, Austria, and Switzerland, with the companies looking to expand across Europe and possibly beyond. The companies plan to create several product and service bundles for healthcare providers. They currently have 20 projects running with several healthcare providers, including the biggest telemedicine project in Germany . . . with hospital group Asklepios Kliniken GmbH. The objective is to electronically connect about 1,000 doctors and specialists to the group's hospitals. The system will support capabilities such as remote diagnoses, as well as more basic features such as online billing. . . .

Another project is for a "medical dashboard" that will make all of a hospital's applications accessible through one user interface. With help from partners, Microsoft and Intel are delivering the first dashboard to Lahn Dill Kliniken GmbH, a Wetzlar, Germany-based operator of three hospitals. . . .  This project was begun in the third quarter of 2004. The medical dashboard uses Microsoft software including Windows Server 2003, SQL Server, SharePoint Portal Server, Windows XP and Office 2003. The software runs on servers equipped with Intel processors and other hardware including Tablet PCs, according to Microsoft and Intel.

(Source: ITworld.com, March 11, 2005)

If Healthcare is responding more slowly than Government or Commerce, Education is responding even more slowly than Healthcare, I would argue.   We have been talking about things like online and self-guided education for decades now, and even though informal, task-specific information-seeking is now the norm in daily life, our educational institutions seem impervious to these changes.  As one of the respondents to the 1994 Pew predictions survey said,

We should have learning centers, neighborhood electronic cottages ... [but] it would be easier to get the Pope to become a Buddhist than to get the schools to change.

Ed Lyell, 1993

From http://www.elon.edu/predictions/25briefbitingprediction.aspx

Why is education one of the more resistant sectors of the culture, when it comes to absorbing and playing out the implications of vernacular computing?  Arguably, part of the reason is that education exists in order to reproduce culture (as Pierre Bourdieu famously argued).  Perhaps another part of the reason is that education is underfunded, and it has a significant installed base of residential infrastructure.  Beyond that, the humanities in particular are backward-looking, in that they are historical in nature, and they are conservative, in the sense that they seek to preserve and carry forward the cultural record.  

So, should we regard this resistance as a bad thing?  Certainly not in all respects: critical functions require some resistance, for example.   But it is also important to engage with the new, if you are going to effectively produce, analyze, or even object to it.   I know that some of my academic colleagues will argue that it is the responsibility of the humanities to respond to values other than relevance in order to preserve an understanding of the beauty that is not fashionable, a wisdom that is not received, and truths that have been forgotten.  But it’s a false syllogism to argue that we cannot do these things and also adapt to new circumstances, teach in new ways, or bring new technologies into focus in our research and our writing.   In fact, I would argue quite the reverse.  

During the past year, I have been chair of a national commission of the American Council of Learned Societies, a body whose members are scholarly societies like the Modern Language Association, the American Historical Association, and so on.  This commission is focused on the technical, human, institutional, and policy infrastructure needed to support the use of information technology in research and teaching in the humanities and social sciences: the short term for all that is “cyberinfrastructure.”   In the course of the Commission’s public meetings in Washington, DC, New York City, Chicago, Los Angeles, Berkeley, and Baltimore, Commission members heard testimony from scholars, librarians, museum directors, social scientists, representatives of government and private funding agencies, and many other kinds of people.  One of the things that emerged from these discussions is that the cultural record is currently fragmented over more or less arbitrary institutional boundaries—for example, the relevant materials for understanding one artist will be held in a dozen different museums, twenty libraries, and ten archives.   The digitization of the cultural record, though, provides the opportunity to reunite that record, connecting its disparate parts and making the resulting whole available to the interested individual, over the network.  

Taking these ideas a step further, in our final meeting Bernie Frischer (a classicist and my successor as director of the Institute for Advanced Technology in the Humanities at the University of Virginia) offered us a true grand challenge problem, one that would take more computing power than now exists on the planet, and would require intensive cooperation among scholars across all the disciplines of the humanities and the social sciences.  Like most grand challenges, this one can be simply stated:  simulate all of human history, for the entire planet.   We can certainly do pieces of this already, and we can see one kind of example in Bernie’s own Cultural Virtual Reality Lab, which models historical buildings and sites from the classical era.  We see a different kind of example in Ed Ayers’ Valley of the Shadow, which collects all available documentary records of all the individuals living in two towns during a thirty-year period before, during, and after the American Civil War.   We see yet another example in a project like the Blake Archive, which reunites Blake’s oeuvre in a virtual edition and exhibition drawn from more than a dozen different libraries, museums, and private collections.   One of the grandest examples to date is the Perseus Project, which presents digitized representations of classical-era texts and art works, and in the past few years has begun to present some of the renaissance texts that reprocess that classical heritage.   Connect the textual and art-historical record of classical civilization in Perseus with the modeling of historical buildings and sites in the Cultural Virtual Reality Lab, and you start to see the possibilities for a unified cultural record.  

Some of you will know ECAI, the Electronic Cultural Atlas Initiative, which began at Berkeley, and which “uses time and space to enhance understanding and preservation of human culture.”  We heard from Michael Buckland, one of the distinguished faculty members involved in the project, during the ACLS Commission meeting in Berkeley.   ECAI is a fascinating and very practical attempt “to make virtual collections of scholarly data from around the globe accessible through a common interface;” doing this, in turn, “requires a means for making data interoperable across formats, disciplines, institutions, and technical paradigms.”  Therefore, “some areas where ECAI is currently defining research objectives and pursuing research” include:

- International, distributed multi-lingual gazetteers

- GIS in the humanities

  - The importance of the time element

  - The need for representation of fuzzy boundaries and incomplete information (a sketch of this follows the list)

- Visualization

  - Data-driven two- and three-dimensional representation of cultural information

- Multi-lingual data access and representation

- Data management of large collections of data in multiple formats

- Metadata for cross-collection access to data, not just discovery

http://ecai.org/tech/researchgoals.html
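
The “fuzzy boundaries and incomplete information” objective is worth pausing on, since humanities data are full of dates known only approximately.  One common way to represent such uncertainty (my illustration, not ECAI’s actual data model) is a span of years whose endpoints are each known only to within a range:

```python
# A span of years whose start and end are each known only approximately.
# An illustrative sketch, not ECAI's data model.
from dataclasses import dataclass

@dataclass
class FuzzyYearSpan:
    earliest_start: int  # the span cannot have begun before this year
    latest_start: int    # ...and must have begun by this year
    earliest_end: int    # the span cannot have ended before this year
    latest_end: int      # ...and must have ended by this year

    def certainly_includes(self, year: int) -> bool:
        """True if the span definitely covers the given year."""
        return self.latest_start <= year <= self.earliest_end

    def possibly_includes(self, year: int) -> bool:
        """True if the span could cover the given year."""
        return self.earliest_start <= year <= self.latest_end

plague = FuzzyYearSpan(1664, 1665, 1665, 1666)  # roughly, the Great Plague of London
print(plague.certainly_includes(1665))  # True
print(plague.possibly_includes(1664))   # True: it may have begun in 1664
print(plague.certainly_includes(1664))  # False
```

A query interface can then distinguish “records certainly from 1665” from “records possibly from 1665,” which is exactly the distinction a historical gazetteer needs to preserve.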

One of ECAI’s affiliated projects is called The Great Britain Historical Geographical Information System, based at the University of Portsmouth, and described as follows:

The Great Britain Historical Geographical Information System is a unique digital collection of information about Britain's localities as they have changed over time. Information comes from census reports, historical gazetteers, travellers' tales and historic maps, assembled into a whole that is much more than the sum of its parts. . . .

A separate site, funded by the UK National Lottery, has been created to make this resource available on-line to everyone, presenting our information graphically and cartographically.  This site is called A Vision of Britain Through Time and presents the history of Great Britain through places.  It can be found at: www.visionofbritain.org.uk

The “core technology” that this project at Portsmouth gets from ECAI is something called “Timemap,” which was developed at the University of Sydney, with ECAI support.  Timemap’s

unique time-handling provides an engaging and intuitive method of delivering historical, community, government, research and business information. Combining mapping and the time dimension gives new ways of visualising urban growth, the spread of empires, heritage sites, environmental change, weather patterns, traffic flow, earthquakes, mobile network faults, and much more ― ranging in time scale from millions of years to seconds.
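
The essential operation behind such an interface can be sketched in a few lines.  What follows is a hypothetical illustration of a time-plus-space query, not the Timemap software itself (which is a far richer system); all records are invented:

```python
# Filter events by a bounding box and a time window, then sort them
# chronologically for animated display. Hypothetical data throughout.
records = [
    # (name, latitude, longitude, year)
    ("Great Fire of London",   51.512, -0.090, 1666),
    ("Plague peaks in London", 51.513, -0.098, 1665),
    ("St. Paul's rebuilt",     51.514, -0.098, 1710),
]

def query(bbox, years):
    """Records inside bbox=(min_lat, min_lon, max_lat, max_lon) and years=(start, end)."""
    min_lat, min_lon, max_lat, max_lon = bbox
    start, end = years
    hits = [r for r in records
            if min_lat <= r[1] <= max_lat
            and min_lon <= r[2] <= max_lon
            and start <= r[3] <= end]
    return sorted(hits, key=lambda r: r[3])  # chronological order, ready to animate

# The City of London in the 1660s:
for name, _, _, year in query((51.50, -0.12, 51.53, -0.08), (1660, 1670)):
    print(year, name)
```

Everything else in a Timemap-style application (the cartography, the animation, the scale-switching from millions of years to seconds) is elaboration on this one filtering-and-ordering step.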

All of this is by way of saying that Bernie Frischer’s grand challenge is not as far-fetched as it might originally have sounded to you, or to me.   Projects like the Great Britain Historical GIS project, and organizations like ECAI, are taking steps in this direction.  The goal envisioned in ECAI is not a holodeck-style immersion in virtual reality, but rather something much closer to the “mediated immersive” experience of the MIT environmental science class, or of my hypothetical example of exploring Defoe’s Journal of the Plague Year by exploring London.   And this is the sort of thing that we can do right now in some places, and soon almost anywhere, with current technology—wireless networks, cellphone technology, GPS, laptops, handhelds, and so on.  What’s lacking, in many cases, is the unified, digitized, accessible cultural record.  And while there are enormous computational challenges to overcome in getting from this stage to Bernie’s 3D immersion in a true simulation of cultural history, there are enormous—and in some ways more interesting—intellectual challenges to be overcome first, in digitizing, unifying, and making accessible that cultural record.  After all, if you’ve ever been the victim of an enterprise resource planning program, you know that even within the relatively uniform and bounded world of an enterprise like the university, we don’t all see things in the same way, or even in ways that are easily interoperable.  Now consider the difficulties that will arise when literary scholars, historians, archaeologists, anthropologists, linguists, and art historians all have to come to agreement on naming, dating, placing, and describing all the elements of cultural production that were created, used, or exhibited in a particular place—say, London—in a particular period—say, the 17th century.   Working out the unification of the cultural record is, I would argue, the grand intellectual challenge of the 21st century, and it will probably take a century to accomplish, but it will be accomplished, and you—and your colleagues in the humanities and the social sciences at universities across the world—need to be involved in this project.  
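
By way of illustration only (this is a hypothetical lowest common denominator, not a proposed standard), a shared layer for naming, dating, placing, and describing might consist of four agreed facets onto which each discipline maps its own richer records:

```python
# Four shared facets for cross-disciplinary records; all entries hypothetical.
from dataclasses import dataclass

@dataclass
class CulturalRecord:
    names: list[str]             # naming: all attested names or spellings
    date_range: tuple[int, int]  # dating: earliest and latest year
    place: str                   # placing: an identifier in a shared gazetteer
    description: str             # describing: free text, in the discipline's own terms
    holder: str                  # the institution holding the original

records = [
    CulturalRecord(["A Journal of the Plague Year"], (1722, 1722),
                   "gazetteer:london", "Defoe's narrative of the 1665 plague", "library"),
    CulturalRecord(["Bills of Mortality"], (1664, 1666),
                   "gazetteer:london", "weekly parish death counts", "archive"),
]

def find(place, start, end):
    """All records tied to a place whose date range overlaps [start, end]."""
    return [r for r in records
            if r.place == place and r.date_range[0] <= end and r.date_range[1] >= start]

print([r.names[0] for r in find("gazetteer:london", 1660, 1730)])
```

The hard part, of course, is not the schema but the agreement: whose gazetteer, whose calendar, whose descriptive vocabulary.  That is precisely the intellectual challenge just described.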

    And somewhere, I hear someone asking: where’s the money going to come from?  From the lottery, in the case of the Great Britain Historical GIS project.  From an interested public, more generally, in a combination of public funding, commercial investment, individual micropayments, institutional subscription, private foundation grants, university support, and individual philanthropy.  It’s not all going to come from one place, but it will have one driver, and that will be the demonstrated interest of an increasingly wired, increasingly educated public.    There are clearly already people here in the UK who believe in the existence of that public, if Culture Online is any indication.  This program of the Department for Culture, Media, and Sport, run by some refugees from the BBC, commissions interactive projects in order to:

- enhance access to the arts for children and young people and give them the opportunity to develop their talents to the full;

- open up our cultural institutions to the wider community, to promote lifelong learning and social cohesion;

- extend the reach of new technologies and build IT skills;

- support wider and richer engagement and learning by all adults;

- promote good practice within the industry and within the public sector in regards to commissioning, process, usability and accessibility.

(see http://www.cultureonline.gov.uk/)

One of our great shortcomings, in the academy, is our lack of faith in the public: we sometimes express this as self-loathing (who could possibly be interested in the arcane subjects on which we waste our days?), but really, it’s much more often and more honestly expressed as a kind of condescending dismissal of the hoi polloi: all they really want is reality TV, trash novels, and video games.  But any of you who have made cultural heritage materials freely available on the web will have found a surprisingly large audience for those materials, an audience that finds you without marketing and comes to you unbidden, an audience that will even suffer through badly designed user interfaces, impenetrable academic prose, and counterintuitive information organization, in order to read 19th-century newspapers or browse the parish records of some small village in the Midlands.  

Why do they do it?  Often because they are interested in their own family histories, or in the history of their communities; sometimes because they are pursuing a hobbyist interest or following up on some arcana of fandom.  Sometimes they are simply interested in the time, the place, or the topic, from other reading they’ve done, from films or plays they’ve seen, from museums they’ve visited.   In a sense, motivation hardly matters: what matters is the opportunity that this public interest represents for academic disciplines that have been too long out of touch with an audience, too long confined to academic journals, scholarly monographs, and classroom lectures.  There’s more to communicate than these forms will accommodate, and more who are interested in what we have to communicate than we will find in these fora.  

As I began by saying, we will have to go out to meet the public in the information commons they now, increasingly, inhabit, and where government, commerce, and other aspects of civic life are already being conducted.  Education must be there too.  In some cases, this means delivering formal education at a distance, but in many other cases it means more informal education that delivers the fruits of academic research to “lifelong” learners, who are pursuing an interest rather than a degree.   We need to be doing this not only for altruistic reasons—for the benefit of the public—but for more selfish professional reasons as well: the cultural record is being digitized, with or without us.  I believe that this massive effort cannot be completed without participation from the commercial sector as well as support from public funding and private philanthropy.  GooglePrint proves that commercial interests believe that there is sufficient public interest to merit digitizing whole research library collections—only about half of which are even in English, by the way—in a whirlwind operation that will cost many, many millions of dollars.  But I also believe that domain experts in the many facets and the long history of that cultural record must be involved in the process, and I believe that we should embrace this project as the renaissance of the humanities and social sciences, and the work of a generation or more.   The unified cultural record that will result is going to be the basis for research projects and teaching methods that we can’t yet begin to imagine, for cross-cultural understanding that we badly need, and even for entertainment that might rise above the level of the Weakest Link, or even Jerry Springer: The Opera.  Establishing this infrastructure for learning will be as important, in shaping the future of the planet, as the establishment of the university infrastructure has been in shaping the last millennium in Europe and North Africa (or the last 2700 years in India).  With that in mind, I hope that all of you, having weathered structuralism and poststructuralism, will leave this lecture thinking of yourselves as infrastructuralists—the designers and builders of the unified cultural record, a legacy for generations to come.    

Thank you very much.