Surfing the Satellites:
A New Image of Earth

Mark Pesce
VRML Architecture Group
mpesce@netcom.com * http://hyperreal.com/~mpesce


Introduction

I'd like to begin with some apologies. First, I'd like to apologize for overlooking Australia's visa requirement - something that kept me on the ground in Amsterdam, rather than bound for Sydney and Darling Harbour. This requirement - which I am given to understand is in reciprocity for my own country's policies - smells very nineteenth century. For well over two years I've traveled internationally with little more than plane tickets and a passport, and sometimes not even that much. It didn't even occur to me that in 1995, people would need visas to travel within the "industrialized world". As inconceivable as it might be, it kept me from boarding a plane to come visit with you.

It's strange that I can visit you electronically - and do so, regularly, on the World Wide Web - while I need a visa to visit in person. It's much like what Nicholas Negroponte refers to in "Being Digital" - the difference between atoms and bits. If I can be quoted, verbatim, in The Australian, aren't I having an effect on your country greater than any personal visit? If I can address a thousand people with this talk - to be posted at the conference's Web site - does this mean that Australia has somehow preserved its boundaries to matter while becoming porous to information? What exactly, then, is a visa for?

It's actually quite curious that this event occurred, because it touches on a point at the center of my talk - our entire definition of national boundaries is shifting, rapidly. In the practical sense, I can enter and leave Australia many times a day - digitally - while I can't enter it at all physically. So, what is your government doing? What is it avoiding, and what is it enabling, either consciously or accidentally?

Enough of questions - because I'm sure you all have a number of your own. It's time to examine what's really happening, and from this, learn where we might be headed.

A Brief History of Cyberspace

Anthropologists tell us that the human facility for communication is what separates us from the "lower" animals. For at least a few hundred thousand years, the human ability to vocalize, to sing, to express wonderment, anger, joy and fear has given us - in the language of evolutionary biology - a "selection advantage" which has heaped upon us the greatest power on Earth. This power comes directly from the epiphenomenal effect of communication - the creation of "collective mind". This collective mind, which is actually the learned and communicated base of knowledge from which humanity rose to dominance, creates culture and society, and is actually the original human technology.

The medium of communication - that is, the human voice - had to give voice to content. We believe these first forms were quite simple; studies of the lesser apes have led us to believe that the linguistic systems we use are also available to them, but in simplified forms. The basic forms spoke to danger and fear, safety and hope. Yet, in a few hundred generations, human language grew to encompass ritual, drama and poetry. Language, as a way to model the internal structure of the observed universe, became the vehicle for the human articulation of myth - the stories that are absolutely true told in utterly unbelievable words. Myth formed the first content of language; stories of the Earth, and of Goddesses and Gods. These stories were passed down, generation to generation, without so much as a single word misplaced. Great arts of memory came into use - this was before writing, when the great works had to be committed to memory in order to be properly preserved.

Writing stands at the beginning of history, because it is the medium used to record history. In Sumer, cuneiform tablets spelled out the extent of the king's wealth, the astronomical cycles, and the great myths of prehistoric time - Inanna and Tammuz, Gilgamesh, The Flood. Now the oral tradition becomes less fastidious, because writing can preserve the oral truths. Beyond this, the human mind - which has a finite capacity for learning, bounded at minimum by the length of one's life - no longer restricted to oral truths, could draw from a slowly increasing base of collective mind. The Library at Alexandria, perhaps the most famous of ancient times, contained (so it was said) all of the knowledge of man, from his earliest beginnings to the Greco-Roman era. Written resources, now available to individual human beings, gave us the emergence of the scholar, as one who studied the written work within a particular area of interest.

In the West, the world rose with Rome and fell with the Church. The Catholic Church controlled the writing of humanity within its dominion; but this monopoly was broken by the encounter with the riches of the Arab world. The Crusades are the actual beginning of the Reformation, because the encounter with the ideas of mathematics, algebra, philosophy and science actually fomented the revolutions of Copernicus, Galileo, and Spinoza. These written ideas formed the basis of Western knowledge at the beginning of the Renaissance.

When Gutenberg printed his first Bible, he multiplied the effectiveness of writing a thousand-fold - no longer locked up in monasteries and churches, printed documents quickly swarmed across Europe; this led to an increase in knowledge as great as all those preceding it. At the start of the 17th century, the great works - that is, all that a person had to read in order to be considered fully educated - amounted to some 300 volumes. With the invention of printing, a library this size was easy to amass.

The beauty of human knowledge in the age of writing - and particularly in the age of printing - is the essential multiplicative nature of the process of knowledge formation. Each of the great works influenced other minds, who would then write out their own ideas, to influence other minds, and so forth. By the early 18th century, Voltaire announced the Enlightenment, and the ideals of scientific rationalism and human rights became part of the printed discourse. Science, in particular, was the brainchild of writing, because the necessity for reproducibility and wide exchange of ideas fit very naturally into the post-Gutenberg paradigm.

Soon enough, Franklin, Volta and Ampere would perform the fundamental experiments on electricity, to be followed by Watt and Faraday. Within a generation, Samuel Morse created the telegraph, which electrified the world - literally. McLuhan points to the telegraph as the essential archetype of electric technology - it extends the human ability to communicate, at the speed of light, across the surface of the planet. It is as if the entire planet has become our skin, and the telegraph - by extension, all electric technology - our nervous system.

The birth of the electric age created a crisis in our understanding, leaving us vulnerable in ways we had never been vulnerable before. Our senses could lie to us, at a distance. The telegraph brought news from far away - the assassination of President Lincoln, the Johnstown Flood, the sinking of the U.S.S. Maine. This last example points up precisely what I'm talking about - William Randolph Hearst, the prototype for Rupert Murdoch and Henry Luce, used the telegraph to transmit alarm; the nervous system alerting the body to pain, so that the body would react - and buy more newspapers. The telegraph gave birth to the tabloid, the ancient mythic space of vocal communication amplified by modern technology.

At the start of our century, machinery remained large, centrally controlled, and required many people. This was due to the essential stupidity of the machine: devoid of any ability to mutate its own behavior based upon its own actions. This meant that machines were dangerous - Charlie Chaplin in Modern Times gives us a taste of the soul sucked into the machine, bound into its cycles and rhythms. But we required machinic intelligence - so that the machine could act on its own, for our benefit. The first concrete need was a machine to "crack" the coding machine used by the Nazis during the Second World War. Named ENIGMA, the device used a series of cipher rotors to encrypt a message, based upon a particular arrangement of these rotors. A mathematician at Cambridge, Alan Turing, developed a computer - quite simple compared to our modern models - which could "feed back" upon its outputs and use them to modify its own inputs. This essential mutability is the heart of computing - because the computer can make decisions based upon past behavior in order to modify future actions. The computer is literally electricity come alive; with some machinic consciousness, electricity directs itself.

Turing's computer became the ancestor of the many models to come; although John von Neumann is credited with the birth of the modern computer, Turing's contribution is just as essential - he developed the first machine electrically conscious of its own process. This leap took fifty years to spread through the entire world of economics, production, and finance, and now our entire life is computerized; although it is difficult to pinpoint the specific benefits of computing, we know that most manufacturing and service processes are far more flexible and far faster than they have ever been before. The oft-heard complaint about the tremendous speed of modern times is due in no small part to the proliferation of the computer - frequent change makes time pass faster, if only apparently.

Computers are useless in isolation; even the first ones had banks of switches and relays into which data could be entered for calculations. Conversely, a computer presenting many interfaces is almost inherently more useful, more capable of meeting demanding tasks. Simulation, the essential action of a computer, is most accurate when continuously informed of outside processes. To facilitate this, the last twenty years have seen an enormous growth in the development of computer communications.

In 1969, the United States Department of Defense recognized the vulnerability of its network of computers - bring any one of them down and the whole network would collapse. It commissioned the Defense Advanced Research Projects Agency (DARPA) to develop a networking infrastructure which could be both mutable and resilient in the face of the catastrophic failure of any of its parts. The resulting communications protocols, called TCP/IP, became the cornerstone of today's Internet.

The Internet was successful precisely because it was resilient. Designed to parry the full force of thermonuclear war, the Internet would also be good enough to service most commercial and academic needs. One by one, over the 1980s, other protocols faded into insignificance; now only the Internet (which is a set of protocols and not a network) remains.

We've traversed a full circle: beginning with the communication of imagination, then imagination electrified, electronic computing, and finally computing communications. We now travel a span we've seen before - that of communicating imagination. Yet now it's supported by electronic computing communication. This time it's not the human voice which is the emerging medium, but something else, something we already have a name for - we call it cyberspace.

The VRML Equinox

While the Internet was an enormous improvement over the computer communications that had preceded it, using it was problematic. It grew up in an age of "sophisticated" computer interfaces, carefully disguising the fact that most computers were far too stupid to understand anything remotely related to human language. The command lines, mostly derived from the UNIX systems under which TCP/IP was developed, were cryptic, and almost hermetic in their options. Because of this, a special class of individuals grew up to meet the challenge of "systems administration", when in fact they served as community librarians. They remembered where you put things in the boundless no-space of the Internet; and if you should happen to fire one, well, that might be the last you'd see of your files. It was as if we'd landed back in the oral age of communication again; the myths of the culture could be locked up in one individual's head. Despite this, the Internet grew slowly, serving mostly as an electronic mail transport system.

During the 1970s, fundamental research at Xerox PARC proved the efficacy of graphical user interfaces - these began to migrate to the personal computer with the Macintosh, and now dominate the computing paradigm. The Internet was much slower to follow - still very attached to the UNIX command line. But the volume of information accessible through the Internet continued to increase, to the point where it would be quite useful to many people if the interface could be restructured along more human-centered lines.

One young man, a programmer at CERN, the European atom-smasher, wanted to develop a series of improvements to the Internet which could create relationships between the various sets of data on the Internet, and display them in a platform-independent way. To do this, he created a new computer communication mechanism, the Hypertext Transfer Protocol, or HTTP, and a new display mechanism, the Hypertext Markup Language, or HTML. The young man was named Tim Berners-Lee, and these inventions are the basis of the World Wide Web. Berners-Lee sought to create a unified space from the many machines on the Internet, gathering them into a coherent entity. The Web did this, creating the equivalent of a single, very large disk drive from the many computers on the Internet, and presenting all of this data in a consistent format.
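
For readers who have never peeked under the hood, the exchange HTTP defines is remarkably plain. The sketch below, written in Python as an illustration (the host name is just a placeholder, not any particular Web site), sends the kind of request a browser sends and prints whatever the server returns - headers first, then the HTML that becomes the page.

    # A minimal sketch of the request/response exchange HTTP defines.
    # The host name is an illustrative placeholder.
    import socket

    def http_get(host, path="/", port=80):
        """Send a bare HTTP/1.0 GET request and return the raw response."""
        sock = socket.create_connection((host, port))
        try:
            request = "GET {} HTTP/1.0\r\nHost: {}\r\n\r\n".format(path, host)
            sock.sendall(request.encode("ascii"))
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
            return b"".join(chunks).decode("latin-1")
        finally:
            sock.close()

    if __name__ == "__main__":
        # Headers arrive first, then the HTML the browser renders as a page.
        print(http_get("example.com"))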

Despite the significance of this development, it took a few more years and some further improvements before the Web took off. Engineers at the University of Illinois' National Center for Supercomputing Applications extended the Web technologies again, in two fundamental dimensions - they added support for forms, so that a Web user could send input to computers within the Web, and they added support on the Web servers for processing these forms. These two dimensions, which in shorthand can be called interface and connectivity, are the fundamental reasons for the success of the Web. This improved Web, viewable with an application named NCSA Mosaic, became the true starting point for the Internet revolution.

Although few people use it anymore, Mosaic is probably the most influential application ever developed for a computer, even more so than VisiCalc as the first spreadsheet, or WordStar as the first word processor. It lowered the barrier to entry on the Internet to anyone who knew how to use a computer. Suddenly the hundreds of thousands of Internet users became tens of millions of Web users. In the blink of an eye, the Web ate the Net. Today, Netscape Navigator runs on at least 70% of the desktops connected to the Internet, and there's no sign that figure will change, or that we'll outgrow our need for Internet connectivity. Everything's growing just about as fast as it can - the Web, once doubling in size every 53 days, is now doubling every 150. But don't despair - it grew to 8 times its original size before the growth curve began to slow; we probably can't bring up Web sites much faster than this.

I found myself on the Web in October of 1993, utterly entranced with its potential to be the infrastructure for global knowledge. At the same time, I found it rather confusing. The Web uses as its metaphor the concept of hyperspace - a space which is no space at all, because it's dimensionless. From one Web page to another, from one Web site to another, it's all the same. Other than bookmarks, there's really no way to organize the content of the Web sensibly. That's quite bad, because it means that the Web will soon overrun our ability to use it effectively, unless something's done about it. I can pose the question quite simply: which is easier to understand - "http://hyperreal.com/~mpesce", or "take Market to Third, and Third to Bryant"?

Throughout the 1980s we saw the birth of human-centered computing, which seeks to invert the old computer-centered model of computing by creating sensual interfaces to information. The logic is simple: if I can render something sensually, it'll make more sense. The difference between a column of numbers and a well-designed chart is the simplest example of what I mean, but it goes further, into simulation and virtual reality. From the moment I "got" the Web, I knew it needed a sensualized interface, something that could be as expressive as we as humans are - a tool worthy of its users. From December of 1993 through February of 1994, working with Tony Parisi, I prototyped what has become known as the Virtual Reality Modeling Language, VRML, or, as we call it, "vermal", to satisfy my concerns for an effective Web interface.

We communicated our developments to Berners-Lee in Geneva, and received an invitation to present our work at the First International Conference on the World Wide Web. That began an explosion of activity, as we consciously harnessed the collective intelligence of the Internet and the Web to draft a specification for a language which could meet the needs of the Web's academic users, as well as the demands of a growing commercial base. We contacted Silicon Graphics, and asked if they'd care to let their Open Inventor file format become the basis from which VRML would grow - they agreed, and placed many man-years of work into the public domain; this work became the basis for VRML 1.0, which we had in draft specification form in time for the second Web conference, in Chicago, in October of 1994.

Creating a community is the only way to forge ahead on the Internet; collective intelligence offers benefits beyond compare. We had an active community of several thousand members worldwide, contributing to a discussion on the future directions of a global project - the creation of a fully interactive, multiparticipant virtual environment, scalable to meet the needs of any of its users. This is a big project, perhaps one of the biggest projects in computing. It's done collaboratively, with an eye toward the sharing of results - and this means that our intelligence is not only collective, it's amplified by every contributor. People often ask if VRML will face any competition from another standard. I can only say that I doubt this; VRML moves faster than any single company can, even if that company is a Microsoft, because it's distributed, flexible and goal-oriented.

VRML, simply put, is a 3D interface to the Web. Where HTML can create "pages" with text and images, VRML can create "spaces" (we call them "worlds") with objects and linkages into other Web data sources. For example, it's possible to create a bookshelf with books upon it; these books could link to the "Great Works" as provided by Project Gutenberg. Or perhaps a shopping mall in cyberspace, where it's possible to examine the goods completely, by picking them up and moving them around. A three-dimensional environment isn't a panacea - we won't stop using the text world of the Web just because we have a rich 3D world; rather, we'll see them used together, for more effective communication.
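
To make the bookshelf idea concrete, here is a rough sketch, written in Python, that writes out a tiny one-object VRML 1.0 world: a single "book" which, when clicked in a VRML browser, follows a WWWAnchor link just as a hyperlink does on a page. The syntax follows the VRML 1.0 draft (derived from Open Inventor); the link target and file name are illustrative placeholders, not real resources.

    # A hedged sketch: generate a one-object VRML 1.0 "world" whose object
    # links to a Web resource, the way a hyperlink does on an HTML page.
    # The URL and file name below are illustrative placeholders.
    VRML_WORLD = """#VRML V1.0 ascii
    Separator {
        WWWAnchor {
            name "http://example.com/great-works/index.html"
            description "A book on the shelf"
            Separator {
                Material { diffuseColor 0.6 0.3 0.1 }
                Cube { width 0.3 height 2.0 depth 1.4 }   # a single book
            }
        }
    }
    """

    if __name__ == "__main__":
        with open("bookshelf.wrl", "w") as world_file:
            world_file.write(VRML_WORLD)
        print("Wrote bookshelf.wrl - open it in a VRML browser and click the book.")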

Imag(in)ing the Earth

Michael Gough, an architect who contributed to my book, offers as one of his comments, "The question becomes: what to do?" As someone who jumped ship from physical architecture to the design of cyberspace, he confronts this question every moment, in every act. Cyberspace, still almost entirely empty, demands this kind of critical thought before effective results can be expected.

This question confronts all of us, at several levels - certainly economically, probably socially, and as we'll see soon enough, governmentally. With a tool for communication of greater power than any preceding it - because it can change not just what we think but the way we think it - we need to be informed by guidelines which will produce content worthy of the potential of the medium.

Because cyberspace has been so radically informed by fiction - indeed, even the word itself comes from a science fiction novel - we believe it to be the realm of utter fantasy realized. While I don't disagree with the essence of this intuition, I want to be a bit of a contrarian, and argue that the concrete is actually the best candidate for visualization. The earth beneath our feet will prove the most fertile of subjects, more than our dreams, because it can be informed by reality, shaped by it, and soon enough, will shape it.

Such a development is perfectly predictable from the cycle of communication outlined in the first part of this paper; the age of simulation seeks a real body to simulate, and the Earth has a reality appealing at every possible level. Neal Stephenson, in his novel Snow Crash, evoked a virtual reality application simply called "Earth", which produced a high-resolution model of the Earth, in real-time, so that events far below were visible from the satellites circling above.

If I'd come to Australia to fill your heads with ideals derived from fiction, I'd deserve to be laughed at; indeed, I was informed that, as a nation, you're practically minded, not the kind to accept the fictive as real. Yet, even as Stephenson was dreaming this up in Seattle, a team on the other side of the world was making it happen.

In 1988, a group of engineers, designers and artists formed a company to explore the implications of computing, visualization, and communications. Since that time, Berlin's ART+COM has created a body of work unlike any other - a perfect synthesis of the real and virtual, consistently grounding its designs in usefulness, and always achieving far more than they themselves take credit for. The PING project was a very early experiment in using Web technology to create a low-cost infrastructure for interactive television. Using the Internet together with one high-power computer and Berlin's community-access cable television, ART+COM was able to develop a real system for multiparticipant art creation accessible anywhere on the planet. It was a combination of technologies used appropriately, and designed to delight - because emotion informs their design aesthetic as much as intellect does.

From the beginning, the founders of ART+COM sought to create realistic Earth-based models, and in 1990 demonstrated a type of "time machine" which could take a visitor on a fully immersive virtual reality tour of Berlin, from the 1750s through the present day. The model city could also store video or film clips of the city taken in the past, integrated with the model - giving users a true sense of the progress of time, population and war on the face of Berlin. It was used to help plan the redesign of Germany's capital following the reunification of the nation.

In 1992, ART+COM began a project to model the entire Earth. Using satellite photos, the group created a twenty-gigabyte database of images and topography, and wrote an impressive set of software to manipulate these images effectively. The database of images - as they call it, the terrabase - contained the model of the Earth in many different resolutions. Their logic was simple - when you're viewing the whole, you don't need to know much about specifics; when you're viewing specifics, you don't need to know much about the whole. In simulation this is known as level-of-detail, and it's one way to cope with more data than your computer can easily handle - by breaking it down into bite-sized chunks, each at a particular level of detail.
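
As a rough illustration of the principle (this is my own sketch in Python, not ART+COM's code), level-of-detail selection can be as simple as picking a tile resolution from the viewer's altitude: far away, coarse imagery; close in, progressively finer imagery. The altitude bands and resolutions below are invented for the example.

    # A hedged sketch of level-of-detail selection, not T_Vision's own code.
    # The altitude thresholds and imagery resolutions below are invented.
    LEVELS = [
        # (minimum altitude in km, metres per pixel of the imagery served)
        (10000.0, 50000.0),   # whole-Earth view: very coarse imagery
        (1000.0,   5000.0),
        (100.0,     500.0),
        (10.0,       50.0),
        (0.0,         5.0),   # near the ground: the finest imagery held
    ]

    def resolution_for_altitude(altitude_km):
        """Return the imagery resolution (m/pixel) suited to a viewer height."""
        for min_altitude, metres_per_pixel in LEVELS:
            if altitude_km >= min_altitude:
                return metres_per_pixel
        return LEVELS[-1][1]

    if __name__ == "__main__":
        for height in (20000.0, 350.0, 2.0):
            print(height, "km ->", resolution_for_altitude(height), "m/pixel")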

The system, which they named T_Vision, uses a powerful Silicon Graphics computer to generate an Earth visualization from the database. In conjunction with the computer, an input device known as the Earthtracker is used to manipulate the model Earth. The Earthtracker looks a lot like a globe; that's what it is, more or less. Spin the Earthtracker and watch the image of the Earth rotate appropriately. It's quite intuitive, a small twist on the globes found in every classroom.

Above the Earthtracker is a small dial, with a number of buttons beside it. Using the dial, it's possible to dive toward the Earth's surface. As you do so, you see the low-resolution far-away image of the Earth replaced by high-resolution close-up images; but the images are seamlessly integrated, so it appears as though the flow is continuous. It's possible to travel from a million kilometers above the planet down to a desktop in Berlin in 22 stages of progressively increasing resolution, as if the planet has come to land on your desktop!

Most significantly, T_Vision uses a network of computers, spread across the planet on a very high-speed ATM backbone, to produce a networked visualization of the planet as it is right now. T_Vision hosts in Japan, Silicon Valley and Berlin work together to create a unified database - each of them generates data locally (perhaps terrain, or weather information, or seismic activity) and provides it to the others. It's possible to surf the clouds in T_Vision, or the oceans.

Although it's complete, T_Vision is just a beginning. It's something we'll see on our own computers within a year's time, and once we do, we'll wonder how we ever did without it. The path from here to there is a little more complicated than I've described so far - now I'd like to elucidate the requirements and benefits of a global infrastructure for global visualization.

The ability to deal with very large databases is the primary technical requirement. The T_Vision database, which presents the Earth in a fairly low level-of-detail, is more than 20 gigabytes in size. To articulate the Earth in even modest detail would take a database of some hundreds of terabytes (10^12 bytes), perhaps even petabytes (10^15 bytes). Indeed, the Earth imaging data being derived from NASA's Mission to Planet Earth adds up to well over a gigabyte per day, and will soon climb to several gigabytes. This is the kind of dynamic data which must be effectively integrated with a living model of the planet.
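
A back-of-the-envelope calculation shows why the numbers climb so quickly. The resolution and bytes-per-pixel figures in this sketch are assumptions chosen only to illustrate the scale, not a specification of any real database.

    # A rough estimate of what a whole-Earth image database demands.
    # The 1 m/pixel resolution and 3 bytes/pixel figures are assumptions.
    EARTH_SURFACE_M2 = 5.1e14       # about 510 million square kilometres
    METRES_PER_PIXEL = 1.0
    BYTES_PER_PIXEL = 3             # 24-bit colour, uncompressed

    pixels = EARTH_SURFACE_M2 / (METRES_PER_PIXEL ** 2)
    total_bytes = pixels * BYTES_PER_PIXEL
    print("about %.1f petabytes" % (total_bytes / 1e15))   # roughly 1.5 petabytes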

It is absolutely impossible to centralize this kind of database, for several reasons: first, there just aren't disk drives big enough for this kind of data storage (or they'd be prohibitively expensive), and second, no computer, however well connected, could possibly hope to deal with the number of requests made to it on a continuous basis if it acted as the database server for planet Earth. This implies that such a database must be distributed (as T_Vision is), and, given the amount of data, probably very finely distributed (which T_Vision is not). This means that new networking protocols, such as the Cyberspace Protocol, will need to be employed to handle database distribution among an arbitrarily large number of servers spread across the globe.
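
One way such fine distribution might work - a sketch of the principle only, not the Cyberspace Protocol itself - is to address the planet as a grid of tiles and map each tile deterministically onto one of an arbitrary number of servers, so that any client can compute where to ask without consulting a central directory. The server names and tile size below are placeholders.

    # A hedged sketch of distributing an Earth database across many servers.
    # This is NOT the Cyberspace Protocol; it only illustrates how a client
    # could compute which server holds a given region of the planet.
    import hashlib

    SERVERS = ["earth1.example.net", "earth2.example.net", "earth3.example.net"]
    TILE_DEGREES = 1.0   # one-degree tiles; an arbitrary choice

    def tile_for(lat, lon):
        """Quantize a latitude/longitude into a tile identifier."""
        row = int((lat + 90.0) // TILE_DEGREES)
        col = int((lon + 180.0) // TILE_DEGREES)
        return "tile_%d_%d" % (row, col)

    def server_for(tile_id):
        """Map a tile onto a server by hashing, so no central index is needed."""
        digest = hashlib.md5(tile_id.encode("ascii")).hexdigest()
        return SERVERS[int(digest, 16) % len(SERVERS)]

    if __name__ == "__main__":
        tile = tile_for(-33.87, 151.21)   # Sydney, near Darling Harbour
        print(tile, "is served by", server_for(tile))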

The Earth is clearly the ultimate database; in solving this problem we'll solve a host of others.

Our second problem - and actually a more important one - concerns how this database is generated. ART+COM could model the few blocks around their offices in Berlin, and it's possible for others to model a few blocks around the other T_Vision installations, but what army is large enough to model the entire content of the Earth? The model is of limited utility until it is reasonably well articulated, but this will take the efforts of tens of thousands, perhaps millions of individuals, each of whom "owns" a portion of the database, and tends it. While such a coordinated operation may seem far-fetched, it's very little different from weather observation - carried out by amateurs the world over for hundreds of years - or from the GLOBE project, organized under the auspices of U.S. Vice President Albert Gore, which seeks to have children all over the Earth creating a weather and ecology database of their local communities, integrated by GLOBE into a collective whole.

In my own city of San Francisco, in conjunction with Planet 9 Studios, I am helping to organize the "Virtual SoMa" project. Virtual SoMa is a VRML-based model of the South Park neighborhood in San Francisco's South of Market (or "SoMa") district. The heart of San Francisco's "Multimedia Gulch", the model portrays the topography and buildings of a three-block district which happens to be the home of WIRED magazine and HOTWIRED on-line, of Construct, Inc., a VRML design firm, of Worlds, Inc., which designs VRML applications, and of many other organizations. The model - which can be downloaded from anywhere on the Internet and examined on a machine of modest capabilities - demonstrates the power of Earth-based simulations. It's possible to "wander" through the streets of Virtual SoMa, examine the buildings, and catch a glimpse of what's inside. The model fully integrates Web services. If you go up to the building on the corner of 3rd and Bryant, and click on it, you're immediately delivered to www.wired.com, HOTWIRED, because that's the building WIRED's offices are located in.

We can see that this model isn't just a toy - in fact, what we're seeing is the articulation of directory services as they'll look in the 21st century. This isn't just a glorified yellow pages, like YAHOO, or a super-index, like Lycos; this is using a model which conforms to expectation - the world has a well-established look and feel - to produce a compelling environment for information exploration. If this model were to cover all of San Francisco - as it will over the next twelve months - it could begin to serve as a tourist directory accessible from anywhere on the planet, available to all. It could serve as an urban planning guide, and help communities to make informed decisions on policies that could change their lives. It could serve as an ecological resource, used to identify and aid in the elimination of dangerous environments.

Before any of this happens, VRML will have to mature. As it stands, VRML is as interactive as the rest of the World Wide Web - which is to say, not very much. However, new languages like Sun's Java will come into relationship with VRML, and give it interactive properties it doesn't have itself. Within 12 months it will be possible to build databases this large, because we'll have tools that can abstract this complexity and present it through a consistent mechanism, in a way that won't overwhelm people - despite the sheer volume of data. Until then, we're stuck with somewhat smaller models, but communities have it within their power to model themselves - and one of my jobs right now is to convince people that this needs to be done.

Conversely, our various military agencies have excellent 3D maps of the planet and its communities. The U.S. Department of Defense's Mapping Agency keeps a comprehensive set of terrain and building maps for the country; it seems ludicrous that such a valuable resource should be locked up for "security reasons", while we're all quite busy duplicating their work. These maps can be combined with freely available GIS (Geographic Information System) data, which provides street mapping and house numbering throughout North America (and soon, the world). There's already plenty of data to be mined; although the articulation of specific structures must be left to a legion of individuals, the surface of the planet is well known to us, and available for use within a global model.

Before we leave this section of our journey through the future of cyberspace, I want to leave you with another image, something that's just around the corner but almost wholly unbelievable. Two years from now - presuming this all goes as planned - I'll be lost in downtown Tokyo. But I'll have my trusty laptop, equipped with two of 1997's coolest PC cards. The first, a PC cellular modem, keeps me in touch with the Web through my local Japanese Internet Service Provider. The second, a Global Positioning System (GPS) receiver, lets my laptop know exactly where I am, down to about 10 meters of accuracy. This means that I can launch my desktop "Earth" application, and - ABRAHADABRA! - I can see where I am, and how to find where I want to go. That's the "killer app" for VRML, and one of the most useful applications for the Web itself.
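
The plumbing behind that scenario is mostly tile arithmetic again: take the position the GPS card reports, work out which portion of the world model it falls in, and fetch that portion over the cellular link. The sketch below shows only the position-to-tile step; the quadtree-style tiling scheme, zoom level and server URL are assumptions for illustration, not the design of any actual "Earth" application.

    # A hedged sketch: turn a GPS fix into the address of the map tile to fetch.
    # The quadtree tiling scheme, zoom level and URL are assumed for illustration.
    import math

    def tile_address(lat_deg, lon_deg, zoom):
        """Return (x, y) indices of the tile containing a position at a zoom level."""
        n = 2 ** zoom
        lat = math.radians(lat_deg)
        x = int((lon_deg + 180.0) / 360.0 * n)
        y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
        return x, y

    if __name__ == "__main__":
        # Roughly downtown Tokyo, at a street-level zoom.
        x, y = tile_address(35.68, 139.76, zoom=16)
        print("fetch tile", x, y, "e.g. http://earth.example.net/16/%d/%d" % (x, y))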

The Music of the Spheres

We're going to turn the tables, and go in a direction that's perhaps not as product focused, but certainly just as interesting. It's been commonly thought, over the last twenty years, that the eye presents the best interface to the computer - after all, visual acuity can be measured in the millions of bits per second, while the ear is capable of only a few tens of thousands. Because of this, interfaces have tended to be eye-centric, adopting the attitude that vision is paramount. This is clearly both an artifact and bias of Western culture, which praises the phonetic literacy of modern man while despising the rich orality of the more "primitive" cultures.

Yet, the eye is sterile. Although a painting may shine upon the heart, a poem whispered speaks to the soul. We can't avoid the fact that the human voice is the root of human communication - in fact, we need to design interfaces which embrace this reality and translate it into new forms of knowledge. Surprisingly, there's been very little work in this field; although there's a significant body of work in the psychoacoustics of perception, it's remained apart from the bulk of the research into virtual reality, because the ear has always been an afterthought.

One of the most compelling examples of interface technology I've experienced was introduced at SIGGRAPH in 1994. The CyberFin project created a simulation of swimming with dolphins. Through the use of several transducers, including a large "Vibrasonic pad" which the user lay upon, CyberFin sent the full range of dolphin sound into the body, from subsonics through ultrasonics, reproducing as much of the dolphin's sonic perception as possible within the limited human facility of hearing. This piece - one of the most popular at SIGGRAPH, normally sporting a line three hours long - de-emphasized the eye in favor of the ear, using just a few simple 3D visuals of dolphins at play, contrasted with intense aural stimulation. It was particularly effective, and its creators mused on the possibility of a "trans-species synesthesia" - that we might be able to experience life through the dolphin's sensorium, even if only weakly.

McLuhan noted that when we progressed into modern culture, humanity traded "an eye for an ear". In his terms, the global village had a sound, a voice peculiar to it. We tend to forget that while space is in the eye, place is in the ear, and that communities are not seen, they are heard. This is what gives the Internet its particular quality of community - although it is still primarily a text-driven environment, this text has the aural quality of the spoken word - indigenous to its existence, and growing stronger. The entire nature of discourse on the Internet has a vocal quality; although individuals may write "literary" works intended for distribution on the Internet, almost all communication through it assumes a strong oral/aural quality.

Last year I had a vision of a place of aural connection within the confines of cyberspace, a place with a sound, of voice and of music, essentially the human sound of the noosphere. What I had originally imagined has been progressively refined over the last 15 months, as I've encountered technologies like T_Vision, to become another interface mechanism for the global database. That an oral/aural interface to global resources has been overlooked by others is an indication of the disregard we have for this facility. We call this effort The WorldSong Project, and gave the first public demonstration of this technology at the Doors of Perception conference in Amsterdam on 11 November 1995.

The concept is simple - to construct a database of sounds, samples, rhythms, etc., which are mapped onto particular regions of the Earth's surface. Like images, sounds have levels of detail, so we've initially identified seven "strata" which comprise convenient categories for these levels. They are: the Astral stratum, for the sounds of the universe outside the Earth; the Etheric stratum, for the sound of the planet as a whole, typified by global trance; the Continental stratum, which covers the FM/AM and short-wave bands over a large area of Earth's surface; the Regional stratum, serving up indigenous music; the Urban stratum, covering the popular and historical sounds of human culture; the Neighborhood stratum, where the voice of the community finds its expression; and finally, the Personal stratum, the individual, alone, or in concert with others, in vocal or musical expression.
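
As with the visual database, the choice of stratum can be driven by how far the listener stands from the Earth's surface. The sketch below is my own illustration of that pairing; only the stratum names come from the project description above, while the altitude bands are invented for the example.

    # A hedged sketch of mapping viewer altitude onto the seven WorldSong strata.
    # The altitude bands are invented for illustration; only the stratum names
    # come from the project description above.
    STRATA = [
        # (minimum altitude in km, stratum)
        (50000.0, "Astral"),        # looking back at Earth from deep space
        (2000.0,  "Etheric"),
        (500.0,   "Continental"),
        (100.0,   "Regional"),
        (10.0,    "Urban"),
        (1.0,     "Neighborhood"),
        (0.0,     "Personal"),
    ]

    def stratum_for_altitude(altitude_km):
        """Return the sound stratum heard at a given height above the surface."""
        for min_altitude, name in STRATA:
            if altitude_km >= min_altitude:
                return name
        return "Personal"

    if __name__ == "__main__":
        for height in (100000.0, 300.0, 0.1):
            print(height, "km ->", stratum_for_altitude(height))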

When combined with a visual technology, like T_Vision, the effect is breathtaking. For example, the Continental stratum serves as an interface to the electromagnetic waves which cover the planet. It's possible to drive around the planet using T_Vision as an interface, and then use this positional information to derive a selection of radio stations which are then sent back to the user to be heard. This same interface technique could be applied to television or other media, giving the user access to the global informational resources, both aural and visual, through a single interface.

Finally, I think that these techniques may give rise to a type of connectivity we've not seen before - the global song-space. It's possible, using current Internet technologies, to create a space for the human voice, as sung singly or in groups. Imagine a time in the near future, when, wearing earphones and using a microphone, it's possible to sing "into the net" and localize your sound to your location on Earth's surface. You might hear others singing with you, or around you, spatialized appropriately to their own locations. Or imagine that you're just a listener in this system, and with your desktop Earth it's possible to wander among the Earth-singers, listening to humanity as they weave an ancient song through a modern mechanism.

It may seem fanciful, but it's almost here.

Conclusion

The Internet is dry and airy, almost wholly intellectual. If we're to rely on it as the communications medium of the 21st century, we have to work hard to give it some human heart - because if we can't say "I love you" through it, we might not say it at all. That would be a crime of great import, and it would happen because we overlooked a fundamental truth - unless we humanize our machines, they will inevitably dehumanize us.

So it's time to look at the Internet as a vast beach whose sand is mixed through with scores of diamonds; such are the riches of the millennium. Yet while we're entering an age of communication unlike any we've seen before, it's not without its own share of dangers. If we can visualize ourselves utterly, what happens to privacy, to the right not to be seen? Is T_Vision just a step toward a global Panopticon, a kind of Orwellian telescreen which allows Big Brother to peer down from on high and monitor his subjects? How can nations maintain their boundaries when we can electronically come and go as we please in cyberspace, having an economic and social impact far greater than the weight of our bodies? Once we see the planet as a whole, will we still believe in the myths that keep nations together, or will we decide that the human nation extends as one from the Earth's surface to its ionosphere?

These are heady questions, but ones that must be addressed if we are to reach into the future. The capabilities are here now, but humanity must be informed by an ethic which allows for chaos, allows for difference, allows for a multiplicity of voices over a single vision. We will soon see the whole world, and soon hear it - that will change everything: about ourselves, our race, and our planet.


Mark D. Pesce
Amsterdam * 15 - 16 November 1995