Sunday, January 18, 2009

What Will Change Everything?

The Edge.org question 2009.
WHAT WILL CHANGE EVERYTHING?

DANIEL C. DENNETT
Philosopher; University Professor, Co-Director, Center for Cognitive Studies, Tufts University; Author, Breaking the Spell

THIS VERY EXPLORATION IS CHANGING EVERYTHING

What will change everything? The question itself and many of the answers already given by others here on Edge.org point to a common theme: reflective, scientific investigation of everything is going to change everything. When we look closely at looking closely, when we increase our investment in techniques for increasing our investment in techniques... for increasing our investment in techniques, we create non-linearities — like Doug Hofstadter's strange loops — that amplify uncertainties, allowing phenomena that have heretofore been orderly and relatively predictable to escape our control. We figure out how to game the system, and this initiates an arms race to control or prevent the gaming of the system, which leads to new levels of gamesmanship and so on.

The snowball has started to roll, and there is probably no stopping it. Will the result be a utopia or a dystopia? Which of the novelties are self-limiting, and which will extinguish institutions long thought to be permanent? There is precious little inertia, I think, in cultural phenomena once they are placed in these arms races of cultural evolution. Extinction can happen overnight, in some cases. The almost frictionless markets made possible by the internet are already swiftly revolutionizing commerce.

Will universities and newspapers become obsolete? Will hospitals and churches go the way of corner grocery stores and livery stables? Will reading music soon become as arcane a talent as reading hieroglyphics? Will reading and writing themselves soon be obsolete? What will we use our minds for? Some see a revolution in our concept of intelligence, either because of "neurocosmetics" (Marcel Kinsbourne) or quantum-computing (W. H. Hoffman), or "just in time storytelling" (Roger Schank). Nick Humphrey reminds us that when we get back to basics — procreating, eating, just staying alive — not that much has changed since Roman times, but I think that these are not really fixed points after all.

Our species' stroll through Design Space is picking up speed. Recreational sex, recreational eating, and recreational perception (hallucinogens, alcohol) have been popular since Roman times, but we are now on the verge of recreational self-transformations that will dwarf the modifications the Romans indulged in. When you no longer need to eat to stay alive, or procreate to have offspring, or locomote to have an adventure-packed life, when the residual instincts for these activities might be simply turned off by genetic tweaking, there may be no constants of human nature left at all. Except, maybe, our incessant curiosity.


YOCHAI BENKLER
Berkman Professor of Entrepreneurial Legal Studies, Harvard; Author, The Wealth of Networks: How Social Production Transforms Markets and Freedom

RECOMBINATIONS OF THE NEAR POSSIBLE

What will change everything within forty to fifty years (optimistic assumptions about my longevity, I know)? One way to start to think about this is to look at the last “change everything” innovation and work back fifty years from it. I would focus on the Internet's generalization into everyday life as the relevant baseline innovation that changed everything. We can locate its emergence into widespread use in the mid-1990s. So what did we have in the mid-1940s that was a precursor? We had mature telephone networks, networked radio stations, and point-to-point radio communications. We had the earliest massive computers. So to me the challenge is to look at what we have now, some of which may be quite mature, other pieces of which may be only emerging, and to think of how they could combine in ways that will affect social and cultural processes so as to “change everything,” which I take to mean: will make a big difference to the day-to-day life of many people. Let me suggest four domains in which combinations and improvements of existing elements, some mature, some futuristic, will make a substantial difference, not all of it good.

Communications

We already have hands-free devices. We already have overhead transparent displays in fighter pilot helmets. We already have presence-based and immediate communications. We already upload images and movies, on the fly, from our mobile devices, and share them with friends. We already have early holographic imaging for conference presentations, and high-quality 3D imaging for movies. We already have voice-activated computer control systems, and very early brainwave-activated human-computer interfaces. We already have the capacity to form groups online, and to segment and reform them according to need, be they in World of Warcraft or in Facebook groups. What is left is to combine all these pieces into an integrated, easily wearable system that will, for all practical purposes, allow us to interact as science fiction once imagined telepathy working. We will be able to call upon another person by thinking of them, or at least whispering their name to ourselves. We will be able to communicate with them and see them; we will be able to see through their eyes if we wish to, in real time and in such high resolution that it will seem as though we were in fact standing there, next to them or inside their shoes. However much we think now that collaboration at a distance is easy, what we do today will seem primitive. We won't have “beam me up, Scotty” physically, but we will have a close facsimile of the experience. Coupled with concerns over global warming, these capabilities will make business travel seem like wearing fur. However much we talk about telecommuting today, these new capabilities, together with new concerns over environmental impact, will make virtual workplaces in the information segments of the economy as different from today's telecommuting as today's ubiquitous computing and mobile platforms are from the mini-computer “revolution” of the 1970s.


Medicine

It is entirely plausible that 110 or 120 will be an average life expectancy, with senescence delayed until 80 or 90. This will change the whole dynamic of life: how many careers a lifetime can support; what the ratio of professional moneymaking to volunteering will be; how early in life one starts a job; the length of training. But this will likely affect, if at all within the relevant period, only the wealthiest societies. Simpler, more likely innovations will have a much wider effect on many more people. A cheap and effective malaria vaccine. Cheap and ubiquitous clean water filters. Cheap and effective treatments and prevention techniques against parasites. All these will change life in the Global South on such a scale that, from the perspective of a broad concern with human values, they will swamp whatever effects lengthening life in the wealthier North will have.


Military Robotics

We already have unmanned planes that can shoot live targets. We are seeing land robots, for both military and space applications. We are seeing networked robots performing functions in collaboration. I fear that we will see a massive increase in the deployment and quality of military robotics, and that this will lead to a perception that war is cheaper, in human terms. This, in turn, will lead democracies in general, and the United States in particular, to imagine that there are cheap wars, and to overcome the reticence about war that we learned so dearly in Iraq.

(Casa del Ionesco editor's note: see Robots at War: The New Battlefield for P. W. Singer's perspective on the future of military robotics.)


Free market ideology

This is not a technical innovation but a change in the realm of ideas. Free market ideology, after its demise in the Great Depression, resurged and came to dominance between the 1970s and the late 1990s as a response to communism. As communism collapsed, free market ideology triumphantly declared its dominance. In the U.S. and the UK it expressed itself, first, in the Reagan/Thatcher moment, and was then generalized in the Clinton/Blair turn to define their own moment in terms of integrating market-based solutions as the core institutional innovation of the “left.” It expressed itself in Europe through the competition-focused, free market policies of the technocratic EU Commission, and in global systems through the demands and persistent reform recommendations of the World Bank, the IMF, and the world trade system through the WTO. But within less than two decades, its force as an idea is declining. On the one hand, the Great Deflation of 2008 has shown the utter dependence of human society on the possibility of well-functioning government to assure some baseline stability in human welfare and the capacity to plan for the future. On the other hand, a gradual rise in volunteerism and cooperation, online and offline, is leading to a reassessment of what motivates people, and of how governments, markets, and social dynamics interoperate. I expect the binary State/Market conception of the way we organize our large systems to give way to a more fluid set of systems, with greater integration of the social and commercial, as well as of the state and the social. So much of life, in so many of our societies, has been structured around either market mechanisms or state bureaucracies. The emergence of new systems of social interaction will affect what we do, and where we turn for the things we want to do, have, and experience.



MARTI HEARST
Computer Scientist, UC Berkeley, School of Information; Author, Search User Interfaces

THE DECLINE OF TEXT

As an academic I am of course loath to think about a world without reading and writing, but with the rapidly increasing ease of recording and distributing video, and its enormous popularity, I think it is only a matter of time before text and the written word become relegated to specialists (such as lawyers) and hobbyists.

Movies have already replaced books as cultural touchstones in the U.S., and most Americans dislike watching movies with subtitles. I assume that, given a choice, the majority of Americans would prefer a video-dominant world to a text-dominant one. (Writing as a technologist, I don't feel I can speak for other cultures.) A recent report by Pew Research included a quote from a media executive who said that emails containing podcasts were opened 20% more often than standard marketing email. And I was intrigued by the use of YouTube questions in the U.S. presidential debates. Most of the citizen-submitted videos that were selected by the moderators consisted simply of people pointing the camera at themselves and speaking their question out loud, with a backdrop consisting of a wall in a room of their home. There were no visual flourishes; the video did not add much beyond what a questioner in a live audience would have conveyed. Video is becoming a mundane way to communicate.

Note that I am not predicting the decline of erudition, in the tradition of Allan Bloom. Nor am I arguing that video will make us stupid, as in Neil Postman's landmark "Amusing Ourselves to Death." The situation is different today. In Postman's time, the dominant form of video communication was television, which allowed only for one-way, broadcast-style interaction. We should expect different consequences when everyone uses video for multi-way communication. What I am espousing is that the forms of communication that will do the cultural "heavy lifting" will be audio and video, rather than text.

How will this come about? As a first step, I think there will be a dramatic reduction in typing; input of textual information will move towards audio dictation. (There is a problem of how to avoid disturbing officemates or exposing seat-mates on public transportation to private information; perhaps some sound-canceling technology will be developed to solve this problem.) This will succeed where it has failed in the past because of improvements in speech recognition technology and ease-of-use improvements in the editing, storage, and retrieval of spoken words.
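
To make the dictation step concrete, here is a minimal sketch of speech-to-text input as it can already be done today, written in Python with the open-source SpeechRecognition package; the library and the cloud recognizer it calls are illustrative choices of mine, not tools named in the essay.

    # Minimal dictation sketch: capture one utterance from the microphone
    # and turn it into text. Requires the SpeechRecognition package, a
    # working microphone, and network access for the cloud recognizer.
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)   # calibrate for background noise
        audio = recognizer.listen(source)             # record until a pause

    try:
        text = recognizer.recognize_google(audio)     # send audio out for transcription
        print("You said:", text)
    except sr.UnknownValueError:
        print("Could not understand the audio.")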

There already is robust technology for watching and listening to video at a faster speed than recorded, without undue auditory distortion (Microsoft has an excellent in-house system for this). And as noted above, technology for recording, editing, posting, and storing video has become ubiquitous and easy to use. As for the use of textual media to respond to criticisms and to cite other work, we already see "video responses" as a heavily used feature on YouTube. One can imagine how technology and norms will develop to further enrich this kind of interaction.

The missing piece in technology today is an effective way to search for video content. Automated image analysis is still an unsolved problem, but there may well be a breakthrough on the horizon. Most algorithms of this kind are developed by "training", that is, by exposing them to large numbers of examples. The algorithms, if fed enough data, can learn to recognize patterns that can then be applied to recognize objects in videos they have not yet seen. This kind of technology is behind many of the innovations we see in web search engines, such as accurate spell checking and improvements in automated language translation. Not yet available are huge collections of labeled image and video data, in which words have been linked to objects within the images, but there are efforts afoot to harness the willing crowds of online volunteers to gather such information.
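
As a rough illustration of what "training" means in practice, here is a toy sketch in Python; scikit-learn and its small built-in digits dataset are stand-ins of my own choosing for the much larger labeled video collections the paragraph above says do not yet exist.

    # Toy "training by example": fit a classifier on labeled images, then
    # ask it to recognize images it has never seen before.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    digits = load_digits()   # 8x8 grayscale digit images, each with a label
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    model = SVC(gamma=0.001)          # a standard support-vector classifier
    model.fit(X_train, y_train)       # "exposing it to large numbers of examples"

    print("accuracy on unseen images:", model.score(X_test, y_test))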

What about developing versus developed nations? There is of course an enormous literacy problem in developing nations. Researchers are experimenting with cleverly designed tools such as the Literacy Bridge Talking Book project which uses a low-cost audio device to help teach reading skills. But perhaps just as developing nations "leap-frogged" developed ones by skipping land-line telephones to go straight to cell phones, the same may happen with skipping written literacy and moving directly to screen literacy.

I am not saying text will disappear entirely; one counter-trend is the replacement of orality with text in certain forms of communication. For short messages, texting is efficient and unobtrusive. And there is the question of how official government proclamations will be recorded. Perhaps there will be a requirement for transcription into written text as a provision of the Americans with Disabilities Act, for the hearing-impaired (although we can hope in the future for increasingly advanced technology to reverse such conditions). But I do think the importance of written words will decline dramatically both in culture and in how the world works. In a few years, will I be submitting my response to the Edge question as a podcast?


DAVID EAGLEMAN
Assistant Professor of Neuroscience, Baylor College of Medicine; Author, Sum

SILICON IMMORTALITY: DOWNLOADING CONSCIOUSNESS INTO COMPUTERS

While medicine will advance in the next half century, we are not on a crash-course for achieving immortality by curing all disease. Bodies simply wear down with use. We are on a crash-course, however, with technologies that let us store unthinkable amounts of data and run gargantuan simulations. Therefore, well before we understand how brains work, we will find ourselves able to digitally copy the brain's structure and able to download the conscious mind into a computer.

If the computational hypothesis of brain function is correct, it suggests that an exact replica of your brain will hold your memories, will act and think and feel the way you do, and will experience your consciousness — irrespective of whether it's built out of biological cells, Tinkertoys, or zeros and ones. The important part about brains, the theory goes, is not the structure itself but the algorithms that ride on top of the structure. So if the scaffolding that supports the algorithms is replicated — even in a different medium — then the resultant mind should be identical. If this proves correct, it is almost certain we will soon have technologies that allow us to copy and download our brains and live forever in silica. We will not have to die anymore. We will instead live in virtual worlds like the Matrix. I assume there will be markets for purchasing different kinds of afterlives, and for sharing them with different people — this is the future of social networking. And once you are downloaded, you may even be able to watch the death of your outside, real-world body, in the manner that we would view an interesting movie.

Of course, this hypothesized future embeds many assumptions, the speciousness of any one of which could topple the house of cards. The main problem is that we don't know exactly which variables are critical to capture in our hypothetical brain scan. Presumably the important data will include the detailed connectivity of the hundreds of billions of neurons. But knowing the point-to-point circuit diagram of the brain may not be sufficient to specify its function. The exact three-dimensional arrangement of the neurons and glia is likely to matter as well (for example, because of three-dimensional diffusion of extracellular signals). We may further need to probe and record the strength of each of the trillions of synaptic connections. In a still more challenging scenario, the states of individual proteins (phosphorylation states, exact spatial distribution, articulation with neighboring proteins, and so on) will need to be scanned and stored. It should also be noted that a simulation of the central nervous system by itself may not be sufficient for a good simulation of experience: other aspects of the body may require inclusion, such as the endocrine system, which sends signals to and receives signals from the brain. These considerations potentially lead to billions of trillions of variables that need to be stored and emulated.
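
To give a sense of where a phrase like "billions of trillions of variables" comes from, here is a back-of-envelope tally; every figure below is an assumed order of magnitude, not a measurement taken from the essay or the literature.

    # Rough orders of magnitude only; all numbers here are assumptions.
    neurons = 1e11                          # "hundreds of billions of neurons"
    synapses_per_neuron = 1e4               # a common textbook estimate
    synapses = neurons * synapses_per_neuron         # ~1e15 connections

    states_per_synapse = 1e6                # assumed protein-level detail per synapse
    variables = synapses * states_per_synapse        # ~1e21

    print(f"synapses:  {synapses:.0e}")     # ~1e+15 synaptic connections
    print(f"variables: {variables:.0e}")    # ~1e+21, i.e. billions of trillions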

The other major technical hurdle is that the simulated brain must be able to modify itself. We need not only the pieces and parts, but also the physics of their ongoing interactions — for example, the activity of transcription factors that travel to the nucleus and cause gene expression, the dynamic changes in the location and strength of the synapses, and so on. Unless your simulated experiences change the structure of your simulated brain, you will be unable to form new memories and will have no sense of the passage of time. Under those circumstances, is there any point in immortality?

The good news is that computing power is blossoming sufficiently quickly that we are likely to make it within a half century. And note that a simulation does not need to be run in real time in order for the simulated brain to believe it is operating in real time. There's no doubt that whole brain emulation is an exceptionally challenging problem. As of this moment, we have no neuroscience technologies geared toward ultra-high-resolution scanning of the sort required — and even if we did, it would take several of the world's most powerful computers to represent a few cubic millimeters of brain tissue in real time. It's a large problem. But assuming we haven't missed anything important in our theoretical frameworks, then we have the problem cornered and I expect to see the downloading of consciousness come to fruition in my lifetime.

KEVIN SLAVIN
Digital Technologist; Managing Director, Co-Founder, area/code

THE EBB OF MEMORY

In just a few years, we’ll see the first generation of adults whose every breath has been drawn on the grid. A generation for whom every key moment (e.g., birth) has been documented and distributed globally. Not just the key moments, of course, but also the most banal: eating pasta, missing the train, and having a bad day at the office. Ski trips and puppies.

These trips and puppies are not simply happening, they are becoming data, building up the global database of distributed memories. They are networked digital photos – 3 billion on Flickr, 10 billion on Facebook. They were blog posts, and now they are tweets, too (a billion in 18 months). They are Facebook posts, Dopplr journals, Last.FM updates.

Further, more and more of the traces we produce will be passive or semi-passive. Consider Loopt, which allows us to track ourselves and our friends through GPS. Consider voicemail transcription bots that turn the voice messages we leave into searchable text, sitting in email boxes on into eternity. The next song you listen to will likely be stored in a database record somewhere. The next time you take a phonecam photo, it may well have the event’s latitude and longitude baked into the photo’s metadata.
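
As a small illustration of that last point, here is a sketch of reading back the coordinates a phone camera bakes into a JPEG’s EXIF metadata, using the Pillow imaging library; the library choice and the file name are mine, not the author’s.

    # Pull the GPS block out of a photo's EXIF metadata with Pillow.
    # "photo.jpg" is a hypothetical file standing in for any phonecam shot.
    from PIL import Image
    from PIL.ExifTags import TAGS, GPSTAGS

    img = Image.open("photo.jpg")
    exif = img._getexif() or {}             # raw EXIF tags, keyed by numeric id

    gps = {}
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "GPSInfo":
            # Translate the nested GPS dictionary's numeric keys into names.
            gps = {GPSTAGS.get(k, k): v for k, v in value.items()}

    print(gps.get("GPSLatitude"), gps.get("GPSLongitude"))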

The sharp upswing in all of this record-keeping – both active and passive – is redefining one of the core elements of what it means to be human, namely to remember. We are moving towards a culture that has outsourced this essential quality of existence to machines, to a vast and distributed prosthesis. This infrastructure exists right now, but very soon we’ll be living with the first adult generation whose entire lives are embedded in it.

In 1992, the artist Thomas Bayrle wrote that the great mistake of the future would be that, as everything became digital, we would confuse memory with storage. What’s important about genuine memory, and how it differs from digital storage, is that human memory is imperfect, fallible, and malleable. It disappears over time in a rehearsal and echo of mortality; our abilities to remember, distort and forget are what make us who we are.

We have built the infrastructure that makes it impossible to forget. As it hardens and seeps into every element of daily life, it will make it impossible to remember. Changing what it means to remember changes what it means to be.

There are a few people who already have perfect episodic memory, total recall: neurological edge cases. They are harbingers of the culture to come. One of them, Jill Price, was profiled in Der Spiegel:

"In addition to good memories, every angry word, every mistake, every disappointment, every shock and every moment of pain goes unforgotten. Time heals no wounds for Price. 'I don't look back at the past with any distance. It's more like experiencing everything over and over again, and those memories trigger exactly the same emotions in me. It's like an endless, chaotic film that can completely overpower me. And there's no stop button.'"

This also describes the life of Steve Mann, who has been passively recording his life through wearable computers for many years. This is an unlikely future scenario, but like any caricature, it is based on human features that will be increasingly recognizable. The processing, recording and broadcasting prefigured in Mann’s work will be embedded in everyday actions like the twittering, phonecam shots and GPS traces we broadcast now. All of them entering into an outboard memory that is accessible (and searchable) everywhere we go.

Today is New Year’s Eve. I read today (on Twitter) that three friends, independent of each other, were looking back at Flickr to recall what they were doing a year ago. I would like to start the New Year being able to remember 2008, but also to forget it.

For the next generation, it will be impossible to forget it, and harder to remember. What will change everything is our ability to remember what everything is. Was. And wasn’t.

