    In Summary




    This is Chapter 11 of The Future Does Not Compute: Transcending the Machines in Our Midst, by Stephen L. Talbott. Copyright 1995 O'Reilly & Associates. All rights reserved. You may freely redistribute this chapter in its entirety for noncommercial purposes. For information about the author's online newsletter, NETFUTURE: Technology and Human Responsibility, see http://www.netfuture.org/.

    Our ever more intimate embrace of technology -- which now means especially computerized technology -- is hardly news. At the same time, anyone who claims to discern in this embrace a crisis for humanity risks becoming mere background noise in an era of rhetorical overkill. Nevertheless, something like such a claim is the main burden of this book.

    The qualities of our technological embrace are admittedly difficult to assess. It's not just that we cannot buy things without participating in financial networks and contributing ourselves as "data" to consumer databases; nor that companies are now refusing to work with suppliers who lack "network-compatibility"; nor that in order to compete in most markets today, you must adapt your business to the computational landscape; nor that "knowledge" increasingly means "computer-processed and computer-accessible information"; nor that our children's education is being shifted online with a stunning sense of urgency; nor, finally, that our chosen recreations are ever more influenced by the computer's remarkable ability to frame alternate realities.

    Clearly, these are important developments. But on their surface they don't tell us what sort of embrace we're caught up in.

    Perhaps more revealing is the fact that we can no longer envision the future except as an exercise in projecting technological trends (with computers likely to be doing at least some of the projecting). Questions about the future of community, of social activism, of education, of liberty and democracy -- even of religion -- now threaten to become debates about the evolution of technology.

    The same truth emerges even when we express our fear of technology, for it is often the fear of what "they" will do with technology to rob us of privacy or access to information -- and "they" turn out to be conceived as impersonal mechanisms of government and corporate business: machines running by themselves and largely beyond anyone's control. Nor can we imagine remedies without appealing to these same organizational mechanisms. One way or another, we seem convinced, the Machine cradles our future.

    This helps to explain the advice I've heard too often for comfort: "technology's penetration of our lives will continue in any case; why resist? Why not find the pleasure in it?" -- the rapist's plea, but now applied against ourselves on behalf of our machines.

    All of which raises a question whether the difficulty in characterizing our embrace of technology results partly from the fact that technology is embracing us, the squalid terms of the encounter having largely been purged from our traumatized consciousness. While I would answer a qualified "yes" to this question, I do not take the answer to imply a rejection of technology. Rather, it implies a need to understand both the logic of the technological assault, and the inner qualities of our submission. For the fact is that we are locked within a complex, mutual embrace. Only in accepting this fact will we begin to discover the nature of the crisis now facing us.

    Who -- or what -- holds our future?

    As I write these words, John Perry Barlow's article "Jackboots on the Infobahn" is circulating on the Net. Barlow is cofounder and vice-chairman of the influential Electronic Frontier Foundation, and his piece is a counterattack against government plans to standardize the encryption technology based on the "Clipper" chip. He sees in Clipper "a last ditch attempt by the United States, the last great power from the old Industrial Era, to establish imperial control over cyberspace." His conclusion?
    If they win, the most liberating development in the history of humankind [that is, the National Information Infrastructure] could become, instead, the surveillance system that will monitor our grandchildren's morality. /1/

    This peculiar sentence bears within its brief span nearly all the unresolved tensions afflicting the current assessment of high technology. If the Net is the most liberating development in the history of humankind -- but evil forces may somehow snatch this incomparable gift from us at the last moment, turning it into an instrument of unsurpassed oppression -- then, it seems, the Net can be neither liberating nor oppressive in its own right. It's all a question of what we do with it. It's not the Net we're talking about here; it's you and me. And surely that's the only place to begin. Neither liberation nor oppression can become living powers in any soil except that of the human heart.

    As soon as we put the matter this way, however, we can begin to talk about the "nature" of the Net. Not some absolute, intrinsic nature, to be sure, but an established character -- a kind of active willfulness -- that ultimately derives from our character. Our technological expressions, after all, do exhibit certain tendencies, patterns, biases, and these can, to one degree or another, be read. But it remains true that what we are reading -- what we have expressed through the technology -- can only be something of ourselves. We should not ask, "Is technology neutral?" but rather, "Are we neutral in our use of technology?" And, of course, one hopes we are not. No striving for what is good, true, and beautiful -- or for their opposites -- can reasonably be called neutral.

    On the other hand, we see an apparent compulsion to treat our machines as objective crystal balls in which we can discern the human future. This is part of a broad willingness to anthropomorphize the machine -- to transfer everything human, including responsibility for the future, to our tools. It is easy to forget that such anthropomorphism is a two-way street. If we experience our machines as increasingly humanlike, then we are experiencing ourselves as increasingly machinelike. The latter fact is much more likely to be decisive for our future than the former.

    Are we giving up our freedom?

    What complicates the issue is that we are free to hand over something of our humanity to our machines. We can refuse our own responsibility for the future, and machines will readily fill the void. They will make our decisions on the battlefield or in the boardroom; they will imagine a deeply falsified subatomic world for us; and they will supply us with the words we write. /2/

    Therefore one can talk about the "nature" of the Net in a still more troubling sense. For the proclivities to which we have given expression do, after all, take up a kind of independent life in our technology. This has always been the case, but its truth is hugely amplified in computer-based technology. It is precisely the distinguishing feature of computers that they can act as independent agents. What we express in a programming language -- and such languages already bias our expression, if we are not extraordinarily alert -- becomes the self-sustaining law of the now autonomous, mechanically intelligent agent.

    And yet, even here we should not lose sight of the fact that this autonomous life is, finally, our own. We confront its agency, not only in the computer, but in our organizations, in the functioning of our economy, in politics -- and also in ourselves just so far as we "run on automatic" and enter unconsciously, or with only a technical consciousness, into those domains where history is so loudly demanding we take hold of our destiny. We confront it wherever seemingly insoluble human problems arise through no one's apparent fault -- much as an individual's conscious intentions can be subverted by split-off, independent fragments of the psyche ("complexes") for which he does not seem to be responsible.

    All this, I hope, suggests the need for sensitive balances in our thinking -- quite a different matter from weighing piles of "facts" or "information" in crude opposition to each other. In particular, we must hold the balance between two poles:

    As an absolute conclusion, neither statement is tenable. But as poles marking the movement of thought, they are both essential. I believe strongly in the decisive potentials of our nascent human freedom, and therefore my appeal is to the reader's understanding and powers of choice. To that extent, I keep to the pole of freedom. But my fears, and the urgency of my message, derive from the second pole. For I am convinced that, paradoxical as it may seem, we are strongly tempted to use our freedom in order to deny freedom, pursuing instead the mechanization of life and thought. Such a course is open to us. We can forsake the responsibility of choice and put the machine in charge. We can let ourselves be driven by all the collective, automatic, unconscious, deterministic processes we have set afoot in the world and in ourselves.

    A crisis of awakening

    Opening today's electronic mail, I find an announcement circulated to all members of the Consortium for School Networking discussion group. (The group is aimed primarily at teachers and administrators in grades K-12.) The announcement solicits contributions to a book, and begins in this admirably straight-shooting fashion: "The Texas Center for Educational Technology is committed to producing a publication that reports successful uses of technology in education." A little further down -- with no hesitations voiced in the meantime -- the message continues:
    The purpose of this book is to help other educators justify to their school boards and school administrators the purchase of expensive computer-based technologies. We want to hear from you about positive changes in student achievement, student behavior, or teacher/administrator productivity that are a result of the implementation of technology. We also want to hear from you about improvements in student test scores, attendance, tardiness, attitude toward subject matter, and/or self-esteem that are attributable to the use of technology.

    These people seem to know what they want. Surely they will find it. Just as surely, one can reasonably complain about the imbalance of their enterprise. Most teachers haven't yet figured out what to do with the computers and the online access being shoved at them, and already this educational center is prepared to assume that only good can come of it all?

    But asking for balance is admittedly a tricky matter. The obvious thing is to tally all the "good effects" of computers in the classroom, and all the "bad," and then to hope that the good outweigh the bad. One encounters exactly this approach, not only in education, but in all fields where computers are used. The skeptic is typically met with counterbalancing testimonials about how "my daughter had a wonderful learning experience with such-and-such a computer program" -- much as critics of television have become used to the ritual observation that "there are some good programs on TV." And so there are.

    There is no argumentative sum

    Such assessments can, in fact, be worthwhile. The challenge for the critic is to avoid disparaging them while pointing gently toward a deeper set of issues. After several decades of a massive social experiment with television, there are finally signs that we are taking more concerned notice of the medium's underlying effects, whether expressed through "good" programs or "bad": what is happening to nervous systems, habits of attention, moral and esthetic judgments, the structure of the family, thinking processes?

    I am not sure we will be given this many decades to become properly aware of our computers. For what is directly at risk now -- what the computer asks us to abdicate -- are our independent powers of awareness. Yet these powers are the only means by which we can raise ourselves above the machine.

    The more intelligence, the more independent life, the machine possesses, the more urgently I must strive with it in order to bend it to my own purposes. Here, then, is the fundamental level at which I need to strike a human-centered balance between myself and the machines around me. "Balance," however, is too mild a term; I need to experience a crisis -- a kind of turning point that prepares the way for healing. The word "crisis" was classically related to a decisive act of distinguishing, and the necessity today is that I learn to distinguish my own humanity from the subhuman. I cannot do so without a fearful, morally tinged struggle toward consciousness.

    This is a very different matter from seeking balance in the usual sense. It will not do simply to cultivate as many socially beneficial effects of computers as possible -- whether we count elevated student test scores, or participation in electronic elections, or the analysis of scientific data, or cross-cultural communication as "socially beneficial." For one can imagine a brave new world in which we have eliminated all but the desired effects, yet in which we steadily descend to the level of our machines. The deeper question is: how do our desires already reflect our adaptation to our machines, and how can we rise above that adaptation?

    The very attempt to sum the advantages and disadvantages associated with computers is itself a sign of how far we have already succumbed to the computational paradigm. For there is no such sum. There is no absolute advantage, and no absolute disadvantage. Everything depends upon what happens within you and me -- and we can change that. Therefore, even to argue that there is a threat from computers -- if that threat is seen as fixed and objective -- is only to further our descent. This is my own greatest challenge, for where my strong predilection is to argue the facts, I should instead seek to awaken. Awakenings prepare the way for a new future, and for different facts.

    I can put all this in a slightly different way. The experts in human-computer interfaces are working hard to design computer programs that "cooperate" more fully with humans and mesh harmoniously with our natural capacities. These undertakings are valuable. But they are not enough, for they do not yet address what it is, with all our capacities, we have so far become, or what we may yet become. What if the human being to whom we so beautifully adapt the computer is the wrong sort of human being? What if our efforts really amount to a more effective adaptation of the human being to the machine, rather than the other way around?

    After all, it is we who first conceived those refractory machines now being redesigned. Something in us harmonized with the original designs. That is, something in us was already inclined toward the mechanical. Unless we can work on that something -- master it -- our remedial efforts will lead to ever more subtle, ever more "successful" expressions of the same tendency.

    That work is what I have been referring to. Only a crisis within the individual can summon unexercised, forgotten, or future human capacities. "There is no birth of consciousness without pain" (C. G. Jung). If, in our technological experiment, we are to know who is adapting to whom -- who is embracing whom -- we must first gain a clear experience of both human and machine potentials, and how they differ.

    But a great deal in our recent cultural history -- and almost the entire thrust of the ongoing technological effort -- has consisted of the attempt to bridge or deny any such difference. We have learned to regard ourselves as ghosts in the machine, awaiting a proper, scientific exorcism. It should hardly surprise us if this habit of thought has affected our actual capacities. We have more and more become mere ghosts in the machine.

    On being awake

    "Civilization advances in proportion to the number of operations its people can do without thinking about them." /3/ While echoing current notions of progress, this is nevertheless the momentous opposite of the truth. Everything depends today on how much we can penetrate our activities with a fully conscious, deeply felt intention, leaving as little as possible to the designs that have "escaped" us and taken up residence in the impersonal machinery of our existence.

    I am not decrying our ability to walk or drive cars without the crippling necessity of thinking about every detail of our behavior. But it is important to gain a conscious grasp of the meanings inhering in such activities. This requires an ability to enter at will into them with imaginative thinking. What is the difference between crawling on all fours and striding forward in an upright position? That we try to understand the relation between man and animal without having gained a conscious experience of this difference testifies to a science driven by purely external and reductionist approaches. It is a science that would explain consciousness without ever bothering to enter into it.

    Similarly, we need to grasp the difference between driving a car and walking, or between viewing a two-dimensional, perspective image and participating in the world itself, or between interacting with a computer and interacting with a person. But these are exactly the inner distinctions we have been training ourselves to ignore. It is no surprise, then, that many can now readily conceive of dealing with computers that are, for all practical purposes, indistinguishable from persons.

    I am not claiming any easy or automatic distinction between computers and people. If what we have been training ourselves to ignore -- and may therefore lose as capacity -- is what distinguishes us from machines, then we must expect that over time it will become more and more plausible to program our machines to be like us.

    This brings us back to the crisis of awakening. Our relation to computers is more a matter of choice, of the direction of movement, of inner experience and discovery, than of unalterable fact. This is not to say, however, that the choices will remain open indefinitely. Critical choices never do. The growing child, for example, must develop certain human capacities, such as speech, when the time is "ripe," or else risk remaining incapacitated for the rest of its life. There can be no stasis at such times; one either moves forward, or else loses much of what one already has.

    There is good reason to believe that all of human life is governed by this principle. Personal crisis, then, is the only saving response to modern technology -- a crisis of consciousness provoked by the oppressive resistance of mechanized intelligence, much as the child's first, faltering steps are a crisis of "uprightness" in response to the downward pull of gravity.

    The child takes those steps in obedience to a wisdom calling from "without," a wisdom not yet grasped as conscious understanding; our task as adults today is to make the deepest human wisdom fully our own. Where, as a child, I differentiated myself from the animal, now I must learn to differentiate myself from the machine -- and this differentiation lies in the deepening of consciousness. It is therefore not only a matter of pointing to established human nature; it is also a matter of realizing human nature in its movement toward the future.

    That I do not feel this task a real one, pressing upon me daily, may be the surest sign of my growing comfort with the rule of the machine, whose purely technical evolution, running comfortably along lines of least resistance, lets me sleep.

    References

    1. Barlow, 1994.

    2. See, respectively, chapter 10, "Thoughts on a Group Support System"; chapter 13, "Impressing the Science out of Children"; and chapter 16, "The Tyranny of the Detached Word."

    3. I have not succeeded in tracing this quotation -- attributed to Alfred North Whitehead -- to its origin.
