Everyone seems to "know" that computers are one-sided. If we had to characterize computers as either logical or intuitive, we would say, "logical." Do computers deal in information or understanding? Information. Are they impersonal or personal? Impersonal. Highly structured or unstructured? Structured. Quantitative or qualitative? Quantitative.
The problem is that we always seem to have a clear notion of the one side -- the attributes we assign to the computer -- while the other side remains suspiciously elusive despite representing our own "human dimension." What sort of personal understanding is intuitive, unstructured, and qualitative? Can we distinguish it precisely from impersonal information that is logical, structured, and quantitative?
But the question rings an alarm bell. It asks for a precise distinction, but precision itself seems to be one of the terms we are required to distinguish. After all, what do we mean by precision if not quantitative and logical exactness? If this is so, however, then we appear to be stuck: clearly, we cannot distinguish precisely between precision itself and something incommensurable with precision, any more than we can visually distinguish between sight and smell. All we can do is contrast the precise with the imprecise, which leaves us firmly rooted to the scale of precision. And yet, the widespread impression of computational one-sidedness suggests that we are at least dimly aware of "another side of the story." Can we lay hold of it?
The conviction that we can underlies every sentence of this book. The issues, however, are complex, and they confound virtually every debate about computer capabilities. When a problem haunts us in this way, we can be sure that we're up against a fundamental question of meaning -- very likely one that our deeply ingrained cultural biases or blind spots prevent us from encompassing.
It so happens that Owen Barfield has spent some sixty-five years circling and laying bare the particular biases at issue here. His first, decisive insights applicable to the relation between computers and human beings date from the late 1920s -- although he was not then writing, and so far as I know has not since written, about computers. Unfortunately, I do not know of any others who have brought his work to bear upon artificial intelligence and related disciplines. My own effort here is a modest one: to suggest broadly and informally where Barfield's work strikes most directly at current confusions.
We can, however, come to understand the computer's limitations. Admittedly, this requires a considerable effort. The computer brings to perfect completion the primary "drift" of our civilization over the past few hundred years. To see the computer in perspective, we need to get outside this drift -- one might also say, to get outside ourselves. Or, to use the language of chapter 11, "In Summary," we must come to ourselves -- experience an awakening of what is most deeply human within us.
What is most deeply human is inseparable from meaning. Unfortunately, the meaning of "meaning" is the most vexed issue in all of artificial intelligence and cognitive science. In dealing with meaning, we must come to terms with everything in the human being that does not compute. That is why this chapter is primarily about meaning. If you find yourself wondering along the way, "what does all this have to do with the computer?" then I suppose the presentation may be roughly on track. At the same time, I hope it is clear by the end of our journey that meaning has a great deal to do with the limitations of the computer.
The problem, of course, is that I am no master of meaning, able to orchestrate its appearance in these pages. If society as a whole suffers from its loss, so do I. But I, like many others, am also aware of the loss, and the computer has been one of the primary instruments of my awareness. By considering computation in the purest sense, I have been able to begin grasping what a certain few -- and in particular Owen Barfield -- have been telling us about the nature of meaning.
Meaning, you might say, is what computation is not. But the two are not simple opposites. They cannot be, for then they would stand in a strictly logical -- and therefore, computational -- relationship, in which case meaning would have been assimilated to computation. We can hardly expect the principle that "balances" logic and computation to be itself reducible to logic and computation.
But all that, unfortunately, is itself a highly abstract statement. Let me substitute a metaphor: what I have attempted in this book is to outline the hole I find in society and in myself, about which I can say, "That's where meaning must lie. My meaninglessness gives shape to a void. By entering with a proper sensitivity into the meaninglessness, I can begin to sense the dark hollow it enfolds. And in the darkness there begin to flicker the first, faint colors of meaning."
Moreover, it turns out that the powers of computation, with which so much of the world now resonates, shape themselves around the same void. The meaninglessness of my experience is, in fact, the meaninglessness of a computational bent manifesting within my consciousness, in society, and -- increasingly -- in nature.
The computer therefore may give us a gift: the opportunity to recognize meaning as the void at the computer's heart. And it likewise presents us with a challenge: to overcome the void, or rather fill it with our humanity.
The second sentence raises questions of meaning in a more insistent fashion than (1). If the majority's freedom to act as a majority is itself a threat to freedom, then we need to sort out just what we mean by "freedom." Freedom in what respect, and for whom? Similarly, what is the sense of "worst"? Does it mean "most common"? "Most powerful"? "Most vile"? And how does "determined" -- usually understood as a trait of individual psychology -- apply to a collective?
Despite these questions, however, we pick up the rough sense of this second assertion without too much difficulty, for the thought is not altogether new to us, and we have learned what sorts of qualifications we must give to each term in order to achieve a coherent statement. Ask a group of educated people what the sentence means, and you would expect at least a minimal cohesion in the responses, which is a measure of our (by no means extreme) accuracy in communication when we speak the sentence. So in (2) we have gained a certain richness of suggestion, a certain fullness and complexity of meaning, but have lost accuracy, compared to (1).
With (3) the interpretive difficulties have multiplied greatly, throwing severe obstacles in the way of accurate communication. (This is especially the case if you imagine this exhortation being voiced for the first time within a given culture.) Isn't the definition of "enemy" being rudely turned on its head? If I am to treat my enemy like a loved one, what is the difference between the two, and what is happening to language?
And yet, this very saying has been received by numerous people with some degree of common understanding -- although it is an understanding that may only be born of a sudden and illuminating expansion of commonly held, but inadequate, meanings. It is not that we simply abandon the old meaning of "enemy" -- we are not likely to forget the sting of recently felt animosities, for example -- but a new awareness of possibility now impinges upon that old meaning, placing things in a curious and intriguing light. Can it be that my enemies play a necessary -- a disciplinary or educative -- role in my life? If I treat an enemy as a friend, do I benefit myself as well as him? What will become of the enmity in that case?
Although any group of people will likely generate various explanations of the sentence (the potential for accuracy here is quite low), some individuals, at least, will confess that they have found the meaning to be both sublime and decisive for their lives. The sublimity is purchased, it appears, at the expense of the ease with which we can precisely communicate or explicate the thought.
The strong urge today, in other words, is to seek greater accuracy, and we're not quite sure what other challenge exists. If we could just devise a language free of all those ambiguities about "enemy," "freedom," "determined" ... then people could not so easily speak vaguely or imprecisely. It seems all too obvious, therefore, that the three sentences above reflect an increasing confusion of meaning -- a loss of accuracy -- and we are likely to leave the matter there.
But this will not do. In the first place, it encourages us to dismiss as empty or shoddy much that is most noble and inspiring in human culture. In the second place, it leaves unanswered the question, What are we striving to be accurate about? For we already have languages nearly purified of all ambiguity -- the various systems of symbolic logic and formal mathematics are just such languages. And the reason they are free of ambiguity is that, by themselves, they cannot be about anything. We can make them about something only by destroying their perfect precision. To apply mathematics, we must introduce some more or less unruly terms relating to the world. But I am running ahead of myself. Allow me to backtrack for a moment.
It is true that vagueness is the opposite of accuracy. But opposites are not what we are looking for. What we need in order to find a counterpoint to accuracy is, as Barfield shows, the relation of polar contraries. /1/ Think, for example, of a bar magnet. Its north and south poles are not mere opposites. Neither can exist without the other, and each penetrates the other. Cut off a section of the north end of the magnet, and you now have a second bar magnet with both north and south poles. It is impossible to isolate "pure northernness." Each pole exists, not only in opposition to the other, but also by virtue of the other. If you destroy one pole, you destroy the other as well -- by demagnetizing the bar.
This points to what is, I believe, one of Barfield's critical recognitions bearing on the computer's limitations: meaning (or expressiveness) and accuracy are polar contraries. At the moment I expect the statement to be more of a puzzle than a revelation. Indeed, as the puzzlements I have already cited suggest, the ideas at issue here prove extraordinarily elusive. I hope, however, at least to hint at the life within this statement.
To begin with, then -- and recalling the magnet's polarity -- meaning exists by virtue of accuracy, and accuracy exists by virtue of meaning. We can neither be meaninglessly accurate nor accurately meaningless in any absolute sense. That is, accurate communication requires something meaningful to be accurate about, and meaningful expression requires some minimal degree of accuracy, lest nothing be effectively expressed. As Barfield puts it:
It is not much use having a perfect means of communication if you have nothing to communicate except the relative positions of bodies in space -- or if you will never again have anything new to communicate. In the same way it is not much use expressing yourself very fully and perfectly indeed -- if nobody can understand a word you are saying.
One way to approach an understanding of polarity is to consider what destroys it. If mathematics, taken in the strictest sense, looks like a language of perfect accuracy, it is also a language devoid of meaning. /3/ But mathematics is not thereby a kind of pure "northernness," for in gaining its perfect accuracy and losing its potential for expressing meaning altogether, it has lost its essential linguistic nature. Can we really even speak of accuracy when a language gives us nothing about which to be accurate? Accuracy in communication can only exist in the presence of some meaning; otherwise, nothing is being communicated.
One can also imagine falling out of the polarity in the other direction. Of course, this is hardly the main risk in our day, but we can picture such an outcome in a rough way by considering the poet or seer who is struck dumb by his vision: overwhelmed by a sublime understanding, he remains inarticulate, lacking the analytical means to translate his revelation even into a poor verbal representation. Here again, then, there is no effective use of language at all.
So far as we succeed in communicating, we remain within the complex interpenetration of polarity, playing accuracy against meaning, but allowing the absolute hegemony of neither. A fuller meaning may be purchased at the expense of accuracy, and greater accuracy may constrict meaning. But these are not mere opposites. If they were, the one would occur simply at the expense of the other. In a polarity, on the other hand, one pole occurs by virtue of the other. An intensified north pole implies an intensified south pole; a weakened north pole implies a weakened south pole. The greatest minds are those capable of maintaining the most exquisite polar tension, combining the deepest insight (meaning) with the clearest analysis (accuracy).
It is important to see that falling altogether out of the polarity into, say, number, typically occurs through a weakening of the polarity. That is, although one may well emphasize the pole of accuracy in moving toward mere number, that very one-sidedness, by weakening the contrary pole, also weakens accuracy itself so far as accuracy is viewed as part of the dynamic of communication. There is ever less to be accurate about. The polarity fades into empty precision that communicates no content.
In sum: when the polar tension is at its greatest -- when both accuracy and expressiveness are at their highest pitch (when the "magnet" is strongest) -- we have the deepest and most precisely articulated meaning. This gives way, via inattention to one or the other pole, to a loss of clearly articulated meaning. It may on some occasions be necessary, therefore, to distinguish between the "empty precision" that results when we abandon the polarity for number, and the "accuracy" that, in cooperative tension with expressiveness, enables our discursive grasp and communication of meaning.
"But what," you may be asking with increasing impatience, "is meaning, anyway?" We will turn toward that pole shortly, but only after looking in greater depth at the more familiar pole of accuracy. You need to recognize, however, that all "what is" questions in our culture are strongly biased toward the analytical. We commonly say what something is by analyzing it into parts, which we can then relate to each other by the precise laws of mathematics and logic. This bias will hardly help us to understand the polar contrary of analysis.
In slightly different words: it is difficult to be precise about meaning for the simple reason that in meaning we have the polar contrary of precision. The best way to begin the search for meaning is by exercising your imagination against a blank -- that is, by trying to recognize the shape of what is missing in the polarity so long as we recognize only accuracy. If a perfectly accurate language cannot give us the world -- or any content at all -- then what can give us the world? Here there is no possible theoretical answer. We must begin to gain -- or regain -- the world in our own experience.
To say "Mary's father was killed in an automobile accident" is to affirm something very different from "The light in the kitchen was on last night." But suppose we say instead,
"It is true that Mary's father was killed in an automobile accident."
"It is true that the light in the kitchen was on last night."
The purely logical affirmation -- that is, the meaning of "it is true that" -- is exactly the same in both these sentences. It is indeed the same in a potentially infinite number of sentences of the form,
It is true that ( . . . ),
where the expression in parentheses is an assertion of some sort. What it means to say that something is true does not depend on the parenthetic expression. So the bare assertion of the truth of something is just about the most abstract statement we can make; it abstracts the one common element from a huge number of descriptions of radically different states of affairs. The logic of my assertion that someone was killed is identical to the logic of my assertion that the light was on. The very point of such assertions is to show "a something" that the subordinate clauses have in common -- something we can abstract from them equally -- despite almost every possible difference of meaning otherwise. That abstract something we call truth (or falsity, as the case may be).
We have seen that, given a pair of polar contraries, we cannot perfectly isolate either pole. It is impossible to slice off such a tiny sliver of the north end of a bar magnet that we end up with pure northernness. We have either north and south interpenetrating each other, or no magnet at all. We found a similar relation between meaning and quantitative rigor, where mathematics represents the pursuit of accuracy to the point where the polarity is destroyed, leaving nothing about which to be accurate. And so it is also with meaning and truth.
The attempt to conquer the pole of "pure truth" results in the loss not only of meaning but of truth as well, for it makes no sense to speak of truth without content. That is why logicians often speak of the validity of a logical demonstration rather than its truth. It is also why they use letters like p and q to stand for sentences, or propositions. For the content of a proposition does not enter into logical calculations; the only thing that matters is that the propositions be either true or false unambiguously. All true propositions -- however diverse their apparent meanings -- have exactly the same meaning for the logician; so do all false propositions. It was Wittgenstein who remarked, "All propositions of logic mean the same thing, namely nothing."
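The logician's indifference to content can be made concrete. The following sketch is my own illustration in Python, not anything drawn from Barfield or the logicians quoted here; the names is_valid and modus_ponens are merely convenient labels. It checks the validity of modus ponens by running through every assignment of truth values to p and q. Nothing about what p and q assert ever enters the computation; only their bare truth or falsity does.

    from itertools import product

    def is_valid(formula, variables):
        # A formula is logically valid if it comes out true under every
        # possible assignment of truth values to its variables.
        return all(
            formula(dict(zip(variables, values)))
            for values in product([True, False], repeat=len(variables))
        )

    # Modus ponens, ((p -> q) and p) -> q, written with "not A or B" for "A -> B".
    # "p" and "q" are opaque placeholders; what they are about never enters the check.
    modus_ponens = lambda v: not ((not v["p"] or v["q"]) and v["p"]) or v["q"]

    print(is_valid(modus_ponens, ["p", "q"]))  # True: valid, whatever p and q may mean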
Equating formal logic with mathematics, Bertrand Russell wrote:
Pure mathematics consists entirely of assertions to the effect that, if such and such a proposition is true of anything, then such and such another proposition is true of that thing. It is essential not to discuss whether the first proposition is really true, and not to mention what the anything is, of which it is supposed to be true. Both these points would belong to applied mathematics. We start, in pure mathematics, from certain rules of inference, by which we can infer that if one proposition is true, then so is some other proposition. These rules of inference constitute the major part of the principles of formal logic. We then take any hypothesis that seems amusing, and deduce its consequences. If our hypothesis is about anything, and not about some one or more particular things, then our deductions constitute mathematics. Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true. /5/
On the other hand, just so far as we apply our logic to the world and thereby re-introduce content -- substituting terms with real meaning for our propositional p's and q's -- we lose the logical purity of our truth. If, for example, I say, "All men are mortal," you might ask about the definition of "man": what distinguishes man from not-man in human evolution? Or, what distinguishes man from machine? To clarify such questions I will be driven -- so long as I am seeking logical purity -- to define my terms in an ever narrower way. As one can already recognize in the sciences of biology and artificial intelligence, the word "man" begins to disappear into abstract technicality. Terms like "information," "algorithm," "genetic encoding," "organization," "replication," and "program" come to the fore.
This tendency is inevitable given the aforementioned quest for logical purity. We have to begin qualifying ourselves in an effort to eliminate ambiguity: this term is to be taken only in such-and-such a respect, and that term in a different respect -- and by the time we regain absolute logical precision (if we ever do), we will again have reduced the terms of our proposition to the purely abstract p's and q's of the logician. We will have lost whatever it was we started out to say. For the only statements that remain unqualifiedly true regardless of how their terms are taken are statements whose content has fallen out of the picture.
All concrete, meaningful content resists the absolutism and universalism of logic.
Many are therefore content to dismiss the question of meaning altogether, drawing sufficient satisfaction from their ability to fashion contrivances that work. This willingness to be content with things that work rather than with understanding is reminiscent of the logician's commerce with validity rather than truth. It is the end result of an effort to reduce the polarity to a single pole. A precise, two-valued system ("it works" and "it doesn't work") replaces the drive to penetrate phenomena with human consciousness and so to understand.
We feel comfortable with precision and the abstraction it requires. You might say they are our destiny. Something has led us quite naturally down a path whereby our meanings have vanished into equations, bottom lines, statistics, and computer programs. The causes of that historical drift -- whatever they are -- have proven relentless and all but irresistible. It should not surprise us, therefore, if our effort to grasp hold of meaning in the following sections proves an uphill struggle. Yet without such struggle we may eventually find our consciousness constricted to a vanishing point. For the polarity between meaning and accuracy is also -- within the individual consciousness -- a polarity between fullness and clarity. And we run the risk of becoming, finally, absolutely clear about nothing at all.
"Meaning is a philosophical word," she shrugs, and then turns her attention to a leaky vacuum pump. /6/
I said "crazily" because, while it is certainly true that meaning is not some new kind of thing, it is a prerequisite for there to be any things at all. Every attempt to arrive at the things of our world -- or the things of our theories -- starting from the "pure northernness," the conceptual barrenness, of mathematical or logical abstraction never gets as far as step one. We simply cannot meaningfully speak except by starting with meaning.
Meaning cannot be defined without being assumed, for surely I cannot define meaning with meaningless terms. And if I employ meaningful terms in my definition, then I assume that you are already capable of grasping meaning. Similarly, no one can define definition for someone who doesn't already know what a definition is; nor can anyone demonstrate the principles of logic without relying upon logic.
These "boundary problems" of cognition point us toward a crucial consideration: something in cognition "stands on its own" and is self-apparent. Ultimately, the only basis for knowing anything is that it has become transparent, or obvious, and the only way to discover what it is for something to be obvious is to experience its obviousness "from the inside." One then begins to live within the self-supported nature of thinking.
The alternative is to try to understand thinking in terms of the various objects of thought -- brain, computer, or whatever. But this effort is futile, for the objects are only given by thinking, and therefore presuppose what they are supposed to explain. "The seen is not the cause of seeing, but the result of seeing." /7/ We cannot, as Barfield explains, even begin with ourselves as subjects confronting a world of objects:
It is not justifiable, in constructing a theory of knowledge, to take subjectivity as "given." Why? Because, if we examine the thinking activity carefully, by subsequent reflection on it, we shall find that in the act of thinking, or knowing, no such distinction of consciousness exists. We are not conscious of ourselves thinking about something, but simply of something .... Consequently, in thinking about thinking, if we are determined to make no assumptions at the outset, we dare not start with the distinction between self and not-self; for that distinction actually disappears every time we think. /8/
That is, both subject and object are determinations given by thinking. They presuppose thinking, which therefore cannot be classified as either subjective or objective.
The first logicians had no rules of logic to go by, and yet they teased out the logical principles inherent in the received system of meanings. Clearly, they didn't do this by consciously applying the very rules of logic they were trying to derive. Logic does not come first in our knowing. And yet, logical structure is already implicit within the purest of meanings. Our meanings are mutually articulated with each other in a manner that is given by the meanings themselves, and we can therefore begin to abstract from these meanings certain empty, universal forms, or possibilities of articulation. These possibilities are what we know as logic.
The grasp of meaning, then, precedes, and becomes the basis for, the eventual elaboration of logic as such. We do not need the rules of logic in order to apprehend meaning. Rather, apprehending meaning with ever greater accuracy is what enables us to extract the rules of logic. Thinking logically is what we find we have done when we have successfully struggled to remain faithful to our meanings.
To be logical in a concrete sense (that is, within the polar relationship) does not mean to act according to an abstract logical calculus, but rather to preserve the coherence of my meanings -- to put those meanings on display without demeaning them by introducing distortions. If I must invoke logic against an opponent in argument, it is not to introduce some new understanding, but rather (as Barfield notes) to bring him to his senses: he has somehow abandoned the intrinsic necessities of his own meanings. He will recognize his error only when he enters more consciously and with greater clarity into those meanings.
To puzzle over the logical justification of logic, the definition of definition, and the meaning of meaning is only to make clear the boundaries of our normal way of thinking, which is governed by a radical slide toward the pole of logic and abstraction. The only way across those boundaries lies in overcoming one-sidedness, which in turn requires not merely thinking about things, but experiencing our own thinking -- including its qualitative aspects. Not much in our culture trains us to do this; we focus upon the "objects" given to us by our thinking rather than upon the thinking itself -- until, finally, some are suggesting that thinking offers us nothing to experience.
If we ever succeed in becoming perfect logic machines, there will indeed be nothing left to experience.
Philosophers have sometimes claimed that sentences like the following are tautologies:
"The earth is a planet." That is, the predicate simply repeats a truth already inherent in the subject. If we truly know the meaning of "earth," then we also know that earth is a planet. So the remark tells us nothing new. On the very face of it, the sentence purports to do no more than define "earth" -- it tells us what earth is -- so that if we already know the definition of "earth" -- if the terms of the sentence are from the start precisely accurate for us -- we learn nothing. Again, mathematics and logic offer the most extreme example. When we write the equation, 2 + 2 = 4 the equals sign tells us that what is on the right side of the equation is nothing other than, or different from, what is on the left side of the equation. That is what the sign says. If we clearly understand "2 + 2," then we already see that it is the same as "4." There is not some new content in "4" that was missing in "2 + 2."But imagine you are a contemporary of Copernicus hearing for the first time, "the earth is a planet." Not only is this no tautology, it may well strike you as plainly false. For in all likelihood you view the earth as a center around which both the fixed and wandering stars revolve. You take for granted the fundamental difference in quality between earthly and heavenly substance. The existing meanings of your words do not allow the truth of what you have just heard.
And yet, the time may come when you do accept the statement as true. If we look for the crucial moments separating your unbelief from your belief, what do we see? Words changing their meanings. Specifically, the meanings of both "earth" and "planet" change dramatically. And not just these two words, but an entire tapestry of meaning begins to shift its pattern and texture. We are not dealing here with the sudden recognition of a new "fact," but rather with the slowly evolving background against which all possible facts take their shapes. (In considering the sentence "Love your enemy" we saw a similar transformation of meaning.)
But, of course, this comparison could not immediately recast the entire network of meanings bound up with "earth" and "planet." /10/ The statement remained metaphorical -- a revealing lie -- at first. It would take an extended period for its metaphorical thrust to be generalized and become a matter of routine literalness -- that very period, in fact, marking the transition from medieval consciousness to our modern, scientific mentality.
The differences between the medieval and the modern mind are striking, to say the least. And the pathway from the one to the other is paved with lies! We gain our new meanings by using words to state falsehoods -- but falsehoods that are suggestive, and through which we are pointed to new possibilities of meaning. If Newton had not been allowed to "misuse" gravitas, could modern physics have arisen? For in his day the word meant something like the human experience of heaviness -- not some abstract principle of universal attraction -- and it was still tinged with a sense of "desire." There is a very great difference between the idea of (to borrow Herbert Butterfield's words) "a stone aspiring to reach its natural place at the center of the universe -- and rushing more fervently as it came nearer home -- and the idea of a stone accelerating its descent under the constant force of gravity." /11/
Newton's use of gravitas to describe the force of gravitation was metaphorical -- untrue on its face; it made no more sense, given the received meaning of gravitas, than we would make today if we explained the moon's revolution as resulting from its desire for the earth. And yet, as with many metaphors, it did make sense when one looked through the false statements and, with their essential aid, began to grasp the intended (new) meanings. Assisted by falsehoods, one apprehended (perhaps dimly at first) something not literally stated, thereby allowing the meanings of one's terms to shift and realign themselves with this metaphorical intent. These new meanings, once they are more fully laid hold of and analyzed, enable the statement of truths that again tend toward the literal and accurate (and therefore toward the tautological, the uninteresting), since they no longer require so great a "misuse" of language.
What we discover when we turn to the polar dynamic -- the interaction between accuracy and expressiveness during the actual use of language -- is this continual expansion and contraction of meaning. When I use a new and revealing metaphor, for example, I force static truths into motion, changing, by this "shift of truth," the meaning of one or more of my terms. This meaning, however, is now less explicitly displayed, less accessible -- and will remain so until it is penetrated and articulated with the aid of accurate analysis. When, on the other hand, I analyze and clarify meaning, I narrow it down, distinguish its facets, render it progressively literal and immobile until (if I push the analysis far enough) it is lacking nearly all content -- a fit term for logical manipulation.
I do not think we can say that meaning, in itself, is either true or untrue. All we can safely say is, that that quality which makes some people say: "That is self-evident" or "that is obviously true," and which makes others say: "That is a tautology," is precisely the quality which meaning hasn't got. /12/
Meaning, then, is born of a kind of fiction, yet it is the content, or raw material, of truth. And it is important to realize that other-saying -- for example, symbol, metaphor, and allegory -- is not a mere curiosity in the history of language. As Barfield stresses on so many occasions, virtually our entire language appears to have originated with other-saying.
Anyone who cares to nose about for half an hour in an etymological dictionary will at once be overwhelmed with [examples]. I don't mean out-of-the-way poetic words, I mean quite ordinary words like love, behaviour, multiply, shrewdly and so on .... To instance two extreme cases, the words right and wrong appear to go back to two words meaning respectively "stretched" and so "straight," and "wringing" or "sour." And the same thing applies to all our words for mental operations, conceiving, apprehending, understanding.... /13/
Nor do we gain much by appealing to the physical sciences for exceptions. As Barfield elsewhere points out, /14/ even "high-sounding 'scientific' terms like cause, reference, organism, stimulus, etc., are not miraculously exempt" from the rule that nearly all linguistic symbols have a figurative origin. For example, "stimulus" derives from a Latin word designating an object used as a spur or a goad. Similarly for such words as "absolute," "concept," "potential," "matter," "form," "objective," "general," "individual," "abstract."
The first thing we observe, when we look at language historically, is that nearly all words appear to consist of fossilized metaphors, or fossilized "other-saying" of some sort. This is a fact. It is not a brilliant aperçu of my own, nor is it an interesting theory which is disputed or even discussed among etymologists. It is the sort of thing they have for breakfast. /15/
In sum: when we look at language, we find it continually changing; our discovery of facts and truths occurs only in creative tension with an evolution of meanings that continually transforms the facts and truths. Outside this tensive relation we have no facts, and we have no truths; there is only the quest for a kind of disembodied validity in which (to recall Russell's words) "we never know what we are talking about, nor whether what we are saying is true" -- or else the dumbstruck quest for ineffable visions.
The emergence of meaning is always associated with what, from a fixed and strictly logical standpoint, appears as untruth. And it is just this meaning with which, as knowers, we embrace the world.
The imagination is at work, albeit in less than full consciousness, when we dream. Dreams are full of other-saying -- symbols, images that signify this, but also that. "The dark figure was my friend John, yet it was not really him." Then I wake up and analyze the dream: the man, it seems, was a combination of my friend John and someone I met at the store today who intrigued me -- and perhaps also he had something to do with a certain threatening figure from past dreams .... So whereas the dream itself presented a single, multivalent image, I now have several definite, unambiguous figures, standing side-by-side in my intellect. Analysis splits up meaning, breaks apart unities: "this means that." Logic tells us that a thing cannot be both A and not-A in the same respect and at the same time; it wants to separate not-A from A. It wants to render its terms clear and precise, each with a single, narrow meaning.
The arrangement and rearrangement of such univocal terms in a series of propositions is the function of logic, whose object is elucidation and the elimination of error. The poetic /17/ has nothing to do with this. It can only manifest itself as fresh meaning; it operates essentially within the individual term, which it creates and recreates by the magic of new combinations .... For in the pure heat of poetic expression juxtaposition is far more important than either logic or grammar. Thus, the poet's relation to terms is that of maker. /18/
And again:
Logical judgements, by their nature, can only render more explicit some one part of a truth already implicit in their terms. But the poet makes the terms themselves. He does not make judgements, therefore; he only makes them possible -- and only he makes them possible. /19/
The imagination creates new meaning by other-saying, but cannot elucidate that meaning, cannot draw out its implications and delineate its contours. Rational analysis brings us precision and clarity by breaking the meaning into separate pieces, but progressively loses thereby the content, the revelatory potency, of the original image.
It is not that a metaphor or symbol holds together a number of logically conflicting meanings. The man in the dream was not a logical contradiction. He was who he was. One can experience and employ the most pregnant symbolic images quite harmoniously. The contradictions are artifacts of the analytical stance itself. They appear when we are no longer content with the imaginative unity we once experienced, but want to cleave it with the intellect, resolving it into elements we can relate to already existing knowledge. It is only when the unity is shattered by such analysis that the contradictions between the parts appear. And analysis, once under way, wants to proceed until there are no contradictions left -- which finally occurs when all meaning, all unities, have disappeared. So long as we have meaning, we have a challenge for logical analysis, which is to say that every imaginative unity stands ready to be broken apart by analysis, immediately revealing contradictions between the now too stiffly related fragments of the analysis.
In actual fact, we are likely to see innumerable partial movements in both directions, and understanding is another name for the resulting polar dynamic. The unanalyzed image may be a perfect unity, but it is not "on display" -- it is not available to our discursive mental operations. By contrast, analysis hands over elements of the image to the discursive intellect -- sacrificing some of the given imaginal significance in the process.
But today we too readily ignore that you can neither start with the empty forms of logic in considering any issue, nor finish off an issue with logic. An ironic misconception underlies the frequently heard claim, "it is logically certain." If the matter is indeed logically certain, then the speaker is no longer talking about anything. For if what he says has any meaning at all, that meaning is carried by other-saying -- by imaginative unities not yet fully reduced by logical analysis. The attempt to honor the pole of accuracy over that of meaning does no more than guarantee us the shallowest meanings possible.
It is said that any conclusion of an argument running counter to a theorem of the logical calculus is wrong. Surely this is correct; but it is not particularly helpful. The problem is knowing when a conclusion really does violate the calculus. What about "the earth is a planet," spoken by a contemporary of Copernicus? If we consider only the then-received meanings of "earth" and "planet," there is indeed a violation of the logical calculus. But the sentence also suggests newly emergent meanings, not yet clearly understood. Which is the real meaning?
If the logicians turn their attention to such a problem, they will likely resolve it just when the new meanings have become so stable, conventional, and thin that there is no longer a pressing issue of logicality or truthfulness. By the time you have reduced your subject to terms where the logical calculus can be applied mechanically and with full confidence -- externally, as it were, and not intrinsically in the struggle to be faithful to your meanings -- by then the subject itself is likely to have become so clear on its face as to require no such application.
Further, that movement must be our own; its sole impetus can never be received from without. While meaning is suggestible, it "can never be conveyed from one person to another .... Every individual must intuit meaning for himself, and the function of the poetic is to mediate such intuition by suitable suggestion." /20/ What I can convey to you with absolute fidelity -- although it is fidelity to no-content, nothing -- is only the empty proposition of logic or equation of mathematics.
The manipulation of the products of analysis is in some respects a mechanical task. The genesis of new meaning is altogether a different matter, and its challenge is not often set before us today. In listening to others, do I remain alert for those "strange connections" suggesting meanings I have not yet grasped? Or am I readier to analyze and tear down, based upon my armament of secure propositions already in hand?
The effort to comprehend what we have heretofore been incapable of seeing -- rather than simply to extract the implications of our existing knowledge -- always requires the modification of one or more of our terms: the creation of new meaning. Barfield quotes Francis Bacon:
For that knowledge which is new, and foreign from opinions received, is to be delivered in another form than that that is agreeable and familiar; and therefore Aristotle, when he thinks to tax Democritus, doth in truth commend him, where he saith, If we shall indeed dispute, and not follow after similitudes, etc. For those whose conceits are seated in popular opinions, need only but to prove or dispute; but those whose conceits are beyond popular opinions, have a double labour: the one to make themselves conceived, and the other to prove and demonstrate. So that it is of necessity with them to have recourse to similitudes and translations to express themselves. /21/
In an age of abstract and logic-dominated learning, it is easy to forget that all true advance of understanding requires us imaginatively to conceive what is not currently conceivable -- by means of other-saying. Einstein's famous equations were not the cause of his insights, but the result: he had first to become a poet, playing metaphorically with the received, much too logically worn down and well-defined notions of time and space, mass and energy. Lesser scientists failed to gain the same insights because they already knew too precisely. Their terms were rigorous and accurate. As Bacon put it, they could only "prove or dispute" in terms of their existing, systematized knowledge.
What I can do, however, is to offer some final, unsystematic observations to stimulate further thought. These will tend toward the aphoristic, and will partly serve to acknowledge just a few of the issues prominent in Barfield's work. For an extended treatment of these issues, however, I can only refer you to that work itself.
Meaning is whatever Barfield's History in English Words is about. I suspect that for many people this semantic history will oddly present itself as being about nothing much at all. But when the oddity serves as a healthy question mark and a stimulus for further exploration, one eventually enters a rich world of meanings against which the imagination can be exercised. Likewise, all sensitive exploration of foreign cultures leads to the appreciation of strange meanings and, through this appreciation, to a refined sense for meaning itself.
Meaning is whatever the dictionary is not about. I am only being slightly facetious. "The meaning of a word is abstract, just in so far as it is definable. The definition of a word, which we find in a Dictionary -- inasmuch as it is not conveyed by synonym and metaphor, or illustrated by quotation -- is its most abstract meaning." /22/
If we look at the polarity of language, it is immediately evident that the attempt to define -- the quest for a "dictionary definition" -- is driven almost wholly from the pole of accuracy, and therefore tends to eliminate meaning. Meaning, you will recall, "is not a hard-and-fast system of reference"; it is not definable, but only suggestible, and requires the poetic for its suggestion. The strict dictionary definition, by contrast, attempts to tie down, to eliminate any ambiguity previously imported by the poetic. And just so far as such a definition tries to be "scientific," it tends to suffer a steady reduction until finally it knows only particles in motion. Qualities disappear. The resulting, purely abstract term "is a mark representing, not a thing or being, but the fact that identical sensations have been experienced on two or more occasions." These little billiard balls over here are the same as those over there. Abstract thinking is, in the extreme, counting: we count instances, but do not try to say instances of what.
Here is part of a dictionary definition for "water":
the liquid that ... when pure consists of an oxide of hydrogen in the proportion of 2 atoms of hydrogen to one atom of oxygen and is an odorless, tasteless, very slightly compressible liquid which appears bluish in thick layers, freezes at 0 degrees C, has a maximum density at 4 degrees C and a high specific heat....
Now this serves very well to provide a certain kind of reference, a pointer into a complex mesh of scientific abstractions, in which "water" holds a definite place. What it does not do well at all is give us the concrete meaning of the word in its actual usage -- that is, when the word is used outside the scientific textbook or laboratory, or when it was used any time before the last few centuries. It gives me little if any assistance in determining whether a particular use of the word "water" in a poem, personal memoir, news story, or qualitative scientific study makes any sense. It conveys nothing of that water we enjoy while swimming, washing, drinking, fishing, walking in the rain, or watching storm-driven waves. It misses the wetness, gleam, undulation, deep stillness, engulfing horror, wild power, musicality, and grace. It does not tell me anything about my actual experience of water in the world.
All this, of course, will be admitted. But what will not so readily be admitted is that these experiences contain a good deal of what "water" really means, which is also to say: what it actually is. Our difficulty with this thought, one might almost say, is the defining characteristic -- the crippling failure -- of our day. It is related to our insistence upon a world of objects bearing absolutely no inner relation to the human being who observes them.
As the maker of meaning, imagination has received considerable attention during this past century -- although scarcely from the scientific side. Barfield mentions three features of imagination concerning which there has been a "considerable measure of agreement": /23/
As its name suggests, the imagination deals in images. Barfield has this to say about images in general:
It is characteristic of images that they interpenetrate one another. Indeed, more than half the art of poetry consists in helping them to do so. That is just what the terms of logic, and the notions we employ in logical or would-be logical thinking, must not do. There, interpenetration becomes the slovenly confusion of one determinate meaning with another determinate meaning, and there, its proper name is not interpenetration, but equivocation.... /24/
We may think that our good, scientific terms are somehow safely, solidly material in meaning. And yet, as Barfield points out, "It is just those meanings which attempt to be most exclusively material ... which are also the most generalized and abstract -- i.e. remote from reality." /25/ To see this more clearly, we can contrast abstract with concrete meanings. "Concrete" does not mean "material." Rather, the concrete combines the perceptual and the conceptual -- which together make the thing what it is. To illustrate a fully concrete definition, Barfield asks his reader to imagine a single word conveying what we would have to translate as "I cut this flesh with joy in order to sacrifice." Such a word would not be highly abstract, and what saves it from being so is not only its particularity but also the fact that its reference to outer activity is suffused with inner significances.
But what about our abstract verb, "to cut"? It tries to be wholly material by removing all the particular significances just referred to; but how material is something that has become so abstract you cannot even picture it? The pure act of cutting -- as opposed to particular, concrete acts bearing within themselves the interiority of the actor -- is no more material than a "tree" that is not some particular tree. Words that we try to make exclusively material finally go the same way as "things" in the hands of the particle physicist: they vanish into abstraction.
If we can't give concrete definitions, neither can we define "concrete." The concrete brings us to meaning itself, and to "the qualitative reality which definition automatically excludes."
Barfield again:
If I were to bring the reader into my presence and point to an actual lump of gold, without even opening my mouth and uttering the word gold -- then, this much at least could be said, that he would have had from me nothing that was not concrete. But that does not take us very far. For it does not follow that he would possess anything but the most paltry and inchoate knowledge of the whole reality -- "gold." The depth of such knowledge would depend entirely on how many he might by his own activity have intuited of the innumerable concepts, which are as much a part of the reality as the percepts or sense-data, and some of which he must already have made his own before he could even observe what I am pointing to as an "object" at all .... Other concepts -- already partially abstracted when I name them -- such as the gleaming, the hardness to the touch, the resemblance to the light of the sun, its part in human history, as well as those contained in the dictionary definition -- all these may well comprise a little, but still only a very little, more of the whole meaning. /26/
And again:
The full meanings of words are flashing, iridescent shapes like flames -- ever-flickering vestiges of the slowly evolving consciousness beneath them. /27/
Barfield mentions how, in metaphor, poets have repeatedly related death, sleep, and winter, as well as birth, waking, and summer. These in turn are often treated as symbols of the inner, spiritual experiences of dissolution or rebirth. He then offers these observations:
Now by our definition of a "true metaphor," there should be some older, undivided "meaning" from which all these logically disconnected, but poetically connected ideas have sprung. And in the beautiful myth of Demeter and Persephone we find precisely such a meaning. In the myth of Demeter the ideas of waking and sleeping, of summer and winter, of life and death, of mortality and immortality are all lost in one pervasive meaning. This is why so many theories are brought forward to account for the myths. The naturalist is right when he connects the myth with the phenomena of nature, but wrong if he deduces it solely from these. The psychoanalyst is right when he connects the myth with "inner" (as we now call them) experiences, but wrong if he deduces it solely from these. Mythology is the ghost of concrete meaning. Connections between discrete phenomena, connections which are now apprehended as metaphor, were once perceived as immediate realities. As such the poet strives, by his own efforts, to see them, and to make others see them, again. /28/
If we accept and enter into the living terms of the polarity, I believe we will reach two conclusions: (1) there is no limit upon the intelligence we can embed within computers, since there is no limit upon how far the rational principle can proceed in its analysis of any given meaning; and (2) since this intelligence is always a "dead" or "emptied" intelligence -- frozen out of the polar dynamic of meaning and truth, and so rendered mechanical -- it is essentially limited.
These contentions are not contradictory. When I say there is no limit upon computer intelligence, I refer to the programmer's ability to derive an ever more sophisticated syntax through her analysis of meanings. Her next program can always appear more faithful to life than the last. Just so far as we can take hold of a cognitive activity or content and describe it, we will find that it submits to analysis, yielding an internal, rational structure that can be pursued indefinitely toward an ideal of perfect precision.
When, on the other hand, I say the computer is limited, I refer to (1) its eternal inability to transcend meaningfully the fundamental syntactic limits of its own program; and (2) its inability to possess its meanings in the sense that humans do. In other words, you can't take the nonpolar end products of (the programmer's) analysis, map them to the computational structures of a computer, and expect them to climb back into the polar dynamic from which they were extracted -- any more than you can reduce a conversation to a bare logical structure, and then expect anyone to derive from that structure the concrete substance of the original conversation.
These contentions will be disputed by many of those who are busy constructing artificial intelligences. I will have more to say about their concerns later. But for now I want to emphasize the unbounded potential of the computer, which lies in its capacity to receive the imprint of intelligence. And if the computer itself cannot ascend from the "footstep" to the striding foot, it can nevertheless execute the pattern of footsteps corresponding to a once-striding foot -- provided only that a programmer has sufficiently analyzed the striding and imparted its pattern to the computer.
In other words, even if the computer is cut off from the polar dynamic, the programmer is not, and so the computer's evolution toward unbounded intelligence can proceed on the strength of the programmer's continual effort to analyze meanings into rational end products. Every claim that "the computer cannot do so-and-so" is met by the effort -- more or less successful -- to analyze so-and-so into a set of pure, formal structures.
It is important to understand just how far this can proceed. Through proper analysis we can, if we choose, reduce every dimension of human experience to a kind of frozen logic. This is true, as we will see, even for learning and the grasp of metaphor. That is why the rediscovery of meaning through our own powers of thinking and imagining is so desperately crucial today: we may find, before long, that we have imprisoned all meaning within an impotent reflection of the real thing, from which there is no escape.
We can recognize these issues at work when philosopher John Haugeland, in a standard introduction to artificial intelligence, finally resorts to an imagined "existence proof" to help solve what he calls the "mystery of original meaning." Suppose, he says, that a future comes when intelligent computers
are ensconced in mobile and versatile bodies; and they are capable (to all appearances anyway) of the full range of "human" communication, problem solving, artistry, heroism, and what have you. Just to make it vivid, imagine further that the human race has long since died out and that the Earth is populated instead by billions of these computer-robots. They build cities, conduct scientific research, fight legal battles, write volumes, and, yes, a few odd ones live in ivory towers and wonder how their "minds" differ from books -- or so it seems. One could, I suppose, cling harshly to the view that, in principle, these systems are no different from calculators; that, in the absence of people, their tokens, their treatises and songs, mean exactly nothing. But that just seems perverse. If [artificially intelligent] systems can be developed to such an extent, then, by all means, they can have original meaning. /29/
But this is not quite right. We can, without apparent limit, "instruct" robots in all these skills, but this, as I have tried to show, does not even tend to imply that the robots possess meaning in the same sense that humans do. It only implies that we can analyze our meanings and impart their structure to a machine. Nor is it "perverse" to point this out. The real question is whether the futuristic robots would be bound by their syntax -- excluded from the polar dynamic -- in a way that humans are not. That is, would they be stuck where they were, excluded from all progress because unable to take hold of those meanings the emptied traces of which constituted their own logic?
What really seems to lie behind Haugeland's argument, however, is the picture of a future in which we ourselves could not know any significant difference between our machines and ourselves. In that case, it would indeed be foolish to claim privileged status for human thinking. But then, too, there would be a perfectly reasonable conclusion that Haugeland ignores -- not that the robots had somehow gained what he calls "original meaning," but that we had lost it.
Has the field of artificial intelligence since moved beyond such claims? In some ways, yes -- but not in the ways that count. The early work in artificial intelligence was fixated upon logic. Somehow the pioneers in the field had convinced themselves that formal logic was the mind's distilled essence. So as soon as they realized that computers could be programmed to exhibit complex logical structures, euphoria set in. Did this not mean that machines could replicate human minds? Andrew Hodges describes how these early researchers
regarded physics and chemistry, including all the arguments about quantum mechanics ... as essentially irrelevant .... The claim was that whatever a brain did, it did by virtue of its structure as a logical system, and not because it was inside a person's head, or because it was a spongy tissue made up of a particular kind of biological cell formation. And if this were so, then its logical structure could just as well be represented in some other medium, embodied by some other physical machinery. /30/
Given this outlook, the task was to reduce all knowledge to a formal, logical structure that could then be impressed upon the computer's circuits. There was no lack of bracing optimism: John McCarthy, head of Stanford University's Artificial Intelligence Laboratory, was sure that "the only reason we have not yet succeeded in formalizing every aspect of the real world is that we have been lacking a sufficiently powerful logical calculus. I am currently working on that problem." /31/
More recent years have seen a considerable backlash against the dominance of logic in artificial intelligence. This backlash is associated with, among other things, the analysis of common sense and background knowledge, the flourishing of connectionism, and the investigation of human reasoning itself.
The faith of the initial generation [of cognitive scientists] in a study of logical problems and its determined search for rational thought processes may have been misguided. Empirical work on reasoning over the past thirty years has severely challenged the notion that human beings -- even sophisticated ones -- proceed in a rational manner, let alone that they invoke some logical calculus in their reasoning. /32/
But this statement easily misleads -- in two ways. First, it is not so much that human beings have been convicted of irrationality as that cognitive scientists were betrayed by assumptions that flew extraordinarily wide of the mark. Their faith convinced them that cognitive behavior would be found on its surface to be nothing but the perfectly well-behaved end products of logical analysis -- as if human beings started from a position of ideally structured (and therefore meaningless) emptiness. As if, that is, the meanings with which we operate were already so thoroughly worn down as to yield a neat calculus of thinking or behavior after a single level of analysis. We may be moving toward such emptiness, but, thankfully, we are not there yet.
This mistaken expectation was so egregious as to beg for some sort of explanation. At the very least, we can say that the misfiring was clearly related to the disregard of meaning so characteristic of the cognitive sciences. Researchers who could posit a mentality built up of nothing but logical forms must have trained themselves over a lifetime to ignore as mere fluff the meanings, the qualities, the presence of their own minds. This bizarre and simplistic rendering of their own thinking processes fits well with what I suggested earlier: it may be we who are approaching the status of robots rather than robots who are approaching human status.
Moreover, that the errors of the early researchers have not been remedied by the subsequent reaction is evident when we consider the second way the statement above can mislead us.
Despite all the confessions (usually made on behalf of others!) about the one-sided approach to computer models of mind, the current work is most definitely not aimed at redressing the imbalance. The researchers have merely been forced to give up all hope of deriving (and programming) the necessary logical formalism based on a first-level analysis of human behavior. We too obviously do not present ourselves on the surface as logic machines.
In other words, the programmer cannot simply look offhand for those finished, empty structures she would imprint upon the computer's receptive circuits. She must, it is now recognized, carry out extensive "semantic analysis" -- the analysis of meaning. Which is to say that she can obtain the desired structures only through laborious toil within the constraints of the polar dynamic of meaning. Only by first entering into meaning can she succeed in breaking it down, and even then she must resort to analysis after analysis -- almost, it appears, without end. And yet, the work always yields some results, and there is no definable limit upon how far it can proceed.
But the point is that, while the programmer is driven to pursue the polar dynamic, her entire purpose is to derive for the computer those same empty structures that her predecessors would have liked to pluck straight from the surface convolutions of their brains. That is, as a programmer she is forced to work with the polarity, but she does so in order to destroy it. For that is the only way she can satisfy the computer's hunger for absolute precision about nothing at all. Every approximation, every heuristic, every "synthesis" must be precisely and logically constructed from the eviscerated end products of analysis.
Nor is any of this surprising, for the computer itself is above all else a logic machine. Of the many researchers who believe computers will some day think and otherwise manifest humanlike intelligence, few if any now imagine that the thinking robot of the future will, in its ruling intelligence, leap the bounds of a formal system.
So, for all the recognition of the "limits of logic and rationality," the one-sided pursuit of the purified end products of analysis remains the untarnished grail quest of those who would sculpt a lump of silicon into the shape of a human mind. In this we do not witness the discovery of polarity, but something more like flight from it.
A great deal hinges on the distinction Barfield drew back in the 1920s, between true and accidental metaphor. The latter is based on an analysis of complex ideas, whose parts then can be recombined according to one or another logical scheme. This is quite different from the activity of the primary imagination, which is responsible for those more fundamental unities from which complex ideas are constructed.
You will remember that the poetic principle creates the individual terms whose "external," logical relationships can then be manipulated by the rational principle. The difference between true and accidental metaphor is the difference between the creation or modification of terms, and the mere rearrangement of existing terms.
It is not that accidental metaphors have no value. They can be useful, Barfield notes, "in the exposition of an argument, and in the calling up of clear visual images, as when I ask you to think of the earth as a great orange with a knitting needle stuck through it -- or call the sky an inverted bowl -- two images in which there can at least be no more than a minimum of poetic truth." He adds that such metaphors usually carry "a suggestion of having been constructed upon a sort of framework of logic."
Now, while it is no doubt true that all metaphors can be reduced to the mathematical ratio a:b::c:d, they ought not to give the sense of having been constructed on it; and where that is so, we may probably assume that the real relation between the two images is but exiguous and remote. /33/
When I call the earth an orange with a knitting needle stuck through it, the ratio (a knitting needle is to an orange as its axis of rotation is to the earth) is not likely to be the vehicle of imaginative insight. Axis and knitting needle, earth and orange, hardly constitute revelatory unities, in and of themselves. But we can arrange a needle and orange in such a way as to represent, abstractly, the relation of axis to earth, and this may be a valuable teaching aid. The metaphor, however, will effect little modification of its constituent terms; we will not come to understand either knitting needles or planetary axes differently as a result of it. Whereas the sixteenth-century European could understand "the earth is a planet" only by reconceiving both earth and planet, the planetary facts we convey to a student with orange and needle remain compatible with each of the terms we began with.
Furthermore, to the extent such a metaphor does lead to new meaning, it is not the sheer logical structure of the ratio that achieves the result. You can play all you want with the relations between terms of a mathematical equation, logical proposition, or any other formal system, but you will not arrive at new meaning unless you call upon something not given formally in the system itself. /34/
The important distinction between true and accidental metaphor can also be seen as a distinction between two kinds of synthesis. The one operates rationally as the "putting together of ideas." But it rests upon a second, more basic synthesis. For the putting together
can only come after, and by means of, a certain discrimination of actual phenomena -- a seeing of them as separate sensible objects -- without which the ideas themselves (general notions) could never have existed. The poetic principle, on the contrary, was already operative before such discrimination took place, and when it continues to operate afterwards in inspiration, it operates in spite of that discrimination and seeks to undo its work. The poetic conducts an immediate conceptual synthesis of percepts. /35/
That is, the imagination (operative in what Barfield calls the "poetic principle") links percept to percept in such a way as to give us those basic discriminations -- changing with time -- that determine what sorts of things our world consists of. The secondary kind of synthesis takes these given things and combines them in various ways -- largely, today, upon a latticework of logic.
You will recall the discussion of linear perspective in chapter 22, where it was pointed out that, in Barfield's words, "before the scientific revolution the world was more like a garment men wore about them than a stage on which they moved." The birth of our own, peculiarly perspectival, three-dimensional experience of space was felt to be a seeing with radically new eyes. And -- qualitatively, meaningfully, in terms of the kinds of things men were given from the world to reason about -- it was indeed a seeing with new eyes.
What carried our culture across that divide was an activity of imagination -- even if it was still largely an unconscious activity. Similarly, the only way to look backward and straddle the divide in thought today is with the aid of metaphor, as when one speaks of wearing the world like a garment. Even so, no formal analysis of such metaphorical sentences can carry us across what must remain an impassable barrier until we suddenly see through everything given formally in the metaphor, grasping it instead as a revealing falsehood. (One prerequisite for this seeing may be many years spent studying medieval culture!) Meaning, as I noted earlier, can be suggested but not conveyed. It cannot be conveyed because there is no automatic or mechanical process, no formalism, that can hold it.
It is interesting to consider a hypothetical, medievally programmed robot living through the Renaissance and beyond. The claim in the foregoing is that this robot's programming could never have prepared it to cope with the transition from a garment-world to a stage-world. As the surrounding culture began to assimilate and logically elaborate the new meanings of a stage-world, the robot born in a garment-world would find the new terms of discussion oddly skewed in a way it could never "straighten out." /36/
This argument, of course, will carry conviction only for the reader who can successfully imagine the differences between these two sorts of world. Such an imagination must reach beyond everything given abstractly, everything formally capturable. After all, the laws governing the propagation of light (and the formation of images on the robot's visual input device) presumably did not change during the Renaissance. The differences were qualitative: they involved, as I point out in chapter 22, such transitions as the one between finding oneself "in the story" of a painting or landscape, and gazing upon the landscape as an observer who has been cast out from it. Or, likewise, the transition some non-Westerners must still make today if they are to overcome the strange and unrealistic quality of photographs.
Machines certainly can learn, in the extraordinarily restricted sense that their current states can be logically elaborated and the implications of those states drawn out. But this is not at all the same as logically elaborating a set of structures derived from genuinely new meanings -- and even less is it the same as apprehending such meanings in the first place. The point with learning, as with metaphor, is that the computer, as a purely syntactic machine, cannot realize any future not already implied in its "past" -- that is, in its programming -- however sophisticated it may be at producing ever more ingenious logical variations on that past. It can, as we saw Bacon put the matter, "prove or dispute" endlessly regarding its received terms, and can be programmed to recombine those terms in every possible permutation. But it will never undertake the difficult task of reconceiving things through an imaginative use of metaphor that makes a "lie" of its previous meanings.
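To make the contrast concrete, here is a minimal sketch in Python. The vocabulary and function are my own illustrative inventions, not anything drawn from the researchers discussed here; the point is only that a program of this kind can enumerate every ordered recombination its fixed stock of terms permits, while nothing it produces ever enlarges or revises that stock.

    from itertools import permutations

    # A hypothetical fixed stock of terms: the machine's "past."
    VOCABULARY = ["earth", "planet", "garment", "stage"]

    def all_statements(terms, length=2):
        """Enumerate every ordered recombination of the given terms."""
        return [" ".join(p) for p in permutations(terms, length)]

    statements = all_statements(VOCABULARY)
    print(len(statements), "recombinations, for example:", statements[:3])
    # However long this runs, VOCABULARY itself never changes: no new simple
    # term -- no reconceived "earth" or "planet" -- can appear in the output.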
The appeal to emergent "levels of description" is a large topic and, I am convinced, the source of many confusions. The standard line tends to run this way:
It is true that at one level the computer deals solely in, say, ones and zeros. But at other levels we see different behaviors "emerging," and we can best describe some of these behaviors in nonmathematical language. For example, we can describe a car as a collection of atoms, subatomic particles, fields, and so on. /37/ (Our description will be largely mathematical.) We can also resort to camshaft, valves, pistons, gears, and the like. Or, again, we can talk about how nicely the car drives us to the supermarket. The language of one level doesn't get us very far when we're talking on a different level.
So, for example, there are those who happily speak of the computer's mathematical determination at some level, while at the same time hailing its artistic prowess, its intelligence, and even its potential for freedom. "After all," they will say, "human beings are fully determined at the molecular level, but it still makes sense to assert an experience of freedom at the level of our daily activity."
The theorist's redescription of his subject matter in moving from one level to another provides a tempting opportunity to reintroduce on the sly and without justification what has previously been purged from the theory. This occurs in the context of many philosophical discussions, including those dealing with the "emergence" of human freedom, purpose, and intentionality. (Intentionality is sometimes described as the "aboutness" of cognitive activity. Human speech, for example, is normally about something in a way that, say, the gravitational interaction between planets is not.)
The place where this illicit smuggling of meaning is perhaps most obvious is in the very first level of redescription, where the leap is from theoretical descriptions approaching pure formalism to descriptions that involve "something else." Surely such redescription is impossible where we have no description to begin with -- no meanings, nothing to redescribe -- that is, where we are dealing with a completed formalism. If, for example, physics has reached a point where we cannot associate meaningful terms with our equations, what is it we are redescribing when we try to relate the theory to, say, chocolate cake? The fact that we can seem to perform this redescription is clearly related to those theoretically illicit qualities we let slip back into our first-level descriptions without acknowledging them.
The question, in other words, is how one gets "things" at all, starting from the ideal of a formal description. The difficulty in this helps to explain why researchers in other disciplines are content to leave the metaphysical quandaries of physics to the physicist. "Obviously enough," the equations must be about something, so one can now redescribe that something by drawing upon all its supposed phenomenal qualities, however theoretically illegitimate those qualities may be. It has often been noted how easily subatomic "particles" become, in our imaginations, comfortingly solid little billiard balls.
In moving between higher levels of description, tracking the sleight of hand can be extremely challenging, because the theorist is allowing himself to play with meanings to which -- precisely because he has no theoretical basis for dealing with them -- he is inattentive. He easily manages to slip from one "reality" to another, without being fully aware of how certain subtle shifts of meaning in his words perform an essential part of the work.
Probably the most widespread, entrenched, and respected gambit of this sort is the one executed with the aid of information. From genetic encoding to computer intelligence, the idea of information plays a key theoretical role. Defined as the measure of a message's "statistical unexpectedness," information conduces to wonderfully quantitative explication, precisely because it simply takes for granted both the message itself, as meaningful content, and the speaker of the message. And, in strict information theory, the message and speaker are indeed irrelevant; they're not what the theory is about. At the higher levels, however, many theorists are all too ready to assume, not that meaning has been ignored at the lower level (which is true), but that it has been satisfactorily reduced and explained (which is false). These are very different matters.
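For readers who want to see the quantitative sense of "unexpectedness," here is a minimal sketch using the standard Shannon entropy formula; the example sentence is my own, and nothing in the cited theory is being extended. The measure is computed from symbol frequencies alone, so a sentence and a meaningless shuffle of its letters receive exactly the same score.

    from collections import Counter
    from math import log2
    import random

    def entropy_per_symbol(message: str) -> float:
        """Average information in bits per symbol, from symbol frequencies alone."""
        counts = Counter(message)
        total = len(message)
        return -sum((n / total) * log2(n / total) for n in counts.values())

    sentence = "the earth is a planet"
    shuffled = "".join(random.sample(sentence, len(sentence)))  # same letters, no sense

    print(entropy_per_symbol(sentence))   # the two values are identical:
    print(entropy_per_symbol(shuffled))   # the measure cannot tell meaning from nonsense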
What makes all this plausible is our ability to program deterministic, mathematically describable machines that do succeed in "processing information" -- where "information" is now understood as having meaningful content. If one is willing to ignore the significance of the premier act of speaking by the programmer, and if one manages to lose sight of the necessary polar dynamic of meaning from which that act proceeds, then one can sustain the illusion that information and meaning really do arise of their own accord from an edifice of abstractions.
That this causally grounded interaction between logical mechanism and words -- which is all the computer's "information processing" finally amounts to -- is something quite different from the polar dynamic of accuracy and meaning will, I think, be appreciated by anyone who has truly entered into an understanding of that dynamic. But it is a topic I have not addressed here.
But if this awakening is our highest task, it is also the one most difficult to undertake in ourselves or to recognize in others. It is not easy to tell what comes merely from the expression of habit, the play of deeply ingrained associations, the mechanical response to controlling cues in the environment (in other words, from a past determining the future) -- and what, by contrast, is a true taking hold of ourselves in freedom, allowing a new future to ray into the present.
It may be far more challenging to recognize the sovereign, free self than many of us imagine. Take away that self, and we would continue to cruise through life in most of the expected ways. Which is to say that the self is not very prominent. It has yet to waken fully to its own powers of freedom. Such, I believe, is the state in which mankind now finds itself. And yet, it is only in profound wakefulness that we can begin to understand what distinguishes us from machines.
Meanwhile, we are creating intelligent devices possessed of ever increasing cleverness. We can carry this process as far as we wish. It is a process without limits, and yet with radical limits. On the one hand, there is no meaning we cannot implant within the computer, so long as we are willing to identify the meaning with a set of precisely elaborated logical structures. On the other hand, however complex and intricate the elaboration -- however many layers we construct -- the computer as a computational device remains outside the living polarity of truth and meaning. Within the breathing space between these two facts there is doubtless much we can achieve with computers if, recognizing their peculiar nature, we make them the servants of our meanings.
But I hope this chapter will have made the risks a little more visible. Hypnotic fascination with the abstract forms of intelligence, and a hasty rush to embody these forms in electromechanical devices, can easily lead to renunciation (or simple forgetfulness) of the inner journey toward the living sources of our thinking. Yet it is human nature to leave every past accomplishment, every worn meaning, behind. A past that rules the present with a silicon fist is a past that congeals, crystallizes, fractures, prematurely reducing to straw the tender shoots of a future not yet realized. To live in freedom is to grow continually beyond ourselves. The future robots of our fancy could only rule a desolate landscape, for they would be intelligences without live meaning, creatures of vacant form, lacking all substance, condemned to echo futilely and forever the possibilities inherent in the last thoughts of their creators -- a prospect no less gray when those last thoughts happen to express the most sublime truths of the day. These machines would be the ghosts of men, not even desperate in their ingenious hollowness.
1. Barfield, 1967: 35-39. I have also discussed the idea of polarity in chapter 22, "Seeing in Perspective."
2. Regarding the "disappearance" of phenomena into theoretical entities, see Edelglass et al., 1992.
3. Actually, it is virtually impossible to take mathematics in the strictest sense, because we are by nature creatures of meaning. We cannot wholly purge mathematics of those meaningful associations of form and substance from which it has been abstracted (and to which it ever seeks a return). Moreover, nothing here is meant to deny the very deep and worthwhile satisfactions to be had from pursuit of the purest mathematical disciplines.
4. In modern theory, mathematics and logic are not treated as fundamentally distinct disciplines.
5. Russell, 1981: 59-60.
6. von Baeyer, 1992: 178.
7. Kuehlewind, 1984: 129.
8. See appendix 4 in Barfield, 1973.
9. Barfield, 1981.
10. See, for example, Barfield's evocation of the medieval consciousness, quoted in chapter 22.
11. Butterfield, 1957: 131.
12. Barfield, 1981: 32-34.
13. Barfield, 1981: 35.
14. Barfield, 1973: 134.
15. Barfield, 1981: 37.
16. Barfield, 1973: 25.
17. Barfield uses "poet" and "poetic" to express broadly the operation of the imagination, as opposed to that of rational analysis. The terms are by no means so restrictive as in normal usage today. Thus, a creative scientist may be as likely as a writer of verse to exercise the poetic function -- if not more likely.
18. Barfield, 1973: 131.
19. Barfield, 1973: 113.
20. Barfield, 1973: 133.
21. Francis Bacon, The Advancement of Learning, 2.17.10. Quoted in Barfield, 1973: 141-42.
22. Barfield, 1973: appendix 2.
23. Barfield, 1965b: 127. He is speaking here through the fictionalized voice of a physicist, Kenneth Flume.
24. Barfield, 1977a: 100.
25. Barfield, 1973: 79.
26. Barfield, 1973: 187-88.
27. Barfield, 1973: 75.
28. Barfield, 1973: 91-92.
29. Haugeland, 1985: 122.
30. Quoted in Johnson-Laird, 1988: 11.
31. Quoted in Weizenbaum, 1976: 201.
32. Gardner, 1985: 361.
33. Barfield, 1973: 197-98.
34. The Copycat program developed by Douglas Hofstadter et al. exemplifies current efforts to understand metaphor computationally. Hofstadter speaks with disarming ease of "an unexpected and deep slippage," "the revelation of contradiction," "intense pressures," and those "radical shifts in point of view" that bring one "close to the roots of human creativity." And yet, all the devices of Copycat amount to a kind of logical juggling from which no new simple terms (as opposed to new combinations of existing terms) can ever arise. That is, the program can do no more than play with the logical structure of complex entities; it cannot alter any of the root meanings from which all complex terms are constructed. Hofstadter, incidentally, shows no sign that he is aware of Barfield's work on metaphor and meaning. (See Hofstadter, Mitchell, and French, 1987; Mitchell and Hofstadter, 1990a; Mitchell and Hofstadter, 1990b.)
35. Barfield, 1973: 191.
36. It might make more sense to imagine a modern robot transported back to medieval times. But the same point can be made either way.
37. Actually, it's worth pointing out that, despite the standard story, one obviously cannot do this. Supposedly, it's possible in principle, but even the principle, it turns out, is riddled with vices.
38. See chapter 18, "And the Word Became Mechanical."