Roy Abraham Varghese

Never before in the history of humanity has there been as much advancement in science and technology as in the last one hundred years. The quantum leaps made by scientists and technicians have had an incalculable impact at all levels of society and in practically every corner of the world. Automobiles and video communication, nuclear power and space travel, biotechnology and superconductivity, the marvels of modern medicine and seemingly omnicompetent computers are the alphabets of the global language of modern science. The scientific enterprise - with all its achievements and shortcomings - has now become an intrinsic ingredient of the human adventure. Not surprisingly, the presuppositions and methodology of modern science have influenced the theory and practice of almost every discipline. The "hard facts" of science (as distinct from mere scientism) have to be faced not only by those who seek to synthesize the ever-increasing influx of information in every field of knowledge but also by those concerned about the relation of science to society. The phenomenal progress of science has ramifications for social structures, the legal system, medical practices, the business environment, military policy, the universities and almost every other sphere of the modern world.

While space travel and nuclear power tend to grab the headlines (and the imagination), developments in the computer world are, arguably, far more fundamental and revolutionary for science and society - not only because the first two (like much else in the present age) depend on these developments but also because these are beginning to shape the modern understanding of the nature of the human person (and, therefore, of all human activities). The work in Artificial Intelligence, above all, has given rise to a model of the human person which is sharply at odds with traditional concepts of "mind" and "soul".

Artificial Intelligence (AI) in the most general sense refers to computer programs designed to behave in ways which would be considered intelligent if observed in human beings (programs which solve problems, learn from experience, understand languages, interpret visual scenes). AI focuses more on the specific knowledge required for specific tasks than on computational power and performance. Its most powerful applications in the business world are Expert Systems, AI programs which codify the expertise and knowledge of experts in specialized problem areas. Expert Systems are made up of a knowledge base and an inference "engine" that draws on this base.
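The knowledge-base-plus-inference-engine architecture described above can be sketched as a toy forward-chaining rule system. Everything here - the rule contents, fact names, and the `infer` function - is illustrative, not drawn from any actual Expert System of the period:

```python
# Minimal sketch of an Expert System's two components:
# a knowledge base (facts plus if-then rules) and a forward-chaining
# inference "engine" that repeatedly applies rules until no new
# facts can be derived. All facts and rules are hypothetical.

# Knowledge base: each rule maps a set of premises to a conclusion.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts, rules=RULES):
    """Forward-chain over the rules: fire every rule whose premises
    are all present, add its conclusion to the fact set, and repeat
    until a pass adds nothing new (a fixed point)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Chained inference: the first rule's conclusion enables the second.
print(sorted(infer({"has_fever", "has_cough", "short_of_breath"})))
```

Real Expert Systems of the era (rule interpreters in the style of MYCIN and its successors) were far more elaborate - handling uncertainty, explanation, and backward chaining - but the division of labor shown here, declarative knowledge separate from a general inference procedure, is the architectural point.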

The AI model of the human person has been well described by Professor Marvin Minsky, co-founder of MIT's Artificial Intelligence Laboratory: "Consciousness is overrated. What we call consciousness now is a very imperfect summary in one part of the brain of what the rest is doing. The real problem is that people who ask, 'Could a machine be conscious?' think that they are. They think they have a pipeline to what's happening in their minds. That's not true. People scarcely know how they get ideas at all. This makes psychology hard. But I think it makes putting consciousness into machines easy, because I don't think it will take very much. For a machine to solve very hard problems, it's going to have to have a brief description of itself. When there is a better theory about what's happening in other parts, then we'll understand it and be able to make machines do it." (Science Digest, October 1985, p. 42).

On the traditional end of the spectrum is the view expressed by Professor Daniel N. Robinson in a book he co-authored with Nobel Laureate neurophysiologist Sir John Eccles: "That there are parallels between the computer's performance and human cognition becomes less surprising once we realize that it is human cognition that has programmed the computer and that makes 'symbols' possible in the first place. Note that the computer's hardware is always and only generating electrical pulses. We are the ones who wire the device in such a way that a series of pulses will illuminate a screen with 'Flight 202 has been delayed.' The computer no more knows that a flight has been delayed than a tape recorder knows how to sing La Traviata ... The search for 'symbols' in the brain must be aimless, for there are no symbols in the brain, only pulses and pauses. The brain is, indeed, an extraordinary computer whose workings are fantastic. The serious scientist has every reason to study these workings and to help us understand how this unique device serves us throughout life. That it does have its own 'language' is evidence enough of a division between the computer and the programmer, between it and its owner. But brain, qua brain, cannot be the 'I' in reports of experience and of thought. In an ironic way, the brain does possess an artificial intelligence in that it does its necessary work oblivious to the mission it serves, the mission of personhood." (THE WONDER OF BEING HUMAN).

The achievements and the revolutionary possibilities of work in Artificial Intelligence have made it, for many, the most exciting frontier of modern technology. It is of paramount importance, therefore, that its leading theorists and practitioners engage in a continuing dialogue with thinkers who work in other disciplines and who have made their own contributions to an understanding of the dignity and value of the human person. Such efforts at synthesis are essential, in the long run, if this fascinating fruit of modern technology is to be beneficial for humanity.

It is with these objectives in mind that TRUTH: A Journal of Modern Thought, and The International Institute for Mankind, sponsored ARTIFICIAL INTELLIGENCE AND THE HUMAN MIND, an international, inter-disciplinary conference held on the Yale University campus, March 1-3, 1986. Participants in the conference included four Nobel Laureates and six Gifford Lecturers, a few of the foremost researchers in Artificial Intelligence and noted physicists, brain scientists, psychologists and philosophers.

A report on the conference in AI Magazine (the official publication of the American Association for Artificial Intelligence), Fall 1987, noted "the historic proportions of the debate and the personalities involved" and the fact that the sponsors "had contrived to assemble the biggest scientific guns they could find to support a dualist position and paired them off against the leading exponents of the opposing position." A brief description of the conference participants is given in this issue.

For reasons of space, only a few of the papers presented at the conference have been published in this issue. Other papers will be included in subsequent issues.

We thank Professor Daniel N. Robinson, Chairman of the Psychology Department at Georgetown University, for kindly consenting to write an introductory commentary on the issues discussed at the conference and in these papers. We also thank Steven Ebsen, Russell Slusher and Andrew M. Adams for their help in assembling the journal. Finally, the organizers of the conference would like to express their deep gratitude to Mr. William N. Garrison, without whose vision and generosity the conference could not have taken place.