Over sixty years ago, Kenneth Craik noted that, if an organism (or an artificial agent) carried 'a small-scale model of external reality and of its own possible actions within its head', it could use the model to behave intelligently. This paper argues that the possible actions might best be represented by interactions between a model of reality and a model of the agent, and that, in such an arrangement, the internal model of the agent might be a transparent model of the sort recently discussed by Metzinger, and so might offer a useful analogue of a conscious entity. The CRONOS project has built a robot functionally similar to a human that has been provided with an internal model of itself and of the world to be used in the way suggested by Craik; when the system is completed, it will be possible to study its operation from the perspective not only of artificial intelligence, but also of machine consciousness.
-
We are engineers, and our view of consciousness is shaped by an engineering ambition: we would like to build a conscious machine. We begin by acknowledging that we may be a little disadvantaged, in that consciousness studies do not form part of the engineering curriculum, and so we may be starting from a position of considerable ignorance as regards the study of consciousness itself. In practice, however, this may not set us back very far; almost a decade ago, Crick wrote: 'Everyone has a rough idea of what is meant by consciousness. It is better to avoid a precise definition of consciousness because of the dangers of premature definition. Until the problem is understood much better, any attempt at a formal definition is likely to be either misleading or overly restrictive, or both' (Crick, 1994). This seems to be as true now as it was then, although the identification of different aspects of consciousness (P-consciousness, A-consciousness, self-consciousness, and monitoring consciousness) by Block (1995) has certainly brought a degree of clarification. On the other hand, there is little doubt that consciousness does seem to have something to do with the operation of a sophisticated control system (the human brain), and we can claim more familiarity with control systems than can most philosophers, so perhaps we can make up some ground there.
-
The idea that internal models of the world might be useful has generally been rejected by embodied AI for the same reasons that led to its rejection by behaviour-based robotics. This paper re-examines the issue from historical, biological, and functional perspectives; the view that emerges indicates that internal models are essential for achieving cognition, that their use is widespread in biological systems, and that there are several good but neglected examples of their use within embodied AI. Consideration of the example of a hypothetical autonomous embodied agent that has to execute a complex mission in a dynamic, partially unknown, and hostile environment leads to the conclusion that the necessary cognitive architecture is likely to contain separate but interacting models of the body and of the world. This arrangement is shown to have intriguing parallels with new findings on the infrastructure of consciousness, leading to the speculation that the reintroduction of internal models into embodied AI may lead not only to improved machine cognition but also, in the long run, to machine consciousness.
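The architecture described above, in which possible actions are rehearsed through the interaction of a body model and a world model before one is executed, can be illustrated with a minimal sketch. All class names, the one-dimensional corridor environment, and the action repertoire below are invented for illustration; they are not taken from the CRONOS system or any of the papers abstracted here.

```python
# Hypothetical sketch: an agent that keeps separate internal models of
# its body and of the world, and selects actions by simulating their
# interaction, in the spirit of Craik's "small-scale model" proposal.

class WorldModel:
    """Internal model of external reality: a 1-D corridor with a goal."""
    def __init__(self, goal=5):
        self.goal = goal

    def predict(self, position, action):
        # Predict the body's next position under a candidate action.
        return position + action


class BodyModel:
    """Internal model of the agent's own body: its state and repertoire."""
    def __init__(self, position=0):
        self.position = position
        self.actions = [-1, 0, 1]  # step left, stay, step right


def choose_action(body, world):
    # Rehearse each possible action inside the coupled models and pick
    # the one whose predicted outcome lies closest to the goal.
    def simulated_cost(action):
        return abs(world.predict(body.position, action) - world.goal)
    return min(body.actions, key=simulated_cost)


body, world = BodyModel(position=0), WorldModel(goal=5)
print(choose_action(body, world))  # prints 1: step toward the goal
```

The point of the separation is that the same world model can be reused to evaluate any candidate behaviour the body model can generate, without committing the physical body to any of them.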