Full bibliography: 704 resources
-
In his article "The Liabilities of Mobility", Merker (this issue) asserts that "Consciousness presents us with a stable arena for our actions—the world …" and argues that this property provided evolutionary pressure for the evolution of consciousness. In this commentary, I explore the implications of Merker's ideas for consciousness in artificial agents as well as animals, and meet some possible objections to his evolutionary pressure claim.
-
This paper discusses the development of a robot head that generates facial expressions according to emotions arising from a flow of consciousness. The authors focused on human consciousness and proposed a system using a unique artificial consciousness based on the relationship between emotions and the human facial expressions they produce. The artificial consciousness imitates human consciousness and generates emotion in the robot using data from the Internet. The authors have also developed a robot system similar in structure to human anatomy in order to achieve smooth communication with people. By combining these two systems, the authors were able to realize the expression of emotion according to a flow of consciousness in a humanoid robot.
-
In this article I suggest how we might conceptualize some kind of artificial consciousness as an ultimate development of Artificial Life. This entity will be embodied in some sort of constructed (biological or non-biological) body. The contention is that consciousness within self-organized entities is not only possible but inevitable: the basic sensory and interactive processes by which an organism operates within an environment are precisely the processes necessary for consciousness. I then look at likely criteria for consciousness, and point to an architecture of the cognitive layer that maps onto the physiological layer, the brain. While evolutionary algorithms and neural nets will be at the heart of the production of artificial intelligences, a particular architectural organization may be necessary in the production of conscious artefacts. This involves the operation of multiple layers of feedback loops in the anatomy of the brain, in the social construction of the contents of consciousness, and in particular in the self-regulation necessary for the continued operation of metabolically organized systems. Finally I make some comments on the ethics of such a procedure.
-
The author found the "Journal of Consciousness Studies" (JCS) issue on Machine Consciousness (2003) frustrating and alienating. It is argued that a consensus seems to be building that consciousness is accessible to scientific scrutiny, so much so that it is already understood well enough to be modeled and even synthesized. It could be instead that the vocabulary of consciousness is being subtly redefined to be amenable to scientific investigation and explicit modeling. Such semantic revisionism is confusing and often misleading. Whatever else consciousness is, it is at least a certain quality of life apparent from personal reflection. Introspection is, after all, the only way we know that consciousness even exists. Scientific and technical redefinitions that fail to account for its phenomenal quality are at best incomplete. In the author's view, all but one of the ten articles in the JCS volume on Machine Consciousness commit various degrees of Protean distortion. In this collection of articles, common-sense terms describing consciousness were consistently distorted into special uses that strip them of their meaning. In criticizing the JCS articles, the author has tried to point out explicit principles that are missing from the account of machine consciousness, leaving the various inferences about consciousness open to charges of being distorted and illegitimate.
-
The idea that internal models of the world might be useful has generally been rejected by embodied AI for the same reasons that led to its rejection by behaviour based robotics. This paper re-examines the issue from historical, biological, and functional perspectives; the view that emerges indicates that internal models are essential for achieving cognition, that their use is widespread in biological systems, and that there are several good but neglected examples of their use within embodied AI. Consideration of the example of a hypothetical autonomous embodied agent that has to execute a complex mission in a dynamic, partially unknown, and hostile environment leads to the conclusion that the necessary cognitive architecture is likely to contain separate but interacting models of the body and of the world. This arrangement is shown to have intriguing parallels with new findings on the infrastructure of consciousness, leading to the speculation that the reintroduction of internal models into embodied AI may lead not only to improved machine cognition but also, in the long run, to machine consciousness.
-
Asking whether a machine can be conscious is rather like asking whether one has stopped beating one's wife: The question is so heavy with assumptions that either answer would be incriminating! The answer, of course, is: It depends entirely on what you mean by 'machine'! If you mean the current generation of man-made devices (toasters, ovens, cars, computers, today's robots), the answer is: almost certainly not.
-
We are engineers, and our view of consciousness is shaped by an engineering ambition: we would like to build a conscious machine. We begin by acknowledging that we may be a little disadvantaged, in that consciousness studies do not form part of the engineering curriculum, and so we may be starting from a position of considerable ignorance as regards the study of consciousness itself. In practice, however, this may not set us back very far; almost a decade ago, Crick wrote: 'Everyone has a rough idea of what is meant by consciousness. It is better to avoid a precise definition of consciousness because of the dangers of premature definition. Until the problem is understood much better, any attempt at a formal definition is likely to be either misleading or overly restrictive, or both' (Crick, 1994). This seems to be as true now as it was then, although the identification of different aspects of consciousness (P-consciousness, A-consciousness, self consciousness, and monitoring consciousness) by Block (1995) has certainly brought a degree of clarification. On the other hand, there is little doubt that consciousness does seem to be something to do with the operation of a sophisticated control system (the human brain), and we can claim more familiarity with control systems than can most philosophers, so perhaps we can make up some ground there.
-
After discussing various types of consciousness, several approaches to machine consciousness, software agent, and global workspace theory, we describe a software agent, IDA, that is “conscious” in the sense of implementing that theory of consciousness. IDA perceives, remembers, deliberates, negotiates, and selects actions, sometimes “consciously.” She uses a variety of mechanisms, each of which is briefly described. It’s tempting to think of her as a conscious artifact. Is such a view in any way justified? The remainder of the paper considers this question.
-
Could a machine have an immaterial mind? The author argues that true conscious machines can be built, but rejects artificial intelligence and classical neural networks in favour of the emulation of the cognitive processes of the brain: the flow of inner speech, inner imagery and emotions. This results in a non-numeric meaning-processing machine with distributed information representation and system reactions. It is argued that this machine would be conscious; it would be aware of its own existence and its mental content and perceive this as immaterial. Novel views on consciousness and the mind-body problem are presented. This book is a must for anyone interested in consciousness research and the latest ideas in the forthcoming technology of mind.
-
We describe a virtual robot that acquires meanings of words, and argue that the robot shows introspective conscious behavior. We first discuss which outputs of the robot demonstrate that it has learned meanings of words such as "what is this". Then we explain the functions that enable the robot to produce outputs as if it has learned the meanings of questions. The functions include 1) forming linked lists (associations) among observed features, robotic actions, and words, 2) generalizing the associations to form generalized associations, and 3) applying an association and a generalized association to a new situation by making the generalized association match words, observed features, and actions in the new situation.
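The three functions this abstract enumerates can be illustrated with a minimal sketch. All names here (`Association`, `generalize`, `apply_generalization`) are hypothetical, invented for illustration, not taken from the paper; the sketch only shows the general idea of forming, generalizing, and applying feature-action-word associations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Association:
    """A hypothetical record linking one episode's percepts, action, and words."""
    features: tuple  # observed features, e.g. ("red", "ball")
    action: str      # robotic action, e.g. "point"
    words: tuple     # words heard, e.g. ("what", "is", "this")

def generalize(assocs):
    """Form a generalized association: keep the words and action common to
    every episode, and wildcard the episode-specific features."""
    common_words = set(assocs[0].words)
    common_action = assocs[0].action
    for a in assocs[1:]:
        common_words &= set(a.words)
        if a.action != common_action:
            common_action = "*"
    return Association(features=("*",), action=common_action,
                       words=tuple(sorted(common_words)))

def apply_generalization(gen, new_words, new_features):
    """If the generalized words match the new utterance, return the action
    the generalization predicts; otherwise return None."""
    if set(gen.words) <= set(new_words):
        return gen.action if gen.action != "*" else None
    return None

# Two concrete episodes sharing the same question and action:
a1 = Association(("red", "ball"), "point", ("what", "is", "this"))
a2 = Association(("blue", "cup"), "point", ("what", "is", "this"))
gen = generalize([a1, a2])
# A novel object with the same question again triggers the learned action:
print(apply_generalization(gen, ("what", "is", "this"), ("green", "box")))
```

The point of the sketch is the third step: a generalization formed from past episodes is matched against a new situation purely by its shared words, which is how the robot can respond appropriately to "what is this" about an object it has never seen.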
-
This paper discusses recent research on humanoid robots and thought experiments addressing the question to what degree such robots could be expected to develop human-like cognition, if rather than being preprogrammed they were made to learn from the interaction with their physical and social environment like human infants. A question of particular interest, from both a semiotic and a cognitive scientific perspective, is whether or not such robots could develop an experiential Umwelt, i.e. could the sign processes they are involved in become intrinsically meaningful to themselves? Arguments for and against the possibility of phenomenal artificial minds of different forms are discussed, and it is concluded that humanoid robotics still has to be considered “weak” rather than “strong AI”, i.e. it deals with models of mind rather than actual minds.
-
In 1994 John Searle stated (Searle 1994: 11-12) that the Chinese Room Argument (CRA) is an attempt to prove the truth of the premise that syntax is not sufficient for semantics, which led him to the conclusion that 'programs are not minds' and hence that computationalism, the idea that the essence of thinking lies in computational processes and that such processes thereby underlie and explain conscious thinking, is false. The argument presented in this chapter is not a direct attack or defence of the CRA, but relates to the premise at its heart, that syntax is not sufficient for semantics, via the closely associated propositions that semantics is not intrinsic to syntax and that syntax is not intrinsic to physics. However, in contrast to the CRA's critique of the link between syntax and semantics, this chapter will explore the associated link between syntax and physics.
-
MAVRIC II is a mobile, autonomous robot whose brain is composed almost entirely of artificial adaptrode-based neurons. These neurons were previously shown to encode anticipatory actions. The architecture of this brain is based on the Extended Braitenberg Architecture (EBA). We are still in the process of collecting hard data on the behavioral traits of MAVRIC in the generalized foraging search task, but even now sufficient qualitative aspects of MAVRIC's behavior have been garnered from foraging experiments to lend strong support to the theory that MAVRIC is a highly adaptive, life-like agent. The development of the current MAVRIC brain has led to some important insights into the nature of intelligent control. In this paper we elucidate some of these principles in the form of lessons learned, and project the potential for future developments.
-
The late Stanley Kubrick's film 2001: A Space Odyssey portrayed a computer, HAL 9000, that appeared to be a conscious entity, especially given that it seemed capable of some forms of emotional expression. This article examines the film's portrayal of communication between HAL 9000 and the astronauts. Recent developments in the field of artificial intelligence (AI) (and synthetic emotions in particular) as well as social science research on human emotions are reviewed. Interpreting select scenes from 2001 in light of these findings, the authors argue that computer-generated emotions may be so realistic that they suggest inner feelings and consciousness. Refinements in AI technology are now making such realism possible. The need for a less anthropomorphic approach with computers that appear to have feelings is stressed.
-
Since the beginnings of computer technology, researchers have speculated about the possibility of building smart machines that could compete with human intelligence. Given the current pace of advances in artificial intelligence and neural computing, such an evolution seems to be a more concrete possibility. Many people now believe that artificial consciousness is possible and that, in the future, it will emerge in complex computing machines. However, a discussion of artificial consciousness gives rise to several philosophical issues: can computers think or do they just calculate? Is consciousness a human prerogative? Does consciousness depend on the material that comprises the human brain, or can computer hardware replicate consciousness? Answering these questions is difficult because it requires combining information from many disciplines including computer science, neurophysiology, philosophy, and religion. Further, we must consider the influence of science fiction, especially science fiction films, when addressing artificial consciousness. As a product of the human imagination, such works express human desires and fears about future technologies and may influence the course of progress. At a societal level, science fiction simulates future scenarios that can help prepare us for crucial transitions by predicting the consequences of significant technological advances. The paper considers robots in science fiction, the Turing test, computer chess and artificial consciousness.