Full bibliography: 716 resources
-
Could a machine have an immaterial mind? The author argues that true conscious machines can be built, but rejects artificial intelligence and classical neural networks in favour of the emulation of the cognitive processes of the brain--the flow of inner speech, inner imagery and emotions. This results in a non-numeric meaning-processing machine with distributed information representation and system reactions. It is argued that this machine would be conscious; it would be aware of its own existence and its mental content and perceive this as immaterial. Novel views on consciousness and the mind-body problem are presented. This book is a must for anyone interested in consciousness research and the latest ideas in the forthcoming technology of mind.
-
We describe a virtual robot that acquires the meanings of words, and argue that the robot exhibits introspective conscious behavior. We first discuss which outputs of the robot demonstrate that it has learned the meanings of words such as "what is this". Then we explain the functions of the robot that enable it to produce outputs as if it had learned the meanings of questions. The functions include 1) forming linked lists (associations) among observed features, robotic actions, and words, 2) generalizing the associations to form generalized associations, and 3) applying an association and a generalized association to a new situation by making the generalized association match the words, observed features, and actions in the new situation.
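The three functions listed above can be illustrated with a minimal sketch. The tuple representation, the wildcard convention, and all names below are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of the three functions the abstract describes:
# 1) form associations among features, actions, and words;
# 2) generalize two associations; 3) apply a generalized association
# to a new situation by matching its slots.

WILDCARD = "*"  # slot that matches anything in a generalized association

def form_association(features, action, words):
    """1) Link observed features, a robotic action, and words."""
    return (tuple(features), action, tuple(words))

def generalize(assoc_a, assoc_b):
    """2) Keep shared elements; replace differing slots with a wildcard."""
    def merge(xs, ys):
        return tuple(x if x == y else WILDCARD for x, y in zip(xs, ys))
    return (merge(assoc_a[0], assoc_b[0]),
            assoc_a[1] if assoc_a[1] == assoc_b[1] else WILDCARD,
            merge(assoc_a[2], assoc_b[2]))

def matches(generalized, concrete):
    """3) A generalized association applies if every non-wildcard slot agrees."""
    def slots_ok(g, c):
        return all(a == WILDCARD or a == b for a, b in zip(g, c))
    gf, ga, gw = generalized
    cf, ca, cw = concrete
    return slots_ok(gf, cf) and (ga == WILDCARD or ga == ca) and slots_ok(gw, cw)

a1 = form_association(["red", "round"], "point", ["what", "is", "this"])
a2 = form_association(["blue", "round"], "point", ["what", "is", "this"])
g = generalize(a1, a2)          # features generalize to ("*", "round")
new = form_association(["green", "round"], "point", ["what", "is", "this"])
print(matches(g, new))          # True: the association transfers to the new situation
```

The wildcard merge is one simple way to realize "generalized associations"; the paper itself does not specify this particular matching scheme.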
-
This paper discusses recent research on humanoid robots and thought experiments addressing the question to what degree such robots could be expected to develop human-like cognition, if rather than being preprogrammed they were made to learn from the interaction with their physical and social environment like human infants. A question of particular interest, from both a semiotic and a cognitive scientific perspective, is whether or not such robots could develop an experiential Umwelt, i.e. could the sign processes they are involved in become intrinsically meaningful to themselves? Arguments for and against the possibility of phenomenal artificial minds of different forms are discussed, and it is concluded that humanoid robotics still has to be considered “weak” rather than “strong AI”, i.e. it deals with models of mind rather than actual minds.
-
In 1994 John Searle stated (Searle 1994: 11-12) that the Chinese Room Argument (CRA) is an attempt to prove the truth of the premise that syntax is not sufficient for semantics, which led him to the conclusion that ‘programs are not minds’ and hence that computationalism, the idea that the essence of thinking lies in computational processes and that such processes thereby underlie and explain conscious thinking, is false. The argument presented in this chapter is not a direct attack on or defence of the CRA, but relates to the premise at its heart, that syntax is not sufficient for semantics, via the closely associated propositions that semantics is not intrinsic to syntax and that syntax is not intrinsic to physics. However, in contrast to the CRA’s critique of the link between syntax and semantics, this chapter will explore the associated link between syntax and physics.
-
MAVRIC II is a mobile, autonomous robot whose brain is composed almost entirely of artificial adaptrode-based neurons. These neurons were previously shown to encode anticipatory actions. The architecture of this brain is based on the Extended Braitenberg Architecture (EBA). We are still in the process of collecting hard data on MAVRIC’s behavioral traits in the generalized foraging search task, but sufficient qualitative aspects of MAVRIC’s behavior have already been garnered from foraging experiments to lend strong support to the theory that MAVRIC is a highly adaptive, life-like agent. The development of the current MAVRIC brain has led to some important insights into the nature of intelligent control. In this paper we elucidate some of these principles in the form of lessons learned, and project the potential for future developments.
-
The late Stanley Kubrick's film 2001: A Space Odyssey portrayed a computer, HAL 9000, that appeared to be a conscious entity, especially given that it seemed capable of some forms of emotional expression. This article examines the film's portrayal of communication between HAL 9000 and the astronauts. Recent developments in the field of artificial intelligence (AI) (and synthetic emotions in particular) as well as social science research on human emotions are reviewed. Interpreting select scenes from 2001 in light of these findings, the authors argue that computer-generated emotions may be so realistic that they suggest inner feelings and consciousness. Refinements in AI technology are now making such realism possible. The need for a less anthropomorphic approach with computers that appear to have feelings is stressed.
-
Since the beginnings of computer technology, researchers have speculated about the possibility of building smart machines that could compete with human intelligence. Given the current pace of advances in artificial intelligence and neural computing, such an evolution seems to be a more concrete possibility. Many people now believe that artificial consciousness is possible and that, in the future, it will emerge in complex computing machines. However, a discussion of artificial consciousness gives rise to several philosophical issues: can computers think or do they just calculate? Is consciousness a human prerogative? Does consciousness depend on the material that comprises the human brain, or can computer hardware replicate consciousness? Answering these questions is difficult because it requires combining information from many disciplines including computer science, neurophysiology, philosophy, and religion. Further, we must consider the influence of science fiction, especially science fiction films, when addressing artificial consciousness. As a product of the human imagination, such works express human desires and fears about future technologies and may influence the course of progress. At a societal level, science fiction simulates future scenarios that can help prepare us for crucial transitions by predicting the consequences of significant technological advances. The paper considers robots in science fiction, the Turing test, computer chess and artificial consciousness.
-
The role of emotions has been underestimated in the field of robotics. We claim that emotions are relevant to the building of purposeful artificial systems from at least two perspectives: a cognitive and a phenomenological one. The cognitive aspect is relevant for at least two reasons. First, emotions could be the basis for binding internal values to different external situations (the somatic marker theory). Second, emotions could play a crucial role during development, both in taking difficult decisions whose effects are not immediately verifiable and in the creation of more complex behavioral functions. Thus emotions can be seen, from a cognitive point of view, as a reinforcement stimulus, and in this respect they can be modeled in an artificial being. In this sense, emotions can be seen as a medium for linking rewards and values to external situations. From the phenomenological perspective, we accept the division between feelings and emotions. Following James, the body is the theatre in which emotions are played out, and feelings are the mental, phenomenological perception of them. We could say that feelings are the qualia of the bodily events we call emotions. We are using this model of emotions in the development of our project, Babybot. We stress the importance of emotions during learning and development as endogenous teaching devices.
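The somatic-marker idea described above, emotions as a medium linking rewards to external situations, can be caricatured in a few lines. The situations, reward values, and learning rate below are illustrative assumptions, not part of the Babybot project:

```python
# Hypothetical sketch of a somatic marker as a learned emotional valence
# bound to external situations, later used to bias decisions whose
# payoffs are not immediately verifiable.

class SomaticMarkers:
    def __init__(self, lr=0.5):
        self.valence = {}   # situation -> learned emotional value
        self.lr = lr        # illustrative learning rate

    def experience(self, situation, reward):
        """Bind an internal value (emotion) to an external situation."""
        v = self.valence.get(situation, 0.0)
        self.valence[situation] = v + self.lr * (reward - v)

    def prefer(self, options):
        """Use stored markers to choose among options with unknown payoffs."""
        return max(options, key=lambda s: self.valence.get(s, 0.0))

m = SomaticMarkers()
m.experience("hot-stove", -1.0)
m.experience("caregiver", +1.0)
print(m.prefer(["hot-stove", "caregiver", "unknown"]))  # caregiver
```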
-
The aim of this chapter is to refine some questions regarding AI, and to provide partial answers to them. We analyze the state of the art in designing intelligent systems that are able to mimic complex human activities, including acts based on artificial consciousness. The analysis contrasts human cognition and behavior with the similar processes in AI systems, and includes elements of psychology, sociology, and communication science related to humans and lower-level beings. The second part of this chapter is devoted to human-human and man-machine communication, as related to intelligence. We emphasize that relational aspects constitute the basis for the perception, knowledge, semiotic and communication processes. Several consequences are derived. Subsequently, we deal with the tools needed to endow machines with intelligence. We discuss the roles of knowledge and data structures. The results could help in building "sensitive and intelligent" machines.
-
The paper presents and defends the mimetic hypothesis concerning the origin of self-consciousness in three different kinds of development: hominid evolution, the mind of the child, and the epigenesis of mind within an artificial autonomous system - a robot. The proposed crucial factor for the emergence of self-consciousness is the ability to map between one's own subjective body-image and those of others, supported by a partially innate 'mirror system'. Combined with social interaction, this gives rise to inter-subjectivity and starts a developmental cycle of: 1) increased objectification of one's body-image, 2) increased volitional control, 3) increased understanding of the intentionality of others, and 4) increased understanding of one's own intentionality. The hypothesis has far-reaching theoretical implications: self-consciousness and empathy are co-determined; language and tool-use are not causes, but rather consequences, of increased self-consciousness; and most of the symptoms of autism can be accounted for as resulting from an impairment of the mirror system. The implications are negative for non-representational approaches to robotics and in favor of approaches based on imitation/mimesis.
-
Within a technological context, this volume addresses contemporary theories of consciousness, subjective experience, the creation of meaning and emotion, and relationships between cognition and location. Its focus is both on and beyond the digital culture, seeking to assimilate new ideas emanating from the physical sciences as well as embracing spiritual and artistic aspects of human experience. Developing on the studies published in Roy Ascott's successful Reframing Consciousness, the book documents the very latest work from those connected with the internationally acclaimed CAiiA-STAR centre and its conferences.
-
Three fundamental questions concerning minds are presented. These are about consciousness, intentionality and intelligence. After we present the fundamental framework that has shaped both the philosophy of mind and Artificial Intelligence research in the last forty years or so regarding the last two questions, we turn to consciousness, whose study still seems evasive to both communities. After briefly illustrating why and how phenomenal consciousness is puzzling, a theoretical diagnosis of the problem is proposed and a framework is presented, within which further research would yield a solution. The diagnosis is that the puzzle stems from a peculiar dual epistemic access to phenomenal aspects (qualia) of our conscious experiences. An account of concept formation is presented such that both the phenomenal concepts (like the concepts RED and SWEET) and the introspective concepts (like the concepts EXPERIENCING RED and TASTING SWEET) are acquired from a first-person perspective as opposed to the third-person one (the standard concept-formation strategy for objective features). We explain the first-person perspective in information-theoretic and computational terms.

Nature (the Art whereby God hath made and governes the World) is by the Art of man, as in many other things, so in this also imitated, that it can make an Artificial Animal. For seeing life is but a motion of Limbs, the beginning whereof is in some principall part within; why may we not say, that all Automata (Engines that move themselves by springs and wheels as doth a watch) have an artificiall life? For what is the Heart, but a Spring; and the Nerves but so many Strings; and the Joynts, but so many Wheeles, giving motion to the whole Body, such as was intended by the Artificer? Art goes yet further, imitating that Rationall and most excellent worke of Nature, Man. (Hobbes 1651, p. 81)

So declared Thomas Hobbes in 1651 in the Introduction to his well-known work, Leviathan, published one year after René Descartes' death. Descartes was also interested in mechanical explanations of bodily processes and organic life. In fact, on the basis of his neuroanatomical and physiological studies, as well as philosophical arguments, Descartes had already argued that human and animal bodies could be mechanically understood as complicated and intricately designed machines (Descartes 1664). What differentiated Descartes from Hobbes lay in his belief that human beings, unlike non-human animals, were not merely bodies; they were unions of material bodies and immaterial souls. The immaterial soul was necessary for Descartes to explain the peculiar capacities and activities of the human mind. As such, materialist mechanical explanations could never be sufficient to account for the whole human being.
-
Perception has both unconscious and conscious aspects. In all cases, however, what we perceive is a model of reality. By brain construction through evolution, we divide the world into two parts--our body and the outside world. But the process is the same in both cases. We perceive a construct usually governed by sensed data but always involving memory, goals, fears, expectations, etc. As a first step toward Artificial Perception in man-made systems, we examine perception in general here.
-
Conscious behavior is hypothesized to be governed by the dynamics of the neural architecture of the brain. A general model of an artificial consciousness algorithm is presented, and applied to a one-dimensional feedback control system. A new learning algorithm for learning functional relations is presented and shown to be biologically grounded. The consciousness algorithm uses predictive simulation and evaluation to let the example system relearn new internal and external models after it is damaged.
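The predictive-simulation-and-evaluation loop described above can be sketched minimally: the controller simulates candidate actions with an internal model, evaluates the predicted outcomes, and relearns the model when predictions diverge from reality, as after damage. The plant gains, learning rate, and update rule below are illustrative assumptions, not the paper's algorithm:

```python
# Illustrative sketch (not the paper's method) of predictive simulation
# and evaluation in a one-dimensional feedback control system that
# relearns its internal model after the plant is "damaged".

def make_plant(gain):
    """True external dynamics: x' = x + gain * u."""
    return lambda x, u: x + gain * u

def simulate(model_gain, x, u):
    """Internal model used to predict the outcome of a candidate action."""
    return x + model_gain * u

def choose_action(model_gain, x, target, candidates):
    """Predictively simulate each candidate action and pick the best."""
    return min(candidates, key=lambda u: abs(simulate(model_gain, x, u) - target))

def control(plant, steps=200, target=0.0, lr=0.1):
    x, model_gain = 5.0, 1.0              # start with an undamaged model
    candidates = [u / 10.0 for u in range(-20, 21)]
    for _ in range(steps):
        u = choose_action(model_gain, x, target, candidates)
        predicted = simulate(model_gain, x, u)
        x = plant(x, u)
        if abs(u) > 1e-6:                  # relearn: explain the surprise
            model_gain += lr * (x - predicted) / u
    return x, model_gain

# "Damaged" plant: true gain is 0.4 while the internal model assumes 1.0.
x_final, g_final = control(make_plant(0.4))
print(round(x_final, 3), round(g_final, 2))
# The state still reaches the target, and the model gain is pulled
# below its initial 1.0 toward the damaged plant's true gain.
```

The point of the sketch is the division of labor: action selection uses only the internal model (simulation and evaluation), while the mismatch between prediction and observation drives relearning of that model.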
-
This paper investigates human consciousness in comparison with a robot's internal state. On the basis of Husserlian phenomenology, nine requirements for a model of consciousness are proposed from different technical aspects: self, intentionality, anticipation, feedback process, certainty, embodiment, otherness, emotionality and chaos. Consciousness-based Architecture (CBA) for mobile robots was analyzed in comparison with the requirements proposed for a consciousness model. CBA, previously developed, is a software architecture with an evolutionary hierarchy of the relationship between consciousness and behavior. Experiments with this architecture loaded on two mobile robots demonstrate the emergence of self and, to some extent, intentionality, anticipation, feedback process, embodiment and emotionality. Modification of CBA will be necessary to better explain the emergence of self in terms of the relationship between consciousness and behavior.