Full bibliography: 716 resources
-
While the recent special issue of JCS on machine consciousness (Volume 14, Issue 7) was in preparation, a collection of papers on the same topic, entitled Artificial Consciousness and edited by Antonio Chella and Riccardo Manzotti, was published. The editors of the JCS special issue, Ron Chrisley, Robert Clowes and Steve Torrance, thought it would be a timely and productive move to have authors of papers in their collection review the papers in the Chella and Manzotti book, and include these reviews in the special issue of the journal. Eight of the JCS authors (plus Uziel Awret) volunteered to review one or more of the fifteen papers in Artificial Consciousness; these individual reviews were then collected together with a minimal amount of editing to produce a seamless chapter-by-chapter review of the entire book. Because the number and length of contributions to the JCS issue was greater than expected, the collective review of Artificial Consciousness had to be omitted, but here at last it is. Each paper's review is written by a single author, so any comments made may not reflect the opinions of all nine of the joint authors!
-
This paper briefly describes the most relevant current approaches to the implementation of scientific models of consciousness. The main aspects of scientific theories of consciousness are characterized in light of their possible mapping into artificial implementations. These implementations are analyzed both theoretically and functionally. A novel pragmatic functional approach to machine consciousness is also proposed and discussed. A set of axioms for the presence of consciousness in agents is applied to evaluate and compare the various models.
-
One of the major steps towards robot consciousness is to give a robot the capability of self-consciousness. We propose that robot self-consciousness is based on the robot's higher-order perception, in the sense that first-order robot perception is the immediate perception of the outer world, while higher-order perception is the perception of the robot's inner world.
-
The humanoid robot we developed is capable of changing its facial expressions according to simple artificial consciousness and stream of consciousness. We think that consciousness consists basically of ‘language and its associative stream’ and ‘representation of feelings related to the language.’ The robot, named Kansei, a Japanese term referring to emotion, achieves linguistic association similar to a stream of consciousness and represents its feelings according to the associations through artificial consciousness. The robot is equipped with an artificial cranium made of aluminium which contains 19 servomotors. For the surface of the face, we use polyurethane, a relatively soft material which creates a shape that closely mimics a human face. Facial expressions are formed on the polyurethane by pulling it into shape via metal wires attached to the 19 internal servomotors. Besides the six basic facial expressions such as happiness and anger, the robot can also make complex expressions that include both happiness and fear.
-
In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms that preclude them from having full moral status. Computationally controlled systems, however advanced in their cognitive or informational capacities, are, it is proposed, unlikely to possess sentience and hence will fail to be able to exercise the kind of empathic rationality that is a prerequisite for being a moral agent. The Organic view also argues that sentience and teleology require biologically based forms of self-organization and autonomous self-maintenance. The Organic view may not be correct, but at least it needs to be taken seriously in the future development of the field of Machine Ethics.
-
In this mind-expanding book, scientific pioneer Marvin Minsky continues his groundbreaking research, offering a fascinating new model for how our minds work. He argues persuasively that emotions, intuitions, and feelings are not distinct things, but different ways of thinking. By examining these different forms of mind activity, Minsky says, we can explain why our thought sometimes takes the form of carefully reasoned analysis and at other times turns to emotion. He shows how our minds progress from simple, instinctive kinds of thought to more complex forms, such as consciousness or self-awareness. And he argues that because we tend to see our thinking as fragmented, we fail to appreciate what powerful thinkers we really are. Indeed, says Minsky, if thinking can be understood as the step-by-step process that it is, then we can build machines -- artificial intelligences -- that not only can assist with our thinking by thinking as we do but have the potential to be as conscious as we are. Eloquently written, The Emotion Machine is an intriguing look into a future where more powerful artificial intelligences await.
-
Is it possible to create genuine consciousness in artificial intelligence? This paper will examine this question through the philosophical understanding of imagination. Imagination is the creative faculty that not only separates man from beast but also allows man to recognize himself, and it helps him to see beyond his immediate being. Without imagination, man is blind to future possibility and unable to conceive of future events. Imagination produces man's mental cognition that makes him a rational and creative being. The two modern methods of A.I. programming are incapable of imbuing their algorithms with imagination. Without imagination, A.I. not only lacks genuine creativity, but it also risks never understanding self-autonomy and therefore could never be a free and independent entity.
-
Haikonen envisions autonomous robots that perceive and understand the world directly, acting in it in a natural human-like way without the need of programs and numerical representation of information. By developing higher-level cognitive functions through the power of artificial associative neuron architectures, the author approaches the issues of machine consciousness. Robot Brains expertly outlines a complete system approach to cognitive machines, offering practical design guidelines for the creation of non-numeric autonomous creative machines. It details topics such as component parts and realization principles, so that different pieces may be implemented in hardware or software. Real-world examples for designers and researchers are provided, including circuit and systems examples that few books on this topic give. In novel technical and practical detail, this book also considers: the limitations and remedies of traditional neural associators in creating true machine cognition; basic circuit assemblies for cognitive neural architectures; how motors can be interfaced with the associative neural system in order for fluent motion to be achieved without numeric computations; memorization, imagination, planning and reasoning in the machine; the concept of machine emotions for motivation and value systems; an approach towards the use and understanding of natural language in robots. The methods presented in this book have important implications for computer vision, signal processing, speech recognition and other information technology fields. Systematic and thoroughly logical, it will appeal to practising engineers involved in the development and design of robots and cognitive machines, as well as researchers in Artificial Intelligence. Postgraduate students in computational neuroscience and robotics, and neuromorphic engineers, will find it an exciting source of information.
-
Consciousness is only marginally relevant to artificial intelligence (AI), because to most researchers in the field other problems seem more pressing. The purpose of consciousness, from an evolutionary perspective, is often held to have something to do with the allocation and organization of scarce cognitive resources. This chapter describes Daniel Dennett's idea of the intentional stance, in which an observer explains a system's behavior by invoking such intentional categories as beliefs and goals. The computationalist theory of phenomenal consciousness ends up looking like a spoil-sport's explanation of a magic trick. The chapter focuses on critiques that are specifically directed at computational models of consciousness, as opposed to general critiques of materialist explanation. The contribution of artificial intelligence to consciousness studies has been slender so far, because almost everyone in the field would rather work on better defined, less controversial problems.
-
Current approaches to machine consciousness (MC) tend to offer a range of characteristic responses to critics of the enterprise. Many of these responses seem to marginalize phenomenal consciousness, by presupposing a 'thin' conception of phenomenality. This conception is, we will argue, largely shared by anti-computationalist critics of MC. On the thin conception, physiological or neural or functional or organizational features are secondary accompaniments to consciousness rather than primary components of consciousness itself. We outline an alternative, 'thick' conception of phenomenality. This offers some signposts in the direction of a more adequate approach to MC.
-
This work aims to describe the application of a novel machine consciousness model to a particular problem of unknown environment exploration. This relatively simple problem is analyzed from the point of view of the possible benefits that cognitive capabilities like attention, environment awareness and emotional learning can offer. The model we have developed integrates these concepts into a situated agent control framework, whose first version is being tested in an advanced robotics simulator. The implementation of the relationships and synergies between the different cognitive functionalities of consciousness in the domain of autonomous robotics is also discussed.
-
The development of conscious machines faces a number of difficult issues such as the apparent immateriality of mind, qualia and self-awareness. Consciousness-related cognitive processes such as perception, imagination, motivation and inner speech also pose a technical challenge. It is foreseen that the development of machine consciousness would call for a system approach: the developer of conscious machines should consider complete systems that integrate the cognitive processes seamlessly and process information in a transparent way with representational and non-representational information-processing modes. An overview of the main issues is given and some possible solutions are outlined.
-
Over sixty years ago, Kenneth Craik noted that, if an organism (or an artificial agent) carried 'a small-scale model of external reality and of its own possible actions within its head', it could use the model to behave intelligently. This paper argues that the possible actions might best be represented by interactions between a model of reality and a model of the agent, and that, in such an arrangement, the internal model of the agent might be a transparent model of the sort recently discussed by Metzinger, and so might offer a useful analogue of a conscious entity. The CRONOS project has built a robot functionally similar to a human that has been provided with an internal model of itself and of the world to be used in the way suggested by Craik; when the system is completed, it will be possible to study its operation from the perspective not only of artificial intelligence, but also of machine consciousness.
-
We review computational intelligence methods of sensory perception and cognitive functions in animals, humans, and artificial devices. Top-down symbolic methods and bottom-up sub-symbolic approaches are described. In recent years, computational intelligence, cognitive science and neuroscience have achieved a level of maturity that allows integration of top-down and bottom-up approaches in modeling the brain. Continuous adaptation and learning is a key component of computationally intelligent devices, which is achieved using dynamic models of cognition and consciousness. Human cognition performs a granulation of the seemingly homogeneous temporal sequences of perceptual experiences into meaningful and comprehensible chunks of concepts and complex behavioral schemas. They are accessed during action selection and conscious decision making as part of the intentional cognitive cycle. Implementations in computational and robotic environments are demonstrated.
-
Thinking and being conscious are two fundamental aspects of the subject. Although both are challenging, conscious experience has often been considered the more elusive (Chalmers 1996). However, in recent years, several researchers have addressed the hypothesis of designing and implementing models for artificial consciousness—on one hand there is hope of being able to design a model for consciousness, on the other hand the actual implementations of such models could be helpful for understanding consciousness. The traditional field of Artificial Intelligence is now flanked by the seminal field of artificial or machine consciousness. In this chapter I will analyse the current state of the art of models of consciousness and then I will outline an externalist theory of the conscious mind that is compatible with the design and implementation of an artificial conscious being. As I argue in the following, this task can be profitably approached once we abandon the dualist framework of traditional Cartesian substance metaphysics and adopt a process-metaphysical stance. Thus, I sketch an alternative externalist process-based ontological framework. From within this framework, I venture to suggest a series of constraints for a consciousness-oriented architecture.
-
Machine consciousness exists already in organic systems and it is only a matter of time -- and some agreement -- before it will be realised in reverse-engineered organic systems and forward-engineered inorganic systems. The agreement must be over the preconditions that must first be met if the enterprise is to be successful, and it is these preconditions, for instance, being a socially-embedded, structurally-coupled and dynamic, goal-directed entity that organises its perceptual input and enacts its world through the application of both a cognitive and kinaesthetic imagination, that I shall concentrate on presenting in this paper. It will become clear that these preconditions will present engineers with a tall order, but not, I will argue, an impossible one. After all, we might agree with Freeman and Núñez's claim that the machine metaphor has restricted the expectations of the cognitive sciences (Freeman & Núñez, 1999); but it is a double-edged sword, since our limited expectations about machines also narrow the potential of our cognitive science.