While the recent special issue of JCS on machine consciousness (Volume 14, Issue 7) was in preparation, a collection of papers on the same topic, entitled Artificial Consciousness and edited by Antonio Chella and Riccardo Manzotti, was published. The editors of the JCS special issue, Ron Chrisley, Robert Clowes and Steve Torrance, thought it would be a timely and productive move to have authors of papers in their collection review the papers in the Chella and Manzotti book, and include these reviews in the special issue of the journal. Eight of the JCS authors (plus Uziel Awret) volunteered to review one or more of the fifteen papers in Artificial Consciousness; these individual reviews were then collected together with a minimal amount of editing to produce a seamless chapter-by-chapter review of the entire book. Because the number and length of contributions to the JCS issue was greater than expected, the collective review of Artificial Consciousness had to be omitted, but here at last it is. Each paper's review is written by a single author, so any comments made may not reflect the opinions of all nine of the joint authors!
-
A major step toward robot consciousness is giving a robot the capability of self-consciousness. We propose that robot self-consciousness is based on the robot's higher-order perception, in the sense that first-order perception is the immediate perception of the outer world, while higher-order perception is perception of the robot's inner world.
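The first-order/higher-order split can be illustrated with a minimal sketch; all class and attribute names here are invented for illustration, not taken from the paper.

```python
class Robot:
    """Minimal sketch: first-order vs. higher-order perception."""

    def __init__(self):
        self.percepts = []    # first-order: states of the outer world
        self.inner_log = []   # higher-order: records of the robot's own perceptual acts

    def perceive(self, stimulus):
        # First-order perception: immediate perception of the outer world.
        percept = {"stimulus": stimulus}
        self.percepts.append(percept)
        # Higher-order perception: the robot registers its own act of perceiving.
        self.inner_log.append({"event": "perceived", "content": percept})
        return percept

    def introspect(self):
        # Self-consciousness modeled as access to the inner world.
        return list(self.inner_log)


robot = Robot()
robot.perceive("red ball")
assert len(robot.percepts) == 1
assert robot.introspect()[0]["event"] == "perceived"
```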
-
Over sixty years ago, Kenneth Craik noted that, if an organism (or an artificial agent) carried 'a small-scale model of external reality and of its own possible actions within its head', it could use the model to behave intelligently. This paper argues that the possible actions might best be represented by interactions between a model of reality and a model of the agent, and that, in such an arrangement, the internal model of the agent might be a transparent model of the sort recently discussed by Metzinger, and so might offer a useful analogue of a conscious entity. The CRONOS project has built a robot functionally similar to a human that has been provided with an internal model of itself and of the world to be used in the way suggested by Craik; when the system is completed, it will be possible to study its operation from the perspective not only of artificial intelligence, but also of machine consciousness.
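Craik's proposal of evaluating possible actions by running them through an internal model before acting can be sketched as follows; the toy world, the cost function, and every name in this snippet are assumptions of the illustration, not details of the CRONOS system.

```python
# Craik-style internal simulation: a model of the world and a model of the
# agent's possible actions interact in simulation, and only the best
# simulated action is executed.

def world_step(world, action):
    # Toy world model: the agent moves along a line toward a goal position.
    return {"agent_pos": world["agent_pos"] + action, "goal": world["goal"]}

def evaluate(world):
    # Lower is better: distance from the goal.
    return abs(world["goal"] - world["agent_pos"])

def choose_action(world_model, candidate_actions):
    # Try each candidate action in the internal model; pick the best outcome.
    return min(candidate_actions,
               key=lambda a: evaluate(world_step(world_model, a)))

model = {"agent_pos": 0, "goal": 3}
best = choose_action(model, [-1, 0, 1])
assert best == 1  # moving toward the goal wins in simulation
```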
-
We review computational intelligence methods of sensory perception and cognitive functions in animals, humans, and artificial devices. Top-down symbolic methods and bottom-up sub-symbolic approaches are described. In recent years, computational intelligence, cognitive science and neuroscience have achieved a level of maturity that allows integration of top-down and bottom-up approaches in modeling the brain. Continuous adaptation and learning are key components of computationally intelligent devices; this is achieved using dynamic models of cognition and consciousness. Human cognition performs a granulation of the seemingly homogeneous temporal sequences of perceptual experiences into meaningful and comprehensible chunks of concepts and complex behavioral schemas. These are accessed during action selection and conscious decision making as part of the intentional cognitive cycle. Implementations in computational and robotic environments are demonstrated.
-
In this article I suggest how we might conceptualize some kind of artificial consciousness as an ultimate development of Artificial Life. This entity will be embodied in some sort of constructed (biological or non-biological) body. The contention is that consciousness within self-organized entities is not only possible but inevitable. The basic sensory and interactive processes by which an organism operates within an environment are the very processes that are necessary for consciousness. I then look at likely criteria for consciousness, and point to an architecture of the cognitive layer that maps onto the physiological layer, the brain. While evolutionary algorithms and neural nets will be at the heart of the production of artificial intelligences, there is a particular architectural organization that may be necessary in the production of conscious artefacts. This involves the operation of multiple layers of feedback loops: in the anatomy of the brain, in the social construction of the contents of consciousness, and in particular in the self-regulation necessary for the continued operation of metabolically organized systems. Finally I make some comments on the ethics of such a procedure.
-
The idea that internal models of the world might be useful has generally been rejected by embodied AI for the same reasons that led to its rejection by behaviour based robotics. This paper re-examines the issue from historical, biological, and functional perspectives; the view that emerges indicates that internal models are essential for achieving cognition, that their use is widespread in biological systems, and that there are several good but neglected examples of their use within embodied AI. Consideration of the example of a hypothetical autonomous embodied agent that has to execute a complex mission in a dynamic, partially unknown, and hostile environment leads to the conclusion that the necessary cognitive architecture is likely to contain separate but interacting models of the body and of the world. This arrangement is shown to have intriguing parallels with new findings on the infrastructure of consciousness, leading to the speculation that the reintroduction of internal models into embodied AI may lead not only to improved machine cognition but also, in the long run, to machine consciousness.
-
We are engineers, and our view of consciousness is shaped by an engineering ambition: we would like to build a conscious machine. We begin by acknowledging that we may be a little disadvantaged, in that consciousness studies do not form part of the engineering curriculum, and so we may be starting from a position of considerable ignorance as regards the study of consciousness itself. In practice, however, this may not set us back very far; almost a decade ago, Crick wrote: 'Everyone has a rough idea of what is meant by consciousness. It is better to avoid a precise definition of consciousness because of the dangers of premature definition. Until the problem is understood much better, any attempt at a formal definition is likely to be either misleading or overly restrictive, or both' (Crick, 1994). This seems to be as true now as it was then, although the identification of different aspects of consciousness (P-consciousness, A-consciousness, self-consciousness, and monitoring consciousness) by Block (1995) has certainly brought a degree of clarification. On the other hand, there is little doubt that consciousness does seem to be something to do with the operation of a sophisticated control system (the human brain), and we can claim more familiarity with control systems than can most philosophers, so perhaps we can make up some ground there.
-
Conscious behavior is hypothesized to be governed by the dynamics of the neural architecture of the brain. A general model of an artificial consciousness algorithm is presented, and applied to a one-dimensional feedback control system. A new learning algorithm for learning functional relations is presented and shown to be biologically grounded. The consciousness algorithm uses predictive simulation and evaluation to let the example system relearn new internal and external models after it is damaged.
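The loop described here (predict with an internal model, act, compare prediction with outcome, and re-learn the model when it proves wrong) can be sketched for a one-dimensional feedback control system as follows. The plant, the form of the internal model, and the learning rule are all assumptions of this illustration, not taken from the paper.

```python
# A controller keeps an internal model of a one-dimensional plant. When the
# plant is "damaged" (its gain changes), prediction error drives the
# controller to re-learn the model and restore good control.

def plant(x, u, gain=1.0):
    # True one-dimensional system; `gain` changes when the plant is damaged.
    return x + gain * u

class AdaptiveController:
    def __init__(self):
        self.gain_estimate = 1.0   # internal model of the plant's gain

    def predict(self, x, u):
        # Predictive simulation: what the internal model expects to happen.
        return x + self.gain_estimate * u

    def control(self, x, target):
        # Feedback law derived from the internal model.
        return (target - x) / self.gain_estimate

    def update(self, x, u, x_next, lr=0.5):
        # Re-learn the model from prediction error (one gradient-like step).
        error = x_next - self.predict(x, u)
        self.gain_estimate += lr * error * u

ctrl = AdaptiveController()
x, target = 0.0, 1.0
for _ in range(50):
    u = ctrl.control(x, target)
    x_next = plant(x, u, gain=0.5)   # plant damaged: true gain halved
    ctrl.update(x, u, x_next)
    x = x_next

assert abs(x - target) < 0.01        # control recovered despite damage
assert 0.5 < ctrl.gain_estimate < 1.0  # model moved toward the true gain
```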
-
This paper investigates human consciousness in comparison with a robot's internal state. On the basis of Husserlian phenomenology, nine requirements for a model of consciousness are proposed from different technical aspects: self, intentionality, anticipation, feedback process, certainty, embodiment, otherness, emotionality and chaos. Consciousness-based Architecture (CBA), developed previously, is a software architecture for mobile robots with an evolutionary hierarchy of the relationship between consciousness and behavior; it is analyzed here against the proposed requirements. Experiments with this architecture running on two mobile robots demonstrate the emergence of self and, in part, intentionality, anticipation, feedback process, embodiment and emotionality. Modification of CBA will be necessary to better explain the emergence of self in terms of the relationship between consciousness and behavior.
-
This paper proposes an approach to designing the behavior, and the associated subjective world, of a small robot that behaves like an animal. The approach employs a hierarchical model of the relation between consciousness and behavior. The basic idea of this model is that consciousness appears at a level in the hierarchical structure when an action at the immediately lower level is inhibited by internal or external causes, and that this consciousness drives a chosen higher-level action. A computer simulation, run on a Macintosh, shows the behavior of an artificial animal ranging from reflex actions to the catching of food. The instantaneous consciousness that appears when behavior is inhibited is visualized on screen alongside the behavior, using colors keyed to the animal's emotions.
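The inhibition rule described above (consciousness appears at a level when the action immediately below it is inhibited, and then drives a higher action) can be sketched roughly as follows; the level names and the return convention are invented for this illustration.

```python
# Behavior hierarchy from reflex upward; index 0 is the lowest level.
LEVELS = ["reflex", "wander", "approach_food", "catch_food"]

def act(level_index, inhibited):
    """Return (level at which consciousness appears, chosen action).

    `inhibited` is the set of level names whose actions are currently
    blocked by internal or external causes.
    """
    for i in range(level_index, len(LEVELS)):
        if LEVELS[i] not in inhibited:
            # Consciousness appears at level i only if we had to climb,
            # i.e. the action below it was inhibited.
            conscious_level = LEVELS[i] if i > level_index else None
            return conscious_level, LEVELS[i]
    return None, None  # everything inhibited: no action possible

# Reflex is blocked, so consciousness appears at the "wander" level and
# drives the higher action.
assert act(0, {"reflex"}) == ("wander", "wander")
# Nothing blocked: pure reflex, no consciousness appears.
assert act(0, set()) == (None, "reflex")
```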
-
Shanon provides us with a well-reasoned and careful consideration of the nature of consciousness, and argues from this understanding that machines could not be conscious. A reconsideration of Shanon's discussion is undertaken to determine what it is that computers lack that prevents them from being conscious. The conclusion is that, under scrutiny, it is hard to establish a priori that machines could not be conscious.
-
Is artificial consciousness theoretically possible? Is it plausible? If so, is it technically feasible? To make progress on these questions, it is necessary to lay some groundwork clarifying the logical and empirical conditions for artificial consciousness to arise and the meaning of the relevant terms involved. Consciousness is a polysemic word: researchers from different fields, including neuroscience, Artificial Intelligence, robotics, and philosophy, among others, sometimes use different terms to refer to the same phenomena, or the same terms to refer to different phenomena. If we want to pursue artificial consciousness, a proper definition of the key concepts is therefore required. Here, after some logical and conceptual preliminaries, we argue for the necessity of using dimensions and profiles of consciousness for a balanced discussion of their possible instantiation or realisation in artificial systems. Our primary goal in this paper is to review the main theoretical questions that arise in the domain of artificial consciousness. On the basis of this review, we propose to assess the issue of artificial consciousness within a multidimensional account. The theoretical possibility of artificial consciousness is already presumed within some theoretical frameworks; however, empirical possibility cannot simply be deduced from these frameworks but needs independent empirical validation. Analysing the complexity of consciousness, we identify its constituents and related components/dimensions, and within this analytic approach we reflect pragmatically on the general challenges that the creation of artificial consciousness confronts. Our aim is not to demonstrate conclusively either the theoretical plausibility or the empirical feasibility of artificial consciousness, but to outline a research strategy in which we propose that "awareness" may be a potentially realistic target for realisation in artificial systems.
-
In Isaac Asimov’s groundbreaking anthology, I, Robot, the intricacies of human and robotic interactions take center stage, delving deep into questions of consciousness, rights, and morality. Characterized by Asimov’s unique blend of science fiction and philosophical pondering, the stories establish a framework for reflecting on the evolving dynamics of an advanced technological society in space. Robots are capable of interacting with humans, interpreting complicated orders, acting autonomously, and helping humans with dull or dangerous work such as calculating and mining, even replacing human workers. Asimov also formulated the Three Laws of Robotics to ensure that all robots function in an orderly fashion. Through the lens of posthumanism, the anthology is examined for its portrayal of the blurred boundaries between human and artificial intelligences. Robots and humans co-exist in society; however, because of the limitations of the Three Laws of Robotics, logical contradictions inevitably appear and lead to the dysfunction of robots. As a result, I, Robot emerges as a poignant critique of humanity’s ethical and existential challenges in the face of rapid technological advancement. Building on existing research, this article attempts to forge a new perspective by reflecting on the broader implications of artificial intelligence in Asimov’s works through the lens of posthumanism. It considers the existential questions of AI at the level of consciousness, explores the egalitarian relationship between humans and machines from a rights perspective, and analyzes the concepts of “humanity” and the “human-like” from a moral and ethical standpoint. It encourages readers to recognize that robots are not mere slaves to humans; instead, humans should view AI with equality and with reverence for the technological advancements of the era.
Humanity should step away from anthropocentrism, not solely viewing humans as the measure of all things in this rapidly evolving era of AI, and properly handle the relationship between humans and non-humans. Currently, with rapid advancements in science and technology, the world of human-machine coexistence depicted by Isaac Asimov is increasingly becoming a reality. The emergence of artificial intelligences like AlphaGo and ChatGPT constantly reminds us of the advent of a post-human era. This article aims to examine the connections between AI and humans, discussing the dynamic interaction and mutual shaping between robots and human society. This article hopes to provide new thoughts and strategies for understanding and addressing the challenges brought by the age of artificial intelligence.