Full bibliography (615 resources)
-
We review computational intelligence methods of sensory perception and cognitive functions in animals, humans, and artificial devices. Top-down symbolic methods and bottom-up sub-symbolic approaches are described. In recent years, computational intelligence, cognitive science and neuroscience have achieved a level of maturity that allows integration of top-down and bottom-up approaches in modeling the brain. Continuous adaptation and learning are key components of computationally intelligent devices, achieved using dynamic models of cognition and consciousness. Human cognition performs a granulation of the seemingly homogeneous temporal sequences of perceptual experiences into meaningful and comprehensible chunks of concepts and complex behavioral schemas. These chunks and schemas are accessed during action selection and conscious decision making as part of the intentional cognitive cycle. Implementations in computational and robotic environments are demonstrated.
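As a rough illustration of the granulation idea, the sketch below segments a stream of percepts into chunks at points of abrupt change; the threshold-based change detector and all names are illustrative assumptions, not methods taken from the reviewed work.

```python
def granulate(stream, threshold=1.0):
    """Split a sequence of scalar percepts into chunks at large jumps.

    A minimal stand-in for cognitive granulation: a chunk ends wherever
    successive percepts differ by more than `threshold`.
    """
    chunks, current = [], [stream[0]]
    for prev, nxt in zip(stream, stream[1:]):
        if abs(nxt - prev) > threshold:   # abrupt change closes the chunk
            chunks.append(current)
            current = []
        current.append(nxt)
    chunks.append(current)
    return chunks

# A slowly varying signal with two jumps yields three chunks.
print(granulate([0.0, 0.1, 0.2, 3.0, 3.1, -1.0, -0.9]))
# -> [[0.0, 0.1, 0.2], [3.0, 3.1], [-1.0, -0.9]]
```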
-
In this article I suggest how we might conceptualize some kind of artificial consciousness as an ultimate development of Artificial Life. This entity would be embodied in some sort of constructed (biological or non-biological) body. The contention is that consciousness within self-organized entities is not only possible but inevitable. The basic sensory and interactive processes by which an organism operates within an environment are the very processes necessary for consciousness. I then look at likely criteria for consciousness, and point to an architecture of the cognitive layer that maps onto the physiological layer, the brain. While evolutionary algorithms and neural nets will be at the heart of the production of artificial intelligences, there is a particular architectural organization that may be necessary for the production of conscious artefacts. This involves the operation of multiple layers of feedback loops: in the anatomy of the brain, in the social construction of the contents of consciousness, and in particular in the self-regulation necessary for the continued operation of metabolically organized systems. Finally, I make some comments on the ethics of such a procedure.
-
The idea that internal models of the world might be useful has generally been rejected by embodied AI, for the same reasons that led to its rejection by behaviour-based robotics. This paper re-examines the issue from historical, biological, and functional perspectives; the view that emerges indicates that internal models are essential for achieving cognition, that their use is widespread in biological systems, and that there are several good but neglected examples of their use within embodied AI. Consideration of the example of a hypothetical autonomous embodied agent that has to execute a complex mission in a dynamic, partially unknown, and hostile environment leads to the conclusion that the necessary cognitive architecture is likely to contain separate but interacting models of the body and of the world. This arrangement is shown to have intriguing parallels with new findings on the infrastructure of consciousness, leading to the speculation that the reintroduction of internal models into embodied AI may lead not only to improved machine cognition but also, in the long run, to machine consciousness.
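To make the "separate but interacting models" arrangement concrete, here is a minimal sketch of an agent that simulates candidate actions through both a body model and a world model before committing to one. The toy dynamics, class names, and scoring rule are hypothetical and are not taken from the paper.

```python
class BodyModel:
    """Predicts the agent's own state (here, remaining energy) after an action."""
    def predict(self, energy, action):
        return energy - abs(action)              # every move costs energy

class WorldModel:
    """Predicts how the (partially known) environment responds to an action."""
    def predict(self, position, action):
        return position + action                 # toy 1-D world dynamics

def choose_action(energy, position, goal, actions, body, world):
    """Internal simulation before execution: pick the action whose predicted
    outcome best approaches the goal while keeping the body viable."""
    def score(a):
        e = body.predict(energy, a)
        p = world.predict(position, a)
        return abs(goal - p) if e > 0 else float("inf")  # dead body = no plan
    return min(actions, key=score)

print(choose_action(energy=5.0, position=0.0, goal=2.0,
                    actions=[-1.0, 0.0, 1.0, 2.0],
                    body=BodyModel(), world=WorldModel()))  # -> 2.0
```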
-
We are engineers, and our view of consciousness is shaped by an engineering ambition: we would like to build a conscious machine. We begin by acknowledging that we may be a little disadvantaged, in that consciousness studies do not form part of the engineering curriculum, and so we may be starting from a position of considerable ignorance as regards the study of consciousness itself. In practice, however, this may not set us back very far; almost a decade ago, Crick wrote: 'Everyone has a rough idea of what is meant by consciousness. It is better to avoid a precise definition of consciousness because of the dangers of premature definition. Until the problem is understood much better, any attempt at a formal definition is likely to be either misleading or overly restrictive, or both' (Crick, 1994). This seems to be as true now as it was then, although the identification of different aspects of consciousness (P-consciousness, A-consciousness, self-consciousness, and monitoring consciousness) by Block (1995) has certainly brought a degree of clarification. On the other hand, there is little doubt that consciousness does seem to be something to do with the operation of a sophisticated control system (the human brain), and we can claim more familiarity with control systems than can most philosophers, so perhaps we can make up some ground there.
-
Conscious behavior is hypothesized to be governed by the dynamics of the neural architecture of the brain. A general model of an artificial consciousness algorithm is presented, and applied to a one-dimensional feedback control system. A new learning algorithm for learning functional relations is presented and shown to be biologically grounded. The consciousness algorithm uses predictive simulation and evaluation to let the example system relearn new internal and external models after it is damaged.
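As a hedged sketch of the general idea (not the paper's actual algorithm), the fragment below controls a one-dimensional plant by predictively simulating candidate commands through a learned internal model, evaluating the predicted error, and relearning the model from prediction errors; when the plant gain is halved mid-run ("damage"), the internal model adapts. The plant dynamics, learning rate, and candidate set are illustrative assumptions.

```python
def simulate_control(gain_schedule, target=1.0, lr=0.2):
    state, model_gain = 0.0, 1.0             # learned internal model of the plant
    candidates = [-1.0, -0.5, 0.0, 0.5, 1.0]
    for true_gain in gain_schedule:
        # Predictive simulation and evaluation: try each command internally.
        u = min(candidates,
                key=lambda c: abs(target - (0.9 * state + model_gain * c)))
        predicted = 0.9 * state + model_gain * u
        state = 0.9 * state + true_gain * u   # the actual, possibly damaged plant
        if u != 0.0:                          # relearn from the prediction error
            model_gain += lr * (state - predicted) / u
    return state, model_gain

# The plant gain halves mid-run ("damage"); the internal model re-converges.
print(simulate_control([1.0] * 20 + [0.5] * 20))
```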
-
This paper investigates human consciousness in comparison with a robot's internal state. On the basis of Husserlian phenomenology, nine requirements for a model of consciousness are proposed, covering different technical aspects: self, intentionality, anticipation, feedback process, certainty, embodiment, otherness, emotionality and chaos. Consciousness-based Architecture (CBA) for mobile robots was analyzed against the requirements proposed for a consciousness model. CBA, developed previously, is a software architecture with an evolutionary hierarchy of the relationship between consciousness and behavior. Experiments with this architecture loaded on two mobile robots demonstrate the emergence of self, along with some of the other requirements: intentionality, anticipation, feedback process, embodiment and emotionality. Modification of CBA will be necessary to better explain the emergence of self in terms of the relationship between consciousness and behavior.
-
This paper proposes an approach to designing the behavior, and the accompanying subjective world, of a small robot so that it behaves like an animal. The approach employs a hierarchical model of the relation between consciousness and behavior. The basic idea of this model is that consciousness appears at a given level in the hierarchical structure when an action on the level immediately below is inhibited by internal or external causes, and that the appearing consciousness drives a chosen higher action. A computer simulation on a Mac shows the behavior of an artificial animal, ranging from reflex actions to the catching of food. The instantaneous consciousness that appears due to inhibited behavior is visualized on screen together with the behavior, using colors that correspond to the animal's emotions.
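A minimal sketch of this inhibition rule follows: control climbs the hierarchy to the first level whose action is not inhibited, and "consciousness" at that level drives the chosen action. The concrete levels, the world representation, and the inhibition tests are invented for illustration.

```python
# Each level maps the world to an action, or to None when that action is
# inhibited by internal or external causes.
LEVELS = [
    ("reflex",      lambda world: "withdraw" if world["pain"] else None),
    ("locomotion",  lambda world: "wander" if not world["blocked"] else None),
    ("exploration", lambda world: "detour"),   # highest level, always available
]

def act(world):
    """Consciousness appears at the first level whose lower neighbors are all
    inhibited, and it drives that level's action."""
    for name, policy in LEVELS:
        action = policy(world)
        if action is not None:                 # not inhibited: act at this level
            return name, action

# A blocked path inhibits locomotion, so awareness rises to exploration.
print(act({"pain": False, "blocked": True}))   # -> ('exploration', 'detour')
```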
-
Shanon provides us with a well-reasoned and careful consideration of the nature of consciousness, and argues from this understanding that machines could not be conscious. A reconsideration of Shanon's discussion is undertaken to determine what it is that computers lack that would prevent them from being conscious. The conclusion is that, under scrutiny, it is hard to establish a priori that machines could not be conscious.
-
Is artificial consciousness theoretically possible? Is it plausible? If so, is it technically feasible? To make progress on these questions, it is necessary to lay some groundwork clarifying the logical and empirical conditions for artificial consciousness to arise and the meaning of the relevant terms involved. Consciousness is a polysemic word: researchers from different fields, including neuroscience, Artificial Intelligence, robotics, and philosophy, among others, sometimes use different terms to refer to the same phenomena, or the same terms to refer to different phenomena. If we want to pursue artificial consciousness, a proper definition of the key concepts is therefore required. Here, after some logical and conceptual preliminaries, we argue for the necessity of using dimensions and profiles of consciousness for a balanced discussion of their possible instantiation or realisation in artificial systems. Our primary goal in this paper is to review the main theoretical questions that arise in the domain of artificial consciousness. On the basis of this review, we propose to assess the issue of artificial consciousness within a multidimensional account. The theoretical possibility of artificial consciousness is already presumed within some theoretical frameworks; however, empirical possibility cannot simply be deduced from these frameworks but needs independent empirical validation. Analysing the complexity of consciousness, we identify its constituents and related components and dimensions, and within this analytic approach we reflect pragmatically on the general challenges that the creation of artificial consciousness confronts. Our aim is not to demonstrate conclusively either the theoretical plausibility or the empirical feasibility of artificial consciousness, but to outline a research strategy in which we propose that "awareness" may be a potentially realistic target for realisation in artificial systems.
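One way to make the multidimensional account tangible is to represent each system's consciousness profile as graded scores along several dimensions rather than as a yes/no verdict. In the sketch below, the dimension names and scores are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ConsciousnessProfile:
    """A profile: graded scores in [0, 1] along named dimensions."""
    scores: dict = field(default_factory=dict)

    def compare(self, other):
        """Per-dimension difference between two profiles."""
        dims = self.scores.keys() | other.scores.keys()
        return {d: round(self.scores.get(d, 0.0) - other.scores.get(d, 0.0), 2)
                for d in dims}

human = ConsciousnessProfile({"awareness": 0.9, "self_model": 0.9, "valence": 0.9})
chatbot = ConsciousnessProfile({"awareness": 0.2, "self_model": 0.1, "valence": 0.0})
print(human.compare(chatbot))   # a profile gap, not a binary verdict
```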
-
In Isaac Asimov’s groundbreaking anthology, I, Robot, the intricacies of human and robotic interactions take center stage, delving deep into questions of consciousness, rights, and morality. Characterized by Asimov’s unique blend of science fiction and philosophical pondering, the stories establish a framework for reflecting on the evolving dynamics of an advanced technological society in space. Robots are capable of interacting with humans, interpreting complicated orders, and acting autonomously; they help humans with dull or dangerous work such as calculating and mining, even replacing human workers. Asimov also formulated the Three Laws of Robotics to ensure that all robots function in an orderly way. Through the lens of posthumanism, the anthology is examined for its portrayal of the blurred boundaries between human and artificial intelligences. Robots and humans co-exist in society; however, owing to the limitations of the “Three Laws of Robotics”, logical contradictions inevitably arise and lead to the dysfunction of robots. As a result, I, Robot emerges as a poignant critique of humanity’s ethical and existential challenges in the face of rapid technological advancement. Building on existing research, this article attempts to forge a new perspective by reflecting on the broader implications of artificial intelligence in Asimov’s works through the lens of posthumanism. It considers the existential questions of AI at the level of consciousness, explores the egalitarian relationship between humans and machines from a rights perspective, and analyzes the concepts of “humanity” and “human-likeness” from a moral and ethical standpoint. It encourages readers to recognize that robots are not mere slaves to humans; instead, humans should approach AI as equals and with respect for the technological advancements of the era. Humanity should step away from anthropocentrism, no longer viewing humans as the sole measure of all things in this rapidly evolving era of AI, and should properly handle the relationship between humans and non-humans. With rapid advances in science and technology, the world of human-machine coexistence depicted by Isaac Asimov is increasingly becoming a reality. The emergence of artificial intelligences such as AlphaGo and ChatGPT constantly reminds us of the advent of a post-human era. This article examines the connections between AI and humans, discussing the dynamic interaction and mutual shaping between robots and human society, and hopes to provide new thoughts and strategies for understanding and addressing the challenges brought by the age of artificial intelligence.
-
The question of whether artificial intelligence (AI) can be considered conscious, and therefore should be evaluated through a moral lens, has surfaced in recent years. In this paper, we argue that whether AI is conscious is less of a concern than the fact that AI can be considered conscious by users during human-AI interaction, because this ascription of consciousness can lead to carry-over effects on human-human interaction. When AI is viewed as conscious like a human, how people treat AI appears to carry over into how they treat other people, by activating schemas that are congruent with those activated during interactions with humans. In light of this potential, we might consider regulating how we treat AI, or how we build AI to evoke certain kinds of treatment from users, but not because AI is inherently sentient. This argument focuses on humanlike, social-actor AI such as chatbots, digital voice assistants, and social robots. In the first part of the paper, we provide evidence for carry-over effects between perceptions of AI consciousness and behavior toward humans through literature on human-computer interaction, human-AI interaction, and the psychology of artificial agents. In the second part of the paper, we detail how the mechanism of schema activation allows us to test consciousness perception as a driver of carry-over effects between human-AI interaction and human-human interaction. In essence, perceiving AI as conscious like a human, thereby activating congruent mind schemas during interaction, drives behaviors and perceptions of AI that can carry over into how we treat humans. Therefore, the fact that people can ascribe humanlike consciousness to AI is worth considering, and moral protection for AI is also worth considering, regardless of AI’s inherent conscious or moral status.
-
Butlin et al. (2023) propose a rubric for evaluating AI systems based on indicator properties derived from existing theories of consciousness, suggesting that while current AIs do not possess consciousness, these indicators are pivotal for future developments towards artificial consciousness. The current paper critiques the approach of Butlin et al., arguing that the complexity of consciousness, characterized by subjective experience, poses significant challenges for its operationalization and measurement, thus complicating its replication in AI. The commentary further explores the limitations of current methodologies in artificial consciousness research, pointing to the necessity of out-of-the-box thinking and of integrating individual-differences research in cognitive psychology, particularly in the areas of attention, cognitive control, autobiographical memory, and Theory of Mind (ToM), to advance the understanding and development of artificial consciousness.
-
This chapter critically questions the claim that it is possible to emulate human consciousness and consciousness-dependent activity with Artificial Intelligence and thereby create conscious artificial systems. The analysis is based on neurophysiological research and theory. In-depth scrutiny of the field, and of the prospects for converting neuroscience research into the type of algorithmic programs utilized in computer-based AI systems to create artificially conscious machines, leads to the conclusion that such a conversion is unlikely ever to be possible because of the complexity of unconscious and conscious brain processing and their interaction. It is through the latter that the brain opens the doors to consciousness, a property of the human mind that no other living species has developed, for reasons that are made clear in this chapter. As a consequence, many of the projected goals of AI will remain forever unrealizable. Although this work does not directly examine the question within a philosophy-of-mind framework by, for example, discussing why identifying consciousness with the activity of electronic circuits is first and foremost a category mistake in terms of scientific reasoning, the approach offered in the chapter is complementary to that standpoint and illustrates various aspects of the problem under a monist, from-brain-to-mind premise.