Full bibliography 558 resources
-
A model of artificial conscience as an element of artificial consciousness is described. The proposed paradigm corresponds to a previously developed general scheme of artificial intelligence. The functional role of the artificial conscience subsystem is defined and a software prototype of it is proposed.
-
Abstract How can we determine if AI is conscious? The chapter begins by illustrating that there are potentially very serious real-world costs to getting facts about AI consciousness wrong. It then proposes a provisional framework for investigating artificial consciousness that involves several tests or markers. One test is the AI Consciousness Test, which challenges an AI with a series of increasingly demanding natural-language interactions. Another test is based on the Integrated Information Theory, developed by Giulio Tononi and others, and considers whether a machine has a high level of “integrated information.” A third test is a Chip Test, in which, speculatively, an individual’s brain is gradually replaced with durable microchips. If the individual being tested continues to report having phenomenal consciousness, the chapter argues that this could be a reason to believe that some machines could have consciousness.
-
The field of artificial consciousness (AC) has largely developed outside of mainstream artificial intelligence (AI), with separate goals and criteria for success and with only a minimal exchange of ideas. This is unfortunate as the two fields appear to be synergistic. For example, here we consider the question of how concepts developed in AC research might contribute to more effective future AI systems. We first briefly discuss several past hypotheses about the function(s) of human consciousness, and present our own hypothesis that short-term working memory and very rapid learning should be a central concern in such matters. In this context, we then present ideas about how integrating concepts from AC into AI systems to develop an artificial conscious intelligence (ACI) could both produce more effective AI technology and contribute to a deeper scientific understanding of the fundamental nature of consciousness and intelligence.
-
The science of consciousness has made great strides in recent decades. However, the proliferation of competing theories makes it difficult to reach consensus about artificial consciousness. While for purely scientific purposes we might wish to adopt a ‘wait and see’ attitude, we may soon face practical and ethical questions about whether, for example, artificial systems are capable of suffering. Moreover, many of the methods used for assessing consciousness in humans and even non-human animals are not straightforwardly applicable to artificial systems. With these challenges in mind, I propose that we look for ecumenical heuristics for artificial consciousness to enable us to make tentative assessments of the likelihood of consciousness arising in different artificial systems. I argue that such heuristics should have three main features: they should be (i) intuitively plausible, (ii) theoretically neutral, and (iii) scientifically tractable. I claim that the concept of general intelligence — understood as a capacity for robust, flexible, and integrated cognition and behavior — satisfies these criteria and may thus provide the basis for such a heuristic, allowing us to make initial cautious estimations of which artificial systems are most likely to be conscious.
-
The research work presented in this article investigates and explains the conceptual mechanisms of consciousness and common-sense thinking of animates. These mechanisms are computationally simulated on artificial agents as strategic rules to analyze and compare the performance of agents in critical and dynamic environments. Awareness of, and attention to, specific parameters that affect the performance of agents determine the consciousness level in agents. Common sense is a set of beliefs accepted to be true among a group of agents engaged in a common purpose, with or without self-experience. Common-sense agents are conscious agents endowed with a few common-sense assumptions. The environment so created contains attackers that depend on the agents in the survival-food chain. These attackers create a threat mental state in agents that can affect their conscious and common-sense behaviors. The agents are built with COCOCA (Consciousness and Common sense Cognitive Architecture), a multi-layer cognitive architecture with five columns and six layers of cognitive processing for each percept of an agent. The conscious agents self-learn strategies for threat management and energy-level maintenance. Experimentation conducted in this research work demonstrates animate-level intelligence in the agents’ problem-solving capabilities, decision-making and reasoning in critical situations.
-
Human consciousness is our most perplexing quality; still, an adequate description of its workings has not yet appeared. One of the most promising ways to address this issue is to model consciousness with artificial intelligence (AI). This paper attempts to do so at a theoretical level with the methods of philosophy. First I will review the relevant papers concerning human consciousness. Then, considering the state of the art of AI, I will arrive at a model of artificial consciousness.
-
Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, be a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits make for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and, similarly, machine consciousness. N. Katherine Hayles’s novel conception of nonconscious cognition in her analysis of the human cognition-consciousness connection is discussed in relation to how nonconscious cognition can be envisioned and exacerbated in embodied AI. Finally, this paper offers a way of understanding the concept of suffering that differs from the conventional sense of attributing it to either a purely physical state or a conscious state, instead grounding at least one type of suffering in this form of cognition.
-
Humans admired the way fish swim, yet today they sail faster than any fish. They longed to fly like birds, yet have flown far higher. They searched for wisdom, and now all the knowledge accumulated throughout history is available in a few clicks. Human evolution is about to reach its peak through the Technological Singularity, which can be understood as the future milestone reached at the moment a computer program can think like a human, yet with instant access to all information ever recorded by society. It will not be like a man, but more intelligent than all of mankind in history. So we face a big question: will this new entity have consciousness? Through a study of the levels of autonomy of intelligent agents, and in a timeless dialogue with Alan Turing, René Descartes, Ludwig Wittgenstein, John Searle and Vernor Vinge, we show the possibility of an artificial consciousness and that the quest for intentionality, promoted by sophisticated algorithms of machine learning and discovery, is the key to reaching the Technological Singularity.
-
Recent studies have suggested that individuals are not able to develop a sense of joint agency during joint actions with automata. We sought to examine whether this lack of joint agency is linked to individuals’ inability to co-represent the automaton-generated actions. Fifteen participants observed or performed a Simon response time task either individually, or jointly with another human or a computer. Participants reported the time interval between their response (or the co-actor’s response) and a subsequent auditory stimulus, which served as an implicit measure of participants’ sense of agency. Participants’ reaction times showed a classical Simon effect when they were partnered with another human, but not when they collaborated with a computer. Furthermore, participants showed a vicarious sense of agency when co-acting with another human agent but not with a computer. This absence of vicarious sense of agency during human-computer interactions and its relation to action co-representation are discussed.
-
There is little information on how to design a social robot that effectively executes consciousness-emotion (C-E) interaction in a socially acceptable manner. In fact, the development of such socially sophisticated interactions depends on models of human high-level cognition implemented in the robot’s design. Therefore, a fundamental research problem of social robotics in terms of effective C-E interaction processing is to define a computational architecture of the robotic system in which cognitive-emotional integration occurs, and to determine the cognitive mechanisms underlying consciousness, along with its subjective aspect, in detecting emotions. Our conceptual framework rests upon the assumptions of a computational approach to consciousness, which holds that consciousness and its subjective aspect are specific functions of the human brain that can be implemented in an artificial social robot’s construction. Such a research framework for developing C-E interaction addresses the field of machine consciousness, which identifies important computational correlates of consciousness in such an artificial system and the possibility of objectively describing such mechanisms with quantitative parameters based on signal-detection and threshold theories.
-
The study outlines the existing and potential philosophical issues of the idea of conscious machines, originating from the development of artificial consciousness within the framework of contemporary research in artificial intelligence and cognitive robotics. The outline shows that the idea of conscious machines involves two big philosophical issues. The first is the definition of consciousness: selecting the set of objects that can have consciousness (human beings, living beings or machines), the typology of consciousness, clarifying the nature of consciousness’ carriers, the relationship between consciousness and its environment (including social and cultural), and the relationship between consciousness and language, in order to create an artificial consciousness within a machine, making that machine conscious. The second is a clarification of whether only artificially created machines can be conscious machines, or whether cyborgized (engineered) human beings can also be considered conscious machines. These philosophical issues show that there can be two ways to create conscious machines: 1) creating artificial consciousness within an artificially created machine; and 2) cyborgizing a human being, transforming it into an artificially created machine possessing natural consciousness (or even consciousness artificially transformed from natural into artificial).
-
The Hard Problem of consciousness has been dismissed as an illusion. By showing that computers are capable of experiencing, we show that they are at least rudimentarily conscious with potential to eventually reach superconsciousness. The main contribution of the paper is a test for confirming certain subjective experiences in a tested agent. We follow with analysis of benefits and problems with conscious machines and implications of such capability on future of computing, machine rights and artificial intelligence safety.
-
Human consciousness is the target of research in multiple fields of knowledge, as it represents an important capacity for handling complex and diverse situations. Artificial consciousness models have arisen, together with theories that try to model what we understand about consciousness, in a way that would allow an artificial conscious being to be implemented. The main motivations to study artificial consciousness are related to the creation of agents more similar to human beings, in order to build more efficient machines and, at the same time, make human-computer interaction more affective and non-intrusive. This paper presents an experiment using the Global Workspace Theory and the LIDA Model to build a conscious mobile robot in a virtual environment, using the LIDA framework as an implementation of the LIDA Model. The LIDA Model operates a consciousness system in an iterative cognitive cycle of perceiving the environment, shifting attention and focusing operation. The main objective is to evaluate whether it is possible to use consciousness and emotion, as implemented by the LIDA framework, to simplify the decision-making processes of a mobile robot subject to interaction with people, as part of the development of a cicerone robot.
-
The study analyzes the philosophy of artificial consciousness presented in the first season of the TV series ‘Westworld’ and, as a result of the analysis, shows the collision of two opposite philosophical views on consciousness and on the possibility of creating artificial consciousness, from the standpoint of two characters of the series: Arnold Weber and Robert Ford. Arnold Weber proceeds from two philosophical assumptions: that consciousness really exists (1) and that human consciousness can be a prototype for modeling consciousness in an artificial intelligence bearer (2). He has to choose either to pick one of the already existing conceptions of consciousness to implement the emergence of artificial consciousness within artificial intelligence or to invent his own; Arnold Weber chooses Julian Jaynes’s conception of consciousness as the basis for artificial consciousness, which means that artificial consciousness must have the following features: 1) artificial consciousness has to be the result of the breakdown of the bicameral mind (apparently, modeled within artificial intelligence), the state of mind in which cognitive functions are divided into two parts, a ‘speaking’ part and a ‘hearing’ (‘obeying’) part, until the breakdown makes the bicameral mind a unified mind; 2) artificial consciousness has to be a mind-space based on language and characterized by introspection, concentration, suppression, consilience and an analog ‘I’ narratizing in the mind-space. Robert Ford believes that consciousness does not exist at all and that there are only stories (narratives) which human beings, and artificial beings modeled in the image and likeness of human beings, tell each other, and that the basis of all those stories (narratives) is always suffering.
-
Artificial intelligence is a tool designed by people for the gratification of their own creative ego, so we must not confuse conscience with intelligence, nor intelligence in its human form with conscience. They are all different concepts with different uses. Philosophically, there are differences between autonomous people and automatic artificial intelligence. This is the difference between intelligence and artificial intelligence: autonomous versus automatic. But conscience stands above these differences, because it is neither conditioned by the self-preservation of autonomy (a conscience is something you use to help your neighbor) nor automatic (one’s conscience is tested by situations that are neither alike nor subject to routine). So artificial intelligence resembles an autonomous, conscience-endowed being only in science-fiction literature. In real life, religion, with its notions of redemption, sin, expiation, confession and communion, will have no meaning for a machine that cannot make a mistake on its own.
-
"The Creation of a Conscious Machine" surveys the millennial quest to create an intelligent artefact, concludes that consciousness is the key to achieving this goal, and proposes that we adopt an understanding of synthetic consciousness suitable for machine implementation. The text describes how achieving Artificial Intelligence will yield extraordinary intellectual benefits and deep insights into the human condition. It examines past attempts, from ancient times until today, to define intelligence and implement it, drawing useful lessons from each. In particular, the Turing Test, the most influential measure of artificial intelligence, is the subject of an in-depth analysis. Ultimately, the author rejects the Turing Test, and the concept of a test itself, as an unsuitable measure of machine intelligence. On the basis of this analysis, the author concludes that humans will only consider a machine to be truly intelligent if they also perceive it to be conscious. To realize the quest of Artificial Intelligence, it is necessary to implement consciousness. In turn, to achieve Synthetic Consciousness, we must discard the view that consciousness is a subjective experience in favour of a different understanding, deeply rooted in the Western tradition, that it is an externally observable capability. Using this "new" understanding, the author then proposes a definition of Synthetic Consciousness, expressed as specification objectives, that is suitable for software implementation. This makes it possible to build the first generation of synthetic conscious beings.
-
How do “minds” work? In Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena, Professor Jun Tani reviews key experiments within his own pioneering neurorobotics research project aimed at answering this fundamental and fascinating question. The book shows how symbols and concepts representing the world can emerge via “deep learning” within robots, using specially designed neural network architectures. Through iterative interactions between top-down, proactive “subjective” and “intentional” processes for plotting action, and bottom-up updates of the perceptual reality after action, the robot learns to isolate, identify, and even infer salient features of the operational environment, modifying its behavior based on anticipations of both objective and social cues. Through permutations of this experimental model, the book then argues that longstanding questions about the nature of “consciousness” and “free will” can be addressed through an understanding of the dynamic structures within which, in the course of normal operations and in a changing operational environment, the necessary top-down/bottom-up interactions arise. Written in clear and accessible language, this book opens a privileged window for a broad audience onto the science of artificial intelligence and the potential for artificial consciousness, threading cognitive neuroscience, dynamic systems theory, robotics, and phenomenology through an elegant series of deceptively simple experiments that build upon one another and ultimately outline the fundamental form of the working mind.
-
This paper presents the idea that using computers to simulate complex virtual environments can give rise to artificial consciousness within those environments. Currently, limitations in creating artificial consciousness may be imposed by material compounds that enable the transmission of signals through artificial systems such as robots. Virtual environments, on the other hand, provide the necessary tools for surpassing such limitations. I also argue that using virtual reality tools to enable complex interaction between humans and AI (artificial intelligence) within virtual environments is the most reasonable way to obtain artificial consciousness by appealing to the nature of human bias.
-
This paper critically assesses the anti-functionalist stance on consciousness adopted by certain advocates of integrated information theory (IIT), a corollary of which is that human-level artificial intelligence implemented on conventional computing hardware is necessarily not conscious. The critique draws on variations of a well-known gradual neuronal replacement thought experiment, as well as bringing out tensions in IIT's treatment of self-knowledge. The aim, though, is neither to reject IIT outright nor to champion functionalism in particular. Rather, it is suggested that both ideas have something to offer a scientific understanding of consciousness, as long as they are not dressed up as solutions to illusory metaphysical problems. As for human-level AI, we must await its development before we can decide whether or not to ascribe consciousness to it.