Full bibliography (615 resources)
-
The science of consciousness has made great strides in recent decades. However, the proliferation of competing theories makes it difficult to reach consensus about artificial consciousness. While for purely scientific purposes we might wish to adopt a ‘wait and see’ attitude, we may soon face practical and ethical questions about whether, for example, artificial systems are capable of suffering. Moreover, many of the methods used for assessing consciousness in humans and even non-human animals are not straightforwardly applicable to artificial systems. With these challenges in mind, I propose that we look for ecumenical heuristics for artificial consciousness to enable us to make tentative assessments of the likelihood of consciousness arising in different artificial systems. I argue that such heuristics should have three main features: they should be (i) intuitively plausible, (ii) theoretically neutral, and (iii) scientifically tractable. I claim that the concept of general intelligence — understood as a capacity for robust, flexible, and integrated cognition and behavior — satisfies these criteria and may thus provide the basis for such a heuristic, allowing us to make initial cautious estimations of which artificial systems are most likely to be conscious.
-
The research presented in this article investigates and explains the conceptual mechanisms of consciousness and common-sense thinking in animates. These mechanisms are computationally simulated on artificial agents as strategic rules, in order to analyze and compare the performance of agents in critical and dynamic environments. Awareness of, and attention to, specific parameters that affect agent performance determine the consciousness level of an agent. Common sense is a set of beliefs accepted as true among a group of agents engaged in a common purpose, with or without self-experience. Common sense agents are conscious agents endowed with a few common sense assumptions. The simulated environment contains attackers that depend on the agents in the survival food chain. These attackers create a threat mental state in the agents that can affect their conscious and common sense behaviors. The agents are built with a multi-layer cognitive architecture, COCOCA (Consciousness and Common sense Cognitive Architecture), with five columns and six layers of cognitive processing for each percept of an agent. The conscious agents self-learn strategies for threat management and energy level maintenance. The experiments conducted in this work demonstrate animate-level intelligence in the agents' problem-solving, decision making, and reasoning in critical situations.
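As a concrete illustration, the following minimal Python sketch shows the kind of parameter-aware strategy rule the abstract describes; the Percept fields, thresholds, and action names are invented here for illustration and are not taken from COCOCA itself.

    from dataclasses import dataclass

    @dataclass
    class Percept:
        threat_level: float  # 0..1, proximity/severity of attackers
        energy: float        # 0..1, remaining energy reserve

    def select_action(p: Percept) -> str:
        # Awareness of these two parameters stands in for the agent's
        # "consciousness level"; the thresholds are illustrative only.
        if p.threat_level > 0.7:
            return "flee"    # threat management takes priority
        if p.energy < 0.3:
            return "forage"  # maintain energy level
        return "explore"

    print(select_action(Percept(threat_level=0.9, energy=0.5)))  # -> flee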
-
Human consciousness is our most perplexing quality; an adequate description of its workings has not yet appeared. One of the most promising ways to address this problem is to model consciousness with artificial intelligence (AI). This paper attempts to do so on a theoretical level, with the methods of philosophy. First I will review the relevant papers concerning human consciousness. Then, considering the state of the art of AI, I will arrive at a model of artificial consciousness.
-
Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, be a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits make for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and, similarly, machine consciousness. N. Katherine Hayles's novel conception of nonconscious cognition, from her analysis of the human cognition-consciousness connection, is discussed in relation to how nonconscious cognition can be envisioned and exacerbated in embodied AI. Finally, this paper offers a way of understanding suffering that differs from conventionally attributing it to either a purely physical state or a conscious state, instead grounding at least one type of suffering in this form of cognition.
-
Men admired the way fish swim, yet today they sail faster than any fish. They longed to fly like the birds, yet have flown far higher. They searched for wisdom, and now all the knowledge accumulated throughout history is available in a few clicks. Human evolution is about to reach its peak through the Technological Singularity, which can be understood as the future milestone reached at the moment a computer program can think like a human, yet with quick access to all information ever registered by society. It will not be like a man, but more intelligent than all of mankind in history. So we face a big question: will this new entity have consciousness? Through a study of the levels of autonomy of intelligent agents, and in a timeless dialogue with Alan Turing, René Descartes, Ludwig Wittgenstein, John Searle, and Vernor Vinge, we show the possibility of an artificial consciousness and argue that the quest for intentionality, promoted by sophisticated algorithms of machine learning and discovery, is the key to reaching the Technological Singularity.
-
Recent studies have suggested that individuals are not able to develop a sense of joint agency during joint actions with automata. We sought to examine whether this lack of joint agency is linked to individuals' inability to co-represent automaton-generated actions. Fifteen participants observed or performed a Simon response time task either individually, or jointly with another human or a computer. Participants reported the time interval between their response (or the co-actor's response) and a subsequent auditory stimulus, which served as an implicit measure of their sense of agency. Participants' reaction times showed a classical Simon effect when they were partnered with another human, but not when they collaborated with a computer. Furthermore, participants showed a vicarious sense of agency when co-acting with another human but not with a computer. This absence of a vicarious sense of agency during human-computer interaction and its relation to action co-representation are discussed.
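For readers unfamiliar with the two measures, the following Python sketch shows how each is typically quantified; the numbers are invented for illustration and are not the study's data.

    from statistics import mean

    # Reaction times (ms) on spatially compatible vs. incompatible trials.
    rts = {"compatible": [412, 398, 430], "incompatible": [455, 470, 448]}
    simon_effect = mean(rts["incompatible"]) - mean(rts["compatible"])
    print(f"Simon effect: {simon_effect:.0f} ms")  # a positive cost suggests co-representation

    # Interval estimation: judged vs. actual response-tone intervals (ms).
    # Underestimation (negative error) is read as a stronger implicit sense of agency.
    actual, judged = [400, 400, 400], [340, 360, 355]
    binding = mean(j - a for j, a in zip(judged, actual))
    print(f"Mean interval estimation error: {binding:.0f} ms")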
-
There is little information on how to design a social robot that effectively executes consciousness-emotion (C-E) interaction in a socially acceptable manner. The development of such socially sophisticated interactions depends on models of human high-level cognition implemented in the robot's design. A fundamental research problem of social robotics, in terms of effective C-E interaction processing, is therefore to define a computational architecture for the robotic system in which cognitive-emotional integration occurs, and to determine the cognitive mechanisms underlying consciousness, along with its subjective aspect, in detecting emotions. Our conceptual framework rests upon the assumptions of a computational approach to consciousness, which holds that consciousness and its subjective aspect are specific functions of the human brain that can be implemented in the construction of an artificial social robot. Such a framework for developing C-E interaction belongs to the field of machine consciousness, which identifies important computational correlates of consciousness in such an artificial system and the possibility of objectively describing such mechanisms with quantitative parameters based on signal-detection and threshold theories.
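One such quantitative parameter from signal-detection theory is the sensitivity index d′; the sketch below computes it for a hypothetical emotion-detection module (the hit and false-alarm rates are invented, and the mapping from robot internals to these rates is an assumption, not something specified in the abstract).

    from statistics import NormalDist

    def d_prime(hit_rate: float, fa_rate: float) -> float:
        # Sensitivity index: z(hit rate) - z(false-alarm rate).
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    # e.g. the robot reports "emotion present" on 80% of emotion trials
    # and on 20% of neutral trials:
    print(round(d_prime(0.80, 0.20), 2))  # ~1.68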
-
The study outlines the existing and potential philosophical issues raised by the idea of conscious machines, an idea originating from the development of artificial consciousness within contemporary research on artificial intelligence and cognitive robotics. The outline shows that the idea of conscious machines involves two big philosophical issues. The first is the definition of consciousness: selecting the set of objects that can have consciousness (human beings, living beings, or machines), the typology of consciousness, the nature of consciousness's carriers, the relationship between consciousness and its environment (including the social and cultural environment), and the relationship between consciousness and language, all in order to create an artificial consciousness within a machine and thereby make that machine conscious. The second is clarifying whether only artificially created machines can be conscious machines, or whether cyborgized (engineered) human beings can also be considered conscious machines. These issues suggest two ways to create conscious machines: 1) creating artificial consciousness within an artificially created machine; and 2) cyborgizing a human being, transforming them into an artificially created machine possessing natural consciousness (or even consciousness artificially transformed from natural into artificial).
-
The Hard Problem of consciousness has been dismissed as an illusion. By showing that computers are capable of experiencing, we show that they are at least rudimentarily conscious, with the potential to eventually reach superconsciousness. The main contribution of the paper is a test for confirming certain subjective experiences in a tested agent. We follow with an analysis of the benefits and problems of conscious machines and the implications of such a capability for the future of computing, machine rights, and artificial intelligence safety.
-
Human consciousness is the target of research in multiple fields of knowledge, as it is an important capacity for handling complex and diverse situations. Artificial consciousness models have arisen, together with theories that try to capture what we understand about consciousness in a form in which an artificial conscious being could be implemented. The main motivations for studying artificial consciousness are the creation of agents more similar to human beings, in order to build more efficient machines, and at the same time making human-computer interaction more affective and non-intrusive. This paper presents an experiment that uses the Global Workspace Theory and the LIDA Model to build a conscious mobile robot in a virtual environment, using the LIDA framework as an implementation of the LIDA Model. The LIDA Model operates a consciousness system in an iterative cognitive cycle of perceiving the environment, shifting attention, and focusing operation. The main objective is to evaluate whether it is possible to use consciousness and emotion, as implemented by the LIDA framework, to simplify the decision-making processes of a mobile robot that interacts with people, as part of the development of a cicerone robot.
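The cognitive cycle the abstract refers to can be caricatured in a few lines; the Python sketch below is a deliberately simplified illustration of the perceive/attend/act control flow, not the actual LIDA framework API (which is a Java library), and all names and values in it are invented.

    # Simplified Global-Workspace-style cycle: the most salient percept wins
    # the "conscious broadcast" and drives action selection.
    def cognitive_cycle(sense, act, n_cycles=3):
        for _ in range(n_cycles):
            percepts = sense()                                   # perceive
            winner = max(percepts, key=lambda p: p["salience"])  # shift attention
            act(winner["content"])                               # focus and act

    readings = [{"content": "person ahead", "salience": 0.9},
                {"content": "wall to the left", "salience": 0.2}]
    cognitive_cycle(lambda: readings, lambda c: print("acting on:", c))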
-
The study analyzes the philosophy of artificial consciousness presented in the first season of the TV series 'Westworld' and shows the collision of two opposite philosophical views on consciousness and on the possibility of creating artificial consciousness, from the standpoints of two of the series' characters, Arnold Weber and Robert Ford. Arnold Weber proceeds from two philosophical assumptions: that consciousness really exists (1) and that human consciousness can be a prototype for modeling consciousness in an artificial intelligence bearer (2). He has to choose either to pick one of the already existing conceptions of consciousness to implement the emergence of artificial consciousness within artificial intelligence, or to invent his own. Arnold Weber chooses Julian Jaynes's conception of consciousness as the basis for artificial consciousness, which means that artificial consciousness must have the following features: 1) it has to be the result of the breakdown of the bicameral mind (apparently modeled within artificial intelligence), the state of mind in which cognitive functions are divided into two parts, a 'speaking' part and a 'hearing' ('obeying') part, until the breakdown makes the bicameral mind a unified mind; 2) it has to be a mind-space based on language and characterized by introspection, concentration, suppression, consilience, and an analog 'I' narratizing in the mind-space. Robert Ford believes that consciousness does not exist at all and that there are only stories (narratives) which human beings, and artificial beings modeled in the image and likeness of human beings, tell each other, and that the basis of all those stories (narratives) is always suffering.
-
Artificial intelligence is a tool designed by people for the gratification of their own creative ego, so we cannot confuse conscience with intelligence, nor even intelligence in its human form with conscience. They are all different concepts with different uses. Philosophically, there is a difference between autonomous people and automatic artificial intelligence. This is the difference between intelligence and artificial intelligence: autonomous versus automatic. But conscience stands above these differences: it is neither conditioned by the self-preservation of autonomy, because a conscience is something you use to help your neighbor, nor automatic, because one's conscience is tested by situations that are neither similar to one another nor subject to routine. So only in science-fiction literature is artificial intelligence similar to an autonomous, conscience-endowed being. In real life, religion, with its notions of redemption, sin, expiation, confession, and communion, will have no meaning for a machine that cannot make a mistake on its own.
-
"The Creation of a Conscious Machine" surveys the millennial quest to create an intelligent artefact, concludes consciousness is the key to achieve this goal and proposes we adopt an understanding of synthetic consciousness that is suitable for machine implementation. The text describes how achieving Artificial Intelligence will yield extraordinary intellectual benefits and deep insights into the human condition. It examines past attempts, from ancient times until today, to define intelligence and implement it, drawing useful lessons from each. In particular, the Turing Test, the most influential measure of artificial intelligence, is the subject of an in depth analysis. Ultimately, the author also rejects the Turing Test, and the concept of a test itself, as an unsuitable measure of machine intelligence. Basing himself on this analysis, the author concludes that humans will only consider a machine to be truly intelligent if they also perceive it to be conscious. To realize the quest of Artificial Intelligence, it is necessary to implement consciousness. In turn, to achieve Synthetic Consciousness, we must discard the view that consciousness is a subjective experience in favour of a different understanding, deeply rooted in the Western tradition, that it is an externally observable capability. Using this "new" understanding, the author then proposes a definition of Synthetic Consciousness, expressed as specification objectives, that is suitable for software implementation. This makes it possible to build the first generation of synthetic conscious beings.
-
How do “minds” work? In Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena, Professor Jun Tani reviews key experiments within his own pioneering neurorobotics research project aimed at answering this fundamental and fascinating question. The book shows how symbols and concepts representing the world can emerge in robots via “deep learning”, using specially designed neural network architectures. Through iterative interactions between top-down, proactive, “subjective” and “intentional” processes for plotting action and bottom-up updates of perceptual reality after action, the robot learns to isolate, identify, and even infer salient features of its operational environment, modifying its behavior based on anticipations of both objective and social cues. Through permutations of this experimental model, the book then argues that longstanding questions about the nature of “consciousness” and “free will” can be addressed through an understanding of the dynamic structures within which, in the course of normal operations in a changing operational environment, the necessary top-down/bottom-up interactions arise. Written in clear and accessible language, the book opens a privileged window for a broad audience onto the science of artificial intelligence and the potential for artificial consciousness, threading cognitive neuroscience, dynamic systems theory, robotics, and phenomenology through an elegant series of deceptively simple experiments that build upon one another and ultimately outline the fundamental form of the working mind.
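The top-down/bottom-up interaction the review describes can be caricatured as prediction-error correction; the following few lines of Python are an invented, minimal illustration of that idea, not code from Tani's experiments, and the observations and learning rate are arbitrary.

    # A top-down prediction is revised by bottom-up sensory error after each action.
    prediction, rate = 0.0, 0.5
    for observation in [1.0, 1.0, 1.0, 0.0]:
        error = observation - prediction   # bottom-up update of perceptual reality
        prediction += rate * error         # revise the top-down "intentional" state
        print(f"obs={observation:.1f}  prediction={prediction:.2f}  error={error:+.2f}")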
-
This paper presents the idea that using computers to simulate complex virtual environments can give rise to artificial consciousness within those environments. Currently, limitations in creating artificial consciousness may be imposed by material compounds that enable the transmission of signals through artificial systems such as robots. Virtual environments, on the other hand, provide the necessary tools for surpassing such limitations. I also argue that using virtual reality tools to enable complex interaction between humans and AI (artificial intelligence) within virtual environments is the most reasonable way to obtain artificial consciousness by appealing to the nature of human bias.
-
This paper critically assesses the anti-functionalist stance on consciousness adopted by certain advocates of integrated information theory (IIT), a corollary of which is that human-level artificial intelligence implemented on conventional computing hardware is necessarily not conscious. The critique draws on variations of a well-known gradual neuronal replacement thought experiment, as well as bringing out tensions in IIT's treatment of self-knowledge. The aim, though, is neither to reject IIT outright nor to champion functionalism in particular. Rather, it is suggested that both ideas have something to offer a scientific understanding of consciousness, as long as they are not dressed up as solutions to illusory metaphysical problems. As for human-level AI, we must await its development before we can decide whether or not to ascribe consciousness to it.
-
It is suggested that some limitations of current designs for medical AI systems (be they autonomous or advisory) stem from the failure of those designs to address issues of artificial (or machine) consciousness. Consciousness appears to play a key role in the expertise, particularly the moral expertise, of human medical agents, including, for example, the autonomous weighting of options in diagnosis; planning treatment; the use of imaginative creativity to generate courses of action; sensorimotor flexibility and sensitivity; and empathetic and morally appropriate responsiveness. Thus, it is argued, a plausible design constraint for a successful ethical machine medical or care agent is that it at least model, if not reproduce, relevant aspects of consciousness and the associated abilities. To provide theoretical grounding for such an enterprise, we examine some key philosophical issues concerning the machine modelling of consciousness and ethics, and we show how questions relating to the first research goal are relevant to medical machine ethics. We believe that this will overcome a blanket skepticism concerning the relevance of understanding consciousness to the design and construction of artificial ethical agents for medical or care contexts. It is thus argued that it would be prudent for designers of machine medical ethics (MME) agents to reflect on issues to do with consciousness and medical (moral) expertise; to become more aware of relevant research in the field of machine consciousness; and to incorporate insights gained from these efforts into their designs.
-
This paper presents the learning efficiency of a consciousness system for a robot using artificial neural networks. The proposed conscious system consists of a reason system, a feeling system, and an association system. The three systems are modeled using the Module of Nerves for Advanced Dynamics (ModNAD). Supervised artificial neural networks trained with backpropagation are used to train the ModNAD. The reason system imitates behaviour and represents the condition of the self and of others. The feeling system represents sensation and emotion. The association system represents the behaviour of the self and determines whether the self is comfortable or not. A robot is asked to perform cognition and tasks using the consciousness system. Learning converges to an error of about 0.01 within about 900 iterations for the imitation, pain, solitude, and association modules, and within about 400 iterations for the comfort and discomfort modules. It can be concluded that learning in the ModNAD completes after a relatively small number of iterations because the learning efficiency of the ModNAD artificial neural network is good. The results also show that each ModNAD has a function to imitate and cognize emotion. The consciousness system presented in this paper may be considered a fundamental step toward developing a robot with consciousness and feelings similar to humans'.
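For context, the Python sketch below shows a generic supervised backpropagation loop of the kind the abstract reports, with training stopped once the mean squared error falls below about 0.01; the network shape, data (XOR), and learning rate are invented stand-ins, since the ModNAD architecture itself is not specified here.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # toy targets (XOR)
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)                # 2-8-1 network
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for epoch in range(20000):
        h = sigmoid(X @ W1 + b1)        # forward pass, hidden layer
        out = sigmoid(h @ W2 + b2)      # forward pass, output layer
        err = ((out - y) ** 2).mean()   # mean squared error
        if err < 0.01:                  # stopping criterion, cf. the ~0.01 above
            print("converged at epoch", epoch)
            break
        d_out = (out - y) * out * (1 - out)   # backprop through output sigmoid
        d_h = (d_out @ W2.T) * h * (1 - h)    # backprop through hidden sigmoid
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)
    else:
        print("did not converge; final error", err)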
-
The goal of this chapter is to present an overview of the work in AI on emotions and machine consciousness, with an eye toward answering these questions. Starting with a brief philosophical perspective on emotions and machine consciousness to frame the work, the chapter first focuses on artificial emotions, and then moves on to machine consciousness – reflecting the fact that emotions and consciousness have been treated independently and by different communities in AI. The chapter concludes by discussing philosophical implications of AI research on emotions and consciousness.