
Full bibliography (716 resources)

  • In developing a humanoid robot, there are two major objectives. One is to develop a physical robot with a body, hands, and feet resembling those of human beings, able to control them in a similarly human way. The other is to develop a control system that works like our brain: one that feels, thinks, acts, and learns as we do. In this article, an architecture for a control system with a brain-oriented logical structure, aimed at the second objective, is proposed. The proposed system autonomously adapts to the environment and implements a clearly defined “consciousness” function, through which both habitual behavior and goal-directed behavior are realized. Consciousness is regarded as a function for effective adaptation at the system level, based on matching and organizing the individual results of the underlying parallel-processing units. This consciousness is assumed to correspond to how our mind is “aware” when making moment-to-moment decisions in daily life. The binding problem and the basic causes of the delay in Libet’s experiment are also explained by capturing awareness in this manner. The goal is set as an image in the system, and efficient actions toward achieving this goal are selected in the goal-directed behavior process. The system is designed as an artificial neural network and aims at consistent and efficient system behavior through the interaction of highly independent neural nodes. The proposed architecture is based on a two-level design. The first level, which we call the “basic-system,” is an artificial neural network that realizes consciousness and habitual behavior and accounts for the binding problem. The second level, which we call the “extended-system,” is an artificial neural network that realizes goal-directed behavior.
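
    A minimal sketch of the two-level split this entry describes, assuming a toy grid world: a "basic system" supplies habitual (reactive) actions, and an "extended system" overrides them with goal-directed choices whenever a goal image is set. The grid-world framing and all names are our own illustration, not the authors' neural-network implementation.

      GOAL = (3, 3)  # the "goal image": a target state held inside the system

      def basic_system(position, habits):
          """Habitual behavior: look up a default action for the current percept."""
          return habits.get(position, "stay")

      def extended_system(position, goal):
          """Goal-directed behavior: pick the move that reduces distance to the goal."""
          moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
          def distance_after(move):
              dx, dy = moves[move]
              return abs(position[0] + dx - goal[0]) + abs(position[1] + dy - goal[1])
          return min(moves, key=distance_after)

      def step(position, habits, goal=None):
          """The goal-directed level takes precedence; otherwise fall back to habit."""
          if goal is not None and position != goal:
              return extended_system(position, goal)
          return basic_system(position, habits)

      print(step((0, 0), {(0, 0): "right"}))        # habitual: 'right'
      print(step((0, 0), {(0, 0): "right"}, GOAL))  # goal-directed: 'up' (ties broken by order)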

  • To keep robots from doing us harm, they may have to be bound by internal safeguards akin to Asimov's inviolable first law of robotics (do no harm to humans). The discussion of robot consciousness became genuinely pressing with the advent of so-called artificial brains (electronic computers) midway through the 20th century. Minds might be a kind of halo around certain machines, in the way dark matter is an invisible halo around visible galaxies. Where extreme ignorance persists, anything seems possible. But even in this case the robotic successors will have a real shot at understanding and confirming the truth. For though the authors’ own biologically evolved mental models have been shackled by inheritance and by learned interactions with organism-level reality, the mathematical constructs the authors have devised, and which figure into the implementation of computer models, are significantly freer.

  • Human and Machine Consciousness presents a new foundation for the scientific study of consciousness. It sets out a bold interpretation of consciousness that neutralizes the philosophical problems and explains how we can make scientific predictions about the consciousness of animals, brain-damaged patients and machines.

  • A socially intelligent robot must be capable of extracting meaningful information in real time from the social environment and reacting accordingly with coherent, human-like behavior. Moreover, it should be able to internalize this information, reason on it at a higher level, build its own opinions independently, and then automatically bias its decision-making according to its unique experience. In recent decades, neuroscience research has highlighted the link between the evolution of such complex behavior and the evolution of a certain level of consciousness, which cannot be separated from a body that feels emotions as discriminants and prompters. In order to develop cognitive systems for social robotics with greater human-likeness, we used an “understanding by building” approach to model and implement a well-known theory of mind in the form of an artificial intelligence, and we tested it on a sophisticated robotic platform. The presented system is named SEAI (Social Emotional Artificial Intelligence), a cognitive system specifically conceived for social and emotional robots. It is designed as a bio-inspired, highly modular, hybrid system with emotion modeling and high-level reasoning capabilities. It follows the deliberative/reactive paradigm, in which a knowledge-based expert system handles the high-level symbolic reasoning while a more conventional reactive paradigm is delegated the low-level processing and control. The SEAI system is also enriched by a model that simulates Damasio’s theory of consciousness and the theory of Somatic Markers. After a review of similar bio-inspired cognitive systems, we present the scientific foundations of the SEAI framework and their computational formalization. Then a deeper technical description of the architecture is given, underlining the numerous parallelisms with the human cognitive system. Finally, the influence of artificial emotions and feelings, and their link with the robot’s beliefs and decisions, is tested in a physical humanoid involved in Human–Robot Interaction (HRI).
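
    A rough sketch of how somatic markers can bias decision-making, in the spirit of the Damasio-inspired part of this entry: each option accumulates an emotional valence from past outcomes, and the deliberative layer adds that valence to its rational score before choosing. This is our own illustration, not the SEAI system's code; all names are hypothetical.

      somatic_markers = {}  # option -> running emotional valence in [-1, 1]

      def record_outcome(option, valence, rate=0.3):
          """Reactive layer: fold the felt outcome into the option's marker."""
          old = somatic_markers.get(option, 0.0)
          somatic_markers[option] = old + rate * (valence - old)

      def decide(scores):
          """Deliberative layer: rational score plus somatic bias."""
          return max(scores, key=lambda o: scores[o] + somatic_markers.get(o, 0.0))

      record_outcome("greet_user", +0.8)   # a past greeting went well
      record_outcome("ignore_user", -0.6)  # ignoring someone upset them
      print(decide({"greet_user": 0.5, "ignore_user": 0.6}))  # -> greet_user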

  • The study outlines the existing and potential philosophical issues raised by the idea of conscious machines, which originated in the development of artificial consciousness within contemporary research on artificial intelligence and cognitive robotics. The outline shows that the idea of conscious machines involves two big philosophical issues. The first is the definition of consciousness, taking into account the selection of the set of objects that can have consciousness (human beings, living beings, or machines), the typology of consciousness, the clarification of the nature of consciousness's carriers, the relationship between consciousness and its environment (including the social and cultural environment), and the relationship between consciousness and language, all in order to create an artificial consciousness within a machine and thereby make that machine conscious. The second is the clarification of whether only artificially created machines can be conscious machines, or whether cyborgized (engineered) human beings can also be considered conscious machines. These philosophical issues show that there can be two ways to create conscious machines: 1) creating artificial consciousness within an artificially created machine; and 2) cyborgizing a human being, transforming it into an artificially created machine that possesses natural consciousness (or even consciousness artificially transformed from natural into artificial).

  • The main problem in robotics is strengthening a robot's artificial intelligence (IA) system; solving it will facilitate the cooperation of humans with robots. The authors suggest an advanced technology for IA development. It borrows a method of universal (deep) tutoring (TU) relying on a semantic axiomatic method (AM). In the TU method, understanding of knowledge is achieved through the formation of rational consciousness, which uses the utmost mathematical abstractions expressed in the language of categories (LC). Being a functional language, the LC is suited to describing intellectual processes (PIR) thanks to its universal constructions. Following TU, the robot's educational space (SER) is a class of categories. The IA is made more sophisticated by including new categories, as required, in the robot IA's multilevel hierarchical oriented network of concepts (NC). Universal laws of robot functioning are embodied as operations of algebraic structures that are objects of the NC; this creates an integrated environment of applications (IEA). The robot's intercourse with humans and its interaction with the working space (SWR) activate the PIR occurring in the NC. Processes of assignment execution (PER) begin only when a set of relations in the SWR and in the robot's space of notions is satisfied. The ability of the PIR to climb to the highest levels of the NC and descend to the lowest ones endows the robot with the capability to generate PER, making decisions in an unfamiliar SWR.

  • What is “self-awareness”? How can explicit consciousness and sub-consciousness be mapped in relation to each other? How are they related to the self? How can these entities be represented in an artificial conscious system? These questions are the focus of this article. People are aware only of the behavior they are focusing on; they cannot be directly aware of routine behavior such as walking and breathing. The latter is generally called unconscious behavior, and here we call it sub-conscious behavior. To understand self-awareness, it is therefore important first to map explicit consciousness and sub-consciousness, which is where the self is deeply involved. We consider that if there is no self that refers to itself, no one can be aware of what they themselves are doing. In this study we map explicit consciousness and sub-consciousness using an artificial conscious system, and then make a new proposal about the relationship between self-awareness and the self.

  • This chapter describes the computer modeling of a psychic system that generates representations for a system with an artificial corporeality. The model defines how the sensation of thinking is formed in an artificial system and how such a system can experience its own idea generation. Next, the chapter discusses a multiagent approach to designing the artificial psychic system. Further, it describes the organizational memory of the system, which is organized into networks of memory agents related through concepts of semantic proximity or semantic generalization. The concepts of proximity, specialization, and generalization can be precisely defined using qualifications related to the acquaintances of the agents. The chapter finally shows that the general psyche of an artificial system, when distributed over multiple corporeal systems with local artificial consciousnesses, can be unified.
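
    A small sketch of the organizational memory this entry describes, under our own naming (not the chapter's code): memory agents are linked by "proximity" relations between semantically close concepts and "generalization" relations from a concept to a broader one.

      class MemoryAgent:
          """One memory unit holding a concept and links to other agents."""
          def __init__(self, concept):
              self.concept = concept
              self.proximate = set()        # semantically close agents
              self.generalizes_to = set()   # agents for broader concepts

      def link_proximity(a, b):
          a.proximate.add(b)
          b.proximate.add(a)

      def link_generalization(specific, general):
          specific.generalizes_to.add(general)

      cat, dog, animal = MemoryAgent("cat"), MemoryAgent("dog"), MemoryAgent("animal")
      link_proximity(cat, dog)          # cat and dog are close concepts
      link_generalization(cat, animal)  # animal generalizes cat
      link_generalization(dog, animal)
      print([m.concept for m in cat.proximate])       # ['dog']
      print([m.concept for m in cat.generalizes_to])  # ['animal']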

  • How has multiplicity superseded impossibility in philosophical challenges to artificial consciousness? I assess a trajectory in recent debates on artificial consciousness, in which metaphysical and explanatory challenges to the possibility of building conscious machines lead to epistemological concerns about the multiplicity underlying ‘what it is like’ to be a conscious creature or be in a conscious state. First, I analyse earlier challenges which claim that phenomenal consciousness cannot arise, or cannot be built, in machines. These are based on Block’s Chinese Nation and Chalmers’ Hard Problem. To defuse such challenges, theorists of artificial consciousness can appeal to empirical methods and models of explanation. Second, I explain why this naturalistic approach produces an epistemological puzzle on the role of biological properties in phenomenal consciousness. Neither behavioural tests nor theoretical inferences seem to settle whether our machines are conscious. Third, I evaluate whether the new challenge can be managed through a more fine-grained taxonomy of conscious states. This strategy is supported by the development of similar taxonomies for biological species and animal consciousness. Although it makes sense of some current models of artificial consciousness, it raises questions about their subjective and moral significance.

  • This paper looks at recent debates in the enactivist literature on computation and consciousness in order to assess major obstacles to building artificial conscious agents. We consider a proposal from Villalobos and Dewhurst (2018) for enactive computation on the basis of organizational closure. We attempt to improve the argument by reflecting on the closed paths through state space taken by finite state automata. This motivates a defense against Clark's recent criticisms of "extended consciousness", and perhaps a new perspective on living with machines.

  • This paper describes how the science fiction television series Westworld (HBO, 2016-present) questions the very nature of consciousness through a reflexive narrative blending matters of free will and the sense of self with clues on how to write a serial character, thus drawing on the rich heritage of the narratively complex science fiction television series that have addressed this theme over the last decades. After detailing the “hard problem” posed by any definition of consciousness, the paper follows the series’ logic, successively questioning memory, improvisation, and self-interest, while drawing on a narratological analysis grounded in particular in possible-worlds theory.

  • From the perspective of virtue ethics, this paper points out that as its autonomy and sensitivity improve, Artificial Intelligence becomes more and more like an ethical subject that can take responsibility. The paper argues that tackling the ethics of Artificial Intelligence by programming abstract moral principles into code will produce many problems. The question of AI ethics is first of all a question of social integration rather than a technical one. Viewed from the historical and social premises of ethics, the degree to which Artificial Intelligence can share the same ethical system as humans equals the degree of its integration into the narrative of human society. This is also a process of establishing a common system of social cooperation between humans and Artificial Intelligence. Furthermore, self-consciousness and responsibility are themselves social conceptions established by recognition, and an Artificial Intelligence's identification with its individual social role is likewise established in the process of integration.

  • A model of an intentional self-observing system is proposed based on the structure and functions of astrocyte-synapse interactions in tripartite synapses. Astrocyte-synapse interactions are cyclically organized and operate via feedforward and feedback mechanisms, formally described by proemial counting. Synaptic, extrasynaptic, and astrocyte receptors are interpreted as places with the same or different quality of information processing, described by the combinatorics of tritograms. It is hypothesized that receptors on the astrocytic membrane may embody intentional programs that select corresponding synaptic and extrasynaptic receptors for the formation of receptor-receptor complexes. Basically, the act of self-observation is generated if the actual environmental information is appropriate to the intended observation processed by receptor-receptor complexes. This mechanism is implemented in a robot brain, enabling the robot to experience environmental information as “its own”. It is suggested that this mechanism enables the robot to generate matches and mismatches between intended observations and the observations in the environment, based on the cyclic organization of the mechanism. In exploring an unknown environment, the robot may stepwise construct an observation space, stored in memory, commanded and controlled by the intentional self-observing system. Finally, the role of self-observation in machine consciousness is briefly discussed.
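
    A toy sketch of the match/mismatch cycle this entry describes, in our own framing (the paper models it on astrocyte-synapse receptor complexes): an intentional program predicts what should be observed, the system flags a match or mismatch against the actual input, and the result is stored in a growing observation space.

      observation_space = {}  # location -> last confirmed observation

      def self_observe(location, intended, actual):
          """Compare the intended observation with what the environment returns."""
          observation_space[location] = actual   # memorize what was actually seen
          return "match" if intended == actual else "mismatch"

      print(self_observe("door", intended="open", actual="open"))   # match
      print(self_observe("table", intended="empty", actual="cup"))  # mismatch
      print(observation_space)  # {'door': 'open', 'table': 'cup'}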

  • The major aim of artificial general intelligence (AGI) is to allow a machine to perform general intelligence tasks in the way its human counterparts do. Hypothetically, such general intelligence in a machine can be achieved by establishing cross-domain optimization and learning-machine approaches. However, contemporary artificial intelligence (AI) capabilities are limited to narrow, specific domains using machine learning. The concept of consciousness is particularly interesting for these approaches because consciousness simultaneously encodes and processes all types of information and seamlessly integrates them. Over the last several years, there has been a resurgence of interest in testing theories of consciousness using computer models. Studies of these models fall into four categories: external behavior associated with consciousness, cognitive characteristics associated with consciousness, a computational architecture that correlates with human consciousness, and the phenomenology of a conscious machine. The critical challenge is to determine whether these artificial systems are capable of conscious states, by providing a measurement of the extent to which a system succeeds in realizing consciousness in a machine. Several tests for machine consciousness have been proposed, but their formulation is based on extrinsic measurement of consciousness, which is not inclusive because many conscious artificial systems behave implicitly. This research proposes a new framework to test machine consciousness based on intrinsic measurement, the so-called Pak Pandir test. The framework leverages three quantum double-slit settings and adopts information integration theory as its definition of consciousness.

  • Mind and intelligence are closely related to consciousness. Indeed, artificial intelligence (AI) is the most promising avenue towards artificial consciousness (AC). In the literature, however, consciousness has been considered the phenomenon least amenable to being understood or replicated by AI. Computational theories of mind (CTMs) render the mind as a computational system, and this is treated as a substantial hypothesis within the purview of AI. However, consciousness, which is a phenomenon of mind, is only partially tackled by this theory, and it seems that the CTM has not been considerably corroborated in this pursuit. Many valuable contributions have been made by researchers working strenuously in this domain, yet there is still a scarcity of globally accepted computational models of consciousness that can be used to design conscious intelligent machines, and existing contributions treat consciousness as a vague, incomplete, and human-centred entity. In this paper, an attempt is made to analyse different theoretical and intricate issues pertaining to mind, intelligence, and AC. Moreover, the paper discusses different computational models of consciousness, critically analyses the possibility of generating machine consciousness, and identifies the characteristics of a conscious machine. Further, several inquisitive questions are analysed, e.g., “Is it possible to devise, project and build a conscious machine?”, “Will artificially conscious machines be able to surpass the functioning of artificially intelligent machines?” and “Does consciousness reflect a peculiar way of information processing?”

  • The Hard Problem of consciousness has been dismissed as an illusion. By showing that computers are capable of experiencing, we show that they are at least rudimentarily conscious, with the potential eventually to reach superconsciousness. The main contribution of the paper is a test for confirming certain subjective experiences in a tested agent. We follow with an analysis of the benefits of and problems with conscious machines, and of the implications of such a capability for the future of computing, machine rights, and artificial intelligence safety.

  • Human consciousness is the target of research in multiple fields of knowledge, and it presents an important capability for handling complex and diverse situations. Models of artificial consciousness have arisen, together with theories that try to capture what we understand about consciousness, in a way that would let an artificial conscious being be implemented. The main motivations for studying artificial consciousness are the creation of agents more similar to human beings, in order to build more efficient machines and, at the same time, make human-computer interaction more affective and less intrusive. This paper presents an experiment that uses the Global Workspace Theory and the LIDA Model to build a conscious mobile robot in a virtual environment, using the LIDA framework as an implementation of the LIDA Model. The LIDA Model operates a consciousness system in an iterative cognitive cycle of perceiving the environment, shifting attention, and focusing its operation. The main objective is to evaluate whether consciousness and emotion, as implemented by the LIDA framework, can simplify the decision-making processes of a mobile robot that interacts with people, as part of the development of a cicerone robot.
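
    A highly simplified sketch of the perceive/attend/broadcast cycle this entry builds on. It is our illustration of the Global Workspace idea (the most salient content wins the attention competition and is broadcast to all behavior codelets), not the LIDA framework's actual API; all names are hypothetical.

      def cognitive_cycle(percepts, codelets):
          """One cycle: perceive, let the most salient content win, broadcast."""
          winner = max(percepts, key=percepts.get)             # attention competition
          actions = [codelet(winner) for codelet in codelets]  # global broadcast
          return [a for a in actions if a is not None]

      def approach_codelet(content):
          return "approach" if content == "person_waving" else None

      def avoid_codelet(content):
          return "stop" if content == "obstacle" else None

      # the waving person is more salient than the distant obstacle this cycle
      print(cognitive_cycle({"person_waving": 0.9, "obstacle": 0.4},
                            [approach_codelet, avoid_codelet]))  # ['approach']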

  • Given that consciousness is an essential ingredient for achieving the Singularity (the notion that an Artificial General Intelligence device can exceed the intelligence of a human), the question of whether a computer can achieve consciousness is explored. Given that consciousness is being aware of one’s perceptions and/or of one’s thoughts, it is claimed that computers cannot experience consciousness. Since a computer has no sensorium, it cannot have perceptions. As for being aware of its thoughts, it is argued that being aware of one’s thoughts is basically listening to one’s own internal speech. A computer has no emotions, and hence no desire to communicate; without the ability and/or desire to communicate, it has no internal voice to listen to and hence cannot be aware of its thoughts. In fact, it has no thoughts, because it has no sense of self, and thinking is about preserving one’s self. Emotions have a positive effect on the reasoning powers of humans; the computer’s lack of emotions is therefore another reason why computers could never achieve the level of intelligence a human can, at least at the current level of development of computer technology.

  • The study analyzes the philosophy of artificial consciousness presented in the first season of the TV series 'Westworld' and, as a result of the analysis, shows the collision of two opposite philosophical views on consciousness and on the possibility of creating artificial consciousness, from the standpoints of two characters in the series: Arnold Weber and Robert Ford. Arnold Weber proceeds from two philosophical assumptions: (1) consciousness really exists, and (2) human consciousness can be a prototype for modeling consciousness in a bearer of artificial intelligence. He has to choose: either pick one of the already existing conceptions of consciousness to realize the emergence of artificial consciousness within artificial intelligence, or invent his own. Arnold Weber chooses Julian Jaynes's conception of consciousness as the basis for artificial consciousness, which means that artificial consciousness must have the following features: 1) it has to be the result of the breakdown of the bicameral mind (apparently modeled within artificial intelligence), the state of mind in which cognitive functions are divided into two parts, a 'speaking' part and a 'hearing' ('obeying') part, until the breakdown that makes the bicameral mind a unified mind; 2) it has to be a mind-space based on language and characterized by introspection, concentration, suppression, consilience, and an analog 'I' narratizing in the mind-space. Robert Ford believes that consciousness does not exist at all and that there are only stories (narratives) which human beings, and artificial beings modeled in the image and likeness of human beings, tell each other, and that the basis of all those stories (narratives) is always suffering.

  • The purpose of the attention schema theory is to explain how an information-processing device, the brain, arrives at the claim that it possesses a non-physical, subjective awareness and assigns a high degree of certainty to that extraordinary claim. The theory does not address how the brain might actually possess a non-physical essence. It is not a theory that deals in the non-physical. It is about the computations that cause a machine to make a claim and to assign a high degree of certainty to the claim. The theory is offered as a possible starting point for building artificial consciousness. Given current technology, it should be possible to build a machine that contains a rich internal model of what consciousness is, attributes that property of consciousness to itself and to the people it interacts with, and uses that attribution to make predictions about human behavior. Such a machine would “believe” it is conscious and act like it is conscious, in the same sense that the human machine believes and acts.
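
    A minimal sketch, under our own assumptions, of the attention-schema idea in this entry: the machine keeps a simplified internal model of attention, attributes it to itself and to the people it interacts with, and uses the attribution for prediction. Nothing here is the theory's actual implementation; the structure is illustrative.

      class AttentionSchema:
          """A coarse, descriptive model of 'someone attending to something'."""
          def __init__(self, owner):
              self.owner = owner
              self.target = None

          def update(self, target):
              self.target = target

          def report(self):
              # The schema describes attention in simplified terms; on the
              # theory, this is what grounds the claim "I am aware of X".
              return f"{self.owner} is aware of {self.target}"

          def predict(self):
              # Attention attributed to others supports behavioral prediction.
              return f"{self.owner} will likely act on {self.target}"

      me, other = AttentionSchema("self"), AttentionSchema("other")
      me.update("red cup")
      other.update("door")
      print(me.report())      # self is aware of red cup
      print(other.predict())  # other will likely act on door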
