Full bibliography (675 resources)
-
What is the capacity of an informal network of organizations to produce answers to complex tasks requiring the integration of masses of information, conceived as a high-level cognitive and collective activity? Are some network configurations more favourable than others for accomplishing these tasks? We present a method for making these assessments, inspired by the Information Integration Theory that emerged from the modelling of consciousness. First we evaluate the informational network created by the sharing of information between organizations for the realization of a given task. Then we assess the network's natural ability to integrate information, a capacity determined by the partition of its members whose information links are least efficient. We illustrate the method by analysing various functional integrations of Southeast Asian organizations, which create a spontaneous network participating in the study and management of interactions between health and environment. Several guidelines are then proposed to continue developing this fruitful analogy between artificial and organizational consciousness (while refraining from assuming that either exists).
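The partition idea in this abstract can be illustrated with a toy sketch: score a network's integration by the total strength of information links that cross its weakest bipartition. This is only an illustrative min-cut-style proxy under our own assumptions (the function name, the link-weight representation, and the brute-force enumeration are ours), not the method of the cited work.

```python
from itertools import combinations

def integration(weights, nodes):
    """Toy proxy for network integration: the weakest bipartition.

    weights: dict mapping frozenset({a, b}) -> information-sharing strength.
    Returns the minimum, over all non-trivial bipartitions of `nodes`, of
    the total strength of links crossing the partition.
    """
    nodes = list(nodes)
    best = float("inf")
    # Fix nodes[0] on one side to enumerate each bipartition only once.
    rest = nodes[1:]
    for k in range(len(rest) + 1):
        for combo in combinations(rest, k):
            side_a = set(combo) | {nodes[0]}
            side_b = set(nodes) - side_a
            if not side_b:
                continue  # trivial partition: everything on one side
            cross = sum(w for pair, w in weights.items()
                        if len(pair & side_a) == 1)  # link crosses the cut
            best = min(best, cross)
    return best

# Example: three organizations; A and B share a lot, C is weakly linked.
w = {frozenset({"A", "B"}): 0.9,
     frozenset({"B", "C"}): 0.1,
     frozenset({"A", "C"}): 0.05}
print(integration(w, ["A", "B", "C"]))  # weakest cut isolates C: 0.15
```

A low score flags the subgroup whose links are least efficient, matching the abstract's notion that integration is limited by the weakest partition.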
-
This paper presents the author’s attempt to justify the need for understanding the problem of the multilevel mind in artificial intelligence systems. It is assumed that consciousness and the unconscious are not equal in natural mental processes. The human conscious mind is supposedly a “superstructure” above unconscious automatic processes. Nevertheless, it is the unconscious that is the basis for the emotional and volitional manifestations of the human psyche and activity. At the same time, the alleged mental activity of Artificial Intelligence may be devoid of the evolutionary characteristics of the human mind. Several scenarios are proposed for the possible development of a “strong” AI through the prism of the creation (or evolution) of a machine unconscious. In addition, we propose two opposite approaches regarding the relationship between the unconscious and the conscious.
-
This paper describes a possible way to improve computer security by implementing a program with the following three features, related to a weak notion of artificial consciousness: (partial) self-monitoring, the ability to compute the truth of quantifier-free propositions, and the ability to communicate with the user. The integrity of the program could be enhanced by a trusted-computing approach, that is to say a hardware module at the root of a chain of trust. The paper outlines a possible approach rather than an implementation (which would need further work), but the author believes that an implementation using current processors, a debugger, a monitoring program and a trusted processing module is currently possible.
-
Recent studies have suggested that individuals are not able to develop a sense of joint agency during joint actions with automata. We sought to examine whether this lack of joint agency is linked to individuals’ inability to co-represent automaton-generated actions. Fifteen participants observed or performed a Simon response-time task either individually, or jointly with another human or a computer. Participants reported the time interval between their response (or the co-actor’s response) and a subsequent auditory stimulus, which served as an implicit measure of participants’ sense of agency. Participants’ reaction times showed a classical Simon effect when they were partnered with another human, but not when they collaborated with a computer. Furthermore, participants showed a vicarious sense of agency when co-acting with another human agent but not with a computer. This absence of a vicarious sense of agency during human-computer interaction and its relation to action co-representation are discussed.
-
This book discusses what artificial intelligence can truly achieve: Can robots really be conscious? Can we merge with AI, as tech leaders like Elon Musk and Ray Kurzweil suggest? Is the mind just a program? Examining these issues, the author proposes ways we can test for machine consciousness, questions whether consciousness is an unavoidable byproduct of sophisticated intelligence, and considers the overall dangers of creating machine minds.
-
A basic structure and behavior of a human-like AI system with consciousness-like functions is proposed. The system is constructed entirely from artificial neural networks (ANNs), and an optimal-design approach is applied. The proposed system, using recurrent neural networks (RNNs) that learn under dynamic equilibrium, is a redesign of the ANNs in the previous system. The redesign using RNNs makes the proposed brain-like autonomous adaptive system more plausible as a macroscopic model of the brain. By hypothesizing that the “conscious sensation” constituting the basis of phenomenal consciousness is the same as the “state of system-level learning”, we can clearly explain consciousness from an information-system perspective. This hypothesis can also comprehensively explain the recurrent processing theory (RPT) and the global neuronal workspace theory (GNWT) of consciousness. The proposed structure and behavior are simple but scalable by design, and can be expanded to reproduce more complex features of the brain, leading to the realization of an AI system with functions equivalent to human-like consciousness.
-
The target subject is a module called ‘AI Technology’, which applied the ideas of blended learning. First, lecture-style teaching was conducted with presentation slides in order to explain the contents of a textbook. Second, students were required to do exercises and quizzes. During the last eight weeks, they were asked to create presentation slides outside class introducing up-to-date topics in artificial intelligence. These slides were mutually evaluated among the students, who then improved their own slides based on the feedback before the tenth week of the course, for a second round of mutual evaluation. Improving students’ consciousness of a module is meaningful, and knowing the reasons behind it is more significant. Activities useful for improving consciousness of the module in ‘AI Technology’ are identified, and the results are compared with my previous research on the module ‘Artificial Intelligence’. Students are categorized into four groups by degree of consciousness using principal component analysis. This paper reports these results.
-
There is little information on how to design a social robot that effectively executes consciousness-emotion (C-E) interaction in a socially acceptable manner. In fact, the development of such socially sophisticated interactions depends on models of human high-level cognition implemented in the robot’s design. Therefore, a fundamental research problem of social robotics in terms of effective C-E interaction processing is to define a computational architecture of the robotic system in which cognitive-emotional integration occurs, and to determine the cognitive mechanisms underlying consciousness, along with its subjective aspect, in detecting emotions. Our conceptual framework rests upon the assumptions of a computational approach to consciousness, which holds that consciousness and its subjective aspect are specific functions of the human brain that can be implemented in an artificial social robot’s construction. Such a framework for developing C-E interaction addresses the field of machine consciousness, which identifies important computational correlates of consciousness in such an artificial system and the possibility of objectively describing such mechanisms with quantitative parameters based on signal-detection and threshold theories.
-
Recently, there has been considerable interest in, and effort toward, the possibility of designing and implementing conscious robots, i.e., the chance that robots may have subjective experiences. Typical approaches such as the global workspace, information integration, enaction, cognitive mechanisms and embodiment, i.e., the Good Old-Fashioned Artificial Consciousness (henceforth GOFAC), share the same conceptual framework. In this paper, we discuss GOFAC's basic tenets and their implications for AI and Robotics. In particular, we point out the intermediate level fallacy as the central issue affecting GOFAC. Finally, we outline a possible alternative conceptual framework toward robot consciousness.
-
In developing a humanoid robot, there are two major objectives. One is developing a physical robot having body, hands, and feet resembling those of human beings and being able to similarly control them. The other is to develop a control system that works similarly to our brain, to feel, think, act, and learn like ours. In this article, an architecture of a control system with a brain-oriented logical structure for the second objective is proposed. The proposed system autonomously adapts to the environment and implements a clearly defined “consciousness” function, through which both habitual behavior and goal-directed behavior are realized. Consciousness is regarded as a function for effective adaptation at the system-level, based on matching and organizing the individual results of the underlying parallel-processing units. This consciousness is assumed to correspond to how our mind is “aware” when making our moment to moment decisions in our daily life. The binding problem and the basic causes of delay in Libet’s experiment are also explained by capturing awareness in this manner. The goal is set as an image in the system, and efficient actions toward achieving this goal are selected in the goal-directed behavior process. The system is designed as an artificial neural network and aims at achieving consistent and efficient system behavior, through the interaction of highly independent neural nodes. The proposed architecture is based on a two-level design. The first level, which we call the “basic-system,” is an artificial neural network system that realizes consciousness, habitual behavior and explains the binding problem. The second level, which we call the “extended-system,” is an artificial neural network system that realizes goal-directed behavior.
-
To keep robots from doing us harm, they may have to be bound by internal safeguards akin to Asimov's inviolable first law of robotics (do no harm to humans). The discussion of robot consciousness became genuinely pressing with the advent of so-called artificial brains (electronic computers) midway through the 20th century. Minds might be a kind of halo around certain machines, in the way dark matter is an invisible halo around visible galaxies. Where extreme ignorance persists, anything seems possible. But even in this case the robotic successors will have a real shot at understanding and confirming the truth. For though the authors’ own biologically evolved mental models have been shackled by inheritance and by learned interactions with organism-level reality, the mathematical constructs the authors have devised, and which figure into the implementation of computer models, are significantly freer.
-
Human and Machine Consciousness presents a new foundation for the scientific study of consciousness. It sets out a bold interpretation of consciousness that neutralizes the philosophical problems and explains how we can make scientific predictions about the consciousness of animals, brain-damaged patients and machines.
-
A socially intelligent robot must be capable of extracting meaningful information in real time from the social environment and reacting accordingly with coherent human-like behavior. Moreover, it should be able to internalize this information, reason on it at a higher level, build its own opinions independently, and then automatically bias its decision-making according to its unique experience. In recent decades, neuroscience research has highlighted the link between the evolution of such complex behavior and the evolution of a certain level of consciousness, which cannot do without a body that feels emotions as discriminants and prompters. In order to develop cognitive systems for social robotics with greater human-likeness, we used an “understanding by building” approach to model and implement a well-known theory of mind in the form of an artificial intelligence, and we tested it on a sophisticated robotic platform. The presented system is SEAI (Social Emotional Artificial Intelligence), a cognitive system specifically conceived for social and emotional robots. It is designed as a bio-inspired, highly modular, hybrid system with emotion modeling and high-level reasoning capabilities. It follows the deliberative/reactive paradigm, in which a knowledge-based expert system handles high-level symbolic reasoning, while a more conventional reactive paradigm is delegated to low-level processing and control. The SEAI system is also enriched by a model that simulates Damasio’s theory of consciousness and the theory of Somatic Markers. After a review of similar bio-inspired cognitive systems, we present the scientific foundations and their computational formalization at the basis of the SEAI framework. Then, a deeper technical description of the architecture is given, underlining the numerous parallelisms with the human cognitive system. Finally, the influence of artificial emotions and feelings, and their link with the robot’s beliefs and decisions, is tested in a physical humanoid involved in Human-Robot Interaction (HRI).
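The somatic-marker mechanism described in this abstract can be sketched in a few lines: past emotional outcomes accumulate as per-action valences that bias later deliberative choices. All names and formulas below are our own illustrative assumptions, not the SEAI implementation.

```python
class SomaticMarkerStore:
    """Hypothetical somatic-marker-style bias store: each action
    accumulates a running emotional valence from past outcomes."""

    def __init__(self):
        self.valence = {}  # action -> running emotional valence

    def record(self, action, outcome_valence, rate=0.3):
        # Exponential moving average toward the latest outcome.
        old = self.valence.get(action, 0.0)
        self.valence[action] = old + rate * (outcome_valence - old)

    def bias(self, action):
        return self.valence.get(action, 0.0)

def deliberate(candidate_actions, utility, markers):
    """Deliberative choice: symbolic utility plus emotional bias."""
    return max(candidate_actions,
               key=lambda a: utility(a) + markers.bias(a))

markers = SomaticMarkerStore()
markers.record("greet", +1.0)    # past greeting went well
markers.record("ignore", -0.5)   # ignoring caused a bad reaction
choice = deliberate(["greet", "ignore"],
                    utility=lambda a: 0.0, markers=markers)
print(choice)  # emotional bias favors "greet"
```

The split mirrors the deliberative/reactive paradigm at a toy scale: the `utility` function stands in for symbolic reasoning, while the marker store supplies the body-derived emotional bias.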
-
The study outlines the existing and potential philosophical issues of the idea of conscious machines, which originated from the development of artificial consciousness within contemporary research on artificial intelligence and cognitive robotics. The outline shows that the idea of conscious machines involves two big philosophical issues. The first is the definition of consciousness, taking into account the selection of the set of objects that can have consciousness (human beings, living beings or machines), the typology of consciousness, the clarification of the nature of consciousness's carriers, and the relationships between consciousness and its environment (including the social and cultural environment) and between consciousness and language, in order to create an artificial consciousness within a machine, making that machine conscious. The second is the clarification of whether only artificially created machines can be conscious machines, or whether cyborgized (engineered) human beings can also be considered conscious machines. These philosophical issues show that there can be two ways to create conscious machines: 1) creating artificial consciousness within an artificially created machine; and 2) cyborgizing a human being, transforming it into an artificially created machine possessing natural consciousness (or even consciousness artificially transformed from natural into artificial).
-
The main problem in robotics is the strengthening of the robot's artificial intelligence (IA) system. Its solution will facilitate the cooperation of humans with robots. The authors suggest an advanced technology for IA development. It borrows the method of universal (deep) tutoring (TU), relying on a semantic axiomatic method (AM). In the TU method, knowledge understanding is achieved through the formation of rational consciousness. It uses the utmost mathematical abstractions, expressed in the language of categories (LC). Being a functional language, LC is destined for the description of intellectual processes (PIR) thanks to its universal constructions. Following TU, the robot's educational space (SER) is a class of categories. The IA's sophistication grows through the inclusion of new categories, as required, in the robot IA's multilevel hierarchical oriented network of concepts (NC). Universal laws of robot functioning are embodied as operations of algebraic structures that are objects of NC, creating an integrated environment of applications (IEA). The robot's intercourse with humans and its interaction with its working space (SWR) activate the PIR occurring in NC. Processes of assignment execution (PER) begin only when a set of relations in SWR and in the robot's space of notions is satisfied. The ability of PIR to climb to the highest levels of NC and descend to the lowest ones endows the robot with the capability to generate PER, making decisions in unfamiliar SWR.
-
What is “self-awareness”? How can explicit consciousness and sub-consciousness be mapped in relation to each other? How are they related to the self? How can these entities be represented in an artificial conscious system? These questions are the focus of this article. People are aware of only the behavior that they are focusing on; they cannot be directly aware of routine behavior such as walking and breathing. The latter is generally called unconscious behavior, and here we call it sub-conscious behavior. To understand self-awareness, therefore, firstly it is important to map explicit consciousness and sub-consciousness, which is where the self is deeply involved. We consider that if there is no self that refers to itself, no one can be aware of what he himself is doing. In this study we map explicit consciousness and sub-consciousness using an artificial conscious system, and then make a new proposal about the relationship between self-awareness and the self.
-
This chapter describes the computer modeling of a psychic system that generates representations for a system with an artificial corporeality. The model defines how the sensation of thinking is formed in an artificial system and how such an artificial system can experience its own idea generation. Next, the chapter discusses a multiagent approach to designing the artificial psychic system. Further, it describes the organizational memory of the system. The organizational memory is organized into networks of memory agents related through concepts of semantic proximity or semantic generalization. The concepts of proximity, specialization and generalization can be precisely defined using qualifications related to the acquaintances of the agents. The chapter finally shows that the general psyche of an artificial system, when it is distributed over multiple corporeal systems with local artificial consciousnesses, can be unified.
-
How has multiplicity superseded impossibility in philosophical challenges to artificial consciousness? I assess a trajectory in recent debates on artificial consciousness, in which metaphysical and explanatory challenges to the possibility of building conscious machines lead to epistemological concerns about the multiplicity underlying ‘what it is like’ to be a conscious creature or be in a conscious state. First, I analyse earlier challenges which claim that phenomenal consciousness cannot arise, or cannot be built, in machines. These are based on Block’s Chinese Nation and Chalmers’ Hard Problem. To defuse such challenges, theorists of artificial consciousness can appeal to empirical methods and models of explanation. Second, I explain why this naturalistic approach produces an epistemological puzzle on the role of biological properties in phenomenal consciousness. Neither behavioural tests nor theoretical inferences seem to settle whether our machines are conscious. Third, I evaluate whether the new challenge can be managed through a more fine-grained taxonomy of conscious states. This strategy is supported by the development of similar taxonomies for biological species and animal consciousness. Although it makes sense of some current models of artificial consciousness, it raises questions about their subjective and moral significance.
-
Estrada, D. (2018). Conscious enactive computation. arXiv. https://doi.org/10.48550/ARXIV.1812.02578
This paper looks at recent debates in the enactivist literature on computation and consciousness in order to assess major obstacles to building artificial conscious agents. We consider a proposal from Villalobos and Dewhurst (2018) for enactive computation on the basis of organizational closure. We attempt to improve the argument by reflecting on the closed paths through state space taken by finite state automata. This motivates a defense against Clark's recent criticisms of "extended consciousness", and perhaps a new perspective on living with machines.