Full bibliography: 675 resources
-
The Terasem Mind Uploading Experiment is a multi-decade test of the comparability of a single person's actual human consciousness, as assessed by expert psychological review of that person's digitized interactions, with the same person's purported human consciousness, as assessed by expert psychological interviews of personality software that draws upon a database composed of the original person's digitized interactions. The experiment is based upon a hypothesis that the paper analyzes for its conformance with scientific testability according to the criteria set forth by Karl Popper. Strengths and weaknesses of both the hypothesis and the experiment are assessed in terms of other tests of digital consciousness, scientific rigor, and good clinical practice. Recommendations for improvement include stronger parametrization of endpoint assessment and better attention to compliance with informed consent in the event that software-based consciousness emerges.
-
We answer the question raised by the title by developing a neural architecture for the attention control system in animals in a hierarchical manner, following what we conjecture is an evolutionary path. The resulting evolutionary model (based on CODAM at the highest level) and answer to the question allow us to consider both different forms of consciousness and how machine consciousness could itself possess a variety of forms.
-
In this paper, the notion of super-intelligence (or "AI++", as Chalmers has termed it) is considered in the context of machine consciousness (MC) research. Suppose AI++ were to come about: would real MC have then also arrived, "for free"? (I call this the "drop-out question".) Does the idea tempt you, as an MC investigator? What are the various positions that might be adopted on the issue of whether an AI++ would necessarily (or with strong likelihood) be a conscious AI++? Would a conscious super-intelligence also be a super-consciousness? (Indeed, what meaning might be attached to the notion of "super-consciousness"?) What ethical and social consequences might be drawn from the idea of conscious super-AIs or from that of artificial super-consciousness? And what implications does this issue have for technical progress on MC in a pre-AI++ world? These and other questions are considered.
-
In this article, we propose the Conscious-Emotional Learning Tutoring System technology, a biologically plausible cognitive agent based on human brain functions. This agent is capable of learning and remembering events and any related information such as corresponding procedures, stimuli and their emotional valences. In our model, emotions play a role in the encoding and remembering of events. This allows the agent to improve its behaviour by remembering previously selected behaviours that were influenced by its emotional mechanism. Moreover, the architecture incorporates a realistic memory consolidation process based on a data mining algorithm.
-
We present a two-level model of concurrent communicating systems (CCS) to serve as a basis for machine consciousness. A language implementing threads within logic programming is first introduced. This high-level framework allows for the definition of abstract processes that can be executed on a virtual machine. We then look for a possible grounding of these processes into the brain. Towards this end, we map abstract definitions (including logical expressions representing compiled knowledge) into a variant of the π-calculus. We illustrate this approach through a series of examples extending from a purely reactive behavior to patterns of consciousness.
-
Scientific behavior is used as a benchmark to examine the truth status of computationalism (COMP) as a law of nature. A COMP-based artificial scientist is examined from three simple perspectives to see if they shed light on the truth or falsehood of COMP through its ability, or otherwise, to deliver authentic original science on the a priori unknown as humans do. The first perspective (A) looks at the handling of ignorance and supports a claim that COMP is "trivially true" or "pragmatically false" in the sense that you can simulate a scientist if you already know everything, a state that renders the simulation possible but pointless. The second scenario (B) is more conclusive and unusual in that it reveals that the COMP scientist can never propose or debate that COMP is a law of nature. This marked difference between the human and the artificial scientist in this single, very specific circumstance means that COMP cannot be true as a general claim. The third scenario (C) examines the artificial scientist's ability to do science on itself/humans to uncover the "law of nature" which results in itself. This scenario reveals that a successful test for scientific behavior by a COMP-based artificial scientist supports a claim that COMP is true. Such a test is quite practical and can be applied to an artificial scientist based on any design principle, not merely COMP. Scenario (C) also reveals a practical example of the COMP scientist's inability to handle informal systems (in the form of liars), which further undermines COMP. Overall, the result is that COMP is false, with certainty in one very specific, critical place. This lends support to the claims (i) that artificial general intelligence will not succeed based on COMP principles, and (ii) that computationally enacted abstract models of human cognition will never create a mind.
-
The concept of qualia poses a central problem in the framework of consciousness studies. Despite it being a controversial issue even in the study of human consciousness, we argue that qualia can be complementarily studied using artificial cognitive architectures. In this work we address the problem of defining qualia in the domain of artificial systems, providing a model of “artificial qualia”. Furthermore, we partially apply the proposed model to the generation of visual qualia using the cognitive architecture CERA-CRANIUM, which is modeled after the global workspace theory of consciousness. It is our aim to define, characterize and identify artificial qualia as direct products of a simulated conscious perception process. Simple forms of the apparent motion effect are used as the basis for a preliminary experimental setting focused on the simulation and analysis of synthetic visual experience. In contrast with the study of biological brains, the inspection of the dynamics and transient inner states of the artificial cognitive architecture can be performed effectively, thus enabling the detailed analysis of covert and overt percepts generated by the system when it is confronted with specific visual stimuli. The observed states in the artificial cognitive architecture during the simulation of apparent motion effects are used to discuss the existence of possible analogous mechanisms in human cognition processes.
-
A novel theory of reflective consciousness, will and self is presented, based on modeling each of these entities using self-referential mathematical structures called hypersets. Pattern theory is used to argue that these exotic mathematical structures may meaningfully be considered as parts of the minds of physical systems, even finite computational systems. The hyperset models presented are hypothesized to occur as patterns within the "moving bubble of attention" of the human brain and any brainlike AI system. They appear to be compatible with both panpsychist and materialist views of consciousness, and probably other views as well.
-
Self-aware individuals are more likely to consider whether their actions are appropriate in terms of public self-consciousness, and to use that information to execute behaviors that match external standards and/or expectations. The learning concepts through which individuals monitor themselves have generally been overlooked by artificial intelligence researchers. Here we report on our attempt to integrate a self-awareness mechanism into an agent's learning architecture. Specifically, we describe (a) our proposal for a self-aware agent model that includes an external learning mechanism and internal cognitive capacity with super-ego and ego characteristics; and (b) our application of a version of the iterated prisoner's dilemma representing conflicts between the public good and private interests to analyze the effects of self-awareness on an agent's individual performance and cooperative behavior. Our results indicate that self-aware agents that consider public self-consciousness utilize rational analysis in a manner that promotes cooperative behavior and supports faster societal movement toward stability. We found that a small number of self-aware agents are sufficient for improving social benefits and resolving problems associated with collective irrational behaviors.
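The iterated prisoner's dilemma setup described in the abstract above can be sketched minimally in code. Everything here (the payoff values, the `awareness_weight` parameter, and the decision rule that blends private gain against a public-good term) is an illustrative assumption, not the authors' actual model.

```python
# Minimal sketch of an iterated prisoner's dilemma with self-aware agents.
# Payoffs, the awareness weight, and the decision rule are all illustrative
# assumptions, not the model from the paper.

PAYOFF = {  # (my_move, other_move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def choose(history, self_aware, awareness_weight=0.5):
    """Pick a move; a self-aware agent weighs its private gain from
    defecting against a public-good term (mutual cooperation)."""
    if not history:
        return "C"
    opponent_last = history[-1][1]
    if not self_aware:
        return "D" if opponent_last == "D" else "C"  # plain tit-for-tat
    # Self-aware: compare expected private gain with the social standard.
    private_gain = PAYOFF[("D", opponent_last)] - PAYOFF[("C", opponent_last)]
    social_cost = PAYOFF[("C", "C")] - PAYOFF[("D", "D")]
    if awareness_weight * social_cost >= (1 - awareness_weight) * private_gain:
        return "C"
    return "D"

def play(rounds, aware_a, aware_b):
    """Run the iterated game and return (score_a, score_b)."""
    history = []  # list of (move_a, move_b)
    score_a = score_b = 0
    for _ in range(rounds):
        a = choose(history, aware_a)
        b = choose([(y, x) for x, y in history], aware_b)
        history.append((a, b))
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
    return score_a, score_b
```

Under these toy assumptions, two self-aware agents settle into sustained mutual cooperation, loosely mirroring the paper's finding that public self-consciousness promotes cooperative behavior.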
-
This paper addresses the problem of human–computer interactions when the computer can interpret and express a kind of human-like behavior, offering natural communication. A conceptual framework for incorporating emotions with rationality is proposed. A model of affective social interactions is described. The model utilizes the SAIBA framework, which distinguishes among several stages of information processing. The SAIBA framework is extended, and a model is realized in human behavior detection, human behavior interpretation, intention planning, attention tracking, behavior planning, and behavior realization components. Two models of incorporating emotions with rationality into a virtual artifact are presented. The first uses an implicit implementation of emotions. The second has an explicit realization of a three-layered model of emotions, which is highly interconnected with other components of the system. Details of the model with implicit implementation of emotional behavior are shown, along with the evaluation methodology and results. Discussion of the extended model of an agent is given in the final part of the paper.
-
Shanahan's work admirably and convincingly supports Baars' global workspace by means of plausible and updated neural models. Yet little of his work is related to the issue of consciousness as phenomenal experience. He focuses his effort mostly on the behavioral correlates of consciousness such as autonomy, flexibility, and information integration. Moreover, although the importance of embodiment and situated cognition is emphasized, most of the conceptual tools suggested (dynamic systems, complex networks, global workspace) require the external world only during their development. Leaving aside the issue of phenomenal experience, the book fleshes out a convincing and thought-provoking model for many aspects of conscious behaviour.
-
A brain model based on glial-neuronal interactions is proposed. Glial-neuronal synaptic units are interpreted as elementary reflection mechanisms, called proemial synapses. In glial networks (syncytia), cyclic intentional programs are generated, interpreted as auto-reflective intentional programming. Both types of reflection mechanisms are formally described and may be implementable in a robot brain. Based on the logic of acceptance and rejection, the robot is capable of rejecting irrelevant environmental information, showing at least a "touch" of subjective behavior. Since reflective intentional programming generates both relevant and irrelevant structures already within the brain, ontological gaps arise which must be integrated. In the human brain, the act of self-reference may exert a holistic function enabling self-consciousness. However, since the act of self-reference is a mysterious function not experimentally testable in brain research, it cannot be implemented in a robot brain. Therefore, the creation of self-conscious robots may never be possible. Finally, some philosophical implications are discussed.
-
Despite many efforts, there are no computational models of consciousness that can be used to design conscious intelligent machines. This is mainly attributed to available definitions of consciousness being human-centered, vague, and incomplete. Through a biological analysis of consciousness and the concept of machine intelligence, we propose a physical definition of consciousness with the hope of modeling it in intelligent machines. We propose a computational model of consciousness driven by competing motivations, goals, and attention-switching. We propose a concept of mental saccades that is useful for explaining the attention-switching and focusing mechanism from a computational perspective. Finally, we compare our model with other computational models of consciousness.
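The idea of attention-switching driven by competing motivations, with saccade-like jumps of focus, can be sketched as a toy winner-take-all rule. The motivation names, values, and the `switch_margin` hysteresis parameter below are illustrative assumptions, not the paper's actual mechanism.

```python
# Toy sketch of motivation-driven attention switching ("mental saccades").
# Names, values, and the switching margin are illustrative assumptions.

def step(motivations, focus, switch_margin=0.2):
    """Return the new focus of attention: switch only if some competing
    motivation exceeds the current one by `switch_margin`, producing
    discrete saccade-like jumps rather than continuous drift."""
    best = max(motivations, key=motivations.get)
    if focus is None or motivations[best] > motivations.get(focus, 0.0) + switch_margin:
        return best
    return focus

motivations = {"hunger": 0.3, "curiosity": 0.6, "safety": 0.1}
focus = step(motivations, focus=None)   # initial saccade to the strongest drive
motivations["safety"] = 0.9             # a threat appears
focus = step(motivations, focus)        # saccade away to "safety"
```

The margin acts as hysteresis: small fluctuations among motivations do not dislodge the current focus, which is one simple way to make switching discrete.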
-
The Ouroboros Model features a biologically inspired cognitive architecture. At its core lies a self-referential recursive process with alternating phases of data acquisition and evaluation. Memory entries are organized in schemata. The activation at a time of part of a schema biases the whole structure and, in particular, missing features, thus triggering expectations. An iterative recursive monitor process termed "consumption analysis" then checks how well such expectations fit with successive activations. Mismatches between anticipations based on previous experience and actual current data are highlighted and used for controlling the allocation of attention. In case no directly fitting filler for an open slot is found, activation spreads more widely and includes data relating to the actor, and Higher-Order Personality Activation, HOPA, ensues. It is briefly outlined how the Ouroboros Model produces many diverse characteristics and thus addresses established criteria for consciousness. Coarse-grained relationships to selected previous conceptualizations of consciousness and a sketch of how the Ouroboros Model could shed light on current research themes in artificial general intelligence and consciousness conclude this paper.
-
The progress in the machine consciousness research field has to be assessed in terms of the features demonstrated by the new models and implementations currently being designed. In this paper, we focus on the functional aspects of consciousness and propose the application of a revision of ConsScale — a biologically inspired scale for measuring cognitive development in artificial agents — in order to assess the cognitive capabilities of machine consciousness implementations. We argue that the progress in the implementation of consciousness in artificial agents can be assessed by looking at how key cognitive abilities associated with consciousness are integrated within artificial systems. Specifically, we characterize ConsScale as a partially ordered set and propose a particular dependency hierarchy for cognitive skills. Associated with that hierarchy, a graphical representation of the cognitive profile of an artificial agent is presented as a helpful analytic tool. The proposed evaluation schema is discussed and applied to a number of significant machine consciousness models and implementations. Finally, the possibility of generating qualia and phenomenological states in machines is discussed in the context of the proposed analysis.
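Treating a cognitive-skill scale as a partially ordered set, as the abstract above describes, can be sketched as a small dependency graph with a consistency check. The skill names and dependencies below are invented for illustration; they are not the actual ConsScale levels.

```python
# Sketch of a cognitive-skill scale as a partially ordered set.
# Skill names and dependencies are invented for illustration only.

DEPENDS_ON = {
    "attention": set(),
    "memory": set(),
    "emotion": {"attention"},
    "planning": {"attention", "memory"},
    "self_recognition": {"memory", "emotion"},
    "theory_of_mind": {"self_recognition", "planning"},
}

def closure(skill):
    """All skills that must precede `skill` in the partial order
    (transitive dependencies)."""
    seen = set()
    stack = [skill]
    while stack:
        for dep in DEPENDS_ON[stack.pop()]:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

def profile_is_consistent(skills):
    """An agent's cognitive profile respects the partial order iff every
    claimed skill's dependencies are also claimed."""
    skills = set(skills)
    return all(closure(s) <= skills for s in skills)
```

A graphical cognitive profile like the one the paper proposes could then be drawn directly from this graph, with inconsistent profiles (a skill claimed without its prerequisites) flagged by the check above.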
-
Is it possible to construct an artificial person? Researchers in the field of artificial intelligence have for decades been developing computer programs that emulate human intelligence. This book goes beyond intelligence and describes how close we are to recreating many of the other capacities that make us human. These abilities include learning, creativity, consciousness, and emotion. The attempt to understand and engineer these abilities constitutes the new interdisciplinary field of artificial psychology, which is characterized by contributions from philosophy, cognitive psychology, neuroscience, computer science, and robotics. This work is intended for use as a main or supplementary introductory textbook for a course in cognitive psychology, cognitive science, artificial intelligence, or the philosophy of mind. It examines human abilities as operating requirements that an artificial person must have and analyzes them from a multidisciplinary approach. The book is comprehensive in scope, covering traditional topics like perception, memory, and problem solving. However, it also describes recent advances in the study of free will, ethical behavior, affective architectures, social robots, and hybrid human-machine societies.
-
In their joint paper entitled “The Replication of the Hard Problem of Consciousness in AI and Bio-AI” (Boltuc et al., Replication of the hard problem of consciousness in AI and Bio-AI: An early conceptual framework, 2008), Nicholas and Piotr Boltuc suggest that machines could be equipped with phenomenal consciousness, that is, subjective consciousness that satisfies Chalmers’ hard problem (we will abbreviate the hard problem of consciousness as “H-consciousness”). The claim is that if we knew the inner workings of phenomenal consciousness and could understand its precise operation, we could instantiate such consciousness in a machine. This claim, called the extra-strong AI thesis, is important because, if true, it would demystify the privileged-access problem of first-person consciousness and cast it as an empirical problem of science rather than a fundamental question of philosophy. A core assumption of the extra-strong AI thesis is that there is no logical argument that precludes the implementation of H-consciousness in an organic or inorganic machine, provided we understand its algorithm. Another way of framing this conclusion is that there is nothing special about H-consciousness as compared to any other process. That is, in the same way that we do not preclude a machine from implementing photosynthesis, we also do not preclude a machine from implementing H-consciousness. While one may be more difficult in practice, it is a problem of science and engineering, and no longer a philosophical question. I propose that Boltuc’s conclusion, while plausible and convincing, comes at a very high price; the argument given for his conclusion does not exclude any conceivable process from machine implementation. In short, if we make some assumptions about the equivalence of a rough notion of algorithm and then tie this to human understanding, all logical preconditions vanish and the argument grants that any process can be implemented in a machine.
The purpose of this paper is to comment on the argument for his conclusion and offer additional properties of H-consciousness that can be used to make the conclusion falsifiable through scientific investigation rather than relying on the limits of human understanding.