
Full bibliography (558 resources)

  • It is suggested that some limitations of current designs for medical AI systems (be they autonomous or advisory) stem from the failure of those designs to address issues of artificial (or machine) consciousness. Consciousness would appear to play a key role in the expertise, particularly the moral expertise, of human medical agents, including, for example, autonomous weighting of options (e.g., in diagnosis); planning treatment; use of imaginative creativity to generate courses of action; sensorimotor flexibility and sensitivity; empathetic and morally appropriate responsiveness; and so on. Thus, it is argued, a plausible design constraint for a successful ethical machine medical or care agent is for it to at least model, if not reproduce, relevant aspects of consciousness and associated abilities. In order to provide theoretical grounding for such an enterprise we examine some key philosophical issues that concern the machine modelling of consciousness and ethics, and we show how questions relating to the first research goal are relevant to medical machine ethics. We believe that this will overcome a blanket skepticism concerning the relevance of understanding consciousness to the design and construction of artificial ethical agents for medical or care contexts. It is thus argued that it would be prudent for designers of machine medical ethics (MME) agents to reflect on issues to do with consciousness and medical (moral) expertise; to become more aware of relevant research in the field of machine consciousness; and to incorporate insights gained from these efforts into their designs.

  • This paper presents the learning efficiency of a consciousness system for a robot using an artificial neural network. The proposed consciousness system consists of a reason system, a feeling system, and an association system. The three systems are modeled using the Module of Nerves for Advanced Dynamics (ModNAD). An artificial neural network trained by supervised learning with backpropagation is used to train the ModNAD. The reason system imitates behaviour and represents self-condition and other-condition. The feeling system represents sensation and emotion. The association system represents the behaviour of self and determines whether self is comfortable or not. A robot is asked to perform cognition and tasks using the consciousness system. Learning converges to an error of about 0.01 within about 900 training iterations for the imitation, pain, solitude, and association modules, and within about 400 iterations for the comfort and discomfort modules. It can be concluded that learning in the ModNAD completes after a relatively small number of iterations because the learning efficiency of the ModNAD artificial neural network is good. The results also show that each ModNAD has a function to imitate and to cognize emotion. The consciousness system presented in this paper may be considered a fundamental step toward developing a robot with consciousness and feelings similar to those of humans.
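
A minimal, hypothetical sketch of the kind of training the abstract describes: a small feedforward module trained by supervised backpropagation until its mean squared error falls to roughly 0.01. This is not the authors' ModNAD code; the architecture, toy task, and hyperparameters are illustrative assumptions.

```python
# Illustrative only: a small feedforward "module" trained by backpropagation
# until its mean squared error drops below ~0.01, echoing the convergence
# criterion reported in the abstract. Data, sizes, and learning rate are
# assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy training set (a stand-in for the imitation/pain/solitude patterns).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with sigmoid activations.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(20000):
    h = sigmoid(X @ W1 + b1)          # forward pass
    y = sigmoid(h @ W2 + b2)
    err = Y - y
    mse = float(np.mean(err ** 2))
    if mse < 0.01:                    # "converges to about 0.01"
        print(f"converged at epoch {epoch}, mse={mse:.4f}")
        break
    d_y = err * y * (1 - y)           # backward pass (delta rule)
    d_h = (d_y @ W2.T) * h * (1 - h)
    W2 += lr * h.T @ d_y;  b2 += lr * d_y.sum(axis=0)
    W1 += lr * X.T @ d_h;  b1 += lr * d_h.sum(axis=0)
```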

  • The goal of this chapter is to present an overview of the work in AI on emotions and machine consciousness, with an eye toward answering these questions. Starting with a brief philosophical perspective on emotions and machine consciousness to frame the work, the chapter first focuses on artificial emotions, and then moves on to machine consciousness – reflecting the fact that emotions and consciousness have been treated independently and by different communities in AI. The chapter concludes by discussing philosophical implications of AI research on emotions and consciousness.

  • I compare a ‘realist’ with a ‘social–relational’ perspective on our judgments of the moral status of artificial agents (AAs). I develop a realist position according to which the moral status of a being—particularly in relation to moral patiency attribution—is closely bound up with that being’s ability to experience states of conscious satisfaction or suffering (CSS). For a realist, both moral status and experiential capacity are objective properties of agents. A social relationist denies the existence of any such objective properties in the case of either moral status or consciousness, suggesting that the determination of such properties rests solely upon social attribution or consensus. A wide variety of social interactions between us and various kinds of artificial agent will no doubt proliferate in future generations, and the social–relational view may well be right that the appearance of CSS features in such artificial beings will make moral role attribution socially prevalent in human–AA relations. But there is still the question of what actual CSS states a given AA is capable of undergoing, independently of the appearances. This is not just a matter of changes in the structure of social existence that seem inevitable as human–AA interaction becomes more prevalent. The social world is itself enabled and constrained by the physical world, and by the biological features of living social participants. Properties analogous to certain key features in biological CSS are what need to be present for nonbiological CSS. Working out the details of such features will be an objective scientific inquiry.

  • Efforts to create computational models of consciousness have accelerated over the last two decades, creating a field that has become known as artificial consciousness. There have been two main motivations for this controversial work: to develop a better scientific understanding of the nature of human/animal consciousness and to produce machines that genuinely exhibit conscious awareness. This review begins by briefly explaining some of the concepts and terminology used by investigators working on machine consciousness, and summarizes key neurobiological correlates of human consciousness that are particularly relevant to past computational studies. Models of consciousness developed over the last twenty years are then surveyed. These models are largely found to fall into five categories based on the fundamental issue that their developers have selected as being most central to consciousness: a global workspace, information integration, an internal self-model, higher-level representations, or attention mechanisms. For each of these five categories, an overview of past work is given, a representative example is presented in some detail to illustrate the approach, and comments are provided on the contributions and limitations of the methodology. Three conclusions are offered about the state of the field based on this review: (1) computational modeling has become an effective and accepted methodology for the scientific study of consciousness, (2) existing computational models have successfully captured a number of neurobiological, cognitive, and behavioral correlates of conscious information processing as machine simulations, and (3) no existing approach to artificial consciousness has presented a compelling demonstration of phenomenal machine consciousness, or even clear evidence that artificial phenomenal consciousness will eventually be possible. The paper concludes by discussing the importance of continuing work in this area, considering the ethical issues it raises, and making predictions concerning future developments.
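
The review groups models into five categories; below is a tiny, hypothetical sketch of just one of them, the global-workspace idea: specialist processes compete for access, and the winner's content is broadcast back to all specialists. It is not a reconstruction of any specific model surveyed, and the contents and salience values are made up.

```python
# Illustrative only: the generic global-workspace scheme
# (competition for access, then broadcast to all specialists).
specialists = {
    "vision":  ("red light ahead", 0.9),   # (content, salience)
    "hearing": ("phone ringing",   0.4),
    "memory":  ("meeting at noon", 0.6),
}

# Competition: the most salient content gains access to the workspace.
winner, (content, _salience) = max(specialists.items(), key=lambda kv: kv[1][1])
workspace = {"source": winner, "content": content}

# Broadcast: every specialist receives the winning content.
for name in specialists:
    print(f"{name} receives broadcast: {workspace['content']!r} (from {workspace['source']})")
```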

  • The Terasem Mind Uploading Experiment is a multi-decade test of the comparability of a single person's actual human consciousness, as assessed by expert psychological review of that person's digitized interactions, with the same person's purported human consciousness, as assessed by expert psychological interviews of personality software that draws upon a database composed of the original person's digitized interactions. The experiment is based upon a hypothesis that the paper analyzes for its conformance with scientific testability in accordance with the criteria set forth by Karl Popper. Strengths and weaknesses of both the hypothesis and the experiment are assessed in terms of other tests of digital consciousness, scientific rigor, and good clinical practices. Recommendations for improvement include stronger parametrization of endpoint assessment and better attention to compliance with informed consent in the event that software-based consciousness emerges.

  • We answer the question raised by the title by developing a neural architecture for the attention control system in animals in a hierarchical manner, following what we conjecture is an evolutionary path. The resulting evolutionary model (based on CODAM at the highest level) and answer to the question allow us to consider both different forms of consciousness as well as how machine consciousness could itself possess a variety of forms.

  • In this paper, the notion of super-intelligence (or "AI++", as Chalmers has termed it) is considered in the context of machine consciousness (MC) research. Suppose AI++ were to come about, would real MC have then also arrived, "for free"? (I call this the "drop-out question".) Does the idea tempt you, as an MC investigator? What are the various positions that might be adopted on the issue of whether an AI++ would necessarily (or with strong likelihood) be a conscious AI++? Would a conscious super-intelligence also be a super-consciousness? (Indeed, what meaning might be attached to the notion of "super-consciousness"?) What ethical and social consequences might be drawn from the idea of conscious super-AIs or from that of artificial super-consciousness? And what implications does this issue have for technical progress on MC in a pre-AI++ world? These and other questions are considered.

  • Despite many efforts, there are no computational models of consciousness that can be used to design conscious intelligent machines. This is mainly attributed to available definitions of consciousness being human-centered, vague, and incomplete. Through a biological analysis of consciousness and the concept of machine intelligence, we propose a physical definition of consciousness with the hope of modelling it in intelligent machines. We propose a computational model of consciousness driven by competing motivations, goals, and attention-switching. We propose the concept of mental saccades, which is useful for explaining the attention-switching and focusing mechanism from a computational perspective. Finally, we compare our model with other computational models of consciousness.
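
A minimal, hypothetical sketch of attention-switching among competing motivations, in the spirit of the abstract's "mental saccades": attention jumps to a rival motivation only when its urgency clearly exceeds that of the current focus. The dynamics, names, and thresholds are assumptions, not the authors' model.

```python
# Illustrative only: competing motivations accumulate urgency; a "mental
# saccade" switches attention when a rival clearly dominates the current focus.
import random

random.seed(1)
motivations = {"hunger": 0.2, "curiosity": 0.5, "safety": 0.1}
focus = "curiosity"        # current focus of attention
SWITCH_MARGIN = 0.3        # hysteresis so attention does not flicker

for step in range(10):
    for name in motivations:                                  # urgencies drift upward
        motivations[name] += random.uniform(0.0, 0.2)
    motivations[focus] = max(0.0, motivations[focus] - 0.4)   # attending satisfies the focus

    strongest = max(motivations, key=motivations.get)
    if strongest != focus and motivations[strongest] - motivations[focus] > SWITCH_MARGIN:
        print(f"step {step}: saccade {focus} -> {strongest}")
        focus = strongest
```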

  • The Ouroboros Model features a biologically inspired cognitive architecture. At its core lies a self-referential recursive process with alternating phases of data acquisition and evaluation. Memory entries are organized in schemata. The activation at any one time of part of a schema biases the whole structure and, in particular, its missing features, thus triggering expectations. An iterative recursive monitor process termed "consumption analysis" then checks how well such expectations fit with successive activations. Mismatches between anticipations based on previous experience and actual current data are highlighted and used for controlling the allocation of attention. In case no directly fitting filler for an open slot is found, activation spreads more widely and includes data relating to the actor, and Higher-Order Personality Activation (HOPA) ensues. It is briefly outlined how the Ouroboros Model produces many diverse characteristics and thus addresses established criteria for consciousness. Coarse-grained relationships to selected previous conceptualizations of consciousness and a sketch of how the Ouroboros Model could shed light on current research themes in artificial general intelligence and consciousness conclude this paper.
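
A minimal, hypothetical sketch of the "consumption analysis" step described above: a schema is treated as a set of expected features, incoming data are scored against those expectations, and a poor fit raises attention and widens activation to actor-related data (the HOPA step). The structures, feature names, and thresholds are assumptions, not the Ouroboros Model implementation.

```python
# Illustrative only: score how well observed data "fit" a schema's expectations;
# a mismatch raises attention and widens activation to actor-related data.
breakfast_schema = {"plate", "fork", "coffee", "toast"}   # expected features
actor_data = {"self", "hands", "kitchen"}                 # actor-related data

def consumption_analysis(observed, schema):
    """Return a fit score in [0, 1] and the expected features still missing."""
    missing = schema - observed
    fit = len(schema & observed) / len(schema)
    return fit, missing

observed = {"plate", "coffee", "newspaper"}
fit, missing = consumption_analysis(observed, breakfast_schema)
print(f"fit={fit:.2f}, missing={sorted(missing)}")

if fit < 0.75:
    # Mismatch: highlight it, allocate attention, and spread activation more
    # widely to include data about the actor (the HOPA step).
    fit2, missing2 = consumption_analysis(observed | actor_data, breakfast_schema)
    print(f"attention raised; widened fit={fit2:.2f}, still missing={sorted(missing2)}")
```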

  • To understand the mind and its place in Nature is one of the great intellectual challenges of our time, a challenge that is both scientific and philosophical. How does cognition influence an animal's behaviour? What are its neural underpinnings? How is the inner life of a human being constituted? What are the neural underpinnings of the conscious condition? This book approaches each of these questions from a scientific standpoint. But it contends that, before we can make progress on them, we have to give up the habit of thinking metaphysically, a habit that creates a fog of philosophical confusion. From this post-reflective point of view, the book argues for an intimate relationship between cognition, sensorimotor embodiment, and the integrative character of the conscious condition. Drawing on insights from psychology, neuroscience, and dynamical systems, it proposes an empirical theory of this three-way relationship whose principles, not being tied to the contingencies of biology or physics, are applicable to the whole space of possible minds in which humans and other animals are included. The book provides a joined-up theory of consciousness.

  • This paper extends three decades of work arguing that researchers who discuss consciousness should not restrict themselves only to (adult) human minds, but should study (and attempt to model) many kinds of minds, natural and artificial, thereby contributing to our understanding of the space containing all of them. We need to study what they do or can do, how they can do it, and how the natural ones can be emulated in synthetic minds. That requires: (a) understanding sets of requirements that are met by different sorts of minds, i.e. the niches that they occupy, (b) understanding the space of possible designs, and (c) understanding complex and varied relationships between requirements and designs. Attempts to model or explain any particular phenomenon, such as vision, emotion, learning, language use, or consciousness lead to muddle and confusion unless they are placed in that broader context. A methodology for making progress is summarised and a novel requirement proposed for a theory of how human minds work: the theory should support a single generic design for a learning, developing system that, in addition to meeting familiar requirements, should be capable of developing different and opposed philosophical viewpoints about consciousness, and the so-called hard problem. In other words, we need a common explanation for the mental machinations of mysterians, materialists, functionalists, identity theorists, and those who regard all such theories as attempting to answer incoherent questions. No designs proposed so far come close.

  • This is a reply to commentaries on my article "An Alternative to Working on Machine Consciousness". Reading the commentaries caused me to write a lengthy background tutorial paper explaining some of the assumptions that were taken for granted in the target article, and pointing out various confusions regarding the notion of consciousness, including many related to its polymorphism. This response to commentaries builds on that background material, attempting to address the main questions, objections and misunderstandings found in the responses, several of which were a result of my own brevity and lack of clarity in the original target article, now remedied, I hope, by the background article [Sloman, 2010b].

  • This paper is an attempt to summarise and justify critical comments I have been making over several decades about research on consciousness by philosophers, scientists and engineers. This includes (a) explaining why the concept of "phenomenal consciousness" (P-C), in the sense defined by Ned Block, is semantically flawed and unsuitable as a target for scientific research or machine modelling, whereas something like the concept of "access consciousness" (A-C) with which it is often contrasted refers to phenomena that can be described and explained within a future scientific theory, and (b) explaining why the "hard problem" is a bogus problem, because of its dependence on the P-C concept. It is compared with another bogus problem, "the 'hard' problem of spatial identity", introduced as part of a tutorial on semantically flawed concepts. Different types of semantic flaw and conceptual confusion not normally studied outside analytical philosophy are distinguished. The semantic flaws of the "zombie" argument, closely allied with the P-C concept, are also explained. These topics are related both to the evolution of human and animal minds and brains and to requirements for human-like robots. The diversity of the phenomena related to the concept "consciousness" as ordinarily used makes it a polymorphic concept, partly analogous to concepts like "efficient", "sensitive", and "impediment", all of which need extra information to be provided before they can be applied to anything, and then the criteria of applicability differ. As a result there cannot be one explanation of consciousness, one set of neural associates of consciousness, one explanation for the evolution of consciousness, nor one machine model of consciousness. We need many of each. I present a way of making progress based on what McCarthy called "the designer stance", using facts about running virtual machines, without which current computers obviously could not work. I suggest the same is true of biological minds, because biological evolution long ago "discovered" a need for something like virtual machinery for self-monitoring and self-extending information-processing systems, and produced far more sophisticated versions than human engineers have so far achieved.

  • A great deal of effort has been, and continues to be, devoted to developing consciousness artificially, and yet a similar amount of effort has gone in to demonstrating the infeasibility of the whole enterprise. My concern in this paper is to steer some navigable channel between the two positions, laying out the necessary pre-conditions for consciousness in an artificial system, and concentrating on what needs to hold for the system to perform as a human being or other phenomenally conscious agent in an intersubjectively-demanding social and moral environment. By adopting a thick notion of embodiment – one that is bound up with the concepts of the lived body and autopoiesis [Maturana & Varela, 1987; Varela et al., 1991; and Ziemke 2003, 2007a & 2007b] – I will argue that machine phenomenology is only possible within an embodied distributed system that possesses a richly affective musculature and a nervous system such that it can, through action and repetition, develop its tactile-kinaesthetic memory, individual kinaesthetic melodies pertaining to habitual practices, and an anticipatory enactive kinaesthetic imagination. Without these capacities the system would remain unconscious, unaware of itself embodied within a world. Finally, and following on from Damasio’s [1991, 1994, 1999, & 2003] claims for the necessity of pre-reflective conscious, emotional, bodily responses for the development of an organism’s core and extended consciousness, I will argue that without these capacities any agent would be incapable of developing the sorts of somatic markers or saliency tags that enable affective reactions, and which are indispensable for effective decision-making and subsequent survival. My position, as presented here, remains agnostic about whether or not the creation of artificial consciousness is an attainable goal.

  • Machine (artificial) consciousness can be interpreted in both strong and weak forms, as an instantiation or as a simulation. Here, I argue in favor of weak artificial consciousness, proposing that synthetic models of neural mechanisms potentially underlying consciousness can shed new light on how these mechanisms give rise to the phenomena they do. The approach I advocate involves using synthetic models to develop "explanatory correlates" that can causally account for deep, structural properties of conscious experience. In contrast, the project of strong artificial consciousness — while not impossible in principle — has yet to be credibly illustrated, and is in any case less likely to deliver advances in our understanding of the biological basis of consciousness. This is because of the inherent circularity involved in using models both as instantiations and as cognitive prostheses for exposing general principles, and because treating models as instantiations can indefinitely postpone comparisons with empirical data.

Last update from database: 3/23/25, 8:36 AM (UTC)