Full bibliography (675 resources)
-
In this paper, the author describes a simple yet cognitively powerful architecture of an embodied conscious agent. The architecture incorporates a mechanism for mining, representing, processing, and exploiting semantic knowledge. This mechanism is based on two complementary internal world models which are built automatically. One model (based on artificial mirror neurons) mines and captures the syntax of the recognized part of the environment, while the second (based on neural nets) captures its semantics. Jointly, the models support algorithmic processes underlying phenomena similar in important aspects to higher cognitive functions such as imitation learning and the development of communication, language, thinking, and consciousness.
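As a rough illustration of the two complementary world models sketched in this abstract (not the author's implementation; the class names, the dictionary-based learning rules, and the toy scenario are all assumptions), one could picture an agent that learns a sensorimotor "syntax" from observed percept-action pairs and an associative "semantics" mapping percepts to outcomes:

```python
# Illustrative toy only: two complementary internal world models, loosely
# following the abstract above. All names and data structures are assumptions.
from collections import defaultdict

class SyntaxModel:
    """Mirror-neuron-like component: records which action tends to follow a percept."""
    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))

    def observe(self, percept, action):
        self.transitions[percept][action] += 1

    def predict_action(self, percept):
        actions = self.transitions.get(percept)
        return max(actions, key=actions.get) if actions else None

class SemanticsModel:
    """Associative component: maps percepts to learned outcomes ("meanings")."""
    def __init__(self):
        self.meaning = {}

    def learn(self, percept, outcome):
        self.meaning[percept] = outcome

    def interpret(self, percept):
        return self.meaning.get(percept, "unknown")

class EmbodiedAgent:
    def __init__(self):
        self.syntax = SyntaxModel()
        self.semantics = SemanticsModel()

    def imitate(self, demonstration):
        # demonstration: iterable of (percept, action, outcome) triples
        for percept, action, outcome in demonstration:
            self.syntax.observe(percept, action)
            self.semantics.learn(percept, outcome)

    def act(self, percept):
        # Consult both models: what to do, and what it means.
        return self.syntax.predict_action(percept), self.semantics.interpret(percept)

agent = EmbodiedAgent()
agent.imitate([("ball-left", "turn-left", "ball-centered")])
print(agent.act("ball-left"))  # ('turn-left', 'ball-centered')
```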
-
To understand the mind and its place in Nature is one of the great intellectual challenges of our time, a challenge that is both scientific and philosophical. How does cognition influence an animal's behaviour? What are its neural underpinnings? How is the inner life of a human being constituted? What are the neural underpinnings of the conscious condition? This book approaches each of these questions from a scientific standpoint. But it contends that, before we can make progress on them, we have to give up the habit of thinking metaphysically, a habit that creates a fog of philosophical confusion. From this post-reflective point of view, the book argues for an intimate relationship between cognition, sensorimotor embodiment, and the integrative character of the conscious condition. Drawing on insights from psychology, neuroscience, and dynamical systems, it proposes an empirical theory of this three-way relationship whose principles, not being tied to the contingencies of biology or physics, are applicable to the whole space of possible minds in which humans and other animals are included. The book provides a joined-up theory of consciousness.
-
Sloman criticizes all existing attempts to define machine consciousness for being overly one-sided. He argues that such a definition is not only unattainable but also unnecessary. The critique is well taken in part; yet, whatever his intended aims, by not acknowledging the non-reductive aspects of consciousness, Sloman in fact sides with the reductivist view.
-
After discussing a possible contradiction in Sloman's very challenging intervention, I stress the need not to identify "consciousness" with phenomenal consciousness and with the "qualia" problem. I claim that it is necessary to distinguish different forms and functions of "consciousness" and to explicitly model them, also by exploiting the specific advantage of AI: the ability to carry out experiments that are impossible in nature, by separating what cannot be separated in human behavior and mind. As for phenomenal consciousness, one should first be able to model what it means to have a "body" and to "feel" it.
-
In the course of seeking an answer to the question "How do you know you are not a zombie?" Floridi (2005) issues an ingenious, philosophically rich challenge to artificial intelligence (AI) in the form of an extremely demanding version of the so-called knowledge game (or "wise-man puzzle," or "muddy-children puzzle"), one that purportedly ensures that those who pass it are self-conscious. In this article, on behalf of (at least the logic-based variety of) AI, I take up the challenge; that is, I try to show that this challenge can in fact be met by AI in the foreseeable future.
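The knowledge game referred to here builds on the classic wise-man/muddy-children puzzle. A minimal possible-worlds simulation of that classic puzzle (purely illustrative; this is neither Floridi's demanding self-consciousness variant nor Bringsjord's logic-based treatment) shows the round-by-round elimination of worlds that the reasoning turns on:

```python
# Toy epistemic simulation of the classic muddy-children puzzle (illustrative only).
from itertools import product

def muddy_children(actual):
    """actual: tuple of bools, True = muddy forehead (at least one must be True).
    Returns the round in which every muddy child first knows its own state."""
    n = len(actual)
    # Common knowledge after the opening announcement: at least one child is muddy.
    worlds = {w for w in product((False, True), repeat=n) if any(w)}

    def knows(world, i, current):
        # Child i cannot see its own forehead: candidate worlds agree with
        # `world` everywhere except possibly at position i.
        candidates = {w for w in current
                      if all(w[j] == world[j] for j in range(n) if j != i)}
        return len({w[i] for w in candidates}) == 1

    for rnd in range(1, n + 1):
        if all(knows(actual, i, worlds) for i in range(n) if actual[i]):
            return rnd
        # Public announcement "nobody stepped forward": keep only worlds in
        # which no child would already have known its own state.
        worlds = {w for w in worlds
                  if not any(knows(w, i, worlds) for i in range(n))}
    return None

print(muddy_children((True, True, False)))  # -> 2: two muddy children know in round 2
```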
-
This work describes the application of the Baars-Franklin Architecture (BFA), an artificial consciousness approach, to synthesize a mind (a control system) for an artificial creature. First, we introduce the theoretical foundations of this approach to the development of a conscious agent. Then we explain the architecture of our agent, and finally we discuss the results and first impressions of this approach.
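The Baars-Franklin Architecture builds on Baars' Global Workspace Theory, in which specialized processes compete for access to a workspace whose winning content is then broadcast to the whole system. Below is a toy sketch of that competition-and-broadcast cycle (not the BFA implementation; all class, function, and percept names are assumptions made for illustration):

```python
# Toy Global-Workspace-style cycle: compete for attention, then broadcast.
from dataclasses import dataclass, field

@dataclass
class Coalition:
    name: str
    activation: float            # how strongly this content bids for attention
    payload: dict = field(default_factory=dict)

class GlobalWorkspace:
    def __init__(self, processes):
        self.processes = processes          # callables that react to broadcasts

    def cycle(self, coalitions):
        # Attention: the most active coalition wins access to the workspace.
        winner = max(coalitions, key=lambda c: c.activation)
        # Broadcast: every "unconscious" process receives the conscious content.
        responses = [p(winner) for p in self.processes]
        return winner, responses

def motor_process(broadcast):
    return "turn-away" if broadcast.name == "obstacle-ahead" else "keep-going"

def memory_process(broadcast):
    return f"stored:{broadcast.name}"

ws = GlobalWorkspace([motor_process, memory_process])
percepts = [Coalition("obstacle-ahead", 0.9), Coalition("food-smell", 0.4)]
winner, responses = ws.cycle(percepts)
print(winner.name, responses)  # obstacle-ahead ['turn-away', 'stored:obstacle-ahead']
```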
-
The classical paradigm of the neural brain as the seat of human natural intelligence is too restrictive. This paper defends the idea that the neural ectoderm is the actual brain, based on the development of the human embryo. Indeed, the neural ectoderm includes the neural crest, which gives rise to pigment cells in the skin and ganglia of the autonomic nervous system, and the neural tube, which gives rise to the brain, the spinal cord, and motor neurons. So the brain is completely integrated in the ectoderm, and cannot work alone. The paper presents fundamental properties of the brain as follows. Firstly, Paul D. MacLean proposed the triune human brain, which consists of three brains in one, following the evolution of the species: the reptilian complex, the limbic system, and the neo-cortex. Secondly, consciousness and conscious awareness are analysed. Thirdly, the anticipatory unconscious free will and conscious free veto are described in agreement with the experiments of Benjamin Libet. Fourthly, the main section explains the development of the human embryo and shows that the neural ectoderm is the whole neural brain. Fifthly, a conjecture is proposed that the neural brain is completely programmed with scripts written in biological low-level and high-level languages, in a manner similar to the way cells are programmed by the genetic code. Finally, it is concluded that the proposition of the neural ectoderm as the whole neural brain is a breakthrough in the understanding of natural intelligence, and also in the future design of robots with artificial intelligence.
-
This paper extends three decades of work arguing that researchers who discuss consciousness should not restrict themselves only to (adult) human minds, but should study (and attempt to model) many kinds of minds, natural and artificial, thereby contributing to our understanding of the space containing all of them. We need to study what they do or can do, how they can do it, and how the natural ones can be emulated in synthetic minds. That requires: (a) understanding sets of requirements that are met by different sorts of minds, i.e. the niches that they occupy, (b) understanding the space of possible designs, and (c) understanding complex and varied relationships between requirements and designs. Attempts to model or explain any particular phenomenon, such as vision, emotion, learning, language use, or consciousness lead to muddle and confusion unless they are placed in that broader context. A methodology for making progress is summarised and a novel requirement proposed for a theory of how human minds work: the theory should support a single generic design for a learning, developing system that, in addition to meeting familiar requirements, should be capable of developing different and opposed philosophical viewpoints about consciousness, and the so-called hard problem. In other words, we need a common explanation for the mental machinations of mysterians, materialists, functionalists, identity theorists, and those who regard all such theories as attempting to answer incoherent questions. No designs proposed so far come close.
-
This is a reply to commentaries on my article "An Alternative to Working on Machine Consciousness". Reading the commentaries caused me to write a lengthy background tutorial paper explaining some of the assumptions that were taken for granted in the target article, and pointing out various confusions regarding the notion of consciousness, including many related to its polymorphism. This response to commentaries builds on that background material, attempting to address the main questions, objections and misunderstandings found in the responses, several of which were a result of my own brevity and lack of clarity in the original target article, now remedied, I hope, by the background article [Sloman, 2010b].
-
This paper is an attempt to summarise and justify critical comments I have been making over several decades about research on consciousness by philosophers, scientists and engineers. This includes (a) explaining why the concept of "phenomenal consciousness" (P-C), in the sense defined by Ned Block, is semantically flawed and unsuitable as a target for scientific research or machine modelling, whereas something like the concept of "access consciousness" (A-C), with which it is often contrasted, refers to phenomena that can be described and explained within a future scientific theory, and (b) explaining why the "hard problem" is a bogus problem, because of its dependence on the P-C concept. It is compared with another bogus problem, "the 'hard' problem of spatial identity", introduced as part of a tutorial on semantically flawed concepts. Different types of semantic flaw and conceptual confusion not normally studied outside analytical philosophy are distinguished. The semantic flaws of the "zombie" argument, closely allied with the P-C concept, are also explained. These topics are related both to the evolution of human and animal minds and brains and to requirements for human-like robots. The diversity of the phenomena related to the concept "consciousness" as ordinarily used makes it a polymorphic concept, partly analogous to concepts like "efficient", "sensitive", and "impediment", all of which need extra information to be provided before they can be applied to anything, and then the criteria of applicability differ. As a result there cannot be one explanation of consciousness, one set of neural associates of consciousness, one explanation for the evolution of consciousness, nor one machine model of consciousness. We need many of each. I present a way of making progress based on what McCarthy called "the designer stance", using facts about running virtual machines, without which current computers obviously could not work. I suggest the same is true of biological minds, because biological evolution long ago "discovered" a need for something like virtual machinery for self-monitoring and self-extending information-processing systems, and produced far more sophisticated versions than human engineers have so far achieved.
-
A great deal of effort has been, and continues to be, devoted to developing consciousness artificially, and yet a similar amount of effort has gone into demonstrating the infeasibility of the whole enterprise. My concern in this paper is to steer some navigable channel between the two positions, laying out the necessary pre-conditions for consciousness in an artificial system, and concentrating on what needs to hold for the system to perform as a human being or other phenomenally conscious agent in an intersubjectively-demanding social and moral environment. By adopting a thick notion of embodiment, one that is bound up with the concepts of the lived body and autopoiesis [Maturana & Varela, 1987; Varela et al., 1991; Ziemke, 2003, 2007a, 2007b], I will argue that machine phenomenology is only possible within an embodied distributed system that possesses a richly affective musculature and a nervous system such that it can, through action and repetition, develop its tactile-kinaesthetic memory, individual kinaesthetic melodies pertaining to habitual practices, and an anticipatory enactive kinaesthetic imagination. Without these capacities the system would remain unconscious, unaware of itself as embodied within a world. Finally, and following on from Damasio's [1991, 1994, 1999, 2003] claims for the necessity of pre-reflective conscious, emotional, bodily responses for the development of an organism's core and extended consciousness, I will argue that without these capacities any agent would be incapable of developing the sorts of somatic markers or saliency tags that enable affective reactions, and which are indispensable for effective decision-making and subsequent survival. My position, as presented here, remains agnostic about whether or not the creation of artificial consciousness is an attainable goal.
-
The accurate measurement of the level of consciousness of a creature remains a major scientific challenge; nevertheless, a number of new accounts that attempt to address this problem have recently been proposed. In this paper we analyze the principles of these new measures of consciousness along with other classical approaches, focusing on their applicability to Machine Consciousness (MC). Furthermore, we propose a set of requirements for what we think a suitable measure of MC should be, discussing the associated theoretical and practical issues. Using the proposed requirements as a framework for the design of an integrative measure of consciousness, we explore the possibility of designing such a measure in the context of the current state of the art in consciousness studies.
-
The academic journey to a widely acknowledged Machine Consciousness is anticipated to be an emotional one, both in terms of the active debate provoked by the subject and of a hypothesized need to encapsulate an analogue of emotions in an artificial system in order to progress towards machine consciousness. This paper considers the inspiration that the concepts related to emotion may contribute to cognitive systems when approaching conscious-like behavior. Specifically, emotions can set goals (including balancing exploration against exploitation), facilitate action in unknown domains, and modify existing behaviors; these roles are explored in cognitive robotics experiments.
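One of the roles listed in this abstract, using emotion to balance exploration against exploitation, can be pictured with a toy bandit-style agent whose "frustration" raises its exploration rate. The modulation rule, names, and the simulated environment below are assumptions made for illustration, not the paper's experimental setup:

```python
# Toy emotion-modulated explore/exploit agent (illustrative assumptions only).
import random

class EmotionModulatedBandit:
    def __init__(self, n_actions):
        self.values = [0.0] * n_actions      # running value estimate per action
        self.counts = [0] * n_actions
        self.frustration = 0.0               # crude affect signal in [0, 1]

    def epsilon(self):
        # More frustration -> more exploration; contentment -> exploitation.
        return min(0.9, 0.05 + self.frustration)

    def choose(self):
        if random.random() < self.epsilon():
            return random.randrange(len(self.values))                      # explore
        return max(range(len(self.values)), key=lambda a: self.values[a])  # exploit

    def update(self, action, reward):
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]
        # Poor rewards frustrate, good rewards soothe (kept within [0, 1]).
        delta = 0.1 if reward <= 0 else -0.1
        self.frustration = max(0.0, min(1.0, self.frustration + delta))

agent = EmotionModulatedBandit(n_actions=3)
for _ in range(200):
    a = agent.choose()
    r = random.gauss(0.5 if a == 2 else 0.0, 0.1)   # hypothetical environment
    agent.update(a, r)
print(agent.values)   # action 2 should end up with the highest estimate
```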
-
The question about the potential for consciousness of artificial systems has often been addressed using thought experiments, which are often problematic in the philosophy of mind. A more promising approach is to use real experiments to gather data about the correlates of consciousness in humans, and develop this data into theories that make predictions about human and artificial consciousness. A key issue with an experimental approach is that consciousness can only be measured using behavior, which places fundamental limits on our ability to identify the correlates of consciousness. This paper formalizes these limits as a distinction between type I and type II potential correlates of consciousness (PCCs). Since it is not possible to decide empirically whether type I PCCs are necessary for consciousness, it is indeterminable whether a machine that lacks neurons or hemoglobin, for example, is potentially conscious. A number of responses have been put forward to this problem, including suspension of judgment, liberal and conservative attribution of the potential for consciousness and a psychometric scale that models our judgment about the relationship between type I PCCs and consciousness.
-
New product and system opportunities are expected to arise when the next step in information technology takes place. Existing Artificial Intelligence is based on preprogrammed algorithms that operate in a mechanistic way in the computer. The computer and the program do not understand what is being processed. Without the consideration of meaning, no understanding can take place. This lack of understanding is seen as the major shortcoming of Artificial Intelligence, one that prevents it from achieving its original goal: thinking machines with full human-like cognition and intelligence. The emerging technology of Machine Consciousness is expected to remedy this shortcoming. Machine Consciousness technology is expected to create new opportunities in robotics, information technology gadgets, and general information processing, calling for machine understanding of auditory, visual, and linguistic information.