Full bibliography (716 resources)

  • This paper extends three decades of work arguing that researchers who discuss consciousness should not restrict themselves to (adult) human minds, but should study (and attempt to model) many kinds of minds, natural and artificial, thereby contributing to our understanding of the space containing all of them. We need to study what they do or can do, how they can do it, and how the natural ones can be emulated in synthetic minds. That requires: (a) understanding the sets of requirements that are met by different sorts of minds, i.e. the niches that they occupy, (b) understanding the space of possible designs, and (c) understanding the complex and varied relationships between requirements and designs. Attempts to model or explain any particular phenomenon, such as vision, emotion, learning, language use, or consciousness, lead to muddle and confusion unless they are placed in that broader context. A methodology for making progress is summarised, and a novel requirement is proposed for a theory of how human minds work: the theory should support a single generic design for a learning, developing system that, in addition to meeting familiar requirements, is capable of developing different and opposed philosophical viewpoints about consciousness and the so-called hard problem. In other words, we need a common explanation for the mental machinations of mysterians, materialists, functionalists, identity theorists, and those who regard all such theories as attempting to answer incoherent questions. No designs proposed so far come close.

  • This is a reply to commentaries on my article "An Alternative to Working on Machine Consciousness". Reading the commentaries prompted me to write a lengthy background tutorial paper explaining some of the assumptions that were taken for granted in the target article, and pointing out various confusions regarding the notion of consciousness, including many related to its polymorphism. This response to the commentaries builds on that background material, attempting to address the main questions, objections and misunderstandings found in the responses, several of which were a result of my own brevity and lack of clarity in the original target article, now remedied, I hope, by the background article [Sloman, 2010b].

  • This paper is an attempt to summarise and justify critical comments I have been making over several decades about research on consciousness by philosophers, scientists and engineers. This includes (a) explaining why the concept of "phenomenal consciousness" (P-C), in the sense defined by Ned Block, is semantically flawed and unsuitable as a target for scientific research or machine modelling, whereas something like the concept of "access consciousness" (A-C) with which it is often contrasted refers to phenomena that can be described and explained within a future scientific theory, and (b) explaining why the "hard problem" is a bogus problem, because of its dependence on the P-C concept. It is compared with another bogus problem, "the 'hard' problem of spatial identity", introduced as part of a tutorial on semantically flawed concepts. Different types of semantic flaw and conceptual confusion not normally studied outside analytical philosophy are distinguished. The semantic flaws of the "zombie" argument, closely allied with the P-C concept, are also explained. These topics are related both to the evolution of human and animal minds and brains and to requirements for human-like robots. The diversity of the phenomena related to the concept "consciousness" as ordinarily used makes it a polymorphic concept, partly analogous to concepts like "efficient", "sensitive", and "impediment", all of which need extra information to be provided before they can be applied to anything, and then the criteria of applicability differ. As a result there cannot be one explanation of consciousness, one set of neural associates of consciousness, one explanation for the evolution of consciousness, nor one machine model of consciousness. We need many of each. I present a way of making progress based on what McCarthy called "the designer stance", using facts about running virtual machines, without which current computers obviously could not work. I suggest the same is true of biological minds, because biological evolution long ago "discovered" a need for something like virtual machinery for self-monitoring and self-extending information-processing systems, and produced far more sophisticated versions than human engineers have so far achieved.

  • A great deal of effort has been, and continues to be, devoted to developing consciousness artificially, and yet a similar amount of effort has gone into demonstrating the infeasibility of the whole enterprise. My concern in this paper is to steer a navigable channel between the two positions, laying out the necessary pre-conditions for consciousness in an artificial system, and concentrating on what needs to hold for the system to perform as a human being or other phenomenally conscious agent in an intersubjectively-demanding social and moral environment. By adopting a thick notion of embodiment – one that is bound up with the concepts of the lived body and autopoiesis [Maturana & Varela, 1987; Varela et al., 1991; Ziemke, 2003, 2007a & 2007b] – I will argue that machine phenomenology is only possible within an embodied distributed system that possesses a richly affective musculature and a nervous system such that it can, through action and repetition, develop its tactile-kinaesthetic memory, individual kinaesthetic melodies pertaining to habitual practices, and an anticipatory enactive kinaesthetic imagination. Without these capacities the system would remain unconscious, unaware of itself as embodied within a world. Finally, and following on from Damasio's [1991, 1994, 1999 & 2003] claims for the necessity of pre-reflective conscious, emotional, bodily responses for the development of an organism's core and extended consciousness, I will argue that without these capacities any agent would be incapable of developing the sorts of somatic markers or saliency tags that enable affective reactions, and which are indispensable for effective decision-making and subsequent survival. My position, as presented here, remains agnostic about whether or not the creation of artificial consciousness is an attainable goal.

  • The accurate measurement of the level of consciousness of a creature remains a major scientific challenge; nevertheless, a number of new accounts that attempt to address this problem have recently been proposed. In this paper we analyze the principles of these new measures of consciousness, along with other classical approaches, focusing on their applicability to Machine Consciousness (MC). Furthermore, we propose a set of requirements that we think a suitable measure for MC should satisfy, discussing the associated theoretical and practical issues. Using the proposed requirements as a framework for the design of an integrative measure of consciousness, we explore the possibility of designing such a measure in the context of the current state of the art in consciousness studies.

  • The academic journey to a widely acknowledged Machine Consciousness is anticipated to be an emotional one, both in terms of the active debate provoked by the subject and in terms of a hypothesized need to encapsulate an analogue of emotions in an artificial system in order to progress towards machine consciousness. This paper considers the inspiration that concepts related to emotion may contribute to cognitive systems approaching conscious-like behavior. Specifically, emotions can set goals (including balancing exploration against exploitation), facilitate action in unknown domains, and modify existing behaviors; these roles are explored in cognitive robotics experiments.

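To make the explore-versus-exploit role concrete, here is a minimal illustrative sketch, not taken from the paper above: an epsilon-greedy bandit agent whose exploration rate is modulated by a crude "arousal" signal that rises when recent rewards are surprising and decays otherwise. The class name, the `arousal` signal, and all constants are my own hypothetical choices.

```python
import random

class EmotiveBandit:
    """Epsilon-greedy bandit whose exploration rate is scaled by a
    crude 'arousal' signal: surprising rewards raise arousal (more
    exploration); predictable rewards let it decay (more exploitation).
    Purely illustrative; not the architecture of the cited paper."""

    def __init__(self, n_arms, base_eps=0.05, arousal_gain=0.5):
        self.values = [0.0] * n_arms   # running reward estimates
        self.counts = [0] * n_arms
        self.arousal = 1.0             # starts high: unknown domain
        self.base_eps = base_eps
        self.arousal_gain = arousal_gain

    def choose(self):
        eps = min(1.0, self.base_eps + self.arousal_gain * self.arousal)
        if random.random() < eps:      # "emotionally" driven exploration
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        prediction_error = reward - self.values[arm]
        self.values[arm] += prediction_error / self.counts[arm]
        # arousal tracks recent surprise (absolute prediction error)
        self.arousal = 0.9 * self.arousal + 0.1 * abs(prediction_error)

# Usage: a two-armed bandit where arm 1 pays off more often.
agent = EmotiveBandit(n_arms=2)
for _ in range(500):
    arm = agent.choose()
    reward = 1.0 if random.random() < (0.3, 0.7)[arm] else 0.0
    agent.update(arm, reward)
print(agent.values, agent.arousal)
```

The design choice mirrors the abstract's claim: the affect-like signal is not decoration but directly sets a behavioral policy parameter, pushing the agent to explore in unknown domains and to exploit once its predictions stabilize.
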
  • The question about the potential for consciousness of artificial systems has often been addressed using thought experiments, which are often problematic in the philosophy of mind. A more promising approach is to use real experiments to gather data about the correlates of consciousness in humans, and to develop these data into theories that make predictions about human and artificial consciousness. A key issue with an experimental approach is that consciousness can only be measured through behavior, which places fundamental limits on our ability to identify the correlates of consciousness. This paper formalizes these limits as a distinction between type I and type II potential correlates of consciousness (PCCs). Since it is not possible to decide empirically whether type I PCCs are necessary for consciousness, it is indeterminable whether a machine that lacks neurons or hemoglobin, for example, is potentially conscious. A number of responses to this problem have been put forward, including suspension of judgment, liberal and conservative attribution of the potential for consciousness, and a psychometric scale that models our judgment about the relationship between type I PCCs and consciousness.

  • New product and system opportunities are expected to arise when the next step in information technology takes place. Existing Artificial Intelligence is based on preprogrammed algorithms that operate in a mechanistic way in the computer; the computer and the program do not understand what is being processed. Without the consideration of meaning, no understanding can take place. This lack of understanding is seen as the major shortcoming of Artificial Intelligence, one that prevents it from achieving its original goal: thinking machines with full human-like cognition and intelligence. The emerging technology of Machine Consciousness is expected to remedy this shortcoming. Machine Consciousness technology is expected to create new opportunities in robotics, information technology gadgets and general information processing, calling for machine understanding of auditory, visual and linguistic information.

  • It is argued that qualia are the primary way in which sensory information manifests itself in the mind. Qualia are not seen as properties of the physical world, ready to be observed; instead, it is argued that they are the way in which the sensory system's response to the sensed stimuli manifests itself inside the system. Systems that have qualia have direct and transparent access to this response. It is argued that even though qualia are produced inside the head, they appear to be outside because this appearance is consistent with our motions, small and large, in the world. To be conscious in the way that we experience it is to have qualia. True conscious machines must have qualia, but the qualities of machine qualia need not be similar to the qualities of human qualia.

  • The most cursory examination of the history of artificial intelligence highlights numerous egregious claims of its researchers, especially in relation to a populist form of ‘strong’ computationalism which holds that any suitably programmed computer instantiates genuine conscious mental states purely in virtue of carrying out a specific series of computations. The argument presented herein is a simple development of that originally presented in Putnam’s 1988 monograph Representation & Reality (Bradford Books, Cambridge), which, if correct, has important implications for Turing machine functionalism and the prospect of ‘conscious’ machines. In the paper, instead of seeking to develop Putnam’s claim that “everything implements every finite-state automaton”, I will try to establish the weaker result that “everything implements the specific machine Q on a particular input set (x)”. Then, equating Q(x) to any putative AI program, I will show that conceding the ‘strong AI’ thesis for Q (crediting it with mental states and consciousness) opens the door to a vicious form of panpsychism whereby all open systems (e.g. grass, rocks, etc.) must instantiate conscious experience, and hence that disembodied minds lurk everywhere.

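For readers unfamiliar with the Putnam-style construction behind this argument, the following is a minimal illustrative sketch (the function name `putnam_mapping` and the toy data are my own, not code from the paper): given any trace of distinct physical states over an interval, one can always build a realization map under which that trace "implements" an arbitrary run of a finite-state machine, simply by grouping the physical states that co-occur with each machine state.

```python
from collections import defaultdict

def putnam_mapping(physical_trace, fsa_run):
    """Map each physical state to the FSA state occupied at the same
    time step. Each FSA state is then 'realized' by the set
    (disjunction) of physical states mapped onto it, so the physical
    transitions mirror the FSA transitions under this map."""
    assert len(physical_trace) == len(fsa_run)
    realization = defaultdict(set)
    for p, q in zip(physical_trace, fsa_run):
        realization[q].add(p)
    return dict(realization)

# Any sequence of distinct states will do -- e.g. microstates of a rock.
rock_trace = ["s0", "s1", "s2", "s3"]
run_of_Q   = ["A", "B", "A", "C"]   # a run of some machine Q on input x
print(putnam_mapping(rock_trace, run_of_Q))
# e.g. {'A': {'s0', 's2'}, 'B': {'s1'}, 'C': {'s3'}}
```

The triviality of constructing such a map for any open system with enough distinct states is precisely what drives the panpsychism worry sketched in the abstract above.
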
  • The main sources of inspiration for the design of more engaging synthetic characters are existing psychological models of human cognition. Usually, these models, and the associated Artificial Intelligence (AI) techniques, are based on partial aspects of the real complex systems involved in the generation of human-like behavior. Emotions, planning, learning, user modeling, set shifting, and attention mechanisms are some remarkable examples of features typically considered in isolation within classical AI control models. Artificial cognitive architectures aim at integrating many of these aspects into effective control systems. However, the design of this sort of architecture is not straightforward. In this paper, we argue that current research efforts in the young field of Machine Consciousness (MC) could help tackle this complexity and provide a useful framework for the design of more appealing synthetic characters. This hypothesis is illustrated with the application of a novel consciousness-based cognitive architecture to the development of a First Person Shooter video game character.

  • This paper critically tracks the development of the machine consciousness paradigm from the incredulity of the 1990s, through the structuring that occurred at the turn of this century, to the consolidation of the present time, which forms the basis for conjecture about what might happen in the future. The underlying question is how this development may have changed our understanding of consciousness, and whether an artificial version of the concept contributes to the improvement of computational machinery and robots. The paper includes some suggestions for research that might be profitable and others that may not be.

  • From the point of view of Cognitive Informatics, consciousness can be considered a grand integration of a number of cognitive processes. Intuitive definitions of consciousness generally involve perception, emotions, attention, self-recognition, theory of mind, volition, etc. Due to this compositional definition of the term, it is usually difficult to define both what exactly a conscious being is and how consciousness could be implemented in artificial machines. When we look at the most evolved biological examples of conscious beings, such as great apes and humans, the vast complexity of the observed cognitive interactions, in conjunction with the lack of a comprehensive understanding of low-level neural mechanisms, makes the reverse-engineering task virtually intractable. With the aim of effectively addressing the problem of modeling consciousness at a cognitive level, in this work we propose a concrete developmental path in which key stages in the progressive process of building conscious machines are identified and characterized. Furthermore, a method for calculating a quantitative measure of artificial consciousness is presented. The application of the proposed framework is illustrated with a comparative study of different software agents designed to compete in a first-person shooter video game.

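The abstract above does not reproduce the measure itself, so the following is purely a hypothetical sketch of what a checklist-style quantitative score over a developmental path could look like. The stage names, weights, and the function `consciousness_score` are all my own inventions, not the paper's method; the only idea carried over is that later stages presuppose earlier ones.

```python
# Hypothetical checklist-style score: each developmental stage carries
# a weight, and a stage only counts if all earlier stages are passed
# (mirroring the idea of a progressive developmental path).
STAGES = [
    ("sensorimotor control", 1.0),
    ("attention",            2.0),
    ("emotion-like states",  3.0),
    ("self-recognition",     4.0),
    ("theory of mind",       5.0),
]

def consciousness_score(passed: set[str]) -> float:
    """Sum stage weights up to the first stage the agent fails,
    normalized to [0, 1]."""
    score = 0.0
    for name, weight in STAGES:
        if name not in passed:
            break                  # later stages presuppose earlier ones
        score += weight
    return score / sum(w for _, w in STAGES)

# Comparing two hypothetical game bots:
print(consciousness_score({"sensorimotor control", "attention"}))       # 0.2
print(consciousness_score({"sensorimotor control", "theory of mind"}))  # ~0.067
```

A cumulative scheme of this kind makes comparative studies (e.g. ranking competing game bots) straightforward, since any two agents receive comparable scalar scores.
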
  • Machines can be conscious: that is the claim adopted here, upon examination of its different versions. We distinguish three kinds of consciousness: functional, phenomenal and hard consciousness (f-, p- and h-consciousness). Robots are functionally conscious already. There is also a clear project in AI on how to make computers phenomenally conscious, though criteria differ. Discussion of p-consciousness is clouded by the lack of clarity about its two versions: (1) first-person functional elements (here, p-consciousness), and (2) non-functional elements (h-consciousness). I argue that: (1) there are sufficient reasons to adopt h-consciousness and move forward with discussion of its practical applications; this does not have anti-naturalistic implications. (2) A naturalistic account of h-consciousness should be expected in principle; in neuroscience we are some way towards formulating such an account. (3) Detailed analysis of the notion of consciousness is needed to clearly distinguish p- and h-consciousness; this refers to the notion of a subject that is not an object, and to the complementarity of the subjective and objective perspectives. (4) If we can understand the exact mechanism that produces h-consciousness, we should be able to engineer it. (5) H-consciousness is probably not a computational process (it is more like a liver function). Machines can, in principle, be functionally, phenomenally and h-conscious; all those processes are naturalistic. This is the engineering thesis on machine consciousness, formulated within non-reductive naturalism.

  • Machine (artificial) consciousness can be interpreted in both strong and weak forms, as an instantiation or as a simulation. Here, I argue in favor of weak artificial consciousness, proposing that synthetic models of neural mechanisms potentially underlying consciousness can shed new light on how these mechanisms give rise to the phenomena they do. The approach I advocate involves using synthetic models to develop "explanatory correlates" that can causally account for deep, structural properties of conscious experience. In contrast, the project of strong artificial consciousness — while not impossible in principle — has yet to be credibly illustrated, and is in any case less likely to deliver advances in our understanding of the biological basis of consciousness. This is because of the inherent circularity involved in using models both as instantiations and as cognitive prostheses for exposing general principles, and because treating models as instantiations can indefinitely postpone comparisons with empirical data.

Last update from database: 5/15/26, 1:00 AM (UTC)