The question of whether artificial intelligence (AI) can be considered conscious and therefore should be evaluated through a moral lens has surfaced in recent years. In this paper, we argue that whether AI is conscious is less of a concern than the fact that AI can be considered conscious by users during human-AI interaction, because this ascription of consciousness can lead to carry-over effects on human-human interaction. When AI is viewed as conscious like a human, how people treat AI appears to carry over into how they treat other people, because it activates schemas congruent with those activated during interactions with humans. In light of this potential, we might consider regulating how we treat AI, or how we build AI to evoke certain kinds of treatment from users, but not because AI is inherently sentient. This argument focuses on humanlike, social actor AI such as chatbots, digital voice assistants, and social robots. In the first part of the paper, we provide evidence for carry-over effects between perceptions of AI consciousness and behavior toward humans through literature on human-computer interaction, human-AI interaction, and the psychology of artificial agents. In the second part of the paper, we detail how the mechanism of schema activation can allow us to test consciousness perception as a driver of carry-over effects between human-AI interaction and human-human interaction. In essence, perceiving AI as conscious like a human, thereby activating congruent mind schemas during interaction, is a driver for behaviors and perceptions of AI that can carry over into how we treat humans. Therefore, the fact that people can ascribe humanlike consciousness to AI is worth considering, and moral protection for AI is also worth considering, regardless of AI’s inherent conscious or moral status.
-
Butlin et al. (2023) propose a rubric for evaluating AI systems based on indicator properties derived from existing theories of consciousness, suggesting that while current AIs do not possess consciousness, these indicators are pivotal for future developments towards artificial consciousness. The current paper critiques the approach of Butlin et al., arguing that the complexity of consciousness, characterized by subjective experience, poses significant challenges for its operationalization and measurement, thus complicating its replication in AI. The commentary further explores the limitations of current methodologies in artificial consciousness research, pointing to the necessity of out-of-the-box thinking and the integration of individual-differences research in cognitive psychology, particularly in the areas of attention, cognitive control, autobiographical memory, and Theory of Mind (ToM), to advance the understanding and development of artificial consciousness.
-
This chapter critically questions the claim that it would be possible to emulate human consciousness and consciousness-dependent activity with Artificial Intelligence to create conscious artificial systems. The analysis is based on neurophysiological research and theory. In-depth scrutiny of the field, and of the prospects for converting neuroscience research into the type of algorithmic programs utilized in computer-based AI systems to create artificially conscious machines, leads to the conclusion that such a conversion is unlikely to ever be possible because of the complexity of unconscious and conscious brain processing and their interaction. It is through the latter that the brain opens the doors to consciousness, a property of the human mind that no other living species has developed, for reasons that are made clear in this chapter. As a consequence, many of the projected goals of AI will remain forever unrealizable. Although this work does not directly examine the question within a philosophy-of-mind framework by, for example, discussing why identifying consciousness with the activity of electronic circuits is first and foremost a category mistake in terms of scientific reasoning, the approach offered in the chapter is complementary to this standpoint and illustrates various aspects of the problem under a monist from-brain-to-mind premise.
-
While consciousness has historically been a heavily debated topic, awareness has had less success in attracting scholarly interest. However, more and more researchers are becoming interested in answering questions about what awareness is and how it can be artificially generated. The landscape is rapidly evolving, with multiple voices and interpretations of the concept being conceived and techniques being developed. The goal of this paper is to summarize and discuss those voices connected with projects funded by the EIC Pathfinder Challenge “Awareness Inside” call within Horizon Europe, designed specifically to foster research on natural and synthetic awareness. In this perspective, we dedicate special attention to the challenges and promises of applying synthetic awareness in robotics, as the development of mature techniques in this new field is expected to have a special impact on generating more capable and trustworthy embodied systems.
-
Creating artificial general intelligence is the solution most often in the spotlight. It is also linked with the possibility, or fear, of machines gaining consciousness. Alternatively, developing domain-specific artificial intelligence is more reliable, energy-efficient, and ethically tractable, and raises mainly a problem of effective coordination between different systems and humans. Herein, it is argued that such coordination will not require machines to be conscious and that simpler ways of sharing awareness are sufficient.
-
The prospect of artificial consciousness raises theoretical, technical and ethical challenges which converge on the core issue of how to eventually identify and characterize it. In order to provide an answer to this question, I propose to start from a theoretical reflection about the meaning and main characteristics of consciousness. On the basis of this conceptual clarification, it is then possible to think about relevant empirical indicators (i.e. features that facilitate the attribution of consciousness to the system considered) and identify key ethical implications that arise. In this chapter, I further elaborate previous work on the topic, presenting a list of candidate indicators of consciousness in artificial systems and introducing an ethical reflection about their potential implications. Specifically, I focus on two main ethical issues: the conditions for considering an artificial system as a moral subject, and the need for a non-anthropocentric approach in reflecting about the science and the ethics of artificial consciousness.
-
We analyse here the question of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relation to consciousness as a reference model or benchmark. This kind of analysis reveals several structural and functional features of the human brain that appear to be key to reaching human-like complex conscious experience and that current research on Artificial Intelligence (AI) should take into account in its attempt to develop systems capable of human-like conscious processing. We argue that, even if AI is limited in its ability to emulate human consciousness for both intrinsic (i.e., structural and architectural) and extrinsic (i.e., related to the current stage of scientific and technological knowledge) reasons, taking inspiration from those characteristics of the brain that make human-like conscious processing possible and/or modulate it is a potentially promising strategy towards developing conscious AI. Also, it cannot be theoretically excluded that AI research could develop partial or alternative forms of consciousness that are qualitatively different from the human form, and that may be either more or less sophisticated depending on one's perspective. Therefore, we recommend neuroscience-inspired caution in talking about artificial consciousness: since the use of the same word “consciousness” for humans and AI becomes ambiguous and potentially misleading, we propose to specify clearly which level and/or type of consciousness AI research aims to develop, as well as what would be common and what would differ in AI conscious processing compared to human conscious experience.
-
Developments in machine learning and computing power suggest that artificial general intelligence is within reach. This raises the question of artificial consciousness: if a computer were functionally equivalent to a human, able to do all we do, would it experience sights, sounds, and thoughts, as we do when we are conscious? Answering this question in a principled manner can only be done on the basis of a theory of consciousness that is grounded in phenomenology and that states the necessary and sufficient conditions for any system, evolved or engineered, to support subjective experience. Here we employ Integrated Information Theory (IIT), which provides principled tools to determine whether a system is conscious, to what degree, and the content of its experience. We consider pairs of systems constituted of simple Boolean units, one of which, a basic stored-program computer, simulates the other with full functional equivalence. By applying the principles of IIT, we demonstrate (i) that two systems can be functionally equivalent without being phenomenally equivalent, and (ii) that this conclusion is not dependent on the simulated system's function. We further demonstrate that, according to IIT, it is possible for a digital computer to simulate our behavior, possibly even by simulating the neurons in our brain, without replicating our experience. This contrasts sharply with computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness.
-
Since the term ‘Artificial Intelligence’ was coined, the field has frequently sought to emulate human mental faculties. Despite diverging viewpoints regarding the feasibility of achieving human-like cognition in machines, the very use of the word intelligence for complex computer systems evokes human consciousness. Likewise, there have been attempts to understand the human mind in terms of computers, exemplified by the computational theory of mind. By contrast, my article underscores the categorical difference between the mind and machines. Partly building upon arguments by David Gelernter and Bert Olivier, I focus on literary examples spanning from Shakespeare to T.S. Eliot that accentuate subjective experience, the intricate relationship between body and mind, and the anticipation of death as human characteristics beyond the reach of computational systems.
-
Isaac Asimov, the well-known author of novels featuring positronic robots, also penned tales featuring Multivac, an immense electronic brain that grappled with the weighty and intricate decisions impacting all of humankind. In his 1958 short story All the Troubles of the World, Asimov introduced us to a Multivac tormented by the consciousness of its own thought processes to the extent that it desired its own demise. Through this narrative, Asimov delved into the existential dimensions of artificial consciousness: he depicted Multivac’s adeptness in planning, reflexivity, and intentionality aimed at self-termination, qualities that, in a tragic parallel, are also observed in certain cases of human suicidal behavior. Starting from these suggestions, we propose an existentialist and self-reflective criterion for consciousness, intertwining phenomenal consciousness with an entity’s capability to conceive thoughts about its own mortality. We argue that, according to certain psychological literature and the existentialist essays of Emil M. Cioran, artificial systems might be deemed conscious if they possess death-thought accessibility: the capacity, akin to that of humans, to conceive thoughts about their own mortality and intentionally conceive of self-termination. Naturally, this criterion shares the inherent challenges associated with defining intentionality, reflexivity, and, ultimately, the very concept of consciousness as it pertains to humans.
-
The hypothesis of conscious machines has been debated since the invention of the notion of artificial intelligence, powered by the assumption that the computational intelligence achieved by a system is the cause of the emergence of phenomenal consciousness in that system, whether as an epiphenomenon or as a consequence of the behavioral or internal complexity of the system surpassing some threshold. As a consequence, a huge amount of literature exploring the possibility of machine consciousness, and how to implement it on a computer, has been published. Moreover, folk psychology and transhumanist literature have fed this hypothesis through the popularity of science fiction, where intelligent robots are usually anthropomorphized and hence granted phenomenal consciousness. In this work, however, we argue that this literature lacks scientific rigour, since the opposite hypothesis is impossible to falsify, and we present a list of arguments showing how every approach published in the machine consciousness literature depends on philosophical assumptions that cannot be proven by the scientific method. Concretely, we also show that phenomenal consciousness is not computable, independently of the complexity of the algorithm or model, cannot be objectively measured or quantitatively defined, and is essentially a phenomenon that is subjective and internal to the observer. Given all those arguments, we end the work by arguing why the idea of conscious machines is nowadays a myth of transhumanism and science fiction culture.
-
Biological ‘consciousness’ is a well-documented feature in diverse taxa within the animal kingdom. The existence of non-animal biological consciousness is debated, as is the possibility of artificial consciousness. Overall, our knowledge of historic Homo sapiens consciousness (H-consciousness) is by far the most extensive. This chapter specifies the content of consciousness studies, reviews selected theories of human consciousness, and proposes a novel theory of both biological and artificial consciousness that is inspired by the notion of evolutionary transitions and extends the theory of noémon systems developed by Gelepithis (2024a; 2024b, chapter 2).
-
The transformative achievements of deep learning have led several scholars to raise the question of whether artificial intelligence (AI) can reach and then surpass the level of human thought. Here, after addressing methodological problems regarding the possible answer to this question, it is argued that the definition of intelligence proposed by proponents of AI, “the ability to accomplish complex goals,” is appropriate for machines but does not capture the essence of human thought. After discussing the differences between machines and the brain regarding understanding, as well as the importance of subjective experiences, it is emphasized that most proponents of the eventual superiority of AI ignore the influence of the body proper on the brain, the lateralization of the brain, and the vital role of the glial cells. By appealing to Gödel’s incompleteness theorem and to Turing’s analogous result regarding computations, it is noted that consciousness is much richer than both mathematics and computations. Finally, and perhaps most importantly, it is stressed that artificial algorithms attempt to mimic only the conscious function of parts of the cerebral cortex, ignoring the fact that not only is every conscious experience preceded by an unconscious process, but also that the passage from the unconscious to consciousness is accompanied by a loss of information.
-
The rise of Artificial Intelligence (AI) has produced prophets and prophecies announcing that the age of artificial consciousness is near. Not only does the mere idea that any machine could ever possess the full potential of human consciousness suggest that AI could replace the role of God in the future, it also puts into question the fundamental human right to freedom and dignity. Yet, in the light of all we currently know about brain evolution and the adaptive neural dynamics underlying human consciousness, the idea of an artificial consciousness appears misconceived. This article highlights some of the major reasons why the prophecy of a successful emulation of human consciousness by AI ignores most of the data about adaptive processes of learning and memory as the developmental origins of consciousness. The analysis provided leads to the conclusion that human consciousness is epigenetically determined as a unique property of the mind, shaped by experience, capable of representing real and non-real world states and creatively projecting these representations into the future. The development of the adaptive brain circuitry that enables this expression of consciousness is highly context-dependent, shaped by multiple self-organizing functional interactions at different levels of integration displaying a from-local-to-global functional organization. Human consciousness is subject to changes in time that are essentially unpredictable. If cracking the computational code to human consciousness were possible, the resulting algorithms would have to be able to generate temporal activity patterns simulating long-distance signal reverberation in the brain, and the de-correlation of spatial signal contents from their temporal signatures in the brain. In the light of scientific evidence for complex interactions between implicit (non-conscious) and explicit (conscious) representations in learning, memory, and the construction of conscious representations, such a code would have to be capable of making all implicit processing explicit. Algorithms would have to be capable of a progressive and less and less arbitrary selection of temporal activity patterns in a continuously developing neural network structure that is functionally identical to that of the human brain, from synapses to higher cognitive functional integration. The code would have to possess the self-organizing capacities of the brain that generate the temporal signatures of a conscious experience. The consolidation or extinction of these temporal brain signatures is driven by external event probabilities according to the principles of Hebbian learning. Human consciousness is constantly fed by such learning, capable of generating stable representations despite an incommensurable amount of variability in input data, across time and across individuals, for a life-long integration of experience data. Artificial consciousness would require probabilistic adaptive computations capable of emulating all the dynamics of individual human learning, memory and experience. No AI is likely to ever have such potential.
-
If a machine attains consciousness, how could we find out? In this paper, I make three related claims regarding positive tests of machine consciousness. All three claims center on the idea that an AI can be constructed “ad hoc”, that is, with the purpose of satisfying a particular test of consciousness while clearly not being conscious. First, a proposed test of machine consciousness can be legitimate even if AI can be constructed ad hoc specifically to pass this test. This is underscored by the observation that many, if not all, putative tests of machine consciousness can be passed by non-conscious machines via ad hoc means. Second, we can identify ad hoc AI by taking inspiration from the notion of an ad hoc hypothesis in philosophy of science. Third, given the first and second claims, the most reliable tests of animal consciousness turn out to be valid and useful positive tests of machine consciousness as well. If a non-ad hoc AI exhibits clusters of cognitive capacities that are facilitated by consciousness in humans and that can be selectively switched off by masking, and if it reproduces human behavior in suitably designed double dissociation tasks, we should treat the AI as conscious.
-
This article addresses the background and nature of the recent success of Large Language Models (LLMs), tracing the history of their fundamental concepts from Leibniz and his calculus ratiocinator to Turing’s computational models of learning, and ultimately to the current development of GPTs. As Kahneman’s “System 1”-type processes, GPTs lack mechanisms that would render them conscious, but they nonetheless demonstrate a certain level of intelligence and the capacity to represent and process knowledge. This is achieved by processing vast corpora of human-created knowledge, which, for its initial production, required human consciousness, but can now be collected, compressed, and processed automatically.
-
Consciousness and intelligence are properties that can be misunderstood as necessarily dependent. The term artificial intelligence, and the kinds of problems it has managed to solve in recent years, have been presented as an argument that machines experience some sort of consciousness. Following Russell’s analogy, if a machine can do what a conscious human being does, the likelihood that the machine is conscious increases. However, the social implications of this analogy are catastrophic. Concretely, if rights are granted to entities that can solve the kinds of problems that a neurotypical person can, does a machine potentially have more rights than a person with a disability? For example, autism spectrum disorder can make a person unable to solve the kinds of problems that a machine solves. We believe the obvious answer is no, as problem-solving does not imply consciousness. Consequently, we argue in this paper that phenomenal consciousness, at least, cannot be modeled by computational intelligence and that machines do not possess phenomenal consciousness, although they can potentially develop a higher computational intelligence than human beings. To do so, we formulate an objective measure of computational intelligence and study how it presents in human beings, animals, and machines. Analogously, we study phenomenal consciousness as a dichotomous variable and how it is distributed among humans, animals, and machines.
-
This paper explores the cognitive implications of recent advancements in large language models (LLMs), with a specific focus on ChatGPT. We contribute to the ongoing debate about the cognitive significance of current LLMs by drawing an analogy to the Chinese Room Argument, a thought experiment that questions whether machines (computer programs) genuinely understand language. Our argument posits that current LLMs, including ChatGPT, generate human-like responses in a manner akin to the process depicted in the Chinese Room Argument. In both cases, the responses are produced without a deep understanding of the language, thus lacking true signs of consciousness.