Full bibliography (675 resources)
-
The dream of making a conscious humanoid – whether as servant, guard, entertainer, or simply as testament to human creativity – has long captivated the human imagination. However, while past attempts were made by magicians and mystics, today scientists and engineers are doing the work to turn myth into reality. This essay introduces the fundamental concepts surrounding human consciousness and machine consciousness and offers a theological contribution. Using the biblical association of the soul with blood, it will be shown that the Bible provides evidence of a scientific claim, while at the same time, science provides evidence of a biblical claim.
-
As artificially intelligent systems become more anthropomorphic and pervasive, and their potential impact on humanity more urgent, discussions about the possibility of machine consciousness have significantly intensified, and it is sometimes seen as 'the holy grail'. Many concerns have been voiced about the ramifications of creating an artificial conscious entity. This is compounded by a marked lack of consensus around what constitutes consciousness and by an absence of a universal set of criteria for determining consciousness. By going into depth on the foundations and characteristics of consciousness, we propose five criteria for determining whether a machine is conscious, which can also be applied more generally to any entity. This paper aims to serve as a primer and stepping stone for researchers of consciousness, be they in philosophy, computer science, medicine, or any other field, to further pursue this holy grail of philosophy, neuroscience and artificial intelligence.
-
Can octopuses feel pain or pleasure? Can we tell if a person unresponsive after severe injury might be suffering? When does a fetus begin having conscious experiences? These questions about the edge of sentience are subject to enormous uncertainty. This book builds a framework to help us reach ethically sound decisions on how to manage the risks.
-
This chapter discusses the relationship between compliance with syntactically defined legislation and consciousness: whether, in order to obey laws, a robot would need to be conscious. This leads to consideration of what Emergent Information Theory can tell us about the possibility of artificial consciousness as such. Various arguments based on similarities and differences between biological and technological physical and informational systems are presented, with the conclusion that direct replication of a human type of consciousness is improbable. However, our understandable tendency to consider our own type of consciousness as uniquely special and valuable is challenged and found to be unfounded. Other high-level emergent phenomena in the information dimensions of artificial systems may, while different, be equally deserving of a comparable status.
-
While consciousness has historically been a heavily debated topic, awareness has had less success in raising the interest of scholars. However, more and more researchers are getting interested in answering questions concerning what awareness is and how it can be artificially generated. The landscape is rapidly evolving, with multiple voices and interpretations of the concept being conceived and techniques being developed. The goal of this paper is to summarize and discuss those voices connected with projects funded by the EIC Pathfinder Challenge “Awareness Inside” call within Horizon Europe, designed specifically for fostering research on natural and synthetic awareness. In this perspective, we dedicate special attention to the challenges and promises of applying synthetic awareness in robotics, as the development of mature techniques in this new field is expected to have a special impact on generating more capable and trustworthy embodied systems.
-
Creating artificial general intelligence is the solution most often in the spotlight. It is also linked with the possibility—or fear—of machines gaining consciousness. Alternatively, developing domain‐specific artificial intelligence is more reliable, energy‐efficient, and ethically tractable, and raises mostly a problem of effective coordination between different systems and humans. Herein, it is argued that such coordination will not require machines to be conscious and that simpler ways of sharing awareness are sufficient.
-
The prospect of artificial consciousness raises theoretical, technical and ethical challenges which converge on the core issue of how to eventually identify and characterize it. In order to provide an answer to this question, I propose to start from a theoretical reflection about the meaning and main characteristics of consciousness. On the basis of this conceptual clarification it is then possible to think about relevant empirical indicators (i.e. features that facilitate the attribution of consciousness to the system considered) and identify key ethical implications that arise. In this chapter, I further elaborate previous work on the topic, presenting a list of candidate indicators of consciousness in artificial systems and introducing an ethical reflection about their potential implications. Specifically, I focus on two main ethical issues: the conditions for considering an artificial system as a moral subject; and the need for a non-anthropocentric approach in reflecting about the science and the ethics of artificial consciousness.
-
We here analyse the question of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relation with consciousness as a reference model or as a benchmark. This kind of analysis reveals several structural and functional features of the human brain that appear to be key for reaching human-like complex conscious experience and that current research on Artificial Intelligence (AI) should take into account in its attempt to develop systems capable of human-like conscious processing. We argue that, even if AI is limited in its ability to emulate human consciousness for both intrinsic (i.e., structural and architectural) and extrinsic (i.e., related to the current stage of scientific and technological knowledge) reasons, taking inspiration from those characteristics of the brain that make human-like conscious processing possible and/or modulate it is a potentially promising strategy towards developing conscious AI. Also, it cannot be theoretically excluded that AI research can develop partial or potentially alternative forms of consciousness that are qualitatively different from the human form, and that may be either more or less sophisticated depending on the perspective. Therefore, we recommend neuroscience-inspired caution in talking about artificial consciousness: since the use of the same word “consciousness” for humans and AI becomes ambiguous and potentially misleading, we propose to clearly specify which level and/or type of consciousness AI research aims to develop, as well as what would be common versus different in AI conscious processing compared to human conscious experience.
-
Developments in machine learning and computing power suggest that artificial general intelligence is within reach. This raises the question of artificial consciousness: if a computer were to be functionally equivalent to a human, being able to do all we do, would it experience sights, sounds, and thoughts, as we do when we are conscious? Answering this question in a principled manner can only be done on the basis of a theory of consciousness that is grounded in phenomenology and that states the necessary and sufficient conditions for any system, evolved or engineered, to support subjective experience. Here we employ Integrated Information Theory (IIT), which provides principled tools to determine whether a system is conscious, to what degree, and the content of its experience. We consider pairs of systems constituted of simple Boolean units, one of which -- a basic stored-program computer -- simulates the other with full functional equivalence. By applying the principles of IIT, we demonstrate that (i) two systems can be functionally equivalent without being phenomenally equivalent, and (ii) that this conclusion is not dependent on the simulated system's function. We further demonstrate that, according to IIT, it is possible for a digital computer to simulate our behavior, possibly even by simulating the neurons in our brain, without replicating our experience. This contrasts sharply with computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness.
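To make the kind of construction described above concrete, here is a toy illustration (not the paper's actual systems, and with gate choices that are purely illustrative assumptions): two small Boolean networks that realise the same input-output function while differing in internal causal structure. IIT's analysis, for instance as implemented in the pyphi package, operates on that internal cause-effect structure, which is exactly what can differ between such functionally equivalent systems.

```python
# Toy sketch: two Boolean systems with identical input-output behaviour
# (functional equivalence) but different internal wiring. The gates below
# are illustrative assumptions, not the systems analysed in the paper.
from itertools import product

def system_a(x1, x2):
    # Direct implementation: a single unit computing XOR of its two inputs.
    return x1 ^ x2

def system_b(x1, x2):
    # Indirect implementation: the same function built from four NAND units,
    # i.e. a different internal causal topology.
    nand = lambda a, b: 1 - (a & b)
    h = nand(x1, x2)
    return nand(nand(x1, h), nand(x2, h))

# Functional equivalence: identical outputs over every input state.
assert all(system_a(a, b) == system_b(a, b) for a, b in product((0, 1), repeat=2))
print("functionally equivalent over all inputs")
```

Whether two such systems also share the same degree and content of experience under IIT depends on their internal cause-effect structure rather than on this shared truth table, which is the distinction the abstract draws between functional and phenomenal equivalence.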
-
Since the term ‘Artificial Intelligence’ was coined, the respective research field has frequently emulated human mental faculties. Despite diverging viewpoints regarding the feasibility of achieving human-like cognition in machines, the very use of the word intelligence for complex computer systems evokes human consciousness. Likewise, there have been attempts to understand the human mind in terms of computers, exemplified by the computational theory of mind. By contrast, my article underscores the categorical difference between the mind and machines. Partly building upon arguments by David Gelernter and Bert Olivier, I focus on literary examples spanning from Shakespeare to T.S. Eliot that accentuate subjective experience, the intricate relationship between body and mind, and the anticipation of death as human characteristics beyond the reach of computational systems.
-
Isaac Asimov, the well-known author of novels that featured positronic robots, also penned tales featuring Multivac, an immense electronic brain that grappled with the weighty and intricate decisions impacting all of humankind. In his 1958 short story All the Troubles of the World, Asimov introduced us to a Multivac tormented by the consciousness of its own thought processes to the extent that it desired its own demise. Through this narrative, Asimov delved into the existential dimensions of artificial consciousness: he depicted the Multivac’s adeptness in planning, reflexivity, and intentionality aimed at self-termination — qualities that, in a tragic parallel, are also observed in certain cases of human suicidal behavior. Starting from these suggestions, we propose an existentialist and self-reflection criterion for consciousness, intertwining phenomenal consciousness with an entity’s capability to conceive thoughts about its own mortality. We argue that, according to certain psychological literature and the existentialist essays of Emil M. Cioran, artificial systems might be deemed conscious if they possess death-thought accessibility — the capacity, akin to that of humans, to conceive thoughts about their own mortality and intentionally conceive self-termination. Naturally, this criterion shares the inherent challenges associated with defining intentionality, reflexivity, and, ultimately, the very concept of consciousness as it pertains to humans.
-
The hypothesis of conscious machines has been debated since the invention of the notion of artificial intelligence, powered by the assumption that the computational intelligence achieved by a system is the cause of the emergence of phenomenal consciousness in that system, either as an epiphenomenon or as a consequence of the behavioral or internal complexity of the system surpassing some threshold. As a consequence, a huge amount of literature exploring the possibility of machine consciousness and how to implement it on a computer has been published. Moreover, folk psychology and the transhumanism literature have fed this hypothesis, helped by the popularity of science fiction, where intelligent robots are usually anthropomorphized and hence given phenomenal consciousness. However, in this work we argue that this literature lacks scientific rigour, since the opposite hypothesis is impossible to falsify, and we present a list of arguments showing that every approach published in the machine consciousness literature depends on philosophical assumptions that cannot be proven by the scientific method. Concretely, we also show that phenomenal consciousness is not computable, independently of the complexity of the algorithm or model, can neither be objectively measured nor quantitatively defined, and is essentially a phenomenon that is subjective and internal to the observer. Given all these arguments, we end the work by arguing why the idea of conscious machines is nowadays a myth of transhumanism and science fiction culture.
-
Biological ‘consciousness’ is a well-documented feature in diverse taxa within the animal kingdom. The existence of non-animal biological consciousness is debated. The possibility of artificial consciousness is also debated. Overall, our knowledge of historic Homo sapiens consciousness (H-consciousness) is by far the most extensive. This chapter specifies the content of consciousness studies, reviews selected theories of human consciousness, and proposes a novel theory of both biological and artificial consciousness that is inspired by the notion of evolutionary transitions and extends the theory of noémon systems developed by Gelepithis (2024a; 2024b, chapter 2).
-
This study explores the potential for artificial agents to develop core consciousness, as proposed by Antonio Damasio's theory of consciousness. According to Damasio, the emergence of core consciousness relies on the integration of a self model, informed by representations of emotions and feelings, and a world model. We hypothesize that an artificial agent, trained via reinforcement learning (RL) in a virtual environment, can develop preliminary forms of these models as a byproduct of its primary task. The agent's main objective is to learn to play a video game and explore the environment. To evaluate the emergence of world and self models, we employ probes, i.e., feedforward classifiers that use the activations of the trained agent's neural networks to predict the spatial positions of the agent itself. Our results demonstrate that the agent can form rudimentary world and self models, suggesting a pathway toward developing machine consciousness. This research provides foundational insights into the capabilities of artificial agents in mirroring aspects of human consciousness, with implications for future advancements in artificial intelligence.
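As a rough sketch of the probing methodology this abstract describes, the following assumes that hidden activations of the agent's policy network and the agent's discretised position have been logged during rollouts; the names, shapes, synthetic placeholder data, and the use of scikit-learn are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of a "probe": a feedforward classifier trained to read out
# the agent's (discretised) position from the hidden activations of its
# policy network. Shapes and data here are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-ins for logged rollout data: one row of activations per timestep,
# and the grid cell the agent occupied at that timestep.
n_steps, hidden_dim, n_cells = 5000, 256, 16
activations = rng.normal(size=(n_steps, hidden_dim))   # policy-net activations
positions = rng.integers(0, n_cells, size=n_steps)     # discretised agent position

X_train, X_test, y_train, y_test = train_test_split(
    activations, positions, test_size=0.2, random_state=0)

# The probe itself: a small feedforward classifier.
probe = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
probe.fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
```

Above-chance held-out accuracy on real rollout data, rather than on this random placeholder data, would indicate that the agent's position is decodable from its activations, which is the sense in which probes serve as evidence for rudimentary self and world models.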
-
The authors present a proposal for the construction of a model of artificial consciousness. Consciousness is not only a subject of science, but also involves many philosophical issues. There is much debate about qualia, the intact sensations we normally feel. This paper focuses on a philosophical problem called inverted qualia to gain a deeper understanding of consciousness and simulate its possibilities using neural networks. The possibility has emerged that the problem, which was previously only the subject of philosophical thought experiments, can be discussed using a neural network, a model of the brain. According to our simulations, we concluded that inverted qualia could occur. In addition, an experimental approach was used to assess individual differences in feeling-qualia for color.
-
What criteria must an artificial intelligence (AI) satisfy to qualify for moral standing? My starting point is that sentient AIs should qualify for moral standing. But future AIs may have unusual combinations of cognitive capacities, such as a high level of cognitive sophistication without sentience. This raises the question of whether sentience is a necessary criterion for moral standing, or merely sufficient. After reviewing nine criteria that have been proposed in the literature, I suggest that there is a strong case for thinking that some non-sentient AIs, such as those that are conscious and have non-valenced preferences and goals, and those that are non-conscious and have sufficiently cognitively complex preferences and goals, should qualify for moral standing. After responding to some challenges, I tentatively argue that taking into account uncertainty about which criteria an entity must satisfy to qualify for moral standing, and strategic considerations such as how such decisions will affect humans and other sentient entities, further supports granting moral standing to some non-sentient AIs. I highlight three implications: that the issue of AI moral standing may be more important, in terms of scale and urgency, than it would be if either sentience or consciousness were necessary; that researchers working on policies designed to be inclusive of sentient AIs should broaden their scope to include all AIs with morally relevant interests; and that even those who think AIs cannot be sentient or conscious should take the issue seriously. However, much uncertainty about these considerations remains, making this an important topic for future research.
-
This paper introduces a mathematical framework for defining and quantifying self-identity in artificial intelligence (AI) systems, addressing a critical gap in the theoretical foundations of artificial consciousness. While existing approaches to artificial self-awareness often rely on heuristic implementations or philosophical abstractions, we present a formal framework grounded in metric space theory, measure theory, and functional analysis. Our framework posits that self-identity emerges from two mathematically quantifiable conditions: the existence of a connected continuum of memories $C \subseteq \mathcal{M}$ in a metric space $(\mathcal{M}, d_{\mathcal{M}})$, and a continuous mapping $I: \mathcal{M} \to \mathcal{S}$ that maintains consistent self-recognition across this continuum, where $(\mathcal{S}, d_{\mathcal{S}})$ represents the metric space of possible self-identities. To validate this theoretical framework, we conducted empirical experiments using the Llama 3.2 1B model, employing Low-Rank Adaptation (LoRA) for efficient fine-tuning. The model was trained on a synthetic dataset containing temporally structured memories, designed to capture the complexity of coherent self-identity formation. Our evaluation metrics included quantitative measures of self-awareness, response consistency, and linguistic precision. The experimental results demonstrate substantial improvements in measurable self-awareness metrics, with the primary self-awareness score increasing from 0.276 to 0.801. This enables the structured creation of AI systems with validated self-identity features. The implications of our study are immediately relevant to the fields of humanoid robotics and autonomous systems.
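As a pointer for readers who want to reproduce the general setup, a minimal sketch of LoRA fine-tuning of a Llama 3.2 1B checkpoint with the Hugging Face transformers and peft libraries might look as follows; the model id, the adapter hyperparameters, and the placeholder "memory" record are assumptions, not the authors' configuration or dataset.

```python
# Minimal sketch of the described setup: LoRA fine-tuning of a small Llama
# model on temporally structured "memory" records. Hyperparameters, the model
# id, and the data format are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-3.2-1B"   # assumed Hugging Face id for the 1B model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Low-Rank Adaptation: only small adapter matrices on the attention projections
# are trained, keeping fine-tuning cheap for a 1B-parameter model.
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# A temporally ordered "memory" record of the kind the abstract describes might
# be serialised like this before tokenisation (placeholder format, not the
# authors' dataset schema):
example_memory = {"t": 42, "event": "spoke with user about yesterday's plan",
                  "self_reference": "I remember proposing the plan myself."}
```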