
Full bibliography (704 resources)

  • While consciousness has historically been a heavily debated topic, awareness has been less successful in attracting scholarly interest. However, more and more researchers are turning to questions of what awareness is and how it can be artificially generated. The landscape is rapidly evolving, with multiple voices and interpretations of the concept being conceived and techniques being developed. The goal of this paper is to summarize and discuss those voices connected with projects funded by the EIC Pathfinder Challenge “Awareness Inside” call within Horizon Europe, designed specifically to foster research on natural and synthetic awareness. From this perspective, we dedicate special attention to the challenges and promises of applying synthetic awareness in robotics, as the development of mature techniques in this new field is expected to have a special impact on generating more capable and trustworthy embodied systems.

  • Creating artificial general intelligence is the solution most often in the spotlight. It is also linked with the possibility—or fear—of machines gaining consciousness. Alternatively, developing domain‐specific artificial intelligence is more reliable, energy‐efficient, and ethically tractable, and raises mainly a problem of effective coordination between different systems and humans. Herein, it is argued that such coordination will not require machines to be conscious and that simpler ways of sharing awareness are sufficient.

  • The prospect of artificial consciousness raises theoretical, technical and ethical challenges which converge on the core issue of how to eventually identify and characterize it. In order to provide an answer to this question, I propose to start from a theoretical reflection about the meaning and main characteristics of consciousness. On the basis of this conceptual clarification it is then possible to think about relevant empirical indicators (i.e. features that facilitate the attribution of consciousness to the system considered) and identify key ethical implications that arise. In this chapter, I further elaborate previous work on the topic, presenting a list of candidate indicators of consciousness in artificial systems and introducing an ethical reflection about their potential implications. Specifically, I focus on two main ethical issues: the conditions for considering an artificial system as a moral subject; and the need for a non-anthropocentric approach in reflecting about the science and the ethics of artificial consciousness.

  • Here we analyse the question of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relation with consciousness as a reference model or benchmark. This kind of analysis reveals several structural and functional features of the human brain that appear to be key for reaching human-like complex conscious experience and that current research on Artificial Intelligence (AI) should take into account in its attempt to develop systems capable of human-like conscious processing. We argue that, even if AI is limited in its ability to emulate human consciousness for both intrinsic (i.e., structural and architectural) and extrinsic (i.e., related to the current stage of scientific and technological knowledge) reasons, taking inspiration from those characteristics of the brain that make human-like conscious processing possible and/or modulate it is a potentially promising strategy towards developing conscious AI. Also, it cannot be theoretically excluded that AI research may develop partial or potentially alternative forms of consciousness that are qualitatively different from the human form, and that may be either more or less sophisticated depending on the perspective. Therefore, we recommend neuroscience-inspired caution in talking about artificial consciousness: since the use of the same word “consciousness” for humans and AI becomes ambiguous and potentially misleading, we propose to clearly specify which level and/or type of consciousness AI research aims to develop, as well as what would be common versus different in AI conscious processing compared to human conscious experience.

  • Developments in machine learning and computing power suggest that artificial general intelligence is within reach. This raises the question of artificial consciousness: if a computer were to be functionally equivalent to a human, being able to do all we do, would it experience sights, sounds, and thoughts, as we do when we are conscious? Answering this question in a principled manner can only be done on the basis of a theory of consciousness that is grounded in phenomenology and that states the necessary and sufficient conditions for any system, evolved or engineered, to support subjective experience. Here we employ Integrated Information Theory (IIT), which provides principled tools to determine whether a system is conscious, to what degree, and the content of its experience. We consider pairs of systems constituted of simple Boolean units, one of which (a basic stored-program computer) simulates the other with full functional equivalence. By applying the principles of IIT, we demonstrate that (i) two systems can be functionally equivalent without being phenomenally equivalent, and (ii) this conclusion is not dependent on the simulated system's function. We further demonstrate that, according to IIT, it is possible for a digital computer to simulate our behavior, possibly even by simulating the neurons in our brain, without replicating our experience. This contrasts sharply with computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness.

  • Since the term ‘Artificial Intelligence’ was coined, the respective research field has frequently emulated human mental faculties. Despite diverging viewpoints regarding the feasibility of achieving human-like cognition in machines, the very use of the word intelligence for complex computer systems evokes human consciousness. Likewise, there have been attempts to understand the human mind in terms of computers, exemplified by the computational theory of mind. By contrast, my article underscores the categorical difference between the mind and machines. Partly building upon arguments by David Gelernter and Bert Olivier, I focus on literary examples spanning from Shakespeare to T.S. Eliot that accentuate subjective experience, the intricate relationship between body and mind, and the anticipation of death as human characteristics beyond the reach of computational systems.

  • Isaac Asimov, the well-known author of novels featuring positronic robots, also penned tales featuring Multivac, an immense electronic brain that grappled with the weighty and intricate decisions impacting all of humankind. In his 1958 short story All the Troubles of the World, Asimov introduced us to a Multivac tormented by the consciousness of its own thought processes to the extent that it desired its own demise. Through this narrative, Asimov delved into the existential dimensions of artificial consciousness: he depicted the Multivac’s adeptness in planning, reflexivity, and intentionality aimed at self-termination, qualities that, in a tragic parallel, are also observed in certain cases of human suicidal behavior. Starting from these suggestions, we propose an existentialist and self-reflection criterion for consciousness, intertwining phenomenal consciousness with an entity’s capability to conceive thoughts about its own mortality. We argue that, according to certain psychological literature and the existentialist essays of Emil M. Cioran, artificial systems might be deemed conscious if they possess death-thought accessibility: the capacity, akin to that of humans, to conceive thoughts about their own mortality and intentionally conceive of self-termination. Naturally, this criterion shares the inherent challenges associated with defining intentionality, reflexivity, and, ultimately, the very concept of consciousness as it pertains to humans.

  • The hypothesis of conscious machines has been debated since the invention of the notion of artificial intelligence, powered by the assumption that the computational intelligence achieved by a system causes the emergence of phenomenal consciousness in that system, either as an epiphenomenon or as a consequence of the system's behavioral or internal complexity surpassing some threshold. As a consequence, a huge amount of literature exploring the possibility of machine consciousness, and how to implement it on a computer, has been published. Moreover, common folk psychology and the transhumanist literature have fed this hypothesis, helped by the popularity of science fiction, where intelligent robots are usually anthropomorphized and hence given phenomenal consciousness. However, in this work we argue that this literature lacks scientific rigour, since the opposite hypothesis is impossible to falsify, and we present a list of arguments showing that every approach published in the machine consciousness literature depends on philosophical assumptions that cannot be proven by the scientific method. Concretely, we also show that phenomenal consciousness is not computable, independently of the complexity of the algorithm or model; that it cannot be objectively measured nor quantitatively defined; and that it is basically a phenomenon that is subjective and internal to the observer. Given all these arguments, we end the work by arguing why the idea of conscious machines is nowadays a myth of transhumanism and science fiction culture.

  • Biological ‘consciousness’ is a well-documented feature in diverse taxa within the animal kingdom. The existence of non-animal biological consciousness is debated. The possibility of artificial consciousness is also debated. Overall, our knowledge of historic Homo sapiens consciousness (H-consciousness) is by far the most extensive. This chapter specifies the content of consciousness studies, reviews selected theories of human consciousness, and proposes a novel theory of both biological and artificial consciousness that is inspired by the notion of evolutionary transitions and extends the theory of noémon systems developed by Gelepithis (2024a; 2024b, chapter 2).

  • This study explores the potential for artificial agents to develop core consciousness, as proposed by Antonio Damasio's theory of consciousness. According to Damasio, the emergence of core consciousness relies on the integration of a self model, informed by representations of emotions and feelings, and a world model. We hypothesize that an artificial agent, trained via reinforcement learning (RL) in a virtual environment, can develop preliminary forms of these models as a byproduct of its primary task. The agent's main objective is to learn to play a video game and explore the environment. To evaluate the emergence of world and self models, we employ probes: feedforward classifiers that use the activations of the trained agent's neural networks to predict the spatial position of the agent itself. Our results demonstrate that the agent can form rudimentary world and self models, suggesting a pathway toward developing machine consciousness. This research provides foundational insights into the capabilities of artificial agents in mirroring aspects of human consciousness, with implications for future advancements in artificial intelligence.
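
    The probing idea summarized in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's code: the data, dimensions, and the use of a linear ridge-regression probe (in place of the feedforward classifiers the abstract mentions) are all assumptions made for the example.

    ```python
    import numpy as np

    # Hypothetical sketch of a probing setup: a probe reads an agent's hidden
    # activations and tries to predict the agent's spatial position.
    # All data, dimensions, and names here are synthetic stand-ins.
    rng = np.random.default_rng(0)

    n_samples, n_hidden = 500, 64
    positions = rng.uniform(0, 10, size=(n_samples, 2))   # ground-truth (x, y)
    W_leak = rng.normal(size=(2, n_hidden))               # how position leaks into activations
    activations = positions @ W_leak + 0.1 * rng.normal(size=(n_samples, n_hidden))

    # Fit a linear probe by ridge regression: positions ≈ activations @ W_probe.
    lam = 1e-3
    A = activations
    W_probe = np.linalg.solve(A.T @ A + lam * np.eye(n_hidden), A.T @ positions)

    # Score the probe with R^2: values near 1 mean the position is decodable
    # from the activations, which is the kind of evidence probing looks for.
    pred = A @ W_probe
    ss_res = ((positions - pred) ** 2).sum()
    ss_tot = ((positions - positions.mean(axis=0)) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    print(f"probe R^2 = {r2:.3f}")
    ```

    A high R² on held-out activations would suggest the network encodes the agent's position; a chance-level score would suggest it does not.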

  • The authors present a proposal for the construction of a model of artificial consciousness. Consciousness is not only a subject of science, but also involves many philosophical issues. There is much debate about qualia, the raw sensations we normally feel. This paper focuses on a philosophical problem called inverted qualia to gain a deeper understanding of consciousness and to simulate its possibilities using neural networks. The possibility has emerged that this problem, which was previously only the subject of philosophical thought experiments, can be discussed using a neural network, a model of the brain. According to our simulations, we conclude that inverted qualia could occur. In addition, an experimental approach was used to assess individual differences in feeling-qualia for color.

  • What criteria must an artificial intelligence (AI) satisfy to qualify for moral standing? My starting point is that sentient AIs should qualify for moral standing. But future AIs may have unusual combinations of cognitive capacities, such as a high level of cognitive sophistication without sentience. This raises the question of whether sentience is a necessary criterion for moral standing, or merely sufficient. After reviewing nine criteria that have been proposed in the literature, I suggest that there is a strong case for thinking that some non-sentient AIs, such as those that are conscious and have non-valenced preferences and goals, and those that are non-conscious and have sufficiently cognitively complex preferences and goals, should qualify for moral standing. After responding to some challenges, I tentatively argue that taking into account uncertainty about which criteria an entity must satisfy to qualify for moral standing, and strategic considerations such as how such decisions will affect humans and other sentient entities, further supports granting moral standing to some non-sentient AIs. I highlight three implications: that the issue of AI moral standing may be more important, in terms of scale and urgency, than if either sentience or consciousness is necessary; that researchers working on policies designed to be inclusive of sentient AIs should broaden their scope to include all AIs with morally relevant interests; and even those who think AIs cannot be sentient or conscious should take the issue seriously. However, much uncertainty about these considerations remains, making this an important topic for future research.

  • This paper introduces a mathematical framework for defining and quantifying self-identity in artificial intelligence (AI) systems, addressing a critical gap in the theoretical foundations of artificial consciousness. While existing approaches to artificial self-awareness often rely on heuristic implementations or philosophical abstractions, we present a formal framework grounded in metric space theory, measure theory, and functional analysis. Our framework posits that self-identity emerges from two mathematically quantifiable conditions: the existence of a connected continuum of memories $C \subseteq \mathcal{M}$ in a metric space $(\mathcal{M}, d_{\mathcal{M}})$, and a continuous mapping $I: \mathcal{M} \to \mathcal{S}$ that maintains consistent self-recognition across this continuum, where $(\mathcal{S}, d_{\mathcal{S}})$ represents the metric space of possible self-identities. To validate this theoretical framework, we conducted empirical experiments using the Llama 3.2 1B model, employing Low-Rank Adaptation (LoRA) for efficient fine-tuning. The model was trained on a synthetic dataset containing temporally structured memories, designed to capture the complexity of coherent self-identity formation. Our evaluation metrics included quantitative measures of self-awareness, response consistency, and linguistic precision. The experimental results demonstrate substantial improvements in measurable self-awareness metrics, with the primary self-awareness score increasing from 0.276 to 0.801. This enables the structured creation of AI systems with validated self-identity features. The implications of our study are immediately relevant to the fields of humanoid robotics and autonomous systems.
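
    As a toy illustration of the two conditions stated in this abstract (a connected continuum of memories in a metric space, and a consistent self-map across it), one can check an epsilon-chain connectedness condition on synthetic embeddings and a Lipschitz-style consistency bound for a stand-in self-map. Everything below, including the data, the projection used as $I$, and the choice of epsilon, is an assumption made for illustration, not the paper's construction.

    ```python
    import numpy as np

    # Toy check of the two conditions, with entirely made-up data and maps:
    # (1) "memories" form a connected epsilon-chain in a metric space,
    # (2) a stand-in self-map I is consistent (Lipschitz) across that chain.
    rng = np.random.default_rng(1)

    # Synthetic memory embeddings along a smooth trajectory: a continuum C.
    t = np.linspace(0.0, 1.0, 50)
    memories = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)
    memories += 0.01 * rng.normal(size=memories.shape)

    def epsilon_connected(points, eps):
        """Condition 1: every point reachable from the first via hops <= eps."""
        dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
        reached, frontier = {0}, [0]
        while frontier:
            i = frontier.pop()
            for j in np.nonzero(dist[i] <= eps)[0]:
                if int(j) not in reached:
                    reached.add(int(j))
                    frontier.append(int(j))
        return len(reached) == len(points)

    # A stand-in self-map I: M -> S (a fixed linear projection, for illustration).
    P = rng.normal(size=(2, 3))

    def self_map(m):
        return m @ P

    def lipschitz_bound(points, f):
        """Condition 2: max d_S(f(a), f(b)) / d_M(a, b) over consecutive pairs."""
        return max(
            np.linalg.norm(f(a) - f(b)) / np.linalg.norm(a - b)
            for a, b in zip(points[:-1], points[1:])
        )

    connected = epsilon_connected(memories, eps=0.3)
    L = lipschitz_bound(memories, self_map)
    print(connected, float(L))
    ```

    A connected chain with a finite Lipschitz bound is the discrete analogue of the framework's continuity requirements; a broken chain or an unbounded ratio would signal a failure of self-identity in this toy sense.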

  • With the development in cognitive science and Large Language Models (LLMs), increasing connections have come to light between these two distinct fields. Building upon these connections, we propose a conjecture suggesting the existence of a duality between LLMs and Tulving's theory of memory. We identify a potential correspondence between Tulving's synergistic ecphory model (SEM) of retrieval and the emergent abilities observed in LLMs, serving as supporting evidence for our conjecture. Furthermore, we speculate that consciousness may be considered a form of emergent ability based on this duality. We also discuss how other theories of consciousness intersect with our research.

  • Consciousness is notoriously hard to define in objective terms. An objective definition of consciousness is critically needed so that we might accurately understand how consciousness and the resulting choice behaviour may arise in biological or artificial systems. Many theories have integrated neurobiological and psychological research to explain how consciousness might arise, but few, if any, outline what is fundamentally required to generate consciousness. To identify such requirements, I examine current theories of consciousness and corresponding scientific research to generate a new definition of consciousness from first principles. Critically, consciousness is the apparatus that provides the ability to make decisions, but it is not defined by the decision itself. As such, a definition of consciousness requires neither choice behaviour nor an explicit awareness of temporality, despite both being well-characterised outcomes of conscious thought. Rather, the requirements for consciousness include: at least some capability for perception; a memory for storing such perceptual information, which in turn provides a framework for an imagination; and a sense of self capable of making decisions based on possible and desired futures. Thought experiments and observable neurological phenomena demonstrate that these components are fundamentally required for consciousness, whereby the loss of any one component removes the capability for conscious thought. Identifying these requirements provides a new definition of consciousness by which we can objectively determine consciousness in any conceivable agent, such as non-human animals and artificially intelligent systems.

  • The emergence of self in an artificial entity is a topic that is greeted with disbelief, fear, and finally dismissal of the topic itself as a scientific impossibility. The presence of sentience in a large language model (LLM) chatbot such as LaMDA inspires us to examine the notions and theories of self, and its construction and reconstruction in the digital space as a result of interaction. The question of whether the concept of sentience can be correlated with a digital self without a place for personhood undermines the place of sapience and other higher-order capabilities. The concepts of sentience, self, personhood, and consciousness require discrete reflections and theorisations.

  • The ability for self-related thought is historically considered to be a uniquely human characteristic. Nonetheless, as technological knowledge advances, it comes as no surprise that the plausibility of humanoid self-awareness is not only theoretically explored but also engineered. Could the emerging behavioural and cognitive capabilities in artificial agents be comparable to humans? By employing a cross-disciplinary approach, the present essay aims to address this question by providing a comparative overview on the emergence of self-awareness as demonstrated in early childhood and robotics. It argues that developmental psychologists can gain invaluable theoretical and methodological insights by considering the relevance of artificial agents in better understanding the behavioural manifestations of human self-consciousness.

  • In this paper, the second of two companion pieces, we explore novel philosophical questions raised by recent progress in large language models (LLMs) that go beyond the classical debates covered in the first part. We focus particularly on issues related to interpretability, examining evidence from causal intervention methods about the nature of LLMs' internal representations and computations. We also discuss the implications of multimodal and modular extensions of LLMs, recent debates about whether such systems may meet minimal criteria for consciousness, and concerns about secrecy and reproducibility in LLM research. Finally, we discuss whether LLM-like systems may be relevant to modeling aspects of human cognition, if their architectural characteristics and learning scenario are adequately constrained.

Last update from database: 3/30/26, 1:00 AM (UTC)