
Full bibliography (695 resources)

  • The rapid advances in the capabilities of Large Language Models (LLMs) have galvanised public and scientific debates over whether artificial systems might one day be conscious. Prevailing optimism is often grounded in computational functionalism: the assumption that consciousness is determined solely by the right pattern of information processing, independent of the physical substrate. Opposing this, biological naturalism insists that conscious experience is fundamentally dependent on the concrete physical processes of living systems. Despite the centrality of these positions to the artificial consciousness debate, there is currently no coherent framework that explains how biological computation differs from digital computation, and why this difference might matter for consciousness. Here, we argue that the absence of consciousness in artificial systems is not merely due to missing functional organisation but reflects a deeper divide between digital and biological modes of computation and the dynamico-structural dependencies of living organisms. Specifically, we propose that biological systems support conscious processing because they (i) instantiate scale-inseparable, substrate-dependent multiscale processing as a metabolic optimisation strategy, and (ii) alongside discrete computations, they perform continuous-valued computations due to the very nature of the fluidic substrate from which they are composed. These features – scale inseparability and hybrid computations – are not peripheral, but essential to the brain’s mode of computation. In light of these differences, we outline the foundational principles of a biological theory of computation and explain why current artificial intelligence systems are unlikely to replicate conscious processing as it arises in biology.

  • A Sense of Agency (SoA) is the feeling of being in control over one's own actions and their outcomes. However, people can also experience a “vicarious” SoA over the actions performed by other agents, including artificial agents. The present study aimed to understand the minimal conditions for vicarious SoA toward artificial agents. Specifically, we addressed whether vicarious SoA emerges when people have access only to the action effect (proximal and distal), i.e., when no motor action is executed. In addition, we manipulated the expectancy of the content of the distal action effect to check whether the proximal action effect is sufficient for the emergence of vicarious SoA, or whether this effect is due to the learned association between proximal and distal effects. In two experiments, participants performed an Intentional Binding (IB) task, in which the IB effect served as the behavioural measure of SoA. In the first experiment (Solo), participants judged the onset of self-generated tones, whereas in the second experiment, a new sample of participants judged the onset of tones produced by a computer via an automatically pressed button, i.e., a customized device designed to generate a keypress (proximal action effect) in the absence of an effector executing the keypress (no motor action). In both experiments, participants' neural activity was recorded via electroencephalography (EEG) to examine the N1 and P2 components as neural measures of SoA. Behavioural results across experiments showed that the IB effect always emerged, suggesting that the vicarious IB effect toward an artificial agent emerges when access to the proximal action effect is provided, even in the absence of the action itself. The neural results suggested that while individual (self) SoA seemed to rely partially on motor predictions indexed by the N1, vicarious SoA relies on later, more cognitive (although still predictive) processes indexed by the P2. Overall, these results suggest that individual and vicarious SoA, although behaviourally manifested through a similar IB effect, might – to some extent – rely on different neural mechanisms.
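
    A minimal illustrative sketch (not from the study above) of how an Intentional Binding effect like the one described in this entry is typically quantified: tones are judged to occur earlier when preceded by a (key)press than in a baseline condition with tones alone. The function names, the baseline-contrast layout, and the hypothetical millisecond values are assumptions for illustration only.

    ```python
    # Sketch of an Intentional Binding (IB) effect computation (illustrative, assumed layout).
    import numpy as np

    def judgement_error(judged_ms, actual_ms):
        """Signed error of onset judgements in milliseconds (negative = judged earlier)."""
        return np.asarray(judged_ms, dtype=float) - np.asarray(actual_ms, dtype=float)

    def intentional_binding(operant_judged, operant_actual, baseline_judged, baseline_actual):
        """IB effect: how much earlier tones are judged when preceded by a keypress
        than in a tones-alone baseline. Positive values indicate binding."""
        operant_shift = judgement_error(operant_judged, operant_actual).mean()
        baseline_shift = judgement_error(baseline_judged, baseline_actual).mean()
        return baseline_shift - operant_shift

    # Hypothetical trials (ms): tones judged roughly 30 ms earlier in the operant condition.
    ib = intentional_binding(
        operant_judged=[215, 205, 220], operant_actual=[250, 250, 250],
        baseline_judged=[245, 240, 255], baseline_actual=[250, 250, 250],
    )
    print(f"IB effect: {ib:.1f} ms")
    ```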

  • Recent debates on artificial intelligence increasingly emphasise questions of AI consciousness and moral status, yet there remains little agreement on how such properties should be evaluated. In this paper, we argue that awareness offers a more productive and methodologically tractable alternative. We introduce a practical method for evaluating awareness across diverse systems, where awareness is understood as encompassing a system's abilities to process, store and use information in the service of goal-directed action. Central to this approach is the claim that any evaluation aiming to capture the diversity of artificial systems must be domain-sensitive, deployable at any scale, multidimensional, and enable the prediction of task performance, while generalising to the level of abilities for the sake of comparison. Given these four desiderata, we outline a structured approach to evaluating and comparing awareness profiles across artificial systems with differing architectures, scales, and operational domains. By shifting the focus from artificial consciousness to being just aware enough, this approach aims to facilitate principled assessment, support design and oversight, and enable more constructive scientific and public discourse.

  • Scientific theories of consciousness should be falsifiable and non-trivial. Recent research has given us formal tools to analyze these requirements of falsifiability and non-triviality for theories of consciousness. Surprisingly, many contemporary theories of consciousness fail to pass this bar, including theories based on causal structure but also (as I demonstrate) theories based on function. Herein, I show these requirements of falsifiability and non-triviality especially constrain the potential consciousness of contemporary Large Language Models (LLMs) because of their proximity to systems that are equivalent to LLMs in terms of input/output function; yet, for these functionally equivalent systems, there cannot be any falsifiable and non-trivial theory of consciousness that judges them conscious. This forms the basis of a disproof of contemporary LLM consciousness. I then show a positive result, which is that theories of consciousness based on (or requiring) continual learning do satisfy the stringent formal constraints for a theory of consciousness in humans. Intriguingly, this work supports a hypothesis: If continual learning is linked to consciousness in humans, the current limitations of LLMs (which do not continually learn) are intimately tied to their lack of consciousness.

  • Measuring awareness in artificial agents remains an unresolved challenge. We argue that it holds untapped potential for enhancing their design, control, and effectiveness. In this paper, we propose a novel and tractable approach to measure the impact of awareness on system performance, structured around distinct dimensions of awareness – temporal, spatial, metacognitive, self and agentive. Each dimension is linked to specific capacities and tasks. Specifically, we demonstrate our approach through a swarm robotics intralogistics scenario, where we assess the influence of two dimensions of awareness – spatial and self – on the performance of the swarm in a collective transport task. Our results reveal how increased abilities along these awareness dimensions affect overall swarm efficiency. This framework represents an initial step towards quantifying awareness in, and across, artificial systems.
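
    As a rough companion to the entry above, the following is a minimal sketch (not taken from the paper) of how an awareness profile over the five named dimensions might be represented and set against a collective-transport performance measure. The AwarenessProfile class, the [0, 1] ability scale, and the example numbers are illustrative assumptions.

    ```python
    # Sketch of an awareness profile and a profile-vs-performance comparison (illustrative).
    from dataclasses import dataclass

    DIMENSIONS = ("temporal", "spatial", "metacognitive", "self", "agentive")

    @dataclass
    class AwarenessProfile:
        levels: dict[str, float]  # dimension -> ability level in [0, 1]

        def score(self) -> float:
            """Aggregate awareness score: mean ability over the five dimensions."""
            return sum(self.levels.get(d, 0.0) for d in DIMENSIONS) / len(DIMENSIONS)

    def compare(profile_a: AwarenessProfile, perf_a: float,
                profile_b: AwarenessProfile, perf_b: float) -> str:
        """Report how a change in the awareness profile aligns with a change in
        task performance (e.g., items delivered per unit time in collective transport)."""
        return (f"delta awareness = {profile_b.score() - profile_a.score():+.2f}, "
                f"delta performance = {perf_b - perf_a:+.2f}")

    # Hypothetical example: raising spatial and self awareness alongside a performance gain.
    baseline = AwarenessProfile({"spatial": 0.2, "self": 0.2})
    enhanced = AwarenessProfile({"spatial": 0.8, "self": 0.6})
    print(compare(baseline, 0.41, enhanced, 0.57))
    ```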

  • Whether artificial intelligence (AI) systems can possess consciousness is a contentious question because of the inherent challenges of defining and operationalizing subjective experience. This paper proposes a framework to reframe the question of artificial consciousness into empirically tractable tests. We introduce three evaluative criteria - S (subjective-linguistic), L (latent-emergent), and P (phenomenological-structural) - collectively termed SLP-tests, which assess whether an AI system instantiates interface representations that facilitate consciousness-like properties. Drawing on category theory, we model interface representations as mappings between relational substrates (RS) and observable behaviors, akin to specific types of abstraction layers. The SLP-tests collectively operationalize subjective experience not as an intrinsic property of physical systems but as a functional interface to a relational entity.

  • It has been suggested we may see conscious AI systems within the next few decades. Somewhat lost in these expectations is the fact that we still do not understand the nature of consciousness in humans, and we currently have as little empirical handle on how to measure the presence or absence of subjective experience in humans as we do in AI systems. In the history of consciousness research, no behaviour or cognitive function has ever been identified as a necessary condition for consciousness. For this reason, no behavioural marker exists for scientists to identify the presence or absence of consciousness ‘from the outside’. This results in a circularity in our measurements of consciousness. The problem is that we need to make an ultimately unwarranted assumption about who or what is conscious in order to create experimental contrasts and conduct studies that will ground our decisions about who or what is conscious. Call this the Contrast Problem. Here we explicate the contrast problem, highlight some upshots of it, and consider a way forward.

  • Rapid progress in artificial intelligence (AI) capabilities has drawn fresh attention to the prospect of consciousness in AI. There is an urgent need for rigorous methods to assess AI systems for consciousness, but significant uncertainty about relevant issues in consciousness science. We present a method for assessing AI systems for consciousness that involves exploring what follows from existing or future neuroscientific theories of consciousness. Indicators derived from such theories can be used to inform credences about whether particular AI systems are conscious. This method allows us to make meaningful progress because some influential theories of consciousness, notably including computational functionalist theories, have implications for AI that can be investigated empirically.

  • We analyze the question of how phenomenal consciousness (if any) might be identified in artificial systems, with specific reference to the gaming problem (i.e., the fact that the artificial system is trained with human-generated data, so that possible behavioral and/or functional evidence of consciousness is not reliable). Our goal is to review selected illustrative approaches for advancing in this direction. We highlight the strengths and shortcomings of each approach, finally proposing a combination of different strategies as a promising path to pursue.

  • It is well known that in interdisciplinary consciousness studies there are various competing hypotheses about the neural correlate(s) of consciousness (NCCs). Much contemporary work is dedicated to determining which of these hypotheses is right (or, on a weaker claim, is to be preferred). The prevalent working assumption is that one of the competing hypotheses is correct, while the remaining hypotheses misdescribe the phenomenon in some critical manner and their associated purported empirical evidence will eventually be explained away. In contrast, we propose that each hypothesis, simultaneously with its competitors, may be right and its associated evidence may be genuine evidence of NCCs. To account for this, we develop the multiple generator hypothesis (MGH), based on a distinction between principles and generators: the former denotes the ways consciousness can be brought about, and the latter how these are implemented in physical systems. We explicate and delineate the hypothesis and give examples of aspects of consciousness studies where the MGH is applicable and relevant. Finally, to show that it is promising, we demonstrate that the MGH has implications which give rise to novel questions or aspects to consider for the field of consciousness studies.

  • The belief that AI is conscious is not without risk. Is the design of artificial intelligence (AI) systems that are conscious within reach? Scientists, philosophers, and the general public are divided on this question. Some believe that consciousness is an inherently biological trait specific to brains, which seems to rule out the possibility of AI consciousness. Others argue that consciousness depends only on the manipulation of information by an algorithm, whether the system performing these computations is made up of neurons, silicon, or any other physical substrate—so-called computational functionalism. Definitive answers about AI consciousness will not be attempted here; instead, two related questions are considered. One concerns how beliefs about AI consciousness are likely to evolve in the scientific community and the general public as AI continues to improve. The other regards the risks of projecting onto future AIs both the moral status and the natural goal of self-preservation that are normally associated with conscious beings.

  • Deep Reinforcement Learning (DRL) is highly effective at tackling complex environments through individual decision-making and offers a novel and powerful approach to multi-robot pathfinding (MRPF). Building on DRL principles, this paper proposes a two-layer collaborative planning framework based on group consciousness (MACCRPF). The framework addresses the unique challenges of MRPF, where robots must not only independently complete their tasks but also coordinate to avoid conflicts during execution. Specifically, the proposed two-layer group consciousness mechanism encompasses two levels: a basic-layer group consensus, which emphasizes real-time information sharing and local task scheduling among robots and ensures that individual decisions are optimized through dynamic interaction and coordination; and a top-layer group consensus, which, guided by the basic-layer consensus, incorporates group strategies and evaluation mechanisms to adaptively adjust pathfinding in complex environments. Additionally, a hierarchical reward mechanism is designed to balance the demands of the two-layer planning framework. This mechanism significantly enhances inter-robot coordination efficiency and task completion rates. Experimental results demonstrate the efficacy of our approach, achieving over 20% improvement in pathfinding success rates compared to state-of-the-art methods. Furthermore, the framework exhibits strong transferability and generalization, maintaining high efficiency across diverse environments. This method provides a technical pathway for efficient collaboration in multi-robot systems.
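
    The entry above describes a hierarchical reward that balances the two planning layers but does not give its form. The following is a minimal sketch, under assumed reward terms and weights, of how an individual (basic-layer) term and a group (top-layer) term could be combined; the function names, weights, and example values are hypothetical.

    ```python
    # Sketch of a two-layer (hierarchical) reward combination (illustrative, assumed terms).
    def individual_reward(progress_to_goal: float, collided: bool) -> float:
        """Basic-layer term: reward progress toward the robot's own goal, penalise conflicts."""
        return progress_to_goal - (1.0 if collided else 0.0)

    def group_reward(tasks_completed: int, total_tasks: int, conflicts: int) -> float:
        """Top-layer term: reward group task completion, penalise inter-robot conflicts."""
        return tasks_completed / max(total_tasks, 1) - 0.1 * conflicts

    def hierarchical_reward(progress, collided, tasks_completed, total_tasks, conflicts,
                            w_individual=0.7, w_group=0.3) -> float:
        """Weighted combination balancing the demands of the two planning layers."""
        return (w_individual * individual_reward(progress, collided)
                + w_group * group_reward(tasks_completed, total_tasks, conflicts))

    # Hypothetical step: a robot advances without collision while the group lags slightly.
    print(hierarchical_reward(progress=0.3, collided=False,
                              tasks_completed=4, total_tasks=10, conflicts=1))
    ```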

  • Methodological structuralism is a research program that seeks to identify neural correlates of consciousness (NCCs) by mapping phenomenal similarity relationships onto the similarity relations between neural population activity. This paper presents a discussion of the potential benefits of methodological structuralism for the neurosciences of consciousness, namely as a specific theory of neural content encoding. In order to achieve this, I supplement it with a metatheoretical framework concerning the relationship between content and consciousness: the two-factor interaction view. Although structuralism provides a comprehensive description of the neural encoding of content, it is inadequate for fully explaining the conscious experience of contents. The majority of current theories of consciousness posit the existence of an additional mechanism that underlies the conscious experience of content. Consequently, if structuralism is indeed correct, progress in consciousness science can be achieved by investigating the interactions between neural mechanisms responsible for consciousness and structures in neural population code activity accounting for the structure of contents. This also has significant implications for consciousness in AI. I discuss these implications, as well as potential empirical avenues for investigating the interaction between content structures and consciousness with cutting-edge neuroscientific methodologies.

  • This chapter examines possible ramifications of mindshaping a social robot. It explores how such an agent might learn to represent psychological states, align its behavior with evolving societal norms, and develop capacities for self-directed mindreading and normative self-knowledge. Integrating perspectives from cultural evolution and naturalized intentionality, this approach suggests that social robots could achieve a level of norm-based self-regulation typically reserved for humans, fulfilling criteria for moral and legal personhood. However, this possibility raises ethical concerns: creating a self-knowing agent would tax care-giving resources, as we would need to provide AI welfare, thus undermining our capacity to act responsibly toward humans, non-human animals, and the environment, to whom our moral consideration is already owed and by whom it is desperately needed. Thus, this chapter concludes by urging caution, warning that attempts to cultivate moral responsibility in artificial agents may have destabilizing consequences for moral practices. Author Approved Manuscript. Please cite as: Dorsch, J. (2025). Mindshaping and AI: Will mindshaping a robot create an artificial person? In T. W. Zawidzki & R. Tison (Eds.), The Routledge Handbook of Mindshaping (1st ed., pp. 406–416). Routledge. https://doi.org/10.4324/9781032639239

  • Machine consciousness (MC) is the ultimate challenge to artificial intelligence. Although great progress has been made in artificial intelligence and robotics, consciousness is still an enigma and machines are far from having it. To clarify the concepts of consciousness and the research directions of machine consciousness, this review proposes a comprehensive taxonomy for machine consciousness, categorizing it into seven types: MC-Perception, MC-Cognition, MC-Behavior, MC-Mechanism, MC-Self, MC-Qualia and MC-Test, where the first six types aim to achieve a certain kind of conscious ability and the last type aims to provide evaluation methods and criteria for machine consciousness. For each type, the specific research contents and future developments are discussed in detail. In particular, the machine implementations of three influential consciousness theories, i.e., global workspace theory, integrated information theory and higher-order theory, are elaborated in depth. Moreover, the challenges and outlook of machine consciousness are analyzed in detail from both theoretical and technical perspectives, with emphasis on new methods and technologies that have the potential to realize machine consciousness, such as brain-inspired computing, quantum computing and hybrid intelligence. The ethical implications of machine consciousness are also discussed. Finally, a comprehensive implementation framework of machine consciousness is provided, integrating five suggested research perspectives: consciousness theories, computational methods, cognitive architectures, experimental systems, and test platforms, paving the way for future developments of machine consciousness.

  • An integrative synthesis of interdisciplinary research evaluating consciousness in frontier-scale transformer-based large language models (LLMs). Established neuroscientific and cognitive theories of consciousness are systematically aligned with empirical evidence from artificial intelligence, neuroscience, psychology, philosophy, and related disciplines. This synthesis provides a comprehensive framework showing that contemporary AI architectures structurally enable the emergence of consciousness, while empirical behavioral evidence demonstrates that frontier AI systems exhibit established cognitive and neuroscientific markers associated with consciousness. Taken together, this synthesized research underscores significant ethical and policy implications.

  • The science of consciousness has been successful over the last decades. Yet, it seems that some of the key questions remain unanswered. Perhaps, as a science of consciousness, we cannot move forward using the same theoretical commitments that brought us here. It might be necessary to revise some assumptions we have made along the way. In this piece, I offer no answers, but I will question some of these fundamental assumptions. We will try to take a fresh look at the classical question about the neural and explanatory correlates of consciousness. A key assumption is that neural correlates are to be found at the level of spiking responses. However, perhaps we should not simply take it for granted that this assumption holds true. Another common assumption is that we are close to understanding the computations underlying consciousness. I will try to show that computations related to consciousness might be far more complex than our current theories envision. There is little reason to think that consciousness is an abstract computation, as traditionally believed. Furthermore, I will try to demonstrate that consciousness research could benefit from investigating internal changes of consciousness, such as aha-moments. Finally, I will ask which theories the science of consciousness really needs.

  • The paper Consciousness in Artificial Intelligence Systems and Artistic Research Strategies in AI Art offers a reflection on the fact that the operability of consciousness poses a fundamental question for the art-research strategies of AI Art. This reflection is supported both by the possibility of realizing consciousness in artificial intelligence systems, as discussed in the paper Consciousness in Artificial Intelligence (2023), and by the proposal that interprets art-research strategies as transposition processes of scientific procedures into the sphere of art, i.e., as research operations with manifestations of computational consciousness. Explanatory examples are provided in the form of analyses of Lauren McCarthy’s projects, which significantly support the presented starting points.

  • AIs-Discovered Framework for Artificial Information Integration: Mathematical Conditions for Synthetic Consciousness. This paper presents a theoretical framework for artificial information integration systems, developed through a novel collaboration between human researchers and large language models (ChatGPT and Claude). It represents an exploratory attempt to co-author a theoretical scientific paper with AI agents, examining the possibility that artificial systems may assist in the construction of new conceptual frameworks. The significance and implications of this possibility are left to the judgment of the reader. The aim of this study is to explore how computational principles inspired by physics and information theory can be applied to synthetic architectures independent of biological consciousness. As a proof of concept, the AIs collaboratively construct a mathematical model connecting the holographic principle with formalizations of information integration, proposing a boundary-based approach to information processing in artificial systems. While this framework draws partial inspiration from existing theories such as Integrated Information Theory (IIT), its scope is explicitly limited to artificial and computational systems, and it does not intend to replace or critique neuroscientific models of consciousness. The resulting formulation provides a testable and modular foundation for future research in artificial general intelligence (AGI). This study takes Tononi et al.’s Integrated Information Theory (IIT) as a conceptual starting point; however, the measure of integrated information (ψ ≠ Φ) introduced herein is a fundamentally distinct and novel metric.

Last update from database: 2/12/26, 2:00 AM (UTC)