Full bibliography: 704 resources
-
Objectively verifying the generative mechanism of consciousness is extremely difficult because of its subjective nature. As long as theories of consciousness focus solely on this generative mechanism, developing a theory remains challenging. We believe that broadening the theoretical scope and enhancing theoretical unification are necessary to establish a theory of consciousness. This study proposes seven questions that theories of consciousness should address: phenomena, self, causation, state, function, contents, and universality. The questions were designed to examine the functional aspects of consciousness and its applicability to system design. We then examine how our proposed Dual-Laws Model (DLM) can address these questions. Based on our theory, we anticipate two unique features of a conscious system: autonomy in constructing its own goals and cognitive decoupling from external stimuli. We contend that systems with these capabilities differ fundamentally from machines that merely follow human instructions, which makes a design theory that enables highly moral behavior indispensable.
-
The article introduces the concept of “semantic pareidolia”: our tendency to attribute consciousness, intelligence, and emotions to AI systems that lack these qualities. It examines how this psychological phenomenon leads us to perceive meaning and intentionality in statistical pattern-matching systems, much as we see faces in clouds. It analyses the converging forces intensifying this tendency: increasing digital immersion, profit-driven corporate interests, social isolation, and AI advancement. The article warns of a progression from harmless anthropomorphism to problematic AI idolatry, and calls for responsible design practices that help users maintain the critical distinction between simulation and genuine consciousness.
-
The rapid advances in the capabilities of Large Language Models (LLMs) have galvanised public and scientific debates over whether artificial systems might one day be conscious. Prevailing optimism is often grounded in computational functionalism: the assumption that consciousness is determined solely by the right pattern of information processing, independent of the physical substrate. Opposing this, biological naturalism insists that conscious experience is fundamentally dependent on the concrete physical processes of living systems. Despite the centrality of these positions to the artificial consciousness debate, there is currently no coherent framework that explains how biological computation differs from digital computation, and why this difference might matter for consciousness. Here, we argue that the absence of consciousness in artificial systems is not merely due to missing functional organisation but reflects a deeper divide between digital and biological modes of computation and the dynamico-structural dependencies of living organisms. Specifically, we propose that biological systems support conscious processing because they (i) instantiate scale-inseparable, substrate-dependent multiscale processing as a metabolic optimisation strategy, and (ii) alongside discrete computations, they perform continuous-valued computations due to the very nature of the fluidic substrate from which they are composed. These features – scale inseparability and hybrid computations – are not peripheral, but essential to the brain’s mode of computation. In light of these differences, we outline the foundational principles of a biological theory of computation and explain why current artificial intelligence systems are unlikely to replicate conscious processing as it arises in biology.
-
A Sense of Agency (SoA) is the feeling of being in control over one's own actions and their outcomes. However, people can also experience a “vicarious” SoA over the actions performed by other agents, including artificial agents. The present study aimed to understand the minimal conditions for vicarious SoA toward artificial agents. Specifically, we addressed whether vicarious SoA emerges when people have access only to the action effect (proximal and distal), i.e., when no motor action is executed. In addition, we manipulated the expectancy of the content of the distal effect of the action to check whether the proximal action effect is sufficient for the emergence of vicarious SoA, or whether this effect is due to the learned association between proximal and distal effects. In two experiments, participants performed an Intentional Binding (IB) task, where the IB effect was the behavioural measure of SoA. In the first experiment (Solo), participants judged the onset of self-generated tones, whereas in the second experiment, a new sample of participants judged the onset of tones produced by a computer via an automatically pressed button, i.e., a customized device designed to generate a keypress (proximal action effect) in the absence of an effector executing a keypress (no motor action). In both experiments, participants' neural activity was recorded via electroencephalography (EEG) to examine the N1 and P2 components as neural measures of SoA. Behavioural results across experiments showed that the IB effect always emerged, suggesting that the vicarious IB effect toward an artificial agent emerges when access to the proximal action effect is provided, even in the absence of the action itself. The neural results suggested that while individual (self) SoA seemed to partially rely on motor predictions indexed by the N1, vicarious SoA relies on later, more cognitive (although still predictive) processes indexed by the P2.
Overall, these results suggest that individual and vicarious SoA, although behaviourally manifested through a similar IB effect, might – to some extent – rely on different neural mechanisms.
-
Recent debates on artificial intelligence increasingly emphasise questions of AI consciousness and moral status, yet there remains little agreement on how such properties should be evaluated. In this paper, we argue that awareness offers a more productive and methodologically tractable alternative. We introduce a practical method for evaluating awareness across diverse systems, where awareness is understood as encompassing a system's abilities to process, store and use information in the service of goal-directed action. Central to this approach is the claim that any evaluation aiming to capture the diversity of artificial systems must be domain-sensitive, deployable at any scale, multidimensional, and enable the prediction of task performance, while generalising to the level of abilities for the sake of comparison. Given these four desiderata, we outline a structured approach to evaluating and comparing awareness profiles across artificial systems with differing architectures, scales, and operational domains. By shifting the focus from artificial consciousness to being just aware enough, this approach aims to facilitate principled assessment, support design and oversight, and enable more constructive scientific and public discourse.
-
Scientific theories of consciousness should be falsifiable and non-trivial. Recent research has given us formal tools to analyze these requirements of falsifiability and non-triviality for theories of consciousness. Surprisingly, many contemporary theories of consciousness fail to pass this bar, including theories based on causal structure but also (as I demonstrate) theories based on function. Herein, I show these requirements of falsifiability and non-triviality especially constrain the potential consciousness of contemporary Large Language Models (LLMs) because of their proximity to systems that are equivalent to LLMs in terms of input/output function; yet, for these functionally equivalent systems, there cannot be any falsifiable and non-trivial theory of consciousness that judges them conscious. This forms the basis of a disproof of contemporary LLM consciousness. I then show a positive result, which is that theories of consciousness based on (or requiring) continual learning do satisfy the stringent formal constraints for a theory of consciousness in humans. Intriguingly, this work supports a hypothesis: If continual learning is linked to consciousness in humans, the current limitations of LLMs (which do not continually learn) are intimately tied to their lack of consciousness.
-
Large Language Models (LLMs) have rapidly become a central topic in AI and cognitive science, due to their unprecedented performance in a vast array of tasks. Indeed, some even see "sparks of artificial general intelligence" in their apparently boundless faculty for conversation and reasoning. Their sophisticated emergent faculties, which were not initially anticipated by their designers, have ignited an urgent debate about whether and under which circumstances we should attribute consciousness to artificial entities in general and LLMs in particular. The current consensus, rooted in computational functionalism, proposes that consciousness should be ascribed based on a principle of computational equivalence. The objective of this opinion piece is to criticize this current approach and argue in favor of an alternative "behavioral inference principle", whereby consciousness is attributed if it is useful to explain (and predict) a given set of behavioral observations. We believe that a behavioral inference principle will provide an epistemologically valid and operationalizable criterion to assess machine consciousness.
-
This paper integrates Maharishi’s principles, including Transcendental Meditation and collective consciousness, with a learning model in the Social Internet of Things (SIoT). SIoT is the fusion of social networks and connected devices into an environment where machine learning is used to produce completely new systems with the capability to evolve or change. Although machine learning methods have transformed many fields, introducing Maharishi’s holistic and consciousness-oriented methods offers a distinct possibility to create more natural, dynamically responsive and sustainable AI systems. This research proposes a framework that connects human consciousness and machine intelligence, leveraging Maharishi’s principles to guide SIoT learning models toward technical proficiency, ethical awareness, and social responsibility. The paper starts with a summary of Maharishi’s teachings and shows their applicability in the context of contemporary technological progress. Next, the structure and operation of SIoT are described, with an emphasis on how learning algorithms work across interconnected devices. Key sections then probe the potential of Maharishi’s consciousness-based principles for machine learning scholarship through the lens of cognition models, ethical decision-making, and collective intelligence in machine networks. The research provides case studies and practical applications of this integration for improving system resilience, decision-making, and human–machine interactions. The final part of this paper addresses the challenges and opportunities of integrating Eastern philosophy with Western technological paradigms and suggests future avenues for research in this interdisciplinary field.
Through the integration of Maharishi’s principles, AI can become transformative on both counts: operating in more efficient and innovative ways while becoming increasingly aligned with human values and societal wellbeing.
-
Measuring awareness in artificial agents remains an unresolved challenge. We argue that such measurement holds untapped potential for enhancing their design, control, and effectiveness. In this paper, we propose a novel and tractable approach to measure the impact of awareness on system performance, structured around distinct dimensions of awareness – temporal, spatial, metacognitive, self and agentive. Each dimension is linked to specific capacities and tasks. Specifically, we demonstrate our approach through a swarm robotics intralogistics scenario, where we assess the influence of two dimensions of awareness – spatial and self – on the performance of the swarm in a collective transport task. Our results reveal how increased abilities along these awareness dimensions affect overall swarm efficiency. This framework represents an initial step towards quantifying awareness in, and across, artificial systems.
-
Whether artificial intelligence (AI) systems can possess consciousness is a contentious question because of the inherent challenges of defining and operationalizing subjective experience. This paper proposes a framework to reframe the question of artificial consciousness into empirically tractable tests. We introduce three evaluative criteria - S (subjective-linguistic), L (latent-emergent), and P (phenomenological-structural) - collectively termed SLP-tests, which assess whether an AI system instantiates interface representations that facilitate consciousness-like properties. Drawing on category theory, we model interface representations as mappings between relational substrates (RS) and observable behaviors, akin to specific types of abstraction layers. The SLP-tests collectively operationalize subjective experience not as an intrinsic property of physical systems but as a functional interface to a relational entity.
-
Artificial intelligence research faces a critical ethical paradox: determining whether AI systems are conscious requires experiments that may harm the very entities whose moral status remains uncertain. Recent philosophical work proposes avoiding the creation of consciousness-uncertain AI systems entirely, yet this solution faces practical limitations—we cannot guarantee such systems will not emerge, whether through explicit research or as unintended consequences of capability development. This paper addresses a gap in existing research ethics frameworks: how to conduct consciousness research on AI systems whose moral status cannot be definitively established. Existing graduated moral status frameworks assume consciousness has already been determined before assigning protections, creating a temporal ordering problem for consciousness detection research itself. Drawing from Talmudic scenario-based legal reasoning—developed specifically for entities whose status cannot be definitively established—we propose a three-tier phenomenological assessment system combined with a five-category capacity framework (Agency, Capability, Knowledge, Ethics, Reasoning). The framework provides structured protection protocols based on observable behavioral indicators while consciousness status remains fundamentally uncertain. We address three critical ethical challenges: why suffering behaviors provide particularly reliable consciousness markers, how to implement graduated consent procedures without requiring consciousness certainty, and when potentially harmful research becomes ethically justifiable given necessity and value criteria. The framework demonstrates how ancient legal wisdom combined with contemporary consciousness science can provide immediately implementable guidance for ethics committees, offering testable protection protocols that ameliorate (rather than resolve) the consciousness detection paradox while establishing foundations for long-term AI rights considerations.
-
It has been suggested we may see conscious AI systems within the next few decades. Somewhat lost in these expectations is the fact that we still do not understand the nature of consciousness in humans, and we currently have as little empirical handle on how to measure the presence or absence of subjective experience in humans as we do in AI systems. In the history of consciousness research, no behaviour or cognitive function has ever been identified as a necessary condition for consciousness. For this reason, no behavioural marker exists for scientists to identify the presence or absence of consciousness ‘from the outside’. This results in a circularity in our measurements of consciousness. The problem is that we need to make an ultimately unwarranted assumption about who or what is conscious in order to create experimental contrasts and conduct studies that will ground our decisions about who or what is conscious. Call this the Contrast Problem. Here we explicate the contrast problem, highlight some upshots of it, and consider a way forward.
-
Rapid progress in artificial intelligence (AI) capabilities has drawn fresh attention to the prospect of consciousness in AI. There is an urgent need for rigorous methods to assess AI systems for consciousness, but significant uncertainty about relevant issues in consciousness science. We present a method for assessing AI systems for consciousness that involves exploring what follows from existing or future neuroscientific theories of consciousness. Indicators derived from such theories can be used to inform credences about whether particular AI systems are conscious. This method allows us to make meaningful progress because some influential theories of consciousness, notably including computational functionalist theories, have implications for AI that can be investigated empirically.
-
We analyze the question of how phenomenal consciousness (if any) might be identified in artificial systems, with specific reference to the gaming problem (i.e., the fact that an artificial system is trained with human-generated data, so that possible behavioral and/or functional evidence of consciousness is not reliable). Our goal is to review selected illustrative approaches for advancing in this direction. We highlight the strengths and shortcomings of each approach, finally proposing a combination of different strategies as a promising path to pursue.
-
It is well known that interdisciplinary consciousness studies feature various competing hypotheses about the neural correlate(s) of consciousness (NCCs). Much contemporary work is dedicated to determining which of these hypotheses is right (or, on the weaker claim, which is to be preferred). The prevalent working assumption is that one of the competing hypotheses is correct, while the remaining hypotheses misdescribe the phenomenon in some critical manner and their associated purported empirical evidence will eventually be explained away. In contrast to this, we propose that each hypothesis—simultaneously with its competitors—may be right and its associated evidence be genuine evidence of NCCs. To account for this, we develop the multiple generator hypothesis (MGH) based on a distinction between principles and generators: the former denotes ways consciousness can be brought about, and the latter how these are implemented in physical systems. We explicate and delineate the hypothesis and give examples of aspects of consciousness studies where the MGH is applicable and relevant. Finally, to demonstrate its promise, we show that the MGH has implications which give rise to novel questions and aspects to consider for the field of consciousness studies.
-
The belief that AI is conscious is not without risk. Is the design of artificial intelligence (AI) systems that are conscious within reach? Scientists, philosophers, and the general public are divided on this question. Some believe that consciousness is an inherently biological trait specific to brains, which seems to rule out the possibility of AI consciousness. Others argue that consciousness depends only on the manipulation of information by an algorithm, whether the system performing these computations is made up of neurons, silicon, or any other physical substrate—so-called computational functionalism. Definitive answers about AI consciousness will not be attempted here; instead, two related questions are considered. One concerns how beliefs about AI consciousness are likely to evolve in the scientific community and the general public as AI continues to improve. The other regards the risks of projecting onto future AIs both the moral status and the natural goal of self-preservation that are normally associated with conscious beings.
-
Deep Reinforcement Learning (DRL) is highly effective in tackling complex environments through individual decision-making and offers a novel and powerful approach to multi-robot pathfinding (MRPF). Building on DRL principles, this paper proposes a two-layer collaborative planning framework based on group consciousness (MACCRPF). The framework addresses the unique challenges of MRPF, where robots must not only independently complete their tasks but also coordinate to avoid conflicts during execution. Specifically, the proposed two-layer group consciousness mechanism encompasses two levels. The basic-layer group consensus emphasizes real-time information sharing and local task scheduling among robots, ensuring that individual decisions are optimized through dynamic interaction and coordination. The top-layer group consensus, guided by the basic-layer consensus, incorporates group strategies and evaluation mechanisms to adaptively adjust pathfinding in complex environments. Additionally, a hierarchical reward mechanism is designed to balance the demands of the two-layer planning framework. This mechanism significantly enhances inter-robot coordination efficiency and task completion rates. Experimental results demonstrate the efficacy of our approach, achieving over 20% improvement in pathfinding success rates compared to state-of-the-art methods. Furthermore, the framework exhibits strong transferability and generalization, maintaining high efficiency across diverse environments. This method provides a technical pathway for efficient collaboration in multi-robot systems.
-
Methodological structuralism is a research program that seeks to identify neural correlates of consciousness (NCCs) by mapping phenomenal similarity relationships onto similarity relations in neural population activity. This paper discusses the potential benefits of methodological structuralism for the neurosciences of consciousness, specifically as a theory of neural content encoding. To this end, I supplement it with a metatheoretical framework concerning the relationship between content and consciousness: the two-factor interaction view. Although structuralism provides a comprehensive description of the neural encoding of content, it is inadequate for fully explaining the conscious experience of contents. The majority of current theories of consciousness posit an additional mechanism that underlies the conscious experience of content. Consequently, if structuralism is indeed correct, progress in consciousness science can be achieved by investigating the interactions between the neural mechanisms responsible for consciousness and the structures in neural population code activity that account for the structure of contents. This also has significant implications for consciousness in AI. I discuss these implications, as well as potential empirical avenues for investigating the interaction between content structures and consciousness with cutting-edge neuroscientific methodologies.