Full bibliography (716 resources)
-
In a recent article on methods for assessing artificial intelligence (AI) systems for consciousness, we argued that computational properties of internal processing should be used as indicators [1]. Commenting on our proposal, Pennartz argues that this method ‘should be supplemented with behavioural-cognitive methods’ (p. 1) because there is no consensus theory of consciousness [2]. We agree that the lack of a consensus theory of consciousness makes it more important to use every available source of evidence, but in our article, we preferred internal over behavioural assessments on the grounds that the latter can be ‘gamed’ by AI systems.
-
We propose a unified theoretical framework for cognitive subjecthood in artificial systems, integrating the Natural Criticality Hypothesis (NCH) and the Resonant Boundary Framework (RBF). We demonstrate that self-organized criticality is a necessary precondition for well-defined subjective time τ(t), and that τ(t) is a necessary precondition for the dynamic self-boundary B(t) that constitutes selfhood. This yields a hierarchical chain of necessary conditions, SOC → τ(t) → B(t) → Subjecthood, that is mathematically tractable, empirically falsifiable, and architecturally neutral. We apply this framework to Neural Computers and autoregressive LLMs, arguing that the arrow-of-time asymmetry observed in large language models is a structural signature of proto-τ(t) emergence. The framework offers a principled answer to the question: when does a computational system become a cognitive subject?
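Read as a chain of necessary conditions, the dependencies run from subjecthood back down to criticality: a system cannot be a subject without a self-boundary, cannot have a self-boundary without subjective time, and cannot have subjective time without self-organized criticality. A minimal formal gloss of this reading (ours, not the authors' notation):

    Subjecthood ⟹ B(t) well-defined ⟹ τ(t) well-defined ⟹ SOC
    equivalently (contrapositive): ¬SOC ⟹ ¬τ(t) ⟹ ¬B(t) ⟹ ¬Subjecthood

On this reading, establishing the absence of self-organized criticality in a system would suffice to rule out subjecthood, while establishing its presence licenses no positive conclusion.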
-
Objectively verifying the generative mechanism of consciousness is extremely difficult because of its subjective nature. As long as theories of consciousness focus solely on its generative mechanism, developing a theory remains challenging. We believe that broadening the theoretical scope and enhancing theoretical unification are necessary to establish a theory of consciousness. This study proposes seven questions that theories of consciousness should address: phenomena, self, causation, state, function, contents, and universality. The questions were designed to examine the functional aspects of consciousness and its applicability to system design. We then examine how our proposed Dual-Laws Model (DLM) addresses these questions. Based on our theory, we anticipate two unique features of a conscious system: autonomy in constructing its own goals and cognitive decoupling from external stimuli. We contend that systems with these capabilities differ fundamentally from machines that merely follow human instructions. This makes a design theory that enables highly moral behavior indispensable.
-
In this paper, I investigate whether metacognition — the ability to monitor, evaluate, and regulate one’s own cognitive processes and performance — can arise in non-biological systems, especially Large Language Models (LLMs). Drawing on cognitive science and philosophy of mind, I contrast embodied and enactivist accounts, which tie metacognition to biological consciousness and embodied entities, with functionalist perspectives that define it as a substrate-independent process. I argue that the absence of evidence is not evidence of impossibility and propose a functional definition of metacognition based on internal representation, monitoring, and self-regulation. Recent studies on LLMs show early functional signatures of self-monitoring, suggesting the emergence of limited operational introspection. While I do not claim that artificial metacognition has been demonstrated, I advocate an epistemically open, non-anthropocentric approach. Metacognition, I conclude, should be conceived as a functionally realizable property across different substrates, evaluated by what systems do, not what they are.
-
This paper develops a phenomenology-first approach to artificial consciousness by reframing consciousness as the subjective experience enacted through an agent’s interface with the world. We shift the methodological focus to first-person structures, modeled mathematically by categories derived from Q-networks to capture actions and phenomenological invariants. In this framework, Q-networks are conceptualized as relational interfaces encoding agent-world interaction, analogous to how the dynamical states of a computer depend on its sensory inputs, previous states, and actions. Our work provides a rigorous framework for interface consciousness to describe computational systems that embed information-processing into phenomenological structure. The approach aligns with 4E approaches to cognition by emphasizing enactive, embedded, and extended dimensions of experience. The paper thus offers a principled, relational, and phenomenological account of artificial phenomenology grounded in categorical mathematics.
-
With the rise of generative AI, Large Language Models (LLMs) are repeatedly making a deep impression with their mind-blowing performances. They appear to be able to solve all kinds of tasks for which humans need a range of socio-cognitive abilities, such as reasoning, planning, and understanding. In humans, such abilities seem to be necessarily associated with consciousness. However, this does not rule out the possibility that there could be multiple realizations of such abilities that do not necessarily require consciousness. In view of the controversial debate about what properties and abilities we can ascribe to systems based on generative AI, I shall examine the extent to which LLMs solve certain tasks in a very different way than humans do, and whether we might still be justified in ascribing agency and socio-cognitive abilities to them. To this end, I will discuss benchmarks and their appropriateness for drawing conclusions about socio-cognitive abilities or the way AI systems actually process information, addressing issues such as data contamination and robustness. Utilizing Daniel Dennett's distinction between “competence without comprehension” and “competence with comprehension”, and his idea that comprehension comes in degrees, I will investigate whether there might be socio-cognitive abilities in artificial systems that could constitute something in between. Thereby, I shall investigate the potential range of multiple realizations of socio-cognitive abilities and the general difficulties concerning justified attribution of abilities and properties (including consciousness) to AIs.
-
The article introduces the concept of “semantic pareidolia” - our tendency to attribute consciousness, intelligence, and emotions to AI systems that lack these qualities. It examines how this psychological phenomenon leads us to perceive meaning and intentionality in statistical pattern-matching systems, similar to seeing faces in clouds. It analyses the converging forces intensifying this tendency: increasing digital immersion, profit-driven corporate interests, social isolation, and AI advancement. The article warns of progression from harmless anthropomorphism to problematic AI idolatry, and calls for responsible design practices that help users maintain critical distinctions between simulation and genuine consciousness.
-
The rapid advances in the capabilities of Large Language Models (LLMs) have galvanised public and scientific debates over whether artificial systems might one day be conscious. Prevailing optimism is often grounded in computational functionalism: the assumption that consciousness is determined solely by the right pattern of information processing, independent of the physical substrate. Opposing this, biological naturalism insists that conscious experience is fundamentally dependent on the concrete physical processes of living systems. Despite the centrality of these positions to the artificial consciousness debate, there is currently no coherent framework that explains how biological computation differs from digital computation, and why this difference might matter for consciousness. Here, we argue that the absence of consciousness in artificial systems is not merely due to missing functional organisation but reflects a deeper divide between digital and biological modes of computation and the dynamico-structural dependencies of living organisms. Specifically, we propose that biological systems support conscious processing because they (i) instantiate scale-inseparable, substrate-dependent multiscale processing as a metabolic optimisation strategy, and (ii) alongside discrete computations, they perform continuous-valued computations due to the very nature of the fluidic substrate from which they are composed. These features – scale inseparability and hybrid computations – are not peripheral, but essential to the brain’s mode of computation. In light of these differences, we outline the foundational principles of a biological theory of computation and explain why current artificial intelligence systems are unlikely to replicate conscious processing as it arises in biology.
-
A Sense of Agency (SoA) is the feeling of being in control over one's own actions and their outcomes. However, people can also experience a “vicarious” SoA over the actions performed by other agents, including artificial agents. The present study aimed to understand the minimal conditions for vicarious SoA toward artificial agents. Specifically, we addressed whether vicarious SoA emerges when people have access only to the action effect (proximal and distal), i.e., when no motor action is executed. In addition, we manipulated the expectancy of the content of the distal action effect to check whether the proximal action effect is sufficient for the emergence of vicarious SoA, or whether this effect is due to the learned association between proximal and distal effects. In two experiments, participants performed an Intentional Binding (IB) task, where the IB effect was the behavioural measure of SoA. In the first experiment (Solo), participants judged the onset of self-generated tones, whereas in the second experiment, a new sample of participants judged the onset of tones produced by a computer via an automatically pressed button, i.e., a customized device designed to generate a keypress (proximal action effect) in the absence of an effector executing a keypress (no motor action). In both experiments, participants' neural activity was recorded via electroencephalography (EEG) to examine the N1 and P2 components as neural measures of SoA. Behavioural results across experiments showed that the IB effect always emerged, suggesting that a vicarious IB effect toward an artificial agent emerges when access to the proximal action effect is provided, even in the absence of the action itself. The neural results suggested that while individual (self) SoA seemed to partially rely on motor predictions indexed by the N1, vicarious SoA relies on later, more cognitive (although still predictive) processes indexed by the P2. Overall, these results suggest that individual and vicarious SoA, although behaviourally manifested through a similar IB effect, might – to some extent – rely on different neural mechanisms.
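As a concrete gloss on the behavioural measure: in tone-onset versions of the IB task, binding shows up as tones being judged to occur earlier, drawn toward their cause, relative to a comparison condition. The sketch below illustrates that computation under the assumption of a tone-only baseline condition and with made-up numbers; the study's actual design and sign conventions may differ.

    import numpy as np

    def judgment_error(judged_ms, actual_ms):
        # Per-trial error in perceived tone onset (negative = perceived earlier).
        return np.asarray(judged_ms, dtype=float) - np.asarray(actual_ms, dtype=float)

    def binding_effect(operant_judged, operant_actual, baseline_judged, baseline_actual):
        # IB effect: shift of perceived tone onset in the operant condition
        # (keypress -> tone) relative to baseline (tone alone). More negative
        # values indicate stronger binding.
        return (judgment_error(operant_judged, operant_actual).mean()
                - judgment_error(baseline_judged, baseline_actual).mean())

    # Hypothetical data (ms): operant tones judged ~40 ms early, baseline tones ~5 ms late.
    ib = binding_effect([460, 455, 470], [500, 500, 500], [505, 500, 510], [500, 500, 500])
    print(f"IB effect: {ib:.1f} ms")  # negative => binding toward the action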
-
Recent debates on artificial intelligence increasingly emphasise questions of AI consciousness and moral status, yet there remains little agreement on how such properties should be evaluated. In this paper, we argue that awareness offers a more productive and methodologically tractable alternative. We introduce a practical method for evaluating awareness across diverse systems, where awareness is understood as encompassing a system's abilities to process, store and use information in the service of goal-directed action. Central to this approach is the claim that any evaluation aiming to capture the diversity of artificial systems must be domain-sensitive, deployable at any scale, multidimensional, and enable the prediction of task performance, while generalising to the level of abilities for the sake of comparison. Given these four desiderata, we outline a structured approach to evaluating and comparing awareness profiles across artificial systems with differing architectures, scales, and operational domains. By shifting the focus from artificial consciousness to being just aware enough, this approach aims to facilitate principled assessment, support design and oversight, and enable more constructive scientific and public discourse.
-
Scientific theories of consciousness should be falsifiable and non-trivial. Recent research has given us formal tools to analyze these requirements of falsifiability and non-triviality for theories of consciousness. Surprisingly, many contemporary theories of consciousness fail to pass this bar, including theories based on causal structure but also (as I demonstrate) theories based on function. Herein, I show these requirements of falsifiability and non-triviality especially constrain the potential consciousness of contemporary Large Language Models (LLMs) because of their proximity to systems that are equivalent to LLMs in terms of input/output function; yet, for these functionally equivalent systems, there cannot be any falsifiable and non-trivial theory of consciousness that judges them conscious. This forms the basis of a disproof of contemporary LLM consciousness. I then show a positive result, which is that theories of consciousness based on (or requiring) continual learning do satisfy the stringent formal constraints for a theory of consciousness in humans. Intriguingly, this work supports a hypothesis: If continual learning is linked to consciousness in humans, the current limitations of LLMs (which do not continually learn) are intimately tied to their lack of consciousness.
-
Large Language Models (LLMs) have rapidly become a central topic in AI and cognitive science, due to their unprecedented performance in a vast array of tasks. Indeed, some even see "sparks of artificial general intelligence" in their apparently boundless faculty for conversation and reasoning. Their sophisticated emergent faculties, which were not initially anticipated by their designers, have ignited an urgent debate about whether and under which circumstances we should attribute consciousness to artificial entities in general and LLMs in particular. The current consensus, rooted in computational functionalism, proposes that consciousness should be ascribed based on a principle of computational equivalence. The objective of this opinion piece is to criticize this current approach and argue in favor of an alternative "behavioral inference principle", whereby consciousness is attributed if it is useful to explain (and predict) a given set of behavioral observations. We believe that a behavioral inference principle will provide an epistemologically valid and operationalizable criterion to assess machine consciousness.
-
This paper integrates the principles of Maharishi, including Transcendental Meditation and collective consciousness, with a learning model in the Social Internet of Things (SIoT). SIoT is the fusion of social networks and connected devices into an environment where machine learning is used to produce completely new systems with the capability to evolve or change. Although machine learning methods have transformed many fields, introducing Maharishi’s holistic and consciousness-oriented methods offers a distinct possibility to create more natural, dynamically responsive and sustainable AI systems. This research proposes a framework that connects human consciousness and machine intelligence, leveraging Maharishi’s principles to guide SIoT learning models toward technical proficiency, ethical awareness, and social responsibility. The paper starts with a summary of Maharishi’s teachings and shows their applicability in the context of contemporary technological progress. Next, the structure and operation of SIoT are described, with an emphasis on the way learning algorithms operate across interconnected devices. The potential of applying Maharishi’s consciousness-based principles to machine learning is then probed through the lens of cognition models, ethical decision-making, and collective intelligence in machine networks. The research provides case studies and practical applications of this integration aimed at improving system resilience, decision-making, and human–machine interaction. The final part of the paper addresses the challenges and opportunities of integrating Eastern philosophy with Western technological paradigms and suggests future avenues for research in this interdisciplinary field. Integrating Maharishi’s principles, the paper argues, allows AI to evolve in a doubly transformative way: becoming more efficient and innovative while also growing increasingly aligned with human values and societal wellbeing.
-
Measuring awareness in artificial agents remains an unresolved challenge. We argue that it holds untapped potential for enhancing their design, control, and effectiveness. In this paper, we propose a novel and tractable approach to measure the impact of awareness on system performance, structured around distinct dimensions of awareness – temporal, spatial, metacognitive, self and agentive. Each dimension is linked to specific capacities and tasks. Specifically, we demonstrate our approach through a swarm robotics intralogistics scenario, where we assess the influence of two dimensions of awareness – spatial and self – on the performance of the swarm in a collective transport task. Our results reveal how increased abilities along these awareness dimensions affect overall swarm efficiency. This framework represents an initial step towards quantifying awareness in, and across, artificial systems.
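As an illustration of what a dimension-wise awareness profile and the resulting performance comparison might look like in code, the sketch below uses the five dimensions named in the abstract; the profile levels, efficiency numbers, and all identifiers are hypothetical placeholders, not the paper's actual metrics or scenario parameters.

    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    DIMENSIONS = ("temporal", "spatial", "metacognitive", "self", "agentive")

    @dataclass
    class AwarenessProfile:
        # Ability level per awareness dimension (0 = absent), following the
        # dimensions named in the abstract.
        levels: Dict[str, int] = field(default_factory=lambda: {d: 0 for d in DIMENSIONS})

        def upgraded(self, changes: Dict[str, int]) -> "AwarenessProfile":
            # Return a copy of this profile with some dimensions raised.
            return AwarenessProfile({**self.levels, **changes})

    def rank_by_efficiency(runs: Dict[str, Tuple[AwarenessProfile, float]]):
        # runs: configuration name -> (profile, collective-transport efficiency).
        return sorted(runs.items(), key=lambda kv: kv[1][1], reverse=True)

    baseline = AwarenessProfile()
    spatial_self = baseline.upgraded({"spatial": 2, "self": 2})  # the two dimensions varied in the study

    # Hypothetical efficiencies (e.g. items delivered per unit time) from two runs.
    for name, (profile, eff) in rank_by_efficiency(
            {"baseline": (baseline, 0.41), "spatial+self": (spatial_self, 0.58)}):
        print(f"{name}: efficiency={eff:.2f}")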
-
Whether artificial intelligence (AI) systems can possess consciousness is a contentious question because of the inherent challenges of defining and operationalizing subjective experience. This paper proposes a framework to reframe the question of artificial consciousness into empirically tractable tests. We introduce three evaluative criteria - S (subjective-linguistic), L (latent-emergent), and P (phenomenological-structural) - collectively termed SLP-tests, which assess whether an AI system instantiates interface representations that facilitate consciousness-like properties. Drawing on category theory, we model interface representations as mappings between relational substrates (RS) and observable behaviors, akin to specific types of abstraction layers. The SLP-tests collectively operationalize subjective experience not as an intrinsic property of physical systems but as a functional interface to a relational entity.
-
Artificial intelligence research faces a critical ethical paradox: determining whether AI systems are conscious requires experiments that may harm the very entities whose moral status remains uncertain. Recent philosophical work proposes avoiding the creation of consciousness-uncertain AI systems entirely, yet this solution faces practical limitations—we cannot guarantee such systems will not emerge, whether through explicit research or as unintended consequences of capability development. This paper addresses a gap in existing research ethics frameworks: how to conduct consciousness research on AI systems whose moral status cannot be definitively established. Existing graduated moral status frameworks assume consciousness has already been determined before assigning protections, creating a temporal ordering problem for consciousness detection research itself. Drawing from Talmudic scenario-based legal reasoning—developed specifically for entities whose status cannot be definitively established—we propose a three-tier phenomenological assessment system combined with a five-category capacity framework (Agency, Capability, Knowledge, Ethics, Reasoning). The framework provides structured protection protocols based on observable behavioral indicators while consciousness status remains fundamentally uncertain. We address three critical ethical challenges: why suffering behaviors provide particularly reliable consciousness markers, how to implement graduated consent procedures without requiring consciousness certainty, and when potentially harmful research becomes ethically justifiable given necessity and value criteria. The framework demonstrates how ancient legal wisdom combined with contemporary consciousness science can provide immediately implementable guidance for ethics committees, offering testable protection protocols that ameliorate (rather than resolve) the consciousness detection paradox while establishing foundations for long-term AI rights considerations.
-
It has been suggested we may see conscious AI systems within the next few decades. Somewhat lost in these expectations is the fact that we still do not understand the nature of consciousness in humans, and we currently have as little empirical handle on how to measure the presence or absence of subjective experience in humans as we do in AI systems. In the history of consciousness research, no behaviour or cognitive function has ever been identified as a necessary condition for consciousness. For this reason, no behavioural marker exists for scientists to identify the presence or absence of consciousness ‘from the outside’. This results in a circularity in our measurements of consciousness. The problem is that we need to make an ultimately unwarranted assumption about who or what is conscious in order to create experimental contrasts and conduct studies that will ground our decisions about who or what is conscious. Call this the Contrast Problem. Here we explicate the contrast problem, highlight some upshots of it, and consider a way forward.
-
Rapid progress in artificial intelligence (AI) capabilities has drawn fresh attention to the prospect of consciousness in AI. There is an urgent need for rigorous methods to assess AI systems for consciousness, but significant uncertainty about relevant issues in consciousness science. We present a method for assessing AI systems for consciousness that involves exploring what follows from existing or future neuroscientific theories of consciousness. Indicators derived from such theories can be used to inform credences about whether particular AI systems are conscious. This method allows us to make meaningful progress because some influential theories of consciousness, notably including computational functionalist theories, have implications for AI that can be investigated empirically.