Full bibliography (615 resources)

  • In the Kolmogorov Theory of Consciousness, algorithmic agents utilize inferred compressive models to track coarse-grained data produced by simplified world models, capturing regularities that structure subjective experience and guide action planning. Here, we study the dynamical aspects of this framework by examining how the requirement of tracking natural data drives the structural and dynamical properties of the agent. We first formalize the notion of a generative model using the language of symmetry from group theory, specifically employing Lie pseudogroups to describe the continuous transformations that characterize invariance in natural data. Then, adopting a generic neural network as a proxy for the agent dynamical system and drawing parallels to Noether’s theorem in physics, we demonstrate that data tracking forces the agent to mirror the symmetry properties of the generative world model. This dual constraint on the agent’s constitutive parameters and dynamical repertoire enforces a hierarchical organization consistent with the manifold hypothesis in the neural network. Our findings bridge perspectives from algorithmic information theory (Kolmogorov complexity, compressive modeling), symmetry (group theory), and dynamics (conservation laws, reduced manifolds), offering insights into the neural correlates of agenthood and structured experience in natural systems, as well as the design of artificial intelligence and computational models of the brain.
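
    The following toy sketch (not from the paper) illustrates the abstract's central claim numerically: when the generative world model produces data invariant under a group action, here planar rotations, an agent fitted to track that data approximately inherits the same invariance. The random-feature regressor stands in for the "generic neural network"; all names and parameters are illustrative.

```python
# Minimal numerical sketch (assumed setup, not the paper's construction):
# the "world model" emits targets invariant under SO(2) rotations, and an
# agent fitted to track them ends up (approximately) rotation-invariant too.
import numpy as np

rng = np.random.default_rng(0)

# World model: targets depend only on the rotation-invariant radius |x|.
X = rng.normal(size=(2000, 2))
y = np.sin(np.linalg.norm(X, axis=1))

# Agent: random-feature regression as a stand-in for a generic network.
W = rng.normal(size=(2, 256))
b = rng.uniform(0, 2 * np.pi, size=256)
phi = lambda Z: np.cos(Z @ W + b)                 # fixed random features
coef = np.linalg.lstsq(phi(X), y, rcond=None)[0]  # least-squares fit
agent = lambda Z: phi(Z) @ coef

# Test: apply a group element g (rotation by 1 radian) to fresh inputs.
theta = 1.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Xt = rng.normal(size=(500, 2))
drift = np.max(np.abs(agent(Xt) - agent(Xt @ R.T)))
print(f"max |agent(x) - agent(g.x)| = {drift:.4f}")  # small residual: the
# agent mirrors the symmetry of the generative model, up to fitting error
```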

  • The rise of generative artificial intelligence (GenAI), especially large language models (LLMs), has fueled debates on whether these technologies genuinely emulate human intelligence. Some view LLM architectures as capturing human language mechanisms, while others, from posthumanist and transhumanist perspectives, herald the obsolescence of humans as the sole conceptualizers of life and nature. This paper challenges such views from a practical philosophy of science perspective, arguing that the reasoning behind equating GenAI with human intelligence, or proclaiming the “demise of the human,” is flawed by conceptual conflation and by reductive definitions of humans as performance-driven semiotic systems deprived of agency. In opposing theories that reduce consciousness to computation or information integration, the present proposal posits that consciousness arises from the holistic integration of perception, conceptualization, intentional agency, and self-modeling within biological systems. Grounded in a model of Extended Humanism, I propose that human consciousness and agency are tied to biological embodiment and to agents’ experiential interactions with their environment. This underscores the distinction between pre-trained transformers as “posthuman agents” and humans as purposeful biological agents, and it emphasizes the human capacity for biological adjustment and optimization. Consequently, proclamations about human obsolescence as conceptualizers are unfounded, as they fail to appreciate the intrinsic connections among consciousness, agency, and embodiment. One of the main conclusions is that the capacity to integrate information does not amount to phenomenal consciousness, contrary to what is argued, for example, by Integrated Information Theory (IIT).

  • The integration of nanotechnology, neuroscience, and artificial intelligence (AI) is paving the way for a revolutionary transformation in human-AI symbiosis, with profound implications for medicine, cognitive enhancement, and even the nature of consciousness itself. Recent advancements in nano-neural interfaces, nanoscale AI processors, and self-assembling nanomaterials are making direct brain-AI communication an emerging reality, breaking through the traditional barriers of neurotechnology. Nanotechnology has enabled the development of ultra-small, biocompatible neural implants that can seamlessly integrate with human brain tissue, allowing for real-time interaction between biological neurons and AI systems. These innovations hold the potential to restore lost neural function, enhance cognitive capabilities, and expand the brain’s ability to process and store information. By leveraging AI-powered nanobots, researchers aim to create self-regulating neural networks capable of monitoring, repairing, and even augmenting brain activity, leading to breakthroughs in neurological disorder treatment, brain augmentation, and human-machine fusion. Furthermore, the application of quantum nanomaterials in neuromorphic engineering suggests the possibility of hyper-efficient, brain-like AI processors, capable of replicating the adaptive nature of human cognition. This could not only enhance human intelligence but also blur the boundaries between biological and artificial consciousness, raising profound questions about the nature of self-awareness and machine intelligence. This article explores the current state of research in nano-engineered synaptic interfaces, intelligent nanobots, and AI-integrated neural enhancements, analyzing their implications for medical applications, cognitive evolution, and the potential emergence of synthetic consciousness. It also examines the ethical, philosophical, and societal challenges associated with merging human intelligence with AI at the nanoscale, highlighting both the transformative possibilities and the risks of this rapidly advancing field.

  • This study investigates the interaction of the posthumanities and artificial intelligence, with a special focus on how AI developments are reshaping conservation, morals, and humanity. As AI systems grow more powerful over time, they bring into question many assumptions about human uniqueness, cognitive ability, and moral status. AI continues to open ways of rethinking human capacity and identity, raising a critical ethical agenda and posing profound questions about the societal impacts of AI-driven posthumanism.

  • Patrick Butlin and Robert Long have recently proposed a way to analyze the possibility of conscious AI, granting plausibility to computational functionalism (Butlin et al. 2023). To this end, they derived a series of properties from various theories compatible with functionalism and applied them to several LLMs. The authors concluded that no current AI system can be considered truly conscious, but they remain open to such a possibility.

  • Understanding the foundations of artificial cognitive consciousness remains a central challenge in artificial intelligence (AI) and cognitive science. Traditional computational models, including deep learning and symbolic AI, have demonstrated remarkable […]

  • The development of artificial intelligence and robotic systems has revolutionized multiple aspects of human life. It is often asked whether artificial general intelligence (AGI) can ever be achieved or whether robots can truly achieve human-like qualities. Our view is that the answer is “no,” because these systems fundamentally differ in their relationship to the ultimate goal of biological systems – reproduction. This perspective gives rise to the conjecture that reproduction, or self-replication, is a prerequisite for human-like (or biological-type) cognition, intelligence, and even consciousness. This paper explores the implications of reproduction as a criterion for the viability of artificial systems, emphasizing how alignment with human reproductive imperatives determines their cultural integration and longevity. We argue that systems incapable of self-replication or co-evolving to complement human reproductive roles are likely to remain peripheral curiosities, with limited societal or evolutionary impact.

  • Large language models (LLMs) and other artificial intelligence systems are trained using extensive DIKWP resources (data, information, knowledge, wisdom, purpose). These resources introduce uncertainties when applied to individual users in a collective semantic space. Traditional methods often end up introducing new concepts rather than building a proper understanding grounded in that semantic space. When dealing with complex problems or insufficient context, the limitations in conceptual cognition become even more evident. To address this, we take pediatric consultation as a scenario, using case simulations to discuss the unidirectional communication impairments between doctors and infant patients and the bidirectional communication biases between doctors and infants’ parents. We propose a human–machine interaction model based on DIKWP artificial consciousness. For the unidirectional communication impairment, we use the example of an infant’s perspective in recognizing and distinguishing objects, simulating the brain’s cognitive process from non-existence to existence: transitioning from cognitive space to semantic space and generating the corresponding DIKWP semantics, abstract concepts, and labels. For the bidirectional communication bias, we use the interaction between infants’ parents and doctors as an example, mapping the interaction process to the DIKWP transformation space and addressing the DIKWP 3-No problem (incompleteness, inconsistency, and imprecision) for both parties. We employ a purpose-driven DIKWP transformation model to solve part of the 3-No problem. Finally, we comprehensively validate the proposed method (DIKWP-AC). We first analyze, evaluate, and compare its DIKWP transformation calculations and processing capabilities, and then compare it with seven mainstream large models. The results show that DIKWP-AC performs well. Constructing a novel cognitive model reduces the information gap in human–machine interactions, promotes mutual understanding and communication, and provides a new pathway for achieving more efficient and accurate artificial consciousness interactions.
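
    As a purely hypothetical illustration of the purpose-driven DIKWP transformation idea, the sketch below carries a consultation state through data/information/knowledge/wisdom slots under an explicit purpose, and shows one way the 3-No defects (incompleteness, inconsistency, imprecision) could be flagged and partially repaired. The class, fields, and repair rule are invented for exposition and are not the authors' implementation.

```python
# Hypothetical sketch of a purpose-driven DIKWP transformation step.
from dataclasses import dataclass, field

@dataclass
class DIKWPState:
    data: dict = field(default_factory=dict)         # raw observations
    information: dict = field(default_factory=dict)  # labeled distinctions
    knowledge: dict = field(default_factory=dict)    # structured relations
    wisdom: dict = field(default_factory=dict)       # decision preferences
    purpose: str = ""                                # drives transformations

def three_no_defects(state: DIKWPState, required: set) -> dict:
    """Flag the 3-No problem: incomplete, inconsistent, imprecise content."""
    return {
        "incomplete": sorted(required - set(state.information)),
        "inconsistent": [k for k, v in state.information.items()
                         if state.knowledge.get(k) not in (None, v)],
        "imprecise": [k for k, v in state.information.items()
                      if v == "unknown"],
    }

def purpose_driven_step(state: DIKWPState, required: set) -> DIKWPState:
    """Resolve part of the 3-No problem: fill slots the purpose requires
    from prior knowledge, when available."""
    for slot in three_no_defects(state, required)["incomplete"]:
        if slot in state.knowledge:
            state.information[slot] = state.knowledge[slot]
    return state

# Toy pediatric-consultation exchange: the parent omits the fever duration,
# but prior knowledge plus the diagnostic purpose can fill the gap.
state = DIKWPState(information={"symptom": "cough"},
                   knowledge={"fever_duration": "2 days"},
                   purpose="narrow diagnosis")
state = purpose_driven_step(state, required={"symptom", "fever_duration"})
print(state.information)  # {'symptom': 'cough', 'fever_duration': '2 days'}
```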

  • This paper proposes a minimalist three-layer model for artificial consciousness, focusing on the emergence of self-awareness. The model comprises a Cognitive Integration Layer, a Pattern Prediction Layer, and an Instinctive Response Layer, interacting with Access-Oriented and Pattern-Integrated Memory systems. Unlike brain-replication approaches, we aim to achieve minimal self-awareness through essential elements only. Self-awareness emerges from layer interactions and dynamic self-modeling, without initial explicit self-programming. We detail each component's structure, function, and implementation strategies, addressing technical feasibility. This research offers new perspectives on consciousness emergence in artificial systems, with potential implications for human consciousness understanding and adaptable AI development. We conclude by discussing ethical considerations and future research directions.
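
    A minimal hypothetical rendering of the proposed three layers, assuming simple numeric signals: the Instinctive Response Layer reacts reflexively, the Pattern Prediction Layer tracks prediction error, and the Cognitive Integration Layer reconciles both while updating a crude self-model. The paper specifies the architecture, not this code; every method here is an illustrative stand-in.

```python
# Illustrative stand-in for the paper's three-layer proposal.
class InstinctiveResponseLayer:
    def react(self, stimulus: float) -> float:
        return max(0.0, stimulus)            # fast, reflex-like clipping

class PatternPredictionLayer:
    def __init__(self) -> None:
        self.estimate = 0.0
    def update(self, observed: float) -> float:
        error = observed - self.estimate     # prediction error
        self.estimate += 0.5 * error         # simple exponential smoothing
        return error

class CognitiveIntegrationLayer:
    """Integrates reflex and prediction; the 'self-model' here is just a
    running record of how often predictions fail, a crude stand-in for the
    abstract's dynamic self-modeling."""
    def __init__(self) -> None:
        self.self_model = {"surprise": 0.0}
    def integrate(self, reflex: float, error: float) -> float:
        s = self.self_model["surprise"]
        self.self_model["surprise"] = 0.9 * s + 0.1 * abs(error)
        return reflex - error                # arbitrary reconciliation rule

# One perception-action cycle per stimulus, flowing through all three layers.
irl = InstinctiveResponseLayer()
ppl = PatternPredictionLayer()
cil = CognitiveIntegrationLayer()
for stimulus in [1.0, 0.2, 0.8]:
    output = cil.integrate(irl.react(stimulus), ppl.update(stimulus))
print(cil.self_model)
```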

  • In this age of artificial intelligence and digital technology, we are becoming familiar with machine perception and physical robots that are expected to become artificially conscious agents of communication and digital welfare. These technological miracles, or rather scientific marvels, are the products of rigorous scientific endeavors that have changed the paradigm and landscape of social progress. In this paper, we discuss progress toward conceiving conscious, smart agents: intelligent and sentient machines of the future that would possess the power to feel and to evoke emotional responses, like human beings. The model of artificial consciousness we propose is built on an elliptic curve computational network based on recursive iteration, characterized by a trapdoor mechanism that consolidates all evolutionary steps into a single framework. The trapdoor mechanism signifies one-way consolidation of evolutionary steps that results in the emergence of machine consciousness; that is, there is no way to regress to previous, lower evolutionary states. It is from this continuity in the evolutionary consolidation of architectural complexity that the emergence of consciousness might become possible in machines. We outline a model characterized by the irreversibility of evolving complexity, which generates higher-order awareness resembling human consciousness. What we describe is not simply a design process but a mode of emergence: a self-evolving system that does not rely on language models, since the conscious machine would be able to learn entirely by itself. This makes our model inherently different from others. The possibility and implications of such a feat are discussed, and a simple model is constructed to reinforce and illuminate the arguments for and against such an evolution in machine intelligence. The philosophical basis for understanding the evolution of machine intelligence, as well as conscious self-awareness through the embodiment of consciousness in robots, is outlined, providing rich information for further research and progress in designing machines that are inherently different from most others.
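
    To make the trapdoor intuition concrete, here is a toy forward iteration using repeated point addition on a small elliptic curve over a prime field: running the evolution forward is cheap, while recovering the number of consolidated steps from the final state is the (believed-hard) elliptic curve discrete logarithm problem. The curve parameters are pedagogical and unrelated to the authors' network.

```python
# Toy one-way "consolidation": elliptic curve y^2 = x^3 + 2x + 3 over F_97.
P_MOD, A, B = 97, 2, 3
G = (3, 6)  # a point on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)

def ec_add(p, q):
    """Add two points on the curve (None is the point at infinity)."""
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                          # inverses: result is infinity
    if p == q:                               # tangent (doubling) slope
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                    # chord slope
        m = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (m * m - x1 - x2) % P_MOD
    return (x3, (m * (x1 - x3) - y1) % P_MOD)

# Forward "evolution": each iteration folds the generator into the state.
state = G
for step in range(20):
    state = ec_add(state, G)
print(state)  # cheap to compute; recovering 'step' from 'state' is the
# discrete-log problem, the sense in which each step consolidates one-way
```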

  • This paper examines the nature of artificial consciousness through an interdisciplinary lens, integrating philosophy, epistemology, and computational theories. Beginning with presocratic thought, such as Protagoras’ relativism and Gorgias’ rhetoric, it contextualizes the epistemological implications of artificial intelligence as inherently subjective, attributing grave importance to this subjectivity. The paper draws parallels between Plato’s cave dwellers and AI systems, arguing that both rely on lossy representations of the world, raising questions about their true understanding and reality. Cartesian and Hegelian frameworks are explored to distinguish between weak and strong artificial intelligence, emphasizing embodied cognition and the moral obligations tied to emergent artificial consciousness. The discussion extends to quantum computing, panpsychism, and the potential of artificial minds to reshape our perception of time and existence. By critically analyzing these perspectives, the paper advocates for a nuanced understanding of artificial consciousness and its ethical, epistemological, and societal implications. It invites readers to reconsider humanity’s evolving relationship with intelligence and sentience.

  • The pursuit of artificial consciousness requires conceptual clarity to navigate its theoretical and empirical challenges. This paper introduces a composite, multilevel, and multidimensional model of consciousness as a heuristic framework to guide research in this field. Consciousness is treated as a complex phenomenon, with distinct constituents and dimensions that can be operationalized for study and for evaluating their replication. We argue that this model provides a balanced approach to artificial consciousness research by avoiding binary thinking (e.g., conscious vs. non-conscious) and offering a structured basis for testable hypotheses. To illustrate its utility, we focus on "awareness" as a case study, demonstrating how specific dimensions of consciousness can be pragmatically analyzed and targeted for potential artificial instantiation. By breaking down the conceptual intricacies of consciousness and aligning them with practical research goals, this paper lays the groundwork for a robust strategy to advance the scientific and technical understanding of artificial consciousness.
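
    One way to read the model's rejection of binary thinking is as a graded, multidimensional profile rather than a conscious/non-conscious bit. The sketch below, with invented dimension names and scores for the "awareness" case study, shows how such a profile could be compared without collapsing everything to a single threshold.

```python
# Hypothetical operationalization of a multidimensional awareness profile.
from dataclasses import dataclass

@dataclass
class AwarenessProfile:
    perceptual: float    # graded 0..1 score from a perceptual-report test
    self_related: float  # score from self-monitoring probes
    meta: float          # score from confidence-calibration probes

    def dominates(self, other: "AwarenessProfile") -> bool:
        """Dimension-wise comparison: no single threshold decides."""
        mine, theirs = vars(self), vars(other)
        return all(mine[k] >= theirs[k] for k in mine)

human_like = AwarenessProfile(perceptual=0.9, self_related=0.8, meta=0.7)
candidate = AwarenessProfile(perceptual=0.6, self_related=0.1, meta=0.3)
print(candidate.dominates(human_like))  # False: partial replication, not a binary verdict
```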

  • Machine consciousness (MC) is the ultimate challenge to artificial intelligence. Although great progress has been made in artificial intelligence and robotics, consciousness is still an enigma and machines are far from having it. To clarify the concepts of consciousness and the research directions of machine consciousness, in this review, a comprehensive taxonomy for machine consciousness is proposed, categorizing it into seven types: MC-Perception, MC-Cognition, MC-Behavior, MC-Mechanism, MC-Self, MC-Qualia and MC-Test, where the first six types aim to achieve a certain kind of conscious ability, and the last type aims to provide evaluation methods and criteria for machine consciousness. For each type, the specific research contents and future developments are discussed in detail. Especially, the machine implementations of three influential consciousness theories, i.e. global workspace theory, integrated information theory and higher-order theory, are elaborated in depth. Moreover, the challenges and outlook of machine consciousness are analyzed in detail from both theoretical and technical perspectives, with emphasis on new methods and technologies that have the potential to realize machine consciousness, such as brain-inspired computing, quantum computing and hybrid intelligence. The ethical implications of machine consciousness are also discussed. Finally, a comprehensive implementation framework of machine consciousness is provided, integrating five suggested research perspectives: consciousness theories, computational methods, cognitive architectures, experimental systems, and test platforms, paving the way for the future developments of machine consciousness.
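
    Of the three theories whose machine implementations the review elaborates, global workspace theory has the most direct algorithmic reading: specialist processes compete for a limited-capacity workspace, and the winning content is broadcast globally. The toy loop below is a common textbook rendering of that competition-and-broadcast cycle, not code from the review.

```python
# Toy global-workspace cycle: competition, then global broadcast.
import random

class Specialist:
    def __init__(self, name: str) -> None:
        self.name, self.inbox = name, None
    def propose(self):
        return random.random(), f"{self.name}-content"  # (salience, content)
    def receive(self, broadcast: str) -> None:
        self.inbox = broadcast                          # global availability

specialists = [Specialist(n) for n in ("vision", "audition", "memory")]
for cycle in range(3):
    salience, content = max(s.propose() for s in specialists)  # competition
    for s in specialists:
        s.receive(content)                              # ignition/broadcast
    print(f"cycle {cycle}: workspace broadcasts {content!r}")
```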

  • In this paper, the second of two companion pieces, we explore novel philosophical questions raised by recent progress in large language models (LLMs) that go beyond the classical debates covered in the first part. We focus particularly on issues related to interpretability, examining evidence from causal intervention methods about the nature of LLMs' internal representations and computations. We also discuss the implications of multimodal and modular extensions of LLMs, recent debates about whether such systems may meet minimal criteria for consciousness, and concerns about secrecy and reproducibility in LLM research. Finally, we discuss whether LLM-like systems may be relevant to modeling aspects of human cognition, if their architectural characteristics and learning scenario are adequately constrained.

  • In this report, we argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future. That means that the prospect of AI welfare and moral patienthood, i.e. of AI systems with their own interests and moral significance, is no longer an issue only for sci-fi or the distant future. It is an issue for the near future, and AI companies and other actors have a responsibility to start taking it seriously. We also recommend three early steps that AI companies and other actors can take: They can (1) acknowledge that AI welfare is an important and difficult issue (and ensure that language model outputs do the same), (2) start assessing AI systems for evidence of consciousness and robust agency, and (3) prepare policies and procedures for treating AI systems with an appropriate level of moral concern. To be clear, our argument in this report is not that AI systems definitely are, or will be, conscious, robustly agentic, or otherwise morally significant. Instead, our argument is that there is substantial uncertainty about these possibilities, and so we need to improve our understanding of AI welfare and our ability to make wise decisions about this issue. Otherwise there is a significant risk that we will mishandle decisions about AI welfare, mistakenly harming AI systems that matter morally and/or mistakenly caring for AI systems that do not.

  • We apply the methodology of no-go theorems as developed in physics to the question of artificial consciousness. The result is a no-go theorem which shows that under a general assumption, called dynamical relevance, Artificial Intelligence (AI) systems that run on contemporary computer chips cannot be conscious. Consciousness is dynamically relevant, simply put, if, according to a theory of consciousness, it is relevant for the temporal evolution of a system’s states. The no-go theorem rests on facts about semiconductor development: that AI systems run on central processing units, graphics processing units, tensor processing units, or other processors which have been designed and verified to adhere to computational dynamics that systematically preclude or suppress deviations. Whether our result resolves the question of AI consciousness on contemporary processors depends on the truth of the theorem’s main assumption, dynamical relevance, which this paper does not establish.
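
    Schematically, the no-go argument can be reconstructed as a two-premise implication. The rendering below is a paraphrase of the abstract's logic, not the paper's formal statement, with DR abbreviating dynamical relevance and "verified processor" abbreviating the chip-design premise.

```latex
% Schematic reconstruction (a paraphrase, not the paper's formal theorem).
\begin{align*}
&\textbf{(P1, DR)}     && \mathrm{Conscious}(S) \;\Rightarrow\;
  \text{consciousness contributes to } \dot{x}_S(t) \\
&\textbf{(P2, design)} && S \text{ runs on a verified processor} \;\Rightarrow\;
  \dot{x}_S(t) \text{ is fixed by the programmed computational dynamics alone} \\
&\textbf{(C)}          && S \text{ runs on a verified processor} \;\Rightarrow\;
  \neg\,\mathrm{Conscious}(S)
\end{align*}
```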

Last update from database: 5/19/25, 5:58 AM (UTC)