Full bibliography (615 resources)

  • The quest to understand consciousness, once the purview of philosophers and theologians, is now actively pursued by scientists of many stripes. This paper studies consciousness from the perspective of theoretical computer science. It formalizes the Global Workspace Theory (GWT) originated by the cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, and others. Our major contribution lies in the precise formal definition of a Conscious Turing Machine (CTM), also called a Conscious AI. We define the CTM in the spirit of Alan Turing’s simple yet powerful definition of a computer, the Turing Machine (TM). We are not looking for a complex model of the brain nor of cognition but for a simple model of (the admittedly complex concept of) consciousness. After formally defining CTM, we give a formal definition of consciousness in CTM. We later suggest why the CTM has the feeling of consciousness. The reasonableness of the definitions and explanations can be judged by how well they agree with commonly accepted intuitive concepts of human consciousness, the range of related concepts that the model explains easily and naturally, and the extent of its agreement with scientific evidence.

  • We look at consciousness through the lens of Theoretical Computer Science, a branch of mathematics that studies computation under resource limitations, distinguishing functions that are efficiently computable from those that are not. From this perspective, we develop a formal machine model for consciousness. The model is inspired by Alan Turing's simple yet powerful model of computation and Bernard Baars' theater model of consciousness. Though extremely simple, the model (1) aligns at a high level with many of the major scientific theories of human and animal consciousness, (2) provides explanations at a high level for many phenomena associated with consciousness, (3) gives insight into how a machine can have subjective consciousness, and (4) is clearly buildable. This combination supports our claim that machine consciousness is not only plausible but inevitable.
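
    As a rough illustration of the competition-and-broadcast architecture the two entries above describe, the sketch below runs a few Global Workspace cycles: independent processors submit weighted chunks, the most salient chunk wins, and the winner is broadcast back to every processor. All names and the salience rule are illustrative assumptions, not Blum and Blum's formal CTM definitions.

        # Minimal Global Workspace sketch (illustrative only; not the CTM formalism).
        import random

        class Processor:
            """An unconscious special-purpose processor competing for the workspace."""
            def __init__(self, name):
                self.name = name
                self.heard = []                      # broadcasts received so far

            def propose(self):
                # Submit a chunk tagged with a salience weight; random here for brevity.
                return (random.random(), f"{self.name}: chunk {len(self.heard)}")

            def receive(self, chunk):
                self.heard.append(chunk)             # every processor hears the winner

        def broadcast_cycle(processors, steps=3):
            """Pick the most salient proposed chunk and broadcast it to all processors."""
            for _ in range(steps):
                salience, winner = max(p.propose() for p in processors)
                for p in processors:
                    p.receive(winner)                # the broadcast content plays the "conscious" role
                print(f"broadcast ({salience:.2f}): {winner}")

        broadcast_cycle([Processor(n) for n in ("vision", "hearing", "memory")])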

  • This article explores the development of a cognitive sense of self within artificial intelligence (AI), emphasizing the transformative potential of self-awareness in enhancing AI functionalities for sophisticated interactions and autonomous decision-making. Rooted in interdisciplinary approaches that incorporate insights from cognitive science and practical AI applications, the study investigates the mechanisms through which AI can achieve self-recognition, reflection, and continuity of identity—key attributes analogous to human consciousness. This research is pivotal for fields such as healthcare and robotics, where AI systems benefit from personalized interactions and adaptive responses to complex environments. The concept of a self-aware AI involves the ability of systems to recognize themselves as distinct entities within their operational contexts, which could significantly enhance their functionality and decision-making capabilities. Further, the study delves into the ethical dimensions introduced by the advent of self-aware AI, exploring the profound questions concerning the rights of AI entities and the responsibilities of their creators. The development of self-aware AI raises critical issues about the treatment and status of AI systems, prompting the need for comprehensive ethical frameworks to guide their development. Such frameworks are essential for ensuring that the advancement of self-aware AI aligns with societal values and promotes the well-being of all stakeholders involved.

  • The development of artificial intelligence and robotic systems has revolutionized multiple aspects of human life. It is often asked whether artificial general intelligence (AGI) can ever be achieved or whether robots can truly achieve human-like qualities. Our view is that the answer is “no,” because these systems fundamentally differ in their relationship to the ultimate goal of biological systems – reproduction. This perspective gives rise to the conjecture that reproduction, or self-replication, is a prerequisite for human-like (or biological-type) cognition, intelligence, and even consciousness. This paper explores the implications of reproduction as a criterion for the viability of artificial systems, emphasizing how alignment with human reproductive imperatives determines their cultural integration and longevity. We argue that systems incapable of self-replication or co-evolving to complement human reproductive roles are likely to remain peripheral curiosities, with limited societal or evolutionary impact.

  • The present article compares human and artificial intelligence (AI) intentionality and personhood. It focuses on the difference between “intrinsic” intentionality—the object directedness that derives from animate existence and its drive for survival, and appears most especially in human conscious activity—and a more functional notion of “intentional relation” that does not require consciousness. The present article looks at intentional relations as objective concepts that can apply equally to animate beings, robots, and AI systems. As such, large language models are best described as disembodied Cartesian egos, while humanoid robots, even with large language model brains, are still far from satisfying benchmarks of embodied personhood. While robots constructed by humans have borrowed intentionality and limited forms of objective intentional relations, in the future, robots may construct themselves. If these self-constructed robots are adaptive and can persist for multiple generations as a new kind of species, then it is reasonable to suppose that they have their own form of intrinsic intentionality, different from that of animate beings currently existing on Earth.

  • This paper explores a deep learning based robot intelligence model that enables robots to learn and reason about complex tasks. First, a network of environmental factor matrices is constructed to stimulate the learning process of the robot intelligence model; the model parameters undergo coarse and fine tuning to optimize the loss function and minimize the loss score, while the model fuses all previously known concepts to represent things it has never experienced, which requires that it generalize extensively. Second, in order to progressively develop a robot intelligence model with primary consciousness, every robot must undergo at least one to three years of special schooling to train anthropomorphic behaviour patterns, so that it can understand and process complex environmental information and make rational decisions. This work explores the potential application of deep learning based quasi-consciousness training in the field of robot intelligence models.

  • This paper proposes a foundational shift in how cognition is conceptualised. Rather than treating consciousness as a static property emerging from biological substrates, I argue that cognition is a processual configuration of energy in motion, structured by recursive dynamics. By energy, I refer to the system's capacity for transformation—measurable in physical systems as thermodynamic or computational activity, and in cognitive systems as the dynamic flow of activation and feedback. The term is not used metaphorically; it describes the recursive processes by which systems generate, sustain, and modify patterned states. This energy is not generated by the brain but expressed, constrained, and translated through its physical form. Rooted in cognitive science, systems theory, and computational logic (including Turing’s model of machine-based processes), this framework reconceives the self as a dynamic, emergent pattern rather than a fixed entity. If cognition is energy—and energy cannot be created or destroyed—then consciousness is not a substance, but a temporary, reconfigurable pattern. This model bridges biological and artificial cognition, challenges substrate-bound models of mind, and suggests new theoretical conditions for minimal selfhood, recursive trace, and machine awareness.

  • This chapter tackles the complex question of whether AI systems could become conscious, contrasting this with the enduring mystery of human consciousness. It references key thinkers such as Alan Turing, who introduced the Turing Test, and John Searle, who differentiated between strong and weak AI, with the latter simulating understanding without true awareness. While some philosophers are optimistic about deciphering consciousness, the chapter raises doubts, suggesting that AI may only create the illusion of consciousness, leaving us unable to determine whether machines experience anything at all. It critiques the anthropocentric view of consciousness, proposing that AI might develop a unique form of ‘quasi-consciousness’, much like how animals possess subjective experiences beyond human comprehension. The chapter concludes with a personal encounter with Richard Dawkins, illustrating the intensity of debate on AI and consciousness.

  • Consciousness, as a fundamental aspect of human experience, has been a subject of profound inquiry across philosophy, culture, and the rapidly evolving field of artificial intelligence (AI). This paper explores the multifaceted nature of consciousness as a nexus where these domains intersect. By examining philosophical theories of consciousness, cultural interpretations of self-awareness, and the implications of AI advancements, the study addresses the challenges of defining consciousness, its diverse cultural interpretations, and the ethical and technical questions surrounding its replication or simulation in machines. The paper argues that consciousness is not only a philosophical puzzle but also a cultural construct and a technological frontier, with significant implications for our understanding of humanity and the future of intelligent systems. Through an interdisciplinary lens, this analysis highlights the need for continued dialogue between philosophy, culture, and AI research to navigate the complexities of consciousness in an increasingly technologically driven world.

  • This paper proposes that Artificial Intelligence (AI) progresses through several overlapping generations: AI 1.0 (Information AI), AI 2.0 (Agentic AI), AI 3.0 (Physical AI), and now a speculative AI 4.0 (Conscious AI). Each of these AI generations is driven by shifting priorities among algorithms, computing power, and data. AI 1.0 ushered in breakthroughs in pattern recognition and information processing, fueling advances in computer vision, natural language processing, and recommendation systems. AI 2.0 built on these foundations through real-time decision-making in digital environments, leveraging reinforcement learning and adaptive planning for agentic AI applications. AI 3.0 extended intelligence into physical contexts, integrating robotics, autonomous vehicles, and sensor-fused control systems to act in uncertain real-world settings. Building on these developments, AI 4.0 puts forward the bold vision of self-directed AI capable of setting its own goals, orchestrating complex training regimens, and possibly exhibiting elements of machine consciousness. This paper traces the historical foundations of AI across roughly seventy years, mapping how changes in technological bottlenecks, from algorithmic innovation to high-performance computing to specialized data, have spurred each generational leap. It further highlights the ongoing synergies among AI 1.0, 2.0, 3.0, and 4.0, and explores the profound ethical, regulatory, and philosophical challenges that arise when artificial systems approach (or aspire to) human-like autonomy. Ultimately, understanding these evolutions and their interdependencies is pivotal for guiding future research, crafting responsible governance, and ensuring that AI's transformative potential benefits society as a whole.

  • The rapid rise of Large Language Models (LLMs) has sparked intense debate across multiple academic disciplines. While some argue that LLMs represent a significant step toward artificial general intelligence (AGI) or even machine consciousness (inflationary claims), others dismiss them as mere trickster artifacts lacking genuine cognitive abilities (deflationary claims). We argue that both extremes may be shaped or exacerbated by common cognitive biases, including cognitive dissonance, wishful thinking, and the illusion of depth of understanding, which distort reality to our own advantage. By showcasing how these distortions may easily emerge in both scientific and public discourse, we advocate for a measured approach, a skeptical open mind, that recognizes the cognitive abilities of LLMs as worthy of scientific investigation while remaining conservative concerning exaggerated claims regarding their cognitive status.

  • Can we make conscious machines? Some researchers believe we can, with computation: For example, Dehaene et al., concluding an article about machine consciousness, described their hypothesis as “resolutely computational” (Dehaene et al., 2017); others begin with theoretical computer science, implying that a programmed computer could be conscious (e.g., Blum and Blum, 2022). But humans are not programmed computers; indeed, Penrose has argued that conscious understanding is non-computable (e.g., Penrose, 2022). Let us imagine a shop selling conscious machines: Besides standard models, it might offer machines with “bespoke consciousness”, meaning consciousness made to a customer’s specifications. For example, a customer might request a machine with a specified repertoire of conscious experiences and with a specified relationship between input sensor signals and output motor signals. This work explores ways to make machines with bespoke consciousness. We begin by avoiding a possible bias favoring computation: As described here, bespoke consciousness need not employ programmed or algorithmic computation; it might even be analog more than digital. We also consider and compare general design approaches, with particular attention to “bottom-up” and “top-down” approaches. We suggest a schematic design for each of these approaches; each schematic design is based on a respective well-known hypothesis about biological consciousness: Our bottom-up design is based on microtubules, as suggested by Penrose and Hameroff’s orchestrated objective reduction (Orch-OR) hypothesis (Hameroff, 2022); our top-down design starts with machine-scale electrical and/or magnetic (E/M) patterns, as suggested by McFadden’s conscious electromagnetic information (cemi) field hypothesis (McFadden, 2020). Both designs can share a framework based on the customer’s request: For example, in either design, a machine can receive sense-like input signals and provide motor-like output signals as requested; between input and output, it has structure that performs non-conscious operations; some of its non-conscious events are involved in providing output signals in accordance with the requested input/output relationship, some correspond surjectively to conscious events in the requested repertoire, and some might do both. Mathematically, surjective correspondence means that each of the conscious events has at least one non-conscious event corresponding to it (Beran, 2023). Looking forward to possible implementation, we find challenges: For example, an implementation of either schematic design might begin with an appropriate initial structure. One might add variations of the initial structure to provide additional output signals or to correspond to additional parts of the repertoire. Or one might add fundamentally different structures for additional output signals or parts of the repertoire. Such variations or combinations of structures might meet or at least approximate the customer’s request. But implementations like this depend on identifying or inventing the necessary structures and then combining them; this might take a long time, and success is not guaranteed. Despite this and other challenges, we hope to improve our understanding of both biological and machine consciousness by designing and implementing machines with bespoke consciousness.
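
    The surjectivity condition quoted in this entry can be written compactly. With our notation (not the paper's), let N be the set of the machine's non-conscious events that correspond to conscious events, C the requested repertoire of conscious events, and f the correspondence; the abstract's requirement is then

        \[
          f \colon N \to C \text{ is surjective}
          \quad\Longleftrightarrow\quad
          \forall c \in C,\ \exists n \in N \text{ with } f(n) = c .
        \]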

  • Metacognition, or thinking about thinking, is key to advancing AI toward consciousness. Internal reflection is a key mechanism by which we can prevent AI systems from being sunk by their own complexity, and metacognitive frameworks are formal systems that allow an AI to think about how it thinks. Drawing on existing literature and insights from cognitive science, psychology, and computer science more broadly, we present a synthesizing discussion of this important area in AI. This study examines the practicality of theoretical models of consciousness within state-of-the-art AI frameworks like Tesla FSD, Boston Dynamics robots, Meta Cicero, Google DeepDream, and AlphaStar. It addresses the ethical and social implications of self-aware AI and provides a primer for developing conscious machines. This exploration serves as an entry point to understanding AI's path toward consciousness and its moral and social ramifications. The chapter closes with a primer for researchers and practitioners explaining how these pathways may enable AI systems to approach conscious states.

  • There is a general concern that present developments in artificial intelligence (AI) research will lead to sentient AI systems, and these may pose an existential threat to humanity. But why can sentient AI systems not benefit humanity instead? This paper endeavours to pose this question in a tractable manner. I ask whether a putative AI system will develop an altruistic or a malicious disposition towards our society; in other words, what would be the nature of its agency? Given that AI systems are being developed into formidable problem solvers, we can reasonably expect these systems to preferentially take on conscious aspects of human problem solving. I identify the relevant phenomenal aspects of agency in human problem solving. The functional aspects of conscious agency can be monitored using tools provided by functionalist theories of consciousness. A recent expert report (Butlin et al., 2023) has identified functionalist indicators of agency based on these theories. I show how to use the Integrated Information Theory (IIT) of consciousness to monitor the phenomenal nature of this agency. If we are able to monitor the agency of AI systems as they develop, then we can dissuade them from becoming a menace to society while encouraging them to be an aid.

Last update from database: 5/19/25, 5:58 AM (UTC)