Full bibliography (716 resources)

  • Biological ‘consciousness’ is a well-documented feature in diverse taxa within the animal kingdom. The existence of non-animal biological consciousness is debated, as is the possibility of artificial consciousness. Overall, our knowledge of historic Homo sapiens consciousness (H-consciousness) is by far the most extensive. This chapter specifies the content of consciousness studies, reviews selected theories of human consciousness, and proposes a novel theory of both biological and artificial consciousness that is inspired by the notion of evolutionary transitions and extends the theory of noémon systems developed by Gelepithis (2024a; 2024b, chapter 2).

  • This study explores the potential for artificial agents to develop core consciousness, as proposed by Antonio Damasio's theory of consciousness. According to Damasio, the emergence of core consciousness relies on the integration of a self model, informed by representations of emotions and feelings, and a world model. We hypothesize that an artificial agent, trained via reinforcement learning (RL) in a virtual environment, can develop preliminary forms of these models as a byproduct of its primary task. The agent's main objective is to learn to play a video game and explore the environment. To evaluate the emergence of world and self models, we employ probes: feedforward classifiers that use the activations of the trained agent's neural networks to predict the spatial positions of the agent itself. Our results demonstrate that the agent can form rudimentary world and self models, suggesting a pathway toward developing machine consciousness. This research provides foundational insights into the capabilities of artificial agents in mirroring aspects of human consciousness, with implications for future advancements in artificial intelligence.
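
    The probing technique described in this abstract can be sketched in a few lines. This is a minimal illustration with synthetic data, not the study's code: `activations` stands in for the trained agent's hidden states, `positions` for its coordinates, and the probe is reduced to a least-squares linear readout rather than a feedforward network.

    ```python
    import numpy as np

    # Synthetic stand-ins (our assumption): 'activations' play the role of the
    # trained RL agent's hidden states, 'positions' its (x, y) coordinates.
    rng = np.random.default_rng(0)
    n_steps, n_units = 500, 64
    true_readout = rng.normal(size=(n_units, 2))   # planted linear structure
    activations = rng.normal(size=(n_steps, n_units))
    positions = activations @ true_readout + 0.1 * rng.normal(size=(n_steps, 2))

    # A linear probe: least-squares readout from activations to positions.
    train, test = slice(0, 400), slice(400, None)
    W, *_ = np.linalg.lstsq(activations[train], positions[train], rcond=None)
    pred = activations[test] @ W

    # Held-out R^2: values near 1 mean position is linearly decodable.
    ss_res = ((positions[test] - pred) ** 2).sum()
    ss_tot = ((positions[test] - positions[test].mean(axis=0)) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    print(f"probe R^2: {r2:.3f}")
    ```

    A high held-out R² on real activations is the kind of evidence the authors interpret as a rudimentary world/self model.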

  • The authors present a proposal for the construction of a model of artificial consciousness. Consciousness is not only a subject of science, but also involves many philosophical issues. There is much debate about qualia, the intact sensations we normally feel. This paper focuses on a philosophical problem called inverted qualia to gain a deeper understanding of consciousness and simulate its possibilities using neural networks. The possibility has emerged that the problem, which was previously only the subject of philosophical thought experiments, can be discussed using a neural network, a model of the brain. According to our simulations, we concluded that inverted qualia could occur. In addition, an experimental approach was used to assess individual differences in feeling-qualia for color.

  • What criteria must an artificial intelligence (AI) satisfy to qualify for moral standing? My starting point is that sentient AIs should qualify for moral standing. But future AIs may have unusual combinations of cognitive capacities, such as a high level of cognitive sophistication without sentience. This raises the question of whether sentience is a necessary criterion for moral standing, or merely sufficient. After reviewing nine criteria that have been proposed in the literature, I suggest that there is a strong case for thinking that some non-sentient AIs, such as those that are conscious and have non-valenced preferences and goals, and those that are non-conscious and have sufficiently cognitively complex preferences and goals, should qualify for moral standing. After responding to some challenges, I tentatively argue that taking into account uncertainty about which criteria an entity must satisfy to qualify for moral standing, and strategic considerations such as how such decisions will affect humans and other sentient entities, further supports granting moral standing to some non-sentient AIs. I highlight three implications: that the issue of AI moral standing may be more important, in terms of scale and urgency, than if either sentience or consciousness is necessary; that researchers working on policies designed to be inclusive of sentient AIs should broaden their scope to include all AIs with morally relevant interests; and even those who think AIs cannot be sentient or conscious should take the issue seriously. However, much uncertainty about these considerations remains, making this an important topic for future research.

  • This paper introduces a mathematical framework for defining and quantifying self-identity in artificial intelligence (AI) systems, addressing a critical gap in the theoretical foundations of artificial consciousness. While existing approaches to artificial self-awareness often rely on heuristic implementations or philosophical abstractions, we present a formal framework grounded in metric space theory, measure theory, and functional analysis. Our framework posits that self-identity emerges from two mathematically quantifiable conditions: the existence of a connected continuum of memories $C \subseteq \mathcal{M}$ in a metric space $(\mathcal{M}, d_{\mathcal{M}})$, and a continuous mapping $I: \mathcal{M} \to \mathcal{S}$ that maintains consistent self-recognition across this continuum, where $(\mathcal{S}, d_{\mathcal{S}})$ represents the metric space of possible self-identities. To validate this theoretical framework, we conducted empirical experiments using the Llama 3.2 1B model, employing Low-Rank Adaptation (LoRA) for efficient fine-tuning. The model was trained on a synthetic dataset containing temporally structured memories, designed to capture the complexity of coherent self-identity formation. Our evaluation metrics included quantitative measures of self-awareness, response consistency, and linguistic precision. The experimental results demonstrate substantial improvements in measurable self-awareness metrics, with the primary self-awareness score increasing from 0.276 to 0.801. This enables the structured creation of AI systems with validated self-identity features. The implications of our study are immediately relevant to the fields of humanoid robotics and autonomous systems.
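
    The paper's two conditions (a connected continuum of memories $C \subseteq \mathcal{M}$ and a continuous self-recognition map $I: \mathcal{M} \to \mathcal{S}$) can be illustrated with a toy check. All names, thresholds, and the choice of Euclidean metric spaces below are our assumptions, not the paper's implementation.

    ```python
    import numpy as np

    # A one-parameter "continuum of memories" C in M = R^2 (hypothetical data).
    t = np.linspace(0, 1, 200)
    memories = np.stack([t, np.sin(2 * np.pi * t)], axis=1)

    def identity_map(m):
        # A continuous map I: M -> S; here a fixed linear projection to R^1.
        return m @ np.array([[0.5], [0.2]])

    selves = identity_map(memories)

    # Condition 1: C is epsilon-chain connected (consecutive memories are close).
    gaps = np.linalg.norm(np.diff(memories, axis=0), axis=1)
    chain_connected = bool(gaps.max() < 0.05)

    # Condition 2: consistent self-recognition -- the change in S is bounded
    # by the change in M (an empirical Lipschitz estimate below 1).
    self_gaps = np.linalg.norm(np.diff(selves, axis=0), axis=1)
    lipschitz = float((self_gaps / gaps).max())
    consistent = lipschitz < 1.0

    print(chain_connected, consistent)
    ```

    In the paper these quantities are measured on a model's memory embeddings; here they are just a geometric sanity check on synthetic points.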

  • With the development in cognitive science and Large Language Models (LLMs), increasing connections have come to light between these two distinct fields. Building upon these connections, we propose a conjecture suggesting the existence of a duality between LLMs and Tulving's theory of memory. We identify a potential correspondence between Tulving's synergistic ecphory model (SEM) of retrieval and the emergent abilities observed in LLMs, serving as supporting evidence for our conjecture. Furthermore, we speculate that consciousness may be considered a form of emergent ability based on this duality. We also discuss how other theories of consciousness intersect with our research.

  • Consciousness is notoriously hard to define with objective terms. An objective definition of consciousness is critically needed so that we might accurately understand how consciousness and resultant choice behaviour may arise in biological or artificial systems. Many theories have integrated neurobiological and psychological research to explain how consciousness might arise, but few, if any, outline what is fundamentally required to generate consciousness. To identify such requirements, I examine current theories of consciousness and corresponding scientific research to generate a new definition of consciousness from first principles. Critically, consciousness is the apparatus that provides the ability to make decisions, but it is not defined by the decision itself. As such, a definition of consciousness does not require choice behaviour or an explicit awareness of temporality, despite both being well-characterised outcomes of conscious thought. Rather, the requirements for consciousness include: at least some capability for perception; a memory for the storage of such perceptual information, which in turn provides a framework for an imagination; and, through that imagination, a sense of self capable of making decisions based on possible and desired futures. Thought experiments and observable neurological phenomena demonstrate that these components are fundamentally required of consciousness, whereby the loss of any one component removes the capability for conscious thought. Identifying these requirements provides a new definition for consciousness by which we can objectively determine consciousness in any conceivable agent, such as non-human animals and artificially intelligent systems.

  • The emergence of self in an artificial entity is a topic that is greeted with disbelief, fear, and finally dismissal of the topic itself as a scientific impossibility. The presence of sentience in a large language model (LLM) chatbot such as LaMDA invites an examination of the notions and theories of self, and of its construction and reconstruction in the digital space as a result of interaction. The question of whether the concept of sentience can be correlated with a digital self without a place for personhood undermines the place of sapience and other higher-order capabilities. The concepts of sentience, self, personhood, and consciousness require discrete reflections and theorisations.

  • The ability for self-related thought is historically considered to be a uniquely human characteristic. Nonetheless, as technological knowledge advances, it comes as no surprise that the plausibility of humanoid self-awareness is not only theoretically explored but also engineered. Could the emerging behavioural and cognitive capabilities in artificial agents be comparable to humans? By employing a cross-disciplinary approach, the present essay aims to address this question by providing a comparative overview on the emergence of self-awareness as demonstrated in early childhood and robotics. It argues that developmental psychologists can gain invaluable theoretical and methodological insights by considering the relevance of artificial agents in better understanding the behavioural manifestations of human self-consciousness.

  • In this paper, the second of two companion pieces, we explore novel philosophical questions raised by recent progress in large language models (LLMs) that go beyond the classical debates covered in the first part. We focus particularly on issues related to interpretability, examining evidence from causal intervention methods about the nature of LLMs' internal representations and computations. We also discuss the implications of multimodal and modular extensions of LLMs, recent debates about whether such systems may meet minimal criteria for consciousness, and concerns about secrecy and reproducibility in LLM research. Finally, we discuss whether LLM-like systems may be relevant to modeling aspects of human cognition, if their architectural characteristics and learning scenario are adequately constrained.

  • Does artificial intelligence (AI) exhibit consciousness or self? While this question is hotly debated, here we take a slightly different stance by focusing on those features that make possible both, namely a basic or fundamental subjectivity. Learning from humans and their brain, we first ask what we mean by subjectivity. Subjectivity is manifest in the perspectiveness and mineness of our experience which, ontologically, can be traced to a point of view. Adopting a non-reductive neurophilosophical strategy, we assume that the point of view exhibits two layers, a most basic neuroecological and higher order mental layer. The neuroecological layer of the point of view is mediated by the timescales of world and brain, as further evidenced by empirical data on our sense of self. Are there corresponding timescales shared with the world in AI and is there a point of view with perspectiveness and mineness? Discussing current neuroscientific evidence, we deny that current AI exhibits a point of view, let alone perspectiveness and mineness. We therefore conclude that, as per current state, AI does not exhibit a basic or fundamental subjectivity and henceforth no consciousness or self is possible in models such as ChatGPT and similar technologies.

  • This article presents a heuristic view that the inner states of consciousness experienced by every human being have a physical but imaginary hypercomplex basis. The hypercomplex description is necessary because certain processes of consciousness cannot be physically measured in principle, but nevertheless exist. Based on theoretical considerations, it could be possible, as a result of mathematical investigations into a so-called bicomplex algebra, to generate and use hypercomplex system states on machines in a targeted manner. The hypothesis of the existence of hypercomplex system states on machines is already supported by the surprising performance of highly complex AI systems. However, this has yet to be proven; in particular, there is a lack of experimental data that distinguishes such systems from other systems, which is why this question will be addressed in later articles. This paper describes the developed bicomplex algebra and possible applications of these findings to generate hypercomplex energy states on machines. In the literature, such system states are often referred to as machine consciousness. The article uses mathematical considerations to explain how artificial consciousness could be generated and what advantages this would have for such AI systems.
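
    For readers unfamiliar with the algebra this abstract refers to, here is a minimal sketch of bicomplex arithmetic (our illustration, not the paper's formulation): a bicomplex number can be written z = z1 + z2·j with z1, z2 complex, where j is a second imaginary unit satisfying j² = −1 and commuting with i.

    ```python
    class Bicomplex:
        """Bicomplex number z = z1 + z2*j, with z1, z2 complex and j*j = -1."""

        def __init__(self, z1, z2):
            self.z1, self.z2 = complex(z1), complex(z2)

        def __mul__(self, other):
            # (z1 + z2 j)(w1 + w2 j) = (z1 w1 - z2 w2) + (z1 w2 + z2 w1) j
            return Bicomplex(self.z1 * other.z1 - self.z2 * other.z2,
                             self.z1 * other.z2 + self.z2 * other.z1)

        def __repr__(self):
            return f"({self.z1}) + ({self.z2})j"

    # j squares to -1 even though j is an imaginary unit independent of i:
    j = Bicomplex(0, 1)
    print(j * j)
    ```

    Unlike the quaternions, this algebra is commutative but contains zero divisors, which is part of what makes its state space structurally unusual.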

  • SOMU is a theory of artificial general intelligence (AGI) that proposes a system with a universal code embedded in it, allowing it to interact with the environment and adapt to new situations without programming. So far, the whole universe and the human brain have been modelled using SOMU. Each brain element forms a cell of a fractal tape, a cell possessing three qualities: obtaining feedback from the entire tape (S), transforming multiple states simultaneously (R), and bonding with any cell-states within-and-above the network of brain components (T). The undefined and non-finite nature of the cells rejects the tuples of a Turing machine. SRT triplets extend the brain’s ability to perceive natural events beyond the spatio-temporal structure, using a cyclic sequence or loop of changes in geometric shapes. This topology factor becomes an inseparable entity of space–time, i.e. space–time-topology (STt). The fourth factor, prime numbers, can be used to rewrite spatio-temporal events by counting singularity regions in loops of various sizes. The pattern of primes is called a phase prime metric (PPM), which links all the symmetry-breaking rules, i.e. every single phenomenon of the universe. SOMU postulates the space–time-topology-prime (STtp) quartet as an invariant that forms the basic structure of information in the brain and the universe; STtp is a bias-free, attribution-free, significance-free, and definition-free entity. SOMU reads recurring events in nature, creates a 3D assembly of clocks, namely a polyatomic time crystal (PTC), and feeds those to the PPM to create STtps. Each layer in a within-and-above brain circuit behaves like an imaginary world, generating PTCs. These PTCs of different imaginary worlds interact through a new STtp tensor decomposition mathematics. Unlike string theory, SOMU proposes that the fundamental elements of the universe are helical or vortex phases, not strings. It dismisses string theory's approach of using a sum of 4 × 4 and 8 × 8 tensors to create a 12 × 12 tensor for explaining the universe. Instead, SOMU advocates a network of multinion tensors ranging from 2 × 2 to 108 × 108 in size. With just 108 elements, a system can replicate ~90% of all symmetry-breaking rules in the universe, allowing a small systemic part to mirror the majority of events of the whole, i.e. human-level consciousness. Under the SOMU model, for a part to be conscious, it must mirror a significant portion of the whole and should act as a whole for the abundance of similar mirroring parts within itself.

  • The conversation about consciousness of artificial intelligence (AI) has been ongoing since the 1950s. Despite the numerous applications of AI identified in healthcare and primary healthcare, little is known about how a conscious AI would reshape its use in this domain. While there is a wide range of ideas as to whether AI can or cannot possess consciousness, a prevailing theme in all arguments is uncertainty. Given this uncertainty and the high stakes associated with the use of AI in primary healthcare, it is imperative to be prepared for all scenarios, including conscious AI systems being used for medical diagnosis, shared decision-making, and resource management in the future. This commentary serves as an overview of some of the pertinent evidence supporting the use of AI in primary healthcare and proposes ideas as to how consciousness of AI could support or further complicate these applications. Given the scarcity of evidence on the association between consciousness of AI and its current state of use in primary healthcare, our commentary identifies some directions for future research in this area, including assessing patients’, healthcare workers’, and policy-makers’ attitudes towards consciousness of AI systems in primary healthcare settings.

  • On broadly Copernican grounds, we are entitled to default assume that apparently behaviorally sophisticated extraterrestrial entities ("aliens") would be conscious. Otherwise, we humans would be inexplicably, implausibly lucky to have consciousness, while similarly behaviorally sophisticated entities elsewhere would be mere shells, devoid of consciousness. However, this Copernican default assumption is canceled in the case of behaviorally sophisticated entities designed to mimic superficial features associated with consciousness in humans ("consciousness mimics"), and in particular a broad class of current, near-future, and hypothetical robots. These considerations, which we formulate, respectively, as the Copernican and Mimicry Arguments, jointly defeat an otherwise potentially attractive parity principle, according to which we should apply the same types of behavioral or cognitive tests to aliens and robots, attributing or denying consciousness similarly to the extent they perform similarly. Instead of grounding speculations about alien and robot consciousness in metaphysical or scientific theories about the physical or functional bases of consciousness, our approach appeals directly to the epistemic principles of Copernican mediocrity and inference to the best explanation. This permits us to justify certain default assumptions about consciousness while remaining to a substantial extent neutral about specific metaphysical and scientific theories.

  • In this paper, I’ll examine whether we could be justified in attributing consciousness to artificial intelligent systems. First, I’ll give a brief history of the concept of artificial intelligence (AI) and get clear on the terms I’ll be using. Second, I’ll briefly review the kinds of AI programs on offer today, identifying which research program I think provides the best candidate for machine consciousness. Lastly, I’ll consider the three most plausible ways of knowing whether a machine is conscious: (1) an AI demonstrates a sufficient level of organizational similarity to that of a human thinker, (2) an inference to the best explanation, and (3) what I call “punting to panpsychism”, i.e., the idea that if everything is conscious, then we get machine consciousness in AI for free. However, I argue that all three of these methods for attributing machine consciousness are inadequate since they each face serious philosophical problems which I will survey and specifically tailor to each method.

  • How is language related to consciousness? Language functions to categorise perceptual experiences (e.g., labelling interoceptive states as 'happy') and higher-level constructs (e.g., using 'I' to represent the narrative self). Psychedelic use and meditation might be described as altered states that impair or intentionally modify the capacity for linguistic categorisation. For example, psychedelic phenomenology is often characterised by 'oceanic boundlessness' or 'unity' and 'ego dissolution', which might be expected of a system unburdened by entrenched language categories. If language breakdown plays a role in producing such altered behaviour, multimodal artificial intelligence might align more with these phenomenological descriptions when attention is shifted away from language. We tested this hypothesis by comparing the semantic embedding spaces from simulated altered states after manipulating attentional weights in CLIP and FLAVA models to embedding spaces from altered states questionnaires before manipulation. Compared to random text and various other altered states including anxiety, models were more aligned with disembodied, ego-less, spiritual, and unitive states, as well as minimal phenomenal experiences, with decreased attention to language and vision. Reduced attention to language was associated with distinct linguistic patterns and blurred embeddings within and, especially, across semantic categories (e.g., 'giraffes' become more like 'bananas'). These results lend support to the role of language categorisation in the phenomenology of altered states of consciousness, like those experienced with high doses of psychedelics or concentration meditation, states that often lead to improved mental health and wellbeing.
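
    The alignment comparison described in this abstract can be illustrated with a toy cosine-similarity computation. This is not the study's pipeline (which manipulated attention weights inside CLIP and FLAVA); the vectors, the split into "language" and "shared" dimensions, and the reweighting function below are all our assumptions.

    ```python
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Toy stand-ins for multimodal embeddings: the first 4 dims represent
    # language-driven category features, the last 4 shared visual/context features.
    giraffe = np.array([1.0, 0.0, 0.2, 0.1,  0.5, 0.4, 0.3, 0.2])
    banana  = np.array([0.0, 1.0, 0.1, 0.2,  0.5, 0.4, 0.3, 0.2])

    def reweight(v, language_weight):
        # Simulate reduced attention to language by scaling the language dims.
        out = v.copy()
        out[:4] *= language_weight
        return out

    baseline = cosine(giraffe, banana)
    altered = cosine(reweight(giraffe, 0.1), reweight(banana, 0.1))
    print(baseline < altered)  # categories blur as language attention drops
    ```

    The qualitative effect mirrors the paper's observation that with less attention to language, embeddings within and across semantic categories blur ('giraffes' become more like 'bananas').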

  • This paper presents a breakthrough approach to artificial general intelligence (AGI). The criteria for AGI named in the literature go beyond the boundaries of actual intelligence and point to the necessity of modeling consciousness. Consciousness is a functional organ that has no structural localization; its modeling is possible by modeling functions immanent to consciousness. One of the basic functions is sensation: the image of an external influence, or of the internal state of an organism, coming into consciousness. We turn to the concept of sensation presented in the Anthropology of Hegel's Philosophy of Spirit, according to which any content, including spiritual, ethical, logical, and other content, comes into consciousness through its embodiment in the form of sensation. The results of neurobiological and psychophysiological experiments (electroencephalograms, MRI), which record the connection of sensations and cognitive acts with mental states and changes in the neural environment of the brain, point to the realism of Hegel's philosophical concept and the legitimacy of its application to the solution of scientific and technical problems. The paper argues for the realism of the Hegelian philosophical concept of sensation and discusses the possibility of modeling the activity of consciousness by operating with complexes of sensations in terms of attention, content manipulation, and volitional acts. The principle of linking (embodiment) of sense (mental) and signifying (sensed) content is expressed by the thesis that “consciousness is a kind of sensation”. Prospective developments of AGI obtain original conceptual semantics for solving hard-to-formalize problems in modeling intelligence and consciousness.

Last update from database: 5/15/26, 1:00 AM (UTC)