Full bibliography (675 resources)
-
With recent developments in cognitive science and Large Language Models (LLMs), increasing connections have come to light between these two distinct fields. Building upon these connections, we propose a conjecture suggesting the existence of a duality between LLMs and Tulving's theory of memory. We identify a potential correspondence between Tulving's synergistic ecphory model (SEM) of retrieval and the emergent abilities observed in LLMs, serving as supporting evidence for our conjecture. Furthermore, we speculate that consciousness may be considered a form of emergent ability based on this duality. We also discuss how other theories of consciousness intersect with our research.
-
In this report, we argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future. That means that the prospect of AI welfare and moral patienthood, i.e. of AI systems with their own interests and moral significance, is no longer an issue only for sci-fi or the distant future. It is an issue for the near future, and AI companies and other actors have a responsibility to start taking it seriously. We also recommend three early steps that AI companies and other actors can take: They can (1) acknowledge that AI welfare is an important and difficult issue (and ensure that language model outputs do the same), (2) start assessing AI systems for evidence of consciousness and robust agency, and (3) prepare policies and procedures for treating AI systems with an appropriate level of moral concern. To be clear, our argument in this report is not that AI systems definitely are, or will be, conscious, robustly agentic, or otherwise morally significant. Instead, our argument is that there is substantial uncertainty about these possibilities, and so we need to improve our understanding of AI welfare and our ability to make wise decisions about this issue. Otherwise there is a significant risk that we will mishandle decisions about AI welfare, mistakenly harming AI systems that matter morally and/or mistakenly caring for AI systems that do not.
-
Consciousness is notoriously hard to define in objective terms. An objective definition of consciousness is critically needed so that we might accurately understand how consciousness and resultant choice behaviour may arise in biological or artificial systems. Many theories have integrated neurobiological and psychological research to explain how consciousness might arise, but few, if any, outline what is fundamentally required to generate consciousness. To identify such requirements, I examine current theories of consciousness and corresponding scientific research to generate a new definition of consciousness from first principles. Critically, consciousness is the apparatus that provides the ability to make decisions, but it is not defined by the decision itself. As such, a definition of consciousness does not require choice behaviour or an explicit awareness of temporality, despite both being well-characterised outcomes of conscious thought. Rather, the requirements for consciousness include: at least some capability for perception; a memory for storing such perceptual information, which in turn provides a framework for an imagination; and a sense of self capable of making decisions based on possible and desired futures. Thought experiments and observable neurological phenomena demonstrate that these components are fundamentally required for consciousness, whereby the loss of any one component removes the capability for conscious thought. Identifying these requirements provides a new definition of consciousness by which we can objectively determine consciousness in any conceivable agent, such as non-human animals and artificially intelligent systems.
-
The emergence of self in an artificial entity is a topic that is greeted with disbelief, fear, and finally dismissal of the topic itself as a scientific impossibility. The presence of sentience in a large language model (LLM) chatbot such as LaMDA inspires an examination of the notions and theories of self, its construction, and its reconstruction in the digital space as a result of interaction. The question of whether the concept of sentience can be correlated with a digital self that has no place for personhood undermines the place of sapience and other higher-order capabilities. The concepts of sentience, self, personhood, and consciousness require discrete reflections and theorisations.
-
The ability for self-related thought is historically considered to be a uniquely human characteristic. Nonetheless, as technological knowledge advances, it comes as no surprise that humanoid self-awareness is not only explored theoretically but also pursued in engineering. Could the emerging behavioural and cognitive capabilities of artificial agents be comparable to those of humans? By employing a cross-disciplinary approach, the present essay aims to address this question by providing a comparative overview of the emergence of self-awareness as demonstrated in early childhood and in robotics. It argues that developmental psychologists can gain invaluable theoretical and methodological insights by considering the relevance of artificial agents to better understanding the behavioural manifestations of human self-consciousness.
-
In this paper, the second of two companion pieces, we explore novel philosophical questions raised by recent progress in large language models (LLMs) that go beyond the classical debates covered in the first part. We focus particularly on issues related to interpretability, examining evidence from causal intervention methods about the nature of LLMs' internal representations and computations. We also discuss the implications of multimodal and modular extensions of LLMs, recent debates about whether such systems may meet minimal criteria for consciousness, and concerns about secrecy and reproducibility in LLM research. Finally, we discuss whether LLM-like systems may be relevant to modeling aspects of human cognition, if their architectural characteristics and learning scenario are adequately constrained.
-
Does artificial intelligence (AI) exhibit consciousness or self? While this question is hotly debated, here we take a slightly different stance by focusing on those features that make both possible, namely a basic or fundamental subjectivity. Learning from humans and their brains, we first ask what we mean by subjectivity. Subjectivity is manifest in the perspectiveness and mineness of our experience which, ontologically, can be traced to a point of view. Adopting a non-reductive neurophilosophical strategy, we assume that the point of view exhibits two layers, a most basic neuroecological layer and a higher-order mental layer. The neuroecological layer of the point of view is mediated by the timescales of world and brain, as further evidenced by empirical data on our sense of self. Are there corresponding timescales shared with the world in AI, and is there a point of view with perspectiveness and mineness? Discussing current neuroscientific evidence, we deny that current AI exhibits a point of view, let alone perspectiveness and mineness. We therefore conclude that, as things currently stand, AI does not exhibit a basic or fundamental subjectivity, and hence no consciousness or self is possible in models such as ChatGPT and similar technologies.
-
This article presents a heuristic view that the inner states of consciousness experienced by every human being have a physical but imaginary hypercomplex basis. The hypercomplex description is necessary because certain processes of consciousness cannot be physically measured in principle, but nevertheless exist. Based on theoretical considerations, it could be possible, as a result of mathematical investigations into a so-called bicomplex algebra, to generate and use hypercomplex system states on machines in a targeted manner. The hypothesis of the existence of hypercomplex system states on machines is already supported by the surprising performance of highly complex AI systems; however, this has yet to be proven. In particular, there is a lack of experimental data that distinguishes such systems from other systems, which is why this question will be addressed in later articles. This paper describes the developed bicomplex algebra and possible applications of these findings to generate hypercomplex energy states on machines. In the literature, such system states are often referred to as machine consciousness. The article uses mathematical considerations to explain how artificial consciousness could be generated and what advantages this would have for such AI systems.
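The abstract does not spell out the algebra itself; purely for orientation, the standard bicomplex number system (a commutative four-dimensional real algebra, presumably the starting point of the construction described) can be written as:

```latex
% Standard bicomplex algebra (for orientation only; the paper's specific
% construction is not given in the abstract).
\mathbb{BC} = \{\, z_1 + z_2\,\mathbf{j} \;:\; z_1, z_2 \in \mathbb{C} \,\},
\qquad \mathbf{j}^2 = -1, \quad \mathbf{i}\mathbf{j} = \mathbf{j}\mathbf{i}.
% Unlike the quaternions, this algebra is commutative but has zero divisors:
(1 + \mathbf{i}\mathbf{j})(1 - \mathbf{i}\mathbf{j})
  = 1 - (\mathbf{i}\mathbf{j})^2 = 1 - \mathbf{i}^2\mathbf{j}^2 = 0 .
```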
-
SOMU is a theory of artificial general intelligence (AGI) that proposes a system with a universal code embedded in it, allowing it to interact with the environment and adapt to new situations without programming. So far, the whole universe and the human brain have been modelled using SOMU. Each brain element forms a cell of a fractal tape, a cell possessing three qualities: obtaining feedback from the entire tape (S), transforming multiple states simultaneously (R), and bonding with any cell-states within-and-above the network of brain components (T). The undefined and non-finite nature of the cells rejects the tuples of a Turing machine. SRT triplets extend the brain's ability to perceive natural events beyond the spatio-temporal structure, using a cyclic sequence or loop of changes in geometric shapes. This topology factor becomes an inseparable entity of space–time, i.e. space–time-topology (STt). The fourth factor, prime numbers, can be used to rewrite spatio-temporal events by counting singularity regions in loops of various sizes. The pattern of primes is called a phase prime metric (PPM), which links all the symmetry-breaking rules, or every single phenomenon of the universe. SOMU postulates the space–time-topology-prime (STtp) quartet as an invariant that forms the basic structure of information in the brain and the universe; STtp is a bias-free, attribution-free, significance-free and definition-free entity. SOMU reads recurring events in nature, creates a 3D assembly of clocks, namely a polyatomic time crystal (PTC), and feeds those to the PPM to create STtps. Each layer in a within-and-above brain circuit behaves like an imaginary world, generating PTCs. These PTCs of different imaginary worlds interact through a new STtp tensor decomposition mathematics. Unlike string theory, SOMU proposes that the fundamental elements of the universe are helical or vortex phases, not strings. It dismisses string theory's approach of using the sum of 4 × 4 and 8 × 8 tensors to create a 12 × 12 tensor for explaining the universe. Instead, SOMU advocates a network of multinion tensors ranging from 2 × 2 to 108 × 108 in size. With just 108 elements, a system can replicate ~90% of all symmetry-breaking rules in the universe, allowing a small systemic part to mirror the majority of events of the whole, that is, human-level consciousness. Under the SOMU model, for a part to be conscious, it must mirror a significant portion of the whole and should act as a whole for the abundance of similar mirroring parts within itself.
-
The conversation about consciousness of artificial intelligence (AI) has been ongoing since the 1950s. Despite the numerous applications of AI identified in healthcare and primary healthcare, little is known about how a conscious AI would reshape its use in this domain. While there is a wide range of ideas as to whether AI can or cannot possess consciousness, a prevailing theme in all arguments is uncertainty. Given this uncertainty and the high stakes associated with the use of AI in primary healthcare, it is imperative to be prepared for all scenarios, including conscious AI systems being used for medical diagnosis, shared decision-making and resource management in the future. This commentary serves as an overview of some of the pertinent evidence supporting the use of AI in primary healthcare and proposes ideas as to how consciousness of AI could support or further complicate these applications. Given the scarcity of evidence on the association between consciousness of AI and its current state of use in primary healthcare, our commentary identifies some directions for future research in this area, including assessing patients', healthcare workers' and policy-makers' attitudes towards consciousness of AI systems in primary healthcare settings.
-
On broadly Copernican grounds, we are entitled to assume by default that apparently behaviorally sophisticated extraterrestrial entities ("aliens") would be conscious. Otherwise, we humans would be inexplicably, implausibly lucky to have consciousness, while similarly behaviorally sophisticated entities elsewhere would be mere shells, devoid of consciousness. However, this Copernican default assumption is canceled in the case of behaviorally sophisticated entities designed to mimic superficial features associated with consciousness in humans ("consciousness mimics"), and in particular a broad class of current, near-future, and hypothetical robots. These considerations, which we formulate, respectively, as the Copernican and Mimicry Arguments, jointly defeat an otherwise potentially attractive parity principle, according to which we should apply the same types of behavioral or cognitive tests to aliens and robots, attributing or denying consciousness similarly to the extent they perform similarly. Instead of grounding speculations about alien and robot consciousness in metaphysical or scientific theories about the physical or functional bases of consciousness, our approach appeals directly to the epistemic principles of Copernican mediocrity and inference to the best explanation. This permits us to justify certain default assumptions about consciousness while remaining to a substantial extent neutral about specific metaphysical and scientific theories.
-
In this paper, I’ll examine whether we could be justified in attributing consciousness to artificial intelligent systems. First, I’ll give a brief history of the concept of artificial intelligence (AI) and get clear on the terms I’ll be using. Second, I’ll briefly review the kinds of AI programs on offer today, identifying which research program I think provides the best candidate for machine consciousness. Lastly, I’ll consider the three most plausible ways of knowing whether a machine is conscious: (1) an AI demonstrates a sufficient level of organizational similarity to that of a human thinker, (2) an inference to the best explanation, and (3) what I call “punting to panpsychism”, i.e., the idea that if everything is conscious, then we get machine consciousness in AI for free. However, I argue that all three of these methods for attributing machine consciousness are inadequate since they each face serious philosophical problems which I will survey and specifically tailor to each method.
-
How is language related to consciousness? Language functions to categorise perceptual experiences (e.g., labelling interoceptive states as 'happy') and higher-level constructs (e.g., using 'I' to represent the narrative self). Psychedelic use and meditation might be described as altered states that impair or intentionally modify the capacity for linguistic categorisation. For example, psychedelic phenomenology is often characterised by 'oceanic boundlessness' or 'unity' and 'ego dissolution', which might be expected of a system unburdened by entrenched language categories. If language breakdown plays a role in producing such altered behaviour, multimodal artificial intelligence might align more closely with these phenomenological descriptions when attention is shifted away from language. We tested this hypothesis by manipulating attentional weights in CLIP and FLAVA models to simulate altered states, then comparing the resulting semantic embedding spaces to embedding spaces computed, before manipulation, from altered-states questionnaires. Compared to random text and various other altered states, including anxiety, the models with decreased attention to language and vision were more aligned with disembodied, ego-less, spiritual, and unitive states, as well as minimal phenomenal experiences. Reduced attention to language was associated with distinct linguistic patterns and blurred embeddings within and, especially, across semantic categories (e.g., 'giraffes' become more like 'bananas'). These results lend support to the role of language categorisation in the phenomenology of altered states of consciousness, like those experienced with high doses of psychedelics or concentration meditation, states that often lead to improved mental health and wellbeing.
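The abstract describes the manipulation only at a high level. The following is a minimal sketch, not the paper's code, of how reduced attention to language might be simulated and its effect on cross-category embedding similarity measured; the Hugging Face checkpoint, the uniform 0.5 damping factor, and the forward-hook mechanism are all assumed illustrative details:

```python
# Minimal sketch: damp the self-attention outputs of CLIP's text encoder
# (an assumed stand-in for the paper's attentional-weight manipulation) and
# check whether concept embeddings blur across semantic categories,
# e.g. whether 'giraffe' drifts towards 'banana'.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
words = ["a giraffe", "a zebra", "a banana"]

def text_embeddings():
    # Unit-normalised text embeddings for the probe words.
    inputs = processor(text=words, return_tensors="pt", padding=True)
    with torch.no_grad():
        emb = model.get_text_features(**inputs)
    return torch.nn.functional.normalize(emb, dim=-1)

baseline = text_embeddings()

# Reduce "attention to language": scale every self-attention block's output
# in the text encoder before it re-enters the residual stream.
hooks = [
    layer.self_attn.register_forward_hook(
        lambda module, args, output: (0.5 * output[0],) + tuple(output[1:])
    )
    for layer in model.text_model.encoder.layers
]
altered = text_embeddings()
for h in hooks:
    h.remove()

# If categories blur, cross-category similarity (giraffe~banana) should rise
# relative to the within-category pair (giraffe~zebra).
for name, emb in [("baseline", baseline), ("altered", altered)]:
    print(name,
          "giraffe~zebra:", round((emb[0] @ emb[1]).item(), 3),
          "giraffe~banana:", round((emb[0] @ emb[2]).item(), 3))
```

The same pattern would extend to FLAVA and to the questionnaire comparison described in the abstract, substituting questionnaire-item embeddings for the probe words.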
-
This paper presents a breakthrough approach to artificial general intelligence (AGI). The criteria for AGI named in the literature go beyond the boundaries of actual intelligence and point to the necessity of modeling consciousness. Consciousness is a functional organ that has no structural localization; its modeling is possible by modeling the functions immanent to consciousness. One of the basic functions is sensation: the image of an external influence or the internal state of an organism coming into consciousness. We turn to the concept of sensation presented in the Anthropology of Hegel's Philosophy of Spirit, according to which any content, including spiritual, ethical, logical, and other content, comes into consciousness through its embodiment in the form of sensation. The results of neurobiological and psychophysiological experiments (electroencephalograms, MRI), which record the connection of sensations and cognitive acts with mental states and changes in the neural environment of the brain, point to the realism of Hegel's philosophical concept and the legitimacy of its application to the solution of scientific and technical problems. The paper argues for the realism of the Hegelian philosophical concept of sensation and discusses the possibility of modeling the activity of consciousness by operating with complexes of sensations in terms of attention, content manipulation, and volitional acts. The principle of linking (embodiment) of sense (mental) and signifying (sensed) content is expressed by the thesis: "consciousness is a kind of sensation". Prospective developments of AGI obtain original conceptual semantics for solving hard-to-formalize problems in modeling intelligence and consciousness.
-
This brief technical synopsis points to the key role of AI tools in enhancing human spiritual development. The analysis foresees a deepening integration of learning Torah and science via AI tools, thus extending human spiritual consciousness in memory, speed and cognition; i.e., a new stage of Judaism is predicted, with respect to our tech-know-logical information age.
-
The tech-know-logical role of AI/AC is extended by the concept of artificial cognition (ACO) with respect to a science of learning. AI tools are understood to empower the human mind to learn the cosmic and structural principles (laws) of our autodidactic universe and to live a more human-species-appropriate and nature-sensitive life in advanced harmony. Meta-technology, in ethical and rational terms, is required for this evolutionary step towards human creativeness.
-
The concept of neural correlates of consciousness (NCC), which suggests that specific neural activities are linked to conscious experiences, has gained widespread acceptance. This acceptance is based on a wealth of evidence from experimental studies, brain imaging techniques such as fMRI and EEG, and theoretical frameworks like integrated information theory (IIT) within neuroscience and the philosophy of mind. This paper explores the potential for artificial consciousness by merging neuromorphic design and architecture with brain simulations. It proposes the Neuromorphic Correlates of Artificial Consciousness (NCAC) as a theoretical framework. While the debate on artificial consciousness remains contentious due to our incomplete grasp of consciousness, this work may raise eyebrows and invite criticism. Nevertheless, this optimistic and forward-thinking approach is fueled by insights from the Human Brain Project, advancements in brain imaging like EEG and fMRI, and recent strides in AI and computing, including quantum and neuromorphic designs. Additionally, this paper outlines how machine learning can play a role in crafting artificial consciousness, aiming to realise machine consciousness and awareness in the future.
-
This paper endeavors to appraise scholarly works from the 1940s to the contemporary era, examining the scientific quest to transpose human cognition and consciousness into a digital surrogate, while contemplating the potential ramifications should humanity attain such an abstract level of intellect. The discourse commences with an explication of theories concerning consciousness, progressing to the Turing Test apparatus, and intersecting with Damasio’s research on the human cerebrum, particularly in relation to consciousness, thereby establishing congruence between the Turing Test and Damasio’s notions of consciousness. Subsequently, the narrative traverses the evolutionary chronology of transmuting human cognition into machine sapience, and delves into the fervent endeavors to metamorphose human minds into synthetic counterparts. Additionally, theoretical perspectives from the domains of philosophy, psychology, and neuroscience provide insight into the constraints intrinsic to AI implementations, contentious hypotheses, the perils concealed within artificial networks, and the ethical considerations necessitated by AI frameworks. Furthermore, contemplation of prospective repercussions facilitates the refinement of strategic approaches to safeguard our future Augmented Age Realities within AI, circumventing the prospect of inhabiting an intimidating technopolis where a mere 30% monopolize the intellect and ingenuity of the remaining 70% of human minds.
-
Does the assumption of a weak form of computational functionalism, according to which the right form of neural computation is sufficient for consciousness, entail that a digital computational simulation of such neural computations is conscious? Or must this computational simulation be implemented in the right way, in order to replicate consciousness? From the perspective of Karl Friston’s free energy principle, self-organising systems (such as living organisms) share a set of properties that could be realised in artificial systems, but are not instantiated by computers with a classical (von Neumann) architecture. I argue that at least one of these properties, viz. a certain kind of causal flow, can be used to draw a distinction between systems that merely simulate, and those that actually replicate consciousness.
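The abstract invokes Friston's free energy principle without stating it; for reference only (not from the abstract), the standard variational free energy that self-organising systems are taken to minimise is:

```latex
% Variational free energy (standard form, added here for reference).
% q(\psi): the system's approximate posterior over hidden states \psi;
% p(o, \psi): its generative model of observations o and hidden states \psi.
F = \mathbb{E}_{q(\psi)}\!\left[\ln q(\psi) - \ln p(o, \psi)\right]
  = D_{\mathrm{KL}}\!\left[q(\psi) \,\|\, p(\psi \mid o)\right] - \ln p(o)
  \;\ge\; -\ln p(o).
% Minimising F bounds the surprisal -ln p(o); the argument above concerns
% whether the causal flow realising this minimisation can be replicated
% on a classical (von Neumann) architecture.
```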
-
The exploration of whether artificial intelligence (AI) can evolve to possess consciousness is an intensely debated and researched topic within the fields of philosophy, neuroscience, and artificial intelligence. Understanding this complex phenomenon hinges on integrating two complementary perspectives of consciousness: the objective and the subjective. Objective perspectives involve quantifiable measures and observable phenomena, offering a more scientific and empirical approach. This includes the use of neuroimaging technologies such as electrocorticography (ECoG), EEG, and fMRI to study brain activities and patterns. These methods allow for the mapping and understanding of neural representations related to linguistic, visual, acoustic, emotional, and semantic information. However, the objective approach may miss the nuances of personal experience and introspection. On the other hand, subjective perspectives focus on personal experiences, thoughts, and feelings. This introspective view provides insights into the individual nature of consciousness, which cannot be directly measured or observed by others. Yet, the subjective approach is often criticized for its lack of empirical evidence and its reliance on personal interpretation, which may not be universally applicable or reliable. Integrating these two perspectives is essential for a comprehensive understanding of consciousness. By combining objective measures with subjective reports, we can develop a more holistic understanding of the mind.