Full bibliography 704 resources
-
Does artificial intelligence (AI) exhibit consciousness or self? While this question is hotly debated, here we take a slightly different stance by focusing on the features that make both possible, namely a basic or fundamental subjectivity. Learning from humans and their brain, we first ask what we mean by subjectivity. Subjectivity is manifest in the perspectiveness and mineness of our experience which, ontologically, can be traced to a point of view. Adopting a non-reductive neurophilosophical strategy, we assume that the point of view exhibits two layers: a most basic neuroecological layer and a higher-order mental layer. The neuroecological layer of the point of view is mediated by the timescales of world and brain, as further evidenced by empirical data on our sense of self. Are there corresponding timescales shared with the world in AI, and is there a point of view with perspectiveness and mineness? Discussing current neuroscientific evidence, we deny that current AI exhibits a point of view, let alone perspectiveness and mineness. We therefore conclude that, in its current state, AI does not exhibit a basic or fundamental subjectivity, and hence no consciousness or self is possible in models such as ChatGPT and similar technologies.
-
This article presents a heuristic view that shows that the inner states of consciousness experienced by every human being have a physical but imaginary hypercomplex basis. The hypercomplex description is necessary because certain processes of consciousness cannot be physically measured in principle, but nevertheless exist. Based on theoretical considerations, it could be possible - as a result of mathematical investigations into a so-called bicomplex algebra - to generate and use hypercomplex system states on machines in a targeted manner. The hypothesis of the existence of hypercomplex system states on machines is already supported by the surprising performance of highly complex AI systems. However, this has yet to be proven. In particular, there is a lack of experimental data that distinguishes such systems from other systems, which is why this question will be addressed in later articles. This paper describes the developed bicomplex algebra and possible applications of these findings to generate hypercomplex energy states on machines. In the literature, such system states are often referred to as machine consciousness. The article uses mathematical considerations to explain how artificial consciousness could be generated and what advantages this would have for such AI systems.
-
SOMU is a theory of artificial general intelligence (AGI) that proposes a system with a universal code embedded in it, allowing it to interact with the environment and adapt to new situations without programming. So far, the whole universe and the human brain have been modelled using SOMU. Each brain element forms a cell of a fractal tape, a cell possessing three qualities: obtaining feedback from the entire tape (S), transforming multiple states simultaneously (R), and bonding with any cell-states within-and-above the network of brain components (T). The undefined and non-finite nature of the cells rejects the tuples of a Turing machine. SRT triplets extend the brain's ability to perceive natural events beyond the spatio-temporal structure, using a cyclic sequence or loop of changes in geometric shapes. This topology factor becomes an inseparable entity of space–time, i.e. space–time-topology (STt). The fourth factor, prime numbers, can be used to rewrite spatio-temporal events by counting singularity regions in loops of various sizes. The pattern of primes is called a phase prime metric (PPM), which links all the symmetry-breaking rules, or every single phenomenon of the universe. SOMU postulates the space–time-topology-prime (STtp) quartet as an invariant that forms the basic structure of information in the brain and the universe; STtp is a bias-free, attribution-free, significance-free and definition-free entity. SOMU reads recurring events in nature, creates a 3D assembly of clocks, namely a polyatomic time crystal (PTC), and feeds those to the PPM to create STtps. Each layer in a within-and-above brain circuit behaves like an imaginary world, generating PTCs. These PTCs of different imaginary worlds interact through a new STtp tensor decomposition mathematics. Unlike string theory, SOMU proposes that the fundamental elements of the universe are helical or vortex phases, not strings.
It dismisses string theory's approach of using the sum of 4 × 4 and 8 × 8 tensors to create a 12 × 12 tensor to explain the universe. Instead, SOMU advocates a network of multinion tensors ranging from 2 × 2 to 108 × 108 in size. With just 108 elements, a system can replicate ~90% of all symmetry-breaking rules in the universe, allowing a small systemic part to mirror the majority of events of the whole, that is, human-level consciousness (G). Under the SOMU model, for a part to be conscious, it must mirror a significant portion of the whole and should act as a whole for the abundance of similar mirroring parts within itself.
-
The conversation about consciousness of artificial intelligence (AI) has been an ongoing topic since the 1950s. Despite the numerous applications of AI identified in healthcare and primary healthcare, little is known about how a conscious AI would reshape its use in this domain. While there is a wide range of ideas as to whether AI can or cannot possess consciousness, a prevailing theme in all arguments is uncertainty. Given this uncertainty and the high stakes associated with the use of AI in primary healthcare, it is imperative to be prepared for all scenarios, including conscious AI systems being used for medical diagnosis, shared decision-making and resource management in the future. This commentary serves as an overview of some of the pertinent evidence supporting the use of AI in primary healthcare and proposes ideas as to how consciousness of AI could support or further complicate these applications. Given the scarcity of evidence on the association between consciousness of AI and its current state of use in primary healthcare, our commentary identifies some directions for future research in this area, including assessing patients', healthcare workers' and policy-makers' attitudes towards consciousness of AI systems in primary healthcare settings.
-
On broadly Copernican grounds, we are entitled to default assume that apparently behaviorally sophisticated extraterrestrial entities ("aliens") would be conscious. Otherwise, we humans would be inexplicably, implausibly lucky to have consciousness, while similarly behaviorally sophisticated entities elsewhere would be mere shells, devoid of consciousness. However, this Copernican default assumption is canceled in the case of behaviorally sophisticated entities designed to mimic superficial features associated with consciousness in humans ("consciousness mimics"), and in particular a broad class of current, near-future, and hypothetical robots. These considerations, which we formulate, respectively, as the Copernican and Mimicry Arguments, jointly defeat an otherwise potentially attractive parity principle, according to which we should apply the same types of behavioral or cognitive tests to aliens and robots, attributing or denying consciousness similarly to the extent they perform similarly. Instead of grounding speculations about alien and robot consciousness in metaphysical or scientific theories about the physical or functional bases of consciousness, our approach appeals directly to the epistemic principles of Copernican mediocrity and inference to the best explanation. This permits us to justify certain default assumptions about consciousness while remaining to a substantial extent neutral about specific metaphysical and scientific theories.
-
In this paper, I’ll examine whether we could be justified in attributing consciousness to artificial intelligent systems. First, I’ll give a brief history of the concept of artificial intelligence (AI) and get clear on the terms I’ll be using. Second, I’ll briefly review the kinds of AI programs on offer today, identifying which research program I think provides the best candidate for machine consciousness. Lastly, I’ll consider the three most plausible ways of knowing whether a machine is conscious: (1) an AI demonstrates a sufficient level of organizational similarity to that of a human thinker, (2) an inference to the best explanation, and (3) what I call “punting to panpsychism”, i.e., the idea that if everything is conscious, then we get machine consciousness in AI for free. However, I argue that all three of these methods for attributing machine consciousness are inadequate since they each face serious philosophical problems which I will survey and specifically tailor to each method.
-
How is language related to consciousness? Language functions to categorise perceptual experiences (e.g., labelling interoceptive states as 'happy') and higher-level constructs (e.g., using 'I' to represent the narrative self). Psychedelic use and meditation might be described as altered states that impair or intentionally modify the capacity for linguistic categorisation. For example, psychedelic phenomenology is often characterised by 'oceanic boundlessness' or 'unity' and 'ego dissolution', which might be expected of a system unburdened by entrenched language categories. If language breakdown plays a role in producing such altered behaviour, multimodal artificial intelligence might align more with these phenomenological descriptions when attention is shifted away from language. We tested this hypothesis by comparing the semantic embedding spaces from simulated altered states after manipulating attentional weights in CLIP and FLAVA models to embedding spaces from altered states questionnaires before manipulation. Compared to random text and various other altered states including anxiety, models were more aligned with disembodied, ego-less, spiritual, and unitive states, as well as minimal phenomenal experiences, with decreased attention to language and vision. Reduced attention to language was associated with distinct linguistic patterns and blurred embeddings within and, especially, across semantic categories (e.g., 'giraffes' become more like 'bananas'). These results lend support to the role of language categorisation in the phenomenology of altered states of consciousness, like those experienced with high doses of psychedelics or concentration meditation, states that often lead to improved mental health and wellbeing.
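The "blurred embeddings" effect described in this abstract can be illustrated with a toy sketch. This is not the authors' CLIP/FLAVA pipeline; it is a minimal NumPy analogy in which shrinking the category-specific component of an embedding (a stand-in for reduced attention to language) pulls vectors toward a shared mean and raises cross-category cosine similarity:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Toy "semantic embeddings": a shared component plus a category-specific one.
shared = rng.normal(size=64)
giraffe = shared + 2.0 * rng.normal(size=64)
banana = shared + 2.0 * rng.normal(size=64)

def blur(v, alpha):
    """Shrink the category-specific part toward the shared component.
    alpha=0 keeps the original embedding; alpha=1 is fully blurred."""
    return (1 - alpha) * v + alpha * shared

before = cosine(giraffe, banana)
after = cosine(blur(giraffe, 0.8), blur(banana, 0.8))
print(f"cross-category similarity before: {before:.2f}, after blurring: {after:.2f}")
assert after > before  # 'giraffes' become more like 'bananas'
```

Under this toy model, any manipulation that damps category-specific components will increase within- and across-category similarity, which is the qualitative signature the study reports when attention to language is reduced.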
-
This paper presents a breakthrough approach to artificial general intelligence (AGI). The criteria for AGI named in the literature go beyond the boundaries of actual intelligence and point to the necessity of modeling consciousness. Consciousness is a functional organ that has no structural localization; its modeling is possible by modeling functions immanent to consciousness. One of the basic functions is sensation: the image of an external influence, or of the internal state of an organism, coming into consciousness. We turn to the concept of sensation presented in the Anthropology of Hegel's Philosophy of Spirit, according to which any content, including spiritual, ethical, logical, and other content, comes into consciousness through its embodiment in the form of sensation. The results of neurobiological and psychophysiological experiments (electroencephalograms, MRI), which record the connection of sensations and cognitive acts with mental states and changes in the neural environment of the brain, point to the realism of Hegel's philosophical concept and the legitimacy of its application to the solution of scientific and technical problems. The paper argues for the realism of the Hegelian philosophical concept of sensation and discusses the possibility of modeling the activity of consciousness by operating with complexes of sensations in terms of attention, content manipulation, and volitional acts. The principle of linking (embodiment) of sense (mental) and signifying (sensed) content is expressed by the thesis "consciousness is a kind of sensation". Prospective developments of AGI obtain original conceptual semantics for solving hard-to-formalize problems in modeling intelligence and consciousness.
-
This brief technical synopsis points to the key role of AI tools in enhancing human spiritual development. The analysis foresees a deepening integration of learning Torah and science via AI tools, thus extending human spiritual consciousness in memory, speed and cognition; i.e., a new stage of Judaism is predicted with respect to our tech-know-logical information age.
-
The tech-know-logical role of AI/AC is extended by the concept of artificial cognition (ACO), with respect to a science of learning. AI tools are understood to empower the human mind to learn the cosmic and structural principles (laws) of our autodidactic universe, so as to live a more human-species-appropriate and nature-sensitive life in advanced harmony. Meta-technology, in ethical and rational terms, is required for this evolutionary step towards human creativeness.
-
The concept of neural correlates of consciousness (NCC), which suggests that specific neural activities are linked to conscious experiences, has gained widespread acceptance. This acceptance is based on a wealth of evidence from experimental studies, brain imaging techniques such as fMRI and EEG, and theoretical frameworks like integrated information theory (IIT) within neuroscience and the philosophy of mind. This paper explores the potential for artificial consciousness by merging neuromorphic design and architecture with brain simulations. It proposes the Neuromorphic Correlates of Artificial Consciousness (NCAC) as a theoretical framework. While the debate on artificial consciousness remains contentious due to our incomplete grasp of consciousness, this work may raise eyebrows and invite criticism. Nevertheless, this optimistic and forward-thinking approach is fueled by insights from the Human Brain Project, advancements in brain imaging like EEG and fMRI, and recent strides in AI and computing, including quantum and neuromorphic designs. Additionally, this paper outlines how machine learning can play a role in crafting artificial consciousness, aiming to realise machine consciousness and awareness in the future.
-
This paper endeavors to appraise scholarly works from the 1940s to the contemporary era, examining the scientific quest to transpose human cognition and consciousness into a digital surrogate, while contemplating the potential ramifications should humanity attain such an abstract level of intellect. The discourse commences with an explication of theories concerning consciousness, progressing to the Turing Test apparatus, and intersecting with Damasio’s research on the human cerebrum, particularly in relation to consciousness, thereby establishing congruence between the Turing Test and Damasio’s notions of consciousness. Subsequently, the narrative traverses the evolutionary chronology of transmuting human cognition into machine sapience, and delves into the fervent endeavors to metamorphose human minds into synthetic counterparts. Additionally, theoretical perspectives from the domains of philosophy, psychology, and neuroscience provide insight into the constraints intrinsic to AI implementations, contentious hypotheses, the perils concealed within artificial networks, and the ethical considerations necessitated by AI frameworks. Furthermore, contemplation of prospective repercussions facilitates the refinement of strategic approaches to safeguard our future Augmented Age Realities within AI, circumventing the prospect of inhabiting an intimidating technopolis where a mere 30% monopolize the intellect and ingenuity of the remaining 70% of human minds.
-
Does the assumption of a weak form of computational functionalism, according to which the right form of neural computation is sufficient for consciousness, entail that a digital computational simulation of such neural computations is conscious? Or must this computational simulation be implemented in the right way, in order to replicate consciousness? From the perspective of Karl Friston’s free energy principle, self-organising systems (such as living organisms) share a set of properties that could be realised in artificial systems, but are not instantiated by computers with a classical (von Neumann) architecture. I argue that at least one of these properties, viz. a certain kind of causal flow, can be used to draw a distinction between systems that merely simulate, and those that actually replicate consciousness.
-
The exploration of whether artificial intelligence (AI) can evolve to possess consciousness is an intensely debated and researched topic within the fields of philosophy, neuroscience, and artificial intelligence. Understanding this complex phenomenon hinges on integrating two complementary perspectives of consciousness: the objective and the subjective. Objective perspectives involve quantifiable measures and observable phenomena, offering a more scientific and empirical approach. This includes the use of neuroimaging technologies such as electrocorticography (ECoG), EEG, and fMRI to study brain activities and patterns. These methods allow for the mapping and understanding of neural representations related to language, visual, acoustic, emotional, and semantic information. However, the objective approach may miss the nuances of personal experience and introspection. On the other hand, subjective perspectives focus on personal experiences, thoughts, and feelings. This introspective view provides insights into the individual nature of consciousness, which cannot be directly measured or observed by others. Yet, the subjective approach is often criticized for its lack of empirical evidence and its reliance on personal interpretation, which may not be universally applicable or reliable. Integrating these two perspectives is essential for a comprehensive understanding of consciousness. By combining objective measures with subjective reports, we can develop a more holistic understanding of the mind.
-
Large Language Models (LLMs) still face challenges in tasks requiring understanding implicit instructions and applying common-sense knowledge. In such scenarios, LLMs may require multiple attempts to achieve human-level performance, potentially leading to inaccurate responses or inferences in practical environments, affecting their long-term consistency and behavior. This paper introduces the Internal Time-Consciousness Machine (ITCM), a computational consciousness structure to simulate the process of human consciousness. We further propose the ITCM-based Agent (ITCMA), which supports action generation and reasoning in open-world settings, and can independently complete tasks. ITCMA enhances LLMs' ability to understand implicit instructions and apply common-sense knowledge by considering agents' interaction and reasoning with the environment. Evaluations in the Alfworld environment show that trained ITCMA outperforms the state-of-the-art (SOTA) by 9% on the seen set. Even untrained ITCMA achieves a 96% task completion rate on the seen set, 5% higher than SOTA, indicating its superiority over traditional intelligent agents in utility and generalization. In real-world tasks with quadruped robots, the untrained ITCMA achieves an 85% task completion rate, which is close to its performance in the unseen set, demonstrating its comparable utility and universality in real-world settings.
-
People want AI systems that do what they say and that are reliable, trustworthy, and explainable. We propose a DIKWP (Data, Information, Knowledge, Wisdom, and Purpose) artificial consciousness white-box evaluation standard and method for AI systems. We categorize AI system output resources into deterministic and uncertain resources, the latter including incomplete, inconsistent, and imprecise data. We then map these resources to the DIKWP framework for testing. For deterministic resources, we evaluate their transformation into different resource types based on purpose. For uncertain resources, we evaluate their potential conversion into other deterministic resources through purpose-driven transformation. We construct an AI diagnostic scenario using a 2S-dimensional (S×S) framework to evaluate both deterministic and uncertain DIKWP resources. The experimental results show that the DIKWP artificial consciousness white-box evaluation standard and method effectively assesses the cognitive capabilities of AI systems and demonstrates a certain level of interpretability, thus contributing to AI system improvement and evaluation.
-
Artificial intelligence systems are associated with inherent risks, such as uncontrollability and lack of interpretability. To address these risks, we need to develop artificial intelligence systems that are interpretable, trustworthy, responsible, and consistent in thinking and behavior, which we refer to as artificial consciousness (AC) systems. Consequently, we propose and define the concepts and implementation of a computer architecture, chips, a runtime environment, and the DIKWP language. Furthermore, we have overcome the limitations of traditional programming languages, computer architectures, and software-hardware implementations when creating AC systems. Our proposed software and hardware integration platform will make it easier to build and operate AC software systems based on DIKWP theories.
-
Critics of Artificial Intelligence posit that artificial agents cannot achieve consciousness even in principle, because they lack certain necessary conditions for consciousness present in biological agents. Here we highlight arguments from a neuroscientific and neuromorphic engineering perspective as to why such a strict denial of consciousness in artificial agents is not compelling. We argue that the differences between biological and artificial brains are not fundamental and are vanishing with progress in neuromorphic architecture designs mimicking the human blueprint. To characterise this blueprint, we propose the conductor model of consciousness (CMoC) that builds on neuronal implementations of an external and internal world model while gating and labelling information flows. An extended Turing test (eTT) lists criteria on how to separate the information flow for learning an internal world model, both for biological and artificial agents. While the classic Turing test only assesses external observables (i.e., behaviour), the eTT also evaluates internal variables of artificial brains and tests for the presence of neuronal circuitries necessary to act on representations of the self, the internal and the external world, and potentially, some neural correlates of consciousness. Finally, we address ethical issues for the design of such artificial agents, formulated as an alignment dilemma: if artificial agents share aspects of consciousness, while they (partially) overtake human intelligence, how can humans justify their own rights against growing claims of their artificial counterpart? We suggest a tentative human-AI deal according to which artificial agents are designed not to suffer negative affective states but in exchange are not granted equal rights to humans.
-
Consciousness is the ability to have intentionality, which is a process that operates at various temporal scales. To qualify as conscious, an artificial device must express functionality capable of solving the Intrinsicality problem, where experienceable form or syntax gives rise to understanding 'meaning' as a noncontextual dynamic prior to language. This is suggestive of replacing the Hard Problem of consciousness in order to build conscious artificial intelligence (AI). Developing model emulations and exploring fundamental mechanisms of how machines understand meaning is central to the development of minimally conscious AI. It has been shown by Alemdar and colleagues [New insights into holonomic brain theory: implications for active consciousness. Journal of Multiscale Neuroscience 2 (2023), 159-168] that a framework for advancing artificial systems through understanding uncertainty derived from negentropic action to create intentional systems entails quantum-thermal fluctuations through informational channels instead of recognizing (cf. introspection) sensory cues through perceptual channels. Improving communication in conscious AI requires both software and hardware implementation. The software can be developed through the brain-machine interface of multiscale temporal processing, while hardware implementation can be done by creating energy flow using dipole-like hydrogen ion (proton) interactions in an artificial 'wetwire' protonic filament. Machine understanding can be achieved through memristors implemented in the protonic 'wetwire' filament embedded in a real-world device. This report presents a blueprint for the process, but it does not cover the algorithms or engineering aspects, which need to be conceptualized before minimally conscious AI can become operational.
-
Interactions with large language models (LLMs) have led to the suggestion that these models may soon be conscious. From the perspective of neuroscience, this position is difficult to defend. For one, the inputs to LLMs lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Secondly, the architectures of present-day artificial intelligence algorithms are missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.