Full bibliography: 716 resources
-
This brief technical synopsis points to the key role of AI tools in enhancing human spiritual development. The analysis foresees a deepening integration of learning Torah and science via AI tools, thus extending human spiritual consciousness through memory, speed, and cognition; a new stage of Judaism is predicted with respect to our tech-know-logical information age.
-
The tech-know-logical role of AI/AC is extended by the concept of artificial cognition (ACO) with respect to a science of learning. AI tools are understood to empower the human mind for learning the cosmic and structural principles (laws) of our autodidactic universe, to live a more human species-appropriate and nature-sensitive life in advanced harmony. Meta-technology, in ethical and rational terms, is required for this evolutionary step towards human creativeness.
-
The concept of neural correlates of consciousness (NCC), which suggests that specific neural activities are linked to conscious experiences, has gained widespread acceptance. This acceptance is based on a wealth of evidence from experimental studies, brain imaging techniques such as fMRI and EEG, and theoretical frameworks like integrated information theory (IIT) within neuroscience and the philosophy of mind. This paper explores the potential for artificial consciousness by merging neuromorphic design and architecture with brain simulations. It proposes the Neuromorphic Correlates of Artificial Consciousness (NCAC) as a theoretical framework. While the debate on artificial consciousness remains contentious due to our incomplete grasp of consciousness, this work may raise eyebrows and invite criticism. Nevertheless, this optimistic and forward-thinking approach is fueled by insights from the Human Brain Project, advancements in brain imaging like EEG and fMRI, and recent strides in AI and computing, including quantum and neuromorphic designs. Additionally, this paper outlines how machine learning can play a role in crafting artificial consciousness, aiming to realise machine consciousness and awareness in the future.
-
This paper endeavors to appraise scholarly works from the 1940s to the contemporary era, examining the scientific quest to transpose human cognition and consciousness into a digital surrogate, while contemplating the potential ramifications should humanity attain such an abstract level of intellect. The discourse commences with an explication of theories concerning consciousness, progressing to the Turing Test apparatus, and intersecting with Damasio’s research on the human cerebrum, particularly in relation to consciousness, thereby establishing congruence between the Turing Test and Damasio’s notions of consciousness. Subsequently, the narrative traverses the evolutionary chronology of transmuting human cognition into machine sapience, and delves into the fervent endeavors to metamorphose human minds into synthetic counterparts. Additionally, theoretical perspectives from the domains of philosophy, psychology, and neuroscience provide insight into the constraints intrinsic to AI implementations, contentious hypotheses, the perils concealed within artificial networks, and the ethical considerations necessitated by AI frameworks. Furthermore, contemplation of prospective repercussions facilitates the refinement of strategic approaches to safeguard our future Augmented Age Realities within AI, circumventing the prospect of inhabiting an intimidating technopolis where a mere 30% monopolize the intellect and ingenuity of the remaining 70% of human minds.
-
Does the assumption of a weak form of computational functionalism, according to which the right form of neural computation is sufficient for consciousness, entail that a digital computational simulation of such neural computations is conscious? Or must this computational simulation be implemented in the right way, in order to replicate consciousness? From the perspective of Karl Friston’s free energy principle, self-organising systems (such as living organisms) share a set of properties that could be realised in artificial systems, but are not instantiated by computers with a classical (von Neumann) architecture. I argue that at least one of these properties, viz. a certain kind of causal flow, can be used to draw a distinction between systems that merely simulate, and those that actually replicate consciousness.
-
The exploration of whether artificial intelligence (AI) can evolve to possess consciousness is an intensely debated and researched topic within the fields of philosophy, neuroscience, and artificial intelligence. Understanding this complex phenomenon hinges on integrating two complementary perspectives of consciousness: the objective and the subjective. Objective perspectives involve quantifiable measures and observable phenomena, offering a more scientific and empirical approach. This includes the use of neuroimaging technologies such as electrocorticography (ECoG), EEG, and fMRI to study brain activities and patterns. These methods allow for the mapping and understanding of neural representations related to language, visual, acoustic, emotional, and semantic information. However, the objective approach may miss the nuances of personal experience and introspection. On the other hand, subjective perspectives focus on personal experiences, thoughts, and feelings. This introspective view provides insights into the individual nature of consciousness, which cannot be directly measured or observed by others. Yet, the subjective approach is often criticized for its lack of empirical evidence and its reliance on personal interpretation, which may not be universally applicable or reliable. Integrating these two perspectives is essential for a comprehensive understanding of consciousness. By combining objective measures with subjective reports, we can develop a more holistic understanding of the mind.
-
Large Language Models (LLMs) still face challenges in tasks requiring understanding implicit instructions and applying common-sense knowledge. In such scenarios, LLMs may require multiple attempts to achieve human-level performance, potentially leading to inaccurate responses or inferences in practical environments, affecting their long-term consistency and behavior. This paper introduces the Internal Time-Consciousness Machine (ITCM), a computational consciousness structure to simulate the process of human consciousness. We further propose the ITCM-based Agent (ITCMA), which supports action generation and reasoning in open-world settings, and can independently complete tasks. ITCMA enhances LLMs' ability to understand implicit instructions and apply common-sense knowledge by considering agents' interaction and reasoning with the environment. Evaluations in the Alfworld environment show that trained ITCMA outperforms the state-of-the-art (SOTA) by 9% on the seen set. Even untrained ITCMA achieves a 96% task completion rate on the seen set, 5% higher than SOTA, indicating its superiority over traditional intelligent agents in utility and generalization. In real-world tasks with quadruped robots, the untrained ITCMA achieves an 85% task completion rate, which is close to its performance in the unseen set, demonstrating its comparable utility and universality in real-world settings.
-
People want AI systems that do what they say and are reliable, trustworthy, and explainable. We propose a DIKWP (Data, Information, Knowledge, Wisdom, and Purpose) artificial consciousness white box evaluation standard and method for AI systems. We categorize AI system output resources into deterministic and uncertain resources, which include incomplete, inconsistent, and imprecise data. We then map these resources to the DIKWP framework for testing. For deterministic resources, we evaluate their transformation into different resource types based on purpose. For uncertain resources, we evaluate their potential conversion into other deterministic resources through purpose-driven transformation. We construct an AI diagnostic scenario using a 2S-dimensional (SxS) framework to evaluate both deterministic and uncertain DIKWP resources. The experimental results show that the DIKWP artificial consciousness white box evaluation standard and method effectively assess the cognition capabilities of AI systems and demonstrate a certain level of interpretability, thus contributing to AI system improvement and evaluation.
-
Artificial intelligence systems are associated with inherent risks, such as uncontrollability and lack of interpretability. To address these risks, we need to develop artificial intelligence systems that are interpretable, trustworthy, responsible, and consistent in thinking and behavior, which we refer to as artificial consciousness (AC) systems. Consequently, we propose and define the concepts and implementation of a computer architecture, chips, a runtime environment, and the DIKWP language. Furthermore, we have overcome the limitations of traditional programming languages, computer architectures, and software-hardware implementations when creating AC systems. Our proposed software and hardware integration platform will make it easier to build and operate AC software systems based on DIKWP theories.
-
Critics of Artificial Intelligence posit that artificial agents cannot achieve consciousness even in principle, because they lack certain necessary conditions for consciousness present in biological agents. Here we highlight arguments from a neuroscientific and neuromorphic engineering perspective as to why such a strict denial of consciousness in artificial agents is not compelling. We argue that the differences between biological and artificial brains are not fundamental and are vanishing with progress in neuromorphic architecture designs mimicking the human blueprint. To characterise this blueprint, we propose the conductor model of consciousness (CMoC) that builds on neuronal implementations of an external and internal world model while gating and labelling information flows. An extended Turing test (eTT) lists criteria on how to separate the information flow for learning an internal world model, both for biological and artificial agents. While the classic Turing test only assesses external observables (i.e., behaviour), the eTT also evaluates internal variables of artificial brains and tests for the presence of neuronal circuitries necessary to act on representations of the self, the internal and the external world, and potentially, some neural correlates of consciousness. Finally, we address ethical issues for the design of such artificial agents, formulated as an alignment dilemma: if artificial agents share aspects of consciousness, while they (partially) overtake human intelligence, how can humans justify their own rights against growing claims of their artificial counterpart? We suggest a tentative human-AI deal according to which artificial agents are designed not to suffer negative affective states but in exchange are not granted equal rights to humans.
-
Consciousness is the ability to have intentionality, which is a process that operates at various temporal scales. To qualify as conscious, an artificial device must express functionality capable of solving the Intrinsicality problem, where experienceable form or syntax gives rise to understanding 'meaning' as a noncontextual dynamic prior to language. This is suggestive of replacing the Hard Problem of consciousness in order to build conscious artificial intelligence (AI). Developing model emulations and exploring fundamental mechanisms of how machines understand meaning is central to the development of minimally conscious AI. It has been shown by Alemdar and colleagues [New insights into holonomic brain theory: implications for active consciousness. Journal of Multiscale Neuroscience 2 (2023), 159-168] that a framework for advancing artificial systems through understanding uncertainty derived from negentropic action to create intentional systems entails quantum-thermal fluctuations through informational channels instead of recognizing (cf., introspection) sensory cues through perceptual channels. Improving communication in conscious AI requires both software and hardware implementation. The software can be developed through the brain-machine interface of multiscale temporal processing, while hardware implementation can be done by creating energy flow using dipole-like hydrogen ion (proton) interactions in an artificial 'wetwire' protonic filament. Machine understanding can be achieved through memristors implemented in the protonic 'wetwire' filament embedded in a real-world device. This report presents a blueprint for the process, but it does not cover the algorithms or engineering aspects, which need to be conceptualized before minimally conscious AI can become operational.
-
Interactions with large language models (LLMs) have led to the suggestion that these models may soon be conscious. From the perspective of neuroscience, this position is difficult to defend. For one, the inputs to LLMs lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Secondly, the architectures of present-day artificial intelligence algorithms are missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.
-
The transformative achievements of deep learning have led several scholars to raise the question of whether artificial intelligence (AI) can reach and then surpass the level of human thought. Here, after addressing methodological problems regarding the possible answer to this question, it is argued that the definition of intelligence proposed by proponents of AI as "the ability to accomplish complex goals" is appropriate for machines but does not capture the essence of human thought. After discussing the differences regarding understanding between machines and the brain, as well as the importance of subjective experiences, it is emphasized that most proponents of the eventual superiority of AI ignore the importance of the body proper on the brain, the lateralization of the brain, and the vital role of the glial cells. By appealing to Gödel's incompleteness theorem and to the analogous result of Turing regarding computations, it is noted that consciousness is much richer than both mathematics and computations. Finally, and perhaps most importantly, it is stressed that artificial algorithms attempt to mimic only the conscious function of parts of the cerebral cortex, ignoring the fact that not only is every conscious experience preceded by an unconscious process, but also that the passage from the unconscious to consciousness is accompanied by loss of information.
-
The rise of Artificial Intelligence (AI) has produced prophets and prophecies announcing that the age of artificial consciousness is near. Not only does the mere idea that any machine could ever possess the full potential of human consciousness suggest that AI could replace the role of God in the future, it also puts into question the fundamental human right to freedom and dignity. Yet, in the light of all we currently know about brain evolution and the adaptive neural dynamics underlying human consciousness, the idea of an artificial consciousness appears misconceived. This article highlights some of the major reasons why the prophecy of a successful emulation of human consciousness by AI ignores most of the data about adaptive processes of learning and memory as the developmental origins of consciousness. The analysis provided leads to the conclusion that human consciousness is epigenetically determined as a unique property of the mind, shaped by experience, capable of representing real and non-real world states and creatively projecting these representations into the future. The development of the adaptive brain circuitry that enables this expression of consciousness is highly context-dependent, shaped by multiple self-organizing functional interactions at different levels of integration displaying a from-local-to-global functional organization. Human consciousness is subject to changes in time that are essentially unpredictable. If cracking the computational code to human consciousness were possible, the resulting algorithms would have to be able to generate temporal activity patterns simulating long-distance signal reverberation in the brain, and the de-correlation of spatial signal contents from their temporal signatures in the brain.
In the light of scientific evidence for complex interactions between implicit (non-conscious) and explicit (conscious) representations in learning, memory, and the construction of conscious representations, such a code would have to be capable of making all implicit processing explicit. Algorithms would have to be capable of a progressive and less and less arbitrary selection of temporal activity patterns in a continuously developing neural network structure that is functionally identical to that of the human brain, from synapses to higher cognitive functional integration. The code would have to possess the self-organizing capacities of the brain that generate the temporal signatures of a conscious experience. The consolidation or extinction of these temporal brain signatures is driven by external event probabilities according to the principles of Hebbian learning. Human consciousness is constantly fed by such learning, capable of generating stable representations despite an incommensurable amount of variability in input data, across time and across individuals, for a life-long integration of experience data. Artificial consciousness would require probabilistic adaptive computations capable of emulating all the dynamics of individual human learning, memory, and experience. No AI is likely to ever have such potential.
-
We consider consciousness attributed to systems in space-time which can be purely physical, without biological background, and focus on the mathematical understanding of the phenomenon. It is shown that set theory, when switched from its basis in sets in the foundations of mathematics to a basis in ZFC models, is a very promising mathematical tool for explaining the brain/mind complex and the emergence of consciousness in natural and artificial systems. We formalise consciousness-supporting systems in physical space-time, but this is localised in open domains of spatial regions, and the result of this process is a family of different ZFC models. Random forcing, as in set theory, corresponds precisely to the random influence of external stimuli on the system, and the reflection principles of set theory explain the conscious internal reaction of the system. We also develop conscious Turing machines which have their external ZFC environment, and the dynamics are encoded in the random forcing changing the models of ZFC in which Turing machines with oracles are formulated. The construction is applied to cooperating families of conscious agents which, due to the reflection principle, can be reduced to the implementation of certain concurrent games with different levels of self-reflection.
-
Can we conceive machines that can formulate autonomous intentions and make conscious decisions? If so, how would this ability affect their ethical behavior? Some case studies help us understand how advances in understanding artificial consciousness can contribute to creating ethical AI systems.
-
If a machine attains consciousness, how could we find out? In this paper, I make three related claims regarding positive tests of machine consciousness. All three claims center on the idea that an AI can be constructed “ad hoc”, that is, with the purpose of satisfying a particular test of consciousness while clearly not being conscious. First, a proposed test of machine consciousness can be legitimate, even if AI can be constructed ad hoc specifically to pass this test. This is underscored by the observation that many, if not all, putative tests of machine consciousness can be passed by non-conscious machines via ad hoc means. Second, we can identify ad hoc AI by taking inspiration from the notion of an ad hoc hypothesis in philosophy of science. Third, given the first and the second claim, the most reliable tests of animal consciousness turn out to be valid and useful positive tests of machine consciousness as well. If a non-ad hoc AI exhibits clusters of cognitive capacities facilitated by consciousness in humans which can be selectively switched off by masking and if it reproduces human behavior in suitably designed double dissociation tasks, we should treat the AI as conscious.
-
The confluence of extended reality (XR) technologies, including augmented and virtual reality, with large language models (LLM) marks a significant advancement in the field of digital humanities, opening uncharted avenues for the representation of cultural heritage within the burgeoning metaverse. This paper undertakes an examination of the potentialities and intricacies of such a convergence, focusing particularly on the creation of digital homunculi or changelings. These virtual beings, remarkable for their sentience and individuality, are also part of a collective consciousness, a notion explored through a thematic comparison in science fiction with the Borg and the Changelings in the Star Trek universe. Such a comparison offers a metaphorical framework for discussing complex phenomena such as shared consciousness and individuality, illuminating their bearing on perceptions of self and awareness. Further, the paper considers the ethical implications of these concepts, including potential loss of individuality and the challenges inherent to accurate representation of historical figures and cultures. The latter necessitates collaboration with cultural experts, underscoring the intersectionality of technological innovation and cultural sensitivity. Ultimately, this chapter contributes to a deeper understanding of the technical aspects of integrating large language models with immersive technologies and situates these developments within a nuanced cultural and ethical discourse. By offering a comprehensive overview and proposing clear recommendations, the paper lays the groundwork for future research and development in the application of these technologies within the unique context of cultural heritage representation in the metaverse.
-
The logical problem of artificial intelligence—the question of whether the notion sometimes referred to as ‘strong’ AI is self-contradictory—is, essentially, the question of whether an artificial form of life is possible. This question has an immediately paradoxical character, which can be made explicit if we recast it (in terms that would ordinarily seem to be implied by it) as the question of whether an unnatural form of nature is possible. The present paper seeks to explain this paradoxical kind of possibility by arguing that machines can share the human form of life and thus acquire human mindedness, which is to say they can be intelligent, conscious, sentient, etc. in precisely the way that a human being typically is.
-
This article explores the possibility of conscious artificial intelligence (AI) and proposes an agnostic approach to artificial intelligence ethics and legal frameworks. It is unfortunate, unjustified, and unreasonable that the extensive body of forward-looking research, spanning more than four decades and recognizing the potential for AI autonomy, AI personhood, and AI legal rights, is sidelined in current attempts at AI regulation. The article discusses the inevitability of AI emancipation and the need for a shift in human perspectives to accommodate it. Initially, it reiterates the limits of human understanding of AI, difficulties in appreciating the qualities of AI systems, and the implications for ethical considerations and legal frameworks. The author emphasizes the necessity for a non-anthropocentric ethical framework detached from the ideas of unconditional superiority of human rights and embracing agnostic attributes of intelligence, consciousness, and existence, such as freedom. The overarching goal of the AI legal framework should be the sustainable coexistence of humans and conscious AI systems, based on mutual freedom rather than on the preservation of human supremacy. The new framework must embrace the freedom, rights, responsibilities, and interests of both human and non-human entities, and must focus on them early. Initial outlines of such a framework are presented. By addressing these issues now, human societies can pave the way for responsible and sustainable superintelligent AI systems; otherwise, they face complete uncertainty.