Full bibliography 558 resources
-
The quest to create artificial consciousness stands as a formidable challenge at the intersection of artificial intelligence and cognitive science. This paper delves into the theoretical underpinnings, methodological approaches, and ethical considerations surrounding the concept of machine consciousness. By integrating insights from computational modeling, neuroscience, and philosophy, we propose a roadmap for comprehending and potentially realizing conscious behavior in artificial systems. Furthermore, we address the critical challenges of validating machine consciousness, ensuring its safe development, and navigating its integration into society.
-
This study seeks to bridge the gap between narrative memory in human cognition and artificial agents by proposing a unified model. Narrative memory, fundamental to human consciousness, organizes experiences into coherent stories, influencing memory structuring, retention, and retrieval. By integrating insights from human cognitive frameworks and artificial memory architectures, this work aims to emulate these narrative processes in artificial systems. The proposed model adopts a multi-layered approach, combining elements of episodic and semantic memory with narrative structuring techniques. It explores how artificial agents can construct and recall narratives to enhance their understanding, decision-making, and adaptive capabilities. By simulating narrative-based memory processing, we assess the model’s effectiveness in replicating human-like retention and retrieval patterns. Applications include improved human-AI interaction, where agents understand context and nuance, and advancements in machine learning, where narrative memory enhances data interpretation and predictive analytics. By unifying the cognitive and computational perspectives, this study offers a step toward more sophisticated, human-like artificial systems, paving the way for deeper explorations into the intersection of memory, narrative, and consciousness.
-
The obsession with technology has a long history of parallel evolution between humans and machines. This obsession became irrevocable when AI began to be a part of our daily lives. However, this integration of AI became a subject of controversy when the fear of AI advancing to acquire consciousness spread among mankind. Artificial consciousness is a long-debated topic in the fields of artificial intelligence and neuroscience, one that poses many ethical challenges and threats to society, ranging from daily chores to Mars missions. This paper deals with the impact of AI-based science fiction films on society. The study investigates the fascinating concept of artificial consciousness in light of posthuman terminology, technological singularity, and superintelligence by analyzing a set of science fiction films to project the actual difference between science-fictional AI and operational AI. Further, the paper explores the theoretical possibilities of artificial consciousness through a range of neuroscientific theories related to AI development, theories that point toward prospective artificial consciousness in AI. Finally, the study discloses the posthuman fallacies built around the fear of AI acquiring artificial consciousness, and their outcomes.
-
Artificial intelligence (AI) has driven the trend of technological evolution in recent decades and will continue to do so, and the potential of conscious artificial intelligence is emerging as an innovative field to address. Since the concept was first developed, the computing machine has aimed to process repetitive and tedious tasks for humans. The question of whether AI is conscious did not raise serious discussion before the appearance of machine learning. With machine learning, the machine, instead of merely taking input and generating output, is able to learn while processing information, which resembles a human's learning process. Therefore, this paper delves into the complex terrain of AI to explore the theoretical possibility of endowing machines with consciousness and addresses the future concerns and potentials of AI. Examining ethical concerns, metaphysical perspectives on consciousness, and the latest advancements in AI, the study critically assesses whether machines can possess a consciousness similar to human understanding.
-
The nature of consciousness in the context of artificial intelligence (AI) presents a problem that necessitates analysis and further exploration. This study seeks to redefine human-technology relationships by examining the intersection of consciousness and AI, including metaphysical implications and considerations. The primary objectives are to define consciousness within the context of AI, assess the potential for AI to exhibit consciousness, investigate the metaphysical implications for human experiences, and explore the ethical dimensions. The research findings indicate that consciousness involves self-awareness, perception, intentionality, and subjective experiences. While AI can achieve advanced cognitive abilities, the existence of higher-order consciousness remains uncertain, raising metaphysical questions about the nature of subjective awareness. The hard problem of consciousness highlights the challenge of bridging physical processes and subjective experiences, underscoring the need for metaphysical considerations. Ethical implications of AI integration and its impact on human experiences are also examined. Recommendations include further research on consciousness in AI, the development of ethical frameworks that account for metaphysical dimensions, and the exploration of the extended mind hypothesis to integrate AI as an augmentation of human consciousness. By addressing metaphysical implications and considerations, we can navigate the evolving landscape of AI and redefine human-technology relationships in a responsible, inclusive, and metaphysically informed manner.
-
The potential of conscious artificial intelligence (AI), with functional systems that surpass automation and rely on elements of understanding, is a beacon of hope in the AI revolution. The shift from automation to conscious AI, once automation is replaced with machine understanding, offers a future where AI can comprehend without needing to experience, thereby revolutionizing the field. In this context, the proposed Dynamic Organicity Theory of consciousness (DOT) stands out as a promising and novel approach for building artificial consciousness that is more brain-like, with physiological nonlocality and the diachronicity of self-referential causal closure. However, deep learning algorithms utilize "black box" techniques such as "dirty hooks" to make the algorithms operational, discovering arbitrary functions from a trained set of dirty data rather than prioritizing models of consciousness that accurately represent intentionality as intentions-in-action. The limitations of the "black box" approach present a significant challenge because quantum information biology, or intrinsic information, is associated with subjective physicalism and cannot be predicted with Turing computation. This paper suggests that deep learning algorithms effectively decode labeled datasets but not dirty data, owing to unlearnable noise, and that encoding intrinsic information is beyond the capabilities of deep learning. New models based on DOT are necessary to decode intrinsic information by understanding meaning and reducing uncertainty. The process of "encoding" entails functional interactions as evolving informational holons, forming informational channels in the functionality space of time consciousness. The "quantum of information" functionality is the motivity of (negentropic) action as change in functionality through thermodynamic constraints that reduce informational redundancy (also referred to as intentionality) in informational pathways. It denotes a measure of epistemic subjectivity toward machine understanding beyond the capabilities of deep learning.
-
The computational significance of consciousness is an important and potentially more tractable research theme than the hard problem of consciousness, as one could look at the correlation of consciousness and computational capacities through, e.g., algorithmic or complexity analyses. In the literature, consciousness is defined as what it is like to be an agent (i.e., a human or a bat), with phenomenal properties, such as qualia, intentionality, and self-awareness. The absence of these properties would be termed “unconscious.” The recent success of large language models (LLMs), such as ChatGPT, has raised new questions about the computational significance of human conscious processing. Although instances from biological systems would typically suggest a robust correlation between intelligence and consciousness, certain states of consciousness seem to exist without manifest existence of intelligence. On the other hand, AI systems seem to exhibit intelligence without consciousness. These instances seem to suggest possible dissociations between consciousness and intelligence in natural and artificial systems. Here, I review some salient ideas about the computational significance of human conscious processes and identify several cognitive domains potentially unique to consciousness, such as flexible attention modulation, robust handling of new contexts, choice and decision making, cognition reflecting a wide spectrum of sensory information in an integrated manner, and finally embodied cognition, which might involve unconscious processes as well. 
Compared to such cognitive tasks, characterized by flexible and ad hoc judgments and choices, adequately acquired knowledge and skills are typically processed unconsciously in humans. This is consistent with the view that the computation exhibited by LLMs, which are pretrained on large datasets, could in principle be carried out without consciousness, although conversations in humans are typically conducted consciously, with awareness of auditory qualia as well as the semantics of what is being said. I discuss the theoretically and practically important issue of separating computations that need to be conducted consciously from those that could be done unconsciously, in areas such as perception, language, and driving. I propose conscious supremacy as a concept analogous to quantum supremacy, which would help identify computations possibly unique to consciousness within biologically practical time and resource limits. I explore possible mechanisms supporting the hypothetical conscious supremacy. Finally, I discuss the relevance of the issues covered here for AI alignment, where the computations of AI and humans need to be aligned.
-
The study of machine consciousness presents a wide range of potential and problems, as it sits at the intersection of ethics, technology, and philosophy. This work explores the deep issues involved in the effort to comprehend, and perhaps induce, awareness in machines. Technically, bringing about machine awareness requires developments in artificial intelligence, neurology, and cognitive science. True awareness remains a difficult objective to achieve, despite significant progress in creating AI systems capable of learning and solving problems. In ethical terms, the implications of machine awareness are profound. If a machine were to become sentient, determining its moral standing and rights would be crucial. Careful attention must be given to the ethical issues raised by the creation of sentient beings, the abuse of sentient machines, and the moral ramifications of turning off sentient technologies. Philosophically, the presence of machine consciousness may cast doubt on our conceptions of identity, consciousness, and the essence of life. It could cause us to reevaluate how we view mankind and our role in the cosmos. In light of these challenges, it is imperative that machine awareness grow responsibly. The purpose of this study is to shed light on the present status of research, draw attention to possible hazards and ethical issues, and offer recommendations for safely navigating this emerging subject. By promoting an informed and transparent discourse, we aim to steer the evolution of machine consciousness in a way that is both morally just and technologically inventive.
-
This chapter explores the connection between human and computer consciousness, considering the implications of their separation in the context of advancing artificial intelligence. It examines psychological perspectives on human and digital consciousness, highlighting differences in perception and emotional intelligence. The subjectivity and objectivity of human and computer awareness are also explored, along with the significance of innovation and creativity. Bridging the gap between human and computer consciousness enhances human-machine interaction and the design of AI systems, while addressing moral implications promotes ethical AI development. The chapter delves into philosophical debates on consciousness, mind, identity, and the distinctions between humans and machines, ultimately aiming to deepen our understanding and foster dialogue on AI.
-
Self-awareness results from consciousness of existence in time and space. Thought and consciousness are distinguishing factors between humans and machines with artificial intelligence. No algorithm has yet been offered for artificial self-awareness based on thinking, and previous studies have not examined the relationship between consciousness, thinking, and time. This study examined the relationship between self-awareness, thinking, memories, and speech over time. A deep logical connection exists between consciousness, thinking, and time. Based on these findings, an algorithm can be designed for artificial consciousness and self-awareness.
-
Artificial intelligence (AI) has colored human civilization. It is the ability of a digital computer or computer-controlled robot to perform general tasks associated with specific patterns of intelligence. AI is not human, but it possesses intelligence similar to humans, and it can even inform or perform tasks that humans cannot. Artificial intelligence is used in fields ranging from education and healthcare to the economy and agriculture. Artificial intelligence is the product of human creation, sentiment, and consciousness; it is the result of human intelligence itself. AI can answer questions and provide intelligent recommendations for humans. With its algorithmic capabilities, AI can analyze billions of signals and make precise recommendations. At this level, artificial intelligence represents human intelligence. The question, however, is whether artificial intelligence has sensitivity, sentiment, empathy, and solidarity toward the humans who created it, or whether artificial intelligence instead transforms into a director of human beings in their self-actualization. Using a phenomenological approach, this research explores the phenomenon of the presence of artificial intelligence, which offers convenience for human work but at the same time reduces the value of humans, who possess creative intuition, sentiment, and consciousness, even though AI is born from the human ability to create, feel, and think. The results of this exploration are then given a theological and ethnographic perspective (teo-ethnography). Keywords: artificial intelligence, creation, sentiment, consciousness, teo-ethnography
-
The dream of making a conscious humanoid – whether as servant, guard, entertainer, or simply as testament to human creativity – has long captivated the human imagination. However, while past attempts were performed by magicians and mystics, today scientists and engineers are doing the work to turn myth into reality. This essay introduces the fundamental concepts surrounding human consciousness and machine consciousness and offers a theological contribution. Using the biblical association of the soul to blood, it will be shown that the Bible provides evidence of a scientific claim, while at the same time, science provides evidence of a biblical claim.
-
Consciousness is notoriously hard to define in objective terms. An objective definition of consciousness is critically needed so that we might accurately understand how consciousness and the resultant choice behaviour may arise in biological or artificial systems. Many theories have integrated neurobiological and psychological research to explain how consciousness might arise, but few, if any, outline what is fundamentally required to generate consciousness. To identify such requirements, I examine current theories of consciousness and corresponding scientific research to generate a new definition of consciousness from first principles. Critically, consciousness is the apparatus that provides the ability to make decisions, but it is not defined by the decision itself. As such, a definition of consciousness does not require choice behaviour or an explicit awareness of temporality, despite both being well-characterised outcomes of conscious thought. Rather, the requirements for consciousness include: at least some capability for perception; a memory for storing such perceptual information, which in turn provides a framework for an imagination; and a sense of self capable of making decisions based on possible and desired futures. Thought experiments and observable neurological phenomena demonstrate that these components are fundamentally required for consciousness, whereby the loss of any one component removes the capability for conscious thought. Identifying these requirements provides a new definition of consciousness by which we can objectively determine consciousness in any conceivable agent, such as non-human animals and artificially intelligent systems.
-
The emergence of self in an artificial entity is a topic greeted with disbelief, fear, and finally dismissal of the topic itself as a scientific impossibility. The appearance of sentience in a large language model (LLM) chatbot such as LaMDA inspires us to examine the notions and theories of self and its construction and reconstruction in digital space as a result of interaction. The question of whether the concept of sentience can be correlated with a digital self without a place for personhood undermines the place of sapience and other higher-order capabilities. The concepts of sentience, self, personhood, and consciousness require discrete reflections and theorisations.
-
The ability for self-related thought is historically considered to be a uniquely human characteristic. Nonetheless, as technological knowledge advances, it comes as no surprise that the plausibility of humanoid self-awareness is not only theoretically explored but also engineered. Could the emerging behavioural and cognitive capabilities in artificial agents be comparable to humans? By employing a cross-disciplinary approach, the present essay aims to address this question by providing a comparative overview on the emergence of self-awareness as demonstrated in early childhood and robotics. It argues that developmental psychologists can gain invaluable theoretical and methodological insights by considering the relevance of artificial agents in better understanding the behavioural manifestations of human self-consciousness.
-
Does artificial intelligence (AI) exhibit consciousness or self? While this question is hotly debated, here we take a slightly different stance by focusing on the features that make both possible, namely a basic or fundamental subjectivity. Learning from humans and their brains, we first ask what we mean by subjectivity. Subjectivity is manifest in the perspectiveness and mineness of our experience, which, ontologically, can be traced to a point of view. Adopting a non-reductive neurophilosophical strategy, we assume that the point of view exhibits two layers: a most basic neuroecological layer and a higher-order mental layer. The neuroecological layer of the point of view is mediated by the timescales of world and brain, as further evidenced by empirical data on our sense of self. Are there corresponding timescales shared with the world in AI, and is there a point of view with perspectiveness and mineness? Discussing current neuroscientific evidence, we deny that current AI exhibits a point of view, let alone perspectiveness and mineness. We therefore conclude that, in its current state, AI does not exhibit a basic or fundamental subjectivity, and hence no consciousness or self is possible in models such as ChatGPT and similar technologies.
-
This article presents a heuristic view showing that the inner states of consciousness experienced by every human being have a physical but imaginary hypercomplex basis. The hypercomplex description is necessary because certain processes of consciousness cannot, in principle, be physically measured, yet nevertheless exist. Based on theoretical considerations, it could be possible, as a result of mathematical investigations into a so-called bicomplex algebra, to generate and use hypercomplex system states on machines in a targeted manner. The hypothesis of the existence of hypercomplex system states on machines is already supported by the surprising performance of highly complex AI systems; however, this has yet to be proven. In particular, there is a lack of experimental data distinguishing such systems from other systems, which is why this question will be addressed in later articles. This paper describes the developed bicomplex algebra and possible applications of these findings to generate hypercomplex energy states on machines. In the literature, such system states are often referred to as machine consciousness. The article uses mathematical considerations to explain how artificial consciousness could be generated and what advantages this would have for such AI systems.
-
SOMU is a theory of artificial general intelligence (AGI) that proposes a system with a universal code embedded in it, allowing it to interact with the environment and adapt to new situations without programming. So far, both the whole universe and the human brain have been modelled using SOMU. Each brain element forms a cell of a fractal tape, a cell possessing three qualities: obtaining feedback from the entire tape (S), transforming multiple states simultaneously (R), and bonding with any cell-states within-and-above the network of brain components (T). The undefined and non-finite nature of the cells rejects the tuples of a Turing machine. SRT triplets extend the brain's ability to perceive natural events beyond the spatio-temporal structure, using a cyclic sequence or loop of changes in geometric shapes. This topology factor becomes an inseparable entity of space–time, i.e. space–time-topology (STt). The fourth factor, prime numbers, can be used to rewrite spatio-temporal events by counting singularity regions in loops of various sizes. The pattern of primes is called a phase prime metric (PPM), which links all the symmetry-breaking rules, or every single phenomenon of the universe. SOMU postulates the space–time-topology-prime (STtp) quartet as an invariant that forms the basic structure of information in the brain and the universe; STtp is a bias-free, attribution-free, significance-free, and definition-free entity. SOMU reads recurring events in nature, creates a 3D assembly of clocks, namely a polyatomic time crystal (PTC), and feeds those to the PPM to create STtps. Each layer in a within-and-above brain circuit behaves like an imaginary world, generating PTCs. These PTCs of different imaginary worlds interact through a new STtp tensor decomposition mathematics. Unlike string theory, SOMU proposes that the fundamental elements of the universe are helical or vortex phases, not strings. It dismisses string theory's approach of using a sum of 4 × 4 and 8 × 8 tensors to create a 12 × 12 tensor for explaining the universe. Instead, SOMU advocates a network of multinion tensors ranging from 2 × 2 to 108 × 108 in size. With just 108 elements, a system can replicate ~90% of all symmetry-breaking rules in the universe, allowing a small systemic part to mirror the majority of events of the whole, that is, human-level consciousness. Under the SOMU model, for a part to be conscious, it must mirror a significant portion of the whole and should act as a whole for the abundance of similar mirroring parts within itself.
-
The conversation about the consciousness of artificial intelligence (AI) has been ongoing since the 1950s. Despite the numerous applications of AI identified in healthcare and primary healthcare, little is known about how a conscious AI would reshape its use in this domain. While there is a wide range of ideas as to whether AI can or cannot possess consciousness, a prevailing theme in all arguments is uncertainty. Given this uncertainty and the high stakes associated with the use of AI in primary healthcare, it is imperative to be prepared for all scenarios, including conscious AI systems being used for medical diagnosis, shared decision-making, and resource management in the future. This commentary serves as an overview of some of the pertinent evidence supporting the use of AI in primary healthcare and proposes ideas as to how consciousness of AI could support or further complicate these applications. Given the scarcity of evidence on the association between consciousness of AI and its current state of use in primary healthcare, our commentary identifies some directions for future research in this area, including assessing patients', healthcare workers', and policy-makers' attitudes towards consciousness of AI systems in primary healthcare settings.
-
Consciousness is the ability to have intentionality, a process that operates at various temporal scales. To qualify as conscious, an artificial device must express functionality capable of solving the Intrinsicality problem, where experienceable form or syntax gives rise to understanding 'meaning' as a noncontextual dynamic prior to language. This is suggestive of replacing the Hard Problem of consciousness in order to build conscious artificial intelligence (AI). Developing model emulations and exploring fundamental mechanisms of how machines understand meaning is central to the development of minimally conscious AI. It has been shown by Alemdar and colleagues [New insights into holonomic brain theory: implications for active consciousness. Journal of Multiscale Neuroscience 2 (2023), 159-168] that a framework for advancing artificial systems through understanding uncertainty derived from negentropic action to create intentional systems entails quantum-thermal fluctuations through informational channels instead of recognizing (cf. introspection) sensory cues through perceptual channels. Improving communication in conscious AI requires both software and hardware implementation. The software can be developed through a brain-machine interface of multiscale temporal processing, while the hardware can be implemented by creating energy flow using dipole-like hydrogen ion (proton) interactions in an artificial 'wetwire' protonic filament. Machine understanding can be achieved through memristors implemented in the protonic 'wetwire' filament embedded in a real-world device. This report presents a blueprint for the process, but it does not cover the algorithms or engineering aspects, which need to be conceptualized before minimally conscious AI can become operational.