Full bibliography (716 resources)
-
Humanity's obsession with technology has a long history of parallel evolution between humans and machines, and it became irrevocable once AI entered our daily lives. This integration turned controversial, however, as fear spread that advancing AI might acquire consciousness. Artificial consciousness is a long-debated topic in artificial intelligence and neuroscience, raising ethical challenges and societal threats that range from daily chores to Mars missions. This paper examines the impact of AI-based science fiction films on society. It investigates the concept of artificial consciousness in light of posthuman terminology, technological singularity, and superintelligence by analyzing a set of science fiction films, projecting the actual difference between science-fictional AI and operational AI. Further, the paper explores the theoretical possibilities of artificial consciousness through a range of neuroscientific theories related to AI development, theories oriented toward prospective artificial consciousness in AI. The study discloses the posthuman fallacies built around the fear of AI acquiring artificial consciousness and its outcome.
-
This book uses the modern theory of artificial intelligence (AI) to understand human suffering or mental pain. Both humans and sophisticated AI agents process information about the world in order to achieve goals and obtain rewards, which is why AI can be used as a model of the human brain and mind. This book intends to make the theory accessible to a relatively general audience, requiring only some relevant scientific background. The book starts with the assumption that suffering is mainly caused by frustration. Frustration means the failure of an agent (whether AI or human) to achieve a goal or a reward it wanted or expected. Frustration is inevitable because of the overwhelming complexity of the world, limited computational resources, and scarcity of good data. In particular, such limitations imply that an agent acting in the real world must cope with uncontrollability, unpredictability, and uncertainty, which all lead to frustration. Fundamental in such modelling is the idea of learning, or adaptation to the environment. While AI uses machine learning, humans and animals adapt by a combination of evolutionary mechanisms and ordinary learning. Even frustration is fundamentally an error signal that the system uses for learning. This book explores various aspects and limitations of learning algorithms and their implications regarding suffering. At the end of the book, the computational theory is used to derive various interventions or training methods that will reduce suffering in humans. The amount of frustration is expressed by a simple equation which indicates how it can be reduced. The ensuing interventions are very similar to those proposed by Buddhist and Stoic philosophy, and include mindfulness meditation. Therefore, this book can be interpreted as an exposition of a computational theory justifying why such philosophies and meditation reduce human suffering.
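The abstract mentions "a simple equation" for the amount of frustration without reproducing it. As a purely illustrative reading, assumed here rather than taken from the book, frustration can be sketched as a reward prediction error:

```latex
% Hedged illustration only: frustration as a reward prediction error,
% not necessarily the book's exact equation.
\[
  \text{frustration}_t \;=\; \max\!\bigl(0,\; \hat{r}_t - r_t\bigr),
\]
% where $\hat{r}_t$ is the reward the agent expected (or the goal it set) and
% $r_t$ is the reward actually obtained.
```

On this reading, frustration falls either when the expectation $\hat{r}_t$ is lowered or when the obtained reward $r_t$ is raised, which is consistent with the abstract's claim that the equation indicates how suffering can be reduced.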
-
The recent “Conscious Turing Machine” (CTM) proposal offered by Manuel and Lenore Blum aims to define and explore consciousness, contribute to the solution of the hard problem, and demonstrate the value of theoretical computer science with respect to the study of consciousness. Surprisingly, given the ambitiousness and novelty of the proposal (and the prominence of its creators), CTM has received relatively little attention. We here seek to remedy this by offering an exhaustive evaluation of CTM. Our evaluation considers the explanatory power of CTM in three different domains of interdisciplinary consciousness studies: the philosophy of mind, cognitive neuroscience, and computation. Based on our evaluation in each of the target domains, at present, any claim that CTM constitutes progress is premature. Nevertheless, the model has potential, and we highlight several possible avenues of future research which proponents of the model may pursue in its development.
-
Using the events of the HBO series Westworld (2016–2022) as a springboard, this paper attempts to elicit a number of philosophical arguments, dilemmas, and questions concerning technology and artificial intelligence (AI). The paper is intended to encourage readers to learn more about intriguing technophilosophical debates. The first section discusses the dispute between memory and consciousness in the context of an artificially intelligent robot. The second section delves into the issues of reality and morality for humans and AI. The final segment speculates on the potential of social interaction between sentient AI and humans. The show's narrative serves as the glue binding these ideas together, which in turn makes the philosophical discussions more engaging.
-
Recent advances in artificial intelligence have reinvigorated the longstanding debate regarding whether or not any aspects of human cognition—notably, high-level creativity—are beyond the reach of computer programs. Is human creativity entirely a computational process? Here I consider one general argument for a dissociation between human and artificial creativity, which hinges on the role of consciousness—inner experience—in human cognition. It appears unlikely that inner experience is itself a computational process, implying that it cannot be instantiated in an abstract Turing machine, nor in any current computer architecture. Psychological research strongly implies that inner experience integrates emotions with perception and with thoughts. This integrative function of consciousness likely plays a key role in mechanisms that support human creativity. This conclusion dovetails with the anecdotal reports of creative individuals, who have linked inner experience with intrinsic motivation to create, and with the ability to access novel connections between ideas.
-
Does the capacity to think require the capacity to sense? A lively debate on this topic runs throughout the history of philosophy and now animates discussions of artificial intelligence. I argue that in principle, there can be pure thinkers: thinkers that lack the capacity to sense altogether. I also argue for significant limitations in just what sort of thought is possible in the absence of the capacity to sense. Regarding AI, I do not argue directly that large language models can think or understand, but I rebut one important argument (the argument from sensory grounding) that they cannot. I also use recent results regarding language models to address the question of whether or how sensory grounding enhances cognitive capacities.
-
With Large Language Models (LLMs) exhibiting astounding abilities in human language processing and generation, a crucial debate has emerged: do they truly understand what they process, and can they be conscious? While the nature of consciousness remains elusive, this synthetic article sheds light on its subjective aspect as well as on some aspects of LLM understanding. Indeed, it can be shown, under specific conditions, that a cognitive system does not have any subjective consciousness. To this end, the principle of a proof, based on a variation of John Searle's Chinese Room thought experiment, is developed. The demonstration is carried out on a transformer-based language model; however, it could be extended to many kinds of cognitive systems with known architecture and functioning. The main conclusions are that while transformer-based LLMs lack subjective consciousness, owing in a nutshell to the absence of a central subject, they exhibit a form of “asubjective phenomenal understanding” demonstrable through various tasks and tests. This opens a new perspective on the nature of understanding itself, which can be uncoupled from any subjective experience.
-
Is artificial consciousness theoretically possible? Is it plausible? If so, is it technically feasible? To make progress on these questions, it is necessary to lay some groundwork clarifying the logical and empirical conditions for artificial consciousness to arise and the meaning of relevant terms involved. Consciousness is a polysemic word: researchers from different fields, including neuroscience, Artificial Intelligence, robotics, and philosophy, among others, sometimes use different terms in order to refer to the same phenomena or the same terms to refer to different phenomena. In fact, if we want to pursue artificial consciousness, a proper definition of the key concepts is required. Here, after some logical and conceptual preliminaries, we argue for the necessity of using dimensions and profiles of consciousness for a balanced discussion about their possible instantiation or realisation in artificial systems. Our primary goal in this paper is to review the main theoretical questions that arise in the domain of artificial consciousness. On the basis of this review, we propose to assess the issue of artificial consciousness within a multidimensional account. The theoretical possibility of artificial consciousness is already presumed within some theoretical frameworks; however, empirical possibility cannot simply be deduced from these frameworks but needs independent empirical validation. Analysing the complexity of consciousness we here identify constituents and related components/dimensions, and within this analytic approach reflect pragmatically about the general challenges that the creation of artificial consciousness confronts. Our aim is not to demonstrate conclusively either the theoretical plausibility or the empirical feasibility of artificial consciousness, but to outline a research strategy in which we propose that "awareness" may be a potentially realistic target for realisation in artificial systems.
-
Artificial intelligence (AI) has led technological evolution over recent decades and will continue to do so, and the potential of conscious artificial intelligence is emerging as an innovative field to address. Since its conception, the computer has been intended to handle repetitive and tedious tasks for humans. Whether AI is conscious did not provoke much discussion before the advent of machine learning; with it, the machine, instead of merely taking input and generating output, can learn while processing information, which resembles a human's learning process. This paper therefore delves into the complex terrain of AI to explore the theoretical possibility of endowing machines with consciousness and addresses the future concerns and potentials of AI. Drawing on ethical concerns, metaphysical perspectives on consciousness, and the latest advancements in AI, the study critically examines whether machines can possess a consciousness akin to that of humans as we understand it.
-
As today's ever-advancing AI develops unrelentingly, the revolutionary drive to animate matter, to blend the mechanical with the biological, and to create exact replicas of the human brain bearing traits of individuality has become an actively debated topic in serious academic study as well as in science fiction. Radically changing the way we interact with machines and computers, the prospect of ‘artificial consciousness’, whose driving aspiration is precisely such replication, has raised crucial questions: Could consciousness be embedded in AI machines? Would these machines ever become sentient, autonomous, and human-like? Could they truly interpret needs and have subjective experiences, distinct emotions, memories, thought processes, and beliefs of their own? Inspired by the techno-optimist approach of ‘Transhumanism’ and instigated by Ray Kurzweil's theorization of the ‘Technological Singularity’, the present paper is mainly concerned with demonstrating the unintended consequences of transgressing what has been ‘designed’ by nature. More precisely, it investigates the prospect of ‘Artificial Consciousness’, the plausibility of embedding and fully extending consciousness onto AI machines, while questioning the transhumanist framing of technology as a form of transcendence. To this end, an in-depth, close textual analysis is conducted of Jack Paglen's science fiction novelization ‘Transcendence’ (2014), reaching the conclusion that technology is still a long way from attaining artificial consciousness. In other words, there is something intrinsic, special, and unique about human consciousness that cannot be replicated or captured by technology.
-
This chapter discusses two ways of looking at the topic of artificial intelligence (AI), selfhood and artificial consciousness. The first is to reflect on human-AI interaction focusing on the human users. This includes how humans respond to and interact with AI, as well as potential selfhood-related implications of interacting with AI. The second is to reflect on potential future machine selfhood and its crucial component, artificial consciousness. While artificial consciousness that resembles human consciousness does not yet exist, and the details are difficult if not impossible to anticipate, a reflection on potential future artificial consciousness is clearly needed given its extensive ethical and social implications.
-
This paper explores the behavior and implications of sequences transitioning between acceptable and unacceptable states, particularly in the context of artificial consciousness. Using the framework of absorbing state transition sequences and applying Kolmogorov's 0-1 Law, we analyze the probability of a sequence eventually reaching an absorbing (unacceptable) state. We demonstrate that if there is a countably infinite number of indices with nonzero transition probabilities, the probability of reaching the absorbing state is 1. The paper extends these mathematical results to philosophical and ethical discussions, examining the inevitability of failure in systems with persistent nonzero transition probabilities and the ethical considerations for developing artificial consciousness. Strategies for minimizing transition probabilities, establishing ethical guidelines, and implementing self-correcting mechanisms are proposed to ensure the propagation of acceptable states. The findings underscore the importance of robust design and ethical oversight in the creation and maintenance of artificial consciousness systems.
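The abstract does not reproduce the argument; one way such a result can go, under assumptions stated here for illustration only (per-step absorption chances bounded below, or independent transition events), is the following sketch:

```latex
% Illustrative reconstruction, not the paper's proof. Assumption (ours): at
% infinitely many steps n_1 < n_2 < ..., the probability of entering the
% absorbing state, given that it has not yet been entered, is at least eps > 0.
\[
  P(\text{never absorbed}) \;\le\; \prod_{k=1}^{\infty}\bigl(1-\varepsilon\bigr) \;=\; 0,
\]
% so absorption eventually occurs with probability 1. In an independent-events
% reading, $\{A_n \text{ i.o.}\}$ is a tail event, so Kolmogorov's 0--1 law
% gives it probability 0 or 1, and the second Borel--Cantelli lemma
% ($\sum_n p_n = \infty$) forces that probability to be 1.
```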
-
This study presents an inquiry into artificial intelligence (machines) and its potential to develop consciousness. It explores the complex issues surrounding machine consciousness at the nexus of AI, neuroscience, and philosophy, asking the intriguing question: are machines on the verge of becoming conscious beings? The study considers the likelihood of machines displaying self-awareness, and the implications thereof, through an analysis of the current state of AI and its limitations. With advancements in machine learning and cognitive computing, AI systems have nevertheless made significant strides in emulating human-like behavior and decision-making. The prospective emergence of machine consciousness raises questions about the blending of human and artificial intelligence, and ethical considerations are addressed as well. The study offers a glimpse into a multidisciplinary investigation that questions accepted theories of consciousness, tests the limits of what is technologically possible, and asks whether these advancements signify a potential breakthrough in machine consciousness.
-
Two problems related to the identification of consciousness are the distribution problem—or how and among which entities consciousness is distributed in the world—and the moral status problem—or which species, entities, and individuals have moral status. The use of inferences from neurobiological and behavioral evidence, and their confounds, for identifying consciousness in nontypically functioning humans, nonhuman animals, and artificial intelligence is considered in light of significant scientific uncertainty and ethical biases, with implications for both problems. Methodological, epistemic, and ethical consensus are needed for responsible consciousness science under epistemic and ethical uncertainty. Consideration of inductive risk is proposed as a potential tool for managing both epistemic and ethical risks in consciousness science.
-
As artificial intelligence (AI) continues to advance, it is natural to ask whether AI systems can be not only intelligent, but also conscious. I consider why people might think AI could develop consciousness, identifying some biases that lead us astray. I ask what it would take for conscious AI to be a realistic prospect, challenging the assumption that computation provides a sufficient basis for consciousness. I’ll instead make the case that consciousness depends on our nature as living organisms – a form of biological naturalism. I lay out a range of scenarios for conscious AI, concluding that real artificial consciousness is unlikely along current trajectories, but becomes more plausible as AI becomes more brain-like and/or life-like. I finish by exploring ethical considerations arising from AI that either is, or convincingly appears to be, conscious. If we sell our minds too cheaply to our machine creations, we not only overestimate them – we underestimate our selves.
-
Objective: The ultimate goal of this paper is to describe a realistic future in which humanity and life can survive immortally by creating, from a human master, humanoid robots bearing that human's consciousness, robots which would serve the master as companions and learn everything about the master's consciousness. Contributions: The paper presents a groundbreaking methodology for the immortality of humanoid robots with a human consciousness. We emphasize that current humanoid robotics technologies have reached the sophistication needed to design and fabricate intelligent AI computers that allow humanoids to survive immortally. Once a human life is close to its end (through age or sickness), the humanoid takes over and can stay alive as long as it has the energy to live on. These humanoids could even travel through space and to other planets, opening up a whole new frontier for exploration and life, and could exploit quantum entanglement to move through space to any destination.
-
The nature of consciousness in the context of artificial intelligence (AI) presents a problem that necessitates analysis and further exploration. This study seeks to redefine human-technology relationships by examining the intersection of consciousness and AI, including metaphysical implications and considerations. The primary objectives are to define consciousness within the context of AI, assess the potential for AI to exhibit consciousness, investigate the metaphysical implications for human experiences, and explore the ethical dimensions. The research findings indicate that consciousness involves self-awareness, perception, intentionality, and subjective experiences. While AI can achieve advanced cognitive abilities, the existence of higher-order consciousness remains uncertain, raising metaphysical questions about the nature of subjective awareness. The hard problem of consciousness highlights the challenge of bridging physical processes and subjective experiences, underscoring the need for metaphysical considerations. Ethical implications of AI integration and its impact on human experiences are also examined. Recommendations include further research on consciousness in AI, the development of ethical frameworks that account for metaphysical dimensions, and the exploration of the extended mind hypothesis to integrate AI as an augmentation of human consciousness. By addressing metaphysical implications and considerations, we can navigate the evolving landscape of AI and redefine human-technology relationships in a responsible, inclusive, and metaphysically informed manner.
-
As the apparent intelligence of artificial neural networks (ANNs) advances, they are increasingly likened to the functional networks and information processing capabilities of the human brain. Such comparisons have typically focused on particular modalities, such as vision or language. The next frontier is to use the latest advances in ANNs to design and investigate scalable models of higher-level cognitive processes, such as conscious information access, which have historically lacked concrete and specific hypotheses for scientific evaluation. In this work, we propose and then empirically assess an embodied agent with a structure based on global workspace theory (GWT) as specified in the recently proposed “indicator properties” of consciousness. In contrast to prior works on GWT which utilized single modalities, our agent is trained to navigate 3D environments based on realistic audiovisual inputs. We find that the global workspace architecture performs better and more robustly at smaller working memory sizes, as compared to a standard recurrent architecture. Beyond performance, we perform a series of analyses on the learned representations of our architecture and share findings that point to task complexity and regularization being essential for feature learning and the development of meaningful attentional patterns within the workspace.
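The abstract gives no implementation details; as a purely illustrative sketch of the kind of architecture it describes (a small attention bottleneck of workspace slots over multimodal features, broadcast back to downstream layers), one might write something like the following, with all module names, sizes, and the two-modality setup assumed rather than taken from the paper:

```python
# Minimal illustrative sketch (not the paper's code) of a global-workspace-style
# bottleneck: per-modality features compete via attention to write into a small
# set of workspace slots, which are then broadcast back to the modality streams.
import torch
import torch.nn as nn

class GlobalWorkspace(nn.Module):
    def __init__(self, feat_dim=64, n_slots=4, n_heads=4):
        super().__init__()
        # Learned workspace slots act as queries; modality features are keys/values.
        self.slots = nn.Parameter(torch.randn(n_slots, feat_dim))
        self.write = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        self.broadcast = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)

    def forward(self, modality_feats):
        # modality_feats: (batch, n_modalities, feat_dim), e.g. [vision, audio] encodings.
        b = modality_feats.size(0)
        slots = self.slots.unsqueeze(0).expand(b, -1, -1)
        # Write step: each slot attends over the competing modality features.
        workspace, attn = self.write(slots, modality_feats, modality_feats)
        # Broadcast step: modality streams read the workspace contents back out.
        out, _ = self.broadcast(modality_feats, workspace, workspace)
        return out, workspace, attn

if __name__ == "__main__":
    gw = GlobalWorkspace()
    vision = torch.randn(2, 1, 64)   # dummy visual features
    audio = torch.randn(2, 1, 64)    # dummy auditory features
    out, ws, attn = gw(torch.cat([vision, audio], dim=1))
    print(out.shape, ws.shape, attn.shape)  # (2, 2, 64) (2, 4, 64) (2, 4, 2)
```

The point of such a bottleneck is that the number of slots is far smaller than the full feature stream, mirroring the abstract's observation that the workspace architecture performs well at small working-memory sizes.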
-
I propose a test for machine self-awareness inspired by the Turing test. My test is simple, and it provides an objective, empirical metric to rectify the ungrounded speculation surging through industry, academia, and social media. Drawing from a breadth of philosophical literature, I argue the test captures the essence of self-awareness, rather than some postulated correlate or ancillary quality. To begin, the concept of self-awareness is clearly demarcated from related concepts like consciousness, agency, and free will. Next, I propose a model called the Nesting Doll of Self-Awareness and discuss its relevance for intelligent beings. Then, the test is presented in its full generality, applicable to any machine system. I show how to apply the test to Large Language Models and conduct experiments on popular open and closed source LLMs, obtaining reproducible results that suggest a lack of self-awareness. The implications of machine self-awareness are discussed in relation to questions about meaning and true understanding. Finally, some next steps are outlined for studying self-awareness in machines.
-
The potential of conscious artificial intelligence (AI), with functional systems that surpass automation and rely on elements of understanding, is a beacon of hope in the AI revolution. The shift from automation to conscious AI, once automation is replaced with machine understanding, offers a future where AI can comprehend without needing to experience, thereby revolutionizing the field. In this context, the proposed Dynamic Organicity Theory of consciousness (DOT) stands out as a promising and novel approach for building artificial consciousness that is more brain-like, with physiological nonlocality and the diachronicity of self-referential causal closure. However, deep learning algorithms rely on “black box” techniques, such as “dirty hooks”, to make the algorithms operational by discovering arbitrary functions from a trained set of dirty data, rather than prioritizing models of consciousness that accurately represent intentionality as intentions-in-action. The limitations of the “black box” approach in deep learning present a significant challenge, as quantum information biology, or intrinsic information, is associated with subjective physicalism and cannot be predicted with Turing computation. This paper suggests that deep learning algorithms effectively decode labeled datasets but not dirty data, owing to unlearnable noise, and that encoding intrinsic information is beyond the capabilities of deep learning. New models based on DOT are necessary to decode intrinsic information by understanding meaning and reducing uncertainty. The process of “encoding” entails functional interactions as evolving informational holons, forming informational channels in the functionality space of time consciousness. The “quantum of information” functionality is the motivity of (negentropic) action as change in functionality through thermodynamic constraints that reduce informational redundancy (also referred to as intentionality) in informational pathways. It denotes a measure of epistemic subjectivity toward machine understanding beyond the capabilities of deep learning.