Full bibliography (704 resources)
-
This work is intended as a voice in the discussion over previous claims that a pretrained large language model (LLM) based on the Transformer architecture can be sentient. Such claims have been made concerning the LaMDA model and also concerning the current wave of LLM-powered chatbots, such as ChatGPT. This claim, if confirmed, would have serious ramifications in the Natural Language Processing (NLP) community due to the widespread use of similar models. However, here we take the position that such a large language model cannot be conscious, and that LaMDA in particular exhibits no advances over other similar models that would qualify it. We justify this by analysing the Transformer architecture through the Integrated Information Theory of consciousness. We see the claims of sentience as part of a wider tendency to use anthropomorphic language in NLP reporting. Regardless of the veracity of the claims, we consider this an opportune moment to take stock of progress in language modelling and consider the ethical implications of the task. To make this work helpful for readers outside the NLP community, we also present the necessary background in language modelling.
-
In this paper, we use the recent appearance of LLMs and GPT-equipped robotics to raise questions about the nature of semantic meaning and how it relates to issues concerning artificially conscious machines. To do so, we explore how a phenomenology constructed out of the association of qualia (defined as somatically experienced sense data) and situated within a 4E enactivist program gives rise to intentional behavior. We argue that a robot without such a phenomenology is semantically empty and, thus, cannot be conscious in any way resembling human consciousness. Finally, we use this platform to address and supplement widely discussed concerns regarding the dangers of attempting to produce artificially conscious machines.
-
With incredible speed, Large Language Models (LLMs) are reshaping many aspects of society. This has been met with unease by the public, and public discourse is rife with questions about whether LLMs are or might be conscious. Because there is widespread disagreement about consciousness among scientists, any concrete answers that could be offered to the public would be contentious. This paper offers the next best thing: charting the possibility of consciousness in LLMs. So, while it is too early to render a judgment on the possibility of LLM consciousness, our charting of the possibility space may serve as a temporary guide for theorizing about it.
-
This study seeks to bridge the gap between narrative memory in human cognition and artificial agents by proposing a unified model. Narrative memory, fundamental to human consciousness, organizes experiences into coherent stories, influencing memory structuring, retention, and retrieval. By integrating insights from human cognitive frameworks and artificial memory architectures, this work aims to emulate these narrative processes in artificial systems. The proposed model adopts a multi-layered approach, combining elements of episodic and semantic memory with narrative structuring techniques. It explores how artificial agents can construct and recall narratives to enhance their understanding, decision-making, and adaptive capabilities. By simulating narrative-based memory processing, we assess the model’s effectiveness in replicating human-like retention and retrieval patterns. Applications include improved human-AI interaction, where agents understand context and nuance, and advancements in machine learning, where narrative memory enhances data interpretation and predictive analytics. By unifying the cognitive and computational perspectives, this study offers a step toward more sophisticated, human-like artificial systems, paving the way for deeper explorations into the intersection of memory, narrative, and consciousness.
-
The paper tries to show the line of demarcation between man and posthuman with regard to intellect and bodily simulation. Man is man; the machine cannot replace him. Robots, cyborgs, and ultrasonic technological artifacts cannot substitute for human intellect. Human intellect can be transferred and downloaded like data, but human consciousness is something unique and non-transferable. The novel The Variable Man by Philip K. Dick is used to prove the point. Hayles's (1999) theory of the posthuman helps to probe the new form of human identity termed the posthuman. The research shows that technology is becoming the subject by turning man into an object, which Hayles calls the posthuman. She provides a detailed theoretical discussion of cybernetic identities and the complexities of being posthuman. The research implies that whatever developments there may be in the fields of robotics and cyborgs, the human powers of reasoning and consciousness remain unsurpassable.
-
The advancement of artificial intelligence (AI) toward self-awareness and emotional capacity is a critical area of research. Despite AI's success in specialized tasks, it has yet to exhibit true self-awareness or emotional intelligence. Previous research has emphasized the importance of feedback loops and interfaces in enabling both biological and artificial systems to process information and exhibit self-aware behaviors. Notably, in our earlier work, we proposed a unified model of consciousness (Watchus, 2024), which highlighted recursive feedback loops in both biological and artificial systems and explored the insula's role in self-awareness (Watchus, 2024). Building upon these foundations, the current study investigates how dual embodiment, mirror testing, and emotional feedback mechanisms can simulate self-awareness in AI systems. By integrating internal self-models with external sensory interfaces, we propose that emotional feedback can enhance AI's self-reflection and adaptability. Through the use of a physical robot dog (Unitree Go2) and a virtual embodiment, we explore how sensory experiences and self-reflective tasks foster pseudo-emotional states like curiosity, self-doubt, and determination, advancing the potential for AI systems to develop pseudo-self-awareness.
-
In this report, we argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future. That means that the prospect of AI welfare and moral patienthood, i.e. of AI systems with their own interests and moral significance, is no longer an issue only for sci-fi or the distant future. It is an issue for the near future, and AI companies and other actors have a responsibility to start taking it seriously. We also recommend three early steps that AI companies and other actors can take: They can (1) acknowledge that AI welfare is an important and difficult issue (and ensure that language model outputs do the same), (2) start assessing AI systems for evidence of consciousness and robust agency, and (3) prepare policies and procedures for treating AI systems with an appropriate level of moral concern. To be clear, our argument in this report is not that AI systems definitely are, or will be, conscious, robustly agentic, or otherwise morally significant. Instead, our argument is that there is substantial uncertainty about these possibilities, and so we need to improve our understanding of AI welfare and our ability to make wise decisions about this issue. Otherwise there is a significant risk that we will mishandle decisions about AI welfare, mistakenly harming AI systems that matter morally and/or mistakenly caring for AI systems that do not.
-
We propose the DIKWP-TRIZ framework, an innovative extension of the traditional Theory of Inventive Problem Solving (TRIZ) designed to address the complexities of cognitive processes and artificial consciousness. By integrating the elements of Data, Information, Knowledge, Wisdom, and Purpose (DIKWP) into the TRIZ methodology, the proposed framework emphasizes a value-oriented approach to innovation, enhancing the ability to tackle problems characterized by incompleteness, inconsistency, and imprecision. Through a systematic mapping of TRIZ principles to DIKWP transformations, we identify potential overlaps and redundancies, providing a refined set of guidelines that optimize the application of TRIZ principles in complex scenarios. The study further demonstrates the framework’s capacity to support advanced decision-making and cognitive processes, paving the way for the development of AI systems capable of sophisticated, human-like reasoning. Future research will focus on comparing the implementation paths of DIKWP-TRIZ and traditional TRIZ, analyzing the complexities inherent in DIKWP-TRIZ-based innovation, and exploring its potential in constructing artificial consciousness systems.
-
This paper examines Kazuo Ishiguro's Klara and the Sun through the lens of posthumanism. It uses the method of textual analysis to read Ishiguro's text as a posthuman novel depicting a posthuman society where the boundaries between the human and the nonhuman are blurred. The basic argument is that the aim of Ishiguro's text is two-fold: while it clearly illustrates the inability of the humanoid robot to attain human consciousness, it also attempts to dismantle the anthropocentric view of man. The findings show that Klara, the narrator-protagonist, is used as a tool to raise certain questions: can humanoids act humanly? Can a 'humanoid machine' attain consciousness? And, more importantly, what does it mean to be human in the first place? In doing so, the story showcases the ruptured boundaries between human and nonhuman and the changing ideas of humankind and its entanglement with the nonhuman world. Further, the interaction between Klara (an AF, or Artificial Friend) and the other characters is developed so as to illustrate not only the shortcomings of humans regarding faith and affection but, more importantly, the limits of the nonhuman machine. It dismisses the current claim among technology experts that artificial intelligence will soon be able to produce a human-like robot that registers similar emotional signals and reacts exactly like a human. The story puts it simply: despite the defects of humans, nothing can replace them, for those artificial friends fundamentally lack the kinds of experience that give rise to human-like affect and emotion.
-
Interest has been renewed in the study of consciousness, both theoretical and applied, following developments in 20th and early 21st century logic, metamathematics, computer science, and the brain sciences. In this evolving historical narrative, I explore several theoretical questions about the types of artificial intelligence and offer several conjectures about how they affect possible future developments in this exceptionally transformative field of research. I also address the practical significance of the advances in artificial intelligence in view of the cautions issued by prominent scientists, politicians, and ethicists about the possible historically unique dangers of such sufficiently advanced general intelligence, including by implication the search for extraterrestrial intelligence. Integrating both the theoretical and practical issues, I ask the following: (a) is sufficiently advanced general robotic intelligence identical to, or alternatively, ambiguously indistinguishable from human intelligence and human consciousness, and if so, (b) is such an earthly robotic intelligence a kind of consciousness indistinguishable from a presumptive extraterrestrial robotic consciousness, and if so, (c) is such a human-created robot preferably able to serve as a substitute for, or even entirely supplant, human intelligence and consciousness in certain exceptionally responsible roles? In the course of this investigation of artificial intelligence and consciousness, I also discuss the inter-relationships of these topics more generally within the theory of mind, including emergence, free will, and meaningfulness, and the implications of quantum theory for alternative cosmological ontologies that offer suggestive answers to these topics, including how they relate to Big History.
-
The article discusses key aspects of artificial intelligence creation, including issues of free will, self-awareness, and ethics. The focus is on the neurobiological basis of consciousness, in particular the structure and functions of the neocortex, as well as the mechanisms of recognition, memory, and prediction, which are important for modelling cognitive processes in artificial systems. The paper discusses the role of neural networks in reproducing cognitive functions, such as perception and decision-making, and presents modern approaches to training neural networks. A separate part of the paper is devoted to the issue of modelling self-awareness and subjective experience in artificial intelligence and how realistic it is to create self-aware machines. Ethical issues of artificial intelligence creation are at the centre of the discussion, including the rights of self-aware machines, their responsibilities, and their role in society. The article considers the possible social consequences of the emergence of artificial personalities and the need to develop new legal frameworks and legal protections for such beings. It also discusses the problem of free will in the context of both biological and artificial systems, citing experiments and philosophical theories that question free will as a phenomenon. It concludes that the creation of artificial intelligence has great potential, but requires careful ethical and legal analysis to ensure the harmonious integration of artificial persons into social and legal structures.
-
Obsession with technology has a long background in the parallel evolution of humans and machines. This obsession became irrevocable when AI began to be a part of our daily lives. However, this integration of AI became a subject of controversy when the fear of AI advancing to acquire consciousness spread among mankind. Artificial consciousness is a long-debated topic in the fields of artificial intelligence and neuroscience, posing many ethical challenges and threats to society, ranging from daily chores to Mars missions. This paper deals with the impact of AI-based science fiction films on society. This study aims to investigate the fascinating concept of artificial consciousness in light of posthuman terminology, technological singularity, and superintelligence by analyzing a set of science fiction films to project the actual difference between science-fictional AI and operational AI. Further, this paper explores the theoretical possibilities of artificial consciousness through a range of neuroscientific theories related to AI development. These theories are oriented toward prospective artificial consciousness in AI. This study discloses the posthuman fallacies built around the fear of AI acquiring artificial consciousness, and their outcomes.
-
Using the events of the HBO series Westworld (2016–2022) as a springboard, this paper attempts to elicit a number of philosophical arguments, dilemmas, and questions concerning technology and artificial intelligence (AI). The paper is intended to encourage readers to learn more about intriguing technophilosophical debates. The first section discusses the dispute between memory and consciousness in the context of an artificially intelligent robot. The second section delves into the issues of reality and morality for humans and AI. The final section speculates on the potential of social interaction between sentient AI and humans. The narrative of the show serves as a glue binding together the various ideas discussed, which in turn makes the philosophical discussions more intriguing.
-
Recent advances in artificial intelligence have reinvigorated the longstanding debate regarding whether or not any aspects of human cognition—notably, high-level creativity—are beyond the reach of computer programs. Is human creativity entirely a computational process? Here I consider one general argument for a dissociation between human and artificial creativity, which hinges on the role of consciousness—inner experience—in human cognition. It appears unlikely that inner experience is itself a computational process, implying that it cannot be instantiated in an abstract Turing machine, nor in any current computer architecture. Psychological research strongly implies that inner experience integrates emotions with perception and with thoughts. This integrative function of consciousness likely plays a key role in mechanisms that support human creativity. This conclusion dovetails with the anecdotal reports of creative individuals, who have linked inner experience with intrinsic motivation to create, and with the ability to access novel connections between ideas.
-
Does the capacity to think require the capacity to sense? A lively debate on this topic runs throughout the history of philosophy and now animates discussions of artificial intelligence. I argue that in principle, there can be pure thinkers: thinkers that lack the capacity to sense altogether. I also argue for significant limitations in just what sort of thought is possible in the absence of the capacity to sense. Regarding AI, I do not argue directly that large language models can think or understand, but I rebut one important argument (the argument from sensory grounding) that they cannot. I also use recent results regarding language models to address the question of whether or how sensory grounding enhances cognitive capacities.
-
With Large Language Models (LLMs) exhibiting astounding abilities in human language processing and generation, a crucial debate has emerged: do they truly understand what they process, and can they be conscious? While the nature of consciousness remains elusive, this synthetic article sheds light on its subjective aspect as well as on some aspects of LLMs' understanding. Indeed, it can be shown, under specific conditions, that a cognitive system does not have any subjective consciousness. To this purpose, the principle of a proof, based on a variation of John Searle's Chinese Room thought experiment, will be developed. The demonstration will be made on a transformer-architecture-based language model; however, it could be carried out and extended to many kinds of cognitive systems with known architecture and functioning. The main conclusions are that while transformer-architecture-based LLMs lack subjective consciousness, owing, in a nutshell, to the absence of a central subject, they exhibit a form of "asubjective phenomenal understanding" demonstrable through various tasks and tests. This opens a new perspective on the nature of understanding itself, which can be uncoupled from any subjective experience.
-
Is artificial consciousness theoretically possible? Is it plausible? If so, is it technically feasible? To make progress on these questions, it is necessary to lay some groundwork clarifying the logical and empirical conditions for artificial consciousness to arise and the meaning of relevant terms involved. Consciousness is a polysemic word: researchers from different fields, including neuroscience, Artificial Intelligence, robotics, and philosophy, among others, sometimes use different terms in order to refer to the same phenomena or the same terms to refer to different phenomena. In fact, if we want to pursue artificial consciousness, a proper definition of the key concepts is required. Here, after some logical and conceptual preliminaries, we argue for the necessity of using dimensions and profiles of consciousness for a balanced discussion about their possible instantiation or realisation in artificial systems. Our primary goal in this paper is to review the main theoretical questions that arise in the domain of artificial consciousness. On the basis of this review, we propose to assess the issue of artificial consciousness within a multidimensional account. The theoretical possibility of artificial consciousness is already presumed within some theoretical frameworks; however, empirical possibility cannot simply be deduced from these frameworks but needs independent empirical validation. Analysing the complexity of consciousness we here identify constituents and related components/dimensions, and within this analytic approach reflect pragmatically about the general challenges that the creation of artificial consciousness confronts. Our aim is not to demonstrate conclusively either the theoretical plausibility or the empirical feasibility of artificial consciousness, but to outline a research strategy in which we propose that "awareness" may be a potentially realistic target for realisation in artificial systems.
-
Artificial intelligence, also known as AI, has led the trend of evolution in past decades and will continue to do so, and conscious artificial intelligence is emerging as an innovative field to address. The computing machine has aimed to process repetitive and tedious tasks for humans since its concept was first developed. Whether AI is conscious did not raise unprecedented discussion before the concept of machine learning appeared. Since its appearance, the machine, instead of merely taking input and generating output, has been able to learn while processing information, resembling a human's learning process. Therefore, this paper delves into the complex terrain of AI to explore the theoretical possibility of endowing machines with consciousness and addresses the future concerns and potentials of AI. Illustrating through the aspects of ethical concerns, metaphysical perspectives on consciousness, and the latest advancements in AI, the study critically examines whether machines can possess a consciousness similar to human understanding.
-
As today's ever-advancing A.I. continues its unrelenting rise, the revolutionary drive to animate matter, blend the mechanical with the biological, and create exact replicas of the human brain bearing traits of individuality has become an actively debated topic in serious academic studies as well as in science fiction. Radically changing the way we interact with machines and computers, the revolutionary prospect of 'artificial consciousness' has raised crucial questions: Could consciousness be embedded in AI machines? Would these machines ever become sentient, autonomous, and human-like? Could they truly interpret needs and have their own subjective experiences, distinct emotions, memories, thought processes, and beliefs, as humans do? Inspired by the techno-optimist approach of 'Transhumanism' and instigated by Ray Kurzweil's theorization of the 'Technological Singularity', the present paper is mainly concerned with demonstrating the unintended consequences of transgressing what has been 'designed' by nature. More precisely, it investigates the prospect of 'Artificial Consciousness', that is, the plausibility of embedding and fully extending consciousness onto A.I. machines, while questioning the transhumanist framing of technology as a form of transcendence. For this purpose, an in-depth close textual analysis is conducted on Jack Paglen's science fiction novelization Transcendence (2014), reaching the conclusion that technology is still a long way from attaining artificial consciousness. In other words, there is something intrinsic, special, and unique about human consciousness that cannot be replicated or captured by technology.
-
This chapter discusses two ways of looking at the topic of artificial intelligence (AI), selfhood and artificial consciousness. The first is to reflect on human-AI interaction focusing on the human users. This includes how humans respond to and interact with AI, as well as potential selfhood-related implications of interacting with AI. The second is to reflect on potential future machine selfhood and its crucial component, artificial consciousness. While artificial consciousness that resembles human consciousness does not yet exist, and the details are difficult if not impossible to anticipate, a reflection on potential future artificial consciousness is clearly needed given its extensive ethical and social implications.