
Full bibliography (675 resources)

  • This chapter addresses the hypothesis that AI can replace human intelligence in the foreseeable future. The chapter argues that artificial intelligence (AI) is a sophisticated form of objectified consciousness and, as such, is fundamentally different from living human intelligence. Whereas living intelligence is a part of living consciousness, AI is the product of cognitive aspects of living consciousness that are translated into digital codes, turned into algorithms, and fed into computers. It is argued that the concepts of living consciousness and living intelligence are inextricably linked to the concept of life. The concept of life in science is considered to show that it is impossible to logically derive subjective experience from natural processes. It is concluded that life is a supernatural event, as its essential feature is the ability of a living entity to reflect upon its own states. Whereas living intelligence is free from the unbreakable cause-effect continuum of nature, AI is completely locked within this continuum.

  • This chapter starts with the cliché of the smart home that has gone rogue and introduces the question of whether these integrated, distributed systems can have ethical frameworks like human ethics that could prevent the science-fictional trope of the evil, sentient house. I argue that such smart systems are not a threat on their own, because these kinds of integrated, distributed systems are not the kind of things that could be conscious, a precondition for having ethics like ours (and ethics like ours enable the possibility of being the kinds of things that could be evil). To make these arguments, I look to the history of AI/artificial consciousness and 4e cognition, concluding with the idea that our human ethics as designers and consumers of these systems is the real ethical concern with smart life systems.

  • Artificial intelligence (AI) can be characterized as a multidisciplinary approach combining computer science and robust datasets that seeks to equip machines to perform tasks that ordinarily require human intelligence. These tasks include the capacity to learn, adapt, reason, and comprehend abstract concepts, as well as responsiveness to complex human attributes such as attention, emotion, and creativity. The promising utility of AI in healthcare has been demonstrated with potential benefits in personalized medicine, drug discovery, and the analysis of large datasets, alongside likely applications to improve diagnoses and clinical decisions.1 A recent topic of debate in the digital world has been artificial intelligence, particularly ChatGPT. ChatGPT is an AI trained on large text datasets in multiple languages with the capacity to generate human-like responses to text input, developed by OpenAI (OpenAI, L.L.C., San Francisco, CA, USA). The name ChatGPT reflects its nature as a chatbot (a program able to understand and produce responses through a text-based interface) and its basis in the generative pre-trained transformer (GPT) architecture.2 ChatGPT can answer questions and compose various written content, including articles, social media posts, essays, code, and emails. Researchers and the scholarly community have had mixed reactions to this tool regarding its benefits versus its risks. On the one hand, ChatGPT, among other large language models (LLMs), can be helpful in conversational and writing tasks, helping to increase the efficiency and accuracy of the required output. It is not a web search engine, reference librarian, or even Wikipedia; presenting factual information is not its purpose.3 As such, some teachers and content experts have already found flaws in the mathematical and scientific output it produces. Teachers have also found that it will generate citations and reference lists that look genuine but do not exist. Furthermore, the utility of AI chatbots in the medical field is a fascinating area to test, given the enormous amount of information and the diverse concepts that healthcare students are expected to grasp. Microsoft has likewise announced that ChatGPT will be integrated into Bing to create a richer search and learning experience.4 With the world's biggest technology companies competing to integrate GPT technology into their tools, new avenues of AI exploration are on the horizon for the field of education. While it can be useful in many ways, there are several risks in using ChatGPT, for example, assuming that it produces trustworthy results, privileging AI-generated text over human-written text, sharing personal and sensitive data, disregarding terms of use, and increasing reliance on automation. Furthermore, security concerns and the potential for cyberattacks spreading misinformation via LLMs should also be considered.
In healthcare practice and academic writing, factual errors, ethical issues, and fears of misuse, including the spread of misinformation, should be considered.5 ChatGPT and its successors can provide teachers and students with equal opportunities to improve their learning, a significant level of writing support, and encouragement toward innovative thinking. However, as with the implementation of any new technology, its use carries many risks and the potential for misuse. Misinformation and bias found within ChatGPT's responses, combined with instances of cheating and plagiarism, have worried education professionals. While some districts and institutions have acted quickly to ban ChatGPT, we instead believe, along with Kranzberg (1986), that "technology is neither good nor bad; nor is it neutral" (p. 545).6 As the world debates the educational and societal consequences of ChatGPT and artificial intelligence, what remains clear is that the continued development and refinement of this kind of technology show that it is not going away. Educators, administrators, and policymakers must proactively seek to educate themselves and their students on how to use these tools ethically and responsibly. Educators should also understand the limits of using AI tools and that, while every technology presents both affordances and challenges, each also comes with its own embedded risks.

  • We apply the methodology of no-go theorems as developed in physics to the question of artificial consciousness. The result is a no-go theorem which shows that under a general assumption, called dynamical relevance, Artificial Intelligence (AI) systems that run on contemporary computer chips cannot be conscious. Consciousness is dynamically relevant, simply put, if, according to a theory of consciousness, it is relevant for the temporal evolution of a system’s states. The no-go theorem rests on facts about semiconductor development: that AI systems run on central processing units, graphics processing units, tensor processing units, or other processors which have been designed and verified to adhere to computational dynamics that systematically preclude or suppress deviations. Whether our result resolves the question of AI consciousness on contemporary processors depends on the truth of the theorem’s main assumption, dynamical relevance, which this paper does not establish.

  • Rapid advancements in large language models (LLMs) have renewed interest in the question of whether consciousness can arise in an artificial system, like a digital computer. The general consensus is that LLMs are not conscious. This paper evaluates the main arguments against artificial consciousness in LLMs and argues that none of them show what they intend. However strong our intuitions against artificial consciousness are, they currently lack rational support.

  • The article attempts to complement modern discussions of the criteria for strong AI with debates from the philosophy of consciousness. The popular ideas of D. Dennett ("multiple drafts"), J. Searle (causal emergent description), and D. Chalmers (synthetic approach to understanding consciousness) are compared with the history of the formation of the AI problem. Despite wide discussion of the problems of consciousness and of artificial forms of intelligence (strong and weak), the theories and arguments of philosophers about the psychophysiological problem remain relevant. It is assumed that clarifying the mechanism of the analytical work of consciousness, the creative potential of the individual, and the ability to capture a variety of phenomena in categorical forms, building axiomatic and synthetic judgments, will expand the tools of machine learning. To complement existing ideas about consciousness in the context of the prevalence of information approaches (D.I. Dubrovsky) and the analytical tradition (V.V. Vasiliev), the key positions on the psychophysiological problem identified in the history of German and Russian philosophy are presented. Given the complexity and versatility of the identified problems (the definition of consciousness, the psychophysiological problem, the definition of AI, the demarcation of weak and strong forms of AI, the importance of language for building structures of thinking, analog thinking and its capabilities), the content of the article is limited to analyzing emerging trends in philosophy and identifying prospects for further deepening into the problem.

  • The quest to create artificial consciousness stands as a formidable challenge at the intersection of artificial intelligence and cognitive science. This paper delves into the theoretical underpinnings, methodological approaches, and ethical considerations surrounding the concept of machine consciousness. By integrating insights from computational modeling, neuroscience, and philosophy, we propose a roadmap for comprehending and potentially realizing conscious behavior in artificial systems. Furthermore, we address the critical challenges of validating machine consciousness, ensuring its safe development, and navigating its integration into society.

  • Understanding consciousness remains one of neuroscience’s greatest challenges. While classical neurophysiology explains many features of brain activity,

  • This work is intended as a voice in the discussion over previous claims that a pretrained large language model (LLM) based on the Transformer model architecture can be sentient. Such claims have been made concerning the LaMDA model and also concerning the current wave of LLM-powered chatbots, such as ChatGPT. This claim, if confirmed, would have serious ramifications in the Natural Language Processing (NLP) community due to widespread use of similar models. However, here we take the position that such a large language model cannot be conscious, and that LaMDA in particular exhibits no advances over other similar models that would qualify it. We justify this by analysing the Transformer architecture through Integrated Information Theory of consciousness. We see the claims of sentience as part of a wider tendency to use anthropomorphic language in NLP reporting. Regardless of the veracity of the claims, we consider this an opportune moment to take stock of progress in language modelling and consider the ethical implications of the task. In order to make this work helpful for readers outside the NLP community, we also present the necessary background in language modelling.

  • In this paper, we use the recent appearance of LLMs and GPT-equipped robotics to raise questions about the nature of semantic meaning and how this relates to issues concerning artificially-conscious machines. To do so, we explore how a phenomenology constructed out of the association of qualia (defined as somatically-experienced sense data) and situated within a 4e enactivist program gives rise to intentional behavior. We argue that a robot without such a phenomenology is semantically empty and, thus, cannot be conscious in any way resembling human consciousness. Finally, we use this platform to address and supplement widely-discussed concerns regarding the dangers of attempting to produce artificially-conscious machines.

  • With incredible speed, Large Language Models (LLMs) are reshaping many aspects of society. This has been met with unease by the public, and public discourse is rife with questions about whether LLMs are or might be conscious. Because there is widespread disagreement about consciousness among scientists, any concrete answers that could be offered to the public would be contentious. This paper offers the next best thing: charting the possibility space for consciousness in LLMs. So, while it is too early to judge the possibility of LLM consciousness, our charting of the possibility space may serve as a temporary guide for theorizing about it.

  • This study seeks to bridge the gap between narrative memory in human cognition and artificial agents by proposing a unified model. Narrative memory, fundamental to human consciousness, organizes experiences into coherent stories, influencing memory structuring, retention, and retrieval. By integrating insights from human cognitive frameworks and artificial memory architectures, this work aims to emulate these narrative processes in artificial systems. The proposed model adopts a multi-layered approach, combining elements of episodic and semantic memory with narrative structuring techniques. It explores how artificial agents can construct and recall narratives to enhance their understanding, decision-making, and adaptive capabilities. By simulating narrative-based memory processing, we assess the model’s effectiveness in replicating human-like retention and retrieval patterns. Applications include improved human-AI interaction, where agents understand context and nuance, and advancements in machine learning, where narrative memory enhances data interpretation and predictive analytics. By unifying the cognitive and computational perspectives, this study offers a step toward more sophisticated, human-like artificial systems, paving the way for deeper explorations into the intersection of memory, narrative, and consciousness.

  • The paper tries to show the line of demarcation between man and posthuman with regard to their intellect and bodily simulation. Man is man; the machine cannot replace him. Robots, cyborgs, and ultrasonic technological artifacts cannot substitute for human intellect. Human intellect can be transferred and downloaded like data, but human consciousness is something unique and non-transferable. The novel The Variable Man by Philip K. Dick is used to prove the point. Hayles's (1999) theory of the posthuman helps to probe the issue of the new form of human identity termed the posthuman. The research shows that technology is becoming the subject by turning man into an object, which Hayles calls the posthuman. She provides a detailed theoretical discussion of cybernetic identities and the complexities of being posthuman. The research implies that whatever developments there may be in the fields of robot technology and cyborgs, the human powers of reasoning and consciousness remain unsurpassable.

  • The advancement of artificial intelligence (AI) toward self-awareness and emotional capacity is a critical area of research. Despite AI's success in specialized tasks, it has yet to exhibit true self-awareness or emotional intelligence. Previous research has emphasized the importance of feedback loops and interfaces in enabling both biological and artificial systems to process information and exhibit self-aware behaviors. Notably, in our earlier work, we proposed a unified model of consciousness (Watchus, 2024), which highlighted recursive feedback loops in both biological and artificial systems and explored the insula's role in self-awareness (Watchus, 2024). Building upon these foundations, the current study investigates how dual embodiment, mirror testing, and emotional feedback mechanisms can simulate self-awareness in AI systems. By integrating internal self-models with external sensory interfaces, we propose that emotional feedback can enhance AI's self-reflection and adaptability. Through the use of a physical robot dog (Unitree Go2) and a virtual embodiment, we explore how sensory experiences and self-reflective tasks foster pseudo-emotional states like curiosity, self-doubt, and determination, advancing the potential for AI systems to develop pseudo-self-awareness.

  • We propose the DIKWP-TRIZ framework, an innovative extension of the traditional Theory of Inventive Problem Solving (TRIZ) designed to address the complexities of cognitive processes and artificial consciousness. By integrating the elements of Data, Information, Knowledge, Wisdom, and Purpose (DIKWP) into the TRIZ methodology, the proposed framework emphasizes a value-oriented approach to innovation, enhancing the ability to tackle problems characterized by incompleteness, inconsistency, and imprecision. Through a systematic mapping of TRIZ principles to DIKWP transformations, we identify potential overlaps and redundancies, providing a refined set of guidelines that optimize the application of TRIZ principles in complex scenarios. The study further demonstrates the framework’s capacity to support advanced decision-making and cognitive processes, paving the way for the development of AI systems capable of sophisticated, human-like reasoning. Future research will focus on comparing the implementation paths of DIKWP-TRIZ and traditional TRIZ, analyzing the complexities inherent in DIKWP-TRIZ-based innovation, and exploring its potential in constructing artificial consciousness systems.

  • This paper examines Kazuo Ishiguro's Klara and the Sun through the lens of posthumanism. It uses the textual analysis method to analyze Ishiguro's text as a posthuman novel that depicts a posthuman society where the boundaries between the human and the nonhuman are blurred. The basic argument is that the aim of Ishiguro's text is two-fold: while it clearly illustrates the inability of the humanoid robot to attain human consciousness, it also attempts to dismantle the anthropocentric view of man. The findings show that Klara, the narrator-protagonist, is used as a tool to raise certain questions, such as: can humanoids act humanly? Can a 'humanoid machine' attain consciousness? And, more importantly, what does it mean to be human in the first place? In doing so, the story attempts to showcase the ruptured boundaries between human and nonhuman and the changing ideas of humankind and its entanglement with the nonhuman world. Further, the interaction between Klara (AF) and other characters in the story is developed in such a way as to illustrate not only the shortcomings of humans regarding faith and affection but, more importantly, the limits of the nonhuman machine. It dismisses the current debate among technology experts that artificial intelligence will soon be able to develop a human-like robot that enjoys similar human emotional signals and reacts exactly like humans. The story simply puts it that, despite the defects of humans, nothing can replace humans, as these artificial friends (AFs) fundamentally lack the kinds of experience that give rise to human-like affect and emotion.

  • Interest has been renewed in the study of consciousness, both theoretical and applied, following developments in 20th and early 21st century logic, metamathematics, computer science, and the brain sciences. In this evolving historical narrative, I explore several theoretical questions about the types of artificial intelligence and offer several conjectures about how they affect possible future developments in this exceptionally transformative field of research. I also address the practical significance of the advances in artificial intelligence in view of the cautions issued by prominent scientists, politicians, and ethicists about the possible historically unique dangers of such sufficiently advanced general intelligence, including by implication the search for extraterrestrial intelligence. Integrating both the theoretical and practical issues, I ask the following: (a) is sufficiently advanced general robotic intelligence identical to, or alternatively, ambiguously indistinguishable from human intelligence and human consciousness, and if so, (b) is such an earthly robotic intelligence a kind of consciousness indistinguishable from a presumptive extraterrestrial robotic consciousness, and if so, (c) is such a human-created robot preferably able to serve as a substitute for or even entirely supplant human intelligence and consciousness in certain exceptionally responsible roles? In the course of this investigation of artificial intelligence and consciousness, I also discuss the inter-relationships of these topics more generally within the theory of mind, including, emergence, free will, and meaningfulness, and the implications of quantum theory for alternative cosmological ontologies that offer suggestive answers to these topics, including how they relate to Big History.

  • The article discusses key aspects of the creation of artificial intelligence, including issues of free will, self-awareness, and ethics. The focus is on the neurobiological basis of consciousness, in particular the structure and functions of the neocortex, as well as the mechanisms of recognition, memory, and prediction, which are important for modelling cognitive processes in artificial systems. The paper discusses the role of neural networks in reproducing cognitive functions such as perception and decision making, and presents modern approaches to training neural networks. A separate part of the paper is devoted to modelling self-awareness and subjective experience in artificial intelligence and to how realistic it is to create self-aware machines. Ethical issues of artificial intelligence creation are at the centre of the discussion, including the rights of self-aware machines, their responsibilities, and their role in society. The article considers the possible social consequences of the emergence of artificial personalities and the need to develop new legal frameworks and legal protections for such beings. It also discusses the problem of free will in the context of both biological and artificial systems, citing experiments and philosophical theories that question free will as a phenomenon. It concludes that the creation of artificial intelligence has great potential but requires careful ethical and legal analysis to ensure the harmonious integration of artificial persons into social and legal structures.

Last update from database: 1/1/26, 2:00 AM (UTC)