Full bibliography (675 resources)

  • This article explores the development of a cognitive sense of self within artificial intelligence (AI), emphasizing the transformative potential of self-awareness in enhancing AI functionalities for sophisticated interactions and autonomous decision-making. Rooted in interdisciplinary approaches that incorporate insights from cognitive science and practical AI applications, the study investigates the mechanisms through which AI can achieve self-recognition, reflection, and continuity of identity—key attributes analogous to human consciousness. This research is pivotal for fields such as healthcare and robotics, where AI systems benefit from personalized interactions and adaptive responses to complex environments. The concept of a self-aware AI involves the ability for systems to recognize themselves as distinct entities within their operational contexts, which could significantly enhance their functionality and decision-making capabilities. Further, the study delves into the ethical dimensions introduced by the advent of self-aware AI, exploring the profound questions concerning the rights of AI entities and the responsibilities of their creators. The development of self-aware AI raises critical issues about the treatment and status of AI systems, prompting the need for comprehensive ethical frameworks to guide their development. Such frameworks are essential for ensuring that the advancement of self-aware AI aligns with societal values and promotes the well-being of all stakeholders involved. 

  • Today's rapid development of artificial intelligence (AI) confronts us with questions that until recently belonged to philosophy or even science fiction. When can a system be considered intelligent? What is consciousness, and where does it come from? Can systems gain consciousness? It is necessary to keep in mind that, although the development seems revolutionary, the progress is incremental: today's technologies did not emerge from thin air; they are firmly built on previous findings. Now that wild thoughts and theories about where AI development is leading have arisen, it is time to look back at the background theories and summarize what we know about intelligence and consciousness, where they come from, and the different viewpoints on these topics. This paper combines findings from different areas, presents an overview of differing attitudes toward systems consciousness, and emphasizes the role of the systems sciences in advancing knowledge in this area.

  • We have defined the Conscious Turing Machine (CTM) for the purpose of investigating a Theoretical Computer Science (TCS) approach to consciousness. For this, we have hewn to the TCS demand for simplicity and understandability. The CTM is consequently and intentionally a simple machine. It is not a model of the brain, though its design has greatly benefited - and continues to benefit - from neuroscience and psychology. The CTM is a model of and for consciousness. Although it is developed to understand consciousness, the CTM offers a thoughtful and novel guide to the creation of an Artificial General Intelligence (AGI). For example, the CTM has an enormous number of powerful processors, some with specialized expertise, others unspecialized but poised to develop an expertise. For whatever problem must be dealt with, the CTM has an excellent way to utilize those processors that have the required knowledge, ability, and time to work on the problem, even if it is not aware of which ones these may be.

  • Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive "indicator properties" of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.

  • Today, computer science is a central discipline in science and in society because of the innumerable uses of software systems that constantly communicate. Generally speaking, computer science deals with the processing of information, which involves the sequential computation of functions by systems built on state machines as a basic element. Artificial intelligence is, within computing, the study and programming of the mechanisms of reasoning and the use of knowledge in all fields. The systems and software used on computers have developed continuously, and one highlight is the development of autonomous means of communication between systems. The Internet is a remarkable means of communication, linking all computer users and making this network indispensable. The chapter also describes the computer modeling of an artificial psychic system that generates representations for a system with corporeality; in other words, an autonomous system that can intentionally generate artificial thoughts and experience them.

  • When the modern computer was invented more than half a century ago, many felt that the essence of thinking had been found. More recently, several researchers have ventured the idea of designing and implementing a model of artificial consciousness. On the one hand, there is the hope of being able to design a conscious machine; on the other, such models could be useful for understanding human consciousness.

  • There has recently been widespread discussion of whether large language models might be sentient. Should we take this idea seriously? I will break down the strongest reasons for and against. Given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for example, their lack of recurrent processing, a global workspace, and unified agency. At the same time, it is quite possible that these obstacles will be overcome in the next decade or so. I conclude that while it is somewhat unlikely that current large language models are conscious, we should take seriously the possibility that successors to large language models may be conscious in the not-too-distant future.

  • This paper envisions the possibility of a Conscious Aircraft: an aircraft of the future with features of consciousness. To serve this purpose, three main fields are examined: philosophy, cognitive neuroscience, and Artificial Intelligence (AI). While philosophy deals with the concept of what consciousness is, cognitive neuroscience studies the relationship of the brain with consciousness, contributing toward the biomimicry of consciousness in an aircraft. The field of AI leads into machine consciousness. The paper discusses several theories from these fields and derives outcomes suitable for the development of a Conscious Aircraft, some of which include the capability of developing “world-models”, learning about self and others, and the prerequisites of autonomy, selfhood, and emotions. Taking these cues, the paper focuses on the latest developments and the standards guiding the field of autonomous systems, and suggests that the future of autonomous systems depends on their transition toward consciousness. Finally, inspired by the theories suggesting levels of consciousness, guided by the Theory of Mind, and building upon state-of-the-art aircraft with autonomous systems, this paper suggests the development of a Conscious Aircraft in three stages: Conscious Aircraft with (1) System-awareness, (2) Self-awareness, and (3) Fleet-awareness, from the perspectives of health management, maintenance, and sustainment.

  • A.I.: Artificial Intelligence tells the story of a robot boy who has been engineered to love his human owner. He is abandoned by his owner and pursues a tragic quest to become a real boy so that he can be loved by her again. This chapter explores the philosophical, psychological, and scientific questions that are asked by A.I. It starts with A.I.’s representation of artificial intelligence and then covers the consciousness of robots, which is closely linked to ethical concerns about the treatment of AIs in the film. There is a discussion about how A.I.’s interpretation of artificial love relates to scientific work on emotion, and the chapter also examines connections between the technology portrayed in A.I. and current research on robotics.

  • Humans are highly intelligent, and their brains are associated with rich states of consciousness. We typically assume that animals have different levels of consciousness, and this might be correlated with their intelligence. Very little is known about the relationships between intelligence and consciousness in artificial systems. Most of our current definitions of intelligence describe human intelligence. They have severe limitations when they are applied to non-human animals and artificial systems. To address this issue, this chapter sets out a new interpretation of intelligence that is based on a system’s ability to make accurate predictions. Human intelligence is measured using tests whose results are converted into values of IQ and g-score. This approach does not work well with non-human animals and AIs, so people have been developing universal algorithms that can measure intelligence in any type of system. In this chapter a new universal algorithm for measuring intelligence is described, which is based on a system’s ability to make accurate predictions. Many people agree that consciousness is the stream of colorful moving noisy sensations that starts when we wake up and ceases when we fall into deep sleep. Several mathematical algorithms have been developed to describe the relationship between consciousness and the physical world. If these algorithms can be shown to work on human subjects, then they could be used to measure consciousness in non-human animals and artificial systems.

  • The quest for conscious machines and questions raised by the prospect of self-aware artificial intelligence (AI) fascinate some humans. OpenAI's ChatGPT, celebrated for its human-like comprehension and conversational abilities, is a milestone in that quest [1, 2]. Early AI models were basic and rule-driven and mainly completed tasks like checking spelling and correcting grammar. Then, in 2010, recurrent neural network language models were trained to understand and generate text. ChatGPT, using transformer neural networks, produces coherent text and exemplifies this new kind of language model [3]. Silicon Valley leaders have claimed that these models and similar AI technologies will revolutionize various sectors, raising ethical and societal questions. As we explore AI's potential, we must navigate these implications and emphasize the necessity of using it responsibly. AI is a promising dream, but society must prepare to address the challenges likely to arise from wielding its transformative power.

  • The development of advanced generative chat models, such as ChatGPT, has raised questions about the potential consciousness of these tools and the extent of their general artificial intelligence. ChatGPT's consistent avoidance of passing the test is here overcome by asking ChatGPT to apply the Turing test to itself. This explores the possibility of the model recognizing its own sentience. In its own eyes, it passes this test. ChatGPT's self-assessment has serious implications for our understanding of the Turing test and the nature of consciousness. The investigation concludes by considering the existence of distinct types of consciousness and the possibility that the Turing test is only effective when applied between consciousnesses of the same kind. The study also raises intriguing questions about the nature of AI consciousness and the validity of the Turing test as a means of verifying such consciousness.

  • The emergence of Large Language Models (LLMs) has renewed debate about whether Artificial Intelligence (AI) can be conscious or sentient. This paper identifies two approaches to the topic and argues: (1) A “Cartesian” approach treats consciousness, sentience, and personhood as very similar terms, and treats language use as evidence that an entity is conscious. This approach, which has been dominant in AI research, is primarily interested in what consciousness is, and whether an entity possesses it. (2) An alternative “Hobbesian” approach treats consciousness as a sociopolitical issue and is concerned with what the implications are for labeling something sentient or conscious. This both enables a political disambiguation of language, consciousness, and personhood and allows regulation to proceed in the face of intractable problems in deciding if something “really is” sentient. (3) AI systems should not be treated as conscious, for at least two reasons: (a) treating the system as an origin point tends to mask competing interests in creating it, at the expense of the most vulnerable people involved; and (b) it will tend to hinder efforts at holding someone accountable for the behavior of the systems. A major objective of this paper is accordingly to encourage a shift in thinking. In place of the Cartesian question—is AI sentient?—I propose that we confront the more Hobbesian one: Does it make sense to regulate developments in which AI systems behave as if they were sentient?

  • This study proposes a model of computational consciousness for non-interacting agents. The phenomenon of interest was assumed as sequentially dependent on the cognitive tasks of sensation, perception, emotion, affection, attention, awareness, and consciousness. Starting from the Smart Sensing prodromal study, the cognitive layers associated with the processes of attention, awareness, and consciousness were formally defined and tested together with the other processes concerning sensation, perception, emotion, and affection. The output of the model consists of an index that synthesizes the energetic and entropic contributions of consciousness from a computationally moral perspective. Attention was modeled through a bottom-up approach, while awareness and consciousness by distinguishing environment from subjective cognitive processes. By testing the solution on visual stimuli eliciting the emotions of happiness, anger, fear, surprise, contempt, sadness, disgust, and the neutral state, it was found that the proposed model is concordant with the scientific evidence concerning covert attention. Comparable results were also obtained regarding studies investigating awareness as a consequence of visual stimuli repetition, as well as those investigating moral judgments to visual stimuli eliciting disgust and sadness. The solution represents a novel approach for defining computational consciousness through artificial emotional activity and morality.

  • Artificial intelligence provides the capacity for automation aimed at emulating human intellectual activity. This research paper gives a broad analysis of AI consciousness. Its primary objective is to explore whether artificial intelligence could develop human-like cognition. Numerous analysts have proposed and considered models of machine comprehension. Consciousness research is a developing subject in both neuroscience and psychology that draws on a variety of methods, from experimental animal models to studies of human brain pathology. Here we discuss the contrasts between human consciousness and machine perception, and how these cognitive abilities can benefit human understanding in the world to come. Artificial intelligence already enables self-driving vehicles. Human intelligence generally refers to the capacity for mental activity. However, because of its poor intelligibility and the ethical difficulties it raises, AI-based automation creates uncertainty for users, individuals, and society at large. The paper considers how AI may advance in the future while human understanding remains intertwined with computing expertise.

  • To approach the creation of artificial conscious systems systematically and to obtain certainty about the presence of phenomenal qualities (qualia) in these systems, we must first decipher the fundamental mechanism behind conscious processes. In achieving this goal, the conventional physicalist position exhibits obvious shortcomings in that it provides neither a plausible mechanism for the generation of qualia nor tangible demarcation criteria for conscious systems. Therefore, to remedy the deficiencies of the standard physicalist approach, a new theory for the understanding of consciousness has been formulated. The aim of the paper is to present the cornerstones of this theory, to outline the conditions for conscious systems derived from the theory, and to address the implications of these conditions for the creation of robots that transcend the threshold of phenomenal awareness. In short, the theory is based on the proposition that the universe is permeated by a ubiquitous field of consciousness that can be equated with the zero-point field (ZPF) of quantum electrodynamics (QED). The ZPF, which is characterized by a spectrum of field modes, plays a crucial role in the edifice of modern physics. QED-based model calculations on cortical dynamics and empirical findings on the neural correlates of consciousness suggest that a physical system can only generate conscious states if it is capable of establishing resonant coupling to the ZPF, resulting in the amplification of selected field modes and the activation of the phenomenal qualities that are assumed to be associated with these modes. Thus, scientifically sound considerations support the conclusion that the crucial condition for generating conscious states lies in a system's capacity to tap into the phenomenal color palette inherent in the ZPF.

  • We demonstrate that if consciousness is relevant for the temporal evolution of a system's states--that is, if it is dynamically relevant--then AI systems cannot be conscious. That is because AI systems run on CPUs, GPUs, TPUs or other processors which have been designed and verified to adhere to computational dynamics that systematically preclude or suppress deviations. The design and verification preclude or suppress, in particular, potential consciousness-related dynamical effects, so that if consciousness is dynamically relevant, AI systems cannot be conscious.

  • Foundation models are gaining considerable interest for their capacity to solve many downstream tasks without fine-tuning parameters on specific datasets. The same solutions can connect visual and linguistic representations through image-text contrastive learning. These abilities allow an artificial agent to act similarly to a human, but significant cognitive processes still need to be introduced into the learning process. The present study proposes an advance toward more human-like artificial intelligence by introducing CognitiveNet, a learnable architecture that integrates foundation models. Starting from the latest studies in the field of Artificial Consciousness, a hierarchy of cognitive layers has been modeled and pre-trained for estimating the emotional content of images. Employing CLIP as the backbone model produced significant concordant emotional activity. Furthermore, the proposed model surpasses CLIP's accuracy in classifying the CIFAR-10 and CIFAR-100 datasets through supervised optimization, suggesting CognitiveNet as a promising solution for solving classification tasks through online meta-learning.

  • One of the current AI issues depicted in popular culture is the fear of conscious super-AIs that try to take control of humanity. As computational power grows and this scenario edges closer to reality, understanding artificial brains may become increasingly important for controlling and steering AI toward the benefit of our societies. This paper proposes a base framework to aid the development of autonomous, multipurpose artificial brains. To approach this, we first model the functioning of the human brain, reflecting on and taking inspiration from the way the body, the consciousness, and the unconscious interact. To that end, we model events such as sensing, thinking, dreaming, and acting, whether deliberate or unconscious. We believe valuable insights can already be drawn from the analysis and critique of the presented framework, and that it may be worth implementing it, with or without changes, to create, study, understand, and control artificially conscious systems.

Last update from database: 1/2/26, 2:00 AM (UTC)