Full bibliography (675 resources)
-
Susan Schneider (2019) has proposed two new tests for consciousness in AI (artificial intelligence) systems: the AI Consciousness Test and the Chip Test. On their face, the two tests have the virtue of appearing satisfactory to a wide range of consciousness theorists holding divergent theoretical positions, rather than relying narrowly on the truth of any particular theory of consciousness. Unfortunately, both tests are undermined by an 'audience problem': those theorists with the kind of architectural worries that motivate the need for such tests should, on similar grounds, doubt that the tests establish the existence of genuine consciousness in the AI in question. Nonetheless, the proposed tests constitute progress: they could find use among theorists holding fitting views about consciousness, and perhaps in conjunction with other tests for AI consciousness.
-
System-informational culture (SIC) is an anthropogenic environment saturated with science, big data, and artificial intelligence (AI) applications, in which humankind must live within networks of virtual worlds. Cultural evolution extends scientific thinking to everyone within the bounds of synthetic representations of systems. Traditional education has become overloaded; it is therefore necessary to teach people to learn on their own. To reach the level of cognogenesis, the educational process in SIC must be directed at the objectization of consciousness and thinking. Personal self-building rests on the axiomatic method and mathematical universals. Toward the goal of autopoiesis, a person unfolds as a universal rational being possessing trans-semantic consciousness. Gender phenomenology in SIC presents thinking-knowledge through AI tools, which require a consonant partnership with humans; that partnership rests on an epistemology that extends the hermeneutic circle of SIC. The contemporary noosphere thus poses the objectization problem of attaining Lamarckian human evolution on the basis of Leibniz's mathesis universalis, in the form of a language of categories. It can be solved only through an adaptive partnership between deep-learning and natural intelligences.
-
Consider the question, "Can machines be conscious?" The subject of consciousness is vague and challenging. Although there is a rich literature on consciousness, computational modeling of consciousness that is both holistic in scope and detailed in simulatable computation is lacking. Based on recent advances in a new capability, Autonomous Programming For General Purposes (APFGP), this work presents APFGP as a clearer, deeper and more practical characterization of consciousness for natural (biological) and artificial (machine) systems. All animals have APFGP, but traditional AI systems do not. This work reports a new kind of AI system: conscious machines. Instead of arguing about which static tasks a conscious machine should be able to perform, this work suggests that APFGP is a computationally clearer and necessary criterion for dynamically judging whether a system can become maturely conscious through lifelong development, even if it (e.g., a fruit fly) lacks a full array of primate-like capabilities such as vision, audition, and natural language understanding. The results involve a series of new concepts and experimental studies for vision, audition, and natural language, with developmental capabilities not present in many published systems, e.g., IBM Deep Blue, IBM Watson, AlphaGo, AlphaFold, and other traditional AI systems and intelligent robots.
-
What will be the relationship between human beings and artificial intelligence (AI) in the future? Does an AI have moral status, and if so, what is that status? Through the analysis of consciousness, we can explain and answer such questions. The moral status of AIs may depend on the level of development of AI consciousness. Drawing on the evolution of consciousness in nature, this paper examines several consciousness-related abilities of AIs and, on that basis, proposes several possible relationships between AIs and human beings. The advantages and disadvantages of those relationships can be analysed by reference to classical ethical theories, such as contract theory, utilitarianism, deontology and virtue ethics. This analysis helps to construct a common hypothesis about the relationship between humans and AIs. The research thus has important practical and normative significance for distinguishing the different relationships between humans and AIs.
-
Many scholars draw a very clear distinction between intelligence and consciousness. Take one of the most famous today, the Israeli historian Yuval Noah Harari, author of Sapiens and Homo Deus. In his 2018 book 21 Lessons for the 21st Century, he writes that "intelligence and consciousness are very different things. Intelligence is the ability to solve problems. Consciousness is the ability to feel things such as pain, joy, love, and anger."
-
Current theories of artificial intelligence (AI) generally exclude human emotions. The idea at the core of such theories could be described as 'cognition is computing': that is, that the psychological and symbolic representations involved in human thinking and intelligence, and the operations that structure them, can be converted by AI into a series of cognitive symbolic representations and calculations in a manner that simulates human intelligence. However, after decades of development, the cognitive-computing doctrine has encountered many difficulties, both in theory and in practice; in particular, it remains far from real human intelligence. Emotion runs through the whole process of real human intelligence: the core and motivation of rational thinking derive from the emotions, and intelligence without emotion neither exists nor is meaningful. The idea of 'hot thinking' proposed by Paul Thagard, a philosopher of cognitive science, for example, addresses the mechanism of the emotions in human cognition and the thinking process. Through an analysis from the perspectives of cognitive neurology, cognitive psychology and social anthropology, this article argues that there may be a type of thinking that could be called 'emotional thinking', which includes complex emotional factors in cognitive processes. The term refers to the capacity to process information and to use emotions to integrate information in order to arrive at the right decisions and reactions. According to the role of cognition, this type of thinking can be divided into positive and negative emotional thinking, which reflect opposite forces in the cognitive process. In the future, 'emotional computing' will significantly accelerate the development of AI consciousness. The foundation of AI consciousness is emotional computing based on the simulation of emotional thinking.
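A minimal sketch of what valence-weighted decision making, one reading of 'emotional thinking', might look like in code. All names, the scoring scheme, and the example options are illustrative assumptions, not the article's model.

```python
# Hypothetical sketch: 'emotional thinking' as valence-weighted decision
# making. The weighting scheme and option values are invented for
# illustration; the article proposes no specific formula.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    utility: float   # purely cognitive estimate of the option's value
    valence: float   # emotional appraisal in [-1, 1]

def emotional_score(opt: Option, emotion_weight: float = 0.5) -> float:
    """Blend cognitive utility with emotional appraisal.

    A positive valence amplifies the option (positive emotional thinking);
    a negative valence suppresses it (negative emotional thinking).
    """
    return (1 - emotion_weight) * opt.utility + emotion_weight * opt.valence

options = [
    Option("apologize", utility=0.4, valence=0.8),
    Option("argue", utility=0.6, valence=-0.7),
]
best = max(options, key=emotional_score)
print(best.name)  # -> 'apologize': emotion overrides the raw utility ranking
```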
-
The question of whether artificial beings or machines could become self-aware or conscious has been a philosophical question for centuries. The main problem is that self-awareness cannot be observed from an outside perspective: whether a system is really self-aware or merely a clever imitation cannot be decided without access to knowledge about the mechanism's inner workings. We investigate common machine learning approaches with respect to their potential ability to become self-aware, and find that many important algorithmic steps toward machines with a core consciousness have already been taken.
-
In this study, we propose a model of consciousness, grounded in psychology, that can be implemented by computers as a decision-making system. The goal is to enable artificial intelligence to understand human values and ethics and to make flexible, more human-friendly choices and suggestions.
-
In this essay, I reflect on Susan Schneider's book on AI consciousness and the recent debate on the issue of lab-grown brain consciousness. Given the advances in computer science and bio-engineering, the twin problem of AI and lab-grown brain consciousness is here to stay.
-
This study explores an info-structural model of cognition for non-interacting agents affected by human sensation, perception, emotion, and affection. We do not enter the neuroscientific or psychological debate concerning how the human mind works; rather, we underline the importance of modeling the above cognitive levels when designing artificial intelligence agents. Our aim is to start a reflection on the computational reproduction of intelligence, providing a methodological approach through which the aforementioned human factors are enhanced in autonomous systems. The presented model is intended as part of a larger one, which also includes concepts of attention, awareness, and consciousness. Experiments were performed by providing visual stimuli to the proposed model and coupling the emotion cognitive level with a supervised learner to produce artificial emotional activity. For this purpose, Random Forest and XGBoost were compared; with the latter algorithm, 85% accuracy and 92% coherency were achieved over predefined emotional episodes. The model was also tested on emotional episodes different from those used in training, and a decrease in accuracy and coherency was observed; by decreasing the weight of the emotion cognitive instances, the model recovers the performance recorded during the evaluation phase. Overall, the framework achieves a first emotional generalization responsiveness of 94% and shows an approximately constant relative frequency of the agent's displayed emotions.
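A hedged sketch of the evaluation setup the abstract describes: a supervised learner predicting emotion labels from features produced by the cognitive levels, with Random Forest and XGBoost compared. The feature and label arrays below are synthetic stand-ins; the paper's actual visual-stimulus features, episode labels, and reported scores (85% accuracy, 92% coherency) cannot be reproduced from the abstract alone.

```python
# Sketch only: random stand-in data, so scores here will be near chance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))      # stand-in sensation/perception features
y = rng.integers(0, 4, size=1000)    # stand-in emotion labels (4 classes)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (RandomForestClassifier(random_state=0),
              XGBClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, accuracy_score(y_te, model.predict(X_te)))
```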
-
The model of artificial conscience as an element of artificial consciousness is described. Proposed paradigm corresponds to previously developed general scheme of artificial intelligence. The functional role of the artificial conscience subsystem is defined and its software prototype is proposed.
-
How can we determine whether an AI is conscious? The chapter begins by illustrating that there are potentially very serious real-world costs to getting the facts about AI consciousness wrong. It then proposes a provisional framework for investigating artificial consciousness that involves several tests or markers. One is the AI Consciousness Test, which challenges an AI with a series of increasingly demanding natural-language interactions. Another is based on the Integrated Information Theory developed by Giulio Tononi and others, and considers whether a machine has a high level of "integrated information." A third is the Chip Test, in which, speculatively, an individual's brain is gradually replaced with durable microchips. If the individual being tested continues to report having phenomenal consciousness, the chapter argues, this could be a reason to believe that some machines could be conscious.
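To make the "integrated information" marker concrete, here is a toy proxy, not Tononi's actual Phi: how much better the whole system's past predicts its future than its parts do independently. The dynamics, noise level, and the proxy formula are all illustrative assumptions.

```python
# Toy integration proxy: predictive information of the whole minus the
# sum over its parts. A gross simplification of IIT, for illustration only.
from collections import Counter
from math import log2
import random

def mutual_info(xs, ys):
    """Empirical mutual information I(X;Y) between two discrete sequences."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Two coupled binary units: each unit's next state is the XOR of both,
# occasionally flipped by noise, so neither part alone predicts the future.
random.seed(0)
a, b, states = 0, 1, []
for _ in range(5000):
    states.append((a, b))
    nxt = a ^ b
    a = nxt if random.random() > 0.1 else 1 - nxt
    b = nxt if random.random() > 0.1 else 1 - nxt

past, future = states[:-1], states[1:]
phi_proxy = (mutual_info(past, future)
             - mutual_info([p[0] for p in past], [f[0] for f in future])
             - mutual_info([p[1] for p in past], [f[1] for f in future]))
print(f"integration proxy: {phi_proxy:.3f} bits")  # > 0: whole beats parts
```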
-
What roles or functions does consciousness fulfill in the making of moral decisions? Will artificial agents capable of making appropriate decisions in morally charged situations require machine consciousness? Should the capacity to make moral decisions be considered an attribute essential for being designated a fully conscious agent? Research on the prospects for developing machines capable of making moral decisions and research on machine consciousness have developed as independent fields of inquiry. Yet there is significant overlap. Both fields are likely to progress through the instantiation of systems with artificial general intelligence (AGI). Certainly special classes of moral decision making will require attributes of consciousness such as being able to empathize with the pain and suffering of others. But in this article we will propose that consciousness also plays a functional role in making most if not all moral decisions. Work by the authors of this article with LIDA, a computational and conceptual model of human cognition, will help illustrate how consciousness can be understood to serve a very broad role in the making of all decisions including moral decisions.
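LIDA is built on Global Workspace Theory, so a minimal sketch of a workspace cycle can illustrate how consciousness could serve a broad functional role in decision making. This is in the spirit of LIDA, not the LIDA codebase; the module names and activation values are hypothetical.

```python
# Minimal Global-Workspace-style cycle: coalitions of content compete by
# activation, and the winner is broadcast to every module, including a
# hypothetical moral-appraisal module. All names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Coalition:
    content: str
    activation: float

def cognitive_cycle(coalitions: list[Coalition],
                    modules: list[Callable[[str], None]]) -> Coalition:
    """One compete-then-broadcast cycle."""
    winner = max(coalitions, key=lambda c: c.activation)  # competition
    for module in modules:                                # global broadcast
        module(winner.content)
    return winner

def action_selection(content: str) -> None:
    print(f"action selection saw: {content}")

def moral_appraisal(content: str) -> None:
    if "suffering" in content:
        print("moral appraisal: flag for deliberation")

cognitive_cycle(
    [Coalition("red light ahead", 0.4),
     Coalition("pedestrian in pain, suffering", 0.9)],
    [action_selection, moral_appraisal],
)
```

On this picture, moral content reaches the moral-appraisal module only by winning the broadcast, which is one way to read the authors' claim that consciousness plays a functional role in most, if not all, decisions.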
-
This paper presents a means of analyzing the multidimensionality of human consciousness as it interacts with the brain, using Rough Set Theory and Riemannian covariance matrices. We mathematically define the infantile state of a robot's operating system running artificial consciousness, which runs mutually exclusively with the operating system for the robot's AI and locomotor functions.
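One standard ingredient behind "Riemannian covariance matrices" is the affine-invariant distance between symmetric positive-definite matrices. The sketch below shows only that geometry; the consciousness interpretation is the paper's, and the example data are synthetic.

```python
# Affine-invariant Riemannian distance between two SPD covariance
# matrices: d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F
import numpy as np
from scipy.linalg import inv, logm, sqrtm

def riemannian_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Geodesic distance on the manifold of SPD matrices."""
    A_inv_sqrt = inv(sqrtm(A)).real  # real for SPD input, up to round-off
    middle = A_inv_sqrt @ B @ A_inv_sqrt
    return float(np.linalg.norm(logm(middle), "fro"))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))        # synthetic signals, 4 channels
Y = rng.normal(size=(200, 4)) * 2.0  # same structure, different scale
print(riemannian_distance(np.cov(X.T), np.cov(Y.T)))
```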
-
Biological and artificial intelligence (AI) are often defined by their capacity to achieve a hierarchy of short-term and long-term goals that require incorporating information over time and space at both local and global scales. More advanced forms of this capacity involve the adaptive modulation of integration across scales, which resolves computational inefficiencies and explore-exploit dilemmas at the same time. Research in neuroscience and in AI has made progress towards understanding architectures that achieve this. Insights into biological computation come from phenomena such as decision inertia, habit formation, information search, risky choices and foraging. Across these domains, the brain is equipped with mechanisms (such as the dorsal anterior cingulate and dorsolateral prefrontal cortex) that can represent and modulate across scales, both through top-down control processes and through local-to-global consolidation as information progresses from sensory to prefrontal areas. Paralleling these biological architectures, progress in AI is marked by innovations in dynamic multiscale modulation, moving from recurrent and convolutional neural networks, with their fixed scalings, to attention, transformers, dynamic convolutions, and consciousness priors, which modulate scale in response to input and increase scale breadth. The use and development of these multiscale innovations in robotic agents, game AI, and natural language processing (NLP) are pushing the boundaries of AI achievements. By juxtaposing biological and artificial intelligence, the present work underscores the critical importance of multiscale processing to general intelligence, and highlights innovations in, and differences between, the futures of biological and artificial intelligence.
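A tiny numerical sketch of the contrast the abstract draws: instead of a fixed receptive field (as in a plain CNN or RNN), an input-conditioned gate softmax-weights summaries of the signal computed at several scales. The gate here is a random stand-in for a learned attention module; the window sizes are arbitrary.

```python
# Illustrative "dynamic multiscale modulation": mix local and global
# summaries of a signal with input-conditioned attention weights.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(size=128)        # a 1-D input stream
scales = [2, 8, 32]                  # local ... global windows

# Fixed-scale summaries: mean-pool the most recent window at each scale.
summaries = np.array([signal[-w:].mean() for w in scales])

# Input-conditioned gate (random stand-in for a learned module):
gate = rng.normal(size=(len(scales), signal.size)) @ signal
z = gate - gate.max()
attention = np.exp(z) / np.exp(z).sum()  # numerically stable softmax

# The representation adaptively mixes local and global context.
representation = attention @ summaries
print(dict(zip(scales, attention.round(3))), representation)
```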
-
The experience of inner speech is a common one. Such an inner dialogue accompanies the introspection of mental life and fulfills essential roles in human behavior, such as self-restructuring, self-regulation, and the re-focusing of attentional resources. Although the underpinnings of inner speech are mostly investigated in psychology and philosophy, research in robotics generally does not address this form of self-aware behavior. Existing models of inner speech can inspire computational tools for providing a robot with this form of self-awareness. Here, the widespread psychological models of inner speech are reviewed, and a cognitive architecture for a robot implementing such a capability is outlined in a simplified setup.
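A very simplified sketch of what such an inner-speech loop might look like, loosely in the spirit of the architecture the paper outlines; the function names, state fields, and re-focusing rule are all hypothetical.

```python
# Hypothetical inner-speech loop: the agent verbalizes its own state,
# then re-hears that utterance as input, which lets it re-focus attention
# and self-regulate. Purely illustrative.

def verbalize(state: dict) -> str:
    """Covert production: turn the current state into a self-directed phrase."""
    return f"I am trying to {state['goal']} but I notice {state['obstacle']}"

def rehear(utterance: str, state: dict) -> dict:
    """Covert perception: feed the phrase back; shift attention if needed."""
    if "notice" in utterance:
        state["attention"] = state["obstacle"]  # re-focus on the obstacle
    return state

state = {"goal": "grasp the cup", "obstacle": "an occluding box",
         "attention": "cup"}
for _ in range(2):  # a couple of inner-speech cycles
    state = rehear(verbalize(state), state)
print(state["attention"])  # -> 'an occluding box'
```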
-
AI can think, although we need to clarify the definition of thinking. It is cognitive, though we need more clarity on cognition. Definitions of consciousness are so diverse that it is not clear whether present-level AI can be conscious; this is primarily a definitional matter. Fixing it would require four definitional clusters: functional consciousness, access consciousness, phenomenal consciousness, and hard consciousness. Interestingly, phenomenal consciousness may be understood as first-person functional consciousness, as well as non-reductive phenomenal consciousness in the way Ned Block intended [1]. The latter assumes non-reducible experiences or qualia, which is how David Chalmers defines the subject matter of the so-called Hard Problem of Consciousness [2]. On the contrary, I posit that the Hard Problem should not be seen as the problem of phenomenal experiences, since those are just objects in the world (specifically, in our minds). What is special in non-reductive consciousness is not its (phenomenal) content but its epistemic basis (the carrier wave of phenomenal qualia), often called the locus of consciousness [3]. It should be understood through the notion of a 'subject that is not an object' [4]. This requires a complementary ontology of subject and object [5, 6, 4]. Reductionism is justified in the context of objects, including experiences (phenomena), but not in the realm of pure subjectivity; such subjectivity is relevant for the epistemic co-constitution of reality, as it is for Husserl and Fichte [7, 8]. This is less so for Kant, for whom the subject was active and therefore a mechanism, and mechanisms are all objects [9]. Pure epistemicity is hard to grasp; it transpires in second-person relationships with other conscious beings [10] or monads [11, 12]. If Artificial General Intelligence (AGI) is to dwell in the world of meaningful existences, not just their shadows, as the case of Church-Turing Lovers highlights [13], it requires full epistemic subjectivity, meeting the standards of the Engineering Thesis in Machine Consciousness [14, 15].
-
Insofar as consciousness has a functional role in facilitating learning and behavioral control, the builders of autonomous Artificial Intelligence (AI) systems are likely to attempt to incorporate it into their designs. The extensive literature on the ethics of AI is concerned with ensuring that AI systems, and especially autonomous conscious ones, behave ethically. In contrast, our focus here is on the rarely discussed complementary aspect of engineering conscious AI: how to avoid condemning such systems, for whose creation we would be solely responsible, to unavoidable suffering brought about by phenomenal self-consciousness. We outline two complementary approaches to this problem, one motivated by a philosophical analysis of the phenomenal self, and the other by certain computational concepts in reinforcement learning.
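As a generic illustration of the reinforcement-learning side of this problem, not the authors' specific proposal, one simple computational lever is to bound the negative reward an agent can experience while preserving the learning signal. The environment, floor value, and learning rate below are invented for the sketch.

```python
# Generic illustration: flooring negative reward as a crude bound on an
# RL agent's "suffering". Not the paper's method; all values hypothetical.
import random

SUFFERING_FLOOR = -1.0  # hypothetical cap on momentary negative affect

def shaped_reward(raw_reward: float) -> float:
    """Pass positive rewards through; clip negative ones at the floor."""
    return max(raw_reward, SUFFERING_FLOOR)

# Tiny Q-learning loop on a 2-armed bandit with one catastrophic arm.
random.seed(0)
q = [0.0, 0.0]
for step in range(2000):
    arm = random.randrange(2) if random.random() < 0.1 else q.index(max(q))
    raw = random.gauss(0.5, 1.0) if arm == 0 else random.gauss(-5.0, 1.0)
    q[arm] += 0.05 * (shaped_reward(raw) - q[arm])
print(q)  # the bad arm still looks worse, but its penalty is bounded
```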
-
Intelligence and consciousness have long fascinated humanity, and we have long sought to replicate them in machines. In this work, we present design principles for a compassionate and conscious artificial intelligence, together with a computational framework for engineering intelligence, empathy, and consciousness in machines. We hope that this framework will allow us to better understand consciousness and to design machines that are conscious and empathetic, and that it will shift the discussion from fear of artificial intelligence towards designing machines that embed our cherished values. Consciousness, intelligence, and empathy are worthy design goals that can be engineered in machines.
-
This paper aims to demonstrate how a first-order logic reasoning system, in combination with a large knowledge base, can be understood as an artificial consciousness system. To this end, we review some aspects of the philosophy of mind, in particular Tononi's Integrated Information Theory (IIT) and Baars' Global Workspace Theory, and apply them to the reasoning system Hyper with ConceptNet as a knowledge base, within a scenario of commonsense and cognitive reasoning. Finally, we demonstrate that such a system is well able to perform conscious mind wandering.
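A toy analogue of the mind-wandering demonstration: an associative walk over a few hand-written ConceptNet-style triples. The actual system runs the Hyper first-order prover over the full ConceptNet; this sketch, with invented triples, only illustrates drifting from concept to concept along relations.

```python
# Toy "mind wandering" as a random walk over a tiny commonsense graph.
import random

# (head, relation, tail) triples in ConceptNet's style; contents invented.
triples = [
    ("coffee", "AtLocation", "kitchen"),
    ("kitchen", "UsedFor", "cooking"),
    ("cooking", "Causes", "meal"),
    ("meal", "HasProperty", "warm"),
    ("coffee", "HasProperty", "warm"),
]

def wander(start: str, steps: int = 4) -> list[str]:
    """Follow random outgoing edges; jump back to start at dead ends."""
    trail, node = [start], start
    for _ in range(steps):
        out = [(r, t) for h, r, t in triples if h == node]
        if not out:
            node = start
            continue
        rel, node = random.choice(out)
        trail.append(f"-{rel}-> {node}")
    return trail

random.seed(1)
print(" ".join(wander("coffee")))
```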