Full bibliography (716 resources)

  • A model of artificial conscience as an element of artificial consciousness is described. The proposed paradigm corresponds to a previously developed general scheme of artificial intelligence. The functional role of the artificial conscience subsystem is defined, and a software prototype of it is proposed.

  • How can we determine whether an AI is conscious? The chapter begins by illustrating that there are potentially very serious real-world costs to getting facts about AI consciousness wrong. It then proposes a provisional framework for investigating artificial consciousness that involves several tests or markers. One test is the AI Consciousness Test, which challenges an AI with a series of increasingly demanding natural-language interactions. Another test is based on the Integrated Information Theory, developed by Giulio Tononi and others, and considers whether a machine has a high level of “integrated information.” A third test is a Chip Test, in which, speculatively, an individual’s brain is gradually replaced with durable microchips. If the individual being tested continues to report having phenomenal consciousness, the chapter argues, this could be a reason to believe that some machines could have consciousness.
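
    As a concrete, heavily simplified illustration of the Integrated Information Theory marker mentioned in this entry, the sketch below scores a tiny boolean network by comparing whole-system past-to-future information against the best bipartition. The three-node network, the uniform input ensemble, and the difference-of-mutual-informations score are all illustrative assumptions on our part; Tononi's actual Φ is defined very differently and is far more involved.

    ```python
    # Toy "integrated information" score for a small deterministic boolean
    # network, loosely inspired by (but much simpler than) IIT.
    import itertools
    import math

    def update(state):
        # Hypothetical 3-node network: XOR, AND, OR couplings.
        a, b, c = state
        return (b ^ c, a & c, a | b)

    def mutual_information(pairs):
        # I(X; Y) from uniformly weighted (x, y) samples.
        n = len(pairs)
        px, py, pxy = {}, {}, {}
        for x, y in pairs:
            px[x] = px.get(x, 0) + 1
            py[y] = py.get(y, 0) + 1
            pxy[(x, y)] = pxy.get((x, y), 0) + 1
        return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
                   for (x, y), c in pxy.items())

    states = list(itertools.product([0, 1], repeat=3))
    pairs = [(s, update(s)) for s in states]      # uniform input ensemble
    whole = mutual_information(pairs)             # whole-system information

    # Information captured by the best bipartition, parts taken separately.
    best_parts = 0.0
    for left, right in [({0}, {1, 2}), ({1}, {0, 2}), ({2}, {0, 1})]:
        parts = 0.0
        for side in (left, right):
            idx = sorted(side)
            proj = [(tuple(s[i] for i in idx), tuple(t[i] for i in idx))
                    for s, t in pairs]
            parts += mutual_information(proj)
        best_parts = max(best_parts, parts)

    # > 0 suggests the whole carries information no bipartition captures.
    phi_like = whole - best_parts
    print(f"whole={whole:.3f} bits, phi-like={phi_like:.3f} bits")
    ```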

  • What roles or functions does consciousness fulfill in the making of moral decisions? Will artificial agents capable of making appropriate decisions in morally charged situations require machine consciousness? Should the capacity to make moral decisions be considered an attribute essential for being designated a fully conscious agent? Research on the prospects for developing machines capable of making moral decisions and research on machine consciousness have developed as independent fields of inquiry. Yet there is significant overlap. Both fields are likely to progress through the instantiation of systems with artificial general intelligence (AGI). Certainly special classes of moral decision making will require attributes of consciousness such as being able to empathize with the pain and suffering of others. But in this article we will propose that consciousness also plays a functional role in making most if not all moral decisions. Work by the authors of this article with LIDA, a computational and conceptual model of human cognition, will help illustrate how consciousness can be understood to serve a very broad role in the making of all decisions including moral decisions.

  • This paper presents a means to analyze the multidimensionality of human consciousness as it interacts with the brain by utilizing Rough Set Theory and Riemannian Covariance Matrices. We mathematically define the infantile state of a robot's operating system running artificial consciousness, which operates in mutual exclusion with the operating system for its AI and locomotor functions.
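
    The entry above mentions Riemannian covariance matrices; the standard geometry for those is the affine-invariant metric on symmetric positive-definite (SPD) matrices. As a hedged sketch (whether the paper uses exactly this metric is an assumption), the distance d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F can be computed with two eigendecompositions:

    ```python
    # Affine-invariant Riemannian distance between two SPD matrices.
    # A standard construction on the SPD manifold; its use as a stand-in
    # for the paper's actual computation is an assumption.
    import numpy as np

    def riemannian_distance(A, B):
        # d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F
        w, V = np.linalg.eigh(A)                       # SPD: real spectrum
        A_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
        M = A_inv_sqrt @ B @ A_inv_sqrt                # still SPD
        eigvals = np.linalg.eigvalsh(M)
        return float(np.sqrt(np.sum(np.log(eigvals) ** 2)))

    rng = np.random.default_rng(0)
    A = np.cov(rng.standard_normal((100, 4)), rowvar=False)
    B = np.cov(rng.standard_normal((100, 4)) * 2.0, rowvar=False)
    print(f"affine-invariant distance: {riemannian_distance(A, B):.4f}")
    ```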

  • Biological and artificial intelligence (AI) are often defined by their capacity to achieve a hierarchy of short-term and long-term goals that require incorporating information over time and space at both local and global scales. More advanced forms of this capacity involve the adaptive modulation of integration across scales, which resolves computational inefficiency and explore-exploit dilemmas at the same time. Research in both neuroscience and AI has made progress towards understanding architectures that achieve this. Insight into biological computations comes from phenomena such as decision inertia, habit formation, information search, risky choices and foraging. Across these domains, the brain is equipped with mechanisms (such as the dorsal anterior cingulate and dorsolateral prefrontal cortex) that can represent and modulate across scales, both with top-down control processes and by local-to-global consolidation as information progresses from sensory to prefrontal areas. Paralleling these biological architectures, progress in AI is marked by innovations in dynamic multiscale modulation, moving from recurrent and convolutional neural networks (with fixed scalings) to attention, transformers, dynamic convolutions, and consciousness priors (which modulate scale to input and increase scale breadth). The use and development of these multiscale innovations in robotic agents, game AI, and natural language processing (NLP) are pushing the boundaries of AI achievements. By juxtaposing biological and artificial intelligence, the present work underscores the critical importance of multiscale processing to general intelligence, as well as highlighting innovations and differences between the future of biological and artificial intelligence.
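
    Since this entry singles out attention and transformers as AI's dynamic multiscale innovation, a minimal sketch of scaled dot-product attention may make the point concrete: the softmax weights are computed from the input itself, so each query can integrate information narrowly (peaked weights) or broadly (flat weights). Shapes and names below are illustrative, not taken from the paper.

    ```python
    # Minimal single-head scaled dot-product attention, NumPy only.
    import numpy as np

    def attention(Q, K, V):
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                  # (n_q, n_k) similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax per query
        return weights @ V, weights                     # weighted mix of values

    rng = np.random.default_rng(0)
    Q = rng.standard_normal((2, 8))   # 2 queries
    K = rng.standard_normal((5, 8))   # 5 keys
    V = rng.standard_normal((5, 8))   # 5 values
    out, w = attention(Q, K, V)
    print(out.shape, w.sum(axis=-1))  # (2, 8); each weight row sums to 1
    ```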

  • The experience of inner speech is a common one. Such a dialogue accompanies the introspection of mental life and fulfills essential roles in human behavior, such as self-restructuring, self-regulation, and the re-focusing of attentional resources. Although the underpinnings of inner speech are mostly investigated in the psychological and philosophical fields, research in robotics generally does not address such a form of self-aware behavior. Existing models of inner speech inspire computational tools that could provide a robot with this form of self-awareness. Here, the widespread psychological models of inner speech are reviewed, and a cognitive architecture for a robot implementing such a capability is outlined in a simplified setup.

  • AI can think, although we need to clarify the definition of thinking. It is cognitive, though we need more clarity on cognition. Definitions of consciousness are so diversified that it is not clear whether present-level AI can be conscious; this is primarily for definitional reasons. To fix this would require four definitional clusters: functional consciousness, access consciousness, phenomenal consciousness, hard consciousness. Interestingly, phenomenal consciousness may be understood as first-person functional consciousness, as well as non-reductive phenomenal consciousness the way Ned Block intended [1]. The latter assumes non-reducible experiences or qualia, which is how Dave Chalmers defines the subject matter of the so-called Hard Problem of Consciousness [2]. To the contrary, I posit that the Hard Problem should not be seen as the problem of phenomenal experiences, since those are just objects in the world (specifically, in our mind). What is special in non-reductive consciousness is not its (phenomenal) content, but its epistemic basis (the carrier-wave of phenomenal qualia), often called the locus of consciousness [3]. It should be understood through the notion of ‘subject that is not an object’ [4]. This requires a complementary ontology of subject and object [5, 6, 4]. Reductionism is justified in the context of objects, including experiences (phenomena), but not in the realm of pure subjectivity; such subjectivity is relevant for the epistemic co-constitution of reality, as it is for Husserl and Fichte [7, 8]. This is less so for Kant, for whom the subject was active and therefore a mechanism, and mechanisms are all objects [9]. Pure epistemicity is hard to grasp; it transpires in second-person relationships with other conscious beings [10] or monads [11, 12]. If Artificial General Intelligence (AGI) is to dwell in the world of meaningful existences, not just their shadows, as the case of Church-Turing Lovers highlights [13], it requires full epistemic subjectivity, meeting the standards of the Engineering Thesis in Machine Consciousness [14, 15].

  • Insofar as consciousness has a functional role in facilitating learning and behavioral control, the builders of autonomous Artificial Intelligence (AI) systems are likely to attempt to incorporate it into their designs. The extensive literature on the ethics of AI is concerned with ensuring that AI systems, and especially autonomous conscious ones, behave ethically. In contrast, our focus here is on the rarely discussed complementary aspect of engineering conscious AI: how to avoid condemning such systems, for whose creation we would be solely responsible, to unavoidable suffering brought about by phenomenal self-consciousness. We outline two complementary approaches to this problem, one motivated by a philosophical analysis of the phenomenal self, and the other by certain computational concepts in reinforcement learning.
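
    The entry above appeals to "certain computational concepts in reinforcement learning" without naming them; one simple idea in that spirit (our illustration, not necessarily the authors' proposal) is to bound the negative reward a learner can ever experience. The sketch below runs tabular Q-learning on a toy chain with a hypothetical reward floor; the environment, the shield function, and all hyperparameters are invented.

    ```python
    # Tabular Q-learning on a toy chain, with a hypothetical transform that
    # bounds negative reward -- one crude way to cap a learner's exposure
    # to "aversive" signal. Everything here is an illustrative assumption.
    import numpy as np

    N, ACTIONS = 5, 2                       # chain states; actions: left/right
    def step(s, a):
        s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
        r = 1.0 if s2 == N - 1 else (-5.0 if s2 == 0 else 0.0)
        return s2, r

    def shield(r, floor=-1.0):
        return max(r, floor)                # bound how bad any experience gets

    rng = np.random.default_rng(0)
    Q = np.zeros((N, ACTIONS))
    alpha, gamma, eps = 0.1, 0.9, 0.1
    s = 2
    for _ in range(5000):
        a = int(rng.integers(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        # Learn from the bounded, not the raw, reward.
        Q[s, a] += alpha * (shield(r) + gamma * Q[s2].max() - Q[s, a])
        s = 2 if s2 in (0, N - 1) else s2   # reset episode at either end
    print(Q.round(2))
    ```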

  • Intelligence and consciousness have fascinated humanity for a long time, and we have long sought to replicate them in machines. In this work, we show some design principles for a compassionate and conscious artificial intelligence. We present a computational framework for engineering intelligence, empathy, and consciousness in machines. We hope that this framework will allow us to better understand consciousness and design machines that are conscious and empathetic. Our hope is that this will also shift the discussion from fear of artificial intelligence towards designing machines that embed our cherished values. Consciousness, intelligence, and empathy would be worthy design goals that can be engineered in machines.

  • This paper aims at demonstrating how a first-order logic reasoning system in combination with a large knowledge base can be understood as an artificial consciousness system. For this, we review some aspects from the area of philosophy of mind, in particular Tononi's Information Integration Theory (IIT) and Baars' Global Workspace Theory. These will be applied to the reasoning system Hyper, with ConceptNet as a knowledge base, within a scenario of commonsense and cognitive reasoning. Finally, we demonstrate that such a system is well able to perform conscious mind wandering.
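
    For readers unfamiliar with Baars' Global Workspace Theory invoked in this entry, its control structure is easy to sketch: specialist processes compete on salience, and the winner's content is broadcast back to all of them. The processes and random saliences below are invented for illustration and are unrelated to the actual Hyper/ConceptNet system.

    ```python
    # Minimal Global Workspace control loop: specialists post salience-
    # weighted content; the winning content is broadcast to everyone.
    import random
    from dataclasses import dataclass, field

    random.seed(0)

    @dataclass
    class Process:
        name: str
        inbox: list = field(default_factory=list)

        def propose(self, t):
            # Hypothetical salience; a real system would score relevance.
            return random.random(), f"{self.name}-content@{t}"

        def receive(self, msg):
            self.inbox.append(msg)

    processes = [Process("perception"), Process("memory"), Process("reasoning")]
    for t in range(3):
        bids = [(p.propose(t), p) for p in processes]
        (salience, content), winner = max(bids, key=lambda bid: bid[0][0])
        for p in processes:               # global broadcast of the winner
            p.receive(content)
        print(f"t={t}: {winner.name} wins (salience {salience:.2f}); "
              f"broadcast '{content}'")
    ```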

  • If artificial agents are to be created such that they occupy space in our social and cultural milieu, then we should expect them to be targets of folk psychological explanation. That is to say, their behavior ought to be explicable in terms of beliefs, desires, obligations, and especially intentions. Herein, we focus on the concept of intentional action, and especially its relationship to consciousness. After outlining some lessons learned from philosophy and psychology that give insight into the structure of intentional action, we find that attention plays a critical role in agency, and indeed, in the production of intentional action. We argue that the insights offered by the literature on agency and intentional action motivate a particular kind of computational cognitive architecture, and one that hasn’t been well-explicated or computationally fleshed out among the community of AI researchers and computational cognitive scientists who work on cognitive systems. To give a sense of what such a system might look like, we present the ARCADIA attention-driven cognitive system as first steps toward an architecture to support the type of agency that rich human–machine interaction will undoubtedly demand.

  • Previous work [Chrisley & Sloman, 2016, 2017] has argued that a capacity for certain kinds of meta-knowledge is central to modeling consciousness, especially the recalcitrant aspects of qualia, in computational architectures. After a quick review of that work, this paper presents a novel objection to Frank Jackson’s Knowledge Argument (KA) against physicalism, an objection in which such meta-knowledge also plays a central role. It is first shown that the KA’s supposition of a person, Mary, who is physically omniscient, and yet who has not experienced seeing red, is logically inconsistent, due to the existence of epistemic blindspots for Mary. It is then shown that even if one makes the KA consistent by supposing a more limited physical omniscience for Mary, this revised argument is invalid. This demonstration is achieved via the construction of a physical fact (a recursive conditional epistemic blindspot) that Mary cannot know before she experiences seeing red for the first time, but which she can know afterward. After considering and refuting some counter-arguments, the paper closes with a discussion of the implications of this argument for machine consciousness, and vice versa.

  • Artificial intelligence and robotics are opening up important opportunities in the field of health diagnosis and treatment support with aims like better patient follow-up. A social and emotional robot is an artificially intelligent machine that owes its existence to computer models designed by humans. If it has been programmed to engage in dialogue, detect and recognize emotional and conversational cues, adapt to humans, or even simulate humor, such a machine may on the surface seem friendly. However, such emotional simulation must not hide the fact that the machine has no consciousness.

  • This article offers comprehensive criticism of the Turing test and develops quality criteria for new artificial general intelligence (AGI) assessment tests. It is shown that the prerequisites A. Turing drew upon when reducing personality and human consciousness to “suitable branches of thought” reflected the engineering level of his time. In fact, the Turing “imitation game” employed only symbolic communication and ignored the physical world. This paper suggests that by restricting thinking ability to symbolic systems alone Turing unknowingly constructed “the wall” that excludes any possibility of transition from a complex observable phenomenon to an abstract image or concept. It is, therefore, sensible to factor in new requirements for AI (artificial intelligence) maturity assessment when approaching the Turing test. Such AI must support all forms of communication with a human being, and it should be able to comprehend abstract images and specify concepts as well as participate in social practices.

  • This paper explores some of the potential connections between natural and artificial intelligence and natural and artificial consciousness. In humans we use batteries of tests to indirectly measure intelligence. This approach breaks down when we try to apply it to radically different animals and to the many varieties of artificial intelligence. To address this issue people are starting to develop algorithms that can measure intelligence in any type of system. Progress is also being made in the scientific study of consciousness: we can neutralize the philosophical problems, we have data about the neural correlates and we have some idea about how we can develop mathematical theories that can map between physical and conscious states. While intelligence is a purely functional property of a system, there are good reasons for thinking that consciousness is linked to particular spatiotemporal patterns in specific physical materials. This paper outlines some of the weak inferences that can be made about the relationships between intelligence and consciousness in natural and artificial systems. To make real scientific progress we need to develop practical universal measures of intelligence and mathematical theories of consciousness that can reliably map between physical and conscious states.
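
    One concrete example of the "practical universal measures of intelligence" this entry calls for (our example; the abstract does not name a specific proposal) is Legg and Hutter's universal intelligence, which averages an agent's value over all computable environments, weighted by simplicity:

    ```latex
    % Universal intelligence of an agent (policy) \pi: the simplicity-
    % weighted expected value over all computable environments \mu, with
    % K(\mu) the Kolmogorov complexity of \mu and V_\mu^\pi the expected
    % total reward of \pi in \mu.
    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
    ```

    Here E is the set of computable reward-bearing environments; because K is uncomputable, any practical measure must approximate this sum.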

  • This work seeks to study the beneficial properties that an autonomous agent can obtain by implementing a cognitive architecture similar to that of conscious beings. In this document, a conscious model of an autonomous agent based on a global workspace architecture is presented. We describe how this agent is viewed from different perspectives in the philosophy of mind, drawing inspiration from their ideas. The goal of this model is to create autonomous agents able to navigate an environment composed of multiple independent magnitudes, adapting to their surroundings in order to find the best possible position on the basis of their inner preferences. The purpose of the model is to test the effectiveness of the various cognitive mechanisms it incorporates, such as an attention mechanism for magnitude selection, possession of inner feelings and preferences, use of a memory system to store beliefs and past experiences, and a global workspace that controls and integrates the information processed by all the subsystems of the model. We show in a large set of experiments how an autonomous agent can benefit from having such a cognitive architecture.
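
    As a toy rendering of the "attention mechanism for magnitude selection" this entry describes, the sketch below has an agent attend to whichever environmental magnitude deviates most from its inner preferences and then act to reduce that deviation. The magnitudes, preferences, and update rule are invented here and are not taken from the paper.

    ```python
    # Attention-for-magnitude-selection sketch: attend to the magnitude
    # with the largest normalized deviation from the inner preference,
    # then move it halfway toward the preferred value.
    preferences = {"temperature": 21.0, "light": 0.5, "noise": 0.1}
    state       = {"temperature": 27.0, "light": 0.4, "noise": 0.6}

    def attend(state, preferences):
        # Normalized error acts as the salience of each magnitude.
        error = lambda k: abs(state[k] - preferences[k]) / (abs(preferences[k]) + 1e-9)
        return max(state, key=error)

    for t in range(5):
        target = attend(state, preferences)
        state[target] += 0.5 * (preferences[target] - state[target])
        rounded = {k: round(v, 2) for k, v in state.items()}
        print(f"step {t}: attending to '{target}' -> {rounded}")
    ```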

  • The popular expectation is that Artificial Intelligence (AI) will soon surpass the capacities of the human mind and Strong Artificial General Intelligence (AGI) will replace the contemporary Weak AI. However, certain fundamental issues have to be addressed before this can happen. There can be no intelligence without understanding, and there can be no understanding without getting meanings. Contemporary computers manipulate symbols without meanings, which are not incorporated in the computations. This leads to the Symbol Grounding Problem: how could meanings be incorporated? The use of self-explanatory sensory information has been proposed as a possible solution. However, self-explanatory information can only be used in neural network machines that are different from existing digital computers and traditional multilayer neural networks. In humans, self-explanatory information takes the form of qualia. To have reportable qualia is to be phenomenally conscious. This leads to the hypothesis of an unavoidable connection between the solution of the Symbol Grounding Problem and consciousness. If, in general, self-explanatory information equals qualia, then machines that utilize self-explanatory information would be conscious.

  • Taking the stance that artificially conscious agents should be given human-like rights, in this paper we attempt to define consciousness, aggregate existing universal human rights, analyze robotic laws with roots in both reality and science fiction, and synthesize everything to create a new robot-ethical charter. By restricting the problem-space of possible levels of conscious beings to human-like, we succeed in developing a working definition of consciousness for social strong AI which focuses on human-like creativity being exhibited as a third-person observable phenomenon. Creativity is then extrapolated to represent first-person functionality, fulfilling the first/third-person feature of consciousness. Next, several sources of existing rights and rules, both for humans and robots, are analyzed and, along with supplementary informal reports, synthesized to create articles for an additive charter which complements the United Nations' Universal Declaration of Human Rights. Finally, the charter is presented and the paper concludes with the conditions for amending the charter, as well as recommendations for further charters.

  • This paper is focused on some preliminary cognitive and consciousness test results of using an Independent Core Observer Model Cognitive Architecture (ICOM) in a Mediated Artificial Super Intelligence (mASI) System. These preliminary test results, including objective and subjective analyses, are designed to determine whether further research is warranted along these lines. The comparative analysis includes comparisons to humans and human groups measured for direct comparison. The overall study includes optimization of a mediation client application to help perform tests, AI context-based input (building context-tree or graph data models), comparative intelligence testing (such as an IQ test), and other tests (i.e., Turing, Qualia, and Porter method tests) designed to look for early signs of consciousness, or the lack thereof, in the mASI system. Together, these are designed to determine whether this modified version of ICOM is a) in fact a form of AGI and/or ASI, b) conscious, and c) at least sufficiently interesting that further research is called for. This study is not conclusive but offers evidence to justify further research along these lines.

  • The main challenge of technology is to facilitate tasks and to transfer functions usually performed by humans to non-humans. However, the pervasion of machines in everyday life requires that non-humans come increasingly closer in their abilities to the ordinary thought, action and behaviour of humans. This view converges with the idea of the Humaniter, a longstanding myth in the history of technology: an artificial creature that thinks, acts and feels like a human to the point that one cannot tell the difference between the two. In the wake of the opposition between Strong AI and Weak AI, this challenge can be expressed in terms of a shift from the performance of intelligence (reason, reasoning, cognition, judgment) to that of sentience (experience, sensation, emotion, consciousness). In other words, if this possible shift is taken seriously, the challenge of technology is to move from the paradigm of Artificial Intelligence (AI) to that of Artificial Sentience (AS). But for the Humaniter not to be regarded as a mere myth, any intelligent or sentient machine must pass a Test of Humanity that refers to, or differs from, the Turing Test. One can suggest several options for this kind of test and also point out some conditions and limits to the very idea of the Humaniter as an artificial human.

Last update from database: 5/16/26, 1:00 AM (UTC)