Full bibliography 704 resources
-
This paper addresses the following research question: what is it like to be an artificial intelligence? It critically analyzes the epistemological and semantic aspects developed by Thomas Nagel in What Is It Like to Be a Bat? and The View from Nowhere, demonstrating the relationship between physicalism and subjectivity and its application to artificially intelligent beings. These two works were chosen because of the author's importance in analytic philosophy and his approach to consciousness. The analysis shows that the defense of artificial intelligence as a subject of law is intrinsically based on physicalism. However, in refuting physicalism, Nagel does not offer an alternative outside the scope of dualism. A Procedural Theory of the Subject of Law is therefore developed, with stages through which a being is emancipated before the law. As a result, the reductive physicalist view is found to be insufficient to ground the status of an artificial intelligence as a subject of law, that is, as a legal and political being in the social order. However, if the three stages of its formation (emancipation, interspecies recognition, and personification) are observed, the possibility of achieving that status is assumed. It is concluded that it is unverifiable what it is like to be an artificial intelligence, and that at the current scientific stage an artificially intelligent being cannot (yet) be considered a subject of law without risking instrumentalism. These results and conclusions are obtained through an integrated, analytical, deductive, and bibliographic research methodology.
-
The “Slicing Problem” is a thought experiment that raises questions for substrate-neutral computational theories of consciousness, including those that specify a certain causal structure for the computation, like Integrated Information Theory. The thought experiment uses water-based logic gates to construct a computer in a way that permits cleanly slicing each gate and connection in half, creating two identical computers each instantiating the same computation. The slicing can be reversed and repeated via an on/off switch, without changing the amount of matter in the system. The question is what different computational theories of consciousness say happens to the number and nature of individual conscious units as this switch is toggled. Under a token interpretation, there are now two discrete conscious entities; under a type interpretation, there may remain only one. Both interpretations lead to different implications depending on the adopted theoretical stance. Any route taken either allows mechanisms for “consciousness-multiplying exploits” or requires ambiguous boundaries between conscious entities, raising philosophical and ethical questions for theorists to consider. We discuss resolutions under different theories of consciousness for those unwilling to accept consciousness-multiplying exploits. In particular, we specify three features that may help promising physicalist theories navigate such thought experiments.
-
We’re experiencing a time when digital technologies and advances in artificial intelligence, robotics, and big data are redefining what it means to be human. How do these advancements affect contemporary media and music? This collection traces how media, with a focus on sound and image, engages with these new technologies. It bridges the gap between science and the humanities by pairing humanists’ close readings of contemporary media with scientists’ discussions of the science and math that inform them. The text gathers contributions by established and emerging scholars performing across-the-aisle research on new technologies, exploring topics such as facial and gait recognition; EEG and audiovisual materials; surveillance; and sound and images in relation to questions of sexual identity, race, ethnicity, disability, and class, with examples drawn from a range of films and TV shows including Blade Runner, Black Mirror, Mr. Robot, Morgan, Ex Machina, and Westworld. Through a variety of critical, theoretical, proprioceptive, and speculative lenses, the collection facilitates interdisciplinary thinking and collaboration and provides readers with ways of responding to these new technologies.
-
Recent advances in artificial intelligence (AI) have achieved human-scale speed and accuracy for classification tasks. Current systems do not need to be conscious to recognize patterns and classify them. However, for AI to advance to the next level, it needs to develop capabilities such as metathinking, creativity, and empathy. We contend that such a paradigm shift is possible through a fundamental change in the state of artificial intelligence toward consciousness, similar to what took place for humans through the process of natural selection and evolution. To that end, we propose that consciousness in AI is an emergent phenomenon that primordially appears when two machines cocreate their own language through which they can recall and communicate their internal state of time-varying symbol manipulation. Because, in our view, consciousness arises from the communication of inner states, it leads to empathy. We then provide a link between the empathic quality of machines and better service outcomes associated with empathic human agents that can also lead to accountability in AI services.
-
It is noted that there are many different definitions of and views about qualia, and this makes qualia into a vague concept without much theoretical and constructive value. Here, qualia are redefined in a more general way. It is argued that the redefined qualia will be essential to the mind–body problem, the problem of consciousness and also to the symbol grounding problem, which is inherent in physical symbol systems. Then, it is argued that the redefined qualia are necessary for Artificial Intelligence systems for the operation with meanings. Finally, it is proposed that robots with qualia may be conscious.
-
An area related to artificial intelligence and computational robotics is artificial consciousness, also known as computer consciousness or virtual consciousness. Theories of artificial consciousness aim to establish what would have to be synthesized in an engineered artifact for consciousness to be present.
-
Many theories have been developed and many artificial systems implemented in the area of machine consciousness, yet none has achieved it. As one possible approach, we are interested in implementing a system that integrates different theories. To that end, this paper proposes a model based on global workspace theory and an attention mechanism, providing a fundamental framework for our future work. To examine this model, two experiments are conducted. The first demonstrates the agent’s ability to shift attention over multiple stimuli, which accounts for the dynamics of conscious content. The second simulates attentional blink and lag-1 sparing, two well-studied effects in the psychology and neuroscience of attention and consciousness, and aims to show the agent’s compatibility with human brains. In summary, the main contributions of this paper are: (1) adaptation of the global workspace framework with separated workspace nodes, reducing unnecessary computation while retaining the potential for global availability; (2) embedding an attention mechanism into the global workspace framework as the competition mechanism for conscious access; (3) a synchronization mechanism in the global workspace that supports the lag-1 sparing effect while retaining the attentional blink effect.
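The core mechanism described in this abstract, attention as the competition for conscious access, with a refractory suppression of the most recent winner producing blink-like shifts across stimuli, can be sketched in a few lines. This is an illustrative toy, not the authors' implementation; the class name, the decay parameter, and the suppression rule are all assumptions made for the sketch.

```python
class GlobalWorkspaceAgent:
    """Toy sketch: attention-based competition for conscious access.

    Each stimulus competes on bottom-up salience minus a lingering
    suppression term; the winner is 'broadcast' (returned) and then
    suppressed, so attention shifts to competitors on later cycles,
    loosely echoing the attentional-blink dynamic.
    """

    def __init__(self, decay=0.5):
        self.decay = decay        # how fast suppression fades per cycle
        self.suppression = {}     # stimulus name -> current inhibition

    def attend(self, stimuli):
        """stimuli: dict mapping stimulus name -> salience.
        Returns the stimulus that gains conscious access this cycle."""
        scores = {name: salience - self.suppression.get(name, 0.0)
                  for name, salience in stimuli.items()}
        winner = max(scores, key=scores.get)
        # Decay old suppressions, then suppress this cycle's winner.
        for name in list(self.suppression):
            self.suppression[name] *= self.decay
        self.suppression[winner] = self.suppression.get(winner, 0.0) + 1.0
        return winner

agent = GlobalWorkspaceAgent()
# The stronger stimulus wins first; its refractory suppression then
# lets the weaker competitor take over, so attention shifts.
print(agent.attend({"tone": 0.9, "flash": 0.6}))  # tone
print(agent.attend({"tone": 0.9, "flash": 0.6}))  # flash
```

The design choice worth noting is that competition and suppression alone, with no central scheduler, are enough to make conscious content rotate across stimuli, which is the dynamic the paper's first experiment demonstrates.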
-
In popular media, there is often a connection drawn between the advent of awareness in artificial agents and those same agents simultaneously achieving human or superhuman level intelligence. In this work, we explore the validity and potential application of this seemingly intuitive link between consciousness and intelligence. We do so by examining the cognitive abilities associated with three contemporary theories of conscious function: Global Workspace Theory (GWT), Information Generation Theory (IGT), and Attention Schema Theory (AST). We find that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans. With this insight, we turn to the field of Artificial Intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. Having identified this trend, we use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories may be combined into a single unified and implementable model. Given that it is made possible by cognitive abilities underlying each of the three functional theories, artificial agents capable of mental time travel would not only possess greater general intelligence than current approaches, but also be more consistent with our current understanding of the functional role of consciousness in humans, thus making it a promising near-term goal for AI research.
-
The purpose of this article is to shed some light on aspects of human consciousness and artificial intelligence (AI). Consciousness is a unique aspect of humans, often held to make them the dominant beings on Earth. Human consciousness and artificial intelligence are both complex entities, which makes it all the more difficult for AI to develop consciousness. The article introduces topics such as cognitive abilities and ethics. Ethics is often confused with morality, but morality is only a part of ethics: ethics is a theoretical concept built on perception and on what society considers right or wrong. The article also examines whether artificial intelligence can develop consciousness as complex as human consciousness, whether it can have cognitive abilities, and whether it can have ethics.
-
Consciousness is about internal and external existence as it is felt and reflected through actions. Self-existence is the feeling of one's own existence and is also associated with freedom [1, 2]. It is the feeling and awareness that the individual exists and can act at will, through activities such as choosing, attaining certain objectives, and fighting for survival.
-
The intersection between neuroscience and artificial intelligence (AI) research has created synergistic effects in both fields. While neuroscientific discoveries have inspired the development of AI architectures, new ideas and algorithms from AI research have produced new ways to study brain mechanisms. A well-known example is the case of reinforcement learning (RL), which has stimulated neuroscience research on how animals learn to adjust their behavior to maximize reward. In this review article, we cover recent collaborative work between the two fields in the context of meta-learning and its extension to social cognition and consciousness. Meta-learning refers to the ability to learn how to learn, such as learning to adjust hyperparameters of existing learning algorithms and how to use existing models and knowledge to efficiently solve new tasks. This meta-learning capability is important for making existing AI systems more adaptive and flexible in efficiently solving new tasks. Since this is one of the areas where there is a gap between human performance and current AI systems, successful collaboration should produce new ideas and progress. Starting from the role of RL algorithms in driving neuroscience, we discuss recent developments in deep RL applied to modeling prefrontal cortex functions. From a broader perspective, we discuss the similarities and differences between social cognition and meta-learning, and finally conclude with speculations on the potential links between intelligence as endowed by model-based RL and consciousness. For future work we highlight data efficiency, autonomy, and intrinsic motivation as key research areas for advancing both fields.
-
Having a rich multimodal inner language is an important component of human intelligence that enables several necessary core cognitive functions such as multimodal prediction, translation, and generation. Building upon the Conscious Turing Machine (CTM), a machine model for consciousness proposed by Blum and Blum (2021), we describe the desiderata of a multimodal language called Brainish, comprising words, images, audio, and sensations combined in representations that the CTM's processors use to communicate with each other. We define the syntax and semantics of Brainish before operationalizing this language through the lens of multimodal artificial intelligence, a vibrant research area studying the computational tools necessary for processing and relating information from heterogeneous signals. Our general framework for learning Brainish involves designing (1) unimodal encoders to segment and represent unimodal data, (2) a coordinated representation space that relates and composes unimodal features to derive holistic meaning across multimodal inputs, and (3) decoders to map multimodal representations into predictions (for fusion) or raw data (for translation or generation). Through discussing how Brainish is crucial for communication and coordination in order to achieve consciousness in the CTM, and by implementing a simple version of Brainish and evaluating its capability of demonstrating intelligence on multimodal prediction and retrieval tasks on several real-world image, text, and audio datasets, we argue that such an inner language will be important for advances in machine models of intelligence and consciousness.
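The three-part framework this abstract describes, (1) unimodal encoders, (2) a coordinated representation space, and (3) decoders for fusion or translation, can be illustrated with random linear maps standing in for learned networks. This is a minimal sketch under stated assumptions: the dimensions, the tanh nonlinearity, and averaging as the composition step are illustrative choices, not the paper's actual Brainish implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8  # dimensionality of the shared, coordinated representation space

def make_encoder(input_dim):
    """(1) A unimodal encoder: a fixed random projection into the
    shared space, standing in for a trained network."""
    W = rng.normal(size=(D, input_dim))
    return lambda x: np.tanh(W @ x)

encode_text = make_encoder(16)    # one encoder per modality
encode_image = make_encoder(32)

def coordinate(z_text, z_image):
    """(2) Compose unimodal features into one holistic representation.
    Averaging is the simplest possible coordination rule."""
    return (z_text + z_image) / 2

W_dec = rng.normal(size=(4, D))

def decode(z):
    """(3) Map the multimodal representation to a fused prediction."""
    return W_dec @ z

z = coordinate(encode_text(rng.normal(size=16)),
               encode_image(rng.normal(size=32)))
print(decode(z).shape)  # (4,)
```

In a real system each of these three maps would be a trained neural network and the coordination step would be learned (for example, via a contrastive or fusion objective), but the data flow, modality-specific encoding, composition in a shared space, then decoding, is the same.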
-
Several lessons can already be drawn from current research programs on strong AI and building conscious machines, even if they arguably have not yet borne fruit. The first is that functionalist approaches to consciousness do not account for the key importance of subjective experience and can be easily confounded by the way in which algorithms work and succeed. Authenticity and emergence are key concepts that can be useful in discerning valid approaches from invalid ones and can clarify cases in which algorithms are claimed to be conscious, such as Sophia or LaMDA. Subjectivity and embeddedness become key notions that should also lead us to re‐examine the ethics of decision delegation. In addition, the focus on subjective experience shifts what is relevant in our understanding of ourselves as human beings and as an image of God, namely, in de‐emphasizing intellectuality in favor of experience and contemplation over action.
-
Consciousness and intelligence are properties commonly understood as interdependent by folk psychology and society in general. The term artificial intelligence, and the kinds of problems it has managed to solve in recent years, have been used to argue that machines experience some sort of consciousness. Following Russell's analogy, if a machine is able to do what a conscious human being does, the likelihood that the machine is conscious increases. However, the social implications of this analogy are catastrophic. Concretely, if rights are granted to entities that can solve the kinds of problems that a neurotypical person can, does a machine potentially have more rights than a person with a disability? For example, autism spectrum disorder can leave a person unable to solve the kinds of problems that a machine solves. We believe the obvious answer is no, as problem solving does not imply consciousness. Consequently, we argue in this paper that phenomenal consciousness and, at least, computational intelligence are independent, and that machines do not possess phenomenal consciousness, although they can potentially develop higher computational intelligence than human beings. To do so, we attempt to formulate an objective measure of computational intelligence and study how it presents in human beings, animals, and machines. Analogously, we study phenomenal consciousness as a dichotomous variable and how it is distributed among humans, animals, and machines. Since phenomenal consciousness and computational intelligence are independent, this has critical implications for society, which we also analyze in this work.
-
The Transformer artificial intelligence model is one of the most accurate models for extracting meaning/semantics from sets of symbolic sequences of various lengths, including long sequences. These models transform language spaces according to long- and short-distance relationships among units of the language, and thus mimic some aspects of human comprehension of the world. To frame a generalized theory of the identification and generation of meaning in human thought, the transformer model needs to be understood in the context of generalized systems theory, such that other equivalent models can be discovered, compared, and selected to converge on a base model of the meaning-identification and discovery aspect of the philosophy of knowledge, or epistemology. This paper explores the relationships of the transformer model and its various component parts, processes, and phenomena to critical aspects of generalized systems theory such as cognition, symmetry & equivalence, holons, emergence, identifiability, system spaces and system universe, reconstructability, equilibriums & oscillations, scaling, polystability, ontogeny, algedonic loops, heterarchy, holarchy, homeorhesis, isomorphism, homeostasis, attractors, equifinality, nesting, parallelization, loops, causal structure, transformations, feedbacks, encodings, and information complexity.
-
This article provides an analytical framework for how to simulate human-like thought processes within a computer. It describes how attention and memory should be structured, updated, and utilized to search for associative additions to the stream of thought. The focus is on replicating the dynamics of the mammalian working memory system, which features two forms of persistent activity: sustained firing (preserving information on the order of seconds) and synaptic potentiation (preserving information from minutes to hours). The article uses a series of figures to systematically demonstrate how the iterative updating of these working memory stores provides functional organization to behavior, cognition, and awareness. In a machine learning implementation, these two memory stores should be updated continuously and in an iterative fashion. This means each state should preserve a proportion of the coactive representations from the state before it (where each representation is an ensemble of neural network nodes). This makes each state a revised iteration of the preceding state and causes successive configurations to overlap and blend with respect to the information they contain. Thus, the set of concepts in working memory will evolve gradually and incrementally over time. Transitions between states happen as persistent activity spreads activation energy throughout the hierarchical network, searching long-term memory for the most appropriate representation to be added to the global workspace. The result is a chain of associatively linked intermediate states capable of advancing toward a solution or goal. Iterative updating is conceptualized here as an information processing strategy, a model of working memory, a theory of consciousness, and an algorithm for designing and programming artificial intelligence (AI, AGI, and ASI).
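The iterative-updating scheme this abstract describes, where each working-memory state preserves a proportion of the previous state's coactive representations and adds one associative retrieval from long-term memory, can be sketched concretely. The association table, the capacity limit, and the newest-first ordering are illustrative assumptions, not the article's model; they exist only to show how successive states overlap and blend while older items decay out.

```python
# Toy long-term memory: a table of associative links. In the article's
# framing, spreading activation searches a hierarchical network; here a
# dict lookup stands in for that search.
ASSOCIATIONS = {
    "rain": "umbrella", "umbrella": "handle", "handle": "door",
    "door": "key", "key": "lock",
}

def step(state, capacity=3):
    """Advance the stream of thought by one iteration.

    state: list of active representations, newest first. The most
    active item cues an associative retrieval from long-term memory;
    the retrieved item joins the store, and items beyond the capacity
    limit decay out. Each new state thus shares most of its contents
    with the previous one, making it a revised iteration of it.
    """
    cue = state[0]                      # most recently added item cues recall
    retrieved = ASSOCIATIONS.get(cue)   # 'search' long-term memory
    nxt = [retrieved] + state if retrieved else state
    return nxt[:capacity]               # oldest items fall out of the store

state = ["rain"]
for _ in range(4):
    state = step(state)
    print(state)
# The printed states overlap pairwise, forming a chain of associatively
# linked intermediate states: rain -> umbrella -> handle -> door -> key.
```

The point of the sketch is the overlap: consecutive states share capacity-minus-one items, so the contents of working memory evolve gradually and incrementally rather than being replaced wholesale, which is the property the article identifies with the continuity of the stream of thought.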
-
In this article, it is taken for granted that fully-human artificial intelligence—a term used to denote artificial life that is, in principle, more human than strong AI would be—must possess operational faculties for consciousness and selfhood. After clarifying relevant questions surrounding the socio-psychological phenomena of interest, progress in animal and humanoid robotics is summarized. The aforementioned topics within philosophy and the social sciences are reviewed, noting their relevant overlaps with recent developments in the cognitive and computer sciences. My working assumption is that the avowed goal of human AI cannot currently be written off as impossible and should therefore be critically engaged (the intent is to engage with humanoid robotics’ capabilities and features in relation to the present state of knowledge regarding the psychological phenomena discussed). It is argued that for human AI to fully succeed as a discipline, the discussed psychological notions as we understand and experience them must be further elucidated and adequately accounted for by AI research programs.
-
People have different opinions about which conditions robots would need to fulfil—and for what reasons—to be moral agents. Standardists hold that specific internal states (like rationality, free will or phenomenal consciousness) are necessary in artificial agents, and robots are thus not moral agents since they lack these internal states. Functionalists hold that what matters are certain behaviours and reactions—independent of what the internal states may be—implying that robots can be moral agents as long as the behaviour is adequate. This article defends a standardist view in the sense that the internal states are what matters for determining the moral agency of the robot, but it will be unique in being an internalist theory defending a large degree of robot responsibility, even though humans, but not robots, are taken to have phenomenal consciousness. This view is based on an event-causal libertarian theory of free will and a revisionist theory of responsibility, which combined explain how free will and responsibility can come in degrees. This is meant to be a middle position between typical compatibilist and libertarian views, securing the strengths of both sides. The theories are then applied to robots, making it possible to be quite precise about what it means that robots can have a certain degree of moral responsibility, and why. Defending this libertarian form of free will and responsibility then implies that non-conscious robots can have a stronger form of free will and responsibility than what is commonly defended in the literature on robot responsibility.
-
The established theories and frameworks on consciousness in the academic literature as related to artificial intelligence (AI) are rooted in anthropocentrism. Even those theories created intentionally for AI are based on the levels of consciousness as understood primarily in humans and secondarily in other animals. This paper will discuss why such anthropocentric frameworks are built on insecure foundations. We will do this by comparing the capacities and functions of human and AI cognitive architectures, discussing the ramifications and consequences of the behaviors that stem from these, and looking at the neurological conditions in humans that give the most promising hints as to what a potentially conscious AI entity would look like. The paper ends with a proposed solution for building a nonanthropocentric foundation of cognition that could lead toward a truly AI-focused framework of consciousness.