Full bibliography (716 resources)

  • The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.

  • Artificial intelligence is a tool designed by people for the gratification of their own creative ego, so we cannot confuse conscience with intelligence, nor even intelligence in its human form with conscience. They are different concepts with different uses. Philosophically, the difference between autonomous people and automatic artificial intelligence is the difference between intelligence and artificial intelligence: autonomous versus automatic. But conscience stands above these differences, because it is neither conditioned by the self-preservation of autonomy (a conscience is something one uses to help one’s neighbor) nor automatic (one’s conscience is tested by situations that are neither familiar nor routine). Only in science-fiction literature, then, does artificial intelligence resemble an autonomous, conscience-endowed being. In real life, religion, with its notions of redemption, sin, expiation, confession and communion, will have no meaning for a machine that cannot make a mistake of its own accord.

  • "The Creation of a Conscious Machine" surveys the millennial quest to create an intelligent artefact, concludes that consciousness is the key to achieving this goal, and proposes that we adopt an understanding of synthetic consciousness suitable for machine implementation. The text describes how achieving Artificial Intelligence will yield extraordinary intellectual benefits and deep insights into the human condition. It examines past attempts, from ancient times until today, to define intelligence and implement it, drawing useful lessons from each. In particular, the Turing Test, the most influential measure of artificial intelligence, is the subject of an in-depth analysis. Ultimately, the author rejects the Turing Test, and the concept of a test itself, as an unsuitable measure of machine intelligence. On the basis of this analysis, the author concludes that humans will only consider a machine to be truly intelligent if they also perceive it to be conscious. To realize the quest of Artificial Intelligence, it is necessary to implement consciousness. In turn, to achieve Synthetic Consciousness, we must discard the view that consciousness is a subjective experience in favour of a different understanding, deeply rooted in the Western tradition, that it is an externally observable capability. Using this "new" understanding, the author then proposes a definition of Synthetic Consciousness, expressed as specification objectives, that is suitable for software implementation. This makes it possible to build the first generation of synthetic conscious beings.

  • Using insights from cybernetics and an information-based understanding of biological systems, a precise, scientifically inspired definition of free-will is offered and the essential requirements for an agent to possess it in principle are set out. These are: (a) there must be a self to self-determine; (b) there must be a non-zero probability of more than one option being enacted; (c) there must be an internal means of choosing among options (which is not merely random, since randomness is not a choice). For (a) to be fulfilled, the agent of self-determination must be organisationally closed (a “Kantian whole”). For (c) to be fulfilled: (d) options must be generated from an internal model of the self which can calculate future states contingent on possible responses; (e) choosing among these options requires their evaluation using an internally generated goal defined on an objective function representing the overall “master function” of the agent; and (f) for “deep free-will”, at least two nested levels of choice and goal (d–e) must be enacted by the agent. The agent must also be able to enact its choice in physical reality. The only systems known to meet all these criteria are living organisms, not just humans, but a wide range of organisms. The main impediment to free-will in present-day artificial robots is that they are not Kantian wholes. Consciousness does not seem to be a requirement, and the minimum complexity for a free-will system may be quite low, including relatively simple life-forms that are at least able to learn.
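
The abstract's criteria (a)-(f) can be read as a boolean checklist. The following is a minimal illustrative sketch of that reading; all names and the example agents are hypothetical, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    # Hypothetical attributes mirroring the abstract's criteria.
    organisationally_closed: bool   # (a) a self: a "Kantian whole"
    option_count: int               # (b) more than one enactable option
    has_self_model: bool            # (d) internal model generating options
    has_internal_goal: bool         # (e) evaluation against an internal goal
    nested_choice_levels: int       # (f) nesting depth of choice/goal loops
    can_act_physically: bool        # choices must be enactable in the world

def has_free_will(a: Agent, deep: bool = False) -> bool:
    basic = (a.organisationally_closed       # (a)
             and a.option_count > 1          # (b)
             and a.has_self_model            # (c)+(d): non-random internal choice
             and a.has_internal_goal         # (e)
             and a.can_act_physically)
    if deep:                                 # (f) "deep free-will"
        return basic and a.nested_choice_levels >= 2
    return basic

# Per the abstract, a present-day robot fails only at (a):
robot = Agent(False, 3, True, True, 1, True)
organism = Agent(True, 3, True, True, 2, True)
```

On this reading, `has_free_will(robot)` is false solely because the robot is not organisationally closed, which is the abstract's claimed impediment.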

  • This chapter aims to evaluate Integrated Information Theory's claims concerning Artificial Consciousness. Integrated Information Theory (IIT) works from premises that claim that certain properties, such as unity, are essential to consciousness, to conclusions regarding the constraints upon physical systems that could realize consciousness. Among these conclusions is the claim that feed-forward systems, and systems that are not largely reentrant, necessarily will fail to generate consciousness (but may simulate it). This chapter will discuss the premises of IIT, which themselves are highly controversial, and will also address IIT's related rejection of functionalism. This analysis will argue that IIT has failed to establish good grounds for these positions, and that convincing alternatives remain available. This, in turn, implies that the constraints upon Artificial Consciousness are more generous than IIT would have them be.

  • Artificial intelligence and research on consciousness have reciprocally influenced each other: theories about consciousness have inspired work on artificial intelligence, and the results from artificial intelligence have changed our interpretation of the mind. Artificial intelligence can be used to test theories of consciousness and research has been carried out on the development of machines with the behavior, cognitive characteristics and architecture associated with consciousness. Some people have argued against the possibility of machine consciousness, but none of these objections is conclusive and many theories openly embrace the possibility that phenomenal consciousness could be realized in artificial systems.

  • Examined here is the work done in many laboratories on the proposition that the mechanisms underlying consciousness in living organisms can be studied using computational theories. This follows an agreement at a 2001 multi-disciplinary meeting of philosophers, neuroscientists and computer scientists that such a research programme was feasible and worthwhile. The effort is reviewed both as a historical statement and for the positions held at the time this volume went to print. The approaches cover diverse techniques, ranging from machine modeling of neural structures in the brain to abstract models based on the type of logic found in computer programming. Purely theoretical approaches based on hypotheses about what kind of information constitutes a mental state are included.

  • In the past, computational models for machine consciousness have been proposed with varying degrees of challenges for implementation. Affective computing focuses on the development of systems that can simulate, recognize, and process human affects, which refer to the experience of feeling or emotion. The affective attributes are important factors for the future of machine consciousness with the rise of technologies that can assist humans and also build trustworthy relationships between humans and artificial systems. In this paper, an affective computational model for machine consciousness is proposed, together with a system for managing its major features. Real-world scenarios are presented to further illustrate the functionality of the model and provide a road-map for computational implementation.

  • The structure of consciousness has long been a cornerstone problem in the cognitive sciences. Recently it took on applied significance in the design of computer agents and mobile robots. This problem can thus be examined from perspectives of philosophy, neuropsychology, and computer modeling.

  • Some humans may seek to purchase pain-feeling robots for the purpose of torturing them – a sad fact about some humans. This chapter explores the Chinese room argument against robot pain. The role of the Chinese room thought experiment is to establish the truth of the claim that it is indeed possible for something or someone to run any arbitrarily selected program without thereby understanding Chinese. One way to construct a robotic copy of a human is by gradually transforming a human into a robot by a sequence of prosthetic replacements of the human’s naturally occurring parts, especially parts of the nervous system, with artificial analogs. Like all physiological systems in the human body, the nervous system is composed of causally interacting cells. The chapter emphasizes the ways in which the thought experiments in the respective arguments attempt to marshal hypothetical first-person accessible evidence concerning how one’s own mental life appears to oneself.

  • Humans are active agents in the design of artificial intelligence (AI), and our input into its development is critical. A case is made for recognizing the importance of including non-ordinary functional capacities of human consciousness in the development of synthetic life, in order for the latter to capture a wider range in the spectrum of neurobiological capabilities. These capacities can be revealed by studying self-cultivation practices designed by humans since prehistoric times for developing non-ordinary functionalities of consciousness. A neurophenomenological praxis is proposed as a model for self-cultivation by an agent in an entropic world. It is proposed that this approach will promote a more complete self-understanding in humans and enable a more thoroughly mutually-beneficial relationship between life in vivo and life in silico.

  • Artificial intelligence capacities for consciousness are not equivalent to human consciousness—the level of autonomous, independent, volitional behavioral control characterized by independently functioning biologic systems. There is strong evidence, however, that artificially created systems including Web-based search engines empirically demonstrate machine-based equivalents of aspects of consciousness. They have attained this capacity as based on high-level (in some cases suprabiologic) capabilities in defined aspects of consciousness (intelligence, attention, autonomy, and intention). Such systems also have the capacity to meet multiple definition criteria for having cognitive processing that is approximately equivalent to dreaming. Computer–human interface systems have expanded both human and machine capacity to address and extend scientific understanding at all epistemological levels of current inquiry. Complexity theories of consciousness can be used to theoretically support such an attainment of conscious function.

  • The research work presented in this article investigates and explains the conceptual mechanisms of consciousness and common-sense thinking of animates. These mechanisms are computationally simulated on artificial agents as strategic rules to analyze and compare the performance of agents in critical and dynamic environments. Awareness of, and attention to, specific parameters that affect the performance of agents specify the consciousness level in agents. Common sense is a set of beliefs that are accepted to be true among a group of agents engaged in a common purpose, with or without self-experience. Common-sense agents are conscious agents endowed with a few common-sense assumptions. The environment so created has attackers with dependency on agents in the survival-food chain. These attackers create a threat mental state in agents that can affect their conscious and common-sense behaviors. The agents are built with a multi-layer cognitive architecture, COCOCA (Consciousness and Common sense Cognitive Architecture), with five columns and six layers of cognitive processing of each percept of an agent. The conscious agents self-learn strategies for threat management and energy-level maintenance. Experimentation conducted in this research work demonstrates animate-level intelligence in their problem-solving capabilities, decision making and reasoning in critical situations.

  • How do “minds” work? In Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena, Professor Jun Tani reviews key experiments within his own pioneering neurorobotics research project aimed at answering this fundamental and fascinating question. The book shows how symbols and concepts representing the world can emerge in robots via “deep learning”, using specially designed neural network architectures. Through iterative interactions between top-down proactive “subjective” and “intentional” processes for plotting action, and bottom-up updates of perceptual reality after action, the robot learns to isolate, identify, and even infer salient features of its operational environment, modifying its behavior based on anticipations of both objective and social cues. Through permutations of this experimental model, the book then argues that longstanding questions about the nature of “consciousness” and “freewill” can be addressed through an understanding of the dynamic structures within which, in the course of normal operations and in a changing operational environment, necessary top-down/bottom-up interactions arise. Written in clear and accessible language, this book opens a privileged window for a broad audience onto the science of artificial intelligence and the potential for artificial consciousness, threading cognitive neuroscience, dynamic systems theory, robotics, and phenomenology through an elegant series of deceptively simple experiments that build upon one another and ultimately outline the fundamental form of the working mind.

  • Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems—these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.

  • Understanding the nature of consciousness is one of the grand outstanding scientific challenges. The fundamental methodological problem is how phenomenal first-person experience can be accounted for in a third-person verifiable form, while the conceptual challenge is to define both its function and its physical realization. The distributed adaptive control theory of consciousness (DACtoc) proposes answers to these three challenges. The methodological challenge is answered relative to the hard problem, and DACtoc proposes that it can be addressed using a convergent synthetic methodology based on the analysis of synthetic biologically grounded agents, or quale parsing. DACtoc hypothesizes that consciousness in both its primary and secondary forms serves the ability to deal with the hidden states of the world and emerged during the Cambrian period, allowing stable multi-agent environments to emerge. The process of consciousness is an autonomous virtualization memory, which serializes and unifies the parallel and subconscious simulations of the hidden states of the world that are largely due to other agents and the self, with the objective of extracting norms. These norms are in turn projected as value onto the parallel simulation and control systems that are driving action. This functional hypothesis is mapped onto the brainstem, midbrain and the thalamo-cortical and cortico-cortical systems and analysed with respect to our understanding of deficits of consciousness. Subsequently, some of the implications and predictions of DACtoc are outlined, in particular the prediction that normative bootstrapping of conscious agents is predicated on an intentionality prior. In the view advanced here, human consciousness constitutes the ultimate evolutionary transition by allowing agents to become autonomous with respect to their evolutionary priors, leading to a post-biological Anthropocene. This article is part of the themed issue ‘The major synthetic evolutionary transitions’.

  • The social robot is becoming important to a future world of pervasive computing, in which robots facilitate everyday life. Social behavior and natural action are among the most needed capabilities for emerging realistic human-like robots, since cross-creature communication is significant for friendly companionship. This paper proposes an artificial topological consciousness for a pet robot, based on artificial neurotransmitters and motivation. The system focuses on three points: first, the organization of a behavior and emotion model with respect to phylogeny; second, a method by which the robot can empathize with the user's expression; and third, how the robot can socially perform its own expression to the human, using a biologically inspired topological on-line method, encouraging the user or being delighted by its own emotion and the human's expression. We believe, on the basis of our experiments, that artificial consciousness grounded in complexity levels, together with the robot's social expression, enhances user affinity.

  • When people speak about consciousness, they distinguish various types and different levels, and they argue for different concepts of cognition. This complicates the discussion about artificial or machine consciousness. Here we take a bottom-up approach to this question by presenting a family of robot experiments that invite us to think about consciousness in the context of artificial agents. The experiments are based on a computational model of sensorimotor contingencies. It has been suggested that these regularities in the sensorimotor flow of an agent can explain raw feels and perceptual consciousness in biological agents. We discuss the validity of the model with respect to sensorimotor contingency theory and consider whether a robot that is controlled by knowledge of its sensorimotor contingencies could have any form of consciousness. We propose that consciousness does not require higher-order thought or higher-order representations. Rather, we argue that consciousness starts when (i) an agent actively (endogenously triggered) uses its knowledge of sensorimotor contingencies to issue predictions and (ii) when it deploys this capability to structure subsequent action.
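
The two conditions in the abstract above, (i) endogenously issuing predictions from learned sensorimotor contingencies and (ii) deploying them to structure subsequent action, can be sketched minimally as follows. The class, its methods, and the toy light-switching world are hypothetical illustrations, not the paper's actual robot controllers:

```python
class SMCAgent:
    """Toy agent that learns sensorimotor contingencies as
    (sensation, action) -> next-sensation regularities."""

    def __init__(self):
        self.contingencies = {}

    def learn(self, sensation, action, next_sensation):
        # Record a regularity observed in the sensorimotor flow.
        self.contingencies[(sensation, action)] = next_sensation

    def predict(self, sensation, action):
        # (i) endogenously triggered prediction from learned contingencies.
        return self.contingencies.get((sensation, action))

    def choose_action(self, sensation, actions, desired):
        # (ii) deploy predictions to structure subsequent action:
        # pick the action predicted to yield the desired sensation.
        for a in actions:
            if self.predict(sensation, a) == desired:
                return a
        return actions[0]  # fall back when no prediction matches

agent = SMCAgent()
agent.learn("dark", "turn_on_light", "bright")
agent.learn("dark", "wait", "dark")
```

Given these two learned contingencies, the agent asked for "bright" while sensing "dark" selects "turn_on_light"; the point of the sketch is only that prediction, not reaction, drives the choice.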

Last update from database: 5/16/26, 1:00 AM (UTC)