
Full bibliography (675 resources)

  • Humans are active agents in the design of artificial intelligence (AI), and our input into its development is critical. A case is made for recognizing the importance of including non-ordinary functional capacities of human consciousness in the development of synthetic life, in order for the latter to capture a wider range in the spectrum of neurobiological capabilities. These capacities can be revealed by studying self-cultivation practices designed by humans since prehistoric times for developing non-ordinary functionalities of consciousness. A neurophenomenological praxis is proposed as a model for self-cultivation by an agent in an entropic world. It is proposed that this approach will promote a more complete self-understanding in humans and enable a more thoroughly mutually beneficial relationship between life in vivo and life in silico.

  • Artificial intelligence capacities for consciousness are not equivalent to human consciousness—the level of autonomous, independent, volitional behavioral control characteristic of independently functioning biologic systems. There is strong evidence, however, that artificially created systems, including Web-based search engines, empirically demonstrate machine-based equivalents of aspects of consciousness. They have attained this capacity based on high-level (in some cases suprabiologic) capabilities in defined aspects of consciousness (intelligence, attention, autonomy, and intention). Such systems also have the capacity to meet multiple definitional criteria for cognitive processing that is approximately equivalent to dreaming. Computer–human interface systems have expanded both human and machine capacity to address and extend scientific understanding at all epistemological levels of current inquiry. Complexity theories of consciousness can be used to theoretically support such an attainment of conscious function.

  • The research presented in this article investigates and explains conceptual mechanisms of consciousness and common-sense thinking in animates. These mechanisms are computationally simulated on artificial agents as strategic rules, in order to analyze and compare the performance of agents in critical and dynamic environments. Awareness of, and attention to, specific parameters that affect agent performance determine the consciousness level of the agents. Common sense is a set of beliefs that are accepted as true among a group of agents engaged in a common purpose, with or without self-experience. Common-sense agents are conscious agents endowed with a few common-sense assumptions. The simulated environment contains attackers that depend on the agents in the survival food chain. These attackers create a threat mental state in the agents that can affect their conscious and common-sense behaviors. The agents are built with COCOCA (Consciousness and Common sense Cognitive Architecture), a multi-layer cognitive architecture with five columns and six layers of cognitive processing for each percept of an agent. The conscious agents self-learn strategies for threat management and energy-level maintenance. The experiments conducted in this work demonstrate animate-level intelligence in the agents' problem solving, decision making and reasoning in critical situations.

  • How do “minds” work? In Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena, Professor Jun Tani reviews key experiments within his own pioneering neurorobotics research project aimed at answering this fundamental and fascinating question. The book shows how symbols and concepts representing the world can emerge via “deep learning” within robots, by using specially designed neural network architectures by which, given iterative interactions between top-down proactive “subjective” and “intentional” processes for plotting action, and bottom-up updates of the perceptual reality after action, the robot is able to learn to isolate, to identify, and even to infer salient features of the operational environment, modifying its behavior based on anticipations of both objective and social cues. Through permutations of this experimental model, the book then argues that longstanding questions about the nature of “consciousness” and “freewill” can be addressed through an understanding of the dynamic structures within which, in the course of normal operations and in a changing operational environment, necessary top-down/bottom-up interactions arise. Written in clear and accessible language, this book opens a privileged window for a broad audience onto the science of artificial intelligence and the potential for artificial consciousness, threading cognitive neuroscience, dynamic systems theory, robotics, and phenomenology through an elegant series of deceptively simple experiments that build upon one another and ultimately outline the fundamental form of the working mind.

  • Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems—these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.

  • Understanding the nature of consciousness is one of the grand outstanding scientific challenges. The fundamental methodological problem is how phenomenal first person experience can be accounted for in a third person verifiable form, while the conceptual challenge is to both define its function and physical realization. The distributed adaptive control theory of consciousness (DACtoc) proposes answers to these three challenges. The methodological challenge is answered relative to the hard problem and DACtoc proposes that it can be addressed using a convergent synthetic methodology using the analysis of synthetic biologically grounded agents, or quale parsing. DACtoc hypothesizes that consciousness in both its primary and secondary forms serves the ability to deal with the hidden states of the world and emerged during the Cambrian period, affording stable multi-agent environments to emerge. The process of consciousness is an autonomous virtualization memory, which serializes and unifies the parallel and subconscious simulations of the hidden states of the world that are largely due to other agents and the self with the objective to extract norms. These norms are in turn projected as value onto the parallel simulation and control systems that are driving action. This functional hypothesis is mapped onto the brainstem, midbrain and the thalamo-cortical and cortico-cortical systems and analysed with respect to our understanding of deficits of consciousness. Subsequently, some of the implications and predictions of DACtoc are outlined, in particular, the prediction that normative bootstrapping of conscious agents is predicated on an intentionality prior. In the view advanced here, human consciousness constitutes the ultimate evolutionary transition by allowing agents to become autonomous with respect to their evolutionary priors leading to a post-biological Anthropocene. This article is part of the themed issue ‘The major synthetic evolutionary transitions’.

  • Social robots are becoming important to a future world of pervasive computing in which robots facilitate our daily lives. Social behavior and natural action are among the most needed functions for emerging, realistic human-like robots. This paper proposes an artificial topological consciousness for a pet robot, based on artificial neurotransmitters and motivation, since cross-creature communication is significant for friendly companionship. The system focuses on three points: first, the organization of a behavior and emotion model with respect to phylogeny; second, a method by which the robot can have empathy with user expressions; and third, how the robot can socially perform its expressions to humans using a biologically inspired topological online method, for example showing encouragement or delight in response to its own emotion and the human's expression. Our experiments suggest that artificial consciousness based on complexity level, together with the robot's social expressions, enhances user affinity.

  • When people speak about consciousness, they distinguish various types and different levels, and they argue for different concepts of cognition. This complicates the discussion about artificial or machine consciousness. Here we take a bottom-up approach to this question by presenting a family of robot experiments that invite us to think about consciousness in the context of artificial agents. The experiments are based on a computational model of sensorimotor contingencies. It has been suggested that these regularities in the sensorimotor flow of an agent can explain raw feels and perceptual consciousness in biological agents. We discuss the validity of the model with respect to sensorimotor contingency theory and consider whether a robot that is controlled by knowledge of its sensorimotor contingencies could have any form of consciousness. We propose that consciousness does not require higher-order thought or higher-order representations. Rather, we argue that consciousness starts when (i) an agent actively (endogenously triggered) uses its knowledge of sensorimotor contingencies to issue predictions and (ii) when it deploys this capability to structure subsequent action.
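    The two conditions stated in this abstract can be illustrated with a toy sketch: an agent that (i) endogenously predicts the sensory outcome of each action from learned sensorimotor contingencies, and (ii) deploys those predictions to structure its next action. The names and the one-dimensional world below are illustrative assumptions, not the authors' model.

    ```python
    # Learned sensorimotor contingencies: action -> change in the 1-D sensor reading.
    contingencies = {"left": -1, "right": +1, "stay": 0}

    def predict(sensation, action):
        # (i) Endogenously issue a prediction from sensorimotor knowledge.
        return sensation + contingencies[action]

    def choose_action(sensation, goal):
        # (ii) Deploy predictions to structure subsequent action:
        # pick the action whose predicted sensation lies closest to the goal.
        return min(contingencies, key=lambda a: abs(predict(sensation, a) - goal))

    sensation, goal = 0, 3
    trajectory = []
    for _ in range(4):
        action = choose_action(sensation, goal)
        trajectory.append(action)
        # For simplicity, assume the prediction is veridical.
        sensation = predict(sensation, action)

    print(trajectory, sensation)
    ```

    The point of the sketch is only that prediction precedes and shapes action; nothing here implies raw feels, which is precisely the philosophical question the paper takes up.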

  • Reviewing recent closely related developments at the crossroads of biomedical engineering, artificial intelligence and biomimetic technology, in this paper, we attempt to distinguish phenomenological consciousness into three categories based on embodiment: one that is embodied by biological agents, another by artificial agents and a third that results from collective phenomena in complex dynamical systems. Though this distinction by itself is not new, such a classification is useful for understanding differences in design principles and technology necessary to engineer conscious machines. It also allows one to zero-in on minimal features of phenomenological consciousness in one domain and map on to their counterparts in another. For instance, awareness and metabolic arousal are used as clinical measures to assess levels of consciousness in patients in coma or in a vegetative state. We discuss analogous abstractions of these measures relevant to artificial systems and their manifestations. This is particularly relevant in the light of recent developments in deep learning and artificial life.

  • Companion or ‘pet’ robots can be expected to be an important part of a future in which robots contribute to our lives in many ways. An understanding of emotional interactions would be essential to such robots’ behavior. To improve the cognitive and behavior systems of such robots, we propose the use of an artificial topological consciousness that uses a synthetic neurotransmitter and motivation, including a biologically inspired emotion system. A fundamental aspect of a companion robot is a cross-communication system that enables natural interactions between humans and the robot. This paper focuses on three points in the development of our proposed framework: (1) the organization of the behavior, including inside-state emotion, with respect to the phylogenetic consciousness-based architecture; (2) a method whereby the robot can have empathy toward its human user’s expressions of emotion; and (3) a method that enables the robot to select a facial expression in response to the human user, providing instant human-like ‘emotion’ based on emotional intelligence (EI) that uses a biologically inspired topological online method to express, for example, encouragement or delight. We also demonstrate the performance of the artificial consciousness based on the complexity level and a robot’s social expressions that are designed to enhance the user’s affinity with the robot.

  • Whether artificial machines with natural intelligence would be safe is discussed on the basis of scientific, philosophical and theological arguments. The finite or infinite nature of the universe is discussed and its implications analyzed. The concepts of destiny and free will are considered, with implications for what it would mean to create an artificial consciousness and how it would be possible to give it, or deny it, free will. Computer experiments are carried out based on cellular automata and the results considered. A thorough discussion follows and a conclusion is reached.

  • The rapid development of non-industrial robots currently relies on artificial intelligence (AI) methods that improve robotic systems by having them imitate human thinking and behavior. Our work has therefore focused on studying and applying brain-inspired technology to develop the conscious behavior robot (Conbe-I). We previously created a hierarchical structure model called the Consciousness-Based Architecture (CBA) module, but it is limited in managing and selecting behavior, which depends only on increases and decreases in motivation levels. In this paper, we therefore introduce a dynamic behavior-selection model based on emotional states, developed with self-organizing map learning and a Markov model, in order to define the relationship between the behavioral-selection and emotional-expression models. We confirm the effectiveness of the proposed system with experimental results.

  • Traditional approaches model consciousness as the outcome either of internal computational processes or of cognitive structures. We advance an alternative hypothesis – consciousness is the hallmark of a fundamental way to organise causal interactions between an agent and its environment. Thus consciousness is not a special property or an addition to the cognitive processes, but rather the way in which the causal structure of the body of the agent is causally entangled with a world of physical causes. The advantage of this hypothesis is that it suggests how to exploit causal coupling to envisage tentative guidelines for designing conscious artificial agents. In this paper, we outline the key characteristics of these causal building blocks and then a set of standard technologies that may take advantage of such an approach. Consciousness is modelled as a kind of cognitive middle ground, and experience is not an internal by-product of cognitive processes but the external world that is carved out by means of causal interaction. Thus, consciousness is not the penthouse on top of a 50-story cognitive skyscraper, but the way in which the steel girders snap together from bottom to top.

  • In this work, we present a distributed cognitive architecture used to control the traffic in an urban network. This architecture relies on a machine consciousness approach – Global Workspace Theory – in order to use competition and broadcast, allowing a group of local traffic controllers to interact, resulting in a better group performance. The main idea is that the local controllers usually perform a purely reactive behavior, defining the times of red and green lights according only to local information. These local controllers compete in order to define which of them is experiencing the most critical traffic situation. The controller in the worst condition gains access to the global workspace and then broadcasts its condition (and its location) to all other controllers, asking for their help in dealing with its situation. This call from the controller accessing the global workspace interferes with the reactive local behavior of those controllers with some chance of helping the controller in the critical condition, by containing traffic in its direction. This group behavior, coordinated by the global workspace strategy, turns the once reactive behavior into a kind of deliberative one. We show that this strategy is capable of improving the overall mean travel time of vehicles flowing through the urban network. A consistent performance gain with the “Artificial Consciousness” traffic signal controller was observed over the whole simulation time and across different simulated scenarios, ranging from around 13.8% to more than 21%.
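    The competition-and-broadcast cycle this abstract describes can be sketched in a few lines. All names (`Controller`, `congestion`, `assist`) and the simplified "help" rule are illustrative assumptions, not the authors' implementation.

    ```python
    class Controller:
        """A reactive local traffic controller with a scalar criticality measure."""
        def __init__(self, name, congestion):
            self.name = name
            self.congestion = congestion  # local measure of how critical traffic is

        def assist(self, winner):
            # A helper "contains" traffic headed toward the winner's location;
            # modeled here simply as reducing its own outgoing load.
            self.congestion = max(0.0, self.congestion - 0.1)

    def global_workspace_cycle(controllers):
        # Competition: the controller in the worst condition wins workspace access.
        winner = max(controllers, key=lambda c: c.congestion)
        # Broadcast: all other controllers hear the winner's condition and may
        # interfere with their own reactive behavior to help it.
        for c in controllers:
            if c is not winner:
                c.assist(winner)
        return winner

    controllers = [Controller("A", 0.3), Controller("B", 0.9), Controller("C", 0.5)]
    winner = global_workspace_cycle(controllers)
    print(winner.name)
    ```

    In the paper's setting the cycle would repeat continuously, so workspace access shifts as traffic conditions change; the sketch shows a single competition/broadcast step.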

  • This paper presents the idea that using computers to simulate complex virtual environments can give rise to artificial consciousness within those environments. Currently, limitations in creating artificial consciousness may be imposed by material compounds that enable the transmission of signals through artificial systems such as robots. Virtual environments, on the other hand, provide the necessary tools for surpassing such limitations. I also argue that using virtual reality tools to enable complex interaction between humans and AI (artificial intelligence) within virtual environments is the most reasonable way to obtain artificial consciousness by appealing to the nature of human bias.

  • This is a proof of the strong AI hypothesis, i.e. that machines can be conscious. It is a phenomenological proof that pattern recognition and subjective consciousness are the same activity in different terms. Therefore, it proves that essential subjective processes of consciousness are computable, and identifies significant traits and requirements of a conscious system. Since Husserl, many philosophers have accepted that consciousness consists of memories of logical connections between an ego and external objects. These connections are called "intentions." Pattern recognition systems are achievable technical artifacts. The proof links this respected introspective philosophical theory of consciousness with technical art. The proof therefore endorses the strong AI hypothesis and may also enable a theoretically grounded form of artificial intelligence called a "synthetic intentionality" (SI), able to synthesize, generalize, select and repeat intentions. If the pattern recognition is reflexive, able to operate on the set of intentions, and flexible, with several methods of synthesizing intentions, an SI may be a particularly strong form of AI. Similarities and possible applications to several AI paradigms are discussed. The article then addresses some problems: the proof's limitations, reflexive cognition, Searle's Chinese room, and how an SI could "understand" "meanings" and "be creative."

  • We present the software implementation of our Qualia Modeling Framework (QMF), a computational cognitive model based on the dual-process theory, which theorizes that reasoning and decision making rely on integrated experiences from two interactive minds: the autonomous mind, without the agent’s conscious awareness, and the reflective mind, of which the agent is consciously aware. In the QMF, artificial qualia are the vocabulary of the conscious mind, required to reason over conceptual memory, and generate cognitive inferences. The autonomous mind employs pattern-matching, for fast reasoning over episodic memories. An ACT-R model with conventional declarative memory represents the autonomous mind. A second ACT-R model, with an unconventional implementation of declarative memory utilizing a hypernetwork theory based model of qualia space, represents the reflective mind. Using real-world, non-trivial, data sets, our cognitive model achieved classification accuracy comparable to, or greater than, analogous machine learning classifiers kNN and DT, while providing improvements in flexibility by allowing the Target Attribute to be identified or changed any time during training and testing. We advance the BICA challenge by providing a generalizable, efficient, algorithm which models the phenomenal structure of consciousness as proposed by a contemporary theory, and provides an effective decision aid in complex environments where data are too broad or diverse for a human to evaluate without computational assistance.

  • Although the thermal grill illusion has been the topic of previous research, many mysteries still remain regarding its psychological determinants, neurophysiological mechanisms and so on. Moreover, the illusion has not yet been simulated in information science and robotics. This study focuses on a very simple but interesting experiment called Hot and Cold Coils, which is known as a typical example of the thermal grill illusion. The authors aim to explain the thermal grill illusion by proposing a new and bold assumption called the conflict of concepts, and demonstrate how to construct a model by using an artificial consciousness module called the Module of Nerves for Advanced Dynamics (MoNAD). A simple experimental apparatus was prepared to prove the existence of the thermal grill illusion; it consists of a parallel arrangement of bars with an alternating pattern of cold and warmth at 20°C and 40°C. The authors conclude with the belief that many complex perceptions of humanity can be simulated through the use of neural networks, and that this can help us to deeply study the cognitive processes of human perception.

  • Human consciousness is a target of research in multiple fields of knowledge, which present it as an important characteristic for better handling complex and diverse situations. Artificial consciousness models have arisen, together with theories that attempt to model what we understand about consciousness in a way that would allow an artificial conscious being to be implemented. The main motivations for studying artificial consciousness relate to the creation of agents more similar to human beings, in order to build more efficient machines. This paper presents an experiment using the Global Workspace Theory and the LIDA Model to build a "conscious" mobile robot in a virtual environment, using the LIDA framework as an implementation of the LIDA Model. The main objective is to evaluate whether consciousness, as implemented by the LIDA framework, can simplify decision-making processes during navigation of a mobile robot that interacts with people, as part of the development of a cicerone robot.

Last update from database: 1/2/26, 2:00 AM (UTC)