Full bibliography: 558 resources
-
Why would humanoid caring robots (HCRs) need consciousness? Because HCRs need to be gentle like human beings. They also need to be trusted by their patients and to share an understanding of patients' life experiences, their illnesses, and their treatments. HCRs need to express “competency as caring” so that patients and their families naturally experience their nursing as healing. HCRs should also have self-consciousness and express their emotions without needing to be prompted by others' behavior. Artificial “brains” and artificial consciousness are therefore necessary for HCRs. The purpose of this article was to explore humanoid consciousness and the possibilities of a technologically enhanced future with HCRs as participants in the care of human persons.
-
To keep robots from doing us harm, they may have to be bound by internal safeguards akin to Asimov's inviolable first law of robotics (do no harm to humans). The discussion of robot consciousness became genuinely pressing with the advent of so-called artificial brains (electronic computers) midway through the 20th century. Minds might be a kind of halo around certain machines, in the way dark matter is an invisible halo around visible galaxies. Where extreme ignorance persists, anything seems possible. But even in this case, robotic successors will have a real shot at understanding and confirming the truth. For though the authors' own biologically evolved mental models have been shackled by inheritance and by learned interactions with organism-level reality, the mathematical constructs the authors have devised, and which figure in the implementation of computer models, are significantly freer.
-
Understanding the nature of consciousness is one of the grand outstanding scientific challenges. The fundamental methodological problem is how phenomenal first-person experience can be accounted for in a third-person verifiable form, while the conceptual challenge is to define both its function and its physical realization. The distributed adaptive control theory of consciousness (DACtoc) proposes answers to these three challenges. The methodological challenge is answered relative to the hard problem: DACtoc proposes that it can be addressed through a convergent synthetic methodology based on the analysis of synthetic, biologically grounded agents, or quale parsing. DACtoc hypothesizes that consciousness, in both its primary and secondary forms, serves the ability to deal with the hidden states of the world and emerged during the Cambrian period, allowing stable multi-agent environments to emerge. The process of consciousness is an autonomous virtualization memory, which serializes and unifies the parallel and subconscious simulations of the hidden states of the world, states largely due to other agents and the self, with the objective of extracting norms. These norms are in turn projected as value onto the parallel simulation and control systems that drive action. This functional hypothesis is mapped onto the brainstem, midbrain, and the thalamo-cortical and cortico-cortical systems, and analysed with respect to our understanding of deficits of consciousness. Subsequently, some of the implications and predictions of DACtoc are outlined, in particular the prediction that the normative bootstrapping of conscious agents is predicated on an intentionality prior. In the view advanced here, human consciousness constitutes the ultimate evolutionary transition, allowing agents to become autonomous with respect to their evolutionary priors and leading to a post-biological Anthropocene. This article is part of the themed issue ‘The major synthetic evolutionary transitions’.
-
This is a proof of the strong AI hypothesis, i.e. that machines can be conscious. It is a phenomenological proof that pattern recognition and subjective consciousness are the same activity in different terms. It therefore shows that essential subjective processes of consciousness are computable, and it identifies significant traits and requirements of a conscious system. Since Husserl, many philosophers have accepted that consciousness consists of memories of logical connections between an ego and external objects. These connections are called "intentions." Pattern recognition systems are achievable technical artifacts. The proof links this respected introspective philosophical theory of consciousness with technical art. It thereby endorses the strong AI hypothesis and may also enable a theoretically grounded form of artificial intelligence called a "synthetic intentionality" (SI), able to synthesize, generalize, select, and repeat intentions. If the pattern recognition is reflexive, able to operate on the set of intentions, and flexible, with several methods of synthesizing intentions, an SI may be a particularly strong form of AI. Similarities and possible applications to several AI paradigms are discussed. The article then addresses some problems: the proof's limitations, reflexive cognition, Searle's Chinese room, and how an SI could "understand" "meanings" and "be creative."
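As a concrete reading of the abstract's vocabulary, the sketch below represents intentions as stored ego-object connections and shows the synthesize/generalize/select operations of an SI. The data structure and all names are illustrative assumptions, not the paper's construction.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Intention:
    """A remembered ego-object connection, per the Husserlian reading."""
    subject: str            # the ego
    relation: str           # how the ego is directed at the object
    obj: str                # the intended object

class SyntheticIntentionality:
    """Sketch of an SI: a store of intentions plus pattern recognition
    that can also operate reflexively on the store itself."""
    def __init__(self):
        self.memory: list[Intention] = []

    def synthesize(self, relation, obj):
        self.memory.append(Intention("ego", relation, obj))

    def generalize(self, relation):
        # Recognize a pattern across stored intentions: every object
        # the ego has related to in the same way.
        return {i.obj for i in self.memory if i.relation == relation}

    def select(self, predicate):
        # Reflexive step: pattern recognition applied to intentions
        # themselves rather than to external stimuli.
        return [i for i in self.memory if predicate(i)]

si = SyntheticIntentionality()
si.synthesize("sees", "red cube")
si.synthesize("sees", "blue cube")
print(si.generalize("sees"))   # {'red cube', 'blue cube'}
```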
-
We present the software implementation of our Qualia Modeling Framework (QMF), a computational cognitive model based on dual-process theory, which holds that reasoning and decision making rely on integrated experiences from two interactive minds: the autonomous mind, which operates without the agent's conscious awareness, and the reflective mind, of which the agent is consciously aware. In the QMF, artificial qualia are the vocabulary of the conscious mind, required to reason over conceptual memory and generate cognitive inferences. The autonomous mind employs pattern matching for fast reasoning over episodic memories. An ACT-R model with conventional declarative memory represents the autonomous mind. A second ACT-R model, with an unconventional implementation of declarative memory utilizing a hypernetwork-theory-based model of qualia space, represents the reflective mind. Using real-world, non-trivial data sets, our cognitive model achieved classification accuracy comparable to, or greater than, the analogous machine learning classifiers kNN and decision tree (DT), while providing improvements in flexibility by allowing the Target Attribute to be identified or changed at any time during training and testing. We advance the BICA challenge by providing a generalizable, efficient algorithm which models the phenomenal structure of consciousness as proposed by a contemporary theory, and provides an effective decision aid in complex environments where data are too broad or diverse for a human to evaluate without computational assistance.
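The episodic, flexible-target side of this design can be pictured with a minimal sketch: a memory of full attribute tuples supporting kNN-style pattern matching, with the Target Attribute chosen at query time rather than at training time. This is an assumption-laden stand-in; the paper's ACT-R models and hypernetwork qualia space are not reproduced here.

```python
class DualProcessSketch:
    """Toy stand-in for the autonomous mind: training just stores
    episodes, and classification pattern-matches the k nearest ones."""
    def __init__(self, k=3):
        self.k = k
        self.episodes = []                 # full attribute tuples (dicts)

    def observe(self, episode: dict):
        self.episodes.append(episode)      # training = storing experience

    def classify(self, probe: dict, target: str):
        # The target attribute is selected at query time.
        keys = [k for k in probe if k != target]
        def dist(ep):
            return sum((ep[k] - probe[k]) ** 2 for k in keys)
        nearest = sorted(self.episodes, key=dist)[: self.k]
        votes = {}
        for ep in nearest:
            votes[ep[target]] = votes.get(ep[target], 0) + 1
        return max(votes, key=votes.get)

model = DualProcessSketch(k=3)
model.observe({"size": 1.0, "weight": 2.0, "label": "A"})
model.observe({"size": 5.0, "weight": 6.0, "label": "B"})
model.observe({"size": 1.2, "weight": 2.2, "label": "A"})
print(model.classify({"size": 1.1, "weight": 2.1}, target="label"))  # "A"
```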
-
Although the thermal grill illusion has been the topic of previous research, many mysteries remain regarding its psychological determinants, neurophysiological mechanisms, and so on. Nor has the illusion yet been simulated in information science or robotics. This study focuses on a very simple but interesting experiment called Hot and Cold Coils, a typical example of the thermal grill illusion. The authors aim to explain the thermal grill illusion by proposing a new and bold assumption called the conflict of concepts, and demonstrate how to construct a model using an artificial consciousness module called the Module of Nerves for Advanced Dynamics (MoNAD). A simple experimental apparatus was prepared to demonstrate the existence of the thermal grill illusion; it consists of a parallel arrangement of bars alternating between cold (20°C) and warmth (40°C). The authors conclude with the belief that many complex human perceptions can be simulated with neural networks, and that this can help us to study the cognitive processes of human perception in depth.
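One way to picture the conflict-of-concepts assumption is a toy unit in which co-activation of mutually exclusive cold and warm concepts drives a pain-like output. The function below is a minimal sketch with illustrative parameters; it is not the MoNAD implementation.

```python
def thermal_percept(cold_input, warm_input, conflict_gain=1.5):
    """Toy conflict-of-concepts model: cold and warm concept units are
    mutually exclusive, so their simultaneous activation drives a third,
    pain-like unit. Parameter values are illustrative assumptions."""
    cold = max(0.0, cold_input)                  # cold-concept activation
    warm = max(0.0, warm_input)                  # warm-concept activation
    conflict = conflict_gain * min(cold, warm)   # co-activation measure
    return {"cold": cold, "warm": warm, "pain": conflict}

# 20°C bars drive the cold channel and 40°C bars the warm channel;
# interleaving them activates both at once, yielding the illusory burn.
print(thermal_percept(cold_input=0.8, warm_input=0.7))
```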
-
Auditory perception is an essential part of environment perception, in which saliency detection is not only a fundamental basis but also an efficient way of achieving the task. For artificial machines, an intelligent approach to sound perception is required to provide awareness as the first step toward artificial consciousness. In this paper, a novel salient environment sound detection framework for machine awareness is proposed. The framework is based on heterogeneous saliency features from both the image and acoustic channels. To improve the efficiency of the proposed framework: (1) a global informative saliency estimation approach is proposed, based on short-term Shannon entropy; (2) a series of auditory saliency detection methods is presented to obtain spectral and temporal saliency features from the power spectral density and mel-frequency cepstral coefficients, respectively; (3) a computational bio-inspired inhibition-of-return model is proposed for saliency verification, to improve detection accuracy; (4) a heterogeneous saliency feature fusion approach is introduced to form the final auditory saliency map by combining the acoustic and image saliency features. Environmental sounds collected from the real world are used to verify the superiority of the proposed framework. The results show that the proposed framework is more effective at detecting overlapped salient sounds and more robust to background noise than the conventional approach.
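Step (1), the short-term Shannon entropy estimate, can be sketched as frame-wise entropy of the normalized power spectrum. The framing parameters and the choice of the spectrum as the probability distribution are assumptions; the paper's exact estimator and thresholds are not given here.

```python
import numpy as np

def short_term_entropy(signal, frame_len=1024, hop=512):
    """Frame-wise Shannon entropy (bits) of the normalized power spectrum."""
    entropies = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        spectrum = np.abs(np.fft.rfft(frame)) ** 2
        p = spectrum / (spectrum.sum() + 1e-12)    # probability distribution
        entropies.append(-np.sum(p * np.log2(p + 1e-12)))
    return np.array(entropies)

# Frames whose entropy deviates strongly from the running background level
# would then be flagged as candidate salient events.
```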
-
Functionalism about robot pain claims that what is definitive of robot pain is its functional role, defined as the causal relations pain has to noxious stimuli, behavior, and other subjective states. Here, the author proposes that the only way to theorize role-functionalism about robot pain is in terms of type-identity theory. The author argues that what makes a state pain for a neuro-robot at a time is the functional role it has in the robot at that time, and that this state is type-identical to a specific circuit state. Support comes from an experimental study showing that if the neural network controlling a robot includes a specific 'emotion circuit', physical damage to the robot causes a disposition to avoid movement, thereby enhancing fitness compared to robots without the circuit. Thus, pain for a robot at a time is type-identical to a specific circuit state.
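The claimed mechanism can be caricatured in a few lines: a damage signal, routed through an 'emotion circuit', gates off the motor drive so that a damaged robot rests instead of aggravating the injury. The threshold and gating rule below are illustrative assumptions, not the study's evolved network.

```python
def motor_command(drive, damage_signal, has_emotion_circuit=True):
    """Sketch of the 'emotion circuit' claim: damage produces a pain-like
    state that suppresses movement. Names and the gating rule are
    illustrative assumptions."""
    if has_emotion_circuit and damage_signal > 0.5:
        return 0.0        # pain state: disposition to avoid movement
    return drive          # otherwise act on the motor drive

print(motor_command(drive=1.0, damage_signal=0.9))                             # 0.0
print(motor_command(drive=1.0, damage_signal=0.9, has_emotion_circuit=False))  # 1.0
```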
-
Artificial intelligence, the "science and engineering of intelligent machines", has yet to create even a simple "Advice Taker" [McCarthy, 1959]. We have previously argued [Waser, 2011] that this is because researchers are focused on problem-solving or on the rigorous analysis of intelligence (or on arguments about consciousness) rather than on the creation of a "self" that can "learn" to be intelligent. Therefore, following expert advice on the nature of self [Llinas, 2001; Hofstadter, 2007; Damasio, 2010], we embarked upon an effort to design and implement a self-understanding, self-improving loop as the totality of a (seed) AI. As part of that effort, we decided to follow up on Richard Dawkins' [1976] speculation that "perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself" by defining a number of axioms and following them through to their logical conclusions. The results, combined with an enactive approach, yielded many surprising and useful implications for further understanding consciousness, self, and "free will" that continue to pave the way towards the creation of safe/moral autopoiesis.
-
In this paper, the author describes a simple yet cognitively powerful architecture for an embodied conscious agent. The architecture incorporates a mechanism for mining, representing, processing, and exploiting semantic knowledge. This mechanism is based on two complementary internal world models which are built automatically. One model (based on artificial mirror neurons) mines and captures the syntax of the recognized part of the environment, while the second (based on neural nets) captures its semantics. Jointly, the models support algorithmic processes underlying phenomena similar in important respects to higher cognitive functions such as imitation learning and the development of communication, language, thinking, and consciousness.
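A minimal sketch of the two complementary models might look as follows, with a transition table standing in for the mirror-neuron syntax model and a lookup table standing in for the semantic neural nets. Both stand-ins are assumptions for illustration only, not the paper's architecture.

```python
class SyntacticModel:
    """Mirror-neuron-flavored stand-in: records which action sequences
    occur, i.e. the 'syntax' of the observed environment."""
    def __init__(self):
        self.transitions = {}
    def observe(self, action, next_action):
        self.transitions.setdefault(action, set()).add(next_action)
    def admissible(self, action, next_action):
        return next_action in self.transitions.get(action, set())

class SemanticModel:
    """Neural-net stand-in: associates actions with observed outcomes,
    i.e. their 'semantics'."""
    def __init__(self):
        self.meaning = {}
    def observe(self, action, outcome):
        self.meaning[action] = outcome
    def interpret(self, action):
        return self.meaning.get(action)

# Both models are built automatically from the same experience stream
# and queried jointly, e.g. during imitation learning.
syntax, semantics = SyntacticModel(), SemanticModel()
syntax.observe("reach", "grasp")
semantics.observe("grasp", "holding object")
print(syntax.admissible("reach", "grasp"), semantics.interpret("grasp"))
```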
-
The central paradigm of artificial intelligence is rapidly shifting toward biological models for both robotic devices and systems performing such critical tasks as network management, vehicle navigation, and process control. Here we use a recent mathematical analysis of the necessary conditions for consciousness in humans to explore likely failure modes inherent to a broad class of biologically inspired computing machines. Analogs to developmental psychopathology, in which regulatory mechanisms for consciousness fail progressively and subtly under stress, and to inattentional blindness, where a narrow 'syntactic band pass' defined by the rate distortion manifold of conscious attention results in pathological fixation, seem inevitable. Similar problems are likely to confront other possible architectures, although their mathematical description may be far less straightforward. Computing devices constructed on biological paradigms will inevitably lack the elaborate, but poorly understood, system of control mechanisms which has evolved over the last few hundred million years to stabilize consciousness in higher animals. This will make such machines prone to insidious degradation and, ultimately, catastrophic failure.
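For reference, the 'rate distortion manifold' builds on the standard rate distortion function of information theory, which bounds how faithfully a source X can be encoded at a given rate. This is the textbook definition, not the paper's derivation:

```latex
R(D) \;=\; \min_{p(\hat{x}\mid x)\,:\,\mathbb{E}[d(X,\hat{X})] \le D} I(X;\hat{X})
```

On this reading, the 'syntactic band pass' corresponds to the narrow range of input structure that conscious attention can transmit within such a rate limit.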
-
This paper discusses recent research on humanoid robots, together with thought experiments addressing the question of the degree to which such robots could be expected to develop human-like cognition if, rather than being preprogrammed, they were made to learn from interaction with their physical and social environment like human infants. A question of particular interest, from both a semiotic and a cognitive-scientific perspective, is whether or not such robots could develop an experiential Umwelt, i.e. could the sign processes they are involved in become intrinsically meaningful to the robots themselves? Arguments for and against the possibility of phenomenal artificial minds of different forms are discussed, and it is concluded that humanoid robotics still has to be considered “weak” rather than “strong” AI, i.e. it deals with models of mind rather than actual minds.
-
The paper presents and defends the mimetic hypothesis concerning the origin of self-consciousness in three different kinds of development: hominid evolution, the mind of the child, and the epigenesis of mind within an artificial autonomous system - a robot. The proposed crucial factor for the emergence of self-consciousness is the ability to map between one's own subjective body-image and those of others, supported by a partially innate 'mirror system'. Combined with social interaction, this gives rise to inter-subjectivity and starts a developmental cycle of: 1) increased objectification of one's body-image, 2) increased volitional control, 3) increased understanding of the intentionality of others, and 4) increased understanding of one's own intentionality. The hypothesis has far-reaching theoretical implications: self-consciousness and empathy are co-determined; language and tool use are not causes, but rather consequences, of increased self-consciousness; and most of the symptoms of autism can be accounted for as resulting from an impairment of the mirror system. The implications count against non-representational approaches to robotics and in favor of approaches based on imitation/mimesis.
-
This work is intended as a voice in the discussion over previous claims that a pretrained large language model (LLM) based on the Transformer architecture can be sentient. Such claims have been made concerning the LaMDA model and also concerning the current wave of LLM-powered chatbots, such as ChatGPT. This claim, if confirmed, would have serious ramifications in the Natural Language Processing (NLP) community due to the widespread use of similar models. However, here we take the position that such a large language model cannot be conscious, and that LaMDA in particular exhibits no advances over other similar models that would qualify it. We justify this by analysing the Transformer architecture through the Integrated Information Theory of consciousness. We see the claims of sentience as part of a wider tendency to use anthropomorphic language in NLP reporting. Regardless of the veracity of the claims, we consider this an opportune moment to take stock of progress in language modelling and consider the ethical implications of the task. In order to make this work helpful for readers outside the NLP community, we also present the necessary background in language modelling.
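To give readers outside NLP a feel for the IIT-style argument, the toy computation below contrasts whole-system predictive information with the sum of its parts' for a two-unit system. This proxy (related to synergy or stochastic interaction measures) is a simplification introduced here for illustration; it is not the full Φ measure used in the paper's analysis.

```python
import numpy as np
from itertools import product

def mutual_info(joint):
    """Mutual information (bits) of a joint probability table."""
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Two binary units that each copy the other's previous state ("swap").
# Joint state index: 2*a + b for unit values (a, b).
states = list(product([0, 1], repeat=2))
joint = np.zeros((4, 4))                       # p(S_t, S_t+1)
for a, b in states:
    joint[2 * a + b, 2 * b + a] = 0.25         # uniform inputs, next = (b, a)

whole = mutual_info(joint)                     # I(S_t; S_t+1) = 2.0 bits

parts = 0.0
for i in range(2):                             # per-unit I(x_i; y_i)
    m = np.zeros((2, 2))
    for a, b in states:
        for c, d in states:
            m[(a, b)[i], (c, d)[i]] += joint[2 * a + b, 2 * c + d]
    parts += mutual_info(m)                    # 0.0 bits for each unit here

print(whole - parts)                           # integration proxy: 2.0 bits
```

Here the whole system carries 2 bits about its own next state while each unit alone carries none, the kind of part-whole gap IIT-style analyses look for and that a feed-forward pass arguably lacks.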
-
Interest in the study of consciousness, both theoretical and applied, has been renewed following developments in 20th- and early 21st-century logic, metamathematics, computer science, and the brain sciences. In this evolving historical narrative, I explore several theoretical questions about the types of artificial intelligence and offer several conjectures about how they affect possible future developments in this exceptionally transformative field of research. I also address the practical significance of advances in artificial intelligence in view of the cautions issued by prominent scientists, politicians, and ethicists about the possibly historically unique dangers of sufficiently advanced general intelligence, including by implication the search for extraterrestrial intelligence. Integrating the theoretical and practical issues, I ask the following: (a) is sufficiently advanced general robotic intelligence identical to, or alternatively ambiguously indistinguishable from, human intelligence and human consciousness; and if so, (b) is such an earthly robotic intelligence a kind of consciousness indistinguishable from a presumptive extraterrestrial robotic consciousness; and if so, (c) could such a human-created robot preferably serve as a substitute for, or even entirely supplant, human intelligence and consciousness in certain exceptionally responsible roles? In the course of this investigation of artificial intelligence and consciousness, I also discuss the inter-relationships of these topics within the theory of mind more generally, including emergence, free will, and meaningfulness, and the implications of quantum theory for alternative cosmological ontologies that offer suggestive answers to these topics, including how they relate to Big History.