Full bibliography (675 resources)

  • This paper develops research on a pet robot and its artificial consciousness. We propose a model of animal behavior and emotion based on artificial neurotransmitters and motivation. The research also implements communication between a human and a pet robot with respect to social cognition and interaction; the development of such cross-creature communication is crucial for friendly companionship. The system focuses on three points: first, the organization of the behavior and emotion model with regard to phylogenesis; second, a method by which the robot can empathize with the user's expression; and third, how the robot can socially express encouragement or delight toward the human, based on its own emotion and the human's expression. The paper concludes by presenting the performance of, and experiments on, the robot's cross-perception and cross-expression in social interaction with humans, built on the consciousness-based architecture (CBA).
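
The abstract does not specify the neurotransmitter model, so the following is only a rough sketch of the general shape of such a mechanism: scalar "neurotransmitter" levels driven by perceived human expressions, which in turn select the robot's social expression. All names, constants, and dynamics are invented for illustration; they are not the CBA paper's actual model.

```python
# Toy neurotransmitter-driven emotion model for a pet robot (illustrative;
# the actual CBA model is not specified in the abstract above).
class PetEmotionModel:
    def __init__(self):
        self.dopamine = 0.5     # pleasure/reward level (assumed)
        self.cortisol = 0.1     # stress level (assumed)

    def perceive(self, user_expression):
        """Update internal chemistry from the perceived human expression."""
        if user_expression == "smile":
            self.dopamine = min(1.0, self.dopamine + 0.2)
        elif user_expression == "frown":
            self.cortisol = min(1.0, self.cortisol + 0.5)
        # homeostatic decay toward baseline
        self.dopamine = 0.5 + 0.9 * (self.dopamine - 0.5)
        self.cortisol *= 0.9

    def express(self):
        """Select a social expression from the current emotional state."""
        if self.cortisol > 0.5:
            return "whimper"                   # seek comfort
        return "wag_tail" if self.dopamine > 0.6 else "idle"

robot = PetEmotionModel()
for expr in ["smile", "smile", "frown"]:
    robot.perceive(expr)
    print(expr, "->", robot.express())
```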

  • This note reports on interdisciplinary approaches to modeling consciousness, aspects of self-awareness in particular, aiming at artificial consciousness that can be mounted on an autonomous mobile robot. For self-awareness to emerge, the self-identification process plays an important role. Self-awareness would emerge when a robot locates itself in a self-created map during navigation; when it solves self-related problems in (a self-related version of) the frame problem; and when a singularity arises in mapping the reference point in mathematical mappings.

  • This paper critically assesses the anti-functionalist stance on consciousness adopted by certain advocates of integrated information theory (IIT), a corollary of which is that human-level artificial intelligence implemented on conventional computing hardware is necessarily not conscious. The critique draws on variations of a well-known gradual neuronal replacement thought experiment, as well as bringing out tensions in IIT's treatment of self-knowledge. The aim, though, is neither to reject IIT outright nor to champion functionalism in particular. Rather, it is suggested that both ideas have something to offer a scientific understanding of consciousness, as long as they are not dressed up as solutions to illusory metaphysical problems. As for human-level AI, we must await its development before we can decide whether or not to ascribe consciousness to it.

  • It is suggested that some limitations of current designs for medical AI systems (be they autonomous or advisory) stem from the failure of those designs to address issues of artificial (or machine) consciousness. Consciousness would appear to play a key role in the expertise, particularly the moral expertise, of human medical agents, including, for example, the autonomous weighing of options in diagnosis; planning treatment; use of imaginative creativity to generate courses of action; sensorimotor flexibility and sensitivity; empathetic and morally appropriate responsiveness; and so on. Thus, it is argued, a plausible design constraint for a successful ethical machine medical or care agent is for it to at least model, if not reproduce, relevant aspects of consciousness and associated abilities. In order to provide theoretical grounding for such an enterprise, we examine some key philosophical issues that concern the machine modelling of consciousness and ethics, and we show how questions relating to the first research goal are relevant to medical machine ethics. We believe that this will overcome a blanket skepticism concerning the relevance of understanding consciousness to the design and construction of artificial ethical agents for medical or care contexts. It is thus argued that it would be prudent for designers of MME agents to reflect on issues to do with consciousness and medical (moral) expertise; to become more aware of relevant research in the field of machine consciousness; and to incorporate insights gained from these efforts into their designs.

  • Auditory perception is an essential part of environment perception, and saliency detection is both its fundamental basis and an efficient way of achieving it. For artificial machines, an intelligent approach to sound perception is required to provide awareness as the initial step toward artificial consciousness. In this paper, a novel salient environment sound detection framework for machine awareness is proposed, based on heterogeneous saliency features from both image and acoustic channels. To improve the efficiency of the framework, (1) a global informative saliency estimation approach based on short-term Shannon entropy is proposed; (2) a series of auditory saliency detection methods is presented to obtain spectral and temporal saliency features from the power spectral density and mel-frequency cepstral coefficients, respectively; (3) a computational bio-inspired inhibition-of-return model is proposed for saliency verification to improve detection accuracy; and (4) a heterogeneous saliency feature fusion approach combines the acoustic and image saliency features into the final auditory saliency map. Environmental sounds collected from the real world are used to verify the framework. The results show that the proposed framework detects overlapping salient sounds more effectively, and is more robust to background noise, than the conventional approach.
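
As a concrete illustration of step (1), here is a minimal sketch of short-term Shannon entropy as a global saliency cue: frames whose amplitude distribution deviates strongly from the signal's typical entropy are flagged as salient candidates. Frame length, hop size, bin count, and the deviation threshold `k` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def short_term_entropy(signal, frame_len=1024, hop=512, n_bins=32):
    """Per-frame Shannon entropy of the amplitude histogram (illustrative)."""
    entropies = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        hist, _ = np.histogram(np.abs(frame), bins=n_bins)
        p = hist / hist.sum()
        p = p[p > 0]                      # drop empty bins; 0*log2(0) := 0
        entropies.append(-np.sum(p * np.log2(p)))
    return np.asarray(entropies)

def salient_frames(signal, k=1.5):
    """Flag frames whose entropy deviates strongly from the signal's mean."""
    h = short_term_entropy(signal)
    return np.where(np.abs(h - h.mean()) > k * h.std())[0]

# Example: a tone burst embedded in noise stands out via its entropy profile.
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 0.1, 48000)
sig[16000:20000] += np.sin(2 * np.pi * 440 * np.arange(4000) / 16000)
print(salient_frames(sig))
```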

  • This paper presents the learning efficiency of a consciousness system for a robot using artificial neural networks. The proposed conscious system consists of a reason system, a feeling system, and an association system, each modeled using the Module of Nerves for Advanced Dynamics (ModNAD). A supervised artificial neural network trained with backpropagation is used to train the ModNAD. The reason system imitates behaviour and represents the condition of self and others. The feeling system represents sensation and emotion. The association system represents the behaviour of self and determines whether self is comfortable or not. A robot is asked to perform cognition and tasks using the consciousness system. Learning converges to an error of about 0.01 within about 900 iterations for the imitation, pain, solitude, and association modules, and within about 400 iterations for the comfort and discomfort modules. It can be concluded that learning in the ModNAD completes after a relatively small number of iterations because the learning efficiency of the ModNAD artificial neural network is good. The results also show that each ModNAD has a function to imitate and cognize emotion. The consciousness system presented in this paper may be considered a fundamental step toward developing a robot having consciousness and feelings similar to humans.
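
The internals of a ModNAD module are not given in the abstract, so the following is a generic sketch of the kind of supervised backpropagation training it reports: a small network trained until the mean squared error falls to about 0.01, the convergence level stated above. Layer sizes, learning rate, and data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "module": a one-hidden-layer network trained by backpropagation
# until mean squared error falls to ~0.01, as reported for ModNAD modules.
X = rng.uniform(-1, 1, (8, 4))                 # assumed input patterns
T = rng.integers(0, 2, (8, 2)).astype(float)   # assumed target labels

W1 = rng.normal(0, 0.5, (4, 6))
W2 = rng.normal(0, 0.5, (6, 2))
lr = 0.5

for step in range(10000):
    H = sigmoid(X @ W1)                 # hidden activations
    Y = sigmoid(H @ W2)                 # outputs
    err = np.mean((T - Y) ** 2)
    if err < 0.01:                      # convergence criterion from abstract
        print(f"converged at step {step}, error {err:.4f}")
        break
    dY = (Y - T) * Y * (1 - Y)          # output-layer delta
    dH = (dY @ W2.T) * H * (1 - H)      # hidden-layer delta
    W2 -= lr * H.T @ dY
    W1 -= lr * X.T @ dH
else:
    print(f"stopped without converging, error {err:.4f}")
```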

  • One model for creating artificial consciousness is to replicate every fine detail of the brain on computers and set the model in motion. Consciousness has been experimentally demonstrated to be a much more fragmented experience than we take it to be; perhaps we only need snippets of ourselves to feel conscious. Perhaps consciousness is nothing less and nothing more than story, and all we need do to continue to feel conscious is maintain identity through computer-based narrative. Applied nanotechnology has generated countless applications in electronics, pharmacology, and materials engineering. The best approach to life extension and consciousness expansion might lie in our own marvelously complex and entire bodies, meshed with and augmented by tiny bionano machines that become a part of us, rather than the opposite vision of humans migrating into a machine substrate.

  • The goal of this chapter is to present an overview of the work in AI on emotions and machine consciousness, with an eye toward answering these questions. Starting with a brief philosophical perspective on emotions and machine consciousness to frame the work, the chapter first focuses on artificial emotions, and then moves on to machine consciousness – reflecting the fact that emotions and consciousness have been treated independently and by different communities in AI. The chapter concludes by discussing philosophical implications of AI research on emotions and consciousness.

  • Machine consciousness is a young research field, yet it draws inspiration from some of the oldest intellectual disciplines, such as the philosophy of mind. Specifically, the mind–body problem has been approached since ancient times, and different accounts have been proposed over the centuries. While none of these accounts, such as the different forms of dualism, have been seen as useful working hypotheses in the domain of machine consciousness, their influence might have shaped the orientation of this research field towards a frantic search for an illusory and unachievable bridge across the explanatory gap. In his book, Consciousness and Robot Sentience, Haikonen seems to claim back the predominant position that engineering should have in a domain where we are supposed to deliver pragmatic solutions. In this regard, Haikonen is actually bridging the gap between the philosophical discourse and the practical engineering approach. This is a remarkable move, as Haikonen is essentially claiming that his cognitive architecture is proof of the nonexistence of such a thing as a mind–body problem. In this book review, I analyze the implications, limitations, and prospects of this engineering stance, looking at the main contributions and those aspects that might require further explanation.

  • One of the primary ethical issues related to creating robust artificial intelligences is how to engineer them to ensure that they will behave morally — i.e. that they will consider and treat us appropriately. A much less commonly discussed issue is what their moral status will be — i.e. how we ought to consider and treat them. In this chapter, John Basl takes up the issue of the moral status of artificial consciousnesses. He defends a capacity-based account of moral status, on which an entity’s moral status is determined by the capacities it has, rather than its origins or material composition. An implication of this is that if a machine intelligence has cognitive and psychological capacities like ours, then it would have comparable moral status to us. However, Basl argues that it is highly unlikely that machines will have capacities (and so interests) like ours, and that in fact it will be very difficult to know whether they are conscious and, if they are, what capacities and interests they have.

  • To develop “Artificial Consciousness” for a SELF requires investigation and understanding of what it means to be conscious. The textbook definition of consciousness is:

  • I propose a physicalist theory of consciousness that is an extension of the theory of noémona species. The proposed theory covers the full consciousness spectrum from animal to machine and its human consciousness base is compatible with the corresponding work of Wundt, James, and Freud. The paper is organized in three sections. In the first, I briefly justify the methodology used. In Sec. 2, I state the inadequacies of the major work on the nature of consciousness and present a definitional system that adequately describes its changing nature and scope. Finally in Sec. 3, I state some of the consequences of the theory and introduce some of its future extensions.

  • Consciousness is not only a philosophical but also a technological issue, since a conscious agent has evolutionary advantages. Thus, to replicate a biological level of intelligence in a machine, concepts of machine consciousness have to be considered. The widespread internalistic assumption that humans do not experience the world as it is, but through an internal '3D virtual reality model', hinders this construction. To overcome this obstacle for machine consciousness, a new theoretical approach to consciousness is sketched between internalism and externalism to address the gap between experience and the physical world. The 'internal interpreter concept' is replaced by a 'key-lock approach'. Here, consciousness is not an image of the external world but the world itself. A possible technological design for a conscious machine is drafted, taking advantage of an architecture exploiting self-development of new goals, intrinsic motivation, and situated cognition. The proposed cognitive architecture does not pretend to be conclusive or experimentally satisfying but rather forms the theoretical first step toward a full architecture model, on which the authors currently work, and which will enable conscious agents, e.g., for robotics or software applications.

  • The problem of consciousness is one of the most important problems in science as well as in philosophy. There are different philosophers and different scientists who define it and explain it differently. As far as our knowledge of consciousness is concerned, 'consciousness' does not admit of a definition in terms of genus and differentia or necessary and sufficient conditions. In this paper I shall explore the very idea of machine consciousness. Machine consciousness has offered a causal explanation of the 'how' and 'what' of consciousness, but it has failed to explain the 'why' of consciousness. Its explanation is based on the ground that consciousness is causally dependent on the material universe and that all consciousness phenomena can be explained by mapping the physical universe. Again, this mechanical/epistemological theory of consciousness is essentially committed to a scientific world view, which cannot avoid the metaphysical implications of consciousness.

  • Kevin O’Regan argues that seeing is a way of exploring the world, and that this approach helps us understand consciousness. O’Regan is interested in applying his ideas to the modeling of consciousness in robots. Hubert Dreyfus has raised a range of objections to traditional approaches to artificial intelligence, based on his reading of Heidegger. In light of this, I explore here ways in which O’Regan’s approach meets these Heideggerian considerations, and ways in which his account is more Heideggerian than that of Dreyfus. Despite these successes, O’Regan leaves out any role for emotion. This is an area where a Heideggerian perspective may offer useful insights into what more is needed for the sense of self O’Regan includes in his account in order for a robot to feel.

  • I compare a ‘realist’ with a ‘social–relational’ perspective on our judgments of the moral status of artificial agents (AAs). I develop a realist position according to which the moral status of a being—particularly in relation to moral patiency attribution—is closely bound up with that being’s ability to experience states of conscious satisfaction or suffering (CSS). For a realist, both moral status and experiential capacity are objective properties of agents. A social relationist denies the existence of any such objective properties in the case of either moral status or consciousness, suggesting that the determination of such properties rests solely upon social attribution or consensus. A wide variety of social interactions between us and various kinds of artificial agent will no doubt proliferate in future generations, and the social–relational view may well be right that the appearance of CSS features in such artificial beings will make moral role attribution socially prevalent in human–AA relations. But there is still the question of what actual CSS states a given AA is capable of undergoing, independently of the appearances. This is not just a matter of changes in the structure of social existence that seem inevitable as human–AA interaction becomes more prevalent. The social world is itself enabled and constrained by the physical world, and by the biological features of living social participants. Properties analogous to certain key features in biological CSS are what need to be present for nonbiological CSS. Working out the details of such features will be an objective scientific inquiry.

  • Functionalism of robot pain claims that what is definitive of robot pain is functional role, defined as the causal relations pain has to noxious stimuli, behavior and other subjective states. Here, the author proposes that the only way to theorize role-functionalism of robot pain is in terms of type-identity theory. The author argues that what makes a state pain for a neuro-robot at a time is the functional role it has in the robot at the time, and this state is type identical to a specific circuit state. Support from an experimental study shows that if the neural network that controls a robot includes a specific 'emotion circuit', physical damage to the robot will cause the disposition to avoid movement, thereby enhancing fitness, compared to robots without the circuit. Thus, pain for a robot at a time is type identical to a specific circuit state.
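
A minimal sketch of the kind of 'emotion circuit' the cited experiment describes: physical damage drives a persistent internal state that suppresses movement (an avoidance disposition), which can be compared against a circuit-free controller. All names, constants, and dynamics are illustrative assumptions, not the study's actual network.

```python
import random

class EmotionCircuitRobot:
    """Toy controller: damage drives a decaying 'pain' state that
    suppresses movement (an avoidance disposition). Illustrative only."""
    def __init__(self, has_circuit=True):
        self.has_circuit = has_circuit
        self.pain = 0.0
        self.health = 1.0

    def step(self, noxious):
        if noxious:
            self.pain = min(1.0, self.pain + 0.5)   # damage signal
        self.pain *= 0.9                            # state decays over time
        moving = (self.pain < 0.3) if self.has_circuit else True
        if moving and noxious:
            self.health -= 0.1      # moving while damaged aggravates damage
        return moving

def fitness(has_circuit, trials=1000, seed=0):
    """Crude fitness proxy: remaining health after noxious encounters."""
    rng = random.Random(seed)
    robot = EmotionCircuitRobot(has_circuit)
    for _ in range(trials):
        robot.step(noxious=rng.random() < 0.2)
    return robot.health

print("with circuit:   ", fitness(True))
print("without circuit:", fitness(False))
```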

  • The stated aim of adherents to the paradigm called biologically inspired cognitive architectures (BICA) is to build machines that address "the challenge of creating a real-life computational equivalent of the human mind" (from the mission statement of the new BICA journal). In contrast, practitioners of machine consciousness (MC) are driven by the observation that the human minds for which one is trying to find equivalents are generally thought to be conscious. (Of course, this is controversial because there is no evidence of consciousness in behavior. But as the hypothesis of the consciousness of others is commonly used, a rejection of it has to be considered just as much as its acceptance.) In this paper, it is asked whether those who would like to build computational equivalents of the human mind can do so while ignoring the role of consciousness in what is called the mind. This is not ignored in the MC paradigm, and the consequences, particularly for phenomenological treatments of the mind, are briefly explored. A measure based on a subjective feel for how well a model matches personal experience is introduced. An example is given which illustrates how MC can clarify the double-cognition tenet of Strawson's cognitive phenomenology.

  • An artificial neural network called reaCog is described, based on a decentralized, reactive, and embodied architecture developed to control non-trivial hexapod walking in an unpredictable environment (Walknet) while using insect-like navigation (Navinet). In reaCog, these basic networks are extended so that the complete system acquires the capability of inventing new behaviors and, via internal simulation, of planning ahead. This cognitive expansion enables the reactive system to be enriched with additional procedures. Here, we focus on the question of the extent to which properties of phenomena characterized on a different level of description, such as consciousness, can be found in this minimally cognitive system. Adopting a monist view, we argue that the phenomenal aspect of mental phenomena can be neglected when discussing the function of such a system. Under this condition, reaCog can be discussed as being equipped with properties such as bottom-up and top-down attention, intentions, volition, and some aspects of Access Consciousness. These properties have not been explicitly implemented but emerge from the cooperation between the elements of the network. The aspects of Access Consciousness found in reaCog concern the above-mentioned ability to plan ahead and to invent and guide (new) actions. Furthermore, global accessibility of memory elements, another aspect characterizing Access Consciousness, is realized by this network. reaCog allows for both reactive/automatic control and (access-)conscious control of behavior. We discuss examples of interactions between the reactive domain and the conscious domain. Metacognition or Reflexive Consciousness is not a property of reaCog. Possible expansions are discussed to allow for further properties of Access Consciousness, verbal report on internal states, and Metacognition. In summary, we argue that even simple networks allow for properties of consciousness if the phenomenal aspect is left aside.
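
The central mechanism of this abstract, a reactive controller that plans ahead by internally simulating candidate behaviors before executing one, can be caricatured as follows. The world model, behaviors, and problem detector are invented placeholders, not reaCog's actual Walknet/Navinet machinery.

```python
# Toy "plan by internal simulation": when the reactive layer gets stuck,
# candidate behaviors are tried on an internal model before execution.
# All behaviors and model details are illustrative placeholders.

def simulate(world_model, behavior):
    """Run a behavior on a copy of the model; return predicted success."""
    return behavior(dict(world_model))  # copy: simulation must not act

def reactive_step(world):
    if world["obstacle"]:
        return None                     # reactive layer is stuck
    world["position"] += 1
    return "walk"

def step_over(world):
    if world["obstacle_height"] < 2:
        world["obstacle"] = False
        return True
    return False

def detour(world):
    world["obstacle"] = False
    return True

def act(world, repertoire):
    action = reactive_step(world)
    if action is not None:
        return action
    # Problem detected: internally simulate candidates before committing.
    for behavior in repertoire:
        if simulate(world, behavior):
            behavior(world)             # now execute for real
            return behavior.__name__
    return "give_up"

world = {"position": 0, "obstacle": True, "obstacle_height": 3}
print(act(world, [step_over, detour]))  # step_over fails in simulation -> detour
```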

Last update from database: 12/31/25, 2:00 AM (UTC)