
Full bibliography (558 resources)

  • We approach the question "What is consciousness?" in a new way: not through Descartes' "systematic doubt", but through how organisms find their way in their world. Finding one's way involves finding possible uses of features of the world that might be beneficial and avoiding those that might be harmful. "Possible uses of X to accomplish Y" are "affordances". The number of uses of X is indefinite (or unknown); the different uses are unordered, not listable, and not deducible from one another. All biological adaptations are affordances seized either by heritable variation and selection or, far faster, by the organism acting in its world and finding uses of X to accomplish Y. On this basis, we reach rather astonishing conclusions: (1) Artificial general intelligence based on universal Turing machines (UTMs) is not possible, since UTMs cannot "find" novel affordances. (2) Brain-mind is not purely classical physics, for no classical-physics system can be an analogue computer whose dynamical behaviour is isomorphic to "possible uses". (3) Brain-mind must be partly quantum, supported by increasing evidence at 6.0 to 7.3 sigma. (4) Based on Heisenberg's interpretation of the quantum state as "potentia" converted to "actuals" by measurement, an interpretation that is not a substance dualism, a natural hypothesis is that mind actualizes potentia. This is supported at 5.2 sigma. Mind's actualizations of entangled brain-mind-world states are then experienced as qualia and allow "seeing" or "perceiving" of uses of X to accomplish Y. We can and do jury-rig; computers cannot. (5) Beyond familiar quantum computers, we discuss the potentialities of trans-Turing systems.

  • Consciousness is one of the unique features of creatures, and it is also the root of biological intelligence. Up to now, no machines or robots have had consciousness. Will artificial intelligence (AI) ever be conscious? Can robots have real intelligence without consciousness? The most primitive consciousness is the perception and expression of self-existence. In order to perceive the existence of the concept of 'I', a creature must first have a perceivable boundary, such as skin, to separate 'I' from 'non-I'. For robots to have self-awareness, they too would need to be wrapped in a similar sensory membrane. Today, as intelligent tools, AI systems should be regarded as external extensions of human intelligence. These tools are unconscious. The development of AI shows that intelligence can exist without consciousness. When human beings move from the era of AI into the era of living intelligence, it is not that AI becomes conscious, but that conscious living beings will wield strong AI. It therefore becomes all the more necessary to be careful about applying AI to living creatures, even to lower-level animals with only rudimentary consciousness. The disruptive potential of such applications calls for more careful thinking.

  • The question of whether artificial beings or machines could become self-aware or conscious has been a philosophical one for centuries. The main problem is that self-awareness cannot be observed from an outside perspective, and whether a system is really self-aware or merely a clever imitation cannot be decided without access to knowledge of the mechanism's inner workings. We investigate common machine learning approaches with respect to their potential to become self-aware. We find that many important algorithmic steps toward machines with a core consciousness have already been taken.

  • In this essay, I reflect on Susan Schneider's book on AI consciousness and the recent debate on the issue of lab-grown brain consciousness. Given the advances in computer science and bio-engineering, the twin problem of AI and lab-grown brain consciousness is here to stay.

  • This study explores an info-structural model of cognition for non-interacting agents affected by human sensation, perception, emotion, and affection. We do not enter the neuroscientific or psychological debate concerning the workings of the human mind; rather, we underline the importance of modeling the above cognitive levels when designing artificial intelligence agents. Our aim was to start a reflection on the computational reproduction of intelligence, providing a methodological approach through which the aforementioned human factors in autonomous systems are enhanced. The presented model should be understood as part of a larger one, which also includes concepts of attention, awareness, and consciousness. Experiments were performed by providing visual stimuli to the proposed model, coupling the emotion cognitive level with a supervised learner to produce artificial emotional activity (a minimal sketch of this coupling follows this entry). For this purpose, performances with Random Forest and XGBoost were compared and, with the latter algorithm, 85% accuracy and 92% coherency over predefined emotional episodes were achieved. The model was also tested on emotional episodes different from those used in training, and a decrease in accuracy and coherency was observed. Furthermore, by decreasing the weight given to the emotion cognitive instances, the model reaches the same performance recorded during the evaluation phase. In general, the framework achieves a first emotional generalization responsiveness of 94% and presents an approximately constant relative frequency of the agent's displayed emotions.
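
    A minimal sketch of the supervised-learner coupling described above, assuming scikit-learn and the xgboost package. The feature layout, episode labels, and data below are synthetic inventions for illustration; only the Random Forest versus XGBoost comparison mirrors the reported setup.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split
        from xgboost import XGBClassifier

        rng = np.random.default_rng(0)
        n_episodes, n_features, n_emotions = 1000, 16, 4

        # Synthetic "visual stimulus" features and emotional-episode labels
        # (hypothetical toy mapping from stimulus to emotion class).
        X = rng.normal(size=(n_episodes, n_features))
        y = X[:, :n_emotions].argmax(axis=1)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        for name, model in [("RandomForest", RandomForestClassifier(random_state=0)),
                            ("XGBoost", XGBClassifier(eval_metric="logloss"))]:
            model.fit(X_tr, y_tr)
            print(name, "accuracy:", accuracy_score(y_te, model.predict(X_te)))

    On real emotional episodes the two models' accuracies would differ as the abstract reports; with these random features both simply recover the toy label rule.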

  • This paper presents a means to analyze the multidimensionality of human consciousness as it interacts with the brain, utilizing Rough Set Theory and Riemannian covariance matrices (a toy illustration of the latter follows this entry). We mathematically define the infantile state of a robot's operating system running artificial consciousness, which runs in mutual exclusion with the operating system for its AI and locomotor functions.
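
    As a toy illustration of the Riemannian covariance machinery named above (not the paper's actual method), the sketch below computes the standard affine-invariant Riemannian distance between two sample covariance matrices; the signals are synthetic.

        import numpy as np
        from scipy.linalg import eigh

        def spd_distance(A, B):
            """Affine-invariant Riemannian distance between SPD matrices:
            d(A, B) = sqrt(sum_i log(lambda_i)^2), where lambda_i are the
            generalized eigenvalues of the pair (B, A)."""
            lam = eigh(B, A, eigvals_only=True)
            return np.sqrt(np.sum(np.log(lam) ** 2))

        rng = np.random.default_rng(0)
        X1 = rng.normal(size=(200, 5))        # toy multichannel signal, condition 1
        X2 = rng.normal(size=(200, 5)) * 1.5  # toy signal, condition 2
        C1, C2 = np.cov(X1.T), np.cov(X2.T)   # sample covariance matrices
        print("d(C1, C2) =", spd_distance(C1, C2))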

  • The experience of inner speech is a common one. Such a dialogue accompanies the introspection of mental life and fulfills essential roles in human behavior, such as self-restructuring, self-regulation, and the re-focusing of attentional resources. Although the underpinnings of inner speech are mostly investigated in psychology and philosophy, research in robotics generally does not address this form of self-aware behavior. Existing models of inner speech can inspire computational tools for providing a robot with this form of self-awareness. Here, the widespread psychological models of inner speech are reviewed, and a cognitive architecture for a robot implementing such a capability is outlined in a simplified setup.

  • Previous work [Chrisley & Sloman, 2016, 2017] has argued that a capacity for certain kinds of meta-knowledge is central to modeling consciousness, especially the recalcitrant aspects of qualia, in computational architectures. After a quick review of that work, this paper presents a novel objection to Frank Jackson’s Knowledge Argument (KA) against physicalism, an objection in which such meta-knowledge also plays a central role. It is first shown that the KA’s supposition of a person, Mary, who is physically omniscient, and yet who has not experienced seeing red, is logically inconsistent, due to the existence of epistemic blindspots for Mary. It is then shown that even if one makes the KA consistent by supposing a more limited physical omniscience for Mary, this revised argument is invalid. This demonstration is achieved via the construction of a physical fact (a recursive conditional epistemic blindspot) that Mary cannot know before she experiences seeing red for the first time, but which she can know afterward. After considering and refuting some counter-arguments, the paper closes with a discussion of the implications of this argument for machine consciousness, and vice versa.

  • Taking the stance that artificially conscious agents should be given human-like rights, in this paper we attempt to define consciousness, aggregate existing universal human rights, analyze robotic laws with roots in both reality and science fiction, and synthesize everything to create a new robot-ethical charter. By restricting the problem space of possible levels of conscious beings to human-like ones, we succeed in developing a working definition of consciousness for social strong AI, which focuses on human-like creativity exhibited as a third-person observable phenomenon. Creativity is then extrapolated to represent first-person functionality, fulfilling the first/third-person feature of consciousness. Next, several sources of existing rights and rules, both for humans and robots, are analyzed and, along with supplementary informal reports, synthesized to create articles for an additive charter which complements the United Nations' Universal Declaration of Human Rights. Finally, the charter is presented, and the paper concludes with the conditions for amending the charter, as well as recommendations for further charters.

  • This paper is focused on preliminary cognitive and consciousness test results from using an Independent Core Observer Model Cognitive Architecture (ICOM) in a Mediated Artificial Super Intelligence (mASI) system. These preliminary test results, including objective and subjective analyses, are designed to determine whether further research is warranted along these lines. The comparative analysis includes comparisons to humans and human groups measured for direct comparison. The overall study includes mediation client application optimization to help perform tests, AI context-based input (building context trees or graph data models), comparative intelligence testing (such as an IQ test), and other tests (i.e. Turing, Qualia, and Porter method tests) designed to look for early signs of consciousness, or the lack thereof, in the mASI system. Together, they are designed to determine whether this modified version of ICOM is (a) in fact a form of AGI and/or ASI, (b) conscious, and (c) at least sufficiently interesting that further research is called for. This study is not conclusive but offers evidence to justify further research along these lines.

  • The main challenge of technology is to facilitate tasks and to transfer functions usually performed by humans to non-humans. However, the pervasion of machines in everyday life requires that non-humans come ever closer in their abilities to the ordinary thought, action, and behaviour of humans. This view converges on the idea of the Humaniter, a longstanding myth in the history of technology: an artificial creature that thinks, acts, and feels like a human, to the point that one cannot tell the difference between the two. In the wake of the opposition between Strong AI and Weak AI, this challenge can be expressed as a shift from the performance of intelligence (reason, reasoning, cognition, judgment) to that of sentience (experience, sensation, emotion, consciousness). In other words, if this possible shift is taken seriously, the challenge of technology is to move from the paradigm of Artificial Intelligence (AI) to that of Artificial Sentience (AS). But for the Humaniter not to be regarded as a mere myth, any intelligent or sentient machine must pass a Test of Humanity that refers to, or differs from, the Turing Test. One can suggest several options for this kind of test, and also point out some conditions and limits to the very idea of the Humaniter as an artificial human.

  • In this paper, we propose the hypothesis that consciousness evolved to serve as a platform for general intelligence. This idea stems from considerations of the potential biological functions of consciousness. Here we define general intelligence as the ability to apply knowledge and models acquired from past experiences to generate solutions to novel problems. Based on this definition, we propose three possible ways to establish general intelligence under existing methodologies for constructing AI systems: solution by simulation, solution by combination, and solution by generation. We then relate these solutions to putative functions of consciousness put forward, respectively, by the information generation theory, the global workspace theory, and a form of higher-order theory in which qualia are regarded as meta-representations. Based on these insights, we propose that consciousness integrates a group of specialized generative/forward models, forming a complex in which combinations of those models are flexibly assembled, and that qualia are meta-representations of first-order mappings that endow an agent with the ability to choose which maps to use in solving novel problems. These functions could be implemented as an "artificial consciousness". Such systems could generate policies from a small number of trial-and-error attempts at solving novel problems. Finally, we propose possible directions for future research into artificial consciousness and artificial general intelligence.

  • This paper attempts to provide a starting point for future investigations into artificial consciousness by proposing a thought experiment that aims to elucidate, and provide a potential 'test' for, the phenomenon known as consciousness in an artificial system. It suggests a method by which to determine the presence of a conscious experience within an artificial agent, in a manner that is informed by, and understood as a function of, anthropomorphic conceptions of consciousness. The aim of this paper is to open the possibility of progress: to propose that we reverse-engineer anthropic sentience by using machine sentience as a guide, just as an equation may be solved through inverse operations. The idea is this: the manifestation of an existential crisis in an artificial agent is the metric by which the presence of sentience can be discerned. It is that which distinguishes ACI from AI, and from AGI.

  • What is the capacity of an informal network of organizations to produce answers to complex tasks requiring the integration of masses of information, conceived as a high-level cognitive and collective activity? Are some network configurations more favourable than others for accomplishing these tasks? We present a method for making these assessments, inspired by the Information Integration Theory arising from the modelling of consciousness. First, we evaluate the informational network created by the sharing of information between organizations for the realization of a given task. Then we assess the network's natural ability to integrate information, a capacity determined by the partition of its members whose information links are least efficient (a toy version of this measure follows this entry). We illustrate the method with an analysis of various functional integrations of Southeast Asian organizations, which create a spontaneous network participating in the study and management of interactions between health and environment. Several guidelines are then proposed for continuing the development of this fruitful analogy between artificial and organizational consciousness (while refraining from assuming that either one exists).
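
    The partition-based capacity gestured at above can be rendered as a toy minimum-cut computation over a weighted information-sharing graph. The organizations, links, and weights below are hypothetical, and this is only a minimum-cut flavour of Information Integration Theory's minimum information partition, not the paper's exact formula.

        from itertools import combinations

        # Undirected information-sharing links between organizations; weights
        # stand for the efficiency of each link (hypothetical values).
        links = {("health", "env"): 3.0, ("health", "lab"): 2.0,
                 ("env", "lab"): 1.0, ("lab", "field"): 0.5, ("env", "field"): 0.8}
        nodes = sorted({n for pair in links for n in pair})

        def cut_weight(part_a):
            """Total weight of links crossing the bipartition (part_a vs. rest)."""
            a = set(part_a)
            return sum(w for (u, v), w in links.items() if (u in a) != (v in a))

        # Integration capacity = weight across the weakest non-trivial bipartition.
        best = min((cut_weight(c), c)
                   for r in range(1, len(nodes) // 2 + 1)
                   for c in combinations(nodes, r))
        print("integration capacity:", best[0], "weakest partition:", best[1])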

  • A basic structure and behavior for a human-like AI system with conscious-like functions is proposed. The system is constructed entirely from artificial neural networks (ANNs), and an optimal-design approach is applied. The proposed system uses recurrent neural networks (RNNs) that learn under dynamic equilibrium, a redesign of the ANNs in the previous system. The redesign using RNNs makes the proposed brain-like autonomous adaptive system more plausible as a macroscopic model of the brain. By hypothesizing that the "conscious sensation" constituting the basis of phenomenal consciousness is the same as the "state of system-level learning", we can clearly explain consciousness from an information-system perspective. This hypothesis can also comprehensively explain the recurrent processing theory (RPT) and the global neuronal workspace theory (GNWT) of consciousness. The proposed structure and behavior are simple but scalable by design, and can be expanded to reproduce more complex features of the brain, leading to the realization of an AI system with functions equivalent to human-like consciousness.

  • In developing a humanoid robot, there are two major objectives. One is developing a physical robot with a body, hands, and feet resembling those of human beings, able to control them similarly. The other is developing a control system that works like our brain: one that feels, thinks, acts, and learns as we do. In this article, an architecture for a control system with a brain-oriented logical structure, serving the second objective, is proposed. The proposed system autonomously adapts to the environment and implements a clearly defined "consciousness" function, through which both habitual behavior and goal-directed behavior are realized. Consciousness is regarded as a function for effective adaptation at the system level, based on matching and organizing the individual results of the underlying parallel-processing units (a hedged sketch of this arbitration idea follows this entry). This consciousness is assumed to correspond to how our mind is "aware" when making moment-to-moment decisions in daily life. The binding problem and the basic causes of the delay in Libet's experiment are also explained by capturing awareness in this manner. A goal is set as an image in the system, and efficient actions toward achieving this goal are selected in the goal-directed behavior process. The system is designed as an artificial neural network and aims at consistent and efficient system behavior through the interaction of highly independent neural nodes. The proposed architecture is based on a two-level design. The first level, which we call the "basic system", is an artificial neural network that realizes consciousness and habitual behavior and explains the binding problem. The second level, which we call the "extended system", is an artificial neural network that realizes goal-directed behavior.
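
    A hedged sketch of the arbitration idea described above: "consciousness" as system-level matching and organizing of the results of independent parallel units. The unit names, confidences, and additive voting rule are invented for illustration and stand in for the paper's neural implementation.

        from collections import defaultdict

        def parallel_units(stimulus):
            """Each independent unit proposes an (action, confidence) pair."""
            return [("vision", ("reach", 0.5) if "cup" in stimulus else ("scan", 0.4)),
                    ("touch",  ("withdraw", 0.9) if "hot" in stimulus else ("reach", 0.3)),
                    ("habit",  ("reach", 0.3))]

        def conscious_arbitration(stimulus):
            """Match and organize unit results; the strongest coalition of
            agreeing units drives system-level behavior."""
            votes = defaultdict(float)
            for _unit, (action, confidence) in parallel_units(stimulus):
                votes[action] += confidence
            return max(votes.items(), key=lambda kv: kv[1])

        print(conscious_arbitration({"cup"}))         # habitual reaching wins
        print(conscious_arbitration({"cup", "hot"}))  # withdrawal overrides the habit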

  • How has multiplicity superseded impossibility in philosophical challenges to artificial consciousness? I assess a trajectory in recent debates on artificial consciousness, in which metaphysical and explanatory challenges to the possibility of building conscious machines lead to epistemological concerns about the multiplicity underlying ‘what it is like’ to be a conscious creature or be in a conscious state. First, I analyse earlier challenges which claim that phenomenal consciousness cannot arise, or cannot be built, in machines. These are based on Block’s Chinese Nation and Chalmers’ Hard Problem. To defuse such challenges, theorists of artificial consciousness can appeal to empirical methods and models of explanation. Second, I explain why this naturalistic approach produces an epistemological puzzle on the role of biological properties in phenomenal consciousness. Neither behavioural tests nor theoretical inferences seem to settle whether our machines are conscious. Third, I evaluate whether the new challenge can be managed through a more fine-grained taxonomy of conscious states. This strategy is supported by the development of similar taxonomies for biological species and animal consciousness. Although it makes sense of some current models of artificial consciousness, it raises questions about their subjective and moral significance.

  • In the past, computational models for machine consciousness have been proposed with varying degrees of implementation challenges. Affective computing focuses on the development of systems that can simulate, recognize, and process human affects, which refer to the experience of feeling or emotion. Affective attributes are important factors for the future of machine consciousness with the rise of technologies that can assist humans and build trustworthy relationships between humans and artificial systems. In this paper, an affective computational model for machine consciousness is presented, along with a system for managing its major features. Real-world scenarios are presented to further illustrate the functionality of the model and to provide a road-map for computational implementation.

  • The structure of consciousness has long been a cornerstone problem in the cognitive sciences. Recently, it has taken on applied significance in the design of computer agents and mobile robots. This problem can thus be examined from the perspectives of philosophy, neuropsychology, and computer modeling.

  • Social robots are becoming important to the future world of pervasive computing, where robots already facilitate our lives. Social behavior and natural action are among the most needed functions for emerging realistic human-like robots. Our paper proposes an artificial topological consciousness for a pet robot, based on artificial neurotransmitters and motivation, since cross-creature communication is significant for friendly companionship. The system focuses on three points: first, the organization of a behavior and emotion model grounded in phylogeny; second, a method by which the robot can empathize with the user's expression; and third, how the robot can socially perform its own expression to humans, using a biologically inspired topological on-line method, to offer encouragement or to be delighted by its own emotion and the human's expression. Our experiments suggest that artificial consciousness based on complexity levels, together with the robot's social expression, enhances user affinity.

Last update from database: 3/23/25, 8:36 AM (UTC)