Full bibliography (716 resources)
-
This paper argues that conscious attention exists not so much for selecting an immediate action as for focusing learning of the action-selection mechanisms and predictive models on tasks and environmental contingencies likely to affect the conscious agent. It is perfectly possible to build this sort of system into machine intelligence, but it is not strictly necessary unless the intelligence needs to learn and is resource-bounded with respect to the rate of learning vs. the rate of relevant environmental change. Support for this theory is drawn from scientific research and AI simulations, and a few consequences are suggested with respect to self-consciousness and ethical obligations to and for AI.
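A minimal sketch of the idea in Python (all names and parameters here are illustrative, not taken from the paper): learning effort is gated by an attention signal derived from prediction error, so a resource-bounded learner concentrates its updates on contingencies that change.

```python
import random

class AttentionGatedLearner:
    """Toy linear predictor whose learning rate is gated by attention:
    large surprises (prediction errors) receive most of the learning budget."""

    def __init__(self, n_features, base_lr=0.01, boost=10.0):
        self.w = [0.0] * n_features
        self.base_lr = base_lr
        self.boost = boost  # extra learning devoted to attended inputs

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def update(self, x, target):
        error = target - self.predict(x)
        attention = min(1.0, abs(error))          # attention tracks surprise
        lr = self.base_lr * (1.0 + self.boost * attention)
        self.w = [wi + lr * error * xi for wi, xi in zip(self.w, x)]
        return error

# Usage: the environment's contingencies change halfway through; learning
# is automatically refocused where prediction starts to fail.
learner = AttentionGatedLearner(n_features=3)
for step in range(1000):
    x = [random.uniform(-1, 1) for _ in range(3)]
    true_w = [1.0, -2.0, 0.5] if step < 500 else [-1.0, 2.0, 0.5]
    learner.update(x, sum(w * xi for w, xi in zip(true_w, x)))
```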
-
The paper proposes a design approach as a blueprint for building a sentient artificial agent capable of exhibiting humanlike attributions of consciousness. It also considers whether such an artificial agent, if ever built, could be indistinguishable from a human being. It is plainly evident that the evolution of artificial intelligence is guided by us humans, whose own mental evolution has been shaped over the years by the phenomenology of (Darwinian) adaptation and survival. Yet the evolution of synthetic minds powered by artificial cognition appears to be far faster. If we accept the analogy of a robot 'mind' in its fullest sense, the day when the mental embodiment of consciousness in machines becomes reality may not be far off. But before such a feat becomes reality, rhetorical debates have been taking shape: how are we to decode and decipher consciousness in machines, a phenomenon so often dismissed as a 'nonentity', and what would be the true essence of such an artificial consciousness? This paper discusses these aspects and attempts to throw new light on the design and developmental aspects of artificial consciousness.
-
In recent years, a classic problem regarding the design of artificial minds embedded with synthetic consciousness has resurfaced in two forms: 1) building machines or robots that closely mimic human behavior, and 2) embodying consciousness in such artificial entities. These two problems boil down to the consideration, and standardization, of another aspect, the design concept: whether such entities should look like human beings in artificial flesh and skin, or instead be designed entirely as original architectures with shape-implicit forms of embodied cognition that could stand as true peers of the human race. The first problem deals with the art and science of imitating human behavior, whereas the subsequent problem deals specifically with the predicament of abstracting and embodying mental attributes, primarily consciousness, in machines. The final dilemma is the consideration of standard design models that would likely reflect the nature of such embodied consciousness. In this endeavor, I discuss both the design approach to imitating human abilities in machines and the modeling of human consciousness in robots within a relational framework for the orientation of mental attributions, in a sense that would support the evolution of robot consciousness.
-
Synthetic phenomenology typically focuses on the analysis of simplified perceptual signals of small or reduced dimensionality. Instead, synthetic phenomenology should be analyzed in terms of perceptual signals of very high dimensionality. Effective phenomenal processes actually exploit the entire richness of the dynamic perceptual signals coming from the retina. The hypothesis of a high-dimensional buffer at the basis of the perception loop that generates the robot's synthetic phenomenology is analyzed in terms of a cognitive architecture for robot vision that the authors have developed over the years. Despite the obvious computational problems when dealing with high-dimensional vectors, spaces of increased dimensionality can be a boon when searching for global minima. A simplified setup based on static scene analysis and a more complex setup based on the CiceRobot robot are discussed.
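The computational point about dimensionality can be illustrated with a small sketch (the sizes and the cosine-similarity retrieval are assumptions, not the authors' architecture): in very high-dimensional spaces random vectors are nearly orthogonal, so a raw high-dimensional buffer can recover the globally best match even from a noisy percept.

```python
import numpy as np

DIM = 10_000       # dimensionality of the raw perceptual signal
BUFFER_SIZE = 200  # number of stored percepts

rng = np.random.default_rng(0)
buffer = rng.standard_normal((BUFFER_SIZE, DIM))
buffer /= np.linalg.norm(buffer, axis=1, keepdims=True)  # unit vectors

def recall(percept):
    """Index of the stored percept most similar (cosine) to the input."""
    percept = percept / np.linalg.norm(percept)
    return int(np.argmax(buffer @ percept))

# A noisy version of percept 42 (noise at half the signal's norm) is still
# recovered: random high-dimensional vectors are nearly orthogonal, so the
# true match stands out globally rather than being trapped among near-ties.
probe = buffer[42] + 0.5 * rng.standard_normal(DIM) / np.sqrt(DIM)
assert recall(probe) == 42
```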
-
The function of the brain is intricately woven into the fabric of time. Functions such as (i) storing and accessing past memories, (ii) dealing with immediate sensorimotor needs in the present, and (iii) projecting into the future for goal-directed behavior are good examples of how key brain processes are integrated into time. Moreover, it can even seem that the brain generates time (in the psychological sense, not in the physical sense) since, without the brain, a living organism cannot have a notion of past or future. When combined with an evolutionary perspective, this seemingly straightforward idea that the brain enables the conceptualization of past and future can lead to deeper insights into the principles of brain function, including that of consciousness. In this paper, we systematically investigate, through simulated evolution of artificial neural networks, the conditions for the emergence of past and future in simple neural architectures, and discuss the implications of our findings for consciousness and mind uploading.
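A hedged sketch of the kind of experiment described (the task and every parameter are invented for illustration): a mutation-only genetic algorithm evolves a one-unit recurrent network on a task whose fitness rewards reporting the previous input, so some representation of the 'past' must emerge in the recurrent weight.

```python
import random

def fitness(genome):
    """Score a (w_in, w_rec) genome on a memory task: the network's state
    at time t is rewarded for matching the input at time t-1."""
    w_in, w_rec = genome
    seq = [random.choice([-1.0, 1.0]) for _ in range(50)]
    state, score = 0.0, 0.0
    for t in range(1, len(seq)):
        state = max(-1.0, min(1.0, w_in * seq[t] + w_rec * state))
        score += state * seq[t - 1]  # needs memory of the previous input
    return score

# Simple mutation-only evolution over a small population of genomes.
pop = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(30)]
for generation in range(100):
    pop.sort(key=fitness, reverse=True)   # noisy but adequate selection
    parents = pop[:10]
    pop = [[g + random.gauss(0, 0.1) for g in random.choice(parents)]
           for _ in range(30)]

best = max(pop, key=fitness)
print(best)  # w_rec drifts away from 0: a minimal 'past' has evolved
```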
-
Research is starting to identify correlations between consciousness and some of the spatiotemporal patterns in the physical brain. For theoretical and practical reasons, the results of experiments on the correlates of consciousness have ambiguous interpretations. At any point in time a number of hypotheses about the correlates of consciousness in the brain co-exist, all of which are compatible with the current experimental results. This paper argues that consciousness should be attributed to any system that exhibits spatiotemporal physical patterns matching those hypotheses about the correlates of consciousness that are compatible with the current experimental results. Some computers running some programs should be attributed consciousness because they produce spatiotemporal patterns in the physical world that match those that are potentially linked with consciousness in the human brain.
-
Whole brain emulation aims to re-implement functions of a mind in another computational substrate by carefully emulating the function of fundamental components, and by copying the connectivity between those components. The precision with which this is done must enable prediction of the natural development of active states. To accomplish this, in vivo measurements at large scale and high resolution are critically important. We propose a set of requirements for these empirical measurements. We then outline general methods leading to acquisition of a structural and functional connectome, and to the characterization of responses at large scale and high resolution. Finally, we describe two new project developments that tackle the problem of functional recording in vivo, namely the "molecular ticker-tape" and the integrated-circuit "Cyborcell".
-
When applied to classification or time-series prediction problems, commonly used Artificial Neural Network (ANN) structures such as feed-forward or recurrent Multi-Layer Perceptrons (MLPs) characteristically deliver poor performance and accuracy. This is especially the case when dealing with complex datasets containing numerous input (predictor) and/or target attributes, independently of the applied learning methods, activation functions, biases, etc. The cortical ANN, inspired by theoretical aspects of human consciousness and its signal processing, is an ANN structure developed during the research phase of the "System applying High Order Computational Intelligence" (SHOCID) project. Its structure creates redundancy and error tolerance, which helps avoid the aforementioned problems. This paper introduces the cortical ANN, together with an algorithm for evolving this ANN type's structure until the most suitable solution has been found.
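Since SHOCID's cortical ANN is not specified here, the following is only a generic illustration of redundancy-based error tolerance: several sub-networks ('columns', an assumed term and sizing) vote on the output, so losing one column degrades the prediction gracefully instead of breaking it.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_column(n_in, n_hidden=8):
    """One redundant sub-network ('column') with random fixed weights."""
    return (rng.standard_normal((n_in, n_hidden)),
            rng.standard_normal((n_hidden, 1)))

def column_out(col, x):
    w1, w2 = col
    return np.tanh(np.tanh(x @ w1) @ w2)

columns = [make_column(n_in=4) for _ in range(5)]
x = rng.standard_normal((10, 4))

# Redundant prediction: the mean over all columns.
y = np.mean([column_out(c, x) for c in columns], axis=0)

# Error tolerance: knock out one column; the ensemble still answers,
# with only a small deviation from the intact prediction.
y_damaged = np.mean([column_out(c, x) for c in columns[1:]], axis=0)
print(float(np.mean(np.abs(y - y_damaged))))
```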
-
Following arguments put forward in my book (Why red doesn’t sound like a bell: understanding the feel of consciousness. Oxford University Press, New York, USA, 2011), this article takes a pragmatic, scientist’s point of view about the concepts of consciousness and “feel”, pinning down what people generally mean when they talk about these concepts, and then investigating to what extent these capacities could be implemented in non-biological machines. Although the question of “feel”, or “phenomenal consciousness” as it is called by some philosophers, is generally considered to be the “hard” problem of consciousness, the article shows that by taking a “sensorimotor” approach, the difficulties can be overcome. What remains to account for are the notions of so-called “access consciousness” and the self. I claim that though they are undoubtedly very difficult, these are not logically impossible to implement in robots.
-
The potential for the near-future development of two technologies — artificial forms of intelligence, as well as the ability to "upload" human minds into artificial forms — raises several ethical questions regarding the proper treatment and understanding of these artificial minds. The crux of the dilemma is whether or not such creations should be accorded the same rights we currently grant humans, and this question seems to hinge upon whether they will exhibit their own "subjectivity", or internal viewpoints. Recognizing this as the essential factor yields some ethical guidance, but these issues need further exploration before such technologies become available.
-
The main motivation for this work is to investigate the advantages provided by machine consciousness in the control of software agents. In order to pursue this goal, we developed a cognitive architecture, with different levels of machine consciousness, targeting the control of artificial creatures. As a standard guideline, we applied cognitive neuroscience concepts to incrementally develop the cognitive architecture, following the evolutionary steps taken by the animal brain. The triune brain theory proposed by MacLean, together with Arrabales's "ConsScale", serves as a roadmap for achieving each developmental stage, while iCub — a humanoid robot and its simulator — serves as a platform for the experiments. A completely codelet-based system "Core" has been implemented, serving the whole architecture.
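As a rough illustration of what 'codelet-based' means (the names and the scheduler below are hypothetical, not the authors' "Core"): codelets are small independent procedures that read and write a shared workspace, and a scheduler runs them stochastically in proportion to their activation.

```python
import random

random.seed(0)

# Shared workspace that codelets read from and write to.
workspace = {"stimulus": 0.9, "percept": None, "action": None}

def perceive(ws):
    ws["percept"] = "bright" if ws["stimulus"] > 0.5 else "dim"

def react(ws):
    if ws["percept"] == "bright":
        ws["action"] = "orient"

codelets = [(perceive, 1.0), (react, 0.5)]  # (procedure, activation)

# Scheduler: repeatedly pick one codelet, with probability proportional
# to its activation, and run it against the workspace.
for _ in range(10):
    total = sum(act for _, act in codelets)
    r, acc = random.uniform(0, total), 0.0
    for proc, act in codelets:
        acc += act
        if r <= acc:
            proc(workspace)
            break

print(workspace)  # e.g. {'stimulus': 0.9, 'percept': 'bright', 'action': 'orient'}
```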
-
The Terasem Mind Uploading Experiment is a multi-decade test comparing a single person's actual human consciousness, as assessed by expert psychological review of their digitized interactions, with that same person's purported human consciousness, as assessed by expert psychological interviews of personality software drawing upon a database composed of the original person's digitized interactions. The experiment is based upon a hypothesis that the paper analyzes for its conformance with scientific testability in accordance with the criteria set forth by Karl Popper. Strengths and weaknesses of both the hypothesis and the experiment are assessed in terms of other tests of digital consciousness, scientific rigor and good clinical practices. Recommendations for improvement include stronger parametrization of endpoint assessment and better attention to compliance with informed consent in the event that software-based consciousness emerges.
-
We answer the question raised by the title by developing a neural architecture for the attention control system in animals in a hierarchical manner, following what we conjecture is an evolutionary path. The resulting evolutionary model (based on CODAM at the highest level) and answer to the question allow us to consider both different forms of consciousness as well as how machine consciousness could itself possess a variety of forms.
-
In this paper, the notion of super-intelligence (or "AI++", as Chalmers has termed it) is considered in the context of machine consciousness (MC) research. Suppose AI++ were to come about, would real MC have then also arrived, "for free"? (I call this the "drop-out question".) Does the idea tempt you, as an MC investigator? What are the various positions that might be adopted on the issue of whether an AI++ would necessarily (or with strong likelihood) be a conscious AI++? Would a conscious super-intelligence also be a super-consciousness? (Indeed, what meaning might be attached to the notion of "super-consciousness"?) What ethical and social consequences might be drawn from the idea of conscious super-AIs or from that of artificial super-consciousness? And what implications does this issue have for technical progress on MC in a pre-AI++ world? These and other questions are considered.
-
In this article, we propose the Conscious-Emotional Learning Tutoring System technology, a biologically plausible cognitive agent based on human brain functions. This agent is capable of learning and remembering events and any related information, such as corresponding procedures, stimuli and their emotional valences. In our model, emotions play a role in the encoding and remembering of events. This allows the agent to improve its behaviour by recalling previously selected behaviours that were influenced by its emotional mechanism. Moreover, the architecture incorporates a realistic memory consolidation process based on a data mining algorithm.
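A toy sketch of valence-weighted consolidation (an assumed mechanism, not the system's actual data-mining algorithm): episodes are stored with an emotional valence, and consolidation preferentially retains the emotionally salient ones, which then steer later behaviour.

```python
episodes = []  # (event, chosen_behaviour, valence)

def record(event, behaviour, valence):
    episodes.append((event, behaviour, valence))

def consolidate(capacity=3):
    """Keep only the most emotionally salient episodes."""
    episodes.sort(key=lambda e: abs(e[2]), reverse=True)
    del episodes[capacity:]

def recall(event):
    """Return the remembered behaviour for this event, if any."""
    for ev, behaviour, _ in episodes:
        if ev == event:
            return behaviour
    return None

record("loud noise", "freeze", valence=-0.9)
record("food", "approach", valence=+0.8)
record("neutral hum", "ignore", valence=0.05)
consolidate(capacity=2)          # the low-valence episode is forgotten
print(recall("loud noise"))      # -> "freeze"
print(recall("neutral hum"))     # -> None
```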
-
Most philosophers of mind follow Thomas Nagel and hold that subjective experiences are characterised by the fact that there is “something it is like” to have them. Philosophers of mind have sometimes speculated that ordinary people endorse, perhaps implicitly, this conception of subjective experiences. Some recent findings cast doubt on this view.
-
We present a two-level model of concurrent communicating systems (CCS) to serve as a basis for machine consciousness. A language implementing threads within logic programming is first introduced. This high-level framework allows for the definition of abstract processes that can be executed on a virtual machine. We then look for a possible grounding of these processes in the brain. Towards this end, we map abstract definitions (including logical expressions representing compiled knowledge) into a variant of the π-calculus. We illustrate this approach through a series of examples extending from purely reactive behavior to patterns of consciousness.
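By way of analogy only (in Python threads rather than logic programming, with invented names): the idea of abstract processes communicating over named channels, in the spirit of CCS and the π-calculus, can be mocked up as two concurrent processes exchanging messages over a queue.

```python
import queue
import threading

channel = queue.Queue()  # a named channel carrying messages

def reactive_process():
    """Purely reactive behaviour: respond to whatever arrives on the channel."""
    while True:
        msg = channel.get()
        if msg is None:  # termination signal
            break
        print(f"react: {msg}")

def sender_process():
    for stimulus in ["light", "sound"]:
        channel.put(stimulus)
    channel.put(None)

t1 = threading.Thread(target=reactive_process)
t2 = threading.Thread(target=sender_process)
t1.start(); t2.start()
t1.join(); t2.join()
```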
-
Scientific behavior is used as a benchmark to examine the truth status of computationalism (COMP) as a law of nature. A COMP-based artificial scientist is examined from three simple perspectives to see if they shed light on the truth or falsehood of COMP through its ability, or otherwise, to deliver authentic original science on the a priori unknown as humans do. The first perspective (A) looks at the handling of ignorance and supports a claim that COMP is "trivially true" or "pragmatically false" in the sense that you can simulate a scientist if you already know everything, a state that renders the simulation possible but pointless. The second scenario (B) is more conclusive and unusual in that it reveals that the COMP scientist can never propose or debate that COMP is a law of nature. This marked difference between the human and the artificial scientist in this single, very specific circumstance means that COMP cannot be true as a general claim. The third scenario (C) examines the artificial scientist's ability to do science on itself/humans to uncover the "law of nature" which results in itself. This scenario reveals that a successful test for scientific behavior by a COMP-based artificial scientist supports a claim that COMP is true. Such a test is quite practical and can be applied to an artificial scientist based on any design principle, not merely COMP. Scenario (C) also reveals a practical example of the COMP scientist's inability to handle informal systems (in the form of liars), which further undermines COMP. Overall, the result is that COMP is false, with certainty in one very specific, critical place. This lends support to the claims (i) that artificial general intelligence will not succeed based on COMP principles, and (ii) that computationally enacted abstract models of human cognition will never create a mind.