
Full bibliography (675 resources)

  • Data assimilation is naturally conceived as the synchronization of two systems, “truth” and “model”, coupled through a limited exchange of information (observed data) in one direction. Though investigated most thoroughly in meteorology, the task of data assimilation arises in any situation where a predictive computational model is updated in run time by new observations of the target system, including the case where that model is a perceiving biological mind. In accordance with a view of a semi-autonomous mind evolving in synchrony with the material world, but not slaved to it, the goal is to prescribe a coupling between truth and model for maximal synchronization. It is shown that optimization leads to the usual algorithms for assimilation via Kalman Filtering under a weak linearity assumption. For nonlinear systems with model error and sampling error, the synchronization view gives a recipe for calculating covariance inflation factors that are usually introduced on an ad hoc basis. Consciousness can be framed as self-perception, and represented as a collection of models that assimilate data from one another and collectively synchronize. The combination of internal and external synchronization is examined in an array of models of spiking neurons, coupled to each other and to a stimulus, so as to segment a visual field. The inter-neuron coupling appears to enhance the overall synchronization of the model with reality.
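
    A minimal, hedged sketch of the "assimilation via Kalman filtering" step mentioned in this abstract, read as a one-way coupling that nudges a model toward observations of the "truth". This is not the paper's code; the dynamics F, observation operator H, and noise covariances Q and R below are illustrative assumptions (Python/NumPy).

        import numpy as np

        def kalman_step(x, P, y, F, H, Q, R):
            """One forecast/analysis cycle of a linear Kalman filter."""
            # Forecast: propagate the model state and its uncertainty.
            x_f = F @ x
            P_f = F @ P @ F.T + Q
            # Analysis: couple the model to the observed data via the gain K.
            S = H @ P_f @ H.T + R
            K = P_f @ H.T @ np.linalg.inv(S)
            x_a = x_f + K @ (y - H @ x_f)
            P_a = (np.eye(len(x)) - K @ H) @ P_f
            return x_a, P_a

        # Toy usage: a two-variable linear "truth", observed only in its first component.
        rng = np.random.default_rng(0)
        F = np.array([[1.0, 0.1], [0.0, 1.0]])
        H = np.array([[1.0, 0.0]])
        Q, R = 0.01 * np.eye(2), np.array([[0.1]])
        truth, x, P = np.array([1.0, 0.5]), np.zeros(2), np.eye(2)
        for _ in range(50):
            truth = F @ truth + rng.multivariate_normal(np.zeros(2), Q)
            y = H @ truth + rng.multivariate_normal(np.zeros(1), R)
            x, P = kalman_step(x, P, y, F, H, Q, R)
        print(truth, x)  # the estimate tracks (synchronizes with) the truth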

  • Although many models of consciousness have been proposed from various viewpoints, they have not been based on the learning activities of a whole system capable of autonomous adaptation. We have been investigating a simplified system built from artificial neural nodes to clarify the functions and configuration needed for learning in a system that autonomously adapts to its environment. We previously demonstrated that phenomenal consciousness can be explained using a method of "virtualization" in the information system, and that learning activities in whole-system adaptation are related to consciousness. However, we had not sufficiently clarified the learning activities of such a system. By investigating learning activities at the level of the whole system, consciousness is here modeled as a system-level learning activity that modifies both the system's configuration and its states during autonomous adaptation. The model not only explains the time delay in Libet's experiment, but is also positioned as an improved model of Global Workspace Theory (GWT).

  • In this paper, it will be argued that common sense knowledge does not have a unitary structure. Rather, it is articulated at two different levels: a deep and a superficial level of common sense. The deep level is based on know-how procedures, on metaphorical frames built on imaginative bodily representations, and on a set of adaptive behaviors. The superficial level includes beliefs and judgments; these can be true or false and are culture dependent. Deep common sense is not open to fast change, because it depends more on human biology than on cultural conventions. The deep level of common sense is characterized by a sensorimotor representational format, while the superficial level is largely made up of propositional entities. This difference can be considered a constraint for machine consciousness design, insofar as the latter should be based on a reliable model of common sense knowledge.

  • Efforts to create computational models of consciousness have accelerated over the last two decades, creating a field that has become known as artificial consciousness. There have been two main motivations for this controversial work: to develop a better scientific understanding of the nature of human/animal consciousness and to produce machines that genuinely exhibit conscious awareness. This review begins by briefly explaining some of the concepts and terminology used by investigators working on machine consciousness, and summarizes key neurobiological correlates of human consciousness that are particularly relevant to past computational studies. Models of consciousness developed over the last twenty years are then surveyed. These models are largely found to fall into five categories based on the fundamental issue that their developers have selected as being most central to consciousness: a global workspace, information integration, an internal self-model, higher-level representations, or attention mechanisms. For each of these five categories, an overview of past work is given, a representative example is presented in some detail to illustrate the approach, and comments are provided on the contributions and limitations of the methodology. Three conclusions are offered about the state of the field based on this review: (1) computational modeling has become an effective and accepted methodology for the scientific study of consciousness, (2) existing computational models have successfully captured a number of neurobiological, cognitive, and behavioral correlates of conscious information processing as machine simulations, and (3) no existing approach to artificial consciousness has presented a compelling demonstration of phenomenal machine consciousness, or even clear evidence that artificial phenomenal consciousness will eventually be possible. The paper concludes by discussing the importance of continuing work in this area, considering the ethical issues it raises, and making predictions concerning future developments.

  • Artificial intelligence, the "science and engineering of intelligent machines", still has yet to create even a simple "Advice Taker" [McCarthy, 1959]. We have previously argued [Waser, 2011] that this is because researchers are focused on problem-solving or the rigorous analysis of intelligence (or arguments about consciousness) rather than the creation of a "self" that can "learn" to be intelligent. Therefore, following expert advice on the nature of self [Llinas, 2001; Hofstadter, 2007; Damasio, 2010], we embarked upon an effort to design and implement a self-understanding, self-improving loop as the totality of a (seed) AI. As part of that, we decided to follow up on Richard Dawkins' [1976] speculation that "perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself" by defining a number of axioms and following them through to their logical conclusions. The results combined with an enactive approach yielded many surprising and useful implications for further understanding consciousness, self, and "free-will" that continue to pave the way towards the creation of safe/moral autopoiesis.

  • The proponents of machine consciousness predicate the mental life of a machine, if any, exclusively on its formal, organizational structure, rather than on its physical composition. Given that matter is organized on a range of levels in time and space, this generic stance must be further constrained by a principled choice of levels on which the posited structure is supposed to reside. Indeed, not only must the formal structure fit well the physical system that realizes it, but it must do so in a manner that is determined by the system itself, simply because the mental life of a machine cannot be up to an external observer. To illustrate just how tall this order is, we carefully analyze the scenario in which a digital computer simulates a network of neurons. We show that the formal correspondence between the two systems thereby established is at best partial, and, furthermore, that it is fundamentally incapable of realizing both some of the essential properties of actual neuronal systems and some of the fundamental properties of experience. Our analysis suggests that, if machine consciousness is at all possible, conscious experience can only be instantiated in a class of machines that are entirely different from digital computers, namely, time-continuous, open analog dynamical systems.

  • Instead of using low-level neurophysiology mimicking and exploratory programming methods commonly used in the machine consciousness field, the hierarchical operational architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual–theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain–mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made “machine” consciousness and “artificial” thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual–theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists to abstract and formalize principles of hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought.

  • I argue here that consciousness can be engineered. The claim that functional consciousness can be engineered has been persuasively put forth in regards to first-person functional consciousness; robots, for instance, can recognize colors, though there is still much debate about details of this sort of consciousness. Such consciousness has now become one of the meanings of the term phenomenal consciousness (e.g., as used by Franklin and Baars). Yet, we extend the argument beyond the tradition of behaviorist or functional reductive views on consciousness that still predominate within cognitive science. If Nagel-Chalmers-Block-style non-reductive naturalism about first-person consciousness (h-consciousness) holds true, then, eventually we should be able to understand how such consciousness operates and how it gets produced (this is not the same as bridging the explanatory gap or solving Chalmers’s hard problem of consciousness). If so, the consciousness it involves can in principle be engineered.

  • This paper argues that conscious attention exists not so much for selecting an immediate action as for focusing learning of the action-selection mechanisms and predictive models on tasks and environmental contingencies likely to affect the conscious agent. It is perfectly possible to build this sort of system into machine intelligence, but it is not strictly necessary unless the intelligence needs to learn and is resource-bounded with respect to the rate of learning vs. the rate of relevant environmental change. Support of this theory is drawn from scientific research and AI simulations, and a few consequences are suggested with respect to self consciousness and ethical obligations to and for AI.

  • The paper proposes a design approach as a blueprint for building a sentient artificial agent capable of exhibiting humanlike attributions of consciousness. It also considers whether, if such an artificial agent is ever built, it would be indistinguishable from a human being. It is evident that the evolution of artificial intelligence is guided by us, humans, whose own mental evolution has been shaped over the years by the (Darwinian) phenomenology of adaptation and survival. Yet the evolution of synthetic minds powered by artificial cognition appears to be quite fast. If we accept the analogy of 'mind' in its fullest sense for the artificial mind in robots, the day when the mental embodiment of consciousness in machines becomes reality may not be far off. But before such a feat becomes reality, debates have been taking shape over how to decode and decipher consciousness in machines, a phenomenon often regarded as a 'nonentity': what would be the true essence of such an artificial consciousness? This paper discusses these aspects and attempts to throw new light on the design and developmental aspects of artificial consciousness.

  • In recent years, a classic problem regarding the design of artificial minds embedded with synthetic consciousness has resurfaced in two forms: 1) building machines or robots that closely mimic human behavior, and 2) the embodiment of consciousness in the artificial forms of such entities. These two problems raise a further question about the standardization of design concepts: whether such entities should look like human beings in artificial flesh and skin, or instead be designed as entirely original architectures whose shapes imply their own forms of embodied cognition, standing as true peers of the human race. The first problem concerns the art and science of imitating human behavior, whereas the second concerns the abstraction and embodiment of mental attributions, primarily consciousness, in machines. The final dilemma is the consideration of standard design models that would reflect the nature of such embodied consciousness. In this endeavor, I discuss both the design approach to imitating human abilities in machines and the modeling of human consciousness in robots within a relational framework for orienting mental attributions, in a sense that would support the evolution of robot consciousness.

  • Synthetic phenomenology typically focuses on the analysis of simplified perceptual signals with small or reduced dimensionality. Instead, synthetic phenomenology should be analyzed in terms of perceptual signals with huge dimensionality. Effective phenomenal processes actually exploit the entire richness of the dynamic perceptual signals coming from the retina. The hypothesis of a high-dimensional buffer at the basis of the perception loop that generates the robot synthetic phenomenology is analyzed in terms of a cognitive architecture for robot vision the authors have developed over the years. Despite the obvious computational problems when dealing with high-dimensional vectors, spaces with increased dimensionality could be a boon when searching for global minima. A simplified setup based on static scene analysis and a more complex setup based on the CiceRobot robot are discussed.

  • The function of the brain is intricately woven into the fabric of time. Functions such as (i) storing and accessing past memories, (ii) dealing with immediate sensorimotor needs in the present, and (iii) projecting into the future for goal-directed behavior are good examples of how key brain processes are integrated into time. Moreover, it can even seem that the brain generates time (in the psychological sense, not in the physical sense) since, without the brain, a living organism cannot have the notion of past nor future. When combined with an evolutionary perspective, this seemingly straightforward idea that the brain enables the conceptualization of past and future can lead to deeper insights into the principles of brain function, including that of consciousness. In this paper, we systematically investigate, through simulated evolution of artificial neural networks, conditions for the emergence of past and future in simple neural architectures, and discuss the implications of our findings for consciousness and mind uploading.

  • Research is starting to identify correlations between consciousness and some of the spatiotemporal patterns in the physical brain. For theoretical and practical reasons, the results of experiments on the correlates of consciousness have ambiguous interpretations. At any point in time, a number of hypotheses about the correlates of consciousness in the brain co-exist, all of which are compatible with the current experimental results. This paper argues that consciousness should be attributed to any system that exhibits spatiotemporal physical patterns matching those hypotheses about the correlates of consciousness that are compatible with the current experimental results. Some computers running some programs should be attributed consciousness because they produce spatiotemporal patterns in the physical world that match those that are potentially linked with consciousness in the human brain.

  • Whole brain emulation aims to re-implement functions of a mind in another computational substrate by carefully emulating the function of fundamental components, and by copying the connectivity between those components. The precision with which this is done must enable prediction of the natural development of active states. To accomplish this, in vivo measurements at large scale and high resolution are critically important. We propose a set of requirements for these empirical measurements. We then outline general methods leading to acquisition of a structural and functional connectome, and to the characterization of responses at large scale and high resolution. Finally, we describe two new project developments that tackle the problem of functional recording in vivo, namely the "molecular ticker-tape" and the integrated-circuit "Cyborcell".

  • When trying to solve classification or time-series prediction problems with Artificial Neural Networks (ANNs), commonly applied structures such as feed-forward or recurrent Multi-Layer Perceptrons (MLPs) characteristically tend to yield poor performance and accuracy. This is especially the case when dealing with diverse datasets containing numerous input (predictor) and/or target attributes, independently of the applied learning methods, activation functions, biases, and so on. The cortical ANN, inspired by theoretical aspects of human consciousness and its signal processing, is an ANN structure developed during the research phase of the "System applying High Order Computational Intelligence" (SHOCID) project. Its structure creates redundancy and error tolerance, which helps to avoid the aforementioned problems. In this work, the cortical ANN is introduced, together with an algorithm for evolving this special ANN type's structure until the most suitable solution has been found, as sketched below.
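
    As a loose illustration of evolving an ANN's structure until a suitable solution is found, the sketch below runs a simple (mu + lambda)-style neuroevolution over a tiny feed-forward network on a toy regression task. It is an assumption-laden stand-in, not the SHOCID cortical ANN or its actual algorithm; the mutation rates, population size, and task are arbitrary choices (Python/NumPy).

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy regression task standing in for a dataset with several predictors.
        X = rng.uniform(-1, 1, size=(200, 3))
        y = np.sin(X @ np.array([1.0, -2.0, 0.5]))

        def make_net(hidden):
            return {"W1": rng.normal(0, 0.5, (3, hidden)), "b1": np.zeros(hidden),
                    "W2": rng.normal(0, 0.5, (hidden, 1)), "b2": np.zeros(1)}

        def predict(net, X):
            h = np.tanh(X @ net["W1"] + net["b1"])
            return h @ net["W2"] + net["b2"]

        def fitness(net):
            # Lower mean-squared error = fitter network.
            return float(np.mean((predict(net, X)[:, 0] - y) ** 2))

        def mutate(net):
            child = {k: v.copy() for k, v in net.items()}
            for k in child:                                   # weight mutation
                child[k] += rng.normal(0, 0.05, child[k].shape)
            if rng.random() < 0.2:                            # structural mutation: add a hidden unit
                child["W1"] = np.hstack([child["W1"], rng.normal(0, 0.5, (3, 1))])
                child["b1"] = np.append(child["b1"], 0.0)
                child["W2"] = np.vstack([child["W2"], rng.normal(0, 0.5, (1, 1))])
            return child

        # Evolution loop: mutate everyone, keep the best networks each generation.
        population = [make_net(2) for _ in range(20)]
        for generation in range(200):
            population += [mutate(p) for p in population]
            population = sorted(population, key=fitness)[:20]
        best = population[0]
        print("hidden units:", best["W1"].shape[1], "error:", fitness(best))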

  • Following arguments put forward in my book (Why red doesn’t sound like a bell: understanding the feel of consciousness. Oxford University Press, New York, USA, 2011), this article takes a pragmatic, scientist’s point of view about the concepts of consciousness and “feel”, pinning down what people generally mean when they talk about these concepts, and then investigating to what extent these capacities could be implemented in non-biological machines. Although the question of “feel”, or “phenomenal consciousness” as it is called by some philosophers, is generally considered to be the “hard” problem of consciousness, the article shows that by taking a “sensorimotor” approach, the difficulties can be overcome. What remains to account for are the notions of so-called “access consciousness” and the self. I claim that though they are undoubtedly very difficult, these are not logically impossible to implement in robots.

  • The potential for the near-future development of two technologies — artificial forms of intelligence, as well as the ability to "upload" human minds into artificial forms — raises several ethical questions regarding the proper treatment and understanding of these artificial minds. The crux of the dilemma is whether or not such creations should be accorded the same rights we currently grant humans, and this question seems to hinge upon whether they will exhibit their own "subjectivity", or internal viewpoints. Recognizing this as the essential factor yields some ethical guidance, but these issues need further exploration before such technologies become available.

  • The main motivation for this work is to investigate the advantages provided by machine consciousness in the control of software agents. In order to pursue this goal, we developed a cognitive architecture, with different levels of machine consciousness, targeting the control of artificial creatures. As a standard guideline, we applied cognitive neuroscience concepts to incrementally develop the cognitive architecture, following the evolutionary steps taken by the animal brain. The triune brain theory proposed by MacLean and Arrabales's "ConsScale" serve as roadmaps to achieve each developmental stage, while iCub (a humanoid robot and its simulator) serves as the platform for the experiments. A completely codelet-based system "Core" has been implemented, serving the whole architecture.
