
  • The rise of Artificial Intelligence (AI) has produced prophets and prophecies announcing that the age of artificial consciousness is near. The mere idea that any machine could ever possess the full potential of human consciousness not only suggests that AI could one day replace the role of God, it also calls into question the fundamental human right to freedom and dignity. Yet, in the light of all we currently know about brain evolution and the adaptive neural dynamics underlying human consciousness, the idea of an artificial consciousness appears misconceived. This article highlights some of the major reasons why the prophecy of a successful emulation of human consciousness by AI ignores most of the data on adaptive processes of learning and memory as the developmental origins of consciousness. The analysis leads to the conclusion that human consciousness is epigenetically determined as a unique property of the mind, shaped by experience, capable of representing real and non-real world states and of creatively projecting these representations into the future. The development of the adaptive brain circuitry that enables this expression of consciousness is highly context-dependent, shaped by multiple self-organizing functional interactions at different levels of integration that display a from-local-to-global functional organization. Human consciousness is subject to changes in time that are essentially unpredictable. If cracking the computational code to human consciousness were possible, the resulting algorithms would have to generate temporal activity patterns simulating long-distance signal reverberation in the brain and the de-correlation of spatial signal contents from their temporal signatures. In the light of scientific evidence for complex interactions between implicit (non-conscious) and explicit (conscious) representations in learning, memory, and the construction of conscious representations, such a code would have to be capable of making all implicit processing explicit. Algorithms would have to be capable of a progressively less arbitrary selection of temporal activity patterns in a continuously developing neural network structure that is functionally identical to that of the human brain, from synapses to higher cognitive functional integration. The code would have to possess the self-organizing capacities of the brain that generate the temporal signatures of a conscious experience. The consolidation or extinction of these temporal brain signatures is driven by external event probabilities according to the principles of Hebbian learning (see the sketch following these abstracts). Human consciousness is constantly fed by such learning and is capable of generating stable representations despite an incommensurable amount of variability in the input data, across time and across individuals, for a life-long integration of experience. Artificial consciousness would require probabilistic adaptive computations capable of emulating all the dynamics of individual human learning, memory, and experience. No AI is ever likely to have such potential.

  • This chapter critically questions the claim that human consciousness and consciousness-dependent activity could be emulated by Artificial Intelligence to create conscious artificial systems. The analysis is based on neurophysiological research and theory. In-depth scrutiny of the field, and of the prospects for converting neuroscience research into the type of algorithmic programs used in computer-based AI systems to create artificially conscious machines, leads to the conclusion that such a conversion is unlikely ever to be possible because of the complexity of unconscious and conscious brain processing and of their interaction. It is through the latter that the brain opens the doors to consciousness, a property of the human mind that no other living species has developed, for reasons made clear in this chapter. As a consequence, many of the projected goals of AI will remain forever unrealizable. Although this work does not directly examine the question within a philosophy-of-mind framework by, for example, discussing why identifying consciousness with the activity of electronic circuits is first and foremost a category mistake in terms of scientific reasoning, the approach offered in the chapter is complementary to that standpoint and illustrates various aspects of the problem under a monist from-brain-to-mind premise.
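
To make the Hebbian principle invoked in the first abstract more concrete, here is a minimal sketch under drastically simplified assumptions that are not taken from either work: a single connection weight is strengthened whenever an external event co-activates a presynaptic and a postsynaptic unit, and otherwise decays toward extinction. The function name, learning rate, decay constant, and event probabilities are all illustrative assumptions, not anything specified by the authors.

    import random

    def simulate(event_probability, steps=10_000, eta=0.05, decay=0.01, seed=0):
        """Return the final weight of one connection after Hebbian updates with passive decay.

        event_probability: chance per step that an external event co-activates both units.
        eta: Hebbian strengthening rate (illustrative value).
        decay: extinction rate applied when no co-activation occurs (illustrative value).
        """
        rng = random.Random(seed)
        w = 0.0
        for _ in range(steps):
            co_active = rng.random() < event_probability  # external event drives co-activation
            if co_active:
                w += eta * (1.0 - w)   # Hebbian consolidation, saturating at 1
            else:
                w -= decay * w         # extinction: unreinforced traces fade
        return w

    if __name__ == "__main__":
        for p in (0.05, 0.2, 0.5, 0.8):
            print(f"event probability {p:.2f} -> consolidated weight {simulate(p):.2f}")

Higher event probabilities drive the weight toward saturation while rarer events let it fade, which is one reading of consolidation or extinction being "driven by external event probabilities"; the sketch deliberately leaves out everything else the abstracts argue a genuine emulation would require, from temporal signatures to self-organizing developmental dynamics.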
