  • The hypothesis of conscious machines has been debated since the notion of artificial intelligence was first proposed, driven by the assumption that the computational intelligence achieved by a system causes phenomenal consciousness to emerge in that system, either as an epiphenomenon or as a consequence of the system's behavioral or internal complexity surpassing some threshold. As a consequence, a large body of literature exploring the possibility of machine consciousness and how to implement it on a computer has been published. Moreover, folk psychology and the transhumanist literature have fed this hypothesis, aided by the popularity of science fiction, where intelligent robots are usually anthropomorphized and hence granted phenomenal consciousness. In this work, however, we argue that this literature lacks scientific rigour, since the opposite hypothesis cannot be falsified, and we present a list of arguments showing that every approach published in the machine consciousness literature depends on philosophical assumptions that cannot be proven by the scientific method. Concretely, we also show that phenomenal consciousness is not computable, independently of the complexity of the algorithm or model, that it cannot be objectively measured or quantitatively defined, and that it is essentially a phenomenon that is subjective and internal to the observer. Given these arguments, we conclude by arguing why the idea of conscious machines is today a myth of transhumanism and science fiction culture.

  • This work studies the beneficial properties that an autonomous agent can obtain by implementing a cognitive architecture similar to that of conscious beings. Throughout this document, a conscious model of an autonomous agent based on a global workspace architecture is presented. We describe how this agent is viewed from different perspectives in the philosophy of mind, which inspire its design. The goal of the model is to create autonomous agents able to navigate an environment composed of multiple independent magnitudes, adapting to their surroundings in order to find the best possible position according to their inner preferences. The purpose of the model is to test the effectiveness of the cognitive mechanisms it incorporates, such as an attention mechanism for magnitude selection, possession of inner feelings and preferences, a memory system to store beliefs and past experiences, and a global workspace that controls and integrates the information processed by all the subsystems of the model. In a large set of experiments, we show how an autonomous agent can benefit from a cognitive architecture such as the one described.
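
The abstract above only outlines the architecture. Below is a minimal, hypothetical Python sketch of how a global-workspace-style agent loop of this general kind could be organized: an attention step selects the single most "uncomfortable" magnitude, the selected content is processed through a shared workspace into a feeling and a corrective action, and the episode is stored in memory. None of the names (GlobalWorkspaceAgent, Memory, attend, feel, act, the preferences dictionary) come from the paper; they are assumptions made purely for illustration, and the sketch omits the belief store and most of the mechanisms the model actually describes.

    # Illustrative sketch only, not the authors' implementation.
    # Attention picks one "magnitude" (sensor channel) per step, the selected
    # content is integrated by a simple workspace into a feeling and an action,
    # and the episode is stored in memory. All names are hypothetical.
    import random
    from dataclasses import dataclass, field


    @dataclass
    class Memory:
        episodes: list = field(default_factory=list)  # past (magnitude, value, action) triples

        def store(self, episode):
            self.episodes.append(episode)


    @dataclass
    class GlobalWorkspaceAgent:
        preferences: dict                             # desired value per magnitude, e.g. {"temperature": 0.5}
        memory: Memory = field(default_factory=Memory)

        def attend(self, observation: dict) -> str:
            # Attention: focus on the magnitude whose current value deviates
            # most from the agent's inner preference (largest "discomfort").
            return max(observation, key=lambda m: abs(observation[m] - self.preferences[m]))

        def feel(self, magnitude: str, value: float) -> float:
            # A scalar "feeling": more negative the further the value is from
            # the preferred one.
            return -abs(value - self.preferences[magnitude])

        def act(self, magnitude: str, value: float) -> str:
            # Workspace step: turn the attended content into a simple
            # corrective action and record the episode.
            action = "increase" if value < self.preferences[magnitude] else "decrease"
            self.memory.store((magnitude, value, action))
            return action


    if __name__ == "__main__":
        agent = GlobalWorkspaceAgent(preferences={"temperature": 0.5, "light": 0.8})
        observation = {m: random.random() for m in agent.preferences}
        focus = agent.attend(observation)
        print(focus, agent.feel(focus, observation[focus]), agent.act(focus, observation[focus]))

The choice of broadcasting only the single most deviant magnitude per step mirrors, in spirit, winner-take-all access to a global workspace; the model described in the abstract presumably integrates far more information than this toy loop does.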
