
Full bibliography (675 resources)

  • This essay explores the relationship between the emergence of artificial intelligence (AI) and the problem of aligning its behavior with human values and goals. It argues that the traditional approach of attempting to control or program AI systems to conform to our expectations is insufficient, and proposes an alternative approach based on the ideas of Maturana and Lacan, which emphasize the importance of social relations, constructivism, and the unknowable nature of consciousness. The essay first introduces the concept of Uexküll's umwelt and von Glasersfeld's constructivism, and explains how these ideas inform Maturana's view of the construction of knowledge, intelligence, and consciousness. It then discusses Lacan's ideas about the role of symbolism in the formation of the self and the subjective experience of reality. The essay argues that the infeasibility of a hard-coded consciousness concept suggests that the search for a generalized AI consciousness is meaningless. Instead, we should focus on specific, easily conceptualized features of AI intelligence and agency. Moreover, the emergence of cognitive abilities in AI will likely be different from human cognition, and therefore require a different approach to aligning AI behavior with human values. The essay proposes an approach based on Maturana's and Lacan's ideas, which emphasizes building a solution together with emergent machine agents, rather than attempting to control or program them. It argues that this approach offers a way to solve the alignment problem by creating a collective, relational quest for a better future hybrid society where human and non-human agents live and build things side by side. In conclusion, the essay suggests that while our understanding of AI consciousness and intelligence may never be complete, this should not deter us from continuing to develop agential AI. Instead, we should embrace the unknown and work collaboratively with AI systems to create a better future for all.

  • This paper explores the cognitive implications of recent advancements in large language models (LLMs), with a specific focus on ChatGPT. We contribute to the ongoing debate about the cognitive significance of current LLMs by drawing an analogy to the Chinese Room Argument, a thought experiment that questions the genuine understanding of language in machines (computer programs). Our argument posits that current LLMs, including ChatGPT, generate text resembling human-like responses, akin to the process depicted in the Chinese Room Argument. In both cases, the responses are provided without a deep understanding of the language, thus lacking true signs of consciousness.

  • Creating machines that are conscious is a long-term objective of research in artificial intelligence. This paper looks at this idea with new arguments from physics and logic. Observers have no place in classical physics, and although they play a role in measurement in quantum physics, there is no explanation for their emergence within the framework. It has been suggested that consciousness, which is implicitly a property of the observer, is a consequence of the complexity of specific brain structures, but this is problematic because one associates free will with consciousness, which runs counter to the causal closure of physics. Considering a nested physical system, we argue that even if the system were assumed to have agency, observers cannot exist within it. Since complex systems can be viewed in nested hierarchies, this constitutes a proof against consciousness as a product of complexity, for then we would have nested systems of conscious agents. As the existence of consciousness in cognitive agents cannot be denied, the implication is that consciousness belongs to a dimension that is not physical and machine consciousness is unattainable. These ideas are used to take a fresh look at two well-known paradoxes of quantum theory that are important in quantum information theory.

  • It is widely agreed that possession of consciousness contributes to an entity’s moral status. Therefore, if we could identify consciousness in a machine, this would be a compelling argument for considering it to possess at least a degree of moral status. However, as Elisabeth Hildt explains, our third-person perspective on artificial intelligence means that determining whether a machine is conscious will be very difficult. In this commentary, I argue that this epistemological question cannot be conclusively answered, rendering artificial consciousness morally irrelevant in practice. I also argue that Hildt’s suggestion that we avoid developing morally relevant forms of machine consciousness is impractical. Instead, we should design artificial intelligences so they can communicate with us. We can use their behavior to assign them what I call an artificial moral status, where we treat them as if they had moral status equivalent to that of a living organism with similar behavior.

  • Can machines be conscious and what would be the ethical implications? This article gives an overview of current robotics approaches toward machine consciousness and considers factors that hamper an understanding of machine consciousness. After addressing the epistemological question of how we would know whether a machine is conscious and discussing potential advantages of future machine consciousness, it outlines the role of consciousness in ascribing moral status. As machine consciousness would most probably differ considerably from human consciousness, several complex questions must be addressed, including what forms of machine consciousness would be morally relevant forms of consciousness, and what the ethical implications of morally relevant forms of machine consciousness would be. While admittedly part of this reflection is speculative in nature, it clearly underlines the need for a detailed conceptual analysis of the concept of artificial consciousness and stresses the imperative to avoid building machines with morally relevant forms of consciousness. The article ends with some suggestions for potential future regulation of machine consciousness.

  • In this perspective article, we show that a morphospace, based on information-theoretic measures, can be a useful construct for comparing biological agents with artificial intelligence (AI) systems. The axes of this space label three kinds of complexity: (i) autonomic, (ii) computational and (iii) social complexity. On this space, we map biological agents such as bacteria, bees, C. elegans, primates and humans; as well as AI technologies such as deep neural networks, multi-agent bots, social robots, Siri and Watson. A complexity-based conceptualization provides a useful framework for identifying defining features and classes of conscious and intelligent systems. Starting with cognitive and clinical metrics of consciousness that assess awareness and wakefulness, we ask how AI and synthetically engineered life-forms would measure on homologous metrics. We argue that awareness and wakefulness stem from computational and autonomic complexity. Furthermore, tapping insights from cognitive robotics, we examine the functional role of consciousness in the context of evolutionary games. This points to a third kind of complexity for describing consciousness, namely, social complexity. Based on these metrics, our morphospace suggests the possibility of additional types of consciousness other than biological; namely, synthetic, group-based and simulated. This space provides a common conceptual framework for comparing traits and highlighting design principles of minds and machines.

  • The emergence of artificial intelligence (AI) has been transforming the way humans live, work, and interact with one another. From automation to personalized customer service, AI has had a profound impact on everyday life. At the same time, AI has become something of an ideology, lauded for its potential to revolutionize the future. Yet, as with any technology, there are risks and concerns associated with its use. For example, Blake Lemoine, a Google engineer, recently suggested the possibility of the AI chatbot LaMDA becoming sentient. GPT-3 is one of the most powerful language models open to public use, as it is capable of reasoning similarly to humans. Initial assessments of GPT-3 suggest that it may also possess some degree of consciousness. Among other things, this could be attributed to its ability to generate human-like responses to queries, which suggests that these are based on at least a basic level of understanding. To explore this further, in the current study both objective and self-assessment tests of cognitive (CI) and emotional intelligence (EI) were administered to GPT-3. Results reveal that GPT-3 was superior to average humans on CI tests that mainly require the use and demonstration of acquired knowledge. On the other hand, its logical reasoning and emotional intelligence capacities are equal to those of an average human examinee. Additionally, GPT-3’s self-assessments of CI and EI were similar to those typically found in humans, which could be understood as a demonstration of subjectivity and self-awareness–consciousness. Further discussion is provided to put these findings into a wider context. Because this study was performed on only one of the models from the GPT-3 family, a more thorough investigation would require the inclusion of multiple NLP models.

  • Various interpretations of the literature detailing the neural basis of learning have in part led to disagreements concerning how consciousness arises. Further, artificial learning model design has struggled to replicate intelligence as it occurs in the human brain. Here, we present a novel learning model, which we term the “Recommendation Architecture (RA) Model,” building on prior theoretical work by Coward and using a dual-learning approach featuring both consequence feedback and non-consequence feedback. The RA model is tested on a categorical learning task where no two inputs are the same throughout training and/or testing. We compare this to three consequence-feedback-only models based on backpropagation and reinforcement learning. Results indicate that the RA model learns novelty more efficiently and can accurately return to prior learning after new learning with less expenditure of computational resources. The final results of the study show that consequence feedback as interpretation, not creation, of cortical activity creates a learning style more similar to human learning in terms of resource efficiency. Stable information meanings underlie conscious experiences. The work provided here attempts to link the neural basis of nonconscious and conscious learning while providing early results for a learning protocol more similar to that of human brains than is currently available.

  • We explain that the concept of universal cognitive intelligence (𝒰𝒞ℐ) can be derived in part by generalization from the previously introduced (and axiomatized) theory of cognitive consciousness, and the framework, Λ, for measuring the degree of such consciousness in an agent at a given time. 𝒰𝒞ℐ (i) covers intelligence that is artificial or natural (or a hybrid thereof) in nature, and intelligence that is not merely Turing-level or less, but also beyond this level; (ii) reflects a psychometric orientation to AI; (iii) withstands a series of objections (including e.g. the opposing position of David Gamez on tests, intelligence, and consciousness, and the complaint that so-called “emotional intelligence” is beyond the reach of any logic-based framework, including thus 𝒰𝒞ℐ); and (iv) connects smoothly and symbiotically with important formal hierarchies (e.g., the Polynomial, Arithmetic, and Analytic Hierarchies), while at the same time yielding its own new all-encompassing hierarchy of logic machines: 𝔏𝔐. We end with an admission: 𝒰𝒞ℐ, by our lights, for reasons previously published, cannot take account of any form of intelligence that genuinely exploits phenomenal consciousness.

  • Will Artificial Intelligence soon surpass the capacities of the human mind, and will Strong Artificial General Intelligence replace the contemporary Weak AI? It might appear to be so, but there are certain fundamental issues that have to be addressed before this can happen. There can be no intelligence without understanding, and there can be no understanding without getting meanings. Contemporary computers manipulate symbols without meanings; meanings are not incorporated in the computations. This leads to the Symbol Grounding Problem: how could meanings be incorporated? The use of self-explanatory sensory information has been proposed as a possible solution. However, self-explanatory information can only be used in neural network machines that are different from existing digital computers and traditional multilayer neural networks. In humans, self-explanatory information takes the form of qualitative sensory experiences, qualia. To have reportable qualia is to be phenomenally conscious. This leads to the hypothesis of an unavoidable connection between the solution of the Symbol Grounding Problem and consciousness. If, in general, self-explanatory information equals qualia, then machines that utilize self-explanatory information would be conscious. The author presents the associative neural architecture HCA as a solution to these problems and the robot XCR-1 as its partial experimental verification.

  • Consciousness is a cognitive function that maintains its eternal character as long as it is strengthened and enriched by the optimal functioning of the corresponding neural networks of the brain, which are stimulated during its activation. The present study explores its relationship with the cognitive functions of the Theory of Mind and Metacognition and briefly outlines how they are approached through Artificial Intelligence. The bibliographic review draws on existing scientific knowledge and research data to support an effective analysis and study of the subject. The observations made throughout the research highlight the pivotal role of consciousness in the evolution of the aforementioned cognitive processes, as it lies at the core of their development. Essentially, the research seeks to emphasize the importance of consciousness in the functioning of the Theory of Mind and Metacognition, as it serves as the springboard for the perception and understanding of our existence, significantly influencing our further social and cognitive development.

  • Artificial intelligence continues to develop rapidly and provokes people to think about artificial consciousness. Anthropocentric understanding considers consciousness a unique feature of human beings not possessed by other living beings. However, software and hardware development have demonstrated the ability to process, analyze, and draw inferences from increasingly comprehensive data, approaching the performance of the human brain. Furthermore, the application of artificial intelligence to human-friendly objects that can communicate with humans suggests the presence of consciousness within these objects. This paper discusses the presence of artificial consciousness in humanoid robots as an evolutionary continuation of artificial intelligence and estimates its implications for architecture, primarily within interior design. Consciousness has a special place in architecture, as it guides intelligence in engineering and brings it to an abstract level, such as aesthetics. This paper draws on popular discussions from Internet conversations and theories in existing scientific journals. It concludes that the adaptability of both parties and the balance of positions between them will influence the future development of interior design approaches that integrate artificial intelligence and humans.

  • A new synergetic approach to consciousness modeling is proposed, which takes into account recent advancements in neuroscience, information technologies, and philosophy.

  • The question of self-aware artificial intelligence may turn on the question of the human self. To explore some of the possibilities in play we start from an assumption that the self is often pre-analytically and by default conceptually viewed along lines that have likely been based on or from the kind of Abrahamic faith notion as expressed by a “true essence” (although not necessarily a static one), such as is given in the often vaguely used “soul”. Yet, we contend that the self is separately definable, and in relatively narrow terms; if so, of what could the self be composed? We begin with a brief review of the descriptions of the soul as expressed by some sample scriptural references taken from these religious lineages, and then transition to attempt a self-concept in psychological and cognitive terms that necessarily differentiates and delimits it from the ambiguous word “soul”. From these efforts too will emerge the type of elements that are needed for a self to be present, allowing us to think of the self in an artificial intelligence (AI) context. If AI might have a self, could it be substantively close to a human’s? Would an “en-selved” AI be achievable? I will argue that there are reasons to think so, but that everything hinges on how we understand consciousness, and hence ruminating on that area—and the possibility or lack thereof in extension to non-organic devices—will comprise our summative consideration of the pertinent theoretical aspects. Finally, the practical will need to be briefly addressed, and for this, some of the questions that would have to be asked regarding what it might mean ethically to relate to AI if an “artificial self” could indeed arise will be raised but not answered. To think fairly about artificial intelligence without anthropomorphizing it we need to better understand our own selves and our own minds. This paper will attempt to analyze the self within these bounds.

  • This paper investigates the prospect of developing human-interpretable, explainable artificial intelligence (AI) systems based on active inference and the free energy principle. We first provide a brief overview of active inference, and in particular, of how it applies to the modeling of decision-making, introspection, as well as the generation of overt and covert actions. We then discuss how active inference can be leveraged to design explainable AI systems, namely, by allowing us to model core features of “introspective” processes and by generating useful, human-interpretable models of the processes involved in decision-making. We propose an architecture for explainable AI systems using active inference. This architecture foregrounds the role of an explicit hierarchical generative model, the operation of which enables the AI system to track and explain the factors that contribute to its own decisions, and whose structure is designed to be interpretable and auditable by human users. We outline how this architecture can integrate diverse sources of information to make informed decisions in an auditable manner, mimicking or reproducing aspects of human-like consciousness and introspection. Finally, we discuss the implications of our findings for future research in AI, and the potential ethical considerations of developing AI systems with (the appearance of) introspective capabilities.
