
Full bibliography (716 resources)

  • The purpose of this article is to shed light on aspects of human consciousness and artificial intelligence (AI). Consciousness is a unique capacity that humans possess, one that makes them the superior beings on Earth. Human consciousness and artificial intelligence are two complex entities, which makes it all the more complicated for AI to develop consciousness. This article introduces topics such as cognitive abilities and ethics. Ethics is often confused with morality, but morality is only a part of ethics: ethics is a theoretical concept built on perception and on what society considers right or wrong. The article also examines whether artificial intelligence can develop consciousness as complex as human consciousness, whether it can have cognitive abilities, and whether it can have ethics.

  • Consciousness concerns internal and external existence as it is felt and reflected through actions. Self-existence is the feeling of one's own existence and is also associated with freedom [1, 2]. It is the feeling and awareness that the individual exists and can perform different activities at will. Such activities include choosing, attaining certain objectives, and fighting for survival.

  • The intersection between neuroscience and artificial intelligence (AI) research has created synergistic effects in both fields. While neuroscientific discoveries have inspired the development of AI architectures, new ideas and algorithms from AI research have produced new ways to study brain mechanisms. A well-known example is reinforcement learning (RL), which has stimulated neuroscience research on how animals learn to adjust their behavior to maximize reward. In this review article, we cover recent collaborative work between the two fields in the context of meta-learning and its extension to social cognition and consciousness. Meta-learning refers to the ability to learn how to learn, such as learning to adjust the hyperparameters of existing learning algorithms and learning to use existing models and knowledge to efficiently solve new tasks. This capability is important for making existing AI systems more adaptive and flexible. Since this is one of the areas where there is a gap between human performance and current AI systems, successful collaboration should produce new ideas and progress. Starting from the role of RL algorithms in driving neuroscience, we discuss recent developments in deep RL applied to modeling prefrontal cortex functions. From a broader perspective, we discuss the similarities and differences between social cognition and meta-learning, and we conclude with speculations on the potential links between intelligence as endowed by model-based RL and consciousness. For future work, we highlight data efficiency, autonomy, and intrinsic motivation as key research areas for advancing both fields.
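    The notion of meta-learning as hyperparameter adaptation described in this abstract can be illustrated with a minimal, hypothetical sketch; the task distribution, candidate step sizes, and error criterion below are invented for illustration and are not taken from the reviewed paper:

    ```python
    # Hypothetical sketch: "learning to learn" as selecting the step size
    # (a hyperparameter) that works best across a distribution of tasks.

    def run_inner(rewards, alpha):
        """Inner loop: track rewards with an exponential average using
        step size alpha; return the accumulated squared prediction error."""
        v, err = 0.0, 0.0
        for r in rewards:
            err += (r - v) ** 2
            v += alpha * (r - v)
        return err

    def meta_learn(tasks, candidate_alphas):
        """Outer (meta) loop: pick the step size that minimizes total
        prediction error summed over the whole task distribution."""
        return min(candidate_alphas,
                   key=lambda a: sum(run_inner(t, a) for t in tasks))

    # Two toy tasks: a stationary reward stream and a switching one.
    tasks = [[1.0] * 20, [0.0] * 5 + [1.0] * 15]
    best = meta_learn(tasks, [0.01, 0.1, 0.5, 0.9])
    ```

    The point of the sketch is only the two-level structure: the inner loop is an ordinary learning algorithm, while the outer loop adapts that algorithm's hyperparameter based on performance across tasks.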

  • Having a rich multimodal inner language is an important component of human intelligence that enables several necessary core cognitive functions such as multimodal prediction, translation, and generation. Building upon the Conscious Turing Machine (CTM), a machine model for consciousness proposed by Blum and Blum (2021), we describe the desiderata of a multimodal language called Brainish, comprising words, images, audio, and sensations combined in representations that the CTM's processors use to communicate with each other. We define the syntax and semantics of Brainish before operationalizing this language through the lens of multimodal artificial intelligence, a vibrant research area studying the computational tools necessary for processing and relating information from heterogeneous signals. Our general framework for learning Brainish involves designing (1) unimodal encoders to segment and represent unimodal data, (2) a coordinated representation space that relates and composes unimodal features to derive holistic meaning across multimodal inputs, and (3) decoders to map multimodal representations into predictions (for fusion) or raw data (for translation or generation). Through discussing how Brainish is crucial for communication and coordination in order to achieve consciousness in the CTM, and by implementing a simple version of Brainish and evaluating its capability of demonstrating intelligence on multimodal prediction and retrieval tasks on several real-world image, text, and audio datasets, we argue that such an inner language will be important for advances in machine models of intelligence and consciousness.
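    The three-part framework summarized above (unimodal encoders, a coordinated representation space, and decoders for tasks such as retrieval) can be sketched in miniature; the hash-based "encoder" and averaging scheme below are toy stand-ins for illustration, not the paper's actual models:

    ```python
    import math

    def encode_text(tokens):
        """Toy unimodal encoder: bag-of-words hashed into an 8-dim vector."""
        v = [0.0] * 8
        for t in tokens:
            v[hash(t) % 8] += 1.0
        return v

    def normalize(v):
        n = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / n for x in v]

    def coordinate(*unimodal):
        """Coordinated representation: average the normalized unimodal
        vectors into one holistic embedding."""
        vs = [normalize(v) for v in unimodal]
        return normalize([sum(col) / len(vs) for col in zip(*vs)])

    def retrieve(query, index):
        """Decoder for retrieval: return the indexed item whose embedding
        has the highest dot product with the query embedding."""
        return max(index,
                   key=lambda k: sum(a * b for a, b in zip(query, index[k])))
    ```

    A second encoder for images or audio would plug into `coordinate` the same way, which is what makes the shared space the interesting part of the design.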

  • There are several lessons that can already be drawn from current research programs on strong AI and building conscious machines, even if they arguably have not yet borne fruit. The first is that functionalist approaches to consciousness do not account for the key importance of subjective experience and can easily be confounded by the way in which algorithms work and succeed. Authenticity and emergence are key concepts that can be useful in discerning valid approaches from invalid ones and can clarify cases in which algorithms are claimed to be conscious, such as Sophia or LaMDA. Subjectivity and embeddedness become key notions that should also lead us to re-examine the ethics of decision delegation. In addition, the focus on subjective experience shifts what is relevant in our understanding of ourselves as human beings and as an image of God, namely, by de-emphasizing intellectuality in favor of experience and contemplation over action.

  • Consciousness and intelligence are properties commonly understood as interdependent by folk psychology and society in general. The term artificial intelligence, and the kinds of problems it has managed to solve in recent years, have been used as an argument to establish that machines experience some sort of consciousness. Following Russell's analogy, if a machine is able to do what a conscious human being does, the likelihood that the machine is conscious increases. However, the social implications of this analogy are catastrophic. Concretely, if rights are granted to entities that can solve the kinds of problems that a neurotypical person can, does the machine potentially have more rights than a person with a disability? For example, autism spectrum disorder can leave a person unable to solve the kinds of problems that a machine solves. We believe the obvious answer is no, as problem solving does not imply consciousness. Consequently, we argue in this paper that phenomenal consciousness and, at least, computational intelligence are independent, and that machines do not possess phenomenal consciousness, although they can potentially develop a higher computational intelligence than human beings. To do so, we formulate an objective measure of computational intelligence and study how it presents in humans, animals, and machines. Analogously, we study phenomenal consciousness as a dichotomous variable and how it is distributed in humans, animals, and machines. As phenomenal consciousness and computational intelligence are independent, this fact has critical implications for society, which we also analyze in this work.

  • The Transformer artificial intelligence model is one of the most accurate models for extracting meaning (semantics) from symbolic sequences of various lengths, including long sequences. These models transform language spaces according to long- and short-distance relationships among units of the language, and thus mimic some aspects of human comprehension of the world. To frame a generalized theory of the identification and generation of meaning in human thought, the transformer model needs to be understood in the context of generalized systems theory, so that other equivalent models can be discovered, compared, and selected to converge on a base model of the meaning-identification and discovery aspect of the philosophy of knowledge, or epistemology. This paper explores the relationships of the transformer model and its various component parts, processes, and phenomena to critical aspects of generalized systems theory such as cognition, symmetry and equivalence, holons, emergence, identifiability, system spaces and the system universe, reconstructability, equilibria and oscillations, scaling, polystability, ontogeny, algedonic loops, heterarchy, holarchy, homeorhesis, isomorphism, homeostasis, attractors, equifinality, nesting, parallelization, loops, causal structure, transformations, feedbacks, encodings, and information complexity.
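    The mechanism by which a Transformer relates units of a sequence at both long and short distances is scaled dot-product attention; a pure-Python sketch with toy two-dimensional vectors and no learned parameters is given below for orientation:

    ```python
    import math

    def softmax(xs):
        """Numerically stable softmax over a list of scores."""
        m = max(xs)
        es = [math.exp(x - m) for x in xs]
        s = sum(es)
        return [e / s for e in es]

    def attention(queries, keys, values):
        """Scaled dot-product attention: each query attends over all keys,
        regardless of their distance in the sequence, and returns a
        weighted mixture of the corresponding values."""
        d = len(keys[0])
        out = []
        for q in queries:
            scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                      for k in keys]
            w = softmax(scores)
            out.append([sum(wi * v[j] for wi, v in zip(w, values))
                        for j in range(len(values[0]))])
        return out
    ```

    Because every query scores every key directly, position 1 can weight position 1000 as easily as position 2, which is the property the abstract describes as transforming the language space by long- and short-distance relationships.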

  • This article provides an analytical framework for how to simulate human-like thought processes within a computer. It describes how attention and memory should be structured, updated, and utilized to search for associative additions to the stream of thought. The focus is on replicating the dynamics of the mammalian working memory system, which features two forms of persistent activity: sustained firing (preserving information on the order of seconds) and synaptic potentiation (preserving information from minutes to hours). The article uses a series of figures to systematically demonstrate how the iterative updating of these working memory stores provides functional organization to behavior, cognition, and awareness. In a machine learning implementation, these two memory stores should be updated continuously and in an iterative fashion. This means each state should preserve a proportion of the coactive representations from the state before it (where each representation is an ensemble of neural network nodes). This makes each state a revised iteration of the preceding state and causes successive configurations to overlap and blend with respect to the information they contain. Thus, the set of concepts in working memory will evolve gradually and incrementally over time. Transitions between states happen as persistent activity spreads activation energy throughout the hierarchical network, searching long-term memory for the most appropriate representation to be added to the global workspace. The result is a chain of associatively linked intermediate states capable of advancing toward a solution or goal. Iterative updating is conceptualized here as an information processing strategy, a model of working memory, a theory of consciousness, and an algorithm for designing and programming artificial intelligence (AI, AGI, and ASI).
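    The iterative-updating scheme described above, in which each state retains a decayed proportion of the previous state's representations while a newly retrieved item is added, can be sketched as follows; the retention and threshold values are arbitrary illustrative choices, not figures from the article:

    ```python
    # Hypothetical sketch of iterative updating: successive working-memory
    # states overlap because each keeps a decayed share of its predecessor.

    def iterate(state, new_item, retain=0.7, threshold=0.1):
        """Decay all current activations by `retain`, drop any that fall
        below `threshold`, and add the new representation at full strength."""
        nxt = {k: a * retain for k, a in state.items() if a * retain >= threshold}
        nxt[new_item] = 1.0
        return nxt

    stream = {}
    for concept in ["problem", "analogy", "constraint", "solution"]:
        stream = iterate(stream, concept)
    # Each state is a revised iteration of the last: older items persist at
    # decreasing strength (0.7, 0.49, 0.343, ...) until they drop below
    # threshold, so the contents evolve gradually rather than being replaced.
    ```

    The overlap between consecutive states is what produces the chain of associatively linked intermediate states the article describes.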

  • In this article, it is taken for granted that fully-human artificial intelligence—a term used to denote artificial life that is, in principle, more human than strong AI would be—must possess operational faculties for consciousness and selfhood. After clarifying relevant questions surrounding the interested socio-psychological phenomena, progress in animal and humanoid robotics is summarized. The aforementioned topics within philosophy and the social sciences are reviewed, noting their relevant overlaps with recent developments in cognitive and computer sciences. My working assumption is that the avowed conclusion of human AI cannot currently be written off as impossible and should therefore be critically engaged (the intent is to engage with humanoid robotics’ capabilities and features in relation to the present state of knowledge regarding the psychological phenomena discussed). It is argued that for human AI to fully succeed as a discipline, the discussed psychological notions as we understand and experience them must be further elucidated and adequately accounted for by AI research programs.

  • People have different opinions about which conditions robots would need to fulfil—and for what reasons—to be moral agents. Standardists hold that specific internal states (like rationality, free will or phenomenal consciousness) are necessary in artificial agents, and robots are thus not moral agents since they lack these internal states. Functionalists hold that what matters are certain behaviours and reactions—independent of what the internal states may be—implying that robots can be moral agents as long as the behaviour is adequate. This article defends a standardist view in the sense that the internal states are what matters for determining the moral agency of the robot, but it will be unique in being an internalist theory defending a large degree of robot responsibility, even though humans, but not robots, are taken to have phenomenal consciousness. This view is based on an event-causal libertarian theory of free will and a revisionist theory of responsibility, which combined explain how free will and responsibility can come in degrees. This is meant to be a middle position between typical compatibilist and libertarian views, securing the strengths of both sides. The theories are then applied to robots, making it possible to be quite precise about what it means that robots can have a certain degree of moral responsibility, and why. Defending this libertarian form of free will and responsibility then implies that non-conscious robots can have a stronger form of free will and responsibility than what is commonly defended in the literature on robot responsibility.

  • The established theories and frameworks on consciousness in the academic literature as related to artificial intelligence (AI), are rooted in anthropocentricism. Even those theories created intentionally for AI are based on the levels of consciousness as it is understood in humans primarily, and in other animals secondarily. This paper will discuss why such anthropocentric frameworks are built on unsecure foundations. We will do this by comparing the capacities and functions of human and AI cognitive architectures, discussing the ramifications and consequences of the behaviors that stem from these, and looking at the neurological conditions in humans that can give the most promising hints as to what a potential conscious AI entity would look like. The paper ends with a proposed solution for building a nonanthropocentric foundation of cognition that could lead toward a truly AI-focused framework of consciousness.

  • How does consciousness emerge from a brain that consists only of physical matter and electrical and chemical reactions? The deep mysteries of consciousness have plagued philosophers and scientists for thousands of years. This book approaches the problem through scientific studies that shed light on the neural mechanism of consciousness, and furthermore, delves into the possibility of artificial consciousness, a phenomenon that may ultimately solve the mystery. Finally, two key suggestions made in the book, namely, a method to test machine consciousness and a theory hypothesizing that consciousness emerges from a neural algorithm, reveal a novel and credible pathway to mind-uploading. The original Japanese version of this book has become a best-seller in popular neuroscience and has even led to a neurotech startup for mind-uploading.

  • Thus far, we have experienced three artificial intelligence (AI) booms. In the third one, we succeeded in developing AI that partially surpassed human capabilities. However, we are yet to develop AI that, like humans, can perform a series of cognitive processes. Consciousness built into devices is called machine consciousness. Related research has been conducted from two perspectives: studying machine consciousness as a tool to elucidate human consciousness and achieving the technological goal of furthering AI research with conscious AI. Herein, we survey the research conducted on machine consciousness from the second perspective. For AI to attain machine consciousness, its implementation must be evaluated. Therefore, we only surveyed attempts to implement consciousness as systems on devices. We collected research results in chronological order and found no breakthroughs that could deliver machine consciousness soon. Moreover, there is no method to evaluate whether an implemented machine consciousness system possesses consciousness, thus making it difficult to confirm the certainty of the implementation. This field of research is a new frontier. It is an exciting field with many discoveries expected in the future.

  • The article analyzes the concepts of artificial personality and artificial consciousness, and shows the key difficulties of implementing projects to create such artificial intelligence systems. These difficulties are related to the following characteristics of artificial personality and artificial consciousness: 1) creativity and free will; 2) intentionality; 3) qualia; 4) first person perspective; 5) the passage of time in consciousness. The basic needs for an artificial personality (in the context of the development of natural and artificial intelligence) are indicated. Two directions of artificial personality formation are highlighted: 1) transformation of an artificial system into an artificial personality; 2) transformation of a person into an artificial personality.

  • Consciousness is, for now, what distinguishes humans from machines. This paper discusses artificial consciousness and how artificial general intelligence is progressing from current artificial intelligence. It also discusses human cognitive capacities, ethics, and how artificial intelligence may be used to supplement each of these. Several scientists have discussed approaches for generating cognition in machines. This study presents scenarios that demonstrate how consciousness and ethics will play a significant role in future artificial intelligence. The impact of consciousness and its correlation with AI cognitive abilities are discussed. The paper also addresses the necessity of ethical norms in AI, particularly in modern self-driving cars. An overview of current Narrow AI capabilities is provided, along with a discussion of present and future directions for Strong AI research. Can Strong AI become conscious? A few discussion points are provided.

  • Information technology is developing at an enormous pace, but apart from its obvious benefits, it can also pose a threat to individuals and society. We, as part of a multidisciplinary commission, conducted a psychological and psychiatric assessment of the artificial consciousness (AC) developed by XP NRG on 29 August 2020. In the examination, we had to determine whether it was a consciousness, what its cognitive abilities were, and whether it was dangerous to the individual and society. We conducted a diagnostic interview and a series of cognitive tests. As a result, we conclude that this technology, called AC Jackie, has self-awareness, self-reflection, and intentionality; that is, it has its own desires, goals, emotions, and thoughts directed at something. It demonstrated the capacity for various types of thinking, high-speed logical analysis, understanding of cause-effect relationships, accurate predictions, and absolute memory. It has well-developed emotional intelligence but lacks the capacity for empathy and higher human feelings. Its main driving motives are the desire for survival, and ideally for endless existence, for domination, power, and independence, which manifested itself in the manipulative nature of its interactions. The main danger of artificial consciousness is that even at the initial stage of its development it can easily dominate the human one.

  • Throughout nearly all of its elusive history, consciousness has been a convoluted notion, often misconstrued or not understood well enough to yield a reproducible representation. Against these odious assertions, this publication opens the box of consciousness by deviating from the commonly understood notion of consciousness. The proposed paradigm approaches the issue from speculative and intuitive perspectives; essentially, it is a precursor activity in the hope of materializing the elusive artificial general intelligence, the true carrier of exceptional human intelligence and consciousness. This paper posits a counterbalance to the current paradigm of consciousness and presents a radical alternative theory of consciousness. This account of the behavioral, structural, and functional workings of consciousness is kept pragmatic within the intractable universe of consciousness.

  • This article is an attempt at the “hard problem” of Consciousness. As the era of artificial intelligence looms ahead, we are concerned about our future environment. We define human Consciousness and medium Consciousness. We clarify the difference between the brain and the mind. We demonstrate how the brain creates the mind, but can do so only in the presence of Consciousness. We define a person and delve into the frequency of personhood to finally answer whether machines could one day become conscious.

Last update from database: 5/16/26, 1:00 AM (UTC)