
Full bibliography (675 resources)

  • This article addresses the question of whether machine understanding requires consciousness. Some researchers in the field of machine understanding have argued that computers need not be conscious as long as they can match or exceed human performance in certain tasks. But despite the remarkable recent success of machine learning systems in areas such as natural language processing and image classification, important questions remain about the limits of their performance and about whether their cognitive abilities entail genuine understanding or are the product of spurious correlations. Here I draw a distinction between natural, artificial, and machine understanding. I analyse some concrete examples of natural understanding and show that, although it shares properties with the artificial understanding implemented in current machine learning systems, it also has some essential differences, the main one being that natural understanding in humans entails consciousness. Moreover, evidence from psychology and neurobiology suggests that it is this capacity for consciousness that, at least in part, explains the superior performance of humans in some cognitive tasks and may also account for the authenticity of semantic processing that seems to be the hallmark of natural understanding. I propose a hypothesis that might help to explain why consciousness is important to understanding. In closing, I suggest that progress toward implementing human-like understanding in machines (machine understanding) may benefit from a naturalistic approach in which natural processes are modelled as closely as possible in mechanical substrates.

  • These are interesting times to be alive as humans, Homo sapiens, the most versatile and curious species on this pale blue planet, Earth, that we all call home in the vast, ever-expanding sea of the cosmos. We are an obscure race, like a flake of chili on a pizza, timid in our behavior and forever biting off more than we can chew. Something is about to happen; in truth it has already started, and this time it will change everything. The one trait that defines humans and sets them apart from all other species, the trait we all take pride in, is intelligence and consciousness. But the tide is shifting, and a new traveller is arriving on our shores, one that will change our alphabet from "A to Z" to "A to I". Welcome to the future of AI and robotics, where questions like "how far is too far?" will be answered; for now, the answer of this abstract is artificial intelligence and cognitive robotics. AI has applications in almost every field today, from medicine and art to space architecture and cognitive humanoid robots, and, most visible yet most deceptive of all, the mobile phone. The story is long, and all we can hope is that it will be a happy one. AI is creeping into our lives much faster than many eminent figures predicted: from healing through AI to digital interaction with an AI avatar of yourself, from cognitive robotics to cognitive embodiment, none of this has ever been done in the history of humankind. We are at the dawn of a new age, the Age of AI, in which the most important tool for survival will be the cooperation of human intelligence and digital consciousness. This paper is a nutshell biography of artificial intelligence: its fundamentals, a concise history, and the road ahead, together with the applications of this fascinating term we all know as artificial intelligence.

  • How do we make sense of the countless pieces of information flowing to us from the environment? This question, sometimes called the Problem of Representation, is one of the most significant problems in cognitive science. Some pioneering and important work in the attempt to address the problem of representation was produced with the help of Kant’s philosophy. In particular, the suggestion was that, by analogy with Kant’s distinction between sensibility and the understanding, we can distinguish between high- and low-level perception, and then focus on the step from high-level perception to abstract cognitive processes of sense-making. This was possible through a simplification of the input provided by low-level perception (reduced, for instance, to a string of letters), which the computer programme was supposed to ‘understand’. Most recently, a closer look at Kant’s model of the mind led to a breakthrough in the attempt to build programmes for verbal reasoning tasks: these kinds of software, or ‘Kantian machines’, seemed able to achieve human-level performance on verbal reasoning tasks. Yet the claim has sometimes been stronger, namely, that some such programmes not only compete with human cognitive agents but themselves represent cognitive agents. The focus of my paper is on this claim; I argue that it is unwarranted, but that its critical investigation may lead to further avenues for pursuing the project of creating artificial intelligence.

  • Hofstadter [1979, 2007] offered a novel Gödelian proposal which purported to reconcile the apparently contradictory theses that (1) we can talk, in a non-trivial way, of mental causation being a real phenomenon and that (2) mental activity is ultimately grounded in low-level rule-governed neural processes. In this paper, we critically investigate Hofstadter’s analogical appeals to Gödel’s [1931] First Incompleteness Theorem, whose “diagonal” proof supposedly contains the key ideas required for understanding both consciousness and mental causation. We maintain that bringing sophisticated results from Mathematical Logic into play cannot furnish insights which would otherwise be unavailable. Lastly, we conclude that there are simply too many weighty details left unfilled in Hofstadter’s proposal. These really need to be fleshed out before we can even hope to say that our understanding of classical mind-body problems has been advanced through metamathematical parallels with Gödel’s work.

  • This paper takes as its research problem the following question: what is it like to be an artificial intelligence? It aims to critically analyze the epistemological and semantic aspects developed by Thomas Nagel in "What Is It Like to Be a Bat?" and The View from Nowhere, demonstrating the relationship between physicalism and subjectivity and its application to artificially intelligent beings. We chose these two works because of the author's importance in analytic philosophy and his approach to consciousness. The analysis shows that the defense of artificial intelligence as a subject of law rests intrinsically on physicalism. However, in refuting physicalism, Nagel does not offer an alternative outside the scope of dualism. Thus, the Procedural Theory of the Subject of Law is developed, with stages of emancipation of the being before the law. As a result, the reductive physicalist view proves insufficient to ground the status of an artificial intelligence as a subject of law, as a legal and political being in the social order. However, if the three stages of its formation (emancipation, interspecies recognition, and personification) are observed, the possibility of achieving that status is assumed. It is concluded that knowing what it is like to be an artificial intelligence is unverifiable. At the current scientific stage, an artificially intelligent being cannot (yet) be considered a subject of law, on pain of characterizing instrumentalism. An integrated, analytical, deductive, and bibliographic research methodology is used to obtain these results and conclusions.

  • The “Slicing Problem” is a thought experiment that raises questions for substrate-neutral computational theories of consciousness, including those that specify a certain causal structure for the computation like Integrated Information Theory. The thought experiment uses water-based logic gates to construct a computer in a way that permits cleanly slicing each gate and connection in half, creating two identical computers each instantiating the same computation. The slicing can be reversed and repeated via an on/off switch, without changing the amount of matter in the system. The question is what do different computational theories of consciousness believe is happening to the number and nature of individual conscious units as this switch is toggled. Under a token interpretation, there are now two discrete conscious entities; under a type interpretation, there may remain only one. Both interpretations lead to different implications depending on the adopted theoretical stance. Any route taken either allows mechanisms for “consciousness-multiplying exploits” or requires ambiguous boundaries between conscious entities, raising philosophical and ethical questions for theorists to consider. We discuss resolutions under different theories of consciousness for those unwilling to accept consciousness-multiplying exploits. In particular, we specify three features that may help promising physicalist theories to navigate such thought experiments.

  • We’re experiencing a time when digital technologies and advances in artificial intelligence, robotics, and big data are redefining what it means to be human. How do these advancements affect contemporary media and music? This collection traces how media, with a focus on sound and image, engages with these new technologies. It bridges the gap between science and the humanities by pairing humanists’ close readings of contemporary media with scientists’ discussions of the science and math that inform them. This text includes contributions by established and emerging scholars performing across-the-aisle research on new technologies, exploring topics such as facial and gait recognition; EEG and audiovisual materials; surveillance; and sound and images in relation to questions of sexual identity, race, ethnicity, disability, and class and includes examples from a range of films and TV shows including Blade Runner, Black Mirror, Mr. Robot, Morgan, Ex Machina, and Westworld. Through a variety of critical, theoretical, proprioceptive, and speculative lenses, the collection facilitates interdisciplinary thinking and collaboration and provides readers with ways of responding to these new technologies.

  • Recent advances in artificial intelligence (AI) have achieved human-scale speed and accuracy for classification tasks. Current systems do not need to be conscious to recognize patterns and classify them. However, for AI to advance to the next level, it needs to develop capabilities such as metathinking, creativity, and empathy. We contend that such a paradigm shift is possible through a fundamental change in the state of artificial intelligence toward consciousness, similar to what took place for humans through the process of natural selection and evolution. To that end, we propose that consciousness in AI is an emergent phenomenon that primordially appears when two machines cocreate their own language through which they can recall and communicate their internal state of time-varying symbol manipulation. Because, in our view, consciousness arises from the communication of inner states, it leads to empathy. We then provide a link between the empathic quality of machines and better service outcomes associated with empathic human agents that can also lead to accountability in AI services.

  • It is noted that there are many different definitions of and views about qualia, and this makes qualia into a vague concept without much theoretical and constructive value. Here, qualia are redefined in a more general way. It is argued that the redefined qualia will be essential to the mind–body problem, the problem of consciousness and also to the symbol grounding problem, which is inherent in physical symbol systems. Then, it is argued that the redefined qualia are necessary for Artificial Intelligence systems for the operation with meanings. Finally, it is proposed that robots with qualia may be conscious.

  • An area related to artificial intelligence and computational robotics is artificial consciousness, also known as computer consciousness or virtual consciousness. Artificial consciousness theory aims to establish what would have to be synthesized for consciousness to be found in an engineered artifact.

  • Many theories have been developed and many artificial systems implemented in the area of machine consciousness, though none has yet achieved it. As a possible approach, we are interested in implementing a system that integrates different theories. To that end, this paper proposes a model based on global workspace theory and an attention mechanism, providing a foundational framework for our future work. To examine this model, two experiments are conducted. The first demonstrates the agent’s ability to shift attention over multiple stimuli, which accounts for the dynamics of conscious content. The second simulates attentional blink and lag-1 sparing, two well-studied effects in the psychology and neuroscience of attention and consciousness, to assess the agent’s compatibility with human brains. In summary, the main contributions of this paper are (1) adapting the global workspace framework with separated workspace nodes, reducing unnecessary computation while retaining the potential for global availability; (2) embedding an attention mechanism into the global workspace framework as the competition mechanism for conscious access; (3) proposing a synchronization mechanism in the global workspace that supports the lag-1 sparing effect while retaining the attentional blink effect.
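
For intuition only, here is a minimal Python sketch of the kind of mechanism this abstract describes: separated workspace nodes whose inputs compete for conscious access through a softmax attention step, with the winner broadcast globally. All names and constants (GlobalWorkspace, the salience values, the temperature) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def attention_competition(salience, temperature=1.0):
    """Softmax competition over stimulus salience: graded attention
    weights, with the most salient stimulus winning workspace access."""
    logits = np.asarray(salience, dtype=float) / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return int(np.argmax(weights)), weights

class GlobalWorkspace:
    """Separated workspace nodes: only the winning node's content is
    updated and broadcast, so global availability is retained without
    recomputing every node at every step."""
    def __init__(self, n_nodes):
        self.contents = [None] * n_nodes

    def step(self, stimuli, salience):
        winner, _ = attention_competition(salience)
        self.contents[winner] = stimuli[winner]
        return winner, stimuli[winner]  # the globally broadcast content

# Example: three stimuli compete; the most salient enters the workspace.
ws = GlobalWorkspace(n_nodes=3)
winner, broadcast = ws.step(["tone", "flash", "touch"], [0.2, 1.5, 0.4])
```

Using attention as the access bottleneck is what lets the sketch reproduce competition dynamics (only one content is globally available per step) without computing over every node on every cycle.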

  • In popular media, there is often a connection drawn between the advent of awareness in artificial agents and those same agents simultaneously achieving human or superhuman level intelligence. In this work, we explore the validity and potential application of this seemingly intuitive link between consciousness and intelligence. We do so by examining the cognitive abilities associated with three contemporary theories of conscious function: Global Workspace Theory (GWT), Information Generation Theory (IGT), and Attention Schema Theory (AST). We find that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans. With this insight, we turn to the field of Artificial Intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. Having identified this trend, we use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories may be combined into a single unified and implementable model. Given that it is made possible by cognitive abilities underlying each of the three functional theories, artificial agents capable of mental time travel would not only possess greater general intelligence than current approaches, but also be more consistent with our current understanding of the functional role of consciousness in humans, thus making it a promising near-term goal for AI research.

  • The purpose of this article is to shed some light on aspects of human consciousness and artificial intelligence (AI). Consciousness is a unique capacity that humans possess and that makes them the dominant species on Earth. Human consciousness and artificial intelligence are both complex entities, which makes it all the more complicated for AI to develop consciousness. This article introduces topics such as cognitive abilities and ethics. Ethics is often confused with morality, but morality is a part of ethics: ethics is a theoretical framework built on perception and on what society considers right or wrong. The article also examines whether artificial intelligence can develop consciousness as complex as human consciousness, whether it can have cognitive abilities, and whether it can have ethics.

  • Consciousness concerns internal and external existence as it is felt and reflected through actions. Self-existence is the feeling of one's own existence and is also associated with freedom [1, 2]. It is the feeling and awareness that the individual exists and can undertake different activities at will, including choosing, attaining certain objectives, and fighting for survival.

  • The intersection between neuroscience and artificial intelligence (AI) research has created synergistic effects in both fields. While neuroscientific discoveries have inspired the development of AI architectures, new ideas and algorithms from AI research have produced new ways to study brain mechanisms. A well-known example is the case of reinforcement learning (RL), which has stimulated neuroscience research on how animals learn to adjust their behavior to maximize reward. In this review article, we cover recent collaborative work between the two fields in the context of meta-learning and its extension to social cognition and consciousness. Meta-learning refers to the ability to learn how to learn, such as learning to adjust the hyperparameters of existing learning algorithms and learning how to use existing models and knowledge to efficiently solve new tasks. This capability is important for making existing AI systems more adaptive and flexible. Since this is one of the areas where there is a gap between human performance and current AI systems, successful collaboration should produce new ideas and progress. Starting from the role of RL algorithms in driving neuroscience, we discuss recent developments in deep RL applied to modeling prefrontal cortex functions. From a broader perspective, we discuss the similarities and differences between social cognition and meta-learning, and finally conclude with speculations on the potential links between intelligence as endowed by model-based RL and consciousness. For future work we highlight data efficiency, autonomy, and intrinsic motivation as key research areas for advancing both fields.
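
As a toy illustration of the "learning to learn" idea discussed in this abstract, the sketch below wraps a simple temporal-difference update in an outer loop that adapts its own learning rate from recent prediction errors. The update rule, constants, and function names are assumptions for illustration, not drawn from the reviewed work.

```python
import numpy as np

def inner_td_update(value, reward, lr):
    """One temporal-difference update of a single state value."""
    return value + lr * (reward - value)

def meta_learn(rewards, lr=0.1, meta_lr=0.01):
    """Inner loop: TD learning. Outer loop: adjust the learning rate (a
    hyperparameter) up when errors stay large and down when they shrink,
    a crude stand-in for meta-learning over hyperparameters."""
    value = 0.0
    for r in rewards:
        err = r - value
        value = inner_td_update(value, r, lr)
        lr = float(np.clip(lr + meta_lr * (abs(err) - 0.5), 0.01, 1.0))
    return value, lr

rng = np.random.default_rng(0)
value, adapted_lr = meta_learn(rng.normal(1.0, 0.2, size=200))
```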

  • Having a rich multimodal inner language is an important component of human intelligence that enables several necessary core cognitive functions such as multimodal prediction, translation, and generation. Building upon the Conscious Turing Machine (CTM), a machine model for consciousness proposed by Blum and Blum (2021), we describe the desiderata of a multimodal language called Brainish, comprising words, images, audio, and sensations combined in representations that the CTM's processors use to communicate with each other. We define the syntax and semantics of Brainish before operationalizing this language through the lens of multimodal artificial intelligence, a vibrant research area studying the computational tools necessary for processing and relating information from heterogeneous signals. Our general framework for learning Brainish involves designing (1) unimodal encoders to segment and represent unimodal data, (2) a coordinated representation space that relates and composes unimodal features to derive holistic meaning across multimodal inputs, and (3) decoders to map multimodal representations into predictions (for fusion) or raw data (for translation or generation). Through discussing how Brainish is crucial for communication and coordination in order to achieve consciousness in the CTM, and by implementing a simple version of Brainish and evaluating its capability of demonstrating intelligence on multimodal prediction and retrieval tasks on several real-world image, text, and audio datasets, we argue that such an inner language will be important for advances in machine models of intelligence and consciousness.
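
The three-part recipe named in this abstract (unimodal encoders, a coordinated representation space, decoders for fusion or translation) can be sketched schematically as below. The linear maps, dimensionalities, and fusion rule are stand-in assumptions, not the CTM's or Brainish's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # dimensionality of the shared ("coordinated") representation space

def encode(x, W):
    """Unimodal encoder: a linear map into the shared space, normalized so
    cosine similarity relates content across modalities (a stand-in for a
    learned network)."""
    z = W @ x
    return z / (np.linalg.norm(z) + 1e-8)

# One projection per modality (text: 300-d, image: 512-d, audio: 128-d).
W_text, W_image, W_audio = (rng.normal(size=(D, d)) for d in (300, 512, 128))

z_text  = encode(rng.normal(size=300), W_text)
z_image = encode(rng.normal(size=512), W_image)
z_audio = encode(rng.normal(size=128), W_audio)

# Coordinated space: similarity supports cross-modal retrieval tasks.
similarity = float(z_text @ z_image)

# Fusion decoder: compose modalities into one holistic representation and
# map it to a prediction (here, a linear readout over 10 classes).
fused = (z_text + z_image + z_audio) / 3
W_out = rng.normal(size=(10, D))
prediction = W_out @ fused
```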

  • There are several lessons that can already be drawn from current research programs on strong AI and on building conscious machines, even if they have arguably not yet borne fruit. The first is that functionalist approaches to consciousness do not account for the key importance of subjective experience and can easily be confounded by the way in which algorithms work and succeed. Authenticity and emergence are key concepts that can be useful in discerning valid approaches from invalid ones and can clarify cases in which algorithms are claimed to be conscious, such as Sophia or LaMDA. Subjectivity and embeddedness become key notions that should also lead us to re-examine the ethics of decision delegation. In addition, the focus on subjective experience shifts what is relevant in our understanding of ourselves as human beings and as an image of God, namely by de-emphasizing intellectuality in favor of experience, and contemplation over action.

  • Consciousness and intelligence are properties commonly understood as interdependent by folk psychology and society in general. The term artificial intelligence, and the kinds of problems it has managed to solve in recent years, have been used as an argument to establish that machines experience some sort of consciousness. Following Russell's analogy, if a machine is able to do what a conscious human being does, the likelihood that the machine is conscious increases. However, the social implications of this analogy are catastrophic. Concretely, if rights are granted to entities that can solve the kinds of problems a neurotypical person can, does a machine potentially have more rights than a person with a disability? For example, autism spectrum disorder can make a person unable to solve the kinds of problems that a machine solves. We believe the obvious answer is no, as problem solving does not imply consciousness. Consequently, we argue in this paper that phenomenal consciousness and, at least, computational intelligence are independent, and that machines do not possess phenomenal consciousness, although they can potentially develop a higher computational intelligence than human beings. To do so, we try to formulate an objective measure of computational intelligence and study how it presents in humans, animals, and machines. Analogously, we study phenomenal consciousness as a dichotomous variable and how it is distributed across humans, animals, and machines. Since phenomenal consciousness and computational intelligence are independent, this fact has critical implications for society, which we also analyze in this work.

  • The Transformer artificial intelligence model is one of the most accurate models for extracting meaning/semantics from sets of symbolic sequences of various lengths, including long sequences. These models transform language spaces according to long- and short-distance relationships among units of the language, and thus mimic some aspects of human comprehension of the world. To frame a generalized theory of the identification and generation of meaning in human thought, the transformer model needs to be understood in the context of generalized systems theory, so that other equivalent models can be discovered, compared, and selected to converge on a base model of the meaning-identification and discovery aspect of the philosophy of knowledge, or epistemology. This paper explores the relationships of the transformer model, its various component parts, processes, and phenomena to some critical aspects of generalized systems theory, such as cognition, symmetry & equivalence, holons, emergence, identifiability, system spaces and the system universe, reconstructability, equilibriums & oscillations, scaling, polystability, ontogeny, algedonic loops, heterarchy, holarchy, homeorhesis, isomorphism, homeostasis, attractors, equifinality, nesting, parallelization, loops, causal structure, transformations, feedbacks, encodings, and information complexity.
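
For reference, the core transformer operation this abstract builds on is scaled dot-product attention, in which every token is related to every other regardless of distance in the sequence. A minimal NumPy version follows; the shapes are arbitrary illustrative choices.

```python
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V for a single attention head: the score
    matrix holds pairwise token affinities, so long- and short-distance
    relationships are weighted by the same mechanism."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row mixes information from all tokens

rng = np.random.default_rng(0)
seq_len, d_k = 8, 16
Q, K, V = (rng.normal(size=(seq_len, d_k)) for _ in range(3))
out = attention(Q, K, V)  # shape (8, 16): one context-mixed vector per token
```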

  • This article provides an analytical framework for how to simulate human-like thought processes within a computer. It describes how attention and memory should be structured, updated, and utilized to search for associative additions to the stream of thought. The focus is on replicating the dynamics of the mammalian working memory system, which features two forms of persistent activity: sustained firing (preserving information on the order of seconds) and synaptic potentiation (preserving information from minutes to hours). The article uses a series of figures to systematically demonstrate how the iterative updating of these working memory stores provides functional organization to behavior, cognition, and awareness. In a machine learning implementation, these two memory stores should be updated continuously and in an iterative fashion. This means each state should preserve a proportion of the coactive representations from the state before it (where each representation is an ensemble of neural network nodes). This makes each state a revised iteration of the preceding state and causes successive configurations to overlap and blend with respect to the information they contain. Thus, the set of concepts in working memory will evolve gradually and incrementally over time. Transitions between states happen as persistent activity spreads activation energy throughout the hierarchical network, searching long-term memory for the most appropriate representation to be added to the global workspace. The result is a chain of associatively linked intermediate states capable of advancing toward a solution or goal. Iterative updating is conceptualized here as an information processing strategy, a model of working memory, a theory of consciousness, and an algorithm for designing and programming artificial intelligence (AI, AGI, and ASI).
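
A minimal sketch of the iterative-updating scheme as this abstract describes it: a fast store (sustained firing) and a slow store (synaptic potentiation) decay at different rates, each new state carries over a proportion of the previous one, and the combined persistent activity cues the best-matching long-term-memory item into the workspace. Decay proportions, dimensions, and the dot-product similarity cue are illustrative assumptions, not the article's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_LTM = 32, 100
ltm = rng.normal(size=(N_LTM, D))  # long-term memory: one vector per item

fast = np.zeros(D)  # sustained firing: decays quickly (seconds)
slow = np.zeros(D)  # synaptic potentiation: decays slowly (minutes+)

def step(fast, slow, fast_keep=0.5, slow_keep=0.95):
    """One iteration: partial carry-over makes successive states overlap
    and blend, so workspace content evolves gradually."""
    cue = fast + 0.3 * slow                      # spreading activation
    scores = ltm @ cue if cue.any() else rng.normal(size=N_LTM)
    nxt = ltm[int(np.argmax(scores))]            # best-matching LTM item
    fast = fast_keep * fast + nxt                # each state is a revised
    slow = slow_keep * slow + 0.1 * nxt          # iteration of the last
    return fast, slow, nxt

for _ in range(10):                              # a chain of associatively
    fast, slow, added = step(fast, slow)         # linked workspace states
```

Because each state keeps a fraction of the coactive representations from the state before it, the sequence of retrieved items forms the associatively linked chain the article describes, rather than a series of unrelated retrievals.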

Last update from database: 1/2/26, 2:00 AM (UTC)