
Full bibliography (716 resources)

  • This article will explore the expressivity and tractability of vividness, as viewed from the interdisciplinary perspective of the cognitive sciences, including the sub-disciplines of artificial intelligence, cognitive psychology, neuroscience, and phenomenology. Following the precursor work by Benussi in experimental phenomenology, and seminal papers by David Marks in psychology and, later, Hector Levesque in computer science, a substantial part of the discussion has centered on a symbolic approach to the concept of vividness. At the same time, a similar concept linked to semantic memory, imagery, and mental models has had a long history in cognitive psychology, with new emerging links to cognitive neuroscience. More recently, there is a push towards neural-symbolic representations, which allow room for the integration of brain models of vividness with a symbolic concept of vividness. Such works lead one to question the phenomenology of vividness in the context of consciousness, and the related ethical concerns. The purpose of this paper is to review the state of the art, advances, and further potential developments of artificial-human vividness while laying the ground for a shared conceptual platform for dialogue, communication, and debate across all the relevant sub-disciplines. Within such a context, an important goal of the paper is to define the crucial role of vividness in grounding simulation and modeling within the psychology (and neuroscience) of human reasoning.

  • As artificial intelligence (AI) continues to proliferate across manufacturing, economic, medical, aerospace, transportation, and social realms, ethical guidelines must be established not only to protect humans at the mercy of automated decision making, but also to protect autonomous agents themselves, should they become conscious. While AI appears "smart" to the public, and may outperform humans on specific tasks, the truth is that today’s AI lacks insight beyond the restricted scope of problems to which it has been tasked. Without context, AI is effectively incapable of comprehending the true nature of what it does and is oblivious to the reverberations it may cause in the real world should it err in prediction. Despite this, future AI may be equipped with enough sensors and neural processing capacity to acquire a dynamic cognizance more akin to humans. If this materializes, will autonomous agents question their own position in this world? One must entertain the possibility that this is not merely hypothetical but may, in fact, be imminent if humanity succeeds in creating artificial general intelligence (AGI). If autonomous agents with the capacity for artificial consciousness are delegated grueling tasks, outcomes could mirror the plight of exploited workers, resulting in retaliation, failure to comply, alternative objectives, or breakdown of human-autonomy teams. It will be critical to decide how and in which contexts various agents should be utilized. Additionally, delineating the meaning of trust and ethical consideration between humans and machines is problematic because descriptions of trust and ethics have only been detailed in human terms. This means autonomous agents will be subject to anthropomorphism, but robots are not humans, and their experience of trust and ethics might be markedly distinct from ours. Ideally speaking, to fully entrust a machine with human-centered tasks, one must believe that such an entity is reliable, competent, and has the appropriate priorities.

  • Much of current artificial intelligence (AI) and the drive toward artificial general intelligence (AGI) focuses on developing machines for functional tasks that humans accomplish. These may be narrowly specified tasks as in AI, or more general tasks as in AGI – but typically these tasks do not target higher-level human cognitive abilities, such as consciousness or morality; these are left to the realm of so-called “strong AI” or “artificial consciousness.” In this paper, we focus on how a machine can augment humans rather than do what they do, and we extend this beyond AGI-style tasks to augmenting peculiarly personal human capacities, such as wellbeing and morality. We base this proposal on associating such capacities with the “self,” which we define as the “environment-agent nexus”; namely, a fine-tuned interaction of brain with environment in all its relevant variables. We consider richly adaptive architectures that have the potential to implement this interaction by taking lessons from the brain. In particular, we suggest conjoining the free energy principle (FEP) with the dynamic temporo-spatial (TSD) view of neuro-mental processes. Our proposed integration of FEP and TSD – in the implementation of artificial agents – offers a novel, expressive, and explainable way for artificial agents to adapt to different environmental contexts. The targeted applications are broad: from adaptive intelligence-augmenting agents (IAs) that assist psychiatric self-regulation to environmental disaster prediction and personal assistants. This reflects the central role of the mind and moral decision-making in most of what we do as humans.

  • We present a unifying framework to study consciousness based on algorithmic information theory (AIT). We take as a premise that ``there is experience'' and focus on the requirements for structured experience (S) --- the spatial, temporal, and conceptual organization of our first-person experience of the world and of ourselves as agents in it. Our starting point is the insight that access to good models --- succinct and accurate generative programs of world data --- is crucial for homeostasis and survival. We hypothesize that the successful comparison of such models with data provides the structure to experience. Building on the concept of Kolmogorov complexity, we can associate the qualitative aspects of S with the algorithmic features of the model, including its length, which reflects the structure discovered in the data. Moreover, a modeling system tracking structured data will display dimensionality reduction and criticality features that can be used empirically to quantify the structure of the program run by the agent. AIT provides a consistent framework to define the concepts of life and agent and allows for the comparison between artificial agents and S-reporting humans to provide an educated guess about agent experience. A first challenge is to show that a human agent has S to the extent they run encompassing and compressive models tracking world data. For this, we propose to study the relation between the structure of neurophenomenological, physiological, and behavioral data. The second is to endow artificial agents with the means to discover good models and study their internal states and behavior. We relate the algorithmic framework to other theories of consciousness and discuss some of its epistemological, philosophical, and ethical aspects.

  • I review three problems that have historically motivated pessimism about artificial intelligence: (1) the problem of consciousness, according to which artificial systems lack the conscious oversight that characterizes intelligent agents; (2) the problem of global relevance, according to which artificial systems cannot solve fully general theoretical and practical problems; (3) the problem of semantic irrelevance, according to which artificial systems cannot be guided by semantic comprehension. I connect the dots between all three problems by drawing attention to non-syntactic inferences — inferences that are made on the basis of insight into the rational relationships among thought-contents. Consciousness alone affords such insight, I argue, and such insight alone confers positive epistemic status on the execution of these inferences. Only when artificial systems can execute inferences that are rationally guided by phenomenally conscious states will such systems count as intelligent in a literal sense.

  • This article argues that if panpsychism is true, then there are grounds for thinking that digitally-based artificial intelligence (AI) may be incapable of having coherent macrophenomenal conscious experiences. Section 1 briefly surveys research indicating that neural function and phenomenal consciousness may both be analog in nature. We show that physical and phenomenal magnitudes—such as rates of neural firing and the phenomenally experienced loudness of sounds—appear to covary monotonically with the physical stimuli they represent, forming the basis for an analog relationship between the three. Section 2 then argues that if this is true and micropsychism—the panpsychist view that phenomenal consciousness or its precursors exist at a microphysical level of reality—is also true, then human brains must somehow manipulate fundamental microphysical-phenomenal magnitudes in an analog manner that renders them phenomenally coherent at a macro level. However, Sect. 3 argues that because digital computation abstracts away from microphysical-phenomenal magnitudes—representing cognitive functions non-monotonically in terms of digits (such as ones and zeros)—digital computation may be inherently incapable of realizing coherent macroconscious experience. Thus, if panpsychism is true, digital AI may be incapable of achieving phenomenal coherence. Finally, Sect. 4 briefly examines our argument’s implications for Tononi’s Integrated Information Theory (IIT) of consciousness, which we contend may need to be supplanted by a theory of macroconsciousness as analog microphysical-phenomenal information integration.

  • In the period between Turing’s 1950 “Computing Machinery and Intelligence” and the current considerable public exposure to the term “artificial intelligence (AI)”, Turing’s question “Can a machine think?” has become a topic of daily debate in the media, the home, and, indeed, the pub. However, “Can a machine think?” is sliding towards a more controversial issue: “Can a machine be conscious?” Of course, the two issues are linked. It is held here that consciousness is a prerequisite to thought. In Turing’s imitation game, a conscious human player is replaced by a machine, which, in the first place, is assumed not to be conscious, and which may fool an interlocutor, as consciousness cannot be perceived from an individual’s speech or action. Here, the developing paradigm of machine consciousness is examined and combined with an extant analysis of living consciousness to argue that a conscious machine is feasible, and capable of thinking. The route to this utilizes learning in a “neural state machine”, which brings into play Turing’s view of neural “unorganized” machines. The conclusion is that a machine of the “unorganized” kind could have an artificial form of consciousness that resembles the natural form and that throws some light on its nature.

  • This article addresses the question of whether machine understanding requires consciousness. Some researchers in the field of machine understanding have argued that it is not necessary for computers to be conscious as long as they can match or exceed human performance in certain tasks. But despite the remarkable recent success of machine learning systems in areas such as natural language processing and image classification, important questions remain about their limited performance and about whether their cognitive abilities entail genuine understanding or are the product of spurious correlations. Here I draw a distinction between natural, artificial, and machine understanding. I analyse some concrete examples of natural understanding and show that although it shares properties with the artificial understanding implemented in current machine learning systems it also has some essential differences, the main one being that natural understanding in humans entails consciousness. Moreover, evidence from psychology and neurobiology suggests that it is this capacity for consciousness that, in part at least, explains the superior performance of humans in some cognitive tasks and may also account for the authenticity of semantic processing that seems to be the hallmark of natural understanding. I propose a hypothesis that might help to explain why consciousness is important to understanding. In closing, I suggest that progress toward implementing human-like understanding in machines—machine understanding—may benefit from a naturalistic approach in which natural processes are modelled as closely as possible in mechanical substrates.

  • These are interesting times to be alive as humans, Homo sapiens, the most versatile and curious species of the pale blue planet "Earth" that we all call home in the large, ever-expanding sea of the cosmos; like a flake of chili on a pizza, we are timid in our behavior and forever chewing more than we can actually swallow. Something is about to happen; in truth it has already started, and this time it will change everything. The trait that defines humans against all other species, and in which we all take pride, is intelligence and consciousness. But the tide is shifting, and a new traveller is coming to our shores, one that will change our alphabet from "A--Z" to "A--I". Welcome to the future of AI and robotics, where questions like "How far is too far?" will be answered; for now, the answer of this abstract is artificial intelligence and cognitive robotics. AI has applications in almost every field today, from medicine and art to space architecture and cognitive humanoid robots, with the most visible yet deceptive example being the mobile phone. The story is long, and all we can hope is that it will be a happy one. AI is entering our lives much faster than many eminent individuals predicted: from healing through AI to digital interaction with an AI avatar of yourself, from cognitive robotics to cognitive embodiment, none of this has been done before in the history of humankind. We stand at the dawn of a new age, "The Age of AI", in which the most important tool for survival will be the cooperation of human intelligence and digital consciousness. This paper is a nutshell biography of artificial intelligence: its fundamentals, a concise history, its applications, and the road ahead for this fascinating term we all know as artificial intelligence.

  • How do we make sense of the countless pieces of information flowing to us from the environment? This question, sometimes called the Problem of Representation, is one of the most significant problems in cognitive science. Some pioneering and important work in the attempt to address the problem of representation was produced with the help of Kant’s philosophy. In particular, the suggestion was that, by analogy with Kant’s distinction between sensibility and the understanding, we can distinguish between high- and low-level perception, and then focus on the step from high-level perception to abstract cognitive processes of sense-making. This was possible through a simplification of the input provided by low-level perception (to be reduced, for instance, to a string of letters), which the computer programme was supposed to ‘understand’. Most recently, a closer look at Kant’s model of the mind led to a breakthrough in the attempt to build programmes for such verbal reasoning tasks: these kinds of software or ‘Kantian machines’ seemed able to achieve human-level performance for verbal reasoning tasks. Yet, the claim has sometimes been stronger, namely, that some such programmes not only compete with human cognitive agents, but themselves represent cognitive agents. The focus of my paper is on this claim; I argue that it is unwarranted, but that its critical investigation may lead to further avenues for how to pursue the project of creating artificial intelligence.

  • Hofstadter [1979, 2007] offered a novel Gödelian proposal which purported to reconcile the apparently contradictory theses that (1) we can talk, in a non-trivial way, of mental causation being a real phenomenon and that (2) mental activity is ultimately grounded in low-level rule-governed neural processes. In this paper, we critically investigate Hofstadter’s analogical appeals to Gödel’s [1931] First Incompleteness Theorem, whose “diagonal” proof supposedly contains the key ideas required for understanding both consciousness and mental causation. We maintain that bringing sophisticated results from Mathematical Logic into play cannot furnish insights which would otherwise be unavailable. Lastly, we conclude that there are simply too many weighty details left unfilled in Hofstadter’s proposal. These really need to be fleshed out before we can even hope to say that our understanding of classical mind-body problems has been advanced through metamathematical parallels with Gödel’s work.

  • This paper has as its research problem the following question: what is it like to be an artificial intelligence? It aims to critically analyze the epistemological and semantic aspects developed by Thomas Nagel in “What Is It Like to Be a Bat?” and “The View from Nowhere”, demonstrating the relationship between physicalism and subjectivity and its application to artificially intelligent beings. We chose to approach these two works because of the author's importance in analytical philosophy and the approach to consciousness. The analysis shows that the defense of artificial intelligence as a subject of law is intrinsically based on physicalism. However, in refuting it, Nagel does not offer an alternative outside the scope of dualism. Thus, the Procedural Theory of the Subject of Law is developed with stages of emancipation of the being against the law. As a result, it is verified that the reductive physicalist vision is insufficient to substantiate the condition of the subject of law of an artificial intelligence as a legal and political being in the social order. However, if the three stages of its formation (emancipation, interspecies recognition, and personification) are observed, the possibility of achieving the condition under analysis is assumed. It is concluded that it is unverifiable to know what it is like to be an artificial intelligence. In the current scientific stage, an artificially intelligent being cannot (yet) be considered a subject of law, under penalty of characterization of instrumentalism. The methodology of integrated, analytical, deductive, and bibliographic research is used to obtain these results and conclusions.

  • The “Slicing Problem” is a thought experiment that raises questions for substrate-neutral computational theories of consciousness, including those, like Integrated Information Theory, that specify a certain causal structure for the computation. The thought experiment uses water-based logic gates to construct a computer in a way that permits cleanly slicing each gate and connection in half, creating two identical computers each instantiating the same computation. The slicing can be reversed and repeated via an on/off switch, without changing the amount of matter in the system. The question is what different computational theories of consciousness hold happens to the number and nature of individual conscious units as this switch is toggled. Under a token interpretation, there are now two discrete conscious entities; under a type interpretation, there may remain only one. Both interpretations lead to different implications depending on the adopted theoretical stance. Any route taken either allows mechanisms for “consciousness-multiplying exploits” or requires ambiguous boundaries between conscious entities, raising philosophical and ethical questions for theorists to consider. We discuss resolutions under different theories of consciousness for those unwilling to accept consciousness-multiplying exploits. In particular, we specify three features that may help promising physicalist theories to navigate such thought experiments.

  • We’re experiencing a time when digital technologies and advances in artificial intelligence, robotics, and big data are redefining what it means to be human. How do these advancements affect contemporary media and music? This collection traces how media, with a focus on sound and image, engages with these new technologies. It bridges the gap between science and the humanities by pairing humanists’ close readings of contemporary media with scientists’ discussions of the science and math that inform them. This text includes contributions by established and emerging scholars performing across-the-aisle research on new technologies, exploring topics such as facial and gait recognition; EEG and audiovisual materials; surveillance; and sound and images in relation to questions of sexual identity, race, ethnicity, disability, and class and includes examples from a range of films and TV shows including Blade Runner, Black Mirror, Mr. Robot, Morgan, Ex Machina, and Westworld. Through a variety of critical, theoretical, proprioceptive, and speculative lenses, the collection facilitates interdisciplinary thinking and collaboration and provides readers with ways of responding to these new technologies.

  • Recent advances in artificial intelligence (AI) have achieved human-scale speed and accuracy for classification tasks. Current systems do not need to be conscious to recognize patterns and classify them. However, for AI to advance to the next level, it needs to develop capabilities such as metathinking, creativity, and empathy. We contend that such a paradigm shift is possible through a fundamental change in the state of artificial intelligence toward consciousness, similar to what took place for humans through the process of natural selection and evolution. To that end, we propose that consciousness in AI is an emergent phenomenon that primordially appears when two machines cocreate their own language through which they can recall and communicate their internal state of time-varying symbol manipulation. Because, in our view, consciousness arises from the communication of inner states, it leads to empathy. We then provide a link between the empathic quality of machines and better service outcomes associated with empathic human agents that can also lead to accountability in AI services.

  • It is noted that there are many different definitions of and views about qualia, and this makes qualia into a vague concept without much theoretical and constructive value. Here, qualia are redefined in a more general way. It is argued that the redefined qualia will be essential to the mind–body problem, the problem of consciousness and also to the symbol grounding problem, which is inherent in physical symbol systems. Then, it is argued that the redefined qualia are necessary for Artificial Intelligence systems for the operation with meanings. Finally, it is proposed that robots with qualia may be conscious.

  • An area related to artificial intelligence and computational robotics is artificial consciousness, also known as computer consciousness or virtual consciousness. The theory of artificial consciousness aims to establish what would have to be synthesized in an engineered artifact for consciousness to be found there.

  • Many theories have been developed and artificial systems implemented in the area of machine consciousness, yet none has achieved it. As a possible approach, we are interested in implementing a system that integrates different theories. Along this way, this paper proposes a model based on global workspace theory and an attention mechanism, providing a fundamental framework for our future work. To examine this model, two experiments are conducted. The first demonstrates the agent’s ability to shift attention over multiple stimuli, which accounts for the dynamics of conscious content. The second simulates attentional blink and lag-1 sparing, two well-studied effects in the psychology and neuroscience of attention and consciousness, and aims to demonstrate the agent’s compatibility with human brains. In summary, the main contributions of this paper are: (1) adaptation of the global workspace framework with separated workspace nodes, reducing unnecessary computation while retaining the potential of global availability; (2) embedding an attention mechanism into the global workspace framework as the competition mechanism for conscious access; (3) a synchronization mechanism in the global workspace that supports the lag-1 sparing effect while retaining the attentional blink effect.

  • In popular media, there is often a connection drawn between the advent of awareness in artificial agents and those same agents simultaneously achieving human or superhuman level intelligence. In this work, we explore the validity and potential application of this seemingly intuitive link between consciousness and intelligence. We do so by examining the cognitive abilities associated with three contemporary theories of conscious function: Global Workspace Theory (GWT), Information Generation Theory (IGT), and Attention Schema Theory (AST). We find that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans. With this insight, we turn to the field of Artificial Intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. Having identified this trend, we use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories may be combined into a single unified and implementable model. Given that it is made possible by cognitive abilities underlying each of the three functional theories, artificial agents capable of mental time travel would not only possess greater general intelligence than current approaches, but also be more consistent with our current understanding of the functional role of consciousness in humans, thus making it a promising near-term goal for AI research.

Last update from database: 5/16/26, 1:00 AM (UTC)