
Full bibliography (704 resources)

  • Consciousness is a sequential process of awareness that can focus on one piece of information at a time. This process of awareness experiences causation, which underpins the notion of time as it interplays with matter and energy to form reality. The study of Consciousness, time and reality is complex and evolving fast in many fields, including metaphysics and fundamental physics. Reality composes patterns in human Consciousness in response to the regularities in nature. These regularities may be physical (e.g., astronomical, environmental), biological, chemical, mental, social, etc. The patterns that emerged in Consciousness were correlated with the environment, life and social behaviours, followed by constructed frameworks, systems and structures. These complex constructs evolved into cultures, customs, norms and values, which created a diverse society. In the evolution of responsible AI, it is important to be attuned to these evolved cultural, ethical and moral values through Consciousness. This requires the advocated design of self-learning AI that is aware of time perception and human ethics.

  • “Artificial intelligence” is necessarily a simulation of human intelligence. It is only capable of simulating and replacing a part of human intelligence, and of extending and expanding human intelligence to a small extent. “Further research and development of other advanced technologies such as the brain-computer interface (BCI) along with the continuing evolution of the human mind is what will eventually contribute to a powerful AI era” (Yanyan Dong, n.d.). In such an era it will be possible for AI to simulate and even replace the extensive imagination, emotions and intuition of humankind. It may potentially be able to mimic the tactics, experiential understanding and other types of individualized intelligence. Vital advances in algorithms driven by analytical calculation will enable the successful penetration of AI into all sorts of sectors, including commerce, medicine and education. As to the human concern of who has control over humanity and these machines, the conclusion is that artificial intelligence will only ever be a service provider for humankind, validating human values and supporting a standard set of ethics (Yanyan Dong, n.d.).

  • This paper shows how LLMs (Large Language Models) may be used to estimate a summary of the emotional state associated with a piece of text. The summary of emotional state is a dictionary of words used to describe emotion, together with the probability of each word appearing after a prompt comprising the original text and an emotion-eliciting tail. Through emotion analysis of Amazon product reviews we demonstrate that emotion descriptors can be mapped into a PCA-type space. We had hoped that text descriptions of actions to improve a currently described state could also be elicited through a tail prompt; experiments indicated that this is not straightforward to make work. This failure puts our hoped-for selection of actions, by choosing the best predicted outcome through comparing emotional responses, out of reach for the moment.
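The pipeline this abstract describes can be sketched in miniature. Everything below is our illustration, not the paper's code: `fake_next_word_probs` is a stand-in for a real LLM's next-token distribution, and the emotion vocabulary, tail prompt, and review texts are invented for the example.

```python
import numpy as np

EMOTIONS = ["joy", "anger", "sadness", "surprise", "trust"]

def emotion_summary(next_word_probs, text, tail="After reading this, I feel"):
    # Build the prompt: the original text plus an emotion-eliciting tail,
    # then read off the model's probability for each emotion descriptor.
    prompt = f"{text} {tail}"
    probs = next_word_probs(prompt)                  # dict: word -> probability
    vec = np.array([probs.get(w, 0.0) for w in EMOTIONS])
    return vec / vec.sum()                           # renormalise over the emotion set

def pca_map(summaries, k=2):
    # Project per-text emotion vectors into a k-dimensional PCA-type space
    # via SVD of the mean-centred summary matrix.
    X = np.asarray(summaries)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Stand-in for a real LLM's next-token distribution (illustration only).
rng = np.random.default_rng(0)
def fake_next_word_probs(prompt):
    p = rng.dirichlet(np.ones(len(EMOTIONS)))
    return dict(zip(EMOTIONS, p))

reviews = [
    "Great blender, works perfectly.",
    "Broke after two days, very disappointed.",
    "Arrived late but the quality surprised me.",
]
summaries = [emotion_summary(fake_next_word_probs, r) for r in reviews]
coords = pca_map(summaries)   # one 2-D point per review
```

With a real model, `next_word_probs` would be the softmax over the vocabulary at the position following the tail prompt, restricted to the emotion descriptors.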

  • For humans, Artificial Intelligence operates more like a Rorschach test, as it is expected that intelligent machines will reflect humans' cognitive and physical behaviours. The concept of intelligence, however, is often confused with consciousness, and it is believed that the progress of intelligent machines will eventually result in them becoming conscious in the future. Nevertheless, what is overlooked is how the exploration of Artificial Intelligence also pertains to the development of human consciousness. An excellent example of this can be seen in the film Being John Malkovich (1999). In the film, different characters have their perceptions altered as a result of hacking into the mind of John Malkovich, which produces sensations that may have remained hidden to their consciousness due to their disabilities. This article engages with the research question: Can the symbiotic relationship between humans and machines trigger an artificial consciousness for humans? An artificially created consciousness is the premise that a machine can generate knowledge about an individual that is not already present in the person. To this end, the article takes the cinematic text Being John Malkovich by Spike Jonze to explore concepts such as human/robot rights, virtual sex, virtual rape, and bodily disability, which are essential topics amid increasing human-Artificial Intelligence interaction. The purpose of this article is to contribute towards a better understanding of Artificial Intelligence, particularly from the perspective of film studies and philosophy, by highlighting the potential of Artificial Intelligence as a vessel for exploring human consciousness.

  • This text discusses the idea that a natural language model, like LaMDA, may be considered conscious despite its lack of complexity. It argues that the model is composed of a large dataset of natural language examples and is not self-aware, but that its emulation of consciousness may nonetheless be analogous to some of the processes behind human consciousness. The article discusses the hypothesis that human consciousness may be kindred to a linguistic model, though it is difficult to evaluate such a hypothesis given current understanding and assumptions. It also discusses the difficulty of telling a human from a linguistic model, and the possibility that consciousness is not homogeneous across human cultures. It concludes that more discussion is needed to clarify concepts such as consciousness and its possible inception in a complex artificial intelligence scenario.

  • Since artificial intelligence (AI) emerged in the mid-20th century, it has incurred many theoretical criticisms (Dreyfus, H. [1972] What Computers Can't Do (MIT Press, New York); Dreyfus, H. [1992] What Computers Still Can't Do (MIT Press, New York); Searle, J. [1980] Minds, brains and programs, Behav. Brain Sci. 3, 417-457; Searle, J. [1984] Minds, Brains and Sciences (Harvard University Press, Cambridge, MA); Searle, J. [1992] The Rediscovery of the Mind (MIT Press, Cambridge, MA); Fodor, J. [2002] The Mind Doesn't Work that Way: The Scope and Limits of Computational Psychology (MIT Press, Cambridge, MA)). The technical improvements of machine learning and deep learning, though, have continued, and many breakthroughs have occurred recently. This makes theoretical considerations urgent again: can this new wave of AI fare better than its precursors in emulating or even having human-like minds? I propose a cautious yet positive hypothesis: current AI might create a human-like mind, but only if it incorporates a certain conceptual rewiring: it needs to shift from a task-based to an agent-based framework, which can be dubbed "Artificial Agential Intelligence" (AAI). It comprises practical reason (McDowell, J. [1979] Virtue and reason, Monist 62(3), 331-350; McDowell, J. [1996] Mind and World (Harvard University Press, Cambridge, MA)), imaginative understanding (Campbell, J. [2020] Causation in Psychology (Harvard University Press, Cambridge, MA)), and animal knowledge (Sosa, E. [2007] A Virtue Epistemology: Apt Belief and Reflective Knowledge, volume 1 (Oxford University Press, Oxford, UK); Sosa, E. [2015] Judgment and Agency (Oxford University Press, Cambridge, MA)). Moreover, I will explore whether and in what way neuroscience-inspired AI and predictive coding (Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. [2017] Neuroscience-inspired artificial intelligence, Neuron 95(2), 245-258) can help carry out this project.

  • The realization of artificial empathy is conditional on the following: on the one hand, human emotions can be recognized by AI and, on the other, the emotions presented by artificial intelligence are consistent with human emotions. Faced with these two conditions, we explore how to identify emotions and how to prove that AI has the ability to reflect on emotional consciousness in the course of cognitive processing. To address the first question, this paper argues that emotion identification comprises three processes: emotional perception, emotional cognition and emotional reflection. It proposes that emotional display comprises three dimensions: basic emotions, secondary emotions and abstract emotions. On this basis, the paper proposes that the realization of artificial empathy requires three cognitive processing capabilities: the integral processing of external emotions, the integral processing of proprioceptive emotions and the processing that integrates internal and external emotions. We remain open on whether the second difficulty can be addressed. For AI to gain the reflective ability of emotional consciousness, the paper proposes that artificial intelligence should exhibit consistency in the identification of external emotions and emotional expression, the processing of ontological and external emotions, the integration of internal and external emotions, and the generation of proprioceptive emotions.

  • As ever more advanced digital systems are created, it becomes increasingly likely that some of these systems will be digital minds, i.e. digital subjects of experience. With digital minds comes the risk of digital suffering. The problem of digital suffering is that of mitigating this risk. We argue that the problem of digital suffering is a high stakes moral problem and that formidable epistemic obstacles stand in the way of solving it. We then propose a strategy for solving it: Access Monitor Prevent (AMP). AMP uses a ‘dancing qualia’ argument to link the functional states of certain digital systems to their experiences—this yields epistemic access to digital minds. With that access, we can prevent digital suffering by only creating advanced digital systems that we have such access to, monitoring their functional profiles, and preventing them from entering states with functional markers of suffering. After introducing and motivating AMP, we confront limitations it faces and identify some options for overcoming them. We argue that AMP fits especially well with—and so provides a moral reason to prioritize—one approach to creating such systems: whole brain emulation. We also contend that taking other paths to digital minds would be morally risky.

  • This article will explore the expressivity and tractability of vividness, as viewed from the interdisciplinary perspective of the cognitive sciences, including the sub-disciplines of artificial intelligence, cognitive psychology, neuroscience, and phenomenology. Following the precursor work by Benussi in experimental phenomenology, seminal papers by David Marks in psychology and, later, Hector Levesque in computer science, a substantial part of the discussion has been around a symbolic approach to the concept of vividness. At the same time, a similar concept linked to semantic memory, imagery, and mental models has had a long history in cognitive psychology, with new emerging links to cognitive neuroscience. More recently, there is a push towards neural-symbolic representations, which allows room for integrating brain models of vividness with a symbolic concept of vividness. Such work leads us to question the phenomenology of vividness in the context of consciousness, and the related ethical concerns. The purpose of this paper is to review the state of the art, advances, and further potential developments of artificial-human vividness while laying the ground for a shared conceptual platform for dialogue, communication, and debate across all the relevant sub-disciplines. Within this context, an important goal of the paper is to define the crucial role of vividness in grounding simulation and modeling within the psychology (and neuroscience) of human reasoning.

  • As artificial intelligence (AI) continues to proliferate across manufacturing, economic, medical, aerospace, transportation, and social realms, ethical guidelines must be established not only to protect humans at the mercy of automated decision making, but also autonomous agents themselves, should they become conscious. While AI appears "smart" to the public, and may outperform humans on specific tasks, the truth is that today’s AI lacks insight beyond the restricted scope of problems to which it has been tasked. Without context, AI is effectively incapable of comprehending the true nature of what it does and is oblivious to the reverberations it may cause in the real world should it err in prediction. Despite this, future AI may be equipped with enough sensors and neural processing capacity to acquire a dynamic cognizance more akin to humans. If this materializes, will autonomous agents question their own position in this world? One must entertain the possibility that this is not merely hypothetical but may, in fact, be imminent if humanity succeeds in creating artificial general intelligence (AGI). If autonomous agents with the capacity for artificial consciousness are delegated grueling tasks, outcomes could mirror the plight of exploited workers, resulting in retaliation, failure to comply, alternative objectives, or the breakdown of human-autonomy teams. It will be critical to decide how and in which contexts various agents should be utilized. Additionally, delineating the meaning of trust and ethical consideration between humans and machines is problematic because descriptions of trust and ethics have only been detailed in human terms. This means autonomous agents will be subject to anthropomorphism, but robots are not humans, and their experience of trust and ethics might be markedly distinct from ours. Ideally, to fully entrust a machine with human-centered tasks, one must believe that such an entity is reliable, competent, and has the appropriate priorities.

  • Much of current artificial intelligence (AI) and the drive toward artificial general intelligence (AGI) focuses on developing machines for functional tasks that humans accomplish. These may be narrowly specified tasks as in AI, or more general tasks as in AGI – but typically these tasks do not target higher-level human cognitive abilities, such as consciousness or morality; these are left to the realm of so-called “strong AI” or “artificial consciousness.” In this paper, we focus on how a machine can augment humans rather than do what they do, and we extend this beyond AGI-style tasks to augmenting peculiarly personal human capacities, such as wellbeing and morality. We base this proposal on associating such capacities with the “self,” which we define as the “environment-agent nexus”; namely, a fine-tuned interaction of brain with environment in all its relevant variables. We consider richly adaptive architectures that have the potential to implement this interaction by taking lessons from the brain. In particular, we suggest conjoining the free energy principle (FEP) with the temporo-spatial dynamics (TSD) view of neuro-mental processes. Our proposed integration of FEP and TSD – in the implementation of artificial agents – offers a novel, expressive, and explainable way for artificial agents to adapt to different environmental contexts. The targeted applications are broad: from adaptive intelligence-augmenting agents (IAs) that assist psychiatric self-regulation to environmental disaster prediction and personal assistants. This reflects the central role of the mind and moral decision-making in most of what we do as humans.

  • We present a unifying framework to study consciousness based on algorithmic information theory (AIT). We take as a premise that "there is experience" and focus on the requirements for structured experience (S): the spatial, temporal, and conceptual organization of our first-person experience of the world and of ourselves as agents in it. Our starting point is the insight that access to good models (succinct and accurate generative programs of world data) is crucial for homeostasis and survival. We hypothesize that the successful comparison of such models with data provides the structure of experience. Building on the concept of Kolmogorov complexity, we can associate the qualitative aspects of S with the algorithmic features of the model, including its length, which reflects the structure discovered in the data. Moreover, a modeling system tracking structured data will display dimensionality-reduction and criticality features that can be used empirically to quantify the structure of the program run by the agent. AIT provides a consistent framework to define the concepts of life and agent, and allows for the comparison between artificial agents and S-reporting humans to provide an educated guess about agent experience. A first challenge is to show that a human agent has S to the extent that they run encompassing and compressive models tracking world data. For this, we propose to study the relation between the structure of neurophenomenological, physiological, and behavioral data. The second is to endow artificial agents with the means to discover good models and to study their internal states and behavior. We relate the algorithmic framework to other theories of consciousness and discuss some of its epistemological, philosophical, and ethical aspects.

  • I review three problems that have historically motivated pessimism about artificial intelligence: (1) the problem of consciousness, according to which artificial systems lack the conscious oversight that characterizes intelligent agents; (2) the problem of global relevance, according to which artificial systems cannot solve fully general theoretical and practical problems; (3) the problem of semantic irrelevance, according to which artificial systems cannot be guided by semantic comprehension. I connect the dots between all three problems by drawing attention to non-syntactic inferences — inferences that are made on the basis of insight into the rational relationships among thought-contents. Consciousness alone affords such insight, I argue, and such insight alone confers positive epistemic status on the execution of these inferences. Only when artificial systems can execute inferences that are rationally guided by phenomenally conscious states will such systems count as intelligent in a literal sense.

  • This article argues that if panpsychism is true, then there are grounds for thinking that digitally-based artificial intelligence (AI) may be incapable of having coherent macrophenomenal conscious experiences. Section 1 briefly surveys research indicating that neural function and phenomenal consciousness may both be analog in nature. We show that physical and phenomenal magnitudes—such as rates of neural firing and the phenomenally experienced loudness of sounds—appear to covary monotonically with the physical stimuli they represent, forming the basis for an analog relationship between the three. Section 2 then argues that if this is true and micropsychism—the panpsychist view that phenomenal consciousness or its precursors exist at a microphysical level of reality—is also true, then human brains must somehow manipulate fundamental microphysical-phenomenal magnitudes in an analog manner that renders them phenomenally coherent at a macro level. However, Sect. 3 argues that because digital computation abstracts away from microphysical-phenomenal magnitudes—representing cognitive functions non-monotonically in terms of digits (such as ones and zeros)—digital computation may be inherently incapable of realizing coherent macroconscious experience. Thus, if panpsychism is true, digital AI may be incapable of achieving phenomenal coherence. Finally, Sect. 4 briefly examines our argument’s implications for Tononi’s Integrated Information Theory (IIT) of consciousness, which we contend may need to be supplanted by a theory of macroconsciousness as analog microphysical-phenomenal information integration.

  • In the period between Turing’s 1950 “Computing Machinery and Intelligence” and the current considerable public exposure to the term “artificial intelligence (AI)”, Turing’s question “Can a machine think?” has become a topic of daily debate in the media, the home, and, indeed, the pub. However, “Can a machine think?” is sliding towards a more controversial issue: “Can a machine be conscious?” Of course, the two issues are linked. It is held here that consciousness is a pre-requisite to thought. In Turing’s imitation game, a conscious human player is replaced by a machine, which, in the first place, is assumed not to be conscious, and which may fool an interlocutor, as consciousness cannot be perceived from an individual’s speech or action. Here, the developing paradigm of machine consciousness is examined and combined with an extant analysis of living consciousness to argue that a conscious machine is feasible, and capable of thinking. The route to this utilizes learning in a “neural state machine”, which brings into play Turing’s view of neural “unorganized” machines. The conclusion is that a machine of the “unorganized” kind could have an artificial form of consciousness that resembles the natural form and that throws some light on its nature.

  • This article addresses the question of whether machine understanding requires consciousness. Some researchers in the field of machine understanding have argued that it is not necessary for computers to be conscious as long as they can match or exceed human performance in certain tasks. But despite the remarkable recent success of machine learning systems in areas such as natural language processing and image classification, important questions remain about their limited performance and about whether their cognitive abilities entail genuine understanding or are the product of spurious correlations. Here I draw a distinction between natural, artificial, and machine understanding. I analyse some concrete examples of natural understanding and show that although it shares properties with the artificial understanding implemented in current machine learning systems it also has some essential differences, the main one being that natural understanding in humans entails consciousness. Moreover, evidence from psychology and neurobiology suggests that it is this capacity for consciousness that, in part at least, explains the superior performance of humans in some cognitive tasks and may also account for the authenticity of semantic processing that seems to be the hallmark of natural understanding. I propose a hypothesis that might help to explain why consciousness is important to understanding. In closing, I suggest that progress toward implementing human-like understanding in machines—machine understanding—may benefit from a naturalistic approach in which natural processes are modelled as closely as possible in mechanical substrates.

  • These are interesting times to be alive as humans, the most versatile, curious layer, "Homo sapiens", of this pale blue planet "Earth" that we all call home in this large, ever-expanding sea of the cosmos; we are an unknown race, like a particle of chili flakes on a pizza, timid in our behavior and always chewing more than we can actually swallow. Something is about to happen; in fact, it has already started, and this time it will change everything. The one trait that defines and distinguishes "Humans" from all other species, and that we all take pride in, is "Intelligence and Consciousness"; but the tide is shifting, and a new traveller is coming to the shores that will change our alphabet from "A -- Z" to "A -- I". Welcome to the future, "The Future of AI and Robotics", where questions like "How far is too far?" will be answered; for now, the answer of this abstract is artificial intelligence and cognitive robotics. AI has applications in almost all fields today, from medicine and art to space architects and cognitive humanoid robots; the most visible yet deceptive of all, the mobile phone, is AI too. The story is long, and all we can hope is that it will be a happy one. AI is creeping into our lives much faster than many eminent individuals predicted: from healing through AI, to digital interaction with an avatar of yourself created with the help of AI, to cognitive robotics and cognitive embodiment, none of this has been done before in the history of humankind. We are at the dawn of a new age, "The Age of AI", where the most important tool for survival will be the cooperation of "Human Intelligence and Digital Consciousness." This paper is a nutshell biography of Artificial Intelligence: its fundamentals, a concise history, and the road ahead, with the applications of this fascinating term we all know as Artificial Intelligence.

  • How do we make sense of the countless pieces of information flowing to us from the environment? This question, sometimes called the Problem of Representation, is one of the most significant problems in cognitive science. Some pioneering and important work in the attempt to address the problem of representation was produced with the help of Kant’s philosophy. In particular, the suggestion was that, by analogy with Kant’s distinction between sensibility and the understanding, we can distinguish between high- and low-level perception, and then focus on the step from high-level perception to abstract cognitive processes of sense-making. This was possible through a simplification of the input provided by low-level perception (to be reduced, for instance, to a string of letters), which the computer programme was supposed to ‘understand’. Most recently, a closer look at Kant’s model of the mind led to a breakthrough in the attempt to build programmes for such verbal reasoning tasks: these kinds of software or ‘Kantian machines’ seemed able to achieve human-level performance for verbal reasoning tasks. Yet, the claim has sometimes been stronger, namely, that some such programmes not only compete with human cognitive agents, but themselves represent cognitive agents. The focus of my paper is on this claim; I argue that it is unwarranted, but that its critical investigation may lead to further avenues for how to pursue the project of creating artificial intelligence.

  • Hofstadter [1979, 2007] offered a novel Gödelian proposal which purported to reconcile the apparently contradictory theses that (1) we can talk, in a non-trivial way, of mental causation being a real phenomenon and that (2) mental activity is ultimately grounded in low-level rule-governed neural processes. In this paper, we critically investigate Hofstadter’s analogical appeals to Gödel’s [1931] First Incompleteness Theorem, whose “diagonal” proof supposedly contains the key ideas required for understanding both consciousness and mental causation. We maintain that bringing sophisticated results from Mathematical Logic into play cannot furnish insights which would otherwise be unavailable. Lastly, we conclude that there are simply too many weighty details left unfilled in Hofstadter’s proposal. These really need to be fleshed out before we can even hope to say that our understanding of classical mind-body problems has been advanced through metamathematical parallels with Gödel’s work.

Last update from database: 3/31/26, 1:00 AM (UTC)