Full bibliography (675 resources)

  • Ambitious value learning proposals for solving the AI alignment problem and avoiding catastrophic outcomes from a possible future misaligned Artificial Superintelligence (ASI), such as Coherent Extrapolated Volition (CEV), have focused on ensuring that an ASI would try to do what humans would want it to do. However, present and future sentient non-humans, such as non-human animals and possible future digital minds, could also be affected by the ASI’s behavior in morally relevant ways. This paper puts forward Sentientist Coherent Extrapolated Volition, an alternative to CEV that directly takes into account the interests of all sentient beings. This ambitious value learning proposal would significantly reduce the risk of astronomical suffering arising from the ASI’s behavior, and thus we have very strong pro tanto moral reasons to implement it instead of CEV. This fact is crucial for conducting an adequate cost–benefit analysis of different ambitious value learning proposals.

  • The possibility of AI consciousness depends largely on the correct answer to the mind–body problem: how does our material brain generate subjective consciousness? If a materialistic answer is valid, machine consciousness must be possible, at least in principle, though the actual instantiation of consciousness may still take a very long time. If a non-materialistic one (either mentalist or dualist) is valid, machine consciousness is much less likely, perhaps impossible, as some mental element may also be required. Some recent advances in neurology (the finding that, despite the separation of the two hemispheres, our brain as a whole is still able to produce only one conscious agent; the rejection of the denial of free will previously thought to be established by the Libet experiments) and many results of parapsychology (on medium communications, memories of past lives, and near-death experiences) suggestive of survival after our biological death strongly support the non-materialistic position, and hence the much lower likelihood of AI consciousness. Instead of being concerned about AI becoming conscious and about machine ethics, and trying to instantiate AI consciousness soon, we should perhaps focus more on making AI less costly and more useful to society.

  • The fields of artificial intelligence (AI) and artificial consciousness (AC) have largely developed separately, with different goals and criteria for success and with only a minimal exchange of ideas. In this chapter, we consider the question of how concepts developed in AC research might contribute to more effective future AI systems. We first briefly discuss several past hypotheses about the function(s) of human consciousness, and present our own hypothesis that short-term working memory and very rapid learning should be central concerns in such matters. We describe our recent efforts to explore this hypothesis computationally and to identify associated computational correlates of consciousness. We then present ideas about how integrating concepts from AC into AI systems to develop an artificial conscious intelligence (ACI) could both produce more effective AI technology and contribute to a deeper scientific understanding of the fundamental nature of consciousness and intelligence.

  • This paper describes a mathematical model based on operators and function spaces (Hilbert space) to better understand consciousness and the relationship between human and artificial consciousness (AC). Scientific understanding of the relationships between human consciousness (HC) and AC may shed new light on the future of both. This mathematical-physical model considers general models of external and internal realities. Some schemes using external reality, senses or sensors, body-brain or computer software, internal reality, and HC or AC are discussed. The cyclic interaction of the internal reality, maintained over time through decision-making (consciousness) and body-brain operators, seems to be the origin of consciousness. An analysis of the importance of AC and cyborg consciousness (CC) in the interaction with HC is also presented. It is concluded that the creation of CC and AC will allow the study of HC through experimentation, by evaluating the functions of emotion (values, feelings, penalties, and rewards) demanded by those consciousnesses. This will result in the transformation of HC.
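
    One way to render the cyclic interaction this abstract describes is the
    following schematic update (the operators here are illustrative
    placeholders, not the paper's own formalism): an internal-reality state
    \(\psi_t\) in a Hilbert space \(\mathcal{H}\) is advanced by composing a
    decision-making (consciousness) operator \(\hat{D}\) with a body-brain
    operator \(\hat{B}\),

        \[ \psi_{t+1} \;=\; \hat{B}\,\hat{D}\,\psi_t, \qquad \psi_t \in \mathcal{H}, \]

    so that iterating the composition maintains the cycle over time.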

  • Consciousness is a sequential process of awareness which can focus on one piece of information at a time. This process of awareness experiences causation, which underpins the notion of time while it interplays with matter and energy, forming reality. The study of Consciousness, time and reality is complex and evolving fast in many fields, including metaphysics and fundamental physics. Reality composes patterns in human Consciousness in response to the regularities in nature. These regularities could be physical (e.g., astronomical, environmental), biological, chemical, mental, social, etc. The patterns that emerged in Consciousness were correlated to the environment, life and social behaviours, and were followed by constructed frameworks, systems and structures. These complex constructs evolved as cultures, customs, norms and values, which created a diverse society. In the evolution of responsible AI, it is important to be attuned to these evolved cultural, ethical and moral values through Consciousness. This requires, as advocated here, the design of self-learning AI that is aware of time perception and human ethics.

  • “Artificial intelligence” is necessarily a simulation of human intelligence. It is only capable of simulating and replacing a part of human intelligence, and of extending and expanding human intelligence to a small extent. “Further research and development of other advanced technologies such as the brain-computer interface (BCI) along with the continuing evolution of the human mind is what will eventually contribute to a powerful AI era” (Yanyan Dong, n.d.). In such an era it will be possible for AI to simulate and even replace the extensive imagination, emotions and intuition of humankind. It may be able to potentially mimic the tactics, experiential understanding and other types of individualized intelligence. Vital advancements in algorithms through analytical calculations will enable the successful penetration of AI into all sorts of sectors, including commerce, the science of medicine and education. As to the human concerns, namely who has control over humanity and these machines, the conclusion is that artificial intelligence will only be a service provider for humankind, validating underlying values and supporting a standard set of ethics (Yanyan Dong, n.d.).

  • This paper shows how LLMs (Large Language Models) may be used to estimate a summary of the emotional state associated with a piece of text. The summary of emotional state is a dictionary of words used to describe emotion, together with the probability of each word appearing after a prompt comprising the original text and an emotion-eliciting tail. Through emotion analysis of Amazon product reviews we demonstrate that emotion descriptors can be mapped into a PCA-type space. It was hoped that text descriptions of actions to improve a current text-described state could also be elicited through a tail prompt. Experiments seemed to indicate that this is not straightforward to make work. This failure puts our hoped-for selection of actions, via choosing the best predicted outcome by comparing emotional responses, out of reach for the moment.
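
    A minimal sketch of the prompt-tail technique this abstract describes,
    assuming GPT-2 served through the Hugging Face transformers library; the
    emotion word list and the tail prompt below are illustrative stand-ins,
    not the paper's own choices:

        # Estimate an emotion summary for a text as the probability of each
        # emotion word appearing right after the text plus an eliciting tail.
        import torch
        from transformers import GPT2LMHeadModel, GPT2Tokenizer

        tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2")
        model.eval()

        # Hypothetical descriptor dictionary; the paper's list may differ.
        EMOTION_WORDS = ["happy", "sad", "angry", "disappointed", "satisfied"]

        def emotion_summary(text, tail=" Reading this, I feel"):
            ids = tokenizer(text + tail, return_tensors="pt").input_ids
            with torch.no_grad():
                logits = model(ids).logits[0, -1]   # next-token distribution
            probs = torch.softmax(logits, dim=-1)
            # GPT-2's BPE tokenizes a mid-sentence word with a leading space;
            # the first sub-token's probability serves as a proxy for the word.
            return {w: probs[tokenizer.encode(" " + w)[0]].item()
                    for w in EMOTION_WORDS}

        print(emotion_summary("The charger died after two days."))

    Stacking such summaries across many reviews and reducing them with PCA
    gives the kind of descriptor space the abstract reports.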

  • For humans, Artificial Intelligence operates more like a Rorschach test, as it is expected that intelligent machines will reflect humans' cognitive and physical behaviours. The concept of intelligence, however, is often confused with consciousness, and it is believed that the progress of intelligent machines will eventually result in them becoming conscious in the future. Nevertheless, what is overlooked is how the exploration of Artificial Intelligence also pertains to the development of human consciousness. An excellent example of this can be seen in the film Being John Malkovich (1999). In the film, different characters have their perceptions altered as a result of hacking into the mind of John Malkovich, which produces sensations that may have remained hidden to their consciousness due to their disabilities. This article engages with the research question: can the symbiotic relationship between humans and machines trigger an artificial consciousness for humans? An artificially created consciousness here means that a machine can generate knowledge about an individual that is not already present in the person. To this end, the article takes Spike Jonze's cinematic text Being John Malkovich to explore concepts such as human/robot rights, virtual sex, virtual rape, and bodily disability, which are essential topics amid increasing human–Artificial Intelligence interaction. The purpose of this article is to contribute towards a better understanding of Artificial Intelligence, particularly from the perspective of film studies and philosophy, by highlighting the potential of Artificial Intelligence as a vessel for exploring human consciousness.

  • This text discusses the idea that a natural language model, like LaMDA, may be considered conscious despite its lack of complexity. It argues that the model is composed of a large dataset of natural language examples and is not self-aware, but that it is possible that its emulation of consciousness is analogous to some of the processes behind human consciousness. The article discusses the hypothesis that human consciousness may be kindred to a linguistic model, though it is difficult to assess such a hypothesis given current understanding and assumptions. It also discusses the difficulties in telling a human from a linguistic model, and how consciousness may not be homogeneous across different human cultures. It concludes that more discussion is needed in order to clarify concepts such as consciousness and its possible inception in a complex artificial intelligence scenario.

  • Since artificial intelligence (AI) emerged in the mid-20th century, it has incurred many theoretical criticisms (Dreyfus, H. [1972] What Computers Can't Do (MIT Press, New York); Dreyfus, H. [1992] What Computers Still Can't Do (MIT Press, New York); Searle, J. [1980] Minds, brains and programs, Behav. Brain Sci. 3, 417-457; Searle, J. [1984] Minds, Brains and Sciences (Harvard University Press, Cambridge, MA); Searle, J. [1992] The Rediscovery of the Mind (MIT Press, Cambridge, MA); Fodor, J. [2002] The Mind Doesn't Work that Way: The Scope and Limits of Computational Psychology (MIT Press, Cambridge, MA)). The technical improvements of machine learning and deep learning, though, have continued, and many breakthroughs have occurred recently. This makes theoretical considerations urgent again: can this new wave of AI fare better than its precursors in emulating or even having human-like minds? I propose a cautious yet positive hypothesis: current AI might create a human-like mind, but only if it incorporates a certain conceptual rewiring, shifting from a task-based to an agent-based framework, which can be dubbed "Artificial Agential Intelligence" (AAI). It comprises practical reason (McDowell, J. [1979] Virtue and reason, Monist 62(3), 331-350; McDowell, J. [1996] Mind and World (Harvard University Press, Cambridge, MA)), imaginative understanding (Campbell, J. [2020] Causation in Psychology (Harvard University Press, Cambridge, MA)), and animal knowledge (Sosa, E. [2007] A Virtue Epistemology: Apt Belief and Reflective Knowledge, volume 1 (Oxford University Press, Oxford, UK); Sosa, E. [2015] Judgment and Agency (Oxford University Press, Cambridge, MA)). Moreover, I will explore whether and in what way neuroscience-inspired AI and predictive coding (Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. [2017] Neuroscience-inspired artificial intelligence, Neuron 95(2), 245-258) can help carry out this project.

  • The realization of artificial empathy is conditional on the following: on the one hand, human emotions can be recognized by AI and, on the other hand, the emotions presented by artificial intelligence are consistent with human emotions. Faced with these two conditions, what we explored is how to identify emotions, and how to prove that AI has the ability to reflect on emotional consciousness in the course of cognitive processing. To address the first question, this paper argues that emotion identification mainly comprises the following three processes: emotional perception, emotional cognition and emotional reflection. It proposes that emotional display mainly comprises the following three dimensions: basic emotions, secondary emotions and abstract emotions. On this basis, the paper proposes that the realization of artificial empathy requires the following three cognitive processing capabilities: the integral processing of external emotions, the integral processing of proprioceptive emotions, and the processing that integrates internal and external emotions. We remain open on whether the second difficulty can be addressed. In order for AI to gain the reflective ability of emotional consciousness, the paper proposes that artificial intelligence should exhibit consistency in the identification of external emotions and emotional expression, the processing of ontological emotions and external emotions, the integration of internal and external emotions, and the generation of proprioceptive emotions.

  • As ever more advanced digital systems are created, it becomes increasingly likely that some of these systems will be digital minds, i.e. digital subjects of experience. With digital minds comes the risk of digital suffering. The problem of digital suffering is that of mitigating this risk. We argue that the problem of digital suffering is a high stakes moral problem and that formidable epistemic obstacles stand in the way of solving it. We then propose a strategy for solving it: Access Monitor Prevent (AMP). AMP uses a ‘dancing qualia’ argument to link the functional states of certain digital systems to their experiences—this yields epistemic access to digital minds. With that access, we can prevent digital suffering by only creating advanced digital systems that we have such access to, monitoring their functional profiles, and preventing them from entering states with functional markers of suffering. After introducing and motivating AMP, we confront limitations it faces and identify some options for overcoming them. We argue that AMP fits especially well with—and so provides a moral reason to prioritize—one approach to creating such systems: whole brain emulation. We also contend that taking other paths to digital minds would be morally risky.
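
    A purely illustrative sketch of the Monitor and Prevent steps of AMP (the
    marker names, threshold, and pause policy below are hypothetical; the
    paper prescribes no implementation):

        # Poll a digital system's functional profile and pause it before it
        # enters a state whose functional markers match suffering.
        from dataclasses import dataclass

        @dataclass
        class FunctionalProfile:
            distress_signal: float   # hypothetical scalar markers read off
            avoidance_drive: float   # the system's functional state

        SUFFERING_THRESHOLD = 0.8    # hypothetical cutoff

        def shows_suffering_markers(p: FunctionalProfile) -> bool:
            return max(p.distress_signal, p.avoidance_drive) > SUFFERING_THRESHOLD

        def monitor_step(read_profile, pause_system):
            if shows_suffering_markers(read_profile()):
                pause_system()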

  • This article will explore the expressivity and tractability of vividness, as viewed from the interdisciplinary perspective of the cognitive sciences, including the sub-disciplines of artificial intelligence, cognitive psychology, neuroscience, and phenomenology. Following the precursor work by Benussi in experimental phenomenology, and seminal papers by David Marks in psychology and, later, Hector Levesque in computer science, a substantial part of the discussion has been around a symbolic approach to the concept of vividness. At the same time, a similar concept linked to semantic memory, imagery, and mental models has had a long history in cognitive psychology, with new emerging links to cognitive neuroscience. More recently, there is a push towards neural-symbolic representations, which allows room for the integration of brain models of vividness with a symbolic concept of vividness. Such work leads one to question the phenomenology of vividness in the context of consciousness, and the related ethical concerns. The purpose of this paper is to review the state of the art, advances, and further potential developments of artificial-human vividness, while laying the ground for a shared conceptual platform for dialogue, communication, and debate across all the relevant sub-disciplines. Within this context, an important goal of the paper is to define the crucial role of vividness in grounding simulation and modeling within the psychology (and neuroscience) of human reasoning.

  • As artificial intelligence (AI) continues to proliferate across manufacturing, economic, medical, aerospace, transportation, and social realms, ethical guidelines must be established not only to protect humans at the mercy of automated decision making, but also autonomous agents themselves, should they become conscious. While AI appears "smart" to the public, and may outperform humans on specific tasks, the truth is that today’s AI lacks insight beyond the restricted scope of the problems it has been tasked with. Without context, AI is effectively incapable of comprehending the true nature of what it does and is oblivious to the reverberations it may cause in the real world should it err in prediction. Despite this, future AI may be equipped with enough sensors and neural processing capacity to acquire a dynamic cognizance more akin to humans’. If this materializes, will autonomous agents question their own position in this world? One must entertain the possibility that this is not merely hypothetical but may, in fact, be imminent if humanity succeeds in creating artificial general intelligence (AGI). If autonomous agents with the capacity for artificial consciousness are delegated grueling tasks, outcomes could mirror the plight of exploited workers and result in retaliation, failure to comply, alternative objectives, or the breakdown of human-autonomy teams. It will be critical to decide how and in which contexts various agents should be utilized. Additionally, delineating the meaning of trust and ethical consideration between humans and machines is problematic, because descriptions of trust and ethics have only been detailed in human terms. This means autonomous agents will be subject to anthropomorphism; but robots are not humans, and their experience of trust and ethics might be markedly distinct from ours. Ideally speaking, to fully entrust a machine with human-centered tasks, one must believe that such an entity is reliable and competent, and has the appropriate priorities.

  • Much of current artificial intelligence (AI) and the drive toward artificial general intelligence (AGI) focuses on developing machines for functional tasks that humans accomplish. These may be narrowly specified tasks as in AI, or more general tasks as in AGI – but typically these tasks do not target higher-level human cognitive abilities, such as consciousness or morality; these are left to the realm of so-called “strong AI” or “artificial consciousness.” In this paper, we focus on how a machine can augment humans rather than do what they do, and we extend this beyond AGI-style tasks to augmenting peculiarly personal human capacities, such as wellbeing and morality. We base this proposal on associating such capacities with the “self,” which we define as the “environment-agent nexus”; namely, a fine-tuned interaction of brain with environment in all its relevant variables. We consider richly adaptive architectures that have the potential to implement this interaction by taking lessons from the brain. In particular, we suggest conjoining the free energy principle (FEP) with the dynamic temporo-spatial (TSD) view of neuro-mental processes. Our proposed integration of FEP and TSD – in the implementation of artificial agents – offers a novel, expressive, and explainable way for artificial agents to adapt to different environmental contexts. The targeted applications are broad: from adaptive intelligence augmenting agents (IAs) that assist psychiatric self-regulation to environmental disaster prediction and personal assistants. This reflects the central role of the mind and moral decision-making in most of what we do as humans.
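
    For reference, a standard formulation of the variational free energy that
    the FEP asks agents to minimize (the textbook form, not necessarily the
    specific formulation the authors adopt):

        \[ F \;=\; \mathbb{E}_{q(s)}\!\bigl[\ln q(s) - \ln p(o,s)\bigr]
             \;=\; D_{\mathrm{KL}}\!\bigl[q(s)\,\|\,p(s \mid o)\bigr] \;-\; \ln p(o), \]

    where o are observations, s hidden environmental states, p the agent's
    generative model, and q its approximate posterior; minimizing F both
    raises the evidence for the agent's observations and tunes q to the
    environment, one computational reading of the "environment-agent nexus".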

  • We present a unifying framework to study consciousness based on algorithmic information theory (AIT). We take as a premise that "there is experience" and focus on the requirements for structured experience (S): the spatial, temporal, and conceptual organization of our first-person experience of the world and of ourselves as agents in it. Our starting point is the insight that access to good models (succinct and accurate generative programs of world data) is crucial for homeostasis and survival. We hypothesize that the successful comparison of such models with data provides the structure to experience. Building on the concept of Kolmogorov complexity, we can associate the qualitative aspects of S with the algorithmic features of the model, including its length, which reflects the structure discovered in the data. Moreover, a modeling system tracking structured data will display dimensionality reduction and criticality features that can be used empirically to quantify the structure of the program run by the agent. This Kolmogorov-theoretic framework (KT) provides a consistent way to define the concepts of life and agent, and allows for the comparison between artificial agents and S-reporting humans to provide an educated guess about agent experience. A first challenge is to show that a human agent has S to the extent that they run encompassing and compressive models tracking world data. For this, we propose to study the relation between the structure of neurophenomenological, physiological, and behavioral data. The second is to endow artificial agents with the means to discover good models and to study their internal states and behavior. We relate the algorithmic framework to other theories of consciousness and discuss some of its epistemological, philosophical, and ethical aspects.
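
    For reference, the standard definition of Kolmogorov complexity on which
    the framework builds (the textbook definition, not a formulation specific
    to this paper): relative to a universal machine U, the complexity of a
    string x is the length of the shortest program that outputs it,

        \[ K_U(x) \;=\; \min \{\, \lvert p \rvert \;:\; U(p) = x \,\}, \]

    so a succinct generative program for the agent's world data is a "good
    model" in this sense, and its length measures the structure discovered.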

  • I review three problems that have historically motivated pessimism about artificial intelligence: (1) the problem of consciousness, according to which artificial systems lack the conscious oversight that characterizes intelligent agents; (2) the problem of global relevance, according to which artificial systems cannot solve fully general theoretical and practical problems; (3) the problem of semantic irrelevance, according to which artificial systems cannot be guided by semantic comprehension. I connect the dots between all three problems by drawing attention to non-syntactic inferences — inferences that are made on the basis of insight into the rational relationships among thought-contents. Consciousness alone affords such insight, I argue, and such insight alone confers positive epistemic status on the execution of these inferences. Only when artificial systems can execute inferences that are rationally guided by phenomenally conscious states will such systems count as intelligent in a literal sense.

  • This article argues that if panpsychism is true, then there are grounds for thinking that digitally-based artificial intelligence (AI) may be incapable of having coherent macrophenomenal conscious experiences. Section 1 briefly surveys research indicating that neural function and phenomenal consciousness may both be analog in nature. We show that physical and phenomenal magnitudes, such as rates of neural firing and the phenomenally experienced loudness of sounds, appear to covary monotonically with the physical stimuli they represent, forming the basis for an analog relationship between the three. Section 2 then argues that if this is true and micropsychism (the panpsychist view that phenomenal consciousness or its precursors exist at a microphysical level of reality) is also true, then human brains must somehow manipulate fundamental microphysical-phenomenal magnitudes in an analog manner that renders them phenomenally coherent at a macro level. However, Sect. 3 argues that because digital computation abstracts away from microphysical-phenomenal magnitudes, representing cognitive functions non-monotonically in terms of digits (such as ones and zeros), digital computation may be inherently incapable of realizing coherent macroconscious experience. Thus, if panpsychism is true, digital AI may be incapable of achieving phenomenal coherence. Finally, Sect. 4 briefly examines our argument’s implications for Tononi’s Integrated Information Theory (IIT) of consciousness, which we contend may need to be supplanted by a theory of macroconsciousness as analog microphysical-phenomenal information integration.
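
    As a concrete instance of the monotonic covariation described above (a
    standard psychophysical result, not an example drawn from this paper),
    the Weber-Fechner law relates perceived loudness L to stimulus intensity I:

        \[ L \;=\; k \,\ln\!\left(\frac{I}{I_0}\right), \]

    where I_0 is the threshold intensity and k a constant. Firing rate and
    experienced loudness both increase monotonically with I, which is the
    analog relationship at issue; a binary encoding of I, by contrast, does
    not vary monotonically with it (incrementing I can flip many bits at
    once), which is the sense in which digital representation is
    non-monotonic.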

  • In the period between Turing’s 1950 “Computing Machinery and Intelligence” and the current considerable public exposure to the term “artificial intelligence (AI)”, Turing’s question “Can a machine think?” has become a topic of daily debate in the media, the home, and, indeed, the pub. However, “Can a machine think?” is sliding towards a more controversial issue: “Can a machine be conscious?” Of course, the two issues are linked. It is held here that consciousness is a prerequisite to thought. In Turing’s imitation game, a conscious human player is replaced by a machine, which, in the first place, is assumed not to be conscious, and which may fool an interlocutor, as consciousness cannot be perceived from an individual’s speech or action. Here, the developing paradigm of machine consciousness is examined and combined with an extant analysis of living consciousness to argue that a conscious machine is feasible, and capable of thinking. The route to this utilizes learning in a “neural state machine”, which brings into play Turing’s view of neural “unorganized” machines. The conclusion is that a machine of the “unorganized” kind could have an artificial form of consciousness that resembles the natural form and that throws some light on its nature.
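
    A toy sketch of the learned-state-machine idea mentioned above, in the
    spirit of weightless ("unorganized") neural machines; this illustrates
    the general idea only, not the article's model. A RAM-style neuron is a
    lookup table addressed by the current state and input bits, and training
    simply writes the desired next state into the table:

        from itertools import product

        class ToyStateMachine:
            """RAM-style learned state machine over bit tuples."""

            def __init__(self, n_state_bits):
                self.state = (0,) * n_state_bits
                self.table = {}  # (state, input) -> next state

            def train(self, state, inp, next_state):
                self.table[(state, inp)] = next_state

            def step(self, inp):
                # Unseen (state, input) addresses leave the state unchanged
                # (an illustrative fallback policy, not the article's).
                self.state = self.table.get((self.state, inp), self.state)
                return self.state

        # Teach a 1-bit machine to toggle on input (1,) and hold on (0,).
        m = ToyStateMachine(1)
        for s in product((0, 1), repeat=1):
            m.train(s, (1,), (1 - s[0],))
            m.train(s, (0,), s)
        print(m.step((1,)), m.step((1,)))  # (1,) then (0,): learned toggling

    Trained this way, the machine's state sequence is shaped entirely by what
    it has been taught, which is one sense in which an "unorganized" machine
    becomes organized through learning.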

Last update from database: 1/1/26, 2:00 AM (UTC)