
Full bibliography: 558 resources

  • This paper describes a mathematical model based on operators and function spaces (Hilbert space) to better understand consciousness and the relationship between human and artificial consciousness (AC). Scientific understanding of the relationships between human consciousness (HC) and AC may shed new light on the future of both. This mathematical-physical model considers general models of external and internal realities. Some schemes using external reality, senses or sensors, body-brain or computer software, internal reality, and HC or AC are discussed. The cyclic interaction of the internal reality, maintained over time through decision-making (consciousness) and body-brain operators, seems to be the origin of consciousness. An analysis of the importance of AC and cyborg consciousness (CC) in the interaction with HC is also presented. It is concluded that the creation of CC and AC will allow the study of HC through experimentation, by evaluating the functions of emotion (values, feelings, penalties, and rewards) demanded by those consciousnesses. This will result in the transformation of HC.

  • Consciousness is a sequential process of awareness which can focus on one piece of information at a time. This process of awareness experiences causation, which underpins the notion of time while it interplays with matter and energy, forming reality. The study of Consciousness, time and reality is complex and evolving fast in many fields, including metaphysics and fundamental physics. Reality composes patterns in human Consciousness in response to the regularities in nature. These regularities could be physical (e.g., astronomical, environmental), biological, chemical, mental, social, etc. The patterns that emerged in Consciousness were correlated with the environment, life and social behaviours, followed by constructed frameworks, systems and structures. These complex constructs evolved as cultures, customs, norms and values, which created a diverse society. In the evolution of responsible AI, it is important to be attuned to these evolved cultural, ethical and moral values through Consciousness. This calls for the deliberate design of self-learning AI that is aware of time perception and human ethics.

  • “Artificial intelligence” is necessarily a simulation of human intelligence. It is only capable of simulating and replacing a part of human intelligence, and of extending and expanding human intelligence to a small extent. “Further research and development of other advanced technologies such as the brain-computer interface (BCI) along with the continuing evolution of the human mind is what will eventually contribute to a powerful AI era” (Yanyan Dong, n.d.). In such an era it will be possible for AI to simulate and even replace the extensive imagination, emotions and intuition of humankind. It may potentially be able to mimic the tactics, experiential understanding and other types of individualized intelligence. Vital advancements in algorithms and analytical calculations will enable the successful penetration of AI into all sorts of sectors, including commerce, medical science and education. As to the human concerns, namely who has control over humanity and these machines, the conclusion is that artificial intelligence will only be a service provider for humankind, upholding human values and supporting a standard set of ethics (Yanyan Dong, n.d.).

  • This paper shows how LLMs (Large Language Models) may be used to estimate a summary of the emotional state associated with a piece of text. The summary of emotional state is a dictionary of words used to describe emotion, together with the probability of each word appearing after a prompt comprising the original text and an emotion-eliciting tail. Through emotion analysis of Amazon product reviews we demonstrate that emotion descriptors can be mapped into a PCA-type space. It was hoped that text descriptions of actions to improve a current text-described state could also be elicited through a tail prompt. Experiments indicated that this is not straightforward to make work. This failure puts our hoped-for selection of actions, via choosing the best predicted outcome by comparing emotional responses, out of reach for the moment.
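The probability-dictionary construction described in this abstract can be sketched as follows. This is a minimal sketch only: the emotion lexicon, the tail prompt, and the stand-in logits are all hypothetical, since the paper's actual model and word list are not given here.

```python
import math

# Hypothetical emotion lexicon; the paper's actual word list is not given.
EMOTIONS = ["joy", "anger", "sadness", "surprise"]

def emotion_prompt(text: str) -> str:
    # The "emotion-eliciting tail" appended to the original text.
    return text + "\nReading this, I mainly feel "

def emotion_summary(logits: dict) -> dict:
    """Turn next-token logits over the emotion lexicon into a probability
    dictionary (a softmax restricted to the lexicon)."""
    m = max(logits.values())
    exp = {w: math.exp(v - m) for w, v in logits.items()}
    z = sum(exp.values())
    return {w: exp[w] / z for w in EMOTIONS}

# Stand-in for a model's log-probabilities of each emotion word
# following emotion_prompt(review_text).
fake_logits = {"joy": 2.0, "anger": -1.0, "sadness": 0.5, "surprise": 0.0}
summary = emotion_summary(fake_logits)
```

In use, `fake_logits` would be replaced by the LLM's actual next-token scores for each lexicon word, and the resulting summary vectors (one per review) could then be projected with PCA as the abstract describes.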

  • For humans, Artificial Intelligence operates more like a Rorschach test, as it is expected that intelligent machines will reflect humans' cognitive and physical behaviours. The concept of intelligence, however, is often confused with consciousness, and it is believed that the progress of intelligent machines will eventually result in them becoming conscious in the future. Nevertheless, what is overlooked is how the exploration of Artificial Intelligence also pertains to the development of human consciousness. An excellent example of this can be seen in the film Being John Malkovich (1999). In the film, different characters have their perceptions altered as a result of hacking into the mind of John Malkovich, which produces sensations that may have remained hidden to their consciousness due to their disabilities. This article engages with the research question: Can the symbiotic relationship between humans and machines trigger an artificial consciousness for humans? An artificially created consciousness is the premise that a machine can generate knowledge about an individual that is not already present in the person. To this end, the article takes the cinematic text Being John Malkovich by Spike Jonze to explore concepts such as human/robot rights, virtual sex, virtual rape, and bodily disability, which are essential topics amid increasing human-Artificial Intelligence interaction. The purpose of this article is to contribute towards a better understanding of Artificial Intelligence, particularly from the perspective of film studies and philosophy, by highlighting the potential of Artificial Intelligence as a vessel for exploring human consciousness.

  • As ever more advanced digital systems are created, it becomes increasingly likely that some of these systems will be digital minds, i.e. digital subjects of experience. With digital minds comes the risk of digital suffering. The problem of digital suffering is that of mitigating this risk. We argue that the problem of digital suffering is a high stakes moral problem and that formidable epistemic obstacles stand in the way of solving it. We then propose a strategy for solving it: Access Monitor Prevent (AMP). AMP uses a ‘dancing qualia’ argument to link the functional states of certain digital systems to their experiences—this yields epistemic access to digital minds. With that access, we can prevent digital suffering by only creating advanced digital systems that we have such access to, monitoring their functional profiles, and preventing them from entering states with functional markers of suffering. After introducing and motivating AMP, we confront limitations it faces and identify some options for overcoming them. We argue that AMP fits especially well with—and so provides a moral reason to prioritize—one approach to creating such systems: whole brain emulation. We also contend that taking other paths to digital minds would be morally risky.

  • We present a unifying framework to study consciousness based on algorithmic information theory (AIT). We take as a premise that “there is experience” and focus on the requirements for structured experience (S): the spatial, temporal, and conceptual organization of our first-person experience of the world and of ourselves as agents in it. Our starting point is the insight that access to good models (succinct and accurate generative programs of world data) is crucial for homeostasis and survival. We hypothesize that the successful comparison of such models with data provides the structure of experience. Building on the concept of Kolmogorov complexity, we can associate the qualitative aspects of S with the algorithmic features of the model, including its length, which reflects the structure discovered in the data. Moreover, a modeling system tracking structured data will display dimensionality-reduction and criticality features that can be used empirically to quantify the structure of the program run by the agent. This Kolmogorov-theoretic (KT) framework provides a consistent basis for defining the concepts of life and agent, and allows for the comparison between artificial agents and S-reporting humans to provide an educated guess about agent experience. A first challenge is to show that a human agent has S to the extent that they run encompassing and compressive models tracking world data. For this, we propose to study the relation between the structure of neurophenomenological, physiological, and behavioral data. The second is to endow artificial agents with the means to discover good models and to study their internal states and behavior. We relate the algorithmic framework to other theories of consciousness and discuss some of its epistemological, philosophical, and ethical aspects.
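The link this abstract draws between model length and discovered structure is, in practice, often approximated by compression, since Kolmogorov complexity itself is uncomputable. A minimal sketch under that assumption, using zlib-compressed length as a crude upper bound; the SHA-256 hash chain is only a deterministic stand-in for data with no structure a compressor can find:

```python
import hashlib
import zlib

def complexity(data: bytes) -> int:
    # Compressed length: a computable upper bound on Kolmogorov complexity.
    return len(zlib.compress(data, 9))

def pseudo_random(n: int) -> bytes:
    # Deterministic but incompressible-looking byte stream (SHA-256 chain).
    out, block = b"", b"seed"
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

regular = b"ab" * 500            # highly structured: a short program explains it
irregular = pseudo_random(1000)  # no structure for the compressor to exploit

# The structured stream admits a far shorter description.
print(complexity(regular), complexity(irregular))
```

The gap between the two compressed lengths is the kind of signal the framework associates with structure discovered in the data.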

  • These are interesting times to be alive as humans, the most versatile and curious layer, Homo sapiens, of this pale blue planet Earth that we all call home in the vast, ever-expanding sea of the cosmos. We are an unknown race, like a sprinkle of chili flakes on a pizza: timid in our behavior and always chewing more than we can actually swallow. Something is about to happen; in fact, it has already started, and this time it will change everything. The one trait that defines and distinguishes humans from all other species, and in which we all take pride, is intelligence and consciousness. But the tide is shifting, and a new traveller is coming to our shores, one that will change our alphabet from "A-Z" to "A-I". Welcome to the future of AI and robotics, where questions like "How far is too far?" will be answered; for now, the answer of this abstract is artificial intelligence and cognitive robotics. AI has applications in almost all fields today, from medicine and art to space architecture and cognitive humanoid robots; the most visible yet deceptive of them all, the mobile phone, is AI. The story is long, and all we can hope is that it will be a happy one. AI is entering our lives much faster than many eminent individuals predicted: from healing through AI to digital interaction with an AI avatar of yourself, from cognitive robotics to cognitive embodiment, none of this has been done before in the history of mankind. We are at the dawn of a new age, the Age of AI, in which the most important tool for survival will be the cooperation of human intelligence and digital consciousness. This paper is a nutshell biography of artificial intelligence: its fundamentals, a concise history, and the road ahead, with the applications of this fascinating term we all know as Artificial Intelligence.

  • This article provides an analytical framework for how to simulate human-like thought processes within a computer. It describes how attention and memory should be structured, updated, and utilized to search for associative additions to the stream of thought. The focus is on replicating the dynamics of the mammalian working memory system, which features two forms of persistent activity: sustained firing (preserving information on the order of seconds) and synaptic potentiation (preserving information from minutes to hours). The article uses a series of figures to systematically demonstrate how the iterative updating of these working memory stores provides functional organization to behavior, cognition, and awareness. In a machine learning implementation, these two memory stores should be updated continuously and in an iterative fashion. This means each state should preserve a proportion of the coactive representations from the state before it (where each representation is an ensemble of neural network nodes). This makes each state a revised iteration of the preceding state and causes successive configurations to overlap and blend with respect to the information they contain. Thus, the set of concepts in working memory will evolve gradually and incrementally over time. Transitions between states happen as persistent activity spreads activation energy throughout the hierarchical network, searching long-term memory for the most appropriate representation to be added to the global workspace. The result is a chain of associatively linked intermediate states capable of advancing toward a solution or goal. Iterative updating is conceptualized here as an information processing strategy, a model of working memory, a theory of consciousness, and an algorithm for designing and programming artificial intelligence (AI, AGI, and ASI).
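The iterative-updating scheme this abstract describes (each new state retaining a proportion of the previous state's representations while the oldest decay, then adding the long-term-memory item most strongly associated with what remains) can be sketched as follows. The toy associative memory, the retention fraction, and the store capacity are all illustrative assumptions, not values from the article.

```python
def iterate_state(state, ltm, keep=0.75, capacity=4):
    """One update of the working-memory store: retain the most recent
    proportion of coactive representations (the oldest decays), then add
    the long-term-memory item most strongly associated with the remainder."""
    retained = state[-max(1, int(len(state) * keep)):]
    candidates = [item for item in ltm if item not in state]
    # Association score: how many retained representations the item links to.
    best = max(candidates, key=lambda item: len(ltm[item] & set(retained)))
    return (retained + [best])[:capacity]

# Toy associative long-term memory (illustrative only).
ltm = {
    "coffee": {"cup", "morning", "caffeine"},
    "cup": {"coffee", "kitchen"},
    "morning": {"coffee", "sun"},
    "sun": {"morning", "sky"},
    "sky": {"sun", "blue"},
    "blue": {"sky"},
    "kitchen": {"cup"},
    "caffeine": {"coffee"},
}

state = ["coffee", "cup", "morning", "caffeine"]
trajectory = [state]
for _ in range(3):
    state = iterate_state(state, ltm)
    trajectory.append(state)
```

Successive states in `trajectory` share most of their contents, so the set of active concepts drifts gradually rather than jumping, which is the blended, associatively linked stream the article describes.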

  • In this article, it is taken for granted that fully-human artificial intelligence—a term used to denote artificial life that is, in principle, more human than strong AI would be—must possess operational faculties for consciousness and selfhood. After clarifying relevant questions surrounding the interested socio-psychological phenomena, progress in animal and humanoid robotics is summarized. The aforementioned topics within philosophy and the social sciences are reviewed, noting their relevant overlaps with recent developments in cognitive and computer sciences. My working assumption is that the avowed conclusion of human AI cannot currently be written off as impossible and should therefore be critically engaged (the intent is to engage with humanoid robotics’ capabilities and features in relation to the present state of knowledge regarding the psychological phenomena discussed). It is argued that for human AI to fully succeed as a discipline, the discussed psychological notions as we understand and experience them must be further elucidated and adequately accounted for by AI research programs.

  • People have different opinions about which conditions robots would need to fulfil—and for what reasons—to be moral agents. Standardists hold that specific internal states (like rationality, free will or phenomenal consciousness) are necessary in artificial agents, and robots are thus not moral agents since they lack these internal states. Functionalists hold that what matters are certain behaviours and reactions—independent of what the internal states may be—implying that robots can be moral agents as long as the behaviour is adequate. This article defends a standardist view in the sense that the internal states are what matters for determining the moral agency of the robot, but it will be unique in being an internalist theory defending a large degree of robot responsibility, even though humans, but not robots, are taken to have phenomenal consciousness. This view is based on an event-causal libertarian theory of free will and a revisionist theory of responsibility, which combined explain how free will and responsibility can come in degrees. This is meant to be a middle position between typical compatibilist and libertarian views, securing the strengths of both sides. The theories are then applied to robots, making it possible to be quite precise about what it means that robots can have a certain degree of moral responsibility, and why. Defending this libertarian form of free will and responsibility then implies that non-conscious robots can have a stronger form of free will and responsibility than what is commonly defended in the literature on robot responsibility.

  • The established theories and frameworks on consciousness in the academic literature, as related to artificial intelligence (AI), are rooted in anthropocentrism. Even those theories created intentionally for AI are based on the levels of consciousness as it is understood primarily in humans and secondarily in other animals. This paper will discuss why such anthropocentric frameworks are built on insecure foundations. We do this by comparing the capacities and functions of human and AI cognitive architectures, discussing the ramifications and consequences of the behaviors that stem from these, and looking at the neurological conditions in humans that give the most promising hints as to what a potential conscious AI entity would look like. The paper ends with a proposed solution for building a non-anthropocentric foundation of cognition that could lead toward a truly AI-focused framework of consciousness.

  • An expert on the mind considers how animals and smart machines measure up to human intelligence. Octopuses can open jars to get food, and chimpanzees can plan for the future. An IBM computer named Watson won on Jeopardy! and Alexa knows our favorite songs. But do animals and smart machines really have intelligence comparable to that of humans? In Bots and Beasts, Paul Thagard looks at how computers (“bots”) and animals measure up to the minds of people, offering the first systematic comparison of intelligence across machines, animals, and humans. Thagard explains that human intelligence is more than IQ and encompasses such features as problem solving, decision making, and creativity. He uses a checklist of twenty characteristics of human intelligence to evaluate the smartest machines—including Watson, AlphaZero, virtual assistants, and self-driving cars—and the most intelligent animals—including octopuses, dogs, dolphins, bees, and chimpanzees. Neither a romantic enthusiast for nonhuman intelligence nor a skeptical killjoy, Thagard offers a clear assessment. He discusses hotly debated issues about animal intelligence concerning bacterial consciousness, fish pain, and dog jealousy. He evaluates the plausibility of achieving human-level artificial intelligence and considers ethical and policy issues. A full appreciation of human minds reveals that current bots and beasts fall far short of human capabilities.

  • Consciousness and Artificial Intelligence, like two sides of a coin, are seen together but are not together; when combined, they become a mastery that could take over the world. Consciousness has still not been defined by any researcher or scientist. Heuristically, we know that Artificial Intelligence is booming and that technology is an ever faster-growing field, but with this there are many factors which are neglected and can cause drastic changes and severe problems for mankind. While we think that Artificial Intelligence is not beyond humans, there are times when things are neglected despite the fact that AI is showing prominent signs of becoming far superior to what it has been trained and tested on. Often, series or patterns of outputs are observed which the machine was not trained on but which were formed by algorithmic matches, and these may be difficult even for humans to understand. This paper discusses the thoughts which are alive in everyone's brain but are unanswered and are seeking a path to a standard solution. Society is among the oldest of structures, formed by humans. How will it be if "SOCIETY" exists in a world led by robots that have become dominant and rule over humans? Can consciousness, which is still abstract or elusively subjective in humans, work in robots in the same way it is felt within us? If it ever does, would humans live their lives with freedom? Or would that freedom be minimal, shaped not by themselves but by the robots? Is it right to expound all this in one word, Singularity? It is still an unknown entity, and a toss of ambiguity as well. Researchers are quite near to developing self-learning robots, but hidden patterns of them communicating amongst themselves give rise to more perplexing theories, which make them more complex for scientists and researchers. Deep down we know that things are moving in a direction which carries risk factors; but flip to the other side, and it shows positive results and a growth of intelligence that is helping each individual to grow in their own way.

  • Creativity is intrinsic to Humanities and STEM disciplines. In the activities of artists and engineers, for example, an attempt is made to bring something new into the world through counterfactual thinking. However, creativity in these disciplines is distinguished by differences in motivations and constraints. For example, engineers typically direct their creativity toward building solutions to practical problems, whereas the outcomes of artistic creativity, which are largely useless to practical purposes, aspire to enrich the world aesthetically and conceptually. In this essay, an artist (DHS) and a roboticist (GS) engage in a cross-disciplinary conceptual analysis of the creative problem of artificial consciousness in a robot, expressing the counterfactual thinking necessitated by the problem, as well as disciplinary differences in motivations, constraints, and applications. We especially deal with the question of why one would build an artificial consciousness and we consider how an illusionist theory of consciousness alters prominent ethical debates on synthetic consciousness. We discuss theories of consciousness and their applicability to synthetic consciousness. We discuss practical approaches to implementing artificial consciousness in a robot and conclude by considering the role of creativity in the project of developing an artificial consciousness.

  • This work addresses the question of whether the most advanced methods of Artificial Intelligence may be sufficient to implement a faithful representation of behavior, consciousness, and learning at the classical level, with no need to resort to quantum information techniques.

  • A novel representationalist theory of consciousness is presented that is grounded in neuroscience and provides a path to artificially conscious computing. Central to the theory are representational affordances of the conscious experience based on the generation of qualia, the fundamental unit of the conscious representation. The current approach is focused on understanding the balance of simulation, situatedness, and structural coherence of artificial conscious representations through converging evidence from neuroscientific and modeling experiments. Representations instantiating a suitable balance of situated and structurally coherent simulation-based qualia are hypothesized to afford the agent the flexibilities required to succeed in rapidly changing environments.

  • In this paper we investigate which of the main conditions proposed in the moral responsibility literature are the ones that spell trouble for the idea that Artificial Intelligence Systems (AISs) could ever be full-fledged responsible agents. After arguing that the standard construals of the control and epistemic conditions don’t impose any in-principle barrier to AISs being responsible agents, we identify the requirement that responsible agents must be aware of their own actions as the main locus of resistance to attribute that kind of agency to AISs. This is because this type of awareness is thought to involve first-person or de se representations, which, in turn, are usually assumed to involve some form of consciousness. We clarify what this widespread assumption involves and conclude that the possibility of AISs’ moral responsibility hinges on what the correct theory of de se representations ultimately turns out to be.

  • The theory of consciousness is a subject that has kept scholars and researchers challenged for centuries. Even today it is not possible to define what consciousness is. This has led to the theorization of different models of consciousness. Starting from Baars’ Global Workspace Theory, this paper examines the models of cognitive architectures that are inspired by it and that can represent a reference point in the field of robot consciousness. Recent Findings: Global Workspace Theory has recently been ranked as the most promising theory in its field. However, this is not reflected in the mathematical models of cognitive architectures inspired by it: they are few, and most of them are a decade old, which is too long compared to the speed at which artificial intelligence techniques are improving. Indeed, recent publications propose simple mathematical models that are well designed for computer implementation. Summary: In this paper, we introduce an overview of consciousness and robot consciousness, with some interesting insights from the literature. Then we focus on Baars’ Global Workspace Theory, presenting it briefly. Finally, we report on the most interesting and promising models of cognitive architectures that implement it, describing their peculiarities.

  • An argument with roots in ancient Greek philosophy claims that only humans are capable of a certain class of thought termed conceptual, as opposed to perceptual thought, which is common to humans, the higher animals, and some machines. We outline the most detailed modern version of this argument due to Mortimer Adler, who in the 1960s argued for the uniqueness of the human power of conceptual thought. He also admitted that if conceptual thought were ever manifested by machines, such an achievement would contradict his conclusion. We revisit Adler’s criterion in the light of the past five decades of artificial-intelligence (AI) research, and refine it in view of the classical definitions of perceptual and conceptual thought. We then examine two well-publicized examples of creative works (prose and art) produced by AI systems and show that evidence for conceptual thought appears to be lacking in them. Although clearer evidence for conceptual thought on the part of AI systems may arise in the near future, especially if the global neuronal workspace theory of consciousness prevails over its rival, integrated information theory, the question of whether AI systems can engage in conceptual thought appears to be still open.

Last update from database: 3/23/25, 8:36 AM (UTC)