Results: 4 resources
-
How can the free energy principle contribute to research on neural correlates of consciousness, and to the scientific study of consciousness more generally? Under the free energy principle, neural correlates should be defined in terms of neural dynamics, not neural states, and should be complemented by research on computational correlates of consciousness – defined in terms of probabilities encoded by neural states. We argue that these restrictions brighten the prospects of a computational explanation of consciousness, by addressing two central problems. The first is to account for consciousness in the absence of sensory stimulation and behaviour. The second is to allow for the possibility of systems that implement computations associated with consciousness, without being conscious, which requires differentiating between computational systems that merely simulate conscious beings and computational systems that are conscious in and of themselves. Given the notion of computation entailed by the free energy principle, we derive constraints on the ascription of consciousness in controversial cases (e.g., in the absence of sensory stimulation and behaviour). We show that this also has implications for what it means to be, as opposed to merely simulate, a conscious system.
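As background (this is the standard definition rather than anything derived in the abstract, and the symbols q, p, s and o are conventional notation rather than the paper's), the free energy at issue is the variational free energy defined over the probabilities q(s) that neural states are taken to encode:

$$
F \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] \;=\; D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] \;-\; \ln p(o)
$$

Minimising F makes q(s) approximate the posterior over hidden states while implicitly maximising model evidence, which is roughly the notion of computation the abstract appeals to.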
-
Can active inference model consciousness? We offer three conditions implying that it can. The first condition is the simulation of a reality or generative world model, which determines what can be known or acted upon; namely, an epistemic field. The second is inferential competition to enter the world model. Only the inferences that coherently reduce long-term uncertainty win, evincing a selection for consciousness that we call Bayesian binding. The third is epistemic depth, which is the recurrent sharing of Bayesian beliefs throughout the system. Due to this recursive loop, in a hierarchical system such as a brain, the world model contains the knowledge that it exists. This is distinct from self-consciousness, because the world model knows itself non-locally and continuously evidences this knowing (i.e., field-evidencing). Formally, we propose a hyper-model for precision control across the entire hierarchy, whose latent states (or parameters) encode and control the overall structure and weighting rules for all layers of inference. This Beautiful Loop Theory is deeply revealing about meditative, psychedelic, and other altered states, as well as minimal phenomenal experience, and it provides a new vision for conscious artificial intelligence.
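As a loose illustration of the proposed hyper-model (a minimal toy sketch; the class names and update rules below are assumptions for illustration, not the authors' implementation), the idea is that a single set of latent parameters encodes the precision, i.e. the weighting of prediction errors, for every layer of a hierarchy, and is itself revised by those layers:

```python
# Toy sketch only: a hierarchy whose prediction errors are weighted by
# precisions held in one shared "hyper-model". Names and update rules are
# illustrative assumptions, not the authors' implementation.
import numpy as np

class Layer:
    def __init__(self, dim):
        self.belief = np.zeros(dim)              # posterior expectation at this layer

    def error(self, signal):
        return signal - self.belief              # prediction error for an incoming signal

class HyperModel:
    """Holds one log-precision per layer; these set the weighting rule everywhere."""
    def __init__(self, n_layers):
        self.log_precision = np.zeros(n_layers)

    def precisions(self):
        return np.exp(self.log_precision)        # positive weight for each layer

def update(layers, hyper, signals, lr=0.1):
    """One sweep of precision-weighted belief updating across the hierarchy."""
    for layer, sig, w in zip(layers, signals, hyper.precisions()):
        layer.belief += lr * w * layer.error(sig)    # higher precision -> larger update
    # The hyper-model adapts in turn: layers with persistently large residual
    # errors are down-weighted (a crude form of hierarchy-wide precision control).
    sq_err = np.array([np.mean(layer.error(s) ** 2)
                       for layer, s in zip(layers, signals)])
    hyper.log_precision -= lr * (sq_err - 1.0)

layers = [Layer(3) for _ in range(4)]
hyper = HyperModel(n_layers=4)
rng = np.random.default_rng(0)
for _ in range(50):
    update(layers, hyper, [rng.normal(size=3) for _ in layers])
print(np.round(hyper.precisions(), 2))
```

The recursive element, hierarchy-wide weights being revised by the very errors they weight, is a crude stand-in for the recurrent, system-wide sharing of beliefs that the abstract calls epistemic depth.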
-
Much of current artificial intelligence (AI) and the drive toward artificial general intelligence (AGI) focuses on developing machines for functional tasks that humans accomplish. These may be narrowly specified tasks as in AI, or more general tasks as in AGI, but typically these tasks do not target higher-level human cognitive abilities, such as consciousness or morality; these are left to the realm of so-called “strong AI” or “artificial consciousness.” In this paper, we focus on how a machine can augment humans rather than do what they do, and we extend this beyond AGI-style tasks to augmenting peculiarly personal human capacities, such as wellbeing and morality. We base this proposal on associating such capacities with the “self,” which we define as the “environment-agent nexus”; namely, a fine-tuned interaction of brain with environment in all its relevant variables. We consider richly adaptive architectures that have the potential to implement this interaction by taking lessons from the brain. In particular, we suggest conjoining the free energy principle (FEP) with the temporo-spatial dynamics (TSD) view of neuro-mental processes. Our proposed integration of FEP and TSD, implemented in artificial agents, offers a novel, expressive, and explainable way for such agents to adapt to different environmental contexts. The targeted applications are broad: from adaptive intelligence-augmenting agents (IAs) that assist psychiatric self-regulation to environmental disaster prediction and personal assistants. This reflects the central role of the mind and moral decision-making in most of what we do as humans.
-
This paper investigates the prospect of developing human-interpretable, explainable artificial intelligence (AI) systems based on active inference and the free energy principle. We first provide a brief overview of active inference and, in particular, of how it applies to the modeling of decision-making and introspection, as well as to the generation of overt and covert actions. We then discuss how active inference can be leveraged to design explainable AI systems, namely, by allowing us to model core features of "introspective" processes and by generating useful, human-interpretable models of the processes involved in decision-making. We propose an architecture for explainable AI systems using active inference. This architecture foregrounds the role of an explicit hierarchical generative model, the operation of which enables the AI system to track and explain the factors that contribute to its own decisions, and whose structure is designed to be interpretable and auditable by human users. We outline how this architecture can integrate diverse sources of information to make informed decisions in an auditable manner, mimicking or reproducing aspects of human-like consciousness and introspection. Finally, we discuss the implications of our findings for future research in AI, and the potential ethical considerations of developing AI systems with (the appearance of) introspective capabilities.
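To make the idea of an auditable decision trace concrete, here is a minimal, hedged sketch of a discrete active-inference agent that scores each action by its expected free energy and records the pragmatic and epistemic factors behind every choice; the environment, the matrices A, B and C, and the output format are illustrative assumptions rather than the paper's proposed architecture:

```python
# Toy sketch only: a two-state, two-action active-inference agent that ranks
# actions by expected free energy and logs why it chose what it chose. The
# generative model (A, B, C) and the explanation format are assumptions made
# for illustration, not the architecture proposed in the paper.
import numpy as np

A = np.array([[0.9, 0.1],
              [0.1, 0.9]])                       # likelihood P(obs | state), columns are states
B = {"stay":   np.eye(2),
     "switch": np.array([[0.0, 1.0],
                         [1.0, 0.0]])}           # transitions P(state' | state, action)
C = np.log(np.array([0.8, 0.2]) + 1e-16)         # log prior preferences over observations

def expected_free_energy(q_s, action):
    """Return (G, pragmatic value, epistemic value) for a one-step lookahead."""
    q_next = B[action] @ q_s                     # predicted states under this action
    q_obs = A @ q_next                           # predicted observations
    pragmatic = float(q_obs @ C)                 # how preferred the expected outcomes are
    # Epistemic value: expected reduction in uncertainty about hidden states.
    h_prior = -float(q_next @ np.log(q_next + 1e-16))
    h_post = 0.0
    for o in range(len(q_obs)):
        post = A[o] * q_next
        post /= post.sum() + 1e-16               # posterior over states given obs o
        h_post += q_obs[o] * -float(post @ np.log(post + 1e-16))
    epistemic = h_prior - h_post
    return -(pragmatic + epistemic), pragmatic, epistemic

q_s = np.array([0.5, 0.5])                       # current beliefs about hidden states
trace = []                                       # auditable record of decision factors
for action in B:
    G, pragmatic, epistemic = expected_free_energy(q_s, action)
    trace.append({"action": action, "G": G,
                  "pragmatic": pragmatic, "epistemic": epistemic})

choice = min(trace, key=lambda row: row["G"])    # lowest expected free energy wins
print(f"chose '{choice['action']}': pragmatic={choice['pragmatic']:.2f}, "
      f"epistemic={choice['epistemic']:.2f}, G={choice['G']:.2f}")
```

Because the pragmatic and epistemic terms are explicit quantities rather than opaque activations, they can be surfaced directly to a human auditor, which is the kind of interpretable decision record the abstract envisages.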