  • The science of consciousness has made great strides in recent decades. However, the proliferation of competing theories makes it difficult to reach consensus about artificial consciousness. While for purely scientific purposes we might wish to adopt a ‘wait and see’ attitude, we may soon face practical and ethical questions about whether, for example, artificial systems are capable of suffering. Moreover, many of the methods used for assessing consciousness in humans and even non-human animals are not straightforwardly applicable to artificial systems. With these challenges in mind, I propose that we look for ecumenical heuristics for artificial consciousness to enable us to make tentative assessments of the likelihood of consciousness arising in different artificial systems. I argue that such heuristics should have three main features: they should be (i) intuitively plausible, (ii) theoretically neutral, and (iii) scientifically tractable. I claim that the concept of general intelligence — understood as a capacity for robust, flexible, and integrated cognition and behavior — satisfies these criteria and may thus provide the basis for such a heuristic, allowing us to make initial cautious estimations of which artificial systems are most likely to be conscious.

  • Most scientific theories of consciousness are challenging to apply outside the human case insofar as non-human systems (both biological and artificial) are unlikely to implement human architecture precisely, an issue I call the specificity problem. After providing some background on the debate over theories of consciousness, I survey the prospects of four approaches to this problem. I then consider a fifth solution, namely the theory-light approach proposed by Jonathan Birch. I defend a modified version of this that I term the modest theoretical approach, arguing that it may provide insights into challenging cases that would otherwise be intractable.
