  • This book discusses what artificial intelligence can truly achieve: Can robots really be conscious? Can we merge with AI, as tech leaders like Elon Musk and Ray Kurzweil suggest? Is the mind just a program? Examining these issues, the author proposes ways we can test for machine consciousness, questions whether consciousness is an unavoidable byproduct of sophisticated intelligence, and considers the overall dangers of creating machine minds.

  • Abstract: How can we determine whether an AI is conscious? The chapter begins by illustrating that getting the facts about AI consciousness wrong could carry very serious real-world costs. It then proposes a provisional framework for investigating artificial consciousness that involves several tests or markers. One is the AI Consciousness Test, which challenges an AI with a series of increasingly demanding natural-language interactions. Another is based on the Integrated Information Theory developed by Giulio Tononi and others, and asks whether a machine has a high level of “integrated information.” A third is the Chip Test, a speculative scenario in which an individual’s brain is gradually replaced with durable microchips; if the individual continues to report having phenomenal consciousness, the chapter argues, this could be a reason to believe that some machines could be conscious. (A toy sketch of the integrated-information comparison follows this entry.)

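The “integrated information” marker mentioned above can be made concrete with a drastically simplified proxy; this is not Tononi’s actual Φ calculus, and the two-node network, its update rule, and the measure below are all invented for illustration. The idea: compare how much the whole system’s current state predicts its next state against how much each part predicts its own next state in isolation.

```python
import itertools
import math

# Toy 2-node deterministic network: A copies B, B computes NOT A.
# This is NOT Tononi's phi; it is a simplified illustrative proxy that
# compares whole-system predictive information against the parts'.

def step(state):
    a, b = state
    return (b, 1 - a)  # A <- B, B <- NOT A

STATES = list(itertools.product([0, 1], repeat=2))

def mutual_information(pairs):
    """I(X;Y) in bits for a uniform distribution over the given (x, y) pairs."""
    n = len(pairs)
    px, py, pxy = {}, {}, {}
    for x, y in pairs:
        px[x] = px.get(x, 0) + 1 / n
        py[y] = py.get(y, 0) + 1 / n
        pxy[(x, y)] = pxy.get((x, y), 0) + 1 / n
    return sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

# Whole system: how much does the current state tell us about the next state?
whole = mutual_information([(s, step(s)) for s in STATES])

# Partitioned system: each node's next state predicted from its own past only.
part_a = mutual_information([(s[0], step(s)[0]) for s in STATES])
part_b = mutual_information([(s[1], step(s)[1]) for s in STATES])

phi_proxy = whole - (part_a + part_b)
print(f"whole={whole:.2f} bits, parts={part_a + part_b:.2f} bits, proxy={phi_proxy:.2f}")
```

In this toy network each node in isolation predicts nothing about its own future (each depends only on the other node), while the whole system is perfectly predictive, so the proxy comes out at its maximum of 2 bits: information is carried by the system as an integrated whole rather than by its parts.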
  • Various interpretations of the literature on the neural basis of learning have contributed to disagreements about how consciousness arises. Moreover, artificial learning models have struggled to replicate intelligence as it occurs in the human brain. Here, we present a novel learning model, which we term the Recommendation Architecture (RA) model after prior theoretical work by Coward, using a dual-learning approach featuring both consequence feedback and non-consequence feedback. The RA model is tested on a categorical learning task in which no two inputs are the same during training or testing, and is compared with three consequence-feedback-only models based on backpropagation and reinforcement learning. Results indicate that the RA model learns novel inputs more efficiently and can accurately return to prior learning after new learning with lower computational-resource expenditure. The study’s final results show that treating consequence feedback as an interpretation, rather than a creator, of cortical activity yields a learning style closer to human learning in terms of resource efficiency. Stable information meanings underlie conscious experiences. The work presented here attempts to link the neural basis of nonconscious and conscious learning while providing early results for a learning protocol more similar to the human brain’s than is currently available. (A minimal sketch of the dual-feedback idea follows this entry.)

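The dual-feedback contrast in the abstract above can be illustrated with a minimal sketch. This is not the authors’ RA model: the prototype learner, the two-category 2-D task, and every parameter below are invented for illustration. Non-consequence feedback shapes internal representations from input similarity alone, while consequence feedback only re-weights how those representations are interpreted as labels.

```python
import numpy as np

# Minimal sketch of the dual-feedback idea (NOT the authors' RA model):
# non-consequence feedback clusters inputs by similarity regardless of
# outcome; consequence feedback only re-weights the cluster->label mapping,
# i.e., it interprets internal activity rather than creating it.

rng = np.random.default_rng(0)

N_CLUSTERS, N_LABELS, LR_PROTO, LR_VALUE = 4, 2, 0.1, 0.5
prototypes = rng.normal(size=(N_CLUSTERS, 2))   # internal representations
values = np.zeros((N_CLUSTERS, N_LABELS))       # cluster -> label weights

def classify(x):
    cluster = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
    return cluster, int(np.argmax(values[cluster]))

def learn(x, true_label):
    cluster, guess = classify(x)
    # Non-consequence feedback: move the winning prototype toward the input,
    # whether or not the guess was right.
    prototypes[cluster] += LR_PROTO * (x - prototypes[cluster])
    # Consequence feedback: reward or punish only the interpretation.
    reward = 1.0 if guess == true_label else -1.0
    values[cluster, guess] += LR_VALUE * reward

# Train on noisy samples around two category centers; no input ever repeats.
centers = np.array([[2.0, 2.0], [-2.0, -2.0]])
for _ in range(500):
    label = int(rng.integers(N_LABELS))
    learn(centers[label] + rng.normal(scale=0.5, size=2), label)

test = [centers[l] + rng.normal(scale=0.5, size=2) for l in (0, 1) for _ in range(50)]
truth = [0] * 50 + [1] * 50
acc = np.mean([classify(x)[1] == t for x, t in zip(test, truth)])
print(f"test accuracy: {acc:.2f}")
```

Because the representations are built from similarity rather than from reward, new learning moves prototypes only locally, which loosely mirrors the abstract’s claim that such a learner can return to prior learning after new learning at low cost.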