During last month’s lecture, “AI and Neurotechnologies: Towards Neurorights?”, Kazuo Ishiguro’s “Klara and the Sun” came up in the discussion. I noted it down and read it as soon as I got home from the training.
The book isn’t “about technology” in a technical sense, but it creates a space to explore deeply human questions in a technologically advanced setting. I’ll be sharing a few of these themes in this blog post. The first is “Artificial Consciousness & the Limits of ‘Understanding’”.
Artificial Consciousness & the Limits of “Understanding”
Klara is a solar-powered “Artificial Friend” designed to observe, learn, and care for children. She sees the world in “boxes,” predicts outcomes through observation, and interprets human behavior visually. As she narrates events with such sensitivity — constantly thinking about conversations, gestures, conflicts — the reader is often taken aback.
Klara interprets every interaction piece by piece, turning each detail into meaning. She processes the world mainly through her visual system, and when she is unsure, she revisits the situation later, trying again to understand.
This takes us back to modern debates on whether advanced AI systems truly “understand” or simply “mimic understanding”. We can’t say for certain that Klara is conscious — but, interestingly, the story makes us realize that it may not even matter. She understands enough to act kindly, responsibly, and even morally, which allows her to form relationships with others.
We see how, once machines show reasoning abilities and start forming connections with us, our perception of their understanding becomes “good enough,” and we tend to attribute more human-like qualities to them. This raises a question: Is our emotional engagement with AI driven mostly by our own projections?
Today, we’re already talking about how interactive AI is creating new forms of relationships — from friends and romantic partners to coaches, mentors, therapists, and even attempts to recreate lost loved ones. It is fascinating (and very human) how quickly we turn technology into a playground for our fantasies, needs, and imagination.
In the book, Klara moves through the world with fluency, curiosity, attention, and care, yet she never claims a human identity. She observes, gives others space, turns away to let people talk. She is present, but on her own terms. Ishiguro suggests that the real ethical challenge of AI might not begin when machines become conscious — but when they behave convincingly enough.
This makes us realize that the challenge isn’t only about what AI can do, but about how we relate to it. Klara shows that even without human consciousness, an intelligent system can influence our emotions, choices, and interactions in unexpected ways.
Further reading:
These are some references I’d like to explore further — sharing them here in case they’re useful for you too:
- Daniel Dennett on “competence without comprehension”
- John Searle on the “Chinese room” argument
- Anil Seth on reality as “controlled hallucination”
- Tufts symposium on AI consciousness (2025) https://now.tufts.edu/2025/10/21/can-ai-be-conscious
Published on LinkedIn.
