When I first encountered discussions about Artificial Intelligence (AI) versus Natural Intelligence (NI)—machine intelligence compared to human intelligence—I realized how closely they intersect with broader philosophical movements. This takes me back to my first lecture at the Lumsa Human Academy last month. It was also the day I decided to start writing down the thoughts of my already‑confused mind in a journal like the one you’re reading now.
During the lecture, we explored several intellectual movements shaping the debate around whether AI and NI can complement one another, compete, or reshape our ethical frameworks. I still have a lot to learn, but below is what I’ve understood so far. I welcome any suggestions for further reading or listening.
Human‑centrism
Human‑centrism places humans at the center of value and decision‑making, prioritizing human interests above all else. This mindset is visible in how we exploit natural resources—deforestation for urban expansion or industrial farming that prioritizes human consumption over ecological balance—often without restraint.
Technocentrism
Technocentrism views technology as the primary solution to societal and environmental challenges, emphasizing innovation and control.
For example, some climate‑engineering projects reflect a technocentric belief that innovation can “fix” the problems created by human activity. While such solutions may be promising, they risk overlooking systemic issues like overconsumption or inequality.
As someone concerned with societal impact, I worry that a purely technocentric approach may limit our vision of the greater good.
Transhumanism
Transhumanism builds on human‑centrism but advocates using advanced technologies—AI, genetic engineering, nanotechnology—to radically enhance human capacities and overcome biological limits. It essentially merges human‑centrism with technocentrism.
I remain skeptical because it seeks to amplify the capabilities of a species that has already caused significant harm to the planet. Brain‑computer interfaces or genetic modification may extend life or boost intelligence, but they also raise questions: Who will have access? What happens to social equity?
Posthumanism
Posthumanism, in contrast, critiques both human‑centrism and technocentrism. It questions whether technology should define our future or remain only one part of a broader ethical vision.
Posthumanist thinking appears in movements embracing biodiversity and multispecies justice, where technology supports coexistence rather than domination—for example, AI‑driven systems that protect endangered species without prioritizing human convenience. Instead of “enhancing humans,” posthumanism seeks to decenter humanity and rethink our coexistence with nonhuman life.
As someone with an analytical and altruistic nature, I find that this last perspective resonates deeply with me.
To summarize:
- Transhumanism amplifies human‑centrism through technology and leans toward technocentrism, guided by a “techno‑centered imagination” that trusts technology to overcome human limits.
- Posthumanism challenges both, asking whether technology should remain a tool within a more inclusive ethical framework.
- Critical posthuman and bioethical perspectives remind us that focusing on technological solutions may sideline ecological, social, and justice concerns. They push us to ask not only what we can build, but why, for whom, and with what consequences.
Conclusion
As AI advances and humanity faces unprecedented challenges, these philosophical frameworks matter more than ever.
Will we pursue a future that reinforces human dominance through technology, or will we embrace a more inclusive vision that values the ecosystem as a whole?
The answer lies not only in what we invent but in the values guiding those inventions.
I am not yet convinced that today’s fragmented political environment reflects these core values—and that worries me. Still, I choose to believe in a future where UNESCO’s principles—human rights and human dignity, peaceful coexistence, diversity and inclusion, and environmental and ecosystem flourishing—are fully applied.
A quick note: UNESCO produced its first‑ever global standard on AI ethics—the Recommendation on the Ethics of Artificial Intelligence, adopted in November 2021 and applicable to all 194 member states. I hope to explore this and other ethical guidelines in more detail soon.
Published on LinkedIn.
