02/03/2026 • by Jonas Kellermeyer

Dangerous Anthropocentrism in Dealing With AI


Siri, Alexa, Cortana, Bixby… We often address our digital assistants by name and are quick to treat them as if they were human beings – complete with human emotions and sensitivities. In the following deep dive, we unpack why this growing tendency toward anthropomorphization is problematic.

Why We Misunderstand Artificial Intelligence – And End Up Tripping Ourselves Up in the Process

The current discourse on artificial intelligence is marked by a peculiar imbalance. On the one hand, AI is discussed as an almost autonomous entity – as a thinking counterpart, a creative force, a potential rival to humanity. On the other hand, it is treated as a mere tool: a neutral extension of human intentionality, an efficient machine without its own horizon of meaning. These two perspectives may appear contradictory, yet they rest on the same underlying assumption: a deeply rooted anthropocentrism.

This anthropocentrism manifests itself in the fact that AI is described, evaluated, and feared almost exclusively through human categories. We ask whether AI “understands,” whether it is “creative,” whether it “possesses consciousness,” or whether it might “replace us as humans.” We measure it against human intelligence, human morality, human autonomy. This is precisely where the problem lies – not because such questions are illegitimate per se, but because they obscure our view of what AI actually is, and how it truly operates in the world.

Anthropocentrism as a Conceptual Trap

Anthropocentrism describes the tendency to position humans as the measure of all things and to evaluate everything through human categories. In philosophical, technological, and societal contexts, this is by no means a new phenomenon – one need only think of how we relate to pets or how we talk about ecological issues. What is new, however, is the radical extent to which this perspective is applied to AI, and how readily it is projected onto it.

AI is either anthropomorphized or instrumentalized. It appears as an “intelligent assistant,” a “creative co-author,” an “agent.” Or it is reduced to a neutral tool whose effects are entirely absorbed by human use. Both views are anthropocentric simplifications.

The truth lies somewhere between these complexity-reducing assumptions. AI is neither a subject nor a mere object. It is a sociotechnical system, embedded in infrastructures, data economies, decision architectures, and institutional power relations. Anyone who considers AI exclusively through the lens of human attributes overlooks precisely this structural dimension. Elsewhere, we have already written about techno-social solidarity – a point we would like to briefly recall here.

The Illusion of Comparability

A central symptom of anthropocentric thinking is the constant impulse to compare: Can AI think like us? Learn like us? Be creative like us? These questions are not only methodologically questionable, they are also analytically unproductive.

AI systems do not operate on the level of human cognition. They possess neither intentionality nor a phenomenological relation to the world. An AI does not so much read as it extracts; it does not so much know as it proceeds stochastically. And yet – or precisely because of this – it produces results that resemble human thinking, or at least convincingly simulate it. It is this very resemblance that tempts us to assume equivalence. Even though there are many apologists who argue that human thinking itself is, at its core, not fundamentally different from the reasoning processes of machines (cf. Kurzweil 2016), a certain degree of skepticism is warranted when considering how AI is functionally constituted.

This difference is crucial. AI does not “understand”; it correlates. It does not “decide”; it optimizes. It does not “learn” in a human sense; it dynamically adjusts parameters. This distinction is not a deficit but a constitutive feature. Ignoring it risks fostering false expectations – and assigning responsibility where it does not belong.
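To make this distinction tangible, consider a deliberately trivial sketch. The following Python snippet is purely illustrative – the toy data, parameter names, and learning rate are our own assumptions, not a description of any real AI system – but it shows what "learning" amounts to for a statistical model: the repeated numerical adjustment of parameters to shrink an error term. No reading, knowing, or understanding is involved at any point.

```python
import random

random.seed(0)  # reproducible toy data

# Toy observations following y = 2x + 1, plus a little noise.
data = [(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in range(10)]

w, b = 0.0, 0.0   # model parameters, initialized arbitrarily
lr = 0.005        # learning rate (step size), chosen by hand

for epoch in range(2000):
    for x, y in data:
        prediction = w * x + b   # the "output" is arithmetic over parameters
        error = prediction - y   # deviation from the observed target
        # "Learning" is nothing but a gradient step: each parameter is
        # nudged in the direction that reduces the squared error.
        w -= lr * 2 * error * x
        b -= lr * 2 * error

print(f"learned w={w:.2f}, b={b:.2f}")  # converges toward w=2, b=1
```

Scaled up by many orders of magnitude and applied to text rather than number pairs, this same logic of correlation and optimization underlies today's large language models – which is precisely why speaking of "understanding" misdescribes what is happening.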

Responsibility Is Misplaced

The anthropocentric view of AI leads to a problematic displacement of responsibility. When AI is understood as a quasi-autonomous counterpart, the notion quickly arises that decisions are being “made by the system.” When, by contrast, AI is treated as a neutral tool, responsibility is fully individualized.

Both positions fall short. Responsibility resides neither in the machine nor solely with individual users. It is distributed across multiple layers: developers, organizations, regulatory frameworks, socio-economic incentives, and cultural narratives. AI does not operate in isolation but within complex constellations of decision-making – within a network that produces both stability and flexibility through ongoing processes of negotiation. An anthropocentrism that has slipped out of balance obscures precisely this kind of distribution. It personalizes what is in fact a structural phenomenon, while simultaneously depoliticizing it.

Anthropocentrism as a Brake on Innovation

Paradoxically, anthropocentric thinking not only hinders a critical engagement with AI, but also severely limits its meaningful use. Those who understand AI as “human intelligence in another form” expect creativity, judgment, or ethical sensitivity from it. Those who see it merely as a tool underestimate its systemic impact.

In both cases, its actual potential remains untapped: using AI as an amplifier, as a (distorting) mirror, as a structural field of experimentation is an option that falls through the cracks. Not because AI thinks better than humans, but because it operates differently. Its strength lies not in an increased degree of unexamined autonomy, but in the potential scaling of productivity. It is not about judgment, but about pattern recognition. Not about the creation of meaning, but about purpose-driven variation.

A non-anthropocentric approach to AI begins precisely here: with the deliberate design of interfaces between human thought and machine processing – what are referred to as Future User Interfaces (FUIs).

The Danger of False Intimacy

Another problem inherent in anthropocentric AI narratives is the creation of false intimacy. Language models, avatars, and “assistant systems” are deliberately designed to imitate human communication. This imitation intentionally fosters familiarity, and in some cases even emotional attachment.

Yet intimacy without reciprocity is profoundly asymmetrical. AI cannot assume responsibility, reciprocate a relationship, or be held morally accountable. Attributing human qualities to it nevertheless obscures this asymmetry and opens the door to manipulation, misinterpretation, and excessive dependency.

The danger of anthropocentrism here does not lie in the technology itself, but in the cultural framing through which it is presented and understood.

AI Is an Ecological Phenomenon, Not an Autonomous Actor

An alternative approach would be to understand AI less as an autonomously acting agent and more as an ecological phenomenon: effectively as an infrastructure that facilitates decision-making – a systemic arrangement capable of steering behavior without restricting the agency of human individuals.

From this perspective, the central question is no longer one of AI’s “intelligence,” but of its effects. How do AI systems reshape work processes? The production of knowledge? Decision-making logics? Power relations? Which assumptions do they reinforce – and which do they render invisible?

These questions cannot be answered from an anthropocentric standpoint. They require systemic thinking.

Every AI is the product of a human act of creation, and the asymmetry between calculative mechanisms and strategically acting individuals should be acknowledged and structured accordingly.

Why Anthropocentric Assumptions Are So Persistent

That anthropocentric narratives have become so deeply embedded in our collective consciousness is no coincidence: they serve an important psychological function. At their core lies an attempt to make a complex and elusive phenomenon more comprehensible by rendering it familiar. Uncertainties in dealing with AI can thus be reduced by fitting it into well-known categories, providing – at least allegorically – a point of reference for critique.

Yet this sense of comfort comes at a cost. It prevents sober analysis and fosters false expectations, shifting the discourse away from structural questions toward metaphysical speculation.

The question, then, is not whether AI “resembles us.” The real question is how it transforms our established systems – and how we intend to shape that transformation.

A Plea for a Post-Anthropocentric AI Discourse

A responsible approach to AI requires a shift in perspective: away from the question of the machine’s humanity and toward the question of the mechanization of certain decision-making processes. Away from excessive personalization and toward a broad, structural understanding. If it is true that thinking requires a body (cf. Lyotard 2014), then the task of technology is to ensure that such a body remains intact:

“You know very well that technology is not an invention of humans. Rather the opposite. Anthropologists and biologists agree that even the simplest primordial organisms […] were technical constructs. ‘Technology’ is any existing system that identifies, stores, and processes the information necessary for its survival, in order to derive certain behavioral patterns from regularities […] that at least ensure its survival” (Lyotard 2014: 23).

Recognizing this does not mean displacing humans from the center. On the contrary, it means taking human responsibility seriously – especially where it is no longer immediately visible.

A post-anthropocentric AI discourse would acknowledge that AI is neither a subject nor a mere object, but part of a larger assemblage – as a veritable quasi-object/quasi-subject (cf. Serres 1987) – that constitutes a collective. As such, it is our responsibility to shape, regulate, and ultimately take responsibility for the influence it continuously exerts.

Conclusion: Anthropocentrism Is Convenient and Dangerous

The anthropocentric way of dealing with AI is understandable but highly problematic. It simplifies precisely where greater differentiation would be necessary. It personalizes where structures ought to be analyzed. And it distracts from the actual issue at hand: the design of sociotechnical systems and the creation of techno-social solidarity.

The danger associated with AI does not primarily lie in technological actors taking over human roles, but rather in a profound misunderstanding of how AI functions in the first place. When we attribute properties to it that it does not possess – while at the same time ascribing effects to it for which no one is willing to take responsibility – we are confronted with a fundamental imbalance.

A reflective engagement with AI therefore does not begin with the question of what machines can do, but with the question of how we think. And this is where the real challenge lies. We must turn our attention more critically toward ourselves in order to understand where we intend to draw the boundary between the technosphere and the social world. I would like to conclude with a quote from Vilém Flusser, who observed that

“[t]he telepathic method of synthesizing information through ‘external’ dialogues – dialogues in which, theoretically, all humans and all ‘artificial intelligences’ participate via cables or satellites – is […] in essence nothing other than a technical application of the theoretical insight that all information arises from computations of information bits” (Flusser 2018: 107).

Sources

Flusser, Vilém (2018): ins universum der technischen bilder. edition flusser, Berlin.

Kurzweil, Ray (2016): Die Intelligenz der Evolution: Wenn Mensch und Computer verschmelzen. Kiepenheuer und Witsch, Köln.

Lyotard, Jean-François (2014): „Ob man ohne Körper denken kann.“ In: ders. Das Inhumane. Plaudereien über die Zeit. Passagen Verlag, Wien, S. 19-35.

Serres, Michel (1987): Der Parasit. Suhrkamp Verlag, Frankfurt a.M.

About the author

As a communications expert, Jonas is responsible for the linguistic representation of the Taikonauten, as well as for crafting all R&D-related content with an anticipated public impact. After some time in the academic research landscape, he has set out to broaden his horizons – and his vocabulary – even further.
