08/04/2025 • by Jonas Kellermeyer
GUI, VUI, FUI – The Future of Interfaces

We live in a world that revolves largely around visual aspects; people generally believe only what they can see. From the very concept of the Enlightenment, through the history of classical design and architecture, to an increasingly graphics-driven computer culture, we have collectively developed a fetish for the visible – subconsciously learning to equate it with what is true. But what happens when the subject at hand cannot be adequately visualized? How do we design the interfaces of the future, which may rely much more on other modalities – primarily on spoken language?
How will we interact with technology in the future?
To answer the question of which modalities will shape future interactions, we must first analyze the changes that are already foreseeable today, chief among them the fact that nearly everything is becoming an interface. As media theorist Alexander R. Galloway pointed out in his seminal work The Interface Effect, an interface is less a thing in itself and more an effect that can be attributed to almost any constellation of things (cf. Galloway 2012: 36), because, as he writes, “to mediate is really to interface” (ibid.: 10). Ultimately, the process we describe as interfacing is about establishing a sustainable connection between at least two communicators, enabling them to reach a shared understanding. We interact with social actors every day without relying on explicitly graphical components: how we navigate an everyday meeting, for instance, depends primarily on natural spoken language. This observation raises an important question: if the future belongs to technological gadgets and devices that are predominantly controlled through spoken language, what would a standardized UI design that permeates every aspect of digital culture look like?
Standardized Design for Voice User Interfaces (VUIs)
The challenge in designing speech-based interfaces lies primarily in the fleeting nature of spoken words. Speech vanishes at the very moment it is uttered—it leaves no trace, no visible orientation aids like a menu, a button, or a scrollbar. This creates the necessity to establish orientation through other means: through consistent structuring, reliable feedback, and a high degree of coherence in dialogue flow and language choice.
A Voice User Interface should not be understood merely as a transcription of graphical concepts. Instead, it requires an independent set of design principles: VUI design must build on concepts such as prompt economy, linguistic adaptation, ambiguity tolerance, and adaptivity. Users expect an interlocutor that not only understands what is being said, but also how it is said—tone, pace, and intention. The semantic layer is complemented by a paraverbal one, which is often decisive for the perceived quality of the interface. Indeed, “control always takes place at an interface of some kind” (Hookway 2014: 11), which is why it is essential to consider the interface within every functional framework. However, the following nuance is equally important: “Yet the interface is not reducible to control” (ibid.).
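To make principles like prompt economy and adaptivity a bit more tangible, consider the following minimal Python sketch. Everything in it – the PromptSet and UserModel classes, the pick_prompt function, the familiarity thresholds – is a hypothetical illustration rather than part of any real voice platform; the point is simply that the same prompt can be worded more tersely the more often a user has successfully completed a task.

```python
from dataclasses import dataclass, field


@dataclass
class PromptSet:
    """Alternative wordings of the same prompt, from verbose to terse."""
    verbose: str   # first contact: explains the options explicitly
    regular: str   # returning user: shorter, but still self-contained
    terse: str     # frequent user: minimal cue, relies on learned routine


@dataclass
class UserModel:
    """Tracks how often a user has successfully completed each task."""
    completions: dict[str, int] = field(default_factory=dict)

    def record_success(self, task: str) -> None:
        self.completions[task] = self.completions.get(task, 0) + 1


def pick_prompt(prompts: PromptSet, user: UserModel, task: str) -> str:
    """Return the least verbose wording the user can be expected to follow."""
    n = user.completions.get(task, 0)
    if n == 0:
        return prompts.verbose
    if n < 5:
        return prompts.regular
    return prompts.terse


if __name__ == "__main__":
    booking = PromptSet(
        verbose="You can book a room by telling me the date, the number of "
                "guests, and roughly where you would like to stay.",
        regular="Which date, how many guests, and where?",
        terse="Date, guests, location?",
    )
    user = UserModel()
    print(pick_prompt(booking, user, "book_room"))  # verbose on first contact
    user.record_success("book_room")
    print(pick_prompt(booking, user, "book_room"))  # already shorter afterwards
```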
One of the greatest challenges in the field of VUIs is the balance between systematic dialogue guidance and individual freedom: should the system strictly lead the user through a menu, or should it be open enough to accept free-form inputs? The solution often lies in what is known as guided openness – a middle ground between functional expectation management and explicit room for user agency.
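What guided openness might look like at the level of a single dialogue turn can be sketched roughly as follows. The intent names, the keyword matcher, and the handle_turn function below are hypothetical stand-ins for a real NLU component; the system first checks whether an utterance fits what the current step expects and, if it does not, still accepts the input while restating the options that are actually available.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Turn:
    reply: str            # what the system says next
    expected: list[str]   # intents the next step is prepared to handle


# Hypothetical keyword lists standing in for a trained NLU model.
EXPECTED_INTENTS = {
    "set_timer": ["timer", "minutes", "countdown"],
    "play_music": ["play", "music", "song"],
}


def match_expected(utterance: str, expected: list[str]) -> Optional[str]:
    """Try to match the utterance against the intents this step anticipates."""
    text = utterance.lower()
    for intent in expected:
        if any(keyword in text for keyword in EXPECTED_INTENTS.get(intent, [])):
            return intent
    return None


def handle_turn(utterance: str, expected: list[str]) -> Turn:
    intent = match_expected(utterance, expected)
    if intent is not None:
        # Guided path: the input fits what this step anticipated.
        return Turn(reply=f"Okay, handling '{intent}'.", expected=[])
    # Open path: the free-form input is not rejected outright; the system
    # gently narrows the frame by restating what it can do right now.
    options = " or ".join(name.replace("_", " ") for name in expected)
    return Turn(
        reply=f"I heard you, but I'm not sure what to do with that here. "
              f"Right now I can {options}.",
        expected=expected,
    )


if __name__ == "__main__":
    print(handle_turn("play some music", ["set_timer", "play_music"]).reply)
    print(handle_turn("what's the weather like", ["set_timer", "play_music"]).reply)
```

The design choice worth noting in this sketch is that the fallback path never punishes the user for leaving the expected script; it accepts the utterance and simply narrows the conversational frame again.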
Interfaces for Auditory, Tactile, and Cognitive Engagement
Once we move beyond the visible, other senses become all the more important: haptics, acoustics, rhythm. Feedback is no longer conveyed through colors or animations but through tactile impulses, sounds, or even changes in temperature. This becomes particularly exciting when it comes to hybrid interfaces that combine voice with physical proximity, spatial positioning, or gesture. In this context, the role of UI design evolves into the orchestration of a multimedia, multisensory experience—where technology is perceptible to the body but never overwhelmingly dominant.
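As a rough illustration of such orchestration, the Python sketch below maps an abstract feedback event onto whichever output channels a device actually offers. The channel names and signal values are assumptions made for this example; real devices expose concrete haptic and audio APIs instead of these placeholders.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class FeedbackSignal:
    channel: str   # "haptic", "audio", "visual", ...
    payload: str   # e.g. a vibration pattern name or an earcon file


# One abstract event, several modality-specific renderings (placeholder values).
FEEDBACK_MAP: dict[str, list[FeedbackSignal]] = {
    "confirmed": [
        FeedbackSignal("haptic", "short_double_pulse"),
        FeedbackSignal("audio", "earcon_confirm.wav"),
        FeedbackSignal("visual", "subtle_glow"),
    ],
    "error": [
        FeedbackSignal("haptic", "long_buzz"),
        FeedbackSignal("audio", "earcon_error.wav"),
        FeedbackSignal("visual", "red_flash"),
    ],
}


def render_feedback(event: str, available_channels: set[str]) -> list[FeedbackSignal]:
    """Keep only the renderings the current device can actually play back."""
    return [s for s in FEEDBACK_MAP.get(event, []) if s.channel in available_channels]


if __name__ == "__main__":
    # A screenless wearable: haptics and audio, but no display.
    wearable_channels = {"haptic", "audio"}
    for signal in render_feedback("confirmed", wearable_channels):
        print(f"{signal.channel}: {signal.payload}")
```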
Such a development not only demands new design systems but also new competencies on the part of users. Those who design UIs for voice- and haptic-based devices will need more than foundational design knowledge and technical expertise; they will require a keen sense for dramaturgy, communication psychology, and situational awareness. Therefore, let us no longer speak of GUIs and VUIs, but rather of FUIs – Future User Interfaces!
From GUI and VUI to FUI – Future User Interface
The graphical user interface (GUI) has shaped our way of thinking for decades: icons, windows, navigation bars, and menu structures provided a familiar framework that allowed users to orient themselves. But as we transition into a world of ubiquitous, often invisible interface effects – whether in smart homes, wearables, or intelligent urban infrastructures – new paradigms are coming to the forefront. The paradigm of the FUI – the so-called “Future User Interface” – is no longer reducible to the screen. It becomes a conversation partner, a spatial experience, an embodied behavior.
The challenge will not only be to develop new interfaces but also to cultivate a new design consciousness capable of establishing routines for engaging with emerging interfaces. In a future where the world itself can be considered an interface, the goal is to shift focus from the merely visible to the holistically experienceable (cf. Kaerlein 2015).
Conclusion: Design Beyond the Visible
UI design that moves away from the visible surface of the display challenges us to question established visual and perceptual routines and to structure our relationship with everyday technology through other senses. In a world where technology increasingly dissolves into the social fabric and no longer occupies a domain of its own, the quality of interaction itself moves ever more into focus; the visual component often remains mere packaging. Natural forms of communication such as spoken language, haptics, facial expressions, gestures, and contextual cues are taking over the space once dominated by pixels and buttons. Such a shift requires not only new tools but, above all, a new understanding of design: empathetic, situational, and multisensory. Those who design for the invisible today are shaping tomorrow's culture of meaningful encounters, a culture that until recently was still fixated on control-oriented interfaces. A less conventional interface does not mean less responsibility in design; on the contrary, as feedback and control elements become less explicit, the need for anticipation and ethical sensitivity grows. The empathetic integration of technology into our environment corresponds with the ubiquity of the interface itself. Last but not least, it is also a matter of rethinking tools and development methods: from acoustic wireframes to ethical prototypes, new approaches are needed to build a future that is not only visible but can be experienced multimodally.
Sources
Galloway, Alexander R. (2012): The Interface Effect. Polity Press, Cambridge.
Hookway, Brandon (2014): Interface. The MIT Press, Cambridge, Massachusetts.
Kaerlein, Timo (2015): “Die Welt als Interface.” In: Florian Sprenger & Christoph Engemann (eds.): Internet der Dinge. Über Smarte Objekte, intelligente Umgebungen und die technische Durchdringung der Welt. Transcript Verlag, Bielefeld, pp. 137–162.