DEV Community

Rosario De Chiara
Not everything can be a chat

Introduction

The explosive growth of Large Language Models (LLMs) has brought unprecedented capabilities in natural language processing and generation to developers worldwide. However, as these powerful tools become increasingly accessible through APIs, we're witnessing a concerning trend: the automatic assumption that chat interfaces are the optimal - or worse, the only - way to interact with LLMs. This assumption stems largely from the fact that popular LLM services initially presented their capabilities through chat interfaces, leading to what we might call "chat interface inertia."
A telling sign of this inertia is the colloquial use of the name "ChatGPT" (which is, strictly speaking, OpenAI's web application built on top of its LLMs) to refer to virtually any application of LLMs.

The Problem of Chat Interface Inertia

This inertia manifests itself in numerous ways across the software industry. Developers and designers, perhaps unconsciously influenced by their experiences with ChatGPT and similar services, frequently default to chat-based interfaces even when they're far from ideal for the task. We see this in solutions that propose "chatting with your documents," "conversing with your dataset," or "asking your analytics questions to a chatbot."


While chat interfaces excel in certain scenarios, particularly those involving open-ended exploration or complex query refinement, they often represent a step backward in user experience for many specialized applications. Consider a power plant's operational dashboard: transforming clear, at-a-glance KPI visualizations into a chat interface where operators must "converse" with their data not only adds unnecessary complexity but could potentially impact operational efficiency and safety.

This reflexive implementation of chat interfaces also overlooks the vast potential of LLMs to enhance existing UI paradigms. Rather than forcing every interaction into a conversational format, we should consider how LLM capabilities can augment traditional interfaces - making them more intelligent, adaptive, and user-friendly without fundamentally altering their form.
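To make the augmentation idea concrete, here is a minimal Python sketch of "intelligent form completion": the user pastes free text, an LLM turns it into structured JSON, and the application fills ordinary form fields, with no chat window involved. The `llm_complete` function is a hypothetical placeholder for whatever model API you use; it is hard-coded here so the sketch runs without a network call, and is not any vendor's real interface.

```python
import json


def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call.

    In production this would send `prompt` to a model provider and
    return its text response; here it returns a canned JSON answer
    so the sketch is self-contained and runnable.
    """
    return json.dumps({
        "name": "Ada Lovelace",
        "street": "12 St James's Square",
        "city": "London",
        "postcode": "SW1Y 4JH",
    })


def autofill_form(free_text: str) -> dict:
    """Map unstructured text onto fixed form fields via the LLM.

    The user-facing UI remains a plain form; the model works behind
    the scenes instead of being exposed as a chatbot.
    """
    prompt = (
        "Extract name, street, city and postcode as a JSON object "
        "from the following text:\n" + free_text
    )
    fields = json.loads(llm_complete(prompt))
    # Validate against the schema the form actually expects, since
    # model output should never be trusted blindly.
    expected = {"name", "street", "city", "postcode"}
    if set(fields) != expected:
        raise ValueError(f"unexpected fields: {set(fields)}")
    return fields


if __name__ == "__main__":
    pasted = "Ship to Ada Lovelace, 12 St James's Square, London SW1Y 4JH"
    print(autofill_form(pasted))
```

The design point is that the LLM is an internal component with a validated, structured contract, not the interaction surface itself: the user never "converses" with anything, yet still benefits from the model's language understanding.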

Conclusion

Breaking free from chat interface inertia requires a fundamental shift in how we think about LLM integration. Instead of asking, "How can we make this into a chat interface?" we should ask, "What is the most effective way to harness LLM capabilities for this specific use case?" Sometimes the answer will be a chat interface, but it won't always be.

The future of LLM-powered applications lies not in forcing every interaction into a conversational format but in thoughtfully integrating these powerful tools into purpose-built interfaces that most effectively serve their users' needs. This might mean enhanced search interfaces, intelligent form completions, dynamic data visualizations, or new interaction paradigms we haven't yet imagined. The key is to break free from the assumption that chat is the natural or optimal way to interact with LLMs.
