The Role of UI in the AI Age
May 14, 2025
As AI has progressed, I’ve been thinking about the role of UI in a landscape increasingly dominated by chat apps. I never liked the idea that plain chat would be the future of interfaces. Typing is slow and I don’t like talking to my phone. Rich UI still communicates information more efficiently than language in most cases, and I shouldn’t need to solve a mini prompt-engineering problem for every AI interaction I have. In most domain-specific use cases I don’t necessarily know the best way to ask for a solution in the first place.
I’ve firmly believed since the advent of LLMs that text-only chat isn’t the future of interfaces and that we still need rich UI. At Vetted we started working on a product recommendation chat agent shortly after GPT-3.5 was announced, and at the time I knew I didn’t want to do anything text-only: I wanted it to produce rich shopping UI components. Incorporating rich UI into LLM answers back then was harder than it is today, but it was absolutely necessary to provide a better product shopping experience and differentiate ourselves. When reviewing results with traditional product components, it’s much easier to take in a large list of products because of the visuals, the consistency of the design, and the emphasis on the most important data points. The same information in text form is much harder to keep track of. I think chat-heavy LLM experiences became commonplace because they’re the simplest to implement, not because they’re the best interface.
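One common pattern for this (a hypothetical sketch, not Vetted’s actual implementation) is to prompt the LLM to emit structured output, which the client then maps to UI components instead of rendering raw prose. The component names and fields below are invented for illustration:

```python
import json

# Hypothetical structured output from an LLM prompted to answer with a
# JSON "components" list alongside a short conversational reply.
llm_output = json.dumps({
    "reply": "Here are two lightweight tents that fit your budget.",
    "components": [
        {"type": "product_card", "name": "Trailhead 2P", "price": 189.99},
        {"type": "product_card", "name": "Summit UL1", "price": 249.00},
    ],
})

def render(message: str) -> list[str]:
    """Map each structured component to a UI widget; fall back to text."""
    data = json.loads(message)
    widgets = [f"text: {data['reply']}"]
    for c in data.get("components", []):
        if c["type"] == "product_card":
            widgets.append(f"card: {c['name']} (${c['price']:.2f})")
        else:
            # Unknown component types degrade gracefully to plain text.
            widgets.append(f"text: {json.dumps(c)}")
    return widgets

for widget in render(llm_output):
    print(widget)
```

The key design choice is that the model describes *what* to show while the client owns *how* it looks, so the product keeps its visual consistency and emphasis on important data points regardless of how the model phrases its answer.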
Rich UI components are getting integrated into text chat more often now, and new ones keep being added to major LLM providers’ chat interfaces: products, images, and generated artifacts, to name a few. I can envision a future where major LLM chat UIs produce flexible content from diverse data sources. Would that mean they’ll capture every use case with a single super app? I still don’t believe so: a product is more than its UI components.
Products as Mental Models
I’ve been trying to figure out what these do-everything AI products have been lacking compared to traditional software. My view is that a product is fundamentally about selling a mental model and workflow. A mental model provides an information model for understanding a problem, while a workflow offers a UI-driven process for solving it within that framework. Take email: it has a mental model built around sending and receiving messages to addresses, with interfaces providing workflows for finding, reading, and responding to those messages. Almost any application can be broken down into its proposed mental model paired with a workflow implementation. While a super chat app might handle most use cases, it cannot perfect a specific mental model and workflow for every problem. Domain-specific products can continue to differentiate themselves, innovate, and persist.
Developing each mental model and corresponding workflow requires significant effort. Adding a new workflow to a do-everything agent is still challenging, and designing an intuitive workflow with a repeatable, user-friendly process takes substantial work. It’s also subjective. There are no universally ideal workflows, so no single app can optimally solve every problem for all users. Well-designed software solutions are needed because few people possess the domain expertise to envision and implement polished workflows for their specific challenges.
Enabling New Workflows Raises New Challenges
Many of the attempts I’ve seen at incorporating AI into existing products have revolved around taking their data and creating summaries from it. RAG-to-text summarization is going to be a solved problem; it doesn’t differentiate you, and it doesn’t help users do anything new. With AI, we can now create more powerful and flexible workflows that were previously impossible, but the focus needs to be on designing the best solution for the problem, not building what’s easiest with the technology. The real question hasn’t changed: what’s the ideal mental model and workflow to solve this problem? How can you perfect that workflow and make it reproducible and intuitive? That’s the core of all products, and with AI’s flexibility, we can better realize that ideal than we can with traditional CRUD apps. Whether chat is involved, or you need a hybrid approach, or managed artifacts, or a traditional UI, LLMs can help you further achieve that optimal workflow.
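To make the commoditization point concrete, here is a minimal sketch of the retrieve-then-summarize pattern over an in-memory corpus. Everything here is hypothetical: a real system swaps in an embedding model, a vector store, and an actual LLM call, but the overall shape stays this simple, which is why it no longer differentiates a product.

```python
import re

# Toy corpus standing in for a product's data.
DOCS = [
    "Q3 revenue grew 12% year over year, driven by subscriptions.",
    "The mobile app redesign shipped in August.",
    "Churn ticked up slightly among enterprise accounts in Q3.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query
    (a stand-in for embedding similarity search)."""
    terms = set(re.findall(r"\w+", query.lower()))
    def score(doc: str) -> int:
        return len(terms & set(re.findall(r"\w+", doc.lower())))
    return sorted(docs, key=score, reverse=True)[:k]

def summarize(query: str, context_docs: list[str]) -> str:
    """Assemble the prompt a real pipeline would send to an LLM."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return f"Summarize the following for the question '{query}':\n{context}"

prompt = summarize("How did Q3 go?", retrieve("How did Q3 go?", DOCS))
```

The whole workflow is retrieval plus a prompt template; the hard, differentiating work lies in everything this sketch leaves out.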
However, flexibility comes at a cost. We’re experiencing a paradigm shift, and harnessing LLMs to enable more powerful workflows remains a poorly defined process. What AI enables doesn’t make designing UI easier; it makes it harder. It allows us to solve more complex use cases with greater detail and flexibility, but that complexity requires harder-to-build workflows. We gain power but face more edge cases and complications. AI might make us more productive, but we’ll need that productivity boost to manage the coming complexity. After years of building stability through deterministic software design, we are now adding a massive non-deterministic component. Incorporating LLMs into our software in sophisticated ways will present significant new challenges.
Designing for Problems, Not Technology
I find the current state of AI harkens back to the early days of software and the internet. Each paradigm shift introduced new ways to model problems and create workflows to solve them, and getting it right required extensive iteration. We initially created UI based on technological limitations: CLI design reflected early computing constraints, and document-based websites were limited by the static nature of the early web. As we extracted more from those platforms, we mimicked existing solutions through skeuomorphic design, copying workflows from around the office. Over time, we gradually refined our software solutions to offer features only possible in the digital realm, with design patterns like infinite scrolling and command palettes.
On the path from doing what’s easy, to doing what’s familiar, to doing something entirely new, we remain in the “doing what’s easy” phase with AI, and we’ve barely begun to harness what’s possible. I’ve observed a lack of vision for using AI to enable fully realized mental models and workflows, and a tendency to design around the technology rather than the problem being solved. We need to put the focus on the workflow first, then work out how to apply the power and flexibility of LLMs to achieve that vision. Doing that will require refining our design and development processes, and inventing the necessary techniques will take time.
I’ve seen plenty of uncertainty from companies about finding their place in the AI era, or whether they even have one at all. Ultimately the fundamental challenge remains unchanged: you’re selling a solution to a problem through a mental model and workflow. AI offers a powerful new way to build those solutions, but it is not the solution in and of itself.