Agentic AI is taking off, and for good reason. AI agents can now write code, conduct research, plan travel, handle customer service, and more. Yet amid the excitement about what AI agents can do, a key question has been neglected: how do we design the human side of the equation?
That question is critical, because agentic AI isn’t just another feature to bolt onto existing products. It’s a fundamentally different kind of software that demands fresh thinking. Unlike traditional software, agentic AI can be proactive and conversational, sometimes even anthropomorphic. It doesn’t just respond to commands; it initiates actions and makes decisions autonomously.
This capability is what makes agentic AI so useful, but it’s also what makes effective interactions hard to design. A central user-experience (UX) challenge is coordination: the interplay between what users do, what they experience, and what the AI is doing, both visibly and behind the scenes.
Trust, control, and transparency are essential to the agentic-AI user experience, and they all depend on getting this coordination right.
Here, we introduce a framework for thinking about human-AI coordination. We also offer a vocabulary for characterizing agentic experiences, including when the AI feels too absent, too intrusive, or appropriately calibrated.
A framework for human-AI coordination
One of the most critical decisions in AI UX design is how visible and interactive AI capabilities should be. Should users direct the agent step by step, let it act autonomously, or work somewhere in between? And how should this change based on the task, the user’s expertise, and the current context?
You can think of coordination along these three dimensions:
- Human involvement: how much effort and attention the user invests in directing or monitoring the AI;
- AI salience: how prominent the AI feels in the experience (for example, a conversational chatbot with a name and persona has high salience, autocomplete suggestions have lower salience, and AI-generated navigation menus and backend optimizations have little or none);
- AI activity: what the AI is doing, whether or not the user sees it.
Coordination is about aligning these dimensions. When human involvement and AI salience are both low, coordination is light-touch. When both are high, coordination is more hands-on. The right balance is often somewhere in between, with an awareness of what the AI is doing in the background.
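To make these dimensions concrete, here is a minimal sketch of how a prototype might represent a coordination state; the class and field names are ours, purely illustrative, not part of any established API.

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class CoordinationState:
    """A snapshot of human-AI coordination at one point in a workflow."""
    human_involvement: Level  # effort and attention the user invests
    ai_salience: Level        # how prominent the AI feels in the experience
    ai_activity: str          # what the AI is doing, visible or not

    def is_aligned(self) -> bool:
        # Crude check: involvement and salience should move together,
        # not sit at opposite extremes.
        return abs(self.human_involvement.value - self.ai_salience.value) <= 1
```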
Three zones of coordination
Rather than treating agent autonomy as a binary choice — a fully autonomous system or one with a human in the loop — it is practical to consider three “zones” of coordination.
- Done with me (mutually collaborative): User and AI work closely together across multiple phases — initiation, monitoring, updating, and completion. Imagine collaborating with an AI assistant on a complex document or research project, with frequent back-and-forth. AI salience and human involvement are both high. The user is fully in the loop.
- Done for me (heavily automated): Tasks are handled by AI with minimal user input and oversight. The user initiates the task and reviews the output; most of the work happens out of view. An example is an agent that researches competitors and delivers a summary report. The user is barely in the loop.
- Done under me (discreetly assisted): AI works in the background without announcing itself. The user may not even notice the assistance. Smart sorting, predictive text, and intelligently personalized content and navigation menus fall into this category. The AI quickly delivers outcomes users can assess and act on. The user is implicitly in the loop.
These aren’t rigid categories but calibration points for designing and delivering the right level of coordination to users. The goal is to match coordination intensity to the specific user, task, and context, rather than defaulting to a single mode everywhere or assuming that an autonomous agentic system eliminates the need for thoughtful coordination.
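As a sketch only, with all signal names and thresholds assumed rather than drawn from any shipped system, zone selection might look like this:

```python
from enum import Enum

class Zone(Enum):
    DONE_WITH_ME = "mutually collaborative"
    DONE_FOR_ME = "heavily automated"
    DONE_UNDER_ME = "discreetly assisted"

def pick_zone(task_risk: float, user_expertise: float, task_familiarity: float) -> Zone:
    """Map context signals (each scaled to [0, 1]) to a coordination zone."""
    if task_risk > 0.7 or user_expertise < 0.3:
        return Zone.DONE_WITH_ME   # high stakes or a novice user: collaborate closely
    if task_familiarity > 0.8 and task_risk < 0.2:
        return Zone.DONE_UNDER_ME  # routine, low-risk work: assist quietly
    return Zone.DONE_FOR_ME        # otherwise: automate, with review at the end
```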
The rhythm of human-AI coordination
Because both agents and users can work independently, coordination cannot be static. Workflows often move through multiple zones: high involvement during initiation, perhaps defining goals and constraints; lower involvement during execution; and then a spike at review and next steps.
We visualize these shifts as “coordination curves” — a variation of user-journey mapping that shows how human involvement and AI salience rise and fall across a workflow. High-level curves reveal the overall shape of an experience. Looking beneath the surface exposes specific AI touchpoints, handoffs, and decision points, helping UX design teams collaborate on bringing adaptive agentic systems to life.
As multiagent applications become more sophisticated, they enable longer, computationally intensive work such as research projects, complex analyses, and multistep workflows. These create valleys in the coordination curve: stretches where the AI operates independently and the user is minimally involved. These valleys require thoughtful design around notification, approval, monitoring, and auditing. More broadly, the UX layer must provide the transparency and controls needed to build trust, support adaptation and course correction, and ultimately deliver value.
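One lightweight way to work with coordination curves is to record them as phase-indexed points and flag the valleys programmatically. The following sketch invents its own format and names; it is one way a team might do this, not a standard tool:

```python
from dataclasses import dataclass

@dataclass
class CurvePoint:
    phase: str              # e.g., "initiation", "execution", "review"
    human_involvement: int  # 0 (absent) to 10 (hands-on)
    ai_salience: int        # 0 (invisible) to 10 (front and center)

def find_valleys(curve: list[CurvePoint], threshold: int = 3) -> list[CurvePoint]:
    """Return the stretches where the user is minimally involved: the spots
    that most need notification, approval, and monitoring design."""
    return [point for point in curve if point.human_involvement <= threshold]

# A research workflow with an autonomous middle stretch:
curve = [
    CurvePoint("initiation", 9, 8),  # user defines goals and constraints
    CurvePoint("execution", 1, 2),   # agent works independently
    CurvePoint("review", 8, 7),      # user inspects results and plans next steps
]
print([p.phase for p in find_valleys(curve)])  # ['execution']
```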
Case study: Adaptive coordination in practice
We developed an approach called “responsive salience,” whereby an AI agent automatically adjusts its visibility and interaction intensity to match the context.
The core insight is simple: in traditional software, most of the interface is static or deterministic. With agentic AI, behavior is nondeterministic, so a user’s needs for oversight can change moment to moment. A user who trusts an agent on a familiar task may prefer to be largely hands-off. In unfamiliar or high-stakes work, that same user may want more transparency, checkpoints, and tighter control.
Rather than forcing users to toggle settings, responsive salience lets the system adapt automatically. In our prototype, a monitoring agent continuously evaluates signals including task complexity, perceived risk, and user comfort level. When trust appears low — for example, when the user is a beginner or the workflow involves sensitive data — the system increases salience. It could do this by providing richer explanations, additional approval gates, and expanded transparency features. The user may then be notified of the change and, if needed, can override the agent’s choice. Once confidence recovers or the task ends, the salience settings quietly revert.
Over time, the system can learn from user behavior and explicit feedback, refining how quickly salience adapts and how far it goes. The result is autonomy that stays aligned with context.
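Reduced to code, the monitoring loop is a mapping from context signals to a salience level. This is a minimal sketch: the signal names, the trust formula, and the thresholds are chosen for illustration, not taken from the prototype.

```python
from dataclasses import dataclass

@dataclass
class ContextSignals:
    task_complexity: float  # 0-1, estimated from the task itself
    perceived_risk: float   # 0-1, e.g., sensitive data pushes this up
    user_comfort: float     # 0-1, inferred from behavior and past feedback

def target_salience(signals: ContextSignals, user_override: int | None = None) -> int:
    """Return a salience level: 1 = quiet, 2 = checkpointed, 3 = fully gated."""
    if user_override is not None:
        return user_override  # an explicit user choice always wins
    trust = signals.user_comfort * (1 - signals.perceived_risk)
    if trust < 0.3 or signals.task_complexity > 0.8:
        return 3  # low trust or a hard task: rich explanations, approval gates
    if trust < 0.6:
        return 2  # moderate trust: periodic checkpoints
    return 1      # high trust on familiar work: stay out of the way
```

When the target level changes, the system applies the corresponding interface affordances and, as described above, lets the user know and override.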
Early testing with users validated the idea while revealing some clear tradeoffs. Preferences diverged sharply: some found high-salience modes exhausting (“I felt visually fatigued by the large amount of communication”), while others appreciated the guidance (“It gave me options for what I might want to ask next”). One participant expressed the desire for a middle ground: “I want some oversight on what the agent is planning before execution. … The high setting was too annoying because I had to approve everything.”
These results underline that user preferences for autonomy versus control can vary substantially, even in similar tasks. Responsive salience offers a solution by dynamically adjusting whether a given task is done-with-me, done-for-me, or done-under-me.
Tellingly, several participants did not notice responsive salience until we pointed it out. That suggests that when the system is well calibrated, dynamic coordination can feel seamless rather than intrusive.
Coevolution with agentic AI
Agentic AI represents a genuine shift in what software can do, but realizing its potential depends just as much on what humans do alongside it. The frameworks, protocols, and infrastructures for building agents are maturing fast. The UX layer needs to catch up.
Coordination isn’t a one-and-done problem but a moving target. As users gain expertise, tasks change, and AI capabilities evolve, the optimal balance of user involvement and AI salience will change too.
So the goal isn’t to find the perfect static design, as it might have been before generative AI, but to build systems and a shared vocabulary that evolve as we learn what works in practice. Agentic AI makes this both necessary and possible: its behavior can be unpredictable, so users and designers must adjust, yet the technology itself can also learn, adapt, and course-correct proactively.
Teams that get this right won’t simply build more capable agents. They will build agents that people trust, adopt, and even enjoy collaborating with.