Hybrid intent-based interfaces: Typical interactions and UI patterns

Andrii Rusakov
Published in SoftServe Design · 5 min read · Feb 24, 2024

Hybrid Intent-based Interfaces empowered by AI explore the fusion of Conversational and Traditional GUI approaches — they focus on tasks like drafting, classification, editing, summarization, and question-answering. Generative AI models¹ can work with six key modalities: text, audio, image, video, 3D, and code.

This new paradigm of tools has raised user expectations for digital experiences. Rapid technology growth has brought both opportunities for productivity gains² and challenges such as the articulation barrier³.

This article outlines typical intent-based interactions and key patterns to support a hybrid UI approach.

Typical Intent-Based Interactions

Interaction between humans and machines can be initiated from either side. A human can start a DIFM (Do It For Me) experience, or the machine can proactively suggest one.

Reactive Interface

A user prepares a task specification for the machine and provides discrete input. The machine processes the input data and returns output for the user to review. Output can be adjusted with modality-specific tools: language and tone of voice for text; aspect ratio, style, and mood for images; programming language for code; and so on.
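This reactive loop can be sketched in code. The type names (`TaskSpec`, `Modality`) and the `generate` stub below are illustrative assumptions for the example, not any real product API; a real application would call a generative AI service here.

```typescript
// Sketch of the reactive DIFM loop: discrete input in, reviewable output out.
// All names here are invented for illustration.

type Modality = "text" | "image" | "code";

interface TaskSpec {
  modality: Modality;                  // which of the key modalities is targeted
  intent: string;                      // the user's prompt / task specification
  adjustments: Record<string, string>; // modality-specific tools (tone, style, ...)
}

// Placeholder for a generative model call; echoes the spec back as a string.
function generate(spec: TaskSpec): string {
  const opts = Object.entries(spec.adjustments)
    .map(([key, value]) => `${key}=${value}`)
    .join(", ");
  return `[${spec.modality}] ${spec.intent} (${opts})`;
}

// The user reviews the result, then tweaks adjustments and regenerates.
const draft = generate({
  modality: "text",
  intent: "Draft a product announcement",
  adjustments: { tone: "friendly", language: "en" },
});
```

The point of the shape is that modality-specific tools live in the task specification, so adjusting tone or aspect ratio is just another discrete input cycle.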

Proactive Interface

A machine treats routine human activities as continuous input. It looks for opportunities to support the user with contextual insights and suggests AI-accelerated actions for task completion.
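As a rough illustration, a proactive interface might watch a stream of user events and surface a suggestion when it detects a repetitive task worth accelerating. The event shape and the simple repetition heuristic below are invented for the example:

```typescript
// Minimal sketch of proactive observation: continuous events in,
// an optional suggestion out. Names and thresholds are assumptions.

interface UserEvent {
  action: string; // e.g. "edit", "open", "scroll"
  target: string; // the item being worked on
}

interface Suggestion {
  message: string;
}

function observe(events: UserEvent[]): Suggestion | null {
  // Trivial heuristic: repeated edits to the same target hint at a task
  // the machine could offer to finish (e.g., batch rewriting).
  const counts: Record<string, number> = {};
  for (const e of events) {
    if (e.action !== "edit") continue;
    counts[e.target] = (counts[e.target] ?? 0) + 1;
  }
  for (const target of Object.keys(counts)) {
    if (counts[target] >= 3) {
      return { message: `Let AI finish editing "${target}" for you?` };
    }
  }
  return null; // no opportunity detected yet
}
```

A production system would use far richer context signals, but the contract is the same: the machine proposes, and the human stays in control of whether to accept.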

Key UI Patterns for Hybrid Interfaces

Building end-user applications is one of the most significant opportunities across the Generative AI value chain⁴. User input to an AI model⁵ can take several forms: direct prompt instructions, interfaces with no explicit prompt controls, or an AI assistant embedded in a specific UI.

Most such applications fit into one of the following categories: Data Analysis and Insights, Virtual Assistants, or Content Creation.

Data Analysis and Insights

Analytical activities related to data classification, enrichment, summarization, and insights communication.

Data Preparation
Work with data in a tabular format to support data modeling, categorization, cleansing, and enrichment.

Key UI elements: Table view, AI action selector, Task specification view.

Examples: Data View by Relevance AI and ThoughtSpot

Data Analysis
Metrics or criteria suggestions to support exploratory, As-Is (descriptive, diagnostic), and To-Be (predictive, prescriptive) data analysis.

Key UI elements: AI action selector, Suggested metrics view.

Example: Salesforce Tableau Pulse

Visual Reporting
Support dashboard creation and DataViz in response to user questions.

Key UI elements: Data input, Configurable DataViz widgets.

Examples: Dashboard creation by Polymer and Gen DataViz Answer by ThoughtSpot

Insights & Explanations
Explainability of DataViz elements and content summarization.

Key UI elements: Question input and hints, Explanation area.

Example: Dive-In Explanation by Salesforce Tableau Pulse

Virtual Assistants

Conversational interface for continuous user support.

Co-Pilot
A combination of proactive insight delivery and reactive question answering. Provides contextual support via a conversational interface.

Key UI elements: Main working area, Insights area, Prompt input.

Chatbot
Reactive conversational interfaces aimed at answering questions and performing user tasks specified as discrete input.

Key UI elements: Chat area, Prompt input.

Avatar
An embodied digital human agent that provides user support and engages users through greater immersion and personalization.

Key UI elements: Embodied agent, Voice/text chat.

Example: Avatar by Soul Machines

Content Creation

New content creation activities like drafting and editing.

Wizard
A step-by-step interface, oriented horizontally or vertically, that guides the user through a series of tasks; it can be combined with the Split Screen pattern.

Key UI element: Workflow steps indication.

Examples: Wizard by Growthbar, Hypotenuse AI, Jasper AI

Split Screen
Divides the interface into areas for input (intent prompt, reference data, optional criteria) and output (generated options, adjustment tools).

Key UI elements: Input and output areas.

Examples: Split Screen by Toolsaday, Adobe Firefly, Relevance AI
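One way to model the Split Screen pattern is as a single state object with paired input and output areas; the field names below are assumptions for illustration, not taken from any of the tools listed above.

```typescript
// Minimal state shape for the Split Screen pattern (names assumed):
// the input area collects intent, references, and criteria; the output
// area holds generated options the user can select and adjust.

interface GeneratedOption {
  id: number;
  content: string;
}

interface SplitScreenState {
  input: {
    intentPrompt: string;
    referenceData: string[];
    criteria?: Record<string, string>; // optional constraints, e.g. aspect ratio
  };
  output: {
    options: GeneratedOption[];
    selectedId?: number; // which option the user is currently adjusting
  };
}

const state: SplitScreenState = {
  input: {
    intentPrompt: "Logo in flat style",
    referenceData: [],
    criteria: { aspectRatio: "1:1" },
  },
  output: {
    options: [{ id: 1, content: "draft-1" }],
    selectedId: 1,
  },
};
```

Keeping input and output in one state object makes the pattern's core loop explicit: editing the left side invalidates or regenerates the right side.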

Quick Tool
Contextual actions applied to selected items in the main working area.

Key UI elements: Contextual actions menu, Prompt input.

Examples: Quick Tool by Adobe Photoshop and Spline AI

Summary

AI is changing traditional human-machine interaction: its capabilities and output quality are evolving rapidly, and intent-based interfaces explore the transition from the DIY (Do It Yourself) to the DIFM (Do It For Me) approach. Still, AI-enabled interfaces inherit a lot from traditional ones.

There is a high chance that with enough emphasis on explainability and trust⁶, hybrid intent-based interfaces will become the new traditional interfaces for process automation.

Literature

  1. The Generative AI Dossier by Deloitte AI Institute
  2. ChatGPT Lifts Business Professionals’ Productivity and Improves Work Quality by Jakob Nielsen, NN/g
  3. The Articulation Barrier: Prompt-Driven AI UX Hurts Usability by Jakob Nielsen, UX Tigers
  4. Exploring opportunities in the generative AI value chain by QuantumBlack AI, McKinsey
  5. AI Models in Software UI by Luke Wroblewski
  6. People + AI Guidebook: Explainability + Trust by Google PAIR
