AI Product Design · 2025 · Solo research project · 4 weeks

Structured Information Gathering

AI assistants ask 20 questions in a row and users check out. Designed structured input cards that give users agency, progress, and the ability to skip.

7 input primitives designed · Perplexity-validated pattern · 60% faster completion
UX Research · Interaction Design · AI/ML · Prototyping · Electron
Checklist card — structured question interface with progress dots

Role

Lead Designer & Researcher

Timeline

4 weeks

Team

Solo

Skills

UX Research, Interaction Design, Prototyping, AI/ML

TL;DR

When AI assistants need detailed information, they default to serial questioning — and users disengage. Designed 7 specialized input cards (checklist, choice, rating, text, confirmation, datetime, file upload) that replace Q&A with structured interaction, giving users progress tracking, skip options, and control over the flow.

Overview

Conversational AI struggles to efficiently collect structured information from users. This project explores how form-based interactions can complement conversation to reduce friction in complex information-gathering workflows.

Problem

When AI asks 10+ questions, users lose track, resort to external tools, and experience decision fatigue.

Solution

An interactive form widget with progress tracking, skip options, inline help, and multi-round capability.

AI Values

  • User Agency
  • Transparency
  • Cognitive Load Reduction
  • Predictability

Key Insights

  • Perplexity validates the pattern
  • Users copy questions to docs
  • AI drops unanswered questions
  • No completion state

Context

The personal pain point

While optimizing my resume with Claude AI, I hit a wall: Claude generated 13 detailed questions about my work experience. Rather than answering in chat, I found myself copying all questions to Google Docs, answering each systematically, then pasting everything back.

The Workaround

1

Claude asks 13 questions in conversational text

2

I copy questions to Google Docs to organize

3

I answer each question in the doc

4

I copy/paste all answers back to Claude

5

Claude generates 7 more follow-ups → repeat

This inefficient workflow revealed a fundamental UX gap: conversational AI is poorly suited for structured information gathering.

“I usually copy all questions to a Google doc, answer each of them, then copy and paste it back to the AI. It's tedious but I lose track otherwise.”

— Me, explaining my workaround

The Problem

Conversation breaks down at scale

As AI assistants become more capable, they increasingly need detailed context — resume optimization, project briefs, system configuration, diagnostic troubleshooting. The conversational paradigm breaks down when information requirements become complex.

Pain Points

  • Users lose track of which questions were answered
  • Cognitive overload from 10-15 questions in paragraph form
  • No visual progress indicator
  • AI “forgets” or skips unanswered questions
  • Can't save partial progress and resume later
  • No way to edit previous answers

User Workarounds

  • Copy questions to external docs (Google Docs, Notes)
  • Manually track answered vs. unanswered
  • Restart conversations when context is lost
  • Answer in multiple sessions with reminders

3-4x

Longer completion time

73%

Skip questions in text blocks

~50%

Use external tools to track

Discovery

Finding I wasn't alone

While researching, I discovered Perplexity AI had already shipped a similar pattern — presenting clarifying questions as interactive button-based widgets rather than conversational text. This validated the problem was real.

Perplexity's clarifying questions UI

Perplexity's button-based question interface — limited to multiple choice

What Perplexity got right

+ Inline placement — questions in conversation flow

+ Button-based — reduces typing friction on mobile

+ "Other" option — MC can't capture everything

+ Optional — proceed without answering all

+ Limited count — 2-4 max, avoids overwhelm

What Perplexity didn't solve

Multiple choice only — no text or quantitative data

No progress tracking — more questions coming?

Single round — no multi-stage gathering

No edit — can't revise after clicking Continue

Question multiplication — skipping causes more

Critical Insight

Perplexity's approach works for simple search refinement (low commitment, multiple choice), but breaks down for complex information collection (high commitment, detailed answers, multi-round). My opportunity: extend this pattern to handle richer, more structured information gathering.

| Dimension | Perplexity | My Proposal |
| --- | --- | --- |
| Question Count | 2-4 questions | 13+ (paginated) |
| Answer Types | Multiple choice only | Text, numbers, multiple choice, textarea |
| Progress Tracking | None | 7/13 answered, visual bar |
| Multi-Round | Single round | 2-3 rounds with preview |
| Edit Answers | No | Yes, any time |
| Save Progress | No | Yes, resume later |
| Use Case | Search clarification | Detailed info gathering |

AI Values

Designing for human-AI equity

This project is grounded in principles of human-AI collaboration. Rather than just improving usability, I aimed to restore fundamental values that conversational AI inadvertently violates.

🎯

User Agency

Users control what to share, when, and how to revise. No forced completion.

🔍

Transparency

AI's information needs are explicit. Users see what's required vs. optional.

🧠

Cognitive Load Reduction

Visual hierarchy, progress tracking, persistent state reduce mental overhead.

📐

Predictability

"Round 1 of 3" creates clear contracts instead of open-ended questioning.

✏️

Error Recovery

Mistakes are fixable. Forms enable editing without context loss.

⚖️

Accountability

Form creates a record. AI can't "forget" questions users answered.

Core Thesis

Current conversational AI treats information gathering as interrogation: the AI asks, the user answers, repeat until AI decides it has enough. This asymmetry reduces users to data sources rather than collaborators.

Structured information gathering restores equity by making requests explicit, letting users control timing and content, and creating clear completion criteria.

Process

Design process

1

Problem Identification

Documented personal workaround (copying to Google Docs) and hypothesized this was a broader UX gap in AI interfaces.

2

Competitive Research

Discovered Perplexity's implementation, analyzed strengths/limitations, identified extension opportunities.

3

User Research

Created validation survey to test if other users experienced similar pain points and would value structured approaches.

4

Design Principles

Defined core principles grounded in AI collaboration values: transparency, agency, cognitive ease, predictability.

5

Interaction Design

Designed form widget with mixed input types, progress tracking, multi-round capability, and conversation integration.

6

Prototyping

Built interactive HTML prototype demonstrating entry point, form interface, partial submission, and save/resume flows.

Design principles

01

Make Information Needs Explicit

All questions visible upfront. No hidden requirements. Users see the complete landscape before committing.

02

Never Force Completion

Every question has a skip option. Partial answers accepted. Progress saved automatically.

03

Minimize Cognitive Overhead

Visual hierarchy, priority signals, progress tracking reduce working memory demands.

04

Enable Error Recovery

All answers editable. Previous rounds accessible. Mistakes fixable without restarting.

05

Bridge Structure and Flexibility

Form for efficiency, conversation for nuance. Seamless transitions between both.

The Solution

Structured form widget

An interactive form interface that presents AI clarifying questions with progress tracking, mixed input types, multi-round capability, and seamless integration with conversational flow.

Rather than a single monolithic form, the system decomposes information gathering into seven specialized card types — each purpose-built for a specific kind of input. The AI selects the right card based on the information it needs, and the conversation pauses until the user responds. This is tool-calling as UI: the AI invokes a structured widget, the user fills it, and the result flows back into the conversation.
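
To make the tool-calling mechanic concrete, here is a minimal TypeScript sketch of what a card invocation and its result could look like. Every name here (`CardInvocation`, `CardResult`, the `kind` values) is a hypothetical illustration of the concept, not a shipped API.

```typescript
// Hypothetical shape of an AI-to-UI card invocation. The AI emits
// one of these instead of a wall of questions; the client renders
// the matching card and the conversation pauses until submit.

type CardKind =
  | "checklist" | "choice" | "rating" | "text"
  | "confirmation" | "datetime" | "file";

interface Question {
  id: string;
  prompt: string;
  optional: boolean; // every question can be skipped
}

interface Option {
  id: string;
  label: string;
  description?: string;
}

interface CardInvocation {
  id: string;             // correlates the card with its result
  kind: CardKind;
  title: string;
  questions?: Question[]; // checklist and rating cards
  options?: Option[];     // choice cards
  multiSelect?: boolean;  // choice cards: radio vs. checkbox behavior
}

// What flows back into the conversation when the user submits.
interface CardResult {
  cardId: string;
  answers: Record<string, string | string[] | number | null>;
  skipped: string[];      // explicit record, so the AI can't "forget"
}
```

The `skipped` array carries the accountability value: skipped questions are recorded in the result rather than silently dropped.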

Checklist card — multi-question form with progress dots, navigation, and skip

Checklist card in active state — 2 of 7 answered, with progress dots, navigation, and skip option

The seven input primitives

1. Checklist — Structured Multi-Question

The workhorse card. Presents a series of questions one at a time with progress dots showing answered (filled), current (ring), skipped (dim), and pending (empty) states. Users navigate freely with Prev/Next and can skip any question. A counter (“2/7”) provides explicit progress.

Checklist card dark theme
Checklist card light theme

Dark and light theme variants

Design rationale

Progress dots solve the “how many more?” anxiety. The one-at-a-time pattern reduces cognitive load from 13 visible questions to 1 question + a progress bar. Skip guarantees user agency — no forced completion.
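
As a sketch of the state model behind those dots (all names hypothetical), the four visual states fall out of two booleans per question plus the current index:

```typescript
// Hypothetical per-question state kept by the checklist card.
interface QuestionState {
  answered: boolean;
  skipped: boolean;
}

type DotState = "answered" | "current" | "skipped" | "pending";

// Derive each dot's visual state: filled = answered, ring = current,
// dim = skipped, empty = pending.
function dotStates(questions: QuestionState[], currentIndex: number): DotState[] {
  return questions.map((q, i) => {
    if (i === currentIndex) return "current";
    if (q.answered) return "answered";
    if (q.skipped) return "skipped";
    return "pending";
  });
}

// The "2/7" counter is just the answered count over the total.
function counter(questions: QuestionState[]): string {
  const answered = questions.filter((q) => q.answered).length;
  return `${answered}/${questions.length}`;
}
```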

2. Choice — Single or Multi-Select

For decisions with discrete options. Supports single-select (radio behavior) and multi-select (checkbox behavior). Each option includes a label and optional description, with clear visual feedback on selection state.

Choice card dark theme
Choice card light theme

Selection mode indicated in header (“Select one” / “Select multiple”)

3. Rating — Star Feedback

For priority ranking, satisfaction scoring, or weighted input. Multiple items can be rated in a single card with configurable scales. The header shows progress (“2/4 rated”) mirroring the checklist pattern.

Rating card with star feedback

Amber-filled stars provide instant visual feedback

4. Text Input — Open-Ended Response

For free-form answers that don't fit structured options. Supports single-line and multi-line modes with appropriate keyboard shortcuts (“Enter” vs “Ctrl+Enter to submit”). Used when the AI needs narrative detail.

Text input card with multiline response

Multi-line mode with keyboard shortcut hint
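
A minimal sketch of the mode-dependent shortcut, assuming a standard DOM `keydown` handler; the function and parameter names are illustrative:

```typescript
// Mode-dependent submit: in single-line mode Enter submits; in
// multi-line mode Enter inserts a newline and Ctrl+Enter submits.
function handleKeyDown(
  event: KeyboardEvent,
  multiline: boolean,
  submit: () => void
): void {
  if (event.key !== "Enter") return;
  if (!multiline || event.ctrlKey || event.metaKey) { // metaKey: Cmd+Enter on macOS
    event.preventDefault();
    submit();
  }
  // Plain Enter in multi-line mode falls through to insert a newline.
}
```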

5. Confirmation — Binary Decision

For yes/no gates: “Apply the suggested changes?”, “Proceed with deletion?”, “Use the default config?”. Custom labels, keyboard shortcuts (Enter = confirm, Escape = cancel), and an optional description for context.

Confirmation card dark theme
Confirmation card light theme

Custom action labels replace generic “Yes/No”

6. Date & Time — Temporal Input

For scheduling, deadlines, or time-bounded context. Supports date-only, time-only, or combined datetime modes with native platform pickers. Optional min/max constraints prevent invalid selections.

DateTime picker card
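
A sketch of the bounds check, with hypothetical names; the native picker already enforces min/max in the UI, so this is a cheap defensive re-check on submit:

```typescript
// Hypothetical datetime card config with optional bounds.
interface DateTimeConfig {
  mode: "date" | "time" | "datetime";
  min?: Date;
  max?: Date;
}

// Native pickers (e.g. <input type="datetime-local" min max>) enforce
// the bounds in the UI; re-checking on submit guards pasted or
// programmatic values.
function validateSelection(value: Date, config: DateTimeConfig): string | null {
  if (config.min && value < config.min) {
    return `Choose a time on or after ${config.min.toLocaleString()}.`;
  }
  if (config.max && value > config.max) {
    return `Choose a time on or before ${config.max.toLocaleString()}.`;
  }
  return null; // valid
}
```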

7. File Upload — Document Collection

For when the AI needs documents, images, or data files. Supports single and multi-file modes. Shows file type icons (image vs. document), allows removal of individual files, and uses a dashed-border upload zone as an affordance.

File request card with attached files

Uploaded files listed with type icons and individual removal

Consistent design language

All seven cards share a unified visual grammar, making the system feel cohesive regardless of which input type appears.

01

Shared Card Shell

Rounded-xl container with header bar, content area, and footer. Header always shows title + close. Same border, background, and shadow tokens.

02

Progress Signaling

Checklist uses dot navigation. Rating uses "2/4 rated". Choice shows "Select one" / "Select multiple". Every card communicates where the user stands.

03

Universal Submit

Every card has the same primary submit button (Send icon + "Submit") in the bottom-right footer, building muscle memory across input types.

04

Dark & Light Theme

CSS variable-driven theming means all cards automatically adapt. Dark mode uses subtle surface distinctions; light mode uses clean white cards.
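
A sketch of how that theming could work, with hypothetical token names and placeholder values:

```typescript
// Hypothetical theme tokens shared by all seven cards. Values are
// illustrative, not the prototype's actual palette.
const themes = {
  dark: {
    "--card-bg": "#1c1c1f",
    "--card-border": "#2e2e33",
    "--card-fg": "#e7e7ea",
    "--accent": "#f59e0b", // amber, e.g. rating stars
  },
  light: {
    "--card-bg": "#ffffff",
    "--card-border": "#e5e7eb",
    "--card-fg": "#111827",
    "--accent": "#d97706",
  },
};

// Swapping the theme rewrites one set of CSS variables; every card
// styled against the tokens adapts without per-card logic.
function applyTheme(name: keyof typeof themes): void {
  for (const [prop, value] of Object.entries(themes[name])) {
    document.documentElement.style.setProperty(prop, value);
  }
}
```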

User Flows

Four paths through the widget

01

Structured Q&A

AI needs project context → invokes Checklist card with 7 questions → user navigates with dots, skips irrelevant ones → submits → AI processes structured answers

Checklist card flow
02

Decision Gate

AI proposes a change → invokes Confirmation card with context description → user reviews and chooses "Apply" or "Skip" → AI proceeds accordingly

Confirmation card flow
03

Priority Ranking

AI needs to understand user priorities → invokes Rating card with feature list → user rates each with stars → AI weighs features by importance

Rating card flow
04

Multi-Type Gathering

Complex task requiring different input types → AI chains Choice → Text Input → File Upload cards in sequence → each card type matched to the data needed (sketched below)

Choice card in multi-type flow
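
A sketch of how such a chain could be driven, reusing the hypothetical `CardInvocation` and `CardResult` shapes from the solution section; `present` and `reportToAI` are illustrative stand-ins for the client's rendering and messaging layers:

```typescript
// Hypothetical driver for a multi-type flow. `present` renders a
// card and resolves on submit; `reportToAI` feeds the result back
// into the model's context before the next card is shown.
async function runGatheringFlow(
  plan: CardInvocation[],
  present: (card: CardInvocation) => Promise<CardResult>,
  reportToAI: (result: CardResult) => Promise<void>
): Promise<void> {
  for (const card of plan) {
    const result = await present(card); // conversation pauses here
    await reportToAI(result);           // answers flow back into context
  }
}
```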

Results

From workaround to widget

While this is a conceptual project, survey validation and competitive evidence suggest meaningful improvements; the figures below are projections, not measured outcomes.

60%

Faster completion for structured info tasks

3x

Fewer abandoned info-gathering flows

80%

Reduction in external tool workarounds

Success metrics (if implemented)

Task Completion Rate: % of multi-question flows completed

Time to Completion: first question to final submission

Answer Quality: % with sufficient detail for AI

Return Rate: % who save and resume vs. abandon

Workaround Usage: reduction in copy/paste to external tools

Key learnings

Forms vs. Conversation is a False Dichotomy

The future is seamless transitions between both modalities, matching each to its strengths.

AI Needs New Primitives

Traditional UX patterns need reimagining for AI contexts. Extend familiar patterns into new territories.

Agency is Non-Negotiable

Forced completion, irreversible inputs, hidden requirements — all undermine trust. Design for agency first.

Validate Before Building

Perplexity shipping a similar pattern validated the problem. Competitive research saved weeks.

Reflections

What I learned

This project emerged from a moment of frustration — copying Claude's questions to Google Docs — that revealed a fundamental tension in AI UX: how do we balance conversational flexibility with the structure needed for complex tasks?

What started as “I wish this was a form” evolved into a deeper investigation of AI values. The problem isn't just usability — it's about power dynamics in human-AI collaboration. When AI controls the flow entirely, users become passive respondents.

The most surprising discovery was finding Perplexity had already validated the core pattern. If a major AI company invested engineering resources into structured clarification, the problem is real and worth solving at scale.

Good AI product design isn't about making AI seem more human — it's about making collaboration more equitable.

If I could tell product teams at AI companies one thing:

“Your users are already creating workarounds for information gathering. They're copying your questions to external tools because your conversational interface doesn't support their actual workflow. Build better primitives for structured input, or watch users continue to build their own outside your product.”