AI Product Design · 2025

Structured Information Gathering

Restoring user agency and reducing cognitive load when AI assistants need detailed information from the user.

Form widget shipped · Perplexity-validated pattern · 60% faster completion

UX Research · Interaction Design · AI/ML · Prototyping

Hero — Form widget interface

TL;DR

Built an interactive form widget that replaces conversational Q&A with structured input — progress tracking, mixed input types, skip options, and multi-round capability — so users stop copying AI questions to Google Docs.

Overview

Role

Lead Designer & Researcher

Timeline

3 weeks (Dec 2024 – Jan 2025)

Type

Self-initiated / Conceptual

Scope

Research, IxD, Prototyping

Conversational AI struggles to efficiently collect structured information from users. This project explores how form-based interactions can complement conversation to reduce friction in complex information-gathering workflows.

Problem

When AI asks 10+ questions, users lose track, resort to external tools, and experience decision fatigue.

Solution

An interactive form widget with progress tracking, skip options, inline help, and multi-round capability.

AI Values

  • User Agency
  • Transparency
  • Cognitive Load Reduction
  • Predictability

Key Insights

  • Perplexity validates pattern
  • Users copy Qs to docs
  • AI drops unanswered Qs
  • No completion state

Context

The personal pain point

While optimizing my resume with Claude AI, I hit a wall: Claude generated 13 detailed questions about my work experience. Rather than answering in chat, I found myself copying all questions to Google Docs, answering each systematically, then pasting everything back.

The Workaround

1

Claude asks 13 questions in conversational text

2

I copy questions to Google Docs to organize

3

I answer each question in the doc

4

I copy/paste all answers back to Claude

5

Claude generates 7 more follow-ups → repeat

This inefficient workflow revealed a fundamental UX gap: conversational AI is poorly suited for structured information gathering.

“I usually copy all questions to a Google doc, answer each of them, then copy and paste it back to the AI. It's tedious but I lose track otherwise.”

— Me, explaining my workaround

The Problem

Conversation breaks down at scale

As AI assistants become more capable, they increasingly need detailed context — resume optimization, project briefs, system configuration, diagnostic troubleshooting. The conversational paradigm breaks down when information requirements become complex.

Pain Points

  • Users lose track of which questions were answered
  • Cognitive overload from 10-15 questions in paragraph form
  • No visual progress indicator
  • AI “forgets” or skips unanswered questions
  • Can't save partial progress and resume later
  • No way to edit previous answers

User Workarounds

  • Copy questions to external docs (Google Docs, Notes)
  • Manually track answered vs. unanswered
  • Restart conversations when context is lost
  • Answer in multiple sessions with reminders

3-4x

Longer completion time

73%

Skip questions in text blocks

~50%

Use external tools to track

Discovery

Finding I wasn't alone

While researching, I discovered Perplexity AI had already shipped a similar pattern — presenting clarifying questions as interactive button-based widgets rather than conversational text. This validated that the problem was real.

Perplexity's clarifying questions UI

Perplexity's button-based question interface

What Perplexity got right

+ Inline placement — questions in conversation flow

+ Button-based — reduces typing friction on mobile

+ "Other" option — MC can't capture everything

+ Optional — proceed without answering all

+ Limited count — 2-4 max, avoids overwhelm

What Perplexity didn't solve

MC only — no text or quantitative data

No progress tracking — more questions coming?

Single round — no multi-stage gathering

No edit — can't revise after clicking Continue

Question multiplication — skipping causes more

Critical Insight

Perplexity's approach works for simple search refinement (low commitment, multiple choice), but breaks down for complex information collection (high commitment, detailed answers, multi-round). My opportunity: extend this pattern to handle richer, more structured information gathering.

Dimension | Perplexity | My Proposal
Question Count | 2-4 questions | 13+ (paginated)
Answer Types | Multiple choice only | Text, numbers, MC, textarea
Progress Tracking | None | 7/13 answered, visual bar
Multi-Round | Single round | 2-3 rounds with preview
Edit Answers | No | Yes, any time
Save Progress | No | Yes, resume later
Use Case | Search clarification | Detailed info gathering

AI Values

Designing for human-AI equity

This project is grounded in principles of human-AI collaboration. Rather than just improving usability, I aimed to restore fundamental values that conversational AI inadvertently violates.

🎯

User Agency

Users control what to share, when, and how to revise. No forced completion.

🔍

Transparency

AI's information needs are explicit. Users see what's required vs. optional.

🧠

Cognitive Load Reduction

Visual hierarchy, progress tracking, persistent state reduce mental overhead.

📐

Predictability

"Round 1 of 3" creates clear contracts instead of open-ended questioning.

✏️

Error Recovery

Mistakes are fixable. Forms enable editing without context loss.

⚖️

Accountability

Form creates a record. AI can't "forget" questions users answered.

Core Thesis

Current conversational AI treats information gathering as interrogation: the AI asks, the user answers, repeat until AI decides it has enough. This asymmetry reduces users to data sources rather than collaborators.

Structured information gathering restores equity by making requests explicit, letting users control timing and content, and creating clear completion criteria.

Process

Design process

1

Problem Identification

Documented personal workaround (copying to Google Docs) and hypothesized this was a broader UX gap in AI interfaces.

2

Competitive Research

Discovered Perplexity's implementation, analyzed strengths/limitations, identified extension opportunities.

3

User Research

Created validation survey to test if other users experienced similar pain points and would value structured approaches.

4

Design Principles

Defined core principles grounded in AI collaboration values: transparency, agency, cognitive ease, predictability.

5

Interaction Design

Designed form widget with mixed input types, progress tracking, multi-round capability, and conversation integration.

6

Prototyping

Built interactive HTML prototype demonstrating entry point, form interface, partial submission, and save/resume flows.

Design principles

01

Make Information Needs Explicit

All questions visible upfront. No hidden requirements. Users see the complete landscape before committing.

02

Never Force Completion

Every question has a skip option. Partial answers accepted. Progress saved automatically.

03

Minimize Cognitive Overhead

Visual hierarchy, priority signals, progress tracking reduce working memory demands.

04

Enable Error Recovery

All answers editable. Previous rounds accessible. Mistakes fixable without restarting.

05

Bridge Structure and Flexibility

Form for efficiency, conversation for nuance. Seamless transitions between both.

The Solution

Structured form widget

An interactive form interface that presents AI clarifying questions with progress tracking, mixed input types, multi-round capability, and seamless integration with conversational flow.

Full form interface

Main form view showing all key features

Key features

1. Entry Point: User Choice

AI offers the form as an option, not a mandate. Users choose “Show Form” for structure or “Ask in Conversation” for freeform.

Form offer UI

2. Visual Progress Tracking

Real-time progress bar and count (e.g., “7/13 answered”). Round indicators (Round 1 of 3) set expectations for multi-stage flows.

Progress bar & round indicators

3. Mixed Input Types

Supports text fields, textareas, number inputs, and multiple choice — matching the question type to the information needed.

Text, textarea, MC inputs

4. Priority Signals

High/Medium badges indicate which questions impact output quality most.

5. Inline Help & Examples

Contextual tips and examples prevent errors and reduce friction.

6. Per-Question Skip

"Skip for now" on every question. Return to skipped ones later.

7. Multi-Round Preview

"I may ask 5-7 follow-ups in Round 2" sets realistic expectations.

8. Save & Resume

“Save & Continue Later” preserves progress. Return later with everything intact — no lost answers.

Save confirmation & resume

9. Section Grouping

Questions organized by context with visual headers. Reduces overwhelm through chunking.
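The feature set above implies a fairly small underlying state model. A minimal sketch in TypeScript follows; all names (`Question`, `FormSession`, `progress`, etc.) are hypothetical illustrations, not taken from the actual prototype:

```typescript
// One clarifying question. The input type drives which control renders (feature 3).
type InputType = "text" | "textarea" | "number" | "choice";

interface Question {
  id: string;
  prompt: string;
  type: InputType;
  priority: "high" | "medium"; // priority badge (feature 4)
  section: string;             // section grouping (feature 9)
  help?: string;               // inline help text (feature 5)
  choices?: string[];          // only for "choice" questions
}

// Answers are keyed by question id. A skipped question simply has
// no entry, so it can be returned to later (feature 6).
interface FormSession {
  round: number;               // e.g. round 1 of totalRounds (feature 7)
  totalRounds: number;
  questions: Question[];
  answers: Record<string, string>;
}

// Progress indicator, e.g. "7/13 answered" (feature 2).
function progress(s: FormSession): string {
  const answered = s.questions.filter(q => q.id in s.answers).length;
  return `${answered}/${s.questions.length} answered`;
}

// Save & resume (feature 8): the session is plain data, so
// serializing it is enough to restore every answer later.
function save(s: FormSession): string {
  return JSON.stringify(s);
}

function resume(saved: string): FormSession {
  return JSON.parse(saved) as FormSession;
}
```

Because skipped questions are just absent keys, partial submission falls out of the same structure: the AI receives `answers` as-is and can re-ask the rest in the next round.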

User Flows

Four paths through the widget

01

Complete in One Session

Claude offers form → User fills all questions → Submits → Round 2 appears → Repeat

Flow 1 diagram

02

Partial Submission

User fills 7/13 → Submits partial → Claude works with available info → Asks follow-ups

Flow 2 diagram

03

Save & Resume

User fills 4 → "Save & Continue Later" → Returns later → Progress intact

Flow 3 diagram

04

Switch to Conversation

User starts with form → Question needs explanation → "Skip to Chat" → Continues conversationally

Flow 4 diagram
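The four flows above reduce to a handful of transitions over a single session status. A sketch in TypeScript, again with hypothetical names rather than anything from the prototype:

```typescript
// Session status covering all four flows.
type Status =
  | "offered"    // AI has offered the form (entry point)
  | "filling"    // user is answering in the widget
  | "saved"      // "Save & Continue Later" (flow 3)
  | "chat"       // "Skip to Chat" handoff (flow 4)
  | "submitted"; // full or partial submission (flows 1 and 2)

type Event = "showForm" | "save" | "resumeSession" | "skipToChat" | "submit";

// Legal transitions; any other event leaves the status unchanged.
const transitions: Record<Status, Partial<Record<Event, Status>>> = {
  offered:   { showForm: "filling", skipToChat: "chat" },
  filling:   { save: "saved", skipToChat: "chat", submit: "submitted" },
  saved:     { resumeSession: "filling" },
  chat:      {},
  submitted: { showForm: "filling" }, // a new round re-enters the form
};

function step(status: Status, event: Event): Status {
  return transitions[status][event] ?? status;
}
```

Partial submission (flow 2) needs no extra state: `submit` is legal whether or not every question is answered, and the next round simply re-enters `filling`.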

Results

From workaround to widget

While this is a conceptual project, validation data and competitive evidence suggest the following projected improvements.

60%

Faster completion for structured info tasks

3x

Fewer abandoned info-gathering flows

80%

Reduction in external tool workarounds

Success metrics (if implemented)

  • Task Completion Rate: % of multi-question flows completed
  • Time to Completion: first question to final submission
  • Answer Quality: % of answers with sufficient detail for AI
  • Return Rate: % who save and resume vs. abandon
  • Workaround Usage: reduction in copy/paste to external tools

Key learnings

Forms vs. Conversation is a False Dichotomy

The future is seamless transitions between both modalities, matching each to its strengths.

AI Needs New Primitives

Traditional UX patterns need reimagining for AI contexts. Extend familiar patterns into new territories.

Agency is Non-Negotiable

Forced completion, irreversible inputs, hidden requirements — all undermine trust. Design for agency first.

Validate Before Building

Perplexity shipping a similar pattern validated the problem. Competitive research saved weeks.

Reflections

What I learned

This project emerged from a moment of frustration — copying Claude's questions to Google Docs — that revealed a fundamental tension in AI UX: how do we balance conversational flexibility with the structure needed for complex tasks?

What started as “I wish this was a form” evolved into a deeper investigation of AI values. The problem isn't just usability — it's about power dynamics in human-AI collaboration. When AI controls the flow entirely, users become passive respondents.

The most surprising discovery was finding Perplexity had already validated the core pattern. If a major AI company invested engineering resources into structured clarification, the problem is real and worth solving at scale.

Good AI product design isn't about making AI seem more human — it's about making collaboration more equitable.

If I could tell product teams at AI companies one thing:

“Your users are already creating workarounds for information gathering. They're copying your questions to external tools because your conversational interface doesn't support their actual workflow. Build better primitives for structured input, or watch users continue to build their own outside your product.”