Structured Information Gathering
Restoring user agency and reducing cognitive load when AI assistants need detailed information from the user.
TL;DR
Built an interactive form widget that replaces conversational Q&A with structured input — progress tracking, mixed input types, skip options, and multi-round capability — so users stop copying AI questions to Google Docs.
Overview
Role
Lead Designer & Researcher
Timeline
3 weeks (Dec 2024 – Jan 2025)
Type
Self-initiated / Conceptual
Scope
Research, IxD, Prototyping
Context
The personal pain point
While optimizing my resume with Claude AI, I hit a wall: Claude generated 13 detailed questions about my work experience. Rather than answering in chat, I found myself copying all questions to Google Docs, answering each systematically, then pasting everything back.
This inefficient workflow revealed a fundamental UX gap: conversational AI is poorly suited for structured information gathering.
“I usually copy all questions to a Google doc, answer each of them, then copy and paste it back to the AI. It's tedious but I lose track otherwise.”
— Me, explaining my workaround
The Problem
Conversation breaks down at scale
As AI assistants become more capable, they increasingly need detailed context — resume optimization, project briefs, system configuration, diagnostic troubleshooting. The conversational paradigm breaks down when information requirements become complex.
Discovery
Finding I wasn't alone
While researching, I discovered that Perplexity AI had already shipped a similar pattern — presenting clarifying questions as interactive button-based widgets rather than conversational text. This confirmed that the problem was real.
Perplexity's button-based question interface
Critical Insight
Perplexity's approach works for simple search refinement (low commitment, multiple choice), but breaks down for complex information collection (high commitment, detailed answers, multi-round). My opportunity: extend this pattern to handle richer, more structured information gathering.
AI Values
Designing for human-AI equity
This project is grounded in principles of human-AI collaboration. Rather than just improving usability, I aimed to restore fundamental values that conversational AI inadvertently violates.
Process
Design process
Problem Identification
Documented personal workaround (copying to Google Docs) and hypothesized this was a broader UX gap in AI interfaces.
Competitive Research
Discovered Perplexity's implementation, analyzed strengths/limitations, identified extension opportunities.
User Research
Created validation survey to test if other users experienced similar pain points and would value structured approaches.
Design Principles
Defined core principles grounded in AI collaboration values: transparency, agency, cognitive ease, predictability.
Interaction Design
Designed form widget with mixed input types, progress tracking, multi-round capability, and conversation integration.
Prototyping
Built interactive HTML prototype demonstrating entry point, form interface, partial submission, and save/resume flows.
Design principles
Make Information Needs Explicit
All questions visible upfront. No hidden requirements. Users see the complete landscape before committing.
Never Force Completion
Every question has a skip option. Partial answers accepted. Progress saved automatically.
Minimize Cognitive Overhead
Visual hierarchy, priority signals, and progress tracking reduce working memory demands.
Enable Error Recovery
All answers editable. Previous rounds accessible. Mistakes fixable without restarting.
Bridge Structure and Flexibility
Form for efficiency, conversation for nuance. Seamless transitions between both.
The Solution
Structured form widget
An interactive form interface that presents AI clarifying questions with progress tracking, mixed input types, multi-round capability, and seamless integration with conversational flow.
Main form view showing all key features
Key features
1. Entry Point: User Choice
AI offers the form as an option, not a mandate. Users choose “Show Form” for structure or “Ask in Conversation” for freeform.
2. Visual Progress Tracking
Real-time progress bar and count (e.g., “7/13 answered”). Round indicators (Round 1 of 3) set expectations for multi-stage flows.
3. Mixed Input Types
Supports text fields, textareas, number inputs, and multiple choice — matching the question type to the information needed.
4. Save & Resume
“Save & Continue Later” preserves progress. Return later with everything intact — no lost answers.
User Flows
Four paths through the widget
Results
From workaround to widget
While this is a conceptual project, survey validation and competitive evidence suggest meaningful improvements. The figures below are projections, not measured outcomes.
60%
Faster completion for structured info tasks
3x
Fewer abandoned info-gathering flows
80%
Reduction in external tool workarounds
Success metrics (if implemented)
Task Completion Rate — % of multi-question flows completed
Time to Completion — First question to final submission
Answer Quality — % with sufficient detail for AI
Return Rate — % who save and resume vs. abandon
Workaround Usage — Reduction in copy/paste to external tools
Key learnings
Reflections
What I learned
This project emerged from a moment of frustration — copying Claude's questions to Google Docs — that revealed a fundamental tension in AI UX: how do we balance conversational flexibility with the structure needed for complex tasks?
What started as “I wish this was a form” evolved into a deeper investigation of AI values. The problem isn't just usability — it's about power dynamics in human-AI collaboration. When AI controls the flow entirely, users become passive respondents.
The most surprising discovery was finding Perplexity had already validated the core pattern. If a major AI company invested engineering resources into structured clarification, the problem is real and worth solving at scale.
Good AI product design isn't about making AI seem more human — it's about making collaboration more equitable.
If I could tell product teams at AI companies one thing:
“Your users are already creating workarounds for information gathering. They're copying your questions to external tools because your conversational interface doesn't support their actual workflow. Build better primitives for structured input, or watch users continue to build their own outside your product.”