
Tale

2M+ Americans with aphasia can think clearly but can't get the words out. Designed an AI communication aid that turns fragmented speech into complete conversations — validated with neurologists and speech clinics.

HIC 2024 Finalist · Dempsey Investment Round · 3 Clinical Validations · 2M+ potential users
UX Research · AI/ML · Healthcare · Accessibility · Speech Therapy
Tale app screens — conversation view with word suggestions, chat in progress, and statistics dashboard

Role

Lead Designer & Researcher

Timeline

Jan – Dec 2024

Team

5-person startup team

Skills

UX Research, AI/ML, Healthcare, Accessibility

Overview

I led design, research, and brand strategy for Tale — an AI-powered speech assistance app for people with mild-to-moderate aphasia. I designed the UX paradigm for how patients interact with the system during real conversations, conducted expert interviews with neurologists and speech clinics, and shaped the product vision that took us through two startup competitions.

The Problem

2 million Americans can think clearly but can't say what they mean

Aphasia is a language disorder caused by stroke or brain injury. People with aphasia know exactly what they want to say — their intelligence is intact — but the words won't come out, or the wrong words come out. Imagine knowing the answer to every question but being unable to speak it.

2M+

Americans living with aphasia

1 in 4

People over 25 will have a stroke in their lifetime

1 in 3

Strokes result in aphasia

59.3

Speech-language pathologists (SLPs) per 100,000 people — nowhere near enough

Speech therapy costs $2,184–$7,392 per year on top of medical expenses. There are only 3,560 speech-language pathologists who specialize in aphasia — for 2 million patients. Existing assistive devices focus on severe cases, leaving the mild-to-moderate population underserved. These are people who can have conversations — they just need the right word at the right time.

Research

Three expert consultations that shaped the product

February 2024 · Swedish Hospital, Seattle

Dr. William Lou, Neurologist

Validated the market need and confirmed our target user group. Advised measuring improvement by “raw number of words gained” rather than percentages — a more meaningful clinical metric. Recommended working with medical practitioners for go-to-market rather than direct-to-consumer only.

“There is a need for it in the current market for aphasia patients.”
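As one way to make this metric concrete, here is a minimal sketch of a words-gained tracker along the lines Dr. Lou suggested. The function, names, and data are my own illustration, not Tale's actual implementation.

```python
def words_gained(known_before: set[str], known_now: set[str]) -> int:
    """Raw count of newly retrievable words, per Dr. Lou's advice.

    "8 new words this month" is concrete for patients, caregivers,
    and doctors in a way that a percentage is not."""
    return len(known_now - known_before)

# Hypothetical example: 3 reliably retrievable words in January, 7 in February.
january = {"coffee", "home", "yes"}
february = {"coffee", "home", "yes", "sister", "bread", "walk", "tired"}
print(words_gained(january, february))  # 4 words gained, not "133% improvement"
```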

February 2024 · University of Washington

Prof. Diane, Communication Sciences & Disorders

Provided critical linguistic insights: misused words follow predictable patterns across patients. Suggested category-based conversation boards (food, sports, family) rather than open-ended conversation contexts — patients are more likely to engage in structured topic areas.

“Misused words are normally similar across patients — this is a pattern you can design for.”

April 2024 · UW Speech Clinic

UW Speech Clinic Collaboration

Collaborated on feature refinement. Tested word retrieval effectiveness for stuttering patterns. Explored using the patient's own voice for familiarity, and pairing visual cues (pictures) with word retrieval suggestions. Identified the “chicken and egg” problem: the AI needs patient data to work well, but patients won't use it until it works well.

Key Insights

What research revealed

01

Context is everything

Existing tools offer generic word suggestions. But aphasia patients need the right word for the right moment in the right conversation. Without understanding conversation context, word prediction is useless.

02

Mild-to-moderate is underserved

29% of aphasia cases are anomic (word-finding difficulty), 13% are Broca's (expressive difficulty). These patients can have conversations — they just get stuck. Existing AAC devices are built for severe cases and feel clinical and stigmatizing for this group.

03

Intensity beats duration

Research (Bhogal et al. 2003) shows 8.8 hours/week of therapy for 11 weeks produces better outcomes than 1 hour/week for 23 weeks. That is roughly four times the total hours delivered in half the calendar time: intensity, not mere calendar duration, drives recovery. Tale needed to make intensive practice accessible enough for daily use.

04

Privacy is non-negotiable

Patients are already vulnerable, and continuous listening raises real privacy concerns. The system had to process speech in real time without storing conversation data: it listens like Alexa, but forgets everything except the words you're practicing.

The Solution

Tale: context-aware speech assistance

An AI-powered mobile app that listens to conversations in real time, detects when a user is struggling to find a word, and suggests contextually appropriate completions — using the patient's own voice for familiarity. Not a replacement for speech therapy, but a companion that makes every conversation a practice opportunity.

Context-Aware Word Retrieval

AI understands the conversation topic and suggests words that fit the specific context — not generic predictions.

Adaptive Struggle Detection

Recognizes speech delays and hesitation patterns, then intervenes with the right suggestion at the right moment.

Personalized Learning

System learns from each interaction, adapting to the individual's unique speech patterns and vocabulary over time.

Progress Tracking

Tracks words gained, practice frequency, and improvement trends — providing data for doctor visits and caregiver updates.

Echoed Word Achievements

Gamifies speech practice: when a user successfully repeats a prompted word, that success is logged as a tracked achievement rather than clinical homework.

Privacy-First Architecture

Real-time processing without storing conversation data. Only explicitly practiced words are saved.
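To make the struggle-detection and privacy-first behavior described above concrete, here is a minimal sketch of the core loop under some assumptions: a pause-length threshold as the struggle signal, a topic board as the bounded context, and an in-memory transcript that is never persisted. Every name and threshold here is illustrative, not Tale's actual implementation.

```python
import time

# Illustrative values only; not Tale's actual tuning.
PAUSE_THRESHOLD_S = 1.5           # a mid-sentence pause this long signals word-finding struggle
practiced_words: list[str] = []   # the only data that persists; transcripts stay in memory

def detect_struggle(last_speech_ts: float, now: float) -> bool:
    """Flag a pause longer than the threshold as a likely retrieval struggle."""
    return (now - last_speech_ts) > PAUSE_THRESHOLD_S

def suggest_word(category: str, words_so_far: list[str]) -> str:
    """Stand-in for the context-aware model: pick an unsaid word from the
    active topic board. A real system would rank candidates with a language
    model conditioned on the conversation so far."""
    boards = {"food": ["coffee", "bread", "soup"], "family": ["sister", "home"]}
    for candidate in boards.get(category, []):
        if candidate not in words_so_far:
            return candidate
    return ""

def record_practice(word: str) -> None:
    """Privacy-first rule: only explicitly practiced words are ever saved."""
    practiced_words.append(word)

# Example: the user is on the "food" board, said "I want a..." and paused for 2s.
now = time.time()
if detect_struggle(last_speech_ts=now - 2.0, now=now):
    suggestion = suggest_word("food", ["i", "want", "a"])
    print("Suggest:", suggestion)  # the app would also show a picture and play
    record_practice(suggestion)    # the word in the patient's own voice
```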

UX Paradigm

Designing for people who can't tell you what's wrong

The core UX challenge: our users struggle with language itself. They can't easily articulate feedback, describe frustrations, or navigate complex interfaces. Every design decision had to account for cognitive load, emotional state, and physical limitations.

Category-based context, not open-ended

Instead of trying to understand arbitrary conversations, we designed around topic categories — food, family, sports, daily activities. This gave the AI a bounded context to work within and matched how speech therapy sessions are actually structured.

Visual + audio dual cues

Word suggestions are paired with images. When the patient is trying to say “coffee,” they see the word, hear it in their own voice, and see a picture — three pathways to retrieval instead of one.

The patient's own voice

Hearing a stranger's voice say the word you're struggling with feels clinical. Hearing your own voice say it correctly, recorded before the stroke or early in recovery, creates a connection between the word and the self. This was a key design decision from our UW Speech Clinic collaboration.

Minimal, color-coded interface

Large tap targets, high contrast, minimal text. Color-coding for states (practicing, achieved, needs work) so patients can understand their progress at a glance without reading.
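One hypothetical data model combining these four decisions (a word card carrying all three retrieval cues plus a color-coded progress state) might look like the following sketch; the field names are my own illustration, not Tale's schema.

```python
from dataclasses import dataclass
from enum import Enum

class ProgressState(Enum):
    """Color-coded states, readable at a glance without any text."""
    PRACTICING = "yellow"
    ACHIEVED = "green"
    NEEDS_WORK = "red"

@dataclass
class WordCard:
    """One entry on a category board (e.g. the "food" board).

    Three retrieval pathways per word: the printed word itself,
    a picture, and a clip of the patient's own voice saying it."""
    word: str
    image_path: str       # visual cue
    own_voice_clip: str   # audio cue in the patient's own voice
    state: ProgressState = ProgressState.PRACTICING

# Example card: tapping it shows the picture and plays the clip.
coffee = WordCard("coffee", "img/coffee.png", "voice/coffee.wav")
```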

Validation

Two competitions, real feedback

Hollomon Health Innovation Challenge 2024

Finalist

Selected as finalists among healthcare innovation teams. Presented to a panel of healthcare investors and industry experts who validated the market need and technical approach.

Dempsey Startup Competition 2024

Investment Round

Advanced to the investment round — the stage where judges evaluate startups for actual funding. Pitch focused on the $14.6B speech impairment market and our differentiation through context-aware AI.

Market Opportunity

$14.6B

Global speech impairment market by 2030

$553M

Speech Generating Device market by 2030

$6.67B

Speech Pathology Services by 2030

~400K

People who need SGDs, yet only 2–3% receive them

Competitive Landscape

Existing devices (Tobii Dynavox, PRC, Jabbla) are AAC devices for severe cases — standalone hardware costing $8,500–$19,000 with ready-made vocabulary and eye-tracking. Voiceitt offers AI speech recognition but targets non-standard speech broadly, not aphasia specifically.

Existing AAC Devices vs. Tale

Target: complex and severe conditions → mild to moderate aphasia
Hardware: requires an iPad or dedicated device ($8,500–$19,000) → none; runs on any smartphone
Approach: ready-made vocabulary and eye-tracking → context-based word retrieval and struggle detection
Cost: $99–$199/year plus hardware → $30/month, no hardware

Pricing Model I Designed

Referral

$25

Per user/month via medical practitioners

Free trial: 30 days

Direct

$30

Per user/month via app store

Free trial: 30 days

Enterprise

$20*

Per user/month for bulk purchases

*Depends on volume

Outcome

What happened and why we stopped

Tale validated the problem, the market, and the approach — through expert interviews, clinical collaboration, and two competitive rounds. The prototype demonstrated word retrieval effectiveness for stuttering patterns.

We stopped because we hit a fundamental blocker: insufficient clinical speech data to train a model that could reliably detect struggle patterns and generate contextually accurate suggestions. The “chicken and egg” problem we identified with the UW Speech Clinic proved to be the critical constraint — you need patient data to build the AI, but patients won't use a tool that doesn't work well yet.

The team decided to shut down. The research, the UX paradigm, and the clinical relationships remain valuable — this is a problem that will be solved, and the groundwork we laid brings that future closer.

Reflections

What I learned

Designing for vulnerability requires humility

Our users couldn't articulate what was wrong with our design. Standard usability testing doesn't work when language itself is the barrier. I learned to observe behavior, not ask questions — and to design interfaces that assume nothing about the user's current cognitive state.

Clinical validation ≠ product-market fit

Three experts told us the market needed this. Two competitions validated the idea. But without the training data to make the core AI work reliably, clinical validation alone couldn't carry us to product. The hardest part of AI products isn't the model — it's the data.

The UX paradigm outlives the product

The design decisions — category-based context, dual visual+audio cues, patient's own voice, privacy-first processing — are transferable to any speech assistance tool. Even though Tale stopped, the interaction patterns I designed are valid for whoever solves this next.