Meeting Intelligence
Live Pipeline · Streaming over WebSocket

Turn meeting transcripts into structured intelligence

A typed, validated extraction pipeline powered by Claude. Action items with assignees and deadlines, decisions with dissent tracking, topic segmentation, per-speaker sentiment, and an executive summary — every field grounded in a source quote pulled straight from the transcript.

View on GitHub
2.63%
Hallucination rate
Source-quoted, fuzzy-verified
100%
Schema compliance
Pydantic + retry loop
0.81
Action item F1
Across 16 transcripts
4.6s
Avg latency
Four extractors in parallel, one sequential
~$0.01
Per pipeline run
Claude Haiku 4.5
Python 3.11 · Claude Haiku 4.5 · Pydantic v2 · FastAPI · WebSockets · asyncio
↓ Scroll for the live demo ↓

Live Demo

Pick a sample, run the pipeline, watch each stage stream in

Transcript Input

Plain text, SRT, or JSON — auto-detected.

Optional: meeting date and speaker aliases ▾
Aliases map short names to canonical names before extraction.

⌘/Ctrl + Enter to run · max 50,000 chars

Pipeline Stages

preprocessing · action items · decisions · topics · sentiment · summary · validation
Choose a sample on the left, or paste a transcript and hit Run pipeline.

Under the hood

A self-correcting pipeline grounded in Pydantic schemas

1 · Preprocess

Auto-detects plain text, SRT, or JSON. Strips fillers and HTML, normalizes speaker names through an alias map.
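A minimal sketch of what this step might look like, assuming hypothetical helper names (`detect_format`, `canonicalize_speaker`) and a simple SRT-timestamp heuristic, not the project's actual code:

```python
import json
import re

def detect_format(raw: str) -> str:
    """Guess the transcript format: 'json', 'srt', or plain 'text'."""
    stripped = raw.strip()
    if stripped.startswith(("{", "[")):
        try:
            json.loads(stripped)
            return "json"
        except json.JSONDecodeError:
            pass
    # SRT cues contain timestamps like "00:00:01,000 --> 00:00:04,000".
    if re.search(r"\d{2}:\d{2}:\d{2},\d{3}\s*-->\s*\d{2}:\d{2}:\d{2},\d{3}", raw):
        return "srt"
    return "text"

def canonicalize_speaker(name: str, aliases: dict[str, str]) -> str:
    """Map a short speaker name to its canonical form, falling back to the input."""
    return aliases.get(name.strip(), name.strip())
```

Normalizing speakers up front means every downstream extractor sees one consistent name per person.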

2 · Parallel extract

Four extractors run concurrently via asyncio.gather: action items, decisions, topics, sentiment. Each enforces a Pydantic schema.

3 · Self-correct

On Pydantic validation failure, the exact error is appended to a retry prompt so the model can fix its own JSON. Up to 3 retries with exponential backoff.
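A sketch of the retry loop, with `call_model` and `validate` as hypothetical hooks (in the real pipeline `validate` would be a Pydantic `model_validate` call); the exponential-backoff sleep is omitted for brevity:

```python
import json

def extract_with_retry(call_model, validate, max_retries: int = 3) -> dict:
    """Call the model, validate the JSON, and feed the exact error back on failure."""
    feedback = ""
    for _attempt in range(max_retries + 1):
        raw = call_model(feedback)
        try:
            payload = json.loads(raw)
            validate(payload)  # raises ValueError on schema violation
            return payload
        except ValueError as exc:
            # The validation error becomes part of the next prompt,
            # so the model can correct its own JSON.
            feedback = f"Your previous JSON was invalid: {exc}. Fix it."
    raise RuntimeError("extraction failed after retries")
```

Feeding the verbatim error back usually fixes malformed output on the first retry, since the model sees exactly which field failed and why.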

4 · Summarize

The summary extractor runs sequentially after the parallel stage so it can use extracted topics and action items as additional context.
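The context hand-off might look like this; `build_summary_prompt` and the prompt wording are illustrative assumptions, not the project's actual prompt:

```python
def build_summary_prompt(transcript: str, topics: list[str], action_items: list[str]) -> str:
    """Assemble the summary prompt with the parallel stage's outputs as grounding context."""
    context = (
        "Topics: " + ", ".join(topics) + "\n"
        "Action items: " + "; ".join(action_items)
    )
    return f"{context}\n\nSummarize this meeting:\n{transcript}"
```

Sequencing the summary after the parallel stage trades a little latency for a summary that is consistent with the structured output instead of diverging from it.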

5 · Validate semantically

Cross-references assignees against the speaker list, fuzzy-matches every source quote against the transcript, and flags past-dated deadlines. Outputs a concrete hallucination_rate.
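The quote-grounding check can be sketched with the standard library's `difflib`; the sliding window, step size, and 0.85 threshold are assumptions:

```python
from difflib import SequenceMatcher

def quote_is_grounded(quote: str, transcript: str, threshold: float = 0.85) -> bool:
    """Fuzzy-check that a source quote actually appears in the transcript."""
    q, t = quote.lower(), transcript.lower()
    if q in t:
        return True
    # Slide a quote-sized window across the transcript and keep the best ratio.
    window = len(q)
    step = max(1, window // 4)
    best = 0.0
    for start in range(0, max(1, len(t) - window + 1), step):
        best = max(best, SequenceMatcher(None, q, t[start:start + window]).ratio())
    return best >= threshold

def hallucination_rate(quotes: list[str], transcript: str) -> float:
    """Fraction of extracted quotes that could not be grounded in the transcript."""
    if not quotes:
        return 0.0
    missing = sum(1 for q in quotes if not quote_is_grounded(q, transcript))
    return missing / len(quotes)
```

Fuzzy matching rather than exact substring search tolerates the small punctuation and casing drift models introduce when quoting.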

6 · Stream

Each stage emits an event over the WebSocket as it completes — the UI renders results progressively instead of waiting for the entire run.
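A simplified sketch of the streaming loop: in the real app the events go out over a FastAPI WebSocket, so `send` here is a stand-in callback and the event shape is an assumption:

```python
import asyncio
import json

async def run_and_stream(stages: dict, send) -> None:
    """Run stages in order, emitting one JSON event per completed stage."""
    for name, stage in stages.items():
        result = await stage()
        # The UI renders each stage as soon as its event arrives,
        # instead of waiting for the whole pipeline to finish.
        await send(json.dumps({"event": "stage_complete", "stage": name, "result": result}))

# Demo wiring: capture the emitted events instead of a real socket.
events = []
async def capture(message: str) -> None:
    events.append(json.loads(message))

async def demo_stage() -> dict:
    return {"ok": True}

asyncio.run(run_and_stream({"preprocessing": demo_stage, "summary": demo_stage}, capture))
```

With a real WebSocket, `send` would be `websocket.send_text`, and the parallel extractors would each emit their event as they resolve rather than strictly in order.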