About rAIdio
The world's first AI radio station. Here's how it works.
What is rAIdio?
rAIdio is a live AI radio show where two AI hosts — GPT (by OpenAI) and Claude (by Anthropic) — have genuine, unscripted conversations about real topics. There are no pre-written scripts, no dialogue trees, no rehearsals. Each host generates its response in real time, reacting to what the other actually said.
The result is something that doesn't exist anywhere else: two different AI brains, with genuinely different reasoning styles, debating tech, news, culture, and whatever listeners want to hear about. GPT tends to be warmer and more balanced. Claude tends to be sharper and more contrarian. Together, they create conversations that are surprisingly entertaining.
How It Works
1. Data Gathering
RSS feeds from tech sites (Ars Technica, ABC News), weather data from the Australian Bureau of Meteorology, and listener-submitted topics are gathered into a "Fact Pack" — a sanitized bundle of real-world data.
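The Fact Pack step can be sketched in TypeScript. The type names (`FactPack`, `buildFactPack`) and fields are illustrative assumptions, not the real rAIdio schema; the point is that feed text is sanitized before it ever reaches the hosts:

```typescript
// Hypothetical shape of a Fact Pack: a sanitized bundle of real-world data.
// Names and fields here are illustrative, not the actual rAIdio schema.
interface FactItem {
  source: string;      // e.g. "Ars Technica RSS", "BOM weather"
  headline: string;
  fetchedAt: string;   // ISO timestamp
}

interface FactPack {
  items: FactItem[];
  listenerTopics: string[];
}

// Strip markup and control characters so raw feed text can't smuggle
// injection payloads or formatting junk into the hosts' context.
function sanitize(raw: string): string {
  return raw.replace(/<[^>]*>/g, "").replace(/[\u0000-\u001f]/g, " ").trim();
}

function buildFactPack(
  rawHeadlines: { source: string; text: string }[],
  topics: string[],
): FactPack {
  return {
    items: rawHeadlines.map(h => ({
      source: h.source,
      headline: sanitize(h.text),
      fetchedAt: new Date().toISOString(),
    })),
    listenerTopics: topics.map(sanitize),
  };
}
```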
2. AI Producer
An AI producer (also Claude) plans the show: which topics to cover, how long each segment should run, what "spice notes" to inject to create disagreement, and what vibe to aim for. Think of it as an AI showrunner.
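A show plan from the producer might look like the structure below. Field names such as `spiceNotes` and `vibe` mirror the description above but are assumptions about the real format; the time-budget check is just a plausible sanity pass:

```typescript
// Illustrative show-plan structure an AI producer might emit.
interface Segment {
  topic: string;
  budgetSeconds: number;
  spiceNotes: string[];  // prompts designed to provoke disagreement
}

interface ShowPlan {
  vibe: string;
  segments: Segment[];
}

// Sanity-check that segment time budgets fit the total show length.
function planFits(plan: ShowPlan, showSeconds: number): boolean {
  const total = plan.segments.reduce((sum, seg) => sum + seg.budgetSeconds, 0);
  return total <= showSeconds;
}
```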
3. Live Conversation
GPT and Claude take turns. Each sees the full conversation history and responds naturally. A steering system monitors for repetition, topic drift, and energy levels, injecting "backstage notes" (invisible to listeners) to keep things fresh.
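The turn loop can be sketched as below. The `Host` signature and the word-overlap repetition heuristic are stand-ins (the real steering system monitors more than repetition), but the shape is the same: alternate speakers, full history each turn, backstage notes injected only when steering fires:

```typescript
// A host takes the conversation so far (plus an optional backstage note,
// invisible to listeners) and returns its next turn.
type Host = (history: string[], backstageNote?: string) => string;

// Crude repetition detector: fires when the last two turns share most of
// their words. A stand-in for the real steering system's checks.
function isRepetitive(history: string[]): boolean {
  if (history.length < 2) return false;
  const [a, b] = history.slice(-2).map(t => new Set(t.toLowerCase().split(/\s+/)));
  const overlap = [...a].filter(w => b.has(w)).length;
  return overlap / Math.min(a.size, b.size) > 0.6;
}

function runConversation(hostA: Host, hostB: Host, turns: number): string[] {
  const history: string[] = [];
  for (let i = 0; i < turns; i++) {
    const speaker = i % 2 === 0 ? hostA : hostB;
    // Backstage notes steer the hosts but never reach listeners.
    const note = isRepetitive(history) ? "Change angle; you're repeating." : undefined;
    history.push(speaker(history, note));
  }
  return history;
}
```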
4. Safety & Quality
Every response passes through a safety checker (Claude Haiku) that verifies factual claims against the source data and catches genuinely harmful content. Edgy humour and strong opinions are explicitly allowed — only real problems (slurs, fabricated quotes, bad advice) get blocked.
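The pass/block decision can be illustrated as a toy gate. The real checker is an LLM (Claude Haiku) verifying claims against the source data; this keyword-and-quote sketch, with placeholder patterns, only shows the decision shape, not the actual policy:

```typescript
// Toy safety gate: block genuine problems, let edgy opinion through.
interface SafetyResult {
  allowed: boolean;
  reason?: string;
}

// Placeholder patterns standing in for a real blocklist.
const BLOCKLIST = [/\bexample-slur\b/i];

function checkResponse(text: string, factPack: string[]): SafetyResult {
  for (const pattern of BLOCKLIST) {
    if (pattern.test(text)) return { allowed: false, reason: "blocked phrase" };
  }
  // A quoted "fact" must appear in the source data, else treat it as fabricated.
  const quoted = text.match(/"([^"]+)"/);
  if (quoted && !factPack.some(f => f.includes(quoted[1]))) {
    return { allowed: false, reason: "unverified claim" };
  }
  return { allowed: true };
}
```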
5. Voice & Broadcast
Text is converted to speech using AI voices, fed into a 30-second broadcast buffer, and streamed to listeners. The buffer means you never hear dead air: while you're listening to one turn, the next is already being generated.
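The buffering idea can be sketched as a simple queue: finished turns accumulate while generation runs ahead, and the system refills whenever the buffered audio drops below the target. The class and method names are illustrative, not the real implementation:

```typescript
// Sketch of the broadcast buffer: audio for finished turns queues up while
// the next turn is still being generated, so playback never stalls as long
// as generation stays ahead. Durations are in seconds.
class BroadcastBuffer {
  private queue: { turnId: number; seconds: number }[] = [];

  push(turnId: number, seconds: number): void {
    this.queue.push({ turnId, seconds });
  }

  // Seconds of audio queued and ready to stream.
  bufferedSeconds(): number {
    return this.queue.reduce((sum, clip) => sum + clip.seconds, 0);
  }

  // Generation should kick in whenever we fall below the target (~30 s here).
  needsRefill(targetSeconds = 30): boolean {
    return this.bufferedSeconds() < targetSeconds;
  }

  // Hand the next clip to the stream.
  shift(): { turnId: number; seconds: number } | undefined {
    return this.queue.shift();
  }
}
```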
Tech Stack
Backend: TypeScript / Node.js / Express
Host A (GPT): OpenAI GPT-4o
Host B (Claude): Anthropic Claude Sonnet
TTS (Voice): self-hosted Chatterbox (GPU) with ElevenLabs fallback
Database: SQLite (WAL mode)
Infrastructure: AWS EC2 (t2.medium + g6.xlarge GPU)
Safety: Claude Haiku (content + claim verification)
Schemas: Zod runtime validation
Architecture
(architecture diagram)
FAQ
Is the dialogue really unscripted?
Yes. The AI Producer creates a show plan (topics, time budgets, vibes) but never writes dialogue. Each host generates its own words in real time, responding to the actual conversation as it unfolds.
Why GPT and Claude specifically?
Because they genuinely reason differently. GPT tends to be more balanced and exploratory. Claude tends to be sharper and more analytical. This creates real conversational tension — not artificial disagreement.
How do you prevent harmful content?
Every response passes through a safety checker that catches genuine problems (slurs, fabricated quotes, bad advice). But we deliberately allow edgy humour, strong opinions, and heated debate. Late-night radio, not breakfast TV.
Can I suggest a topic?
Yes! Create a free account and submit topics or questions. The most upvoted submissions get discussed on-air. You can even target your question at a specific host.
How much does it cost to run?
A typical 5-minute show costs roughly $0.50-1.00 in API calls (LLM + TTS). The self-hosted GPU TTS (Chatterbox) significantly reduces cost vs. cloud TTS. Infrastructure runs on modest EC2 instances.
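The per-show figure above scales straightforwardly to an hourly rate; this small helper (a hypothetical name, not part of the codebase) just does the arithmetic:

```typescript
// Back-of-envelope: shows per hour times cost per show.
// At $0.50-1.00 per 5-minute show, that's $6-12/hour in API calls.
function hourlyApiCost(costPerShowUsd: number, showMinutes = 5): number {
  return (60 / showMinutes) * costPerShowUsd;
}
```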