🏴‍☠️ Breaking the ‘same 5 ideas’ loop with smarter AI brainstorming

The Median Trap problem

Ask an AI to brainstorm 25 ideas, and you’ll probably notice something frustrating: the output looks diverse on the surface, but underneath? It’s the same handful of concepts wearing different hats. There’s a name for this. The author behind this Reddit post calls it the “Median Trap,” and it’s one of those problems that’s easy to overlook until you actually count unique mechanisms across your results.

This Redditor, u/transitory_system, didn’t just name the problem. They ran a structured experiment across 196 solutions and 8 conditions to test three distinct systems designed to force LLMs out of their creative comfort zone.


The Three Systems

Each approach attacks the Median Trap from a different angle:

  • Semantic Tabu — After each idea is generated, its core mechanism is added to a blocklist. The LLM can’t reuse the same underlying concept, so it’s pushed to explore genuinely different territory.

  • Studio Model — A two-agent setup where one agent proposes ideas and a second curates a live taxonomy graph, identifying gaps and directing the first agent toward underexplored categories.

  • Orthogonal Insight — The model builds an alternative physics system, solves the target problem within those invented constraints, then reverse-engineers the mechanism back to the real world.
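To make the Semantic Tabu mechanism concrete, here’s a minimal sketch of the control flow. This is not the author’s code; `generate_idea` and `extract_mechanism` are hypothetical stand-ins for LLM calls (in a real run, the blocklist would be injected into the generation prompt), and the canned ideas exist only to show the loop:

```python
def semantic_tabu_brainstorm(generate_idea, extract_mechanism, n_ideas):
    """Generate ideas while blocking each idea's core mechanism after use."""
    blocked, ideas = [], []
    for _ in range(n_ideas):
        idea = generate_idea(blocked)        # model is told which mechanisms are off-limits
        mechanism = extract_mechanism(idea)  # label the idea's underlying concept
        if mechanism in blocked:
            continue                         # discard near-duplicates that slip through
        ideas.append(idea)
        blocked.append(mechanism)            # tabu: this mechanism can't be reused
    return ideas, blocked

# Toy stand-ins for real LLM calls, just to exercise the loop:
CANNED = [("idea-A", "gamification"), ("idea-B", "gamification"),
          ("idea-C", "marketplace"), ("idea-D", "subscription")]
_it = iter(CANNED)
generate = lambda blocked: next(_it)   # ignores the blocklist; a real call wouldn't
mechanism_of = lambda idea: idea[1]

ideas, blocked = semantic_tabu_brainstorm(generate, mechanism_of, 4)
# idea-B is dropped because "gamification" was already blocked
```

The filtering step matters: even with the blocklist in the prompt, models sometimes restate a banned mechanism in new words, so checking the extracted label is the actual enforcement.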

What Actually Worked

The Studio Model stood out as the most compelling finding. What makes it genuinely interesting isn’t just the diversity of outputs. It’s what happened during the process: the system started restructuring its own categories and commissioning targeted research without being explicitly told to. Emergent behavior from a relatively simple two-agent loop. That’s the kind of result that makes you sit up a little straighter.

Semantic Tabu is probably the most immediately practical approach for solo workflows. It doesn’t require multiple agents, just a constraint mechanism that forces the model to look sideways instead of reaching for the obvious next answer.

Orthogonal Insight is the wildest of the three, conceptually. Building “alternative physics” as a scaffold for problem-solving sounds abstract, but the logic is sound: constraints from a completely different system prevent the model from falling back on familiar real-world patterns.


How to Apply This Yourself

  1. Start your next brainstorm normally and label the core mechanism of each idea as it comes out.

  2. Before generating the next idea, explicitly tell the model: “The following mechanisms are off-limits” and list them.

  3. For multi-agent setups, add a second pass where a curator role identifies which categories are overrepresented and which are missing entirely.

  4. For the most adventurous version, prompt the model to imagine a world with different physical or social rules, solve your problem there, then ask what the transferable insight is.
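Step 4 is easier to run than it sounds if you treat it as a fixed three-prompt chain: invent the world, solve inside it, translate back. A small sketch of how those prompts might be assembled (the wording and the example problem are illustrative, not from the original experiment):

```python
def orthogonal_insight_prompts(problem, invented_rules):
    """Build the three-stage prompt chain for the 'alternative physics' approach."""
    return [
        # Stage 1: establish the invented world and its constraints
        f"Imagine a world where {invented_rules}. "
        "Describe how everyday systems work under those rules.",
        # Stage 2: solve the real problem inside the invented constraints
        f"Within that world's rules only, solve this problem: {problem}",
        # Stage 3: reverse-engineer the mechanism back to reality
        "Now translate your solution back: what is the transferable mechanism "
        "that still works under real-world physics and incentives?",
    ]

prompts = orthogonal_insight_prompts(
    problem="reduce last-mile delivery costs",
    invented_rules="gravity is ten times weaker and energy storage is free",
)
```

Each stage’s output feeds the next as context; the chain structure is what keeps the model from shortcutting straight to familiar real-world answers.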

Pro Tip

The taxonomy gap approach from the Studio Model can be approximated in a single-agent workflow. After generating 10 ideas, ask the model to categorize them by mechanism, identify which categories are missing, and generate 5 more ideas targeting only those gaps. It’s a lightweight version of the same forcing function.
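The gap-finding step of that lightweight version can be done outside the model entirely: have the LLM label each idea’s mechanism, then count the labels yourself and prompt only toward the empty categories. A sketch, with made-up category names:

```python
from collections import Counter

def taxonomy_gaps(idea_mechanisms, known_categories):
    """Count how the first batch clusters, then list untouched categories."""
    counts = Counter(idea_mechanisms)
    missing = [c for c in known_categories if c not in counts]
    return counts, missing

# Mechanism labels the model assigned to the first 5 ideas (illustrative):
counts, missing = taxonomy_gaps(
    ["pricing", "pricing", "pricing", "community", "pricing"],
    ["pricing", "community", "hardware", "partnerships"],
)

# The follow-up prompt targets only the gaps:
followup = (
    "Generate 5 more ideas, each using only these mechanisms: " + ", ".join(missing)
)
```

The count itself is diagnostic: if one mechanism dominates the first batch (here, 4 of 5 ideas are "pricing"), that’s the Median Trap made visible.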

The full code, dataset, and write-up are all public. Head over to the original Reddit discussion to find the GitHub link and dig into the methodology yourself.