Verbose prompts create noise. Compressed logic blocks don’t, and that difference matters more than most prompt guides admit.

TL;DR: A Reddit user discovered that stripping prompts down to pure logic (no filler, no politeness) dramatically reduced AI hallucinations on complex tasks. The technique is called semantic compression, and it works because LLMs weight tokens, not sentences.

Here’s the core insight: language models don’t “read” your prompts the way a human does. They assign attention weights to tokens. Every filler phrase, polite opener, and redundant word competes for that attention alongside the actual instructions you care about.


When u/withAuxly on r/PromptEngineering started researching context windows, they noticed a significant chunk of their prompt’s “attention budget” was going to words like “Hey, could you please” and “Let me know if you need anything.” That budget could be doing actual work.

Think about what a typical user-written prompt looks like: “Hi! I was hoping you could help me review this contract. I’m not a lawyer but I need to understand the risks. Could you go through it carefully and let me know if there’s anything concerning? I’d really appreciate it if you could explain things in plain language.” That’s 52 tokens of setup for a task that could be expressed in 10. Each of those extra tokens slightly dilutes the model’s focus on what actually matters: find the risk, explain it simply.
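The ratio is easy to sanity-check yourself. Here is a minimal sketch contrasting the two phrasings by word count, which is only a crude stand-in for tokens; real BPE tokenizers split differently, so the exact numbers are illustrative, not authoritative:

```python
# Rough illustration of the "attention budget" idea. Word counts are a crude
# proxy for tokens (real tokenizers produce somewhat higher counts), but the
# verbose-to-compressed ratio is what matters here.

VERBOSE = (
    "Hi! I was hoping you could help me review this contract. I'm not a "
    "lawyer but I need to understand the risks. Could you go through it "
    "carefully and let me know if there's anything concerning? I'd really "
    "appreciate it if you could explain things in plain language."
)
COMPRESSED = "Review contract. Identify risks. Explain in plain language."

def word_count(text: str) -> int:
    """Approximate token count by whitespace-splitting."""
    return len(text.split())

print(f"verbose:    {word_count(VERBOSE)} words")
print(f"compressed: {word_count(COMPRESSED)} words")
```

Run against the example above, the verbose version comes out several times longer than the compressed one while carrying the same instruction.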

This isn’t just theory. Prompt engineers running A/B tests on the same underlying task routinely find that compressed prompts produce more consistent, more structured outputs. The model doesn’t get sidetracked by conversational cues that hint at an informal, exploratory response when you actually need a precise one.

What Semantic Compression Looks Like

Instead of writing a paragraph asking AI to review a contract, the compressed version looks like this:

[OBJECTIVE]: Risk_Audit_Freelance_MSA
[ROLE]: Senior_Legal_Orchestrator
[CONTEXT]: Project_Scope=Web_Dev; Budget=10k; Timeline=Fixed_3mo.
[CONSTRAINTS]: Zero_Legalese; Identify_Hidden_Liability; Priority_High.
[INPUT]: [Insert Text]
[OUTPUT]: Bullet_Logic_Only.

No pleasantries. No explanation of why you want what you want. Just structured, labeled logic that tells the model exactly what to do, in what role, under what constraints.

The bracket labels do something important beyond organization. They signal to the model that this is a structured, formal task with defined parameters, not an open conversation. That framing alone shifts how the model approaches its response. The role field is particularly high-leverage: assigning a specific expert identity loads a relevant “cluster” of knowledge and behavior into the model’s context before it even reaches your actual instructions.

The underscore notation is also worth noting. Spaces between words invite natural language interpretation. Underscores treat values as variables, which pushes the model toward treating your prompt as a specification rather than a request. It’s a subtle signal, but in high-stakes structured tasks, subtle signals compound.
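If you build these prompts often, the notation can be enforced mechanically rather than by hand. A small sketch; the `compress` helper and its field handling are my own illustration, not part of the original technique:

```python
def compress(fields: dict[str, str]) -> str:
    """Render a semantic-compression prompt as [LABEL]: Value_With_Underscores.

    Spaces inside each value become underscores so the model reads values as
    variables rather than natural-language phrases; semicolon-separated parts
    are underscored independently.
    """
    lines = []
    for label, value in fields.items():
        rendered = "; ".join(
            part.strip().replace(" ", "_") for part in value.split(";")
        )
        lines.append(f"[{label.upper()}]: {rendered}")
    return "\n".join(lines)

prompt = compress({
    "objective": "Risk Audit Freelance MSA",
    "role": "Senior Legal Orchestrator",
    "constraints": "Zero Legalese; Identify Hidden Liability",
    "output": "Bullet Logic Only.",
})
print(prompt)
```

A helper like this also gives you one place to version and reuse the structure across tasks, which matters once the same template runs on a schedule.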

Use Cases Where This Shines

  • Complex document review (legal, financial, technical) where logic drift compounds quickly

  • Multi-step reasoning tasks that need strict output formats

  • Repeatable workflows where you run the same prompt structure on a schedule

  • Any task where you’ve seen the model go off-track with longer prose instructions

  • Automated pipelines where you need consistent, parseable outputs every single run

  • Competitive analysis, audit summaries, or SOPs where the structure of the output matters as much as the content
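The pipeline case is where the fixed format pays off most, because the output becomes trivially parseable. A sketch, assuming the model was constrained to `Bullet_Logic_Only` and returns one bullet per finding; both the sample output and the parser are assumptions for illustration:

```python
def parse_bullets(model_output: str) -> list[str]:
    """Extract bullet items from a Bullet_Logic_Only response.

    Accepts '-', '*', or '•' markers and ignores blank or non-bullet lines,
    so stray preamble the model adds doesn't break the pipeline.
    """
    items = []
    for line in model_output.splitlines():
        stripped = line.strip()
        if stripped[:1] in {"-", "*", "•"}:
            items.append(stripped[1:].strip())
    return items

# Hypothetical model response to the contract-review prompt above.
sample = """\
- Liability_Cap: absent; client exposure unlimited
- IP_Assignment: transfers on invoice, not payment
- Termination: 7-day notice favors client
"""
findings = parse_bullets(sample)
print(len(findings))  # 3
```

Because every run emits the same shape, downstream steps can count, diff, or route the findings without any fuzzy text handling.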


Prompt of the Day

Here’s a reusable semantic compression template you can adapt for almost any task:

[OBJECTIVE]: {Your_Goal_Here}
[ROLE]: {Expert_Title_Relevant_To_Task}
[CONTEXT]: {Key_Variable=Value; Key_Variable=Value}
[CONSTRAINTS]: {Constraint_1; Constraint_2; Priority_Level}
[INPUT]: [Paste content here]
[OUTPUT]: {Format_You_Want}

The underscore notation and bracket labels aren’t magic. They just force you to be explicit about every component of the task, which is what actually reduces drift.

To put it into practice: take your next complex prompt, paste it into a doc, and go field by field through this template. If you can’t clearly fill in the ROLE or CONSTRAINTS fields, that’s often a sign that your instructions are underspecified, not just verbose. The compression process surfaces gaps in your own thinking before the model has a chance to fill them in the wrong direction.
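That field-by-field audit can itself be automated. A minimal sketch using the field names from the template above; the placeholder-detection rule (treating anything still wrapped in `{}` as unfilled) is an assumption:

```python
REQUIRED_FIELDS = ["OBJECTIVE", "ROLE", "CONTEXT", "CONSTRAINTS", "INPUT", "OUTPUT"]

def missing_fields(prompt: str) -> list[str]:
    """Return template fields that are absent or still a {placeholder}."""
    gaps = []
    for field in REQUIRED_FIELDS:
        marker = f"[{field}]:"
        line = next(
            (ln for ln in prompt.splitlines() if ln.startswith(marker)), None
        )
        value = line[len(marker):].strip() if line else ""
        if not value or value.startswith("{"):
            gaps.append(field)
    return gaps

# A hypothetical half-finished draft: ROLE is still a placeholder, and
# CONTEXT, CONSTRAINTS, and INPUT are missing entirely.
draft = "[OBJECTIVE]: Audit_Q3_Spend\n[ROLE]: {Expert_Title_Relevant_To_Task}\n[OUTPUT]: Table."
print(missing_fields(draft))  # ['ROLE', 'CONTEXT', 'CONSTRAINTS', 'INPUT']
```

An empty result means every field has a concrete value, which is exactly the "no gaps for the model to fill in the wrong direction" state the exercise is after.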

One Caveat Worth Knowing

Semantic compression works best for structured, repeatable tasks. For open-ended creative work or exploratory conversations, natural prose helps the model understand nuance and context that doesn’t fit neatly into a labeled field. A prompt asking for a short story or a brainstorm benefits from looser framing. Use the right tool for the context.

There’s also a learning curve. Your first few compressed prompts may feel unnatural to write because we’re trained to communicate politely and with context. Push through that discomfort. Once you see the output quality difference on a complex task, the habit forms fast.

If you’ve been fighting logic drift on complex prompts, this technique is worth a serious test. Strip out the noise and see what’s left.
