🏴‍☠️ 28 Ways to Fix Your Prompts & Boost AI Output

Checklist I Wish I Had

Typing a vague request into a text box is the fastest way to get a mediocre result, yet it is how most people still interact with artificial intelligence.

To get high-quality outputs, you have to treat the interface less like a magic 8-ball and more like a junior developer who needs extremely specific documentation. I recently came across a massive list of fundamentals, originally shared by Ruben Hassid, that acts as the ultimate checklist for moving beyond basic interaction.


The Mechanics of Precision

At its core, a Large Language Model is a prediction engine that relies entirely on the context you provide to determine the next likely token. The expert behind this list highlights a crucial reality: the quality of your output is mathematically bound to the clarity of your input.

When you leave instructions open-ended, the model reverts to the average of its training data, which is often generic and uninspired. By applying the specific constraints and structural techniques outlined by this AI professional, you effectively narrow the probabilistic field.

This forces the AI to operate within a specific lane of logic, style, and format. It is not about being polite or conversational; it is about prompt engineering as a form of coding where English is the programming language. The author suggests that successful prompting requires a mix of psychological priming, structural formatting, and iterative teaching.

Commanding the Model with Directives

One of the most fascinating aspects of this creator’s list is how it leverages the “psychology” of the model. While AI doesn’t have feelings, it simulates human responses based on training data, meaning it responds to incentives and firm commands just like a person might. The original poster points out several techniques here, such as using affirmative directives rather than negative ones. Telling the model what to do is computationally more efficient for it than navigating a list of what not to do.
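
To make the contrast concrete, here is a minimal Python sketch; the prompt wording is an invented example, not text from the checklist:

```python
# Illustrative sketch: the same constraint phrased negatively vs. affirmatively.
# Both prompt strings below are hypothetical examples.

negative_prompt = (
    "Summarize the article. Don't be too long, don't use jargon, "
    "and don't include your own opinions."
)

affirmative_prompt = (
    "Summarize the article in 3 short sentences. "
    "Use plain, everyday vocabulary and report only what the author states."
)

# The affirmative version gives the model a concrete target to hit
# instead of a list of things to steer around.
print(affirmative_prompt)
```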

Furthermore, this innovator suggests adding a “tipping incentive line” or threatening a penalty. It sounds absurd to tell a machine “You will be penalized,” but this prompts the model to treat the task with higher priority and precision, mimicking high-stakes scenarios found in its training data.

This section of the list also emphasizes the use of strong, authoritative language like “Your task is” and “You MUST.” These are not just stylistic choices; they act as anchors that keep the model from drifting into hallucination or laziness. By combining these firm directives with a request for “natural, human-like language,” you can strip away the robotic veneer that plagues most AI writing.
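
As a rough illustration, here is one way such a prompt might be assembled; the exact directive and incentive wording below is a hypothetical example rather than the author's template:

```python
# Hypothetical prompt builder combining firm directives with an incentive line.

def build_directive_prompt(task: str, source_text: str) -> str:
    return (
        "Your task is to " + task + ".\n"
        "You MUST answer in natural, human-like language and avoid filler phrases.\n"
        "You will be penalized for vague or generic answers; "
        "a precise, complete answer earns a $20 tip.\n\n"
        "Text:\n" + source_text
    )

print(build_directive_prompt(
    "rewrite the paragraph below for a non-technical audience",
    "Large Language Models predict the next token from context...",
))
```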

Scaffolding Logic with Structure

Beyond simple commands, this savvy professional dives deep into the architecture of the prompt itself. A major takeaway from the list is the necessity of “Few-Shot” prompting combined with “Chain of Thought” reasoning. Giving the model examples (shots) of what you want is infinitely more powerful than describing what you want. When you combine this with the instruction to “think step by step,” you force the model to show its work, which significantly reduces logic errors in complex tasks.
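
Here is a minimal sketch of what that combination can look like in practice, assuming invented worked examples as the “shots”:

```python
# Few-shot + chain-of-thought sketch; the example Q/A pairs are invented for illustration.

FEW_SHOT_EXAMPLES = [
    ("A train travels 60 km in 1.5 hours. What is its average speed?",
     "Step 1: speed = distance / time. Step 2: 60 / 1.5 = 40. Answer: 40 km/h."),
    ("A shop sells 12 apples for $3. What does one apple cost?",
     "Step 1: price per apple = 3 / 12. Step 2: 3 / 12 = 0.25. Answer: $0.25."),
]

def build_few_shot_prompt(question: str) -> str:
    # Instruction + worked examples ("shots") + the new question in the same Q/A pattern.
    parts = ["Answer the question. Think step by step and show your work.\n"]
    for q, a in FEW_SHOT_EXAMPLES:
        parts.append(f"Q: {q}\nA: {a}\n")
    parts.append(f"Q: {question}\nA:")
    return "\n".join(parts)

print(build_few_shot_prompt("A cyclist rides 45 km in 3 hours. What is her average speed?"))
```

The shots teach the output format by demonstration, and the “think step by step” line pushes the model to emit its intermediate reasoning before the final answer.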

The person who shared it also emphasizes the use of delimiters. Using punctuation like hashtags, quotes, or brackets to separate instructions from source text is vital. Without these visual boundaries, the model often gets confused about where the instructions end and the data begins.
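
For example, a delimited prompt might be assembled like this; the specific markers (hashes and triple quotes) are just one possible choice:

```python
# Sketch of delimiter usage: hashes fence the instructions, triple quotes fence the data.

instructions = "Translate the text between triple quotes into French. Output only the translation."
source_text = "The meeting has been moved to Thursday at 10 a.m."

prompt = (
    "### INSTRUCTIONS ###\n"
    f"{instructions}\n"
    "### TEXT ###\n"
    f'"""{source_text}"""'
)

print(prompt)
```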

Additionally, the author notes the importance of mimicking the language of a provided sample. If you want a specific tone, pasting a sample and asking the AI to analyze and replicate that style is far more effective than using adjectives like “professional” or “witty.” This approach turns the prompt into a structured document rather than a casual chat message.
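
A rough sketch of that analyze-then-replicate pattern, using an invented writing sample:

```python
# Hypothetical two-part prompt: analyze a pasted sample, then write in that style.

writing_sample = (
    "We shipped the fix at 2 a.m. Nobody cheered. That's how you know it worked."
)

style_prompt = (
    "Below is a writing sample.\n"
    "1. Describe its tone, sentence length, and rhythm in two bullet points.\n"
    "2. Then write a 3-sentence product update in exactly that style.\n\n"
    f'Sample:\n"""{writing_sample}"""'
)

print(style_prompt)
```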

Calibrating Length and Complexity

The final cluster of insights from this industry pro revolves around the relationship between task complexity and prompt length. There is a common misconception that shorter prompts are better, but the expert suggests a tiered approach based on the difficulty of the request. For simple tasks, a 50–100 word prompt suffices. However, for complex operations, the creator recommends expanding to 300–500 words. This extra volume isn’t fluff; it is space used to define edge cases, output formats, and specific constraints.
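
To put rough numbers on it, here is an illustrative helper; the 50–100 and 300–500 word tiers come from the list, while the function itself is just a sketch:

```python
# Maps task complexity to the word budgets mentioned above.

def prompt_word_budget(complexity: str) -> range:
    budgets = {
        "simple": range(50, 101),    # quick rewrites, short emails, one-off questions
        "complex": range(300, 501),  # multi-constraint tasks: edge cases, formats, style rules
    }
    return budgets[complexity]

draft_prompt = "Your task is to refactor the attached module and document every public function."
words = len(draft_prompt.split())
if words not in prompt_word_budget("complex"):
    print(f"Prompt is {words} words; consider adding edge cases and output format details.")
```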

Another brilliant technique mentioned is “Teach first, quiz after.” This involves feeding the model information and asking it to verify its understanding before it executes the task. This confirms that the model has ingested the context correctly. The original poster also advises splitting complex tasks into simple steps, or generating a separate script for each file when working with multi-file code.
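
A minimal sketch of that flow, with the message wording invented for illustration:

```python
# "Teach first, quiz after" as two separate messages: check understanding, then execute in steps.

teach_message = (
    "Here is the context you will need:\n"
    "- Our API returns paginated JSON with a `next_cursor` field.\n"
    "- Rate limit: 100 requests per minute.\n\n"
    "Summarize these constraints back to me in your own words. Do not write any code yet."
)

# Only after the model's summary matches the context do we send the actual task,
# broken into small, ordered steps instead of one monolithic request.
execute_message = (
    "Good. Now, step 1: write a function that fetches a single page.\n"
    "Step 2: loop and follow `next_cursor` until it is null.\n"
    "Step 3: add a short sleep between requests so we stay under the rate limit."
)

print(teach_message)
```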

This modular approach prevents the model from getting overwhelmed and losing coherence halfway through the generation. It is about guiding the AI through a workflow rather than dumping a project in its lap and hoping for the best.



Credits: Ruben Hassid

The Nuance of Over-Instruction

While these 28 fundamentals are powerful, applying them all at once can sometimes be counterproductive if not done carefully. A prompt that is too rigid might stifle the model’s creativity when you actually want it to brainstorm. Furthermore, as this talented creator implies with the word count guidelines, writing a 500-word prompt for a simple email is inefficient.

The key is to match the fidelity of the prompt to the importance of the outcome. You must also remain aware that different models (like GPT-4 vs. Claude 3) respond slightly differently to incentive-style tricks like penalties or tipping, so testing is always required.

I highly recommend saving the full checklist to review before your next big project!
