A builder turned emotion into a data structure
A guy on Reddit just shipped something he calls "git for feelings." It is exactly what it sounds like. You give a model a hex color, a two-word scene, and a strict output schema. It returns a structured emotional fingerprint. Then you fork that fingerprint, change one variable, and watch where the mood bends.
The thing that makes it work is the schema. Most prompts asking AI about emotion get fuzzy, meandering answers that read like horoscopes. This one does not. Six fields, a 20-word limit on the poetic line, and suddenly the output starts behaving like data you can compare.
I ran it last night with #ff3b7a (hot pink) and the words "NPC betrayal." Valence dropped to -0.7. Arousal spiked to 0.85. Dominant emotion came back as defiance, not love. The poetic line was "love wearing the wrong color tonight." Same hex, swap the scene to "first heartbeat of spring," and everything flipped.
Have you noticed how Claude is the most talked-about tool on the internet right now? Claude has been launching new models and features, like Claude Co-work, Claude Design, Skills, and Connectors, practically every week.
The world's first Claude-a-thon is a two-day deep dive into Claude, its use cases, and 10+ other AI tools, happening this weekend from 10 AM to 7 PM EST. They have just 1,000 free seats for a limited time.
In the workshop, you'll do deep research on Claude. Build your own artifacts and dashboards. Create full presentations using Claude. Set up Claude Connectors like Indeed to automate your job search.
Register NOW! (free for the next 48 hrs)
🧠 Live sessions: Saturday and Sunday
🕜 10 AM to 7 PM EST
Run this prompt before you read another paragraph
Copy this. Swap in any color and scene you want. Hit return.
You are an emotional translator. Given a hex color and a context
(scene, subject, event), output:
- dominant_emotion: one of [love, joy, calm, wonder, mystery, defiance, clarity]
- semantic_tags: up to 4 adjectives capturing the mood
- valence: number -1 to 1 (negative to positive)
- arousal: number 0 to 1 (calm to intense)
- poetic_line: a single sentence under 20 words, lyrical but not cliche.
Input: color=#ff3b7a, scene="NPC betrayal",
subject="GuardA", event="betrayal_discovered"
The dominant_emotion is not just a label. It sits at the intersection of the color's hue and the scene's vocabulary. Hot pink plus betrayal lands somewhere between passion and defiance, never love. The poetic_line is the tell. A tense line means tension in the data. A lyrical one means the inputs aligned into something coherent.
Now run the same hex with "first heartbeat of spring" instead. Watch the valence flip. The arousal softens. The poetic line loosens. The divergence is the point.
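If you want to treat the reply as data rather than prose, a small parser helps. Here is a minimal sketch in Python; the field names mirror the prompt above, but the validation checks and the sample reply are illustrative assumptions, not the builder's actual code or a real model output.

import json
from dataclasses import dataclass

ALLOWED_EMOTIONS = {"love", "joy", "calm", "wonder", "mystery", "defiance", "clarity"}

@dataclass
class Fingerprint:
    dominant_emotion: str
    semantic_tags: list[str]
    valence: float   # -1 (negative) to 1 (positive)
    arousal: float   # 0 (calm) to 1 (intense)
    poetic_line: str

def parse_fingerprint(raw: str) -> Fingerprint:
    """Parse a model reply and enforce the schema's constraints."""
    fp = Fingerprint(**json.loads(raw))
    assert fp.dominant_emotion in ALLOWED_EMOTIONS, "label outside the allowed set"
    assert len(fp.semantic_tags) <= 4, "too many tags"
    assert -1.0 <= fp.valence <= 1.0 and 0.0 <= fp.arousal <= 1.0, "scores out of range"
    assert len(fp.poetic_line.split()) < 20, "poetic line over the word limit"
    return fp

# Illustrative reply, not a real model output
reply = '''{"dominant_emotion": "defiance",
            "semantic_tags": ["electric", "wounded", "sharp", "unrepentant"],
            "valence": -0.7, "arousal": 0.85,
            "poetic_line": "love wearing the wrong color tonight"}'''
print(parse_fingerprint(reply))

Once the reply parses into that shape, comparing two runs is just comparing two records, which is the whole point of the fork workflow below.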
The trick is two layers, not one clever phrase
The engine runs two passes on every input.
Hue first. Hot pinks (330 to 360 degrees) default to passion. Blues (200 to 255) lean toward mystery. Greens drift calm. Deep purples slide toward grief or royalty depending on saturation. Color sets the prior before the model reads a single word of context.
Words second. "Glitch" pushes toward defiance. "Dream" tilts wonder. "Silence" pulls arousal down even when the color is aggressive. Context can override the baseline entirely or reinforce it.
When both layers agree, the output gets sharper and more confident. When they fight, the model has to resolve the tension, and that tension surfaces in the poetic line as compression or ambiguity. Same red returns different fingerprints for "heartbeat in a silent room" versus "rage behind a closed door." Same hex. Two different arousal scores.
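To make the color layer concrete, here is a minimal sketch of a hue-to-prior lookup in Python. The pink and blue degree bands follow the ones named above; the green and purple ranges, the saturation cutoff, and the fallback label are my guesses at the mapping, not the builder's actual table.

import colorsys

def hue_prior(hex_color: str) -> str:
    """Color layer: set an emotional prior from hue alone, before context is read."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    h, lightness, s = colorsys.rgb_to_hls(r, g, b)
    hue = h * 360
    if 330 <= hue <= 360:
        return "passion"                            # hot pinks default to passion
    if 200 <= hue <= 255:
        return "mystery"                            # blues lean toward mystery
    if 90 <= hue <= 180:
        return "calm"                               # greens drift calm (range assumed)
    if 260 <= hue <= 300:
        return "royalty" if s > 0.6 else "grief"    # purples split on saturation (direction assumed)
    return "neutral"                                # no strong prior; context decides

print(hue_prior("#ff3b7a"))  # passion: the hot-pink prior before "NPC betrayal" pulls it toward defiance
print(hue_prior("#4a90d9"))  # mystery: the blue prior before the scene is read

The word layer then gets to confirm or fight that prior, which is exactly the tension the poetic line surfaces.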
ChatGPT is a superpower if you know how to use it correctly.
Discover how HubSpot's guide to AI can elevate both your productivity and creativity to get more things done.
Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI.
Forking the prompt is where it gets useful
Each snapshot can be forked. Same color, new context, a new child node. You watch emotional drift happen as you change one variable at a time.
Start with calm blue (#4a90d9) and "fog at dawn." Fork it to "fog over a battlefield." Fork that again to "fog lifts, soldiers gone." Three child nodes, three different valence readings, color never changed. The drift is entirely linguistic.
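One way to hold those snapshots is a tiny tree structure. A minimal sketch, with field names of my own choosing and illustrative fingerprint values standing in for real model output:

from dataclasses import dataclass, field

@dataclass
class Snapshot:
    color: str
    scene: str
    fingerprint: dict                                  # scored output for this color + scene
    children: list["Snapshot"] = field(default_factory=list)

    def fork(self, scene: str, fingerprint: dict) -> "Snapshot":
        """Same color, new context: a child node you can compare against its parent."""
        child = Snapshot(self.color, scene, fingerprint)
        self.children.append(child)
        return child

# Values below are illustrative, not real model output
root = Snapshot("#4a90d9", "fog at dawn", {"valence": 0.4, "arousal": 0.2})
battle = root.fork("fog over a battlefield", {"valence": -0.5, "arousal": 0.6})
after = battle.fork("fog lifts, soldiers gone", {"valence": -0.2, "arousal": 0.3})

# Three nodes, one color; the valence drift between them is entirely linguistic
for node in (root, battle, after):
    print(node.scene, node.fingerprint["valence"])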
This is where it stops being a party trick. If you build characters, game narratives, or adaptive UI, you can trace the exact moment an emotional state pivots. You can see which words in the context layer carry the most weight. You build a map of how your model handles emotional pressure based on actual scored outputs you can compare side by side.
The deeper insight for prompt engineers: structure produces consistency. Poetry emerges from the constraints, not despite them. A model given total freedom to describe emotion produces vague, meandering language. A model given six fields and a 20-word limit produces something precise enough to ship.
Five stress tests before you go deeper
🔹 Crank temperature both ways. Same input at 0.2 versus 1.0 tells you how much the model is hedging versus exploring. Run both. Compare side by side.
🔹 Try the edge case. Calm color (#7fe0c2) plus violent context like "blood on the floor." Context usually overrides the color baseline. If yours still returns positive valence, your context vocabulary is too soft. Push it. "Corpse." "Shattered." Find the floor.
🔹 Glitch injection. Feed "glitch glitch glitch" as the entire context. Watch the entropy spike. Tags get unstable. The poetic line turns fragmented or recursive. Fast way to see how the system degrades under noise.
🔹 The near-black test. #020202 plus "the moment before the scream." High arousal, compressed output. Near-black is one of the few colors where arousal can swing either way, void or dread, depending entirely on what context feeds it.
🔹 Run the same prompt three times unchanged. If the dominant_emotion flips between runs, your temperature is too high for reliable tagging. Drop it until the label stabilizes. Raise it back up only for the poetic line.
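That last test is easy to script. A minimal sketch, where call_model stands in for whatever function wraps your API and returns the parsed fields; fake_model is just a stand-in so the example runs:

import random
from collections import Counter

def label_stability(call_model, prompt: str, temperature: float, runs: int = 3) -> Counter:
    """Run the same prompt several times and count where dominant_emotion lands."""
    labels = [call_model(prompt, temperature)["dominant_emotion"] for _ in range(runs)]
    return Counter(labels)

def fake_model(prompt, temperature):
    # Stand-in for a real API call: flaky above ~0.7, stable below
    label = random.choice(["defiance", "love"]) if temperature > 0.7 else "defiance"
    return {"dominant_emotion": label}

print(label_stability(fake_model, "hot pink + NPC betrayal", temperature=1.0))  # may flip
print(label_stability(fake_model, "hot pink + NPC betrayal", temperature=0.2))  # stays put

If the counter comes back with more than one key, the label is flipping and the temperature is too high for tagging.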
Three things to actually do tonight
🔹 Pick one hex, two opposite scenes. Run the prompt on both. Note which field shifted the most. Usually it is arousal. Valence is stickier than people expect.
🔹 Fork one into a child branch. Change a single word in the context. Run it again. Three forks deep and you have a small tree that tells you something specific about how your model handles ambiguity.
🔹 Save the tree. Even if it is just a notes app. The point of git for feelings is that you can compare yesterday's output to today's. One run is a curiosity. Twenty runs is a map.
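Saving the tree does not need more than a dated JSON file you can diff tomorrow. A minimal sketch, with file layout and naming that are mine, not the builder's:

import datetime
import json
import pathlib

def save_run(tree: dict, folder: str = "feeling_runs") -> pathlib.Path:
    """Dump today's tree of snapshots to a dated file so runs can be compared later."""
    out_dir = pathlib.Path(folder)
    out_dir.mkdir(exist_ok=True)
    out = out_dir / f"{datetime.date.today().isoformat()}.json"
    out.write_text(json.dumps(tree, indent=2))
    return out

# Illustrative run, not real model output
run = {"color": "#ff3b7a", "scene": "NPC betrayal", "valence": -0.7, "arousal": 0.85}
print(save_run(run))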
Master Claude AI (Free Guide)
The professionals pulling ahead aren't working more. They're using Claude.
Our free guide will show you how to:
Configure Claude to be the perfect assistant
Master AI-powered content creation
Transform complex data into actionable strategies
Harness Claude’s full potential
Transform your workflow with AI and stay ahead of the curve with this comprehensive guide to using Claude at work.
*Ad
The thing nobody is saying out loud
Most prompt advice tells you to write better sentences. This thing tells you to write a better schema. Big difference. The model already knows the language of emotion. What it does not know, by default, is which fields you want and how strict to be inside them. Six fields and a word limit do more for output quality than any "act as an expert" preamble you have ever pasted.
Treat emotional output like data you can query. Not a vibe. The poetry comes back anyway. It just comes back inside walls you can build on.
Run the prompt. The rest of the system is in your fork history.
📊 Which combo are you running first?
Hot pink + "first betrayal"
Calm blue + "battlefield at dawn"
Near-black + "the moment before the scream"