v0.1.4

LLM anchoring

2025-11-28

Foundation LLMs know too much to be interesting by default. Without anchoring, they average across every viewpoint they were trained on and tend to give safe, generic answers.

The most effective approach I use is the person anchor: tell the model to answer as if it were a specific person. A prompt like "Give me your opinion as if you were X" often produces much better results.

That single conditioning step forces a clear worldview, sharper tradeoffs, and more useful output.
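The person anchor is just a wrapper around the question. A minimal sketch in Python, where the wrapper wording, the `anchor_prompt` helper, and the example persona are all illustrative assumptions rather than a fixed recipe:

```python
def anchor_prompt(question: str, persona: str) -> str:
    """Wrap a question in a person anchor so the model answers
    from one specific worldview instead of a generic average."""
    return (
        f"Give me your opinion as if you were {persona}. "
        "Commit to that perspective, including its tradeoffs.\n\n"
        f"Question: {question}"
    )

# The anchored prompt can then be sent to any chat-completion API.
prompt = anchor_prompt(
    "Should a three-person team adopt microservices?",
    "a site reliability engineer with a decade of on-call experience",
)
print(prompt)
```

The useful part is not the string formatting but the choice of persona: the more specific the person, the more the model is forced to pick a side.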