Lessons After a Half Billion GPT Tokens
Ken writes about the lessons they’ve learned building new LLM-based features into their product. One of them: when it comes to prompts, less is more. Not enumerating an exact list of instructions in the prompt produces better results, if that thing is already common knowledge. GPT is not dumb, and it actually gets confused if you over-specify.

This has been my experience as well. For a recent project, I started with a very long and detailed prompt asking the LLM to classify a text and produce a summary. GPT-4, GPT-3.5, Claude-3-Opus, and Claude-3-Haiku all gave average-to-poor results. I then experimented with shorter prompts, and with some adjustments I got much better responses from a far shorter prompt. ...
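To make the "less is more" idea concrete, here is a hypothetical sketch of the kind of trimming described above. Neither prompt is from the original post or my actual project; both, along with the `build_messages` helper, are illustrative assumptions.

```python
# Hypothetical over-specified prompt: long rule lists like this can
# confuse the model when the task is already common knowledge.
VERBOSE_PROMPT = """You are an expert text analyst. Classify the text into
exactly one of: news, opinion, review. Rules:
1. If the text reports on events, choose news.
2. If the text argues for a position, choose opinion.
3. If the text evaluates a product, choose review.
4. Never choose more than one category.
5. If unsure, prefer news over opinion.
Then write a summary of no more than three sentences, in a neutral tone,
without quoting the text verbatim, without using the first person..."""

# Hypothetical trimmed prompt: states the task and lets the model
# apply what it already knows.
TERSE_PROMPT = """Classify the text as news, opinion, or review,
then summarize it in up to three sentences."""


def build_messages(system_prompt: str, text: str) -> list[dict]:
    """Assemble a chat-style message list for an LLM API call."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": text},
    ]
```

The only change between the two variants is the system prompt, so swapping them in and out makes it easy to compare outputs across models.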