Lessons After a Half Billion GPT Tokens

Ken writes about the lessons they’ve learned building new LLM-based features into their product. When it comes to prompts, less is more: not enumerating an exact list of instructions in the prompt produces better results, as long as that thing is already common knowledge. GPT is not dumb, and it actually gets confused if you over-specify. This has been my experience as well. For a recent project, I started with a very long and detailed prompt asking the LLM to classify a text and produce a summary. GPT-4, GPT-3.5, Claude-3-Opus, and Claude-3-Haiku all produced average or poor results. I then experimented with shorter prompts, and with some adjustments I was able to get much better responses from a much shorter prompt. ...
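
To make the contrast concrete, here is a hypothetical before/after for a classify-and-summarize task. These are not the actual prompts from that project, just an illustration of the kind of trimming involved: the long version enumerates rules the model already applies by default, while the short one states only the task and the expected output shape.

```python
# Hypothetical prompts, for illustration only -- not the ones used in the project.
LONG_PROMPT = """You are an expert text analyst. Read the text below carefully.
Classify it into exactly one of: news, opinion, advertisement, other. When
classifying, weigh the tone, the author's intent, whether facts are cited, and
whether products are promoted. Then write a summary of at most three sentences.
The summary must be neutral, must not add information, must not use the first
person, and must cover the main point of every paragraph. ..."""

SHORT_PROMPT = """Classify the text as news, opinion, advertisement, or other,
then summarize it in at most three sentences."""
```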

2024-05-27 · 2 min

Fuck You, Show Me the Prompt

Hamel dives deep into how LLM frameworks like langchain, instructor, and guidance perform tasks like formatting the response as valid JSON output. He intercepts the API calls these Python libraries make to OpenAI’s GPT services to shed some light on how many calls they issue and what prompts they actually use. I’ve always been skeptical of the usefulness of many of the LLM “wrapper” libraries, especially for larger and more serious projects; they are fine for quick prototypes, though. ...
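
One way to do this kind of interception yourself is a minimal sketch like the one below (not necessarily Hamel’s setup): hand the openai Python client an httpx.Client with a request hook that prints every outgoing payload, then pass that client to the wrapper library. The model name and prompt here are placeholders.

```python
# Minimal sketch: log every request the openai Python client sends, so you can
# see the exact prompt a framework builds. Assumes openai>=1.x and httpx, and a
# framework that accepts a pre-built client (e.g. instructor.from_openai does).
import json

import httpx
from openai import OpenAI


def log_request(request: httpx.Request) -> None:
    # For chat completions the body holds the messages (the real prompt) and
    # every parameter the library filled in on your behalf.
    body = request.read()  # safe for JSON bodies; returns the raw bytes
    if body:
        print(f"--> {request.method} {request.url}")
        print(json.dumps(json.loads(body), indent=2))


# Route all traffic through an httpx.Client that runs the hook before sending.
client = OpenAI(http_client=httpx.Client(event_hooks={"request": [log_request]}))

# Any wrapper built on this client -- or a direct call like this one -- now
# prints the raw payload it sends to the API.
client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Return {\"ok\": true} as JSON."}],
)
```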

2024-02-29 · 3 min