Recently I started using Simon Willison's CLI tool, which is conveniently called llm. A recent version of the tool introduced a particularly useful fragments feature that allows the user to provide extra information to the LLM when working with long-context models.

Simon himself developed an llm-hacker-news plugin that fetches all the comments in a Hacker News discussion and provides them to llm as extra context (i.e. as a fragment).

Today I developed and released a simple Python library that is a plugin for llm and introduces a new fragment type to the tool. It's called llm-url-markdown, and it uses the Jina Reader API to fetch the Markdown content of a web URL and provide it as a fragment to llm.
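The Jina Reader part is simple to reproduce on its own: prefixing a page URL with `https://r.jina.ai/` returns that page rendered as Markdown. The sketch below is not the plugin's actual code, just a minimal illustration of the idea; the `https://` fallback for bare hostnames is an assumption for convenience.

```python
from urllib.request import Request, urlopen


def jina_reader_url(url: str) -> str:
    """Build the Jina Reader endpoint for a page URL.

    Jina Reader works by prefixing the target URL with
    https://r.jina.ai/ and returning the page as Markdown.
    """
    if not url.startswith(("http://", "https://")):
        # Assumed convenience: treat bare hosts like
        # "github.com/simonw/llm" as https URLs.
        url = "https://" + url
    return "https://r.jina.ai/" + url


def fetch_markdown(url: str) -> str:
    """Fetch the Markdown rendering of a page via Jina Reader."""
    request = Request(jina_reader_url(url))
    with urlopen(request) as response:
        return response.read().decode("utf-8")
```

The Markdown string returned by `fetch_markdown` is, conceptually, what the plugin hands to llm as a fragment.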

To use this plugin, first you need to make sure you have llm installed:

brew install llm

or:

pip install llm

Then set up an API key for OpenAI (or follow llm's docs to set up local models or other cloud LLMs):

llm keys set openai

Next, install llm-url-markdown:

llm install llm-url-markdown

That's it! Now you can use it with the md: prefix, like so:

llm -f md:github.com/simonw/llm 'What is the main goal of this LLM tool?' -m gpt-4o-mini

