Click is a Python library for creating command line applications.

The llm tool created by Simon Willison uses Click, and it has a lot of subcommands.

e.g.:

$ llm keys set openai
Enter key: ...

$ llm models default
gpt-4o
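
For context, here's how nested subcommands are defined in Click: commands attach to groups, and groups can nest inside other groups. This is a minimal hypothetical sketch, not llm's actual source:

import click

@click.group()
def cli():
    """Top-level entry point."""

@cli.group()
def models():
    """Commands for working with models."""

@models.command()
def default():
    """Print the default model."""
    click.echo("gpt-4o")

if __name__ == "__main__":
    cli()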

I am building a wrapper around this CLI tool that lets me use it in an interactive REPL. I wanted autocompletion to remind me of the available subcommands and their nested subcommands.

Here’s how I got a list of all the nested subcommands and built an autocompletion engine.

import sys

import click
import llm
from llm.cli import cli

MODELS = {x.model_id: None for x in llm.get_models()}

def build_command_tree(cmd):
    """Recursively build a command tree for a Click app.

    Args:
        cmd (click.Command or click.Group): The Click command/group to inspect.

    Returns:
        dict: A nested dictionary representing the command structure.
    """
    tree = {}
    if isinstance(cmd, click.Group):
        for name, subcmd in cmd.commands.items():
            if cmd.name == "models" and name == "default":
                tree[name] = MODELS  # Available model IDs as completions
            else:
                # Recursively build the tree for subcommands
                tree[name] = build_command_tree(subcmd)
    else:
        # Leaf command with no subcommands
        tree = None
    return tree


# Generate the tree
COMMAND_TREE = build_command_tree(cli)


def get_completions(tokens, tree=COMMAND_TREE):
    """Get autocompletions for the current command tokens.

    Args:
        tokens (list): List of tokens (command arguments).
        tree (dict): The command tree.

    Returns:
        list: List of possible completions.
    """
    for token in tokens:
        if token.startswith("-"):
            # Skip options (flags)
            continue
        if tree and token in tree:
            tree = tree[token]
        else:
            # No completions available
            return []

    # Return possible completions (keys of the current tree level)
    return list(tree.keys()) if tree else []

if __name__ == "__main__":
    tokens = sys.argv[2:]  # Skip the script name and the leading `llm`
    print(get_completions(tokens))
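
Since the command tree is nothing more than nested dicts with None at the leaves, it's easy to eyeball. As a quick sanity check, appending two lines to the script pretty-prints the whole structure:

import json
print(json.dumps(COMMAND_TREE, indent=2))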

This suggests possible nested subcommands based on the input. It also suggests the available LLM models after the llm models default subcommand.

e.g.:

$ python autocomplete_llm.py llm models
['list', 'default']

$ python autocomplete_llm.py llm models default
['gpt-4o', 'gpt-4o-mini', 'gpt-4o-audio-preview', 'gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-4', 'gpt-4-32k', 'gpt-4-1106-preview', 'gpt-4-0125-preview', 'gpt-4-turbo-2024-04-09', 'gpt-4-turbo', 'o1-preview', 'o1-mini', 'gpt-3.5-turbo-instruct']

What is the purpose of this? I'm building a new feature in litecli that will embed the llm tool and let users write SQL queries with the help of LLMs. When a user invokes llm inside litecli, I'd hate for them to switch to a terminal just to find out how to use a specific subcommand, or even to list the available subcommands.

By adding this autocompletion, users stay in a flow state and avoid an unnecessary context switch. The feature is not quite ready for release, but I'm quite excited by its potential.
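
As it happens, a tree of this exact shape (string keys, with None or a nested dict as the value) is what prompt_toolkit's NestedCompleter.from_nested_dict expects, and litecli is already built on prompt_toolkit. Here's a minimal sketch of how the wiring could look in a REPL (an illustration, not the final litecli integration):

from prompt_toolkit import PromptSession
from prompt_toolkit.completion import NestedCompleter

from autocomplete_llm import COMMAND_TREE  # the tree built above

session = PromptSession(completer=NestedCompleter.from_nested_dict(COMMAND_TREE))
while True:
    # Tab-completion now walks the nested subcommands (and model IDs).
    line = session.prompt("llm> ")
    print(f"would run: llm {line}")  # dispatch to the llm CLI here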