Introduction

What is Prompt Sail?

Prompt Sail is a self-hosted application that captures and logs all interactions with LLM APIs such as OpenAI, Anthropic, Google Gemini, and others. It acts as a proxy between your framework of choice (LangChain, the OpenAI Python library, etc.) and the LLM provider's API.

For developers, it offers a way to analyze and optimize API prompts.

For project managers, it provides insights into project and experiment costs.

For business owners, it helps ensure compliance with regulations and maintain governance over prompts and responses.

How does it work?

Prompt Sail is built as a set of Docker containers: one for the backend (promptsail-backend) and one for the frontend (promptsail-ui).

  • promptsail-backend acts as a proxy between your chosen LLM framework (such as LangChain or the OpenAI Python library) and the LLM provider API. Point your client's api_base at Prompt Sail's proxy_url, and the backend captures and logs every prompt and response (see the sketch after this list).
  • promptsail-ui provides a user interface for viewing, searching, and analyzing all transactions (prompts and responses).
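
For example, with the OpenAI Python library (v1+) the only change needed is the client's base_url (the api_base setting in older versions). A minimal sketch, assuming the proxy runs locally on port 8000; the exact proxy_url, including the project and provider path segments, depends on your Prompt Sail configuration:

```python
from openai import OpenAI

# Hypothetical proxy_url: host, port, and the project/provider path
# segments are assumptions — use the proxy_url from your own setup.
client = OpenAI(
    base_url="http://localhost:8000/my-project/openai/",
    api_key="sk-...",  # your real provider key; the proxy forwards it
)

# The request passes through Prompt Sail, which logs the prompt and
# the response before returning the result to your code.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, Prompt Sail!"}],
)
print(response.choices[0].message.content)
```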

There are two options for running the Prompt Sail Docker containers: pulling the pre-built images or building them from the source code.
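
A minimal docker-compose sketch of the pre-built-images option (the image locations, tags, ports, and any database dependency are assumptions; check the repository for the authoritative compose file):

```yaml
# Assumed image names and ports — verify against the repository.
services:
  promptsail-backend:
    image: ghcr.io/promptsail/promptsail-backend:latest
    ports:
      - "8000:8000"   # the proxy your api_base should point at
  promptsail-ui:
    image: ghcr.io/promptsail/promptsail-ui:latest
    ports:
      - "80:80"       # web UI for browsing logged transactions
```

Start both services with docker compose up, then point your client's api_base at the backend's address.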

On the next page you will:

  • learn how to run Prompt Sail on your local machine
  • make your first API call to OpenAI
