Using Custom LLMs with Kaizen

How to integrate custom LLMs with Kaizen?

This post explains how to set up a custom LLM with Kaizen.

Step 1: Set Environment Variables

First, set the environment variables Kaizen needs to authenticate with and call your LLM provider's API. You can define them in your .env file or set them directly in Python:

os.environ["LLM_API_KEY"] = "<YOUR_LLM_API_KEY>"  
os.environ["LLM_API_BASE"] = "<YOUR_LLM_API_BASE_URL>"  
os.environ["LLM_API_VERSION"] = "<YOUR_LLM_API_VERSION>"  

Replace <YOUR_LLM_API_KEY>, <YOUR_LLM_API_BASE_URL>, and <YOUR_LLM_API_VERSION> with the actual credentials for your LLM provider.
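If you prefer to keep these values in a .env file, they need to be loaded into the process environment before Kaizen runs. A minimal sketch using python-dotenv (an assumption here, not something Kaizen requires; install it with pip install python-dotenv):

import os
from dotenv import load_dotenv

# Reads the .env file in the current directory and populates os.environ.
load_dotenv()

# Sanity check: the variables from Step 1 should now be visible.
assert "LLM_API_KEY" in os.environ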

Step 2: Update config.json

Next, update your config.json file to include the model configuration for your LLM.
Kaizen uses litellm under the hood, so your config.json can look something like this. The configuration specifies which model deployment to use and the associated cost per token.

{
  "language_model": {
    "provider": "litellm",
    "enable_observability_logging": false,
    "redis_enabled": false,
    "models": [
      {
        "model_name": "default",
        "litellm_params": {
          "model": "<MODEL_NAME>/<NAME_OF_DEPLOYMENT>",
          "api_key": "os.environ/LLM_API_KEY",
          "api_base": "os.environ/LLM_API_BASE"
        },
        "model_info": {
          "input_cost_per_token": ...,
          ...
        }
      }
    ]
  },
  "github_app": {
    ...
  }
}

Replace <MODEL_NAME>/<NAME_OF_DEPLOYMENT> with the litellm identifier for your model; for an Azure OpenAI deployment, for example, this takes the form azure/<your-deployment-name>.
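As a concrete illustration, here is what an entry in the models array might look like for an Azure OpenAI deployment (the deployment name and per-token costs below are made-up placeholders, not real prices):

{
  "model_name": "default",
  "litellm_params": {
    "model": "azure/my-gpt-4o-deployment",
    "api_key": "os.environ/LLM_API_KEY",
    "api_base": "os.environ/LLM_API_BASE",
    "api_version": "os.environ/LLM_API_VERSION"
  },
  "model_info": {
    "input_cost_per_token": 0.0000025,
    "output_cost_per_token": 0.00001
  }
}

The "os.environ/..." strings tell litellm to read the corresponding value from your environment variables at runtime, so no secrets need to be committed to config.json.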
For more information on model token pricing, check here.
You can also check out an example configuration here.
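Before wiring everything through Kaizen, you can sanity-check your credentials by calling litellm directly with the same model string. A minimal sketch (the model string is an illustrative Azure example; substitute your own <MODEL_NAME>/<NAME_OF_DEPLOYMENT>):

import os
import litellm

# Issues a single test completion using the same environment
# variables that config.json references.
response = litellm.completion(
    model="azure/my-gpt-4o-deployment",  # illustrative placeholder
    messages=[{"role": "user", "content": "Say hello."}],
    api_key=os.environ["LLM_API_KEY"],
    api_base=os.environ["LLM_API_BASE"],
    api_version=os.environ["LLM_API_VERSION"],
)
print(response.choices[0].message.content)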

Feel free to reach out if you have any questions or need further assistance!