
LocalAI

LocalAI is an open-source project that exposes a local, OpenAI-compatible API for running open-source models, and is commonly distributed as a Docker image. Compared to Ollama, it is better suited for deployment on a LAN server shared by a small team.

This guide explains how to configure and use LocalAI models in Aide.

You can find more detailed information in the LocalAI Official Documentation.

Installing LocalAI

Before using LocalAI, make sure you have followed the LocalAI Official Documentation to install and start the Docker image.
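For reference, LocalAI's quickstart usually comes down to a single docker run command similar to the sketch below. The image tag and flags here are assumptions based on the upstream quickstart; pick the variant (CPU, CUDA, etc.) that matches your hardware from the official documentation.

```bash
# Sketch of a typical LocalAI quickstart (CPU-only, all-in-one image).
# The image tag may differ depending on the release you install; see the official docs.
docker run -d --name local-ai \
  -p 8080:8080 \
  localai/localai:latest-aio-cpu
```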

API Base URL Configuration

Set aide.openaiBaseUrl to http://localhost:8080/v1 (assuming your LocalAI service is running on port 8080).

API Key Configuration

You can set aide.openaiKey to any value; for example, sk-LocalAI.

Model Configuration

Set aide.openaiModel to the name of the LocalAI model you want to use. We recommend the llama3-instruct model. For more models, please refer to the Supported Models List.
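If you are unsure which model names your instance exposes, you can query LocalAI's OpenAI-compatible models endpoint. This is a sketch that assumes the service is reachable on localhost:8080; the names returned depend on the models you have installed.

```bash
# List the models currently served by LocalAI (OpenAI-compatible endpoint).
# Assumes the service is reachable at localhost:8080.
curl http://localhost:8080/v1/models
```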

Example Configuration File

Below is a complete configuration example:

```json
{
  "aide.openaiBaseUrl": "http://localhost:8080/v1",
  "aide.openaiKey": "sk-LocalAI",
  "aide.openaiModel": "llama3-instruct"
}
```
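To confirm the setup works before pointing Aide at it, you can send a test request to LocalAI's OpenAI-compatible chat completions endpoint. This is a minimal sketch using the base URL, key, and model from the example above; the response content depends on the model you have installed.

```bash
# Minimal sanity check against the OpenAI-compatible chat endpoint.
# Uses the base URL, key, and model from the example configuration above.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-LocalAI" \
  -d '{
    "model": "llama3-instruct",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```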
