Run an LLM locally for development

Easily run an LLM locally as a backend for development, along with a Chat UI.

Using Ollama and Open WebUI.

Everything is installed via Docker Compose.

Requirements

  • Docker with the Compose plugin.
  • For the gpu profile, nvidia-container-toolkit must be installed.

Install

  1. Configure .env.
     • COMPOSE_PROFILES: gpu (you need nvidia-container-toolkit installed) or cpu.
  2. Run docker compose:
docker compose up -d
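
For step 1, a minimal .env could contain just the profile selection (shown here as an illustration; keep any other variables already present in the repository's .env template):

COMPOSE_PROFILES=gpu

Use COMPOSE_PROFILES=cpu instead on machines without an NVIDIA GPU.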

Access to the services
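
The two services are Open WebUI (the chat interface, reached from the browser on whatever host port the compose file publishes for it) and the Ollama API. A quick sanity check that Ollama is up, assuming the default 11434 port mapping:

curl http://localhost:11434/api/tags

This returns the models currently available to Ollama.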

Other interesting commands

Common docker compose commands useful in day-to-day work:

  1. Download an Ollama model from the CLI:
docker compose exec ollama-gpu ollama pull <model_name>
  2. Stop:
docker compose stop
  3. Show logs:
docker compose logs -f
  4. Remove everything, including volumes:
docker compose down -v
  5. Update all the containers:
docker compose up --build -d
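
For example, to pull a model and confirm it is available (llama3.2 is only an illustrative model name; the commands use the ollama-gpu service as above, so adapt the service name if you run the cpu profile):

docker compose exec ollama-gpu ollama pull llama3.2
docker compose exec ollama-gpu ollama list

ollama list prints the models currently stored in the Ollama volume.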

Code examples

  • Simple. Using your local LLM as an OpenAI replacement (a minimal sketch of the idea follows after this list).
  • Multi MCP client. Using a multi-MCP client with your local LLM.
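
The idea behind the "Simple" example is that Ollama exposes an OpenAI-compatible API under /v1, so any OpenAI client can be pointed at the local server instead of the OpenAI service. A minimal sketch with curl, not the repository's example code, assuming the default 11434 port mapping and a previously pulled model named llama3.2:

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2", "messages": [{"role": "user", "content": "Say hello in one sentence."}]}'

OpenAI SDKs work the same way by pointing the client's base URL at http://localhost:11434/v1 and passing any non-empty API key.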
