Merged
18 changes: 11 additions & 7 deletions docs.json
@@ -66,6 +66,14 @@
"features/development/webcontainer",
"features/development/workbench"
]
},
{
"group": "AI Features",
"expanded": false,
"pages": [
"essentials/ai-chat-commands",
"essentials/project-templates"
]
}
]
},
@@ -136,18 +144,14 @@
"expanded": false,
"pages": ["integrations/vercel", "integrations/netlify", "integrations/cloudflare"]
},
{
"group": "Local Providers",
"expanded": false,
"pages": ["providers/lmstudio", "providers/ollama"]
},
{
"group": "Running Models Locally",
"icon": "cpu",
"expanded": false,
"pages": [
"running-models-locally/lm-studio",
"running-models-locally/local-model-setup"
"running-models-locally/local-model-setup",
"providers/lmstudio",
"providers/ollama"
]
}
]
66 changes: 60 additions & 6 deletions providers/lmstudio.mdx
@@ -23,7 +23,7 @@

## How It Works

LM Studio downloads and runs AI models locally using your computer's resources. It provides a simple interface to manage models, start local servers, and connect to various applications, including CodinIT.

<AccordionGroup>
<Accordion title="Model Management" icon="download">
@@ -63,14 +63,52 @@
## Setup Instructions

<Steps>
<Step title="Download LM Studio">Visit [LM Studio website](https://lmstudio.ai/) and download the application</Step>
<Step title="Install and Launch">Install LM Studio and launch the application</Step>
<Step title="Download Models">Browse the model library and download models you want to use</Step>
<Step title="Start Local Server">Click "Start Server" in LM Studio to begin the local API server</Step>
<Step title="Configure in Codinit">Set the server URL (usually http://localhost:1234) in Codinit settings</Step>
<Step title="Test Connection">Verify the connection and start using local AI models</Step>
<Step title="Download LM Studio">
Visit [lmstudio.ai](https://lmstudio.ai) and download the application for your operating system.

![LM Studio download page](/assets/images/lmstudio.webp)
</Step>
<Step title="Install and Launch">
Install LM Studio and launch the application. You'll see four tabs on the left:
- **Chat**: Interactive chat interface
- **Developer**: Where you will start the server
- **My Models**: Where your downloaded models are stored
- **Discover**: Browse and add new models
</Step>
<Step title="Download a Model">
Navigate to the "Discover" tab, browse available models, and download your preferred model. Wait for the download to complete.

**Recommended**: Use **Qwen3 Coder 30B A3B Instruct** for the best experience with CodinIT. This model delivers strong coding performance and reliable tool use.
</Step>
<Step title="Start the Server">
Navigate to the "Developer" tab and toggle the server switch to "Running". Note the server address shown there (for example, `http://localhost:51732`); you'll enter it in CodinIT.

![Starting the LM Studio server](/assets/images/lmstudio.webp)
</Step>
<Step title="Configure Model Settings">
After loading your model in the Developer tab, configure these critical settings:
- **Context Length**: Set to 262,144 (the model's maximum)
- **KV Cache Quantization**: Leave unchecked (critical for consistent performance)
- **Flash Attention**: Enable if available (improves performance)
</Step>
<Step title="Configure in CodinIT">
Enter the server URL from the Developer tab in CodinIT settings, then verify the connection to start using local AI models.
</Step>
</Steps>
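To confirm the connection outside of CodinIT, you can query the server's OpenAI-compatible `/v1/models` endpoint directly. A minimal sketch using only the standard library; the default URL here (`http://localhost:1234`) is LM Studio's common default, so substitute whatever address your Developer tab shows:

```python
import json
import urllib.error
import urllib.request


def list_models(base_url: str = "http://localhost:1234", timeout: float = 5.0):
    """Return model IDs from LM Studio's OpenAI-compatible /v1/models
    endpoint, or None if the server is unreachable."""
    url = base_url.rstrip("/") + "/v1/models"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            payload = json.load(resp)
    except (urllib.error.URLError, OSError, ValueError):
        return None  # server not running, wrong port, or unexpected response
    return [model["id"] for model in payload.get("data", [])]


if __name__ == "__main__":
    models = list_models()
    if models is None:
        print("LM Studio server not reachable -- is it running?")
    else:
        print("Loaded models:", models)
```

If this prints your loaded model's ID, CodinIT should be able to reach the server with the same URL.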

### Quantization Guide

Choose quantization based on your available RAM:

- **32GB RAM**: Use 4-bit quantization (~17GB download)
- **64GB RAM**: Use 8-bit quantization (~32GB download) for better quality
- **128GB+ RAM**: Consider full precision or larger models
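The sizes above follow from a simple back-of-envelope rule: raw weights take roughly parameters × bits / 8 bytes, plus some overhead for metadata and embeddings. A sketch of that estimate (the 10% overhead factor is an assumption; real files vary by quantization scheme):

```python
def approx_download_gb(params_billion: float, bits: int, overhead: float = 0.10) -> float:
    """Rough download size for a quantized model: parameters * bits / 8 bytes
    of raw weights, plus ~10% for metadata and format overhead."""
    raw_gb = params_billion * bits / 8  # e.g. 30B at 4-bit -> 15 GB of raw weights
    return round(raw_gb * (1 + overhead), 1)
```

For a 30B-parameter model this gives roughly 16.5 GB at 4-bit and 33 GB at 8-bit, in line with the figures above.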

### Model Format

- **Mac (Apple Silicon)**: Use MLX format for optimized performance
- **Windows/Linux**: Use GGUF format
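If you script your setup, the format choice above reduces to a platform check. A small sketch using Python's stdlib (`Darwin`/`arm64` identifies Apple Silicon macOS):

```python
import platform


def preferred_model_format() -> str:
    """MLX is optimized for Apple Silicon; GGUF works everywhere else."""
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "MLX"
    return "GGUF"
```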

## Key Features

<BadgeGroup>
@@ -209,3 +247,19 @@
<Callout type="warning">
**Resource Intensive**: Large models require significant RAM and may run slowly on lower-end hardware.
</Callout>

## Troubleshooting

If CodinIT can't connect to LM Studio:

1. Verify the LM Studio server is running (check the Developer tab)
2. Ensure a model is loaded
3. Check your system meets hardware requirements
4. Confirm the server URL matches in CodinIT settings

## Important Notes

- Start LM Studio before using it with CodinIT
- Keep LM Studio running in the background while you work
- First model download may take several minutes depending on size
- Models are stored locally after download
79 changes: 0 additions & 79 deletions running-models-locally/lm-studio.mdx

This file was deleted.