diff --git a/docs.json b/docs.json
index 010531a..6931385 100644
--- a/docs.json
+++ b/docs.json
@@ -66,6 +66,14 @@
"features/development/webcontainer",
"features/development/workbench"
]
+ },
+ {
+ "group": "AI Features",
+ "expanded": false,
+ "pages": [
+ "essentials/ai-chat-commands",
+ "essentials/project-templates"
+ ]
}
]
},
@@ -136,18 +144,14 @@
"expanded": false,
"pages": ["integrations/vercel", "integrations/netlify", "integrations/cloudflare"]
},
- {
- "group": "Local Providers",
- "expanded": false,
- "pages": ["providers/lmstudio", "providers/ollama"]
- },
{
"group": "Running Models Locally",
"icon": "cpu",
"expanded": false,
"pages": [
- "running-models-locally/lm-studio",
- "running-models-locally/local-model-setup"
+ "running-models-locally/local-model-setup",
+ "providers/lmstudio",
+ "providers/ollama"
]
}
]
diff --git a/providers/lmstudio.mdx b/providers/lmstudio.mdx
index fa7d5bc..b977922 100644
--- a/providers/lmstudio.mdx
+++ b/providers/lmstudio.mdx
@@ -63,14 +63,52 @@ LM Studio downloads and runs AI models locally using your computer's resources.
## Setup Instructions
- Visit [LM Studio website](https://lmstudio.ai/) and download the application
- Install LM Studio and launch the application
- Browse the model library and download models you want to use
- Click "Start Server" in LM Studio to begin the local API server
- Set the server URL (usually http://localhost:1234) in Codinit settings
- Verify the connection and start using local AI models
+
+ Visit [lmstudio.ai](https://lmstudio.ai) and download the application for your operating system.
+
+ 
+
+
+ Install LM Studio and launch the application. You'll see four tabs on the left:
+ - **Chat**: Interactive chat interface
+ - **Developer**: Where you will start the server
+ - **My Models**: Where your downloaded models are stored
+ - **Discover**: Browse and add new models
+
+
+ Navigate to the "Discover" tab, browse available models, and download your preferred model. Wait for the download to complete.
+
+ **Recommended**: Use **Qwen3 Coder 30B A3B Instruct** for the best experience with CodinIT. This model delivers strong coding performance and reliable tool use.
+
+
+ Navigate to the "Developer" tab and toggle the server switch to "Running". Note the server URL shown there (for example `http://localhost:51732`); you'll enter it in CodinIT settings.
+
+ 
+
+
+ After loading your model in the Developer tab, configure these critical settings:
+ - **Context Length**: Set to 262,144 (the model's maximum)
+ - **KV Cache Quantization**: Leave unchecked (critical for consistent performance)
+ - **Flash Attention**: Enable if available (improves performance)
+
+
+ Set the server URL in CodinIT settings and verify the connection to start using local AI models.
+
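Once the URL is saved, you can sanity-check the connection end-to-end before involving CodinIT. LM Studio exposes an OpenAI-compatible API, so a short Python sketch can post a test chat request. The port and the placeholder model name below are assumptions; use the URL shown in your Developer tab.

```python
import json
import urllib.error
import urllib.request

# Assumption: replace with the URL shown in LM Studio's Developer tab
BASE_URL = "http://localhost:51732"

def build_request(base_url: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for LM Studio."""
    body = json.dumps({
        "model": "local-model",  # placeholder; LM Studio routes to the loaded model
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 32,
    }).encode("utf-8")
    return urllib.request.Request(
        base_url.rstrip("/") + "/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

try:
    with urllib.request.urlopen(build_request(BASE_URL, "Say hello"), timeout=30) as resp:
        reply = json.load(resp)
        print(reply["choices"][0]["message"]["content"])
except (urllib.error.URLError, OSError) as exc:
    print(f"Connection failed ({exc}); re-check that the server is running")
```

If this prints a completion, CodinIT should connect with the same URL.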
+### Quantization Guide
+
+Choose quantization based on your available RAM:
+
+- **32GB RAM**: Use 4-bit quantization (~17GB download)
+- **64GB RAM**: Use 8-bit quantization (~32GB download) for better quality
+- **128GB+ RAM**: Consider full precision or larger models
+
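The sizes above follow from simple arithmetic: each weight stored at n-bit precision takes n/8 bytes, plus some overhead for embeddings and metadata. A rough sketch (the 10% overhead factor is an illustrative assumption; actual quantized files vary):

```python
def approx_download_gb(params_billion: float, bits: int, overhead: float = 1.1) -> float:
    """Rough model file size in GB: parameters * bits/8 bytes, plus ~10% overhead."""
    return params_billion * bits / 8 * overhead

# Qwen3 Coder 30B at the quantization levels listed above:
print(round(approx_download_gb(30, 4), 1))  # 4-bit: ~16.5 GB, close to the ~17GB figure
print(round(approx_download_gb(30, 8), 1))  # 8-bit: ~33.0 GB, close to the ~32GB figure
```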
+### Model Format
+
+- **Mac (Apple Silicon)**: Use MLX format for optimized performance
+- **Windows/Linux**: Use GGUF format
+
## Key Features
@@ -209,3 +247,19 @@ LM Studio downloads and runs AI models locally using your computer's resources.
**Resource Intensive**: Large models require significant RAM and may run slowly on lower-end hardware.
+
+## Troubleshooting
+
+If CodinIT can't connect to LM Studio:
+
+1. Verify LM Studio server is running (check Developer tab)
+2. Ensure a model is loaded
+3. Check your system meets hardware requirements
+4. Confirm the server URL in CodinIT settings matches the one shown in LM Studio
+
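The first two checks can be scripted: LM Studio's OpenAI-compatible `/v1/models` endpoint answers only when the server is up, and lists the models currently loaded. A minimal sketch, assuming the URL shown in your Developer tab:

```python
import json
import urllib.error
import urllib.request

BASE_URL = "http://localhost:51732"  # must match the URL in CodinIT settings

def models_url(base_url: str) -> str:
    # /v1/models is part of LM Studio's OpenAI-compatible API surface
    return base_url.rstrip("/") + "/v1/models"

try:
    with urllib.request.urlopen(models_url(BASE_URL), timeout=5) as resp:
        loaded = [m["id"] for m in json.load(resp).get("data", [])]
        if loaded:
            print("Server up; loaded models:", loaded)
        else:
            print("Server up, but no model is loaded (check 2)")
except (urllib.error.URLError, OSError):
    print("Server unreachable; check the Developer tab toggle (check 1)")
```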
+## Important Notes
+
+- Start LM Studio before using with CodinIT
+- Keep LM Studio running in the background while using CodinIT
+- First model download may take several minutes depending on size
+- Models are stored locally after download
diff --git a/running-models-locally/lm-studio.mdx b/running-models-locally/lm-studio.mdx
deleted file mode 100644
index 71a7a17..0000000
--- a/running-models-locally/lm-studio.mdx
+++ /dev/null
@@ -1,79 +0,0 @@
----
-title: "LM Studio"
-description: "Set up LM Studio for local AI model execution with CodinIT using Qwen3 Coder 30B for privacy-focused, offline development."
----
-
-## Setting Up LM Studio with CodinIT
-
-Run AI models locally using LM Studio with CodinIT.
-
-### Prerequisites
-
-* Windows, macOS, or Linux computer with AVX2 support
-
-### Setup Steps
-
-#### 1. Install LM Studio
-
-* Visit [lmstudio.ai](https://lmstudio.ai)
-* Download and install for your operating system
-
-
-
-#### 2. Launch LM Studio
-
-* Open the installed application
-* You'll see four tabs on the left: **Chat**, **Developer** (where you will start the server), **My Models** (where your downloaded models are stored), **Discover** (add new models)
-
-#### 3. Download a Model
-
-* Browse the "Discover" page
-* Select and download your preferred model
-* Wait for download to complete
-
-#### 4. Start the Server
-
-* Navigate to the "Developer" tab
-* Toggle the server switch to "Running"
-* Note: The server will run at `http://localhost:51732`
-
-
-
-### Recommended Model and Settings
-
-For the best experience with CodinIT, use **Qwen3 Coder 30B A3B Instruct**. This model delivers strong coding performance and reliable tool use.
-
-#### Critical Settings
-
-After loading your model in the Developer tab, configure these settings:
-
-1. **Context Length**: Set to 262,144 (the model's maximum)
-2. **KV Cache Quantization**: Leave unchecked (critical for consistent performance)
-3. **Flash Attention**: Enable if available (improves performance)
-
-#### Quantization Guide
-
-Choose quantization based on your RAM:
-
-* **32GB RAM**: Use 4-bit quantization (\~17GB download)
-* **64GB RAM**: Use 8-bit quantization (\~32GB download) for better quality
-* **128GB+ RAM**: Consider full precision or larger models
-
-#### Model Format
-
-* **Mac (Apple Silicon)**: Use MLX format for optimized performance
-* **Windows/Linux**: Use GGUF format
-
-### Important Notes
-
-* Start LM Studio before using with CodinIT
-* Keep LM Studio running in background
-* First model download may take several minutes depending on size
-* Models are stored locally after download
-
-### Troubleshooting
-
-1. If CodinIT can't connect to LM Studio:
-2. Verify LM Studio server is running (check Developer tab)
-3. Ensure a model is loaded
-4. Check your system meets hardware requirements
\ No newline at end of file