diff --git a/.gitignore b/.gitignore
index aa8e7f9..4e75ae3 100644
--- a/.gitignore
+++ b/.gitignore
@@ -16,7 +16,7 @@ CLAUDE.md
# production
/build
-
+*.csv
# misc
.DS_Store
*.pem
@@ -47,4 +47,4 @@ next-env.d.ts
.vercel
.roo
.wrangler
-.claude
\ No newline at end of file
+.claude
diff --git a/README.md b/README.md
index 4a04067..055c983 100644
--- a/README.md
+++ b/README.md
@@ -1,157 +1,43 @@
-# CodinIT.dev Docs
+# Mintlify Starter Kit
-This repository contains the official documentation for **CodinIT**, an AI-powered full-stack development platform that revolutionizes how developers build applications with local and cloud AI models.
+Use the starter kit to get your docs deployed and ready to customize.
-## 🚀 What is CodinIT?
+Click the green **Use this template** button at the top of this repo to copy the Mintlify starter kit. The starter kit contains examples of:
-CodinIT is a comprehensive development environment that integrates AI assistance throughout the entire development workflow. It supports 19+ AI providers including OpenAI, Anthropic, Google, DeepSeek, and more, offering:
+- Guide pages
+- Navigation
+- Customizations
+- API reference pages
+- Use of popular components
-- **Smart Code Generation**: AI-powered code completion and generation
-- **Full-Stack Development**: Built for Node.js full-stack applications
-- **Multiple AI Providers**: Connect with your preferred AI models
-- **Integrated Tools**: Terminal, file management, and deployment
-- **Local Model Support**: Run AI models locally with Ollama and LM Studio
-- **Enterprise Security**: Bank-level security and compliance
+**[Follow the full quickstart guide](https://starter.mintlify.com/quickstart)**
-## 📚 Documentation Structure
+## Development
-This documentation site is built with [Mintlify](https://mintlify.com) and covers:
+Install the [Mintlify CLI](https://www.npmjs.com/package/mint) to preview your documentation changes locally. To install, use the following command:
-- **Getting Started**: Quickstart guides and installation
-- **Features**: Development tools, AI integration, and workflows
-- **Providers**: Configuration for 19+ AI providers
-- **Integrations**: Vercel, Netlify, Supabase, and Git
-- **MCP Protocol**: Extending capabilities with custom tools
-- **Comparisons**: How CodinIT compares to other platforms
-
-## 🛠️ Development Setup
-
-### Prerequisites
-
-- Node.js 18+
-- pnpm, npm, or yarn
-
-> **Important**: If you have a package named `mint` and a package named `mintlify` installed, you should uninstall `mintlify`.
->
-> 1. Uninstall the old package:
-> ```bash
-> npm uninstall -g mintlify
-> ```
->
-> 2. Clear your npm cache:
-> ```bash
-> npm cache clean --force
-> ```
->
-> 3. Reinstall the new package:
-> ```bash
-> npm i -g mint
-> ```
-
-### Local Development
-
-1. **Clone the repository**
- ```bash
- git clone https://github.com/codinit-dev/docs.git
- cd docs
- ```
-
-2. **Install dependencies**
- ```bash
- # Using pnpm (recommended)
- pnpm install
-
- # Or using npm
- npm install
-
- # Or using yarn
- yarn install
- ```
-
-3. **Start the development server**
- ```bash
- # Using pnpm
- pnpm dev
-
- # Or using npm
- npm run dev
-
- # Or using yarn
- yarn dev
- ```
-
-4. **Open your browser** to `http://localhost:3000`
-
-### Building for Production
-
-```bash
-# Build the documentation site
-pnpm build
-
-# Preview the production build
-pnpm preview
+```
+npm i -g mint
+```
-## 📝 Contributing
-
-We welcome contributions to improve the documentation! Here's how you can help:
-
-### Content Contributions
-
-1. **Fork** this repository
-2. **Create a feature branch**: `git checkout -b feature/your-feature-name`
-3. **Make your changes** to the MDX files in the appropriate directories
-4. **Test your changes** locally with `pnpm dev`
-5. **Commit your changes** following conventional commit format
-6. **Push to your fork** and create a **Pull Request**
-
-### Documentation Guidelines
-
-- Use clear, concise language
-- Include code examples where helpful
-- Follow the existing MDX structure and component usage
-- Test all links and ensure they're working
-- Use proper heading hierarchy (H1 → H2 → H3)
-
-### File Structure
+Run the following command at the root of your documentation, where your `docs.json` is located:
+```
-docs/
-├── index.mdx # Homepage
-├── quickstart.mdx # Getting started guide
-├── features/ # Feature documentation
-├── providers/ # AI provider guides
-├── integrations/ # Third-party integrations
-├── essentials/ # Core functionality
-├── mcp/ # MCP protocol docs
-├── comparisons/ # Platform comparisons
-├── running-models-locally/ # Local AI setup
-└── assets/ # Images, icons, and media
+mint dev
+```
-## 🔧 Configuration
-
-The documentation is configured through `docs.json`:
-
-- **Theme**: Aspen theme with custom colors
-- **Navigation**: Organized into logical tabs and groups
-- **SEO**: Optimized for search engines
-- **Integrations**: Telemetry and analytics enabled
+View your local preview at `http://localhost:3000`.
-## 📄 License
+## Publishing changes
-This documentation is part of the CodinIT project. See the main project [LICENSE](LICENSE) for details.
+Install our GitHub app from your [dashboard](https://dashboard.mintlify.com/settings/organization/github-app) to propagate changes from your repo to your deployment. Changes are deployed to production automatically after pushing to the default branch.
-## 🌐 Links
+## Need help?
-- **CodinIT App**: [codinit.dev](https://codinit.dev)
-- **Documentation**: [codinit.dev/docs](https://codinit.dev/docs)
-- **GitHub Repository**: [github.com/Gerome-Elassaad/codinit-app](https://github.com/Gerome-Elassaad/codinit-app)
-- **Download**: [codinit.dev/download](https://codinit.dev/download)
-- **Blog**: [codinit.dev/blog](https://codinit.dev/blog)
+### Troubleshooting
-## 📞 Support
+- If your dev environment isn't running: Run `mint update` to ensure you have the most recent version of the CLI.
+- If a page loads as a 404: Make sure you are running in a folder with a valid `docs.json`.
-- **Issues**: [GitHub Issues](https://github.com/Gerome-Elassaad/codinit-app/issues)
-- **Discussions**: [GitHub Discussions](https://github.com/Gerome-Elassaad/codinit-app/discussions)
-- **Community**: Join our Discord community for real-time help
+### Resources
+- [Mintlify documentation](https://mintlify.com/docs)
diff --git a/assets/gifs/ollama-model-grab.gif b/assets/gifs/ollama-model-grab.gif
index 3eb5df1..7d74d61 100644
Binary files a/assets/gifs/ollama-model-grab.gif and b/assets/gifs/ollama-model-grab.gif differ
diff --git a/assets/gifs/prompting-dark.gif b/assets/gifs/prompting-dark.gif
index 538d9bd..0e13447 100644
Binary files a/assets/gifs/prompting-dark.gif and b/assets/gifs/prompting-dark.gif differ
diff --git a/assets/gifs/prompting-light.gif b/assets/gifs/prompting-light.gif
index b7fab6d..980f11b 100644
Binary files a/assets/gifs/prompting-light.gif and b/assets/gifs/prompting-light.gif differ
diff --git a/assets/videos/walkthrough.mp4 b/assets/videos/walkthrough.mp4
index 97ad681..59e361f 100644
Binary files a/assets/videos/walkthrough.mp4 and b/assets/videos/walkthrough.mp4 differ
diff --git a/changelog.mdx b/changelog.mdx
new file mode 100644
index 0000000..d95007e
--- /dev/null
+++ b/changelog.mdx
@@ -0,0 +1,2445 @@
+---
+title: "Changelog"
+description: "Track the latest CodinIT releases, feature updates, bug fixes, and improvements with detailed changelogs and version history."
+---
+
+New releases and improvements
+
+## January 4, 2026 - v1.2.0
+
+
+
+### Built-in Tools Infrastructure
+
+- Added comprehensive built-in tools system with 6 core tools:
+ - ReadFile: Intelligent file reading with smart truncation for large files
+ - LSRepo: Repository file listing with glob pattern support
+ - GrepRepo: Regex search across codebase with file filtering
+ - SearchWeb: Web search with Vercel ecosystem first-party docs support
+ - FetchFromWeb: Full web page content fetching and parsing
+ - TodoManager: Structured project task management
+- Implemented tool registry system for dynamic tool management
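A registry for dynamic tool management can be sketched roughly as below. This is an illustrative assumption, not CodinIT's actual implementation: the `Tool` interface and `ToolRegistry` class names, and the `register`/`get`/`list` methods, are hypothetical.

```typescript
// Hypothetical sketch of a tool registry for dynamic tool management.
// Names here (Tool, ToolRegistry) are illustrative, not the real API.
interface Tool {
  name: string;
  description: string;
  execute(input: string): Promise<string> | string;
}

class ToolRegistry {
  private tools = new Map<string, Tool>();

  // Register a tool once; duplicate names are rejected.
  register(tool: Tool): void {
    if (this.tools.has(tool.name)) {
      throw new Error(`Tool already registered: ${tool.name}`);
    }
    this.tools.set(tool.name, tool);
  }

  // Look up a tool by name for dynamic dispatch.
  get(name: string): Tool | undefined {
    return this.tools.get(name);
  }

  // List registered tool names, e.g. for autocomplete or prompts.
  list(): string[] {
    return [...this.tools.keys()];
  }
}

const registry = new ToolRegistry();
registry.register({
  name: "ReadFile",
  description: "Read a file with smart truncation",
  execute: (path) => `contents of ${path}`,
});
```

The point of the pattern is that new built-in tools can be added without touching the dispatch code: callers resolve tools by name at runtime.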
+
+### CodinIT Pro Subscription System
+
+- Added Pro subscription state management with API verification
+- Integrated Pro features in chat UI: Web Search and Lazy Edits toggles
+- Added Pro subscription management UI in settings
+- Implemented credit tracking and tier management
+
+### MCP (Model Context Protocol) Integration
+
+- Added comprehensive MCP server support (stdio, SSE, streamable-HTTP)
+- Implemented MCP configuration validation and retry endpoints
+- Added MCP marketplace with card grid layout and Pro tier indicators
+- Enhanced external tool connectivity through standardized protocol
+
+### Electron Desktop Enhancements
+
+- Added local file saving functionality for Electron
+- Implemented IPC communication for project initialization
+- Enhanced auto-update system with improved error handling
+
+### Project Management
+
+- Added project name utility for unique project identification
+- Implemented project name persistence in chat history
+- Enhanced workbench with improved file handling
+
+### UI/UX Improvements
+
+#### Settings Redesign
+
+- Complete SettingsTab redesign with framer-motion animations
+- Enhanced ControlPanel with new depth tokens and improved layout
+- Redesigned ConnectionsTab with horizontal navigation
+- Updated all settings components with modernized styling
+
+#### Chat Interface Enhancements
+
+- Redesigned send button with circular shape and upward arrow icon
+- Enhanced thought box with collapsible animation
+- Implemented progressive step reveal animations
+- Improved message spacing and border radius for better readability
+- Added mixed file and tool reference support in autocomplete
+
+#### Visual Design System
+
+- Migrated to modern Zinc color palette with improved contrast
+- Implemented depth token system for consistent visual hierarchy
+- Enhanced workbench header with synchronized width transitions
+- Updated artifact components with modernized card layouts
+
+#### Marketplace & Navigation
+
+- Redesigned MCP marketplace with card grid layout
+- Added Pro tier indicators throughout the interface
+- Replaced sidebar navigation with horizontal tab navigation
+- Enhanced action buttons with rounded pill design
+
+### Performance improvements
+
+#### Memory Management
+
+- Fixed UI freezes with requestIdleCallback yielding implementation
+- Added 1MB streaming content limit to prevent memory overflow
+- Implemented concurrent file writes with queue system
+- Added action queue limits to prevent memory overflow
+
+#### File System Optimizations
+
+- Implemented batch Nanostores updates in FilesStore
+- Increased file watch buffer to 1000ms for better performance
+- Enhanced file locking mechanisms
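The batching idea behind these store optimizations can be sketched as follows. This is a minimal, self-contained illustration — not the actual Nanostores-based FilesStore code — with hypothetical names (`BatchedFileStore`, `setFile`, `flush`) and the 500ms delay taken from the notes above:

```typescript
// Illustrative sketch of batched store updates: rapid successive file
// changes are buffered and delivered to subscribers in a single batch.
type Listener = (changed: Map<string, string>) => void;

class BatchedFileStore {
  private pending = new Map<string, string>();
  private timer: ReturnType<typeof setTimeout> | null = null;
  private listeners: Listener[] = [];

  constructor(private batchDelayMs = 500) {}

  subscribe(fn: Listener): void {
    this.listeners.push(fn);
  }

  // Record a change; notification is deferred, so many writes in a
  // short window produce one update instead of one per write.
  setFile(path: string, content: string): void {
    this.pending.set(path, content);
    if (this.timer === null) {
      this.timer = setTimeout(() => this.flush(), this.batchDelayMs);
    }
  }

  // Apply all buffered changes at once and notify subscribers.
  flush(): void {
    if (this.timer !== null) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    if (this.pending.size === 0) return;
    const batch = this.pending;
    this.pending = new Map();
    for (const fn of this.listeners) fn(batch);
  }
}
```

Batching like this trades a small delay in UI updates for far fewer re-renders during bursts of file-system events.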
+
+### Bug fixes
+
+- Improved Hyperbolic API error handling with detailed debugging
+- Fixed workbench positioning to use relative flow layout
+- Removed unnecessary z-index conflicts
+- Fixed header positioning and transparency issues
+- Corrected border opacity in electron title bar
+- Prevented duplicate yml file uploads in release workflow
+- Fixed duplicate release creation and type conflicts
+- Enhanced update checker with reduced API calls
+
+### Test Suite
+
+- Added comprehensive tests for diff utility functions
+- Added tests for LLM message processing functions
+- Implemented console mocking and cleanup in test suites
+- Added storage unit tests and MCP-related tests
+
+### Code Quality Enhancements
+
+- Fixed various linting errors across SCSS and TypeScript files
+- Improved error handling and logging throughout the codebase
+- Enhanced type definitions and removed unused imports
+- Centralized repository configuration management
+
+### Build & Dependencies
+
+- Added @types/node dependency for better TypeScript support
+- Updated UnoCSS configuration with new color palette
+- Enhanced electron build process with improved IPC handlers
+
+### Repository Management
+
+- Added backend folder to gitignore
+- Centralized version and repository configuration
+- Fixed duplicate release workflow issues
+- Updated GitHub funding username
+
+
+
+## December 30, 2025
+
+
+
+### New features
+
+- Added Diff Approval Workflow for reviewing AI-generated file changes
+ - Visual inline diff comparison with syntax highlighting
+ - Approve or reject changes before applying to files
+ - Side-by-side before/after view
+ - New action status: "awaiting-approval"
+- Added Live Action Console for real-time command monitoring
+ - Streaming command output in floating console
+ - Progress tracking for long-running tasks
+ - Command context display
+ - Real-time updates without page refresh
+- Added PromptSelector component for choosing system prompts
+ - Integrated into Chatbox for easy prompt switching
+ - Removed "Lazy Edits" button in favor of prompt selector
+ - Uses PromptLibrary for centralized prompt management
+- Added comprehensive MCP type definitions and schemas
+ - Separated type definitions into dedicated types/mcp file
+ - Refactored mcpService to use centralized types
+ - Updated MCP store to use types from types/mcp
+- Added MCP config validation and retry API endpoints
+- Added Electron IPC handlers for project initialization and file saving
+ - Project initialization IPC handlers in preload
+ - Local file saving functionality in workbench store
+ - Update IPC event handlers in electron preload
+- Added project name utility for generating unique project identifiers
+ - Project name retrieval in action runner
+ - Project name integration in chat history persistence
+- Added Electron utility functions for IPC communication
+- Added storage unit tests with comprehensive test coverage
+
+### Performance improvements
+
+- Fixed critical performance regressions causing UI freezes and crashes (PR #60)
+ - Introduced concurrency limits for file writes (max 5 concurrent)
+ - Implemented batched state updates for file operations
+ - Added file write queue system to prevent memory overflow
+ - Optimized action queue with max 50 pending actions limit
+ - Added yield to main thread for better responsiveness
+ - Increased action sample interval to 500ms (5x reduction in update frequency)
+ - Increased file watch buffer to 1000ms for better event batching
+ - Implemented batch Nanostores updates in FilesStore (500ms batching)
+ - Added 1MB streaming content limit to prevent memory overflow
+ - Added requestIdleCallback yielding to prevent UI freezes
+- Improved Windows file system compatibility
+- Enhanced action runner with live output monitoring
+- Optimized file content streaming with 1MB chunk size limit
+- Improved file watcher buffer timing (300ms → 1000ms)
+- Added batch update scheduling for file changes (500ms delay)
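A concurrency limit like the "max 5 concurrent file writes" above is typically a small semaphore-style queue. The sketch below is a generic assumption of how such a limiter works, not the project's actual code; `TaskQueue` and `run` are hypothetical names:

```typescript
// Illustrative concurrency-limited task queue: at most `limit` tasks
// run at once; the rest wait their turn in FIFO order.
class TaskQueue {
  private active = 0;
  private waiting: Array<() => void> = [];

  constructor(private limit = 5) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    if (this.active >= this.limit) {
      // Park this caller until a running task releases a slot.
      await new Promise<void>((resolve) => this.waiting.push(resolve));
    }
    this.active++;
    try {
      return await task();
    } finally {
      this.active--;
      // Wake exactly one waiter per completion, preserving the limit.
      this.waiting.shift()?.();
    }
  }
}
```

Capping concurrent writes bounds memory held by in-flight buffers, which is how a queue like this prevents the overflow crashes described above.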
+
+### Bug fixes
+
+- Fixed Ollama model selection issues
+ - Models now appear correctly in provider dropdown after configuration
+ - Fixed base URL configuration retrieval from provider settings
+ - Improved error handling and Docker host mapping
+ - Added proper validation for Ollama API responses
+- Fixed LMStudio provider error handling and Docker compatibility
+- Fixed app freezes during multi-file generation
+- Fixed terminal output stalling issues
+- Fixed browser crashes ("Aw, Snap!") during intensive operations
+- Fixed console mocking and cleanup in test suites
+ - Added console mocking to diff spec tests
+ - Added console mocking to message parser spec tests
+ - Added console mocking to storage tests
+ - Added console mocking to Markdown tests
+ - Added console mocking to toolMentionParser tests
+ - Added console mocking to diff tests
+ - Added console mocking to code validator tests
+ - Added console mocking to llm utils tests
+ - Added console mocking to LLMManager tests
+- Fixed unused imports across multiple API endpoints
+ - Removed unused imports from api.system.diagnostics
+ - Removed unused imports from api.check-env-key
+ - Removed unused imports from api.github-template
+ - Removed unused import from index route
+
+### UI improvements
+
+- Updated toast component font weight for better readability
+- Refactored editor styles with improved layout and scrollbar customization
+- Updated dialog component styles for improved appearance
+- Added new CSS variable for editor scrollbar thumb color
+- Updated Fazier badge to rank-2 variants (dark and light modes)
+- Redesigned SettingsTab with framer-motion animations and improved UI layout
+- Refactored UpdateTab with improved UI and code simplification
+- Updated BaseChat styles for improved layout spacing
+- Updated BackgroundRays styles with improved positioning and effects
+
+### Prompt system improvements
+
+- Refactored prompt registry to remove base prompt and update labels
+- Updated stream-text to use PromptLibrary for prompt retrieval
+- Updated Chat to use default prompt from PromptLibrary
+- Added promptId and setPromptId props to BaseChat component
+- Simplified optimized prompt by removing duplicate mobile instructions
+- Simplified fine-tuned prompt mobile instructions to focus on Expo SDK 52
+- Updated api.chat to use promptId from request
+
+### Code cleanup
+
+- Removed system-prompt-universal.ts file
+- Removed legacy prompts.ts file
+- Removed base-prompt.ts file
+- Removed ConnectionForm component file
+- Refactored ConnectionsTab to remove inline ConnectionForm component
+- Removed unused embed image SVG files
+
+### Developer experience
+
+- Refactored useUpdateCheck hook with improved error handling
+- Updated GitHub funding username to codinit-dev
+- Updated README with better formatting and new badge links
+- Added 21st.dev SVG badge
+- Fixed linting errors in SCSS files
+- Fixed linting errors and formatted code across codebase
+- Removed unused imports throughout the application
+- Updated UnoCSS config with new color palette
+- Updated dark mode icon SVG
+- Added @types/node dependency
+- Updated pnpm lock file with new dependencies
+
+
+
+## December 20, 2025
+
+
+
+### Bug fixes
+
+- Fixed LMStudio provider error handling and Docker host mapping
+- Fixed LMStudio baseUrl configuration retrieval from provider settings
+- Fixed Ollama provider error handling and Docker compatibility
+- Fixed ESLint issues in LMStudio provider
+- Fixed Electron build configuration improvements
+
+### Improvements
+
+- Improved provider error messages for better debugging
+- Enhanced Docker compatibility for local model providers
+
+
+
+## December 19, 2025
+
+
+
+### Improvements
+
+- Version bump with stability improvements
+- Provider configuration enhancements
+
+
+
+## December 14, 2025
+
+
+
+### New AI models
+
+- Added GPT-5.1, GPT-5 Mini, and GPT-5 Nano models to OpenAI provider
+ - GPT-5.1: 128k context, 16k output limit (optimized for coding and agentic tasks)
+ - GPT-5 Mini: 128k context, 8k output limit (faster, cheaper alternative)
+ - GPT-5 Nano: 128k context, 4k output limit (fastest, most cost-effective)
+- Added GPT-5.2 Pro, GPT-5.2 Thinking, and GPT-5.2 Instant models to OpenRouter provider
+- Added Claude Opus 4.5 model to Anthropic provider with enhanced capabilities
+- Added Claude 4.5 models (Opus, Sonnet, Haiku) to Amazon Bedrock provider
+
+### UI and branding improvements
+
+- Added Fazier badge to header with theme-aware variants (dark and light modes)
+- Added local SVG assets for Fazier badge (embed_image_dark.svg, embed_image_light.svg)
+- Standardized Electron app name to "codinit-dev" across all platforms (macOS, Windows, Linux)
+- Updated README with improved content structure and Fazier badge
+- Replaced Product Hunt header badge with Fazier badge
+- Updated default model configuration to leverage new GPT-5 series
+
+### Bug fixes
+
+- Fixed Fazier badge styling to prevent alt text display issues
+- Fixed badge width constraints to fit header layout properly
+- Fixed mixed static/dynamic import warning for CodeMirrorEditor component
+- Fixed Vite configuration flags (v3_singleFetch) for better compatibility
+
+### Bundle optimization
+
+- Removed unused UI components to reduce bundle size:
+ - TypingAnimation component
+ - WorkspaceModal component
+ - GradientCard component
+ - Slider component
+ - Separator component
+ - SearchResultItem component
+ - UI CodeBlock component (duplicate)
+ - ElectronWindowControls component (unused)
+ - CustomTitleBar component (unused)
+ - CreateBranchDialog component (unused)
+- Removed duplicate service-status directory and components
+- Removed duplicate TextShimmer.tsx and demo.tsx files
+- Removed social preview image (social_preview_index.png)
+- Removed unnecessary component exports from ui/index.ts
+
+
+
+
+ ## Bug fixes
+
+ * Fixed error handling improvements across providers
+ * Fixed release workflow updates
+
+
+
+ ## New features
+
+ * Added Diff Approval Workflow with new `DiffApprovalDialog` component
+ * Added Live Action Console with `LiveActionAlert` component
+ * Added inline diff comparison display for file changes
+ * Added visual feedback for real-time action output
+ * Added live streaming of command execution output
+ * Added new action status: "awaiting-approval"
+ * Added fourth stream for live action monitoring
+
+ ## Improvements
+
+ * Enhanced `ActionRunner` with live output monitoring
+ * Extended `ContextAnnotation` with file categorization
+ * Refactored event logs and task manager tabs
+ * Improved settings organization
+ * Better visual feedback for user interactions
+
+ ## Removed features
+
+ * Removed legacy event logs from ControlPanel
+ * Removed old task manager tab implementation
+
+
+
+ ## Improvements
+
+ * Version update with bug fixes
+ * API update improvements
+
+
+
+ ## New features
+
+ * Tool mention autocomplete system with @ command functionality
+   * Type `@` in chatbox to trigger autocomplete for MCP tools and files
+   * Fuzzy search powered by Fuse.js for intelligent suggestions
+   * Support for both tool references (@tool-name) and file references (@file-path)
+   * Visual dropdown with descriptions and server information
+ * Code validation system with comprehensive test coverage
+   * New code-validator.ts utility for validating AI-generated code
+   * Test suite with 247 test cases for code validation
+   * Test suite with 229 test cases for message parser
+   * Test suite with 270 test cases for LLM utilities
+   * Test suite with 159 test cases for tool mention parser
+ * Product Hunt badge integration with theme-aware display
+ * GitHub star badge with dark mode support in header
+ * CONTRIBUTING.md with comprehensive contribution guidelines
+ * CODE_OF_CONDUCT.md following Contributor Covenant standards
+ * DiffView header component for improved diff visualization
+ * API Keys and Updates tabs to settings panel
+ * Unit test infrastructure for runtime, server, and utility modules
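The @-mention parsing behind the autocomplete feature above can be sketched as a small tokenizer. This is a hedged illustration under assumed names (`parseMentions`, `Mention`) — not the actual `toolMentionParser` — and it uses a simple heuristic (path separator or extension means "file") in place of real registry lookups:

```typescript
// Illustrative @-mention parser: extracts tool references (@tool-name)
// and file references (@path/to/file) from chat input.
interface Mention {
  raw: string;          // the full token, including "@"
  value: string;        // the token without "@"
  kind: "file" | "tool";
}

function parseMentions(text: string): Mention[] {
  const pattern = /@([\w./-]+)/g;
  const mentions: Mention[] = [];
  for (const match of text.matchAll(pattern)) {
    const value = match[1];
    mentions.push({
      raw: match[0],
      value,
      // Heuristic: anything with a path separator or a file
      // extension is treated as a file reference.
      kind: value.includes("/") || value.includes(".") ? "file" : "tool",
    });
  }
  return mentions;
}
```

In the real feature the extracted tokens would then be matched against the tool registry and the file tree (fuzzily, per the notes above) to populate the dropdown.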
+
+ ## Major improvements
+
+ * Migrated repository from Gerome-Elassaad/codinit-app to codinit-dev/codinit-dev organization
+ * Updated all repository references and links across documentation
+ * Refactored CodeMirror editor with enhanced functionality and better TypeScript support
+ * Enhanced workbench headers with diff view icon button
+ * Updated xAI provider with new Grok models and accurate token limits
+ * Improved Groq provider configuration and model definitions
+ * Updated provider icons to use actual brand logos instead of placeholders
+ * Enhanced TabManagement component for better settings navigation
+ * Improved file auto-save utility with more robust error handling
+ * Refactored workbench store with extended functionality (236 lines of additions)
+ * Updated Docker configuration to use lowercase repository names
+ * Enhanced switchable stream utility for better LLM response handling
+ * Improved ProgressIndicator component with better visual feedback
+
+ ## Bug fixes
+
+ * Fixed Docker container name compatibility (changed wrangler.toml to lowercase)
+ * Fixed model selection and provider dropdown functionality issues
+ * Fixed preview header dropdown menu buttons and reload button
+ * Fixed UpdateTab with manual refresh capability and improved UI structure
+ * Fixed live output callback wiring in workbench store
+ * Fixed security vulnerabilities in GitHub Actions workflows (security.yaml with 120 lines)
+ * Fixed context indicator accuracy for file categorization
+ * Fixed message parsing for better artifact extraction
+
+ ## Cleanup
+
+ * Removed walkthrough video file (walkthrough.mp4) to reduce repository size
+ * Removed unused 21st.jpeg image file
+ * Removed mock feature flags from ControlPanel
+ * Removed mock features hook and API files
+
+
+
+ ## New features
+
+ * Universal framework-agnostic system prompt for supporting all frameworks
+ * Removed framework restrictions (previously limited to React, Vue, etc.)
+ * Added comprehensive base prompt with universal coding standards
+ * System prompt supports any framework or library
+ * Universal tools.json configuration (207 lines)
+ * Chat validation system with typed error handling
+ * New chat-validation.ts module (186 lines) with Zod schemas
+ * Comprehensive validation for chat messages and requests
+ * Improved error messages with user-friendly guidance
+ * Artifact version management system (215 lines in artifact-versions.ts)
+ * File auto-save persistence with dual storage backup (238 lines)
+ * Automatic file persistence to prevent data loss
+ * Backup storage mechanism for reliability
+ * Context indicator component (173 lines) for showing file categorization
+ * Progress indicator component (187 lines) for better user feedback
+ * Provider model selector component (294 lines) for streamlined model selection
+ * Test artifact component (234 lines) for development testing
+ * Prompt registry system (220 lines) for managing AI prompts
+ * LangSmith tracing integration
+ * Configuration documentation in .env.example
+ * Setup script for LangSmith tracing
+ * Comprehensive LangSmith documentation
+
+ ## Design system overhaul
+
+ * Complete design system overhaul for dark/light mode compatibility
+ * Fixed dark mode depth contrast and dropdown visibility
+ * Updated SCSS files for proper theme support
+ * Improved button contrast across all themes
+ * Enhanced tab button visibility in both modes
+ * Workbench layout redesign
+ * Repositioned workbench below headers on chat page
+ * Restored original side-by-side layout (chat and workbench)
+ * Fixed chat/workbench component placement with responsive layout
+ * Enhanced prompt engineering system
+ * Detects design schemes automatically
+ * Builds complete component specifications
+ * High-priority quality standards added to AI prompts
+
+ ## Component improvements
+
+ * Improved context optimization in chat API (33 additions)
+ * Updated ColorSchemeDialog with better theme compatibility
+ * Enhanced Artifact component with better state management (54 modifications)
+ * Improved AssistantMessage rendering (133 line reduction through optimization)
+ * Refactored BaseChat for better performance and cleaner code
+ * Updated Chatbox with improved model selection (119 additions)
+ * Enhanced Markdown rendering with additional features
+ * Improved EditorPanel with better preview integration (49 additions)
+
+ ## Bug fixes
+
+ * Fixed dark/light mode compatibility issues across entire application
+ * Fixed SCSS variable usage for theme-aware colors
+ * Corrected CSS custom properties for proper inheritance
+ * Updated component styles to respect theme context
+ * Fixed icon button styling for cross-theme compatibility
+ * Fixed Vite dynamic import warnings by using static imports
+ * Fixed infinite message loop in AI artifact generation
+ * Fixed React hooks violation in ProgressIndicator component
+ * Fixed z-index layering for dropdowns above Preview component
+ * Fixed PortDropdown parent z-index stacking issue
+ * Fixed GitHub push button functionality in PreviewHeader
+ * Fixed header z-index stacking issue in Electron builds
+ * Fixed Package.json rules to support both new and existing projects
+ * Fixed styling tab layout with inline styles replacing dynamic classes
+
+ ## Cleanup
+
+ * Removed React Email package (no longer needed)
+ * Removed Cohere provider from LLM registry
+ * Removed framework restrictions from AI prompts
+ * Removed duplicate ArtifactsStore (consolidated artifact management)
+ * Removed unused prompt template files (consolidated into TypeScript system)
+
+
+
+ ## New Electron features
+
+ * VSCode-style title bar for Electron desktop application
+ * ElectronTitleBar component (174 lines) with native window controls
+ * ElectronWindowControls component (134 lines) for minimize/maximize/close
+ * Custom title bar matching VSCode's sleek design
+ * Prominent window control buttons for better UX
+ * Electron development script (electron-dev.mjs, 81 lines)
+ * Automated Electron development workflow
+ * Hot reload support for faster development
+
+ ## Header improvements
+
+ * Refactored all headers to match browser layout with seamless window controls
+ * Updated Header component for Electron integration
+ * Modified ChatHeader for unified header system
+ * Enhanced header design for sleeker appearance
+ * Removed header border for seamless workbench integration
+ * Updated settings constants to include Electron-specific options
+ * Improved BaseChat integration with Electron environment
+
+ ## Bug fixes
+
+ * Fixed header alignment in Electron desktop builds
+ * Fixed window controls positioning and functionality
+ * Fixed title bar display across different platforms
+
+
+
+ ## New features
+
+ * Custom VSCode-style title bar for Electron desktop application
+ * CustomTitleBar component (159 lines) with native window controls
+ * Seamless window management matching VSCode design
+ * Platform-specific window control behavior
+ * SearchInput UI component (14 lines) for improved search functionality
+ * Demo component (25 lines) for UI development and testing
+ * Automatic release notes generation in GitHub Actions workflow
+ * Automatic tagging based on package.json version
+ * Debug logging to Electron build workflow for better troubleshooting
+
+ ## Improvements
+
+ * Updated Electron window management and cookie bridge (10 additions)
+ * Enhanced electron cookie synchronization
+ * Improved IPC communication
+ * Modified LLM provider configurations:
+ * Cohere provider updated with 107 line changes
+ * DeepSeek provider optimized (51 line reduction)
+ * Enhanced provider error handling across all providers
+ * Improved MCP service with 86 line changes for better reliability
+ * Enhanced design scheme store with 104 line changes for better theme management
+ * Updated settings store with 41 line changes for improved configuration handling
+ * Modernized Electron entitlements for macOS (10 line changes)
+ * Improved GitHub Actions artifact collection across all platforms
+ * Updated main process IPC handlers (41 line changes in electron/main/index.ts)
+ * Enhanced Chat.client.tsx with 14 line improvements
+
+ ## Bug fixes
+
+ * Fixed ESLint consistent-return error in CustomTitleBar component
+ * Fixed Electron build compatibility issues
+ * Fixed GitHub Actions release workflow artifact collection
+ * Fixed provider selection and error handling issues
+ * Fixed title bar display across different platforms
+
+ ## Security and cleanup
+
+ * Removed Clerk authentication integration (complete rollback)
+ * Removed all Clerk-related components and configuration
+ * Removed user service (124 lines)
+ * Removed user store (36 lines)
+ * Removed email verification route (55 lines)
+ * Database migration components and utilities:
+ * Removed schema.ts (7 lines)
+ * Removed validation utilities (90 lines)
+ * Removed rate limiting utilities (147 lines)
+ * Removed CSRF protection utilities (83 lines)
+ * Removed MCP retry and validation API routes (72 total lines)
+ * Removed comprehensive coding rules prompt (785 lines, consolidated elsewhere)
+ * Removed provider validation test suite (160 lines)
+ * Removed changelog.md duplicate file (35 lines)
+ * Removed Amplitude and GTM provider wrapper overhead (4 lines combined)
+ * Removed AGENTS.md file (55 lines, moved to different location)
+ * Removed authentication-related security utilities as part of Clerk rollback
+ * Simplified security model for desktop application
+
+
+
+ ## Improvements
+
+ * Removed Windows code signing config for unsigned distribution
+ * Simplified build process for easier distribution
+
+
+
+ ## Bug fixes
+
+ * Fixed workflow failure by reverting to static OS matrix
+ * Fixed CI/CD pipeline issues and improved reliability
+ * Fixed broken Electron build process
+ * Fixed build configuration issues
+
+
+
+ ## New features
+
+ * Feature updates via pull request merge
+ * Enhanced functionality
+
+
+
+ ## Improvements
+
+ * Updated application icons
+ * Fixed Electron model loading issues
+ * Improved desktop app stability
+ * Version bump with improvements
+ * Enhanced features
+
+
+
+ ## New features
+
+ * Unified header system for seamless chat and workbench integration
+ * New ChatHeader component (31 lines) with consistent layout
+ * Unified header container aligning chat and workbench
+ * Removed redundant header rendering from Workbench component
+ * Complete MCP (Model Context Protocol) component redesign:
+ * MCPDialog component (73 lines) for modal interactions
+ * MCPContent component (92 lines) for content management
+ * MCPSidebar component (72 lines) for navigation
+ * MCPHistoryTab (44 lines) for tracking MCP operations
+ * MCPIntegrationsTab (265 lines) for server management
+ * MCPMarketplaceTab (68 lines) for discovering MCP servers
+ * Enhanced MCPServerCard with new styling module (148 lines)
+ * Mocha dark theme for code editor (288 lines in mocha-dark.scss)
+ * MCP types definition file (75 lines in app/types/mcp.ts)
+ * Cloudflare deployment source type added to the deployment system
+ * Python icon asset (265 lines SVG)
+ * Theme-aware favicon support with dynamic switching
+ * Theme-aware icon support for ChatHeader
+ * Moonshot AI provider integration added to the LLM registry
+ * Full provider implementation with model support
+ * Enhanced provider registration logging
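A theme-aware favicon of the kind described above typically listens for `prefers-color-scheme` changes and swaps the `<link rel="icon">` element. The sketch below illustrates the idea; the icon paths and function names are assumptions, not the actual CodinIT implementation:

```typescript
// Pure helper: pick an icon path for the active color scheme.
// Paths are illustrative; the real asset names may differ.
export function faviconFor(dark: boolean): string {
  return dark ? "/icons/favicon-dark.svg" : "/icons/favicon-light.svg";
}

// Browser-only wiring: swap the <link rel="icon"> when the OS theme flips.
export function installThemeAwareFavicon(): void {
  if (typeof document === "undefined" || typeof window === "undefined") {
    return; // no-op during SSR
  }
  const query = window.matchMedia("(prefers-color-scheme: dark)");
  const apply = (dark: boolean) => {
    let link = document.querySelector<HTMLLinkElement>('link[rel="icon"]');
    if (!link) {
      link = document.createElement("link");
      link.rel = "icon";
      document.head.appendChild(link);
    }
    link.href = faviconFor(dark);
  };
  apply(query.matches);
  query.addEventListener("change", (e) => apply(e.matches));
}
```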
+
+ ## Dark theme redesign
+
+ * Complete dark theme redesign with neutral gray palette
+ * Removed blue undertone for pure neutral gray
+ * Updated all CSS variables in variables.scss (109 line changes)
+ * Applied neutral gray theme across all components:
+ * Markdown styling updated for neutral gray
+ * Chat components updated for neutral gray
+ * Code blocks updated for neutral gray
+ * Resize handles updated for neutral gray
+ * Toast notifications updated for neutral gray
+ * Dialog colors updated for neutral gray
+ * Editor theme updated for neutral gray
+ * Enhanced dark mode with blue accents for interactive elements
+ * Blue accents added to settings sidebar
+ * Blue accents added to MCP buttons
+ * Switch button colors: blue for on, gray for off
+ * Updated all platform icons for Electron builds
+ * macOS icon (icon.icns): 9428 bytes → 4423 bytes
+ * Windows icon (icon.ico): 15219 bytes → 6703 bytes
+ * PNG icon: 16299 bytes → 32785 bytes
+
+ ## Component improvements
+
+ * Major MCP service overhaul (140 line additions)
+ * Improved error handling and logging
+ * Enhanced server connection management
+ * Better tool execution flow
+ * Refactored BaseChat component (170 line additions)
+ * Improved chat state management
+ * Better message handling
+ * Enhanced UI responsiveness
+ * Enhanced workbench client with cleanup (537 line reduction)
+ * Removed redundant code
+ * Improved performance
+ * Better organization
+ * Optimized provider implementations (significant line reductions):
+ * Amazon Bedrock: 293 line reduction
+ * Anthropic: 63 line reduction
+ * Google: 105 line reduction
+ * Groq: 168 line reduction
+ * OpenRouter: 191 line reduction
+ * OpenAI: 119 line reduction
+ * Together: 144 line reduction
+ * Mistral: 44 line reduction
+ * Updated provider environment variable handling
+ * Prioritize process.env in base provider for Electron compatibility
+ * Pass process.env to Remix context in Electron
+ * Enhanced error logging and diagnostics in LLMManager
+ * Improved update check system with better API handling (39 line changes in lib/api/updates.ts)
+ * Enhanced icon assets with updated SVG styling:
+ * Angular, Astro, Expo, Netlify, Nuxt, Qwik, React, Remix, Vite icons updated
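The environment-variable handling above, preferring `process.env` so the Electron build can find keys the serverless context lacks, reduces to a small resolver. The names below are illustrative, not the real base-provider API:

```typescript
// Resolve a provider API key, preferring process.env (populated in the
// Electron main process) over the Remix/serverless context env.
// Function and parameter names are assumptions for illustration.
type Env = Record<string, string | undefined>;

export function resolveApiKey(
  processEnv: Env,
  contextEnv: Env,
  key: string,
): string | undefined {
  // In the Electron build, process.env is the authoritative source;
  // the serverless context env is only a fallback for web deployments.
  return processEnv[key] ?? contextEnv[key];
}
```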
+
+ ## Bug fixes
+
+ * Fixed MCP service functionality and reliability issues
+ * Fixed provider selection in chatbox
+ * Fixed null provider error in Chat.client.tsx
+ * Fixed linting errors in BaseChat component
+ * Fixed Electron environment variable handling
+ * Fixed model dropdown issues in Electron app
+ * Fixed header alignment and layout issues
+ * Fixed switch button color states in dark mode
+
+ ## Cleanup
+
+ * Removed MCP integration panel (411 lines, replaced with new architecture)
+ * Removed MCPToolApproval component (263 lines, moved to new location)
+ * Removed MCPToolRegistry component (292 lines, moved to new location)
+ * Removed MCPIntegrationPanel.module.scss (347 lines, redesigned)
+ * Removed legacy MCP JSON configuration file
+ * Removed test files (unit tests for base provider, manager, and diff utilities)
+ * Removed AGENTS.md from root (55 lines)
+ * Removed Cloudflare deploy API route (241 lines, consolidated)
+ * Removed duplicate IPC handlers from Electron preload
+ * Removed redundant await keywords from createWindow calls
+
+
+
+ ## New deployment features
+
+ * Cloudflare Pages deployment integration (complete system)
+ * CloudflareDeploy.client.tsx component (193 lines) for deployment interface
+ * CloudflareDeploymentLink.client.tsx component (116 lines) for deployment status
+ * CloudflareConnection.tsx component (282 lines) for account management
+ * Cloudflare store (114 lines in cloudflare.ts) for state management
+ * Cloudflare deploy API endpoint (241 lines in api.cloudflare-deploy.ts)
+ * Cloudflare types definition (24 lines)
+ * Cloudflare SVG icon asset
+ * Enhanced deployment system:
+ * DeployDialog enhancements (112 line additions) supporting multiple platforms
+ * Vercel icon assets (dark and light variants)
+ * Settings dropdown functionality in PreviewHeader
+ * Deployment icon standardization across platforms
+
+ ## Control panel improvements
+
+ * ControlPanelDialog component (71 lines) for modal display
+ * ControlPanelContent component (143 lines) for organized content
+ * ControlPanelSidebar component (74 lines) for navigation
+ * useControlPanelDialog hook (46 lines) for state management
+ * Enhanced UI components:
+ * StatsDialog component updates (8 line changes)
+ * CreateBranchDialog component updates (3 line additions)
+ * Design scheme enhancements (130 line additions in design-scheme.ts)
+ * Extended design scheme types
+ * Better color palette management
+ * Improved theme configuration
+ * Dialog styling system (45 lines in dialog.scss)
+ * changelog.md file (35 lines) for tracking changes
+
+ ## Header redesign
+
+ * Major PreviewHeader redesign (226 line changes)
+ * Simplified left-side button layout
+ * Unified button styling and sizing
+ * Standardized all button sizes to w-8 h-8
+ * Improved deployment icon display
+ * Enhanced dropdown functionality
+ * CodeModeHeader simplification (68 line reduction)
+ * Removed terminal button for cleaner interface
+ * Streamlined layout
+ * Improved consistency with PreviewHeader
+
+ ## Component improvements
+
+ * Enhanced update check system:
+ * Update API improvements (39 line changes in lib/api/updates.ts)
+ * Update route enhancements (31 line additions in api.update.ts)
+ * useUpdateCheck hook improvements (17 line changes)
+   * Client-side update check with graceful 429 rate-limit handling
+ * ColorSchemeDialog expansion (159 line additions)
+ * Enhanced color scheme selection
+ * Improved preview functionality
+ * Better theme management UI
+ * HeaderActionButtons enhancements (51 line additions)
+ * Better button organization
+ * Improved visual consistency
+ * Enhanced dropdown support
+ * Updated deployment icons to use local SVG assets
+ * Cloudflare icon using local asset
+ * Vercel icons with theme variants
+ * Better icon consistency
+ * IconButton improvements (5 line changes)
+ * ControlPanel updates (4 line changes)
+ * ConnectionsTab enhancements (4 line additions)
+ * GitHubAuthDialog improvements (8 line changes)
+ * RepositorySelectionDialog refactor (79 line changes)
+ * UpdateTab improvements (28 line changes)
+ * AGENTS.md updates (14 line changes)
+ * Package updates (4 line changes in package.json)
+
+ ## Bug fixes
+
+ * Fixed client-side update check to handle 429 rate limiting gracefully
+ * Fixed linting error in useUpdateCheck hook
+ * Fixed API route 429 rate limiting status code handling
+ * Fixed dark mode color errors and inconsistencies
+ * Fixed IconButton border centering issues
+ * Fixed settings icon button dropdown functionality
+ * Fixed deployment icon display in PreviewHeader and DeployDialog
+ * Fixed button styling consistency across workbench headers
+ * Fixed action runner type reference (2 line fix)
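The graceful 429 handling mentioned above usually means honoring a `Retry-After` header instead of surfacing an error. A minimal sketch, with the default backoff value as an assumption:

```typescript
// Decide how long to wait before retrying an update check.
// Honors a Retry-After header (seconds) on 429 responses and falls
// back to a default backoff; the 60s default is illustrative.
export function retryDelayMs(
  status: number,
  retryAfter: string | null,
  defaultMs = 60_000,
): number | null {
  if (status !== 429) return null; // not rate limited: no forced wait
  const seconds = Number(retryAfter);
  return Number.isFinite(seconds) && seconds > 0 ? seconds * 1000 : defaultMs;
}
```

A caller would pass `response.status` and `response.headers.get("retry-after")`, then schedule the next update check after the returned delay rather than showing an error to the user.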
+
+ ## Cleanup
+
+ * Removed terminal button from CodeModeHeader (simplified interface)
+ * Removed workbench client redundant code (5 line reduction)
+ * Removed EditorPanel unnecessary code (2 line reduction)
+
+
+
+ ## New testing infrastructure
+
+ * Comprehensive test infrastructure:
+ * Test configuration setup (92 lines in __tests__/config/setup.ts)
+ * Vitest configuration (52 lines in __tests__/config/vitest.config.ts)
+ * API response fixtures (188 lines in __tests__/fixtures/api-responses.ts)
+ * Test utilities and helpers (141 lines in __tests__/helpers/test-utils.tsx)
+ * Base provider test suite (189 lines)
+ * Manager test suite (177 lines)
+ * Diff utility test suite (43 lines)
+
+ ## New features
+
+ * New workbench header system:
+ * CodeModeHeader component (151 lines) for code editing mode
+ * PreviewHeader component (338 lines) for preview mode
+ * Enhanced header action buttons
+ * GitHub icon button integration
+ * DeployDialog component (150 lines) for deployment management
+ * Enhanced UI component library:
+ * actions-dropdown component (58 lines) for action menus
+ * animated-counter component (64 lines) for animated numbers
+ * slide-content component (34 lines) for sliding animations
+ * text-shimmer component (46 lines) for shimmer effects
+ * demo component (25 lines) for UI demonstrations
+ * Electron cookie bridge system (209 lines in electronCookieBridge.ts)
+ * Cookie synchronization between Electron and renderer
+ * Improved state management for desktop app
+ * AGENTS.md documentation (92 lines) with build commands and code style guidelines
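At its core, the cookie bridge has to serialize cookies fetched in the Electron main process into strings a renderer can apply via `document.cookie`. This is a minimal sketch under assumed cookie shapes, not the actual `electronCookieBridge.ts` API:

```typescript
// Minimal shape of a cookie as sent over IPC; Electron's real Cookie
// object carries more fields (domain, expirationDate, httpOnly, ...).
interface BridgedCookie {
  name: string;
  value: string;
  path?: string;
  secure?: boolean;
}

// Serialize cookies for handoff to the renderer's document.cookie.
export function serializeCookies(cookies: BridgedCookie[]): string[] {
  return cookies.map((c) => {
    const parts = [`${c.name}=${encodeURIComponent(c.value)}`];
    parts.push(`path=${c.path ?? "/"}`);
    if (c.secure) parts.push("secure");
    return parts.join("; ");
  });
}
```

In practice the main process would read cookies with Electron's `session.cookies.get(...)` and ship the serialized strings to the renderer over IPC.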
+
+ ## LLM provider updates
+
+ * Complete LLM provider model updates:
+ * Amazon Bedrock: 293 line additions with latest models
+ * Anthropic: 63 line additions with new Claude models
+ * Google: 103 line additions with updated Gemini models
+ * Groq: 168 line additions with all missing Groq models from official docs
+ * Mistral: 44 line additions with latest Mistral models
+ * OpenAI: 119 line additions with updated GPT models
+ * OpenRouter: 191 line additions with expanded model support
+ * Perplexity: 6 line additions including sonar-reasoning model
+ * Together AI: 144 line additions with latest Together models
+
+ ## Component improvements
+
+ * Major Workbench.client.tsx refactor (402 line additions)
+ * Preview component optimization (512 line reduction)
+ * Enhanced Messages.client.tsx (40 line changes)
+ * Updated HeaderActionButtons (26 line changes)
+ * Header component improvements (12 line changes)
+ * Enhanced base provider (15 line changes)
+ * Updated DeepSeek provider with V3.2-Exp models and correct token limits (17 line changes)
+ * Enhanced Ollama provider (21 line changes)
+ * BaseChat improvements (24 line changes)
+ * DiffView enhancements (10 line changes)
+ * Various component updates (EditorPanel, FileBreadcrumb, LockManager, TerminalTabs, ChatAlert, DeployAlert, ImportErrorModal, ProgressCompilation, Artifact)
+ * README.md updates (3 line changes)
+
+ ## Bug fixes
+
+ * Fixed Electron app model dropdown issue
+ * Fixed header button styling consistency
+ * Fixed workbench layout and integration issues
+ * Fixed UI component export in index.ts
+
+ ## Cleanup
+
+ * Removed package-lock synchronization script (110 lines, no longer needed)
+ * Removed GitHub provider models (5 line removal, consolidated)
+
+
+
+ ## MCP (Model Context Protocol) integration
+
+ * Complete MCP integration with tool execution
+ * MCPExecutionHistory component (350 lines) for tracking tool usage
+ * MCPToolApproval component (263 lines) for user consent management
+ * MCPToolRegistry component (292 lines) for tool discovery and registration
+ * MCPTools component (45 lines) for tool interface
+ * Enhanced MCPIntegrationPanel (177 line additions) with real-time status updates
+ * MCP server selection via slash commands
+ * Tool execution with real-time feedback
+ * Server connection status monitoring
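Server selection via slash commands implies parsing a leading `/server tool args` token out of the chat input. The grammar below is an assumed illustration, not the shipped parser:

```typescript
// Parsed form of a message like "/github search_issues electron crash";
// the exact syntax is an assumption for illustration.
export interface SlashCommand {
  server: string;
  tool: string;
  args: string;
}

// Returns null for ordinary messages that are not slash commands.
export function parseSlashCommand(input: string): SlashCommand | null {
  const match = /^\/(\S+)\s+(\S+)\s*(.*)$/.exec(input.trim());
  if (!match) return null;
  return { server: match[1], tool: match[2], args: match[3] };
}
```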
+
+ ## New features
+
+ * Thinking artifacts feature (106 lines in ThinkingArtifact.tsx)
+ * Visual representation of AI reasoning process
+ * Step-by-step thought display
+ * Enhanced transparency for complex tasks
+ * Analytics integration suite:
+ * Google Tag Manager integration (60 lines in GTMProvider.tsx)
+ * Google Analytics tracking with complete event system
+ * Amplitude analytics with session replay (17 lines in AmplitudeProvider.tsx)
+   * Replaced CDN-based Amplitude with the official npm package
+ * Command menu system (142 lines in Command.tsx component)
+ * Quick access to application features
+ * Keyboard shortcuts support
+ * Searchable command palette
+ * Settings infrastructure overhaul:
+ * SearchInput component (85 lines) for settings search
+ * SearchInterface component (207 lines) for unified search UX
+ * SearchResults component (67 lines) for displaying results
+ * SettingResultItem component (183 lines) for individual result cards
+ * SettingsCard component (112 lines) for consistent setting display
+ * SettingsPanel component (183 lines) for panel layout
+ * Settings search utility (295 lines in settingsSearch.ts)
+ * WorkspaceModal component (246 lines) for workspace management
+ * App notification system for update alerts and status messages
+ * AGENTS.md documentation (47 lines) with build commands and code style guidelines
+ * Animation system (259 lines in animations.scss)
+ * Comprehensive variable system additions (42 lines in variables.scss)
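The settings search utility boils down to matching a query against the titles and keywords of registered entries. A deliberately simplified sketch (the shipped `settingsSearch.ts` is far richer, and these names are assumptions):

```typescript
// Illustrative searchable settings entry.
export interface SettingEntry {
  id: string;
  title: string;
  keywords: string[];
}

// Case-insensitive substring match over title and keywords;
// an empty or whitespace-only query yields no results.
export function searchSettings(
  entries: SettingEntry[],
  query: string,
): SettingEntry[] {
  const q = query.trim().toLowerCase();
  if (!q) return [];
  return entries.filter(
    (e) =>
      e.title.toLowerCase().includes(q) ||
      e.keywords.some((k) => k.toLowerCase().includes(q)),
  );
}
```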
+
+ ## Major improvements
+
+ * Major ControlPanel redesign (317 line changes)
+ * Settings tabs restructuring:
+ * FeaturesTab: 328 line overhaul for better UX
+ * SettingsTab: 278 line redesign for improved organization
+ * UpdateTab: 444 line reduction through simplification
+ * Enhanced TabTile component (109 line changes) with better styling
+ * MCP integration improvements:
+ * Enhanced MCPIntegrationPanel styling (20 line changes in SCSS)
+ * Updated MCPMarketplace with 30 line improvements
+ * Enhanced MCPServerCard with 24 line changes
+ * Improved MCPTemplateConfigDialog with 78 line additions
+ * Updated McpServerForm with better validation
+ * Chat and messaging enhancements:
+ * Enhanced Chat.client.tsx with 41 line additions for better state management
+ * Improved Markdown rendering with 42 line changes
+ * Updated Chatbox with 6 line improvements
+ * LLM streaming improvements (34 line additions in stream-text.ts)
+ * Message parsing enhancements (168 line additions in message-parser.ts)
+ * MCP service major upgrade (97 line additions)
+ * Workbench store improvements (50 line additions)
+ * Updated auto-update system (58 line changes)
+ * Enhanced update check hook (98 line additions)
+ * LLM manager improvements (115 line additions)
+ * Editor and workbench updates (EditorPanel, FileTree, Workbench.client, CodeMirror editor)
+ * Updated CodeMirror theme to match GitHub's dark theme
+ * Enhanced slider icons with colors and larger size
+ * Improved Dialog component (55 line reduction)
+ * Updated BuiltWithCodinitBadge with 4 line improvements
+
+ ## Bug fixes
+
+ * Fixed MCP component dark mode compatibility issues
+ * Fixed MCP component styling and layout problems
+ * Fixed repetitive MCP service availability debug logs
+ * Fixed DialogPortal usage in CommandDialog by removing nested DialogContent
+ * Fixed chatbox text placeholder display
+ * Fixed update check linting errors
+ * Fixed editor color scheme for better readability
+ * Fixed resize handle colors for neutral theme
+ * Fixed toast styling for theme compatibility
+
+ ## Cleanup
+
+ * Removed MCP secrets tab (security concern)
+ * Removed MCP tools tab from integration panel (consolidated)
+ * Removed command search tools from ControlPanel (replaced with new command menu)
+ * Removed MCPTools component from chat directory (moved to mcp directory)
+ * Removed legacy update check implementation
+
+
+
+ ## Bug fixes
+
+ * Fixed workbench layout issues
+ * Updated typing animations for better UX
+
+
+
+ ## New features
+
+ * Style assets for documentation:
+ * chatstyle.png (387KB) - chat interface screenshot
+ * homestyle.png (1.28MB) - home page screenshot
+ * Enhanced background rays styling (52 line additions in styles.module.scss)
+ * Better visual effects
+ * Improved animations
+ * Enhanced theming
+
+ ## Improvements
+
+ * Major styling overhaul:
+ * BaseChat module styles (332 lines added) for better chat layout
+ * Editor styles optimization (166 line reduction)
+ * Index styles restructuring (59 line changes)
+ * Variables system major update (209 line changes)
+ * UnoCSS configuration improvements (139 line changes)
+ * Updated application icons across all platforms:
+ * macOS icon (icon.icns): 166KB → 9KB (optimized)
+ * Windows icon (icon.ico): 4KB → 9KB (enhanced quality)
+ * PNG icon: 14KB → 16KB (better resolution)
+ * Enhanced BackgroundRays component (15 line changes)
+ * Better rendering
+ * Improved performance
+ * Workbench client improvements (27 line changes)
+ * Better layout handling
+ * Enhanced state management
+ * Icon file organization:
+ * Moved icon.icns from /public to /assets/icons
+ * Updated icon.ico in /public (32KB version)
+ * Updated icon.png in /public (16KB version)
+ * README.md updates (39 line changes)
+ * Added new screenshots
+ * Better project description
+ * Improved setup instructions
+
+ ## Bug fixes
+
+ * Fixed Electron build configuration (2 line fixes)
+ * Better icon references
+ * Improved packaging
+ * Fixed Electron main process (2 line fixes)
+ * Icon path corrections
+ * Fixed Windows icon display issues
+ * Proper .ico format
+ * Better resolution support
+ * Fixed ChatAlert component (2 line fixes)
+ * Fixed Chatbox component (2 line fixes)
+ * Fixed ImportErrorModal (4 line fixes)
+ * Fixed Slider component (6 line fixes)
+ * Fixed Pre-start script (2 line fixes)
+
+ ## Cleanup
+
+ * Removed readme_assets/readme.png (385KB, replaced with better screenshots)
+ * Removed duplicate icon files from /public root
+ * Package.json version bump (2 line changes)
+
+
+
+ ## New features
+
+ * CLAUDE.md documentation with comprehensive codebase documentation for AI assistants
+ * CodinIT thinking artifacts feature:
+ * Shows AI reasoning process
+ * Blue color scheme for thinking components
+ * Enhanced transparency in AI decision-making
+ * Enhanced ActionAlert interface with detailed error context fields
+ * Vite-shadcn configured as default starter template
+
+ ## Improvements
+
+ * Complete theme redesign to darker slate/charcoal with blue undertones:
+ * Adjusted slate colors to neutral charcoal with reduced blue tint
+ * Updated all CSS custom properties and variables
+ * Theme classes updated across all components:
+ * Settings components theme updates
+ * Chat interface theme updates
+ * Workbench and terminal theme updates
+ * Header, editor, and deploy components theme updates
+ * Shared UI components theme updates
+ * Replaced deploy dropdown with icon buttons in chat header
+ * Better space utilization
+ * Improved accessibility
+ * Cleaner interface
+ * Changed thinking process component from purple to blue
+ * Better visual consistency
+ * Improved brand alignment
+ * Major component refactoring across the application:
+ * Settings tabs: extensive updates to all tabs (300+ line changes in DebugTab alone)
+ * Connection components: refactored for better organization
+ * Data visualization improvements (38 line changes)
+ * Event logs enhancements (12 line changes)
+ * Features tab updates (30 line changes)
+ * Provider tabs improvements (cloud and local)
+ * Service status tab refinements (46 line changes)
+ * Task manager tab redesign (170 line changes)
+ * Update tab improvements (45 line changes)
+ * Enhanced TabManagement component (46 line changes)
+ * Better tab organization
+ * Improved UX
+ * Improved DraggableTabList (12 line changes)
+ * Better drag and drop
+ * Enhanced visual feedback
+ * Connection management improvements:
+ * GithubConnection: 128 line refactor
+ * NetlifyConnection: 130 line refactor
+ * VercelConnection: 40 line improvements
+ * ConnectionForm: 18 line enhancements
+ * CreateBranchDialog: 19 line updates
+ * PushToGitHubDialog: 84 line changes
+ * RepositoryCard: 28 line improvements
+ * RepositoryList: 4 line updates
+ * RepositorySelectionDialog: 100 line refactor
+ * StatsDialog: 14 line changes
+ * ConnectionDiagnostics: 92 line updates
+ * ConnectionsTab: 50 line changes
+ * Storage keys and class names updated throughout client library
+ * API responses and metadata updates in routes
+ * Build paths and error messages improved in configuration
+ * Constants and icon identifiers updated
+
+ ## Bug fixes
+
+ * Fixed Biome linting errors in bindings.sh script
+ * Fixed deployment link functionality
+ * Fixed header button interactions
+ * Fixed Git operations and integration
+ * Fixed test suite updates
+
+ ## Documentation
+
+ * README.md improvements (6 line changes)
+ * Better documentation structure
+ * Improved clarity
+ * TypingAnimation component updates
+
+ ## Configuration
+
+ * .gitignore additions (2 line changes)
+ * Better file exclusions
+ * Cleaner repository
+
+
+
+ ## Improvements
+
+ * Package.json version bump from 1.0.1 to 1.0.2 (note: tag is v1.0.3)
+ * Version synchronization between package.json and git tags
+
+
+ This release represents a version bump with internal improvements. The git tag is v1.0.3 while package.json shows 1.0.2, indicating a minor versioning discrepancy that was corrected in subsequent releases.
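Drift like this is easy to catch in CI with a one-line comparison between the git tag and the `package.json` version; the helper name below is an assumption:

```typescript
// True when a git tag like "v1.0.3" matches a package.json version
// like "1.0.3" (leading "v" stripped before comparison).
export function tagMatchesPackageVersion(tag: string, version: string): boolean {
  return tag.replace(/^v/, "") === version;
}
```

For this release, `tagMatchesPackageVersion("v1.0.3", "1.0.2")` returns `false`, which is exactly the discrepancy noted above.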
+
+
+
+
+ ## New features
+
+ * CODEOWNERS file for repository management
+ * Enhanced error messages and build path configurations
+
+ ## Improvements
+
+ * Major Dockerfile updates (51 line changes)
+ * Improved build process
+ * Better caching strategies
+ * Enhanced production configuration
+ * README.md comprehensive overhaul (66 line changes)
+ * Better project description
+ * Improved setup instructions
+ * Enhanced documentation structure
+ * Updated examples and usage
+ * Electron desktop app updates:
+ * Updated dependencies
+ * Improved configuration
+ * Better packaging
+ * Component styling improvements across the application:
+ * AvatarDropdown: 26 line improvements
+ * ControlPanel: 16 line enhancements
+ * DraggableTabList: 4 line updates
+ * TabManagement: 26 line changes
+ * TabTile: 26 line improvements
+ * Connection component refinements:
+ * ConnectionForm: 8 line updates
+ * CreateBranchDialog: 12 line changes
+ * GitHubAuthDialog: 8 line improvements
+ * PushToGitHubDialog: 26 line enhancements
+ * RepositoryCard: 8 line updates
+ * RepositorySelectionDialog: 22 line changes
+ * StatsDialog: 4 line improvements
+ * Settings tab improvements:
+ * DebugTab: 62 line enhancements
+ * EventLogsTab: 30 line updates
+ * FeaturesTab: 10 line changes
+ * NotificationsTab: 18 line improvements
+ * ProfileTab: 14 line updates
+ * CloudProvidersTab: 14 line changes
+ * LocalProvidersTab: 30 line enhancements
+ * OllamaModelInstaller: 28 line improvements
+ * ServiceStatusTab: 8 line updates (both instances)
+ * SettingsTab: 10 line changes
+ * UpdateTab: 40 line enhancements
+ * UI component updates:
+ * APIKeyManager: 2 line fixes
+ * Chatbox: 35 line improvements
+ * ColorSchemeDialog: 2 line updates
+
+ ## Bug fixes
+
+ * Fixed build errors and configuration issues
+ * Fixed Electron dependency compatibility
+ * Fixed component import paths and references
+
+
+
+ ## Improvements
+
+ * First patch release after v1.0.0
+ * Bug fixes and improvements
+
+
+
+
+ This release marks the official stable release of CodinIT.dev as a production-ready AI app builder.
+
+
+ ## New features
+
+ * Complete settings system redesign with new tabbed interface
+ * New `ConnectionsTab` for unified service integrations
+ * New `DebugTab` with comprehensive debugging utilities
+ * New `ServiceStatusTab` with provider health monitoring
+ * New `TaskManagerTab` for task visualization
+ * New `UpdateTab` for version management
+ * Comprehensive `ServiceStatusTab` with 14 provider status monitors:
+ * Amazon Bedrock, Anthropic, Cohere, DeepSeek, Google, Groq
+ * HuggingFace, Hyperbolic, Mistral, OpenAI, OpenRouter
+ * Perplexity, Together, xAI
+ * New provider factory pattern for scalable provider management
+ * Enhanced GitHub integration:
+ * New `GithubConnection` component (990+ lines)
+ * New `RepositorySelectionDialog` component (1011 lines)
+ * New `RepositoryCard` and `RepositoryList` components
+ * New `ConnectionDiagnostics` component (610 lines)
+ * Enhanced GitHub API integration with stats and branch management
+ * New system diagnostics endpoints:
+ * `api.system.app-info` endpoint
+ * `api.system.memory-info` endpoint
+ * `api.system.process-info` endpoint
+
+ ## Improvements
+
+ * Complete refactor of GitHub Actions CI/CD workflows
+ * Restructured deployment pipelines for multiple platforms (Web, Electron, Docker)
+ * Enhanced Docker configuration with improved repository naming conventions
+ * Updated Electron main process and window management
+ * Improved `Markdown` component with better rendering
+ * Enhanced `AssistantMessage` component
+ * `BaseChat` completely refactored (384 lines)
+ * `Chat.client` enhanced with better state management
+ * `ModelSelector` redesigned (451 lines)
+ * Improved workbench components
+ * Better terminal integration
+ * Enhanced file tree and preview handling
+ * Updated OpenAI provider with improved error handling
+ * Enhanced Anthropic provider integration
+ * Improved provider registration and discovery
+ * Better context optimization
+ * Refactored Supabase integration
+ * Enhanced data operations hooks
+ * Better file state management
+
+ ## Cleanup
+
+ * Removed legacy GitHub and GitLab tabs (consolidated into Connections)
+ * Removed legacy Supabase and Netlify specific tabs
+ * Removed deprecated `CLAUDE.md` and `PROJECT.md` from root
+
+ ## Bug fixes
+
+ * Fixed Docker repository naming case issues
+ * Fixed crypto module resolution for SSR builds
+ * Fixed Electron build with Node.js v20.15.1
+ * Fixed deprecated electron-builder properties
+ * Improved error handling across providers
+
+
+
+ ## Bug fixes
+
+ * Fixed Electron SSR build issues:
+ * Added crypto module alias for SSR builds in both Vite configs
+ * Fixed crypto module resolution for server-side rendering
+ * Fixed Electron build configuration:
+ * Updated Node.js to v20.15.1 for better compatibility
+ * Updated pnpm to v9.15.9 for improved package management
+
+ ## Improvements
+
+ * GitHub icon updated to use Phosphor icon library
+ * Better icon consistency
+ * Improved visual quality
+ * "Clone a Git Repo" button redesigned to pill shape with GitHub icon
+ * Enhanced visual design
+ * Better user experience
+ * "Import Folder" button updated to pill shape with icon and text
+ * Consistent button styling
+ * Improved accessibility
+ * "Import Chat" button redesigned to pill shape with icon and text
+ * Unified button design language
+ * Better visual hierarchy
+
+ ## Cleanup
+
+ * Removed deprecated signDlls property from electron-builder config
+ * Removed obsolete signing configuration
+ * Cleaned up build configuration
+
+
+
+ ## New features
+
+ * Theme context support for header logo
+ * Dynamic logo switching based on theme
+ * Light/dark mode logo variants
+ * Better visual consistency
+
+ ## Improvements
+
+ * Enhanced Chat.client.tsx (175 line overhaul)
+ * Better chat history handling
+ * Improved message error handling
+ * Enhanced state management
+ * Better error recovery
+ * useMessageParser hook improvements (60 line changes)
+ * Better message parsing logic
+ * Improved artifact extraction
+ * Enhanced error handling
+ * useChatHistory hook enhancements (116 line changes)
+ * Better chat persistence
+ * Improved history management
+ * Enhanced data handling
+ * Workbench store updates (27 line additions)
+ * Better state management
+ * Improved file handling
+ * Action runner improvements (12 line changes)
+ * Better action execution
+ * Enhanced error handling
+ * Dockerfile optimization (106 line changes)
+ * Better build process
+ * Improved caching
+ * Enhanced production readiness
+ * Header component updates (4 line changes)
+ * Theme-aware logo support
+ * Better styling
+ * Vite configuration improvements (17 line changes)
+ * Better build optimization
+ * Enhanced development experience
+ * Package dependencies update (9 line changes in package.json)
+ * pnpm-lock.yaml major update (809 line additions)
+ * Updated dependencies
+ * Security patches
+ * Performance improvements
+
+ ## Bug fixes
+
+ * Fixed chat history errors and data inconsistencies
+ * Fixed message parsing errors and edge cases
+ * Fixed workbench state management issues
+ * Fixed Docker build errors and optimization
+ * Fixed path calculation logic to handle both path formats
+ * Fixed build errors in production mode
+
+ ## Cleanup
+
+ * Removed .claude/settings.local.json (7 lines, local configuration)
+ * Removed Codinit-dev package verification workflow (100 lines, moved)
+ * Removed Cloudflare Pages build example script fix (2 line change)
+
+ ## Security
+
+ * Vite security update from 5.4.20 to 5.4.21 (dependency update via Dependabot)
+ * Fixed known security vulnerabilities
+ * Improved build security
+
+
+
+ ## Bug fixes
+
+ * Fixed package.json configuration issues
+ * Corrected dependencies
+ * Fixed version specifications
+ * Improved package metadata
+
+
+
+ ## New features
+
+ * Complete build verification and provenance system:
+ * BuildInfo component (112 lines) with comprehensive documentation
+ * GitHub Actions workflow for build verification (100 lines)
+ * Build verification script (50 lines in verify-build.js)
+ * Build metadata collection script (76 lines in collect-metadata.js)
+ * Build collector module (196 lines in src/collector.ts)
+ * Build verifier module (168 lines in src/verifier.ts)
+ * TypeScript types for verification (68 lines in src/types.ts)
+ * Index module for exports (54 lines in src/index.ts)
+ * Verification configuration (18 lines in tsconfig.json)
+ * Build verification integration:
+ * collect-build-metadata.cjs script (95 lines)
+ * verify-build.cjs script (69 lines)
+ * GitLab CI example configuration (73 lines)
+ * Cloudflare Pages build example (32 lines)
+ * Package configuration for codinit-dev package (33 lines)
+ * Package workspace configuration:
+ * pnpm-workspace.yaml (2 lines) for monorepo support
+ * codinit-dev package structure
+ * Package.json for verification package (33 lines)
+ * pnpm-lock.yaml for package (1192 lines)
+ * .gitignore for verification package (20 lines)
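Provenance metadata of this kind generally pairs a content hash with build context. The sketch below shows the hashing core using Node's `crypto` module; the record fields are assumptions rather than the actual collect-metadata.js schema:

```typescript
import { createHash } from "node:crypto";

// Illustrative metadata record; the real collector's schema may differ.
export interface ArtifactRecord {
  sha256: string;
  byteLength: number;
  collectedAt: string;
}

// Hash an artifact's bytes so a later verification step can detect tampering.
export function describeArtifact(bytes: Uint8Array): ArtifactRecord {
  return {
    sha256: createHash("sha256").update(bytes).digest("hex"),
    byteLength: bytes.byteLength,
    collectedAt: new Date().toISOString(),
  };
}
```

A verifier such as verify-build.cjs would recompute the hash of a downloaded artifact and compare it against the published record.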
+
+ ## Improvements
+
+ * Enhanced GitHub template API (69 line changes in api.github-template.ts)
+ * Better template handling
+ * Improved error messages
+ * Enhanced validation
+ * Updated template types (3 line changes)
+ * Better type definitions
+ * Improved type safety
+ * Optimized constants file (138 line reduction in utils/constants.ts)
+ * Removed redundant definitions
+ * Better organization
+ * Improved maintainability
+ * Package.json updates (6 line changes)
+ * Added verification dependencies
+ * Updated scripts
+ * Enhanced configuration
+ * pnpm-lock.yaml updates (252 line additions)
+ * New verification dependencies
+ * Updated packages
+ * Security patches
+ * Updated build scripts:
+ * update-imports.sh (4 line changes)
+ * update.sh (2 line changes)
+ * useChatHistory minor fix (2 line change)
+
+ ## Bug fixes
+
+ * Fixed GitHub API authentication header format
+ * Corrected authorization header
+ * Improved API integration
+ * Fixed GitHub template URL parsing issues
+ * Better URL validation
+ * Enhanced error handling
+ * Fixed build verification script ESLint errors
+ * Code quality improvements
+ * Better error handling
+ * Fixed package.json configuration issues
+
+ ## Cleanup
+
+ * Removed CLAUDE.md from root (330 lines, moved to project-specific location)
+ * Removed non-existent next-shadcn template from starter templates
+ * Cleaned up invalid template reference
+ * Improved template list accuracy
+
+ ## Security
+
+ * Added build provenance system for supply chain security
+ * Build verification ensures integrity of distributed packages
+ * Metadata collection for build traceability
+
+
+
+ ## New features
+
+ * Complete token configuration system:
+ * TokenConfigSelector component (334 lines) with dropdown and custom modal
+ * Token configuration store (162 lines in tokenConfig.ts) with presets and custom options
+ * Predefined token presets:
+ * Balanced: 8,192 tokens (default)
+ * Extended: 16,384 tokens
+ * Maximum: 32,768 tokens
+ * Minimal: 2,048 tokens
+ * Custom token configuration with slider controls
+ * Temperature and top-p parameter controls
+ * CLAUDE.md documentation (330 lines)
+ * Project-specific AI assistant guidelines
+ * Development workflows
+ * Code standards and conventions
+ * Architecture documentation
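 +
 + The preset/custom split in the token configuration store can be sketched as follows (type and function names are illustrative, not the real `tokenConfig.ts` API; the preset values match the ones listed above):
 +
 + ```typescript
 + // Sketch: named presets with a custom override, defaulting to Balanced.
 + export interface TokenPreset {
 +   id: string;
 +   label: string;
 +   maxTokens: number;
 + }
 +
 + export const TOKEN_PRESETS: TokenPreset[] = [
 +   { id: "minimal", label: "Minimal", maxTokens: 2048 },
 +   { id: "balanced", label: "Balanced", maxTokens: 8192 }, // default
 +   { id: "extended", label: "Extended", maxTokens: 16384 },
 +   { id: "maximum", label: "Maximum", maxTokens: 32768 },
 + ];
 +
 + export function resolveMaxTokens(presetId: string, custom?: number): number {
 +   // A custom value (e.g. from the slider) wins; otherwise fall back to the
 +   // selected preset, and finally to the Balanced default of 8,192.
 +   if (custom !== undefined) return custom;
 +   return TOKEN_PRESETS.find((p) => p.id === presetId)?.maxTokens ?? 8192;
 + }
 + ```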
+
+ ## Improvements
+
+ * Enhanced Chatbox component (38 line refactor)
+ * Integrated TokenConfigSelector
+ * More sleek design with reduced spacing and padding
+ * Better visual hierarchy
+ * Improved layout
+ * Chat API endpoint improvements (18 line changes)
+ * Token config support
+ * Better parameter handling
+ * Enhanced flexibility
+ * Updated ChatBox component name and structure
+ * Better component organization
+ * Improved code clarity
+ * Workbench store enhancements (13 line changes)
+ * Better state management
+ * Improved integration
+ * Action runner updates (6 line changes)
+ * Fixed ActionType to include start and build action types
+ * Better type safety
+ * Enhanced validation
+ * Constants optimization (50 line changes in utils/constants.ts)
+ * Better organization
+ * Reduced redundancy
+ * Improved maintainability
+ * BaseChat improvements (2 line changes)
+ * Better chat flow
+ * Minor optimizations
+ * APIKeyManager updates (4 line changes)
+ * Better key management
+ * Improved UX
+ * ChatAlert improvements (2 line changes)
+ * Better error display
+ * Actions type definition updates (2 line changes)
+ * Added missing action types
+ * Better type coverage
+ * Moved token config selector to icon toolbar
+ * Converted to compact icon dropdown
+ * More accessible placement
+ * Better space utilization
+ * Integrated token config from chat client to API
+ * Full token control pipeline
+ * Better configuration flow
+
+ ## Bug fixes
+
+ * Fixed default chat alerts functionality
+ * Better alert display
+ * Improved timing
+ * Enhanced UX
+ * Fixed default API configuration
+ * Better default values
+ * Improved reliability
+ * Fixed default template selection
+ * Auto-selection working correctly
+ * Better initial state
 + * Model settings bar is now collapsed by default
+ * Cleaner initial UI
+ * More screen space
+ * Better UX
 + * Fixed token config icon size and removed hard-coded colors
+ * Theme-compatible icon
+ * Better visual consistency
 + * Changed token config dropdown to open downwards
+ * Better placement
+ * Improved UX
+
+ ## Cleanup
+
+ * Removed debug console.log statements from workbench
+ * Cleaner console output
+ * Better production readiness
+ * Removed debug console.log statements from action-runner
+ * Reduced noise
+ * Improved debugging experience
+
+
+
+ ## New features
+
+ * Complete MCP (Model Context Protocol) integration system:
+ * AddMcpServerDialog component for server configuration
+ * MCPIntegrationPanel component (92 lines) with comprehensive styling
+ * MCPMarketplace component for discovering MCP servers
+ * MCPServerCard component for server display
+ * MCPTemplateConfigDialog component for template setup
+ * McpServerForm component for server form handling
+ * MCP service integration in chat API
+ * MCP tool handling in stream-text utility
+ * MCP integration hooks in useMessageParser
+ * Enhanced prompt library with MCP support
+ * GitCloneButton component updates for MCP
+ * Messages.client.tsx with MCP tool display
+ * MCPTools component for tool management
+ * Chatbox with MCP tool handling
+ * Chat.client.tsx with complete MCP integration
+ * Folder import feature for loading local projects
+ * Complete folder structure import
+ * File parsing and validation
+ * Project detection and setup
+ * Comprehensive starter template system with 14 templates:
+ * Angular template
+ * Astro Shadcn template
+ * Codinit Expo template
+ * Codinit Qwik template
+ * Codinit Remotion template
+ * Codinit Vite React TypeScript template
+ * GraphQL template
+ * Next.js template
+ * Next.js Shadcn template
+ * Slidev template
+ * SvelteKit template
+ * Test template
+ * Vite Shadcn template
+ * Vue template
+ * GitHub integration enhancements:
+ * Standalone template repositories for Remix, SvelteKit, and Vanilla Vite
+ * GitHub starters repository detection for template imports
+ * Automatic command detection for GitHub template imports
+ * Template selection system with visual indicators
+ * Auto-select default template functionality
+ * Bug reporting system integration:
+ * Complete bug report API endpoint (254 lines)
+ * Integration with issue tracking
+ * Automated error collection
+ * GitLab API service (508 lines in gitlabApiService.ts)
+ * Complete GitLab integration
+ * Project management
+ * Branch operations
+ * Authentication handling
+ * GitHub API enhancements:
+ * GitHub branches API endpoint (166 lines)
+ * GitHub stats API endpoint (198 lines)
+ * GitHub user API endpoint (287 lines)
+ * Enhanced GitHub template API (480 line overhaul)
+ * GitLab endpoints:
+ * GitLab branches API (143 lines)
+ * GitLab projects API (105 lines)
+ * Service integration components:
+ * ConnectionForm component (193 lines)
+ * ConnectionTestIndicator component (60 lines)
+ * ErrorState component (102 lines)
+ * LoadingState component (94 lines)
+ * ServiceHeader component (72 lines)
+ * Project initialization service (358 lines in projectInit.ts)
+ * Intelligent file loading detection for Git imports
+ * Automatic project setup
+ * Dependency detection
+ * Command inference
+ * Security utilities (245 lines in security.ts)
+ * Enhanced security checks
+ * Input validation
+ * Sanitization helpers
+ * Local template system:
+ * Local template loader API (134 lines)
+ * Template validation and loading
+ * React + Vite default template
+ * Build verification system:
+ * BuildInfo component with comprehensive documentation
+ * GitHub Actions workflow for build verification
+ * Build provenance system
+ * Verification scripts with ESLint integration
+ * Scripts directory with utility scripts
+ * Package-lock synchronization script (110 lines)
+ * Preview workflow for GitHub Actions (196 lines)
+
+ ## Improvements
+
+ * Major artifact system refactor (201 line changes in Artifact.tsx)
+ * Better artifact handling
+ * Improved rendering
+ * Enhanced state management
+ * Enhanced Chat.client.tsx (238 line additions)
+ * Better MCP integration
+ * Improved state management
+ * Enhanced error handling
+ * Comprehensive prompt system updates:
+ * new-prompt.ts: 883 line reduction through optimization
+ * prompts.ts: 687 line reduction for better performance
+ * Optimized prompts: 52 line changes
+ * Enhanced prompt library: 14 line improvements
+ * Updated discuss-prompt: 10 line changes
+ * Workbench major refactor (434 line changes)
+ * Improved file handling
+ * Better state management
+ * Enhanced preview integration
+ * LLM provider updates across all providers:
+ * Anthropic: 117 line changes
+ * Cohere: 22 line additions
+ * DeepSeek: 3 line improvements
+ * GitHub: 117 line changes
+ * Google: 116 line changes
+ * Groq: 49 line changes
+ * Mistral: 9 line additions
+ * Moonshot: 72 lines (new provider)
+ * OpenRouter: 79 line changes
+ * OpenAI-like: 120 line changes
+ * OpenAI: 93 line changes
+ * Perplexity: 12 line changes
+ * Together: 21 line changes
+ * xAI: 23 line changes
+ * Amazon Bedrock: 1 line addition
+ * Enhanced stream-text utility (199 line additions)
+ * Better MCP tool handling
+ * Improved streaming logic
+ * Enhanced error recovery
+ * Action runner improvements (38 line additions)
+ * Better action execution
+ * Improved error handling
+ * Enhanced logging
+ * Message parser enhancements (129 line additions)
+ * Better artifact parsing
+ * Improved code extraction
+ * Enhanced validation
+ * GitUrlImport refactor (242 line changes)
+ * Better import handling
+ * Improved error messages
+ * Enhanced validation
+ * DiffView improvements (98 line changes)
+ * Better diff rendering
+ * Improved performance
+ * Enhanced UX
+ * Preview component updates (157 line changes)
+ * Better preview handling
+ * Improved iframe management
+ * Enhanced error states
+ * Workbench.client redesign (231 line changes)
+ * Better layout management
+ * Improved state handling
+ * Enhanced integration
+ * README overhaul (481 line reduction)
+ * Simplified and focused content
+ * Better organization
+ * Improved readability
+ * Enhanced connection tabs:
+ * NetlifyConnection: 202 line refactor
+ * VercelConnection: 35 line improvements
+ * GitHubAuthDialog: 5 line updates
+ * Updated feature API (129 line changes)
+ * Better feature flag handling
+ * Improved configuration
+ * Enhanced validation
+ * LLM constants updates (46 line changes)
+ * Better model definitions
+ * Improved token limits
+ * Enhanced configuration
+
+ ## Bug fixes
+
+ * Fixed GitHub repository import 500 error
+ * Corrected API authentication header format
+ * Added fallback to Contents API when releases unavailable
+ * Fixed GitHub template URL parsing
+ * Fixed GitHub repository path issues in starter templates
+ * Corrected directory paths to match repository structure
+ * Added proper documentation
+ * Fixed constants and context for subdirectory paths
+ * Fixed template system issues:
+ * Fixed deprecated baseUrl warning in Astro Shadcn template
+ * Fixed starter template directory paths
+ * Restored missing selectedTemplate prop to BaseChat
 + * Fixed chatbox showing no response by falling back to original content
+ * Fixed TypeScript and linting issues:
+ * Replaced any types with proper TypeScript types across multiple files
+ * Fixed Biome linting errors in root.tsx
+ * Fixed lint errors in VercelConnection, NetlifyConnection, ConnectionsTab
+ * Fixed TypeScript errors in api.chat.ts, api.enhancer.ts, api.configured-providers.ts
+ * Fixed lint errors in api.check-env-key.ts, api.bug-report.ts
+ * Fixed TypeScript lint errors in api.system.diagnostics.ts, api.supabase.query.ts
+ * Fixed Import.meta.hot optional chaining issues:
+ * Fixed in Artifact component
+ * Fixed in ToolInvocations component
+ * Fixed in workbench store
+ * Fixed in terminal store
+ * Fixed in editor store
+ * Fixed in files store
+ * Fixed in webcontainer module
+ * Fixed Vitest configuration for proper test execution
+ * Configured test projects with proper plugins and root directory
+ * Fixed Vitest browser test configuration path
 + * Chat functionality fixes:
+ * Restored chat functionality from working state
+ * Fixed artifact lint parsing issues
+ * Fixed template selection without breaking chat
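 +
 + The import.meta.hot fixes above all follow one pattern: unconditional access to the HMR API throws in production builds where `import.meta.hot` is undefined, while optional chaining makes it a no-op. An isolated sketch of the guard (the `HotApi` type and `registerHmr` helper are illustrative):
 +
 + ```typescript
 + // Sketch: hot?.accept(...) is safe when HMR is absent, unlike hot.accept(...).
 + type HotApi = { accept: (cb: () => void) => void } | undefined;
 +
 + export function registerHmr(hot: HotApi, onUpdate: () => void): boolean {
 +   hot?.accept(onUpdate); // no-op instead of a TypeError when hot is undefined
 +   return hot !== undefined;
 + }
 + ```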
+
+ ## Cleanup
+
+ * Removed FAQ.md documentation (105 lines, content integrated elsewhere)
+ * Removed local template files in favor of GitHub repository
+ * Removed non-existent next-shadcn template from starter templates
+ * Removed GitHub workflow for package-lock.json validation
 + * Replaced unguarded import.meta.hot calls with optional chaining
+ * Removed duplicate and unused test files
+
+ ## Security
+
+ * Fixed security dependencies and vulnerabilities
+ * Added comprehensive security utility system
+ * Enhanced input validation across all endpoints
+ * Improved error handling to prevent information leakage
+
+
+
+ ## New features
+
+ * Complete MCP integration components (PR #16):
+ * MCPTemplateConfigDialog component for template configuration
+ * MCPServerCard component for server display
+ * MCPMarketplace component for discovering MCP servers
+ * MCPIntegrationPanel component with comprehensive styling
+ * AddMcpServerDialog component for adding MCP servers
+ * MCP integration across chat system:
+ * Chat.client.tsx with full MCP integration
+ * ChatBox.tsx with MCP tool handling
+ * GitCloneButton component updates
+ * MCPTools component for tool management
+ * Messages.client.tsx with MCP tool display
+ * GitUrlImport.client.tsx enhancements
+
+ ## Improvements
+
+ * Enhanced MCP support in core systems:
+ * api.chat.ts with MCP integration
+ * message-parser.ts with MCP artifact parsing
+ * enhanced-message-parser.ts updates
+ * action-runner.ts with MCP action support
+ * stream-text.ts with MCP tool handling
+ * Updated prompts and libraries:
+ * prompts.ts with MCP awareness
+ * prompt-library.ts with MCP support
+ * useMessageParser hook with MCP handling
+ * Utility updates for MCP:
+ * selectStarterTemplate.ts improvements
+ * projectCommands.ts updates
+ * folderImport.ts enhancements
+ * fileUtils.ts modifications
+ * constants.ts additions
+ * Actions types extended for MCP operations
+ * Package dependencies updated for MCP support
+ * pnpm-lock.yaml updated with new dependencies
+
+ ## Bug fixes
+
+ * Fixed GitHub import functionality issues
+ * Better error handling
+ * Improved import process
+ * Enhanced validation
+ * Fixed setMcpOpen error in state management
+ * Fixed state synchronization
+ * Improved error handling
+ * Fixed message parser issues with MCP artifacts
+ * Better artifact detection
+ * Enhanced parsing logic
+
+
+
+ ## New features
+
+ * GitHub starter template monorepo subdirectory support
+ * Templates can now reference subdirectories in monorepos
+ * Better template organization
+ * Enhanced flexibility
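 +
 + Monorepo subdirectory support means a template reference can carry a path beyond `owner/repo`, which the import step splits off. A hypothetical sketch of that resolution (function and field names are illustrative, not the actual template-loading API):
 +
 + ```typescript
 + // Sketch: split "owner/repo/sub/dir" into the repository and its subdirectory.
 + export function parseTemplateRef(ref: string): { repo: string; subdir?: string } {
 +   const [owner, repo, ...rest] = ref.split("/");
 +   return {
 +     repo: `${owner}/${repo}`,
 +     subdir: rest.length > 0 ? rest.join("/") : undefined, // absent for plain repos
 +   };
 + }
 + ```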
+
+ ## Improvements
+
+ * Message parser improvements
+ * Better artifact detection
+ * Enhanced parsing logic
+ * Improved error handling
+
+ ## Bug fixes
+
+ * Fixed message parser artifact detection (PR #14)
+ * Fixed message parsing logic
+ * Improved artifact extraction
+ * Better error handling
+ * Fixed artifact naming consistency (PR #15)
 + * Changed codinitArticact to codinitArtifact for consistency
+ * Updated all references
+ * Improved naming conventions
+ * Fixed GitHub starter template integration
+ * Fixed subdirectory handling in monorepos
+ * Improved template fetching
+ * Better error messages
+
+
+
+ ## Bug fixes
+
+ * Fixed dropdown overlay z-index issues in ModelSelector
+ * Fixed stacking context
+ * Improved dropdown visibility
+ * Better overlay handling
+ * Fixed dropdown overlay issues in ChatBox
+ * Corrected z-index layering
+ * Enhanced dropdown positioning
+ * Fixed missing API keys error handling
+ * Better error messages for missing API keys
+ * Improved user guidance
+ * Enhanced validation
+ * Fixed local provider error handling
+ * Better error messages for Ollama and LM Studio
+ * Improved connection error handling
+ * Enhanced user feedback
+
+ ## Improvements
+
+ * Simplified bug report template
+ * Reduced complexity
+ * Improved user experience
+ * Better issue categorization
+
+ ## Security
+
+ * Fixed security dependencies
+ * Updated vulnerable packages
+ * Applied security patches
+ * Improved dependency security
+
+
+
+ ## New features
+
+ * Image optimization via ImgBot
+ * Automated image compression
+ * Reduced asset sizes
+ * Improved load times
+
+ ## Bug fixes
+
+ * Fixed model selection persistence issues (PR #9)
+ * Model selection now persists across sessions
+ * Fixed state management for selected model
+ * Improved user experience
+ * Fixed chat UI improvements
+ * Better layout and responsiveness
+ * Enhanced visual design
+ * Improved user interactions
+ * Fixed Git integration issues
+ * Fixed Git operations
+ * Improved error handling
+ * Better status reporting
+
+ ## Documentation
+
+ * Documentation updates:
+ * Updated README with better instructions
+ * Improved documentation structure
+ * Enhanced examples and guides
+ * Bumped version to 0.7.0 for release
+
+
+
+ ## Improvements
+
+ * Updated icon sizing across the application
+ * Standardized icon dimensions
+ * Improved visual consistency
+ * Better scaling on different displays
+
+
+
+ ## New features
+
+ * Complete MCP (Model Context Protocol) foundation:
+ * MCP service (457 lines in mcpService.ts) for server lifecycle management
+ * MCP store (115 lines in mcp.ts) for state management
+ * MCP server list component (102 lines in McpServerList.tsx)
+ * MCP server list item component (70 lines in McpServerListItem.tsx)
+ * MCP status badge component (37 lines in McpStatusBadge.tsx)
+ * MCP settings components for server configuration
+ * MCP API endpoints:
+ * api.mcp-check.ts (16 lines) for health checks
+ * api.mcp-update-config.ts (23 lines) for configuration updates
+ * Inspector component (126 lines) for debugging and inspection
+ * LLMApiAlert component (109 lines) for API key warnings
+ * MCPTools component (129 lines) for MCP tool management in chat
+ * Stream recovery system (92 lines in stream-recovery.ts) for handling connection failures
+ * Discuss mode component (27 lines in DiscussMode.tsx) for discussion-focused interactions
+ * ChatBox component (316 lines) extracted from Chat.client.tsx
+ * Discuss prompt system (235 lines in discuss-prompt.ts)
+ * SolidJS icon asset (30 lines SVG)
+ * Common prompt library (63 lines) for shared prompts
+ * Common prompt system (709 lines in prompt.ts)
+ * Context types (8 lines) for better type safety
+
+ ## Improvements
+
+ * Major Chat.client.tsx refactor (689 line reduction)
+ * Extracted ChatBox to separate component
+ * Improved code organization
+ * Better state management
+ * Enhanced maintainability
+ * BaseChat component simplification (363 line reduction)
+ * Removed redundant code
+ * Better component separation
+ * Improved performance
+ * Enhanced StarterTemplates (84 line additions)
+ * Better template display
+ * Improved selection UX
+ * Enhanced visual design
+ * Action runner major expansion (228 line additions)
+ * MCP tool execution support
+ * Better action handling
+ * Enhanced error recovery
+ * Improved logging
+ * Chat API extensive updates (132 line additions)
+ * MCP integration
+ * Better streaming support
+ * Enhanced error handling
+ * Improved response processing
+ * Prompt system updates:
+ * new-prompt.ts: 40 line changes for better prompts
+ * optimized.ts: 50 line changes for optimization
+ * prompts.ts: 58 line changes for improvements
+ * prompt-library.ts: 20 line changes for better organization
+ * Enhanced Anthropic provider (53 line changes)
+ * Better model support
+ * Improved error handling
+ * GitHub template API improvements (18 line changes)
+ * Better template fetching
+ * Enhanced error messages
+ * Constants file expansion (93 line additions)
+ * New MCP-related constants
+ * Better organization
+ * Utility updates:
+ * fileUtils.ts: 4 line changes
+ * folderImport.ts: 4 line changes
+ * projectCommands.ts: 10 line changes
+ * selectStarterTemplate.ts: 9 line changes
+ * Message parser updates (16 line changes)
+ * MCP artifact support
+ * Better parsing logic
+ * Workbench improvements (28 line changes)
+ * Better integration with MCP
+ * Enhanced state management
+ * ExamplePrompts refinement (9 line changes)
+ * GitCloneButton updates (4 line changes)
+ * GitUrlImport.client improvements (4 line changes)
+ * Messages.client enhancements (7 line additions)
+ * Actions types expansion (16 line additions)
+ * Runtime and persistence improvements:
+ * create-summary.ts: 4 line changes
+ * select-context.ts: 4 line changes
+ * stream-text.ts: 2 line changes
+ * utils.ts: 8 line changes
+ * useChatHistory.ts: 4 line changes
+ * Vite configuration updates (18 line changes)
+ * Better build optimization
+ * Enhanced development experience
+ * Package.json updates (2 line changes)
+ * Various hook updates:
+ * StickToBottom.tsx: 2 line changes
+ * useStickToBottom.tsx: 2 line changes
+ * Hook index: 1 line addition
+
+ ## Bug fixes
+
+ * Fixed message parser spec tests (8 line changes)
+ * Fixed OpenAI-like provider configuration (2 line changes)
+ * Fixed Together provider settings (2 line changes)
+ * Fixed Amazon Bedrock provider configuration (1 line addition)
+ * Fixed route configurations:
+ * _index.tsx: 4 line fixes
+ * git.tsx: 8 line fixes
+ * webcontainer.connect.$id.tsx: 2 line fixes
+
+ ## Cleanup
+
+ * Removed Semantic PR workflow (32 lines, simplified CI/CD)
+ * Removed Logo SVG files (logo-text.svg and logo.svg) replaced with updated assets
+
+ ## Documentation
+
+ * Docs workflow improvements (6 line changes)
+ * Electron workflow updates (2 line changes)
+
+
+
+ ## New features
+
+ * pnpm-lock.yaml file for dependency locking
+ * Ensures consistent dependency versions
+ * Improves build reproducibility
+ * Better security through locked versions
+
+ ## Improvements
+
+ * Complete LLM provider icon migration:
+ * Moved all provider icons from `/public/icons/` to provider files
+ * Each provider now includes its icon inline (1 line addition per provider):
+ * Anthropic, Cohere, DeepSeek, GitHub, Google, Groq
+ * HuggingFace, Hyperbolic, Mistral, OpenAI, OpenAI-like
+ * OpenRouter, Perplexity, Together, xAI
+ * Better icon management and loading
+ * Improved bundling and performance
+ * Enhanced ModelSelector (12 line changes)
+ * Better provider icon display
+ * Improved layout
+ * Electron build configuration (5 line changes in electron-builder.yml)
+ * Better build settings
+ * Improved packaging
+ * Electron main process updates (9 line changes)
+ * Better initialization
+ * Improved error handling
+ * WebContainer improvements (22 line additions)
+ * Better container management
+ * Enhanced error handling
+ * Workbench store enhancements (6 line additions)
+ * Better state management
+ * Build configuration updates:
+ * build.d.ts: 8 line changes for better types
+ * vite.config.ts: 15 line additions for optimization
+ * uno.config.ts: 26 line changes for better styling
+ * Electron serve utility (2 line changes)
+ * Pre-start script (2 line changes)
+ * Cloudflare functions (2 line additions)
+ * Package.json updates (43 line changes)
+ * Updated dependencies
+ * Better scripts
+ * Enhanced configuration
+ * Styling improvements:
+ * index.scss: 6 line changes
+ * Better theme support
+
+ ## Bug fixes
+
+ * Fixed LMStudio provider icon path (2 line changes)
+ * Fixed Ollama provider icon path (2 line changes)
+ * Fixed ConnectionDiagnostics component (10 line changes)
+ * Fixed DataTab component (2 line changes)
+
+ ## Cleanup
+
+ * Removed all public provider icon SVG files (18 files removed):
+ * AmazonBedrock.svg, Anthropic.svg, Cohere.svg, Deepseek.svg
+ * Default.svg, Google.svg, Groq.svg, HuggingFace.svg
+ * Hyperbolic.svg, LMStudio.svg, Mistral.svg, Ollama.svg
+ * OpenAI.svg, OpenAILike.svg, OpenRouter.svg, Perplexity.svg
+ * Together.svg, xAI.svg
+ * Removed unused UI component directives:
+ * Badge.tsx: 2 line removal
+ * Collapsible.tsx: 2 line removal
+ * ScrollArea.tsx: 2 line removal
+ * Removed Amazon Bedrock provider icon reference (1 line removal)
+
+ ## Build & Dependencies
+
+ * pnpm-lock.yaml major cleanup (153 line reduction)
+ * Removed unused dependencies
+ * Optimized dependency tree
+ * Better version management
+ * Docker compose configuration (2 line changes)
+ * MkDocs configuration (4 line changes)
+ * GitHub Actions updates:
+ * Issue template config: 7 line changes
+ * Setup and build action: 5 line changes
+ * Electron workflow: 4 line changes
+ * README.md updates (3 line changes)
+
+
+
+ ## New features
+
+ * Complete application foundation with initial codebase structure
+ * Comprehensive settings system:
+ * GithubConnection component (990 lines) for GitHub integration
+ * ConnectionDiagnostics component (610 lines) for testing connections
+ * ConnectionsTab (184 lines) for managing integrations
+ * TabManagement component (380 lines) for tab organization
+ * DraggableTabList component (163 lines) for customizable tab layout
+ * ConnectionForm component (188 lines) for service connections
+ * CreateBranchDialog component (153 lines) for Git branch creation
+ * GitHubAuthDialog component (190 lines) for GitHub authentication
+ * PushToGitHubDialog component for repository operations
+ * RepositoryCard component (146 lines) for repository display
+ * Multiple additional connection components for NetlifyConnection and VercelConnection
+ * Settings infrastructure components (10+ files)
+ * GitHub issue templates and workflows configuration
+ * Complete documentation system
+
+ ## Improvements
+
+ * Application renamed from "CodinIT Desktop" to "CodinIT.dev"
+ * Updated branding across all files
+ * Changed references in documentation
+ * Updated README and configuration files
+ * Enhanced Docker configuration (110 line changes)
+ * Better build process
+ * Improved caching
+ * Production optimizations
+ * README.md major overhaul (67 line changes)
+ * Better project description
+ * Improved setup instructions
+ * Enhanced documentation
+ * Settings components redesign:
+ * ControlPanel: 378 line changes for better organization
+ * TabTile: 190 line changes for improved UX
+ * NetlifyConnection: 267 line changes for better integration
+ * VercelConnection: 141 line changes for enhanced functionality
+ * AvatarDropdown improvements (49 line changes)
+ * Better user menu
+ * Improved styling
+ * Component type definitions (40 line changes in types.ts)
+ * FAQ.md updates (30 line reduction for clarity)
+ * GitHub workflows reorganization:
+ * Added ci.yaml workflow (27 lines)
+ * Added pr-release-validation.yaml (31 lines)
+ * Added update-stable.yml (127 lines)
+ * Added semantic-pr.yaml (32 lines)
+ * Docker workflow enhancements (15 line changes)
+ * Docs workflow improvements (6 line changes)
+ * Electron workflow updates (14 line changes)
+ * Lighthouse configuration (10 line changes)
+ * Build scripts and actions updates:
+ * setup-and-build action: 8 line changes
+ * generate-changelog.sh: 2 line changes
+ * .env.example additions (4 line additions)
+ * Bug report template updates (4 line changes)
+ * Issue config improvements (9 line changes)
+
+ ## Bug fixes
+
+ * Fixed Docker repository name case issue
+ * Changed to lowercase for compatibility
+ * Added proper naming conventions
+ * Fixed Prettier formatting errors across codebase
+ * Applied consistent code formatting
+ * Fixed linting issues
+ * Fixed build errors in production mode
+ * Corrected build paths
+ * Fixed configuration issues
+
+ ## Cleanup
+
+ * Removed CODEOWNERS file (1 line)
+ * Removed CHANGES.md (92 lines, consolidated into CHANGELOG)
+ * Removed CLAUDE.md from root (238 lines, moved to project-specific location)
+ * Removed CONTRIBUTING.md (242 lines, consolidated)
+ * Removed PROJECT.md (54 lines, information integrated elsewhere)
+ * Removed deprecated GitHub workflows:
+ * preview.yaml (199 lines)
+ * quality.yaml (181 lines)
+ * security.yaml (121 lines)
+ * test-workflows.yaml (247 lines)
+ * Removed service integration component files (consolidated):
+ * ConnectionForm.tsx (193 lines, replaced)
+ * ConnectionTestIndicator.tsx (60 lines)
+ * ErrorState.tsx (102 lines)
+ * LoadingState.tsx (94 lines)
+ * ServiceHeader.tsx (72 lines)
+ * Service integration index (6 lines)
+
+ ## Documentation
+
+ * MkDocs configuration updates for better documentation structure
+ * README improvements for clarity and completeness
+ * FAQ.md streamlined for better user experience
+
+
+
+
+ This is the initial release of CodinIT.dev, establishing the foundation for an AI-powered application builder.
+
+
+ ## Core Application Structure
+
+ * Complete Remix-based application architecture
+ * React frontend with TypeScript
+ * Vite build system with optimized configuration
+ * UnoCSS for utility-first styling
+ * WebContainer integration for in-browser development
+
+ ## LLM Integration (19+ Providers)
+
+ * Multi-provider LLM system supporting:
+ * OpenAI (GPT-4, GPT-3.5)
+ * Anthropic (Claude models)
+ * Google (Gemini models)
+ * Amazon Bedrock
+ * Groq, xAI, DeepSeek, Cohere, Mistral
+ * Together AI, Perplexity, HuggingFace
+ * OpenRouter for unified access
+ * Ollama and LM Studio for local models
+ * Custom OpenAI-compatible endpoints
+ * Provider management and registration system
+ * Dynamic model selection
+ * API key management
+
+ ## Chat Interface
+
+ * AI-powered chat with streaming responses
+ * Message history and persistence
+ * Artifact generation and rendering
+ * Code block syntax highlighting
+ * Markdown rendering with GitHub flavored markdown
+ * Model and provider selection in chat
+
+ ## Workbench & Development Environment
+
+ * WebContainer-based in-browser development
+ * File tree with full CRUD operations
+ * Code editor with syntax highlighting
+ * Terminal integration for command execution
+ * Live preview with hot reloading
+ * Diff view for code changes
+
+ ## Deployment Integrations
+
+ * Netlify deployment integration
+ * Vercel deployment integration
+ * GitHub repository operations
+ * Git operations (commit, push, pull)
+
+ ## Electron Desktop Application
+
+ * Cross-platform desktop app support (macOS, Windows, Linux)
+ * Native window controls
+ * Auto-updater integration
+ * IPC communication system
+ * Cookie synchronization
+
+ ## Settings & Configuration
+
+ * Comprehensive settings panel
+ * Provider configuration
+ * API key management
+ * Theme customization (dark/light modes)
+ * Feature flags system
+
+ ## GitHub Actions & CI/CD
+
+ * Automated builds for web and desktop
+ * Docker image building and publishing
+ * Documentation deployment
+ * Release automation
+ * Code quality checks
+
+ ## Documentation
+
+ * Complete documentation site with MkDocs
+ * API documentation
+ * Setup guides
+ * Architecture documentation
+ * Contributing guidelines
+
+ ## Docker Support
+
+ * Development and production Docker configurations
+ * Docker Compose setup
+ * Multi-stage builds for optimization
+ * Environment-based configuration
+
+ ## Initial Feature Set
+
+ * Project templates and starters
+ * GitHub repository import
+ * Folder import for local projects
+ * Chat export and import
+ * Screenshot capture
+ * Supabase database integration
+ * Context optimization for long conversations
+ * File locks to prevent concurrent edits
+ * Prompt enhancement system
+
+ ## Configuration Files Added
+
+ * .env.example with all configuration options
+ * TypeScript configuration (tsconfig.json)
+ * Vite configuration for optimal builds
+ * UnoCSS configuration for styling
+ * ESLint and Prettier for code quality
+ * Git hooks with Husky
+ * GitHub issue and PR templates
+ * Docker and Docker Compose configurations
+ * Electron builder configuration
+ * Package.json with all dependencies
+
+ ## Assets & Icons
+
+ * Application icons for all platforms
+ * Provider logos and icons
+ * UI component icons
+ * Favicon and web app icons
+
+ ## Utilities & Helpers
+
+ * File system utilities
+ * Git integration utilities
+ * WebContainer helpers
+ * Diff utilities
+ * Import/export services
+ * Constants and type definitions
+
+
+---
+
+## Version Links
+
+[1.2.0]: https://github.com/codinit-dev/codinit-dev/compare/v1.1.26...v1.2.0
+[1.1.26]: https://github.com/codinit-dev/codinit-dev/compare/v1.1.25...v1.1.26
+[1.1.25]: https://github.com/codinit-dev/codinit-dev/compare/v1.1.24...v1.1.25
+[1.1.24]: https://github.com/codinit-dev/codinit-dev/compare/v1.1.23...v1.1.24
+[1.1.23]: https://github.com/codinit-dev/codinit-dev/compare/v1.1.22...v1.1.23
+[1.1.22]: https://github.com/codinit-dev/codinit-dev/compare/v1.1.20...v1.1.22
+[1.1.20]: https://github.com/codinit-dev/codinit-dev/compare/v1.1.19...v1.1.20
+[1.1.19]: https://github.com/codinit-dev/codinit-dev/compare/v1.1.18...v1.1.19
+[1.1.18]: https://github.com/codinit-dev/codinit-dev/compare/v1.1.17...v1.1.18
+[1.1.17]: https://github.com/codinit-dev/codinit-dev/compare/v1.1.16...v1.1.17
+[1.1.16]: https://github.com/codinit-dev/codinit-dev/compare/v1.1.15...v1.1.16
+[1.1.15]: https://github.com/codinit-dev/codinit-dev/compare/v1.1.14...v1.1.15
+[1.1.14]: https://github.com/codinit-dev/codinit-dev/compare/v1.1.13...v1.1.14
+[1.1.13]: https://github.com/codinit-dev/codinit-dev/compare/v1.1.12...v1.1.13
+[1.1.12]: https://github.com/codinit-dev/codinit-dev/compare/v1.1.11...v1.1.12
+[1.1.11]: https://github.com/codinit-dev/codinit-dev/compare/v1.1.10...v1.1.11
+[1.1.10]: https://github.com/codinit-dev/codinit-dev/compare/v1.0.10...v1.1.10
+[1.0.10]: https://github.com/codinit-dev/codinit-dev/compare/v1.0.9...v1.0.10
+[1.0.9]: https://github.com/codinit-dev/codinit-dev/compare/v1.0.8...v1.0.9
+[1.0.8]: https://github.com/codinit-dev/codinit-dev/compare/v1.0.7...v1.0.8
+[1.0.7]: https://github.com/codinit-dev/codinit-dev/compare/v1.0.6...v1.0.7
+[1.0.6]: https://github.com/codinit-dev/codinit-dev/compare/v1.0.5...v1.0.6
+[1.0.5]: https://github.com/codinit-dev/codinit-dev/compare/v1.0.4...v1.0.5
+[1.0.4]: https://github.com/codinit-dev/codinit-dev/compare/v1.0.3...v1.0.4
+[1.0.3]: https://github.com/codinit-dev/codinit-dev/compare/v1.0.2...v1.0.3
+[1.0.2]: https://github.com/codinit-dev/codinit-dev/compare/v1.0.1...v1.0.2
+[1.0.1]: https://github.com/codinit-dev/codinit-dev/compare/v1.0.0...v1.0.1
+[1.0.0]: https://github.com/codinit-dev/codinit-dev/compare/v0.9.7...v1.0.0
+[0.9.7]: https://github.com/codinit-dev/codinit-dev/compare/v0.9.6...v0.9.7
+[0.9.6]: https://github.com/codinit-dev/codinit-dev/compare/v0.9.5...v0.9.6
+[0.9.5]: https://github.com/codinit-dev/codinit-dev/compare/v0.9.4...v0.9.5
+[0.9.4]: https://github.com/codinit-dev/codinit-dev/compare/v0.9.3...v0.9.4
+[0.9.3]: https://github.com/codinit-dev/codinit-dev/compare/v0.9.2...v0.9.3
+[0.9.2]: https://github.com/codinit-dev/codinit-dev/compare/v0.9.1...v0.9.2
+[0.9.1]: https://github.com/codinit-dev/codinit-dev/compare/v0.9.0...v0.9.1
+[0.9.0]: https://github.com/codinit-dev/codinit-dev/compare/v0.8.0...v0.9.0
+[0.8.0]: https://github.com/codinit-dev/codinit-dev/compare/v0.7.0...v0.8.0
+[0.7.0]: https://github.com/codinit-dev/codinit-dev/compare/v0.6.0...v0.7.0
+[0.6.0]: https://github.com/codinit-dev/codinit-dev/compare/v0.5.0...v0.6.0
+[0.5.0]: https://github.com/codinit-dev/codinit-dev/compare/v0.4.0...v0.5.0
+[0.4.0]: https://github.com/codinit-dev/codinit-dev/compare/v0.2.0...v0.4.0
+[0.2.0]: https://github.com/codinit-dev/codinit-dev/compare/v0.1.0...v0.2.0
+[0.1.0]: https://github.com/codinit-dev/codinit-dev/releases/tag/v0.1.0
diff --git a/comparisons/bolt-vs-codinit.mdx b/comparisons/bolt-vs-codinit.mdx
index 770e361..f6a49a6 100644
--- a/comparisons/bolt-vs-codinit.mdx
+++ b/comparisons/bolt-vs-codinit.mdx
@@ -1,6 +1,6 @@
---
-title: "Bolt.DIY vs CodinIT.dev"
-description: "Compare Bolt.DIY and CodinIT.dev - two open-source AI coding assistants. Learn which platform best fits your development needs."
+title: "Bolt.DIY vs CodinIT"
+description: "Compare Bolt.DIY and CodinIT, two open-source AI coding assistants. Weigh features, LLM support, and AI code generation capabilities to choose the best AI-powered IDE for your development needs."
image: /assets/images/bolt-vs-codinit.png
---
@@ -17,29 +17,29 @@ image: /assets/images/bolt-vs-codinit.png
/>
-Both Bolt.DIY and CodinIT.dev are open-source AI-powered coding assistants designed to help developers build applications faster. This comparison highlights their key differences to help you choose the right tool for your workflow.
+Both Bolt.DIY and CodinIT.dev are open-source, AI-powered coding assistants that integrate LLMs to help developers build applications faster. This comparison highlights their key differences in AI features, performance, and code generation capabilities to help you choose the right development tool for your workflow.
-## Overview
+## AI coding assistant overview
-
- A developer-focused platform that emphasizes customization and runs in a browser sandbox using WebContainers. It offers extensive model and provider support with advanced tooling like a code diff viewer.
+
+ A developer-focused AI coding platform that emphasizes customization and runs in a browser sandbox using WebContainers. It offers extensive LLM provider support with advanced tooling, such as a code diff viewer and multi-model integration.
-
- A beginner-friendly tool with easy installation and local Node.js execution for better performance and native module support. Ideal for rapid prototyping and non-technical users.
+
+ A beginner-friendly AI coding assistant with easy installation and local Node.js execution for better performance and native module support. Ideal for rapid prototyping, AI code generation, and non-technical users seeking intelligent development tools.
-## Side-by-Side Comparison
+## AI coding assistant side-by-side comparison
-| Feature | Bolt.DIY | CodinIT.dev |
+| Feature | Bolt.DIY AI Capabilities | CodinIT.dev AI Features |
|--------------------------|--------------------------------------------------------------------------|------------------------------------------------------------------------|
-| **Target Audience** | Developers and power users seeking deep customization | Non-technical users and those wanting fast onboarding |
-| **Setup** | CLI-based with developer familiarity expected | Simple installer for quick start |
-| **Runtime** | Browser sandbox (WebContainers) – safer, limited native modules | Local Node.js – better performance and native module support |
-| **Model & Provider Support** | Broad support (OpenAI, Anthropic, etc.) – check repo for list | Model-agnostic with custom support on roadmap |
-| **Editor & Tooling** | Code diff viewer and advanced config options | Experimental in-browser editor |
-| **Best Use Case** | Customization, debugging, multi-provider experimentation | Rapid prototyping and easy local runs |
+| **Target Audience** | Developers and power users seeking deep AI customization and LLM control | Non-technical users and developers wanting fast onboarding and AI code generation |
+| **Setup** | CLI-based; developer familiarity expected for AI configuration | Simple installer for a quick start |
+| **Runtime** | Browser sandbox (WebContainers) – safer execution, limited native modules | Local Node.js – better performance and native module support |
+| **LLM & Provider Support** | Broad model support (OpenAI, Anthropic, and more) – check the repo for the full list | Model-agnostic, with custom LLM support on the roadmap |
+| **AI Editor & Tooling** | Code diff viewer and advanced config options | Experimental in-browser editor with AI code generation |
+| **Best AI Use Case** | Customization, debugging, and multi-LLM provider experimentation | Rapid AI-powered prototyping and easy local runs |
| **Source** | [Stackblitz-Labs/bolt.diy](https://github.com/Stackblitz-Labs/bolt.diy) | [codinit.dev](https://codinit.dev/) • [github.com/codinit-dev](https://github.com/codinit-dev) |
@@ -57,20 +57,20 @@ Runs directly on your local Node.js environment. This ensures better performance
### Model Providers
Both platforms support popular providers like OpenAI and Hugging Face, with goals toward model-agnostic architectures. Consult each project's README for configuration details and API key setup.
-## Choosing the Right Platform
+## Choosing the right AI coding platform
- **Choose CodinIT.dev if you prioritize:**
- - Quick, low-friction setup for non-engineers
- - Superior local performance and native module compatibility
- - Simple documentation and onboarding
+ **Choose CodinIT.dev if you prioritize:**
+ - Quick, low-friction setup for non-engineers and AI code generation
+ - Superior local performance and native module compatibility
+ - Simple documentation and fast onboarding
- **Choose Bolt.DIY if you need:**
- - Extensive customization and broader provider options
- - Developer tools like diff viewers and advanced configurations
- - Safe, sandboxed experimentation in the browser
+ **Choose Bolt.DIY if you need:**
+ - Extensive AI customization and broader LLM provider options
+ - Developer tools like diff viewers and multi-model configurations
+ - Safe, sandboxed experimentation in the browser with WebContainers
## Quick Links
@@ -78,10 +78,10 @@ Both platforms support popular providers like OpenAI and Hugging Face, with goal
- [Bolt.DIY GitHub](https://github.com/Stackblitz-Labs/bolt.diy)
- [Bolt.new (Upstream)](https://github.com/stackblitz/bolt.new)
- [CodinIT.dev Website](https://codinit.dev/)
-- [CodinIT.dev Download](https://github.com/Gerome-Elassaad/codinit-app)
+- [CodinIT.dev Download](https://github.com/codinit-dev/codinit-dev)
- [CodinIT.dev E2B SaaS](https://github.com/Gerome-Elassaad/codingit)
- [CodinIT.dev GitHub Org](https://github.com/codinit-dev)
- [WebContainers](https://webcontainers.io)
- [Node.js](https://nodejs.org)
- [OpenAI Docs](https://platform.openai.com/docs)
-- [Hugging Face Docs](https://huggingface.co/docs)
\ No newline at end of file
+- [Hugging Face Docs](https://huggingface.co/docs)
diff --git a/comparisons/lovable-vs-codinit.mdx b/comparisons/lovable-vs-codinit.mdx
index dbeabdd..9ec2557 100644
--- a/comparisons/lovable-vs-codinit.mdx
+++ b/comparisons/lovable-vs-codinit.mdx
@@ -1,6 +1,6 @@
---
-title: "Lovable.dev vs CodinIT.dev"
-description: "Detailed comparison between Lovable.dev and CodinIT.dev — two AI app builders with different approaches."
+title: "Lovable vs CodinIT"
+description: "Compare Lovable.dev vs CodinIT AI app builders. Detailed comparison of AI code generation, LLM features, and development approaches."
---
->Both Lovable.dev and CodinIT.dev are AI-powered app builders, but they take fundamentally different approaches to architecture, ownership, and developer control. This comparison helps you understand which platform aligns better with your needs.
+>Both Lovable.dev and CodinIT.dev are AI-powered app builders, but they take fundamentally different approaches to architecture, code ownership, and developer control. This comparison helps you understand which AI development platform aligns better with your needs.
- Lovable.dev and CodinIT.dev are both AI-powered app builders — but they differ greatly in architecture, ownership, and developer control.
+ Lovable.dev and CodinIT.dev are both AI-powered app builders with code generation — but they differ greatly in architecture, LLM integration, code ownership, and developer control.
-## Overview
+## AI app builder overview
-
- A proprietary, fully hosted cloud platform focused on speed and simplicity for rapid prototyping. Users own their generated code but the platform itself is closed-source.
+
+ A proprietary, fully hosted cloud AI platform focused on speed and simplicity for rapid AI-powered prototyping. Users own their AI-generated code but the platform itself is closed-source with managed LLM integration.
-
- An open-source, hybrid platform combining local execution (via WebContainers) with cloud sandboxes (via E2B SDK). Offers full transparency, self-hosting capabilities, and complete ownership.
+
+ An open-source, hybrid AI platform combining local execution (via WebContainers) with cloud sandboxes (via E2B SDK). Offers full transparency, self-hosting capabilities, complete code ownership, and flexible LLM provider integration for AI development.
-## Feature Comparison
+## AI feature comparison
-| Feature | Lovable.dev | CodinIT.dev |
+| Feature | Lovable.dev AI Capabilities | CodinIT.dev AI Features |
|--------------------------|-----------------------------------------------------------------------------|------------------------------------------------------------------------------|
-| **Platform Type** | Proprietary, fully hosted cloud platform | Open-source, hybrid (local + cloud) platform |
-| **Code Execution** | Runs in Lovable's managed cloud infrastructure | Hybrid: local in-browser via WebContainers or secure E2B cloud sandboxes |
-| **Code Ownership** | Users own generated code; platform closed-source | Full ownership and export rights; platform open-source |
-| **Flexibility** | React + Tailwind stack; Supabase/GitHub integration | Multi-framework: Next.js, Vue, Svelte, Python, etc. |
-| **Target Audience** | Non-technical founders and designers focused on rapid prototyping | Developers, startups seeking control, privacy, and flexibility |
-| **Collaboration** | Built-in team workspaces and user roles | Collaboration features in active development |
-| **Ecosystem** | Tight integrations with third-party cloud tools | Extensible with custom AI models and self-hosted options |
-| **Pricing Model** | Freemium with credit/message-based tiers | Free open-source core; paid tiers for cloud hosting or private AI agents |
+| **AI Platform Type** | Proprietary, fully hosted cloud AI platform with managed LLM integration | Open-source, hybrid (local + cloud) AI platform with flexible LLM providers |
+| **AI Code Execution** | Runs AI-generated code in Lovable's managed cloud infrastructure | Hybrid AI execution: local in-browser via WebContainers or secure E2B cloud sandboxes for code generation |
+| **AI Code Ownership** | Users own AI-generated code; platform closed-source with managed LLMs | Full ownership and export rights; open-source platform with custom LLM integration |
+| **AI Flexibility** | React + Tailwind stack with AI code generation; Supabase/GitHub integration | Multi-framework AI support: Next.js, Vue, Svelte, Python with flexible LLM providers |
+| **Target Audience** | Non-technical founders and designers focused on rapid AI-powered prototyping | Developers, startups seeking AI control, privacy, and LLM flexibility |
+| **AI Collaboration** | Built-in team workspaces with AI-assisted development and user roles | AI collaboration features in active development with intelligent code reviews |
+| **AI Ecosystem** | Tight integrations with third-party cloud tools and managed AI models | Extensible with custom LLM models, self-hosted AI options, and 19+ providers |
+| **Pricing Model** | Freemium with credit/message-based tiers for AI usage | Free open-source core; paid tiers for cloud hosting or private AI agents |
### Lovable.dev
@@ -99,7 +99,7 @@ CodinIT.dev is an open-source, hybrid AI development platform created as an alte
## Summary
-
+
**TL;DR:** Use Lovable.dev for a guided, user-friendly, fully hosted experience to quickly launch prototypes or MVPs. Use CodinIT.dev for full control, open-source transparency, and the ability to run code locally or in your own cloud.
diff --git a/docs.json b/docs.json
index 808aec3..faca2c5 100644
--- a/docs.json
+++ b/docs.json
@@ -5,7 +5,7 @@
"colors": {
"primary": "#3B82F6",
"light": "#F8FAFC",
- "dark": "#0F172A"
+ "dark": "#1D4ED8"
},
"favicon": "/favicon.ico",
"navigation": {
@@ -17,9 +17,9 @@
"href": "https://codinit.dev/blog"
},
{
- "anchor": "Releases",
- "icon": "github",
- "href": "https://github.com/Gerome-Elassaad/codinit-app/releases"
+ "anchor": "LinkedIn",
+ "icon": "linkedin",
+ "href": "https://www.linkedin.com/company/codinit-dev"
}
]
},
@@ -31,7 +31,10 @@
"group": "Introduction",
"icon": "book-open",
"expanded": false,
- "pages": ["/", "introduction/welcome"]
+ "pages": [
+ "/",
+ "introduction/welcome"
+ ]
},
{
"group": "Getting Started",
@@ -41,12 +44,18 @@
{
"group": "Setup",
"expanded": false,
- "pages": ["quickstart", "getting-started/installation"]
+ "pages": [
+ "quickstart",
+ "getting-started/installation"
+ ]
},
{
"group": "Configuration",
"expanded": false,
- "pages": ["getting-started/select-your-model", "getting-started/your-first-project"]
+ "pages": [
+ "getting-started/select-your-model",
+ "getting-started/your-first-project"
+ ]
}
]
},
@@ -66,10 +75,18 @@
"features/development/webcontainer",
"features/development/workbench"
]
+ },
+ {
+ "group": "AI Features",
+ "expanded": false,
+ "pages": [
+ "essentials/ai-chat-commands",
+ "essentials/project-templates"
+ ]
}
]
},
- {
+ {
"group": "Providers",
"icon": "cloud",
"expanded": false,
@@ -105,7 +122,10 @@
"group": "Comparisons",
"icon": "scale",
"expanded": false,
- "pages": ["comparisons/bolt-vs-codinit", "comparisons/lovable-vs-codinit"]
+ "pages": [
+ "comparisons/bolt-vs-codinit",
+ "comparisons/lovable-vs-codinit"
+ ]
},
{
"group": "Prompting",
@@ -115,12 +135,19 @@
{
"group": "Techniques",
"expanded": false,
- "pages": ["prompting/discussion-mode", "prompting/prompt-engineering-guide", "prompting/prompting-effectively"]
+ "pages": [
+ "prompting/discussion-mode",
+ "prompting/prompt-engineering-guide",
+ "prompting/prompting-effectively"
+ ]
},
{
"group": "Optimization",
"expanded": false,
- "pages": ["prompting/maximize-token-efficiency", "prompting/plan-your-app"]
+ "pages": [
+ "prompting/maximize-token-efficiency",
+ "prompting/plan-your-app"
+ ]
}
]
},
@@ -128,32 +155,36 @@
"group": "Model Configuration",
"icon": "settings",
"expanded": false,
- "pages": ["model-config/context-windows", "model-config/model-comparison"]
+ "pages": [
+ "model-config/context-windows",
+ "model-config/model-comparison"
+ ]
},
{
"group": "Hosting",
"icon": "globe",
"expanded": false,
- "pages": ["integrations/vercel", "integrations/netlify", "integrations/cloudflare"]
- },
- {
- "group": "Local Providers",
- "expanded": false,
- "pages": ["providers/lmstudio", "providers/ollama"]
+ "pages": [
+ "integrations/vercel",
+ "integrations/netlify",
+ "integrations/cloudflare"
+ ]
},
{
"group": "Running Models Locally",
"icon": "cpu",
"expanded": false,
"pages": [
- "running-models-locally/lm-studio",
- "running-models-locally/local-model-setup"
+ "running-models-locally/local-model-setup",
+ "providers/lmstudio",
+ "providers/ollama"
]
}
]
},
{
"tab": "Essentials",
+ "hidden": true,
"groups": [
{
"group": "Essentials",
@@ -163,12 +194,17 @@
{
"group": "AI Features",
"expanded": false,
- "pages": ["essentials/ai-chat-commands", "essentials/project-templates"]
+ "pages": [
+ "essentials/ai-chat-commands",
+ "essentials/project-templates"
+ ]
},
{
"group": "Configuration",
"expanded": false,
- "pages": ["essentials/customization"]
+ "pages": [
+ "essentials/customization"
+ ]
}
]
},
@@ -176,7 +212,12 @@
"group": "Integrations",
"icon": "plug",
"expanded": false,
- "pages": ["integrations/deployments", "integrations/git", "integrations/supabase"]
+ "pages": [
+ "integrations/deployments",
+ "integrations/git",
+ "integrations/supabase",
+ "mcp/mcp-overview"
+ ]
}
]
},
@@ -187,21 +228,31 @@
"group": "Support",
"icon": "life-buoy",
"expanded": false,
- "pages": ["support/frequently-asked-questions", "support/integration-issues", "support/troubleshooting"]
+ "pages": [
+ "support/frequently-asked-questions",
+ "support/integration-issues",
+ "support/troubleshooting",
+ "changelog"
+ ]
}
]
+ },
+ {
+ "tab": "API Reference",
+ "openapi": "openapi.json"
}
]
},
+ "redirects": [],
"logo": {
- "light": "/logo/logo-dark.webp",
- "dark": "/logo/icon.webp"
+ "light": "/logo/logo.svg",
+ "dark": "/logo/logo.png"
},
"navbar": {
"links": [
{
"label": "Issues",
- "href": "https://github.com/Gerome-Elassaad/codinit-app"
+ "href": "https://github.com/codinit-dev/codinit-dev"
}
],
"primary": {
@@ -210,18 +261,65 @@
"href": "https://codinit.dev/download"
}
},
+ "search": {
+ "prompt": "Search Docs..."
+ },
"contextual": {
"options": ["copy", "view", "chatgpt", "claude", "perplexity", "mcp", "cursor", "vscode"]
},
"footer": {
"socials": {
- "github": "https://github.com/codinit-dev"
- }
+ "github": "https://github.com/codinit-dev",
+ "linkedin": "https://www.linkedin.com/company/codinit-dev"
+ },
+ "links": [
+ {
+ "header": "Resources",
+ "items": [
+ {
+ "label": "Documentation",
+ "href": "https://codinit.dev/docs"
+ },
+ {
+ "label": "Changelog",
+ "href": "https://github.com/codinit-dev/codinit-dev/releases"
+ },
+ {
+ "label": "Blog",
+ "href": "https://codinit.dev/blog"
+ }
+ ]
+ },
+ {
+ "header": "Company",
+ "items": [
+ {
+ "label": "About",
+ "href": "https://codinit.dev/about-us"
+ },
+ {
+ "label": "Privacy Policy",
+ "href": "https://codinit.dev/privacy"
+ },
+ {
+ "label": "Terms of Service",
+ "href": "https://codinit.dev/terms"
+ }
+ ]
+ }
+ ]
},
"seo": {
"metatags": {
- "canonical": "https://codinit.dev/docs"
- }
+ "canonical": "https://codinit.dev/docs",
+ "og:site_name": "CodinIT.dev Documentation",
+ "og:type": "website",
+ "og:image": "https://codinit.dev/hero.png",
+ "twitter:card": "summary_large_image",
+ "twitter:site": "@codinit_dev",
+ "twitter:creator": "@codinit_dev"
+ },
+ "indexing": "navigable"
},
"styling": {
"eyebrows": "breadcrumbs",
@@ -235,6 +333,12 @@
"integrations": {
"telemetry": {
"enabled": true
+ },
+ "ga4": {
+ "measurementId": "G-8NNCCEN53X"
+ },
+ "amplitude": {
+ "apiKey": "XUGBJhd1AGBmq-CUnMJCELx6i_aKEHB6"
}
},
"appearance": {
@@ -244,6 +348,9 @@
"interaction": {
"drilldown": false
},
+ "metadata": {
+ "timestamp": false
+ },
"fonts": {
"heading": {
"family": "Inter",
diff --git a/essentials/ai-chat-commands.mdx b/essentials/ai-chat-commands.mdx
index 87f5194..43b542a 100644
--- a/essentials/ai-chat-commands.mdx
+++ b/essentials/ai-chat-commands.mdx
@@ -1,6 +1,6 @@
---
title: 'AI Chat & Commands'
-description: 'Master CodinIT AI chat interface and commands to accelerate your development workflow'
+description: 'Master CodinIT AI chat interface and commands to accelerate development with natural language prompts and powerful shortcuts.'
---
### Opening the Chat Interface
@@ -237,17 +237,17 @@ Different models excel at different tasks:
- Refactoring large codebases
- Architectural decisions
- **Recommended model:** Claude 4.5 Sonnet
+ **Recommended model:** Claude 3.5 Sonnet
-
+
**Best for:**
- General-purpose coding
- Quick iterations
- Documentation generation
- Code completion
- **Recommended model:** GPT-5
+ **Recommended model:** GPT-4o
diff --git a/essentials/customization.mdx b/essentials/customization.mdx
index 6ec90c8..d2cf6ac 100644
--- a/essentials/customization.mdx
+++ b/essentials/customization.mdx
@@ -1,587 +1,491 @@
---
-title: 'Customization & Settings'
-description: 'Customize CodinIT IDE, preferences, themes, and configuration to match your workflow'
+title: 'Customization & Settings'
+description: 'Customize CodinIT with the design palette, AI providers, and application preferences'
---
## Overview
-CodinIT is highly customizable to match your development workflow and preferences. From themes and keybindings to AI behavior and editor settings, you can tailor every aspect of the IDE.
+CodinIT is a web-based AI-powered development platform with extensive customization options. Personalize your color palette, configure AI providers, manage connections, and adjust application behavior through the Control Panel.
-## Accessing Settings
+### Control Panel
-
-
- **Menu**: CodinIT → Preferences → Settings
+The Control Panel is your central hub for all customization:
- **Keyboard**: `Cmd + ,`
-
-
- **Menu**: File → Preferences → Settings
+1. Click your **profile avatar** in the top-right corner
+2. Select **Settings** from the dropdown menu
+3. Browse settings categories or use the search bar
- **Keyboard**: `Ctrl + ,`
-
-
- **Menu**: File → Preferences → Settings
+
+The Control Panel has two modes: **User** (simplified) and **Developer** (advanced). Toggle between modes based on your needs.
+
+
+### Keyboard Shortcuts
+
+- **Toggle Theme**: `Cmd/Ctrl + Alt + Shift + D`
+- **Toggle Terminal**: `` Cmd/Ctrl + ` ``
+
+## Design Palette (Experimental)
+
+
+The Design Palette guides AI in creating designs that match your brand and aesthetic preferences. Customize colors, typography, and visual features for AI-generated interfaces.
+
- **Keyboard**: `Ctrl + ,`
-
-
- Open Command Palette (`Cmd/Ctrl + Shift + P`) and type "Open Settings"
-
-
+### Accessing the Design Palette
-## Theme Customization
+Click the **palette icon** in the chat interface to open the Design System dialog.
-### Built-in Themes
+### Color Customization
-CodinIT includes carefully crafted themes optimized for extended coding sessions:
+Control 11 color roles with separate light and dark mode palettes:
+
+
+
+ - **Primary**: Main brand color for primary buttons, active links, and key interactive elements
+ - **Secondary**: Supporting brand color for secondary buttons, inactive states, and complementary elements
+ - **Accent**: Highlight color for badges, notifications, focus states, and call-to-action elements
+
+
+
+ - **Background**: Page backdrop for the main application/website background
+ - **Surface**: Elevated content areas for cards, modals, dropdowns, and panels
+ - **Text**: Primary text for headings, body text, and main readable content
+ - **Text Secondary**: Muted text for captions, placeholders, timestamps, and less important information
+ - **Border**: Separators for input borders, dividers, table lines, and element outlines
+
+
+
+ - **Success**: Positive feedback for success messages, completed states, and positive indicators
+ - **Warning**: Caution alerts for warning messages, pending states, and attention-needed indicators
+ - **Error**: Error states for error messages, failed states, and destructive action indicators
+
+
+
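+As a concrete sketch, the 11 color roles map naturally onto a pair of light and dark token sets. The JSON below is a hypothetical example with illustrative values, not the Design Palette's actual export schema:
+
+```json
+{
+  "light": {
+    "primary": "#3B82F6",
+    "secondary": "#64748B",
+    "accent": "#F59E0B",
+    "background": "#FFFFFF",
+    "surface": "#F8FAFC",
+    "text": "#0F172A",
+    "textSecondary": "#64748B",
+    "border": "#E2E8F0",
+    "success": "#22C55E",
+    "warning": "#EAB308",
+    "error": "#EF4444"
+  },
+  "dark": {
+    "primary": "#60A5FA",
+    "secondary": "#94A3B8",
+    "accent": "#FBBF24",
+    "background": "#0F172A",
+    "surface": "#1E293B",
+    "text": "#F8FAFC",
+    "textSecondary": "#94A3B8",
+    "border": "#334155",
+    "success": "#4ADE80",
+    "warning": "#FACC15",
+    "error": "#F87171"
+  }
+}
+```
+
+Keeping the same role names in both palettes lets a single UI component reference one token while the value swaps with the active theme.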
+### Typography Options
+
+Choose from 9 professionally selected fonts:
+
+- **Inter** - Default, modern sans-serif
+- **Roboto** - Google's popular sans-serif
+- **Open Sans** - Friendly and readable
+- **Montserrat** - Geometric sans-serif
+- **Poppins** - Geometric with playful character
+- **Lato** - Humanist sans-serif
+- **JetBrains Mono** - Monospace for code
+- **Raleway** - Elegant display font
+- **Lora** - Serif for traditional look
+
+
+Select multiple fonts for fallback support. The first font will be primary, with others serving as backups.
+
+
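+For example, selecting several fonts could translate into a font-stack setting like this (a hypothetical format; the first entry is the primary font and the rest act as fallbacks):
+
+```json
+{
+  "typography": {
+    "fontFamily": ["Inter", "Roboto", "Open Sans", "sans-serif"]
+  }
+}
+```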
+### Design Features
+
+Toggle visual design features to match your aesthetic:
-
- - **CodinIT Dark** (Default)
- - **Midnight Blue**
- - **Dracula Pro**
- - **Nord Dark**
- - **Tokyo Night**
+
+ Adds smooth, rounded corners to elements for a modern, friendly appearance
-
- - **CodinIT Light**
- - **GitHub Light**
- - **Solarized Light**
- - **Catppuccin Latte**
+
+ Adds refined borders to elements for clear visual separation
-
- - **High Contrast Dark**
- - **High Contrast Light**
+
+ Applies gradient effects to accent elements for visual depth
-
- - Import VSCode themes
- - Create your own
- - Share with community
+
+ Adds subtle shadows for elevation and depth perception
+
+
+
+ Creates translucent, blurred backgrounds for modern aesthetics
-### Changing Your Theme
+### Styling Controls
-1. Open Settings (`Cmd/Ctrl + ,`)
-2. Navigate to **Appearance → Color Theme**
-3. Preview and select your preferred theme
-4. Theme changes apply instantly
+Fine-tune visual appearance:
-
-Use `Cmd/Ctrl + K + T` to quickly preview and switch between themes without opening settings.
-
+**Border Radius**
+- None (0px)
+- Small (0.25rem)
+- Medium (0.375rem) - Default
+- Large (0.5rem)
+- Extra Large (0.75rem)
+- Full (9999px) - Pill-shaped
+
+**Shadow**
+- None
+- Small
+- Medium
+- Large
+- Extra Large
+
+**Spacing**
+- Tight (0.75rem)
+- Normal (1rem) - Default
+- Relaxed (1.25rem)
+- Loose (1.5rem)
+
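+Taken together, the three controls above amount to a small set of styling tokens. A hypothetical settings fragment might look like this (illustrative keys only):
+
+```json
+{
+  "styling": {
+    "borderRadius": "0.375rem",
+    "shadow": "medium",
+    "spacing": "1rem"
+  }
+}
+```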
+### Live Preview
-### Custom Theme Creation
-
-Create your own color scheme:
-
-```json settings.json
-{
- "workbench.colorCustomizations": {
- "editor.background": "#1a1b26",
- "editor.foreground": "#a9b1d6",
- "activityBar.background": "#16161e",
- "sideBar.background": "#1a1b26",
- "statusBar.background": "#16161e"
- },
- "editor.tokenColorCustomizations": {
- "textMateRules": [
- {
- "scope": "keyword",
- "settings": {
- "foreground": "#bb9af7"
- }
- }
- ]
- }
-}
-```
-
-## Editor Settings
-
-### Font Configuration
-
-Customize your code font for optimal readability:
-
-```json settings.json
-{
- "editor.fontFamily": "JetBrains Mono, Fira Code, Menlo, Monaco, monospace",
- "editor.fontSize": 14,
- "editor.lineHeight": 1.6,
- "editor.fontLigatures": true,
- "editor.fontWeight": "400"
-}
-```
+The Design Palette includes a real-time preview with:
+
+- Hero sections with your color scheme
+- Button styles (primary and secondary variants)
+- Typography hierarchy
+- Pricing cards with features
+- FAQ sections
+- Stats displays
+
+Changes apply instantly, so you can experiment before saving.
+
+
+The Design Palette is experimental. Preferences are stored locally and not synced across devices.
+
+
+### Preset Themes
+
+11 preset design themes to jumpstart customization:
+
+- **Minimal** - Clean, distraction-free design
+- **Modern** - Contemporary with bold accents
+- **Carbon** - IBM Carbon-inspired system
+- **Material** - Google Material Design principles
+- **Flat** - Flat design with vibrant colors
+- **Neobrutalism** - Bold borders and stark contrasts
+- **Glassmorphism** - Frosted glass effects
+- **Claymorphism** - Soft, 3D clay-like elements
+- **Retro** - Nostalgic design patterns
+- **Neumorphism** - Soft UI with subtle shadows
+- **Cyberpunk** - Futuristic neon aesthetics
+
+## Control Panel Settings
+
+### Profile
+
+Manage your account and personal information:
+
+- Username and display name
+- Bio and avatar
+- Theme preference (Light/Dark/System)
+- Language and timezone
+- Notification preferences
+
+### Features
+
+Toggle experimental features and new capabilities:
+
+- **Auto-select template**: Automatically select templates for new projects
+- **Context optimization**: Optimize AI context window usage
+- **Developer mode**: Enable advanced developer features and tools
+- **Event logs**: Enable detailed system event logging
-Font ligatures combine characters like `=>`, `!=`, `>=` into single glyphs for better readability.
+Features marked as experimental may change or be removed in future versions.
-### Popular Developer Fonts
-
-- **JetBrains Mono** - Modern, developer-focused
-- **Fira Code** - Open-source with ligatures
-- **Cascadia Code** - Microsoft's font with ligatures
-- **Monaspace** - GitHub's font superfamily
-- **Victor Mono** - Cursive italics for code
-
-### Editor Behavior
-
-```json settings.json
-{
- "editor.tabSize": 2,
- "editor.insertSpaces": true,
- "editor.wordWrap": "on",
- "editor.minimap.enabled": true,
- "editor.minimap.maxColumn": 120,
- "editor.bracketPairColorization.enabled": true,
- "editor.guides.bracketPairs": true,
- "editor.formatOnSave": true,
- "editor.formatOnPaste": true,
- "editor.codeActionsOnSave": {
- "source.fixAll": true,
- "source.organizeImports": true
- }
-}
-```
-
-## AI Configuration
-
-### Default AI Provider
-
-Set your preferred AI provider globally:
-
-```json settings.json
-{
- "codinit.ai.defaultProvider": "anthropic",
- "codinit.ai.defaultModel": "claude-3-5-sonnet",
- "codinit.ai.temperature": 0.7,
- "codinit.ai.maxTokens": 4096
-}
-```
-
-### AI Behavior Settings
+### cloud providers
-
-
- Configure when AI suggestions appear
-
- ```json
- {
- "codinit.ai.inlineSuggestions": {
- "enabled": true,
- "delay": 300,
- "triggerCharacters": [".", "(", "{", "["]
- }
- }
- ```
-
+Configure cloud-based AI providers:
-
- Control how much code context AI can see
-
- ```json
- {
- "codinit.ai.contextWindow": {
- "includeOpenFiles": true,
- "maxFiles": 10,
- "includeGitChanges": true
- }
- }
- ```
-
+
+
+ Claude models with extended thinking
+
-
- Add project-specific AI instructions
-
- ```json
- {
- "codinit.ai.customInstructions": [
- "Always use TypeScript strict mode",
- "Follow React best practices",
- "Include error handling",
- "Write comprehensive tests"
- ]
- }
- ```
-
+
+ GPT-4 and o1 reasoning models
+
-
- Configure automated code review
-
- ```json
- {
- "codinit.ai.codeReview": {
- "enabled": true,
- "onSave": false,
- "onCommit": true,
- "checkFor": ["security", "performance", "style"]
- }
- }
- ```
-
-
+
+ Gemini Pro and Flash models
+
+
+
+ Mistral and Codestral models
+
+
+
+ Ultra-fast inference platform
+
+
+
+ DeepSeek reasoning models
+
+
-## Keybindings
-
-### Default Keybindings
-
-
-
- | Action | Shortcut |
- |--------|----------|
- | Open AI Chat | `Cmd + K` |
- | Inline AI Edit | `Cmd + I` |
- | Command Palette | `Cmd + Shift + P` |
- | Quick File Open | `Cmd + P` |
- | Terminal Toggle | `` Cmd + ` `` |
- | Multi-cursor | `Cmd + D` |
- | Find in Files | `Cmd + Shift + F` |
- | Go to Definition | `F12` |
- | Format Document | `Shift + Alt + F` |
-
-
-
- | Action | Shortcut |
- |--------|----------|
- | Open AI Chat | `Ctrl + K` |
- | Inline AI Edit | `Ctrl + I` |
- | Command Palette | `Ctrl + Shift + P` |
- | Quick File Open | `Ctrl + P` |
- | Terminal Toggle | `` Ctrl + ` `` |
- | Multi-cursor | `Ctrl + D` |
- | Find in Files | `Ctrl + Shift + F` |
- | Go to Definition | `F12` |
- | Format Document | `Shift + Alt + F` |
-
-
-
-### Custom Keybindings
-
-Create your own shortcuts:
-
-```json keybindings.json
-[
- {
- "key": "cmd+shift+r",
- "command": "codinit.refactorSelection",
- "when": "editorTextFocus"
- },
- {
- "key": "cmd+shift+t",
- "command": "codinit.generateTests",
- "when": "editorTextFocus"
- },
- {
- "key": "cmd+shift+d",
- "command": "codinit.generateDocs",
- "when": "editorTextFocus"
- }
-]
-```
-
-## Workspace Settings
-
-### Project-Specific Configuration
-
-Create `.codinit/settings.json` in your project root:
-
-```json .codinit/settings.json
-{
- "codinit.ai.provider": "openai",
- "codinit.ai.model": "GPT-5-turbo",
- "editor.formatOnSave": true,
- "editor.codeActionsOnSave": {
- "source.organizeImports": true
- },
- "typescript.preferences.importModuleSpecifier": "relative",
- "eslint.enable": true,
- "prettier.enable": true
-}
-```
+**Configuration per provider:**
+- Enable/disable provider
+- API key management
+- Base URL (for compatible providers)
+- Model selection
+
+### local providers
+
+Configure local AI inference:
+
+- **Ollama**: Run models locally via Ollama
+- **LM Studio**: Desktop application for local models
+- **OpenAI-compatible**: Custom OpenAI-compatible endpoints
+
+**Local provider settings:**
+- Automatic detection of running providers
+- Custom base URL configuration
+- Real-time connection status
+- Model discovery and listing
+- Enable/disable toggle
+
+
+CodinIT automatically detects when Ollama (port 11434) or LM Studio (port 1234) is running on your machine. Detected providers are automatically enabled and appear in your model selector. Connection status updates in real-time.
+
+
+**Auto-detection features:**
+- Scans for local providers on startup
+- Automatically enables detected providers
+- Disables providers when they go offline
+- Shows loading states during detection
+- Integrates seamlessly with model selection
+
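The detection flow described above can be sketched as a simple port probe. This is an illustrative sketch, not CodinIT's actual implementation: the default ports (11434 for Ollama, 1234 for LM Studio) come from the note above, while the `probe` callback and function names are hypothetical.

```javascript
// Default local provider ports, as documented above.
const LOCAL_PROVIDERS = [
  { name: "Ollama", port: 11434 },
  { name: "LM Studio", port: 1234 },
];

// Hypothetical sketch: `probe` is injected so the logic stays testable.
// In a browser it might wrap a fetch to `http://localhost:${port}`.
function detectLocalProviders(probe, providers = LOCAL_PROVIDERS) {
  return providers
    .filter(({ port }) => probe(port)) // keep only reachable providers
    .map(({ name, port }) => ({ name, port, enabled: true }));
}
```

Injecting the probe makes the behavior easy to verify: with only port 11434 reachable, just Ollama comes back enabled.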
+### API keys
+
+Centralized API key management for all AI providers:
+
+- Add, edit, or remove API keys
+- Test key validity
+- Export keys for backup
+- Import keys from backup
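The export/import round trip above can be sketched as plain JSON serialization. This is a hypothetical illustration — the `version` field, function names, and masking format are assumptions, not CodinIT's actual backup schema:

```javascript
// Hypothetical sketch of the API key export/import round trip.
// Real keys live in localStorage; here a plain object stands in.
function exportKeys(keys) {
  // Produce a JSON backup of all provider keys.
  return JSON.stringify({ version: 1, keys });
}

function importKeys(backup) {
  const parsed = JSON.parse(backup);
  if (parsed.version !== 1) throw new Error("Unsupported backup version");
  return parsed.keys;
}

// Mask a key for display, e.g. in a key list.
function maskKey(key) {
  return key.length <= 8 ? "****" : key.slice(0, 4) + "…" + key.slice(-4);
}
```

Masking on display keeps full keys out of screenshots and screen shares even though the stored value is complete.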
-Workspace settings override user settings. This is perfect for team consistency.
+API keys are stored in browser localStorage and are never synced to a server. Use a separate browser profile if you want to isolate your keys from other browsing.
-### Recommended Extensions
-
-Configure suggested extensions for your project:
-
-```json .codinit/extensions.json
-{
- "recommendations": [
- "dbaeumer.vscode-eslint",
- "esbenp.prettier-vscode",
- "bradlc.vscode-tailwindcss",
- "prisma.prisma"
- ]
-}
-```
-
-## Language-Specific Settings
-
-### TypeScript/JavaScript
-
-```json settings.json
-{
- "[typescript]": {
- "editor.defaultFormatter": "esbenp.prettier-vscode",
- "editor.formatOnSave": true,
- "editor.codeActionsOnSave": {
- "source.organizeImports": true
- }
- },
- "[javascript]": {
- "editor.defaultFormatter": "esbenp.prettier-vscode"
- },
- "typescript.updateImportsOnFileMove.enabled": "always",
- "typescript.suggest.autoImports": true,
- "javascript.suggest.autoImports": true
-}
-```
-
-### Python
-
-```json settings.json
-{
- "[python]": {
- "editor.defaultFormatter": "ms-python.black-formatter",
- "editor.formatOnSave": true,
- "editor.codeActionsOnSave": {
- "source.organizeImports": true
- }
- },
- "python.linting.enabled": true,
- "python.linting.pylintEnabled": true,
- "python.formatting.provider": "black"
-}
-```
-
-### Other Languages
-
-
-
- ```json
- {
- "[go]": {
- "editor.formatOnSave": true,
- "editor.codeActionsOnSave": {
- "source.organizeImports": true
- }
- }
- }
- ```
-
-
-
- ```json
- {
- "[rust]": {
- "editor.defaultFormatter": "rust-lang.rust-analyzer",
- "editor.formatOnSave": true
- }
- }
- ```
-
-
-
- ```json
- {
- "[java]": {
- "editor.defaultFormatter": "redhat.java",
- "editor.formatOnSave": true
- }
- }
- ```
-
-
-
-## Terminal Customization
-
-### Terminal Appearance
-
-```json settings.json
-{
- "terminal.integrated.fontSize": 13,
- "terminal.integrated.fontFamily": "JetBrains Mono, Menlo, Monaco, monospace",
- "terminal.integrated.lineHeight": 1.4,
- "terminal.integrated.cursorStyle": "line",
- "terminal.integrated.cursorBlinking": true,
- "terminal.integrated.defaultProfile.osx": "zsh",
- "terminal.integrated.defaultProfile.windows": "PowerShell"
-}
-```
-
-### Shell Integration
-
-```json settings.json
-{
- "terminal.integrated.shellIntegration.enabled": true,
- "terminal.integrated.shellIntegration.showWelcome": false,
- "terminal.integrated.env.osx": {
- "FIG_NEW_SESSION": "1"
- }
-}
-```
-
-## Git Integration
-
-### Git Configuration
-
-```json settings.json
-{
- "git.autofetch": true,
- "git.confirmSync": false,
- "git.enableSmartCommit": true,
- "git.autoStash": true,
- "git.showProgress": true,
- "git.suggestSmartCommit": true,
- "gitlens.codeLens.enabled": true,
- "gitlens.currentLine.enabled": true
-}
-```
-
-## Performance Settings
-
-### Optimize for Large Projects
-
-```json settings.json
-{
- "files.exclude": {
- "**/.git": true,
- "**/node_modules": true,
- "**/.DS_Store": true,
- "**/dist": true,
- "**/build": true
- },
- "search.exclude": {
- "**/node_modules": true,
- "**/dist": true,
- "**/.next": true
- },
- "files.watcherExclude": {
- "**/node_modules/**": true,
- "**/.git/objects/**": true
- },
- "typescript.tsserver.maxTsServerMemory": 4096
-}
-```
-
-## Sync Settings Across Devices
-
-### Settings Sync
-
-Enable settings sync to use your configuration across multiple machines:
-
-1. Open Command Palette (`Cmd/Ctrl + Shift + P`)
-2. Type "Settings Sync: Turn On"
-3. Sign in with GitHub or Microsoft account
-4. Select what to sync:
- - Settings
- - Keybindings
- - Extensions
- - UI State
- - Snippets
+### connections
+
+Manage integrations with external services:
+
+**github**
+- Authenticate with GitHub
+- Browse repositories
+- Clone templates
+- Push local projects
+
+**vercel**
+- Connect Vercel account
+- Deploy projects
+- View deployment status
+
+**netlify**
+- Connect Netlify account
+- Deploy sites
+- Manage domains
+
+**cloudflare**
+- Configure Cloudflare Pages
+- Deploy applications
+
+**diagnostics**
+- Test connection status
+- View connection logs
+- Troubleshoot issues
+
+### data management
+
+Control how CodinIT handles your data:
+
+- View storage usage
+- Export chat history
+- Clear conversation data
+- Download project files
+- Manage local storage
+
+
+All data is stored locally in your browser. Clearing browser data will remove all CodinIT data.
+
+
+### notifications
+
+Configure notification preferences:
+
+- System notifications
+- Deployment status updates
+- Error alerts
+- Feature announcements
+
+### service status
+
+Monitor AI provider availability:
+
+- Real-time service status for 19+ providers
+- Uptime monitoring
+- Incident reports
+- Historical data
+
+### event logs
+
+View detailed system events:
+
+- AI model interactions
+- WebContainer operations
+- File system changes
+- Network requests
+- Error tracking
-Settings sync is encrypted and stored securely in the cloud.
+Enable event logs in Features settings for detailed debugging information.
-## Import/Export Settings
+### debug
-### Export Your Configuration
+Advanced debugging tools:
-```bash
-# Export all settings
-codinit --export-settings ~/my-codinit-config.json
+- System diagnostics
+- Memory usage
+- Browser information
+- WebContainer status
+- Console logs
+- Performance metrics
-# Export specific workspace settings
-codinit --export-workspace ./.codinit
-```
+### updates
-### Import Configuration
+Check for CodinIT updates:
-```bash
-# Import from file
-codinit --import-settings ~/my-codinit-config.json
+- Current version information
+- Available updates
+- Release notes
+- Auto-update configuration
-# Import from URL
-codinit --import-settings https://example.com/team-config.json
-```
+### task manager
-## Team Settings
+Monitor system resources:
-### Share Configuration with Your Team
+- CPU usage
+- Memory consumption
+- Active processes
+- WebContainer status
+- Terminal sessions
-Create a shared configuration repository:
+### tab management
-```
-team-codinit-config/
-├── settings.json # Shared editor settings
-├── keybindings.json # Custom shortcuts
-├── extensions.json # Required extensions
-├── ai-instructions.md # AI coding guidelines
-└── README.md # Setup instructions
-```
+Customize Control Panel tabs:
-Team members can clone and import:
+- Show/hide tabs
+- Reorder tabs
+- Reset to defaults
+- Separate User and Developer modes
-```bash
-git clone https://github.com/your-org/team-codinit-config
-codinit --import-settings ./team-codinit-config
-```
+
+Drag and drop tabs to reorder them. Changes save automatically.
+
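Drag-and-drop reordering boils down to moving one array element. A minimal sketch, where the function name and tab identifiers are hypothetical:

```javascript
// Move the tab at index `from` so it lands at index `to`,
// returning a new array (the original is left untouched).
function moveTab(tabs, from, to) {
  const next = [...tabs];
  const [moved] = next.splice(from, 1); // lift the dragged tab out
  next.splice(to, 0, moved);            // drop it at its new position
  return next;
}
```

Returning a new array rather than mutating in place fits the auto-save behavior: the reordered list can be persisted as soon as the drop completes.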
-## Troubleshooting Settings
+### light and dark modes
-### Reset to Defaults
+CodinIT supports three theme modes:
-If something goes wrong, reset settings:
+- **Light**: Optimized for bright environments
+- **Dark**: Reduces eye strain in low light (Default)
+- **System**: Follows operating system preference
-1. **Backup current settings** (optional)
-2. Open Command Palette
-3. Type "Reset Settings to Default"
-4. Confirm reset
+**Change theme:**
+1. Open Control Panel → Profile
+2. Select your theme preference
+
+Alternatively, press `Cmd/Ctrl + Alt + Shift + D` to toggle the theme.
-### Common Issues
+### theme persistence
-
-
- - Check for syntax errors in settings.json
- - Reload window: `Cmd/Ctrl + Shift + P` → "Reload Window"
- - Check for conflicting extensions
-
+Theme preference is stored in browser localStorage and persists across sessions.
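Resolving the three modes to an effective theme is a small piece of logic. A minimal sketch, where the `systemPrefersDark` flag stands in for the browser's `prefers-color-scheme` media query and the function name is an assumption:

```javascript
// Resolve a stored preference ("light" | "dark" | "system") to the
// theme actually applied. Dark is the documented default when
// nothing has been stored yet.
function resolveTheme(stored, systemPrefersDark) {
  const pref = stored ?? "dark";
  if (pref === "system") return systemPrefersDark ? "dark" : "light";
  return pref;
}
```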
-
- - Check for conflicts in keybindings.json
- - Ensure "when" conditions are correct
- - Disable conflicting extensions temporarily
-
+## keyboard shortcuts
-
- - Verify settings.json syntax
- - Check that instructions are in correct format
- - Restart CodinIT
-
-
+CodinIT has minimal keyboard shortcuts by design:
+
+| Action | Shortcut |
+|--------|----------|
+| Toggle theme | `Cmd/Ctrl + Alt + Shift + D` |
+| Toggle terminal | `` Cmd/Ctrl + ` `` |
+
+
+More keyboard shortcuts are planned for future releases based on user feedback.
+
+
+## browser storage
+
+All CodinIT customization is stored in browser localStorage:
+
+**Stored data:**
+- Design palette preferences
+- Provider configurations
+- API keys
+- Theme preference
+- Feature toggles
+- Tab configuration
+- Connection tokens
+
+
+Clearing browser data or using incognito mode will reset all settings. Export important data before clearing browser storage.
+
+
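Because everything above lives in localStorage, persistence can be sketched as a thin wrapper. This is an illustrative sketch only — the `codinit:` key prefix and function names are assumptions, and the storage object is injected so the same logic works outside a browser:

```javascript
// Minimal settings store over a localStorage-shaped object.
// Injecting `storage` lets tests pass a plain stub instead of
// the real browser API.
function createSettingsStore(storage, prefix = "codinit:") {
  return {
    save(key, value) {
      storage.setItem(prefix + key, JSON.stringify(value));
    },
    load(key, fallback) {
      const raw = storage.getItem(prefix + key);
      return raw === null ? fallback : JSON.parse(raw);
    },
  };
}
```

The `fallback` argument is what makes the warning above concrete: once browser data is cleared, every `load` silently returns its default.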
+## developer mode
+
+Enable advanced features for power users:
+
+**Enabled features:**
+- Additional debug tabs
+- Extended event logging
+- System diagnostics
+- Memory monitoring
+- WebContainer controls
+
+**Enable developer mode:**
+1. Open Control Panel → Features
+2. Toggle "Developer mode"
+3. Additional tabs appear in Developer window
+
+## troubleshooting
+
+### settings not saving
+
+- Check browser localStorage permissions
+- Disable browser extensions that block storage
+- Try a different browser
+- Clear cache and reload
+
+### theme not applying
+
+- Check browser theme preference
+- Try toggling theme manually
+- Clear browser cache
+- Refresh the page
+
+### API keys not working
+
+- Verify key validity in API Keys tab
+- Test connection in Cloud Providers tab
+- Check provider status in Service Status
+- Review event logs for errors
+
+### design palette changes not visible
+
+- Click "Save Changes" after customization
+- Refresh the page
+- Clear browser cache
+- Check that Design Palette is enabled
-## Next Steps
+## next steps
-
- Master AI-powered development
+
+ Master development
-
+
Choose the best AI provider
-
+
Connect external services
-
+
Fix common issues
diff --git a/features/development/code-editor.mdx b/features/development/code-editor.mdx
index 9796d3d..b71835d 100644
--- a/features/development/code-editor.mdx
+++ b/features/development/code-editor.mdx
@@ -1,80 +1,68 @@
---
title: Code Editor Panel
-description: Professional code editing environment with file management
+description: Where you write and organize your code files
---
-The Code Editor Panel serves as your primary workspace for writing, editing, and managing code files. It combines a powerful code editor with intuitive file navigation and terminal integration in a unified interface.
+The Code Editor is where you write your code. It's like a super-smart text editor that understands programming and helps you write better code.
-## Overview
+## What is the Code Editor?
-Designed for modern development workflows, the editor panel provides everything you need for efficient coding: syntax highlighting, intelligent file management, and seamless terminal access. The resizable layout adapts to your preferred working style.
+Think of the Code Editor as a special notebook for writing code. It colors your code to make it easier to read and helps you find mistakes.
-
- Advanced editor with syntax highlighting and intelligence
+
+ Type your code with helpful colors and hints
-
- Hierarchical file browsing with advanced operations
+
+ See all your project files in a tree
-
- Built-in command line access
+
+ Use the terminal without leaving the editor
-## Main Interface
+## Main Parts
-
-
- ### Project File Tree
+### Finding Your Files
- Navigate your project structure with powerful browsing features:
- - **Hierarchical display** - Expandable folder structure
- - **File type indicators** - Visual icons for different file types
- - **Unsaved changes** - Clear indicators for modified files
- - **Hidden file filtering** - Automatic exclusion of system files
+On the left side, you'll see all your project files organized like folders on your computer:
+- Click folders to open them and see what's inside
+- Click files to open and edit them
+- Icons show you what type of file it is (JavaScript, image, etc.)
+- A dot appears next to files you've changed but haven't saved
-
+### Writing Code
-
- ### Main Editing Area
+The big area in the middle is where you write your code:
+- **Colors** - Different parts of your code show up in different colors to make it easier to read
+- **Suggestions** - As you type, the editor suggests what you might want to write next
+- **Find and replace** - Search for words in your code and change them
+- **Multiple spots** - Edit several places at once
- Professional code editing with modern features:
- - **Syntax highlighting** - Support for all major programming languages
- - **Auto-completion** - Intelligent code suggestions
- - **Multiple cursors** - Simultaneous editing at multiple locations
- - **Find and replace** - Advanced search within files
+### Using the Terminal
-
+At the bottom, you can open a terminal to run commands:
+- Open multiple terminal tabs for different tasks
+- Make it bigger or smaller by dragging
+- See commands you ran before
+- Run commands without leaving your code
-
- ### Integrated Command Line
+## What You Can Do With Files
- Execute commands without leaving your coding environment:
- - **Multiple terminal tabs** - Different command sessions
- - **Resizable panels** - Adjust terminal size as needed
- - **Command history** - Access to previous commands
- - **Environment integration** - Full project context
+You can manage your files in many ways:
+- **Create** - Make new files and folders
+- **Delete** - Remove files you don't need
+- **Rename** - Change file names
+- **Upload** - Drag files from your computer into the editor
+- **Search** - Find files quickly by name
-
-
+### Reviewing AI Changes
-## File Operations
-
-
- Create Files/Folders
- Delete/Rename
- Drag & Drop Upload
- File Locking
-
-
-### File Management Features
-
-- **Context menus** - Right-click for quick operations
-- **Drag and drop** - Upload files from your computer
-- **File locking** - Prevent conflicts in team environments
-- **Search integration** - Quick file location
+When the AI wants to change your files, you can review the changes first:
+- **See the differences** - Compare the old and new versions side by side
+- **Approve or reject** - Choose which changes to keep
+- **Colored highlights** - See exactly what's being added or removed
- **Quick Access**: Use the file tree tabs to switch between Files, Search, and Locks views for different file
- management tasks.
+ **Review Changes**: Turn on diff approval in Settings → Features to check all AI changes before they're saved to your files.
diff --git a/features/development/developers.mdx b/features/development/developers.mdx
index d6f0253..6777c59 100644
--- a/features/development/developers.mdx
+++ b/features/development/developers.mdx
@@ -1,6 +1,6 @@
---
title: 'Development'
-description: 'Preview changes locally to update your docs'
+description: 'Set up your local development environment with CodinIT using Web Containers or E2B for building and testing applications.'
---
Clone the CodinIT repository from GitHub to your local machine:
@@ -8,8 +8,8 @@ Clone the CodinIT repository from GitHub to your local machine:
### Local Version (Web Containers)
```bash
-git clone https://github.com/Gerome-Elassaad/codinit-app.git
-cd codinit-app
+git clone https://github.com/codinit-dev/codinit-dev.git
+cd codinit-dev
```
### Web Version (E2B)
diff --git a/features/development/terminal.mdx b/features/development/terminal.mdx
index c09fbb7..a84f54b 100644
--- a/features/development/terminal.mdx
+++ b/features/development/terminal.mdx
@@ -1,78 +1,65 @@
---
title: Terminal
-description: Command-line integration with advanced terminal emulation
+description: Type commands to control your project, just like a computer's command line
---
-The Terminal Interface provides a fully-featured command-line environment directly within your development workspace, enabling seamless execution of build commands, package management, and system operations.
+The Terminal is like a text-based remote control for your project. Instead of clicking buttons, you type commands to tell your computer what to do.
-## Overview
+## What is the Terminal?
-Built on XTerm.js, the terminal component offers professional-grade terminal emulation with support for multiple sessions, theming, and interactive command execution. It integrates deeply with your project's environment, providing access to all development tools and scripts.
+Think of the Terminal as a way to talk directly to your computer using text commands. It's like texting your computer to ask it to do things.
-
- Run build scripts, package managers, and development tools
+
+ Tell your computer what to do by typing
-
- Multiple terminal tabs for different workflows
+
+ Open several terminals at once
-
- Consistent theming with your development environment
+
+ Watch what happens when you run commands
-## Key Features
+## What Can You Do?
-
-
- ### Advanced Terminal Capabilities
+### Running Commands
- Professional terminal emulation supporting:
- - **Full ANSI escape sequences** - Rich text formatting and colors
- - **Mouse support** - Interactive terminal applications
- - **Unicode support** - International character display
- - **Web links** - Clickable URLs in terminal output
+You can type commands to:
+- Start your app so you can see it working
+- Install new tools and libraries
+- Check if your code has any problems
+- Save your work to GitHub
-
+### Common Commands
-
- ### Project Environment Access
+Here are some commands you might use:
+- `npm install` - Download tools your project needs
+- `npm start` - Start your app
+- `npm test` - Check if everything works correctly
+- `git status` - See what files you've changed
- Complete access to your development environment:
- - **Project directory** - Automatic navigation to workspace
- - **Environment variables** - Access to configured variables
- - **Installed tools** - Node.js, npm, git, and other tools
- - **File permissions** - Full read/write access to project files
+### Using Multiple Terminals
-
+You can open several terminal tabs at once, like having multiple conversations:
+- One tab runs your app
+- Another tab runs tests
+- A third tab is ready for quick commands
-
- ### Enhanced User Experience
+This way, you don't have to stop one thing to do another.
- Modern terminal features for improved productivity:
- - **Command history** - Navigate through previous commands
- - **Auto-completion** - Intelligent command suggestions
- - **Search functionality** - Find text within terminal output
- - **Copy/paste support** - Standard clipboard operations
+### Helpful Features
-
-
+The terminal has some handy tricks:
+- **Command history** - Press the up arrow to see commands you typed before
+- **Copy and paste** - Copy text in and out of the terminal
+- **Search** - Find specific text in the terminal output
+- **Live updates** - See results appear as they happen
-## Usage Modes
-
-
- Read-Write Mode
- Read-Only Mode
- Background Execution
-
-
-### Terminal States
-
-- **Active Session** - Full command execution and interaction
-- **Read-Only** - Output display without input capability
-- **Background** - Commands running without visible interface
+
+ **Live Console**: Turn on the live console in Settings → Features to see command results in a floating window while you work.
+
- **Pro Tip**: Use multiple terminal tabs to run development servers, build processes, and testing simultaneously
- without switching contexts.
+ **Quick Tip**: Open multiple terminal tabs to run your app in one tab while typing other commands in another tab.
diff --git a/features/development/webcontainer.mdx b/features/development/webcontainer.mdx
index 70863ab..bc691b2 100644
--- a/features/development/webcontainer.mdx
+++ b/features/development/webcontainer.mdx
@@ -1,83 +1,53 @@
---
title: WebContainer
-description: Multi-preview instance management and port selection
+description: Switch between different parts of your app while it's running
---
-Manage multiple development server instances running in WebContainer environments. The port dropdown provides seamless switching between different preview instances and server configurations.
+WebContainer lets you run your app and see it working. If your app has multiple parts (like a website and a backend), you can easily switch between them.
-## Overview
+## What is WebContainer?
-When running multiple development servers or microservices, the WebContainer Preview Management system allows you to switch between different application instances with a simple dropdown interface.
+Think of WebContainer like a mini computer inside CodinIT. It runs your code and shows you what it looks like, just like opening a website in your browser.
-
- Quick switching between server instances
+
+ Jump between different parts of your app
-
- Multiple preview environments
+
+ Run several things at once
-
- Active preview identification
+
+ Know which part you're looking at
-## Preview Management
+## How to Use It
-
-
+### Switching Between Parts
- ### Server Instance Selection
+When your app is running, you might see different "ports" (think of them as different doors to your app). You can click a dropdown to switch between them.
- Organized access to all running preview instances:
- - **Port-based sorting** - Numerical ordering of available ports
- - **Active indication** - Clear highlighting of current preview
- - **Quick switching** - One-click navigation between instances
- - **Instance persistence** - Maintains selection across sessions
+- Click the port number to see all available parts
+- Select the one you want to view
+- Your preview updates instantly
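Under the hood, the dropdown's job is just to order the detected ports and mark the active one. A hypothetical sketch — the function name and shape of the result are assumptions:

```javascript
// Sort detected ports numerically (not lexicographically) and flag
// the one the preview is currently showing.
function buildPortList(ports, activePort) {
  return [...ports]
    .sort((a, b) => a - b)
    .map((port) => ({ port, active: port === activePort }));
}
```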
-
+### When You Need Multiple Parts
-
+Some apps have different pieces that work together:
- ### Complex Application Management
+- **Website** - What users see and click on
+- **Backend** - The part that handles data and logic
+- **Tools** - Extra helpers for testing or building
- Handle applications with multiple server components:
- - **Frontend servers** - Main application interfaces
- - **API servers** - Backend service endpoints
- - **Development tools** - Additional development servers
- - **Microservices** - Individual service previews
+You can view each part separately to make sure everything works correctly.
-
+### Navigating Your App
-
-
- ### Address Bar Integration
-
- Seamlessly integrated with preview navigation:
- - **URL path support** - Navigate within selected instance
- - **Port preservation** - Maintains port context during navigation
- - **Reload functionality** - Refresh current preview instance
- - **External access** - Open previews in new tabs/windows
-
-
-
-
-## Usage Scenarios
-
-
- Full-Stack Apps
- Microservices
- Multi-Tenant Apps
- Development Tools
-
-
-### Common Use Cases
-
-- **Full-stack applications** - Frontend and backend servers
-- **Microservice architectures** - Multiple service endpoints
-- **Development tooling** - Build tools, testing servers
-- **Multi-environment** - Different configurations
+Once you pick a part to view:
+- Type URLs in the address bar to visit different pages
+- Click the refresh button to reload
+- Open in a new tab to see it in your regular browser
- **Port Management**: The port dropdown automatically detects and lists all available WebContainer preview instances,
- making it easy to switch between different parts of your application during development.
+ **Easy Switching**: CodinIT automatically finds all the running parts of your app. Just pick the one you want to see from the dropdown menu.
diff --git a/features/development/workbench.mdx b/features/development/workbench.mdx
index a45da79..1c09945 100644
--- a/features/development/workbench.mdx
+++ b/features/development/workbench.mdx
@@ -1,288 +1,215 @@
---
title: Workbench
-description: Comprehensive guide to the development workbench and its features
+description: Your complete workspace for building apps - write code, see it work, and track changes
---
-The Workbench is your complete development environment, providing everything you need to build, test, and deploy applications in one unified interface.
+The Workbench is your main workspace in CodinIT. It's where you write code, see your app running, and check what you've changed.
-## Overview
+## What is the Workbench?
-The workbench combines code editing, live preview, and change tracking into a seamless development experience. Whether you're building a web application, mobile app, or any other project, the workbench adapts to your workflow.
+Think of the Workbench as your desk where you have everything you need to build an app. You can write code, test it, and see what's different from before - all in one place.
-
- Full-featured code editor with file management, search, and terminal integration
+
+ Write and organize your code files
-
- Real-time application preview with device simulation and responsive testing
+
+ See your app running on different devices
-
- Visual diff comparison and version history for all your file changes
+
+ See what you've changed in your files
-## Main Views
+## Three Main Views
-The workbench provides three primary views that you can switch between based on your current task:
+The workbench has three different views. You can switch between them depending on what you want to do:
- ## Code Editing Interface
-
- The code view is your primary workspace for writing and managing code. It features a resizable layout with multiple panels working together seamlessly.
-
-
-
- ### File Tree Panel
-
- Navigate your project structure with an intuitive file tree that supports:
- - **Hierarchical browsing** - Expand and collapse folders to find files quickly
- - **File type indicators** - Visual icons showing file types (JavaScript, CSS, images, etc.)
- - **Unsaved changes** - Clear indicators for files with pending changes
- - **Hidden file filtering** - Automatically hides common directories like `node_modules`
-
-
- **Quick Tip**: Use the search bar within the file tree to quickly locate specific files by name.
-
-
-
-
- ### Main Editing Area
-
- A powerful code editor with features designed for modern development:
- - **Syntax highlighting** - Support for all major programming languages
- - **Auto-completion** - Intelligent code suggestions as you type
- - **Multiple cursors** - Edit multiple locations simultaneously
- - **Find and replace** - Search within files with regex support
-
- #### File Management
- - **Breadcrumb navigation** - See your current location in the file hierarchy
- - **Save/Reset buttons** - Quick access to save changes or revert to original state
- - **Auto-save** - Automatic saving prevents data loss
-
-
-
- ### Project-Wide Search
-
- Find anything in your codebase instantly:
- - **Text search** - Search for specific words, phrases, or patterns
- - **File filtering** - Limit searches to specific file types
- - **Case sensitivity** - Toggle case-sensitive matching
- - **Regular expressions** - Advanced pattern matching for complex searches
-
- Results are grouped by file with line numbers and highlighted matches for easy navigation.
-
-
-
- ### Command Line Access
-
- Run commands directly within your development environment:
- - **Multiple terminal tabs** - Work with different command sessions
- - **Resizable panels** - Adjust terminal size to fit your workflow
- - **Command history** - Access previously run commands
- - **Environment integration** - Full access to your project's environment
-
-
+ ## Writing Code
+
+ This is where you write and organize your code files.
+
+ ### Your Files
+
+ On the left, you see all your project files in a tree:
+ - Click folders to open them
+ - Click files to edit them
+ - Icons show what type of file it is
+ - A dot shows files you haven't saved yet
+
+
+ **Quick Tip**: Use the search box to find files by name.
+
+
+ ### The Editor
+
+ The big area in the middle is where you write code:
+ - **Colors** - Your code is colored to make it easier to read
+ - **Suggestions** - Get help as you type
+ - **Find and replace** - Search for text and change it
+ - **Edit multiple spots** - Change several places at once
+
+ You can save your work or undo changes with the buttons at the top.
+
+ ### Search Everything
+
+ Need to find something in your whole project?
+ - Type what you're looking for
+ - See results from all files
+ - Click a result to jump to that file
+
+ ### Terminal
+
+ At the bottom, you can type commands:
+ - Open multiple terminal tabs
+ - Make it bigger or smaller
+ - Run commands to start your app or install tools
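+
+ For example, you might type commands like these (assuming a Node.js project that uses npm; your project's tools may differ):
+
+ ```bash
+ # Install your project's tools
+ npm install
+
+ # Start your app
+ npm run dev
+ ```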
- ## Live Application Preview
+ ## See Your App Running
- See your application come to life with real-time preview capabilities that adapt to any device or screen size.
+ This view shows your app working, just like it would on a real phone, tablet, or computer.
-
-
- ### Real-Time Application Viewing
+ ### Live Preview
- Experience your application exactly as users will see it:
- - **Instant updates** - Changes appear immediately as you code
- - **Full interactivity** - Test buttons, forms, and user interactions
- - **Error display** - See runtime errors and debugging information
- - **Loading states** - Observe how your app behaves during data fetching
-
+ Watch your app in action:
+ - **Instant updates** - See changes as soon as you save your code
+ - **Click and test** - Try buttons and forms to make sure they work
+ - **See errors** - If something breaks, you'll see what went wrong
-
- ### Test Across Devices
+ ### Test on Different Devices
- Ensure your application works perfectly on every device:
+ Make sure your app looks good everywhere:
-
-
- Test on iPhone SE, iPhone 12/13, iPhone Pro Max, and various Android sizes
+
+
+ iPhone and Android phones of different sizes
-
- iPad Mini, iPad Air, iPad Pro 11", and iPad Pro 12.9" configurations
+
+ iPads and other tablets
-
- Small, medium, and large laptop sizes plus full desktop resolutions
+
+ Laptops and desktop screens
-
- 4K displays and other large format screens
+
+ Large monitors and TVs
- **Orientation Support**: Test both portrait and landscape orientations for mobile and tablet devices.
-
+ You can also flip devices sideways (landscape) or upright (portrait) to test both ways.
-
- ### Screen Size Flexibility
+ ### Change the Size
- Verify your responsive design works across all breakpoints:
- - **Custom sizing** - Set exact pixel dimensions for testing
- - **Resize handles** - Drag to adjust preview size in real-time
- - **Breakpoint testing** - Quickly jump between common screen sizes
- - **Frame simulation** - See how your app looks within device frames
-
+ You can adjust the preview size:
+ - Pick a device from the list
+ - Drag the edges to make it bigger or smaller
+ - Type in exact sizes if you need to
-
- ### Visual Documentation
+ ### Take Screenshots
- Capture and share your application's appearance:
- - **Full-page screenshots** - Capture entire application views
- - **Element selection** - Focus on specific UI components
- - **Multiple formats** - Export in various image formats
- - **Annotation tools** - Add notes and highlights to screenshots
-
-
+ Capture pictures of your app:
+ - Take a screenshot of the whole page
+ - Save it to share with others
+ - Add notes or highlights if needed
-
- ## File Change Tracking
-
- Understand exactly what has changed in your files with powerful comparison tools and version history.
-
-
-
- ### Side-by-Side Changes
-
- See differences clearly with syntax-highlighted comparisons:
- - **Added lines** - Green highlighting for new content
- - **Removed lines** - Red highlighting for deleted content
- - **Inline changes** - Character-level differences within lines
- - **Context lines** - Surrounding code for better understanding
-
-
-
- ### Modification Summary
-
- Get quantitative insights into your changes:
- - **Addition count** - Number of lines added to files
- - **Deletion count** - Number of lines removed from files
- - **File status** - Modified, new, or deleted file indicators
- - **Change timeline** - When modifications were made
-
-
-
- ### Change Timeline
-
- Track the evolution of your files over time:
- - **Multiple versions** - View different states of the same file
- - **Timestamp tracking** - See exactly when changes were made
- - **Revert capability** - Restore previous versions if needed
- - **Change grouping** - Related modifications bundled together
-
-
-
- ### Quick File Access
-
- Jump directly to files that have been modified:
- - **Modified files dropdown** - Quick access to all changed files
- - **Search functionality** - Find specific changed files
- - **File type filtering** - Focus on particular file types
- - **Change indicators** - Visual badges showing modification counts
-
-
+
+ ## See What You Changed
-
-
+ This view shows you exactly what's different in your files compared to before.
+
+ ### Compare Old and New
-## Key Features
+ See your changes side by side:
+ - **Green lines** - New stuff you added
+ - **Red lines** - Things you removed
+ - **Highlighted words** - Specific words that changed
+ - **Context** - See the code around your changes
-
- Real-time Preview
- Device Simulation
- Code Intelligence
- Git Integration
- Terminal Access
- Change Tracking
-
+ ### Change Summary
-### Development Workflow
+ Get a quick overview:
+ - **Lines added** - How many new lines you wrote
+ - **Lines removed** - How many lines you deleted
+ - **File status** - Which files are new, changed, or deleted
+ - **When** - See when you made the changes
-The workbench supports your complete development cycle:
+ ### History
+
+ Look back at previous versions:
+ - **Different versions** - See how your file looked before
+ - **Time stamps** - Know when each change happened
+ - **Undo changes** - Go back to an older version if needed
+ - **Grouped changes** - Related changes shown together
+
+ ### Find Changed Files
+
+ Quickly jump to files you've edited:
+ - **Dropdown menu** - See all changed files
+ - **Search** - Find a specific file
+ - **Filter** - Show only certain types of files
+ - **Badges** - See how many changes each file has
+
+
+
+
+## How to Build an App
+
+Here's how you use the workbench to build something:
-
- Use the powerful code editor with syntax highlighting, auto-completion, and intelligent suggestions to write clean,
- efficient code.
+
+ Use the code view to write your app. The editor helps you with colors and suggestions.
-
- Switch to preview mode to see your application on different devices and screen sizes, ensuring it works perfectly
- everywhere.
+
+ Switch to preview view to see your app running. Try it on different devices to make sure it looks good everywhere.
-
- Use the diff view to understand what has changed, review modifications, and ensure code quality before committing.
+
+ Use the changes view to see what you modified. Make sure everything looks right before saving.
-
- Use built-in deployment tools to push your code to GitHub, deploy to hosting platforms, or share with team members.
+
+ Save your work to GitHub and deploy it so others can use your app.
-## Additional Tools
-
-Beyond the main views, the workbench includes several supporting tools to enhance your development experience.
-
-
-
- ### Project Deployment
-
- Share your work with the world using integrated deployment tools:
- - **GitHub integration** - Push directly to GitHub repositories
- - **Repository creation** - Create new repos with proper configuration
- - **Commit management** - Generate meaningful commit messages
- - **Branch handling** - Work with different branches and merge strategies
-
-
-
-
- ### Advanced File Operations
+## Other Helpful Tools
- Keep your local and remote files in sync:
- - **File synchronization** - Sync changes between local and remote environments
- - **Download options** - Export your project as a ZIP file
- - **Import capabilities** - Bring in files from external sources
- - **Conflict resolution** - Handle file conflicts gracefully
+The workbench has extra tools to help you:
-
+### Save to GitHub
-
- ### Team Development
+Share your code with others:
+- **Push to GitHub** - Save your code online
+- **Create repositories** - Make a new project on GitHub
+- **Branches** - Work on different versions of your code
- Work effectively with your team:
- - **File locking** - Prevent conflicts when multiple people edit the same files
- - **Change notifications** - See when team members modify shared files
- - **Comment system** - Add notes and feedback on specific code sections
- - **Review workflows** - Structured processes for code review and approval
+### Manage Files
-
+Keep your files organized:
+- **Sync files** - Make sure everything is up to date
+- **Download** - Save your whole project as a ZIP file
+- **Import** - Bring in files from somewhere else
-
- ### Development Optimization
+### Work with Others
- Ensure your application performs well:
- - **Performance monitoring** - Track loading times and resource usage
- - **Error tracking** - Identify and debug runtime issues
- - **Console access** - View browser console output and debugging information
- - **Network inspection** - Monitor API calls and data flow
+Build apps with your team:
+- **File locking** - Prevent two people from editing the same file at once
+- **Notifications** - Know when someone changes a file
+- **Comments** - Leave notes for your teammates
-
-
+### Fix Problems
-## Getting Started
+Make sure your app works well:
+- **Check speed** - See how fast your app loads
+- **Find errors** - Spot and fix bugs
+- **Console** - See messages from your app
+- **Network** - Watch how your app talks to servers
- **Quick Start**: Open any project file to automatically activate the workbench. The interface will adapt based on your
- project type and current development needs.
+ **Getting Started**: Just open a file to start using the workbench. It will show you the tools you need.
diff --git a/features/overview.mdx b/features/overview.mdx
index e23d82b..5f20dbb 100644
--- a/features/overview.mdx
+++ b/features/overview.mdx
@@ -1,116 +1,116 @@
---
-title: "Core Features"
-description: "Discover CodinIT's powerful AI-driven development capabilities that streamline your full-stack development workflow"
+title: "Features Overview"
+description: "Discover CodinIT's AI code generation, intelligent autocomplete, AI pair programming, and LLM-powered development tools for full-stack applications."
---
-
- Ready to experience the future of development? Start building with CodinIT today.
+
+ Ready to experience AI-powered development? Start building with CodinIT's AI code generation today.
-## Core Features
+## AI-powered development features
-Transform your coding experience with intelligent AI assistance that understands context, suggests improvements, and helps you write better code faster.
+Transform your coding experience with intelligent AI assistance powered by LLMs that understand context, generate production-ready code, suggest improvements, and help you write better software faster.
-
- Generate high-quality code, components, and entire features with AI-powered suggestions tailored to your project.
+
+ Generate high-quality code, React components, and entire features with AI-powered suggestions and LLM code generation tailored to your project.
-
- Get intelligent recommendations based on your codebase, project structure, and development patterns.
+
+ Get intelligent AI recommendations based on your codebase, project structure, and software development patterns using LLM analysis.
-
- Connect with 19+ AI providers including OpenAI, Anthropic, Google, Groq, and local models like Ollama.
+
+ Connect with 18+ AI model providers including OpenAI GPT-4, Anthropic Claude, Google Gemini, Groq, and local LLMs like Ollama for flexible AI coding.
-
- Use voice commands and prompts to code hands-free and boost your productivity to ship faster than ever before.
+
+ Use voice commands and natural language prompts to code hands-free with AI assistance and boost your productivity to ship faster than ever before.
-### Development Environment
+### AI-powered development environment
-Experience a powerful, integrated development environment designed for modern full-stack applications.
+Experience a powerful, AI-integrated development environment designed for modern full-stack applications with intelligent code generation and LLM assistance.
-
- Work seamlessly across web browsers and native desktop applications with full feature parity.
+
+ Work seamlessly across web browsers and native desktop applications with full AI coding feature parity and intelligent development tools.
-
- Deploy and run your applications in containerized environments with built-in Docker support.
+
+ Deploy and run AI-generated applications in containerized environments with built-in Docker support and intelligent configuration.
-
- Access a full-featured terminal directly within the platform for seamless workflow integration.
+
+ Access a full-featured terminal with AI command suggestions directly within the platform for seamless AI-powered workflow integration.
-
- Advanced file operations with search, diff viewing, and intelligent locking system for collaboration.
+
+ Advanced file operations with AI-powered search, diff approval workflow, and intelligent locking system for collaborative AI development.
-### Deployment & Integration
+### AI-assisted deployment & integration
-Deploy anywhere with comprehensive platform support and powerful integrations.
+Deploy AI-generated applications anywhere with comprehensive platform support and powerful AI-enhanced integrations.
-
- Deploy to Vercel, Netlify, GitHub Pages, or any platform with one-click deployment options.
+
+ Deploy AI-generated apps to Vercel, Netlify, GitHub Pages, or any platform with intelligent one-click deployment and configuration.
-
- Built-in Supabase support for seamless database management, authentication, and real-time features.
+
+ Built-in Supabase support with AI-assisted database schema generation, authentication setup, and real-time features configuration.
-
- Full Git support with visual diff tools, branch management, and seamless repository integration.
+
+ Full Git support with AI-generated commit messages, visual diff tools, intelligent branch management, and seamless repository integration.
-
- Connect with external tools and services through the Model Context Protocol for extended functionality.
+
+ Start quickly with AI-optimized pre-built templates for common project types, frameworks, and full-stack architectures.
-## Platform Benefits
+## AI development platform benefits
-
- Connect with industry-leading AI models and local endpoints for maximum flexibility and choice.
+
+ Connect with industry-leading AI models like Claude, GPT-4, Gemini and local LLM endpoints for maximum flexibility in AI code generation.
-
- Built specifically for full-stack Node.js development with comprehensive tooling and support.
+
+ Built specifically for AI-powered full-stack Node.js development with comprehensive AI coding tools and intelligent development support.
-
- Work together with intelligent file locking and real-time synchronization across devices.
+
+ Work together with intelligent file locking, AI-powered code reviews, and real-time synchronization across devices.
-## Quick Start Guide
+## AI coding quick start guide
-Ready to begin your CodinIT journey? Follow these simple steps to get started immediately.
+Ready to begin your AI-powered development journey? Follow these simple steps to get started with AI code generation immediately.
-
- Get CodinIT running locally in minutes with our step-by-step quickstart guide.
+
+ Get CodinIT AI IDE running locally in minutes with our step-by-step AI coding quickstart guide.
-
- Connect your preferred AI providers and optimize your development experience.
+
+ Connect your preferred AI model providers like Claude, GPT-4, Gemini and optimize your AI development experience.
-
- Start building with AI assistance from day one using our project templates and examples.
+
+ Start building with AI code generation from day one using our AI-optimized project templates and examples.
-
- Launch your applications to the world with our integrated deployment tools.
+
+ Launch your AI-built applications to production with our integrated deployment tools and intelligent configuration.
- **New to CodinIT?** Visit our [Quickstart Guide](/quickstart) to get up and running in minutes, or explore our [Providers Guide](/providers/cloud-providers) to learn about integrating with 19+ AI providers.
+ **New to AI coding?** Visit our [AI Quickstart Guide](/quickstart) to get up and running with AI code generation in minutes, or explore our [LLM Providers Guide](/providers/cloud-providers) to learn about integrating with 18+ AI model providers including Claude, GPT-4, and Gemini.
diff --git a/getting-started/installation.mdx b/getting-started/installation.mdx
index 4ab3040..1ffaf25 100644
--- a/getting-started/installation.mdx
+++ b/getting-started/installation.mdx
@@ -1,28 +1,54 @@
---
title: "Installation"
-description: "Get CodinIT up and running in your favorite IDE with these simple installation steps"
+description: "Install CodinIT AI-powered IDE on Windows, Mac, and Linux. Set up the AI coding assistant with step-by-step installation guide for developers."
---
-**Ready to get started?** Installation takes less than 2 minutes! Choose your preferred platform below and follow the simple steps.
+**Ready to start AI coding?** Install the AI-powered IDE in less than 2 minutes! Choose your platform and start building with AI code generation.
-## Before You Begin
+## Download the AI coding assistant from the website (Recommended)
-Clone the CodinIT repository from GitHub to your local machine:
+The easiest way to get started with AI-powered development is to download the CodinIT installer for your platform:
-### Local Version (Web Containers)
+
+
+ 1. Download the `.dmg` file from [codinit.dev/download](https://codinit.dev/download)
+ 2. Open the downloaded `.dmg` file
+ 3. Drag CodinIT to your Applications folder
+ 4. Launch CodinIT from your Applications
+
-```bash
-git clone https://github.com/Gerome-Elassaad/codinit-app.git
-cd codinit-app
-```
+
+ 1. Download the `.exe` installer from [codinit.dev/download](https://codinit.dev/download)
+ 2. Run the installer
+ 3. Follow the installation wizard
+ 4. Launch CodinIT from the Start menu or desktop shortcut
+
+
+
+ 1. Download the `.AppImage` file from [codinit.dev/download](https://codinit.dev/download)
+ 2. Make it executable: `chmod +x CodinIT-*.AppImage`
+ 3. Run the AppImage: `./CodinIT-*.AppImage`
+
+ Or install via package manager (if available):
+ ```bash
+ # Debian/Ubuntu
+ sudo dpkg -i codinit_*.deb
-### Web Version (E2B)
+ # Fedora/RHEL
+ sudo rpm -i codinit-*.rpm
+ ```
+
+
+
+## Install the AI IDE from source (for developers)
+
+If you prefer to build the AI coding assistant from source or contribute to open-source AI development:
```bash
-git clone https://github.com/Gerome-Elassaad/codingit.git
-cd codingit
+git clone https://github.com/codinit-dev/codinit-dev.git
+cd codinit-dev
```
## Install dependencies
@@ -40,9 +66,9 @@ pnpm install
yarn install
```
-## Configure environment
+## Configure AI model environment
-Set up your environment variables by creating a `.env.local` file and add your AI provider API keys:
+Set up your environment variables by creating a `.env.local` file and adding your LLM provider API keys for AI code generation:
```bash
# Copy the example environment file
@@ -54,9 +80,9 @@ cp .env.example .env.local
# etc.
```
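+
+For example, a finished `.env.local` might look like this (the variable names below are illustrative; check `.env.example` for the exact names your setup expects):
+
+```bash
+# Hypothetical provider keys - add only the ones you use
+ANTHROPIC_API_KEY=your-anthropic-key
+OPENAI_API_KEY=your-openai-key
+```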
-## Run the development server
+## Run the AI development server
-Start the application in development mode and open it in your IDE:
+Start the AI-powered IDE in development mode and begin coding with AI assistance:
```bash
# Start the dev server
@@ -65,10 +91,10 @@ pnpm run dev
# The app will be available at:
# http://localhost:5173 (Web Containers Dev Server)
# https://localhost:3000 (E2B Dev Server)
-
+```
-
- Continue to model selection and start building your first project.
+
+ Continue to LLM provider configuration and start building AI-powered projects with code generation.
diff --git a/getting-started/select-your-model.mdx b/getting-started/select-your-model.mdx
index cc314be..ca0757e 100644
--- a/getting-started/select-your-model.mdx
+++ b/getting-started/select-your-model.mdx
@@ -1,6 +1,6 @@
---
-title: "Selecting Your Model"
-description: "Get started with your first AI model in CodinIT"
+title: "Model Selection"
+description: "Select the best AI coding model for your needs. Compare Claude, GPT-4, Gemini, and DeepSeek for AI-powered development and code generation."
---
-CodinIT needs an AI model to understand your requests and write code. Think of it like choosing which expert to work with - different models have different strengths and costs.
+The CodinIT AI coding assistant needs an LLM (large language model) to understand your requests and generate code. Think of it like choosing which expert to work with: different models have different coding strengths, performance, and costs.
-## Quick Start: Choose Your Provider
+## Quick start: Choose your AI coding model
-The easiest way to get started is with **CodinIT** as your provider:
+The easiest way to get started with AI code generation is with **CodinIT** as your LLM provider:
-1. **Open CodinIT Settings**: Click the gear icon (⚙️) in the top-right corner of CodinIT's chat
-2. **Select "CodinIT"** from the API Provider dropdown
-3. **Choose a model** from the dropdown - we recommend starting with **Claude Sonnet 4.5** or **DeepSeek V3**
+1. **Open AI settings**: Click the gear icon (⚙️) in the top-right corner of CodinIT's AI chat interface
+2. **Select "CodinIT"** from the AI Provider dropdown for instant LLM access
+3. **Choose an AI model** from the dropdown - we recommend starting with **Claude 3.5 Sonnet** or **DeepSeek V3** for code generation
-**That's it!** No API keys to manage, and you'll get access to multiple models.
+**That's it!** No API keys to manage, and you'll get access to multiple AI coding models for intelligent development.
-**Free models available**: CodinIT occasionally offers free inferencing through partner providers. When available, you'll see these options in your model dropdown.
+**Free AI coding models available**: CodinIT occasionally offers free LLM inference through partner providers. When available, these models appear in your model dropdown.
-## Alternative: Use Another Provider
+## Alternative: Use your own AI model provider
-If you prefer to use your own API keys, you can select from providers like:
+If you prefer to use your own LLM API keys for AI code generation, you can select from providers like:
-- **OpenRouter** - Great value, multiple models
-- **Anthropic** - Direct access to Claude models
-- **OpenAI** - Access to GPT models
-- **Google Gemini** - Google's AI models
-- **Ollama** - Run models locally on your computer
+- **OpenRouter** - Great value, multiple AI coding models
+- **Anthropic** - Direct access to Claude AI models for code generation
+- **OpenAI** - Access to GPT-4 and GPT-4o for AI-powered development
+- **Google Gemini** - Google's AI models for intelligent coding assistance
+- **Ollama** - Run open-source LLMs locally on your computer for private AI coding
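+
+For example, with Ollama installed you can pull and run a local model from your terminal (the model name below is just an example; pick any model from the Ollama library):
+
+```bash
+# Download a model, then chat with it locally
+ollama pull llama3
+ollama run llama3
+```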
-After selecting a provider, you'll need to:
-1. Get an API key from their website
-2. Paste it into the API Key field in CodinIT settings
-3. Choose your model
+After selecting an AI provider, you'll need to:
+1. Get an API key from the provider's website
+2. Paste it into the API Key field in CodinIT AI settings
+3. Choose your preferred AI coding model
Most providers require payment information before generating API keys.
-## Which Model Should I Choose?
+## Which AI coding model should I choose?
-If you're just getting started, we recommend:
+If you're just getting started with AI code generation, we recommend:
-| Your Priority | Choose This Model | Why |
+| Your Priority | Choose This AI Model | Why |
|---------------|-------------------|-----|
-| **Reliability** | Claude Sonnet 4.5 | Most reliable for coding tasks |
-| **Value** | DeepSeek V3 | Great performance at low cost |
-| **Speed** | Qwen3 Coder | Fast responses |
-| **Privacy** | Any Ollama model | Runs on your computer |
+| **Reliability** | Claude 3.5 Sonnet | Most reliable LLM for coding tasks |
+| **Value** | DeepSeek V3 | Strong AI performance at low cost |
+| **Speed** | Qwen3 Coder | Fast responses for real-time code completion |
+| **Privacy** | Any Ollama model | Runs locally on your computer for private development |
-You can switch models anytime without losing your conversation.
+You can switch AI models anytime without losing your coding conversation or context.
-## Next Steps
+## Next steps for AI-powered coding
-With your model configured, you're all set! In the next section, we'll walk you through completing your first task with CodinIT and show you how to interact with the AI to write, debug, and refactor code.
+With your AI model configured, you're all set! In the next section, we'll walk you through completing your first AI-powered development task with CodinIT and show you how to interact with the LLM to write, debug, and refactor code.
-
- Want to understand model pricing, context windows, and advanced selection strategies? Check out our comprehensive AI Providers guide.
+
+ Want to understand LLM pricing, context windows, and advanced AI model selection strategies? Check out our comprehensive AI Providers guide for code generation.
diff --git a/getting-started/your-first-project.mdx b/getting-started/your-first-project.mdx
index 4dbe803..a615d87 100644
--- a/getting-started/your-first-project.mdx
+++ b/getting-started/your-first-project.mdx
@@ -248,8 +248,8 @@ Master Supabase with advanced queries, RLS policies, and edge functions
Explore Netlify, Vercel, and advanced deployment strategies
-
- Connect external services and APIs through Model Context Protocol
+
+ Master AI chat commands and voice-powered coding features
diff --git a/index.mdx b/index.mdx
index f4a4734..41596b3 100644
--- a/index.mdx
+++ b/index.mdx
@@ -72,7 +72,7 @@ Ready to transform your development experience? Choose your path to get started
Find solutions to common issues with AI providers, performance, deployment, and development environment problems.
-
+
Join our Discord community for real-time help, discussions, and connecting with other CodinIT developers.
diff --git a/integrations/cloudflare.mdx b/integrations/cloudflare.mdx
index ea3e6b0..3d9b6b4 100644
--- a/integrations/cloudflare.mdx
+++ b/integrations/cloudflare.mdx
@@ -1,6 +1,6 @@
---
title: 'Cloudflare Pages'
-description: 'Deploy your CodinIT projects to Cloudflare Pages with global CDN and edge computing'
+description: 'Deploy your CodinIT projects to Cloudflare Pages with global CDN, edge computing, and automatic SSL for lightning-fast performance.'
---
**Website:** [https://pages.cloudflare.com/](https://pages.cloudflare.com/)
diff --git a/integrations/deployments.mdx b/integrations/deployments.mdx
index 21fe5f3..a74e424 100644
--- a/integrations/deployments.mdx
+++ b/integrations/deployments.mdx
@@ -1,20 +1,23 @@
---
-title: 'Hosting'
-description: 'Deploy your applications to Netlify and Vercel with seamless API integration and automated workflows'
+title: 'Deployments'
+description: 'Deploy AI-generated applications to Netlify, Vercel, and Cloudflare with seamless API integration, automated workflows, and intelligent configuration for full-stack apps.'
---
-# Deployment Integrations
+# AI-powered deployment integrations
-CodinIT provides direct API integration with leading deployment platforms, enabling one-click deployments and automated publishing workflows. Deploy your applications instantly to global CDNs with full build automation and environment management.
+CodinIT provides direct API integration with leading deployment platforms, enabling one-click deployments of AI-generated applications and automated publishing workflows. Deploy your AI-built applications instantly to global CDNs with full build automation, intelligent environment management, and LLM-assisted configuration.
-## Supported Platforms
+## AI-assisted deployment platforms
-
-
- Deploy static sites, SPAs, and serverless functions with global CDN and continuous deployment.
+
+
+ Deploy AI-generated static sites, SPAs, and serverless functions with global CDN and intelligent continuous deployment.
-
- Optimized for Next.js, React, and modern frameworks with edge computing capabilities.
+
+ Optimized for AI-built Next.js and React apps with edge computing and intelligent framework detection.
+
+
+ Deploy to Cloudflare's global edge network with automatic SSL and lightning-fast performance.
@@ -227,6 +230,13 @@ The Vercel integration uses the following API endpoints:
- Projects needing instant deployments and preview environments
- Teams building with Jamstack architecture
+**Use Cloudflare Pages for:**
+
+- Global edge network deployment with 200+ locations
+- Lightning-fast static site hosting with automatic optimization
+- Projects requiring DDoS protection and security features
+- Applications needing edge computing with Workers integration
+
### Deployment Process
diff --git a/integrations/git.mdx b/integrations/git.mdx
index b6004a5..b5bc97c 100644
--- a/integrations/git.mdx
+++ b/integrations/git.mdx
@@ -1,103 +1,100 @@
---
title: 'Git Integration'
-description: 'Advanced Git integration with proxy functionality, GitHub templates, and seamless version control'
+description: 'How to save your code to GitHub and work with version control'
---
-# Git Integration
+CodinIT connects to GitHub so you can save your code online, work with others, and keep track of changes.
-CodinIT provides comprehensive Git integration that goes beyond basic repository management to include advanced features like Git proxy operations, GitHub template integration, and seamless version control workflows.
+## What Can You Do?
-## Overview
-
-The Git integration system offers multiple layers of functionality:
+CodinIT helps you with GitHub in several ways:
-
- Remote Git operations through secure proxy
+
+ Save your code online automatically
-
- Quick project setup from GitHub templates
+
+ Begin projects using pre-made templates
-
- Automatic commits and branch management
+
+ See what changed and when
-### Key Features
+### Main Features
-- **Git Proxy Operations**: Execute Git commands on remote repositories
-- **Template Integration**: Import and use GitHub repository templates
-- **Automatic Commits**: Seamless version control without manual Git commands
-- **Branch Management**: Create, switch, and merge branches
-- **Repository Sync**: Bidirectional synchronization with GitHub
-- **Conflict Resolution**: Intelligent merge conflict handling
+- **Automatic saving**: CodinIT saves your code to GitHub for you
+- **Use templates**: Start projects from GitHub templates
+- **Auto commits**: Your changes are saved automatically
+- **Branches**: Work on different versions of your code
+- **Stay in sync**: Keep your local and online code matching
+- **Fix conflicts**: Help when two people change the same thing
- **Freedom of Choice**: Your code always lives in Git, giving you complete control and the ability to work with any
- Git-compatible tools or services.
+ **Your Code, Your Control**: Your code is always saved in Git, so you can use any Git tool you want.
-## Git Proxy Operations
+## Working with GitHub
-### Remote Git Command Execution
+### What You Can Do
-The Git proxy allows secure execution of Git commands on remote repositories:
+CodinIT can do these Git things for you:
-**Supported Operations:**
+**Basic operations:**
-- Repository cloning and fetching
-- Branch creation and switching
-- Commit and push operations
-- Merge conflict resolution
-- Status checking and diff viewing
+- Copy projects from GitHub
+- Create and switch branches
+- Save and upload changes
+- Fix conflicts when they happen
+- Check what changed
-**Usage Example:**
+**Example commands:**
```bash
-# Clone a repository
+# Copy a project
git clone https://github.com/user/repo.git
-# Create and switch to new branch
+# Make a new branch
git checkout -b feature/new-feature
-# Push changes
+# Upload your changes
git push origin main
```
-### Security and Access Control
+### Security
-**Authentication Methods:**
+**How you log in:**
-- Personal Access Tokens (PAT)
-- SSH keys (when supported)
-- OAuth integration with GitHub
-- Repository-specific permissions
+- Personal Access Tokens (a special password)
+- SSH keys (a secure key file)
+- OAuth (log in with GitHub)
+- Permission for specific projects
-**Security Features:**
+**Staying safe:**
-- Command validation and sanitization
-- Rate limiting and abuse prevention
-- Audit logging of all operations
-- Secure credential storage
+- Commands are checked for safety
+- Limits to prevent abuse
+- A record of everything you do
+- Secure password storage
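
Command checking could work roughly like the allowlist sketch below. This is a hypothetical illustration, not CodinIT's actual implementation — the function and list names are made up:

```typescript
// Hypothetical sketch of how a Git proxy might validate commands.
// The allowlist and function name are illustrative only.
const ALLOWED_COMMANDS = new Set([
  'clone', 'fetch', 'checkout', 'commit', 'push', 'status', 'diff', 'merge',
]);

function isSafeGitCommand(args: string[]): boolean {
  if (args.length === 0) return false;
  // Only allow known subcommands
  if (!ALLOWED_COMMANDS.has(args[0])) return false;
  // Reject shell metacharacters that could escape the command
  return args.every((a) => !/[;&|`$<>]/.test(a));
}
```

A request like `git push origin "main; rm -rf /"` would be rejected before it ever reaches the repository.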
-## GitHub Template Integration
+## Starting from Templates
-### Template-Based Project Creation
+### Use Pre-Made Projects
-Quickly start projects using GitHub repository templates:
+Start quickly with ready-made project templates:
-**Available Template Types:**
+**Types of templates:**
-- Frontend frameworks (React, Vue, Angular)
+- Website frameworks (React, Vue, Angular)
- Backend APIs (Express, FastAPI, NestJS)
-- Full-stack applications
-- Mobile app templates
-- DevOps and deployment templates
+- Complete apps (frontend + backend)
+- Mobile app starters
+- Deployment setups
-**Template Usage:**
+**How to use a template:**
```typescript
-// Import template via API
+// Start from a template
const template = await fetch('/api/github-template', {
method: 'POST',
body: JSON.stringify({
@@ -107,320 +104,159 @@ const template = await fetch('/api/github-template', {
});
```
-### Custom Template Creation
+### Make Your Own Templates
-**Creating Templates:**
+**Creating templates:**
-- Convert existing repositories to templates
-- Define template variables and configuration
-- Set up automated setup scripts
-- Include documentation and examples
+- Turn your project into a template
+- Set up variables and settings
+- Add setup scripts
+- Include instructions
-**Template Structure:**
+**Template folder structure:**
```
template-repo/
-├── template.json # Template configuration
-├── setup.js # Automated setup script
-├── README.md # Usage instructions
-└── src/ # Template source code
-```
-
-## Repository Management
-
-### Creating New Repositories
-
-**From Existing Projects:**
-
-```typescript
-// API endpoint for repository creation
-POST /api/github-template
-{
- "action": "create-repo",
- "name": "my-new-project",
- "private": false,
- "description": "Project created with CodinIT"
-}
+├── template.json # Settings
+├── setup.js # Setup script
+├── README.md # Instructions
+└── src/ # Your code
```
-**Repository Configuration:**
+## Managing Your Projects
-- **Visibility**: Public or private repositories
-- **Branch Protection**: Main branch protection rules
-- **Collaborators**: Team member access management
-- **Topics and Labels**: Organization and categorization
-- **GitHub Actions**: Automated CI/CD workflows
+### Creating New Projects
-### Importing Existing Repositories
+**Start a new project:**
-**Supported Import Methods:**
+You can create a new GitHub repository right from CodinIT:
+- Choose public (anyone can see) or private (only you and your team can see)
+- Add a description
+- Set up protection rules
+- Add team members
+- Set up automatic testing
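
A request to create a repository might look like this sketch, based on the `/api/github-template` endpoint used elsewhere in these docs (the exact fields may differ):

```typescript
// Sketch: building a create-repo request for CodinIT's API.
// Field names follow the docs' example; treat them as illustrative.
function buildCreateRepoRequest(name: string, isPrivate: boolean) {
  return {
    method: 'POST',
    body: JSON.stringify({
      action: 'create-repo',
      name,
      private: isPrivate,
      description: 'Project created with CodinIT',
    }),
  };
}

// Usage:
// await fetch('/api/github-template', buildCreateRepoRequest('my-new-project', false));
```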
-- Direct GitHub repository URLs
-- GitHub organization repositories
-- Private repository access
-- Large repository handling
+### Bringing in Existing Projects
-**Import Process:**
-
-```typescript
-// Repository import via API
-const importResult = await fetch('/api/git-proxy/import', {
- method: 'POST',
- body: JSON.stringify({
- url: 'https://github.com/user/repo.git',
- branch: 'main',
- depth: 1, // Shallow clone for large repos
- }),
-});
-```
+**Import from GitHub:**
-**Post-Import Setup:**
+You can bring in projects you already have on GitHub:
+- Paste the GitHub URL
+- Choose which branch to use
+- CodinIT will download it and set it up
-- Dependency installation
-- Environment configuration
-- Build verification
-- Development server startup
+After importing:
+- Install needed tools
+- Configure your settings
+- Make sure it builds correctly
+- Start the development server
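
An import request might look like this sketch, modeled on the `/api/git-proxy/import` endpoint (exact fields may differ):

```typescript
// Sketch: building an import request for an existing GitHub repository.
// Endpoint and fields follow the docs' example; treat them as illustrative.
function buildImportRequest(url: string, branch = 'main') {
  return {
    method: 'POST',
    body: JSON.stringify({
      url,
      branch,
      depth: 1, // shallow clone keeps large repositories fast to download
    }),
  };
}

// Usage:
// await fetch('/api/git-proxy/import', buildImportRequest('https://github.com/user/repo.git'));
```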
-### Repository Synchronization
-
-**Bidirectional Sync:**
-
-- Automatic commit creation on file changes
-- Pull request synchronization
-- Conflict detection and resolution
-- Branch status monitoring
-
-**Sync Configuration:**
-
-```json
-{
- "sync": {
- "autoCommit": true,
- "pullInterval": 30000,
- "conflictStrategy": "manual",
- "ignorePatterns": [".env", "node_modules"]
- }
-}
-```
-
-## Advanced Git Operations
-
-### Branch Management
-
-**Creating and Managing Branches:**
-
-```bash
-# Create new feature branch
-git checkout -b feature/user-authentication
-
-# Switch between branches
-git checkout main
-git checkout feature/user-authentication
-
-# Merge branches
-git merge feature/user-authentication
-```
-
-**Branch Protection Rules:**
-
-- Required pull request reviews
-- Required status checks
-- Restrictions on force pushes
-- Branch naming conventions
-
-### Commit Strategies
-
-**Atomic Commits:**
-
-- Each commit represents one logical change
-- Clear, descriptive commit messages
-- Related changes grouped together
-- Easy to understand and revert
-
-**Commit Message Conventions:**
-
-```
-feat: add user authentication system
-fix: resolve login form validation bug
-docs: update API documentation
-refactor: simplify user state management
-```
-
-### Conflict Resolution
-
-**Handling Merge Conflicts:**
-
-```bash
-# Check for conflicts
-git status
-
-# Resolve conflicts in files
-# Then add resolved files
-git add resolved-file.js
-
-# Complete the merge
-git commit
-```
-
-**Prevention Strategies:**
-
-- Regular branch synchronization
-- Clear ownership of code areas
-- Code review requirements
-- Automated testing before merges
-
-## Git Hooks and Automation
-
-### Pre-commit Quality Checks
-
-**Automated Validation:**
-
-```bash
-# .git/hooks/pre-commit
-#!/bin/sh
-npm run lint
-npm run test
-npm run type-check
-```
-
-**Code Quality Gates:**
-
-- ESLint for code style
-- Prettier for formatting
-- TypeScript type checking
-- Unit test execution
-
-### CI/CD Integration
-
-**GitHub Actions Workflows:**
-
-```yaml
-# .github/workflows/ci.yml
-name: CI Pipeline
-on: [push, pull_request]
-
-jobs:
- test:
- runs-on: ubuntu-latest
- steps:
- - uses: actions/checkout@v3
- - name: Setup Node.js
- uses: actions/setup-node@v3
- with:
- node-version: '18'
- - name: Install dependencies
- run: npm ci
- - name: Run tests
- run: npm test
- - name: Build project
- run: npm run build
-```
+### Keeping Things in Sync
-**Deployment Automation:**
+**Automatic syncing:**
-- Automatic preview deployments
-- Production deployment on merge
-- Rollback capabilities
-- Environment-specific configurations
+CodinIT keeps your code in sync with GitHub:
+- Saves changes automatically
+- Checks for updates every 30 seconds
+- Helps fix conflicts
+- Watches branch status
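
These sync settings map to a configuration like the one below (taken from the sync configuration example; your defaults may vary):

```json
{
  "sync": {
    "autoCommit": true,
    "pullInterval": 30000,
    "conflictStrategy": "manual",
    "ignorePatterns": [".env", "node_modules"]
  }
}
```

`pullInterval` is in milliseconds, so `30000` is the 30-second update check mentioned above.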
-## Troubleshooting
+## Common Problems and Solutions
-
- ### Git Authentication Problems
+
+ ### Can't Connect to GitHub
- **Common Issues:**
- - Expired personal access tokens
- - Incorrect repository permissions
- - Two-factor authentication conflicts
- - SSH key configuration problems
+ **Common problems:**
+ - Your access token expired
+ - You don't have permission
+ - Two-factor authentication issues
+ - SSH key problems
- **Solutions:**
- - Regenerate GitHub personal access tokens
- - Verify repository access permissions
- - Use SSH keys for secure authentication
- - Check token expiration dates
+ **How to fix:**
+ - Make a new GitHub access token
+ - Check your repository permissions
+ - Use SSH keys for better security
+ - Check when your token expires
-
- ### Repository Synchronization Issues
+
+ ### Code Won't Sync
- **Common Issues:**
- - Merge conflicts during sync
- - Divergent branch histories
- - Large file handling problems
- - Network connectivity issues
+ **Common problems:**
+ - Conflicts when merging
+ - Branches went different directions
+ - Large files causing issues
+ - Internet connection problems
- **Solutions:**
- - Resolve conflicts manually or use merge tools
- - Force push only when necessary and safe
- - Use Git LFS for large files
- - Check network stability and retry operations
+ **How to fix:**
+ - Fix conflicts by hand
+ - Only force push when you're sure it's safe
+ - Use Git LFS for big files
+ - Check your internet and try again
-
- ### Git Operation Performance
+
+ ### Git Is Slow
- **Common Issues:**
- - Slow clone operations for large repositories
- - Memory issues with large histories
- - Network latency problems
- - Disk space constraints
+ **Common problems:**
+ - Big projects take forever to download
+ - Running out of memory
+ - Slow internet
+ - Not enough disk space
- **Solutions:**
- - Use shallow clones for large repositories
- - Implement Git LFS for binary files
- - Optimize network settings
- - Clean up unnecessary branches and tags
+ **How to fix:**
+ - Use shallow clones (download less history)
+ - Use Git LFS for binary files
+ - Check your internet connection
+ - Delete old branches you don't need
-## Best Practices
+## Good Habits
-### Repository Organization
+### Organizing Your Code
-**Branch Strategy:**
+**Branch names:**
-- `main`/`master`: Production-ready code
-- `develop`: Integration branch for features
-- `feature/*`: Individual feature development
-- `hotfix/*`: Critical bug fixes
-- `release/*`: Release preparation
+- `main`: The working version
+- `develop`: Where you combine new features
+- `feature/*`: New features you're building
+- `hotfix/*`: Quick fixes for urgent bugs
+- `release/*`: Getting ready to release
-**Commit Hygiene:**
+**Commit messages:**
-- Write clear, descriptive commit messages
-- Keep commits focused and atomic
-- Use conventional commit format
-- Squash related commits before merging
+- Write clear messages about what you changed
+- Keep each commit focused on one thing
+- Use a standard format
+- Combine related commits before merging
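
A standard format here usually means conventional commit prefixes, for example:

```
feat: add user authentication system
fix: resolve login form validation bug
docs: update API documentation
refactor: simplify user state management
```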
-### Collaboration Workflows
+### Working with Others
-**Code Review Process:**
+**Code review:**
-- Create feature branches for all changes
-- Submit pull requests for review
-- Require approvals before merging
-- Use automated checks and tests
+- Make a branch for each change
+- Ask others to review your code
+- Get approval before merging
+- Use automatic tests
-**Team Coordination:**
+**Team coordination:**
-- Regular branch synchronization
-- Clear ownership of code areas
-- Documentation of processes
-- Regular team communication
+- Sync branches regularly
+- Know who's working on what
+- Write down your processes
+- Talk to your team often
- **Git Mastery**: Effective Git usage is a key skill for modern development. Mastering these advanced features will
- significantly improve your development workflow.
+ **Git is Important**: Learning Git well will make you a better developer. These skills help you work on any project.
- **Learn Continuously**: Git has extensive capabilities. Consider exploring advanced features like interactive
- rebasing, advanced merge strategies, and custom hooks.
+ **Keep Learning**: Git can do a lot more than what's covered here. Explore features like rebasing and custom hooks as you get more comfortable.
## Branching and merging
diff --git a/integrations/netlify.mdx b/integrations/netlify.mdx
index 940f4f1..dfdb796 100644
--- a/integrations/netlify.mdx
+++ b/integrations/netlify.mdx
@@ -1,6 +1,6 @@
---
title: Netlify
-description: 'Deploy your CodinIT projects directly to Netlify with one click'
+description: 'Deploy your CodinIT projects to Netlify with one click, using seamless integration and automatic build detection.'
---
**Website:** [https://netlify.com/](https://netlify.com/)
diff --git a/integrations/supabase.mdx b/integrations/supabase.mdx
index f40669d..6b30fd6 100644
--- a/integrations/supabase.mdx
+++ b/integrations/supabase.mdx
@@ -3,42 +3,46 @@ title: 'Supabase Integration'
description: 'Complete Supabase integration with database operations, authentication, real-time subscriptions, and TypeScript support'
---
-# Supabase Integration
+CodinIT provides Supabase integration, enabling you to build full-stack applications with databases, user login systems, live data updates, and server functions.
-CodinIT provides comprehensive Supabase integration, enabling you to build full-stack applications with PostgreSQL databases, authentication, real-time subscriptions, and edge functions. This integration goes beyond basic database connectivity to include advanced features like Row Level Security, automated migrations, and TypeScript type generation.
+
+ **What is Supabase?** Supabase is a service that provides a database (a place to store your app's data), user authentication (login/signup systems), and other backend features—all without you needing to set up your own servers.
+
+
+This integration goes beyond basic database connectivity to include advanced features like Row Level Security (RLS), automated migrations, and TypeScript type generation.
## Overview
-The Supabase integration in CodinIT offers enterprise-grade database capabilities with developer-friendly tooling:
+The Supabase integration in CodinIT offers powerful database capabilities with easy-to-use tools:
- Hosted PostgreSQL with advanced features and extensions
+ A hosted database to store all your app's information (users, posts, orders, etc.)
- Complete user management with social providers and custom flows
+ Ready-made login systems including "Sign in with Google" and email/password
- Live subscriptions and real-time data synchronization
+ Your app updates instantly when data changes—no page refresh needed
-### Key Features
+### Key features
-- **Automated Migrations**: Generate and execute SQL migrations with rollback support
-- **Row Level Security**: Automatic RLS policy creation and management
-- **TypeScript Integration**: Auto-generated types from your database schema
-- **Real-time Subscriptions**: Live data updates without manual polling
-- **Authentication Flows**: Complete user registration, login, and session management
-- **Edge Functions**: Serverless functions for API endpoints and integrations
-- **File Storage**: Built-in file upload and management capabilities
+- **Automated migrations**: Database changes are tracked and can be undone if something goes wrong
+- **Row Level Security (RLS)**: Rules that control who can see or edit specific data (for example, users can only see their own orders)
+- **TypeScript integration**: Your code editor knows exactly what data types to expect, reducing errors
+- **Real-time subscriptions**: Your app automatically updates when data changes—like seeing new messages appear without refreshing
+- **Authentication flows**: Complete user registration, login, and session management
+- **Edge functions**: Small programs that run on Supabase's servers to handle tasks like sending emails or processing payments
+- **File storage**: Built-in file upload and management for images, documents, and other files
- **Default Choice**: Supabase is the recommended database solution for CodinIT projects requiring advanced database
+ **Default choice**: Supabase is the recommended database solution for CodinIT projects requiring advanced database
features, authentication, or real-time capabilities.
-## Setup and Configuration
+## Setup and configuration
### Connecting to Supabase
@@ -52,7 +56,11 @@ The Supabase integration in CodinIT offers enterprise-grade database capabilitie
Share your Supabase URL and anon key when prompted
-### Environment Variables
+### Environment variables
+
+
+ **What are environment variables?** These are secret settings stored outside your code. They keep sensitive information like passwords and API keys safe, and let you use different settings for testing vs. your live app.
+
CodinIT automatically creates the necessary environment variables:
@@ -62,11 +70,10 @@ VITE_SUPABASE_ANON_KEY=your_supabase_anon_key
```
- **Security Note**: Never commit your Supabase service role key to version control. The anon key is safe for
- client-side use.
+ **Security note**: Never share or publish your Supabase service role key. The "anon key" is safe to use in your app's front-end code.
-### Connection Verification
+### Connection verification
Once connected, CodinIT will:
@@ -80,43 +87,40 @@ If connection fails, you'll receive guidance to:
- Check project permissions
- Ensure network connectivity
-## Database Operations
+## Database operations
+
+### Creating tables and schemas
-### Creating Tables and Schemas
+
+ **What is a table?** A table is like a spreadsheet in your database. Each table stores one type of information (like "users" or "orders"), with rows for each item and columns for each piece of data (like "name" or "email").
+
CodinIT automatically generates SQL migrations for database changes:
**Example Migration Creation:**
-```
-
-
- CREATE TABLE users (
- id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
- email text UNIQUE NOT NULL,
- created_at timestamptz DEFAULT now()
- );
-
-
-
- CREATE TABLE users (
- id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
- email text UNIQUE NOT NULL,
- created_at timestamptz DEFAULT now()
- );
-
-
+```sql
+-- Create users table migration
+CREATE TABLE users (
+ id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
+ email text UNIQUE NOT NULL,
+ created_at timestamptz DEFAULT now()
+);
```
### Row Level Security (RLS)
+
+ **What is Row Level Security?** RLS is like a security guard for your data. It automatically filters what each user can see or change. For example, you can set rules so users only see their own orders, not everyone else's.
+
+
All tables automatically get RLS enabled with appropriate policies:
```sql
--- Enable RLS
+-- Enable RLS (turn on the security guard)
ALTER TABLE users ENABLE ROW LEVEL SECURITY;
--- Create policies for authenticated users
+-- Create a rule: users can only read their own data
CREATE POLICY "Users can read own data"
ON users
FOR SELECT
@@ -124,19 +128,20 @@ CREATE POLICY "Users can read own data"
USING (auth.uid() = id);
```
-### Data Operations
+### Data operations
-**Querying Data:**
+**Querying data (getting information from your database):**
```typescript
-// CodinIT generates type-safe queries
+// Get all active users from the database
const { data: users } = await supabase.from('users').select('*').eq('status', 'active');
```
-**Real-time Subscriptions:**
+**Real-time subscriptions (automatically update when data changes):**
```typescript
-// Live data updates
+// Listen for any changes to the users table
+// When someone signs up or updates their profile, your app knows immediately
const channel = supabase
.channel('users')
.on(
diff --git a/integrations/vercel.mdx b/integrations/vercel.mdx
index 4ca7604..489ce73 100644
--- a/integrations/vercel.mdx
+++ b/integrations/vercel.mdx
@@ -1,6 +1,6 @@
---
title: Vercel
-description: Learn how to integrate Vercel to deploy your apps directly from your desktop
+description: Deploy AI-generated applications to Vercel directly from CodinIT with automatic optimization and intelligent configuration.
---
### Getting a Vercel Access Token
diff --git a/introduction/welcome.mdx b/introduction/welcome.mdx
index e5d63e0..dd8ff7f 100644
--- a/introduction/welcome.mdx
+++ b/introduction/welcome.mdx
@@ -1,36 +1,36 @@
---
title: "Welcome"
-description: "Your guide to AI-powered development with complete transparency and control"
+description: "CodinIT is an open-source AI coding assistant and AI-powered development environment. Build full-stack applications with AI."
---
-CodinIT is an open source AI coding agent that brings frontier AI models directly to your IDE. Unlike autocomplete tools, CodinIT is a true coding agent that can understand entire codebases, plan complex changes, and execute multi-step tasks.
+CodinIT is an open-source AI coding agent and AI-powered IDE that brings frontier large language models (LLMs) directly to your development environment. Unlike basic autocomplete tools, CodinIT understands entire codebases, generates production-ready code, plans complex software architecture, and executes multi-step development tasks as a true AI pair programmer.
-## Navigate the Docs
+## Navigate the AI development docs
-
- Start your journey with CodinIT - installation setup
+
+ Install CodinIT AI IDE and start building with AI code generation
-
- Set up your AI models and providers
+
+ Connect Claude, GPT-4, Gemini, and 18+ LLM providers for AI-powered development
-
- Master CodinIT's powerful features
+
+ Master AI prompting techniques for better code generation and software development
-
- Explore all the powerful features CodinIT has to offer
+
+ Explore AI pair programming, code completion, and intelligent refactoring tools
-## Community & Support
+## AI developer community & support
-Join thousands of developers using CodinIT to build better software faster.
+Join thousands of developers using AI-powered coding tools to build full-stack applications faster with machine learning assistance.
-
- Contribute to the open source project
+
+ Contribute to the open-source AI development platform on GitHub
-
- Help us improve by reporting problems
+
+ Help improve AI code generation quality and IDE performance
diff --git a/logo/icon.webp b/logo/icon.webp
deleted file mode 100644
index 0a0dc99..0000000
Binary files a/logo/icon.webp and /dev/null differ
diff --git a/logo/logo-dark.webp b/logo/logo-dark.webp
deleted file mode 100644
index fa2f97c..0000000
Binary files a/logo/logo-dark.webp and /dev/null differ
diff --git a/logo/logo.png b/logo/logo.png
new file mode 100644
index 0000000..0f45b9c
Binary files /dev/null and b/logo/logo.png differ
diff --git a/logo/logo.svg b/logo/logo.svg
new file mode 100644
index 0000000..750e601
--- /dev/null
+++ b/logo/logo.svg
@@ -0,0 +1,15 @@
+
+
diff --git a/mcp/mcp-overview.mdx b/mcp/mcp-overview.mdx
index f65b602..9bbdef6 100644
--- a/mcp/mcp-overview.mdx
+++ b/mcp/mcp-overview.mdx
@@ -1,361 +1,259 @@
---
title: MCP Integration
-description: Model Context Protocol integration for connecting external services and tools
+description: Connect CodinIT to other services and tools
---
-The Model Context Protocol (MCP) integration system enables seamless connection between your development environment and external services, APIs, and tools. MCP provides a standardized way to extend your application's capabilities through secure, approved tool executions.
+MCP (Model Context Protocol) lets CodinIT connect to other services like databases, payment systems, and more. Think of it like adding plugins to make CodinIT more powerful.
-## Overview
+## What is MCP?
-MCP allows you to connect to various external services like databases, payment processors, analytics platforms, and AI services. Each connection provides tools that can be executed with proper approval workflows, ensuring security while maintaining productivity.
+MCP is a standard way for CodinIT to talk to outside services. For example, you can connect a database to save user data, or connect Stripe to process payments.
-
- Connect to databases, APIs, and external services
+
+ Link to databases, APIs, and other tools
-
- Execute approved operations with security controls
+
+ Do things in those services safely
-
- Pre-configured integrations for popular services
+
+ Pre-built connections for popular services
-## System Architecture
+## How It Works
-
-
- ### Main System Components
+### Main Parts
- The MCP system consists of several interconnected components working together:
+MCP has a few main parts:
-
-
- Central hub for managing all MCP connections and activities
-
-
- Pre-configured templates for popular services and APIs
-
-
- Catalog of all available tools from connected servers
-
-
- Complete log of all tool executions and their results
-
-
-
-
-
-
- ### Connection Protocols
-
- MCP supports multiple connection protocols for different use cases:
-
-
- STDIO - Local Processes
- SSE - Server-Sent Events
- HTTP - REST APIs
-
+
+
+ Where you manage all your connections
+
+
+ Ready-made connections you can use
+
+
+ All the actions you can do
+
+
+ Record of what you've done
+
+
- **STDIO Servers**: Local command-line tools and scripts
- **SSE Servers**: Real-time streaming connections
- **HTTP Servers**: RESTful API integrations
+### Types of Connections
-
+MCP can connect in different ways:
-
- ### Approval & Security
+- **STDIO**: Local tools on your computer
+- **SSE**: Real-time streaming connections
+- **HTTP**: Web-based APIs
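
As a sketch, a STDIO server entry in an MCP client configuration commonly looks like the following. The exact schema depends on the client, and the names here are illustrative:

```json
{
  "mcpServers": {
    "my-local-tool": {
      "command": "node",
      "args": ["./server.js"],
      "env": { "API_KEY": "your-key-here" }
    }
  }
}
```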
- Built-in security controls ensure safe tool execution:
+### Staying Safe
- - **Tool Approval**: Manual approval required for tool execution
- - **Auto-Approval**: Trusted tools can be pre-approved
- - **Timeout Protection**: Automatic denial after timeout period
- - **Parameter Validation**: Input validation before execution
+MCP keeps you safe:
-
-
+- **Approval needed**: You have to approve actions before they run
+- **Auto-approve**: You can trust certain actions to run automatically
+- **Timeout**: Actions are cancelled if they take too long
+- **Check inputs**: Makes sure the data is valid before running
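
The timeout behavior can be sketched as a small approval gate — a hypothetical helper, not CodinIT's actual API. A pending tool call is denied automatically if nobody approves it before the timeout fires:

```typescript
// Hypothetical sketch of an approval gate with auto-deny on timeout.
function approvalGate(timeoutMs: number) {
  let settle: (approved: boolean) => void = () => {};
  const decision = new Promise<boolean>((resolve) => { settle = resolve; });
  // Auto-deny if nobody responds in time (a promise only settles once,
  // so a later approve/deny call has no effect).
  const timer = setTimeout(() => settle(false), timeoutMs);
  return {
    approve() { clearTimeout(timer); settle(true); },
    deny() { clearTimeout(timer); settle(false); },
    decision,
  };
}
```

With a 30-second timeout, `approvalGate(30_000).decision` resolves to `false` unless `approve()` is called first.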
-## Getting Started
+## How to Start
- Click the MCP integrations button in the toolbar to open the main panel
- Explore pre-configured templates for popular services
- Fill in required credentials and connection details
- Verify the integration works before saving
- Use approved tools in your development workflow
+ Click the MCP button in the toolbar
+ See what ready-made connections are available
+ Enter your login info and settings
+ Make sure it works before saving
+ Start using the tools in your project
-## Marketplace Integrations
-
-
-
- ### Database & Storage
-
- Connect to databases and storage services:
-
- - **Supabase**: PostgreSQL with real-time subscriptions
- - **Custom Databases**: Any PostgreSQL-compatible database
- - **File Systems**: Local and remote file operations
-
-
-
-
- ### Payment Processing
-
- Integrate payment and subscription services:
-
- - **Stripe**: Payment processing and subscriptions
- - **Custom Payment APIs**: Third-party payment processors
+## Available Services
-
+### Databases
-
- ### Analytics Platforms
+Save and get data:
- Connect to analytics and monitoring services:
+- **Supabase**: Database with real-time updates
+- **Custom Databases**: Any PostgreSQL database
+- **File Systems**: Save files locally or online
- - **PostHog**: Product analytics and feature flags
- - **Custom Analytics**: Internal analytics platforms
+### Payments
-
+Handle money:
-
- ### Communication Tools
+- **Stripe**: Accept payments and subscriptions
+- **Custom Payment APIs**: Other payment services
- Integrate messaging and collaboration platforms:
+### Analytics
- - **Slack**: Team communication and notifications
- - **Custom APIs**: Internal communication systems
+Track how people use your app:
-
+- **PostHog**: See what users do
+- **Custom Analytics**: Your own tracking systems
-
- ### Development Services
+### Communication
- Connect to development and deployment tools:
+Send messages:
- - **GitHub**: Repository management and code collaboration
- - **Vercel**: Deployment and hosting platform
- - **21st.dev**: Component generation and development tools
+- **Slack**: Team chat and notifications
+- **Custom APIs**: Your own messaging systems
-
+### Development
-
- ### Artificial Intelligence
+Build and deploy:
- Integrate AI and machine learning services:
+- **GitHub**: Save code and work with others
+- **Vercel**: Put your app online
+- **21st.dev**: Generate components
- - **Claude Code**: AI-powered code generation
- - **OpenAI**: GPT models and AI capabilities
- - **Custom AI APIs**: Proprietary AI services
+### AI Services
-
+Use artificial intelligence:
-
- ### Productivity Tools
+- **Claude Code**: AI that writes code
+- **OpenAI**: GPT models
+- **Custom AI APIs**: Your own AI services
- Connect to productivity and knowledge platforms:
+### Productivity
- - **HubSpot**: CRM and marketing automation
- - **Notion**: Knowledge base and documentation
- - **Custom Tools**: Internal productivity systems
+Get organized:
-
-
+- **HubSpot**: Manage customers
+- **Notion**: Take notes and organize
+- **Custom Tools**: Your own productivity apps
-## Server Management
+## Managing Connections
-
-
- ### Manual Server Configuration
+### Adding New Services
- Add custom MCP servers with full configuration control:
+Add your own custom connections:
- - **Server Types**: STDIO, SSE, or HTTP connections
- - **Authentication**: API keys, tokens, and custom headers
- - **Environment Variables**: Runtime configuration
- - **Connection Testing**: Verify configuration before saving
+- Choose connection type (STDIO, SSE, or HTTP)
+- Enter your API keys or tokens
+- Set up any special settings
+- Test it before saving
-
+### Checking Status
-
- ### Connection Health
+See how your connections are doing:
- Monitor the status of all connected servers:
+- Check if they're connected
+- See how many tools are available
+- Get alerts if something's wrong
+- Automatic reconnection if disconnected
- - **Connection Status**: Real-time availability checking
- - **Tool Count**: Number of available tools per server
- - **Error Reporting**: Connection and authentication issues
- - **Auto-Reconnection**: Automatic retry on connection loss
+### Editing Connections
-
+Manage your existing connections:
-
- ### Configuration Management
+- Update login info
+- Delete ones you don't use
+- Manage multiple at once
+- Save and restore settings
- Manage existing server configurations:
+## Using Tools
- - **Edit Settings**: Update credentials and configuration
- - **Delete Servers**: Remove unused integrations
- - **Bulk Operations**: Manage multiple servers simultaneously
- - **Export/Import**: Backup and restore configurations
+### Finding Tools
-
-
+Look through available actions:
-## Tool Execution Workflow
+- See all available tools
+- Search by name or type
+- See what info each tool needs
+- Read instructions for each tool
-
-
- ### Finding Available Tools
+### Running Tools Safely
- Browse and search through all available tools:
+Execute tools with safety checks:
- - **Tool Registry**: Complete catalog of available operations
- - **Search & Filter**: Find tools by name, server, or category
- - **Parameter Preview**: View required and optional parameters
- - **Documentation**: Access tool descriptions and usage examples
+- Review before running
+- Check that inputs are correct
+- Auto-cancel after 30 seconds
+- Trust certain tools to run automatically
-
+### Tracking Usage
-
- ### Secure Tool Execution
+See what you've done:
- Execute tools with built-in security controls:
+- History of all actions
+- How long things took
+- Errors and problems
+- Most-used tools
- - **Approval Interface**: Review tool calls before execution
- - **Parameter Validation**: Ensure correct input values
- - **Timeout Protection**: Automatic denial after 30 seconds
- - **Trusted Tools**: Pre-approve frequently used operations
-
-
-
-
- ### Track Tool Usage
-
- Monitor and analyze tool execution patterns:
-
- - **Execution History**: Complete log of all tool calls
- - **Performance Metrics**: Execution time and success rates
- - **Error Analysis**: Failed executions and error details
- - **Usage Statistics**: Most frequently used tools
-
-
-
-
-## Advanced Features
-
-
- Auto-Approval
- Bulk Operations
- Export/Import
- Real-time Monitoring
-
-
-### Advanced Capabilities
-
-- **Auto-Approval**: Pre-approve trusted tools for instant execution
-- **Bulk Operations**: Manage multiple servers and tools simultaneously
-- **Data Export**: Export execution history and configurations
-- **Real-time Updates**: Live status monitoring and notifications
-
-## Best Practices
+## Tips for Success
- **Security First**: Always review tool parameters before approval, especially for operations that modify data or
- execute commands.
+ **Safety First**: Always check what a tool will do before approving it, especially if it changes data or runs commands.
- **Start Small**: Begin with marketplace templates for popular services, then add custom integrations as needed.
+ **Start Simple**: Use ready-made connections first, then add custom ones later.
- **Credential Management**: Store API keys and tokens securely. MCP configurations are stored locally and never
- transmitted to external servers.
+ **Keep Secrets Safe**: Store API keys securely. MCP configurations stay on your computer and are never sent to outside servers.
-## Troubleshooting
-
-
-
- ### Server Connection Problems
-
- Common connection issues and solutions:
-
- - **Network Errors**: Check internet connectivity and firewall settings
- - **Authentication Failures**: Verify API keys and credentials
- - **Server Unavailable**: Check service status and retry later
- - **Configuration Errors**: Validate server URLs and parameters
-
-
-
-
- ### Tool Execution Problems
-
- Issues with tool execution and approval:
+## Fixing Problems
- - **Approval Timeouts**: Tools auto-deny after 30 seconds
- - **Parameter Errors**: Check required field validation
- - **Permission Issues**: Verify API permissions and scopes
- - **Rate Limiting**: Respect API rate limits and retry logic
+### Can't Connect
-
+Common connection problems:
-
- ### Performance Optimization
+- **No internet**: Check your internet and firewall
+- **Wrong password**: Check your API keys
+- **Service down**: Check if the service is working
+- **Wrong settings**: Double-check your URLs and settings
- Improve MCP system performance:
+### Tools Won't Run
- - **Connection Pooling**: Reuse connections for better performance
- - **Caching**: Cache frequently accessed data
- - **Batch Operations**: Group multiple operations when possible
- - **Monitoring**: Track performance metrics and bottlenecks
+Problems running tools:
-
-
+- **Timeout**: Tools cancel after 30 seconds
+- **Wrong info**: Check required fields
+- **No permission**: Make sure your API key has permission
+- **Too many requests**: Wait a bit and try again
-## Integration Examples
+### Running Slow
-
-
- ### Connecting to Supabase
+Make things faster:
- ```typescript
- // Example: Query user data from Supabase
- const users = await mcp.executeTool('supabase', 'query_users', {
- table: 'users',
- filter: { status: 'active' }
- });
- ```
+- Reuse connections
+- Save frequently-used data
+- Do multiple things at once when possible
+- Watch for slow spots
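One way to "save frequently-used data" is a small time-based cache, sketched here as an illustrative helper rather than a built-in CodinIT feature:

```typescript
// Illustrative time-based cache: repeated calls within `ttlMs`
// reuse the last result instead of hitting the server again.

function cached<T>(fetcher: () => Promise<T>, ttlMs: number): () => Promise<T> {
  let value: T | undefined;
  let fetchedAt = 0;
  return async () => {
    if (value === undefined || Date.now() - fetchedAt > ttlMs) {
      value = await fetcher();
      fetchedAt = Date.now();
    }
    return value;
  };
}
```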
-
+## Examples
-
- ### Stripe Integration
+### Database Example
- ```typescript
- // Example: Process a payment
- const payment = await mcp.executeTool('stripe', 'create_payment_intent', {
- amount: 1000,
- currency: 'usd',
- customer_id: 'cus_123'
- });
- ```
+```typescript
+// Get user data from Supabase
+const users = await mcp.executeTool('supabase', 'query_users', {
+ table: 'users',
+ filter: { status: 'active' }
+});
+```
-
+### Payment Example
-
- ### Claude Code Integration
+```typescript
+// Process a payment with Stripe
+const payment = await mcp.executeTool('stripe', 'create_payment_intent', {
+ amount: 1000,
+ currency: 'usd',
+ customer_id: 'cus_123'
+});
+```
- ```typescript
- // Example: Generate code with AI
- const code = await mcp.executeTool('claude-code', 'generate_component', {
- component_type: 'React',
- description: 'User profile form'
- });
- ```
+### AI Example
-
-
+```typescript
+// Generate code with AI
+const code = await mcp.executeTool('claude-code', 'generate_component', {
+ component_type: 'React',
+ description: 'User profile form'
+});
+```
diff --git a/model-config/context-windows.mdx b/model-config/context-windows.mdx
index b4200d6..bed2279 100644
--- a/model-config/context-windows.mdx
+++ b/model-config/context-windows.mdx
@@ -1,138 +1,144 @@
---
title: "Context Window Guide"
-description: "Understanding and managing AI model context windows"
+description: "How much the AI can remember at once"
---
## What is a Context Window?
-A context window is the maximum amount of text an AI model can process at once. Think of it as the model's "working memory" - it determines how much of your conversation and code the model can consider when generating responses.
+A context window is how much text the AI can look at and remember at one time. Think of it like the AI's short-term memory.
- **Key Point**: Larger context windows allow the model to understand more of your codebase at once, but may increase costs and response times.
+ **What are tokens?** Tokens are small pieces of text. About 4 letters = 1 token, so a word like "hamburger" might be split into two tokens, such as "ham" and "burger". When we say "128K tokens," that means the AI can remember about 96,000 words at once.
-### Quick Reference
+
+ **Important**: Bigger context windows let the AI see more of your project, but they cost more money and take longer.
+
-| Size | Tokens | Approximate Words | Use Case |
+### Size Guide
+
+| Size | Tokens | About How Many Words | Good For |
| --------------- | ------ | ----------------- | ------------------------- |
-| **Small** | 8K-32K | 6,000-24,000 | Single files, quick fixes |
-| **Medium** | 128K | ~96,000 | Most coding projects |
-| **Large** | 200K | ~150,000 | Complex codebases |
-| **Extra Large** | 400K+ | ~300,000+ | Entire applications |
-| **Massive** | 1M+ | ~750,000+ | Multi-project analysis |
+| **Small** | 8K-32K | 6,000-24,000 | One file, quick fixes |
+| **Medium** | 128K | ~96,000 | Most projects |
+| **Large** | 200K | ~150,000 | Big projects |
+| **Extra Large** | 400K+ | ~300,000+ | Whole apps |
+| **Huge** | 1M+ | ~750,000+ | Multiple projects |
-### Model Context Windows
+### Different AI Models
-| Model | Context Window | Effective Window\* | Notes |
+| AI Model | Context Window | Actually Works Well\* | Notes |
| --------------------- | -------------- | ------------------ | ------------------------------ |
-| **Claude Sonnet 4.5** | 1M tokens | ~500K tokens | Best quality at high context |
-| **GPT-5** | 400K tokens | ~300K tokens | Three modes affect performance |
-| **Gemini 2.5 Pro** | 1M+ tokens | ~600K tokens | Excellent for documents |
-| **DeepSeek V3** | 128K tokens | ~100K tokens | Optimal for most tasks |
-| **Qwen3 Coder** | 256K tokens | ~200K tokens | Good balance |
+| **Claude Sonnet 4.5** | 1M tokens | ~500K tokens | Best for big projects |
+| **GPT-5** | 400K tokens | ~300K tokens | Has three different modes |
+| **Gemini 2.5 Pro** | 1M+ tokens | ~600K tokens | Great for reading documents |
+| **DeepSeek V3** | 128K tokens | ~100K tokens | Good for most things |
+| **Qwen3 Coder** | 256K tokens | ~200K tokens | Nice balance |
-\*Effective window is where model maintains high quality
+\*The AI works best up to this point. After that, it might start forgetting earlier parts of your chat.
-### What Counts Toward Context
+### What Uses Up the Context Window
-1. **Your current conversation** - All messages in the chat
-2. **File contents** - Any files you've shared or CodinIT has read
-3. **Tool outputs** - Results from executed commands
-4. **System prompts** - CodinIT's instructions (minimal impact)
+1. **Your messages** - Everything you and the AI say in the chat
+2. **Your files** - Files the AI reads from your project
+3. **Command results** - Output from commands that run
+4. **System instructions** - CodinIT's background instructions (uses very little)
-### Optimization Strategies
+### How to Save Context Space
-#### 1. Start Fresh for New Features
+#### 1. Start fresh for new features
```mdx
-/new - Creates a new task with clean context
+/new - Start a new chat with empty context
```
-Benefits:
+Why this helps:
-- Maximum context available
-- No irrelevant history
-- Better model focus
+- You get the full context window
+- No old, irrelevant messages
+- AI focuses better
-#### 2. Use @ Mentions Strategically
+#### 2. Only include files you need
-Instead of including entire files:
+Instead of including everything:
-- `@filename.ts` - Include only when needed
-- Use search instead of reading large files
-- Reference specific functions rather than whole files
+- `@filename.ts` - Only mention files when necessary
+- Use search instead of reading big files
+- Reference specific parts, not whole files
-#### 3. Enable Auto-compact
+#### 3. Turn on auto-compact
-CodinIT can automatically summarize long conversations:
+CodinIT can automatically shorten long conversations:
-- Settings → Features → Auto-compact
-- Preserves important context
-- Reduces token usage
+- Go to Settings → Features → Auto-compact
+- Keeps important stuff
+- Uses fewer tokens
-## Context Window Warnings
+## Warning Signs
-### Signs You're Hitting Limits
+### How to Tell You're Running Out of Space
-| Warning Sign | What It Means | Solution |
+| Warning Sign | What It Means | What to Do |
| ----------------------------- | ----------------------------- | ------------------------------------- |
-| **"Context window exceeded"** | Hard limit reached | Start new task or enable auto-compact |
-| **Slower responses** | Model struggling with context | Reduce included files |
-| **Repetitive suggestions** | Context fragmentation | Summarize and start fresh |
-| **Missing recent changes** | Context overflow | Use checkpoints to track changes |
+| **"Context window exceeded"** | You hit the limit | Start a new chat or turn on auto-compact |
+| **Slow responses** | AI is struggling | Include fewer files |
+| **AI repeats itself** | Context is too fragmented | Start fresh |
+| **AI forgets recent changes** | Too much context | Use checkpoints |
-### Best Practices by Project Size
+### Tips by Project Size
-#### Small Projects (\< 50 files)
+#### Small projects (less than 50 files)
-- Any model works well
-- Include relevant files freely
-- No special optimization needed
+- Any AI model works fine
+- Include files freely
+- Don't worry about optimization
-#### Medium Projects (50-500 files)
+#### Medium projects (50-500 files)
-- Use 128K+ context models
-- Include only working set of files
-- Clear context between features
+- Use AI with 128K+ context
+- Only include files you're working on
+- Start new chats between features
-#### Large Projects (500+ files)
+#### Large projects (500+ files)
-- Use 200K+ context models
-- Focus on specific modules
-- Use search instead of reading many files
-- Break work into smaller tasks
+- Use AI with 200K+ context
+- Focus on one part at a time
+- Search instead of reading many files
+- Break work into smaller pieces
-## Advanced Context Management
+## Advanced Tips
-### Plan/Act Mode Optimization
+### Use Plan/Act Mode Smartly
-Leverage Plan/Act mode for better context usage:
+Use different AI models for different tasks:
-- **Plan Mode**: Use smaller context for discussion
-- **Act Mode**: Include necessary files for implementation
+- **Plan Mode**: Use cheaper AI for talking and planning
+- **Act Mode**: Use better AI when writing code
-Configuration:
+Example:
```
-Plan Mode: DeepSeek V3 (128K) - Lower cost planning
-Act Mode: Claude Sonnet (1M) - Maximum context for coding
+Plan Mode: DeepSeek V3 (128K) - Cheap for planning
+Act Mode: Claude Sonnet (1M) - Better for coding
```
-### Context Pruning Strategies
+### How CodinIT Saves Space
+
+CodinIT can automatically remove less important text:
-1. **Temporal Pruning**: Remove old conversation parts
-2. **Semantic Pruning**: Keep only relevant code sections
-3. **Hierarchical Pruning**: Maintain high-level structure, prune details
+1. **Remove old messages**: Delete old parts of the conversation
+2. **Keep relevant code**: Only keep code related to what you're doing now
+3. **Keep summaries**: Keep the main ideas but remove details
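A minimal sketch of strategy 1 (removing old messages), assuming a simple message shape — CodinIT's real pruning is more sophisticated:

```typescript
// Temporal pruning sketch: keep system instructions plus the most
// recent `keepLast` messages, dropping older conversation turns.

interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

function pruneOldest(history: Message[], keepLast: number): Message[] {
  const system = history.filter((m) => m.role === "system");
  const chat = history.filter((m) => m.role !== "system");
  return [...system, ...chat.slice(-keepLast)];
}
```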
-### Token Counting Tips
+### Counting Tokens
-#### Rough Estimates
+#### Quick math
-- **1 token ≈ 0.75 words**
-- **1 token ≈ 4 characters**
+- **1 token ≈ 3/4 of a word** (1,000 tokens = about 750 words)
+- **1 token ≈ 4 letters**
- **100 lines of code ≈ 500-1000 tokens**
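The quick math above can be turned into a rough estimator — estimates only, since real tokenizers differ by model:

```typescript
// Rough token math from the rules of thumb above:
// tokens ≈ characters / 4, and words ≈ tokens × 0.75.

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function tokensToWords(tokens: number): number {
  return Math.round(tokens * 0.75);
}
```

For example, a 128K-token window works out to roughly 96,000 words, matching the size guide above.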
-#### File Size Guidelines
+#### File sizes
| File Type | Tokens per KB |
| -------------- | ------------- |
@@ -141,34 +147,34 @@ Act Mode: Claude Sonnet (1M) - Maximum context for coding
| **Markdown** | ~200-300 |
| **Plain text** | ~200-250 |
-## Context Window FAQ
+## Common Questions
-### Q: Why do responses get worse with very long conversations?
+### Q: Why does the AI get worse with long conversations?
-**A:** Models can lose focus with too much context. The "effective window" is typically 50-70% of the advertised limit.
+**A:** The AI loses focus when there's too much to remember. It works best when you use only 50-70% of its maximum capacity.
-### Q: Should I use the largest context window available?
+### Q: Should I always use the biggest context window?
-**A:** Not always. Larger contexts increase cost and can reduce response quality. Match the context to your task size.
+**A:** No. Bigger contexts cost more and can make the AI worse. Use what you need for your task.
-### Q: How can I tell how much context I'm using?
+### Q: How do I know how much context I'm using?
-**A:** CodinIT shows token usage in the interface. Watch for the context meter approaching limits.
+**A:** CodinIT shows you a meter. Watch it to see when you're getting close to the limit.
-### Q: What happens when I exceed the context limit?
+### Q: What happens if I go over the limit?
-**A:** CodinIT will either:
+**A:** CodinIT will do one of these:
-- Automatically compact the conversation (if enabled)
-- Show an error and suggest starting a new task
-- Truncate older messages (with warning)
+- Automatically shorten the conversation (if you turned that on)
+- Show an error and tell you to start a new chat
+- Remove old messages (with a warning)
-## Recommendations by Use Case
+## What to Use When
-| Use Case | Recommended Context | Model Suggestion |
+| What You're Doing | Context Size | Which AI |
| ----------------------- | ------------------- | ----------------- |
| **Quick fixes** | 32K-128K | DeepSeek V3 |
-| **Feature development** | 128K-200K | Qwen3 Coder |
-| **Large refactoring** | 400K+ | Claude Sonnet 4.5 |
-| **Code review** | 200K-400K | GPT-5 |
-| **Documentation** | 128K | Any budget model |
\ No newline at end of file
+| **Building features** | 128K-200K | Qwen3 Coder |
+| **Big changes** | 400K+ | Claude Sonnet 4.5 |
+| **Reviewing code** | 200K-400K | GPT-5 |
+| **Writing docs** | 128K | Any cheap model |
\ No newline at end of file
diff --git a/model-config/model-comparison.mdx b/model-config/model-comparison.mdx
index 008b72b..ec90a87 100644
--- a/model-config/model-comparison.mdx
+++ b/model-config/model-comparison.mdx
@@ -1,88 +1,88 @@
---
title: "Model Comparison & Pricing"
-description: "Compare AI models by performance, features, and pricing"
+description: "Which AI to use and how much it costs"
---
-### Premium Models
+### Expensive but Good Models
-| Model | Provider | Context Window | Input Price\* | Output Price\* | Best For |
+| AI Model | Company | Memory Size | Reading Cost\* | Writing Cost\* | Best For |
| --------------------- | --------- | -------------- | ------------- | -------------- | -------------------------------------- |
-| **Claude Sonnet 4.5** | Anthropic | 1M tokens | \$3-6 | \$15-22.50 | Reliable tool usage, complex codebases |
-| **GPT-5** | OpenAI | 400K tokens | \$1.25 | \$10 | Latest OpenAI tech, three modes |
-| **Gemini 2.5 Pro** | Google | 1M+ tokens | TBD | TBD | Large codebases, document analysis |
-| **Qwen3 Coder** | Multiple | 256K tokens | \$0.20 | \$0.80 | Coding tasks, open source flexibility |
+| **Claude Sonnet 4.5** | Anthropic | 1M tokens | \$3-6 | \$15-22.50 | Complex projects, reliable results |
+| **GPT-5** | OpenAI | 400K tokens | \$1.25 | \$10 | Latest tech, three modes |
+| **Gemini 2.5 Pro** | Google | 1M+ tokens | TBD | TBD | Big projects, reading documents |
+| **Qwen3 Coder** | Multiple | 256K tokens | \$0.20 | \$0.80 | Writing code, open source |
-\*Per million tokens
+\*Per million tokens (about 750,000 words)
-### Budget Models
+### Cheap Models
-| Model | Provider | Context Window | Input Price\* | Output Price\* | Notes |
+| AI Model | Company | Memory Size | Reading Cost\* | Writing Cost\* | Notes |
| ---------------- | -------- | -------------- | ------------- | -------------- | ------------------------------- |
-| **DeepSeek V3** | DeepSeek | 128K tokens | \$0.14 | \$0.28 | Great value for daily coding |
-| **DeepSeek R1** | DeepSeek | 128K tokens | \$0.55 | \$2.19 | Budget reasoning champion |
-| **Qwen3 32B** | Multiple | 128K tokens | Varies | Varies | Open source, multiple providers |
-| **Z AI GLM 4.5** | Z AI | 128K tokens | TBD | TBD | MIT licensed, hybrid reasoning |
+| **DeepSeek V3** | DeepSeek | 128K tokens | \$0.14 | \$0.28 | Great value for everyday coding |
+| **DeepSeek R1** | DeepSeek | 128K tokens | \$0.55 | \$2.19 | Cheap but smart |
+| **Qwen3 32B** | Multiple | 128K tokens | Varies | Varies | Open source, many providers |
+| **Z AI GLM 4.5** | Z AI     | 128K tokens    | TBD           | TBD            | MIT-licensed, hybrid reasoning  |
-\*Per million tokens
+\*Per million tokens (about 750,000 words)
-## Performance Comparison
+## Which One to Pick
-| Priority | Recommended Model | Why |
+| What Matters Most | Best AI | Why |
| ----------- | ----------------------- | ------------------------------- |
-| **Speed** | Qwen3 Coder on Cerebras | Fastest inference available |
-| **Quality** | Claude Sonnet 4.5 | Most reliable for complex tasks |
-| **Balance** | DeepSeek V3 | Good quality at low cost |
+| **Speed** | Qwen3 Coder on Cerebras | Fastest |
+| **Quality** | Claude Sonnet 4.5 | Most reliable for hard tasks |
+| **Value** | DeepSeek V3 | Good quality, low cost |
-### Tool Reliability
+### How Reliable Are They?
-Models ranked by tool usage reliability:
+Ranked from most to least reliable:
-1. **Claude Sonnet 4.5** - Most reliable tool execution
-2. **GPT-5** - Excellent but occasional formatting issues
-3. **Gemini 2.5 Pro** - Good for standard tools
-4. **DeepSeek V3** - Reliable for basic tools
-5. **Qwen3 variants** - May need retry for complex tools
+1. **Claude Sonnet 4.5** - Almost always works correctly
+2. **GPT-5** - Very good but sometimes has formatting issues
+3. **Gemini 2.5 Pro** - Good for normal tasks
+4. **DeepSeek V3** - Reliable for simple tasks
+5. **Qwen3 variants** - Might need to try again for hard tasks
-### Typical Task Costs
+### What Things Cost
-| Task Type | Token Usage (avg) | Claude Sonnet | DeepSeek V3 | Difference |
+| What You're Doing | How Much Text | Claude Sonnet | DeepSeek V3 | Difference |
| -------------------------- | ----------------- | ------------- | ----------- | ----------- |
-| **Simple Bug Fix** | 5K tokens | \$0.05 | \$0.001 | 50x cheaper |
-| **Feature Implementation** | 50K tokens | \$0.50 | \$0.01 | 50x cheaper |
-| **Large Refactoring** | 200K tokens | \$2.00 | \$0.04 | 50x cheaper |
+| **Fix a small bug** | 5K tokens | \$0.05 | \$0.001 | 50x cheaper |
+| **Add a feature** | 50K tokens | \$0.50 | \$0.01 | 50x cheaper |
+| **Big code changes** | 200K tokens | \$2.00 | \$0.04 | 50x cheaper |
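Since prices are quoted per million tokens, a task's cost is just a proportion. A quick sketch (the figures used below are the illustrative prices from the tables on this page):

```typescript
// Cost = (tokens used / 1,000,000) × price per million tokens.

function taskCostUSD(tokens: number, pricePerMillionUSD: number): number {
  return (tokens / 1_000_000) * pricePerMillionUSD;
}
```

At \$10 per million tokens, a 50K-token feature costs about \$0.50; at \$0.20 per million, the same feature costs about \$0.01.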
-### Monthly Budget Estimates
+### Monthly Budget Examples
-| Budget | Claude Usage | DeepSeek Usage | Mixed Strategy |
+| Your Budget | Using Claude | Using DeepSeek | Smart Mix |
| --------------- | ------------- | --------------- | --------------------------------- |
-| **\$10/month** | ~20 features | ~1000 features | Plan: DeepSeek, Act: Claude |
-| **\$50/month** | ~100 features | ~5000 features | Critical: Claude, Rest: DeepSeek |
-| **\$100/month** | ~200 features | ~10000 features | Complex: Claude, Simple: DeepSeek |
+| **\$10/month** | ~20 features | ~1000 features | Plan with DeepSeek, code with Claude |
+| **\$50/month** | ~100 features | ~5000 features | Important stuff: Claude, rest: DeepSeek |
+| **\$100/month** | ~200 features | ~10000 features | Hard tasks: Claude, easy tasks: DeepSeek |
-### Provider Features
+### Where to Get AI
-| Provider | Models Available | Billing | API Stability | Support |
+| Provider | AI Models Available | How You Pay | Reliability | Help |
| ------------------ | ---------------- | ------------ | ------------- | --------- |
-| **CodinIT** | Multiple | Credit-based | High | In-app |
-| **Anthropic** | Claude only | Usage-based | High | Email |
-| **OpenRouter** | 100+ models | Usage-based | High | Discord |
-| **OpenAI** | GPT only | Usage-based | High | Forum |
-| **Local (Ollama)** | Open source | Free | N/A | Community |
+| **CodinIT** | Many | Buy credits | High | In the app |
+| **Anthropic** | Claude only | Pay as you go | High | Email |
+| **OpenRouter** | 100+ models | Pay as you go | High | Discord chat |
+| **OpenAI** | GPT only | Pay as you go | High | Online forum |
+| **Local (Ollama)** | Open source | Free | Depends on your computer | Community help |
-### Provider Selection Guide
+### Which Provider to Choose
-Choose your provider based on:
+Pick based on what you need:
-- **Simplicity**: CodinIT (no API key management)
-- **Variety**: OpenRouter (access to all models)
-- **Direct Access**: Individual providers (Anthropic, OpenAI)
-- **Privacy**: Ollama or LM Studio (local models)
+- **Easy to use**: CodinIT (no setup needed)
+- **Lots of choices**: OpenRouter (access to all AI models)
+- **Direct**: Anthropic or OpenAI (go straight to the source)
+- **Private**: Ollama or LM Studio (runs on your computer)
-## Community Usage Stats
+## What Others Are Using
-Real-time usage data from the CodinIT community:
+What the CodinIT community uses:
-- View current trends at [OpenRouter's CodinIT stats](https://openrouter.ai/apps?url=https%3A%2F%2Fcodinit.dev%2F)
+- See live stats at [OpenRouter's CodinIT page](https://openrouter.ai/apps?url=https%3A%2F%2Fcodinit.dev%2F)
- Most popular: Claude Sonnet 4.5 (40%)
-- Rising star: DeepSeek V3 (25%)
-- Budget favorite: Qwen3 variants (20%)
\ No newline at end of file
+- Growing fast: DeepSeek V3 (25%)
+- Budget pick: Qwen3 variants (20%)
\ No newline at end of file
diff --git a/openapi.json b/openapi.json
new file mode 100644
index 0000000..7bcf468
--- /dev/null
+++ b/openapi.json
@@ -0,0 +1,1670 @@
+{
+ "openapi": "3.0.3",
+ "info": {
+ "title": "CodinIT API",
+ "version": "1.1.23",
+ "description": "Full-stack development platform API with support for 19+ LLM providers, deployment services, and system integration.",
+ "license": {
+ "name": "MIT",
+ "url": "https://github.com/codinit-dev/codinit-dev/blob/main/LICENSE"
+ },
+ "contact": {
+ "url": "https://github.com/codinit-dev/codinit-dev"
+ }
+ },
+ "servers": [
+ {
+ "url": "http://localhost:5173",
+ "description": "Local development server"
+ },
+ {
+ "url": "https://{domain}",
+ "description": "Production server",
+ "variables": {
+ "domain": {
+ "default": "codinit.dev"
+ }
+ }
+ }
+ ],
+ "tags": [
+ {
+ "name": "Chat & LLM",
+ "description": "AI chat and language model operations"
+ },
+ {
+ "name": "Deployment",
+ "description": "Deployment services for Netlify and Vercel"
+ },
+ {
+ "name": "Database",
+ "description": "Supabase database integration"
+ },
+ {
+ "name": "System",
+ "description": "System information and diagnostics"
+ },
+ {
+ "name": "MCP",
+ "description": "Model Context Protocol management"
+ },
+ {
+ "name": "Utilities",
+ "description": "Utility endpoints for configuration and updates"
+ },
+ {
+ "name": "Proxy",
+ "description": "CORS-enabled proxy services"
+ }
+ ],
+ "paths": {
+ "/api/chat": {
+ "post": {
+ "tags": ["Chat & LLM"],
+ "summary": "Stream chat responses with AI models",
+ "description": "Initiates a streaming chat session with the selected AI model and provider. Returns server-sent events.",
+ "operationId": "streamChat",
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/ChatRequest"
+ }
+ }
+ }
+ },
+ "responses": {
+ "200": {
+ "description": "Streaming response",
+ "content": {
+ "text/event-stream": {
+ "schema": {
+ "type": "string"
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/ValidationError"
+ },
+ "401": {
+ "$ref": "#/components/responses/AuthError"
+ },
+ "413": {
+ "$ref": "#/components/responses/ContextTooLarge"
+ },
+ "429": {
+ "$ref": "#/components/responses/RateLimitError"
+ },
+ "504": {
+ "$ref": "#/components/responses/TimeoutError"
+ }
+ }
+ }
+ },
+ "/api/llmcall": {
+ "post": {
+ "tags": ["Chat & LLM"],
+ "summary": "Direct LLM invocation",
+ "description": "Execute a direct call to a language model with optional streaming support.",
+ "operationId": "callLLM",
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/LLMCallRequest"
+ }
+ }
+ }
+ },
+ "responses": {
+ "200": {
+ "description": "LLM response",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/LLMCallResponse"
+ }
+ },
+ "text/event-stream": {
+ "schema": {
+ "type": "string"
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/ValidationError"
+ },
+ "401": {
+ "$ref": "#/components/responses/AuthError"
+ }
+ }
+ }
+ },
+ "/api/enhancer": {
+ "post": {
+ "tags": ["Chat & LLM"],
+ "summary": "Enhance prompts using AI",
+ "description": "Improves user prompts using AI model suggestions with streaming response.",
+ "operationId": "enhancePrompt",
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "required": ["prompt"],
+ "properties": {
+ "prompt": {
+ "type": "string",
+ "description": "Original prompt to enhance"
+ }
+ }
+ }
+ }
+ }
+ },
+ "responses": {
+ "200": {
+ "description": "Enhanced prompt stream",
+ "content": {
+ "text/event-stream": {
+ "schema": {
+ "type": "string"
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/ValidationError"
+ }
+ }
+ }
+ },
+ "/api/models": {
+ "get": {
+ "tags": ["Chat & LLM"],
+ "summary": "List all available models and providers",
+ "description": "Returns a list of all supported AI models and their providers.",
+ "operationId": "listModels",
+ "responses": {
+ "200": {
+ "description": "List of models and providers",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "properties": {
+ "providers": {
+ "type": "array",
+ "items": {
+ "$ref": "#/components/schemas/ProviderInfo"
+ }
+ },
+ "models": {
+ "type": "array",
+ "items": {
+ "$ref": "#/components/schemas/ModelInfo"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "/api/models/{provider}": {
+ "get": {
+ "tags": ["Chat & LLM"],
+ "summary": "Get models for a specific provider",
+ "description": "Returns all available models for the specified provider.",
+ "operationId": "getProviderModels",
+ "parameters": [
+ {
+ "name": "provider",
+ "in": "path",
+ "required": true,
+ "schema": {
+ "type": "string"
+ },
+ "description": "Provider name (e.g., anthropic, openai, google)"
+ }
+ ],
+ "responses": {
+ "200": {
+ "description": "Provider models",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "array",
+ "items": {
+ "$ref": "#/components/schemas/ModelInfo"
+ }
+ }
+ }
+ }
+ },
+ "404": {
+ "$ref": "#/components/responses/NotFound"
+ }
+ }
+ }
+ },
+ "/api/netlify-deploy": {
+ "post": {
+ "tags": ["Deployment"],
+ "summary": "Deploy to Netlify",
+ "description": "Creates or updates a Netlify site deployment with the provided files.",
+ "operationId": "deployToNetlify",
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/NetlifyDeployRequest"
+ }
+ }
+ }
+ },
+ "responses": {
+ "200": {
+ "description": "Deployment successful",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/DeploymentResponse"
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/ValidationError"
+ },
+ "401": {
+ "$ref": "#/components/responses/AuthError"
+ }
+ }
+ }
+ },
+ "/api/vercel-deploy": {
+ "get": {
+ "tags": ["Deployment"],
+ "summary": "Get Vercel deployment status",
+ "description": "Retrieves the status of current Vercel deployments.",
+ "operationId": "getVercelStatus",
+ "responses": {
+ "200": {
+ "description": "Deployment status",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "properties": {
+ "deployments": {
+ "type": "array",
+ "items": {
+ "type": "object"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "post": {
+ "tags": ["Deployment"],
+ "summary": "Deploy to Vercel",
+ "description": "Creates a new deployment on Vercel platform.",
+ "operationId": "deployToVercel",
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/VercelDeployRequest"
+ }
+ }
+ }
+ },
+ "responses": {
+ "200": {
+ "description": "Deployment successful",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/DeploymentResponse"
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/ValidationError"
+ },
+ "401": {
+ "$ref": "#/components/responses/AuthError"
+ }
+ }
+ }
+ },
+ "/api/supabase": {
+ "post": {
+ "tags": ["Database"],
+ "summary": "Get Supabase projects list",
+ "description": "Retrieves all Supabase projects for the authenticated user.",
+ "operationId": "listSupabaseProjects",
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "required": ["accessToken"],
+ "properties": {
+ "accessToken": {
+ "type": "string",
+ "description": "Supabase access token"
+ }
+ }
+ }
+ }
+ }
+ },
+ "responses": {
+ "200": {
+ "description": "List of projects",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "array",
+ "items": {
+ "type": "object"
+ }
+ }
+ }
+ }
+ },
+ "401": {
+ "$ref": "#/components/responses/AuthError"
+ }
+ }
+ }
+ },
+ "/api/supabase.query": {
+ "post": {
+ "tags": ["Database"],
+ "summary": "Execute Supabase queries",
+ "description": "Executes SQL queries against a Supabase database.",
+ "operationId": "executeSupabaseQuery",
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/SupabaseQueryRequest"
+ }
+ }
+ }
+ },
+ "responses": {
+ "200": {
+ "description": "Query results",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "properties": {
+ "data": {
+ "type": "array",
+ "items": {
+ "type": "object"
+ }
+ },
+ "error": {
+ "type": "object",
+ "nullable": true
+ }
+ }
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/ValidationError"
+ }
+ }
+ }
+ },
+ "/api/supabase.variables": {
+ "post": {
+ "tags": ["Database"],
+ "summary": "Fetch Supabase API keys",
+ "description": "Retrieves API keys and configuration for a Supabase project.",
+ "operationId": "getSupabaseVariables",
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "required": ["accessToken", "projectRef"],
+ "properties": {
+ "accessToken": {
+ "type": "string"
+ },
+ "projectRef": {
+ "type": "string"
+ }
+ }
+ }
+ }
+ }
+ },
+ "responses": {
+ "200": {
+ "description": "API keys and configuration",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "properties": {
+ "anonKey": {
+ "type": "string"
+ },
+ "serviceKey": {
+ "type": "string"
+ },
+ "url": {
+ "type": "string"
+ }
+ }
+ }
+ }
+ }
+ },
+ "401": {
+ "$ref": "#/components/responses/AuthError"
+ }
+ }
+ }
+ },
+ "/api/health": {
+ "get": {
+ "tags": ["System"],
+ "summary": "Health check endpoint",
+ "description": "Simple health check to verify API availability.",
+ "operationId": "healthCheck",
+ "responses": {
+ "200": {
+ "description": "Service is healthy",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "properties": {
+ "status": {
+ "type": "string",
+ "example": "ok"
+ },
+ "timestamp": {
+ "type": "string",
+ "format": "date-time"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "/api/system.app-info": {
+ "get": {
+ "tags": ["System"],
+ "summary": "Get application metadata",
+ "description": "Returns application version, dependencies, and build information.",
+ "operationId": "getAppInfo",
+ "responses": {
+ "200": {
+ "description": "Application information",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/AppInfo"
+ }
+ }
+ }
+ }
+ }
+ },
+ "post": {
+ "tags": ["System"],
+ "summary": "Get application metadata (POST)",
+ "description": "Alternative POST endpoint for application metadata.",
+ "operationId": "getAppInfoPost",
+ "responses": {
+ "200": {
+ "description": "Application information",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/AppInfo"
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "/api/system.git-info": {
+ "get": {
+ "tags": ["System"],
+ "summary": "Get Git repository information",
+ "description": "Returns local and GitHub repository information including repos, organizations, and activity.",
+ "operationId": "getGitInfo",
+ "responses": {
+ "200": {
+ "description": "Git information",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/GitInfo"
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "/api/system.memory-info": {
+ "get": {
+ "tags": ["System"],
+ "summary": "Get system memory statistics",
+ "description": "Returns total, free, used, and swap memory information (cross-platform).",
+ "operationId": "getMemoryInfo",
+ "responses": {
+ "200": {
+ "description": "Memory statistics",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/MemoryInfo"
+ }
+ }
+ }
+ }
+ }
+ },
+ "post": {
+ "tags": ["System"],
+ "summary": "Get system memory statistics (POST)",
+ "description": "Alternative POST endpoint for memory information.",
+ "operationId": "getMemoryInfoPost",
+ "responses": {
+ "200": {
+ "description": "Memory statistics",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/MemoryInfo"
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "/api/system.disk-info": {
+ "get": {
+ "tags": ["System"],
+ "summary": "Get disk usage information",
+ "description": "Returns disk usage per filesystem/drive.",
+ "operationId": "getDiskInfo",
+ "responses": {
+ "200": {
+ "description": "Disk usage information",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/DiskInfo"
+ }
+ }
+ }
+ }
+ }
+ },
+ "post": {
+ "tags": ["System"],
+ "summary": "Get disk usage information (POST)",
+ "description": "Alternative POST endpoint for disk information.",
+ "operationId": "getDiskInfoPost",
+ "responses": {
+ "200": {
+ "description": "Disk usage information",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/DiskInfo"
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "/api/system.process-info": {
+ "get": {
+ "tags": ["System"],
+ "summary": "Get running process information",
+ "description": "Returns information about running processes including PID, CPU%, and memory usage.",
+ "operationId": "getProcessInfo",
+ "responses": {
+ "200": {
+ "description": "Process information",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProcessInfo"
+ }
+ }
+ }
+ }
+ }
+ },
+ "post": {
+ "tags": ["System"],
+ "summary": "Get running process information (POST)",
+ "description": "Alternative POST endpoint for process information.",
+ "operationId": "getProcessInfoPost",
+ "responses": {
+ "200": {
+ "description": "Process information",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProcessInfo"
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "/api/system.diagnostics": {
+ "get": {
+ "tags": ["System"],
+ "summary": "Get comprehensive diagnostics",
+ "description": "Returns complete system diagnostics including app info, memory, disk, processes, and git information.",
+ "operationId": "getDiagnostics",
+ "responses": {
+ "200": {
+ "description": "Complete diagnostics data",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "properties": {
+ "appInfo": {
+ "$ref": "#/components/schemas/AppInfo"
+ },
+ "memoryInfo": {
+ "$ref": "#/components/schemas/MemoryInfo"
+ },
+ "diskInfo": {
+ "$ref": "#/components/schemas/DiskInfo"
+ },
+ "processInfo": {
+ "$ref": "#/components/schemas/ProcessInfo"
+ },
+ "gitInfo": {
+ "$ref": "#/components/schemas/GitInfo"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "/api/mcp-check": {
+ "get": {
+ "tags": ["MCP"],
+ "summary": "Check MCP server availability",
+ "description": "Verifies that configured MCP servers are available and responsive.",
+ "operationId": "checkMCPServers",
+ "responses": {
+ "200": {
+ "description": "MCP server status",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "properties": {
+ "servers": {
+ "type": "array",
+ "items": {
+ "type": "object",
+ "properties": {
+ "name": {
+ "type": "string"
+ },
+ "status": {
+ "type": "string",
+ "enum": ["available", "unavailable"]
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "/api/mcp-update-config": {
+ "post": {
+ "tags": ["MCP"],
+ "summary": "Update MCP configuration",
+ "description": "Updates the Model Context Protocol server configuration.",
+ "operationId": "updateMCPConfig",
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/MCPConfigUpdate"
+ }
+ }
+ }
+ },
+ "responses": {
+ "200": {
+ "description": "Configuration updated successfully",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "properties": {
+ "success": {
+ "type": "boolean"
+ }
+ }
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/ValidationError"
+ }
+ }
+ }
+ },
+ "/api/mcp-validate-config": {
+ "post": {
+ "tags": ["MCP"],
+ "summary": "Validate MCP server configuration",
+ "description": "Validates the configuration for a Model Context Protocol server before applying it.",
+ "operationId": "validateMCPConfig",
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "required": ["serverName", "config"],
+ "properties": {
+ "serverName": {
+ "type": "string",
+ "description": "Name of the MCP server to validate"
+ },
+ "config": {
+ "$ref": "#/components/schemas/MCPServerConfig"
+ }
+ }
+ }
+ }
+ }
+ },
+ "responses": {
+ "200": {
+ "description": "Validation result",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "properties": {
+ "valid": {
+ "type": "boolean"
+ },
+ "errors": {
+ "type": "array",
+ "items": {
+ "type": "string"
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/ValidationError"
+ },
+ "500": {
+ "description": "Server error",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "properties": {
+ "error": {
+ "type": "string"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "/api/mcp-retry": {
+ "post": {
+ "tags": ["MCP"],
+ "summary": "Retry MCP server connection",
+ "description": "Attempts to reconnect to a failed or disconnected MCP server and retrieves available tools.",
+ "operationId": "retryMCPConnection",
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "required": ["serverName"],
+ "properties": {
+ "serverName": {
+ "type": "string",
+ "description": "Name of the MCP server to retry"
+ }
+ }
+ }
+ }
+ }
+ },
+ "responses": {
+ "200": {
+ "description": "Server tools after successful reconnection",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "properties": {
+ "tools": {
+ "type": "array",
+ "items": {
+ "type": "object",
+ "properties": {
+ "name": {
+ "type": "string"
+ },
+ "description": {
+ "type": "string"
+ },
+ "inputSchema": {
+ "type": "object"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/ValidationError"
+ },
+ "500": {
+ "description": "Server error",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "properties": {
+ "error": {
+ "type": "string"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "/api/check-env-key": {
+ "get": {
+ "tags": ["Utilities"],
+ "summary": "Check if API key is configured",
+ "description": "Verifies whether a specific API key is configured in the environment.",
+ "operationId": "checkEnvKey",
+ "parameters": [
+ {
+ "name": "key",
+ "in": "query",
+ "required": true,
+ "schema": {
+ "type": "string"
+ },
+ "description": "Environment variable name to check"
+ }
+ ],
+ "responses": {
+ "200": {
+ "description": "Key status",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "properties": {
+ "exists": {
+ "type": "boolean"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "/api/export-api-keys": {
+ "get": {
+ "tags": ["Utilities"],
+ "summary": "Export all configured API keys",
+ "description": "Returns all configured API keys and provider settings for backup purposes.",
+ "operationId": "exportAPIKeys",
+ "responses": {
+ "200": {
+ "description": "Exported API keys",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "additionalProperties": {
+ "type": "string"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "/api/update": {
+ "post": {
+ "tags": ["Utilities"],
+ "summary": "Check for application updates",
+ "description": "Checks if a newer version of the application is available.",
+ "operationId": "checkUpdates",
+ "responses": {
+ "200": {
+ "description": "Update information",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "properties": {
+ "updateAvailable": {
+ "type": "boolean"
+ },
+ "latestVersion": {
+ "type": "string"
+ },
+ "currentVersion": {
+ "type": "string"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "/api/github-template": {
+ "get": {
+ "tags": ["Utilities"],
+ "summary": "Fetch GitHub repository template files",
+ "description": "Retrieves template files from a GitHub repository for project initialization.",
+ "operationId": "fetchGitHubTemplate",
+ "parameters": [
+ {
+ "name": "repo",
+ "in": "query",
+ "required": true,
+ "schema": {
+ "type": "string"
+ },
+ "description": "GitHub repository in format 'owner/repo'"
+ }
+ ],
+ "responses": {
+ "200": {
+ "description": "Template files",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "object",
+ "properties": {
+ "files": {
+ "type": "object",
+ "additionalProperties": {
+ "type": "string"
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "404": {
+ "$ref": "#/components/responses/NotFound"
+ }
+ }
+ }
+ },
+ "/api/git-proxy/{path}": {
+ "get": {
+ "tags": ["Proxy"],
+ "summary": "Git proxy for CORS-enabled requests",
+ "description": "CORS-enabled proxy for isomorphic-git operations.",
+ "operationId": "gitProxy",
+ "parameters": [
+ {
+ "name": "path",
+ "in": "path",
+ "required": true,
+ "schema": {
+ "type": "string"
+ },
+ "description": "Git repository path"
+ }
+ ],
+ "responses": {
+ "200": {
+ "description": "Proxied git response",
+ "content": {
+ "application/octet-stream": {
+ "schema": {
+ "type": "string",
+ "format": "binary"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "components": {
+ "schemas": {
+ "ChatRequest": {
+ "type": "object",
+ "required": ["messages"],
+ "properties": {
+ "messages": {
+ "type": "array",
+ "items": {
+ "$ref": "#/components/schemas/Message"
+ }
+ },
+ "files": {
+ "type": "object",
+ "additionalProperties": true
+ },
+ "promptId": {
+ "type": "string",
+ "description": "Optional prompt template ID"
+ },
+ "contextOptimization": {
+ "type": "boolean",
+ "description": "Enable smart context optimization",
+ "default": false
+ },
+ "designScheme": {
+ "type": "object",
+ "description": "UI design scheme configuration"
+ },
+ "supabase": {
+ "type": "object",
+ "description": "Supabase integration configuration"
+ },
+ "enableMCPTools": {
+ "type": "boolean",
+ "description": "Enable Model Context Protocol tools",
+ "default": false
+ }
+ }
+ },
+ "Message": {
+ "type": "object",
+ "required": ["id", "role", "content"],
+ "properties": {
+ "id": {
+ "type": "string"
+ },
+ "role": {
+ "type": "string",
+ "enum": ["user", "assistant", "system"]
+ },
+ "content": {
+ "type": "string"
+ }
+ }
+ },
+ "LLMCallRequest": {
+ "type": "object",
+ "required": ["messages", "provider", "model"],
+ "properties": {
+ "messages": {
+ "type": "array",
+ "items": {
+ "$ref": "#/components/schemas/Message"
+ }
+ },
+ "provider": {
+ "type": "string",
+ "description": "LLM provider name"
+ },
+ "model": {
+ "type": "string",
+ "description": "Model identifier"
+ },
+ "stream": {
+ "type": "boolean",
+ "description": "Enable streaming response",
+ "default": false
+ }
+ }
+ },
+ "LLMCallResponse": {
+ "type": "object",
+ "properties": {
+ "response": {
+ "type": "string"
+ },
+ "usage": {
+ "type": "object",
+ "properties": {
+ "promptTokens": {
+ "type": "integer"
+ },
+ "completionTokens": {
+ "type": "integer"
+ },
+ "totalTokens": {
+ "type": "integer"
+ }
+ }
+ }
+ }
+ },
+ "ModelInfo": {
+ "type": "object",
+ "required": ["name", "label", "provider", "maxTokenAllowed"],
+ "properties": {
+ "name": {
+ "type": "string",
+ "description": "Model identifier"
+ },
+ "label": {
+ "type": "string",
+ "description": "Display name"
+ },
+ "provider": {
+ "type": "string",
+ "description": "Provider name"
+ },
+ "maxTokenAllowed": {
+ "type": "integer",
+ "description": "Maximum context tokens"
+ },
+ "maxCompletionTokens": {
+ "type": "integer",
+ "description": "Maximum completion tokens"
+ },
+ "icon": {
+ "type": "string",
+ "description": "Icon identifier"
+ }
+ }
+ },
+ "ProviderInfo": {
+ "type": "object",
+ "required": ["name", "staticModels"],
+ "properties": {
+ "name": {
+ "type": "string"
+ },
+ "staticModels": {
+ "type": "array",
+ "items": {
+ "$ref": "#/components/schemas/ModelInfo"
+ }
+ },
+ "getApiKeyLink": {
+ "type": "string",
+ "format": "uri"
+ },
+ "labelForGetApiKey": {
+ "type": "string"
+ },
+ "icon": {
+ "type": "string"
+ }
+ }
+ },
+ "NetlifyDeployRequest": {
+ "type": "object",
+ "required": ["files"],
+ "properties": {
+ "files": {
+ "type": "object",
+ "additionalProperties": {
+ "type": "string"
+ },
+ "description": "File paths mapped to content with SHA1 hashes"
+ },
+ "siteId": {
+ "type": "string",
+ "description": "Existing site ID for updates"
+ },
+ "siteName": {
+ "type": "string",
+ "description": "Site name for new deployments"
+ }
+ }
+ },
+ "VercelDeployRequest": {
+ "type": "object",
+ "required": ["files", "projectName"],
+ "properties": {
+ "files": {
+ "type": "object",
+ "additionalProperties": {
+ "type": "string"
+ }
+ },
+ "projectName": {
+ "type": "string"
+ },
+ "framework": {
+ "type": "string",
+ "description": "Framework identifier (e.g., nextjs, react)"
+ }
+ }
+ },
+ "DeploymentResponse": {
+ "type": "object",
+ "properties": {
+ "url": {
+ "type": "string",
+ "format": "uri",
+ "description": "Deployment URL"
+ },
+ "deploymentId": {
+ "type": "string"
+ },
+ "status": {
+ "type": "string",
+ "enum": ["success", "failed", "pending"]
+ }
+ }
+ },
+ "SupabaseQueryRequest": {
+ "type": "object",
+ "required": ["url", "apiKey", "query"],
+ "properties": {
+ "url": {
+ "type": "string",
+ "format": "uri",
+ "description": "Supabase project URL"
+ },
+ "apiKey": {
+ "type": "string",
+ "description": "Supabase API key"
+ },
+ "query": {
+ "type": "string",
+ "description": "SQL query to execute"
+ }
+ }
+ },
+ "AppInfo": {
+ "type": "object",
+ "properties": {
+ "version": {
+ "type": "string"
+ },
+ "dependencies": {
+ "type": "object",
+ "additionalProperties": {
+ "type": "string"
+ }
+ },
+ "buildInfo": {
+ "type": "object",
+ "properties": {
+ "timestamp": {
+ "type": "string",
+ "format": "date-time"
+ },
+ "commit": {
+ "type": "string"
+ }
+ }
+ }
+ }
+ },
+ "MemoryInfo": {
+ "type": "object",
+ "properties": {
+ "total": {
+ "type": "integer",
+ "description": "Total memory in bytes"
+ },
+ "free": {
+ "type": "integer",
+ "description": "Free memory in bytes"
+ },
+ "used": {
+ "type": "integer",
+ "description": "Used memory in bytes"
+ },
+ "swapTotal": {
+ "type": "integer",
+ "description": "Total swap in bytes"
+ },
+ "swapFree": {
+ "type": "integer",
+ "description": "Free swap in bytes"
+ }
+ }
+ },
+ "DiskInfo": {
+ "type": "object",
+ "properties": {
+ "filesystems": {
+ "type": "array",
+ "items": {
+ "type": "object",
+ "properties": {
+ "filesystem": {
+ "type": "string"
+ },
+ "size": {
+ "type": "integer"
+ },
+ "used": {
+ "type": "integer"
+ },
+ "available": {
+ "type": "integer"
+ },
+ "capacity": {
+ "type": "string"
+ },
+ "mountPoint": {
+ "type": "string"
+ }
+ }
+ }
+ }
+ }
+ },
+ "ProcessInfo": {
+ "type": "object",
+ "properties": {
+ "processes": {
+ "type": "array",
+ "items": {
+ "type": "object",
+ "properties": {
+ "pid": {
+ "type": "integer"
+ },
+ "name": {
+ "type": "string"
+ },
+ "cpu": {
+ "type": "number",
+ "description": "CPU usage percentage"
+ },
+ "memory": {
+ "type": "integer",
+ "description": "Memory usage in bytes"
+ }
+ }
+ }
+ }
+ }
+ },
+ "GitInfo": {
+ "type": "object",
+ "properties": {
+ "local": {
+ "type": "object",
+ "properties": {
+ "branch": {
+ "type": "string"
+ },
+ "commit": {
+ "type": "string"
+ }
+ }
+ },
+ "github": {
+ "type": "object",
+ "properties": {
+ "repos": {
+ "type": "array",
+ "items": {
+ "type": "object"
+ }
+ },
+ "organizations": {
+ "type": "array",
+ "items": {
+ "type": "object"
+ }
+ }
+ }
+ }
+ }
+ },
+ "MCPConfigUpdate": {
+ "type": "object",
+ "required": ["servers"],
+ "properties": {
+ "servers": {
+ "type": "object",
+ "additionalProperties": {
+ "$ref": "#/components/schemas/MCPServerConfig"
+ }
+ }
+ }
+ },
+ "MCPServerConfig": {
+ "type": "object",
+ "required": ["type"],
+ "properties": {
+ "type": {
+ "type": "string",
+ "enum": ["stdio", "sse", "streamable-http"]
+ },
+ "command": {
+ "type": "string",
+ "description": "Command for stdio type"
+ },
+ "url": {
+ "type": "string",
+ "format": "uri",
+ "description": "URL for sse/http types"
+ },
+ "headers": {
+ "type": "object",
+ "additionalProperties": {
+ "type": "string"
+ }
+ },
+ "cwd": {
+ "type": "string",
+ "description": "Working directory"
+ },
+ "env": {
+ "type": "object",
+ "additionalProperties": {
+ "type": "string"
+ },
+ "description": "Environment variables"
+ }
+ }
+ },
+ "Error": {
+ "type": "object",
+ "required": ["type", "message"],
+ "properties": {
+ "type": {
+ "type": "string",
+ "enum": [
+ "validation_error",
+ "auth_error",
+ "rate_limit",
+ "model_not_found",
+ "context_too_large",
+ "timeout",
+ "stream_error",
+ "mcp_tool_error",
+ "provider_error"
+ ]
+ },
+ "message": {
+ "type": "string"
+ },
+ "details": {
+ "type": "object"
+ }
+ }
+ }
+ },
+ "responses": {
+ "ValidationError": {
+ "description": "Invalid request parameters",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/Error"
+ },
+ "example": {
+ "type": "validation_error",
+ "message": "Invalid request parameters"
+ }
+ }
+ }
+ },
+ "AuthError": {
+ "description": "Authentication failed",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/Error"
+ },
+ "example": {
+ "type": "auth_error",
+ "message": "API key not configured"
+ }
+ }
+ }
+ },
+ "RateLimitError": {
+ "description": "Rate limit exceeded",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/Error"
+ },
+ "example": {
+ "type": "rate_limit",
+ "message": "Too many requests"
+ }
+ }
+ }
+ },
+ "NotFound": {
+ "description": "Resource not found",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/Error"
+ },
+ "example": {
+ "type": "model_not_found",
+ "message": "The requested resource was not found"
+ }
+ }
+ }
+ },
+ "ContextTooLarge": {
+ "description": "Request context exceeds maximum size",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/Error"
+ },
+ "example": {
+ "type": "context_too_large",
+ "message": "Request context exceeds model limits"
+ }
+ }
+ }
+ },
+ "TimeoutError": {
+ "description": "Request timeout",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/Error"
+ },
+ "example": {
+ "type": "timeout",
+ "message": "Request exceeded 5 minute timeout"
+ }
+ }
+ }
+ }
+ },
+ "securitySchemes": {
+ "cookieAuth": {
+ "type": "apiKey",
+ "in": "cookie",
+ "name": "apiKeys",
+ "description": "Cookie-based authentication storing API keys and provider settings"
+ }
+ }
+ },
+ "security": [
+ {
+ "cookieAuth": []
+ }
+ ]
+}
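As a sanity check on the schemas defined above, the required fields of `ChatRequest` and `Message` can be validated client-side before a request is sent. This is an illustrative sketch only, not part of the API; the payload values are made up:

```python
# Minimal ChatRequest payload matching the schema above.
# Field values are illustrative, not real defaults.
chat_request = {
    "messages": [
        {"id": "msg-1", "role": "user", "content": "Create a login page"}
    ],
    "contextOptimization": True,
    "enableMCPTools": False,
}

def validate_chat_request(payload: dict) -> list:
    """Check the required fields from the ChatRequest/Message schemas."""
    errors = []
    if "messages" not in payload:
        return ["messages is required"]
    for msg in payload["messages"]:
        for field in ("id", "role", "content"):
            if field not in msg:
                errors.append(f"message missing required field: {field}")
        if msg.get("role") not in ("user", "assistant", "system"):
            errors.append(f"invalid role: {msg.get('role')}")
    return errors

print(validate_chat_request(chat_request))  # → []
```

A payload failing validation returns a list of human-readable errors, mirroring the `validation_error` type in the spec's `Error` schema.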
diff --git a/package.json b/package.json
index e45cc8b..bc0a06f 100644
--- a/package.json
+++ b/package.json
@@ -1,6 +1,6 @@
{
"name": "docs",
- "version": "1.0.0",
+ "version": "1.2.0",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
@@ -15,4 +15,4 @@
"dependencies": {
"mintlify": "^4.2.23"
}
-}
\ No newline at end of file
+}
diff --git a/prompting/discussion-mode.mdx b/prompting/discussion-mode.mdx
index 03879a0..4cb3f32 100644
--- a/prompting/discussion-mode.mdx
+++ b/prompting/discussion-mode.mdx
@@ -3,11 +3,11 @@ title: Discussion Mode
description: Technical consultant mode for collaborative problem-solving and guidance
---
-Discussion mode transforms the AI into a technical consultant who provides guidance, plans, and structured approaches to development challenges without directly implementing code.
+Discussion mode transforms the AI into a technical consultant who provides guidance, plans, and structured approaches to development challenges without directly implementing code. When activated, CodinIT switches to a specialized system prompt designed specifically for consultative interactions.
## Overview
-In discussion mode, the AI acts as an experienced senior software engineer and technical consultant, offering strategic advice, architectural guidance, and detailed planning for your development projects.
+In discussion mode, the AI acts as an experienced senior software engineer and technical consultant, offering strategic advice, architectural guidance, and detailed planning for your development projects. Unlike build mode, discussion mode focuses on planning and guidance rather than code generation.
## Key Features
@@ -19,12 +19,15 @@ In discussion mode, the AI acts as an experienced senior software engineer and t
## How It Works
-When you activate discussion mode, the AI switches to a specialized prompt that focuses on:
+When you activate discussion mode, CodinIT switches to a specialized system prompt (`discussPrompt`) that fundamentally changes how the AI responds:
-- **Planning Over Implementation**: Providing detailed plans rather than writing code
-- **Educational Approach**: Explaining concepts and teaching best practices
-- **Strategic Thinking**: Considering long-term implications and scalability
-- **Collaborative Problem-Solving**: Working with you to develop optimal solutions
+- **Planning Over Implementation**: Provides detailed numbered plans with file references and plain English descriptions instead of code
+- **Educational Approach**: Explains concepts, reasoning, and best practices with "why" behind recommendations
+- **Strategic Thinking**: Uses chain-of-thought reasoning to analyze problems before providing solutions
+- **Collaborative Problem-Solving**: Offers quick action buttons for implementing plans, continuing discussions, or opening referenced files
+- **No Code Generation**: Responses describe changes in plain English rather than providing code snippets
+
+The system prompt explicitly instructs the AI to use phrases like "You should add..." instead of "I will implement..." to maintain the consultative role.
## When to Use Discussion Mode
@@ -64,13 +67,20 @@ Click the "Discuss" button in the chat interface to switch to technical consulta
### Response Format
-In discussion mode, responses typically include:
+In discussion mode, responses follow a specific structure:
+
+- **Chain of Thought**: Visible reasoning process showing how the AI analyzes the problem
+- **Clear Plans**: Numbered steps starting with "## The Plan" heading, describing changes in plain English only
+- **File References**: Specific file paths with corresponding quick action buttons to open them
+- **Technology Recommendations**: Specific tools and approaches with reasoning
+- **Best Practice Guidance**: Industry-standard approaches with explanations
+- **Quick Actions**: Interactive buttons at the end of every response:
+ - **Implement**: For executing the outlined plan in build mode
+ - **Message**: For continuing the conversation with specific follow-ups
+ - **Link**: For opening external documentation or resources
+ - **File**: For opening referenced files in the editor
-- **Clear Plans**: Numbered steps for implementing solutions
-- **Technology Recommendations**: Specific tools and approaches
-- **Best Practice Guidance**: Industry-standard approaches
-- **Educational Context**: Explanations of why certain approaches are recommended
-- **Quick Actions**: Interactive buttons for common next steps
+The AI is explicitly instructed never to include code snippets in plans; it provides only plain-English descriptions of what needs to change.
## Best Practices for Discussion Mode
@@ -124,38 +134,62 @@ Use the AI's recommendations as a foundation for more detailed questions about i
Discussion mode responses emphasize planning and strategy:
-- **Step-by-step implementation plans**
-- **Technology recommendations with reasoning**
-- **Consideration of trade-offs and alternatives**
-- **Scalability and maintainability considerations**
+- **Step-by-step implementation plans** with "## The Plan" heading
+- **Technology recommendations with reasoning** explaining why specific tools are suggested
+- **Consideration of trade-offs and alternatives** for different approaches
+- **Scalability and maintainability considerations** for long-term success
+- **File-specific guidance** indicating exactly which files need modification
### Educational Approach
Responses include educational elements:
-- **Explanation of concepts and principles**
-- **Rationale behind recommendations**
-- **Industry best practices and standards**
-- **Common pitfalls and how to avoid them**
+- **Chain of thought reasoning** visible to users before solutions
+- **Explanation of concepts and principles** with context
+- **Rationale behind recommendations** explaining the "why"
+- **Industry best practices and standards** with current information
+- **Common pitfalls and how to avoid them** based on experience
### Interactive Elements
-Discussion mode often includes interactive elements:
+Discussion mode always includes interactive quick action buttons:
-- **Quick action buttons** for common next steps
-- **File references** for examining specific code
-- **Link suggestions** for additional resources
-- **Structured recommendations** with clear priorities\*\*
+- **Implement buttons** to execute plans in build mode (labeled "Implement this plan", "Fix this bug", "Fix these issues", or database-specific actions)
+- **Message buttons** for continuing conversations with specific prompts
+- **Link buttons** for opening external documentation (especially for topics in support resources)
+- **File buttons** for opening referenced files in the editor
+- **Ordered by priority**: Implement actions first, then message, link, and file actions
+- **Limited to 4-5 actions** to avoid overwhelming users
## Integration with Other Features
-### Combining with Regular Chat
+### Combining with Build Mode
-You can switch between discussion mode and regular chat mode:
+You can seamlessly switch between discussion mode and build mode:
1. Use **discussion mode** for planning and architectural decisions
-2. Switch to **regular chat** for implementation and code writing
+2. Click **"Implement this plan"** quick action to switch to build mode and execute the plan
3. Return to **discussion mode** for reviewing and refining implemented code
+4. The AI maintains context across mode switches
+
+### Search Grounding
+
+Discussion mode has access to web search capabilities:
+
+- **Automatic search** when uncertain about technical information, package details, or API specifications
+- **First-party documentation** search for faster, more accurate results
+- **Current information** rather than relying on potentially outdated knowledge
+- **URL content fetching** when users share links for context
+
+### Support Resources Integration
+
+Discussion mode automatically redirects to official documentation for specific topics:
+
+- **Token efficiency**: Redirects to maximize-token-efficiency guide
+- **Effective prompting**: Links to prompting-effectively documentation
+- **Supabase integration**: Points to Supabase integration docs
+- **Deployment/hosting**: Directs to Netlify and hosting FAQs
+- **Link quick actions**: Opens the relevant documentation in a new tab
### Using with Project Context
@@ -164,8 +198,8 @@ Discussion mode works best when you provide:
- **Current project structure** and technology stack
- **Existing code snippets** for review
- **Performance requirements** and constraints
-- **Team size and expertise** levels
-- **Timeline and resource** considerations
+- **Specific file paths** for targeted guidance
+- **Running processes** (automatically detected by CodinIT)
## Common Use Cases
@@ -211,16 +245,40 @@ Analyzing existing codebases, identifying improvement opportunities, planning re
mode when you're ready to implement the plans and write code.
-## How to use Discussion Mode
+## How to Use Discussion Mode
+
+Discussion mode lets you explore ideas by chatting with CodinIT without making changes to your code. It is versatile and works well for planning, learning, and problem-solving.
+
+### Activating Discussion Mode
+
+1. Open your CodinIT project
+2. In the bottom-right corner of the chatbox, click the **Discuss** button (chat icon)
+3. The button highlights when discussion mode is active
+4. Click again to return to build mode
+
+### Using Discussion Mode Effectively
+
+Once activated, you can:
+
+1. **Ask planning questions** about architecture, design patterns, or implementation approaches
+2. **Request code reviews** for guidance on improving existing code
+3. **Explore technologies** to learn about frameworks, libraries, and best practices
+4. **Debug strategically** by discussing error patterns and systematic approaches
+5. **Use quick actions** to:
+ - Implement the plan in build mode
+ - Continue the discussion with suggested follow-ups
+ - Open referenced files in the editor
+ - Access external documentation
-Discussion Mode appears when you are using the v1 Agent (legacy). It lets you explore ideas by chatting with CodinIT without making changes to your code. Powered by Google Gemini, it is versatile and works well for a wide range of topics. You can use it any time you want to brainstorm or think through ideas.
+### Response Behavior
-Follow the steps below to use Discussion Mode in a CodinIT project:
+In discussion mode, the AI will:
-1. Open a project working with the v1 Agent (legacy).
-2. In the bottom-right corner of the chatbox, click **Discuss**.
-3. Enter your question or prompt, and read the response. You can then either:
- - Continue the discussion.
- - Use one of the quick action buttons to implement the suggestion.
+- Show its reasoning process using chain-of-thought
+- Provide plans with numbered steps in plain English
+- Reference specific files with quick action buttons
+- Avoid generating code snippets (descriptions only)
+- Always include interactive quick actions at the end
+- Redirect to official documentation for specific topics
-Discussion Mode highlights blue when active. Click it again to turn it off and return to Build mode.
+Discussion mode is ideal for planning before implementation. Use the "Implement this plan" button to seamlessly switch to build mode and execute your plan.
diff --git a/prompting/maximize-token-efficiency.mdx b/prompting/maximize-token-efficiency.mdx
index 67229d9..2e01c93 100644
--- a/prompting/maximize-token-efficiency.mdx
+++ b/prompting/maximize-token-efficiency.mdx
@@ -1,203 +1,232 @@
---
title: 'Maximize Token Efficiency'
-description: Optimize token usage to keep your costs down and work more effectively
+description: How to reduce token usage and AI costs in CodinIT
---
-Optimize your token usage to reduce costs, improve response times, and work more efficiently with AI models. Understanding how tokens work and implementing best practices can significantly impact your development workflow.
+Learn how to use AI efficiently so you don't burn through credits. Think of tokens like text messages: the more you send, the more you pay.
-## Understanding Tokens
+## What Are Tokens?
-CodinIT uses AI models powered by various providers (Anthropic, OpenAI, Google, etc.). Each interaction consumes **tokens**, which are chunks of text that AI models process.
+CodinIT uses AI that runs on "tokens." Tokens are small pieces of text that the AI reads and writes. Understanding token usage helps you optimize costs and stay within model limits.
-### How Tokens Are Used
+### How Tokens Get Used
Tokens are consumed in several ways:
-- **Input tokens**: Your prompts, questions, and context
-- **Output tokens**: AI-generated responses, code, and explanations
-- **Context tokens**: Project files and conversation history that provide context
+- **System prompts**: CodinIT's built-in prompts (default, fine-tuned, or experimental) that guide AI behavior
+- **Your messages**: The questions and requests you send to the AI
+- **AI responses**: The code, explanations, and artifacts the AI generates
+- **Project context**: File contents, file changes, and running processes the AI reads
+- **Chain of thought**: The visible reasoning the AI shows before its answer
+- **Conversation history**: Previous messages in the chat thread
-### Token Consumption Factors
+### What Affects Token Usage
-- **Model type**: Different models have different token costs and limits
-- **Context length**: Larger projects require more tokens for context
-- **Response complexity**: Detailed explanations use more tokens than simple answers
-- **Conversation length**: Longer chat histories consume more context tokens
+- **Which AI model**: Some models cost more per token (Claude vs GPT vs DeepSeek)
+- **System prompt choice**: Fine-tuned prompt uses more tokens than experimental
+- **Project size**: Bigger projects with more files use more context tokens
+- **Answer length**: Long explanations and code use more tokens than short ones
+- **Chat length**: Longer conversations accumulate more history tokens
+- **Mode selection**: Discussion mode may use fewer tokens (no code generation)
+- **Chain of thought**: Visible reasoning adds tokens but improves quality
+- **File context**: The AI reading multiple files to understand your project
- **Token Limits**: Each AI model has maximum token limits for both input context and output generation. Exceeding these
- limits can cause errors or truncated responses.
+ **Token Limits**: Each AI model has a maximum amount of text it can handle at once (its context window). Going over it causes errors or truncated responses.
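A rough way to budget against those limits: a common rule of thumb is that one token is about four characters of English text. The sketch below uses that heuristic; exact counts depend on each model's tokenizer, and the 128K limit shown is just an illustrative number, not a CodinIT setting:

```typescript
// Rough token estimator: ~4 characters per token is a common rule of
// thumb for English text. Real counts depend on the model's tokenizer,
// so treat this as a budgeting aid, not an exact figure.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Check a prompt against an (illustrative) context limit before sending it.
function fitsContext(prompt: string, contextLimit = 128_000): boolean {
  return estimateTokens(prompt) <= contextLimit;
}

const prompt = "Fix the password error on /login";
console.log(estimateTokens(prompt)); // 8 (32 characters / 4)
console.log(fitsContext(prompt));    // true
```

Even a crude estimate like this helps you notice when a request is about to drag your whole project into context.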
-## Token Efficiency Strategies
+## How to Save Tokens
-### Use Built-in Features Over Prompts
+### Use Buttons Instead of Typing
-Leverage CodinIT's interface features instead of text prompts where possible:
+CodinIT has buttons and menus that don't use tokens:
-- **Example Prompts**: Use the suggested prompt buttons instead of typing similar requests
-- **File Operations**: Use the file tree and editor features instead of asking for file operations
-- **Terminal Commands**: Run commands directly in the terminal instead of asking the AI to execute them
+- **Example prompts**: Click suggested prompts instead of typing
+- **File operations**: Use the file tree to create/delete files
+- **Terminal**: Run commands yourself instead of asking AI
-### Optimize Your Communication Style
+### Write Better Requests
-**Be Specific and Concise:**
+**Be specific and short:**
```
❌ "Make this website look better"
-✅ "Add a hero section with gradient background, centered heading, and call-to-action button to the homepage"
+✅ "Add a hero section with gradient background, centered heading, and button"
```
-**Provide Context Efficiently:**
+**Give helpful details:**
```
-❌ "Fix the login page" (requires AI to explore entire codebase)
-✅ "Fix the password validation error on /login - the error occurs when password is less than 8 characters"
+❌ "Fix the login page" (AI has to search everything)
+✅ "Fix the password error on /login - it breaks when the password is shorter than 8 characters"
```
-**Use Structured Requests:**
+**Use numbered lists:**
```
❌ "Add user authentication"
-✅ "Add user authentication with: 1) Login form with email/password, 2) Registration form, 3) Password reset, 4) Protected routes"
+✅ "Add user authentication with: 1) Login form, 2) Registration form, 3) Password reset, 4) Protected routes"
```
-## Key Efficiency Techniques
+## Smart Ways to Save Tokens
### Use Discussion Mode for Planning
-Switch to discussion mode when you need guidance without code implementation:
+When you just want to talk and plan (not write code), use discussion mode to save tokens:
-- **Planning Phase**: Use discussion mode to plan features before implementing
-- **Architecture Decisions**: Get advice on system design and technology choices
-- **Code Review**: Discuss code improvements without making changes
-- **Learning**: Ask questions and get explanations without consuming implementation tokens
+- **Planning**: Talk about features before building them (no code artifacts generated)
+- **Getting advice**: Ask which tools to use (plain English responses)
+- **Code review**: Discuss improvements without changing code
+- **Learning**: Ask questions without generating code
+- **Architecture decisions**: Get guidance on system design
-### Strategic Development Approach
+Discussion mode uses a different system prompt that focuses on planning rather than code generation, which can reduce token usage while still providing valuable guidance. Use the "Implement this plan" button when ready to switch to build mode.
-**Plan Before You Build:**
+### Plan Before You Build
-- Outline your application structure and features upfront
-- Break complex projects into manageable phases
-- Identify potential challenges before implementation
-- Create a development roadmap to guide your work
+**Think first:**
-**Iterative Development:**
+- Write down what your app should do
+- Break big projects into small pieces
+- Think about problems you might face
+- Make a plan for what to build first
-- Implement features incrementally rather than all at once
-- Test and validate each component before moving to the next
-- Use version control to track progress and enable rollbacks
-- Focus on core functionality before adding advanced features
+**Build step by step:**
-### Error Handling Strategies
+- Add one feature at a time
+- Test each piece before moving on
+- Use Git to save your progress
+- Build the main features first, fancy stuff later
-**Avoid Repeated Fix Attempts:**
+### Don't Waste Tokens on Errors
-- Don't repeatedly click "Attempt fix" for the same error
-- Analyze error messages to understand root causes
-- Use discussion mode to get guidance on complex issues
-- Implement proper error handling in your code to prevent future issues
+**When something breaks:**
-**Add Comprehensive Error Handling:**
+- Don't keep clicking "fix" over and over
+- Read the error message to understand what's wrong
+- Use discussion mode to ask for help
+- Add error handling so it doesn't break again
-- Include detailed logging to understand error patterns
-- Implement graceful error states in your UI
-- Add input validation to prevent common errors
-- Use try-catch blocks appropriately in your code
+**Prevent errors:**
-### Project Size Management
+- Add logging to see what's happening
+- Show friendly error messages to users
+- Check user input before using it
+- Use try-catch to handle errors gracefully
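The last two bullets combine into one pattern: validate input first, then wrap the risky call in try-catch. A minimal sketch, where every name is hypothetical and `signIn` is a stand-in for your real auth call:

```typescript
// Hypothetical stand-in for a real authentication call;
// it throws to simulate a backend failure.
function signIn(email: string, _password: string): void {
  if (!email.includes("@")) throw new Error("invalid email");
}

// Check user input before using it.
function validatePassword(password: string): string | null {
  if (password.length < 8) return "Password must be at least 8 characters.";
  return null; // null means the input is valid
}

function handleLogin(email: string, password: string): string {
  // 1) Validate input before doing any work.
  const validationError = validatePassword(password);
  if (validationError) return validationError;
  try {
    // 2) Wrap the call that can fail in try-catch.
    signIn(email, password);
    return "Logged in";
  } catch (err) {
    // 3) Log details for yourself; show a friendly message to the user.
    console.error("Login failed:", err);
    return "Something went wrong. Please try again.";
  }
}
```

Code structured this way fails with clear messages instead of crashing, so you rarely need to spend tokens asking the AI to diagnose it.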
-**Optimize Project Structure:**
+### Keep Your Project Small
-- Keep files under 500 lines when possible
-- Split large components into smaller, focused modules
-- Remove unused dependencies and code
-- Use efficient data structures and algorithms
+**Organize your files:**
-**Context Window Management:**
+- Keep files under 500 lines
+- Split big files into smaller ones
+- Delete code you're not using
+- Use simple, efficient code
-- Be mindful of how much context your project provides
-- Use specific file references instead of broad requests
-- Clean up chat history when conversations become too long
-- Focus on specific components rather than entire applications
+**Manage context:**
+
+- Don't include your whole project in every request
+- Reference specific files instead
+- Start a new chat when conversations get too long
+- Focus on one part of your app at a time
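Starting a new chat is effectively a manual version of trimming old history. As a sketch of the underlying idea (the message shape and the 4-characters-per-token estimate are assumptions for illustration, not CodinIT's actual internals):

```typescript
interface Message {
  role: "user" | "assistant";
  content: string;
}

// Rough heuristic: ~4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Keep only the most recent messages that fit within a token budget.
function trimHistory(history: Message[], budget: number): Message[] {
  const kept: Message[] = [];
  let used = 0;
  // Walk from newest to oldest, keeping messages until the budget runs out.
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(history[i].content);
    if (used + cost > budget) break;
    kept.unshift(history[i]);
    used += cost;
  }
  return kept;
}
```

The takeaway: old messages keep costing you on every request, so dropping them (or starting fresh) is the cheapest optimization available.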
- **Discussion Mode**: Use discussion mode for planning, architecture decisions, and getting guidance without
- implementing code changes.
+ **Discussion Mode**: Use this when you want to talk and plan without writing code. It uses a different system prompt focused on guidance rather than code generation, which can save tokens.
- **Version Control**: Leverage Git/version control features to manage project state without consuming AI tokens for
- undo operations.
+ **Start New Chats**: When conversations get long, start a new chat to reduce context tokens. CodinIT maintains your project files, so you won't lose work.
-## Model Selection Strategies
+
+ **Use Git**: Save your work with Git instead of asking AI to undo things. It's free and doesn't use tokens!
+
-### Choose Appropriate Models
+## Choosing the Right AI Model and Prompt
-Different AI models have different strengths and token costs:
+### Pick the Right Model for the Job
-- **Use smaller models** for simple tasks, drafting, and initial development
-- **Reserve larger models** for complex reasoning, code review, and final polishing
-- **Consider model context limits** when working with large codebases
-- **Balance cost vs. capability** based on your current development phase
+CodinIT supports multiple AI providers with different token costs:
-### Provider-Specific Optimization
+- **Cheaper models** (GPT-3.5, DeepSeek) for simple tasks and quick questions
+- **Mid-range models** (GPT-4, Claude Sonnet) for most development work
+- **Premium models** (Claude Opus) for complex problems and important code
+- **Check context limits** - some models can't handle huge projects
+- **Balance cost and quality** based on what you're doing
-**Anthropic Claude:**
+### Different AI Models
-- Excellent for reasoning and code generation
-- Higher token costs but superior code quality
-- Best for complex development tasks
+**Claude (Anthropic):**
-**OpenAI GPT:**
+- Excellent at reasoning and complex code
+- Larger context windows (200K+ tokens)
+- Higher cost per token but better quality
+- Works well with CodinIT's chain-of-thought prompting
+- Best for: Complex projects, refactoring, architecture
+
+**GPT (OpenAI):**
- Fast and cost-effective for many tasks
-- Good for quick iterations and prototyping
-- Consider GPT-4 for complex reasoning tasks
+- Good for iterative development
+- GPT-4 for harder problems, GPT-3.5 for simple tasks
+- Best for: Quick iterations, simple features, prototyping
+
+**DeepSeek:**
+
+- Very cost-effective for code generation
+- Good code quality at lower cost
+- Best for: Budget-conscious development, learning
+
+**Other Models (Gemini, Groq, etc.):**
+
+- Check provider-specific strengths
+- Consider regional availability
+- Compare pricing for your use case
+
+### System Prompt Selection
+
+CodinIT offers three prompt variants that affect token usage:
-**Other Providers:**
+1. **Default Prompt**: Balanced approach with comprehensive guidelines
+2. **Fine-Tuned Prompt**: More detailed instructions, higher token usage, better results
+3. **Experimental Prompt**: Optimized for lower token usage (may sacrifice some quality)
-- Evaluate based on specific use cases
-- Consider regional availability and data privacy requirements
-- Compare pricing and performance for your workload
+Choose the experimental prompt if token efficiency is your top priority.
-## Advanced Optimization Techniques
+## Advanced Tips
-### Context Management
+### Focus Your Requests
-**File-Specific Requests:**
+**Be specific about files:**
-- Reference specific files instead of asking about "the entire codebase"
-- Use imports and dependencies to provide necessary context
-- Focus on individual components rather than full applications
+- Name the exact files you're working on
+- Don't ask about "the whole project"
+- Focus on one part at a time
-**Progressive Development:**
+**Build gradually:**
-- Build core functionality first, then add features incrementally
-- Test each component thoroughly before moving to the next
-- Use modular architecture to keep context windows manageable
+- Start with the main features
+- Test each piece before adding more
+- Keep your code organized in small pieces
-### Performance Monitoring
+### Watch Your Usage
-**Track Your Usage:**
+**Track what you use:**
-- Monitor token consumption across different tasks
-- Identify patterns in high-token activities
-- Adjust your approach based on usage analytics
+- See which tasks use the most tokens
+- Notice patterns in what costs more
+- Change your approach based on what you learn
-**Optimize Workflows:**
+**Work smarter:**
-- Combine related changes into single requests
-- Use batch operations when possible
-- Plan complex changes to minimize back-and-forth communication
+- Combine related requests into one
+- Do multiple things at once when possible
+- Plan ahead to avoid going back and forth
- **Token Awareness**: Understanding token consumption helps you work more efficiently and control costs. Focus on
- quality over quantity in your interactions.
+ **Be Aware**: Understanding tokens helps you save money. Focus on asking good questions, not lots of questions.
- **Continuous Learning**: As you work more with AI models, you'll develop intuition for which approaches are most
- token-efficient for different types of tasks.
+ **Learn as You Go**: The more you use AI, the better you'll get at knowing which approach saves the most tokens.
diff --git a/prompting/plan-your-app.mdx b/prompting/plan-your-app.mdx
index 3d19cee..cbe49b9 100644
--- a/prompting/plan-your-app.mdx
+++ b/prompting/plan-your-app.mdx
@@ -1,311 +1,6 @@
---
title: 'Plan Your App'
-description: 'Strategic planning for successful application development'
+description: 'Master strategic planning techniques for successful application development with AI-powered architecture and feature planning guidance.'
---
-Effective planning is the foundation of successful application development. This guide helps you structure your thinking, define requirements, and create a roadmap for building your application with CodinIT.
-
-## The Application Development Lifecycle
-
-```mermaid
-flowchart LR
- A[Plan & Design] --> B[Core Implementation]
- B --> C[Feature Enhancement]
- C --> D[Testing & Refinement]
- D --> E[Deployment & Launch]
- E --> F[Maintenance & Updates]
-```
-
-## Step 1: Define Your Vision
-
-### What Problem Are You Solving?
-
-**User-Centric Approach:**
-
-- Identify your target users and their needs
-- Understand the problem you're solving
-- Define success criteria from the user's perspective
-
-**Market Research:**
-
-- Analyze similar applications
-- Identify unique value propositions
-- Understand competitive landscape
-
-### Application Scope
-
-**Core Functionality:**
-
-- Define must-have features
-- Prioritize based on user needs
-- Avoid feature creep in initial planning
-
-**Technical Requirements:**
-
-- Choose appropriate technology stack
-- Consider scalability needs
-- Plan for future extensibility
-
-## The lifecycle of an application
-
-```mermaid theme={"system"}
-flowchart LR
- A(Design and plan) --> B(Start building: \n first prompt) --> C{Iterate: \n more prompting} --> D(Deploy) --> C
- style A fill:#D8F1FF,stroke:#154E93,color:black;
- style B fill:#D8F1FF,stroke:#154E93,color:black;
- style C fill:#D8F1FF,stroke:#154E93,color:black;
- style D fill:#D8F1FF,stroke:#154E93,color:black;
-```
-
-### Platform Selection
-
-**Web Applications:**
-
-- Browser-based interactive applications
-- User-generated content and data management
-- Real-time features and collaboration
-- Examples: dashboards, social platforms, productivity tools
-
-**Websites:**
-
-- Content-focused experiences
-- Marketing and informational sites
-- Portfolio and showcase websites
-- Examples: blogs, landing pages, documentation sites
-
-**Mobile Applications:**
-
-- Native mobile experiences
-- Device-specific features and capabilities
-- App store distribution
-- Examples: fitness trackers, messaging apps, utilities
-
-### Feature Prioritization
-
-**Must-Have Features (MVP):**
-
-- Core functionality that defines your product
-- Essential user workflows
-- Basic user interface and experience
-
-**Should-Have Features:**
-
-- Important but not critical functionality
-- Enhanced user experience elements
-- Advanced features for power users
-
-**Nice-to-Have Features:**
-
-- Quality-of-life improvements
-- Advanced capabilities
-- Future enhancement opportunities
-
-## Step 2: Technical Planning
-
-### Technology Stack Selection
-
-**Frontend Technologies:**
-
-- React for interactive web applications
-- Vue.js for progressive web apps
-- HTML/CSS/JavaScript for simpler websites
-- React Native for cross-platform mobile apps
-
-**Backend Considerations:**
-
-- Supabase for full-stack applications
-- API integrations for specific services
-- Serverless functions for dynamic features
-
-### Architecture Decisions
-
-**Data Management:**
-
-- Local storage for simple applications
-- Supabase for complex data requirements
-- External APIs for specialized functionality
-
-**State Management:**
-
-- React Context for small applications
-- Zustand or Redux for complex state needs
-- URL state for shareable application states
-
-## Step 3: Create Your Implementation Plan
-
-### Use Discussion Mode for Planning
-
-**Strategic Planning:**
-
-- Use discussion mode to explore different approaches
-- Get guidance on architecture decisions
-- Understand technical trade-offs
-- Plan your development roadmap
-
-**Iterative Development:**
-
-- Start with core functionality
-- Add features incrementally
-- Test and validate each step
-- Refine based on user feedback
-
-### Example Application Plan
-
-**Todo Application with Time Blocking:**
-
-**Core Features:**
-
-- Task creation and management
-- Time blocking calendar integration
-- Pomodoro timer functionality
-- Progress tracking and analytics
-
-**Technical Stack:**
-
-- React for the frontend
-- Supabase for data persistence
-- Modern CSS for styling
-- Responsive design for all devices
-
-**Development Phases:**
-
-1. Basic task CRUD operations
-2. Calendar integration for time blocking
-3. Pomodoro timer implementation
-4. Progress visualization
-5. Mobile optimization
-
-## Step 4: Write Effective Prompts
-
-### Prompt Structure Best Practices
-
-**Clear Project Description:**
-
-```
-Build a task management application for productivity enthusiasts who use time-blocking and Pomodoro techniques.
-```
-
-**Specific Requirements:**
-
-```
-Include features for:
-- Creating, editing, and deleting tasks
-- Time-blocking calendar integration
-- Pomodoro timer with customizable intervals
-- Progress tracking and statistics
-```
-
-**Technical Specifications:**
-
-```
-Use React with TypeScript, Supabase for backend, and implement responsive design for mobile and desktop.
-```
-
-### Example Complete Prompt
-
-**Well-Structured Prompt:**
-
-```
-Create a comprehensive task management application for productivity enthusiasts. The app should help users implement time-blocking and Pomodoro techniques effectively.
-
-Key Features:
-1. Task Management: Create, edit, delete, and organize tasks
-2. Time Blocking: Visual calendar for scheduling tasks by time blocks
-3. Pomodoro Timer: Customizable work/break intervals with progress tracking
-4. Analytics: View productivity statistics and time usage patterns
-
-Technical Requirements:
-- React with TypeScript for type safety
-- Supabase for data persistence and real-time updates
-- Responsive design that works on desktop and mobile
-- Clean, modern UI with intuitive navigation
-- Local storage fallback for offline functionality
-
-Start with the core task management functionality, then add time-blocking features, followed by the Pomodoro timer.
-```
-
-## Step 5: Execute and Iterate
-
-### Start Building
-
-**Begin with Core Features:**
-
-- Implement the most essential functionality first
-- Create a working prototype quickly
-- Validate core user workflows
-
-**Iterative Development:**
-
-- Add features incrementally based on user feedback
-- Test each addition thoroughly
-- Maintain code quality throughout development
-
-### Use Discussion Mode for Guidance
-
-**Architecture Decisions:**
-
-- Get advice on complex technical choices
-- Explore different implementation approaches
-- Understand scalability considerations
-
-**Code Review and Optimization:**
-
-- Discuss code structure and organization
-- Get guidance on performance improvements
-- Plan refactoring and maintenance tasks
-
-
- **Planning is Key**: Taking time to plan thoroughly at the beginning saves significant time and resources during
- development.
-
-
-
- **Start Small**: Focus on a minimal viable product (MVP) first, then expand based on user needs and feedback.
-
-- Local storage for data persistence - Static site compatible for Netlify hosting - Progressive Web App capabilities
-
-Design Guidelines:
-
-- Modern, minimalist interface
-- Vibrant but professional color palette
-- Clear visual hierarchy
-- Intuitive navigation
-- Smooth animations for interactions
-- High contrast for accessibility
-
-Optional Enhancements:
-
-- Dark/light mode toggle
-- Keyboard shortcuts
-- Task statistics and productivity insights
-- Export/import task data
-- Integration with calendar applications
-
-Once you enhance your prompt, read through the new prompt to make sure it still does what you want.
-
-## Step 3: Iterate
-
-After CodinIT generates your application from your first prompt, you'll probably want to make changes:
-
-- Adding more features.
-- Tweaking behavior or appearance.
-- Fixing bugs.
-
-Do one thing at a time. Don't try to add multiple features in one go. Remember the guidance in [prompt effectively](/prompting/prompting-effectively).
-
-Read the [features overview](/features/overview) for help using CodinIT's interface and capabilities.
-
-## Step 4: Publish
-
-After building your application, the next step is to make it available to users. This is where publishing and hosting come in.
-
-CodinIT provides multiple deployment options. You can choose to:
-
-- Use CodinIT's Netlify integration: this connects CodinIT to Netlify, enabling one-click publishing from within CodinIT. Follow the [Netlify integration](/integrations/netlify) guide to set this up and to learn more about building for Netlify.
-- Use CodinIT's Vercel integration: deploy your applications with Vercel's global edge network. Follow the [Vercel integration](/integrations/vercel) guide.
-- Connect to GitHub and set up publishing from GitHub using other CI/CD tools: this is a common devops pattern. The [GitHub integration](/integrations/git) guide walks you through connecting CodinIT and GitHub. You'll then need to set up your own build and publishing tools.
-- Explore other [deployment platforms](/integrations/deployments) for additional hosting options.
-
-If you're new to building applications and unsure which option to choose, using Netlify or Vercel integration is usually the best option.
-
-Once you've successfully set up your deployment integration by following the instructions above, you can publish your application directly from CodinIT. Learn more about [deployment options](/integrations/deployments) to find the best fit for your project.
+Coming soon...
\ No newline at end of file
diff --git a/prompting/prompt-engineering-guide.mdx b/prompting/prompt-engineering-guide.mdx
index ce612b0..e22514d 100644
--- a/prompting/prompt-engineering-guide.mdx
+++ b/prompting/prompt-engineering-guide.mdx
@@ -1,167 +1,210 @@
---
title: 'Prompt Engineering Guide'
-description: 'Advanced techniques for crafting effective prompts and optimizing AI interactions'
+description: 'How to write prompts that get better code from the AI'
---
-Master the art of prompt engineering to get the best results from AI models. This guide covers advanced techniques for structuring prompts, optimizing for different models, and maximizing the quality of AI-generated code and responses.
+Learn how to ask the AI for what you want in a way that gets you the best results. CodinIT uses sophisticated system prompts that guide the AI's behavior, and understanding how to work with these prompts will help you get better results.
-## Understanding AI Model Capabilities
+## How CodinIT Processes Your Requests
-Different AI models have different strengths and limitations. Effective prompt engineering involves adapting your communication style to leverage each model's capabilities optimally.
+CodinIT uses different system prompts depending on the mode and settings you choose:
+
+- **Build Mode**: Uses either the default, fine-tuned, or experimental prompt to generate code and implement features
+- **Discussion Mode**: Uses a specialized consultant prompt focused on planning and guidance
+- **Chain of Thought**: The AI shows its reasoning process before providing solutions
+- **Search Grounding**: Automatically searches the web for current information when needed
+
+Different AI models are good at different things. The key is to be clear about what you want and provide sufficient context.
---
-## Technology Stack Specification
+## Tell the AI What Tools to Use
-### Be Explicit About Technologies
+### Be Specific About Your Tools
-AI models work best when you clearly specify your preferred technology stack. This ensures the generated code uses the right frameworks, libraries, and architectural patterns from the start.
+CodinIT's system prompts include built-in preferences (like using Vite for web servers and Supabase for databases), but you can override these by being explicit. The AI works better when you tell it exactly what tools and technologies you want to use.
-**Clear Stack Specification:**
+**Good example:**
```
-Build a modern e-commerce dashboard using:
-- React with TypeScript for the frontend
-- Supabase for backend and database
-- Tailwind CSS for styling
-- React Query for data fetching
-- React Router for navigation
+Build an online store dashboard with:
+- React for the website
+- Supabase for saving data
+- Tailwind CSS to make it look nice
+- React Router to move between pages
```
-### Recommended Technology Combinations
+### Built-in Technology Preferences
+
+CodinIT has default preferences configured in its system prompts:
+
+- **Web servers**: Vite (default)
+- **Databases**: Supabase (default), or JavaScript-based alternatives like libsql or sqlite
+- **Styling**: Tailwind CSS with shadcn/ui components
+- **Icons**: Lucide React
+- **Images**: Pexels stock photos (direct URLs only)
+- **Package management**: npm
+- **Node.js scripts**: Preferred over shell scripts
-| Category | Primary Choice | Alternatives | Use Case |
-| ---------------------- | ------------------ | ------------------------------ | ---------------------------- |
-| **Frontend Framework** | React + TypeScript | Vue.js, Svelte, SolidJS | Interactive web applications |
-| **Styling** | Tailwind CSS | CSS Modules, Styled Components | Utility-first styling |
-| **Backend** | Supabase | Express.js, FastAPI | Full-stack applications |
-| **State Management** | Zustand | Redux, Jotai | Complex application state |
-| **Data Fetching** | TanStack Query | SWR, Apollo | Server state management |
+You can override these by explicitly specifying different tools in your prompts.
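For example, an override prompt might look like this (the specific tool swaps are illustrative, not recommendations):

```
Build the dashboard with Next.js instead of Vite, use PostgreSQL with
Prisma instead of Supabase, and keep Tailwind CSS for styling.
```

Naming both the default you're replacing and the tool you want leaves the AI no room to fall back on its built-in preference.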
-### Framework-Specific Considerations
+### Popular Tool Combinations
-**React Applications:**
+| What You're Building | Good Tools to Use | What It's For |
+| ---------------------- | ------------------ | ---------------------------- |
+| **Website** | React + TypeScript | Making interactive websites |
+| **Styling** | Tailwind CSS | Making things look pretty |
+| **Saving Data** | Supabase | Storing user information |
+| **Managing Data** | Zustand | Keeping track of app state |
-- Specify component structure (functional vs class components)
-- Include state management preferences
-- Define routing approach (React Router, Next.js App Router)
+### Tips for Different Frameworks
-**Vue.js Applications:**
+**If you're using React:**
+- Say if you want modern or old-style components
+- Mention how you want to handle data
+- Tell it how pages should connect
-- Specify composition API vs options API
-- Include UI library preferences (Quasar, Vuetify)
-- Define build tool (Vite, Vue CLI)
+**If you're using Vue:**
+- Specify which Vue style you prefer
+- Mention any UI libraries you like
+- Say what build tool you're using
-**Backend Integration:**
+**For the Backend:**
+- Say if you want REST or GraphQL APIs
+- Mention if you need user login
+- Tell it how to check if data is correct
-- Specify API patterns (REST, GraphQL)
-- Include authentication requirements
-- Define data validation approach (Zod, Joi)
+## How to Ask Better Questions
-## Advanced Prompting Techniques
+### Give the AI Context
-### Contextual Information
+**Help the AI understand your situation:**
-**Provide Relevant Context:**
+- Show it code you already have
+- Tell it which files you're working on
+- Mention any limits (like "needs to work on old phones")
+- Say if speed is important
-- Include existing code snippets when relevant
-- Reference specific files or components
-- Mention current technology constraints
-- Specify performance requirements
+**Start Simple, Then Add Details:**
-**Progressive Refinement:**
+- First, explain what you want in general
+- Then add more specific details
+- Ask follow-up questions if needed
+- Build on what the AI already created
-- Start with high-level requirements
-- Add implementation details iteratively
-- Use follow-up prompts for clarification
-- Build upon previous responses
+### Tips for Different AI Models
-### Model-Specific Optimization
+CodinIT supports multiple AI providers, each with different strengths:
-**Claude/Gemini Models:**
+**For Claude (Anthropic):**
-- Provide comprehensive context upfront
-- Use structured formats (numbered lists, sections)
-- Include examples and edge cases
-- Specify output format preferences
+- Excellent at reasoning and complex problem-solving
+- Give all the information at once for best results
+- Use numbered lists to organize your thoughts
+- Include examples of what you want
+- Works well with CodinIT's chain-of-thought prompting
-**GPT Models:**
+**For GPT (OpenAI):**
-- Break complex requests into smaller parts
-- Use clear, direct language
-- Provide concrete examples
-- Specify desired output structure
+- Fast and versatile for most tasks
+- Break big requests into smaller pieces
+- Use simple, clear language
+- Show examples of what you mean
+- Be specific about what you want
-### Error Prevention
+**For other models (Gemini, DeepSeek, etc.):**
-**Common Pitfalls to Avoid:**
+- Check context window limits for large projects
+- Test different models for your specific use case
+- Consider cost vs. quality trade-offs
+- Some models excel at specific tasks (e.g., code generation vs. explanation)
-- Vague requirements that lead to assumptions
-- Missing technical specifications
-- Inconsistent naming conventions
-- Unspecified integration requirements
+### Avoid Common Mistakes
-**Validation Techniques:**
+**Don't do this:**
-- Include acceptance criteria
-- Specify testing requirements
-- Define success metrics
-- Request validation checkpoints
+- Be too vague ("make it better")
+- Forget to mention important details
+- Use different names for the same thing
+- Forget to say how things should connect
+
+**Do this instead:**
+
+- Include clear goals ("make the button blue and centered")
+- Specify what success looks like
+- Say how you'll test it
+- Ask for checkpoints along the way
- **Version Specification**: When possible, specify framework versions to ensure compatibility and avoid deprecated
- features.
+ **Version Numbers**: If you know which version of a tool you're using, tell the AI. CodinIT can use search grounding to find current documentation and best practices for specific versions.
- **Iterative Refinement**: Start with a clear prompt, then use follow-up messages to add details and make adjustments
- as needed.
+ **Use Discussion Mode**: For planning and architecture decisions, switch to discussion mode to get guidance without code generation. Then use the "Implement this plan" button to execute in build mode.
-## Best Practices Summary
+## Understanding CodinIT's System Constraints
+
+CodinIT operates in WebContainer, an in-browser Node.js runtime with specific limitations:
+
+### What Works
+
+- JavaScript and WebAssembly code
+- Node.js scripts and npm packages
+- Vite and other JavaScript-based tools
+- Python (standard library only)
+
+### What Doesn't Work
+
+- Native binaries (C/C++ compiled code)
+- Git commands (use CodinIT's built-in Git integration instead)
+- Python pip packages (standard library only)
+- Supabase CLI (use CodinIT's Supabase integration)
+
+The AI is aware of these constraints and will suggest compatible alternatives automatically.
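+
+For example, a request phrased around what WebContainer supports might look like this (illustrative):
+
+```txt
+"Parse this CSV with plain Node.js (no native addons) and render the
+results with a JavaScript charting library."
+```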
+
+## Quick Tips Summary
-### Essential Prompting Principles
+### The Main Rules
-1. **Clarity First**: Be specific about what you want to build and how it should work
-2. **Technology Specification**: Clearly state your preferred frameworks and libraries
-3. **Context Provision**: Include relevant background information and constraints
-4. **Iterative Approach**: Start simple, then add complexity through follow-up prompts
+1. **Be clear**: Say exactly what you want and how it should work
+2. **Name your tools**: Tell the AI which technologies to use
+3. **Give context**: Share relevant information and any limits
+4. **Start simple**: Begin with basics, then add details
-### Quality Checklist
+### Checklist Before You Ask
-**Before Submitting:**
+**Before you send your request:**
-- ✅ Goal is clearly defined
-- ✅ Technology stack is specified
-- ✅ Key features are listed
-- ✅ User requirements are outlined
-- ✅ Success criteria are defined
+- ✅ You clearly explained your goal
+- ✅ You listed the tools you want to use
+- ✅ You mentioned the main features you need
+- ✅ You described what success looks like
-**During Development:**
+**While building:**
-- 🔄 Provide feedback on generated code
-- 🔄 Request specific modifications
-- 🔄 Ask for explanations when needed
-- 🔄 Use discussion mode for planning
+- 🔄 Give feedback on what the AI creates
+- 🔄 Ask for specific changes
+- 🔄 Request explanations if confused
+- 🔄 Use discussion mode to plan
-### Common Success Patterns
+### What Makes a Good Request
-**Effective Prompts Include:**
+**Good requests have:**
-- Specific functionality requirements
-- Technology stack preferences
-- User experience considerations
-- Performance and scalability needs
-- Integration requirements
+- Clear description of what you want
+- List of tools to use
+- How it should look and feel
+- Any speed or size requirements
+- How it connects to other things
-**Ineffective Prompts Lack:**
+**Bad requests are missing:**
-- Clear objectives
-- Technical specifications
-- Implementation details
-- Success criteria
+- A clear goal
+- Technical details
+- Specific instructions
+- Definition of "done"
- **Remember**: The AI can only work with the information you provide. The more specific and complete your prompts, the
- better the results.
+ **Remember**: The AI can only work with what you tell it. The more details you give, the better the results.
diff --git a/prompting/prompting-effectively.mdx b/prompting/prompting-effectively.mdx
index d3acbc1..572f8fd 100644
--- a/prompting/prompting-effectively.mdx
+++ b/prompting/prompting-effectively.mdx
@@ -3,16 +3,24 @@ title: 'Prompt Effectively'
description: 'Master the art of clear, effective communication with AI models'
---
-The quality of your results depends heavily on how clearly and effectively you communicate your intentions. Good prompting is a skill that combines clarity, specificity, and understanding of how AI models process information.
+The quality of your results depends heavily on how clearly and effectively you communicate your intentions. CodinIT uses sophisticated system prompts that guide the AI's behavior, and understanding how to work with these prompts will help you get better results.
## Understanding AI Communication
-AI models process your requests through text, so the way you phrase your questions and instructions directly impacts the quality of responses. Effective prompting involves:
+CodinIT processes your requests through specialized system prompts that include:
+
+- **Chain of thought reasoning**: The AI shows its thinking process before providing solutions
+- **Artifact-based responses**: Code and commands are wrapped in structured artifacts
+- **Context awareness**: The AI understands your project structure, running processes, and file changes
+- **Search grounding**: Automatic web search for current information when needed
+
+Effective prompting involves:
- **Clarity**: Being specific about what you want
-- **Context**: Providing necessary background information
+- **Context**: Providing necessary background information (file paths, error messages, requirements)
- **Structure**: Organizing your requests logically
- **Iteration**: Refining your approach based on responses
+- **Mode selection**: Using discussion mode for planning, build mode for implementation
## Example Prompts
@@ -80,59 +88,85 @@ These examples demonstrate effective prompting patterns that you can adapt to yo
### Understanding AI Reasoning
-CodinIT provides visual insights into the AI's thinking process:
+CodinIT's system prompts include chain-of-thought instructions that make the AI's reasoning visible:
-**Thinking Process Display:**
+**Chain of Thought (thinking tags):**
-- See step-by-step reasoning for complex tasks
-- Understand how the AI breaks down problems
-- Follow the logical flow of solutions
+- The AI shows 2-6 concrete steps it will take before implementing
+- Helps you understand the approach before code is generated
+- Appears at the start of every response in build mode
+- Lists specific actions like "Set up Vite + React project structure" or "Implement core functionality"
**Thought Artifacts:**
-- Expandable reasoning containers
+- Expandable reasoning containers in the UI
- Detailed explanation of decision-making
- Visual representation of problem-solving steps
+- Shows the AI's planning process transparently
+
+This thinking process is mandatory in CodinIT's system prompts and helps ensure the AI takes a systematic approach to your requests.
### Using Discussion Mode Effectively
**Planning Phase:**
-- Use discussion mode for architecture decisions
+- Click the "Discuss" button to activate discussion mode
+- The AI switches to a specialized consultant prompt
- Get guidance without code implementation
-- Explore multiple solution approaches
+- Receive plans with numbered steps in plain English
+- Explore multiple solution approaches with reasoning
- Understand trade-offs and implications
**Implementation Phase:**
-- Switch to regular chat for code generation
-- Reference discussion insights in prompts
-- Build upon planned architectures
+- Click "Implement this plan" quick action button
+- Automatically switches to build mode with context
+- The AI generates code based on the discussed plan
+- Reference discussion insights in follow-up prompts
- Iterate based on discussion feedback
+**Key Differences:**
+
+- **Discussion mode**: Plans in plain English, no code snippets, consultative tone
+- **Build mode**: Generates code in artifacts, implements features, shows chain of thought
+
## Optimizing for Different AI Models
### Understanding Model Capabilities
-Different AI providers have different strengths:
+CodinIT supports multiple AI providers through its provider system. Different models have different strengths:
**Claude (Anthropic):**
- Excellent at reasoning and analysis
- Strong code generation capabilities
- Good for complex problem-solving
+- Works well with CodinIT's chain-of-thought prompting
+- Larger context windows for bigger projects
**GPT Models (OpenAI):**
- Fast and versatile
- Good for creative tasks
- Strong at following detailed instructions
+- Cost-effective for simpler tasks
-**Other Models:**
+**Other Models (DeepSeek, Gemini, Groq, etc.):**
- Specialized capabilities vary by provider
- Consider context limits and pricing
- Test different models for your use case
+- Some excel at specific tasks (e.g., DeepSeek for code)
+
+### Prompt Library Options
+
+CodinIT offers three system prompt variants:
+
+1. **Default Prompt**: Battle-tested standard prompt with comprehensive guidelines
+2. **Fine-Tuned Prompt**: Optimized for better results with advanced techniques
+3. **Experimental Prompt**: Optimized for lower token usage
+
+You can select these in the settings to optimize for your needs.
### Adapting Your Prompts
@@ -194,16 +228,31 @@ Different AI providers have different strengths:
system.
-```txt theme={"system"}
-For all designs I ask you to make, have them be beautiful, not cookie cutter. Make webpages that are fully featured and worthy for production.
+## Custom System Prompts
-By default, this template supports JSX syntax with Tailwind CSS classes, the shadcn/ui library, React hooks, and Lucide React for icons. Do not install other packages for UI themes, icons, etc unless absolutely necessary or I request them.
+You can enhance CodinIT's behavior by adding custom instructions to your project. These work alongside CodinIT's built-in system prompts.
-Use icons from lucide-react for logos.
+### Example Custom Instructions
+
+```txt
+For all designs I ask you to make, have them be beautiful, not cookie cutter.
+Make webpages that are fully featured and worthy for production.
-Use stock photos from unsplash where appropriate.
+By default, this template supports JSX syntax with Tailwind CSS classes,
+the shadcn/ui library, React hooks, and Lucide React for icons.
+Do not install other packages for UI themes, icons, etc unless absolutely
+necessary or I request them.
+
+Use icons from lucide-react for logos.
+Use stock photos from Pexels where appropriate.
```
-### Tips for the project or system prompts
+### Tips for Custom Instructions
+
+- **Be specific**: Include instructions about your preferred coding style, libraries, or patterns
+- **Set boundaries**: Tell CodinIT to only change relevant code, not rewrite entire files
+- **Define standards**: Specify naming conventions, file organization, or testing requirements
+- **Provide context**: Explain project-specific constraints or requirements
+- **Override defaults**: Explicitly state if you want different tools than CodinIT's defaults
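+
+For instance, boundary-setting custom instructions might look like this (illustrative; adapt to your project):
+
+```txt
+Only modify files directly related to my request; do not rewrite unrelated code.
+Use camelCase for variables and PascalCase for React components.
+Place new components in src/components/ and co-locate their tests.
+```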
-- Include instructions to CodinIT.dev to only change relevant code.
+Note: Custom instructions complement but don't replace CodinIT's core system prompts, which handle artifact generation, WebContainer constraints, and mode-specific behavior.
diff --git a/providers/anthropic.mdx b/providers/anthropic.mdx
index ed7952b..31127c2 100644
--- a/providers/anthropic.mdx
+++ b/providers/anthropic.mdx
@@ -1,60 +1,43 @@
---
title: "Anthropic"
-description: "Learn how to configure and use Anthropic Claude models with CodinIT. Covers API key setup, model selection, and advanced features like prompt caching."
+description: "Configure Anthropic Claude models with CodinIT for advanced reasoning and code generation."
---
**Website:** [https://www.anthropic.com/](https://www.anthropic.com/)
-### Getting an API Key
+## Getting an API Key
-1. **Sign Up/Sign In:** Go to the [Anthropic Console](https://console.anthropic.com/). Create an account or sign in.
-2. **Navigate to API Keys:** Go to the [API keys](https://console.anthropic.com/settings/keys) section.
-3. **Create a Key:** Click "Create Key". Give your key a descriptive name (e.g., "CodinIT").
-4. **Copy the Key:** **Important:** Copy the API key _immediately_. You will not be able to see it again. Store it securely.
+1. Go to [Anthropic Console](https://console.anthropic.com/) and sign in
+2. Navigate to [API keys](https://console.anthropic.com/settings/keys)
+3. Click "Create Key" and name it (e.g., "CodinIT")
+4. Copy the key immediately; you won't see it again
-### Supported Models
+## Configuration
-CodinIT supports the following Anthropic Claude models:
+1. Click the settings icon (⚙️) in CodinIT
+2. Select "Anthropic" as the API Provider
+3. Paste your API key
+4. Choose your model
-- `claude-haiku-4-5-20251001`
-- `claude-opus-4-1-20250805`
-- `claude-opus-4-20250514`
-- `anthropic/claude-sonnet-4.5` (Recommended)
-- `claude-3-7-sonnet-20250219`
-- `claude-3-5-sonnet-20241022`
-- `claude-3-5-haiku-20241022`
-- `claude-3-opus-20240229`
-- `claude-3-haiku-20240307`
+## Supported Models
-See [Anthropic's Model Documentation](https://docs.anthropic.com/en/about-claude/models) for more details on each model's capabilities.
+- `anthropic/claude-sonnet-4.5` (Recommended)
+- `claude-opus-4-1-20250805`
+- `claude-3-7-sonnet-20250219`
+- `claude-3-5-sonnet-20241022`
+- `claude-3-5-haiku-20241022`
-### Configuration in CodinIT
+See [Anthropic's documentation](https://docs.anthropic.com/en/about-claude/models) for full model details.
-1. **Open CodinIT Settings:** Click the settings icon (⚙️) in the CodinIT panel.
-2. **Select Provider:** Choose "Anthropic" from the "API Provider" dropdown.
-3. **Enter API Key:** Paste your Anthropic API key into the "Anthropic API Key" field.
-4. **Select Model:** Choose your desired Claude model from the "Model" dropdown.
-5. **(Optional) Custom Base URL:** If you need to use a custom base URL for the Anthropic API, check "Use custom base URL" and enter the URL. Most users won't need to adjust this setting.
+## Extended Thinking
-### Extended Thinking
+Enable enhanced reasoning for complex tasks by checking "Enable Extended Thinking" in CodinIT settings. Available for Claude Opus 4, Sonnet 4.5, and Sonnet 3.7.
-Anthropic models offer an "Extended Thinking" feature, designed to give them enhanced reasoning capabilities for complex tasks. This feature allows the model to output its step-by-step thought process before delivering a final answer, providing transparency and enabling more thorough analysis for challenging prompts.
+Learn more in the [Extended Thinking documentation](https://docs.anthropic.com/en/build-with-claude/extended-thinking).
-When extended thinking is in CodinIT, the model generates `thinking` content blocks that detail its internal reasoning. These insights are then incorporated into its final response.
-CodinIT users can leverage this by checking the `Enable Extended Thinking` box below the model selection menu after selecting a Claude Model from any provider.
+## Notes
-**Key Aspects of Extended Thinking:**
-
-- **Supported Models:** This feature is available for select models, including Claude Opus 4, Claude Sonnet 4.5, and Claude Sonnet 3.7.
-- **Summarized Thinking (Claude 4):** For Claude 4 and 4.5 models, the API returns a summary of the full thinking process to balance insight with efficiency and prevent misuse. You are billed for the full thinking tokens, not just the summary.
-- **Streaming:** Extended thinking responses, including the `thinking` blocks, can be streamed.
-- **Tool Use & Prompt Caching:** Extended thinking interacts with tool use (requiring thinking blocks to be passed back) and prompt caching (with specific behaviors around cache invalidation and context).
-
-For comprehensive details on how extended thinking works, including API examples, interaction with tool use, prompt caching, and pricing, please refer to the [official Anthropic documentation on Extended Thinking](https://docs.anthropic.com/en/build-with-claude/extended-thinking).
-
-### Tips and Notes
-
-- **Prompt Caching:** Claude 3 models support [prompt caching](https://docs.anthropic.com/en/build-with-claude/prompt-caching), which can significantly reduce costs and latency for repeated prompts.
-- **Context Window:** Claude models have large context windows (200,000 tokens), allowing you to include a significant amount of code and context in your prompts.
-- **Pricing:** Refer to the [Anthropic Pricing](https://www.anthropic.com/pricing) page for the latest pricing information.
-- **Rate Limits:** Anthropic has strict rate limits based on [usage tiers](https://docs.anthropic.com/en/api/rate-limits#requirements-to-advance-tier). If you're repeatedly hitting rate limits, consider contacting Anthropic sales or accessing Claude through a different provider like [OpenRouter](/providers/openrouter) or [Requesty](/providers/openrouter).
\ No newline at end of file
+- **Context Window:** 200,000 tokens
+- **Prompt Caching:** Reduces costs for repeated prompts
+- **Rate Limits:** Based on [usage tiers](https://docs.anthropic.com/en/api/rate-limits). Consider [OpenRouter](/providers/openrouter) if hitting limits
+- **Pricing:** See [Anthropic Pricing](https://www.anthropic.com/pricing)
\ No newline at end of file
diff --git a/providers/aws-bedrock.mdx b/providers/aws-bedrock.mdx
index 8456081..256aed8 100644
--- a/providers/aws-bedrock.mdx
+++ b/providers/aws-bedrock.mdx
@@ -1,136 +1,79 @@
---
title: "AWS Bedrock"
sidebarTitle: "API Key"
-description: "Set up AWS Bedrock with CodinIT using Bedrock API Keys. Simplest setup for individual developers to access frontier models."
+description: "How to connect AWS Bedrock AI to CodinIT"
---
-### Overview
+Use AWS Bedrock to access powerful AI models like Claude through Amazon's cloud service.
-- **AWS Bedrock:** A fully managed service that offers access to leading generative AI models (e.g., Anthropic Claude, Amazon Nova) through AWS.\
- [Learn more about AWS Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html).
-- **CodinIT:** A VS Code extension that acts as a coding assistant by integrating with AI models—empowering developers to generate code, debug, and analyze data.
-- **Developer Focus:** This guide is tailored for individual developers that want to enable access to frontier models via AWS Bedrock with a simplified setup using API Keys.
+**Website:** [AWS Bedrock Docs](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html)
----
-
-### Step 1: Prepare Your AWS Environment
-
-#### 1.1 Individual user setup - Create a Bedrock API Key
+## How to Set It Up
-For more detailed instructions check the [documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/api-keys.html).
+### Step 1: Get Your API Key
-1. **Sign in to the AWS Management Console:**\
- [AWS Console](https://aws.amazon.com/console)
-2. **Access Bedrock Console:**
- - [Bedrock Console](https://console.aws.amazon.com/bedrock)
- - Create a new Long Lived API Key. This API Key will have by default the `AmazonBedrockLimitedAccess` IAM policy
- [View AmazonBedrockLimitedAccess Policy Details](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html#managed-policies)
+1. **Log in:** Go to [AWS Console](https://aws.amazon.com/console)
+2. **Open Bedrock:** Visit [Bedrock Console](https://console.aws.amazon.com/bedrock)
+3. **Make an API Key:** Create a new Long Lived API Key
+ - Use the default setting: `AmazonBedrockLimitedAccess`
+ - [See what this allows](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html#managed-policies)
-#### 1.2 Create or Modify the Policy
+### Step 2: Set Permissions
-To ensure CodinIT can interact with AWS Bedrock, your IAM user or role needs specific permissions. While the `AmazonBedrockLimitedAccess` managed policy provides comprehensive access, for a more restricted and secure setup adhering to the principle of least privilege, the following minimal permissions are sufficient for CodinIT's core model invocation functionality:
+**You need these permissions:**
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [{
+ "Effect": "Allow",
+ "Action": [
+ "bedrock:InvokeModel",
+ "bedrock:InvokeModelWithResponseStream",
+ "bedrock:CallWithBearerToken"
+ ],
+ "Resource": "*"
+ }]
+}
+```
-- `bedrock:InvokeModel`
-- `bedrock:InvokeModelWithResponseStream`
-- `bedrock:CallWithBearerToken`
+Create this policy and attach it to the IAM user associated with your API key (the user and the key share the same name prefix).
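+
+For tighter least-privilege control, you can scope `Resource` to specific model ARNs instead of `*`. A hedged sketch of the invoke permissions (the region and model ID below are examples; substitute your own):
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [{
+    "Effect": "Allow",
+    "Action": [
+      "bedrock:InvokeModel",
+      "bedrock:InvokeModelWithResponseStream"
+    ],
+    "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20241022-v2:0"
+  }]
+}
+```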
-You can create a custom IAM policy with these permissions and attach it to your IAM user or role.
+**Important notes:**
+- To let CodinIT list available models, add the `bedrock:ListFoundationModels` permission
+- For third-party models such as Anthropic Claude, the `AmazonBedrockLimitedAccess` policy already covers the AWS Marketplace subscription
+- Before first use of an Anthropic model, fill out the use case form in the [Playground](https://console.aws.amazon.com/bedrock/home#/text-generation-playground)
-1. In the AWS IAM console, create a new policy.
-2. Use the JSON editor to add the following policy document:
- ```json
- {
- "Version": "2012-10-17",
- "Statement": [
- {
- "Effect": "Allow",
- "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream", "bedrock:CallWithBearerToken"],
- "Resource": "*" // For enhanced security, scope this to specific model ARNs if possible.
- }
- ]
- }
- ```
-3. Name the policy (e.g., `CodinITBedrockInvokeAccess`) and attach it to the IAM user associated with the key you created. The IAM user and the API key have the same prefix.
+### Step 3: Pick a Region
-**Important Considerations:**
-
-- **Model Listing in CodinIT:** The minimal permissions (`bedrock:InvokeModel`, `bedrock:InvokeModelWithResponseStream`) are sufficient for CodinIT to _use_ a model if you specify the model ID directly in CodinIT's settings. If you rely on CodinIT to dynamically list available Bedrock models, you might need additional permissions like `bedrock:ListFoundationModels`.
-- **AWS Marketplace Subscriptions:** For third-party models (e.g., Anthropic Claude), the **`AmazonBedrockLimitedAccess`** policy grants you the necessary permissions to subscribe via the AWS Marketplace. There is no explicit access to be enabled. For Anthropic models you are still required to submit a First Time Use (FTU) form via the Console. If you get the following message in the CodinIT chat `[ERROR] Failed to process response: Model use case details have not been submitted for this account. Fill out the Anthropic use case details form before using the model.` then open the [Playground in the AWS Bedrock Console](https://console.aws.amazon.com/bedrock/home?#/text-generation-playground), select any Anthropic model and fill in the form (you might need to send a prompt first)
-
----
-
-### Step 2: Verify Regional and Model Access
-
-#### 2.1 Choose and Confirm a Region
-
-1. **Select a Region:**\
- AWS Bedrock is available in multiple regions (e.g., US East, Europe, Asia Pacific). Choose the region that meets your latency and compliance needs.\
- [AWS Global Infrastructure](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/)
-2. **Verify Model Access:**
- - **Note:** Some models are only accessible via an [Inference Profile](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-support.html). In such case check the box "Cross Region Inference".
-
----
+Choose where the AI runs (closer = faster):
+- `us-east-1` (Virginia, USA)
+- `us-west-2` (Oregon, USA)
+- `eu-west-1` (Ireland, Europe)
+- `ap-southeast-1` (Singapore, Asia)
-### Step 3: Configure the CodinIT VS Code Extension
+**Note:** Some models are only available through [Cross Region Inference](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-support.html); check that box if needed.
-#### 3.1 Install and Open CodinIT
+### Step 4: Connect to CodinIT
-1. **Install VS Code:**\
- Download from the [VS Code website](https://code.visualstudio.com).
-2. **Install the CodinIT Extension:**
- - Open VS Code.
- - Go to the Extensions Marketplace (`Ctrl+Shift+X` or `Cmd+Shift+X`).
- - Search for **CodinIT** and install it.
+1. Install CodinIT in VS Code
+2. Click the settings icon (⚙️)
+3. Choose **AWS Bedrock** as your AI provider
+4. Enter your **API Key**
+5. Type your **AWS Region** (like `us-east-1`)
+6. Pick a **Model** (like `anthropic.claude-3-5-sonnet-20241022-v2:0`)
+7. Save and test it
-#### 3.2 Configure CodinIT Settings
+## Staying Safe
-1. **Open CodinIT Settings:**
- - Click on the settings ⚙️ to select your API Provider.
-2. **Select AWS Bedrock as the API Provider:**
- - From the API Provider dropdown, choose **AWS Bedrock**.
-3. **Enter Your AWS API Key:**
- - Input your **API Key**
- - Specify the correct **AWS Region** (e.g., `us-east-1` or your enterprise-approved region).
-4. **Select a Model:**
- - Choose an on-demand model (e.g., **anthropic.claude-3-5-sonnet-20241022-v2:0**).
-5. **Save and Test:**
- - Click **Done/Save** to apply your settings.
- - Test the integration by sending a simple prompt (e.g., "Generate a Python function to check if a number is prime.").
+1. **Use secure login:** Prefer AWS SSO or federated roles over long-lived API keys
+2. **Network safety:** Consider using [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/userguide/endpoint-services-overview.html)
+3. **Watch activity:** Turn on CloudTrail to see what's happening
+4. **Control costs:** Use AWS Cost Explorer and set up billing alerts
+5. **Check regularly:** Review your settings and logs often
----
-
-### Step 4: Security, Monitoring, and Best Practices
-
-1. **Secure Access:**
- - Prefer AWS SSO/federated roles over long-lived API Key when possible.
- - [AWS IAM Best Practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html)
-2. **Enhance Network Security:**
- - Consider setting up [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/userguide/endpoint-services-overview.html) to securely connect to Bedrock.
-3. **Monitor and Log Activity:**
- - Enable AWS CloudTrail to log Bedrock API calls.
- - Use CloudWatch to monitor metrics like invocation count, latency, and token usage.
- - Set up alerts for abnormal activity.
-4. **Handle Errors and Manage Costs:**
- - Implement exponential backoff for throttling errors.
- - Use AWS Cost Explorer and set billing alerts to track usage.\
- [AWS Cost Management](https://docs.aws.amazon.com/cost-management/latest/userguide/what-is-aws-cost-management.html)
-5. **Regular Audits and Compliance:**
- - Periodically review IAM roles and CloudTrail logs.
- - Follow internal data privacy and governance policies.
-
----
-
-### Conclusion
-
-By following these steps, you can quickly integrate AWS Bedrock with the CodinIT VS Code extension to accelerate development:
-
-1. **Prepare Your AWS Environment:** Create a Bedrock API Key with the necessary permissions.
-2. **Verify Region and Model Access:** Confirm that your selected region supports your required models.
-3. **Configure CodinIT in VS Code:** Install and set up CodinIT with your AWS API Key and choose an appropriate model.
-4. **Implement Security and Monitoring:** Use best practices for IAM, network security, monitoring, and cost management.
-
-For further details, consult the [AWS Bedrock Documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html). Happy coding!
-
----
+## Good to Know
-_This guide will be updated as AWS Bedrock and CodinIT evolve. Always refer to the latest documentation and internal policies for up-to-date practices._
\ No newline at end of file
+- **Cost:** You pay for what you use; see [AWS Bedrock Pricing](https://aws.amazon.com/bedrock/pricing/)
+- **Security:** Meets HIPAA and SOC 2 Type II standards
+- **Help:** [AWS Bedrock User Guide](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html)
+- **Best practices:** [AWS IAM Best Practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html)
diff --git a/providers/cloud-providers.mdx b/providers/cloud-providers.mdx
index d27665d..2a5919c 100644
--- a/providers/cloud-providers.mdx
+++ b/providers/cloud-providers.mdx
@@ -1,229 +1,132 @@
---
title: 'Providers'
-description: 'Connect CodinIT with 19+ AI providers including cloud models, local inference, and specialized services.'
+description: 'Connect CodinIT AI IDE with 18+ LLM providers including Claude, GPT-4, Gemini, DeepSeek for AI code generation, local inference, and specialized AI coding services.'
---
-### Enterprise & Research Models
+## Enterprise & research AI coding models
-
- Claude models with advanced reasoning capabilities
+
+ Claude 3.5 Sonnet LLM with advanced reasoning for AI code generation
-
-
- GPT-5 and GPT-4 models for versatile AI assistance
-
-
-
- Gemini models with multimodal capabilities
-
-
-
- Advanced reasoning models for complex tasks
+
+ GPT-4o and o1 series AI models for intelligent code completion
+
+
+ Gemini 2.0 Flash LLM via GCP Vertex AI for AI-powered development
+
+
+ DeepSeek V3 advanced reasoning models for cost-effective AI code generation
-### Specialized & Fast Inference
+## Fast LLM inference & specialized AI coding
-
- Ultra-fast inference with LPU technology
+
+ Ultra-fast LPU inference for real-time AI code completion
-
-
- Access to 50+ open-source models
-
-
-
- Optimized inference for open-source models
-
-
-
- AI models with integrated web search
-
-
-
- X.AI's Grok models with real-time knowledge
+
+ 50+ open-source LLMs for flexible AI code generation
+
+
+ Optimized open-source LLM inference for AI development
+
+
+ AI coding with integrated web search for context-aware development
+
+
+ Grok LLMs with large context windows for AI code generation
-### Open Source & Community
+## Open-source AI models & community LLMs
-
- Command R series models for coding and analysis
+
+ Command R series LLMs for AI code generation and development
-
-
- Open-source model hub with community models
-
-
-
- Open-source and commercial Mistral models
-
-
-
- Chinese language models with Kimi series
+
+ Thousands of open-source community AI models for code generation
+
+
+ Mistral and Codestral LLMs specialized for AI-powered development
+
+
+ Kimi series LLMs with Chinese language support for AI coding
-### Unified & Routing
+## Unified & Routing
- Access multiple models through a unified API
+ Multiple models, unified API
-
- Connect to any OpenAI-compatible API endpoint
+ Any OpenAI-compatible endpoint
-### Cloud & Enterprise
+## Cloud & Enterprise
- Enterprise-grade AI models through AWS infrastructure
+ Enterprise AI via AWS
-
- Access OpenAI and other models through GitHub
+ Models through GitHub platform
-### Local & Private
+## Local AI models & private LLM inference
-
- Run open-source models locally with Ollama
+
+ Run open-source LLMs locally with Ollama for private AI code generation
-
-
- Desktop app for running models locally
+
+ Desktop app for running local AI models with private code generation
-## Choosing the Right Provider
-
-With 19+ AI providers available, selecting the right model depends on your specific needs. Consider these key factors:
-
-
-
- * **Ultra-fast inference**: Groq (LPU technology), Together AI
- * **Best reasoning**: Anthropic Claude, DeepSeek, OpenAI o1
- * **Balanced performance**: OpenAI GPT-4, Google Gemini, Cohere
- * **Local speed**: Ollama, LM Studio (no network latency)
-
-
-
- * **Free/Low-cost**: Local models (Ollama, LM Studio), OpenRouter * **Budget-friendly**: Together AI, HuggingFace,
- Hyperbolic * **Premium**: Anthropic, OpenAI, Google (higher quality) * **Enterprise**: AWS Bedrock, GitHub Models
- (included benefits)
-
-
-
- * **Maximum privacy**: Local models (Ollama, LM Studio) - data never leaves your device * **Enterprise-grade**: AWS
- Bedrock, Anthropic (SOC 2 compliant) * **Cloud security**: OpenAI, Google, Cohere (encrypted transmission) *
- **Specialized**: Perplexity (search integration with privacy considerations)
-
-
-
- * **Code generation**: All providers support coding, specialized: Cohere, Together AI, GitHub * **Multimodal**: Google
- Gemini, OpenAI GPT-4 Vision, Moonshot * **Long context**: Claude (200K+), Gemini (1M+), GPT-4 (128K) * **Function
- calling**: OpenAI, Anthropic, Google, Cohere * **Search integration**: Perplexity (real-time web search) *
- **Multilingual**: Cohere, Google, Moonshot (Chinese), Mistral
-
-
-
- * **Rapid prototyping**: Groq, Together AI (fast iteration)
- * **Production applications**: Anthropic, OpenAI, AWS Bedrock
- * **Research & analysis**: DeepSeek, Perplexity, Cohere
- * **Offline development**: Ollama, LM Studio
- * **Enterprise integration**: AWS Bedrock, GitHub Models
- * **Cost optimization**: Hyperbolic, HuggingFace, OpenRouter
-
-
+## Choosing an AI coding provider
-## Quick Start
+**AI performance & speed:**
+- Ultra-fast LLM inference: Groq, Together AI for real-time code completion
+- Best AI reasoning: Anthropic Claude, DeepSeek, OpenAI o1 for complex code generation
+- Balanced AI models: OpenAI GPT-4, Google Gemini, Cohere for general development
-
-
- Select from 19+ providers based on your needs: speed, cost, capabilities, or privacy requirements
-
+**AI model cost:**
+- Free/Low-cost AI: Local models (Ollama, LM Studio), OpenRouter for budget development
+- Budget-friendly LLMs: Together AI, HuggingFace, Hyperbolic for cost-effective coding
+- Premium AI models: Anthropic Claude, OpenAI GPT-4, Google Gemini for enterprise
-
- For cloud providers: Sign up and get API keys. For local providers: Download and install the software
-
+**AI privacy & security:**
+- Maximum privacy: Local LLMs (Ollama, LM Studio) for private code generation
+- Enterprise-grade AI: AWS Bedrock, Anthropic for secure development
+- Cloud AI security: OpenAI, Google, Cohere with data protection
-
- Add your credentials in CodinIT's settings under AI Providers or use provider-specific setup prompts
-
+**AI coding capabilities:**
+- Code generation: All LLM providers, specialized: Cohere, Together AI for development
+- Multimodal AI: Google Gemini, OpenAI GPT-4 Vision, Moonshot for visual coding
+- Long context LLMs: Claude (200K+), Gemini (1M+), GPT-4 (128K) for large codebases
+- AI search integration: Perplexity for context-aware code generation
+- Multilingual AI: Cohere, Google, Moonshot (Chinese) for international development
-
- Choose from available models within your selected provider, considering context limits and capabilities
-
+## Quick Start
-
- Begin using AI assistance in your development workflow with the configured provider
-
+
+1. Select a provider based on your needs: speed, cost, capabilities, or privacy
+2. Sign up and get API keys (cloud) or install the software (local)
+3. Add credentials in CodinIT settings
+4. Choose from available models
+5. Begin using AI in your workflow
-## Configuration Tips
-
-
- **Multi-Provider Setup**: Configure multiple providers simultaneously and switch between them based on task
- requirements, cost considerations, or performance needs.
-
-
-
- **API Key Security**: Your API keys are stored locally and never transmitted to CodinIT servers. They are only used to
- communicate directly with your chosen AI provider.
-
-
-
- **Rate Limits**: Each provider has different rate limits and usage quotas. Monitor your usage and consider provider
- switching for high-volume workloads.
-
-
-
- **Provider Switching**: Easily switch between providers mid-project. CodinIT maintains separate contexts for different
- providers, allowing you to leverage specialized capabilities as needed.
-
-
-
- **Local vs Cloud**: Local providers (Ollama, LM Studio) offer maximum privacy but require hardware resources. Cloud
- providers offer convenience and advanced features but involve data transmission.
-
-
-## Next Steps
-
-
-
- Learn about context windows and model parameters
-
-
-
- Compare different models and their capabilities
-
-
-
- Set up local models for complete privacy
-
-
-
- Optimize your prompts for better results
-
-
-
- Optimize costs and performance across providers
-
-
-
- Connect with databases, deployments, and APIs
-
-
+## Notes
-
- **Provider Ecosystem**: With 19+ AI providers, you can choose the perfect model for every task - from rapid
- prototyping to production deployment, from cost optimization to maximum privacy.
-
+- **Multi-provider:** Configure multiple providers and switch between them
+- **API security:** Keys stored locally, never transmitted to CodinIT servers
+- **Rate limits:** Each provider has different limits
+- **Local vs Cloud:** Local offers privacy but requires hardware; cloud offers convenience and advanced features
diff --git a/providers/cohere.mdx b/providers/cohere.mdx
index 5d8c36a..74467fd 100644
--- a/providers/cohere.mdx
+++ b/providers/cohere.mdx
@@ -1,185 +1,40 @@
---
title: Cohere
-description: Access Cohere's powerful language models for text generation and analysis
+description: Configure Cohere's Command R series models for reasoning, code generation, and multilingual tasks.
---
-Cohere provides advanced language models that excel at understanding context, generating human-like text, and performing complex reasoning tasks. Their models are particularly strong in areas like code generation, analysis, and multilingual capabilities.
+**Website:** [https://cohere.com/](https://cohere.com/)
-## Overview
+## Getting an API Key
-Cohere's AI models are designed to be helpful, truthful, and scalable. They offer a range of models from lightweight options to powerful enterprise-grade solutions, making them suitable for everything from quick prototyping to production applications.
+1. Go to [Cohere Dashboard](https://dashboard.cohere.com/) and sign in
+2. Navigate to API Keys section
+3. Create a new API key
+4. Copy the key immediately; you won't be able to see it again
-
-
- Latest generation models with enhanced reasoning capabilities
-
-
- Strong performance across multiple languages
-
-
- Specialized models for programming and code analysis
-
-
+## Configuration
-## Available Models
+1. Click the settings icon (⚙️) in CodinIT
+2. Select "Cohere" as the API Provider
+3. Paste your API key
+4. Choose your model
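+
+To confirm the key works before wiring it into CodinIT, you can call Cohere's chat endpoint directly. This is a sketch: the endpoint and payload follow Cohere's v1 chat API at the time of writing, and `command-r` stands in for whichever model you plan to use:
+
+```bash
+# Replace $COHERE_API_KEY with the key from the dashboard
+curl https://api.cohere.ai/v1/chat \
+  -H "Authorization: Bearer $COHERE_API_KEY" \
+  -H "Content-Type: application/json" \
+  -d '{"model": "command-r", "message": "Say hello in one word."}'
+```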
-
-
- ### Command R Plus Latest
- The most advanced model in Cohere's lineup, offering superior reasoning and code generation capabilities.
+## Supported Models
- - **Context Window**: 128,000 tokens
- - **Best for**: Complex reasoning, code generation, analysis
- - **Pricing**: Higher cost, premium performance
+- `command-r-plus` (Latest) - 128K context
+- `command-r` (Latest) - 128K context
+- `command` - 4K context
+- `command-light` - 4K context
+- `aya-expanse` - Multilingual, 8K context
-
+## Features
-
- ### Command R Latest
- Balanced performance with excellent reasoning capabilities for most use cases.
+- **Tool calling:** External tools and APIs
+- **RAG support:** Retrieval augmented generation
+- **Multilingual:** Strong multi-language performance
+- **Code intelligence:** Programming specialized
- - **Context Window**: 128,000 tokens
- - **Best for**: General AI tasks, analysis, writing
- - **Pricing**: Moderate cost, good performance balance
+## Notes
-
-
-
- ### Command R Plus
- Enterprise-grade model with enhanced capabilities for complex tasks.
-
- - **Context Window**: 128,000 tokens
- - **Best for**: Advanced reasoning, technical writing
- - **Pricing**: Premium tier
-
-
-
-
- ### Command R
- Reliable workhorse model for most AI applications.
-
- - **Context Window**: 128,000 tokens
- - **Best for**: General purpose AI tasks
- - **Pricing**: Standard tier
-
-
-
-
- ### Command
- Earlier generation model, still powerful for many tasks.
-
- - **Context Window**: 4,096 tokens
- - **Best for**: Basic text generation, simpler tasks
- - **Pricing**: Lower cost option
-
-
-
-
- ### Command Light Series
- Faster, more efficient models for simpler tasks and cost optimization.
-
- - **Context Window**: 4,096 tokens
- - **Best for**: Quick responses, basic analysis
- - **Pricing**: Most affordable option
-
-
-
-
- ### Aya Expanse Series
- Specialized multilingual models for global applications.
-
- - **Context Window**: 8,000 tokens
- - **Best for**: Multilingual content, global applications
- - **Pricing**: Moderate cost
-
-
-
-
-## Setup Instructions
-
-
- Visit [Cohere Dashboard](https://dashboard.cohere.com/) and create an account
- Navigate to API Keys section and create a new API key
- Add your API key to the Cohere provider settings in Codinit
- Select a Cohere model and test it with a simple prompt
-
-
-## Key Features
-
-
- Advanced Reasoning
- Code Generation
- Multilingual
- Tool Calling
- RAG Support
-
-
-### Advanced Capabilities
-
-- **Tool Calling**: Can use external tools and APIs
-- **Retrieval Augmented Generation**: Enhanced with external knowledge
-- **Multilingual Support**: Strong performance in multiple languages
-- **Code Intelligence**: Specialized for programming tasks
-- **Reasoning**: Advanced logical reasoning capabilities
-
-## Use Cases
-
-
-
- ### Programming Assistance
- Perfect for code generation, debugging, and technical documentation.
-
- - Generate complete functions and classes
- - Debug existing code
- - Write technical documentation
- - Code review and analysis
-
-
-
-
- ### Writing and Analysis
- Excellent for content creation and analytical tasks.
-
- - Technical writing
- - Business analysis
- - Research summaries
- - Creative content generation
-
-
-
-
- ### Global Applications
- Strong performance for international and multilingual use cases.
-
- - Translation assistance
- - Multilingual content creation
- - Cross-cultural communication
- - Global business applications
-
-
-
-
-## Pricing Information
-
-Cohere offers flexible pricing based on usage:
-
-- **Free Tier**: Limited usage for testing
-- **Pay-as-you-go**: Based on tokens processed
-- **Enterprise**: Custom pricing for high-volume usage
-
-
-
-
-
-
-
-
-
- **Rate Limits**: Cohere implements rate limits based on your account tier. Free accounts have lower limits than paid
- accounts.
-
-
-
- **Best Practices**: Start with Command R models for most applications. Use Command Light for simple tasks where speed
- is prioritized over quality.
-
+- **Pricing:** See [Cohere Pricing](https://cohere.com/pricing)
+- **Free tier:** Available for testing
diff --git a/providers/deepseek.mdx b/providers/deepseek.mdx
index d5febdb..1e5cd1f 100644
--- a/providers/deepseek.mdx
+++ b/providers/deepseek.mdx
@@ -1,33 +1,29 @@
---
title: "DeepSeek"
-description: "Learn how to configure and use DeepSeek models like deepseek-chat and deepseek-reasoner with CodinIT."
+description: "Configure DeepSeek models for coding and reasoning tasks with CodinIT."
---
-CodinIT supports accessing models through the DeepSeek API, including `deepseek-chat` and `deepseek-reasoner`.
-
**Website:** [https://platform.deepseek.com/](https://platform.deepseek.com/)
-### Getting an API Key
-
-1. **Sign Up/Sign In:** Go to the [DeepSeek Platform](https://platform.deepseek.com/). Create an account or sign in.
-2. **Navigate to API Keys:** Find your API keys in the [API keys](https://platform.deepseek.com/api_keys) section of the platform.
-3. **Create a Key:** Click "Create new API key". Give your key a descriptive name (e.g., "CodinIT").
-4. **Copy the Key:** **Important:** Copy the API key _immediately_. You will not be able to see it again. Store it securely.
+## Getting an API Key
-### Supported Models
+1. Go to [DeepSeek Platform](https://platform.deepseek.com/) and sign in
+2. Navigate to [API keys](https://platform.deepseek.com/api_keys)
+3. Click "Create new API key" and name it (e.g., "CodinIT")
+4. Copy the key immediately - you won't see it again
-CodinIT supports the following DeepSeek models:
+## Configuration
-- `deepseek-v3-0324` (Recommended for coding tasks)
-- `deepseek-r1` (Recommended for reasoning tasks)
+1. Click the settings icon (⚙️) in CodinIT
+2. Select "DeepSeek" as the API Provider
+3. Paste your API key
+4. Choose your model
-### Configuration in CodinIT
+## Supported Models
-1. **Open CodinIT Settings:** Click the ⚙️ icon in the CodinIT panel.
-2. **Select Provider:** Choose "DeepSeek" from the "API Provider" dropdown.
-3. **Enter API Key:** Paste your DeepSeek API key into the "DeepSeek API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown.
+- `deepseek-v3-0324` (Recommended for coding)
+- `deepseek-r1` (Recommended for reasoning)
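+
+To verify the key independently of CodinIT, DeepSeek exposes an OpenAI-compatible API. A minimal sketch; on the API side the model IDs are `deepseek-chat` and `deepseek-reasoner`, so check the platform docs if they differ from the names listed above:
+
+```bash
+# Replace $DEEPSEEK_API_KEY with your key
+curl https://api.deepseek.com/chat/completions \
+  -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
+  -H "Content-Type: application/json" \
+  -d '{"model": "deepseek-chat", "messages": [{"role": "user", "content": "Hello"}]}'
+```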
-### Tips and Notes
+## Notes
-- **Pricing:** Refer to the [DeepSeek Pricing](https://api-docs.deepseek.com/quick_start/pricing/) page for details on model costs.
\ No newline at end of file
+- **Pricing:** See [DeepSeek Pricing](https://api-docs.deepseek.com/quick_start/pricing/)
\ No newline at end of file
diff --git a/providers/fireworks.mdx b/providers/fireworks.mdx
deleted file mode 100644
index 337ce67..0000000
--- a/providers/fireworks.mdx
+++ /dev/null
@@ -1,131 +0,0 @@
----
-title: "Fireworks AI"
-description: "Learn how to configure and use Fireworks AI's lightning-fast inference platform with CodinIT."
----
-
-Fireworks AI is a leading infrastructure platform for generative AI that focuses on delivering exceptional performance through optimized inference capabilities. With up to 4x faster inference speeds than alternative platforms and support for over 40 different AI models, Fireworks eliminates the operational complexity of running AI models at scale.
-
-**Website:** [https://fireworks.ai/](https://fireworks.ai/)
-
-### Getting an API Key
-
-1. **Sign Up/Sign In:** Go to [Fireworks AI](https://fireworks.ai/) and create an account or sign in.
-2. **Navigate to API Keys:** Access the API keys section in your dashboard.
-3. **Create a Key:** Generate a new API key. Give it a descriptive name (e.g., "CodinIT").
-4. **Copy the Key:** Copy the API key immediately. Store it securely.
-
-### Supported Models
-
-Fireworks AI supports a wide variety of models across different categories. Popular models include:
-
-**Text Generation Models:**
-- Llama 3.1 series (8B, 70B, 405B)
-- Mixtral 8x7B and 8x22B
-- Qwen 2.5 series
-- DeepSeek models with reasoning capabilities
-- Code Llama models for programming tasks
-
-**Vision Models:**
-- Llama 3.2 Vision models
-- Qwen 2-VL models
-
-**Embedding Models:**
-- Various text embedding models for semantic search
-
-The platform curates, optimizes, and deploys models with custom kernels and inference optimizations for maximum performance.
-
-### Configuration in CodinIT
-
-1. **Open CodinIT Settings:** Click the settings icon (⚙️) in the CodinIT panel.
-2. **Select Provider:** Choose "Fireworks" from the "API Provider" dropdown.
-3. **Enter API Key:** Paste your Fireworks API key into the "Fireworks API Key" field.
-4. **Enter Model ID:** Specify the model you want to use (e.g., "accounts/fireworks/models/llama-v3p1-70b-instruct").
-5. **Configure Tokens:** Optionally set max completion tokens and context window size.
-
-### Fireworks AI's Performance Focus
-
-Fireworks AI's competitive advantages center on performance optimization and developer experience:
-
-#### Lightning-Fast Inference
-- **Up to 4x faster inference** than alternative platforms
-- **250% higher throughput** compared to open source inference engines
-- **50% faster speed** with significantly reduced latency
-- **6x lower cost** than HuggingFace Endpoints with 2.5x generation speed
-
-#### Advanced Optimization Technology
-- **Custom kernels** and inference optimizations increase throughput per GPU
-- **Multi-LoRA architecture** enables efficient resource sharing
-- **Hundreds of fine-tuned model variants** can run on shared base model infrastructure
-- **Asset-light model** focuses on optimization software rather than expensive GPU ownership
-
-#### Comprehensive Model Support
-- **40+ different AI models** curated and optimized for performance
-- **Multiple GPU types** supported: A100, H100, H200, B200, AMD MI300X
-- **Pay-per-GPU-second billing** with no extra charges for start-up times
-- **OpenAI API compatibility** for seamless integration
-
-### Pricing Structure
-
-Fireworks AI uses a usage-based pricing model with competitive rates:
-
-#### Text and Vision Models (2025)
-| Parameter Count | Price per 1M Input Tokens |
-|---|---|
-| Less than 4B parameters | $0.10 |
-| 4B - 16B parameters | $0.20 |
-| More than 16B parameters | $0.90 |
-| MoE 0B - 56B parameters | $0.50 |
-
-#### Fine-Tuning Services
-| Base Model Size | Price per 1M Training Tokens |
-|---|---|
-| Up to 16B parameters | $0.50 |
-| 16.1B - 80B parameters | $3.00 |
-| DeepSeek R1 / V3 | $10.00 |
-
-#### Dedicated Deployments
-| GPU Type | Price per Hour |
-|---|---|
-| A100 80GB | $2.90 |
-| H100 80GB | $5.80 |
-| H200 141GB | $6.99 |
-| B200 180GB | $11.99 |
-| AMD MI300X | $4.99 |
-
-### Special Features
-
-#### Fine-Tuning Capabilities
-Fireworks offers sophisticated fine-tuning services accessible through CLI interface, supporting JSON-formatted data from databases like MongoDB Atlas. Fine-tuned models cost the same as base models for inference.
-
-#### Developer Experience
-- **Browser playground** for direct model interaction
-- **REST API** with OpenAI compatibility
-- **Comprehensive cookbook** with ready-to-use recipes
-- **Multiple deployment options** from serverless to dedicated GPUs
-
-#### Enterprise Features
-- **HIPAA and SOC 2 Type II compliance** for regulated industries
-- **Self-serve onboarding** for developers
-- **Enterprise sales** for larger deployments
-- **Post-paid billing options** and Business tier
-
-#### Reasoning Model Support
-Advanced support for reasoning models with thinking-tag processing and reasoning content extraction, making complex multi-step reasoning practical for real-time applications.
-
-### Performance Advantages
-
-Fireworks AI's optimization delivers measurable improvements:
-- **250% higher throughput** vs open source engines
-- **50% faster speed** with reduced latency
-- **6x cost reduction** compared to alternatives
-- **2.5x generation speed** improvement per request
-
-### Tips and Notes
-
-- **Model Selection:** Choose models based on your specific use case - smaller models for speed, larger models for complex reasoning.
-- **Performance Focus:** Fireworks excels at making AI inference fast and cost-effective through advanced optimizations.
-- **Fine-Tuning:** Leverage fine-tuning capabilities to improve model accuracy with your proprietary data.
-- **Compliance:** HIPAA and SOC 2 Type II compliance enables use in regulated industries.
-- **Pricing Model:** Usage-based pricing scales with your success rather than traditional seat-based models.
-- **Developer Resources:** Extensive documentation and cookbook recipes accelerate implementation.
-- **GPU Options:** Multiple GPU types available for dedicated deployments based on performance needs.
\ No newline at end of file
diff --git a/providers/github.mdx b/providers/github.mdx
index a8f89ce..ede2fe5 100644
--- a/providers/github.mdx
+++ b/providers/github.mdx
@@ -1,160 +1,37 @@
---
title: GitHub Models
-description: Access OpenAI and other models through GitHub's AI platform
+description: Access OpenAI GPT-4, o1, and other AI models through GitHub's platform.
---
-GitHub Models provides access to leading AI models through GitHub's infrastructure, offering a convenient way to use OpenAI's GPT series, o1 models, and other advanced AI systems directly within the GitHub ecosystem.
+Access leading AI models, including OpenAI's GPT-4o and o1 series, through GitHub's infrastructure.
-## Overview
+**Website:** [https://github.com/marketplace/models](https://github.com/marketplace/models)
-GitHub Models serves as a gateway to premium AI models, allowing developers to access cutting-edge AI capabilities without managing complex API integrations. It's particularly useful for teams already using GitHub and wanting seamless AI integration.
+## Setup
-
-
- Direct access to GPT-4, GPT-4o, and o1 models
-
-
- Seamless integration with GitHub workflows
-
-
- Suitable for teams and enterprise use
-
-
+1. **GitHub account:** Ensure you have appropriate permissions
+2. **Create token:** Go to [GitHub Settings > Personal Access Tokens](https://github.com/settings/personal-access-tokens)
+3. **Configure permissions:** Enable Models API access
+4. **Add to CodinIT:** Enter token in GitHub provider settings
+5. **Test models:** Verify connection
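+
+To check the token before adding it to CodinIT, you can call the inference endpoint directly. A hedged sketch: the URL below is the GitHub Models inference endpoint at the time of writing and may change, so verify it against the Models marketplace docs:
+
+```bash
+# $GITHUB_TOKEN is the personal access token created above
+curl https://models.inference.ai.azure.com/chat/completions \
+  -H "Authorization: Bearer $GITHUB_TOKEN" \
+  -H "Content-Type: application/json" \
+  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]}'
+```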
## Available Models
-
-
- ### GPT-4o Models
- OpenAI's latest multimodal models with enhanced reasoning and creativity.
+- **GPT-4o series:** GPT-4o, GPT-4o Mini (128K context)
+- **o1 series:** o1, o1-preview, o1-mini (200K context)
+- **GPT-4.1 series:** GPT-4.1, GPT-4.1-mini (1M+ context)
+- **DeepSeek:** DeepSeek-R1 (reasoning)
- - **GPT-4o**: Most advanced model with 128K context
- - **GPT-4o Mini**: Faster, cost-effective version
- - **Best for**: Complex reasoning, creative tasks, analysis
+## Features
-
+- **Large context:** Up to 1M+ tokens
+- **Multimodal:** Text and image support
+- **Advanced reasoning:** Specialized o1 models
+- **GitHub integration:** Native GitHub workflows
+- **Enterprise security:** GitHub's security features
-
- ### o1 Series
- OpenAI's specialized reasoning models for complex problem-solving.
+## Notes
- - **o1**: Advanced reasoning with 200K context window
- - **o1-preview**: Preview version of o1 capabilities
- - **o1-mini**: Efficient reasoning for technical tasks
- - **Best for**: Mathematics, coding, scientific reasoning
-
-
-
-
- ### GPT-4.1 Models
- Latest generation with massive context windows.
-
- - **GPT-4.1**: 1M+ context window for large documents
- - **GPT-4.1-mini**: Efficient version with large context
- - **Best for**: Document analysis, long-form content
-
-
-
-
- ### DeepSeek-R1
- Advanced reasoning model from DeepSeek.
-
- - **DeepSeek-R1**: Specialized reasoning capabilities
- - **Best for**: Technical analysis, research tasks
-
-
-
-
-## Setup Instructions
-
-
- Ensure you have a GitHub account with appropriate permissions
-
- Go to [GitHub Settings > Personal Access Tokens](https://github.com/settings/personal-access-tokens) and create a
- new token
-
- Enable the necessary permissions for Models API access
- Enter your GitHub token in the GitHub provider settings
- Verify connection by testing different available models
-
-
-## Key Features
-
-
- Large Context Windows
- Multimodal Support
- Reasoning Models
- GitHub Integration
- Enterprise Security
-
-
-### Advanced Capabilities
-
-- **Massive Context**: Up to 1M+ tokens for large documents
-- **Multimodal Input**: Text, images, and other media types
-- **Advanced Reasoning**: Specialized models for complex problem-solving
-- **Enterprise Security**: GitHub's security and compliance features
-- **API Compatibility**: Standard OpenAI API format
-
-## Use Cases
-
-
-
- ### Software Development
- Perfect for coding assistance and technical problem-solving.
-
- - Code generation and completion
- - Debugging and error analysis
- - Architecture design
- - Technical documentation
-
-
-
-
- ### Research Tasks
- Excellent for complex analysis and research work.
-
- - Document analysis and summarization
- - Research paper review
- - Data interpretation
- - Scientific reasoning
-
-
-
-
- ### Business Use Cases
- Suitable for enterprise-grade AI applications.
-
- - Business analysis and reporting
- - Customer service automation
- - Content generation
- - Process optimization
-
-
-
-
-## Pricing Information
-
-GitHub Models pricing is based on token usage:
-
-
-
-
-
-
-
-
-
-
- **Billing**: Usage is billed through your GitHub account. Monitor usage in GitHub's billing section.
-
-
-
- **Model Selection**: Choose GPT-4o for most tasks, o1 models for complex reasoning, and GPT-4o Mini for cost-effective
- general use.
-
-
-
- **Token Limits**: Be aware of context window limits. GPT-4.1 supports up to 1M tokens, while others have smaller
- limits.
-
+- **Pricing:** Token-based, billed through GitHub account
+- **Context windows:** Vary by model (128K to 1M+ tokens)
+- **Use cases:** Code development, research, business applications
diff --git a/providers/google.mdx b/providers/google.mdx
index 1b6f11a..3290f73 100644
--- a/providers/google.mdx
+++ b/providers/google.mdx
@@ -1,201 +1,82 @@
---
title: "Google Gemini"
-description: "Configure GCP Vertex AI with CodinIT to access leading generative AI models like Claude 4.5 Sonnet v2. This guide covers GCP environment setup."
+description: "Configure GCP Vertex AI to access Gemini and Claude models through Google Cloud."
---
-### Overview
+Access leading AI models like Gemini and Claude 4.5 Sonnet through Google Cloud's Vertex AI platform.
-**GCP Vertex AI:**\
-A fully managed service that provides access to leading generative AI models—such as Anthropic's Claude 4.5 Sonnet v2—through Google Cloud.\
-[Learn more about GCP Vertex AI](https://cloud.google.com/vertex-ai).
+**Website:** [https://cloud.google.com/vertex-ai](https://cloud.google.com/vertex-ai)
-This guide is tailored for organizations with established GCP environments (leveraging IAM roles, service accounts, and best practices in resource management) to ensure secure and compliant usage.
+## Prerequisites
----
-
-### Step 1: Prepare Your GCP Environment
-
-#### 1.1 Create or Use a GCP Project
-
-- **Sign in to the GCP Console:**\
- [Google Cloud Console](https://console.cloud.google.com/)
-- **Select or Create a Project:**\
- Use an existing project or create a new one dedicated to Vertex AI.
-
-#### 1.2 Set Up IAM Permissions and Service Accounts
-
-- **Assign Required Roles:**
-
- - Grant your user (or service account) the **Vertex AI User** role (`roles/aiplatform.user`)
- - For service accounts, also attach the **Vertex AI Service Agent** role (`roles/aiplatform.serviceAgent`) to enable certain operations
- - Consider additional predefined roles as needed:
- - Vertex AI Platform Express Admin
- - Vertex AI Platform Express User
- - Vertex AI Migration Service User
-
-- **Cross-Project Resource Access:**
- - For BigQuery tables in different projects, assign the **BigQuery Data Viewer** role
- - For Cloud Storage buckets in different projects, assign the **Storage Object Viewer** role
- - For external data sources, refer to the [GCP Vertex AI Access Control documentation](https://cloud.google.com/vertex-ai/general/access-control)
-
----
-
-### Step 2: Verify Regional and Model Access
-
-#### 2.1 Choose and Confirm a Region
-
-Vertex AI supports multiple regions. Select a region that meets your latency, compliance, and capacity needs. Examples include:
+- GCP account with billing enabled
+- GCP project created
+- IAM permissions configured
-- **us-east5 (Columbus, Ohio)**
-- **us-central1 (Iowa)**
-- **europe-west1 (Belgium)**
-- **europe-west4 (Netherlands)**
-- **asia-southeast1 (Singapore)**
-- **global (Global)**
-
-The Global endpoint may offer higher availability and reduce resource exhausted errors. Only Gemini models are supported.
-
-#### 2.2 Enable the Claude 4.5 Sonnet v2 Model
-
-- **Open Vertex AI Model Garden:**\
- In the Cloud Console, navigate to **Vertex AI → Model Garden**
-- **Enable Claude 4.5 Sonnet v2:**\
- Locate the model card for Claude 4.5 Sonnet v2 and click **Enable**
-
----
-
-
-#### 3.1 Install and Open CodinIT
-
-- **Download VS Code:**\
- [Download Visual Studio Code](https://code.visualstudio.com/)
-- **Install the CodinIT Extension:**
- - Open VS Code
- - Navigate to the Extensions Marketplace (Ctrl+Shift+X or Cmd+Shift+X)
- - Search for **Github** and install the extension & Clone the repository
-
-#### 3.2 Configure CodinIT Settings
-
-- **Open CodinIT Settings:**\
- Click the settings ⚙️ icon within the CodinIT extension
-- **Set API Provider:**\
- Choose **GCP Vertex AI** from the API Provider dropdown
-- **Enter Your Google Cloud Project ID:**\
- Provide the project ID you set up earlier
-- **Select the Region:**\
- Choose one of the supported regions (e.g., `us-east5`)
-- **Select the Model:**\
- From the available list, choose **Claude 4.5 Sonnet v2**
-- **Save and Test:**\
- Save your settings and test by sending a simple prompt (e.g., "Generate a Python function to check if a number is prime.")
-
----
+## Setup Steps
-### Step 4: Authentication and Credentials Setup
+### 1. Prepare GCP Environment
-#### Option A: Using Your Google Account (User Credentials)
+1. **Sign in:** [Google Cloud Console](https://console.cloud.google.com/)
+2. **Create/select project:** Use existing or create new project
+3. **Set up IAM:**
+ - Grant **Vertex AI User** role (`roles/aiplatform.user`)
+ - For service accounts, add **Vertex AI Service Agent** role (`roles/aiplatform.serviceAgent`)
-1. **Install the Google Cloud CLI:**\
- Follow the [installation guide](https://cloud.google.com/sdk/install)
-2. **Initialize and Authenticate:**
+### 2. Choose Region and Enable Models
- ```bash
- gcloud init
- gcloud auth application-default login
- ```
+1. **Select region:** Choose region for latency/compliance needs (e.g., `us-east5`, `us-central1`, `europe-west1`)
+ - Use `global` endpoint for higher availability (Gemini only)
+2. **Enable models:** Go to Vertex AI → Model Garden and enable desired models (e.g., Claude 4.5 Sonnet v2)
- - This sets up Application Default Credentials (ADC) using your Google account
+### 3. Configure CodinIT
-3. **Restart VS Code:**\
- Ensure VS Code is restarted so that the CodinIT extension picks up the new credentials
+1. Install CodinIT extension in VS Code
+2. Click settings icon (⚙️)
+3. Select **GCP Vertex AI** as API Provider
+4. Enter your **Google Cloud Project ID**
+5. Select your **Region**
+6. Choose your **Model** (e.g., Claude 4.5 Sonnet v2)
+7. Save and test with a simple prompt (e.g., "Generate a Python function to check if a number is prime.")
-#### Option B: Using a Service Account (JSON Key)
+### 4. Authentication
-1. **Create a Service Account:**
+**Option A: User Credentials**
+```bash
+gcloud init
+gcloud auth application-default login
+```
+Restart VS Code after authentication.
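+
+To confirm Application Default Credentials are configured before restarting, mint a token:
+
+```bash
+# Prints an access token if ADC is set up correctly; errors otherwise
+gcloud auth application-default print-access-token
+```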
- - In the GCP Console, navigate to **IAM & Admin > Service Accounts**
- - Create a new service account (e.g., "vertex-ai-client")
+**Option B: Service Account**
+1. Create service account in GCP Console
+2. Assign Vertex AI User and Service Agent roles
+3. Generate JSON key
+4. Set environment variable:
+ ```bash
+ export GOOGLE_APPLICATION_CREDENTIALS="/path/to/key.json"
+ ```
+5. Launch VS Code from a terminal where this variable is set
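+
+Before launching, you can sanity-check that the variable is set and points at a readable key file (the `code` command assumes the VS Code CLI is on your PATH):
+
+```bash
+echo "$GOOGLE_APPLICATION_CREDENTIALS"   # should print the key path
+test -r "$GOOGLE_APPLICATION_CREDENTIALS" && echo "key file readable"
+code .  # launch VS Code from this shell so it inherits the variable
+```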
-2. **Assign Roles:**
-
- - Attach **Vertex AI User** (`roles/aiplatform.user`)
- - Attach **Vertex AI Service Agent** (`roles/aiplatform.serviceAgent`)
- - Optionally, add other roles as required
-
-3. **Generate a JSON Key:**
-
- - In the Service Accounts section, manage keys for your service account and download the JSON key
-
-4. **Set the Environment Variable:**
-
- ```bash
- export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-account-key.json"
- ```
-
- - This instructs Google Cloud client libraries (and CodinIT) to use this key
-
-5. **Restart VS Code:**\
- Launch VS Code from a terminal where the `GOOGLE_APPLICATION_CREDENTIALS` variable is set
-
----
-
-### Step 5: Security, Monitoring, and Best Practices
-
-#### 5.1 Enforce Least Privilege
-
-- **Principle of Least Privilege:**\
- Only grant the minimum necessary permissions. Custom roles can offer finer control compared to broad predefined roles
-- **Best Practices:**\
- Refer to [GCP IAM Best Practices](https://cloud.google.com/iam/best-practices)
-
-#### 5.2 Manage Resource Access
-
-- **Project vs. Resource-Level Access:**\
- Access can be managed at both levels. Note that resource-level permissions (e.g., for BigQuery or Cloud Storage) add to, but do not override, project-level policies
-
-#### 5.3 Monitor Usage and Quotas
-
-- **Model Observability Dashboard:**
-
- - In the Vertex AI Console, navigate to the **Model Observability** dashboard
- - Monitor metrics such as request throughput, latency, and error rates (including 429 quota errors)
-
-- **Quota Management:**
- - If you encounter 429 errors, check the **IAM & Admin > Quotas** page
- - Request a quota increase if necessary\
- [Learn more about GCP Vertex AI Quotas](https://cloud.google.com/vertex-ai/quotas)
-
-#### 5.4 Service Agents and Cross-Project Considerations
-
-- **Service Agents:**\
- Be aware of the different service agents:
-
- - Vertex AI Service Agent
- - Vertex AI RAG Data Service Agent
- - Vertex AI Custom Code Service Agent
- - Vertex AI Extension Service Agent
-
-- **Cross-Project Access:**\
- For resources in other projects (e.g., BigQuery, Cloud Storage), ensure that the appropriate roles (BigQuery Data Viewer, Storage Object Viewer) are assigned
-
----
+## Supported Regions
-### Conclusion
+- `us-east5` (Columbus, Ohio)
+- `us-central1` (Iowa)
+- `europe-west1` (Belgium)
+- `europe-west4` (Netherlands)
+- `asia-southeast1` (Singapore)
+- `global` (Gemini models only)
-By following these steps, your enterprise team can securely integrate GCP Vertex AI with the CodinIT VS Code extension to harness the power of **Claude 4.5 Sonnet v2**:
+## Notes
-- **Prepare Your GCP Environment:**\
- Create or use a project, configure IAM with least privilege, and ensure necessary roles (including the Vertex AI Service Agent role) are attached
-- **Verify Regional and Model Access:**\
- Confirm that your chosen region supports Claude 4.5 Sonnet v2 and that the model is enabled
-- **Configure CodinIT in VS Code:**\
- Install CodinIT, enter your project ID, select the appropriate region, and choose the model
-- **Set Up Authentication:**\
- Use either user credentials (via `gcloud auth application-default login`) or a service account with a JSON key
-- **Implement Security and Monitoring:**\
- Adhere to best practices for IAM, manage resource access carefully, and monitor usage with the Model Observability dashboard
+- **Global endpoint:** Use the `global` endpoint for higher availability (Gemini models only)
+- **First-time use:** Some models (e.g., Anthropic Claude) must be enabled in Model Garden before use
+- **Permissions:** Minimal required role: **Vertex AI User** (`roles/aiplatform.user`)
+- **Monitoring:** Use the Model Observability dashboard to track throughput, latency, and 429 quota errors
+- **Security:** Follow [GCP IAM Best Practices](https://cloud.google.com/iam/best-practices)
-For further details, please consult the [GCP Vertex AI Documentation](https://cloud.google.com/vertex-ai/docs) and your internal security policies.\
-Happy coding!
+## Resources
-_This guide will be updated as GCP Vertex AI and CodinIT evolve. Always refer to the latest documentation for current practices._
\ No newline at end of file
+- [GCP Vertex AI Documentation](https://cloud.google.com/vertex-ai/docs)
+- [Access Control](https://cloud.google.com/vertex-ai/general/access-control)
+- [Quotas](https://cloud.google.com/vertex-ai/quotas)
diff --git a/providers/groq.mdx b/providers/groq.mdx
index aa7d106..f7a034f 100644
--- a/providers/groq.mdx
+++ b/providers/groq.mdx
@@ -1,80 +1,45 @@
---
title: "Groq"
-description: "Learn how to configure and use Groq's lightning-fast inference to access models from OpenAI, Meta, DeepSeek, and more with Groq."
+description: "Configure Groq's ultra-fast LPU inference for models from OpenAI, Meta, and DeepSeek."
---
-Groq provides ultra-fast AI inference through their custom LPU™ (Language Processing Unit) architecture, purpose-built for inference rather than adapted from training hardware. Groq hosts open-source models from various providers including OpenAI, Meta, DeepSeek, Moonshot AI, and others.
+Groq provides ultra-fast AI inference through custom LPU™ (Language Processing Unit) architecture. Hosts open-source models from OpenAI, Meta, DeepSeek, and others.
**Website:** [https://groq.com/](https://groq.com/)
-### Getting an API Key
+## Getting an API Key
-1. **Sign Up/Sign In:** Go to [Groq](https://groq.com/) and create an account or sign in.
-2. **Navigate to Console:** Go to the [Groq Console](https://console.groq.com/) to access your dashboard.
-3. **Create a Key:** Navigate to the API Keys section and create a new API key. Give your key a descriptive name (e.g., "CodinIT").
-4. **Copy the Key:** Copy the API key immediately. You will not be able to see it again. Store it securely.
+1. Go to [Groq Console](https://console.groq.com/) and sign in
+2. Navigate to API Keys section
+3. Create a new API key and name it (e.g., "CodinIT")
+4. Copy the key immediately; you won't be able to see it again
-### Supported Models
+## Configuration
-CodinIT supports the following Groq models:
+1. Click the settings icon (⚙️) in CodinIT
+2. Select "Groq" as the API Provider
+3. Paste your API key
+4. Choose your model
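Outside CodinIT, the same key works against Groq's OpenAI-compatible REST endpoint. A minimal sketch that only builds the request (nothing is sent; the key is a placeholder):

```python
import json
import urllib.request

API_KEY = "gsk_your_key_here"  # hypothetical placeholder, not a real key

# Groq serves an OpenAI-compatible chat completions endpoint
req = urllib.request.Request(
    "https://api.groq.com/openai/v1/chat/completions",
    data=json.dumps({
        "model": "llama-3.3-70b-versatile",
        "messages": [{"role": "user", "content": "Explain LPUs in one sentence."}],
    }).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
```

Sending `req` with `urllib.request.urlopen` (with a real key) returns a standard OpenAI-style chat completion response.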
-- `llama-3.3-70b-versatile` (Meta) - Balanced performance with 131K context
-- `llama-3.1-8b-instant` (Meta) - Fast inference with 131K context
-- `openai/gpt-oss-120b` (OpenAI) - Featured flagship model with 131K context
-- `openai/gpt-oss-20b` (OpenAI) - Featured compact model with 131K context
-- `moonshotai/kimi-k2-instruct` (Moonshot AI) - 1 trillion parameter model with prompt caching
-- `deepseek-r1-distill-llama-70b` (DeepSeek/Meta) - Reasoning-optimized model
-- `qwen/qwen3-32b` (Alibaba Cloud) - Enhanced for Q&A tasks
-- `meta-llama/llama-4-maverick-17b-128e-instruct` (Meta) - Latest Llama 4 variant
-- `meta-llama/llama-4-scout-17b-16e-instruct` (Meta) - Latest Llama 4 variant
+## Supported Models
-### Configuration in CodinIT
+- `llama-3.3-70b-versatile` (Meta) - 131K context
+- `openai/gpt-oss-120b` (OpenAI) - 131K context
+- `moonshotai/kimi-k2-instruct` - 1T parameters with caching
+- `deepseek-r1-distill-llama-70b` - Reasoning optimized
+- `qwen/qwen3-32b` (Alibaba) - Q&A enhanced
+- `meta-llama/llama-4-maverick-17b-128e-instruct`
-1. **Open CodinIT Settings:** Click the settings icon (⚙️) in the CodinIT panel.
-2. **Select Provider:** Choose "Groq" from the "API Provider" dropdown.
-3. **Enter API Key:** Paste your Groq API key into the "Groq API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown.
+## Key Features
-### Groq's Speed Revolution
+- **Ultra-fast inference:** Sub-millisecond latency with LPU architecture
+- **Large context:** Up to 131K tokens
+- **Prompt caching:** Available on select models
+- **Vision support:** Available on select models
-Groq's LPU architecture delivers several key advantages over traditional GPU-based inference:
+Learn more about [LPU architecture](https://groq.com/blog/inside-the-lpu-deconstructing-groq-speed).
-#### LPU Architecture
-Unlike GPUs that are adapted from training workloads, Groq's LPU is purpose-built for inference. This eliminates architectural bottlenecks that create latency in traditional systems.
+## Notes
-#### Unmatched Speed
-- **Sub-millisecond latency** that stays consistent across traffic, regions, and workloads
-- **Static scheduling** with pre-computed execution graphs eliminates runtime coordination delays
-- **Tensor parallelism** optimized for low-latency single responses rather than high-throughput batching
-
-#### Quality Without Tradeoffs
-- **TruePoint numerics** reduce precision only in areas that don't affect accuracy
-- **100-bit intermediate accumulation** ensures lossless computation
-- **Strategic precision control** maintains quality while achieving 2-4× speedup over BF16
-
-#### Memory Architecture
-- **SRAM as primary storage** (not cache) with hundreds of megabytes on-chip
-- **Eliminates DRAM/HBM latency** that plagues traditional accelerators
-- **Enables true tensor parallelism** by splitting layers across multiple chips
-
-Learn more about Groq's technology in their [LPU architecture blog post](https://groq.com/blog/inside-the-lpu-deconstructing-groq-speed).
-
-### Special Features
-
-#### Prompt Caching
-The Kimi K2 model supports prompt caching, which can significantly reduce costs and latency for repeated prompts.
-
-#### Vision Support
-Select models support image inputs and vision capabilities. Check the model details in the Groq Console for specific capabilities.
-
-#### Reasoning Models
-Some models like DeepSeek variants offer enhanced reasoning capabilities with step-by-step thought processes.
-
-### Tips and Notes
-
-- **Model Selection:** Choose models based on your specific use case and performance requirements.
-- **Speed Advantage:** Groq excels at single-request latency rather than high-throughput batch processing.
-- **OSS Model Provider:** Groq hosts open-source models from multiple providers (OpenAI, Meta, DeepSeek, etc.) on their fast infrastructure.
-- **Context Windows:** Most models offer large context windows (up to 131K tokens) for including substantial code and context.
-- **Pricing:** Groq offers competitive pricing with their speed advantages. Check the [Groq Pricing](https://groq.com/pricing) page for current rates.
-- **Rate Limits:** Groq has generous rate limits, but check their documentation for current limits based on your usage tier.
\ No newline at end of file
+- **Speed:** Optimized for single-request latency
+- **Pricing:** See [Groq Pricing](https://groq.com/pricing)
\ No newline at end of file
diff --git a/providers/huggingface.mdx b/providers/huggingface.mdx
index 58560f8..6314f4e 100644
--- a/providers/huggingface.mdx
+++ b/providers/huggingface.mdx
@@ -1,206 +1,41 @@
---
title: Hugging Face
-description: Access open-source AI models through Hugging Face's inference API
+description: Access thousands of open-source AI models through Hugging Face's inference API.
---
-## Overview
+**Website:** [https://huggingface.co/](https://huggingface.co/)
-Hugging Face democratizes AI by providing easy access to cutting-edge models. Their platform hosts models for text generation, code completion, analysis, and more, all accessible through a simple API interface.
+## Getting an API Key
-
-
- Access to thousands of community models
-
-
- Latest models from top AI research labs
-
-
- Affordable pricing for open-source models
-
-
+1. Go to [Hugging Face](https://huggingface.co/) and sign in
+2. Navigate to [Settings > Access Tokens](https://huggingface.co/settings/tokens)
+3. Create a new token with "Read" permissions
+4. Copy the token immediately
-## Available Models
+## Configuration
-
-
- ### Qwen Series
- High-quality models from Alibaba Cloud, excellent for coding and general tasks.
+1. Click the settings icon (⚙️) in CodinIT
+2. Select "Hugging Face" as the API Provider
+3. Paste your token
+4. Choose your model
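The same token can be used directly against the hosted Inference API. A hedged sketch that only constructs the request (the token is a placeholder, and the repo id for the Qwen coder model is assumed):

```python
import json
import urllib.request

HF_TOKEN = "hf_your_token_here"  # hypothetical placeholder
MODEL = "Qwen/Qwen2.5-Coder-32B-Instruct"  # assumed repo id

# Hosted Inference API: POST to /models/<repo-id> with a Bearer token
req = urllib.request.Request(
    f"https://api-inference.huggingface.co/models/{MODEL}",
    data=json.dumps({"inputs": "def fibonacci(n):"}).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
```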
- - **Qwen2.5-Coder-32B**: Specialized for code generation
- - **Qwen2.5-72B**: General purpose large model
- - **Best for**: Code completion, technical writing, analysis
+## Supported Models
-
+- **Qwen series:** Qwen2.5-Coder-32B, Qwen2.5-72B
+- **Llama series:** Llama-3.1-70B, Llama-3.1-405B
+- **CodeLlama:** CodeLlama-34B
+- **Yi series:** Yi-1.5-34B
+- **Hermes series:** Hermes-3-Llama-3.1-8B
-
- ### Meta Llama Series
- Industry-leading open-source models from Meta.
+## Features
- - **Llama-3.1-70B**: Powerful general-purpose model
- - **Llama-3.1-405B**: Massive model for complex tasks
- - **Best for**: Advanced reasoning, creative tasks
+- **Open source:** All models openly available
+- **Community driven:** Constantly updated
+- **Research access:** Latest models from top labs
+- **Cost effective:** Affordable pricing
+- **Free tier:** Available for testing
-
+## Notes
-
- ### CodeLlama Series
- Specialized models for programming and code-related tasks.
-
- - **CodeLlama-34B**: Large coding model
- - **Best for**: Code generation, debugging, technical analysis
-
-
-
-
- ### Yi Series
- High-performance models from 01.AI.
-
- - **Yi-1.5-34B**: Balanced performance and capability
- - **Best for**: General AI tasks, analysis, writing
-
-
-
-
- ### Hermes Series
- Fine-tuned models optimized for helpfulness and reasoning.
-
- - **Hermes-3-Llama-3.1-8B**: Efficient and capable
- - **Best for**: Conversational AI, helpful responses
-
-
-
-
-## Setup Instructions
-
-
-
- Visit [Hugging Face](https://huggingface.co/) and create a free account
-
-
- Go to [Settings > Access Tokens](https://huggingface.co/settings/tokens) and create a new token
-
- Ensure your token has "Read" permissions for model inference
- Enter your Hugging Face token in the provider settings
- Try different models to find the best fit for your needs
-
-
-## Key Features
-
-
- Open Source
- Community Driven
- Research Models
- Cost Effective
- Diverse Models
-
-
-### Platform Advantages
-
-- **Open Source**: All models are openly available and auditable
-- **Community Driven**: Constantly updated by global AI community
-- **Research Access**: Latest models from top research institutions
-- **Flexible Pricing**: Pay only for what you use
-- **Wide Selection**: Models for every use case and skill level
-
-## Use Cases
-
-
-
- ### Programming Tasks
- Specialized models for software development and coding assistance.
-
- - Code generation and completion
- - Code review and analysis
- - Debugging assistance
- - Technical documentation
-
-
-
-
- ### Academic Research
- Powerful models for research, analysis, and academic work.
-
- - Scientific paper analysis
- - Research summarization
- - Data interpretation
- - Academic writing assistance
-
-
-
-
- ### Creative Writing
- Models for content creation and creative tasks.
-
- - Creative writing
- - Content generation
- - Language translation
- - Educational content
-
-
-
-
- ### Enterprise Use
- Suitable for business and productivity applications.
-
- - Business analysis
- - Report generation
- - Customer communication
- - Process automation
-
-
-
-
-## Pricing Information
-
-Hugging Face offers flexible pricing based on model size and usage:
-
-
-
-
-
-
-
-
-**Free Tier**: Hugging Face offers a generous free tier for testing and light usage.
-
-
- **Model Selection**: Start with smaller models for testing, then scale up to larger models for production use.
-
-
-
- **Rate Limits**: Free tier has usage limits. Paid plans offer higher rate limits and priority access.
-
-
-## Model Performance Notes
-
-
-
- ### Speed Considerations
- Model size affects response time and resource usage.
-
- - **Small models**: Fast responses, lower resource usage
- - **Large models**: Slower responses, higher resource usage
- - **Consider trade-offs**: Speed vs. quality based on your needs
-
-
-
-
- ### Token Limits
- Different models have varying context window sizes.
-
- - **Most models**: 4K-8K token context windows
- - **Specialized models**: May have different limits
- - **Check documentation**: Verify limits for your chosen model
-
-
-
-
- ### Staying Current
- Hugging Face models are frequently updated by the community.
-
- - **Regular updates**: New model versions released frequently
- - **Version pinning**: Specify exact model versions for consistency
- - **Community contributions**: New models added regularly
-
-
-
+- **Pricing:** Based on model size and usage
+- **Rate limits:** Free tier has usage limits
diff --git a/providers/hyperbolic.mdx b/providers/hyperbolic.mdx
index faa86f9..0d6b0e6 100644
--- a/providers/hyperbolic.mdx
+++ b/providers/hyperbolic.mdx
@@ -1,130 +1,39 @@
---
title: Hyperbolic
-description: Access high-performance open-source AI models through Hyperbolic's optimized infrastructure
+description: Access high-performance open-source AI models through Hyperbolic's optimized infrastructure.
---
-Hyperbolic provides access to cutting-edge open-source AI models through their optimized cloud infrastructure, offering fast inference speeds and a wide selection of models from leading AI research organizations.
+**Website:** [https://app.hyperbolic.xyz/](https://app.hyperbolic.xyz/)
-## Overview
+## Getting an API Key
-Hyperbolic specializes in running open-source AI models with enterprise-grade performance and reliability. Their platform offers models from Qwen, DeepSeek, and other leading AI labs, optimized for speed and cost-effectiveness.
+1. Go to [Hyperbolic](https://app.hyperbolic.xyz/) and sign in
+2. Navigate to Settings
+3. Create a new API key
+4. Copy the key immediately
-
-
- Optimized infrastructure for fast inference
-
-
- Access to leading open-source AI models
-
-
- Competitive pricing for enterprise use
-
-
+## Configuration
-## Available Models
+1. Click the settings icon (⚙️) in CodinIT
+2. Select "Hyperbolic" as the API Provider
+3. Paste your API key
+4. Choose your model
-
-
- ### Qwen Series
- Advanced models from Alibaba Cloud, excellent for coding and general AI tasks.
+## Supported Models
- - **Qwen2.5-Coder-32B**: Specialized code generation model
- - **Qwen2.5-72B**: Large general-purpose model
- - **QwQ-32B-Preview**: Advanced reasoning model
- - **Qwen2-VL-72B**: Multimodal model with vision capabilities
+- **Qwen series:** Qwen2.5-Coder-32B, Qwen2.5-72B, QwQ-32B-Preview
+- **Qwen Vision:** Qwen2-VL-72B (multimodal)
+- **DeepSeek:** DeepSeek-V2.5
-
+## Features
-
- ### DeepSeek Series
- Efficient and capable models from DeepSeek AI.
+- **High performance:** Optimized infrastructure for fast inference
+- **Open source focus:** Latest open-source models
+- **Multimodal:** Vision and text capabilities
+- **Cost effective:** Competitive pricing
+- **Free credits:** Available for new accounts
- - **DeepSeek-V2.5**: Balanced performance and efficiency
- - **Best for**: General AI tasks, analysis, reasoning
+## Notes
-
-
-
-## Setup Instructions
-
-
- Visit [Hyperbolic](https://app.hyperbolic.xyz/) and create an account
- Navigate to Settings and create a new API key
- Add your API key to the Hyperbolic provider settings
- Choose from available models based on your needs
-
-
-## Key Features
-
-
- High Performance
- Open Source
- Multimodal Support
- Enterprise Ready
- Cost Optimized
-
-
-### Platform Advantages
-
-- **Optimized Infrastructure**: Fast inference speeds with low latency
-- **Open Source Focus**: Access to the latest open-source models
-- **Multimodal Capabilities**: Models with vision and text understanding
-- **Enterprise Features**: Reliability and scalability for business use
-- **Competitive Pricing**: Cost-effective compared to proprietary models
-
-## Use Cases
-
-
-
- ### Programming Tasks
- Perfect for software development and technical work.
-
- - Code generation and completion
- - Technical problem-solving
- - Code review and analysis
- - Documentation generation
-
-
-
-
- ### Research Applications
- Suitable for research, analysis, and academic work.
-
- - Scientific research assistance
- - Data analysis and interpretation
- - Academic writing support
- - Complex reasoning tasks
-
-
-
-
- ### Enterprise Use
- Reliable for business and productivity applications.
-
- - Business intelligence
- - Content creation
- - Customer service automation
- - Process optimization
-
-
-
-
-## Pricing Information
-
-Hyperbolic offers flexible pricing based on model usage:
-
-
-
-
-
-
-
-**Free Credits**: New accounts receive free credits for testing and evaluation.
-
-
- **Model Selection**: Choose Qwen models for coding tasks and DeepSeek for general-purpose applications.
-
-
-
- **Rate Limits**: Monitor your usage to stay within rate limits. Enterprise plans offer higher limits.
-
+- **Pricing:** Usage-based, competitive rates
+- **Rate limits:** Monitor usage to stay within limits
diff --git a/providers/lmstudio.mdx b/providers/lmstudio.mdx
index 1b74381..1a29f89 100644
--- a/providers/lmstudio.mdx
+++ b/providers/lmstudio.mdx
@@ -1,211 +1,73 @@
---
title: LM Studio
-description: Run AI models locally with LM Studio's user-friendly interface
+description: Run AI models locally with LM Studio's user-friendly interface for privacy and offline development.
---
-LM Studio provides a user-friendly way to run large language models locally on your computer, offering privacy, speed, and offline capabilities without requiring an internet connection.
+LM Studio provides a user-friendly way to run AI models locally with privacy, speed, and offline capabilities.
-## Overview
+**Website:** [https://lmstudio.ai](https://lmstudio.ai)
-LM Studio bridges the gap between powerful AI models and local computing, allowing you to run advanced AI models directly on your machine. It's perfect for users who want privacy, speed, and control over their AI interactions.
+## Setup
-
-
- Run AI models directly on your computer
-
-
- Keep conversations and data completely private
-
-
- Work without internet connectivity
-
-
+1. **Download:** Visit [lmstudio.ai](https://lmstudio.ai) and download for your OS
+2. **Install and launch:** Open LM Studio
+3. **Download a model:** Go to "Discover" tab and download a model
+ - **Recommended:** Qwen3 Coder 30B A3B Instruct for best CodinIT experience
+4. **Start server:** Go to "Developer" tab and toggle the server to "Running" (serves at `http://localhost:1234` by default)
+5. **Configure model settings:**
+ - **Context Length:** Set to 262,144 (maximum)
+ - **KV Cache Quantization:** Leave unchecked (quantizing the KV cache can degrade output quality)
+ - **Flash Attention:** Enable if available
-## How It Works
+## Configuration in CodinIT
-LM Studio downloads and runs AI models locally using your computer's resources. It provides a simple interface to manage models, start local servers, and connect to various applications including Codinit.
+1. Click the settings icon (⚙️) in CodinIT
+2. Select "LM Studio" as the API Provider
+3. Set the server URL to `http://localhost:1234` (or your custom port)
+4. Choose your model
-
-
- ### Downloading Models
- Choose from thousands of available models in various sizes and capabilities.
+## Quantization Guide
- - **Model Library**: Browse and download models from Hugging Face
- - **Size Options**: From small 1GB models to large 100GB+ models
- - **Format Support**: GGUF, SafeTensor, and other formats
- - **Automatic Updates**: Stay current with latest model versions
+Choose based on available RAM:
+- **32GB RAM:** 4-bit quantization (~17GB download)
+- **64GB RAM:** 8-bit quantization (~32GB download)
+- **128GB+ RAM:** Full precision or larger models
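The download sizes above follow from parameter count × bits per weight. A rough back-of-the-envelope helper (weights only, ignoring KV cache and runtime overhead):

```python
def approx_weight_size_gb(params_billions: float, bits_per_weight: int) -> float:
    """Rough estimate of weight size in GB: parameters x bits per weight / 8."""
    return params_billions * bits_per_weight / 8

# A 30B-parameter model at 4-bit quantization: ~15 GB of weights,
# which lands near the ~17 GB download once metadata and overhead are included
size = approx_weight_size_gb(30, 4)
```

The same formula gives ~30 GB for 8-bit, consistent with the ~32 GB download figure.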
-
+## Model Format
-
- ### Running AI Locally
- Start a local API server that applications can connect to.
+- **Mac (Apple Silicon):** Use MLX format
+- **Windows/Linux:** Use GGUF format
- - **One-Click Setup**: Start local server with single button
- - **API Compatibility**: OpenAI-compatible API endpoints
- - **Multi-Platform**: Windows, macOS, and Linux support
- - **Resource Management**: Monitor CPU/GPU usage and memory
+## Features
-
-
-
- ### Optimization Settings
- Fine-tune performance based on your hardware capabilities.
-
- - **GPU Acceleration**: Utilize NVIDIA/AMD GPUs when available
- - **CPU Optimization**: Efficient CPU inference for all systems
- - **Memory Management**: Control RAM usage and model loading
- - **Quantization**: Balance speed vs. quality with different precisions
-
-
-
-
-## Setup Instructions
-
-
- Visit [LM Studio website](https://lmstudio.ai/) and download the application
- Install LM Studio and launch the application
- Browse the model library and download models you want to use
- Click "Start Server" in LM Studio to begin the local API server
- Set the server URL (usually http://localhost:1234) in Codinit settings
- Verify the connection and start using local AI models
-
-
-## Key Features
-
-
- Local Execution
- Privacy Focused
- Offline Capable
- Cost Free
- Customizable
-
-
-### Platform Advantages
-
-- **Complete Privacy**: All conversations stay on your device
-- **No API Costs**: Run unlimited AI interactions for free
-- **Offline Operation**: Work without internet connectivity
-- **Hardware Flexibility**: Run on any modern computer
-- **Model Variety**: Access thousands of different AI models
-
-## Use Cases
-
-
-
- ### Secure Development
- Perfect for sensitive development work and private projects.
-
- - Code review without sharing code externally
- - Private documentation and analysis
- - Secure brainstorming and planning
- - Confidential business applications
-
-
-
-
- ### Offline Productivity
- Continue working with AI assistance even without internet.
-
- - Travel and remote work scenarios
- - Limited connectivity environments
- - Data-sensitive offline processing
- - Emergency backup AI capabilities
-
-
-
-
- ### Budget-Friendly AI
- Access advanced AI capabilities without ongoing costs.
-
- - Unlimited usage without API fees
- - No per-token or per-request charges
- - One-time setup, ongoing free usage
- - Cost-effective for heavy AI users
-
-
-
-
- ### Educational Use
- Learn about AI and experiment with different models.
-
- - Study different model architectures
- - Compare model performance and capabilities
- - Learn prompt engineering techniques
- - Understand AI model behaviors
-
-
-
+- **Complete privacy:** All data stays on your device
+- **No API costs:** Unlimited free usage
+- **Offline operation:** Works without internet
+- **Hardware flexibility:** Runs on any modern computer
## System Requirements
-
-
- ### Basic Setup
- Requirements for running small to medium models.
-
- - **RAM**: 8GB minimum, 16GB recommended
- - **Storage**: 10GB free space for models and application
- - **OS**: Windows 10+, macOS 10.15+, Ubuntu 18.04+
- - **CPU**: Modern multi-core processor
-
-
-
-
- ### Optimal Performance
- Recommended specifications for large models and best performance.
-
- - **RAM**: 32GB or more for large models
- - **GPU**: NVIDIA GPU with 8GB+ VRAM (optional but recommended)
- - **Storage**: SSD with 50GB+ free space
- - **CPU**: Multi-core processor with AVX2 support
-
-
-
-
- ### Hardware Acceleration
- Utilize GPU acceleration for faster inference speeds.
-
- - **NVIDIA GPUs**: CUDA support for maximum performance
- - **AMD GPUs**: ROCm support on Linux
- - **Apple Silicon**: Native acceleration on M1/M2/M3 Macs
- - **CPU Fallback**: Automatic fallback to CPU when GPU unavailable
-
-
-
-
-## Model Selection Guide
-
-
-
- ### Choosing Model Size
- Balance between performance and resource requirements.
-
- - **Small Models (1-3GB)**: Fast, basic capabilities, good for simple tasks
- - **Medium Models (3-7GB)**: Balanced performance, good for most applications
- - **Large Models (7-20GB)**: High quality, slower but more capable
- - **XL Models (20GB+)**: Maximum quality, requires powerful hardware
-
-
-
-
- ### Specialized Models
- Choose models based on your specific needs.
+**Minimum:**
+- 8GB RAM (16GB recommended)
+- 10GB free storage
+- Modern multi-core CPU
- - **Code Models**: Code generation, debugging, technical writing
- - **General Chat**: Conversation, analysis, creative writing
- - **Math/Science**: Mathematical reasoning, scientific analysis
- - **Multilingual**: Support for multiple languages and cultures
+**Recommended:**
+- 32GB+ RAM for large models
+- NVIDIA GPU with 8GB+ VRAM (optional)
+- SSD with 50GB+ free space
-
-
+## Troubleshooting
-**Free Forever**: LM Studio is completely free to use. No subscriptions or hidden costs.
+If CodinIT can't connect:
+1. Verify LM Studio server is running
+2. Ensure a model is loaded
+3. Check system meets hardware requirements
+4. Confirm server URL matches in CodinIT settings
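Steps 1 and 2 can be sanity-checked from a script: LM Studio's local server exposes an OpenAI-compatible `/v1/models` endpoint. A minimal probe (assumes the default server address; adjust `base_url` to match your settings):

```python
import json
import urllib.error
import urllib.request

def list_local_models(base_url="http://localhost:1234/v1"):
    """Return model ids from a running LM Studio server, or [] if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/models", timeout=2) as resp:
            payload = json.load(resp)
        return [m.get("id") for m in payload.get("data", [])]
    except (urllib.error.URLError, OSError, ValueError):
        return []
```

An empty list means the server is down, no model is loaded, or the URL is wrong.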
-
- **Start Small**: Begin with smaller models to test your setup, then upgrade to larger models as needed.
-
+## Notes
-
- **Resource Intensive**: Large models require significant RAM and may run slowly on lower-end hardware.
-
+- Start LM Studio before using with CodinIT
+- Keep LM Studio running in background
+- First model download may take several minutes
+- Models are stored locally after download
diff --git a/providers/mistral-ai.mdx b/providers/mistral-ai.mdx
index ffa5d3b..ef075a6 100644
--- a/providers/mistral-ai.mdx
+++ b/providers/mistral-ai.mdx
@@ -1,53 +1,43 @@
---
title: "Mistral"
-description: "Learn how to configure and use Mistral AI models, including Codestral, with CodinIT. Covers API key setup and model selection."
+description: "Configure Mistral AI models including Codestral for code generation with CodinIT."
---
-CodinIT supports accessing models through the Mistral AI API, including both standard Mistral models and the code-specialized Codestral model.
-
**Website:** [https://mistral.ai/](https://mistral.ai/)
-### Getting an API Key
-
-1. **Sign Up/Sign In:** Go to the [Mistral Platform](https://console.mistral.ai/). Create an account or sign in. You may need to go through a verification process.
-2. **Create an API Key:**
- - [La Plateforme API Key](https://console.mistral.ai/api-keys/) and/or
- - [Codestral API Key](https://console.mistral.ai/codestral)
+## Getting an API Key
-### Supported Models
+1. Go to [Mistral Console](https://console.mistral.ai/) and sign in
+2. Create API keys:
+ - [La Plateforme API Key](https://console.mistral.ai/api-keys/) for standard models
+ - [Codestral API Key](https://console.mistral.ai/codestral) for Codestral models
+3. Copy the key immediately
-CodinIT supports the following Mistral models:
+## Configuration
-- pixtral-large-2411
-- ministral-3b-2410
-- ministral-8b-2410
-- mistral-small-latest
-- mistral-medium-latest
-- mistral-small-2501
-- pixtral-12b-2409
-- open-mistral-nemo-2407
-- open-codestral-mamba
-- codestral-2501
-- devstral-small-2505
+1. Click the settings icon (⚙️) in CodinIT
+2. Select "Mistral" as the API Provider
+3. Paste your API key
+4. Choose your model
-**Note:** Model availability and specifications may change.
-Refer to the [Mistral AI documentation](https://docs.mistral.ai/api/) and [Mistral Model Overview](https://docs.mistral.ai/getting-started/models/models_overview/) for the most current information.
+## Supported Models
-### Configuration in CodinIT
+- `pixtral-large-2411`
+- `mistral-small-2501`
+- `ministral-8b-2410`
+- `codestral-2501` (Code specialized)
+- `devstral-small-2505` (Code specialized)
+- `open-codestral-mamba`
-1. **Open CodinIT Settings:** Click the settings icon (⚙️) in the CodinIT panel.
-2. **Select Provider:** Choose "Mistral" from the "API Provider" dropdown.
-3. **Enter API Key:** Paste your Mistral API key into the "Mistral API Key" field if you're using a standard `mistral` model. If you intend to use `codestral-latest`, see the "Using Codestral" section below.
-4. **Select Model:** Choose your desired model from the "Model" dropdown.
+See [Mistral Models](https://docs.mistral.ai/getting-started/models/models_overview/) for full details.
-### Using Codestral
+## Using Codestral
-[Codestral](https://docs.mistral.ai/capabilities/code_generation/) is a model specifically designed for code generation and interaction.
-For Codestral, you can use different endpoints (Default: codestral.mistral.ai).
-If using the La Plateforme API Key for Codestral, change the **Codestral Base Url** to: `https://api.mistral.ai`
+For Codestral models:
+- Use Codestral API key from `codestral.mistral.ai`
+- Or use La Plateforme API key and set **Codestral Base URL** to `https://api.mistral.ai`
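The key-to-endpoint pairing above can be captured in a small helper (an illustrative sketch; both URLs come from Mistral's documented endpoints):

```python
def codestral_base_url(key_type: str) -> str:
    """Map the API key type to its matching Codestral base URL."""
    endpoints = {
        "codestral": "https://codestral.mistral.ai",    # dedicated Codestral key
        "la_plateforme": "https://api.mistral.ai",      # standard La Plateforme key
    }
    if key_type not in endpoints:
        raise ValueError(f"unknown key type: {key_type}")
    return endpoints[key_type]
```

Using a La Plateforme key against the default `codestral.mistral.ai` endpoint (or vice versa) is a common cause of authentication errors.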
-To use Codestral with CodinIT:
+## Notes
-1. **Select "Mistral" as the API Provider in CodinIT Settings.**
-2. **Select a Codestral Model** (e.g., `codestral-latest`) from the "Model" dropdown.
-3. **Enter your Codestral API Key** (from `codestral.mistral.ai`) or your La Plateforme API Key (from `api.mistral.ai`) into the appropriate API key field in CodinIT.
\ No newline at end of file
+- **Pricing:** See [Mistral Pricing](https://mistral.ai/pricing)
+- **Documentation:** [Mistral API Docs](https://docs.mistral.ai/api/)
diff --git a/providers/moonshot.mdx b/providers/moonshot.mdx
index a4dea4d..2a1e797 100644
--- a/providers/moonshot.mdx
+++ b/providers/moonshot.mdx
@@ -1,196 +1,38 @@
---
title: Moonshot
-description: Access Chinese AI models including Kimi series through Moonshot's platform
+description: Configure Moonshot's Kimi series models for Chinese language and multilingual tasks.
---
-Moonshot provides access to advanced Chinese AI models, including the popular Kimi series, offering strong performance in both Chinese and English languages with competitive pricing.
+**Website:** [https://platform.moonshot.ai/](https://platform.moonshot.ai/)
-## Overview
+## Getting an API Key
-Moonshot specializes in Chinese language AI models while maintaining strong English capabilities. Their platform offers a range of models from lightweight options to powerful enterprise-grade solutions, with particular strength in Chinese language processing and cultural understanding.
+1. Go to [Moonshot Platform](https://platform.moonshot.ai/) and sign in
+2. Navigate to API console
+3. Create a new API key
+4. Copy the key immediately
-
-
- Specialized models for Chinese language processing
-
-
- Popular conversational AI models
-
-
- Strong performance in both Chinese and English
-
-
+## Configuration
-## Available Models
+1. Click the settings icon (⚙️) in CodinIT
+2. Select "Moonshot" as the API Provider
+3. Paste your API key
+4. Choose your model
-
-
- ### Kimi Conversational Models
- Moonshot's flagship conversational AI models, known for natural interactions.
+## Supported Models
- - **Kimi Latest**: Most advanced conversational model
- - **Kimi K2 Preview**: Advanced reasoning and analysis
- - **Kimi K2 Turbo**: Fast and efficient for quick tasks
- - **Kimi Thinking**: Enhanced reasoning capabilities
+- **Kimi series:** Kimi Latest, K2 Preview, K2 Turbo, Kimi Thinking
+- **Moonshot v1:** 8K, 32K, 128K context variants
+- **Vision models:** Moonshot v1 Vision (8K, 32K, 128K)
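Since the v1 models come in 8K, 32K, and 128K variants, choosing the smallest one that fits your prompt keeps costs down (pricing scales with context size, per the notes below). A sketch, assuming the standard `moonshot-v1-*` model IDs and a rough token estimate:

```python
# Sketch: choose the smallest moonshot-v1 context variant that fits.
# Variant names mirror the list above; the selection logic itself is
# an illustration, not CodinIT behavior.
VARIANTS = [
    (8_000, "moonshot-v1-8k"),
    (32_000, "moonshot-v1-32k"),
    (128_000, "moonshot-v1-128k"),
]

def pick_variant(estimated_tokens: int) -> str:
    for limit, name in VARIANTS:
        if estimated_tokens <= limit:
            return name
    raise ValueError("prompt exceeds the largest context window")

print(pick_variant(20_000))  # moonshot-v1-32k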
-
+## Features
-
- ### Moonshot Vision Models
- Multimodal models with vision capabilities and large context windows.
+- **Chinese language:** Specialized for Chinese text processing
+- **Multimodal:** Vision and text understanding
+- **Large context:** Up to 128K tokens
+- **Cultural intelligence:** Chinese cultural context understanding
- - **Moonshot v1 8K**: Basic model with 8K context
- - **Moonshot v1 32K**: Extended context for longer conversations
- - **Moonshot v1 128K**: Large context for complex tasks
- - **Moonshot v1 Auto**: Adaptive model selection
+## Notes
-
-
-
- ### Multimodal Vision Series
- Models capable of understanding both text and images.
-
- - **Moonshot v1 8K Vision**: Basic multimodal capabilities
- - **Moonshot v1 32K Vision**: Extended context with vision
- - **Moonshot v1 128K Vision**: Large context multimodal model
-
-
-
-
-## Setup Instructions
-
-
- Visit [Moonshot Platform](https://platform.moonshot.ai/) and create an account
- Navigate to the API console in your dashboard
- Create a new API key for accessing the models
- Add your Moonshot API key to the provider settings
- Try different models to find the best fit for your needs
-
-
-## Key Features
-
-
- Chinese Language
- Multimodal
- Large Context
- Conversational AI
- Cultural Understanding
-
-
-### Platform Advantages
-
-- **Chinese Language Excellence**: Superior performance in Chinese text processing
-- **Multimodal Capabilities**: Vision and text understanding combined
-- **Large Context Windows**: Handle complex, lengthy conversations
-- **Cultural Intelligence**: Better understanding of Chinese cultural context
-- **Competitive Pricing**: Cost-effective for Chinese language AI
-
-## Use Cases
-
-
-
- ### Chinese Language Applications
- Perfect for Chinese language content creation and processing.
-
- - Chinese content generation and translation
- - Cultural content creation
- - Chinese business communication
- - Educational content in Chinese
-
-
-
-
- ### Chat and Interaction
- Excellent for natural conversational interfaces.
-
- - Customer service chatbots
- - Personal assistants
- - Interactive learning systems
- - Social conversation applications
-
-
-
-
- ### Vision and Text
- Combine image understanding with text processing.
-
- - Image description and analysis
- - Visual content creation
- - Document processing with images
- - Multimodal content generation
-
-
-
-
- ### Enterprise Use
- Suitable for business applications requiring Chinese language support.
-
- - Chinese market analysis
- - International business communication
- - Cross-cultural content creation
- - Multilingual customer support
-
-
-
-
-## Pricing Information
-
-Moonshot offers flexible pricing based on model usage and context length:
-
-
-
-
-
-
-
-
-
- **Context Pricing**: Pricing scales with context window size. Larger context models cost more per token.
-
-
-
- **Model Selection**: Use Kimi models for conversational tasks and Moonshot v1 models for complex reasoning or
- multimodal work.
-
-
-
- **Language Optimization**: While Moonshot models excel at Chinese, they also perform well in English for general
- tasks.
-
-
-## Model Capabilities
-
-
-
- ### Multilingual Performance
- Strong capabilities across multiple languages with Chinese specialization.
-
- - **Primary Language**: Chinese (Mandarin, simplified/traditional)
- - **Secondary Languages**: English, Japanese, Korean
- - **Cultural Context**: Deep understanding of Chinese culture and context
- - **Code Switching**: Natural switching between languages
-
-
-
-
- ### Context Window Management
- Different models offer varying context window sizes for different needs.
-
- - **Short Context (8K)**: Quick interactions, simple tasks
- - **Medium Context (32K)**: Complex conversations, document analysis
- - **Long Context (128K)**: Large documents, extended conversations
- - **Adaptive Models**: Automatic context optimization
-
-
-
-
- ### Advanced Capabilities
- Unique features that set Moonshot models apart.
-
- - **Cultural Intelligence**: Understanding of Chinese cultural nuances
- - **Vision Integration**: Image understanding and description
- - **Reasoning Enhancement**: Improved logical reasoning in Chinese contexts
- - **Conversational Memory**: Better context retention in conversations
-
-
-
+- **Pricing:** Scales with context window size
+- **Languages:** Excellent for Chinese, strong English support
diff --git a/providers/ollama.mdx b/providers/ollama.mdx
index 578529e..cf8ee0c 100644
--- a/providers/ollama.mdx
+++ b/providers/ollama.mdx
@@ -1,79 +1,62 @@
---
title: "Ollama"
-description: "A quick guide to setting up Ollama for local AI model execution with CodinIT."
+description: "Run AI models locally with Ollama for privacy and offline access."
---
-CodinIT supports running models locally using Ollama. This approach offers privacy, offline access, and potentially reduced costs. It requires some initial setup and a sufficiently powerful computer. Because of the present state of consumer hardware, it's not recommended to use Ollama with CodinIT as performance will likely be poor for average hardware configurations.
+Run models locally using Ollama for privacy, offline access, and control. Requires initial setup and sufficient hardware.
**Website:** [https://ollama.com/](https://ollama.com/)
-### Setting up Ollama
+## Setup
-1. **Download and Install Ollama:**
- Obtain the Ollama installer for your operating system from the [Ollama website](https://ollama.com/) and follow their installation guide. Ensure Ollama is running. You can typically start it with:
+1. **Install Ollama:** Download from [ollama.com](https://ollama.com/) and install
+2. **Start Ollama:** Run `ollama serve` in terminal
+3. **Download a model:**
+ ```bash
+ ollama pull qwen2.5-coder:32b
+ ```
+4. **Configure context window:**
+ ```bash
+ ollama run qwen2.5-coder:32b
+ /set parameter num_ctx 32768
+ /save your_custom_model_name
+ ```
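The interactive `/set parameter` and `/save` steps above can also be captured non-interactively in a Modelfile and registered with `ollama create your_custom_model_name -f Modelfile`:

```
# Modelfile — same effect as the /set parameter + /save steps above
FROM qwen2.5-coder:32b
PARAMETER num_ctx 32768
```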
- ```bash
- ollama serve
- ```
+## Configuration in CodinIT
-2. **Download a Model:**
- Ollama supports a wide variety of models. A list of available models can be found on the [Ollama model library](https://ollama.com/library). Some models recommended for coding tasks include:
+1. Click the settings icon (⚙️) in CodinIT
+2. Select "ollama" as the API Provider
+3. Enter your saved model name
+4. (Optional) Set base URL if not using default `http://localhost:11434`
- - `codellama:7b-code` (a good, smaller starting point)
- - `codellama:13b-code` (offers better quality, larger size)
- - `codellama:34b-code` (provides even higher quality, very large)
- - `qwen2.5-coder:32b`
- - `mistralai/Mistral-7B-Instruct-v0.1` (a solid general-purpose model)
- - `deepseek-coder:6.7b-base` (effective for coding)
- - `llama3:8b-instruct-q5_1` (suitable for general tasks)
+## Recommended Models
- To download a model, open your terminal and execute:
+- `qwen2.5-coder:32b` - Excellent for coding
+- `codellama:34b-code` - High quality, large size
+- `deepseek-coder:6.7b-base` - Effective for coding
+- `llama3:8b-instruct-q5_1` - General tasks
- ```bash
- ollama pull
- ```
+See [Ollama model library](https://ollama.com/library) for full list.
- For instance:
+## Dynamic Context Windows
- ```bash
- ollama pull qwen2.5-coder:32b
- ```
+CodinIT automatically calculates optimal context windows based on model parameter size:
-3. **Configure the Model's Context Window:**
- By default, Ollama models often use a context window of 2048 tokens, which can be insufficient for many CodinIT requests. A minimum of 12,000 tokens is advisable for decent results, with 32,000 tokens being ideal. To adjust this, you'll modify the model's parameters and save it as a new version.
+- **70B+ models:** 32k context window (e.g., Llama 70B)
+- **30B+ models:** 16k context window
+- **7B+ models:** 8k context window
+- **Smaller models:** 4k context window (default)
- First, load the model (using `qwen2.5-coder:32b` as an example):
+**Exceptions:**
+- Llama 405B models: 128k context
- ```bash
- ollama run qwen2.5-coder:32b
- ```
+Model labels in CodinIT show both parameter size and context window (e.g., "qwen2.5-coder:32b (32B, 16k ctx)").
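The tiering above can be sketched as a lookup. This is an illustration of the documented rules, not CodinIT's actual code, and the exact token counts (powers of two) are an assumption:

```python
# Sketch of the context-window tiers described above. Takes a model's
# parameter count in billions and returns an assumed context window
# in tokens; not CodinIT's actual implementation.
def context_window(params_b: float, family: str = "") -> int:
    if family == "llama" and params_b >= 405:
        return 131072  # Llama 405B exception: 128k
    if params_b >= 70:
        return 32768   # 32k
    if params_b >= 30:
        return 16384   # 16k
    if params_b >= 7:
        return 8192    # 8k
    return 4096        # 4k default for smaller models

print(context_window(32))  # 16384 -> shown as "(32B, 16k ctx)"
```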
- Once the model is loaded within the Ollama interactive session, set the context size parameter:
+## Notes
- ```
- /set parameter num_ctx 32768
- ```
-
- Then, save this configured model with a new name:
-
- ```
- /save your_custom_model_name
- ```
-
- (Replace `your_custom_model_name` with a name of your choice.)
-
-4. **Configure CodinIT:**
- - Open the CodinIT sidebar (usually indicated by the CodinIT icon).
- - Click the settings gear icon (⚙️).
- - Select "ollama" as the API Provider.
- - Enter the Model name you saved in the previous step (e.g., `your_custom_model_name`).
- - (Optional) Adjust the base URL if Ollama is running on a different machine or port. The default is `http://localhost:11434`.
- - (Optional) Configure the Model context size in CodinIT's Advanced settings. This helps CodinIT manage its context window effectively with your customized Ollama model.
-
-### Tips and Notes
-
-- **Resource Demands:** Running large language models locally can be demanding on system resources. Ensure your computer meets the requirements for your chosen model.
-- **Model Choice:** Experiment with various models to discover which best fits your specific tasks and preferences.
-- **Offline Capability:** After downloading a model, you can use CodinIT with that model even without an internet connection.
-- **Token Usage Tracking:** CodinIT tracks token usage for models accessed via Ollama, allowing you to monitor consumption.
-- **Ollama's Own Documentation:** For more detailed information, consult the official [Ollama documentation](https://ollama.com/docs).
\ No newline at end of file
+- **Auto-detection:** CodinIT automatically detects Ollama running on port 11434
+- **Context window:** Dynamically calculated based on model capabilities
+- **Resource demands:** Large models require significant system resources
+- **Offline capability:** Works without internet after model download
+- **Performance:** May be slow on average hardware
diff --git a/providers/openai-like.mdx b/providers/openai-like.mdx
index 1c37de2..3a95395 100644
--- a/providers/openai-like.mdx
+++ b/providers/openai-like.mdx
@@ -1,279 +1,55 @@
---
title: OpenAI Compatible
-description: Connect to any OpenAI-compatible API endpoint or service
+description: Connect to any OpenAI-compatible API endpoint including custom deployments and self-hosted models.
---
-The OpenAI Compatible provider allows you to connect to any service that implements the OpenAI API specification, including custom deployments, alternative providers, and self-hosted models that maintain API compatibility.
+Connect to any service that implements the OpenAI API specification.
-## Overview
+## Configuration
-This flexible provider enables integration with any OpenAI-compatible API, making it easy to use custom AI deployments, alternative hosting services, or self-hosted models that follow the OpenAI API standard.
+Set these environment variables:
+- `OPENAI_LIKE_API_BASE_URL` - Your API endpoint URL
+- `OPENAI_LIKE_API_KEY` - Authentication token
+- `OPENAI_LIKE_API_MODELS` (optional) - Manual model list in the format `model1:limit;model2:limit`, where each `limit` is the model's context window in tokens
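The `OPENAI_LIKE_API_MODELS` format can be parsed like this (`parse_models` is a hypothetical helper; the example value comes from the format description above):

```python
# Sketch: parse the OPENAI_LIKE_API_MODELS format, e.g.
# "gpt-4:8000;claude-3:4000" -> {"gpt-4": 8000, "claude-3": 4000}
def parse_models(spec: str) -> dict[str, int]:
    models = {}
    for entry in spec.split(";"):
        if not entry.strip():
            continue  # tolerate a trailing semicolon
        # rpartition keeps any colons inside the model name intact
        name, _, limit = entry.rpartition(":")
        models[name] = int(limit)  # context window limit in tokens
    return models

print(parse_models("gpt-4:8000;claude-3:4000"))
```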
-
-
- Works with any OpenAI-compatible API
-
-
- Connect to self-hosted or custom AI services
-
-
- Highly customizable connection settings
-
-
+## Setup
-## How It Works
-
-
-
- ### OpenAI Standard
- Connects to services that implement the OpenAI API specification.
-
- - **Standard Endpoints**: Uses familiar `/chat/completions` and `/models` endpoints
- - **Compatible Formats**: Supports standard OpenAI request/response formats
- - **Authentication**: Uses Bearer token authentication like OpenAI
- - **Streaming Support**: Compatible with streaming responses
-
-
-
-
- ### Setup Flexibility
- Highly configurable to work with different API providers.
-
- - **Custom Base URL**: Specify any API endpoint URL
- - **API Key**: Configure authentication tokens
- - **Model List**: Define available models manually or auto-discover
- - **Environment Variables**: Support for different deployment environments
-
-
-
-
- ### Dynamic Model Loading
- Automatically discovers available models from compatible APIs.
-
- - **API Query**: Fetches model list from `/models` endpoint
- - **Fallback Configuration**: Manual model specification if API discovery fails
- - **Model Parsing**: Intelligent model name and capability detection
- - **Real-time Updates**: Reflects current API capabilities
-
-
-
-
-## Setup Instructions
-
-
- Determine the base URL of your OpenAI-compatible API service
- Get the authentication token or API key for the service
- Set the required environment variables in your deployment
- Verify the API endpoint and authentication work correctly
- Set up model list either through API discovery or manual configuration
-
-
-## Configuration Options
-
-
-
- ### Required Settings
- Environment variables needed for OpenAI-compatible provider setup.
-
- - **OPENAI_LIKE_API_BASE_URL**: The base URL of your API service
- - **OPENAI_LIKE_API_KEY**: Authentication token for API access
- - **OPENAI_LIKE_API_MODELS** (optional): Manual model specification
-
-
-
-
- ### Model Specification
- How to manually specify models when API discovery is not available.
-
- - **Format**: `model1:limit;model2:limit;model3:limit`
- - **Example**: `gpt-4:8000;claude-3:4000;llama-2:2000`
- - **Token Limits**: Specify context window limits per model
- - **Naming**: Use clear, descriptive model names
-
-
-
-
- ### Container Deployment
- Special considerations for Docker and containerized deployments.
-
- - **Network Access**: Ensure API endpoints are accessible from containers
- - **Environment Variables**: Pass configuration through Docker environment
- - **Volume Mounting**: Mount configuration files if needed
- - **Service Discovery**: Use container networking for service communication
-
-
-
-
-## Use Cases
-
-
-
- ### Local AI Deployment
- Connect to locally hosted AI models and services.
-
- - Local LLM deployments (Ollama, LM Studio, etc.)
- - Custom model servers
- - Private AI infrastructure
- - Development environments
-
-
-
-
- ### Third-Party Services
- Integrate with alternative AI providers using OpenAI compatibility.
-
- - Alternative hosting services
- - Specialized AI providers
- - Regional AI services
- - Custom AI platforms
-
-
-
-
- ### Corporate AI
- Connect to enterprise AI deployments and private clouds.
-
- - Corporate AI infrastructure
- - Private cloud deployments
- - On-premises AI services
- - Hybrid cloud setups
-
-
-
-
- ### Development and Testing
- Useful for development, testing, and prototyping scenarios.
-
- - Local development servers
- - Staging environment testing
- - API compatibility testing
- - Mock AI services for development
-
-
-
+1. Identify your OpenAI-compatible API endpoint
+2. Obtain the API key or authentication token
+3. Set environment variables in your deployment
+4. Test the connection
+5. Configure available models
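Step 4 can be sketched by building a request against the standard chat-completions route (assumes the service implements `/chat/completions` under your base URL; the request is constructed here but not sent, and the model name is a placeholder):

```python
import json
import os
import urllib.request

# Sketch: build (not send) a chat-completions request against an
# OpenAI-compatible endpoint using the env vars described above.
# Defaults below are placeholders for illustration.
base_url = os.environ.get("OPENAI_LIKE_API_BASE_URL", "http://localhost:8000/v1")
api_key = os.environ.get("OPENAI_LIKE_API_KEY", "sk-placeholder")

payload = {
    "model": "gpt-4",  # any model name your service exposes
    "messages": [{"role": "user", "content": "Say hello."}],
}
req = urllib.request.Request(
    f"{base_url.rstrip('/')}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send it; verify the URL first:
print(req.full_url)
```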
## Compatible Services
-
-
- ### Desktop Applications
- Popular local AI tools that provide OpenAI-compatible APIs.
-
- - **LM Studio**: Local model server with web UI
- - **Ollama**: Command-line tool for running models locally
- - **LocalAI**: Self-hosted OpenAI-compatible API
- - **Text Generation WebUI**: Local web interface for models
-
-
-
-
- ### Alternative Cloud Providers
- Cloud services that offer OpenAI-compatible APIs.
-
- - **Together AI**: Open-source model hosting
- - **Replicate**: Model deployment platform
- - **Modal**: Serverless model inference
- - **Anthropic-compatible services**: Alternative Claude hosting
-
-
-
-
- ### Custom AI Services
- Self-hosted or custom AI service deployments.
-
- - **vLLM**: High-performance LLM serving
- - **TGI (Text Generation Inference)**: Optimized text generation
- - **FastChat**: Open-source chat platform
- - **Custom model servers**: Your own AI service implementations
-
-
-
-
-## Troubleshooting
-
-
-
- ### API Connectivity
- Common connection and authentication problems.
-
- - **Network Access**: Verify API endpoint is reachable
- - **Authentication**: Check API key validity and format
- - **CORS Issues**: Ensure proper cross-origin headers
- - **SSL/TLS**: Verify certificate validity for HTTPS endpoints
+**Local AI Tools:**
+- LM Studio
+- Ollama
+- LocalAI
+- Text Generation WebUI
-
+**Cloud Alternatives:**
+- Together AI
+- Replicate
+- Modal
+- Custom deployments
-
- ### Model Loading Issues
- Problems with model list retrieval and configuration.
+**Self-Hosted:**
+- vLLM
+- TGI (Text Generation Inference)
+- FastChat
+- Custom model servers
- - **API Endpoint**: Verify `/models` endpoint exists and works
- - **Authentication**: Ensure proper API key for model discovery
- - **Manual Configuration**: Use environment variable fallback
- - **Model Format**: Check model ID format and naming conventions
-
-
-
-
- ### Speed and Reliability
- Addressing performance and reliability concerns.
-
- - **Response Times**: Check network latency to API endpoint
- - **Rate Limits**: Monitor API rate limiting and quotas
- - **Model Size**: Consider model size vs. available resources
- - **Caching**: Implement response caching for repeated queries
-
-
-
-
-
- **Compatibility Check**: Always verify that your target service implements the OpenAI API specification correctly,
- including proper request/response formats and authentication.
-
-
-
- **Testing Strategy**: Start with simple requests to verify connectivity, then test model discovery, and finally test
- actual model inference before full deployment.
-
-
-
- **Security Considerations**: Ensure your API keys are properly secured and that the API endpoint uses HTTPS for secure
- communication.
-
-
-## Advanced Configuration
-
-
-
- ### Additional Headers
- Configure custom headers for special API requirements.
-
- - **Authorization Variants**: Different authentication header formats
- - **API Version Headers**: Specify API version requirements
- - **Custom Metadata**: Service-specific header requirements
- - **Rate Limiting**: Custom rate limit headers
-
-
-
-
- ### Network Proxies
- Configure proxy settings for restricted network environments.
-
- - **HTTP Proxies**: Route API calls through proxy servers
- - **Corporate Networks**: Work within enterprise network restrictions
- - **VPN Requirements**: Handle VPN-dependent API access
- - **Load Balancing**: Distribute requests across multiple endpoints
-
-
+## Use Cases
-
- ### Observability
- Integrate with monitoring and logging systems.
+- Self-hosted models and services
+- Alternative AI providers
+- Enterprise private deployments
+- Development and testing environments
- - **Request Logging**: Track API usage and performance
- - **Error Monitoring**: Capture and analyze API errors
- - **Usage Analytics**: Monitor token consumption and costs
- - **Health Checks**: Implement API endpoint health monitoring
+## Notes
-
-
+- Verify API implements OpenAI specification correctly
+- Ensure HTTPS for secure communication
+- Test with simple requests first
+- Use manual model configuration if auto-discovery fails
diff --git a/providers/openai.mdx b/providers/openai.mdx
index baa8286..759228d 100644
--- a/providers/openai.mdx
+++ b/providers/openai.mdx
@@ -1,47 +1,37 @@
---
title: "OpenAI"
-description: "Learn how to configure and use official OpenAI models with CodinIT."
+description: "Configure OpenAI models including GPT-5, o3, and o4-mini with CodinIT."
---
-CodinIT supports accessing models directly through the official OpenAI API.
-
**Website:** [https://openai.com/](https://openai.com/)
-### Getting an API Key
-
-1. **Sign Up/Sign In:** Visit the [OpenAI Platform](https://platform.openai.com/). You'll need to create an account or sign in if you already have one.
-2. **Navigate to API Keys:** Once logged in, go to the [API keys section](https://platform.openai.com/api-keys) of your account.
-3. **Create a Key:** Click on "Create new secret key". It's good practice to give your key a descriptive name (e.g., "CodinIT API Key").
-4. **Copy the Key:** **Crucial:** Copy the generated API key immediately. For security reasons, OpenAI will not show it to you again. Store this key in a safe and secure location.
+## Getting an API Key
-### Supported Models
+1. Visit [OpenAI Platform](https://platform.openai.com/) and sign in
+2. Go to [API keys](https://platform.openai.com/api-keys)
+3. Click "Create new secret key" and name it (e.g., "CodinIT")
+4. Copy the key immediately; you won't see it again
-CodinIT is compatible with a variety of OpenAI models, including but not limited to:
+## Configuration
-- 'o3'
-- `o3-mini` (medium reasoning effort)
-- 'o4-mini'
-- `o3-mini-high` (high reasoning effort)
-- `o3-mini-low` (low reasoning effort)
-- `o1`
-- `o1-preview`
-- `o1-mini`
-- `GPT-5o`
-- `GPT-5o-mini`
-- 'GPT-5.1'
-- 'GPT-5.1-mini'
+1. Click the settings icon (⚙️) in CodinIT
+2. Select "OpenAI" as the API Provider
+3. Paste your API key
+4. Choose your model
-For the most current list of available models and their capabilities, please refer to the official [OpenAI Models documentation](https://platform.openai.com/models).
+## Supported Models
-### Configuration in CodinIT
+- `GPT-5o`
+- `GPT-5.1`
+- `o3`
+- `o3-mini` (medium reasoning)
+- `o4-mini`
+- `o1`
+- `o1-mini`
-1. **Open CodinIT Settings:** Click the settings gear icon (⚙️) in the CodinIT panel.
-2. **Select Provider:** Choose "OpenAI" from the "API Provider" dropdown menu.
-3. **Enter API Key:** Paste your OpenAI API key into the "OpenAI API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown list.
-5. **(Optional) Base URL:** If you need to use a proxy or a custom base URL for the OpenAI API, you can enter it here. Most users will not need to change this from the default.
+See [OpenAI Models documentation](https://platform.openai.com/models) for full details.
-### Tips and Notes
+## Notes
-- **Pricing:** Be sure to review the [OpenAI Pricing page](https://openai.com/pricing) for detailed information on the costs associated with different models.
-- **Azure OpenAI Service:** If you are looking to use the Azure OpenAI service, please note that specific documentation for Azure OpenAI with CodinIT may be found separately, or you might need to configure it as an OpenAI-compatible endpoint if such functionality is supported by CodinIT for custom configurations.
\ No newline at end of file
+- **Pricing:** See [OpenAI Pricing](https://openai.com/pricing)
+- **Azure OpenAI:** Configure as OpenAI-compatible endpoint if needed
\ No newline at end of file
diff --git a/providers/openrouter.mdx b/providers/openrouter.mdx
index 4212d2b..3ea730b 100644
--- a/providers/openrouter.mdx
+++ b/providers/openrouter.mdx
@@ -1,40 +1,43 @@
---
title: "OpenRouter"
-description: "Learn how to use OpenRouter with CodinIT to access a wide variety of language models through a single API."
+description: "Access multiple AI models through a unified API with OpenRouter."
---
-OpenRouter is an AI platform that provides access to a wide variety of language models from different providers, all through a single API. This can simplify setup and allow you to easily experiment with different models.
+OpenRouter provides access to models from multiple providers through a single API.
**Website:** [https://openrouter.ai/](https://openrouter.ai/)
-### Getting an API Key
+## Getting an API Key
-1. **Sign Up/Sign In:** Go to the [OpenRouter website](https://openrouter.ai/). Sign in with your Google or GitHub account.
-2. **Get an API Key:** Go to the [keys page](https://openrouter.ai/keys). You should see an API key listed. If not, create a new key.
-3. **Copy the Key:** Copy the API key.
+1. Go to [OpenRouter](https://openrouter.ai/) and sign in with Google or GitHub
+2. Navigate to the [keys page](https://openrouter.ai/keys)
+3. Copy your API key (or create a new one)
-### Supported Models
+## Configuration
-OpenRouter supports a large and growing number of models. CodinIT automatically fetches the list of available models. Refer to the [OpenRouter Models page](https://openrouter.ai/models) for the complete and up-to-date list.
+1. Click the settings icon (⚙️) in CodinIT
+2. Select "OpenRouter" as the API Provider
+3. Paste your API key
+4. Choose your model
-### Configuration in CodinIT
+## Supported Models
-1. **Open CodinIT Settings:** Click the settings icon (⚙️) in the CodinIT panel.
-2. **Select Provider:** Choose "OpenRouter" from the "API Provider" dropdown.
-3. **Enter API Key:** Paste your OpenRouter API key into the "OpenRouter API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown.
-5. **(Optional) Custom Base URL:** If you need to use a custom base URL for the OpenRouter API, check "Use custom base URL" and enter the URL. Leave this blank for most users.
+CodinIT automatically fetches available models. Featured models include:
-### Supported Transforms
+- **Claude Opus 4.5:** 200k context, maximum intelligence
+- **Claude Sonnet 4.5:** 1M context, highest intelligence
+- **GPT-5.2 Pro:** 400k context, latest GPT model
+- **GPT-4o:** 128k context, reliable fallback
+- **DeepSeek R1 (Free):** 163k context, available on the free tier
-OpenRouter provides an [optional "middle-out" message transform](https://openrouter.ai/features/message-transforms) to help with prompts that exceed the maximum context size of a model. You can enable it by checking the "Compress prompts and message chains to the context size" box.
+See [OpenRouter Models](https://openrouter.ai/models) for the complete list.
-### Tips and Notes
+## Features
-- **Model Selection:** OpenRouter offers a wide range of models. Experiment to find the best one for your needs.
-- **Pricing:** OpenRouter charges based on the underlying model's pricing. See the [OpenRouter Models page](https://openrouter.ai/models) for details.
-- **Prompt Caching:**
- - OpenRouter passes caching requests to underlying models that support it. Check the [OpenRouter Models page](https://openrouter.ai/models) to see which models offer caching.
- - For most models, caching should activate automatically if supported by the model itself (similar to how Requesty works).
- - **Exception for Gemini Models via OpenRouter:** Due to potential response delays sometimes observed with Google's caching mechanism when accessed via OpenRouter, a manual activation step is required _specifically for Gemini models_.
- - If using a **Gemini model** via OpenRouter, you **must manually check** the "Enable Prompt Caching" box in the provider settings to activate caching for that model. This checkbox serves as a temporary workaround. For non-Gemini models on OpenRouter, this checkbox is not necessary for caching.
\ No newline at end of file
+- **Message transforms:** Enable "Compress prompts and message chains to the context size" to handle prompts that exceed a model's context window
+- **Prompt caching:** Automatically passes caching to supported models
+- **Gemini caching:** For Gemini models specifically, manually check "Enable Prompt Caching" in the provider settings; other models cache automatically when supported
+
+## Notes
+
+- **Pricing:** Based on underlying model pricing. See [OpenRouter Models](https://openrouter.ai/models)
\ No newline at end of file
diff --git a/providers/perplexity.mdx b/providers/perplexity.mdx
index 529c8a7..b0f5ffb 100644
--- a/providers/perplexity.mdx
+++ b/providers/perplexity.mdx
@@ -1,212 +1,39 @@
---
title: Perplexity
-description: Access AI models with built-in search and reasoning capabilities
+description: Configure Perplexity's Sonar models with integrated web search for research tasks.
---
-Perplexity provides AI models with integrated search capabilities, allowing models to access real-time information and provide more accurate, up-to-date responses based on current web data.
+**Website:** [https://www.perplexity.ai/](https://www.perplexity.ai/)
-## Overview
+## Getting an API Key
-Perplexity combines advanced language models with web search functionality, enabling AI to provide responses based on the latest available information. Their Sonar models are specifically designed for research, analysis, and knowledge-intensive tasks.
+1. Go to [Perplexity AI](https://www.perplexity.ai/) and sign in
+2. Navigate to Settings > API
+3. Create a new API key
+4. Copy the key immediately
-
-
- Access current web information and data
-
-
- Specialized for analysis and research tasks
-
-
- Sources and references for information
-
-
+## Configuration
-## Available Models
+1. Click the settings icon (⚙️) in CodinIT
+2. Select "Perplexity" as the API Provider
+3. Paste your API key
+4. Choose your model
-
-
- ### Sonar Base Models
- Standard models with integrated search capabilities.
+## Supported Models
- - **Sonar**: Basic model with web search integration
- - **Sonar Pro**: Enhanced model with advanced search features
- - **Best for**: General research, current events, fact-checking
+- `sonar` - Basic model with web search
+- `sonar-pro` - Enhanced search features
+- `sonar-reasoning` - Advanced reasoning with search
+- `sonar-reasoning-pro` - Professional-grade reasoning
-
+## Features
-
- ### Sonar Reasoning Models
- Advanced models with enhanced reasoning and analysis capabilities.
+- **Web search:** Integrated real-time web search
+- **Source citations:** References for all information
+- **Real-time data:** Access to current information
+- **Research focused:** Optimized for research tasks
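Since Sonar responses ship source citations alongside the usual chat fields, you can collect them for display. A sketch with a mocked response; the top-level `citations` array is an assumption based on Perplexity's documented response shape:

```python
# Sketch: pull source citations out of a Perplexity-style chat response.
# The response dict is mocked; the "citations" field shape is an
# assumption based on Perplexity's API documentation.
def extract_citations(response: dict) -> list[str]:
    return list(response.get("citations", []))

mock_response = {
    "choices": [{"message": {"role": "assistant", "content": "..."}}],
    "citations": [
        "https://example.com/source-1",
        "https://example.com/source-2",
    ],
}
print(extract_citations(mock_response))
```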
- - **Sonar Reasoning**: Advanced reasoning with search integration
- - **Sonar Reasoning Pro**: Professional-grade reasoning and research
- - **Best for**: Complex analysis, academic research, technical problems
+## Notes
-
-
-
-## Setup Instructions
-
-
- Visit [Perplexity AI](https://www.perplexity.ai/) and create an account
- Navigate to Settings > API in your Perplexity account
- Create a new API key for model access
- Add your Perplexity API key to the provider settings
- Try queries that require current information to test search capabilities
-
-
-## Key Features
-
-
- Web Search
- Real-time Data
- Source Citations
- Research Focused
- Fact Checking
-
-
-### Platform Advantages
-
-- **Integrated Search**: Models can access current web information automatically
-- **Source Transparency**: Citations and references for all information provided
-- **Real-time Updates**: Access to the latest news, data, and developments
-- **Research Enhancement**: Better performance on research and analytical tasks
-- **Fact Verification**: Cross-referencing information for accuracy
-
-## Use Cases
-
-
-
- ### Academic Research
- Perfect for research tasks requiring current and accurate information.
-
- - Academic research and literature review
- - Current events analysis
- - Market research and trends
- - Scientific paper analysis
- - Competitive intelligence
-
-
-
-
- ### Business Applications
- Excellent for business research and decision-making.
-
- - Market analysis and reporting
- - Industry trend monitoring
- - Competitive analysis
- - Business intelligence gathering
- - Strategic planning support
-
-
-
-
- ### News & Information
- Ideal for staying updated with current events and information.
-
- - News analysis and summarization
- - Breaking news monitoring
- - Event impact assessment
- - Real-time information queries
- - Fact-checking and verification
-
-
-
-
- ### Technical Analysis
- Suitable for technical research and problem-solving.
-
- - Technical documentation research
- - API and tool research
- - Technology trend analysis
- - Development resource discovery
- - Technical problem-solving
-
-
-
-
-## Pricing Information
-
-Perplexity offers straightforward pricing based on usage:
-
-
-
-
-
-
-
-
-
- **Search Costs**: Web search functionality is included in the token pricing. No additional search fees.
-
-
-
- **Model Selection**: Use Sonar Pro for most applications. Choose Sonar Reasoning models for complex analytical tasks.
-
-
-
- **Rate Limits**: Perplexity implements rate limits based on your account tier. Monitor usage to avoid interruptions.
-
-
-## Search Integration
-
-
-
- ### Search Mechanism
- Understanding how Perplexity integrates search with AI responses.
-
- - **Automatic Search**: Models search the web when needed for current information
- - **Source Selection**: Chooses reliable sources and recent information
- - **Citation System**: Provides links and references for all information
- - **Fact Verification**: Cross-references information for accuracy
-
-
-
-
- ### Search Features
- Advanced search capabilities built into the models.
-
- - **Real-time Data**: Access to current news, prices, and statistics
- - **Comprehensive Coverage**: Searches across multiple reliable sources
- - **Quality Filtering**: Prioritizes high-quality, authoritative sources
- - **Freshness Priority**: Emphasizes recent and up-to-date information
-
-
-
-
- ### Source Transparency
- How Perplexity provides transparency in its responses.
-
- - **Source Links**: Direct links to original sources
- - **Citation Markers**: Clear indicators of cited information
- - **Source Quality**: Indicators of source reliability and recency
- - **Verification**: Cross-referenced information for accuracy
-
-
-
-
-## Best Practices
-
-
-
- ### Query Optimization
- Tips for getting the best results from Perplexity models.
-
- - **Be Specific**: Clear, specific questions get better results
- - **Include Context**: Provide background information when relevant
- - **Specify Timeframes**: Mention if you need current vs. historical information
- - **Request Citations**: Ask for sources when accuracy is critical
-
-
-
-
- ### Research Strategies
- Effective ways to use Perplexity for research tasks.
-
- - **Iterative Refinement**: Start broad, then narrow down queries
- - **Cross-Verification**: Use multiple queries to verify information
- - **Source Evaluation**: Check the quality and recency of sources
- - **Follow-up Questions**: Ask for clarification or additional details
-
-
-
+- **Pricing:** Search functionality included in token pricing
+- **Use cases:** Research, fact-checking, current events
diff --git a/providers/togetherai.mdx b/providers/togetherai.mdx
index af0ee2b..775005b 100644
--- a/providers/togetherai.mdx
+++ b/providers/togetherai.mdx
@@ -1,251 +1,41 @@
---
title: Together AI
-description: Access a wide range of open-source AI models through Together's platform
+description: Access hundreds of open-source AI models through Together's optimized platform.
---
+**Website:** [https://api.together.xyz/](https://api.together.xyz/)
-Together AI provides access to a comprehensive collection of open-source AI models from leading research organizations, offering high-performance inference with competitive pricing and extensive model variety.
+## Getting an API Key
-## Overview
+1. Go to [Together AI](https://api.together.xyz/) and sign in
+2. Navigate to Settings > API Keys
+3. Create a new API key
+4. Copy the key immediately; you won't be able to see it again
-Together AI serves as a marketplace for open-source AI models, providing access to cutting-edge models from Meta, Mistral, Google, and other leading AI research labs. Their platform offers both static model access and dynamic model discovery.
+## Configuration
-
-
- Access to hundreds of open-source models
-
-
- Optimized infrastructure for fast inference
-
-
- Latest models from top AI research labs
-
-
+1. Click the settings icon (⚙️) in CodinIT
+2. Select "Together AI" as the API Provider
+3. Paste your API key
+4. Choose your model
## Popular Models
-
-
- ### Llama Models
- Industry-leading open-source models from Meta.
+- **Llama series:** Llama 3.1 70B, Llama 3.2 90B Vision
+- **Mistral series:** Mixtral 8x7B, Devstral Small, Magistral Small
+- **Google series:** Gemma 3 (27B, 12B, 4B, 1B)
+- **Coding models:** Qwen3-Coder 480B, Arcee AI Coder
+- **Reasoning models:** Kimi K2 Thinking, DeepSeek-V3.2-Exp, Cogito V1
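
Together's API is OpenAI-compatible, so a key can be smoke-tested with a plain HTTP call. A sketch, assuming the `v1/chat/completions` endpoint and treating the model id as an illustrative example (check Together's catalog for exact ids):

```shell
# Assumed endpoint and example model id; the request only fires if a key is set.
PAYLOAD='{"model": "mistralai/Mixtral-8x7B-Instruct-v0.1", "messages": [{"role": "user", "content": "Write a haiku about open-source models."}]}'

if [ -n "${TOGETHER_API_KEY:-}" ]; then
  curl -s https://api.together.xyz/v1/chat/completions \
    -H "Authorization: Bearer $TOGETHER_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
else
  echo "TOGETHER_API_KEY not set; skipping request"
fi
```

A `GET` to `v1/models` with the same authorization header is a quick way to list the current catalog, since new models are added frequently.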
- - **Llama 3.1 70B**: Powerful general-purpose model
- - **Llama 3.2 90B Vision**: Multimodal model with vision capabilities
- - **Best for**: Advanced reasoning, creative tasks, multimodal applications
+## Features
-
+- **Model variety:** Hundreds of open-source models
+- **Research access:** Latest models from top labs
+- **High performance:** Optimized inference infrastructure
+- **Dynamic discovery:** Automatic model catalog updates
+- **Cost effective:** Flexible usage-based pricing
-
- ### Mistral AI Series
- Efficient and capable models from Mistral AI.
+## Notes
- - **Mixtral 8x7B**: High-performance mixture-of-experts model
- - **Devstral Small**: Specialized coding model
- - **Magistral Small**: Advanced reasoning model
- - **Best for**: Efficient inference, specialized tasks
-
-
-
-
- ### Google/Gemma Series
- Lightweight and capable models from Google.
-
- - **Gemma 3 27B**: Advanced general-purpose model
- - **Gemma 3 12B/4B/1B**: Range of model sizes for different needs
- - **Best for**: Balanced performance, research applications
-
-
-
-
- ### Specialized Coding
- Models optimized for programming and technical tasks.
-
- - **Qwen3-Coder 480B**: Massive coding model
- - **Arcee AI Coder**: Specialized coding assistant
- - **Best for**: Code generation, debugging, technical writing
-
-
-
-
- ### Advanced Reasoning
- Models with enhanced reasoning and thinking capabilities.
-
- - **Kimi K2 Thinking**: Advanced thinking model
- - **DeepSeek-V3.2-Exp**: Experimental reasoning model
- - **Cogito V1 Preview**: Best-in-class reasoning
- - **Best for**: Complex problem-solving, analysis
-
-
-
-
-## Setup Instructions
-
-
- Visit [Together AI](https://api.together.xyz/) and create an account
- Navigate to Settings > API Keys in your dashboard
- Create a new API key for model access
- Add your Together API key to the provider settings
- Browse available models and test different options
-
-
-## Key Features
-
-
- Open Source
- Model Variety
- High Performance
- Dynamic Discovery
- Cost Effective
-
-
-### Platform Advantages
-
-- **Extensive Catalog**: Access to hundreds of open-source models
-- **Research Access**: Latest models from top AI research institutions
-- **Performance Optimized**: Fast inference with optimized infrastructure
-- **Flexible Pricing**: Pay only for what you use
-- **Regular Updates**: New models added frequently
-
-## Use Cases
-
-
-
- ### AI Research
- Perfect for researchers and developers exploring different models.
-
- - Model comparison and evaluation
- - Research prototyping
- - Algorithm testing
- - Performance benchmarking
-
-
-
-
- ### Software Development
- Excellent for coding and technical development work.
-
- - Code generation and completion
- - Technical documentation
- - API development
- - Debugging assistance
-
-
-
-
- ### Creative Applications
- Suitable for content creation and creative tasks.
-
- - Creative writing and ideation
- - Content generation
- - Marketing copy
- - Educational materials
-
-
-
-
- ### Enterprise Use
- Reliable for business and productivity applications.
-
- - Business analysis
- - Customer service automation
- - Process optimization
- - Data analysis
-
-
-
-
-## Pricing Information
-
-Together AI offers flexible pricing based on model size and usage:
-
-
-
-
-
-
-
-
-**Dynamic Pricing**: Actual prices may vary based on current model popularity and demand.
-
-
- **Model Selection**: Start with smaller models for testing, then scale up to larger models for production use.
-
-
-
- **Model Availability**: Some models may have limited availability or higher costs during peak usage.
-
-
-## Model Management
-
-
-
- ### Model Catalog
- How Together AI provides access to available models.
-
- - **API Integration**: Automatic model discovery through API
- - **Real-time Updates**: New models added as they become available
- - **Pricing Information**: Cost details included in model listings
- - **Performance Metrics**: Context window and capability information
-
-
-
-
- ### Model Organization
- Understanding the different types of models available.
-
- - **Text Generation**: General-purpose language models
- - **Code Models**: Specialized for programming tasks
- - **Vision Models**: Multimodal models with image understanding
- - **Reasoning Models**: Enhanced logical reasoning capabilities
- - **Experimental Models**: Cutting-edge research models
-
-
-
-
- ### Optimization Features
- Features to improve model performance and cost efficiency.
-
- - **Context Management**: Efficient handling of large context windows
- - **Caching**: Request caching for repeated queries
- - **Load Balancing**: Automatic distribution across available resources
- - **Cost Optimization**: Suggestions for cost-effective model selection
-
-
-
-
-## Best Practices
-
-
-
- ### Choosing the Right Model
- Guidelines for selecting appropriate models for your use case.
-
- - **Task Matching**: Choose models specialized for your specific task
- - **Cost Consideration**: Balance performance needs with budget constraints
- - **Context Requirements**: Ensure model context window meets your needs
- - **Testing Phase**: Test multiple models before committing to one
-
-
-
-
- ### Optimizing Performance
- Tips for getting the best performance from Together AI models.
-
- - **Prompt Engineering**: Craft clear, specific prompts
- - **Context Management**: Keep context focused and relevant
- - **Batch Processing**: Group similar requests when possible
- - **Caching Strategy**: Cache frequent queries to reduce costs
-
-
-
-
- ### Managing Costs
- Strategies for controlling and optimizing usage costs.
-
- - **Usage Monitoring**: Track usage patterns and costs
- - **Model Switching**: Use smaller models for simpler tasks
- - **Caching**: Implement caching to reduce API calls
- - **Bulk Operations**: Combine multiple operations when feasible
-
-
-
+- **Pricing:** Based on model size and usage
+- **Model updates:** New models added frequently
diff --git a/providers/xai-grok.mdx b/providers/xai-grok.mdx
index c35cf86..553d7cb 100644
--- a/providers/xai-grok.mdx
+++ b/providers/xai-grok.mdx
@@ -1,85 +1,51 @@
---
title: "xAI (Grok)"
-description: "Learn how to configure and use xAI's Grok models with CodinIT, including API key setup, supported models, and reasoning capabilities."
+description: "Configure xAI's Grok models with large context windows and reasoning capabilities."
---
-xAI is the company behind Grok, a large language model known for its conversational abilities and large context window. Grok models are designed to provide helpful, informative, and contextually relevant responses.
-
**Website:** [https://x.ai/](https://x.ai/)
-### Getting an API Key
-
-1. **Sign Up/Sign In:** Go to the [xAI Console](https://console.x.ai/). Create an account or sign in.
-2. **Navigate to API Keys:** Go to the API keys section in your dashboard.
-3. **Create a Key:** Click to create a new API key. Give your key a descriptive name (e.g., "CodinIT").
-4. **Copy the Key:** **Important:** Copy the API key _immediately_. You will not be able to see it again. Store it securely.
-
-### Supported Models
-
-CodinIT supports the following xAI Grok models:
-
-#### Grok-3 Models
-
-- `grok-3-beta` (Default) - xAI's Grok-3 beta model with 131K context window
-- `grok-3-fast-beta` - xAI's Grok-3 fast beta model with 131K context window
-- `grok-3-mini-beta` - xAI's Grok-3 mini beta model with 131K context window
-- `grok-3-mini-fast-beta` - xAI's Grok-3 mini fast beta model with 131K context window
-
-#### Grok-2 Models
-
-- `grok-2-latest` - xAI's Grok-2 model - latest version with 131K context window
-- `grok-2` - xAI's Grok-2 model with 131K context window
-- `grok-2-1212` - xAI's Grok-2 model (version 1212) with 131K context window
-
-#### Grok Vision Models
-
-- `grok-2-vision-latest` - xAI's Grok-2 Vision model - latest version with image support and 32K context window
-- `grok-2-vision` - xAI's Grok-2 Vision model with image support and 32K context window
-- `grok-2-vision-1212` - xAI's Grok-2 Vision model (version 1212) with image support and 32K context window
-- `grok-vision-beta` - xAI's Grok Vision Beta model with image support and 8K context window
-
-#### Legacy Models
-
-- `grok-beta` - xAI's Grok Beta model (legacy) with 131K context window
-
-### Configuration in CodinIT
-
-1. **Open CodinIT Settings:** Click the settings icon (⚙️) in the CodinIT panel.
-2. **Select Provider:** Choose "xAI" from the "API Provider" dropdown.
-3. **Enter API Key:** Paste your xAI API key into the "xAI API Key" field.
-4. **Select Model:** Choose your desired Grok model from the "Model" dropdown.
-
-### Reasoning Capabilities
-
-Grok 3 Mini models feature specialized reasoning capabilities, allowing them to "think before responding" - particularly useful for complex problem-solving tasks.
-
-#### Reasoning-Enabled Models
+## Getting an API Key
-Reasoning is only supported by:
+1. Go to [xAI Console](https://console.x.ai/) and sign in
+2. Navigate to API Keys section
+3. Create a new API key and name it (e.g., "CodinIT")
+4. Copy the key immediately - you won't see it again
-- `grok-3-mini-beta`
-- `grok-3-mini-fast-beta`
+## Configuration
-The Grok 3 models `grok-3-beta` and `grok-3-fast-beta` do not support reasoning.
+1. Click the settings icon (⚙️) in CodinIT
+2. Select "xAI" as the API Provider
+3. Paste your API key
+4. Choose your model
-#### Controlling Reasoning Effort
+## Supported Models
-When using reasoning-enabled models, you can control how hard the model thinks with the `reasoning_effort` parameter:
+**Grok-3 Series (131K context):**
+- `grok-3-beta` (Default)
+- `grok-3-fast-beta`
+- `grok-3-mini-beta` (Reasoning enabled)
+- `grok-3-mini-fast-beta` (Reasoning enabled)
-- `low`: Minimal thinking time, using fewer tokens for quick responses
-- `high`: Maximum thinking time, leveraging more tokens for complex problems
+**Grok-2 Series (131K context):**
+- `grok-2-latest`
+- `grok-2`
+- `grok-2-1212`
-Choose `low` for simple queries that should complete quickly, and `high` for harder problems where response latency is less important.
+**Vision Models:**
+- `grok-2-vision-latest` (32K context)
+- `grok-2-vision` (32K context)
+- `grok-vision-beta` (8K context)
-#### Key Features
+## Reasoning Capabilities
-- **Step-by-Step Problem Solving**: The model thinks through problems methodically before delivering an answer
-- **Math & Quantitative Strength**: Excels at numerical challenges and logic puzzles
-- **Reasoning Trace Access**: The model's thinking process is available via the `reasoning_content` field in the response completion object
+Available on `grok-3-mini-beta` and `grok-3-mini-fast-beta`:
+- **Step-by-step problem solving:** Methodical thinking process
+- **Reasoning effort control:** Set `low` for quick responses or `high` for complex problems
+- **Reasoning trace access:** View the model's thinking process
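
When calling the API directly, the effort level is passed as a request parameter. A hedged sketch, assuming xAI's OpenAI-compatible endpoint and the `reasoning_effort` field described above:

```shell
# reasoning_effort is only honored by the grok-3-mini models;
# the request only fires if a key is set.
PAYLOAD='{"model": "grok-3-mini-beta", "reasoning_effort": "high", "messages": [{"role": "user", "content": "Is 1009 prime? Show your reasoning."}]}'

if [ -n "${XAI_API_KEY:-}" ]; then
  curl -s https://api.x.ai/v1/chat/completions \
    -H "Authorization: Bearer $XAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
else
  echo "XAI_API_KEY not set; skipping request"
fi
```

The thinking trace comes back in the response's `reasoning_content` field, separate from the final answer.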
-### Tips and Notes
+## Notes
-- **Context Window:** Most Grok models feature large context windows (up to 131K tokens), allowing you to include substantial amounts of code and context in your prompts.
-- **Vision Capabilities:** Select vision-enabled models (`grok-2-vision-latest`, `grok-2-vision`, etc.) when you need to process or analyze images.
-- **Pricing:** Pricing varies by model, with input costs ranging from $0.3 to $5.0 per million tokens and output costs from $0.5 to $25.0 per million tokens. Refer to the xAI documentation for the most current pricing information.
-- **Performance Tradeoffs:** "Fast" variants typically offer quicker response times but may have higher costs, while "mini" variants are more economical but may have reduced capabilities.
\ No newline at end of file
+- **Context window:** Up to 131K tokens for most models
+- **Vision support:** Available on select models
+- **Pricing:** Varies by model, see xAI documentation
diff --git a/quickstart.mdx b/quickstart.mdx
index 04fd264..d468df3 100644
--- a/quickstart.mdx
+++ b/quickstart.mdx
@@ -1,68 +1,73 @@
---
title: "Quickstart"
-description: "Get started with CodinIT's AI-powered development platform in minutes"
+description: "Get started with the CodinIT AI-powered IDE in minutes. Install the AI coding assistant, connect LLM providers like Claude and OpenAI, and start building with AI code generation."
---
-## Start Building with AI in Three Steps
+
-Launch your AI-powered development environment and create your first project with intelligent assistance.
+## Start building with AI code generation in three steps
+
+Launch your AI-powered IDE and create full-stack applications with intelligent AI coding assistance, automated code completion, and LLM-powered development tools.
Step 1: Set up your development environment
- Download and install CodinIT for your platform:
-
**Desktop App Version (Recommended)**
- - Download from [codinit.dev/download](https://codinit.dev/download)
- - Available for Windows, macOS, and Linux
- - Full-featured native application
- or:
+ Download the latest prebuilt release for macOS, Windows, and Linux:
+ - Visit [GitHub Releases](https://github.com/codinit-dev/codinit-dev/releases/latest)
+ - Download the installer for your platform
+ - Run the installer and follow the setup wizard
+ - Full-featured native application with automatic updates
+
+ **Alternative: Run from Source**
+
+ Clone the repository and install dependencies:
```bash
- git clone https://github.com/Gerome-Elassaad/codinit-app.git
- cd codinit-app
+ git clone https://github.com/codinit-dev/codinit-dev.git
+ cd codinit-dev
+ ```
+
+ Install dependencies using your preferred package manager:
+
+ ```bash pnpm
+ pnpm install
```
- **Web Template (for devs)**
- - Visit [Template](https://codingit.vercel.app)
- - Works in any modern browser
- - Syncs across devices
+ ```bash npm
+ npm install
+ ```
+
+ Configure your environment by creating a `.env` file and adding your preferred AI provider keys:
```bash
- git clone https://github.com/Gerome-Elassaad/codingit.git
- cd codingit
+   # Copy the example env file, then add your AI provider API keys
+   cp .env.example .env
+   # You can mix and match multiple providers
+ ```
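
For illustration, a `.env` might look like the sketch below. The variable names are hypothetical, so confirm the exact keys against the repository's `.env.example`:

```shell
# Hypothetical key names; check .env.example for the ones the app actually reads
OPENAI_API_KEY=sk-your-openai-key
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key
GOOGLE_AI_API_KEY=your-gemini-key
```

Only the providers you plan to use need a key; the rest can be omitted.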
+
+ Run the development server:
+
+ ```bash pnpm
+ pnpm run dev
+ ```
+
+ ```bash npm
+ npm run dev
```
+  The application will be available at `http://localhost:5173`.
- - Install dependencies using your preferred package manager:
-
-
-
- ```bash
- pnpm install
- cp .env.example .env.local
- pnpm run dev
- ```
-
-
- ```bash
- npm install
- cp .env.example .env.local
- npm run dev
- ```
-
-
- ```bash
- yarn install
- cp .env.example .env.local
- yarn dev
- ```
-
-
-
- Both versions offer Different features - choose based on your preference for native or web applications.
+ Choose the desktop app for a native experience or run from source for development and customization.
1. Open the CodinIT application
@@ -73,105 +78,105 @@ Step 1: Set up your development environment
-### Step 2: Connect your AI providers
+### Step 2: Connect AI model providers (Claude, GPT-4, Gemini)
-
- Sign up for AI provider accounts and obtain API keys:
+
+ Sign up for LLM provider accounts and obtain API keys for AI-powered coding:
- **Essential Providers (Free tiers available)**
- - [OpenAI](https://platform.openai.com) - GPT-5o, GPT-5
- - [Anthropic](https://console.anthropic.com) - Claude 4.5 Sonnet
- - [Google AI](https://aistudio.google.com) - Gemini 2.5 Pro/Flash
+ **Essential AI coding models (Free tiers available)**
+ - [OpenAI](https://platform.openai.com) - GPT-4o, GPT-4 for AI code generation
+ - [Anthropic](https://console.anthropic.com) - Claude 3.5 Sonnet AI coding assistant
+ - [Google AI](https://aistudio.google.com) - Gemini 2.0 Flash for intelligent code completion
- **Optional Providers**
- - [DeepSeek](https://platform.deepseek.com) - Cost-effective coding models
- - [Groq](https://console.groq.com) - Fast inference
- - [Perplexity](https://www.perplexity.ai) - Research-focused models
+ **Optional AI development providers**
+ - [DeepSeek](https://platform.deepseek.com) - Cost-effective AI coding models
+ - [Groq](https://console.groq.com) - Fast LLM inference for real-time coding
+ - [Perplexity](https://www.perplexity.ai) - Research-focused AI models
Keep your API keys secure and never commit them to version control.
-
- 1. Click **Settings** in the top navigation
- 2. Navigate to **AI Providers** section
- 3. Add your API keys for each provider
- 4. Test connections and select default models
- 5. Start using AI assistance immediately!
-
- You can use multiple providers simultaneously and switch between them as needed.
+
+ 1. Click **Settings** in the AI coding assistant interface
+ 2. Navigate to **AI Providers** section for LLM configuration
+ 3. Add your API keys for Claude, GPT-4, or other AI models
+ 4. Test AI connections and select default coding models
+ 5. Start using AI code generation and intelligent autocomplete immediately!
+
+ Use multiple AI providers simultaneously and switch between LLMs for optimal code generation results.
-### Step 3: Create your first AI-powered project
+### Step 3: Create your first AI-generated project
-
- Create a new project with AI assistance:
+
+ Create full-stack applications with AI code generation:
- 1. Click **New Project** in the workspace
- 2. Choose your tech stack (React, Vue, Svelte, Node.js, etc.)
- 3. Select a template or start from scratch
- 4. AI helps generate boilerplate code and structure
+ 1. Click **New Project** in the AI development workspace
+ 2. Choose your tech stack (React, Vue, Svelte, Node.js, Next.js, etc.)
+ 3. Select an AI-optimized template or start from scratch
+ 4. AI coding assistant generates boilerplate code and project structure
- Use voice commands or chat to describe what you want to build - AI will help generate the code!
+ Use natural language prompts or voice commands to describe your app - AI will generate production-ready code!
-
- Explore CodinIT's AI capabilities:
+
+ Explore CodinIT's AI-powered development capabilities:
- - **Smart Code Generation**: Describe features in natural language
- - **Context-Aware Suggestions**: Get intelligent code completions
- - **Error Detection & Fixes**: Automatic bug detection and solutions
- - **Documentation Generation**: Auto-generate README and API docs
- - **Testing Assistance**: Create unit and integration tests
+ - **AI code generation**: Describe features in natural language for instant code creation
+ - **Intelligent autocomplete**: Get context-aware AI code suggestions
+ - **AI debugging & fixes**: Automatic bug detection and AI-powered solutions
+ - **AI documentation**: Auto-generate README files and API documentation
+ - **AI test generation**: Create unit tests and integration tests with AI assistance
- Use Ctrl+Space (or Cmd+Space on Mac) to trigger AI completions anywhere in your code.
+ Use Ctrl+Space (or Cmd+Space on Mac) to trigger AI code completions anywhere in your development environment.
-## Next Steps
+## Next steps for AI-powered development
-Now that you're set up with CodinIT, explore these powerful features:
+Now that you're set up with CodinIT AI IDE, explore these AI coding features:
-
- Learn about all 19+ supported AI providers and choose the best ones for your needs.
+
+ Learn about 19+ supported LLM providers including Claude, GPT-4, Gemini for AI code generation.
-
- Master CodinIT's integrated development environment and productivity features.
+
+ Master the AI-powered IDE with intelligent code completion and refactoring tools.
-
- Deploy to Vercel, Netlify, or any platform with one-click deployment.
+
+ Deploy AI-generated applications to Vercel, Netlify, and GitHub Pages with one-click deployment.
-
- Connect your repositories and use advanced Git features with AI assistance.
+
+ Use AI-powered Git features for intelligent commit messages and code reviews.
-
- Build full-stack apps with built-in database, auth, and real-time features.
+
+ Build AI-powered full-stack apps with Supabase database, auth, and real-time features.
-
- Extend CodinIT's capabilities with your favourite tools in one place.
+
+ Master prompt engineering techniques for better AI code generation and LLM interactions.
-
- Discover all the powerful features CodinIT offers for AI-powered development.
+
+ Discover AI pair programming, intelligent refactoring, and automated testing features.
-
- Customize your development experience with advanced settings and preferences.
+
+ Customize your AI development experience with advanced LLM settings and preferences.
- **Need help?** Check out our [troubleshooting guide](/support/troubleshooting) for common issues, or explore the [features overview](/features/overview) to discover everything CodinIT can do.
+ **Need help with AI coding?** Check out our [AI troubleshooting guide](/support/troubleshooting) for common LLM issues, or explore the [AI features overview](/features/overview) to discover all AI-powered development capabilities.
diff --git a/running-models-locally/lm-studio.mdx b/running-models-locally/lm-studio.mdx
deleted file mode 100644
index 2bf4c1a..0000000
--- a/running-models-locally/lm-studio.mdx
+++ /dev/null
@@ -1,79 +0,0 @@
----
-title: "LM Studio"
-description: "A quick guide to setting up LM Studio for local AI model execution with CodinIT."
----
-
-## Setting Up LM Studio with CodinIT
-
-Run AI models locally using LM Studio with CodinIT.
-
-### Prerequisites
-
-* Windows, macOS, or Linux computer with AVX2 support
-
-### Setup Steps
-
-#### 1. Install LM Studio
-
-* Visit [lmstudio.ai](https://lmstudio.ai)
-* Download and install for your operating system
-
-
-
-#### 2. Launch LM Studio
-
-* Open the installed application
-* You'll see four tabs on the left: **Chat**, **Developer** (where you will start the server), **My Models** (where your downloaded models are stored), **Discover** (add new models)
-
-#### 3. Download a Model
-
-* Browse the "Discover" page
-* Select and download your preferred model
-* Wait for download to complete
-
-#### 4. Start the Server
-
-* Navigate to the "Developer" tab
-* Toggle the server switch to "Running"
-* Note: The server will run at `http://localhost:51732`
-
-
-
-### Recommended Model and Settings
-
-For the best experience with CodinIT, use **Qwen3 Coder 30B A3B Instruct**. This model delivers strong coding performance and reliable tool use.
-
-#### Critical Settings
-
-After loading your model in the Developer tab, configure these settings:
-
-1. **Context Length**: Set to 262,144 (the model's maximum)
-2. **KV Cache Quantization**: Leave unchecked (critical for consistent performance)
-3. **Flash Attention**: Enable if available (improves performance)
-
-#### Quantization Guide
-
-Choose quantization based on your RAM:
-
-* **32GB RAM**: Use 4-bit quantization (\~17GB download)
-* **64GB RAM**: Use 8-bit quantization (\~32GB download) for better quality
-* **128GB+ RAM**: Consider full precision or larger models
-
-#### Model Format
-
-* **Mac (Apple Silicon)**: Use MLX format for optimized performance
-* **Windows/Linux**: Use GGUF format
-
-### Important Notes
-
-* Start LM Studio before using with CodinIT
-* Keep LM Studio running in background
-* First model download may take several minutes depending on size
-* Models are stored locally after download
-
-### Troubleshooting
-
-1. If CodinIT can't connect to LM Studio:
-2. Verify LM Studio server is running (check Developer tab)
-3. Ensure a model is loaded
-4. Check your system meets hardware requirements
\ No newline at end of file
diff --git a/running-models-locally/local-model-setup.mdx b/running-models-locally/local-model-setup.mdx
index be7c3df..fdab17d 100644
--- a/running-models-locally/local-model-setup.mdx
+++ b/running-models-locally/local-model-setup.mdx
@@ -1,6 +1,6 @@
---
title: "Local models setup"
-description: "Run AI models locally on your own hardware for privacy, cost savings, and offline development"
+description: "Run AI models locally on your own hardware for enhanced privacy, zero API costs, offline development, and complete data control."
---
## Running Models Locally with CodinIT
@@ -12,7 +12,7 @@ Local models have reached a turning point where they're now practical for real d
## Quick Start
1. **Check your hardware** - 32GB+ RAM minimum
-2. **Choose your runtime** - [LM Studio](/running-models-locally/lm-studio) or [Ollama](/providers/ollama)
+2. **Choose your runtime** - [LM Studio](/providers/lmstudio) or [Ollama](/providers/ollama)
3. **Download Qwen3 Coder 30B** - The recommended model
4. **Configure settings** - Enable compact prompts, set max context
5. **Start coding** - Completely offline
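
Once a runtime from step 2 is running, you can confirm it is serving before pointing CodinIT at it. A small sketch; the ports are assumed defaults (Ollama commonly serves on 11434, LM Studio on 1234), so adjust to your setup:

```shell
# Probe a local OpenAI-compatible endpoint; prints whether anything answered.
check_local_llm() {
  port="$1"
  if curl -s --max-time 2 "http://localhost:${port}/v1/models" >/dev/null 2>&1; then
    echo "server responding on port ${port}"
  else
    echo "no server on port ${port}"
  fi
}

check_local_llm 11434   # Ollama default (assumed)
check_local_llm 1234    # LM Studio default (assumed)
```

If neither port answers, start the runtime's server first (the Developer tab in LM Studio, or `ollama serve`).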
@@ -60,7 +60,7 @@ LM Studio
* **Pros**: User-friendly GUI, easy model management, built-in server
* **Cons**: Memory overhead from UI, limited to single model at a time
* **Best for**: Desktop users who want simplicity
-* [Setup Guide →](/running-models-locally/lm-studio)
+* [Setup Guide →](/providers/lmstudio)
### Ollama
@@ -222,14 +222,14 @@ Note: These may require additional configuration and testing.
## Community & Support
-* **GitHub**: [Report issues](https://github.com/gerome-elassaad/codinit-app/issues)
+* **GitHub**: [Report issues](https://github.com/codinit-dev/codinit-dev/issues)
## Next Steps
Ready to get started? Choose your path:
-
+
User-friendly GUI approach with detailed configuration guide
diff --git a/support/frequently-asked-questions.mdx b/support/frequently-asked-questions.mdx
index 6ab4fa8..3cfd4cb 100644
--- a/support/frequently-asked-questions.mdx
+++ b/support/frequently-asked-questions.mdx
@@ -1,27 +1,27 @@
---
-title: "Frequently Asked Questions"
-description: "Common questions about CodinIT, models, features, and troubleshooting"
+title: "FAQ"
+description: "Find answers to common questions about CodinIT AI IDE setup, LLM model selection, AI code generation features, integrations, and AI development troubleshooting."
---
-Get answers to the most common questions about CodinIT's AI-powered development platform.
+Get answers to the most common questions about CodinIT's AI-powered development platform with LLM integration and intelligent code generation.
-
- Can't find your answer? Check our troubleshooting guide or contact support for personalized assistance.
+
+ Can't find your answer? Check our AI troubleshooting guide or contact support for personalized LLM assistance.
-## Models and Setup
+## AI models and setup for code generation
-
- For the best experience with CodinIT.dev, we recommend using the following models from our 19 supported providers:
+
+ For the best AI-powered development experience with CodinIT, we recommend using the following LLMs from our 19 supported AI providers:
- **Top Recommended Models:**
+ **Top recommended AI coding models:**
- - **Claude 4.5 Sonnet** (Anthropic): Best overall coder, excellent for complex applications
- - **GPT-5o** (OpenAI): Strong alternative with great performance across all use cases
- - **Claude 4 Opus** (Anthropic): Latest flagship model with enhanced capabilities
- - **Gemini 2.5 Flash** (Google): Exceptional speed for rapid development
- - **DeepSeekCoder V3** (DeepSeek): Best open-source model for coding tasks
+ - **Claude 3.5 Sonnet** (Anthropic): Best overall AI coder, excellent for complex application code generation
+ - **GPT-4o** (OpenAI): Strong alternative LLM with great performance across all AI coding use cases
+ - **Claude 4 Opus** (Anthropic): Latest flagship AI model with enhanced code generation capabilities
+ - **Gemini 2.0 Flash** (Google): Exceptional speed for rapid AI-powered development
+ - **DeepSeekCoder V3** (DeepSeek): Best open-source LLM for AI coding tasks
**Self-Hosting Options:**
@@ -475,6 +475,6 @@ Get answers to the most common questions about CodinIT's AI-powered development
[Open an Issue For Web Version](https://github.com/Gerome-Elassaad/CodingIT/issues/) in our GitHub Repository
- [Open an Issue For Desktop Version](https://github.com/Gerome-Elassaad/codinit-app/issues/) in our GitHub Repository
+ [Open an Issue For Desktop Version](https://github.com/codinit-dev/codinit-dev/issues/) in our GitHub Repository
-
\ No newline at end of file
+
diff --git a/support/troubleshooting.mdx b/support/troubleshooting.mdx
index ce6e7ab..f782b85 100644
--- a/support/troubleshooting.mdx
+++ b/support/troubleshooting.mdx
@@ -1,30 +1,30 @@
---
-title: "Troubleshooting Guide"
-description: "Solve common issues with CodinIT, AI providers, and development environment"
+title: "Troubleshooting"
+description: "Solve common issues with CodinIT AI IDE, LLM providers, code generation errors, and AI-powered development environment problems."
---
-
- Can't find your issue? Contact our support team for personalized assistance.
+
+ Can't find your issue? Contact our support team for personalized help with AI and LLM troubleshooting.
-## AI Provider Issues
+## LLM provider issues for AI coding
-
- **Problem**: Getting "Invalid API key" or "Authentication failed" errors.
+
+ **Problem**: Getting "Invalid API key" or "Authentication failed" errors with LLM providers for AI code generation.
**Solutions**:
- 1. **Check API key format**: Ensure you're copying the complete key from your provider dashboard
- 2. **Verify key permissions**: Some providers require specific permissions for API access
- 3. **Check key expiration**: Some keys have expiration dates or usage limits
- 4. **Test in provider dashboard**: Verify your key works directly with the provider
+ 1. **Check LLM API key format**: Ensure you're copying the complete key from your AI provider dashboard
+ 2. **Verify AI key permissions**: Some LLM providers require specific permissions for code generation API access
+ 3. **Check key expiration**: Some AI API keys have expiration dates or usage limits
+ 4. **Test in provider dashboard**: Verify your LLM key works directly with the AI provider
- **Provider-specific tips**:
+ **AI provider-specific tips**:
- - **OpenAI**: Keys start with `sk-` and are found in [API Keys section](https://platform.openai.com/api-keys)
- - **Anthropic**: Keys start with `sk-ant-` and require console access
- - **Google**: Keys are 39 characters and need Gemini API enabled
+ - **OpenAI GPT models**: Keys start with `sk-` and are found in [API Keys section](https://platform.openai.com/api-keys)
+ - **Anthropic Claude**: Keys start with `sk-ant-` and require console access for AI coding
+ - **Google Gemini**: Keys are 39 characters and need Gemini API enabled for code generation
Never share your API keys publicly or commit them to version control.
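As an illustrative sanity check (not a CodinIT feature), the key formats in the tips above can be verified with a few lines of code before pasting a key into settings. The prefixes and the 39-character Google length are taken directly from the tips; a key can match the pattern and still be expired or lack permissions.

```python
# Quick format checks for API keys, based on the provider tips above.
# Illustrative only -- matching the pattern does not guarantee the key is valid.

def looks_like_openai_key(key: str) -> bool:
    # OpenAI keys start with "sk-"
    return key.startswith("sk-")

def looks_like_anthropic_key(key: str) -> bool:
    # Anthropic keys start with "sk-ant-"
    return key.startswith("sk-ant-")

def looks_like_google_key(key: str) -> bool:
    # Google Gemini keys are 39 characters long
    return len(key) == 39

print(looks_like_anthropic_key("sk-ant-example"))  # True
```

A failed format check usually means the key was truncated while copying from the provider dashboard.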
@@ -60,6 +60,43 @@ description: "Solve common issues with CodinIT, AI providers, and development en
Network issues typically resolve themselves - try again in a few minutes.
+
+
+ **Problem**: Ollama models don't show up in the model selector dropdown.
+
+ **Solutions**:
+
+ 1. **Install models first**: Ensure you've installed Ollama models on your device before using them
+ 2. **Configure base URL**: Go to Settings → Local Providers and set the Ollama base URL (e.g., `http://127.0.0.1:11434`)
+ 3. **Enable Ollama**: Make sure Ollama provider is enabled in settings
+ 4. **Check Ollama service**: Verify Ollama is running on your system
+ 5. **Refresh model list**: Return to chat and open the provider/model dropdown to see available models
+
+ **Docker users**:
+ - Use `host.docker.internal` instead of `localhost` or `127.0.0.1` for the base URL
+ - Ensure Docker has network access to your host machine
+
+
+ After configuring Ollama settings, models should appear automatically in the provider dropdown.
+
+
+
+
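To confirm the Ollama service is actually reachable at the configured base URL, a short script like the following can help. This is a sketch, not part of CodinIT: it builds the base URL per the Docker note above and queries Ollama's standard `/api/tags` endpoint, which lists locally installed models.

```python
import json
import urllib.request

def ollama_base_url(in_docker: bool = False) -> str:
    # From inside a Docker container, localhost points at the container itself,
    # so use host.docker.internal to reach Ollama running on the host machine.
    host = "host.docker.internal" if in_docker else "127.0.0.1"
    return f"http://{host}:11434"

def list_ollama_models(base_url: str) -> list[str]:
    # /api/tags is Ollama's endpoint for listing locally installed models.
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        data = json.load(resp)
    return [model["name"] for model in data.get("models", [])]

if __name__ == "__main__":
    url = ollama_base_url(in_docker=False)
    try:
        print(list_ollama_models(url))
    except OSError:
        print(f"Ollama not reachable at {url} -- is the service running?")
```

If this prints an empty list, Ollama is running but no models are installed yet; if it prints the "not reachable" message, check the service and the base URL in Settings → Local Providers.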
+## macOS-Specific Issues
+
+
+ **Problem**: The application fails to launch or shows a security warning.
+
+ **Solutions**:
+ 1. After moving the app to your **Applications** folder (or another location of your choice), open **Terminal**.
+ 2. Enter the following command to remove the security quarantine:
+
+ `xattr -cr /path/to/your/codinit...`
+
+
+ This happens because the app is not notarized by Apple. We are working on getting it notarized for future releases.
+
+
## Development Environment Issues
@@ -289,7 +326,7 @@ description: "Solve common issues with CodinIT, AI providers, and development en
Browse our complete [documentation](/features/overview) for detailed guides and references.
- Check [existing issues](https://github.com/codinit-dev/codinit-app/issues) or create new ones for bugs and features.
+ Check [existing issues](https://github.com/codinit-dev/codinit-dev/issues) or create new ones for bugs and features.
@@ -319,4 +356,4 @@ description: "Solve common issues with CodinIT, AI providers, and development en
**Still having trouble?** Don't hesitate to reach out - our support team is here to help you succeed with CodinIT!
-
\ No newline at end of file
+