
SynapseKit

Ship LLM apps faster.

Production-grade LLM framework for Python. Async-native RAG, agents, and graph workflows. 2 dependencies. Zero magic.




Documentation · Quickstart · API Reference · Roadmap · Contributing




Why SynapseKit?

The problem: Existing LLM frameworks are heavy — 50+ dependencies, hidden chains, magic callbacks, YAML configs. Hard to debug, harder to ship.

The fix: SynapseKit gives you everything you need to build production LLM apps with just 2 core dependencies and plain Python you can actually read.

pip install "synapsekit[openai]"

from synapsekit import RAG

rag = RAG(model="gpt-4o-mini", api_key="sk-...")
rag.add("Your document text here")
print(rag.ask_sync("What is the main topic?"))

3 lines. That's it.




What's inside

RAG Pipelines

5 text splitters • 10+ loaders
BM25 reranking • conversation memory
streaming retrieval-augmented generation
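Curious what BM25 reranking actually does? Here is a self-contained sketch of the textbook Okapi BM25 scoring formula in plain Python, for illustration only, not SynapseKit's internal implementation:

```python
# Toy BM25 scorer: ranks tokenized documents against a tokenized query.
# This is the standard Okapi BM25 formula, shown for intuition only.
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Return one BM25 score per document (docs are lists of tokens)."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)                 # term frequency in this doc
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            num = tf[term] * (k1 + 1)
            den = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * num / den
        scores.append(score)
    return scores

docs = [
    "the cat sat on the mat".split(),
    "dogs chase cats in the park".split(),
    "graph workflows route tokens".split(),
]
print(bm25_scores("cat mat".split(), docs))
```

In a RAG pipeline, scores like these reorder the retriever's candidate chunks before they reach the LLM context window.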

Agents & Multi-Agent

ReAct • native function calling
Supervisor/Worker • Handoff • Crew
32 built-in tools • fully extensible

Graph Workflows

parallel execution • conditional routing
cycle support • checkpointing
SSE/WS streaming • human-in-the-loop

13 LLM Providers

OpenAI • Anthropic • Gemini
Mistral • Ollama • Cohere
Bedrock • Groq • DeepSeek • more

5 Vector Stores

InMemory • ChromaDB • FAISS
Qdrant • Pinecone
all behind VectorStore ABC
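To show what "all behind a VectorStore ABC" means in practice, here is a minimal sketch of the pattern with a toy in-memory cosine-similarity backend; the method names are illustrative, not SynapseKit's actual interface:

```python
# Sketch of an abstract vector-store interface plus one in-memory
# implementation. Names here are hypothetical, for illustration only.
from abc import ABC, abstractmethod
import math

class VectorStore(ABC):
    @abstractmethod
    def add(self, text: str, embedding: list[float]) -> None: ...

    @abstractmethod
    def search(self, embedding: list[float], k: int = 3) -> list[str]: ...

class InMemoryStore(VectorStore):
    def __init__(self):
        self._items: list[tuple[str, list[float]]] = []

    def add(self, text, embedding):
        self._items.append((text, embedding))

    def search(self, embedding, k=3):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0
        # Rank stored items by cosine similarity to the query embedding.
        ranked = sorted(self._items, key=lambda it: cosine(it[1], embedding),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

store = InMemoryStore()
store.add("apples", [1.0, 0.0])
store.add("oranges", [0.0, 1.0])
print(store.search([0.9, 0.1], k=1))
```

Because every backend satisfies the same interface, swapping InMemory for ChromaDB, FAISS, Qdrant, or Pinecone is a one-line change.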

Production Ready

Evaluation • Observability • Guardrails
MCP • A2A • Multimodal
1011 tests • Apache 2.0 licensed



See it in action

Streaming RAG

from synapsekit import RAG

rag = RAG(model="gpt-4o-mini", api_key="sk-...")
rag.add("Your document text here")

async for token in rag.stream("What is the main topic?"):
    print(token, end="", flush=True)

Agent with tools

from synapsekit import FunctionCallingAgent
from synapsekit.agents.tools import CalculatorTool

agent = FunctionCallingAgent(
    llm=llm,  # any configured LLM client instance
    tools=[CalculatorTool()]
)
result = await agent.run("What is 42 * 17?")

Graph workflow

from synapsekit import StateGraph

graph = StateGraph()
graph.add_node("fetch", fetch_data)
graph.add_node("process", process_data)
graph.add_edge("fetch", "process")
graph.set_entry("fetch")
graph.set_finish("process")

app = graph.compile()
result = await app.run({"query": "hello"})
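If you want a feel for what the add_node/add_edge/run pattern amounts to, here is a self-contained asyncio sketch of the same flow; MiniGraph is a toy stand-in written for this page, not SynapseKit's StateGraph:

```python
# Toy linear graph runner mirroring the add_node/add_edge API above.
# Each node is an async function that takes and returns a state dict.
import asyncio

class MiniGraph:
    def __init__(self):
        self.nodes, self.edges = {}, {}
        self.entry = self.finish = None

    def add_node(self, name, fn):
        self.nodes[name] = fn

    def add_edge(self, src, dst):
        self.edges[src] = dst

    def set_entry(self, name):
        self.entry = name

    def set_finish(self, name):
        self.finish = name

    async def run(self, state):
        current = self.entry
        while True:
            state = await self.nodes[current](state)
            if current == self.finish:
                return state
            current = self.edges[current]

async def fetch_data(state):
    return {**state, "data": state["query"].upper()}

async def process_data(state):
    return {**state, "result": f"processed:{state['data']}"}

g = MiniGraph()
g.add_node("fetch", fetch_data)
g.add_node("process", process_data)
g.add_edge("fetch", "process")
g.set_entry("fetch")
g.set_finish("process")
print(asyncio.run(g.run({"query": "hello"})))
```

The real StateGraph layers parallel execution, conditional routing, cycles, and checkpointing on top of this basic state-threading loop.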

Swap providers in one line

from synapsekit import RAG

# OpenAI
rag = RAG(model="gpt-4o-mini", api_key="sk-...")

# Anthropic
rag = RAG(model="claude-3-haiku", api_key="sk-ant-...")

# Ollama (local)
rag = RAG(model="ollama/llama3", api_key="")

# Same API. Same code. Different brain.
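One-line swaps like this typically come down to prefix-based dispatch on the model string. A hedged sketch of the idea (pick_provider is hypothetical, not a SynapseKit API):

```python
# Illustrative model-string routing: map a model name prefix to a
# provider backend. This is a sketch of the pattern, not real routing code.
def pick_provider(model: str) -> str:
    prefixes = {
        "gpt-": "openai",
        "claude-": "anthropic",
        "ollama/": "ollama",
    }
    for prefix, provider in prefixes.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"unknown model: {model}")

print(pick_provider("gpt-4o-mini"))
print(pick_provider("ollama/llama3"))
```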



Growing fast

Contributors welcome · Apache 2.0 licensed

We're building the most comprehensive async-native LLM framework in Python. Whether you're a seasoned open-source contributor or looking for your first PR — jump in.


Star the repo · Browse good first issues · Join the discussion


