"Torchrun for the World": Enabling any terminal user to mobilize global computing resources with a single command to execute local code.
AI-Native Distributed Computing | Building the Next-Generation Computing Internet - EasyNet
English | 中文
EasyRemote is not just a Private Function-as-a-Service (Private FaaS) platform: it's our answer to the future of computing.
While current cloud computing models are platform-centric, requiring data and code to "go to the cloud" to exchange resources, we believe the next-generation computing network should be terminal-centric, language-interfaced, function-granular, and trust-bounded.
We call it: "EasyNet".
EasyRemote is the first-stage implementation of EasyNet, allowing you to:
- Define task logic using familiar Python function structures
- Deploy computing nodes on any device while maintaining privacy, performance, and control
- Transform local functions into globally accessible task interfaces through lightweight VPS gateways
- Launch tasks as simply as using `torchrun`, automatically scheduling to the most suitable resources for execution
| Traditional Cloud Computing | EasyNet Mode |
|---|---|
| Platform-centric | Terminal-centric |
| Code must go to cloud | Code stays on your device |
| Pay for computing power | Contribute to earn computing power |
| Vendor lock-in | Decentralized collaboration |
| Cold start delays | Always warm |
```python
# 1. Start a gateway node (any VPS)
from easyremote import Server
Server(port=8080).start()

# 2. Contribute a computing node (your device)
from easyremote import ComputeNode
node = ComputeNode("your-gateway:8080")

@node.register
def ai_inference(prompt):
    return your_local_model.generate(prompt)  # Runs on your GPU

node.serve()

# 3. Global computing access (anywhere)
from easyremote import Client
result = Client("your-gateway:8080").execute("ai_inference", "Hello AI")
```

Your device has joined EasyNet!
| Feature | AWS Lambda | Google Cloud | EasyNet Node |
|---|---|---|---|
| Computing Location | Cloud servers | Cloud servers | Your device |
| Data Privacy | Upload to cloud | Upload to cloud | Never leaves local |
| Computing Cost | $200+/million calls | $200+/million calls | $5 gateway fee |
| Hardware Limitations | Cloud specs | Cloud specs | Your GPU/CPU |
| Startup Latency | 100-1000ms | 100-1000ms | 0ms (always online) |
| AI Agent Integration | Custom API wrappers | Custom API wrappers | Native MCP/A2A protocols |
EasyRemote is purpose-built for the AI era. Beyond general distributed computing, it directly addresses the pain points AI teams face today:
- GPU Isolation: Team GPUs sit idle 80% of the time, yet cloud inference costs $200+/million calls
- Data Can't Leave: Healthcare, finance, and government data must stay on-premise, but AI models live in the cloud
- Agent Integration Is Fragmented: Connecting AI agents to real tools requires custom glue code for every service
- Cold Starts Kill UX: Cloud functions take 100-1000ms to wake up, destroying real-time AI experiences
| # | Scenario | Who It's For | What It Solves |
|---|---|---|---|
| K1 | Private AI Inference Hub (Team GPU Pool) | AI teams / R&D groups | Share team GPUs for model inference with load balancing; eliminate redundant cloud spend |
| K2 | Agent Tool Gateway (Enterprise Tool Mesh) | Agent platform teams | Unified MCP/A2A tool catalog so Claude, GPT, and custom agents can discover and call enterprise functions |
| K3 | A2A Operations Network (Incident Copilot) | Platform SRE / Ops | Automated incident response via agent-to-agent task chains; reduce manual handoffs |
| K4 | Demo-as-Service (Demo-API) | Product / Pre-sales / Startups | Publish AI demos as callable APIs in 3 steps; rapid prototyping for investor demos and POCs |
| K5 | Function Marketplace (Org-Internal) | Platform / Middle-office teams | Reusable AI function registry with capability discovery and automatic load balancing |
| K6 | Local Data Residency AI | Healthcare / Finance / Government | Run AI inference locally on HIPAA/GDPR-compliant devices; data never leaves your network |
| K9 | Runtime Device Capability Injection | ToC Agent apps / Edge platforms | Dynamically inject camera/media/sensor skills onto user devices at runtime without restart |
| # | Scenario | What's Needed |
|---|---|---|
| K7 | Multi-Agent Collaboration Factory | A2A state machine enhancements for long-running task lifecycles |
| K8 | MCP Resource Knowledge Network | MCP resources/prompts extensions for full knowledge graph integration |
EasyRemote speaks the languages AI agents already understand: MCP and A2A.
MCP (Model Context Protocol): AI agents (Claude, etc.) discover and invoke your functions as tools:

```python
# Your compute node registers functions as usual
@node.register(description="Summarize text using local LLM")
def summarize(text: str) -> str:
    return local_llm.summarize(text)

# Agents discover via MCP: POST /mcp {"method": "tools/list"}
# Agents invoke via MCP:  POST /mcp {"method": "tools/call", "params": {"name": "summarize", ...}}
```

Supported: `initialize`, `tools/list`, `tools/call`, `ping`, batch requests, notifications, JSON-RPC 2.0
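Since the MCP endpoint speaks plain JSON-RPC 2.0, request bodies can be built with the standard library alone. A minimal sketch; the `arguments` key inside the `tools/call` params follows common MCP convention, and the exact schema the gateway accepts is an assumption:

```python
import json

def mcp_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request body as used by MCP endpoints."""
    body = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        body["params"] = params
    return json.dumps(body)

# List available tools (POST this body to /mcp on the gateway):
list_body = mcp_request("tools/list")

# Invoke the `summarize` tool registered above (hypothetical argument schema):
call_body = mcp_request(
    "tools/call",
    {"name": "summarize", "arguments": {"text": "long document..."}},
    req_id=2,
)
print(call_body)
```

The same envelope shape covers `initialize`, `ping`, and batched requests (a JSON array of such objects).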
A2A (Agent-to-Agent Protocol): Agents coordinate through standardized task execution:

```python
# Agent discovers: POST /a2a {"method": "agent.capabilities"}
# Agent executes:  POST /a2a {"method": "task.execute", "params": {"task": {...}}}
# Agent notifies:  POST /a2a {"method": "task.send", "params": {...}}
```

Supported: `agent.capabilities`, `task.execute`, `task.send`, `ping`, task ID fallback, batch requests
EasyRemoteClientRuntime: an agent-side proxy that bridges MCP/A2A to EasyRemote's distributed gateway:

```python
from easyremote.protocols import EasyRemoteClientRuntime

runtime = EasyRemoteClientRuntime(gateway="your-gateway:8080")
# Exposes real gateway functions as MCP tools / A2A capabilities
# Supports node_id targeting, load balancing, and streaming
```

| Route | For | How |
|---|---|---|
| Route A: Agent (MCP/A2A) | AI agent platforms | AI Agent --MCP/A2A JSON-RPC--> Protocol Gateway --gRPC--> Compute Nodes |
| Route B: Human (Decorator) | Python engineers | @node.register to expose, @remote to call, transparent remote execution |
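Route B's programming model can be illustrated without a live gateway. The sketch below is a hypothetical local stand-in, not EasyRemote's implementation: a plain registry that mirrors what `@node.register` and `Client.execute` do, minus the gRPC hop:

```python
# A minimal local stand-in for the register/execute model: the gateway
# conceptually maintains a name -> callable mapping like this one.
class LocalNode:
    def __init__(self):
        self._functions = {}

    def register(self, func):
        """Decorator: expose a function under its own name (like @node.register)."""
        self._functions[func.__name__] = func
        return func

    def execute(self, name, *args, **kwargs):
        """Dispatch by name (like Client.execute, without the network hop)."""
        return self._functions[name](*args, **kwargs)

node = LocalNode()

@node.register
def add(a, b):
    return a + b

print(node.execute("add", 2, 3))  # 5
```

The real system adds transport, serialization, and load balancing on top, but the caller-visible contract is the same: register by decorator, invoke by name.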
- English Documentation Center - Complete English documentation navigation
- 中文文档中心 - Complete Chinese documentation navigation
- 5-Minute Quick Start - Fastest way to get started | 中文
- Installation Guide - Detailed installation instructions | 中文
- Examples - Core runnable examples | 中文
- Business Use Cases & Route Layers - Current vs roadmap (MCP/A2A and decorator paths)
- Killer Apps Gallery - Real-world use-case catalog with flagship applications
- Gallery Projects - Quickstart project templates restored from removed demos
- MCP Implemented Scope - Current protocol behavior and limits
- A2A Implemented Scope - Current protocol behavior and limits
- Agent-side Gateway Proxy Runtime - `EasyRemoteClientRuntime` (see MCP/A2A guides, section 2.3)
- Capability Management Protocol (CMP) - Skill/ability CRUD on user nodes (install/uninstall/list)
- Technical Whitepaper - EasyNet theoretical foundation | 中文
- Research Proposal - Academic research plan | 中文
- Project Pitch - Business plan overview | 中文
```python
@node.register
def medical_diagnosis(scan_data):
    # Medical data never leaves your HIPAA-compliant device,
    # but the diagnostic service can be securely accessed globally
    return your_private_ai_model.diagnose(scan_data)
```

- Traditional Cloud Services: Pay-per-use, costs increase exponentially with scale
- EasyNet Model: Contribute computing power to earn credits, use credits to call others' computing power
- Gateway Cost: $5/month vs traditional cloud $200+/million calls
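The break-even point between the two models is simple arithmetic. A sketch using the illustrative figures quoted above ($200 per million cloud calls versus a flat $5/month gateway fee); actual prices will vary:

```python
CLOUD_COST_PER_MILLION = 200.0  # illustrative figure from the comparison above
GATEWAY_MONTHLY = 5.0           # flat gateway fee

def monthly_cost_cloud(calls_per_month):
    return calls_per_month / 1_000_000 * CLOUD_COST_PER_MILLION

def monthly_cost_easynet(calls_per_month):
    # Flat fee regardless of volume; compute runs on hardware you already own
    return GATEWAY_MONTHLY

# Break-even: 5 = calls / 1e6 * 200  =>  25,000 calls per month
print(monthly_cost_cloud(25_000))        # 5.0
print(monthly_cost_easynet(10_000_000))  # 5.0
```

Above roughly 25,000 calls per month, the flat-fee model is cheaper under these assumptions (ignoring electricity and hardware depreciation on the node side).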
```python
# Your gaming PC can provide AI inference services globally
@node.register
def image_generation(prompt):
    return your_stable_diffusion.generate(prompt)

# Your MacBook can participate in distributed training
@node.register
def gradient_computation(batch_data):
    return your_local_model.compute_gradients(batch_data)
```

"Computing evolution is not linear progression, but paradigmatic leaps."
Core Innovation: From local calls → cross-node function calls
Technical Expression: `@remote` decorator for transparent distributed execution
Paradigm Analogy: RPC → gRPC → EasyRemote (spatial decoupling of function calls)
```python
# Traditional local call
def ai_inference(data):
    return model.predict(data)

# EasyRemote: function calls across global networks
@node.register
def ai_inference(data):
    return model.predict(data)

result = client.execute("global_node.ai_inference", data)
```

Breakthrough Metrics:
- API Simplicity: 25+ lines → 12 lines (-52%)
- Startup Latency: 100-1000ms → 0ms (-100%)
- Privacy Protection: Data to cloud → Never leaves local
Core Innovation: From explicit scheduling → adaptive intelligent scheduling
Technical Expression: Intent-driven multi-objective optimization scheduling
Paradigm Analogy: Kubernetes → Ray → EasyRemote ComputePool
```python
# Traditional explicit scheduling
client.execute("specific_node.specific_function", data)

# EasyRemote: intelligent intent scheduling
result = await compute_pool.execute_optimized(
    task_intent="image_classification",
    requirements=TaskRequirements(accuracy=">95%", cost="<$5"),
)
# System automatically: task analysis → resource matching → optimal scheduling
```

Breakthrough Metrics:
- Scheduling Efficiency: Manual config → Millisecond auto-decisions
- Resource Utilization: 60% → 85% (+42%)
- Cognitive Load: Complex config → Intent expression
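One way to picture multi-objective optimization is as a weighted score over candidate nodes. The scoring function, weights, and node fields below are hypothetical illustrations, not the actual ComputePool internals:

```python
# Rank candidate nodes by a weighted combination of normalized metrics
# (0..1 each, lower is better). Weights encode the caller's priorities.
def score(node, w_cost=0.4, w_latency=0.4, w_load=0.2):
    return w_cost * node["cost"] + w_latency * node["latency"] + w_load * node["load"]

nodes = [
    {"id": "gpu-a", "cost": 0.2, "latency": 0.5, "load": 0.9},
    {"id": "gpu-b", "cost": 0.6, "latency": 0.1, "load": 0.3},
]

best = min(nodes, key=score)  # pick the lowest (best) score
print(best["id"])  # gpu-b
```

A real scheduler would also fold in hard constraints (e.g. the `accuracy=">95%"` requirement filters out non-qualifying nodes before scoring), but the select-by-weighted-score core is the same idea.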
Core Innovation: From calling functions → expressing intentions
Technical Expression: Natural language-driven expert collaboration networks
Paradigm Analogy: LangChain → AutoGPT → EasyRemote Intent Engine
```python
# Traditional function-call mindset
await compute_pool.execute_optimized(function="train_classifier", ...)

# EasyRemote: natural language intent expression
result = await easynet.fulfill_intent(
    "Train a medical imaging AI with >90% accuracy for under $10"
)
# System automatically: intent understanding → task decomposition → expert discovery → collaborative execution
```

Breakthrough Metrics:
- User Barrier: Python developers → General users (10M+ user scale)
- Interaction Mode: Code calls → Natural language
- Collaboration Depth: Tool calls → Intelligent agent networks
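The intent-to-task step can be caricatured with simple keyword routing; the capability table and function names below are purely hypothetical stand-ins for the real intent engine, which would use language models rather than string matching:

```python
# Toy illustration of intent -> task routing: a capability registry maps
# phrases to executable task names (all names here are hypothetical).
CAPABILITIES = {
    "medical imaging": "train_medical_classifier",
    "image classification": "train_image_classifier",
}

def route_intent(intent):
    intent = intent.lower()
    for phrase, task in CAPABILITIES.items():
        if phrase in intent:
            return task
    raise LookupError("no expert node advertises this capability")

print(route_intent("Train a medical imaging AI with >90% accuracy"))
# train_medical_classifier
```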
```
┌──────────────────────────────────────────────────────────┐
│ Global Compute OS                                        │ ← Paradigm 3: Intent Layer
│ "Train medical AI" → Auto-coordinate global experts      │   (Intent-Graph)
└──────────────────────────────────────────────────────────┘
                            ▲
┌──────────────────────────────────────────────────────────┐
│ Compute Sharing Platform                                 │ ← Paradigm 2: Autonomous Layer
│ Intelligent scheduling + Multi-objective optimization    │   (Intelligence-Linked)
└──────────────────────────────────────────────────────────┘
                            ▲
┌──────────────────────────────────────────────────────────┐
│ Private Function Network                                 │ ← Paradigm 1: Function Layer
│ @remote decorator + Cross-node calls + Load balancing    │   (Function-Driven)
└──────────────────────────────────────────────────────────┘
```
Ultimate Vision: Mobilize global computing as easily as using `torchrun`

```
$ easynet "Train a medical imaging AI with my local data, 95%+ accuracy required"
Understanding your needs, coordinating global medical AI expert nodes...
Found stanford-medical-ai and 3 other expert nodes, starting collaborative training...
```

```
AI Agents (MCP/A2A)           Human Clients (Decorator/@remote)
          \                           /
           v                         v
  Lightweight gateway cluster (routing + protocol adaptation, no computing)
                      │
  Personal computing nodes (actual GPU/CPU execution)
                      │
  Peer-to-peer collaboration network
```
- Communication Protocol: gRPC + Protocol Buffers
- Secure Transport: End-to-end encryption
- Load Balancing: Intelligent resource awareness
- Fault Tolerance: Automatic retry and recovery
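The fault-tolerance bullet ("automatic retry and recovery") typically amounts to retrying transient failures with exponential backoff. A generic sketch of the pattern, not EasyRemote's actual client code:

```python
import time

def with_retry(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * 2 ** attempt)  # 10ms, 20ms, 40ms, ...

# Simulated flaky remote call: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network failure")
    return "ok"

print(with_retry(flaky))  # ok
```

Production variants usually add jitter to the delay and distinguish retryable errors (timeouts, dropped connections) from permanent ones (bad arguments, missing function).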
Limitations of Traditional Models:
- Cloud service costs grow exponentially with scale
- Data must be uploaded to third-party servers
- Cold starts and network latency limit performance
- Locked into major cloud service providers
EasyNet's Breakthroughs:
- Computing Sharing Economy: Contribute idle resources, gain global computing power
- Privacy by Design: Data never leaves your device
- Decentralized: No single points of failure, no vendor lock-in
Redefining the future of computing: From a few cloud providers monopolizing computing power to every device being part of the computing network.
```bash
# Become an early node in EasyNet
pip install easyremote

# Contribute your computing power
python -c "
from easyremote import ComputeNode
node = ComputeNode('demo.easynet.io:8080')

@node.register
def hello_world(): return 'Hello from my device!'

node.serve()
"
```

| Role | Contribution | Benefits |
|---|---|---|
| Computing Providers | Idle GPU/CPU time | Computing credits/token rewards |
| Application Developers | Innovative algorithms and applications | Global computing resource access |
| Gateway Operators | Network infrastructure | Routing fee sharing |
| Ecosystem Builders | Tools and documentation | Community governance rights |
- Technical Discussions: GitHub Issues
- Community Chat: GitHub Discussions
- Business Collaboration: silan.hu@u.nus.edu
- Project Founder: Silan Hu - NUS PhD Candidate
Ready to join the computing revolution?

```bash
pip install easyremote
```

Don't just see it as a distributed function tool: it's a prototype running on old-world tracks but heading toward a new-world destination.
⭐ If you believe in this new worldview, please give us a star!