Releases: henrybit/dify
v1.13.0
Human-in-the-Loop (HITL)
We are introducing the Human Input node, a major update that transforms how AI and humans
collaborate within Dify workflows.
Background
Previously, workflows were binary: either fully automated or fully manual. This created a "trust gap" in
high-stakes scenarios where AI speed is needed but human judgment is essential. With HITL, we are making human oversight a native part of the workflow architecture, allowing you to embed
review steps directly into the execution graph.
Key Capabilities
Native Workflow Pausing: Insert a "Human Input" node to suspend workflow execution at critical decision points.
Review & Edit: The node generates a UI where humans can review AI outputs and modify variables (e.g., editing a draft or correcting data) before the process continues.
Action-Based Routing: Configure custom buttons (like "Approve," "Reject," or "Escalate") that determine
the subsequent path of the workflow.
Flexible Delivery Methods: Human input forms can be delivered via Webapp or Email. In cloud environments, Email delivery availability may depend on plan/feature settings.
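The action-based routing described above can be pictured as a small mapping from the reviewer's chosen button to the next branch, with any reviewer edits merged back into workflow variables before execution resumes. This is an illustrative sketch only; the names (HumanInputResult, route, ROUTES) are hypothetical, not Dify's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class HumanInputResult:
    action: str                                       # which button the reviewer clicked
    edited_vars: dict = field(default_factory=dict)   # variables modified during review

# Hypothetical button-to-branch mapping configured on the Human Input node.
ROUTES = {"Approve": "publish_branch", "Reject": "revise_branch", "Escalate": "manager_branch"}

def route(result: HumanInputResult, variables: dict) -> tuple[str, dict]:
    """Merge reviewer edits into workflow variables, then pick the next branch."""
    merged = {**variables, **result.edited_vars}
    return ROUTES.get(result.action, "default_branch"), merged

branch, merged = route(HumanInputResult("Approve", {"draft": "edited text"}),
                       {"draft": "ai text"})
```

When the reviewer clicks "Approve" after editing the draft, execution continues down `publish_branch` with the edited value, not the original AI output.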
🛠 Architecture Updates
To support the stateful pause/resume mechanism required by HITL and provide event‑subscription APIs, we refactored the execution engine: Workflow‑based streaming executions and Advanced Chat executions now run in Celery workers, while non‑streaming WORKFLOW runs still execute in the API process.
All pause/resume paths (e.g., HITL) are resumed via Celery, and events are streamed back through Redis Pub/Sub.
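Conceptually, the Celery worker publishes run events to a per-execution channel, and the API process subscribes and streams them back to the client. The sketch below stands in for Redis Pub/Sub with an in-process queue purely for illustration; the channel naming and event types are assumptions, not Dify's actual schema.

```python
import queue

# In-memory stand-in for Redis Pub/Sub: one channel (queue) per workflow run.
channels: dict = {}

def publish(channel: str, event: dict) -> None:
    # Worker side: push an execution event onto the run's channel.
    channels.setdefault(channel, queue.Queue()).put(event)

def stream(channel: str):
    # API side: consume and forward events until the run reports completion.
    q = channels.setdefault(channel, queue.Queue())
    while True:
        event = q.get()
        yield event
        if event["type"] == "workflow_finished":
            break

run_channel = "workflow_run:abc123"   # hypothetical channel-naming scheme
publish(run_channel, {"type": "node_started", "node": "human_input"})
publish(run_channel, {"type": "workflow_finished"})
received = [e["type"] for e in stream(run_channel)]
```

In the real architecture the publisher and subscriber live in separate processes, which is why the Redis instance backing this channel (PUBSUB_REDIS_URL, below) matters for scalability.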
For Large Deployments & Self-Hosted Users:
We have introduced a new Celery queue named workflow_based_app_execution. While standard setups will work out of the box, high-throughput environments should consider the following optimizations to ensure stability and performance:
Scale Workers: Adjust the number of workers consuming the workflow_based_app_execution queue based on your specific workload.
Dedicated Redis (Optional): For large-scale deployments, we recommend configuring the new PUBSUB_REDIS_URL environment variable to point to a dedicated Redis instance. Using Redis Cluster mode with Sharded PubSub is strongly advised to ensure horizontal scalability.
New Celery Queue Required: workflow_based_app_execution
Please ensure your deployment configuration (Docker Compose, Helm Chart, etc.) includes workers listening to the new workflow_based_app_execution queue.
This queue is required for workflow‑based streaming executions and all resume flows (e.g., HITL); otherwise, streaming executions and resume tasks will not be processed.
🔧 Operational Note
Additional Celery Queue: api_token
If ENABLE_API_TOKEN_LAST_USED_UPDATE_TASK=true, ensure your deployment also has workers listening to api_token.
This queue is used by the scheduled batch update task for API token last_used_at timestamps.
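The batching pattern behind this task can be sketched as: record the latest last_used_at per token in memory on each API call, then persist everything in one periodic flush instead of issuing a DB write per request. The names and the dict-as-database below are hypothetical illustrations.

```python
# Collect the most recent last_used_at per API token between flushes.
pending: dict = {}

def touch(token_id: str, ts: float) -> None:
    # Called on every authenticated API request; keeps only the latest timestamp.
    pending[token_id] = max(ts, pending.get(token_id, 0.0))

def flush(db: dict) -> int:
    # Periodic task: persist all pending timestamps in one batch, then reset.
    updated = 0
    for token_id, ts in pending.items():
        db[token_id] = ts
        updated += 1
    pending.clear()
    return updated

db: dict = {}
touch("tok_a", 100.0)
touch("tok_a", 160.0)   # only the newest timestamp survives until the flush
touch("tok_b", 120.0)
count = flush(db)
```

This is why the queue only needs workers when ENABLE_API_TOKEN_LAST_USED_UPDATE_TASK is enabled: with batching off, no flush task is ever scheduled.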
⚙️ Configuration Changes
We have introduced several new environment variables to support the architectural changes. Large deployments should pay special attention to the PubSub Redis configurations to ensure scalability.
PUBSUB_REDIS_URL (Critical): Specifies the Redis URL used for PubSub communication between the API and Celery workers. If left empty, it defaults to the standard REDIS_* configuration.
PUBSUB_REDIS_CHANNEL_TYPE (Critical): Defines the channel type for streaming events. Options are pubsub (default) or sharded. We highly recommend using sharded for high-throughput environments.
PUBSUB_REDIS_USE_CLUSTERS (Critical): Set to true to enable Redis cluster mode for PubSub. Combined with sharded PubSub, this is essential for horizontal scaling.
Other Additions:
WEB_FORM_SUBMIT_RATE_LIMIT_MAX_ATTEMPTS: Maximum number of web form submissions allowed per IP within the rate limit window (Default: 30).
WEB_FORM_SUBMIT_RATE_LIMIT_WINDOW_SECONDS: Time window in seconds for web form submission rate limiting (Default: 60).
HUMAN_INPUT_GLOBAL_TIMEOUT_SECONDS: Maximum seconds a workflow run can stay paused waiting for human input before global timeout (Default: 604800, 7 days).
ENABLE_HUMAN_INPUT_TIMEOUT_TASK: Enables the background task that checks for expired human input requests (Default: true).
HUMAN_INPUT_TIMEOUT_TASK_INTERVAL: Sets the interval (in minutes) for the timeout check task (Default: 1).
ENABLE_API_TOKEN_LAST_USED_UPDATE_TASK: Enables the periodic background task that batch-updates API token last_used_at timestamps (Default: true).
API_TOKEN_LAST_USED_UPDATE_INTERVAL: Sets the interval (in minutes) for batch-updating API token last_used_at timestamps (Default: 30).
SANDBOX_EXPIRED_RECORDS_CLEAN_BATCH_MAX_INTERVAL: Maximum random delay (in milliseconds) between retention cleanup batches to reduce DB pressure spikes (Default: 200).
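As an illustration of how the two WEB_FORM_SUBMIT_* settings interact, here is a minimal fixed-window rate limiter: at most MAX_ATTEMPTS submissions per IP per WINDOW_SECONDS window. Dify's actual implementation (likely Redis-backed) may differ; this is a sketch under that assumption.

```python
from collections import defaultdict

MAX_ATTEMPTS = 30      # WEB_FORM_SUBMIT_RATE_LIMIT_MAX_ATTEMPTS
WINDOW_SECONDS = 60    # WEB_FORM_SUBMIT_RATE_LIMIT_WINDOW_SECONDS

_counters = defaultdict(int)

def allow_submission(ip: str, now: float) -> bool:
    # Requests landing in the same WINDOW_SECONDS bucket share one counter.
    key = (ip, int(now // WINDOW_SECONDS))
    if _counters[key] >= MAX_ATTEMPTS:
        return False
    _counters[key] += 1
    return True

results = [allow_submission("203.0.113.7", now=10.0) for _ in range(31)]
# The first 30 submissions succeed; the 31st in the same window is rejected,
# and a later timestamp in a new window gets a fresh budget.
```

Raising MAX_ATTEMPTS or shrinking WINDOW_SECONDS loosens the limit; the defaults allow 30 submissions per IP per minute.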
📌 Additional Changelog Highlights
Reliability & Correctness
Added migration-time deduplication and a unique constraint for tenant default models to prevent duplicate default model records.
Fixed a tools-deletion edge case caused by provider ID type mismatch.
Fixed a FastOpenAPI integration regression where authenticated users could be resolved as anonymous in remote file APIs.
Fixed message event type detection for file-related responses, and hid the workspace invite action for non-manager users.
Performance & Scalability
Reduced backend load and console latency with plugin manifest pre-caching and AppListApi query optimizations.
Improved large-data task stability with split DB sessions, batched cleanup execution, index tuning, and configurable inter-batch throttling for retention cleanup jobs.
API & Platform Capabilities
Added a Service API endpoint for end-user lookup with tenant/app scope enforcement.
Improved workflow run history refresh behavior during run state transitions.
Enhanced MCP Tool integration by extracting and reporting usage metadata (for example, token/cost fields) from MCP responses.
Security
Removed dynamic new Function() evaluation from ECharts parsing; unsupported chart code now returns explicit parsing errors.
Localization
Added Dutch (nl-NL) language support across backend language mapping and web localization resources.
Upgrade Guide
Important
If you use custom CELERY_QUEUES, make sure workflow_based_app_execution is included.
If ENABLE_API_TOKEN_LAST_USED_UPDATE_TASK=true, also include api_token.
Docker Compose Deployments
Back up your customized docker-compose YAML file (optional)
cd docker
cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
Get the latest code from the main branch
git checkout main
git pull origin main
Stop the service. Please execute in the docker directory
docker compose down
Back up data
tar -cvf volumes-$(date +%s).tgz volumes
Upgrade services
docker compose up -d
Note
If you encounter errors like the ones below
2025/11/26 11:37:57 /app/internal/db/pg/pg.go:30
[error] failed to initialize database, got error failed to connect to host=db_postgres user=postgres database=dify_plugin: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
2025/11/26 11:37:57 /app/internal/db/pg/pg.go:34
[error] failed to initialize database, got error failed to connect to host=db_postgres user=postgres database=postgres: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
2025/11/26 11:37:57 init.go:99: [PANIC]failed to init dify plugin db: failed to connect to host=db_postgres user=postgres database=postgres: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
panic: [PANIC]failed to init dify plugin db: failed to connect to host=db_postgres user=postgres database=postgres: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
Please use the following command instead; for details, see langgenius#28706
docker compose --profile postgresql up -d
Source Code Deployments
Stop the API server, Worker, and Web frontend Server.
Get the latest code from the release branch:
git checkout 1.13.0
Update Python dependencies:
cd api
uv sync
Then, let's run the migration script:
uv run flask db upgrade
Finally, run the API server, Worker, and Web frontend Server again.
What's Changed
refactor(api): replace reqparse with Pydantic models in trial.py by @Sean-Kenneth-Doherty in langgenius#31789
refactor: plugin detail panel components for better maintainability and code organization. by @CodingOnStar in langgenius#31870
fix: remove api reference doc link en prefix by @hyoban in langgenius#31910
fix: missing import console_ns by @hjlarry in langgenius#31916
fix: fix mcp server status is not right by @fatelei in langgenius#31826
test: try fix test, clear test log in CI by @hyoban in langgenius#31912
fix: fix mcp output schema is union type frontend crash by @fatelei in langgenius#31779
fix: auto summary env by @zxhlyh in langgenius#31930
refactor(datasets): extract hooks and components with comprehensive tests by @CodingOnStar in langgenius#31707
fix: include locale in appList query key for localization support in useExploreAppList by @CodingOnStar in langgenius#31921
chore: assign code owners for test directories by @laipz8200 in langgenius#31940
refactor(web): extract complex components into modular structure with comprehensive tests by @CodingOnStar in langgenius#31729
fix: fix delete_draft_variables_batch cycle forever by @fatelei in https://github.com/langgenius/dify/pul...
v1.11.4
🔒 Security
Dify now requires Node.js 24.13.0 to pick up the upstream fix for the AsyncLocalStorage/async_hooks DoS CVE that can crash apps with deeply nested input. All self-hosted deployments should upgrade Node.js. Thanks to @hyoban in langgenius#30945.
Related: langgenius#30935.
🛠️ Bug Fixes
Redirect After Login: We’ve sorted out the login redirects to bring you back to your intended destination smoothly after logging in. Shoutout to @hyoban for this fix in langgenius#30985.
Missing ID and Message ID: Missing the essentials? Not anymore! We’ve patched the missing id and message_id issue, thanks to @fatelei in langgenius#31008.
Destructuring Undefined Properties: Ever run into that annoying error where you can't destructure name from value because it's undefined? That’s been crushed too, all thanks to @fatelei in langgenius#30991.
Upgrade Guide
Docker Compose Deployments
Back up your customized docker-compose YAML file (optional)
cd docker
cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
Get the latest code from the main branch
git checkout main
git pull origin main
Stop the service. Please execute in the docker directory
docker compose down
Back up data
tar -cvf volumes-$(date +%s).tgz volumes
Upgrade services
docker compose up -d
Note
If you encounter errors like the ones below
2025/11/26 11:37:57 /app/internal/db/pg/pg.go:30
[error] failed to initialize database, got error failed to connect to host=db_postgres user=postgres database=dify_plugin: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
2025/11/26 11:37:57 /app/internal/db/pg/pg.go:34
[error] failed to initialize database, got error failed to connect to host=db_postgres user=postgres database=postgres: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
2025/11/26 11:37:57 init.go:99: [PANIC]failed to init dify plugin db: failed to connect to host=db_postgres user=postgres database=postgres: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
panic: [PANIC]failed to init dify plugin db: failed to connect to host=db_postgres user=postgres database=postgres: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
Please use the following command instead; for details, see langgenius#28706
docker compose --profile postgresql up -d
Source Code Deployments
Stop the API server, Worker, and Web frontend Server.
Get the latest code from the release branch:
git checkout 1.11.4
Update Python dependencies:
cd api
uv sync
Then, let's run the migration script:
uv run flask db upgrade
Finally, run the API server, Worker, and Web frontend Server again.
What's Changed
fix: redirect after login by @hyoban in langgenius#30985
fix: fix missing id and message_id by @fatelei in langgenius#31008
build: require node 24.13.0 by @hyoban in langgenius#30945
chore: bump version to 1.11.4 by @laipz8200 in langgenius#30961
fix: fix Cannot destructure property 'name' of 'value' as it is undef… by @hyoban in langgenius#30991
Full Changelog: langgenius/dify@1.11.3...1.11.4
v1.11.2
🌟 What’s New in v1.11.2 🌟
Welcome to version 1.11.2! This release sees a significant number of improvements, especially around testing, fixes, and new integrations to enhance the robustness and flexibility of the platform. Here's the lowdown:
🚀 New Features
InterSystems IRIS Vector Database: We've added support for this database to bolster data handling capabilities. Big ups to @TomoOkuyama! (langgenius#29480)
Aliyun SLS Integration: Workflow execution logging can now leverage Aliyun's Simple Log Service, courtesy of @adongfan. (langgenius#28986)
Tunisian Arabic Support: We've expanded our language support with Tunisian Arabic. Shukran @nourzakhama2003! (langgenius#29306)
⚙️ Enhancements
Comprehensive Test Coverage: A slew of Jest tests have been added to various components, such as the ConfirmModal, AppCard, CustomizeModal, and more. Thanks to everyone involved, especially @lyzno1! These enhance our confidence in releasing robust changes. (langgenius#29627, langgenius#29667, etc.)
Amplitude Tracking: Enhanced user behavior tracking across the platform for deeper insights, thanks to @CodingOnStar. (langgenius#29662)
Pipeline Setting Tests: Automated testing has been added to ensure any future changes to pipeline settings won't break your optimizations. (langgenius#29478)
Responsive Chat Wrapper: We've optimized the chat interface for better usability across all device types. Props to @hangboss1761. (langgenius#29687)
🛠️ Bug Fixes
Unified Translation: Fixed various translation-related issues across multiple languages for a more coherent global experience. Thanks, @ZeroZ-lab! (langgenius#29759)
Security Enhancements: We've patched an XSS vulnerability with the Mermaid Graph and tackled SSRF and CSV injection issues. Kudos to @zyssyz123 and @laipz8200. (langgenius#29811, langgenius#29462)
Upload Fixes: If file uploads are disabled, they'll now be consistently so across the board. Big thanks to @iamjoel. (langgenius#29681)
API Key Validation: Ensures API keys in HTTPRequest nodes are never empty, thanks to @AziizBg. (langgenius#29950)
Miscellaneous Fixes: A whole host of tweaks ranging from workflow past version data synchronization to adjustment of padding for better alignment. Massive thanks to all who squashed these bugs! (langgenius#30139, langgenius#29999)
🎨 Code Quality & Maintenance
Refactor Marathon: We've massively refactored our API and web controllers to make future updates easier and more performant. Big thanks to @asukaminato0721 for spearheading this. (langgenius#29894, langgenius#29888, etc.)
Jest and Webpack Optimizations: Improved Jest caching, configuration, and migration to Vitest/ESM in the web components for quicker, more reliable tests. Hats off to @lyzno1 and @hyoban. (langgenius#29881, langgenius#29974)
Documentation Cleanup: The Swagger UI is now disabled by default in production releases, being more cautious about what information hangs out there. Thanks @laipz8200. (langgenius#29723)
That's the round-up for v1.11.2! As always, a huge shoutout to all contributors who make these improvements possible. You rock! Now, go try the new release, and hit us up with feedback. Happy coding! 🙌
Upgrade Guide
Docker Compose Deployments
Back up your customized docker-compose YAML file (optional)
cd docker
cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
Get the latest code from the main branch
git checkout main
git pull origin main
Stop the service. Please execute in the docker directory
docker compose down
Back up data
tar -cvf volumes-$(date +%s).tgz volumes
Upgrade services
docker compose up -d
Note
If you encounter errors like the ones below
2025/11/26 11:37:57 /app/internal/db/pg/pg.go:30
[error] failed to initialize database, got error failed to connect to host=db_postgres user=postgres database=dify_plugin: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
2025/11/26 11:37:57 /app/internal/db/pg/pg.go:34
[error] failed to initialize database, got error failed to connect to host=db_postgres user=postgres database=postgres: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
2025/11/26 11:37:57 init.go:99: [PANIC]failed to init dify plugin db: failed to connect to host=db_postgres user=postgres database=postgres: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
panic: [PANIC]failed to init dify plugin db: failed to connect to host=db_postgres user=postgres database=postgres: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
Please use the following command instead; for details, see langgenius#28706
docker compose --profile postgresql up -d
Source Code Deployments
Stop the API server, Worker, and Web frontend Server.
Get the latest code from the release branch:
git checkout 1.11.2
Update Python dependencies:
cd api
uv sync
Then, let's run the migration script:
uv run flask db upgrade
Finally, run the API server, Worker, and Web frontend Server again.
v1.11.1
🛠️ Fixes and Improvements
React and Next.js Security Upgrades: We've bumped up react and react-dom to 19.2.3 to fix some CVE vulnerabilities. Next.js also got a security update, courtesy of @douxc (PRs langgenius#29532 and langgenius#29545).
Credential Management: If you've been seeing empty available_credentials, that's sorted out now (thanks to @fatelei in langgenius#29521).
Description Length Limitation: Autogenerated descriptions will now be truncated to avoid the 400-character limit error, ensuring smoother submissions by @shua-chen in langgenius#28681.
Content Type Charset: Response content types now include charsets to keep your data formats consistent, by @Pleasurecruise in langgenius#29534.
Flask-Restx Attribute Error: The pesky AttributeError caused by validate=True in flask-restx is no more (fixed by @Mairuis in langgenius#29552).
Document Handling: Optimized the save_document_with_dataset_id function for better performance by @fatelei in langgenius#29550. Plus, we fixed an issue where external images in DOCX files were causing extraction failures (@JohnJyong in langgenius#29558).
Token Retrieval: No more errors when access_token is empty; it now gracefully returns None by @kashira2339 in langgenius#29516.
Hit-Test Failures: Resolved the hit-test failure when an attachment ID doesn’t exist by @JohnJyong in langgenius#29563.
🚀 New Features
Amplitude Integration: We’ve integrated the Amplitude API key into our layout and provider components for enhanced analytics. Big thanks to @CodingOnStar in langgenius#29546.
🧪 Testing
Container Integration Tests: Added integration tests for triggers to make sure everything runs smoothly in container environments by @Stream29 in langgenius#29527.
⚡ Performance Enhancements
Excel Extractor: We've optimized the performance and memory usage of the Excel extractor, making it faster and more efficient, by @NieRonghua in langgenius#29551.
Thanks to everyone who contributed to this release! Your feedback and contributions make all the difference. As always, happy coding! 🌟
Upgrade Guide
Docker Compose Deployments
Back up your customized docker-compose YAML file (optional)
cd docker
cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
Get the latest code from the main branch
git checkout main
git pull origin main
Stop the service. Please execute in the docker directory
docker compose down
Back up data
tar -cvf volumes-$(date +%s).tgz volumes
Upgrade services
docker compose up -d
Note
If you encounter errors like the ones below
2025/11/26 11:37:57 /app/internal/db/pg/pg.go:30
[error] failed to initialize database, got error failed to connect to host=db_postgres user=postgres database=dify_plugin: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
2025/11/26 11:37:57 /app/internal/db/pg/pg.go:34
[error] failed to initialize database, got error failed to connect to host=db_postgres user=postgres database=postgres: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
2025/11/26 11:37:57 init.go:99: [PANIC]failed to init dify plugin db: failed to connect to host=db_postgres user=postgres database=postgres: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
panic: [PANIC]failed to init dify plugin db: failed to connect to host=db_postgres user=postgres database=postgres: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
Please use the following command instead; for details, see langgenius#28706
docker compose --profile postgresql up -d
Source Code Deployments
Stop the API server, Worker, and Web frontend Server.
Get the latest code from the release branch:
git checkout 1.11.1
Update Python dependencies:
cd api
uv sync
Then, let's run the migration script:
uv run flask db upgrade
Finally, run the API server, Worker, and Web frontend Server again.
v1.10.1-fix.1
Security/deps: backend bumps pyarrow to 17.0.0, werkzeug to 3.1.4, and urllib3 to 2.5.0 in api/uv.lock; frontend bumps React to 19.2.1 (addresses GHSA-fv66-9v8q-g76r) and Next.js to 15.5.7 in web/package.json and web/pnpm-lock.yaml.
v1.10.1 – Multi-Database Era Begins: MySQL Joins the Family
🎉 Major new capabilities, critical stability fixes
🧩 And the long-awaited MySQL support finally arrives!
🚀 New Features
Infrastructure & DevOps
MySQL adaptation (PostgreSQL / MySQL / OceanBase now fully supported)
Thanks @longbingljw from the OceanBase team!
PR: langgenius#28188
Adds DB_TYPE configuration option
Supports MySQL JSON / LONGTEXT / UUID / index differences
Updates Alembic migrations for multi-DB compatibility
Introduces cross-DB SQL helpers for statistics and date handling
Rewrites dataset metadata filters with SQLAlchemy JSON operators
Adds CI workflows for MySQL migration testing
This is a significant backend upgrade in Dify’s history — multi-database support is now first-class.
Performance & Workflow Editor Optimization
Implemented a major performance upgrade for the Workflow Editor, eliminating costly per-node validation scans and reducing unnecessary re-renders; editors that previously became laggy at ~50 nodes now remain smooth even near ~200 nodes — langgenius#28591, by @iamjoel.
Pipelines & Workflow Engine
Introduced a broad set of workflow-editor improvements, including UI refinement, stability fixes, and quality-of-life enhancements across variable inspection, media components, and node interactions — langgenius#27981, by @Xiu-Lan, @crazywoola, @johnny0120, @Woo0ood.
🛠 Fixes & Improvements
Runtime Stability & Workflow Execution
Fixed an issue where advanced-chat workflows could fail to stop, preventing stuck or lingering processes — langgenius#27803, by @Kevin9703.
Fixed a 500 error triggered when running “any node” in draft mode, improving workflow debugging reliability — langgenius#28636, by @hjlarry.
Corrected token overcounting during loop/iteration evaluation (not related to billing tokens) — langgenius#28406, by @anobaka.
Fixed workflow-as-tool returning an empty files field, ensuring tool integrations receive correct file metadata — langgenius#27925, by @CrabSAMA.
Resolved a session-scope error in FileService that could cause inconsistent file deletion behavior — langgenius#27911, by @ethanlee928.
Knowledge Base
Fixed a 500 error when using the weightedScore retrieval option, restoring stability for weighted ranking scenarios — langgenius#28586, by @Eric-Guo.
Developer Experience & SDKs
Fixed Node.js SDK route and multipart upload handling, ensuring robust file and data submission through JavaScript integrations — langgenius#28573, by @lyzno1.
Fixed OpenAPI/Swagger failing to load, restoring developer documentation access — langgenius#28509, by @changkeke, with contributions from @asukaminato0721.
Web UI & UX
Corrected dark-mode rendering for the ExternalDataToolModal, ensuring consistent appearance across themes — langgenius#28630, by @Nov1c444.
Fixed Marketplace search-trigger behavior and scroll position, improving discovery and navigation — langgenius#28645, by @lyzno1.
Fixed incorrect navigation when opening chatflow log details, providing more predictable UI behavior — langgenius#28626, by @hjlarry.
Fixed layout and rendering issues in the README display panel, ensuring cleaner content presentation — langgenius#28658, by @yangzheli.
Reduced unnecessary re-renders in the useNodes hook, improving overall front-end performance — langgenius#28682, by @iamjoel.
Plugins & Integrations
Updated plugin verification logic to use a unique identifier, improving correctness across plugin installations and updates — langgenius#28608, by @Mairuis.
System Robustness
Prevented nullable tags in TriggerProviderIdentity, avoiding potential runtime errors — langgenius#28646, by @Yeuoly.
Improved error messaging for invalid webhook requests, providing clearer diagnostics — langgenius#28671, by @hjlarry.
Feedback & Logging
Fixed like/dislike feedback not appearing in logs, ensuring end-user rating signals are correctly visualized — langgenius#28652, by @fatelei.
Internationalization (i18n)
Standardized terminology for trigger and billing events, improving translation consistency — langgenius#28543, by @NeatGuyCoding.
Fixed multiple issues in execution-related translations, correcting missing or malformed entries — langgenius#28610, by @NeatGuyCoding.
Removed incorrect “running” translation entries — langgenius#28571, by @NeatGuyCoding.
Refactored i18n scripts and removed obsolete translation keys — langgenius#28618, by @lyzno1.
Added missing translations across the UI, improving language coverage — langgenius#28631, by @lyzno1.
Maintenance & Developer Tooling
Added front-end automated testing rules to strengthen baseline reliability — langgenius#28679, by @CodingOnStar and contributors.
Upgraded system libraries and Python dependencies to maintain security and compatibility — langgenius#28624, by @laipz8200 and @GareArc.
Updated start-web development script to use pnpm dev, simplifying contributor workflows — langgenius#28684, by @laipz8200.
Upgrade Guide
Docker Compose Deployments
Important
Required Action Before Upgrading
Starting from 1.10.1, the Dify API image now runs as a non-root user (UID 1001) for improved security.
If you are using local filesystem storage (the default in community deployments), you must update the ownership of your mounted storage directories on the host machine, or the containers will fail to read/write files.
Affected services:
api
worker
Affected host directory:
./volumes/app/storage → mounted to /app/api/storage
What you must do before restarting the new version:
Stop existing containers
docker compose down
Update directory ownership on the host
sudo chown -R 1001:1001 ./volumes/app/storage
Restart normally
docker compose up -d
After this one-time migration, Dify will operate normally with the new non-root user model.
Back up your customized docker-compose YAML file (optional)
cd docker
cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
Get the latest code from the main branch
git checkout main
git pull origin main
Stop the service. Please execute in the docker directory
docker compose down
Back up data
tar -cvf volumes-$(date +%s).tgz volumes
Upgrade services
docker compose up -d
If you encounter errors like the ones below
2025/11/26 11:37:57 /app/internal/db/pg/pg.go:30
[error] failed to initialize database, got error failed to connect to host=db_postgres user=postgres database=dify_plugin: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
2025/11/26 11:37:57 /app/internal/db/pg/pg.go:34
[error] failed to initialize database, got error failed to connect to host=db_postgres user=postgres database=postgres: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
2025/11/26 11:37:57 init.go:99: [PANIC]failed to init dify plugin db: failed to connect to host=db_postgres user=postgres database=postgres: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
panic: [PANIC]failed to init dify plugin db: failed to connect to host=db_postgres user=postgres database=postgres: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
Please use the following command instead; for details, see langgenius#28706
docker compose --profile postgresql up -d
Source Code Deployments
Stop the API server, Worker, and Web frontend Server.
Get the latest code from the release branch:
git checkout 1.10.1
Update Python dependencies:
cd api
uv sync
Then, let's run the migration script:
uv run flask db upgrade
Finally, run the API server, Worker, and Web frontend Server again.
v1.10.0 - Event-Driven Workflows
Introduce Trigger Functionality
A trigger is a type of Start node that allows your workflow to run automatically—either on a schedule or in response to events from external systems (such as GitHub, Gmail, or your internal services)—without requiring a user action or API call.
Triggers are ideal for automating repetitive processes and integrating workflows with third-party applications to enable seamless data synchronization and processing.
⚡️ Trigger = When something happens → then do something
Triggers form the foundation of event-driven Workflow capabilities and currently support the following types:
Schedule — time-based triggers
SaaS Integration Event — events from external SaaS platforms (e.g., Slack, GitHub, Linear) integrated through Plugins
Webhook — HTTP callbacks from external systems
These trigger features are only available for Workflows. Chatflow, Agent, and BasicChat currently do not support triggers.
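The "when something happens → then do something" idea can be sketched as a tiny event registry: workflows register against an event source, and an incoming event runs every matching workflow. All names here (on_event, emit, the source string) are hypothetical illustrations, not Dify's trigger API.

```python
from collections import defaultdict

_registry = defaultdict(list)

def on_event(source: str):
    # Decorator: run the decorated workflow whenever `source` emits an event.
    def register(workflow):
        _registry[source].append(workflow)
        return workflow
    return register

def emit(source: str, payload: dict) -> list:
    # Deliver one event to every workflow registered for this source.
    return [workflow(payload) for workflow in _registry[source]]

@on_event("github:issue_opened")      # hypothetical event-source name
def triage_issue(payload: dict) -> str:
    return f"triaged issue #{payload['number']}"

results = emit("github:issue_opened", {"number": 42})
```

A Schedule trigger plays the role of `emit` on a timer; a Webhook trigger plays it on an incoming HTTP callback; a SaaS Integration Event plays it when the plugin receives a platform event.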
🧩 Marketplace
We provide several popular trigger plugins, which you can explore in our Marketplace.
😎 Enjoy the Experience
Sit back, relax, and let your workflows run themselves.
A big thanks to our contributors!
Thanks so much to the contributors in langgenius#23981 who helped us develop this feature! It's a big deal, made possible by you all! @ACAne0320 @hjlarry @lyzno1 @CathyL0 @zhangxuhe1
Upgrade Guide
Docker Compose Deployments
Back up your customized docker-compose YAML file (optional)
cd docker
cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
Get the latest code from the main branch
git checkout main
git pull origin main
Stop the service. Please execute in the docker directory
docker compose down
Back up data
tar -cvf volumes-$(date +%s).tgz volumes
Upgrade services
docker compose up -d
Source Code Deployments
Stop the API server, Worker, and Web frontend Server.
Get the latest code from the release branch:
git checkout 1.10.0
Update Python dependencies:
cd api
uv sync
Then, let's run the migration script:
uv run flask db upgrade
Finally, run the API server, Worker, and Web frontend Server again.
v1.9.2 - Sharper, Faster, and More Reliable
This release focuses on improving stability, async performance, and developer experience. Expect cleaner internals, better workflow control, and improved observability across the stack.
Warning
A recent change has modernized the Dify integration for Weaviate (see PR langgenius#25447 and related update in PR langgenius#26964). The upgrade switches the Weaviate Python client from v3 to v4 and raises the minimum required Weaviate server version to 1.24.0 or newer. With this update:
If you are running an older Weaviate server (e.g., v1.19.0), you must upgrade your server to at least v1.24.0 before updating Dify.
The code now uses the new client API and supports gRPC for faster operations, which may require opening port 50051 in your Docker Compose files.
Data migration between server versions may require re-indexing using Weaviate’s Cursor API or standard backup/restore procedures.
The Dify documentation will be updated to provide migration steps and compatibility guidance.
Action required:
Upgrade your Weaviate server to v1.24.0 or higher.
Follow the migration guide to update your data and Docker configuration as described in the latest official Dify documentation.
Ensure your environment meets the new version requirements before deploying Dify updates.
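As a quick pre-flight check, you can compare the version string your Weaviate server reports (the `version` field returned by `GET /v1/meta`) against the new minimum. A minimal sketch, assuming a plain `major.minor.patch` version string:

```python
# Minimal pre-flight sketch: does a Weaviate server version string meet
# the new 1.24.0 minimum? Assumes a plain "major.minor.patch" format.
def meets_minimum(version: str, minimum: str = "1.24.0") -> bool:
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)

print(meets_minimum("1.19.0"))  # False: upgrade the server first
print(meets_minimum("1.24.0"))  # True: meets the minimum
```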
✨ Highlights
Workflow & Agents
Pause and resume workflow graph executions (by @laipz8200 in langgenius#26585)
Structured output now supported during LLM node streaming (by @white-loub in langgenius#27089)
Agent variables support drag‑and‑drop, just like workflow start nodes (by @yangzheli in langgenius#26899)
Workflow runs can be filtered by status or re‑executed from logs (by @twjackysu in langgenius#26850, @wellCh4n in langgenius#26787)
Integrations & SDK
OpenTelemetry + HTTPX tracing for better observability (by @qiqizjl in langgenius#26651)
CORS config now accepts custom headers (by @laipz8200 in langgenius#27133)
Web & UI
Faster load times by splitting and lazy‑loading constant files (by @yangzheli in langgenius#26794)
Improved DataSources with marketplace plugin integration and filtering (by @WTW0313 in langgenius#26810)
Added tax tooltips to pricing footer (by @CodingOnStar in langgenius#26705)
Account creation now syncs interface language with display settings (by @feelshana in langgenius#27042)
⚙️ Core Improvements
Enabled Pyright across multiple modules and fixed typing issues (by @asukaminato0721 in langgenius#26425, langgenius#26462, langgenius#26461)
Refined HTTP timeout configurations and input validation (by @linancn in langgenius#26685)
Redis queue efficiency improvements via cached checks and explicit key cleanup (by @Blackoutta in langgenius#26406)
App models now track updated_by and updated_at fields (by @liugddx in langgenius#26736)
Structured output for non-streaming and single-step runs (by @goofy-z in langgenius#26430)
🧩 Fixes
Fixed duplicate chunks and dataset pagination duplication (by @kenwoodjw in langgenius#26360, @zlyszx in langgenius#25783)
Fixed workflow token usage, LLM usage tracking, and detached user sessions (by @kenwoodjw in langgenius#26723, @laipz8200 in langgenius#27021, @liugddx in langgenius#27162)
Resolved missing module and logical errors in Weaviate vector distance calculation (by @DhruvGorasiya in langgenius#26964, langgenius#27019)
Fixed multi‑auth credential collisions (by @dickens88 in langgenius#26615)
Fixed chat flickers, infinite reloads, login redirects, and loader visibility (by @DavideDelbianco in langgenius#26829, @iamjoel in langgenius#27150, langgenius#27178)
Fixed SSRF validation in external knowledge URLs (by @mrdear in langgenius#26789)
Corrected missing LLM output var descriptions, variable truncation logic, and EndUser relationship loading (by @hjlarry in langgenius#26648, @hj24 in langgenius#27129, @liugddx in langgenius#27162)
Fixed dataset deselection issue when deleting a single file (by @HyaCiovo in langgenius#26502)
Ensured correct JSON serialization and payload indentation across APIs (by @QuantumGhost in langgenius#27097, @ZeroZ-lab in langgenius#26871)
🧹 Cleanup & DevX
Removed unused dependencies, redundant DB commits, and dead templates (by @asukaminato0721, @yihong0618, @IthacaDream)
Added Knip configuration for dead code detection (by @ZeroZ-lab in langgenius#26758)
Improved internal observability with HTTPX tracing and async telemetry (by @qiqizjl in langgenius#26651)
Upgrade Guide
Docker Compose Deployments
Back up your customized docker-compose YAML file (optional)
cd docker
cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
Get the latest code from the main branch
git checkout main
git pull origin main
Stop the service (run this in the docker directory)
docker compose down
Back up data
tar -cvf volumes-$(date +%s).tgz volumes
Upgrade services
docker compose up -d
Source Code Deployments
Stop the API server, Worker, and Web frontend Server.
Get the latest code from the release branch:
git checkout 1.9.2
Update Python dependencies:
cd api
uv sync
Run the database migration script:
uv run flask db upgrade
Finally, run the API server, Worker, and Web frontend Server again.
v1.9.1 – 1,000 Contributors, Infinite Gratitude
We're thrilled to celebrate our 1,000th contributor!
🚀 New Features
Infrastructure & DevOps:
Next.js upgraded to 15.5, now leveraging Turbopack in development for a faster, more modern build pipeline by @17hz in langgenius#24346.
Provided X-Dify-Version headers in marketplace API access for better traceability by @RockChinQ in langgenius#26210.
Security reporting improvements, with new sec report workflow added by @crazywoola in langgenius#26313.
Pipelines & Engines:
Built-in pipeline templates now support language configuration, unlocking multilingual deployments by @WTW0313 in langgenius#26124.
Graph engine now blocks response nodes during streaming to avoid unintended outputs by @laipz8200 in langgenius#26364 / langgenius#26377.
Community & Documentation:
Streamlined AGENTS.md contribution guidelines by @laipz8200 in langgenius#26308.
Updated Graph Engine README docs for clarity by @hjlarry in langgenius#26337.
🛠 Fixes & Improvements
Debugging & Logging:
Fixed NodeRunRetryEvent debug logging not working properly in Graph Engine by @quicksandznzn in langgenius#26085.
Fixed LLM node losing Flask context during parallel iterations, ensuring stable concurrent runs by @quicksandznzn in langgenius#26098.
Fixed agent-strategy prompt generator error by @quicksandznzn in langgenius#26278.
Search & Parsing:
Fixed full_text_search name reliability by @JohnJyong in langgenius#26104.
Corrected value extraction handling in IME composition for search input fields by @yangzheli in langgenius#26147.
OceanBase parser selection explanation clarified by @longbingljw in langgenius#26071.
Pipeline & Workflow:
Fixed workflow variable splitting logic (requires ≥2 parts) by @zhanluxianshen in langgenius#26355.
Fixed tool node attribute tool_node_version judgment error causing compatibility issues by @goofy-z in langgenius#26274.
Fixed iteration conversation variables not syncing correctly by @laipz8200 in langgenius#26368.
Fixed Knowledge Base node crash when retrieval_model is null by @quicksandznzn in langgenius#26397.
Fixed workflow node mutation issues, preventing props from being incorrectly altered by @hyongtao-code in langgenius#26266.
Removed restrictions on adding workflow nodes by @zxhlyh in langgenius#26218.
File Handling:
Fixed remote filename handling so files sent with Content-Disposition: inline are treated as inline rather than parsed incorrectly by @sorphwer in langgenius#25877.
Synced FileUploader context with props to fix inconsistent file parameters in cached variable view by @Woo0ood in langgenius#26199.
Fixed variable not found error (langgenius#26144) by @sqewad in langgenius#26155.
Fixed db connection error in embed_documents() by @AkisAya in langgenius#26196.
Fixed model list refresh when credentials change by @zxhlyh in langgenius#26421.
Fixed retrieval configuration handling and missing vector_setting in dataset components by @WTW0313 in langgenius#26361 / langgenius#26380.
Fixed ChatClient audio_to_text files keyword bug by @EchterTimo in langgenius#26317.
Added missing import IO in client.py by @EchterTimo in langgenius#26389.
Removed FILES_URL in default .yaml settings by @JoJohanse in langgenius#26410.
Performance & Networking:
Improved pooling of httpx clients for requests to code sandbox and SSRF protection by @Blackoutta in langgenius#26052.
Distributed plugin auto-upgrade tasks with concurrency control by @RockChinQ in langgenius#26282.
Switched plugin auto-upgrade cache to Redis for reliability by @RockChinQ in langgenius#26356.
Fixed plugin detail panel not showing when >100 plugins are installed by @JzoNgKVO in langgenius#26405.
Debounce reference fix for performance stability by @crazywoola in langgenius#26433.
UI/UX & Display:
Fixed lingering display-related issues (translations, UI consistency) by @hjlarry in langgenius#26335.
Fixed broken CSS animations under Turbopack by naming unnamed animations in CSS modules by @lyzno1 in langgenius#26408.
Fixed verification code input using wrong maxLength prop by @hyongtao-code in langgenius#26244.
Fixed array-only filtering in List Operator picker, removed file-children fallback, aligned child types by @Woo0ood in langgenius#26240.
Fixed translation inconsistencies in ja-JP: “ナレッジベース” vs. “ナレッジの名前とアイコン” by @mshr-h in langgenius#26243 and @NeatGuyCoding in langgenius#26270.
Improved “time from now” i18n support by @hjlarry in langgenius#26328.
Standardized dataset-pipeline i18n terminology by @lyzno1 in langgenius#26353.
Code & Components:
Refactored component exports for consistency by @ZeroZ-lab in langgenius#26033.
Refactored router to apply ns.route style by @laipz8200 in langgenius#26339.
Refactored lint scripts to remove duplication and simplify naming by @lyzno1 in langgenius#26259.
Applied @console_ns.route decorators to RAG pipeline controllers (internal refactor) by @Copilot in langgenius#26348.
Added missing type="button" attributes in components by @Copilot in langgenius#26249.
Upgrade Guide
Docker Compose Deployments
Back up your customized docker-compose YAML file (optional)
cd docker
cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
Get the latest code from the main branch
git checkout main
git pull origin main
Stop the service (run this in the docker directory)
docker compose down
Back up data
tar -cvf volumes-$(date +%s).tgz volumes
Upgrade services
docker compose up -d
Source Code Deployments
Stop the API server, Worker, and Web frontend Server.
Get the latest code from the release branch:
git checkout 1.9.1
Update Python dependencies:
cd api
uv sync
Run the database migration script:
uv run flask db upgrade
Finally, run the API server, Worker, and Web frontend Server again.
What's Changed
fix(api): graph engine debug logging NodeRunRetryEvent not effective by @quicksandznzn in langgenius#26085
fix full_text_search name by @JohnJyong in langgenius#26104
bump nextjs to 15.5 and turbopack for development mode by @17hz in langgenius#24346
chore: refactor component exports for consistency by @ZeroZ-lab in langgenius#26033
fix:add some explanation for oceanbase parser selection by @longbingljw in langgenius#26071
feat(pipeline): add language support to built-in pipeline templates and update related components by @WTW0313 in langgenius#26124
ci: Add hotfix/** branches to build-push workflow triggers by @QuantumGhost in langgenius#26129
fix(api): Fix variable truncation for list[File] value in output mapping by @QuantumGhost in langgenius#26133
one example of Session by @asukaminato0721 in langgenius#24135
fix(api):LLM node losing Flask context during parallel iterations by @quicksandznzn in langgenius#26098
fix(search-input): ensure proper value extraction in composition end handler by @yangzheli in langgenius#26147
delete end_user check by @JohnJyong in langgenius#26187
improve: pooling httpx clients for requests to code sandbox and ssrf by @Blackoutta in langgenius#26052
fix: remote filename will be 'inline' if Content-Disposition: inline by @sorphwer in langgenius#25877
perf: provide X-Dify-Version for marketplace api access by @RockChinQ in langgenius#26210
Chore/remove add node restrict of workflow by @zxhlyh in langgenius#26218
Fix array-only filtering in List Operator picker; remove file children fallback and align child types. by @Woo0ood in langgenius#26240
fix: sync FileUploader context with props to fix inconsistent file parameter state in “View cached variables”. by @Woo0ood in langgenius#26199
fix: add echarts and zrender to transpilePackages for ESM compatibility by @lyzno1 in langgenius#26208
chore: fix inaccurate translation in ja-JP by @mshr-h in langgenius#26243
aliyun_trace: unify the span attribute & compatible CMS 2.0 endpoint by @hieheihei in langgenius#26194
fix(api): resolve error in agent‑strategy prompt generator by @quicksandznzn in langgenius#26278
...
v1.9.0 - Orchestrating Knowledge, Powering Workflows
🚀 Introduction
In Dify 1.9.0, we are introducing two major new capabilities: the Knowledge Pipeline and the Queue-based Graph Engine.
This is a beta release, and we hope to explore these improvements together with you and gather your feedback. The Knowledge Pipeline provides a modularized and extensible workflow for knowledge ingestion and processing, while the Queue-based Graph Engine makes workflow execution more robust and controllable. We believe these will help you build and debug AI applications more smoothly, and we look forward to your experiences to help us continuously improve.
📚 Knowledge Pipeline
✨ Introduction
With the brand-new orchestration interface for knowledge pipelines, we introduce a fundamental architectural upgrade that reshapes how document processing is designed and executed. It provides a more modular and flexible workflow that lets users orchestrate every stage of the pipeline. Enhanced by a wide range of powerful plugins available in the marketplace, it empowers users to flexibly integrate diverse data sources and processing tools. Ultimately, this architecture enables highly customized, domain-specific RAG solutions that meet enterprises' growing demands for scalability, adaptability, and precision.
❓ Why Do We Need It?
Previously, Dify's RAG users encountered persistent challenges in real-world adoption, from inaccurate knowledge retrieval and information loss to limited data integration and extensibility. Common pain points included:
🔗 restricted integration of data sources
🖼️ missing critical elements such as tables and images
✂️ suboptimal chunking results
All of these degrade answer quality and hinder the model's overall performance.
In response, we reimagined RAG in Dify as an open and modular architecture, enabling developers, integrators, and domain experts to build document processing pipelines tailored to their specific requirements—from data ingestion to chunk storage and retrieval.
🛠️ Core Capabilities
🧩 Knowledge Pipeline Architecture
The Knowledge Pipeline is a visual, node-based orchestration system dedicated to document ingestion. It provides a customizable way to automate complex document processing, enabling fine-grained transformations and bridging raw content with structured, retrievable knowledge. Developers can build workflows step by step, like assembling puzzle pieces, making document handling easier to observe and adjust.
📑 Templates & Pipeline DSL
⚡ Start quickly with official templates
🔄 Customize and share pipelines by importing/exporting via DSL for easier reusability and collaboration
🔌 Customizable Data Sources & Tools
Each knowledge base can support multiple data sources. You can seamlessly integrate local files, online documents, cloud drives, and web crawlers through a plugin-based ingestion framework. Developers can extend the ecosystem with new data-source plugins, while marketplace processors handle specialized use cases like formulas, spreadsheets, and image parsing — ensuring accurate ingestion and structured representation.
🧾 New Chunking Strategies
In addition to General and Parent-Child modes, the new Q&A Processor plugin supports Q&A structures. This expands coverage for more use cases, balancing retrieval precision with contextual completeness.
🖼️ Image Extraction & Retrieval
Extract images from documents in multiple formats, store them as URLs in the knowledge base, and enable mixed text-image outputs to improve LLM-generated answers.
🧪 Test Run & Debugging Support
Before publishing a pipeline, you can:
🔍 Inspect intermediate variables in detail
👀 Preview string variables as Markdown in the variable inspector
This provides safe iteration and debugging at every stage.
🔄 One-Click Migration from Legacy Knowledge Bases
Seamlessly convert existing knowledge bases into the Knowledge Pipeline architecture with a single action, ensuring smooth transition and backward compatibility.
🌟 Why It Matters
The Knowledge Pipeline makes knowledge management more transparent, debuggable, and extensible. It is not the endpoint, but a foundation for future enhancements such as multimodal retrieval, human-in-the-loop collaboration, and enterprise-level data governance. We’re excited to see how you apply it and share your feedback.
⚙️ Queue-based Graph Engine
❓ Why Do We Need It?
Previously, designing workflows with parallel branches often led to:
🌀 Difficulty managing branch states and reproducing errors
❌ Insufficient debugging information
🧱 Rigid execution logic lacking flexibility
These issues reduced the usability of complex workflows. To solve this, we redesigned the execution engine around queue scheduling, improving management of parallel tasks.
🛠️ Core Capabilities
📋 Queue Scheduling Model
All tasks enter a unified queue, where the scheduler manages dependencies and order. This reduces errors in parallel execution and makes topology more intuitive.
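The idea can be illustrated with a small sketch (not Dify's actual implementation): every node becomes a task in one unified queue, and the dispatcher executes a task only once all of its upstream dependencies have finished.

```python
from collections import deque

# Illustrative sketch of unified-queue scheduling (not Dify's actual code).
# deps maps each node to the set of upstream nodes it waits on.
# Assumes an acyclic graph; a cycle would loop forever.
def run_graph(deps, execute):
    finished = set()
    queue = deque(deps)             # every task enters one unified queue
    order = []
    while queue:
        node = queue.popleft()
        if deps[node] <= finished:  # all dependencies satisfied?
            execute(node)
            finished.add(node)
            order.append(node)
        else:
            queue.append(node)      # not ready yet: requeue
    return order

# Two parallel branches feeding an aggregator
deps = {
    "start": set(),
    "branch_1": {"start"},
    "branch_2": {"start"},
    "aggregator": {"branch_1", "branch_2"},
    "end": {"aggregator"},
}
print(run_graph(deps, execute=lambda node: None))
# ['start', 'branch_1', 'branch_2', 'aggregator', 'end']
```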
🎯 Flexible Execution Start Points
Execution can begin at any node, supporting partial runs, resumptions, and subgraph invocations.
🌊 Stream Processing Component
A new ResponseCoordinator handles streaming outputs from multiple nodes, such as token-by-token LLM generation or staged results from long-running tasks.
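The behavior can be sketched roughly as follows (a simplified illustration, not the actual ResponseCoordinator API): chunks may arrive interleaved from several nodes, but they are forwarded in response order, with early arrivals held in a buffer.

```python
from collections import defaultdict, deque

# Simplified illustration of stream coordination (not the real API):
# forward chunks in response order, buffering early arrivals.
class StreamCoordinator:
    def __init__(self, node_order):
        self.order = deque(node_order)     # nodes in answer order
        self.buffers = defaultdict(deque)  # early chunks wait here
        self.done = set()

    def push(self, node_id, chunk):
        """Accept a chunk; return any chunks now safe to emit."""
        self.buffers[node_id].append(chunk)
        return self._drain()

    def finish(self, node_id):
        """Mark a node complete, releasing buffered downstream chunks."""
        self.done.add(node_id)
        return self._drain()

    def _drain(self):
        emitted = []
        while self.order:
            head = self.order[0]
            while self.buffers[head]:
                emitted.append(self.buffers[head].popleft())
            if head not in self.done:
                break
            self.order.popleft()
        return emitted

coord = StreamCoordinator(["llm_1", "llm_2"])
print(coord.push("llm_2", "B1"))  # [] -- llm_1 hasn't answered yet
print(coord.push("llm_1", "A1"))  # ['A1']
print(coord.finish("llm_1"))      # ['B1'] -- buffered chunk released
```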
🕹️ Command Mechanism
With the CommandProcessor, workflows can be paused, resumed, or terminated during execution, enabling external control.
🧩 GraphEngineLayer
A new plugin layer that allows extending engine functionality without modifying core code. It can observe execution state, send commands, and support custom monitoring or logging.
Quickstart
Prerequisites
Dify version: 1.9.0 or higher
How to Enable
Enabled by default, no additional configuration required.
Debug mode: set DEBUG=true to enable DebugLoggingLayer.
Execution limits:
WORKFLOW_MAX_EXECUTION_STEPS=500
WORKFLOW_MAX_EXECUTION_TIME=1200
WORKFLOW_CALL_MAX_DEPTH=10
Worker configuration (optional):
WORKFLOW_MIN_WORKERS=1
WORKFLOW_MAX_WORKERS=10
WORKFLOW_SCALE_UP_THRESHOLD=3
WORKFLOW_SCALE_DOWN_IDLE_TIME=30
Applies to all workflows.
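For Docker Compose deployments, these settings live in the docker `.env` file. An illustrative excerpt using the default values listed above (tune them to your workload):

```shell
# docker/.env excerpt -- defaults listed above; tune per workload.
# Set DEBUG=true to enable DebugLoggingLayer.
DEBUG=false
WORKFLOW_MAX_EXECUTION_STEPS=500
WORKFLOW_MAX_EXECUTION_TIME=1200
WORKFLOW_CALL_MAX_DEPTH=10
# Worker pool auto-scales between min and max based on load.
WORKFLOW_MIN_WORKERS=1
WORKFLOW_MAX_WORKERS=10
WORKFLOW_SCALE_UP_THRESHOLD=3
WORKFLOW_SCALE_DOWN_IDLE_TIME=30
```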
More Controllable Parallel Branches
Execution Flow:

```
Start ─→ Unified Task Queue ─→ WorkerPool Scheduling
                                 ├─→ Branch-1 Execution
                                 └─→ Branch-2 Execution
                                          ↓
                                      Aggregator
                                          ↓
                                         End
```
Improvements:
- All tasks enter a single queue, managed by the Dispatcher.
- WorkerPool auto-scales based on load.
- ResponseCoordinator manages streaming outputs, ensuring correct order.
Example: Command Mechanism

```python
from core.workflow.graph_engine.manager import GraphEngineManager

# Send a stop command to a running workflow
GraphEngineManager.send_stop_command(
    task_id="workflow_task_123",
    reason="Emergency stop: resource limit exceeded"
)
```
Note: pause/resume functionality will be supported in future versions.
Example: GraphEngineLayer
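As a rough illustration of the layer idea (the base class and hook names below are hypothetical stand-ins, not the actual GraphEngineLayer interface; see the Graph Engine README for the real one), a layer observes engine events without touching core code:

```python
# Rough sketch of the layer idea. The base class and hook names below are
# hypothetical stand-ins, NOT the actual GraphEngineLayer interface.
class EngineLayer:
    def on_graph_start(self): ...
    def on_event(self, event): ...
    def on_graph_end(self, error=None): ...

class EventCountingLayer(EngineLayer):
    """Observes engine events without modifying core engine code."""
    def __init__(self):
        self.seen = []

    def on_graph_start(self):
        self.seen.append("graph_start")

    def on_event(self, event):
        self.seen.append(type(event).__name__)

    def on_graph_end(self, error=None):
        self.seen.append("graph_end")

# The engine would invoke these hooks during a run; we simulate that here:
layer = EventCountingLayer()
layer.on_graph_start()
layer.on_graph_end()
print(layer.seen)  # ['graph_start', 'graph_end']
```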