Fix infinite loop in post-relay flush when peer stops reading #598

Open

renaudallard wants to merge 1 commit into tinyproxy:master from renaudallard:master

Conversation

@renaudallard

write_buffer() returns 0 on EAGAIN/EWOULDBLOCK (from an SO_SNDTIMEO timeout), but the flush loops after relay_connection() only broke on a return value < 0. This caused an infinite retry loop when the peer stopped accepting data: send() would block for idletimeout seconds, fail with EAGAIN, write_buffer() would return 0, and the loop would retry forever.

Each stuck thread consumes a maxclients slot. Once all slots are exhausted, the proxy stops accepting new connections entirely.

Fix: break on write_buffer() <= 0 in both client and server flush loops. These are best-effort flushes after the main relay loop already delivered the bulk of the data.
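
The stuck loop is easiest to see in isolation. Below is a minimal, self-contained C sketch, not tinyproxy's actual code: mock_write_buffer() and flush_remaining() are hypothetical stand-ins illustrating why the flush loop must break on <= 0 rather than only < 0.

```c
#include <stddef.h>
#include <sys/types.h>

static int calls;

/* Mimics write_buffer(): positive = bytes sent, 0 = EAGAIN/EWOULDBLOCK
 * (SO_SNDTIMEO expired because the peer stopped reading), -1 = hard error. */
static ssize_t mock_write_buffer(void)
{
        calls++;
        if (calls <= 2)
                return 512;     /* peer still reading: some bytes flushed */
        return 0;               /* peer stalled: send() timed out, EAGAIN */
}

/* Best-effort post-relay flush; returns the number of bytes left unflushed. */
static size_t flush_remaining(size_t pending)
{
        while (pending > 0) {
                ssize_t ret = mock_write_buffer();
                if (ret <= 0)   /* the fix: 0 must break too, or we spin */
                        break;
                pending -= (size_t)ret;
        }
        return pending;
}
```

With the old `ret < 0` check, the loop above would keep calling the write function forever once it starts returning 0; breaking on `<= 0` abandons the best-effort flush after one timed-out send().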

@rofl0r (Contributor) commented Mar 18, 2026

Nice debugging work here. But IMO your PR doesn't go far enough; that's why I opened #599. Would you mind testing my PR and confirming whether it fixes the issues you encountered?
