fix catching timed-out socket #599

Open

rofl0r wants to merge 1 commit into tinyproxy:master from rofl0r:all_blocking

Conversation

rofl0r (Contributor) commented Mar 18, 2026

due to some ancient piece of code that's supposed to work around a bug in 1990s internet explorer, sockets were switched between blocking and non-blocking mode, making it hard to differentiate when socket timeouts kicked in.

with the IE bug workaround removed, sockets are now always in blocking mode, so we no longer need to catch EAGAIN/EWOULDBLOCK and treat them specially; they are now always treated as an error (whenever they are returned, the timeout kicked in).

this should fix, once and for all, the cases where tinyproxy would not respect the user-provided socket timeouts, potentially causing endless loops.

@renaudallard

Agreed, this is the proper fix. Treating EAGAIN/EWOULDBLOCK as "retry" in write_buffer()/read_buffer() only makes sense for non-blocking sockets, and since nothing switches to non-blocking mode anymore (the only place that did was the IE hack), they always mean the SO_SNDTIMEO/SO_RCVTIMEO timeout fired and should be treated as errors.

This also fixes the same class of issue inside the main relay loop (the write_buffer() < 0 checks at lines 1254 and 1258), where a timed-out send would return 0 and silently continue rather than breaking out. Less severe there since poll() gates the calls, but still wrong.

rofl0r (Contributor, Author) commented Mar 18, 2026

thanks for the analysis. did you also happen to test the PR in the scenario that led you to notice the endless loop?

@renaudallard

It takes some time before I am able to trigger the endless loop. Besides, I have a niche use, so I am writing my own proxy at https://github.com/renaudallard/thinproxy

rofl0r (Contributor, Author) commented Mar 19, 2026

> I am writing my own proxy at https://github.com/renaudallard/thinproxy

interesting. i pondered doing something similar too, in the style of microsocks because the handshake for the CONNECT method is more efficient than SOCKS5. however, even with poll(), using only one thread will soon become a bottleneck if there are any concurrent file transfers.

> It takes some time before I am able to trigger the endless loop.

so just to be clear: did you actually experience any endless loops using tinyproxy when you filed #598, or did you just read the code and figure that it could happen?

in either case, if you already have some sort of test client and server that deliberately make connections hang, i'd be grateful, so i don't have to write them myself. python or perl would probably be the best candidates.

@renaudallard

I am running multiple servers in a round-robin fashion, and I noticed a hang. But it only became apparent once 50%+1 of the servers had failed, and that can take some time.

rofl0r (Contributor, Author) commented Mar 19, 2026

were these servers under your control? i figure once a thread goes into the endless loop, the write/read syscalls would return immediately, causing high cpu usage. i'm asking because i've been running tinyproxy 1.11+ for years for all my connections (with uptimes of several months) and have never noticed unresponsive instances or high cpu usage.

renaudallard commented Mar 19, 2026

It's running on OpenBSD, and it seems connections never terminate properly. This is with tinyproxy having been stopped for 30+ minutes:

tcp 0 0 172.20.66.101.8080 172.20.66.3.17914 FIN_WAIT_2
tcp 0 0 172.20.66.101.8080 172.20.66.3.13047 FIN_WAIT_2
tcp 0 0 172.20.66.101.8080 172.20.66.3.30194 FIN_WAIT_2
tcp 0 0 172.20.66.101.8080 172.20.66.3.3311 FIN_WAIT_2
tcp 0 0 172.20.66.101.8080 172.20.66.3.10220 FIN_WAIT_2

@renaudallard

I submitted this for the FIN_WAIT_2 issue: #600
