Conversation
due to some ancient piece of code that's supposed to work around a bug in 1990s internet explorer, sockets were switched between blocking and non-blocking mode, making it hard to tell when socket timeouts kicked in. with the IE bug workaround removed, sockets are now always in blocking mode, so we no longer need to catch EAGAIN/EWOULDBLOCK and treat them specially: they are now always treated as an error (whenever they are returned, the timeout kicked in). this should fix once and for all the cases where tinyproxy would not respect the user-provided socket timeouts, potentially causing endless loops.
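The behavior the commit relies on can be demonstrated from userspace: on an always-blocking socket with `SO_RCVTIMEO` set, a read that fails with EAGAIN/EWOULDBLOCK can only mean the timeout fired. A minimal Python sketch (packing `struct timeval` as two native longs is an assumption that holds on 64-bit Linux):

```python
import errno
import socket
import struct

# A connected socketpair stands in for a proxied connection whose peer
# never sends anything.
a, b = socket.socketpair()

# SO_RCVTIMEO takes a struct timeval; two native longs (seconds,
# microseconds) is the layout assumed here, valid on 64-bit Linux.
a.setsockopt(socket.SOL_SOCKET, socket.SO_RCVTIMEO,
             struct.pack("ll", 0, 200_000))

try:
    a.recv(1)  # blocking read; fails once the 200 ms timeout expires
except OSError as e:
    # On a socket that is never switched to non-blocking mode, these
    # errno values can only mean "the timeout kicked in", so treating
    # them as a hard error (instead of retrying) is safe.
    print("recv failed with", errno.errorcode[e.errno])
finally:
    a.close()
    b.close()
```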
|
Agreed, this is the proper fix. Treating EAGAIN/EWOULDBLOCK as "retry" is what masked the timeouts in the first place. This also fixes the same class of issue inside the main relay loop. |
|
thanks for the analysis. did you also happen to test the PR in the scenario that led you to notice the endless loop? |
|
It takes some time before I am able to trigger the endless loop. Besides, I have a niche use case, so I am writing my own proxy at https://github.com/renaudallard/thinproxy |
interesting. i pondered doing something similar too, in the style of microsocks because the handshake for the CONNECT method is more efficient than SOCKS5. however, even with poll(), using only one thread will soon become a bottleneck if there are any concurrent file transfers.
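For illustration, the two handshakes side by side. The HTTP CONNECT method needs one request/response round trip before relaying begins, while SOCKS5 (per RFC 1928, no authentication) needs two: a greeting/method-selection exchange, then the connect request and reply. The target `example.com:443` is just an example:

```python
# HTTP CONNECT: a single request, answered by e.g. "HTTP/1.1 200 ..."
connect_req = b"CONNECT example.com:443 HTTP/1.1\r\nHost: example.com:443\r\n\r\n"

# SOCKS5 greeting: VER=5, NMETHODS=1, METHODS=[0x00 "no auth"]
socks_greeting = bytes([0x05, 0x01, 0x00])

# SOCKS5 request: VER=5, CMD=1 (CONNECT), RSV=0, ATYP=3 (domain name),
# then length-prefixed hostname and a 2-byte big-endian port
host = b"example.com"
socks_request = (bytes([0x05, 0x01, 0x00, 0x03, len(host)])
                 + host + (443).to_bytes(2, "big"))

print(len(connect_req), "bytes, 1 round trip (CONNECT)")
print(len(socks_greeting) + len(socks_request), "bytes, 2 round trips (SOCKS5)")
```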
so just to be clear, did you actually experience any endless loops using tinyproxy when you filed #598, or did you just read the code and figure that could happen? in either case, if you already have some sort of test client and server that deliberately make connections hang, i'd be grateful so i don't have to write them myself. python or perl would probably be the best candidates. |
|
I am running multiple servers in a round-robin fashion, and I noticed a hang. But it was only clear when 50%+1 of the servers failed, and it can take some time. |
|
were these servers under your control? i figure once a thread goes into the endless loop, write/read syscalls would return immediately, causing high cpu usage. i'm asking because i've been running tinyproxy 1.11+ for years for all my connections (with uptimes of several months) and never noticed unresponsive instances or high cpu usage. |
|
It's running on OpenBSD, and it seems connections never end properly: the sockets are still there even though tinyproxy has not been running for 30+ minutes.
|
|
I submitted this for the FIN_WAIT_2 issue: #600 |