When receiving REFUSED_STREAM, mark the connection for close and retry
streams accordingly on another/fresh connection.
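A refused stream is guaranteed untouched by the server, so it is safe
to send it again. A minimal standalone sketch of the idea (simplified
types, not libcurl's actual internals; 0x7 is NGHTTP2_REFUSED_STREAM):

    struct stream {
      int id;
      int refused;          /* set when RST_STREAM carried REFUSED_STREAM */
    };
    struct connection {
      int close_pending;    /* marked for close, gets no new streams */
    };

    static void on_rst_stream(struct connection *conn, struct stream *s,
                              unsigned int error_code)
    {
      if(error_code == 0x7) {    /* NGHTTP2_REFUSED_STREAM */
        conn->close_pending = 1; /* stop using this connection */
        s->refused = 1;          /* retry this stream on a fresh one */
      }
    }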
Reported-by: Terry Wu
Fixes #2416
Fixes #1618
Closes #2510
When a transfer is requested to get done and is put in the pending
queue because the number of connections, total or per-host, is at its
limit, libcurl would previously very aggressively retry *ALL* pending
transfers to get them transferring. That was very time consuming.
By reducing the aggressiveness in how pending transfers are retried, we
waste MUCH less time on putting transfers back into pending again.
Some test cases got a factor 30(!) speed improvement with this change.
Reported-by: Cyril B
Fixes #2369
Closes #2383
Especially when unpausing a transfer, the socket might have to be moved
back to the "currently used sockets" hash to get monitored again.
Otherwise it would never receive any more data and would get stuck.
Easily triggered by pausing a transfer when using the multi_socket API.
Reported-by: Philip Prindeville
Bug: https://curl.haxx.se/mail/lib-2018-03/0048.html
Fixes #2393
Closes #2391
Due to very frequent updates of the rate limit "window", libcurl could
attempt to rate limit within the same millisecond, which made the
calculations wrong and led to incorrect behavior on very fast
transfers.
The new logic keeps the rate limit "window" no shorter than the last
three seconds and only updates the timestamps for it when switching
between the TOOFAST and PERFORM states.
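A standalone sketch of the idea (illustrative names, not the exact
libcurl code): the window reference only slides forward once enough
time has passed, so the bytes/time division never runs on a near-zero
time span:

    #define MIN_RATE_WINDOW_MS 3000  /* keep at least 3 seconds of history */

    struct ratelimit {
      long long start_ms;   /* reference timestamp of the window */
      long long bytes;      /* bytes counted since start_ms */
    };

    /* slide the window forward only when it is old enough; in libcurl
       this reset happens when switching between TOOFAST and PERFORM */
    static void maybe_reset_window(struct ratelimit *r, long long now_ms)
    {
      if(now_ms - r->start_ms >= MIN_RATE_WINDOW_MS) {
        r->start_ms = now_ms;
        r->bytes = 0;
      }
    }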
Reported-by: 刘佩东
Fixes #2386
Closes #2388
Prune the DNS cache immediately after the dns entry is unlocked in
multi_done. Timed out entries will then get discarded in a more orderly
fashion.
Test 506 is updated.
Reported-by: Oleg Pudeyev
Fixes #2169
Closes #2170
If the lock is released before the dealings with the bundle are over,
it may have been changed by another thread in the meantime.
Fixes #2132
Fixes #2151
Closes #2139
Returning 'time_t' is problematic when that type is unsigned and we
return values less than zero to signal "already expired", used in
several places in the code.
Closes #2021
... since the 'tv' stood for timeval and this function does not return a
timeval struct anymore.
Also cleaned up the Curl_timediff*() functions to avoid typecasts and
improved the descriptive comments.
Closes #2011
... to cater for systems with unsigned time_t variables.
- Renamed the functions to Curl_timediff and Curl_timediff_us.
- Added overflow protection for both of them in either direction for
both 32-bit and 64-bit time_ts
- Reprefixed the curlx_time functions to use Curl_*
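A standalone sketch of the overflow protection, assuming a 64-bit
signed timediff_t and a seconds difference computed from two time_t
values (illustrative, close in spirit to the real Curl_timediff):

    #include <limits.h>

    typedef long long timediff_t;
    #define TIMEDIFF_T_MAX LLONG_MAX
    #define TIMEDIFF_T_MIN LLONG_MIN

    /* milliseconds between two points in time, clamped instead of
       overflowing in either direction */
    static timediff_t diff_ms(long long diff_sec, int diff_usec)
    {
      if(diff_sec > TIMEDIFF_T_MAX / 1000)
        return TIMEDIFF_T_MAX;
      if(diff_sec < TIMEDIFF_T_MIN / 1000)
        return TIMEDIFF_T_MIN;
      return diff_sec * 1000 + diff_usec / 1000;
    }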
Reported-by: Peter Piekarski
Fixes #2004
Closes #2005
This reverts commit f3e03f6c0a.
Caused memory leaks in the fuzzer, needs to be done differently.
Disable test 1553 for now too, as it causes memory leaks without this
commit!
... fixes a memory leak with at least IMAP when remove_handle is never
called and the transfer is abruptly just abandoned early.
Test 1552 added to verify
Detected by OSS-fuzz
Assisted-by: Max Dymond
Closes #1954
There are some bugs in how timers are managed for a single easy handle
that causes the wrong "next timeout" value to be reported to the
application when a new minimum needs to be recomputed and that new
minimum should be an existing timer that isn't currently set for the
easy handle. When the application drives a set of easy handles via the
`curl_multi_socket_action()` API (for example), it gets told to wait the
wrong amount of time before the next call, which causes requests to
linger for a long time (or, it is my guess, possibly forever).
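The expected behavior, expressed as a standalone sketch (illustrative,
not libcurl's data structures): when the soonest timer goes away, the
next-soonest *existing* timer must become the new reported minimum:

    #include <stddef.h>

    /* return the smallest pending timeout in ms, or -1 when none is
       set; entries < 0 mean "not set" */
    static long next_timeout_ms(const long *timers, size_t n)
    {
      long min = -1;
      size_t i;
      for(i = 0; i < n; i++)
        if(timers[i] >= 0 && (min < 0 || timers[i] < min))
          min = timers[i];
      return min;
    }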
Bug: https://curl.haxx.se/mail/lib-2017-07/0033.html
... to make all libcurl internals able to use the same data types for
the struct members. The timeval struct differs subtly on several
platforms, which makes it cumbersome to use everywhere.
Ref: #1652
Closes #1693
With the introduction of expire IDs and the fact that existing timers
can now be removed and thus never expire, the concept of adding a
"latest" timer no longer works, as such a timer risks never expiring at
all. So, to be certain the timers actually are in line and will expire,
the plain Curl_expire() needs to be used. The _latest() function was
added as a sort of shortcut in the past that's quite simply not
necessary anymore.
Follow-up to 31b39c40cf
Reported-by: Paul Harris
Closes #1555
... since the total amount is low this is faster, easier and reduces
memory overhead.
Also, Curl_expire_done() can now mark an expire timeout as done so that
it never times out.
Closes #1472
A) reduces the timeout lists drastically
B) prevents a lot of superfluous loops for timers that expire "in vain"
when they have actually already been extended to fire later on
`if(nfds || extra_nfds) {` is followed by `malloc(nfds * ...)`.
If `extra_nfds` could be non-zero when `nfds` was zero, then we would
have `malloc(0)`, which is allowed to return `NULL`. But malloc
returning NULL can be confusing. In this code, the next line would
treat the NULL as an allocation failure.
It turns out, if `nfds` is zero then `extra_nfds` must also be zero.
The final value of `nfds` includes `extra_nfds`. So the test for
`extra_nfds` is redundant. It can only confuse the reader.
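With the redundant test gone, the check can presumably be reduced to
something like:

    if(nfds) {
      ufds = malloc(nfds * sizeof(struct pollfd));
      if(!ufds)
        return CURLM_OUT_OF_MEMORY; /* a real allocation failure */
    }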
Closes #1439
The 'list element' struct now has to be within the data that is being
added to the list. This removes 16.6% of the (tiny) mallocs from a
simple HTTP transfer (96 => 80).
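The intrusive layout, roughly (simplified; the struct names are
illustrative): the list node lives inside the stored object, so
insertion needs no separate node allocation:

    struct llist_element {
      void *ptr;                   /* the data this node belongs to */
      struct llist_element *prev;
      struct llist_element *next;
    };

    struct cookie {
      char *name;
      char *value;
      struct llist_element node;   /* embedded, no extra malloc needed */
    };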
Also removed return codes since the llist functions can't fail now.
Test 1300 updated accordingly.
Closes #1435
When receiving chunked encoded data with trailers, and the write
callback returns PAUSE, there might be both body and header data to
store and resend on unpause. Previously, libcurl returned an error for
that case.
Added test case 1540 to verify.
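From the application side this is the ordinary pause mechanism; a
minimal sketch (ready_to_consume() is a hypothetical application
check):

    #include <curl/curl.h>

    static int ready_to_consume(void)
    {
      return 0; /* hypothetical: pretend we cannot take data right now */
    }

    static size_t write_cb(char *ptr, size_t size, size_t nmemb,
                           void *userp)
    {
      (void)ptr;
      (void)userp;
      if(!ready_to_consume())
        return CURL_WRITEFUNC_PAUSE; /* libcurl now buffers body and
                                        trailer headers until unpause */
      return size * nmemb;
    }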
Reported-by: Stephen Toub
Fixes #1354
Closes #1357
Properly resolve, convert and log the proxy host names.
Support the "--connect-to" feature for SOCKS proxies and for passive FTP
data transfers.
Follow-up to cb4e2be
Reported-by: Jay Satiro
Fixes https://github.com/curl/curl/issues/1248
* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications as opposed to HTTP proxies that receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well.
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent from their --foo counterparts,
except --proxy-crlfile which defaults to --crlfile and --proxy-capath
which defaults to --capath.
Curl now also supports the %{proxy_ssl_verify_result} --write-out
variable, similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
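In libcurl terms, an HTTPS proxy is configured with the corresponding
CURLOPT_PROXY_* options; a minimal sketch (the URL and file name are
placeholders):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* talk to the proxy itself over TLS: */
        curl_easy_setopt(curl, CURLOPT_PROXY, "https://proxy.example:443");
        /* CA bundle used to verify the proxy's certificate: */
        curl_easy_setopt(curl, CURLOPT_PROXY_CAINFO, "proxy-ca.pem");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }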
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
Visual C++ now complains about implicitly casting time_t (64-bit) to
long (32-bit). Fix this by changing some variables from long to time_t,
or explicitly casting to long where the public interface would be
affected.
Closes #1131
Several independent reports on infinite loops hanging in the
close_all_connections() function when closing a multi handle can be
fixed by first marking the connection to get closed before calling
Curl_disconnect.
This fixes the symptom rather than the underlying problem, though.
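In spirit, the symptom fix amounts to (a simplified fragment):

    /* mark first, so the loop cannot pick this connection up again
       while Curl_disconnect() is tearing it down */
    conn->bits.close = TRUE;
    Curl_disconnect(conn, FALSE); /* FALSE: not a dead connection */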
Bug: https://curl.haxx.se/mail/lib-2016-10/0011.html
Bug: https://curl.haxx.se/mail/lib-2016-10/0059.html
Reported-by: Dan Fandrich, Valentin David, Miloš Ljumović
In short, the easy handle needs to be disconnected from its connection
at this point since the connection is still serving other easy handles.
In our app we can reliably reproduce a crash in our http2 stress test
that is fixed by this change. I can't easily reproduce the same test in
a small example.
This is the gdb/asan output:
==11785==ERROR: AddressSanitizer: heap-use-after-free on address 0xe9f4fb80 at pc 0x09f41f19 bp 0xf27be688 sp 0xf27be67c
READ of size 4 at 0xe9f4fb80 thread T13 (RESOURCE_HTTP)
#0 0x9f41f18 in curl_multi_remove_handle /path/to/source/3rdparty/curl/lib/multi.c:666
0xe9f4fb80 is located 0 bytes inside of 1128-byte region [0xe9f4fb80,0xe9f4ffe8)
freed by thread T13 (RESOURCE_HTTP) here:
#0 0xf7b1b5c2 in __interceptor_free /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_malloc_linux.cc:45
#1 0x9f7862d in conn_free /path/to/source/3rdparty/curl/lib/url.c:2808
#2 0x9f78c6a in Curl_disconnect /path/to/source/3rdparty/curl/lib/url.c:2876
#3 0x9f41b09 in multi_done /path/to/source/3rdparty/curl/lib/multi.c:615
#4 0x9f48017 in multi_runsingle /path/to/source/3rdparty/curl/lib/multi.c:1896
#5 0x9f490f1 in curl_multi_perform /path/to/source/3rdparty/curl/lib/multi.c:2123
#6 0x9c4443c in perform /path/to/source/src/net/resourcemanager/ResourceManagerCurlThread.cpp:854
#7 0x9c445e0 in ...
#8 0x9c4cf1d in ...
#9 0xa2be6b5 in ...
#10 0xf7aa5780 in asan_thread_start /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
#11 0xf4d3a16d in __clone (/lib/i386-linux-gnu/libc.so.6+0xe716d)
previously allocated by thread T13 (RESOURCE_HTTP) here:
#0 0xf7b1ba27 in __interceptor_calloc /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_malloc_linux.cc:70
#1 0x9f7dfa6 in allocate_conn /path/to/source/3rdparty/curl/lib/url.c:3904
#2 0x9f88ca0 in create_conn /path/to/source/3rdparty/curl/lib/url.c:5797
#3 0x9f8c928 in Curl_connect /path/to/source/3rdparty/curl/lib/url.c:6438
#4 0x9f45a8c in multi_runsingle /path/to/source/3rdparty/curl/lib/multi.c:1411
#5 0x9f490f1 in curl_multi_perform /path/to/source/3rdparty/curl/lib/multi.c:2123
#6 0x9c4443c in perform /path/to/source/src/net/resourcemanager/ResourceManagerCurlThread.cpp:854
#7 0x9c445e0 in ...
#8 0x9c4cf1d in ...
#9 0xa2be6b5 in ...
#10 0xf7aa5780 in asan_thread_start /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
#11 0xf4d3a16d in __clone (/lib/i386-linux-gnu/libc.so.6+0xe716d)
SUMMARY: AddressSanitizer: heap-use-after-free /path/to/source/3rdparty/curl/lib/multi.c:666 in curl_multi_remove_handle
Shadow bytes around the buggy address:
0x3d3e9f20: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f30: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f40: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f50: fd fd fd fd fd fd fd fd fd fd fd fd fd fa fa fa
0x3d3e9f60: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x3d3e9f70:[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f90: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9fa0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9fb0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9fc0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Heap right redzone: fb
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack partial redzone: f4
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==11785==ABORTING
Thread 14 "RESOURCE_HTTP" received signal SIGABRT, Aborted.
[Switching to Thread 0xf27bfb40 (LWP 12324)]
0xf7fd8be9 in __kernel_vsyscall ()
(gdb) bt
#0 0xf7fd8be9 in __kernel_vsyscall ()
#1 0xf4c7ee89 in __GI_raise (sig=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#2 0xf4c803e7 in __GI_abort () at abort.c:89
#3 0xf7b2ef2e in __sanitizer::Abort () at /opt/toolchain/src/gcc-6.2.0/libsanitizer/sanitizer_common/sanitizer_posix_libcdep.cc:122
#4 0xf7b262fa in __sanitizer::Die () at /opt/toolchain/src/gcc-6.2.0/libsanitizer/sanitizer_common/sanitizer_common.cc:145
#5 0xf7b21ab3 in __asan::ScopedInErrorReport::~ScopedInErrorReport (this=0xf27be171, __in_chrg=<optimized out>) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_report.cc:689
#6 0xf7b214a5 in __asan::ReportGenericError (pc=166993689, bp=4068206216, sp=4068206204, addr=3925146496, is_write=false, access_size=4, exp=0, fatal=true) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_report.cc:1074
#7 0xf7b21fce in __asan::__asan_report_load4 (addr=3925146496) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_rtl.cc:129
#8 0x09f41f19 in curl_multi_remove_handle (multi=0xf3406080, data=0xde582400) at /path/to/source3rdparty/curl/lib/multi.c:666
#9 0x09f6b277 in Curl_close (data=0xde582400) at /path/to/source3rdparty/curl/lib/url.c:415
#10 0x09f3354e in curl_easy_cleanup (data=0xde582400) at /path/to/source3rdparty/curl/lib/easy.c:860
#11 0x09c6de3f in ...
#12 0x09c378c5 in ...
#13 0x09c48133 in ...
#14 0x09c4d092 in ...
#15 0x0a2be6b6 in ...
#16 0xf7aa5781 in asan_thread_start (arg=0xf2d22938) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
#17 0xf5de52b5 in start_thread (arg=0xf27bfb40) at pthread_create.c:333
#18 0xf4d3a16e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:114
Fixes #1083
The closure handle only ever has default timeouts set. To improve the
state somewhat we clone the timeouts from each added handle so that the
closure handle always has the same timeouts as the most recently added
easy handle.
Fixes #739
Speed limits (from CURLOPT_MAX_RECV_SPEED_LARGE &
CURLOPT_MAX_SEND_SPEED_LARGE) were applied simply by comparing the
limits with the cumulative average speed of the entire transfer; while
this might work at times with good/constant connections, in other cases
it can result in the limits simply being "ignored" for more than "short
bursts" (as told in the man page).
Consider a download that goes on much slower than the limit for some
time (because bandwidth is used elsewhere, the server is slow, whatever
the reason); once things get better, curl would simply ignore the limit
until the average speed (since the beginning of the transfer) reached
the limit. This could render the limit useless at effectively
preventing use of the entire bandwidth (at least for quite some time).
So instead, we now use a "moving starting point" as reference, and
every time at least as much as the limit has been transferred, we reset
this starting point to the current position. This gets a good limiting
effect that applies to the "current speed" with instant reactivity (in
case of a sudden speed burst).
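A worked standalone sketch of the waiting-time calculation (names are
illustrative; the sliding of the starting point itself happens in the
progress code):

    /* 'limit' is bytes/second; returns milliseconds to wait, 0 if the
       transfer is within the limit */
    static long wait_ms(long long transferred, long long limit,
                        long long start_ms, long long now_ms)
    {
      long long minimum_ms;
      long long elapsed_ms = now_ms - start_ms;
      if(limit <= 0)
        return 0;                    /* no limit set */
      /* the time this much data is allowed to take, at the limit: */
      minimum_ms = transferred * 1000 / limit;
      if(minimum_ms > elapsed_ms)
        return (long)(minimum_ms - elapsed_ms); /* too fast, back off */
      return 0;
    }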
Closes #971
With HTTP/2, each transfer is made in an individual logical stream over
the connection, so most errors that previously caused the connection to
get force-closed now instead just kill the stream, not the connection.
Fixes #941
Previously, passing a timeout of zero to Curl_expire() was a magic code
for clearing all timeouts for the handle. That is now instead done with
the new Curl_expire_clear() function, and thus a 0 timeout is fine to
set and will trigger a timeout ASAP.
This will help removing short delays, in particular notable when doing
HTTP/2.
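The resulting internal semantics, illustrated (Curl_expire* are the
libcurl-internal calls named above; a fragment, not runnable code):

    Curl_expire(data, 0);    /* now a real timeout: fire ASAP */
    Curl_expire_clear(data); /* the old magic zero: wipe all timeouts */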
Regression added in 790d6de485. That limit was added to avoid one
particular transfer starving out others. But when aborting due to
reading the maxcount, the connection must be marked to be read from
again without first doing a select as for some protocols (like SFTP/SCP)
the data may already have been read off the socket.
Reported-by: Dan Donahue
Bug: https://curl.haxx.se/mail/lib-2016-07/0057.html
The FTP wildcard init is now done in Curl_pretransfer() and the
corresponding cleanup in Curl_close().
The previous location of the init/cleanup code left the internal
pointer NULL when this feature was used with the multi_socket() API,
since it was done within the curl_multi_perform() function.
Reported-by: Jonathan Cardoso Machado
Fixes #800
curl_printf.h defines printf to curl_mprintf, etc. This can cause
problems with external headers which may use
__attribute__((format(printf, ...))) markers etc.
To avoid them causing problems with system includes, we include
curl_printf.h after any system headers. That makes these three always
the last headers included, kept in this order:
curl_printf.h
curl_memory.h
memdebug.h
None of them includes system headers; they all do funny #defines.
Reported-by: David Benjamin
Fixes #743
Previously, when a stream was closed with other than NGHTTP2_NO_ERROR
by RST_STREAM, the underlying TCP connection was dropped. This is
undesirable since there may be other streams multiplexed over it that
are perfectly fine. This change introduces a new error code,
CURLE_HTTP2_STREAM, which indicates a stream error that only affects
the relevant stream; the connection should be kept open. The existing
CURLE_HTTP2 means a connection error in general.
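For an application, the distinction might look like this (a sketch):

    #include <curl/curl.h>

    /* decide whether a failed transfer is worth retrying: a stream
       error leaves the connection and its other transfers intact */
    static int stream_error_only(CURLcode rc)
    {
      /* CURLE_HTTP2_STREAM: only this stream failed.
         CURLE_HTTP2: a connection-level HTTP/2 error. */
      return rc == CURLE_HTTP2_STREAM;
    }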
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
Simplify the code by using a single entry point that looks up a socket
in the socket hash. As indicated in #712, the code looked for
CURL_SOCKET_BAD at some point, which is ineffective/wrong, and this
change makes it easier to avoid that.
Such a return value isn't documented but could still happen, and the
curl tool code checks for it. It would happen when the underlying
Curl_poll() function returns an error. Starting now we mask that error
as a user of curl_multi_wait() would have no way to handle it anyway.
Reported-by: Jay Satiro
Closes #707
The internal Curl_done() function uses Curl_expire() at times and that
uses the timeout list. Better clean up the list once we're done using
it. This caused a segfault.
Reported-by: 蔡文凱
Bug: https://curl.haxx.se/mail/lib-2016-02/0097.html
Introduced in c6aedf680f. It needs to be CURLM_STATE_LAST big since it
must handle the range 0 .. CURLM_STATE_MSGSENT (18) and
CURLM_STATE_LAST is 19 right now.
Reported-by: Dan Fandrich
Bug: http://curl.haxx.se/mail/lib-2015-10/0069.html
... and assign it from the set.fread_func_set pointer in the
Curl_init_CONNECT function. This A) avoids having code that assigns
fields in the 'set' struct (which we always knew was bad) and, more
importantly, B) makes it impossible to accidentally leave the wrong
value behind for when the handle is re-used etc.
Introducing a state-init functionality in multi.c, so that we can set a
specific function to get called when we enter a state. The
Curl_init_CONNECT is thus called when switching to the CONNECT state.
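A standalone sketch of the mechanism (simplified, illustrative names):
a table of per-state init callbacks that multistate() invokes on every
switch:

    struct easy;                               /* opaque handle */
    typedef void (*state_init_fn)(struct easy *);

    enum mstate { ST_INIT, ST_CONNECT, ST_LAST };

    static void init_connect(struct easy *e)
    {
      (void)e; /* e.g. assign the state's fread function here */
    }

    static const state_init_fn state_init[ST_LAST] = {
      NULL,         /* ST_INIT: nothing to do */
      init_connect, /* ST_CONNECT */
    };

    static void multistate(struct easy *e, enum mstate s)
    {
      if(state_init[s])
        state_init[s](e); /* run the enter-state hook */
    }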
Bug: https://github.com/bagder/curl/issues/346
Closes #346
With many easy handles using the same connection for multiplexing, it is
important we store and keep the transfer-oriented stuff in the
SessionHandle so that callbacks and callback data work fine even when
many easy handles share the same physical connection.
... also make the __func__ replacement in multi.c.
Prior to this change, debug builds would fail to build if the compiler
was pre-C99 and didn't support __func__.
... to allow code to act differently on the situation.
Also added some more info messages to the connection re-use function to
make it clearer when connections are not re-used.
All the existing Curl_bundle* functions were only ever used from within
the conncache.c file, so I moved them over and made them static (and
removed the Curl_ prefix).
This avoids unnecessary dynamic allocs and as this also removed the last
users of *hash_alloc() and *hash_destroy(), those two functions are now
removed.
If the handle removed from the multi handle happens to be the one
"owning" the pipeline, other transfers will be waiting indefinitely.
Now we move such handles back to the CONNECT state to have them race
(again) for getting the connection and thus avoid hanging.
Bug: http://curl.haxx.se/bug/view.cgi?id=1465
Reported-by: Jiri Dvorak
... even if they don't have an associated connection anymore. It could
leave the waiting transfers pending with no active one on the
connection.
Bug: http://curl.haxx.se/bug/view.cgi?id=1465
Reported-by: Jiri Dvorak