- Change closure handle to receive verbose setting from the easy handle
most recently added via curl_multi_add_handle.
The closure handle is a special easy handle used for closing cached
connections. It receives limited settings from the easy handle most
recently added to the multi handle. Prior to this change those settings
did not include verbose, which was a problem because on connection
shutdown verbose mode was not acknowledged.
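A sketch of the fix; the member names are assumptions for illustration,
not verbatim libcurl internals:

  /* in curl_multi_add_handle(): let the closure handle inherit the
     verbose setting from the easy handle just added (sketch) */
  if(multi->closure_handle)
    multi->closure_handle->set.verbose = data->set.verbose;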
Ref: https://github.com/curl/curl/pull/3598
Co-authored-by: Daniel Stenberg
Closes https://github.com/curl/curl/pull/3618
Failing to do so would make the CURLINFO_TOTAL_TIME timeout not get
updated correctly and it could end up getting reported to the
application completely wrong (way too small).
Reported-by: accountantM on github
Fixes #3602
Closes #3605
The variable wasn't properly reset within the loop and thus could
remain set for sockets that hadn't been set before, making libcurl miss
notifying the app.
This is a follow-up to 4c35574 (shipped in curl 7.64.0)
Reported-by: buzo-ffm on github
Detected-by: Jan Alexander Steffens
Fixes #3585
Closes #3589
urlapi: turn three local-only functions into statics
conncache: make conncache_find_first_connection static
multi: make detach_connnection static
connect: make getaddressinfo static
curl_ntlm_core: make hmac_md5 static
http2: make two functions static
http: make http_setup_conn static
connect: make tcpnodelay static
tests: make UNITTEST a thing to mark functions with, so they can be static for
normal builds and non-static for unit test builds
... and mark Curl_shuffle_addr accordingly.
url: make up_free static
setopt: make vsetopt static
curl_endian: make write32_le static
rtsp: make rtsp_connisdead static
warnless: remove unused functions
memdebug: remove one unused function, made another static
We use "conn" everywhere to be a pointer to the connection.
Introduces two functions that "attaches" and "detaches" the connection
to and from the transfer.
Going forward, we should favour using "data->conn" (since a transfer
only ever has a single connection, or none at all) over "conn->data"
(since a connection can have none, one or many transfers associated
with it, and keeping conn->data correct is error-prone and a frequent
source of internal issues).
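A simplified sketch of the new pair, mirroring the naming used in this
change (the real functions do more bookkeeping):

  /* associate the transfer with a connection */
  void Curl_attach_connnection(struct Curl_easy *data,
                               struct connectdata *conn)
  {
    data->conn = conn;
  }

  /* clear the transfer's connection pointer again */
  void Curl_detach_connnection(struct Curl_easy *data)
  {
    data->conn = NULL;
  }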
Closes #3442
Fixes #3436
Closes #3448
Problem 1
After LOTS of scratching my head, I eventually realized that even when
doing 10 uploads in parallel, the socket callback to the application
that tells it what to wait for on the socket sometimes looked like it
would reflect the status of just the single transfer that most recently
changed state.
Digging into the code revealed that this was indeed the truth. When
multiple transfers use the same connection, the application did not
correctly get the *combined* flags for all transfers, which could make
it switch to READ (only) when in fact most transfers wanted to get told
when the socket was WRITEABLE.
Problem 1b
A separate but related regression had also been introduced by me when I
cleaned up the connection/transfer association a while ago: the logic
could no longer find the connection to check whether it was marked as
used by more transfers, so it would prematurely remove the socket from
the socket hash table even while other transfers were still using it!
Fix 1
Make sure that each socket stored in the socket hash has a "combined"
action field saying what to ask the application to wait for: potentially
the ORed actions of multiple parallel transfers. And remove the socket
hash entry only when there are no transfers left using it.
Problem 2
The socket hash entry stored an association to a single transfer using that
socket - and when curl_multi_socket_action() was called to tell libcurl about
activities on that specific socket, only that transfer was "handled".
This was WRONG, as a single socket/connection can be used by numerous
parallel transfers, not necessarily just a single one.
Fix 2
We now store a list of handles in the socket hashtable entry and when libcurl
is told there's traffic for a particular socket, it now iterates over all
known transfers using that single socket.
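A sketch of the resulting socket hash entry; member names are
approximations of the real struct in lib/multi.c:

  struct Curl_sh_entry {
    struct curl_hash transfers; /* every easy handle using this socket */
    unsigned int action;        /* ORed CURL_POLL_IN/CURL_POLL_OUT bits
                                   combined from all those transfers */
    unsigned int users;         /* remove the entry only when this
                                   drops to zero */
  };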
Added Curl_resolver_kill() for all three resolver modes, which only
blocks when necessary, along with test 1592 to confirm
curl_multi_remove_handle() doesn't block unless it must.
Closes #3428
Fixes #3371
Do not assume/store an association between a given easy handle and the
connection if it can be avoided.
Long-term, the 'conn->data' pointer should probably be removed, as it
is a little too error-prone. It is still used very widely, though.
Reported-by: masbug on github
Fixes #3391
Closes #3400
The time_t type is unsigned on some systems, and these variables are
used to hold return values from functions that already return
timediff_t. timediff_t is always a signed type.
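For reference, timediff_t is (roughly) defined like this in
lib/timeval.h:

  typedef curl_off_t timediff_t; /* always signed, unlike time_t on
                                    some systems */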
Closes #3363
This is a companion patch to cbea2fd2c (NTLM: force the connection to
HTTP/1.1, 2018-12-06): with NTLM, we can switch to HTTP/1.1
preemptively. However, with other (Negotiate) authentication it is not
clear to this developer whether there is a way to make it work with
HTTP/2, so let's try HTTP/2 first and fall back in case we encounter the
error HTTP_1_1_REQUIRED.
Note: we will still keep the NTLM workaround, as it avoids an extra
round trip.
Daniel Stenberg helped a lot with this patch, in particular by
suggesting to introduce the Curl_h2_http_1_1_error() function.
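The helper boils down to checking nghttp2's stream error code; roughly
(simplified from lib/http2.c):

  /* true if the stream failed with HTTP_1_1_REQUIRED, meaning the
     request should be retried over HTTP/1.1 */
  bool Curl_h2_http_1_1_error(struct connectdata *conn)
  {
    struct http_conn *httpc = &conn->proto.httpc;
    return (httpc->error_code == NGHTTP2_HTTP_1_1_REQUIRED);
  }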
Closes #3349
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
curl_multi_wait() was erroneously used from within
curl_easy_perform(). It could lead to libcurl believing there was no
socket to wait for, making it sleep for a while instead of monitoring
the socket, and then missing acting on that activity as swiftly as it
should (causing an up to 1000 ms delay).
Reported-by: Antoni Villalonga
Fixes #3305
Closes #3306
Closes #3308
The function does not return the same value as snprintf() normally
does, so readers may be misled into thinking the code works differently
than it actually does. A different function name makes this easier to
detect.
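An example of the difference in the truncation case:

  char buf[4];
  /* C99 snprintf() returns the length it *wanted* to write: 5 */
  int n1 = snprintf(buf, sizeof(buf), "hello");
  /* curl's msnprintf() returns the number of characters actually
     written to the buffer, excluding the terminating NUL: 3 */
  int n2 = msnprintf(buf, sizeof(buf), "hello");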
Reported-by: Tomas Hoger
Assisted-by: Daniel Gustafsson
Fixes #3296
Closes #3297
When using c-ares for asynchronous DNS, the DNS socket fd was silently
closed by c-ares without curl being aware. curl would then 'realize'
the fd had been removed at the next call of Curl_resolver_getsock, and
only then notify the CURLMOPT_SOCKETFUNCTION to remove the fd from its
poll set with CURL_POLL_REMOVE. At that point the fd was already
closed.
By using the ares socket state callback (ARES_OPT_SOCK_STATE_CB), this
patch allows curl to be notified that the fd is no longer needed for
either read or write. At that point, by calling Curl_multi_closed we
are able to notify the multi handle with CURL_POLL_REMOVE before the fd
is actually closed by c-ares.
In asyn-ares.c Curl_resolver_duphandle we can't use ares_dup anymore
since it does not allow passing a different sock_state_cb_data.
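A sketch of the callback wiring, using the public c-ares API (the
Curl_multi_closed call is libcurl-internal and shown as a comment):

  #include <string.h>
  #include <ares.h>

  /* called by c-ares whenever one of its sockets changes state */
  static void sock_state_cb(void *arg, ares_socket_t fd,
                            int readable, int writable)
  {
    if(!readable && !writable) {
      /* c-ares is done with this fd: notify the multi handle with
         CURL_POLL_REMOVE *before* c-ares closes the socket */
      /* Curl_multi_closed(arg, fd); */
    }
  }

  static int resolver_init(ares_channel *channel, void *arg)
  {
    struct ares_options options;
    int optmask = ARES_OPT_SOCK_STATE_CB;
    memset(&options, 0, sizeof(options));
    options.sock_state_cb = sock_state_cb;
    options.sock_state_cb_data = arg;
    return ares_init_options(channel, &options, optmask);
  }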
Closes #3238
Curl_follow() no longer frees the string. Make sure it happens in the
caller function, like we normally handle allocations.
This bug was introduced with the internal use of the URL API; it has
never been in a release version.
Reported-by: Dario Weißer
Closes #3149
Transparently. The related curl_multi_setopt() options all still return
OK when pipelining is selected.
To re-enable the support, the single line change in lib/multi.c needs to
be reverted.
See docs/DEPRECATE.md
Closes #2705
It was previously erroneously skipped in some situations.
libtest/libntlmconnect.c depended on the previous wrong behavior (that
it would get a zero timeout) when no handles are "running" in a multi
handle. That behavior is no longer present with this fix: now libcurl
will always return a -1 timeout when all handles are completed.
Closes #2733
When the application just started the transfer and then stops it while
the name resolve in the background thread hasn't completed, we need to
wait for the resolve to complete and then cleanup data accordingly.
Enabled test 1553 again and added test 1590 to also check when the host
name resolves successfully.
Detected by OSS-fuzz.
Closes #1968
- Get rid of a variable that was generating a false positive warning
(uninitialized)
- Fix issues in tests
- Reduce scope of several variables all over
etc
Closes #2631
... it might call infof() with a NULL first argument, which isn't
harmful but makes it do nothing. The infof() line is not very useful
anymore; it has served its purpose. Good riddance!
Fixes #2627
The latest psl is cached in the multi or share handle. It is refreshed
before use after 72 hours.
New share lock CURL_LOCK_DATA_PSL controls the psl cache sharing.
If the latest psl is not available, the builtin psl is used.
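Applications opt in through the regular share API, for example:

  CURLSH *share = curl_share_init();
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_PSL);

  CURL *curl = curl_easy_init();
  curl_easy_setopt(curl, CURLOPT_SHARE, share);
  /* transfers on this handle now reuse the cached PSL */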
Reported-by: Yaakov Selkowitz
Fixes #2553
Closes #2601
This extends the INDENTATION case to also handle 'else' statements
and require proper indentation on the following line. Also fixes the
offending cases found in the codebase.
Closes #2532
When receiving REFUSED_STREAM, mark the connection for close and retry
streams accordingly on another/fresh connection.
Reported-by: Terry Wu
Fixes #2416
Fixes #1618
Closes #2510
When a transfer is asked to get done and is put in the pending queue
because it is limited by the number of connections, total or per-host,
libcurl would previously very aggressively retry *ALL* pending
transfers to get them transferring. That was very time consuming.
By reducing the aggressiveness in how pending transfers are retried, we
waste MUCH less time on putting transfers back into pending again.
Some test cases got a factor 30(!) speed improvement with this change.
Reported-by: Cyril B
Fixes #2369
Closes #2383
Especially unpausing a transfer might have to move the socket back to
the "currently used sockets" hash to get monitored again. Otherwise it
would never get any more data and would get stuck. Easily triggered by
pausing a transfer driven via the multi_socket API.
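For reference, the pause/unpause cycle that triggers this uses the
regular API:

  /* pause receiving; the socket may drop out of the hash */
  curl_easy_pause(curl, CURLPAUSE_RECV);

  /* unpausing must move the socket back into the "currently used
     sockets" hash, or no more data will ever arrive */
  curl_easy_pause(curl, CURLPAUSE_CONT);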
Reported-by: Philip Prindeville
Bug: https://curl.haxx.se/mail/lib-2018-03/0048.html
Fixes #2393
Closes #2391
Due to very frequent updates of the rate limit "window", it could
attempt to rate limit within the same millisecond and that then made
the calculations wrong, leading to it not behaving correctly on very
fast transfers.
This new logic updates the rate limit "window" to be no shorter than
the last three seconds and only updates the timestamps for it when
switching between the TOOFAST and PERFORM states.
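The wait-time calculation, as a simplified sketch (the real code in
lib/progress.c also guards against overflow):

  curl_off_t size = cursize - startsize;  /* bytes since window start */
  timediff_t minimum = (timediff_t)(1000 * size / limit); /* ms these
                                             bytes should have taken */
  timediff_t actual = Curl_timediff(now, start); /* ms they did take */

  if(actual < minimum)
    return minimum - actual; /* too fast: wait this many milliseconds */
  return 0;                  /* within the limit */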
Reported-by: 刘佩东
Fixes #2386
Closes #2388
Prune the DNS cache immediately after the dns entry is unlocked in
multi_done. Timed out entries will then get discarded in a more orderly
fashion.
Test 506 is updated
Reported-by: Oleg Pudeyev
Fixes #2169
Closes #2170
If the lock is released before the dealings with the bundle are over,
it may have been changed by another thread in the meantime.
Fixes #2132
Fixes #2151
Closes #2139
returning 'time_t' is problematic when that type is unsigned and we
return values less than zero to signal "already expired", used in
several places in the code.
Closes #2021
... since the 'tv' stood for timeval and this function does not return a
timeval struct anymore.
Also, cleaned up the Curl_timediff*() functions to avoid typecasts and
clean up the descriptive comments.
Closes #2011
... to cater for systems with unsigned time_t variables.
- Renamed the functions to curlx_timediff and Curl_timediff_us.
- Added overflow protection for both of them in either direction for
both 32 bit and 64 bit time_ts
- Reprefixed the curlx_time functions to use Curl_*
Reported-by: Peter Piekarski
Fixes #2004
Closes #2005
This reverts commit f3e03f6c0a.
Caused memory leaks in the fuzzer, needs to be done differently.
Disable test 1553 for now too, as it causes memory leaks without this
commit!
... fixes a memory leak with at least IMAP when remove_handle is never
called and the transfer is abruptly just abandoned early.
Test 1552 added to verify
Detected by OSS-fuzz
Assisted-by: Max Dymond
Closes #1954
There are some bugs in how timers are managed for a single easy handle
that cause the wrong "next timeout" value to be reported to the
application when a new minimum needs to be recomputed and that new
minimum should be an existing timer that isn't currently set for the
easy handle. When the application drives a set of easy handles via the
`curl_multi_socket_action()` API (for example), it gets told to wait the
wrong amount of time before the next call, which causes requests to
linger for a long time (or, it is my guess, possibly forever).
Bug: https://curl.haxx.se/mail/lib-2017-07/0033.html
... to make all libcurl internals able to use the same data types for
the struct members. The timeval struct differs subtly on several
platforms so it makes it cumbersome to use everywhere.
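The replacement type, roughly as introduced (see lib/timeval.h for the
authoritative definition):

  struct curltime {
    time_t tv_sec;
    int tv_usec;
  };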
Ref: #1652
Closes #1693
With the introduction of expire IDs and the fact that existing timers
can now be removed and thus never expire, the concept of adding a
"latest" timer no longer works, as it risks never expiring at all.
So, to be certain the timers actually are in line and will expire, the
plain Curl_expire() needs to be used. The _latest() function was added
as a sort of shortcut in the past that's quite simply not necessary
anymore.
Follow-up to 31b39c40cf
Reported-by: Paul Harris
Closes #1555
... since the total amount is low this is faster, easier and reduces
memory overhead.
Also, Curl_expire_done() can now mark an expire timeout as done so that
it never times out.
Closes #1472
A) reduces the timeout lists drastically
B) prevents a lot of superfluous loops for timers that expire "in vain"
when they have actually already been extended to fire later on
`if(nfds || extra_nfds) {` is followed by `malloc(nfds * ...)`.
If `extra_nfds` could be non-zero when `nfds` was zero, then we have
`malloc(0)` which is allowed to return `NULL`. But, malloc returning
NULL can be confusing. In this code, the next line would treat the NULL
as an allocation failure.
It turns out, if `nfds` is zero then `extra_nfds` must also be zero.
The final value of `nfds` includes `extra_nfds`. So the test for
`extra_nfds` is redundant. It can only confuse the reader.
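A sketch of the simplified check:

  struct pollfd *ufds = NULL;

  /* nfds already includes extra_nfds, so this single test suffices
     and malloc(0) can never happen */
  if(nfds) {
    ufds = malloc(nfds * sizeof(struct pollfd));
    if(!ufds)
      return CURLM_OUT_OF_MEMORY;
  }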
Closes #1439
The 'list element' struct now has to be within the data that is being
added to the list. Removes 16.6% (tiny) mallocs from a simple HTTP
transfer. (96 => 80)
Also removed return codes since the llist functions can't fail now.
Test 1300 updated accordingly.
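A sketch of the idea: the list node lives inside the stored item, so
insertion needs no separate allocation (names approximate lib/llist.h):

  struct curl_llist_element {
    void *ptr;                        /* points back at the item */
    struct curl_llist_element *prev;
    struct curl_llist_element *next;
  };

  struct some_item {
    struct curl_llist_element node; /* embedded: no extra malloc */
    /* ... the actual payload ... */
  };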
Closes #1435
When receiving chunked encoded data with trailers, and the write
callback returns PAUSE, there might be both body and header data to
store and resend on unpause. Previously libcurl returned an error for
that case.
Added test case 1540 to verify.
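For reference, a write callback pauses the transfer by returning the
standard CURL_WRITEFUNC_PAUSE value:

  static size_t write_cb(char *ptr, size_t size, size_t nmemb,
                         void *userp)
  {
    (void)ptr;
    (void)size;
    (void)nmemb;
    (void)userp;
    /* with chunked trailers, libcurl may now have both body and
       header data to buffer until the handle is unpaused */
    return CURL_WRITEFUNC_PAUSE;
  }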
Reported-by: Stephen Toub
Fixes #1354
Closes #1357
Properly resolve, convert and log the proxy host names.
Support the "--connect-to" feature for SOCKS proxies and for passive FTP
data transfers.
Follow-up to cb4e2be
Reported-by: Jay Satiro
Fixes https://github.com/curl/curl/issues/1248
* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications, as opposed to HTTP proxies, which receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well.
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent of their --foo counterparts,
except --proxy-crlfile which defaults to --crlfile and --proxy-capath
which defaults to --capath.
Curl now also supports the %{proxy_ssl_verify_result} --write-out
variable, similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
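A minimal libcurl equivalent for illustration (host names and file
names are made up):

  CURL *curl = curl_easy_init();
  /* the https:// scheme makes the connection to the proxy itself
     use TLS */
  curl_easy_setopt(curl, CURLOPT_PROXY, "https://proxy.example.com:3128");
  curl_easy_setopt(curl, CURLOPT_PROXY_CAINFO, "proxy-ca.pem");
  curl_easy_setopt(curl, CURLOPT_URL, "https://origin.example.com/");
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);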
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
Visual C++ now complains about implicitly casting time_t (64-bit) to
long (32-bit). Fix this by changing some variables from long to time_t,
or explicitly casting to long where the public interface would be
affected.
Closes #1131
Several independently reported infinite loops hanging in the
close_all_connections() function when closing a multi handle can be
fixed by first marking the connection for closure before calling
Curl_disconnect.
This is more fixing-the-symptom than fixing the underlying problem,
though.
Bug: https://curl.haxx.se/mail/lib-2016-10/0011.html
Bug: https://curl.haxx.se/mail/lib-2016-10/0059.html
Reported-by: Dan Fandrich, Valentin David, Miloš Ljumović
In short, the easy handle needs to be disconnected from its connection
at this point, since the connection is still serving other easy
handles.
In our app we can reliably reproduce a crash in our http2 stress test
that is fixed by this change. I can't easily reproduce the same test in
a small example.
This is the gdb/asan output:
==11785==ERROR: AddressSanitizer: heap-use-after-free on address 0xe9f4fb80 at pc 0x09f41f19 bp 0xf27be688 sp 0xf27be67c
READ of size 4 at 0xe9f4fb80 thread T13 (RESOURCE_HTTP)
#0 0x9f41f18 in curl_multi_remove_handle /path/to/source/3rdparty/curl/lib/multi.c:666
0xe9f4fb80 is located 0 bytes inside of 1128-byte region [0xe9f4fb80,0xe9f4ffe8)
freed by thread T13 (RESOURCE_HTTP) here:
#0 0xf7b1b5c2 in __interceptor_free /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_malloc_linux.cc:45
#1 0x9f7862d in conn_free /path/to/source/3rdparty/curl/lib/url.c:2808
#2 0x9f78c6a in Curl_disconnect /path/to/source/3rdparty/curl/lib/url.c:2876
#3 0x9f41b09 in multi_done /path/to/source/3rdparty/curl/lib/multi.c:615
#4 0x9f48017 in multi_runsingle /path/to/source/3rdparty/curl/lib/multi.c:1896
#5 0x9f490f1 in curl_multi_perform /path/to/source/3rdparty/curl/lib/multi.c:2123
#6 0x9c4443c in perform /path/to/source/src/net/resourcemanager/ResourceManagerCurlThread.cpp:854
#7 0x9c445e0 in ...
#8 0x9c4cf1d in ...
#9 0xa2be6b5 in ...
#10 0xf7aa5780 in asan_thread_start /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
#11 0xf4d3a16d in __clone (/lib/i386-linux-gnu/libc.so.6+0xe716d)
previously allocated by thread T13 (RESOURCE_HTTP) here:
#0 0xf7b1ba27 in __interceptor_calloc /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_malloc_linux.cc:70
#1 0x9f7dfa6 in allocate_conn /path/to/source/3rdparty/curl/lib/url.c:3904
#2 0x9f88ca0 in create_conn /path/to/source/3rdparty/curl/lib/url.c:5797
#3 0x9f8c928 in Curl_connect /path/to/source/3rdparty/curl/lib/url.c:6438
#4 0x9f45a8c in multi_runsingle /path/to/source/3rdparty/curl/lib/multi.c:1411
#5 0x9f490f1 in curl_multi_perform /path/to/source/3rdparty/curl/lib/multi.c:2123
#6 0x9c4443c in perform /path/to/source/src/net/resourcemanager/ResourceManagerCurlThread.cpp:854
#7 0x9c445e0 in ...
#8 0x9c4cf1d in ...
#9 0xa2be6b5 in ...
#10 0xf7aa5780 in asan_thread_start /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
#11 0xf4d3a16d in __clone (/lib/i386-linux-gnu/libc.so.6+0xe716d)
SUMMARY: AddressSanitizer: heap-use-after-free /path/to/source/3rdparty/curl/lib/multi.c:666 in curl_multi_remove_handle
Shadow bytes around the buggy address:
0x3d3e9f20: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f30: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f40: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f50: fd fd fd fd fd fd fd fd fd fd fd fd fd fa fa fa
0x3d3e9f60: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x3d3e9f70:[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f90: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9fa0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9fb0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9fc0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Heap right redzone: fb
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack partial redzone: f4
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==11785==ABORTING
Thread 14 "RESOURCE_HTTP" received signal SIGABRT, Aborted.
[Switching to Thread 0xf27bfb40 (LWP 12324)]
0xf7fd8be9 in __kernel_vsyscall ()
(gdb) bt
#0 0xf7fd8be9 in __kernel_vsyscall ()
#1 0xf4c7ee89 in __GI_raise (sig=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#2 0xf4c803e7 in __GI_abort () at abort.c:89
#3 0xf7b2ef2e in __sanitizer::Abort () at /opt/toolchain/src/gcc-6.2.0/libsanitizer/sanitizer_common/sanitizer_posix_libcdep.cc:122
#4 0xf7b262fa in __sanitizer::Die () at /opt/toolchain/src/gcc-6.2.0/libsanitizer/sanitizer_common/sanitizer_common.cc:145
#5 0xf7b21ab3 in __asan::ScopedInErrorReport::~ScopedInErrorReport (this=0xf27be171, __in_chrg=<optimized out>) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_report.cc:689
#6 0xf7b214a5 in __asan::ReportGenericError (pc=166993689, bp=4068206216, sp=4068206204, addr=3925146496, is_write=false, access_size=4, exp=0, fatal=true) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_report.cc:1074
#7 0xf7b21fce in __asan::__asan_report_load4 (addr=3925146496) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_rtl.cc:129
#8 0x09f41f19 in curl_multi_remove_handle (multi=0xf3406080, data=0xde582400) at /path/to/source3rdparty/curl/lib/multi.c:666
#9 0x09f6b277 in Curl_close (data=0xde582400) at /path/to/source3rdparty/curl/lib/url.c:415
#10 0x09f3354e in curl_easy_cleanup (data=0xde582400) at /path/to/source3rdparty/curl/lib/easy.c:860
#11 0x09c6de3f in ...
#12 0x09c378c5 in ...
#13 0x09c48133 in ...
#14 0x09c4d092 in ...
#15 0x0a2be6b6 in ...
#16 0xf7aa5781 in asan_thread_start (arg=0xf2d22938) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
#17 0xf5de52b5 in start_thread (arg=0xf27bfb40) at pthread_create.c:333
#18 0xf4d3a16e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:114
Fixes #1083
The closure handle only ever has default timeouts set. To improve the
state somewhat we clone the timeouts from each added handle so that the
closure handle always has the same timeouts as the most recently added
easy handle.
Fixes #739
Speed limits (from CURLOPT_MAX_RECV_SPEED_LARGE &
CURLOPT_MAX_SEND_SPEED_LARGE) were applied simply by comparing the
limits with the cumulative average speed of the entire transfer. While
this might work at times with good/constant connections, in other cases
it can result in the limits simply being "ignored" for more than "short
bursts" (as told in the man page).
Consider a download that goes on much slower than the limit for some
time (because bandwidth is used elsewhere, the server is slow, whatever
the reason); then once things get better, curl would simply ignore the
limit until the average speed (since the beginning of the transfer)
reached the limit. This could render the limit useless at effectively
preventing use of the entire bandwidth (at least for quite some time).
So instead, we now use a "moving starting point" as reference, and
every time at least as much as the limit has been transferred, we reset
this starting point to the current position. This gives a good limiting
effect that applies to the "current speed" with instant reactivity (in
case of a sudden speed burst).
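A sketch of the moving starting point (variable names illustrative):

  /* once a full "limit" worth of bytes has passed since the reference
     point, move the reference up to now, so the measured speed always
     reflects recent activity rather than the whole transfer */
  if(cursize - startsize >= limit) {
    startsize = cursize;
    start = now;
  }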
Closes #971
With HTTP/2 each transfer is made in an individual logical stream over
the connection, making most errors that previously caused the
connection to get force-closed now instead just kill the stream and not
the connection.
Fixes #941
Previously, passing a timeout of zero to Curl_expire() was a magic code
for clearing all timeouts for the handle. That is now instead done with
the new Curl_expire_clear() function, and thus a 0 timeout is fine to
set and will trigger a timeout ASAP.
This will help removing short delays, in particular notable when doing
HTTP/2.
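In short, a sketch of before vs after:

  /* before: zero was a magic value */
  Curl_expire(data, 0);     /* meant: clear all timeouts */

  /* after: explicit, and zero is a real timeout again */
  Curl_expire_clear(data);  /* clear all timeouts */
  Curl_expire(data, 0);     /* expire as soon as possible */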
Regression added in 790d6de485. That logic was added to avoid one
particular transfer starving out others, but when aborting due to
reaching the maxcount, the connection must be marked to be read from
again without first doing a select, as for some protocols (like
SFTP/SCP) the data may already have been read off the socket.
Reported-by: Dan Donahue
Bug: https://curl.haxx.se/mail/lib-2016-07/0057.html