Check the FIRSTSOCKET for all other protocols.
This fixes a timeout in an ftps download. The server sends a TLS
close_notify message in the same packet as the file data. The
close_notify seems to not be handled in the schannel_recv function, so
libcurl is not aware that the server has closed the connection. Thus
libcurl ends up waiting for action on the socket until a timeout is
reached. With the secondary socket check added to the data_pending
function, the close_notify is properly handled, and the ftps transfer
terminates as expected.
Fixes #7068
Closes #7069
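A minimal sketch of the idea behind the fix above, with simplified stand-in types and names for curl's internals: for FTP(S) the payload arrives on the secondary (data) socket, so that is the socket to ask about buffered TLS data (such as a pending close_notify), while all other protocols keep using the first socket.

  #include <stdbool.h>

  /* hypothetical, simplified types standing in for curl internals */
  enum proto { PROTO_FTP, PROTO_OTHER };
  struct conn { enum proto proto; bool tls_buffered[2]; };
  #define FIRSTSOCKET 0
  #define SECONDARYSOCKET 1

  /* For FTP(S), check the data socket for buffered TLS records such as
     a pending close_notify; for all other protocols, check the first
     socket. */
  static bool data_pending(const struct conn *c)
  {
    int sockindex = (c->proto == PROTO_FTP) ? SECONDARYSOCKET : FIRSTSOCKET;
    return c->tls_buffered[sockindex];
  }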
The libssh2 backend has the SSH session associated with the connection,
but the callback context is the easy handle. Therefore, when a connection
gets attached to a transfer, the protocol handler now allows a custom
function to be used to set things up correctly.
Reported-by: Michael O'Farrell
Fixes #6898
Closes #7078
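A rough sketch of the shape of such an "attach" hook, with made-up names (the real handler struct and its fields differ): the handler gains an optional per-connection callback that is invoked when an easy handle gets attached to the connection, letting a backend such as libssh2 re-point its session callback context at the right easy handle.

  /* hypothetical, simplified illustration of an "attach" hook */
  struct easy_handle;
  struct connection;

  struct protocol_handler {
    const char *scheme;
    /* called when an easy handle gets attached to this connection;
       NULL when the protocol needs no special setup */
    void (*attach)(struct easy_handle *easy, struct connection *conn);
  };

  static void ssh_attach(struct easy_handle *easy, struct connection *conn)
  {
    /* a libssh2-style backend would rebind its session callback
       context to the newly attached easy handle here */
    (void)easy;
    (void)conn;
  }

  static const struct protocol_handler sftp_handler = {
    "sftp",
    ssh_attach,
  };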
Since the function is called for any protocol, we can't assume that the
HTTP struct is there without first making sure it is HTTP.
Reported-by: Denis Goleshchikhin
Fixes #7079
Closes #7080
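A minimal sketch of the kind of guard this implies, with hypothetical field names: a function that runs for every protocol must confirm the transfer really is HTTP(S) before touching the HTTP-specific state.

  /* hypothetical, simplified types */
  struct http_state { int status; };
  struct transfer {
    int protocol;              /* e.g. PROTO_HTTP, PROTO_FTP, ... */
    struct http_state *http;   /* only valid for HTTP(S) transfers */
  };
  #define PROTO_HTTP 1

  static int last_status(const struct transfer *xfer)
  {
    /* only dereference the HTTP struct once we know it is HTTP */
    if(xfer->protocol != PROTO_HTTP || !xfer->http)
      return -1;
    return xfer->http->status;
  }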
... or the cookies won't get sent. Push users to using the "Netscape"
format instead, which curl uses when saving a cookie "jar".
Reported-by: Martin Dorey
Reviewed-by: Daniel Gustafsson
Fixes #6723
Closes #7077
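For reference, the "Netscape" cookie file format mentioned above is one cookie per line with tab-separated fields (domain, include-subdomains flag, path, secure flag, expiry as a unix timestamp, name, value), for example:

  # Netscape HTTP Cookie File
  .example.com	TRUE	/	FALSE	1893456000	session	abc123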
Azure Pipelines is currently being used for debug builds; let's also
run some non-debug (release) Windows builds and make use of the
previously underutilized Cirrus CI for that.
Reviewed-by: Marcel Raad
Closes #6991
Some of the time, we get a HYPER_TASK_EMPTY response before the status
line, headers, and body have been read. Previously, that would cause us
to poll again, leading to a 1 second timeout.
The HYPER_TASK_EMPTY docs say:
The value of this task is null (does not imply an error).
So, if we receive a HYPER_TASK_EMPTY, continue on with processing the
response.
Reported-by: Kevin Burke
Fixes #7064
Closes #7070
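In hyper's C API terms, the change above amounts to treating HYPER_TASK_EMPTY as "nothing to report, keep going" rather than a reason to go back to waiting on the socket. A rough sketch of such a loop, assuming hyper's hyper_executor_poll()/hyper_task_type() (error handling trimmed, the surrounding function is invented):

  #include <hyper.h>

  /* Illustrative only: drain the executor; an empty task is neither
     an error nor a reason to return to polling the socket. */
  static void drain_tasks(hyper_executor *exec)
  {
    hyper_task *task;
    while((task = hyper_executor_poll(exec)) != NULL) {
      switch(hyper_task_type(task)) {
      case HYPER_TASK_EMPTY:
        hyper_task_free(task);
        continue;              /* keep processing the response */
      case HYPER_TASK_ERROR:
        /* handle the error */
        hyper_task_free(task);
        return;
      default:
        /* status line, headers and body tasks handled here */
        hyper_task_free(task);
        break;
      }
    }
  }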
As of commit 54e7475, these flags would only be set when using a new
credential handle. When re-using an existing credential handle, the
flags would not be set.
Closes https://github.com/curl/curl/pull/7051
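A sketch of the shape of the fix, with hypothetical structure and macro names around the real behavior: the security-context request flags get picked unconditionally, not only in the branch that creates a fresh credential handle.

  /* hypothetical, simplified illustration */
  #define REQ_FLAGS 0UL  /* stands in for the ISC_REQ_* combination */

  struct backend_state {
    int creds_cached;          /* non-zero when a handle is reused */
    unsigned long req_flags;
  };

  static void setup_flags(struct backend_state *st)
  {
    if(!st->creds_cached) {
      /* acquire a new credential handle here ... */
    }
    /* previously assigned only inside the branch above; now always */
    st->req_flags = REQ_FLAGS;
  }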
Curl_resolv() had special code (when built in debug mode) for when
resolving the host name "LocalHost" (using that exact casing). It would
then get the host name from the --interface option instead.
This development-only feature was not used by anything (anymore) and we
have the --resolve feature if we want to play similar tricks properly
going forward.
Closes #7044
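The --resolve option mentioned above maps a given host name and port to a chosen address for the transfer, without any magic host name. For example:

  curl --resolve example.com:443:127.0.0.1 https://example.com/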
Otherwise the old value would linger from a previous use and would mess
up the network speed cap logic.
Reported-by: Ymir1711 on github
Fixes #7042
Closes #7043
Writing the cookie file has multiple error conditions, and was using an
int with magic numbers to report the different errors (which in turn were
disregarded anyway). This moves reporting to use a CURLcode value.
Lightly-touched-by: Daniel Stenberg
Closes #7037
Closes #6749
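A minimal sketch of the reporting change, with an invented stand-in for the cookie-jar writer: instead of an int with magic values, the save routine hands back a CURLcode so callers can propagate it like any other libcurl error.

  #include <stdio.h>
  #include <curl/curl.h>

  /* hypothetical, simplified stand-in for the cookie-jar writer */
  static CURLcode save_cookie_file(const char *filename)
  {
    FILE *out = fopen(filename, "w");
    if(!out)
      return CURLE_WRITE_ERROR;   /* was a magic int before */
    if(fputs("# Netscape HTTP Cookie File\n", out) == EOF) {
      fclose(out);
      return CURLE_WRITE_ERROR;
    }
    if(fclose(out))
      return CURLE_WRITE_ERROR;
    return CURLE_OK;
  }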
strstore() is defined as a strdup() that frees the target pointer before
duplicating the source char * into it. Make use of it in two more cases
where it simplifies the code.
Comments in the cookie code were a bit all over the place in terms of
style and wording. This takes a stab at cleaning them up by keeping to
a single style and overall shape. Some comments are moved a little and
some removed altogether due to being redundant. No functional changes
have been made.
This is considered harmless as a following http2_recv will be
called very soon.
This is considered helpful in the specific situation where some
servers (e.g. nghttpx v1.43.0) may fulfill stream 1 immediately
following the return of HTTP status 101, rather than waiting for
the client-side connection preface to arrive.
Fixes #7036
Closes #7040
It can't run on focal and causes warnings on bionic. Since the focal
failure started rather suddenly a while ago, we can suspect it might be
temporary.
Added "bring back the build" to the TODO document.
Fixes #7011
Closes #7012
Assumed to be a minor coding style improvement with no behavior change.
A modern compiler is expected to have the calculation optimized during
compilation. It may be deemed okay even if that's not the case, since
the added overhead is considered very low.
Closes #7032
Also added 'CURL_SMALLSENDS' to make Curl_write() send short packets,
which helped verify this even more.
Add test 363 to verify.
Reported-by: ustcqidi on github
Fixes #6950
Closes #7024
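A rough sketch of what such a debug-only "send small chunks" switch can look like (the actual CURL_SMALLSENDS handling in libcurl may differ): cap the number of bytes handed to the socket per call based on an environment variable, so partial-send code paths get exercised.

  #include <stdlib.h>
  #include <stddef.h>

  /* debug aid: if CURL_SMALLSENDS is set, never hand more than that
     many bytes to the socket in one go (illustrative only) */
  static size_t cap_send_size(size_t wanted)
  {
    const char *env = getenv("CURL_SMALLSENDS");
    if(env) {
      size_t cap = (size_t)strtoul(env, NULL, 10);
      if(cap && wanted > cap)
        return cap;
    }
    return wanted;
  }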
Previously this logic would cap the send to CURL_MAX_WRITE_SIZE bytes,
but for the situations where a larger upload buffer has been set, this
function can benefit from sending more bytes. With the default size used,
this does the same as before.
Also changed the storage of the size to an 'unsigned int' as it is not
allowed to be set larger than 2M.
Also added cautions to the man pages about changing buffer sizes at
run-time.
Closes #7022
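Assuming the buffer in question is the one controlled by CURLOPT_UPLOAD_BUFFERSIZE (which is bounded at 2MB), a brief usage example: set the size before the transfer starts, in line with the man-page caution about run-time changes.

  #include <curl/curl.h>

  static void upload_with_big_buffer(void)
  {
    CURL *h = curl_easy_init();
    if(h) {
      curl_easy_setopt(h, CURLOPT_URL, "https://example.com/upload");
      curl_easy_setopt(h, CURLOPT_UPLOAD, 1L);
      /* bounded at 2MB; set it before the transfer, not during it */
      curl_easy_setopt(h, CURLOPT_UPLOAD_BUFFERSIZE, 512L * 1024L);
      /* ... set a read callback, then curl_easy_perform(h) ... */
      curl_easy_cleanup(h);
    }
  }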
A reused transfer handle could otherwise reuse the previous leftover
buffer and havoc would ensue.
Reported-by: sergio-nsk on github
Fixes #7018
Closes #7021
These functions have existed in the API since the dawn of time. It is
about time we describe how they work, even if we discourage users from
using them.
Closes #7010
WHATWG URL has dictated the use of Nontransitional Processing (IDNA
2008) for several years now. Chrome (and derivatives) still use
Transitional Processing, but Firefox and Safari have both switched.
Also document the fact that winidn functions differently from libidn2
here.
Closes #7026
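As a concrete illustration of the difference (a commonly cited example): under Transitional Processing a host name like straße.de is mapped to strasse.de, while Nontransitional Processing keeps the ß and punycode-encodes it, so the two modes can end up resolving different names.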
Reported by GCC analyzer:
Error: GCC_ANALYZER_WARNING (CWE-476):
src/tool_getparam.c: scope_hint: In function 'parse_args'
src/tool_getparam.c:2318:38: warning[-Wanalyzer-possible-null-dereference]: dereference of possibly-NULL 'orig_opt'
lib/curlx.h:56: included_from: Included from here.
src/tool_getparam.c:28: included_from: Included from here.
lib/curl_multibyte.h:70:51: note: in definition of macro 'curlx_convert_tchar_to_UTF8'
src/tool_getparam.c:2316:16: note: in expansion of macro 'curlx_convert_tchar_to_UTF8'
Reviewed-by: Marcel Raad
Reviewed-by: Daniel Stenberg
Closes #7023
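The warning boils down to dereferencing the result of a conversion that can return NULL. A minimal sketch of the kind of guard that addresses it, as a fragment assuming the curl tool sources (the variable and surrounding logic are invented; only the macro name comes from the diagnostic above):

  /* hypothetical, simplified: the conversion can yield NULL, so guard
     the dereference that the analyzer flagged */
  char *orig_opt = curlx_convert_tchar_to_UTF8(option);
  if(orig_opt) {
    /* ... safe to inspect or print orig_opt here ... */
  }
  /* release orig_opt with the matching conversion-free helper */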