If the QLOGDIR environment variable is set, enable qlogging.
... and create Curl_qlogdir() in the new generic vquic/vquic.c file for
QUIC functions that are backend independent.
Closes #5353
That return code is reserved for build-time conditional code not being
present, while this was a regular run-time error from a Windows API.
Reported-by: wangp on github
Fixes #5349
Closes #5350
Triggered by a crash detected by OSS-Fuzz after the dynbuf introduction in
ed35d6590e. This should make the trailer handling more straightforward and
hopefully less error-prone.
Deliver trailer headers to the callback already at receive-time. No
longer cache the trailers for delivery at the end of the stream.
Bug: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=22030
Closes #5348
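For context from the application side: libcurl hands trailer headers to the
same header callback as regular headers, so with receive-time delivery they
simply arrive there as they are received. A minimal sketch, assuming
CURLOPT_HEADERFUNCTION is the callback in question and using a placeholder
URL:

```c
#include <stdio.h>
#include <curl/curl.h>

/* With receive-time delivery, trailer headers show up in the regular
   header callback as they arrive, like any other header line. */
static size_t on_header(char *buffer, size_t size, size_t nitems, void *userdata)
{
  (void)userdata;
  fwrite(buffer, size, nitems, stderr);
  return size * nitems;
}

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* placeholder URL for this sketch */
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, on_header);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```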
In my very basic test that lists sftp://127.0.0.1/tmp/, this patched
code makes 161 allocations compared to 194 in git master. A 17%
reduction.
Closes #5336
quiche can write qlog files. To enable this, you must build quiche with
the qlog feature enabled (`cargo build --features qlog`). curl then
passes a file descriptor to quiche, which takes
ownership of the file. The FD transfer only works on UNIX.
The convention is to enable logging when the QLOGDIR environment
variable is set. It should hold the path to a directory where files are
written using the
naming template <SCID>.qlog.
Co-authored-by: Lucas Pardue
Replaces #5337
Closes #5341
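A minimal sketch of the QLOGDIR convention, assuming a helper along these
lines; the name open_qlog_fd and the buffer sizes are illustrative, not the
actual Curl_qlogdir() signature:

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative helper: if QLOGDIR is set, open
   "<QLOGDIR>/<hex-encoded SCID>.qlog" and return its file descriptor,
   or -1 when qlogging is disabled or the open fails. */
static int open_qlog_fd(const unsigned char *scid, size_t scidlen)
{
  const char *qlogdir = getenv("QLOGDIR");
  char path[512];
  char hex[128];
  size_t i;

  if(!qlogdir || (scidlen * 2 >= sizeof(hex)))
    return -1;

  for(i = 0; i < scidlen; i++)
    snprintf(&hex[i * 2], 3, "%02x", scid[i]);

  snprintf(path, sizeof(path), "%s/%s.qlog", qlogdir, hex);

  /* the backend (e.g. quiche) takes ownership of this descriptor */
  return open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
}
```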
A common set of functions instead of many separate implementations for
creating buffers that can grow when appending data to them. Existing
functionality has been ported over.
In my early basic testing, the total number of allocations seems to be
roughly the same as before, possibly a few less.
See docs/DYNBUF.md for a description of the API.
Closes #5300
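A short usage sketch of the dynbuf API as described in docs/DYNBUF.md; this
is internal library code, and the exact function signatures should be
checked against that document:

```c
#include <stdio.h>
#include "dynbuf.h"   /* curl's internal dynbuf header */

/* sketch: build a string with the dynbuf API, use it, then free it */
static CURLcode example(void)
{
  struct dynbuf buf;
  CURLcode result;

  /* the second argument is the maximum size the buffer may grow to */
  Curl_dyn_init(&buf, 1024);

  result = Curl_dyn_add(&buf, "Hello");
  if(!result)
    result = Curl_dyn_addf(&buf, ", %s!", "world");

  if(!result)
    printf("%zu bytes: %s\n", Curl_dyn_len(&buf), Curl_dyn_ptr(&buf));

  Curl_dyn_free(&buf); /* safe to call even after a failed append */
  return result;
}
```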
- Check for NULL entry parameter before attempting to deref entry in
Curl_resolver_is_resolved, like is already done in asyn-ares.
This is to silence cppcheck, which does not seem to understand that
asyn-ares and asyn-thread have separate Curl_resolver_is_resolved
functions and that those units are mutually exclusive. Prior to this
change it warned
of a scenario where asyn-thread's Curl_resolver_is_resolved is called
with a NULL entry from asyn-ares, but that couldn't happen.
Reported-by: rl1987@users.noreply.github.com
Fixes https://github.com/curl/curl/issues/5326
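A minimal sketch of the added guard, with a simplified signature and a
stand-in type since the real function uses curl's internal structs:

```c
#include <stddef.h>

struct dns_entry;  /* stand-in for curl's struct Curl_dns_entry */

/* Simplified sketch: only dereference 'entry' when the caller actually
   provided one, mirroring what the asyn-ares version already does. */
static int resolver_is_resolved(struct dns_entry **entry)
{
  if(entry)
    *entry = NULL;  /* previously this was dereferenced unconditionally */

  /* ... poll the resolver thread and fill in *entry when done ... */
  return 0;
}
```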
More connection cache accesses are protected by locks.
CONNCACHE_* is a better prefix for the connection cache lock macros.
Curl_attach_connnection: now called as soon as there's a connection
struct available and before the connection is added to the connection
cache.
Curl_disconnect: now assumes that the connection is already removed from
the connection cache.
Ref: #4915
Closes #5009
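A sketch of the resulting call order; the cache steps are shown only as
comments, and just the Curl_* functions and CONNCACHE_* macros are curl's
own names:

```c
/* Illustrative flow only, not actual curl code. */
static CURLcode use_new_connection(struct Curl_easy *data,
                                   struct connectdata *conn)
{
  /* attach as soon as the connection struct exists, before caching it */
  Curl_attach_connnection(data, conn);

  CONNCACHE_LOCK(data);
  /* ... add 'conn' to the connection cache while holding the lock ... */
  CONNCACHE_UNLOCK(data);

  /* later, when giving up on the connection: it must already have been
     removed from the cache before Curl_disconnect() is called */
  return CURLE_OK;
}
```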
Regression since 7.69.0 and 68fb25fa3f.
The code wrongly assigned 'from' instead of 'auth', probably a copy and
paste mistake from other code, which meant that 'auth' could remain NULL
and later cause an error to be returned.
Assisted-by: Eric Sauvageau
Fixes #5294
Closes #5295
Previously, options set explicitly through command line options could be
overridden by the configuration files parsed automatically when
ssh_connect() was called.
By calling ssh_options_parse_config() explicitly, the configuration
files are parsed before setting the options, avoiding the options
override. Once the configuration files are parsed, the automatic
configuration parsing is not executed.
Fixes #4972
Closes #5283
Signed-off-by: Anderson Toshiyuki Sasaki <ansasaki@redhat.com>
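A sketch of the approach using libssh's public API; the user name is just an
example value and the error handling is minimal:

```c
#include <stdbool.h>
#include <libssh/libssh.h>

/* Parse the configuration files before setting explicit options, then
   disable the automatic parsing that ssh_connect() would otherwise do,
   so the explicit options win. */
static int connect_session(ssh_session ssh, const char *host)
{
  bool process_config = false;

  ssh_options_set(ssh, SSH_OPTIONS_HOST, host);

  /* NULL means: parse the default user and system config files */
  if(ssh_options_parse_config(ssh, NULL) != SSH_OK)
    return -1;

  /* options set from here on are no longer overridden by config files */
  ssh_options_set(ssh, SSH_OPTIONS_USER, "alice"); /* example value */

  /* skip the automatic config parsing inside ssh_connect() */
  ssh_options_set(ssh, SSH_OPTIONS_PROCESS_CONFIG, &process_config);

  return ssh_connect(ssh);
}
```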
Coverity found CID 1461718:
Integer handling issues (CONSTANT_EXPRESSION_RESULT) "timeout_ms >
9223372036854775807L" is always false regardless of the values of its
operands. This occurs as the logical second operand of "||".
Closes #5240
Prior to this change, if there was a 303 reply to a PUT request, the
subsequent request made to follow that redirect would also be a PUT.
It was determined that was most likely incorrect based on the language
of the RFCs. Basically 303 means "see other" resource, which implies it
is most likely not the same resource, therefore we should not try to PUT
to that different resource.
Refer to the discussions in #5237 and #5248 for more information.
Fixes https://github.com/curl/curl/issues/5237
Closes https://github.com/curl/curl/pull/5248
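From an application's point of view, the behavior change can be observed
with an upload that follows redirects; a minimal sketch with a placeholder
file name and URL:

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  /* "file.txt" and the URL are placeholders for this sketch */
  FILE *src = fopen("file.txt", "rb");
  CURL *curl = curl_easy_init();

  if(curl && src) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);        /* PUT the data */
    curl_easy_setopt(curl, CURLOPT_READDATA, src);
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

    /* with this change, a 303 response here is followed with a GET to
       the new resource instead of repeating the PUT */
    curl_easy_perform(curl);
  }

  if(curl)
    curl_easy_cleanup(curl);
  if(src)
    fclose(src);
  return 0;
}
```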
GnuTLS 3.1.10 added new functions we want to use. That version was
released on Mar 22, 2013. Removing support for older versions also
greatly simplifies the code.
Ref: #5271
Closes #5276
Detected by Coverity. CID 1462319.
"The same code is executed when the condition result is true or false,
because the code in the if-then branch and after the if statement is
identical."
Closes #5275
When cURL is compiled with support for multiple SSL backends, it is
possible to configure an SSL backend via `curl_global_sslset()`, but
only *before* `curl_global_init()` was called.
If another SSL backend should be used after that, a user might be
tempted to call `curl_global_cleanup()` to start over. However, we did
not foresee that use case and forgot to reset the SSL backend in that
cleanup.
Let's allow that use case.
Fixes #5255
Closes #5257
Reported-by: davidedec on github
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
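A minimal sketch of the now-supported use case, assuming a libcurl build
that contains both backends:

```c
#include <curl/curl.h>

int main(void)
{
  /* pick a backend before the first global init */
  if(curl_global_sslset(CURLSSLBACKEND_OPENSSL, NULL, NULL) != CURLSSLSET_OK)
    return 1;
  curl_global_init(CURL_GLOBAL_DEFAULT);

  /* ... transfers using OpenSSL ... */

  /* with this change, a full cleanup resets the backend selection ... */
  curl_global_cleanup();

  /* ... so a different backend can be chosen for the next init */
  if(curl_global_sslset(CURLSSLBACKEND_GNUTLS, NULL, NULL) != CURLSSLSET_OK)
    return 1;
  curl_global_init(CURL_GLOBAL_DEFAULT);

  /* ... transfers using GnuTLS ... */

  curl_global_cleanup();
  return 0;
}
```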
From libssh 0.9.0, ssh_key_type() returns different key types for ECDSA
keys depending on the curve.
Signed-off-by: Anderson Toshiyuki Sasaki <ansasaki@redhat.com>
Fixes #5252
Closes #5253
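A hedged sketch of what accepting the curve-specific types can look like;
the enum names are taken from libssh's public header:

```c
#include <libssh/libssh.h>

/* With libssh 0.9.0 an ECDSA key may be reported with a curve-specific
   type, so all of them need to be accepted. */
static int is_ecdsa_key(ssh_key key)
{
  switch(ssh_key_type(key)) {
  case SSH_KEYTYPE_ECDSA:        /* what older libssh versions return */
  case SSH_KEYTYPE_ECDSA_P256:
  case SSH_KEYTYPE_ECDSA_P384:
  case SSH_KEYTYPE_ECDSA_P521:
    return 1;
  default:
    return 0;
  }
}
```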
- Fix schannel_send for the case when no timeout was set.
Prior to this change schannel would error if the socket was not ready
to send data and no timeout was set.
This commit is similar to parent commit 89dc6e0 which recently made the
same change for SOCKS, for the same reason. Basically it was not well
understood that when Curl_timeleft returns 0 it is not a timeout of 0 ms
but actually means no timeout.
Fixes https://github.com/curl/curl/issues/5177
Closes https://github.com/curl/curl/pull/5221
- Document in Curl_timeleft's comment block that returning 0 signals no
timeout (ie there's infinite time left).
- Fix SOCKS' Curl_blockread_all for the case when no timeout was set.
Prior to this change if the timeout had a value of 0 and that was passed
to SOCKET_READABLE it would return right away instead of blocking. That
was likely because it was not well understood that when Curl_timeleft
returns 0 it is not a timeout of 0 ms but actually means no timeout.
Ref: https://github.com/curl/curl/pull/5214#issuecomment-612512360
Closes https://github.com/curl/curl/pull/5220
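A simplified sketch of the shared pattern behind the schannel and SOCKS
fixes above; this is not the exact code, and using TIMEDIFF_T_MAX as the
"no limit" stand-in is an assumption:

```c
/* sketch: Curl_timeleft() returns 0 when no timeout is set at all and a
   negative value when the timeout has already expired */
static CURLcode wait_for_socket(struct Curl_easy *data, curl_socket_t sockfd)
{
  timediff_t timeout_ms = Curl_timeleft(data, NULL, FALSE);

  if(timeout_ms < 0)
    return CURLE_OPERATION_TIMEDOUT;  /* already out of time */
  if(timeout_ms == 0)
    timeout_ms = TIMEDIFF_T_MAX;      /* 0 means: no timeout was set */

  /* errors and timeouts are treated alike here for brevity */
  if(SOCKET_WRITABLE(sockfd, timeout_ms) <= 0)
    return CURLE_OPERATION_TIMEDOUT;
  return CURLE_OK;
}
```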
Prior to this change gopher's blocking code would block forever,
ignoring any set timeout value.
Assisted-by: Jay Satiro
Reviewed-by: Daniel Stenberg
Similar to #5220 and #5221
Closes #5214
When SRP is requested in the priority string, GnuTLS will disable
support for TLS 1.3. Before this change, curl would always add +SRP to
the priority list, effectively always disabling TLS 1.3 support.
With this change, +SRP is only added to the priority list when SRP
authentication is also requested. This also allows updating the error
handling here to not have to retry without SRP. This is because SRP is
only added when requested and in that case a retry is not needed.
Closes #5223
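An illustrative sketch of conditionally appending +SRP before handing the
priority string to GnuTLS; the base string "NORMAL" is just an example:

```c
#include <stdio.h>
#include <gnutls/gnutls.h>

/* Only append +SRP when SRP authentication was actually requested, so
   GnuTLS keeps TLS 1.3 enabled otherwise. */
static int set_priority(gnutls_session_t session, int use_srp)
{
  const char *err = NULL;
  char prio[256];

  snprintf(prio, sizeof(prio), "NORMAL%s", use_srp ? ":+SRP" : "");

  return gnutls_priority_set_direct(session, prio, &err);
}
```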
- If loss of data may occur when converting a timediff_t to time_t and
the time value is > TIME_T_MAX, then treat it as TIME_T_MAX.
This is a follow-up to 8843678 which removed the (time_t) typecast
from the macros so that conversion warnings could be identified.
Closes https://github.com/curl/curl/pull/5199
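An illustrative helper showing the clamping idea; timediff_t and TIME_T_MAX
come from curl's internal headers, and the function itself is not the
actual change:

```c
#include <time.h>

/* Convert a timediff_t to time_t, clamping at TIME_T_MAX when the value
   would not otherwise fit. */
static time_t timediff_to_time_t(timediff_t diff)
{
  /* only clamp when a timediff_t can actually hold values that a time_t
     cannot represent */
  if((sizeof(time_t) < sizeof(timediff_t)) && (diff > (timediff_t)TIME_T_MAX))
    return TIME_T_MAX;
  return (time_t)diff;
}
```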