Confusingly, nghttp2 has two different error code enums:
- nghttp2_error, to be used with nghttp2_strerror
- nghttp2_error_code, to be used with nghttp2_http2_strerror
Closes#5641
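A minimal sketch of the distinction (not from the curl source; the values
passed in are just examples):

#include <nghttp2/nghttp2.h>
#include <stdio.h>

void show_errors(int lib_rv, uint32_t h2_code)
{
  /* nghttp2_error: negative library return codes go to nghttp2_strerror */
  fprintf(stderr, "library error: %s\n", nghttp2_strerror(lib_rv));
  /* nghttp2_error_code: HTTP/2 error codes (RST_STREAM/GOAWAY) go to
     nghttp2_http2_strerror */
  fprintf(stderr, "HTTP/2 error: %s\n", nghttp2_http2_strerror(h2_code));
}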
Since commit f3d501dc67, if proxy support is disabled, MSVC warns:
url.c : warning C4701: potentially uninitialized local variable
'hostaddr' used
url.c : error C4703: potentially uninitialized local pointer variable
'hostaddr' used
That could actually only happen if both `conn->bits.proxy` and
`CURL_DISABLE_PROXY` were enabled.
Initialize it to NULL to silence the warning.
Closes https://github.com/curl/curl/pull/5638
Updated terminology in docs, comments and phrases to refer to C strings
as "null-terminated". Done to unify with how most other C oriented docs
refer to them and what users in general seem to prefer (based on a
single highly unscientific poll on twitter).
Reported-by: coinhubs on github
Fixes#5598
Closes#5608
Don't reference fields that do not exist. Fixes build failure:
vtls/mbedtls.c: In function 'mbed_connect_step1':
vtls/mbedtls.c:249:54: error: 'struct connectdata' has no member named 'http_proxy'
Closes#5615
...previously CURLINFO_EFFECTIVE_URL would report the URL of the
original "mother transfer", not the actually pushed resource.
Reported-by: Jonathan Cardoso Machado
Fixes#5589
Closes#5591
- Include wincrypt before OpenSSL includes so that the latter can
properly handle any conflicts between the two.
Closes https://github.com/curl/curl/pull/5606
Replace "Failed writing body (X != Y)" with
"Failure writing output to destination". Possibly slightly less cryptic.
Reported-by: coinhubs on github
Fixes#5594
Closes#5596
Follow-up to c4e6968127
When a new transfer is created, as a result of an acknowledged push,
that transfer needs a download buffer allocated.
Closes#5590
This commit changes the behavior of CURLSSLOPT_NATIVE_CA so that it does
not override CURLOPT_CAINFO / CURLOPT_CAPATH, or the hardcoded default
locations. Instead the CA store can now be used at the same time.
The change is due to the impending release. The issue is still being
discussed. The behavior of CURLSSLOPT_NATIVE_CA is subject to change and
is now documented as experimental.
Ref: bc052cc (parent commit)
Ref: https://github.com/curl/curl/issues/5585
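A hedged usage sketch of the combined behavior (handle setup and error
checking omitted; the CA bundle path is a placeholder):

/* load the Windows CA store in addition to, not instead of, CAINFO */
curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS, (long)CURLSSLOPT_NATIVE_CA);
curl_easy_setopt(curl, CURLOPT_CAINFO, "ca-bundle.crt");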
For QUIC but also for regular TCP when the second family runs out of IPs
with a failure while the first family is still trying to connect.
Separated the timeout handling for IPv4 and IPv6 connections when they
both have a number of addresses to iterate over.
This avoids using a pair of TCP ports to provide wakeup functionality
for every multi instance on Windows, where socketpair() is emulated
using a TCP socket on loopback which could in turn lead to socket
resource exhaustion.
Reviewed-by: Gergely Nagy
Reviewed-by: Marc Hörsken
Closes#5397
When wolfSSL is built with its OpenSSL API layer, it features the same DES*
functions that OpenSSL has. This change takes advantage of that.
Co-authored-by: Daniel Stenberg
Closes#5556
Fixes#5548
Since the connection can be used by many independent requests (using
HTTP/2 or HTTP/3), things like user-agent and other transfer-specific
data MUST NOT be kept connection oriented as it could lead to requests
getting the wrong string for their requests. This struct data was
lingering like this due to old HTTP/1 legacy thinking where it didn't
matter.
Fixes#5566
Closes#5567
When asking for a specific feature to be shared in the share object,
that bit was previously set unconditionally even if the shared feature
failed or otherwise wouldn't work.
Closes#5554
Instead of discussing if there's value or meaning (implied or not) in
the colors, let's use words without the same possibly negative
associations.
Closes#5546
The SOCKS4/5 state machines weren't properly terminated when the proxy
connection got closed, leading to a busy-loop.
Reported-By: zloi-user on github
Fixes#5532
Closes#5542
To reduce the amount of allocations needed for creating a Curl_addrinfo
struct, make a single larger malloc instead of three separate smaller
ones.
Closes#5533
quiche now requires the application to explicitly set the keylog path
for each connection, rather than reading the environment variable
itself.
Closes#5541
Now that all functions in select.[ch] take timediff_t instead
of the limited int or long, we can remove type conversions
and related preprocessor checks to silence compiler warnings.
Avoiding conversions from time_t was already done in 842f73de.
Based upon #5262
Supersedes #5214, #5220 and #5221
Follow up to #5343 and #5479
Closes#5490
On some systems, openssl 1.0.x is still the default, but it has been
patched to contain all the recent security fixes. As a result of this
patching, it is possible for macro X509_V_FLAG_NO_ALT_CHAINS to be
defined, while the previous behavior of openssl to not look at trusted
chains first, remains.
Fix it: ensure X509_V_FLAG_TRUSTED_FIRST is always set, do not try to
probe for the behavior of openssl based on the existence of macros.
Closes#5530
Commit 4a4b63d forgot to set the expected SOCKS5 reply length when the
reply ATYP is X'01'. This resulted in erroneously expecting more bytes
when the request length is greater than the reply length (e.g., when
remotely resolving the hostname).
Closes#5527
When the method is updated inside libcurl we must still not change the
method as set by the user as then repeated transfers with that same
handle might not execute the same operation anymore!
This fixes the libcurl part of #5462
Test 1633 added to verify.
Closes#5499
`http_proxy` will not be available in `conndata` if `CURL_DISABLE_PROXY`
is enabled. Repair the build with that configuration.
Follow-up to f3d501dc67
Closes#5498
"Null-checking k->str suggests that it may be null, but it has already
been dereferenced on all paths leading to the check" - and it can't
legally be NULL at this point. Remove check.
Detected by Coverity CID 1463884
Closes#5495
Since Win32 almost always will also have USE_WINSOCK,
we can reduce complexity and always use Sleep there.
Assisted-by: Jay Satiro
Reviewed-by: Daniel Stenberg
Follow up to #5343
Closes#5489
... and free it as soon as the transfer is done. It removes the extra
alloc when a new size is set with setopt() and reduces memory for unused
easy handles.
In addition: the closure_handle now doesn't use an allocated buffer at
all but the smallest supported size as a stack based one.
Closes#5472
Using time_t and suseconds_t if suseconds_t is available,
long on Windows (maybe others in the future) and int elsewhere.
Also handle the case of ULONG_MAX being greater than or equal to INFINITE.
Assisted-by: Jay Satiro
Reviewed-by: Daniel Stenberg
Part of #5343
Make all functions in select.[ch] take timeout_ms as timediff_t
which should always be large enough and signed on all platforms
to take all possible timeout values and avoid type conversions.
Reviewed-by: Jay Satiro
Reviewed-by: Daniel Stenberg
Replaces #5107 and partially #5262
Related to #5240 and #5286
Closes#5343
Tested with ngtcp2 built against the OpenSSL library. Additionally
tested with MultiSSL (NSS for TLS and ngtcp2+OpenSSL for QUIC).
The TLS backend (independent of QUIC) may or may not already have opened
the keylog file before. Therefore Curl_tls_keylog_open is always called
to ensure the file is open.
Tested following the same curl and tshark commands as in commit
"vtls: Extract and simplify key log file handling from OpenSSL" using
WolfSSL v4.4.0-stable-128-g5179503e8 from git master built with
`./configure --enable-all --enable-debug CFLAGS=-DHAVE_SECRET_CALLBACK`.
Full support for this feature requires certain wolfSSL build options,
see "Availability note" in lib/vtls/wolfssl.c for details.
Closes#5327
Create a set of routines for TLS key log file handling to enable reuse
with other TLS backends. Simplify the OpenSSL backend as follows:
- Drop the ENABLE_SSLKEYLOGFILE macro as it is unconditionally enabled.
- Do not perform dynamic memory allocation when preparing a log entry.
Unless the TLS specifications change we can suffice with a reasonable
fixed-size buffer.
- Simplify state tracking when SSL_CTX_set_keylog_callback is
unavailable. My original sslkeylog.c code included this tracking in
order to handle multiple calls to SSL_connect and detect new keys
after renegotiation (via SSL_read/SSL_write). For curl however we can
be sure that a single master secret eventually becomes available
after SSL_connect, so a simple flag is sufficient. An alternative to
the flag is examining SSL_state(), but this seems more complex and is
not pursued. Capturing keys after server renegotiation was already
unsupported in curl and remains unsupported.
Tested with curl built against OpenSSL 0.9.8zh, 1.0.2u, and 1.1.1f
(`SSLKEYLOGFILE=keys.txt curl -vkso /dev/null https://localhost:4433`)
against an OpenSSL 1.1.1f server configured with:
# Force non-TLSv1.3, use TLSv1.0 since 0.9.8 fails with 1.1 or 1.2
openssl s_server -www -tls1
# Likewise, but fail the server handshake.
openssl s_server -www -tls1 -Verify 2
# TLS 1.3 test. No need to test the failing server handshake.
openssl s_server -www -tls1_3
Verify that all secrets (1 for TLS 1.0, 4 for TLS 1.3) are correctly
written using Wireshark. For the first and third case, expect four
matches per connection (decrypted Server Finished, Client Finished, HTTP
Request, HTTP Response). For the second case where the handshake fails,
expect a decrypted Server Finished only.
tshark -i lo -pf tcp -otls.keylog_file:keys.txt -Tfields \
-eframe.number -eframe.time -etcp.stream -e_ws.col.Info \
-dtls.port==4433,http -ohttp.desegment_body:FALSE \
-Y 'tls.handshake.verify_data or http'
A single connection can easily be identified via the `tcp.stream` field.
For HTTP 1.x, it's a protocol error when the server sends more bytes
than announced. If this happens, don't reuse the connection, because the
start position of the next response is undefined.
Closes#5440
When USE_RESOLVE_ON_IPS is set (defined on macOS), it means that
numerical IP addresses still need to get "resolved" - but not with DoH.
Reported-by: Viktor Szakats
Fixes#5454
Closes#5459
They're only limited to the maximum string input restrictions, not to
256 bytes.
Added test 1178 to verify
Reported-by: Will Roberts
Fixes#5448
Closes#5449
Fixed the alt-svc parser to treat a newline as end of line.
The unit tests in test 1654 were done without CRLF and thus didn't quite
match the real world. Now they use CRLF as well.
Reported-by: Peter Wu
Assisted-by: Peter Wu
Assisted-by: Jay Satiro
Fixes#5445
Closes#5446
This reverts commit 74623551f3.
Instead mark the function call with (void). Getting the return code and
using it instead triggered Coverity warning CID 1463596 because
snprintf() can return a negative value...
Closes#5441
... as returning a "" is not a good idea as the string is supposed to be
allocated and returning a const string will cause issues.
Reported-by: Brian Carpenter
Follow-up to ed35d6590e
Closes#5405
This change introduces a generic way to provide binary data in setopt
options, called BLOBs.
This change introduces these new setopts:
CURLOPT_ISSUERCERT_BLOB, CURLOPT_PROXY_SSLCERT_BLOB,
CURLOPT_PROXY_SSLKEY_BLOB, CURLOPT_SSLCERT_BLOB and CURLOPT_SSLKEY_BLOB.
Reviewed-by: Daniel Stenberg
Closes#5357
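A minimal sketch of using one of the new BLOB options (assumes the PEM
certificate bytes are already in memory as cert_data/cert_len):

struct curl_blob blob;
blob.data = cert_data;
blob.len = cert_len;
blob.flags = CURL_BLOB_COPY; /* have libcurl keep its own copy of the data */
curl_easy_setopt(curl, CURLOPT_SSLCERTTYPE, "PEM");
curl_easy_setopt(curl, CURLOPT_SSLCERT_BLOB, &blob);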
- Stick to a single unified way to use structs
- Make checksrc complain on 'typedef struct {'
- Allow them in tests, public headers and examples
- Let MD4_CTX, MD5_CTX, and SHA256_CTX typedefs remain as they actually
typedef different types/structs depending on build conditions.
Closes#5338
Previously, after PASV and immediately after the data connection has
connected, the function would only return the control socket to wait for
which then made the data connection simply time out and not get polled
correctly. This became obvious when running tests 1631 and 1632
event-based.
Use them only if `_UNICODE` is defined, in which case command-line
arguments have been converted to UTF-8.
Closes https://github.com/curl/curl/pull/3784
- use `wmain` instead of `main` when `_UNICODE` is defined [0]
- define `argv_item_t` as `wchar_t *` in this case
- use the curl_multibyte gear to convert the command-line arguments to
UTF-8
This makes it possible to pass parameters with characters outside of
the current locale on Windows, which is required for some tests, e.g.
the IDN tests. Out of the box, this currently only works with the
Visual Studio project files, which default to Unicode, and winbuild
with the `ENABLE_UNICODE` option.
[0] https://devblogs.microsoft.com/oldnewthing/?p=40643
Ref: https://github.com/curl/curl/issues/3747
Closes https://github.com/curl/curl/pull/3784
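Roughly what the dual entry point looks like (a simplified sketch, not the
actual tool code):

#include <wchar.h>

#ifdef _UNICODE
int wmain(int argc, wchar_t *argv[])
#else
int main(int argc, char *argv[])
#endif
{
  /* under _UNICODE the wide argv entries are converted to UTF-8 with the
     curl_multibyte helpers before option parsing */
  (void)argc;
  (void)argv;
  return 0;
}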
Fix theoretical integer overflow in Curl_auth_create_plain_message.
The security impact of the overflow was discussed on hackerone. We
agreed this is more of a theoretical vulnerability, as the integer
overflow would only be triggerable on systems using 32-bits size_t with
over 4GB of available memory space for the process.
Closes#5391
Since input passed to libcurl with CURLOPT_USERPWD and
CURLOPT_PROXYUSERPWD circumvents the regular string length check we have
in Curl_setstropt(), the input length limit is enforced in
Curl_parse_login_details too, separately.
Reported-by: Thomas Bouzerar
Closes#5383
When looking for a protocol match among supported schemes, check the
most "popular" schemes first. It has zero functionality difference and
for all practical purposes a speed difference will not be measurable,
but I still think it makes sense to put the least likely matches last.
"Popularity" based on the 2019 user survey.
Closes#5377
Add three new CMake Find modules (using the curl license, but I grant
others the right to apply the CMake BSD license instead).
This CMake config is simpler than the autotools one because it assumes
ngtcp2 and nghttp3 to be used together. Another difference is that this
CMake config checks whether QUIC is actually supported by the TLS
library (patched OpenSSL or boringssl) since this can be a common
configuration mistake that could result in build errors later.
Unlike autotools, CMake does not warn you that the features are
experimental. The user is supposed to already know that and read the
documentation. It requires a very special build environment anyway.
Tested with ngtcp2+OpenSSL+nghttp3 and quiche+boringssl, both built from
current git master. Use `LD_DEBUG=files src/curl |& grep need` to figure
out which features (libldap-2.4, libssh2) to disable due to conflicts
with boringssl.
Closes#5359
If the QLOGDIR environment variable is set, enable qlogging.
... and create Curl_qlogdir() in the new generic vquic/vquic.c file for
QUIC functions that are backend independent.
Closes#5353
That return code is reserved for build-time conditional code not being
present while this was a regular run-time error from a Windows API.
Reported-by: wangp on github
Fixes#5349
Closes#5350
Triggered by a crash detected by OSS-Fuzz after the dynbuf introduction in
ed35d6590e. This should make the trailer handling more straightforward and
hopefully less error-prone.
Deliver the trailer header to the callback already at receive-time. No
longer caches the trailers to get delivered at end of stream.
Bug: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=22030
Closes#5348
In my very basic test that lists sftp://127.0.0.1/tmp/, this patched
code makes 161 allocations compared to 194 in git master. A 17%
reduction.
Closes#5336
quiche has the potential to log qlog files. To enable this, you must
build quiche with the qlog feature enabled `cargo build --features
qlog`. curl then passes a file descriptor to quiche, which takes
ownership of the file. The FD transfer only works on UNIX.
The convention is to enable logging when the QLOGDIR environment is
set. This should be a path to a folder where files are written with the
naming template <SCID>.qlog.
Co-authored-by: Lucas Pardue
Replaces #5337
Closes#5341
A common set of functions instead of many separate implementations for
creating buffers that can grow when appending data to them. Existing
functionality has been ported over.
In my early basic testing, the total number of allocations seem at
roughly the same amount as before, possibly a few less.
See docs/DYNBUF.md for a description of the API.
Closes#5300
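A rough usage sketch of the new API, following what docs/DYNBUF.md
describes (these are internal libcurl functions, not public API, so names
and headers here are as I understand them):

#include <stdio.h>
#include "dynbuf.h"   /* internal header */

struct dynbuf buf;
Curl_dyn_init(&buf, 2048);            /* 2048 = maximum allowed size */
if(!Curl_dyn_add(&buf, "hello ") &&   /* returns CURLcode, 0 on success */
   !Curl_dyn_add(&buf, "world")) {
  printf("%s (%zu bytes)\n", Curl_dyn_ptr(&buf), Curl_dyn_len(&buf));
}
Curl_dyn_free(&buf);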
- Check for NULL entry parameter before attempting to deref entry in
Curl_resolver_is_resolved, like is already done in asyn-ares.
This is to silence cppcheck which does not seem to understand that
asyn-ares and asyn-thread have separate Curl_resolver_is_resolved
and those units are mutually exclusive. Prior to this change it warned
of a scenario where asyn-thread's Curl_resolver_is_resolved is called
with a NULL entry from asyn-ares, but that couldn't happen.
Reported-by: rl1987@users.noreply.github.com
Fixes https://github.com/curl/curl/issues/5326
More connection cache accesses are protected by locks.
CONNCACHE_* is a better prefix for the connection cache lock macros.
Curl_attach_connnection: now called as soon as there's a connection
struct available and before the connection is added to the connection
cache.
Curl_disconnect: now assumes that the connection is already removed from
the connection cache.
Ref: #4915
Closes#5009
Regression since 7.69.0 and 68fb25fa3f.
The code wrongly assigned 'from' instead of 'auth' which probably was a
copy-and-paste mistake from other code, with the result that auth could
remain NULL and later cause an error to be returned.
Assisted-by: Eric Sauvageau
Fixes#5294
Closes#5295
Previously, options set explicitly through command line options could be
overridden by the configuration files parsed automatically when
ssh_connect() was called.
By calling ssh_options_parse_config() explicitly, the configuration
files are parsed before setting the options, avoiding the options
override. Once the configuration files are parsed, the automatic
configuration parsing is not executed.
Fixes#4972
Closes#5283
Signed-off-by: Anderson Toshiyuki Sasaki <ansasaki@redhat.com>
Coverity found CID 1461718:
Integer handling issues (CONSTANT_EXPRESSION_RESULT) "timeout_ms >
9223372036854775807L" is always false regardless of the values of its
operands. This occurs as the logical second operand of "||".
Closes#5240
Prior to this change if there was a 303 reply to a PUT request then
the subsequent request to respond to that redirect would also be a PUT.
It was determined that was most likely incorrect based on the language
of the RFCs. Basically 303 means "see other" resource, which implies it
is most likely not the same resource, therefore we should not try to PUT
to that different resource.
Refer to the discussions in #5237 and #5248 for more information.
Fixes https://github.com/curl/curl/issues/5237
Closes https://github.com/curl/curl/pull/5248
GnuTLS 3.1.10 added new functions we want to use. That version was
released on Mar 22, 2013. Removing support for older versions also
greatly simplifies the code.
Ref: #5271
Closes#5276
Detected by Coverity. CID 1462319.
"The same code is executed when the condition result is true or false,
because the code in the if-then branch and after the if statement is
identical."
Closes#5275
When cURL is compiled with support for multiple SSL backends, it is
possible to configure an SSL backend via `curl_global_sslset()`, but
only *before* `curl_global_init()` was called.
If another SSL backend should be used after that, a user might be
tempted to call `curl_global_cleanup()` to start over. However, we did
not foresee that use case and forgot to reset the SSL backend in that
cleanup.
Let's allow that use case.
Fixes#5255
Closes#5257
Reported-by: davidedec on github
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
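The now-allowed sequence, sketched (which backends are available depends on
how libcurl was built; OpenSSL and Schannel are only examples):

curl_global_sslset(CURLSSLBACKEND_OPENSSL, NULL, NULL);
curl_global_init(CURL_GLOBAL_DEFAULT);
/* ... transfers using OpenSSL ... */
curl_global_cleanup();

/* after this fix, the backend choice is reset and can be made again */
curl_global_sslset(CURLSSLBACKEND_SCHANNEL, NULL, NULL);
curl_global_init(CURL_GLOBAL_DEFAULT);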
From libssh 0.9.0, ssh_key_type() returns different key types for ECDSA
keys depending on the curve.
Signed-off-by: Anderson Toshiyuki Sasaki <ansasaki@redhat.com>
Fixes#5252
Closes#5253
- Fix schannel_send for the case when no timeout was set.
Prior to this change schannel would error if the socket was not ready
to send data and no timeout was set.
This commit is similar to parent commit 89dc6e0 which recently made the
same change for SOCKS, for the same reason. Basically it was not well
understood that when Curl_timeleft returns 0 it is not a timeout of 0 ms
but actually means no timeout.
Fixes https://github.com/curl/curl/issues/5177
Closes https://github.com/curl/curl/pull/5221
- Document in Curl_timeleft's comment block that returning 0 signals no
timeout (ie there's infinite time left).
- Fix SOCKS' Curl_blockread_all for the case when no timeout was set.
Prior to this change if the timeout had a value of 0 and that was passed
to SOCKET_READABLE it would return right away instead of blocking. That
was likely because it was not well understood that when Curl_timeleft
returns 0 it is not a timeout of 0 ms but actually means no timeout.
Ref: https://github.com/curl/curl/pull/5214#issuecomment-612512360
Closes https://github.com/curl/curl/pull/5220
Prior to this change gopher's blocking code would block forever,
ignoring any set timeout value.
Assisted-by: Jay Satiro
Reviewed-by: Daniel Stenberg
Similar to #5220 and #5221
Closes#5214
When SRP is requested in the priority string, GnuTLS will disable
support for TLS 1.3. Before this change, curl would always add +SRP to
the priority list, effectively always disabling TLS 1.3 support.
With this change, +SRP is only added to the priority list when SRP
authentication is also requested. This also allows updating the error
handling here to not have to retry without SRP. This is because SRP is
only added when requested and in that case a retry is not needed.
Closes#5223
- If loss of data may occur converting a timediff_t to time_t and
the time value is > TIME_T_MAX then treat it as TIME_T_MAX.
This is a follow-up to 8843678 which removed the (time_t) typecast
from the macros so that conversion warnings could be identified.
Closes https://github.com/curl/curl/pull/5199
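The clamping idea, shown as a hypothetical helper (the helper name is
illustrative; TIME_T_MAX and timediff_t are curl-internal definitions):

static time_t clamp_to_time_t(timediff_t value)
{
  /* if the narrowing conversion would lose data, saturate at TIME_T_MAX */
  if(value > (timediff_t)TIME_T_MAX)
    return (time_t)TIME_T_MAX;
  return (time_t)value;
}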
In a debug build, setting the environment variable "CURL_SMALLREQSEND"
will make the first HTTP request send no more bytes than the set
amount, thus verifying that the logic for handling a split
HTTP request send works correctly.
Restores the --head functionality to the curl utility which extracts
'protocol' that is stored that way.
Reported-by: James Fuller
Fixes#5196
Closes#5198
In libcurl, CURLINFO_CONDITION_UNMET is used to avoid writing to the
output file if the server did not transfer a file based on the time
condition. In the same manner, getting a 304 HTTP response back from the
server, for example after passing a custom If-Match-* header, also
fulfills this condition.
Fixes#5181
Closes#5183
Currently, the TLS backend used by vquic/ngtcp2.c is selected at compile
time. Therefore OpenSSL support needs to be explicitly disabled.
Signed-off-by: Daiki Ueno <dueno@redhat.com>
Closes#5148
This updates the ngtcp2 OpenSSL backend to follow the API change in
commit 32e703164 of ngtcp2.
Notable changes are:
- ngtcp2_crypto_derive_and_install_{rx,tx}_key have been added to replace
ngtcp2_crypto_derive_and_install_key
- the 'side' argument of ngtcp2_crypto_derive_and_install_initial_key
has been removed
Fixes#5166
Closes#5168
OpenSSL 3 deprecates SSL_CTX_load_verify_locations and the MD4, DES
functions we use.
Fix the MD4 and SSL_CTX_load_verify_locations warnings.
In configure, detect OpenSSL v3 and if so, inhibit the deprecation
warnings. OpenSSL v3 deprecates the DES functions we use for NTLM and
until we rewrite the code to use non-deprecated functions we better
ignore these warnings as they don't help us.
Closes#5139
Reported by the new script 'scripts/copyright.pl'. The script has a
regex whitelist for the files that don't need copyright headers.
Removed three (mostly useless) README files from docs/
Closes#5141
.. because not all Windows build systems have those symbols, and even
those that do may be missing newer symbols (eg the Windows 7 SDK does
not define _WIN32_WINNT_WIN10).
Those symbols are used in build-time logic to decide which API to use
and prior to this change if the symbols were missing it would have
resulted in deprecated API being used when more recent functions were
available (eg GetVersionEx used instead of VerifyVersionInfo).
Reported-by: FuccDucc@users.noreply.github.com
Probably fixes https://github.com/curl/curl/issues/4995
Closes https://github.com/curl/curl/pull/5057
Prior to this change in libcurl debug builds http2 stream closure was
erroneously referred to as connection closure.
Before:
* nread <= 0, server closed connection, bailing
After:
* nread == 0, stream closed, bailing
Closes https://github.com/curl/curl/pull/5118
- Implement new option CURLSSLOPT_REVOKE_BEST_EFFORT and
--ssl-revoke-best-effort to allow a "best effort" revocation check.
A best effort revocation check ignores errors indicating that the
revocation check was unable to take place. The reasoning is described in
detail below and
discussed further in the PR.
---
When running e.g. with Fiddler, the schannel backend fails with an
unhelpful error message:
Unknown error (0x80092012) - The revocation function was unable
to check revocation for the certificate.
Sadly, many enterprise users who are stuck behind MITM proxies suffer
the very same problem.
This has been discussed in plenty of issues:
https://github.com/curl/curl/issues/3727,
https://github.com/curl/curl/issues/264, for example.
In the latter, a Microsoft Edge developer even made the case that the
common behavior is to ignore issues when a certificate has no recorded
distribution point for revocation lists, or when the server is offline.
This is also known as "best effort" strategy and addresses the Fiddler
issue.
Unfortunately, this strategy was not chosen as the default for schannel
(and is therefore a backend-specific behavior: OpenSSL seems to happily
ignore the offline servers and missing distribution points).
To maintain backward-compatibility, we therefore add a new flag
(`CURLSSLOPT_REVOKE_BEST_EFFORT`) and a new option
(`--ssl-revoke-best-effort`) to select the new behavior.
Due to the many related issues in Git for Windows and GitHub Desktop, the
plan is to make this behavior the default in these software packages.
The test 2070 was added to verify this behavior, adapted from 310.
Based-on-work-by: georgeok <giorgos.n.oikonomou@gmail.com>
Co-authored-by: Markus Olsson <j.markus.olsson@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Closes https://github.com/curl/curl/pull/4981
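Selecting the new behavior from an application, sketched (handle setup
omitted):

curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS,
                 (long)CURLSSLOPT_REVOKE_BEST_EFFORT);

With the tool, the equivalent is passing --ssl-revoke-best-effort.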
- If an easy handle is owned by a multi different from the one specified
then return CURLM_BAD_EASY_HANDLE.
Prior to this change I assume user error could cause corruption.
Closes https://github.com/curl/curl/pull/5116
Makes curl_easy_getinfo() of "variable" numerical content instead return
the number set in the env variable `CURL_TIME`.
Makes curl_version() of "variable" textual content. This guarantees a
stable version string which can be tested against. Environment variable
`CURL_VERSION` defines the content.
Assisted-by: Mathias Gumz
This commit adds support to generate JSON via the writeout feature:
-w "%{json}"
It leverages the existing infrastructure as much as possible. Thus,
generating the JSON on STDERR is possible by:
-w "%{stderr}%{json}"
This implements a variant of
https://github.com/curl/curl/wiki/JSON#--write-out-json.
Closes#4870
When libcurl retries a connection due to it being "seemingly dead" or by
REFUSED_STREAM, it will now only do it up to five times before giving up,
to avoid never-ending loops.
Reported-by: Dima Tisnek
Bug: https://curl.haxx.se/mail/lib-2020-03/0044.html
Closes#5074
Make sure each separate index in conn->tempaddr[] is used for a fixed
family (and only that family) during the connection process.
If family one takes a long time and family two fails immediately, the
previous logic could misbehave and retry the same family two address
repeatedly.
Reported-by: Paul Vixie
Reported-by: Jay Satiro
Fixes#5083
Fixes#4954
Closes#5089
- Ignore CURLE_NOT_BUILT_IN errors returned by c-ares functions in
curl_easy_duphandle.
Prior to this change if c-ares was used as the resolver backend and
either it was too old or libcurl was built without IPv6 support then
some of our resolver functions could return CURLE_NOT_BUILT_IN to
curl_easy_duphandle causing it to fail.
Caused by c8f086b which shipped in 7.69.1.
Reported-by: Karl Chen
Fixes https://github.com/curl/curl/issues/5097
Closes https://github.com/curl/curl/pull/5100
1. The socks4 state machine was broken in the host resolving phase
2. The code now insists on IPv4-only when using SOCKS4 as the protocol
only supports that.
Regression from #4907 and 4a4b63d, shipped in 7.69.0
Reported-by: amishmm on github
Bug: https://github.com/curl/curl/issues/5053#issuecomment-596191594
Closes#5061
New test 666 checks this is effective.
As upload buffer size is significant in this kind of tests, shorten it
in similar test 652.
Fixes#4860
Closes#4833
Reported-by: RuurdBeerstra on github
Input buffer filling may delay the data sending if data reads are slow.
To overcome this problem, file and callback data reads do not accumulate
in the buffer anymore. All other data (memory data and mime framing) is
considered fast and still concatenated in the buffer.
As this may highly impact performance in terms of data overhead, an early
end of part data check is added to spare a read call.
When encoding a part's data, an encoder may require more bytes than made
available by a single read. In this case, the above rule does not apply
and reads are performed until the encoder is able to deliver some data.
Tests 643, 644, 645, 650 and 654 have been adapted to the output data
changes, with test data size reduced to avoid the boredom of long lists of
1-byte chunks in verification data.
New test 667 checks mimepost using single-byte read callback with encoder.
New test 668 checks the end of part data early detection.
Fixes#4826
Reported-by: MrdUkk on github
In case a read callback returns a status (pause, abort, eof,
error) instead of a byte count, drain the bytes read so far but
remember this status for further processing.
Takes care of not losing data when pausing, and properly resume a
paused mime structure when requested.
New tests 670-673 check unpausing cases, with easy or multi
interface and mime or form api.
Fixes#4813
Reported-by: MrdUkk on github
With c-ares the dns parameters live in ares_channel. Store them in the
curl handle and set them again in easy_duphandle.
Regression introduced in #3228 (6765e6d), shipped in curl 7.63.0.
Fixes#4893
Closes#5020
Signed-off-by: Ernst Sjöstrand <ernst.sjostrand@verisure.com>
- Don't check errno on wakeup socket if sread returned 0 since sread
doesn't set errno in that case.
This is a follow-up to cf7760a from several days ago which fixed
Curl_multi_wait to stop busy looping sread on the non-blocking wakeup
socket if it was closed (ie sread returns 0). Due to a logic error it
was still possible to busy loop in that case if errno == EINTR.
Closes https://github.com/curl/curl/pull/5047
As we have logic that checks if we get a >= 400 response code back before
the upload is done, which then got confused since it wasn't "done" but
yet there was no data to send!
Reported-by: IvanoG on github
Fixes#4996
Closes#5002
New test 666 checks this is effective.
As upload buffer size is significant in this kind of tests, shorten it
in similar test 652.
Fixes#4860
Reported-by: RuurdBeerstra on github
Input buffer filling may delay the data sending if data reads are slow.
To overcome this problem, file and callback data reads do not accumulate
in the buffer anymore. All other data (memory data and mime framing) is
considered fast and still concatenated in the buffer.
As this may highly impact performance in terms of data overhead, an early
end of part data check is added to spare a read call.
When encoding a part's data, an encoder may require more bytes than made
available by a single read. In this case, the above rule does not apply
and reads are performed until the encoder is able to deliver some data.
Tests 643, 644, 645, 650 and 654 have been adapted to the output data
changes, with test data size reduced to avoid the boredom of long lists of
1-byte chunks in verification data.
New test 664 checks mimepost using single-byte read callback with encoder.
New test 665 checks the end of part data early detection.
Fixes#4826
Reported-by: MrdUkk on github
In case a read callback returns a status (pause, abort, eof,
error) instead of a byte count, drain the bytes read so far but
remember this status for further processing.
Takes care of not losing data when pausing, and properly resume a
paused mime structure when requested.
New tests 670-673 check unpausing cases, with easy or multi
interface and mime or form api.
Fixes#4813
Reported-by: MrdUkk on github
Closes#4833
... since the socket might not actually be readable anymore when for
example the data is already buffered in the TLS layer.
Fixes#4966
Reported-by: Anders Berg
Closes#5000
This reduces the HTTP/2 window size to 32 MB since libcurl might have to
buffer up to this amount of data in memory and yet we don't want it set
lower to potentially impact transfer performance on high speed networks.
Requires nghttp2 commit b3f85e2daa629
(https://github.com/nghttp2/nghttp2/pull/1444) to work properly, to end
up in the next release after 1.40.0.
Fixes#4939
Closes#4940
Previously, it was not possible to get a known hosts file entry due to
the lack of an API. ssh_session_get_known_hosts_entry(), introduced in
libssh-0.9.0, allows libcurl to obtain such information and behave the
same as when compiled with libssh2.
This also tries to avoid the usage of deprecated functions when the
replacements are available. The behaviour will not change if versions
older than libssh-0.8.0 are used.
Signed-off-by: Anderson Toshiyuki Sasaki <ansasaki@redhat.com>
Fixes#4953
Closes#4962
When doing a request with a body + Expect: 100-continue and the server
responds with a 417, the same request will be retried immediately
without the Expect: header.
Added test 357 to verify.
Also added a control instruction to tell the sws test server to not read
the request body if Expect: is present, which the new test 357 uses.
Reported-by: bramus on github
Fixes#4949
Closes#4964
Note: The RCPT TO command isn't required to advertise to the server that
it contains UTF-8 characters, instead the server is told that a mail may
contain UTF-8 in any envelope command via the MAIL command.
Support the SMTPUTF8 extension when sending mailbox information in the
MAIL command (FROM and AUTH parameters). Non-ASCII domain names will
be ACE encoded, if IDN is supported, whilst non-ASCII characters in
the local address part are passed to the server.
Reported-by: ygthien on github
Fixes#4828
* Don't include 'struct' in the gcrypt MD4_CTX typedef
* The call to gcry_md_read() should use a dereferenced ctx
* The call to gcry_md_close() should use a dereferenced ctx
Additional minor whitespace issue in the USE_WIN32_CRYPTO code.
Closes#4959
To simplify our code, and since earlier versions lack important function
calls that libcurl needs to function correctly.
nghttp2 1.12.0 was released on June 26, 2016.
Closes#4961
TLS servers may request a certificate from the client. This request
includes a list of 0 or more acceptable issuer DNs. The client may use
this list to determine which certificate to send. GnuTLS's default
behavior is to not send a client certificate if there is no
match. However, OpenSSL's default behavior is to send the configured
certificate. The `GNUTLS_FORCE_CLIENT_CERT` flag mimics OpenSSL
behavior.
Authored-by: jethrogb on github
Fixes#1411
Closes#4958
Whilst lib\md4.c used this pre-processor directive, lib\md5.c and
src\tool_metalink.c did not and simply relied on the WIN32
pre-processor directive.
Reviewed-by: Marcel Raad
Closes#4955
- Change tool_util.c tvnow() for Windows to match more closely to
timeval.c Curl_now().
- Create a win32 init function for the tool, since some initialization
is required for the tvnow() changes.
Prior to this change the monotonic time function used by curl in Windows
was determined at build-time and not runtime. That was a problem because
when curl was built targeted for compatibility with old versions of
Windows (eg _WIN32_WINNT < 0x0600) it would use GetTickCount which wraps
every 49.7 days that Windows has been running.
This change makes curl behave similar to libcurl's tvnow function, which
determines at runtime whether the OS is Vista+ and if so calls
QueryPerformanceCounter instead. (Note QueryPerformanceCounter is used
because it has higher resolution than the more obvious candidate
GetTickCount64). The changes to tvnow are basically a copy and paste but
the types in some cases are different.
Ref: https://github.com/curl/curl/issues/3309
Closes https://github.com/curl/curl/pull/4847
Saves the file as "[filename].[8 random hex digits].tmp" and renames
away the extension when done.
Co-authored-by: Jay Satiro
Reported-by: Mike Frysinger
Fixes#4914
Closes#4926
- Deduplicate GetEnv() code.
- On Windows change ultimate call to use Windows API
GetEnvironmentVariable() instead of C runtime getenv().
Prior to this change both libcurl and the tool had their own GetEnv
which over time diverged. Now the tool's GetEnv is a wrapper around
curl_getenv (libcurl API function which is itself a wrapper around
libcurl's GetEnv).
Furthermore this change fixes a bug in that Windows API
GetEnvironmentVariable() is called instead of C runtime getenv() to get
the environment variable since some changes aren't always visible to the
latter.
Reported-by: Christoph M. Becker
Fixes https://github.com/curl/curl/issues/4774
Closes https://github.com/curl/curl/pull/4863
STRERROR_LEN is the constant used throughout the library to set the size
of the buffer on the stack that the curl strerror functions write to.
Prior to this change some extended length Windows error messages could
be truncated.
Closes https://github.com/curl/curl/pull/4920
- Do not say that conn->data is "cleared" by multi_done().
If the connection is in use then multi_done assigns another easy handle
still using the connection to conn->data, therefore in that case it is
not cleared.
Closes https://github.com/curl/curl/pull/4901
This avoids the duplication of strings when the optional AUTH and SIZE
parameters are required. It also assists with the modifications that
are part of #4892.
Closes#4903
The alt-svc cache survives a call to curl_easy_reset fine, but the file
name to use for saving the cache was cleared. Now the alt-svc cache has
a copy of the file name to survive handle resets.
Added test 1908 to verify.
Reported-by: Craig Andrews
Fixes#4898
Closes#4902
RFC 7616 section 3.4 (The Authorization Header Field) states that "For
historical reasons, a sender MUST NOT generate the quoted string syntax
for the following parameters: algorithm, qop, and nc". This removes the
quoting for the algorithm parameter.
Reviewed-by: Steve Holme
Closes#4890
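The practical effect on the generated header, roughly (other parameters
abbreviated with "..."):

Before: Authorization: Digest username="user", ..., algorithm="MD5", ...
After:  Authorization: Digest username="user", ..., algorithm=MD5, ...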
... as this is already done much earlier in the URL parser.
Also add test case 894 that verifies that pop3 with an encoded CR in
the user name is rejected.
Closes#4887
- Fixed the flag parsing to apply to specific alternative entry only, as
per RFC. The earlier code would also get totally confused by a
multiprotocol header, parsing flags from the wrong part of the header.
- Fixed the parser terminating on unknown protocols, instead of skipping
them.
- Fixed a busyloop when protocol-id was present without an equal sign.
Closes#4875
... since the current transfer is being killed. Setting to NULL is
wrong, leaving it pointing to 'data' is wrong since that handle might be
about to get freed.
Fixes#4845
Closes#4858
Reported-by: dmitrmax on github
In the "scheme-less" parsing case, we need to strip off credentials
first before we guess scheme based on the host name!
Assisted-by: Jay Satiro
Fixes#4856
Closes#4857
Previously it was stored in a global state which contributed to
curl_global_init's thread unsafety. This boolean is now instead figured
out in curl_multi_init() and stored in the multi handle. Less effective,
but thread safe.
Closes#4851
- Removed from global_init since it isn't thread-safe. The symbol will
still remain to not break compiles, it just won't have any effect going
forward.
- make the internals NOT loop on EINTR (the opposite from previously).
It only risks returning from the select/poll/wait functions early, and that
should be risk-free.
Closes#4840
Avoid "reparsing" the content and instead deliver more exactly what is
provided in the certificate and avoid truncating the data after 512
bytes as done previously. This no longer removes embedded newlines.
Fixes#4837
Reported-by: bnfp on github
Closes#4841
As detailed in DEPRECATE.md, the polarssl support is now removed after
having been disabled for 6 months and nobody has missed it.
The threadlock files used by mbedtls are renamed to an 'mbedtls' prefix
instead of the former 'polarssl' and the common functions that
previously were shared between mbedtls and polarssl and contained the
name 'polarssl' have now all been renamed to instead say 'mbedtls'.
Closes#4825
A regression made the code use 'multiplexed' as a boolean instead of the
counter it is intended to be. This made curl try to "over-populate"
connections with new streams.
This regression came with 41fcdf71a1, shipped in curl 7.65.0.
Also, respect the CURLMOPT_MAX_CONCURRENT_STREAMS value in the same
check.
Reported-by: Kunal Ekawde
Fixes#4779
Closes#4784
- Allow forcing the host's key type found in the known_hosts file.
Currently, curl (with libssh2) does not take keys from your known_hosts
file into account when talking to a server. With this patch the
known_hosts file will be searched for an entry matching the hostname
and, if found, libssh2 will be told to claim this key type from the
server.
Closes https://github.com/curl/curl/pull/4747
- Support hostname verification via alternative names (SAN) in the
peer certificate when CURLOPT_CAINFO is used in Windows 7 and earlier.
CERT_NAME_SEARCH_ALL_NAMES_FLAG doesn't exist before Windows 8. As a
result CertGetNameString doesn't quite work on those versions of
Windows. This change provides an alternative solution for
CertGetNameString by iterating through CERT_ALT_NAME_INFO for earlier
versions of Windows.
Prior to this change many certificates failed the hostname validation
when CURLOPT_CAINFO was used in Windows 7 and earlier. Most certificates
now represent multiple hostnames and rely on the alternative names field
exclusively to represent their hostnames.
Reported-by: Jeroen Ooms
Fixes https://github.com/curl/curl/issues/3711
Closes https://github.com/curl/curl/pull/4761
- Add new error code CURLE_QUIC_CONNECT_ERROR for QUIC connection
errors.
Prior to this change CURLE_FAILED_INIT was used, but that was not
correct.
Closes https://github.com/curl/curl/pull/4754
- Define USE_WIN32_CRYPTO by default. This enables SMB.
- Show whether SMB is enabled in the "Enabled features" output.
- Fix mingw compiler warning for call to CryptHashData by casting away
const param. mingw CryptHashData prototype is wrong.
Closes https://github.com/curl/curl/pull/4717
The code was duplicated in the various resolver backends.
Also, it was called after the call to `Curl_ipvalid`, which matters in
case of `CURLRES_IPV4` when called from `connect.c:bindlocal`. This
caused test 1048 to fail on classic MinGW.
The code ignores `conn->ip_version` as done previously in the
individual resolver backends.
Move the call to the `resolver_start` callback up to appease test 655,
which wants it to be called also for literal addresses.
Closes https://github.com/curl/curl/pull/4798
Factor out common I/O loop as bearssl_run_until, which reads/writes TLS
records until the desired engine state is reached. This is now used for
the handshake, read, write, and close.
Match OpenSSL SSL_write behavior, and don't return the number of bytes
written until the corresponding records have been completely flushed
across the socket. This involves keeping track of the length of data
buffered into the TLS engine, and assumes that when CURLE_AGAIN is
returned, the write function will be called again with the same data
and length arguments. This is the same requirement of SSL_write.
Handle TLS close notify as EOF when reading by returning 0.
Closes https://github.com/curl/curl/pull/4748
- Undefine DEBUGASSERT in curl_setup_once.h in case it was already
defined as a system macro.
- Don't compile write32_le in curl_endian unless
CURL_SIZEOF_CURL_OFF_T > 4, since it's only used by Curl_write64_le.
- Include <arpa/inet.h> in socketpair.c.
Closes https://github.com/curl/curl/pull/4756
- Remove our cb_update_key in favor of ngtcp2's new
ngtcp2_crypto_update_key_cb which does the same thing.
Several days ago the ngtcp2_update_key callback function prototype was
changed in ngtcp2/ngtcp2@42ce09c. Though it would be possible to
fix up our cb_update_key for that change they also added
ngtcp2_crypto_update_key_cb which does the same thing so we'll use that
instead.
Ref: https://github.com/ngtcp2/ngtcp2/commit/42ce09c
Closes https://github.com/curl/curl/pull/4735
... as it would previously prefer new connections rather than
multiplexing in most conditions! The (now removed) code was a leftover
from the Pipelining code that was translated wrongly into a
multiplex-only world.
Reported-by: Kunal Ekawde
Bug: https://curl.haxx.se/mail/lib-2019-12/0060.html
Closes#4732
- Remove the final semi-colon in the SEC2TXT() macro definition.
Before: #define SEC2TXT(sec) case sec: txt = #sec; break;
After: #define SEC2TXT(sec) case sec: txt = #sec; break
Prior to this change SEC2TXT(foo); would generate break;; which caused
the empty expression warning.
Ref: https://github.com/curl/curl/commit/5b22e1a#r36458547
This makes them never to be considered "the oldest" to be discarded when
reaching the connection cache limit. The reasoning here is that
CONNECT_ONLY is primarily used in combination with using the
connection's socket post connect and since that is used outside of
curl's knowledge we must assume that it is in use until explicitly
closed.
Reported-by: Pavel Pavlov
Reported-by: Pavel Löbl
Fixes#4426
Fixes#4369
Closes#4696
It could accidentally let the connection get used by more than one
thread, leading to double-free and more.
Reported-by: Christopher Reid
Fixes#4544
Closes#4557
Add support for CURLSSLOPT_NO_PARTIALCHAIN in CURLOPT_PROXY_SSL_OPTIONS
and OS400 package spec.
Also I added the option to the NameValue list in the tool even though it
isn't exposed as a command-line option (...yet?). (NameValue stringizes
the option name for the curl cmd -> libcurl source generator)
Follow-up to 564d88a which added CURLSSLOPT_NO_PARTIALCHAIN.
Ref: https://github.com/curl/curl/pull/4655
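A usage sketch for the proxy variant (handle setup omitted):

/* do not require a complete chain from the HTTPS proxy either */
curl_easy_setopt(curl, CURLOPT_PROXY_SSL_OPTIONS,
                 (long)CURLSSLOPT_NO_PARTIALCHAIN);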
- Stop treating lack of HTTP2 as an unknown option error result for
CURLOPT_SSL_ENABLE_ALPN and CURLOPT_SSL_ENABLE_NPN.
Prior to this change it was impossible to disable ALPN / NPN if libcurl
was built without HTTP2. Setting either option would result in
CURLE_UNKNOWN_OPTION and the respective internal option would not be
set. That was incorrect since ALPN and NPN are used independent of
HTTP2.
Reported-by: Shailesh Kapse
Fixes https://github.com/curl/curl/issues/4668
Closes https://github.com/curl/curl/pull/4672
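After this fix, the following works even in a libcurl built without HTTP/2
(sketch; handle setup omitted):

curl_easy_setopt(curl, CURLOPT_SSL_ENABLE_ALPN, 0L);
curl_easy_setopt(curl, CURLOPT_SSL_ENABLE_NPN, 0L);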
Options are cross-checked with configure.ac and acinclude.m4.
Tested on Arch Linux, untested on other platforms like Windows or macOS.
Closes#4663
Reviewed-by: Kamil Dudka
Also, use `CURLRES_IPV6` only for actual DNS resolution, not for IPv6
address support. This makes it possible to connect to IPv6 literals by
setting `ENABLE_IPV6` even without `getaddrinfo` support. It also fixes
the CMake build when using the synchronous resolver without
`getaddrinfo` support.
Closes https://github.com/curl/curl/pull/4662
Have intermediate certificates in the trust store be treated as
trust-anchors, in the same way as self-signed root CA certificates
are. This allows users to verify servers using the intermediate cert
only, instead of needing the whole chain.
Other TLS backends already accept partial chains.
Reported-by: Jeffrey Walton
Bug: https://curl.haxx.se/mail/lib-2019-11/0094.html
- Disable warning C4127 "conditional expression is constant" globally
in curl_setup.h for when building with Microsoft's compiler.
This mainly affects building with the Visual Studio project files found
in the projects dir.
Prior to this change the cmake and winbuild build systems already
disabled 4127 globally for when building with Microsoft's compiler.
Also, 4127 was already disabled for all build systems in the limited
circumstance of the WHILE_FALSE macro which disabled the warning
specifically for while(0). This commit removes the WHILE_FALSE macro and
all other cruft in favor of disabling globally in curl_setup.
Background:
We have various macros that cause 0 or 1 to be evaluated, which would
cause warning C4127 in Visual Studio. For example this causes it:
#define Curl_resolver_asynch() 1
Full behavior is not clearly defined and inconsistent across versions.
However it is documented that since VS 2015 Update 3 Microsoft has
addressed this somewhat but not entirely, not warning on while(true) for
example.
Prior to this change some C4127 warnings occurred when I built with
Visual Studio using the generated projects in the projects dir.
Closes https://github.com/curl/curl/pull/4658
- In all code call Curl_winapi_strerror instead of Curl_strerror when
the error code is known to be from Windows GetLastError.
Curl_strerror prefers CRT error codes (errno) over Windows API error
codes (GetLastError) when the two overlap. When we know the error code
is from GetLastError it is more accurate to prefer the Windows API error
messages.
Reported-by: Richard Alcock
Fixes https://github.com/curl/curl/issues/4550
Closes https://github.com/curl/curl/pull/4581
... so that failures in the global init function don't count as a
working init and it can then be called again.
Reported-by: Paul Groke
Fixes#4636
Closes#4653
... and use internally. This function will return TIME_T_MAX instead of
failure if the parsed data is found to be larger than what can be
represented. TIME_T_MAX being the largest value curl can represent.
Reviewed-by: Daniel Gustafsson
Reported-by: JanB on github
Fixes#4152
Closes#4651
The WHILE_FALSE construction is used to avoid compiler warnings in
macro constructions. This fixes a few instances where it was not
used in order to keep the code consistent.
Closes#4649
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Given that this is performed by the NTLM code there is no need to
perform the initialisation in the HTTP layer. This also keeps the
initialisation the same as the SASL based protocols and also fixes a
possible compilation issue if both NSS and SSPI were to be used as
multiple SSL backends.
Reviewed-by: Kamil Dudka
Closes#3935
The regexp looking for assignments within conditions was too greedy
and matched a too long string in the case of multiple conditionals
on the same line. This is basically only a problem in single line
macros, and the code which exemplified this was essentially:
do { if((x) != NULL) { x = NULL; } } while(0)
..where the final parenthesis of while(0) matched the regexp, and
the legal assignment in the block triggered the warning. Fix by
making the regexp less greedy by matching for the tell-tale signs
of the if statement ending.
Also remove the one occurrence where the warning was disabled due
to a construction like the above, where the warning didn't apply
when fixed.
Closes#4647
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
ERR_error_string(NULL) should never be called. It places the error in a
global buffer, which is not thread-safe. Use ERR_error_string_n with a
local buffer instead.
Closes#4645
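The thread-safe pattern, roughly (standard OpenSSL calls; the buffer size
is an arbitrary choice):

#include <openssl/err.h>
#include <stdio.h>

char buf[256];
/* writes into the caller's buffer instead of a shared global one */
ERR_error_string_n(ERR_get_error(), buf, sizeof(buf));
fprintf(stderr, "OpenSSL: %s\n", buf);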
This commit adds curl_multi_wakeup() which was previously in the TODO
list under the curl_multi_unblock name.
On some platforms and with some configurations this feature might not be
available or can fail, in these cases a new error code
(CURLM_WAKEUP_FAILURE) is returned from curl_multi_wakeup().
Fixes#4418
Closes#4608
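Typical use, sketched (the multi handle is assumed to be blocked in
curl_multi_poll() on another thread):

CURLMcode rc = curl_multi_wakeup(multi);
if(rc == CURLM_WAKEUP_FAILURE) {
  /* wakeup is unavailable or failed on this platform/configuration */
}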
Prior to this change schannel ignored --tls-max (CURL_SSLVERSION_MAX_
macros) when --tlsv1 (CURL_SSLVERSION_TLSv1) or default TLS
(CURL_SSLVERSION_DEFAULT) was used, always using a max of TLS 1.2.
Closes https://github.com/curl/curl/pull/4633
- Disable the extra sensitivity except in debug builds (--enable-debug).
- Improve SYSCALL error message logic in ossl_send and ossl_recv so that
"No error" / "Success" socket error text isn't shown on SYSCALL error.
Prior to this change 0ab38f5 (precedes 7.67.0) increased the sensitivity
of OpenSSL's SSL_ERROR_SYSCALL error so that abrupt server closures were
also considered errors. For example, a server that does not send a known
protocol termination point (eg HTTP content length or chunked encoding)
_and_ does not send a TLS termination point (close_notify alert) would
cause an error if it closed the connection.
To be clear that behavior made it into release build 7.67.0
unintentionally. Several users have reported it as an issue.
Ultimately the idea is a good one, since it can help prevent against a
truncation attack. Other SSL backends may already behave similarly (such
as Windows native OS SSL Schannel). However much more of our user base
is using OpenSSL and there is a mass of legacy users in that space, so I
think that behavior should be partially reverted and then rolled out
slowly.
This commit changes the behavior so that the increased sensitivity is
disabled in all curl builds except curl debug builds (DEBUGBUILD). If
after a period of time there are no major issues then it can be enabled
in dev and release builds with the newest OpenSSL (1.1.1+), since users
using the newest OpenSSL are the least likely to have legacy problems.
Bug: https://github.com/curl/curl/issues/4409#issuecomment-555955794
Reported-by: Bjoern Franke
Fixes https://github.com/curl/curl/issues/4624
Closes https://github.com/curl/curl/pull/4623
Prior to this change:
The check if an extra wait is necessary was based not on the
number of extra fds but on the pointer.
If a non-null pointer was given in extra_fds, but extra_nfds
was zero, then the wait was skipped even though poll was not
called.
Closes https://github.com/curl/curl/pull/4610
Improved estimation of expected_len and updated related comments;
increased strictness of QNAME-encoding, adding error detection for empty
labels and names longer than the overall limit; avoided treating DNAME
as unexpected;
updated unit test 1655 with more thorough set of proofs and tests
Closes#4598
Since 59041f0, a new timer might be set in multi_done() so the clearing
of the timers need to happen afterwards!
Reported-by: Max Kellermann
Fixes#4575
Closes#4583
- Use FORMAT_MESSAGE_IGNORE_INSERTS to ignore format specifiers in
Windows error strings.
Since we are not in control of the error code we don't know what
information may be needed by the error string's format specifiers.
Prior to this change Windows API error strings which contain specifiers
(think specifiers like similar to printf specifiers) would not be shown.
The FormatMessage Windows API call which turns a Windows error code into
a string could fail and set error ERROR_INVALID_PARAMETER if that error
string contained a format specifier. FormatMessage expects a va_list for
the specifiers, unless inserts are ignored in which case no substitution
is attempted.
Ref: https://devblogs.microsoft.com/oldnewthing/20071128-00/?p=24353
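Roughly the pattern this enables (a sketch using standard Windows API
calls; the buffer size is arbitrary):

char buf[256];
DWORD err = GetLastError();
FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
               NULL, err, 0, buf, (DWORD)sizeof(buf), NULL);
/* %1-style inserts in the message are kept as-is instead of causing
   FormatMessage to fail */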
- Consider a modified file to be committed this year.
- Make the travis CHECKSRC also do COPYRIGHTYEAR scan in examples and
includes
- Ignore 0 parents when getting latest commit date of file.
since in the CI we're dealing with a truncated repo of last 50 commits,
the file's most recent commit may not be available. when this happens
git log and rev-list show the initial commit (ie first commit not to be
truncated) but that's incorrect so ignore it.
Ref: https://github.com/curl/curl/pull/4547
Closes https://github.com/curl/curl/pull/4549
Co-authored-by: Jay Satiro
- Open the CA file using FILE_SHARE_READ mode so that others can read
from it as well.
Prior to this change our schannel code opened the CA file without
sharing which meant concurrent openings (eg an attempt from another
thread or process) would fail during the time it was open without
sharing, which in curl's case would cause error:
"schannel: failed to open CA file".
Bug: https://curl.haxx.se/mail/lib-2019-10/0104.html
Reported-by: Richard Alcock
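The sharing mode in question, sketched with plain Windows API calls
(ca_file is a placeholder path variable):

HANDLE h = CreateFileA(ca_file, GENERIC_READ, FILE_SHARE_READ, NULL,
                       OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
if(h != INVALID_HANDLE_VALUE) {
  /* other readers can open the file concurrently while we hold it */
  CloseHandle(h);
}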
... as it can make it wait there for a long time for no good purpose.
Patched-by: Jay Satiro
Reported-by: Bylon2 on github
Adviced-by: Nikos Mavrogiannopoulos
Fixes#4487
Closes#4541
On macOS/BSD, trying to call sendto on a connected UDP socket fails
with a EISCONN error. Because the singleipconnect has already called
connect on the socket when we're trying to use it for QUIC transfers
we need to use plain send instead.
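A minimal sketch of the difference, assuming the UDP socket has already
been connect()ed:
#include <sys/types.h>
#include <sys/socket.h>

static ssize_t udp_send(int sockfd, const void *buf, size_t len)
{
  /* sendto() with an explicit address fails with EISCONN on macOS/BSD
     once the UDP socket is connected; plain send() works everywhere */
  return send(sockfd, buf, len, 0);
}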
Fixes#4529
Closes https://github.com/curl/curl/pull/4533
The ngtcp2 QUIC backend was using the MSG_DONTWAIT flag for send/recv
in order to perform nonblocking operations. On Windows this flag does
not exist. Instead, the socket must be set to nonblocking mode via
ioctlsocket.
This change sets the nonblocking flag on UDP sockets used for QUIC on
all platforms so the use of MSG_DONTWAIT is not needed.
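A minimal sketch of making a socket nonblocking on both kinds of
platforms (illustrative only, not curl's internal helper):
#ifdef _WIN32
#include <winsock2.h>

static int set_nonblocking(SOCKET s)
{
  u_long on = 1;
  /* Windows has no MSG_DONTWAIT; switch the whole socket instead */
  return ioctlsocket(s, FIONBIO, &on);
}
#else
#include <fcntl.h>

static int set_nonblocking(int s)
{
  int flags = fcntl(s, F_GETFL, 0);
  return fcntl(s, F_SETFL, flags | O_NONBLOCK);
}
#endif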
Fixes#4531Closes#4532
To make sure that the transfer is being dealt with. Streams without
Content-Length need a final read to notice the end-of-stream state.
Reported-by: Tom van der Woerdt
Fixes#4496
The URL extracted with CURLINFO_EFFECTIVE_URL was, in most cases,
returned exactly as given as input, so it would not get a scheme
prefixed like before when the URL was given without one, and dotdot
sequences etc. were not removed.
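For reference, this is how an application reads the value after a
transfer; a minimal sketch:
#include <stdio.h>
#include <curl/curl.h>

static void show_effective_url(CURL *curl)
{
  char *eff = NULL;
  /* with this fix the returned URL is normalized again: a scheme is
     added when the input lacked one and dotdot sequences are removed */
  if(curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_URL, &eff) == CURLE_OK && eff)
    printf("effective URL: %s\n", eff);
}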
Added test case 1907 to verify that this now works as intended and as
before 7.62.0.
Regression introduced in 7.62.0
Reported-by: Christophe Dervieux
Fixes#4491Closes#4493
With MinGW-w64, `curl_socket_t` is a 32 or 64 bit unsigned integer,
while `read` expects a 32 bit signed integer.
Use `sread` instead of `read` to use the correct parameter type.
Closes https://github.com/curl/curl/pull/4483
Previously all connect() failures would return CURLE_COULDNT_CONNECT, no
matter what errno said.
This makes for example --retry work on these transfer failures.
Reported-by: Nathaniel J. Smith
Fixes#4461
Closes#4462
To make sure that the HTTP/2 state is initialized correctly for
duplicated handles. It would otherwise easily generate "spurious"
PRIORITY frames to get sent over HTTP/2 connections when duplicated easy
handles were used.
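For reference, the affected case is an application duplicating an easy
handle; a minimal sketch (the comment describes the symptom this change
avoids):
#include <curl/curl.h>

static CURL *clone_handle(CURL *orig)
{
  /* the duplicate needs a correctly initialized HTTP/2 state, or it
     may emit spurious PRIORITY frames on reused connections */
  return curl_easy_duphandle(orig);
}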
Reported-by: Daniel Silverstone
Fixes#4303Closes#4442
This fix removes a use after free which can be triggered by
the internal cookie fuzzer, but otherwise is probably
impossible to trigger from an ordinary application.
The following program reproduces it:
curl_global_init(CURL_GLOBAL_DEFAULT);
CURL *handle = curl_easy_init();
CookieInfo *info = Curl_cookie_init(handle, NULL, NULL, false);
curl_easy_setopt(handle, CURLOPT_COOKIEJAR, "/dev/null");
Curl_flush_cookies(handle, true);
Curl_cookie_cleanup(info);
curl_easy_cleanup(handle);
curl_global_cleanup();
This was found through fuzzing.
Closes#4454
The 'share object' only sets the storage area for cookies. The "cookie
engine" still needs to be enabled or activated using the normal cookie
options.
This caused the curl command line tool to accidentally use cookies
without having been told to, since curl switched to using shared cookies
in 7.66.0.
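In other words, sharing cookie storage and enabling the cookie engine
are two separate steps; a minimal sketch of an application doing both
(the helper name is illustrative):
#include <curl/curl.h>

static void attach_cookie_share(CURL *curl, CURLSH *share)
{
  /* the share object only provides shared cookie storage */
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);
  curl_easy_setopt(curl, CURLOPT_SHARE, share);

  /* the cookie engine still has to be enabled explicitly, for example
     with a blank CURLOPT_COOKIEFILE */
  curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");
}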
Test 1166 verifies the fix.
Updated test 506.
Fixes#4429Closes#4434
Prior to this change non-ssl/non-ssh connections that were reused set
TIMER_APPCONNECT [1]. Arguably that was incorrect since no SSL/SSH
handshake took place.
[1]: TIMER_APPCONNECT is publicly known as CURLINFO_APPCONNECT_TIME in
libcurl and %{time_appconnect} in the curl tool. It is documented as
"the time until the SSL/SSH handshake is completed".
Reported-by: Marcel Hernandez
Ref: https://github.com/curl/curl/issues/3760
Closes https://github.com/curl/curl/pull/3773
- Convert some of them to infof() calls via H3BUF()
- Remove some of them completely
- Make DEBUG_HTTP3 defined only if CURLDEBUG is set, for now
Closes#4421
The parser would check for a query part before the fragment, which made
it behave incorrectly when the fragment contains a question mark.
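A minimal sketch with the URL API showing the case: a question mark
inside the fragment must stay in the fragment and must not start a
query part (the URL is illustrative):
#include <stdio.h>
#include <curl/curl.h>

static void fragment_with_question_mark(void)
{
  CURLU *u = curl_url();
  char *frag = NULL;

  curl_url_set(u, CURLUPART_URL, "https://example.com/#name?with?marks", 0);
  if(curl_url_get(u, CURLUPART_FRAGMENT, &frag, 0) == CURLUE_OK) {
    /* expected: "name?with?marks"; this URL has no query part */
    printf("fragment: %s\n", frag);
    curl_free(frag);
  }
  curl_url_cleanup(u);
}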
Extended test 1560 to verify.
Reported-by: Alex Konev
Fixes#4412Closes#4413
As libcurl now uses these 2 system functions, wrappers are needed on os400
to convert returned AF_UNIX sockaddrs to ascii.
This is a follow-up to commit 7fb54ef.
See also #4037.
Closes#4214
Otherwise curl may be told to use, for instance, POP3 to communicate
with the DoH server, which most likely is not what you want.
Found through fuzzing.
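A minimal sketch; the server URL below is only a placeholder, but the
point of the check is that it has to be an https:// URL:
#include <curl/curl.h>

static void enable_doh(CURL *curl)
{
  /* only an https:// endpoint makes sense for DoH */
  curl_easy_setopt(curl, CURLOPT_DOH_URL,
                   "https://doh.example.com/dns-query");
}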
Closes#4406
It was already fixed for BoringSSL in commit a0f8fccb1e.
LibreSSL has had the second argument to SSL_CTX_set_min_proto_version
as uint16_t ever since the function was added in [0].
[0] 56f107201b
Closes https://github.com/curl/curl/pull/4397
Prior to this change, when a server returned a SOCKS5 connect error,
curl would parse the destination address:port from that data and show
it to the user as the destination:
curld -v --socks5 10.0.3.1:1080 http://google.com:99
* SOCKS5 communication to google.com:99
* SOCKS5 connect to IPv4 172.217.12.206 (locally resolved)
* Can't complete SOCKS5 connection to 253.127.0.0:26673. (1)
curl: (7) Can't complete SOCKS5 connection to 253.127.0.0:26673. (1)
That's incorrect because the address:port included in the connect error
is actually a bind address:port (typically unused) and not the
destination address:port. This fix changes curl to show the destination
information that curl sent to the server instead:
curld -v --socks5 10.0.3.1:1080 http://google.com:99
* SOCKS5 communication to google.com:99
* SOCKS5 connect to IPv4 172.217.7.14:99 (locally resolved)
* Can't complete SOCKS5 connection to 172.217.7.14:99. (1)
curl: (7) Can't complete SOCKS5 connection to 172.217.7.14:99. (1)
curld -v --socks5-hostname 10.0.3.1:1080 http://google.com:99
* SOCKS5 communication to google.com:99
* SOCKS5 connect to google.com:99 (remotely resolved)
* Can't complete SOCKS5 connection to google.com:99. (1)
curl: (7) Can't complete SOCKS5 connection to google.com:99. (1)
Ref: https://tools.ietf.org/html/rfc1928#section-6
Closes https://github.com/curl/curl/pull/4394
As the loop discards cookies without a domain set. This bug would lead
to qsort() trying to sort uninitialized pointers. We have however not
found it to be a security problem.
Reported-by: Paul Dreik
Closes#4386
If the input hostname is "[", hlen will underflow to the maximum value
of size_t when 2 is subtracted from it.
hostname[hlen] will then cause a warning by ubsanitizer:
runtime error: addition of unsigned offset to 0x<snip> overflowed to
0x<snip>
I think that in practice, the generated code will work, and the output
of hostname[hlen] will be the first character "[".
This can be demonstrated by the following program (tested with both
clang and gcc, with -O3):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
  char *hostname = strdup("[");
  size_t hlen = strlen(hostname); /* 1 */
  hlen -= 2;                      /* underflows to SIZE_MAX */
  hostname++;
  printf("character is %d\n", +hostname[hlen]);
  free(hostname - 1);
  return 0;
}
I found this through fuzzing, and even if it seems harmless, the proper
thing is to return early with an error.
Closes#4389
CURLU_NO_AUTHORITY is intended for use with unknown schemes (i.e. not
"file:///") to override curl's default requirement that an authority
exists.
Closes#4349
If the :authority pseudo header field doesn't contain an explicit port,
we assume it is valid for the default port, instead of rejecting the
request for all ports.
Ref: https://curl.haxx.se/mail/lib-2019-09/0041.html
Closes#4365
If you set the same URL for target as for DoH (and it isn't a DoH
server), like "https://example.com" in both, the easy handles used for
the DoH requests could be left "dangling" and end up not getting freed.
Reported-by: Paul Dreik
Closes#4366
The undefined behaviour is annoying when running fuzzing with
sanitizers. The codegen is the same, but the meaning is now not up for
dispute. See https://cppinsights.io/s/516a2ff4
By incrementing the pointer first, both gcc and clang recognize this as
a bswap and optimize it to a single instruction. See
https://godbolt.org/z/994Zpx
Closes#4350
Added unit test case 1655 to verify; the test correctly finds the flaws
in the old code if one temporarily restores doh.c to the old version.
Closes#4352
This is a protocol violation but apparently there are legacy proprietary
servers doing this.
Added test 336 and 337 to verify.
Reported-by: Philippe Marguinaud
Closes#4339
For FTPS transfers, curl gets close_notify on the data connection
without that being a signal to close the control connection!
Regression since 3f5da4e59a (7.65.0)
Reported-by: Zenju on github
Reviewed-by: Jay Satiro
Fixes#4329Closes#4340
Despite ldap_err2string being documented by MS as returning a
PCHAR (char *), when UNICODE is defined it is mapped to
ldap_err2stringW and returns PWCHAR (wchar_t *).
We have lots of code that expects ldap_err2string to return char *,
most of it using failf like this:
failf(data, "LDAP local: Some error: %s", ldap_err2string(rc));
Closes https://github.com/curl/curl/pull/4272
It needs to parse correctly. Otherwise it could be tricked into letting
through host names built from the hex letters a-f, which libcurl would
then resolve. Like '[ab.be]'.
Reported-by: Thomas Vegas
Closes#4315
OpenSSL 1.1.0 adds SSL_CTX_set_<min|max>_proto_version() that we now use
when available. Existing code is preserved for older versions of
OpenSSL.
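A minimal sketch of the pattern: prefer the 1.1.0 setters when
available and fall back to SSL_CTX_set_options() flags on older
OpenSSL versions (the chosen versions are only an example):
#include <openssl/ssl.h>

static void limit_tls_versions(SSL_CTX *ctx)
{
#if OPENSSL_VERSION_NUMBER >= 0x10100000L
  /* OpenSSL 1.1.0+: explicit minimum and maximum protocol versions */
  SSL_CTX_set_min_proto_version(ctx, TLS1_VERSION);
  SSL_CTX_set_max_proto_version(ctx, TLS1_2_VERSION);
#else
  /* older OpenSSL: exclude the unwanted versions instead */
  SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);
#endif
}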
Closes#4304
Otherwise, a three byte response would make the smtp_state_ehlo_resp()
function misbehave.
Credit to OSS-Fuzz
Bug: https://crbug.com/oss-fuzz/16918
Assisted-by: Max Dymond
Closes#4287
This allows the function to figure out whether a unix domain socket has
a file name associated with it or not! When a socket is created with
socketpair(), as done in the fuzzer testing, the path struct member is
uninitialized and must not be accessed.
Bug: https://crbug.com/oss-fuzz/16699
Closes#4283
It could otherwise return an error even when closed correctly if GOAWAY
had been received previously.
Reported-by: Tom van der Woerdt
Fixes#4267Closes#4268
For a long time (since 7.28.1) we've returned an error when setting the
value to 1, to make applications notice that we stopped supporting the
old behavior of 1. Starting now, we treat 1 and 2 exactly the same.
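Assuming this refers to CURLOPT_SSL_VERIFYHOST (the option is not named
above), the application-side view is simply that 1 and 2 now behave the
same; a minimal sketch:
#include <curl/curl.h>

static void verify_host(CURL *curl)
{
  /* assumption: the option in question is CURLOPT_SSL_VERIFYHOST;
     2 is the documented value and 1 is now treated identically */
  curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);
}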
Closes#4241
The quiche debug callback is global and can only be initialized once, so
make sure we don't do it multiple times (e.g. if multiple requests are
executed).
In addition this initializes the callback before the connection is
created, so we get logs for the handshake as well.
Closes#4236
When a username and password were provided in the URL, they were
wrongly removed from the stored URL so that subsequent uses of the same
URL wouldn't find the credentials. This made doing HTTP auth with
multiple connections (like Digest) misbehave.
Regression from 46e164069d (7.62.0)
Test case 335 added to verify.
Reported-by: Mike Crowe
Fixes#4228Closes#4229
SSL_VersionRangeGetDefault returns (TLSv1.0, TLSv1.2) as supported
range in NSS 3.45. It looks like the intention is to raise the minimum
version rather than lowering the maximum, so adjust accordingly. Note
that the caller (nss_setup_connect) initializes the version range to
(TLSv1.0, TLSv1.3), so there is no need to check for >= TLSv1.0 again.
Closes#4187
Reviewed-by: Daniel Stenberg
Reviewed-by: Kamil Dudka