Avoid "reparsing" the content and instead deliver more exactly what is
provided in the certificate and avoid truncating the data after 512
bytes as done previously. This no longer removes embedded newlines.
Fixes#4837
Reported-by: bnfp on github
Closes#4841
As detailed in DEPRECATE.md, the polarssl support is now removed after
having been disabled for 6 months and nobody has missed it.
The threadlock files used by mbedtls are renamed to an 'mbedtls' prefix
instead of the former 'polarssl' and the common functions that
previously were shared between mbedtls and polarssl and contained the
name 'polarssl' have now all been renamed to instead say 'mbedtls'.
Closes#4825
A regression made the code use 'multiplexed' as a boolean instead of the
counter it is intended to be. This made curl try to "over-populate"
connections with new streams.
This regression came with 41fcdf71a1, shipped in curl 7.65.0.
Also, respect the CURLMOPT_MAX_CONCURRENT_STREAMS value in the same
check.
Reported-by: Kunal Ekawde
Fixes#4779
Closes#4784
- Allow forcing the host's key type found in the known_hosts file.
Currently, curl (with libssh2) does not take keys from your known_hosts
file into account when talking to a server. With this patch the
known_hosts file will be searched for an entry matching the hostname
and, if found, libssh2 will be told to claim this key type from the
server.
Closes https://github.com/curl/curl/pull/4747
- Support hostname verification via alternative names (SAN) in the
peer certificate when CURLOPT_CAINFO is used in Windows 7 and earlier.
CERT_NAME_SEARCH_ALL_NAMES_FLAG doesn't exist before Windows 8. As a
result CertGetNameString doesn't quite work on those versions of
Windows. This change provides an alternative solution for
CertGetNameString by iterating through CERT_ALT_NAME_INFO for earlier
versions of Windows.
Prior to this change many certificates failed the hostname validation
when CURLOPT_CAINFO was used in Windows 7 and earlier. Most certificates
now represent multiple hostnames and rely on the alternative names field
exclusively to represent their hostnames.
Reported-by: Jeroen Ooms
Fixes https://github.com/curl/curl/issues/3711
Closes https://github.com/curl/curl/pull/4761
- Add new error code CURLE_QUIC_CONNECT_ERROR for QUIC connection
errors.
Prior to this change CURLE_FAILED_INIT was used, but that was not
correct.
Closes https://github.com/curl/curl/pull/4754
- Define USE_WIN32_CRYPTO by default. This enables SMB.
- Show whether SMB is enabled in the "Enabled features" output.
- Fix mingw compiler warning for call to CryptHashData by casting away
const param. mingw CryptHashData prototype is wrong.
Closes https://github.com/curl/curl/pull/4717
The code was duplicated in the various resolver backends.
Also, it was called after the call to `Curl_ipvalid`, which matters in
case of `CURLRES_IPV4` when called from `connect.c:bindlocal`. This
caused test 1048 to fail on classic MinGW.
The code ignores `conn->ip_version` as done previously in the
individual resolver backends.
Move the call to the `resolver_start` callback up to appease test 655,
which wants it to be called also for literal addresses.
Closes https://github.com/curl/curl/pull/4798
Factor out common I/O loop as bearssl_run_until, which reads/writes TLS
records until the desired engine state is reached. This is now used for
the handshake, read, write, and close.
Match OpenSSL SSL_write behavior, and don't return the number of bytes
written until the corresponding records have been completely flushed
across the socket. This involves keeping track of the length of data
buffered into the TLS engine, and assumes that when CURLE_AGAIN is
returned, the write function will be called again with the same data
and length arguments. This is the same requirement of SSL_write.
Handle TLS close notify as EOF when reading by returning 0.
Closes https://github.com/curl/curl/pull/4748
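For illustration only (not part of the patch), the OpenSSL-side rule this
mirrors looks roughly like this from a caller's perspective; a real caller
would wait for socket readiness between attempts instead of spinning:

  #include <openssl/ssl.h>

  static int write_all(SSL *ssl, const char *buf, int len)
  {
    int rc;
    do {
      rc = SSL_write(ssl, buf, len);  /* retry with the SAME buf and len */
    } while(rc <= 0 &&
            (SSL_get_error(ssl, rc) == SSL_ERROR_WANT_READ ||
             SSL_get_error(ssl, rc) == SSL_ERROR_WANT_WRITE));
    return rc;
  }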
- Undefine DEBUGASSERT in curl_setup_once.h in case it was already
defined as a system macro.
- Don't compile write32_le in curl_endian unless
CURL_SIZEOF_CURL_OFF_T > 4, since it's only used by Curl_write64_le.
- Include <arpa/inet.h> in socketpair.c.
Closes https://github.com/curl/curl/pull/4756
- Remove our cb_update_key in favor of ngtcp2's new
ngtcp2_crypto_update_key_cb which does the same thing.
Several days ago the ngtcp2_update_key callback function prototype was
changed in ngtcp2/ngtcp2@42ce09c. Though it would be possible to
fix up our cb_update_key for that change they also added
ngtcp2_crypto_update_key_cb which does the same thing so we'll use that
instead.
Ref: https://github.com/ngtcp2/ngtcp2/commit/42ce09c
Closes https://github.com/curl/curl/pull/4735
... as it would previously prefer new connections rather than
multiplexing in most conditions! The (now removed) code was a leftover
from the Pipelining code that was translated wrongly into a
multiplex-only world.
Reported-by: Kunal Ekawde
Bug: https://curl.haxx.se/mail/lib-2019-12/0060.html
Closes#4732
- Remove the final semi-colon in the SEC2TXT() macro definition.
Before: #define SEC2TXT(sec) case sec: txt = #sec; break;
After: #define SEC2TXT(sec) case sec: txt = #sec; break
Prior to this change SEC2TXT(foo); would generate break;; which caused
the empty expression warning.
Ref: https://github.com/curl/curl/commit/5b22e1a#r36458547
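For context, a minimal sketch of how such a macro is typically used in a
switch; the codes below are illustrative stand-ins, not the real schannel
status values:

  enum { SEC_OK, SEC_FAIL };                     /* illustrative codes */
  #define SEC2TXT(sec) case sec: txt = #sec; break

  static const char *sec2text(int code)
  {
    const char *txt = "unknown";
    switch(code) {
      SEC2TXT(SEC_OK);   /* without the macro's trailing ';' this no longer
                            expands to a double semicolon */
      SEC2TXT(SEC_FAIL);
      default:
        break;
    }
    return txt;
  }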
This makes them never be considered "the oldest" to be discarded when
reaching the connection cache limit. The reasoning here is that
CONNECT_ONLY is primarily used in combination with using the
connection's socket after the connect, and since that socket is used
outside of curl's knowledge we must assume that it is in use until
explicitly closed.
Reported-by: Pavel Pavlov
Reported-by: Pavel Löbl
Fixes#4426
Fixes#4369
Closes#4696
It could accidentally let the connection get used by more than one
thread, leading to double-free and more.
Reported-by: Christopher Reid
Fixes#4544
Closes#4557
Add support for CURLSSLOPT_NO_PARTIALCHAIN in CURLOPT_PROXY_SSL_OPTIONS
and OS400 package spec.
Also I added the option to the NameValue list in the tool even though it
isn't exposed as a command-line option (...yet?). (NameValue stringizes
the option name for the curl cmd -> libcurl source generator)
Follow-up to 564d88a which added CURLSSLOPT_NO_PARTIALCHAIN.
Ref: https://github.com/curl/curl/pull/4655
- Stop treating lack of HTTP2 as an unknown option error result for
CURLOPT_SSL_ENABLE_ALPN and CURLOPT_SSL_ENABLE_NPN.
Prior to this change it was impossible to disable ALPN / NPN if libcurl
was built without HTTP2. Setting either option would result in
CURLE_UNKNOWN_OPTION and the respective internal option would not be
set. That was incorrect since ALPN and NPN are used independent of
HTTP2.
Reported-by: Shailesh Kapse
Fixes https://github.com/curl/curl/issues/4668
Closes https://github.com/curl/curl/pull/4672
Options are cross-checked with configure.ac and acinclude.m4.
Tested on Arch Linux, untested on other platforms like Windows or macOS.
Closes#4663
Reviewed-by: Kamil Dudka
Also, use `CURLRES_IPV6` only for actual DNS resolution, not for IPv6
address support. This makes it possible to connect to IPv6 literals by
setting `ENABLE_IPV6` even without `getaddrinfo` support. It also fixes
the CMake build when using the synchronous resolver without
`getaddrinfo` support.
Closes https://github.com/curl/curl/pull/4662
Have intermediate certificates in the trust store be treated as
trust-anchors, in the same way as self-signed root CA certificates
are. This allows users to verify servers using the intermediate cert
only, instead of needing the whole chain.
Other TLS backends already accept partial chains.
Reported-by: Jeffrey Walton
Bug: https://curl.haxx.se/mail/lib-2019-11/0094.html
- Disable warning C4127 "conditional expression is constant" globally
in curl_setup.h for when building with Microsoft's compiler.
This mainly affects building with the Visual Studio project files found
in the projects dir.
Prior to this change the cmake and winbuild build systems already
disabled 4127 globally for when building with Microsoft's compiler.
Also, 4127 was already disabled for all build systems in the limited
circumstance of the WHILE_FALSE macro which disabled the warning
specifically for while(0). This commit removes the WHILE_FALSE macro and
all other cruft in favor of disabling globally in curl_setup.
Background:
We have various macros that cause 0 or 1 to be evaluated, which would
cause warning C4127 in Visual Studio. For example this causes it:
#define Curl_resolver_asynch() 1
Full behavior is not clearly defined and is inconsistent across versions.
However it is documented that since VS 2015 Update 3 Microsoft has
addressed this somewhat but not entirely, not warning on while(true) for
example.
Prior to this change some C4127 warnings occurred when I built with
Visual Studio using the generated projects in the projects dir.
Closes https://github.com/curl/curl/pull/4658
- In all code call Curl_winapi_strerror instead of Curl_strerror when
the error code is known to be from Windows GetLastError.
Curl_strerror prefers CRT error codes (errno) over Windows API error
codes (GetLastError) when the two overlap. When we know the error code
is from GetLastError it is more accurate to prefer the Windows API error
messages.
Reported-by: Richard Alcock
Fixes https://github.com/curl/curl/issues/4550
Closes https://github.com/curl/curl/pull/4581
... so that failures in the global init function don't count as a
working init and it can then be called again.
Reported-by: Paul Groke
Fixes#4636
Closes#4653
... and use internally. This function will return TIME_T_MAX instead of
failure if the parsed data is found to be larger than what can be
represented, TIME_T_MAX being the largest value curl can represent.
Reviewed-by: Daniel Gustafsson
Reported-by: JanB on github
Fixes#4152
Closes#4651
The WHILE_FALSE construction is used to avoid compiler warnings in
macro constructions. This fixes a few instances where it was not
used in order to keep the code consistent.
Closes#4649
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Given that this is performed by the NTLM code there is no need to
perform the initialisation in the HTTP layer. This also keeps the
initialisation the same as the SASL based protocols and also fixes a
possible compilation issue if both NSS and SSPI were to be used as
multiple SSL backends.
Reviewed-by: Kamil Dudka
Closes#3935
The regexp looking for assignments within conditions was too greedy
and matched a too long string in the case of multiple conditionals
on the same line. This is basically only a problem in single line
macros, and the code which exemplified this was essentially:
do { if((x) != NULL) { x = NULL; } } while(0)
..where the final parenthesis of while(0) matched the regexp, and
the legal assignment in the block triggered the warning. Fix by
making the regexp less greedy by matching for the tell-tale signs
of the if statement ending.
Also remove the one occurrence where the warning was disabled due
to a construction like the above, where the warning didn't apply
when fixed.
Closes#4647
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
ERR_error_string(NULL) should never be called. It places the error in a
global buffer, which is not thread-safe. Use ERR_error_string_n with a
local buffer instead.
Closes#4645
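A minimal sketch of the thread-safe pattern (OpenSSL API; the 256-byte
buffer size is an arbitrary choice for the example):

  #include <stdio.h>
  #include <openssl/err.h>

  char errbuf[256];                  /* local, per-call buffer */
  ERR_error_string_n(ERR_get_error(), errbuf, sizeof(errbuf));
  fprintf(stderr, "OpenSSL error: %s\n", errbuf);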
This commit adds curl_multi_wakeup() which was previously in the TODO
list under the curl_multi_unblock name.
On some platforms and with some configurations this feature might not be
available or can fail, in these cases a new error code
(CURLM_WAKEUP_FAILURE) is returned from curl_multi_wakeup().
Fixes#4418
Closes#4608
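A minimal usage sketch, assuming one thread is blocked in
curl_multi_poll() on a multi handle 'multi' while another thread wants to
interrupt the wait:

  /* thread A: wait for activity or until woken up */
  int numfds = 0;
  curl_multi_poll(multi, NULL, 0, 10000, &numfds);

  /* thread B: interrupt the wait, e.g. after adding a new transfer */
  if(curl_multi_wakeup(multi) == CURLM_WAKEUP_FAILURE) {
    /* wakeup not available in this build/platform */
  }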
Prior to this change schannel ignored --tls-max (CURL_SSLVERSION_MAX_
macros) when --tlsv1 (CURL_SSLVERSION_TLSv1) or default TLS
(CURL_SSLVERSION_DEFAULT) was used, always using a max of TLS 1.2.
Closes https://github.com/curl/curl/pull/4633
- Disable the extra sensitivity except in debug builds (--enable-debug).
- Improve SYSCALL error message logic in ossl_send and ossl_recv so that
"No error" / "Success" socket error text isn't shown on SYSCALL error.
Prior to this change 0ab38f5 (precedes 7.67.0) increased the sensitivity
of OpenSSL's SSL_ERROR_SYSCALL error so that abrupt server closures were
also considered errors. For example, a server that does not send a known
protocol termination point (eg HTTP content length or chunked encoding)
_and_ does not send a TLS termination point (close_notify alert) would
cause an error if it closed the connection.
To be clear that behavior made it into release build 7.67.0
unintentionally. Several users have reported it as an issue.
Ultimately the idea is a good one, since it can help prevent against a
truncation attack. Other SSL backends may already behave similarly (such
as Windows native OS SSL Schannel). However much more of our user base
is using OpenSSL and there is a mass of legacy users in that space, so I
think that behavior should be partially reverted and then rolled out
slowly.
This commit changes the behavior so that the increased sensitivity is
disabled in all curl builds except curl debug builds (DEBUGBUILD). If
after a period of time there are no major issues then it can be enabled
in dev and release builds with the newest OpenSSL (1.1.1+), since users
using the newest OpenSSL are the least likely to have legacy problems.
Bug: https://github.com/curl/curl/issues/4409#issuecomment-555955794
Reported-by: Bjoern Franke
Fixes https://github.com/curl/curl/issues/4624
Closes https://github.com/curl/curl/pull/4623
Prior to this change:
The check if an extra wait is necessary was based not on the
number of extra fds but on the pointer.
If a non-null pointer was given in extra_fds, but extra_nfds
was zero, then the wait was skipped even though poll was not
called.
Closes https://github.com/curl/curl/pull/4610
- Improved estimation of expected_len and updated related comments
- Increased strictness of QNAME-encoding, adding error detection for
empty labels and names longer than the overall limit
- Avoided treating DNAME as unexpected
- Updated unit test 1655 with a more thorough set of proofs and tests
Closes#4598
Since 59041f0, a new timer might be set in multi_done() so the clearing
of the timers needs to happen afterwards!
Reported-by: Max Kellermann
Fixes#4575
Closes#4583
- Use FORMAT_MESSAGE_IGNORE_INSERTS to ignore format specifiers in
Windows error strings.
Since we are not in control of the error code we don't know what
information may be needed by the error string's format specifiers.
Prior to this change Windows API error strings which contain specifiers
(think specifiers similar to printf specifiers) would not be shown.
The FormatMessage Windows API call which turns a Windows error code into
a string could fail and set error ERROR_INVALID_PARAMETER if that error
string contained a format specifier. FormatMessage expects a va_list for
the specifiers, unless inserts are ignored in which case no substitution
is attempted.
Ref: https://devblogs.microsoft.com/oldnewthing/20071128-00/?p=24353
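A minimal sketch of the call with inserts ignored (Windows API; error
handling trimmed):

  #include <windows.h>

  char buf[256];
  DWORD err = GetLastError();
  if(!FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM |
                     FORMAT_MESSAGE_IGNORE_INSERTS, /* no %1-style inserts */
                     NULL, err, 0, buf, sizeof(buf), NULL))
    buf[0] = '\0';  /* FormatMessage itself failed */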
- Consider a modified file to be committed this year.
- Make the travis CHECKSRC also do COPYRIGHTYEAR scan in examples and
includes
- Ignore 0 parents when getting latest commit date of file.
Since in the CI we're dealing with a truncated repo of the last 50
commits, the file's most recent commit may not be available. When this
happens git log and rev-list show the initial commit (ie the first
commit not to be truncated), but that's incorrect so ignore it.
Ref: https://github.com/curl/curl/pull/4547
Closes https://github.com/curl/curl/pull/4549
Co-authored-by: Jay Satiro
- Open the CA file using FILE_SHARE_READ mode so that others can read
from it as well.
Prior to this change our schannel code opened the CA file without
sharing which meant concurrent openings (eg an attempt from another
thread or process) would fail during the time it was open without
sharing, which in curl's case would cause error:
"schannel: failed to open CA file".
Bug: https://curl.haxx.se/mail/lib-2019-10/0104.html
Reported-by: Richard Alcock
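A minimal sketch of opening a file with read sharing on Windows (the file
name here is a placeholder):

  #include <windows.h>

  HANDLE h = CreateFileA("ca-bundle.crt", GENERIC_READ,
                         FILE_SHARE_READ,   /* let others read it too */
                         NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
  if(h == INVALID_HANDLE_VALUE) {
    /* would previously surface as "schannel: failed to open CA file" */
  }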
... as it can make it wait there for a long time for no good purpose.
Patched-by: Jay Satiro
Reported-by: Bylon2 on github
Advised-by: Nikos Mavrogiannopoulos
Fixes#4487
Closes#4541
On macOS/BSD, trying to call sendto on a connected UDP socket fails
with an EISCONN error. Because singleipconnect has already called
connect on the socket when we're trying to use it for QUIC transfers,
we need to use plain send instead.
Fixes#4529
Closes https://github.com/curl/curl/pull/4533
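A minimal sketch of the difference on an already-connected UDP socket
(POSIX sockets; buf, len, addr and addrlen are placeholders, error
handling trimmed):

  /* after connect(sockfd, addr, addrlen) has succeeded: */
  sendto(sockfd, buf, len, 0, addr, addrlen); /* may fail with EISCONN on macOS/BSD */
  send(sockfd, buf, len, 0);                  /* works on a connected socket */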
The ngtcp2 QUIC backend was using the MSG_DONTWAIT flag for send/recv
in order to perform nonblocking operations. On Windows this flag does
not exist. Instead, the socket must be set to nonblocking mode via
ioctlsocket.
This change sets the nonblocking flag on UDP sockets used for QUIC on
all platforms so the use of MSG_DONTWAIT is not needed.
Fixes#4531
Closes#4532
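A minimal sketch of the platform split this implies (roughly what a
nonblock helper does; not the literal patch):

  #ifdef _WIN32
  #include <winsock2.h>
  static int set_nonblocking(SOCKET sockfd)
  {
    u_long on = 1;
    return ioctlsocket(sockfd, FIONBIO, &on); /* no MSG_DONTWAIT on Windows */
  }
  #else
  #include <fcntl.h>
  static int set_nonblocking(int sockfd)
  {
    int flags = fcntl(sockfd, F_GETFL, 0);
    return fcntl(sockfd, F_SETFL, flags | O_NONBLOCK);
  }
  #endif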
To make sure that the transfer is being dealt with. Streams without
Content-Length need a final read to notice the end-of-stream state.
Reported-by: Tom van der Woerdt
Fixes#4496
The URL extracted with CURLINFO_EFFECTIVE_URL was in most cases returned
exactly as given as input, which meant it did not get a scheme prefixed
like before when the URL was given without one, and it didn't remove
dotdot sequences etc.
Added test case 1907 to verify that this now works as intended and as
before 7.62.0.
Regression introduced in 7.62.0
Reported-by: Christophe Dervieux
Fixes#4491
Closes#4493
With MinGW-w64, `curl_socket_t` is a 32 or 64 bit unsigned integer,
while `read` expects a 32 bit signed integer.
Use `sread` instead of `read` to use the correct parameter type.
Closes https://github.com/curl/curl/pull/4483
Previously all connect() failures would return CURLE_COULDNT_CONNECT, no
matter what errno said.
This makes for example --retry work on these transfer failures.
Reported-by: Nathaniel J. Smith
Fixes#4461
Closes #4462
To make sure that the HTTP/2 state is initialized correctly for
duplicated handles. It would otherwise easily generate "spurious"
PRIORITY frames to get sent over HTTP/2 connections when duplicated easy
handles were used.
Reported-by: Daniel Silverstone
Fixes#4303
Closes#4442
This fix removes a use after free which can be triggered by
the internal cookie fuzzer, but otherwise is probably
impossible to trigger from an ordinary application.
The following program reproduces it:
curl_global_init(CURL_GLOBAL_DEFAULT);
CURL* handle=curl_easy_init();
CookieInfo* info=Curl_cookie_init(handle,NULL,NULL,false);
curl_easy_setopt(handle, CURLOPT_COOKIEJAR, "/dev/null");
Curl_flush_cookies(handle, true);
Curl_cookie_cleanup(info);
curl_easy_cleanup(handle);
curl_global_cleanup();
This was found through fuzzing.
Closes#4454
The 'share object' only sets the storage area for cookies. The "cookie
engine" still needs to be enabled or activated using the normal cookie
options.
This caused the curl command line tool to accidentally use cookies
without having been told to, since curl switched to using shared cookies
in 7.66.0.
Test 1166 verifies
Updated test 506
Fixes#4429
Closes#4434
Prior to this change non-ssl/non-ssh connections that were reused set
TIMER_APPCONNECT [1]. Arguably that was incorrect since no SSL/SSH
handshake took place.
[1]: TIMER_APPCONNECT is publicly known as CURLINFO_APPCONNECT_TIME in
libcurl and %{time_appconnect} in the curl tool. It is documented as
"the time until the SSL/SSH handshake is completed".
Reported-by: Marcel Hernandez
Ref: https://github.com/curl/curl/issues/3760
Closes https://github.com/curl/curl/pull/3773
- convert some of them to H3BUF() calls to infof()
- remove some of them completely
- made DEBUG_HTTP3 defined only if CURLDEBUG is set for now
Closes#4421
The parser would check for a query part before the fragment, which made
it do the wrong thing when the fragment contains a question mark.
Extended test 1560 to verify.
Reported-by: Alex Konev
Fixes#4412
Closes#4413
As libcurl now uses these 2 system functions, wrappers are needed on os400
to convert returned AF_UNIX sockaddrs to ascii.
This is a follow-up to commit 7fb54ef.
See also #4037.
Closes#4214
Otherwise curl may be told to use for instance pop3 to
communicate with the doh server, which most likely
is not what you want.
Found through fuzzing.
Closes#4406
It was already fixed for BoringSSL in commit a0f8fccb1e.
LibreSSL has had the second argument to SSL_CTX_set_min_proto_version
as uint16_t ever since the function was added in [0].
[0] 56f107201b
Closes https://github.com/curl/curl/pull/4397
Prior to this change when a server returned a socks5 connect error then
curl would parse the destination address:port from that data and show it
to the user as the destination:
curld -v --socks5 10.0.3.1:1080 http://google.com:99
* SOCKS5 communication to google.com:99
* SOCKS5 connect to IPv4 172.217.12.206 (locally resolved)
* Can't complete SOCKS5 connection to 253.127.0.0:26673. (1)
curl: (7) Can't complete SOCKS5 connection to 253.127.0.0:26673. (1)
That's incorrect because the address:port included in the connect error
is actually a bind address:port (typically unused) and not the
destination address:port. This fix changes curl to show the destination
information that curl sent to the server instead:
curld -v --socks5 10.0.3.1:1080 http://google.com:99
* SOCKS5 communication to google.com:99
* SOCKS5 connect to IPv4 172.217.7.14:99 (locally resolved)
* Can't complete SOCKS5 connection to 172.217.7.14:99. (1)
curl: (7) Can't complete SOCKS5 connection to 172.217.7.14:99. (1)
curld -v --socks5-hostname 10.0.3.1:1080 http://google.com:99
* SOCKS5 communication to google.com:99
* SOCKS5 connect to google.com:99 (remotely resolved)
* Can't complete SOCKS5 connection to google.com:99. (1)
curl: (7) Can't complete SOCKS5 connection to google.com:99. (1)
Ref: https://tools.ietf.org/html/rfc1928#section-6
Closes https://github.com/curl/curl/pull/4394
As the loop discards cookies without domain set. This bug would lead to
qsort() trying to sort uninitialized pointers. We have however not found
it a security problem.
Reported-by: Paul Dreik
Closes#4386
If the input hostname is "[", hlen will underflow to max of size_t when
it is subtracted with 2.
hostname[hlen] will then cause a warning by ubsanitizer:
runtime error: addition of unsigned offset to 0x<snip> overflowed to
0x<snip>
I think that in practice, the generated code will work, and the output
of hostname[hlen] will be the first character "[".
This can be demonstrated by the following program (tested in both clang
and gcc, with -O3)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
  char *hostname = strdup("[");
  size_t hlen = strlen(hostname);  /* 1 */
  hlen -= 2;                       /* underflows */
  hostname++;
  printf("character is %d\n", +hostname[hlen]);
  free(hostname - 1);
  return 0;
}
I found this through fuzzing, and even if it seems harmless, the proper
thing is to return early with an error.
Closes#4389
CURLU_NO_AUTHORITY is intended for use with unknown schemes (i.e. not
"file:///") to override cURL's default demand that an authority exists.
Closes#4349
If the :authority pseudo header field doesn't contain an explicit port,
we assume it is valid for the default port, instead of rejecting the
request for all ports.
Ref: https://curl.haxx.se/mail/lib-2019-09/0041.html
Closes#4365
If you set the same URL for target as for DoH (and it isn't a DoH
server), like "https://example.com" in both, the easy handles used for
the DoH requests could be left "dangling" and end up not getting freed.
Reported-by: Paul Dreik
Closes#4366
The undefined behaviour is annoying when running fuzzing with
sanitizers. The codegen is the same, but the meaning is now not up for
dispute. See https://cppinsights.io/s/516a2ff4
By incrementing the pointer first, both gcc and clang recognize this as
a bswap and optimizes it to a single instruction. See
https://godbolt.org/z/994Zpx
Closes#4350
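For reference, a sketch of the shape of helper the text describes; with
the pointer advanced first, gcc and clang turn the four byte loads below
into a single byte-swapped 32-bit load:

  static unsigned int get32bit(const unsigned char *doh, int index)
  {
    /* advance the pointer first instead of indexing with offsets */
    doh += index;
    return ((unsigned int)doh[0] << 24) | ((unsigned int)doh[1] << 16) |
           ((unsigned int)doh[2] << 8) | (unsigned int)doh[3];
  }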
Added unit test case 1655 to verify. The code correctly finds the flaws
in the old code if one temporarily restores doh.c to the old version.
Closes#4352
This is a protocol violation but apparently there are legacy proprietary
servers doing this.
Added tests 336 and 337 to verify.
Reported-by: Philippe Marguinaud
Closes#4339
For FTPS transfers, curl gets close_notify on the data connection
without that being a signal to close the control connection!
Regression since 3f5da4e59a (7.65.0)
Reported-by: Zenju on github
Reviewed-by: Jay Satiro
Fixes#4329
Closes#4340
Despite ldap_err2string being documented by MS as returning a
PCHAR (char *), when UNICODE is defined it is mapped to ldap_err2stringW
and returns PWCHAR (wchar_t *).
We have lots of code that expects ldap_err2string to return char *,
most of it using failf like this:
failf(data, "LDAP local: Some error: %s", ldap_err2string(rc));
Closes https://github.com/curl/curl/pull/4272
It needs to parse correctly. Otherwise it could be tricked into letting
through a-f using host names that libcurl would then resolve. Like
'[ab.be]'.
Reported-by: Thomas Vegas
Closes#4315
OpenSSL 1.1.0 adds SSL_CTX_set_<min|max>_proto_version() that we now use
when available. Existing code is preserved for older versions of
OpenSSL.
Closes#4304
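A minimal sketch of the newer API (OpenSSL 1.1.0+; the chosen bounds are
just an example):

  #include <openssl/ssl.h>

  SSL_CTX *ctx = SSL_CTX_new(TLS_method());
  SSL_CTX_set_min_proto_version(ctx, TLS1_VERSION);
  SSL_CTX_set_max_proto_version(ctx, TLS1_2_VERSION); /* 0 = highest supported */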
Otherwise, a three byte response would make the smtp_state_ehlo_resp()
function misbehave.
Credit to OSS-Fuzz
Bug: https://crbug.com/oss-fuzz/16918
Assisted-by: Max Dymond
Closes#4287
This allows the function to figure out whether a unix domain socket has
a file name associated with it or not! When a socket is created with
socketpair(), as done in the fuzzer testing, the path struct member is
uninitialized and must not be accessed.
Bug: https://crbug.com/oss-fuzz/16699
Closes#4283
It could otherwise return an error even when closed correctly if GOAWAY
had been received previously.
Reported-by: Tom van der Woerdt
Fixes#4267
Closes#4268
For a long time (since 7.28.1) we've returned error when setting the
value to 1 to make applications notice that we stopped supporting the old
behavior for 1. Starting now, we treat 1 and 2 exactly the same.
Closes#4241
The quiche debug callback is global and can only be initialized once, so
make sure we don't do it multiple times (e.g. if multiple requests are
executed).
In addition this initializes the callback before the connection is
created, so we get logs for the handshake as well.
Closes#4236
When a username and password are provided in the URL, they were wrongly
removed from the stored URL so that subsequent uses of the same URL
wouldn't find the credentials. This made doing HTTP auth with multiple
connections (like Digest) misbehave.
Regression from 46e164069d (7.62.0)
Test case 335 added to verify.
Reported-by: Mike Crowe
Fixes#4228
Closes#4229
SSL_VersionRangeGetDefault returns (TLSv1.0, TLSv1.2) as supported
range in NSS 3.45. It looks like the intention is to raise the minimum
version rather than lowering the maximum, so adjust accordingly. Note
that the caller (nss_setup_connect) initializes the version range to
(TLSv1.0, TLSv1.3), so there is no need to check for >= TLSv1.0 again.
Closes#4187
Reviewed-by: Daniel Stenberg
Reviewed-by: Kamil Dudka
This commit makes sending an HTTP/3 request with nghttp3 work. It
minimally receives HTTP response and calls nghttp3 callbacks, but no
processing is made at the moment.
Closes#4215
This avoids EBADF errors from EPOLL_CTL_DEL operations in the
ephiperfifo.c example. EBADF is dangerous in multi-threaded
applications where I rely on epoll_ctl to operate on the same
epoll descriptor from different threads.
Follow-up to eb9a604f8d
Bug: https://curl.haxx.se/mail/lib-2019-08/0026.html
Closes#4211
So that users can mask in/out specific HTTP versions when Alt-Svc is
used.
- Removed "h2c" and updated test case accordingly
- Changed how the altsvc struct is laid out
- Added ifdefs to make the unittest run even in a quiche-tree
Closes#4201
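A minimal usage sketch, assuming the CURLALTSVC_H* bits together with
CURLOPT_ALTSVC_CTRL (the cache file name is a placeholder):

  /* enable the alt-svc cache and only accept h2 and h3 alternatives */
  curl_easy_setopt(curl, CURLOPT_ALTSVC, "altsvc-cache.txt");
  curl_easy_setopt(curl, CURLOPT_ALTSVC_CTRL,
                   (long)(CURLALTSVC_H2 | CURLALTSVC_H3));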
RFC 7838 section 5:
When using an alternative service, clients SHOULD include an Alt-Used
header field in all requests.
Removed CURLALTSVC_ALTUSED again (feature is still EXPERIMENTAL thus
this is deemed ok).
You can disable sending this header just like you disable any other HTTP
header in libcurl.
Closes#4199
Even though it cannot fall back to a lower HTTP version automatically, the
safer way to upgrade remains via CURLOPT_ALTSVC.
CURLOPT_H3 no longer has any bits that do anything and might be removed
before we remove the experimental label.
Updated the curl tool accordingly to use "--http3".
Closes#4197
This is only the libcurl part that provides the information. There's no
user of the parsed value. This change includes three new tests for the
parser.
Ref: #3794
Added the ability for the calling program to specify the authorisation
identity (authzid), the identity to act as, in addition to the
authentication identity (authcid) and password when using SASL PLAIN
authentication.
Fixes#3653
Closes#3790
NOTE: This commit was cherry-picked and is part of a series of commits
that added the authzid feature for upcoming 7.66.0. The series was
temporarily reverted in db8ec1f so that it would not ship in a 7.65.x
patch release.
Closes https://github.com/curl/curl/pull/4186
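A minimal usage sketch of the option (the server name and identities are
placeholders):

  curl_easy_setopt(curl, CURLOPT_URL, "imap://mail.example.com/INBOX");
  curl_easy_setopt(curl, CURLOPT_USERNAME, "Kurt");      /* authcid */
  curl_easy_setopt(curl, CURLOPT_PASSWORD, "xipj3plmq");
  curl_easy_setopt(curl, CURLOPT_SASL_AUTHZID, "Ursel"); /* act as this identity */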
Repeatedly we see problems where using curl_multi_wait() is difficult or
just awkward because if it has no file descriptor to wait for
internally, it returns immediately and leaves it to the caller to wait
for a small amount of time in order to avoid occasional busy-looping.
This is often missed or misunderstood, leading to underperforming
applications.
This change introduces curl_multi_poll() as a replacement drop-in
function that accepts the exact same set of arguments. This function
works identically to curl_multi_wait() - EXCEPT - for the case when
there's nothing to wait for internally, as then this function will by
itself wait for a "suitable" short time before it returns. This
effectively avoids all risks of busy-looping and should also make it less
likely that apps "over-wait".
This also changes the curl tool to use this function internally when
doing parallel transfers and changes curl_easy_perform() to use it
internally.
Closes#4163
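A minimal sketch of a transfer loop using the new call as a drop-in for
curl_multi_wait (no manual sleep needed when there is nothing to wait
for):

  int still_running = 0;
  do {
    int numfds;
    curl_multi_perform(multi, &still_running);
    /* waits up to 1000 ms, returns early on activity and does a short
       internal wait by itself when there are no fds to monitor */
    curl_multi_poll(multi, NULL, 0, 1000, &numfds);
  } while(still_running);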
As the plan has been laid out in DEPRECATED. Update docs accordingly and
verify in test 1174. Now requires the option to be set to allow HTTP/0.9
responses.
Closes#4191
Allow pretty much anything to be part of the ALPN identifier. In
particular minus, which is used for "h3-20" (in-progress HTTP/3
versions) etc.
Updated test 356.
Closes#4182
If HTTPAUTH_GSSNEGOTIATE was used for a POST request and
gss_init_sec_context() failed, the POST request was sent
with empty body. This commit also restores the original
behavior of `curl --fail --negotiate`, which was changed
by commit 6c60355323.
Add regression tests 2077 and 2078 to cover this.
Fixes#3992
Closes#4171
It was used (intended) to pass in the size of the 'socks' array that is
also passed to these functions, but was rarely actually checked/used and
the array is defined to a fixed size of MAX_SOCKSPEREASYHANDLE entries
that should be used instead.
Closes#4169
... to make CURLOPT_MAX_RECV_SPEED_LARGE and
CURLOPT_MAX_SEND_SPEED_LARGE work correctly on subsequent transfers that
reuse the same handle.
Fixed-by: Ironbars13 on github
Fixes#4084
Closes#4161
If using the read callback for HTTP_POST, and POSTFIELDSIZE is not set,
automatically add a Transfer-Encoding: chunked header, same as it is
already done for HTTP_PUT, HTTP_POST_FORM and HTTP_POST_MIME. Update
test 1514 according to the new behaviour.
Closes#4138
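A minimal usage sketch of the case this covers: a read-callback driven
POST without CURLOPT_POSTFIELDSIZE (read_cb and ctx are placeholders):

  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
  curl_easy_setopt(curl, CURLOPT_POST, 1L);
  curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
  curl_easy_setopt(curl, CURLOPT_READDATA, &ctx);
  /* no CURLOPT_POSTFIELDSIZE set: a "Transfer-Encoding: chunked"
     header is now added automatically, like for HTTP_PUT */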
- In curl_easy_reset attempt to resize the receive buffer to its default
size. If realloc fails then continue using the previous size.
Prior to this change curl_easy_reset did not properly handle resetting
the receive buffer (data->state.buffer). It reset the variable holding
its size (data->set.buffer_size) to the default size (READBUFFER_SIZE)
but then did not actually resize the buffer. If a user resized the
buffer by using CURLOPT_BUFFERSIZE to set the size smaller than the
default, later called curl_easy_reset and attempted to reuse the handle
then a heap overflow would very likely occur during that handle's next
transfer.
Reported-by: Felix Hädicke
Fixes https://github.com/curl/curl/issues/4143
Closes https://github.com/curl/curl/pull/4145
Specifying O_APPEND in conjunction with O_TRUNC and O_CREAT does not
make much sense. And this combination of flags is not accepted by all
SFTP servers (at least not Apache SSHD).
Fixes#4147
Closes#4148
Curl_disconnect bails out if conn->easyq is not empty, detach_connection
needs to be called first to remove the current easy from the queue.
Fixes#4144
Closes#4151
Use configure --with-ngtcp2 or --with-quiche.
Using either option will enable an HTTP3 build.
Co-authored-by: Alessandro Ghedini <alessandro@ghedini.me>
Closes#3500
Several reasons:
- we can't add everyone who's helping out, so it's unfair to just list a few
selected ones.
- we already list all helpers in THANKS and in RELEASE-NOTES for each
release
- we don't want to give the impression that some parts of the code are
"owned" or "controlled" by specific persons
Assisted-by: Daniel Gustafsson
Closes#4129
PK11_IsPresent() checks whether the token for the given slot is available,
and sets needlogin flags for the PK11_Authenticate() call. Should it
return false, we should however treat it as an error and bail out.
Closes https://github.com/curl/curl/pull/4110
All protocols except for CURLPROTO_FILE/CURLPROTO_SMB and their TLS
counterpart were allowed for redirect. This vastly broadens the
exploitation surface in case of a vulnerability such as SSRF [1], where
libcurl-based clients are forced to make requests to arbitrary hosts.
For instance, CURLPROTO_GOPHER can be used to smuggle any TCP-based
protocol by URL-encoding a payload in the URI. Gopher will open a TCP
connection and send the payload.
Only HTTP/HTTPS and FTP are allowed. All other protocols have to be
explicitly enabled for redirects through CURLOPT_REDIR_PROTOCOLS.
[1]: https://www.acunetix.com/blog/articles/server-side-request-forgery-vulnerability/
Signed-off-by: Linos Giannopoulos <lgian@skroutz.gr>
Closes#4094
With CURLOPT_TIMECONDITION set, a header is automatically added (e.g.
If-Modified-Since). Allow this to be replaced or suppressed with
CURLOPT_HTTPHEADER.
Fixes#4103
Closes#4109
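A minimal sketch of the two cases, assuming CURLOPT_TIMECONDITION and
CURLOPT_TIMEVALUE are set on the handle:

  struct curl_slist *hdrs = NULL;

  /* replace the automatically added header with your own value ... */
  hdrs = curl_slist_append(hdrs, "If-Modified-Since: Sat, 29 Jun 2019 10:00:00 GMT");
  /* ... or suppress it entirely with a header without a value:
     hdrs = curl_slist_append(hdrs, "If-Modified-Since:"); */
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);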
- Return CURLE_REMOTE_ACCESS_DENIED for SMB access denied on file open.
Prior to this change CURLE_REMOTE_FILE_NOT_FOUND was returned instead.
Closes https://github.com/curl/curl/pull/4095
There were a leftover few prototypes of Curl_ functions that we used to
export but no longer do, this removes those prototypes and cleans up any
comments still referring to them.
Curl_write32_le(), Curl_strcpy_url(), Curl_strlen_url(), Curl_up_free()
Curl_concat_url(), Curl_detach_connnection(), Curl_http_setup_conn()
were made static in 05b100aee2.
Curl_http_perhapsrewind() made static in 574aecee20.
For the remainder, I didn't trawl the Git logs hard enough to capture
their exact time of deletion, but they were all gone: Curl_splayprint(),
Curl_http2_send_request(), Curl_global_host_cache_dtor(),
Curl_scan_cache_used(), Curl_hostcache_destroy(), Curl_second_connect(),
Curl_http_auth_stage() and Curl_close_connections().
Closes#4096
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
The file suffix for dynamically loadable objects on macOS is .dylib,
which needs to be added for the module definitions in order to get the
NSS TLS backend to work properly on macOS.
Closes https://github.com/curl/curl/pull/4046
The value of the maxPTDs parameter to PR_Init() has since at least
NSPR 2.1, which was released sometime in 1998, been marked ignored
and is accordingly not used in the initialization code. Setting it
to a value when calling PR_Init() is thus benign, but indicates an
intent which may be misleading. Reset the value to zero to improve
clarity.
Closes https://github.com/curl/curl/pull/4054
Change the logic around such that we only keep CRLs that NSS actually
ended up caching around for later deletion. If CERT_CacheCRL() fails
then there is little point in delaying the freeing of the CRL as it
is not used.
Closes https://github.com/curl/curl/pull/4053
Some editors and IDEs assume that source files use UTF-8 file encodings.
It also fixes the build with MSVC when /utf-8 command line option is
used (this option is mandatory for some other open-source projects, this
is useful when using the same options is desired for building all
libraries of a project).
Closes https://github.com/curl/curl/pull/4087
... since that needs UI_OpenSSL() which isn't provided when OpenSSL is
built with OPENSSL_NO_UI_CONSOLE which happens when OpenSSL is built for
UWP (with "VC-WIN32-UWP").
Reported-by: Vasily Lobaskin
Fixes#4073
Closes#4077
The header buffer size calculation can from static analysis seem to
overflow as it performs an addition between two size_t variables and
stores the result in a size_t variable. Overflow is however guarded
against elsewhere since the input to the addition is regulated by
the maximum read buffer size. Clarify this with a comment since the
question was asked.
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
To make sure a HTTP/2 stream registers the end of stream.
Bug #4043 made me find this problem but this fix doesn't correct the
reported issue.
Closes#4068
Certinfo gives the same result for all OpenSSL versions.
Also made printing RSA pubkeys consistent with older versions.
Reported-by: Michael Wallner
Fixes#3706
Closes#4030
OpenSSL used to call exit(1) on syntax errors in OPENSSL_config(),
which is why we switched to CONF_modules_load_file() and introduced
a comment stating why. This behavior was however changed in OpenSSL
commit abdd677125f3a9e3082f8c5692203590fdb9b860, so remove the now
outdated and incorrect comment. The mentioned commit also declares
OPENSSL_config() deprecated so keep the current coding.
Closes#4033
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Even though the variable was used in a DEBUGASSERT, GCC 8 warned in
debug mode:
krb5.c:324:17: error: unused variable 'maj' [-Werror=unused-variable]
Just suppress the warning and declare the variable unconditionally
instead of only for DEBUGBUILD (which also missed the check for
HAVE_ASSERT_H).
Closes https://github.com/curl/curl/pull/4020
- The transfer hashes weren't using the correct keys so removing entries
failed.
- Simplified the iteration logic over transfers sharing the same socket and
they now simply are set to expire and thus get handled in the "regular"
timer loop instead.
Reported-by: Tom van der Woerdt
Fixes#4012
Closes#4014
Old connections are meant to expire from the connection cache after
CURLOPT_MAXAGE_CONN seconds. However, they actually expire after 1000x
that value. This occurs because a time value measured in milliseconds is
accidentally divided by 1M instead of by 1,000.
Closes https://github.com/curl/curl/pull/4013
Remove support for, references to and use of "cyaSSL" from the source
and docs. wolfSSL is the current name and there's no point in keeping
references to ancient history.
Assisted-by: Daniel Gustafsson
Closes#3903
Since more than one socket can be used by each transfer at a given time,
each sockhash entry now has its own hash table with transfers using that
socket.
In addition, the sockhash entry can now be marked 'blocked = TRUE'
which then makes the delete function just set 'removed = TRUE' instead
of removing it "for real", as a way to not rip out the carpet under the
feet of a parent function that iterates over the transfers of that same
sockhash entry.
Reported-by: Tom van der Woerdt
Fixes#3961
Fixes#3986
Fixes#3995
Fixes#4004
Closes#3997
... so that timeouts or other state machine actions get going again
after a changing pause state. For example, if the last delivery was
paused there's no pending socket activity.
Reported-by: sstruchtrup on github
Fixes#3994
Closes#4001
Responses with status codes 1xx, 204 or 304 don't have a response body. For
these, don't parse these headers:
- Content-Encoding
- Content-Length
- Content-Range
- Last-Modified
- Transfer-Encoding
This change ensures that HTTP/2 upgrades work even if a
"Content-Length: 0" or a "Transfer-Encoding: chunked" header is present.
Co-authored-by: Daniel Stenberg
Closes#3702
Fixes#3968
Closes#3977
An inner loop within the singlesocket() function wrongly re-used the
variable for the outer loop which then could cause an infinite
loop. Change to using a separate variable!
Reported-by: Eric Wu
Fixes#3970
Closes#3973
Various functions called within Curl_http2_done() can have the
side-effect of setting the Easy connection into drain mode (by calling
drain_this()). However, the last time we unset this for a transfer (by
calling drained_transfer()) is at the beginning of Curl_http2_done().
If the Curl_easy is reused for another transfer, it is then stuck in
drain mode permanently, which in practice makes it unable to write any
data in the new transfer.
This fix moves the last call to drained_transfer() to later in
Curl_http2_done(), after the functions that could potentially call for a
drain.
Fixes#3966
Closes#3967
Reported-by: Josie-H
This fixes the static dependency on iphlpapi.lib and allows curl to
build for targets prior to Windows Vista.
This partially reverts 170bd047.
Fixes#3960
Closes#3958
... so that it has a sensible value when ConnectionExists() is called which
needs it set to differentiate host "bundles" correctly on port number!
Also, make conncache:hashkey() use correct port for bundles that are proxy vs
host connections.
Probably a regression from 7.62.0
Reported-by: Tom van der Woerdt
Fixes#3956
Closes#3957
Only HTTP proxy use where multiple host names can be used over the same
connection should use the proxy host name for bundles.
Reported-by: Tom van der Woerdt
Fixes#3951
Closes#3955
They need to be removed from the socket hash linked list with more care.
When sh_delentry() is called to remove a sockethash entry, remove all
individual transfers from the list first. To enable this, each Curl_easy struct
now stores a pointer to the sockethash entry to know how to remove itself.
Reported-by: Tom van der Woerdt and Kunal Ekawde
Fixes#3952
Fixes#3904
Closes#3953
Microsoft added support for Unix Domain Sockets in Windows 10 1803
(RS4). Rather than expect the user to enable Unix Domain Sockets by
uncommenting the #define that was added in 0fd6221f we use the RS4
pre-processor variable that is present in newer versions of the
Windows SDK.
Closes#3939
- Revert all commits related to the SASL authzid feature since the next
release will be a patch release, 7.65.1.
Prior to this change CURLOPT_SASL_AUTHZID / --sasl-authzid was destined
for the next release, assuming it would be a feature release 7.66.0.
However instead the next release will be a patch release, 7.65.1 and
will not contain any new features.
After the patch release, the reverted commits can be restored by using
cherry-pick:
git cherry-pick a14d72ca9499ff8c1cc36c2a8d520edf690
Details for all reverted commits:
Revert "os400: take care of CURLOPT_SASL_AUTHZID in curl_easy_setopt_ccsid()."
This reverts commit 0edf6907ae.
Revert "tests: Fix the line endings for the SASL alt-auth tests"
This reverts commit c2a8d52a13.
Revert "examples: Added SASL PLAIN authorisation identity (authzid) examples"
This reverts commit 8c1cc369d0.
Revert "curl: --sasl-authzid added to support CURLOPT_SASL_AUTHZID from the tool"
This reverts commit a9499ff136.
Revert "sasl: Implement SASL authorisation identity via CURLOPT_SASL_AUTHZID"
This reverts commit a14d72ca2f.
This reverts commit 3b06e68b77.
Clearly this change wasn't good enough as it broke CURLOPT_LOW_SPEED_LIMIT +
CURLOPT_LOW_SPEED_TIME
Reported-by: Dave Reisner
Fixes#3927
Closes#3928
Added the ability for the calling program to specify the authorisation
identity (authzid), the identity to act as, in addition to the
authentication identity (authcid) and password when using SASL PLAIN
authentication.
Fixes#3653
Closes#3790
If the proxy string is given as an IPv6 numerical address with a zone
id, make sure to use that for the connect to the proxy.
Reported-by: Edmond Yu
Fixes#3482
Closes#3918
When running a multi TLS backend build the version string needs more
buffer space. Make the internal ssl_buffer stack buffer match the one
in Curl_multissl_version() to allow for the longer string. For single
TLS backend builds there is no use in extending the buffer. This is a
fallout from #3863 which fixes up the multi_ssl string generation to
avoid a buffer overflow when the buffer is too small.
Closes#3875
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Currently when the server responds with 401 on NTLM authenticated
connection (re-used) we consider it to have failed. However this is
legitimate and may happen when for example IIS is configured to
'authPersistSingleRequest' or when the request goes through a proxy (with
a 'via' header).
Implemented by employing an additional state once a connection is
re-used to indicate that if we receive 401 we need to restart
authentication.
Missed in fe6049f0.
They serve very little purpose and mostly just add noise. Most of them
have been around for a very long time. I read them all before removing
or rephrasing them.
Ref: #3876
Closes#3883
Given that Curl_disconnect() calls Curl_http_auth_cleanup_ntlm() prior
to calling conn_shutdown() and it in turn performs this, there is no
need to perform the same action in conn_shutdown().
Closes#3881
In Curl_multissl_version() it was possible to overflow the passed in
buffer if the generated version string exceeded the size of the buffer.
Fix by inverting the logic, and also make sure to not exceed the local
buffer during the string generation.
Closes#3863
Reported-by: nevv on HackerOne/curl
Reviewed-by: Jay Satiro
Reviewed-by: Daniel Stenberg
Codacy/CppCheck warns about this. Consistently use parentheses as we
already do in some places to silence the warning.
Closes https://github.com/curl/curl/pull/3866
Due to limitations in Curl_resolver_wait_resolv(), it doesn't work for
DOH resolves. This fix disables DOH for those.
Limitation added to KNOWN_BUGS.
Fixes#3850
Closes#3857
This reverts commit b0972bc.
- No longer show verbose output for the conncache closure handle.
The offending commit was added so that the conncache closure handle
would inherit verbose mode from the user's easy handle. (Note there is
no way for the user to set options for the closure handle which is why
that was necessary.) Other debug settings such as the debug function
were not also inherited since we determined that could lead to crashes
if the user's per-handle private data was used on an unexpected handle.
The reporter here says he has a debug function to capture the verbose
output, and does not expect or want any output to stderr; however
because the conncache closure handle does not inherit the debug function
the verbose output for that handle does go to stderr.
There are other plausible scenarios as well such as the user redirects
stderr on their handle, which is also not inherited since it could lead
to crashes when used on an unexpected handle.
Short of allowing the user to set options for the conncache closure
handle I don't think there's much we can safely do except no longer
inherit the verbose setting.
Bug: https://curl.haxx.se/mail/lib-2019-05/0021.html
Reported-by: Kristoffer Gleditsch
Ref: https://github.com/curl/curl/pull/3598
Ref: https://github.com/curl/curl/pull/3618
Closes https://github.com/curl/curl/pull/3856
Older versions of OpenSSL report FIPS availability via an OPENSSL_FIPS
define. It uses this define to determine whether to publish -fips at
the end of the version displayed. Applications that utilize the version
reported by OpenSSL will see a mismatch if they compare it to what curl
reports, as curl is not modifying the version in the same way. This
change simply adds a check to see if OPENSSL_FIPS is defined, and will
alter the reported version to match what OpenSSL itself provides. This
only appears to be applicable in versions of OpenSSL <1.1.1
Closes#3771
Currently you can do things like --cert <(cat ./cert.crt) with (at least) the
openssl backend, but that doesn't work for nss because is_file rejects fifos.
I don't actually know if this is sufficient, nss might do things internally
(like seeking back) that make this not work, so actual testing is needed.
Closes#3807
... to make the host name "usable". Store the scope id and put it back
when extracting a URL out of it.
Also makes curl_url_set() syntax check CURLUPART_HOST.
Fixes#3817
Closes#3822
As soon as a TLS backend gets ALPN confirmation about the specific HTTP
version it can now set the multiplex situation for the "bundle" and
trigger moving potentially queued up transfers to the CONNECT state.
With transfers being queued up, we only move one at a time back to the
CONNECT state but now we mark moved transfers so that when a moved
transfer is confirmed "successful" (it connected) it will trigger the
move of another pending transfer. Previously, it would otherwise wait
until the transfer was done before doing this. This makes queued up
pending transfers get processed (much) faster.
In case the name pointer isn't set (due to memory pressure most likely)
we need to skip the prefix matching and reject with a badcookie to avoid
a possible NULL pointer dereference.
Closes#3820#3821
Reported-by: Jonathan Moerman
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
This limits all accepted input strings passed to libcurl to be less than
CURL_MAX_INPUT_LENGTH (8000000) bytes, for these API calls:
curl_easy_setopt() and curl_url_set().
The 8000000 number is arbitrary picked and is meant to detect mistakes
or abuse, not to limit actual practical use cases. By limiting the
acceptable string lengths we also reduce the risk of integer overflows
all over.
NOTE: This does not apply to `CURLOPT_POSTFIELDS`.
Test 1559 verifies.
Closes#3805
Just like we do for mbed TLS, use our local implementation of MD4 when
OpenSSL doesn't support it. This allows a type-3 message to include the
NT response.
RFC 4616 specifies the authzid is optional in the client authentication
message and that the server will derive the authorisation identity
(authzid) from the authentication identity (authcid) when not specified
by the client.
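For reference, the RFC 4616 message is three NUL-separated fields; with an
empty authzid the server derives the identity from authcid (examples
adapted from the RFC):

  message = [authzid] NUL authcid NUL passwd
  e.g.      "\0tim\0tanstaaftanstaaf"      (no authzid given)
            "Ursel\0Kurt\0xipj3plmq"       (act as "Ursel")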
ALTSVC requires Curl_get_line which is defined in lib/cookie.c inside a #if
check of HTTP and COOKIES. That makes Curl_get_line undefined if COOKIES is
disabled. Fix by splitting out the function into a separate file which can
be included where needed.
Closes#3717
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Marcel Raad <Marcel.Raad@teamviewer.com>
The indentation from 211d5329 and 57d6d253 was a little strange as
parts didn't align correctly and used 4 spaces rather than 2. Checked
the indentation of the original source so it aligns, albeit using
curl style.
Commit 9081014 fixed most of the confusing issues between scope id and
scope; however, 844896d added bad limits checking assuming that the scope
is being set and not the scope id.
I have fixed the documentation so it all refers to scope ids.
In addition, Curl_if2ip referred to the scope id as remote_scope_id, which
is incorrect, so I renamed it to local_scope_id.
Adjusted-by: Daniel Stenberg
Closes#3655
Closes#3765
Fixes#3713
Only allow well formed decimal numbers in the input.
Document that the number MUST be between 1 and 65535.
Add tests to test 1560 to verify the above.
Ref: https://github.com/curl/curl/issues/3753
Closes#3762
Without this, detecting and avoid reusing a closed TLS connection
(without a previous GOAWAY) when doing HTTP/2 is tricky.
Reported-by: Tom van der Woerdt
Fixes#3750
Closes#3763
Functionally this doesn't change anything as we still use the username
for both the authorisation identity and the authentication identity.
Closes#3757
Remove the code too. The functionality has been disabled in code since
7.62.0. Setting this option will from now on simply be ignored and have
no function.
Closes#3654
- remove unused variables
- declare conditionally used variables conditionally
- suppress unused variable warnings in the CMake tests
- remove dead variable stores
- consistently use WIN32 macro to detect Windows
Closes https://github.com/curl/curl/pull/3739