ERR_error_string(NULL) should never be called: it places the error
string in a global buffer, which is not thread-safe. Use
ERR_error_string_n() with a local buffer instead.
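A minimal sketch (not taken from this change) of the thread-safe pattern
with a caller-provided buffer:

#include <stdio.h>
#include <openssl/err.h>

static void log_openssl_error(void)
{
  char buf[256]; /* local buffer, safe to use from any thread */
  ERR_error_string_n(ERR_get_error(), buf, sizeof(buf));
  fprintf(stderr, "OpenSSL: %s\n", buf);
}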
Closes #4645
This commit adds curl_multi_wakeup() which was previously in the TODO
list under the curl_multi_unblock name.
On some platforms and with some configurations this feature might not be
available or can fail; in those cases a new error code
(CURLM_WAKEUP_FAILURE) is returned from curl_multi_wakeup().
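A rough usage sketch (assuming two threads share the multi handle): one
thread waits in curl_multi_poll() while another calls curl_multi_wakeup()
to make that wait return early.

#include <curl/curl.h>

/* waiting thread: block for up to 10 seconds or until woken up */
void wait_for_activity(CURLM *multi)
{
  int numfds = 0;
  curl_multi_poll(multi, NULL, 0, 10000, &numfds);
}

/* any other thread: force the waiting call above to return now */
void interrupt_wait(CURLM *multi)
{
  if(curl_multi_wakeup(multi) == CURLM_WAKEUP_FAILURE) {
    /* wakeup is unavailable or failed on this platform/configuration */
  }
}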
Fixes #4418
Closes #4608
Prior to this change schannel ignored --tls-max (the CURL_SSLVERSION_MAX_
macros) when --tlsv1 (CURL_SSLVERSION_TLSv1) or the default TLS version
(CURL_SSLVERSION_DEFAULT) was used, always applying a maximum of TLS 1.2.
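An illustrative way to exercise this from libcurl (not part of the change
itself): combine a minimum and a maximum TLS version in
CURLOPT_SSLVERSION, which schannel now honors.

#include <curl/curl.h>

void set_tls_range(CURL *curl)
{
  /* minimum TLS 1.0, maximum TLS 1.3 */
  curl_easy_setopt(curl, CURLOPT_SSLVERSION,
                   (long)(CURL_SSLVERSION_TLSv1 | CURL_SSLVERSION_MAX_TLSv1_3));
}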
Closes https://github.com/curl/curl/pull/4633
- Disable the extra sensitivity except in debug builds (--enable-debug).
- Improve SYSCALL error message logic in ossl_send and ossl_recv so that
"No error" / "Success" socket error text isn't shown on SYSCALL error.
Prior to this change 0ab38f5 (precedes 7.67.0) increased the sensitivity
of OpenSSL's SSL_ERROR_SYSCALL error so that abrupt server closures were
also considered errors. For example, a server that does not send a known
protocol termination point (e.g. an HTTP Content-Length or chunked encoding)
_and_ does not send a TLS termination point (close_notify alert) would
cause an error if it closed the connection.
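As a hedged sketch of what that sensitivity means at the OpenSSL level
(this is not curl's code), an abrupt close without close_notify typically
shows up as SSL_ERROR_SYSCALL with nothing in the error queue:

#include <errno.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

int read_some(SSL *ssl, char *buf, int len)
{
  int nread = SSL_read(ssl, buf, len);
  if(nread <= 0) {
    int err = SSL_get_error(ssl, nread);
    if(err == SSL_ERROR_ZERO_RETURN)
      return 0;  /* clean shutdown: close_notify was received */
    if(err == SSL_ERROR_SYSCALL && !ERR_peek_error() && !errno)
      return 0;  /* abrupt close; the increased sensitivity treats this
                    as an error instead */
    return -1;   /* a genuine error */
  }
  return nread;
}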
To be clear, that behavior made it into the 7.67.0 release
unintentionally. Several users have reported it as an issue.
Ultimately the idea is a good one, since it can help protect against a
truncation attack. Other SSL backends may already behave similarly (such
as Windows' native Schannel). However, much more of our user base is
using OpenSSL and there is a mass of legacy users in that space, so I
think that behavior should be partially reverted and then rolled out
slowly.
This commit changes the behavior so that the increased sensitivity is
disabled in all curl builds except curl debug builds (DEBUGBUILD). If
after a period of time there are no major issues, then it can be enabled
in dev and release builds with the newest OpenSSL (1.1.1+), since users
on the newest OpenSSL are the least likely to have legacy problems.
Bug: https://github.com/curl/curl/issues/4409#issuecomment-555955794
Reported-by: Bjoern Franke
Fixes https://github.com/curl/curl/issues/4624
Closes https://github.com/curl/curl/pull/4623
Prior to this change:
The check of whether an extra wait is necessary was based not on the
number of extra fds but on the pointer.
If a non-NULL pointer was given in extra_fds but extra_nfds
was zero, then the wait was skipped even though poll() had not been
called.
Closes https://github.com/curl/curl/pull/4610
- Improved the estimation of expected_len and updated related comments.
- Increased strictness of QNAME encoding, adding error detection for
empty labels and names longer than the overall limit (see the sketch
below).
- Avoided treating DNAME as unexpected.
- Updated unit test 1655 with a more thorough set of proofs and tests.
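A hypothetical sketch of those stricter QNAME rules (not the actual
doh.c code): each dot-separated label becomes <length><bytes>, labels
must be 1-63 bytes and the whole encoded name must stay within 255 bytes.

#include <string.h>

/* returns the encoded length, or -1 on invalid input */
static int encode_qname(const char *host, unsigned char *out, int outlen)
{
  int used = 0;
  while(*host) {
    const char *dot = strchr(host, '.');
    size_t label = dot ? (size_t)(dot - host) : strlen(host);
    if(!label || label > 63)
      return -1;  /* empty or oversized label */
    if(used + (int)label + 2 > outlen || used + (int)label + 2 > 255)
      return -1;  /* name longer than the overall limit */
    out[used++] = (unsigned char)label;
    memcpy(&out[used], host, label);
    used += (int)label;
    host += label + (dot ? 1 : 0);
  }
  out[used++] = 0;  /* the root label terminates the name */
  return used;
}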
Closes #4598
Since 59041f0, a new timer might be set in multi_done() so the clearing
of the timers needs to happen afterwards!
Reported-by: Max Kellermann
Fixes #4575
Closes #4583
- Use FORMAT_MESSAGE_IGNORE_INSERTS to ignore format specifiers in
Windows error strings.
Since we are not in control of the error code we don't know what
information may be needed by the error string's format specifiers.
Prior to this change Windows API error strings which contain specifiers
(specifiers similar to printf format specifiers) would not be shown.
The FormatMessage Windows API call which turns a Windows error code into
a string could fail and set error ERROR_INVALID_PARAMETER if that error
string contained a format specifier. FormatMessage expects a va_list for
the specifiers, unless inserts are ignored, in which case no
substitution is attempted.
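A hedged sketch of that pattern (not curl's exact code): request the
system message text and ignore any inserts so FormatMessage cannot fail
on format specifiers we have no arguments for.

#include <windows.h>

void win_strerror(DWORD err, char *buf, DWORD buflen)
{
  if(!FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM |
                     FORMAT_MESSAGE_IGNORE_INSERTS,
                     NULL, err, 0, buf, buflen, NULL))
    buf[0] = '\0';  /* no message available for this error code */
}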
Ref: https://devblogs.microsoft.com/oldnewthing/20071128-00/?p=24353
- Consider a modified file to be committed this year.
- Make the Travis CHECKSRC job also do a COPYRIGHTYEAR scan in examples
and includes.
- Ignore 0 parents when getting latest commit date of file.
Since in the CI we're dealing with a truncated repo of the last 50
commits, the file's most recent commit may not be available. When this
happens, git log and rev-list show the initial commit (i.e. the first
commit not to be truncated), but that's incorrect, so ignore it.
Ref: https://github.com/curl/curl/pull/4547
Closes https://github.com/curl/curl/pull/4549
Co-authored-by: Jay Satiro
- Open the CA file using FILE_SHARE_READ mode so that others can read
from it as well.
Prior to this change our schannel code opened the CA file without
sharing, which meant concurrent openings (e.g. an attempt from another
thread or process) would fail while it was held open. In curl's case that
would cause the error:
"schannel: failed to open CA file".
Bug: https://curl.haxx.se/mail/lib-2019-10/0104.html
Reported-by: Richard Alcock
... as it can make it wait there for a long time for no good purpose.
Patched-by: Jay Satiro
Reported-by: Bylon2 on github
Advised-by: Nikos Mavrogiannopoulos
Fixes #4487
Closes #4541
On macOS/BSD, trying to call sendto() on a connected UDP socket fails
with an EISCONN error. Because singleipconnect() has already called
connect() on the socket by the time we try to use it for QUIC transfers,
we need to use plain send() instead.
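A hedged sketch of the difference (not the QUIC backend code itself):
once the UDP socket is connected, macOS/BSD reject sendto() with EISCONN
while send() works fine.

#include <sys/socket.h>

ssize_t send_packet(int sockfd, const void *buf, size_t len)
{
  /* sendto(sockfd, buf, len, 0, addr, addrlen) would fail with EISCONN
     here because the socket is already connected */
  return send(sockfd, buf, len, 0);
}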
Fixes #4529
Closes https://github.com/curl/curl/pull/4533
The ngtcp2 QUIC backend was using the MSG_DONTWAIT flag for send/recv
in order to perform nonblocking operations. On Windows this flag does
not exist. Instead, the socket must be set to nonblocking mode via
ioctlsocket.
This change sets the nonblocking flag on UDP sockets used for QUIC on
all platforms so the use of MSG_DONTWAIT is not needed.
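A rough cross-platform sketch of putting a socket into nonblocking mode,
which is what this change relies on instead of MSG_DONTWAIT (the helper
name here is illustrative only):

#ifdef _WIN32
#include <winsock2.h>
static int set_nonblocking(SOCKET s)
{
  u_long on = 1;
  return ioctlsocket(s, FIONBIO, &on);  /* 0 on success */
}
#else
#include <fcntl.h>
static int set_nonblocking(int s)
{
  int flags = fcntl(s, F_GETFL, 0);
  return fcntl(s, F_SETFL, flags | O_NONBLOCK);
}
#endif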
Fixes #4531
Closes #4532
To make sure that the transfer is being dealt with. Streams without
Content-Length need a final read to notice the end-of-stream state.
Reported-by: Tom van der Woerdt
Fixes #4496
The URL extracted with CURLINFO_EFFECTIVE_URL was, in most cases,
returned exactly as given as input. That meant it did not get a scheme
prefixed like before if the URL was given without one, and dotdot
sequences etc were not removed.
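A hedged example of the restored behavior: after a transfer, the
effective URL comes back normalized (for instance with a scheme prefixed
even if the input lacked one) rather than verbatim.

#include <stdio.h>
#include <curl/curl.h>

void show_effective_url(CURL *curl)
{
  char *eff = NULL;
  if(!curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_URL, &eff) && eff)
    printf("effective URL: %s\n", eff);
}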
Added test case 1907 to verify that this now works as intended and as
before 7.62.0.
Regression introduced in 7.62.0
Reported-by: Christophe Dervieux
Fixes #4491
Closes #4493
With MinGW-w64, `curl_socket_t` is a 32 or 64 bit unsigned integer,
while `read` expects a 32 bit signed integer.
Use `sread` instead of `read` to use the correct parameter type.
Closes https://github.com/curl/curl/pull/4483
Previously, all connect() failures would return CURLE_COULDNT_CONNECT,
no matter what errno said.
This makes, for example, --retry work on these transfer failures.
Reported-by: Nathaniel J. Smith
Fixes #4461
Closes #4462
To make sure that the HTTP/2 state is initialized correctly for
duplicated handles. It would otherwise easily generate "spurious"
PRIORITY frames to get sent over HTTP/2 connections when duplicated easy
handles were used.
Reported-by: Daniel Silverstone
Fixes #4303
Closes #4442
This fix removes a use after free which can be triggered by
the internal cookie fuzzer, but otherwise is probably
impossible to trigger from an ordinary application.
The following program reproduces it:
/* the Curl_* calls below are libcurl-internal functions, reachable from
   the internal cookie fuzzer but not from a normal application */
curl_global_init(CURL_GLOBAL_DEFAULT);
CURL *handle = curl_easy_init();
CookieInfo *info = Curl_cookie_init(handle, NULL, NULL, false);
curl_easy_setopt(handle, CURLOPT_COOKIEJAR, "/dev/null");
Curl_flush_cookies(handle, true);
Curl_cookie_cleanup(info);
curl_easy_cleanup(handle);
curl_global_cleanup();
This was found through fuzzing.
Closes #4454
The 'share object' only sets the storage area for cookies. The "cookie
engine" still needs to be enabled or activated using the normal cookie
options.
This caused the curl command line tool to accidentally use cookies
without having been told to, since curl switched to using shared cookies
in 7.66.0.
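A hedged illustration of the distinction (not the tool's code): the share
object only provides shared cookie storage; the cookie engine is switched
on separately, here via an empty CURLOPT_COOKIEFILE.

#include <curl/curl.h>

void setup(CURL *curl, CURLSH *share)
{
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);
  curl_easy_setopt(curl, CURLOPT_SHARE, share);    /* storage only */
  curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");  /* enable the engine */
}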
Test 1166 verifies this.
Updated test 506.
Fixes #4429
Closes #4434
Prior to this change non-ssl/non-ssh connections that were reused set
TIMER_APPCONNECT [1]. Arguably that was incorrect since no SSL/SSH
handshake took place.
[1]: TIMER_APPCONNECT is publicly known as CURLINFO_APPCONNECT_TIME in
libcurl and %{time_appconnect} in the curl tool. It is documented as
"the time until the SSL/SSH handshake is completed".
Reported-by: Marcel Hernandez
Ref: https://github.com/curl/curl/issues/3760
Closes https://github.com/curl/curl/pull/3773
- convert some of them to H3BUF() calls to infof()
- remove some of them completely
- made DEBUG_HTTP3 defined only if CURLDEBUG is set for now
Closes #4421
The parser would check for a query part before the fragment, which
caused it to behave incorrectly when the fragment contains a question
mark.
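A hedged sketch of the case being fixed: a fragment that itself contains
a question mark must not be mistaken for the start of a query part.

#include <stdio.h>
#include <curl/curl.h>

void demo(void)
{
  CURLU *u = curl_url();
  char *fragment = NULL;
  curl_url_set(u, CURLUPART_URL, "https://example.com/#a?b", 0);
  if(!curl_url_get(u, CURLUPART_FRAGMENT, &fragment, 0)) {
    printf("fragment: %s\n", fragment);  /* expected: "a?b" */
    curl_free(fragment);
  }
  curl_url_cleanup(u);
}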
Extended test 1560 to verify.
Reported-by: Alex Konev
Fixes #4412
Closes #4413
As libcurl now uses these two system functions, wrappers are needed on
os400 to convert returned AF_UNIX sockaddrs to ASCII.
This is a follow-up to commit 7fb54ef.
See also #4037.
Closes #4214
Otherwise curl may be told to use, for instance, POP3 to
communicate with the DoH server, which most likely
is not what you want.
Found through fuzzing.
Closes #4406
It was already fixed for BoringSSL in commit a0f8fccb1e.
LibreSSL has had the second argument to SSL_CTX_set_min_proto_version
as uint16_t ever since the function was added in [0].
[0] 56f107201b
Closes https://github.com/curl/curl/pull/4397
Prior to this change when a server returned a socks5 connect error then
curl would parse the destination address:port from that data and show it
to the user as the destination:
curld -v --socks5 10.0.3.1:1080 http://google.com:99
* SOCKS5 communication to google.com:99
* SOCKS5 connect to IPv4 172.217.12.206 (locally resolved)
* Can't complete SOCKS5 connection to 253.127.0.0:26673. (1)
curl: (7) Can't complete SOCKS5 connection to 253.127.0.0:26673. (1)
That's incorrect because the address:port included in the connect error
is actually a bind address:port (typically unused) and not the
destination address:port. This fix changes curl to show the destination
information that curl sent to the server instead:
curld -v --socks5 10.0.3.1:1080 http://google.com:99
* SOCKS5 communication to google.com:99
* SOCKS5 connect to IPv4 172.217.7.14:99 (locally resolved)
* Can't complete SOCKS5 connection to 172.217.7.14:99. (1)
curl: (7) Can't complete SOCKS5 connection to 172.217.7.14:99. (1)
curld -v --socks5-hostname 10.0.3.1:1080 http://google.com:99
* SOCKS5 communication to google.com:99
* SOCKS5 connect to google.com:99 (remotely resolved)
* Can't complete SOCKS5 connection to google.com:99. (1)
curl: (7) Can't complete SOCKS5 connection to google.com:99. (1)
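For reference, a rough sketch of an RFC 1928 section 6 reply (with the
IPv4 address type) showing why those bytes are not the destination: they
are the server's bound address and port.

struct socks5_reply_ipv4 {
  unsigned char ver;          /* protocol version, 0x05 */
  unsigned char rep;          /* reply code, 0x00 means succeeded */
  unsigned char rsv;          /* reserved, 0x00 */
  unsigned char atyp;         /* address type, 0x01 means IPv4 */
  unsigned char bnd_addr[4];  /* server BOUND address, not the destination */
  unsigned char bnd_port[2];  /* server bound port, network byte order */
};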
Ref: https://tools.ietf.org/html/rfc1928#section-6
Closes https://github.com/curl/curl/pull/4394
As the loop discards cookies without the domain set. This bug would lead
to qsort() trying to sort uninitialized pointers. We have, however, not
found it to be a security problem.
Reported-by: Paul Dreik
Closes #4386
If the input hostname is "[", hlen will underflow to the maximum value
of size_t when 2 is subtracted from it.
hostname[hlen] will then cause a warning from the undefined behaviour
sanitizer:
runtime error: addition of unsigned offset to 0x<snip> overflowed to
0x<snip>
I think that in practice, the generated code will work, and the output
of hostname[hlen] will be the first character "[".
This can be demonstrated by the following program (tested with both
clang and gcc, with -O3):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
  char *hostname = strdup("[");
  size_t hlen = strlen(hostname);               /* 1 */
  hlen -= 2;                                    /* underflows to SIZE_MAX */
  hostname++;
  printf("character is %d\n", +hostname[hlen]); /* wraps back to '[' */
  free(hostname - 1);
  return 0;
}
I found this through fuzzing, and even if it seems harmless, the proper
thing is to return early with an error.
Closes #4389
CURLU_NO_AUTHORITY is intended for use with unknown schemes (i.e. not
"file:///") to override cURL's default demand that an authority exists.
Closes #4349
If the :authority pseudo header field doesn't contain an explicit port,
we assume it is valid for the default port, instead of rejecting the
request for all ports.
Ref: https://curl.haxx.se/mail/lib-2019-09/0041.html
Closes #4365
If you set the same URL for target as for DoH (and it isn't a DoH
server), like "https://example.com" in both, the easy handles used for
the DoH requests could be left "dangling" and end up not getting freed.
Reported-by: Paul Dreik
Closes #4366
The undefined behaviour is annoying when running fuzzing with
sanitizers. The codegen is the same, but the meaning is now not up for
dispute. See https://cppinsights.io/s/516a2ff4
By incrementing the pointer first, both gcc and clang recognize this as
a bswap and optimize it to a single instruction. See
https://godbolt.org/z/994Zpx
Closes #4350
Added unit test case 1655 to verify.
Closes #4352
The code correctly finds the flaws in the old code
if one temporarily restores doh.c to the old version.
This is a protocol violation but apparently there are legacy proprietary
servers doing this.
Added tests 336 and 337 to verify.
Reported-by: Philippe Marguinaud
Closes #4339
For FTPS transfers, curl gets close_notify on the data connection
without that being a signal to close the control connection!
Regression since 3f5da4e59a (7.65.0)
Reported-by: Zenju on github
Reviewed-by: Jay Satiro
Fixes #4329
Closes #4340
Despite ldap_err2string being documented by MS as returning a
PCHAR (char *), when UNICODE it is mapped to ldap_err2stringW and
returns PWCHAR (wchar_t *).
We have lots of code that expects ldap_err2string to return char *,
most of it via failf() used like this:
failf(data, "LDAP local: Some error: %s", ldap_err2string(rc));
Closes https://github.com/curl/curl/pull/4272
It needs to parse correctly. Otherwise it could be tricked into letting
through host names made up of the letters a-f that libcurl would then
resolve, like '[ab.be]'.
Reported-by: Thomas Vegas
Closes #4315
OpenSSL 1.1.0 adds SSL_CTX_set_<min|max>_proto_version() that we now use
when available. Existing code is preserved for older versions of
OpenSSL.
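A hedged sketch of the idea (not the actual vtls code): use the 1.1.0+
setters when they exist and fall back to the option flags otherwise.

#include <openssl/ssl.h>

int set_tls12_only(SSL_CTX *ctx)
{
#if OPENSSL_VERSION_NUMBER >= 0x10100000L
  return SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION) &&
         SSL_CTX_set_max_proto_version(ctx, TLS1_2_VERSION);
#else
  SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv3 | SSL_OP_NO_TLSv1 |
                           SSL_OP_NO_TLSv1_1);
  return 1;
#endif
}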
Closes #4304
Otherwise, a three byte response would make the smtp_state_ehlo_resp()
function misbehave.
Credit to OSS-Fuzz
Bug: https://crbug.com/oss-fuzz/16918
Assisted-by: Max Dymond
Closes #4287
This allows the function to figure out whether a unix domain socket has
a file name associated with it or not! When a socket is created with
socketpair(), as done in the fuzzer testing, the path struct member is
uninitialized and must not be accessed.
Bug: https://crbug.com/oss-fuzz/16699
Closes #4283
It could otherwise return an error even when closed correctly if GOAWAY
had been received previously.
Reported-by: Tom van der Woerdt
Fixes #4267
Closes #4268
For a long time (since 7.28.1) we've returned an error when setting the
value to 1, to make applications notice that we stopped supporting the
old behavior for 1. Starting now, we treat 1 and 2 exactly the same.
Closes #4241
The quiche debug callback is global and can only be initialized once, so
make sure we don't do it multiple times (e.g. if multiple requests are
executed).
In addition this initializes the callback before the connection is
created, so we get logs for the handshake as well.
Closes #4236
When a username and password were provided in the URL, they were wrongly
removed from the stored URL so that subsequent uses of the same URL
wouldn't find the credentials. This made doing HTTP auth with multiple
connections (like Digest) misbehave.
Regression from 46e164069d (7.62.0)
Test case 335 added to verify.
Reported-by: Mike Crowe
Fixes #4228
Closes #4229