If an identity didn't match [0], the state machine would fail in
state SSH_AUTH_AGENT instead of progressing to the next identity in
ssh-agent. As a result, ssh-agent authentication only worked if the
required identity happened to be the first one added to ssh-agent.
This was introduced as part of commit c4eb10e2f0, which
stated that the "else" statement was required to prevent getting stuck
in state SSH_AUTH_AGENT. Given the state machine's logic and libssh2's
interface, I couldn't see how that could happen, nor could I reproduce
it, and I couldn't find a more detailed description of the problem or a
test case that would reproduce what this was supposed to fix.
[0] libssh2_agent_userauth returning LIBSSH2_ERROR_AUTHENTICATION_FAILED
Closes #2248
Follow-up to 84fcaa2e7. libressl does not have the API even though it
claims to be a late OpenSSL version...
Fixes #2246
Closes #2247
Reported-by: jungle-boogie on github
1. don't use "ULL" suffix since unsupported in older MSVC
2. use curl_off_t instead of custom long long ifdefs
3. make get_posix_time() not do unaligned data access
Fixes #2211
Closes #2240
Reported-by: Chester Liu
A mime tree attached to an easy handle using CURLOPT_MIMEPOST is
strongly bound to the handle: there is a pointer to the easy handle in
each item of the mime tree and following the parent pointer list
of mime items ends in a dummy part stored within the handle.
Because of this binding, a mime tree cannot be shared between different
easy handles, thus it needs to be cloned upon easy handle duplication.
There is no way for the caller to get the duplicated mime tree
handle: it is then set to be automatically destroyed upon freeing the
new easy handle.
New test 654 checks proper mime structure duplication/release.
Add a warning note in curl_mime_data_cb() documentation about sharing
user data between duplicated handles.
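For illustration, a minimal sketch of the ownership rules described above,
using only public libcurl API calls (handle and variable names are made up):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *easy = curl_easy_init();
    CURL *dup;
    curl_mime *mime = curl_mime_init(easy);
    curl_mimepart *part = curl_mime_addpart(mime);

    curl_mime_name(part, "field");
    curl_mime_data(part, "value", CURL_ZERO_TERMINATED);
    curl_easy_setopt(easy, CURLOPT_MIMEPOST, mime);

    /* the duplicate gets its own clone of the mime tree; the caller never
       sees that clone and it is destroyed by curl_easy_cleanup(dup) */
    dup = curl_easy_duphandle(easy);

    curl_easy_cleanup(dup);   /* releases the cloned mime tree */
    curl_easy_cleanup(easy);
    curl_mime_free(mime);     /* the original tree is still the caller's */
    return 0;
  }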
Closes #2235
Prior to this change the stored byte count of each trailer was
miscalculated, being 1 less than required. It appears any trailer
after the first that was passed to Curl_client_write would be truncated
or corrupted, as would its size. Potentially the size of some
subsequent trailer could be erroneously extracted from the contents of
that trailer, and since that size is used by the client write an
out-of-bounds read could occur and cause a crash or be otherwise
processed by the client write.
The bug appears to have been born in 0761a51 (precedes 7.49.0).
Closes https://github.com/curl/curl/pull/2231
The decoding loop implementation did not handle the case where all
received data has been consumed by the Brotli decoder while the decoded
data held internally by the decoder is larger than CURL_MAX_WRITE_SIZE.
For content with an unencoded length greater than CURL_MAX_WRITE_SIZE
this could result in the loss of data at the end of the content.
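A rough sketch of the corrected loop shape (not the actual libcurl code;
the 16 KB OUT_CHUNK and the FILE* sink stand in for CURL_MAX_WRITE_SIZE
and the client write callback):

  #include <brotli/decode.h>
  #include <stdint.h>
  #include <stdio.h>

  #define OUT_CHUNK 16384  /* stand-in for CURL_MAX_WRITE_SIZE */

  /* decompress one network buffer, draining the decoder completely even
     after all input bytes have been consumed; 0 on success, -1 on error */
  static int decode_chunk(BrotliDecoderState *st,
                          const uint8_t *in, size_t inlen, FILE *sink)
  {
    size_t avail_in = inlen;
    const uint8_t *next_in = in;

    for(;;) {
      uint8_t outbuf[OUT_CHUNK];
      size_t avail_out = sizeof(outbuf);
      uint8_t *next_out = outbuf;
      BrotliDecoderResult r =
        BrotliDecoderDecompressStream(st, &avail_in, &next_in,
                                      &avail_out, &next_out, NULL);
      if(r == BROTLI_DECODER_RESULT_ERROR)
        return -1;
      /* hand whatever was produced to the "client write" stand-in */
      fwrite(outbuf, 1, sizeof(outbuf) - avail_out, sink);
      if(r == BROTLI_DECODER_RESULT_SUCCESS)
        break;                       /* end of the brotli stream */
      /* keep looping while the decoder still holds buffered output,
         even when avail_in has already dropped to 0 */
      if(!BrotliDecoderHasMoreOutput(st) && avail_in == 0)
        break;                       /* need more input from the network */
    }
    return 0;
  }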
Closes #2194
Move curl_mime_initpart() and curl_mime_cleanpart() calls to lower-level
functions dealing with UserDefined structure contents.
This avoids memory leaks of curl-generated mime part headers.
New test 2073 checks this using the cli tool --next option: it
triggers a valgrind error if the bug is present.
Bug: https://curl.haxx.se/mail/lib-2017-12/0060.html
Reported-by: Martin Galvan
- When zlib version is < 1.2.0.4, process gzip trailer before considering
extra data as an error.
- Inflate with Z_BLOCK instead of Z_SYNC_FLUSH to maximize correct data
and minimize corrupt data output (see the sketch after this list).
- Do not try to restart deflate decompression in raw mode if output has
started or if the leading data is not available anymore.
- New test 232 checks inflating raw-deflated content.
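A minimal sketch of inflating with Z_BLOCK, as referenced in the list
above; it assumes inflateInit2() was already called and the function name
and signature are illustrative, not libcurl's:

  #include <zlib.h>

  /* inflate one buffer, flushing with Z_BLOCK so zlib stops at deflate
     block boundaries instead of emitting as much (possibly corrupt)
     output as it can */
  static int inflate_block(z_stream *z, const unsigned char *in,
                           unsigned int inlen, unsigned char *out,
                           unsigned int outlen, unsigned int *written)
  {
    int status;

    z->next_in = (Bytef *)in;
    z->avail_in = inlen;
    z->next_out = out;
    z->avail_out = outlen;

    status = inflate(z, Z_BLOCK);
    *written = outlen - z->avail_out;    /* bytes produced this round */
    if(status != Z_OK && status != Z_STREAM_END && status != Z_BUF_ERROR)
      return -1;                         /* hard error */
    return status == Z_STREAM_END;       /* 1 once the stream has ended */
  }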
Closes #2068
scan-build would warn on a potential access of an uninitialized
buffer. I deem it a false positive and had to add this somewhat ugly
work-around to silence it.
Fixed an undefined symbol for getenv(), which does not exist when
compiling for a Windows 10 App (CURL_WINDOWS_APP). Replaced getenv() with
curl_getenv(), which is aware of getenv()'s absence when CURL_WINDOWS_APP
is defined.
Closes #2171
Prune the DNS cache immediately after the dns entry is unlocked in
multi_done. Timed out entries will then get discarded in a more orderly
fashion.
Test 506 is updated
Reported-by: Oleg Pudeyev
Fixes #2169
Closes #2170
Prior to this change SSLKEYLOGFILE used line buffering on WIN32 just
like it does for other platforms. However, the Windows CRT does not
actually support line buffering (_IOLBF) and will use full buffering
(_IOFBF) instead. We can't use full buffering because multiple processes
may be writing to the file and that could lead to corruption, and since
full buffering is the only buffering available this commit disables
buffering for Windows SSLKEYLOGFILE entirely (_IONBF).
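A simplified sketch of the resulting buffering choice (not the actual
keylog code; error handling and paths trimmed):

  #include <stdio.h>
  #include <stdlib.h>

  /* open the key log file append-only and pick a buffering mode: the
     Windows CRT silently turns _IOLBF into full buffering, so use _IONBF
     there; elsewhere line buffering keeps concurrent writers sane */
  static FILE *open_keylog(void)
  {
    const char *path = getenv("SSLKEYLOGFILE");
    FILE *fp;

    if(!path)
      return NULL;
    fp = fopen(path, "a");
    if(!fp)
      return NULL;
  #ifdef _WIN32
    setvbuf(fp, NULL, _IONBF, 0);      /* unbuffered on Windows */
  #else
    setvbuf(fp, NULL, _IOLBF, 4096);   /* line buffered elsewhere */
  #endif
    return fp;
  }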
Ref: https://github.com/curl/curl/pull/1346#issuecomment-350530901
These are OS/2-specific things added to the code in the year 2000. They
were always ugly. If there's any user left, they still don't need it
done this way.
Closes #2166
- Allow proxy_ssl to be checked for pending data even when connssl does
not yet have an SSL handle.
This change is for posterity. Currently there doesn't seem to be a code
path that will cause a pending data check when proxyssl could have
pending data and the connssl handle doesn't yet exist [1].
[1]: Recall that an https proxy connection starts out in connssl but if
the destination is also https then the proxy SSL backend data is moved
from connssl to proxyssl, which means connssl handle is temporarily
empty until an SSL handle for the destination can be created.
Ref: https://github.com/curl/curl/commit/f4a6238#commitcomment-24396542
Closes https://github.com/curl/curl/pull/1916
Connections that are used for HTTP/1.1 Pipelining or HTTP/2 multiplexing
only get additional transfers added to them if the existing connection
is held by the same multi or easy handle. libcurl does not support doing
HTTP/2 streams in different threads using a shared connection.
Closes #2152
If the lock is released before the dealings with the bundle are over, it
may have been changed by another thread in the meantime.
Fixes #2132
Fixes #2151
Closes #2139
For pop3/imap/smtp, added test 891 to somewhat verify the pop3
case.
For this, I enhanced the pingpong test server to be able to send back
responses with LF-only instead of always using CRLF.
Closes #2150
Figured out while reviewing code in the libssh backend. The pointer was
checked for NULL after having been dereferenced, so we know the check
would always be true or the code would already have crashed.
Pointed-out-by: Nikos Mavrogiannopoulos
Bug #2143
Closes #2148
The previous code was incorrectly following the libssh2 error detection
for libssh2_sftp_statvfs, which is not correct for libssh's sftp_statvfs.
Fixes #2142
Signed-off-by: Nikos Mavrogiannopoulos <nmav@gnutls.org>
The SFTP back-end supports asynchronous reading only, limited
to 32-bit file length. Writing is synchronous with no other
limitations.
This also brings keyboard-interactive authentication.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@gnutls.org>
That also updates tests to expect the right error code.
The libssh2 back-end returns a CURLE_SSH error if the remote file
is not found. Expect instead CURLE_REMOTE_FILE_NOT_FOUND,
which is sent by the libssh backend.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@redhat.com>
libssh is an alternative library to libssh2.
https://www.libssh.org/
That patch set also introduces support for ECDSA and
ed25519 keys, as well as gssapi authentication.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@redhat.com>
Absent any 'symbol map' or script to limit what gets exported, static
linking of libraries previously resulted in a libcurl with curl's and
those other symbols being (re-)exported.
This did not happen if 'versioned symbols' were enabled (which is not
the default) because then a version script is employed.
This limits exports to symbols starting with 'curl_*', which is
what "libcurl.vers" exports.
This avoids strange side effects such as mixing methods
from system libraries with those erroneously offered by libcurl.
Closes #2127
Originally, my idea was to allocate the two structures (or more
precisely, the connectdata structure and the four SSL backend-specific
structures required for ssl[0..1] and proxy_ssl[0..1]) in one go, so
that they all could be free()d together.
However, getting the alignment right is tricky. Too tricky.
So let's just bite the bullet and allocate the SSL backend-specific
data separately.
As a consequence, we now have to be very careful to release the memory
allocated for the SSL backend-specific data whenever we release any
connectdata.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Closes #2119
commit d3ab7c5a21 broke the boringssl build since it doesn't have
RSA_flags(), so we disable that code block for boringssl builds.
Reported-by: W. Mark Kubacki
Fixes #2117
This bit is no longer used. It is not clear what it meant for users to
"init the TLS" in a world with different TLS backends and since the
introduction of multissl, libcurl didn't work properly if initialized
without this bit set.
Not a single user responded to the call for users of it:
https://curl.haxx.se/mail/lib-2017-11/0072.html
Reported-by: Evgeny Grin
Assisted-by: Jay Satiro
Fixes #2089
Fixes #2083
Closes #2107
- Align the array of ssl_backend_data on a max 32 byte boundary.
8 is likely to be ok but I went with 32 for posterity should one of
the ssl_backend_data structs change to contain a larger sized variable
in the future.
Prior to this change (since dev 70f1db3, release 7.56) the connectdata
structure was undersized by 4 bytes in 32-bit builds with ssl enabled
because long long * was mistakenly used for alignment instead of
long long, with the intention being an 8 byte boundary. Also long long
may not be an available type.
The undersized connectdata could lead to oob read/write past the end in
what was expected to be the last 4 bytes of the connection's secondary
socket https proxy ssl_backend_data struct (the secondary socket in a
connection is used by ftp, others?).
Closes https://github.com/curl/curl/issues/2093
CVE-2017-8818
Bug: https://curl.haxx.se/docs/adv_2017-af0a.html
With this check present, scan-build warns that we might dereference this
pointer in other places where it isn't first checked for NULL. Thus, if it
*can* be NULL we have a problem in a few places. However, this pointer
cannot be NULL here, so I remove the check and thus
also three different scan-build warnings.
Closes #2111
* LOTS of comment updates
* explicit error for SMB shares (e.g. "file:////share/path/file")
* more strict handling of authority (i.e. "//localhost/")
* now accepts dodgy old "C:|" drive letters
* more precise handling of drive letters in and out of Windows
(especially recognising both "file:c:/" and "file:/c:/")
Closes #2110
The new API added in Linux 4.11 only requires setting a socket option
before connecting, without the whole sendto() machinery.
Notably, this makes it possible to use TFO with SSL connections on Linux
as well, without the need to mess around with OpenSSL (or whatever other
SSL library) internals.
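A hedged sketch of what using the new option looks like;
TCP_FASTOPEN_CONNECT may be missing from older kernel headers, hence the
fallback define (30 is the Linux uapi constant), and the helper name is
made up:

  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>

  #ifndef TCP_FASTOPEN_CONNECT
  #define TCP_FASTOPEN_CONNECT 30   /* Linux 4.11+, absent in old headers */
  #endif

  /* enable TFO on an unconnected TCP socket; the SYN then carries data
     from the first write after connect(), no sendto(MSG_FASTOPEN) needed */
  static int enable_tfo(int sockfd)
  {
    int on = 1;
    return setsockopt(sockfd, IPPROTO_TCP, TCP_FASTOPEN_CONNECT,
                      &on, sizeof(on));
  }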
Closes #2056
Host names like "127.0.0.1 moo" would otherwise be accepted by some
getaddrinfo() implementations.
Updated tests 1034 and 1035 accordingly.
Fixes #2073
Closes #2092
... so that IPv6 addresses can be passed like they can for connect-to
and how they're used in URLs.
Added test 1324 to verify
Reported-by: Alex Malinovich
Fixes #2087
Closes #2091
There is a conflict on symbol 'free_func' between openssl/crypto.h and
zlib.h on AIX. This is an attempt to resolve it.
Bug: https://curl.haxx.se/mail/lib-2017-11/0032.html
Reported-By: Michael Felt
The --interface command (CURLOPT_INTERFACE option) already uses
SO_BINDTODEVICE on Linux, but it tries to parse it as an interface or IP
address first, which fails in case the user passes a VRF.
Try to use the socket option immediately and parse it as a fallback
instead. Update the documentation to mention this feature, and that it
requires the binary to be run by root or with the CAP_NET_RAW capability
for this to work.
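Roughly, the new order of operations is: try the device binding first and
only fall back to address parsing if it fails. A minimal sketch (the
function name is made up):

  #include <string.h>
  #include <sys/socket.h>

  /* try SO_BINDTODEVICE first so plain interface or VRF names work; only
     if this fails should the caller fall back to treating the string as
     an IP address to bind(). Needs root or CAP_NET_RAW. */
  static int bind_to_device(int sockfd, const char *dev)
  {
    return setsockopt(sockfd, SOL_SOCKET, SO_BINDTODEVICE,
                      dev, (socklen_t)strlen(dev) + 1);
  }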
Closes #2024
... previously it would store it already in the happy eyeballs stage
which could lead to the IPv6 bit being set for an IPv4 connection,
leading to curl not wanting to do EPSV=>PASV for FTP transfers.
Closes #2053
- Don't call zlib's inflate() when avail_in stream bytes is 0.
This is a follow up to the parent commit 19e66e5. Prior to that change
libcurl's inflate_stream could call zlib's inflate even when no bytes
were available, causing inflate to return Z_BUF_ERROR, and then
inflate_stream would treat that as a hard error and return
CURLE_BAD_CONTENT_ENCODING.
According to the zlib FAQ, Z_BUF_ERROR is not fatal.
This bug would happen randomly since packet sizes are arbitrary. A test
of 10,000 transfers had 55 fail (ie 0.55%).
Ref: https://zlib.net/zlib_faq.html#faq05
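A minimal sketch of the guard described above (names are illustrative,
not libcurl's inflate_stream()):

  #include <zlib.h>

  /* run inflate only when there is input to feed it; Z_BUF_ERROR merely
     means "no progress possible right now" and must not be treated as a
     broken stream */
  static int inflate_step(z_stream *z, unsigned char *out,
                          unsigned int outlen)
  {
    int status;

    if(z->avail_in == 0)
      return Z_OK;            /* nothing to do, wait for more data */

    z->next_out = out;
    z->avail_out = outlen;
    status = inflate(z, Z_SYNC_FLUSH);
    if(status == Z_BUF_ERROR)
      return Z_OK;            /* harmless: zlib needs more input or room */
    return status;
  }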
Closes https://github.com/curl/curl/pull/2060
This uses the brotli external library (https://github.com/google/brotli).
Brotli becomes a feature: additional curl_version_info() bit and
structure fields are provided for it and CURLVERSION_NOW bumped.
Tests 314 and 315 check Brotli content unencoding with correct and
erroneous data.
Some tests are updated to accomodate with the now configuration dependent
parameters of the Accept-Encoding header.
This is implemented as an output streaming stack of unencoders, the last
calling the client write procedure.
New test 230 checks this feature.
Bug: https://github.com/curl/curl/pull/2002
Reported-By: Daniel Bankhead
Since CURLSSH_AUTH_ANY (aka CURLSSH_AUTH_DEFAULT) is ~0 an arg value
check on this option is incorrect; we have to accept any value.
Prior to this change since f121575 (7.56.1+) CURLOPT_SSH_AUTH_TYPES
erroneously rejected CURLSSH_AUTH_ANY with CURLE_BAD_FUNCTION_ARGUMENT.
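For reference, the call that must keep working (a trivial sketch; the
helper name is made up):

  #include <curl/curl.h>

  /* CURLSSH_AUTH_ANY is ~0, i.e. every auth bit set, so the setopt
     argument check cannot use an upper bound: this must always succeed */
  static CURLcode set_any_auth(CURL *curl)
  {
    return curl_easy_setopt(curl, CURLOPT_SSH_AUTH_TYPES,
                            (long)CURLSSH_AUTH_ANY);
  }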
Bug: https://github.com/curl/curl/commit/f121575#commitcomment-25347120
... which is valid according to documentation. Regression since
f121575c0b.
Verified now in test 501.
Reported-by: cbartl on github
Fixes #2038
Closes #2039
.. also add same arg value check to CURLOPT_POSTFIELDSIZE_LARGE.
Prior to this change since f121575 (7.56.1+) CURLOPT_POSTFIELDSIZE
erroneously rejected -1 value with CURLE_BAD_FUNCTION_ARGUMENT.
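For reference, the documented use of -1 that must keep working (a trivial
sketch; the helper name is made up):

  #include <curl/curl.h>

  /* -1 is documented to mean "use strlen() on the CURLOPT_POSTFIELDS
     string", so the option must accept it rather than fail the check */
  static CURLcode set_post(CURL *curl, const char *body)
  {
    CURLcode result = curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
    if(!result)
      result = curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, -1L);
    return result;
  }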
Bug: https://curl.haxx.se/mail/lib-2017-11/0000.html
Reported-by: Andrew Lambert
The config files define curl and libcurl targets as imported targets
CURL::curl and CURL::libcurl. For backward compatibility with the CMake-
provided find module, the CURL_INCLUDE_DIRS and CURL_LIBRARIES variables
are also set.
Closes #1879
returning 'time_t' is problematic when that type is unsigned and we
return values less than zero to signal "already expired", used in
several places in the code.
Closes #2021
If WINAPI_FAMILY is defined, it should be safe to try to include
winapifamily.h to check what the define evaluates to.
This should fix detection of CURL_WINDOWS_APP if building with
_WIN32_WINNT set to 0x0600.
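A rough sketch of the detection logic (not the exact curl_setup.h
preprocessor block):

  /* when the compiler defines WINAPI_FAMILY, winapifamily.h can be
     included and its partition macros consulted, regardless of how low
     _WIN32_WINNT is set */
  #if defined(_WIN32) && defined(WINAPI_FAMILY)
  #  include <winapifamily.h>
  #  if WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_APP) && \
       !WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_DESKTOP)
  #    define CURL_WINDOWS_APP
  #  endif
  #endif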
Closes #2025
- When uploading via chunked-encoding don't compare file size to bytes
sent to determine whether the upload has finished.
Chunked-encoding adds its own overhead, which is why the bytes sent are
not equal to the file size. Prior to this change, if a file was uploaded in
chunked-encoding and its size was known it was possible that the upload
could end prematurely without sending the final few chunks. That would
result in a server hang waiting for the remaining data, likely followed
by a disconnect.
The scope of this bug is limited to some arbitrary file sizes which have
not been determined. One size that triggers the bug is 475020.
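For context, a minimal sketch of the kind of transfer affected: a chunked
upload set up through the public API (the URL and helper names are
illustrative):

  #include <curl/curl.h>
  #include <stdio.h>

  static size_t read_cb(char *buf, size_t size, size_t nitems, void *userdata)
  {
    /* return the number of bytes copied into buf */
    return fread(buf, 1, size * nitems, (FILE *)userdata);
  }

  /* upload a file with "Transfer-Encoding: chunked"; the bytes put on the
     wire include chunk framing, so they exceed the file size */
  static CURLcode upload_chunked(CURL *curl, FILE *src, const char *url)
  {
    CURLcode result;
    struct curl_slist *hdrs =
      curl_slist_append(NULL, "Transfer-Encoding: chunked");

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
    curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
    curl_easy_setopt(curl, CURLOPT_READDATA, src);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

    result = curl_easy_perform(curl);
    curl_slist_free_all(hdrs);
    return result;
  }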
Bug: https://github.com/curl/curl/issues/2001
Reported-by: moohoorama@users.noreply.github.com
Closes https://github.com/curl/curl/pull/2010
... by using curl_off_t for the typedef if time_t is larger than 4
bytes.
Reported-by: Gisle Vanem
Bug: b9d25f9a6b (commitcomment-25205058)
Closes #2019
... since the 'tv' stood for timeval and this function does not return a
timeval struct anymore.
Also, cleaned up the Curl_timediff*() functions to avoid typecasts and
clean up the descriptive comments.
Closes #2011
... to cater for systems with unsigned time_t variables.
- Renamed the functions to curlx_timediff and Curl_timediff_us.
- Added overflow protection for both of them in either direction for
both 32 bit and 64 bit time_ts
- Reprefixed the curlx_time functions to use Curl_*
Reported-by: Peter Piekarski
Fixes #2004
Closes #2005
... that are multiplied by 1000 when stored.
For 32 bit long systems, the max value accepted (2147483 seconds) is >
596 hours which is unlikely to ever be set by a legitimate application -
and previously it didn't work either, it just caused undefined behavior.
Also updated the man pages for these timeout options to mention the
return code.
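A sketch of the kind of bound check this adds (an illustrative helper,
not the actual setopt code; CURLE_BAD_FUNCTION_ARGUMENT is assumed to be
the return code referred to above):

  #include <limits.h>

  /* seconds are stored as milliseconds in a 'long', so anything above
     LONG_MAX/1000 (2147483 with 32-bit longs) must be rejected instead
     of silently overflowing */
  static int set_timeout_seconds(long *stored_ms, long seconds)
  {
    if(seconds < 0 || seconds > LONG_MAX / 1000)
      return 1;            /* caller returns CURLE_BAD_FUNCTION_ARGUMENT */
    *stored_ms = seconds * 1000;
    return 0;
  }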
Closes #1938
Allow overriding certain build tools, making it possible to
use LLVM/Clang to build curl. The default behavior is unchanged.
To build with clang (as offered by MSYS2), these settings can
be used:
CURL_CC=clang
CURL_AR=llvm-ar
CURL_RANLIB=llvm-ranlib
Closes https://github.com/curl/curl/pull/1993
Use memset() to initialize a structure to avoid LLVM/Clang warning:
ldap.c:193:39: warning: missing field 'UserLength' initializer [-Wmissing-field-initializers]
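A generic illustration of the pattern; the struct is a made-up stand-in
for the Windows LDAP identity structure:

  #include <string.h>

  struct auth_id {            /* hypothetical stand-in struct */
    const char *user;
    unsigned long user_len;
    const char *domain;
    unsigned long domain_len;
  };

  static void init_auth(struct auth_id *id, const char *user)
  {
    /* a partial initializer like "{ user, 3 }" makes clang emit
       -Wmissing-field-initializers; memset() plus explicit assignments
       initializes every field without the warning */
    memset(id, 0, sizeof(*id));
    id->user = user;
    id->user_len = (unsigned long)strlen(user);
  }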
Closes https://github.com/curl/curl/pull/1992
Now changing the VERIFYHOST, VERIFYPEER and VERIFYSTATUS options during an
active connection updates the appropriate ssl_config (and ssl_proxy_config)
variables of the current connection (i.e. the 'connectdata' structure),
making these options effective for the ongoing connection.
This functionality was available before and was broken by the
following change:
"proxy: Support HTTPS proxy and SOCKS+HTTP(s)"
CommitId: cb4e2be7c6.
Bug: https://github.com/curl/curl/issues/1941
Closes https://github.com/curl/curl/pull/1951
Those were temporary things we'd add and remove for our own convenience
long ago. The last few stayed around for too long as an oversight but
have since been removed. These days we have a running
BORINGSSL_API_VERSION counter which is bumped when we find it
convenient, but 2015-11-19 was quite some time ago, so just check
OPENSSL_IS_BORINGSSL.
Closes #1979