Whenever an expected pattern syntax rule cannot be matched, the
character starting the rule loses its special meaning and the parsing
is resumed:
- A backslash at the end of the pattern string matches itself.
- An error in [:keyword:] results in a set containing ':[dekorwy'.
Unit test 1307 updated for this new situation.
Closes#2273
... since the libc-provided ones are locale-dependent in a way we don't
want. Also, the "native" isalnum() (for example) works differently on
different platforms, which caused test 1307 failures on macOS only.
Closes#2269
Make curl_getdate() handle dates before 1970 as well (returning negative
values).
Make test 517 test dates for 64 bit time_t.
This fixes bug (3) mentioned in #2238
Closes#2250
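For illustration, a minimal sketch (not curl's own code) of the new
behavior, assuming a 64-bit time_t; the date strings are arbitrary
examples:
  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    /* with a 64-bit time_t, a pre-1970 date now yields a negative value
       instead of failing */
    time_t before = curl_getdate("Sun, 06 Nov 1960 08:49:37 GMT", NULL);
    time_t after  = curl_getdate("Sun, 06 Nov 1994 08:49:37 GMT", NULL);

    printf("1960: %lld\n1994: %lld\n", (long long)before, (long long)after);
    return 0;
  }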
... unless CURLOPT_UNRESTRICTED_AUTH is set to allow them. This matches how
curl already handles Authorization headers created internally.
Note: this changes behavior slightly, for the sake of reducing mistakes.
Added tests 317 and 318 to verify.
Reported-by: Craig de Stigter
Bug: https://curl.haxx.se/docs/adv_2018-b3bf.html
In case an identity didn't match[0], the state machine would fail in
state SSH_AUTH_AGENT instead of progressing to the next identity in
ssh-agent. As a result, ssh-agent authentication only worked if the
identity required happened to be the first added to ssh-agent.
This was introduced as part of commit c4eb10e2f0, which
stated that the "else" statement was required to prevent getting stuck
in state SSH_AUTH_AGENT. Given the state machine's logic and libssh2's
interface, I couldn't see how this could happen or reproduce it, and I
also couldn't find a more detailed description of the problem that
would suggest a test case reproducing whatever this was supposed to
fix.
[0] libssh2_agent_userauth returning LIBSSH2_ERROR_AUTHENTICATION_FAILED
Closes#2248
Follow-up to 84fcaa2e7. libressl does not have the API even though it
claims to be a late OpenSSL version...
Fixes#2246
Closes#2247
Reported-by: jungle-boogie on github
1. don't use "ULL" suffix since unsupported in older MSVC
2. use curl_off_t instead of custom long long ifdefs
3. make get_posix_time() not do unaligned data access
Fixes#2211
Closes#2240
Reported-by: Chester Liu
A mime tree attached to an easy handle using CURLOPT_MIMEPOST is
strongly bound to the handle: there is a pointer to the easy handle in
each item of the mime tree and following the parent pointer list
of mime items ends in a dummy part stored within the handle.
Because of this binding, a mime tree cannot be shared between different
easy handles, thus it needs to be cloned upon easy handle duplication.
There is no way for the caller to get the duplicated mime tree
handle: it is then set to be automatically destroyed upon freeing the
new easy handle.
New test 654 checks proper mime structure duplication/release.
Add a warning note in curl_mime_data_cb() documentation about sharing
user data between duplicated handles.
Closes#2235
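A hedged sketch of the resulting behavior (field name and data are
placeholders): the mime tree set with CURLOPT_MIMEPOST is cloned by
curl_easy_duphandle() and the clone is released together with the
duplicated handle.
  #include <curl/curl.h>

  static void duplicate_mime_post(void)
  {
    CURL *easy = curl_easy_init();
    curl_mime *mime = curl_mime_init(easy);
    curl_mimepart *part = curl_mime_addpart(mime);
    CURL *dup;

    curl_mime_name(part, "field");
    curl_mime_data(part, "value", CURL_ZERO_TERMINATED);
    curl_easy_setopt(easy, CURLOPT_MIMEPOST, mime);

    dup = curl_easy_duphandle(easy); /* clones the attached mime tree too */

    curl_easy_cleanup(dup);   /* also frees the inaccessible cloned tree */
    curl_easy_cleanup(easy);
    curl_mime_free(mime);     /* the original tree stays owned by the caller */
  }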
Prior to this change the stored byte count of each trailer was
miscalculated and 1 less than required. It appears any trailer
after the first that was passed to Curl_client_write would be truncated
or corrupted as well as the size. Potentially the size of some
subsequent trailer could be erroneously extracted from the contents of
that trailer, and since that size is used by client write an
out-of-bounds read could occur and cause a crash or be otherwise
processed by client write.
The bug appears to have been born in 0761a51 (precedes 7.49.0).
Closes https://github.com/curl/curl/pull/2231
The decoding loop implementation did not handle the case where all
received data is consumed by the Brotli decoder and the amount of
decoded data held internally by the decoder is greater than
CURL_MAX_WRITE_SIZE. For content with an unencoded length greater than
CURL_MAX_WRITE_SIZE this could result in the loss of data at the end of
the content.
Closes#2194
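A minimal sketch, using the public Brotli decoder API, of the kind of
drain loop the fix implies; the buffer size and write_client() callback
are illustrative placeholders, not curl's internals.
  #include <brotli/decode.h>

  static int drain_brotli(BrotliDecoderState *st,
                          const uint8_t *in, size_t inlen,
                          int (*write_client)(const uint8_t *, size_t))
  {
    uint8_t buf[16384];   /* stand-in for CURL_MAX_WRITE_SIZE */
    BrotliDecoderResult r;

    do {
      size_t outlen = sizeof(buf);
      uint8_t *out = buf;
      /* keep producing output even after all input has been consumed */
      r = BrotliDecoderDecompressStream(st, &inlen, &in, &outlen, &out, NULL);
      if(r == BROTLI_DECODER_RESULT_ERROR)
        return -1;
      if(write_client(buf, sizeof(buf) - outlen))
        return -1;
    } while(r == BROTLI_DECODER_RESULT_NEEDS_MORE_OUTPUT ||
            BrotliDecoderHasMoreOutput(st));
    return 0;
  }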
Move curl_mime_initpart() and curl_mime_cleanpart() calls to lower-level
functions dealing with UserDefined structure contents.
This avoids memory leaks on curl-generated part mime headers.
New test 2073 checks this using the cli tool --next option: it
triggers a valgrind error if the bug is present.
Bug: https://curl.haxx.se/mail/lib-2017-12/0060.html
Reported-by: Martin Galvan
- When zlib version is < 1.2.0.4, process gzip trailer before considering
extra data as an error.
- Inflate with Z_BLOCK instead of Z_SYNC_FLUSH to maximize correct data
and minimize corrupt data output.
- Do not try to restart deflate decompression in raw mode if output has
started or if the leading data is not available anymore.
- New test 232 checks inflating raw-deflated content.
Closes#2068
scan-build would warn on a potential access of an uninitialized
buffer. I deem it a false positive and had to add this somewhat ugly
work-around to silence it.
Fixed an undefined-symbol error for getenv(), which does not exist when
compiling for a Windows 10 App (CURL_WINDOWS_APP). Replaced getenv()
with curl_getenv(), which is aware of the absence of getenv() when
CURL_WINDOWS_APP is defined.
Closes#2171
Prune the DNS cache immediately after the dns entry is unlocked in
multi_done. Timed out entries will then get discarded in a more orderly
fashion.
Test 506 is updated
Reported-by: Oleg Pudeyev
Fixes#2169
Closes#2170
Prior to this change SSLKEYLOGFILE used line buffering on WIN32 just
like it does for other platforms. However, the Windows CRT does not
actually support line buffering (_IOLBF) and will use full buffering
(_IOFBF) instead. We can't use full buffering because multiple processes
may be writing to the file and that could lead to corruption, and since
full buffering is the only buffering available this commit disables
buffering for Windows SSLKEYLOGFILE entirely (_IONBF).
Ref: https://github.com/curl/curl/pull/1346#issuecomment-350530901
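A minimal sketch of the resulting buffering choice (the helper name is
illustrative):
  #include <stdio.h>

  static void set_keylog_buffering(FILE *keylog)
  {
  #ifdef WIN32
    /* the Windows CRT promotes _IOLBF to _IOFBF, so disable buffering
       entirely to avoid corruption with multiple writers */
    setvbuf(keylog, NULL, _IONBF, 0);
  #else
    /* elsewhere, line buffering keeps each key line written as a unit */
    setvbuf(keylog, NULL, _IOLBF, 4096);
  #endif
  }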
These are OS/2-specific things added to the code in the year 2000. They
were always ugly. If there's any user left, they still don't need it
done this way.
Closes#2166
- Allow proxy_ssl to be checked for pending data even when connssl does
not yet have an SSL handle.
This change is for posterity. Currently there doesn't seem to be a code
path that will cause a pending data check when proxyssl could have
pending data and the connssl handle doesn't yet exist [1].
[1]: Recall that an https proxy connection starts out in connssl but if
the destination is also https then the proxy SSL backend data is moved
from connssl to proxyssl, which means connssl handle is temporarily
empty until an SSL handle for the destination can be created.
Ref: https://github.com/curl/curl/commit/f4a6238#commitcomment-24396542
Closes https://github.com/curl/curl/pull/1916
Connections that are used for HTTP/1.1 Pipelining or HTTP/2 multiplexing
only get additional transfers added to them if the existing connection
is held by the same multi or easy handle. libcurl does not support doing
HTTP/2 streams in different threads using a shared connection.
Closes#2152
If the lock is released before the dealings with the bundle are over, it
may have been changed by another thread in the meantime.
Fixes#2132
Fixes#2151
Closes#2139
For pop3/imap/smtp, added test 891 to somewhat verify the pop3
case.
For this, I enhanced the pingpong test server to be able to send back
responses with LF-only instead of always using CRLF.
Closes#2150
Figured out while reviewing code in the libssh backend. The pointer was
checked for NULL after having been dereferenced, so we know it must be
non-NULL there or the code would already have crashed.
Pointed-out-by: Nikos Mavrogiannopoulos
Bug #2143
Closes#2148
The previous code was incorrectly following the libssh2 error detection
for libssh2_sftp_statvfs, which is not correct for libssh's sftp_statvfs.
Fixes#2142
Signed-off-by: Nikos Mavrogiannopoulos <nmav@gnutls.org>
The SFTP back-end supports asynchronous reading only, limited
to 32-bit file length. Writing is synchronous with no other
limitations.
This also brings keyboard-interactive authentication.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@gnutls.org>
That also updates tests to expect the right error code.
The libssh2 back-end returns a generic CURLE_SSH error if the remote
file is not found, but the libssh backend sends
CURLE_REMOTE_FILE_NOT_FOUND instead, so that is what the tests now
expect.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@redhat.com>
libssh is an alternative library to libssh2.
https://www.libssh.org/
That patch set also introduces support for ECDSA and ed25519 keys,
as well as gssapi authentication.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@redhat.com>
Absent any 'symbol map' or script to limit what gets exported, static
linking of libraries previously resulted in a libcurl with curl's and
those other symbols being (re-)exported.
This did not happen if 'versioned symbols' were enabled (which is not
the default) because then a version script is employed.
This limits exports to everything starting with 'curl_*', which is
what "libcurl.vers" exports.
This avoids strange side-effects such as with mixing methods
from system libraries and those erroneously offered by libcurl.
Closes#2127
Originally, my idea was to allocate the two structures (or more
precisely, the connectdata structure and the four SSL backend-specific
structures required for ssl[0..1] and proxy_ssl[0..1]) in one go, so
that they all could be free()d together.
However, getting the alignment right is tricky. Too tricky.
So let's just bite the bullet and allocate the SSL backend-specific
data separately.
As a consequence, we now have to be very careful to release the memory
allocated for the SSL backend-specific data whenever we release any
connectdata.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Closes#2119
commit d3ab7c5a21 broke the boringssl build since it doesn't have
RSA_flags(), so we disable that code block for boringssl builds.
Reported-by: W. Mark Kubacki
Fixes#2117
This bit is no longer used. It is not clear what it meant for users to
"init the TLS" in a world with different TLS backends and since the
introduction of multissl, libcurl didn't properly work if inited without
this bit set.
Not a single user responded to the call for users of it:
https://curl.haxx.se/mail/lib-2017-11/0072.html
Reported-by: Evgeny Grin
Assisted-by: Jay Satiro
Fixes#2089
Fixes#2083
Closes#2107
- Align the array of ssl_backend_data on a max 32 byte boundary.
8 is likely to be ok but I went with 32 for posterity should one of
the ssl_backend_data structs change to contain a larger sized variable
in the future.
Prior to this change (since dev 70f1db3, release 7.56) the connectdata
structure was undersized by 4 bytes in 32-bit builds with ssl enabled
because long long * was mistakenly used for alignment instead of
long long, with the intention being an 8 byte boundary. Also long long
may not be an available type.
The undersized connectdata could lead to oob read/write past the end in
what was expected to be the last 4 bytes of the connection's secondary
socket https proxy ssl_backend_data struct (the secondary socket in a
connection is used by ftp, others?).
Closes https://github.com/curl/curl/issues/2093
CVE-2017-8818
Bug: https://curl.haxx.se/docs/adv_2017-af0a.html
With this check present, scan-build warns that we might dereference this
pointer in other places where it isn't first checked for NULL. Thus, if
it *can* be NULL we have a problem in a few places. However, this pointer
should not be possible to be NULL here so I remove the check and thus
also three different scan-build warnings.
Closes#2111
* LOTS of comment updates
* explicit error for SMB shares (e.g. "file:////share/path/file")
* more strict handling of authority (i.e. "//localhost/")
* now accepts dodgy old "C:|" drive letters
* more precise handling of drive letters in and out of Windows
(especially recognising both "file:c:/" and "file:/c:/")
Closes#2110
The new API added in Linux 4.11 only requires setting a socket option
before connecting, without the whole sendto() machinery.
Notably, this makes it possible to use TFO with SSL connections on Linux
as well, without the need to mess around with OpenSSL (or whatever other
SSL library) internals.
Closes#2056
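For reference, a hedged sketch of the Linux 4.11+ socket option in
question (the helper name is illustrative, not curl's code):
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <netinet/tcp.h>

  static int enable_tfo_connect(int sockfd)
  {
  #ifdef TCP_FASTOPEN_CONNECT   /* Linux 4.11+ */
    /* set before connect(); subsequent data can then ride in the SYN */
    int on = 1;
    return setsockopt(sockfd, IPPROTO_TCP, TCP_FASTOPEN_CONNECT,
                      &on, sizeof(on));
  #else
    (void)sockfd;
    return -1;                  /* not available on this platform */
  #endif
  }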
Host names like "127.0.0.1 moo" would otherwise be accepted by some
getaddrinfo() implementations.
Updated tests 1034 and 1035 accordingly.
Fixes#2073
Closes#2092
... so that IPv6 addresses can be passed like they can for connect-to
and how they're used in URLs.
Added test 1324 to verify
Reported-by: Alex Malinovich
Fixes#2087
Closes#2091
There is a conflict on symbol 'free_func' between openssl/crypto.h and
zlib.h on AIX. This is an attempt to resolve it.
Bug: https://curl.haxx.se/mail/lib-2017-11/0032.html
Reported-By: Michael Felt
The --interface option (CURLOPT_INTERFACE) already uses
SO_BINDTODEVICE on Linux, but it tries to parse the value as an
interface or IP address first, which fails when the user passes a VRF.
Try the socket option immediately and fall back to parsing instead.
Update the documentation to mention this feature, and that it requires
the binary to be run by root or with the CAP_NET_RAW capability for
this to work.
Closes#2024
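A minimal sketch of the underlying socket option (the VRF name and
helper are illustrative):
  #include <string.h>
  #include <sys/socket.h>

  static int bind_to_vrf(int sockfd, const char *dev)  /* e.g. "vrf-blue" */
  {
  #ifdef SO_BINDTODEVICE
    /* needs root or CAP_NET_RAW; works for interfaces and VRF devices */
    return setsockopt(sockfd, SOL_SOCKET, SO_BINDTODEVICE,
                      dev, (socklen_t)(strlen(dev) + 1));
  #else
    (void)sockfd; (void)dev;
    return -1;
  #endif
  }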
... previously it would store it already in the happy eyeballs stage
which could lead to the IPv6 bit being set for an IPv4 connection,
leading to curl not wanting to do EPSV=>PASV for FTP transfers.
Closes#2053
- Don't call zlib's inflate() when avail_in stream bytes is 0.
This is a follow up to the parent commit 19e66e5. Prior to that change
libcurl's inflate_stream could call zlib's inflate even when no bytes
were available, causing inflate to return Z_BUF_ERROR, and then
inflate_stream would treat that as a hard error and return
CURLE_BAD_CONTENT_ENCODING.
According to the zlib FAQ, Z_BUF_ERROR is not fatal.
This bug would happen randomly since packet sizes are arbitrary. A test
of 10,000 transfers had 55 fail (ie 0.55%).
Ref: https://zlib.net/zlib_faq.html#faq05
Closes https://github.com/curl/curl/pull/2060
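A simplified sketch of the guard this change implies (not the actual
inflate_stream code):
  #include <zlib.h>

  static int inflate_available(z_stream *z, unsigned char *out, size_t outlen)
  {
    /* never call inflate() with avail_in == 0: zlib would return
       Z_BUF_ERROR, which is not a fatal error and must not be mapped
       to CURLE_BAD_CONTENT_ENCODING */
    while(z->avail_in > 0) {
      int status;
      z->next_out = out;
      z->avail_out = (uInt)outlen;
      status = inflate(z, Z_SYNC_FLUSH);
      if(status != Z_OK && status != Z_STREAM_END && status != Z_BUF_ERROR)
        return -1;              /* genuine decoding error */
      /* ... pass (outlen - z->avail_out) decoded bytes to the client ... */
      if(status == Z_STREAM_END || status == Z_BUF_ERROR)
        break;
    }
    return 0;
  }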
This uses the brotli external library (https://github.com/google/brotli).
Brotli becomes a feature: additional curl_version_info() bit and
structure fields are provided for it and CURLVERSION_NOW bumped.
Tests 314 and 315 check Brotli content unencoding with correct and
erroneous data.
Some tests are updated to accommodate the now configuration-dependent
parameters of the Accept-Encoding header.
This is implemented as an output streaming stack of unencoders, the last
calling the client write procedure.
New test 230 checks this feature.
Bug: https://github.com/curl/curl/pull/2002
Reported-By: Daniel Bankhead
Since CURLSSH_AUTH_ANY (aka CURLSSH_AUTH_DEFAULT) is ~0 an arg value
check on this option is incorrect; we have to accept any value.
Prior to this change since f121575 (7.56.1+) CURLOPT_SSH_AUTH_TYPES
erroneously rejected CURLSSH_AUTH_ANY with CURLE_BAD_FUNCTION_ARGUMENT.
Bug: https://github.com/curl/curl/commit/f121575#commitcomment-25347120
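A small usage sketch of the option in question:
  #include <curl/curl.h>

  static void set_ssh_auth(CURL *easy)
  {
    /* CURLSSH_AUTH_ANY is ~0, so a strict "known bits only" check would
       reject it; after the fix it is accepted again */
    curl_easy_setopt(easy, CURLOPT_SSH_AUTH_TYPES, (long)CURLSSH_AUTH_ANY);

    /* or restrict to specific mechanisms */
    curl_easy_setopt(easy, CURLOPT_SSH_AUTH_TYPES,
                     (long)(CURLSSH_AUTH_PUBLICKEY | CURLSSH_AUTH_PASSWORD));
  }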
... which is valid according to documentation. Regression since
f121575c0b.
Verified now in test 501.
Reported-by: cbartl on github
Fixes#2038
Closes#2039
.. also add same arg value check to CURLOPT_POSTFIELDSIZE_LARGE.
Prior to this change since f121575 (7.56.1+) CURLOPT_POSTFIELDSIZE
erroneously rejected -1 value with CURLE_BAD_FUNCTION_ARGUMENT.
Bug: https://curl.haxx.se/mail/lib-2017-11/0000.html
Reported-by: Andrew Lambert
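A small usage sketch of the value being re-allowed:
  #include <curl/curl.h>

  static void set_post_body(CURL *easy, const char *body)
  {
    curl_easy_setopt(easy, CURLOPT_POSTFIELDS, body);
    /* -1 tells libcurl to strlen() the data itself; no longer rejected
       with CURLE_BAD_FUNCTION_ARGUMENT */
    curl_easy_setopt(easy, CURLOPT_POSTFIELDSIZE, -1L);
    /* the *_LARGE variant now gets the same check */
    curl_easy_setopt(easy, CURLOPT_POSTFIELDSIZE_LARGE, (curl_off_t)-1);
  }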
The config files define curl and libcurl targets as imported targets
CURL::curl and CURL::libcurl. For backward compatibility with CMake-
provided find-module the CURL_INCLUDE_DIRS and CURL_LIBRARIES are
also set.
Closes#1879
returning 'time_t' is problematic when that type is unsigned and we
return values less than zero to signal "already expired", used in
several places in the code.
Closes#2021
If WINAPI_FAMILY is defined, it should be safe to try to include
winapifamily.h to check what the define evaluates to.
This should fix detection of CURL_WINDOWS_APP if building with
_WIN32_WINNT set to 0x0600.
Closes#2025
- When uploading via chunked-encoding don't compare file size to bytes
sent to determine whether the upload has finished.
Chunked-encoding adds its own overhead, which is why the bytes sent are
not equal to the file size. Prior to this change if a file was uploaded in
chunked-encoding and its size was known it was possible that the upload
could end prematurely without sending the final few chunks. That would
result in a server hang waiting for the remaining data, likely followed
by a disconnect.
The scope of this bug is limited to some arbitrary file sizes which have
not been determined. One size that triggers the bug is 475020.
Bug: https://github.com/curl/curl/issues/2001
Reported-by: moohoorama@users.noreply.github.com
Closes https://github.com/curl/curl/pull/2010
... by using curl_off_t for the typedef if time_t is larger than 4
bytes.
Reported-by: Gisle Vanem
Bug: b9d25f9a6b (commitcomment-25205058)
Closes#2019
... since the 'tv' stood for timeval and this function does not return a
timeval struct anymore.
Also, cleaned up the Curl_timediff*() functions to avoid typecasts and
clean up the descriptive comments.
Closes#2011
... to cater for systems with unsigned time_t variables.
- Renamed the functions to curlx_timediff and Curl_timediff_us.
- Added overflow protection for both of them in either direction for
both 32 bit and 64 bit time_ts
- Reprefixed the curlx_time functions to use Curl_*
Reported-by: Peter Piekarski
Fixes#2004
Closes#2005
... that are multiplied by 1000 when stored.
For 32 bit long systems, the max value accepted (2147483 seconds) is >
596 hours which is unlikely to ever be set by a legitimate application -
and previously it didn't work either, it just caused undefined behavior.
Also updated the man pages for these timeout options to mention the
return code.
Closes#1938
Allow overriding certain build tools, making it possible to
use LLVM/Clang to build curl. The default behavior is unchanged.
To build with clang (as offered by MSYS2), these settings can
be used:
CURL_CC=clang
CURL_AR=llvm-ar
CURL_RANLIB=llvm-ranlib
Closes https://github.com/curl/curl/pull/1993
Use memset() to initialize a structure to avoid LLVM/Clang warning:
ldap.c:193:39: warning: missing field 'UserLength' initializer [-Wmissing-field-initializers]
Closes https://github.com/curl/curl/pull/1992
Now changing the VERIFYHOST, VERIFYPEER and VERIFYSTATUS options during
an active connection updates the appropriate ssl_config (and
ssl_proxy_config) structure variables of the current connection (i.e.
the 'connectdata' structure), making these options effective for the
ongoing connection.
This functionality was available before and was broken by the
following change:
"proxy: Support HTTPS proxy and SOCKS+HTTP(s)"
CommitId: cb4e2be7c6.
Bug: https://github.com/curl/curl/issues/1941
Closes https://github.com/curl/curl/pull/1951
Those were temporary things we'd add and remove for our own convenience
long ago. The last few stayed around for too long as an oversight but
have since been removed. These days we have a running
BORINGSSL_API_VERSION counter which is bumped when we find it
convenient, but 2015-11-19 was quite some time ago, so just check
OPENSSL_IS_BORINGSSL.
Closes#1979
This reverts commit f3e03f6c0a.
Caused memory leaks in the fuzzer, needs to be done differently.
Disable test 1553 for now too, as it causes memory leaks without this
commit!
When imap_done() got called before a connection is set up, it would try
to "finish up" and dereferenced a NULL pointer.
Test case 1553 managed to reproduce. I had to actually use a host name
to try to resolve to slow it down, as using the normal local server IP
will make libcurl get a connection in the first curl_multi_perform()
loop and then the bug doesn't trigger.
Fixes#1953
Assisted-by: Max Dymond
... fixes a memory leak with at least IMAP when remove_handle is never
called and the transfer is abruptly just abandoned early.
Test 1552 added to verify
Detected by OSS-fuzz
Assisted-by: Max Dymond
Closes#1954
The source code is now prepared to handle the case when both
Win32 Crypto and OpenSSL/NSS crypto backends are enabled
at the same time, making it now possible to enable `USE_WIN32_CRYPTO`
whenever the targeted Windows version supports it. Since this
matches the minimum Windows version supported by curl
(Windows 2000), enable it unconditionally for the Win32 platform.
This in turn enables SMB (and SMBS) protocol support whenever
Win32 Crypto is available, regardless of what other crypto backends
are enabled.
Ref: https://github.com/curl/curl/pull/1840#issuecomment-325682052
Closes https://github.com/curl/curl/pull/1943
- New `CURL_DLL_SUFFIX` envvar will add a suffix to the generated
libcurl dll name. Useful to add `-x64` to 64-bit builds so that
it can live in the same directory as the 32-bit one. By default
this is empty.
- New `CURL_DLL_A_SUFFIX` envvar to customize the suffix of the
generated import library (implib) for libcurl .dll. It defaults
to `dll`, and it's useful to modify that to `.dll` to have the
standard naming scheme for mingw-built .dlls, i.e. `libcurl.dll.a`.
Closes https://github.com/curl/curl/pull/1942
Compare these settings in Curl_ssl_config_matches():
- verifystatus (CURLOPT_SSL_VERIFYSTATUS)
- random_file (CURLOPT_RANDOM_FILE)
- egdsocket (CURLOPT_EGDSOCKET)
Also copy the setting "verifystatus" in Curl_clone_primary_ssl_config(),
and copy the setting "sessionid" unconditionally.
This means that reusing connections that are secured with a client
certificate is now possible, and the statement "TLS session resumption
is disabled when a client certificate is used" in the old advisory at
https://curl.haxx.se/docs/adv_20170419.html is obsolete.
Reviewed-by: Daniel Stenberg
Closes#1917
... a single double quote could leave the entry path buffer without a zero
terminating byte. CVE-2017-1000254
Test 1152 added to verify.
Reported-by: Max Dymond
Bug: https://curl.haxx.se/docs/adv_20171004.html
When curl and libcurl are built with some protocols disabled, they stop
setting and receiving some options that don't make sense with those
protocols. In particular, when HTTP is disabled many options aren't set
that are used only by HTTP. However, some options that appear to be
HTTP-only are actually used by other protocols as well (some despite
having HTTP in the name) and should be set, but weren't. This change now
causes some of these options to be set and used for more (or for all)
protocols. In particular, this fixes tests 646 through 649 in an
HTTP-disabled build, which use the MIME API in the mail protocols.
The timer should be started after conn->connecttime is set. Otherwise
the timer could expire without this condition being true:
/* should we try another protocol family? */
if(i == 0 && conn->tempaddr[1] == NULL &&
curlx_tvdiff(now, conn->connecttime) >= HAPPY_EYEBALLS_TIMEOUT) {
Ref: #1928
A connection can only be reused if the flags "conn_to_host" and
"conn_to_port" match. Therefore it is not necessary to copy these flags
in reuse_conn().
Closes#1918
.. and include the core NTLM header in all NTLM-related source files.
Follow up to 6f86022. Since then http_ntlm checks NTLM_NEEDS_NSS_INIT
but did not include vtls.h where it was defined.
Closes https://github.com/curl/curl/pull/1911
With the recently introduced MultiSSL support, multiple SSL backends
can be compiled into cURL at the same time. That means the SSL backend
used for NTLM crypto now has to be chosen somehow.
One option would be to use the same SSL backend as was configured
via `curl_global_sslset()`, however, NTLMv2 support would appear
to be available only with some SSL backends. For example, when
eb88d778e (ntlm: Use Windows Crypt API, 2014-12-02) introduced
support for NTLMv1 using Windows' Crypt API, it specifically did
*not* introduce NTLMv2 support using Crypt API at the same time.
So let's select one specific SSL backend for NTLM support when
compiled with multiple SSL backends, using a priority order such
that we support NTLMv2 even if only one compiled-in SSL backend can
be used for that.
Ref: https://github.com/curl/curl/pull/1848
In some cases the RSA key does not support verifying it because it's
located on a smart card, an engine wants to hide it, ...
Check the flags on the key before trying to verify it.
OpenSSL does the same thing internally; see ssl/ssl_rsa.c
Closes#1904
... instead of truncating them.
There's no fixed limit for acceptable cookie names in RFC 6265, but the
entire cookie is said to be less than 4096 bytes (section 6.1). This is
also what browsers seem to implement.
We now allow max 5000 bytes cookie header. Max 4095 bytes length per
cookie name and value. Name + value together may not exceed 4096 bytes.
Added test 1151 to verify
Bug: https://curl.haxx.se/mail/lib-2017-09/0062.html
Reported-by: Kevin Smith
Closes#1894
lib/vtls/openssl.c uses OpenSSL APIs from BUF_MEM and BIO APIs. Include
their headers directly rather than relying on other OpenSSL headers
including things.
Closes https://github.com/curl/curl/pull/1891
If the INTERLEAVEFUNCTION is defined, then use that plus the
INTERLEAVEDATA information when writing RTP. Otherwise, use
WRITEFUNCTION and WRITEDATA.
Fixes#1880
Closes#1884
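A hedged sketch of the option pair described above (the callback body
is a placeholder):
  #include <curl/curl.h>

  static size_t rtp_write(char *ptr, size_t size, size_t nmemb, void *user)
  {
    (void)ptr; (void)user;
    return size * nmemb;        /* consume the interleaved RTP packet */
  }

  static void setup_rtp(CURL *easy, void *rtp_ctx)
  {
    /* if set, RTP data is delivered here; otherwise it falls back to
       CURLOPT_WRITEFUNCTION/CURLOPT_WRITEDATA */
    curl_easy_setopt(easy, CURLOPT_INTERLEAVEFUNCTION, rtp_write);
    curl_easy_setopt(easy, CURLOPT_INTERLEAVEDATA, rtp_ctx);
  }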
If the default write callback is used and no destination has been set, a
NULL pointer would be passed to fwrite()'s 4th argument.
OSS-fuzz bug https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=3327
(not publicly open yet)
Detected by OSS-fuzz
Closes#1874
`conn->connect_state` is NULL when doing a regular non-CONNECT request
over the proxy and should therefore be considered complete at once.
Fixes#1853
Closes#1862
Reported-by: Lawrence Wagerfield
Another mistake in my manual fixups of the largely mechanical
search-and-replace ("connssl->" -> "BACKEND->"), just like the previous
commit concerning HTTPS proxies (and hence not caught during my
earlier testing).
Fixes#1855
Closes#1871
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
In d65e6cc4f (vtls: prepare the SSL backends for encapsulated private
data, 2017-06-21), this developer prepared for a separation of the
private data of the SSL backends from the general connection data.
This conversion was partially automated (search-and-replace) and
partially manual (e.g. proxy_ssl's backend data).
Sadly, there was a crucial error in the manual part, where the wrong
handle was used: rather than connecting ssl[sockindex]'s BIO to
proxy_ssl[sockindex]'s, we reconnected proxy_ssl[sockindex]. The reason
was an incorrect location to paste "BACKEND->"... d'oh.
Reported by Jay Satiro in https://github.com/curl/curl/issues/1855.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Ever since 70f1db321 (vtls: encapsulate SSL backend-specific data,
2017-07-28), the code handling HTTPS proxies was broken because the
pointer to the SSL backend data was not swapped between
conn->ssl[sockindex] and conn->proxy_ssl[sockindex] as intended, but
instead set to NULL (causing segmentation faults).
[jes: provided the commit message, tested and verified the patch]
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
... instead of the prefix-less version since WolfSSL 3.12 now uses an
enum with that name that causes build failures for us.
Fixes#1865
Closes#1867
Reported-by: Gisle Vanem
- The part kind MIMEKIND_FILE and associated code are suppressed.
- Seek data origin offset not used anymore: suppressed.
- MIMEKIND_NAMEDFILE renamed MIMEKIND_FILE; associated fields/functions
renamed accordingly.
- Curl_getformdata() processes stdin via a callback.
Back in 2008, (and commit 3f3d6ebe66) we changed the logic in how we
determine the native type for `curl_off_t`. To really make sure we
didn't break ABI without bumping SONAME, we introduced logic that
attempted to detect that it would use a different size and thus not be
compatible. We also provided a manual switch that allowed users to tell
configure to bump SONAME by force.
Today, we know of no one who ever got a SONAME bump auto-detected and we
don't know of anyone who's using the manual bump feature. The auto-
detection is also no longer working since we introduced defining
curl_off_t in system.h (7.55.0).
Finally, this bumping logic is not present in the cmake build.
Closes#1861
This is an adaptation of 2 of Peter Wu's SSLKEYLOGFILE implementations.
The first one, written for old OpenSSL versions:
https://git.lekensteyn.nl/peter/wireshark-notes/tree/src/sslkeylog.c
The second one, written for BoringSSL and new OpenSSL versions:
https://github.com/curl/curl/pull/1346
Note the first one is GPL licensed but the author gave permission to
waive that license for libcurl.
As of right now this feature is disabled by default, and does not have
a configure option to enable it. To enable this feature define
ENABLE_SSLKEYLOGFILE when building libcurl and set environment
variable SSLKEYLOGFILE to a pathname that will receive the keys.
And in Wireshark change your preferences to point to that key file:
Edit > Preferences > Protocols > SSL > Master-Secret
Co-authored-by: Peter Wu
Ref: https://github.com/curl/curl/pull/1030
Ref: https://github.com/curl/curl/pull/1346
Closes https://github.com/curl/curl/pull/1866
curl_mime_encoder() is operational and documented.
curl tool -F option is extended with ";encoder=".
curl tool --libcurl option generates calls to curl_mime_encoder().
New encoder tests 648 & 649.
Test 1404 extended with an encoder specification.
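A small usage sketch (the file name is a placeholder); on the command
line the equivalent is -F 'name=@file;encoder=base64'.
  #include <curl/curl.h>

  static void add_encoded_part(curl_mime *mime)
  {
    curl_mimepart *part = curl_mime_addpart(mime);
    curl_mime_name(part, "attachment");
    curl_mime_filedata(part, "report.bin");   /* illustrative file name */
    /* also accepts "binary", "8bit", "7bit" and "quoted-printable" */
    curl_mime_encoder(part, "base64");
  }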
Up-to-date versions of OpenSSL maintain the default reasonably secure
without breaking compatibility, so it is better that curl does not
override the default.
Suggested at https://bugzilla.redhat.com/1483972
Closes#1846
To support indicating that a string is nul-terminated, the symbol
CURL_ZERO_TERMINATED has been introduced.
Documentation updated accordingly.
symbols-in-versions updated. Added form API symbols deprecation info.
This feature is badly supported in Windows: as a replacement, a caller has
to use curl_mime_data_cb() with fread, fseek and possibly fclose
callbacks to process opened files.
The cli tool and documentation are updated accordingly.
The feature is however kept internally for form API compatibility, with
the known caveats it always had.
As a side effect, stdin size is not determined by the cli tool even if
possible and this results in a chunked transfer encoding. Test 173 is
updated accordingly.
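A hedged sketch of the replacement pattern described above, feeding an
already-open FILE * to a mime part through callbacks (helper names are
illustrative):
  #include <stdio.h>
  #include <curl/curl.h>

  static size_t part_read(char *buf, size_t size, size_t nitems, void *arg)
  {
    return fread(buf, size, nitems, (FILE *)arg);
  }

  static int part_seek(void *arg, curl_off_t offset, int origin)
  {
    return fseek((FILE *)arg, (long)offset, origin) ?
           CURL_SEEKFUNC_CANTSEEK : CURL_SEEKFUNC_OK;
  }

  static void part_free(void *arg)
  {
    fclose((FILE *)arg);
  }

  static void attach_open_file(curl_mimepart *part, FILE *f, curl_off_t size)
  {
    curl_mime_data_cb(part, size, part_read, part_seek, part_free, f);
  }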
Some calls in different modules were setting the data handle to NULL, causing
segmentation faults when using builds that enable character code conversions.
destroy_async_data() assumes that if the flag "done" is not set yet, the
thread itself will clean up once the request is complete. But if an
error (generally OOM) occurs before the thread even has a chance to
start, it will never get a chance to clean up and memory will be leaked.
By clearing "done" only just before starting the thread, the correct
cleanup sequence will happen in all cases.
Previously, we used as default SSL backend whatever was first in the
`available_backends` array.
However, some users may want to override that default without patching
the source code.
Now they can: with the --with-default-ssl-backend=<backend> option of
the ./configure script.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
When only one SSL backend is configured, it is totally unnecessary to
let multissl_init() configure the backend at runtime, we can select the
correct backend at build time already.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Let's add a compile time safe API to select an SSL backend. This
function needs to be called *before* curl_global_init(), and can be
called only once.
Side note: we do not explicitly test that it is called before
curl_global_init(), but we do verify that it is not called multiple times
(even implicitly).
If SSL is used before the function was called, it will use whatever the
CURL_SSL_BACKEND environment variable says (or default to the first
available SSL backend), and if a subsequent call to
curl_global_sslset() disagrees with the previous choice, it will fail
with CURLSSLSET_TOO_LATE.
The function also accepts an "avail" parameter to point to a (read-only)
NULL-terminated list of available backends. This comes in real handy if
an application wants to let the user choose between whatever SSL backends
the currently available libcurl has to offer: simply call
curl_global_sslset(-1, NULL, &avail);
which will return CURLSSLSET_UNKNOWN_BACKEND and populate the avail
variable to point to the relevant information to present to the user.
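A minimal usage sketch of that flow (error handling trimmed):
  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    const curl_ssl_backend **avail;
    /* must run before curl_global_init() and can be called only once */
    if(curl_global_sslset(CURLSSLBACKEND_OPENSSL, NULL, &avail) ==
       CURLSSLSET_UNKNOWN_BACKEND) {
      int i;
      for(i = 0; avail[i]; i++)       /* list what this build offers */
        printf("available: %s\n", avail[i]->name);
    }
    curl_global_init(CURL_GLOBAL_DEFAULT);
    /* ... transfers ... */
    curl_global_cleanup();
    return 0;
  }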
Just like with the HTTP/2 push functions, we have to add the function
declaration of curl_global_sslset() function to the header file
*multi.h* because VMS and OS/400 require a stable order of functions
declared in include/curl/*.h (where the header files are sorted
alphabetically). This looks a bit funny, but it cannot be helped.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
There is information about the compiled-in SSL backends that is really
no concern of any code other than the SSL backend itself, such as which
function (if any) implements SHA-256 summing.
And there is information that is really interesting to the user, such as
the name, or the curl_sslbackend value.
Let's factor out the latter into a publicly visible struct. This
information will be used in the upcoming API to set the SSL backend
globally.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
When building software for the masses, it is sometimes not possible to
decide for all users which SSL backend is appropriate.
Git for Windows, for example, uses cURL to perform clones, fetches and
pushes via HTTPS, and some users strongly prefer OpenSSL, while other
users really need to use Secure Channel because it offers
enterprise-ready tools to manage credentials via Windows' Credential
Store.
The current Git for Windows versions use the ugly work-around of
building libcurl once with OpenSSL support and once with Secure Channel
support, and switching out the binaries in the installer depending on
the user's choice.
Needless to say, this is a super ugly workaround that actually only
works in some cases: Git for Windows also comes in a portable form, and
in a form intended for third-party applications requiring Git
functionality, in which cases this "swap out libcurl-4.dll" simply is
not an option.
Therefore, the Git for Windows project has a vested interest in teaching
cURL to make the SSL backend a *runtime* option.
This patch makes that possible.
By running ./configure with multiple --with-<backend> options, cURL will
be built with multiple backends.
For the moment, the backend can be configured using the environment
variable CURL_SSL_BACKEND (valid values are e.g. "openssl" and
"schannel").
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
So far, all of the SSL backends' private data has been declared as
part of the ssl_connect_data struct, in one big #if .. #elif .. #endif
block.
This can only work as long as the SSL backend is a compile-time option,
something we want to change in the next commits.
Therefore, let's encapsulate the exact data needed by each SSL backend
into a private struct, and let's avoid bleeding any SSL backend-specific
information into urldata.h. This is also necessary to allow multiple SSL
backends to be compiled in at the same time, as e.g. OpenSSL's and
CyaSSL's headers cannot be included in the same .c file.
To avoid too many malloc() calls, we simply append the private structs
to the connectdata struct in allocate_conn().
This requires us to take extra care of alignment issues: struct fields
often need to be aligned on certain boundaries e.g. 32-bit values need to
be stored at addresses that divide evenly by 4 (= 32 bit / 8
bit-per-byte).
We do that by assuming that no SSL backend's private data contains any
fields that need to be aligned on boundaries larger than `long long`
(typically 64-bit) would need. Under this assumption, we simply add a
dummy field of type `long long` to the `struct connectdata` struct. This
field will never be accessed but acts as a placeholder for the four
instances of ssl_backend_data instead. The size of each ssl_backend_data
struct is stored in the SSL backend-specific metadata, to allow
allocate_conn() to know how much extra space to allocate, and how to
initialize the ssl[sockindex]->backend and proxy_ssl[sockindex]->backend
pointers.
This would appear to be a little complicated at first, but is really
necessary to encapsulate the private data of each SSL backend correctly.
And we need to encapsulate thusly if we ever want to allow selecting
CyaSSL and OpenSSL at runtime, as their headers cannot be included within
the same .c file (there are just too many conflicting definitions and
declarations for that).
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
At the moment, cURL's SSL backend needs to be configured at build time.
As such, it is totally okay for them to hard-code their backend-specific
data in the ssl_connect_data struct.
In preparation for making the SSL backend a runtime option, let's make
the access of said private data a bit more abstract so that it can be
adjusted later in an easy manner.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
In 86b889485 (sasl_gssapi: Added GSS-API based Kerberos V5 variables,
2014-12-03), an SSPI-specific field was added to the kerberos5data
struct without moving the #include "curl_sspi.h" later in the same file.
This broke the build when SSPI was enabled, unless Secure Channel was
used as SSL backend, because it just so happens that Secure Channel also
requires "curl_sspi.h" to be #included.
In f4739f639 (urldata: include curl_sspi.h when Windows SSPI is enabled,
2017-02-21), this bug was fixed incorrectly: Instead of moving the
appropriate conditional #include, the Secure Channel-conditional part
was now also SSPI-conditional.
Fix this problem by moving the correct #include instead.
This is also required for an upcoming patch that moves all the Secure
Channel-specific stuff out of urldata.h and encapsulates it properly in
vtls/schannel.c instead.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Since 5017d5ada (polarssl: now require 1.3.0+, 2014-03-17), we require
a newer PolarSSL version. No need to keep code trying to support any
older version.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
In the ongoing endeavor to abstract out all SSL backend-specific
functionality, this is the next step: Instead of hard-coding how the
different SSL backends access their internal data in getinfo.c, let's
implement backend-specific functions to do that task.
This will also allow for switching SSL backends as a runtime option.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
There are convenient no-op versions of the init/cleanup functions now,
no need to define private ones for axTLS.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
These functions are all available via the Curl_ssl struct now, no need
to declare them separately anymore.
As the global declarations are removed, the corresponding function
definitions are marked as file-local. The only two exceptions here are
Curl_mbedtls_shutdown() and Curl_polarssl_shutdown(): only the
declarations were removed, there are no function definitions to mark
file-local.
Please note that Curl_nss_force_init() is *still* declared globally, as
the only SSL backend-specific function, because it was introduced
specifically for the use case where cURL was compiled with
`--without-ssl --with-nss`. For details, see f3b77e561 (http_ntlm: add
support for NSS, 2010-06-27).
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The _shutdown() function calls the _session_free() function; While this
is not a problem now (because schannel.h declares both functions), a
patch looming in the immediate future will make all of these functions
file-local.
So let's just move the _session_free() function's definition before it
is called.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The connect_finish() function (like many other functions after it) calls
the Curl_axtls_close() function; While this is not a problem now
(because axtls.h declares the latter function), a patch looming in the
immediate future will make all of these functions file-local.
So let's just move the Curl_axtls_close() function's definition before
it is called.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The entire idea of introducing the Curl_ssl struct to describe SSL
backends is to prepare for choosing the SSL backend at runtime.
To that end, convert all the #ifdef have_curlssl_* style conditionals
to use bit flags instead.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The SHA-256 checksumming is also an SSL backend-specific function.
Let's include it in the struct declaring the functionality of SSL
backends.
In contrast to MD5, there is no fall-back code. To indicate this, the
respective entries are NULL for those backends that offer no support for
SHA-256 checksumming.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The MD5 summing is also an SSL backend-specific function. So let's
include it, offering the previous fall-back code as a separate function
now: Curl_none_md5sum(). To allow for that, the signature had to be
changed so that an error could be returned from the implementation
(Curl_none_md5sum() can run out of memory).
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
This is the first step to unify the SSL backend handling. Now all the
SSL backend-specific functionality is accessed via a global instance of
the Curl_ssl struct.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The idea of introducing the Curl_ssl struct was to unify how the SSL
backends are declared and called. To this end, we now provide an
instance of the Curl_ssl struct for each and every SSL backend.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
This new struct is similar in nature to Curl_handler: it will define the
functions and capabilities of all the SSL backends (where Curl_handler
defines the functions and capabilities of protocol handlers).
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
This patch makes the signature of the _sha256sum() functions consistent
among the SSL backends, in preparation for unifying the way all SSL
backends are accessed.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
This patch makes the signature of the _data_pending() functions
consistent among the SSL backends, in preparation for unifying the way
all SSL backends are accessed.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
This patch makes the signature of the _cleanup() functions consistent
among the SSL backends, in preparation for unifying the way all SSL
backends are accessed.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
... as the previous fixed length 128 bytes buffer was sometimes too
small.
Fixes#1823
Closes#1831
Reported-by: Benjamin Sergeant
Assisted-by: Bill Pyne, Ray Satiro, Nick Zitzmann
Recent changes that replaced CURL_SIZEOF_LONG in the source with
SIZEOF_LONG broke builds that use the premade configuration files and
don't have SIZEOF_LONG defined.
Bug: https://github.com/curl/curl/issues/1816
libidn was replaced with libidn2 last year in configure.
Caveat: libidn2 may depend on a list of further libs.
These can be manually specified via CURL_LDFLAG_EXTRAS.
Closes https://github.com/curl/curl/pull/1815
Recent changes that replaced CURL_SIZEOF_LONG in the source with
SIZEOF_LONG broke builds that use the premade configuration files and
don't have SIZEOF_LONG defined.
Closes https://github.com/curl/curl/pull/1814
Fixes
$ valgrind --leak-check=full ~/install-curl-git/bin/curl tftp://localhost/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaz
==9752== Memcheck, a memory error detector
==9752== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==9752== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==9752== Command: /home/even/install-curl-git/bin/curl tftp://localhost/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaz
==9752==
curl: (71) TFTP file name too long
==9752==
==9752== HEAP SUMMARY:
==9752== 505 bytes in 1 blocks are definitely lost in loss record 11 of 11
==9752== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9752== by 0x4E61CED: Curl_urldecode (in /home/even/install-curl-git/lib/libcurl.so.4.4.0)
==9752== by 0x4E75868: tftp_state_machine (in /home/even/install-curl-git/lib/libcurl.so.4.4.0)
==9752== by 0x4E761B6: tftp_do (in /home/even/install-curl-git/lib/libcurl.so.4.4.0)
==9752== by 0x4E711B6: multi_runsingle (in /home/even/install-curl-git/lib/libcurl.so.4.4.0)
==9752== by 0x4E71D00: curl_multi_perform (in /home/even/install-curl-git/lib/libcurl.so.4.4.0)
==9752== by 0x4E6950D: curl_easy_perform (in /home/even/install-curl-git/lib/libcurl.so.4.4.0)
==9752== by 0x40E0B7: operate_do (in /home/even/install-curl-git/bin/curl)
==9752== by 0x40E849: operate (in /home/even/install-curl-git/bin/curl)
==9752== by 0x402693: main (in /home/even/install-curl-git/bin/curl)
Fixes https://oss-fuzz.com/v2/testcase-detail/5232311106797568
Credit to OSS Fuzz
Closes#1808
Since curl 7.55.0, NetworkManager almost always failed its connectivity
check by timeout. I bisected this to 5113ad04 (http-proxy: do the HTTP
CONNECT process entirely non-blocking).
This patch replaces !Curl_connect_complete with Curl_connect_ongoing,
which returns false if the CONNECT state was left uninitialized and lets
the connection continue.
Closes#1803
Fixes#1804
Also-fixed-by: Gergely Nagy
The required low-level logic was already available as part of
`libssh2` (via the `libssh2_session_flag()` option
`LIBSSH2_FLAG_COMPRESS` [1]).
This patch adds the new `libcurl` option `CURLOPT_SSH_COMPRESSION`
(boolean) and the new `curl` command-line option `--compressed-ssh`
to request this `libssh2` feature. To have compression enabled, it
is required that the SSH server supports a (zlib) compatible
compression method and that `libssh2` was built with `zlib` support
enabled.
[1] https://www.libssh2.org/libssh2_session_flag.html
Ref: https://github.com/curl/curl/issues/1732
Closes https://github.com/curl/curl/pull/1735
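A small usage sketch of the new option (the URL is a placeholder); the
command-line equivalent is --compressed-ssh.
  #include <curl/curl.h>

  static void enable_ssh_compression(CURL *easy)
  {
    curl_easy_setopt(easy, CURLOPT_URL, "sftp://example.com/file.bin");
    /* only takes effect if the server offers a zlib-compatible method
       and libssh2 was built with zlib support */
    curl_easy_setopt(easy, CURLOPT_SSH_COMPRESSION, 1L);
  }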
This change does two things:
1. It un-breaks the build in Xcode 9.0. (Xcode 9.0 is currently
failing trying to compile connectx() in lib/connect.c.)
2. It finally weak-links the connectx() function, and falls back on
connect() when run on older operating systems.
Update the progress timers `t_nslookup`, `t_connect`, `t_appconnect`,
`t_pretransfer`, and `t_starttransfer` to track the total times for
these activities when a redirect is followed. Previously, only the times
for the most recent request would be tracked.
Related changes:
- Rename `Curl_pgrsResetTimesSizes` to `Curl_pgrsResetTransferSizes`
now that the function only resets transfer sizes and no longer
modifies any of the progress timers.
- Add a bool to the `Progress` struct that is used to prevent
double-counting `t_starttransfer` times.
Added test case 1399.
Fixes#522 and Known Bug 1.8
Closes#1602
Reported-by: joshhe on github
Fixes the below leak:
$ valgrind --leak-check=full ~/install-curl-git/bin/curl --proxy "http://a:b@/x" http://127.0.0.1
curl: (5) Couldn't resolve proxy name
==5048==
==5048== HEAP SUMMARY:
==5048== in use at exit: 532 bytes in 12 blocks
==5048== total heap usage: 5,288 allocs, 5,276 frees, 445,271 bytes allocated
==5048==
==5048== 2 bytes in 1 blocks are definitely lost in loss record 1 of 12
==5048== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==5048== by 0x4E6CB79: parse_login_details (url.c:5614)
==5048== by 0x4E6BA82: parse_proxy (url.c:5091)
==5048== by 0x4E6C46D: create_conn_helper_init_proxy (url.c:5346)
==5048== by 0x4E6EA18: create_conn (url.c:6498)
==5048== by 0x4E6F9B4: Curl_connect (url.c:6967)
==5048== by 0x4E86D05: multi_runsingle (multi.c:1436)
==5048== by 0x4E88432: curl_multi_perform (multi.c:2160)
==5048== by 0x4E7C515: easy_transfer (easy.c:708)
==5048== by 0x4E7C74A: easy_perform (easy.c:794)
==5048== by 0x4E7C7B1: curl_easy_perform (easy.c:813)
==5048== by 0x414025: operate_do (tool_operate.c:1563)
==5048==
==5048== 2 bytes in 1 blocks are definitely lost in loss record 2 of 12
==5048== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==5048== by 0x4E6CBB6: parse_login_details (url.c:5621)
==5048== by 0x4E6BA82: parse_proxy (url.c:5091)
==5048== by 0x4E6C46D: create_conn_helper_init_proxy (url.c:5346)
==5048== by 0x4E6EA18: create_conn (url.c:6498)
==5048== by 0x4E6F9B4: Curl_connect (url.c:6967)
==5048== by 0x4E86D05: multi_runsingle (multi.c:1436)
==5048== by 0x4E88432: curl_multi_perform (multi.c:2160)
==5048== by 0x4E7C515: easy_transfer (easy.c:708)
==5048== by 0x4E7C74A: easy_perform (easy.c:794)
==5048== by 0x4E7C7B1: curl_easy_perform (easy.c:813)
==5048== by 0x414025: operate_do (tool_operate.c:1563)
Fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=2984
Credit to OSS Fuzz for discovery
Closes#1761
... and thereby avoid telling send() to send off more bytes than the
size of the buffer!
CVE-2017-1000100
Bug: https://curl.haxx.se/docs/adv_20170809B.html
Reported-by: Even Rouault
Credit to OSS-Fuzz for the discovery
First: this function is only used in debug-builds and not in
release/real builds. It is used to drive tests using the event-based
API.
A pointer to the local struct is passed to CURLMOPT_TIMERDATA, but the
CURLMOPT_TIMERFUNCTION callback can in fact be called even after this
function returns, namely when curl_multi_remove_handle() is called.
Reported-by: Brian Carpenter
When multiple rounds are needed to establish a security context
(usually ntlm), we overwrite old token with a new one without free.
Found by the proposed gss tests using a stub gss implementation (via a
valgrind error), though I have confirmed the leak with a real
gssapi implementation as well.
Closes https://github.com/curl/curl/pull/1733
clang complains:
vtls/darwinssl.c:40:8: error: extra tokens at end of #endif directive
[-Werror,-Wextra-tokens]
This breaks the darwinssl build on Travis. Fix it by making this token
a comment.
Closes https://github.com/curl/curl/pull/1734
The MSVC warning level defaults to 3 in CMake. Change it to 4, which is
consistent with the Visual Studio and NMake builds. Disable level 4
warning C4127 for the library and additionally C4306 for the test
servers to get a clean CURL_WERROR build as that warning is raised in
some macros in older Visual Studio versions.
Ref: https://github.com/curl/curl/pull/1667#issuecomment-314082794
Closes https://github.com/curl/curl/pull/1711
Use LongToHandle to convert from long to HANDLE in the Win32
implementation.
This should fix the following warning when compiling with
MSVC 11 (2012) in 64-bit mode:
lib\curl_threads.c(113): warning C4306:
'type cast' : conversion from 'long' to 'HANDLE' of greater size
Closes https://github.com/curl/curl/pull/1717
There are some bugs in how timers are managed for a single easy handle
that causes the wrong "next timeout" value to be reported to the
application when a new minimum needs to be recomputed and that new
minimum should be an existing timer that isn't currently set for the
easy handle. When the application drives a set of easy handles via the
`curl_multi_socket_action()` API (for example), it gets told to wait the
wrong amount of time before the next call, which causes requests to
linger for a long time (or, it is my guess, possibly forever).
Bug: https://curl.haxx.se/mail/lib-2017-07/0033.html
The headers of librtmp declare the socket as `int`, and on Windows, that
disagrees with curl_socket_t.
Bug: #1652
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
... to make all libcurl internals able to use the same data types for
the struct members. The timeval struct differs subtly on several
platforms so it makes it cumbersome to use everywhere.
Ref: #1652
Closes#1693