All compilers used by CMake on Windows should support large files.
- Add test SIZEOF_OFF_T
- Remove outdated test SIZEOF_CURL_OFF_T
- Turn on USE_WIN32_LARGE_FILES in Windows
- Check for 'Largefile' during the features output
Since the server can at any time send a HTTP/2 frame to us, we need to
wait for the socket to be readable during all transfers so that we can
act on incoming frames even when uploading etc.
Reminded-by: Tatsuhiro Tsujikawa
In order to make MBEDTLS_DEBUG work, the debug threshold must be
nonzero. This patch also adds a comment describing how mbedTLS must be
compiled in order to make debugging work, and explains the possible
debug levels.
After a few wasted hours hunting down the reason for slowness during a
TLS handshake that turned out to be because of TCP_NODELAY not being
set, I think we have enough motivation to toggle the default for this
option. We now enable TCP_NODELAY by default and allow applications to
switch it off.
This also makes --tcp-nodelay unnecessary, but --no-tcp-nodelay can be
used to disable it.
Thanks-to: Tim Rühsen
Bug: https://curl.haxx.se/mail/lib-2016-06/0143.html
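For illustration, a minimal sketch of how an application could opt back
out of the new default (hypothetical application code; the URL is a
placeholder):

    #include <curl/curl.h>

    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      /* TCP_NODELAY is now on by default; pass 0L to switch it off */
      curl_easy_setopt(curl, CURLOPT_TCP_NODELAY, 0L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }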
When curl's input stream is stdin and that stream comes from a script
rather than from a regular file, curl could truncate the data transfer
at an arbitrary point, since TFTP treats a partial packet as the end of
the transfer.
Fixes #857
Previously, passing a timeout of zero to Curl_expire() was a magic code
for clearing all timeouts for the handle. That is now instead made with
the new Curl_expire_clear() function and thus a 0 timeout is fine to set
and will trigger a timeout ASAP.
This will help remove short delays, which were particularly notable
when doing HTTP/2.
Regression added in 790d6de485. That logic was added to avoid one
particular transfer starving out others. But when aborting due to
reading the maxcount, the connection must be marked to be read from
again without first doing a select as for some protocols (like SFTP/SCP)
the data may already have been read off the socket.
Reported-by: Dan Donahue
Bug: https://curl.haxx.se/mail/lib-2016-07/0057.html
If a call to GetSystemDirectory fails, the `path` pointer that was
previously allocated would be leaked. This makes sure that `path` is
always freed.
Closes #938
This is a follow-up to the parent commit dcdd4be which fixes one leak
but creates another by failing to free the credentials handle when out
of memory. There's also a second location a few lines down where we
failed to do the same. This commit fixes both of those issues.
- the expression of an 'if' was always true
- a 'while' contained a condition that was always true
- use 'if(k->exp100 > EXP100_SEND_DATA)' instead of 'if(k->exp100)'
- fixed a typo
Closes #889
... as otherwise we could get a 0 which would count as no error and we'd
wrongly continue and could end up segfaulting.
Bug: https://curl.haxx.se/mail/lib-2016-06/0052.html
Reported-by: 暖和的和暖
Prior to this change we called Curl_ssl_getsessionid and
Curl_ssl_addsessionid regardless of whether session ID reusing was
enabled. According to the code comments, that was done in case session
ID reuse was disabled at first but enabled later.
The old way was not intuitive and probably not something users expected.
When a user disables session ID caching I'd guess they don't expect the
session ID to be cached anyway in case the caching is later enabled.
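As an illustrative sketch (hypothetical application code), this is the
option whose semantics this change tightens; with caching disabled, no
session ID is looked up or stored behind the application's back:

    /* 'curl' is an initialized easy handle */
    curl_easy_setopt(curl, CURLOPT_SSL_SESSIONID_CACHE, 0L);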
- Enable protocol family logic for IPv6 resolves even when support
for synthesized addresses is enabled.
This is a follow-up to the parent commit that added support for
synthesized IPv6 addresses from IPv4 on iOS/OS X. The protocol family
logic needed for IPv6 was inadvertently excluded if support for
synthesized addresses was enabled.
Bug: https://github.com/curl/curl/issues/863
Ref: https://github.com/curl/curl/pull/866
Ref: https://github.com/curl/curl/pull/867
Use getaddrinfo() to resolve the IPv4 address literal on iOS/Mac OS X
if the current network interface doesn't support IPv4 but supports
IPv6, NAT64, and DNS64.
Closes #866
Fixes #863
Calling QueryContextAttributes with SECPKG_ATTR_APPLICATION_PROTOCOL
fails on Windows < 8.1 so we need to disable ALPN on these OS versions.
Inspiration provided by: Daniel Seither
Closes #848
Fixes #840
- Change the parser to not require a minor version for HTTP/2.
HTTP/2 connection reuse broke when we changed from HTTP/2.0 to HTTP/2
in 8243a95 because the parser still expected a minor version.
Bug: https://github.com/curl/curl/issues/855
Reported-by: Andrew Robbins, Frank Gevaerts
Sessionid cache management is inseparable from managing individual
session lifetimes. E.g. for reference-counted sessions (like those in
SChannel and OpenSSL engines) every session addition and removal
should be accompanied with refcount increment and decrement
respectively. Failing to do so synchronously leads to a race condition
that causes symptoms like use-after-free and memory corruption.
This commit:
- makes existing session cache locking explicit, thus allowing
individual engines to manage lock's scope.
- fixes OpenSSL and SChannel engines by putting refcount management
inside this lock's scope in relevant places.
- adds these explicit locking calls to other engines that use
sessionid cache to accommodate this change. Note, however,
that it is unknown whether any of these engines could also have
this race.
Bug: https://github.com/curl/curl/issues/815
Fixes #815
Closes #847
Mostly in order to support broken web sites that redirect to broken URLs
that are accepted by browsers.
Browsers are typically even more lenient than this, as the WHATWG URL
spec they follow allows an _infinite_ number. I tested 8000 slashes
with Firefox and it just worked.
Added test cases 1141, 1142 and 1143 to verify the new parser.
Closes #791
Regression from the previous *printf() rearrangements: this file missed
including the correct header to make sure snprintf() works universally.
Reported-by: Moti Avrahami
Bug: https://curl.haxx.se/mail/lib-2016-05/0196.html
While compiling lib/curl_multibyte.c with '-DUSE_WIN32_IDN' etc. I was
getting:
f:\mingw32\src\inet\curl\lib\memdebug.h(38): error C2054: expected '('
to follow 'CURL_EXTERN'
f:\mingw32\src\inet\curl\lib\memdebug.h(38): error C2085:
'curl_domalloc': not in formal parameter list
...as otherwise the TLS libs will skip the CN/SAN check and just allow
connection to any server. curl previously skipped this function when SNI
wasn't used or when connecting to an IP address specified host.
CVE-2016-3739
Bug: https://curl.haxx.se/docs/adv_20160518A.html
Reported-by: Moti Avrahami
CID 1024412: Memory - illegal accesses (OVERRUN). Claimed to happen when
we run over 'workend' but the condition says <= workend and for all I
can see it should be safe. Compensating for the warning by adding a byte
margin in the buffer.
Also, removed the extra brace level indentation in the code and made it
so that 'workend' is only assigned once within the function.
The FTP wildcard init is now properly done in Curl_pretransfer() and
the corresponding cleanup in Curl_close().
The previous placement of the init/cleanup code left the internal
pointer NULL when this feature was used with the multi_socket() API,
as that code was run within the curl_multi_perform() function.
Reported-by: Jonathan Cardoso Machado
Fixes #800
Prior to this change a width arg could be erroneously output, and also
width and precision args could not be used together without crashing.
"%0*d%s", 2, 9, "foo"
Before: "092"
After: "09foo"
"%*.*s", 5, 2, "foo"
Before: crash
After: " fo"
Test 557 is updated to verify this and more
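A minimal sketch exercising the two fixed cases through libcurl's
public printf clone (assuming only the documented curl_mprintf() API):

    #include <curl/mprintf.h>

    int main(void)
    {
      /* width passed as an argument: now prints "09foo" */
      curl_mprintf("%0*d%s\n", 2, 9, "foo");
      /* width and precision together: now prints "fo" right-aligned
         in a five-character field instead of crashing */
      curl_mprintf("%*.*s\n", 5, 2, "foo");
      return 0;
    }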
The new way of disabling certificate verification doesn't work on
Mountain Lion (OS X 10.8) so we need to use the old way in that version
too. I've tested this solution on versions 10.7.5, 10.8, 10.9, 10.10.2
and 10.11.
Closes #802
curl's representation of HTTP/2 responses involves transforming the
response to a format that is similar to HTTP/1.1. Prior to this change,
curl would do this by separating header names and values with only a
colon, without introducing a space after the colon.
While this is technically a valid way to represent a HTTP/1.1 header
block, it is much more common to see a space following the colon. This
change introduces that space, to ensure that incautious tools are safely
able to parse the header block.
This also ensures that the difference between the HTTP/1.1 and HTTP/2
response layout is as minimal as possible.
Bug: https://github.com/curl/curl/issues/797
Closes #798
Fixes #797
... introduced in curl-7_48_0-293-g2968c83:
Error: COMPILER_WARNING:
lib/vtls/openssl.c: scope_hint: In function ‘Curl_ossl_check_cxn’
lib/vtls/openssl.c:767:15: warning: conversion to ‘int’ from ‘ssize_t’
may alter its value [-Wconversion]
- In the case of recv error, limit returning 'connection still in place'
to EINPROGRESS, EAGAIN and EWOULDBLOCK.
This is an improvement on the parent commit which changed the openssl
connection check to use recv MSG_PEEK instead of SSL_peek.
Ref: https://github.com/curl/curl/commit/856baf5#comments
Calling SSL_peek can cause bytes to be read from the raw socket which in
turn can upset the select machinery that determines whether there's data
available on the socket.
Since Curl_ossl_check_cxn only tries to determine whether the socket is
alive and doesn't actually need to see the bytes SSL_peek seems like
the wrong function to call.
We're able to occasionally reproduce a connect timeout due to this
bug. What happens is that Curl doesn't know to call SSL_connect again
after the peek happens since data is buffered in the SSL buffer and thus
select won't fire for this socket.
Closes #795
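A minimal sketch of the described aliveness check (POSIX sockets
assumed; illustrative, not the exact libcurl code):

    #include <sys/socket.h>
    #include <errno.h>

    /* return 1 if the connection still seems alive, 0 if it looks dead */
    static int conn_seems_alive(int sockfd)
    {
      char byte;
      ssize_t nread = recv(sockfd, &byte, 1, MSG_PEEK);
      if(nread == 0)
        return 0;  /* orderly shutdown by the peer: dead */
      if(nread < 0)
        /* only 'no data yet' conditions count as still alive */
        return (errno == EINPROGRESS || errno == EAGAIN ||
                errno == EWOULDBLOCK);
      return 1;    /* data is waiting: alive */
    }

Since MSG_PEEK leaves the bytes in the kernel buffer, a subsequent
select() still sees them, unlike SSL_peek which moves them into the TLS
library's own buffer.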
Only protocols that actually have a protocol registered for ALPN and NPN
should try to get that negotiated in the TLS handshake. That is only
HTTPS (well, http/1.1 and http/2) right now. Previously ALPN and NPN
would wrongly be used in all handshakes if libcurl was built with it
enabled.
Reported-by: Jay Satiro
Fixes #789
Sometimes, on systems with both IPv4 and IPv6 addresses but where the
network doesn't support IPv6, Curl_is_connected returns an error
(intermittently) even if the IPv4 socket connects successfully.
This happens because there's a for-loop that iterates over the sockets,
but the error variable is not reset when the IPv4 socket is checked and
is ok.
This patch fixes this problem by setting error to 0 when checking the
second socket and not having a result yet.
Fixes #794
curl_printf.h defines printf to curl_mprintf, etc. This can cause
problems with external headers which may use
__attribute__((format(printf, ...))) markers etc.
To avoid that they cause problems with system includes, we include
curl_printf.h after any system headers. That makes these the three
last headers, always kept in this order:
curl_printf.h
curl_memory.h
memdebug.h
None of them include system headers, they all do funny #defines.
Reported-by: David Benjamin
Fixes #743
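An illustrative sketch of what the tail of a libcurl source file's
include section looks like under this rule (the system header is a
placeholder):

    #include "curl_setup.h"

    #include <sys/types.h>      /* system headers go first */

    /* the last three includes, always in this order: */
    #include "curl_printf.h"
    #include "curl_memory.h"
    #include "memdebug.h"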
Mostly because they're not needed, because memdebug.h is always included
last of all headers so the others already included the correct ones.
But also, starting now we don't want this to accidentally include any
system headers, as the header included _before_ this header may add
defines and other fun stuff that we won't want used in system includes.
Previously, connections were closed immediately before the user had a
chance to extract the socket when the proxy required Negotiate
authentication.
This regression was brought in with the security fix in commit
79b9d5f1a4.
Closes #655
WinSock destroys the recv() buffer if send() fails. As a result, the
server response may be lost if the server sent it while curl was still
sending the request. This behavior is noticeable with short HTTP server
replies if libcurl uses several send() calls for the request (usually
for POST requests).
To work around this problem, libcurl now calls recv() before every
send() and keeps the received data in an intermediate buffer for
further processing.
Fixes: #657
Closes: #668
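A rough sketch of the workaround's shape (illustrative only;
'prerecv_store' is a hypothetical helper, and the real code handles
sizes and errors more carefully):

    /* drain pending response bytes before send() so a failed send()
       cannot make WinSock discard them (socket is non-blocking) */
    char buf[1024];
    int nread = recv(sockfd, buf, sizeof(buf), 0);
    if(nread > 0)
      prerecv_store(buf, nread);  /* hypothetical: keep for later parsing */
    send(sockfd, data, datalen, 0);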
This commit fixes a Clang warning introduced in curl-7_48_0-190-g8f72b13:
Error: CLANG_WARNING:
lib/connect.c:1120:11: warning: The right operand of '==' is a garbage value
1118| }
1119|
1120|-> if(-1 == rc)
1121| error = SOCKERRNO;
1122| }
... but output non-stripped version of the line, even if that then can
make the script identify the wrong position in the line at
times. Showing the line stripped (ie without comments) is just too
surprising.
- Error if a header line is larger than supported.
- Warn if cumulative header line length may be larger than supported.
- Allow spaces when parsing the path component.
- Make sure each header line ends in \r\n. This fixes an out-of-bounds
access.
- Disallow header continuation lines until we decide what to do.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
This commit ensures that streams which were closed in the
on_stream_close callback get passed to http2_handle_stream_close.
Previously, this might not happen. To achieve this, we increment the
drain property to forcibly call the recv function for that stream.
To more accurately check that we have no pending events before
shutting down the HTTP/2 session, we sum up the drain properties into
http_conn.drain_total. We only shut down the session if that value is
0.
With this commit, when a stream is closed before the response header
fields have been read, the error code CURLE_HTTP2_STREAM is returned
even if the HTTP/2-level error is NO_ERROR. This signals the upper
layer that the stream was closed by an error, just like a TCP
connection close in HTTP/1.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
This commit ensures that data from the network is processed before the
HTTP/2 session is terminated. This is achieved by pausing nghttp2
whenever a different stream than the current easy handle receives
data.
This commit also fixes a bug where processing sometimes hangs when
multiple HTTP/2 streams are multiplexed.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
Previously, when a stream was closed with a code other than
NGHTTP2_NO_ERROR by RST_STREAM, the underlying TCP connection was
dropped. This is undesirable since there may be other streams
multiplexed on it that are perfectly fine. This change introduces the
new error code CURLE_HTTP2_STREAM, which indicates a stream error that
only affects the relevant stream; the connection should be kept open.
The existing CURLE_HTTP2 means a connection error in general.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
... but ignore EAGAIN if the stream has ended so that we don't end up in
a loop. This is a follow-up to c8ab613 in order to avoid the problem
d261652 was made to fix.
Reported-by: Jay Satiro
Clues-provided-by: Tatsuhiro Tsujikawa
Discussed in #750
As these two options provide identical functionality, the former for
SOCKS5 proxies and the latter for HTTP proxies, the two options have
been merged. As such, CURLOPT_SOCKS5_GSSAPI_SERVICE is marked as
deprecated as of 7.49.0.
Calculate the service name and proxy service names locally, rather than
in url.c which will allow for us to support overriding the service name
for other protocols such as FTP, IMAP, POP3 and SMTP.
It turns out the google GFE HTTP/2 servers send a PING frame immediately
after a stream ends and its last DATA has been received by curl. So if
we don't drain that from the socket, it makes the socket readable in
subsequent checks and libcurl then (wrongly) assumes the connection is
dead when trying to reuse the connection.
Reported-by: Joonas Kuorilehto
Discussed in #750
Although this should never happen due to the relationship between the
'mech' and 'resp' variables, and the way they are allocated together,
it does cause problems for code analysis tools:
V595 The 'mech' pointer was utilized before it was verified against
nullptr. Check lines: 376, 381. curl_sasl.c 376
Bug: https://github.com/curl/curl/issues/745
Reported-by: Alexis La Goutte
This wouldn't cause a problem because of the way the function is called,
but prior to this change, we were processing the challenge message when
the credentials were NULL rather than when the challenge message was
populated.
This also brings this part of the Kerberos 5 code in line with the
Negotiate code.
Although mutual authentication is currently turned off and can only be
enabled by changing libcurl source code, authentication using Kerberos
5 has been broken since commit 79543caf90 in this use case.
This wouldn't cause a problem because of the way the function is called,
but prior to this change, we were processing the challenge message when
the credentials were NULL rather than when the challenge message was
populated.
This also brings this part of the Kerberos 5 code in line with the
Negotiate code.
Prior to this change, we were generating the output token when the
credentials were NULL rather than when the output token was NULL.
This also brings this part of the Kerberos 5 code in line with the
Negotiate code.
Prior to this change, we were generating the SPN in the SSPI code when
the credentials were NULL and in the GSS-API code when the context was
empty. It is better to decouple the SPN generation from these checks
and only generate it when the SPN itself is NULL.
This also brings this part of the Kerberos 5 code in line with the
Negotiate code.
It offers extra info from nghttp2 in certain error cases, like for
example when trying prior-knowledge http2 on a server that doesn't speak
http2 at all. The error message is passed on as a verbose message to
libcurl.
Discussed in #722
The error callback was added in nghttp2 1.9.0
For consistency with the spnego and oauth2 code moved the setting of
the host name outside of the Curl_auth_create_gssapi_user_message()
function.
This will allow us to more easily override it in the future.
I had accidentally used the proxy server name for the host and the host
server name for the proxy in commits ad5e9bfd5d and 6d6f9ca1d9. Whilst
Windows SSPI was quite happy with this, GSS-API wasn't.
Thanks-to: Michael Osipov
When an upload is done, there are two places where that can be detected
and only one of them would rewind the input stream - which sometimes is
necessary for example when doing NTLM HTTP POSTs and more.
This could then result in libcurl hanging.
Figured-out-by: Isaac Boukris
Reported-by: Anatol Belski
Fixes #741
So that we only do the extra typedefs in curl_memory.h when we really
need to and avoid double typedefs.
Follow-up commit to 7218b52c49.
Thanks-to: Steve Holme
The define is not in our name space and is therefore not protected by
our API promises.
It was only really used by libcurl internals but was mostly erased from
there already in 8aabbf5 (March 2015). This is supposedly the final
death blow to that define from everywhere.
As a side-effect, making sure _MPRINTF_REPLACE is gone and not used, I
made the lib tests in tests/libtest/ use curl_printf.h for its redefine
magic and then subsequently the use of sprintf() got banned in the tests
as well (as it is in libcurl internals) and I then replaced them all
with snprintf().
In the unlikely event that any user is actually using this define and
gets sad by this change, it is very easily copied to the user's own
code.
Supports HTTP/2 over clear TCP
- Optimize switching to HTTP/2 by removing calls to init and setup
before switching. Switching will eventually call setup and setup calls
init.
- Supports a new version setting to "force" the use of HTTP/2 over
clear TCP
- Add the command line parameter "--http2-prior-knowledge" to the curl
command line tool.
The code copied one byte from a 32-bit integer, which works fine as
long as the byte order is the same. Not a fine assumption. Reported by
PVS-Studio.
Reported-by: Alexis La Goutte
When compiling with OpenSSL 1.1.0 (so that the HAVE_X509_GET0_SIGNATURE
&& HAVE_X509_GET0_EXTENSIONS pre-processor block is active), Visual C++
14 complains:
warning C4701: potentially uninitialized local variable 'palg' used
warning C4701: potentially uninitialized local variable 'psig' used
Also display the GSS_C_GSS_CODE (major code) when specified instead of
only GSS_C_MECH_CODE (minor code).
In addition, the old code was printing a colon twice after the prefix
and also miscalculated the length of the buffer in between calls to
gss_display_status (the length of ": " was missing).
Also, gss_buffer is not guaranteed to be NULL terminated, and thus we
need to restrict reading to its length.
Closes #738
Since commit a5aec58 the handler schemes need to match for connections
to be reused, and for HTTP/2 multiplexing to work, reusing connections
is very important!
Closes #736
Renamed the header and source files for this module as they are HTTP
specific and as such, they should use the same naming convention as
other HTTP authentication source files do. This reverts commit
260ee6b7bf.
Note: We could also rename curl_ntlm_wb.[c|h], however, the Winbind
code needs separating from the HTTP protocol and migrating into the
vauth directory, thus adding support for Winbind to the SASL based
protocols such as IMAP, POP3 and SMTP.
libidn's tld_check_lz returns an error offset of the first character
that it failed to process, however that offset is not a byte offset and
may not even be in the locale encoding therefore we can't use it to show
the user the character that failed to process.
Bug: https://github.com/curl/curl/issues/731
Reported-by: Karlson2k
As the GSS-API and SSPI based source files are no longer library/API
specific, following the extraction of that authentication code to the
vauth directory, combine these files rather than maintain two separate
versions.
Not picked up by checksrc or Visual Studio but by my own code review;
this would have broken Intel-based Unix builds. Perhaps I should learn
to type on my laptop's keyboard before committing!
Updated the makefiles and Visual Studio project files to support moving
the authentication code to the new lib/vauth directory that was started
in commit 0d04e859e1.
warning C4701: potentially uninitialized local variable 'size' used
Technically this can't happen, as the usage of 'size' is protected by
'if(parsed)' and 'parsed' is only set after 'size' has been parsed.
Anyway, let's keep the compiler happy.
formdata.c:390: warning: cast from pointer to integer of different size
Introduced in commit ca5f9341ef, this happens because a char *, which
is 32 bits wide in 32-bit land, is being cast to a curl_off_t, which is
64 bits wide where 64-bit integers are supported by the compiler.
This doesn't happen in 64-bit land as a pointer is the same size as a
curl_off_t.
This fix doesn't address the fact that a 64-bit value cannot be used
for CURLFORM_CONTENTLEN when set in a form array and compiled on a
32-bit platform, but it does at least suppress the compilation warning.
... to allow users to see which specific wildcard matched when one is
used.
Also minor logic cleanup to simplify the code, and I removed all tabs
from verbose strings.
Simplify the code by using a single entry that looks for a socket in the
socket hash. As indicated in #712, the code looked for CURL_SOCKET_BAD
at some point and that is ineffective/wrong and this makes it easier to
avoid that.
... as it implies we need to check for that on all the other variable
references as well (as Coverity otherwise warns us for missing NULL
checks), and we're already making sure that the pointer is never NULL.
RFC 6265 section 4.1.1 spells out that the first name/value pair in the
header is the actual cookie name and content, while the following are
the parameters.
libcurl previously had a more liberal approach which causes significant
problems when introducing new cookie parameters, like the suggested new
cookie priority draft.
The previous logic read all n/v pairs from left to right, and the first
name used that wasn't a known parameter name would be used as the
cookie name, thus accepting "Set-Cookie: Max-Age=2; person=daniel" to be
a cookie named 'person' while an RFC 6265 compliant parser should
consider that to be a cookie named 'Max-Age' with an (unknown) parameter
'person'.
Fixes #709
Such a return value isn't documented but could still happen, and the
curl tool code checks for it. It would happen when the underlying
Curl_poll() function returns an error. Starting now we mask that error
as a user of curl_multi_wait() would have no way to handle it anyway.
Reported-by: Jay Satiro
Closes #707
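A minimal sketch of the caller pattern this simplifies (hypothetical
application code; 'multi' is an initialized multi handle):

    int numfds = 0;
    CURLMcode mc = curl_multi_wait(multi, NULL, 0, 1000, &numfds);
    if(mc != CURLM_OK) {
      /* only genuinely fatal conditions end up here now; an internal
         Curl_poll() error is no longer surfaced as a return code */
    }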
I got a crash with this stack:
curl/lib/url.c:2873 (Curl_removeHandleFromPipeline)
curl/lib/url.c:2919 (Curl_getoff_all_pipelines)
curl/lib/multi.c:561 (curl_multi_remove_handle)
curl/lib/url.c:415 (Curl_close)
curl/lib/easy.c:859 (curl_easy_cleanup)
Closes #704
Prior to this change when a single protocol CURL_SSLVERSION_ was
specified by the user that version was set only as the minimum version
but not as the maximum version as well.
In makefile.m32, option -ssh2 (libssh2) automatically implied -ssl
(OpenSSL) option, with no way to override it with -winssl. Since both
libssh2 and curl support using Windows's built-in SSL backend, modify
the logic to allow that combination.
Prior to this change cookies with an expiry date that failed parsing
and were converted to session cookies could be purged in remove_expired.
Bug: https://github.com/curl/curl/issues/697
Reported-by: Seth Mos
Prevent a crash if 2 (or more) requests are made to the same host and
pipelining is enabled and the connection does not complete.
Bug: https://github.com/curl/curl/pull/690
Some systems have special files that report as 0 bytes big, but still
contain data that can be read (for example /proc/cpuinfo on
Linux). Starting now, a zero byte size is considered "unknown" size and
will be read as far as possible anyway.
Reported-by: Jesse Tan
Closes #681
... as when pipelining is used, we read things into a unified buffer and
we don't do that with HTTP/2. This could then easily make programs
that set CURLMOPT_PIPELINING = CURLPIPE_HTTP1|CURLPIPE_MULTIPLEX get
data intermixed or plain broken between HTTP/2 streams.
Reported-by: Anders Bakken
The two options are almost the same, except in the case of OpenSSL:
CURLINFO_TLS_SESSION: for OpenSSL, the session internals pointer is an
SSL_CTX *.
CURLINFO_TLS_SSL_PTR: for OpenSSL, the session internals pointer is an
SSL *.
For backwards compatibility we couldn't modify CURLINFO_TLS_SESSION to
return an SSL pointer for OpenSSL.
Also, add support for the 'internals' member to point to SSL object for
the other backends axTLS, PolarSSL, Secure Channel, Secure Transport and
wolfSSL.
Bug: https://github.com/curl/curl/issues/234
Reported-by: dkjjr89@users.noreply.github.com
Bug: https://curl.haxx.se/mail/lib-2015-09/0127.html
Reported-by: Michael König
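An illustrative sketch of using the new option with the OpenSSL backend
(hypothetical application code; best called while the connection is
still alive, e.g. from a callback):

    #include <curl/curl.h>
    #include <openssl/ssl.h>

    struct curl_tlssessioninfo *tlsinfo = NULL;
    if(curl_easy_getinfo(curl, CURLINFO_TLS_SSL_PTR, &tlsinfo) == CURLE_OK
       && tlsinfo && tlsinfo->backend == CURLSSLBACKEND_OPENSSL) {
      SSL *ssl = (SSL *)tlsinfo->internals;  /* SSL *, not SSL_CTX * */
      /* inspect the TLS session here */
    }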
The internal Curl_done() function uses Curl_expire() at times and that
uses the timeout list. Better clean up the list once we're done using
it. This caused a segfault.
Reported-by: 蔡文凱
Bug: https://curl.haxx.se/mail/lib-2016-02/0097.html
- Add tests.
- Add an example to CURLOPT_TFTP_NO_OPTIONS.3.
- Add --tftp-no-options to expose CURLOPT_TFTP_NO_OPTIONS.
Bug: https://github.com/curl/curl/issues/481
Some TFTP server implementations ignore the "TFTP Option extension"
(RFC 1782-1784, 2347-2349), or implement it in a flawed way, causing
problems with libcurl. Another switch for curl_easy_setopt
"CURLOPT_TFTP_NO_OPTIONS" is introduced which prevents libcurl from
sending TFTP option requests to a server, avoiding many problems caused
by faulty implementations.
Bug: https://github.com/curl/curl/issues/481
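An illustrative sketch of the new option in use (hypothetical
application code; the URL is a placeholder and 'curl' is an initialized
easy handle):

    curl_easy_setopt(curl, CURLOPT_URL, "tftp://example.com/file.bin");
    /* don't send any TFTP option requests to the (faulty) server */
    curl_easy_setopt(curl, CURLOPT_TFTP_NO_OPTIONS, 1L);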
If any parameter in an HTTP Digest challenge message is present
multiple times, the memory allocated for all but the last entry should
be freed.
Bug: https://github.com/curl/curl/pull/667
At one point during the development of HTTP/2, the commit 133cdd29ea
introduced automatic decompression of Content-Encoding as that was what
the spec said then. Now however, HTTP/2 should work the same way as
HTTP/1 in this regard.
Reported-by: Kazuho Oku
Closes #661
The nghttp2 callback deals with the TLS layer and therefore the header
does not need to be broken into chunks.
Bug: https://github.com/curl/curl/issues/659
Reported-by: Kazuho Oku
Since we didn't keep the input argument around after having called
mbedtls, it could end up accessing the wrong memory when figuring out
the ALPN protocols.
Closes #642
As of https://boringssl-review.googlesource.com/#/c/6980/, almost all
of the BoringSSL #ifdefs in cURL should be unnecessary:
- BoringSSL provides no-op stubs for compatibility which replaces most
#ifdefs.
- DES_set_odd_parity has been in BoringSSL for nearly a year now. Remove
the compatibility codepath.
- With a small tweak to an extend_key_56_to_64 call, the NTLM code
builds fine.
- Switch OCSP-related #ifdefs to the more generally useful
OPENSSL_NO_OCSP.
The only #ifdefs which remain are Curl_ossl_version and the #undefs to
work around OpenSSL and wincrypt.h name conflicts. (BoringSSL leaves
that to the consumer. The in-header workaround makes things sensitive to
include order.)
This change errs on the side of removing conditionals despite many of
the restored codepaths being no-ops. (BoringSSL generally adds no-op
compatibility stubs when possible. OPENSSL_VERSION_NUMBER #ifdefs are
bad enough!)
Closes #640
It turns out Firefox and Chrome both allow spaces in cookie names and
there are sites out there using that.
It turned out the code meant to strip off trailing spaces from cookie
names didn't work. That is fixed now.
Test case 8 modified to verify both these changes.
Closes #639
When trying to verify a peer without having any root CA certificates
set, this makes libcurl use the TLS library's built-in default as a
fallback.
Closes #569
... also fix a conversion bug in the unused function
curl_win32_ascii_to_idn().
And remove wprintfs on error (Jay).
Bug: https://github.com/curl/curl/pull/637
It isn't used by the code in current conditions but for safety it seems
sensible to at least not crash on such input.
Extended unit test 1395 to verify this too as well as a plain "/" input.
- Switch from verifying a pinned public key in a callback during the
certificate verification to inline after the certificate verification.
The callback method had three problems:
1. If a pinned public key didn't match, CURLE_SSL_PINNEDPUBKEYNOTMATCH
was not returned.
2. If peer certificate verification was disabled the pinned key
verification did not take place as it should.
3. (related to #2) If there was no certificate of depth 0 the callback
would not have checked the pinned public key.
Though all those problems could have been fixed it would have made the
code more complex. Instead we now verify inline after the certificate
verification in mbedtls_connect_step2.
Ref: http://curl.haxx.se/mail/lib-2016-01/0047.html
Ref: https://github.com/bagder/curl/pull/601
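For reference, an illustrative sketch of how an application pins a
public key (hypothetical code; the base64 sha256 value is a
placeholder, not a real pin):

    curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY,
                     "sha256//AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=");

With this change, a mismatch reliably yields
CURLE_SSL_PINNEDPUBKEYNOTMATCH, even when peer verification is
disabled.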
The CURLOPT_SSH_PUBLIC_KEYFILE option has been documented to handle
empty strings specially since curl-7_25_0-31-g05a443a but the behavior
was unintentionally removed in curl-7_38_0-47-gfa7d04f.
This commit restores the original behavior and clarifies it in the
documentation that NULL and "" both have the same meaning when passed
to CURLOPT_SSH_PUBLIC_KEYFILE.
Bug: http://curl.haxx.se/mail/lib-2016-01/0072.html
... by extracting the LIB + REASON from the OpenSSL error code.
OpenSSL 1.1.0+ returns a new function number for another certificate
failure, so this required a fix and this is the better way to catch
this error anyway.
When an HTTP/2 upgrade request fails (no protocol switch), it would
previously detect that as still possible to pipeline on (which is
correct) and do that when PIPEWAIT was enabled, even if pipelining was
not explicitly enabled.
It should only pipeline if explicitly asked to.
Closes #584
Before this patch, if a URL does not start with the protocol
name/scheme, effective URLs would be prefixed with upper-case protocol
names/schemes. This behavior might not be expected by library users or
end users.
For example, if `CURLOPT_DEFAULT_PROTOCOL` is set to "https" and the
URL is "hostname/path", the effective URL would be
"HTTPS://hostname/path" instead of "https://hostname/path".
After this patch, effective URLs would be prefixed with a lower-case
protocol name/scheme.
Closes #597
Signed-off-by: Mohammad AlSaleh <CE.Mohammad.AlSaleh@gmail.com>
Previously, when HTTP/2 was enabled and used and a stream had a known
content length, Curl_read was not called when there were no bytes left
to read. Because of this, we could not make sure that
http2_handle_stream_close was called for every stream. Since we use
http2_handle_stream_close to emit trailer fields, they were
effectively ignored. This commit changes the code so that Curl_read is
called even if there are no bytes left to read, to ensure that
http2_handle_stream_close is called for every stream.
Discussed in https://github.com/bagder/curl/pull/564
Check that the trailer buffer exists before attempting a client write
for trailers on stream close.
Refer to comments in https://github.com/bagder/curl/pull/564
To make sure curl doesn't allow multiplexing before a connection is
upgraded to HTTP/2 (like when Upgrade: h2c fails), we must make sure the
connection uses HTTP/2 as well and not only check what's wanted.
Closes #584
Patch-by: c0ff
Previously file.txt[CR][LF] would have been returned as file.tx
(without the last t) if the file type is a symlink. Now the t is
included and the internal item_length includes the zero byte.
Spotted using test 576 on Windows.
Try harder to prevent libcurl from opening up an additional socket when
CURLOPT_PIPEWAIT is set. Accomplished by letting ongoing TCP and TLS
handshakes complete first before the decision is made.
Closes #575
The function is only present in wolfssl/cyassl if it was built with
--enable-opensslextra. With these checks added, pinning support is disabled
unless the TLS lib has that function available.
Also fix the mistake in configure that checks for the wrong lib name.
Closes #566
This commit adds trailer support in HTTP/2. In HTTP/1.1, chunked
encoding must be used to send trailer fields. HTTP/2 deprecated any
transfer-encoding, including chunked, but trailer fields are now
always available.
Since trailer fields are relatively rare these days (gRPC uses them
extensively though), allocating a buffer for trailer fields is done
when we detect that a HEADERS frame containing trailer fields has
started. We use the Curl_add_buffer_* functions to buffer all
trailers, just like we do for regular header fields, and then deliver
them when the stream is closed. We have to be careful here so that all
data is delivered to the upper layer before sending trailers up to the
application.
We could deliver trailer fields one by one using the NGHTTP2_ERR_PAUSE
mechanism, but the current method is far simpler.
Another possibility is to use chunked encoding internally for HTTP/2
traffic; I have not tested it, but it could add extra overhead.
Closes #564
- In Curl_verifyhost check all altnames in the certificate.
Prior to this change only the first altname was checked. Only the GSKit
SSL backend was affected by this bug.
Bug: http://curl.haxx.se/mail/lib-2015-12/0062.html
Reported-by: John Kohl
When NGHTTP2_ERR_PAUSE is returned from data_source_read_callback, we
might not process the DATA frame fully. Calling
nghttp2_session_mem_recv() again will continue to process the DATA
frame, but if there are no incoming frames, then we have to call it
again with 0-length data. Without this, the on_stream_close callback
will not be called, and the stream could hang.
Bug: http://curl.haxx.se/mail/lib-2015-11/0103.html
Reported-by: Francisco Moraes
The name of the header guard in lwIP's <lwip/opt.h> has changed from
'__LWIP_OPT_H__' to 'LWIP_HDR_OPT_H' (bug #35874 in May 2015).
Other fixes:
- In curl_setup.h, the problem with an old PSDK doesn't apply if lwIP is
used.
- In memdebug.h, the 'socket' should be undefined first due to lwIP's
lwip_socket() macro.
- In curl_addrinfo.c lwIP's getaddrinfo() + freeaddrinfo() macros need
special handling because they were undef'ed in memdebug.h.
- In select.c we can't use preprocessor conditionals inside select if
MSVC and select is a macro, as it is with lwIP.
http://curl.haxx.se/mail/lib-2015-12/0023.html
http://curl.haxx.se/mail/lib-2015-12/0024.html
- If the size of the length type (curl_off_t) is greater than the size
of the size_t type then check before allocating memory to make sure the
value of length will fit in a size_t without overflow. If it doesn't
then return CURLE_BAD_FUNCTION_ARGUMENT.
Bug: https://github.com/bagder/curl/issues/425#issuecomment-154518679
Reported-by: Steve Holme
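A sketch of the added guard (names illustrative, not the exact libcurl
code):

    /* reject lengths that cannot be represented in a size_t */
    if(sizeof(curl_off_t) > sizeof(size_t) &&
       length > (curl_off_t)(size_t)-1)
      return CURLE_BAD_FUNCTION_ARGUMENT;
    buffer = malloc((size_t)length + 1);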
IoctlSocket() apparently wants a pointer to a long, passed as a
char *, in its third parameter. This bug was introduced back in commit
c5fdeef41d from October 1, 2001!
Bug: http://curl.haxx.se/mail/lib-2015-11/0088.html
Reported-by: Norbert Kett
It would previously be skipped if an existing error was returned,
which would lead to a previous value being left there and later used,
CURLINFO_TOTAL_TIME for example.
Still, it avoids the final progress update if we reached DONE as the
result of a callback abort, to avoid yet another callback being called
after an abort-by-callback.
Reported-by: Lukas Ruzicka
Closes #538
smb.c:134:3: warning: conversion to 'short unsigned int' from 'int' may
alter its value
smb.c:146:42: warning: conversion to 'unsigned int' from 'long long
unsigned int' may alter its value
smb.c:146:65: warning: conversion to 'unsigned int' from 'long long
unsigned int' may alter its value
The push headers are freed after the push callback has been invoked,
meaning this code should only free the headers if the callback was never
invoked and thus the headers weren't freed at that time.
Reported-by: Davey Shafik
According to RFC 7628, a failure message may be sent by the server in
a base64-encoded JSON string as a continuation response.
Currently only implemented for OAUTHBEARER and not XOAUTH2.
OAUTHBEARER is now the official "registered" SASL mechanism name for
OAuth 2.0. However, we don't want to drop support for XOAUTH2 as some
servers won't support the new mechanism yet.
They tend to never get updated and are therefore frequently
inaccurate, and we never go back to revisit them. We document issues to
work on properly in KNOWN_BUGS and TODO instead.
Following the fix in commit d6d58dd558 it is necessary to re-introduce
XOAUTH2 in the default enabled authentication mechanisms, which was
removed in commit 7b2012f262, otherwise users will have to specify
AUTH=XOAUTH2 in the URL.
Note: OAuth 2.0 will only be used when the bearer is specified.
Regression from commit 9e8ced9890 which meant if --oauth2-bearer was
specified but the SASL mechanism wasn't supported by the server then
the mechanism would be chosen.
Added support to the OAuth 2.0 message function for host and port, in
order to accommodate the official OAUTHBEARER SASL mechanism which is
to be added shortly.
The curl_config.h file can be generated either from curl_config.h.cmake
or curl_config.h.in, depending on whether you're building using CMake or
the autotools. The CMake template header doesn't include entries for
all of the protocols that you can disable, which (I think) means that
you can't actually disable those protocols when building via CMake.
Closes #523
BoringSSL implements `BIO_get_mem_data` as a function, instead of a
macro, and expects the output pointer to be a `char **`. We have to add
an explicit cast to grab the pointer as a `const char **`.
Closes #524
- Set the user info param to the socket returned by
Curl_getconnectinfo, regardless of whether the socket is bad.
Effectively this means the user info param now will receive
CURL_SOCKET_BAD instead of -1 on a bad socket.
- Remove incorrect comments.
CURLINFO_ACTIVESOCKET is documented to write CURL_SOCKET_BAD to user
info param but prior to this change it wrote -1.
Bug: https://github.com/bagder/curl/pull/518
Reported-by: Marcel Raad
Rationale: when starting up a curl-using app, all cookies from the jar
are checked against each other. This was causing a startup delay in the
Fifth browser.
All tests pass.
Signed-off-by: Lauri Kasanen <cand@gmx.com>
Apparently there are sites out there that do redirects to URLs they
provide in plain UTF-8 or similar. Browsers and wget %-encode such
headers when doing a subsequent request. Now libcurl does too.
Added test 1138 to verify.
Closes #473
Fixes a name space pollution, at the cost that programs using one of
these defines will no longer compile. However, the vast majority of
libcurl
programs that do multipart formposts use curl_formadd() to build this
list.
Closes #506
This reverts commit 370ee919b3.
Issue #509 has all the details but it was confirmed that the crash was
not due to this, so the previous commit was wrong.
Removed wrong assert()s
The 'conn' passed in as userdata can be used and there can be other
sessionhandles ('data') than the single one this checked for.
Introduced in c6aedf680f. It needs to be CURLM_STATE_LAST big since it
must handle the range 0 .. CURLM_STATE_MSGSENT (18) and CURLM_STATE_LAST
is 19 right now.
Reported-by: Dan Fandrich
Bug: http://curl.haxx.se/mail/lib-2015-10/0069.html
... and assign it from the set.fread_func_set pointer in the
Curl_init_CONNECT function. This A) avoids that we have code that
assigns fields in the 'set' struct (which we always knew was bad) and
more importantly B) it makes it impossible to accidentally leave the
wrong value for when the handle is re-used etc.
Introducing a state-init functionality in multi.c, so that we can set a
specific function to get called when we enter a state. The
Curl_init_CONNECT is thus called when switching to the CONNECT state.
Bug: https://github.com/bagder/curl/issues/346
Closes #346
sk_X509_pop will decrease the size of the stack which means that the loop would
end after having added only half of the certificates.
Also make sure that the X509 certificate is freed in case
SSL_CTX_add_extra_chain_cert fails.
- If a CURLINFO option is unknown return CURLE_UNKNOWN_OPTION.
Prior to this change CURLE_BAD_FUNCTION_ARGUMENT was returned on
unknown. That return value is contradicted by the CURLINFO option
documentation which specifies a return of CURLE_UNKNOWN_OPTION on
unknown.
- Change algorithm init to happen after OpenSSL config load.
Additional algorithms may be available due to the user's config so we
initialize the algorithms after the user's config is loaded.
Bug: https://github.com/bagder/curl/issues/447
Reported-by: Denis Feklushkin
For a single-stream download from localhost, we managed to increase
transfer speed from 1.6MB/sec to around 400MB/sec, mostly because of
this single fix.
... only call it when there is data arriving for another handle than the
one that is currently driving it.
Improves single-stream download performance quite a lot.
Thanks-to: Tatsuhiro Tsujikawa
Bug: http://curl.haxx.se/mail/lib-2015-09/0097.html
If GnuTLS fails to read the certificate then include whatever reason it
provides in the failure message reported to the client.
Signed-off-by: Mike Crowe <mac@mcrowe.com>
The gnutls vtls back-end was previously ignoring any password set via
CURLOPT_KEYPASSWD. Presumably this was because
gnutls_certificate_set_x509_key_file did not support encrypted keys.
gnutls now has a gnutls_certificate_set_x509_key_file2 function that
does support encrypted keys. Let's determine at compile time whether the
available gnutls supports this new function. If it does then use it to
pass the password. If it does not then emit a helpful diagnostic if a
password is set. This is preferable to the previous behaviour of just
failing to read the certificate without giving a reason in that case.
Signed-off-by: Mike Crowe <mac@mcrowe.com>
... even for those that don't support providing anything in the
'internals' struct member since it offers a convenient way for
applications to figure this out.
- Change the designator name we use to show the base64 encoded sha256
hash of the server's public key from 'pinnedpubkey' to
'public key hash'.
Though the server's public key hash is only shown when comparing pinned
public key hashes, the server's hash may not match any of the pinned
ones.
Without this workaround, NSS re-uses a session cache entry even though
the server name does not match. This causes the SNI host name to
differ from the actual host name. Consequently, certain servers (e.g.
github.com) respond with 400 to such requests.
Bug: https://bugzilla.mozilla.org/1202264
If the port number in the proxy string ended weirdly or the number is
too large, skip it. Mostly as a means to bail out early if a "bare" IPv6
numerical address is used without enclosing brackets.
Also mention the bracket requirement for IPv6 numerical addresses to the
man page for CURLOPT_PROXY.
Closes #415
Reported-by: Marcel Raad
In some timing-dependent cases when a 4xx response immediately
followed a 150 after a STOR was issued, this function would wrongly
return 'complete == true' while 'wait_data_conn' was still set.
Closes #405
Reported-by: Patricia Muscalu
RFC 7540 section 8.1.2.2 states: "An endpoint MUST NOT generate an
HTTP/2 message containing connection-specific header fields; any message
containing connection-specific header fields MUST be treated as
malformed"
Closes #401
Introduced in commit 59f3f92ba6 this function is only implemented when
CURL_DISABLE_CRYPTO_AUTH is not defined. As such we shouldn't define
the function in the header file either.
This patch addresses known bug #76, where on 64-bit Windows SOCKET is 64
bits wide, but long is only 32, making CURLINFO_LASTSOCKET unreliable.
Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
- Add new option CURLOPT_DEFAULT_PROTOCOL to allow specifying a default
protocol for schemeless URLs.
- Add new tool option --proto-default to expose
CURLOPT_DEFAULT_PROTOCOL.
In the case of schemeless URLs libcurl will behave in this way:
When the option is used libcurl will use the supplied default.
When the option is not used, libcurl will follow its usual plan of
guessing from the hostname and falling back to 'http'.
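An illustrative sketch of the new libcurl knob (hypothetical values;
the host is a placeholder):

    /* schemeless URLs now default to https instead of guessed schemes */
    curl_easy_setopt(curl, CURLOPT_DEFAULT_PROTOCOL, "https");
    curl_easy_setopt(curl, CURLOPT_URL, "example.com/path");

On the command line, the equivalent is --proto-default https.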
If strict certificate checking is disabled (CURLOPT_SSL_VERIFYPEER
and CURLOPT_SSL_VERIFYHOST are disabled) do not fail if the server
doesn't present a certificate at all.
Closes #392
The multi state machine would otherwise go into the DO_MORE state after
DO, even for the case when the FTP state machine had already performed
those duties, which caused libcurl to get stuck in that state and fail
miserably. This occurred for active FTP uploads.
Reported-by: Patricia Muscalu
Visual Studio complains with a message box:
"Run-Time Check Failure #1 - A cast to a smaller data type has caused a
loss of data. If this was intentional, you should mask the source of
the cast with the appropriate bitmask.
For example:
char c = (i & 0xFF);
Changing the code in this way will not affect the quality of the
resulting optimized code."
This is because only 'val' is cast to unsigned char, so the "& 0xff" has
no effect.
Closes #387
Return 0 instead of NGHTTP2_ERR_CALLBACK_FAILURE if we can't locate the
SessionHandle. Apparently mod_h2 will sometimes send a frame for a
stream_id we're finished with.
Use nghttp2_session_get_stream_user_data and
nghttp2_session_set_stream_user_data to identify SessionHandles instead
of a hash.
Closes #372
Currently when the server responds with a 401 on an NTLM-authenticated
(re-used) connection, we consider it to have failed. However, this is
legitimate and may happen when for example IIS is configured with
'authPersistSingleRequest' or when the request goes through a proxy
(with a 'via' header).
Implemented by employing an additional state once a connection is
re-used, to indicate that if we receive a 401 we need to restart
authentication.
Closes #363
The SSH state machine didn't clear the 'rc' variable appropriately in
two places, which prevented it from looping the way it should. And it
lacked an 'else' statement that made it possible to erroneously get
stuck in the SSH_AUTH_AGENT state.
Reported-by: Tim Stack
Closes #357
connect.c:953:5: warning: initializer element is not computable at load
time
connect.c:953:5: warning: missing initializer for field 'dwMinorVersion'
of 'OSVERSIONINFOEX'
curl_sspi.c:97:5: warning: initializer element is not computable at load
time
curl_sspi.c:97:5: warning: missing initializer for field 'szCSDVersion'
of 'OSVERSIONINFOEX'
Otherwise it would never be called for an HTTP/2 connection, which has
its own disconnect handler.
I spotted this while debugging <https://bugzilla.redhat.com/1248389>
where the http_disconnect() handler was called on an FTP session handle
causing 'dnf' to crash. conn->data->req.protop of type (struct FTP *)
was reinterpreted as type (struct HTTP *) which resulted in SIGSEGV in
Curl_add_buffer_free() after printing the "Connection cache is full,
closing the oldest one." message.
A previously working version of libcurl started to crash after it was
recompiled with the HTTP/2 support despite the HTTP/2 protocol was not
actually used. This commit makes it work again although I suspect the
root cause (reinterpreting session handle data of incompatible protocol)
still has to be fixed. Otherwise the same will happen when mixing FTP
and HTTP/2 connections and exceeding the connection cache limit.
Reported-by: Tomas Tomecek
Bug: https://bugzilla.redhat.com/1248389
Currently, libcurl rejects responses with "Content-Encoding: compress"
when CURLOPT_ACCEPT_ENCODING is set to "". I think that libcurl should
treat the Content-Encoding "compress" the same as other
Content-Encodings that it does not support, e.g. "bzip2". That means
just ignoring it.
MSVC 12 complains:
lib\vtls\openssl.c(1554): warning C4701: potentially uninitialized local
variable 'verstr' used
It's a false positive, but as this warning normally is not one, I have
enabled warning-as-error for that warning.
New tool option --ssl-no-revoke.
New value CURLSSLOPT_NO_REVOKE for CURLOPT_SSL_OPTIONS.
Currently this option applies only to WinSSL where we have automatic
certificate revocation checking by default. According to the
ssl-compared chart there are other backends that have automatic checking
(NSS, wolfSSL and DarwinSSL) so we could possibly accommodate them at
some later point.
Bug: https://github.com/bagder/curl/issues/264
Reported-by: zenden2k <zenden2k@gmail.com>
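An illustrative sketch of the new libcurl option in use (hypothetical
application code):

    /* with WinSSL: skip the automatic certificate revocation checks */
    curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS, (long)CURLSSLOPT_NO_REVOKE);

The tool equivalent is the new --ssl-no-revoke flag.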
Static analysis indicated that my commit 9008f3d564 ("ntlm_wb: Fix
hard-coded limit on NTLM auth packet size") introduced a potential
memory leak on an error path, because we forget to free the buffer
before returning an error.
Fix this.
Although actually, it never happens in practice because we never *get*
here with state == NTLMSTATE_TYPE1. The state is always zero. That
might want cleaning up in a separate patch.
Reported-by: Terri Oda
setup-vms.h: More symbols for SHA256, hacks for older VAX
openssl.h: Use OpenSSL OPENSSL_NO_SHA256 macro to allow building on VAX.
openssl.c: Use OpenSSL version checks and OPENSSL_NO_SHA256 macro to
allow building on VAX and 64 bit VMS.
setup-vms.h: Symbol case fixups submitted by Michael Steve
build_gnv_curl_pcsi_desc.com: VSI, aka VMS Software Inc., is now the
supplier of new versions of VMS. The install kit needs to accept VSI
as a producer.
Since we do a prefix match of the header given by the application code
against the header name pair in the format "NAME:VALUE", and the VALUE
part can contain ":", we have to be careful about the existence of ":"
in the header parameter. ":" should be allowed to match an HTTP/2
pseudo-header field, and any other use of ":" in the header must be
treated as an error, making curl_pushheader_byname return NULL. This
commit implements this behaviour.
In 3013bb6 I had changed cookie export to ignore any-domain cookies;
however, the logic I used to do so was incorrect and would lead to a
busy loop when exporting a cookie list that contained any-domain
cookies. The result was worse than that though, because in that case
the other cookies would not be written, resulting in an empty file
once the application was terminated to stop the busy loop.
Make sure that the error buffer is always initialized and simplify the
use of it to make the logic easier.
Bug: https://github.com/bagder/curl/issues/318
Reported-by: sneis
The symbol SSL3_MT_NEWSESSION_TICKET appears to have been introduced at
around openssl 0.9.8f, and the use of it in lib/vtls/openssl.c breaks
builds with older openssls (certainly with 0.9.8b, which is the latest
older version I have to try with).
** WORK-AROUND **
The introduced non-blocking general behaviour for Curl_proxyCONNECT()
didn't work for the data connection establishment unless it was very
fast. The newly introduced function argument makes it operate in a more
blocking manner, more like it used to work in the past. This blocking
approach is only used when the FTP data connecting through HTTP proxy.
Blocking like this is bad. A better fix would make it work more
asynchronously.
Bug: https://github.com/bagder/curl/issues/278
This commit is several drafts squashed together. The changes from each
draft are noted below. If any changes are similar and possibly
contradictory the change in the latest draft takes precedence.
Bug: https://github.com/bagder/curl/issues/244
Reported-by: Chris Araman
%%
%% Draft 1
%%
- return 0 if len == 0. that will have to be documented.
- continue on and process the caches regardless of raw recv
- if decrypted data will be returned then set the error code to CURLE_OK
and return its count
- if decrypted data will not be returned and the connection has closed
(eg nread == 0) then return 0 and CURLE_OK
- if decrypted data will not be returned and the connection *hasn't*
closed then set the error code to CURLE_AGAIN --only if an error code
isn't already set-- and return -1
- narrow the Win2k workaround to only Win2k
%%
%% Draft 2
%%
- Trying out a change in flow to handle corner cases.
%%
%% Draft 3
%%
- Back out the lazier decryption change made in draft 2.
%%
%% Draft 4
%%
- Some formatting and branching changes
- Decrypt all encrypted cached data when len == 0
- Save connection closed state
- Change special Win2k check to use connection closed state
%%
%% Draft 5
%%
- Default to CURLE_AGAIN in cleanup if an error code wasn't set and the
connection isn't closed.
%%
%% Draft 6
%%
- Save the last error only if it is an unrecoverable error.
Prior to this I saved the last error state in all cases; unfortunately
the logic to cover that in all cases would lead to some muddle and I'm
concerned that could then lead to a bug in the future so I've replaced
it by only recording an unrecoverable error and that state will persist.
- Do not recurse on renegotiation.
Instead we'll continue on to process any trailing encrypted data
received during the renegotiation only.
- Move the err checks in cleanup after the check for decrypted data.
In either case decrypted data is always returned but I think it's easier
to understand when those err checks come after the decrypted data check.
%%
%% Draft 7
%%
- Regardless of len value go directly to cleanup if there is an
unrecoverable error or a close_notify was already received. Prior to
this change we only acknowledged those two states if len != 0.
- Fix a bug in connection closed behavior: Set the error state in the
cleanup, because we don't know for sure it's an error until that time.
- (Related to above) In the case the connection is closed go "greedy"
with the decryption to make sure all remaining encrypted data has been
decrypted even if it is not needed at that time by the caller. This is
necessary because we can only tell if the connection closed gracefully
(close_notify) once all encrypted data has been decrypted.
- Do not renegotiate when an unrecoverable error is pending.
%%
%% Draft 8
%%
- Don't show 'server closed the connection' info message twice.
- Show an info message if server closed abruptly (missing close_notify).
Some servers will request a client certificate, but not require one.
This change allows libcurl to connect to such servers when using
schannel as its ssl/tls backend. When a server requests a client
certificate, libcurl will now continue the handshake without one,
rather than terminating the handshake. The server can then decide
if that is acceptable or not. Prior to this change, libcurl would
terminate the handshake, reporting a SEC_I_INCOMPLETE_CREDENTIALS
error.
and a conversion to markdown. Removed the lib/README.* files. The idea
being to move toward having INTERNALS as the one and only "book" of
internals documentation.
Added a TOC to top of the document.
When CURL_SOCKET_BAD is returned in the callback, it should be treated
as an error (CURLE_COULDNT_CONNECT) if no other socket is subsequently
created when trying to connect to a server.
Bug: http://curl.haxx.se/mail/lib-2015-06/0047.html
- Try building a chain using issuers in the trusted store first to avoid
problems with server-sent legacy intermediates.
Prior to this change server-sent legacy intermediates with missing
legacy issuers would cause verification to fail even if the client's CA
bundle contained a valid replacement for the intermediate and an
alternate chain could be constructed that would verify successfully.
https://rt.openssl.org/Ticket/Display.html?id=3621&user=guest&pass=guest
Prior to this change any-domain cookies (cookies without a domain that
are sent to any domain) were exported with domain name "unknown".
Bug: https://github.com/bagder/curl/issues/292
Follow-up to e8423f9ce1 with discussions in
https://github.com/bagder/curl/pull/258
This check scans for fopen() with a mode string without 'b' present, as
it may indicate that an FOPEN_* define should rather be used.
- Change fopen calls to use FOPEN_READTEXT instead of "r" or "rt"
- Change fopen calls to use FOPEN_WRITETEXT instead of "w" or "wt"
This change is to explicitly specify when we need to read/write text.
Unfortunately 't' is not part of POSIX fopen so we can't specify it
directly. Instead we now have FOPEN_READTEXT, FOPEN_WRITETEXT.
Prior to this change we had an issue on Windows if an application that
uses libcurl overrides the default file mode to binary. The default file
mode in Windows is normally text mode (translation mode) and that's what
libcurl expects.
Bug: https://github.com/bagder/curl/pull/258#issuecomment-107093055
Reported-by: Orgad Shaneh
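A sketch of the resulting pattern; the macro values mirror what one
would expect on each platform and the exact definitions here are
illustrative, not copied from the curl headers:

  #include <stdio.h>

  #ifdef _WIN32
  #define FOPEN_READTEXT "rt"   /* explicit text (translation) mode */
  #define FOPEN_WRITETEXT "wt"
  #else
  #define FOPEN_READTEXT "r"    /* 't' is not part of POSIX fopen */
  #define FOPEN_WRITETEXT "w"
  #endif

  int main(void)
  {
    FILE *fp = fopen("cookies.txt", FOPEN_READTEXT);
    if(fp)
      fclose(fp);
    return 0;
  }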
Previously, after seeing the upgrade to HTTP/2, we fed the data that
followed the upgrade response headers directly to
nghttp2_session_mem_recv() in Curl_http2_switched(). But it turns out
that the passed buffer, mem, is part of stream->mem, and the callbacks
called by nghttp2_session_mem_recv() will write stream-specific data
into stream->mem, overwriting the input data. This corrupts the input,
and most likely a frame length error is detected by the nghttp2
library. The fix is to first copy the passed data to the HTTP/2
connection buffer, httpc->inbuf, and then call
nghttp2_session_mem_recv().
Coverity CID 1299424 identified dead code because of checks that could
never equal true (if the mechanism's name was NULL).
Simplified the function by removing a level of pointers and removing the
loop and array that weren't used.
Replace use of assert with code that properly catches bad input at
run-time even in non-debug builds.
This flaw was sort of detected by Coverity CID 1299425 which claimed the
"case RTSPREQ_NONE" was dead code.
Coverity CID 1299426 warned about possible NULL dereference otherwise,
but that would only ever happen if we get invalid HTTP/2 data with
frames for stream 0. Avoid this risk by returning early when stream 0 is
used.
Prior to this change the description for SEC_E_ILLEGAL_MESSAGE was OS
and language specific, and invariably translated to something not very
helpful like: "The message received was unexpected or badly formatted."
Bug: https://github.com/bagder/curl/issues/267
Reported-by: Michael Osipov
With many easy handles using the same connection for multiplexing, it is
important we store and keep the transfer-oriented stuff in the
SessionHandle so that callbacks and callback data work fine even when
many easy handles share the same physical connection.
Previously, when we had sent the entire given buffer in
data_source_callback, we returned NGHTTP2_ERR_DEFERRED and the nghttp2
library temporarily removed the stream for writing. That in itself is
good, but if this is the sole stream in the session,
nghttp2_session_want_write() then returns zero, which means that
libcurl does not check writeability of the underlying socket. This
leads to very slow uploads, at around 16KB per second. To fix this, if
we still have data to send, call nghttp2_session_resume_data() after
nghttp2_session_send(). This makes nghttp2_session_want_write() return
nonzero (if the connection window is still open), and as a result the
socket's writeability is checked and upload speed becomes normal.
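A hedged sketch of that fix; the nghttp2 calls are the real API, while
the wrapper and its arguments stand in for libcurl's actual state:

  #include <nghttp2/nghttp2.h>

  /* After draining the send queue, un-defer the stream if more upload
     data remains, so nghttp2_session_want_write() turns nonzero and
     the socket gets polled for writability again. */
  static int send_and_resume(nghttp2_session *h2, int32_t stream_id,
                             int have_more_data)
  {
    int rv = nghttp2_session_send(h2);
    if(rv != 0)
      return rv;
    if(have_more_data)
      rv = nghttp2_session_resume_data(h2, stream_id);
    return rv;
  }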
Stop curl from failing when non-fatal alert is received during
handshake. This e.g. fixes lots of problems when working with https
sites through proxies.
BoringSSL removed support for direct callers of SSL_CTX_callback_ctrl
and SSL_CTX_ctrl, so move to a way that should work on BoringSSL and
OpenSSL.
re #275
Error: CLANG_WARNING:
lib/http.c:173:16: warning: Value stored to 'http' during its initialization is never read
Error: COMPILER_WARNING:
lib/http.c: scope_hint: In function ‘http_disconnect’
lib/http.c:173:16: warning: unused variable ‘http’ [-Wunused-variable]
.. also make __func__ replacement in multi.
Prior to this change debug builds would fail to build if the compiler
was pre-C99 and didn't support __func__.
We could get a stream ID that is not in the hash in on_stream_close.
For example, if we decided to reject a stream (e.g., a PUSH_PROMISE),
then we never created the stream or stored it in the hash with its
stream ID.
This commit requires nghttp2 v1.0.0 to compile, migrates the code to
v1.0.0 and utilizes the recent version of nghttp2 to simplify the code.
First we use the nghttp2_option_set_no_recv_client_magic function to
detect nghttp2 v1.0.0; that function only exists since v1.0.0.
Since nghttp2 v0.7.5, nghttp2 ensures header field ordering and
validates received header fields. If it finds an error, RST_STREAM with
PROTOCOL_ERROR is issued. Since we require v1.0.0, we can utilize
this feature to simplify the libcurl code, which this commit does.
Migration from the 0.7 series is done based on the nghttp2 migration
document. For libcurl, we removed the code sending the first 24 bytes
of client magic; it is now done by the nghttp2 library.
The on_invalid_frame_recv callback signature changed and is updated
accordingly.
to allow code to act differently on the situation.
Also added some more info message for the connection re-use function to
make it clearer when connections are not re-used.
Previously, when we paused because we ran out of buffer space, we just
threw away unread data in the connection buffer. That broke the
protocol framing and I saw occasional FRAME_SIZE_ERROR. This commit
fixes the issue by remembering how much data was read, and in the next
iteration we process the remaining data.
This commit fixes the bug that streams get stuck if a stream receives
some DATA and stream->closed becomes true at the same time. Previously,
in this condition, after we processed the DATA we would try to read
data from the underlying transport, but there was no data and we got
EAGAIN. There was no code path to evaluate stream->closed.
... from the connection struct. The stream one being the 'struct HTTP'
which is kept in the SessionHandle struct (easy handle).
Look up streams for incoming frames in the stream hash; hashing is
based on the stream id and we get the SessionHandle for the incoming
stream that way.
Previously we counted all connections to a specific host name and that
would be used for the CURLMOPT_MAX_HOST_CONNECTIONS check for example,
while servers on different port numbers are normally considered
different "origins" on the web and should thus be considered different
hosts.
All the existing Curl_bundle* functions were only ever used from within
the conncache.c file, so I moved them over and made them static (and
removed the Curl_ prefix).
This avoids unnecessary dynamic allocs and as this also removed the last
users of *hash_alloc() and *hash_destroy(), those two functions are now
removed.
The OpenSSL trace callback is wonderfully undocumented but given a
journey in the source code, it seems the cases where ssl_ver is zero
don't follow the same pattern and thus turned out confusing and
misleading. For now, we skip doing any CURLINFO_TEXT logging on those
but keep sending them as CURLINFO_SSL_DATA_OUT/IN.
Also, I added direction to the text info and I edited some functions
slightly.
Bug: https://github.com/bagder/curl/issues/219
Reported-by: Jay Satiro, Ashish Shukla
- update default versions of dependencies (except for rare/old platforms)
- update urls
- sync examples makefiles with main ones
- remove line ending space
Make the HTTP headers separated by default for improved security and
reduced risk for information leakage.
Bug: http://curl.haxx.se/docs/adv_20150429.html
Reported-by: Yehezkel Horowitz, Oren Souroujon
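Presumably this makes CURLHEADER_SEPARATE the default; a sketch of how
an application could opt back into the old behavior, assuming
CURLOPT_HEADEROPT is the knob involved:

  #include <curl/curl.h>

  /* Explicitly request the old behavior where custom headers are sent
     to both host and proxy (assumption: this is the option concerned). */
  static void use_unified_headers(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_HEADEROPT, (long)CURLHEADER_UNIFIED);
  }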
When doing Negotiate-authenticated HTTP requests, the entire connection
may become authenticated and not just the specific HTTP request, which
is otherwise how HTTP works, as Negotiate can basically use NTLM under
the hood. curl was not adhering to this fact but would assume that such
requests would also be authenticated per request.
CVE-2015-3148
Bug: http://curl.haxx.se/docs/adv_20150422B.html
Reported-by: Isaac Boukris
If a URL is given with a zero-length host name, like in "http://:80" or
just ":80", `fix_hostname()` will index the host name pointer with a -1
offset (as it blindly assumes a non-zero length) and both read and
assign that address.
CVE-2015-3144
Bug: http://curl.haxx.se/docs/adv_20150422D.html
Reported-by: Hanno Böck
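The class of bug, as a hypothetical sketch (not the actual
fix_hostname() source): indexing with len - 1 without first checking
that len is nonzero:

  #include <string.h>

  static void strip_trailing_dot(char *hostname)
  {
    size_t len = strlen(hostname);
    /* the guard on len is the crucial part: with a zero-length host
       name, hostname[len - 1] would read and write before the buffer */
    if(len && hostname[len - 1] == '.')
      hostname[len - 1] = '\0';
  }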
The internal libcurl function called sanitize_cookie_path() that cleans
up the path element as given to it from a remote site or when read from
a file, did not properly validate the input. If given a path that
consisted of a single double-quote, libcurl would index a newly
allocated memory area with index -1 and assign a zero to it, thus
destroying heap memory it wasn't supposed to.
CVE-2015-3145
Bug: http://curl.haxx.se/docs/adv_20150422C.html
Reported-by: Hanno Böck
At some point, Firefox has changed and generates different directory
names for the default profile that made this script fail to find them.
Bug: https://github.com/bagder/curl/issues/207
Reported-by: sneakyimp
Add 'gdi32' and 'crypt32' Windows implibs to avoid failure
while building libcurl.dll using the mingw compiler.
The same logic is used in 'src/makefile.m32' when
building curl.exe.
The factor of 8 is a bytes-to-bits conversion factor, but pkt_size and
rate_bps are both in bytes. When using the rate limiting option, curl
waits 8 times too long, and then transfers very quickly until the
average rate reaches the limit. The average rate follows the limit over
time, but the actual traffic is bursty.
Thanks-to: Benjamin Gilbert
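In other words, with both quantities in bytes the bits factor does not
belong; a sketch of the corrected delay computation, with illustrative
names and units:

  /* time to send pkt_size bytes at rate_bps bytes/second, in ms */
  static long wait_time_ms(long pkt_size, long rate_bps)
  {
    /* buggy version: (pkt_size * 8 * 1000) / rate_bps -- 8x too long */
    return (pkt_size * 1000) / rate_bps;
  }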
The key length in bits will always fit in an unsigned long so the
loss-of-data warning assigning the result of x64 pointer arithmetic to
an unsigned long is unnecessary.
Prior to this change libcurl could show multiple 'CyaSSL: Connecting to'
messages since cyassl_connect_step2 is called multiple times, typically.
The message is superfluous even once since libcurl already informs the
user elsewhere in code that it is connecting.
- cache entries must be also refreshed when they are in use
- have the cache count as inuse reference too, freeing timestamp == 0 special
value
- use timestamp == 0 for CURLOPT_RESOLVE entries which don't get refreshed
- remove CURLOPT_RESOLVE special inuse reference (timestamp == 0 will prevent refresh)
- fix Curl_hostcache_clean - CURLOPT_RESOLVE entries don't have a special
reference anymore, and it would also release non CURLOPT_RESOLVE references
- fix locking in Curl_hostcache_clean
- fix unit1305.c: hash now keeps a reference, need to set inuse = 1
This change is to allow the user's CTX callback to change the minimum
protocol version in the CTX without us later overriding it, as we did
prior to this change.
SSL_CTX_load_verify_locations can return negative values on fail,
therefore to check for failure we check if load is != 1 (success)
instead of if load is == 0 (failure), the latter being incorrect given
that behavior.
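A sketch of the corrected test; the error handling is illustrative:

  #include <openssl/ssl.h>

  static int load_ca_store(SSL_CTX *ctx, const char *cafile,
                           const char *capath)
  {
    /* != 1 covers both 0 and negative failure returns */
    if(SSL_CTX_load_verify_locations(ctx, cafile, capath) != 1)
      return -1;
    return 0;
  }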
Previously in Curl_http2_switched, we called nghttp2_session_mem_recv
to parse incoming data which had already been received while curl was
handling the upgrade. But we didn't call nghttp2_session_send, which
led to curl not sending any response to the received frames. Most
likely we had received SETTINGS from the server at this point, so we
missed the opportunity to send SETTINGS + ACK. This commit adds the
missing nghttp2_session_send call in Curl_http2_switched to fix this
issue.
Bug: https://github.com/bagder/curl/issues/192
Reported-by: Stefan Eissing
"name =value" is fine and the space should just be skipped.
Updated test 31 to also test for this.
Bug: https://github.com/bagder/curl/issues/195
Reported-by: cromestant
Help-by: Frank Gevaerts
(Curl_cyassl_init)
- Return 1 on success, 0 on failure.
Prior to this change the fail path returned an incorrect value and the
evaluation to determine whether CyaSSL_Init had succeeded was incorrect.
Ironically, combined with the way curl_global_init tests SSL library
initialization (!Curl_ssl_init()), that meant a successfully
initialized CyaSSL would still be seen as such even though the code
path and return value in Curl_cyassl_init were wrong.
If the handle removed from the multi handle happens to be the one
"owning" the pipeline other transfers will be waiting indefinitely. Now
we move such handles back to connect to have them race (again) for
getting the connection and thus avoid hanging.
Bug: http://curl.haxx.se/bug/view.cgi?id=1465
Reported-by: Jiri Dvorak
... even if they don't have an associated connection anymore. It could
leave the waiting transfers pending with no active one on the
connection.
Bug: http://curl.haxx.se/bug/view.cgi?id=1465
Reported-by: Jiri Dvorak
(cyassl_connect_step1)
- Use TLS 1.0-1.2 by default when available.
CyaSSL/wolfSSL >= v3.3.0 supports setting a minimum protocol downgrade
version.
cyassl/cyassl@322f79f
This header file must be included after all header files except
memdebug.h, as it does similar memory function redefinitions and can be
similarly affected by conflicting definitions in system or dependent
library headers.
CID 1202732 warns on the previous use, although I cannot find any
problems with it. I'm doing this change only to make the code use a more
familiar approach to accomplish the same thing.
We prematurely changed the protocol handler to HTTP/2, which made
things very slow (and wrong).
Reported-by: Stefan Eissing
Bug: https://github.com/bagder/curl/issues/169
Since we just started make use of free(NULL) in order to simplify code,
this change takes it a step further and:
- converts lots of Curl_safefree() calls to good old free()
- makes Curl_safefree() not check the pointer before free()
The (new) rule of thumb is: if you really want a function call that
frees a pointer and then assigns it to NULL, then use Curl_safefree().
But we will prefer just using free() from now on.
The following functions return immediately if a null pointer was passed.
* Curl_cookie_cleanup
* curl_formfree
Their callers therefore do not need to repeat the corresponding check.
This issue was fixed by using the software Coccinelle 1.0.0-rc24.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
The function "free" is documented in the way that no action shall occur for
a passed null pointer. It is therefore not needed that a function caller
repeats a corresponding check.
http://stackoverflow.com/questions/18775608/free-a-null-pointer-anyway-or-check-first
This issue was fixed by using the software Coccinelle 1.0.0-rc24.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Bug: https://github.com/bagder/curl/pull/168
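The pattern in code, as a minimal sketch; Curl_safefree's real
definition lives in the curl headers, and the macro below only mirrors
its documented behavior:

  #include <stdlib.h>

  /* behaves like the described Curl_safefree: free and NULL the pointer */
  #define SAFEFREE(ptr) do { free(ptr); (ptr) = NULL; } while(0)

  int main(void)
  {
    char *p = NULL;
    if(p)        /* redundant: free(NULL) is a defined no-op */
      free(p);
    free(p);     /* equivalent, and the preferred form */
    SAFEFREE(p); /* only when the pointer must be reset to NULL */
    return 0;
  }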
(trynextip)
- Don't try the "other" protocol family unless IPv6 is available. In an
IPv4-only build the other family can only be IPv6 which is unavailable.
This change essentially stops IPv4-only builds from attempting the
"happy eyeballs" secondary parallel connection that is supposed to be
used by the "other" address family.
Prior to this change in IPv4-only builds that secondary parallel
connection attempt could be erroneously used by the same family (IPv4)
which caused a bug where every address after the first for a host could
be tried twice, often in parallel. This change fixes that bug. An
example of the bug is shown below.
Assume MTEST resolves to 3 addresses 127.0.0.2, 127.0.0.3 and 127.0.0.4:
* STATE: INIT => CONNECT handle 0x64f4b0; line 1046 (connection #-5000)
* Rebuilt URL to: http://MTEST/
* Added connection 0. The cache now contains 1 members
* STATE: CONNECT => WAITRESOLVE handle 0x64f4b0; line 1083
(connection #0)
* Trying 127.0.0.2...
* STATE: WAITRESOLVE => WAITCONNECT handle 0x64f4b0; line 1163
(connection #0)
* Trying 127.0.0.3...
* connect to 127.0.0.2 port 80 failed: Connection refused
* Trying 127.0.0.3...
* connect to 127.0.0.3 port 80 failed: Connection refused
* Trying 127.0.0.4...
* connect to 127.0.0.3 port 80 failed: Connection refused
* Trying 127.0.0.4...
* connect to 127.0.0.4 port 80 failed: Connection refused
* connect to 127.0.0.4 port 80 failed: Connection refused
* Failed to connect to MTEST port 80: Connection refused
* Closing connection 0
* The cache now contains 0 members
* Expire cleared
curl: (7) Failed to connect to MTEST port 80: Connection refused
The bug was born in commit bagder/curl@2d435c7.
In function Curl_closesocket() in connect.c the call to
Curl_multi_closed() was wrongly omitted if a socket close function
(CURLOPT_CLOSESOCKETFUNCTION) is registered.
That would lead to not removing the socket from the internal hash table
and not calling the multi socket callback appropriately.
Bug: http://curl.haxx.se/bug/view.cgi?id=1493
A signal handler for SIGALRM is installed in Curl_resolv_timeout. It is
configured to interrupt system calls and uses siglongjmp to return into
the function if alarm() goes off.
The signal handler is installed before curl_jmpenv is initialized.
This means that an already installed alarm timer could trigger the
newly installed signal handler, leading to undefined behavior when it
accesses the uninitialized curl_jmpenv.
Even if there is no previously installed alarm available, the code in
Curl_resolv_timeout itself installs an alarm before the environment is
fully set up. If the process is sent into suspend right after that, the
signal handler could be called too early as in previous scenario.
To fix this, the signal handler should only be installed and the alarm
timer only be set after sigsetjmp has been called.
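A sketch of the corrected ordering, simplified; the real
Curl_resolv_timeout has more state to manage and uses sigaction:

  #include <setjmp.h>
  #include <signal.h>
  #include <unistd.h>

  static sigjmp_buf curl_jmpenv;

  static void alarmfunc(int sig)
  {
    (void)sig;
    siglongjmp(curl_jmpenv, 1); /* only safe once sigsetjmp has run */
  }

  static int resolve_with_timeout(unsigned int timeout_secs)
  {
    if(sigsetjmp(curl_jmpenv, 1))
      return -1;                  /* the alarm fired: timed out */
    signal(SIGALRM, alarmfunc);   /* install handler after sigsetjmp */
    alarm(timeout_secs);          /* arm the timer last */
    /* ... blocking name resolve goes here ... */
    alarm(0);
    return 0;
  }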
... by using the regular Curl_http_done() method which checks for
that. This makes test 1801 fail consistently with error 56 (which seems
fine) so that test is also updated here.
Reported-by: Ben Darnell
Bug: https://github.com/bagder/curl/issues/166
This makes curl pick better (stronger) ciphers by default. The strongest
available ciphers are fine according to the HTTP/2 spec so an OpenSSL
built curl is no longer rejected by strict HTTP/2 servers.
Bug: http://curl.haxx.se/bug/view.cgi?id=1487
...after the method line:
"Since the Host field-value is critical information for handling a
request, a user agent SHOULD generate Host as the first header field
following the request-line." / RFC 7230 section 5.4
Additionally, this will also make libcurl ignore multiple specified
custom Host: headers and only use the first one. Test 1121 has been
updated accordingly
Bug: http://curl.haxx.se/bug/view.cgi?id=1491
Reported-by: Rainer Canavan
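For example, with this change only the first of these two headers takes
effect; a minimal sketch:

  #include <curl/curl.h>

  static void set_host_header(CURL *curl)
  {
    struct curl_slist *hdrs = NULL;
    hdrs = curl_slist_append(hdrs, "Host: example.com");
    hdrs = curl_slist_append(hdrs, "Host: other.example"); /* now ignored */
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    /* ... perform the transfer, then curl_slist_free_all(hdrs); */
  }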
When checking for a connection to re-use, a proxy-using request must
check for and use a proxy connection and not one based on the host
name!
Added test 1421 to verify
Bug: http://curl.haxx.se/bug/view.cgi?id=1492
Instead of printing the cipher and MAC algorithm names separately,
print the whole cipher suite string, which also includes the key
exchange algorithm, along with the negotiated TLS version.
The code used some happy eyeballs logic even _after_ CONNECT has been
sent to a proxy, while the happy eyeball phase is already (should be)
over by then.
This is solved by splitting the multi state into two separate states
introducing the new SENDPROTOCONNECT state.
Bug: http://curl.haxx.se/mail/lib-2015-01/0170.html
Reported-by: Peter Laser
Since 1342a96ecf, a timeout detected in the multi state machine didn't
necessarily clear everything up, like formpost data.
Bug: https://github.com/bagder/curl/issues/147
Reported-by: Michel Promonet
Patched-by: Michel Promonet
SSLeay was the name of the library that was subsequently turned into
OpenSSL many moons ago (1999). curl has not worked with the old SSLeay
library for years. This is now reflected by only using USE_OPENSSL in
code that depends on OpenSSL.
Previously, we just ignored the error code passed to
on_stream_close_callback and returned 0 (success) after stream closure,
even if the stream was reset with an error. This patch records the
error code in on_stream_close_callback, returns -1 and uses the
CURLE_HTTP2 error code on abnormal stream closure.
The vtls layer now checks the return value, so it is no longer necessary
to abort if a random number cannot be provided by NSS. This also fixes
the following Coverity report:
Error: FORWARD_NULL (CWE-476):
lib/vtls/nss.c:1918: var_compare_op: Comparing "data" to null implies that "data" might be null.
lib/vtls/nss.c:1923: var_deref_model: Passing null pointer "data" to "Curl_failf", which dereferences it.
lib/sendf.c:154:3: deref_parm: Directly dereferencing parameter "data".
obj_count can be 1 if the custom read function is set or the stdin
handle is a reference to a pipe. Since the pipe should be handled
using the PeekNamedPipe-check below, the custom read function should
only be used if it is actually enabled.
According to [1]: "Returning 0 will signal end-of-file to the library
and cause it to stop the current transfer."
This change makes the Windows telnet code handle this case accordingly.
[1] http://curl.haxx.se/libcurl/c/CURLOPT_READFUNCTION.html
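The contract in callback form, as a minimal sketch; the function merely
has the CURLOPT_READFUNCTION shape, it is not the telnet code itself:

  #include <string.h>

  /* Returning 0 signals end-of-file and stops the current transfer,
     which the Windows telnet code now honors. */
  static size_t read_cb(char *buffer, size_t size, size_t nitems,
                        void *userp)
  {
    const char **src = (const char **)userp;
    size_t room = size * nitems;
    size_t len = strlen(*src);
    if(!len)
      return 0;          /* end-of-file: stop the transfer */
    if(len > room)
      len = room;
    memcpy(buffer, *src, len);
    *src += len;
    return len;
  }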
SSL_CTX_load_verify_locations by default (and if given non-NULL
parameters) searches the CAfile first and falls back to CApath. This
allows for CAfile to be a basis (e.g. installed by the package manager)
and CApath to be a user configured directory.
This wasn't reflected by the previous configure constraint which this
patch fixes.
Bug: https://github.com/bagder/curl/pull/139
Correctly check for memcmp() return value (it returns 0 if the strings match).
This is not really important, since curl is going to use http/1.1 anyway, but
it's still a bug I guess.
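A sketch of the corrected comparison; the protocol-ID check is a
hypothetical stand-in for the real negotiation code:

  #include <string.h>

  static int proto_is_h2(const unsigned char *p, size_t len)
  {
    /* memcmp() returns 0 on a match, so test for == 0 */
    return len == 2 && memcmp(p, "h2", 2) == 0;
  }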
For consistency with other conditionally compiled code in openssl.c,
use OPENSSL_IS_BORINGSSL rather than HAVE_BORINGSSL and try to use
HAVE_BORINGSSL outside of openssl.c when the OpenSSL header files are
not included.
Previously we didn't ignore PUSH_PROMISE header fields in the on_header
callback. That made header values get mixed with the following HEADERS,
resulting in a protocol error.
Prior to this change the options for exclusive SSL protocol versions did
not actually set the protocol exclusive.
http://curl.haxx.se/mail/lib-2015-01/0002.html
Reported-by: Dan Fandrich
The struct went private in 1.0.2 so we cannot read the version number
from there anymore. Use SSL_version() instead!
Reported-by: Gisle Vanem
Bug: http://curl.haxx.se/mail/lib-2015-02/0034.html
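A sketch of the API-based approach:

  #include <openssl/ssl.h>

  static int tls_version(const SSL *ssl)
  {
    /* works even with the struct opaque in OpenSSL 1.0.2+ */
    return SSL_version(ssl); /* e.g. TLS1_2_VERSION */
  }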
Modified the Curl_ossl_cert_status_request() function to return FALSE
when built with BoringSSL or when OpenSSL is missing the necessary TLS
extensions.
Commit 7a8b2885e2 made some functions static and removed the public
Curl_ prefix. Unfortunately, it also removed the sasl_ prefix, which
is the naming convention we use in this source file.
curl_sasl.c:1221: error C2065: 'mechtable' : undeclared identifier
This error could also happen for non-SSPI builds when cryptography is
disabled (CURL_DISABLE_CRYPTO_AUTH is defined).
There is an issue with conflicting "struct timeval" definitions with
certain AmigaOS releases and C libraries, depending on what gets
included when. It's a minor difference - the OS one is unsigned,
whereas the common structure has signed elements. If the OS one ends up
getting defined, this causes a timing calculation error in curl.
It's easy enough to resolve this at the curl end, by casting the
potentially erroneous calculation to a signed long.
... of the other cert verification checks so that you can set verifyhost
and verifypeer to FALSE and still check the public key.
Bug: http://curl.haxx.se/bug/view.cgi?id=1471
Reported-by: Kyle J. McKay
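Presumably this concerns the public key pinning option; a sketch of the
now-possible combination, with an example key path:

  #include <curl/curl.h>

  static void pin_only(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L);
    /* the pinned key is still verified despite the two settings above */
    curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY, "/path/to/pubkey.pem");
  }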
Use a dynamically allocated buffer for the temporary SPN variable similar
to how the SASL GSS-API code does, rather than using a fixed buffer of
2048 characters.
Carrying on from commit 037cd0d991, removed the following unimplemented
instances of curlssl_close_all():
Curl_axtls_close_all()
Curl_darwinssl_close_all()
Curl_cyassl_close_all()
Curl_gskit_close_all()
Curl_gtls_close_all()
Curl_nss_close_all()
Curl_polarssl_close_all()
Fixed the following warning and error from commit 3af90a6e19 when SSL
is not being used:
url.c:2004: warning C4013: 'Curl_ssl_cert_status_request' undefined;
assuming extern returning int
error LNK2019: unresolved external symbol Curl_ssl_cert_status_request
referenced in function Curl_setopt
Use the SECURITY_STATUS typedef rather than an unsigned long for the
QuerySecurityPackageInfo() return and rename the variable as per other
areas of SSPI code.
Also known as "status_request" or OCSP stapling, defined in RFC6066 section 8.
This requires GnuTLS 3.1.3 or higher to build, however it's recommended to use
at least GnuTLS 3.3.11 since previous versions had a bug that caused the OCSP
response verification to fail even on valid responses.
This option can be used to enable/disable certificate status verification using
the "Certificate Status Request" TLS extension defined in RFC6066 section 8.
This also adds the CURLE_SSL_INVALIDCERTSTATUS error, to be used when the
certificate status verification fails, and the Curl_ssl_cert_status_request()
function, used to check whether the SSL backend supports the status_request
extension.
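In application terms, assuming the option described is the one libcurl
ships as CURLOPT_SSL_VERIFYSTATUS:

  #include <curl/curl.h>

  static void require_stapled_ocsp(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYSTATUS, 1L);
    /* a failed check comes back as CURLE_SSL_INVALIDCERTSTATUS */
  }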
If the session is still used by active SSL/TLS connections, it
cannot be closed yet. Thus we mark the session as not being cached
any longer so that the reference counting mechanism in
Curl_schannel_shutdown is used to close and free the session.
Reported-by: Jean-Francois Durand
... and make sure we can connect the data connection to a host name that
is longer than 48 bytes.
Also simplifies the code somewhat by re-using the original host name
more, as it is likely still in the DNS cache.
Original-Patch-by: Vojtěch Král
Bug: http://curl.haxx.se/bug/view.cgi?id=1468
...to avoid a session ID getting cached without certificate checking and
then after a subsequent _enabling_ of the check libcurl could still
re-use the session done without cert checks.
Bug: http://curl.haxx.se/docs/adv_20150108A.html
Reported-by: Marc Hesse
As we get the length for the DN and attribute variables, and we know
the length for the line terminator, pass the length values rather than
zero as this will save Curl_client_write() from having to perform an
additional strlen() call.
curl_ntlm_core.c:146: warning: passing 'DES_cblock' (aka 'unsigned char
[8]') to parameter of type 'char *' converts
between pointers to integer types with different
sign
Rather than duplicate the code in setup_des_key() for OpenSSL and in
extend_key_56_to_64() for non-OpenSSL based crypto engines, as it is
the same, use extend_key_56_to_64() for all engines.
smb.c:780: warning: passing 'char *' to parameter of type 'unsigned
char *' converts between pointers to integer types with
different sign
smb.c:781: warning: passing 'char *' to parameter of type 'unsigned
char *' converts between pointers to integer types with
different sign
smb.c:804: warning: passing 'char *' to parameter of type 'unsigned
char *' converts between pointers to integer types with
different sign
Prefer void for unused parameters, rather than assigning an argument to
itself as a) unintelligent compilers won't optimize it out, b) it can't
be used for const parameters, c) it will cause compilation warnings for
clang with -Wself-assign and d) is inconsistent with other areas of the
curl source code.
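The preferred pattern, for reference; the callback is a hypothetical
example:

  /* cast to void: optimizes away, works for const parameters and
     keeps clang's -Wself-assign quiet */
  static int sample_cb(const void *data, int unused)
  {
    (void)unused;            /* instead of: unused = unused; */
    return data != NULL;
  }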
Moved our Initialize Security Context return attribute definitions to
the SSPI module, as a) these can be used by other SSPI based providers
and b) the ISC required attributes are defined there.
curl_schannel.h:123: warning: right-hand operand of comma expression
has no effect
Some instances of the curlssl_close_all() function were declared with a
void return type whilst others as int. The schannel version returned
CURLE_NOT_BUILT_IN and others simply returned zero, but in all cases the
return code was ignored by the calling function Curl_ssl_close_all().
For the time being and to keep the internal API consistent, changed all
declarations to use a void return type.
To reduce code we might want to consider removing the unimplemented
versions and use a void #define like schannel does.
For consistency, as we seem to have a bit of a mixed bag, changed all
instances of ipv4 and ipv6 in comments and documentation to use the
correct case.
Otherwise Curl_ssl_init_certinfo() can fail and set the num_of_certs
member variable to the requested count, which could then be used
incorrectly as libcurl closes down.
The return type for this function was 0 on success and 1 on error. This
was then examined by the calling functions and, in most cases, used to
return CURLE_OUT_OF_MEMORY.
Instead use CURLcode for the return type and return the out of memory
error directly, propagating it up the call stack.
The return type of this function is a boolean value, and even uses a
bool internally, so use bool in the function declaration as well as
the variables that store the return value, to avoid any confusion.
curl_ntlm_core.c:301: warning: pointer targets in passing argument 2 of
'CryptImportKey' differ in signedness
curl_ntlm_core.c:310: warning: passing argument 6 of 'CryptEncrypt' from
incompatible pointer type
curl_ntlm_core.c:540: warning: passing argument 4 of 'CryptGetHashParam'
from incompatible pointer type
... as it never copies the trailing zero anyway and always just the four
bytes so let's not mislead anyone into thinking it is actually treated
as a string.
Coverity CID: 1260214
lib/setup-vms.h : VAX HP OpenSSL port is ancient, needs help.
More defines to set symbols to uppercase.
src/tool_main.c : Fix parameter to vms_special_exit() call.
packages/vms/ :
backup_gnv_curl_src.com : Fix the error message to have the correct package.
build_curl-config_script.com : Rewrite to be more accurate.
build_libcurl_pc.com : Use tool_version.h now.
build_vms.com : Fix to handle lib/vtls directory.
curl_gnv_build_steps.txt : Updated build procedure documentation.
generate_config_vms_h_curl.com :
* VAX does not support 64 bit ints, so no NTLM support for now.
* VAX HP SSL port is ancient, needs some help.
* Disable NGHTTP2 for now, not ported to VMS.
* Disable UNIX_SOCKETS, not available on VMS yet.
* HP GSSAPI port does not have gss_nt_service_name.
gnv_link_curl.com : Update for new curl structure.
pcsi_product_gnv_curl.com : Set up to optionally do a complete build.
Removed 'next' variable in Curl_convert_form(). Rather than setting it
from 'form->next' and using that to set 'form' after the conversion
just use 'form = form->next' instead.
There was a confusion between these: this commit tries to disambiguate them.
- Scope can be computed from the address itself.
- Scope id is scope dependent: it is currently defined as 1-based local
interface index for link-local scoped addresses, and as a site index(?) for
(obsolete) site-local addresses. Linux only supports it for link-local
addresses.
The URL parser properly parses a scope id as an interface index, but
stores it in a field named "scope": confusing. The field has been
renamed to "scope_id".
Curl_if2ip() used the scope id as if it were a scope. This caused
failures to bind to an interface.
Scope is now computed from the addresses and Curl_if2ip() matches them.
If redundantly specified in the URL, the scope id is checked for a
mismatch with the interface index.
This commit should fix SF bug #1451.
- do not grow memory by doubling its size
- do not leak previously allocated memory if reallocation fails
- replace while-loop with a single check to make sure
that the requested amount of data fits into the buffer
Bug: http://curl.haxx.se/bug/view.cgi?id=1450
Reported-by: Warren Menzer
There is no need to set the 'state' and 'result' member variables to
SMB_REQUESTING (0) and CURLE_OK (0) after the allocation via calloc()
as calloc() initialises the contents to zero.
I don't think both of my fix ups from yesterday were needed to fix the
compilation warning, so remove the one that I think is unnecessary and
let the next Android autobuild prove/disprove it.
smtp.c:2357 warning: adding 'size_t' (aka 'unsigned long') to a string
does not append to the string
smtp.c:2375 warning: adding 'size_t' (aka 'unsigned long') to a string
does not append to the string
smtp.c:2386 warning: adding 'size_t' (aka 'unsigned long') to a string
does not append to the string
Used array index notation instead.
This fixes compilation issues with compilers that don't support 64-bit
integers through long long or __int64 which was introduced in commit
07b66cbfa4.
Previously USE_NTLM2SESSION would only be defined automatically when
USE_NTRESPONSES wasn't already defined. Separated the two definitions
so that the user can manually set USE_NTRESPONSES themselves but
USE_NTLM2SESSION is defined automatically if they don't define it.
As the OpenSSL and NSS crypto engines are preferred over the Windows
Crypt API by the core NTLM routines, don't define USE_WIN32_CRYPT
automatically when either OpenSSL or NSS is in use - doing so would
disable NTLM2Session responses in NTLM type-3 messages.
If the scratch buffer was allocated in a previous call to
Curl_smtp_escape_eob(), a new buffer was not allocated in the
subsequent call and no action was taken by that call, then an attempt
would be made to free the buffer which, by now, would be part of the
data->state structure.
This bug was introduced in commit 4bd860a001.
Fixed a problem with the CRLF. detection when multiple buffers were
used to upload an email to libcurl and the line ending character(s)
appeared at the end of each buffer. This meant any lines which started
with . would not be escaped into .. and could be interpreted as the end
of transmission string instead.
This only affected libcurl based applications that used a read function
and wasn't reproducible with the curl command-line tool.
Bug: http://curl.haxx.se/bug/view.cgi?id=1456
Assisted-by: Patrick Monnerat
parsedate.c:548: warning: 'parsed' may be used uninitialized in this
function
As curl_getdate() returns -1 when parsedate() fails we can initialise
parsed to -1.
This fixes the test 506 torture test. The internal cookie API really
ought to be improved to separate cookie parsing errors (which may be
ignored) from OOM errors (which should be fatal).
As Windows based autoconf builds don't yet define USE_WIN32_CRYPTO
either explicitly through --enable-win32-crypto or automatically on
_WIN32 based platforms, subsequent builds broke with the following
error message:
"Can't compile NTLM support without a crypto library."
Fixed an issue with the message size calculation where the raw bytes
from the buffer were interpreted as signed values rather than unsigned
values.
Reported-by: Gisle Vanem
Assisted-by: Bill Nagel
Don't use a hard coded size of 4 for the security layer and buffer size
in Curl_sasl_create_gssapi_security_message(), instead, use sizeof() as
we have done in the sasl_gssapi module.
Reduced the number of free() calls required for the decoded challenge
message in Curl_sasl_create_gssapi_security_message() as a result of
coding it differently in the sasl_gssapi module.
Sending NTLM/Negotiate header again after successful authentication
breaks the connection with certain Proxies and request types (POST to MS
Forefront).
The ability to do HTTP requests over a UNIX domain socket has been
requested before, in Apr 2008 [0][1] and Sep 2010 [2]. While a
discussion happened, no patch seems to get through. I decided to give it
a go since I need to test a nginx HTTP server which listens on a UNIX
domain socket.
One patch [3] seems to make it possible to use the
CURLOPT_OPENSOCKETFUNCTION function to gain a UNIX domain socket.
Another person wrote a Go program which can do HTTP over a UNIX socket
for Docker[4] which uses a special URL scheme (though the name contains
cURL, it has no relation to the cURL library).
This patch considers support for UNIX domain sockets at the same level
as HTTP proxies / IPv6, it acts as an intermediate socket provider and
not as a separate protocol. Since this feature affects network
operations, a new feature flag was added ("unix-sockets") with a
corresponding CURL_VERSION_UNIX_SOCKETS macro.
A new CURLOPT_UNIX_SOCKET_PATH option is added and documented. This
option enables UNIX domain sockets support for all requests on the
handle (replacing IP sockets and skipping proxies).
A new configure option (--enable-unix-sockets) and CMake option
(ENABLE_UNIX_SOCKETS) can disable this optional feature. Note that I
deliberately did not mark this feature as advanced, this is a
feature/component that should easily be available.
[0]: http://curl.haxx.se/mail/lib-2008-04/0279.html
[1]: http://daniel.haxx.se/blog/2008/04/14/http-over-unix-domain-sockets/
[2]: http://sourceforge.net/p/curl/feature-requests/53/
[3]: http://curl.haxx.se/mail/lib-2008-04/0361.html
[4]: https://github.com/Soulou/curl-unix-socket
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
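Typical use of the new option; the URL and socket path are examples:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_UNIX_SOCKET_PATH,
                       "/var/run/docker.sock");
      /* the host part of the URL is only used for the Host: header,
         not for connecting */
      curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/info");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }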
On some platforms curl would crash if no credentials were used. As
such, detection of that use case was added to prevent this from
happening.
Reported-by: Gisle Vanem
This patch prepares for adding UNIX domain sockets support.
TCP_NODELAY and TCP_KEEPALIVE are specific to TCP/IP sockets, so do not
apply these to other socket types. bindlocal only works for IP sockets
(independent of TCP/UDP), so filter that out too for other types.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
smb.c:398: warning: comparison of integers of different signs:
'ssize_t' (aka 'long') and 'unsigned long'
smb.c:443: warning: comparison of integers of different signs:
'ssize_t' (aka 'long') and 'unsigned long'
smb.c:322: warning: conversion to 'short unsigned int' from 'unsigned
int' may alter its value
smb.c:323: warning: conversion to 'short unsigned int' from 'unsigned
int' may alter its value
smb.c:482: warning: conversion to 'short unsigned int' from 'int' may
alter its value
smb.c:521: warning: conversion to 'unsigned int' from 'curl_off_t' may
alter its value
smb.c:549: warning: conversion to 'unsigned int' from 'curl_off_t' may
alter its value
smb.c:550: warning: conversion to 'short unsigned int' from 'int' may
alter its value
smb.c:489: warning: declaration of 'close' shadows a global declaration
smb.c:511: warning: declaration of 'read' shadows a global declaration
smb.c:528: warning: declaration of 'write' shadows a global declaration
smb.c:212: warning: unused parameter 'done'
smb.c:380: warning: ISO C does not allow extra ';' outside of a function
smb.c:812: warning: unused parameter 'premature'
smb.c:822: warning: unused parameter 'dead'
smb.c:311: warning: conversion from 'unsigned __int64' to 'u_short',
possible loss of data
smb.c:425: warning: conversion from '__int64' to 'unsigned short',
possible loss of data
smb.c:452: warning: conversion from '__int64' to 'unsigned short',
possible loss of data
smb.c:162: error: comma at end of enumerator list
smb.c:469: warning: conversion from 'size_t' to 'unsigned short',
possible loss of data
smb.c:517: warning: conversion from 'curl_off_t' to 'unsigned int',
possible loss of data
smb.c:545: warning: conversion from 'curl_off_t' to 'unsigned int',
possible loss of data
If the scratch buffer already existed when the CRLF conversion was
performed then the buffer pointer would be checked twice for NULL. The
second check is only necessary when the first check actually performed
the malloc().
Whilst I had moved the dot stuffing code from being performed before
CRLF conversion takes place to after it, in commit 4bd860a001, I had
moved it outside the 'when something read' block of code, which meant
it could perform the dot stuffing twice on a partial send if nread
happened to contain the right values. It also meant the function could
potentially read past the end of buffer. This was highlighted by the
following warning:
warning: `nread' might be used uninitialized in this function
After commit 48d19acb7c the HTTP code would call Curl_nss_force_init()
twice when decoding a NTLM type-2 message, once directly and the other
through the call to Curl_sasl_decode_ntlm_type2_message().
This commit disables pipelining for HTTP/2 or upgraded connections. For
HTTP/2, we do not support multiplexing. In general, requests cannot be
pipelined in an upgraded connection, since it is now a different
protocol.
When the connection code decides to close a socket it informs the multi
system via the Curl_multi_closed function. The multi system may, in
turn, invoke the CURLMOPT_SOCKETFUNCTION function with
CURL_POLL_REMOVE. This happens after the socket has already been
closed. Reorder the code so that CURL_POLL_REMOVE is called before the
socket is closed.
Debug output 'typo' fix.
Don't print an extra "0x" in
* Pipe broke: handle 0x0x2546d88, url = /
Add debug output.
Print the number of connections in the connection cache when
adding one, and not only when one is removed.
Fix typos in comments.