Prior to this change it was possible for libcurl to be built with both
Windows' native IDN lib (normaliz) and libidn2 enabled. It appears that
doesn't offer any benefit, and could cause a bug, since libcurl's IDN
handling is written to use either one but not both.
Bug: https://github.com/curl/curl/issues/1441#issuecomment-297689856
Reported-by: Gisle Vanem
This fixes the following clang warnings:
http2.c:184:27: error: no previous extern declaration for non-static
variable 'Curl_handler_http2' [-Werror,-Wmissing-variable-declarations]
http2.c:204:27: error: no previous extern declaration for non-static
variable 'Curl_handler_http2_ssl'
[-Werror,-Wmissing-variable-declarations]
clang complains:
curl_rtmp.c:61:27: error: no previous extern declaration for non-static variable 'Curl_handler_rtmp' [-Werror,-Wmissing-variable-declarations]
curl_rtmp.c:81:27: error: no previous extern declaration for non-static variable 'Curl_handler_rtmpt' [-Werror,-Wmissing-variable-declarations]
curl_rtmp.c:101:27: error: no previous extern declaration for non-static variable 'Curl_handler_rtmpe' [-Werror,-Wmissing-variable-declarations]
curl_rtmp.c:121:27: error: no previous extern declaration for non-static variable 'Curl_handler_rtmpte' [-Werror,-Wmissing-variable-declarations]
curl_rtmp.c:141:27: error: no previous extern declaration for non-static variable 'Curl_handler_rtmps' [-Werror,-Wmissing-variable-declarations]
curl_rtmp.c:161:27: error: no previous extern declaration for non-static variable 'Curl_handler_rtmpts' [-Werror,-Wmissing-variable-declarations]
Fix this by including the header file.
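For illustration, a minimal sketch of why including the header silences the
warning (the identifier and type below are made up, not curl's actual handler
layout):

    /* demo.c: clang -Wmissing-variable-declarations warns about a
       non-static definition that has no prior extern declaration,
       which is normally provided by the protocol's header file. */
    extern const int Curl_handler_demo;   /* what the included header supplies */
    const int Curl_handler_demo = 42;     /* definition, no longer warned about */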
This fixes the following clang warnings:
macro is not used [-Wunused-macros]
will never be executed [-Wunreachable-code]
Closes https://github.com/curl/curl/pull/1448
get_protocol_family() is not defined static even though there is a
static local forward declaration. Let's simply make the definition match
its declaration.
Bug: https://curl.haxx.se/mail/lib-2017-04/0127.html
The module contains a more comprehensive set of trust information than
supported by nss-pem, because libnssckbi.so also includes information
about distrusted certificates.
Reviewed-by: Kai Engert
Closes #1414
The data->req.uploadbuf struct member served no good purpose; instead we
use ->state.uploadbuffer directly. That makes it clearer in the code
which buffer is being used.
Removed the 'SingleRequest *' argument from the readwrite_upload() proto
as it can be derived from the Curl_easy struct. Also made the code in
the readwrite_upload() function use the 'k->' shortcut for all references
to struct fields in 'data->req', which previously used a mix of both.
- Track when the cached encrypted data contains only a partial record
that can't be decrypted without more data (SEC_E_INCOMPLETE_MESSAGE).
- Change Curl_schannel_data_pending to return false in such a case.
Other SSL libraries have pending data functions that behave similarly.
Ref: https://github.com/curl/curl/pull/1387
Closes https://github.com/curl/curl/pull/1392
`if(nfds || extra_nfds) {` is followed by `malloc(nfds * ...)`.
If `extra_nfds` could be non-zero when `nfds` was zero, then we have
`malloc(0)` which is allowed to return `NULL`. But, malloc returning
NULL can be confusing. In this code, the next line would treat the NULL
as an allocation failure.
It turns out, if `nfds` is zero then `extra_nfds` must also be zero.
The final value of `nfds` includes `extra_nfds`. So the test for
`extra_nfds` is redundant. It can only confuse the reader.
Closes #1439
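A hypothetical sketch of the simplified guard, with names mirroring the text
above rather than curl's actual code:

    #include <stdlib.h>
    #include <poll.h>

    static struct pollfd *alloc_ufds(unsigned int nfds)
    {
      /* only allocate when there is something to poll; this avoids
         malloc(0), whose NULL return would otherwise look like an
         allocation failure to the caller */
      if(nfds)
        return malloc(nfds * sizeof(struct pollfd));
      return NULL;
    }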
With -Og, GCC complains:
easy.c:628:7: error: ‘mcode’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
../lib/strcase.h:35:29: error: ‘tok_buf’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
vauth/digest.c:208:9: note: ‘tok_buf’ was declared here
../lib/strcase.h:35:29: error: ‘tok_buf’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
vauth/digest.c:566:15: note: ‘tok_buf’ was declared here
Fix this by initializing the variables.
The 'list element' struct now has to be within the data that is being
added to the list. This removes 16.6% of the (tiny) mallocs from a simple
HTTP transfer (96 => 80).
Also removed return codes since the llist functions can't fail now.
Test 1300 updated accordingly.
Closes #1435
In that case, use libcurl's internal MD4 routine. This fixes tests 1013
and 1014 which were failing due to configure assuming NTLM and SMB were
always available whenever mbed TLS was in use (which is now true).
This fixes 3 warnings issued by MinGW:
1. PR_ImportTCPSocket actually has a parameter of type PROsfd instead of
PRInt32, which is 64 bits on Windows. Fixed this by including the
corresponding header file instead of redeclaring the function, which is
supported even though it is in the private include folder. [1]
2. In 64-bit mode, size_t is 64 bits while CK_ULONG is 32 bits, so an explicit
narrowing cast is needed.
3. Curl_timeleft returns time_t instead of long since commit
21aa32d30d.
[1] https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/PR_ImportTCPSocket
Closes https://github.com/curl/curl/pull/1393
ERR_error_string with NULL parameter is not thread-safe. The library
writes the string into some static buffer. Two threads doing this at
once may clobber each other and run into problems. Switch to
ERR_error_string_n which avoids this problem and is explicitly
bounds-checked.
Also clean up some remnants of OpenSSL 0.9.5 around here. A number of
comments (fixed buffer size, explaining that ERR_error_string_n was
added in a particular version) date to when ossl_strerror tried to
support pre-ERR_error_string_n OpenSSLs.
Closes #1424
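A minimal sketch of the thread-safe call described above, writing into a
caller-provided buffer:

    #include <stdio.h>
    #include <openssl/err.h>

    static void log_ossl_error(void)
    {
      char buf[256];                            /* fixed-size, caller-owned */
      unsigned long e = ERR_get_error();
      ERR_error_string_n(e, buf, sizeof(buf));  /* bounds-checked, thread-safe */
      fprintf(stderr, "OpenSSL: %s\n", buf);
    }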
ssl_session_init was only introduced in version 1.3.8, the penultimate
version. The function only contains a memset, so replace it with that.
Suggested-by: Jay Satiro
Fixes https://github.com/curl/curl/issues/1401
The POSIX standard location is <poll.h>. Using <sys/poll.h> results in
warning spam when using the musl standard library.
Closes https://github.com/curl/curl/pull/1406
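A sketch of the kind of conditional include this implies; the HAVE_* guards
are assumed configure-style names, not necessarily curl's exact ones:

    #ifdef HAVE_POLL_H
    #include <poll.h>        /* POSIX standard location */
    #elif defined(HAVE_SYS_POLL_H)
    #include <sys/poll.h>    /* legacy location, warns with musl */
    #endif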
... because they may include an intermediate certificate for a client
certificate and the intermediate certificate needs to be presented to
the server, no matter if we verify the peer or not.
Reported-by: thraidh
Closes #851
When UNICODE is not defined, the Curl_convert_UTF8_to_tchar macro maps
directly to its argument. As it is declared as a pointer to const and
InitializeSecurityContext expects a pointer to non-const, both MSVC and MinGW
issue a warning about implicitly casting away the const. Fix this by declaring
the variables as pointers to non-const.
Closes https://github.com/curl/curl/pull/1394
Previously, periods of fast speed between periods of slow speed would
not count and could still erroneously trigger a timeout.
Reported-by: Paul Harris
Fixes #1345
Closes #1390
Multi handles repeatedly invert the queue of pending easy handles when
used with CURLMOPT_MAX_TOTAL_CONNECTIONS. This is caused by a multistep
process involving Curl_splaygetbest and violates the FIFO property of
the multi handle.
This patch fixes this issue by redefining the "best" node in the
context of timeouts as the "smallest not larger than now", and
implementing the necessary data structure modifications to do this
effectively, namely:
- splay nodes with the same key are now stored in a doubly-linked
circular list instead of a non-circular one to enable O(1)
insertion to the tail of the list
- Curl_splayinsert inserts nodes with the same key to the tail of
the same list
- in case of multiple nodes with the same key, the one on the head of
the list gets selected
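For illustration, a minimal, hypothetical sketch of the O(1) tail insertion
into a circular doubly-linked list described above (not curl's actual splay
node layout):

    struct node {
      struct node *prev;
      struct node *next;
    };

    /* insert 'n' at the tail of the circular list whose head is 'head' */
    static void list_insert_tail(struct node *head, struct node *n)
    {
      n->prev = head->prev;     /* old tail */
      n->next = head;
      head->prev->next = n;
      head->prev = n;           /* 'n' becomes the new tail */
    }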
- Don't free postponed data on a connection that will be reused since
doing so can cause data loss when pipelining.
Only Windows builds are affected by this.
Closes https://github.com/curl/curl/issues/1380
It is safe to silence this warning when adding the poll time delta, which
can trigger on Windows since sizeof(time_t) > sizeof(long):
warning C4244: '+=' : conversion from 'time_t' to 'long', possible loss
of data
system.h is aimed to replace curlbuild.h at a later point in time when
we feel confident system.h works sufficiently well.
curl/system.h is currently used in parallel with curl/curlbuild.h.
curl/system.h determines data sizes, data types and include file
status based on available preprocessor defines instead of getting
generated at build time. This is done to avoid relying on a build-time
generated file that makes it complicated to do 32 and 64 bit builds from
the same installed set of headers.
Test 1541 verifies that system.h reaches the same conclusions that
curlbuild.h offers.
Closes #1373
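As a rough illustration of the preprocessor-based approach, a simplified
sketch (the checks below are assumptions, not the actual contents of
curl/system.h):

    /* pick curl_off_t purely from predefined compiler/platform macros */
    #if defined(_MSC_VER)
    #  define CURL_TYPEOF_CURL_OFF_T __int64
    #elif defined(__LP64__) || defined(_LP64)
    #  define CURL_TYPEOF_CURL_OFF_T long
    #else
    #  define CURL_TYPEOF_CURL_OFF_T long long
    #endif
    typedef CURL_TYPEOF_CURL_OFF_T curl_off_t;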
telnet.c(1427,21): warning: comparison of constant 268435456 with
expression of type 'CURLcode' is always false
telnet.c(1433,21): warning: comparison of constant 268435457 with
expression of type 'CURLcode' is always false
Reviewed-by: Jay Satiro
Reported-by: Gisle Vanem
Bug: https://github.com/curl/curl/issues/1225#issuecomment-290340890
Closes #1374
'left' is used as time_t but declared as long.
MinGW complains:
error: conversion to 'long int' from 'time_t {aka long long int}' may alter
its value [-Werror=conversion]
Changed the declaration to time_t.
At least under Windows, there is no SIZEOF_LONG, so it evaluates to 0 even
though sizeof(int) == sizeof(long). This should probably have been
CURL_SIZEOF_LONG, but the type of timeout_ms changed from long to time_t
anyway.
This triggered MSVC warning C4668 about implicitly replacing undefined
macros with '0'.
Closes https://github.com/curl/curl/pull/1362
If we use FTPS over CONNECT, the TLS handshake for the FTPS control
connection needs to be initiated in the SENDPROTOCONNECT state, not
the WAITPROXYCONNECT state. Otherwise, if the TLS handshake completed
without blocking, the information about the completed TLS handshake
would be saved to a wrong flag. Consequently, the TLS handshake would
be initiated in the SENDPROTOCONNECT state once again on the same
connection, resulting in a failure of the TLS handshake. I was able to
observe the failure with the NSS backend if curl ran through valgrind.
Note that this commit partially reverts curl-7_21_6-52-ge34131d.
When receiving chunked encoded data with trailers, and the write
callback returns PAUSE, there might be both body and header to store to
resend on unpause. Previously libcurl returned an error for that case.
Added test case 1540 to verify.
Reported-by: Stephen Toub
Fixes #1354
Closes #1357
When using basic-auth, connections and proxy connections
can be re-used with different Authorization headers since
it does not authenticate the connection (like NTLM does).
For instance, the below command should re-use the proxy
connection, but it currently doesn't:
curl -v -U alice:a -x http://localhost:8181 http://localhost/
--next -U bob:b -x http://localhost:8181 http://localhost/
This is a regression since refactoring of ConnectionExists()
as part of: cb4e2be7c6
Fix the above by removing the username and password compare
when re-using proxy connection at proxy_info_matches().
However, this fix brings back another bug that would make curl
re-send the old proxy-authorization header of a previous
proxy basic-auth connection because it wasn't cleared.
For instance, in the below command the second request should
fail if the proxy requires authentication, but would succeed
after the above fix (and before aforementioned commit):
curl -v -U alice:a -x http://localhost:8181 http://localhost/
--next -x http://localhost:8181 http://localhost/
Fix this by clearing conn->allocptr.proxyuserpwd after use
unconditionally, same as we do for conn->allocptr.userpwd.
Also fix test 540 to not expect digest auth header to be
resent when connection is reused.
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Closes https://github.com/curl/curl/pull/1350
- If SSL_get_error is called but no extended error detail is available
then show the SSL_ERROR_* code as a string.
Prior to this change there was some inconsistency in that case: the
SSL_ERROR_* code may or may not have been shown, or may have been shown
as unknown even if it was known.
Ref: https://github.com/curl/curl/issues/1300
Closes https://github.com/curl/curl/pull/1348
- Add new option CURLOPT_SUPPRESS_CONNECT_HEADERS to allow suppressing
proxy CONNECT response headers from the user callback functions
CURLOPT_HEADERFUNCTION and CURLOPT_WRITEFUNCTION.
- Add new tool option --suppress-connect-headers to expose
CURLOPT_SUPPRESS_CONNECT_HEADERS and allow suppressing proxy CONNECT
response headers from --dump-header and --include.
Assisted-by: Jay Satiro
Assisted-by: CarloCannas@users.noreply.github.com
Closes https://github.com/curl/curl/pull/783
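A minimal usage sketch of the new libcurl option (the proxy address is a
placeholder):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example:3128");
        curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);   /* use CONNECT */
        /* keep the CONNECT response headers out of the header/write callbacks */
        curl_easy_setopt(curl, CURLOPT_SUPPRESS_CONNECT_HEADERS, 1L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }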
A client MUST ignore any Content-Length or Transfer-Encoding header
fields received in a successful response to CONNECT.
"Successful" described as: 2xx (Successful). RFC 7231 4.3.6
Prior to this change such a case would cause an error.
In some ways this bug appears to be a regression since c50b878. Prior to
that libcurl may have appeared to function correctly in such cases by
acting on those headers instead of causing an error. But that behavior
was also incorrect.
Bug: https://github.com/curl/curl/issues/1317
Reported-by: mkzero@users.noreply.github.com
This flag is meant for the current request based on authentication
state; once the request is done we can clear the flag.
Also change auth.multi to auth.multipass for better readability.
Fixes https://github.com/curl/curl/issues/1095
Closes https://github.com/curl/curl/pull/1326
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Reported-by: Michael Kaufmann
This commit introduces the CURL_SSLVERSION_MAX_* constants as well as
the --tls-max option of the curl tool.
Closes https://github.com/curl/curl/pull/1166
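A minimal sketch of combining a minimum-version constant with one of the new
maximum-version constants, per the libcurl documentation of this change:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* require at least TLS 1.1 but never negotiate above TLS 1.2 */
        curl_easy_setopt(curl, CURLOPT_SSLVERSION,
                         (long)(CURL_SSLVERSION_TLSv1_1 |
                                CURL_SSLVERSION_MAX_TLSv1_2));
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }

The tool equivalent would be something like: curl --tls-max 1.2
https://example.com/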
This fixes an assertion error which occurs when a redirect is done with
a 0-length body via HTTP/2, the easy handle is reused, and a new
connection is established due to a hostname change:
curl: http2.c:1572: ssize_t http2_recv(struct connectdata *,
int, char *, size_t, CURLcode *):
Assertion `httpc->drain_total >= data->state.drain' failed.
To fix this bug, ensure that http2_handle_stream is called.
Fixes #1286
Closes #1302
... because it causes confusion with users. Example URLs:
"http://[127.0.0.1]:11211:80" which a lot of languages' URL parsers will
parse and claim uses port number 80, while libcurl would use port number
11211.
"http://user@example.com:80@localhost" which by the WHATWG URL spec will
be treated to contain user name 'user@example.com' but according to
RFC3986 is user name 'user' for the host 'example.com' and then port 80
is followed by "@localhost"
Both these formats are now rejected, and verified so in test 1260.
Reported-by: Orange Tsai
Mark intended fallthroughs with /* FALLTHROUGH */ so that gcc will know
it's expected and won't warn on [-Wimplicit-fallthrough=].
Closes https://github.com/curl/curl/pull/1297
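For reference, a small sketch of the comment style that gcc's
-Wimplicit-fallthrough recognizes (the function itself is just an
illustration):

    static int classify(int c)
    {
      int score = 0;
      switch(c) {
      case 'a':
        score++;
        /* FALLTHROUGH */
      case 'b':
        score++;
        break;
      default:
        break;
      }
      return score;
    }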
In DarwinSSL the SSLSetPeerDomainName function is used to enable both
sending SNI and verifying the host. When host verification is disabled
the function cannot be called, therefore SNI is disabled as well.
Closes https://github.com/curl/curl/pull/1240
If size_t is 32 bits, MSVC warns:
warning C4310: cast truncates constant value
The warning is harmless as CURL_MASK_SCOFFT gets
truncated to the maximum value of size_t.
If the compile-time CURL_CA_BUNDLE location is defined use it as the
default value for the proxy CA bundle location, which is the same as
what we already do for the regular CA bundle location.
Ref: https://github.com/curl/curl/pull/1257
- Change CURLOPT_PROXY_CAPATH to return CURLE_NOT_BUILT_IN if the option
is not supported, which is the same as what we already do for
CURLOPT_CAPATH.
- Change the curl tool to handle CURLOPT_PROXY_CAPATH error
CURLE_NOT_BUILT_IN as a warning instead of as an error, which is the
same as what we already do for CURLOPT_CAPATH.
- Fix CAPATH docs to show that CURLE_NOT_BUILT_IN is returned when the
respective CAPATH option is not supported by the SSL library.
Ref: https://github.com/curl/curl/pull/1257
The CURLOPT_SSL_VERIFYSTATUS option was not properly handled by libcurl
and thus even if the status couldn't be verified, the connection would
be allowed and the user would not be told about the failed verification.
Regression since cb4e2be7c6
CVE-2017-2629
Bug: https://curl.haxx.se/docs/adv_20170222.html
Reported-by: Marcus Hoffmann
- on the first invocation: keep security context returned by
InitializeSecurityContext()
- on subsequent invocations: use MakeSignature() instead of
InitializeSecurityContext() to generate HTTP digest response
Bug: https://github.com/curl/curl/issues/870
Reported-by: Andreas Roth
Closes https://github.com/curl/curl/pull/1251
Properly resolve, convert and log the proxy host names.
Support the "--connect-to" feature for SOCKS proxies and for passive FTP
data transfers.
Follow-up to cb4e2be
Reported-by: Jay Satiro
Fixes https://github.com/curl/curl/issues/1248
- While negotiating auth during PUT/POST, if a user-specified
Content-Length header is set, send 'Content-Length: 0'.
This is what we do already in HTTPREQ_POST_FORM and what we did in the
HTTPREQ_POST case (regression since afd288b).
Prior to this change no Content-Length header would be sent in such a
case.
Bug: https://curl.haxx.se/mail/lib-2017-02/0006.html
Reported-by: Dominik Hölzl
Closes https://github.com/curl/curl/pull/1242
Builds with axTLS 2.1.2. This then also breaks compatibility with axTLS
< 2.1.0 (the older API)
... and fix the session_id mixup brought in 04b4ee549.
Fixes #1220
If the NSS code was in the middle of a non-blocking handshake and it
was asked to finish the handshake in blocking mode, it unexpectedly
continued in the non-blocking mode, which caused a FTPS connection
over CONNECT to fail with "(81) Socket not ready for send/recv".
Bug: https://bugzilla.redhat.com/1420327
When removing an easy handle from a multi before it completed its
transfer, and it had pushed streams, it would segfault due to the pushed
counter not being cleared.
Fixed-by: zelinchen@users.noreply.github.com
Fixes #1249
Using sftp to delete a file with CURLOPT_NOBODY set with a reused
connection would fail as curl expected to get some data. Thus it would
retry the command again which fails as the file has already been
deleted.
Fixes #1243
The information extracted from the server certificates in step 3 is only
used when in verbose mode, and there is no error handling or validation
performed as that has already been done. Only run the certificate
information extraction when in verbose mode and libcurl was built with
verbose strings.
Closes https://github.com/curl/curl/pull/1246
- Remove the SNI disabled when host verification disabled message
since that is incorrect.
- Show a message for legacy versions of Windows <= XP that connections
may fail since those versions of WinSSL lack SNI, algorithms, etc.
Bug: https://github.com/curl/curl/pull/1240
SSL_CTX_add_extra_chain_cert takes ownership of the given certificate
while, despite the similar name, SSL_CTX_add_client_CA does not. Thus
it's best to call SSL_CTX_add_client_CA before
SSL_CTX_add_extra_chain_cert, while the code still has ownership of the
argument.
Closes https://github.com/curl/curl/pull/1236
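A minimal sketch of the safe ordering described above; 'cacert' is a
hypothetical X509 * that the caller initially owns:

    #include <openssl/ssl.h>

    static int install_intermediate(SSL_CTX *ctx, X509 *cacert)
    {
      /* SSL_CTX_add_client_CA() does not take ownership of 'cacert',
         while SSL_CTX_add_extra_chain_cert() does, so call it last */
      if(!SSL_CTX_add_client_CA(ctx, cacert))
        return 0;
      if(!SSL_CTX_add_extra_chain_cert(ctx, cacert))
        return 0;   /* caller still owns 'cacert' on this failure path */
      return 1;     /* 'cacert' is now owned by ctx; do not free it */
    }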
This repairs cookies for localhost.
Non-PSL builds will now only accept "localhost" without dots, while PSL
builds accept everything not listed in the PSL.
Added test 1258 to verify.
This was a regression brought in a76825a5ef
Replace use of the fixed macro BUFSIZE to define the size of the receive
buffer. Reappropriate CURLOPT_BUFFERSIZE to also allow enlarging the
receive buffer size. Upon setting, resize the buffer if the requested
size is larger than the current default, up to MAX_BUFSIZE (512KB). This
can benefit protocols like SFTP.
Closes #1222
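A minimal usage sketch asking for a larger receive buffer (the 256 kB value
is just an example within the 512 kB cap mentioned above):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/file.bin");
        /* ask for a 256 kB receive buffer instead of the default size */
        curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 256 * 1024L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }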
Regression since 1d4202ad, which moved the buffer into a more narrow
scope, but the data in that buffer was used outside of that more narrow
scope.
Reported-by: Dan Fandrich
Bug: https://curl.haxx.se/mail/lib-2017-01/0093.html
curl_addrinfo.c:519:20: error: conversion to ‘curl_socklen_t {aka
unsigned int}’ from ‘long unsigned int’ may alter its value
[-Werror=conversion]
Follow-up to 1d786faee1
In addition to unix domain sockets, Linux also supports an
abstract namespace which is independent of the filesystem.
In order to support it, add new CURLOPT_ABSTRACT_UNIX_SOCKET
option which uses the same storage as CURLOPT_UNIX_SOCKET_PATH
internally, along with a flag to specify abstract socket.
On non-supporting platforms, the abstract address will be
interpreted as an empty string and fail gracefully.
Also add new --abstract-unix-socket tool parameter.
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Reported-by: Chungtsun Li (typeless)
Reviewed-by: Daniel Stenberg
Reviewed-by: Peter Wu
Closes #1197
Fixes #1061
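A minimal usage sketch of the new option (socket name and URL are
placeholders):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        /* connect through the abstract socket "mysocket" instead of TCP */
        curl_easy_setopt(curl, CURLOPT_ABSTRACT_UNIX_SOCKET, "mysocket");
        curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/status");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }

The tool equivalent would be something like: curl --abstract-unix-socket
mysocket http://localhost/status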
It made the German ß get converted to ss, IDNA2003 style, and we can't
have that for the .de TLD, a primary reason for our switch to IDNA2008.
Test 165 verifies.
Previously, when the http_proxy env var was in use, the noproxy list was
the combination of the --noproxy option and the NO_PROXY env var. Since
this commit, the --noproxy option overrides the NO_PROXY environment
variable even when the http_proxy env var is used.
Closes #1140
Previously, when CURL_DISABLE_HTTP was defined, detect_proxy() returned
NULL; when it was not defined, detect_proxy() checked the noproxy list.
Refactor to set the proxy to NULL instead of calling detect_proxy() when
CURL_DISABLE_HTTP is defined, and to call detect_proxy() only when
CURL_DISABLE_HTTP is not defined and the host is not in the noproxy list.
The combination of the --noproxy option and the http_proxy env var works
well for both proxied hosts and non-proxied hosts. However, when
combining the NO_PROXY env var with the --proxy option, non-proxied hosts
were not reachable while proxied hosts were fine.
This patch allows access to non-proxied hosts even when the NO_PROXY env
var is used together with the --proxy option.
Check for presence of gnutls_alpn_* and gnutls_ocsp_* functions during
configure instead of relying on the version number. GnuTLS has options
to turn these features off and we can just work with such builds like we
work with older versions.
Signed-off-by: Marcus Hoffmann <m.hoffmann@cartelsol.com>
Closes #1204
Follow-up to 3463408.
Prior to 3463408 file:// hostnames were silently stripped.
Prior to this commit it did not work when a schemeless url was used with
file as the default protocol.
Ref: https://curl.haxx.se/mail/lib-2016-11/0081.html
Closes https://github.com/curl/curl/pull/1124
Also fix for drive letters:
- Support --proto-default file c:/foo/bar.txt
- Support file://c:/foo/bar.txt
- Fail when a file:// drive letter is detected and not MSDOS/Windows.
Bug: https://github.com/curl/curl/issues/1187
Reported-by: Anatol Belski
Assisted-by: Anatol Belski
Both IMAP and POP3 response characters are used internally, but when
appended to the STARTTLS denial message they could likely confuse the
user.
Closes https://github.com/curl/curl/pull/1203
Fixed an old leftover use of the USE_SSLEAY define which would make a
socket get removed from the application's set of sockets to monitor when
the multi_socket API was used, leading to timeouts.
Bug: #1174
Visual C++ complained:
warning C4267: '=': conversion from 'size_t' to 'long', possible loss of data
warning C4701: potentially uninitialized local variable 'path' used
Fixes a few issues in manual wildcard cert name validation in
schannel support code for Win32 CE:
- when comparing the wildcard name to the hostname, the wildcard
character was removed from the cert name and the hostname
was checked to see if it ended with the modified cert name.
This allowed cert names like *.com to match the connection
hostname. This violates recommendations from RFC 6125.
- when the wildcard name in the certificate is longer than the
connection hostname, a buffer overread of the connection
hostname buffer would occur during the comparison of the
certificate name and the connection hostname.
It doesn't benefit us much as the connection could get closed at
any time, and also by checking we lose the ability to determine
if the socket was closed by reading zero bytes.
Reported-by: Michael Kaufmann
Closes https://github.com/curl/curl/pull/1134
CURLOPT_SOCKS_PROXY -> CURLOPT_PRE_PROXY
Added the corresponding --preproxy command line option. It sets a SOCKS
proxy to connect to _before_ connecting to an HTTP(S) proxy.
This was added as part of the SOCKS+HTTPS proxy merge but there's no
need to support this as we prefer to have the protocol specified as a
prefix instead.
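A minimal usage sketch chaining a SOCKS pre-proxy in front of an HTTP proxy
(host names and ports are placeholders):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* first hop: SOCKS proxy, then the regular HTTP proxy */
        curl_easy_setopt(curl, CURLOPT_PRE_PROXY, "socks5://sockshost:1080");
        curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxyhost:3128");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }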
ERR_PACK is an internal detail of OpenSSL. Also, when using it, a
function name must be specified which is overly specific: the test will
break whenever OpenSSL internally changes things so that a different
function creates the error.
Closes #1157
Since it now reads responses one byte at a time, a loop could be removed
and it is no longer limited to getting the whole response within 16K; it
is now instead only limited to a 16K maximum header line length.
... so that it doesn't read data that is actually coming from the
remote. 2xx responses have no body from the proxy; that data is from the
peer.
Fixes #1132
A server MUST NOT send any Transfer-Encoding or Content-Length header
fields in a 2xx (Successful) response to CONNECT. (RFC 7231 section
4.3.6)
Also fixes the three test cases that did this.
If a port number in a "connect-to" entry does not match, skip this
entry instead of connecting to port 0.
If a port number in a "connect-to" entry matches, use this entry
and look no further.
Reported-by: Jay Satiro
Assisted-by: Jay Satiro, Daniel Stenberg
Closes #1148
Adds access to the effectively used protocol/scheme to both libcurl and
curl, both in string and numeric (CURLPROTO_*) form.
Note that the string form will be uppercase, as it is just the internal
string.
As these strings are declared internally as const, and all other strings
returned by curl_easy_getinfo() are de-facto const as well, string
handling in getinfo.c got const-ified.
Closes #1137
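A minimal sketch of querying the new values, assuming they are exposed as
CURLINFO_SCHEME (string form) and CURLINFO_PROTOCOL (numeric CURLPROTO_*
form):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        char *scheme = NULL;
        long proto = 0;
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        if(curl_easy_perform(curl) == CURLE_OK) {
          curl_easy_getinfo(curl, CURLINFO_SCHEME, &scheme);   /* e.g. "HTTPS" */
          curl_easy_getinfo(curl, CURLINFO_PROTOCOL, &proto);  /* CURLPROTO_HTTPS */
          printf("scheme: %s (%ld)\n", scheme ? scheme : "?", proto);
        }
        curl_easy_cleanup(curl);
      }
      return 0;
    }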
vtls/gtls.c: In function ‘Curl_gtls_data_pending’:
vtls/gtls.c:1429:3: error: this ‘if’ clause does not guard... [-Werror=misleading-indentation]
if(conn->proxy_ssl[connindex].session &&
^~
vtls/gtls.c:1433:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the ‘if’
return res;
* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications as opposed to HTTP proxies that receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well.
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent from their --foo counterparts,
except --proxy-crlfile which defaults to --crlfile and --proxy-capath
which defaults to --capath.
Curl now also supports %{proxy_ssl_verify_result} --write-out variable,
similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
- Fix connection reuse for when the proposed new conn 'needle' has a
specified local port but does not have a specified device interface.
Bug: https://curl.haxx.se/mail/lib-2016-11/0137.html
Reported-by: bjt3[at]hotmail.com
Visual C++ now complains about implicitly casting time_t (64-bit) to
long (32-bit). Fix this by changing some variables from long to time_t,
or explicitly casting to long where the public interface would be
affected.
Closes #1131
- In Curl_http2_switched don't call memcpy when src is NULL.
Curl_http2_switched can be called like:
Curl_http2_switched(conn, NULL, 0);
.. and prior to this change memcpy was then called like:
memcpy(dest, NULL, 0)
.. causing address sanitizer to warn:
http2.c:2057:3: runtime error: null pointer passed as argument 2, which
is declared to never be null
Now Curl_rand() is made to fail if it cannot get the necessary random
level.
Changed the proto of Curl_rand() slightly to provide a number of ints at
once.
Moved out from vtls, since it isn't a TLS function and vtls provides
Curl_ssl_random() for this to use.
Discussion: https://curl.haxx.se/mail/lib-2016-11/0119.html
Previously, the [host] part was just ignored, which made libcurl accept
strange URLs that mislead users, like "file://etc/passwd" which might've
looked like it refers to "/etc/passwd" but is just "/passwd" since the
"etc" is an ignored host name.
Reported-by: Mike Crowe
Assisted-by: Kamil Dudka
- Fix GnuTLS code for CURL_SSLVERSION_TLSv1_2 that broke when the
TLS 1.3 support was added in 6ad3add.
- Homogenize across code for all backends the error message when TLS 1.3
is not available to "<backend>: TLS 1.3 is not yet supported".
- Return an error when a user-specified ssl version is unrecognized.
---
Prior to this change our code for some of the backends used the
'default' label in the switch statement (i.e. version unrecognized) for
ssl.version and treated it the same as CURL_SSLVERSION_DEFAULT.
Bug: https://curl.haxx.se/mail/lib-2016-11/0048.html
Reported-by: Kamil Dudka
We're mostly saying just "curl" in lower case these days so here's a big
cleanup to adapt to this reality. A few instances are left as the
project could still formally be considered called cURL.
- Call Curl_initinfo on init and duphandle.
Prior to this change the statistical and informational variables were
simply zeroed by calloc on easy init and duphandle. While zero is the
correct default value for almost all info variables, there is one where
it isn't (filetime initializes to -1).
Bug: https://github.com/curl/curl/issues/1103
Reported-by: Neal Poole
...to use the public function curl_strnequal(). This isn't ideal because
it adds extra overhead to any internal calls to checkprefix.
follow-up to 95bd2b3e
As they are after all part of the public API. Saves space and reduces
complexity. Remove the strcase defines from the curlx_ family.
Suggested-by: Dan Fandrich
Idea: https://curl.haxx.se/mail/lib-2016-10/0136.html
This should fix the "warning: 'curl_strequal' redeclared without
dllimport attribute: previous dllimport ignored" message and subsequent
link error on Windows because of the missing CURL_EXTERN on the
prototype.
These two public functions have been mentioned as deprecated since a
very long time but since they are still part of the API and ABI we need
to keep them around.
... to make it less likely that we forget that the function actually
does case insensitive compares. Also replaced several invocations of the
function with a plain strcmp when case sensitivity is not an issue (like
comparing with "-").
If the requested size is zero, bail out with error instead of doing a
realloc() that would cause a double-free: realloc(0) acts as a free()
and then there's a second free in the cleanup path.
CVE-2016-8619
Bug: https://curl.haxx.se/docs/adv_20161102E.html
Reported-by: Cure53
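A hypothetical sketch of the guard described above (names are illustrative,
not curl's actual code):

    #include <stdlib.h>

    static char *grow_buffer(char *ptr, size_t newsize)
    {
      if(newsize == 0)
        return NULL;   /* bail out: realloc(ptr, 0) may act as free(ptr) and
                          the cleanup path would then free it a second time */
      return realloc(ptr, newsize);
    }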
Previously it only held references to them, which was reckless as the
thread lock was released so the cookies could get modified by other
handles that share the same cookie jar over the share interface.
CVE-2016-8623
Bug: https://curl.haxx.se/docs/adv_20161102I.html
Reported-by: Cure53