Replace use of fixed macro BUFSIZE to define the size of the receive
buffer. Reappropriate CURLOPT_BUFFERSIZE to include enlarging receive
buffer size. Upon setting, resize buffer if larger than the current
default size up to a MAX_BUFSIZE (512KB). This can benefit protocols
like SFTP.
Closes#1222
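A minimal sketch of how an application might use the enlarged buffer for
an SFTP download (the URL is a placeholder; values outside the allowed
range are clamped by libcurl):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/big.bin");
        /* ask for a 512KB receive buffer instead of the fixed BUFSIZE */
        curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 512L * 1024L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }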
Regression since 1d4202ad, which moved the buffer into a more narrow
scope, but the data in that buffer was used outside of that more narrow
scope.
Reported-by: Dan Fandrich
Bug: https://curl.haxx.se/mail/lib-2017-01/0093.html
curl_addrinfo.c:519:20: error: conversion to ‘curl_socklen_t {aka
unsigned int}’ from ‘long unsigned int’ may alter its value
[-Werror=conversion]
Follow-up to 1d786faee1
In addition to unix domain sockets, Linux also supports an
abstract namespace which is independent of the filesystem.
In order to support it, add new CURLOPT_ABSTRACT_UNIX_SOCKET
option which uses the same storage as CURLOPT_UNIX_SOCKET_PATH
internally, along with a flag to specify abstract socket.
On non-supporting platforms, the abstract address will be
interpreted as an empty string and fail gracefully.
Also add new --abstract-unix-socket tool parameter.
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Reported-by: Chungtsun Li (typeless)
Reviewed-by: Daniel Stenberg
Reviewed-by: Peter Wu
Closes #1197
Fixes #1061
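A minimal usage sketch (the socket name "mysocket" is a placeholder and
an initialized CURL *curl easy handle is assumed); unlike
CURLOPT_UNIX_SOCKET_PATH the name is looked up in the abstract
namespace, not the filesystem:

    curl_easy_setopt(curl, CURLOPT_ABSTRACT_UNIX_SOCKET, "mysocket");
    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/");
    curl_easy_perform(curl);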
It made the German ß get converted to ss, IDNA2003 style, and we can't
have that for the .de TLD - a primary reason for our switch to IDNA2008.
Test 165 verifies.
When the http_proxy env var is in use, the noproxy list was previously
the combination of the --noproxy option and the NO_PROXY env var. Since
this commit, the --noproxy option overrides the NO_PROXY environment
variable even when the http_proxy env var is used.
Closes#1140
If CURL_DISABLE_HTTP was defined, detect_proxy() returned NULL. If it
was not defined, detect_proxy() checked the noproxy list. Thus, refactor
to set the proxy to NULL instead of calling detect_proxy() when
CURL_DISABLE_HTTP is defined, and to call detect_proxy() when it is not
defined and the host is not in the noproxy list.
The combination of --noproxy option and http_proxy env var works well
both for proxied hosts and non-proxied hosts.
However, when combining the NO_PROXY env var with the --proxy option,
non-proxied hosts are not reachable while proxied hosts are OK.
This patch allows us to access non-proxied hosts even if using NO_PROXY
env var with --proxy option.
Check for presence of gnutls_alpn_* and gnutls_ocsp_* functions during
configure instead of relying on the version number. GnuTLS has options
to turn these features off and we can just work with such builds
like we work with older versions.
Signed-off-by: Marcus Hoffmann <m.hoffmann@cartelsol.com>
Closes#1204
Follow-up to 3463408.
Prior to 3463408 file:// hostnames were silently stripped.
Prior to this commit it did not work when a schemeless URL was used with
file as the default protocol.
Ref: https://curl.haxx.se/mail/lib-2016-11/0081.html
Closes https://github.com/curl/curl/pull/1124
Also fix for drive letters:
- Support --proto-default file c:/foo/bar.txt
- Support file://c:/foo/bar.txt
- Fail when a file:// drive letter is detected and not MSDOS/Windows.
Bug: https://github.com/curl/curl/issues/1187
Reported-by: Anatol Belski
Assisted-by: Anatol Belski
Both IMAP and POP3 response characters are used internally, but when
appended to the STARTTLS denial message they could likely confuse the
user.
Closes https://github.com/curl/curl/pull/1203
Fixed an old leftover use of the USE_SSLEAY define which would make a
socket get removed from the applications sockets to monitor when the
multi_socket API was used, leading to timeouts.
Bug: #1174
Visual C++ complained:
warning C4267: '=': conversion from 'size_t' to 'long', possible loss of data
warning C4701: potentially uninitialized local variable 'path' used
Fixes a few issues in manual wildcard cert name validation in
schannel support code for Win32 CE:
- when comparing the wildcard name to the hostname, the wildcard
character was removed from the cert name and the hostname
was checked to see if it ended with the modified cert name.
This allowed cert names like *.com to match the connection
hostname. This violates recommendations from RFC 6125.
- when the wildcard name in the certificate is longer than the
connection hostname, a buffer overread of the connection
hostname buffer would occur during the comparison of the
certificate name and the connection hostname.
It doesn't benefit us much as the connection could get closed at
any time, and also by checking we lose the ability to determine
if the socket was closed by reading zero bytes.
Reported-by: Michael Kaufmann
Closes https://github.com/curl/curl/pull/1134
CURLOPT_SOCKS_PROXY -> CURLOPT_PRE_PROXY
Added the corresponding --preproxy command line option. Sets a SOCKS
proxy to connect to _before_ connecting to an HTTP(S) proxy.
This was added as part of the SOCKS+HTTPS proxy merge but there's no
need to support this as we prefer to have the protocol specified as a
prefix instead.
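A sketch of the two-hop setup this enables (host names are placeholders,
assuming an initialized easy handle):

    /* first hop: the SOCKS pre-proxy */
    curl_easy_setopt(curl, CURLOPT_PRE_PROXY, "socks5://sockshost:1080");
    /* second hop, reached through the first: the HTTP proxy */
    curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxyhost:8080");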
ERR_PACK is an internal detail of OpenSSL. Also, when using it, a
function name must be specified which is overly specific: the test will
break whenever OpenSSL internally changes things so that a different
function creates the error.
Closes#1157
Since it now reads responses one byte at a time, a loop could be removed
and it is no longer limited to get the whole response within 16K, it is
now instead only limited to 16K maximum header line lengths.
... so that it doesn't read data that is actually coming from the
remote. 2xx responses have no body from the proxy, that data is from the
peer.
Fixes#1132
A server MUST NOT send any Transfer-Encoding or Content-Length header
fields in a 2xx (Successful) response to CONNECT. (RFC 7231 section
4.3.6)
Also fixes the three test cases that did this.
If a port number in a "connect-to" entry does not match, skip this
entry instead of connecting to port 0.
If a port number in a "connect-to" entry matches, use this entry
and look no further.
Reported-by: Jay Satiro
Assisted-by: Jay Satiro, Daniel Stenberg
Closes#1148
Adds access to the effectively used protocol/scheme to both libcurl and
curl, both in string and numeric (CURLPROTO_*) form.
Note that the string form will be uppercase, as it is just the internal
string.
As these strings are declared internally as const, and all other strings
returned by curl_easy_getinfo() are de-facto const as well, string
handling in getinfo.c got const-ified.
Closes#1137
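A sketch of reading both forms after a performed transfer, assuming the
new info names are CURLINFO_SCHEME (string) and CURLINFO_PROTOCOL
(numeric) as in the released API, and an initialized easy handle:

    char *scheme = NULL;
    long proto = 0;
    curl_easy_getinfo(curl, CURLINFO_SCHEME, &scheme);  /* e.g. "HTTPS" */
    curl_easy_getinfo(curl, CURLINFO_PROTOCOL, &proto); /* CURLPROTO_* */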
vtls/gtls.c: In function ‘Curl_gtls_data_pending’:
vtls/gtls.c:1429:3: error: this ‘if’ clause does not guard... [-Werror=misleading-indentation]
if(conn->proxy_ssl[connindex].session &&
^~
vtls/gtls.c:1433:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the ‘if’
return res;
* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications as opposed to HTTP proxies that receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well.
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent from their --foo counterparts,
except --proxy-crlfile which defaults to --crlfile and --proxy-capath
which defaults to --capath.
Curl now also supports %{proxy_ssl_verify_result} --write-out variable,
similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
- Fix connection reuse for when the proposed new conn 'needle' has a
specified local port but does not have a specified device interface.
Bug: https://curl.haxx.se/mail/lib-2016-11/0137.html
Reported-by: bjt3[at]hotmail.com
Visual C++ now complains about implicitly casting time_t (64-bit) to
long (32-bit). Fix this by changing some variables from long to time_t,
or explicitly casting to long where the public interface would be
affected.
Closes#1131
- In Curl_http2_switched don't call memcpy when src is NULL.
Curl_http2_switched can be called like:
Curl_http2_switched(conn, NULL, 0);
.. and prior to this change memcpy was then called like:
memcpy(dest, NULL, 0)
.. causing address sanitizer to warn:
http2.c:2057:3: runtime error: null pointer passed as argument 2, which
is declared to never be null
Now Curl_rand() is made to fail if it cannot get the necessary random
level.
Changed the proto of Curl_rand() slightly to provide a number of ints at
once.
Moved out from vtls, since it isn't a TLS function and vtls provides
Curl_ssl_random() for this to use.
Discussion: https://curl.haxx.se/mail/lib-2016-11/0119.html
Previously, the [host] part was just ignored, which made libcurl accept
strange URLs that mislead users, like "file://etc/passwd" which might've
looked like it refers to "/etc/passwd" but is just "/passwd" since the
"etc" is an ignored host name.
Reported-by: Mike Crowe
Assisted-by: Kamil Dudka
- Fix GnuTLS code for CURL_SSLVERSION_TLSv1_2 that broke when the
TLS 1.3 support was added in 6ad3add.
- Homogenize across code for all backends the error message when TLS 1.3
is not available to "<backend>: TLS 1.3 is not yet supported".
- Return an error when a user-specified ssl version is unrecognized.
---
Prior to this change our code for some of the backends used the
'default' label in the switch statement (ie ver unrecognized) for
ssl.version and treated it the same as CURL_SSLVERSION_DEFAULT.
Bug: https://curl.haxx.se/mail/lib-2016-11/0048.html
Reported-by: Kamil Dudka
We're mostly saying just "curl" in lower case these days so here's a big
cleanup to adapt to this reality. A few instances are left as the
project could still formally be considered called cURL.
- Call Curl_initinfo on init and duphandle.
Prior to this change the statistical and informational variables were
simply zeroed by calloc on easy init and duphandle. While zero is the
correct default value for almost all info variables, there is one where
it isn't (filetime initializes to -1).
Bug: https://github.com/curl/curl/issues/1103
Reported-by: Neal Poole
...to use the public function curl_strnequal(). This isn't ideal because
it adds extra overhead to any internal calls to checkprefix.
follow-up to 95bd2b3e
As they are after all part of the public API. Saves space and reduces
complexity. Remove the strcase defines from the curlx_ family.
Suggested-by: Dan Fandrich
Idea: https://curl.haxx.se/mail/lib-2016-10/0136.html
This should fix the "warning: 'curl_strequal' redeclared without
dllimport attribute: previous dllimport ignored" message and subsequent
link error on Windows because of the missing CURL_EXTERN on the
prototype.
These two public functions have been marked as deprecated for a very
long time, but since they are still part of the API and ABI we need
to keep them around.
... to make it less likely that we forget that the function actually
does case insensitive compares. Also replaced several invocations of the
function with a plain strcmp when case sensitivity is not an issue (like
comparing with "-").
If the requested size is zero, bail out with error instead of doing a
realloc() that would cause a double-free: realloc(0) acts as a free()
and then there's a second free in the cleanup path.
CVE-2016-8619
Bug: https://curl.haxx.se/docs/adv_20161102E.html
Reported-by: Cure53
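The hazard in miniature (illustrative C, not the actual curl code):

    char *p = malloc(10);
    char *p2 = realloc(p, 0); /* may act as free(p) and return NULL */
    if(!p2)
      free(p);                /* looks like error handling, but p was
                                 already freed: a double free */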
Previously it only held references to them, which was reckless as the
thread lock was released so the cookies could get modified by other
handles that share the same cookie jar over the share interface.
CVE-2016-8623
Bug: https://curl.haxx.se/docs/adv_20161102I.html
Reported-by: Cure53
- Change initial message box to mention delay when downloading/parsing.
Since there is no progress meter it was somewhat unexpected that after
choosing a filename nothing appears to happen, when actually the cert
data is in the process of being downloaded and parsed.
- Warn if OpenSSL is not present.
- Use a UTF-8 stream to make the ca-bundle data.
- Save the UTF-8 ca-bundle stream as binary so that no BOM is added.
---
This is a follow-up to d2c6d15 which switched mk-ca-bundle.vbs output to
ANSI due to corrupt UTF-8 output, now fixed.
This change completes making the default certificate bundle output of
mk-ca-bundle.vbs as close as possible to that of mk-ca-bundle.pl, which
should make it easier to review any difference between their output.
Ref: https://github.com/curl/curl/pull/1012
Bring the VBScript version more in line with the perl version:
- Change timestamp to UTC.
- Change URL retrieval to HTTPS-only by default.
- Comment out the options that disabled SSL cert checking by default.
- Assume OpenSSL is present, get SHA256. And add a flag to toggle it.
- Fix cert issuer name output.
The cert issuer output is now ansi, converted from UTF-8. Prior to this
it was corrupt UTF-8. It turns out that though we can work with UTF-8,
the FSO object that writes the ca-bundle can't write UTF-8, so there
will have to be some alternative if UTF-8 is needed (like an
ADODB.Stream).
- Disable the certificate text info feature.
The certificate text info doesn't work properly with any recent OpenSSL.
- Change all predefined Mozilla URLs to HTTPS (Gregory Szorc).
- New option -k to allow URLs other than HTTPS and enable HTTP fallback.
Prior to this change the default URL retrieval mode was to fall back to
HTTP if HTTPS didn't work.
Reported-by: Gregory Szorc
Closes#1012
Several independent reports of infinite loops hanging in the
close_all_connections() function when closing a multi handle can be
fixed by first marking the connection to get closed before calling
Curl_disconnect.
This is more fixing-the-symptom rather than the underlying problem
though.
Bug: https://curl.haxx.se/mail/lib-2016-10/0011.html
Bug: https://curl.haxx.se/mail/lib-2016-10/0059.html
Reported-by: Dan Fandrich, Valentin David, Miloš Ljumović
In short the easy handle needs to be disconnected from its connection at
this point since the connection still is serving other easy handles.
In our app we can reliably reproduce a crash in our http2 stress test
that is fixed by this change. I can't easily reproduce the same test in
a small example.
This is the gdb/asan output:
==11785==ERROR: AddressSanitizer: heap-use-after-free on address 0xe9f4fb80 at pc 0x09f41f19 bp 0xf27be688 sp 0xf27be67c
READ of size 4 at 0xe9f4fb80 thread T13 (RESOURCE_HTTP)
#0 0x9f41f18 in curl_multi_remove_handle /path/to/source/3rdparty/curl/lib/multi.c:666
0xe9f4fb80 is located 0 bytes inside of 1128-byte region [0xe9f4fb80,0xe9f4ffe8)
freed by thread T13 (RESOURCE_HTTP) here:
#0 0xf7b1b5c2 in __interceptor_free /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_malloc_linux.cc:45
#1 0x9f7862d in conn_free /path/to/source/3rdparty/curl/lib/url.c:2808
#2 0x9f78c6a in Curl_disconnect /path/to/source/3rdparty/curl/lib/url.c:2876
#3 0x9f41b09 in multi_done /path/to/source/3rdparty/curl/lib/multi.c:615
#4 0x9f48017 in multi_runsingle /path/to/source/3rdparty/curl/lib/multi.c:1896
#5 0x9f490f1 in curl_multi_perform /path/to/source/3rdparty/curl/lib/multi.c:2123
#6 0x9c4443c in perform /path/to/source/src/net/resourcemanager/ResourceManagerCurlThread.cpp:854
#7 0x9c445e0 in ...
#8 0x9c4cf1d in ...
#9 0xa2be6b5 in ...
#10 0xf7aa5780 in asan_thread_start /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
#11 0xf4d3a16d in __clone (/lib/i386-linux-gnu/libc.so.6+0xe716d)
previously allocated by thread T13 (RESOURCE_HTTP) here:
#0 0xf7b1ba27 in __interceptor_calloc /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_malloc_linux.cc:70
#1 0x9f7dfa6 in allocate_conn /path/to/source/3rdparty/curl/lib/url.c:3904
#2 0x9f88ca0 in create_conn /path/to/source/3rdparty/curl/lib/url.c:5797
#3 0x9f8c928 in Curl_connect /path/to/source/3rdparty/curl/lib/url.c:6438
#4 0x9f45a8c in multi_runsingle /path/to/source/3rdparty/curl/lib/multi.c:1411
#5 0x9f490f1 in curl_multi_perform /path/to/source/3rdparty/curl/lib/multi.c:2123
#6 0x9c4443c in perform /path/to/source/src/net/resourcemanager/ResourceManagerCurlThread.cpp:854
#7 0x9c445e0 in ...
#8 0x9c4cf1d in ...
#9 0xa2be6b5 in ...
#10 0xf7aa5780 in asan_thread_start /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
#11 0xf4d3a16d in __clone (/lib/i386-linux-gnu/libc.so.6+0xe716d)
SUMMARY: AddressSanitizer: heap-use-after-free /path/to/source/3rdparty/curl/lib/multi.c:666 in curl_multi_remove_handle
Shadow bytes around the buggy address:
0x3d3e9f20: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f30: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f40: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f50: fd fd fd fd fd fd fd fd fd fd fd fd fd fa fa fa
0x3d3e9f60: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x3d3e9f70:[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f90: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9fa0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9fb0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9fc0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Heap right redzone: fb
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack partial redzone: f4
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==11785==ABORTING
Thread 14 "RESOURCE_HTTP" received signal SIGABRT, Aborted.
[Switching to Thread 0xf27bfb40 (LWP 12324)]
0xf7fd8be9 in __kernel_vsyscall ()
(gdb) bt
#0 0xf7fd8be9 in __kernel_vsyscall ()
#1 0xf4c7ee89 in __GI_raise (sig=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#2 0xf4c803e7 in __GI_abort () at abort.c:89
#3 0xf7b2ef2e in __sanitizer::Abort () at /opt/toolchain/src/gcc-6.2.0/libsanitizer/sanitizer_common/sanitizer_posix_libcdep.cc:122
#4 0xf7b262fa in __sanitizer::Die () at /opt/toolchain/src/gcc-6.2.0/libsanitizer/sanitizer_common/sanitizer_common.cc:145
#5 0xf7b21ab3 in __asan::ScopedInErrorReport::~ScopedInErrorReport (this=0xf27be171, __in_chrg=<optimized out>) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_report.cc:689
#6 0xf7b214a5 in __asan::ReportGenericError (pc=166993689, bp=4068206216, sp=4068206204, addr=3925146496, is_write=false, access_size=4, exp=0, fatal=true) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_report.cc:1074
#7 0xf7b21fce in __asan::__asan_report_load4 (addr=3925146496) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_rtl.cc:129
#8 0x09f41f19 in curl_multi_remove_handle (multi=0xf3406080, data=0xde582400) at /path/to/source3rdparty/curl/lib/multi.c:666
#9 0x09f6b277 in Curl_close (data=0xde582400) at /path/to/source3rdparty/curl/lib/url.c:415
#10 0x09f3354e in curl_easy_cleanup (data=0xde582400) at /path/to/source3rdparty/curl/lib/easy.c:860
#11 0x09c6de3f in ...
#12 0x09c378c5 in ...
#13 0x09c48133 in ...
#14 0x09c4d092 in ...
#15 0x0a2be6b6 in ...
#16 0xf7aa5781 in asan_thread_start (arg=0xf2d22938) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
#17 0xf5de52b5 in start_thread (arg=0xf27bfb40) at pthread_create.c:333
#18 0xf4d3a16e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:114
Fixes#1083
The closure handle only ever has default timeouts set. To improve the
state somewhat we clone the timeouts from each added handle so that the
closure handle always has the same timeouts as the most recently added
easy handle.
Fixes#739
Curl_select_ready() was the former API that was replaced with
Curl_select_check() a while back and the former arg setup was provided
with a define (in order to leave existing code unmodified).
Now we instead offer SOCKET_READABLE and SOCKET_WRITABLE for the most
common shortcuts where only one socket is checked. They're also more
visibly macros.
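A hedged sketch of the idea (not necessarily the verbatim source): thin
wrappers over the generic Curl_socket_check() so that single-socket
checks read clearly:

    #define SOCKET_READABLE(x,z) \
      Curl_socket_check(x, CURL_SOCKET_BAD, CURL_SOCKET_BAD, z)
    #define SOCKET_WRITABLE(x,z) \
      Curl_socket_check(CURL_SOCKET_BAD, CURL_SOCKET_BAD, x, z)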
- Change back behavior so that pipelining is considered possible for
connections that have not yet reached the protocol level.
This is a follow-up to e5f0b1a which had changed the behavior of
checking if pipelining is possible to ignore connections that had
'bits.close' set. Connections that have not yet reached the protocol
level also have that bit set, and we need to consider pipelining
possible on those connections.
This fixes a merge error in commit 7f3df80 caused by commit 332e8d6.
Additionally, this changes Curl_verify_windows_version for Windows App
builds to assume it is always running on the target Windows version.
There seems to be no way to determine the Windows version from a
UWP app. Neither GetVersion(Ex), nor VerifyVersionInfo, nor the
Version Helper functions are supported.
Bug: https://github.com/curl/curl/pull/820#issuecomment-250889878
Reported-by: Paul Joyce
Closes https://github.com/curl/curl/pull/1048
No longer attempt to use "doomed" to-be-closed connections when
pipelining. Prior to this change connections marked for deletion (e.g.
timeout) would be erroneously used, resulting in sporadic crashes.
As originally reported and fixed by Carlo Wood (origin unknown).
Bug: https://github.com/curl/curl/issues/627
Reported-by: Rider Linden
Closes https://github.com/curl/curl/pull/1075
Participation-by: nopjmp@users.noreply.github.com
Not all reply messages were properly checked for their lengths, which
made it possible to access uninitialized memory (but this does not lead
to out of boundary accesses).
Closes#1052
... it no longer takes printf() arguments since it was only really taken
advantage of by one user, and it was not written and used in a safe
way. Thus the 'f' is removed from the function name and the proto is
changed.
Although the current code wouldn't end up in badness, there was a risk
that future changes could end up sprintf()ing too large data or passing
in a format string inadvertently.
The previous use of snprintf() could make libcurl silently truncate some
input data and not report that back on overly large input, which could
make data get sent over the network in a bad format.
Example:
$ curl --form 'a=b' -H "Content-Type: $(perl -e 'print "A"x4100')"
Cookies with the same domain but different tailmatching properties are
now considered different and do not replace each other. If a header
contains the following lines then two cookies will be set:
Set-Cookie: foo=bar; domain=.foo.com; expires=Thu Mar 3 GMT 8:56:27 2033
Set-Cookie: foo=baz; domain=foo.com; expires=Thu Mar 3 GMT 8:56:27 2033
This matches Chrome, Opera, Safari, and Firefox behavior. When sending
stored cookies to foo.com, Chrome, Opera and Firefox send them in the
stored order, while Safari pre-sorts the cookies.
Closes#1050
Add the new option CURLOPT_KEEP_SENDING_ON_ERROR to control whether
sending the request body shall be completed when the server responds
early with an error status code.
This is suitable for manual NTLM authentication.
Reviewed-by: Jay Satiro
Closes https://github.com/curl/curl/pull/904
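Usage is a single boolean option (fragment, assuming an initialized easy
handle):

    /* keep sending the request body even when the server responds early
       with an error status, as NTLM's 401 round-trips require */
    curl_easy_setopt(curl, CURLOPT_KEEP_SENDING_ON_ERROR, 1L);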
As it seems to be a rarely used cipher suite (for securely established
but _unencrypted_ connections), I believe it is fine not to provide an
alias for the misspelled variant.
LibreSSL defines `OPENSSL_VERSION_NUMBER` as `0x20000000L` for all
versions, making curl report `LibreSSL/2.0.0` for any LibreSSL version.
This change provides a local OpenSSL_version_num function replacement
returning LIBRESSL_VERSION_NUMBER instead.
Closes#1029
The OpenSSL function CRYPTO_cleanup_all_ex_data() cannot be called
multiple times without crashing - and other libs might call it! We
basically cannot call it without risking a crash. The function is a
no-op since OpenSSL 1.1.0.
Not calling this function only risks a small memory leak with OpenSSL <
1.1.0.
Bug: https://curl.haxx.se/mail/lib-2016-09/0045.html
Reported-by: Todd Short
OpenSSL 1.0.1 and 1.0.2 build an error queue that is stored per-thread
so we need to clean it when easy handles are freed, in case the thread
in which the easy handle was used will be killed. All OpenSSL code in
libcurl should already extract errors as they occur, so clearing this
queue here should be harmless at worst.
Fixes#964
NTLM support with mbedTLS was added in 497e7c9 but requires that mbedTLS
is built with the MD4 functions available, which it isn't in default
builds. This now adapts if the function isn't there and builds libcurl
without NTLM support if so.
Fixes#1004
... like when an HTTP/0.9 response comes back without any headers at all
and just a body; this now prevents that body from being sent to the
callback etc.
Adapted test 1144 to verify.
Fixes#973
Assisted-by: Ray Satiro
Detect support for compiler symbol visibility flags and apply those
according to CURL_HIDDEN_SYMBOLS option.
It should work true to the autotools build except it tries to unhide
symbols on Windows when requested and prints a warning if it fails.
Ref: https://github.com/curl/curl/issues/981#issuecomment-242665951
Reported-by: Daniel Stenberg
... by partially reverting f975f06033. The allocation could be made by
OpenSSL so the free must be made with OPENSSL_free() to avoid problems.
Reported-by: Harold Stuart
Fixes#1005
... by making sure we don't count down the "upload left" counter when the
uploaded size is unknown, as then it could be allowed to continue
forever.
Fixes#996
Since we're using CURLE_FTP_WEIRD_SERVER_REPLY in imap, pop3 and smtp as
more of a generic "failed to parse", introduce an alias without FTP in
the name.
Closes https://github.com/curl/curl/pull/975
This hash is used to verify the original downloaded certificate bundle
and also included in the generated bundle's comment header. Also
rename related internal symbols to algorithm-agnostic names.
CURLINFO_SSL_VERIFYRESULT does not get the certificate verification
result when SSL_connect fails because of a certificate verification
error.
This fix saves the result of SSL_get_verify_result so that it is
returned by CURLINFO_SSL_VERIFYRESULT.
Closes https://github.com/curl/curl/pull/995
While noErr and errSecSuccess are defined as the same value, the API
documentation states that SecPKCS12Import() returns errSecSuccess if
there were no errors in importing. Ensure that a future change of the
defined value doesn't break anything (however unlikely) and be
consistent with the API docs.
Speed limits (from CURLOPT_MAX_RECV_SPEED_LARGE &
CURLOPT_MAX_SEND_SPEED_LARGE) were applied simply by comparing limits
with the cumulative average speed of the entire transfer; while this
might work at times with good/constant connections, in other cases it
can result in the limits simply being "ignored" for more than "short
bursts" (as told in the man page).
Consider a download that goes on much slower than the limit for some
time (because bandwidth is used elsewhere, the server is slow, whatever
the reason); then once things get better, curl would simply ignore the
limit until the average speed (since the beginning of the transfer)
reached the limit. This could render the limit useless at keeping the
entire bandwidth from being used (at least for quite some time).
So instead, we now use a "moving starting point" as reference, and every
time at least as much as the limit has been transferred, we reset this
starting point to the current position. This gets a good limiting effect
that applies to the "current speed" with instant reactivity (in case of
sudden speed bursts).
Closes#971
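A simplified, hedged sketch of the moving-reference-point idea (names
and structure are illustrative, not the actual libcurl code; limit is
assumed nonzero):

    #include <curl/curl.h> /* curl_off_t */
    #include <time.h>

    struct ratelimit {
      curl_off_t limit;     /* allowed bytes per second */
      curl_off_t ref_bytes; /* byte counter at the reference point */
      time_t ref_time;      /* when the reference point was set */
    };

    /* returns seconds to pause, or 0 to keep transferring */
    static long rate_wait(struct ratelimit *rl, curl_off_t total,
                          time_t now)
    {
      curl_off_t moved = total - rl->ref_bytes;
      double elapsed = difftime(now, rl->ref_time);
      /* time the moved bytes "should" have taken at the limit */
      double wanted = (double)moved / (double)rl->limit;
      if(moved >= rl->limit) {
        /* a full limit's worth was transferred: slide the reference
           point so an old slow period can no longer hide a burst */
        rl->ref_bytes = total;
        rl->ref_time = now;
      }
      return (wanted > elapsed) ? (long)(wanted - elapsed + 0.5) : 0;
    }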
* Added description to Curl_sspi_free_identity()
* Added parameter and return explanations to Curl_sspi_global_init()
* Added parameter explanation to Curl_sspi_global_cleanup()
With HTTP/2 each transfer is made in an individual logical stream over
the connection, so most errors that previously caused the connection to
get force-closed now instead just kill the stream and not the
connection.
Fixes#941
... instead of if() before the switch(), add a default to the switch so
that the compilers don't warn on "warning: enumeration value
'PLATFORM_DONT_CARE' not handled in switch" anymore.
- Disable ALPN on Wine.
- Don't pass input secbuffer when ALPN is disabled.
When ALPN support was added a change was made to pass an input secbuffer
to initialize the context. When ALPN is enabled the buffer contains the
ALPN information, and when it's disabled the buffer is empty. In either
case this input buffer caused problems with Wine and connections would
not complete.
Bug: https://github.com/curl/curl/issues/983
Reported-by: Christian Fillion
Serialise the call to PK11_FindSlotByName() to avoid spurious errors in
a multi-threaded environment. The underlying cause is a race condition
in nssSlot_IsTokenPresent().
Bug: https://bugzilla.mozilla.org/1297397
Closes #985
When we're uploading using FTP and the server issues a tiny pause
between opening the connection to the client's secondary socket, the
client's initial poll() times out, which leads to second poll() which
does not wait for POLLIN on the secondary socket. So that poll() also
has to time out, creating a long (200ms) pause.
This patch adds the correct flag to the secondary socket, making the
second poll() correctly wait for the connection there too.
Signed-off-by: Ales Novak <alnovak@suse.cz>
Closes#978
Instead of displaying the requested hostname the one returned
by the SOCKS5 proxy server is used in case of connection error.
The requested hostname is displayed earlier in the connection sequence.
The upper-value of the port is moved to a temporary variable and
replaced with a 0-byte to make sure the hostname is 0-terminated.
Hooked up the HTTP authentication layer to query the new 'is mechanism
supported' functions when deciding what mechanism to use.
As per commit 00417fd66c existing functionality is maintained for now.
Hooked up the SASL authentication layer to query the new 'is mechanism
supported' functions when deciding what mechanism to use.
For now existing functionality is maintained.
As Windows SSPI authentication calls fail when a particular mechanism
isn't available, introduced these functions for DIGEST, NTLM, Kerberos 5
and Negotiate to allow both HTTP and SASL authentication the opportunity
to query support for a supported mechanism before selecting it.
For now each function returns TRUE to maintain compatibility with the
existing code when called.
I discovered some people have been using "https://example.com" style
strings as proxy and it "works" (curl doesn't complain) because curl
ignores unknown schemes and then assumes plain HTTP instead.
I think this misleads users into believing curl uses HTTPS to proxies
when it doesn't. Now curl rejects proxy strings using unsupported
schemes instead of just ignoring and defaulting to HTTP.
Undo change introduced in d4643d6 which caused iPAddress match to be
ignored if dNSName was present but did not match.
Also, if iPAddress is present but does not match, and dNSName is not
present, fail as no-match. Prior to this change in such a case the CN
would be checked for a match.
Bug: https://github.com/curl/curl/issues/959
Reported-by: wmsch@users.noreply.github.com
Mark's new document about HTTP Retries
(https://mnot.github.io/I-D/httpbis-retry/) made me check our code and I
spotted that we don't retry failed HEAD requests which seems totally
inconsistent and I can't see any reason for that separate treatment.
So, no separate treatment for HEAD starting now. A HTTP request sent
over a reused connection that gets cut off before a single byte is
received will be retried on a fresh connection.
Made-aware-by: Mark Nottingham
Makes libcurl work in communication with gstreamer-based RTSP
servers. The original code validates the session id to be in accordance
with the RFC. I think it is better not to do that:
- For curl the actual content is a don't care.
- The clarity of the RFC is debatable, is $ allowed or only as \$, that
is imho not clear
- Gstreamer seems to url-encode the session id but % is not allowed by
the RFC
- less code
With this patch curl will correctly handle real-life lines like:
Session: biTN4Kc.8%2B1w-AF.; timeout=60
Bug: https://curl.haxx.se/mail/lib-2016-08/0076.html
All compilers used by cmake in Windows should support large files.
- Add test SIZEOF_OFF_T
- Remove outdated test SIZEOF_CURL_OFF_T
- Turn on USE_WIN32_LARGE_FILES in Windows
- Check for 'Largefile' during the features output
Since the server can at any time send a HTTP/2 frame to us, we need to
wait for the socket to be readable during all transfers so that we can
act on incoming frames even when uploading etc.
Reminded-by: Tatsuhiro Tsujikawa
In order to make MBEDTLS_DEBUG work, the debug threshold must be unequal
to 0. This patch also adds a comment how mbedtls must be compiled in
order to make debugging work, and explains the possible debug levels.
After a few wasted hours hunting down the reason for slowness during a
TLS handshake that turned out to be because of TCP_NODELAY not being
set, I think we have enough motivation to toggle the default for this
option. We now enable TCP_NODELAY by default and allow applications to
switch it off.
This also makes --tcp-nodelay unnecessary, but --no-tcp-nodelay can be
used to disable it.
Thanks-to: Tim Rühsen
Bug: https://curl.haxx.se/mail/lib-2016-06/0143.html
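For the application that wants the old behavior back (fragment, assuming
an initialized easy handle):

    /* TCP_NODELAY is now on by default; this restores Nagle's algorithm */
    curl_easy_setopt(curl, CURLOPT_TCP_NODELAY, 0L);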
When the input stream for curl is stdin and that input is not a file but
generated by a script, curl can truncate the data transfer to an
arbitrary size since a partial packet is treated as end of transfer by
TFTP.
Fixes#857
Previously, passing a timeout of zero to Curl_expire() was a magic code
for clearing all timeouts for the handle. That is now instead made with
the new Curl_expire_clear() function and thus a 0 timeout is fine to set
and will trigger a timeout ASAP.
This will help removing short delays, in particular notable when doing
HTTP/2.
Regression added in 790d6de485. That change was added to avoid one
particular transfer starving out others. But when aborting due to
reading the maxcount, the connection must be marked to be read from
again without first doing a select as for some protocols (like SFTP/SCP)
the data may already have been read off the socket.
Reported-by: Dan Donahue
Bug: https://curl.haxx.se/mail/lib-2016-07/0057.html
If a call to GetSystemDirectory fails, the `path` pointer that was
previously allocated would be leaked. This makes sure that `path` is
always freed.
Closes#938
This is a follow up to the parent commit dcdd4be which fixes one leak
but creates another by failing to free the credentials handle if out of
memory. Also there's a second location a few lines down where we fail to
do same. This commit fixes both of those issues.
- the expression of an 'if' was always true
- a 'while' contained a condition that was always true
- use 'if(k->exp100 > EXP100_SEND_DATA)' instead of 'if(k->exp100)'
- fixed a typo
Closes#889
... as otherwise we could get a 0 which would count as no error and we'd
wrongly continue and could end up segfaulting.
Bug: https://curl.haxx.se/mail/lib-2016-06/0052.html
Reported-by: 暖和的和暖
Prior to this change we called Curl_ssl_getsessionid and
Curl_ssl_addsessionid regardless of whether session ID reusing was
enabled. According to comments that is in case session ID reuse was
disabled but then later enabled.
The old way was not intuitive and probably not something users expected.
When a user disables session ID caching I'd guess they don't expect the
session ID to be cached anyway in case the caching is later enabled.
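For reference, the knob in question (fragment, assuming an initialized
easy handle):

    /* with caching disabled, no session IDs are now stored or looked up */
    curl_easy_setopt(curl, CURLOPT_SSL_SESSIONID_CACHE, 0L);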
- Enable protocol family logic for IPv6 resolves even when support
for synthesized addresses is enabled.
This is a follow up to the parent commit that added support for
synthesized IPv6 addresses from IPv4 on iOS/OS X. The protocol family
logic needed for IPv6 was inadvertently excluded if support for
synthesized addresses was enabled.
Bug: https://github.com/curl/curl/issues/863
Ref: https://github.com/curl/curl/pull/866
Ref: https://github.com/curl/curl/pull/867
Use getaddrinfo() to resolve the IPv4 address literal on iOS/Mac OS X
if the current network interface doesn't support IPv4 but supports
IPv6, NAT64, and DNS64.
Closes #866
Fixes #863
Calling QueryContextAttributes with SECPKG_ATTR_APPLICATION_PROTOCOL
fails on Windows < 8.1 so we need to disable ALPN on these OS versions.
Inspiration provided by: Daniel Seither
Closes #848
Fixes #840
- Change the parser to not require a minor version for HTTP/2.
HTTP/2 connection reuse broke when we changed from HTTP/2.0 to HTTP/2
in 8243a95 because the parser still expected a minor version.
Bug: https://github.com/curl/curl/issues/855
Reported-by: Andrew Robbins, Frank Gevaerts
Sessionid cache management is inseparable from managing individual
session lifetimes. E.g. for reference-counted sessions (like those in
SChannel and OpenSSL engines) every session addition and removal
should be accompanied with refcount increment and decrement
respectively. Failing to do so synchronously leads to a race condition
that causes symptoms like use-after-free and memory corruption.
This commit:
- makes existing session cache locking explicit, thus allowing
individual engines to manage lock's scope.
- fixes OpenSSL and SChannel engines by putting refcount management
inside this lock's scope in relevant places.
- adds these explicit locking calls to other engines that use
sessionid cache to accommodate this change. Note, however,
that it is unknown whether any of these engines could also have
this race.
Bug: https://github.com/curl/curl/issues/815
Fixes #815
Closes #847
Mostly in order to support broken web sites that redirect to broken URLs
that are accepted by browsers.
Browsers are typically even more lenient than this; as per the WHATWG
URL spec they should allow an _infinite_ amount. I tested 8000 slashes
with Firefox and it just worked.
Added test cases 1141, 1142 and 1143 to verify the new parser.
Closes#791
Regression from the previous *printf() rearrangements; this file failed
to include the correct header to make sure snprintf() works universally.
Reported-by: Moti Avrahami
Bug: https://curl.haxx.se/mail/lib-2016-05/0196.html
While compiling lib/curl_multibyte.c with '-DUSE_WIN32_IDN' etc. I was
getting:
f:\mingw32\src\inet\curl\lib\memdebug.h(38): error C2054: expected '('
to follow 'CURL_EXTERN'
f:\mingw32\src\inet\curl\lib\memdebug.h(38): error C2085:
'curl_domalloc': not in formal parameter list
...as otherwise the TLS libs will skip the CN/SAN check and just allow
connection to any server. curl previously skipped this function when SNI
wasn't used or when connecting to a host specified as an IP address.
CVE-2016-3739
Bug: https://curl.haxx.se/docs/adv_20160518A.html
Reported-by: Moti Avrahami
CID 1024412: Memory - illegal accesses (OVERRUN). Claimed to happen when
we run over 'workend' but the condition says <= workend and for all I
can see it should be safe. Compensating for the warning by adding a byte
margin in the buffer.
Also, removed the extra brace level indentation in the code and made it
so that 'workend' is only assigned once within the function.
The proper FTP wildcard init is now more properly done in Curl_pretransfer()
and the corresponding cleanup in Curl_close().
The previous placement of the init/cleanup code made the internal
pointer be NULL when this feature was used with the multi_socket() API,
as it was done within the curl_multi_perform() function.
Reported-by: Jonathan Cardoso Machado
Fixes#800
Prior to this change a width arg could be erroneously output, and also
width and precision args could not be used together without crashing.
"%0*d%s", 2, 9, "foo"
Before: "092"
After: "09foo"
"%*.*s", 5, 2, "foo"
Before: crash
After: " fo"
Test 557 is updated to verify this and more
The new way of disabling certificate verification doesn't work on
Mountain Lion (OS X 10.8) so we need to use the old way in that version
too. I've tested this solution on versions 10.7.5, 10.8, 10.9, 10.10.2
and 10.11.
Closes#802
curl's representation of HTTP/2 responses involves transforming the
response to a format that is similar to HTTP/1.1. Prior to this change,
curl would do this by separating header names and values with only a
colon, without introducing a space after the colon.
While this is technically a valid way to represent an HTTP/1.1 header
block, it is much more common to see a space following the colon. This
change introduces that space, to ensure that incautious tools are safely
able to parse the header block.
This also ensures that the difference between the HTTP/1.1 and HTTP/2
response layout is as minimal as possible.
Bug: https://github.com/curl/curl/issues/797
Closes #798
Fixes #797
... introduced in curl-7_48_0-293-g2968c83:
Error: COMPILER_WARNING:
lib/vtls/openssl.c: scope_hint: In function ‘Curl_ossl_check_cxn’
lib/vtls/openssl.c:767:15: warning: conversion to ‘int’ from ‘ssize_t’
may alter its value [-Wconversion]
- In the case of recv error, limit returning 'connection still in place'
to EINPROGRESS, EAGAIN and EWOULDBLOCK.
This is an improvement on the parent commit which changed the openssl
connection check to use recv MSG_PEEK instead of SSL_peek.
Ref: https://github.com/curl/curl/commit/856baf5#comments
Calling SSL_peek can cause bytes to be read from the raw socket which in
turn can upset the select machinery that determines whether there's data
available on the socket.
Since Curl_ossl_check_cxn only tries to determine whether the socket is
alive and doesn't actually need to see the bytes SSL_peek seems like
the wrong function to call.
We're able to occasionally reproduce a connect timeout due to this
bug. What happens is that Curl doesn't know to call SSL_connect again
after the peek happens since data is buffered in the SSL buffer and thus
select won't fire for this socket.
Closes#795
Only protocols that actually have a protocol registered for ALPN and NPN
should try to get that negotiated in the TLS handshake. That is only
HTTPS (well, http/1.1 and http/2) right now. Previously ALPN and NPN
would wrongly be used in all handshakes if libcurl was built with it
enabled.
Reported-by: Jay Satiro
Fixes#789
Sometimes, in systems with both ipv4 and ipv6 addresses but where the
network doesn't support ipv6, Curl_is_connected returns an error
(intermittently) even if the ipv4 socket connects successfully.
This happens because there's a for-loop that iterates over the sockets,
but the error variable is not reset when the ipv4 socket is checked and
is ok. This patch fixes the problem by setting error to 0 when checking
the second socket while not yet having a result.
Fixes#794
curl_printf.h defines printf to curl_mprintf, etc. This can cause
problems with external headers which may use
__attribute__((format(printf, ...))) markers etc.
To avoid that they cause problems with system includes, we include
curl_printf.h after any system headers. That makes these three headers
always come last, kept in this order:
curl_printf.h
curl_memory.h
memdebug.h
None of them include system headers, they all do funny #defines.
Reported-by: David Benjamin
Fixes#743
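The tail of a typical libcurl source file then ends up like this:

    #include <stdio.h>   /* ...and any other system headers first */

    /* the three headers below stay last, in this exact order */
    #include "curl_printf.h"
    #include "curl_memory.h"
    #include "memdebug.h"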
Mostly because they're not needed, because memdebug.h is always included
last of all headers so the others already included the correct ones.
But also, starting now we don't want this to accidentally include any
system headers, as the header included _before_ this header may add
defines and other fun stuff that we won't want used in system includes.
Previously, connections were closed immediately before the user had a
chance to extract the socket when the proxy required Negotiate
authentication.
This regression was brought in with the security fix in commit
79b9d5f1a4
Closes #655
WinSock destroys the recv() buffer if send() has failed. As a result,
the server response may be lost if the server sent it while curl was
still sending the request. This behavior is noticeable with short HTTP
server replies if libcurl uses several send() calls for the request
(usually for POST requests). To work around this problem, libcurl now
uses recv() before every send() and keeps received data in an
intermediate buffer for further processing.
Fixes: #657
Closes: #668
This commit fixes a Clang warning introduced in curl-7_48_0-190-g8f72b13:
Error: CLANG_WARNING:
lib/connect.c:1120:11: warning: The right operand of '==' is a garbage value
1118| }
1119|
1120|-> if(-1 == rc)
1121| error = SOCKERRNO;
1122| }
... but output the non-stripped version of the line, even if that can
then make the script identify the wrong position in the line at times.
Showing the line stripped (ie without comments) is just too surprising.
- Error if a header line is larger than supported.
- Warn if cumulative header line length may be larger than supported.
- Allow spaces when parsing the path component.
- Make sure each header line ends in \r\n. This fixes an out of bounds
  read.
- Disallow header continuation lines until we decide what to do.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
This commit ensures that streams which were closed in the
on_stream_close callback get passed to http2_handle_stream_close.
Previously, this might not happen. To achieve this, we increment the
drain property to forcibly call the recv function for that stream.
To more accurately check that we have no pending events before shutting
down the HTTP/2 session, we sum up the drain property into
http_conn.drain_total. We only shut down the session if that value is 0.
With this commit, when a stream is closed before the response header
fields have been read, error code CURLE_HTTP2_STREAM is returned even if
the HTTP/2 level error is NO_ERROR. This signals the upper layer that
the stream was closed by error, just like a TCP connection close in
HTTP/1.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
This commit ensures that data from the network are processed before the
HTTP/2 session is terminated. This is achieved by pausing nghttp2
whenever a different stream than the current easy handle receives data.
This commit also fixes the bug that processing sometimes hangs when
multiple HTTP/2 streams are multiplexed.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
Previously, when a stream was closed with an error other than
NGHTTP2_NO_ERROR by RST_STREAM, the underlying TCP connection was
dropped. This is undesirable since there may be other streams
multiplexed on it that are perfectly fine. This change introduces the
new error code CURLE_HTTP2_STREAM, which indicates a stream error that
only affects the relevant stream while the connection is kept open. The
existing CURLE_HTTP2 means a connection error in general.
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
... but ignore EAGAIN if the stream has ended so that we don't end up in
a loop. This is a follow-up to c8ab613 in order to avoid the problem
d261652 was made to fix.
Reported-by: Jay Satiro
Clues-provided-by: Tatsuhiro Tsujikawa
Discussed in #750
As these two options provide identical functionality, the former for
SOCKS5 proxies and the latter for HTTP proxies, the two options have
been merged together.
As such CURLOPT_SOCKS5_GSSAPI_SERVICE is marked as deprecated as of
7.49.0.
Calculate the service name and proxy service names locally, rather than
in url.c which will allow for us to support overriding the service name
for other protocols such as FTP, IMAP, POP3 and SMTP.
It turns out the Google GFE HTTP/2 servers send a PING frame immediately
after a stream ends and its last DATA has been received by curl. So if
we don't drain that from the socket, it makes the socket readable in
subsequent checks and libcurl then (wrongly) assumes the connection is
dead when trying to reuse the connection.
Reported-by: Joonas Kuorilehto
Discussed in #750
Although this should never happen due to the relationship between the
'mech' and 'resp' variables, and the way they are allocated together,
it does cause problems for code analysis tools:
V595 The 'mech' pointer was utilized before it was verified against
nullptr. Check lines: 376, 381. curl_sasl.c 376
Bug: https://github.com/curl/curl/issues/745
Reported-by: Alexis La Goutte
This wouldn't cause a problem because of the way the function is called,
but prior to this change, we were processing the challenge message when
the credentials were NULL rather than when the challenge message was
populated.
This also brings this part of the Kerberos 5 code in line with the
Negotiate code.
Although mutual authentication is currently turned off and can only be
enabled by changing libcurl source code, authentication using Kerberos
5 has been broken since commit 79543caf90 in this use case.
This wouldn't cause a problem because of the way the function is called,
but prior to this change, we were processing the challenge message when
the credentials were NULL rather than when the challenge message was
populated.
This also brings this part of the Kerberos 5 code in line with the
Negotiate code.
Prior to this change, we were generating the output token when the
credentials were NULL rather than when the output token was NULL.
This also brings this part of the Kerberos 5 code in line with the
Negotiate code.
Prior to this change, we were generating the SPN in the SSPI code when
the credentials were NULL and in the GSS-API code when the context was
empty. It is better to decouple the SPN generation from these checks
and only generate it when the SPN itself is NULL.
This also brings this part of the Kerberos 5 code in line with the
Negotiate code.
It offers extra info from nghttp2 in certain error cases, like for
example when trying prior-knowledge http2 on a server that doesn't speak
http2 at all. The error message is passed on as a verbose message to
libcurl.
Discussed in #722
The error callback was added in nghttp2 1.9.0
For consistency with the spnego and oauth2 code, moved the setting of
the host name outside of the Curl_auth_create_gssapi_user_message()
function.
This will allow us to more easily override it in the future.
I had accidentally used the proxy server name for the host and the host
server name for the proxy in commit ad5e9bfd5d and 6d6f9ca1d9. Whilst
Windows SSPI was quite happy with this, GSS-API wasn't.
Thanks-to: Michael Osipov
When an upload is done, there are two places where that can be detected
and only one of them would rewind the input stream - which sometimes is
necessary for example when doing NTLM HTTP POSTs and more.
This could then end up with libcurl hanging.
Figured-out-by: Isaac Boukris
Reported-by: Anatol Belski
Fixes#741
So that we only do the extra typedefs in curl_memory.h when we really
need to and avoid double typedefs.
follow-up commit to 7218b52c49
Thanks-to: Steve Holme
The define is not in our name space and is therefore not protected by
our API promises.
It was only really used by libcurl internals but was mostly erased from
there already in 8aabbf5 (March 2015). This is supposedly the final
death blow to that define from everywhere.
As a side-effect, making sure _MPRINTF_REPLACE is gone and not used, I
made the lib tests in tests/libtest/ use curl_printf.h for its redefine
magic and then subsequently the use of sprintf() got banned in the tests
as well (as it is in libcurl internals) and I then replaced them all
with snprintf().
In the unlikely event that any user is actually using this define and
gets sad by this change, it is very easily copied to the user's own
code.
Supports HTTP/2 over clear TCP
- Optimize switching to HTTP/2 by removing calls to init and setup
before switching. Switching will eventually call setup and setup calls
init.
- Supports a new version setting to "force" the use of HTTP/2 over
clean TCP
- Add command line parameter "--http2-prior-knowledge" to the curl
command line tool.
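In libcurl terms, assuming the released enum name
CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE and an initialized easy handle:

    /* speak HTTP/2 from the first byte, skipping the Upgrade: dance */
    curl_easy_setopt(curl, CURLOPT_HTTP_VERSION,
                     CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE);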
The code copied one byte from a 32bit integer, which works fine as long
as the byte order is the same. Not a fine assumption. Reported by PVS
Studio.
Reported-by: Alexis La Goutte
When compiling with OpenSSL 1.1.0 (so that the HAVE_X509_GET0_SIGNATURE
&& HAVE_X509_GET0_EXTENSIONS pre-processor block is active), Visual C++
14 complains:
warning C4701: potentially uninitialized local variable 'palg' used
warning C4701: potentially uninitialized local variable 'psig' used
Also display the GSS_C_GSS_CODE (major code) when specified instead of
only GSS_C_MECH_CODE (minor code).
In addition, the old code was printing a colon twice after the prefix
and also miscalculated the length of the buffer in between calls to
gss_display_status (the length of ": " was missing).
Also, gss_buffer is not guaranteed to be NULL terminated and thus need
to restrict reading by its length.
Closes#738
Since commit a5aec58 the handler schemes need to match for the
connections to be reused, and for HTTP/2 multiplexing to work, reusing
connections is very important!
Closes#736
Renamed the header and source files for this module as they are HTTP
specific and as such, they should use the same naming convention as
other HTTP authentication source files do - this reverts commit
260ee6b7bf.
Note: We could also rename curl_ntlm_wb.[c|h], however, the Winbind
code needs separating from the HTTP protocol and migrating into the
vauth directory, thus adding support for Winbind to the SASL based
protocols such as IMAP, POP3 and SMTP.
libidn's tld_check_lz returns an error offset of the first character
that it failed to process; however, that offset is not a byte offset and
may not even be in the locale encoding, therefore we can't use it to
show the user the character that failed to process.
Bug: https://github.com/curl/curl/issues/731
Reported-by: Karlson2k
As the GSS-API and SSPI based source files are no longer library/API
specific, following the extraction of that authentication code to the
vauth directory, combine these files rather than maintain two separate
versions.
Not picked up by checksrc or Visual Studio but by my own code review;
this would have broken Intel-based Unix builds. Perhaps I should learn
to type on my laptop's keyboard before committing!
Updated the makefiles and Visual Studio project files to support moving
the authentication code to the new lib/vauth directory that was started
in commit 0d04e859e1.
warning C4701: potentially uninitialized local variable 'size' used
Technically this can't happen, as the usage of 'size' is protected by
'if(parsed)' and 'parsed' is only set after 'size' has been parsed.
Anyway, let's keep the compiler happy.
formdata.c:390: warning: cast from pointer to integer of different size
Introduced in commit ca5f9341ef this happens because a char*, which is
32-bits wide in 32-bit land, is being cast to a curl_off_t which is
64-bits wide where 64-bit integers are supported by the compiler.
This doesn't happen in 64-bit land as a pointer is the same size as a
curl_off_t.
This fix doesn't address the fact that a 64-bit value cannot be used
for CURLFORM_CONTENTLEN when set in a form array and compiled on 32-bit
platforms, but it does at least suppress the compilation warning.
... to allow users to see which specific wildcard matched when one is
used.
Also minor logic cleanup to simplify the code, and I removed all tabs
from verbose strings.
Simplify the code by using a single entry that looks for a socket in the
socket hash. As indicated in #712, the code looked for CURL_SOCKET_BAD
at some point and that is ineffective/wrong and this makes it easier to
avoid that.
... as it implies we need to check for that on all the other variable
references as well (as Coverity otherwise warns us for missing NULL
checks), and we're already making sure that the pointer is never NULL.
RFC 6265 section 4.1.1 spells out that the first name/value pair in the
header is the actual cookie name and content, while the following are
the parameters.
libcurl previously had a more liberal approach which causes significant
problems when introducing new cookie parameters, like the suggested new
cookie priority draft.
The previous logic read all n/v pairs from left to right, and the first
name that wasn't a known parameter name would be used as the cookie
name, thus accepting "Set-Cookie: Max-Age=2; person=daniel" to be
a cookie named 'person' while an RFC 6265 compliant parser should
consider that to be a cookie named 'Max-Age' with an (unknown) parameter
'person'.
Fixes#709
Such a return value isn't documented but could still happen, and the
curl tool code checks for it. It would happen when the underlying
Curl_poll() function returns an error. Starting now we mask that error
as a user of curl_multi_wait() would have no way to handle it anyway.
Reported-by: Jay Satiro
Closes#707
I got a crash with this stack:
curl/lib/url.c:2873 (Curl_removeHandleFromPipeline)
curl/lib/url.c:2919 (Curl_getoff_all_pipelines)
curl/lib/multi.c:561 (curl_multi_remove_handle)
curl/lib/url.c:415 (Curl_close)
curl/lib/easy.c:859 (curl_easy_cleanup)
Closes#704
Prior to this change when a single protocol CURL_SSLVERSION_ was
specified by the user that version was set only as the minimum version
but not as the maximum version as well.
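So after this change (fragment, assuming an initialized easy handle):

    /* now pins both the minimum and the maximum version to TLS 1.2 */
    curl_easy_setopt(curl, CURLOPT_SSLVERSION, CURL_SSLVERSION_TLSv1_2);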
In makefile.m32, option -ssh2 (libssh2) automatically implied -ssl
(OpenSSL) option, with no way to override it with -winssl. Since both
libssh2 and curl support using Windows's built-in SSL backend, modify
the logic to allow that combination.
Prior to this change cookies with an expiry date that failed parsing
and were converted to session cookies could be purged in remove_expired.
Bug: https://github.com/curl/curl/issues/697
Reported-by: Seth Mos
Prevent a crash if 2 (or more) requests are made to the same host and
pipelining is enabled and the connection does not complete.
Bug: https://github.com/curl/curl/pull/690
Some systems have special files that report as 0 bytes big, but still
contain data that can be read (for example /proc/cpuinfo on
Linux). Starting now, a zero byte size is considered "unknown" size and
will be read as far as possible anyway.
Reported-by: Jesse Tan
Closes#681
... as when pipelining is used, we read things into a unified buffer and
we don't do that with HTTP/2. This could then easily make programs that
set CURLMOPT_PIPELINING = CURLPIPE_HTTP1|CURLPIPE_MULTIPLEX to get data
intermixed or plain broken between HTTP/2 streams.
Reported-by: Anders Bakken
The two options are almost the same, except in the case of OpenSSL:
CURLINFO_TLS_SESSION OpenSSL session internals is SSL_CTX *.
CURLINFO_TLS_SSL_PTR OpenSSL session internals is SSL *.
For backwards compatibility we couldn't modify CURLINFO_TLS_SESSION to
return an SSL pointer for OpenSSL.
Also, add support for the 'internals' member to point to SSL object for
the other backends axTLS, PolarSSL, Secure Channel, Secure Transport and
wolfSSL.
Bug: https://github.com/curl/curl/issues/234
Reported-by: dkjjr89@users.noreply.github.com
Bug: https://curl.haxx.se/mail/lib-2015-09/0127.html
Reported-by: Michael König
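A sketch of the OpenSSL case (fragment; needs <openssl/ssl.h> for the
SSL type and an initialized, performed easy handle):

    struct curl_tlssessioninfo *info = NULL;
    curl_easy_getinfo(curl, CURLINFO_TLS_SSL_PTR, &info);
    if(info && info->backend == CURLSSLBACKEND_OPENSSL) {
      SSL *ssl = (SSL *)info->internals; /* an SSL *, not an SSL_CTX * */
      /* inspect the live OpenSSL session here */
    }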
The internal Curl_done() function uses Curl_expire() at times and that
uses the timeout list. Better clean up the list once we're done using
it. This caused a segfault.
Reported-by: 蔡文凱
Bug: https://curl.haxx.se/mail/lib-2016-02/0097.html
- Add tests.
- Add an example to CURLOPT_TFTP_NO_OPTIONS.3.
- Add --tftp-no-options to expose CURLOPT_TFTP_NO_OPTIONS.
Bug: https://github.com/curl/curl/issues/481
Some TFTP server implementations ignore the "TFTP Option extension"
(RFC 1782-1784, 2347-2349), or implement it in a flawed way, causing
problems with libcurl. Another switch for curl_easy_setopt
"CURLOPT_TFTP_NO_OPTIONS" is introduced which prevents libcurl from
sending TFTP option requests to a server, avoiding many problems caused
by faulty implementations.
Bug: https://github.com/curl/curl/issues/481
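Enabling the new option is a single boolean setopt (fragment, assuming
an initialized easy handle):

    /* don't request TFTP options from (possibly broken) servers */
    curl_easy_setopt(curl, CURLOPT_TFTP_NO_OPTIONS, 1L);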
If any parameter in a HTTP DIGEST challenge message is present multiple
times, memory allocated for all but the last entry should be freed.
Bug: https://github.com/curl/curl/pull/667
At one point during the development of HTTP/2, the commit 133cdd29ea
introduced automatic decompression of Content-Encoding as that was what
the spec said then. Now however, HTTP/2 should work the same way as
HTTP/1 in this regard.
Reported-by: Kazuho Oku
Closes#661
nghttp2 callback deals with TLS layer and therefore the header does not
need to be broken into chunks.
Bug: https://github.com/curl/curl/issues/659
Reported-by: Kazuho Oku
Since we didn't keep the input argument around after having called
mbedtls, it could end up accessing the wrong memory when figuring out
the ALPN protocols.
Closes#642
As of https://boringssl-review.googlesource.com/#/c/6980/, almost all of
the BoringSSL #ifdefs in cURL should be unnecessary:
- BoringSSL provides no-op stubs for compatibility which replaces most
#ifdefs.
- DES_set_odd_parity has been in BoringSSL for nearly a year now. Remove
the compatibility codepath.
- With a small tweak to an extend_key_56_to_64 call, the NTLM code
builds fine.
- Switch OCSP-related #ifdefs to the more generally useful
OPENSSL_NO_OCSP.
The only #ifdefs which remain are Curl_ossl_version and the #undefs to
work around OpenSSL and wincrypt.h name conflicts. (BoringSSL leaves
that to the consumer. The in-header workaround makes things sensitive to
include order.)
This change errs on the side of removing conditionals despite many of
the restored codepaths being no-ops. (BoringSSL generally adds no-op
compatibility stubs when possible. OPENSSL_VERSION_NUMBER #ifdefs are
bad enough!)
Closes#640
It turns out Firefox and Chrome both allow spaces in cookie names and
there are sites out there using that.
Turned out the code meant to strip off trailing space from cookie names
didn't work. Fixed now.
Test case 8 modified to verify both these changes.
Closes#639
When trying to verify a peer without having any root CA certificates
set, this makes libcurl use the TLS library's built in default as
fallback.
Closes#569
.. also fix a conversion bug in the unused function
curl_win32_ascii_to_idn().
And remove wprintfs on error (Jay).
Bug: https://github.com/curl/curl/pull/637