vtls/gtls.c: In function ‘Curl_gtls_data_pending’:
vtls/gtls.c:1429:3: error: this ‘if’ clause does not guard... [-Werror=misleading-indentation]
if(conn->proxy_ssl[connindex].session &&
^~
vtls/gtls.c:1433:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the ‘if’
return res;
* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications as opposed to HTTP proxies that receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well (a brief libcurl usage sketch follows at the end of this entry).
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent from their --foo counterparts,
except --proxy-crlfile which defaults to --crlfile and --proxy-capath
which defaults to --capath.
Curl now also supports a %{proxy_ssl_verify_result} --write-out variable,
similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
- Fix connection reuse for when the proposed new conn 'needle' has a
specified local port but does not have a specified device interface.
Bug: https://curl.haxx.se/mail/lib-2016-11/0137.html
Reported-by: bjt3[at]hotmail.com
Visual C++ now complains about implicitly casting time_t (64-bit) to
long (32-bit). Fix this by changing some variables from long to time_t,
or explicitly casting to long where the public interface would be
affected.
Closes#1131
- In Curl_http2_switched don't call memcpy when src is NULL.
Curl_http2_switched can be called like:
Curl_http2_switched(conn, NULL, 0);
.. and prior to this change memcpy was then called like:
memcpy(dest, NULL, 0)
.. causing address sanitizer to warn:
http2.c:2057:3: runtime error: null pointer passed as argument 2, which
is declared to never be null
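A minimal sketch of the guard this implies; memcpy() with a NULL source is
undefined behavior even for a zero length, so the call is simply skipped when
there is nothing to copy (illustrative helper, not the actual
Curl_http2_switched code):

  #include <string.h>

  /* Only call memcpy() when there is actually data to copy:
     memcpy(dest, NULL, 0) is undefined behavior even though no bytes
     would be moved. */
  static void stash_bytes(char *dest, const char *src, size_t nread)
  {
    if(nread)
      memcpy(dest, src, nread);
  }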
Now Curl_rand() is made to fail if it cannot get the necessary random
level.
Changed the proto of Curl_rand() slightly to provide a number of ints at
once.
Moved out from vtls, since it isn't a TLS function and vtls provides
Curl_ssl_random() for this to use.
Discussion: https://curl.haxx.se/mail/lib-2016-11/0119.html
Previously, the [host] part was just ignored, which made libcurl accept
strange URLs that misled users, like "file://etc/passwd", which might've
looked like it refers to "/etc/passwd" but actually refers to just
"/passwd" since the "etc" is an ignored host name.
Reported-by: Mike Crowe
Assisted-by: Kamil Dudka
- Fix GnuTLS code for CURL_SSLVERSION_TLSv1_2 that broke when the
TLS 1.3 support was added in 6ad3add.
- Homogenize across code for all backends the error message when TLS 1.3
is not available to "<backend>: TLS 1.3 is not yet supported".
- Return an error when a user-specified ssl version is unrecognized.
---
Prior to this change our code for some of the backends used the
'default' label in the switch statement (i.e. version unrecognized) for
ssl.version and treated it the same as CURL_SSLVERSION_DEFAULT.
Bug: https://curl.haxx.se/mail/lib-2016-11/0048.html
Reported-by: Kamil Dudka
We're mostly saying just "curl" in lower case these days so here's a big
cleanup to adapt to this reality. A few instances are left as the
project could still formally be considered called cURL.
- Call Curl_initinfo on init and duphandle.
Prior to this change the statistical and informational variables were
simply zeroed by calloc on easy init and duphandle. While zero is the
correct default value for almost all info variables, there is one where
it isn't (filetime initializes to -1).
Bug: https://github.com/curl/curl/issues/1103
Reported-by: Neal Poole
...to use the public function curl_strnequal(). This isn't ideal because
it adds extra overhead to any internal calls to checkprefix.
follow-up to 95bd2b3e
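For illustration only, checkprefix() can be thought of as roughly the
following macro built on the public curl_strnequal(); the exact definition and
argument order in the real header may differ, and the overhead mentioned above
comes from going through the public function rather than an internal helper:

  /* hypothetical sketch: case-insensitive "does str start with prefix?" */
  #define checkprefix(prefix, str) curl_strnequal(prefix, str, strlen(prefix))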
As they are after all part of the public API. Saves space and reduces
complexity. Remove the strcase defines from the curlx_ family.
Suggested-by: Dan Fandrich
Idea: https://curl.haxx.se/mail/lib-2016-10/0136.html
This should fix the "warning: 'curl_strequal' redeclared without
dllimport attribute: previous dllimport ignored" message and subsequent
link error on Windows because of the missing CURL_EXTERN on the
prototype.
These two public functions have been mentioned as deprecated for a very
long time, but since they are still part of the API and ABI we need
to keep them around.
... to make it less likely that we forget that the function actually
does case insensitive compares. Also replaced several calls to the
function with a plain strcmp when case sensitivity is not an issue (like
comparing with "-").
If the requested size is zero, bail out with error instead of doing a
realloc() that would cause a double-free: realloc(0) acts as a free()
and then there's a second free in the cleanup path.
CVE-2016-8619
Bug: https://curl.haxx.se/docs/adv_20161102E.html
Reported-by: Cure53
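A generic C sketch of the pattern this fix enforces: reject a zero size
instead of letting realloc() act as free(), which would otherwise be followed
by a second free() in the cleanup path (illustrative code, not the patched
curl function):

  #include <stdlib.h>

  /* Grow a buffer, rejecting zero-size requests: realloc(ptr, 0) may free
     ptr and return NULL, and the caller's cleanup path would then free the
     same pointer again (a double-free). */
  static int grow_buffer(char **bufp, size_t newsize)
  {
    char *newbuf;
    if(!newsize)
      return -1;               /* treat zero as an error, don't realloc */
    newbuf = realloc(*bufp, newsize);
    if(!newbuf)
      return -1;               /* *bufp is still valid and owned by caller */
    *bufp = newbuf;
    return 0;
  }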
Previously it only held references to them, which was reckless as the
thread lock was released so the cookies could get modified by other
handles that share the same cookie jar over the share interface.
CVE-2016-8623
Bug: https://curl.haxx.se/docs/adv_20161102I.html
Reported-by: Cure53
- Change initial message box to mention delay when downloading/parsing.
Since there is no progress meter it was somewhat unexpected that after
choosing a filename nothing appears to happen, when actually the cert
data is in the process of being downloaded and parsed.
- Warn if OpenSSL is not present.
- Use a UTF-8 stream to make the ca-bundle data.
- Save the UTF-8 ca-bundle stream as binary so that no BOM is added.
---
This is a follow-up to d2c6d15 which switched mk-ca-bundle.vbs output to
ANSI due to corrupt UTF-8 output, now fixed.
This change completes making the default certificate bundle output of
mk-ca-bundle.vbs as close as possible to that of mk-ca-bundle.pl, which
should make it easier to review any difference between their output.
Ref: https://github.com/curl/curl/pull/1012
Bring the VBScript version more in line with the perl version:
- Change timestamp to UTC.
- Change URL retrieval to HTTPS-only by default.
- Comment out the options that disabled SSL cert checking by default.
- Assume OpenSSL is present, get SHA256. And add a flag to toggle it.
- Fix cert issuer name output.
The cert issuer output is now ANSI, converted from UTF-8. Prior to this
it was corrupt UTF-8. It turns out that although we can work with UTF-8,
the FSO object that writes the ca-bundle can't write UTF-8, so there will
have to be some alternative if UTF-8 is needed (like an ADODB.Stream).
- Disable the certificate text info feature.
The certificate text info doesn't work properly with any recent OpenSSL.
- Change all predefined Mozilla URLs to HTTPS (Gregory Szorc).
- New option -k to allow URLs other than HTTPS and enable HTTP fallback.
Prior to this change the default URL retrieval mode was to fall back to
HTTP if HTTPS didn't work.
Reported-by: Gregory Szorc
Closes#1012
Several independently reported infinite loops hanging in the
close_all_connections() function when closing a multi handle can be
fixed by first marking the connection to get closed before calling
Curl_disconnect.
This is more fixing-the-symptom rather than the underlying problem
though.
Bug: https://curl.haxx.se/mail/lib-2016-10/0011.html
Bug: https://curl.haxx.se/mail/lib-2016-10/0059.html
Reported-by: Dan Fandrich, Valentin David, Miloš Ljumović
In short the easy handle needs to be disconnected from its connection at
this point since the connection still is serving other easy handles.
In our app we can reliably reproduce a crash in our http2 stress test
that is fixed by this change. I can't easily reproduce the same test in
a small example.
This is the gdb/asan output:
==11785==ERROR: AddressSanitizer: heap-use-after-free on address 0xe9f4fb80 at pc 0x09f41f19 bp 0xf27be688 sp 0xf27be67c
READ of size 4 at 0xe9f4fb80 thread T13 (RESOURCE_HTTP)
#0 0x9f41f18 in curl_multi_remove_handle /path/to/source/3rdparty/curl/lib/multi.c:666
0xe9f4fb80 is located 0 bytes inside of 1128-byte region [0xe9f4fb80,0xe9f4ffe8)
freed by thread T13 (RESOURCE_HTTP) here:
#0 0xf7b1b5c2 in __interceptor_free /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_malloc_linux.cc:45
#1 0x9f7862d in conn_free /path/to/source/3rdparty/curl/lib/url.c:2808
#2 0x9f78c6a in Curl_disconnect /path/to/source/3rdparty/curl/lib/url.c:2876
#3 0x9f41b09 in multi_done /path/to/source/3rdparty/curl/lib/multi.c:615
#4 0x9f48017 in multi_runsingle /path/to/source/3rdparty/curl/lib/multi.c:1896
#5 0x9f490f1 in curl_multi_perform /path/to/source/3rdparty/curl/lib/multi.c:2123
#6 0x9c4443c in perform /path/to/source/src/net/resourcemanager/ResourceManagerCurlThread.cpp:854
#7 0x9c445e0 in ...
#8 0x9c4cf1d in ...
#9 0xa2be6b5 in ...
#10 0xf7aa5780 in asan_thread_start /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
#11 0xf4d3a16d in __clone (/lib/i386-linux-gnu/libc.so.6+0xe716d)
previously allocated by thread T13 (RESOURCE_HTTP) here:
#0 0xf7b1ba27 in __interceptor_calloc /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_malloc_linux.cc:70
#1 0x9f7dfa6 in allocate_conn /path/to/source/3rdparty/curl/lib/url.c:3904
#2 0x9f88ca0 in create_conn /path/to/source/3rdparty/curl/lib/url.c:5797
#3 0x9f8c928 in Curl_connect /path/to/source/3rdparty/curl/lib/url.c:6438
#4 0x9f45a8c in multi_runsingle /path/to/source/3rdparty/curl/lib/multi.c:1411
#5 0x9f490f1 in curl_multi_perform /path/to/source/3rdparty/curl/lib/multi.c:2123
#6 0x9c4443c in perform /path/to/source/src/net/resourcemanager/ResourceManagerCurlThread.cpp:854
#7 0x9c445e0 in ...
#8 0x9c4cf1d in ...
#9 0xa2be6b5 in ...
#10 0xf7aa5780 in asan_thread_start /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
#11 0xf4d3a16d in __clone (/lib/i386-linux-gnu/libc.so.6+0xe716d)
SUMMARY: AddressSanitizer: heap-use-after-free /path/to/source/3rdparty/curl/lib/multi.c:666 in curl_multi_remove_handle
Shadow bytes around the buggy address:
0x3d3e9f20: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f30: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f40: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f50: fd fd fd fd fd fd fd fd fd fd fd fd fd fa fa fa
0x3d3e9f60: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x3d3e9f70:[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9f90: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9fa0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9fb0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x3d3e9fc0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Heap right redzone: fb
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack partial redzone: f4
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==11785==ABORTING
Thread 14 "RESOURCE_HTTP" received signal SIGABRT, Aborted.
[Switching to Thread 0xf27bfb40 (LWP 12324)]
0xf7fd8be9 in __kernel_vsyscall ()
(gdb) bt
#0 0xf7fd8be9 in __kernel_vsyscall ()
#1 0xf4c7ee89 in __GI_raise (sig=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#2 0xf4c803e7 in __GI_abort () at abort.c:89
#3 0xf7b2ef2e in __sanitizer::Abort () at /opt/toolchain/src/gcc-6.2.0/libsanitizer/sanitizer_common/sanitizer_posix_libcdep.cc:122
#4 0xf7b262fa in __sanitizer::Die () at /opt/toolchain/src/gcc-6.2.0/libsanitizer/sanitizer_common/sanitizer_common.cc:145
#5 0xf7b21ab3 in __asan::ScopedInErrorReport::~ScopedInErrorReport (this=0xf27be171, __in_chrg=<optimized out>) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_report.cc:689
#6 0xf7b214a5 in __asan::ReportGenericError (pc=166993689, bp=4068206216, sp=4068206204, addr=3925146496, is_write=false, access_size=4, exp=0, fatal=true) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_report.cc:1074
#7 0xf7b21fce in __asan::__asan_report_load4 (addr=3925146496) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_rtl.cc:129
#8 0x09f41f19 in curl_multi_remove_handle (multi=0xf3406080, data=0xde582400) at /path/to/source3rdparty/curl/lib/multi.c:666
#9 0x09f6b277 in Curl_close (data=0xde582400) at /path/to/source3rdparty/curl/lib/url.c:415
#10 0x09f3354e in curl_easy_cleanup (data=0xde582400) at /path/to/source3rdparty/curl/lib/easy.c:860
#11 0x09c6de3f in ...
#12 0x09c378c5 in ...
#13 0x09c48133 in ...
#14 0x09c4d092 in ...
#15 0x0a2be6b6 in ...
#16 0xf7aa5781 in asan_thread_start (arg=0xf2d22938) at /opt/toolchain/src/gcc-6.2.0/libsanitizer/asan/asan_interceptors.cc:226
#17 0xf5de52b5 in start_thread (arg=0xf27bfb40) at pthread_create.c:333
#18 0xf4d3a16e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:114
Fixes#1083
The closure handle only ever has default timeouts set. To improve the
state somewhat we clone the timeouts from each added handle so that the
closure handle always has the same timeouts as the most recently added
easy handle.
Fixes#739
Curl_select_ready() was the former API that was replaced with
Curl_select_check() a while back and the former arg setup was provided
with a define (in order to leave existing code unmodified).
Now we instead offer SOCKET_READABLE and SOCKET_WRITABLE for the most
common shortcuts where only one socket is checked. They're also more
visibly macros.
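As a hedged sketch of the idea (the underlying helper name and its argument
order are assumptions here, not the verified definitions): the shortcut macros
take just one socket and a timeout in milliseconds and delegate to the general
three-socket check:

  /* hypothetical illustration of the shortcut macros' intent */
  #define SOCKET_READABLE(sock, ms) \
    Curl_socket_check(sock, CURL_SOCKET_BAD, CURL_SOCKET_BAD, ms)
  #define SOCKET_WRITABLE(sock, ms) \
    Curl_socket_check(CURL_SOCKET_BAD, CURL_SOCKET_BAD, sock, ms)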
- Change back behavior so that pipelining is considered possible for
connections that have not yet reached the protocol level.
This is a follow-up to e5f0b1a which had changed the behavior of
checking if pipelining is possible to ignore connections that had
'bits.close' set. Connections that have not yet reached the protocol
level also have that bit set, and we need to consider pipelining
possible on those connections.
This fixes a merge error in commit 7f3df80 caused by commit 332e8d6.
Additionally, this changes Curl_verify_windows_version for Windows App
builds to assume to always be running on the target Windows version.
There seems to be no way to determine the Windows version from a
UWP app. Neither GetVersion(Ex), nor VerifyVersionInfo, nor the
Version Helper functions are supported.
Bug: https://github.com/curl/curl/pull/820#issuecomment-250889878
Reported-by: Paul Joyce
Closes https://github.com/curl/curl/pull/1048
No longer attempt to use "doomed" to-be-closed connections when
pipelining. Prior to this change connections marked for deletion (e.g.
timeout) would be erroneously used, resulting in sporadic crashes.
As originally reported and fixed by Carlo Wood (origin unknown).
Bug: https://github.com/curl/curl/issues/627
Reported-by: Rider Linden
Closes https://github.com/curl/curl/pull/1075
Participation-by: nopjmp@users.noreply.github.com
Not all reply messages were properly checked for their lengths, which
made it possible to access uninitialized memory (but this does not lead
to out of boundary accesses).
Closes#1052
... it no longer takes printf() arguments since it was only really taken
advantage of by one user and it was not written and used in a safe
way. Thus the 'f' is removed from the function name and the proto is
changed.
Although the current code wouldn't end up in badness, it was a risk that
future changes could end up sprintf()ing too large data or passing in a
format string inadvertently.
The previous use of snprintf() could make libcurl silently truncate some
input data and not report that back on overly large input, which could
make data get sent over the network in a bad format.
Example:
$ curl --form 'a=b' -H "Content-Type: $(perl -e 'print "A"x4100')"
Cookies with the same domain but different tailmatching properties are now
considered different and do not replace each other. If a header contains the
following lines then two cookies will be set:
Set-Cookie: foo=bar; domain=.foo.com; expires=Thu Mar 3 GMT 8:56:27 2033
Set-Cookie: foo=baz; domain=foo.com; expires=Thu Mar 3 GMT 8:56:27 2033
This matches Chrome, Opera, Safari, and Firefox behavior. When sending
stored cookies to foo.com, Chrome, Opera, and Firefox send them in the
stored order, while Safari pre-sorts the cookies.
Closes#1050
Add the new option CURLOPT_KEEP_SENDING_ON_ERROR to control whether
sending the request body shall be completed when the server responds
early with an error status code.
This is suitable for manual NTLM authentication.
Reviewed-by: Jay Satiro
Closes https://github.com/curl/curl/pull/904
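A small usage sketch of the new option (the URL and credentials are made up;
the option takes a long, with 1 meaning that the request body keeps being sent
even when an early error response arrives):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_NTLM);
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");
      /* keep sending the body even if the server responds early with an
         error status, e.g. the initial 401 in an NTLM handshake */
      curl_easy_setopt(curl, CURLOPT_KEEP_SENDING_ON_ERROR, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }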
As it seems to be a rarely used cipher suite (for securely established
but _unencrypted_ connections), I believe it is fine not to provide an
alias for the misspelled variant.
LibreSSL defines `OPENSSL_VERSION_NUMBER` as `0x20000000L` for all
versions, so the version is reported as `LibreSSL/2.0.0` for any LibreSSL
version.
This change provides a local OpenSSL_version_num function replacement
returning LIBRESSL_VERSION_NUMBER instead.
Closes#1029
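A hedged sketch of the kind of local replacement described (the function name
mirrors the OpenSSL 1.1.0 API; details in the real patch may differ):

  #include <openssl/crypto.h>

  /* LibreSSL reports 0x20000000L through OPENSSL_VERSION_NUMBER, so return
     the real LIBRESSL_VERSION_NUMBER from a local helper instead */
  #ifdef LIBRESSL_VERSION_NUMBER
  static unsigned long OpenSSL_version_num(void)
  {
    return LIBRESSL_VERSION_NUMBER;
  }
  #endif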
The OpenSSL function CRYPTO_cleanup_all_ex_data() cannot be called
multiple times without crashing - and other libs might call it! We
basically cannot call it without risking a crash. The function is a
no-op since OpenSSL 1.1.0.
Not calling this function only risks a small memory leak with OpenSSL <
1.1.0.
Bug: https://curl.haxx.se/mail/lib-2016-09/0045.html
Reported-by: Todd Short
OpenSSL 1.0.1 and 1.0.2 build an error queue that is stored per-thread
so we need to clean it when easy handles are freed, in case the thread
in which the easy handle was used will be killed. All OpenSSL code in
libcurl should already extract errors in association with the failures,
so clearing this queue here should be harmless at worst.
Fixes#964
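A hedged sketch of what cleaning the per-thread error queue can look like with
OpenSSL 1.0.x (the exact hook point in libcurl is not shown;
ERR_remove_thread_state() is the OpenSSL 1.0.x API and is deprecated in
1.1.0):

  #include <openssl/err.h>

  /* OpenSSL 1.0.1/1.0.2 keep the error queue in per-thread state; clear
     the current thread's queue so nothing lingers if that thread is later
     killed after the easy handle that used OpenSSL has been freed */
  static void clear_thread_error_queue(void)
  {
  #if OPENSSL_VERSION_NUMBER < 0x10100000L
    ERR_remove_thread_state(NULL);  /* NULL means "the current thread" */
  #endif
  }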
NTLM support with mbedTLS was added in 497e7c9 but requires that mbedTLS
is built with the MD4 functions available, which they aren't in default
builds. This now adapts if the functions aren't there and builds libcurl
without NTLM support if so.
Fixes#1004
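A sketch of the compile-time adaptation meant here; MBEDTLS_MD4_C is the real
mbedTLS config macro guarding its MD4 code, while USE_MBEDTLS and USE_NTLM are
used as stand-ins for curl's own build defines:

  /* Only keep curl's NTLM code when mbedTLS provides MD4; default mbedTLS
     builds do not define MBEDTLS_MD4_C. */
  #if defined(USE_MBEDTLS)
  #  include <mbedtls/config.h>
  #  ifndef MBEDTLS_MD4_C
  #    undef USE_NTLM           /* build libcurl without NTLM support */
  #  endif
  #endif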
... like when an HTTP/0.9 response comes back without any headers at all
and just a body, this now prevents that body from being sent to the
callback etc.
Adapted test 1144 to verify.
Fixes#973
Assisted-by: Ray Satiro
Detect support for compiler symbol visibility flags and apply those
according to CURL_HIDDEN_SYMBOLS option.
It should work true to the autotools build except that it tries to unhide
symbols on Windows when requested and prints a warning if it fails.
Ref: https://github.com/curl/curl/issues/981#issuecomment-242665951
Reported-by: Daniel Stenberg
... by partially reverting f975f06033. The allocation could be made by
OpenSSL so the free must be made with OPENSSL_free() to avoid problems.
Reported-by: Harold Stuart
Fixes#1005
... by making sure we don't count down the "upload left" counter when the
uploaded size is unknown; then it is allowed to continue forever.
Fixes#996
Since we're using CURLE_FTP_WEIRD_SERVER_REPLY in imap, pop3 and smtp as
more of a generic "failed to parse", introduce an alias without FTP in
the name.
Closes https://github.com/curl/curl/pull/975
This hash is used to verify the original downloaded certificate bundle
and also included in the generated bundle's comment header. Also
rename related internal symbols to algorithm-agnostic names.
CURLINFO_SSL_VERIFYRESULT does not get the certificate verification
result when SSL_connect fails because of a certificate verification
error.
This fix saves the result of SSL_get_verify_result so that it is
returned by CURLINFO_SSL_VERIFYRESULT.
Closes https://github.com/curl/curl/pull/995
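A hedged sketch of the fix's idea: capture SSL_get_verify_result() even when
SSL_connect() fails, so the value can later be reported through
CURLINFO_SSL_VERIFYRESULT (variable and function names here are illustrative):

  #include <openssl/ssl.h>

  /* Save the certificate verification result regardless of whether the
     handshake itself succeeded. */
  static int do_handshake(SSL *ssl, long *verifyresult)
  {
    int err = SSL_connect(ssl);
    *verifyresult = SSL_get_verify_result(ssl); /* X509_V_OK or an error */
    return err;
  }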
While noErr and errSecSuccess are defined as the same value, the API
documentation states that SecPKCS12Import() returns errSecSuccess if
there were no errors in importing. Ensure that a future change of the
defined value doesn't break (however unlikely) and be consistent with
the API docs.
Speed limits (from CURLOPT_MAX_RECV_SPEED_LARGE &
CURLOPT_MAX_SEND_SPEED_LARGE) were applied simply by comparing the limits
with the cumulative average speed of the entire transfer. While this
might work at times with good/constant connections, in other cases it
can result in the limits simply being "ignored" for more than "short
bursts" (as stated in the man page).
Consider a download that goes on much slower than the limit for some
time (because bandwidth is used elsewhere, the server is slow, whatever
the reason); once things get better, curl would simply ignore the limit
until the average speed (since the beginning of the transfer) reached
the limit. This could make the limit useless for effectively avoiding
use of the entire bandwidth (at least for quite some time).
So instead, we now use a "moving starting point" as reference, and every
time at least as much as the limit has been transferred, we reset
this starting point to the current position. This gets a good limiting
effect that applies to the "current speed" with instant reactivity (in
case of a sudden speed burst).
Closes#971
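A hedged sketch of the "moving starting point" idea described above; this is
not curl's actual progress code, just an illustration of judging the rate
against a reference point that slides forward once a limit's worth of data
has been transferred:

  #include <time.h>

  struct ratelimit {
    long long limit;       /* allowed bytes per second, 0 = unlimited */
    long long ref_bytes;   /* byte count at the reference point */
    time_t ref_time;       /* wall clock at the reference point */
  };

  /* Return how many seconds to pause before transferring more data. */
  static long ratelimit_sleep(struct ratelimit *rl, long long total,
                              time_t now)
  {
    long long since = total - rl->ref_bytes;
    double elapsed = difftime(now, rl->ref_time);
    double needed;
    long wait = 0;

    if(!rl->limit)
      return 0;

    /* how long the bytes moved since the reference *should* have taken */
    needed = (double)since / (double)rl->limit;
    if(elapsed < needed)
      wait = (long)(needed - elapsed) + 1;   /* too fast: pause a bit */

    if(since >= rl->limit) {
      /* at least one limit's worth since the reference: slide it forward
         so the check keeps applying to recent progress, not the whole
         transfer's average */
      rl->ref_bytes = total;
      rl->ref_time = now;
    }
    return wait;
  }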
* Added description to Curl_sspi_free_identity()
* Added parameter and return explanations to Curl_sspi_global_init()
* Added parameter explanation to Curl_sspi_global_cleanup()
With HTTP/2 each transfer is made in an individual logical stream over the
connection, so most errors that previously caused the connection to get
force-closed now instead just kill the stream and not the connection.
Fixes#941
... instead of if() before the switch(), add a default to the switch so
that the compilers don't warn on "warning: enumeration value
'PLATFORM_DONT_CARE' not handled in switch" anymore.
- Disable ALPN on Wine.
- Don't pass input secbuffer when ALPN is disabled.
When ALPN support was added a change was made to pass an input secbuffer
to initialize the context. When ALPN is enabled the buffer contains the
ALPN information, and when it's disabled the buffer is empty. In either
case this input buffer caused problems with Wine and connections would
not complete.
Bug: https://github.com/curl/curl/issues/983
Reported-by: Christian Fillion
Serialise the call to PK11_FindSlotByName() to avoid spurious errors in
a multi-threaded environment. The underlying cause is a race condition
in nssSlot_IsTokenPresent().
Bug: https://bugzilla.mozilla.org/1297397
Closes#985
When we're uploading using FTP and the server issues a tiny pause
before opening the connection to the client's secondary socket, the
client's initial poll() times out, which leads to a second poll() which
does not wait for POLLIN on the secondary socket. So that poll() also
has to time out, creating a long (200ms) pause.
This patch adds the correct flag to the secondary socket, making the
second poll() correctly wait for the connection there too.
Signed-off-by: Ales Novak <alnovak@suse.cz>
Closes#978
Instead of displaying the requested hostname, the one returned
by the SOCKS5 proxy server is used in case of a connection error.
The requested hostname is displayed earlier in the connection sequence.
The upper byte of the port is moved to a temporary variable and
replaced with a 0-byte to make sure the hostname is 0-terminated.
Hooked up the HTTP authentication layer to query the new 'is mechanism
supported' functions when deciding what mechanism to use.
As per commit 00417fd66c existing functionality is maintained for now.
Hooked up the SASL authentication layer to query the new 'is mechanism
supported' functions when deciding what mechanism to use.
For now existing functionality is maintained.
As Windows SSPI authentication calls fail when a particular mechanism
isn't available, introduced these functions for DIGEST, NTLM, Kerberos 5
and Negotiate to allow both HTTP and SASL authentication the opportunity
to query support for a mechanism before selecting it.
For now each function returns TRUE to maintain compatibility with the
existing code when called.
I discovered some people have been using "https://example.com" style
strings as proxy and it "works" (curl doesn't complain) because curl
ignores unknown schemes and then assumes plain HTTP instead.
I think this misleads users into believing curl uses HTTPS to proxies
when it doesn't. Now curl rejects proxy strings using unsupported
schemes instead of just ignoring and defaulting to HTTP.
Undo change introduced in d4643d6 which caused iPAddress match to be
ignored if dNSName was present but did not match.
Also, if iPAddress is present but does not match, and dNSName is not
present, fail as no-match. Prior to this change in such a case the CN
would be checked for a match.
Bug: https://github.com/curl/curl/issues/959
Reported-by: wmsch@users.noreply.github.com
Mark's new document about HTTP Retries
(https://mnot.github.io/I-D/httpbis-retry/) made me check our code and I
spotted that we don't retry failed HEAD requests which seems totally
inconsistent and I can't see any reason for that separate treatment.
So, no separate treatment for HEAD starting now. A HTTP request sent
over a reused connection that gets cut off before a single byte is
received will be retried on a fresh connection.
Made-aware-by: Mark Nottingham
Makes libcurl work in communication with gstreamer-based RTSP
servers. The original code validates the session id to be in accordance
with the RFC. I think it is better not to do that:
- For curl the actual content is a don't care.
- The clarity of the RFC is debatable: is $ allowed, or only as \$? That
is imho not clear.
- Gstreamer seems to url-encode the session id but % is not allowed by
the RFC
- less code
With this patch curl will correctly handle real-life lines like:
Session: biTN4Kc.8%2B1w-AF.; timeout=60
Bug: https://curl.haxx.se/mail/lib-2016-08/0076.html
All compilers used by cmake in Windows should support large files.
- Add test SIZEOF_OFF_T
- Remove outdated test SIZEOF_CURL_OFF_T
- Turn on USE_WIN32_LARGE_FILES in Windows
- Check for 'Largefile' during the features output
Since the server can at any time send a HTTP/2 frame to us, we need to
wait for the socket to be readable during all transfers so that we can
act on incoming frames even when uploading etc.
Reminded-by: Tatsuhiro Tsujikawa
In order to make MBEDTLS_DEBUG work, the debug threshold must be unequal
to 0. This patch also adds a comment on how mbedTLS must be compiled in
order to make debugging work, and explains the possible debug levels.
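For reference, a small sketch of turning the threshold up;
mbedtls_debug_set_threshold() is the mbedTLS call for this (level 4 is the
most verbose) and it only has an effect when mbedTLS itself was built with
MBEDTLS_DEBUG_C:

  #include <mbedtls/debug.h>

  /* Debug levels: 0 = none, 1 = error, 2 = state change, 3 = informational,
     4 = verbose. A non-zero threshold is required for any output at all. */
  static void enable_mbedtls_debugging(void)
  {
    mbedtls_debug_set_threshold(4);
  }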
After a few wasted hours hunting down the reason for slowness during a
TLS handshake that turned out to be because of TCP_NODELAY not being
set, I think we have enough motivation to toggle the default for this
option. We now enable TCP_NODELAY by default and allow applications to
switch it off.
This also makes --tcp-nodelay unnecessary, but --no-tcp-nodelay can be
used to disable it.
Thanks-to: Tim Rühsen
Bug: https://curl.haxx.se/mail/lib-2016-06/0143.html
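For completeness, a tiny sketch of how an application can opt back out of the
new default through the existing CURLOPT_TCP_NODELAY option (mirroring what
--no-tcp-nodelay does for the command-line tool):

  #include <curl/curl.h>

  /* TCP_NODELAY is now on by default; clearing the option restores
     Nagle's algorithm for this handle. */
  static void restore_nagle(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_TCP_NODELAY, 0L);
  }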
When the input stream for curl is stdin and the input is not a file but
generated by a script, curl can truncate the data transfer to an arbitrary
size since a partial packet is treated as end of transfer by TFTP.
Fixes#857
Previously, passing a timeout of zero to Curl_expire() was a magic code
for clearing all timeouts for the handle. That is now instead made with
the new Curl_expire_clear() function and thus a 0 timeout is fine to set
and will trigger a timeout ASAP.
This will help removing short delays, in particular notable when doing
HTTP/2.
Regression added in 790d6de485. That limit was then added to avoid one
particular transfer starving out others. But when aborting due to
reading the maxcount, the connection must be marked to be read from
again without first doing a select as for some protocols (like SFTP/SCP)
the data may already have been read off the socket.
Reported-by: Dan Donahue
Bug: https://curl.haxx.se/mail/lib-2016-07/0057.html
If a call to GetSystemDirectory fails, the `path` pointer that was
previously allocated would be leaked. This makes sure that `path` is
always freed.
Closes#938
This is a follow-up to the parent commit dcdd4be which fixes one leak
but creates another by failing to free the credentials handle if out of
memory. Also there's a second location a few lines down where we fail to
do the same. This commit fixes both of those issues.
- the expression of an 'if' was always true
- a 'while' contained a condition that was always true
- use 'if(k->exp100 > EXP100_SEND_DATA)' instead of 'if(k->exp100)'
- fixed a typo
Closes#889
... as otherwise we could get a 0 which would count as no error and we'd
wrongly continue and could end up segfaulting.
Bug: https://curl.haxx.se/mail/lib-2016-06/0052.html
Reported-by: 暖和的和暖