Handles created with curl_easy_duphandle do not use the SSL engine set
up in the original handle. This fixes the issue by storing the engine
name in the internal url state and setting the engine from its name
inside curl_easy_duphandle.
Reported-by: Anton Gerasimov
Signed-off-by: Laurent Bonnans
Fixes #2829
Closes #2833
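For illustration, a minimal sketch of the now-working behavior (the
engine name "pkcs11" is a hypothetical example):

  CURL *orig = curl_easy_init();
  curl_easy_setopt(orig, CURLOPT_SSLENGINE, "pkcs11");
  curl_easy_setopt(orig, CURLOPT_SSLENGINE_DEFAULT, 1L);
  CURL *dup = curl_easy_duphandle(orig); /* dup now uses the engine too */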
Deal with tiny "HTTP/0.9" (header-less) responses by checking the
status-line early, even before a full "HTTP/" is received to allow
detecting 0.9 properly.
Test 1266 and 1267 added to verify.
Fixes #2420
Closes #2872
Previously it was checked for in configure/cmake, but that would then
leave other build systems built without engine support.
While engine support probably existed prior to 1.0.1, I decided to play
safe. If someone experiences a problem with this, we can widen the
version check.
Fixes #2641
Closes #2644
Adds CURLOPT_TLS13_CIPHERS and CURLOPT_PROXY_TLS13_CIPHERS.
curl: added --tls13-ciphers and --proxy-tls13-ciphers
Fixes #2435
Reported-by: zzq1015 on github
Closes #2607
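A minimal usage sketch (the cipher suite names are standard TLS 1.3
names, shown here only as examples):

  curl_easy_setopt(curl, CURLOPT_TLS13_CIPHERS,
                   "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256");

or from the tool: curl --tls13-ciphers TLS_AES_256_GCM_SHA384 https://example.com/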
The latest psl is cached in the multi or share handle. It is refreshed
before use after 72 hours.
New share lock CURL_LOCK_DATA_PSL controls the psl cache sharing.
If the latest psl is not available, the builtin psl is used.
Reported-by: Yaakov Selkowitz
Fixes #2553
Closes #2601
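A sketch of sharing the PSL cache between easy handles via the new lock:

  CURLSH *share = curl_share_init();
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_PSL);
  curl_easy_setopt(curl, CURLOPT_SHARE, share);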
When receiving REFUSED_STREAM, mark the connection for close and retry
streams accordingly on another/fresh connection.
Reported-by: Terry Wu
Fixes #2416
Fixes #1618
Closes #2510
If you pass an empty user/pass to ask curl to use the Windows Credential
Storage (as stated in the docs) and it has valid credentials for the
domain, e.g.
curl -v -u : --ntlm example.com
authentication currently fails.
This change fixes it by providing proper SPN string to the SSPI API
calls.
Fixes https://github.com/curl/curl/issues/1622
Closes https://github.com/curl/curl/pull/1660
The ifdefs have become quite long. Also, the condition for the
definition of CURLOPT_SERVICE_NAME and for setting it from
CURLOPT_SERVICE_NAME have diverged. We will soon also need the two
options for NTLM, at least when using SSPI, for
https://github.com/curl/curl/pull/1660.
Just make the definitions unconditional to make that easier.
Closes https://github.com/curl/curl/pull/2479
This patch adds CURLOPT_DNS_SHUFFLE_ADDRESSES to explicitly request
shuffling of IP addresses returned for a hostname when there is more
than one. This is useful when the application knows that a round robin
approach is appropriate and is willing to accept the consequences of
potentially discarding some preference order returned by the system's
implementation.
Closes #1694
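A minimal usage sketch; with several A/AAAA records returned, the order
is then randomized before connecting:

  curl_easy_setopt(curl, CURLOPT_DNS_SHUFFLE_ADDRESSES, 1L);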
- Add new option CURLOPT_RESOLVER_START_FUNCTION to set a callback that
will be called every time before a new resolve request is started
(ie before a host is resolved) with a pointer to backend-specific
resolver data. Currently this is only useful for ares.
- Add new option CURLOPT_RESOLVER_START_DATA to set a user pointer to
pass to the resolver start callback.
Closes https://github.com/curl/curl/pull/2311
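A sketch of the callback pair (the callback body is illustrative only):

  static int resolver_start_cb(void *resolver_state, void *reserved,
                               void *userdata)
  {
    /* with c-ares, resolver_state points at backend-specific data */
    (void)reserved;
    (void)userdata;
    return 0; /* non-zero would abort the resolve */
  }

  curl_easy_setopt(curl, CURLOPT_RESOLVER_START_FUNCTION, resolver_start_cb);
  curl_easy_setopt(curl, CURLOPT_RESOLVER_START_DATA, NULL);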
- Add new option CURLOPT_HAPPY_EYEBALLS_TIMEOUT to set libcurl's happy
eyeballs timeout value.
- Add new optval macro CURL_HET_DEFAULT to represent the default happy
eyeballs timeout value (currently 200 ms).
- Add new tool option --happy-eyeballs-timeout-ms to expose
CURLOPT_HAPPY_EYEBALLS_TIMEOUT. The -ms suffix is used because the
other -timeout options in the tool expect seconds not milliseconds.
Closes https://github.com/curl/curl/pull/2260
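A minimal sketch, using the option name as introduced here (the value is
in milliseconds; in released libcurl the option was later renamed
CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS):

  curl_easy_setopt(curl, CURLOPT_HAPPY_EYEBALLS_TIMEOUT, 300L);
  /* or restore the default: */
  curl_easy_setopt(curl, CURLOPT_HAPPY_EYEBALLS_TIMEOUT, (long)CURL_HET_DEFAULT);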
... unless CURLOPT_UNRESTRICTED_AUTH is set to allow them. This matches how
curl already handles Authorization headers created internally.
Note: this changes behavior slightly, for the sake of reducing mistakes.
Added tests 317 and 318 to verify.
Reported-by: Craig de Stigter
Bug: https://curl.haxx.se/docs/adv_2018-b3bf.html
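For reference, a sketch of opting back into the old behavior:

  /* keep sending custom Authorization headers to other hosts on redirect */
  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  curl_easy_setopt(curl, CURLOPT_UNRESTRICTED_AUTH, 1L);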
If the lock is released before the dealings with the bundle are over, it
may have been changed by another thread in the meantime.
Fixes #2132
Fixes #2151
Closes #2139
libssh is an alternative library to libssh2.
https://www.libssh.org/
That patch set also introduces support for ECDSA and
ed25519 keys, as well as gssapi authentication.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@redhat.com>
Originally, my idea was to allocate the two structures (or more
precisely, the connectdata structure and the four SSL backend-specific
structures required for ssl[0..1] and proxy_ssl[0..1]) in one go, so
that they all could be free()d together.
However, getting the alignment right is tricky. Too tricky.
So let's just bite the bullet and allocate the SSL backend-specific
data separately.
As a consequence, we now have to be very careful to release the memory
allocated for the SSL backend-specific data whenever we release any
connectdata.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Closes #2119
- Align the array of ssl_backend_data on a max 32 byte boundary.
8 is likely to be ok but I went with 32 for posterity should one of
the ssl_backend_data structs change to contain a larger sized variable
in the future.
Prior to this change (since dev 70f1db3, release 7.56) the connectdata
structure was undersized by 4 bytes in 32-bit builds with ssl enabled
because long long * was mistakenly used for alignment instead of
long long, with the intention being an 8 byte boundary. Also long long
may not be an available type.
The undersized connectdata could lead to oob read/write past the end in
what was expected to be the last 4 bytes of the connection's secondary
socket https proxy ssl_backend_data struct (the secondary socket in a
connection is used by ftp, others?).
Closes https://github.com/curl/curl/issues/2093
CVE-2017-8818
Bug: https://curl.haxx.se/docs/adv_2017-af0a.html
There is a conflict on symbol 'free_func' between openssl/crypto.h and
zlib.h on AIX. This is an attempt to resolve it.
Bug: https://curl.haxx.se/mail/lib-2017-11/0032.html
Reported-By: Michael Felt
This uses the brotli external library (https://github.com/google/brotli).
Brotli becomes a feature: additional curl_version_info() bit and
structure fields are provided for it and CURLVERSION_NOW bumped.
Tests 314 and 315 check Brotli content unencoding with correct and
erroneous data.
Some tests are updated to accommodate the now configuration-dependent
parameters of the Accept-Encoding header.
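A sketch of detecting the new feature at run time:

  curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
  if(info->features & CURL_VERSION_BROTLI)
    printf("brotli decoding available (%s)\n", info->brotli_version);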
This is implemented as an output streaming stack of unencoders, the last
calling the client write procedure.
New test 230 checks this feature.
Bug: https://github.com/curl/curl/pull/2002
Reported-By: Daniel Bankhead
Compare these settings in Curl_ssl_config_matches():
- verifystatus (CURLOPT_SSL_VERIFYSTATUS)
- random_file (CURLOPT_RANDOM_FILE)
- egdsocket (CURLOPT_EGDSOCKET)
Also copy the setting "verifystatus" in Curl_clone_primary_ssl_config(),
and copy the setting "sessionid" unconditionally.
This means that reusing connections that are secured with a client
certificate is now possible, and the statement "TLS session resumption
is disabled when a client certificate is used" in the old advisory at
https://curl.haxx.se/docs/adv_20170419.html is obsolete.
Reviewed-by: Daniel Stenberg
Closes #1917
So far, all of the SSL backends' private data has been declared as
part of the ssl_connect_data struct, in one big #if .. #elif .. #endif
block.
This can only work as long as the SSL backend is a compile-time option,
something we want to change in the next commits.
Therefore, let's encapsulate the exact data needed by each SSL backend
into a private struct, and let's avoid bleeding any SSL backend-specific
information into urldata.h. This is also necessary to allow multiple SSL
backends to be compiled in at the same time, as e.g. OpenSSL's and
CyaSSL's headers cannot be included in the same .c file.
To avoid too many malloc() calls, we simply append the private structs
to the connectdata struct in allocate_conn().
This requires us to take extra care of alignment issues: struct fields
often need to be aligned on certain boundaries e.g. 32-bit values need to
be stored at addresses that divide evenly by 4 (= 32 bit / 8
bit-per-byte).
We do that by assuming that no SSL backend's private data contains any
fields that need to be aligned on boundaries larger than `long long`
(typically 64-bit) would need. Under this assumption, we simply add a
dummy field of type `long long` to the `struct connectdata` struct. This
field will never be accessed but acts as a placeholder for the four
instances of ssl_backend_data instead. The size of each ssl_backend_data
struct is stored in the SSL backend-specific metadata, to allow
allocate_conn() to know how much extra space to allocate, and how to
initialize the ssl[sockindex]->backend and proxy_ssl[sockindex]->backend
pointers.
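A rough sketch of the layout described above (names simplified and
hypothetical, not the actual code):

  struct connectdata {
    /* ... all the regular fields ... */
    long long align_placeholder; /* never accessed; marks an aligned,
                                    suitably-placed offset */
  };

  /* in allocate_conn(), roughly: */
  size_t one = ssl_backend_data_size; /* from the backend's metadata */
  struct connectdata *conn =
    calloc(1, offsetof(struct connectdata, align_placeholder) + 4 * one);
  char *base = (char *)conn + offsetof(struct connectdata, align_placeholder);
  /* ssl[0..1]->backend and proxy_ssl[0..1]->backend then point at
     base, base + one, base + 2 * one and base + 3 * one */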
This would appear to be a little complicated at first, but is really
necessary to encapsulate the private data of each SSL backend correctly.
And we need to encapsulate thusly if we ever want to allow selecting
CyaSSL and OpenSSL at runtime, as their headers cannot be included within
the same .c file (there are just too many conflicting definitions and
declarations for that).
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
In 86b889485 (sasl_gssapi: Added GSS-API based Kerberos V5 variables,
2014-12-03), an SSPI-specific field was added to the kerberos5data
struct without moving the #include "curl_sspi.h" later in the same file.
This broke the build when SSPI was enabled, unless Secure Channel was
used as SSL backend, because it just so happens that Secure Channel also
requires "curl_sspi.h" to be #included.
In f4739f639 (urldata: include curl_sspi.h when Windows SSPI is enabled,
2017-02-21), this bug was fixed incorrectly: Instead of moving the
appropriate conditional #include, the Secure Channel-conditional part
was now also SSPI-conditional.
Fix this problem by moving the correct #include instead.
This is also required for an upcoming patch that moves all the Secure
Channel-specific stuff out of urldata.h and encapsulates it properly in
vtls/schannel.c instead.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Since 5017d5ada (polarssl: now require 1.3.0+, 2014-03-17), we require
a newer PolarSSL version. No need to keep code trying to support any
older version.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The required low-level logic was already available as part of
`libssh2` (via the `LIBSSH2_FLAG_COMPRESS` option of
`libssh2_session_flag()`[1]).
This patch adds the new `libcurl` option `CURLOPT_SSH_COMPRESSION`
(boolean) and the new `curl` command-line option `--compressed-ssh`
to request this `libssh2` feature. To have compression enabled, it
is required that the SSH server supports a (zlib) compatible
compression method and that `libssh2` was built with `zlib` support
enabled.
[1] https://www.libssh2.org/libssh2_session_flag.html
Ref: https://github.com/curl/curl/issues/1732
Closes https://github.com/curl/curl/pull/1735
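A minimal usage sketch (URL hypothetical):

  curl_easy_setopt(curl, CURLOPT_SSH_COMPRESSION, 1L);

or from the tool: curl --compressed-ssh sftp://example.com/file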
Update the progress timers `t_nslookup`, `t_connect`, `t_appconnect`,
`t_pretransfer`, and `t_starttransfer` to track the total times for
these activities when a redirect is followed. Previously, only the times
for the most recent request would be tracked.
Related changes:
- Rename `Curl_pgrsResetTimesSizes` to `Curl_pgrsResetTransferSizes`
now that the function only resets transfer sizes and no longer
modifies any of the progress timers.
- Add a bool to the `Progress` struct that is used to prevent
double-counting `t_starttransfer` times.
Added test case 1399.
Fixes #522 and Known Bug 1.8
Closes #1602
Reported-by: joshhe on github
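With this change the timers accumulate over a followed redirect chain; a
sketch of reading one of them (CURLINFO_STARTTRANSFER_TIME is existing
API):

  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  curl_easy_perform(curl);
  double start = 0;
  curl_easy_getinfo(curl, CURLINFO_STARTTRANSFER_TIME, &start);
  /* 'start' now covers all followed requests, not just the last one */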
... to make all libcurl internals able to use the same data types for
the struct members. The timeval struct differs subtly on several
platforms so it makes it cumbersome to use everywhere.
Ref: #1652
Closes #1693
Add a new type of callback to Curl_handler which performs checks on
the connection. Alter RTSP so that it uses this callback to do its
own check on connection health.
If libcurl was built with GSS-API support, it unconditionally advertised
GSS-API authentication while connecting to a SOCKS5 proxy. This caused
problems in environments with improperly configured Kerberos: a stock
libcurl failed to connect, whereas libcurl built without GSS-API
connected fine using username and password.
This commit introduces the CURLOPT_SOCKS5_AUTH option to control the
allowed methods for SOCKS5 authentication at run time.
Note that a new option was preferred over reusing CURLOPT_PROXYAUTH
for compatibility reasons because the set of authentication methods
allowed by default was different for HTTP and SOCKS5 proxies.
Bug: https://curl.haxx.se/mail/lib-2017-01/0005.html
Closes https://github.com/curl/curl/pull/1454
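A sketch of restricting the advertised methods (proxy host hypothetical):

  /* allow only username/password, never GSS-API */
  curl_easy_setopt(curl, CURLOPT_SOCKS5_AUTH, (long)CURLAUTH_BASIC);
  curl_easy_setopt(curl, CURLOPT_PROXY, "socks5://proxy.example.com");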
... to enable sending "OPTIONS *" which wasn't possible previously.
This option currently only works for HTTP.
Added test cases 1298 + 1299 to verify
Fixes #1280
Closes #1462
... all other non-HTTP protocol schemes are now defaulting to "tunnel
through" mode if an HTTP proxy is specified. In reality there are no HTTP
proxies out there that allow those other schemes.
Assisted-by: Ray Satiro, Michael Kaufmann
Closes #1505
This gives us accurate precision and it allows us to avoid storing "no
time" for systems with too low timer resolution as we then bump the time
up to 1 microsecond. Should fix test 573 on windows.
Remove the now unused curlx_tvdiff_secs() function.
The external getinfo() API keeps using doubles.
Fixes #1531
... since the total amount is low this is faster, easier and reduces
memory overhead.
Also, Curl_expire_done() can now mark an expire timeout as done so that
it never times out.
Closes #1472
The data->req.uploadbuf struct member served no good purpose, instead we
use ->state.uploadbuffer directly. It makes it clearer in the code which
buffer that's being used.
Removed the 'SingleRequest *' argument from the readwrite_upload() proto
as it can be derived from the Curl_easy struct. Also made the code in
the readwrite_upload() function use the 'k->' shortcut to all references
to struct fields in 'data->req', which previously was made with a mix of
both.
- Track when the cached encrypted data contains only a partial record
that can't be decrypted without more data (SEC_E_INCOMPLETE_MESSAGE).
- Change Curl_schannel_data_pending to return false in such a case.
Other SSL libraries have pending data functions that behave similarly.
Ref: https://github.com/curl/curl/pull/1387
Closes https://github.com/curl/curl/pull/1392
The 'list element' struct now has to be within the data that is being
added to the list. Removes 16.6% (tiny) mallocs from a simple HTTP
transfer. (96 => 80)
Also removed return codes since the llist functions can't fail now.
Test 1300 updated accordingly.
Closes #1435
When receiving chunked encoded data with trailers, and the write
callback returns PAUSE, there might be both body and header to store to
resend on unpause. Previously libcurl returned error for that case.
Added test case 1540 to verify.
Reported-by: Stephen Toub
Fixes #1354
Closes #1357
- Add new option CURLOPT_SUPPRESS_CONNECT_HEADERS to allow suppressing
proxy CONNECT response headers from the user callback functions
CURLOPT_HEADERFUNCTION and CURLOPT_WRITEFUNCTION.
- Add new tool option --suppress-connect-headers to expose
CURLOPT_SUPPRESS_CONNECT_HEADERS and allow suppressing proxy CONNECT
response headers from --dump-header and --include.
Assisted-by: Jay Satiro
Assisted-by: CarloCannas@users.noreply.github.com
Closes https://github.com/curl/curl/pull/783
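A minimal sketch of the library option:

  curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
  curl_easy_setopt(curl, CURLOPT_SUPPRESS_CONNECT_HEADERS, 1L);
  /* the CONNECT response headers no longer reach the header callback */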
This flag is meant for the current request based on authentication
state, once the request is done we can clear the flag.
Also change auth.multi to auth.multipass for better readability.
Fixes https://github.com/curl/curl/issues/1095
Closes https://github.com/curl/curl/pull/1326
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Reported-by: Michael Kaufmann
This commit introduces the CURL_SSLVERSION_MAX_* constants as well as
the --tls-max option of the curl tool.
Closes https://github.com/curl/curl/pull/1166
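A sketch of pinning a version range with the new constants:

  curl_easy_setopt(curl, CURLOPT_SSLVERSION,
                   CURL_SSLVERSION_TLSv1_1 | CURL_SSLVERSION_MAX_TLSv1_2);

or from the tool: curl --tls-max 1.2 https://example.com/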
- on the first invocation: keep security context returned by
InitializeSecurityContext()
- on subsequent invocations: use MakeSignature() instead of
InitializeSecurityContext() to generate HTTP digest response
Bug: https://github.com/curl/curl/issues/870
Reported-by: Andreas Roth
Closes https://github.com/curl/curl/pull/1251
Properly resolve, convert and log the proxy host names.
Support the "--connect-to" feature for SOCKS proxies and for passive FTP
data transfers.
Follow-up to cb4e2be
Reported-by: Jay Satiro
Fixes https://github.com/curl/curl/issues/1248
Replace use of fixed macro BUFSIZE to define the size of the receive
buffer. Reappropriate CURLOPT_BUFFERSIZE to include enlarging receive
buffer size. Upon setting, resize buffer if larger than the current
default size up to a MAX_BUFSIZE (512KB). This can benefit protocols
like SFTP.
Closes #1222
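A minimal usage sketch (per the above, values larger than the default
enlarge the receive buffer, up to MAX_BUFSIZE):

  curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 102400L); /* bytes */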
In addition to unix domain sockets, Linux also supports an
abstract namespace which is independent of the filesystem.
In order to support it, add new CURLOPT_ABSTRACT_UNIX_SOCKET
option which uses the same storage as CURLOPT_UNIX_SOCKET_PATH
internally, along with a flag to specify abstract socket.
On non-supporting platforms, the abstract address will be
interpreted as an empty string and fail gracefully.
Also add new --abstract-unix-socket tool parameter.
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Reported-by: Chungtsun Li (typeless)
Reviewed-by: Daniel Stenberg
Reviewed-by: Peter Wu
Closes #1197
Fixes #1061
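A sketch (the socket name "curl-test" is hypothetical):

  curl_easy_setopt(curl, CURLOPT_ABSTRACT_UNIX_SOCKET, "curl-test");
  curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/");

or from the tool: curl --abstract-unix-socket curl-test http://localhost/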
CURLOPT_SOCKS_PROXY -> CURLOPT_PRE_PROXY
Added the corresponding --preproxy command line option. Sets a SOCKS
proxy to connect to _before_ connecting to an HTTP(S) proxy.
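A sketch of chaining the two (host names hypothetical):

  curl_easy_setopt(curl, CURLOPT_PRE_PROXY, "socks5://hop.example.com:1080");
  curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");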
Adds access to the effectively used protocol/scheme to both libcurl and
curl, both in string and numeric (CURLPROTO_*) form.
Note that the string form will be uppercase, as it is just the internal
string.
As these strings are declared internally as const, and all other strings
returned by curl_easy_getinfo() are de-facto const as well, string
handling in getinfo.c got const-ified.
Closes #1137
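In released libcurl these are exposed as CURLINFO_SCHEME and
CURLINFO_PROTOCOL; a sketch, assuming those names:

  char *scheme = NULL;
  long proto = 0;
  curl_easy_getinfo(curl, CURLINFO_SCHEME, &scheme);  /* e.g. "HTTPS" */
  curl_easy_getinfo(curl, CURLINFO_PROTOCOL, &proto); /* a CURLPROTO_* value */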
* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications as opposed to HTTP proxies that receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well.
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent from their --foo counterparts,
except --proxy-crlfile which defaults to --crlfile and --proxy-capath
which defaults to --capath.
Curl now also supports %{proxy_ssl_verify_result} --write-out variable,
similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
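For example, a tool invocation through an HTTPS proxy might look like
this (host and file names hypothetical):

  curl --proxy https://proxy.example.com:3128 \
       --proxy-cacert proxy-ca.crt https://example.com/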
Visual C++ now complains about implicitly casting time_t (64-bit) to
long (32-bit). Fix this by changing some variables from long to time_t,
or explicitly casting to long where the public interface would be
affected.
Closes #1131
- Call Curl_initinfo on init and duphandle.
Prior to this change the statistical and informational variables were
simply zeroed by calloc on easy init and duphandle. While zero is the
correct default value for almost all info variables, there is one where
it isn't (filetime initializes to -1).
Bug: https://github.com/curl/curl/issues/1103
Reported-by: Neal Poole
Add the new option CURLOPT_KEEP_SENDING_ON_ERROR to control whether
sending the request body shall be completed when the server responds
early with an error status code.
This is suitable for manual NTLM authentication.
Reviewed-by: Jay Satiro
Closes https://github.com/curl/curl/pull/904
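A minimal sketch for the manual NTLM case:

  curl_easy_setopt(curl, CURLOPT_KEEP_SENDING_ON_ERROR, 1L);
  /* the request body is now sent in full even if the server has
     already answered with e.g. a 401 */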
Speed limits (from CURLOPT_MAX_RECV_SPEED_LARGE &
CURLOPT_MAX_SEND_SPEED_LARGE) were applied simply by comparing limits
with the cumulative average speed of the entire transfer; while this
might work at times with good/constant connections, in other cases it
can result in the limits simply being "ignored" for more than "short
bursts" (as told in the man page).
Consider a download that goes on much slower than the limit for some
time (because bandwidth is used elsewhere, server is slow, whatever the
reason), then once things get better, curl would simply ignore the limit
up until the average speed (since the beginning of the transfer) reached
the limit. This could render the limit useless for effectively avoiding
use of the entire bandwidth (at least for quite some time).
So instead, we now use a "moving starting point" as reference, and every
time at least as much as the limit has been transferred, we can reset
this starting point to the current position. This gets a good limiting
effect that applies to the "current speed" with instant reactivity (in
case of sudden speed burst).
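A rough sketch of the idea (variable and helper names invented for
illustration):

  curl_off_t moved = total_bytes - ref_bytes;  /* since the reference */
  if(moved >= limit_per_sec) {
    time_t spent = now - ref_time;             /* seconds */
    time_t needed = (time_t)(moved / limit_per_sec);
    if(spent < needed)
      wait_ms((long)(needed - spent) * 1000);  /* over the limit: pause */
    ref_bytes = total_bytes;                   /* move the start point */
    ref_time = now;
  }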
Closes #971
With HTTP/2 each transfer is made in an individual logical stream over
the connection, so most errors that previously caused the whole
connection to be force-closed now just kill the stream, not the
connection.
Fixes #941
- Disable ALPN on Wine.
- Don't pass input secbuffer when ALPN is disabled.
When ALPN support was added a change was made to pass an input secbuffer
to initialize the context. When ALPN is enabled the buffer contains the
ALPN information, and when it's disabled the buffer is empty. In either
case this input buffer caused problems with Wine and connections would
not complete.
Bug: https://github.com/curl/curl/issues/983
Reported-by: Christian Fillion
Sessionid cache management is inseparable from managing individual
session lifetimes. E.g. for reference-counted sessions (like those in
SChannel and OpenSSL engines) every session addition and removal
should be accompanied with refcount increment and decrement
respectively. Failing to do so synchronously leads to a race condition
that causes symptoms like use-after-free and memory corruption.
This commit:
- makes existing session cache locking explicit, thus allowing
individual engines to manage lock's scope.
- fixes OpenSSL and SChannel engines by putting refcount management
inside this lock's scope in relevant places.
- adds these explicit locking calls to other engines that use
sessionid cache to accommodate for this change. Note, however,
that it is unknown whether any of these engines could also have
this race.
Bug: https://github.com/curl/curl/issues/815
Fixes #815
Closes #847
Only protocols that actually have a protocol registered for ALPN and NPN
should try to get that negotiated in the TLS handshake. That is only
HTTPS (well, http/1.1 and http/2) right now. Previously ALPN and NPN
would wrongly be used in all handshakes if libcurl was built with it
enabled.
Reported-by: Jay Satiro
Fixes #789
WinSock destroys the recv() buffer if send() fails. As a result, the
server response may be lost if the server sent it while curl was still
sending the request. This behavior is noticeable with short HTTP server
replies if libcurl uses several send() calls for the request (usually
for POST requests).
To work around this problem, libcurl now calls recv() before every
send() and keeps the received data in an intermediate buffer for further
processing.
Fixes: #657
Closes: #668
As these two options provide identical functionality, the former for
SOCKS5 proxies and the latter for HTTP proxies, the two options have
been merged together.
As such CURLOPT_SOCKS5_GSSAPI_SERVICE is marked as deprecated as of
7.49.0.
Some TFTP server implementations ignore the "TFTP Option extension"
(RFC 1782-1784, 2347-2349), or implement it in a flawed way, causing
problems with libcurl. Another switch for curl_easy_setopt
"CURLOPT_TFTP_NO_OPTIONS" is introduced which prevents libcurl from
sending TFTP option requests to a server, avoiding many problems caused
by faulty implementations.
Bug: https://github.com/curl/curl/issues/481
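A minimal usage sketch (URL hypothetical):

  curl_easy_setopt(curl, CURLOPT_TFTP_NO_OPTIONS, 1L);
  curl_easy_setopt(curl, CURLOPT_URL, "tftp://example.com/file");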
Since we didn't keep the input argument around after having called
mbedtls, it could end up accessing the wrong memory when figuring out
the ALPN protocols.
Closes #642
... and assign it from the set.fread_func_set pointer in the
Curl_init_CONNECT function. This A) avoids having code that assigns
fields in the 'set' struct (which we always knew was bad) and more
importantly B) makes it impossible to accidentally leave the wrong
value for when the handle is re-used etc.
Introducing a state-init functionality in multi.c, so that we can set a
specific function to get called when we enter a state. The
Curl_init_CONNECT is thus called when switching to the CONNECT state.
Bug: https://github.com/bagder/curl/issues/346
Closes #346
- Add new option CURLOPT_DEFAULT_PROTOCOL to allow specifying a default
protocol for schemeless URLs.
- Add new tool option --proto-default to expose
CURLOPT_DEFAULT_PROTOCOL.
In the case of schemeless URLs libcurl will behave in this way:
When the option is used libcurl will use the supplied default.
When the option is not used, libcurl will follow its usual plan of
guessing from the hostname and falling back to 'http'.
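A sketch of both interfaces:

  curl_easy_setopt(curl, CURLOPT_DEFAULT_PROTOCOL, "https");
  curl_easy_setopt(curl, CURLOPT_URL, "example.com/path"); /* -> https:// */

or from the tool: curl --proto-default https example.com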
Currently, libcurl rejects responses with "Content-Encoding: compress"
when CURLOPT_ACCEPT_ENCODING is set to "". I think that libcurl should
treat the Content-Encoding "compress" the same as other
Content-Encodings that it does not support, e.g. "bzip2". That means
just ignoring it.
New tool option --ssl-no-revoke.
New value CURLSSLOPT_NO_REVOKE for CURLOPT_SSL_OPTIONS.
Currently this option applies only to WinSSL where we have automatic
certificate revocation checking by default. According to the
ssl-compared chart there are other backends that have automatic checking
(NSS, wolfSSL and DarwinSSL) so we could possibly accommodate them at
some later point.
Bug: https://github.com/bagder/curl/issues/264
Reported-by: zenden2k <zenden2k@gmail.com>
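A minimal sketch of the library side:

  curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS, (long)CURLSSLOPT_NO_REVOKE);

or from the tool: curl --ssl-no-revoke https://example.com/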
This commit is several drafts squashed together. The changes from each
draft are noted below. If any changes are similar and possibly
contradictory the change in the latest draft takes precedence.
Bug: https://github.com/bagder/curl/issues/244
Reported-by: Chris Araman
%%
%% Draft 1
%%
- return 0 if len == 0. that will have to be documented.
- continue on and process the caches regardless of raw recv
- if decrypted data will be returned then set the error code to CURLE_OK
and return its count
- if decrypted data will not be returned and the connection has closed
(eg nread == 0) then return 0 and CURLE_OK
- if decrypted data will not be returned and the connection *hasn't*
closed then set the error code to CURLE_AGAIN --only if an error code
isn't already set-- and return -1
- narrow the Win2k workaround to only Win2k
%%
%% Draft 2
%%
- Trying out a change in flow to handle corner cases.
%%
%% Draft 3
%%
- Back out the lazier decryption change made in draft2.
%%
%% Draft 4
%%
- Some formatting and branching changes
- Decrypt all encrypted cached data when len == 0
- Save connection closed state
- Change special Win2k check to use connection closed state
%%
%% Draft 5
%%
- Default to CURLE_AGAIN in cleanup if an error code wasn't set and the
connection isn't closed.
%%
%% Draft 6
%%
- Save the last error only if it is an unrecoverable error.
Prior to this I saved the last error state in all cases; unfortunately
the logic to cover that in all cases would lead to some muddle and I'm
concerned that could then lead to a bug in the future so I've replaced
it by only recording an unrecoverable error and that state will persist.
- Do not recurse on renegotiation.
Instead we'll continue on to process any trailing encrypted data
received during the renegotiation only.
- Move the err checks in cleanup after the check for decrypted data.
In either case decrypted data is always returned but I think it's easier
to understand when those err checks come after the decrypted data check.
%%
%% Draft 7
%%
- Regardless of len value go directly to cleanup if there is an
unrecoverable error or a close_notify was already received. Prior to
this change we only acknowledged those two states if len != 0.
- Fix a bug in connection closed behavior: Set the error state in the
cleanup, because we don't know for sure it's an error until that time.
- (Related to above) In the case the connection is closed go "greedy"
with the decryption to make sure all remaining encrypted data has been
decrypted even if it is not needed at that time by the caller. This is
necessary because we can only tell if the connection closed gracefully
(close_notify) once all encrypted data has been decrypted.
- Do not renegotiate when an unrecoverable error is pending.
%%
%% Draft 8
%%
- Don't show 'server closed the connection' info message twice.
- Show an info message if server closed abruptly (missing close_notify).
With many easy handles using the same connection for multiplexing, it is
important we store and keep the transfer-oriented stuff in the
SessionHandle so that callbacks and callback data work fine even when
many easy handles share the same physical connection.
SSLeay was the name of the library that was subsequently turned into
OpenSSL many moons ago (1999). curl does not work with the old SSLeay
library for years. This is now reflected by only using USE_OPENSSL in
code that depends on OpenSSL.
This option can be used to enable/disable certificate status verification using
the "Certificate Status Request" TLS extension defined in RFC6066 section 8.
This also adds the CURLE_SSL_INVALIDCERTSTATUS error, to be used when the
certificate status verification fails, and the Curl_ssl_cert_status_request()
function, used to check whether the SSL backend supports the status_request
extension.
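The option in question is CURLOPT_SSL_VERIFYSTATUS in released libcurl;
a minimal sketch, assuming that name:

  curl_easy_setopt(curl, CURLOPT_SSL_VERIFYSTATUS, 1L);
  /* curl_easy_perform() then fails with CURLE_SSL_INVALIDCERTSTATUS
     if the stapled status check fails */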
There was a confusion between these: this commit tries to disambiguate them.
- Scope can be computed from the address itself.
- Scope id is scope dependent: it is currently defined as 1-based local
interface index for link-local scoped addresses, and as a site index(?) for
(obsolete) site-local addresses. Linux only supports it for link-local
addresses.
The URL parser properly parses a scope id as an interface index, but stores it
in a field named "scope": confusion. The field has been renamed into "scope_id".
Curl_if2ip() used the scope id as if it were a scope. This caused
failures to bind to an interface.
Scope is now computed from the addresses and Curl_if2ip() matches them.
If redundantly specified in the URL, the scope id is checked for a
mismatch with the interface index.
This commit should fix SF bug #1451.
The ability to do HTTP requests over a UNIX domain socket has been
requested before, in Apr 2008 [0][1] and Sep 2010 [2]. While a
discussion happened, no patch seems to have made it through. I decided
to give it a go since I need to test an nginx HTTP server which listens
on a UNIX domain socket.
One patch [3] seems to make it possible to use the
CURLOPT_OPENSOCKETFUNCTION function to gain a UNIX domain socket.
Another person wrote a Go program which can do HTTP over a UNIX socket
for Docker[4] which uses a special URL scheme (though the name contains
cURL, it has no relation to the cURL library).
This patch considers support for UNIX domain sockets at the same level
as HTTP proxies / IPv6, it acts as an intermediate socket provider and
not as a separate protocol. Since this feature affects network
operations, a new feature flag was added ("unix-sockets") with a
corresponding CURL_VERSION_UNIX_SOCKETS macro.
A new CURLOPT_UNIX_SOCKET_PATH option is added and documented. This
option enables UNIX domain sockets support for all requests on the
handle (replacing IP sockets and skipping proxies).
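A sketch (the socket path is hypothetical):

  curl_easy_setopt(curl, CURLOPT_UNIX_SOCKET_PATH, "/var/run/docker.sock");
  curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/info");
  /* the URL's host part is then only used for the Host: header */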
A new configure option (--enable-unix-sockets) and CMake option
(ENABLE_UNIX_SOCKETS) can disable this optional feature. Note that I
deliberately did not mark this feature as advanced, this is a
feature/component that should easily be available.
[0]: http://curl.haxx.se/mail/lib-2008-04/0279.html
[1]: http://daniel.haxx.se/blog/2008/04/14/http-over-unix-domain-sockets/
[2]: http://sourceforge.net/p/curl/feature-requests/53/
[3]: http://curl.haxx.se/mail/lib-2008-04/0361.html
[4]: https://github.com/Soulou/curl-unix-socket
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Reworked the input token (challenge message) storage as what is passed
to the buf and desc in the response generation are typically blobs of
data rather than strings, so this is more in keeping with other areas
of the SSPI code, such as the NTLM message functions.
When duplicating a handle, the data to post was duplicated using
strdup() when it could be binary and contain zeroes and it was not even
zero terminated! This caused read out of bounds crashes/segfaults.
Since the lib/strdup.c file no longer is easily shared with the curl
tool with this change, it now uses its own version instead.
Bug: http://curl.haxx.se/docs/adv_20141105.html
CVE: CVE-2014-3707
Reported-By: Symeon Paraschoudis
Typically the USE_WINDOWS_SSPI definition would not be used when the
CURL_DISABLE_CRYPTO_AUTH define is, however, it is still a valid build
configuration and, as such, the SASL Kerberos V5 (GSSAPI) authentication
data structures and functions would incorrectly be used when they
shouldn't be.
Introduced a new USE_KRB5 definition that takes into account the use of
CURL_DISABLE_CRYPTO_AUTH like USE_SPNEGO and USE_NTLM do.
Option --pinnedpubkey takes a path to a public key in DER format and
only connects if it matches (currently only implemented with OpenSSL).
Provides CURLOPT_PINNEDPUBLICKEY for curl_easy_setopt().
Extract a public RSA key from a website like so:
openssl s_client -connect google.com:443 2>&1 < /dev/null | \
sed -n '/-----BEGIN/,/-----END/p' | openssl x509 -noout -pubkey \
| openssl rsa -pubin -outform DER > google.com.der
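The extracted file can then be used with the new option (a sketch):

  curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY, "google.com.der");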
Updated to use a dynamic buffer for the SPN generation via the recently
introduced Curl_sasl_build_spn() function rather than a fixed buffer of
1024 characters, which should have been more than enough. Using the new
function also removes the need for a separate sname variable to do the
wide character conversion in Unicode builds.
Given the SSPI package info query indicates a token size of 2888 bytes,
and as with the Winbind code and commit 9008f3d56, use a dynamic buffer
for the Type-1 and Type-3 message generation rather than a fixed buffer
of 1024 bytes.
- Replace CURLAUTH_GSSNEGOTIATE with CURLAUTH_NEGOTIATE
- CURL_VERSION_GSSNEGOTIATE is deprecated; it is now served by
CURL_VERSION_SSPI, CURL_VERSION_GSSAPI and CURL_VERSION_SPNEGO.
- Remove display of feature 'GSS-Negotiate'
Make all code use connclose() and connkeep() when changing the "close
state" for a connection. These two macros take a string argument with an
explanation, and debug builds of curl will include that in the debug
output. Helps tracking connection re-use/close issues.
In commit 0b3750b5c2 (released in 7.36.0) we fixed a timeout issue
but instead broke the timings.
To fix this, I introduce a new timestamp to use for the timeouts and
restored the previous timestamp and timestamp position so that the old
timer functionality is restored.
In addition to that, that change also broke connection timeouts for when
more than one connect was used (as it would then count the total time
from the first connect and not for the most recent one). Now
Curl_timeleft() has been modified so that it checks against different
start times depending on which timeout it checks.
Test 1303 is updated accordingly.
Bug: http://curl.haxx.se/mail/lib-2014-05/0147.html
Reported-by: Ryan Braud
set.infilesize in this case was modified in several places, which could
lead to repeated requests using the same handle getting unintended/wrong
consequences based on what the previous request did!
This makes the findprotocol() function work as intended so that libcurl
can properly be restricted to not support HTTP while still supporting
HTTPS - since the HTTPS handler previously set both the HTTP and HTTPS
bits in the protocol field.
This fixes --proto and --proto-redir for most SSL protocols.
This is done by adding a few new convenience defines that groups HTTP
and HTTPS, FTP and FTPS etc that should then be used when the code wants
to check for both protocols at once. PROTO_FAMILY_[protocol] style.
Bug: https://github.com/bagder/curl/pull/97
Reported-by: drizzt
In addition to FTP, other connection based protocols such as IMAP, POP3,
SMTP, SCP, SFTP and LDAP require a new connection when different log-in
credentials are specified. Fixed the detection logic to include these
other protocols.
Bug: http://curl.haxx.se/docs/adv_20140326A.html
Rename x509_cert to x509_crt and add "compat-1.2.h"
include.
This would still need some more thorough conversion
in order to drop "compat-1.2.h" include.
Port number zero is perfectly allowed to connect to. I moved to storing
the remote port number in an int so that -1 means undefined and 0-65535
can be used for legitimate port numbers.
When using --http2 one can now selectively disable NPN or ALPN with
--no-alpn and --no-npn. For now this is honored with NSS only.
TODO: honor this option with GnuTLS and OpenSSL
1) Renamed curl_tlsinfo to curl_tlssessioninfo as discussed on the
mailing list.
2) Renamed curl_ssl_backend to curl_sslbackend so it doesn't follow our
function naming convention.
3) Updated sessioninfo.c example accordingly.
Added new API for returning a SSL backend type and pointer, in order to
allow access to the TLS internals, that may then be used to obtain X509
certificate information for example.
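In released libcurl this is queried via CURLINFO_TLS_SESSION; a sketch,
assuming that name (the internals pointer is backend-specific):

  struct curl_tlssessioninfo *tls = NULL;
  curl_easy_getinfo(curl, CURLINFO_TLS_SESSION, &tls);
  if(tls && tls->backend == CURLSSLBACKEND_OPENSSL) {
    /* tls->internals is an SSL_CTX * for the OpenSSL backend */
  }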
This patch invokes two socket connect()s nearly simultaneously, and
the socket that is first connected "wins" and is subsequently used for
the connection. The other is terminated.
There is a very slight IPv4 preference, in that if both sockets connect
simultaneously IPv4 is checked first and thus will win.