This uses the brotli external library (https://github.com/google/brotli).
Brotli becomes a feature: an additional curl_version_info() bit and
structure fields are provided for it, and CURLVERSION_NOW is bumped.
Tests 314 and 315 check Brotli content unencoding with correct and
erroneous data.
Some tests are updated to accommodate the now configuration-dependent
parameters of the Accept-Encoding header.
- Use spaces instead of tabs as the delimiter.
Follow-up to 7c52b12 which added the entry. The entry had used tabs, but
the symbol-scan parser doesn't recognize tabs and would fail the symbol check.
To support indicating that a string is nul-terminated, the symbol
CURL_ZERO_TERMINATED has been introduced.
Documentation updated accordingly.
symbols-in-versions updated. Added form API symbols deprecation info.
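A minimal sketch of how CURL_ZERO_TERMINATED is meant to be used as a
length value with the mime API (field name and data are placeholders,
error checking omitted):
CURL *curl = curl_easy_init();
if(curl) {
  curl_mime *mime = curl_mime_init(curl);
  curl_mimepart *part = curl_mime_addpart(mime);
  curl_mime_name(part, "greeting");
  /* CURL_ZERO_TERMINATED tells libcurl the data is nul-terminated, so no
     explicit length needs to be passed */
  curl_mime_data(part, "hello", CURL_ZERO_TERMINATED);
  curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime);
  /* ... set the URL and perform the transfer ... */
  curl_mime_free(mime);
  curl_easy_cleanup(curl);
}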
Let's add a compile time safe API to select an SSL backend. This
function needs to be called *before* curl_global_init(), and can be
called only once.
Side note: we do not explicitly test that it is called before
curl_global_init(), but we do verify that it is not called multiple times
(even implicitly).
If SSL is used before the function was called, it will use whatever the
CURL_SSL_BACKEND environment variable says (or default to the first
available SSL backend), and if a subsequent call to
curl_global_sslset() disagrees with the previous choice, it will fail
with CURLSSLSET_TOO_LATE.
The function also accepts an "avail" parameter to point to a (read-only)
NULL-terminated list of available backends. This comes in real handy if
an application wants to let the user choose between whatever SSL backends
the currently available libcurl has to offer: simply call
curl_global_sslset(-1, NULL, &avail);
which will return CURLSSLSET_UNKNOWN_BACKEND and populate the avail
variable to point to the relevant information to present to the user.
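A minimal sketch of how an application might enumerate the available
backends and then select one with this API (the backend choice and the
output format are illustrative only):
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  const curl_ssl_backend **avail;
  int i;

  /* an unknown backend id plus an "avail" pointer yields
     CURLSSLSET_UNKNOWN_BACKEND and the list of compiled-in backends */
  if(curl_global_sslset((curl_sslbackend)-1, NULL, &avail) ==
     CURLSSLSET_UNKNOWN_BACKEND) {
    for(i = 0; avail[i]; i++)
      printf("%d: %s\n", (int)avail[i]->id, avail[i]->name);
  }

  /* then pick one explicitly, before curl_global_init() */
  if(curl_global_sslset(CURLSSLBACKEND_OPENSSL, NULL, NULL) != CURLSSLSET_OK)
    return 1;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  /* ... transfers ... */
  curl_global_cleanup();
  return 0;
}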
Just like with the HTTP/2 push functions, we have to add the declaration
of the curl_global_sslset() function to the header file
*multi.h* because VMS and OS/400 require a stable order of the functions
declared in include/curl/*.h (where the header files are sorted
alphabetically). This looks a bit funny, but it cannot be helped.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The required low-level logic was already available in `libssh2`
(via the `LIBSSH2_FLAG_COMPRESS` option of
`libssh2_session_flag()`[1]).
This patch adds the new `libcurl` option `CURLOPT_SSH_COMPRESSION`
(boolean) and the new `curl` command-line option `--compressed-ssh`
to request this `libssh2` feature. To have compression enabled, it
is required that the SSH server supports a (zlib) compatible
compression method and that `libssh2` was built with `zlib` support
enabled.
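A minimal libcurl sketch requesting SSH compression for a transfer (the
URL is a placeholder; the option simply has no effect if the
prerequisites above are not met):
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/file.txt");
  /* ask libssh2 to negotiate zlib compression with the server */
  curl_easy_setopt(curl, CURLOPT_SSH_COMPRESSION, 1L);
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}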
[1] https://www.libssh2.org/libssh2_session_flag.html
Ref: https://github.com/curl/curl/issues/1732
Closes https://github.com/curl/curl/pull/1735
If libcurl was built with GSS-API support, it unconditionally advertised
GSS-API authentication while connecting to a SOCKS5 proxy. This caused
problems in environments with improperly configured Kerberos: a stock
libcurl failed to connect, while a libcurl built without GSS-API
connected fine using a username and password.
This commit introduces the CURLOPT_SOCKS5_AUTH option to control the
allowed methods for SOCKS5 authentication at run time.
Note that a new option was preferred over reusing CURLOPT_PROXYAUTH
for compatibility reasons because the set of authentication methods
allowed by default was different for HTTP and SOCKS5 proxies.
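A minimal sketch restricting SOCKS5 authentication to username/password
with the new option (the proxy host and credentials are placeholders):
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  curl_easy_setopt(curl, CURLOPT_PROXY, "socks5://proxy.example.com:1080");
  /* allow only username/password auth, never advertise GSS-API */
  curl_easy_setopt(curl, CURLOPT_SOCKS5_AUTH, (long)CURLAUTH_BASIC);
  curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:secret");
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}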
Bug: https://curl.haxx.se/mail/lib-2017-01/0005.html
Closes https://github.com/curl/curl/pull/1454
... to enable sending "OPTIONS *" which wasn't possible previously.
This option currently only works for HTTP.
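A usage sketch, assuming the option introduced here is
CURLOPT_REQUEST_TARGET (--request-target in the tool); the names are not
stated above and are an assumption:
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "OPTIONS");
  /* CURLOPT_REQUEST_TARGET (assumed name) replaces the path in the
     request line, allowing "OPTIONS *" */
  curl_easy_setopt(curl, CURLOPT_REQUEST_TARGET, "*");
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}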
Added test cases 1298 + 1299 to verify
Fixes #1280
Closes #1462
This change introduces new alternatives for the existing six
curl_easy_getinfo() options that return sizes or speeds as doubles. The
new versions are named like the old ones but with an appended '_T':
CURLINFO_CONTENT_LENGTH_DOWNLOAD_T
CURLINFO_CONTENT_LENGTH_UPLOAD_T
CURLINFO_SIZE_DOWNLOAD_T
CURLINFO_SIZE_UPLOAD_T
CURLINFO_SPEED_DOWNLOAD_T
CURLINFO_SPEED_UPLOAD_T
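A minimal sketch reading one of the new info values; note the curl_off_t
result type instead of double (printing uses the CURL_FORMAT_CURL_OFF_T
helper, error checking mostly omitted):
CURL *curl = curl_easy_init();
if(curl) {
  curl_off_t dl;
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  curl_easy_perform(curl);
  /* the _T variants return a curl_off_t instead of a double */
  if(!curl_easy_getinfo(curl, CURLINFO_SIZE_DOWNLOAD_T, &dl))
    printf("Downloaded %" CURL_FORMAT_CURL_OFF_T " bytes\n", dl);
  curl_easy_cleanup(curl);
}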
Closes #1511
- Add new option CURLOPT_SUPPRESS_CONNECT_HEADERS to allow suppressing
proxy CONNECT response headers from the user callback functions
CURLOPT_HEADERFUNCTION and CURLOPT_WRITEFUNCTION.
- Add new tool option --suppress-connect-headers to expose
CURLOPT_SUPPRESS_CONNECT_HEADERS and allow suppressing proxy CONNECT
response headers from --dump-header and --include.
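A minimal libcurl sketch combining the new option with a CONNECT tunnel
(the proxy address is a placeholder):
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");
  curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
  /* hide the proxy's CONNECT response headers from the header/write
     callbacks */
  curl_easy_setopt(curl, CURLOPT_SUPPRESS_CONNECT_HEADERS, 1L);
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}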
Assisted-by: Jay Satiro
Assisted-by: CarloCannas@users.noreply.github.com
Closes https://github.com/curl/curl/pull/783
This commit introduces the CURL_SSLVERSION_MAX_* constants as well as
the --tls-max option of the curl tool.
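A minimal sketch, assuming the usual pattern of OR-ing a
CURL_SSLVERSION_MAX_* value into CURLOPT_SSLVERSION (the exact bounds
here are just an example); the command line equivalent would be
--tls-max 1.2:
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  /* negotiate at least TLS 1.1 but never anything above TLS 1.2 */
  curl_easy_setopt(curl, CURLOPT_SSLVERSION,
                   (long)(CURL_SSLVERSION_TLSv1_1 |
                          CURL_SSLVERSION_MAX_TLSv1_2));
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}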
Closes https://github.com/curl/curl/pull/1166
Replace use of the fixed macro BUFSIZE to define the size of the receive
buffer. Repurpose CURLOPT_BUFFERSIZE to also allow enlarging the receive
buffer. Upon setting, the buffer is resized if the requested size is
larger than the current default, up to MAX_BUFSIZE (512KB). This can
benefit protocols like SFTP.
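A minimal sketch requesting a larger receive buffer (the 256KB figure is
an arbitrary example within the 512KB cap):
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/big.bin");
  /* ask for a 256KB receive buffer */
  curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 256L * 1024L);
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}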
Closes #1222
In addition to unix domain sockets, Linux also supports an
abstract namespace which is independent of the filesystem.
In order to support it, add a new CURLOPT_ABSTRACT_UNIX_SOCKET
option which uses the same storage as CURLOPT_UNIX_SOCKET_PATH
internally, along with a flag to specify an abstract socket.
On non-supporting platforms, the abstract address will be
interpreted as an empty string and fail gracefully.
Also add new --abstract-unix-socket tool parameter.
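A minimal sketch using the new option (the abstract socket name
"curl-test" is a placeholder):
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/status");
  /* connect through the abstract socket "curl-test" instead of TCP */
  curl_easy_setopt(curl, CURLOPT_ABSTRACT_UNIX_SOCKET, "curl-test");
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}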
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Reported-by: Chungtsun Li (typeless)
Reviewed-by: Daniel Stenberg
Reviewed-by: Peter Wu
Closes #1197
Fixes #1061
Adds access to the effectively used protocol/scheme to both libcurl and
curl, both in string and numeric (CURLPROTO_*) form.
Note that the string form will be uppercase, as it is just the internal
string.
As these strings are declared internally as const, and all other strings
returned by curl_easy_getinfo() are de-facto const as well, string
handling in getinfo.c got const-ified.
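A minimal sketch, assuming the new info values are named CURLINFO_SCHEME
and CURLINFO_PROTOCOL (the names are not spelled out above):
CURL *curl = curl_easy_init();
if(curl) {
  char *scheme = NULL;
  long proto = 0;
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  curl_easy_perform(curl);
  curl_easy_getinfo(curl, CURLINFO_SCHEME, &scheme);  /* e.g. "HTTPS" */
  curl_easy_getinfo(curl, CURLINFO_PROTOCOL, &proto); /* CURLPROTO_* value */
  curl_easy_cleanup(curl);
}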
Closes #1137
* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications as opposed to HTTP proxies that receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well.
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent from their --foo counterparts,
except --proxy-crlfile which defaults to --crlfile and --proxy-capath
which defaults to --capath.
Curl now also supports %{proxy_ssl_verify_result} --write-out variable,
similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
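Relating to the HTTPS proxy support above, a minimal libcurl sketch
using a couple of the libcurl counterparts of the new --proxy-* options
(the proxy URL and CA file are placeholders):
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  /* talk TLS to the proxy itself */
  curl_easy_setopt(curl, CURLOPT_PROXY, "https://proxy.example.com:443");
  /* the proxy TLS session gets its own options, independent of the
     origin server's */
  curl_easy_setopt(curl, CURLOPT_PROXY_CAINFO, "/etc/ssl/proxy-ca.pem");
  curl_easy_setopt(curl, CURLOPT_PROXY_SSL_VERIFYPEER, 1L);
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}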
Add the new option CURLOPT_KEEP_SENDING_ON_ERROR to control whether
sending the request body shall be completed when the server responds
early with an error status code.
This is suitable for manual NTLM authentication.
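A minimal sketch enabling the new behavior for a POST (URL and payload
are placeholders):
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "data=payload");
  /* keep sending the request body even if the server answers early with
     an error status, e.g. during manual NTLM authentication */
  curl_easy_setopt(curl, CURLOPT_KEEP_SENDING_ON_ERROR, 1L);
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}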
Reviewed-by: Jay Satiro
Closes https://github.com/curl/curl/pull/904
Since we're using CURLE_FTP_WEIRD_SERVER_REPLY in imap, pop3 and smtp as
more of a generic "failed to parse" error, introduce an alias without
FTP in the name.
Closes https://github.com/curl/curl/pull/975
Previously, when a stream was closed with other than NGHTTP2_NO_ERROR
by RST_STREAM, the underlying TCP connection was dropped. This is
undesirable since there may be other streams multiplexed on it that are
perfectly fine. This change introduces the new error code
CURLE_HTTP2_STREAM, which indicates a stream error that only affects the
relevant stream; the connection should be kept open. The existing
CURLE_HTTP2 means a connection error in general.
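A sketch of how an application might tell the two error codes apart
(assuming an easy handle curl that is already set up):
CURLcode res = curl_easy_perform(curl);
if(res == CURLE_HTTP2_STREAM) {
  /* only this stream failed; the multiplexed connection is still usable,
     so other transfers can continue and this one may be retried */
}
else if(res == CURLE_HTTP2) {
  /* a connection-level HTTP/2 error */
}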
Ref: https://github.com/curl/curl/issues/659
Ref: https://github.com/curl/curl/pull/663
As these two options provide identical functionality, the former for
SOCKS5 proxies and the latter for HTTP proxies, the two options have
been merged.
As such CURLOPT_SOCKS5_GSSAPI_SERVICE is marked as deprecated as of
7.49.0.
Supports HTTP/2 over clear TCP
- Optimize switching to HTTP/2 by removing calls to init and setup
before switching. Switching will eventually call setup and setup calls
init.
- Support a new version value to "force" the use of HTTP/2 over clear TCP
- Add the command line parameter "--http2-prior-knowledge" to the curl
command line tool.
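A minimal libcurl sketch, assuming the corresponding CURLOPT_HTTP_VERSION
value is CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE (the enum name is not
stated above):
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  /* skip the Upgrade dance and speak HTTP/2 directly over clear TCP */
  curl_easy_setopt(curl, CURLOPT_HTTP_VERSION,
                   (long)CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE);
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}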
The two options are almost the same, except in the case of OpenSSL:
CURLINFO_TLS_SESSION OpenSSL session internals is SSL_CTX *.
CURLINFO_TLS_SSL_PTR OpenSSL session internals is SSL *.
For backwards compatibility we couldn't modify CURLINFO_TLS_SESSION to
return an SSL pointer for OpenSSL.
Also, add support for the 'internals' member to point to the SSL object
for the other backends: axTLS, PolarSSL, Secure Channel, Secure Transport
and wolfSSL.
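A sketch of reading the new info with an OpenSSL-backed libcurl (the SSL
type requires <openssl/ssl.h>; the pointer is only meaningful while the
connection exists, e.g. when queried from a callback during the
transfer):
struct curl_tlssessioninfo *tls = NULL;
if(!curl_easy_getinfo(curl, CURLINFO_TLS_SSL_PTR, &tls) && tls &&
   tls->backend == CURLSSLBACKEND_OPENSSL) {
  /* with OpenSSL, 'internals' is the connection's SSL *, not SSL_CTX * */
  SSL *ssl = (SSL *)tls->internals;
  /* ... inspect the handshake/session via the OpenSSL API ... */
  (void)ssl;
}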
Bug: https://github.com/curl/curl/issues/234
Reported-by: dkjjr89@users.noreply.github.com
Bug: https://curl.haxx.se/mail/lib-2015-09/0127.html
Reported-by: Michael König
Some TFTP server implementations ignore the "TFTP Option extension"
(RFC 1782-1784, 2347-2349), or implement it in a flawed way, causing
problems with libcurl. A new curl_easy_setopt() option,
CURLOPT_TFTP_NO_OPTIONS, is introduced; it prevents libcurl from
sending TFTP option requests to a server, avoiding many problems caused
by faulty implementations.
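A minimal sketch disabling TFTP option requests for a transfer (the URL
is a placeholder):
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "tftp://example.com/boot.img");
  /* do not send any TFTP option requests */
  curl_easy_setopt(curl, CURLOPT_TFTP_NO_OPTIONS, 1L);
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}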
Bug: https://github.com/curl/curl/issues/481
This patch addresses known bug #76, where on 64-bit Windows SOCKET is 64
bits wide, but long is only 32, making CURLINFO_LASTSOCKET unreliable.
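A sketch of the presumed replacement, assuming the new info is
CURLINFO_ACTIVESOCKET returning a curl_socket_t (the name is not stated
above):
curl_socket_t sockfd = CURL_SOCKET_BAD;
/* curl_socket_t is wide enough for 64-bit Windows SOCKET handles */
if(!curl_easy_getinfo(curl, CURLINFO_ACTIVESOCKET, &sockfd) &&
   sockfd != CURL_SOCKET_BAD) {
  /* ... use the socket ... */
}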
Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
- Add new option CURLOPT_DEFAULT_PROTOCOL to allow specifying a default
protocol for schemeless URLs.
- Add new tool option --proto-default to expose
CURLOPT_DEFAULT_PROTOCOL.
In the case of schemeless URLs libcurl will behave in this way:
When the option is used libcurl will use the supplied default.
When the option is not used, libcurl will follow its usual plan of
guessing from the hostname and falling back to 'http'.
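A minimal sketch where a schemeless URL is completed with the supplied
default instead of the usual guess (the protocol choice is just an
example):
CURL *curl = curl_easy_init();
if(curl) {
  /* without the option libcurl would guess from the hostname; here the
     URL effectively becomes "ftp://example.com/file.txt" */
  curl_easy_setopt(curl, CURLOPT_URL, "example.com/file.txt");
  curl_easy_setopt(curl, CURLOPT_DEFAULT_PROTOCOL, "ftp");
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}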
This option can be used to enable/disable certificate status verification using
the "Certificate Status Request" TLS extension defined in RFC6066 section 8.
This also adds the CURLE_SSL_INVALIDCERTSTATUS error, to be used when the
certificate status verification fails, and the Curl_ssl_cert_status_request()
function, used to check whether the SSL backend supports the status_request
extension.
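A minimal sketch, assuming the option described here is
CURLOPT_SSL_VERIFYSTATUS (the name is not stated above):
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  /* demand a valid OCSP response via the status_request TLS extension */
  curl_easy_setopt(curl, CURLOPT_SSL_VERIFYSTATUS, 1L);
  if(curl_easy_perform(curl) == CURLE_SSL_INVALIDCERTSTATUS) {
    /* certificate status verification failed */
  }
  curl_easy_cleanup(curl);
}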
The ability to do HTTP requests over a UNIX domain socket has been
requested before, in Apr 2008 [0][1] and Sep 2010 [2]. While a
discussion happened, no patch seems to have made it through. I decided
to give it a go since I need to test an nginx HTTP server which listens
on a UNIX domain socket.
One patch [3] seems to make it possible to use the
CURLOPT_OPENSOCKETFUNCTION function to gain a UNIX domain socket.
Another person wrote a Go program which can do HTTP over a UNIX socket
for Docker[4] which uses a special URL scheme (though the name contains
cURL, it has no relation to the cURL library).
This patch considers support for UNIX domain sockets at the same level
as HTTP proxies / IPv6, it acts as an intermediate socket provider and
not as a separate protocol. Since this feature affects network
operations, a new feature flag was added ("unix-sockets") with a
corresponding CURL_VERSION_UNIX_SOCKETS macro.
A new CURLOPT_UNIX_SOCKET_PATH option is added and documented. This
option enables UNIX domain sockets support for all requests on the
handle (replacing IP sockets and skipping proxies).
A new configure option (--enable-unix-sockets) and CMake option
(ENABLE_UNIX_SOCKETS) can disable this optional feature. Note that I
deliberately did not mark this feature as advanced, this is a
feature/component that should easily be available.
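A minimal sketch using the new option (the socket path is a placeholder;
the host in the URL only affects the Host: header):
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_UNIX_SOCKET_PATH, "/var/run/app.sock");
  curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/info");
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}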
[0]: http://curl.haxx.se/mail/lib-2008-04/0279.html
[1]: http://daniel.haxx.se/blog/2008/04/14/http-over-unix-domain-sockets/
[2]: http://sourceforge.net/p/curl/feature-requests/53/
[3]: http://curl.haxx.se/mail/lib-2008-04/0361.html
[4]: https://github.com/Soulou/curl-unix-socket
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Option --pinnedpubkey takes a path to a public key in DER format and
only connects if it matches (currently only implemented with OpenSSL).
Provides CURLOPT_PINNEDPUBLICKEY for curl_easy_setopt().
Extract a public RSA key from a website like so:
openssl s_client -connect google.com:443 2>&1 < /dev/null | \
sed -n '/-----BEGIN/,/-----END/p' | openssl x509 -noout -pubkey \
| openssl rsa -pubin -outform DER > google.com.der
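And a minimal libcurl sketch pinning against the extracted key (the file
name matches the command above; the mismatch error code is assumed to be
CURLE_SSL_PINNEDPUBKEYNOTMATCH):
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "https://google.com/");
  /* abort the TLS handshake unless the server key matches this DER file */
  curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY, "google.com.der");
  if(curl_easy_perform(curl) == CURLE_SSL_PINNEDPUBKEYNOTMATCH) {
    /* key mismatch */
  }
  curl_easy_cleanup(curl);
}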
1 - fixes the warnings when built without http2 support
2 - adds CURLE_HTTP2, a new error code for errors detected by nghttp2,
basically when they concern HTTP/2-specific things.
And clarify for curl that --proxy-header now must be used for headers
that are meant for a proxy, and they will not be included if the request
is not for a proxy.
When using --http2, one can now selectively disable NPN or ALPN with
--no-alpn and --no-npn. For now this is honored with NSS only.
TODO: honor this option with GnuTLS and OpenSSL
To avoid the regression when users pass in passwords containing semi-
colons, we now drop the ability to set the login options with that same
option. Support for login options in CURLOPT_USERPWD was added in
7.31.0.
Test case 83 was modified to verify that colons and semi-colons can be
used as part of the password when using -u (CURLOPT_USERPWD).
Bug: http://curl.haxx.se/bug/view.cgi?id=1311
Reported-by: Petr Bahula
Assisted-by: Steve Holme
Signed-off-by: Daniel Stenberg <daniel@haxx.se>
Rather than setting the authentication options as part of the login
details specified in the URL, or via the older CURLOPT_USERPWD option,
a new libcurl option has been added to allow the login options to be set
separately.
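A minimal sketch, assuming the new option is CURLOPT_LOGIN_OPTIONS (the
name is not stated above); the password may now safely contain ';':
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com/INBOX");
  curl_easy_setopt(curl, CURLOPT_USERPWD, "user:pass;word");
  /* the login options, e.g. the preferred auth mechanism, are passed
     separately */
  curl_easy_setopt(curl, CURLOPT_LOGIN_OPTIONS, "AUTH=NTLM");
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}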
CURL_SSLVERSION_TLSv1_0, CURL_SSLVERSION_TLSv1_1,
CURL_SSLVERSION_TLSv1_2 enum values are added to force exact TLS version
(CURL_SSLVERSION_TLSv1 means TLS 1.x).
axTLS:
axTLS only supports TLS 1.0 and 1.1, but it cannot be restricted to just
one of these, so we don't allow the new enum values.
darwinssl:
Added support for the new enum values.
SChannel:
Added support for the new enum values.
CyaSSL:
Added support for the new enum values.
Bug: The original CURL_SSLVERSION_TLSv1 value enables only TLS 1.0 (it
did the same before this commit), because CyaSSL cannot be configured to
use TLS 1.0-1.2.
GSKit:
GSKit doesn't seem to support TLS 1.1 and TLS 1.2, so we do not allow
those values.
Bugfix: There was a typo that caused wrong SSL versions to be passed to
GSKit.
NSS:
TLS minor version cannot be set, so we don't allow the new enum values.
QsoSSL:
TLS minor version cannot be set, so we don't allow the new enum values.
OpenSSL:
Added support for the new enum values.
Bugfix: The original CURL_SSLVERSION_TLSv1 value enabled only TLS 1.0,
now it enables 1.0-1.2.
Command-line tool:
Added command line options for the new values.
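A minimal libcurl sketch forcing one exact TLS version with the new enum
values:
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  /* force exactly TLS 1.2 (CURL_SSLVERSION_TLSv1 still means TLS 1.x) */
  curl_easy_setopt(curl, CURLOPT_SSLVERSION, (long)CURL_SSLVERSION_TLSv1_2);
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}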
Doing curl_multi_add_handle() on an easy handle that is already added to
a multi handle now returns this error code. It previously returned
CURLM_BAD_EASY_HANDLE for this condition.
CURLOPT_XFERINFOFUNCTION is now the preferred progress callback function
and CURLOPT_PROGRESSFUNCTION is considered deprecated.
This new callback uses pure 'curl_off_t' arguments to pass on full
resolution sizes. It otherwise retains the same characteristics: the
same call rate, the same meanings for the arguments and the return code
is used the same way.
The progressfunc.c example is updated to show how to use the new
callback with newer libcurls, while still supporting the old one when
built with an older libcurl, or even when built with a newer libcurl but
run against an older one.
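A sketch of a callback using the new prototype (pure curl_off_t
arguments; the print format is just an example):
static int xferinfo(void *clientp, curl_off_t dltotal, curl_off_t dlnow,
                    curl_off_t ultotal, curl_off_t ulnow)
{
  (void)clientp; (void)ultotal; (void)ulnow;
  fprintf(stderr, "downloaded %" CURL_FORMAT_CURL_OFF_T
          " of %" CURL_FORMAT_CURL_OFF_T " bytes\n", dlnow, dltotal);
  return 0; /* returning non-zero aborts the transfer */
}

/* in the setup code: */
curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, xferinfo);
curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);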
Introducing a number of options to the multi interface that
allow for multiple pipelines to the same host, in order to
optimize the balance between the penalty for opening new
connections and the potential pipelining latency.
Two new options for limiting the number of connections:
CURLMOPT_MAX_HOST_CONNECTIONS - Limits the number of running connections
to the same host. When adding a handle that exceeds this limit,
that handle will be put in a pending state until another handle is
finished, so we can reuse the connection.
CURLMOPT_MAX_TOTAL_CONNECTIONS - Limits the number of connections in total.
When adding a handle that exceeds this limit,
that handle will be put in a pending state until another handle is
finished. The free connection will then be reused, if possible, or
closed if the pending handle can't reuse it.
Several new options for pipelining:
CURLMOPT_MAX_PIPELINE_LENGTH - Limits the pipelining length. If a
pipeline is "full" when a connection is to be reused, a new connection
will be opened if the CURLMOPT_MAX_xxx_CONNECTIONS limits allow it.
If not, the handle will be put in a pending state until a connection is
ready (either free or a pipe got shorter).
CURLMOPT_CONTENT_LENGTH_PENALTY_SIZE - A pipelined connection will not
be reused if it is currently processing a transfer with a content
length that is larger than this.
CURLMOPT_CHUNK_LENGTH_PENALTY_SIZE - A pipelined connection will not
be reused if it is currently processing a chunk larger than this.
CURLMOPT_PIPELINING_SITE_BL - A blacklist of hosts that don't allow
pipelining.
CURLMOPT_PIPELINING_SERVER_BL - A blacklist of server types that don't allow
pipelining.
See the curl_multi_setopt() man page for details.
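A minimal sketch setting a few of the new limits on a multi handle (the
numbers are arbitrary example values):
CURLM *multi = curl_multi_init();
if(multi) {
  curl_multi_setopt(multi, CURLMOPT_PIPELINING, 1L);
  /* at most 4 connections per host and 16 in total */
  curl_multi_setopt(multi, CURLMOPT_MAX_HOST_CONNECTIONS, 4L);
  curl_multi_setopt(multi, CURLMOPT_MAX_TOTAL_CONNECTIONS, 16L);
  /* no more than 5 requests pipelined on one connection */
  curl_multi_setopt(multi, CURLMOPT_MAX_PIPELINE_LENGTH, 5L);
  /* ... add easy handles and drive the transfers ... */
  curl_multi_cleanup(multi);
}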
Allow an application to set libcurl-specific SSL options. The first and
only option supported right now is CURLSSLOPT_ALLOW_BEAST.
It makes libcurl disable any work-arounds the underlying SSL
library may have in place to address a known security flaw in the SSL3
and TLS1.0 protocol versions.
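A minimal sketch, assuming the bitmask is passed via CURLOPT_SSL_OPTIONS
(the carrying option is not named above):
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "https://legacy.example.com/");
  /* trade the BEAST mitigation for interoperability with old servers */
  curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS, (long)CURLSSLOPT_ALLOW_BEAST);
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}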
This is a reaction to us unconditionally removing that behavior after
this security advisory:
http://curl.haxx.se/docs/adv_20120124B.html
... it did however cause a lot of programs to fail because of old
servers not liking this work-around. Now programs can opt to decrease
the security in order to interoperate with old servers better.
This adds three new options to control the behavior of TCP keepalives:
- CURLOPT_TCP_KEEPALIVE: enable/disable probes
- CURLOPT_TCP_KEEPIDLE: idle time before sending first probe
- CURLOPT_TCP_KEEPINTVL: delay between successive probes
While not all operating systems support the TCP_KEEPIDLE and
TCP_KEEPINTVL knobs, the library will still allow these options to be
set by clients, silently ignoring the values.
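A minimal sketch enabling keepalive probes (the timing values are
arbitrary examples):
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  curl_easy_setopt(curl, CURLOPT_TCP_KEEPALIVE, 1L);
  /* start probing after 120s of idleness, then probe every 60s */
  curl_easy_setopt(curl, CURLOPT_TCP_KEEPIDLE, 120L);
  curl_easy_setopt(curl, CURLOPT_TCP_KEEPINTVL, 60L);
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}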
1 - Two new error codes are introduced:
CURLE_FTP_ACCEPT_FAILED, set whenever accepting the server's data
connection fails.
CURLE_FTP_ACCEPT_TIMEOUT, set whenever accepting it times out.
Neither of these errors is considered fatal and the control connection
remains OK, because the cause could just be a firewall blocking the
server from connecting to the client.
2 - One new setopt option was introduced:
CURLOPT_ACCEPTTIMEOUT_MS
It sets the maximum amount of time the FTP client will wait for the
server to connect. The internal default accept timeout is 60 seconds.
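A minimal sketch of an active-mode FTP transfer using the new timeout
and handling the new error codes (the URL is a placeholder):
CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.txt");
  curl_easy_setopt(curl, CURLOPT_FTPPORT, "-"); /* active mode (PORT) */
  /* wait at most 30 seconds instead of the 60-second default for the
     server to connect back to us */
  curl_easy_setopt(curl, CURLOPT_ACCEPTTIMEOUT_MS, 30000L);
  switch(curl_easy_perform(curl)) {
  case CURLE_FTP_ACCEPT_FAILED:
  case CURLE_FTP_ACCEPT_TIMEOUT:
    /* not fatal for the control connection; a retry (e.g. in passive
       mode) could be attempted on the same handle */
    break;
  default:
    break;
  }
  curl_easy_cleanup(curl);
}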