To support indicating that a string is null-terminated, the symbol
CURL_ZERO_TERMINATED has been introduced.
Documentation updated accordingly.
The symbols-in-versions document updated; form API symbols deprecation info added.
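A minimal sketch of the intended use with the mime API, assuming an
already-created easy handle 'curl':
  curl_mime *mime = curl_mime_init(curl);
  curl_mimepart *part = curl_mime_addpart(mime);
  curl_mime_name(part, "greeting");
  /* CURL_ZERO_TERMINATED tells libcurl the data is null-terminated, so no
     explicit length has to be passed */
  curl_mime_data(part, "hello world", CURL_ZERO_TERMINATED);
  curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime);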
This feature is poorly supported on Windows: as a replacement, a caller has
to use curl_mime_data_cb() with fread, fseek and possibly fclose
callbacks to process opened files.
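A rough sketch of that replacement, feeding an already-opened FILE *
through callbacks ('part' and the file size handling are assumptions made
for illustration):
  static size_t part_read(char *buffer, size_t size, size_t nitems, void *arg)
  {
    return fread(buffer, size, nitems, (FILE *)arg);
  }
  static int part_seek(void *arg, curl_off_t offset, int origin)
  {
    return fseek((FILE *)arg, (long)offset, origin) ?
           CURL_SEEKFUNC_CANTSEEK : CURL_SEEKFUNC_OK;
  }
  static void part_free(void *arg)
  {
    fclose((FILE *)arg);
  }
  /* ... */
  FILE *src = fopen("body.txt", "rb");
  /* 'filesize' determined beforehand by the caller, e.g. with fstat() */
  curl_mime_data_cb(part, filesize, part_read, part_seek, part_free, src);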
The cli tool and documentation are updated accordingly.
The feature is however kept internally for form API compatibility, with
the known caveats it always had.
As a side effect, the cli tool no longer determines the stdin size even when
it could, which results in a chunked transfer encoding. Test 173 is
updated accordingly.
Additional mime-specific tests.
Existing tests updated to reflect small differences (Expect: 100-continue,
data size changes due to empty lines, etc.).
Option -F headers= keyword added to tests.
test1135 disabled until the entry point order change is resolved.
New example smtp-mime.
Examples postit2 and multi-post converted from form API to mime API.
./sslbackend.c:58:3: warning: else after closing brace on same line (BRACEELSE)
} else if(isdigit(*name)) {
^
./sslbackend.c:62:3: warning: else after closing brace on same line (BRACEELSE)
} else
^
It is a one-time *set*, not necessarily a one-time use... it can be
called again if the first call failed or just listed the alternatives.
Clarify that the available backends are the ones this build supports,
plus add some formatting.
Reported-by: Rich Gray
Bug: https://curl.haxx.se/mail/lib-2017-08/0119.html
Let's add a compile-time safe API to select an SSL backend. This
function needs to be called *before* curl_global_init(), and can be
called only once.
Side note: we do not explicitly test that it is called before
curl_global_init(), but we do verify that it is not called multiple times
(even implicitly).
If SSL is used before the function was called, it will use whatever the
CURL_SSL_BACKEND environment variable says (or default to the first
available SSL backend), and if a subsequent call to
curl_global_sslset() disagrees with the previous choice, it will fail
with CURLSSLSET_TOO_LATE.
The function also accepts an "avail" parameter to point to a (read-only)
NULL-terminated list of available backends. This comes in real handy if
an application wants to let the user choose between whatever SSL backends
the currently available libcurl has to offer: simply call
curl_global_sslset(-1, NULL, &avail);
which will return CURLSSLSET_UNKNOWN_BACKEND and populate the avail
variable to point to the relevant information to present to the user.
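A small sketch of that enumeration, followed by selecting one of the listed
backends (illustrative only):
  const curl_ssl_backend **avail;
  int i;
  /* list the backends this libcurl was built with */
  if(curl_global_sslset((curl_sslbackend)-1, NULL, &avail) ==
     CURLSSLSET_UNKNOWN_BACKEND) {
    for(i = 0; avail[i]; i++)
      printf("%d: %s\n", avail[i]->id, avail[i]->name);
  }
  /* then pick one, by id or by name, before curl_global_init() */
  if(curl_global_sslset(CURLSSLBACKEND_OPENSSL, NULL, NULL) != CURLSSLSET_OK)
    fprintf(stderr, "OpenSSL backend not available in this build\n");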
Just like with the HTTP/2 push functions, we have to add the
declaration of the curl_global_sslset() function to the header file
*multi.h* because VMS and OS/400 require a stable order of functions
declared in include/curl/*.h (where the header files are sorted
alphabetically). This looks a bit funny, but it cannot be helped.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The required low-level logic was already available in `libssh2` (via
the `LIBSSH2_FLAG_COMPRESS` option of `libssh2_session_flag()`[1]).
This patch adds the new `libcurl` option `CURLOPT_SSH_COMPRESSION`
(boolean) and the new `curl` command-line option `--compressed-ssh`
to request this `libssh2` feature. To have compression enabled, it
is required that the SSH server supports a (zlib) compatible
compression method and that `libssh2` was built with `zlib` support
enabled.
[1] https://www.libssh2.org/libssh2_session_flag.html
Ref: https://github.com/curl/curl/issues/1732
Closes https://github.com/curl/curl/pull/1735
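A minimal libcurl-side sketch of enabling the new option (the URL is a
placeholder, 'curl' an assumed easy handle):
  curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/file.txt");
  /* ask libssh2 to negotiate zlib compression if the server offers it */
  curl_easy_setopt(curl, CURLOPT_SSH_COMPRESSION, 1L);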
Update the progress timers `t_nslookup`, `t_connect`, `t_appconnect`,
`t_pretransfer`, and `t_starttransfer` to track the total times for
these activities when a redirect is followed. Previously, only the times
for the most recent request would be tracked.
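An application following redirects reads the timers as before; the values
now cover the whole redirect chain instead of only the last request (sketch
only, 'curl' being an easy handle):
  double namelookup, connect, starttransfer;
  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  curl_easy_perform(curl);
  curl_easy_getinfo(curl, CURLINFO_NAMELOOKUP_TIME, &namelookup);
  curl_easy_getinfo(curl, CURLINFO_CONNECT_TIME, &connect);
  curl_easy_getinfo(curl, CURLINFO_STARTTRANSFER_TIME, &starttransfer);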
Related changes:
- Rename `Curl_pgrsResetTimesSizes` to `Curl_pgrsResetTransferSizes`
now that the function only resets transfer sizes and no longer
modifies any of the progress timers.
- Add a bool to the `Progress` struct that is used to prevent
double-counting `t_starttransfer` times.
Added test case 1399.
Fixes #522 and Known Bug 1.8
Closes #1602
Reported-by: joshhe on github
Commit curl-7_54_0-118-g8b2f22e changed the output format of curl --help
to use <file> and <dir> instead of FILE and DIR, which caused zsh.pl to
produce a broken completion script:
% curl --<TAB>
_curl:10: no such file or directory: seconds
Closes #1779
If libcurl was built with GSS-API support, it unconditionally advertised
GSS-API authentication while connecting to a SOCKS5 proxy. This caused
problems in environments with improperly configured Kerberos: a stock
libcurl failed to connect, while a libcurl built without GSS-API
connected fine using username and password.
This commit introduces the CURLOPT_SOCKS5_AUTH option to control the
allowed methods for SOCKS5 authentication at run time.
Note that a new option was preferred over reusing CURLOPT_PROXYAUTH
for compatibility reasons because the set of authentication methods
allowed by default was different for HTTP and SOCKS5 proxies.
Bug: https://curl.haxx.se/mail/lib-2017-01/0005.html
Closes https://github.com/curl/curl/pull/1454
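A brief CURLOPT_SOCKS5_AUTH usage sketch (proxy host and credentials are
placeholders):
  curl_easy_setopt(curl, CURLOPT_PROXY, "socks5://proxy.example.com:1080");
  /* restrict SOCKS5 authentication to username/password, leaving out GSS-API */
  curl_easy_setopt(curl, CURLOPT_SOCKS5_AUTH, (long)CURLAUTH_BASIC);
  curl_easy_setopt(curl, CURLOPT_PROXYUSERNAME, "user");
  curl_easy_setopt(curl, CURLOPT_PROXYPASSWORD, "secret");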
... to enable sending "OPTIONS *" which wasn't possible previously.
This option currently only works for HTTP.
Added test cases 1298 + 1299 to verify
Fixes #1280
Closes #1462
This change introduces new alternatives for the existing six
curl_easy_getinfo() options that return sizes or speeds as doubles. The
new versions are named like the old ones but with an appended '_T':
CURLINFO_CONTENT_LENGTH_DOWNLOAD_T
CURLINFO_CONTENT_LENGTH_UPLOAD_T
CURLINFO_SIZE_DOWNLOAD_T
CURLINFO_SIZE_UPLOAD_T
CURLINFO_SPEED_DOWNLOAD_T
CURLINFO_SPEED_UPLOAD_T
Closes #1511
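The new variants return curl_off_t values instead of doubles; a small
sketch (illustrative only):
  curl_off_t dl_size, dl_speed;
  if(curl_easy_getinfo(curl, CURLINFO_SIZE_DOWNLOAD_T, &dl_size) == CURLE_OK &&
     curl_easy_getinfo(curl, CURLINFO_SPEED_DOWNLOAD_T, &dl_speed) == CURLE_OK)
    printf("got %lld bytes at %lld bytes/sec\n",
           (long long)dl_size, (long long)dl_speed);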
... unless "--output -" is used. Binary detection is done by simply
checking for a binary zero in early data.
Added tests 1425 and 1426 to verify.
Closes #1512
Some code (e.g. Curl_fillreadbuffer) assumes that this buffer is not
exceedingly tiny and will break if it is. This same check is already
done at run time in the CURLOPT_BUFFERSIZE option.
... using the docs/cmdline-opts/gen.pl script, so that we get all the
command line option documentation from the same source.
The generation of the list has to be done manually and pasted into the
source code.
Closes #1465
... and USE_ENVIRONMENT and --environment. It was once added for RISC OS
support and its platform specific behavior has been annoying ever
since. Added in commit c3c8bbd3b2, mostly unchanged since
then. Most probably not actually used for years.
Closes #1463
Commit 80a87e8a broke 'make dist' as it can't handle installing from
absolute target names. Rearranged the dependencies so the absolute name
is used for building but the relative name is used for distributing.
The module contains a more comprehensive set of trust information than
supported by nss-pem, because libnssckbi.so also includes information
about distrusted certificates.
Reviewed-by: Kai Engert
Closes #1414
$< is only allowed in implicit rules in some non-GNU makes (e.g. BSD,
AIX) so avoid using it elsewhere by referencing the dependent curl.1 file
directly instead. This is somewhat tricky because the file is supplied
in the packaged tar ball (but not in git) but must still be able to be
rebuilt when its dependencies change. The right thing must happen in
both tar ball and git source trees, as well as in both in-tree and
out-of-tree builds.
For some reason, CMake 2.8.12.2 did not expand the list argument in a
single DEPENDS argument. Remove the quotes, so it gets expanded into
multiple arguments for add_custom_command and add_custom_target.
Fixes https://github.com/curl/curl/issues/1370
Closes #1372
Note that for some reason there is this warning (that also exists with
autotools, added since curl-7_15_1-94-ga718cb05f):
docs/libcurl/curl_multi_socket_all.3:1: can't open `man3/curl_multi_socket.3': No such file or directory
Additionally, adjust the roffit --mandir option to support creating
links when doing out-of-tree builds.
Ref: https://github.com/curl/curl/pull/1288
Also make Perl mandatory to allow building the docs.
While CMakeLists.txt could probably read the list of manual pages from
Makefile.am, actually putting those in CMakeLists.txt is cleaner so that
is what is done here.
Fixes #1230
Ref: https://github.com/curl/curl/pull/1288
For easier sharing with CMake. The contents were reformatted to use
two-space indent and expanded tabs (matching lib/Makefile.common).
Ref: https://github.com/curl/curl/pull/1288
- Add new option CURLOPT_SUPPRESS_CONNECT_HEADERS to allow suppressing
proxy CONNECT response headers from the user callback functions
CURLOPT_HEADERFUNCTION and CURLOPT_WRITEFUNCTION.
- Add new tool option --suppress-connect-headers to expose
CURLOPT_SUPPRESS_CONNECT_HEADERS and allow suppressing proxy CONNECT
response headers from --dump-header and --include.
Assisted-by: Jay Satiro
Assisted-by: CarloCannas@users.noreply.github.com
Closes https://github.com/curl/curl/pull/783
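A libcurl-side sketch of CURLOPT_SUPPRESS_CONNECT_HEADERS in a CONNECT
tunnel setup (proxy host and header_cb are placeholders):
  curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");
  curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
  /* keep the proxy CONNECT response headers away from HEADERFUNCTION and
     WRITEFUNCTION */
  curl_easy_setopt(curl, CURLOPT_SUPPRESS_CONNECT_HEADERS, 1L);
  curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, header_cb);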
This commit introduces the CURL_SSLVERSION_MAX_* constants as well as
the --tls-max option of the curl tool.
Closes https://github.com/curl/curl/pull/1166
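On the libcurl side the new constants are OR'ed into CURLOPT_SSLVERSION
together with the minimum version; a sketch (illustrative only):
  /* allow TLS 1.0 and up, but never negotiate higher than TLS 1.2 */
  curl_easy_setopt(curl, CURLOPT_SSLVERSION,
                   (long)(CURL_SSLVERSION_TLSv1_0 | CURL_SSLVERSION_MAX_TLSv1_2));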
maketgz now runs scripts/updatemanpages.pl to update the man pages .TH
section to use the current date and curl/libcurl version.
(TODO Section 3.1)
Closes #1058
In DarwinSSL the SSLSetPeerDomainName function is used to enable both
sending SNI and verifying the host. When host verification is disabled
the function cannot be called; therefore SNI is disabled as well.
Closes https://github.com/curl/curl/pull/1240
- Change CURLOPT_PROXY_CAPATH to return CURLE_NOT_BUILT_IN if the option
is not supported, which is the same as what we already do for
CURLOPT_CAPATH.
- Change the curl tool to handle CURLOPT_PROXY_CAPATH error
CURLE_NOT_BUILT_IN as a warning instead of as an error, which is the
same as what we already do for CURLOPT_CAPATH.
- Fix CAPATH docs to show that CURLE_NOT_BUILT_IN is returned when the
respective CAPATH option is not supported by the SSL library.
Ref: https://github.com/curl/curl/pull/1257
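A sketch of the resulting application-side handling (the path is a
placeholder):
  CURLcode r = curl_easy_setopt(curl, CURLOPT_PROXY_CAPATH, "/etc/ssl/certs");
  if(r == CURLE_NOT_BUILT_IN)
    fprintf(stderr, "warning: proxy CA path not supported by this SSL backend\n");
  else if(r != CURLE_OK)
    fprintf(stderr, "CURLOPT_PROXY_CAPATH failed: %s\n", curl_easy_strerror(r));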
Builds with axTLS 2.1.2. This then also breaks compatibility with axTLS
< 2.1.0 (the older API).
... and fix the session_id mixup brought in by 04b4ee549
Fixes #1220
- Document in --socks* opts they're still mutually exclusive with --proxy.
Partial revert of 423a93c; I had misinterpreted the SOCKS proxy +
HTTP/HTTPS proxy combination.
- Document in --socks* opts that --preproxy can be used to specify a
SOCKS proxy at the same time --proxy is used with an HTTP/HTTPS proxy.
On Windows it's possible to have input files with CRLF line endings and
a perl that defaults to LF line endings (e.g. msysgit). Currently that
results in generator output of mixed line endings of CR, LF and CRLF.
This change fixes that issue in the most succinct way by opening the
files in :crlf text mode even when the perl being used does not default
to that mode. (On operating systems that don't have a separate text mode
it's essentially a no-op.) The output continues to be in the perl's
native line ending.
Replace use of the fixed macro BUFSIZE to define the size of the receive
buffer. Reappropriate CURLOPT_BUFFERSIZE to also allow enlarging the receive
buffer. Upon setting, resize the buffer if the requested size is larger than
the current default, up to a MAX_BUFSIZE (512KB). This can benefit protocols
like SFTP.
Closes #1222
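An application-side sketch of the enlarged CURLOPT_BUFFERSIZE (illustrative
only):
  /* ask for a 100KB receive buffer; values above the old default are now
     honored, capped at the 512KB maximum mentioned above */
  curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 102400L);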
In addition to unix domain sockets, Linux also supports an
abstract namespace which is independent of the filesystem.
In order to support it, add new CURLOPT_ABSTRACT_UNIX_SOCKET
option which uses the same storage as CURLOPT_UNIX_SOCKET_PATH
internally, along with a flag to specify abstract socket.
On non-supporting platforms, the abstract address will be
interpreted as an empty string and fail gracefully.
Also add new --abstract-unix-socket tool parameter.
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Reported-by: Chungtsun Li (typeless)
Reviewed-by: Daniel Stenberg
Reviewed-by: Peter Wu
Closes #1197
Fixes #1061
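A small CURLOPT_ABSTRACT_UNIX_SOCKET usage sketch (socket name and URL are
placeholders):
  /* connect through a socket in the Linux abstract namespace instead of TCP */
  curl_easy_setopt(curl, CURLOPT_ABSTRACT_UNIX_SOCKET, "my-abstract-socket");
  curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/status");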
Previously, when the http_proxy environment variable was in use, the noproxy
list was the combination of the --noproxy option and the NO_PROXY environment
variable. Since this commit, the --noproxy option overrides the NO_PROXY
environment variable even when the http_proxy environment variable is used.
Closes #1140
Follow-up to 82245ea: Fix the example program sendrecv.c (handle
CURLE_AGAIN, handle incomplete send). Improve the documentation
for curl_easy_recv() and curl_easy_send().
Reviewed-by: Frank Meier
Assisted-by: Jay Satiro
See https://github.com/curl/curl/pull/1134
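For reference, the kind of send loop the example now uses, handling both
CURLE_AGAIN and short sends (a sketch only, assuming CURLOPT_CONNECT_ONLY
was used and that 'sockfd' and wait_on_socket() come from the application):
  const char *request = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
  size_t total = strlen(request);
  size_t offset = 0;
  while(offset < total) {
    size_t sent = 0;
    CURLcode res = curl_easy_send(curl, request + offset, total - offset, &sent);
    offset += sent;
    if(res == CURLE_AGAIN) {
      /* socket not ready: wait for writability, then retry */
      wait_on_socket(sockfd, 0, 60000L);
      continue;
    }
    if(res != CURLE_OK)
      break;
  }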
This is the first time we replace the manually edited curl.1 with the
generated one created by gen.pl and the individual option documentation
pages.
Do not edit this file, edit the individual pages and regenerate this
output.
This file will be generated by the build system soon and then removed
from git.
CURLOPT_SOCKS_PROXY -> CURLOPT_PRE_PROXY
Added the corresponding --preproxy command line option. Sets a SOCKS
proxy to connect to _before_ connecting to an HTTP(S) proxy.
This was added as part of the SOCKS+HTTPS proxy merge but there's no
need to support this as we prefer to have the protocol specified as a
prefix instead.
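A sketch of CURLOPT_PRE_PROXY combined with a regular proxy (host names are
placeholders):
  /* first hop: the SOCKS5 pre-proxy, then the HTTP proxy, then the target */
  curl_easy_setopt(curl, CURLOPT_PRE_PROXY, "socks5://sockshost:1080");
  curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxyhost:8080");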
Since it now reads responses one byte at a time, a loop could be removed
and it is no longer limited to getting the whole response within 16K; it
is now instead only limited to a 16K maximum header line length.
Adds access to the effectively used protocol/scheme to both libcurl and
curl, both in string and numeric (CURLPROTO_*) form.
Note that the string form will be uppercase, as it is just the internal
string.
As these strings are declared internally as const, and all other strings
returned by curl_easy_getinfo() are de-facto const as well, string
handling in getinfo.c got const-ified.
Closes #1137
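A getinfo sketch; the names CURLINFO_SCHEME and CURLINFO_PROTOCOL are not
stated above, so treat them as an assumption made for illustration:
  char *scheme = NULL;   /* owned by libcurl, do not free */
  long protocol = 0;
  curl_easy_getinfo(curl, CURLINFO_SCHEME, &scheme);     /* e.g. "HTTPS" */
  curl_easy_getinfo(curl, CURLINFO_PROTOCOL, &protocol); /* e.g. CURLPROTO_HTTPS */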
* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications as opposed to HTTP proxies that receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well.
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent from their --foo counterparts,
except --proxy-crlfile which defaults to --crlfile and --proxy-capath
which defaults to --capath.
Curl now also supports the %{proxy_ssl_verify_result} --write-out variable,
similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
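A libcurl-side sketch of talking to an HTTPS proxy, assuming the options
mirror the command line ones above under CURLOPT_PROXY_* names (hosts and
file names are placeholders):
  curl_easy_setopt(curl, CURLOPT_PROXY, "https://proxy.example.com:443");
  curl_easy_setopt(curl, CURLOPT_PROXY_CAINFO, "proxy-ca.pem");
  curl_easy_setopt(curl, CURLOPT_PROXY_SSL_VERIFYPEER, 1L);
  curl_easy_setopt(curl, CURLOPT_PROXY_SSL_VERIFYHOST, 2L);
  curl_easy_setopt(curl, CURLOPT_URL, "https://origin.example.com/");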
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
We're mostly saying just "curl" in lower case these days so here's a big
cleanup to adapt to this reality. A few instances are left as the
project could still formally be considered called cURL.
Also heavily edited for content. Removed lots of old cruft that we added
like 10+ years ago that is likely incorrect by now.
Also removed INSTALL.devcpp for the same reason.
Fix the type required for the YourClass::func C++ function (use size_t, in
line with the documentation for CURLOPT_WRITEFUNCTION) and the missing second
colon when specifying the static function for CURLOPT_WRITEFUNCTION.
Add the new option CURLOPT_KEEP_SENDING_ON_ERROR to control whether
sending the request body shall be completed when the server responds
early with an error status code.
This is suitable for manual NTLM authentication.
Reviewed-by: Jay Satiro
Closes https://github.com/curl/curl/pull/904
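A brief CURLOPT_KEEP_SENDING_ON_ERROR sketch (illustrative only):
  /* keep sending the request body even when the server has already answered
     with an error status, e.g. during manual NTLM negotiation */
  curl_easy_setopt(curl, CURLOPT_KEEP_SENDING_ON_ERROR, 1L);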
.. and add that --proto-redir and CURLOPT_REDIR_PROTOCOLS do not
override protocols denied by --proto and CURLOPT_PROTOCOLS.
- Add a test to enforce: --proto deny must override --proto-redir allow
Closes https://github.com/curl/curl/pull/1031
Since we're using CURLE_FTP_WEIRD_SERVER_REPLY in imap, pop3 and smtp as
more of a generic "failed to parse" introduce an alias without FTP in
the name.
Closes https://github.com/curl/curl/pull/975
Speed limits (from CURLOPT_MAX_RECV_SPEED_LARGE &
CURLOPT_MAX_SEND_SPEED_LARGE) were applied simply by comparing the limits
with the cumulative average speed of the entire transfer; while this
might work at times with good/constant connections, in other cases it
can result in the limits simply being "ignored" for more than "short
bursts" (as stated in the man page).
Consider a download that goes on much slower than the limit for some
time (because bandwidth is used elsewhere, the server is slow, whatever
the reason); then once things get better, curl would simply ignore the
limit until the average speed (since the beginning of the transfer)
reached the limit. This could render the limit useless for effectively
avoiding use of the entire bandwidth (at least for quite some time).
So instead, we now use a "moving starting point" as reference, and every
time at least as much as the limit has been transferred, we can reset
this starting point to the current position. This gives a good limiting
effect that applies to the "current speed" with instant reactivity (in
case of sudden speed bursts).
Closes #971
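A conceptual sketch of that moving-reference approach, not the actual
libcurl code; the names and the time representation are made up for
illustration:
  struct ratelimit {
    curl_off_t limit;      /* allowed bytes per second */
    curl_off_t ref_bytes;  /* byte count at the reference point */
    double ref_time;       /* time (seconds) of the reference point */
  };
  /* returns how many seconds to wait before transferring more */
  static double rate_wait(struct ratelimit *r, curl_off_t total_bytes, double now)
  {
    curl_off_t span = total_bytes - r->ref_bytes;     /* bytes since reference */
    double elapsed = now - r->ref_time;               /* seconds since reference */
    double minimum = (double)span / (double)r->limit; /* time the span should take */
    if(minimum > elapsed)       /* too fast right now: pause for the difference */
      return minimum - elapsed;
    if(span >= r->limit) {      /* a full "limit" worth passed at an OK pace:
                                   slide the reference point forward so old
                                   history stops diluting the average */
      r->ref_bytes = total_bytes;
      r->ref_time = now;
    }
    return 0.0;
  }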
With HTTP/2 each transfer is made in an individual logical stream over the
connection, so most errors that previously caused the connection to get
force-closed now instead just kill the stream and not the connection.
Fixes #941
After a few wasted hours hunting down the reason for slowness during a
TLS handshake that turned out to be because of TCP_NODELAY not being
set, I think we have enough motivation to toggle the default for this
option. We now enable TCP_NODELAY by default and allow applications to
switch it off.
This also makes --tcp-nodelay unnecessary, but --no-tcp-nodelay can be
used to disable it.
Thanks-to: Tim Rühsen
Bug: https://curl.haxx.se/mail/lib-2016-06/0143.html
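Applications that want the old behaviour back can switch it off explicitly
(sketch):
  /* re-enable Nagle's algorithm by turning TCP_NODELAY off again */
  curl_easy_setopt(curl, CURLOPT_TCP_NODELAY, 0L);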
Explain how some FTP servers support the machine readable listing
format MLSD from RFC 3659 and compare it to LIST.
Ref: https://github.com/curl/curl/issues/906
When CURLOPT_POSTFIELDS is set to an empty string libcurl will send a
zero-byte POST. Prior to this change it was documented as sending data
from the read callback.
This also changes the wording of what happens when it is empty or NULL so that
it's hopefully easier to understand for people whose primary language
isn't English.
Bug: https://github.com/curl/curl/issues/862
Reported-by: Askar Safin
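In other words (sketch, 'curl' being an easy handle):
  /* an empty, non-NULL string now means: send a zero-byte POST body */
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "");
  /* whereas a NULL CURLOPT_POSTFIELDS combined with CURLOPT_POST still makes
     libcurl fetch the body from the read callback */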
The connect-to list isn't copied, so the list must remain valid for as long
as the handle may be used for a transfer.
Bug: https://github.com/curl/curl/pull/819
Reported-by: Michael Kaufmann
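A sketch of that lifetime requirement (host names are placeholders):
  struct curl_slist *connect_to = NULL;
  connect_to = curl_slist_append(connect_to, "example.com::server1.example.com:");
  curl_easy_setopt(curl, CURLOPT_CONNECT_TO, connect_to);
  curl_easy_perform(curl);
  /* the list is not copied by libcurl: clear the option and only then free
     the list, once the handle will not do any further transfers with it */
  curl_easy_setopt(curl, CURLOPT_CONNECT_TO, NULL);
  curl_slist_free_all(connect_to);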