The multiply() function, used to avoid integer overflows, was itself
the cause of a possible division by zero error when passed a specially
formatted glob.
Reported-by: GwanYeong Kim
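A hedged sketch (not the actual glob code) of how a division-based
overflow check can end up dividing by zero, and the kind of guard that
avoids it:

static int multiply(unsigned long *amount, unsigned long with)
{
  unsigned long sum = *amount * with;
  if(!with) {
    *amount = 0;
    return 0;      /* guard: dividing by 'with' below would crash */
  }
  if(sum / with != *amount)
    return 1;      /* overflow detected */
  *amount = sum;
  return 0;
}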
Don't malloc() the temporary buffer, and use the correct type:
SearchPath() works with TCHAR, but SearchPathA() works with char.
Set the buffer size to MAX_PATH, because the terminating null byte
is already included in MAX_PATH.
Reviewed-by: Daniel Stenberg
Reviewed-by: Marcel Raad
Closes #1548
... and support an additional "security patched" date for those who
enhance older versions that way. Pass on the define CURL_PATCHSTAMP with
a date for that.
Building with non-release headers shows the date as [unreleased].
Also: this changes the date format generated in the curlver.h file to be
"YYYY-MM-DD" (no name of the day or month, no time, no time zone) to
make it easier on the eye and easier to parse. Example (new) date
string: 2017-05-09
Suggested-by: Brian Childs
Closes #1474
... using the docs/cmdline-opts/gen.pl script, so that we get all the
command line option documentation from the same source.
The generation of the list has to be done manually and pasted into the
source code.
Closes #1465
... and USE_ENVIRONMENT and --environment. It was once added for RISC OS
support and its platform specific behavior has been annoying ever
since. Added in commit c3c8bbd3b2, mostly unchanged since
then. Most probably not actually used for years.
Closes #1463
clang complains:
tool_cb_prg.c:86:22: error: implicit conversion increases
floating-point precision: 'float' to 'double'
[-Werror,-Wdouble-promotion]
Fix this by using a double instead of a float constant.
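A hedged illustration (not the actual tool_cb_prg.c change) of what the
warning is about and how a double constant silences it:

double point = 512.0;
double bad  = point / 1024.0f; /* float constant promoted to double: warning */
double good = point / 1024.0;  /* double constant: no promotion, no warning */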
Test command 'time curl http://localhost/80GB -so /dev/null' on a Debian
Linux.
Before (middle performing run out of 9):
real 0m28.078s
user 0m11.240s
sys 0m12.876s
After (middle performing run out of 9):
real 0m26.356s (93.9%)
user 0m5.324s (47.4%)
sys 0m8.368s (65.0%)
Also, doing SFTP over a 200 millisecond latency link is now about 6 times
faster.
Closes #1446
$< is only allowed in implicit rules in some non-GNU makes (e.g. BSD,
AIX) so avoid use elsewhere by referencing the dependent curl.1 file
directly instead. This is somewhat tricky because the file is supplied
in the packaged tar ball (but not in git) but must still be able to be
rebuilt when its dependencies change. The right thing must happen in
both tar ball and git source trees, as well as in both in-tree and
out-of-tree builds.
The POSIX standard location is <poll.h>. Using <sys/poll.h> results in
warning spam when using the musl standard library.
Closes https://github.com/curl/curl/pull/1406
MinGW complains:
tool_operate.c:197:15: error: comparison is always true due to limited range
of data type [-Werror=type-limits]
Fix this by only doing the comparison if 'long' is large enough to hold the
constant it is compared with.
Closes https://github.com/curl/curl/pull/1378
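A hedged sketch of the guard pattern (the helper name and the limit are
made up), compiling the comparison only where 'long' can actually
represent the constant:

#include <limits.h>

static void clamp_speed(long *speed)   /* hypothetical helper */
{
#if LONG_MAX > 2147483647L
  /* only compiled where 'long' is wider than 32 bits; with a 32-bit 'long'
     the comparison could never be true and -Wtype-limits would warn */
  if(*speed > 2147483647L)
    *speed = 2147483647L;
#else
  (void)speed;
#endif
}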
Also make Perl mandatory to allow building the docs.
While CMakeLists.txt could probably read the list of manual pages from
Makefile.am, actually putting those in CMakeLists.txt is cleaner so that
is what is done here.
Fixes#1230
Ref: https://github.com/curl/curl/pull/1288
- Show the HTTPS-proxy options on CURLE_SSL_CACERT if libcurl was built
with HTTPS-proxy support.
Prior to this change those options were shown only if an HTTPS-proxy was
specified by --proxy, but that did not take into account environment
variables such as http_proxy, https_proxy, etc. Follow-up to e1187c4.
Bug: https://github.com/curl/curl/issues/1331
Reported-by: Nehal J Wani
If a % ended the statement, the string's trailing NUL would be skipped
and memory past the end of the buffer would be accessed and potentially
displayed as part of the --write-out output. Added tests 1440 and 1441
to check for this kind of condition.
Reported-by: Brian Carpenter
- Add new option CURLOPT_SUPPRESS_CONNECT_HEADERS to allow suppressing
proxy CONNECT response headers from the user callback functions
CURLOPT_HEADERFUNCTION and CURLOPT_WRITEFUNCTION.
- Add new tool option --suppress-connect-headers to expose
CURLOPT_SUPPRESS_CONNECT_HEADERS and allow suppressing proxy CONNECT
response headers from --dump-header and --include.
Assisted-by: Jay Satiro
Assisted-by: CarloCannas@users.noreply.github.com
Closes https://github.com/curl/curl/pull/783
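A minimal libcurl sketch of using the new option (the URLs are just
examples):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:3128");
    curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
    /* keep the proxy CONNECT response headers out of the header/body callbacks */
    curl_easy_setopt(curl, CURLOPT_SUPPRESS_CONNECT_HEADERS, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}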
The man page taken from the release package is found in a different
location than if it's built from source. It must be referenced as $< in
the rule to get its correct location in the VPATH.
This eliminates the need for an external gzip program, which wasn't
working with Busybox's gzip, anyway. It now compresses using perl's
IO::Compress::Gzip
This commit introduces the CURL_SSLVERSION_MAX_* constants as well as
the --tls-max option of the curl tool.
Closes https://github.com/curl/curl/pull/1166
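A hedged libcurl sketch of bounding the TLS version with the new
constants; the command line equivalent would be along the lines of
'curl --tls-max 1.2 https://example.com/':

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* allow TLS 1.0 as the minimum and TLS 1.2 as the maximum version */
    curl_easy_setopt(curl, CURLOPT_SSLVERSION,
                     (long)(CURL_SSLVERSION_TLSv1_0 | CURL_SSLVERSION_MAX_TLSv1_2));
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}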
Mark intended fallthroughs with /* FALLTHROUGH */ so that gcc will know
it's expected and won't warn on [-Wimplicit-fallthrough=].
Closes https://github.com/curl/curl/pull/1297
- Change CURLOPT_PROXY_CAPATH to return CURLE_NOT_BUILT_IN if the option
is not supported, which is the same as what we already do for
CURLOPT_CAPATH.
- Change the curl tool to handle CURLOPT_PROXY_CAPATH error
CURLE_NOT_BUILT_IN as a warning instead of as an error, which is the
same as what we already do for CURLOPT_CAPATH.
- Fix CAPATH docs to show that CURLE_NOT_BUILT_IN is returned when the
respective CAPATH option is not supported by the SSL library.
Ref: https://github.com/curl/curl/pull/1257
When CURLE_SSL_CACERT occurs the tool shows a lengthy error message to
the user explaining possible solutions such as --cacert and --insecure.
This change appends to that message similar options --proxy-cacert and
--proxy-insecure when there's a specified HTTPS proxy.
Closes https://github.com/curl/curl/issues/1258
In addition to unix domain sockets, Linux also supports an
abstract namespace which is independent of the filesystem.
In order to support it, add a new CURLOPT_ABSTRACT_UNIX_SOCKET
option which uses the same storage as CURLOPT_UNIX_SOCKET_PATH
internally, along with a flag to specify an abstract socket.
On non-supporting platforms, the abstract address will be
interpreted as an empty string and fail gracefully.
Also add new --abstract-unix-socket tool parameter.
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Reported-by: Chungtsun Li (typeless)
Reviewed-by: Daniel Stenberg
Reviewed-by: Peter Wu
Closes #1197
Fixes #1061
The <netinet/tcp.h> is a leftover from the past when TCP socket options
were set in this file. This include causes build issues on AIX 4.3.
Reported-by: Kim Minjoong
Closes #1178
CURLOPT_SOCKS_PROXY -> CURLOPT_PRE_PROXY
Added the corresponding --preproxy command line option. Sets a SOCKS
proxy to connect to _before_ connecting to an HTTP(S) proxy.
... the newly introduced CURLOPT_SOCKS_PROXY is special and should be
asked for specially. (Needs new code.)
Unified proxy type to a single variable in the config struct.
This was added as part of the SOCKS+HTTPS proxy merge but there's no
need to support this as we prefer to have the protocol specified as a
prefix instead.
Prior to this change we depended on errno if strtol could not perform a
conversion. POSIX says EINVAL *may* be set. Some implementations like
Microsoft's will not set it if there's no conversion.
Ref: https://github.com/curl/curl/commit/ee4f7660#commitcomment-19658189
Adds access to the effectively used protocol/scheme to both libcurl and
curl, both in string and numeric (CURLPROTO_*) form.
Note that the string form will be uppercase, as it is just the internal
string.
As these strings are declared internally as const, and all other strings
returned by curl_easy_getinfo() are de-facto const as well, string
handling in getinfo.c got const-ified.
Closes #1137
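A hedged sketch of reading the effectively used scheme/protocol after a
transfer, assuming the CURLINFO_SCHEME and CURLINFO_PROTOCOL getinfo
values that match the description above:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    char *scheme = NULL;
    long proto = 0;
    curl_easy_setopt(curl, CURLOPT_URL, "example.com/");  /* no scheme given */
    if(curl_easy_perform(curl) == CURLE_OK) {
      curl_easy_getinfo(curl, CURLINFO_SCHEME, &scheme);   /* e.g. "HTTP" */
      curl_easy_getinfo(curl, CURLINFO_PROTOCOL, &proto);  /* e.g. CURLPROTO_HTTP */
      printf("scheme: %s, protocol bit: %ld\n", scheme ? scheme : "?", proto);
    }
    curl_easy_cleanup(curl);
  }
  return 0;
}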
* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications as opposed to HTTP proxies that receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well.
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent from their --foo counterparts,
except --proxy-crlfile which defaults to --crlfile and --proxy-capath
which defaults to --capath.
Curl now also supports %{proxy_ssl_verify_result} --write-out variable,
similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
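For the libcurl side, a hedged sketch of what driving an HTTPS proxy
could look like; the proxy URL and file name are examples, and the
option names follow the CURLOPT_PROXY_* pattern (CURLOPT_PROXY_CAINFO
and friends):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* the proxy itself is reached over TLS */
    curl_easy_setopt(curl, CURLOPT_PROXY, "https://proxy.example.com:3128");
    curl_easy_setopt(curl, CURLOPT_PROXY_CAINFO, "proxy-ca.pem");
    curl_easy_setopt(curl, CURLOPT_PROXY_SSL_VERIFYPEER, 1L);
    curl_easy_setopt(curl, CURLOPT_PROXY_SSL_VERIFYHOST, 2L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}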
We're mostly saying just "curl" in lower case these days so here's a big
cleanup to adapt to this reality. A few instances are left as the
project could still formally be considered called cURL.
As they are after all part of the public API. Saves space and reduces
complexity. Remove the strcase defines from the curlx_ family.
Suggested-by: Dan Fandrich
Idea: https://curl.haxx.se/mail/lib-2016-10/0136.html
These two public functions have been mentioned as deprecated for a
very long time, but since they are still part of the API and ABI we need
to keep them around.
... to make it less likely that we forget that the function actually
does case insensitive compares. Also replaced several invocations of the
function with a plain strcmp when case sensitivity is not an issue (like
comparing with "-").
RFC7512 provides a standard method to reference certificates in PKCS#11
tokens, by means of a URI starting 'pkcs11:'.
We're working on fixing various applications so that whenever they would
have been able to use certificates from a file, users can simply insert
a PKCS#11 URI instead and expect it to work. This expectation is now a
part of the Fedora packaging guidelines, for example.
This doesn't work with cURL because of the way that the colon is used
to separate the certificate argument from the passphrase. So instead of
curl -E 'pkcs11:manufacturer=piv_II;id=%01' …
I instead need to invoke cURL with the colon escaped, like this:
curl -E 'pkcs11\:manufacturer=piv_II;id=%01' …
This is suboptimal because we want *consistency* — the URI should be
usable in place of a filename anywhere, without having strange
differences for different applications.
This patch therefore disables the processing in parse_cert_parameter()
when the string starts with 'pkcs11:'. It means you can't pass a
passphrase with an unescaped PKCS#11 URI, but there's no need to do so
because RFC7512 allows a PIN to be given as a 'pin-value' attribute in
the URI itself.
Also, if users are already using RFC7512 URIs with the colon escaped as
in the above example (even providing a passphrase for cURL to handle
instead of using a pin-value attribute), that will continue to work
because their string will start with 'pkcs11\:' and won't match the check.
What *does* break with this patch is the extremely unlikely case that a
user has a file which is in the local directory and literally named
just "pkcs11", and they have a passphrase on it. If that ever happened,
the user would need to refer to it as './pkcs11:<passphrase>' instead.
After a few wasted hours hunting down the reason for slowness during a
TLS handshake that turned out to be because of TCP_NODELAY not being
set, I think we have enough motivation to toggle the default for this
option. We now enable TCP_NODELAY by default and allow applications to
switch it off.
This also makes --tcp-nodelay unnecessary, but --no-tcp-nodelay can be
used to disable it.
Thanks-to: Tim Rühsen
Bug: https://curl.haxx.se/mail/lib-2016-06/0143.html
... causing SIGSEGV while parsing URL with too many globs.
Minimal example:
$ curl $(for i in $(seq 101); do printf '{a}'; done)
Reported-by: Romain Coltel
Bug: https://bugzilla.redhat.com/1340757
- Move the existing scheme check from tool_operate.
In the case of --remote-header-name we want to parse Content-disposition
for a filename, but only if the scheme is http or https. A recent
adjustment 0dc4d8e was made to account for schemeless URLs however it's
not 100% accurate. To remedy that I've moved the scheme check to the
header callback, since at that point the library has already determined
the scheme.
Bug: https://github.com/curl/curl/issues/760
Reported-by: Kai Noda
It does open up a minuscule risk that one of the other protocols that
libcurl could use would send back a Content-Disposition header and then
curl would act on it even if not HTTP.
A future mitigation for this risk would be to allow the callback to ask
libcurl which protocol is being used.
Verified with test 1312
Closes #760
In commit 2e42b0a252 (Jan 2008) we made the option "--socks" deprecated
and it has not been documented since. The more explicit socks options
(like --socks4 or --socks5) should be used.
The underlying libcurl option used for this feature is
CURLOPT_FTP_CREATE_MISSING_DIRS which has the ability to retry the dir
creation, but it was never set to do that by the command line tool.
Now it does.
Bug: https://curl.haxx.se/mail/archive-2016-04/0021.html
Reported-by: John Wanghui
Help-by: Leif W
As these two options provide identical functionality, the former for
SOCKS5 proxies and the latter for HTTP proxies, merged the two options
together.
As such CURLOPT_SOCKS5_GSSAPI_SERVICE is marked as deprecated as of
7.49.0.
Supports HTTP/2 over clear TCP
- Optimize switching to HTTP/2 by removing calls to init and setup
before switching. Switching will eventually call setup and setup calls
init.
- Supports a new option to "force" the use of HTTP/2 over clear TCP
- Add command line parameter "--http2-prior-knowledge" to the curl
command line tool.
In makefile.m32, option -ssh2 (libssh2) automatically implied -ssl
(OpenSSL) option, with no way to override it with -winssl. Since both
libssh2 and curl support using Windows's built-in SSL backend, modify
the logic to allow that combination.
- Add tests.
- Add an example to CURLOPT_TFTP_NO_OPTIONS.3.
- Add --tftp-no-options to expose CURLOPT_TFTP_NO_OPTIONS.
Bug: https://github.com/curl/curl/issues/481
Extract the filename from the last slash or backslash. Prior to this
change backslashes could be part of the filename.
This change is needed for the curl tool built for Cygwin. Refer to the
CYGWIN addendum in advisory 20160127B.
Bug: https://curl.haxx.se/docs/adv_20160127B.html
- Add unit test 1604 to test the sanitize_file_name function.
- Use -DCURL_STATICLIB when building libcurltool for unit testing.
- Better detection of reserved DOS device names.
- New flags to modify sanitize behavior:
SANITIZE_ALLOW_COLONS: Allow colons
SANITIZE_ALLOW_PATH: Allow path separators and colons
SANITIZE_ALLOW_RESERVED: Allow reserved device names
SANITIZE_ALLOW_TRUNCATE: Allow truncating a long filename
- Restore sanitization of banned characters from user-specified outfile.
Prior to this commit sanitization of a user-specified outfile was
temporarily disabled in 2b6dadc because there was no way to allow path
separators and colons through while replacing other banned characters.
Now in such a case we call the sanitize function with
SANITIZE_ALLOW_PATH which allows path separators and colons to pass
through.
Closes https://github.com/curl/curl/issues/624
Reported-by: Octavio Schroeder
Due to path separators being incorrectly sanitized in --output
pathnames, eg -o c:\foo => c__foo
This is a partial revert of 3017d8a until I write a proper fix. The
remote-name will continue to be sanitized, but if the user specified an
--output with string replacement (#1, #2, etc) that data is unsanitized
until I finish a fix.
Bug: https://github.com/bagder/curl/issues/624
Reported-by: Octavio Schroeder
curl does not sanitize colons in a remote file name that is used as the
local file name. This may lead to a vulnerability on systems where the
colon is a special path character. Currently Windows/DOS is the only OS
where this vulnerability applies.
CVE-2016-0754
Bug: http://curl.haxx.se/docs/adv_20160127B.html
This allows the root Makefile.am to include the Makefile.inc without
causing automake to warn on it (variables named *_SOURCES are
magic). curl_SOURCES is then instead assigned properly in
src/Makefile.am only.
Closes #577
Make this the default for the curl tool (if built with HTTP/2 powers
enabled) unless a specific HTTP version is requested on the command
line.
This should allow more users to get HTTP/2 powers without having to
change anything.
They didn't match the ifdef logic used within libcurl anyway so they
could indeed warn for the wrong case - plus the tool cannot know how the
lib actually performs at that level.
Commit f3bae6ed73 added the URL index to the password prompt when using
--next. Unfortunately, because the size_t specifier (%zu) is not
supported by all sprintf() implementations we use the curl_off_t format
specifier instead. The display of an incorrect value arises on platforms
where size_t and curl_off_t are of a different size.
They tend to never get updated so they're frequently inaccurate,
and we never go back to revisit them anyway. We document issues to work
on properly in KNOWN_BUGS and TODO instead.
Fixes a namespace pollution, at the cost that programs using one of these
defines will no longer compile. However, the vast majority of libcurl
programs that do multipart formposts use curl_formadd() to build this
list.
Closes #506
It uses 'Note:' as a prefix as opposed to the common 'Warning:' to take
down the tone a bit.
It adds a warning for using -XHEAD on other methods because that may
lead to a hanging connection.
It isn't always clear to the user which options that cause the HTTP
methods to conflict so by spelling them out it should hopefully be
easier to understand why curl complains.
- Add new option CURLOPT_DEFAULT_PROTOCOL to allow specifying a default
protocol for schemeless URLs.
- Add new tool option --proto-default to expose
CURLOPT_DEFAULT_PROTOCOL.
In the case of schemeless URLs libcurl will behave in this way:
When the option is used libcurl will use the supplied default.
When the option is not used, libcurl will follow its usual plan of
guessing from the hostname and falling back to 'http'.
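A hedged libcurl sketch; the tool equivalent would be along the lines of
'curl --proto-default https example.com':

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* with no scheme in the URL, use https instead of guessing */
    curl_easy_setopt(curl, CURLOPT_DEFAULT_PROTOCOL, "https");
    curl_easy_setopt(curl, CURLOPT_URL, "example.com/");   /* schemeless */
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}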
New tool option --ssl-no-revoke.
New value CURLSSLOPT_NO_REVOKE for CURLOPT_SSL_OPTIONS.
Currently this option applies only to WinSSL where we have automatic
certificate revocation checking by default. According to the
ssl-compared chart there are other backends that have automatic checking
(NSS, wolfSSL and DarwinSSL) so we could possibly accommodate them at
some later point.
Bug: https://github.com/bagder/curl/issues/264
Reported-by: zenden2k <zenden2k@gmail.com>
Follow-up to e8423f9ce1 with discussion in
https://github.com/bagder/curl/pull/258
This check scans for fopen() with a mode string without 'b' present, as
it may indicate that an FOPEN_* define should rather be used.
- Change fopen calls to use FOPEN_READTEXT instead of "r" or "rt"
- Change fopen calls to use FOPEN_WRITETEXT instead of "w" or "wt"
This change is to explicitly specify when we need to read/write text.
Unfortunately 't' is not part of POSIX fopen so we can't specify it
directly. Instead we now have FOPEN_READTEXT, FOPEN_WRITETEXT.
Prior to this change we had an issue on Windows if an application that
uses libcurl overrides the default file mode to binary. The default file
mode in Windows is normally text mode (translation mode) and that's what
libcurl expects.
Bug: https://github.com/bagder/curl/pull/258#issuecomment-107093055
Reported-by: Orgad Shaneh
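A hedged sketch of the idea behind the defines (not curl's exact
definitions): spell out the translation-mode flag where it exists and
fall back to plain modes elsewhere:

#include <stdio.h>

#ifdef _WIN32
#define FOPEN_READTEXT  "rt"
#define FOPEN_WRITETEXT "wt"
#else
#define FOPEN_READTEXT  "r"
#define FOPEN_WRITETEXT "w"
#endif

int main(void)
{
  FILE *f = fopen("curlrc", FOPEN_READTEXT);  /* explicitly a text read */
  if(f)
    fclose(f);
  return 0;
}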
- update default versions of dependencies (except for rare/old platforms)
- update urls
- sync examples makefiles with main ones
- remove line ending space
Add new option --data-raw which is almost the same as --data but does
not have a special interpretation of the @ character.
Prior to this change there was no (easy) way to pass the @ character as
the first character in POST data without it being interpreted as a
special character.
Bug: https://github.com/bagder/curl/issues/198
Reported-by: Jens Rantil
This commit fixes a regression introduced in curl-7_41_0-186-g261a0fe.
It also introduces a regression test 1424 based on tests 78 and 1423.
Reported-by: Viktor Szakats
Bug: https://github.com/bagder/curl/issues/237
This fixes a build failure where openssl and libmetalink are used
together and the system linker does not do implicit linking (e.g.
Fedora 13 and later releases). The MD5 functions required for
metalink support must be pulled in from the openssl crypto library.
This is similar to commit c6e7cbb94e,
which fixes the same sort of problem for NSS builds.
The glob_range function used the wrong offset (3 instead of 4) when
parsing the integer step inside a character range specification, which
led to a 'bad range' error when using character ranges with an explicitly
specified step (such as '[a-z:2]').
- Get rid of this flood of warnings in Windows mingw build:
warning: missing terminating " character
The warning is due to the carriage return. When msysgit checks out files
from the repo by default it converts the line endings to CRLF. Prior to
this change when mkhelp.pl processed the MANUAL and curl.1 in CRLF
format the trailing carriage returns caused unnecessary CR in the
output.
SSLeay was the name of the library that was subsequently turned into
OpenSSL many moons ago (1999). curl does not work with the old SSLeay
library since years. This is now reflected by only using USE_OPENSSL in
code that depends on OpenSSL.
As the 'error' and 'mute' options are now part of the GlobalConfig,
rather than per Operation, updated the warnf() function to use this
structure rather than the OperationConfig.
The file number used was wrong. This bug was introduced over 10 years
ago, proving this function isn't used much...
Bug: http://curl.haxx.se/bug/view.cgi?id=1476
Reported-by: Tamir
lib/setup-vms.h : VAX HP OpenSSL port is ancient, needs help.
More defines to set symbols to uppercase.
src/tool_main.c : Fix parameter to vms_special_exit() call.
packages/vms/ :
backup_gnv_curl_src.com : Fix the error message to have the correct package.
build_curl-config_script.com : Rewrite to be more accurate.
build_libcurl_pc.com : Use tool_version.h now.
build_vms.com : Fix to handle lib/vtls directory.
curl_gnv_build_steps.txt : Updated build procedure documentation.
generate_config_vms_h_curl.com :
* VAX does not support 64 bit ints, so no NTLM support for now.
* VAX HP SSL port is ancient, needs some help.
* Disable NGHTTP2 for now, not ported to VMS.
* Disable UNIX_SOCKETS, not available on VMS yet.
* HP GSSAPI port does not have gss_nt_service_name.
gnv_link_curl.com : Update for new curl structure.
pcsi_product_gnv_curl.com : Set up to optionally do a complete build.
There was a mix of GlobCode, CURLcode and ints and they were mostly
passing around CURLcode errors. This change makes the functions use only
CURLcode and removes the GlobCode type completely.
By counting from 0 and up instead of backwards like before, we remove
the need for the "funny" check of the unsigned variable when decreased
past zero. Easier to read and less risk of compiler warnings.
The >= 0 is actually not required, since i underflows and
the for-loop is stopped using the < condition, but this
makes the VS2012 compiler and code analysis happy.
Mark CURLOPT_UNIX_SOCKET_PATH as string to ensure that it ends up as
option in the file generated by --libcurl.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
The ability to do HTTP requests over a UNIX domain socket has been
requested before, in Apr 2008 [0][1] and Sep 2010 [2]. While a
discussion happened, no patch seems to have made it through. I decided to
give it a go since I need to test an nginx HTTP server which listens on a
UNIX domain socket.
One patch [3] seems to make it possible to use the
CURLOPT_OPENSOCKETFUNCTION function to gain a UNIX domain socket.
Another person wrote a Go program which can do HTTP over a UNIX socket
for Docker[4] which uses a special URL scheme (though the name contains
cURL, it has no relation to the cURL library).
This patch considers support for UNIX domain sockets at the same level
as HTTP proxies / IPv6, it acts as an intermediate socket provider and
not as a separate protocol. Since this feature affects network
operations, a new feature flag was added ("unix-sockets") with a
corresponding CURL_VERSION_UNIX_SOCKETS macro.
A new CURLOPT_UNIX_SOCKET_PATH option is added and documented. This
option enables UNIX domain sockets support for all requests on the
handle (replacing IP sockets and skipping proxies).
A new configure option (--enable-unix-sockets) and CMake option
(ENABLE_UNIX_SOCKETS) can disable this optional feature. Note that I
deliberately did not mark this feature as advanced, this is a
feature/component that should easily be available.
[0]: http://curl.haxx.se/mail/lib-2008-04/0279.html
[1]: http://daniel.haxx.se/blog/2008/04/14/http-over-unix-domain-sockets/
[2]: http://sourceforge.net/p/curl/feature-requests/53/
[3]: http://curl.haxx.se/mail/lib-2008-04/0361.html
[4]: https://github.com/Soulou/curl-unix-socket
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
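A hedged libcurl sketch; the socket path is just an example (say, a
local daemon listening on a unix domain socket), and the host name in
the URL is still used for the Host: header:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_UNIX_SOCKET_PATH, "/var/run/docker.sock");
    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/info");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}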
Since libcurl would never return CURL_VERSION_KERBEROS4 after 7.33,
krb4 would never be output with --version, so it has been removed from
the supported features output.
When duplicating a handle, the data to post was duplicated using
strdup() when it could be binary and contain zeroes and it was not even
zero terminated! This caused read out of bounds crashes/segfaults.
Since the lib/strdup.c file no longer is easily shared with the curl
tool with this change, it now uses its own version instead.
Bug: http://curl.haxx.se/docs/adv_20141105.html
CVE: CVE-2014-3707
Reported-By: Symeon Paraschoudis
Rather than always outputting an empty manual page for the '-M' option,
generate a full manual page as done by autotools. For simplicity in
CMake, always generate the gzipped page as it will not be used anyway
when zlib is not available.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
There is no need for such a function. Include_directories propagate by
themselves and having a function with one simple link statement makes
little sense.
Coverity CID 1243583. get_url_file_name() cannot fail and return a NULL
file name pointer so skip the check for that - it tricks coverity into
believing it can happen and it then warns later on when we use 'outfile'
without checking for NULL.
Option --pinnedpubkey takes a path to a public key in DER format and
only connects if it matches (currently only implemented with OpenSSL).
Provides CURLOPT_PINNEDPUBLICKEY for curl_easy_setopt().
Extract a public RSA key from a website like so:
openssl s_client -connect google.com:443 2>&1 < /dev/null | \
sed -n '/-----BEGIN/,/-----END/p' | openssl x509 -noout -pubkey \
| openssl rsa -pubin -outform DER > google.com.der
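A hedged libcurl sketch using the DER file produced by the openssl
pipeline above:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://google.com/");
    curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY, "google.com.der");
    if(curl_easy_perform(curl) != CURLE_OK) {
      /* the pinned key did not match, or some other error occurred */
    }
    curl_easy_cleanup(curl);
  }
  return 0;
}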
Coverity CID 1154198. This NULL check implies that the pointer _can_ be
NULL at this point, which it can't. Thus it is dead code. It tricks
static analyzers to warn about dereferencing the pointer since the code
seems to imply it can be NULL.
- Replace CURLAUTH_GSSNEGOTIATE with CURLAUTH_NEGOTIATE
- CURL_VERSION_GSSNEGOTIATE is deprecated which
is served by CURL_VERSION_SSPI, CURL_VERSION_GSSAPI and
CURL_VERSION_SPNEGO now.
- Remove display of feature 'GSS-Negotiate'
This prevents targets like tool_hugehelp.c from leaving around
half-constructed files if the rule fails with GNU make.
Reported-by: Rafaël Carré <funman@videolan.org>
This is just fundamentally broken. SPNEGO (RFC4178) is a protocol which
allows client and server to negotiate the underlying mechanism which will
actually be used to authenticate. This is *often* Kerberos, and can also
be NTLM and other things. And to complicate matters, there are various
different OIDs which can be used to specify the Kerberos mechanism too.
A SPNEGO exchange will identify *which* GSSAPI mechanism is being used,
and will exchange GSSAPI tokens which are appropriate for that mechanism.
But this SPNEGO implementation just strips the incoming SPNEGO packet
and extracts the token, if any. And completely discards the information
about *which* mechanism is being used. Then we *assume* it was Kerberos,
and feed the token into gss_init_sec_context() with the default
mechanism (GSS_S_NO_OID for the mech_type argument).
Furthermore... broken as this code is, it was never even *used* for input
tokens anyway, because higher layers of curl would just bail out if the
server actually said anything *back* to us in the negotiation. We assume
that we send a single token to the server, and it accepts it. If the server
wants to continue the exchange (as is required for NTLM and for SPNEGO
to do anything useful), then curl was broken anyway.
So the only bit which actually did anything was the bit in
Curl_output_negotiate(), which always generates an *initial* SPNEGO
token saying "Hey, I support only the Kerberos mechanism and this is its
token".
You could have done that by manually just prefixing the Kerberos token
with the appropriate bytes, if you weren't going to do any proper SPNEGO
handling. There's no need for the FBOpenSSL library at all.
The sane way to do SPNEGO is just to *ask* the GSSAPI library to do
SPNEGO. That's what the 'mech_type' argument to gss_init_sec_context()
is for. And then it should all Just Work™.
That 'sane way' will be added in a subsequent patch, as will bug fixes
for our failure to handle any exchange other than a single outbound
token to the server which results in immediate success.
This prevents valgrind from reporting possibly lost memory that NSPR
uses for file descriptor cache and other globally allocated internal
data structures.
Renamed the CURLX_ONES file list definition in order to a) try and be
consistent with other file lists and b) to allow for the addition of
the curlx header files, which will assist with Visual Studio project
files generation rather than hard coding those files.
This makes it possible to fetch from an IPv6 literal without specifying
the -g option. Globbing remains available elsewhere in the URL.
For example:
curl http://[::1]/file[1-3].txt
This creates no ambiguity, because there is no overlap between the
syntax of valid globs and valid IPv6 literals. Globs contain hyphens
and at most 1 colon, while IPv6 literals have no hyphens, and at least 2
colons.
The peek_ipv6() parser simply whitelists a set of characters and counts
colons, because the real validation happens later on. The character set
includes A-Z, in case someone decides to implement support for scopes
like [fe80::1%25eth0] in the future.
Signed-off-by: Paul Marks <pmarks@google.com>
This allows configure --disable-manual to run and build without having
to regenerate the src/tool_hugehelp.c file which otherwise is necessary
since we ship tarballs with that file present.
Reported-by: Remi Gacogne
Bug: http://curl.haxx.se/bug/view.cgi?id=1350
Remove the slash/backslash problem; now only slashes are used, since
Wmake automatically translates them to the proper form, or the tools are
not sensitive to it.
Enable spaces in paths.
Use the internal rm command for all host platforms.
Add an error message if an old Open Watcom version is used, as some old
versions exhibit build problems with the latest curl version. Now only
versions 1.8, 1.9 and 2.0 beta are supported.
Ensure a source file isn't generated for the following informational
command line parameters when --libcurl is specified:
--help, --manual, --version and --engine list
As the output would only include a fairly empty looking main() function
and a call to curl_easy_init() and curl_easy_cleanup() when performed
with --engine list.
Correctly output libcurl source code that includes multiple operations
as specified by --next. Note that each operation evaluates to a single
curl_easy_perform() in source code form.
Also note that the output could be optimised a little so global config
options are only output once rather than per operation as is presently
the case.
Added initial support for --next/-: which will be used to replace the
rather confusing : command line operation that was used for the URL
specific options prototype.
In preparation for separating the global config options from the per
operation config options, reworked the list engines code to not use a
member variable in the Configurable structure.
To help assist with the detection of incorrect return codes, as per
commits ee23d13a79, 33b8960dc8 and aba98991a5, updated the operate
based functions to return CURLcode error codes.
During initialisation SetHTTPrequest() may fail and cURL would return
PARAM_BAD_USE, which is equivalent to CURLE_NOT_BUILT_IN in cURL error
terms.
Instead, return CURLE_FAILED_INIT as we do for other functions that may
fail during initialisation.
Rather than check for required arguments, and prompt for any host and
proxy passwords, as each operation is performed, changed the code so
all configurations are checked before any operations are performed.
This allows the user to input all the required passwords, for example,
upfront rather than wait for each operation.
Since protocol headers contain explicit line-endings there should
be no automatic conversion to ASCII text or CRLF line-endings.
This might break third party tools that already depend on this
behaviour. We might need to introduce an option to make this optional.
Commit c5f8e2f5f4 removed the easy handle clean-up from tool_operate,
letting the code that was already present in free_config_fields()
perform the task. Unfortunately, this wasn't the correct place to do
this, as it broke protocols that perform a logout, since the main
clean-up in tool_main had already been called.
When using --http2 one can now selectively disable NPN or ALPN with
--no-alpn and --no-npn. For now this is honored with NSS only.
TODO: honor this option with GnuTLS and OpenSSL
Due to the changes in commit 3c929ff9f6 and lack of subsequent
updates, curl could return a CURLE_FTP_ACCEPT_FAILED error if
checkpasswd() ran out of memory in versions 7.33.0 and 7.34.0.
Updated the function declaration and return code to return
CURLE_OUT_OF_MEMORY and CURLE_OK where appropriate.
In the rare instance where getparameter() may return PARAM_NO_MEM whilst
parsing a URL, cURL would return this error code, which is equivalent to
CURLE_FTP_ACCEPT_FAILED in cURL error codes terms.
Instead, return CURLE_FAILED_INIT and output the failure reason as per
the other usage of getparameter().
Increasing the update frequency of the progress bar to 10Hz greatly
improves the visual appearance of the progress bar (at least in my
impression).
Signed-off-by: Tobias Markus <tobias@markus-regensburg.de>
Currently, the progress bar is updated at 5Hz. Because it is often not
updated to 100% when the download is finished and curl exits, the bar
is often "stuck" at 90-something, thus irritating the user.
This patch fixes this by always updating the progress bar (instead of
waiting for 200ms to have elapsed) while the download is finished but
curl has not yet exited. This should not greatly affect performance
because that moment is rather short.
Signed-off-by: Tobias Markus <tobias@markus-regensburg.de>
To avoid the regression when users pass in passwords containing semi-
colons, we now drop the ability to set the login options with the same
options. Support for login options in CURLOPT_USERPWD was added in
7.31.0.
Test case 83 was modified to verify that colons and semi-colons can be
used as part of the password when using -u (CURLOPT_USERPWD).
Bug: http://curl.haxx.se/bug/view.cgi?id=1311
Reported-by: Petr Bahula
Assisted-by: Steve Holme
Signed-off-by: Daniel Stenberg <daniel@haxx.se>
Commit 0db811b6 made some existing config files pass on unexpected
values to libcurl that made it somewhat hard to track down what was
really going on.
This code detects unquoted white spaces in the parameter when parsing a
config file as that would be one symptom and it is generally a bad
syntax anyway.
The "fixed string" function wrongly bumped the "urlnum" counter which
made curl output the total number of URLs wrong when using
{one,two,three} lists in globs.
Reported-by: Michael-O
Bug: http://curl.haxx.se/bug/view.cgi?id=1305
CURL_SSLVERSION_TLSv1_0, CURL_SSLVERSION_TLSv1_1,
CURL_SSLVERSION_TLSv1_2 enum values are added to force exact TLS version
(CURL_SSLVERSION_TLSv1 means TLS 1.x).
axTLS:
axTLS only supports TLS 1.0 and 1.1 but it cannot be set that only one
of these should be used, so we don't allow the new enum values.
darwinssl:
Added support for the new enum values.
SChannel:
Added support for the new enum values.
CyaSSL:
Added support for the new enum values.
Bug: The original CURL_SSLVERSION_TLSv1 value enables only TLS 1.0 (it
did the same before this commit), because CyaSSL cannot be configured to
use TLS 1.0-1.2.
GSKit:
GSKit doesn't seem to support TLS 1.1 and TLS 1.2, so we do not allow
those values.
Bugfix: There was a typo that caused wrong SSL versions to be passed to
GSKit.
NSS:
TLS minor version cannot be set, so we don't allow the new enum values.
QsoSSL:
TLS minor version cannot be set, so we don't allow the new enum values.
OpenSSL:
Added support for the new enum values.
Bugfix: The original CURL_SSLVERSION_TLSv1 value enabled only TLS 1.0,
now it enables 1.0-1.2.
Command-line tool:
Added command line options for the new values.
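A hedged libcurl sketch of forcing one exact TLS version with the new
enum values (the matching command line options are along the lines of
--tlsv1.0/--tlsv1.1/--tlsv1.2):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* CURL_SSLVERSION_TLSv1 still means "any TLS 1.x"; this forces 1.2 only */
    curl_easy_setopt(curl, CURLOPT_SSLVERSION, (long)CURL_SSLVERSION_TLSv1_2);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}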
The option '--bearer' might be slightly ambiguous in name. It doesn't
create any conflict that I am aware of at the moment, however, OAUTH v2
is not the only authentication mechanism which uses "bearer" tokens.
Reported-by: Kyle L. Huff
URL: http://curl.haxx.se/mail/lib-2013-10/0064.html
Added the ability to use an XOAUTH2 bearer token [RFC6750] with POP3 for
authentication using RFC6749 "OAuth 2.0 Authorization Framework".
The bearer token is expected to be valid for the user specified in
conn->user. If CURLOPT_XOAUTH2_BEARER is defined and the connection has
an advertised auth mechanism of "XOAUTH2", the user and access token are
formatted as a base64 encoded string and sent to the server as
"AUTH XOAUTH2 <bearer token>".
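A hedged libcurl sketch; the mail server URL and the token are
placeholders:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "pop3s://pop.example.com:995/1");
    curl_easy_setopt(curl, CURLOPT_USERNAME, "username@example.com");
    /* the access token obtained from the OAuth 2.0 authorization server */
    curl_easy_setopt(curl, CURLOPT_XOAUTH2_BEARER, "token-placeholder");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}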
Commit 32352ed6ad introduced various DNS options, however, these
would cause curl to exit with CURLE_NOT_BUILT_IN when c-ares wasn't
being used as the backend resolver even if the options weren't set
by the user.
Additionally corrected some minor coding style errors from the same
commit.
Moved the calls to checkpasswd() out of the getparameter() function
which allows for any related arguments to be specified on the command
line before or after --user (and --proxy-user).
For example: --bearer doesn't need to be specified before --user to
prevent curl from asking for an unnecessary password as is the case
with commit e7dcc454c6.
Added the ability to specify an XOAUTH2 bearer token [RFC6750] via the
--bearer option.
Example usage:
curl --url "imaps://imap.gmail.com:993/INBOX/;UID=1" --ssl-reqd
--bearer ya29.AHES6Z...OMfsHYI --user username@example.com
This function is meant to work *exactly* as curl_easy_perform() but will
use the event-based libcurl API internally instead of
curl_multi_perform(). To avoid relying on an actual event-based library
and to not use non-portable functions (like epoll or similar), there's a
rather inefficient emulation layer implemented on top of Curl_poll()
instead.
There's currently some convenience logging done in curl_easy_perform_ev
which helps when tracking down problems. They may be suitable to remove
or change once things seem to be fine enough.
curl has a new --test-event option when built with debug enabled that
then uses curl_easy_perform_ev() instead of curl_easy_perform(). If
built without debug, using --test-event will only output a warning
message.
NOTE: curl_easy_perform_ev() is not part of the public API on purpose.
It is only present in debug builds of libcurl and MUST NOT be considered
stable even then. Use it for libcurl-testing purposes only.
runtests.pl now features an -e command line option that makes it use
--test-event for all curl command line tests. The man page is updated.
The new multiply() function detects range value overflows. 32bit
machines will overflow on a 32bit boundary while 64bit hosts support
ranges up to the full 64 bit range.
Added test 1236 to verify.
Bug: http://curl.haxx.se/bug/view.cgi?id=1267
Reported-by: Will Dietz
A rather big overhaul and cleanup.
1 - curl wouldn't properly detect and reject globbing that ended with an
open brace if there were brackets or braces before it. Like "{}{" or
"[0-1]{"
2 - curl wouldn't properly reject empty lists so that "{}{}" would
result in curl getting (nil) strings in the output.
3 - By using strtoul() instead of sscanf() the code will now detect
over- and underflows. It now also better parses the step argument to only
accept positive numbers, and only step counters that are smaller than the
delta between the maximum and minimum numbers.
4 - By switching to unsigned longs instead of signed ints for the
counters, the max values for []-ranges are now very large (on 64bit
machines).
5 - Bumped the maximum number of globs in a single URL to 100 (from 10)
6 - Simplified the code somewhat and now it stores fixed strings as
single-entry lists. That's also one of the reasons why I did (5) as now
all strings between "globs" will take a slot in the array.
Added test 1234 and 1235 to verify. Updated test 87.
This commit fixes three separate bug reports.
Bug: http://curl.haxx.se/bug/view.cgi?id=1264
Bug: http://curl.haxx.se/bug/view.cgi?id=1265
Bug: http://curl.haxx.se/bug/view.cgi?id=1266
Reported-by: Will Dietz
Also, use memset() instead of a lame loop.
The previous logic that tried to avoid too many updates was very
ineffective for really fast transfers, as then it could easily end up
doing hundreds of updates per second that would make a significant
impact in transfer performance!
Bug: http://curl.haxx.se/mail/archive-2013-07/0031.html
Reported-by: Marc Doughty
Previously we used __MAC_10_X and __IPHONE_X to mark digest-generating
code that was specific to OS X and iOS. Now we use
__MAC_OS_X_VERSION_MAX_ALLOWED and __IPHONE_OS_VERSION_MAX_ALLOWED
instead of those macros.
Bug: http://sourceforge.net/p/curl/bugs/1255/
Reported by: Edward Rudd
Two fixes:
1. Force output file format to be stream-lf so that partial downloads
can be continued.
This should have minor impact as if the file does not exist, it was
created with stream-lf format. The only time this was an issue is if
there was already an existing file with a different format.
2. File uploads are now fixed.
a. VMS binary files such as ZIP archives are now uploaded
correctly.
b. VMS text files are read once to get the correct size
and then converted to line-feed terminated records as
they are read into curl.
The default VMS text formats do not contain either line-feed or
carriage-return terminated records. Those delimiters are added by the
operating system file read calls if the application requests them.
Bug: http://curl.haxx.se/bug/view.cgi?id=496
We no longer pass our 'bool' data type variables nor constants as
an argument to my_setopt(), instead we use proper 1L or 0L values.
This also fixes macro used to pass string argument for CURLOPT_SSLCERT,
CURLOPT_SSLKEY and CURLOPT_EGDSOCKET using my_setopt_str() instead of
my_setopt().
This also casts enum or int argument data types to long when passed to
my_setopt_enum().
Fixed issue with static build for MSVC2010.
After some investigation I've discovered a known issue,
http://public.kitware.com/Bug/view.php?id=11240: when an .rc file is
linked into a static lib it fails with the following linker error
LINK : warning LNK4068: /MACHINE not specified; defaulting to X86
file.obj : fatal error LNK1112: module machine type 'x64' conflicts with
target machine type 'X86'
The fix adds the target property /MACHINE: for MSVC generation.
Also removed old workarounds, which caused errors during the MSVC build.
Bug: http://curl.haxx.se/mail/lib-2013-07/0046.html
Implement wrappers around strtod to convert the user argument to a
double with sane error checking. Use this to allow --max-time and
--connect-timeout to accept decimal values instead of strictly integers.
The manpage is updated to make mention of this feature and,
additionally, forewarn that the actual timeout of the operation can
vary in its precision (particularly as the value increases in its
decimal precision).
strto* functions happily chomp off leading whitespace, so simply
checking for str[0] can lead to false negatives. Do the full parse and
check the out value instead.
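A hedged sketch of such a wrapper (the name is made up, not the actual
tool code), doing the full parse and checking the end pointer rather
than just str[0]:

#include <errno.h>
#include <stdlib.h>

static int str2double(double *val, const char *str)  /* hypothetical helper */
{
  char *endptr = NULL;
  errno = 0;
  *val = strtod(str, &endptr);
  if(errno == ERANGE)
    return 1;            /* out of range */
  if(endptr == str || *endptr)
    return 1;            /* nothing parsed, or trailing garbage */
  return 0;              /* clean conversion */
}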
Fix to prevent the options from being displayed when curl requests the
user's password if the following command line is specified:
--user username;options
An extern submits a psect and a global reference to the linker to point
to it. Using "extern int vms_show = 0" also creates a globaldef.
The use of the extern by itself does declare a psect but does not declare
a globalsymbol. It does declare a globalref. But the linker needs one and
only one globaldef or there is an error.
The list of unsafe functions currently consists of sprintf, vsprintf,
strcat, strncat and gets.
Subsequently, some existing code needed updating to avoid warnings on
this.
The this_url pointer wasn't being initialized, so if strdup() returned
NULL when copying the filename in a metalink file, then hilarity would
ensue during the cleanup phase. This change was brought to you by clang,
which noticed this and raised a warning.
config_h.com is a new file that generates a config.h file based on the
curl_config.h.in file and a quick scan of the configure script. This is
actually a generic procedure that is shared with other VMS packages.
The existing pre-built config-vms.h had over 100 entries that were not
correct and in some cases conflicted with the build options available in
the build_vms.com.
generate_config_vms_h_curl.com is a helper procedure to the
config_h.com. It covers the cases that the generic config_h.com is not
able to figure out, and accepts input from the build_vms.com procedure.
build_curlbuild_h.com is a new file to generate the curlbuild.h file
that Curl is now using when it is using a curl_config.h file.
post-config-vms.h is a new file that is needed to provide VMS specific
definitions, and most of them need to be set before the system header
files are included.
The VMS build procedure is fixed:
1. Fixed to link in the correct HP ssl library.
2. Fixed to detect if HP Kerberos is installed.
3. Fixed to detect if HP LDAP is installed.
4. Fixed to detect if gnv$libzshr is installed.
5. Simplified the input parameter parsing to not use a loop.
6. Warn that 64 bit pointer option support is not complete
in comments.
7. Default to IEEE floating if platform supports it so
resulting libcurl will be compatible with other
open source projects on VMS.
8. Default to LARGEFILE if platform supports it.
9. Default to enable SSL, LDAP, Kerberos, libz
if the libraries are present.
10. Build with exact case global symbols for libcurl.
11. Generate linker option file needed.
12. Compiler list option only commonly needed items.
13. fulllist option for those who really want it.
14. Create debug symbol file on Alpha, IA64.
- document that the double-quote and backslash need to be escaped when quoting.
- libcurl formdata now escapes double-quotes in filenames with a backslash.
- curl formparse can parse filenames containing both '"' and ',' or ';'.
- curl can now upload files with ',' or ';' in the filename.
Bug: http://curl.haxx.se/bug/view.cgi?id=1171
If the default value for an option taking a long as its value is non-zero,
and it is set to zero by a command line option, then that command
line option is not reflected in --libcurl's output. This is because line
520-521 of tool_setopt.c look like:
if(!lval)
skip = TRUE;
An example of a command-line option doing so is the -k option that sets
CURLOPT_SSL_VERIFYPEER and CURLOPT_SSL_VERIFYHOST to 0L, when the
defaults are non-zero.
This commit renames lib/setup.h to lib/curl_setup.h and
renames lib/setup_once.h to lib/curl_setup_once.h.
Removes the need and usage of a header inclusion guard foreign
to libcurl. [1]
Removes the need and presence of an alarming notice we carried
in old setup_once.h [2]
----------------------------------------
1 - lib/setup_once.h used __SETUP_ONCE_H macro as header inclusion guard
up to commit ec691ca3 which changed this to HEADER_CURL_SETUP_ONCE_H,
this single inclusion guard is enough to ensure that inclusion of
lib/setup_once.h done from lib/setup.h is only done once.
Additionally lib/setup.h has always used __SETUP_ONCE_H macro to
protect inclusion of setup_once.h even after commit ec691ca3, this
was to avoid a circular header inclusion triggered when building a
c-ares enabled version with c-ares sources available which also has
a setup_once.h header. Commit ec691ca3 exposes the real nature of
__SETUP_ONCE_H usage in lib/setup.h, it is a header inclusion guard
foreign to libcurl belonging to c-ares's setup_once.h
The renaming this commit does, fixes the circular header inclusion,
and as such removes the need and usage of a header inclusion guard
foreign to libcurl. Macro __SETUP_ONCE_H no longer used in libcurl.
2 - Due to the circular interdependency of the old lib/setup_once.h and
the c-ares setup_once.h header, the old lib/setup_once.h has carried,
from 2006 until now, an alarming and prominent notice about
the need to keep libcurl's and c-ares's setup_once.h in sync.
Given that this commit fixes the circular interdependency, the need
and presence of mentioned notice is removed.
All mentioned interdependencies date back to the old days when
the c-ares project lived inside a curl subdirectory. This commit
removes last traces of such fact.
This is a work-around for bug #1180 which is really libcurl's inability
to ignore SIGPIPE in a few cases. With this work-around at least curl
won't suffer from it!
Bug: http://curl.haxx.se/bug/view.cgi?id=1180
Reported by: Lluís Batlle i Rossell
This reverts renaming and usage of lib/*.h header files done
28-12-2012, reverting 2 commits:
f871de0... build: make use of 76 lib/*.h renamed files
ffd8e12... build: rename 76 lib/*.h files
This also reverts removal of redundant include guard (redundant thanks
to changes in above commits) done 2-12-2013, reverting 1 commit:
c087374... curl_setup.h: remove redundant include guard
This also reverts renaming and usage of lib/*.c source files done
3-12-2013, reverting 3 commits:
13606bb... build: make use of 93 lib/*.c renamed files
5b6e792... build: rename 93 lib/*.c files
7d83dff... build: commit 13606bbfde follow-up 1
Start of related discussion thread:
http://curl.haxx.se/mail/lib-2013-01/0012.html
Asking for confirmation on pushing this reversion commit:
http://curl.haxx.se/mail/lib-2013-01/0048.html
Confirmation summary:
http://curl.haxx.se/mail/lib-2013-01/0079.html
NOTICE: The list of 2 files that have been modified by other
intermixed commits, while renamed, and also by at least one
of the 6 commits this one reverts follows below. These 2 files
will exhibit a hole in history unless git's '--follow' option
is used when viewing logs.
lib/curl_imap.h
lib/curl_smtp.h
BLANK_AT_MAKETIME may be used in our Makefile.am files to blank the
LIBS variable used in the generated makefile at makefile processing
time. Doing this functionally prevents LIBS from being used for
all link targets in given makefile.
The {MD5,SHA1,SHA256}_Init functions from OpenSSL are called directly
without any wrappers and they return 1 for success, 0 otherwise. Hence,
we have to use the same approach in all the wrapper functions that are
used for the other crypto libraries.
This commit fixes a regression introduced in commit dca8ae5f.