As these two options provide identical functionality, the former for
SOCKS5 proxies and the latter for HTTP proxies, the two options have
been merged together.
As such, CURLOPT_SOCKS5_GSSAPI_SERVICE is marked as deprecated as of
7.49.0.
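A minimal sketch of the post-merge usage; the surviving option name
(CURLOPT_PROXY_SERVICE_NAME) and the service name value are assumptions
here, not stated above:

  #include <curl/curl.h>

  static void set_proxy_service(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* one service name option now covers both SOCKS5 and HTTP proxies
         instead of the deprecated CURLOPT_SOCKS5_GSSAPI_SERVICE */
      curl_easy_setopt(curl, CURLOPT_PROXY_SERVICE_NAME, "rcmd");
      curl_easy_cleanup(curl);
    }
  }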
Supports HTTP/2 over clear TCP
- Optimize switching to HTTP/2 by removing calls to init and setup
before switching. Switching will eventually call setup and setup calls
init.
- Support a new HTTP version value to "force" the use of HTTP/2 over
clear TCP (see the sketch below).
- Add command line parameter "--http2-prior-knowledge" to the curl
command line tool.
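For the library side, a minimal sketch of forcing HTTP/2 over clear TCP,
assuming the new version value is exposed as
CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE:

  #include <curl/curl.h>

  static void fetch_h2_prior_knowledge(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* skip the Upgrade dance and speak HTTP/2 directly over clear TCP */
      curl_easy_setopt(curl, CURLOPT_HTTP_VERSION,
                       (long)CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
  }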
In makefile.m32, the -ssh2 (libssh2) option automatically implied the
-ssl (OpenSSL) option, with no way to override it with -winssl. Since
both libssh2 and curl support using Windows' built-in SSL backend,
modify the logic to allow that combination.
- Add tests.
- Add an example to CURLOPT_TFTP_NO_OPTIONS.3.
- Add --tftp-no-options to expose CURLOPT_TFTP_NO_OPTIONS.
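A minimal library-side sketch of the new option (it takes a long, 1 to
disable TFTP option requests):

  #include <curl/curl.h>

  static void tftp_without_options(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "tftp://example.com/file.bin");
      /* do not send TFTP option requests (RFC 2347) to the server */
      curl_easy_setopt(curl, CURLOPT_TFTP_NO_OPTIONS, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
  }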
Bug: https://github.com/curl/curl/issues/481
Extract the filename from the last slash or backslash. Prior to this
change, backslashes could be part of the filename.
This change is needed for the curl tool built for Cygwin. Refer to the
CYGWIN addendum in advisory 20160127B.
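A simplified sketch of the extraction logic (names are illustrative, not
the tool's actual code):

  #include <string.h>

  /* return the filename part: everything after the last '/' or '\\' */
  static const char *base_name(const char *path)
  {
    const char *slash = strrchr(path, '/');
    const char *bslash = strrchr(path, '\\');
    if(bslash && (!slash || bslash > slash))
      slash = bslash;
    return slash ? slash + 1 : path;
  }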
Bug: https://curl.haxx.se/docs/adv_20160127B.html
- Add unit test 1604 to test the sanitize_file_name function.
- Use -DCURL_STATICLIB when building libcurltool for unit testing.
- Better detection of reserved DOS device names.
- New flags to modify sanitize behavior:
SANITIZE_ALLOW_COLONS: Allow colons
SANITIZE_ALLOW_PATH: Allow path separators and colons
SANITIZE_ALLOW_RESERVED: Allow reserved device names
SANITIZE_ALLOW_TRUNCATE: Allow truncating a long filename
- Restore sanitization of banned characters from user-specified outfile.
Prior to this commit sanitization of a user-specified outfile was
temporarily disabled in 2b6dadc because there was no way to allow path
separators and colons through while replacing other banned characters.
Now in such a case we call the sanitize function with
SANITIZE_ALLOW_PATH which allows path separators and colons to pass
through.
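A heavily simplified sketch of the SANITIZE_ALLOW_PATH idea (the flag
value and function are illustrative; the tool's real sanitize code does
considerably more):

  #include <string.h>

  #define SANITIZE_ALLOW_PATH (1<<0)  /* illustrative flag value */

  /* replace characters banned in Windows file names with '_', letting
     path separators and colons through when the caller allows paths */
  static void sanitize_in_place(char *name, int flags)
  {
    char *p;
    for(p = name; *p; p++) {
      if(strchr("|<>?*\"", *p) ||
         ((*p == ':' || *p == '/' || *p == '\\') &&
          !(flags & SANITIZE_ALLOW_PATH)))
        *p = '_';
    }
  }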
Closes https://github.com/curl/curl/issues/624
Reported-by: Octavio Schroeder
Due to path separators being incorrectly sanitized in --output
pathnames, e.g. -o c:\foo => c__foo, this is a partial revert of 3017d8a
until I write a proper fix. The remote-name will continue to be
sanitized, but if the user specified an --output with string replacement
(#1, #2, etc.) that data is left unsanitized until I finish a fix.
Bug: https://github.com/bagder/curl/issues/624
Reported-by: Octavio Schroeder
curl does not sanitize colons in a remote file name that is used as the
local file name. This may lead to a vulnerability on systems where the
colon is a special path character. Currently Windows/DOS is the only OS
where this vulnerability applies.
CVE-2016-0754
Bug: http://curl.haxx.se/docs/adv_20160127B.html
This allows the root Makefile.am to include the Makefile.inc without
causing automake to warn on it (variables named *_SOURCES are
magic). curl_SOURCES is then instead assigned properly in
src/Makefile.am only.
Closes #577
Make this the default for the curl tool (if built with HTTP/2 powers
enabled) unless a specific HTTP version is requested on the command
line.
This should allow more users to get HTTP/2 powers without having to
change anything.
They didn't match the ifdef logic used within libcurl anyway so they
could indeed warn for the wrong case - plus the tool cannot know how the
lib actually performs at that level.
Commit f3bae6ed73 added the URL index to the password prompt when using
--next. Unfortunately, because the size_t specifier (%zu) is not
supported by all sprintf() implementations, we use the curl_off_t format
specifier instead. The display of an incorrect value arises on platforms
where size_t and curl_off_t are of different sizes.
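A sketch of building such a prompt with the curl_off_t format specifier
(the prompt text and names are illustrative, not the tool's actual
code):

  #include <stdio.h>
  #include <curl/curl.h>

  static void build_prompt(char *buf, size_t len, size_t url_index)
  {
    /* print through the curl_off_t specifier with an explicit cast,
       since %zu is not available in every sprintf() implementation */
    snprintf(buf, len, "Enter host password for URL #%"
             CURL_FORMAT_CURL_OFF_T ":", (curl_off_t)url_index);
  }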
They tend to never get updated anyway, so they're frequently inaccurate
and we never go back to revisit them. We document issues to work on
properly in KNOWN_BUGS and TODO instead.
Fixes name space pollution, at the cost that programs using one of
these defines will no longer compile. However, the vast majority of
libcurl programs that do multipart formposts use curl_formadd() to
build this list.
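For reference, a minimal sketch of building such a formpost through the
public API:

  #include <curl/curl.h>

  static struct curl_httppost *build_form(CURL *curl)
  {
    struct curl_httppost *post = NULL;
    struct curl_httppost *last = NULL;
    /* build the multipart formpost with curl_formadd() rather than the
       removed defines; free with curl_formfree() after the transfer */
    curl_formadd(&post, &last,
                 CURLFORM_COPYNAME, "name",
                 CURLFORM_COPYCONTENTS, "daniel",
                 CURLFORM_END);
    curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);
    return post;
  }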
Closes #506
It uses 'Note:' as a prefix as opposed to the common 'Warning:' to take
down the tone a bit.
It adds a warning for using -XHEAD on other methods because that may
lead to a hanging connection.
It isn't always clear to the user which options cause the HTTP methods
to conflict, so by spelling them out it should hopefully be easier to
understand why curl complains.
- Add new option CURLOPT_DEFAULT_PROTOCOL to allow specifying a default
protocol for schemeless URLs.
- Add new tool option --proto-default to expose
CURLOPT_DEFAULT_PROTOCOL.
In the case of schemeless URLs libcurl will behave in this way:
When the option is used libcurl will use the supplied default.
When the option is not used, libcurl will follow its usual plan of
guessing from the hostname and falling back to 'http'.
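A minimal sketch of the library option; it takes the protocol name as a
string:

  #include <curl/curl.h>

  static void fetch_schemeless(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* "example.com/path" has no scheme; use https instead of guessing */
      curl_easy_setopt(curl, CURLOPT_DEFAULT_PROTOCOL, "https");
      curl_easy_setopt(curl, CURLOPT_URL, "example.com/path");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
  }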
New tool option --ssl-no-revoke.
New value CURLSSLOPT_NO_REVOKE for CURLOPT_SSL_OPTIONS.
Currently this option applies only to WinSSL where we have automatic
certificate revocation checking by default. According to the
ssl-compared chart there are other backends that have automatic checking
(NSS, wolfSSL and DarwinSSL) so we could possibly accommodate them at
some later point.
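A minimal sketch of disabling the revocation check through the library
option:

  #include <curl/curl.h>

  static void fetch_without_revocation_check(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      /* tell WinSSL not to check certificate revocation */
      curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS,
                       (long)CURLSSLOPT_NO_REVOKE);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
  }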
Bug: https://github.com/bagder/curl/issues/264
Reported-by: zenden2k <zenden2k@gmail.com>
Follow-up to e8423f9ce1, with discussion in
https://github.com/bagder/curl/pull/258
This check scans for fopen() with a mode string without 'b' present, as
it may indicate that an FOPEN_* define should rather be used.
- Change fopen calls to use FOPEN_READTEXT instead of "r" or "rt"
- Change fopen calls to use FOPEN_WRITETEXT instead of "w" or "wt"
This change is to explicitly specify when we need to read/write text.
Unfortunately 't' is not part of POSIX fopen so we can't specify it
directly. Instead we now have FOPEN_READTEXT, FOPEN_WRITETEXT.
Prior to this change we had an issue on Windows if an application that
uses libcurl overrides the default file mode to binary. The default file
mode in Windows is normally text mode (translation mode) and that's what
libcurl expects.
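A sketch of what the defines boil down to; the exact conditionals in the
real header may differ:

  #include <stdio.h>

  #ifdef WIN32
  #define FOPEN_READTEXT  "rt"
  #define FOPEN_WRITETEXT "wt"
  #else
  #define FOPEN_READTEXT  "r"
  #define FOPEN_WRITETEXT "w"
  #endif

  static FILE *open_text_for_writing(const char *name)
  {
    /* explicitly request text mode instead of relying on the
       process-wide default translation mode */
    return fopen(name, FOPEN_WRITETEXT);
  }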
Bug: https://github.com/bagder/curl/pull/258#issuecomment-107093055
Reported-by: Orgad Shaneh
- update default versions of dependencies (except for rare/old platforms)
- update URLs
- sync examples makefiles with main ones
- remove line ending space
Add new option --data-raw which is almost the same as --data but does
not have a special interpretation of the @ character.
Prior to this change there was no (easy) way to pass the @ character as
the first character in POST data without it being interpreted as a
special character.
Bug: https://github.com/bagder/curl/issues/198
Reported-by: Jens Rantil
This commit fixes a regression introduced in curl-7_41_0-186-g261a0fe.
It also introduces a regression test 1424 based on tests 78 and 1423.
Reported-by: Viktor Szakats
Bug: https://github.com/bagder/curl/issues/237
This fixes a build failure where openssl and libmetalink are used
together and the system linker does not do implicit linking (e.g.
Fedora 13 and later releases). The MD5 functions required for
metalink support must be pulled in from the openssl crypto library.
This is similar to commit c6e7cbb94e,
which fixes the same sort of problem for NSS builds.
The glob_range function used a wrong offset (3 instead of 4) for
parsing the integer step inside a character range specification, which
led to a 'bad range' error when using character ranges with an
explicitly specified step (such as '[a-z:2]').
- Get rid of this flood of warnings in the Windows MinGW build:
warning: missing terminating " character
The warning is due to the carriage return. When msysgit checks out files
from the repo, by default it converts the line endings to CRLF. Prior to
this change, when mkhelp.pl processed the MANUAL and curl.1 in CRLF
format, the trailing carriage returns caused unnecessary CRs in the
output.
SSLeay was the name of the library that was subsequently turned into
OpenSSL many moons ago (1999). curl has not worked with the old SSLeay
library for years. This is now reflected by only using USE_OPENSSL in
code that depends on OpenSSL.
As the 'error' and 'mute' options are now part of the GlobalConfig,
rather than per Operation, update the warnf() function to use this
structure rather than the OperationConfig.
The file number used was wrong. This bug was introduced over 10 years
ago, proving this function isn't used much...
Bug: http://curl.haxx.se/bug/view.cgi?id=1476
Reported-by: Tamir
lib/setup-vms.h : VAX HP OpenSSL port is ancient, needs help.
More defines to set symbols to uppercase.
src/tool_main.c : Fix parameter to vms_special_exit() call.
packages/vms/ :
backup_gnv_curl_src.com : Fix the error message to have the correct package.
build_curl-config_script.com : Rewrite to be more accurate.
build_libcurl_pc.com : Use tool_version.h now.
build_vms.com : Fix to handle lib/vtls directory.
curl_gnv_build_steps.txt : Updated build procedure documentation.
generate_config_vms_h_curl.com :
* VAX does not support 64 bit ints, so no NTLM support for now.
* VAX HP SSL port is ancient, needs some help.
* Disable NGHTTP2 for now, not ported to VMS.
* Disable UNIX_SOCKETS, not available on VMS yet.
* HP GSSAPI port does not have gss_nt_service_name.
gnv_link_curl.com : Update for new curl structure.
pcsi_product_gnv_curl.com : Set up to optionally do a complete build.
There was a mix of GlobCode, CURLcode and ints and they were mostly
passing around CURLcode errors. This change makes the functions use only
CURLcode and removes the GlobCode type completely.
By counting from 0 and up instead of backwards like before, we remove
the need for the "funny" check of the unsigned variable when decreased
past zero. Easier to read and less risk of compiler warnings.
The >= 0 is actually not required, since i underflows and
the for-loop is stopped using the < condition, but this
makes the VS2012 compiler and code analysis happy.
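Illustrative sketch of the change in loop style (not the actual code in
question):

  #include <stddef.h>

  static long sum_items(const long *item, size_t count)
  {
    size_t i;
    long sum = 0;
    /* previously: for(i = count - 1; i < count; i--) which relies on
       unsigned wrap-around to terminate; counting upwards avoids that */
    for(i = 0; i < count; i++)
      sum += item[i];
    return sum;
  }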
Mark CURLOPT_UNIX_SOCKET_PATH as string to ensure that it ends up as
option in the file generated by --libcurl.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
The ability to do HTTP requests over a UNIX domain socket has been
requested before, in Apr 2008 [0][1] and Sep 2010 [2]. While a
discussion happened, no patch seems to have made it through. I decided
to give it a go since I need to test an nginx HTTP server which listens
on a UNIX
domain socket.
One patch [3] seems to make it possible to use the
CURLOPT_OPENSOCKETFUNCTION function to gain a UNIX domain socket.
Another person wrote a Go program which can do HTTP over a UNIX socket
for Docker [4], which uses a special URL scheme (though the name contains
cURL, it has no relation to the cURL library).
This patch considers support for UNIX domain sockets at the same level
as HTTP proxies / IPv6, it acts as an intermediate socket provider and
not as a separate protocol. Since this feature affects network
operations, a new feature flag was added ("unix-sockets") with a
corresponding CURL_VERSION_UNIX_SOCKETS macro.
A new CURLOPT_UNIX_SOCKET_PATH option is added and documented. This
option enables UNIX domain sockets support for all requests on the
handle (replacing IP sockets and skipping proxies).
A new configure option (--enable-unix-sockets) and CMake option
(ENABLE_UNIX_SOCKETS) can disable this optional feature. Note that I
deliberately did not mark this feature as advanced, this is a
feature/component that should easily be available.
[0]: http://curl.haxx.se/mail/lib-2008-04/0279.html
[1]: http://daniel.haxx.se/blog/2008/04/14/http-over-unix-domain-sockets/
[2]: http://sourceforge.net/p/curl/feature-requests/53/
[3]: http://curl.haxx.se/mail/lib-2008-04/0361.html
[4]: https://github.com/Soulou/curl-unix-socket
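A minimal sketch of using the new option (the socket path and URL are
just examples):

  #include <curl/curl.h>

  static void fetch_over_unix_socket(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* talk to the server over a UNIX domain socket instead of TCP;
         the URL still provides the Host header and request path */
      curl_easy_setopt(curl, CURLOPT_UNIX_SOCKET_PATH,
                       "/var/run/docker.sock");
      curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/info");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
  }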
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
libcurl would never return CURL_VERSION_KERBEROS4 after 7.33, and thus
it would not be output with --version, so remove krb4 from the
supported features output.
When duplicating a handle, the data to post was duplicated using
strdup() when it could be binary and contain zeroes and it was not even
zero terminated! This caused read out of bounds crashes/segfaults.
Since the lib/strdup.c file is no longer easily shared with the curl
tool after this change, the tool now uses its own version instead.
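A sketch of the kind of usage affected: binary POST data with embedded
zero bytes plus curl_easy_duphandle():

  #include <curl/curl.h>

  static CURL *dup_binary_post(void)
  {
    static const char blob[] = { 'a', '\0', 'b', '\0', 'c' };
    CURL *curl = curl_easy_init();
    CURL *copy = NULL;
    if(curl) {
      /* the size must be set first since the data is not zero terminated */
      curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)sizeof(blob));
      curl_easy_setopt(curl, CURLOPT_COPYPOSTFIELDS, blob);
      /* duplicating the handle must copy all of the data, not strdup() it */
      copy = curl_easy_duphandle(curl);
      curl_easy_cleanup(curl);
    }
    return copy;
  }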
Bug: http://curl.haxx.se/docs/adv_20141105.html
CVE: CVE-2014-3707
Reported-By: Symeon Paraschoudis
Rather than always outputting an empty manual page for the '-M' option,
generate a full manual page as done by autotools. For simplicity in
CMake, always generate the gzipped page as it will not be used anyway
when zlib is not available.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
There is no need for such a function. Include directories propagate by
themselves, and having a function with one simple link statement makes
little sense.
Coverity CID 1243583. get_url_file_name() cannot fail and return a NULL
file name pointer so skip the check for that - it tricks coverity into
believing it can happen and it then warns later on when we use 'outfile'
without checking for NULL.
Option --pinnedpubkey takes a path to a public key in DER format and
curl only connects if it matches (currently only implemented with
OpenSSL).
Provides CURLOPT_PINNEDPUBLICKEY for curl_easy_setopt().
Extract a public RSA key from a website like so:
openssl s_client -connect google.com:443 2>&1 < /dev/null | \
sed -n '/-----BEGIN/,/-----END/p' | openssl x509 -noout -pubkey \
| openssl rsa -pubin -outform DER > google.com.der
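On the library side, a minimal sketch of pinning against that DER file:

  #include <curl/curl.h>

  static void fetch_with_pinned_key(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://google.com/");
      /* abort the TLS handshake unless the server's public key matches
         the pinned DER file extracted above */
      curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY, "google.com.der");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
  }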
Coverity CID 1154198. This NULL check implies that the pointer _can_ be
NULL at this point, which it can't. Thus it is dead code. It tricks
static analyzers into warning about dereferencing the pointer since the
code seems to imply it can be NULL.
- Replace CURLAUTH_GSSNEGOTIATE with CURLAUTH_NEGOTIATE
- CURL_VERSION_GSSNEGOTIATE is deprecated; its role is now served by
CURL_VERSION_SSPI, CURL_VERSION_GSSAPI and CURL_VERSION_SPNEGO.
- Remove display of feature 'GSS-Negotiate'
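A minimal sketch of requesting the renamed mechanism:

  #include <curl/curl.h>

  static void fetch_with_negotiate(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* CURLAUTH_NEGOTIATE replaces the old CURLAUTH_GSSNEGOTIATE */
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_NEGOTIATE);
      curl_easy_setopt(curl, CURLOPT_USERPWD, ":");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
  }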
This prevents targets like tool_hugehelp.c from leaving around
half-constructed files if the rule fails with GNU make.
Reported-by: Rafaël Carré <funman@videolan.org>
This is just fundamentally broken. SPNEGO (RFC4178) is a protocol which
allows client and server to negotiate the underlying mechanism which will
actually be used to authenticate. This is *often* Kerberos, and can also
be NTLM and other things. And to complicate matters, there are various
different OIDs which can be used to specify the Kerberos mechanism too.
A SPNEGO exchange will identify *which* GSSAPI mechanism is being used,
and will exchange GSSAPI tokens which are appropriate for that mechanism.
But this SPNEGO implementation just strips the incoming SPNEGO packet
and extracts the token, if any. And completely discards the information
about *which* mechanism is being used. Then we *assume* it was Kerberos,
and feed the token into gss_init_sec_context() with the default
mechanism (GSS_S_NO_OID for the mech_type argument).
Furthermore... broken as this code is, it was never even *used* for input
tokens anyway, because higher layers of curl would just bail out if the
server actually said anything *back* to us in the negotiation. We assume
that we send a single token to the server, and it accepts it. If the server
wants to continue the exchange (as is required for NTLM and for SPNEGO
to do anything useful), then curl was broken anyway.
So the only bit which actually did anything was the bit in
Curl_output_negotiate(), which always generates an *initial* SPNEGO
token saying "Hey, I support only the Kerberos mechanism and this is its
token".
You could have done that by manually just prefixing the Kerberos token
with the appropriate bytes, if you weren't going to do any proper SPNEGO
handling. There's no need for the FBOpenSSL library at all.
The sane way to do SPNEGO is just to *ask* the GSSAPI library to do
SPNEGO. That's what the 'mech_type' argument to gss_init_sec_context()
is for. And then it should all Just Work™.
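Roughly, that looks like the sketch below; the SPNEGO OID is
1.3.6.1.5.5.2, error handling and loop logic are omitted, and this is
not the actual patch:

  #include <gssapi/gssapi.h>

  /* SPNEGO mechanism OID 1.3.6.1.5.5.2, defined locally to keep the
     sketch self-contained */
  static gss_OID_desc spnego_mech = { 6, (void *)"\x2b\x06\x01\x05\x05\x02" };

  static OM_uint32 negotiate_step(gss_ctx_id_t *ctx, gss_name_t target,
                                  gss_buffer_t input, gss_buffer_t output)
  {
    OM_uint32 minor;
    /* pass the SPNEGO OID as mech_type instead of the default, so the
       GSSAPI library runs the negotiation rather than assuming Kerberos */
    return gss_init_sec_context(&minor, GSS_C_NO_CREDENTIAL, ctx, target,
                                &spnego_mech, GSS_C_MUTUAL_FLAG, 0,
                                GSS_C_NO_CHANNEL_BINDINGS, input,
                                NULL, output, NULL, NULL);
  }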
That 'sane way' will be added in a subsequent patch, as will bug fixes
for our failure to handle any exchange other than a single outbound
token to the server which results in immediate success.
This prevents valgrind from reporting possibly lost memory that NSPR
uses for file descriptor cache and other globally allocated internal
data structures.
Renamed the CURLX_ONES file list definition in order to a) be
consistent with the other file lists and b) allow for the addition of
the curlx header files, which will assist with Visual Studio project
file generation rather than hard-coding those files.
This makes it possible to fetch from an IPv6 literal without specifying
the -g option. Globbing remains available elsewhere in the URL.
For example:
curl http://[::1]/file[1-3].txt
This creates no ambiguity, because there is no overlap between the
syntax of valid globs and valid IPv6 literals. Globs contain hyphens
and at most 1 colon, while IPv6 literals have no hyphens, and at least 2
colons.
The peek_ipv6() parser simply whitelists a set of characters and counts
colons, because the real validation happens later on. The character set
includes A-Z, in case someone decides to implement support for scopes
like [fe80::1%25eth0] in the future.
Signed-off-by: Paul Marks <pmarks@google.com>
This allows configure --disable-manual to run and build without having
to regenerate the src/tool_hugehelp.c file which otherwise is necessary
since we ship tarballs with that file present.
Reported-by: Remi Gacogne
Bug: http://curl.haxx.se/bug/view.cgi?id=1350
Remove the slash/backslash problem; now only slashes are used, as Wmake
automatically translates them to the proper form, or the tools are not
sensitive to it.
Enable spaces in paths.
Use the internal rm command for all host platforms.
Add an error message if an old Open Watcom version is used. Some old
versions exhibit build problems with the latest curl version. Now only
versions 1.8, 1.9 and 2.0 beta are supported.
Ensure a source file isn't generated for the following informational
command line parameters when --libcurl is specified:
--help, --manual, --version and --engine list
Otherwise the output would only include a fairly empty-looking main()
function and a call to curl_easy_init() and curl_easy_cleanup() when
performed with --engine list.
Correctly output libcurl source code that includes multiple operations
as specified by --next. Note that each operation evaluates to a single
curl_easy_perform() in source code form.
Also note that the output could be optimised a little so global config
options are only output once rather than per operation as is presently
the case.