The http2 code for connection checking needs a transfer to use. Make
sure a working one is set before handler->connection_check() is called.
Reported-by: jnbr on github
Fixes#3541 Closes#3547
urlapi: turn three local-only functions into statics
conncache: make conncache_find_first_connection static
multi: make detach_connnection static
connect: make getaddressinfo static
curl_ntlm_core: make hmac_md5 static
http2: make two functions static
http: make http_setup_conn static
connect: make tcpnodelay static
tests: make UNITTEST a thing to mark functions with, so they can be static for
normal builds and non-static for unit test builds
... and mark Curl_shuffle_addr accordingly.
url: make up_free static
setopt: make vsetopt static
curl_endian: make write32_le static
rtsp: make rtsp_connisdead static
warnless: remove unused functions
memdebug: remove one unused function, make another static
Stick to "Schannel" everywhere. The configure option --with-winssl is
kept to allow existing builds to work but --with-schannel is added as an
alias.
Closes#3504
extract_if_dead() is called from two functions, and only one of
them should get conn->data updated and now neither call path clears it.
scan-build found a case where conn->data would be NULL dereferenced in
ConnectionExists() otherwise.
Closes#3473
Make sure that this function sets a proper "live" transfer for the
connection before calling the protocol-specific connection check
function, and then clear it again afterward as a non-used connection has
no current transfer.
Reported-by: Jeroen Ooms
Reviewed-by: Marcel Raad
Reviewed-by: Daniel Gustafsson
Fixes#3463 Closes#3464
We use "conn" everywhere to be a pointer to the connection.
Introduces two functions that "attaches" and "detaches" the connection
to and from the transfer.
Going forward, we should favour using "data->conn" (since a transfer
always only has a single connection or none at all) over "conn->data"
(since a connection can have none, one or many transfers associated with
it and updating conn->data to be correct is error prone and a frequent
reason for internal issues).
Closes#3442
Do not assume/store association between a given easy handle and the
connection if it can be avoided.
Long-term, the 'conn->data' pointer should probably be removed as it is a
little too error-prone. Still used very widely though.
Reported-by: masbug on github
Fixes#3391 Closes#3400
Added CURLOPT_HTTP09_ALLOWED and --http0.9 for this purpose.
For now, both the tool and library allow HTTP/0.9 by default.
docs/DEPRECATE.md lays out the plan for when to reverse that default: 6
months after the 7.64.0 release. The options are added already now so
that applications/scripts can start using them right away.
Fixes#2873 Closes#3383
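A minimal sketch of opting in explicitly from an application once the
default flips; the easy-handle flow and URL below are illustrative:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* explicitly allow HTTP/0.9 responses, regardless of the default */
      curl_easy_setopt(curl, CURLOPT_HTTP09_ALLOWED, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }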
The function does not return the same value as snprintf() normally does,
so readers may be misled into thinking the code works differently than
it actually does. A different function name makes this easier to detect.
Reported-by: Tomas Hoger
Assisted-by: Daniel Gustafsson
Fixes#3296 Closes#3297
When using c-ares for asyn dns, the dns socket fd was silently closed
by c-ares without curl being aware. curl would then 'realize' the fd
has been removed at next call of Curl_resolver_getsock, and only then
notify the CURLMOPT_SOCKETFUNCTION to remove fd from its poll set with
CURL_POLL_REMOVE. At this point the fd is already closed.
By using ares socket state callback (ARES_OPT_SOCK_STATE_CB), this
patch allows curl to be notified that the fd is no longer needed
for either write or read. At this point, by calling
Curl_multi_closed we are able to notify multi with CURL_POLL_REMOVE
before the fd is actually closed by ares.
In asyn-ares.c Curl_resolver_duphandle we can't use ares_dup anymore
since it does not allow passing a different sock_state_cb_data.
Closes#3238
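For reference, this is roughly what registering a socket state callback
with c-ares looks like; the callback body is a simplified stand-in for
what curl does via Curl_multi_closed, and the names inside it are
illustrative:

  #include <string.h>
  #include <ares.h>

  /* called by c-ares whenever a resolver socket changes state; when both
     readable and writable are 0 the socket is about to be closed */
  static void sock_state_cb(void *userp, ares_socket_t fd,
                            int readable, int writable)
  {
    if(!readable && !writable) {
      /* tell the application/multi handle to stop monitoring 'fd'
         before c-ares closes it */
    }
    (void)userp;
    (void)fd;
  }

  static int setup_resolver(ares_channel *channel, void *userp)
  {
    struct ares_options opts;
    memset(&opts, 0, sizeof(opts));
    opts.sock_state_cb = sock_state_cb;
    opts.sock_state_cb_data = userp;
    return ares_init_options(channel, &opts, ARES_OPT_SOCK_STATE_CB);
  }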
The function identifying a leading "scheme" part of the URL considered a
few letters ending with a colon to be a scheme, making something like
"short:80" to become an unknown scheme instead of a short host name and
a port number.
Extended test 1560 to verify.
Also fixed test 203 to use file_pwd to make it get the correct path on
Windows. Removed test 2070 since it was a duplicate of 203.
Assisted-by: Marcel Raad
Reported-by: Hagai Auro
Fixes#3220 Fixes#3233 Closes#3223 Closes#3235
- for "--netrc", don't ignore the login/password specified with "--user",
only ignore the login/password in the URL.
This restores the netrc behaviour of curl 7.61.1 and earlier.
- fix the documentation of CURL_NETRC_REQUIRED
- improve the detection of login/password changes when reading .netrc
- don't read .netrc if both login and password are already set
Fixes#3213 Closes#3224
Now FILE transfers send headers to the header callback like HTTP and
other protocols. Also made curl_easy_getinfo(...CURLINFO_PROTOCOL...)
work for FILE in the callbacks.
Makes "curl -i file://.." and "curl -I file://.." work like before
again. Applied the bold header logic to them too.
Regression from c1c2762 (7.61.0)
Reported-by: Shaun Jackman
Fixes#3083 Closes#3101
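A small sketch of reading CURLINFO_PROTOCOL from inside a header
callback, assuming the easy handle is passed in via CURLOPT_HEADERDATA;
the handle flow and file URL are illustrative:

  #include <stdio.h>
  #include <curl/curl.h>

  static size_t header_cb(char *buf, size_t size, size_t nitems, void *userp)
  {
    CURL *curl = (CURL *)userp;
    long proto = 0;
    /* with this change, FILE transfers reach this callback too and
       CURLINFO_PROTOCOL reports CURLPROTO_FILE */
    curl_easy_getinfo(curl, CURLINFO_PROTOCOL, &proto);
    fprintf(stderr, "header (proto %ld): %.*s",
            proto, (int)(size * nitems), buf);
    return size * nitems;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "file:///etc/hostname");
      curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, header_cb);
      curl_easy_setopt(curl, CURLOPT_HEADERDATA, curl);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }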
Add functionality so that protocols can do custom keepalive on their
connections, when an external API function is called.
Add docs for the new options in 7.62.0
Closes#1641
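A hedged sketch of driving this from an application, assuming the
external API in question is the curl_easy_upkeep() /
CURLOPT_UPKEEP_INTERVAL_MS pair that shipped in 7.62.0:

  #include <curl/curl.h>

  /* keep an otherwise idle connection alive between transfers */
  static void keep_warm(CURL *curl)
  {
    /* how often the protocol-specific keepalive may be sent
       (assumed option name from the 7.62.0 upkeep feature) */
    curl_easy_setopt(curl, CURLOPT_UPKEEP_INTERVAL_MS, 30000L);
    /* invoke the per-protocol connection check/keepalive */
    curl_easy_upkeep(curl);
  }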
This fixes a memory leak when CURLOPT_LOGIN_OPTIONS is used, together with
connection reuse.
I found this with oss-fuzz on GDAL and curl master:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=9582
I couldn't reproduce with the oss-fuzz original test case, but looking
at curl source code pointed to this well reproducible leak.
Closes#2790
Follow-up to 1b76c38904. The VTLS backends that close down the TLS
layer for a connection still need a Curl_easy handle for the session_id
cache etc.
Fixes#2764 Closes#2771
By making sure to use the *current* easy handle with extracted
connections from the cache, and make sure to NULLify the ->data pointer
when the connection is put into the cache to make this mistake easier to
detect in the future.
Reported-by: Will Dietz
Fixes#2669 Closes#2672
- Get rid of a variable that was generating a false positive warning
(uninitialized)
- Fix issues in tests
- Reduce scope of several variables all over
etc
Closes#2631
... instead of previous separate struct fields, to make it easier to
extend and change individual backends without having to modify them all.
Closes#2547
- Move verify_certificate functionality in schannel.c into a new
file called schannel_verify.c. Additionally, some structure definitions
from schannel.c have been moved to schannel.h to allow them to be
used in schannel_verify.c.
- Make verify_certificate functionality for Schannel available on
all versions of Windows instead of just Windows CE. verify_certificate
will be invoked on Windows CE or when the user specifies
CURLOPT_CAINFO and CURLOPT_SSL_VERIFYPEER.
- In verify_certificate, create a custom certificate chain engine that
exclusively trusts the certificate store backed by the CURLOPT_CAINFO
file.
- doc updates of --cacert/CAINFO support for schannel
- Use CERT_NAME_SEARCH_ALL_NAMES_FLAG when invoking CertGetNameString
when available. This implements a TODO in schannel.c to improve
handling of multiple SANs in a certificate. In particular, all SANs
will now be searched instead of just the first name.
- Update tool_operate.c to not search for the curl-ca-bundle.crt file
when using Schannel to maintain backward compatibility. Previously,
any curl-ca-bundle.crt file found in that search would have been
ignored by Schannel. But, with CAINFO support, the file found by
that search would have been used as the certificate store and
could cause issues for any users that have curl-ca-bundle.crt in
the search path.
- Update url.c to not set the build time CURL_CA_BUNDLE if the selected
SSL backend is Schannel. We allow setting CA location for schannel
only when explicitly specified by the user via CURLOPT_CAINFO /
--cacert.
- Add new test cases 3000 and 3001. These test cases check that the first
and last SAN, respectively, matches the connection hostname. New test
certificates have been added for these cases. For 3000, the certificate
prefix is Server-localhost-firstSAN and for 3001, the certificate
prefix is Server-localhost-secondSAN.
- Remove TODO 15.2 (Add support for custom server certificate
validation), this commit addresses it.
Closes https://github.com/curl/curl/pull/1325
curl 7.57.0 and up interpret this according to Appendix E.3.2 of RFC
8089 but then returns an error saying this is unimplemented. This is
actually a regression in behavior on both Windows and Unix.
Before curl 7.57.0 this URL was treated as a path of "//foo/bar" and
then passed to the relevant OS API. This means that the behavior of this
case is actually OS dependent.
The Unix path resolution rules say that the OS must handle swallowing
the extra "/" and so this path is the same as "/foo/bar"
The Windows path resolution rules say that this is a UNC path and
automatically handles the SMB access for the program. So curl on Windows
was already doing Appendix E.3.2 without any special code in curl.
Regression
Closes#2438
- Add new option CURLOPT_HAPPY_EYEBALLS_TIMEOUT to set libcurl's happy
eyeball timeout value.
- Add new optval macro CURL_HET_DEFAULT to represent the default happy
eyeballs timeout value (currently 200 ms).
- Add new tool option --happy-eyeballs-timeout-ms to expose
CURLOPT_HAPPY_EYEBALLS_TIMEOUT. The -ms suffix is used because the
other -timeout options in the tool expect seconds not milliseconds.
Closes https://github.com/curl/curl/pull/2260
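A minimal sketch; the released header spells this option
CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS, which is what is assumed here:

  #include <curl/curl.h>

  static void tune_happy_eyeballs(CURL *curl)
  {
    /* wait 300 ms (instead of the CURL_HET_DEFAULT 200 ms) before starting
       the parallel connect attempt to the other address family */
    curl_easy_setopt(curl, CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS, 300L);
  }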
Move curl_mime_initpart() and curl_mime_cleanpart() calls to lower-level
functions dealing with UserDefined structure contents.
This avoids memory leakages on curl-generated part mime headers.
New test 2073 checks this using the cli tool --next option: it
triggers a valgrind error if bug is present.
Bug: https://curl.haxx.se/mail/lib-2017-12/0060.html
Reported-by: Martin Galvan
These are OS/2-specific things added to the code in the year 2000. They
were always ugly. If there's any user left, they still don't need it
done this way.
Closes#2166
Connections that are used for HTTP/1.1 Pipelining or HTTP/2 multiplexing
only get additional transfers added to them if the existing connection
is held by the same multi or easy handle. libcurl does not support doing
HTTP/2 streams in different threads using a shared connection.
Closes#2152
If the lock is released before the dealings with the bundle are over, it
may have been changed by another thread in the meantime.
Fixes#2132 Fixes#2151 Closes#2139
The SFTP back-end supports asynchronous reading only, limited
to 32-bit file length. Writing is synchronous with no other
limitations.
This also brings keyboard-interactive authentication.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@gnutls.org>
libssh is an alternative library to libssh2.
https://www.libssh.org/
That patch set also introduces support for ECDSA and ed25519 keys,
as well as gssapi authentication.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@redhat.com>
Originally, my idea was to allocate the two structures (or more
precisely, the connectdata structure and the four SSL backend-specific
structures required for ssl[0..1] and proxy_ssl[0..1]) in one go, so
that they all could be free()d together.
However, getting the alignment right is tricky. Too tricky.
So let's just bite the bullet and allocate the SSL backend-specific
data separately.
As a consequence, we now have to be very careful to release the memory
allocated for the SSL backend-specific data whenever we release any
connectdata.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Closes#2119
- Align the array of ssl_backend_data on a max 32 byte boundary.
8 is likely to be ok but I went with 32 for posterity should one of
the ssl_backend_data structs change to contain a larger sized variable
in the future.
Prior to this change (since dev 70f1db3, release 7.56) the connectdata
structure was undersized by 4 bytes in 32-bit builds with ssl enabled
because long long * was mistakenly used for alignment instead of
long long, with the intention being an 8 byte boundary. Also long long
may not be an available type.
The undersized connectdata could lead to oob read/write past the end in
what was expected to be the last 4 bytes of the connection's secondary
socket https proxy ssl_backend_data struct (the secondary socket in a
connection is used by ftp, others?).
Closes https://github.com/curl/curl/issues/2093
CVE-2017-8818
Bug: https://curl.haxx.se/docs/adv_2017-af0a.html
* LOTS of comment updates
* explicit error for SMB shares (e.g. "file:////share/path/file")
* more strict handling of authority (i.e. "//localhost/")
* now accepts dodgy old "C:|" drive letters
* more precise handling of drive letters in and out of Windows
(especially recognising both "file:c:/" and "file:/c:/")
Closes#2110
Host names like "127.0.0.1 moo" would otherwise be accepted by some
getaddrinfo() implementations.
Updated test 1034 and 1035 accordingly.
Fixes#2073 Closes#2092
This is implemented as an output streaming stack of unencoders, the last
calling the client write procedure.
New test 230 checks this feature.
Bug: https://github.com/curl/curl/pull/2002
Reported-by: Daniel Bankhead
Since CURLSSH_AUTH_ANY (aka CURLSSH_AUTH_DEFAULT) is ~0, an arg value
check on this option is incorrect; we have to accept any value.
Prior to this change since f121575 (7.56.1+) CURLOPT_SSH_AUTH_TYPES
erroneously rejected CURLSSH_AUTH_ANY with CURLE_BAD_FUNCTION_ARGUMENT.
Bug: https://github.com/curl/curl/commit/f121575#commitcomment-25347120
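For illustration, the call that was being rejected and is now accepted
again; a sketch where the SFTP URL is illustrative:

  #include <curl/curl.h>

  static CURLcode setup_sftp(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/file.txt");
    /* CURLSSH_AUTH_ANY is ~0, so any strict range check would reject it */
    return curl_easy_setopt(curl, CURLOPT_SSH_AUTH_TYPES,
                            (long)CURLSSH_AUTH_ANY);
  }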
... which is valid according to documentation. Regression since
f121575c0b.
Verified now in test 501.
Reported-by: cbartl on github
Fixes#2038 Closes#2039
.. also add same arg value check to CURLOPT_POSTFIELDSIZE_LARGE.
Prior to this change since f121575 (7.56.1+) CURLOPT_POSTFIELDSIZE
erroneously rejected -1 value with CURLE_BAD_FUNCTION_ARGUMENT.
Bug: https://curl.haxx.se/mail/lib-2017-11/0000.html
Reported-by: Andrew Lambert
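A sketch of the call that regressed: passing -1 to let libcurl use
strlen() on the postfields (URL illustrative):

  #include <curl/curl.h>

  static void post_it(CURL *curl, const char *fields)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/submit");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, fields);
    /* -1 means "use strlen() on the data"; this must not be rejected */
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, -1L);
    /* the large variant gets the same arg value check */
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE_LARGE, (curl_off_t)-1);
  }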
returning 'time_t' is problematic when that type is unsigned and we
return values less than zero to signal "already expired", used in
several places in the code.
Closes#2021
... since the 'tv' stood for timeval and this function does not return a
timeval struct anymore.
Also, cleaned up the Curl_timediff*() functions to avoid typecasts and
clean up the descriptive comments.
Closes#2011
... to cater for systems with unsigned time_t variables.
- Renamed the functions to Curl_timediff and Curl_timediff_us.
- Added overflow protection for both of them in either direction for
both 32 bit and 64 bit time_ts
- Reprefixed the curlx_time functions to use Curl_*
Reported-by: Peter Piekarski
Fixes#2004 Closes#2005
... that are multiplied by 1000 when stored.
For 32 bit long systems, the max value accepted (2147483 seconds) is >
596 hours which is unlikely to ever be set by a legitimate application -
and previously it didn't work either, it just caused undefined behavior.
Also updated the man pages for these timeout options to mention the
return code.
Closes#1938
Now changing the VERIFYHOST, VERIFYPEER and VERIFYSTATUS options during
an active connection updates the current connection's (i.e. the
'connectdata' structure's) appropriate ssl_config (and ssl_proxy_config)
variables, making these options effective for the ongoing connection.
This functionality was available before and was broken by the
following change:
"proxy: Support HTTPS proxy and SOCKS+HTTP(s)"
CommitId: cb4e2be7c6.
Bug: https://github.com/curl/curl/issues/1941
Closes https://github.com/curl/curl/pull/1951
When curl and libcurl are built with some protocols disabled, they stop
setting and receiving some options that don't make sense with those
protocols. In particular, when HTTP is disabled many options aren't set
that are used only by HTTP. However, some options that appear to be
HTTP-only are actually used by other protocols as well (some despite
having HTTP in the name) and should be set, but weren't. This change now
causes some of these options to be set and used for more (or for all)
protocols. In particular, this fixes tests 646 through 649 in an
HTTP-disabled build, which use the MIME API in the mail protocols.
A connection can only be reused if the flags "conn_to_host" and
"conn_to_port" match. Therefore it is not necessary to copy these flags
in reuse_conn().
Closes#1918
So far, all of the SSL backends' private data has been declared as
part of the ssl_connect_data struct, in one big #if .. #elif .. #endif
block.
This can only work as long as the SSL backend is a compile-time option,
something we want to change in the next commits.
Therefore, let's encapsulate the exact data needed by each SSL backend
into a private struct, and let's avoid bleeding any SSL backend-specific
information into urldata.h. This is also necessary to allow multiple SSL
backends to be compiled in at the same time, as e.g. OpenSSL's and
CyaSSL's headers cannot be included in the same .c file.
To avoid too many malloc() calls, we simply append the private structs
to the connectdata struct in allocate_conn().
This requires us to take extra care of alignment issues: struct fields
often need to be aligned on certain boundaries e.g. 32-bit values need to
be stored at addresses that divide evenly by 4 (= 32 bit / 8
bit-per-byte).
We do that by assuming that no SSL backend's private data contains any
fields that need to be aligned on boundaries larger than `long long`
(typically 64-bit) would need. Under this assumption, we simply add a
dummy field of type `long long` to the `struct connectdata` struct. This
field will never be accessed but acts as a placeholder for the four
instances of ssl_backend_data instead. The size of each ssl_backend_data
struct is stored in the SSL backend-specific metadata, to allow
allocate_conn() to know how much extra space to allocate, and how to
initialize the ssl[sockindex]->backend and proxy_ssl[sockindex]->backend
pointers.
This would appear to be a little complicated at first, but is really
necessary to encapsulate the private data of each SSL backend correctly.
And we need to encapsulate thusly if we ever want to allow selecting
CyaSSL and OpenSSL at runtime, as their headers cannot be included within
the same .c file (there are just too many conflicting definitions and
declarations for that).
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The entire idea of introducing the Curl_ssl struct to describe SSL
backends is to prepare for choosing the SSL backend at runtime.
To that end, convert all the #ifdef have_curlssl_* style conditionals
to use bit flags instead.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The required low-level logic was already available as part of
`libssh2` (via the `LIBSSH2_FLAG_COMPRESS` option of
`libssh2_session_flag()`[1]).
This patch adds the new `libcurl` option `CURLOPT_SSH_COMPRESSION`
(boolean) and the new `curl` command-line option `--compressed-ssh`
to request this `libssh2` feature. To have compression enabled, it
is required that the SSH server supports a (zlib) compatible
compression method and that `libssh2` was built with `zlib` support
enabled.
[1] https://www.libssh2.org/libssh2_session_flag.html
Ref: https://github.com/curl/curl/issues/1732
Closes https://github.com/curl/curl/pull/1735
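A minimal sketch of enabling it from libcurl (server/URL illustrative);
the tool equivalent is the --compressed-ssh flag:

  #include <curl/curl.h>

  static void enable_ssh_compression(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "scp://example.com/bigfile.bin");
    /* ask libssh2 to negotiate zlib compression if the server supports it */
    curl_easy_setopt(curl, CURLOPT_SSH_COMPRESSION, 1L);
  }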
Fixes the below leak:
$ valgrind --leak-check=full ~/install-curl-git/bin/curl --proxy "http://a:b@/x" http://127.0.0.1
curl: (5) Couldn't resolve proxy name
==5048==
==5048== HEAP SUMMARY:
==5048== in use at exit: 532 bytes in 12 blocks
==5048== total heap usage: 5,288 allocs, 5,276 frees, 445,271 bytes allocated
==5048==
==5048== 2 bytes in 1 blocks are definitely lost in loss record 1 of 12
==5048== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==5048== by 0x4E6CB79: parse_login_details (url.c:5614)
==5048== by 0x4E6BA82: parse_proxy (url.c:5091)
==5048== by 0x4E6C46D: create_conn_helper_init_proxy (url.c:5346)
==5048== by 0x4E6EA18: create_conn (url.c:6498)
==5048== by 0x4E6F9B4: Curl_connect (url.c:6967)
==5048== by 0x4E86D05: multi_runsingle (multi.c:1436)
==5048== by 0x4E88432: curl_multi_perform (multi.c:2160)
==5048== by 0x4E7C515: easy_transfer (easy.c:708)
==5048== by 0x4E7C74A: easy_perform (easy.c:794)
==5048== by 0x4E7C7B1: curl_easy_perform (easy.c:813)
==5048== by 0x414025: operate_do (tool_operate.c:1563)
==5048==
==5048== 2 bytes in 1 blocks are definitely lost in loss record 2 of 12
==5048== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==5048== by 0x4E6CBB6: parse_login_details (url.c:5621)
==5048== by 0x4E6BA82: parse_proxy (url.c:5091)
==5048== by 0x4E6C46D: create_conn_helper_init_proxy (url.c:5346)
==5048== by 0x4E6EA18: create_conn (url.c:6498)
==5048== by 0x4E6F9B4: Curl_connect (url.c:6967)
==5048== by 0x4E86D05: multi_runsingle (multi.c:1436)
==5048== by 0x4E88432: curl_multi_perform (multi.c:2160)
==5048== by 0x4E7C515: easy_transfer (easy.c:708)
==5048== by 0x4E7C74A: easy_perform (easy.c:794)
==5048== by 0x4E7C7B1: curl_easy_perform (easy.c:813)
==5048== by 0x414025: operate_do (tool_operate.c:1563)
Fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=2984
Credit to OSS Fuzz for discovery
Closes#1761
... to make all libcurl internals able to use the same data types for
the struct members. The timeval struct differs subtly on several
platforms so it makes it cumbersome to use everywhere.
Ref: #1652 Closes#1693
Add a new type of callback to Curl_handler which performs checks on
the connection. Alter RTSP so that it uses this callback to do its
own check on connection health.
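A rough, self-contained sketch of the pattern; the struct layout, flag
names and types here are illustrative rather than curl's actual
internals, and only the idea of a per-protocol connection_check callback
comes from the text:

  #include <stdbool.h>

  struct connection;                       /* opaque connection object */

  /* hypothetical check flags */
  #define CHECK_ISDEAD    (1 << 0)
  #define CHECK_KEEPALIVE (1 << 1)

  struct protocol_handler {
    const char *scheme;
    /* per-protocol hook: inspect the connection and report its state */
    unsigned int (*connection_check)(struct connection *conn,
                                     unsigned int checks);
  };

  /* generic code asks the protocol whether a cached connection is usable */
  static bool connection_is_dead(const struct protocol_handler *h,
                                 struct connection *conn)
  {
    if(h->connection_check)
      return (h->connection_check(conn, CHECK_ISDEAD) & CHECK_ISDEAD) != 0;
    return false; /* no protocol-specific check: assume alive */
  }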
If libcurl was built with GSS-API support, it unconditionally advertised
GSS-API authentication while connecting to a SOCKS5 proxy. This caused
problems in environments with improperly configured Kerberos: a stock
libcurl failed to connect, while a libcurl built without GSS-API
connected fine using username and password.
This commit introduces the CURLOPT_SOCKS5_AUTH option to control the
allowed methods for SOCKS5 authentication at run time.
Note that a new option was preferred over reusing CURLOPT_PROXYAUTH
for compatibility reasons because the set of authentication methods
allowed by default was different for HTTP and SOCKS5 proxies.
Bug: https://curl.haxx.se/mail/lib-2017-01/0005.html
Closes https://github.com/curl/curl/pull/1454
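A minimal sketch of restricting SOCKS5 authentication to
username/password only, so GSS-API is never advertised (proxy address
illustrative):

  #include <curl/curl.h>

  static void use_socks5_password_only(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_PROXY, "socks5://127.0.0.1:1080");
    /* advertise only user/password auth; leave out CURLAUTH_GSSAPI */
    curl_easy_setopt(curl, CURLOPT_SOCKS5_AUTH, (long)CURLAUTH_BASIC);
    curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:secret");
  }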
... to enable sending "OPTIONS *" which wasn't possible previously.
This option currently only works for HTTP.
Added test cases 1298 + 1299 to verify
Fixes#1280 Closes#1462
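A hedged sketch, assuming the option added here is CURLOPT_REQUEST_TARGET
(the text above does not name it); the URL is illustrative:

  #include <curl/curl.h>

  static void send_options_star(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "OPTIONS");
    /* use "*" as the request target instead of the path from the URL */
    curl_easy_setopt(curl, CURLOPT_REQUEST_TARGET, "*");
  }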
... all other non-HTTP protocol schemes are now defaulting to "tunnel
trough" mode if a HTTP proxy is specified. In reality there are no HTTP
proxies out there that allow those other schemes.
Assisted-by: Ray Satiro, Michael Kaufmann
Closes#1505
mk-lib1521.pl generates a test program (lib1521.c) that calls
curl_easy_setopt() for every known option with a few typical values to
make sure they work (ignoring the return codes).
Some small changes were necessary to avoid asserts and NULL accesses
when doing this.
The perl script needs to be manually rerun when we add new options.
Closes#1543
Some code (e.g. Curl_fillreadbuffer) assumes that this buffer is not
exceedingly tiny and will break if it is. This same check is already
done at run time in the CURLOPT_BUFFERSIZE option.
The function IsPipeliningPossible() would return TRUE if either
pipelining OR HTTP/2 were possible on a connection, which would lead to
it returning TRUE even for POSTs on HTTP/1 connections.
It now returns a bitmask so that the caller can differentiate which kind
the connection allows.
Fixes#1481 Closes#1483
Reported-by: stootill at github
This fixes the following clang warnings:
macro is not used [-Wunused-macros]
will never be executed [-Wunreachable-code]
Closes https://github.com/curl/curl/pull/1448
get_protocol_family() is not defined static even though there is a
static local forward declaration. Let's simply make the definition match
its declaration.
Bug: https://curl.haxx.se/mail/lib-2017-04/0127.html
The data->req.uploadbuf struct member served no good purpose; instead we
use ->state.uploadbuffer directly. It makes it clearer in the code which
buffer is being used.
Removed the 'SingleRequest *' argument from the readwrite_upload() proto
as it can be derived from the Curl_easy struct. Also made the code in
the readwrite_upload() function use the 'k->' shortcut to all references
to struct fields in 'data->req', which previously was made with a mix of
both.
- Don't free postponed data on a connection that will be reused since
doing so can cause data loss when pipelining.
Only Windows builds are affected by this.
Closes https://github.com/curl/curl/issues/1380
When using basic-auth, connections and proxy connections
can be re-used with different Authorization headers since
it does not authenticate the connection (like NTLM does).
For instance, the below command should re-use the proxy
connection, but it currently doesn't:
curl -v -U alice:a -x http://localhost:8181 http://localhost/
--next -U bob:b -x http://localhost:8181 http://localhost/
This is a regression since refactoring of ConnectionExists()
as part of: cb4e2be7c6
Fix the above by removing the username and password compare
when re-using proxy connection at proxy_info_matches().
However, this fix brings back another bug that would make curl
re-print the old proxy-authorization header of a previous
proxy basic-auth connection because it wasn't cleared.
For instance, in the below command the second request should
fail if the proxy requires authentication, but would succeed
after the above fix (and before aforementioned commit):
curl -v -U alice:a -x http://localhost:8181 http://localhost/
--next -x http://localhost:8181 http://localhost/
Fix this by clearing conn->allocptr.proxyuserpwd after use
unconditionally, same as we do for conn->allocptr.userpwd.
Also fix test 540 to not expect digest auth header to be
resent when connection is reused.
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Closes https://github.com/curl/curl/pull/1350
- Add new option CURLOPT_SUPPRESS_CONNECT_HEADERS to allow suppressing
proxy CONNECT response headers from the user callback functions
CURLOPT_HEADERFUNCTION and CURLOPT_WRITEFUNCTION.
- Add new tool option --suppress-connect-headers to expose
CURLOPT_SUPPRESS_CONNECT_HEADERS and allow suppressing proxy CONNECT
response headers from --dump-header and --include.
Assisted-by: Jay Satiro
Assisted-by: CarloCannas@users.noreply.github.com
Closes https://github.com/curl/curl/pull/783
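A minimal sketch combining the new option with a CONNECT tunnel (proxy
address and URL illustrative):

  #include <curl/curl.h>

  static void tunnel_without_connect_headers(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_PROXY, "http://127.0.0.1:3128");
    curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
    /* keep the proxy's CONNECT response headers out of the
       header/write callbacks */
    curl_easy_setopt(curl, CURLOPT_SUPPRESS_CONNECT_HEADERS, 1L);
  }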
This commit introduces the CURL_SSLVERSION_MAX_* constants as well as
the --tls-max option of the curl tool.
Closes https://github.com/curl/curl/pull/1166
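A minimal sketch of pinning a TLS version range with the new max
constants, which are OR'ed into CURLOPT_SSLVERSION:

  #include <curl/curl.h>

  static void limit_tls_range(CURL *curl)
  {
    /* negotiate at least TLS 1.1 but never anything above TLS 1.2 */
    curl_easy_setopt(curl, CURLOPT_SSLVERSION,
                     (long)(CURL_SSLVERSION_TLSv1_1 |
                            CURL_SSLVERSION_MAX_TLSv1_2));
  }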
... because it causes confusion with users. Example URLs:
"http://[127.0.0.1]:11211:80" which a lot of languages' URL parsers will
parse and claim uses port number 80, while libcurl would use port number
11211.
"http://user@example.com:80@localhost" which by the WHATWG URL spec will
be treated to contain user name 'user@example.com' but according to
RFC3986 is user name 'user' for the host 'example.com' and then port 80
is followed by "@localhost"
Both these formats are now rejected, and verified so in test 1260.
Reported-by: Orange Tsai
If the compile-time CURL_CA_BUNDLE location is defined use it as the
default value for the proxy CA bundle location, which is the same as
what we already do for the regular CA bundle location.
Ref: https://github.com/curl/curl/pull/1257
- Change CURLOPT_PROXY_CAPATH to return CURLE_NOT_BUILT_IN if the option
is not supported, which is the same as what we already do for
CURLOPT_CAPATH.
- Change the curl tool to handle CURLOPT_PROXY_CAPATH error
CURLE_NOT_BUILT_IN as a warning instead of as an error, which is the
same as what we already do for CURLOPT_CAPATH.
- Fix CAPATH docs to show that CURLE_NOT_BUILT_IN is returned when the
respective CAPATH option is not supported by the SSL library.
Ref: https://github.com/curl/curl/pull/1257
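A sketch of the tool-side handling described above: treat
CURLE_NOT_BUILT_IN from the proxy CA option as a warning rather than a
hard error (the warning text is illustrative):

  #include <stdio.h>
  #include <curl/curl.h>

  static void set_proxy_capath(CURL *curl, const char *capath)
  {
    CURLcode rc = curl_easy_setopt(curl, CURLOPT_PROXY_CAPATH, capath);
    if(rc == CURLE_NOT_BUILT_IN)
      fprintf(stderr, "warning: ignoring --proxy-capath, "
              "not supported by the SSL backend\n");
  }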
The CURLOPT_SSL_VERIFYSTATUS option was not properly handled by libcurl
and thus even if the status couldn't be verified, the connection would
be allowed and the user would not be told about the failed verification.
Regression since cb4e2be7c6
CVE-2017-2629
Bug: https://curl.haxx.se/docs/adv_20170222.html
Reported-by: Marcus Hoffmann
Properly resolve, convert and log the proxy host names.
Support the "--connect-to" feature for SOCKS proxies and for passive FTP
data transfers.
Follow-up to cb4e2be
Reported-by: Jay Satiro
Fixes https://github.com/curl/curl/issues/1248
Replace use of fixed macro BUFSIZE to define the size of the receive
buffer. Reappropriate CURLOPT_BUFFERSIZE to include enlarging receive
buffer size. Upon setting, resize buffer if larger than the current
default size up to a MAX_BUFSIZE (512KB). This can benefit protocols
like SFTP.
Closes#1222
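A minimal sketch of asking for a larger receive buffer, e.g. for SFTP
downloads; the value is illustrative and libcurl clamps it to its
allowed range:

  #include <curl/curl.h>

  static void enlarge_receive_buffer(CURL *curl)
  {
    /* request a 256 KB receive buffer instead of the built-in default */
    curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 256 * 1024L);
  }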
Regression since 1d4202ad, which moved the buffer into a more narrow
scope, but the data in that buffer was used outside of that more narrow
scope.
Reported-by: Dan Fandrich
Bug: https://curl.haxx.se/mail/lib-2017-01/0093.html
In addition to unix domain sockets, Linux also supports an
abstract namespace which is independent of the filesystem.
In order to support it, add new CURLOPT_ABSTRACT_UNIX_SOCKET
option which uses the same storage as CURLOPT_UNIX_SOCKET_PATH
internally, along with a flag to specify abstract socket.
On non-supporting platforms, the abstract address will be
interpreted as an empty string and fail gracefully.
Also add new --abstract-unix-socket tool parameter.
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Reported-by: Chungtsun Li (typeless)
Reviewed-by: Daniel Stenberg
Reviewed-by: Peter Wu
Closes#1197 Fixes#1061
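A minimal sketch of connecting over an abstract unix domain socket
(socket name and URL illustrative); the tool equivalent is
--abstract-unix-socket:

  #include <curl/curl.h>

  static void use_abstract_socket(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/status");
    /* name in the abstract namespace; no filesystem path is created */
    curl_easy_setopt(curl, CURLOPT_ABSTRACT_UNIX_SOCKET, "my-daemon-socket");
  }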
It made the German ß get converted to ss, IDNA2003 style, and we can't
have that for the .de TLD - a primary reason for our switch to IDNA2008.
Test 165 verifies.
When the http_proxy env var is used, the noproxy list was previously the
combination of the --noproxy option and the NO_PROXY env var. Since this
commit, the --noproxy option overrides the NO_PROXY environment variable
even when the http_proxy env var is used.
Closes#1140
If CURL_DISABLE_HTTP was defined, detect_proxy() returned NULL. If it was
not defined, detect_proxy() checked the noproxy list.
Thus, refactor to set the proxy to NULL instead of calling detect_proxy()
when CURL_DISABLE_HTTP is defined, and to call detect_proxy() when it is
not defined and the host is not in the noproxy list.
The combination of --noproxy option and http_proxy env var works well
both for proxied hosts and non-proxied hosts.
However, when combining the NO_PROXY env var with the --proxy option,
non-proxied hosts were not reachable while proxied hosts were OK.
This patch allows us to access non-proxied hosts even when using the
NO_PROXY env var with the --proxy option.
Follow-up to 3463408.
Prior to 3463408 file:// hostnames were silently stripped.
Prior to this commit it did not work when a schemeless url was used with
file as the default protocol.
Ref: https://curl.haxx.se/mail/lib-2016-11/0081.html
Closes https://github.com/curl/curl/pull/1124
Also fix for drive letters:
- Support --proto-default file c:/foo/bar.txt
- Support file://c:/foo/bar.txt
- Fail when a file:// drive letter is detected and not MSDOS/Windows.
Bug: https://github.com/curl/curl/issues/1187
Reported-by: Anatol Belski
Assisted-by: Anatol Belski
CURLOPT_SOCKS_PROXY -> CURLOPT_PRE_PROXY
Added the corresponding --preproxy command line option. Sets a SOCKS
proxy to connect to _before_ connecting to an HTTP(S) proxy.
This was added as part of the SOCKS+HTTPS proxy merge but there's no
need to support this as we prefer to have the protocol specified as a
prefix instead.
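A minimal sketch chaining a SOCKS pre-proxy in front of an HTTP proxy
(addresses illustrative):

  #include <curl/curl.h>

  static void chain_proxies(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    /* first hop: SOCKS proxy reached directly */
    curl_easy_setopt(curl, CURLOPT_PRE_PROXY, "socks5://127.0.0.1:1080");
    /* second hop: HTTP proxy reached through the SOCKS proxy */
    curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:3128");
  }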
If a port number in a "connect-to" entry does not match, skip this
entry instead of connecting to port 0.
If a port number in a "connect-to" entry matches, use this entry
and look no further.
Reported-by: Jay Satiro
Assisted-by: Jay Satiro, Daniel Stenberg
Closes#1148
* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications as opposed to HTTP proxies that receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well.
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent from their --foo counterparts,
except --proxy-crlfile which defaults to --crlfile and --proxy-capath
which defaults to --capath.
Curl now also supports %{proxy_ssl_verify_result} --write-out variable,
similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
- Fix connection reuse for when the proposed new conn 'needle' has a
specified local port but does not have a specified device interface.
Bug: https://curl.haxx.se/mail/lib-2016-11/0137.html
Reported-by: bjt3[at]hotmail.com
Visual C++ now complains about implicitly casting time_t (64-bit) to
long (32-bit). Fix this by changing some variables from long to time_t,
or explicitly casting to long where the public interface would be
affected.
Closes#1131
Previously, the [host] part was just ignored which made libcurl accept
strange URLs misleading users, like "file://etc/passwd" which might've
looked like it refers to "/etc/passwd" but is just "/passwd" since the
"etc" is an ignored host name.
Reported-by: Mike Crowe
Assisted-by: Kamil Dudka
- Call Curl_initinfo on init and duphandle.
Prior to this change the statistical and informational variables were
simply zeroed by calloc on easy init and duphandle. While zero is the
correct default value for almost all info variables, there is one where
it isn't (filetime initializes to -1).
Bug: https://github.com/curl/curl/issues/1103
Reported-by: Neal Poole
... to make it less likely that we forget that the function actually
does case insensitive compares. Also replaced several invokes of the
function with a plain strcmp when case sensitivity is not an issue (like
comparing with "-").
Curl_select_ready() was the former API that was replaced with
Curl_select_check() a while back and the former arg setup was provided
with a define (in order to leave existing code unmodified).
Now we instead offer SOCKET_READABLE and SOCKET_WRITABLE for the most
common shortcuts where only one socket is checked. They're also more
visibly macros.
- Change back behavior so that pipelining is considered possible for
connections that have not yet reached the protocol level.
This is a follow-up to e5f0b1a which had changed the behavior of
checking if pipelining is possible to ignore connections that had
'bits.close' set. Connections that have not yet reached the protocol
level also have that bit set, and we need to consider pipelining
possible on those connections.
No longer attempt to use "doomed" to-be-closed connections when
pipelining. Prior to this change connections marked for deletion (e.g.
timeout) would be erroneously used, resulting in sporadic crashes.
As originally reported and fixed by Carlo Wood (origin unknown).
Bug: https://github.com/curl/curl/issues/627
Reported-by: Rider Linden
Closes https://github.com/curl/curl/pull/1075
Participation-by: nopjmp@users.noreply.github.com
Add the new option CURLOPT_KEEP_SENDING_ON_ERROR to control whether
sending the request body shall be completed when the server responds
early with an error status code.
This is suitable for manual NTLM authentication.
Reviewed-by: Jay Satiro
Closes https://github.com/curl/curl/pull/904
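A minimal sketch for the manual-NTLM case described above (URL
illustrative):

  #include <curl/curl.h>

  static void post_with_keep_sending(CURL *curl, const char *body)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
    /* finish sending the request body even if the server already
       responded with an error status (e.g. a 401 NTLM challenge) */
    curl_easy_setopt(curl, CURLOPT_KEEP_SENDING_ON_ERROR, 1L);
  }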
With HTTP/2 each transfer is made in an individual logical stream over the
connection, making most previous errors that caused the connection to get
forced-closed now instead just kill the stream and not the connection.
Fixes#941
I discovered some people have been using "https://example.com" style
strings as proxy and it "works" (curl doesn't complain) because curl
ignores unknown schemes and then assumes plain HTTP instead.
I think this misleads users into believing curl uses HTTPS to proxies
when it doesn't. Now curl rejects proxy strings using unsupported
schemes instead of just ignoring and defaulting to HTTP.
After a few wasted hours hunting down the reason for slowness during a
TLS handshake that turned out to be because of TCP_NODELAY not being
set, I think we have enough motivation to toggle the default for this
option. We now enable TCP_NODELAY by default and allow applications to
switch it off.
This also makes --tcp-nodelay unnecessary, but --no-tcp-nodelay can be
used to disable it.
Thanks-to: Tim Rühsen
Bug: https://curl.haxx.se/mail/lib-2016-06/0143.html
Previously, passing a timeout of zero to Curl_expire() was a magic code
for clearing all timeouts for the handle. That is now instead made with
the new Curl_expire_clear() function and thus a 0 timeout is fine to set
and will trigger a timeout ASAP.
This will help removing short delays, in particular notable when doing
HTTP/2.
Mostly in order to support broken web sites that redirect to broken URLs
that are accepted by browsers.
Browsers are typically even more lenient than this; per the WHATWG URL
spec they should allow an _infinite_ amount. I tested 8000 slashes with
Firefox and it just worked.
Added test case 1141, 1142 and 1143 to verify the new parser.
Closes#791
The FTP wildcard init is now properly done in Curl_pretransfer()
and the corresponding cleanup in Curl_close().
The previous placement of the init/cleanup code left the internal pointer
NULL when this feature was used with the multi_socket() API, as it was
done within the curl_multi_perform() function.
Reported-by: Jonathan Cardoso Machado
Fixes#800
Only protocols that actually have a protocol registered for ALPN and NPN
should try to get that negotiated in the TLS handshake. That is only
HTTPS (well, http/1.1 and http/2) right now. Previously ALPN and NPN
would wrongly be used in all handshakes if libcurl was built with it
enabled.
Reported-by: Jay Satiro
Fixes#789