... to make all libcurl internals able to use the same data types for
the struct members. The timeval struct differs subtly on several
platforms, which makes it cumbersome to use everywhere.
Ref: #1652
Closes #1693
Add a new type of callback to Curl_handler which performs checks on
the connection. Alter RTSP so that it uses this callback to do its
own check on connection health.
If libcurl was built with GSS-API support, it unconditionally advertised
GSS-API authentication while connecting to a SOCKS5 proxy. This caused
problems in environments with improperly configured Kerberos: a stock
libcurl failed to connect, while a libcurl built without GSS-API
connected fine using username and password.
This commit introduces the CURLOPT_SOCKS5_AUTH option to control the
allowed methods for SOCKS5 authentication at run time.
Note that a new option was preferred over reusing CURLOPT_PROXYAUTH
for compatibility reasons because the set of authentication methods
allowed by default was different for HTTP and SOCKS5 proxies.
Bug: https://curl.haxx.se/mail/lib-2017-01/0005.html
Closes https://github.com/curl/curl/pull/1454
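A minimal application-side sketch of the new option; the proxy address and
credentials below are placeholders, not part of this change:

  #include <curl/curl.h>

  /* Sketch: advertise only username/password auth to the SOCKS5 proxy,
     even in a GSS-API enabled build. Proxy address is a placeholder. */
  static CURLcode fetch_via_socks5(void)
  {
    CURLcode res = CURLE_FAILED_INIT;
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_PROXY, "socks5://127.0.0.1:1080");
      curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:secret");
      /* allow only basic (username/password), skip GSS-API negotiation */
      curl_easy_setopt(curl, CURLOPT_SOCKS5_AUTH, (long)CURLAUTH_BASIC);
      res = curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return res;
  }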
... to enable sending "OPTIONS *" which wasn't possible previously.
This option currently only works for HTTP.
Added test cases 1298 + 1299 to verify
Fixes #1280
Closes #1462
... all other non-HTTP protocol schemes now default to "tunnel
through" mode if an HTTP proxy is specified. In reality there are no HTTP
proxies out there that allow those other schemes.
Assisted-by: Ray Satiro, Michael Kaufmann
Closes#1505
mk-lib1521.pl generates a test program (lib1521.c) that calls
curl_easy_setopt() for every known option with a few typical values to
make sure they work (ignoring the return codes).
Some small changes were necessary to avoid asserts and NULL accesses
when doing this.
The perl script needs to be manually rerun when we add new options.
Closes#1543
Some code (e.g. Curl_fillreadbuffer) assumes that this buffer is not
exceedingly tiny and will break if it is. This same check is already
done at run time in the CURLOPT_BUFFERSIZE option.
The function IsPipeliningPossible() would return TRUE if either
pipelining OR HTTP/2 were possible on a connection, which would lead to
it returning TRUE even for POSTs on HTTP/1 connections.
It now returns a bitmask so that the caller can differentiate which kind
the connection allows.
Fixes #1481
Closes #1483
Reported-by: stootill at github
This fixes the following clang warnings:
macro is not used [-Wunused-macros]
will never be executed [-Wunreachable-code]
Closes https://github.com/curl/curl/pull/1448
get_protocol_family() is not defined static even though there is a
static local forward declaration. Let's simply make the definition match
its declaration.
Bug: https://curl.haxx.se/mail/lib-2017-04/0127.html
The data->req.uploadbuf struct member served no good purpose; instead we
now use ->state.uploadbuffer directly. That makes it clearer in the code
which buffer is being used.
Removed the 'SingleRequest *' argument from the readwrite_upload() proto
as it can be derived from the Curl_easy struct. Also made the code in
the readwrite_upload() function use the 'k->' shortcut for all references
to struct fields in 'data->req', which previously used a mix of both.
- Don't free postponed data on a connection that will be reused since
doing so can cause data loss when pipelining.
Only Windows builds are affected by this.
Closes https://github.com/curl/curl/issues/1380
When using basic-auth, connections and proxy connections
can be re-used with different Authorization headers since
it does not authenticate the connection (like NTLM does).
For instance, the below command should re-use the proxy
connection, but it currently doesn't:
curl -v -U alice:a -x http://localhost:8181 http://localhost/
--next -U bob:b -x http://localhost:8181 http://localhost/
This is a regression since refactoring of ConnectionExists()
as part of: cb4e2be7c6
Fix the above by removing the username and password compare
when re-using proxy connection at proxy_info_matches().
However, this fix brings back another bug that would make curl
re-send the old proxy-authorization header of the previous
proxy basic-auth connection because it wasn't cleared.
For instance, in the below command the second request should
fail if the proxy requires authentication, but would succeed
after the above fix (and before aforementioned commit):
curl -v -U alice:a -x http://localhost:8181 http://localhost/
--next -x http://localhost:8181 http://localhost/
Fix this by clearing conn->allocptr.proxyuserpwd after use
unconditionally, same as we do for conn->allocptr.userpwd.
Also fix test 540 to not expect digest auth header to be
resent when connection is reused.
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Closes https://github.com/curl/curl/pull/1350
- Add new option CURLOPT_SUPPRESS_CONNECT_HEADERS to allow suppressing
proxy CONNECT response headers from the user callback functions
CURLOPT_HEADERFUNCTION and CURLOPT_WRITEFUNCTION.
- Add new tool option --suppress-connect-headers to expose
CURLOPT_SUPPRESS_CONNECT_HEADERS and allow suppressing proxy CONNECT
response headers from --dump-header and --include.
Assisted-by: Jay Satiro
Assisted-by: CarloCannas@users.noreply.github.com
Closes https://github.com/curl/curl/pull/783
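A minimal libcurl sketch of the option; the proxy address and URL are
placeholders:

  #include <curl/curl.h>

  /* Sketch: tunnel through an HTTP proxy but keep the proxy's CONNECT
     response headers out of the header/write callbacks. */
  static CURLcode fetch_without_connect_headers(void)
  {
    CURLcode res = CURLE_FAILED_INIT;
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_PROXY, "http://127.0.0.1:8080");
      curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
      curl_easy_setopt(curl, CURLOPT_HEADER, 1L);
      /* do not pass the proxy's CONNECT response headers to the callbacks */
      curl_easy_setopt(curl, CURLOPT_SUPPRESS_CONNECT_HEADERS, 1L);
      res = curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return res;
  }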
This commit introduces the CURL_SSLVERSION_MAX_* constants as well as
the --tls-max option of the curl tool.
Closes https://github.com/curl/curl/pull/1166
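A minimal sketch of setting both a minimum and a maximum TLS version on an
already-initialized easy handle (the chosen versions are just an example):

  #include <curl/curl.h>

  /* Sketch: require at least TLS 1.1 but never newer than TLS 1.2,
     roughly the equivalent of "--tlsv1.1 --tls-max 1.2" in the tool. */
  static void limit_tls_version(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_SSLVERSION,
                     (long)CURL_SSLVERSION_TLSv1_1 |
                     CURL_SSLVERSION_MAX_TLSv1_2);
  }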
... because it causes confusion with users. Example URLs:
"http://[127.0.0.1]:11211:80" which a lot of languages' URL parsers will
parse and claim uses port number 80, while libcurl would use port number
11211.
"http://user@example.com:80@localhost" which by the WHATWG URL spec will
be treated to contain user name 'user@example.com' but according to
RFC3986 is user name 'user' for the host 'example.com' and then port 80
is followed by "@localhost"
Both these formats are now rejected, and verified so in test 1260.
Reported-by: Orange Tsai
If the compile-time CURL_CA_BUNDLE location is defined use it as the
default value for the proxy CA bundle location, which is the same as
what we already do for the regular CA bundle location.
Ref: https://github.com/curl/curl/pull/1257
- Change CURLOPT_PROXY_CAPATH to return CURLE_NOT_BUILT_IN if the option
is not supported, which is the same as what we already do for
CURLOPT_CAPATH.
- Change the curl tool to handle CURLOPT_PROXY_CAPATH error
CURLE_NOT_BUILT_IN as a warning instead of as an error, which is the
same as what we already do for CURLOPT_CAPATH.
- Fix CAPATH docs to show that CURLE_NOT_BUILT_IN is returned when the
respective CAPATH option is not supported by the SSL library.
Ref: https://github.com/curl/curl/pull/1257
The CURLOPT_SSL_VERIFYSTATUS option was not properly handled by libcurl
and thus even if the status couldn't be verified, the connection would
be allowed and the user would not be told about the failed verification.
Regression since cb4e2be7c6
CVE-2017-2629
Bug: https://curl.haxx.se/docs/adv_20170222.html
Reported-by: Marcus Hoffmann
Properly resolve, convert and log the proxy host names.
Support the "--connect-to" feature for SOCKS proxies and for passive FTP
data transfers.
Follow-up to cb4e2be
Reported-by: Jay Satiro
Fixes https://github.com/curl/curl/issues/1248
Replace use of the fixed macro BUFSIZE to define the size of the receive
buffer. Repurpose CURLOPT_BUFFERSIZE to also allow enlarging the receive
buffer size. Upon setting, resize the buffer if the requested size is
larger than the current default, up to MAX_BUFSIZE (512KB). This can
benefit protocols like SFTP.
Closes#1222
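A minimal sketch on an already-initialized easy handle; the size here is an
arbitrary example within the allowed range:

  #include <curl/curl.h>

  /* Sketch: grow the receive buffer for a bulk SFTP-style download.
     libcurl clamps the value to its allowed range. */
  static void use_big_buffer(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 256 * 1024L); /* 256KB */
  }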
Regression since 1d4202ad, which moved the buffer into a more narrow
scope, but the data in that buffer was used outside of that more narrow
scope.
Reported-by: Dan Fandrich
Bug: https://curl.haxx.se/mail/lib-2017-01/0093.html
In addition to unix domain sockets, Linux also supports an
abstract namespace which is independent of the filesystem.
In order to support it, add new CURLOPT_ABSTRACT_UNIX_SOCKET
option which uses the same storage as CURLOPT_UNIX_SOCKET_PATH
internally, along with a flag to specify abstract socket.
On non-supporting platforms, the abstract address will be
interpreted as an empty string and fail gracefully.
Also add new --abstract-unix-socket tool parameter.
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Reported-by: Chungtsun Li (typeless)
Reviewed-by: Daniel Stenberg
Reviewed-by: Peter Wu
Closes #1197
Fixes #1061
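A minimal sketch; the socket name and URL are placeholders:

  #include <curl/curl.h>

  /* Sketch: talk HTTP to a daemon listening on a Linux abstract
     unix domain socket instead of a TCP port. */
  static CURLcode fetch_over_abstract_socket(void)
  {
    CURLcode res = CURLE_FAILED_INIT;
    CURL *curl = curl_easy_init();
    if(curl) {
      /* name in the abstract namespace, no filesystem path involved */
      curl_easy_setopt(curl, CURLOPT_ABSTRACT_UNIX_SOCKET, "my-daemon");
      curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/status");
      res = curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return res;
  }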
It made the German ß get converted to ss, IDNA2003 style, and we can't
have that for the .de TLD - a primary reason for our switch to IDNA2008.
Test 165 verifies.
Previously, when the http_proxy env var was in use, the noproxy list was
the combination of the --noproxy option and the NO_PROXY env var. Since
this commit, the --noproxy option overrides the NO_PROXY environment
variable even when the http_proxy env var is used.
Closes#1140
If CURL_DISABLE_HTTP was defined, detect_proxy() returned NULL; if it was
not defined, detect_proxy() checked the noproxy list. Thus, refactor to
set the proxy to NULL instead of calling detect_proxy() when
CURL_DISABLE_HTTP is defined, and to call detect_proxy() when
CURL_DISABLE_HTTP is not defined and the host is not in the noproxy list.
The combination of --noproxy option and http_proxy env var works well
both for proxied hosts and non-proxied hosts.
However, when combining NO_PROXY env var with --proxy option,
non-proxied hosts are not reachable while proxied hosts are OK.
This patch allows us to access non-proxied hosts even if using NO_PROXY
env var with --proxy option.
Follow-up to 3463408.
Prior to 3463408 file:// hostnames were silently stripped.
Prior to this commit it did not work when a schemeless URL was used with
file as the default protocol.
Ref: https://curl.haxx.se/mail/lib-2016-11/0081.html
Closes https://github.com/curl/curl/pull/1124
Also fix for drive letters:
- Support --proto-default file c:/foo/bar.txt
- Support file://c:/foo/bar.txt
- Fail when a file:// drive letter is detected and not MSDOS/Windows.
Bug: https://github.com/curl/curl/issues/1187
Reported-by: Anatol Belski
Assisted-by: Anatol Belski
CURLOPT_SOCKS_PROXY -> CURLOPT_PRE_PROXY
Added the corresponding --preproxy command line option. Sets a SOCKS
proxy to connect to _before_ connecting to an HTTP(S) proxy.
This was added as part of the SOCKS+HTTPS proxy merge but there's no
need to support this as we prefer to have the protocol specified as a
prefix instead.
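A minimal sketch of chaining a SOCKS pre-proxy in front of an HTTP proxy on
an already-initialized easy handle; both addresses are placeholders:

  #include <curl/curl.h>

  /* Sketch: first hop through a SOCKS5 proxy, then through an HTTP proxy. */
  static void set_proxy_chain(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_PRE_PROXY, "socks5://10.0.0.1:1080");
    curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:3128");
  }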
If a port number in a "connect-to" entry does not match, skip this
entry instead of connecting to port 0.
If a port number in a "connect-to" entry matches, use this entry
and look no further.
Reported-by: Jay Satiro
Assisted-by: Jay Satiro, Daniel Stenberg
Closes#1148
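A minimal sketch of the corresponding CURLOPT_CONNECT_TO usage (hosts and
ports are placeholders); an entry is only used when both its host and port
match the request:

  #include <curl/curl.h>

  /* Sketch: connect to a staging server instead of the host in the URL,
     but only for requests that would go to example.com:443. */
  static CURLcode fetch_via_connect_to(void)
  {
    CURLcode res = CURLE_FAILED_INIT;
    struct curl_slist *connect_to =
      curl_slist_append(NULL, "example.com:443:staging.example.com:8443");
    CURL *curl = curl_easy_init();
    if(curl && connect_to) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_CONNECT_TO, connect_to);
      res = curl_easy_perform(curl);
    }
    curl_easy_cleanup(curl);
    curl_slist_free_all(connect_to);
    return res;
  }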
* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications as opposed to HTTP proxies that receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well.
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent from their --foo counterparts,
except --proxy-crlfile which defaults to --crlfile and --proxy-capath
which defaults to --capath.
Curl now also supports a %{proxy_ssl_verify_result} --write-out variable,
similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
- Fix connection reuse for when the proposed new conn 'needle' has a
specified local port but does not have a specified device interface.
Bug: https://curl.haxx.se/mail/lib-2016-11/0137.html
Reported-by: bjt3[at]hotmail.com
Visual C++ now complains about implicitly casting time_t (64-bit) to
long (32-bit). Fix this by changing some variables from long to time_t,
or explicitly casting to long where the public interface would be
affected.
Closes#1131
Previously, the [host] part was just ignored, which made libcurl accept
strange URLs that misled users, like "file://etc/passwd" which might've
looked like it refers to "/etc/passwd" but is just "/passwd" since the
"etc" is an ignored host name.
Reported-by: Mike Crowe
Assisted-by: Kamil Dudka
- Call Curl_initinfo on init and duphandle.
Prior to this change the statistical and informational variables were
simply zeroed by calloc on easy init and duphandle. While zero is the
correct default value for almost all info variables, there is one where
it isn't (filetime initializes to -1).
Bug: https://github.com/curl/curl/issues/1103
Reported-by: Neal Poole
... to make it less likely that we forget that the function actually
does case insensitive compares. Also replaced several invocations of the
function with a plain strcmp when case sensitivity is not an issue (like
comparing with "-").
Curl_select_ready() was the former API that was replaced with
Curl_select_check() a while back and the former arg setup was provided
with a define (in order to leave existing code unmodified).
Now we instead offer SOCKET_READABLE and SOCKET_WRITABLE for the most
common shortcuts where only one socket is checked. They're also more
visibly macros.
- Change back behavior so that pipelining is considered possible for
connections that have not yet reached the protocol level.
This is a follow-up to e5f0b1a which had changed the behavior of
checking if pipelining is possible to ignore connections that had
'bits.close' set. Connections that have not yet reached the protocol
level also have that bit set, and we need to consider pipelining
possible on those connections.
No longer attempt to use "doomed" to-be-closed connections when
pipelining. Prior to this change connections marked for deletion (e.g.
timeout) would be erroneously used, resulting in sporadic crashes.
As originally reported and fixed by Carlo Wood (origin unknown).
Bug: https://github.com/curl/curl/issues/627
Reported-by: Rider Linden
Closes https://github.com/curl/curl/pull/1075
Participation-by: nopjmp@users.noreply.github.com
Add the new option CURLOPT_KEEP_SENDING_ON_ERROR to control whether
sending the request body shall be completed when the server responds
early with an error status code.
This is suitable for manual NTLM authentication.
Reviewed-by: Jay Satiro
Closes https://github.com/curl/curl/pull/904
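A minimal sketch on an already-initialized easy handle; the URL and body are
placeholders:

  #include <curl/curl.h>

  /* Sketch: keep sending the POST body even if the server answers early
     with an error status, useful e.g. for manual NTLM handshakes. */
  static void keep_sending(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "payload");
    curl_easy_setopt(curl, CURLOPT_KEEP_SENDING_ON_ERROR, 1L);
  }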
With HTTP/2, each transfer is made in an individual logical stream over
the connection, so most errors that previously caused the connection to
get force-closed now instead just kill the stream and not the connection.
Fixes#941
I discovered some people have been using "https://example.com" style
strings as proxy and it "works" (curl doesn't complain) because curl
ignores unknown schemes and then assumes plain HTTP instead.
I think this misleads users into believing curl uses HTTPS to proxies
when it doesn't. Now curl rejects proxy strings using unsupported
schemes instead of just ignoring and defaulting to HTTP.
After a few wasted hours hunting down the reason for slowness during a
TLS handshake that turned out to be because of TCP_NODELAY not being
set, I think we have enough motivation to toggle the default for this
option. We now enable TCP_NODELAY by default and allow applications to
switch it off.
This also makes --tcp-nodelay unnecessary, but --no-tcp-nodelay can be
used to disable it.
Thanks-to: Tim Rühsen
Bug: https://curl.haxx.se/mail/lib-2016-06/0143.html
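Since the default flipped, an application that really wants Nagle's
algorithm back now has to turn the option off explicitly; a minimal sketch
on an already-initialized easy handle:

  #include <curl/curl.h>

  /* Sketch: TCP_NODELAY is now on by default; clear it to re-enable
     Nagle's algorithm on the connection. */
  static void disable_tcp_nodelay(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_TCP_NODELAY, 0L);
  }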
Previously, passing a timeout of zero to Curl_expire() was a magic code
for clearing all timeouts for the handle. That is now instead done with
the new Curl_expire_clear() function, so a 0 timeout is fine to set and
will trigger a timeout ASAP.
This will help remove short delays, particularly notable when doing
HTTP/2.
Mostly in order to support broken web sites that redirect to broken URLs
that are accepted by browsers.
Browsers are typically even more lenient than this; according to the
WHATWG URL spec they should allow an _infinite_ number. I tested 8000
slashes with Firefox and it just worked.
Added test case 1141, 1142 and 1143 to verify the new parser.
Closes#791
The FTP wildcard init is now properly done in Curl_pretransfer() and the
corresponding cleanup in Curl_close().
The previous placement of the init/cleanup code left the internal pointer
NULL when this feature was used with the multi_socket() API, since it was
done within the curl_multi_perform() function.
Reported-by: Jonathan Cardoso Machado
Fixes#800
Only protocols that actually have a protocol registered for ALPN and NPN
should try to get that negotiated in the TLS handshake. That is only
HTTPS (well, http/1.1 and http/2) right now. Previously ALPN and NPN
would wrongly be used in all handshakes if libcurl was built with it
enabled.
Reported-by: Jay Satiro
Fixes#789
curl_printf.h defines printf to curl_mprintf, etc. This can cause
problems with external headers which may use
__attribute__((format(printf, ...))) markers etc.
To avoid having them cause problems with system includes, we include
curl_printf.h after any system headers. That makes these the last three
headers to always be included, and we keep them in this order:
curl_printf.h
curl_memory.h
memdebug.h
None of them include system headers, they all do funny #defines.
Reported-by: David Benjamin
Fixes#743
WinSock destroys the recv() buffer if send() fails. As a result, the
server response may be lost if the server sent it while curl was still
sending the request. This behavior is noticeable with short HTTP server
replies when libcurl uses several send() calls for the request (usually
for POST requests).
To work around this problem, libcurl now calls recv() before every send()
and keeps the received data in an intermediate buffer for further
processing.
Fixes: #657
Closes: #668
As these two options provide identical functionality, the former for
SOCKS5 proxies and the latter for HTTP proxies, the two options have been
merged together.
As such CURLOPT_SOCKS5_GSSAPI_SERVICE is marked as deprecated as of
7.49.0.
Calculate the service name and proxy service names locally, rather than
in url.c which will allow for us to support overriding the service name
for other protocols such as FTP, IMAP, POP3 and SMTP.
Renamed the header and source files for this module as they are HTTP
specific and as such, they should use the naming convention as other
HTTP authentication source files do - this reverts commit 260ee6b7bf.
Note: We could also rename curl_ntlm_wb.[c|h], however, the Winbind
code needs separating from the HTTP protocol and migrating into the
vauth directory, thus adding support for Winbind to the SASL based
protocols such as IMAP, POP3 and SMTP.
libidn's tld_check_lz returns an error offset of the first character
that it failed to process. However, that offset is not a byte offset and
may not even be in the locale encoding, so we can't use it to show the
user the character that failed to process.
Bug: https://github.com/curl/curl/issues/731
Reported-by: Karlson2k
I got a crash with this stack:
curl/lib/url.c:2873 (Curl_removeHandleFromPipeline)
curl/lib/url.c:2919 (Curl_getoff_all_pipelines)
curl/lib/multi.c:561 (curl_multi_remove_handle)
curl/lib/url.c:415 (Curl_close)
curl/lib/easy.c:859 (curl_easy_cleanup)
Closes#704
Prevent a crash if 2 (or more) requests are made to the same host and
pipelining is enabled and the connection does not complete.
Bug: https://github.com/curl/curl/pull/690
Some TFTP server implementations ignore the "TFTP Option extension"
(RFC 1782-1784, 2347-2349), or implement it in a flawed way, causing
problems with libcurl. Another switch for curl_easy_setopt
"CURLOPT_TFTP_NO_OPTIONS" is introduced which prevents libcurl from
sending TFTP option requests to a server, avoiding many problems caused
by faulty implementations.
Bug: https://github.com/curl/curl/issues/481
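A minimal sketch; the TFTP URL is a placeholder:

  #include <curl/curl.h>

  /* Sketch: fetch from a TFTP server without sending any option requests,
     for servers with broken or missing option handling. */
  static CURLcode fetch_plain_tftp(void)
  {
    CURLcode res = CURLE_FAILED_INIT;
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "tftp://192.168.0.1/firmware.bin");
      curl_easy_setopt(curl, CURLOPT_TFTP_NO_OPTIONS, 1L);
      res = curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return res;
  }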
When an HTTP/2 upgrade request fails (no protocol switch), it would
previously be detected as still possible to pipeline on (which is
incorrect) and do that when PIPEWAIT was enabled, even if pipelining was
not explicitly enabled.
It should only pipeline if explicitly asked to.
Closes#584
Before this patch, if a URL does not start with the protocol
name/scheme, effective URLs would be prefixed with upper-case protocol
names/schemes. This behavior might not be expected by library users or
end users.
For example, if `CURLOPT_DEFAULT_PROTOCOL` is set to "https" and the
URL is "hostname/path", the effective URL would be
"HTTPS://hostname/path" instead of "https://hostname/path".
After this patch, effective URLs would be prefixed with a lower-case
protocol name/scheme.
Closes#597
Signed-off-by: Mohammad AlSaleh <CE.Mohammad.AlSaleh@gmail.com>
To make sure curl doesn't allow multiplexing before a connection is
upgraded to HTTP/2 (like when Upgrade: h2c fails), we must make sure the
connection uses HTTP/2 as well and not only check what's wanted.
Closes#584
Patch-by: c0ff
Try harder to prevent libcurl from opening up an additional socket when
CURLOPT_PIPEWAIT is set. Accomplished by letting ongoing TCP and TLS
handshakes complete first before the decision is made.
Closes#575
It would previously be skipped if an existing error was returned, which
would leave a previous value there to be used later,
CURLINFO_TOTAL_TIME for example.
It still avoids the final progress update if we reached DONE as the
result of a callback abort, to avoid yet another callback being called
after an abort-by-callback.
Reported-by: Lukas Ruzicka
Closes#538
They tend to never get updated anyway so they're frequently inaccurate
and we never go back to revisit them anyway. We document issues to work
on properly in KNOWN_BUGS and TODO instead.
... and assign it from the set.fread_func_set pointer in the
Curl_init_CONNECT function. This A) avoids that we have code that
assigns fields in the 'set' struct (which we always knew was bad) and
more importantly B) it makes it impossible to accidentally leave the
wrong value for when the handle is re-used etc.
Introducing a state-init functionality in multi.c, so that we can set a
specific function to get called when we enter a state. The
Curl_init_CONNECT is thus called when switching to the CONNECT state.
Bug: https://github.com/bagder/curl/issues/346
Closes #346
If the port number in the proxy string ended weirdly or the number is
too large, skip it. Mostly as a means to bail out early if a "bare" IPv6
numerical address is used without enclosing brackets.
Also mention the bracket requirement for IPv6 numerical addresses to the
man page for CURLOPT_PROXY.
Closes#415
Reported-by: Marcel Raad
- Add new option CURLOPT_DEFAULT_PROTOCOL to allow specifying a default
protocol for schemeless URLs.
- Add new tool option --proto-default to expose
CURLOPT_DEFAULT_PROTOCOL.
In the case of schemeless URLs libcurl will behave in this way:
When the option is used libcurl will use the supplied default.
When the option is not used, libcurl will follow its usual plan of
guessing from the hostname and falling back to 'http'.
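A minimal sketch on an already-initialized easy handle; the URL is a
placeholder:

  #include <curl/curl.h>

  /* Sketch: a schemeless URL like "example.com/path" is treated as
     https:// instead of being guessed from the host name. */
  static void default_to_https(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_DEFAULT_PROTOCOL, "https");
    curl_easy_setopt(curl, CURLOPT_URL, "example.com/path");
  }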
New tool option --ssl-no-revoke.
New value CURLSSLOPT_NO_REVOKE for CURLOPT_SSL_OPTIONS.
Currently this option applies only to WinSSL where we have automatic
certificate revocation checking by default. According to the
ssl-compared chart there are other backends that have automatic checking
(NSS, wolfSSL and DarwinSSL) so we could possibly accommodate them at
some later point.
Bug: https://github.com/bagder/curl/issues/264
Reported-by: zenden2k <zenden2k@gmail.com>
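A minimal sketch on an already-initialized easy handle:

  #include <curl/curl.h>

  /* Sketch: on WinSSL builds, skip the automatic certificate revocation
     checks for this handle. */
  static void disable_revocation_checks(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS, (long)CURLSSLOPT_NO_REVOKE);
  }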
With many easy handles using the same connection for multiplexing, it is
important we store and keep the transfer-oriented stuff in the
SessionHandle so that callbacks and callback data work fine even when
many easy handles share the same physical connection.