- no need to have them protocol specific
- no need to set pointers to them with the Curl_setup_transfer() call
- make Curl_setup_transfer() operate on a transfer pointer, not
connection
- switch some counters from long to the more proper curl_off_t type
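As a rough sketch of the resulting API shape (the parameter list here is
approximate, not quoted from the source):

    /* sketch only: the call now takes the transfer, not the connection */
    void Curl_setup_transfer(struct Curl_easy *data,  /* the transfer */
                             int sockindex,           /* socket index to read from */
                             curl_off_t size,         /* expected size, or -1 */
                             bool getheader,          /* expect headers? */
                             int writesockindex);     /* index to write to, or -1 */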
Closes #3627
- Split off connection shutdown procedure from Curl_disconnect into new
function conn_shutdown.
- Change the shutdown procedure to close the sockets before
disassociating the transfer.
Prior to this change the sockets were closed after disassociating the
transfer so SOCKETFUNCTION wasn't called since the transfer was already
disassociated. That likely came about from recent work started in
Jan 2019 (#3442) to separate transfers from connections.
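In outline, the new ordering looks like this (an illustrative sketch, not
the literal patch):

    /* illustrative sketch: close sockets while the transfer is still
       attached, so the multi can emit CURL_POLL_REMOVE via
       CURLMOPT_SOCKETFUNCTION, and only then disassociate the transfer */
    static void conn_shutdown(struct connectdata *conn)
    {
      if(conn->sock[SECONDARYSOCKET] != CURL_SOCKET_BAD)
        Curl_closesocket(conn, conn->sock[SECONDARYSOCKET]);
      if(conn->sock[FIRSTSOCKET] != CURL_SOCKET_BAD)
        Curl_closesocket(conn, conn->sock[FIRSTSOCKET]);
    }

    /* in Curl_disconnect():
         conn_shutdown(conn);    close the sockets first
         conn->data = NULL;      detach the transfer last */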
Bug: https://curl.haxx.se/mail/lib-2019-02/0101.html
Reported-by: Pavel Löbl
Closes https://github.com/curl/curl/issues/3597
Closes https://github.com/curl/curl/pull/3598
- Save the original conn->data before it's changed to the specified
data transfer for the connection check and then restore it afterwards.
This is a follow-up to 38d8e1b 2019-02-11.
History:
It was discovered a month ago that before checking whether to extract a
dead connection, that connection should be associated with a "live"
transfer for the check (ie the original conn->data is ignored and set to
the passed-in data). A fix was landed in 54b201b which did that and also
cleared conn->data after the check. The original conn->data was not
restored, so presumably it was thought that a valid conn->data was no
longer needed.
Several days later it was discovered that a valid conn->data was needed
after the check, and a follow-up fix was landed in bbae24c which partially
reverted the original fix and attempted to limit the scope of when
conn->data was changed to only when pruning dead connections. In that
case conn->data was not cleared and the original conn->data not
restored.
A month later it was discovered that the original fix was somewhat
correct; a "live" transfer is needed for the check in all cases
because original conn->data could be null which could cause a bad deref
at arbitrary points in the check. A fix was landed in 38d8e1b which
expanded the scope to all cases. conn->data was not cleared and the
original conn->data not restored.
A day later it was discovered that not restoring the original conn->data
may lead to busy loops in applications that use the event interface, and
given this observation it's a pretty safe assumption that there is some
code path that still needs the original conn->data. This commit is the
follow-up fix for that, it restores the original conn->data after the
connection check.
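The resulting pattern, in outline (a simplified sketch of what the
connection-check path now does):

    /* simplified sketch: save, set, check, restore */
    struct Curl_easy *olddata = conn->data;   /* may be NULL */
    conn->data = data;                        /* a known "live" transfer */
    if(conn->handler->connection_check)
      dead = !!(conn->handler->connection_check(conn, CONNCHECK_ISDEAD) &
                CONNRESULT_DEAD);
    conn->data = olddata;                     /* restore the original conn->data */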
Assisted-by: tholin@users.noreply.github.com
Reported-by: tholin@users.noreply.github.com
Fixes https://github.com/curl/curl/issues/3542
Closes #3559
The http2 code for connection checking needs a transfer to use. Make
sure a working one is set before handler->connection_check() is called.
Reported-by: jnbr on github
Fixes #3541
Closes #3547
urlapi: turn three local-only functions into statics
conncache: make conncache_find_first_connection static
multi: make detach_connnection static
connect: make getaddressinfo static
curl_ntlm_core: make hmac_md5 static
http2: make two functions static
http: make http_setup_conn static
connect: make tcpnodelay static
tests: make UNITTEST a thing to mark functions with, so they can be static for
normal builds and non-static for unit test builds
... and mark Curl_shuffle_addr accordingly.
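The UNITTEST marker boils down to a macro that is empty in unit-test
builds and expands to "static" otherwise, along these lines (a sketch of
the idea, not the exact header contents):

    /* sketch: keep functions file-local in normal builds, but reachable
       from the unit tests when built for testing */
    #ifdef UNITTESTS
    #define UNITTEST
    #else
    #define UNITTEST static
    #endif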
url: make up_free static
setopt: make vsetopt static
curl_endian: make write32_le static
rtsp: make rtsp_connisdead static
warnless: remove unused functions
memdebug: remove one unused function, make another static
Stick to "Schannel" everywhere. The configure option --with-winssl is
kept to allow existing builds to work but --with-schannel is added as an
alias.
Closes #3504
extract_if_dead() is called from two functions, and only one of
them should get conn->data updated; now neither call path clears it.
scan-build found a case where conn->data would be NULL dereferenced in
ConnectionExists() otherwise.
Closes #3473
Make sure that this function sets a proper "live" transfer for the
connection before calling the protocol-specific connection check
function, and then clears it again afterward, as a non-used connection has
no current transfer.
Reported-by: Jeroen Ooms
Reviewed-by: Marcel Raad
Reviewed-by: Daniel Gustafsson
Fixes #3463
Closes #3464
We use "conn" everywhere to be a pointer to the connection.
Introduces two functions that "attaches" and "detaches" the connection
to and from the transfer.
Going forward, we should favour using "data->conn" (since a transfer
always only has a single connection, or none at all) over "conn->data"
(since a connection can have none, one or many transfers associated with
it and updating conn->data to be correct is error prone and a frequent
reason for internal issues).
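A minimal sketch of what such helpers amount to (names and bookkeeping
here are illustrative only):

    /* illustrative only: pair a transfer with its single connection */
    static void attach_connection(struct Curl_easy *data,
                                  struct connectdata *conn)
    {
      data->conn = conn;   /* the transfer's one connection */
      conn->data = data;   /* legacy back-pointer, still widely used */
    }

    static void detach_connection(struct Curl_easy *data)
    {
      if(data->conn && data->conn->data == data)
        data->conn->data = NULL;
      data->conn = NULL;
    }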
Closes #3442
Do not assume/store an association between a given easy handle and the
connection if it can be avoided.
Long-term, the 'conn->data' pointer should probably be removed as it is a
little too error-prone. Still used very widely though.
Reported-by: masbug on github
Fixes #3391
Closes #3400
Added CURLOPT_HTTP09_ALLOWED and --http0.9 for this purpose.
For now, both the tool and library allow HTTP/0.9 by default.
docs/DEPRECATE.md lays out the plan for when to reverse that default: 6
months after the 7.64.0 release. The options are added now so that
applications/scripts can start using them right away.
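For example, an application that wants to keep accepting HTTP/0.9
responses after the default flips can set the new option explicitly (a
minimal usage sketch):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        curl_easy_setopt(curl, CURLOPT_HTTP09_ALLOWED, 1L); /* 0L to refuse it */
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }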
Fixes #2873
Closes #3383
The function does not return the same value as snprintf() normally does,
so readers may be misled into thinking the code works differently than
it actually does. A different function name makes this easier to detect.
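The behavioural difference in short: ISO C snprintf() reports the length
the output would have had, while the curl function reports what was
actually stored; an illustration (assuming the public curl_msnprintf()
declaration from curl/mprintf.h):

    #include <stdio.h>
    #include <curl/mprintf.h>

    void demo(void)
    {
      char buf[5];
      int n1 = snprintf(buf, sizeof(buf), "hello world");       /* 11: full length */
      int n2 = curl_msnprintf(buf, sizeof(buf), "hello world"); /* chars stored */
      (void)n1;
      (void)n2;
    }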
Reported-by: Tomas Hoger
Assisted-by: Daniel Gustafsson
Fixes #3296
Closes #3297
When using c-ares for asyn dns, the dns socket fd was silently closed
by c-ares without curl being aware. curl would then 'realize' the fd
had been removed at the next call of Curl_resolver_getsock, and only then
notify the CURLMOPT_SOCKETFUNCTION to remove the fd from its poll set with
CURL_POLL_REMOVE. At this point the fd is already closed.
By using the ares socket state callback (ARES_OPT_SOCK_STATE_CB), this
patch allows curl to be notified that the fd is no longer needed
for either reading or writing. At this point, by calling
Curl_multi_closed we are able to notify multi with CURL_POLL_REMOVE
before the fd is actually closed by ares.
In asyn-ares.c Curl_resolver_duphandle we can't use ares_dup anymore,
since it does not allow passing a different sock_state_cb_data.
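The wiring on the c-ares side looks roughly like this (a sketch of the
mechanism; the curl-internal Curl_multi_closed() call is only hinted at in
a comment):

    #include <string.h>
    #include <ares.h>

    /* called by c-ares whenever a resolver socket changes state */
    static void sock_state_cb(void *userp, ares_socket_t fd,
                              int readable, int writable)
    {
      if(!readable && !writable) {
        /* c-ares is done with fd and will close it; this is the point
           where curl calls Curl_multi_closed() so CURL_POLL_REMOVE
           reaches the CURLMOPT_SOCKETFUNCTION before the fd is gone */
        (void)userp;
        (void)fd;
      }
    }

    static int setup_channel(ares_channel *channel, void *userp)
    {
      struct ares_options opts;
      int optmask = ARES_OPT_SOCK_STATE_CB;
      memset(&opts, 0, sizeof(opts));
      opts.sock_state_cb = sock_state_cb;
      opts.sock_state_cb_data = userp;
      return ares_init_options(channel, &opts, optmask);
    }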
Closes #3238
The function identifying a leading "scheme" part of the URL considered a
few letters ending with a colon to be a scheme, making something like
"short:80" to become an unknown scheme instead of a short host name and
a port number.
Extended test 1560 to verify.
Also fixed test203 to use file_pwd to make it get the correct path on
windows. Removed test 2070 since it was a duplicate of 203.
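With the URL API the fixed behaviour can be illustrated roughly like this
(a sketch; it assumes scheme guessing is enabled via CURLU_GUESS_SCHEME):

    #include <curl/curl.h>

    /* "short:80" should give host "short" and port "80",
       not an unknown scheme called "short" */
    static void parse_short_url(void)
    {
      CURLU *u = curl_url();
      if(!u)
        return;
      if(!curl_url_set(u, CURLUPART_URL, "short:80", CURLU_GUESS_SCHEME)) {
        char *host = NULL;
        char *port = NULL;
        curl_url_get(u, CURLUPART_HOST, &host, 0);
        curl_url_get(u, CURLUPART_PORT, &port, 0);
        curl_free(host);
        curl_free(port);
      }
      curl_url_cleanup(u);
    }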
Assisted-by: Marcel Raad
Reported-by: Hagai Auro
Fixes #3220
Fixes #3233
Closes #3223
Closes #3235
- for "--netrc", don't ignore the login/password specified with "--user",
only ignore the login/password in the URL.
This restores the netrc behaviour of curl 7.61.1 and earlier.
- fix the documentation of CURL_NETRC_REQUIRED
- improve the detection of login/password changes when reading .netrc
- don't read .netrc if both login and password are already set
Fixes #3213
Closes #3224
Now FILE transfers send headers to the header callback like HTTP and
other protocols. Also made curl_easy_getinfo(...CURLINFO_PROTOCOL...)
work for FILE in the callbacks.
Makes "curl -i file://.." and "curl -I file://.." work like before
again. Applied the bold header logic to them too.
Regression from c1c2762 (7.61.0)
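A usage sketch of what now works: the header callback also fires for
file:// transfers, and CURLINFO_PROTOCOL can be queried from within it
(the HEADERDATA plumbing below is just one way to do it):

    #include <stdio.h>
    #include <curl/curl.h>

    /* header callback: with this change it also gets FILE "headers" */
    static size_t on_header(char *buf, size_t size, size_t nitems, void *userp)
    {
      CURL *easy = (CURL *)userp;
      long proto = 0;
      curl_easy_getinfo(easy, CURLINFO_PROTOCOL, &proto); /* CURLPROTO_FILE here */
      fwrite(buf, size, nitems, stdout);
      return size * nitems;
    }

    static void fetch(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_URL, "file:///tmp/test.txt");
      curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, on_header);
      curl_easy_setopt(curl, CURLOPT_HEADERDATA, curl);
      curl_easy_perform(curl);
    }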
Reported-by: Shaun Jackman
Fixes #3083
Closes #3101
Add functionality so that protocols can do custom keepalive on their
connections, when an external API function is called.
Add docs for the new options in 7.62.0
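Assuming this refers to the connection upkeep API that shipped in 7.62.0
(curl_easy_upkeep() plus CURLOPT_UPKEEP_INTERVAL_MS), usage looks roughly
like:

    #include <curl/curl.h>

    static void keep_connection_fresh(CURL *curl)
    {
      /* let the protocol (e.g. HTTP/2 PING) refresh the cached connection
         at most every 30 seconds */
      curl_easy_setopt(curl, CURLOPT_UPKEEP_INTERVAL_MS, 30000L);
      /* the application calls this from its own timer/idle loop */
      curl_easy_upkeep(curl);
    }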
Closes #1641
This fixes a memory leak when CURLOPT_LOGIN_OPTIONS is used, together with
connection reuse.
I found this with oss-fuzz on GDAL and curl master:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=9582
I couldn't reproduce with the oss-fuzz original test case, but looking
at curl source code pointed to this well-reproducible leak.
Closes #2790
Follow-up to 1b76c38904. The VTLS backends that close down the TLS
layer for a connection still need a Curl_easy handle for the session_id
cache etc.
Fixes #2764
Closes #2771
By making sure to use the *current* easy handle with extracted
connections from the cache, and make sure to NULLify the ->data pointer
when the connection is put into the cache to make this mistake easier to
detect in the future.
Reported-by: Will Dietz
Fixes #2669
Closes #2672
- Get rid of variable that was generating false positive warning
(uninitialized)
- Fix issues in tests
- Reduce scope of several variables all over
etc
Closes #2631
... instead of the previous separate struct fields, to make it easier to
extend and change individual backends without having to modify them all.
closes #2547
- Move verify_certificate functionality in schannel.c into a new
file called schannel_verify.c. Additionally, some structure definitions
from schannel.c have been moved to schannel.h to allow them to be
used in schannel_verify.c.
- Make verify_certificate functionality for Schannel available on
all versions of Windows instead of just Windows CE. verify_certificate
will be invoked on Windows CE or when the user specifies
CURLOPT_CAINFO and CURLOPT_SSL_VERIFYPEER.
- In verify_certificate, create a custom certificate chain engine that
exclusively trusts the certificate store backed by the CURLOPT_CAINFO
file.
- doc updates of --cacert/CAINFO support for schannel
- Use CERT_NAME_SEARCH_ALL_NAMES_FLAG when invoking CertGetNameString
when available. This implements a TODO in schannel.c to improve
handling of multiple SANs in a certificate. In particular, all SANs
will now be searched instead of just the first name.
- Update tool_operate.c to not search for the curl-ca-bundle.crt file
when using Schannel to maintain backward compatibility. Previously,
any curl-ca-bundle.crt file found in that search would have been
ignored by Schannel. But, with CAINFO support, the file found by
that search would have been used as the certificate store and
could cause issues for any users that have curl-ca-bundle.crt in
the search path.
- Update url.c to not set the build time CURL_CA_BUNDLE if the selected
SSL backend is Schannel. We allow setting CA location for schannel
only when explicitly specified by the user via CURLOPT_CAINFO /
--cacert.
- Add new test cases 3000 and 3001. These test cases check that the first
and last SAN, respectively, matches the connection hostname. New test
certificates have been added for these cases. For 3000, the certificate
prefix is Server-localhost-firstSAN and for 3001, the certificate
prefix is Server-localhost-secondSAN.
- Remove TODO 15.2 (Add support for custom server certificate
validation), this commit addresses it.
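For applications the net effect is that the usual CA-bundle options now
work with the Schannel backend too; a minimal usage sketch (the file path
is just an example):

    #include <curl/curl.h>

    static void setup_tls(CURL *curl)
    {
      /* verify the peer against a private CA bundle under Schannel */
      curl_easy_setopt(curl, CURLOPT_CAINFO, "C:\\certs\\ca-bundle.crt");
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);
    }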
Closes https://github.com/curl/curl/pull/1325
curl 7.57.0 and up interpret this according to Appendix E.3.2 of RFC
8089 but then returns an error saying this is unimplemented. This is
actually a regression in behavior on both Windows and Unix.
Before curl 7.57.0 this URL was treated as a path of "//foo/bar" and
then passed to the relevant OS API. This means that the behavior of this
case is actually OS dependent.
The Unix path resolution rules say that the OS must handle swallowing
the extra "/" and so this path is the same as "/foo/bar"
The Windows path resolution rules say that this is a UNC path and
automatically handles the SMB access for the program. So curl on Windows
was already doing Appendix E.3.2 without any special code in curl.
Regression
Closes #2438
- Add new option CURLOPT_HAPPY_EYEBALLS_TIMEOUT to set libcurl's happy
eyeball timeout value.
- Add new optval macro CURL_HET_DEFAULT to represent the default happy
eyeballs timeout value (currently 200 ms).
- Add new tool option --happy-eyeballs-timeout-ms to expose
CURLOPT_HAPPY_EYEBALLS_TIMEOUT. The -ms suffix is used because the
other -timeout options in the tool expect seconds not milliseconds.
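A usage sketch (treating the option as described above; note that in
released libcurl headers the setopt name carries an _MS suffix,
CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS, and takes a value in milliseconds):

    #include <curl/curl.h>

    static void tune_happy_eyeballs(CURL *curl)
    {
      /* wait only 100 ms before also trying the other address family */
      curl_easy_setopt(curl, CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS, 100L);
    }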
Closes https://github.com/curl/curl/pull/2260
Move curl_mime_initpart() and curl_mime_cleanpart() calls to lower-level
functions dealing with UserDefined structure contents.
This avoids memory leaks on curl-generated mime part headers.
New test 2073 checks this using the cli tool --next option: it
triggers a valgrind error if the bug is present.
Bug: https://curl.haxx.se/mail/lib-2017-12/0060.html
Reported-by: Martin Galvan
These are OS/2-specific things added to the code in the year 2000. They
were always ugly. If there's any user left, they still don't need it
done this way.
Closes #2166
Connections that are used for HTTP/1.1 Pipelining or HTTP/2 multiplexing
only get additional transfers added to them if the existing connection
is held by the same multi or easy handle. libcurl does not support doing
HTTP/2 streams in different threads using a shared connection.
Closes #2152
If the lock is released before the dealings with the bundle are over, it
may have been changed by another thread in the meantime.
Fixes #2132
Fixes #2151
Closes #2139
The SFTP back-end supports asynchronous reading only, limited
to 32-bit file length. Writing is synchronous with no other
limitations.
This also brings keyboard-interactive authentication.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@gnutls.org>
libssh is an alternative library to libssh2.
https://www.libssh.org/
That patch set also introduces support for ECDSA and
ed25519 keys, as well as gssapi authentication.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@redhat.com>
Originally, my idea was to allocate the two structures (or more
precisely, the connectdata structure and the four SSL backend-specific
structures required for ssl[0..1] and proxy_ssl[0..1]) in one go, so
that they all could be free()d together.
However, getting the alignment right is tricky. Too tricky.
So let's just bite the bullet and allocate the SSL backend-specific
data separately.
As a consequence, we now have to be very careful to release the memory
allocated for the SSL backend-specific data whenever we release any
connectdata.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Closes #2119
- Align the array of ssl_backend_data on a max 32 byte boundary.
8 is likely to be ok but I went with 32 for posterity should one of
the ssl_backend_data structs change to contain a larger sized variable
in the future.
Prior to this change (since dev 70f1db3, release 7.56) the connectdata
structure was undersized by 4 bytes in 32-bit builds with ssl enabled
because long long * was mistakenly used for alignment instead of
long long, with the intention being an 8 byte boundary. Also long long
may not be an available type.
The undersized connectdata could lead to oob read/write past the end in
what was expected to be the last 4 bytes of the connection's secondary
socket https proxy ssl_backend_data struct (the secondary socket in a
connection is used by ftp, others?).
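In the abstract, the sizing fix is about rounding the base struct size up
to the alignment boundary before appending the backend array; a
standalone sketch of that arithmetic (not curl's code):

    #include <stddef.h>
    #include <stdlib.h>

    #define SSL_ALIGN 32  /* chosen boundary, as described above */

    /* allocate a base struct followed by 4 backend blobs (ssl[0..1] and
       proxy_ssl[0..1]), with the blob area starting on a 32-byte boundary */
    static void *alloc_with_backends(size_t structsize, size_t backendsize)
    {
      size_t padded = (structsize + (SSL_ALIGN - 1)) & ~(size_t)(SSL_ALIGN - 1);
      return calloc(1, padded + 4 * backendsize);
    }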
Closes https://github.com/curl/curl/issues/2093
CVE-2017-8818
Bug: https://curl.haxx.se/docs/adv_2017-af0a.html
* LOTS of comment updates
* explicit error for SMB shares (e.g. "file:////share/path/file")
* more strict handling of authority (i.e. "//localhost/")
* now accepts dodgy old "C:|" drive letters
* more precise handling of drive letters in and out of Windows
(especially recognising both "file:c:/" and "file:/c:/")
Closes #2110
Host names like "127.0.0.1 moo" would otherwise be accepted by some
getaddrinfo() implementations.
Updated test 1034 and 1035 accordingly.
Fixes #2073
Closes #2092
This is implemented as an output streaming stack of unencoders, the last
calling the client write procedure.
New test 230 checks this feature.
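Conceptually each unencoder transforms its input and hands the result to
the next writer in the stack, with the final stage being the client
write; an abstract sketch of that shape (not curl's internal types):

    #include <stddef.h>

    /* abstract sketch of a chained "unencoder" writer stack */
    struct writer {
      size_t (*do_write)(struct writer *self, const char *buf, size_t len);
      struct writer *downstream;  /* next stage; the last one is the
                                     client-write stage */
    };

    static size_t passthrough_write(struct writer *self, const char *buf,
                                    size_t len)
    {
      /* a real stage would decode (chunked, gzip, brotli, ...) first */
      return self->downstream->do_write(self->downstream, buf, len);
    }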
Bug: https://github.com/curl/curl/pull/2002
Reported-By: Daniel Bankhead
Since CURLSSH_AUTH_ANY (aka CURLSSH_AUTH_DEFAULT) is ~0 an arg value
check on this option is incorrect; we have to accept any value.
Prior to this change since f121575 (7.56.1+) CURLOPT_SSH_AUTH_TYPES
erroneously rejected CURLSSH_AUTH_ANY with CURLE_BAD_FUNCTION_ARGUMENT.
Bug: https://github.com/curl/curl/commit/f121575#commitcomment-25347120
... which is valid according to documentation. Regression since
f121575c0b.
Verified now in test 501.
Reported-by: cbartl on github
Fixes #2038
Closes #2039
.. also add same arg value check to CURLOPT_POSTFIELDSIZE_LARGE.
Prior to this change since f121575 (7.56.1+) CURLOPT_POSTFIELDSIZE
erroneously rejected -1 value with CURLE_BAD_FUNCTION_ARGUMENT.
Bug: https://curl.haxx.se/mail/lib-2017-11/0000.html
Reported-by: Andrew Lambert
returning 'time_t' is problematic when that type is unsigned and we
return values less than zero to signal "already expired", used in
several places in the code.
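The hazard in miniature: with an unsigned time_t, a negative "already
expired" return silently becomes a huge positive value (a minimal
illustration, not curl code):

    #include <time.h>

    static time_t old_style_timeleft(void) { return (time_t)-1; /* "expired" */ }

    static int is_expired(void)
    {
      /* if time_t is unsigned this comparison is always false and the
         expiry is never detected, hence the switch to a signed timediff_t */
      return old_style_timeleft() < 0;
    }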
Closes #2021
... since the 'tv' stood for timeval and this function does not return a
timeval struct anymore.
Also, cleaned up the Curl_timediff*() functions to avoid typecasts and
clean up the descriptive comments.
Closes #2011
... to cater for systems with unsigned time_t variables.
- Renamed the functions to curlx_timediff and Curl_timediff_us.
- Added overflow protection for both of them in either direction for
both 32 bit and 64 bit time_ts
- Reprefixed the curlx_time functions to use Curl_*
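The overflow guard boils down to clamping before converting to
milliseconds; in outline (a standalone sketch of the approach, not the
exact helpers):

    #include <stdint.h>

    /* clamp instead of overflowing when turning a seconds delta into ms */
    static int64_t timediff_ms(int64_t newer_sec, int newer_usec,
                               int64_t older_sec, int older_usec)
    {
      int64_t diff = newer_sec - older_sec;
      if(diff >= (INT64_MAX / 1000))
        return INT64_MAX;
      if(diff <= (INT64_MIN / 1000))
        return INT64_MIN;
      return diff * 1000 + (newer_usec - older_usec) / 1000;
    }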
Reported-by: Peter Piekarski
Fixes #2004
Closes #2005
... that are multiplied by 1000 when stored.
For 32 bit long systems, the max value accepted (2147483 seconds) is >
596 hours which is unlikely to ever be set by a legitimate application -
and previously it didn't work either, it just caused undefined behavior.
Also updated the man pages for these timeout options to mention the
return code.
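On the setopt side the guard is a plain range check before the
multiplication; schematically (a sketch of the idea):

    #include <limits.h>

    /* reject second-based timeouts that would overflow a long once they
       are multiplied by 1000 for internal millisecond storage */
    static int timeout_secs_out_of_range(long secs)
    {
      /* libcurl returns CURLE_BAD_FUNCTION_ARGUMENT in that case */
      return (secs < 0) || (secs > (LONG_MAX / 1000));
    }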
Closes #1938
Now changing the VERIFYHOST, VERIFYPEER and VERIFYSTATUS options during
an active connection updates the current connection's (i.e. 'connectdata'
structure) appropriate ssl_config (and ssl_proxy_config) structure
variables, making these options effective for the ongoing connection.
This functionality was available before and was broken by the
following change:
"proxy: Support HTTPS proxy and SOCKS+HTTP(s)"
CommitId: cb4e2be7c6.
Bug: https://github.com/curl/curl/issues/1941
Closes https://github.com/curl/curl/pull/1951