There is no benefit to holding the data sharelock when freeing the
addrinfo in case it fails, so release it as soon as we can
rather than holding on to it. This also aligns the code with other
consumers of sharelocks.
Closes#3516
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
The http2 code for connection checking needs a transfer to use. Make
sure a working one is set before handler->connection_check() is called.
Reported-by: jnbr on github
Fixes#3541
Closes#3547
urlapi: turn three local-only functions into statics
conncache: make conncache_find_first_connection static
multi: make detach_connnection static
connect: make getaddressinfo static
curl_ntlm_core: make hmac_md5 static
http2: make two functions static
http: make http_setup_conn static
connect: make tcpnodelay static
tests: make UNITTEST a thing to mark functions with, so they can be static for
normal builds and non-static for unit test builds
... and mark Curl_shuffle_addr accordingly.
url: make up_free static
setopt: make vsetopt static
curl_endian: make write32_le static
rtsp: make rtsp_connisdead static
warnless: remove unused functions
memdebug: remove one unused function, made another static
If the incoming len is 5, but the buffer does not have a termination
after 5 bytes, the strtol() call may keep reading through the line
buffer until it exceeds its boundary. Fix by ensuring that we use
a bounded read with a temporary buffer on the stack.
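A minimal sketch of that pattern (hypothetical helper name, not the exact
curl code): copy no more than the known length into a stack buffer,
NUL-terminate it, and only then hand it to strtol().

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical helper showing the bounded-read idea. */
    static long bounded_strtol(const char *line, size_t len)
    {
      char temp[6];               /* up to 5 digits plus NUL terminator */
      if(len >= sizeof(temp))
        len = sizeof(temp) - 1;   /* never copy more than the buffer holds */
      memcpy(temp, line, len);    /* bounded copy from the line buffer */
      temp[len] = '\0';           /* guarantee termination for strtol() */
      return strtol(temp, NULL, 10);
    }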
Bug: https://curl.haxx.se/docs/CVE-2019-3823.html
Reported-by: Brian Carpenter (Geeknik Labs)
CVE-2019-3823
Attempt to add support for Secure Channel bindings when Negotiate
authentication is used. The problem to solve is that by default IIS
accepts channel bindings but curl doesn't supply them, which resulted
in 401 responses. The scope affects only the Schannel (winssl)-SSPI
combination.
Fixes https://github.com/curl/curl/issues/3503
Closes https://github.com/curl/curl/pull/3509
Stick to "Schannel" everywhere. The configure option --with-winssl is
kept to allow existing builds to work but --with-schannel is added as an
alias.
Closes#3504
mbedTLS doesn't have its own sigpipe management. If a write/read occurs
when the remote closes the socket, the signal is raised and kills the
application. Use the curl mechanisms to fix this behavior.
Signed-off-by: Jeremie Rapin <j.rapin@overkiz.com>
Closes#3502
Compiling with msvc /analyze and a recent Windows SDK warns against
using GetTickCount (it suggests using GetTickCount64 instead).
Since GetTickCount is only used when GetTickCount64 isn't
available, I am disabling that warning.
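A sketch of one way to do that; the warning number 28159 is an assumption
here (the code-analysis "consider using GetTickCount64" diagnostic), not
taken from the change itself.

    #include <windows.h>

    static DWORD legacy_ticks(void)
    {
    #if defined(_MSC_VER)
    #pragma warning(push)
    #pragma warning(disable:28159)  /* assumed: "consider GetTickCount64" */
    #endif
      return GetTickCount();        /* only used when GetTickCount64 is absent */
    #if defined(_MSC_VER)
    #pragma warning(pop)
    #endif
    }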
Fixes https://github.com/curl/curl/issues/3437
Closes https://github.com/curl/curl/pull/3440
CURLOPT_SSH_KNOWNHOSTS and CURLOPT_SSH_KEYFUNCTION are supported for
libssh as well. So accepting these options only when compiling with
libssh2 is wrong here.
Fixes#3493
Closes#3494
By default, libssh creates a new socket, instead of using the socket
created by curl for SSH connections.
Pass the socket created by curl to libssh using ssh_options_set() with
SSH_OPTIONS_FD directly after ssh_new(), so that libssh uses our socket
instead of creating a new one.
This approach is very similar to what is done in the libssh2 code, where
the socket created by curl is passed to libssh2 when
libssh2_session_startup() is called.
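A rough sketch of the libssh call sequence; the helper name is made up and
'sock' is assumed to be the descriptor curl already created.

    #include <libssh/libssh.h>

    /* Hypothetical helper: 'sock' is the socket curl already connected. */
    static ssh_session session_on_curl_socket(socket_t sock)
    {
      ssh_session sess = ssh_new();
      if(sess) {
        ssh_options_set(sess, SSH_OPTIONS_FD, &sock);
        /* ... set host, user etc. and continue with ssh_connect() ... */
      }
      return sess;
    }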
Fixes#3491
Closes#3495
There is no real gain in performing memcmp() comparisons on single
characters, so change these to array subscript inspections, which
saves a function call and makes the code clearer.
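For illustration, the kind of rewrite this refers to (function and pointer
names are hypothetical):

    #include <string.h>

    /* Hypothetical example of the pattern being replaced. */
    static int starts_with_slash(const char *p)
    {
      /* before: a function call just to compare one byte */
      /*   return memcmp(p, "/", 1) == 0; */

      /* after: a plain array subscript, no call needed */
      return p[0] == '/';
    }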
Closes#3486
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Reviewed-by: Jay Satiro <raysatiro@yahoo.com>
Windows extended protection (aka SSL channel binding) is required
to log in to an NTLM IIS endpoint; otherwise the server returns 401
responses.
Fixes#3280
Closes#3321
When an ssh session startup fails, it is useful to know why it has
failed. This commit changes the message from:
"Failure establishing ssh session"
to something like this, for example:
"Failure establishing ssh session: -5, Unable to exchange encryption keys"
Closes#3481
... to not pass in a const as the second argument, as that's not how it
is supposed to be used and it might cause compiler warnings.
Reported-by: Pavel Pavlov
Fixes#3477
Closes#3478
extract_if_dead() is called from two functions, and only one of
them should get conn->data updated; now neither call path clears it.
scan-build otherwise found a case where a NULL conn->data would be
dereferenced in ConnectionExists().
Closes#3473
Make sure that this function sets a proper "live" transfer for the
connection before calling the protocol-specific connection check
function, and then clears it again afterwards, as an unused connection
has no current transfer.
Reported-by: Jeroen Ooms
Reviewed-by: Marcel Raad
Reviewed-by: Daniel Gustafsson
Fixes#3463
Closes#3464
We use "conn" everywhere to be a pointer to the connection.
Introduces two functions that "attaches" and "detaches" the connection
to and from the transfer.
Going forward, we should favour using "data->conn" (since a transfer
always only has a single connection or none at all) to "conn->data"
(since a connection can have none, one or many transfers associated with
it and updating conn->data to be correct is error prone and a frequent
reason for internal issues).
Closes#3442
Fixes#3436
Closes#3448
Problem 1
After LOTS of scratching my head, I eventually realized that even when doing
10 uploads in parallel, the socket callback to the application, which tells it
what to wait for on the socket, sometimes looked like it reflected the status
of just the single transfer that had changed state.
Digging into the code revealed that this was indeed the truth. When multiple
transfers were using the same connection, the application did not correctly
get the *combined* flags for all transfers, which could make it switch to READ
(only) when in fact most transfers wanted to be told when the socket was
WRITEABLE.
Problem 1b
A separate but related regression had also been introduced by me when I
cleaned up the connection/transfer association a while ago: the logic could no
longer find the connection to check whether it was marked as used by more
transfers, so it would prematurely remove the socket from the socket hash
table even when other transfers were still using it!
Fix 1
Make sure that each socket stored in the socket hash has a "combined" action
field telling what to ask the application to wait for, potentially the ORed
actions of multiple parallel transfers. Remove that socket hash entry only
when there are no transfers left using it (see the sketch after Fix 2 below).
Problem 2
The socket hash entry stored an association to a single transfer using that
socket - and when curl_multi_socket_action() was called to tell libcurl about
activities on that specific socket only that transfer was "handled".
This was WRONG, as a single socket/connection can be used by numerous parallel
transfers and not necessarily a single one.
Fix 2
We now store a list of handles in the socket hashtable entry, and when libcurl
is told there's traffic for a particular socket it iterates over all
known transfers using that single socket.
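A rough illustration of the resulting data shape for both fixes (invented
names, not the actual multi.c structs): each socket hash entry carries the
ORed action of every transfer on that socket plus the list of those
transfers, and the entry is only removed once that list is empty.

    #include <curl/curl.h>

    struct entry_transfers;            /* list of easy handles, opaque here */

    struct sockhash_entry {
      curl_socket_t sockfd;            /* the socket this entry describes */
      int combined_action;             /* CURL_POLL_IN|CURL_POLL_OUT ORed over
                                          every transfer using the socket */
      struct entry_transfers *users;   /* all transfers sharing the socket */
    };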
Added Curl_resolver_kill() for all three resolver modes, which only
blocks when necessary, along with test 1592 to confirm
curl_multi_remove_handle() doesn't block unless it must.
Closes#3428
Fixes#3371
When building with Unicode on MSVC, the compiler warns about freeing a
pointer to const in Curl_unicodefree. Fix this by declaring it as
non-const and casting the argument to Curl_convert_UTF8_to_tchar to
non-const too, like we do in all other places.
Closes https://github.com/curl/curl/pull/3435
The previous fix for parsing IPv6 URLs with a zone index was a paddle
short for URLs without an explicit port. This patch fixes that case
and adds a unit test case.
This bug was highlighted by issue #3408, and while it's not the full
fix for the problem there it is an isolated bug that should be fixed
regardless.
Closes#3411
Reported-by: GitYuanQu on github
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
This adds support for wildcard hosts in CURLOPT_RESOLVE. These are
tried last, so any non-wildcard entry is resolved first. If specified,
any host not matched by another CURLOPT_RESOLVE config will use this
as fallback.
Example: send a.com to 10.0.0.1 and everything else to 10.0.0.2:
curl --resolve *:443:10.0.0.2 --resolve a.com:443:10.0.0.1 \
https://a.com https://b.com
This is probably quite similar to using:
--connect-to a.com:443:10.0.0.1:443 --connect-to :443:10.0.0.2:443
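The same mapping done through the library; the function name is made up
and an already created easy handle is assumed.

    #include <curl/curl.h>

    static void set_resolve_overrides(CURL *curl)
    {
      struct curl_slist *resolve = NULL;
      resolve = curl_slist_append(resolve, "*:443:10.0.0.2");      /* wildcard fallback */
      resolve = curl_slist_append(resolve, "a.com:443:10.0.0.1");  /* exact entry wins */
      curl_easy_setopt(curl, CURLOPT_RESOLVE, resolve);
      /* keep 'resolve' alive for the transfers, then curl_slist_free_all() it */
    }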
Closes#3406
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
- Use QueryPerformanceCounter on Windows Vista+
There is confusing info floating around that QueryPerformanceCounter
can leap etc, which might have been true a long time ago, but is no
longer the case nowadays (perhaps starting from WinXP?). Also, boost and
std::chrono::steady_clock use QueryPerformanceCounter in a similar way.
Prior to this change GetTickCount or GetTickCount64 was used, which has
lower resolution. That is still the case for <= XP.
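For reference, a minimal sketch of a monotonic millisecond clock built on
QueryPerformanceCounter (not the exact curl code):

    #include <windows.h>

    static ULONGLONG now_ms(void)
    {
      LARGE_INTEGER freq, count;
      QueryPerformanceFrequency(&freq);  /* ticks per second, fixed at boot */
      QueryPerformanceCounter(&count);   /* current tick count */
      return (ULONGLONG)(count.QuadPart * 1000 / freq.QuadPart);
    }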
Fixes https://github.com/curl/curl/issues/3309
Closes https://github.com/curl/curl/pull/3318
Do not assume/store an association between a given easy handle and the
connection if it can be avoided.
Long-term, the 'conn->data' pointer should probably be removed as it is a
little too error-prone. Still used very widely though.
Reported-by: masbug on github
Fixes#3391
Closes#3400
Added CURLOPT_HTTP09_ALLOWED and --http0.9 for this purpose.
For now, both the tool and library allow HTTP/0.9 by default.
docs/DEPRECATE.md lays out the plan for when to reverse that default: 6
months after the 7.64.0 release. The options are added already now so
that applications/scripts can start using them ahead of that change.
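For example, an application can state its preference explicitly already
now; the helper name is made up and an existing easy handle is assumed.

    #include <curl/curl.h>

    static void allow_http09(CURL *curl, int allowed)
    {
      /* accept (1L) or refuse (0L) HTTP/0.9 response lines */
      curl_easy_setopt(curl, CURLOPT_HTTP09_ALLOWED, allowed ? 1L : 0L);
    }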
Fixes#2873
Closes#3383
This adds a cleanup callback for cyassl. Resolves a possible memory leak
when using the ECC fixed point cache.
Closes#3395
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Make sure to perform the checks we have in place to enforce a sane
domain in the cookie request. The check for non-PSL enabled builds is
quite basic but it's better than nothing.
Closes#2964
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Follow-up to 09e401e01b. If a connection gets reused, the data member
will be copied, but not the proto member. As a result, in smb_do(),
the path was set from the original proto.share data.
Closes#3388
The timeout set with CURLOPT_TIMEOUT is no longer used when
disconnecting from one of the pingpong protocols (FTP, IMAP, SMTP,
POP3).
Reported-by: jasal82 on github
Fixes#3264
Closes#3374
This adds the CURLOPT_TRAILERDATA and CURLOPT_TRAILERFUNCTION
options that allow a callback-based approach to sending trailing headers
with chunked transfers.
The test server (sws) was updated to correctly detect the end of the
transfer when trailing headers are present.
Test 1591 checks that trailing headers can be sent using libcurl.
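A sketch of how an application could use the new options; the callback
name, the trailer header and the setup helper are made up, and an easy
handle doing a chunked upload is assumed.

    #include <curl/curl.h>

    /* Hypothetical callback: append one trailing header per request. */
    static int trailer_cb(struct curl_slist **headers, void *userdata)
    {
      (void)userdata;
      *headers = curl_slist_append(*headers, "My-Trailer: example-value");
      return CURL_TRAILERFUNC_OK;   /* CURL_TRAILERFUNC_ABORT stops the request */
    }

    static void setup_trailers(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_TRAILERFUNCTION, trailer_cb);
      curl_easy_setopt(curl, CURLOPT_TRAILERDATA, NULL);
    }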
Closes#3350
After the migration to the URL API, all octets in the selector after the
first `?' were interpreted as a query and accidentally discarded, not
passed to the server.
Add a gopherpath that always concatenates the possible path and query URL
pieces.
Fixes#3369
Closes#3370
If just a `?' is passed to indicate the query, always store a zero-length
query instead of having a NULL query.
This permits distinguishing URLs with a trailing `?'.
Fixes#3369
Closes#3370
Only allow secure origins to be able to write cookies with the
'secure' flag set. This reduces the risk of non-secure origins
influencing the state of secure origins. This implements IETF
Internet-Draft draft-ietf-httpbis-cookie-alone-01 which updates
RFC6265.
Closes#2956
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
A URL with a single colon but no port number should use the default
port and discard the colon. Fix that, add a test case and also do a
little bit of comment wordsmithing.
Closes#3365
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
... when not actually following the redirect. Otherwise we return an
error for this and an application can't extract the value.
Test 1518 added to verify.
Reported-by: Pavel Pavlov
Fixes#3340
Closes#3364
The time_t type is unsigned on some systems and these variables are used
to hold return values from functions that return timediff_t
already. timediff_t is always a signed type.
Closes#3363
This adds a new unittest intended to cover the internal functions in
the urlapi code, starting with parse_port(). In order to avoid name
collisions in debug builds, parse_port() is renamed Curl_parse_port()
since it will be exported.
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Reviewed-by: Marcel Raad <Marcel.Raad@teamviewer.com>
An IPv6 URL which contains a zone index includes a '%25<zone id>'
string before the ending ']' bracket. The parsing logic wasn't set
up to cope with the zone index however, resulting in a malformed url
error being returned. Fix by breaking the parsing into two stages
to correctly handle the zone index.
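For example, a URL of this shape can now be parsed with the URL API; the
address, zone name and function name are illustrative.

    #include <curl/curl.h>

    static void parse_zone_url(void)
    {
      CURLU *h = curl_url();
      char *host = NULL;
      curl_url_set(h, CURLUPART_URL,
                   "http://[fe80::20c:29ff:fe9c:409b%25eth0]/index.html", 0);
      curl_url_get(h, CURLUPART_HOST, &host, 0);  /* host keeps the zone index */
      curl_free(host);
      curl_url_cleanup(h);
    }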
Closes#3355
Closes#3319
Reported-by: tonystz on Github
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Reviewed-by: Marcel Raad <Marcel.Raad@teamviewer.com>
- Include query in the path passed to generate HTTP auth.
Recent changes to use the URL API internally (46e1640, 7.62.0)
inadvertently broke authentication URIs by omitting the query.
Fixes https://github.com/curl/curl/issues/3353
Closes#3356
The http status code 204 (No Content) should not change the "condition
unmet" flag. Only the http status code 304 (Not Modified) should do
this.
Closes#359
- Match URL scheme with LDAP and LDAPS
- Retrieve attributes, scope and filter from URL query instead
Regression brought in by 46e164069d (7.62.0)
Closes#3362
All resources defined in lib/libcurl.rc and curl.rc are language
neutral.
winbuild/MakefileBuild.vc ALWAYS defines the macro DEBUGBUILD, so the
ifdefs on line 33 of lib/libcurl.rc and src/curl.rc are wrong.
Replace the hard-coded constants in both *.rc files with #define'd
values.
Thumbs-uped-by: Rod Widdowson, Johannes Schindelin
URL: https://curl.haxx.se/mail/lib-2018-11/0000.html
Closes#3348
This is a companion patch to cbea2fd2c (NTLM: force the connection to
HTTP/1.1, 2018-12-06): with NTLM, we can switch to HTTP/1.1
preemptively. However, with other (Negotiate) authentication it is not
clear to this developer whether there is a way to make it work with
HTTP/2, so let's try HTTP/2 first and fall back in case we encounter the
error HTTP_1_1_REQUIRED.
Note: we will still keep the NTLM workaround, as it avoids an extra
round trip.
Daniel Stenberg helped a lot with this patch, in particular by
suggesting to introduce the Curl_h2_http_1_1_error() function.
Closes#3349
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
It is allowed to call that function with id set to -1, specifying the
backend by name instead. We should imitate what is done further down
in that function to allow for that.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Closes#3346
NSS may be built without support for the latest SSL/TLS versions,
leading to "SSL version range is not valid" errors when the library
code supports a recent version (e.g. TLS v1.3) but it has explicitly
been disabled.
This change adjusts the maximum SSL version requested by libcurl to
be the maximum supported version at runtime, as long as that version
is at least as high as the minimum version required by libcurl.
Fixes#3261
Forgetting to bump the year in the copyright clause when hacking has
been quite common among curl developers, but a traditional checksrc
check isn't a good fit as it would penalize anyone hacking on January
1st (among other things). This adds a more selective COPYRIGHTYEAR
check which intends to only cover the currently hacked on changeset.
The check for an updated copyright year is currently not enforced on all
files but only on files edited and/or committed locally. This is due to
the number of files which aren't updated with their correct copyright
year at the time of their respective commit.
To avoid running this expensive check for every developer, this also
adds a new local override mode for checksrc where a .checksrc file can
be used to turn on extended warnings locally.
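A developer would then opt in locally with a .checksrc file; the exact
syntax is assumed here to follow checksrc's enable/disable keywords.

    # .checksrc placed in the local source directory (assumed syntax)
    enable COPYRIGHTYEAR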
Closes#3303
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
EBADIOCTL doesn't exist on more recent Minix.
There have also been substantial changes to the network stack.
Fixes build on Minix 3.4rc
Closes https://github.com/curl/curl/pull/3323
curl_multi_wait() was erroneously used from within
curl_easy_perform(). It could lead to it believing there was no socket
to wait for, and then sleeping for a while instead of monitoring the
socket, missing acting on that activity as swiftly as it should
(causing an up to 1000 ms delay).
Reported-by: Antoni Villalonga
Fixes#3305
Closes#3306
Closes#3308
Important for when the file is going to be read again and thus must not
contain old contents!
Adds test 327 to verify.
Reported-by: daboul on github
Fixes#3299
Closes#3300