- Split off connection shutdown procedure from Curl_disconnect into new
function conn_shutdown.
- Change the shutdown procedure to close the sockets before
disassociating the transfer.
Prior to this change the sockets were closed after disassociating the
transfer so SOCKETFUNCTION wasn't called since the transfer was already
disassociated. That likely came about from recent work started in
Jan 2019 (#3442) to separate transfers from connections.
Bug: https://curl.haxx.se/mail/lib-2019-02/0101.html
Reported-by: Pavel Löbl
Closes https://github.com/curl/curl/issues/3597
Closes https://github.com/curl/curl/pull/3598
RFC 7540 says we should verify that the push is for an "authoritative"
server. We make sure of this by only allowing push with an :authority
header that matches the host that was asked for in the URL.
Fixes#3577
Reported-by: Nicolas Grekas
Bug: https://curl.haxx.se/mail/lib-2019-02/0057.html
Closes #3581
The variable wasn't properly reset within the loop and thus could remain
set for sockets that hadn't been set before, making libcurl miss notifying
the app.
This is a follow-up to 4c35574 (shipped in curl 7.64.0)
Reported-by: buzo-ffm on github
Detected-by: Jan Alexander Steffens
Fixes #3585
Closes #3589
- rename 'n' to buflen in functions, and use size_t for them. Don't pass
in negative buffer lengths.
- move most function comments to above the function starts, like we
usually do
- remove several unnecessary typecasts (especially of NULL)
Reviewed-by: Patrick Monnerat
Closes#3582
Previously the function would edit the provided header in-place when a
semicolon is used to signify an empty header. This made it impossible to
use the same set of custom headers in multiple threads simultaneously.
This approach now makes a local copy when it needs to edit the string.
Reported-by: d912e3 on github
Fixes #3578
Closes #3579
- Change the behavior of win32_init so that the required initialization
procedures are not affected by the CURL_GLOBAL_WIN32 flag.
libcurl via curl_global_init supports initializing for win32 with an
optional flag CURL_GLOBAL_WIN32, which if omitted was meant to stop
Winsock initialization. It did so internally by skipping win32_init()
when that flag was omitted. Since then win32_init() has been expanded to
include required initialization routines that are separate from
Winsock and therefore must be called in all cases. This commit fixes
it so that CURL_GLOBAL_WIN32 only controls the optional win32
initialization (which is Winsock initialization, according to our doc).
The only users affected by this change are those that don't pass
CURL_GLOBAL_WIN32 to curl_global_init. For them this commit removes the
risk of a potential crash.
Ref: https://github.com/curl/curl/pull/3573
Fixes https://github.com/curl/curl/issues/3313
Closes https://github.com/curl/curl/pull/3575
The draft-ietf-httpbis-rfc6265bis-02 draft specifies a set of prefixes
and how they should affect cookie initialization, which has been
adopted by the major browsers. This adds support for the two prefixes
defined, __Host- and __Secure-, and updates the testcase with the
supplied examples from the draft.
Closes#3554
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
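As a rough illustration of those rules (a hedged sketch, not libcurl's actual
implementation; the helper name is made up):

#include <stdbool.h>
#include <string.h>

/* sketch of the draft's prefix rules: a "__Secure-" cookie must have the
   secure attribute (and come from a secure origin); a "__Host-" cookie must
   additionally have no Domain attribute and a Path of "/" */
static bool cookie_prefix_ok(const char *name, bool secure,
                             bool has_domain, const char *path)
{
  if(!strncmp(name, "__Secure-", 9))
    return secure;
  if(!strncmp(name, "__Host-", 7))
    return secure && !has_domain && path && !strcmp(path, "/");
  return true; /* no prefix, no extra requirements */
}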
If mbedtls_ssl_get_session() fails, it may still have allocated
memory that needs to be freed to avoid leaking. Call the library
API function to release session resources on this error path as
well as on Curl_ssl_addsessionid() errors.
Closes: #3574
Reported-by: Michał Antoniak <M.Antoniak@posnet.com>
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
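A minimal sketch of that cleanup pattern using the public mbedTLS API (not
curl's exact code; the function name is made up):

#include <mbedtls/ssl.h>

/* sketch: release session resources even when the getter fails */
static int save_session(mbedtls_ssl_context *ssl)
{
  mbedtls_ssl_session session;
  mbedtls_ssl_session_init(&session);
  if(mbedtls_ssl_get_session(ssl, &session) != 0) {
    /* the call failed but may still have allocated memory inside
       'session', so free it on this error path too */
    mbedtls_ssl_session_free(&session);
    return -1;
  }
  /* ... hand the session data to the session cache here; if that fails,
     mbedtls_ssl_session_free(&session) must be called as well ... */
  mbedtls_ssl_session_free(&session);
  return 0;
}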
... and avoid use of static variables that aren't thread safe.
Fixes regression from e9ababd4f5 (present in the 7.64.0 release)
Reported-by: Paul Groke
Fixes #3572
Closes #3573
- Save the original conn->data before it's changed to the specified
data transfer for the connection check and then restore it afterwards.
This is a follow-up to 38d8e1b 2019-02-11.
History:
It was discovered a month ago that, before checking whether to extract a
dead connection, that connection should be associated with a "live"
transfer for the check (ie the original conn->data is ignored and set to
the passed-in data). A fix was landed in 54b201b which did that and also
cleared conn->data after the check. The original conn->data was not
restored, so presumably it was thought that a valid conn->data was no
longer needed.
Several days later it was discovered that a valid conn->data was needed
after the check and a follow-up fix was landed in bbae24c which partially
reverted the original fix and attempted to limit the scope of when
conn->data was changed to only when pruning dead connections. In that
case conn->data was not cleared and the original conn->data not
restored.
A month later it was discovered that the original fix was somewhat
correct; a "live" transfer is needed for the check in all cases
because original conn->data could be null which could cause a bad deref
at arbitrary points in the check. A fix was landed in 38d8e1b which
expanded the scope to all cases. conn->data was not cleared and the
original conn->data not restored.
A day later it was discovered that not restoring the original conn->data
may lead to busy loops in applications that use the event interface, and
given this observation it's a pretty safe assumption that there is some
code path that still needs the original conn->data. This commit is the
follow-up fix for that, it restores the original conn->data after the
connection check.
Assisted-by: tholin@users.noreply.github.com
Reported-by: tholin@users.noreply.github.com
Fixes https://github.com/curl/curl/issues/3542
Closes #3559
On non-ascii platforms, the chunked hex header was measured for char code
conversion length, even for chunked trailers that do not have a hex header.
In addition, the effective length is already known: use it.
Since the hex length can be zero, only convert if needed.
Reported by valgrind.
Convert numerous infof() calls into debug-build only messages since they
are annoyingly verbose for regular applications. Removed a few.
Bug: https://curl.haxx.se/mail/lib-2019-02/0027.html
Reported-by: Volker Schmid
Closes#3552
There is no benefit to holding the data sharelock when freeing the
addrinfo in case it fails, so release it as soon as we can
rather than holding on to it. This also aligns the code with other
consumers of sharelocks.
Closes#3516
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
The http2 code for connection checking needs a transfer to use. Make
sure a working one is set before handler->connection_check() is called.
Reported-by: jnbr on github
Fixes #3541
Closes #3547
urlapi: turn three local-only functions into statics
conncache: make conncache_find_first_connection static
multi: make detach_connnection static
connect: make getaddressinfo static
curl_ntlm_core: make hmac_md5 static
http2: make two functions static
http: make http_setup_conn static
connect: make tcpnodelay static
tests: make UNITTEST a thing to mark functions with, so they can be static for
normal builds and non-static for unit test builds
... and mark Curl_shuffle_addr accordingly.
url: make up_free static
setopt: make vsetopt static
curl_endian: make write32_le static
rtsp: make rtsp_connisdead static
warnless: remove unused functions
memdebug: remove one unused function, made another static
If the incoming len is 5, but the buffer does not have a termination
after 5 bytes, the strtol() call may keep reading through the line
buffer until it exceeds its boundary. Fix by ensuring that we are
using a bounded read with a temporary buffer on the stack.
Bug: https://curl.haxx.se/docs/CVE-2019-3823.html
Reported-by: Brian Carpenter (Geeknik Labs)
CVE-2019-3823
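The general shape of the fix, as a hedged sketch (hypothetical names, not the
exact patched code):

#include <stdlib.h>
#include <string.h>

/* sketch: never let strtol() read past 'len' bytes of 'line'; copy the
   bytes into a small stack buffer and NUL-terminate before parsing */
static long parse_status(const char *line, size_t len)
{
  char tmp[16];
  if(len >= sizeof(tmp))
    len = sizeof(tmp) - 1;
  memcpy(tmp, line, len);
  tmp[len] = '\0';
  return strtol(tmp, NULL, 10);
}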
Attempt to add support for Secure Channel binding when negotiate
authentication is used. The problem to solve is that by default IIS
accepts channel bindings but curl doesn't utilise them. The result was a
401 response. The scope affects only the Schannel (winssl)-SSPI combination.
Fixes https://github.com/curl/curl/issues/3503
Closes https://github.com/curl/curl/pull/3509
Stick to "Schannel" everywhere. The configure option --with-winssl is
kept to allow existing builds to work but --with-schannel is added as an
alias.
Closes#3504
mbedTLS doesn't have sigpipe management. If a write/read occurs when
the remote closes the socket, the signal is raised and kills the
application. Use the curl mechanisms to fix this behavior.
Signed-off-by: Jeremie Rapin <j.rapin@overkiz.com>
Closes#3502
Compiling with msvc /analyze and a recent Windows SDK warns against
using GetTickCount (it suggests using GetTickCount64 instead).
Since GetTickCount is only being used when GetTickCount64 isn't
available, I am disabling that warning.
Fixes https://github.com/curl/curl/issues/3437
Closes https://github.com/curl/curl/pull/3440
CURLOPT_SSH_KNOWNHOSTS and CURLOPT_SSH_KEYFUNCTION are supported for
libssh as well. So accepting these options only when compiling with
libssh2 is wrong here.
Fixes #3493
Closes #3494
By default, libssh creates a new socket, instead of using the socket
created by curl for SSH connections.
Pass the socket created by curl to libssh using ssh_options_set() with
SSH_OPTIONS_FD directly after ssh_new(). So libssh uses our socket
instead of creating a new one.
This approach is very similar to what is done in the libssh2 code, where
the socket created by curl is passed to libssh2 when
libssh2_session_startup() is called.
Fixes #3491
Closes #3495
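Roughly, the libssh call sequence looks like this (a sketch with made-up
variable names; error handling omitted):

#include <libssh/libssh.h>

/* sketch: make libssh reuse the socket curl already connected */
static ssh_session start_session(socket_t curl_sock, const char *host)
{
  ssh_session sess = ssh_new();
  if(!sess)
    return NULL;
  /* pass the already-created socket right after ssh_new() so libssh
     does not create its own */
  ssh_options_set(sess, SSH_OPTIONS_FD, &curl_sock);
  ssh_options_set(sess, SSH_OPTIONS_HOST, host);
  return sess;
}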
There is no real gain in performing memcmp() comparisons on single
characters, so change these to array subscript inspections which
saves a call and makes the code clearer.
Closes#3486
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Reviewed-by: Jay Satiro <raysatiro@yahoo.com>
Windows extended protection (aka ssl channel binding) is required
to log in to an ntlm IIS endpoint, otherwise the server returns 401
responses.
Fixes #3280
Closes #3321
When a ssh session startup fails, it is useful to know why it has
failed. This commit changes the message from:
"Failure establishing ssh session"
to something like this, for example:
"Failure establishing ssh session: -5, Unable to exchange encryption keys"
Closes#3481
.... to not pass in a const in the second argument as that's not how it
is supposed to be used and might cause compiler warnings.
Reported-by: Pavel Pavlov
Fixes #3477
Closes #3478
extract_if_dead() is called from two functions, and only one of
them should get conn->data updated and now neither call path clears it.
scan-build found a case where conn->data would be NULL dereferenced in
ConnectionExists() otherwise.
Closes#3473
Make sure that this function sets a proper "live" transfer for the
connection before calling the protocol-specific connection check
function, and then clear it again afterward as a non-used connection has
no current transfer.
Reported-by: Jeroen Ooms
Reviewed-by: Marcel Raad
Reviewed-by: Daniel Gustafsson
Fixes #3463
Closes #3464
We use "conn" everywhere to be a pointer to the connection.
Introduces two functions that "attach" and "detach" the connection
to and from the transfer.
Going forward, we should favour using "data->conn" (since a transfer
always only has a single connection or none at all) over "conn->data"
(since a connection can have none, one or many transfers associated with
it and updating conn->data to be correct is error prone and a frequent
reason for internal issues).
Closes#3442
Fixes #3436
Closes #3448
Problem 1
After LOTS of scratching my head, I eventually realized that even when doing
10 uploads in parallel, sometimes the socket callback to the application that
tells it what to wait for on the socket looked like it would reflect the
status of just the single transfer that just changed state.
Digging into the code revealed that this was indeed the truth. When multiple
transfers are using the same connection, the application did not correctly get
the *combined* flags for all transfers which then could make it switch to READ
(only) when in fact most transfers wanted to get told when the socket was
WRITEABLE.
Problem 1b
A separate but related regression had also been introduced by me when I
cleared the connection/transfer association more thoroughly a while ago: the
logic could no longer find the connection and check whether it was marked as
used by more transfers, so it would also prematurely remove the socket from
the socket hash table even when other transfers were still using it!
Fix 1
Make sure that each socket stored in the socket hash has a "combined" action
field of what to ask the application to wait for, that is potentially the ORed
action of multiple parallel transfers. And remove that socket hash entry only
if there are no transfers left using it.
Problem 2
The socket hash entry stored an association to a single transfer using that
socket - and when curl_multi_socket_action() was called to tell libcurl about
activities on that specific socket only that transfer was "handled".
This was WRONG, as a single socket/connection can be used by numerous parallel
transfers and not necessarily a single one.
Fix 2
We now store a list of handles in the socket hashtable entry and when libcurl
is told there's traffic for a particular socket, it now iterates over all
known transfers using that single socket.
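Conceptually, the combined action for one socket is the OR of what every
transfer on it wants; a hedged sketch with made-up struct and field names:

/* sketch: compute what to ask the application to wait for on one socket,
   ORing together the interest of every transfer that uses it */
struct xfer {
  int waitfor;            /* CURL_POLL_IN and/or CURL_POLL_OUT */
  struct xfer *next;
};

static int combined_action(const struct xfer *transfers_on_socket)
{
  int action = 0;
  const struct xfer *t;
  for(t = transfers_on_socket; t; t = t->next)
    action |= t->waitfor;
  return action;  /* remove the hash entry only when this list is empty */
}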
Added Curl_resolver_kill() for all three resolver modes, which only
blocks when necessary, along with test 1592 to confirm
curl_multi_remove_handle() doesn't block unless it must.
Closes #3428
Fixes #3371
When building with Unicode on MSVC, the compiler warns about freeing a
pointer to const in Curl_unicodefree. Fix this by declaring it as
non-const and casting the argument to Curl_convert_UTF8_to_tchar to
non-const too, like we do in all other places.
Closes https://github.com/curl/curl/pull/3435
The previous fix for parsing IPv6 URLs with a zone index was a paddle
short for URLs without an explicit port. This patch fixes that case
and adds a unit test case.
This bug was highlighted by issue #3408, and while it's not the full
fix for the problem there it is an isolated bug that should be fixed
regardless.
Closes#3411
Reported-by: GitYuanQu on github
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
This adds support for wildcard hosts in CURLOPT_RESOLVE. These are
try-last so any non-wildcard entry is resolved first. If specified,
any host not matched by another CURLOPT_RESOLVE config will use this
as fallback.
Example: send a.com to 10.0.0.1 and everything else to 10.0.0.2:
curl --resolve *:443:10.0.0.2 --resolve a.com:443:10.0.0.1 \
https://a.com https://b.com
This is probably quite similar to using:
--connect-to a.com:443:10.0.0.1:443 --connect-to :443:10.0.0.2:443
Closes#3406
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
- Use QueryPerformanceCounter on Windows Vista+
There is confusing info floating around that QueryPerformanceCounter
can leap etc, which might have been true a long time ago, but is no longer
the case nowadays (perhaps starting from WinXP?). Also, boost and
std::chrono::steady_clock use QueryPerformanceCounter in a similar way.
Prior to this change GetTickCount or GetTickCount64 was used, which has
lower resolution. That is still the case for <= XP.
Fixes https://github.com/curl/curl/issues/3309
Closes https://github.com/curl/curl/pull/3318
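For reference, a monotonic microsecond clock based on QueryPerformanceCounter
looks roughly like this (sketch, not curl's exact code):

#include <windows.h>

/* sketch: monotonic time in microseconds via QPC, split to avoid overflow */
static ULONGLONG now_usec(void)
{
  LARGE_INTEGER freq, count;
  QueryPerformanceFrequency(&freq);   /* ticks per second, constant */
  QueryPerformanceCounter(&count);
  return (ULONGLONG)(count.QuadPart / freq.QuadPart) * 1000000ULL +
         (ULONGLONG)(count.QuadPart % freq.QuadPart) * 1000000ULL /
         (ULONGLONG)freq.QuadPart;
}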
Do not assume/store association between a given easy handle and the
connection if it can be avoided.
Long-term, the 'conn->data' pointer should probably be removed as it is a
little too error-prone. Still used very widely though.
Reported-by: masbug on github
Fixes #3391
Closes #3400
Added CURLOPT_HTTP09_ALLOWED and --http0.9 for this purpose.
For now, both the tool and library allow HTTP/0.9 by default.
docs/DEPRECATE.md lays out the plan for when to reverse that default: 6
months after the 7.64.0 release. The options are added already now so
that applications/scripts can start using them right away.
Fixes #2873
Closes #3383
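Opting in from an application is a single setopt; a minimal sketch:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    /* explicitly allow header-less HTTP/0.9 responses */
    curl_easy_setopt(curl, CURLOPT_HTTP09_ALLOWED, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}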
This adds a cleanup callback for cyassl. Resolves a possible memory leak
when using the ECC fixed point cache.
Closes#3395
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Ensure to perform the checks we have to enforce a sane domain in
the cookie request. The check for non-PSL enabled builds is quite
basic but it's better than nothing.
Closes#2964
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Follow-up to 09e401e01b. If a connection gets reused, then the data member
will be copied, but not the proto member. As a result, in smb_do(), the
path has been set from the original proto.share data.
Closes#3388
The timeout set with CURLOPT_TIMEOUT is no longer used when
disconnecting from one of the pingpong protocols (FTP, IMAP, SMTP,
POP3).
Reported-by: jasal82 on github
Fixes #3264
Closes #3374
This adds the CURLOPT_TRAILERDATA and CURLOPT_TRAILERFUNCTION
options that allow a callback based approach to sending trailing headers
with chunked transfers.
The test server (sws) was updated to take into account the detection of the
end of transfer when trailing headers are present.
Test 1591 checks that trailing headers can be sent using libcurl.
Closes#3350
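A minimal sketch of the callback-based API (the trailer name is just an
example):

#include <curl/curl.h>

/* called by libcurl just before the final chunk of an upload is sent */
static int trailer_cb(struct curl_slist **tr, void *userdata)
{
  (void)userdata;
  *tr = curl_slist_append(*tr, "My-Trailer: value");
  return CURL_TRAILERFUNC_OK;
}

/* on an easy handle doing a chunked upload:
   curl_easy_setopt(curl, CURLOPT_TRAILERFUNCTION, trailer_cb);
   curl_easy_setopt(curl, CURLOPT_TRAILERDATA, NULL); */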
After the migration to the URL API all octets in the selector after the
first `?' were interpreted as query and accidentally discarded and not
passed to the server.
Add a gopherpath to always concatenate possible path and query URL
pieces.
Fixes #3369
Closes #3370
If just a `?' is passed to indicate the query, always store a zero length
query instead of having a NULL query.
This permits distinguishing a URL with a trailing `?'.
Fixes #3369
Closes #3370
Only allow secure origins to be able to write cookies with the
'secure' flag set. This reduces the risk of non-secure origins
influencing the state of secure origins. This implements IETF
Internet-Draft draft-ietf-httpbis-cookie-alone-01 which updates
RFC6265.
Closes#2956
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
A URL with a single colon without a portnumber should use the default
port, discarding the colon. Fix, add a testcase and also do a little bit
of comment wordsmithing.
Closes#3365
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
... when not actually following the redirect. Otherwise we return error
for this and an application can't extract the value.
Test 1518 added to verify.
Reported-by: Pavel Pavlov
Fixes #3340
Closes #3364
The time_t type is unsigned on some systems and these variables are used
to hold return values from functions that return timediff_t
already. timediff_t is always a signed type.
Closes#3363
This adds a new unittest intended to cover the internal functions in
the urlapi code, starting with parse_port(). In order to avoid name
collisions in debug builds, parse_port() is renamed Curl_parse_port()
since it will be exported.
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Reviewed-by: Marcel Raad <Marcel.Raad@teamviewer.com>
An IPv6 URL which contains a zone index includes a '%%25<zone id>'
string before the ending ']' bracket. The parsing logic wasn't set
up to cope with the zone index however, resulting in a malformed url
error being returned. Fix by breaking the parsing into two stages
to correctly handle the zone index.
Closes #3355
Closes #3319
Reported-by: tonystz on Github
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Reviewed-by: Marcel Raad <Marcel.Raad@teamviewer.com>
- Include query in the path passed to generate HTTP auth.
Recent changes to use the URL API internally (46e1640, 7.62.0)
inadvertently broke authentication URIs by omitting the query.
Fixes https://github.com/curl/curl/issues/3353
Closes #3356
The http status code 204 (No Content) should not change the "condition
unmet" flag. Only the http status code 304 (Not Modified) should do
this.
Closes#359
- Match URL scheme with LDAP and LDAPS
- Retrieve attributes, scope and filter from URL query instead
Regression brought in 46e164069d (7.62.0)
Closes#3362
All resources defined in lib/libcurl.rc and curl.rc are language
neutral.
winbuild/MakefileBuild.vc ALWAYS defines the macro DEBUGBUILD, so the
ifdef's in line 33 of lib/libcurl.rc and src/curl.rc are wrong.
Replace the hard-coded constants in both *.rc files with #define'd
values.
Thumbs-uped-by: Rod Widdowson, Johannes Schindelin
URL: https://curl.haxx.se/mail/lib-2018-11/0000.html
Closes #3348
This is a companion patch to cbea2fd2c (NTLM: force the connection to
HTTP/1.1, 2018-12-06): with NTLM, we can switch to HTTP/1.1
preemptively. However, with other (Negotiate) authentication it is not
clear to this developer whether there is a way to make it work with
HTTP/2, so let's try HTTP/2 first and fall back in case we encounter the
error HTTP_1_1_REQUIRED.
Note: we will still keep the NTLM workaround, as it avoids an extra
round trip.
Daniel Stenberg helped a lot with this patch, in particular by
suggesting to introduce the Curl_h2_http_1_1_error() function.
Closes#3349
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
It is allowed to call that function with id set to -1, specifying the
backend by the name instead. We should imitate what is done further down
in that function to allow for that.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Closes#3346
NSS may be built without support for the latest SSL/TLS versions,
leading to "SSL version range is not valid" errors when the library
code supports a recent version (e.g. TLS v1.3) but it has explicitly
been disabled.
This change adjusts the maximum SSL version requested by libcurl to
be the maximum supported version at runtime, as long as that version
is at least as high as the minimum version required by libcurl.
Fixes#3261
Forgetting to bump the year in the copyright clause when hacking has
been quite common among curl developers, but a traditional checksrc
check isn't a good fit as it would penalize anyone hacking on January
1st (among other things). This adds a more selective COPYRIGHTYEAR
check which intends to only cover the currently hacked on changeset.
The check for updated copyright year is currently not enforced on all
files but only on files edited and/or committed locally. This is due to
the number of files which aren't updated with their correct copyright
year at the time of their respective commit.
To further avoid running this expensive check for every developer, it
adds a new local override mode for checksrc where a .checksrc file can
be used to turn on extended warnings locally.
Closes#3303
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
EBADIOCTL doesn't exist on more recent Minix.
There have also been substantial changes to the network stack.
Fixes build on Minix 3.4rc
Closes https://github.com/curl/curl/pull/3323
curl_multi_wait() was erroneously used from within
curl_easy_perform(). It could lead to it believing there was no socket
to wait for and then sleeping for a while instead of monitoring the
socket, and then missing acting on that activity as swiftly as it should
(causing an up to 1000 ms delay).
Reported-by: Antoni Villalonga
Fixes #3305
Closes #3306
Closes #3308
Important for when the file is going to be read again and thus must not
contain old contents!
Adds test 327 to verify.
Reported-by: daboul on github
Fixes #3299
Closes #3300
The function does not return the same value as snprintf() normally does,
so readers may be misled into thinking the code works differently than
it actually does. A different function name makes this easier to detect.
Reported-by: Tomas Hoger
Assisted-by: Daniel Gustafsson
Fixes #3296
Closes #3297
Session resumption information is not available immediately after a TLS 1.3
handshake. The client must wait until the server has sent a session ticket.
Use OpenSSL's "new session" callback to get the session information and put it
into curl's session cache. For TLS 1.3 sessions, this callback will be invoked
after the server has sent a session ticket.
The "new session" callback is invoked only if OpenSSL's session cache is
enabled, so enable it and use the "external storage" mode which lets curl manage
the contents of the session cache.
A pointer to the connection data and the sockindex are now saved as "SSL extra
data" to make them available to the callback.
This approach also works for old SSL/TLS versions and old OpenSSL versions.
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Fixes #3202
Closes #3271
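In OpenSSL terms, the setup described above looks roughly like this (sketch):

#include <openssl/ssl.h>

/* invoked by OpenSSL when a session becomes available, which for TLS 1.3
   happens after the handshake when the server sends a session ticket */
static int new_session_cb(SSL *ssl, SSL_SESSION *sess)
{
  /* find our connection via SSL_get_ex_data(ssl, ...) and put 'sess'
     into the session cache; return 1 only if we keep a reference */
  (void)ssl; (void)sess;
  return 0;
}

static void setup_session_cache(SSL_CTX *ctx)
{
  /* "external storage" mode: we manage the cache, OpenSSL just calls us */
  SSL_CTX_set_session_cache_mode(ctx, SSL_SESS_CACHE_CLIENT |
                                      SSL_SESS_CACHE_NO_INTERNAL);
  SSL_CTX_sess_set_new_cb(ctx, new_session_cb);
}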
Since we're close to feature freeze, this change disables this feature
with an #ifdef. Define ALLOW_RENEG at build-time to enable.
This could be converted to a bit for CURLOPT_SSL_OPTIONS to let
applications opt in to this.
Concern-raised-by: David Benjamin
Fixes #3283
Closes #3293
When using c-ares for asyn dns, the dns socket fd was silently closed
by c-ares without curl being aware. curl would then 'realize' the fd
has been removed at next call of Curl_resolver_getsock, and only then
notify the CURLMOPT_SOCKETFUNCTION to remove fd from its poll set with
CURL_POLL_REMOVE. At this point the fd is already closed.
By using ares socket state callback (ARES_OPT_SOCK_STATE_CB), this
patch allows curl to be notified that the fd is no longer needed
for either write or read. At this point by calling
Curl_multi_closed we are able to notify multi with CURL_POLL_REMOVE
before the fd is actually closed by ares.
In asyn-ares.c Curl_resolver_duphandle we can't use ares_dup anymore
since it does not allow passing a different sock_state_cb_data
Closes#3238
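The c-ares side of this is the socket state callback; a sketch of wiring it
up (the multi notification is indicated in a comment only):

#include <string.h>
#include <ares.h>

/* called by c-ares whenever its interest in a socket changes; when both
   'readable' and 'writable' are 0 the fd is about to be closed, which is
   the moment to report CURL_POLL_REMOVE to the multi handle */
static void sock_state_cb(void *data, ares_socket_t fd,
                          int readable, int writable)
{
  (void)data;
  if(!readable && !writable) {
    /* e.g. Curl_multi_closed(data, fd) inside libcurl */
    (void)fd;
  }
}

static int setup_channel(ares_channel *channelp, void *userp)
{
  struct ares_options opts;
  memset(&opts, 0, sizeof(opts));
  opts.sock_state_cb = sock_state_cb;
  opts.sock_state_cb_data = userp;
  return ares_init_options(channelp, &opts, ARES_OPT_SOCK_STATE_CB);
}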
lib/curl_ntlm.c had code that read as follows:
#ifdef USE_OPENSSL
# ifdef USE_OPENSSL
# else
# ..
# endif
#endif
Remove the redundant USE_OPENSSL along with #else (it's not possible to
reach it anyway). The removed construction is a leftover from when the
SSLeay support was removed.
Closes#3269
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Commit 709cf76f6b deprecated USE_SSLEAY, as curl since long isn't
compatible with the SSLeay library. This removes the few leftovers that
were omitted in the less frequently used platform targets.
Closes#3270
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
The SSL_CTX_set_msg_callback callback is not just called for the
Handshake or Alert protocols, but also for the raw record header
(SSL3_RT_HEADER) and the decrypted inner record type
(SSL3_RT_INNER_CONTENT_TYPE). Be sure to ignore the latter to avoid
excess debug spam when using `curl -v` against a TLSv1.3-enabled server:
* TLSv1.3 (IN), TLS app data, [no content] (0):
(Following this message, another callback for the decrypted
handshake/alert messages will be present anyway.)
Closes https://github.com/curl/curl/pull/3281
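The callback shape involved, as a sketch (SSL3_RT_INNER_CONTENT_TYPE needs a
TLS 1.3 capable OpenSSL):

#include <openssl/ssl.h>

static void msg_cb(int write_p, int version, int content_type,
                   const void *buf, size_t len, SSL *ssl, void *arg)
{
  (void)write_p; (void)version; (void)buf; (void)len; (void)ssl; (void)arg;
  if(content_type == SSL3_RT_INNER_CONTENT_TYPE)
    return;  /* decrypted inner record type: nothing useful to log */
  /* log handshake/alert/header traffic here */
}

/* installed with: SSL_CTX_set_msg_callback(ctx, msg_cb); */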
The product name from Microsoft is "Schannel", but in infof/failf
reporting we use "schannel". This removes different versions.
Closes#3243
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
APPENDQUERY + URLENCODE would skip all equals signs but now it only skips
encoding the first to better allow "name=content" for any content.
Reported-by: Alexey Melnichuk
Fixes #3231
Closes #3231
The function identifying a leading "scheme" part of the URL considered a
few letters ending with a colon to be a scheme, making something like
"short:80" to become an unknown scheme instead of a short host name and
a port number.
Extended test 1560 to verify.
Also fixed test 203 to use file_pwd to make it get the correct path on
Windows. Removed test 2070 since it was a duplicate of 203.
Assisted-by: Marcel Raad
Reported-by: Hagai Auro
Fixes #3220
Fixes #3233
Closes #3223
Closes #3235
Prior to this change twice as many bytes as necessary were malloc'd when
converting wchar to UTF8. To allay confusion in the future I also
changed the variable name for the amount of bytes from len to bytes.
Closes https://github.com/curl/curl/pull/3209
- for "--netrc", don't ignore the login/password specified with "--user",
only ignore the login/password in the URL.
This restores the netrc behaviour of curl 7.61.1 and earlier.
- fix the documentation of CURL_NETRC_REQUIRED
- improve the detection of login/password changes when reading .netrc
- don't read .netrc if both login and password are already set
Fixes #3213
Closes #3224
The internal buffer in infof() is limited to 2048 bytes of payload plus
an additional byte for NULL termination. Servers with very long error
messages can however cause truncation of the string, which currently
isn't very clear, and leads to badly formatted output.
This appends a "...\n" (or just "..." in case the format didn't end with a
newline char) marker to the end of the string to clearly show
that it has been truncated.
Also include a unittest covering infof() to try and catch any bugs
introduced in this quite important function.
Closes#3216
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Reviewed-by: Marcel Raad <Marcel.Raad@teamviewer.com>
The function identifying a leading "scheme" part of the URL considered a few
letters ending with a colon to be a scheme, making something like "short:80"
become an unknown scheme instead of a short host name and a port number.
Extended test 1560 to verify.
Reported-by: Hagai Auro
Fixes #3220
Closes #3223
The overflow has no real world impact.
Just avoid it for "best practice".
Code change suggested by "The Infinnovation Team" and Daniel Stenberg.
Closes#3184
When not actually following the redirect and the target URL is only
stored for later retrieval, curl always accepted "non-supported"
schemes. This was a regression from 46e164069d.
Reported-by: Brad King
Fixes #3210
Closes #3215
As has been outlined in the DEPRECATE.md document, the axTLS code has
been disabled for 6 months and is hereby removed.
Use a better supported TLS library!
Assisted-by: Daniel Gustafsson
Closes#3194
Curl_verify_certificate() must use the Curl_ prefix since it is globally
available in the lib and otherwise steps outside of our namespace!
Closes#3201
MesaLink support was added in commit 57348eb97d but the
backend was never added to the curl_sslbackend enum in curl/curl.h.
This adds the new backend to the enum and updates the relevant docs.
Closes#3195
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Use an unsigned variable: as the signed operation behavior is undefined,
this change silences clang-tidy about it.
Ref: https://github.com/curl/curl/pull/3163
Reported-By: Daniel Stenberg
When failing to set the 1.3 cipher suite, the wrong string pointer would
be used in the error message. Most often saying "(nil)".
Reported-by: Ricky-Tigg on github
Fixes #3178
Closes #3180
Ensure to clear the session object in case the libssh2 initialization
fails.
It could be argued that the libssh2 error function should be called to
get a proper error message in this case. But since the only error path
in libssh2_knownhost_init() is a memory allocation failure, it's safest
to avoid that since the libssh2 error handling allocates memory.
Closes#3179
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Compiling on _WIN32 and with USE_LWIPSOCK, causes this error:
curl_rtmp.c(223,3): error: use of undeclared identifier 'setsockopt'
setsockopt(r->m_sb.sb_socket, SOL_SOCKET, SO_RCVTIMEO,
^
curl_rtmp.c(41,32): note: expanded from macro 'setsockopt'
#define setsockopt(a,b,c,d,e) (setsockopt)(a,b,c,(const char *)d,(int)e)
^
Closes#3155
- Change the inout parameters after all needed memory has been
allocated. Do not change them if something goes wrong.
- Free the allocated temporary strings if strdup() fails.
Closes#3122
Most headerfiles end with a /* <headerguard> */ comment, but it was
missing from some. The comment isn't the most important part of our
code documentation but consistency has an intrinsic value in itself.
This adds header guard comments to the files that were lacking it.
Closes#3158
Reviewed-by: Jay Satiro <raysatiro@yahoo.com>
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Curl_follow() no longer frees the string. Make sure it happens in the
caller function, like we normally handle allocations.
This bug was introduced with the use of the URL API internally; it has
never been in a release version.
Reported-by: Dario Weißer
Closes#3149
For IP addresses in the subject alternative name field, the length
of the IP address (and hence the number of bytes to perform a
memcmp on) is incorrectly calculated to be zero. The code previously
subtracted q from name.end. where in a successful case q = name.end
and therefore addrlen equalled 0. The change modifies the code to
subtract name.beg from name.end to calculate the length correctly.
The issue only affects libcurl with GSKit SSL, not other SSL backends.
The issue is not a security issue as IP verification would always fail.
Fixes #3102
Closes #3141
Classic MinGW has neither InitializeCriticalSectionEx nor
GetTickCount64, independent of the target Windows version.
Closes https://github.com/curl/curl/pull/3113
Now FILE transfers send headers to the header callback like HTTP and
other protocols. Also made curl_easy_getinfo(...CURLINFO_PROTOCOL...)
work for FILE in the callbacks.
Makes "curl -i file://.." and "curl -I file://.." work like before
again. Applied the bold header logic to them too.
Regression from c1c2762 (7.61.0)
Reported-by: Shaun Jackman
Fixes #3083
Closes #3101
In case a very small buffer was passed to the version function, it could
result in the buffer not being NULL-terminated since strncpy() doesn't
guarantee a terminator on an overflowed buffer. Rather than adding code
to terminate (and handle zero-sized buffers), move to using snprintf()
instead like all the other vtls backends.
Closes#3105
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Reviewed-by: Viktor Szakats <commit@vszakats.net>
If a !checksrc! disable command specified to ignore zero errors, it was
still added to the ignore block even though nothing was ignored. While
there were no blocks ignored that shouldn't be ignored, the processing
ended with a warning:
<filename>:<line>:<col>: warning: Unused ignore: LONGLINE (UNUSEDIGNORE)
/* !checksrc! disable LONGLINE 0 */
^
Fix by instead treating a zero ignore as a badcommand and throw a
warning for that one.
Closes#3096
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Enable strict and warnings mode for checksrc to ensure we aren't missing
anything due to bugs in the checking code. This uncovered a few things
which are all fixed in this commit:
* several variables were used uninitialized
* several variables were not defined in the correct scope
* the whitelist filehandle was read even if the file didn't exist
* the enable_warn() call when a disable counter had expired was passing
incorrect variables, but since the checkwarn() call is unlikely to hit
(the counter is only decremented to zero on actual ignores) it didn't
manifest a problem.
Closes#3090
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Reviewed-by: Marcel Raad <Marcel.Raad@teamviewer.com>
The result of a memory allocation should always be checked, as we may
run under memory pressure where even a small allocation can fail. This
adds checking and error handling to a few cases where the allocation
wasn't checked for success. In the ftp case, the freeing of the path
variable is moved ahead of the allocation since there is little point
in keeping it around across the strdup, and the separation makes for
more readable code. In nwlib, the lock is also freed in the error path.
Also bumps the copyright years on affected files.
Closes#3084
Reviewed-by: Jay Satiro <raysatiro@yahoo.com>
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
... and libcurl doesn't support any single-letter URL schemes (if there
even exist any) so it should be fairly risk-free.
Reported-by: Marcel Raad
Fixes #3070
Closes #3071
Use 'GNUInstallDirs' standard module to set destinations of installed
files.
Use uppercase "CURL" names instead of lowercase "curl" to match standard
'FindCURL.cmake' CMake module:
* https://cmake.org/cmake/help/latest/module/FindCURL.html
Meaning:
* Install 'CURLConfig.cmake' instead of 'curl-config.cmake'
* User should call 'find_package(CURL)' instead of 'find_package(curl)'
Use 'configure_package_config_file' function to generate
'CURLConfig.cmake' file. This will make 'curl-config.cmake.in' template
file smaller and handle components better. E.g. the current configuration
reports no error if the user specifies unknown components (note: the new
configuration expects no components and reports an error if the user tries
to specify any).
Closes https://github.com/curl/curl/pull/2849
This fixes potential out-of-buffer access on "file:./" URL
$ valgrind curl "file:./"
==24516== Memcheck, a memory error detector
==24516== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==24516== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==24516== Command: /home/even/install-curl-git/bin/curl file:./
==24516==
==24516== Conditional jump or move depends on uninitialised value(s)
==24516== at 0x4C31F9C: strcmp (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==24516== by 0x4EBB315: seturl (urlapi.c:801)
==24516== by 0x4EBB568: parseurl (urlapi.c:861)
==24516== by 0x4EBC509: curl_url_set (urlapi.c:1199)
==24516== by 0x4E644C6: parseurlandfillconn (url.c:2044)
==24516== by 0x4E67AEF: create_conn (url.c:3613)
==24516== by 0x4E68A4F: Curl_connect (url.c:4119)
==24516== by 0x4E7F0A4: multi_runsingle (multi.c:1440)
==24516== by 0x4E808E5: curl_multi_perform (multi.c:2173)
==24516== by 0x4E7558C: easy_transfer (easy.c:686)
==24516== by 0x4E75801: easy_perform (easy.c:779)
==24516== by 0x4E75868: curl_easy_perform (easy.c:798)
Was originally spotted by
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=10637
Credit to OSS-Fuzz
Closes#3039
- replace tabs with spaces where possible
- remove line ending spaces
- remove double/triple newlines at EOF
- fix a non-UTF-8 character
- cleanup a few indentations/line continuations
in manual examples
Closes https://github.com/curl/curl/pull/3037
- Treat CURL_SSLVERSION_MAX_NONE the same as
CURL_SSLVERSION_MAX_DEFAULT. Prior to this change NONE would mean use
the minimum version also as the maximum.
This is a follow-up to 6015cef which changed the behavior of setting
the SSL version so that the requested version would only be the minimum
and not the maximum. It appears it was (mostly) implemented in OpenSSL
but not other backends. In other words CURL_SSLVERSION_TLSv1_0 used to
mean use just TLS v1.0 and now it means use TLS v1.0 *or later*.
- Fix CURL_SSLVERSION_MAX_DEFAULT for OpenSSL.
Prior to this change CURL_SSLVERSION_MAX_DEFAULT with OpenSSL was
erroneously treated as always TLS 1.3, and would cause an error if
OpenSSL was built without TLS 1.3 support.
Co-authored-by: Daniel Gustafsson
Fixes https://github.com/curl/curl/issues/2969
Closes https://github.com/curl/curl/pull/3012
In order for this API to fully work for libcurl itself, it now offers a
CURLU_GUESS_SCHEME flag that makes it "guess" scheme based on the host
name prefix just like libcurl always did. If there's no known prefix, it
will guess "http://".
Separately, it relaxes the check of the host name so that IDN host names
can be passed in as well.
Both these changes are necessary for libcurl itself to use this API.
Assisted-by: Daniel Gustafsson
Closes#3018
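For example, a scheme-less name can now be parsed with the same guessing
libcurl uses internally (sketch):

#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  CURLU *u = curl_url();
  char *scheme = NULL;
  /* no scheme in the input: guess from the host name prefix,
     falling back to "http" */
  if(!curl_url_set(u, CURLUPART_URL, "ftp.example.com/file",
                   CURLU_GUESS_SCHEME) &&
     !curl_url_get(u, CURLUPART_SCHEME, &scheme, 0)) {
    printf("guessed scheme: %s\n", scheme);  /* likely "ftp" */
    curl_free(scheme);
  }
  curl_url_cleanup(u);
  return 0;
}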
In the CURLUPART_URL case, there is no codepath which invokes url
decoding so remove the assignment of the urldecode variable. This
fixes the deadstore bug-report from clang static analysis.
Closes#3015
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
The reallocation was using the input pointer for the return value, which
leads to a memory leak on reallocation failure. Fix by instead using the
safe internal API call Curl_saferealloc().
Closes#3005
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Reviewed-by: Nick Zitzmann <nickzman@gmail.com>
ftp_send_command() was using vsnprintf() without including the libcurl
*rintf() replacement header. Fix by including curl_printf.h and also
add curl_memory.h while at it since memdebug.h depends on it.
Closes#2999
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
The failf() macro is the name used for invoking Curl_failf(). While
there isn't a way to turn off failf like there is for infof, it's
still a good idea to use the macro.
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Strings broken on multiple rows in the .c file need to have appropriate
whitespace padding on either side of the concatenation point to render
a correct amalgamated string. Fix by adding a space at the occurrences
found.
Closes#2986
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Commit 8238ba9c5f inadvertently removed
the actual command to be sent from the send buffer in a refactoring.
Add back copying the command into the buffer. Also add more guards
against malformed input while at it.
Closes#2985
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
When erroring out on a request being too large, the existing buffer was
leaked. Fix by explicitly freeing on the way out.
Closes#2966
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
- Use memcpy instead of strncpy to copy a string without termination,
since gcc8 warns about using strncpy to copy as many bytes from a
string as its length.
Suggested-by: Viktor Szakats
Closes https://github.com/curl/curl/issues/2980
Rather than jumping backwards to where failure cleanup happens
to be performed, move the failure case to end of the function
where it is expected per existing coding convention.
Closes#2965
If the formatting fails, we error out on a fatal error and
clean up on the way out. The array was however freed within
the wrong scope and was thus never freed in case the cookies
were written to a file instead of STDOUT.
Closes#2957
Add functionality so that protocols can do custom keepalive on their
connections, when an external API function is called.
Add docs for the new options in 7.62.0
Closes#1641
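Assuming the external API referred to here is curl_easy_upkeep() together
with CURLOPT_UPKEEP_INTERVAL_MS (stated here as an assumption; both appeared
around 7.62.0), usage looks roughly like:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_UPKEEP_INTERVAL_MS, 30000L);
    curl_easy_perform(curl);
    /* ... later, while the handle is idle but kept around ... */
    curl_easy_upkeep(curl);   /* lets the protocol send its keepalive */
    curl_easy_cleanup(curl);
  }
  return 0;
}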
Sometimes it may be considered a security risk to load an external
OpenSSL configuration automatically inside curl_global_init(). The
configuration option --disable-ssl-auto-load-config disables this
automatism. The Windows build scripts winbuild/Makefile.vs provide a
corresponding option ENABLE_SSL_AUTO_LOAD_CONFIG accepting a boolean
value.
Setting neither of these options corresponds to the previous behavior
loading the external OpenSSL configuration automatically.
Fixes #2724
Closes #2791
The gcc typecheck macros and coverity combined made it warn on the 2nd
argument for ERROR_CHECK_SETOPT(). Here's a minor rearrange to please it.
Coverity CID 1439115 and CID 1439114.
SEC_E_APPLICATION_PROTOCOL_MISMATCH isn't defined in some versions of
mingw and would require an ifdef otherwise.
Reported-by: Thomas Glanzmann
Approved-by: Marc Hörsken
Bug: https://curl.haxx.se/mail/lib-2018-09/0020.html
Closes #2950
... and add "MAILINDEX".
As described in #2789, this is a suggested solution. Change UID=xx to
actually get the mail with UID xx and add "MAILINDEX" to get a mail with a
specific index in the mailbox (the old behavior). So MAILINDEX=1 gives the
first non-deleted mail in the mailbox.
Fixes #2789
Closes #2815
CURLE_PEER_FAILED_VERIFICATION makes more sense because Curl_parseX509
does not allocate memory internally as its first argument is a pointer
to the certificate structure. The same error code is also returned by
Curl_verifyhost when its call to Curl_parseX509 fails so the change
makes error handling more consistent.
Transparently. The related curl_multi_setopt() options all still return
OK when pipelining is selected.
To re-enable the support, the single line change in lib/multi.c needs to
be reverted.
See docs/DEPRECATE.md
Closes#2705
According to RFC6265 section 5.4, cookies with equal path lengths
SHOULD be sorted by creation-time (earlier first). This adds a
creation-time record to the cookie struct in order to make cookie
sorting more deterministic. The creation-time is defined as the
order of the cookies in the jar, the first cookie read from the
jar being the oldest. The creation-time is thus not serialized
into the jar. Also remove the strcmp() matching in the sorting as
there is no lexicographic ordering in RFC6265. Existing tests are
updated to match.
Closes#2524
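The resulting ordering can be pictured as a comparator: longer paths first,
and for equal path lengths the earlier creation time wins (a sketch with a
made-up struct, not libcurl's):

#include <stdlib.h>
#include <string.h>

struct jar_cookie {
  char *path;
  long creationtime;   /* the order the cookie was read from the jar */
};

/* qsort() comparator following RFC 6265 section 5.4 */
static int cookie_cmp(const void *a, const void *b)
{
  const struct jar_cookie *c1 = a, *c2 = b;
  size_t l1 = c1->path ? strlen(c1->path) : 0;
  size_t l2 = c2->path ? strlen(c2->path) : 0;
  if(l1 != l2)
    return (l2 > l1) ? 1 : -1;            /* longer path first */
  if(c1->creationtime != c2->creationtime)
    return (c1->creationtime > c2->creationtime) ? 1 : -1;  /* older first */
  return 0;
}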
As uintptr_t and HANDLE are always the same size, this warning is
harmless. Just silence it using an intermediate uintptr_t variable.
Closes https://github.com/curl/curl/pull/2908
1) Using CERT_STORE_OPEN_EXISTING_FLAG (or CERT_STORE_READONLY_FLAG)
while opening the certificate store would be sufficient in this scenario and
less demanding in the sense of required user credentials (for example,
IIS_IUSRS will get an "Access Denied" 0x05 error for the existing
CertOpenStore call without any of the flags mentioned above),
2) as 'cert_store_name' is a DWORD, an attempt to format its value like a
string (in the "Failed to open cert store" error message) will throw a null
pointer exception,
3) adding GetLastError(), in my opinion, will make the error message more
useful.
Bug: https://curl.haxx.se/mail/lib-2018-08/0198.html
Closes #2909
Since GOPHER support was added to curl, the `?' character was automatically
translated to `%09' (`\t').
However, this behaviour does not seem to be documented in RFC 4266 and for
search selectors it is documented to directly use `%09' in the URL.
Apart from that, several gopher servers in the current gopherspace have CGI
support where `?' is used as part of the selector, and translating it to
`%09' often leads to surprising results.
Closes#2910
This enables level 4 instead of the default level 3, which of the
currently used comments only allows /* FALLTHROUGH */ to silence the
warning.
Closes https://github.com/curl/curl/pull/2747
Handles created with curl_easy_duphandle do not use the SSL engine set
up in the original handle. This fixes the issue by storing the engine
name in the internal url state and setting the engine from its name
inside curl_easy_duphandle.
Reported-by: Anton Gerasimov
Signed-off-by: Laurent Bonnans
Fixes #2829
Closes #2833
If this is the last stream on this connection, the RST_STREAM might not
get pushed to the wire otherwise.
Fixes #2882
Closes #2887
Researched-by: Michael Kaufmann
This change allows using the CMake config files generated by Curl's
CMake scripts for static builds of the library.
The symbol CURL_STATICLIB must be defined to compile downstream,
thus the config package is the perfect place to do so.
Fixes #2817
Closes #2823
Reported-by: adnn on github
Reviewed-by: Sergei Nikulov
The verbose message "Authentication using SSH public key file" was
printed each time the ssh_userauth_publickey_auto() was called, which
meant each time a packet was transferred over network because the API
operates in non-blocking mode.
This patch makes sure that the verbose message is printed just once
(when the authentication state is entered by the SSH state machine).
Deal with tiny "HTTP/0.9" (header-less) responses by checking the
status-line early, even before a full "HTTP/" is received to allow
detecting 0.9 properly.
Test 1266 and 1267 added to verify.
Fixes #2420
Closes #2872
This allows the use of PKCS#11 URI for certificates and keys without
setting the corresponding type as "ENG" and the engine as "pkcs11"
explicitly. If a PKCS#11 URI is provided for certificate, key,
proxy_certificate or proxy_key, the corresponding type is set as "ENG"
if not provided and the engine is set to "pkcs11" if not provided.
Acked-by: Nikos Mavrogiannopoulos
Closes#2333
Use standard CMake variable BUILD_SHARED_LIBS instead of introducing
custom option CURL_STATICLIB.
Use '-DBUILD_SHARED_LIBS=%SHARED%' in appveyor.yml.
Reviewed-by: Sergei Nikulov
Closes#2755
This restores the ability to build a static lib with
--disable-symbol-hiding to keep non-curl_ symbols.
Researched-by: Dan Fandrich
Reported-by: Ran Mozes
Fixes #2830
Closes #2831
Follow-up to 09e401e01b. The SMB protocol handler needs to use its
doing function too, which requires smb_do() to not mark itself as
done...
Closes#2822
This change fixes a regression where a redirect body would needlessly be
decompressed even though it was to be ignored anyway. As it happens this
causes secondary issues since there appears to be a bug in apache2 where
it in certain conditions generates a corrupt zlib response. The
regression was created by commit:
dbcced8e32
Discovered-by: Harry Sintonen
Closes#2798
The RNG structure must be freed by a call to FreeRng after its use in
Curl_cyassl_random. This call fixes Valgrind failures when running the
test suite with wolfSSL.
Closes#2784
This fixes a memory leak when CURLOPT_LOGIN_OPTIONS is used, together with
connection reuse.
I found this with oss-fuzz on GDAL and curl master:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=9582
I couldn't reproduce with the oss-fuzz original test case, but looking
at the curl source code pointed to this well reproducible leak.
Closes#2790
In the current version, VERSION_GREATER_THAN_EQUAL 6.3 will return false
when run on Windows 10.0. This patch addresses that error.
Closes https://github.com/curl/curl/pull/2792
So far, the code tries to pick an authentication method only if
user/password credentials are available, which is not the case for
Bearer authentication...
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Closes#2754
The Bearer authentication was added to cURL 7.61.0, but there is a
problem: if CURLAUTH_ANY is selected, and the server supports multiple
authentication methods including the Bearer method, we strongly prefer
that latter method (only CURLAUTH_NEGOTIATE beats it), and if the Bearer
authentication fails, we will never even try to attempt any other
method.
This is particularly unfortunate when we already know that we do not
have any Bearer token to work with.
Such a scenario happens e.g. when using Git to push to Visual Studio
Team Services (which supports Basic and Bearer authentication among
other methods) and specifying the Personal Access Token directly in the
URL (this approach is frequently taken by automated builds).
Let's make sure that we have a Bearer token to work with before we
select the Bearer authentication among the available authentication
methods.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Closes#2754
Follow-up to 1b76c38904. The VTLS backends that close down the TLS
layer for a connection still need a Curl_easy handle for the session_id
cache etc.
Fixes #2764
Closes #2771
... the protocol is doing read/write a lot, so it needs to write often
even when downloading. A more proper fix could check for exactly when it
wants to write and only ask for it then.
Without this fix, an SMB download could easily get stuck when the event-driven
API was used.
Closes#2768
Some servers issue raw deflate data that may be followed by an undocumented
trailer. This commit makes curl tolerate such a trailer of up to 4 bytes
before considering the data is in error.
Reported-by: clbr on github
Fixes#2719
It was previously erroneously skipped in some situations.
libtest/libntlmconnect.c wrongly depended on wrong behavior (that it
would get a zero timeout) when no handles are "running" in a multi
handle. That behavior is no longer present with this fix. Now libcurl
will always return a -1 timeout when all handles are completed.
Closes#2733
Commit 38203f1585 changed engine detection to be version-based,
with a baseline of openssl 1.0.1. This does in fact break builds
with openssl 1.0.0, which has engine support - the configure script
detects that ENGINE_cleanup() is available - but <openssl/engine.h>
doesn't get included to declare it.
According to upstream documentation, engine support was added to
mainstream openssl builds as of version 0.9.7:
https://github.com/openssl/openssl/blob/master/README.ENGINE
This commit drops the version test down to 1.0.0 as version 1.0.0d
is the oldest version I have to test with.
Closes#2732
MinGW warns:
/lib/vtls/schannel.c:219:64: warning: signed and unsigned type in
conditional expression [-Wsign-compare]
Fix this by casting the ptrdiff_t to size_t as we know it's positive.
Closes https://github.com/curl/curl/pull/2721
... not the read buffer size, as that can be set smaller and thus cause
a buffer overflow! CVE-2018-0500
Reported-by: Peter Wu
Bug: https://curl.haxx.se/docs/adv_2018-70a2.html
telnet.c(1401,28): warning: cast from function call of type 'int' to
non-matching type 'HANDLE' (aka 'void *') [-Wbad-function-cast]
Fixes #2696
Closes #2700
The code treated the set version as the *exact* version to require in
the TLS handshake, which is not what other TLS backends do and probably
not what most people expect either.
Reported-by: Andreas Olsson
Assisted-by: Gaurav Malhotra
Fixes #2691
Closes #2694
... and trim the threaded Curl_resolver_getsock() to return zero
millisecond wait times during the first three milliseconds so that
localhost or names in the OS resolver cache gets detected and used
faster.
Closes#2685
By making sure to use the *current* easy handle with extracted
connections from the cache, and make sure to NULLify the ->data pointer
when the connection is put into the cache to make this mistake easier to
detect in the future.
Reported-by: Will Dietz
Fixes #2669
Closes #2672
When the application just started the transfer and then stops it while
the name resolve in the background thread hasn't completed, we need to
wait for the resolve to complete and then cleanup data accordingly.
Enabled test 1553 again and added test 1590 to also check when the host
name resolves successfully.
Detected by OSS-fuzz.
Closes#1968
certdata.txt should be deleted also when the process is interrupted by
"same certificate downloaded, exiting"
The certdata.txt is currently kept on disk even if you give the -u
option
Closes#2655
with clang-6.0:
```
vtls/schannel_verify.c: In function 'add_certs_to_store':
vtls/schannel_verify.c:212:30: warning: passing argument 11 of 'CryptQueryObject' from incompatible pointer type [-Wincompatible-pointer-types]
&cert_context)) {
^
In file included from /usr/share/mingw-w64/include/schannel.h:10:0,
from /usr/share/mingw-w64/include/schnlsp.h:9,
from vtls/schannel.h:29,
from vtls/schannel_verify.c:40:
/usr/share/mingw-w64/include/wincrypt.h:4437:26: note: expected 'const void **' but argument is of type 'CERT_CONTEXT ** {aka struct _CERT_CONTEXT **}'
WINIMPM WINBOOL WINAPI CryptQueryObject (DWORD dwObjectType, const void *pvObject, DWORD dwExpectedContentTypeFlags, DWORD dwExpectedFormatTypeFlags, DWORD dwFlags,
^~~~~~~~~~~~~~~~
```
Ref: https://msdn.microsoft.com/library/windows/desktop/aa380264
Closes https://github.com/curl/curl/pull/2648
Given the constraints of SChannel, I'm exposing these as the algorithms
themselves instead; while replicating the ciphersuite as specified by
OpenSSL would have been preferable, I found no way in the SChannel API
to do so.
To use this from the command line, you need to pass the names of constants
defining the desired algorithms. For example:
curl --ciphers "CALG_SHA1:CALG_RSA_SIGN:CALG_RSA_KEYX:CALG_AES_128:CALG_DH_EPHEM" https://github.com
The specific names come from wincrypt.h
Closes#2630
- Get rid of variable that was generating false positive warning
(uninitialized)
- Fix issues in tests
- Reduce scope of several variables all over
etc
Closes#2631
Previously it was checked for in configure/cmake, but that would then
leave other build systems built without engine support.
While engine support probably existed prior to 1.0.1, I decided to play
safe. If someone experience a problem with this, we can widen the
version check.
Fixes #2641
Closes #2644
URL: https://curl.haxx.se/mail/lib-2018-06/0000.html
This is step one. It adds #error statements that require source edits to
make curl build again if asked to use axTLS. At a later stage we might
remove the axTLS specific code completely.
Closes#2628
... it might call infof() with a NULL first argument that isn't harmful
but makes it not do anything. The infof() line is not very useful
anymore, it has served its purpose. Good riddance!
Fixes#2627
If configure detects fnmatch to be available, use that instead of our
custom one for FTP wildcard pattern matching. For standard compliance,
to reduce our footprint and to use already well tested and well
exercised code.
A POSIX fnmatch behaves slightly differently than the internal function
for a few test patterns currently, and the macOS one differs yet slightly
more. Test case 1307 is adjusted for these differences.
Closes#2626
On our x86 Android toolchain, getpwuid_r is implemented but the header
is missing:
netrc.c:81:7: error: implicit declaration of function 'getpwuid_r' [-Werror=implicit-function-declaration]
Unfortunately, the function is used in curl_ntlm_wb.c, too, so I moved
the prototype to curl_setup.h.
Signed-off-by: Bernhard Walle <bernhard@bwalle.de>
Closes#2609
Adds CURLOPT_TLS13_CIPHERS and CURLOPT_PROXY_TLS13_CIPHERS.
curl: added --tls13-ciphers and --proxy-tls13-ciphers
Fixes#2435
Reported-by: zzq1015 on github
Closes#2607
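Setting the new option from an application (the suite names shown are
OpenSSL's TLS 1.3 names, used here only as an example):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* restrict the TLS 1.3 cipher suites offered */
    curl_easy_setopt(curl, CURLOPT_TLS13_CIPHERS,
                     "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}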
A non-escaped bracket ([) is for a character group - as documented. It
will *not* match an individual bracket anymore. Test case 1307 updated
accordingly to match.
Problem detected by OSS-Fuzz, although this fix is probably not a final
fix for the notorious timeout issues.
Bug: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=8525
Closes #2614
The latest psl is cached in the multi or share handle. It is refreshed
before use after 72 hours.
New share lock CURL_LOCK_DATA_PSL controls the psl cache sharing.
If the latest psl is not available, the builtin psl is used.
Reported-by: Yaakov Selkowitz
Fixes#2553Closes#2601
This avoids appending error data to already existing good data.
Test 92 is updated to match this change.
New test 1156 checks all combinations of --range/--resume, --fail,
Content-Range header and http status code 200/416.
Fixes#1163
Reported-By: Ithubg on github
Closes#2578
OpenSSL has supported --cacert for ages, always accepting LF-only line
endings ("Unix line endings") as well as CR/LF line endings ("Windows
line endings").
When we introduced support for --cacert also with Secure Channel (or in
cURL speak: "WinSSL"), we did not take care to support CR/LF line
endings, too, even though we are much more likely to receive input in that
form when using Windows.
Let's fix that.
Happily, CryptQueryObject(), the function we use to parse the ca-bundle,
accepts CR/LF input already, and the trailing LF before the END
CERTIFICATE marker catches naturally any CR/LF line ending, too. So all
we need to care about is the BEGIN CERTIFICATE marker. We do not
actually need to verify here that the line ending is CR/LF. Just
checking for a CR or an LF is really plenty enough.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Closes https://github.com/curl/curl/pull/2592
The previous limit of 5 can still end up in situations that take a very
long time and consume a lot of CPU.
If there is still a rare use case for this, a user can provide their own
fnmatch callback for a version that allows a larger set of wildcards.
This commit was triggered by yet another OSS-Fuzz timeout due to this.
Bug: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=8369
Closes#2587
Provide a set of new timers that return the time intervals as an integer
number of microseconds instead of floats.
The new info names are as follows (a usage sketch follows the list):
CURLINFO_APPCONNECT_TIME_T
CURLINFO_CONNECT_TIME_T
CURLINFO_NAMELOOKUP_TIME_T
CURLINFO_PRETRANSFER_TIME_T
CURLINFO_REDIRECT_TIME_T
CURLINFO_STARTTRANSFER_TIME_T
CURLINFO_TOTAL_TIME_T
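A minimal sketch reading one of the new integer timers after a transfer;
the value is microseconds in a curl_off_t:
```
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_off_t total_us = 0;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    if(curl_easy_perform(curl) == CURLE_OK &&
       curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME_T, &total_us) == CURLE_OK)
      printf("total: %" CURL_FORMAT_CURL_OFF_T " microseconds\n", total_us);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```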
Closes#2495
Original MinGW targets Windows 2000 by default, which lacks some APIs and
definitions for this feature. Disable it if these APIs are not available.
Closes https://github.com/curl/curl/pull/2522
Response data for a handle with a large buffer might be cached and later
used with the "closure" handle, which has a smaller buffer; the larger
cached data would then be copied and overflow the new, smaller heap-based
buffer.
Reported-by: Dario Weisser
CVE: CVE-2018-1000300
Bug: https://curl.haxx.se/docs/adv_2018-82c2.html
RFC 6265 section 4.2.1 does not set restrictions on cookie names.
This is a follow-up to commit 7f7fcd0.
Also explicitly check proper syntax of cookie name/value pair.
New test 1155 checks that cookie names are not reserved words.
Reported-By: anshnd at github
Fixes#2564Closes#2566
To make builds with VS2015 work. Recent changes in VS2015 _IOB_ENTRIES
handling are causing problems. This fix changes the OpenSSL backend code
to use BIO functions instead of FILE I/O functions to circumvent those
problems.
Closes#2512
... instead of previous separate struct fields, to make it easier to
extend and change individual backends without having to modify them all.
closes#2547
Curl_setup_transfer() can be called to setup a new individual transfer
over a multiplexed connection so it shouldn't unset writesockfd.
Bug: #2520
Closes#2549
ssh-libssh.c:2429:21: warning: result of '1 << 31' requires 33 bits to
represent, but 'int' only has 32 bits [-Wshift-overflow=]
'len' will never be that big anyway so I converted the run-time check to
a regular assert.
Commit 3c630f9b0a partially reverted the
changes from commit dd7521bcc1 because of
the problem that strcpy_url() was modified unilaterally without also
modifying strlen_url(). As a consequence strcpy_url() was again
depending on ASCII encoding.
This change fixes strlen_url() and strcpy_url() in parallel to use a
common host-encoding independent criterion for deciding whether a URL
character must be %-escaped.
Closes#2535
This extends the INDENTATION case to also handle 'else' statements
and require proper indentation on the following line. Also fixes the
offending cases found in the codebase.
Closes#2532
This function can get called on a connection that isn't setup enough to
have the 'recv_underlying' function pointer initialized so it would try
to call the NULL pointer.
Reported-by: Dario Weisser
Follow-up to db1b2c7fe9 (never shipped in a release)
Closes#2536
Follow-up to 1514c44655: replace another strstr() call done on a
buffer that might not be zero terminated - with a memchr() call, even if
we know the substring will be found.
Assisted-by: Max Dymond
Detected by OSS-Fuzz
Bug: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=8021
Closes#2534
With commit 4272a0b0fc curl-specific
character classification macros and functions were introduced in
curl_ctype.[ch] to avoid dependencies on the locale. This broke curl on
non-ASCII, e.g. EBCDIC platforms. This change restores the previous set
of character classification macros when CURL_DOES_CONVERSIONS is
defined.
Closes#2494
Fuzzing has proven we can reach code in on_frame_recv with status_code
not having been set, so let's detect that at run-time (instead of with an
assert) and error out accordingly.
(This should no longer happen with the latest nghttp2)
Detected by OSS-Fuzz
Bug: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=7903
Closes#2514
This reverts commit 8fb78f9ddc.
Unfortunately this fix introduces memory leaks I've not been able to fix
in several days. Reverting this for now to get the leaks fixed.
When receiving REFUSED_STREAM, mark the connection for close and retry
streams accordingly on another/fresh connection.
Reported-by: Terry Wu
Fixes#2416Fixes#1618Closes#2510
It's not strictly clear if the API contract allows us to call strstr()
on a string that isn't zero terminated even when we know it will find
the substring, and clang's ASAN check dislikes us for it.
Also added a check of the return code in case it fails, even if I can't
think of a situation how that can trigger.
Detected by OSS-Fuzz
Closes#2513
Bug: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=7760
Curl_cert_hostcheck operates with the host character set, therefore the
ASCII subjectAltName string retrieved with OpenSSL must be converted to
the host encoding before comparison.
Closes#2493
This triggered an assert if called more than once in debug mode (and a
memory leak in non-debug builds). With the right sequence of HTTP/2
headers incoming it can happen.
Detected by OSS-Fuzz
Closes#2507
Bug: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=7764
- Move verify_certificate functionality in schannel.c into a new
file called schannel_verify.c. Additionally, some structure definitions
from schannel.c have been moved to schannel.h to allow them to be
used in schannel_verify.c.
- Make verify_certificate functionality for Schannel available on
all versions of Windows instead of just Windows CE. verify_certificate
will be invoked on Windows CE or when the user specifies
CURLOPT_CAINFO and CURLOPT_SSL_VERIFYPEER.
- In verify_certificate, create a custom certificate chain engine that
exclusively trusts the certificate store backed by the CURLOPT_CAINFO
file.
- doc updates of --cacert/CAINFO support for schannel
- Use CERT_NAME_SEARCH_ALL_NAMES_FLAG when invoking CertGetNameString
when available. This implements a TODO in schannel.c to improve
handling of multiple SANs in a certificate. In particular, all SANs
will now be searched instead of just the first name.
- Update tool_operate.c to not search for the curl-ca-bundle.crt file
when using Schannel to maintain backward compatibility. Previously,
any curl-ca-bundle.crt file found in that search would have been
ignored by Schannel. But, with CAINFO support, the file found by
that search would have been used as the certificate store and
could cause issues for any users that have curl-ca-bundle.crt in
the search path.
- Update url.c to not set the build time CURL_CA_BUNDLE if the selected
SSL backend is Schannel. We allow setting CA location for schannel
only when explicitly specified by the user via CURLOPT_CAINFO /
--cacert.
- Add new test cases 3000 and 3001. These test cases check that the first
and last SAN, respectively, matches the connection hostname. New test
certificates have been added for these cases. For 3000, the certificate
prefix is Server-localhost-firstSAN and for 3001, the certificate
prefix is Server-localhost-secondSAN.
- Remove TODO 15.2 (Add support for custom server certificate
validation), this commit addresses it.
Closes https://github.com/curl/curl/pull/1325
- Fix warning 'integer from pointer without a cast' on 3rd arg in
CertOpenStore. The arg type HCRYPTPROV may be a pointer or integer
type of the same size.
Follow-up to e35b025.
Caught by Marc's CI builds.
Users can now specify a client certificate in the system certificate store
explicitly, using an expression like `--cert "CurrentUser\MY\<thumbprint>"`
Closes#2376
If you pass empty user/pass asking curl to use Windows Credential
Storage (as stated in the docs) and it has valid credentials for the
domain, e.g.
curl -v -u : --ntlm example.com
currently authentication fails.
This change fixes it by providing proper SPN string to the SSPI API
calls.
Fixes https://github.com/curl/curl/issues/1622
Closes https://github.com/curl/curl/pull/1660
The ifdefs have become quite long. Also, the condition for the
definition of CURLOPT_SERVICE_NAME and for setting it from
CURLOPT_SERVICE_NAME have diverged. We will soon also need the two
options for NTLM, at least when using SSPI, for
https://github.com/curl/curl/pull/1660.
Just make the definitions unconditional to make that easier.
Closes https://github.com/curl/curl/pull/2479
When a zeroed out allocation is required, use calloc() rather than
malloc() followed by an explicit memset(). The result will be the
same, but using calloc() everywhere increases consistency in the
codebase and avoids the risk of subtle bugs when code is injected
between malloc and memset by accident.
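The pattern looks like this (illustrative struct and function names, not
taken from the codebase):
```
#include <stdlib.h>
#include <string.h>

struct widget { int id; char name[64]; };  /* hypothetical struct */

struct widget *widget_new(void)
{
  /* previously: malloc() followed by an explicit memset()
     struct widget *w = malloc(sizeof(*w));
     if(w)
       memset(w, 0, sizeof(*w)); */

  /* now: a single zeroed allocation */
  struct widget *w = calloc(1, sizeof(*w));
  return w;
}
```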
Closes https://github.com/curl/curl/pull/2497
In debug mode, MinGW-w64's GCC 7.3 issues null-dereference warnings
when dereferencing pointers after DEBUGASSERT-ing that they are not
NULL.
Fix this by removing the DEBUGASSERTs.
Suggested-by: Daniel Stenberg
Ref: https://github.com/curl/curl/pull/2463
unit1309 and vtls/gtls: error: arithmetic on a null pointer treated as a
cast from integer to pointer is a GNU extension
Reported-by: Rikard Falkeborn
Fixes#2466Closes#2468
In the situation of a client connecting to an FTP server using an IPv6
tunnel proxy, the connection info will indicate that the connection is
IPv6. However, because the server behind the proxy is IPv4, it is
permissible to attempt PASV mode. In the case of the FTP server being
IPv4 only, EPSV will always fail, and with the current logic curl will
be unable to connect to the server, as the IPv6 fwdproxy causes curl to
think that EPSV is impossible.
Closes#2432
curl 7.57.0 and up interpret this according to Appendix E.3.2 of RFC
8089 but then returns an error saying this is unimplemented. This is
actually a regression in behavior on both Windows and Unix.
Before curl 7.57.0 this URL was treated as a path of "//foo/bar" and
then passed to the relevant OS API. This means that the behavior of this
case is actually OS dependent.
The Unix path resolution rules say that the OS must handle swallowing
the extra "/" and so this path is the same as "/foo/bar"
The Windows path resolution rules say that this is a UNC path and
automatically handles the SMB access for the program. So curl on Windows
was already doing Appendix E.3.2 without any special code in curl.
Regression
Closes#2438
This reverts commit dc85437736.
libcurl (with the OpenSSL backend) performs server certificate verification
even if verifypeer == 0 and the verification result is available using
CURLINFO_SSL_VERIFYRESULT. The commit that is being reverted caused the
CURLINFO_SSL_VERIFYRESULT to not have useful information for the
verifypeer == 0 use case (it would always have
X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY).
Closes#2451
This fixes a segfault occurring when a name of the (invalid) form "domain..tld"
is processed.
test46 updated to cover this case.
Follow-up to commit c990ead.
Ref: https://github.com/curl/curl/pull/2440
In order to make curl_multi_timeout() return suitable "sleep" times even
when there's no socket to wait for while the name is being resolved in a
helper thread.
It will increase the timeouts as time passes.
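A sketch of the caller side that benefits from this, showing the usual
curl_multi_timeout()/curl_multi_wait() pairing (illustrative, not from the
change itself):
```
#include <curl/curl.h>

static void wait_for_activity(CURLM *multi)
{
  long timeout_ms = -1;
  int numfds = 0;

  /* ask libcurl how long we may sleep; now sensible during async resolves */
  curl_multi_timeout(multi, &timeout_ms);
  if(timeout_ms < 0)
    timeout_ms = 1000;  /* no suggestion: fall back to a fixed cap */

  curl_multi_wait(multi, NULL, 0, (int)timeout_ms, &numfds);
}
```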
Closes#2419
If a connection has received a GOAWAY frame while not being used, the
function now reads frames off the connection before trying to reuse it
to avoid reusing connections the server has told us not to use.
Reported-by: Alex Baines
Fixes#1967Closes#2402
Currently CMake cannot detect Brotli support. This adds detection of the
libraries and associated header files. It also adds this to the
generated config.
Closes#2392
This patch adds CURLOPT_DNS_SHUFFLE_ADDRESSES to explicitly request
shuffling of IP addresses returned for a hostname when there is more
than one. This is useful when the application knows that a round robin
approach is appropriate and is willing to accept the consequences of
potentially discarding some preference order returned by the system's
implementation.
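A minimal sketch of enabling the option on an easy handle:
```
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* shuffle the resolved address list before each connect attempt */
    curl_easy_setopt(curl, CURLOPT_DNS_SHUFFLE_ADDRESSES, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```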
Closes#1694
When a transfer is requested to get done and is put in the pending
queue because of connection limits (total or per-host), libcurl would
previously very aggressively retry *ALL* pending transfers to get
them transferring. That was very time consuming.
By reducing the aggressiveness in how pending transfers are retried, we
waste MUCH less time on putting transfers back into pending again.
Some test cases got a factor 30(!) speed improvement with this change.
Reported-by: Cyril B
Fixes#2369Closes#2383
Especially unpausing a transfer might have to move the socket back to the
"currently used sockets" hash to get monitored. Otherwise it would never get
any more data and get stuck. Easily triggered with pausing using the
multi_socket API.
Reported-by: Philip Prindeville
Bug: https://curl.haxx.se/mail/lib-2018-03/0048.html
Fixes#2393Closes#2391
Due to very frequent updates of the rate limit "window", it could
attempt to rate limit within the same milliseconds and that then made
the calculations wrong, leading to it not behaving correctly on very
fast transfers.
This new logic updates the rate limit "window" to be no shorter than the
last three seconds and only updating the timestamps for this when
switching between the states TOOFAST/PERFORM.
Reported-by: 刘佩东
Fixes#2386Closes#2388
Refuse to operate when given path components featuring byte values lower
than 32.
Previously, inserting a %00 sequence early in the directory part when
using the 'singlecwd' ftp method could make curl write a zero byte
outside of the allocated buffer.
Test case 340 verifies.
CVE-2018-1000120
Reported-by: Duy Phan Thanh
Bug: https://curl.haxx.se/docs/adv_2018-9cd6.html
gss_seal/gss_unseal have been deprecated in favor of
gss_wrap/gss_unwrap with GSS-API v2 from January 1997 [1]. The first
version of "The Kerberos Version 5 GSS-API Mechanism" [2] from June
1996 already says "GSS_Wrap() (formerly GSS_Seal())" and
"GSS_Unwrap() (formerly GSS_Unseal())".
Use the nondeprecated functions to avoid deprecation warnings.
[1] https://tools.ietf.org/html/rfc2078
[2] https://tools.ietf.org/html/rfc1964
Closes https://github.com/curl/curl/pull/2356
On MinGW and Cygwin, GCC and clang have been complaining about macro
redefinitions since 4272a0b0fc. Fix this
by undefining the macros before redefining them as suggested in
https://github.com/curl/curl/pull/2269.
Suggested-by: Daniel Stenberg
When targeting x64, MinGW-w64 complains about conversions between
32-bit long and 64-bit pointers. Fix this by reusing the
GNUTLS_POINTER_TO_SOCKET_CAST / GNUTLS_SOCKET_TO_POINTER_CAST logic
from gtls.c, moving it to warnless.h as CURLX_POINTER_TO_INTEGER_CAST /
CURLX_INTEGER_TO_POINTER_CAST.
Closes https://github.com/curl/curl/pull/2341
- Add new option CURLOPT_RESOLVER_START_FUNCTION to set a callback that
will be called every time before a new resolve request is started
(i.e. before a host is resolved) with a pointer to backend-specific
resolver data. Currently this is only useful for ares. (See the sketch
after this list.)
- Add new option CURLOPT_RESOLVER_START_DATA to set a user pointer to
pass to the resolver start callback.
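A sketch of the new pair of options in use; the callback prototype shown
here is an assumption based on the description above (backend-specific
pointer, a reserved pointer and the user pointer):
```
#include <curl/curl.h>
#include <stdio.h>

static int resolver_start_cb(void *resolver_state, void *reserved,
                             void *userdata)
{
  (void)resolver_state;
  (void)reserved;
  fprintf(stderr, "starting resolve (%s)\n", (const char *)userdata);
  return 0;  /* 0 lets the resolve proceed */
}

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_RESOLVER_START_FUNCTION, resolver_start_cb);
    curl_easy_setopt(curl, CURLOPT_RESOLVER_START_DATA, "my-tag");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```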
Closes https://github.com/curl/curl/pull/2311
- In keeping with the naming of our other connect timeout options rename
CURLOPT_HAPPY_EYEBALLS_TIMEOUT to CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS.
This change adds the _MS suffix since the option expects milliseconds.
This is more intuitive for our users since other connect timeout options
that expect milliseconds use _MS such as CURLOPT_TIMEOUT_MS,
CURLOPT_CONNECTTIMEOUT_MS, CURLOPT_ACCEPTTIMEOUT_MS.
The tool option already uses an -ms suffix, --happy-eyeballs-timeout-ms.
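A minimal sketch using the renamed option (the 300 ms value is arbitrary
for the example):
```
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* wait 300 ms for the first address family before racing the next */
    curl_easy_setopt(curl, CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS, 300L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```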
Follow-up to 2427d94 which added the lib and tool option yesterday.
Ref: https://github.com/curl/curl/pull/2260
- Add new option CURLOPT_HAPPY_EYEBALLS_TIMEOUT to set libcurl's happy
eyeball timeout value.
- Add new optval macro CURL_HET_DEFAULT to represent the default happy
eyeballs timeout value (currently 200 ms).
- Add new tool option --happy-eyeballs-timeout-ms to expose
CURLOPT_HAPPY_EYEBALLS_TIMEOUT. The -ms suffix is used because the
other -timeout options in the tool expect seconds not milliseconds.
Closes https://github.com/curl/curl/pull/2260
This enables users to preresolve but still take advantage of happy
eyeballs and trying multiple addresses if some are not connecting.
Ref: https://github.com/curl/curl/pull/2260
Previously, it would only check for max length if the existing alloc
buffer was too small to fit it, which often would make the header still
get used.
Reported-by: Guido Berhoerster
Bug: https://curl.haxx.se/mail/lib-2018-02/0056.html
Closes#2315
The list of state names (used in debug builds) was out of sync in
relation to the list of states (used in all builds).
I now added an assert to make sure the sizes of the two lists match, to
aid in detecting this mistake better in the future.
Regression since c92d2e14cf, shipped in 7.58.0.
Reported-by: Somnath Kundu
Fixes#2312Closes#2313
RFC 5321 4.1.1.4 specifies the CRLF terminating the DATA command
should be taken into account when chasing the <CRLF>.<CRLF> end marker.
Thus a leading dot character in data is also subject to escaping.
Test 911 and the test server are adapted to this situation.
New tests 951 and 952 check proper handling of initial dot in data.
Closes#2304
Some servers return a "content-encoding" header with a non-standard
"none" value.
Add "none" as an alias to "identity" as a work-around, to avoid
unrecognised content encoding type errors.
Signed-off-by: Mohammad AlSaleh <CE.Mohammad.AlSaleh@gmail.com>
Closes https://github.com/curl/curl/pull/2298
Windows 10.0.17061 SDK introduces support for Unix Domain Sockets.
Added the necessary include file to curl_addrinfo.c.
Note: The SDK (which is considered beta) has to be installed, VS 2017
project file has to be re-targeted for Windows 10.0.17061 and #define
enabled in config-win32.h.
When peer verification is disabled, calling
SSL_CTX_load_verify_locations is not necessary. Only call it when
verification is enabled to save resources and increase performance.
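A sketch of the pattern (variable and function names are illustrative, not
those used in the OpenSSL backend):
```
#include <openssl/ssl.h>

static int setup_ca_verify(SSL_CTX *ctx, int verifypeer,
                           const char *cafile, const char *capath)
{
  if(!verifypeer)
    return 1;  /* skip loading CA locations when verification is off */

  /* returns 1 on success, 0 on failure */
  return SSL_CTX_load_verify_locations(ctx, cafile, capath);
}
```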
Closes#2290
Reduce code duplication by making Curl_mime_contenttype available and
used by the formdata function. This also makes the formdata function
recognize a larger set of file extensions by default.
PR #2280 brought this to my attention.
Closes#2282
Whenever an expected pattern syntax rule cannot be matched, the
character starting the rule loses its special meaning and the parsing
is resumed:
- backslash at the end of pattern string matches itself.
- Error in [:keyword:] results in set containing :\[dekorwy.
Unit test 1307 updated for this new situation.
Closes#2273
... since the libc-provided ones are locale dependent in a way we don't
want. Also, the "native" isalnum() (for example) works differently on
different platforms which caused test 1307 failures on macos only.
Closes#2269
Make curl_getdate() handle dates before 1970 as well (returning negative
values).
Make test 517 test dates for 64 bit time_t.
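A small sketch of the new behavior (the date string is just an example):
```
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  /* a date before the epoch now yields a negative value */
  time_t t = curl_getdate("Sun, 06 Nov 1960 08:49:37 GMT", NULL);
  printf("%lld\n", (long long)t);
  return 0;
}
```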
This fixes bug (3) mentioned in #2238
Closes#2250
... unless CURLOPT_UNRESTRICTED_AUTH is set to allow them. This matches how
curl already handles Authorization headers created internally.
Note: this changes behavior slightly, for the sake of reducing mistakes.
Added test 317 and 318 to verify.
Reported-by: Craig de Stigter
Bug: https://curl.haxx.se/docs/adv_2018-b3bf.html
In case an identity didn't match[0], the state machine would fail in
state SSH_AUTH_AGENT instead of progressing to the next identity in
ssh-agent. As a result, ssh-agent authentication only worked if the
identity required happened to be the first added to ssh-agent.
This was introduced as part of commit c4eb10e2f0, which
stated that the "else" statement was required to prevent getting stuck
in state SSH_AUTH_AGENT. Given the state machine's logic and libssh2's
interface I couldn't see how this could happen or reproduce it and I
also couldn't find a more detailed description of the problem which
would explain a test case to reproduce the problem this was supposed to
fix.
[0] libssh2_agent_userauth returning LIBSSH2_ERROR_AUTHENTICATION_FAILED
Closes#2248
Follow-up to 84fcaa2e7. libressl does not have the API even though it claims
to be a late OpenSSL version...
Fixes#2246Closes#2247
Reported-by: jungle-boogie on github
1. don't use "ULL" suffix since unsupported in older MSVC
2. use curl_off_t instead of custom long long ifdefs
3. make get_posix_time() not do unaligned data access
Fixes#2211Closes#2240
Reported-by: Chester Liu
A mime tree attached to an easy handle using CURLOPT_MIMEPOST is
strongly bound to the handle: there is a pointer to the easy handle in
each item of the mime tree and following the parent pointer list
of mime items ends in a dummy part stored within the handle.
Because of this binding, a mime tree cannot be shared between different
easy handles, thus it needs to be cloned upon easy handle duplication.
There is no way for the caller to get the duplicated mime tree
handle: it is then set to be automatically destroyed upon freeing the
new easy handle.
New test 654 checks proper mime structure duplication/release.
Add a warning note in curl_mime_data_cb() documentation about sharing
user data between duplicated handles.
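A sketch of why the cloning matters (illustrative use of the public mime
API; error checks omitted):
```
#include <curl/curl.h>

int main(void)
{
  CURL *orig = curl_easy_init();
  curl_mime *mime = curl_mime_init(orig);
  curl_mimepart *part = curl_mime_addpart(mime);
  CURL *copy;

  curl_mime_name(part, "field");
  curl_mime_data(part, "hello", CURL_ZERO_TERMINATED);
  curl_easy_setopt(orig, CURLOPT_MIMEPOST, mime);

  copy = curl_easy_duphandle(orig); /* the mime tree is cloned here */

  curl_easy_cleanup(copy);  /* the clone's mime tree is freed automatically */
  curl_easy_cleanup(orig);
  curl_mime_free(mime);     /* the caller still owns the original tree */
  return 0;
}
```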
Closes#2235
Prior to this change the stored byte count of each trailer was
miscalculated and 1 less than required. It appears any trailer
after the first that was passed to Curl_client_write would be truncated
or corrupted as well as the size. Potentially the size of some
subsequent trailer could be erroneously extracted from the contents of
that trailer, and since that size is used by client write an
out-of-bounds read could occur and cause a crash or be otherwise
processed by client write.
The bug appears to have been born in 0761a51 (precedes 7.49.0).
Closes https://github.com/curl/curl/pull/2231
The decoding loop implementation did not handle the case where all
received data is consumed by the Brotli decoder while the size of decoded
data internally held by the Brotli decoder is greater than CURL_MAX_WRITE_SIZE.
For content with an unencoded length greater than CURL_MAX_WRITE_SIZE this
can result in the loss of data at the end of the content.
Closes#2194
Move curl_mime_initpart() and curl_mime_cleanpart() calls to lower-level
functions dealing with UserDefined structure contents.
This avoids memory leakages on curl-generated part mime headers.
New test 2073 checks this using the cli tool --next option: it
triggers a valgrind error if the bug is present.
Bug: https://curl.haxx.se/mail/lib-2017-12/0060.html
Reported-by: Martin Galvan
- When zlib version is < 1.2.0.4, process gzip trailer before considering
extra data as an error.
- Inflate with Z_BLOCK instead of Z_SYNC_FLUSH to maximize correct data
and minimize corrupt data output.
- Do not try to restart deflate decompression in raw mode if output has
started or if the leading data is not available anymore.
- New test 232 checks inflating raw-deflated content.
Closes#2068
scan-build would warn on a potential access of an uninitialized
buffer. I deem it a false positive and had to add this somewhat ugly
work-around to silence it.
Fixed undefined symbol of getenv() which does not exist when compiling
for Windows 10 App (CURL_WINDOWS_APP). Replaced getenv() with
curl_getenv() which is aware of getenv() absence when CURL_WINDOWS_APP
is defined.
Closes#2171
Prune the DNS cache immediately after the dns entry is unlocked in
multi_done. Timed out entries will then get discarded in a more orderly
fashion.
Test 506 is updated
Reported-by: Oleg Pudeyev
Fixes#2169Closes#2170
Prior to this change SSLKEYLOGFILE used line buffering on WIN32 just
like it does for other platforms. However, the Windows CRT does not
actually support line buffering (_IOLBF) and will use full buffering
(_IOFBF) instead. We can't use full buffering because multiple processes
may be writing to the file and that could lead to corruption, and since
full buffering is the only buffering available this commit disables
buffering for Windows SSLKEYLOGFILE entirely (_IONBF).
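The buffering choice boils down to a setvbuf() call along these lines (the
FILE pointer name is illustrative):
```
#include <stdio.h>

static void set_keylog_buffering(FILE *keylog_fp)
{
#ifdef _WIN32
  setvbuf(keylog_fp, NULL, _IONBF, 0);    /* no buffering on Windows */
#else
  setvbuf(keylog_fp, NULL, _IOLBF, 4096); /* line buffering elsewhere */
#endif
}
```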
Ref: https://github.com/curl/curl/pull/1346#issuecomment-350530901
These are OS/2-specific things added to the code in the year 2000. They
were always ugly. If there's any user left, they still don't need it
done this way.
Closes#2166
- Allow proxy_ssl to be checked for pending data even when connssl does
not yet have an SSL handle.
This change is for posterity. Currently there doesn't seem to be a code
path that will cause a pending data check when proxyssl could have
pending data and the connssl handle doesn't yet exist [1].
[1]: Recall that an https proxy connection starts out in connssl but if
the destination is also https then the proxy SSL backend data is moved
from connssl to proxyssl, which means connssl handle is temporarily
empty until an SSL handle for the destination can be created.
Ref: https://github.com/curl/curl/commit/f4a6238#commitcomment-24396542
Closes https://github.com/curl/curl/pull/1916
Connections that are used for HTTP/1.1 Pipelining or HTTP/2 multiplexing
only get additional transfers added to them if the existing connection
is held by the same multi or easy handle. libcurl does not support doing
HTTP/2 streams in different threads using a shared connection.
Closes#2152
If the lock is released before the dealings with the bundle are over, it may
have been changed by another thread in the meantime.
Fixes#2132Fixes#2151Closes#2139
For pop3/imap/smtp, added test 891 to somewhat verify the pop3
case.
For this, I enhanced the pingpong test server to be able to send back
responses with LF-only instead of always using CRLF.
Closes#2150
Figured out while reviewing code in the libssh backend. The pointer was
checked for NULL after having been dereferenced, so we know the check would
always be true or the code would've already crashed.
Pointed-out-by: Nikos Mavrogiannopoulos
Bug #2143
Closes#2148
The previous code was incorrectly following the libssh2 error detection
for libssh2_sftp_statvfs, which is not correct for libssh's sftp_statvfs.
Fixes#2142
Signed-off-by: Nikos Mavrogiannopoulos <nmav@gnutls.org>
The SFTP back-end supports asynchronous reading only, limited
to 32-bit file length. Writing is synchronous with no other
limitations.
This also brings keyboard-interactive authentication.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@gnutls.org>
That also updates tests to expect the right error code.
The libssh2 back-end returns a CURLE_SSH error if the remote file
is not found. Expect instead CURLE_REMOTE_FILE_NOT_FOUND,
which is sent by the libssh backend.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@redhat.com>
libssh is an alternative library to libssh2.
https://www.libssh.org/
That patch set also introduces support for ECDSA and
ed25519 keys, as well as GSSAPI authentication.
Signed-off-by: Nikos Mavrogiannopoulos <nmav@redhat.com>
Absent any 'symbol map' or script to limit what gets exported, static
linking of libraries previously resulted in a libcurl with curl's and
those other symbols being (re-)exported.
This did not happen if 'versioned symbols' were enabled (which is not
the default) because then a version script is employed.
This limits exports to everything starting with 'curl_*', which is
what "libcurl.vers" exports.
This avoids strange side-effects such as with mixing methods
from system libraries and those erroneously offered by libcurl.
Closes#2127
Originally, my idea was to allocate the two structures (or more
precisely, the connectdata structure and the four SSL backend-specific
structures required for ssl[0..1] and proxy_ssl[0..1]) in one go, so
that they all could be free()d together.
However, getting the alignment right is tricky. Too tricky.
So let's just bite the bullet and allocate the SSL backend-specific
data separately.
As a consequence, we now have to be very careful to release the memory
allocated for the SSL backend-specific data whenever we release any
connectdata.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Closes#2119
commit d3ab7c5a21 broke the boringssl build since it doesn't have
RSA_flags(), so we disable that code block for boringssl builds.
Reported-by: W. Mark Kubacki
Fixes#2117
This bit is no longer used. It is not clear what it meant for users to
"init the TLS" in a world with different TLS backends and since the
introduction of multissl, libcurl didn't work properly if initialized without
this bit set.
Not a single user responded to the call for users of it:
https://curl.haxx.se/mail/lib-2017-11/0072.html
Reported-by: Evgeny Grin
Assisted-by: Jay Satiro
Fixes#2089Fixes#2083Closes#2107
- Align the array of ssl_backend_data on a max 32 byte boundary.
8 is likely to be ok but I went with 32 for posterity should one of
the ssl_backend_data structs change to contain a larger sized variable
in the future.
Prior to this change (since dev 70f1db3, release 7.56) the connectdata
structure was undersized by 4 bytes in 32-bit builds with ssl enabled
because long long * was mistakenly used for alignment instead of
long long, with the intention being an 8 byte boundary. Also long long
may not be an available type.
The undersized connectdata could lead to oob read/write past the end in
what was expected to be the last 4 bytes of the connection's secondary
socket https proxy ssl_backend_data struct (the secondary socket in a
connection is used by ftp, others?).
Closes https://github.com/curl/curl/issues/2093
CVE-2017-8818
Bug: https://curl.haxx.se/docs/adv_2017-af0a.html
With this check present, scan-build warns that we might dereference this
pointer in other places where it isn't first checked for NULL. Thus, if it
*can* be NULL we have a problem on a few places. However, this pointer
should not be possible to be NULL here so I remove the check and thus
also three different scan-build warnings.
Closes#2111
* LOTS of comment updates
* explicit error for SMB shares (e.g. "file:////share/path/file")
* more strict handling of authority (i.e. "//localhost/")
* now accepts dodgy old "C:|" drive letters
* more precise handling of drive letters in and out of Windows
(especially recognising both "file:c:/" and "file:/c:/")
Closes#2110
The new API added in Linux 4.11 only requires setting a socket option
before connecting, without the whole sendto() machinery.
Notably, this makes it possible to use TFO with SSL connections on Linux
as well, without the need to mess around with OpenSSL (or whatever other
SSL library) internals.
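A sketch of the Linux 4.11 approach, assuming the TCP_FASTOPEN_CONNECT
constant is available (the fallback define uses the value from the Linux
uapi headers):
```
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

#ifndef TCP_FASTOPEN_CONNECT
#define TCP_FASTOPEN_CONNECT 30  /* from the Linux uapi headers */
#endif

static int enable_tfo(int sockfd)
{
  int on = 1;
  /* set before connect(); the kernel then carries data in the SYN */
  return setsockopt(sockfd, IPPROTO_TCP, TCP_FASTOPEN_CONNECT,
                    &on, sizeof(on));
}
```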
Closes#2056
Host names like "127.0.0.1 moo" would otherwise be accepted by some
getaddrinfo() implementations.
Updated test 1034 and 1035 accordingly.
Fixes#2073Closes#2092
... so that IPv6 addresses can be passed like they can for connect-to
and how they're used in URLs.
Added test 1324 to verify
Reported-by: Alex Malinovich
Fixes#2087Closes#2091