Warning: this will make existing curl command lines that use metalink stop
working.
Reasons for removal:
1. We've found several security problems and issues involving the
metalink support in curl. The issues are not detailed here. When
working on those, it became apparent to the team that several of the
problems are due to the system design, metalink library API and what
the metalink RFC says. They are very hard to fix on the curl side
only.
2. The metalink usage with curl was only very briefly documented and was
not following the "normal" curl usage pattern in several ways, making
it surprising and non-intuitive, which could lead to further security
issues.
3. The metalink library was last updated 6 years ago and wasn't very
active in the years before that either. An unmaintained library means
there's a security problem waiting to happen. This is probably reason
enough.
4. Metalink requires an XML parsing library, which is complex code (even
the smaller alternatives) and to this day often gets security
updates.
5. Metalink is not a widely used curl feature. In the 2020 curl user
survey, only 1.4% of the respondents said that they were using it. In
2021 that number was 1.2%. Searching the web also shows very few
traces of it being used, even with other tools.
6. The torrent format and associated technology clearly won for
downloading large files from multiple sources in parallel.
Closes #7176
The 'hyper mode' makes line-ending checks in the test suite work when
hyper is used. Now it also requires that HTTP or HTTPS is mentioned as a
keyword for it to be enabled, so that it doesn't wrongly adjust tests
for other protocols.
This makes test 271 (TFTP) work again in hyper-enabled builds.
Closes #7185
The warning about missing entries in that file then doesn't require that
the Makefile has been regenerated, which was confusing.
The scan for the test number is a little more error-prone than before
(since it now doesn't actually verify that it is legitimate Makefile
syntax), but I think it is good enough.
Closes #7177
For options that pass in lists or strings that are subsequently parsed
and must be correct. This broadens the scope of the error code previously
known as CURLE_TELNET_OPTION_SYNTAX, but the old name is of course still
provided as a #define for existing applications.
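The broader replacement name is not spelled out here, but since the old one
remains available as a #define, existing checks keep compiling. Below is a
minimal sketch, under the assumption that a CURLOPT_TELNETOPTIONS entry
missing its '=' separator triggers the syntax error at transfer time:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* "BOGUS" lacks the NAME=VALUE form, assumed to be rejected as a
       syntax error when the transfer is attempted */
    struct curl_slist *opts = curl_slist_append(NULL, "BOGUS");
    CURLcode res;
    curl_easy_setopt(curl, CURLOPT_URL, "telnet://example.com/");
    curl_easy_setopt(curl, CURLOPT_TELNETOPTIONS, opts);
    res = curl_easy_perform(curl);
    if(res == CURLE_TELNET_OPTION_SYNTAX) /* old name, kept as a #define */
      fprintf(stderr, "badly formatted option string\n");
    curl_slist_free_all(opts);
    curl_easy_cleanup(curl);
  }
  return 0;
}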
Closes #7175
The previous strip also removed the CR, which turned out to be problematic.
valgrind.supp: add zstd suppression using hyper
Reported-and-analyzed-by: Kevin Burke
Fixes #7169
Closes #7171
Hyper returns the same error for wrong HTTP version as for negative
content-length. Test 178 verifies that negative content-length is
rejected, but the hyper backend will return a different error for it (and
without any helpful message explaining why the message was bad). It will
also not return any headers at all for the response, not even the ones
that arrived before the error.
Closes #7147
In some situations, it was possible that a transfer was set up to
use a specific IP version, but due to DNS caching or connection
reuse, it ended up using a different IP version than the one requested.
This commit changes the effect of CURLOPT_IPRESOLVE from simply
restricting address resolution to preventing a connection of the
wrong IP version from being chosen from the pool and restricting
which addresses may be used when establishing a new connection.
It is important that all address versions are resolved, even if not
used by that particular transfer, because the result is cached and
could be useful for a different transfer with a different
CURLOPT_IPRESOLVE setting.
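A minimal sketch of how an application pins a transfer to one IP version
(the URL and the chosen version are arbitrary); with this change the
restriction also holds when a pooled connection or cached DNS entry is
reused, not only when a new name is resolved:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* only use IPv6 addresses for this transfer */
    curl_easy_setopt(curl, CURLOPT_IPRESOLVE, (long)CURL_IPRESOLVE_V6);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}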
Closes #6853
Also added 'CURL_SMALLSENDS' to make Curl_write() send short packets,
which helped verify this even more.
Add test 363 to verify.
Reported-by: ustcqidi on github
Fixes #6950
Closes #7024
When hyper is used, it emits uppercase hexadecimal numbers for chunked
encoding lengths. Without hyper, lowercase hexadecimal numbers are used.
This change adds preprocessor statements to tests where this is an
issue, and adapts the fixtures to match.
Closes #6987
When a TLS server requests a client certificate during handshake and
none can be provided, libcurl now returns the new error code
CURLE_SSL_CLIENTCERT.
Only supported by Secure Transport and OpenSSL for TLS 1.3 so far.
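A minimal sketch of an application checking for the new code; the URL is a
hypothetical server that requests a client certificate in the handshake:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    CURLcode res;
    /* hypothetical host that requires a client certificate */
    curl_easy_setopt(curl, CURLOPT_URL, "https://client-cert.example/");
    res = curl_easy_perform(curl);
    if(res == CURLE_SSL_CLIENTCERT)
      fprintf(stderr, "server wanted a client certificate, none provided\n");
    curl_easy_cleanup(curl);
  }
  return 0;
}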
Closes #6721
A struct bufref holds a buffer pointer, a data size and a destructor.
When freed or its contents are changed, the previous buffer is implicitly
released by the associated destructor. The data size, although not used
internally, allows binary data support.
A unit test checks its handling methods: test 1661.
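The actual internal functions are not listed here; the following is an
illustrative sketch of the idea only (names and layout are assumptions,
not the real libcurl internals): a buffer pointer, a data size and a
destructor, where replacing the contents implicitly releases the
previously held buffer:

#include <stddef.h>

/* illustrative only: not the actual libcurl struct or function names */
struct bufref {
  void (*dtor)(void *);      /* releases the held buffer, may be NULL */
  const unsigned char *ptr;  /* the buffer itself */
  size_t len;                /* data size, allowing binary data */
};

static void bufref_init(struct bufref *br)
{
  br->dtor = NULL;
  br->ptr = NULL;
  br->len = 0;
}

/* replace the contents: the previous buffer is implicitly released by
   its associated destructor */
static void bufref_set(struct bufref *br, const void *ptr, size_t len,
                       void (*dtor)(void *))
{
  if(br->ptr && br->dtor)
    br->dtor((void *)br->ptr);
  br->ptr = ptr;
  br->len = len;
  br->dtor = dtor;
}

static void bufref_free(struct bufref *br)
{
  bufref_set(br, NULL, 0, NULL);
}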
Closes #6654
This reverts commit 1cba36d216.
CMake provides properties that can be set on a target to rename the
output artifact without changing the name of a target.
Ref: #6899
When the host name in a URL is given as an IPv4 numerical address, the
address can be specified with dotted numericals in four different ways:
a (a single 32-bit number), a.b (b holds the lowest 24 bits), a.b.c (c
holds the lowest 16 bits) or a.b.c.d, and each part can be specified in
decimal, octal (0-prefixed) or hexadecimal (0x-prefixed).
Instead of passing on the name as-is and leaving the handling to the
underlying name functions, which made them not work with c-ares but work
with getaddrinfo, this change now makes the curl URL API itself detect
and "normalize" host names specified as IPv4 numericals.
The WHATWG URL Spec says this is an okay way to specify a host name in a
URL. RFC 3986 does not allow them, but curl didn't prevent them before
and it seems other RFC 3986-using tools have not either. Host names used
like this are widely supported by other tools as well due to the
handling being done by getaddrinfo and friends.
I decided to add the functionality into the URL API itself so that all
users of these functions get the benefits, for example when wanting to
compare two URLs. Also, it makes curl built to use c-ares support them
as well, which makes curl builds more consistent.
The normalization makes HTTPS and virtual hosted HTTP work fine even
when curl gets the address specified using one of the "obscure" formats.
Test 1560 is extended to verify.
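A minimal sketch through the URL API, under the assumption (based on the
description above) that the a.b form "0x7f.1" normalizes to 127.0.0.1:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURLU *h = curl_url();
  char *host = NULL;

  /* "0x7f.1" is the a.b form: 0x7f becomes the first octet and 1 fills
     the remaining 24 bits, so the expected normalized host is 127.0.0.1
     (an assumption based on the description above) */
  if(!curl_url_set(h, CURLUPART_URL, "http://0x7f.1/index.html", 0) &&
     !curl_url_get(h, CURLUPART_HOST, &host, 0))
    printf("host: %s\n", host);

  curl_free(host);
  curl_url_cleanup(h);
  return 0;
}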
Fixes #6863
Closes #6871