Users of the SMB tests will have to install impacket manually.
Reasoning: our in-tree version of impacket was quite outdated
and only compatible with Python 2 which is already end-of-life.
Upgrading to Python 3 and a compatible impacket version would
require importing additional Python-only and CPython-extension
dependencies, which would have hindered portability enormously.
Closes#5094
When extracting a <section> <part> and there's no </part> before
</section>, this now outputs an error and returns a wrong string to
make users spot the mistake.
Ref: #5070
Closes#5071
Even though the existing code can be fixed to run on Python 3, the
tests will fail due to the Unicode transition making the protocol data invalid.
Follow-up to ee63837
Closes#5085
Recent gcc warns when the byte count of strncpy() equals the destination
buffer size. Since the destination buffer is cleared beforehand and
the source string is always shorter, reducing the byte count by one
silences the warning without affecting the result.
Closes#5059
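For illustration, a minimal C sketch of the pattern being described (the
function and buffer names are made up; this is not curl's actual code):

  #include <string.h>

  void copy_label(char *dest, size_t destsize, const char *src)
  {
    memset(dest, 0, destsize);         /* destination is cleared beforehand */
    strncpy(dest, src, destsize - 1);  /* one byte less than the buffer size,
                                          which silences the warning and keeps
                                          the terminating zero byte intact */
  }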
When using maximum code optimization level (-O3), valgrind wrongly
detects uses of uninitialized values in strcmp().
Preset buffers with all zeroes to avoid that.
This test does A LOT of *wakeup() calls and then calls curl_multi_poll()
twice. The first *poll() is then expected to return early and the second
not - as the first is supposed to drain the socketpair pipe.
It turns out however that when given "excessive" amounts of writes to
the pipe, some operating systems (Solaris-based ones are known) will
return EAGAIN before the pipe is drained, which in our test case causes
the second *poll() call to also abort early.
This change attempts to avoid the OS-specific behaviors in the test by
reducing the number of wakeup calls from 1234567 to 10.
Reported-by: Andy Fiddaman
Fixes#5037
Closes#5058
New test 666 checks this is effective.
As upload buffer size is significant in this kind of tests, shorten it
in similar test 652.
Fixes#4860
Closes#4833
Reported-by: RuurdBeerstra on github
Input buffer filling may delay the data sending if data reads are slow.
To overcome this problem, file and callback data reads do not accumulate
in buffer anymore. All other data (memory data and mime framing) are
considered as fast and still concatenated in buffer.
As this may highly impact performance in terms of data overhead, an early
end of part data check is added to spare a read call.
When encoding a part's data, an encoder may require more bytes than made
available by a single read. In this case, the above rule does not apply
and reads are performed until the encoder is able to deliver some data.
Tests 643, 644, 645, 650 and 654 have been adapted to the output data
changes, with test data size reduced to avoid the boredom of long lists of
1-byte chunks in verification data.
New test 667 checks mimepost using single-byte read callback with encoder.
New test 668 checks the end of part data early detection.
Fixes#4826
Reported-by: MrdUkk on github
In case a read callback returns a status (pause, abort, eof,
error) instead of a byte count, drain the bytes read so far but
remember this status for further processing.
Takes care of not losing data when pausing, and properly resume a
paused mime structure when requested.
New tests 670-673 check unpausing cases, with easy or multi
interface and mime or form api.
Fixes#4813
Reported-by: MrdUkk on github
New test 666 checks this is effective.
As upload buffer size is significant in this kind of tests, shorten it
in similar test 652.
Fixes#4860
Reported-by: RuurdBeerstra on github
Input buffer filling may delay the data sending if data reads are slow.
To overcome this problem, file and callback data reads do not accumulate
in buffer anymore. All other data (memory data and mime framing) are
considered as fast and still concatenated in buffer.
As this may highly impact performance in terms of data overhead, an early
end of part data check is added to spare a read call.
When encoding a part's data, an encoder may require more bytes than made
available by a single read. In this case, the above rule does not apply
and reads are performed until the encoder is able to deliver some data.
Tests 643, 644, 645, 650 and 654 have been adapted to the output data
changes, with test data size reduced to avoid the boredom of long lists of
1-byte chunks in verification data.
New test 664 checks mimepost using single-byte read callback with encoder.
New test 665 checks the end of part data early detection.
Fixes#4826
Reported-by: MrdUkk on github
In case a read callback returns a status (pause, abort, eof,
error) instead of a byte count, drain the bytes read so far but
remember this status for further processing.
Takes care of not losing data when pausing, and properly resume a
paused mime structure when requested.
New tests 670-673 check unpausing cases, with easy or multi
interface and mime or form api.
Fixes#4813
Reported-by: MrdUkk on github
Closes#4833
This reduces the HTTP/2 window size to 32 MB since libcurl might have to
buffer up to this amount of data in memory and yet we don't want it set
lower to potentially impact transfer performance on high speed networks.
Requires nghttp2 commit b3f85e2daa629
(https://github.com/nghttp2/nghttp2/pull/1444) to work properly, to end
up in the next release after 1.40.0.
Fixes#4939
Closes#4940
It is still possible to override the executable to run during the test,
using the <tool> tag, but this patch removes the requirement that the
tag must be present for unit tests.
It also removes the possibility of human error when existing test cases
are used as the basis for new tests, as recently witnessed in 81c37124.
Reviewed-by: Daniel Stenberg
Closes#4976
When doing a request with a body + Expect: 100-continue and the server
responds with a 417, the same request will be retried immediately
without the Expect: header.
Added test 357 to verify.
Also added a control instruction to tell the sws test server to not read
the request body if Expect: is present, which the new test 357 uses.
Reported-by: bramus on github
Fixes#4949
Closes#4964
Note: The RCPT TO command isn't required to advertise to the server that
it contains UTF-8 characters; instead, the server is told via the MAIL
command that a mail may contain UTF-8 in any envelope command.
Support the SMTPUTF8 extension when sending mailbox information in the
MAIL command (FROM and AUTH parameters). Non-ASCII domain names will
be ACE encoded, if IDN is supported, whilst non-ASCII characters in
the local address part are passed to the server.
Reported-by: ygthien on github
Fixes#4828
The dot character between the host and the tld was not being escaped,
which meant it specified a match of 'any' character rather than an
explicit dot separator.
Additionally removed the dot character from the host name as it allowed
the following to be specified as a valid address in our test cases:
<bad@example......com>
Both are typos from 98f7ca7 and 8880f84 :(
I can't remember whether my intention was to allow sub-domains to be
specified in the host or not with these additional dots, but placing
it outside of the host means it can only be specified once per domain,
and by placing a + after the new grouping, support for sub-domains is
kept.
Closes#4912
The alt-svc cache survives a call to curl_easy_reset fine, but the file
name to use for saving the cache was cleared. Now the alt-svc cache has
a copy of the file name to survive handle resets.
Added test 1908 to verify.
Reported-by: Craig Andrews
Fixes#4898
Closes#4902
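A hedged sketch of the scenario the fix covers (the file name and URL are
examples only, error checks omitted):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_ALTSVC, "altsvc-cache.txt");
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_perform(curl);
      curl_easy_reset(curl);   /* the cache and its file name now survive this */
      curl_easy_cleanup(curl); /* cache expected to be saved to the file here */
    }
    return 0;
  }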
RFC 7616 section 3.4 (The Authorization Header Field) states that "For
historical reasons, a sender MUST NOT generate the quoted string syntax
for the following parameters: algorithm, qop, and nc". This removes the
quoting for the algorithm parameter.
Reviewed-by: Steve Holme
Closes#4890
... as this is already done much earlier in the URL parser.
Also add test case 894 that verifies that pop3 with an encoded CR in
the user name is rejected.
Closes#4887
In the "scheme-less" parsing case, we need to strip off credentials
first before we guess scheme based on the host name!
Assisted-by: Jay Satiro
Fixes#4856
Closes#4857
Previously it was stored in a global state which contributed to
curl_global_init's thread unsafety. This boolean is now instead figured
out in curl_multi_init() and stored in the multi handle. Slightly less
efficient, but thread safe.
Closes#4851
- Add new error code CURLE_QUIC_CONNECT_ERROR for QUIC connection
errors.
Prior to this change CURLE_FAILED_INIT was used, but that was not
correct.
Closes https://github.com/curl/curl/pull/4754
When using randomized features of runtests (-R and --shallow) it is
useful to have a fixed random seed to make sure for example extra
commits in a branch or a rebase won't change the seed that would make
repeated runs work differently.
As it is also useful to change seed sometimes, the default seed is now
determined based on the current month (and first line curl -V
output). When the month changes, so will the random seed.
The specific seed is also shown in the standard test suite top header
and it can be set explicitly with the new --seed=[num] option so that the
exact order of a previous run can be achieved.
Closes#4734
It was removed for output containing ' =' via `s/ =.*//`. With classic
MinGW, this made lines with `free()` end with CRLF, but lines with e.g.
`malloc()` end with only LF. The tests expect LF only.
Closes https://github.com/curl/curl/pull/4788
Previously it would end up with an uninitialized memory buffer that
would lead to a crash or junk getting output.
Added test 1271 to verify.
Reported-by: Brian Carpenter
Closes#4786
Prior to this change the swsbounce check in service_connection could
fail because prevtestno and prevpartno were not set, which would cause
the wrong response data to be sent to some tests and cause them to fail.
Ref: https://github.com/curl/curl/pull/4717#issuecomment-570240785
Prior to this change tests that required NTLM feature did not require
SSL feature.
There are pending changes to cmake builds that will allow enabling NTLM
in non-SSL builds in Windows. In that case the NTLM auth strings created
are different from what is expected by the NTLM tests and they fail:
"The issue with NTLM is that previous non-SSL builds would not enable
NTLM and so the NTLM tests would be skipped."
Assisted-by: marc-groundctl@users.noreply.github.com
Ref: https://github.com/curl/curl/pull/4717#issuecomment-566218729
Closes https://github.com/curl/curl/pull/4768
Even if the initial request line wasn't found. With the fix to 1455, the
test number is now detected correctly.
(Problem found when running tests in random order.)
Closes#4744
On my current Debian Unstable with libidn2 2.2.0, I get an error if
LC_ALL is set to blank. Then curl errors out with:
curl: (3) Failed to convert www.åäö.se to ACE; could not convert string to UTF-8
Closes#4738
Fixup the test to instead not compare the port number. It sometimes
caused problems like this:
"curl: (45) bind failed with errno 98: Address already in use"
Closes#4733
When set, shallow mode limits runtests -t to make no more than NUM fails
per test case. If more are found, it will randomly discard entries until
the number is right. The random seed can also be set.
This is particularly useful when running MANY tests as then most torture
failures will already fail the same functions over and over and make the
total operation painfully tedious.
Closes#4699
This enables the use of Windows Subsystem for Linux (WSL) to run the
testsuite against Windows binaries while using Linux servers.
This commit introduces the following environment variables:
- CURL_TEST_EXE_EXT: set the executable extension for all components
- CURL_TEST_EXE_EXT_TOOL: set it for the curl tool only
- CURL_TEST_EXE_EXT_SSH: set it for the SSH tools only
- CURL_TEST_EXE_EXT_SRV: set it for the test servers only
Later testcurl.pl could be adjusted to make use of those variables.
(This is one of several commits to support use of WSL for the tests.)
Closes https://github.com/curl/curl/pull/3899
Keys created on Windows Subsystem for Linux (WSL) require it for some
reason.
(This is one of several commits to support use of WSL for the tests.)
Ref: https://github.com/curl/curl/pull/3899
Bash in Windows Subsystem for Linux (WSL) requires it for some reason.
(This is one of several commits to support use of WSL for the tests.)
Ref: https://github.com/curl/curl/pull/3899
It could accidentally let the connection get used by more than one
thread, leading to double-free and more.
Reported-by: Christopher Reid
Fixes#4544
Closes#4557
Also, use `CURLRES_IPV6` only for actual DNS resolution, not for IPv6
address support. This makes it possible to connect to IPv6 literals by
setting `ENABLE_IPV6` even without `getaddrinfo` support. It also fixes
the CMake build when using the synchronous resolver without
`getaddrinfo` support.
Closes https://github.com/curl/curl/pull/4662
- Disable warning C4127 "conditional expression is constant" globally
in curl_setup.h for when building with Microsoft's compiler.
This mainly affects building with the Visual Studio project files found
in the projects dir.
Prior to this change the cmake and winbuild build systems already
disabled 4127 globally for when building with Microsoft's compiler.
Also, 4127 was already disabled for all build systems in the limited
circumstance of the WHILE_FALSE macro which disabled the warning
specifically for while(0). This commit removes the WHILE_FALSE macro and
all other cruft in favor of disabling globally in curl_setup.
Background:
We have various macros that cause 0 or 1 to be evaluated, which would
cause warning C4127 in Visual Studio. For example this causes it:
#define Curl_resolver_asynch() 1
Full behavior is not clearly defined and is inconsistent across versions.
However it is documented that since VS 2015 Update 3 Microsoft has
addressed this somewhat but not entirely, not warning on while(true) for
example.
Prior to this change some C4127 warnings occurred when I built with
Visual Studio using the generated projects in the projects dir.
Closes https://github.com/curl/curl/pull/4658
This commit adds curl_multi_wakeup() which was previously in the TODO
list under the curl_multi_unblock name.
On some platforms and with some configurations this feature might not be
available or can fail, in these cases a new error code
(CURLM_WAKEUP_FAILURE) is returned from curl_multi_wakeup().
Fixes#4418
Closes#4608
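A minimal usage sketch, assuming one thread waits in curl_multi_poll()
while another thread calls this helper (made-up name) to make it return
early:

  #include <curl/curl.h>

  void interrupt_wait(CURLM *multi)
  {
    CURLMcode rc = curl_multi_wakeup(multi);
    if(rc == CURLM_WAKEUP_FAILURE) {
      /* wakeup is not available or failed in this configuration */
    }
  }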
Improved estimation of expected_len and updated related comments;
increased strictness of QNAME-encoding, adding error detection for empty
labels and names longer than the overall limit; avoided treating DNAME
as unexpected;
updated unit test 1655 with a more thorough set of proofs and tests
Closes#4598
Regression from e59371a493 (7.67.0)
Added test 490, 491 and 492 to verify the functionality.
Reported-by: Kamil Dudka
Reported-by: Anderson Sasaki
Fixes#4588
Closes#4591
The disable-scan script used in test 1165 is extended to also verify
that the docs cover all used defines and all defines offered by
configure.
Reported-by: SLDiggie on github
Fixes#4545
Closes#4587
Classic MinGW / MSYS 1 doesn't support `MSYS2_ARG_CONV_EXCL`, so this
test unnecessarily failed when using `file:/` instead of `file:///`.
Closes https://github.com/curl/curl/pull/4554
- Use FORMAT_MESSAGE_IGNORE_INSERTS to ignore format specifiers in
Windows error strings.
Since we are not in control of the error code we don't know what
information may be needed by the error string's format specifiers.
Prior to this change Windows API error strings which contain specifiers
(think specifiers like similar to printf specifiers) would not be shown.
The FormatMessage Windows API call which turns a Windows error code into
a string could fail and set error ERROR_INVALID_PARAMETER if that error
string contained a format specifier. FormatMessage expects a va_list for
the specifiers, unless inserts are ignored in which case no substitution
is attempted.
Ref: https://devblogs.microsoft.com/oldnewthing/20071128-00/?p=24353
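An illustrative Windows-only sketch of the flag use (not curl's actual
code; the wrapper name is made up):

  #include <windows.h>

  DWORD winapi_error_string(DWORD err, char *buf, DWORD buflen)
  {
    /* FORMAT_MESSAGE_IGNORE_INSERTS: do not attempt %-insert substitution */
    return FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM |
                          FORMAT_MESSAGE_IGNORE_INSERTS,
                          NULL, err, 0, buf, buflen, NULL);
  }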
The URL parser function can't reject a bad IPv6 address properly when
curl was built without IPv6 support.
Reported-by: Marcel Raad
Fixes#4556
Closes#4572
This way, we always have exactly one slash after the host name, making
the tests pass when curl is compiled with the MSYS GCC.
Closes https://github.com/curl/curl/pull/4512
The MSYS system on Windows can run the test suite for curl built with
any toolset. When built with the MSYS GCC, curl uses Unix line endings,
while it uses Windows line endings when built with the MinGW GCC, and
`^O` reports 'msys' in both cases. Use the curl executable itself to
determine the line endings instead, which reports 'x86_64-pc-msys' when
built with the MSYS GCC.
Closes https://github.com/curl/curl/pull/4506
The URL extracted with CURLINFO_EFFECTIVE_URL was returned as given as
input in most cases, which made it not get a scheme prefixed like before
if the URL was given without one, and it didn't remove dotdot sequences
etc.
Added test case 1907 to verify that this now works as intended and as
before 7.62.0.
Regression introduced in 7.62.0
Reported-by: Christophe Dervieux
Fixes#4491
Closes#4493
This should again enable crazy-large download ranges of the style
[1-10000000] that otherwise easily ran out of memory starting in 7.66.0
when this new handle allocating scheme was introduced.
Reported-by: Peter Sumatra
Fixes#4393
Closes#4438
The 'share object' only sets the storage area for cookies. The "cookie
engine" still needs to be enabled or activated using the normal cookie
options.
This caused the curl command line tool to accidentally use cookies
without having been told to, since curl switched to using shared cookies
in 7.66.0.
Test 1166 verifies
Updated test 506
Fixes#4429
Closes#4434
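An illustrative sketch of the distinction (example URL, error checks
omitted): the share object only provides storage, the handle still has to
switch the cookie engine on, e.g. with CURLOPT_COOKIEFILE set to "".

  #include <curl/curl.h>

  int main(void)
  {
    CURLSH *share = curl_share_init();
    CURL *curl = curl_easy_init();
    curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);
    curl_easy_setopt(curl, CURLOPT_SHARE, share);
    curl_easy_setopt(curl, CURLOPT_COOKIEFILE, ""); /* activates the engine */
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    curl_share_cleanup(share);
    return 0;
  }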
The parser would check for a query part before fragment, which caused it
to do wrong when the fragment contains a question mark.
Extended test 1560 to verify.
Reported-by: Alex Konev
Fixes#4412
Closes#4413
CURLU_NO_AUTHORITY is intended for use with unknown schemes (i.e. not
"file:///") to override cURL's default demand that an authority exists.
Closes#4349
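A hedged sketch of passing the flag with the URL API (the URL is only an
example; whether a given authority-less URL is otherwise acceptable
depends on the rest of the parser):

  #include <curl/curl.h>

  int parse_no_authority(void)
  {
    CURLU *u = curl_url();
    CURLUcode rc = curl_url_set(u, CURLUPART_URL, "tel:+1-555-555-0100",
                                CURLU_NON_SUPPORT_SCHEME | CURLU_NO_AUTHORITY);
    curl_url_cleanup(u);
    return (int)rc;
  }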
Added unit test case 1655 to verify: the code correctly finds the flaws
in the old code, if one temporarily restores doh.c to the old version.
Closes#4352
This is a protocol violation but apparently there are legacy proprietary
servers doing this.
Added test 336 and 337 to verify.
Reported-by: Philippe Marguinaud
Closes#4339
It needs to parse correctly. Otherwise it could be tricked into letting
through a-f using host names that libcurl would then resolve. Like
'[ab.be]'.
Reported-by: Thomas Vegas
Closes#4315
This allows the function to figure out if a unix domain socket has a
file name or not associated with it! When a socket is created with
socketpair(), as done in the fuzzer testing, the path struct member is
uninitialized and must not be accessed.
Bug: https://crbug.com/oss-fuzz/16699
Closes#4283
Double-underscored or underscore plus uppercase letter at least.
... as they're claimed to be reserved.
Reported-by: patnyb on github
Fixes#4254
Closes#4255
When a username and password are provided in the URL, they were wrongly
removed from the stored URL so that subsequent uses of the same URL
wouldn't find the credentials. This made doing HTTP auth with multiple
connections (like Digest) misbehave.
Regression from 46e164069d (7.62.0)
Test case 335 added to verify.
Reported-by: Mike Crowe
Fixes#4228
Closes#4229
So that users can mask in/out specific HTTP versions when Alt-Svc is
used.
- Removed "h2c" and updated test case accordingly
- Changed how the altsvc struct is laid out
- Added ifdefs to make the unittest run even in a quiche-tree
Closes#4201
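A sketch of how an application might mask versions, assuming the
CURLOPT_ALTSVC_CTRL bitmask and its CURLALTSVC_* bits (the cache file
name is an example):

  #include <curl/curl.h>

  void setup_altsvc(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_ALTSVC, "altsvc.txt");
    /* only allow switching to HTTP/2 and HTTP/3 alternative services */
    curl_easy_setopt(curl, CURLOPT_ALTSVC_CTRL,
                     (long)(CURLALTSVC_H2 | CURLALTSVC_H3));
  }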
This is only the libcurl part that provides the information. There's no
user of the parsed value. This change includes three new tests for the
parser.
Ref: #3794
- Change data and protocol sections to CRLF line endings.
Prior to this change the tests would fail or hang, which is because
certain sections such as protocol require CRLF line endings.
Follow-up to grandparent commit which added the tests.
Ref: https://github.com/curl/curl/issues/3653
Ref: https://github.com/curl/curl/pull/3790
NOTE: This commit was cherry-picked and is part of a series of commits
that added the authzid feature for upcoming 7.66.0. The series was
temporarily reverted in db8ec1f so that it would not ship in a 7.65.x
patch release.
Closes https://github.com/curl/curl/pull/4186
Repeatedly we see problems where using curl_multi_wait() is difficult or
just awkward because if it has no file descriptor to wait for
internally, it returns immediately and leaves it to the caller to wait
for a small amount of time in order to avoid occasional busy-looping.
This is often missed or misunderstood, leading to underperforming
applications.
This change introduces curl_multi_poll() as a drop-in replacement
function that accepts the exact same set of arguments. This function
works identically to curl_multi_wait() - EXCEPT - for the case when
there's nothing to wait for internally, as then this function will by
itself wait for a "suitable" short time before it returns. This
effectively avoids all risks of busy-looping and should also make it less
likely that apps "over-wait".
This also changes the curl tool to use this function internally when
doing parallel transfers and changes curl_easy_perform() to use it
internally.
Closes#4163
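A minimal sketch of the intended call pattern (error handling omitted;
the driver function is made up):

  #include <curl/curl.h>

  void drive_transfers(CURLM *multi)
  {
    int running = 0;
    do {
      int numfds = 0;
      curl_multi_perform(multi, &running);
      /* waits up to 1000 ms; sleeps a short while by itself when there is
         nothing to wait for, so the loop cannot busy-spin */
      curl_multi_poll(multi, NULL, 0, 1000, &numfds);
    } while(running);
  }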
As the plan has been laid out in DEPRECATED. Update docs accordingly and
verify in test 1174. Now requires the option to be set to allow HTTP/0.9
responses.
Closes#4191
Allow pretty much anything to be part of the ALPN identifier. In
particular minus, which is used for "h3-20" (in-progress HTTP/3
versions) etc.
Updated test 356.
Closes#4182
If HTTPAUTH_GSSNEGOTIATE was used for a POST request and
gss_init_sec_context() failed, the POST request was sent
with empty body. This commit also restores the original
behavior of `curl --fail --negotiate`, which was changed
by commit 6c60355323.
Add regression tests 2077 and 2078 to cover this.
Fixes#3992
Closes#4171
... to avoid integer overflows later when multiplying with 1000 to
convert seconds to milliseconds.
Added test 1269 to verify.
Reported-by: Jason Lee
Closes#4166
If using the read callback for HTTP_POST, and POSTFIELDSIZE is not set,
automatically add a Transfer-Encoding: chunked header, same as it is
already done for HTTP_PUT, HTTP_POST_FORM and HTTP_POST_MIME. Update
test 1514 according to the new behaviour.
Closes#4138
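An illustrative sketch (made-up callback, data and URL) of the case that
now gets the chunked header added automatically:

  #include <string.h>
  #include <curl/curl.h>

  static size_t read_cb(char *buf, size_t size, size_t nitems, void *arg)
  {
    size_t *sent = (size_t *)arg;
    const char *data = "field=value";
    size_t len = strlen(data);
    if(*sent || size * nitems < len)
      return 0;                      /* no more data to provide */
    memcpy(buf, data, len);
    *sent = 1;
    return len;
  }

  void post_without_size(CURL *curl)
  {
    size_t sent = 0;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_POST, 1L);
    curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
    curl_easy_setopt(curl, CURLOPT_READDATA, &sent);
    /* no CURLOPT_POSTFIELDSIZE set: Transfer-Encoding: chunked gets added */
    curl_easy_perform(curl);
  }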
This is done by making sure each individual transfer is first added to a
linked list as then they can be performed serially, or at will, in
parallel.
Closes#3804
The testcase ensures that redirects to CURLPROTO_GOPHER won't be
allowed, by default, in the future. Also, curl is being used
for convenience while keeping the testcases DRY.
The expected error code is CURLE_UNSUPPORTED_PROTOCOL when the client is
redirected to CURLPROTO_GOPHER
Signed-off-by: Linos Giannopoulos <lgian@skroutz.gr>
With CURLOPT_TIMECONDITION set, a header is automatically added (e.g.
If-Modified-Since). Allow this to be replaced or suppressed with
CURLOPT_HTTPHEADER.
Fixes#4103
Closes#4109
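A sketch under these assumptions (made-up function and URL) of replacing
or suppressing the automatically added header:

  #include <curl/curl.h>

  void fetch_if_modified(CURL *curl, long since_epoch)
  {
    struct curl_slist *hdrs = NULL;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/feed");
    curl_easy_setopt(curl, CURLOPT_TIMECONDITION,
                     (long)CURL_TIMECOND_IFMODSINCE);
    curl_easy_setopt(curl, CURLOPT_TIMEVALUE, since_epoch);
    /* a header with no contents removes the internally generated one */
    hdrs = curl_slist_append(hdrs, "If-Modified-Since:");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_perform(curl);
    curl_slist_free_all(hdrs);
  }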
smbserver.py/dictserver.py were explicitly using localhost/127.0.0.1 for
binding the server, which caused failures verifying the server from the
device we were testing on when we ran the tests with a separate HOSTIP
and CLIENTIP.
This changes them to take the address from runtests.py and default to
localhost/127.0.0.1 if none is given.
Closes#4048
Make '-k' a no-op. The singletest function now clears the log directory
BEFORE each individual test and not after, which makes it possible to
always keep the logfiles around after a test has been run. No need to
specify -k anymore. Keeping the option parsing around to work with users
of old habits.
Some tests also didn't work properly when -k was used (since the old
logs would be kept when a new test starts) which this change also fixes.
Closes#4035
... so that runtests can skip individual test cases that test features
that are explicitly disabled in this build. This new logic is intended
for disabled features that aren't otherwise easily visible through the
curl_version_info() or other API calls.
tests/server/disabled is a newly built executable that will output a
list of disabled features. Outputs nothing for a default build.
Closes#3950
Remove support for, references to and use of "cyaSSL" from the source
and docs. wolfSSL is the current name and there's no point in keeping
references to ancient history.
Assisted-by: Daniel Gustafsson
Closes#3903
Responses with status codes 1xx, 204 or 304 don't have a response body. For
these, don't parse these headers:
- Content-Encoding
- Content-Length
- Content-Range
- Last-Modified
- Transfer-Encoding
This change ensures that HTTP/2 upgrades work even if a
"Content-Length: 0" or a "Transfer-Encoding: chunked" header is present.
Co-authored-by: Daniel Stenberg
Closes#3702
Fixes#3968
Closes#3977
- Revert all commits related to the SASL authzid feature since the next
release will be a patch release, 7.65.1.
Prior to this change CURLOPT_SASL_AUTHZID / --sasl-authzid was destined
for the next release, assuming it would be a feature release 7.66.0.
However instead the next release will be a patch release, 7.65.1 and
will not contain any new features.
After the patch release the reverted commits can be restored by
using cherry-pick:
git cherry-pick a14d72c a9499ff 8c1cc36 c2a8d52 0edf690
Details for all reverted commits:
Revert "os400: take care of CURLOPT_SASL_AUTHZID in curl_easy_setopt_ccsid()."
This reverts commit 0edf6907ae.
Revert "tests: Fix the line endings for the SASL alt-auth tests"
This reverts commit c2a8d52a13.
Revert "examples: Added SASL PLAIN authorisation identity (authzid) examples"
This reverts commit 8c1cc369d0.
Revert "curl: --sasl-authzid added to support CURLOPT_SASL_AUTHZID from the tool"
This reverts commit a9499ff136.
Revert "sasl: Implement SASL authorisation identity via CURLOPT_SASL_AUTHZID"
This reverts commit a14d72ca2f.
- Change data and protocol sections to CRLF line endings.
Prior to this change the tests would fail or hang, which is because
certain sections such as protocol require CRLF line endings.
Follow-up to a9499ff from today which added the tests.
Ref: https://github.com/curl/curl/pull/3790
They serve very little purpose and mostly just add noise. Most of them
have been around for a very long time. I read them all before removing
or rephrasing them.
Ref: #3876
Closes#3883
Codacy/CppCheck warns about this. Consistently use parentheses as we
already do in some places to silence the warning.
Closes https://github.com/curl/curl/pull/3866
This reverts commit b0972bc.
- No longer show verbose output for the conncache closure handle.
The offending commit was added so that the conncache closure handle
would inherit verbose mode from the user's easy handle. (Note there is
no way for the user to set options for the closure handle which is why
that was necessary.) Other debug settings such as the debug function
were not also inherited since we determined that could lead to crashes
if the user's per-handle private data was used on an unexpected handle.
The reporter here says he has a debug function to capture the verbose
output, and does not expect or want any output to stderr; however
because the conncache closure handle does not inherit the debug function
the verbose output for that handle does go to stderr.
There are other plausible scenarios as well such as the user redirects
stderr on their handle, which is also not inherited since it could lead
to crashes when used on an unexpected handle.
Short of allowing the user to set options for the conncache closure
handle I don't think there's much we can safely do except no longer
inherit the verbose setting.
Bug: https://curl.haxx.se/mail/lib-2019-05/0021.html
Reported-by: Kristoffer Gleditsch
Ref: https://github.com/curl/curl/pull/3598
Ref: https://github.com/curl/curl/pull/3618
Closes https://github.com/curl/curl/pull/3856
... to make the host name "usable". Store the scope id and put it back
when extracting a URL out of it.
Also makes curl_url_set() syntax check CURLUPART_HOST.
Fixes#3817
Closes#3822
This limits all accepted input strings passed to libcurl to be less than
CURL_MAX_INPUT_LENGTH (8000000) bytes, for these API calls:
curl_easy_setopt() and curl_url_set().
The 8000000 number is arbitrary picked and is meant to detect mistakes
or abuse, not to limit actual practical use cases. By limiting the
acceptable string lengths we also reduce the risk of integer overflows
all over.
NOTE: This does not apply to `CURLOPT_POSTFIELDS`.
Test 1559 verifies.
Closes#3805
RFC 4616 specifies the authzid is optional in the client authentication
message and that the server will derive the authorisation identity
(authzid) from the authentication identity (authcid) when not specified
by the client.
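A minimal sketch of supplying a separate authorisation identity with
CURLOPT_SASL_AUTHZID (the mail server and the kurt/ursel identities are
just the RFC 4616 example values):

  #include <curl/curl.h>

  void login_on_behalf(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "imaps://mail.example.com/INBOX");
    curl_easy_setopt(curl, CURLOPT_USERNAME, "kurt");       /* authcid */
    curl_easy_setopt(curl, CURLOPT_PASSWORD, "xipj3plmq");
    /* optional: authenticate as kurt but act on behalf of ursel */
    curl_easy_setopt(curl, CURLOPT_SASL_AUTHZID, "ursel");  /* authzid */
    curl_easy_perform(curl);
  }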
Make sure to run curl_global_cleanup() when shutting down the test
suite to release any resources allocated in the SSL setup. This is
clearly visible when running tests with PolarSSL, where the thread
lock code callocs memory which isn't released unless cleanup is run.
Below is an excerpt from the autobuild logs:
==12368== 96 bytes in 1 blocks are possibly lost in loss record 1 of 2
==12368== at 0x4837B65: calloc (vg_replace_malloc.c:752)
==12368== by 0x11A76E: curl_dbg_calloc (memdebug.c:205)
==12368== by 0x145CDF: Curl_polarsslthreadlock_thread_setup
(polarssl_threadlock.c:54)
==12368== by 0x145B37: Curl_polarssl_init (polarssl.c:865)
==12368== by 0x14129D: Curl_ssl_init (vtls.c:171)
==12368== by 0x118B4C: global_init (easy.c:158)
==12368== by 0x118BF5: curl_global_init (easy.c:221)
==12368== by 0x118D0B: curl_easy_init (easy.c:299)
==12368== by 0x114E96: test (lib1906.c:32)
==12368== by 0x115495: main (first.c:174)
Closes#3783
Reviewed-by: Marcel Raad <Marcel.Raad@teamviewer.com>
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Mark global variables static to avoid compiler warning in Clang when
using -Wmissing-variable-declarations.
Closes#3778
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Only allow well formed decimal numbers in the input.
Document that the number MUST be between 1 and 65535.
Add tests to test 1560 to verify the above.
Ref: https://github.com/curl/curl/issues/3753
Closes#3762
Remove the code too. The functionality has been disabled in code since
7.62.0. Setting this option will from now on simply be ignored and have
no function.
Closes#3654
- remove unused variables
- declare conditionally used variables conditionally
- suppress unused variable warnings in the CMake tests
- remove dead variable stores
- consistently use WIN32 macro to detect Windows
Closes https://github.com/curl/curl/pull/3739
The stripcredentials unittest fails to compile on platforms without
xattr support, for example the Solaris member in the buildfarm which
fails with the following:
CC unit1621-unit1621.o
CC ../libtest/unit1621-first.o
CCLD unit1621
Undefined first referenced
symbol in file
stripcredentials unit1621-unit1621.o
goto problem 2
ld: fatal: symbol referencing errors. No output written to .libs/unit1621
collect2: error: ld returned 1 exit status
gmake[2]: *** [Makefile:996: unit1621] Error 1
Fix by excluding the test on such platforms by using the reverse
logic from where stripcredentials() is defined.
Closes#3759
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
OAUTHBEARER tokens were incorrectly generated in a format similar to
XOAUTH2 tokens. These changes make OAUTHBEARER tokens conform to
RFC 7628.
Fixes: #2487
Reported-by: Paolo Mossino
Closes https://github.com/curl/curl/pull/3377
The threaded-shared-conn.c example turned into test case. Only works if
pthread was detected.
An attempt to detect future regressions such as e3a53e3efb
Closes#3687
* Adjusted unit tests 2056, 2057
* do not generally close connections with CURLAUTH_NEGOTIATE after every request
* moved negotiatedata from UrlState to connectdata
* Added stream rewind logic for CURLAUTH_NEGOTIATE
* introduced negotiatedata::GSS_AUTHDONE and negotiatedata::GSS_AUTHSUCC
* Consider authproblem state for CURLAUTH_NEGOTIATE
* Consider reuse_forbid for CURLAUTH_NEGOTIATE
* moved and adjusted negotiate authentication state handling from
output_auth_headers into Curl_output_negotiate
* Curl_output_negotiate: ensure auth done is always set
* Curl_output_negotiate: Set auth done also if result code is
GSS_S_CONTINUE_NEEDED/SEC_I_CONTINUE_NEEDED as this result code may
also indicate the last challenge request (only works with disabled
Expect: 100-continue and CURLOPT_KEEP_SENDING_ON_ERROR -> 1)
* Consider "Persistent-Auth" header, detect if not present;
Reset/Cleanup negotiate after authentication if no persistent
authentication
* apply changes introduced with #2546 for negotiate rewind logic
Fixes#1261
Closes#1975
The check that prevents the payload from being sent during authentication
doesn't properly check whether the authentication is done or not.
There are cases where the proxy responds "200 OK" before sending the
authentication challenge. This change takes care of that.
Fixes#2431
Closes#3669
- Change closure handle to receive verbose setting from the easy handle
most recently added via curl_multi_add_handle.
The closure handle is a special easy handle used for closing cached
connections. It receives limited settings from the easy handle most
recently added to the multi handle. Prior to this change that did not
include verbose which was a problem because on connection shutdown
verbose mode was not acknowledged.
Ref: https://github.com/curl/curl/pull/3598
Co-authored-by: Daniel Stenberg
Closes https://github.com/curl/curl/pull/3618
Follow-up to 8eddb8f425.
If the cookieinfo pointer is NULL there really is nothing to save.
Without this fix, we got a problem when a handle using a shared object
with cookies was told to "FLUSH" them to file (which worked), then the
share object was removed, and when the easy handle was closed just
afterwards it had no cookieinfo and no cookies, so it decided to save an
empty jar (overwriting the file just flushed).
Test 1905 now verifies that this works.
Assisted-by: Michael Wallner
Assisted-by: Marcel Raad
Closes#3621
... and remove it from the dist tarball. It has served its time, it
barely gets updated anymore and "everything curl" now covers all
this document once tried to include, and does it more and better.
In the compressed scenario, this removes ~15K data from the binary,
which is 25% of the -M output.
It remains in the git repo for now for as long as the web site builds a
page using that as source. It renders poorly on the site (especially for
mobile users) so it's not even good there.
Closes#3587
The draft-ietf-httpbis-rfc6265bis-02 draft specifies a set of prefixes
and how they should affect cookie initialization, which has been
adopted by the major browsers. This adds support for the two prefixes
defined, __Host- and __Secure-, and updates the testcase with the
supplied examples from the draft.
Closes#3554
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
The code is more or less copied from the stdout comparison code, maybe
some better reuse is possible.
test 1457 is adjusted to make the output actually match (by using --silent)
test 506 used <stderr> without actually needing it, so that <stderr> block is removed
Closes#3536
Option -F generates an intermediate representation of the mime structure
that is used later to create the libcurl mime structure and generate
the --libcurl statements.
Reported-by: Daniel Stenberg
Fixes#3532
Closes#3546
urlapi: turn three local-only functions into statics
conncache: make conncache_find_first_connection static
multi: make detach_connnection static
connect: make getaddressinfo static
curl_ntlm_core: make hmac_md5 static
http2: make two functions static
http: make http_setup_conn static
connect: make tcpnodelay static
tests: make UNITTEST a thing to mark functions with, so they can be static for
normal builds and non-static for unit test builds
... and mark Curl_shuffle_addr accordingly.
url: make up_free static
setopt: make vsetopt static
curl_endian: make write32_le static
rtsp: make rtsp_connisdead static
warnless: remove unused functions
memdebug: remove one unused function, made another static
We use "conn" everywhere to be a pointer to the connection.
Introduces two functions that "attaches" and "detaches" the connection
to and from the transfer.
Going forward, we should favour using "data->conn" (since a transfer
always only has a single connection or none at all) to "conn->data"
(since a connection can have none, one or many transfers associated with
it and updating conn->data to be correct is error prone and a frequent
reason for internal issues).
Closes#3442
Added Curl_resolver_kill() for all three resolver modes, which only
blocks when necessary, along with test 1592 to confirm
curl_multi_remove_handle() doesn't block unless it must.
Closes#3428
Fixes#3371
MinGW-w64 defaults to targeting Windows 7 now, so GetTickCount64 is
used and the milliseconds are represented as unsigned long long,
leading to a compiler warning when implicitly converting them to long.
The previous fix for parsing IPv6 URLs with a zone index was a paddle
short for URLs without an explicit port. This patch fixes that case
and adds a unit test case.
This bug was highlighted by issue #3408, and while it's not the full
fix for the problem there it is an isolated bug that should be fixed
regardless.
Closes#3411
Reported-by: GitYuanQu on github
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
This adds support for wildcard hosts in CURLOPT_RESOLVE. These are
try-last so any non-wildcard entry is resolved first. If specified,
any host not matched by another CURLOPT_RESOLVE config will use this
as fallback.
Example: send a.com to 10.0.0.1 and everything else to 10.0.0.2:
curl --resolve *:443:10.0.0.2 --resolve a.com:443:10.0.0.1 \
https://a.com https://b.com
This is probably quite similar to using:
--connect-to a.com:443:10.0.0.1:443 --connect-to :443:10.0.0.2:443
Closes#3406
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
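The same setup via libcurl, as a sketch (addresses match the example
above; the function name is made up):

  #include <curl/curl.h>

  void pin_addresses(CURL *curl)
  {
    struct curl_slist *resolve = NULL;
    resolve = curl_slist_append(resolve, "a.com:443:10.0.0.1");
    resolve = curl_slist_append(resolve, "*:443:10.0.0.2"); /* wildcard, tried last */
    curl_easy_setopt(curl, CURLOPT_RESOLVE, resolve);
    curl_easy_setopt(curl, CURLOPT_URL, "https://a.com/");
    curl_easy_perform(curl);
    curl_slist_free_all(resolve);
  }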
Added CURLOPT_HTTP09_ALLOWED and --http0.9 for this purpose.
For now, both the tool and library allow HTTP/0.9 by default.
docs/DEPRECATE.md lays out the plan for when to reverse that default: 6
months after the 7.64.0 release. The options are added already now so
that applications/scripts can start using them already now.
Fixes#2873
Closes#3383
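A sketch of opting in to HTTP/0.9 responses once the default flips
(example URL, error checks omitted):

  #include <curl/curl.h>

  void allow_http09(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "http://legacy.example.com/");
    curl_easy_setopt(curl, CURLOPT_HTTP09_ALLOWED, 1L);
    curl_easy_perform(curl);
  }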
Ensure to perform the checks we have to enforce a sane domain in
the cookie request. The check for non-PSL enabled builds is quite
basic but it's better than nothing.
Closes#2964
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
This adds the CURLOPT_TRAILERDATA and CURLOPT_TRAILERFUNCTION
options that allow a callback based approach to sending trailing headers
with chunked transfers.
The test server (sws) was updated to take into account the detection of the
end of transfer in the case of trailing headers presence.
Test 1591 checks that trailing headers can be sent using libcurl.
Closes#3350
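A hedged sketch of the callback approach for trailing headers on a
chunked upload (the header name and functions are invented; the read
callback providing the body is omitted):

  #include <curl/curl.h>

  static int trailer_cb(struct curl_slist **tr, void *userdata)
  {
    (void)userdata;
    *tr = curl_slist_append(*tr, "My-Checksum: deadbeef");
    return CURL_TRAILERFUNC_OK;
  }

  void send_with_trailers(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L); /* chunked when size unknown */
    curl_easy_setopt(curl, CURLOPT_TRAILERFUNCTION, trailer_cb);
    curl_easy_setopt(curl, CURLOPT_TRAILERDATA, NULL);
    curl_easy_perform(curl);
  }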
Only allow secure origins to be able to write cookies with the
'secure' flag set. This reduces the risk of non-secure origins
to influence the state of secure origins. This implements IETF
Internet-Draft draft-ietf-httpbis-cookie-alone-01 which updates
RFC6265.
Closes#2956
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
A URL with a single colon without a portnumber should use the default
port, discarding the colon. Fix, add a testcase and also do a little bit
of comment wordsmithing.
Closes#3365
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
... when not actually following the redirect. Otherwise we return error
for this and an application can't extract the value.
Test 1518 added to verify.
Reported-by: Pavel Pavlov
Fixes#3340
Closes#3364
This adds a new unittest intended to cover the internal functions in
the urlapi code, starting with parse_port(). In order to avoid name
collisions in debug builds, parse_port() is renamed Curl_parse_port()
since it will be exported.
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Reviewed-by: Marcel Raad <Marcel.Raad@teamviewer.com>
- Include query in the path passed to generate HTTP auth.
Recent changes to use the URL API internally (46e1640, 7.62.0)
inadvertently broke authentication URIs by omitting the query.
Fixes https://github.com/curl/curl/issues/3353
Closes#3356
Important for when the file is going to be read again and thus must not
contain old contents!
Adds test 327 to verify.
Reported-by: daboul on github
Fixes#3299
Closes#3300
The function does not return the same value as snprintf() normally does,
so readers may be mislead into thinking the code works differently than
it actually does. A different function name makes this easier to detect.
Reported-by: Tomas Hoger
Assisted-by: Daniel Gustafsson
Fixes#3296
Closes#3297
The tests 20 and 1322 are using getaddrinfo of libc for resolving. In
eglibc-2.19 there is a memory leakage and invalid free bug which
surfaces in some special circumstances (PF_UNSPEC hint with invalid or
non-existent names). The valgrind runs in testing fail in these
situations.
As the tests 20/1322 are not specific on either protocol (IPv4/IPv6)
this commit changes the hints to IPv4 protocol by passing `--ipv4` flag
on the tests' command line. This prevents the valgrind failures.
SO_EXCLUSIVEADDRUSE is on by default on Vista or newer,
but does not work together with SO_REUSEADDR being on.
The default changes were made with stunnel 5.34 and 5.35.
APPENDQUERY + URLENCODE would skip encoding all equals signs, but now it
only skips the first one, to better allow "name=content" for any content.
Reported-by: Alexey Melnichuk
Fixes#3231
Closes#3231
The function identifying a leading "scheme" part of the URL considered a
few letters ending with a colon to be a scheme, making something like
"short:80" become an unknown scheme instead of a short host name and
a port number.
Extended test 1560 to verify.
Also fixed test203 to use file_pwd to make it get the correct path on
windows. Removed test 2070 since it was a duplicate of 203.
Assisted-by: Marcel Raad
Reported-by: Hagai Auro
Fixes#3220
Fixes#3233
Closes#3223
Closes#3235
- for "--netrc", don't ignore the login/password specified with "--user",
only ignore the login/password in the URL.
This restores the netrc behaviour of curl 7.61.1 and earlier.
- fix the documentation of CURL_NETRC_REQUIRED
- improve the detection of login/password changes when reading .netrc
- don't read .netrc if both login and password are already set
Fixes#3213Closes#3224
The previous coding used a format string whose output depended on the
current locale of the environment running the test. Since the gist of
the test is to have a format string, with the actual formatting being
less important, switch to a more stable format string with decimals.
Reported-by: Marcel Raad
Closes#3234
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Reviewed-by: Marcel Raad <Marcel.Raad@teamviewer.com>
The internal buffer in infof() is limited to 2048 bytes of payload plus
an additional byte for NULL termination. Servers with very long error
messages can however cause truncation of the string, which currently
isn't very clear, and leads to badly formatted output.
This appends a "...\n" (or just "..." in case the format didn't end with a
newline char) marker to the end of the string to clearly show
that it has been truncated.
Also include a unittest covering infof() to try and catch any bugs
introduced in this quite important function.
Closes#3216
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Reviewed-by: Marcel Raad <Marcel.Raad@teamviewer.com>
The function identifying a leading "scheme" part of the URL considered a few
letters ending with a colon to be a scheme, making something like "short:80"
become an unknown scheme instead of a short host name and a port number.
Extended test 1560 to verify.
Reported-by: Hagai Auro
Fixes#3220
Closes#3223
When not actually following the redirect and the target URL is only
stored for later retrieval, curl always accepted "non-supported"
schemes. This was a regression from 46e164069d.
Reported-by: Brad King
Fixes#3210Closes#3215
As has been outlined in the DEPRECATE.md document, the axTLS code has
been disabled for 6 months and is hereby removed.
Use a better supported TLS library!
Assisted-by: Daniel Gustafsson
Closes#3194
Classic MinGW has neither InitializeCriticalSectionEx nor
GetTickCount64, independent of the target Windows version.
Closes https://github.com/curl/curl/pull/3113
Now FILE transfers send headers to the header callback like HTTP and
other protocols. Also made curl_easy_getinfo(...CURLINFO_PROTOCOL...)
work for FILE in the callbacks.
Makes "curl -i file://.." and "curl -I file://.." work like before
again. Applied the bold header logic to them too.
Regression from c1c2762 (7.61.0)
Reported-by: Shaun Jackman
Fixes#3083
Closes#3101
The LD_PRELOAD functionality doesn't exist on macOS, so skip any tests
requiring it.
Fixes#2394
Closes#3106
Reported-by: Github user @jakirkham
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
To make it only send one DoH request and avoid the race condition that
could lead to the requests getting sent in reversed order and thus
making it hard to compare in the test case.
Fixes#3107
Closes#3108
runtests.pl supports running a range of tests, like "44 to 127". Starting
now, the code makes sure that even such given ranges will ignore tests
that are marked as disabled.
Disabled tests can still be run by explicitly specifying that test
number.
Closes#3075
The value in question is coming directly from `gnutls-serv`, so it cannot
be modified freely.
Reported-by: Marcel Raad
Ref: 6ae6b2a533 (commitcomment-30621004)
This fixes potential out-of-buffer access on "file:./" URL
$ valgrind curl "file:./"
==24516== Memcheck, a memory error detector
==24516== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==24516== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==24516== Command: /home/even/install-curl-git/bin/curl file:./
==24516==
==24516== Conditional jump or move depends on uninitialised value(s)
==24516== at 0x4C31F9C: strcmp (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==24516== by 0x4EBB315: seturl (urlapi.c:801)
==24516== by 0x4EBB568: parseurl (urlapi.c:861)
==24516== by 0x4EBC509: curl_url_set (urlapi.c:1199)
==24516== by 0x4E644C6: parseurlandfillconn (url.c:2044)
==24516== by 0x4E67AEF: create_conn (url.c:3613)
==24516== by 0x4E68A4F: Curl_connect (url.c:4119)
==24516== by 0x4E7F0A4: multi_runsingle (multi.c:1440)
==24516== by 0x4E808E5: curl_multi_perform (multi.c:2173)
==24516== by 0x4E7558C: easy_transfer (easy.c:686)
==24516== by 0x4E75801: easy_perform (easy.c:779)
==24516== by 0x4E75868: curl_easy_perform (easy.c:798)
Was originally spotted by
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=10637
Credit to OSS-Fuzz
Closes#3039
- replace tabs with spaces where possible
- remove line ending spaces
- remove double/triple newlines at EOF
- fix a non-UTF-8 character
- cleanup a few indentations/line continuations
in manual examples
Closes https://github.com/curl/curl/pull/3037
In order for this API to fully work for libcurl itself, it now offers a
CURLU_GUESS_SCHEME flag that makes it "guess" scheme based on the host
name prefix just like libcurl always did. If there's no known prefix, it
will guess "http://".
Separately, it relaxes the check of the host name so that IDN host names
can be passed in as well.
Both these changes are necessary for libcurl itself to use this API.
Assisted-by: Daniel Gustafsson
Closes#3018
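A sketch of the guessing flag with the URL API (host name is an example;
error checks omitted):

  #include <curl/curl.h>

  void parse_schemeless(void)
  {
    CURLU *u = curl_url();
    char *scheme = NULL;
    curl_url_set(u, CURLUPART_URL, "ftp.example.com/file.txt",
                 CURLU_GUESS_SCHEME);
    curl_url_get(u, CURLUPART_SCHEME, &scheme, 0); /* "ftp" guessed from prefix */
    curl_free(scheme);
    curl_url_cleanup(u);
  }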
The previous test certificates contained RSA keys of only 1024 bits.
However, RSA claims that 1024-bit RSA keys are likely to become
crackable some time before 2010. The NIST recommends at least 2048-bit
keys for RSA for now.
Better use full 2048 also for testing.
Closes#2973
Add functionality so that protocols can do custom keepalive on their
connections, when an external API function is called.
Add docs for the new options in 7.62.0
Closes#1641
... and add "MAILINDEX".
As described in #2789, this is a suggested solution. Changing UID=xx to
actually get mail with UID xx and add "MAILINDEX" to get a mail with a
special index in the mail box (old behavior). So MAILINDEX=1 gives the
first non-deleted mail in the mail box.
Fixes#2789
Closes#2815
Transparently. The related curl_multi_setopt() options all still return
OK when pipelining is selected.
To re-enable the support, the single line change in lib/multi.c needs to
be reverted.
See docs/DEPRECATE.md
Closes#2705
According to RFC6265 section 5.4, cookies with equal path lengths
SHOULD be sorted by creation-time (earlier first). This adds a
creation-time record to the cookie struct in order to make cookie
sorting more deterministic. The creation-time is defined as the
order of the cookies in the jar, the first cookie read fro the
jar being the oldest. The creation-time is thus not serialized
into the jar. Also remove the strcmp() matching in the sorting as
there is no lexicographic ordering in RFC6265. Existing tests are
updated to match.
Closes#2524
All these tests failed on Windows because something like
sftp://%HOSTIP:%SSHPORT%PWD/
expanded to
sftp://127.0.0.1:1234c:/msys64/home/bla/curl
and then curl complained about the port number ending with a letter.
Use the original POSIX path instead of the Windows path created in
checksystem to fix this.
Closes https://github.com/curl/curl/pull/2920
Since GOPHER support was added in curl, the `?' character was automatically
translated to `%09' (`\t').
However, this behaviour does not seem to be documented in RFC 4266, and for
search selectors it is documented to directly use `%09' in the URL.
Apart from that, several gopher servers in the current gopherspace have CGI
support where `?' is used as part of the selector, and translating it to
`%09' often leads to surprising results.
Closes#2910
Modifying the locale with environment variables doesn't work for native
Windows applications. Just disable the test in this case if the decimal
separator is something different than a point. Use a precheck with a
small C program to achieve that.
Closes https://github.com/curl/curl/pull/2786
This warning used to be enabled only for clang as it's a bit stricter
on GCC. Silence the remaining occurrences and enable it on GCC too.
Closes https://github.com/curl/curl/pull/2747
... simply because this is usually a sign of the user having omitted the
file name and the next option is instead "eaten" by the parser as a file
name.
Add test1268 to verify
Closes#2885
Deal with tiny "HTTP/0.9" (header-less) responses by checking the
status-line early, even before a full "HTTP/" is received to allow
detecting 0.9 properly.
Test 1266 and 1267 added to verify.
Fixes#2420
Closes#2872
Previously, the macro TEST_HANG_TIMEOUT was unused, but since there is
looping going on, we might as well add timing instead of removing it.
Closes#2853
This allows the use of PKCS#11 URI for certificates and keys without
setting the corresponding type as "ENG" and the engine as "pkcs11"
explicitly. If a PKCS#11 URI is provided for certificate, key,
proxy_certificate or proxy_key, the corresponding type is set as "ENG"
if not provided and the engine is set to "pkcs11" if not provided.
Acked-by: Nikos Mavrogiannopoulos
Closes#2333
Use standard CMake variable BUILD_SHARED_LIBS instead of introducing
custom option CURL_STATICLIB.
Use '-DBUILD_SHARED_LIBS=%SHARED%' in appveyor.yml.
Reviewed-by: Sergei Nikulov
Closes#2755
Turns out that since we're using the native fnmatch function now when
available, they simply disagree on a huge number of test patterns,
which makes it hard to test this function like this...
Fixes#2825
Otherwise, LF line endings are converted to CRLF on Windows,
but no conversion is done for the reply, so the test case fails.
Closes https://github.com/curl/curl/pull/2776
It was previously erroneously skipped in some situations.
libtest/libntlmconnect.c wrongly depended on wrong behavior (that it
would get a zero timeout) when no handles are "running" in a multi
handle. That behavior is no longer present with this fix. Now libcurl
will always return a -1 timeout when all handles are completed.
Closes#2733
When the application just started the transfer and then stops it while
the name resolve in the background thread hasn't completed, we need to
wait for the resolve to complete and then cleanup data accordingly.
Enabled test 1553 again and added test 1590 to also check when the host
name resolves successfully.
Detected by OSS-fuzz.
Closes#1968
- Get rid of variable that was generating false positive warning
(uninitialized)
- Fix issues in tests
- Reduce scope of several variables all over
etc
Closes#2631
Since 467da3af0, lib1521.c is generated instead of checked in. According
to the commit message, the intention was to remove it from the tarball
as well. However, it is still present when running make dist. To remove
it, add it to nodist_lib1521_SOURCES. This also means there is no need
for the manually added dist-rule in the Makefile.
Also update CMakeLists.txt to handle the fact that we now may have
nodist_SOURCES.
If configure detects fnmatch to be available, use that instead of our
custom one for FTP wildcard pattern matching. For standard compliance,
to reduce our footprint and to use already well tested and well
exercised code.
A POSIX fnmatch behaves slightly different than the internal function
for a few test patterns currently and the macOS one yet slightly
different. Test case 1307 is adjusted for these differences.
Closes#2626
A non-escaped bracket ([) is for a character group - as documented. It
will *not* match an individual bracket anymore. Test case 1307 updated
accordingly to match.
Problem detected by OSS-Fuzz, although this fix is probably not a final
fix for the notorious timeout issues.
Bug: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=8525
Closes#2614
This avoids appending error data to already existing good data.
Test 92 is updated to match this change.
New test 1156 checks all combinations of --range/--resume, --fail,
Content-Range header and http status code 200/416.
Fixes#1163
Reported-By: Ithubg on github
Closes#2578
The feature is only enabled if the output is believed to be a tty.
-J: There are some minor differences and improvements in -J handling, as
now -J should work with -i and it actually creates a file first using the
initial name and then *renames* that to the one found in
Content-Disposition (if any).
-i: only shows headers for HTTP transfers now (as documented).
Previously it would also show for pieces of the transfer that were HTTP
(for example when doing FTP over a HTTP proxy).
-i: now shows trailers as well. Previously they were not shown at all.
--libcurl: the CURLOPT_HEADER is no longer set, as the header output is
now done in the header callback.
The previous limit of 5 can still end up in situation that takes a very
long time and consumes a lot of CPU.
If there is still a rare use case for this, a user can provide their own
fnmatch callback for a version that allows a larger set of wildcards.
This commit was triggered by yet another OSS-Fuzz timeout due to this.
Bug: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=8369
Closes#2587
Provide a set of new timers that return the time intervals using integer
number of microseconds instead of floats.
The new info names are as following:
CURLINFO_APPCONNECT_TIME_T
CURLINFO_CONNECT_TIME_T
CURLINFO_NAMELOOKUP_TIME_T
CURLINFO_PRETRANSFER_TIME_T
CURLINFO_REDIRECT_TIME_T
CURLINFO_STARTTRANSFER_TIME_T
CURLINFO_TOTAL_TIME_T
Closes#2495
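A sketch of reading one of the new microsecond counters after a transfer
(made-up helper name):

  #include <stdio.h>
  #include <curl/curl.h>

  void print_total_time(CURL *curl)
  {
    curl_off_t total_us = 0;
    if(!curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME_T, &total_us))
      printf("total: %" CURL_FORMAT_CURL_OFF_T " us\n", total_us);
  }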
RFC 6265 section 4.2.1 does not set restrictions on cookie names.
This is a follow-up to commit 7f7fcd0.
Also explicitly check proper syntax of cookie name/value pair.
New test 1155 checks that cookie names are not reserved words.
Reported-By: anshnd at github
Fixes#2564
Closes#2566
... and make test 1026 rely on that feature so that --disable-manual
builds don't cause test failures.
Reported-by: Max Dymond and Anders Roxell
Fixes#2533
Closes#2540
This extends the INDENTATION case to also handle 'else' statements
and require proper indentation on the following line. Also fixes the
offending cases found in the codebase.
Closes#2532
- Move verify_certificate functionality in schannel.c into a new
file called schannel_verify.c. Additionally, some structure definitions
from schannel.c have been moved to schannel.h to allow them to be
used in schannel_verify.c.
- Make verify_certificate functionality for Schannel available on
all versions of Windows instead of just Windows CE. verify_certificate
will be invoked on Windows CE or when the user specifies
CURLOPT_CAINFO and CURLOPT_SSL_VERIFYPEER.
- In verify_certificate, create a custom certificate chain engine that
exclusively trusts the certificate store backed by the CURLOPT_CAINFO
file.
- doc updates of --cacert/CAINFO support for schannel
- Use CERT_NAME_SEARCH_ALL_NAMES_FLAG when invoking CertGetNameString
when available. This implements a TODO in schannel.c to improve
handling of multiple SANs in a certificate. In particular, all SANs
will now be searched instead of just the first name.
- Update tool_operate.c to not search for the curl-ca-bundle.crt file
when using Schannel to maintain backward compatibility. Previously,
any curl-ca-bundle.crt file found in that search would have been
ignored by Schannel. But, with CAINFO support, the file found by
that search would have been used as the certificate store and
could cause issues for any users that have curl-ca-bundle.crt in
the search path.
- Update url.c to not set the build time CURL_CA_BUNDLE if the selected
SSL backend is Schannel. We allow setting CA location for schannel
only when explicitly specified by the user via CURLOPT_CAINFO /
--cacert (a usage sketch follows after this list).
- Add new test cases 3000 and 3001. These test cases check that the first
and last SAN, respectively, matches the connection hostname. New test
certificates have been added for these cases. For 3000, the certificate
prefix is Server-localhost-firstSAN and for 3001, the certificate
prefix is Server-localhost-secondSAN.
- Remove TODO 15.2 (Add support for custom server certificate
validation), this commit addresses it.
Closes https://github.com/curl/curl/pull/1325
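A minimal usage sketch of the new Schannel CA file support (assumes a
libcurl built with the Schannel backend; the bundle filename and URL are
illustrative, not mandated by the change):

  #include <curl/curl.h>

  static void fetch_with_custom_ca(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
      /* with this change, Schannel verifies against this file only */
      curl_easy_setopt(curl, CURLOPT_CAINFO, "ca-bundle.pem");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
  }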
When a zeroed out allocation is required, use calloc() rather than
malloc() followed by an explicit memset(). The result will be the
same, but using calloc() everywhere increases consistency in the
codebase and avoids the risk of subtle bugs when code is injected
between malloc and memset by accident.
Closes https://github.com/curl/curl/pull/2497
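A minimal before/after sketch of the pattern this commit standardizes on
(struct foo is a placeholder type for illustration):

  #include <stdlib.h>
  #include <string.h>

  struct foo { int a; char name[32]; };

  static struct foo *make_foo(void)
  {
    /* preferred: one call, the memory arrives already zeroed */
    return calloc(1, sizeof(struct foo));

    /* the pattern being replaced:
         struct foo *p = malloc(sizeof(*p));
         if(p)
           memset(p, 0, sizeof(*p));
         return p;                           */
  }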
unit1309 and vtls/gtls: error: arithmetic on a null pointer treated as a
cast from integer to pointer is a GNU extension
Reported-by: Rikard Falkeborn
Fixes#2466
Closes#2468
curl 7.57.0 and up interpret this according to Appendix E.3.2 of RFC
8089 but then return an error saying this is unimplemented. This is
actually a regression in behavior on both Windows and Unix.
Before curl 7.57.0 this URL was treated as a path of "//foo/bar" and
then passed to the relevant OS API. This means that the behavior of this
case is actually OS dependent.
The Unix path resolution rules say that the OS must handle swallowing
the extra "/", so this path is the same as "/foo/bar".
The Windows path resolution rules say that this is a UNC path and
automatically handles the SMB access for the program. So curl on Windows
was already doing Appendix E.3.2 without any special code in curl.
Regression
Closes#2438
This fixes a segfault occurring when a name of the (invalid) form "domain..tld"
is processed.
test46 updated to cover this case.
Follow-up to commit c990ead.
Ref: https://github.com/curl/curl/pull/2440
This patch adds CURLOPT_DNS_SHUFFLE_ADDRESSES to explicitly request
shuffling of IP addresses returned for a hostname when there is more
than one. This is useful when the application knows that a round robin
approach is appropriate and is willing to accept the consequences of
potentially discarding some preference order returned by the system's
implementation.
Closes#1694
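A minimal sketch of opting in to the new behavior (the URL is
illustrative):

  #include <curl/curl.h>

  static void enable_shuffle(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* shuffle the resolved addresses before each use */
    curl_easy_setopt(curl, CURLOPT_DNS_SHUFFLE_ADDRESSES, 1L);
  }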
Detected by Coverity Analysis:
Error: IDENTIFIER_TYPO:
curl-7.58.0/tests/python_dependencies/impacket/spnego.py:229: identifier_typo: Using "SuportedMech" appears to be a typo:
* Identifier "SuportedMech" is only known to be referenced here, or in copies of this code.
* Identifier "SupportedMech" is referenced elsewhere at least 4 times.
curl-7.58.0/tests/python_dependencies/impacket/smbserver.py:2651: identifier_use: Example 1: Using identifier "SupportedMech".
curl-7.58.0/tests/python_dependencies/impacket/smbserver.py:2308: identifier_use: Example 2: Using identifier "SupportedMech".
curl-7.58.0/tests/python_dependencies/impacket/spnego.py:252: identifier_use: Example 3: Using identifier "SupportedMech" (2 total uses in this function).
curl-7.58.0/tests/python_dependencies/impacket/spnego.py:229: remediation: Should identifier "SuportedMech" be replaced by "SupportedMech"?
Closes#2379
Refuse to operate when given path components featuring byte values lower
than 32.
Previously, inserting a %00 sequence early in the directory part when
using the 'singlecwd' ftp method could make curl write a zero byte
outside of the allocated buffer.
Test case 340 verifies.
CVE-2018-1000120
Reported-by: Duy Phan Thanh
Bug: https://curl.haxx.se/docs/adv_2018-9cd6.html
When targeting x64, MinGW-w64 complains about conversions between
32-bit long and 64-bit pointers. Fix this by reusing the
GNUTLS_POINTER_TO_SOCKET_CAST / GNUTLS_SOCKET_TO_POINTER_CAST logic
from gtls.c, moving it to warnless.h as CURLX_POINTER_TO_INTEGER_CAST /
CURLX_INTEGER_TO_POINTER_CAST.
Closes https://github.com/curl/curl/pull/2341
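A sketch of the underlying idea, not the exact curl macros: route every
conversion through a pointer-sized integer so a 64-bit pointer is never
converted directly to or from a 32-bit long.

  #include <stdint.h>

  #define POINTER_TO_INTEGER_CAST(p) ((intptr_t)(void *)(p))
  #define INTEGER_TO_POINTER_CAST(i) ((void *)(intptr_t)(i))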
- Add new option CURLOPT_RESOLVER_START_FUNCTION to set a callback that
will be called every time before a new resolve request is started
(ie before a host is resolved) with a pointer to backend-specific
resolver data. Currently this is only useful for ares.
- Add new option CURLOPT_RESOLVER_START_DATA to set a user pointer to
pass to the resolver start callback.
Closes https://github.com/curl/curl/pull/2311
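A minimal sketch of the new callback pair (the logging and the user-data
string are illustrative; returning zero lets the resolve proceed):

  #include <stdio.h>
  #include <curl/curl.h>

  static int resolver_start(void *resolver_state, void *reserved,
                            void *userdata)
  {
    (void)reserved;
    fprintf(stderr, "resolve starting (%s), backend state %p\n",
            (const char *)userdata, resolver_state);
    return 0; /* non-zero would abort the resolve */
  }

  /* setup:
     curl_easy_setopt(curl, CURLOPT_RESOLVER_START_FUNCTION, resolver_start);
     curl_easy_setopt(curl, CURLOPT_RESOLVER_START_DATA, "my-transfer"); */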
This enables users to preresolve but still take advantage of happy
eyeballs and trying multiple addresses if some are not connecting.
Ref: https://github.com/curl/curl/pull/2260
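A minimal sketch of preresolving to more than one address (host, port and
addresses are illustrative); libcurl can then still fall back between them:

  #include <curl/curl.h>

  static struct curl_slist *preresolve(CURL *curl)
  {
    struct curl_slist *resolve =
      curl_slist_append(NULL, "example.com:443:192.0.2.1,192.0.2.2");
    curl_easy_setopt(curl, CURLOPT_RESOLVE, resolve);
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    return resolve; /* free with curl_slist_free_all() after the transfer */
  }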
Test 319 checks proper raw mode data with non-chunked gzip
transfer-encoded server data.
Test 326 checks raw mode with chunked server data.
Bug: #2303
Closes#2308
RFC 5321 4.1.1.4 specifies the CRLF terminating the DATA command
should be taken into account when chasing the <CRLF>.<CRLF> end marker.
Thus a leading dot character in data is also subject to escaping.
Tests 911 and test server are adapted to this situation.
New tests 951 and 952 check proper handling of initial dot in data.
Closes#2304
Whenever an expected pattern syntax rule cannot be matched, the
character starting the rule loses its special meaning and the parsing
is resumed:
- A backslash at the end of the pattern string matches itself.
- Error in [:keyword:] results in set containing :\[dekorwy.
Unit test 1307 updated for this new situation.
Closes#2273
... since the libc-provided ones are locale dependent in a way we don't
want. Also, the "native" isalnum() (for example) works differently on
different platforms, which caused test 1307 failures on macOS only.
Closes#2269
If CURL_DOES_CONVERSION is enabled, uploaded LFs are mapped to CRLFs,
giving a result that is different from what is expected.
This commit avoids using CURLOPT_TRANSFERTEXT and directly encodes data
to upload in ascii.
Bug: https://github.com/curl/curl/pull/1872
Make curl_getdate() handle dates before 1970 as well (returning negative
values).
Make test 517 test dates for 64-bit time_t.
This fixes bug (3) mentioned in #2238
Closes#2250
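A minimal sketch (the date string is illustrative): with a 64-bit time_t,
a pre-1970 date now yields a negative value instead of a failure.

  #include <stdio.h>
  #include <time.h>
  #include <curl/curl.h>

  int main(void)
  {
    time_t t = curl_getdate("Sun, 06 Nov 1960 08:49:37 GMT", NULL);
    printf("%lld\n", (long long)t); /* negative: before the epoch */
    return 0;
  }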
... unless CURLOPT_UNRESTRICTED_AUTH is set to allow them. This matches how
curl already handles Authorization headers created internally.
Note: this changes behavior slightly, for the sake of reducing mistakes.
Added test 317 and 318 to verify.
Reported-by: Craig de Stigter
Bug: https://curl.haxx.se/docs/adv_2018-b3bf.html
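A minimal sketch (header value and URL are illustrative): a custom
Authorization header is now dropped when a redirect leaves the original
host, unless the application deliberately opts back in.

  #include <curl/curl.h>

  static struct curl_slist *setup(CURL *curl)
  {
    struct curl_slist *hdrs =
      curl_slist_append(NULL, "Authorization: Bearer s3cr3t");
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/start");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    /* only if sending credentials cross-host is really intended:
       curl_easy_setopt(curl, CURLOPT_UNRESTRICTED_AUTH, 1L); */
    return hdrs; /* free with curl_slist_free_all() after the transfer */
  }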
Get screen width from the environment variable COLUMNS first, if set. If
not, use ioctl(). If neither works, assume 79.
Closes#2242
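A minimal sketch of that fallback order (not the tool's exact code; the
sanity bounds are illustrative):

  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  static int get_terminal_width(void)
  {
    const char *columns = getenv("COLUMNS");
    if(columns) {
      char *endptr;
      long width = strtol(columns, &endptr, 10);
      if(*endptr == '\0' && width > 20 && width < 10000)
        return (int)width;       /* 1: COLUMNS environment variable */
    }
    {
      struct winsize ws;
      if(ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == 0 && ws.ws_col > 0)
        return ws.ws_col;        /* 2: terminal ioctl */
    }
    return 79;                   /* 3: last-resort default */
  }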
The "refresh" is for the -# output when no total transfer size is
known. It will now only use a single updated line even for this case:
The "-=O=-" ship moves when data is transferred. The four flying
"hashes" move (on a sine wave) on each refresh, independent of data.
vtls.c:multissl_init() might do a curl_free() call so strip that out to
make this work with more builds. We just want to verify that
memorytracking works so skipping one line is no harm.
A mime tree attached to an easy handle using CURLOPT_MIMEPOST is
strongly bound to the handle: there is a pointer to the easy handle in
each item of the mime tree and following the parent pointer list
of mime items ends in a dummy part stored within the handle.
Because of this binding, a mime tree cannot be shared between different
easy handles, thus it needs to be cloned upon easy handle duplication.
There is no way for the caller to get the duplicated mime tree
handle: it is then set to be automatically destroyed upon freeing the
new easy handle.
New test 654 checks proper mime structure duplication/release.
Add a warning note in curl_mime_data_cb() documentation about sharing
user data between duplicated handles.
Closes#2235
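A minimal sketch of the resulting ownership rules (field name and data are
illustrative): the clone created by curl_easy_duphandle() owns its copied
mime tree, while the original tree stays with the caller.

  #include <curl/curl.h>

  static void duplicate_mime_post(CURL *easy)
  {
    curl_mime *mime = curl_mime_init(easy);
    curl_mimepart *part = curl_mime_addpart(mime);
    curl_mime_name(part, "field");
    curl_mime_data(part, "hello", CURL_ZERO_TERMINATED);
    curl_easy_setopt(easy, CURLOPT_MIMEPOST, mime);

    {
      CURL *copy = curl_easy_duphandle(easy); /* mime tree is cloned */
      /* ... use both handles ... */
      curl_easy_cleanup(copy); /* the cloned tree is freed with the copy */
    }

    curl_easy_cleanup(easy);
    curl_mime_free(mime);      /* the original tree is still ours to free */
  }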
The decoding loop implementation did not handle the case where all
received data is consumed by the Brotli decoder while the amount of decoded
data held internally by the decoder is greater than CURL_MAX_WRITE_SIZE.
For content with an unencoded length greater than CURL_MAX_WRITE_SIZE this
could result in the loss of data at the end of the content.
Closes#2194
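A sketch of the corrected loop shape (the buffer size stands in for
CURL_MAX_WRITE_SIZE and delivery is left as a comment): keep calling the
decoder while it reports pending output, even after the input is fully
consumed.

  #include <stddef.h>
  #include <stdint.h>
  #include <brotli/decode.h>

  static int decode_chunk(BrotliDecoderState *state,
                          const uint8_t *in, size_t inlen)
  {
    size_t avail_in = inlen;
    const uint8_t *next_in = in;
    BrotliDecoderResult r;

    do {
      uint8_t out[16384];
      uint8_t *next_out = out;
      size_t avail_out = sizeof(out);

      r = BrotliDecoderDecompressStream(state, &avail_in, &next_in,
                                        &avail_out, &next_out, NULL);
      if(r == BROTLI_DECODER_RESULT_ERROR)
        return -1;
      /* deliver the (sizeof(out) - avail_out) decoded bytes here */
    } while(r == BROTLI_DECODER_RESULT_NEEDS_MORE_OUTPUT);
    return 0;
  }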
Move curl_mime_initpart() and curl_mime_cleanpart() calls to lower-level
functions dealing with UserDefined structure contents.
This avoids memory leaks of curl-generated part mime headers.
New test 2073 checks this using the cli tool --next option: it
triggers a valgrind error if the bug is present.
Bug: https://curl.haxx.se/mail/lib-2017-12/0060.html
Reported-by: Martin Galvan
- When zlib version is < 1.2.0.4, process gzip trailer before considering
extra data as an error.
- Inflate with Z_BLOCK instead of Z_SYNC_FLUSH to maximize correct data
and minimize corrupt data output.
- Do not try to restart deflate decompression in raw mode if output has
started or if the leading data is not available anymore.
- New test 232 checks inflating raw-deflated content.
Closes#2068
This reverts commit 9ffad8eb13.
It was actually added rather recently in 8e8afa82cb due to a crash
that would otherwise happen in the RTSP code. As I don't think we've
fixed that behavior yet, we had better keep this workaround until we have
a proper fix.
Prune the DNS cache immediately after the dns entry is unlocked in
multi_done. Timed-out entries will then get discarded in a more orderly
fashion.
Test 506 is updated.
Reported-by: Oleg Pudeyev
Fixes#2169
Closes#2170
That data is only ever used by the CURLOPT_INTERLEAVEFUNCTION callback
and that option isn't set or used by the curl tool!
Updates the 9 tests that verify --libcurl
Closes#2167