This change inverts the order of processing of the --etag-compare and
--etag-save options so that --etag-compare is processed first. This in
turn allows using the same file name to compare and save an etag.
The original behavior of not failing if the etag file does not exist is
preserved.
Fixes #5179
Closes #5180
Replace the incomplete workaround regarding FD_CLOSE
only signalling once by instead doing a pre-check with
standard select and storing the result for later use.
select keeps triggering on closed sockets on Windows while
WSAEventSelect fires only once with data still available.
By doing the pre-check we do not run into a deadlock
due to waiting forever for another FD_CLOSE event.
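A rough sketch of the pre-check idea (illustrative only, not the actual
implementation; `sock` and `pre_ready` are made-up names):

  fd_set readfds;
  struct timeval zero = {0, 0};
  FD_ZERO(&readfds);
  FD_SET(sock, &readfds);
  /* zero timeout: poll the socket state without blocking */
  if(select((int)sock + 1, &readfds, NULL, NULL, &zero) > 0)
    pre_ready = 1;  /* remembered for later, so we never block waiting for
                       a second FD_CLOSE that WSAEventSelect will not send */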
Fix a race condition where waiting threads finish while events are
already being processed, which led to invalid or skipped events.
Use a mutex to check for one event at a time or do post-processing.
In addition to mutex-based locking, use a specific event as a signal.
Closes #5156
Load long values correctly (e.g. for http_code).
Use curl_off_t (not long) for:
- size_download (CURLINFO_SIZE_DOWNLOAD_T)
- size_upload (CURLINFO_SIZE_UPLOAD_T)
The unit for these values is bytes/second, not microseconds:
- speed_download (CURLINFO_SPEED_DOWNLOAD_T)
- speed_upload (CURLINFO_SPEED_UPLOAD_T)
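As a minimal illustration (not part of this patch) of reading these
counters through their *_T variants into curl_off_t variables, assuming
`curl` is an easy handle that has already completed a transfer:

  long http_code;
  curl_off_t dl_bytes, dl_speed;
  curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &http_code);   /* long */
  curl_easy_getinfo(curl, CURLINFO_SIZE_DOWNLOAD_T, &dl_bytes);  /* bytes */
  curl_easy_getinfo(curl, CURLINFO_SPEED_DOWNLOAD_T, &dl_speed); /* bytes/second */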
Fixes #5131
Closes #5152
Reported by the new script 'scripts/copyright.pl'. The script has a
regex whitelist for the files that don't need copyright headers.
Removed three (mostly useless) README files from docs/
Closes #5141
- send more data to make problems more obvious
- don't start the data with minus, it makes diffs harder to read
- skip the headers in the stdout comparison
- save to a file name to also verify 'filename_effective'
Ref: #5131
Update smbserver.py and negtelnetserver.py to be compatible with
Python 3 while staying backwards-compatible to support Python 2.
Fix string encoding and handling of echoed and transferred data.
Tested with both Python 2.7.17 and Python 3.7.7
Reported-by: Daniel Stenberg
Assisted-by: Kamil Dudka
Reviewed-by: Marcel Raad
Fixes #5104
Closes #5110
- Implement new option CURLSSLOPT_REVOKE_BEST_EFFORT and
--ssl-revoke-best-effort to allow a "best effort" revocation check.
A best effort revocation check ignores errors indicating that the
revocation check could not take place. The reasoning is described in
detail below and discussed further in the PR.
---
When running e.g. with Fiddler, the schannel backend fails with an
unhelpful error message:
Unknown error (0x80092012) - The revocation function was unable
to check revocation for the certificate.
Sadly, many enterprise users who are stuck behind MITM proxies suffer
the very same problem.
This has been discussed in plenty of issues:
https://github.com/curl/curl/issues/3727,
https://github.com/curl/curl/issues/264, for example.
In the latter, a Microsoft Edge developer even made the case that the
common behavior is to ignore issues when a certificate has no recorded
distribution point for revocation lists, or when the server is offline.
This is also known as "best effort" strategy and addresses the Fiddler
issue.
Unfortunately, this strategy was not chosen as the default for schannel
(and is therefore a backend-specific behavior: OpenSSL seems to happily
ignore the offline servers and missing distribution points).
To maintain backward-compatibility, we therefore add a new flag
(`CURLSSLOPT_REVOKE_BEST_EFFORT`) and a new option
(`--ssl-revoke-best-effort`) to select the new behavior.
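For illustration only, enabling the new behavior from libcurl code would
look roughly like this (plain easy-handle setup, not taken from the patch):

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* ask schannel for a "best effort" revocation check: ignore offline
       revocation servers and missing distribution points */
    curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS,
                     (long)CURLSSLOPT_REVOKE_BEST_EFFORT);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }

The command-line equivalent is the new --ssl-revoke-best-effort option.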
Due to the many related issues in Git for Windows and GitHub Desktop,
the plan is to make this behavior the default in these software packages.
The test 2070 was added to verify this behavior, adapted from 310.
Based-on-work-by: georgeok <giorgos.n.oikonomou@gmail.com>
Co-authored-by: Markus Olsson <j.markus.olsson@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Closes https://github.com/curl/curl/pull/4981
Makes curl_easy_getinfo() return, for "variable" numerical content, the
number set in the environment variable `CURL_TIME`.
Makes curl_version() return "variable" textual content, defined by the
environment variable `CURL_VERSION`. This guarantees a stable version
string which tests can compare against.
Assisted-by: Mathias Gumz
The test connects using SRP to "a server not supporting it", but modern
stunnel versions will silently accept it and remain happy. The test is
therefore faulty.
I haven't figured out how to make stunnel explicitly reject SRP-using
connects.
Reported-by: Marc Hörsken
Fixes #5105
Closes #5113
Users of the SMB tests will have to install impacket manually.
Reasoning: our in-tree version of impacket was quite outdated
and only compatible with Python 2 which is already end-of-life.
Upgrading to Python 3 and a compatible impacket version would
require importing additional Python-only and CPython-extension
dependencies. This would have hindered portability enormously.
Closes #5094
When extracting a <section> <part> and there's no </part> before
</section>, this now outputs an error and returns a wrong string to
make users spot the mistake.
Ref: #5070
Closes #5071
Even though the existing code can be fixed to run on Python 3, the
tests will fail because the Unicode transition makes the protocol data
invalid.
Follow-up to ee63837
Closes #5085
Recent gcc warns when byte count of strncpy() equals the destination
buffer size. Since the destination buffer is previously cleared and
the source string is always shorter, reducing the byte count by one
silences the warning without affecting the result.
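For illustration, the pattern in question looks like this (generic names,
not the actual code):

  #include <string.h>

  char buf[256];
  const char *source = "always shorter than buf";
  memset(buf, 0, sizeof(buf));          /* destination cleared beforehand */
  /* copy at most sizeof(buf) - 1 bytes: the cleared last byte keeps the
     string NUL-terminated and the reduced count silences the warning */
  strncpy(buf, source, sizeof(buf) - 1);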
Closes #5059
When using maximum code optimization level (-O3), valgrind wrongly
detects uses of uninitialized values in strcmp().
Preset buffers with all zeroes to avoid that.
This test does A LOT of *wakeup() calls and then calls curl_multi_poll()
twice. The first *poll() is then expected to return early and the second
not - as the first is supposed to drain the socketpair pipe.
It turns out however that when given "excessive" amounts of writes to
the pipe, some operating systems (the Solaris based are known) will
return EAGAIN before the pipe is drained, which in our test case causes
the second *poll() call to also abort early.
This change attempts to avoid the OS-specific behaviors in the test by
reducing the amount of wakeup calls from 1234567 to 10.
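A condensed sketch of what the test exercises (illustrative; `multi` is an
already created multi handle):

  int i, numfds;
  for(i = 0; i < 10; i++)          /* was 1234567 before this change */
    curl_multi_wakeup(multi);
  /* the first poll returns early because of the wakeups and is supposed
     to drain the socketpair pipe */
  curl_multi_poll(multi, NULL, 0, 1000, &numfds);
  /* the second poll is expected to wait for the full timeout */
  curl_multi_poll(multi, NULL, 0, 1000, &numfds);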
Reported-by: Andy Fiddaman
Fixes #5037
Closes #5058
New test 666 checks this is effective.
As the upload buffer size is significant in this kind of test, shorten it
in the similar test 652.
Fixes #4860
Closes #4833
Reported-by: RuurdBeerstra on github
Input buffer filling may delay the data sending if data reads are slow.
To overcome this problem, file and callback data reads no longer
accumulate in the buffer. All other data (memory data and mime framing)
is considered fast and is still concatenated in the buffer.
As this may strongly impact performance in terms of data overhead, an
early end-of-part-data check is added to spare a read call.
When encoding a part's data, an encoder may require more bytes than made
available by a single read. In this case, the above rule does not apply
and reads are performed until the encoder is able to deliver some data.
Tests 643, 644, 645, 650 and 654 have been adapted to the output data
changes, with test data size reduced to avoid the boredom of long lists of
1-byte chunks in verification data.
New test 667 checks mimepost using single-byte read callback with encoder.
New test 668 checks the end of part data early detection.
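To illustrate the kind of slow data source involved, here is a sketch of a
single-byte read callback attached with curl_mime_data_cb() (the
`slow_src` struct and all names are made up, not test code):

  struct slow_src {
    const char *data;
    size_t len;
    size_t pos;
  };

  static size_t slow_read(char *buffer, size_t size, size_t nitems, void *arg)
  {
    struct slow_src *s = (struct slow_src *)arg;
    (void)size;
    (void)nitems;                    /* deliver at most one byte per call */
    if(s->pos >= s->len)
      return 0;                      /* end of part data */
    buffer[0] = s->data[s->pos++];
    return 1;
  }

  /* attached to a mime part like: */
  curl_mime_data_cb(part, (curl_off_t)src.len, slow_read, NULL, NULL, &src);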
Fixes #4826
Reported-by: MrdUkk on github
In case a read callback returns a status (pause, abort, eof,
error) instead of a byte count, drain the bytes read so far but
remember this status for further processing.
This takes care of not losing data when pausing, and properly resumes a
paused mime structure when requested.
New tests 670-673 check unpausing cases, with easy or multi
interface and mime or form api.
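A minimal sketch of the pause/resume pattern these tests exercise (the
state struct and names are made up for illustration):

  #include <string.h>
  #include <curl/curl.h>

  struct cb_state {
    int want_pause;
    const char *data;
    size_t len, pos;
  };

  static size_t read_cb(char *buffer, size_t size, size_t nitems, void *arg)
  {
    struct cb_state *s = (struct cb_state *)arg;
    size_t room = size * nitems;
    if(s->want_pause) {
      s->want_pause = 0;
      return CURL_READFUNC_PAUSE;    /* a status, not a byte count */
    }
    if(s->pos >= s->len)
      return 0;                      /* eof */
    if(room > s->len - s->pos)
      room = s->len - s->pos;
    memcpy(buffer, s->data + s->pos, room);
    s->pos += room;
    return room;
  }

  /* later, when the application wants to continue the paused transfer: */
  curl_easy_pause(easy, CURLPAUSE_CONT);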
Fixes #4813
Reported-by: MrdUkk on github