As it was just unnecessary duplicated information already stored in the
'per_transfer' struct, which is mostly around anyway.
The duplicated pointer caused problems when the code flow was aborted
before the dupe was filled in and could cause a NULL pointer access.
Reported-by: Brian Carpenter
Fixes #4807
Closes #4810
- When creating a directory hierarchy do not error when mkdir fails due
to error EACCES (13) "access denied".
Some file systems allow for directory traversal; in that case it should
be possible to create child directories even when permission to the
parent directory is restricted.
This is a regression caused by me in f16bed0 (precedes curl-7_61_1).
Basically I had assumed that if a directory already existed it would
fail only with error EEXIST, and not error EACCES. The latter may
happen if the directory exists but has certain restricted permissions.
Reported-by: mbeifuss@users.noreply.github.com
Fixes https://github.com/curl/curl/issues/4796
Closes https://github.com/curl/curl/pull/4797
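A minimal sketch of the resulting logic (not curl's actual code), assuming a
POSIX mkdir(): treat EACCES like EEXIST while walking the hierarchy, since the
directory may already exist and be traversable even though we lack permission
on it.

  #include <sys/stat.h>
  #include <errno.h>

  /* create one component of the path, tolerating "already exists" and
     "permission denied" so traversal-only parents do not abort the walk */
  static int create_dir_component(const char *name)
  {
    if(mkdir(name, 0750) == -1 &&
       errno != EEXIST && errno != EACCES)
      return -1; /* a real failure */
    return 0;    /* created, or assumed present and traversable */
  }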
Previously it would end up with an uninitialized memory buffer that
would lead to a crash or junk getting output.
Added test 1271 to verify.
Reported-by: Brian Carpenter
Closes #4786
Change a series of error outputs to use errorf().
Only errors that are due to mistakes in command-line option usage should
use helpf(); other types of errors in the tool should rather use
errorf().
Closes #4691
Add support for CURLSSLOPT_NO_PARTIALCHAIN in CURLOPT_PROXY_SSL_OPTIONS
and OS400 package spec.
Also I added the option to the NameValue list in the tool even though it
isn't exposed as a command-line option (...yet?). (NameValue stringizes
the option name for the curl cmd -> libcurl source generator)
Follow-up to 564d88a which added CURLSSLOPT_NO_PARTIALCHAIN.
Ref: https://github.com/curl/curl/pull/4655
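For reference, a small sketch of how an application can now apply the option
to the proxy TLS connection as well as the server one (error checking
omitted):

  #include <curl/curl.h>

  static CURL *make_handle(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* disable partial-chain trust for both server and proxy TLS */
      curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS,
                       (long)CURLSSLOPT_NO_PARTIALCHAIN);
      curl_easy_setopt(curl, CURLOPT_PROXY_SSL_OPTIONS,
                       (long)CURLSSLOPT_NO_PARTIALCHAIN);
      /* ... set CURLOPT_URL, CURLOPT_PROXY etc before performing ... */
    }
    return curl;
  }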
- Disable warning C4127 "conditional expression is constant" globally
in curl_setup.h for when building with Microsoft's compiler.
This mainly affects building with the Visual Studio project files found
in the projects dir.
Prior to this change the cmake and winbuild build systems already
disabled 4127 globally for when building with Microsoft's compiler.
Also, 4127 was already disabled for all build systems in the limited
circumstance of the WHILE_FALSE macro which disabled the warning
specifically for while(0). This commit removes the WHILE_FALSE macro and
all other cruft in favor of disabling globally in curl_setup.
Background:
We have various macros that cause 0 or 1 to be evaluated, which would
cause warning C4127 in Visual Studio. For example this causes it:
#define Curl_resolver_asynch() 1
The full behavior is not clearly defined and is inconsistent across
versions. However, it is documented that since VS 2015 Update 3
Microsoft has addressed this somewhat, but not entirely; for example,
while(true) no longer warns.
Prior to this change some C4127 warnings occurred when I built with
Visual Studio using the generated projects in the projects dir.
Closes https://github.com/curl/curl/pull/4658
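Roughly the shape such a global disable takes in a shared header (a sketch,
not the exact upstream snippet):

  #if defined(_MSC_VER)
  /* C4127: "conditional expression is constant", triggered by things
     like do {...} while(0) and macros that expand to 0 or 1 */
  #pragma warning(disable:4127)
  #endif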
Attempt to unpause a busy read in the CURLOPT_XFERINFOFUNCTION.
When uploading from stdin in non-blocking mode, a delay in reading
the stream (EAGAIN) causes curl to pause sending data
(CURL_READFUNC_PAUSE). Prior to this change, a busy read was
detected and unpaused only in the CURLOPT_WRITEFUNCTION handler.
This change performs the same busy read handling in a
CURLOPT_XFERINFOFUNCTION handler.
Fixes #2051
Closes #4599
Reported-by: bdry on github
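A sketch of the idea in libcurl API terms; readable_again() is a hypothetical
stand-in for the tool's own "the paused stdin may have data now" state, while
curl_easy_pause() and the CURLOPT_XFERINFOFUNCTION plumbing are the real API:

  #include <curl/curl.h>

  /* hypothetical stand-in for "the paused read may succeed now" */
  static int readable_again(void)
  {
    return 1;
  }

  static int xferinfo_cb(void *clientp, curl_off_t dltotal, curl_off_t dlnow,
                         curl_off_t ultotal, curl_off_t ulnow)
  {
    CURL *easy = (CURL *)clientp; /* easy handle passed via XFERINFODATA */
    (void)dltotal; (void)dlnow; (void)ultotal; (void)ulnow;
    if(readable_again())
      curl_easy_pause(easy, CURLPAUSE_CONT); /* resume the paused upload */
    return 0; /* returning non-zero aborts the transfer */
  }

  /* setup:
     curl_easy_setopt(easy, CURLOPT_XFERINFOFUNCTION, xferinfo_cb);
     curl_easy_setopt(easy, CURLOPT_XFERINFODATA, easy);
     curl_easy_setopt(easy, CURLOPT_NOPROGRESS, 0L); */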
Starting with this change, when doing parallel transfers without this
option set, curl will prefer to create new transfers multiplexed on an
existing connection rather than creating a brand new one.
--parallel-immediate can be set to tell curl to prefer to use new
connections rather than to wait and try to multiplex.
libcurl-wise, this means that curl will set CURLOPT_PIPEWAIT by default
on parallel transfers.
Suggested-by: Tom van der Woerdt
Closes #4500
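In libcurl terms this corresponds to setting CURLOPT_PIPEWAIT on each added
easy handle, roughly as in this sketch:

  #include <curl/curl.h>

  static void add_one(CURLM *multi, const char *url)
  {
    CURL *easy = curl_easy_init();
    if(easy) {
      curl_easy_setopt(easy, CURLOPT_URL, url);
      /* prefer waiting for an existing connection to become available
         for multiplexing over opening a brand new connection */
      curl_easy_setopt(easy, CURLOPT_PIPEWAIT, 1L);
      curl_multi_add_handle(multi, easy);
    }
  }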
Regression from e59371a493 (7.67.0)
Added tests 490, 491 and 492 to verify the functionality.
Reported-by: Kamil Dudka
Reported-by: Anderson Sasaki
Fixes #4588
Closes #4591
- If server header Retry-After is being used for retry sleep time then
show that value to the user instead of the normal retry sleep time.
This is a follow-up to 640b973 (7.66.0) which changed curl tool so that
the value from Retry-After header overrides other retry timing options.
Closes https://github.com/curl/curl/pull/4498
New option that allows a user to ONLY switch off curl's progress meter
and leave everything else in "talkative" mode.
Reported-by: Piotr Komborski
Fixes #4422
Closes #4470
This should again enable crazy-large download ranges of the style
[1-10000000] that otherwise easily ran out of memory starting in 7.66.0
when this new handle allocating scheme was introduced.
Reported-by: Peter Sumatra
Fixes #4393
Closes #4438
When looping around the ranges and given URLs to create transfers, all
errors should exit the loop and return. Previously it would keep
looping.
Reported-by: SumatraPeter on github
Bug: #4393
Closes #4396
This commit fixes a regression introduced by curl-7_65_3-5-gb88940850.
Detected by tests 2005, 2008, 2009, 2010, 2011, and 2012 with valgrind
and libmetalink enabled.
Closes #4326
Even though it cannot fall back to a lower HTTP version automatically. The
safer way to upgrade remains via CURLOPT_ALTSVC.
CURLOPT_H3 no longer has any bits that do anything and might be removed
before we remove the experimental label.
Updated the curl tool accordingly to use "--http3".
Closes #4197
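For completeness, a sketch of asking libcurl for HTTP/3 directly; the
CURL_HTTP_VERSION_3 value used here is what current libcurl offers and is
assumed to match this change. There is no automatic fallback to a lower HTTP
version.

  #include <curl/curl.h>

  static void fetch_http3(const char *url)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, url);
      /* use HTTP/3 directly, no fallback to HTTP/2 or HTTP/1.1 */
      curl_easy_setopt(curl, CURLOPT_HTTP_VERSION,
                       (long)CURL_HTTP_VERSION_3);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
  }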
Repeatedly we see problems where using curl_multi_wait() is difficult or
just awkward because if it has no file descriptor to wait for
internally, it returns immediately and leaves it to the caller to wait
for a small amount of time in order to avoid occasional busy-looping.
This is often missed or misunderstood, leading to underperforming
applications.
This change introduces curl_multi_poll() as a replacement drop-in
function that accepts the exact same set of arguments. This function
works identically to curl_multi_wait() - EXCEPT - for the case when
there's nothing to wait for internally, as then this function will by
itself wait for a "suitable" short time before it returns. This
effectively avoids all risks of busy-looping and should also make it less
likely that apps "over-wait".
This also changes the curl tool to use this function internally when
doing parallel transfers and changes curl_easy_perform() to use it
internally.
Closes #4163
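A minimal transfer loop using the new call ('multi' is assumed to be an
already created multi handle with transfers added):

  #include <curl/curl.h>

  static void run(CURLM *multi)
  {
    int still_running = 1;
    while(still_running) {
      CURLMcode mc = curl_multi_perform(multi, &still_running);
      if(!mc && still_running)
        /* waits for activity, or at most 1000 ms, but never returns
           "instantly" just because there is nothing to wait for */
        mc = curl_multi_poll(multi, NULL, 0, 1000, NULL);
      if(mc)
        break; /* libcurl error */
    }
  }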
As the plan has been laid out in DEPRECATED. Update docs accordingly and
verify in test 1174. Now requires the option to be set to allow HTTP/0.9
responses.
Closes #4191
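A sketch of opting back in from an application, using the existing
CURLOPT_HTTP09_ALLOWED option:

  #include <curl/curl.h>

  static void fetch_legacy(const char *url)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, url);
      /* explicitly allow HTTP/0.9 responses, now refused by default */
      curl_easy_setopt(curl, CURLOPT_HTTP09_ALLOWED, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
  }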
... to avoid integer overflows later when multiplying with 1000 to
convert seconds to milliseconds.
Added test 1269 to verify.
Reported-by: Jason Lee
Closes #4166
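The general shape of such a guard (a sketch with illustrative names, not the
tool's exact code):

  #include <limits.h>

  /* convert a user-supplied number of seconds to milliseconds, rejecting
     values that would overflow a long once multiplied by 1000 */
  static int secs2ms(long secs, long *ms)
  {
    if(secs < 0 || secs > LONG_MAX / 1000)
      return 1; /* out of range */
    *ms = secs * 1000;
    return 0;
  }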
Use configure --with-ngtcp2 or --with-quiche.
Using either option will enable an HTTP3 build.
Co-authored-by: Alessandro Ghedini <alessandro@ghedini.me>
Closes #3500
This is done by making sure each individual transfer is first added to a
linked list, since then they can be performed serially or, at will, in
parallel.
Closes #3804
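Conceptually the structure looks something like this (field names are
illustrative, not the tool's exact layout):

  #include <curl/curl.h>

  struct per_transfer {
    struct per_transfer *next; /* next transfer in the pending list */
    CURL *curl;                /* the easy handle for this transfer */
    /* ... per-transfer state: URL, output, flags, etc ... */
  };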
... as larger values would rather indicate something silly (and could
potentially cause buffer problems).
Reported-by: pendrek at hackerone
Closes #4114
Commit 61faa0b420 fixed the progress bar
width calculation to avoid integer overflow, but failed to account for
the fact that initial_size is initialized to -1 when the file size is
retrieved from the remote on an upload, causing another signed integer
overflow. Fix by separately checking for this case before the width
calculation.
Closes #3984
Reported-by: Brian Carpenter (Geeknik Labs)
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
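The essence of the fix, as a sketch (names illustrative): an unknown remote
size is reported as -1 and must be clamped before it enters the width
arithmetic.

  #include <curl/curl.h>  /* curl_off_t */

  static curl_off_t clamp_initial_size(curl_off_t initial_size)
  {
    /* -1 means "size unknown"; treat it as 0 so the later width and
       percentage math never operates on a negative value */
    return (initial_size < 0) ? 0 : initial_size;
  }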
- Revert all commits related to the SASL authzid feature since the next
release will be a patch release, 7.65.1.
Prior to this change CURLOPT_SASL_AUTHZID / --sasl-authzid was destined
for the next release, assuming it would be a feature release 7.66.0.
However instead the next release will be a patch release, 7.65.1 and
will not contain any new features.
After the patch release, the reverted commits can be restored by using
cherry-pick:
git cherry-pick a14d72c a9499ff 8c1cc36 c2a8d52 0edf690
Details for all reverted commits:
Revert "os400: take care of CURLOPT_SASL_AUTHZID in curl_easy_setopt_ccsid()."
This reverts commit 0edf6907ae.
Revert "tests: Fix the line endings for the SASL alt-auth tests"
This reverts commit c2a8d52a13.
Revert "examples: Added SASL PLAIN authorisation identity (authzid) examples"
This reverts commit 8c1cc369d0.
Revert "curl: --sasl-authzid added to support CURLOPT_SASL_AUTHZID from the tool"
This reverts commit a9499ff136.
Revert "sasl: Implement SASL authorisation identity via CURLOPT_SASL_AUTHZID"
This reverts commit a14d72ca2f.
Using the memdebug.h mem-leak feature, I noticed 2 calls like:
FILE tool_parsecfg.c:70 fopen("c:\Users\Gisle\AppData\Roaming\_curlrc","rt")
FILE tool_parsecfg.c:114 fopen("c:\Users\Gisle\AppData\Roaming\_curlrc","rt")
No need for 'fopen()', 'fclose()' and a 'fopen()' yet again.
They serve very little purpose and mostly just add noise. Most of them
have been around for a very long time. I read them all before removing
or rephrasing them.
Ref: #3876
Closes #3883
... since libcurl has started to be totally unaware of options for
disabled protocols, they now return an error.
Bug: c9c5304dd4 (commitcomment-33533937)
Reported-by: Marcel Raad
Closes #3886
- remove unused variables
- declare conditionally used variables conditionally
- suppress unused variable warnings in the CMake tests
- remove dead variable stores
- consistently use WIN32 macro to detect Windows
Closes https://github.com/curl/curl/pull/3739
Commit f5bc578f4c reintroduced the
warning fixed in commit 2f5f31bb57.
Extend fhnd's scope and reuse that variable instead of calling
_get_osfhandle a second time to fix the warning again.
Closes https://github.com/curl/curl/pull/3718
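The pattern, sketched (Windows-only, illustrative): query the OS handle once
and reuse the variable instead of calling _get_osfhandle() again.

  #ifdef _WIN32
  #include <io.h>
  #include <stdio.h>
  #include <stdint.h>
  #include <windows.h>

  static void use_stdout_handle(void)
  {
    intptr_t fhnd = _get_osfhandle(_fileno(stdout)); /* query once */
    if(fhnd != -1) {
      HANDLE os = (HANDLE)fhnd;
      /* reuse 'os'/'fhnd' for every later Win32 call instead of
         re-querying the handle */
      (void)os;
    }
  }
  #endif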
- Improve console detection.
Prior to this change WriteConsole could be called to write to a handle
that may not be a console, which would cause an error. This issue is
limited to character devices that are not also consoles such as the null
device NUL.
Bug: https://github.com/curl/curl/issues/3175#issuecomment-439068724
Reported-by: Gisle Vanem
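The standard detection pattern on Windows, sketched here as an assumption
about the fix: only call WriteConsole() when GetConsoleMode() confirms the
handle really is a console, otherwise fall back to WriteFile().

  #ifdef _WIN32
  #include <windows.h>

  static BOOL write_out(HANDLE h, const char *buf, DWORD len)
  {
    DWORD mode, written;
    if(GetConsoleMode(h, &mode))        /* succeeds only for consoles */
      return WriteConsoleA(h, buf, len, &written, NULL);
    return WriteFile(h, buf, len, &written, NULL); /* NUL, pipes, files */
  }
  #endif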
... and remove it from the dist tarball. It has served its time, it
barely gets updated anymore and "everything curl" is now covering all
this document once tried to include, and does it more and better.
In the compressed scenario, this removes ~15K data from the binary,
which is 25% of the -M output.
It remains in the git repo for now, for as long as the web site builds a
page using that as source. It renders poorly on the site (especially for
mobile users) so it's not even good there.
Closes #3587