By making sure never to send off more than the allowed number of bytes
per second the speed limit logic is given more room to actually work.
Reported-by: Fabian Keil
Bug: https://curl.se/mail/lib-2021-03/0042.html
Closes #6797
Both were used for the same purposes and there was no logical separation
between them. Combined, this also saves 16 bytes in less holes in my
test build.
Closes #6798
Otherwise libcurl is likely to reuse the connection again in the next
attempt since the connection reuse logic doesn't take downgrades into
account.
Reported-by: Anthony Ramine
Fixes #6788
Closes #6793
Otherwise, the transfer will be NULL in the trace function when the
early handshake details arrive and then curl won't show them.
Regression in 7.75.0
Reported-by: David Hu
Fixes #6783
Closes #6792
Instead of clearing the callback argument in disconnect, set it to the
(new) transfer to make sure the correct data is passed to the callbacks.
Follow-up to e467ea3bd9
Assisted-by: Patrick Monnerat
Closes #6787
After the recent conn/data refactor in this source file, this function
was mistakenly still getting the old struct pointer which would lead to
crash on servers with keyboard-interactive auth enabled.
Follow-up to a304051620 (shipped in 7.75.0)
Reported-by: Christian Schmitz
Fixes #6691
Closes #6782
To make sure the Host: header and the URL provide the same authority
portion when sent to the proxy, strip the default port number from the
URL if one was provided.
Reported-by: Michael Brown
Fixes #6769
Closes #6778
If libssh2_knownhost_init() returns NULL, like in an OOM situation, the
ssh session was freed but the pointer wasn't cleared which made libcurl
later call libssh2 to cleanup using the stale pointer.
Fixes #6764
Closes #6766
If we get a close_notify, treat that as EOF. If we get an EOF from the
TCP stream, treat that as an error (because we should have ended the
connection earlier, when we got a close_notify).
Closes #6763
- Document in DOH that some SSL settings are inherited but DOH hostname
and peer verification are not and are controlled separately.
- Document that CURLOPT_SSL_CTX_FUNCTION is inherited by DOH handles but
we're considering changing behavior to no longer inherit it. Request
feedback.
Closes https://github.com/curl/curl/pull/6688
When asked to resume a download, libcurl will convert that to HTTP logic
and if the entire file has already been transferred it will result in a
416 response from the HTTP server. With CURLOPT_FAILONERROR set in that
scenario, it should *not* lead to an error return.
Updated test 1156, added test 1273
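For illustration only (not part of the commit), this is roughly the setup
the fix concerns; the URL and the resume offset are placeholders:
    #include <curl/curl.h>

    /* Resume a download that may already be complete. With this fix, a
       416 reply to the resumed request no longer makes the transfer fail
       even though CURLOPT_FAILONERROR is enabled. */
    static CURLcode resume_maybe_complete(CURL *curl, curl_off_t have_bytes)
    {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/big.bin");
      curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE, have_bytes);
      curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);
      return curl_easy_perform(curl);
    }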
Reported-by: Jonathan Watt
Fixes #6740
Closes #6753
The duration of a connect and the total transfer are calculated from two
different time-stamps. The total timeout can therefore trigger before the
connect timeout expires, and we should make sure to acknowledge whichever
timeout is reached first.
This is especially notable when a transfer first sits in PENDING, as
that time is counted in the total time but the connect timeout is based
on the time since the handle changed to the CONNECT state.
The CONNECTTIMEOUT is per connect attempt. The TIMEOUT is for the entire
operation.
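As a usage reminder (the values below are arbitrary examples), the two
timeouts are set independently and the one reached first should now win:
    #include <curl/curl.h>

    static void set_timeouts(CURL *curl)
    {
      /* limit for each connect attempt */
      curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 10L);
      /* limit for the entire operation, including the connect phase */
      curl_easy_setopt(curl, CURLOPT_TIMEOUT, 30L);
    }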
Fixes #6744
Closes #6745
Reported-by: Andrei Bica
Assisted-by: Jay Satiro
Previously, rustls was using an on-stack array for TLS data. However,
crustls has an (unusual) requirement that buffers it deals with are
initialized before writing to them. By using calloc, we can ensure the
buffer is initialized once and then reuse it across calls.
Closes #6742
Align conditions for NTLM features between CMake and configure
builds by differentiating between USE_NTLM and USE_CURL_NTLM_CORE,
just like curl_setup.h does internally to detect support of:
- USE_NTLM: required for NTLM crypto authentication feature
- USE_CURL_NTLM_CORE: required for SMB protocol
Implement USE_WIN32_CRYPTO detection by checking for Crypt functions
in wincrypt.h which are not available in the Windows App environment.
Link advapi32 and crypt32 for Crypto API and Schannel SSL backend.
Fix condition of Schannel SSL backend in CMake build accordingly.
Reviewed-by: Marcel Raad
Closes #6277
Move the detection of the restricted Windows App environment
in curl_setup.h before the definition of USE_WIN32_CRYPTO
via included config-win32.h in case no build system is used.
Reviewed-by: Marcel Raad
Part of #6277
MAX_HSTS_SUBLEN and MAX_HSTS_SUBLENSTR were unused from the initial commit,
and most likely leftovers from early development. Remove them as they're not
used for anything.
Closes #6741
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
If after a transfer is complete Curl_GetFTPResponse() returns an error,
curl would not free the ftp->pathalloc block.
Found by torture-testing test 576
Closes #6737
This requires the latest main branch of crustls, which provides
rustls_client_config_builder_dangerous_set_certificate_verifier and
rustls_client_config_builder_set_enable_sni.
This refactors the session setup into its own function, and adds a new
function cr_hostname_is_ip. Because crustls doesn't support verification
of IP addresses, special handling is needed: We disable SNI and set a
placeholder hostname (which never actually gets sent on the wire).
Closes #6719
Curl_cookie_init can be called with data being NULL, and this can in turn
be passed to Curl_cookie_add, meaning that both functions must be careful
to only use data where it's checked for being a NULL pointer. The libpsl
support code does however dereference data without checking, so if we
indeed have an unset data pointer we cannot PSL check the cookiedomain.
This is currently not a reachable dereference, as the only caller with a
NULL data isn't passing a file to initialize cookies from, but since the
API has this contract let's ensure we hold it.
Closes #6731
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
- Change those CURLOPT_SSL_OPTIONS options in schannel and secure transport
that are not already evaluated via SSL_SET_OPTION to use that macro
instead of data->set.ssl.optname.
Example:
Evaluate SSL_SET_OPTION(no_revoke) instead of data->set.ssl.no_revoke.
This change is because options set via CURLOPT_SSL_OPTIONS
(data->set.ssl.optname) are separate from those set for HTTPS proxy via
CURLOPT_PROXY_SSL_OPTIONS (data->set.proxy_ssl.optname). The
SSL_SET_OPTION macro determines whether the connection is for HTTPS
proxy and, based on that, which option to evaluate.
Since neither Schannel nor Secure Transport backends currently support
HTTPS proxy in libcurl, this change is for posterity and has no other
effect.
Closes https://github.com/curl/curl/pull/6690
unescaped is coming from Curl_urldecode and not a unicode conversion
function, so reclaiming its memory should be performed with a normal
call to free rather than curlx_unicodefree. In reality this is the same
thing, as curlx_unicodefree is implemented as a call to free, but that's
not guaranteed to always hold. Using the curlx macro also presents
issues with memory debugging.
Closes #6671
Reviewed-by: Jay Satiro <raysatiro@yahoo.com>
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
openssl: use SSL_get_version to get connection protocol
Replace our bespoke get_ssl_version_txt with SSL_get_version.
We can get rid of a few lines of code, since SSL_get_version achieves
the exact same thing.
Closes #6665
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Signed-off-by: Jean-Philippe Menil <jpmenil@gmail.com>
Commit e06fa7462a removed support for libgcrypt, leaving only support
for nettle, which has been the default crypto library in GnuTLS for a
long time. There were however a few conditionals on USE_GNUTLS_NETTLE
which caused compilation errors in the metalink code (as it used the
gcrypt fallback instead as a result). See the
below autobuild for an example of the error:
https://curl.se/dev/log.cgi?id=20210225123226-30704#prob1
This removes all uses of USE_GNUTLS_NETTLE and also removes the
gcrypt support from the metalink code while at it.
Closes #6656
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Align header with project style of using named parameters in the
function prototypes to aid readability and self-documentation.
Closes #6653
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
... since the state machine might go to RATELIMITING and then back to
PERFORMING, doing once-per-transfer inits in that function is wrong. It
caused problems with receiving chunked HTTP and it set the PRETRANSFER
time much too often...
Regression from b68dc34af3 (shipped in 7.75.0)
Reported-by: Amaury Denoyelle
Fixes #6640
Closes #6641
(Unless 32-bit `time_t` is selected manually via the `_USE_32BIT_TIME_T`
mingw macro.)
Previously, 64-bit `time_t` was enabled on VS2005 and newer only, and
32-bit `time_t` was used on all other Windows builds.
Assisted-by: Jay Satiro
Closes#6636
- Use atexit to register a dbg cleanup function that closes the logfile.
LeakSanitizer (LSAN) calls _exit() instead of exit() when a leak is
detected on exit so the logfile must be closed explicitly or data could
be lost. Though _exit() does not call atexit handlers such as this,
LSAN's call to _exit() comes after the atexit handlers are called.
Prior to this change the logfile was not explicitly closed so it was
possible that if LSAN detected a leak and called _exit (which does
not flush or close files like exit) then the logfile could be missing
data. That could then cause curl's memanalyze to report false leaks
(eg a malloc was recorded to the logfile but the corresponding free was
discarded from the buffer instead of written to the logfile, then
memanalyze reports that as a leak).
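A minimal sketch of the approach (the names here are illustrative, not
necessarily the ones used in curl's memory debug code):
    #include <stdio.h>
    #include <stdlib.h>

    static FILE *dbg_logfile;

    /* registered with atexit(); runs before LSAN's later _exit() so any
       buffered log entries get flushed and the file is closed */
    static void dbg_close_log(void)
    {
      if(dbg_logfile) {
        fclose(dbg_logfile);
        dbg_logfile = NULL;
      }
    }

    void dbg_open_log(const char *name)
    {
      dbg_logfile = fopen(name, "w");
      if(dbg_logfile)
        atexit(dbg_close_log);
    }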
Ref: https://github.com/google/sanitizers/issues/1374
Bug: https://github.com/curl/curl/pull/6591#issuecomment-780396541
Closes https://github.com/curl/curl/pull/6620
- Change the Windows char <-> UTF-8 conversion functions to return an
allocated copy of the passed in string instead of the original.
Prior to this change the curlx_convert_ functions would, as what I
assume was an optimization, not make a copy of the passed in string if
no conversion was required. No conversion is required in non-UNICODE
Windows builds since our tchar strings are type char and remain in
whatever the passed in encoding is, which is assumed to be UTF-8 but may
be some other encoding.
In contrast the UNICODE Windows builds require conversion
(wchar <-> char) and do return a copy. That inconsistency could lead to
programming errors where the developer expects a copy, and does not
realize that won't happen in all cases.
Closes https://github.com/curl/curl/pull/6602
- add CURLINFO_REFERER libcurl option
- add --write-out '%{referer}' command-line option
- extend --xattr command-line option to fill user.xdg.referrer.url extended
attribute with the referrer (if there was any)
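A minimal usage sketch of the new libcurl option (the command-line
equivalent being --write-out '%{referer}'):
    #include <stdio.h>
    #include <curl/curl.h>

    static void show_referer(CURL *curl)
    {
      char *referer = NULL;
      /* returns the Referer: header value used in the request, if any */
      if(curl_easy_getinfo(curl, CURLINFO_REFERER, &referer) == CURLE_OK &&
         referer)
        printf("referer: %s\n", referer);
    }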
Closes #6591
... with the help of Curl_resolver_error() which now is moved from
asyn-thread.c and is provided globally for this purpose.
Follow-up to 35ca04ce1b
Makes test 1188 work for c-ares builds
Closes #6626
This caused a memory leak as the session id cache entry was still
erroneously stored with a NULL sessionid and that would later be treated
as not needing to be freed.
Reported-by: Gisle Vanem
Fixes #6616
Closes #6617
While working on documenting the states it dawned on me that step one is
to use more descriptive names on the states. This also changes prefix on
the states to make them shorter in the source.
State names NOT ending with *ing are transitional ones.
Closes #6612
The Curl_easy pointer struct entry in connectdata is now gone. Just
before commit 215db086e0 landed on January 8, 2021 there were 919
references to conn->data.
Closes #6608
- Share the shared object from the user's easy handle with the DOH
handles.
Prior to this change if the user had set a shared object with shared
cached DNS (CURL_LOCK_DATA_DNS) for their easy handle then that wasn't
used by any associated DOH handles, since they used the multi's default
hostcache.
This change means all the handles now use the same hostcache, which is
either the shared hostcache from the user created shared object if it
exists or if not then the multi's default hostcache.
Reported-by: Manuj Bhatia
Fixes https://github.com/curl/curl/issues/6589
Closes https://github.com/curl/curl/pull/6607
... but instead use a private alternative that points to the "driving
transfer" from the connection. We set the "user data" associated with
the connection to be the connectdata struct, but when we drive transfers
the code still needs to know the pointer to the transfer. We can change
the user data to become the Curl_easy handle, but with older nghttp2
versions we cannot dynamically update that pointer properly when
different transfers are used over the same connection.
Closes #6520
We still make the trace callback function get the connectdata struct
passed to it, since the callback is anchored on the connection.
Repeatedly updating the callback pointer to set 'data' with
SSL_CTX_set_msg_callback_arg() doesn't seem to work, probably because
there might already be messages in the queue with the old pointer.
This code therefore makes sure to set the "logger" handle before using
OpenSSL calls so that the right easy handle gets used for tracing.
Closes #6522
- New libcurl options CURLOPT_DOH_SSL_VERIFYHOST,
CURLOPT_DOH_SSL_VERIFYPEER and CURLOPT_DOH_SSL_VERIFYSTATUS do the
same as their respective counterparts.
- New curl tool options --doh-insecure and --doh-cert-status do the same
as their respective counterparts.
Prior to this change DOH SSL certificate verification settings for
verifyhost and verifypeer were supposed to be inherited respectively
from CURLOPT_SSL_VERIFYHOST and CURLOPT_SSL_VERIFYPEER, but due to a bug
were not. As a result DOH verification remained at the default, ie
enabled, and it was not possible to disable. This commit changes
behavior so that the DOH verification settings are independent and not
inherited.
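A usage sketch with the new options (the DOH URL is a placeholder): the
transfer's own verification and the DOH verification are now set
independently of each other.
    #include <curl/curl.h>

    static void setup_doh(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_DOH_URL, "https://doh.example/dns-query");
      /* verification of the transfer's own server stays on ... */
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);
      /* ... while the DOH requests are controlled separately */
      curl_easy_setopt(curl, CURLOPT_DOH_SSL_VERIFYPEER, 0L);
      curl_easy_setopt(curl, CURLOPT_DOH_SSL_VERIFYHOST, 0L);
    }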
Ref: https://github.com/curl/curl/pull/4579#issuecomment-554723676
Fixes https://github.com/curl/curl/issues/4578
Closes https://github.com/curl/curl/pull/6597
- Guard some Curl_async accesses with USE_CURL_ASYNC instead of
!CURLRES_SYNCH.
This is another follow-up to 8335c64 which moved the async struct from
the connectdata struct into the Curl_easy struct. A previous follow-up
6cd167a fixed building for sync resolver by guarding some async struct
accesses with !CURLRES_SYNCH. The problem is since DOH (DNS-over-HTTPS)
is available as an asynchronous secondary resolver the async struct may
be used even when libcurl is built for the sync resolver. That means
that CURLRES_SYNCH and USE_CURL_ASYNC may be defined at the same time.
Closes https://github.com/curl/curl/pull/6603
HTTP auth "accidentally" worked before this cleanup since the code would
always overwrite the connection credentials with the credentials from
the most recent transfer and since HTTP auth is typically done first
thing, this has not been an issue. It was still wrong and subject to
possible race conditions or future breakage if the sequence of functions
would change.
The data.set.str[] strings MUST remain unmodified exactly as set by the
user, and the credentials to use internally are instead set/updated in
state.aptr.*
Added test 675 to verify different credentials used in two requests done
over a reused HTTP connection, which previously behaved wrongly.
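Roughly the scenario the test covers (URL and credentials are placeholders):
two requests on the same easy handle, and thus potentially the same reused
connection, must each authenticate with their own credentials.
    #include <curl/curl.h>

    static void two_users(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/protected");
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_BASIC);

      curl_easy_setopt(curl, CURLOPT_USERPWD, "alice:password1");
      curl_easy_perform(curl);

      curl_easy_setopt(curl, CURLOPT_USERPWD, "bob:password2");
      curl_easy_perform(curl); /* must authenticate as bob, not alice */
    }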
Fixes #6542
Closes #6545
Rename it to 'httpwant' and make a cloned field in the state struct as
well for run-time updates.
Also: refuse non-supported HTTP versions. Verified with test 129.
Closes #6585
and rename it from 'ftp_list_only' since it is also used for SSH and
POP3. The state is updated internally for 'type=D' FTP URLs.
Added test case 1570 to verify.
Closes #6578
... and make sure the code never updates 'set.prefer_ascii' as it breaks
handle reuse which should use the setting as the user specified it.
Added test 1569 to verify: it first makes an FTP transfer with ';type=A'
and then another without type on the same handle and the second should
then use binary. Previously, curl failed this.
Closes #6578
If libcurl is built with Unicode support for Windows then it is assumed
the filename string is Unicode in UTF-8 encoding and it is converted to
UTF-16 to be passed to the wide character version of the respective
function (eg wstat). However the filename string may actually be in the
local encoding so, even if it successfully converted to UTF-16, if it
could not be stat/accessed then try again using the local code page
version of the function (eg wstat fails try stat).
We already do this with fopen (ie wfopen fails try fopen), so I think it
makes sense to extend it to stat and access functions.
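An illustrative sketch of the pattern (not the actual curl code;
convert_to_wchar() is a hypothetical helper standing in for the real
UTF-8 to UTF-16 conversion function):
    #ifdef _WIN32
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    wchar_t *convert_to_wchar(const char *utf8); /* hypothetical helper */

    int stat_utf8_then_local(const char *name, struct _stat *buf)
    {
      wchar_t *wname = convert_to_wchar(name);
      if(wname) {
        int rc = _wstat(wname, buf);
        free(wname);
        if(rc == 0)
          return 0;
      }
      /* the name may really be in the local code page: retry with _stat() */
      return _stat(name, buf);
    }
    #endif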
Closes https://github.com/curl/curl/pull/6514
- Use _imp.lib suffix only for Microsoft's compiler (MSVC).
Prior to this change library suffix _imp.lib was used for the import
library on Windows regardless of compiler.
With this change the other compilers should now use their default
suffix which should be .dll.a.
This change is motivated by the usage of pkg-config on MSYS2.
Indeed, when 'pkg-config --libs libcurl' is used, -lcurl is
passed to ld. The documentation of ld on Windows:
https://sourceware.org/binutils/docs/ld/WIN32.html
lists, in the 'direct linking to a dll' section, the pattern
of the searched import library, and libcurl_imp.lib is not there.
Closes https://github.com/curl/curl/pull/6225
Since the set value then risks getting used like that when the easy
handle is reused by the application.
Also: renamed the struct field from 'ftp_append' to 'remote_append'
since it is also used for SSH protocols.
Closes #6579
... as we ignore it anyway because servers don't report the correct size
and proftpd even blatantly returns a 550.
Updates a set of tests accordingly.
Reported-by: awesomenode on github
Fixes #6564
Closes #6565
- Separate ngtcp2_transport_params.
ngtcp2/ngtcp2@05d7adc made ngtcp2_transport_params separate from
ngtcp2_settings.
ngtcp2 master is required to build curl with http3 support.
Closes #6554
- Add support for services without region and service prefixes in
the URL endpoint (e.g. Min.IO, GCP, Yandex Cloud, Mail.Ru Cloud Solutions,
etc.) by providing region and service parameters via the aws-sigv4 option.
- Add a [:region[:service]] suffix to the aws-sigv4 option (see the usage
sketch after this list).
- Fix memory allocation errors.
- Refactor memory management.
- Use Curl_http_method() instead of STRING_CUSTOMREQUEST.
- Refactor canonical header generation.
- Remove repeated sha256_to_hex() usage.
- Add some docs fixes.
- Add some codestyle fixes.
- Add overloaded strndup() for debug - curl_dbg_strndup().
- Update tests.
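A usage sketch of the extended option (provider, region, service and the
endpoint below are placeholders):
    #include <curl/curl.h>

    static void setup_sigv4(CURL *curl)
    {
      /* endpoint without region/service in the hostname, so both are
         passed explicitly in the aws-sigv4 string */
      curl_easy_setopt(curl, CURLOPT_URL, "https://storage.example.com/bucket/key");
      curl_easy_setopt(curl, CURLOPT_AWS_SIGV4, "aws:amz:eu-west-2:s3");
      curl_easy_setopt(curl, CURLOPT_USERPWD, "ACCESS_KEY:SECRET_KEY");
    }
The equivalent command line would be along the lines of
curl --aws-sigv4 "aws:amz:eu-west-2:s3" --user ACCESS_KEY:SECRET_KEY <URL>.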
Closes #6524
... because it turns out several servers out there don't actually behave
correctly otherwise in spite of the fact that the SNI field is
specifically said to be case insensitive in RFC 6066 section 3.
Reported-by: David Earl
Fixes #6540
Closes #6543
- Update build instructions in packages/DOS/README
- Extend 'VPATH' with 'vquic' and 'vssh'.
- Allow 'Makefile.dist' to build both 'lib' and 'src'.
- Allow using the Windows hosted djgpp cross compiler to build for MSDOS
under Windows.
- 'USE_SSL' -> 'USE_OPENSSL'
- Added a 'link_EXE' macro. Etc, etc.
- Linking 'curl.exe' needs '$(CURLX_CFILES)' too.
- Do not pick-up '../lib/djgpp/*.o' files. Recompile locally.
- Generate a gzipped 'tool_hugehelp.c' if 'USE_ZLIB=1'.
- Remove 'djgpp-clean'
- Adapt to new C-ares directory structure
- Use conditional variable assignments
Clarify the 'conditional variable assignment' in 'common.dj'.
Closes https://github.com/curl/curl/pull/6382
This is a follow-up to 8315343 which several days ago moved the resolver
pointer into the async struct but did not update the code that uses it
when getaddrinfo is not present.
Closes https://github.com/curl/curl/pull/6536
As the info is already stored in the transfer handle anyway, there's no
need to carry around a duplicate buffer for the life-time of the handle.
Closes #6534
... and use 'int' for ports. We don't use 'unsigned short' since -1 is
still often used internally to signify "unknown value" and 0 - 65535 are
all valid port numbers.
Closes #6534
The old function should not be used anywhere anymore (the only remaining
gskit use has to be fixed to instead use Curl_poll or none at all).
The static function version is now called our_select() and is only built
if necessary.
Closes #6531
Readdir data, filenames and attributes are strictly related to the
transfer and not the connection. This also reduces the total size of the
fixed connectdata struct.
Closes #6519
On Windows an error number may be greater than INT_MAX and negative once
cast to int.
The assertion is checked only in debug builds.
Closes https://github.com/curl/curl/pull/6504
... if Curl_doh() returned NULL, this function gets called anyway, as in
an async procedure. Then the doh struct pointer is NULL and signifies an
OOM situation.
Follow-up to 6246a1d8c6
- Reorder some internal struct members so that less padding is used.
This is an attempt at saving a bit of space by packing some structs
(using pahole to find the holes) where it might make sense to do
so without losing readability.
I.e., I tried to avoid separating fields that seem grouped
together (like the cwd... fields in struct ftp_conn for instance).
Also abstained from touching fields behind conditional macros as
that quickly can get complicated.
Closes https://github.com/curl/curl/pull/6483
... instead of having it static within the Curl_easy struct. This takes
away 1176 bytes (18%) from the Curl_easy struct that aren't used very
often and instead makes the code allocate it when needed.
Closes #6492
The SOCKS code now uses the generic download buffer for temporary
storage during the connection procedure, instead of having its own
private 600 byte buffer that adds to the connectdata struct size. This
works fine because at this point the buffer is allocated but not yet used
for download since the connection hasn't completed.
This reduces the connection struct size by 22% on a 64bit arch!
The SOCKS buffer needs to be at least 600 bytes, and the download buffer
is guaranteed to never be smaller than 1000 bytes.
Closes #6491
By making the `magic` identifier the same size and at the same place
within the structs (easy, multi, share), libcurl will be able to more
reliably detect and safely error out if an application passes in the
wrong handle to APIs. Easier to detect and less likely to cause crashes
if done.
Such mixups can't be detected at compile-time due to them being
typedefed void pointers - unless `CURL_STRICTER` is defined.
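A simplified illustration of the idea (not the exact curl structs or magic
values): with the magic member at the same offset and of the same type in
every handle struct, one check works no matter which handle was passed in.
    /* example layout and values only */
    struct easy_like  { unsigned int magic; /* ... */ };
    struct multi_like { unsigned int magic; /* ... */ };

    #define EASY_MAGIC  0xc0ffee01u
    #define MULTI_MAGIC 0xc0ffee02u

    static int is_easy_handle(void *p)
    {
      return p && ((struct easy_like *)p)->magic == EASY_MAGIC;
    }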
Closes #6484
Since curl's own memory debugging system redefines free() calls to track
and fiddle with memory, it cannot be used on memory allocated by 3rd
party libraries.
Third party libraries SHOULD NOT require free() to release allocated
resources for this reason - and libs can use separate heap allocators
on some systems (like Windows) so free() doesn't necessarily work
anyway.
Filed as an issue with libssh: https://bugs.libssh.org/T268
Closes #6481
... in most cases instead of 'struct connectdata *' but in some cases in
addition to.
- We mostly operate on transfers and not connections.
- We need the transfer handle to log, store data and more. Everything in
libcurl is driven by a transfer (the CURL * in the public API).
- This work clarifies and separates the transfers from the connections
better.
- We should avoid "conn->data". Since individual connections can be used
by many transfers when multiplexing, making sure that conn->data
points to the current and correct transfer at all times is difficult
and has been notoriously error-prone over the years. The goal is to
ultimately remove the conn->data pointer for this reason.
Closes #6425
... so that a function can first use MIMEPOST and then set it to NULL to
reset it back to a blank POST.
Added test 584 to verify the fix.
Reported-by: Christoph M. Becker
Fixes #6455
Closes #6456
... instead of at end of the DO state. This makes the timer more
accurate for the protocols that use the DOING state (such as FTP), and
simplifies how the function (now called init_perform) is called.
The timer will then include the entire procedure up to PERFORM -
including all instructions for getting the transfer started.
Closes #6454
- During the end-of-headers response phase do not mark the tunnel
complete unless the response body was completely parsed/ignored.
Prior to this change if the entirety of a CONNECT response with chunked
encoding was not received by the time the final header was parsed then
the connection would be marked done prematurely, before all the chunked
data could be read in and ignored (since this is what we do with any
CONNECT response body) and the connection could not be used.
Bug: https://curl.se/mail/lib-2021-01/0033.html
Reported-by: Fabian Keil
Closes https://github.com/curl/curl/pull/6432
When doing a request with a request body expecting a 401/407 back, that
initial request is sent with a zero content-length. Test 177 and more.
Closes #6424
... so that Retry-After and other meta-content can still be used.
Added test 1634 to verify. Adjusted tests 194 and 281 since --fail now also
includes the header-terminating CRLF in the output before it exits.
Fixes #6408
Closes #6409
... to make build tools/valgrind warn if no curl_global_cleanup is
called.
This is conditionally only done for debug builds with the env variable
CURL_GLOBAL_INIT set.
Closes #6410
... and not in the connection setup, as for multiplexed transfers the
connection setup might be skipped and then the transfer would end up
without the set user-agent!
Reported-by: Flameborn on github
Assisted-by: Andrey Gursky
Assisted-by: Jay Satiro
Assisted-by: Mike Gelfand
Fixes #6312
Closes #6417
The wolfSSL TLS library defines NO_OLD_TLS in some of their build
configurations and that causes the library to be built without TLS 1.1.
For example if MD5 is explicitly disabled when building wolfSSL then
that defines NO_OLD_TLS and the library is built without TLS 1.1 [1].
Prior to this change attempting to build curl with a wolfSSL that was
built with NO_OLD_TLS would cause a link error: undefined reference
to wolfTLSv1_client_method.
[1]: https://github.com/wolfSSL/wolfssl/blob/v4.5.0-stable/configure.ac#L2366
Bug: https://curl.se/mail/lib-2020-12/0121.html
Reported-by: Julian Montes
Closes https://github.com/curl/curl/pull/6388
When doing HTTP authentication with a port number set via CURLOPT_PORT,
the code would previously let the URL's port number override it, as if
the request had been a redirect to an absolute URL.
Added test 1568 to verify.
Reported-by: UrsusArctos on github
Fixes #6397
Closes #6400
We currently use both spellings, the British "behaviour" and the American
"behavior". However "behavior" is more used in the project so I think
it's worth dropping the British variant.
Closes #6395
Extend the syntax of CURLOPT_RESOLVE strings: allow using a '+' prefix
(similar to the existing '-' prefix for removing entries) to add
DNS cache entries that will time out just like entries that are added
by libcurl itself.
Append " (non-permanent)" to info log message in case a non-permanent
entry is added.
Adjust relevant comments to reflect the new behavior.
Adjust documentation.
Extend unit1607 to test the new functionality.
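A usage sketch of the new prefix (host, port and address are placeholders):
    #include <curl/curl.h>

    static void preload_dns(CURL *curl)
    {
      struct curl_slist *resolve = NULL;
      /* '+' makes the entry time out like a normal DNS cache entry,
         instead of staying in the cache permanently */
      resolve = curl_slist_append(resolve, "+example.com:443:127.0.0.1");
      curl_easy_setopt(curl, CURLOPT_RESOLVE, resolve);
      /* ... perform transfers, then call curl_slist_free_all(resolve)
         once the list is no longer needed */
    }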
Closes #6294
Paused transfers should not be stopped due to slow speed even when
CURLOPT_LOW_SPEED_LIMIT is set. Additionally, the slow speed timer is
now reset when the transfer is unpaused - as otherwise it would easily
just trigger immediately after unpausing.
Reported-by: Harry Sintonen
Fixes #6358
Closes #6359
... as the socket might be readable all the time when paused and thus
causing a busy-loop.
Reported-by: Harry Sintonen
Reviewed-by: Jay Satiro
Fixes #6356
Closes #6357
It is a security process for HTTP.
It doesn't seem to be a standard, but it is used by some cloud providers.
Aws:
https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html
Outscale:
https://wiki.outscale.net/display/EN/Creating+a+Canonical+Request
GCP (I didn't test that this code works with GCP though):
https://cloud.google.com/storage/docs/access-control/signing-urls-manually
Most of the code is in lib/http_v4_signature.c
Information required by the algorithm:
- The URL
- The current time
- Some prefixes that are appended to some of the signature parameters.
The data extracted from the URL are: the URI, the region,
the host and the API type
example:
https://api.eu-west-2.outscale.com/api/latest/ReadNets
        ~~~ ~~~~~~~~~             ~~~~~~~~~~~~~~~~~~~~
         ^      ^                          ^
        /        \                        URI
   API type      region
Small description of the algorithm (a sketch of the key-derivation chain
follows this list):
- make canonical header using content type, the host, and the date
- hash the post data
- make canonical_request using custom request, the URI,
the get data, the canonical header, the signed header
and post data hash
- hash canonical_request
- make str_to_sign using one of the prefixes passed in as a parameter,
the date, the credential scope and the canonical_request hash
- compute hmac from date, using secret key as key.
- compute hmac from region, using above hmac as key
- compute hmac from api_type, using above hmac as key
- compute hmac from request_type, using above hmac as key
- compute hmac from str_to_sign using above hmac as key
- create the Authorization header using the above hmac, a prefix passed
in as a parameter, the date, and the above hash
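A compact sketch of that key-derivation chain, using a hypothetical
hmac_sha256(key, keylen, msg, msglen, out) helper; this is not the actual
code in the signature source file, only the sequence described above:
    #include <string.h>

    /* hypothetical helper: HMAC-SHA256 of msg keyed with key, 32-byte output */
    void hmac_sha256(const void *key, size_t keylen,
                     const void *msg, size_t msglen, unsigned char out[32]);

    void derive_signature(const char *secret_key, const char *date,
                          const char *region, const char *api_type,
                          const char *request_type, const char *str_to_sign,
                          unsigned char sig[32])
    {
      unsigned char k1[32], k2[32], k3[32], k4[32];

      hmac_sha256(secret_key, strlen(secret_key), date, strlen(date), k1);
      hmac_sha256(k1, 32, region, strlen(region), k2);
      hmac_sha256(k2, 32, api_type, strlen(api_type), k3);
      hmac_sha256(k3, 32, request_type, strlen(request_type), k4);
      hmac_sha256(k4, 32, str_to_sign, strlen(str_to_sign), sig);
    }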
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
Closes #5703
It seems the current hmac implementation uses md5 for the hash, while
the V4 signature requires sha256, so I've added the needed struct in
this commit.
I've added the functions that do the hmac in the v4 signature file as
static functions, in the next patch of the series, because they are
used only by this file.
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
The linux kernel does not report all ICMP errors back to userspace due
to historical reasons.
IP*_RECVERR sockopt must be turned on to have the correct behaviour
which is to pass all ICMP errors to userspace.
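A minimal sketch of turning that on for a socket (Linux-specific):
    #include <netinet/in.h>
    #include <sys/socket.h>

    static void enable_recverr(int fd, int ipv6)
    {
      int on = 1;
    #ifdef IP_RECVERR
      if(!ipv6)
        setsockopt(fd, IPPROTO_IP, IP_RECVERR, &on, sizeof(on));
    #endif
    #ifdef IPV6_RECVERR
      if(ipv6)
        setsockopt(fd, IPPROTO_IPV6, IPV6_RECVERR, &on, sizeof(on));
    #endif
    }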
See https://bugzilla.kernel.org/show_bug.cgi?id=202355
Closes #6341
When the initial request isn't possible to send in its entirety, the
remainder of the request would be delivered to the debug callback as data
and would wrongly be counted internally as body-bytes sent.
Extended test 1295 to verify.
Closes #6328
When failing in TOOFAST, the multi_done() wasn't called so the same
cleanup and handling wasn't done like when it fails in PERFORM, which in
the case of FTP could mean that the control connection wouldn't be
marked as "dead" for the CURLE_ABORTED_BY_CALLBACK case. Which caused
ftp_disconnect() to use it to send "QUIT", which could end up waiting
for a response a long time before giving up!
Reported-by: Tomas Berger
Fixes #6333
Closes #6337
This commit introduces a "gophers" handler inside the gopher protocol if
USE_SSL is defined. This protocol is no different from the usual gopher
protocol, with the added TLS encapsulation upon connecting. The protocol
has been adopted in the gopher community, and many people have enabled
TLS in their gopher daemons like geomyidae(8), and clients, like clic(1)
and hurl(1).
I have not implemented test units for this protocol because my knowledge
of Perl is sub-par. However, for someone more knowledgeable it might be
fairly trivial, because the same test that tests the plain gopher
protocol can be used for "gophers" just by adding a TLS listener.
Signed-off-by: parazyd <parazyd@dyne.org>
Closes #6208
The error is shown with infof rather than failf so that the user will
see the extended error message information only in verbose mode, and
will still see the standard CURLE_AUTH_ERROR message. For example:
---
* schannel: InitializeSecurityContext failed: SEC_E_QOP_NOT_SUPPORTED
(0x8009030A) - The per-message Quality of Protection is not supported by
the security package
* multi_done
* Connection #1 to host 127.0.0.1 left intact
curl: (94) An authentication function returned an error
---
Ref: https://github.com/curl/curl/issues/6302
Closes https://github.com/curl/curl/pull/6315
If supported, defer port selection until connect() time
if --interface is given and source port is 0.
Reproducer:
* start fast webserver on port 80
* starve system of ephemeral ports
$ sysctl net.ipv4.ip_local_port_range="60990 60999"
* start a curl/libcurl "crawler"
$ curl --keepalive --parallel --parallel-immediate --head --interface
127.0.0.2 "http://127.0.0.[1-254]/file[001-002].txt"
current result:
(possibly some successful data)
curl: (45) bind failed with errno 98: Address already in use
result after patch:
(complete success or a few connections failing, highly depending on load)
Fail only when all the possible 4-tuple combinations are exhausted,
which is impossible to do when the port is selected at bind() time
because the kernel does not know if the socket will be listen()ed on or
connect()ed yet.
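One way to defer the port choice on Linux, assuming the
IP_BIND_ADDRESS_NO_PORT socket option is what gets used (the text above
does not name the exact mechanism, so treat this as a sketch only):
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* bind to the requested source address with port 0; the kernel then
       picks the ephemeral port at connect() time, when the full 4-tuple
       is known */
    static int bind_addr_no_port(int fd, const struct sockaddr_in *src)
    {
    #ifdef IP_BIND_ADDRESS_NO_PORT
      int on = 1;
      setsockopt(fd, IPPROTO_IP, IP_BIND_ADDRESS_NO_PORT, &on, sizeof(on));
    #endif
      return bind(fd, (const struct sockaddr *)src, sizeof(*src));
    }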
Closes #6295