Some systems have special files that report a size of 0 bytes yet still
contain data that can be read (for example /proc/cpuinfo on
Linux). Starting now, a zero byte size is treated as "unknown" size and
the file will be read as far as possible anyway.
Reported-by: Jesse Tan
Closes #681
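As an illustration only (not part of the change itself), a minimal libcurl
snippet that reads such a zero-size special file via the file:// protocol;
with this fix the content is delivered even though the reported size is 0:

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURLcode res;
    CURL *curl = curl_easy_init();
    if(curl) {
      /* a file that reports a 0-byte size but still has readable content */
      curl_easy_setopt(curl, CURLOPT_URL, "file:///proc/cpuinfo");
      /* the default write callback prints the body to stdout */
      res = curl_easy_perform(curl);
      if(res != CURLE_OK)
        fprintf(stderr, "curl_easy_perform() failed: %s\n",
                curl_easy_strerror(res));
      curl_easy_cleanup(curl);
    }
    return 0;
  }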
... as when pipelining is used, we read things into a unified buffer and
we don't do that with HTTP/2. This could then easily make programs that
set CURLMOPT_PIPELINING = CURLPIPE_HTTP1|CURLPIPE_MULTIPLEX get data
intermixed or plainly broken between HTTP/2 streams.
Reported-by: Anders Bakken
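For reference, a hedged sketch of the multi-handle setup referred to above
(the helper function name is illustrative, not from the patch):

  #include <curl/curl.h>

  /* enable HTTP/1.1 pipelining plus HTTP/2 multiplexing on a multi handle */
  static CURLM *setup_multi(void)
  {
    CURLM *multi = curl_multi_init();
    if(multi)
      curl_multi_setopt(multi, CURLMOPT_PIPELINING,
                        (long)(CURLPIPE_HTTP1 | CURLPIPE_MULTIPLEX));
    return multi;
  }

Easy handles added to such a multi handle could previously get their HTTP/2
data intermixed; this change avoids that.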
The two options are almost the same, except in the case of OpenSSL:
CURLINFO_TLS_SESSION: for OpenSSL, the session 'internals' pointer is an SSL_CTX *.
CURLINFO_TLS_SSL_PTR: for OpenSSL, the session 'internals' pointer is an SSL *.
For backwards compatibility we couldn't modify CURLINFO_TLS_SESSION to
return an SSL pointer for OpenSSL.
Also, add support for the 'internals' member to point to the SSL object
for the other backends: axTLS, PolarSSL, Secure Channel, Secure Transport
and wolfSSL.
Bug: https://github.com/curl/curl/issues/234
Reported-by: dkjjr89@users.noreply.github.com
Bug: https://curl.haxx.se/mail/lib-2015-09/0127.html
Reported-by: Michael König
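A rough usage sketch (the helper function is illustrative, not from the
patch): query the new option after a transfer and, when the backend is
OpenSSL, treat 'internals' as an SSL *:

  #include <stdio.h>
  #include <curl/curl.h>

  static void show_tls_ptr(CURL *curl)
  {
    struct curl_tlssessioninfo *tls = NULL;
    if(curl_easy_getinfo(curl, CURLINFO_TLS_SSL_PTR, &tls) == CURLE_OK && tls) {
      if(tls->backend == CURLSSLBACKEND_OPENSSL)
        /* with CURLINFO_TLS_SSL_PTR this is an SSL *, not an SSL_CTX * */
        printf("OpenSSL SSL object at %p\n", tls->internals);
    }
  }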
The internal Curl_done() function uses Curl_expire() at times and that
uses the timeout list. Better to clean up the list once we're done using
it; not doing so caused a segfault.
Reported-by: 蔡文凱
Bug: https://curl.haxx.se/mail/lib-2016-02/0097.html
- Add tests.
- Add an example to CURLOPT_TFTP_NO_OPTIONS.3.
- Add --tftp-no-options to expose CURLOPT_TFTP_NO_OPTIONS.
Bug: https://github.com/curl/curl/issues/481
Some TFTP server implementations ignore the "TFTP Option extension"
(RFC 1782-1784, 2347-2349), or implement it in a flawed way, causing
problems with libcurl. A new curl_easy_setopt() option,
CURLOPT_TFTP_NO_OPTIONS, is introduced which prevents libcurl from
sending TFTP option requests to a server, avoiding many problems caused
by such faulty implementations.
Bug: https://github.com/curl/curl/issues/481
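A hedged example of the new option in use (URL and error handling kept
minimal, function name made up for the example):

  #include <curl/curl.h>

  static CURLcode tftp_get_without_options(const char *url)
  {
    CURLcode res = CURLE_FAILED_INIT;
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, url); /* e.g. "tftp://host/file" */
      /* do not send any TFTP option requests to the server */
      curl_easy_setopt(curl, CURLOPT_TFTP_NO_OPTIONS, 1L);
      res = curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return res;
  }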
If any parameter in an HTTP DIGEST challenge message is present multiple
times, memory allocated for all but the last entry should be freed.
Bug: https://github.com/curl/curl/pull/667
At one point during the development of HTTP/2, the commit 133cdd29ea
introduced automatic decompression of Content-Encoding as that was what
the spec said then. Now however, HTTP/2 should work the same way as
HTTP/1 in this regard.
Reported-by: Kazuho Oku
Closes #661
The nghttp2 callback deals with the TLS layer and therefore the header
does not need to be broken into chunks.
Bug: https://github.com/curl/curl/issues/659
Reported-by: Kazuho Oku
Since we didn't keep the input argument around after having called
mbedtls, it could end up accessing the wrong memory when figuring out
the ALPN protocols.
Closes #642
As of https://boringssl-review.googlesource.com/#/c/6980/, almost all of
the BoringSSL #ifdefs in curl should be unnecessary:
- BoringSSL provides no-op stubs for compatibility which replace most
#ifdefs.
- DES_set_odd_parity has been in BoringSSL for nearly a year now. Remove
the compatibility codepath.
- With a small tweak to an extend_key_56_to_64 call, the NTLM code
builds fine.
- Switch OCSP-related #ifdefs to the more generally useful
OPENSSL_NO_OCSP.
The only #ifdefs which remain are Curl_ossl_version and the #undefs to
work around OpenSSL and wincrypt.h name conflicts. (BoringSSL leaves
that to the consumer. The in-header workaround makes things sensitive to
include order.)
This change errs on the side of removing conditionals despite many of
the restored codepaths being no-ops. (BoringSSL generally adds no-op
compatibility stubs when possible. OPENSSL_VERSION_NUMBER #ifdefs are
bad enough!)
Closes #640
It turns out Firefox and Chrome both allow spaces in cookie names and
there are sites out there using that.
It also turned out that the code meant to strip off trailing space from
cookie names didn't work. That is fixed now.
Test case 8 modified to verify both these changes.
Closes #639
When trying to verify a peer without having any root CA certificates
set, this makes libcurl use the TLS library's built-in default as a
fallback.
Closes #569
... also fix a conversion bug in the unused function
curl_win32_ascii_to_idn().
And remove wprintfs on error (Jay).
Bug: https://github.com/curl/curl/pull/637
It isn't used by the code under current conditions, but for safety it
seems sensible to at least not crash on such input.
Extended unit test 1395 to verify this, as well as a plain "/" input.
- Switch from verifying a pinned public key in a callback during the
certificate verification to inline after the certificate verification.
The callback method had three problems:
1. If a pinned public key didn't match, CURLE_SSL_PINNEDPUBKEYNOTMATCH
was not returned.
2. If peer certificate verification was disabled, the pinned key
verification did not take place as it should have.
3. (related to #2) If there was no certificate of depth 0 the callback
would not have checked the pinned public key.
Though all those problems could have been fixed it would have made the
code more complex. Instead we now verify inline after the certificate
verification in mbedtls_connect_step2.
Ref: http://curl.haxx.se/mail/lib-2016-01/0047.html
Ref: https://github.com/bagder/curl/pull/601
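For context, a minimal sketch of pinning from the application side (the
function name and file path are just examples); with this change a
mismatch under mbedTLS reliably results in CURLE_SSL_PINNEDPUBKEYNOTMATCH:

  #include <curl/curl.h>

  static CURLcode fetch_with_pinned_key(const char *url)
  {
    CURLcode res = CURLE_FAILED_INIT;
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, url);
      /* public key the server must present */
      curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY, "/path/to/pubkey.pem");
      res = curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return res;
  }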
The CURLOPT_SSH_PUBLIC_KEYFILE option has been documented to handle
empty strings specially since curl-7_25_0-31-g05a443a but the behavior
was unintentionally removed in curl-7_38_0-47-gfa7d04f.
This commit restores the original behavior and clarifies in the
documentation that NULL and "" both have the same meaning when passed
to CURLOPT_SSH_PUBLIC_KEYFILE.
Bug: http://curl.haxx.se/mail/lib-2016-01/0072.html
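A small illustration of the restored behavior (paths are examples):
passing "" or NULL for the public key file is equivalent and makes libcurl
hand no public key file to libssh2:

  #include <curl/curl.h>

  static void set_ssh_keys(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_SSH_PRIVATE_KEYFILE,
                     "/home/user/.ssh/id_rsa");
    /* "" behaves the same as NULL again */
    curl_easy_setopt(curl, CURLOPT_SSH_PUBLIC_KEYFILE, "");
  }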
... by extracting the LIB + REASON from the OpenSSL error code. OpenSSL
1.1.0+ returns a new function number for this certificate failure, so
this required a fix, and checking lib + reason is the better way to catch
this error anyway.
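A hedged sketch of the idea (not the exact patch): classify the OpenSSL
error by library and reason instead of by function number, which varies
between OpenSSL versions:

  #include <openssl/err.h>
  #include <openssl/ssl.h>

  /* non-zero if the error code means the peer certificate failed to verify */
  static int is_cert_verify_failure(unsigned long err)
  {
    return ERR_GET_LIB(err) == ERR_LIB_SSL &&
           ERR_GET_REASON(err) == SSL_R_CERTIFICATE_VERIFY_FAILED;
  }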
When an HTTP/2 upgrade request fails (no protocol switch), the connection
would previously be detected as still possible to pipeline on (which is
correct) and pipelining would then be done when PIPEWAIT was enabled, even
if pipelining was not explicitly enabled.
It should only pipeline if explicitly asked to.
Closes #584
Before this patch, if a URL does not start with the protocol
name/scheme, effective URLs would be prefixed with upper-case protocol
names/schemes. This behavior might not be expected by library users or
end users.
For example, if `CURLOPT_DEFAULT_PROTOCOL` is set to "https" and the
URL is "hostname/path", the effective URL would be
"HTTPS://hostname/path" instead of "https://hostname/path".
After this patch, effective URLs would be prefixed with a lower-case
protocol name/scheme.
Closes #597
Signed-off-by: Mohammad AlSaleh <CE.Mohammad.AlSaleh@gmail.com>
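To illustrate the new behavior (the helper function name is made up for
the example):

  #include <stdio.h>
  #include <curl/curl.h>

  static void show_effective_url(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      char *eff = NULL;
      curl_easy_setopt(curl, CURLOPT_DEFAULT_PROTOCOL, "https");
      curl_easy_setopt(curl, CURLOPT_URL, "hostname/path");
      if(curl_easy_perform(curl) == CURLE_OK &&
         curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_URL, &eff) == CURLE_OK)
        printf("%s\n", eff); /* now "https://hostname/path", not "HTTPS://..." */
      curl_easy_cleanup(curl);
    }
  }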
Previously, when HTTP/2 was enabled and used, and the stream had a known
content length, Curl_read was not called when there were no bytes left to
read. Because of this, we could not make sure that
http2_handle_stream_close was called for every stream. Since we use
http2_handle_stream_close to emit trailer fields, they were
effectively ignored. This commit changes the code so that Curl_read is
called even if no bytes are left to read, to ensure that
http2_handle_stream_close is called for every stream.
Discussed in https://github.com/bagder/curl/pull/564
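Trailer fields emitted at stream close are passed on like response
headers, so an application can typically observe them with an ordinary
header callback; a hedged sketch (function names are illustrative):

  #include <stdio.h>
  #include <curl/curl.h>

  static size_t on_header(char *buf, size_t size, size_t nitems, void *userdata)
  {
    (void)userdata;
    fwrite(buf, size, nitems, stderr); /* response headers, and trailers too */
    return size * nitems;
  }

  static void fetch_with_trailers(const char *url)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, url);
      curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2_0);
      curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, on_header);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
  }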
Check that the trailer buffer exists before attempting a client write
for trailers on stream close.
Refer to comments in https://github.com/bagder/curl/pull/564