As Windows SSPI authentication calls fail when a particular mechanism
isn't available, introduce these functions for DIGEST, NTLM, Kerberos 5
and Negotiate to give both HTTP and SASL authentication the opportunity
to query support for a mechanism before selecting it.
For now each function returns TRUE, to maintain compatibility with the
existing code when called.
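A minimal sketch of what one of these probes can look like (the function
name and the eventual SSPI query are illustrative, not necessarily the
exact ones added):
#include <stdbool.h>
/* Sketch only: illustrative name and shape for one of the new probes. */
bool Curl_auth_is_ntlm_supported(void)
{
  /* Always TRUE for now so existing callers behave exactly as before; a
     later change can ask SSPI (e.g. QuerySecurityPackageInfo for the
     "NTLM" package) whether the mechanism is really available. */
  return true;
}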
RFC7512 provides a standard method to reference certificates in PKCS#11
tokens, by means of a URI starting 'pkcs11:'.
We're working on fixing various applications so that whenever they would
have been able to use certificates from a file, users can simply insert
a PKCS#11 URI instead and expect it to work. This expectation is now a
part of the Fedora packaging guidelines, for example.
This doesn't work with curl because of the way the colon is used to
separate the certificate argument from the passphrase. So instead of
curl -E 'pkcs11:manufacturer=piv_II;id=%01' …
I have to invoke curl with the colon escaped, like this:
curl -E 'pkcs11\:manufacturer=piv_II;id=%01' …
This is suboptimal because we want *consistency* — the URI should be
usable in place of a filename anywhere, without having strange
differences for different applications.
This patch therefore disables the processing in parse_cert_parameter()
when the string starts with 'pkcs11:'. It means you can't pass a
passphrase with an unescaped PKCS#11 URI, but there's no need to do so
because RFC7512 allows a PIN to be given as a 'pin-value' attribute in
the URI itself.
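A rough sketch of the check (the helper name below is illustrative; the
real change sits inside parse_cert_parameter() itself):
#include <stdbool.h>
#include <string.h>
/* Sketch only: when this is true, the whole argument is taken verbatim as
   the certificate name and no passphrase is split off, so a PIN has to be
   carried as a pin-value attribute inside the URI. */
static bool is_pkcs11_uri(const char *param)
{
  return param && !strncmp(param, "pkcs11:", 7);
}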
Also, if users are already using RFC7512 URIs with the colon escaped as
in the above example (even providing a passphrase for curl to handle
instead of using a pin-value attribute), that will continue to work
because their string will start with 'pkcs11\:' and won't match the check.
What *does* break with this patch is the extremely unlikely case that a
user has a file which is in the local directory and literally named
just "pkcs11", and they have a passphrase on it. If that ever happened,
the user would need to refer to it as './pkcs11:<passphrase>' instead.
This fixes tests that were added after 113f04e664, as they would
otherwise fail.
We bring back "Proxy-Connection: Keep-Alive" now unconditionally to fix
regressions with old and stupid proxies, but we could possibly switch to
using it only for CONNECT or only for NTLM in a future if we want to
gradually reduce it.
Fixes #954
Reported-by: János Fekete
I discovered that some people have been using "https://example.com" style
strings as a proxy and it "works" (curl doesn't complain) because curl
ignores unknown schemes and then assumes plain HTTP instead.
I think this misleads users into believing curl uses HTTPS to proxies
when it doesn't. Now curl rejects proxy strings using unsupported
schemes instead of just ignoring them and defaulting to HTTP.
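The shape of the check is roughly this (a sketch only; the helper name
and the accepted-scheme list are illustrative, not curl's actual parser
or supported set, and case handling is omitted):
#include <stdbool.h>
#include <string.h>
/* Sketch only: fail on an unsupported proxy scheme instead of silently
   treating the proxy as plain HTTP. */
static bool proxy_scheme_supported(const char *proxy)
{
  static const char *const ok[] = { "http://", "socks4://", "socks4a://",
                                    "socks5://", "socks5h://" };
  size_t i;
  if(!strstr(proxy, "://"))
    return true;  /* no scheme given: the plain HTTP default still applies */
  for(i = 0; i < sizeof(ok)/sizeof(ok[0]); i++)
    if(!strncmp(proxy, ok[i], strlen(ok[i])))
      return true;
  return false;   /* e.g. "https://example.com" is now rejected */
}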
Undo the change introduced in d4643d6 which caused an iPAddress match to
be ignored if dNSName was present but did not match.
Also, if iPAddress is present but does not match, and dNSName is not
present, fail as no-match. Prior to this change, in such a case the CN
would be checked for a match.
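The decision order this restores can be summarised like this (a sketch
only; the flags stand for the outcome of scanning the certificate's
subjectAltName entries against the host being verified):
#include <stdbool.h>
/* Sketch only: SAN entries of a relevant type, when present, fully decide
   the outcome; the CN is only a fallback when no such entry exists. */
static bool peer_matches(bool dns_present, bool dns_matched,
                         bool ip_present, bool ip_matched,
                         bool cn_matched)
{
  if(dns_matched || ip_matched)
    return true;   /* an iPAddress match counts even if dNSName didn't match */
  if(dns_present || ip_present)
    return false;  /* SAN entries exist but none matched: no CN fallback */
  return cn_matched;
}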
Bug: https://github.com/curl/curl/issues/959
Reported-by: wmsch@users.noreply.github.com
Mark's new document about HTTP Retries
(https://mnot.github.io/I-D/httpbis-retry/) made me check our code, and I
spotted that we don't retry failed HEAD requests, which seems totally
inconsistent; I can't see any reason for that separate treatment.
So, no separate treatment for HEAD starting now. An HTTP request sent
over a reused connection that gets cut off before a single byte is
received will be retried on a fresh connection.
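In other words, the rule now looks roughly like this (a sketch; the
parameter names are illustrative, not curl's actual state fields):
#include <stdbool.h>
#include <stddef.h>
/* Sketch only: retry once, on a fresh connection, when a reused connection
   dies before a single response byte arrives; the method no longer matters. */
static bool should_retry(bool conn_was_reused, size_t bytes_received,
                         bool already_retried)
{
  return conn_was_reused && bytes_received == 0 && !already_retried;
}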
Made-aware-by: Mark Nottingham
Makes libcurl work in communication with GStreamer-based RTSP
servers. The original code validated the session id to be in accordance
with the RFC. I think it is better not to do that:
- For curl the actual content is a don't-care.
- The clarity of the RFC is debatable: is $ allowed, or only as \$? That
  is imho not clear.
- GStreamer seems to URL-encode the session id, but % is not allowed by
  the RFC.
- Less code.
With this patch curl will correctly handle real-life lines like:
Session: biTN4Kc.8%2B1w-AF.; timeout=60
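A sketch of the relaxed parsing (names are illustrative; the point is
that the id is treated as opaque up to the first ';' or whitespace):
#include <stdio.h>
#include <string.h>
/* Sketch only: copy the opaque session id as-is, stopping at ';' or
   whitespace, without validating it against the RFC grammar. */
static void get_session_id(const char *value, char *id, size_t idlen)
{
  size_t n = 0;
  while(*value == ' ' || *value == '\t')
    value++;                                   /* skip leading whitespace */
  while(*value && !strchr("; \t\r\n", *value) && n < idlen - 1)
    id[n++] = *value++;
  id[n] = '\0';                                /* "timeout=60" is ignored here */
}
int main(void)
{
  char id[64];
  get_session_id(" biTN4Kc.8%2B1w-AF.; timeout=60", id, sizeof(id));
  puts(id);                                    /* prints biTN4Kc.8%2B1w-AF. */
  return 0;
}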
Bug: https://curl.haxx.se/mail/lib-2016-08/0076.html
This makes it possible to use specific compilers or a cache.
Sample use for clcache:
set CC=clcache.bat
nmake /f Makefile.vc DEBUG=no MODE=static VC=14 GEN_PDB=no
At the old line 290, CC and CURL_CC had the same value. After that,
/DCURL_STATICLIB was added to CC but not to CURL_CC (intended?).
This gets rid of the CC variable entirely. It is a first step to make it
possible to manually set a CC variable in order to be able to change the
compiler.
All compilers used by cmake in Windows should support large files.
- Add test SIZEOF_OFF_T
- Remove outdated test SIZEOF_CURL_OFF_T
- Turn on USE_WIN32_LARGE_FILES in Windows
- Check for 'Largefile' during the features output
Since the server can at any time send an HTTP/2 frame to us, we need to
wait for the socket to be readable during all transfers so that we can
act on incoming frames even when uploading etc.
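Expressed with plain poll() the idea is roughly this (a sketch, not
curl's internal API; the function and parameter names are illustrative):
#include <poll.h>
/* Sketch only: on an HTTP/2 connection, readability is always part of the
   event mask so server-initiated frames (SETTINGS, WINDOW_UPDATE,
   RST_STREAM, ...) are acted on right away, even mid-upload. */
static int wait_on_h2_socket(int sockfd, int uploading, int timeout_ms)
{
  struct pollfd pfd;
  pfd.fd = sockfd;
  pfd.events = POLLIN;           /* always wait for readability */
  if(uploading)
    pfd.events |= POLLOUT;       /* add writability only while sending */
  return poll(&pfd, 1, timeout_ms);
}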
Reminded-by: Tatsuhiro Tsujikawa