- Remove SSLv3 from SSL default in darwinssl, schannel, cyassl, nss,
openssl effectively making the default TLS 1.x. axTLS is not affected
since it supports only TLS, and gnutls is not affected since it already
defaults to TLS 1.x.
- Update CURLOPT_SSLVERSION doc
... for the local variable name in functions holding the return
code. Using the same name universally makes code easier to read and
follow.
Also, unify code for checking for CURLcode errors with:
if(result) or if(!result)
instead of
if(result == CURLE_OK), if(CURLE_OK == result) or if(result != CURLE_OK)
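In practice the convention looks like this (an illustrative sketch only; do_request() is a hypothetical helper):

    static CURLcode do_request(CURL *curl)
    {
      CURLcode result;  /* the universal name for the return code */

      result = curl_easy_perform(curl);
      if(result)        /* preferred over result != CURLE_OK */
        return result;

      /* ... more work ... */
      return CURLE_OK;
    }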
Prefer usage of Perl modules for the sha1 calculation since there
might be systems where openssl is not installed or not in the path.
If openssl is used for the sha1 calculation then don't rely on cut,
since it is usually not available on systems other than Linux.
It turned out some features were not enabled in the build since for
example url.c #ifdefs on features that are defined on a per-backend
basis but vtls.h didn't include the backend headers.
CURLOPT_CERTINFO was one such feature that was accidentally disabled.
There is no need for such a function. Include_directories propagate by
themselves and having a function with one simple link statement makes
little sense.
Coverity CID 252518. This function is in general far too complicated for
its own good and really should be broken down into several smaller
functions instead - but I'm adding this protection here now since it
seems there's a risk the code flow can end up here and dereference a
NULL pointer.
Coverity CID 1241957. Removed the unused argument. As this struct and
pointer now are used only for krb5, there's no need to keep unused
function arguments around.
Option --pinnedpubkey takes a path to a public key in DER format and
only connects if it matches (currently only implemented with OpenSSL).
Provides CURLOPT_PINNEDPUBLICKEY for curl_easy_setopt().
Extract a public RSA key from a website like so:
openssl s_client -connect google.com:443 2>&1 < /dev/null | \
sed -n '/-----BEGIN/,/-----END/p' | openssl x509 -noout -pubkey \
| openssl rsa -pubin -outform DER > google.com.der
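A minimal usage sketch for the new option, assuming the google.com.der
file extracted above (error handling elided):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://google.com/");
        /* abort the connection unless the server's public key matches */
        curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY, "google.com.der");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }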
Coverity CID 1157776. Removed a superfluous if() that always evaluated
true (and an else clause that never ran), and then re-indented the
function accordingly.
Coverity CID 1215284. The server name is extracted with
Curl_copy_header_value() and passed in to this function, and
Curl_copy_header_value() can actually fail and return NULL.
For private keys, use the first match from: user-specified key file
(if provided), ~/.ssh/id_rsa, ~/.ssh/id_dsa, ./id_rsa, ./id_dsa
Note that the previous code only looked for id_dsa files. id_rsa is
now generally preferred, as it supports larger key sizes.
For public keys, use the user-specified key file, if provided.
Otherwise, try to extract the public key from the private key file.
This means that passing --pubkey is typically no longer required,
and makes the key-handling behavior more like OpenSSH.
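A hedged usage sketch showing how an application can still set the keys
explicitly (paths are illustrative):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/file.txt");
        /* explicit private key; when unset, libcurl now tries
           ~/.ssh/id_rsa, ~/.ssh/id_dsa, ./id_rsa, ./id_dsa in order */
        curl_easy_setopt(curl, CURLOPT_SSH_PRIVATE_KEYFILE,
                         "/home/me/.ssh/id_rsa");
        /* CURLOPT_SSH_PUBLIC_KEYFILE is now optional: when omitted, the
           public key is extracted from the private key file */
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }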
Coverity CID 1202836. If the proxy environment variable returned an empty
string, it would be leaked. While an empty string is not really a proxy, other
logic in this function already allows a blank string to be returned so allow
that here to avoid the leak.
Coverity CID 1215287. There's a potential risk for a memory leak in
here, and moving the free call to be unconditional seems like a cheap
price to remove the risk.
Coverity CID 1215296. There's a potential risk for a memory leak in
here, and moving the free call to be unconditional seems like a cheap
price to remove the risk.
Coverity detected this. CID 1241954. When Curl_poll() returned a negative
value, 'mcode' was left uninitialized. Pretty harmless since this is debug
code only and
would at worst cause an error to _not_ be returned...
Mostly because we use C strings and they end at a binary zero, so we know
we can't open a file name containing an embedded binary zero.
Reported-by: research@g0blin.co.uk
The switch to using Curl_expire_latest() in commit cacdc27f52 was a
mistake and was against the advice even mentioned in that commit. The
comparison in asyn-thread.c:Curl_resolver_is_resolved() makes
Curl_expire() the suitable function to use.
Bug: http://curl.haxx.se/bug/view.cgi?id=1426
Reported-By: graysky
Previously we did not handle EOF from the underlying transport socket and
wrongly just returned the error code CURLE_AGAIN from http2_recv, which
caused a busy loop since the socket had been closed. This patch adds the
code to handle the EOF situation and tells the upper layer that we got
EOF.
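A hedged sketch of the recv-side idea (heavily simplified; the real
http2_recv works through nghttp2's callbacks):

    #include <sys/socket.h>
    #include <curl/curl.h>

    static ssize_t transport_recv(int fd, char *buf, size_t len,
                                  CURLcode *err)
    {
      ssize_t nread = recv(fd, buf, len, 0);
      if(nread == 0) {
        /* EOF on the transport socket: report it upward instead of
           CURLE_AGAIN, which made callers busy-loop on a dead socket */
        *err = CURLE_OK;
        return 0;
      }
      if(nread < 0) {
        *err = CURLE_AGAIN;  /* simplified: assume a retryable error */
        return -1;
      }
      *err = CURLE_OK;
      return nread;
    }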
Removed ISC_REQ_* flags from calls to InitializeSecurityContext to fix
bug in NTLM handshake for HTTP proxy authentication.
NTLM handshake for HTTP proxy authentication failed with error
SEC_E_INVALID_TOKEN from InitializeSecurityContext for certain proxy
servers on generating the NTLM Type-3 message.
The flag ISC_REQ_CONFIDENTIALITY seems to cause the problem according
to the observations and suggestions made in a bug report for the
QT project (https://bugreports.qt-project.org/browse/QTBUG-17322).
Removing all the flags solved the problem.
Bug: http://curl.haxx.se/mail/lib-2014-08/0273.html
Reported-by: Ulrich Telle
Assisted-by: Steve Holme, Daniel Stenberg
As a sort of step forward, this script will now first try to get the
data from the HTTPS URL using curl, and only if that fails will it fall
back to the HTTP transfer using perl's native LWP functionality. This
reduces the risk of this script being tricked.
Using HTTPS to get a cert bundle introduces a chicken-and-egg problem so
we can't really ever completely disable HTTP, but chances are that most
users already have a ca cert bundle that trusts the mozilla.org site
that this script downloads from.
A future version of this script will probably switch to require a
dedicated "insecure" command line option to allow downloading over HTTP
(or unverified HTTPS).
By not detecting and rejecting domain names for partial literal IP
addresses properly when parsing received HTTP cookies, libcurl can be
fooled into both sending cookies to the wrong sites and allowing
arbitrary sites to set cookies for others.
CVE-2014-3613
Bug: http://curl.haxx.se/docs/adv_20140910A.html
Historically the default "unknown" value for progress.size_dl and
progress.size_ul has been zero, since these values are initialized
implicitly by the calloc that allocates the curl handle that these
variables are a part of. Users of curl that install progress
callbacks may expect these values to always be >= 0.
Currently it is possible for progress.size_dl and progress.size_ul
to be set to a value of -1, if Curl_pgrsSetDownloadSize() or
Curl_pgrsSetUploadSize() are passed a "size" of -1 (which a few
places currently do, and a following patch will add more). So
let's update Curl_pgrsSetDownloadSize() and Curl_pgrsSetUploadSize()
so they make sure that these variables always contain a value that
is >= 0.
Updates test579 and test599.
Signed-off-by: Brandon Casey <drafnel@gmail.com>
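A sketch of the kind of progress callback that relies on this guarantee
(CURLOPT_XFERINFOFUNCTION-style; illustrative only):

    #include <stdio.h>
    #include <curl/curl.h>

    static int xferinfo(void *p, curl_off_t dltotal, curl_off_t dlnow,
                        curl_off_t ultotal, curl_off_t ulnow)
    {
      (void)p; (void)ultotal; (void)ulnow;
      /* a naive percentage: would misbehave if dltotal could be -1 */
      if(dltotal > 0)
        fprintf(stderr, "%ld%%\r", (long)(dlnow * 100 / dltotal));
      return 0; /* returning non-zero aborts the transfer */
    }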
As the current element in the list is free()d by Curl_llist_remove(),
when the associated connection is pending, reworked the loop to avoid
accessing the next element through e->next afterward.
SecCertificateCopyPublicKey() is not available on iPhone. Use
CopyCertSubject() instead to see if the certificate returned by
SecCertificateCreateWithData() is valid.
Reported-by: Toby Peterson
... as the struct is free()d in the end anyway. It was first pointed out
to me that one of the ->msglist assignments was supposed to have been
->pending but was a copy and paste mistake, before I realized that none
of the clearing of pointers had to be there.
... instead of scanning through all handles, stash only the actual
handles that are in that state in the new ->pending list and scan that
list only. It should be mostly empty or very short. And only used for
pipelining.
This avoids a rather hefty slow-down especially notable if you add many
handles to the same multi handle. Regression introduced in commit
0f147887 (version 7.30.0).
Bug: http://curl.haxx.se/mail/lib-2014-07/0206.html
Reported-by: David Meyer
Forwards the setting as the minimum SSL version (if set) to PolarSSL. If
the server does not support the requested version, the SSL handshake
will fail.
Bug: http://curl.haxx.se/bug/view.cgi?id=1419
SecCertificateCreateWithData() returns a non-NULL SecCertificateRef even
if the buffer holds an invalid or corrupt certificate. Call
SecCertificateCopyPublicKey() to make sure cacert is a valid
certificate.
Introducing Curl_expire_latest(). To be used when the code flow only
wants to get called at a later time that is "no later than X" so that
something can be checked (and another timeout be added).
The low-speed logic for example could easily end up setting very many
expire timeouts if it gets called faster or sooner than what it had set
its own timer to, and this goes for a few other timers too that aren't
explicitly checked for timer expiration in the code.
If there's no condition in the code that says if(time-passed >= TIME),
then Curl_expire_latest() is preferred to Curl_expire().
If there exists such a condition, it is on the other hand important that
Curl_expire() is used and not the other.
Bug: http://curl.haxx.se/mail/lib-2014-06/0235.html
Reported-by: Florian Weimer
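An illustrative sketch of that rule (pseudo call sites, not actual
libcurl code; act_on_timeout() and TIMEOUT_MS are hypothetical):

    /* a guarding condition exists, so Curl_expire() is the right call:
       the code re-checks elapsed time and only acts when it is due */
    if(Curl_tvdiff(now, data->state.expiretime) >= TIMEOUT_MS)
      act_on_timeout(data);
    else
      Curl_expire(data, TIMEOUT_MS);

    /* no guarding condition: we only need a callback "no later than X",
       so the latest-variant avoids piling up redundant timeouts */
    Curl_expire_latest(data, TIMEOUT_MS);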
While waiting for a host resolve, check if the host cache may have
gotten the name already (by someone else), for when the same name is
resolved by several simultaneous requests.
The resolver thread occasionally gets stuck in getaddrinfo() when the
DNS or anything else is crappy or slow, so when a host is found in the
DNS cache, leave the thread alone and let it clean up the mess itself.
If the --cacert option is used with a CA certificate bundle that
contains multiple CA certificates, iterate through it, adding each
certificate as a trusted root CA.
This is usually due to failed auth. There's no point in us keeping such
a connection alive since it shouldn't be re-used anyway.
Bug: http://curl.haxx.se/bug/view.cgi?id=1381
Reported-by: Marcel Raad
This was done to make sure NTLM state that is bound to a connection
doesn't survive and get used for a subsequent request - but disconnects
can also be done for other reasons, for example to make room in the
connection cache, and then the disconnected connection is not strictly
related to the easy handle's current operation.
The http authentication state is still kept in the easy handle since all
http auth _except_ NTLM is connection independent and thus survives over
multiple connections.
Bug: http://curl.haxx.se/mail/lib-2014-08/0148.html
Reported-by: Paras S
Problem: if CURLOPT_FORBID_REUSE is set, requests using NTLM failed
since NTLM requires multiple requests that re-use the same connection
for the authentication to work
Solution: Ignore the forbid reuse flag in case the NTLM authentication
handshake is in progress, according to the NTLM state flag.
Fixed known bug #77.
A conditionally compiled block in connect.c references WinSock 2
symbols, but used `#ifdef HAVE_WINSOCK_H` instead of `#ifdef
HAVE_WINSOCK2_H`.
Bug: http://curl.haxx.se/mail/lib-2014-08/0155.html
The URL is not a property of the connection so it should not be freed in
the connection disconnect but in the Curl_close() that frees the easy
handle.
Bug: http://curl.haxx.se/mail/lib-2014-08/0148.html
Reported-by: Paras S
Corrected a number of the error codes that can be returned from the
Curl_sasl_create_gssapi_security_message() function when things go
wrong.
It makes more sense to return CURLE_BAD_CONTENT_ENCODING when the
inbound security challenge can't be decoded correctly or doesn't
contain the KERB_WRAP_NO_ENCRYPT flag and CURLE_OUT_OF_MEMORY when
EncryptMessage() fails. Unfortunately the previous error code of
CURLE_RECV_ERROR was a copy and paste mistake on my part and should
have been corrected in commit 4b491c675f :(
... to handle "*/[total]". Also, removed the strange hack that made
CURLOPT_FAILONERROR on a 416 response after a *RESUME_FROM return
CURLE_OK.
Reported-by: Dimitrios Siganos
Bug: http://curl.haxx.se/mail/lib-2014-06/0221.html
In preparation for the upcoming SSPI implementation of GSSAPI
authentication, moved the definition of KERB_WRAP_NO_ENCRYPT from
socks_sspi.c to curl_sspi.h allowing it to be shared amongst other
SSPI based code.
... as mxr.mozilla.org is due to be retired.
The new host doesn't support If-Modified-Since nor ETags, meaning that
the script will now defer to download and do a post-transfer checksum
check to see if a new output is to be generated. The new output format
will hold the SHA1 checksum of the source file for that purpose.
We call this version 1.22
Reported-by: Ed Morley
Bug: http://curl.haxx.se/bug/view.cgi?id=1409
Bringing back the old functionality that was mistakenly removed when the
connection cache was remade. When creating a new connection, all the
existing ones are checked and those that are known to be dead get
disconnected for real and removed from the connection cache. It keeps
the cache from holding on to very many stale connections and aids in
keeping down the number of system sockets in wait states.
Help-by: Jonatan Vela <jonatan.vela@ergon.ch>
Bug: http://curl.haxx.se/mail/lib-2014-06/0189.html
Curl_poll and Curl_wait_ms require the fix applied to Curl_socket_check
in commits b61e8b8 and c771968:
When poll or select is interrupted and the interruption coincides with
the timeout elapsing, the functions return -1, indicating an error,
instead of 0 for the timeout.
Given the SSPI package info query indicates a token size of 4096 bytes,
updated to use a dynamic buffer for the response message generation
rather than a fixed buffer of 1024 bytes.
Updated to use a dynamic buffer for the SPN generation, via the recently
introduced Curl_sasl_build_spn() function, rather than a fixed buffer of
1024 characters, which should have been more than enough. Using the new
function also removes the need for the extra variable sname to do the
wide character conversion in Unicode builds.
Updated Curl_sasl_create_digest_md5_message() to use a dynamic buffer
for the SPN generation via the recently introduced Curl_sasl_build_spn()
function rather than a fixed buffer of 128 characters.
Curl_sasl_create_digest_md5_message() would simply cast the SPN variable
to a TCHAR when calling InitializeSecurityContext(). This meant that,
under Unicode builds, it would not be a valid wide character string.
Updated to use the recently introduced Curl_sasl_build_spn() function
which performs the correct conversion for us.
Various parts of the libcurl source code build a SPN for inclusion in
authentication data. This information is either used by our own native
generation routines or passed to authentication functions in third-party
libraries such as SSPI. However, some of these instances use fixed
buffers rather than dynamically allocated ones, and not all of those
that should convert to wide character strings in Unicode builds do so.
Implemented a common function that generates a SPN and performs the
wide character conversion where necessary.
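A minimal sketch of what such a helper can look like (plain C with
malloc/snprintf rather than curl's internal allocators; the SSPI variant
additionally converts the result to wide characters):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* build "service/host", e.g. "smtp/mail.example.com" */
    static char *build_spn(const char *service, const char *host)
    {
      size_t len = strlen(service) + strlen(host) + 2;
      char *spn = malloc(len);
      if(spn)
        snprintf(spn, len, "%s/%s", service, host);
      return spn; /* caller frees */
    }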
Following the recent changes, and in an attempt to align the SSPI based
authentication code, performed the following:
* Use NULL and SECBUFFVERSION rather than hard coded constants.
* Avoid comparisons against zero in if statements.
* Standardised the buf and desc setup code.
Given the SSPI package info query indicates a token size of 2888 bytes,
and as with the Winbind code and commit 9008f3d56, use a dynamic buffer
for the Type-1 and Type-3 message generation rather than a fixed buffer
of 1024 bytes.
Just as with the SSPI implementations of Digest and Negotiate, added a
package info query so that libcurl can a) return a more appropriate
error code when the NTLM package is not supported and b) use the
information later to allocate a dynamic buffer for the Type-1 and Type-3
output tokens rather than a fixed buffer of 1024 bytes.
OPENSSL_config() is "strongly recommended" to use, but unfortunately
that function makes an exit() call on wrongly formatted config files,
which makes it hard to use in some situations. OPENSSL_config() itself
calls CONF_modules_load_file(), so we use that instead and ignore its
return code!
Reported-by: Jan Ehrhardt
Bug: http://curl.haxx.se/bug/view.cgi?id=1401
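Roughly what the replacement call can look like (a sketch; these
CONF_MFLAGS_* flags mirror what OPENSSL_config() uses internally):

    #include <openssl/conf.h>

    static void ossl_config(void)
    {
      /* like OPENSSL_config(NULL), but no exit() on a malformed file;
         the return code is deliberately ignored */
      (void)CONF_modules_load_file(NULL, NULL,
                                   CONF_MFLAGS_DEFAULT_SECTION |
                                   CONF_MFLAGS_IGNORE_MISSING_FILE);
    }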
If the server rejects our authentication attempt and curl hasn't
called CompleteAuthToken() then the status variable will be
SEC_I_CONTINUE_NEEDED and not SEC_E_OK.
As such the existing detection mechanism for determining whether or not
the authentication process has finished is not sufficient.
However, the WWW-Authenticate: Negotiate header line will not contain
any data when the server has exhausted the negotiation, so we can use
that coupled with the already allocated context pointer.
To prevent an infinite loop in the readwrite_data() function when the
stream is reset before any response body arrives, reset the closed flag
to false once it has been evaluated to true.
"Expect: 100-continue", which was once deprecated in HTTP/2, is now
resurrected in HTTP/2 draft 14. This change adds its support to
HTTP/2 code. This change also includes stricter header field
checking.
Previously it only returned a CURLcode for errors, i.e. when it
returned a different size than what was passed in to it.
The http2 code only checked the CURLcode and thus failed.
Under these circumstances, the connection hasn't been fully established
and smtp_connect hasn't been called, yet smtp_done still calls the state
machine which dereferences the NULL conn pointer in struct pingpong.
This now provides a weak random function since PolarSSL doesn't have a
quick and easy way to provide a good one. It does however provide the
framework to make one so it _can_ and _should_ be done...
To force each backend implementation to really attempt to provide proper
random. If a proper random function is missing, then we can explicitly
make use of the default one we use when TLS support is missing.
This commit makes sure it works for darwinssl, gnutls, nss and openssl.
warning C4267: '=' : conversion from 'size_t' to 'long', possible loss
of data
The member connection_id of struct connectdata is a long (always a
32-bit signed integer on Visual C++) and the member next_connection_id
of struct conncache is a size_t, so one of them should be changed to
match the other.
This patch changes the size_t in struct conncache to long (the less
invasive change, as that variable is only ever used in a single code
line).
Bug: http://curl.haxx.se/bug/view.cgi?id=1399
1 - fixes the warnings when built without http2 support
2 - adds CURLE_HTTP2, a new error code for errors detected by nghttp2
basically when they are about http2 specific things.
CyaSSL 3.0.0 returns a unique error code if no CA cert is available,
so translate that into CURLE_SSL_CACERT_BADFILE when peer verification
is requested.
- Replace CURLAUTH_GSSNEGOTIATE with CURLAUTH_NEGOTIATE
- CURL_VERSION_GSSNEGOTIATE is deprecated, which
is served by CURL_VERSION_SSPI, CURL_VERSION_GSSAPI and
CURL_VERSION_SPNEGO now.
- Remove display of feature 'GSS-Negotiate'
This reverts commit cb3e6dfa35 and instead fixes the problem
differently.
The reverted commit addressed a test failure in test 1021 by simplifying
and generalizing the code flow in a way that damaged the
performance. Now we modify the flow so that Curl_proxyCONNECT() again
does as much as possible in one go, yet still passes test 1021 with and
without valgrind. The test previously failed due to mistakes in the
multi state machine.
Bug: http://curl.haxx.se/bug/view.cgi?id=1397
Reported-by: Paul Saab
It's wrong to assume that we can send a single SPNEGO packet which will
complete the authentication. It's a *negotiation* — the clue is in the
name. So make sure we handle responses from the server.
Curl_input_negotiate() will already handle bailing out if it thinks the
state is GSS_S_COMPLETE (or SEC_E_OK on Windows) and the server keeps
talking to us, so we should avoid endless loops that way.
This is the correct way to do SPNEGO. Just ask for it
Now I correctly see it trying NTLMSSP authentication when a Kerberos ticket
isn't available. Of course, we bail out when the server responds with the
challenge packet, since we don't expect that. But I'll fix that bug next...
This is just fundamentally broken. SPNEGO (RFC4178) is a protocol which
allows client and server to negotiate the underlying mechanism which will
actually be used to authenticate. This is *often* Kerberos, and can also
be NTLM and other things. And to complicate matters, there are various
different OIDs which can be used to specify the Kerberos mechanism too.
A SPNEGO exchange will identify *which* GSSAPI mechanism is being used,
and will exchange GSSAPI tokens which are appropriate for that mechanism.
But this SPNEGO implementation just strips the incoming SPNEGO packet
and extracts the token, if any. And completely discards the information
about *which* mechanism is being used. Then we *assume* it was Kerberos,
and feed the token into gss_init_sec_context() with the default
mechanism (GSS_S_NO_OID for the mech_type argument).
Furthermore... broken as this code is, it was never even *used* for input
tokens anyway, because higher layers of curl would just bail out if the
server actually said anything *back* to us in the negotiation. We assume
that we send a single token to the server, and it accepts it. If the server
wants to continue the exchange (as is required for NTLM and for SPNEGO
to do anything useful), then curl was broken anyway.
So the only bit which actually did anything was the bit in
Curl_output_negotiate(), which always generates an *initial* SPNEGO
token saying "Hey, I support only the Kerberos mechanism and this is its
token".
You could have done that by manually just prefixing the Kerberos token
with the appropriate bytes, if you weren't going to do any proper SPNEGO
handling. There's no need for the FBOpenSSL library at all.
The sane way to do SPNEGO is just to *ask* the GSSAPI library to do
SPNEGO. That's what the 'mech_type' argument to gss_init_sec_context()
is for. And then it should all Just Work™.
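A hedged sketch of asking the GSSAPI library for SPNEGO directly (error
handling elided; target_name is assumed to be an already-imported
gss_name_t):

    #include <gssapi/gssapi.h>

    /* the SPNEGO mechanism OID, 1.3.6.1.5.5.2 */
    static gss_OID_desc spnego_mech =
      { 6, (void *)"\x2b\x06\x01\x05\x05\x02" };

    static OM_uint32 spnego_step(gss_name_t target_name, gss_ctx_id_t *ctx,
                                 gss_buffer_t in, gss_buffer_t out)
    {
      OM_uint32 minor;
      /* pass the SPNEGO OID as mech_type and let the library negotiate
         the actual mechanism (Kerberos, NTLM, ...) for us */
      return gss_init_sec_context(&minor, GSS_C_NO_CREDENTIAL, ctx,
                                  target_name, &spnego_mech,
                                  GSS_C_MUTUAL_FLAG, 0,
                                  GSS_C_NO_CHANNEL_BINDINGS, in,
                                  NULL, out, NULL, NULL);
    }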
That 'sane way' will be added in a subsequent patch, as will bug fixes
for our failure to handle any exchange other than a single outbound
token to the server which results in immediate success.
Before GnuTLS 3.3.6, the gnutls_x509_crt_check_hostname() function
didn't actually check IP addresses in SubjectAltName, even though it was
explicitly documented as doing so. So do it ourselves...
The old way using getpwuid could cause problems in programs that enable
reading from netrc files simultaneously in multiple threads.
Reported-by: David Woodhouse
The AES-GCM ciphers were added to GnuTLS as late as ver. 3.0.1 but
the code path in which they're referenced here is only ever used for
somewhat older GnuTLS versions. This caused undeclared identifier errors
when compiling against those.
This seems to have become necessary for SRP support to work starting
with GnuTLS ver. 2.99.0. Since support for SRP was added to GnuTLS
before the function that takes this priority string, there should be no
issue with backward compatibility.
When an error has been detected, skip the final forced call to the
progress callback by making sure to pass the current return code
variable in the Curl_done() call in the CURLM_STATE_DONE state.
This avoids the "extra" callback that could occur even if you returned
error from the progress callback.
Bug: http://curl.haxx.se/mail/lib-2014-06/0062.html
Reported by: Jonathan Cardoso Machado
The static connection counter caused a race condition. Moving the
connection id counter into conncache solves it, as well as simplifying
the related logic.
They were added because of an older code path that used allocations and
should not have been left in the code. With this change the logic goes
back to how it was.
Curl_rand() will return a dummy and repeatable random value for this
case, making it possible to write test cases that verify output.
Also, fake the timestamp with CURL_FORCETIME set.
Only when built debug enabled of course.
Curl_ssl_random() was not used anymore so it has been
removed. Curl_rand() is enough.
create_digest_md5_message: generate base64 instead of hex string
curl_sasl: also fix memory leaks in some OOM situations
httpproxycode is not reset in Curl_initinfo, so a 407 is not reset even
if curl_easy_reset is called between transfers.
Bug: http://curl.haxx.se/bug/view.cgi?id=1380
The method change is forbidden by the obsolete RFC2616, but libcurl did
it anyway for compatibility reasons. The new RFC7231 allows this
behaviour so there's no need for the scary "Violate RFC 2616/10.3.x"
notice. Also update the comments accordingly.
The SASL/Digest code previously used the current time's seconds +
microseconds to add randomness, but it is much better to instead get
more data from Curl_rand().
It will also make it easier to "fake" that for debug builds on demand
in the future.
Rather than use a short 8-byte hex string, extended the cnonce to be
32 bytes long, like Windows SSPI does.
Used a combination of random data as well as the current date and
time for the generation.
"Any two of the parameters, readfds, writefds, or exceptfds, can be
given as null. At least one must be non-null, and any non-null
descriptor set must contain at least one handle to a socket."
http://msdn.microsoft.com/en-ca/library/windows/desktop/ms740141(v=vs.85).aspx
When using select(), cURL doesn't adhere to this (WinSock-specific)
rule, and can ask to monitor empty fd_sets, which leads to select()
returning WSAEINVAL (i.e. EINVAL) and connections failing in mysterious
ways as a result (at least when using the curl_multi_socket_action()
interface).
Bug: http://curl.haxx.se/mail/lib-2014-05/0278.html
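A hedged sketch of the kind of guard WinSock needs before select()
(Windows-only; assumes the fd_sets were filled earlier):

    #include <winsock2.h>

    static int wait_or_select(fd_set *rd, fd_set *wr, fd_set *ex,
                              struct timeval *tv, DWORD timeout_ms)
    {
      /* WinSock returns WSAEINVAL if all three sets are empty, so fall
         back to a plain wait in that case */
      if(rd->fd_count == 0 && wr->fd_count == 0 && ex->fd_count == 0) {
        Sleep(timeout_ms);
        return 0;  /* report "timeout", as select() would */
      }
      return select(0, rd, wr, ex, tv); /* first arg ignored on WinSock */
    }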
OpenSSL passes the out and outlen variables uninitialized to the
select_next_proto_cb callback function. If the callback function
returns SSL_TLSEXT_ERR_OK, the caller assumes the callback filled in
values for out and outlen and processes them as such. Previously, if
there was no overlap in the protocol lists, the curl code did not fill
in any values in these variables yet still returned SSL_TLSEXT_ERR_OK,
which means we were triggering undefined behavior. valgrind warns about
this.
This patch fixes the issue by falling back to HTTP/1.1 if there is no
overlap.
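A sketch of the corrected callback shape (the usual OpenSSL NPN select
callback signature; the h2 selection itself is elided):

    #include <openssl/ssl.h>

    static int select_next_proto_cb(SSL *ssl, unsigned char **out,
                                    unsigned char *outlen,
                                    const unsigned char *in,
                                    unsigned int inlen, void *arg)
    {
      (void)ssl; (void)in; (void)inlen; (void)arg;
      /* ... try to pick h2 from 'in' first (elided) ... */

      /* no overlap: always fill out/outlen before returning OK, or
         the caller reads uninitialized memory */
      *out = (unsigned char *)"http/1.1";
      *outlen = 8;
      return SSL_TLSEXT_ERR_OK;
    }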
Make all code use connclose() and connkeep() when changing the "close
state" for a connection. These two macros take a string argument with an
explanation, and debug builds of curl will include that in the debug
output. Helps tracking connection re-use/close issues.
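A sketch of how such macros can be laid out (illustrative; field names
and infof() follow libcurl internals):

    #ifdef DEBUGBUILD
    #define connclose(x,y) \
      do { \
        (x)->bits.close = TRUE; \
        infof((x)->data, "Marked for [closure]: %s\n", y); \
      } while(0)
    #define connkeep(x,y) \
      do { \
        (x)->bits.close = FALSE; \
        infof((x)->data, "Marked for [keep alive]: %s\n", y); \
      } while(0)
    #else
    #define connclose(x,y) ((x)->bits.close = TRUE)  /* reason ignored */
    #define connkeep(x,y)  ((x)->bits.close = FALSE)
    #endif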
Commit 517b06d657 (in 7.36.0) that brought the CREDSPERREQUEST flag
only set it for HTTPS, making HTTP less good at doing connection re-use
than it should be. Now set it for HTTP as well.
Simple test case:
"curl -v -u foo:bar localhost --next -u bar:foo localhost"
Bug: http://curl.haxx.se/mail/lib-2014-05/0127.html
Reported-by: Kamil Dudka
The variable wasn't assigned at all until step3, which meant that a
failed connect would never assign the variable and thus a bad value
would be returned.
Reported-by: Larry Lin
Bug: http://curl.haxx.se/mail/lib-2014-04/0203.html
In commit 0b3750b5c2 (released in 7.36.0) we fixed a timeout issue
but instead broke the timings.
To fix this, I introduce a new timestamp to use for the timeouts and
restored the previous timestamp and timestamp position so that the old
timer functionality is restored.
In addition to that, that change also broke connection timeouts for when
more than one connect was used (as it would then count the total time
from the first connect and not for the most recent one). Now
Curl_timeleft() has been modified so that it checks against different
start times depending on which timeout it checks.
Test 1303 is updated accordingly.
Bug: http://curl.haxx.se/mail/lib-2014-05/0147.html
Reported-by: Ryan Braud
Whilst the qop directive isn't required to be present in a client's
response, as servers should assume a qop of "auth" if it isn't
specified, some may return authentication failure if it is missing.
To cater for the automatic generation of the new Visual Studio project
files, moved the lib file list into a separate variable so that lib
and lib/vtls can be referenced independently.
-p takes a list of Mozilla trust purposes and levels for certificates to
include in output. Takes the form of a comma separated list of
purposes, a colon, and a comma separated list of levels.
Now that nghttp2_submit_request returns the assigned stream ID, we
don't have to check the stream ID using before_stream_send_callback.
The adjust_priority_callback was removed.
Depending on compiler line 3505 could generate the following warning or
error:
* warning: ISO C90 forbids mixed declarations and code
* A declaration cannot appear after an executable statement in a block
* error C2275: 'size_t' : illegal use of this type as an expression
As there's a default connection timeout and this wrongly used the
connection timeout during a transfer after the connection is completed,
this function would trigger timeouts during transfers erroneously.
Bug: http://curl.haxx.se/bug/view.cgi?id=1352
Figured-out-by: Radu Simionescu
If the precision is indeed shorter than the string, don't strlen() to
find the end because that's not how the precision operator works.
I also added a unit test for curl_msnprintf to make sure this works and
that the fix doesn't break a few other basic use cases. I found a POSIX
compliance problem that I marked TODO in the unit test, and I figure we
need to add more tests in the future.
Reported-by: Török Edwin
Commit 07b66cbfa4 unfortunately broke native NTLM message support in
compilers, such as VC6, VC7 and others, that don't support long long
type declarations. This commit fixes VC6 and VC7 as they support the
__int64 extension, however, we should consider an additional fix for
other compilers that don't support this.
set.infilesize in this case was modified in several places, which could
lead to repeated requests using the same handle having unintended/wrong
consequences based on what the previous request did!
This makes the findprotocol() function work as intended so that libcurl
can properly be restricted to not support HTTP while still supporting
HTTPS - since the HTTPS handler previously set both the HTTP and HTTPS
bits in the protocol field.
This fixes --proto and --proto-redir for most SSL protocols.
This is done by adding a few new convenience defines that groups HTTP
and HTTPS, FTP and FTPS etc that should then be used when the code wants
to check for both protocols at once. PROTO_FAMILY_[protocol] style.
Bug: https://github.com/bagder/curl/pull/97
Reported-by: drizzt
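The convenience defines follow this pattern (a sketch in the
PROTO_FAMILY_[protocol] style mentioned above; the if() shows intended
use):

    #define PROTO_FAMILY_HTTP (CURLPROTO_HTTP|CURLPROTO_HTTPS)
    #define PROTO_FAMILY_FTP  (CURLPROTO_FTP|CURLPROTO_FTPS)

    /* checking "any flavor of HTTP" then becomes: */
    if(conn->handler->protocol & PROTO_FAMILY_HTTP) {
      /* ... */
    }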
As this makes curl_global_init_mem() behave the same way as
curl_global_init() already does in that aspect - the same number of
curl_global_cleanup() calls is then required to again decrease the
counter and then eventually do the cleanup.
Bug: http://curl.haxx.se/bug/view.cgi?id=1362
Reported-by: Tristan
ufds might not be allocated in case nfds overflows to zero while
extra_nfds is still non-zero. ufds is then accessed within the
extra_nfds-based for loop.
Should a command return untagged responses that contained no data then
the imap_matchresp() function would not detect them as valid responses,
as it wasn't taking the CRLF characters into account at the end of each
line.
Given that we presently support "auth" and not "auth-int" or "auth-conf"
for native challenge-response messages, added client side validation of
the quality-of-protection options from the server's challenge message.
To avoid urldata.h being included from the header file, and to ensure
that the source file has the correct include order, as highlighted by
one of the auto builds recently.
When CURL_DISABLE_CRYPTO_AUTH is defined the DIGEST-MD5 code should not
be included, regardless of whether USE_WINDOWS_SSPI is defined or not.
This is indicated by the definition of USE_HTTP_NEGOTIATE and USE_NTLM
in curl_setup.h.
Updated the docs to clarify and the code accordingly, with test 1528 to
verify:
When CURLHEADER_SEPARATE is set and libcurl is asked to send a request
to a proxy but it isn't CONNECT, then _both_ header lists
(CURLOPT_HTTPHEADER and CURLOPT_PROXYHEADER) will be used since the
single request is then made for both the proxy and the server.
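A short usage sketch of the separated lists (header values are
illustrative):

    #include <curl/curl.h>

    static void set_headers(CURL *curl, struct curl_slist *hdrs,
                            struct curl_slist *proxyhdrs)
    {
      curl_easy_setopt(curl, CURLOPT_HEADEROPT, CURLHEADER_SEPARATE);
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);       /* server */
      curl_easy_setopt(curl, CURLOPT_PROXYHEADER, proxyhdrs); /* proxy */
      /* for a non-CONNECT request via a proxy, both lists are sent,
         since that single request serves both proxy and server */
    }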
When doing passive FTP, the multi state function needs to extract and
use the happy eyeballs sockets to wait on in order to check for
completion!
Bug: http://curl.haxx.se/mail/lib-2014-02/0135.html (ruined)
Reported-by: Alan
In addition to commit fe260b75e7, fixed the same issue for RFC-821
based SMTP servers and allow the credentials to be given to curl even
though they are not used with the server.
Specifying user credentials when the SMTP server doesn't support
authentication would cause curl to display "No known authentication
mechanisms supported!" and return CURLE_LOGIN_DENIED.
Reported-by: Tom Sparrow
Bug: http://curl.haxx.se/mail/lib-2014-03/0173.html
There are server certificates used with IP addresses in the CN field,
but we MUST not allow wildcard certs for hostnames given as IP addresses
only. Therefore we must make Curl_cert_hostcheck() fail such attempts.
Bug: http://curl.haxx.se/docs/adv_20140326B.html
Reported-by: Richard Moore
In addition to FTP, other connection based protocols such as IMAP, POP3,
SMTP, SCP, SFTP and LDAP require a new connection when different log-in
credentials are specified. Fixed the detection logic to include these
other protocols.
Bug: http://curl.haxx.se/docs/adv_20140326A.html
The debug messages printed inside PolarSSL always seem to end with a
newline, so 'infof()' should not add one. Besides, the trace 'line'
should be 'const'.
Because the socket is non-blocking, PolarSSL needs the getsock call to
know which action to perform in the multi environment.
In some cases it might happen that we have not yet received all the data
needed to perform the handshake. ssl_handshake then returns
POLARSSL_ERR_NET_WANT_READ and the state is updated, but because getsock
lacked the proper #define macro, the library never asked to select the
socket for input, so the socket was never woken up when the last data
became available. This led to a timeout.
The API has changed since version 1.3. A compatibility header has been
created to ensure forward compatibility for code using the old API:
* the x509 certificate structure has been renamed from x509_cert to
x509_crt
* a new dedicated setter, ssl_set_own_cert_rsa, exists for RSA
certificates; ssl_set_own_cert is for generic keys
* ssl_default_ciphersuites has been replaced by the function
ssl_list_ciphersuites()
This patch drops the use of the compatibility header.
Rename x509_cert to x509_crt and add "compat-1.2.h"
include.
This would still need some more thorough conversion
in order to drop "compat-1.2.h" include.
Port number zero is perfectly valid to connect to. I moved to storing
the remote port number in an int so that -1 means undefined and 0-65535
can be used for legitimate port numbers.
Setting the TIMER_STARTSINGLE timestamp first in CONNECT has the
drawback that for actions that go back to the CONNECT state, the
timestamp is reset, and for the multi_socket API there's no
corresponding Curl_expire() then, so the timeout logic goes wrong!
Reported-by: Brad Spencer
Bug: http://curl.haxx.se/mail/lib-2014-02/0036.html
Remove the slash/backslash problem; now only slashes are used, as
wmake automatically translates slashes/backslashes to the proper form
or the tools are not sensitive to it.
Enable spaces in paths.
Use the internal rm command for all host platforms.
Add an error message if an old Open Watcom version is used. Some old
versions exhibit build problems with the latest curl version. Now only
versions 1.8, 1.9 and 2.0 beta are supported.
For HTTP/2, we may read up everything, including the response body with
the header fields, in Curl_http_readwrite_headers. If no content-length
is provided, curl waits for the connection close, which we emulate using
conn->proto.httpc.closed = TRUE. The thing is, if we have read
everything already, then http2_recv won't be called and we cannot signal
that the HTTP/2 stream has closed. As a workaround, we return nonzero
from data_pending so that http2_recv gets called.
Original commit message was:
Don't omit CN verification in SChannel when an IP address is used.
Side-effect of this change:
SChannel and CryptoAPI do not support the iPAddress subjectAltName
according to RFC 2818. If present, SChannel will first compare the
IP address to the dNSName subjectAltNames and then fallback to the
most specific Common Name in the Subject field of the certificate.
This means that after this change curl will not connect to SSL/TLS
hosts as long as the IP address is not specified in the SAN or CN
of the server certificate or the verifyhost option is disabled.
This patch enables HTTP POST/PUT in HTTP2.
We disabled the Expect header field and chunked transfer encoding
since HTTP2 forbids them.
In HTTP1, curl sends small upload data together with the request
headers, but HTTP2 requires that upload data be sent in separate DATA
frames, so we added some conditionals to achieve this.
Perform more work in between sleeps. This works around the fact that
axtls does not expose any knowledge about when work needs to be
performed. Depending on the connection and how often perform is being
called, this can save ~25% of time on SSL handshakes (measured on a
20ms latency connection calling perform roughly every 10ms).
When allowing NTLM, the re-use connection logic was too focused on
finding an existing NTLM connection to use and didn't properly allow
re-use of other ones. This made the logic not re-use perfectly re-usable
connections.
Added test case 1418 and 1419 to verify.
Regression brought in 8ae35102c (curl 7.35.0)
Reported-by: Jeff King
Bug: http://thread.gmane.org/gmane.comp.version-control.git/242213
For a function that returns a decoded version of a string, it seems
really strange to allow a NULL pointer to get passed in which then
prevents the decoded data from being returned!
This functionality was not documented anywhere either.
If anyone would use it that way, that memory would've been leaked.
Bug: https://github.com/bagder/curl/pull/90
Reported-by: Arvid Norberg
Make sure that the special NTLM magic we do is for HTTP+NTLM only since
that's where the authenticated connection is a weird non-standard
paradigm.
Regression brought in 8ae35102c (curl 7.35.0)
Bug: http://curl.haxx.se/mail/lib-2014-02/0100.html
Reported-by: Dan Fandrich
The code didn't properly check the return codes to detect overflows so
it could trigger incorrectly. Like on mingw32.
Regression introduced in 345891edba (curl 7.35.0)
Bug: http://curl.haxx.se/mail/lib-2014-02/0097.html
Reported-by: LM
when using --http2 one can now selectively disable NPN or ALPN with
--no-alpn and --no-npn. For now this is honored with NSS only.
TODO: honor this option with GnuTLS and OpenSSL
SSL_ENABLE_ALPN can be used for preprocessor ALPN feature detection,
but not SSL_NEXT_PROTO_SELECTED, since it is an enum value and not a
preprocessor macro.
Not comma, which is an inconsistency and a mistake probably inherited
from the examples section of RFC1867.
This bug has been present since the day curl started to support
multipart formposts, back in the 90s.
Reported-by: Rob Davies
Bug: http://curl.haxx.se/bug/view.cgi?id=1333
When using the multi socket interface, libcurl calls the
curl_multi_timer_callback asking to be woken up after
CURL_TIMEOUT_EXPECT_100 milliseconds.
After the timeout has expired, calling curl_multi_socket_action with
CURL_SOCKET_TIMEOUT as sockfd leads libcurl to check expired
timeouts. When handling the 100-continue one, the following check in
Curl_readwrite() fails if exactly CURL_TIMEOUT_EXPECT_100 milliseconds
passed since the timeout has been set!
It seems logical to consider that having waited for exactly
CURL_TIMEOUT_EXPECT_100 ms is enough.
Bug: http://curl.haxx.se/bug/view.cgi?id=1334
A server might respond with a content-encoding header and a response
that was encoded accordingly in HTTP-draft-09/2.0 mode, even if the
client did not send an accept-encoding header earlier. The server might
not send a content-encoding header if the identity encoding was used to
encode the response.
See:
http://tools.ietf.org/html/draft-ietf-httpbis-http2-09#section-9.3
This patch chooses a different approach to integrate HTTP2 into the
HTTP curl stack. The idea is that we insert an HTTP2 layer between the
HTTP code and the socket (TLS) layer. When HTTP2 is initialized (either
via NPN or Upgrade), we replace the Curl_recv/Curl_send callbacks with
HTTP2's, but keep the original callbacks in the http_conn struct. When
sending data serialized by nghttp2, we use the original Curl_send
callback. Likewise, when reading data from the network, we use the
original Curl_recv callback. In this way we can treat both TLS and
non-TLS connections.
With this patch, one can transfer contents from https://twitter.com and
from nghttp2 test server in plain HTTP as well.
The code still has rough edges. The notable one is that I could not
figure out how to call nghttp2_session_send() when the underlying
socket is writable.
Check the NPN result before preparing an HTTP request and switch into
HTTP/2.0 mode if necessary. This is a work in progress, the actual code
to prepare and send the request using nghttp2 is still missing from
Curl_http2_send_request().
the number of elements in the 'nghttp2_session_callbacks' structure is
now reduced by 2 in version 0.3.0 (I'm not sure when the change
happened, but checking for ver 0.3.0 works for me).
Something was wrong in 'userp' for the HTTP2 recv_callback(). The
session was created using bogus user-data: '&conn' and not 'conn'.
I noticed this since the socket value in Curl_read_plain() was set to
an impossibly high value.
hostcache_timestamp_remove() should remove old *unused* entries from the
host cache, but it never checked whether the entry was actually in
use. This complements commit 030a2b8cb.
Bug: http://curl.haxx.se/bug/view.cgi?id=1327
tftp_done() can get called with its TFTP state pointer still being NULL
on an early time-out, which caused a segfault when dereferenced.
Reported-by: Glenn Sheridan
Bug: http://curl.haxx.se/mail/lib-2014-01/0246.html
Make it possible to call
curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &filesize)
and related functions on remote sftp:// files, without downloading them.
Reported-by: Yingwei Liu
Bug: http://curl.haxx.se/mail/lib-2014-01/0139.html
This prevents sending a `Content-Length: -1` header; e.g. this occurred
with the following combination:
* standard HTTP POST (no chunked encoding),
* user-defined read function set,
* `CURLOPT_POSTFIELDSIZE(_LARGE)` NOT set.
With this fix it now behaves like HTTP PUT.
Make GnuTLS old and new consistent: always specify the desired
protocol, cipher and certificate type in both modes. Disable insecure
ciphers as reported by howsmyssl.com. Honor not only --sslv3, but also
the --tlsv1[.N] switches.
Related Bug: http://curl.haxx.se/bug/view.cgi?id=1323
conversion from 'curl_off_t' to 'size_t', possible loss of data
Where curl_off_t is a 64-bit word and size_t is 32-bit - for example
with 32-bit Windows builds.
1 - allow >31 bit max-age values
2 - don't overflow on extremely large max-age values when we add the
value to the current time
3 - make sure max-age takes precedence over expires as dictated by
RFC6265
Bug: http://curl.haxx.se/mail/lib-2014-01/0130.html
Reported-by: Chen Prog
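A hedged sketch of the overflow-safe arithmetic behind point 2
(illustrative names; assumes time_t values fit in a long, hence the
LONG_MAX cap):

    #include <limits.h>
    #include <time.h>
    #include <curl/curl.h>

    static time_t cookie_expiry(time_t now, curl_off_t maxage)
    {
      /* clamp now + max-age so extremely large values cannot wrap */
      if(maxage > (curl_off_t)(LONG_MAX - now))
        return (time_t)LONG_MAX;  /* effectively "never expires" */
      return now + (time_t)maxage;
    }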
Starting with Visual Studio 2013 (VC12) and Windows 8.1, the
GetVersionInfoEx() function has been marked as deprecated and its
return value altered. Updated connect.c and curl_sspi.c to use
VerifyVersionInfo() where possible, which has been available since
Windows 2000.
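A sketch of the VerifyVersionInfo() pattern, here checking for Windows
Vista (6.0) or later (illustrative, not the exact curl code):

    #include <windows.h>

    static BOOL is_vista_or_later(void)
    {
      OSVERSIONINFOEX osvi = { sizeof(osvi) };
      DWORDLONG mask = 0;
      osvi.dwMajorVersion = 6;
      mask = VerSetConditionMask(mask, VER_MAJORVERSION,
                                 VER_GREATER_EQUAL);
      return VerifyVersionInfo(&osvi, VER_MAJORVERSION, mask);
    }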
A transfer timeout could result in an error message such as "Operation
timed out after 3000 milliseconds with 19 bytes of -1 received". This
patch removes the nonsensical "of -1" when the size of the transfer
is unknown, mirroring the logic in lib/transfer.c.
By default even recent versions of OpenSSL support and accept both
"export strength" ciphers, small-bitsize ciphers as well as downright
deprecated ones.
This change sets a default cipher set that avoids the worst ciphers, and
subsequently makes https://www.howsmyssl.com/a/check no longer grade
curl/OpenSSL connects as 'Bad'.
Bug: http://curl.haxx.se/bug/view.cgi?id=1323
Reported-by: Jeff Hodges
With the recently added timeout "reminder" functionality, there's no
reason left for us to execute timeout code before the time is
ripe. Simplifies the handling too.
This will make the *TIMEOUT and *CONNECTTIMEOUT options more accurate
again, which probably is most important when the *_MS versions are used.
In multi_socket, make sure to update 'now' after having handled activity
on a socket.
BACKGROUND:
We have learned that on some systems timeout timers are inaccurate and
might occasionally fire off too early. To make the multi_socket API work
with this, we made libcurl execute timeout actions a bit early too if
they are within our MULTI_TIMEOUT_INACCURACY. (added in commit
2c72732ebf, present since 7.21.0)
Switching everything to the multi API made this inaccuracy problem
slightly more notable as now everyone can be affected.
Recently (commit 21091549c0) we tweaked that inaccuracy value to make
timeouts more accurate and made it platform specific. We also figured
out that we have code at places that check for fixed timeout values so
they MUST NOT run too early as then they will not trigger at all (see
commit be28223f35 and a691e04470) - so there are definitely problems
with running timeouts before they're supposed to run. (We've handled
that so far by adding the inaccuracy margin to those specific timeouts.)
The libcurl multi_socket API tells the application with a callback that
a timeout expires in N milliseconds (and it explicitly will not tell it
again for the same timeout), and the application is then supposed to
call libcurl when that timeout expires. When libcurl subsequently gets
called with curl_multi_socket_action(...CURL_SOCKET_TIMEOUT...), it
knows that the application thinks the timeout expired - and alas, if it
is within the inaccuracy level libcurl will run code handling that
handle.
If the application says CURL_SOCKET_TIMEOUT to libcurl and _isn't_
within the inaccuracy level, libcurl will not consider the timeout
expired and it will not tell the application again since the timeout
value is still the same.
NOW:
This change introduces a modified behavior here. If the application says
CURL_SOCKET_TIMEOUT and libcurl finds no timeout code to run, it will
inform the application about the timeout value - *again* even if it is
the same timeout that it already told about before (although libcurl
will of course tell it the updated time so that it'll still get the
correct remaining time). This way, we will not risk that the application
believes it has done its job and libcurl thinks the time hasn't come yet
to run any code and both just sit waiting. This also allows us to
decrease the MULTI_TIMEOUT_INACCURACY margin, but that will be handled
in a separate commit.
A repeated timeout update to the application risks that the timeout will
then fire again immediately and we have what basically is a busy-loop
until the time is fine even for libcurl. If that becomes a problem, we
need to address it.
The net effect of this bug as it appeared to users, would be that
libcurl would timeout in the connect phase.
When disabling IPv6 use but still using getaddrinfo, libcurl would
wrongly not init the "hints" struct field in init_thread_sync() which
would subsequently lead to a getaddrinfo() invoke with a zeroed hints
with ai_socktype set to 0 instead of SOCK_STREAM. This would lead to
different behaviors on different platforms but basically incorrect
output.
This code was introduced in 483ff1ca75, released in curl 7.20.0.
This bug became a problem now due to the happy eyeballs code and how
libcurl now traverses the getaddrinfo() results differently.
Bug: http://curl.haxx.se/mail/lib-2014-01/0061.html
Reported-by: Fabian Frank
Debugged-by: Fabian Frank
Removed some of the infof() calls that were added with the recent
pipeline improvements but they're not useful to the vast majority of
readers and the pipelining seems to fundamentally work - the debugging
outputs can easily be added there if debugging these functions is needed
again.
When the requested authentication bitmask includes NTLM, we cannot
re-use a connection for another username/password as we then risk
re-using NTLM (connection-based auth).
This has the unfortunate downside that if you include NTLM as a possible
auth, you cannot re-use connections for other usernames/passwords even
if NTLM doesn't end up the auth type used.
Reported-by: Paras S
Patched-by: Paras S
Bug: http://curl.haxx.se/mail/lib-2014-01/0046.html
When the progress callback returned 1 at a very early state, the code
would not make CURLE_ABORTED_BY_CALLBACK get returned but the process
would still be interrupted. In the HTTP case, this would then cause a
CURLE_GOT_NOTHING to erroneously get returned instead.
Reported-by: Petr Novak
Bug: http://curl.haxx.se/bug/view.cgi?id=1318
This is a debug function only and serves no purpose in production code,
it only slows things down. I left the code #ifdef'ed for possible future
pipeline debugging.
Also, this was a global function without proper namespace usage.
Reported-by: He Qin
Bug: http://curl.haxx.se/bug/view.cgi?id=1320
If OpenSSL is built to support SSLv2 this brings back the ability to
explicitly select that as a protocol level.
Reported-by: Steve Holme
Bug: http://curl.haxx.se/mail/lib-2014-01/0013.html
Some feedback provided by byte_bucket on IRC pointed out that commit
db11750cfa wasn’t really correct because it allows for “upgrading” to a
newer protocol when it should only be allowing SSLv3.
This change fixes that.
When SSLv3 connection is forced, don't allow SSL negotiations for newer
versions. Feedback provided by byte_bucket in #curl. This behavior is
also consistent with the other force flags like --tlsv1.1 which doesn't
allow for TLSv1.2 negotiation, etc
Feedback-by: byte_bucket
Bug: http://curl.haxx.se/bug/view.cgi?id=1319
Since ad34a2d5c8 (present in the 7.34.0 release), forcing SSLv3 always
returns the error "curl: (35) Unsupported SSL protocol version". This
can be replicated with `curl -I -3 https://www.google.com/`. This fix
simply allows for v3 to be forced.
Following commit 0aafd77fa4, replaced the internal usage of
FORMAT_OFF_T and FORMAT_OFF_TU with the external versions that we
expect API programmers to use.
This negates the need for separate definitions which were subtly
different under different platforms/compilers.
Added support to the built-in printf() replacement functions, for these
non-ANSI extensions when compiling under Visual Studio, Borland, Watcom
and MinGW.
This fixes problems when generating libcurl source code that contains
curl_off_t variables.
Fixes a bug where, when all addresses in the first family fail
immediately (due to "Network unreachable" for example), curl would hang
and never try the next address family.
Now iterate through all address families when trying to establish the
first connection attempt.
Bug: http://curl.haxx.se/bug/view.cgi?id=1315
Reported-by: Michal Górny and Anthony G. Basile
Introduced in commit 2a4ee0d221, sending of data via the FILE
protocol would always return CURLE_WRITE_ERROR, regardless of whether
CURL_WRITEFUNC_PAUSE was returned from the callback function or not.
Make sure that we detect such attempts and return a proper error code
instead of silently handling this in problematic ways.
Updated the documentation to mention this limitation.
Bug: http://curl.haxx.se/bug/view.cgi?id=1286
Previously this memdebug free() replacement didn't properly work with a
NULL argument, which has made us write code that avoids calling
free(NULL) - causing some extra nuisance and unnecessary code.
Starting now, we allow free(NULL) even when built with the memdebug
system enabled.
free(NULL) is permitted by POSIX
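A sketch of a memdebug-style free() replacement that tolerates NULL
(the logging format here is made up for illustration):

    #include <stdio.h>
    #include <stdlib.h>

    void curl_dofree(void *ptr, int line, const char *source)
    {
      if(source)
        fprintf(stderr, "MEM %s:%d free(%p)\n", source, line, ptr);
      if(ptr)          /* free(NULL) is a no-op, matching POSIX free() */
        free(ptr);
    }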
free() itself allows a NULL input but our memory debug system requires
Curl_safefree() to be used instead when a "legitimate" NULL may be freed. Like
in the code here.
Pointed-out-by: Steve Holme
If a user indicated they preferred to authenticate using a SASL
mechanism, but SASL authentication wasn't supported by the server, curl
would always fall back to clear text when CAPABILITY wasn't supported,
even though the user didn't want to use this.
If a user indicated they preferred to authenticate using APOP or a SASL
mechanism, but neither were supported by the server, curl would always
fall back to clear text when CAPA wasn't supported, even though the
user didn't want to use this.
This also fixes the auto build failure caused by commit 6f2d5f0562.
This commit replaces that of 9f260b5d66 because according to RFC-2449,
section 6, there is no APOP capability "...even though APOP is an
optional command in [POP3]. Clients discover server support of APOP by
the presence in the greeting banner of an initial challenge enclosed in
angle brackets."
The FILE:// code doesn't support this option - and it doesn't make sense
to support it as long as it works as it does since then it'd only block
even longer.
But: setting CURLOPT_MAX_RECV_SPEED_LARGE would make the transfer first
get done and then libcurl would wait until the average speed would get
low enough. This happened because the transfer happens completely in the
DO state for FILE:// but then it would still unconditionally continue
into the PERFORM state where the speed check is made.
Starting now, the code will skip from DO_DONE to DONE immediately if no
socket is set to be recv()ed or send()ed to.
Bug: http://curl.haxx.se/bug/view.cgi?id=1312
Reported-by: Mohammad AlSaleh
The comment in the code mentions the zero terminating after having
copied data, but it mistakenly zero terminated the source data and not
the destination! This caused the test 864 problem discussed on the list:
http://curl.haxx.se/mail/lib-2013-12/0113.html
Signed-off-by: Daniel Stenberg <daniel@haxx.se>
Although highlighted by a bug in commit 1cfb436a2f, APOP
authentication could be chosen if the server was to reply with an empty
or missing timestamp in the server greeting and APOP was given in the
capability list by the server.
Added a loop to pop3_statemach_act() in which Curl_pp_readresp() is
called until the cache is drained. Without this multiple responses
received in a single packet could result in a hang or delay.
Similar to the processing of untagged CAPABILITY responses in IMAP and
multi-line EHLO responses in SMTP, moved the processing of multi-line
CAPA responses to pop3_state_capa_resp().
In an effort to reduce what pop3_endofresp() does and bring the POP3
source back inline with the IMAP and SMTP protocols, moved the APOP
detection into pop3_state_servergreet_resp().
Added support for downgrading the SASL authentication mechanism when the
decoding of CRAM-MD5, DIGEST-MD5 and NTLM messages fails. This enhances
the previously added support for graceful cancellation by allowing the
client to retry a lesser SASL mechanism such as LOGIN or PLAIN, or even
APOP / clear text (in the case of POP3 and IMAP) when supported by the
server.