On some platforms curl would crash if no credentials were used, so
added detection of that use case to prevent the crash.
Reported-by: Gisle Vanem
This patch prepares for adding UNIX domain sockets support.
TCP_NODELAY and TCP_KEEPALIVE are specific to TCP/IP sockets, so do not
apply these to other socket types. bindlocal only works for IP sockets
(independent of TCP/UDP), so filter that out too for other types.
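A minimal sketch of the filtering idea (hypothetical helper; the real
patch keys the checks off the socket's address family):

  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <netinet/tcp.h>

  /* sketch: TCP_NODELAY only makes sense for IP-based sockets */
  static void maybe_set_nodelay(int sockfd, int family)
  {
    if(family == AF_INET || family == AF_INET6) {
      int one = 1;
      (void)setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                       (void *)&one, sizeof(one));
    }
    /* AF_UNIX and other families: skip it, as with TCP keep-alive
       and bindlocal */
  }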
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
smb.c:398: warning: comparison of integers of different signs:
'ssize_t' (aka 'long') and 'unsigned long'
smb.c:443: warning: comparison of integers of different signs:
'ssize_t' (aka 'long') and 'unsigned long'
smb.c:322: warning: conversion to 'short unsigned int' from 'unsigned
int' may alter its value
smb.c:323: warning: conversion to 'short unsigned int' from 'unsigned
int' may alter its value
smb.c:482: warning: conversion to 'short unsigned int' from 'int' may
alter its value
smb.c:521: warning: conversion to 'unsigned int' from 'curl_off_t' may
alter its value
smb.c:549: warning: conversion to 'unsigned int' from 'curl_off_t' may
alter its value
smb.c:550: warning: conversion to 'short unsigned int' from 'int' may
alter its value
smb.c:489: warning: declaration of 'close' shadows a global declaration
smb.c:511: warning: declaration of 'read' shadows a global declaration
smb.c:528: warning: declaration of 'write' shadows a global declaration
smb.c:212: warning: unused parameter 'done'
smb.c:380: warning: ISO C does not allow extra ';' outside of a function
smb.c:812: warning: unused parameter 'premature'
smb.c:822: warning: unused parameter 'dead'
smb.c:311: warning: conversion from 'unsigned __int64' to 'u_short',
possible loss of data
smb.c:425: warning: conversion from '__int64' to 'unsigned short',
possible loss of data
smb.c:452: warning: conversion from '__int64' to 'unsigned short',
possible loss of data
smb.c:162: error: comma at end of enumerator list
smb.c:469: warning: conversion from 'size_t' to 'unsigned short',
possible loss of data
smb.c:517: warning: conversion from 'curl_off_t' to 'unsigned int',
possible loss of data
smb.c:545: warning: conversion from 'curl_off_t' to 'unsigned int',
possible loss of data
If the scratch buffer already existed when the CRLF conversion was
performed then the buffer pointer would be checked twice for NULL. This
second check is only necessary if the call to malloc() was performed by
the first check.
Whilst I had moved the dot stuffing code from being performed before
CRLF conversion takes place to after it, in commit 4bd860a001, I had
moved it outside the 'when something read' block of code, which meant
it could perform the dot stuffing twice on a partial send if nread
happened to contain the right values. It also meant the function could
potentially read past the end of the buffer. This was highlighted by
the following warning:
warning: `nread' might be used uninitialized in this function
After commit 48d19acb7c the HTTP code would call Curl_nss_force_init()
twice when decoding an NTLM type-2 message, once directly and once
through the call to Curl_sasl_decode_ntlm_type2_message().
This commit disables pipelining for HTTP/2 or upgraded connections. For
HTTP/2, we do not support multiplexing. In general, requests cannot be
pipelined in an upgraded connection, since it is now a different
protocol.
When the connection code decides to close a socket it informs the multi
system via the Curl_multi_closed function. The multi system may, in
turn, invoke the CURLMOPT_SOCKETFUNCTION function with
CURL_POLL_REMOVE. This happens after the socket has already been
closed. Reorder the code so that CURL_POLL_REMOVE is called before the
socket is closed.
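In outline, the reordered flow is (a sketch using the internal names
mentioned above, so not standalone-compilable):

  /* sketch: notify the multi system while the descriptor is still
     valid, so CURLMOPT_SOCKETFUNCTION sees a live socket */
  Curl_multi_closed(conn, sockfd);  /* CURL_POLL_REMOVE may fire here */
  sclose(sockfd);                   /* the socket is closed afterwards */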
Debug output 'typo' fix.
Don't print an extra "0x" in
* Pipe broke: handle 0x0x2546d88, url = /
Add debug output.
Print the number of connections in the connection cache when
adding one, and not only when one is removed.
Fix typos in comments.
Updated the usage of some legacy APIs that were preventing curl from
compiling for Windows Store and Windows Phone build targets.
Suggested-by: Stefan Neis
Feature: http://sourceforge.net/p/curl/feature-requests/82/
Visual Studio 2012 introduced support for Windows Store apps as well as
supporting Windows Phone 8. Introduced build targets that allow more
modern APIs to be used as certain legacy ones are not available on these
new platforms.
Rather than define the function as extern in the source files that use
it, moved the function declaration into the SASL header file just like
the Digest and NTLM clean-up functions.
Additionally, added a function description comment block.
Previously, if HTTP/2 traffic was appended to the HTTP Upgrade response
header (so that both were in the same buffer), the trailing HTTP/2
traffic was not processed and was lost. The appended data is most
likely a SETTINGS frame. If it is lost, the nghttp2 library complains
that the server does not obey the HTTP/2 protocol, issues a GOAWAY
frame, and curl eventually drops the connection.
This commit fixes the problem so that the trailing data is now
processed.
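A sketch of the fix's core, assuming the leftover bytes sit at
(buf, nread) right after the parsed 101 response;
nghttp2_session_mem_recv() consumes them:

  #include <nghttp2/nghttp2.h>

  /* sketch: feed the bytes that trailed the 101 response (most likely
     a SETTINGS frame) into nghttp2 instead of dropping them */
  static int drain_upgrade_leftovers(nghttp2_session *h2,
                                     const uint8_t *buf, size_t nread)
  {
    ssize_t rv = nghttp2_session_mem_recv(h2, buf, nread);
    return (rv < 0 || (size_t)rv != nread) ? -1 : 0;
  }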
Fix detection of the AsynchDNS feature, which depends not just on
pthreads support, but also on whether USE_POSIX_THREADS is set.
Caught by test 1014.
This patch adds a new ENABLE_THREADED_RESOLVER option (corresponding to
--enable-threaded-resolver of autotools) which also needs a check for
HAVE_PTHREAD_H.
For symmetry with autotools, CURL_USE_ARES is renamed to ENABLE_ARES
(--enable-ares). Checks that test for the availability actually use
USE_ARES instead as that is the result of whether c-ares is available or
not (in practice this does not matter as CARES is marked as a required
package, but nevertheless it is better to write the intent).
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
In preparation for moving the NTLM message code into the SASL module,
and separating the native code from the SSPI code, added functions that
simply call the functions in curl_ntlm_msg.c.
USE_NTLM would only be defined if: HTTP support was enabled, NTLM and
cryptography weren't disabled, and either a supporting cryptography
library or Windows SSPI was being compiled against.
This means it was not possible to build libcurl without HTTP support
and use NTLM for other protocols such as IMAP, POP3 and SMTP. Rather
than introduce a new SASL pre-processor definition, removed the HTTP
prerequisite just like USE_SPNEGO and USE_KRB5.
Note: Winbind support still needs to be dependent on CURL_DISABLE_HTTP
as it is only available to HTTP at present.
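The resulting preprocessor condition looks roughly like this (a sketch;
the exact list of crypto backends checked may differ):

  #if !defined(CURL_DISABLE_NTLM) && !defined(CURL_DISABLE_CRYPTO_AUTH)
  #if defined(USE_OPENSSL) || defined(USE_GNUTLS) || defined(USE_NSS) || \
      defined(USE_DARWINSSL) || defined(USE_WINDOWS_SSPI)
  /* note: no CURL_DISABLE_HTTP check any more */
  #define USE_NTLM
  #endif
  #endif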
This bug dates back to August 2011 when I started to add support for
NTLM to SMTP.
Reworked the input token (challenge message) storage as what is passed
to the buf and desc in the response generation are typically blobs of
data rather than strings, so this is more in keeping with other areas
of the SSPI code, such as the NTLM message functions.
This temporarily breaks HTTP digest authentication in SSPI based builds,
causing CURLE_NOT_BUILT_IN to be returned. A follow up commit will
resume normal operation.
Added forward declaration of digestdata to overcome the following
compilation warning:
warning: 'struct digestdata' declared inside parameter list
Additionally made the ntlmdata forward declaration dependent on
USE_NTLM similar to how digestdata and kerberosdata are.
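In pattern form (hypothetical prototype, just to show the shape of the
fix):

  /* forward declaration at file scope; without it, a 'struct
     digestdata' first seen inside a parameter list gets a scope
     limited to that declaration, which triggers the warning */
  struct digestdata;

  int decode_digest_header(struct digestdata *digest); /* hypothetical */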
To provide consistent behaviour between the various HTTP authentication
functions use CURLcode based error codes for Curl_input_digest()
especially as the calling code doesn't use the specific error code,
just that it failed.
These were previously hard coded, and whilst defined in security.h,
they may or may not be present in old header files given that these
defines were never used in the original code.
Not only that, but there appears to be some ambiguity between the ANSI
and UNICODE NTLM definition name in security.h.
When duplicating a handle, the data to post was duplicated using
strdup() even though it can be binary, contain embedded zeroes and is
not necessarily zero terminated! This caused out-of-bounds read
crashes/segfaults.
Since the lib/strdup.c file no longer is easily shared with the curl
tool with this change, it now uses its own version instead.
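The replacement is essentially a binary-safe duplication helper; a
minimal sketch of the idea:

  #include <stdlib.h>
  #include <string.h>

  /* sketch: copy a possibly-binary buffer of a known length; the
     extra byte is zeroed so string consumers still see termination */
  static void *binary_dup(const void *src, size_t len)
  {
    char *buf = malloc(len + 1);
    if(!buf)
      return NULL;
    memcpy(buf, src, len);
    buf[len] = 0;
    return buf;
  }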
Bug: http://curl.haxx.se/docs/adv_20141105.html
CVE: CVE-2014-3707
Reported-By: Symeon Paraschoudis
- Prior to this change no SSL minimum version was set by default at
runtime for PolarSSL. Therefore in most cases PolarSSL would probably
have defaulted to a minimum version of SSLv3 which is no longer secure.
The previous condition, which checked if the socket was marked as
readable when also adding a writable one, was incorrect and didn't take
the pause bits properly into account.
autotools does not use features.h nor _BSD_SOURCE. As this macro
triggers warnings since glibc 2.20, remove it. It should not have
functional differences.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Typically the USE_WINDOWS_SSPI definition would not be used when the
CURL_DISABLE_CRYPTO_AUTH define is, however, it is still a valid build
configuration and, as such, the SASL Kerberos V5 (GSSAPI) authentication
data structures and functions would incorrectly be used when they
shouldn't be.
Introduced a new USE_KRB5 definition that takes into account the use of
CURL_DISABLE_CRYPTO_AUTH like USE_SPNEGO and USE_NTLM do.
Basically, servers often don't respond well to this and instead send
the full contents, and libcurl would then error out with the assumption
that the server doesn't support resume. As the data is then already
transferred anyway, this is now considered fine.
Test case 1434 added to verify this. Test case 1042 slightly modified.
Reported-by: hugo
Bug: http://curl.haxx.se/bug/view.cgi?id=1443
Return a more appropriate error, rather than CURLE_OUT_OF_MEMORY when
acquiring the credentials handle fails. This is then consistent with
the code prior to commit f7e24683c4 when log-in credentials were empty.
Fixed the ability to use the current log-in credentials with DIGEST-MD5.
I had previously disabled this functionality in commit 607883f13c as I
couldn't get this to work under Windows 8, however, from testing HTTP
Digest authentication through Windows SSPI and then further testing of
this code I have found it works in Windows 7.
Some further investigation is required to see what the differences are
between Windows 7 and 8, but for now enable this functionality as the
code will return an error when AcquireCredentialsHandle() fails.
HTTP 1.1 is clearly specified to only allow three digit response codes,
and libcurl used sscanf("%3d") for that purpose. This made libcurl
support smaller numbers but not larger. It does now, but we will not
make any specific promises nor document this further since it is going
outside of what HTTP is.
Bug: http://curl.haxx.se/bug/view.cgi?id=1441
Reported-by: Balaji
Don't call CompleteAuthToken() after InitializeSecurityContext() has
returned SEC_I_CONTINUE_NEEDED as this return code only indicates the
function should be called again after receiving a response back from
the server.
This only affected the Digest and NTLM authentication code.
For consistency with other areas of the NTLM code propagate all errors
from Curl_ntlm_core_mk_nt_hash() up the call stack rather than just
CURLE_OUT_OF_MEMORY.
- Remove SSLv3 from SSL default in darwinssl, schannel, cyassl, nss,
openssl effectively making the default TLS 1.x. axTLS is not affected
since it supports only TLS, and gnutls is not affected since it already
defaults to TLS 1.x.
- Update CURLOPT_SSLVERSION doc
... for the local variable name in functions holding the return
code. Using the same name universally makes code easier to read and
follow.
Also, unify code for checking for CURLcode errors with:
if(result) or if(!result)
instead of
if(result == CURLE_OK), if(CURLE_OK == result) or if(result != CURLE_OK)
Prefer usage of Perl modules for the sha1 calculation since there
might be systems where openssl is not installed or not in the path.
If openssl is used for the sha1 calculation then don't rely on cut,
since it is usually not available on systems other than Linux.
It turned out some features were not enabled in the build since for
example url.c #ifdefs on features that are defined on a per-backend
basis but vtls.h didn't include the backend headers.
CURLOPT_CERTINFO was one such feature that was accidentally disabled.
There is no need for such a function. Include directories propagate by
themselves, and having a function with one simple link statement makes
little sense.
Coverity CID 252518. This function is in general far too complicated for
its own good and really should be broken down into several smaller
functions instead - but I'm adding this protection here now since it
seems there's a risk the code flow can end up here and dereference a
NULL pointer.
Coverity CID 1241957. Removed the unused argument. As this struct and
pointer now are used only for krb5, there's no need to keep unused
function arguments around.
Option --pinnedpubkey takes a path to a public key in DER format and
only connects if it matches (currently only implemented with OpenSSL).
Provides CURLOPT_PINNEDPUBLICKEY for curl_easy_setopt().
Extract a public RSA key from a website like so:
openssl s_client -connect google.com:443 2>&1 < /dev/null | \
sed -n '/-----BEGIN/,/-----END/p' | openssl x509 -noout -pubkey \
| openssl rsa -pubin -outform DER > google.com.der
Coverity CID 1157776. Removed a superfluous if() that always evaluated
true (and an else clause that never ran), and then re-indented the
function accordingly.
Coverity CID 1215284. The server name is extracted with
Curl_copy_header_value() and passed in to this function, and
Curl_copy_header_value() can actually fail and return NULL.
For private keys, use the first match from: user-specified key file
(if provided), ~/.ssh/id_rsa, ~/.ssh/id_dsa, ./id_rsa, ./id_dsa
Note that the previous code only looked for id_dsa files. id_rsa is
now generally preferred, as it supports larger key sizes.
For public keys, use the user-specified key file, if provided.
Otherwise, try to extract the public key from the private key file.
This means that passing --pubkey is typically no longer required,
and makes the key-handling behavior more like OpenSSH.
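A sketch of the resulting search order (hypothetical helper using
access(); the real code builds the paths from the home directory it has
already determined):

  #include <stdio.h>
  #include <unistd.h>

  /* sketch: return the first matching private key, or NULL */
  static const char *find_private_key(const char *userkey,
                                      const char *home)
  {
    static char path[512], sshdir[256];
    const char *dirs[2];
    const char *names[] = { "id_rsa", "id_dsa" };
    int d, n;

    if(userkey)
      return userkey;                 /* user-specified key file */
    snprintf(sshdir, sizeof(sshdir), "%s/.ssh", home);
    dirs[0] = sshdir;                 /* ~/.ssh/ first, then ./ */
    dirs[1] = ".";
    for(d = 0; d < 2; d++)
      for(n = 0; n < 2; n++) {
        snprintf(path, sizeof(path), "%s/%s", dirs[d], names[n]);
        if(access(path, R_OK) == 0)
          return path;
      }
    return NULL;
  }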
Coverity CID 1202836. If the proxy environment variable returned an empty
string, it would be leaked. While an empty string is not really a proxy, other
logic in this function already allows a blank string to be returned so allow
that here to avoid the leak.
Coverity CID 1215287. There's a potential risk for a memory leak in
here, and moving the free call to be unconditional seems like a cheap
price to remove the risk.
Coverity CID 1215296. There's a potential risk for a memory leak in
here, and moving the free call to be unconditional seems like a cheap
price to remove the risk.
Coverity detected this. CID 1241954. When Curl_poll() returned a
negative value, 'mcode' was left uninitialized. Pretty harmless since
this is debug code only and
would at worst cause an error to _not_ be returned...
Mostly because we use C strings and they end at a binary zero so we know
we can't open a file name using an embedded binary zero.
Reported-by: research@g0blin.co.uk
The switch to using Curl_expire_latest() in commit cacdc27f52 was a
mistake and was against the advice even mentioned in that commit. The
comparison in asyn-thread.c:Curl_resolver_is_resolved() makes
Curl_expire() the suitable function to use.
Bug: http://curl.haxx.se/bug/view.cgi?id=1426
Reported-By: graysky
Previously we did not handle EOF from the underlying transport socket
and wrongly just returned the error code CURL_AGAIN from http2_recv,
which caused a busy loop since the socket had been closed. This patch
adds code to handle the EOF situation and tells the upper layer that we
got EOF.
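In pattern form (hypothetical names; the point is that EOF must be
distinguished from "no data yet"):

  #include <sys/types.h>
  #include <sys/socket.h>

  /* sketch: 0 from recv() means the peer closed the connection;
     reporting "try again" for that case would spin forever */
  static ssize_t h2_read(int sockfd, void *buf, size_t len, int *eof)
  {
    ssize_t nread = recv(sockfd, buf, len, 0);
    if(nread == 0)
      *eof = 1;   /* tell the upper layer we hit EOF */
    return nread; /* negative with EAGAIN still means "try again" */
  }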
Removed ISC_REQ_* flags from calls to InitializeSecurityContext to fix
bug in NTLM handshake for HTTP proxy authentication.
NTLM handshake for HTTP proxy authentication failed with error
SEC_E_INVALID_TOKEN from InitializeSecurityContext for certain proxy
servers on generating the NTLM Type-3 message.
The flag ISC_REQ_CONFIDENTIALITY seems to cause the problem according
to the observations and suggestions made in a bug report for the
QT project (https://bugreports.qt-project.org/browse/QTBUG-17322).
Removing all the flags solved the problem.
Bug: http://curl.haxx.se/mail/lib-2014-08/0273.html
Reported-by: Ulrich Telle
Assisted-by: Steve Holme, Daniel Stenberg
As a sort of step forward, this script will now first try to get the
data from the HTTPS URL using curl, and only if that fails will it fall
back to the HTTP transfer using perl's native LWP functionality. This
reduces the risk of this script being tricked.
Using HTTPS to get a cert bundle introduces a chicken-and-egg problem so
we can't really ever completely disable HTTP, but chances are that most
users already have a ca cert bundle that trusts the mozilla.org site
that this script downloads from.
A future version of this script will probably switch to require a
dedicated "insecure" command line option to allow downloading over HTTP
(or unverified HTTPS).
By not detecting and rejecting domain names for partial literal IP
addresses properly when parsing received HTTP cookies, libcurl can be
fooled to both send cookies to wrong sites and to allow arbitrary sites
to set cookies for others.
CVE-2014-3613
Bug: http://curl.haxx.se/docs/adv_20140910A.html
Historically the default "unknown" value for progress.size_dl and
progress.size_ul has been zero, since these values are initialized
implicitly by the calloc that allocates the curl handle that these
variables are a part of. Users of curl that install progress
callbacks may expect these values to always be >= 0.
Currently it is possible for progress.size_dl and progress.size_ul
to be set to a value of -1, if Curl_pgrsSetDownloadSize() or
Curl_pgrsSetUploadSize() are passed a "size" of -1 (which a few
places currently do, and a following patch will add more). So
let's update Curl_pgrsSetDownloadSize() and Curl_pgrsSetUploadSize()
so they make sure that these variables always contain a value that
is >= 0.
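In sketch form the setters now clamp the incoming value (the real
functions also update the progress flags):

  #include <curl/curl.h>  /* curl_off_t */

  /* sketch: a "size" of -1 (unknown) is stored as 0 so progress
     callbacks never see a negative value */
  static curl_off_t clamp_size(curl_off_t size)
  {
    return (size >= 0) ? size : 0;
  }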
Updates test579 and test599.
Signed-off-by: Brandon Casey <drafnel@gmail.com>
As the current element in the list is free()d by Curl_llist_remove(),
when the associated connection is pending, reworked the loop to avoid
accessing the next element through e->next afterward.
SecCertificateCopyPublicKey() is not available on iPhone. Use
CopyCertSubject() instead to see if the certificate returned by
SecCertificateCreateWithData() is valid.
Reported-by: Toby Peterson
... as the struct is free()d in the end anyway. It was first pointed out
to me that one of the ->msglist assignments was supposed to have been
->pending but was a copy and paste mistake, when I realized none of the
clearing of pointers had to be there.
... instead of scanning through all handles, stash only the actual
handles that are in that state in the new ->pending list and scan that
list only. It should be mostly empty or very short. And only used for
pipelining.
This avoids a rather hefty slow-down especially notable if you add many
handles to the same multi handle. Regression introduced in commit
0f147887 (version 7.30.0).
Bug: http://curl.haxx.se/mail/lib-2014-07/0206.html
Reported-by: David Meyer
Forwards the setting as the minimum SSL version (if set) to PolarSSL.
If the server does not support the requested version, the SSL handshake
will fail.
Bug: http://curl.haxx.se/bug/view.cgi?id=1419
SecCertificateCreateWithData() returns a non-NULL SecCertificateRef even
if the buffer holds an invalid or corrupt certificate. Call
SecCertificateCopyPublicKey() to make sure cacert is a valid
certificate.
Introducing Curl_expire_latest(). To be used when the code flow only
wants to get called at a later time that is "no later than X" so that
something can be checked (and another timeout be added).
The low-speed logic, for example, could easily be made to set very many
expire timeouts if it were called faster or sooner than what it had set
its own timer to, and this goes for a few other timers too that aren't
explicitly checked for timer expiration in the code.
If there's no condition in the code that says if(time-passed >= TIME),
then Curl_expire_latest() is preferred to Curl_expire().
If there exists such a condition, it is on the other hand important that
Curl_expire() is used and not the other.
Bug: http://curl.haxx.se/mail/lib-2014-06/0235.html
Reported-by: Florian Weimer
While waiting for a host resolve, check if the host cache may have
gotten the name already (by someone else), for when the same name is
resolved by several simultaneous requests.
The resolver thread occasionally gets stuck in getaddrinfo() when the
DNS or anything else is crappy or slow, so when a host is found in the
DNS cache, leave the thread alone and let it clean up the mess itself.
If the --cacert option is used with a CA certificate bundle that
contains multiple CA certificates, iterate through it, adding each
certificate as a trusted root CA.
This is usually due to failed auth. There's no point in us keeping such
a connection alive since it shouldn't be re-used anyway.
Bug: http://curl.haxx.se/bug/view.cgi?id=1381
Reported-by: Marcel Raad
This was done to make sure NTLM state that is bound to a connection
doesn't survive and gets used for the subsequent request - but
disconnects can also be done to, for example, make room in the
connection cache and thus that connection is not strictly related to
the easy handle's current operation.
The http authentication state is still kept in the easy handle since all
http auth _except_ NTLM is connection independent and thus survive over
multiple connections.
Bug: http://curl.haxx.se/mail/lib-2014-08/0148.html
Reported-by: Paras S
Problem: if CURLOPT_FORBID_REUSE is set, requests using NTLM failed
since NTLM requires multiple requests that re-use the same connection
for the authentication to work
Solution: Ignore the forbid reuse flag in case the NTLM authentication
handshake is in progress, according to the NTLM state flag.
Fixed known bug #77.
A conditionally compiled block in connect.c references WinSock 2
symbols, but used `#ifdef HAVE_WINSOCK_H` instead of `#ifdef
HAVE_WINSOCK2_H`.
Bug: http://curl.haxx.se/mail/lib-2014-08/0155.html
The URL is not a property of the connection so it should not be freed in
the connection disconnect but in the Curl_close() that frees the easy
handle.
Bug: http://curl.haxx.se/mail/lib-2014-08/0148.html
Reported-by: Paras S
Corrected a number of the error codes that can be returned from the
Curl_sasl_create_gssapi_security_message() function when things go
wrong.
It makes more sense to return CURLE_BAD_CONTENT_ENCODING when the
inbound security challenge can't be decoded correctly or doesn't
contain the KERB_WRAP_NO_ENCRYPT flag and CURLE_OUT_OF_MEMORY when
EncryptMessage() fails. Unfortunately the previous error code of
CURLE_RECV_ERROR was a copy and paste mistake on my part and should
have been corrected in commit 4b491c675f :(
... to handle "*/[total]". Also, removed the strange hack that made
CURLOPT_FAILONERROR on a 416 response after a *RESUME_FROM return
CURLE_OK.
Reported-by: Dimitrios Siganos
Bug: http://curl.haxx.se/mail/lib-2014-06/0221.html
In preparation for the upcoming SSPI implementation of GSSAPI
authentication, moved the definition of KERB_WRAP_NO_ENCRYPT from
socks_sspi.c to curl_sspi.h allowing it to be shared amongst other
SSPI based code.
... as mxr.mozilla.org is due to be retired.
The new host doesn't support If-Modified-Since nor ETags, meaning that
the script will now defer to download and do a post-transfer checksum
check to see if a new output is to be generated. The new output format
will hold the SHA1 checksum of the source file for that purpose.
We call this version 1.22
Reported-by: Ed Morley
Bug: http://curl.haxx.se/bug/view.cgi?id=1409
Bringing back the old functionality that was mistakenly removed when the
connection cache was remade. When creating a new connection, all the
existing ones are checked and those that are known to be dead get
disconnected for real and removed from the connection cache. It keeps
the cache from holding on to very many stale connections and aids in
keeping down the number of system sockets in wait states.
Helped-by: Jonatan Vela <jonatan.vela@ergon.ch>
Bug: http://curl.haxx.se/mail/lib-2014-06/0189.html
Curl_poll and Curl_wait_ms require the fix applied to Curl_socket_check
in commits b61e8b8 and c771968:
When poll or select are interrupted and this coincides with the timeout
elapsing, the functions return -1, indicating an error, instead of 0
for the timeout.
Given the SSPI package info query indicates a token size of 4096 bytes,
updated to use a dynamic buffer for the response message generation
rather than a fixed buffer of 1024 bytes.
Updated to use a dynamic buffer for the SPN generation via the recently
introduced Curl_sasl_build_spn() function rather than a fixed buffer of
1024 characters, which should have been more than enough. Using the new
function also removes the need for the separate sname variable that did
the wide character conversion in Unicode builds.
Updated Curl_sasl_create_digest_md5_message() to use a dynamic buffer
for the SPN generation via the recently introduced Curl_sasl_build_spn()
function rather than a fixed buffer of 128 characters.
Curl_sasl_create_digest_md5_message() would simply cast the SPN variable
to a TCHAR when calling InitializeSecurityContext(). This meant that,
under Unicode builds, it would not be a valid wide character string.
Updated to use the recently introduced Curl_sasl_build_spn() function
which performs the correct conversion for us.
Various parts of the libcurl source code build a SPN for inclusion in
authentication data. This information is either used by our own native
generation routines or passed to authentication functions in third-party
libraries such as SSPI. However, some of these instances use fixed
buffers rather than dynamically allocated ones, and not all of those
that should convert to wide character strings in Unicode builds do so.
Implemented a common function that generates a SPN and performs the
wide character conversion where necessary.
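A minimal sketch of the non-Unicode path (the real helper uses curl's
aprintf() and, in Unicode builds, additionally converts the result to a
wide character string):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* sketch: build "service/host" in a dynamically allocated buffer */
  static char *build_spn(const char *service, const char *host)
  {
    size_t len = strlen(service) + strlen(host) + 2; /* '/' and NUL */
    char *spn = malloc(len);
    if(spn)
      snprintf(spn, len, "%s/%s", service, host);
    return spn;
  }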
Following the recent changes, and in an attempt to align the SSPI based
authentication code, performed the following:
* Use NULL and SECBUFFVERSION rather than hard coded constants.
* Avoid comparison of zero in if statements.
* Standardised the buf and desc setup code.
Given the SSPI package info query indicates a token size of 2888 bytes,
and as with the Winbind code and commit 9008f3d56, use a dynamic buffer
for the Type-1 and Type-3 message generation rather than a fixed buffer
of 1024 bytes.
Just as with the SSPI implementations of Digest and Negotiate, added a
package info query so that libcurl can a) return a more appropriate
error code when the NTLM package is not supported, and b) use the
information later to allocate a dynamic buffer for the Type-1 and
Type-3 output tokens rather than a fixed buffer of 1024 bytes.
OPENSSL_config() is "strongly recommended" to use but unfortunately that
function makes an exit() call on wrongly formatted config files which
makes it hard to use in some situations. OPENSSL_config() itself calls
CONF_modules_load_file() and we use that instead and we ignore its
return code!
Reported-by: Jan Ehrhardt
Bug: http://curl.haxx.se/bug/view.cgi?id=1401
If the server rejects our authentication attempt and curl hasn't
called CompleteAuthToken() then the status variable will be
SEC_I_CONTINUE_NEEDED and not SEC_E_OK.
As such the existing detection mechanism for determining whether or not
the authentication process has finished is not sufficient.
However, the WWW-Authenticate: Negotiate header line will not contain
any data when the server has exhausted the negotiation, so we can use
that coupled with the already allocated context pointer.
To prevent an infinite loop in the readwrite_data() function when a
stream is reset before any response body arrives, reset the closed flag
to false once it has been evaluated as true.
"Expect: 100-continue", which was once deprecated in HTTP/2, is now
resurrected in HTTP/2 draft 14. This change adds its support to
HTTP/2 code. This change also includes stricter header field
checking.
Previously it only returned a CURLcode for errors, which is when it
returns a different size than what was passed in to it. The http2 code
only checked the CURLcode and thus failed.
Under these circumstances, the connection hasn't been fully established
and smtp_connect hasn't been called, yet smtp_done still calls the state
machine which dereferences the NULL conn pointer in struct pingpong.
This now provides a weak random function since PolarSSL doesn't have a
quick and easy way to provide a good one. It does however provide the
framework to make one so it _can_ and _should_ be done...
To force each backend implementation to really attempt to provide proper
random. If a proper random function is missing, then we can explicitly
make use of the default one we use when TLS support is missing.
This commit makes sure it works for darwinssl, gnutls, nss and openssl.
warning C4267: '=' : conversion from 'size_t' to 'long', possible loss
of data
The member connection_id of struct connectdata is a long (always a
32-bit signed integer on Visual C++) and the member next_connection_id
of struct conncache is a size_t, so one of them should be changed to
match the other.
This patch changes the size_t in struct conncache to long (the less
invasive change, as that variable is only ever used in a single code
line).
Bug: http://curl.haxx.se/bug/view.cgi?id=1399
1 - fixes the warnings when built without http2 support
2 - adds CURLE_HTTP2, a new error code for errors detected by nghttp2
basically when they are about http2 specific things.
CyaSSL 3.0.0 returns a unique error code if no CA cert is available,
so translate that into CURLE_SSL_CACERT_BADFILE when peer verification
is requested.
- Replace CURLAUTH_GSSNEGOTIATE with CURLAUTH_NEGOTIATE
- CURL_VERSION_GSSNEGOTIATE is deprecated and is now served by
  CURL_VERSION_SSPI, CURL_VERSION_GSSAPI and CURL_VERSION_SPNEGO.
- Remove display of feature 'GSS-Negotiate'
This reverts commit cb3e6dfa35 and instead fixes the problem
differently.
The reverted commit addressed a test failure in test 1021 by simplifying
and generalizing the code flow in a way that damaged the
performance. Now we modify the flow so that Curl_proxyCONNECT() again
does as much as possible in one go, yet still passes test 1021 with and
without valgrind. It previously failed due to mistakes in the multi
state machine.
Bug: http://curl.haxx.se/bug/view.cgi?id=1397
Reported-by: Paul Saab
It's wrong to assume that we can send a single SPNEGO packet which will
complete the authentication. It's a *negotiation* — the clue is in the
name. So make sure we handle responses from the server.
Curl_input_negotiate() will already handle bailing out if it thinks the
state is GSS_S_COMPLETE (or SEC_E_OK on Windows) and the server keeps
talking to us, so we should avoid endless loops that way.
This is the correct way to do SPNEGO. Just ask for it.
Now I correctly see it trying NTLMSSP authentication when a Kerberos ticket
isn't available. Of course, we bail out when the server responds with the
challenge packet, since we don't expect that. But I'll fix that bug next...
This is just fundamentally broken. SPNEGO (RFC4178) is a protocol which
allows client and server to negotiate the underlying mechanism which will
actually be used to authenticate. This is *often* Kerberos, and can also
be NTLM and other things. And to complicate matters, there are various
different OIDs which can be used to specify the Kerberos mechanism too.
A SPNEGO exchange will identify *which* GSSAPI mechanism is being used,
and will exchange GSSAPI tokens which are appropriate for that mechanism.
But this SPNEGO implementation just strips the incoming SPNEGO packet
and extracts the token, if any. And completely discards the information
about *which* mechanism is being used. Then we *assume* it was Kerberos,
and feed the token into gss_init_sec_context() with the default
mechanism (GSS_S_NO_OID for the mech_type argument).
Furthermore... broken as this code is, it was never even *used* for input
tokens anyway, because higher layers of curl would just bail out if the
server actually said anything *back* to us in the negotiation. We assume
that we send a single token to the server, and it accepts it. If the server
wants to continue the exchange (as is required for NTLM and for SPNEGO
to do anything useful), then curl was broken anyway.
So the only bit which actually did anything was the bit in
Curl_output_negotiate(), which always generates an *initial* SPNEGO
token saying "Hey, I support only the Kerberos mechanism and this is its
token".
You could have done that by manually just prefixing the Kerberos token
with the appropriate bytes, if you weren't going to do any proper SPNEGO
handling. There's no need for the FBOpenSSL library at all.
The sane way to do SPNEGO is just to *ask* the GSSAPI library to do
SPNEGO. That's what the 'mech_type' argument to gss_init_sec_context()
is for. And then it should all Just Work™.
That 'sane way' will be added in a subsequent patch, as will bug fixes
for our failure to handle any exchange other than a single outbound
token to the server which results in immediate success.
Before GnuTLS 3.3.6, the gnutls_x509_crt_check_hostname() function
didn't actually check IP addresses in SubjectAltName, even though it was
explicitly documented as doing so. So do it ourselves...
The old way using getpwuid could cause problems in programs that enable
reading from netrc files simultaneously in multiple threads.
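A sketch of the thread-safe variant (getpwuid_r() with a
caller-provided buffer):

  #include <sys/types.h>
  #include <pwd.h>
  #include <string.h>
  #include <unistd.h>

  /* sketch: look up the home directory without the shared static
     buffer that plain getpwuid() uses */
  static int get_home(char *home, size_t len)
  {
    struct passwd pw, *result = NULL;
    char buf[1024];

    if(getpwuid_r(geteuid(), &pw, buf, sizeof(buf), &result) || !result)
      return -1;
    strncpy(home, result->pw_dir, len - 1);
    home[len - 1] = 0;
    return 0;
  }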
Reported-by: David Woodhouse
The AES-GCM ciphers were added to GnuTLS as late as ver. 3.0.1 but
the code path in which they're referenced here is only ever used for
somewhat older GnuTLS versions. This caused undeclared identifier errors
when compiling against those.
This seems to have become necessary for SRP support to work starting
with GnuTLS ver. 2.99.0. Since support for SRP was added to GnuTLS
before the function that takes this priority string, there should be no
issue with backward compatibility.
When an error has been detected, skip the final forced call to the
progress callback by making sure to pass the current return code
variable in the Curl_done() call in the CURLM_STATE_DONE state.
This avoids the "extra" callback that could occur even if you returned
error from the progress callback.
Bug: http://curl.haxx.se/mail/lib-2014-06/0062.html
Reported-by: Jonathan Cardoso Machado
The static connection counter caused a race condition. Moving the
connection id counter into conncache solves it, as well as simplifying
the related logic.
They were added because of an older code path that used allocations and
should not have been left in the code. With this change the logic goes
back to how it was.
Curl_rand() will return a dummy and repeatable random value for this
case. Makes it possible to write test cases that verify output.
Also, fake the timestamp with CURL_FORCETIME set.
Only when built debug-enabled, of course.
Curl_ssl_random() was not used anymore so it has been
removed. Curl_rand() is enough.
create_digest_md5_message: generate base64 instead of hex string
curl_sasl: also fix memory leaks in some OOM situations
httpproxycode is not reset in Curl_initinfo, so a 407 is not reset even
if curl_easy_reset is called between transfers.
Bug: http://curl.haxx.se/bug/view.cgi?id=1380
The method change is forbidden by the obsolete RFC2616, but libcurl did
it anyway for compatibility reasons. The new RFC7231 allows this
behaviour so there's no need for the scary "Violate RFC 2616/10.3.x"
notice. Also update the comments accordingly.
The SASL/Digest previously used the current time's seconds +
microseconds to add randomness but it is much better to instead get more
data from Curl_rand().
It will also make it easier to "fake" that for debug builds on demand
in the future.
Rather than use a short 8-byte hex string, extended the cnonce to be
32 bytes long, like Windows SSPI does.
Used a combination of random data as well as the current date and
time for the generation.
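In sketch form (hypothetical helper; curl's version draws the random
words from Curl_rand()):

  #include <stdio.h>
  #include <time.h>

  /* sketch: 32 hex characters built from three random words plus the
     current time, as described above */
  static void make_cnonce(char out[33],
                          unsigned int r1, unsigned int r2,
                          unsigned int r3)
  {
    snprintf(out, 33, "%08x%08x%08x%08x",
             r1, r2, r3, (unsigned int)time(NULL));
  }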