Don't use a hard-coded size of 4 for the security layer and buffer size
in Curl_sasl_create_gssapi_security_message(); instead, use sizeof() as
we have done in the sasl_gssapi module.
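For illustration only, a minimal sketch of the idea with hypothetical
names (not the actual SSPI code): let sizeof() of the receiving variable
drive the copy length instead of a literal 4.

  #include <string.h>

  /* hypothetical sketch: the length is derived from the variable that is
     filled, so the two can never drift apart */
  static void copy_security_prefix(const unsigned char *msg,
                                   unsigned int *indata)
  {
    memcpy(indata, msg, sizeof(*indata));   /* rather than a hard-coded 4 */
  }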
Reduced the number of free() calls required for the decoded challenge
message in Curl_sasl_create_gssapi_security_message(), as a result of
coding it differently in the sasl_gssapi module.
Sending the NTLM/Negotiate header again after successful authentication
breaks the connection with certain proxies and request types (POST to MS
Forefront).
The ability to do HTTP requests over a UNIX domain socket has been
requested before, in Apr 2008 [0][1] and Sep 2010 [2]. While a
discussion happened, no patch seems to have made it through. I decided
to give it a go since I need to test an nginx HTTP server which listens
on a UNIX domain socket.
One patch [3] seems to make it possible to use the
CURLOPT_OPENSOCKETFUNCTION callback to obtain a UNIX domain socket.
Another person wrote a Go program which can do HTTP over a UNIX socket
for Docker [4]; it uses a special URL scheme (though the name contains
cURL, it has no relation to the cURL library).
This patch considers support for UNIX domain sockets at the same level
as HTTP proxies / IPv6: it acts as an intermediate socket provider and
not as a separate protocol. Since this feature affects network
operations, a new feature flag was added ("unix-sockets") with a
corresponding CURL_VERSION_UNIX_SOCKETS macro.
A new CURLOPT_UNIX_SOCKET_PATH option is added and documented. This
option enables UNIX domain sockets support for all requests on the
handle (replacing IP sockets and skipping proxies).
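A hedged usage sketch (the socket path and URL below are just examples):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* all requests on this handle use the UNIX domain socket; the host
         part of the URL is still used for the Host: header */
      curl_easy_setopt(curl, CURLOPT_UNIX_SOCKET_PATH, "/tmp/nginx.sock");
      curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }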
A new configure option (--enable-unix-sockets) and CMake option
(ENABLE_UNIX_SOCKETS) can disable this optional feature. Note that I
deliberately did not mark this feature as advanced; it is a
feature/component that should be easily available.
[0]: http://curl.haxx.se/mail/lib-2008-04/0279.html
[1]: http://daniel.haxx.se/blog/2008/04/14/http-over-unix-domain-sockets/
[2]: http://sourceforge.net/p/curl/feature-requests/53/
[3]: http://curl.haxx.se/mail/lib-2008-04/0361.html
[4]: https://github.com/Soulou/curl-unix-socket
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
On some platforms curl would crash if no credentials were used. As
such, detection of this use case has been added to prevent the crash.
Reported-by: Gisle Vanem
This patch prepares for adding UNIX domain sockets support.
TCP_NODELAY and TCP_KEEPALIVE are specific to TCP/IP sockets, so do not
apply these to other socket types. bindlocal only works for IP sockets
(independent of TCP/UDP), so filter that out too for other types.
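A minimal sketch of the kind of guard this implies (hypothetical helper,
not the actual connect.c code):

  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <netinet/tcp.h>

  /* only apply TCP options when the socket really is a TCP/IP socket */
  static void maybe_set_nodelay(int sockfd, int family)
  {
    if(family == AF_INET || family == AF_INET6) {
      int one = 1;
      setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    }
    /* AF_UNIX and other families: skip TCP_NODELAY, TCP_KEEPALIVE and the
       bindlocal-style local binding */
  }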
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
smb.c:398: warning: comparison of integers of different signs:
'ssize_t' (aka 'long') and 'unsigned long'
smb.c:443: warning: comparison of integers of different signs:
'ssize_t' (aka 'long') and 'unsigned long'
smb.c:322: warning: conversion to 'short unsigned int' from 'unsigned
int' may alter its value
smb.c:323: warning: conversion to 'short unsigned int' from 'unsigned
int' may alter its value
smb.c:482: warning: conversion to 'short unsigned int' from 'int' may
alter its value
smb.c:521: warning: conversion to 'unsigned int' from 'curl_off_t' may
alter its value
smb.c:549: warning: conversion to 'unsigned int' from 'curl_off_t' may
alter its value
smb.c:550: warning: conversion to 'short unsigned int' from 'int' may
alter its value
smb.c:489: warning: declaration of 'close' shadows a global declaration
smb.c:511: warning: declaration of 'read' shadows a global declaration
smb.c:528: warning: declaration of 'write' shadows a global declaration
smb.c:212: warning: unused parameter 'done'
smb.c:380: warning: ISO C does not allow extra ';' outside of a function
smb.c:812: warning: unused parameter 'premature'
smb.c:822: warning: unused parameter 'dead'
smb.c:311: warning: conversion from 'unsigned __int64' to 'u_short',
possible loss of data
smb.c:425: warning: conversion from '__int64' to 'unsigned short',
possible loss of data
smb.c:452: warning: conversion from '__int64' to 'unsigned short',
possible loss of data
smb.c:162: error: comma at end of enumerator list
smb.c:469: warning: conversion from 'size_t' to 'unsigned short',
possible loss of data
smb.c:517: warning: conversion from 'curl_off_t' to 'unsigned int',
possible loss of data
smb.c:545: warning: conversion from 'curl_off_t' to 'unsigned int',
possible loss of data
If the scratch buffer already existed when the CRLF conversion was
performed then the buffer pointer would be checked twice for NULL. This
second check is only necessary when the first check found the pointer to
be NULL and malloc() was therefore called.
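A hedged sketch of the pattern in question (names and size are
illustrative, not the actual transfer code):

  #include <stdlib.h>

  #define SCRATCH_SIZE 16384   /* illustrative size */

  /* allocate the CRLF conversion scratch buffer on demand */
  static char *get_scratch(char **scratchp)
  {
    if(!*scratchp) {
      *scratchp = malloc(SCRATCH_SIZE);
      if(!*scratchp)
        return NULL;           /* the only NULL check that is needed */
    }
    /* re-checking *scratchp here would be redundant: a pre-existing
       buffer is by definition non-NULL */
    return *scratchp;
  }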
Whilst I had moved the dot stuffing code from being performed before the
CRLF conversion takes place to after it, in commit 4bd860a001, I had
also moved it outside the 'when something read' block of code, which
meant it could perform the dot stuffing twice on a partial send if nread
happened to contain the right values. It also meant the function could
potentially read past the end of the buffer. This was highlighted by the
following warning:
warning: `nread' might be used uninitialized in this function
After commit 48d19acb7c the HTTP code would call Curl_nss_force_init()
twice when decoding an NTLM type-2 message, once directly and once
through the call to Curl_sasl_decode_ntlm_type2_message().
This commit disables pipelining for HTTP/2 or upgraded connections. For
HTTP/2, we do not support multiplexing. In general, requests cannot be
pipelined in an upgraded connection, since it is now a different
protocol.
When the connection code decides to close a socket it informs the multi
system via the Curl_multi_closed function. The multi system may, in
turn, invoke the CURLMOPT_SOCKETFUNCTION function with
CURL_POLL_REMOVE. This happens after the socket has already been
closed. Reorder the code so that CURL_POLL_REMOVE is called before the
socket is closed.
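A hedged sketch of the ordering (the notifier below is a stand-in for
Curl_multi_closed(), not its real signature):

  #include <unistd.h>

  /* stand-in for Curl_multi_closed(), which ends up invoking the
     CURLMOPT_SOCKETFUNCTION callback with CURL_POLL_REMOVE */
  static void notify_multi_of_close(int sockfd)
  {
    (void)sockfd;   /* the real logic lives in multi.c */
  }

  static void close_connection_socket(int sockfd)
  {
    /* inform the multi system first, while the descriptor is still valid */
    notify_multi_of_close(sockfd);
    /* only then actually close the socket */
    close(sockfd);
  }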
Debug output 'typo' fix.
Don't print an extra "0x" in
* Pipe broke: handle 0x0x2546d88, url = /
Add debug output.
Print the number of connections in the connection cache when
adding one, and not only when one is removed.
Fix typos in comments.
Updated the usage of some legacy APIs that were preventing curl from
compiling for the Windows Store and Windows Phone build targets.
Suggested-by: Stefan Neis
Feature: http://sourceforge.net/p/curl/feature-requests/82/
Visual Studio 2012 introduced support for Windows Store apps as well as
for Windows Phone 8. Introduced build targets that allow more modern
APIs to be used, as certain legacy ones are not available on these new
platforms.
Rather than define the function as extern in the source files that use
it, moved the function declaration into the SASL header file just like
the Digest and NTLM clean-up functions.
Additionally, added a function description comment block.
Previously, if HTTP/2 traffic was appended to the HTTP Upgrade response
header (so that both were in the same buffer), the trailing HTTP/2
traffic was not processed and was lost. The appended data is most likely
a SETTINGS frame. If it is lost, the nghttp2 library complains that the
server does not obey the HTTP/2 protocol, issues a GOAWAY frame and curl
eventually drops the connection. This commit fixes the problem; the
trailing data is now processed.
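A hedged sketch of handing the leftover bytes to nghttp2 (variable names
are hypothetical; only nghttp2_session_mem_recv() is the real API):

  #include <nghttp2/nghttp2.h>

  /* feed any bytes that followed the 101 response headers straight to
     nghttp2 so the server's SETTINGS frame is not lost */
  static int process_trailing_h2(nghttp2_session *h2, const uint8_t *buf,
                                 size_t nread, size_t header_len)
  {
    if(nread > header_len) {
      ssize_t rv = nghttp2_session_mem_recv(h2, buf + header_len,
                                            nread - header_len);
      if(rv < 0)
        return -1;   /* nghttp2 reported an error */
    }
    return 0;
  }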
Fix detection of the AsynchDNS feature, which depends not just on
pthreads support but also on whether USE_POSIX_THREADS is set or not.
Caught by test 1014.
This patch adds a new ENABLE_THREADED_RESOLVER option (corresponding to
--enable-threaded-resolver of autotools) which also needs a check for
HAVE_PTHREAD_H.
For symmetry with autotools, CURL_USE_ARES is renamed to ENABLE_ARES
(--enable-ares). Checks that test for the availability actually use
USE_ARES instead, as that is the result of whether c-ares is available
or not (in practice this does not matter as CARES is marked as a
required package, but nevertheless it is better to write the intent).
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
In preparation for moving the NTLM message code into the SASL module,
and separating the native code from the SSPI code, added functions that
simply call the functions in curl_ntlm_msg.c.
USE_NTLM would only be defined if: HTTP support was enabled, NTLM and
cryptography weren't disabled, and either a supporting cryptography
library or Windows SSPI was being compiled against.
This means it was not possible to build libcurl without HTTP support
and use NTLM for other protocols such as IMAP, POP3 and SMTP. Rather
than introduce a new SASL pre-processor definition, removed the HTTP
prerequisite just like USE_SPNEGO and USE_KRB5.
Note: Winbind support still needs to be dependent on CURL_DISABLE_HTTP
as it is only available to HTTP at present.
This bug dates back to August 2011 when I started to add support for
NTLM to SMTP.
Reworked the input token (challenge message) storage as what is passed
to the buf and desc in the response generation is typically a blob of
data rather than a string, so this is more in keeping with other areas
of the SSPI code, such as the NTLM message functions.
This temporarily breaks HTTP Digest authentication in SSPI-based builds,
causing CURLE_NOT_BUILT_IN to be returned. A follow-up commit will
resume normal operation.
Added a forward declaration of digestdata to overcome the following
compilation warning:
warning: 'struct digestdata' declared inside parameter list
Additionally made the ntlmdata forward declaration dependent on
USE_NTLM similar to how digestdata and kerberosdata are.
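For illustration, the shape of the fix with a hypothetical prototype
(not the actual header contents):

  /* forward declaration gives the struct tag file scope */
  struct digestdata;

  /* without the declaration above, 'struct digestdata' first appearing
     inside a parameter list is what triggers the warning */
  void example_digest_cleanup(struct digestdata *digest);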
To provide consistent behaviour between the various HTTP authentication
functions, use CURLcode-based error codes for Curl_input_digest(),
especially as the calling code doesn't use the specific error code, just
the fact that it failed.
These were previously hard-coded, and whilst defined in security.h,
they may or may not be present in old header files given that these
defines were never used in the original code.
Not only that, but there appears to be some ambiguity between the ANSI
and UNICODE NTLM definition name in security.h.
When duplicating a handle, the data to post was duplicated using
strdup() even though it could be binary, contain zeroes and not even be
zero terminated! This caused out-of-bounds reads and crashes/segfaults.
Since the lib/strdup.c file is no longer easily shared with the curl
tool after this change, the tool now uses its own version instead.
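A hedged sketch of the kind of binary-safe duplication the fix calls for
(the helper name is hypothetical):

  #include <stdlib.h>
  #include <string.h>

  /* duplicate a buffer by its known length instead of trusting strdup(),
     which stops at the first zero byte and requires a terminator */
  static char *memdup_example(const char *src, size_t length)
  {
    char *copy = malloc(length + 1);
    if(!copy)
      return NULL;
    memcpy(copy, src, length);
    copy[length] = '\0';   /* terminator added for convenience only */
    return copy;
  }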
Bug: http://curl.haxx.se/docs/adv_20141105.html
CVE: CVE-2014-3707
Reported-By: Symeon Paraschoudis
- Prior to this change no SSL minimum version was set by default at
runtime for PolarSSL. Therefore, in most cases PolarSSL would probably
have defaulted to a minimum version of SSLv3, which is no longer secure.
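A hedged sketch of setting such a floor, assuming the PolarSSL 1.3
ssl_set_min_version() API (treat the exact call names as an assumption):

  #include <polarssl/ssl.h>

  /* raise the minimum accepted version to TLS 1.0 so SSLv3 is never
     negotiated by default; assumes PolarSSL 1.3 identifiers */
  static void set_tls_floor(ssl_context *ssl)
  {
    ssl_set_min_version(ssl, SSL_MAJOR_VERSION_3, SSL_MINOR_VERSION_1);
  }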
The previous condition, which checked whether the socket was marked as
readable when also adding a writable one, was incorrect and didn't take
the pause bits properly into account.
autotools does not use features.h nor _BSD_SOURCE. As this macro
triggers warnings since glibc 2.20, remove it. This should not cause any
functional differences.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Typically the USE_WINDOWS_SSPI definition would not be used when the
CURL_DISABLE_CRYPTO_AUTH define is; however, it is still a valid build
configuration and, as such, the SASL Kerberos V5 (GSSAPI) authentication
data structures and functions would incorrectly be used when they
shouldn't be.
Introduced a new USE_KRB5 definition that takes into account the use of
CURL_DISABLE_CRYPTO_AUTH like USE_SPNEGO and USE_NTLM do.
Basically, servers often don't respond well to this and instead send
the full contents, and libcurl would then error out with the assumption
that the server doesn't support resume. As the data is already
transferred at that point, this is now considered fine.
Test case 1434 added to verify this. Test case 1042 slightly modified.
Reported-by: hugo
Bug: http://curl.haxx.se/bug/view.cgi?id=1443
Return a more appropriate error, rather than CURLE_OUT_OF_MEMORY, when
acquiring the credentials handle fails. This is then consistent with
the code prior to commit f7e24683c4 when log-in credentials were empty.
Fixed the ability to use the current log-in credentials with DIGEST-MD5.
I had previously disabled this functionality in commit 607883f13c as I
couldn't get it to work under Windows 8; however, after testing HTTP
Digest authentication through Windows SSPI and then further testing of
this code, I have found it works in Windows 7.
Some further investigation is required to see what the differences are
between Windows 7 and 8, but for now enable this functionality as the
code will return an error when AcquireCredentialsHandle() fails.
HTTP 1.1 is clearly specified to only allow three-digit response codes,
and libcurl used sscanf("%3d") for that purpose. This made libcurl
support smaller numbers but not larger ones. It does now, but we will
not make any specific promises nor document this further since it goes
outside of what HTTP is.
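For illustration only (not the exact parser curl uses), the difference
between the two approaches:

  #include <stdio.h>

  int main(void)
  {
    int code = 0;
    const char *line = "HTTP/1.1 2147483647 OK";   /* contrived status */

    sscanf(line, "HTTP/%*[0-9.] %3d", &code);   /* old style: code == 214 */
    sscanf(line, "HTTP/%*[0-9.] %d", &code);    /* new style: full number */

    printf("%d\n", code);
    return 0;
  }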
Bug: http://curl.haxx.se/bug/view.cgi?id=1441
Reported-by: Balaji
Don't call CompleteAuthToken() after InitializeSecurityContext() has
returned SEC_I_CONTINUE_NEEDED as this return code only indicates the
function should be called again after receiving a response back from
the server.
This only affected the Digest and NTLM authentication code.
For consistency with other areas of the NTLM code, propagate all errors
from Curl_ntlm_core_mk_nt_hash() up the call stack rather than just
CURLE_OUT_OF_MEMORY.
- Remove SSLv3 from the SSL default in darwinssl, schannel, cyassl, nss
and openssl, effectively making the default TLS 1.x (see the sketch
below). axTLS is not affected since it supports only TLS, and gnutls is
not affected since it already defaults to TLS 1.x.
- Update CURLOPT_SSLVERSION doc
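As a hedged usage note, applications that still require SSLv3 now have
to request it explicitly through the existing CURLOPT_SSLVERSION option,
for example:

  #include <curl/curl.h>

  /* opt back in to SSLv3 on one handle; the library default is now
     TLS 1.x */
  static void force_sslv3(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_SSLVERSION, CURL_SSLVERSION_SSLv3);
  }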
... for the local variable name in functions holding the return
code. Using the same name universally makes code easier to read and
follow.
Also, unify code for checking for CURLcode errors with:
if(result) or if(!result)
instead of
if(result == CURLE_OK), if(CURLE_OK == result) or if(result != CURLE_OK)
Prefer usage of Perl modules for the sha1 calculation since there might
be systems where openssl is not installed or not in the path. If openssl
is used for the sha1 calculation then don't rely on cut, since it is
usually not available on systems other than Linux.
It turned out some features were not enabled in the build since, for
example, url.c #ifdefs on features that are defined on a per-backend
basis but vtls.h didn't include the backend headers.
CURLOPT_CERTINFO was one such feature that was accidentally disabled.
There is no need for such a function. Include directories propagate by
themselves, and having a function wrap one simple link statement makes
little sense.
Coverity CID 252518. This function is in general far too complicated for
its own good and really should be broken down into several smaller
functions instead - but I'm adding this protection here now since it
seems there's a risk the code flow can end up here and dereference a
NULL pointer.
Coverity CID 1241957. Removed the unused argument. As this struct and
pointer now are used only for krb5, there's no need to keep unused
function arguments around.
Option --pinnedpubkey takes a path to a public key in DER format and
only connects if it matches (currently only implemented with OpenSSL).
Provides CURLOPT_PINNEDPUBLICKEY for curl_easy_setopt().
Extract a public RSA key from a website like so:
openssl s_client -connect google.com:443 2>&1 < /dev/null | \
sed -n '/-----BEGIN/,/-----END/p' | openssl x509 -noout -pubkey \
| openssl rsa -pubin -outform DER > google.com.der
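A hedged sketch of the corresponding library usage (the file name is
taken from the example above):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://google.com/");
      /* the transfer fails unless the server's public key matches the
         DER file extracted above */
      curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY, "google.com.der");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }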
Coverity CID 1157776. Removed a superfluous if() that always evaluated
true (and an else clause that never ran), and then re-indented the
function accordingly.
Coverity CID 1215284. The server name is extracted with
Curl_copy_header_value() and passed in to this function, and
Curl_copy_header_value() can actually fail and return NULL.
For private keys, use the first match from: user-specified key file
(if provided), ~/.ssh/id_rsa, ~/.ssh/id_dsa, ./id_rsa, ./id_dsa
Note that the previous code only looked for id_dsa files. id_rsa is
now generally preferred, as it supports larger key sizes.
For public keys, use the user-specified key file, if provided.
Otherwise, try to extract the public key from the private key file.
This means that passing --pubkey is typically no longer required,
and makes the key-handling behavior more like OpenSSH.
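A minimal sketch of the private-key lookup order described above (the
helper and its existence checks are hypothetical, not libcurl's actual
code):

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  static const char *pick_private_key(const char *user_key, char *buf,
                                      size_t buflen)
  {
    const char *home = getenv("HOME");
    const char *names[] = { "id_rsa", "id_dsa" };
    size_t i;

    if(user_key)
      return user_key;                     /* a user-specified file wins */

    for(i = 0; i < 2; i++) {               /* then ~/.ssh/, RSA before DSA */
      if(home) {
        snprintf(buf, buflen, "%s/.ssh/%s", home, names[i]);
        if(!access(buf, R_OK))
          return buf;
      }
    }
    for(i = 0; i < 2; i++) {               /* finally the current directory */
      snprintf(buf, buflen, "./%s", names[i]);
      if(!access(buf, R_OK))
        return buf;
    }
    return NULL;
  }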
Coverity CID 1202836. If the proxy environment variable returned an empty
string, it would be leaked. While an empty string is not really a proxy, other
logic in this function already allows a blank string to be returned so allow
that here to avoid the leak.
Coverity CID 1215287. There's a potential risk for a memory leak in
here, and moving the free call to be unconditional seems like a cheap
price to remove the risk.
Coverity CID 1215296. There's a potential risk for a memory leak in
here, and moving the free call to be unconditional seems like a cheap
price to remove the risk.
Coverity detected this. CID 1241954. When Curl_poll() returns a negative value
'mcode' was uninitialized. Pretty harmless since this is debug code only and
would at worst cause an error to _not_ be returned...
Mostly because we use C strings and they end at a binary zero, so we
know we can't open a file name containing an embedded binary zero.
Reported-by: research@g0blin.co.uk
The switch to using Curl_expire_latest() in commit cacdc27f52 was a
mistake and was against the advice even mentioned in that commit. The
comparison in asyn-thread.c:Curl_resolver_is_resolved() makes
Curl_expire() the suitable function to use.
Bug: http://curl.haxx.se/bug/view.cgi?id=1426
Reported-By: graysky