Add 'gdi32' and 'crypt32' Windows implibs to avoid failure
while building libcurl.dll using the mingw compiler.
The same logic is used in 'src/makefile.m32' when
building curl.exe.
The factor of 8 is a bytes-to-bits conversion factor, but pkt_size and
rate_bps are both in bytes. When using the rate limiting option, curl
waits 8 times too long, and then transfers very quickly until the
average rate reaches the limit. The average rate follows the limit over
time, but the actual traffic is bursty.
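A quick sketch of the corrected computation; the helper name and types
are illustrative, not curl's actual code:

  #include <stdint.h>

  /* both inputs are byte counts, so no bytes-to-bits factor of 8
     belongs in the formula; including it made curl wait 8x too long */
  static long rate_wait_ms(int64_t pkt_size, int64_t rate_bps)
  {
    return (long)(pkt_size * 1000 / rate_bps);
  }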
Thanks-to: Benjamin Gilbert
The key length in bits will always fit in an unsigned long so the
loss-of-data warning assigning the result of x64 pointer arithmetic to
an unsigned long is unnecessary.
Prior to this change libcurl could show multiple 'CyaSSL: Connecting to'
messages, since cyassl_connect_step2 is typically called multiple times.
The message is superfluous even once since libcurl already informs the
user elsewhere in code that it is connecting.
- cache entries must also be refreshed when they are in use
- make the cache itself count as an inuse reference too, freeing up the
timestamp == 0 special value
- use timestamp == 0 for CURLOPT_RESOLVE entries, which don't get refreshed
- remove the special CURLOPT_RESOLVE inuse reference (timestamp == 0 will
prevent refresh)
- fix Curl_hostcache_clean - CURLOPT_RESOLVE entries don't have a special
reference anymore, and it would also have released non-CURLOPT_RESOLVE
references
- fix locking in Curl_hostcache_clean
- fix unit1305.c: the hash now keeps a reference, so inuse must be set to 1
This change is to allow the user's CTX callback to change the minimum
protocol version in the CTX without us later overriding it, as we did
prior to this change.
SSL_CTX_load_verify_locations can return negative values on failure,
so to check for failure we check whether the return value is != 1
(success) instead of whether it is == 0 (failure), the latter being
incorrect given that behavior.
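A minimal sketch of the corrected check, assuming an already created
SSL_CTX and caller-provided paths:

  #include <openssl/ssl.h>

  static int load_ca(SSL_CTX *ctx, const char *cafile, const char *capath)
  {
    /* 1 is the documented success value; anything else, including
       negative values, signals failure */
    if(SSL_CTX_load_verify_locations(ctx, cafile, capath) != 1)
      return -1; /* failure */
    return 0;
  }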
Previously in Curl_http2_switched, we called nghttp2_session_mem_recv to
parse incoming data that had already been received while curl was
handling the upgrade. But we didn't call nghttp2_session_send, so curl
never sent any response to the received frames. Most likely, we had
received SETTINGS from the server at this point, so we missed the
opportunity to send SETTINGS + ACK. This commit adds the missing
nghttp2_session_send call in Curl_http2_switched to fix this issue.
Bug: https://github.com/bagder/curl/issues/192
Reported-by: Stefan Eissing
"name =value" is fine and the space should just be skipped.
Updated test 31 to also test for this.
Bug: https://github.com/bagder/curl/issues/195
Reported-by: cromestant
Help-by: Frank Gevaerts
(Curl_cyassl_init)
- Return 1 on success, 0 on failure.
Prior to this change the fail path returned an incorrect value and the
evaluation to determine whether CyaSSL_Init had succeeded was incorrect.
Ironically, combined with the way curl_global_init tests SSL library
initialization (!Curl_ssl_init()), that meant a successfully initialized
CyaSSL was still seen as such, even though the code path and return
value in Curl_cyassl_init were wrong.
If the handle removed from the multi handle happens to be the one
"owning" the pipeline other transfers will be waiting indefinitely. Now
we move such handles back to connect to have them race (again) for
getting the connection and thus avoid hanging.
Bug: http://curl.haxx.se/bug/view.cgi?id=1465
Reported-by: Jiri Dvorak
... even if they don't have an associated connection anymore. It could
leave the waiting transfers pending with no active one on the
connection.
Bug: http://curl.haxx.se/bug/view.cgi?id=1465
Reported-by: Jiri Dvorak
(cyassl_connect_step1)
- Use TLS 1.0-1.2 by default when available.
CyaSSL/wolfSSL >= v3.3.0 supports setting a minimum protocol downgrade
version.
cyassl/cyassl@322f79f
This header file must be included after all header files except
memdebug.h, as it does similar memory function redefinitions and can be
similarly affected by conflicting definitions in system or dependent
library headers.
CID 1202732 warns on the previous use, although I cannot find any
problems with it. I'm doing this change only to make the code use a more
familiar approach to accomplish the same thing.
We prematurely changed protocol handler to HTTP/2 which made things very
slow (and wrong).
Reported-by: Stefan Eissing
Bug: https://github.com/bagder/curl/issues/169
Since we just started making use of free(NULL) in order to simplify code,
this change takes it a step further and:
- converts lots of Curl_safefree() calls to good old free()
- makes Curl_safefree() not check the pointer before free()
The (new) rule of thumb is: if you really want a function call that
frees a pointer and then assigns it to NULL, then use Curl_safefree().
But we will prefer just using free() from now on.
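A rough sketch of the resulting rule of thumb; the macro body here is an
assumption about Curl_safefree()'s free-and-clear semantics, not a copy
of the real definition:

  /* The rule of thumb: plain free() is preferred since free(NULL)
     is a documented no-op; Curl_safefree() is reserved for when the
     pointer must also be cleared, roughly: */
  #define Curl_safefree(ptr) do { free(ptr); (ptr) = NULL; } while(0)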
The following functions return immediately if a null pointer was passed.
* Curl_cookie_cleanup
* curl_formfree
A function caller therefore does not need to repeat a corresponding check.
This issue was fixed with the help of the software Coccinelle 1.0.0-rc24.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
The function "free" is documented in the way that no action shall occur for
a passed null pointer. It is therefore not needed that a function caller
repeats a corresponding check.
http://stackoverflow.com/questions/18775608/free-a-null-pointer-anyway-or-check-first
This issue was fixed with the help of the software Coccinelle 1.0.0-rc24.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Bug: https://github.com/bagder/curl/pull/168
(trynextip)
- Don't try the "other" protocol family unless IPv6 is available. In an
IPv4-only build the other family can only be IPv6 which is unavailable.
This change essentially stops IPv4-only builds from attempting the
"happy eyeballs" secondary parallel connection that is supposed to be
used by the "other" address family.
Prior to this change in IPv4-only builds that secondary parallel
connection attempt could be erroneously used by the same family (IPv4)
which caused a bug where every address after the first for a host could
be tried twice, often in parallel. This change fixes that bug. An
example of the bug is shown below.
Assume MTEST resolves to 3 addresses 127.0.0.2, 127.0.0.3 and 127.0.0.4:
* STATE: INIT => CONNECT handle 0x64f4b0; line 1046 (connection #-5000)
* Rebuilt URL to: http://MTEST/
* Added connection 0. The cache now contains 1 members
* STATE: CONNECT => WAITRESOLVE handle 0x64f4b0; line 1083
(connection #0)
* Trying 127.0.0.2...
* STATE: WAITRESOLVE => WAITCONNECT handle 0x64f4b0; line 1163
(connection #0)
* Trying 127.0.0.3...
* connect to 127.0.0.2 port 80 failed: Connection refused
* Trying 127.0.0.3...
* connect to 127.0.0.3 port 80 failed: Connection refused
* Trying 127.0.0.4...
* connect to 127.0.0.3 port 80 failed: Connection refused
* Trying 127.0.0.4...
* connect to 127.0.0.4 port 80 failed: Connection refused
* connect to 127.0.0.4 port 80 failed: Connection refused
* Failed to connect to MTEST port 80: Connection refused
* Closing connection 0
* The cache now contains 0 members
* Expire cleared
curl: (7) Failed to connect to MTEST port 80: Connection refused
The bug was born in commit bagder/curl@2d435c7.
In function Curl_closesocket() in connect.c the call to
Curl_multi_closed() was wrongly omitted if a socket close function
(CURLOPT_CLOSESOCKETFUNCTION) is registered.
That would lead to not removing the socket from the internal hash table
and not calling the multi socket callback appropriately.
Bug: http://curl.haxx.se/bug/view.cgi?id=1493
A signal handler for SIGALRM is installed in Curl_resolv_timeout. It is
configured to interrupt system calls and uses siglongjmp to return into
the function if alarm() goes off.
The signal handler is installed before curl_jmpenv is initialized.
This means that an already installed alarm timer could trigger the
newly installed signal handler, leading to undefined behavior when it
accesses the uninitialized curl_jmpenv.
Even if there is no previously installed alarm available, the code in
Curl_resolv_timeout itself installs an alarm before the environment is
fully set up. If the process is suspended right after that, the signal
handler could be called too early, as in the previous scenario.
To fix this, the signal handler should only be installed and the alarm
timer only be set after sigsetjmp has been called.
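A minimal sketch of the safe ordering, with illustrative names (the real
code lives in Curl_resolv_timeout and uses sigaction):

  #include <setjmp.h>
  #include <signal.h>
  #include <unistd.h>

  static sigjmp_buf curl_jmpenv;

  static void alarmfunc(int sig)
  {
    (void)sig;
    siglongjmp(curl_jmpenv, 1); /* jump back into the resolver */
  }

  static int resolve_with_timeout(unsigned int timeout_seconds)
  {
    if(sigsetjmp(curl_jmpenv, 1))
      return -1; /* the alarm fired: the resolve timed out */

    /* only after sigsetjmp: install the handler, then arm the timer */
    signal(SIGALRM, alarmfunc);
    alarm(timeout_seconds);
    /* ... perform the blocking name resolve here ... */
    alarm(0); /* disarm on success */
    return 0;
  }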
... by using the regular Curl_http_done() method which checks for
that. This makes test 1801 fail consistently with error 56 (which seems
fine) so that test is also updated here.
Reported-by: Ben Darnell
Bug: https://github.com/bagder/curl/issues/166
This makes curl pick better (stronger) ciphers by default. The strongest
available ciphers are fine according to the HTTP/2 spec so an OpenSSL
built curl is no longer rejected by strict HTTP/2 servers.
Bug: http://curl.haxx.se/bug/view.cgi?id=1487
...after the method line:
"Since the Host field-value is critical information for handling a
request, a user agent SHOULD generate Host as the first header field
following the request-line." / RFC 7230 section 5.4
Additionally, this will also make libcurl ignore multiple specified
custom Host: headers and only use the first one. Test 1121 has been
updated accordingly.
Bug: http://curl.haxx.se/bug/view.cgi?id=1491
Reported-by: Rainer Canavan
When checking for a connection to re-use, a proxy-using request must
check for and use a proxy connection and not one based on the host
name!
Added test 1421 to verify
Bug: http://curl.haxx.se/bug/view.cgi?id=1492
Instead of printing the cipher and MAC algorithm names separately, print the
whole cipher suite string which also includes the key exchange algorithm,
along with the negotiated TLS version.
The code used some happy eyeballs logic even _after_ CONNECT had been
sent to a proxy, although the happy eyeballs phase is (or should be)
over by then.
This is solved by splitting the multi state into two separate states
introducing the new SENDPROTOCONNECT state.
Bug: http://curl.haxx.se/mail/lib-2015-01/0170.html
Reported-by: Peter Laser
Since 1342a96ecf, a timeout detected in the multi state machine didn't
necessarily clear everything up, like formpost data.
Bug: https://github.com/bagder/curl/issues/147
Reported-by: Michel Promonet
Patched-by: Michel Promonet
SSLeay was the name of the library that was subsequently turned into
OpenSSL many moons ago (1999). curl has not worked with the old SSLeay
library for years. This is now reflected by only using USE_OPENSSL in
code that depends on OpenSSL.
Previously, we just ignored the error code passed to
on_stream_close_callback and returned 0 (success) after stream
closure, even if the stream was reset with an error. This patch records
the error code in on_stream_close_callback, and returns -1 with the
CURLE_HTTP2 error code on abnormal stream closure.
The vtls layer now checks the return value, so it is no longer necessary
to abort if a random number cannot be provided by NSS. This also fixes
the following Coverity report:
Error: FORWARD_NULL (CWE-476):
lib/vtls/nss.c:1918: var_compare_op: Comparing "data" to null implies that "data" might be null.
lib/vtls/nss.c:1923: var_deref_model: Passing null pointer "data" to "Curl_failf", which dereferences it.
lib/sendf.c:154:3: deref_parm: Directly dereferencing parameter "data".
obj_count can be 1 if the custom read function is set or the stdin
handle is a reference to a pipe. Since the pipe should be handled
using the PeekNamedPipe-check below, the custom read function should
only be used if it is actually enabled.
According to [1]: "Returning 0 will signal end-of-file to the library
and cause it to stop the current transfer."
This change makes the Windows telnet code handle this case accordingly.
[1] http://curl.haxx.se/libcurl/c/CURLOPT_READFUNCTION.html
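For illustration, a minimal read callback that signals end-of-file this
way; the FILE-based source here is just an example:

  #include <stdio.h>

  static size_t read_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
  {
    FILE *in = (FILE *)userdata;
    /* fread() returns 0 at end-of-file, which tells libcurl to stop
       the current transfer */
    return fread(ptr, 1, size * nmemb, in);
  }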
SSL_CTX_load_verify_locations by default (and if given non-NULL
parameters) searches the CAfile first and falls back to CApath. This
allows for CAfile to be a basis (e.g. installed by the package manager)
and CApath to be a user configured directory.
This wasn't reflected by the previous configure constraint which this
patch fixes.
Bug: https://github.com/bagder/curl/pull/139
Correctly check the memcmp() return value (it returns 0 if the strings match).
This is not really important, since curl is going to use http/1.1 anyway, but
it's still a bug I guess.
For consistency with other conditionally compiled code in openssl.c,
use OPENSSL_IS_BORINGSSL rather than HAVE_BORINGSSL and try to use
HAVE_BORINGSSL outside of openssl.c when the OpenSSL header files are
not included.
Previously we did not ignore PUSH_PROMISE header fields in the on_header
callback. This mixed the header values with the following HEADERS,
resulting in a protocol error.
Prior to this change the options for exclusive SSL protocol versions did
not actually set the protocol exclusive.
http://curl.haxx.se/mail/lib-2015-01/0002.html
Reported-by: Dan Fandrich
The struct went private in 1.0.2 so we cannot read the version number
from there anymore. Use SSL_version() instead!
Reported-by: Gisle Vanem
Bug: http://curl.haxx.se/mail/lib-2015-02/0034.html
Modified the Curl_ossl_cert_status_request() function to return FALSE
when built with BoringSSL or when OpenSSL is missing the necessary TLS
extensions.
Commit 7a8b2885e2 made some functions static and removed the public
Curl_ prefix. Unfortunately, it also removed the sasl_ prefix, which
is the naming convention we use in this source file.
curl_sasl.c:1221: error C2065: 'mechtable' : undeclared identifier
This error could also happen for non-SSPI builds when cryptography is
disabled (CURL_DISABLE_CRYPTO_AUTH is defined).
There is an issue with conflicting "struct timeval" definitions with
certain AmigaOS releases and C libraries, depending on what gets
included when. It's a minor difference - the OS one is unsigned,
whereas the common structure has signed elements. If the OS one ends up
getting defined, this causes a timing calculation error in curl.
It's easy enough to resolve this at the curl end, by casting the
potentially erroneous calculation to a signed long.
... of the other cert verification checks so that you can set verifyhost
and verifypeer to FALSE and still check the public key.
Bug: http://curl.haxx.se/bug/view.cgi?id=1471
Reported-by: Kyle J. McKay
Use a dynamically allocated buffer for the temporary SPN variable similar
to how the SASL GSS-API code does, rather than using a fixed buffer of
2048 characters.
Carrying on from commit 037cd0d991, removed the following unimplemented
instances of curlssl_close_all():
Curl_axtls_close_all()
Curl_darwinssl_close_all()
Curl_cyassl_close_all()
Curl_gskit_close_all()
Curl_gtls_close_all()
Curl_nss_close_all()
Curl_polarssl_close_all()
Fixed the following warning and error from commit 3af90a6e19 when SSL
is not being used:
url.c:2004: warning C4013: 'Curl_ssl_cert_status_request' undefined;
assuming extern returning int
error LNK2019: unresolved external symbol Curl_ssl_cert_status_request
referenced in function Curl_setopt
Use the SECURITY_STATUS typedef rather than an unsigned long for the
QuerySecurityPackageInfo() return and rename the variable as per other
areas of SSPI code.
Also known as "status_request" or OCSP stapling, defined in RFC6066 section 8.
This requires GnuTLS 3.1.3 or higher to build, however it's recommended to use
at least GnuTLS 3.3.11 since previous versions had a bug that caused the OCSP
response verification to fail even on valid responses.
This option can be used to enable/disable certificate status verification using
the "Certificate Status Request" TLS extension defined in RFC6066 section 8.
This also adds the CURLE_SSL_INVALIDCERTSTATUS error, to be used when the
certificate status verification fails, and the Curl_ssl_cert_status_request()
function, used to check whether the SSL backend supports the status_request
extension.
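A brief usage sketch of the new option (error handling omitted):

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* verify the certificate status; a failed check makes the
       transfer fail with CURLE_SSL_INVALIDCERTSTATUS */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYSTATUS, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }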
If the session is still used by active SSL/TLS connections, it
cannot be closed yet. Thus we mark the session as not being cached
any longer so that the reference counting mechanism in
Curl_schannel_shutdown is used to close and free the session.
Reported-by: Jean-Francois Durand
... and make sure we can connect the data connection to a host name that
is longer than 48 bytes.
Also simplifies the code somewhat by re-using the original host name
more, as it is likely still in the DNS cache.
Original-Patch-by: Vojtěch Král
Bug: http://curl.haxx.se/bug/view.cgi?id=1468
...to avoid a session ID getting cached without certificate checking, as
then, after a subsequent _enabling_ of the check, libcurl could still
re-use the session that was created without cert checks.
Bug: http://curl.haxx.se/docs/adv_20150108A.html
Reported-by: Marc Hesse
As we get the length for the DN and attribute variables, and we know
the length for the line terminator, pass the length values rather than
zero as this will save Curl_client_write() from having to perform an
additional strlen() call.
curl_ntlm_core.c:146: warning: passing 'DES_cblock' (aka 'unsigned char
[8]') to parameter of type 'char *' converts
between pointers to integer types with different
sign
Rather than duplicate the code in setup_des_key() for OpenSSL and in
extend_key_56_to_64() for non-OpenSSL based crypto engines, as it is
the same, use extend_key_56_to_64() for all engines.
smb.c:780: warning: passing 'char *' to parameter of type 'unsigned
char *' converts between pointers to integer types with
different sign
smb.c:781: warning: passing 'char *' to parameter of type 'unsigned
char *' converts between pointers to integer types with
different sign
smb.c:804: warning: passing 'char *' to parameter of type 'unsigned
char *' converts between pointers to integer types with
different sign
Prefer void for unused parameters, rather than assigning an argument to
itself as a) unintelligent compilers won't optimize it out, b) it can't
be used for const parameters, c) it will cause compilation warnings for
clang with -Wself-assign and d) is inconsistent with other areas of the
curl source code.
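A small illustration of the preferred pattern:

  /* cast unused parameters to void instead of self-assigning them */
  static int dummy_cb(void *clientp, int status)
  {
    (void)clientp; /* explicitly marked as unused */
    return status;
  }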
Moved our Initialize Security Context return attribute definitions to
the SSPI module, as a) these can be used by other SSPI based providers
and b) the ISC required attributes are defined there.
curl_schannel.h:123: warning: right-hand operand of comma expression
has no effect
Some instances of the curlssl_close_all() function were declared with a
void return type whilst others as int. The schannel version returned
CURLE_NOT_BUILT_IN and others simply returned zero, but in all cases the
return code was ignored by the calling function Curl_ssl_close_all().
For the time being and to keep the internal API consistent, changed all
declarations to use a void return type.
To reduce code we might want to consider removing the unimplemented
versions and use a void #define like schannel does.
For consistency, as we seem to have a bit of a mixed bag, changed all
instances of ipv4 and ipv6 in comments and documentation to use the
correct case.
Otherwise Curl_ssl_init_certinfo() can fail and set the num_of_certs
member variable to the requested count, which could then be used
incorrectly as libcurl closes down.
The return type for this function was 0 on success and 1 on error. This
was then examined by the calling functions and, in most cases, used to
return CURLE_OUT_OF_MEMORY.
Instead use CURLcode for the return type and return the out of memory
error directly, propagating it up the call stack.
The return type of this function is a boolean value, and even uses a
bool internally, so use bool in the function declaration as well as
the variables that store the return value, to avoid any confusion.
curl_ntlm_core.c:301: warning: pointer targets in passing argument 2 of
'CryptImportKey' differ in signedness
curl_ntlm_core.c:310: warning: passing argument 6 of 'CryptEncrypt' from
incompatible pointer type
curl_ntlm_core.c:540: warning: passing argument 4 of 'CryptGetHashParam'
from incompatible pointer type
... as it never copies the trailing zero anyway and always just the four
bytes so let's not mislead anyone into thinking it is actually treated
as a string.
Coverity CID: 1260214
lib/setup-vms.h : VAX HP OpenSSL port is ancient, needs help.
More defines to set symbols to uppercase.
src/tool_main.c : Fix parameter to vms_special_exit() call.
packages/vms/ :
backup_gnv_curl_src.com : Fix the error message to have the correct package.
build_curl-config_script.com : Rewrite to be more accurate.
build_libcurl_pc.com : Use tool_version.h now.
build_vms.com : Fix to handle lib/vtls directory.
curl_gnv_build_steps.txt : Updated build procedure documentation.
generate_config_vms_h_curl.com :
* VAX does not support 64 bit ints, so no NTLM support for now.
* VAX HP SSL port is ancient, needs some help.
* Disable NGHTTP2 for now, not ported to VMS.
* Disable UNIX_SOCKETS, not available on VMS yet.
* HP GSSAPI port does not have gss_nt_service_name.
gnv_link_curl.com : Update for new curl structure.
pcsi_product_gnv_curl.com : Set up to optionally do a complete build.
Removed 'next' variable in Curl_convert_form(). Rather than setting it
from 'form->next' and using that to set 'form' after the conversion
just use 'form = form->next' instead.
There was a confusion between these: this commit tries to disambiguate them.
- Scope can be computed from the address itself.
- Scope id is scope dependent: it is currently defined as 1-based local
interface index for link-local scoped addresses, and as a site index(?) for
(obsolete) site-local addresses. Linux only supports it for link-local
addresses.
The URL parser properly parses a scope id as an interface index, but stores it
in a field named "scope": confusion. The field has been renamed into "scope_id".
Curl_if2ip() used the scope id as if it were a scope. This caused
failures to bind to an interface.
Scope is now computed from the addresses and Curl_if2ip() matches them.
If redundantly specified in the URL, the scope id is checked for mismatch
with the interface index.
This commit should fix SF bug #1451.
- do not grow memory by doubling its size
- do not leak previously allocated memory if reallocation fails
- replace while-loop with a single check to make sure
that the requested amount of data fits into the buffer
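A minimal sketch of the leak-free reallocation pattern described above;
the function and variable names are illustrative:

  #include <stdlib.h>

  static int grow_buffer(char **bufp, size_t needed)
  {
    /* grow to exactly the required size instead of doubling */
    char *newbuf = realloc(*bufp, needed);
    if(!newbuf) {
      free(*bufp); /* don't leak the old block when realloc() fails */
      *bufp = NULL;
      return 1;    /* out of memory */
    }
    *bufp = newbuf;
    return 0;
  }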
Bug: http://curl.haxx.se/bug/view.cgi?id=1450
Reported-by: Warren Menzer
There is no need to set the 'state' and 'result' member variables to
SMB_REQUESTING (0) and CURLE_OK (0) after the allocation via calloc()
as calloc() initialises the contents to zero.
I don't think both of my fix ups from yesterday were needed to fix the
compilation warning, so remove the one that I think is unnecessary and
let the next Android autobuild prove/disprove it.
smtp.c:2357 warning: adding 'size_t' (aka 'unsigned long') to a string
does not append to the string
smtp.c:2375 warning: adding 'size_t' (aka 'unsigned long') to a string
does not append to the string
smtp.c:2386 warning: adding 'size_t' (aka 'unsigned long') to a string
does not append to the string
Used array index notation instead.
This fixes compilation issues with compilers that don't support 64-bit
integers through long long or __int64 which was introduced in commit
07b66cbfa4.
Previously USE_NTLM2SESSION would only be defined automatically when
USE_NTRESPONSES wasn't already defined. Separated the two definitions
so that the user can manually set USE_NTRESPONSES themselves but
USE_NTLM2SESSION is defined automatically if they don't define it.
As the OpenSSL and NSS crypto engines are preferred by the core NTLM
routines over the Windows Crypt API, don't define USE_WIN32_CRYPT
automatically when either OpenSSL or NSS are in use - doing so would
disable NTLM2Session responses in NTLM type-3 messages.
If the scratch buffer was allocated in a previous call to
Curl_smtp_escape_eob(), no new buffer was allocated in the subsequent
call and no action was taken by that call, then an attempt would be made
to free the buffer which, by now, would be part of the data->state
structure.
This bug was introduced in commit 4bd860a001.
Fixed a problem with the CRLF. detection when multiple buffers were
used to upload an email to libcurl and the line ending character(s)
appeared at the end of each buffer. This meant any lines which started
with . would not be escaped into .. and could be interpreted as the end
of transmission string instead.
This only affected libcurl based applications that used a read function
and wasn't reproducible with the curl command-line tool.
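For illustration, a naive single-buffer version of the dot stuffing
involved; the actual fix is about carrying the start-of-line state
across buffer boundaries, which this sketch glosses over:

  #include <stddef.h>

  /* escape a leading '.' on each line into ".." (dst must have room
     for up to twice len bytes) */
  static size_t dot_stuff(const char *src, size_t len, char *dst)
  {
    size_t i, o = 0;
    int at_sol = 1; /* at the start of a line? */
    for(i = 0; i < len; i++) {
      if(at_sol && src[i] == '.')
        dst[o++] = '.'; /* insert the escape dot */
      dst[o++] = src[i];
      at_sol = (src[i] == '\n');
    }
    return o;
  }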
Bug: http://curl.haxx.se/bug/view.cgi?id=1456
Assisted-by: Patrick Monnerat
parsedate.c:548: warning: 'parsed' may be used uninitialized in this
function
As curl_getdate() returns -1 when parsedate() fails we can initialise
parsed to -1.
This fixes the test 506 torture test. The internal cookie API really
ought to be improved to separate cookie parsing errors (which may be
ignored) from OOM errors (which should be fatal).
As Windows based autoconf builds don't yet define USE_WIN32_CRYPTO
either explicitly through --enable-win32-crypto or automatically on
_WIN32 based platforms, subsequent builds broke with the following
error message:
"Can't compile NTLM support without a crypto library."
Fixed an issue with the message size calculation where the raw bytes
from the buffer were interpreted as signed values rather than unsigned
values.
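A sketch of the kind of fix involved, using a hypothetical helper; the
point is casting each raw byte to unsigned char before widening, so sign
extension cannot corrupt the computed size:

  /* assemble a little-endian 16-bit value from raw buffer bytes */
  static unsigned int le16(const char *buf)
  {
    return (unsigned int)((unsigned char)buf[0] |
                          ((unsigned char)buf[1] << 8));
  }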
Reported-by: Gisle Vanem
Assisted-by: Bill Nagel
Don't use a hard coded size of 4 for the security layer and buffer size
in Curl_sasl_create_gssapi_security_message(), instead, use sizeof() as
we have done in the sasl_gssapi module.
Reduced the amount of free's required for the decoded challenge message
in Curl_sasl_create_gssapi_security_message() as a result of coding it
differently in the sasl_gssapi module.
Sending NTLM/Negotiate header again after successful authentication
breaks the connection with certain Proxies and request types (POST to MS
Forefront).
The ability to do HTTP requests over a UNIX domain socket has been
requested before, in Apr 2008 [0][1] and Sep 2010 [2]. While a
discussion happened, no patch seems to get through. I decided to give it
a go since I need to test a nginx HTTP server which listens on a UNIX
domain socket.
One patch [3] seems to make it possible to use the
CURLOPT_OPENSOCKETFUNCTION function to gain a UNIX domain socket.
Another person wrote a Go program which can do HTTP over a UNIX socket
for Docker[4] which uses a special URL scheme (though the name contains
cURL, it has no relation to the cURL library).
This patch considers support for UNIX domain sockets at the same level
as HTTP proxies / IPv6, it acts as an intermediate socket provider and
not as a separate protocol. Since this feature affects network
operations, a new feature flag was added ("unix-sockets") with a
corresponding CURL_VERSION_UNIX_SOCKETS macro.
A new CURLOPT_UNIX_SOCKET_PATH option is added and documented. This
option enables UNIX domain sockets support for all requests on the
handle (replacing IP sockets and skipping proxies).
A new configure option (--enable-unix-sockets) and CMake option
(ENABLE_UNIX_SOCKETS) can disable this optional feature. Note that I
deliberately did not mark this feature as advanced, this is a
feature/component that should easily be available.
[0]: http://curl.haxx.se/mail/lib-2008-04/0279.html
[1]: http://daniel.haxx.se/blog/2008/04/14/http-over-unix-domain-sockets/
[2]: http://sourceforge.net/p/curl/feature-requests/53/
[3]: http://curl.haxx.se/mail/lib-2008-04/0361.html
[4]: https://github.com/Soulou/curl-unix-socket
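A quick usage sketch of the new option; the socket path is just an
example:

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    /* send all requests on this handle over the UNIX domain socket */
    curl_easy_setopt(curl, CURLOPT_UNIX_SOCKET_PATH,
                     "/var/run/docker.sock");
    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/info");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }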
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
On some platforms curl would crash if no credentials were used. As such,
detection of this use case was added to prevent that from happening.
Reported-by: Gisle Vanem
This patch prepares for adding UNIX domain sockets support.
TCP_NODELAY and TCP_KEEPALIVE are specific to TCP/IP sockets, so do not
apply these to other socket types. bindlocal only works for IP sockets
(independent of TCP/UDP), so filter that out too for other types.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
smb.c:398: warning: comparison of integers of different signs:
'ssize_t' (aka 'long') and 'unsigned long'
smb.c:443: warning: comparison of integers of different signs:
'ssize_t' (aka 'long') and 'unsigned long'
smb.c:322: warning: conversion to 'short unsigned int' from 'unsigned
int' may alter its value
smb.c:323: warning: conversion to 'short unsigned int' from 'unsigned
int' may alter its value
smb.c:482: warning: conversion to 'short unsigned int' from 'int' may
alter its value
smb.c:521: warning: conversion to 'unsigned int' from 'curl_off_t' may
alter its value
smb.c:549: warning: conversion to 'unsigned int' from 'curl_off_t' may
alter its value
smb.c:550: warning: conversion to 'short unsigned int' from 'int' may
alter its value
smb.c:489: warning: declaration of 'close' shadows a global declaration
smb.c:511: warning: declaration of 'read' shadows a global declaration
smb.c:528: warning: declaration of 'write' shadows a global declaration
smb.c:212: warning: unused parameter 'done'
smb.c:380: warning: ISO C does not allow extra ';' outside of a function
smb.c:812: warning: unused parameter 'premature'
smb.c:822: warning: unused parameter 'dead'
smb.c:311: warning: conversion from 'unsigned __int64' to 'u_short',
possible loss of data
smb.c:425: warning: conversion from '__int64' to 'unsigned short',
possible loss of data
smb.c:452: warning: conversion from '__int64' to 'unsigned short',
possible loss of data
smb.c:162: error: comma at end of enumerator list
smb.c:469: warning: conversion from 'size_t' to 'unsigned short',
possible loss of data
smb.c:517: warning: conversion from 'curl_off_t' to 'unsigned int',
possible loss of data
smb.c:545: warning: conversion from 'curl_off_t' to 'unsigned int',
possible loss of data
If the scratch buffer already existed when the CRLF conversion was
performed then the buffer pointer would be checked twice for NULL. This
second check is only necessary if the call to malloc() was performed by
the first check.
Whilst I had moved the dot stuffing code from being performed before
CRLF conversion takes place to after it, in commit 4bd860a001, I had
moved it outside the 'when something read' block of code, which meant
it could perform the dot stuffing twice on a partial send if nread
happened to contain the right values. It also meant the function could
potentially read past the end of the buffer. This was highlighted by the
following warning:
warning: `nread' might be used uninitialized in this function
After commit 48d19acb7c the HTTP code would call Curl_nss_force_init()
twice when decoding a NTLM type-2 message, once directly and the other
through the call to Curl_sasl_decode_ntlm_type2_message().
This commit disables pipelining for HTTP/2 or upgraded connections. For
HTTP/2, we do not support multiplexing. In general, requests cannot be
pipelined in an upgraded connection, since it is now a different protocol.
When the connection code decides to close a socket it informs the multi
system via the Curl_multi_closed function. The multi system may, in
turn, invoke the CURLMOPT_SOCKETFUNCTION function with
CURL_POLL_REMOVE. This happens after the socket has already been
closed. Reorder the code so that CURL_POLL_REMOVE is called before the
socket is closed.
Debug output 'typo' fix.
Don't print an extra "0x" in
* Pipe broke: handle 0x0x2546d88, url = /
Add debug output.
Print the number of connections in the connection cache when
adding one, and not only when one is removed.
Fix typos in comments.
Updated the usage of some legacy APIs, that are preventing curl from
compiling for Windows Store and Windows Phone build targets.
Suggested-by: Stefan Neis
Feature: http://sourceforge.net/p/curl/feature-requests/82/
Visual Studio 2012 introduced support for Windows Store apps as well as
supporting Windows Phone 8. Introduced build targets that allow more
modern APIs to be used as certain legacy ones are not available on these
new platforms.
Rather than define the function as extern in the source files that use
it, moved the function declaration into the SASL header file just like
the Digest and NTLM clean-up functions.
Additionally, added a function description comment block.
Previously, if HTTP/2 traffic was appended to the HTTP Upgrade response
header (thus they were in the same buffer), the trailing HTTP/2 traffic
was not processed and was lost. The appended data is most likely a
SETTINGS frame. If it is lost, the nghttp2 library complains that the
server does not obey the HTTP/2 protocol and issues a GOAWAY frame, and
curl eventually drops the connection.
This commit fixes this problem so that the trailing data is now processed.
Fix detection of the AsynchDNS feature which not just depends on
pthreads support, but also on whether USE_POSIX_THREADS is set or not.
Caught by test 1014.
This patch adds a new ENABLE_THREADED_RESOLVER option (corresponding to
--enable-threaded-resolver of autotools) which also needs a check for
HAVE_PTHREAD_H.
For symmetry with autotools, CURL_USE_ARES is renamed to ENABLE_ARES
(--enable-ares). Checks that test for the availability actually use
USE_ARES instead as that is the result of whether c-ares is available or
not (in practice this does not matter as CARES is marked as a required
package, but nevertheless it is better to write the intent).
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
In preparation for moving the NTLM message code into the SASL module,
and separating the native code from the SSPI code, added functions that
simply call the functions in curl_ntlm_msg.c.
USE_NTLM would only be defined if: HTTP support was enabled, NTLM and
cryptography weren't disabled, and either a supporting cryptography
library or Windows SSPI was being compiled against.
This means it was not possible to build libcurl without HTTP support
and use NTLM for other protocols such as IMAP, POP3 and SMTP. Rather
than introduce a new SASL pre-processor definition, removed the HTTP
prerequisite just like USE_SPNEGO and USE_KRB5.
Note: Winbind support still needs to be dependent on CURL_DISABLE_HTTP
as it is only available to HTTP at present.
This bug dates back to August 2011 when I started to add support for
NTLM to SMTP.
Reworked the input token (challenge message) storage as what is passed
to the buf and desc in the response generation are typically blobs of
data rather than strings, so this is more in keeping with other areas
of the SSPI code, such as the NTLM message functions.
This temporarily breaks HTTP digest authentication in SSPI based builds,
causing CURLE_NOT_BUILT_IN to be returned. A follow up commit will
resume normal operation.
Added forward declaration of digestdata to overcome the following
compilation warning:
warning: 'struct digestdata' declared inside parameter list
Additionally made the ntlmdata forward declaration dependent on
USE_NTLM similar to how digestdata and kerberosdata are.
To provide consistent behaviour between the various HTTP authentication
functions use CURLcode based error codes for Curl_input_digest()
especially as the calling code doesn't use the specific error code just
that it failed.
These were previously hard coded, and whilst defined in security.h,
they may or may not be present in old header files given that these
defines were never used in the original code.
Not only that, but there appears to be some ambiguity between the ANSI
and UNICODE NTLM definition name in security.h.
When duplicating a handle, the data to post was duplicated using
strdup() when it could be binary and contain zeroes and it was not even
zero terminated! This caused read out of bounds crashes/segfaults.
Since the lib/strdup.c file no longer is easily shared with the curl
tool with this change, it now uses its own version instead.
Bug: http://curl.haxx.se/docs/adv_20141105.html
CVE: CVE-2014-3707
Reported-By: Symeon Paraschoudis
- Prior to this change no SSL minimum version was set by default at
runtime for PolarSSL. Therefore in most cases PolarSSL would probably
have defaulted to a minimum version of SSLv3 which is no longer secure.
The previous condition that checked if the socket was marked as readable
when also adding a writable one, was incorrect and didn't take the pause
bits properly into account.
autotools does not use features.h nor _BSD_SOURCE. As this macro
triggers warnings since glibc 2.20, remove it. It should not have
functional differences.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Typically the USE_WINDOWS_SSPI definition would not be used when the
CURL_DISABLE_CRYPTO_AUTH define is, however, it is still a valid build
configuration and, as such, the SASL Kerberos V5 (GSSAPI) authentication
data structures and functions would incorrectly be used when they
shouldn't be.
Introduced a new USE_KRB5 definition that takes into account the use of
CURL_DISABLE_CRYPTO_AUTH like USE_SPNEGO and USE_NTLM do.
Basically since servers often don't respond well to this and instead
send the full contents, libcurl would then error out with the
assumption that the server doesn't support resume. As the data is then
already transferred, this is now considered fine.
Test case 1434 added to verify this. Test case 1042 slightly modified.
Reported-by: hugo
Bug: http://curl.haxx.se/bug/view.cgi?id=1443
Return a more appropriate error, rather than CURLE_OUT_OF_MEMORY when
acquiring the credentials handle fails. This is then consistent with
the code prior to commit f7e24683c4 when log-in credentials were empty.
Fixed the ability to use the current log-in credentials with DIGEST-MD5.
I had previously disabled this functionality in commit 607883f13c as I
couldn't get this to work under Windows 8, however, from testing HTTP
Digest authentication through Windows SSPI and then further testing of
this code I have found it works in Windows 7.
Some further investigation is required to see what the differences are
between Windows 7 and 8, but for now enable this functionality as the
code will return an error when AcquireCredentialsHandle() fails.
HTTP 1.1 is clearly specified to only allow three digit response codes,
and libcurl used sscanf("%3d") for that purpose. This made libcurl
support smaller numbers but not larger. It does now, but we will not
make any specific promises nor document this further since it is going
outside of what HTTP is.
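A sketch of the relaxed parse; the exact format string is illustrative:

  #include <stdio.h>

  static int parse_status(const char *line)
  {
    int major, minor, code = -1;
    /* "%d" instead of "%3d" accepts response codes with more than
       three digits, although HTTP only defines three-digit codes */
    sscanf(line, "HTTP/%d.%d %d", &major, &minor, &code);
    return code;
  }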
Bug: http://curl.haxx.se/bug/view.cgi?id=1441
Reported-by: Balaji
Don't call CompleteAuthToken() after InitializeSecurityContext() has
returned SEC_I_CONTINUE_NEEDED as this return code only indicates the
function should be called again after receiving a response back from
the server.
This only affected the Digest and NTLM authentication code.
For consistency with other areas of the NTLM code propagate all errors
from Curl_ntlm_core_mk_nt_hash() up the call stack rather than just
CURLE_OUT_OF_MEMORY.
- Remove SSLv3 from SSL default in darwinssl, schannel, cyassl, nss,
openssl effectively making the default TLS 1.x. axTLS is not affected
since it supports only TLS, and gnutls is not affected since it already
defaults to TLS 1.x.
- Update CURLOPT_SSLVERSION doc
... for the local variable name in functions holding the return
code. Using the same name universally makes code easier to read and
follow.
Also, unify code for checking for CURLcode errors with:
if(result) or if(!result)
instead of
if(result == CURLE_OK), if(CURLE_OK == result) or if(result != CURLE_OK)
Prefer usage of Perl modules for the sha1 calculation since there
might be systems where openssl is not installed or not in the path.
If openssl is used for the sha1 calculation then don't rely on cut
since it is usually not available on systems other than Linux.
It turned out some features were not enabled in the build since for
example url.c #ifdefs on features that are defined on a per-backend
basis but vtls.h didn't include the backend headers.
CURLOPT_CERTINFO was one such feature that was accidentally disabled.
There is no need for such a function. Include_directories propagate by
themselves and having a function with one simple link statement makes
little sense.
Coverity CID 252518. This function is in general far too complicated for
its own good and really should be broken down into several smaller
functions instead - but I'm adding this protection here now since it
seems there's a risk the code flow can end up here and dereference a
NULL pointer.
Coverity CID 1241957. Removed the unused argument. As this struct and
pointer now are used only for krb5, there's no need to keep unused
function arguments around.
Option --pinnedpubkey takes a path to a public key in DER format and
only connects if it matches (currently only implemented with OpenSSL).
Provides CURLOPT_PINNEDPUBLICKEY for curl_easy_setopt().
Extract a public RSA key from a website like so:
openssl s_client -connect google.com:443 2>&1 < /dev/null | \
sed -n '/-----BEGIN/,/-----END/p' | openssl x509 -noout -pubkey \
| openssl rsa -pubin -outform DER > google.com.der
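The corresponding libcurl usage, assuming an initialized CURL handle
named curl and the DER file produced above:

  /* abort the connection unless the server's public key matches */
  curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY, "google.com.der");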
Coverity CID 1157776. Removed a superfluous if() that always evaluated
true (and an else clause that never ran), and then re-indented the
function accordingly.
Coverity CID 1215284. The server name is extracted with
Curl_copy_header_value() and passed in to this function, and
Curl_copy_header_value() can actually fail and return NULL.
For private keys, use the first match from: user-specified key file
(if provided), ~/.ssh/id_rsa, ~/.ssh/id_dsa, ./id_rsa, ./id_dsa
Note that the previous code only looked for id_dsa files. id_rsa is
now generally preferred, as it supports larger key sizes.
For public keys, use the user-specified key file, if provided.
Otherwise, try to extract the public key from the private key file.
This means that passing --pubkey is typically no longer required,
and makes the key-handling behavior more like OpenSSH.
Coverity CID 1202836. If the proxy environment variable returned an empty
string, it would be leaked. While an empty string is not really a proxy, other
logic in this function already allows a blank string to be returned so allow
that here to avoid the leak.
Coverity CID 1215287. There's a potential risk for a memory leak in
here, and moving the free call to be unconditional seems like a cheap
price to remove the risk.
Coverity CID 1215296. There's a potential risk for a memory leak in
here, and moving the free call to be unconditional seems like a cheap
price to remove the risk.
Coverity detected this. CID 1241954. When Curl_poll() returns a negative value
'mcode' was uninitialized. Pretty harmless since this is debug code only and
would at worst cause an error to _not_ be returned...
Mostly because we use C strings and they end at a binary zero so we know
we can't open a file name using an embedded binary zero.
Reported-by: research@g0blin.co.uk
The switch to using Curl_expire_latest() in commit cacdc27f52 was a
mistake and was against the advice even mentioned in that commit. The
comparison in asyn-thread.c:Curl_resolver_is_resolved() makes
Curl_expire() the suitable function to use.
Bug: http://curl.haxx.se/bug/view.cgi?id=1426
Reported-By: graysky
Previously we did not handle EOF from the underlying transport socket
and wrongly just returned the error code CURLE_AGAIN from http2_recv,
which caused a busy loop since the socket had been closed. This patch
adds code to handle the EOF situation and tells the upper layer that we
got EOF.
Removed ISC_REQ_* flags from calls to InitializeSecurityContext to fix
bug in NTLM handshake for HTTP proxy authentication.
NTLM handshake for HTTP proxy authentication failed with error
SEC_E_INVALID_TOKEN from InitializeSecurityContext for certain proxy
servers on generating the NTLM Type-3 message.
The flag ISC_REQ_CONFIDENTIALITY seems to cause the problem according
to the observations and suggestions made in a bug report for the
QT project (https://bugreports.qt-project.org/browse/QTBUG-17322).
Removing all the flags solved the problem.
Bug: http://curl.haxx.se/mail/lib-2014-08/0273.html
Reported-by: Ulrich Telle
Assisted-by: Steve Holme, Daniel Stenberg
As a sort of step forward, this script will now first try to get the
data from the HTTPS URL using curl, and only if that fails will it fall
back to the HTTP transfer using perl's native LWP functionality.
To reduce the risk of this script being tricked.
Using HTTPS to get a cert bundle introduces a chicken-and-egg problem so
we can't really ever completely disable HTTP, but chances are that most
users already have a ca cert bundle that trusts the mozilla.org site
that this script downloads from.
A future version of this script will probably switch to require a
dedicated "insecure" command line option to allow downloading over HTTP
(or unverified HTTPS).
By not detecting and rejecting domain names for partial literal IP
addresses properly when parsing received HTTP cookies, libcurl can be
fooled to both send cookies to wrong sites and to allow arbitrary sites
to set cookies for others.
CVE-2014-3613
Bug: http://curl.haxx.se/docs/adv_20140910A.html
Historically the default "unknown" value for progress.size_dl and
progress.size_ul has been zero, since these values are initialized
implicitly by the calloc that allocates the curl handle that these
variables are a part of. Users of curl that install progress
callbacks may expect these values to always be >= 0.
Currently it is possible for progress.size_dl and progress.size_ul
to be set to a value of -1, if Curl_pgrsSetDownloadSize() or
Curl_pgrsSetUploadSize() are passed a "size" of -1 (which a few
places currently do, and a following patch will add more). So
let's update Curl_pgrsSetDownloadSize() and Curl_pgrsSetUploadSize()
so they make sure that these variables always contain a value that
is >= 0.
Updates test579 and test599.
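A sketch of the clamping this describes; the exact code is assumed, not
copied from the patch:

  #include <curl/curl.h> /* for curl_off_t */

  /* clamp an "unknown" (-1) size to 0 so progress callbacks always
     observe values >= 0 */
  static curl_off_t clamp_size(curl_off_t size)
  {
    return (size >= 0) ? size : 0;
  }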
Signed-off-by: Brandon Casey <drafnel@gmail.com>
As the current element in the list is free()d by Curl_llist_remove(),
when the associated connection is pending, reworked the loop to avoid
accessing the next element through e->next afterward.
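The usual safe-iteration pattern, sketched with assumed list types and
a hypothetical is_pending() predicate:

  /* save the next pointer before the current element may be freed by
     Curl_llist_remove() */
  struct curl_llist_element *e, *next;
  for(e = pipeline->head; e; e = next) {
    next = e->next;
    if(is_pending(e))
      Curl_llist_remove(pipeline, e, NULL); /* may free e */
  }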
SecCertificateCopyPublicKey() is not available on iPhone. Use
CopyCertSubject() instead to see if the certificate returned by
SecCertificateCreateWithData() is valid.
Reported-by: Toby Peterson
... as the struct is free()d in the end anyway. It was first pointed out
to me that one of the ->msglist assignments was supposed to have been
->pending but was a copy and paste mistake, and then I realized none of
the clearing of pointers had to be there.
... instead of scanning through all handles, stash only the actual
handles that are in that state in the new ->pending list and scan that
list only. It should be mostly empty or very short. And only used for
pipelining.
This avoids a rather hefty slow-down especially notable if you add many
handles to the same multi handle. Regression introduced in commit
0f147887 (version 7.30.0).
Bug: http://curl.haxx.se/mail/lib-2014-07/0206.html
Reported-by: David Meyer
Forwards the setting as minimum ssl version (if set) to polarssl. If
the server does not support the requested version the SSL Handshake will
fail.
Bug: http://curl.haxx.se/bug/view.cgi?id=1419
SecCertificateCreateWithData() returns a non-NULL SecCertificateRef even
if the buffer holds an invalid or corrupt certificate. Call
SecCertificateCopyPublicKey() to make sure cacert is a valid
certificate.
Introducing Curl_expire_latest(). To be used when the code flow only
wants to get called at a later time that is "no later than X" so that
something can be checked (and another timeout be added).
The low-speed logic for example could easily be made to set very many
expire timeouts if it got called faster or sooner than what it had set
its own timer to, and this goes for a few other timers too that aren't
explicitly checked for timer expiration in the code.
If there's no condition in the code that says if(time-passed >= TIME),
then Curl_expire_latest() is preferred to Curl_expire().
If there exists such a condition, it is on the other hand important that
Curl_expire() is used and not the other.
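Conceptually, the "latest" variant only ever moves an existing deadline
closer, never further away; a rough sketch, not curl's exact code:

  /* keep whichever deadline is earlier; never postpone one already set */
  static long expire_latest(long current_ms, long proposed_ms)
  {
    if(current_ms <= 0) /* no deadline set yet */
      return proposed_ms;
    return (proposed_ms < current_ms) ? proposed_ms : current_ms;
  }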
Bug: http://curl.haxx.se/mail/lib-2014-06/0235.html
Reported-by: Florian Weimer
While waiting for a host resolve, check if the host cache may have
gotten the name already (by someone else), for when the same name is
resolved by several simultaneous requests.
The resolver thread occasionally gets stuck in getaddrinfo() when the
DNS or anything else is crappy or slow, so when a host is found in the
DNS cache, leave the thread alone and let it clean up the mess itself.
If the --cacert option is used with a CA certificate bundle that
contains multiple CA certificates, iterate through it, adding each
certificate as a trusted root CA.
This is usually due to failed auth. There's no point in us keeping such
a connection alive since it shouldn't be re-used anyway.
Bug: http://curl.haxx.se/bug/view.cgi?id=1381
Reported-by: Marcel Raad
This was done to make sure NTLM state that is bound to a connection
doesn't survive and gets used for the subsequent request - but
disconnects can also be done to, for example, make room in the
connection cache and thus such a connection is not strictly related to
the easy handle's current operation.
The http authentication state is still kept in the easy handle since all
http auth _except_ NTLM is connection independent and thus survives over
multiple connections.
Bug: http://curl.haxx.se/mail/lib-2014-08/0148.html
Reported-by: Paras S
Problem: if CURLOPT_FORBID_REUSE is set, requests using NTLM failed
since NTLM requires multiple requests that re-use the same connection
for the authentication to work
Solution: Ignore the forbid reuse flag in case the NTLM authentication
handshake is in progress, according to the NTLM state flag.
Fixed known bug #77.
A conditionally compiled block in connect.c references WinSock 2
symbols, but used `#ifdef HAVE_WINSOCK_H` instead of `#ifdef
HAVE_WINSOCK2_H`.
Bug: http://curl.haxx.se/mail/lib-2014-08/0155.html
The URL is not a property of the connection so it should not be freed in
the connection disconnect but in the Curl_close() that frees the easy
handle.
Bug: http://curl.haxx.se/mail/lib-2014-08/0148.html
Reported-by: Paras S
Corrected a number of the error codes that can be returned from the
Curl_sasl_create_gssapi_security_message() function when things go
wrong.
It makes more sense to return CURLE_BAD_CONTENT_ENCODING when the
inbound security challenge can't be decoded correctly or doesn't
contain the KERB_WRAP_NO_ENCRYPT flag and CURLE_OUT_OF_MEMORY when
EncryptMessage() fails. Unfortunately the previous error code of
CURLE_RECV_ERROR was a copy and paste mistake on my part and should
have been corrected in commit 4b491c675f :(
... to handle "*/[total]". Also, removed the strange hack that made
CURLOPT_FAILONERROR on a 416 response after a *RESUME_FROM return
CURLE_OK.
Reported-by: Dimitrios Siganos
Bug: http://curl.haxx.se/mail/lib-2014-06/0221.html
In preparation for the upcoming SSPI implementation of GSSAPI
authentication, moved the definition of KERB_WRAP_NO_ENCRYPT from
socks_sspi.c to curl_sspi.h allowing it to be shared amongst other
SSPI based code.
... as mxr.mozilla.org is due to be retired.
The new host doesn't support If-Modified-Since nor ETags, meaning that
the script will now defer to download and do a post-transfer checksum
check to see if a new output is to be generated. The new output format
will hold the SHA1 checksum of the source file for that purpose.
We call this version 1.22
Reported-by: Ed Morley
Bug: http://curl.haxx.se/bug/view.cgi?id=1409
Bringing back the old functionality that was mistakenly removed when the
connection cache was remade. When creating a new connection, all the
existing ones are checked and those that are known to be dead get
disconnected for real and removed from the connection cache. It keeps
the cache from holding on to very many stale connections and aids in
keeping down the number of system sockets in wait states.
Help-by: Jonatan Vela <jonatan.vela@ergon.ch>
Bug: http://curl.haxx.se/mail/lib-2014-06/0189.html
Curl_poll and Curl_wait_ms require the fix applied to Curl_socket_check
in commits b61e8b8 and c771968:
When poll or select are interrupted and coincides with the timeout
elapsing, the functions return -1 indicating an error instead of 0 for
the timeout.
Given the SSPI package info query indicates a token size of 4096 bytes,
updated to use a dynamic buffer for the response message generation
rather than a fixed buffer of 1024 bytes.
Updated to use a dynamic buffer for the SPN generation via the recently
introduced Curl_sasl_build_spn() function rather than a fixed buffer of
1024 characters, which should have been more than enough, but using the
new function removes the need for another variable sname to do the wide
character conversion in Unicode builds.
Updated Curl_sasl_create_digest_md5_message() to use a dynamic buffer
for the SPN generation via the recently introduced Curl_sasl_build_spn()
function rather than a fixed buffer of 128 characters.
Curl_sasl_create_digest_md5_message() would simply cast the SPN variable
to a TCHAR when calling InitializeSecurityContext(). This meant that,
under Unicode builds, it would not be a valid wide character string.
Updated to use the recently introduced Curl_sasl_build_spn() function
which performs the correct conversion for us.
Various parts of the libcurl source code build a SPN for inclusion in
authentication data. This information is either used by our own native
generation routines or passed to authentication functions in third-party
libraries such as SSPI. However, some of these instances use fixed
buffers rather than dynamically allocated ones and not all of those that
should, convert to wide character strings in Unicode builds.
Implemented a common function that generates a SPN and performs the
wide character conversion where necessary.
Following the recent changes and in attempt to align the SSPI based
authentication code performed the following:
* Use NULL and SECBUFFVERSION rather than hard coded constants.
* Avoid comparison of zero in if statements.
* Standardised the buf and desc setup code.
Given the SSPI package info query indicates a token size of 2888 bytes,
and as with the Winbind code and commit 9008f3d56, use a dynamic buffer
for the Type-1 and Type-3 message generation rather than a fixed buffer
of 1024 bytes.
Just as with the SSPI implementations of Digest and Negotiate added a
package info query so that libcurl can a) return a more appropriate
error code when the NTLM package is not supported and b) it can be of
use later to allocate a dynamic buffer for the Type-1 and Type-3
output tokens rather than use a fixed buffer of 1024 bytes.
OPENSSL_config() is "strongly recommended" to use but unfortunately that
function makes an exit() call on wrongly formatted config files which
makes it hard to use in some situations. OPENSSL_config() itself calls
CONF_modules_load_file() and we use that instead and we ignore its
return code!
Reported-by: Jan Ehrhardt
Bug: http://curl.haxx.se/bug/view.cgi?id=1401
If the server rejects our authentication attempt and curl hasn't
called CompleteAuthToken() then the status variable will be
SEC_I_CONTINUE_NEEDED and not SEC_E_OK.
As such the existing detection mechanism for determining whether or not
the authentication process has finished is not sufficient.
However, the WWW-Authenticate: Negotiate header line will not contain
any data when the server has exhausted the negotiation, so we can use
that coupled with the already allocated context pointer.
To prevent an infinite loop in the readwrite_data() function when a
stream is reset before any response body arrives, reset the closed flag
to false once it has been evaluated as true.
"Expect: 100-continue", which was once deprecated in HTTP/2, is now
resurrected in HTTP/2 draft 14. This change adds its support to
HTTP/2 code. This change also includes stricter header field
checking.
Previously it only returned a CURLcode for errors, which is when it
returns a different size than what was passed in to it.
The http2 code only checked the curlcode and thus failed.
Under these circumstances, the connection hasn't been fully established
and smtp_connect hasn't been called, yet smtp_done still calls the state
machine which dereferences the NULL conn pointer in struct pingpong.
This now provides a weak random function since PolarSSL doesn't have a
quick and easy way to provide a good one. It does however provide the
framework to make one so it _can_ and _should_ be done...
To force each backend implementation to really attempt to provide proper
random. If a proper random function is missing, then we can explicitly
make use of the default one we use when TLS support is missing.
This commit makes sure it works for darwinssl, gnutls, nss and openssl.
warning C4267: '=' : conversion from 'size_t' to 'long', possible loss
of data
The member connection_id of struct connectdata is a long (always a
32-bit signed integer on Visual C++) and the member next_connection_id
of struct conncache is a size_t, so one of them should be changed to
match the other.
This patch changes the size_t in struct conncache to long (the less
invasive change, as that variable is only ever used in a single line of
code).
Bug: http://curl.haxx.se/bug/view.cgi?id=1399
1 - fixes the warnings when built without http2 support
2 - adds CURLE_HTTP2, a new error code for errors detected by nghttp2,
basically when they concern http2-specific things.
CyaSSL 3.0.0 returns a unique error code if no CA cert is available,
so translate that into CURLE_SSL_CACERT_BADFILE when peer verification
is requested.
- Replace CURLAUTH_GSSNEGOTIATE with CURLAUTH_NEGOTIATE
- CURL_VERSION_GSSNEGOTIATE is deprecated; its functionality is now
served by CURL_VERSION_SSPI, CURL_VERSION_GSSAPI and
CURL_VERSION_SPNEGO.
- Remove display of feature 'GSS-Negotiate'
This reverts commit cb3e6dfa35 and instead fixes the problem
differently.
The reverted commit addressed a test failure in test 1021 by
simplifying and generalizing the code flow in a way that damaged
performance. Now we modify the flow so that Curl_proxyCONNECT() again
does as much as possible in one go, yet still passes test 1021 with and
without valgrind. The original failure was caused by mistakes in the
multi state machine.
Bug: http://curl.haxx.se/bug/view.cgi?id=1397
Reported-by: Paul Saab
It's wrong to assume that we can send a single SPNEGO packet which will
complete the authentication. It's a *negotiation* — the clue is in the
name. So make sure we handle responses from the server.
Curl_input_negotiate() will already handle bailing out if it thinks the
state is GSS_S_COMPLETE (or SEC_E_OK on Windows) and the server keeps
talking to us, so we should avoid endless loops that way.
This is the correct way to do SPNEGO. Just ask for it.
Now I correctly see it trying NTLMSSP authentication when a Kerberos ticket
isn't available. Of course, we bail out when the server responds with the
challenge packet, since we don't expect that. But I'll fix that bug next...
This is just fundamentally broken. SPNEGO (RFC4178) is a protocol which
allows client and server to negotiate the underlying mechanism which will
actually be used to authenticate. This is *often* Kerberos, and can also
be NTLM and other things. And to complicate matters, there are various
different OIDs which can be used to specify the Kerberos mechanism too.
A SPNEGO exchange will identify *which* GSSAPI mechanism is being used,
and will exchange GSSAPI tokens which are appropriate for that mechanism.
But this SPNEGO implementation just strips the incoming SPNEGO packet
and extracts the token, if any. And completely discards the information
about *which* mechanism is being used. Then we *assume* it was Kerberos,
and feed the token into gss_init_sec_context() with the default
mechanism (GSS_C_NO_OID for the mech_type argument).
Furthermore... broken as this code is, it was never even *used* for input
tokens anyway, because higher layers of curl would just bail out if the
server actually said anything *back* to us in the negotiation. We assume
that we send a single token to the server, and it accepts it. If the server
wants to continue the exchange (as is required for NTLM and for SPNEGO
to do anything useful), then curl was broken anyway.
So the only bit which actually did anything was the bit in
Curl_output_negotiate(), which always generates an *initial* SPNEGO
token saying "Hey, I support only the Kerberos mechanism and this is its
token".
You could have done that by manually just prefixing the Kerberos token
with the appropriate bytes, if you weren't going to do any proper SPNEGO
handling. There's no need for the FBOpenSSL library at all.
The sane way to do SPNEGO is just to *ask* the GSSAPI library to do
SPNEGO. That's what the 'mech_type' argument to gss_init_sec_context()
is for. And then it should all Just Work™.
That 'sane way' will be added in a subsequent patch, as will bug fixes
for our failure to handle any exchange other than a single outbound
token to the server which results in immediate success.
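A hedged sketch of that 'sane way' (the helper name and flags here are
illustrative, not the actual patch):

    #include <gssapi/gssapi.h>

    /* The SPNEGO mechanism OID, 1.3.6.1.5.5.2, in DER form. */
    static gss_OID_desc spnego_mech =
      { 6, (void *)"\x2b\x06\x01\x05\x05\x02" };

    static OM_uint32 spnego_step(OM_uint32 *minor, gss_ctx_id_t *ctx,
                                 gss_name_t target, gss_buffer_t in_token,
                                 gss_buffer_t out_token)
    {
      /* Passing the SPNEGO OID as mech_type makes the GSSAPI library
         run the negotiation itself instead of assuming raw Kerberos. */
      return gss_init_sec_context(minor, GSS_C_NO_CREDENTIAL, ctx, target,
                                  &spnego_mech, GSS_C_MUTUAL_FLAG, 0,
                                  GSS_C_NO_CHANNEL_BINDINGS, in_token,
                                  NULL, out_token, NULL, NULL);
    }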
Before GnuTLS 3.3.6, the gnutls_x509_crt_check_hostname() function
didn't actually check IP addresses in SubjectAltName, even though it was
explicitly documented as doing so. So do it ourselves...
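A sketch of doing it ourselves, as a hypothetical helper (not the exact
patch):

    #include <gnutls/gnutls.h>
    #include <gnutls/x509.h>
    #include <string.h>

    /* Return 1 if 'addr' (addrlen bytes, network byte order) matches
       an iPAddress subjectAltName of 'cert', 0 otherwise. */
    static int cert_matches_ip(gnutls_x509_crt_t cert,
                               const unsigned char *addr, size_t addrlen)
    {
      unsigned int i;
      for(i = 0; ; i++) {
        unsigned char san[64];
        size_t len = sizeof(san);
        int rc = gnutls_x509_crt_get_subject_alt_name(cert, i, san, &len,
                                                      NULL);
        if(rc == GNUTLS_E_REQUESTED_DATA_NOT_AVAILABLE)
          break; /* no more SAN entries */
        if(rc == GNUTLS_SAN_IPADDRESS && len == addrlen &&
           !memcmp(san, addr, addrlen))
          return 1;
      }
      return 0;
    }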
The old way using getpwuid could cause problems in programs that enable
reading from netrc files simultaneously in multiple threads.
Reported-by: David Woodhouse
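The thread-safe alternative is the reentrant getpwuid_r(); a minimal
sketch:

    #include <pwd.h>
    #include <string.h>
    #include <unistd.h>

    /* Look up the home directory without the static buffer that makes
       plain getpwuid() unsafe across threads. */
    static int get_homedir(char *out, size_t outlen)
    {
      char buf[1024];
      struct passwd pw, *res = NULL;
      if(getpwuid_r(geteuid(), &pw, buf, sizeof(buf), &res) || !res)
        return -1;
      strncpy(out, res->pw_dir, outlen - 1);
      out[outlen - 1] = '\0';
      return 0;
    }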
The AES-GCM ciphers were added to GnuTLS as late as ver. 3.0.1 but
the code path in which they're referenced here is only ever used for
somewhat older GnuTLS versions. This caused undeclared identifier errors
when compiling against those.
This seems to have become necessary for SRP support to work starting
with GnuTLS ver. 2.99.0. Since support for SRP was added to GnuTLS
before the function that takes this priority string, there should be no
issue with backward compatibility.
When an error has been detected, skip the final forced call to the
progress callback by making sure to pass the current return code
variable in the Curl_done() call in the CURLM_STATE_DONE state.
This avoids the "extra" callback that could occur even if you returned
error from the progress callback.
Bug: http://curl.haxx.se/mail/lib-2014-06/0062.html
Reported-by: Jonathan Cardoso Machado
The static connection counter caused a race condition. Moving the
connection id counter into conncache solves it, as well as simplifying
the related logic.
They were added because of an older code path that used allocations and
should not have been left in the code. With this change the logic goes
back to how it was.
Curl_rand() will return a dummy and repeatable random value for this
case. Makes it possible to write test cases that verify output.
Also, fake the timestamp with CURL_FORCETIME set.
Only when built debug enabled, of course.
Curl_ssl_random() was not used anymore so it has been
removed. Curl_rand() is enough.
create_digest_md5_message: generate base64 instead of hex string
curl_sasl: also fix memory leaks in some OOM situations
httpproxycode is not reset in Curl_initinfo, so a 407 is not reset even
if curl_easy_reset is called between transfers.
Bug: http://curl.haxx.se/bug/view.cgi?id=1380
The method change is forbidden by the obsolete RFC2616, but libcurl did
it anyway for compatibility reasons. The new RFC7231 allows this
behaviour so there's no need for the scary "Violate RFC 2616/10.3.x"
notice. Also update the comments accordingly.
The SASL/Digest code previously used the current time's seconds +
microseconds to add randomness, but it is much better to instead get
more data from Curl_rand().
It will also make it easier to "fake" that on demand for debug builds
in the future.
Rather than use a short 8-byte hex string, extended the cnonce to be
32 bytes long, like Windows SSPI does.
Used a combination of random data as well as the current date and
time for the generation.
"Any two of the parameters, readfds, writefds, or exceptfds, can be
given as null. At least one must be non-null, and any non-null
descriptor set must contain at least one handle to a socket."
http://msdn.microsoft.com/en-ca/library/windows/desktop/ms740141(v=vs.85).aspx
When using select(), cURL doesn't adhere to this (WinSock-specific)
rule, and can ask to monitor empty fd_sets, which leads to select()
returning WSAEINVAL (i.e. EINVAL) and connections failing in mysterious
ways as a result (at least when using the curl_multi_socket_action()
interface).
Bug: http://curl.haxx.se/mail/lib-2014-05/0278.html
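A sketch of the rule in code (WinSock only; if all three sets end up
empty a real fix also needs a fallback such as Sleep() for the timeout
duration):

    #include <winsock2.h>

    /* Pass NULL instead of an empty fd_set, per the MSDN rule above. */
    static int safe_select(fd_set *rd, fd_set *wr, fd_set *ex,
                           struct timeval *tv)
    {
      return select(0, /* first argument is ignored by WinSock */
                    (rd && rd->fd_count) ? rd : NULL,
                    (wr && wr->fd_count) ? wr : NULL,
                    (ex && ex->fd_count) ? ex : NULL,
                    tv);
    }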
OpenSSL passes the out and outlen variables uninitialized to the
select_next_proto_cb callback function. If the callback function
returns SSL_TLSEXT_ERR_OK, the caller assumes the callback filled in
values in out and outlen and processes them as such. Previously, if
there was no overlap in the protocol lists, the curl code did not fill
in any values in these variables yet still returned SSL_TLSEXT_ERR_OK,
which means we were triggering undefined behavior. valgrind warns about
this.
This patch fixes the issue by falling back to HTTP/1.1 if there is no
overlap.
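A sketch of the shape of such a callback (not the exact patch):

    #include <openssl/ssl.h>

    /* NPN selection callback that always fills in *out/*outlen:
       prefer a mutually supported protocol, else fall back to
       HTTP/1.1 instead of leaving the output uninitialized. */
    static int select_next_proto_cb(SSL *ssl, unsigned char **out,
                                    unsigned char *outlen,
                                    const unsigned char *in,
                                    unsigned int inlen, void *arg)
    {
      static const unsigned char http11[] = "\x08http/1.1";
      (void)ssl; (void)arg;
      if(SSL_select_next_proto(out, outlen, in, inlen, http11,
                               sizeof(http11) - 1) !=
         OPENSSL_NPN_NEGOTIATED) {
        *out = (unsigned char *)http11 + 1; /* skip the length byte */
        *outlen = 8;
      }
      return SSL_TLSEXT_ERR_OK;
    }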
Make all code use connclose() and connkeep() when changing the "close
state" for a connection. These two macros take a string argument with an
explanation, and debug builds of curl will include that in the debug
output. Helps tracking connection re-use/close issues.
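In spirit the macro pair looks roughly like this (a sketch; the real
definitions in libcurl's internal headers may differ in detail):

    #ifdef DEBUGBUILD
    #define connclose(x,y) do { (x)->bits.close = TRUE; \
        infof((x)->data, "Marked for [closure]: %s\n", (y)); } while(0)
    #define connkeep(x,y) do { (x)->bits.close = FALSE; \
        infof((x)->data, "Marked for [keep alive]: %s\n", (y)); } while(0)
    #else
    #define connclose(x,y) ((x)->bits.close = TRUE)
    #define connkeep(x,y) ((x)->bits.close = FALSE)
    #endif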
Commit 517b06d657 (in 7.36.0) that brought the CREDSPERREQUEST flag
only set it for HTTPS, making HTTP less good at doing connection re-use
than it should be. Now set it for HTTP as well.
Simple test case:
"curl -v -u foo:bar localhost --next -u bar:foo localhost"
Bug: http://curl.haxx.se/mail/lib-2014-05/0127.html
Reported-by: Kamil Dudka
The variable wasn't assigned at all until step3, which meant a failed
connect would never assign the variable and thus a bad value would be
returned.
Reported-by: Larry Lin
Bug: http://curl.haxx.se/mail/lib-2014-04/0203.html
In commit 0b3750b5c2 (released in 7.36.0) we fixed a timeout issue
but instead broke the timings.
To fix this, I introduce a new timestamp to use for the timeouts and
restored the previous timestamp and timestamp position so that the old
timer functionality is restored.
In addition to that, that change also broke connection timeouts for when
more than one connect was used (as it would then count the total time
from the first connect and not for the most recent one). Now
Curl_timeleft() has been modified so that it checks against different
start times depending on which timeout it checks.
Test 1303 is updated accordingly.
Bug: http://curl.haxx.se/mail/lib-2014-05/0147.html
Reported-by: Ryan Braud
Whilst the qop directive isn't required to be present in a client's
response, as servers should assume a qop of "auth" if it isn't
specified, some may return authentication failure if it is missing.
To cater for the automatic generation of the new Visual Studio project
files, moved the lib file list into a separate variable so that lib and
lib/vtls can be referenced independently.
-p takes a list of Mozilla trust purposes and levels for certificates
to include in the output. It takes the form of a comma separated list
of purposes, a colon, and a comma separated list of levels.
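For example, an invocation could look like this (the purpose and level
names shown are an illustration of the format, not a recommendation):

    ./mk-ca-bundle.pl -p SERVER_AUTH,EMAIL_PROTECTION:TRUSTED_DELEGATOR ca-bundle.crt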
Now that nghttp2_submit_request returns the assigned stream ID, we
don't have to check the stream ID using before_stream_send_callback.
The adjust_priority_callback was removed.
Depending on the compiler, line 3505 could generate one of the
following warnings or errors:
* warning: ISO C90 forbids mixed declarations and code
* A declaration cannot appear after an executable statement in a block
* error C2275: 'size_t' : illegal use of this type as an expression
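A hypothetical illustration of the class of fix:

    #include <stdio.h>
    #include <string.h>

    /* C90 requires all declarations in a block to precede the first
       executable statement. */
    static size_t fixed_version(const char *buf)
    {
      size_t len;        /* declared up front instead of mid-block */
      puts(buf);         /* some earlier executable statement */
      len = strlen(buf); /* plain assignment, not a late declaration */
      return len;
    }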
As there's a default connection timeout and this wrongly used the
connection timeout during a transfer after the connection was
completed, this function would erroneously trigger timeouts during
transfers.
Bug: http://curl.haxx.se/bug/view.cgi?id=1352
Figured-out-by: Radu Simionescu
If the precision is indeed shorter than the string, don't strlen() to
find the end because that's not how the precision operator works.
I also added a unit test for curl_msnprintf to make sure this works
and that the fix doesn't break a few other basic use cases. I found a
POSIX compliance problem that I marked TODO in the unit test, and I
figure we need to add more tests in the future.
Reported-by: Török Edwin
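A small example of why strlen() is wrong there (the source need not be
NUL-terminated within the precision):

    #include <curl/mprintf.h>

    static void precision_demo(void)
    {
      char out[16];
      const char src[5] = { 'h', 'e', 'l', 'l', 'o' }; /* no terminator */
      /* "%.5s" may read at most five bytes; using strlen() on 'src'
         here would run past the end of the array. */
      curl_msnprintf(out, sizeof(out), "%.5s", src);   /* out: "hello" */
    }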
Commit 07b66cbfa4 unfortunately broke native NTLM message support with
compilers, such as VC6, VC7 and others, that don't support long long
type declarations. This commit fixes VC6 and VC7, as they support the
__int64 extension; however, we should consider an additional fix for
other compilers that don't support this.
set.infilesize in this case was modified in several places, which
could lead to repeated requests using the same handle getting
unintended/wrong consequences based on what the previous request did!
This makes the findprotocol() function work as intended so that libcurl
can properly be restricted to not support HTTP while still supporting
HTTPS - since the HTTPS handler previously set both the HTTP and HTTPS
bits in the protocol field.
This fixes --proto and --proto-redir for most SSL protocols.
This is done by adding a few new convenience defines that groups HTTP
and HTTPS, FTP and FTPS etc that should then be used when the code wants
to check for both protocols at once. PROTO_FAMILY_[protocol] style.
Bug: https://github.com/bagder/curl/pull/97
Reported-by: drizzt
As this makes curl_global_init_mem() behave the same way as
curl_global_init() already does in that aspect - the same number of
curl_global_cleanup() calls is then required to again decrease the
counter and then eventually do the cleanup.
Bug: http://curl.haxx.se/bug/view.cgi?id=1362
Reported-by: Tristan
ufds might not be allocated in case nfds overflows to zero while
extra_nfds is still non-zero. ufds is then accessed within the
extra_nfds-based for loop.
Should a command return untagged responses that contained no data then
the imap_matchresp() function would not detect them as valid responses,
as it wasn't taking the CRLF characters into account at the end of each
line.
Given that we presently support "auth" and not "auth-int" or "auth-conf"
for native challenge-response messages, added client side validation of
the quality-of-protection options from the server's challenge message.
To avoid urldata.h being included from the header file, or requiring
that the source file use the correct include order, as highlighted by
one of the auto builds recently.
When CURL_DISABLE_CRYPTO_AUTH is defined the DIGEST-MD5 code should not
be included, regardless of whether USE_WINDOWS_SSPI is defined or not.
This is indicated by the definition of USE_HTTP_NEGOTIATE and USE_NTLM
in curl_setup.h.
Updated the docs to clarify and the code accordingly, with test 1528 to
verify:
When CURLHEADER_SEPARATE is set and libcurl is asked to send a request
to a proxy but it isn't CONNECT, then _both_ header lists
(CURLOPT_HTTPHEADER and CURLOPT_PROXYHEADER) will be used since the
single request is then made for both the proxy and the server.
When doing passive FTP, the multi state function needs to extract and
wait on the happy eyeballs sockets to check for completion!
Bug: http://curl.haxx.se/mail/lib-2014-02/0135.html (ruined)
Reported-by: Alan
In addition to commit fe260b75e7, fixed the same issue for RFC 821
based SMTP servers and allow the credentials to be given to curl even
though they are not used with the server.
Specifying user credentials when the SMTP server doesn't support
authentication would cause curl to display "No known authentication
mechanisms supported!" and return CURLE_LOGIN_DENIED.
Reported-by: Tom Sparrow
Bug: http://curl.haxx.se/mail/lib-2014-03/0173.html
There are server certificates used with an IP address in the CN field,
but we MUST not allow wildcard certs for host names given as IP
addresses only. Therefore we must make Curl_cert_hostcheck() fail such
attempts.
Bug: http://curl.haxx.se/docs/adv_20140326B.html
Reported-by: Richard Moore
In addition to FTP, other connection based protocols such as IMAP, POP3,
SMTP, SCP, SFTP and LDAP require a new connection when different log-in
credentials are specified. Fixed the detection logic to include these
other protocols.
Bug: http://curl.haxx.se/docs/adv_20140326A.html
The debug messages printed inside PolarSSL always seem to end with a
newline, so 'infof()' should not add another one. Besides, the trace
'line' should be 'const'.
Because the socket is non-blocking, PolarSSL needs the getsock call to
determine the action to perform in a multi environment.
In some cases we may not yet have received all the data needed to
complete the handshake. ssl_handshake() then returns
POLARSSL_ERR_NET_WANT_READ and the state is updated, but because
getsock lacked the proper #define macro, the library never asked to
select the socket for input, so the socket would never be awakened
when the last data arrived. This led to a timeout.
The API has changed since version 1.3. A compatibility header has been
created to ensure forward compatibility for code using the old API:
* the x509 certificate structure has been renamed from x509_cert to
x509_crt
* a new dedicated setter, ssl_set_own_cert_rsa, is used for RSA
certificates; ssl_set_own_cert is for generic keys
* ssl_default_ciphersuites has been replaced by the function
ssl_list_ciphersuites()
This patch drops the use of the compatibility header.
Rename x509_cert to x509_crt and add "compat-1.2.h"
include.
This would still need some more thorough conversion in order to drop
the "compat-1.2.h" include.
Port number zero is perfectly valid to connect to. I moved to storing
the remote port number in an int so that -1 means undefined and 0-65535
can be used for legitimate port numbers.
Setting the TIMER_STARTSINGLE timestamp first in CONNECT has the
drawback that for actions that go back to the CONNECT state, the
timestamp is reset, and for the multi_socket API there's then no
corresponding Curl_expire() call, so the timeout logic goes wrong!
Reported-by: Brad Spencer
Bug: http://curl.haxx.se/mail/lib-2014-02/0036.html
Remove the slash/backslash problem; now only slashes are used, as
wmake automatically translates slashes/backslashes to the proper form,
or the tools are not sensitive to the difference.
Enable spaces in paths.
Use the internal rm command for all host platforms.
Add an error message if an old Open Watcom version is used. Some old
versions exhibit build problems with the latest curl version. Now only
versions 1.8, 1.9 and 2.0 beta are supported.
For HTTP/2, we may read up everything, including the response body
along with the header fields, in Curl_http_readwrite_headers. If no
content-length is provided, curl waits for the connection close, which
we emulate using conn->proto.httpc.closed = TRUE. The thing is, if we
read everything, then http2_recv won't be called and we cannot signal
that the HTTP/2 stream has closed. As a workaround, we return nonzero
from data_pending to get http2_recv called.
Original commit message was:
Don't omit CN verification in SChannel when an IP address is used.
Side-effect of this change:
SChannel and CryptoAPI do not support the iPAddress subjectAltName
according to RFC 2818. If present, SChannel will first compare the
IP address to the dNSName subjectAltNames and then fallback to the
most specific Common Name in the Subject field of the certificate.
This means that after this change curl will not connect to SSL/TLS
hosts by IP address unless the IP address is specified in the SAN or
CN of the server certificate, or the verifyhost option is disabled.
This patch enables HTTP POST/PUT in HTTP/2.
We disabled the Expect header field and chunked transfer encoding
since HTTP/2 forbids them.
In HTTP/1, curl sends small upload data together with the request
headers, but HTTP/2 requires that upload data be sent separately in
DATA frames. So we added some conditionals to achieve this.
Perform more work in between sleeps. This works around the fact that
axTLS does not expose any knowledge about when work needs to be
performed. Depending on the connection and how often perform is being
called, this can save ~25% of the time spent on SSL handshakes
(measured on a 20 ms latency connection calling perform roughly every
10 ms).
When allowing NTLM, the re-use connection logic was too focused on
finding an existing NTLM connection to use and didn't properly allow
re-use of other ones. This made the logic not re-use perfectly re-usable
connections.
Added test case 1418 and 1419 to verify.
Regression brought in 8ae35102c (curl 7.35.0)
Reported-by: Jeff King
Bug: http://thread.gmane.org/gmane.comp.version-control.git/242213
For a function that returns a decoded version of a string, it seems
really strange to allow a NULL pointer to get passed in which then
prevents the decoded data from being returned!
This functionality was not documented anywhere either.
If anyone would use it that way, that memory would've been leaked.
Bug: https://github.com/bagder/curl/pull/90
Reported-by: Arvid Norberg
Make sure that the special NTLM magic we do is for HTTP+NTLM only since
that's where the authenticated connection is a weird non-standard
paradigm.
Regression brought in 8ae35102c (curl 7.35.0)
Bug: http://curl.haxx.se/mail/lib-2014-02/0100.html
Reported-by: Dan Fandrich
The code didn't properly check the return codes to detect overflows,
so it could trigger incorrectly, like on mingw32.
Regression introduced in 345891edba (curl 7.35.0)
Bug: http://curl.haxx.se/mail/lib-2014-02/0097.html
Reported-by: LM
When using --http2 one can now selectively disable NPN or ALPN with
--no-alpn and --no-npn. For now this is honored with NSS only.
TODO: honor this option with GnuTLS and OpenSSL
SSL_ENABLE_ALPN can be used for preprocessor ALPN feature detection,
but not SSL_NEXT_PROTO_SELECTED, since it is an enum value and not a
preprocessor macro.
Not comma, which is an inconsistency and a mistake probably inherited
from the examples section of RFC1867.
This bug has been present since the day curl started to support
multipart formposts, back in the 90s.
Reported-by: Rob Davies
Bug: http://curl.haxx.se/bug/view.cgi?id=1333
When using the multi socket interface, libcurl calls the
curl_multi_timer_callback asking to be woken up after
CURL_TIMEOUT_EXPECT_100 milliseconds.
After the timeout has expired, calling curl_multi_socket_action with
CURL_SOCKET_TIMEOUT as sockfd leads libcurl to check expired
timeouts. When handling the 100-continue one, the following check in
Curl_readwrite() fails if exactly CURL_TIMEOUT_EXPECT_100 milliseconds
passed since the timeout has been set!
It seems logical to consider that having waited for exactly
CURL_TIMEOUT_EXPECT_100 ms is enough.
Bug: http://curl.haxx.se/bug/view.cgi?id=1334
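In spirit, the fix relaxes the comparison from strictly-greater to
greater-or-equal; a simplified sketch:

    /* 'elapsed_ms' is the time since the Expect: 100-continue wait
       began; waiting exactly 'timeout_ms' now counts as long enough. */
    static int expect100_wait_over(long elapsed_ms, long timeout_ms)
    {
      return elapsed_ms >= timeout_ms; /* was: '>' */
    }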
A server might respond with a content-encoding header and a response
that was encoded accordingly in HTTP-draft-09/2.0 mode, even if the
client did not send an accept-encoding header earlier. The server might
not send a content-encoding header if the identity encoding was used to
encode the response.
See:
http://tools.ietf.org/html/draft-ietf-httpbis-http2-09#section-9.3
This patch chooses a different approach to integrate HTTP2 into the
HTTP curl stack. The idea is that we insert an HTTP2 layer between the
HTTP code and the socket (TLS) layer. When HTTP2 is initialized
(either via NPN or Upgrade), we replace the Curl_recv/Curl_send
callbacks with HTTP2's, but keep the original callbacks in the
http_conn struct. When sending data serialized by nghttp2, we use the
original Curl_send callback. Likewise, when reading data from the
network, we use the original Curl_recv callback. In this way we can
treat both TLS and non-TLS connections uniformly.
With this patch, one can transfer contents from https://twitter.com and
from nghttp2 test server in plain HTTP as well.
The code still has rough edges. The notable one is that I could not
figure out how to call nghttp2_session_send() when the underlying
socket is writable.
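A self-contained model of the callback swap, with simplified types
(these are not libcurl's real signatures):

    typedef long (*xferfunc)(void *conn, int sockindex, char *buf,
                             long len);

    struct model_conn {
      xferfunc recv, send;                       /* active callbacks */
      xferfunc recv_underlying, send_underlying; /* saved originals */
    };

    /* On HTTP2 initialization: remember the transport-level
       callbacks, then interpose the HTTP2 ones. */
    static void switch_to_http2(struct model_conn *c,
                                xferfunc h2recv, xferfunc h2send)
    {
      c->recv_underlying = c->recv;
      c->send_underlying = c->send;
      c->recv = h2recv; /* parses incoming frames via nghttp2 */
      c->send = h2send; /* serializes outgoing frames via nghttp2 */
    }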
Check the NPN result before preparing an HTTP request and switch into
HTTP/2.0 mode if necessary. This is a work in progress, the actual code
to prepare and send the request using nghttp2 is still missing from
Curl_http2_send_request().
The number of elements in the 'nghttp2_session_callbacks' structure is
reduced by 2 in version 0.3.0 (I'm not sure when the change happened,
but checking for ver 0.3.0 works for me).
Something is wrong with 'userp' for the HTTP2 recv_callback(). The
session is created using bogus user data: '&conn' and not 'conn'.
I noticed this since the socket value in Curl_read_plain() was set to
an impossibly high value.
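The fix amounts to a single argument; a hedged sketch assuming the
session is created with nghttp2_session_client_new():

    #include <nghttp2/nghttp2.h>

    /* Pass the connection pointer itself as the session's user data,
       not the address of the local variable holding it. */
    static int create_h2_session(nghttp2_session **h2,
                                 const nghttp2_session_callbacks *cbs,
                                 void *conn)
    {
      return nghttp2_session_client_new(h2, cbs, conn /* was: &conn */);
    }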
hostcache_timestamp_remove() should remove old *unused* entries from the
host cache, but it never checked whether the entry was actually in
use. This complements commit 030a2b8cb.
Bug: http://curl.haxx.se/bug/view.cgi?id=1327
tftp_done() can get called with its TFTP state pointer still being NULL
on an early time-out, which caused a segfault when dereferenced.
Reported-by: Glenn Sheridan
Bug: http://curl.haxx.se/mail/lib-2014-01/0246.html
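The guard is essentially a one-liner; a sketch with the state type
simplified (the real code uses tftp.c's own state struct):

    #include <curl/curl.h>

    /* An early time-out can reach tftp_done() before the TFTP state
       was allocated, so bail out before dereferencing it. */
    static CURLcode tftp_done_guarded(void *state, CURLcode status)
    {
      if(!state)
        return status; /* nothing to clean up */
      /* ... normal completion handling would follow here ... */
      return status;
    }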