The static connection counter caused a race condition. Moving the
connection id counter into conncache solves it, as well as simplifying
the related logic.
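A minimal sketch of the idea, with assumed member and function names
(not curl's actual ones): the counter lives in the cache, so every
added connection gets its id from the cache it belongs to and no shared
static state remains.

    struct conncache {
      /* ... existing members ... */
      long next_connection_id;  /* per-cache counter, no static state */
    };

    static void conncache_add_conn(struct conncache *cache,
                                   struct connectdata *conn)
    {
      conn->connection_id = cache->next_connection_id++;
      /* ... insert conn into the cache ... */
    }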
They were added because of an older code path that used allocations and
should not have been left in the code. With this change the logic goes
back to how it was.
Curl_rand() will return a dummy and repeatable random value for this
case. Makes it possible to write test cases that verify output.
Also, fake timestamp with CURL_FORCETIME set.
Only when built with debug enabled, of course.
Curl_ssl_random() was not used anymore so it has been
removed. Curl_rand() is enough.
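A rough sketch of what such debug-only overrides can look like.
CURL_FORCETIME is named above; everything else here (names, values) is
an assumption for illustration, not curl's actual internals:

    #include <stdlib.h>
    #include <time.h>

    /* Sketch only; debug builds only. */
    static unsigned int debug_rand(void)
    {
      if(getenv("CURL_FORCERAND"))    /* hypothetical variable name */
        return 0xdeadbeefU;           /* dummy but repeatable value */
      return (unsigned int)rand();    /* stand-in for real entropy */
    }

    static time_t debug_time(void)
    {
      if(getenv("CURL_FORCETIME"))    /* as mentioned above */
        return (time_t)0;             /* frozen, repeatable timestamp */
      return time(NULL);
    }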
create_digest_md5_message: generate base64 instead of hex string
curl_sasl: also fix memory leaks in some OOM situations
httpproxycode is not reset in Curl_initinfo, so a 407 is not reset even
if curl_easy_reset is called between transfers.
Bug: http://curl.haxx.se/bug/view.cgi?id=1380
The method change is forbidden by the obsolete RFC2616, but libcurl did
it anyway for compatibility reasons. The new RFC7231 allows this
behaviour so there's no need for the scary "Violate RFC 2616/10.3.x"
notice. Also update the comments accordingly.
The SASL/Digest code previously used the current time's seconds +
microseconds to add randomness, but it is much better to instead get
more data from Curl_rand().
It will also make it easier to "fake" that on demand for debug builds
in the future.
Rather than use a short 8-byte hex string, extended the cnonce to be
32 bytes long, like Windows SSPI does.
Used a combination of random data as well as the current date and
time for the generation.
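A sketch of that kind of generation (format and names are illustrative,
not the exact curl code); the caller would pass a 33-byte buffer:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Illustrative only: 32 hex characters built from three random
       words plus date fields, in the spirit described above. */
    static void make_cnonce(char *buf, size_t len)
    {
      time_t now = time(NULL);
      struct tm *tm = localtime(&now);
      snprintf(buf, len, "%08x%08x%08x%04x%02x%02x",
               (unsigned int)rand(), (unsigned int)rand(),
               (unsigned int)rand(),
               (tm->tm_year + 1900) & 0xffff, tm->tm_mon + 1,
               tm->tm_mday);
    }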
"Any two of the parameters, readfds, writefds, or exceptfds, can be
given as null. At least one must be non-null, and any non-null
descriptor set must contain at least one handle to a socket."
http://msdn.microsoft.com/en-ca/library/windows/desktop/ms740141(v=vs.85).aspx
When using select(), cURL doesn't adhere to this (WinSock-specific)
rule, and can ask to monitor empty fd_sets, which leads to select()
returning WSAEINVAL (i.e. EINVAL) and connections failing in mysterious
ways as a result (at least when using the curl_multi_socket_action()
interface).
Bug: http://curl.haxx.se/mail/lib-2014-05/0278.html
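A sketch of the rule being applied (simplified, not the actual curl
code): only hand select() a set that actually contains a descriptor,
and pass NULL otherwise.

    #include <sys/select.h>   /* winsock2.h on Windows */

    /* Sketch only. nr/nw = number of sockets added to each set. */
    static int safe_select(int maxfd, fd_set *rd, int nr,
                           fd_set *wr, int nw, struct timeval *tv)
    {
      return select(maxfd + 1,
                    nr ? rd : NULL,   /* NULL, never an empty fd_set */
                    nw ? wr : NULL,
                    NULL, tv);
    }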
OpenSSL passes the out and outlen variables uninitialized to the
select_next_proto_cb callback function. If the callback function
returns SSL_TLSEXT_ERR_OK, the caller assumes the callback filled in
values in out and outlen and processes them as such. Previously, if
there was no overlap in the protocol lists, the curl code did not fill
in any values in these variables yet still returned SSL_TLSEXT_ERR_OK,
which means we were triggering undefined behavior. valgrind warns
about this.
This patch fixes the issue by falling back to HTTP/1.1 if there is no
overlap.
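A sketch of the fixed callback (the protocol list and details are
simplified, not quoted from curl's actual code):

    #include <string.h>
    #include <openssl/ssl.h>

    static int select_next_proto_cb(SSL *ssl, unsigned char **out,
                                    unsigned char *outlen,
                                    const unsigned char *in,
                                    unsigned int inlen, void *arg)
    {
      /* example client list in NPN wire format */
      static const unsigned char protos[] = "\x08http/1.1";
      (void)ssl; (void)arg;
      if(SSL_select_next_proto(out, outlen, in, inlen, protos,
                               sizeof(protos) - 1) !=
         OPENSSL_NPN_NEGOTIATED) {
        /* no overlap: still fill in out/outlen so the caller never
           reads uninitialized memory, and fall back to HTTP/1.1 */
        *out = (unsigned char *)"http/1.1";
        *outlen = 8;
      }
      return SSL_TLSEXT_ERR_OK;
    }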
Make all code use connclose() and connkeep() when changing the "close
state" for a connection. These two macros take a string argument with an
explanation, and debug builds of curl will include that in the debug
output. Helps tracking connection re-use/close issues.
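The macros are roughly of this shape (a sketch close to, but not
guaranteed identical to, the actual definitions); debug builds add the
infof() trace with the reason string:

    #ifdef DEBUGBUILD
    #define connclose(x,y) do {                               \
        (x)->bits.close = TRUE;                               \
        infof((x)->data, "Marked for [closure]: %s\n", y);    \
      } while(0)
    #define connkeep(x,y) do {                                \
        (x)->bits.close = FALSE;                              \
        infof((x)->data, "Marked for [keep alive]: %s\n", y); \
      } while(0)
    #else
    #define connclose(x,y) ((x)->bits.close = TRUE)
    #define connkeep(x,y)  ((x)->bits.close = FALSE)
    #endif

    /* usage: connclose(conn, "HTTP/1.0 without Keep-Alive"); */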
Commit 517b06d657 (in 7.36.0) that brought the CREDSPERREQUEST flag
only set it for HTTPS, making HTTP less good at doing connection re-use
than it should be. Now set it for HTTP as well.
Simple test case
"curl -v -u foo:bar localhost --next -u bar:foo localhos"
Bug: http://curl.haxx.se/mail/lib-2014-05/0127.html
Reported-by: Kamil Dudka
The variable wasn't assigned at all until step 3, which would lead to
a failed connect never assigning the variable and thus returning a bad
value.
Reported-by: Larry Lin
Bug: http://curl.haxx.se/mail/lib-2014-04/0203.html
In commit 0b3750b5c2 (released in 7.36.0) we fixed a timeout issue
but instead broke the timings.
To fix this, I introduce a new timestamp to use for the timeouts and
restored the previous timestamp and timestamp position so that the old
timer functionality is restored.
In addition to that, that change also broke connection timeouts for when
more than one connect was used (as it would then count the total time
from the first connect and not for the most recent one). Now
Curl_timeleft() has been modified so that it checks against different
start times depending on which timeout it checks.
Test 1303 is updated accordingly.
Bug: http://curl.haxx.se/mail/lib-2014-05/0147.html
Reported-by: Ryan Braud
Whilst the qop directive isn't required to be present in a client's
response, as servers should assume a qop of "auth" if it isn't
specified, some may return authentication failure if it is missing.
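For illustration, the client's Digest response then always carries the
directive explicitly (all values here are made up):

    username="user", realm="example.com", nonce="OA6MG9tEQGm2hh",
    cnonce="OA6MHXh6VqTrRk", nc=00000001, qop=auth,
    digest-uri="smtp/example.com",
    response=6629fae49393a05397450978507c4ef1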
To cater for the automatic generation of the new Visual Studio project
files, moved the lib file list into a separate variable so that lib
and lib/vtls can be referenced independently.
-p takes a list of Mozilla trust purposes and levels for certificates to
include in output. Takes the form of a comma separated list of
purposes, a colon, and a comma separated list of levels.
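Assuming this refers to the mk-ca-bundle tool (the tool isn't named
above), an invocation in the described purposes:levels form could look
like:

    mk-ca-bundle.pl -p SERVER_AUTH,EMAIL_PROTECTION:TRUSTED_DELEGATOR ca-bundle.crt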
Now that nghttp2_submit_request() returns the assigned stream ID, we
don't have to check the stream ID using before_stream_send_callback.
The adjust_priority_callback was removed.
Depending on the compiler, line 3505 could generate the following
warning or error:
* warning: ISO C90 forbids mixed declarations and code
* A declaration cannot appear after an executable statement in a block
* error C2275: 'size_t' : illegal use of this type as an expression
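The generic shape of such a fix, for illustration (placeholder helper
names):

    #include <string.h>

    static void do_work(const char *p) { (void)p; }  /* placeholder */

    void fixed(const char *p)
    {
      size_t len;      /* declaration moved up to the top of the block */
      do_work(p);
      len = strlen(p); /* was "size_t len = strlen(p);" after a
                          statement, which C90 compilers reject */
      (void)len;
    }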
Since there is a default connection timeout, and this code wrongly
used the connection timeout during a transfer even after the
connection had completed, this function would erroneously trigger
timeouts during transfers.
Bug: http://curl.haxx.se/bug/view.cgi?id=1352
Figured-out-by: Radu Simionescu
If the precision is indeed shorter than the string, don't strlen() to
find the end because that's not how the precision operator works.
I also added a unit test for curl_msnprintf to make sure this works
and that the fix doesn't break a few other basic use cases. I found a
POSIX compliance problem that I marked TODO in the unit test, and I
figure we need to add more tests in the future.
Reported-by: Török Edwin
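For example, with the fix this truncates without ever reading past the
precision:

    #include <stdio.h>
    #include <curl/mprintf.h>

    int main(void)
    {
      char buf[16];
      /* precision 3: read/copy at most 3 bytes of the input, so no
         strlen() over the whole (possibly unterminated) string */
      curl_msnprintf(buf, sizeof(buf), "%.3s", "foobar");
      printf("%s\n", buf);   /* prints "foo" */
      return 0;
    }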
Commit 07b66cbfa4 unfortunately broke native NTLM message support in
compilers, such as VC6, VC7 and others, that don't support long long
type declarations. This commit fixes VC6 and VC7 as they support the
__int64 extension, however, we should consider an additional fix for
other compilers that don't support this.
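A sketch of the usual compatibility pattern (the exact version guard
and the typedef name are assumptions, not the actual patch):

    /* Sketch: pick a 64-bit type that old MSVC versions understand. */
    #if defined(_MSC_VER) && (_MSC_VER < 1310)  /* VC6/VC7: no long long */
    typedef unsigned __int64 ntlm_uint64_t;     /* hypothetical name */
    #else
    typedef unsigned long long ntlm_uint64_t;
    #endif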
set.infilesize in this case was modified in several places, which
could lead to repeated requests using the same handle getting
unintended/wrong consequences based on what the previous request did!
This makes the findprotocol() function work as intended so that libcurl
can properly be restricted to not support HTTP while still supporting
HTTPS - since the HTTPS handler previously set both the HTTP and HTTPS
bits in the protocol field.
This fixes --proto and --proto-redir for most SSL protocols.
This is done by adding a few new convenience defines that groups HTTP
and HTTPS, FTP and FTPS etc that should then be used when the code wants
to check for both protocols at once. PROTO_FAMILY_[protocol] style.
Bug: https://github.com/bagder/curl/pull/97
Reported-by: drizzt
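The new defines are of this shape:

    #define PROTO_FAMILY_HTTP (CURLPROTO_HTTP|CURLPROTO_HTTPS)
    #define PROTO_FAMILY_FTP  (CURLPROTO_FTP|CURLPROTO_FTPS)

    /* usage: if(conn->handler->protocol & PROTO_FAMILY_HTTP) ... */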
This makes curl_global_init_mem() behave the same way as
curl_global_init() already does in that aspect - the same number of
curl_global_cleanup() calls is then required to again decrease the
counter and eventually do the cleanup.
Bug: http://curl.haxx.se/bug/view.cgi?id=1362
Reported-by: Tristan
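Illustrating the now-consistent counting:

    #include <stdlib.h>
    #include <string.h>
    #include <curl/curl.h>

    int main(void)
    {
      /* both init calls bump the same reference counter now */
      curl_global_init_mem(CURL_GLOBAL_DEFAULT,
                           malloc, free, realloc, strdup, calloc);
      curl_global_init(CURL_GLOBAL_DEFAULT);

      /* ... transfers ... */

      /* so two cleanup calls are needed before anything is torn down */
      curl_global_cleanup();
      curl_global_cleanup();
      return 0;
    }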
ufds might not be allocated in case nfds overflows to zero while
extra_nfds is still non-zero. ufds is then accessed within the
extra_nfds-based for loop.
Should a command return untagged responses that contained no data then
the imap_matchresp() function would not detect them as valid responses,
as it wasn't taking the CRLF characters into account at the end of each
line.
Given that we presently support "auth" and not "auth-int" or "auth-conf"
for native challenge-response messages, added client side validation of
the quality-of-protection options from the server's challenge message.
To avoid urldata.h being included from the header file, and to ensure
that the source file has the correct include order, as highlighted by
one of the auto builds recently.
When CURL_DISABLE_CRYPTO_AUTH is defined the DIGEST-MD5 code should
not be included, regardless of whether USE_WINDOWS_SSPI is defined or
not.
This is indicated by the definition of USE_HTTP_NEGOTIATE and USE_NTLM
in curl_setup.h.
Updated the docs to clarify and the code accordingly, with test 1528 to
verify:
When CURLHEADER_SEPARATE is set and libcurl is asked to send a request
to a proxy but it isn't CONNECT, then _both_ header lists
(CURLOPT_HTTPHEADER and CURLOPT_PROXYHEADER) will be used since the
single request is then made for both the proxy and the server.
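In code, the separated mode then looks like this:

    #include <curl/curl.h>

    static void setup(CURL *curl)
    {
      struct curl_slist *headers =
        curl_slist_append(NULL, "X-Server: yes");
      struct curl_slist *proxyheaders =
        curl_slist_append(NULL, "X-Proxy: yes");

      curl_easy_setopt(curl, CURLOPT_HEADEROPT,
                       (long)CURLHEADER_SEPARATE);
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
      curl_easy_setopt(curl, CURLOPT_PROXYHEADER, proxyheaders);
      /* a non-CONNECT request through a proxy now sends both lists */
    }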
When doing passive FTP, the multi state function needs to extract and
use the happy eyeballs sockets to wait for, in order to check for
completion!
Bug: http://curl.haxx.se/mail/lib-2014-02/0135.html (ruined)
Reported-by: Alan
In addition to commit fe260b75e7, this fixes the same issue for
RFC 821 based SMTP servers and allows the credentials to be given to
curl even though they are not used with the server.
Specifying user credentials when the SMTP server doesn't support
authentication would cause curl to display "No known authentication
mechanisms supported!" and return CURLE_LOGIN_DENIED.
Reported-by: Tom Sparrow
Bug: http://curl.haxx.se/mail/lib-2014-03/0173.html
There are server certificates used with IP address in the CN field,
but we MUST not allow wildcard certs for hostnames given as IP
addresses only. Therefore we must make Curl_cert_hostcheck() fail such
attempts.
Bug: http://curl.haxx.se/docs/adv_20140326B.html
Reported-by: Richard Moore
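A sketch of the rule (not the actual Curl_cert_hostcheck() code): when
the name being checked is an IP address, only an exact match may
succeed, never a wildcard match.

    #include <string.h>
    #include <strings.h>
    #include <arpa/inet.h>

    /* Sketch only; returns 1 on match. */
    static int host_matches(const char *pattern, const char *hostname)
    {
      struct in_addr ia;
      if(inet_pton(AF_INET, hostname, &ia) == 1)
        /* IP address given: forbid wildcard matching entirely */
        return strcasecmp(pattern, hostname) == 0;
      /* ... otherwise fall through to normal, wildcard-capable
         hostname matching ... */
      return 0;
    }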
In addition to FTP, other connection based protocols such as IMAP, POP3,
SMTP, SCP, SFTP and LDAP require a new connection when different log-in
credentials are specified. Fixed the detection logic to include these
other protocols.
Bug: http://curl.haxx.se/docs/adv_20140326A.html
The debug messages printed inside PolarSSL always seem to end with a
newline, so 'infof()' should not add one. Besides, the trace 'line'
should be 'const'.
Because the socket is non-blocking, PolarSSL needs a getsock callback
so that the multi machinery knows which action to wait for.
In some cases we may not yet have received all the data needed to
complete the handshake. ssl_handshake() then returns
POLARSSL_ERR_NET_WANT_READ and the state is updated, but because
getsock lacked the proper handling for this state, the library never
selected the socket for input, so the socket was never woken up when
the last data became available. This led to a timeout.
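A sketch of the kind of getsock handling this adds (names approximate
curl's vtls internals, they are not quoted from the patch): while the
handshake still wants input, expose the socket for reading so the
multi machinery polls it.

    /* Sketch, close to but not exactly curl's code. */
    static int polarssl_getsock(struct connectdata *conn,
                                curl_socket_t *socks, int numsocks)
    {
      if(!numsocks)
        return GETSOCK_BLANK;
      if(conn->ssl[FIRSTSOCKET].connecting_state ==
         ssl_connect_2_reading) {
        socks[0] = conn->sock[FIRSTSOCKET];
        return GETSOCK_READSOCK(0);  /* wake us when data arrives */
      }
      return GETSOCK_BLANK;
    }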
The API changed in version 1.3. A compatibility header has been
created to keep code that uses the old API building:
* the x509 certificate structure has been renamed from x509_cert to
x509_crt
* a new dedicated setter, ssl_set_own_cert_rsa, is for RSA
certificates; ssl_set_own_cert is for generic keys
* ssl_default_ciphersuites has been replaced by the function
ssl_list_ciphersuites()
This patch drops the use of the compatibility header.
Rename x509_cert to x509_crt and add the "compat-1.2.h" include.
This would still need some more thorough conversion in order to drop
the "compat-1.2.h" include.
Port number zero is perfectly valid to connect to. I moved to storing
the remote port number in an int so that -1 means undefined and
0-65535 can be used for legitimate port numbers.
Setting the TIMER_STARTSINGLE timestamp first in CONNECT has the
drawback that for actions that go back to the CONNECT state, the
timestamp is reset, and for the multi_socket API there's no
corresponding Curl_expire() then, so the timeout logic goes wrong!
Reported-by: Brad Spencer
Bug: http://curl.haxx.se/mail/lib-2014-02/0036.html
Remove the slash/backslash problem; now only slashes are used, as
wmake automatically translates them to the proper form, or the tools
are not sensitive to it.
Enable spaces in paths.
Use the internal rm command for all host platforms.
Add an error message if an old Open Watcom version is used; some old
versions exhibit build problems with the latest curl version. Now only
versions 1.8, 1.9 and 2.0 beta are supported.
For HTTP/2, we may read up everything, including the response body,
along with the header fields in Curl_http_readwrite_headers. If no
content-length is provided, curl waits for the connection close, which
we emulate using conn->proto.httpc.closed = TRUE. The thing is, if we
read everything, then http2_recv won't be called and we cannot signal
that the HTTP/2 stream has closed. As a workaround, we return nonzero
from data_pending to get http2_recv called.
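A sketch of the workaround (simplified; the internal names and curl's
bool/TRUE types are assumed):

    /* Sketch; the real data_pending() checks more cases. */
    static bool data_pending(const struct connectdata *conn)
    {
      if(conn->httpversion == 20)
        return TRUE;  /* force http2_recv() so stream close is seen */
      return Curl_ssl_data_pending(conn, FIRSTSOCKET);
    }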
Original commit message was:
Don't omit CN verification in SChannel when an IP address is used.
Side-effect of this change:
SChannel and CryptoAPI do not support the iPAddress subjectAltName
according to RFC 2818. If present, SChannel will first compare the
IP address to the dNSName subjectAltNames and then fallback to the
most specific Common Name in the Subject field of the certificate.
This means that after this change curl will not connect to such
SSL/TLS hosts unless the IP address is specified in the SAN or CN of
the server certificate, or the verifyhost option is disabled.
This patch enables HTTP POST/PUT in HTTP/2.
We disabled the Expect header field and chunked transfer encoding
since HTTP/2 forbids them.
In HTTP/1, curl sends small upload data together with the request
headers, but HTTP/2 requires that upload data be sent in separate DATA
frames. So we added some conditionals to achieve this.
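The conditionals are of roughly this shape (a sketch; the field names
are assumed rather than quoted from the patch):

    /* Sketch only: with HTTP/2, suppress Expect: 100-continue and
       chunked transfer encoding, and keep the upload body out of the
       request-header buffer. */
    if(conn->httpversion == 20) {
      data->state.expect100header = FALSE;  /* assumed field name */
      /* body is not appended here; it goes out later in DATA frames */
    }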
Perform more work in between sleeps. This works around the fact that
axTLS does not expose any knowledge about when work needs to be
performed. Depending on the connection and how often perform is being
called, this can save ~25% of time on SSL handshakes (measured on a
20ms latency connection calling perform roughly every 10ms).
When allowing NTLM, the re-use connection logic was too focused on
finding an existing NTLM connection to use and didn't properly allow
re-use of other ones. This made the logic not re-use perfectly re-usable
connections.
Added test case 1418 and 1419 to verify.
Regression brought in 8ae35102c (curl 7.35.0)
Reported-by: Jeff King
Bug: http://thread.gmane.org/gmane.comp.version-control.git/242213
For a function that returns a decoded version of a string, it seems
really strange to allow a NULL pointer to get passed in which then
prevents the decoded data from being returned!
This functionality was not documented anywhere either.
If anyone would use it that way, that memory would've been leaked.
Bug: https://github.com/bagder/curl/pull/90
Reported-by: Arvid Norberg
Make sure that the special NTLM magic we do is for HTTP+NTLM only since
that's where the authenticated connection is a weird non-standard
paradigm.
Regression brought in 8ae35102c (curl 7.35.0)
Bug: http://curl.haxx.se/mail/lib-2014-02/0100.html
Reported-by: Dan Fandrich
The code didn't properly check the return codes to detect overflows,
so it could trigger incorrectly, like on mingw32.
Regression introduced in 345891edba (curl 7.35.0)
Bug: http://curl.haxx.se/mail/lib-2014-02/0097.html
Reported-by: LM
When using --http2 one can now selectively disable NPN or ALPN with
--no-alpn and --no-npn. For now this is honored with NSS only.
TODO: honor this option with GnuTLS and OpenSSL
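For example:

    curl --http2 --no-npn https://example.com/
    curl --http2 --no-alpn https://example.com/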
SSL_ENABLE_ALPN can be used for preprocessor ALPN feature detection,
but not SSL_NEXT_PROTO_SELECTED, since it is an enum value and not a
preprocessor macro.
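So compile-time detection has to use the macro, roughly like this
(model being the NSS socket being configured):

    #include <prio.h>
    #include <ssl.h>    /* NSS */

    static void enable_alpn(PRFileDesc *model)
    {
    /* SSL_ENABLE_ALPN is a #define in NSS' ssl.h, so it can gate
       compilation; SSL_NEXT_PROTO_SELECTED (an enum value) cannot. */
    #ifdef SSL_ENABLE_ALPN
      SSL_OptionSet(model, SSL_ENABLE_ALPN, PR_TRUE);
    #else
      (void)model;   /* NSS too old for ALPN */
    #endif
    }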
Use a semicolon, not a comma, to separate the boundary parameter in
the Content-Type header - the comma is an inconsistency and a mistake
probably inherited from the examples section of RFC 1867.
This bug has been present since the day curl started to support
multipart formposts, back in the 90s.
Reported-by: Rob Davies
Bug: http://curl.haxx.se/bug/view.cgi?id=1333
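With the fix, the generated sub-part header reads for example (the
boundary value here is made up):

    before: Content-Type: multipart/mixed, boundary=----0123456789
    after:  Content-Type: multipart/mixed; boundary=----0123456789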
When using the multi socket interface, libcurl calls the
curl_multi_timer_callback asking to be woken up after
CURL_TIMEOUT_EXPECT_100 milliseconds.
After the timeout has expired, calling curl_multi_socket_action with
CURL_SOCKET_TIMEOUT as sockfd leads libcurl to check expired
timeouts. When handling the 100-continue one, the following check in
Curl_readwrite() fails if exactly CURL_TIMEOUT_EXPECT_100 milliseconds
passed since the timeout has been set!
It seems logical to consider that having waited for exactly
CURL_TIMEOUT_EXPECT_100 ms is enough.
Bug: http://curl.haxx.se/bug/view.cgi?id=1334
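The corrected comparison, sketched (the variable names approximate
curl's internals rather than quote the patch):

    /* Sketch: use >= so that waiting exactly CURL_TIMEOUT_EXPECT_100
       ms also counts as the timeout having expired. */
    if(Curl_tvdiff(k->now, k->start100) >= CURL_TIMEOUT_EXPECT_100) {
      /* stop waiting for the 100-continue, send the request body */
    }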
A server might respond with a content-encoding header and a response
that was encoded accordingly in HTTP-draft-09/2.0 mode, even if the
client did not send an accept-encoding header earlier. The server might
not send a content-encoding header if the identity encoding was used to
encode the response.
See:
http://tools.ietf.org/html/draft-ietf-httpbis-http2-09#section-9.3