When a socket gets closed and a new one is opened using the same file
descriptor number, multi.c:singlesocket() doesn't detect the
difference. As a remedy, the new function Curl_multi_closed() gets told
when a socket is closed so that it can be removed from the socket hash.
Once the old entry has been removed, singlesocket() will detect the new
socket fine on its next invocation.
Bug: http://curl.haxx.se/bug/view.cgi?id=1248
Reported-by: Erik Johansson
When performing COOKIELIST operations the cookie lock needs to be taken
for the cases where the cookies are shared among multiple handles!
Verified by Benjamin Gilbert's updated test 506
Bug: http://curl.haxx.se/bug/view.cgi?id=1215
Reported-by: Benjamin Gilbert
When curl_multi_wait() finds no file descriptor to wait for, it returns
instantly, and this must be handled gracefully within
curl_easy_perform() or it will cause a busy-loop. Starting now,
repeated fast returns without any file descriptors are detected, and a
gradually increasing sleep (up to a maximum of 1000 milliseconds) is
used before continuing the loop.
Bug: http://curl.haxx.se/bug/view.cgi?id=1238
Reported-by: Miguel Angel
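A minimal sketch of the backoff idea described above, assuming a
hypothetical wait_ms() millisecond sleep helper (this is not the actual
curl_easy_perform() code):
  #include <curl/curl.h>
  static void drive_transfer(CURLM *multi)
  {
    int still_running = 1;
    long sleep_ms = 0;
    while(still_running) {
      int numfds = 0;
      curl_multi_wait(multi, NULL, 0, 1000, &numfds);
      if(!numfds) {
        sleep_ms += 50;       /* back off a little more each round */
        if(sleep_ms > 1000)
          sleep_ms = 1000;    /* never sleep longer than a second */
        wait_ms(sleep_ms);    /* hypothetical portable sleep */
      }
      else
        sleep_ms = 0;         /* progress was made, reset the backoff */
      curl_multi_perform(multi, &still_running);
    }
  }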
The initial fix to only compare full path names was done in commit
04f52e9b4d but turned out to be incomplete. This change should make the
fix more complete and there are now two additional tests to verify it
(tests 31 and 62).
By always returning the md5 for an empty body when auth-int is asked
for, libcurl now at least sometimes does the right thing.
Bug: http://curl.haxx.se/bug/view.cgi?id=1235
Patched-by: Nach M. S.
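For reference, the H(entity-body) for an empty body is simply the MD5
of zero bytes. A small OpenSSL-based sketch that prints it:
  #include <stdio.h>
  #include <openssl/md5.h>
  int main(void)
  {
    unsigned char digest[MD5_DIGEST_LENGTH];
    int i;
    MD5((const unsigned char *)"", 0, digest);
    for(i = 0; i < MD5_DIGEST_LENGTH; i++)
      printf("%02x", digest[i]);
    printf("\n");  /* prints d41d8cd98f00b204e9800998ecf8427e */
    return 0;
  }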
Allow less room for "triggered too early" mistakes by applications /
timers on non-Windows platforms. Starting now, we assume that a timeout
call is never made earlier than 3 milliseconds before the actual
timeout. This greatly improves timeout accuracy on Linux.
Bug: http://curl.haxx.se/bug/view.cgi?id=1228
Reported-by: Hang Su
In the pkcs12 code, we get a list of x509 records returned from
PKCS12_parse() but when iterating over the list and passing each to
SSL_CTX_add_extra_chain_cert() we didn't also remove them from the
"stack", which made them get freed twice (first in sk_X509_pop_free()
and then later in SSL_CTX_free()).
This isn't really documented anywhere...
Bug: http://curl.haxx.se/bug/view.cgi?id=1236
Reported-by: Nikaiw
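A sketch of the corrected iteration pattern using OpenSSL's stack API
(error handling trimmed; not the exact libcurl code):
  #include <openssl/pkcs12.h>
  #include <openssl/ssl.h>
  static int add_extra_certs(SSL_CTX *ctx, STACK_OF(X509) *ca)
  {
    X509 *x;
    /* pop each cert off the stack before handing it over, since
       SSL_CTX_add_extra_chain_cert() takes ownership of it */
    while((x = sk_X509_pop(ca)) != NULL) {
      if(!SSL_CTX_add_extra_chain_cert(ctx, x)) {
        X509_free(x);  /* we still own this one */
        return 0;
      }
    }
    sk_X509_pop_free(ca, X509_free);  /* the stack is empty by now */
    return 1;
  }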
commit 29bf0598aa introduced a problem when the "internal" timeout is
preferred to the given one if shorter, as it didn't consider the case
where -1 was returned. Now the internal timeout is only considered if
it is not -1.
Reported-by: Tor Arntsen
Bug: http://curl.haxx.se/mail/lib-2013-06/0015.html
If the multi handle's pending timeout is less than what is passed into
this function, it will now opt to use the shorter time anyway, since it
is a very good hint that the handle wants to process something in a
shorter time than what otherwise would happen.
curl_multi_wait.3 was updated accordingly to clarify this.
This is the reason for bug #1224
Bug: http://curl.haxx.se/bug/view.cgi?id=1224
Reported-by: Andrii Moiseiev
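The selection logic boils down to something like this sketch (variable
names hypothetical):
  long multi_timeout = -1;
  curl_multi_timeout(multi, &multi_timeout);
  /* only let the multi handle's own timeout shorten the given one when
     it is a real value; -1 means no timeout is set */
  if((multi_timeout >= 0) && (multi_timeout < timeout_ms))
    timeout_ms = multi_timeout;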
When sending the HTTP Authorization: header for digest, the user name
needs to be escaped if it contains a double-quote or backslash.
Test 1229 was added to verify
Reported and fixed by: Nach M. S.
Bug: http://curl.haxx.se/bug/view.cgi?id=1230
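A minimal sketch of the escaping (helper name hypothetical, not the
actual libcurl function):
  #include <stdlib.h>
  #include <string.h>
  /* backslash-escape '"' and '\' before embedding the user name in the
     Authorization: Digest header */
  static char *escape_username(const char *user)
  {
    size_t len = strlen(user);
    char *out = malloc(len * 2 + 1);  /* worst case: all chars escaped */
    char *p = out;
    if(!out)
      return NULL;
    for(; *user; user++) {
      if(*user == '"' || *user == '\\')
        *p++ = '\\';
      *p++ = *user;
    }
    *p = '\0';
    return out;
  }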
We found that in specific cases, if the connection is abruptly closed,
the underlying socket is listed in a CLOSE_WAIT state. We continue to
call curl_multi_perform(), curl_multi_fdset() etc, but none of these
APIs report the socket as closed or the connection as finished. Since
we have cases where the multi connection is only used once, this can
pose a problem for us. I've read that if another connection were to
come in, curl would see the socket as bad and attempt to close it at
that time - unfortunately, this does not work for us.
I found that in specific situations, if SSL_write() returns 0, curl did
not recognize the socket as closed (or errored out) and did not report
it to the application. I believe we need to change the code slightly to
check if SSL_write() returns 0 and, if so, treat it as an error - the
same as a negative return code.
For OpenSSL, the SSL_write() documentation is here:
http://www.openssl.org/docs/ssl/SSL_write.html
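A sketch of the suggested handling (function name hypothetical):
  #include <openssl/ssl.h>
  static int ssl_send(SSL *ssl, const void *buf, int len)
  {
    int rc = SSL_write(ssl, buf, len);
    if(rc <= 0) {
      /* 0 no longer counts as success; a clean close by the peer shows
         up as SSL_ERROR_ZERO_RETURN here */
      int err = SSL_get_error(ssl, rc);
      (void)err;  /* map to an application error code as appropriate */
      return -1;
    }
    return rc;  /* number of bytes actually written */
  }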
1 - don't skip host names with a colon in them in an attempt to bail out
on HTTP headers in the cookie file parser. It was only a shortcut anyway
and trying to parse a file with HTTP headers will still be handled, only
slightly slower.
2 - don't skip domain names based on the number of dots. The original
netscape cookie spec mentioned this oddity and while our code reduced
the check to require only two dots, the current cookie spec requires no
such dot counting at all.
Bug: http://curl.haxx.se/bug/view.cgi?id=1221
Reported-by: Stefan Neis
I found a bug where cURL sends cookies to a path it should not aim at.
For example:
- cURL sends a request to http://example.fake/hoge/
- the server returns a cookie with path=/hoge;
the point is that there is NO '/' at the end of the path string.
- cURL sends a request to http://example.fake/hogege/ with the cookie.
The reason for this old "feature" is that this behavior is what the
original netscape cookie spec describes:
http://curl.haxx.se/rfc/cookie_spec.html
The current cookie spec (RFC6265) clarifies the situation:
http://tools.ietf.org/html/rfc6265#section-5.2.4
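The RFC6265 path-match rule can be sketched like this (assuming both
paths start with '/'; not the actual libcurl implementation):
  #include <stdbool.h>
  #include <string.h>
  /* "/hoge" matches "/hoge" and "/hoge/fuga" but not "/hogege" */
  static bool pathmatch(const char *cookie_path, const char *req_path)
  {
    size_t clen = strlen(cookie_path);
    if(strncmp(cookie_path, req_path, clen) != 0)
      return false;                  /* not even a prefix */
    if(req_path[clen] == '\0')
      return true;                   /* identical paths */
    if(cookie_path[clen - 1] == '/')
      return true;                   /* cookie path ends with a slash */
    return req_path[clen] == '/';    /* next char starts a new level */
  }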
This reverts commit 8ec2cb5544.
We don't have any code anywhere in libcurl (or the curl tool) that uses
wcsdup so there's no such memory use to track. It seems to cause mild
problems with the Borland compiler, though, which we can avoid by
reverting this change again.
Bug: http://curl.haxx.se/mail/lib-2013-05/0070.html
If the mail sent during the transfer contains a terminating <CRLF> then
we should not send the first <CRLF> of the EOB as specified in RFC-5321.
Additionally don't send the <CRLF> if there is "no mail data" as the
DATA command already includes it.
The code within #ifdef HAVE_SOCKADDR_IN6_SIN6_SCOPE_ID wrongly had two
closing braces when it should only have one, so builds without that
define would fail.
Bug: http://curl.haxx.se/mail/lib-2013-05/0000.html
The curl command line utility would display the completed progress bar
with a percentage of zero, as the progress routines didn't know the
size of the transfer.
Removed the hard returns from imap and pop3 by using the same style for
sending the authentication string as smtp. Moved the "Other mechanisms
not supported" check in smtp to match that of imap and pop3 to provide
consistency between the three email protocols.
Users using the Secure Transport (darwinssl) back-end can now use a
certificate and private key to authenticate with a site using TLS.
Because Apple's security system is based around the keychain and does
not have any public function to create a SecIdentityRef data structure
from data loaded outside of the Keychain, the certificate and private
key have to be loaded into the Keychain first (using the certtool
command line tool or the Security framework's C API) before we can find
it and use it.
In addition to checking for the SASL-IR capability, the user can
override the sending of the client's initial response in the
AUTHENTICATE command with the use of CURLOPT_SASL_IR, should the server
erroneously not report SASL-IR when it does support it.
Updated the default behaviour to not send the client's initial response
in the AUTH command, and added support for CURLOPT_SASL_IR to allow the
user to specify that it should be included.
Related Bug: http://curl.haxx.se/mail/lib-2012-03/0114.html
Reported-by: Gokhan Sengun
By introducing an internal alternative to curl_multi_init() that accepts
parameters to set the hash sizes, easy handles will now use tiny socket
and connection hash tables since it will only ever add a single easy
handle to that multi handle.
This decreased the number of mallocs in test 40 (which is a rather
simple and typical easy interface use case) from 1142 to 138. The
maximum amount of memory allocated went down from 118969 to 78805.
When connecting back to an FTP server after having sent PASV/EPSV,
libcurl sometimes didn't use the proxy properly even though the proxy
was used for the initial connect.
The function wrongly checked for the CURLOPT_PROXY variable to be set,
which made it act wrongly if the proxy information was set with an
environment variable.
Added test case 711 to verify (based on 707 which uses --socks5). Also
added test 712 to verify another variation of setting the proxy: with
--proxy socks5://
Bug: http://curl.haxx.se/bug/view.cgi?id=1218
Reported-by: Zekun Ni
... in order to prevent an artificial timeout event based on stale
speed-check data from a previous network transfer. This commit fixes
a regression caused by 9dd85bced5.
Bug: https://bugzilla.redhat.com/906031
Fixed an issue in parse_proxy(), introduced in commit 11332577b3,
where an empty username or password (For example: http://:@example.com)
would cause a crash.
There is no need to perform separate clearing of data if a NULL option
pointer is passed in. Instead this operation can be performed by simply
not calling parse_login_details() and letting the rest of the code do
the work.
setstropt_userpwd() was calling setstropt() in commit fddb7b44a7 to
set each of the login details which would duplicate the strings and
subsequently cause a memory leak.
In addition to parsing the optional login options from the URL, added
support for parsing them from CURLOPT_USERPWD, to allow the following
supported command line:
--user username:password;options
Added bounds checking when searching for the separator characters
within the login string, as this string may not be NUL terminated (for
example, when it is the login part of a URL). For performance reasons,
we do this in preference to allocating a new string, copying the login
details into it, and passing that to parse_login_details().
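A sketch of the bounded search (names hypothetical):
  #include <stddef.h>
  #include <string.h>
  /* find the password (':') and options (';') separators without
     reading past the end of a login string that is not necessarily
     NUL terminated */
  static void find_separators(const char *login, size_t len,
                              const char **psep, const char **osep)
  {
    *psep = memchr(login, ':', len);  /* start of the password, if any */
    *osep = memchr(login, ';', len);  /* start of the options, if any */
  }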
As well as parsing the username and password from the URL, added support
for parsing the optional options part from the login details, to allow
the following supported URL format:
scheme://username:password;options@example.com/path?q=foobar
This will only be used by IMAP, POP3 and SMTP at present but any
protocol that may be given login options in the URL will be able to
add support for them.
...instead of the 220 we otherwise expect.
Made the ftpserver.pl support sending a custom "welcome" and then
created test 1219 to verify this fix with such a 230 welcome.
Bug: http://curl.haxx.se/mail/lib-2013-02/0102.html
Reported by: Anders Havn
Accessing a file with an absolute path in the root dir but with no
directory specified was not handled correctly. This fix comes with four
new test cases that verify it.
Bug: http://curl.haxx.se/mail/lib-2013-04/0142.html
Reported by: Sam Deane
Cookies set for 'example.com' could accidentally also be sent by
libcurl to 'bexample.com' (i.e. with a prefix to the first domain
name).
This is a security vulnerability, CVE-2013-1944.
Bug: http://curl.haxx.se/docs/adv_20130412.html
The previously applied patch didn't work on Windows; we can't rely on
shell commands like 'echo' since they act differently on each platform
and each shell.
In order to keep this script platform-independent the code must
only use pure Perl.
When doing PWD, there's a 257 response which apparently some servers
prefix with a comment before the path instead of after it as is
otherwise the norm.
Failing to parse this response breaks several otherwise legitimate use
cases.
Bug: http://curl.haxx.se/mail/lib-2013-04/0113.html
The OpenSSL pipe wrote to the final CA bundle file, but the encoded PEM
output wrote to a temporary file. Consequently, the OpenSSL output was
lost when the temp file was renamed to the final file at script finish
(overwriting the final file written earlier by openssl).
Patch posted to the list by Richard Michael (rmichael edgeofthenet org).
I noticed that aria2's SecureTransport code disables insecure ciphers such
as NULL, anonymous, IDEA, and weak-key ciphers used by SSLv3 and later.
That's a good idea, and now we do the same thing in order to prevent curl
from accessing a "secure" site that only negotiates insecure ciphersuites.
Previously it only compared credentials if the requested needle
connection wasn't using a proxy. This caused NTLM authentication
failures when using proxies, as the authentication code wasn't sent on
the connection where the challenge arrived.
Added test 1215 to verify: NTLM server authentication through a proxy
(This is a modified copy of test 67)
Since qsort implementations vary with regards to handling the order of
similar elements, this change makes the internal sort function more
deterministic by comparing path length first, then domain length and
finally the cookie name. Spotted with test case 62 on Windows.
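A sketch of the deterministic comparator (the Cookie struct here is
illustrative, not libcurl's real definition; the sorted array holds
pointers to cookies):
  #include <string.h>
  struct Cookie {
    char *path;
    char *domain;
    char *name;
  };
  static int cookie_sort(const void *p1, const void *p2)
  {
    const struct Cookie *c1 = *(const struct Cookie * const *)p1;
    const struct Cookie *c2 = *(const struct Cookie * const *)p2;
    size_t l1 = strlen(c1->path), l2 = strlen(c2->path);
    if(l1 != l2)
      return (l2 > l1) ? 1 : -1;        /* longer path sorts first */
    l1 = strlen(c1->domain);
    l2 = strlen(c2->domain);
    if(l1 != l2)
      return (l2 > l1) ? 1 : -1;        /* then longer domain */
    return strcmp(c1->name, c2->name);  /* finally, by name */
  }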
When doing PORT and upload (STOR), this function needs to extract the
file descriptor for both connections so that it will respond immediately
when the server eventually connects back.
This flaw caused active connections to become unnecessarily slow, but they
would still often work due to the normal polling on a timeout. The bug
also would not occur if the server connected back very fast, like when
testing on local networks.
Bug: http://curl.haxx.se/bug/view.cgi?id=1183
Reported by: Daniel Theron
I am using curl_easy_setopt(CURLOPT_INTERFACE, "if!something") to force
transfers to use a particular interface but the transfer fails with
CURLE_INTERFACE_FAILED, "Failed binding local connection end" if the
interface I specify has no IPv6 address. The cause is as follows:
The remote hostname resolves successfully and has an IPv6 address and an
IPv4 address.
cURL attempts to connect to the IPv6 address first.
bindlocal (in lib/connect.c) fails because Curl_if2ip cannot find an
IPv6 address on the interface.
This is a fatal error in singleipconnect().
This change will make cURL try the next IP address in the list.
Also included are two changes related to IPv6 address scope:
- Filter the choice of address in Curl_if2ip to only consider addresses
with the same scope ID as the connection address (mismatched scope for
local and remote address does not result in a working connection).
- bindlocal was ignoring the scope ID of addresses returned by
Curl_if2ip. Now it uses them.
Bug: http://curl.haxx.se/bug/view.cgi?id=1189
At some point recently we lost the default value for the easy handle's
connection cache, and this change puts it back to 5 - which is the
former default value and the one documented in the curl_easy_setopt.3 man
page.
The Microsoft knowledge-base article
http://support.microsoft.com/kb/823764 describes how to use SNDBUF to
overcome a performance shortcoming in winsock, but it doesn't apply to
Windows Vista and later versions. If the described SNDBUF magic is
applied when running on those more recent Windows versions, it seems to
instead have the reverse effect in many cases, making libcurl perform
worse on those systems.
This fix thus adds a run-time version check that applies the SNDBUF
magic conditionally, depending on whether it is deemed necessary.
Bug: http://curl.haxx.se/bug/view.cgi?id=1188
Reported by: Andrew Kurushin
Tested by: Christian Hägele
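A hedged sketch of the run-time gate (the buffer value is illustrative
only, not the actual workaround logic):
  #ifdef WIN32
  #include <winsock2.h>
  #include <string.h>
  static void sndbufset(SOCKET sockfd)
  {
    OSVERSIONINFO osver;
    memset(&osver, 0, sizeof(osver));
    osver.dwOSVersionInfoSize = sizeof(osver);
    /* only apply the KB823764 tweak before Vista (major version 6) */
    if(GetVersionEx(&osver) && (osver.dwMajorVersion < 6)) {
      int val = 0x10000;  /* illustrative size only */
      setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF,
                 (const char *)&val, sizeof(val));
    }
  }
  #endif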
The last remaining code piece that still used FTPSENDF now uses PPSENDF.
In the problematic case, a PREQUOTE series was done on a re-used
connection when Curl_pp_init() hadn't been called so it had messed up
pointers. The init call is done properly from Curl_pp_sendf() so this
change fixes this particular crash.
Bug: http://curl.haxx.se/mail/lib-2013-03/0319.html
Reported by: Sam Deane
As of 25-mar-2013, wcsdup(), _wcsdup() and _tcsdup() are only used in
WIN32 specific code, so tracking of these has not been extended to
other build targets. Without this fix, the memory tracking system on
WIN32 builds would provide misleading results when using these
functions.
In order to properly extend this support to all targets, curl.h would
have to define a curl_wcsdup_callback prototype, and consequently
wchar_t would have to be visible before that in curl.h. In other words,
curl_wchar_t would be defined in curlbuild.h, with this pulling in
whatever system header is required to get the wchar_t definition.
Additionally, a new curl_global_init_mem() function that also receives
a user-defined wcsdup() callback would be required.
Proxy servers tend to add their own headers at the beginning of
responses. The size of these headers was not taken into account by
CURLINFO_HEADER_SIZE before this change.
Bug: http://curl.haxx.se/bug/view.cgi?id=1204
After having done a POST over a CONNECT request, the 'rewindaftersend'
boolean could still hold the value from a previous request, which could
lead to badness.
This should be tested for in a new test case!
Bug: https://groups.google.com/d/msg/msysgit/B31LNftR4BI/KhRTz0iuGmUJ
Fixed incorrect initial response generation for the NTLM and LOGIN SASL
authentication mechanisms when the SASL-IR capability was detected.
Introduced in commit: 6da7dc026c.
curl has been accepting URLs using slightly wrong syntax for a long
time, such as when the slash is completely missing
("http://example.org") or when a query part is given without a slash
("http://example.org?q=foobar").
curl would translate these into legitimate HTTP requests to servers,
although as was shown in bug #1206 the request was not adjusted
properly in the cases where an HTTP proxy was used.
Test 1213 and 1214 were added to the test suite to verify this fix.
The test HTTP server was adjusted to allow us to specify the test
number in the host name alone, without using any slashes in a given
URL.
Bug: http://curl.haxx.se/bug/view.cgi?id=1206
Reported by: ScottJi
Introducing a number of options to the multi interface that allow for
multiple pipelines to the same host, in order to
optimize the balance between the penalty for opening new
connections and the potential pipelining latency.
Two new options for limiting the number of connections:
CURLMOPT_MAX_HOST_CONNECTIONS - Limits the number of running connections
to the same host. When adding a handle that exceeds this limit,
that handle will be put in a pending state until another handle is
finished, so we can reuse the connection.
CURLMOPT_MAX_TOTAL_CONNECTIONS - Limits the number of connections in total.
When adding a handle that exceeds this limit,
that handle will be put in a pending state until another handle is
finished. The free connection will then be reused, if possible, or
closed if the pending handle can't reuse it.
Several new options for pipelining:
CURLMOPT_MAX_PIPELINE_LENGTH - Limits the pipeline length. If a
pipeline is "full" when a connection is to be reused, a new connection
will be opened if the CURLMOPT_MAX_xxx_CONNECTIONS limits allow it.
If not, the handle will be put in a pending state until a connection is
ready (either free or a pipe got shorter).
CURLMOPT_CONTENT_LENGTH_PENALTY_SIZE - A pipelined connection will not
be reused if it is currently processing a transfer with a content
length that is larger than this.
CURLMOPT_CHUNK_LENGTH_PENALTY_SIZE - A pipelined connection will not
be reused if it is currently processing a chunk larger than this.
CURLMOPT_PIPELINING_SITE_BL - A blacklist of hosts that don't allow
pipelining.
CURLMOPT_PIPELINING_SERVER_BL - A blacklist of server types that don't allow
pipelining.
See the curl_multi_setopt() man page for details.
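A short usage sketch with arbitrary example values:
  #include <curl/curl.h>
  CURLM *multi = curl_multi_init();
  curl_multi_setopt(multi, CURLMOPT_PIPELINING, 1L);
  curl_multi_setopt(multi, CURLMOPT_MAX_HOST_CONNECTIONS, 2L);
  curl_multi_setopt(multi, CURLMOPT_MAX_TOTAL_CONNECTIONS, 8L);
  curl_multi_setopt(multi, CURLMOPT_MAX_PIPELINE_LENGTH, 5L);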
Following commit e450f66a02 and the multi interface being used
internally from 7.29.0, the transfer cancellation in
pop3_dophase_done() is no longer required.
When Curl_do() returns failure, the connection pointer could be NULL,
so the code path following needs to take that into account.
Bug: http://curl.haxx.se/mail/lib-2013-03/0062.html
Reported by: Eric Hu
Moved the blocking state machine to the disconnect functions so that the
logout / quit functions are only responsible for sending the actual
command needed to logout or quit.
Additionally removed the hard return on failure.
Added an exception, for the STORE command, to the untagged response
processor in imap_endofresp(), as servers will send back responses
containing the FETCH keyword instead.
The list of unsafe functions currently consists of sprintf, vsprintf,
strcat, strncat and gets.
Subsequently, some existing code needed updating to avoid warnings on
this.
As the UID has to be specified by the user for the FETCH command to work
correctly, added a check to imap_fetch(), although strictly speaking it
is protected by the call from imap_perform().
The option needs to be set on the SSL socket. Setting it on the model
has no effect. Note that the non-blocking mode is still not enabled
for the handshake because the code is not yet ready for that.
Commit 26eaa83830 introduced the use of S_ISDIR(), yet some compilers,
such as MSVC, don't support it, so we must define a substitute using
file flags and a mask.
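The substitute is the usual portable macro:
  #include <sys/stat.h>
  #ifndef S_ISDIR
  /* test the file type bits against the directory flag */
  #define S_ISDIR(m) (((m) & S_IFMT) == S_IFDIR)
  #endif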
Commit f4cc54cb47 (shipped as part of the 7.29.0 release) was a bug fix
that introduced a regression: while trying to disallow directory names,
it also forbade "special" files like character devices, such as
"/dev/null" which was used by Oliver who reported this regression.
Reported by: Oliver Gondža
Bug: http://curl.haxx.se/mail/archive-2013-02/0040.html
If the server hung up the connection without sending a closure alert,
then we'd keep probing the socket for data even though it's dead. Now
we're ready for this situation.
Bug: http://curl.haxx.se/mail/lib-2013-03/0014.html
Reported by: Aki Koskinen
Some state changes would be performed after a failure test that
performed a hard return, whilst others would be performed within a test
for success. Updated the code, for consistency, so all instances are
performed within a success test.