...instead of the 220 we otherwise expect.
Made the ftpserver.pl support sending a custom "welcome" and then
created test 1219 to verify this fix with such a 230 welcome.
Bug: http://curl.haxx.se/mail/lib-2013-02/0102.html
Reported by: Anders Havn
Accessing a file with an absolute path in the root dir but with no
directory specified was not handled correctly. This fix comes with four
new test cases that verify it.
Bug: http://curl.haxx.se/mail/lib-2013-04/0142.html
Reported by: Sam Deane
Cookies set for 'example.com' could accidentally also be sent by libcurl
to 'bexample.com' (i.e. with a prefix to the first domain name).
This is a security vulnerability, CVE-2013-1944.
Bug: http://curl.haxx.se/docs/adv_20130412.html
The previously applied patch didn't work on Windows; we can't rely
on shell commands like 'echo' since they act differently on each
platform and each shell.
In order to keep this script platform-independent the code must
only use pure Perl.
When doing PWD, there's a 257 response which apparently some servers
prefix with a comment before the path instead of after it as is
otherwise the norm.
Failing to parse this, several otherwise legitimate use cases break.
Bug: http://curl.haxx.se/mail/lib-2013-04/0113.html
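For illustration, a tolerant parser can simply pick out the first
double-quoted string anywhere in the 257 line rather than assuming the
path follows the status code directly. This is only a sketch (the
function name is made up and it ignores the doubled-quote escaping that
RFC 959 allows inside the path), not the actual ftp.c code:

    #include <stdlib.h>
    #include <string.h>

    /* Return a malloc'd copy of the first double-quoted string in a
       '257 ...' reply line, or NULL if no quoted path is found. */
    static char *parse_257_path(const char *line)
    {
      const char *start = strchr(line, '"');
      const char *end;
      char *path;
      size_t len;

      if(!start)
        return NULL;
      start++;
      end = strchr(start, '"');
      if(!end)
        return NULL;
      len = (size_t)(end - start);
      path = malloc(len + 1);
      if(path) {
        memcpy(path, start, len);
        path[len] = '\0';
      }
      return path;
    }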
The OpenSSL pipe wrote to the final CA bundle file, but the encoded PEM
output wrote to a temporary file. Consequently, the OpenSSL output was
lost when the temp file was renamed to the final file at script finish
(overwriting the final file written earlier by openssl).
Patch posted to the list by Richard Michael (rmichael edgeofthenet org).
I noticed that aria2's SecureTransport code disables insecure ciphers such
as NULL, anonymous, IDEA, and weak-key ciphers used by SSLv3 and later.
That's a good idea, and now we do the same thing in order to prevent curl
from accessing a "secure" site that only negotiates insecure ciphersuites.
Previously it only compared credentials if the requested needle
connection wasn't using a proxy. This caused NTLM authentication
failures when using proxies, as the authentication code wasn't sent on
the connection where the challenge arrived.
Added test 1215 to verify: NTLM server authentication through a proxy
(This is a modified copy of test 67)
Since qsort implementations vary with regard to handling the order
of similar elements, this change makes the internal sort function
more deterministic by comparing path length first, then domain length
and finally the cookie name. Spotted with test case 62 on Windows.
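A minimal sketch of such a tie-breaking comparator (the struct and
function names here are illustrative, not the actual lib/cookie.c code):

    #include <string.h>

    struct cookie {
      char *name;
      char *path;
      char *domain;
    };

    /* Order cookies deterministically: longest path first, then longest
       domain, then alphabetically by name, so the result no longer
       depends on how a particular qsort() handles "equal" elements. */
    static int cookie_cmp(const void *p1, const void *p2)
    {
      const struct cookie *a = p1;
      const struct cookie *b = p2;
      size_t al = a->path ? strlen(a->path) : 0;
      size_t bl = b->path ? strlen(b->path) : 0;

      if(al != bl)
        return (bl > al) ? 1 : -1;     /* longer path sorts first */

      al = a->domain ? strlen(a->domain) : 0;
      bl = b->domain ? strlen(b->domain) : 0;
      if(al != bl)
        return (bl > al) ? 1 : -1;     /* longer domain sorts first */

      return strcmp(a->name, b->name); /* finally, compare names */
    }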
When doing PORT and upload (STOR), this function needs to extract the
file descriptor for both connections so that it will respond immediately
when the server eventually connects back.
This flaw caused active connections to become unnecessarily slow but they
would still often work due to the normal polling on a timeout. The bug
also would not occur if the server connected back very fast, like when
testing on local networks.
Bug: http://curl.haxx.se/bug/view.cgi?id=1183
Reported by: Daniel Theron
I am using curl_easy_setopt(CURLOPT_INTERFACE, "if!something") to force
transfers to use a particular interface but the transfer fails with
CURLE_INTERFACE_FAILED, "Failed binding local connection end" if the
interface I specify has no IPv6 address. The cause is as follows:
The remote hostname resolves successfully and has an IPv6 address and an
IPv4 address.
cURL attempts to connect to the IPv6 address first.
bindlocal (in lib/connect.c) fails because Curl_if2ip cannot find an
IPv6 address on the interface.
This is a fatal error in singleipconnect()
This change will make cURL try the next IP address in the list.
Also included are two changes related to IPv6 address scope:
- Filter the choice of address in Curl_if2ip to only consider addresses
with the same scope ID as the connection address (mismatched scope for
local and remote address does not result in a working connection).
- bindlocal was ignoring the scope ID of addresses returned by
Curl_if2ip. Now it uses them.
Bug: http://curl.haxx.se/bug/view.cgi?id=1189
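For reference, the option is used like this; "eth0" is just an example
interface name:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        CURLcode res;
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.org/");
        /* the "if!" prefix forces the name to be treated as an
           interface, never as a host name or IP address */
        curl_easy_setopt(curl, CURLOPT_INTERFACE, "if!eth0");
        res = curl_easy_perform(curl);
        if(res != CURLE_OK)
          fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
        curl_easy_cleanup(curl);
      }
      return 0;
    }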
At some point recently we lost the default value for the easy handle's
connection cache, and this change puts it back to 5 - which is the
former default value and it is documented in the curl_easy_setopt.3 man
page.
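The cache size in question is the one CURLOPT_MAXCONNECTS controls;
applications that want something other than the restored default of 5
can still override it, as in this minimal sketch:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        /* the easy handle's connection cache holds 5 entries by
           default; raise the limit for this handle */
        curl_easy_setopt(curl, CURLOPT_MAXCONNECTS, 10L);
        curl_easy_cleanup(curl);
      }
      return 0;
    }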
The Microsoft knowledge-base article
http://support.microsoft.com/kb/823764 describes how to use SNDBUF to
overcome a performance shortcoming in winsock, but it doesn't apply to
Windows Vista and later versions. If the described SNDBUF magic is
applied when running on those more recent Windows versions, it seems to
instead have the reverse effect in many cases and thus make libcurl
perform worse on those systems.
This fix therefore adds a run-time version check that applies the SNDBUF
magic conditionally, only when it is deemed necessary.
Bug: http://curl.haxx.se/bug/view.cgi?id=1188
Reported by: Andrew Kurushin
Tested by: Christian Hägele
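The check is along these lines; this is a simplified sketch (the helper
name and buffer size are made up), not the actual libcurl code:

    #ifdef WIN32
    #include <winsock2.h>
    #include <windows.h>
    #include <string.h>

    /* Apply the KB823764 SO_SNDBUF workaround only on Windows versions
       older than Vista (major version 6); on newer versions it tends to
       hurt throughput rather than help it. */
    static void maybe_apply_sndbuf_magic(SOCKET s, int sndbufsize)
    {
      OSVERSIONINFO osvi;
      memset(&osvi, 0, sizeof(osvi));
      osvi.dwOSVersionInfoSize = sizeof(osvi);
      if(GetVersionEx(&osvi) && osvi.dwMajorVersion < 6)
        setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                   (const char *)&sndbufsize, sizeof(sndbufsize));
    }
    #endif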
The last remaining code piece that still used FTPSENDF now uses PPSENDF.
In the problematic case, a PREQUOTE series was done on a re-used
connection when Curl_pp_init() hadn't been called so it had messed up
pointers. The init call is done properly from Curl_pp_sendf() so this
change fixes this particular crash.
Bug: http://curl.haxx.se/mail/lib-2013-03/0319.html
Reported by: Sam Deane
As of 25-mar-2013, wcsdup(), _wcsdup() and _tcsdup() are only used in
WIN32-specific code, so tracking of these has not been extended to
other build targets. Without this fix, the memory tracking system on
WIN32 builds would provide misleading results when using these
functions.
In order to properly extend this support to all targets, curl.h would
have to define a curl_wcsdup_callback prototype, and consequently
wchar_t would have to be visible in curl.h before that. In other words,
curl_wchar_t would be defined in curlbuild.h, which would pull in
whatever system header is required to get the wchar_t definition.
Additionally a new curl_global_init_mem() function that also receives
user defined wcsdup() callback would be required.
Proxy servers tend to add their own headers at the beginning of
responses. The size of these headers was not taken into account by
CURLINFO_HEADER_SIZE before this change.
Bug: http://curl.haxx.se/bug/view.cgi?id=1204
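A minimal way to read the (now proxy-inclusive) value after a completed
transfer, as a sketch:

    #include <stdio.h>
    #include <curl/curl.h>

    /* print the total header size of the previous transfer on this
       handle; with this change the count also covers headers added by
       a proxy */
    static void print_header_size(CURL *curl)
    {
      long header_size = 0;
      if(curl_easy_getinfo(curl, CURLINFO_HEADER_SIZE,
                           &header_size) == CURLE_OK)
        printf("received headers: %ld bytes\n", header_size);
    }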
After having done a POST over a CONNECT request, the 'rewindaftersend'
boolean could be holding the previous value which could lead to badness.
This should be tested for in a new test case!
Bug: https://groups.google.com/d/msg/msysgit/B31LNftR4BI/KhRTz0iuGmUJ
Fixed incorrect initial response generation for the NTLM and LOGIN SASL
authentication mechanisms when SASL-IR support was detected.
Introduced in commit: 6da7dc026c.
curl has been accepting URLs using slightly wrong syntax for a long
time, such as when the slash is completely missing ("http://example.org")
or when a slash is missing before a given query part
("http://example.org?q=foobar").
curl would translate these into a legitimate HTTP request to servers,
although as was shown in bug #1206 it was not adjusted properly in the
cases where an HTTP proxy was used.
Test 1213 and 1214 were added to the test suite to verify this fix.
The test HTTP server was adjusted to allow us to specify the test number
in the host name only, without using any slashes in a given URL.
Bug: http://curl.haxx.se/bug/view.cgi?id=1206
Reported by: ScottJi
Introducing a number of options to the multi interface that
allow for multiple pipelines to the same host, in order to
optimize the balance between the penalty for opening new
connections and the potential pipelining latency.
Two new options for limiting the number of connections:
CURLMOPT_MAX_HOST_CONNECTIONS - Limits the number of running connections
to the same host. When adding a handle that exceeds this limit,
that handle will be put in a pending state until another handle is
finished, so we can reuse the connection.
CURLMOPT_MAX_TOTAL_CONNECTIONS - Limits the number of connections in total.
When adding a handle that exceeds this limit,
that handle will be put in a pending state until another handle is
finished. The free connection will then be reused, if possible, or
closed if the pending handle can't reuse it.
Several new options for pipelining:
CURLMOPT_MAX_PIPELINE_LENGTH - Limits the pipelining length. If a
pipeline is "full" when a connection is to be reused, a new connection
will be opened if the CURLMOPT_MAX_xxx_CONNECTIONS limits allow it.
If not, the handle will be put in a pending state until a connection is
ready (either free or a pipe got shorter).
CURLMOPT_CONTENT_LENGTH_PENALTY_SIZE - A pipelined connection will not
be reused if it is currently processing a transfer with a content
length that is larger than this.
CURLMOPT_CHUNK_LENGTH_PENALTY_SIZE - A pipelined connection will not
be reused if it is currently processing a chunk larger than this.
CURLMOPT_PIPELINING_SITE_BL - A blacklist of hosts that don't allow
pipelining.
CURLMOPT_PIPELINING_SERVER_BL - A blacklist of server types that don't allow
pipelining.
See the curl_multi_setopt() man page for details.
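A minimal sketch of how the options combine on a multi handle; the
numeric limits below are arbitrary example values:

    #include <curl/curl.h>

    static void configure_pipelining(CURLM *multi)
    {
      /* enable HTTP pipelining on this multi handle */
      curl_multi_setopt(multi, CURLMOPT_PIPELINING, 1L);

      /* at most 2 connections per host and 8 in total; handles that
         would exceed these limits stay pending until a connection
         frees up */
      curl_multi_setopt(multi, CURLMOPT_MAX_HOST_CONNECTIONS, 2L);
      curl_multi_setopt(multi, CURLMOPT_MAX_TOTAL_CONNECTIONS, 8L);

      /* no more than 5 requests queued on one pipelined connection */
      curl_multi_setopt(multi, CURLMOPT_MAX_PIPELINE_LENGTH, 5L);

      /* don't pipeline onto a connection busy with a large transfer */
      curl_multi_setopt(multi, CURLMOPT_CONTENT_LENGTH_PENALTY_SIZE, 1000000L);
      curl_multi_setopt(multi, CURLMOPT_CHUNK_LENGTH_PENALTY_SIZE, 1000000L);
    }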
Following commit e450f66a02 and the changes in the multi interface
being used internally, from 7.29.0, the transfer cancellation in
pop3_dophase_done() is no longer required.
When Curl_do() returns failure, the connection pointer could be NULL so
the code path that follows needs to take that into account.
Bug: http://curl.haxx.se/mail/lib-2013-03/0062.html
Reported by: Eric Hu
Moved the blocking state machine to the disconnect functions so that the
logout / quit functions are only responsible for sending the actual
command needed to logout or quit.
Additionally removed the hard return on failure.
Added an exception, for the STORE command, to the untagged response
processor in imap_endofresp() as servers will send back responses containing
the FETCH keyword instead.
The list of unsafe functions currently consists of sprintf, vsprintf,
strcat, strncat and gets.
Subsequently, some existing code needed updating to avoid warnings on
this.
As the UID has to be specified by the user for the FETCH command to work
correctly, added a check to imap_fetch(), although strictly speaking it
is protected by the call from imap_perform().
The option needs to be set on the SSL socket. Setting it on the model
has no effect. Note that the non-blocking mode is still not enabled
for the handshake because the code is not yet ready for that.
Commit 26eaa83830 introduced the use of S_ISDIR() yet some compilers,
such as MSVC, don't support it, so we must define a substitute using
file flags and a mask.
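The substitute typically amounts to roughly the following, built from
the _S_IFMT mask and _S_IFDIR flag that MSVC's <sys/stat.h> does
provide:

    #include <sys/stat.h>

    #ifndef S_ISDIR
    /* MSVC has no S_ISDIR(); reconstruct it from the file-type mask */
    #define S_ISDIR(m) (((m) & _S_IFMT) == _S_IFDIR)
    #endif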
Commit f4cc54cb47 (shipped as part of the 7.29.0 release) was a
bug fix that introduced a regression: while trying to avoid allowing
directory names, it also forbade "special" files like character
devices, such as "/dev/null" which was used by Oliver who reported
this regression.
Reported by: Oliver Gondža
Bug: http://curl.haxx.se/mail/archive-2013-02/0040.html
If the server hung up the connection without sending a closure alert,
then we'd keep probing the socket for data even though it's dead. Now
we're ready for this situation.
Bug: http://curl.haxx.se/mail/lib-2013-03/0014.html
Reported by: Aki Koskinen
Some state changes would be performed after a failure test that
performed a hard return, whilst others would be performed within a test
for success. Updated the code, for consistency, so all instances are
performed within a success test.
Some state changes would be performed after a failure test that
performed a hard return, whilst others would be performed within a test
for success. Updated the code, for consistency, so all instances are
performed within a success test.
Added imap_custom(), which initiates the custom command processing,
and an associated response handler imap_state_custom_resp(), which
handles any responses by sending them to the client as body data.
All untagged responses with the same name as the first word of the
custom request string are accepted, with the exception of SELECT and
EXAMINE, whose responses cannot be easily identified. An extra check
has been provided for those two commands so that any untagged response
is accepted for them.
Added imap_parse_custom_request() for parsing the CURLOPT_CUSTOMREQUEST
parameter which URL decodes the value and separates the request from
any parameters - This makes it easier to filter untagged responses
by the request command.
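As an illustration, a custom IMAP command could be issued roughly like
this; the server, credentials and the STATUS command are example
values:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com/");
        curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");
        /* untagged STATUS responses are delivered to the client as
           body data */
        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST,
                         "STATUS INBOX (MESSAGES UNSEEN)");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }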
For consistency changed the logic of the imap_state_append_resp()
function to test for an unsuccessful continuation response rather than a
successful one.
The APPEND operation needs to be performed in several steps:
1) We send "<tag> APPEND <mailbox> <flags> {<size>}\r\n"
2) Server responds with a continuation response "+ ...\r\n"
3) We start the transfer and send <size> bytes of data
4) Only now we end the request command line by sending "\r\n"
5) Server responds with "<tag> OK ...\r\n"
This commit performs steps 4 and 5, in the DONE phase, as more
processing is required after the transfer.
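From the libcurl side an APPEND is driven as a plain upload to a
mailbox URL, roughly as in this sketch (server, credentials and file
name are example values):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      FILE *msg = fopen("message.eml", "rb");
      if(curl && msg) {
        long size;
        fseek(msg, 0, SEEK_END);
        size = ftell(msg);
        rewind(msg);

        /* uploading to a mailbox URL makes libcurl issue APPEND */
        curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com/INBOX");
        curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");
        curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
        curl_easy_setopt(curl, CURLOPT_READDATA, msg);
        /* this size ends up in the "{<size>}" literal of the APPEND
           command */
        curl_easy_setopt(curl, CURLOPT_INFILESIZE, size);
        curl_easy_perform(curl);
      }
      if(msg)
        fclose(msg);
      if(curl)
        curl_easy_cleanup(curl);
      return 0;
    }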
Some state changes would be performed after a failure test that
performed a hard return, whilst others would be performed within a test
for success. Updated the code, for consistency, so all instances are
performed within a success test.
Not processing the final FETCH responses was not optimal, not only
because the response code would be ignored but also because it would
leave data unread on the socket, which would prohibit connection reuse.
A typical FETCH response can be broken down into four parts:
1) "* <uid> FETCH (<what> {<size>}\r\n", using continuation syntax
2) <size> bytes of the actual message
3) ")\r\n", finishing the untagged response
4) "<tag> OK ...", finishing the command
Part 1 is read in imap_fetch_resp(), part 2 is consumed in the PERFORM
phase by the transfer subsystem, parts 3 and 4 are currently ignored.
Added a loop to imap_statemach_act() in which Curl_pp_readresp() is
called until the cache is drained. Without this multiple responses
received in a single packet could result in a hang or delay.
RFC 3501 states that "the client MUST be prepared to accept any response
at all times" yet we assume anything received with "* " at the beginning
is the untagged response we want.
Introduced a helper function that checks whether the input looks like a
response to specified command, so that we may filter the ones we are
interested in according to the current state.
Introduced similar handling to the FETCH responses, where even the
untagged data responses are handled by the response handler of the
individual state.
Removed this pointer to a downloaded bytes counter because it was set in
smtp_init() to point to the same variable the transfer functions keep
the count in (k->bytecount), effectively making the code in transfer.c
"*k->bytecountp = k->bytecount" a no-op.
Removed this pointer to a downloaded bytes counter because it was set in
pop3_init() to point to the same variable the transfer functions keep
the count in (k->bytecount), effectively making the code in transfer.c
"*k->bytecountp = k->bytecount" a no-op.
Removed this pointer to a downloaded bytes counter because it was set in
imap_init() to point to the same variable the transfer functions keep
the count in (k->bytecount), effectively making the code in transfer.c
"*k->bytecountp = k->bytecount" a no-op.
From a maintenance point of view the code reads better to view tagged
responses, then untagged followed by continuation responses.
Additionally, this matches the order of responses in POP3.
Updated the mailbox variable to correctly reflect its purpose. The
name mailbox was a leftover from when IMAP and POP3 support was
initially added to curl.
Updated the FETCH command to send the UID and SECTION parsed from the
URL. By default the BODY specifier doesn't include a section, BODY[] is
now sent whereas BODY[TEXT] was previously sent. In my opinion
retrieving just the message text is rarely useful when dealing with
emails, as the headers are required for example, so that functionality
is not retained. It can however be simulated by adding SECTION=TEXT to
the URL.
Also updated test801 and test1321 due to the BODY change.
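As a rough illustration (URL syntax per RFC 5092; server, mailbox and
UID are example values), the old text-only behaviour can be requested
again like this:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");
        /* without the SECTION part the whole message (BODY[]) is
           fetched; adding SECTION=TEXT restores the previous
           text-only behaviour */
        curl_easy_setopt(curl, CURLOPT_URL,
                         "imap://imap.example.com/INBOX/;UID=57/;SECTION=TEXT");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }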
Removed user and passwd from the SMTP struct as these cannot be set on
a per-request basis and are leftover from legacy FTP code.
Changed some comments still using FTP terminology.
Removed user and passwd from the POP3 struct as these cannot be set on
a per-request basis and are leftover from legacy FTP code.
Changed some comments still using FTP terminology.
Moved the mailbox and custom request variables from the per-connection
struct pop3_conn to the new per-request struct and fixed references
accordingly.
Created a new IMAP structure and changed the type of the imap proto
variable in connectdata from FTP* to the new IMAP*.
Moved the mailbox variable from the per-connection struct imap_conn to
the new per-request struct and fixed references accordingly.
Moved the clean-up of the mailbox variable from imap_disconnect() to
imap_done() as this variable is allocated in the do phase, yet would
only have been freed once if multiple selects were performed on a
single connection.
Always interpret the pointer passed with the CURLOPT_WRITEDATA or
CURLOPT_READDATA options of curl_easy_setopt() as a void pointer in
order to avoid problems in environments where FILE and void pointers
have non-trivial conversion.
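For example, a write callback can receive an arbitrary application
structure through that void pointer; the names below are illustrative:

    #include <stdio.h>
    #include <curl/curl.h>

    struct counter {
      size_t total;
    };

    /* userdata is exactly the void pointer given to CURLOPT_WRITEDATA;
       no FILE * conversion is assumed anywhere */
    static size_t write_cb(char *ptr, size_t size, size_t nmemb,
                           void *userdata)
    {
      struct counter *c = userdata;
      (void)ptr;
      c->total += size * nmemb;
      return size * nmemb;
    }

    int main(void)
    {
      struct counter c = {0};
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.org/");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, (void *)&c);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        printf("downloaded %zu bytes\n", c.total);
      }
      return 0;
    }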
Use Curl_pp_moredata() in Curl_pp_multi_statemach() to check if there is
more data to be received, rather than the socket state, as a task could
hang waiting for more data from the socket itself.
A simple function to test whether the PP is not sending and there are
still more data in its receiver cache. This will be later utilized to:
1) Change Curl_pp_multi_statemach() and Curl_pp_easy_statemach() to
not test socket state and just call user's statemach_act() function
when there are more data to process, because otherwise the task would
just hang, waiting for more data from the socket.
2) Allow PP users to read multiple responses by looping as long as there
are more data available and current phase is not finished.
(Currently needed for correct processing of IMAP SELECT responses.)
The attempt to use gai_strerror() or an alternative function didn't work as
the 'sock_error' field didn't contain the proper error code. But since
this hasn't been reported and thus isn't really a big deal I decided to
just scrap the whole attempt to output the detailed resolver error and
instead remain with just stating that the resolving of the name failed.
It seems older gcc installations (at least) will cause warnings if we
name a variable 'wait'. Now changed to 'block' instead.
Reported by: Jiří Hruška
Bug: http://curl.haxx.se/mail/lib-2013-02/0247.html
Apple made a number of changes to Xcode 4. The SDKs were moved, the entire
Developer folder was moved, and PowerPC support was removed. The script
will now adapt to those changes and should be future-proofed against
additional changes in case Apple moves the Developer folder ever again.
Also, the minimum OS X version compiler option was removed, so that the
framework can be built against the latest SDK but still run in older cats.
... since they're not used by the easy interface really, I wanted to
remove the association. Also, I unified the pingpong statemachine driver
into a single function with a 'wait' argument: Curl_pp_statemach.
A call to Curl_ssl_connect() was accidentally left in when the SSL/TLS
connection layer was reworked in 7.29. Not only would this cause the
connection to block but had the additional overhead of calling the
non-blocking connect a little bit later.
This function was only used twice, both in places where performance
isn't crucial (socks + if2ip). Removing the use of this function removes
the need to have our private version for systems without it == reduced
amount of code.
Also, in the SOCKS case it is clearly better to fail gracefully rather
than to truncate the results.
This work was triggered by a bug report on the strlcat prototype in
strequal.h.
strlcat was added in commit db70cd28 in February 2001!
Bug: http://curl.haxx.se/bug/view.cgi?id=1192
Reported by: Jeremy Huddleston