Introducing a number of options to the multi interface that
allow for multiple pipelines to the same host, in order to
optimize the balance between the penalty for opening new
connections and the potential pipelining latency.
Two new options for limiting the number of connections:
CURLMOPT_MAX_HOST_CONNECTIONS - Limits the number of running connections
to the same host. When adding a handle that exceeds this limit,
that handle will be put in a pending state until another handle is
finished, so we can reuse the connection.
CURLMOPT_MAX_TOTAL_CONNECTIONS - Limits the number of connections in total.
When adding a handle that exceeds this limit,
that handle will be put in a pending state until another handle is
finished. The free connection will then be reused, if possible, or
closed if the pending handle can't reuse it.
Several new options for pipelining:
CURLMOPT_MAX_PIPELINE_LENGTH - Limits the pipelining length. If a
pipeline is "full" when a connection is to be reused, a new connection
will be opened if the CURLMOPT_MAX_xxx_CONNECTIONS limits allow it.
If not, the handle will be put in a pending state until a connection is
ready (either free or a pipe got shorter).
CURLMOPT_CONTENT_LENGTH_PENALTY_SIZE - A pipelined connection will not
be reused if it is currently processing a transfer with a content
length that is larger than this.
CURLMOPT_CHUNK_LENGTH_PENALTY_SIZE - A pipelined connection will not
be reused if it is currently processing a chunk larger than this.
CURLMOPT_PIPELINING_SITE_BL - A blacklist of hosts that don't allow
pipelining.
CURLMOPT_PIPELINING_SERVER_BL - A blacklist of server types that don't allow
pipelining.
See the curl_multi_setopt() man page for details.
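A hedged usage sketch follows; the numeric values, hostname and server
string are illustrative examples, not defaults:

#include <curl/curl.h>

static char *site_bl[] = { "no-pipe.example.com", NULL };
static char *server_bl[] = { "LegacyServer/1.0", NULL };

static void setup_multi_limits(CURLM *multi)
{
  /* connection limits */
  curl_multi_setopt(multi, CURLMOPT_MAX_HOST_CONNECTIONS, 2L);
  curl_multi_setopt(multi, CURLMOPT_MAX_TOTAL_CONNECTIONS, 8L);
  /* pipelining tuning */
  curl_multi_setopt(multi, CURLMOPT_MAX_PIPELINE_LENGTH, 3L);
  curl_multi_setopt(multi, CURLMOPT_CONTENT_LENGTH_PENALTY_SIZE, 65536L);
  curl_multi_setopt(multi, CURLMOPT_CHUNK_LENGTH_PENALTY_SIZE, 16384L);
  /* hosts and server types that must never be pipelined */
  curl_multi_setopt(multi, CURLMOPT_PIPELINING_SITE_BL, site_bl);
  curl_multi_setopt(multi, CURLMOPT_PIPELINING_SERVER_BL, server_bl);
}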
When Curl_do() returns failure, the connection pointer could be NULL so
the code path following needs to take that into account.
Bug: http://curl.haxx.se/mail/lib-2013-03/0062.html
Reported by: Eric Hu
Remove internal separated behavior of the easy vs multi interface.
curl_easy_perform() is now using the multi interface itself.
Several minor multi interface quirks and bugs have been fixed in the
process.
Much help with debugging this has been provided by: Yang Tse
This commit renames lib/setup.h to lib/curl_setup.h and
renames lib/setup_once.h to lib/curl_setup_once.h.
Removes the need and usage of a header inclusion guard foreign
to libcurl. [1]
Removes the need and presence of an alarming notice we carried
in old setup_once.h [2]
----------------------------------------
1 - lib/setup_once.h used the __SETUP_ONCE_H macro as its header inclusion
guard up to commit ec691ca3, which changed this to HEADER_CURL_SETUP_ONCE_H.
This single inclusion guard is enough to ensure that inclusion of
lib/setup_once.h done from lib/setup.h is only done once.
Additionally, lib/setup.h has always used the __SETUP_ONCE_H macro to
protect inclusion of setup_once.h even after commit ec691ca3. This
was to avoid a circular header inclusion triggered when building a
c-ares enabled version with c-ares sources available, which also have
a setup_once.h header. Commit ec691ca3 exposes the real nature of the
__SETUP_ONCE_H usage in lib/setup.h: it is a header inclusion guard
foreign to libcurl, belonging to c-ares's setup_once.h.
The renaming this commit does fixes the circular header inclusion,
and as such removes the need for and usage of a header inclusion guard
foreign to libcurl. The __SETUP_ONCE_H macro is no longer used in libcurl.
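For illustration, the renamed guard in lib/curl_setup_once.h now looks
roughly like this (a sketch; only the guard macro name is taken from
this commit):

#ifndef HEADER_CURL_SETUP_ONCE_H
#define HEADER_CURL_SETUP_ONCE_H

/* ... contents of lib/curl_setup_once.h ... */

#endif /* HEADER_CURL_SETUP_ONCE_H */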
2 - Due to the circular interdependency of old lib/setup_once.h and the
c-ares setup_once.h header, old file lib/setup_once.h has carried,
from 2006 up to the present day, an alarming and prominent notice about
the need to keep libcurl's and c-ares's setup_once.h in sync.
Given that this commit fixes the circular interdependency, the need for
and presence of the mentioned notice is removed.
All mentioned interdependencies date back to the old days when
the c-ares project lived inside a curl subdirectory. This commit
removes the last traces of that fact.
This reverts renaming and usage of lib/*.h header files done
28-12-2012, reverting 2 commits:
f871de0... build: make use of 76 lib/*.h renamed files
ffd8e12... build: rename 76 lib/*.h files
This also reverts removal of redundant include guard (redundant thanks
to changes in above commits) done 2-12-2013, reverting 1 commit:
c087374... curl_setup.h: remove redundant include guard
This also reverts renaming and usage of lib/*.c source files done
3-12-2013, reverting 3 commits:
13606bb... build: make use of 93 lib/*.c renamed files
5b6e792... build: rename 93 lib/*.c files
7d83dff... build: commit 13606bbfde follow-up 1
Start of related discussion thread:
http://curl.haxx.se/mail/lib-2013-01/0012.html
Asking for confirmation on pushing this reversion commit:
http://curl.haxx.se/mail/lib-2013-01/0048.html
Confirmation summary:
http://curl.haxx.se/mail/lib-2013-01/0079.html
NOTICE: The list of 2 files that were modified by other intermixed
commits while renamed, and also by at least one of the 6 commits
this one reverts, follows below. These 2 files will exhibit a hole
in their history unless git's '--follow' option is used when viewing
logs.
lib/curl_imap.h
lib/curl_smtp.h
A bundle is a list of all persistent connections to the same host.
The connection cache consists of a hash of bundles, with the
hostname as the key.
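As a sketch of the described layout (field names here are illustrative,
not necessarily the exact internals):

struct connectbundle {
  size_t num_connections;       /* live connections to this host */
  struct curl_llist *conn_list; /* the persistent connections */
};

/* connection cache: a hash table mapping hostname -> connectbundle */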
The benefits may not be obvious, but they are two:
1) Faster search for connections to reuse, since the hash
lookup only finds connections to the host in question.
2) It lays the groundwork for an upcoming patch,
which will introduce multiple HTTP pipelines.
This patch also removes the awkward list of "closure handles",
which were needed to send QUIT commands to the FTP server
when closing a connection.
Now we allocate a separate closure handle and use that
one to close all connections.
This has been tested in a live system for a few weeks, and of
course passes the test suite.
This handling already works with the easy-interface code. When a request
is sent on a reused connection that gets closed by the server at the
same time as the request is sent, it may happen that we manage to send
the request but then discover the broken connection as a RECV_ERROR in
the PERFORM state, and then the request needs to be retried on a fresh
connection. Test 64 broke with 'multi-always-internally'.
DNS cache entries populated with CURLOPT_RESOLVE were not properly freed
again when done using the multi interface.
Test case 1502 added to verify.
Bug: http://curl.haxx.se/bug/view.cgi?id=3575448
Reported by: Alex Gruz
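The usage pattern that leaked before this fix looks roughly like this
(the 'easy' and 'multi' handles and the host mapping are illustrative):

struct curl_slist *resolve =
  curl_slist_append(NULL, "example.com:80:127.0.0.1");
curl_easy_setopt(easy, CURLOPT_RESOLVE, resolve);
curl_multi_add_handle(multi, easy);
/* ... drive the transfer with the multi interface ... */
curl_multi_remove_handle(multi, easy);
curl_slist_free_all(resolve);
/* the injected DNS cache entry is now freed properly when done */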
This is a minor change in behavior after having been pointed out by Mark
Tully and discussed on the list. Initially this case would internally
call poll() with no sockets and a timeout which would equal a sleep for
that specified time.
Bug: http://curl.haxx.se/mail/lib-2012-10/0076.html
Reported by: Mark Tully
During the periods of rate limitation, the speedcheck function wasn't
called and thus the values weren't updated accordingly and it would then
easily trigger wrongly once data got transferred again.
Also, the progress callback's return code was not acknowledged in this
state, so an "abort" return code could get ignored and not have the
documented effect of aborting an ongoing transfer.
Bug: http://curl.haxx.se/mail/lib-2012-09/0081.html
Reported by: Jie He
/*
 * Name:    curl_multi_wait()
 *
 * Desc:    Poll on all fds within a CURLM set as well as any
 *          additional fds passed to the function.
 *
 * Returns: CURLMcode type, general multi error code.
 */
CURL_EXTERN CURLMcode curl_multi_wait(CURLM *multi_handle,
                                      struct curl_waitfd extra_fds[],
                                      unsigned int extra_nfds,
                                      int timeout_ms);
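A hedged usage sketch following the declaration above (no extra fds,
500 ms cap; 'multi_handle' is assumed to be a prepared CURLM handle):

int still_running;
curl_multi_perform(multi_handle, &still_running);
/* block until something happens on the multi handle's own sockets,
   or at most 500 milliseconds */
curl_multi_wait(multi_handle, NULL, 0, 500);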
By reading the ->head pointer and using that instead of the ->size
number to figure out if there's a list remaining, we avoid the (false
positive) clang-analyzer warning that we might dereference a null
pointer.
In many states the easy_conn pointer is referenced and just assumed to
be working. This is an added extra check since analyzing indicates
there's a risk we can end up in these states with a NULL pointer there.
The refactoring of HTTP CONNECT handling in commit 41b0237834 that
made it protocol independent broke it for the multi interface. This fix
introduces better state handling and moves some logic to the
http_proxy.c source file.
Reported by: Yang Tse
Bug: http://curl.haxx.se/mail/lib-2012-03/0162.html
Backpedaled out the funny double-change of state in the multi state
machine by adding a new argument to the do_more() function to signal
completion. This way it can remain in the DO_MORE state properly until
done. Long term, the entire DO_MORE logic should be moved into the FTP
code and be hidden from the multi code as the logic is only used for
FTP.
1 - Two new error codes are introduced.
CURLE_FTP_ACCEPT_FAILED is set whenever ACCEPTing the server's data
connection fails.
CURLE_FTP_ACCEPT_TIMEOUT is set whenever ACCEPTing times out.
Neither of these errors is considered fatal and the control connection
remains OK, because the cause could just be a firewall blocking the
server from connecting to the client.
2 - One new setopt option was introduced.
CURLOPT_ACCEPTTIMEOUT_MS
It sets the maximum amount of time the FTP client will wait for the
server to connect. The internal default accept timeout is 60 seconds.
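A hedged sketch of an active-mode transfer using the new option (the
URL and the 'easy' handle are examples):

curl_easy_setopt(easy, CURLOPT_URL, "ftp://ftp.example.com/file");
curl_easy_setopt(easy, CURLOPT_FTPPORT, "-"); /* PORT, default address */
/* wait at most 10 seconds for the server to connect back to us */
curl_easy_setopt(easy, CURLOPT_ACCEPTTIMEOUT_MS, 10000L);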
If the first name server is not available, the multi interface does
not invoke the socket_cb when the DNS request to the first name server
times out. Ensure that the list of sockets is always updated after
calling Curl_resolver_is_resolved.
This bug can be reproduced if curl is compiled with --enable-ares and
your code uses the multi socket interface and the
CURLMOPT_SOCKETFUNCTION option. To test try:
iptables -I INPUT \
-s $(sed -n -e '/name/{s/.* //p;q}' /etc/resolv.conf)/32 \
-j REJECT
and then run a program which uses the multi-interface.
This extends the fix from commit d7934b8bd4
When the multi state is changed within the multi_runsingle from DOING to
DO_MORE, we didn't immediately start the FTP state machine again. That
then left the FTP state in FTP_STOP. When curl_multi_fdset() was
subsequently called, the ftp_domore_getsock() function would return the
wrong fd info.
Reported by: Gokhan Sengun
After a PORT has been issued, and the multi handle would switch to the
CURLM_STATE_DO_MORE state (which is unique for FTP), libcurl would
return the wrong fdset to wait for when curl_multi_fdset() is
called. The code would blindly assume that it was waiting for a connect
of the second connection, while that isn't true immediately after the
PORT command.
Also, the function multi.c:domore_getsock() was highly FTP-centric and
therefore ugly to keep in protocol-agnostic code. I solved this problem
by introducing a new function pointer in the Curl_handler struct called
domore_getsock() which is only called during the DOMORE state for
protocols that set that pointer.
The new ftp.c:ftp_domore_getsock() function now returns fdset info about
the control connection's command/response handling while such a state is
in use, and goes over to waiting for a writable second connection only
once the commands are done.
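A sketch of the hook as described; the exact signature here is an
assumption:

struct Curl_handler {
  /* ... other per-protocol function pointers ... */

  /* called only during the DOMORE state, for protocols that set it */
  int (*domore_getsock)(struct connectdata *conn,
                        curl_socket_t *socks,
                        int numsocks);
  /* ... */
};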
The original problem could be seen by running test 525 and checking the
time stamps in the FTP server log. I can verify that this fix at least
fixes this problem.
Bug: http://curl.haxx.se/mail/lib-2011-10/0250.html
Reported by: Gokhan Sengun
Prevent modification of the easy handle being added with curl_multi_add_handle()
unless this function actually succeeds.
Run Curl_posttransfer() to allow restoring of SIGPIPE handler when
Curl_connect() fails early in multi_runsingle().
When the progress function returns to cancel the request, we must mark
the connection to get closed and it must go to the DONE state.
do_init() must be called as early as possible so that state variables
for new connections are reset early. Otherwise, the old values could
still be there when a connection was to be disconnected very early,
making it behave wrongly.
Bug: http://curl.haxx.se/mail/lib-2011-10/0006.html
Reported by: Vladimir Grishchenko
If a socket is larger than FD_SETSIZE, avoid using FD_SET() on it, on
the platforms where this is possible.
Bug: http://curl.haxx.se/bug/view.cgi?id=3413274
Reported by: Tim Starling
Just internal stuff...
Curl_safefree is now a macro defined in memdebug.h instead of a function
prototyped in url.h and implemented in url.c, so inclusion of url.h is no
longer required in order to simply use Curl_safefree.
Provide a definition of the macro WHILE_FALSE in setup_once.h in order to
allow other macros such as DEBUGF and DEBUGASSERT, and code using them, to
compile without 'conditional expression is constant' warnings.
The WHILE_FALSE stuff fixes 150+ MSVC compiler warnings.
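A sketch of the construct; the MSVC pragma detail and the DEBUGF shape
shown here are assumptions about the implementation, not the exact
definitions:

#if defined(_MSC_VER)
#define WHILE_FALSE \
  __pragma(warning(push)) \
  __pragma(warning(disable:4127)) \
  while(0) \
  __pragma(warning(pop))
#else
#define WHILE_FALSE  while(0)
#endif

/* a do-while statement macro can then compile warning-free, e.g.: */
#define DEBUGF(x) do { x; } WHILE_FALSE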
When connecting to a socks or similar proxy we do the proxy handshake at
once when we know the TCP connect is completed and we only consider the
"connection" complete after the proxy handshake. This fixes test 564
which is now no longer considered disabled.
Reported by: Dmitri Shubin
Bug: http://curl.haxx.se/mail/lib-2011-04/0127.html
asyn-ares.c and asyn-thread.c are two separate backends that implement
the same (internal) async resolver API for libcurl to use. The backend
is specified at build time.
The internal resolver API is defined in asyn.h for asynch resolvers.
Within multi_socket, when conn was used as a shorthand, data could be
changed and multi_runsingle could modify which connectdata struct to
deal with. This bug has not been included in a public release.
Using 'conn' like that turned out to be ugly. This change is a partial
revert of commit f1c6cd42f4.
Reported by: Miroslav Spousta
Bug: http://curl.haxx.se/bug/view.cgi?id=3265485
The protocol handler struct got a 'flags' field for special information
and characteristics of the given protocol.
This now enables us to move central protocol information such as
CLOSEACTION and DUALCHANNEL from single defines in a central place out
to each protocol's definition. It also made us stop abusing the protocol
field for other info than the protocol, and we could start cleaning up
other protocol-specific things by adding flag bits to set in the
handler struct.
The "protocol" field connectdata struct was removed as well and the code
now refers directly to the conn->handler->protocol field instead. To
make things work properly, the code now always store a conn->given
pointer that points out the original handler struct so that the code can
learn details from the original protocol even if conn->handler is
modified along the way - for example when switching to go over a HTTP
proxy.
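As a hedged sketch of the arrangement (flag names mirror the old
defines; bit values and the fields shown are illustrative):

#define PROTOPT_CLOSEACTION (1<<0) /* needs action before socket close */
#define PROTOPT_DUAL        (1<<1) /* uses two connections, like FTP */

struct Curl_handler {
  const char *scheme;  /* URL scheme, e.g. "ftp" */
  /* ... per-protocol function pointers ... */
  long protocol;       /* CURLPROTO_* bit for this protocol */
  unsigned int flags;  /* PROTOPT_* bits */
};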
Some protocols have to call the underlying functions without regard to
what exact state the socket signals. For example, even if the socket says
"readable", the send function might need to be called while uploading,
or vice versa. This is the case for the libssh2-based protocols, SCP and
SFTP, and we now introduce a define to mark those protocols, making
the multi interface code aware of this concept.
This is another fix to make test 582 run properly.
After a request times out, the connection wasn't properly closed and
prevented from being reused, so subsequent transfers could still
mistakenly get to use the previously aborted connection.
When failing to connect the protocol during the CURLM_STATE_PROTOCONNECT
state, Curl_done() has to be called with the premature flag set TRUE as
for the pingpong protocols this can be important.
When Curl_done() is called with premature == TRUE, it needs to call
Curl_disconnect() with its 'dead_connection' argument set to TRUE as
well so that any protocol handler's disconnect function won't attempt to
use the (control) connection for anything.
This problem caused the pingpong protocols to fail to disconnect when
STARTTLS failed.
Reported by: Alona Rossen
Bug: http://curl.haxx.se/mail/lib-2011-02/0195.html
The code in the toofast state needs to first recalculate the values
before it uses them again, since it may have been a while since it last
did so by the time it reaches this point.
The info about pipe status and expire cleared are clearly debug-related
and not anything mere mortals will or should care about so they are now
ifdef'ed DEBUGBUILD
The generic timeout code must not check easy handles that are already
completed. Going to completed (again) within there risked decreasing the
number of alive handles again and thus it could go negative.
This regression bug was added in 7.21.2 in commit ca10e28f06
It helps to prevent a hangup with some FTP servers in case the idle
session timeout has been exceeded. But it may be useful also for other
protocols that send any quit message on disconnect. Currently used by
FTP, POP3, IMAP and SMTP.
The functions Curl_disconnect() and Curl_done() are both used within the
scope of a single request so they cannot be allowed to use
Curl_expire(... 0) to kill all timeouts as there are some timeouts that
are set before a request that are supposed to remain until the request
is done.
The timeouts are now instead cleared at curl_easy_cleanup() and when the
multi state machine changes a handle to the complete state.
Add a timeout check for handles in the state machine so that they will
time out in all states regardless of what actions may or may not
happen.
Fixed a bug in socket_action introduced recently when looping over timed
out handles: it wouldn't assign the 'data' variable and thus it wouldn't
properly take care of handles.
In the update_timer function, the code now checks if the timeout has
been removed and then it tells the application. Previously it would
always let the remaining timeout(s) just linger to expire later on.
Each easy handle has a list of timeouts, so as soon as the main timeout
for a handle expires, we must make sure to get the next entry from the
list and re-add the handle to the splay tree.
This was attempted previously but was done poorly in my commit
232ad6549a.
I fell over this bug report that mentioned that libcurl could wrongly
send more than one complete message at the end of a transfer. Reading
the code confirmed this, so I've added a new multi state to make it not
happen. The mentioned bug report was made by Brad Jorsch but is (oddly
enough) filed in Debian's bug tracker for the "wmweather+" tool.
Bug: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=593390
When detecting that the send or recv speed exceeds the requested limit,
the multi interface changes state to TOOFAST, and previously there was
no timeout set that would force a recheck; it would rely on the
application to somehow call libcurl anyway. This now sets a timeout for
a suitable future time to check again if the average transfer speed is
then below the threshold again.
Curl_expire() is now expanded to hold a list of timeouts for each easy
handle. Only the closest in time will be the one used as the primary
timeout for the handle and will be used for the splay tree (which sorts
and lists all handles within the multi handle).
When the main timeout has triggered/expired, the next timeout in time
that is kept in the list will be moved to the main timeout position and
used as the key to splay with. This way, all timeouts that are set with
Curl_expire() internally will end up as a proper timeout. Previously any
Curl_expire() that set a _later_ timeout than what was already set was
just silently ignored and thus missed.
Setting Curl_expire() with timeout 0 (zero) will cancel all previously
added timeouts.
Corrects known bug #62.
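A sketch of the resulting semantics ('data' stands for the easy
handle's internal struct; the exact signature is an assumption):

Curl_expire(data, 10);    /* fire in 10 ms - closest, becomes primary */
Curl_expire(data, 5000);  /* fire in 5000 ms - now kept in the list
                             instead of being silently ignored */
Curl_expire(data, 0);     /* zero cancels all timeouts for this handle */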
Instead of looping over all attached easy handles, this now keeps a list
of messages in the multi handle. It allows curl_multi_info_read() to
perform in O(1) no matter how many easy handles are handled. This is
of importance since this function may be polled very frequently by apps
using the multi interface.
The struct used for storing the message for a completed transfer is now
no longer allocated separately but is kept within the main struct kept
for each easy handle so that we avoid one malloc (and the subsequent
free).
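The polling pattern this keeps cheap (standard public API usage; the
'multi' handle is assumed):

CURLMsg *msg;
int msgs_left;
while((msg = curl_multi_info_read(multi, &msgs_left))) {
  if(msg->msg == CURLMSG_DONE)
    fprintf(stderr, "transfer finished: %d\n", (int)msg->data.result);
}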
curl_multi_perform has two phases: run through every easy handle calling
multi_runsingle, and remove expired timers (timer removal).
If a small timer (e.g. 1-10ms) is set during multi_runsingle, then it's
possible that the timer has passed by when the timer removal runs. The
timer which was just added is then removed. This will potentially cause
the timer list to be empty and cause the next call to curl_multi_timeout
to return -1. Ideally, curl_multi_timeout should return 0 in this case.
One way to fix this is to move the struct timeval now = Curl_tvnow(); to
the top of curl_multi_perform. The change does that.
When curl_multi_remove_handle() is called and an easy handle is returned
to the connection cache held in the multi handle, then we cannot allow
CURLINFO_LASTSOCKET to extract it since that will more or less encourage
that the user uses the socket while it can get used by libcurl again.
Without this fix, we'd get a segfault in Curl_getconnectinfo() trying to
dereference the NULL pointer in 'data->state.connc'.
Bug: http://curl.haxx.se/bug/view.cgi?id=3023840
My additional call to Curl_pgrsUpdate() would sometimes get
called even though there's no connection (left) so a NULL pointer
would get passed, causing a segfault.
1) no need to call the progress function twice when in the
CURLM_STATE_TOOFAST state.
2) Make sure that the progress callback's return code is
acknowledged when used
As long as no error is reported, the progress function can get
called. This may be a little TOO often so we should keep an eye
on this and possibly make this conditional somehow.
Igor Novoseltsev reported a problem with the multi socket API and
using timeouts and timers. It boiled down to a problem with
libcurl's use of GetTickCount() internally to figure out the
current time, while Igor's own application code used another
function call.
It made his app call the socket API timeout function a bit
_before_ libcurl would consider the timeout to trigger, and that
could easily lead to timeouts or stalls in the app. It seems
GetTickCount() in general often has no better resolution than
16ms and switching to the alternative function
QueryPerformanceCounter has its share of problems:
http://www.virtualdub.org/blog/pivot/entry.php?id=106
We address this problem by simply having libcurl treat timers
that have already occurred or will occur within 40ms as subject for
treatment. I'm confident that there are other implementations and
operating systems with similarly inaccurate timer functions, so it
makes sense to have this applied generically, and I don't believe we
sacrifice much by adding a 40ms inaccuracy on these timeouts.
Hauke Duden provided an example program that made the multi
interface crash. His example simply used the multi interface and
first did one FTP transfer; after completion it used a second
easy handle and did another FTP transfer on the same FTP server.
This triggered a bug in the "delayed easy handle kill" system
that curl uses: when an FTP connection is left alive it must keep
an easy handle around internally - only for the purpose of having
an easy handle when it later disconnects it. The code assumed
that when the easy handle was removed and an internal reference
was made, that version could be killed later on when a new easy
handle came using the same connection. This was wrong as Hauke's
example showed that the removed handle wasn't killed for real
until later. This caused a double close attempt => segfault.
The problem mentioned on Dec 10 2009
(http://curl.haxx.se/bug/view.cgi?id=2905220) was only partially fixed.
Partially because an easy handle can be associated with many connections in
the cache (e.g. if there is a redirect during the lifetime of the easy
handle). The previous patch only cleaned up the first one. The new fix now
removes the easy handle from all connections, not just the first one.
simply check for CURLM_CALL_MULTI_PERFORM internally. This has the added
benefit that this goes in line with my long-term wishes to get rid of
CURLM_CALL_MULTI_PERFORM altogether from the public API.
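For reference, this is the loop applications have had to write, which
the internal check removes the need for (a sketch; the 'multi' handle
is assumed):

CURLMcode mc;
int still_running;
do {
  mc = curl_multi_perform(multi, &still_running);
} while(mc == CURLM_CALL_MULTI_PERFORM);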