accessing already freed memory and thus crashing when using HTTPS (with
OpenSSL), the multi interface and CURLOPT_DEBUGFUNCTION with a certain
order of cleaning things up. I fixed it.
(http://curl.haxx.se/bug/view.cgi?id=2891591)
strdup() that could lead to a segfault if it returned NULL. I extended his
suggested patch to have Curl_retry_request() return a regular return code
and to check it better.
Fix SIGSEGV on free'd easy_conn when pipe unexpectedly breaks
Fix data corruption issue with re-connected transfers
Fix use after free if we're completed but easy_conn not NULL
With the curl memory tracking feature decoupled from the debug build feature,
CURLDEBUG and DEBUGBUILD preprocessor symbol definitions are used as follows:
CURLDEBUG used for curl debug memory tracking specific code (--enable-curldebug)
DEBUGBUILD used for debug enabled specific code (--enable-debug)
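As a minimal sketch of how the two symbols are meant to guard code (the
bodies here are placeholders, not actual libcurl source):

    #ifdef CURLDEBUG
      /* memory tracking specific code (--enable-curldebug) */
    #endif

    #ifdef DEBUGBUILD
      /* debug enabled specific code (--enable-debug) */
    #endif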
KEEP_RECV to better match the general terminology: receive and send are what
we do with the (remote) servers; we read and write from and to the local fs.
(http://curl.haxx.se/bug/view.cgi?id=2784055) identifying a problem with
connecting to SOCKS proxies when using the multi interface. It turned out
to hardly work at all previously: we need to wait for the TCP connect to
be properly verified before doing the SOCKS magic.
There's still a flaw in the FTP code for this.
FTP with the multi interface: when a transfer fails, like when aborted by a
write callback, the control connection was wrongly closed and thus not
re-used properly.
This change is also an attempt to clean up the code somewhat in this area,
as the FTP code now attempts to keep (better) track of the pending responses
that need to be read in ftp_done().
When using the multi interface over HTTP and the server returns a Location
header, the running easy handle will get stuck in the CURLM_STATE_PERFORM
state, leaving the external event loop stuck waiting for data from the
incoming socket (when using the curl_multi_socket_action stuff). While this
bug was pretty hard to find, it seems to require only a one-line fix. The
break statement on line 1374 in multi.c caused the function to skip the call
to multistate().
How to reproduce this bug? Well, that's another question. evhiperfifo.c in
the examples directory chokes on this bug only _sometimes_, probably
depending on how fast the URLs are added. One way of testing the bug out is
writing to hiper.fifo from more than one source at the same time.
pipelining, as libcurl could then easily get confused and A) work on the
handle that was not "first in queue" on a pipeline, or even B) tell the app
to REMOVE a socket while it was in use by a second handle in a pipeline. Both
errors caused applications to hang or stall.
was actually ready to get done, as the internal time resolution is higher
than the returned millisecond timer. It could therefore cause applications
running on fast processors to do short bursts of busy-loops.
curl_multi_timeout() will now only return 0 if the timeout has actually
already triggered.
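As a sketch of how an application typically consumes that value when
driving select() (the loop scaffolding and the fallback default are
assumptions, not part of the fix):

    long timeout_ms = -1;
    struct timeval timeout;

    curl_multi_timeout(multi_handle, &timeout_ms);
    if(timeout_ms < 0)
      timeout_ms = 1000; /* libcurl has no timeout set, pick a default */

    timeout.tv_sec = timeout_ms / 1000;
    timeout.tv_usec = (timeout_ms % 1000) * 1000;
    /* pass &timeout to select() along with the fd sets */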
now has an improved ability to do right when the multi interface (both
"regular" and multi_socket) is used for SCP and SFTP transfers. This should
result in (much) less busy-loop situations and thus less CPU usage with no
speed loss.
removing easy handles from multi handles when the easy handle is/was within
an HTTP pipeline. His bug report #2351653
(http://curl.haxx.se/bug/view.cgi?id=2351653) was also related and was
eventually fixed by a patch by Igor himself.
(http://curl.haxx.se/bug/view.cgi?id=2351645) that identified a problem with
the multi interface that occurred if you removed an easy handle while in
progress and the handle was used in an HTTP pipeline.
eventually identified a flaw in how the multi_socket interface in some cases
failed to call the timeout callback when easy handles are removed and
added within the same millisecond.
libcurl to not properly tell the app when a socket was closed (when the
name resolve performed by c-ares completes) and then immediately re-created
and put to
use again (for the actual connection). Since the closure will make the
"watch status" get lost in several event-based systems libcurl will need to
tell the app about this close/re-create case.
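As a rough sketch, these notifications arrive in the app's
CURLMOPT_SOCKETFUNCTION callback; the watch_add()/watch_remove() helpers
below are hypothetical:

    static int sock_cb(CURL *easy, curl_socket_t s, int what,
                       void *userp, void *socketp)
    {
      if(what == CURL_POLL_REMOVE)
        watch_remove(s);    /* hypothetical: stop watching this socket */
      else
        watch_add(s, what); /* hypothetical: watch for CURL_POLL_IN,
                               CURL_POLL_OUT or CURL_POLL_INOUT */
      return 0;
    }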
the curl_multi_socket() API with HTTP pipelining enabled and could lead to
the pipeline basically stalling for a very long period of time until it took
off again.
go straight to DO
we had multiple states for which the internal function returned no socket at
all to wait for, with the effect that libcurl calls the socket callback (when
curl_multi_socket() is used) with REMOVE prematurely (as the socket would
be added again very shortly afterwards)
redirections and thus cannot use CURLOPT_FOLLOWLOCATION easily, we now
introduce the new CURLINFO_REDIRECT_URL option that lets applications
extract the URL libcurl would've redirected to if it had been told to. This
then enables the application to continue to that URL as it thinks is
suitable, without having to re-implement the magic of creating the new URL
from the Location: header etc. Test 1029 verifies it.
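A minimal sketch of using the new option (the URL is a placeholder and
error checking is omitted):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      char *redirect_url = NULL;

      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* CURLOPT_FOLLOWLOCATION deliberately left disabled */
      curl_easy_perform(curl);

      curl_easy_getinfo(curl, CURLINFO_REDIRECT_URL, &redirect_url);
      if(redirect_url)
        printf("would have redirected to: %s\n", redirect_url);

      curl_easy_cleanup(curl);
      return 0;
    }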
such as the CURLOPT_SSL_CTX_FUNCTION one treat that as if it were a
Location: following. The patch that introduced this feature was done for
7.11.0, but
this code and functionality has been broken since about 7.15.4 (March 2006)
with the introduction of non-blocking OpenSSL "connects".
It was a hack to begin with and since it doesn't work and hasn't worked
correctly for a long time and nobody has even noticed, I consider it a very
suitable subject for plain removal. And so it was done.
CONNECT over a proxy. curl_multi_fdset() didn't report back the socket
properly during that state, due to a missing case in the switch in the
multi_getsock() function.
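For reference, a stripped-down sketch of the curl_multi_fdset() call whose
reporting was fixed (the surrounding select() loop is assumed):

    fd_set fdread, fdwrite, fdexcep;
    int maxfd = -1;

    FD_ZERO(&fdread);
    FD_ZERO(&fdwrite);
    FD_ZERO(&fdexcep);

    curl_multi_fdset(multi_handle, &fdread, &fdwrite, &fdexcep, &maxfd);
    /* maxfd is the highest descriptor to pass to select(), or -1 if
       libcurl currently has no socket for the app to wait on */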
previously had a number of flaws, perhaps most notably when an application
fired up N transfers at once, as they then wouldn't pipeline nearly as
nicely as one would expect. Test case 530 was also updated to take the
improved functionality into account.
is initialized at the start of the DO action. I removed the Curl_transfer_keeper
struct completely, and I had to move out a few struct members (that had to
be set before DO or used after DONE) to the UrlState struct. The SingleRequest
struct is accessed with SessionHandle->req.
One of the biggest reasons for doing this was the bunch of duplicate struct
members in HandleData and Curl_transfer_keeper since it was really messy to
keep track of two variables with the same name and basically the same purpose!
do_init() and do_complete() which now are called first and last in the DO
function. It simplified the flow in multi.c and the functions got more
sensible names!
hash function for different hashes, and also expanded the default size for
the socket hash table used in multi handles to greatly enhance speed when
very many connections are added and the socket API is used.
when CURLM_CALL_MULTI_PERFORM is returned from curl_multi_socket*/perform,
to make applications that use only curl_multi_socket() function properly
when adding easy handles "on the fly". Bug report and test app provided by
Michael Wallner.
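The expected application-side loop looks roughly like this sketch (the
variable names are assumptions):

    int running;
    CURLMcode rc;

    do {
      rc = curl_multi_socket(multi_handle, sockfd, &running);
    } while(rc == CURLM_CALL_MULTI_PERFORM);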
function that deprecates the curl_multi_socket() function. Using the new
function, the application tells libcurl what action was found on the
socket it passes in. This gives a significant performance boost as it
allows libcurl to avoid a call to poll()/select() for every call to
curl_multi_socket*().
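A minimal sketch of the newer call (the event-loop context is assumed;
CURL_CSELECT_IN marks the socket as readable, CURL_CSELECT_OUT as
writable):

    int running;

    /* sockfd just became readable in the app's event loop */
    curl_multi_socket_action(multi_handle, sockfd, CURL_CSELECT_IN,
                             &running);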
fixing some bugs:
o Don't mix GET and POST requests in a pipeline
o Fix the order in which requests are dispatched from the pipeline
o Fixed several curl bugs with pipelining when the server is returning
chunked encoding:
* Added states to chunked parsing for final CRLF
* Rewind buffer after parsing chunk with data remaining
* Moved chunked header initializing to a spot just before receiving
headers
the multi interface and connection re-use that could make a
curl_multi_remove_handle() ruin a pointer in another handle.
The second problem was less of an actual problem and more of a minor quirk:
the re-using of connections wasn't properly checking if the connection was
marked for closure.
doing an FTP transfer is removed from a multi handle before completion. The
fix also made the "alive counter" correct on "premature removal" for
all protocols.
(http://curl.haxx.se/bug/view.cgi?id=1604956) which identified that setting
CURLOPT_MAXCONNECTS to zero caused libcurl to SIGSEGV. Starting now, libcurl
will always internally use no less than 1 entry in the connection cache.
and while doing so it became apparent that the current timeout system for
the socket API really was a bit awkward, since it became quite some work
to be sure the correct timeout was set.
Jeff then provided the new CURLMOPT_TIMERFUNCTION that is yet another
callback the app can set to get to know when the general timeout time
changes and thus for an application like hiperfifo.c it makes everything a
lot easier and nicer. There's a CURLMOPT_TIMERDATA option too of course in
good old libcurl tradition.
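A sketch of hooking it up (the timer-arming helper and userdata are
hypothetical; the options and callback signature are the public ones):

    static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
    {
      /* timeout_ms is the new timeout to arm; -1 means delete the timer */
      rearm_app_timer(userp, timeout_ms); /* hypothetical helper */
      return 0;
    }

    curl_multi_setopt(multi_handle, CURLMOPT_TIMERFUNCTION, timer_cb);
    curl_multi_setopt(multi_handle, CURLMOPT_TIMERDATA, &app_timer);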
would crash if a bad function sequence was used when shutting down after
using the multi interface (i.e. using easy_cleanup after multi_cleanup), so
precautions have been added to make sure it doesn't crash any more - test
case 529 was added to verify this.
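For reference, a sketch of the safe teardown order libcurl documents:

    curl_multi_remove_handle(multi_handle, easy_handle);
    curl_easy_cleanup(easy_handle);   /* easy handles first... */
    curl_multi_cleanup(multi_handle); /* ...the multi handle last */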