curl that uses the new CURLOPT_FTP_SSL_CCC option in libcurl. If enabled, it
will make libcurl shut down SSL/TLS after the authentication is done on an
FTP-SSL operation.
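
A minimal sketch of how an application could switch this on; the host name
and file path are placeholders, and the plain 1L value is simply the most
basic way to enable the option:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
      /* request SSL/TLS for the control connection */
      curl_easy_setopt(curl, CURLOPT_FTP_SSL, (long)CURLFTPSSL_CONTROL);
      /* new: shut the SSL/TLS layer down again once authenticated */
      curl_easy_setopt(curl, CURLOPT_FTP_SSL_CCC, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }
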
downloaded data in two buffers, just to be able to deal with a special HTTP
pipelining case. That is now only activated for pipelined transfers. In
Matt's case, it showed as a considerable performance difference.
(http://curl.haxx.se/bug/view.cgi?id=1603712) (known bug #36): --limit-rate
(CURLOPT_MAX_SEND_SPEED_LARGE and CURLOPT_MAX_RECV_SPEED_LARGE) are broken
on Windows (since 7.16.0, but that is when they were introduced, as prior to
that the limiting logic was done in the application only and not in the
library). It was actually also broken on select()-based systems (as opposed
to poll()) but we have had no such reports. We now use select(), Sleep() or
delay() properly to sleep a while, without waiting for any input or output,
when the rate limiting is activated with the easy interface.
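
For reference, a hedged sketch of how an application enables this
library-side rate limiting; the URL and the 10000 bytes/second cap are
made-up values:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/bigfile");
      /* cap the download speed; the option takes a curl_off_t */
      curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE,
                       (curl_off_t)10000);
      /* the easy interface path is the one the fix above concerns */
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }
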
get confused and not acknowledge the 'no_proxy' variable properly once it
had used the proxy and you re-used the same easy handle. I made sure the
proxy name is properly stored in the connect struct rather than the
sessionhandle/easy struct.
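
A rough sketch of the kind of handle re-use that triggered the confusion,
assuming a proxy is picked up from the http_proxy environment variable and
that internal.example.com (a placeholder) is listed in no_proxy:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* first transfer goes through the proxy taken from http_proxy */
      curl_easy_setopt(curl, CURLOPT_URL, "http://www.example.com/");
      curl_easy_perform(curl);

      /* second transfer re-uses the same easy handle; this host is listed
         in no_proxy and must not go through the proxy */
      curl_easy_setopt(curl, CURLOPT_URL, "http://internal.example.com/");
      curl_easy_perform(curl);

      curl_easy_cleanup(curl);
    }
    return 0;
  }
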
(http://curl.haxx.se/bug/view.cgi?id=1618359) and subsequently provided a
patch for it: when downloading two zero-byte files in a row, curl 7.16.0
enters an infinite loop, while curl 7.16.1-20061218 does one additional
unnecessary request.
Fix: During the "Major overhaul introducing http pipelining support and
shared connection cache within the multi handle" change, headerbytecount
was moved to live in the Curl_transfer_keeper structure. But that structure
is reset in the Transfer method, losing the information that we had about
the header size. This patch moves it back to the connectdata struct.
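
For illustration, a minimal sketch of the triggering pattern: two
back-to-back transfers on one re-used handle, where the URLs are made up and
assumed to return zero-byte bodies:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* both responses are assumed to carry zero-byte bodies */
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/empty1.txt");
      curl_easy_perform(curl);

      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/empty2.txt");
      curl_easy_perform(curl); /* the second transfer is where the report
                                  saw the loop / extra request */
      curl_easy_cleanup(curl);
    }
    return 0;
  }
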
authentication (which performs multiple "passes" and authenticates a
connection rather than an HTTP request), and particularly when using the
multi interface, there is a risk that libcurl will re-use the wrong
connection when doing the different passes in the NTLM negotiation and thus
fail to negotiate (in seemingly mysterious ways). A rough sketch of this
setup follows after this list.
36. --limit-rate (CURLOPT_MAX_SEND_SPEED_LARGE and
CURLOPT_MAX_RECV_SPEED_LARGE) are broken on Windows (since 7.16.0, but
that is when they were introduced, as prior to that the limiting logic was
done in the application only and not in the library). This problem is easily
repeated, and it takes a Windows person to fire up his/her debugger in order
to fix it. http://curl.haxx.se/bug/view.cgi?id=1603712
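
As referenced above, a hedged sketch of driving an NTLM-authenticated
request with the multi interface; the URL and credentials are placeholders
and the completion loop is only outlined:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *easy = curl_easy_init();
    CURLM *multi = curl_multi_init();
    int running;

    curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/protected");
    curl_easy_setopt(easy, CURLOPT_USERPWD, "user:secret");
    /* NTLM authenticates the connection, not the single request, so all
       "passes" must go over the very same connection */
    curl_easy_setopt(easy, CURLOPT_HTTPAUTH, (long)CURLAUTH_NTLM);

    curl_multi_add_handle(multi, easy);
    do {
      while(curl_multi_perform(multi, &running) == CURLM_CALL_MULTI_PERFORM)
        ;
      /* a real application would wait on the fd_sets from
         curl_multi_fdset() here instead of busy-looping */
    } while(running);

    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    return 0;
  }
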
something went wrong like it got a bad response code back from the server,
libcurl would leak memory. Added test case 538 to verify the fix.
I also noted that the connection would get cached in that case, which
doesn't make sense since it cannot be re-used when the authentication has
failed. I fixed that issue too at the same time, and also fixed the case
where the path would be "remembered" in vain even though the connection was
about to get closed.
(http://curl.haxx.se/bug/view.cgi?id=1603712) which is about connections
getting cut off prematurely when --limit-rate is used. While I found no such
problems in my tests nor in my reading of the code, I found that the
--limit-rate code was severely flawed (ever since it was moved into the lib
in 7.15.5) when used with the easy interface, and it didn't work as
documented, so I reworked it somewhat and now it works in my tests.
passing a curl_off_t argument to the Curl_read_rewind() function, which
takes a size_t argument. Curl_read_rewind() also had debug code left in it,
and it had been placed in a different source file for no good reason even
though it is only used from one single spot.
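
To illustrate the type concern with a generic (non-libcurl) sketch: on a
32-bit build, a 64-bit curl_off_t passed where a size_t is expected can get
silently truncated, which is why the mismatch deserved attention. The
rewind_bytes() helper below is hypothetical:

  #include <stdio.h>
  #include <curl/curl.h>   /* for curl_off_t */

  /* hypothetical helper with a size_t parameter, standing in for the
     size_t argument of Curl_read_rewind() */
  static void rewind_bytes(size_t nbytes)
  {
    printf("rewinding %lu bytes\n", (unsigned long)nbytes);
  }

  int main(void)
  {
    /* assumes a 64-bit curl_off_t; larger than any 32-bit size_t */
    curl_off_t big = (curl_off_t)5000000000LL;
    rewind_bytes(big); /* silently truncated where size_t is 32 bits */
    return 0;
  }
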
no code present in the library that receives the option. Since it was not
possible to use, we know that no current users exist, and thus we simply
removed it from the docs and made the code always take the default path.
(http://curl.haxx.se/bug/view.cgi?id=1604956) which identified that setting
CURLOPT_MAXCONNECTS to zero caused libcurl to SIGSEGV. Starting now, libcurl
will always internally use no fewer than one entry in the connection cache.
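
A minimal sketch of the call that used to trigger the crash; with this
change the zero is effectively treated as a cache size of one (the URL is a
placeholder):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* previously this could lead to a SIGSEGV during the transfer;
         libcurl now keeps at least one entry in the cache */
      curl_easy_setopt(curl, CURLOPT_MAXCONNECTS, 0L);
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }
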
(http://curl.haxx.se/bug/view.cgi?id=1230118) curl_getdate() did not work
properly for all input dates on Windows. It was mostly seen with some TZ
time zone settings that use DST. Luckily, Martin also provided a fix.
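
For reference, a small sketch of the affected function; the date string is
just an arbitrary RFC 1123 example:

  #include <stdio.h>
  #include <time.h>
  #include <curl/curl.h>

  int main(void)
  {
    /* parse an HTTP-style date string into a time_t (UTC) */
    time_t t = curl_getdate("Sun, 06 Nov 1994 08:49:37 GMT", NULL);
    if(t != -1)
      printf("seconds since the epoch: %ld\n", (long)t);
    return 0;
  }
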
(http://curl.haxx.se/bug/view.cgi?id=1600447) in which he noted that active
FTP connections don't work with the multi interface. The problem here is
that the multi interface state machine has a state during which it can wait
for the data connection to connect, but the active connection is not made in
the same step in the sequence as the passive one is, so it doesn't quite
work for active mode. The active FTP code still uses a blocking function to
allow the remote server to connect.
The fix (work-around is a better word) for this problem is to prematurely
set the boolean saying that the data connection is completed, so that the
"wait for connect" phase ends at once.
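
A hedged sketch of the case this work-around targets: an active FTP download
driven by the multi interface. The URL is a placeholder, the "-" value asks
libcurl to pick the address for its PORT command, and the event loop is only
outlined (a real application would wait on the fd_sets from
curl_multi_fdset()):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *easy = curl_easy_init();
    CURLM *multi = curl_multi_init();
    int running;

    curl_easy_setopt(easy, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
    /* ask for active FTP: send PORT/EPRT instead of using PASV */
    curl_easy_setopt(easy, CURLOPT_FTPPORT, "-");

    curl_multi_add_handle(multi, easy);
    do {
      while(curl_multi_perform(multi, &running) == CURLM_CALL_MULTI_PERFORM)
        ;
      /* a real application would wait for socket activity here instead
         of busy-looping */
    } while(running);

    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    return 0;
  }
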
HTTP upload was disconnected:
"What appears to be happening is that my system (Linux 2.6.17 and 2.6.13) is
setting *only* POLLHUP on poll() when the conditions in my previous mail
occur. As you can see, select.c:Curl_select() does not check for POLLHUP. So
basically what was happening, is poll() was returning immediately (with
POLLHUP set), but when Curl_select() looked at the bits, neither POLLERR nor
POLLOUT was set. This still caused Curl_readwrite() to be called, which
quickly returned. Then the transfer() loop kept continuing at full speed
forever."
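
To illustrate the point of the report with a generic POSIX sketch (not
libcurl code): a poll()-based wait that also inspects POLLHUP in revents, so
a hung-up peer ends the wait instead of letting the caller spin:

  #include <poll.h>
  #include <stdio.h>
  #include <sys/socket.h>
  #include <unistd.h>

  /* wait for 'sockfd' to become writable, but treat POLLHUP/POLLERR as a
     hard stop instead of spinning */
  static int wait_writable(int sockfd, int timeout_ms)
  {
    struct pollfd pfd;
    pfd.fd = sockfd;
    pfd.events = POLLOUT;
    pfd.revents = 0;

    if(poll(&pfd, 1, timeout_ms) > 0) {
      if(pfd.revents & (POLLERR | POLLHUP))
        return -1;   /* peer hung up or error */
      if(pfd.revents & POLLOUT)
        return 1;    /* genuinely writable */
    }
    return 0;        /* timed out */
  }

  int main(void)
  {
    int sv[2];
    if(socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == 0) {
      close(sv[1]);  /* simulate the peer disconnecting */
      printf("wait_writable: %d\n", wait_writable(sv[0], 1000));
      close(sv[0]);
    }
    return 0;
  }
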