(http://curl.haxx.se/bug/view.cgi?id=2413067) that identified a problem that
would cause libcurl to mark a DNS cache entry "in use" eternally if the
subsequent TCP connect failed. It would thus never get pruned and refreshed
as it should've been.
The curl tool parts are postponed to a later time.
201 - "bug: header data output to the body callback function after set header"
Was probably not a bug; I asked about it but didn't get any response.
202 - "hangs up of application above libcurl" - problems with the multi_socket
Fixes from Igor have been committed and there's currently no pending ones.
removing easy handles from multi handles when the easy handle is/was within
an HTTP pipeline. His bug report #2351653
(http://curl.haxx.se/bug/view.cgi?id=2351653) was also related and was
eventually fixed by a patch by Igor himself.
particular state for the control connection like it did before for implicit
FTPS (libcurl assumed such control connections to be encrypted while some
FTPS servers, such as FileZilla, assume them to be in clear mode). Use the
CURLOPT_USE_SSL option to set your desired level.
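A minimal sketch of setting that option (the URL is illustrative and error
checking is omitted), assuming a libcurl built with TLS support:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.txt");
      /* explicitly request TLS for the control and data connections instead
         of relying on any implicit assumption */
      curl_easy_setopt(curl, CURLOPT_USE_SSL, CURLUSESSL_ALL);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }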
(http://curl.haxx.se/bug/view.cgi?id=2154627) which pointed out that libcurl
uses strcasecmp() in multiple places where it causes failures when the
Turkish locale is used. This is because 'i' and 'I' aren't the same letter in
Turkish, so strcasecmp() on those letters behaves differently there than in
English (or just about all other languages). I thus introduced a totally new
internal function in libcurl (called Curl_ascii_equal) for doing
case-insensitive comparisons of English/ASCII-style strings, so that "file"
and "FILE" match even if the Turkish locale is selected.
fixed a CURLINFO_REDIRECT_URL memory leak and an additional wrong-doing:
Any subsequent transfer with a redirect leaks memory, potentially crashing
the process eventually.
Any subsequent transfer WITHOUT a redirect causes the most recent redirect
that DID occur on some previous transfer to still be reported.
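For reference, a minimal sketch of reading that info back (URL illustrative,
error checking omitted); CURLINFO_REDIRECT_URL returns the URL a redirect
would have taken you to when redirects aren't followed:

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      char *redirect = NULL;
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/old");
      curl_easy_perform(curl);
      curl_easy_getinfo(curl, CURLINFO_REDIRECT_URL, &redirect);
      if(redirect)
        printf("would redirect to: %s\n", redirect);
      curl_easy_cleanup(curl);
    }
    return 0;
  }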
Markus Moeller reported: http://curl.haxx.se/mail/archive-2008-09/0016.html
- recv() errors other than those equivalent to EAGAIN now cause a proper
CURLE_RECV_ERROR to get returned. This made test case 160 fail so I've now
disabled it until we can figure out another way to exercise that logic.
enabling this feature with CURLOPT_CERTINFO for a request using SSL (HTTPS
or FTPS), libcurl will gather lots of server certificate info and that info
can then get extracted by a client after the request has completed with
curl_easy_getinfo()'s CURLINFO_CERTINFO option. Linus Nielsen Feltzing
helped me test and smooth out this feature.
Unfortunately, this feature currently only works with libcurl built to use
OpenSSL.
This feature was sponsored by networking4all.com - thanks!
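A rough sketch of how a client can use this (URL illustrative, error checking
trimmed; the exact curl_easy_getinfo() calling convention for this info has
varied slightly between libcurl versions):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      struct curl_certinfo *ci = NULL;
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_CERTINFO, 1L);
      if(curl_easy_perform(curl) == CURLE_OK &&
         curl_easy_getinfo(curl, CURLINFO_CERTINFO, &ci) == CURLE_OK && ci) {
        int i;
        for(i = 0; i < ci->num_of_certs; i++) {
          struct curl_slist *slist;
          /* each certificate is described by a list of "name:value" strings */
          for(slist = ci->certinfo[i]; slist; slist = slist->next)
            printf("%s\n", slist->data);
        }
      }
      curl_easy_cleanup(curl);
    }
    return 0;
  }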
"Connection: close" and actually close the connection after the
response-body, libcurl could still have outstanding data to send and it
would not properly notice this and stop sending. This caused weirdness and
sad faces. http://curl.haxx.se/bug/view.cgi?id=2080222
Note that there are still reasons to consider libcurl's behavior when
getting a >= 400 response code while sending data, as Craig Perras' note
"http upload: how to stop on error" specifies:
http://curl.haxx.se/mail/archive-2008-08/0138.html
- Logic based on CURL_SIZEOF_CURL_OFF_T and SIZEOF_OFF_T already adjusted.
- Test case 557 already passes on all autobuilds.
- System off_t, or equivalent, size is finally not recorded in curlbuild.h
for this release. SIZEOF_OFF_T from config file is used.
this really hasn't bitten anyone else. The reporter (Felix) suggested closing
it himself and he will get back when (if?) he manages to find a more reliable
way to reproduce the problem.
154 - bug #2041827 "Segfault in http_output_auth w/ FORBID_REUSE (7.18.2)"
connection with the multi interface even if a previous use of it caused a
CURLE_PEER_FAILED_VERIFICATION to get returned. I now make sure that such
failed SSL connections are properly closed.
proved how PUT and POST with a redirect could lead to a "hang" due to the
data stream not being rewound properly when it needed to be, in order to get
sent (again) to the subsequent URL. This is now fixed and these test
cases are no longer disabled.
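As a reminder of the application side of this, here is a sketch (file name and
URL made up, error checking trimmed) of an upload that provides a seek
callback so libcurl can rewind the request body when a redirect requires the
data to be sent again:

  #include <stdio.h>
  #include <curl/curl.h>

  static int my_seek(void *userp, curl_off_t offset, int origin)
  {
    FILE *f = (FILE *)userp;
    return fseek(f, (long)offset, origin) ? CURL_SEEKFUNC_FAIL : CURL_SEEKFUNC_OK;
  }

  int main(void)
  {
    FILE *f = fopen("upload.dat", "rb");
    CURL *curl = curl_easy_init();
    if(f && curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/dest");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);         /* PUT */
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L); /* follow redirects */
      curl_easy_setopt(curl, CURLOPT_READDATA, f);        /* default fread callback */
      curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, my_seek);
      curl_easy_setopt(curl, CURLOPT_SEEKDATA, f);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    if(f)
      fclose(f);
    return 0;
  }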
with -C - sent garbage in the Content-Range: header. I fixed this problem by
making sure libcurl always sets the size of the _entire_ upload if an app
attempts to do resumed uploads since libcurl simply cannot know the size of
what is currently at the server end. Test 1041 is no longer disabled.
Rebooting the Solaris system, releasing allocated memory and swap,
has allowed buildconf and configure to complete successfully. Further
tests on the system might allow determination of the problem's origin.
Solaris autobuilds succeeded on August 2 and 3.
146 - Yehoshua Hershberg's re-using of connections that failed with
CURLE_PEER_FAILED_VERIFICATION
147 - PHP's bug report #43158 (http://bugs.php.net/bug.php?id=43158) identifies
a true bug in libcurl built with OpenSSL.
because at the current point in time I think the benefit of adding that new
return code is very slim and it is a lot of work to introduce new return codes
(for docs and maintenance etc).
I added "145 - Phil Blundell's CURLOPT_SCOPE patch/work" since I want it
sorted/committed.
GET simply because previously when you set CURLOPT_NOBODY to TRUE first and
then FALSE you'd end up in a broken state where an HTTP request would do a
HEAD but still act a lot like a GET and hang waiting for the content etc.
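A tiny sketch of the toggle that now works (URL illustrative, error checking
omitted):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);  /* HEAD, no body expected */
      curl_easy_perform(curl);
      curl_easy_setopt(curl, CURLOPT_NOBODY, 0L);  /* back to a GET with a body */
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }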
121 - Kaspar Brand's and Guenter Knauf's work on the TLS extension Server Name
Indication is now committed
122 - Progress callback not called during failed socket connect with the multi
interface, is now simply pending a closure since no feedback has been
received lately.
Added:
123 - Mike Protts' SFTP resume download
124 - Anatoli Tubman's fix for a Negotiate: crash
125 - Michal Marek's typechecker-gcc work
previously had a number of flaws, perhaps most notably when an application
fired up N transfers at once, as they then wouldn't pipeline anywhere near as
nicely as one would expect... Test case 530 was also updated to take the
improved functionality into account.
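For reference, a sketch of the kind of setup this exercises: pipelining
enabled on the multi handle before several transfers are added at once (URLs
illustrative; a real application would wait on the sockets with select() or
similar instead of busy-looping):

  #include <curl/curl.h>

  int main(void)
  {
    CURLM *multi = curl_multi_init();
    CURL *easy[3];
    int i, running = 0;

    curl_multi_setopt(multi, CURLMOPT_PIPELINING, 1L);

    for(i = 0; i < 3; i++) {
      easy[i] = curl_easy_init();
      curl_easy_setopt(easy[i], CURLOPT_URL, "http://example.com/");
      curl_multi_add_handle(multi, easy[i]);
    }

    do {
      curl_multi_perform(multi, &running);
    } while(running);

    for(i = 0; i < 3; i++) {
      curl_multi_remove_handle(multi, easy[i]);
      curl_easy_cleanup(easy[i]);
    }
    curl_multi_cleanup(multi);
    return 0;
  }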
--keepalive-time to curl to set the keepalive probe interval. I also took
the opportunity to rename the recently added no-keep-alive option to
no-keepalive to keep the naming consistent and to avoid getting two dashes in
these option names. Eric also provided an update to the man page for the new
option.
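For example (host name illustrative only):

  curl --keepalive-time 300 ftp://example.com/file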
the same state struct as the host auth, so both could never be used at the
same time! I fixed it (without being able to test it) to use two separate
structs to allow authentication using Negotiate on host and proxy
simultaneously.
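A sketch of what the fix makes possible (proxy address, URL and the
empty-credential idiom are illustrative only):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/protected");
      curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:8080");
      /* Negotiate towards the target host... */
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_GSSNEGOTIATE);
      curl_easy_setopt(curl, CURLOPT_USERPWD, ":");
      /* ...and towards the proxy, on the very same handle */
      curl_easy_setopt(curl, CURLOPT_PROXYAUTH, CURLAUTH_GSSNEGOTIATE);
      curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, ":");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }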
a response that was larger than 16KB is now improved slightly, so that the
16KB restriction now applies to the headers only and it should be a rare
situation where the response headers exceed 16KB. Thus, I consider #47 fixed
and the header limitation is now known as known bug #48.