The problem mentioned on Dec 10 2009
(http://curl.haxx.se/bug/view.cgi?id=2905220) was only partially fixed.
Partially because an easy handle can be associated with many connections in
the cache (e.g. if there is a redirect during the lifetime of the easy
handle). The previous patch only cleaned up the first one; the new fix
removes the easy handle from all connections in the cache, not just the
first.
ran into some issues with the GSSAPI tests in configure.ac. The tests first
try to determine the include dirs and libs and set CPPFLAGS and LIBS
accordingly. They then check for the headers and finally set LIBS a second
time, causing the libs to be included twice. The first setting of LIBS seems
redundant and should be left out, since the first part is otherwise just
about finding headers.
My second issue is that 'krb5-config --libs gssapi' on Darwin is less than
useless and returns junk that, while it happens to work with gcc, causes
clang to choke. For example, --libs returns $CFLAGS along with the libs,
which is plain wrong. Simply setting 'LIBS="$LIBS -lgssapi_krb5
-lresolv"' on Darwin is sufficient.
makes sure that when using sub-second timeouts, there's no final bad 1000ms
wait. Previously, a sub-second timeout would often make the elapsed time end
up rounded up to the nearest full second (e.g. 1s for a 200ms timeout).
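Sub-second timeouts here are the ones given in milliseconds, for example
with CURLOPT_TIMEOUT_MS; a minimal sketch (assuming 'curl' is an
initialized easy handle and the URL and 200ms value are placeholders):

  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  /* 200ms total timeout; this used to risk a final wait that in effect
     rounded the elapsed time up to a full second */
  curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 200L);
  curl_easy_perform(curl);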
the global timeout if set. Also, as reported in bug report #2956437 by Ryan
Chan, the time stamp used as the basis for the per-command timeout was not
set properly in the DONE phase for FTP (and not for SMTP), so I fixed that
just now. This was a regression compared to 7.19.7, due to the conversion of
the FTP code over to the generic pingpong concepts.
http://curl.haxx.se/bug/view.cgi?id=2956437
(http://curl.haxx.se/bug/view.cgi?id=2958074) that curl on Windows with
option --trace-time did not use local time when timestamping trace lines.
This could also happen on other systems, depending on the time source.
properly in angle brackets. Recipients provided with CURLOPT_MAIL_RCPT now
get wrapped in angle brackets automatically by libcurl, unless the recipient
already starts with an angle bracket, in which case the app is assumed to
deal with that properly on its own.
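A sketch of what this looks like from an application (addresses and server
are placeholders, 'curl' is an initialized easy handle):

  struct curl_slist *rcpt = NULL;

  /* bare address: libcurl now adds the angle brackets itself */
  rcpt = curl_slist_append(rcpt, "user@example.com");
  /* already bracketed: passed through untouched */
  rcpt = curl_slist_append(rcpt, "<other@example.com>");

  curl_easy_setopt(curl, CURLOPT_URL, "smtp://mail.example.com");
  curl_easy_setopt(curl, CURLOPT_MAIL_FROM, "<sender@example.com>");
  curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, rcpt);
  curl_easy_perform(curl);

  curl_slist_free_all(rcpt);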
full DATA has been sent, and I modified the test SMTP server to also send
that response. As usual, the DONE operation that is made after a completed
transfer is still not doable in a non-blocking way, so this wait for the 250
response is unfortunately done in a blocking manner.
in the same RCPT TO line, when they should each be sent in a separate RCPT
TO command. I updated test case 802 to verify this.
- I also fixed a bad use of my_setopt_str() for CURLOPT_MAIL_RCPT in the
curl tool, which made the --libcurl feature try to output the option as a
string and could lead to crashes.
to automatically uncompress it with the CURLOPT_ENCODING option, libcurl
could wrongly provide the write callback with more data than the documented
maximum amount. An application could thus get tricked into badness if it
trusted libcurl itself to enforce the maximum limit (as it is documented to
do).
This is further detailed and explained in the libcurl security advisory
20100209 at
http://curl.haxx.se/docs/adv_20100209.html
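For illustration, an application write callback that enforces the
documented bound on its own side could look like this sketch (the check
uses CURL_MAX_WRITE_SIZE from curl.h):

  #include <curl/curl.h>

  static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userp)
  {
    size_t total = size * nmemb;
    (void)ptr;
    (void)userp;

    /* the documented maximum for a single invocation; the bug let
       decompressed data exceed it */
    if(total > CURL_MAX_WRITE_SIZE)
      return 0; /* returning less than 'total' aborts the transfer */

    /* ... consume 'total' bytes starting at 'ptr' ... */
    return total;
  }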
simply check for CURLM_CALL_MULTI_PERFORM internally. This has the added
benefit of being in line with my long-term wish to get rid of
CURLM_CALL_MULTI_PERFORM altogether from the public API.
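The application-side loop this makes unnecessary looks roughly like this
(a sketch, 'multi' being a CURLM handle with easy handles added):

  CURLMcode mrc;
  int running;

  do {
    mrc = curl_multi_perform(multi, &running);
  } while(mrc == CURLM_CALL_MULTI_PERFORM);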
HTTP Cookie: header _needs_ to be sorted on the path length in the cases
where two cookies using the same name are set more than once using
(overlapping) paths. Realizing this, identically named cookies must be
sorted correctly. But detecting only identically named cookies and handling
them individually is harder than just blindly and unconditionally sorting
all cookies by path length. All major browsers already do this too, which
brings our behavior one step closer to theirs in the cookie area. Test case
8 was the only one that broke due to this change, and I updated it
accordingly.
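The idea, expressed as a qsort() comparator over a hypothetical cookie
struct (a sketch, not libcurl's actual code):

  #include <stdlib.h>
  #include <string.h>

  struct cookie {
    char *name;
    char *path;
  };

  /* longest path first: "/dir/sub" before "/dir" before "/" */
  static int path_len_cmp(const void *a, const void *b)
  {
    const struct cookie *ca = a;
    const struct cookie *cb = b;
    return (int)strlen(cb->path) - (int)strlen(ca->path);
  }

  /* qsort(cookies, ncookies, sizeof(struct cookie), path_len_cmp); */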
again when downloading files over FTP using ASCII and it turns out that the
final size of the file is not the same as the initial size the server
reported. This is very common, since servers don't take the newline
conversions into account.
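ASCII mode here means transfers requested with CURLOPT_TRANSFERTEXT (or
-B/--use-ascii in the tool); a sketch, with the URL a placeholder and
'curl' an initialized easy handle:

  curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
  /* text/ASCII mode: newline conversion can make the delivered byte
     count differ from the size the server announced */
  curl_easy_setopt(curl, CURLOPT_TRANSFERTEXT, 1L);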
transfers: curl_multi_fdset() would return -1 and not set any file
descriptors several times during a transfer of a single file. It turned out
to be due to two different flaws, both now fixed. Gil's excellent recipe
helped me nail this.
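For reference, a correct caller must handle the documented max_fd == -1
case by waiting a short while instead of calling select(); a sketch
(assuming POSIX headers and 'multi' being a CURLM handle):

  fd_set rd, wr, ex;
  int maxfd = -1;
  struct timeval tv = { 1, 0 };

  FD_ZERO(&rd);
  FD_ZERO(&wr);
  FD_ZERO(&ex);
  curl_multi_fdset(multi, &rd, &wr, &ex, &maxfd);

  if(maxfd == -1) {
    /* no sockets to watch yet: sleep briefly, then try again */
  }
  else
    select(maxfd + 1, &rd, &wr, &ex, &tv);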
ossl_connect_step3() increments an SSL session handle reference counter on
each call. When sessions are re-used this reference counter may be
incremented many times, but it will be decremented only once when done (by
Curl_ossl_session_free()); and the internal OpenSSL data will not be freed
if this reference count remains positive. When a session is re-used the
reference counter should be corrected by explicitly calling
SSL_SESSION_free() after each consecutive SSL_get1_session() to avoid
introducing a memory leak.
(http://curl.haxx.se/bug/view.cgi?id=2926284)
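In OpenSSL API terms the rule is that every SSL_get1_session() takes its
own reference, so every call needs a matching SSL_SESSION_free(); a sketch
('ssl' is an established SSL connection handle):

  SSL_SESSION *sess = SSL_get1_session(ssl); /* refcount + 1 */

  /* ... compare or store the session for later re-use ... */

  SSL_SESSION_free(sess); /* release this call's reference */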
versions --ftp-ssl and --ftp-ssl-reqd, as these options are now used to
control SSL/TLS for IMAP, POP3 and SMTP in addition to FTP. The old option
names still work but the new ones are the preferred ones (listed and
documented).
command is a special "hack" used by the drftpd server, but even though it is
a custom extension I've deemed it fine to add to libcurl since this server
seems to survive and people keep using it and want libcurl to support
it. The new libcurl option is named CURLOPT_FTP_USE_PRET, and it is also
usable from the curl tool with --ftp-pret. Using this option on a server
that doesn't support this command will make libcurl fail.
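From an application it is a single option (a sketch, assuming 'curl' is an
initialized easy handle):

  /* send PRET before PASV/EPSV, as drftpd wants */
  curl_easy_setopt(curl, CURLOPT_FTP_USE_PRET, 1L);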
detects and uses proxies based on the environment variables. If the proxy
was given as an explicit option it worked, but due to a setup order mistake
proxies would not be used correctly for a few protocols when picked up from
'[protocol]_proxy'. Obviously this broke after 7.19.4. I now also added
test case 1106 that verifies this functionality.
(http://curl.haxx.se/bug/view.cgi?id=2913886)
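In other words, a transfer that never sets CURLOPT_PROXY should again pick
up the proxy from the environment; a sketch ('curl' is an initialized easy
handle, the proxy and URL are placeholders):

  /* with e.g. http_proxy=http://proxy.example.com:8080 exported in the
     environment and no CURLOPT_PROXY set, libcurl uses that proxy */
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  curl_easy_perform(curl);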
on FTP errors in the transient 5xx range, but transient FTP errors are in
the 4xx range. The code itself only retried on 5xx errors that occurred _at
login_.
Now the retry code retries on all FTP transfer failures that ended with a
4xx response.
(http://curl.haxx.se/bug/view.cgi?id=2911279)
accessing already freed memory and thus crash when using HTTPS (with
OpenSSL), the multi interface, CURLOPT_DEBUGFUNCTION and a certain order
of cleaning things up. I fixed it.
(http://curl.haxx.se/bug/view.cgi?id=2891591)
curl_easy_setopt with CURLOPT_HTTPHEADER, the library should set
data->state.expect100header accordingly - the current code (in 7.19.7 at
least) doesn't handle this properly. Martin Storsjo provided the fix!
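The case in question is when the application supplies the header itself,
along these lines (a sketch; URL and data are placeholders, 'curl' is an
initialized easy handle):

  struct curl_slist *hdrs = NULL;
  hdrs = curl_slist_append(hdrs, "Expect: 100-continue");

  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "name=value");
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
  /* libcurl must notice the header and set expect100header itself */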