(http://curl.haxx.se/mail/lib-2006-02/0154.html) by adding the NTLM hash
function in addition to the LM one and making some other adjustments in the
order the different parts of the data block are sent in the Type-2 reply.
Inspiration for this work was taken from the Firefox NTLM implementation.
I edited the existing 21(!) NTLM test cases to work with these changes. Since
we now properly include the host name in the Type-2 message, the test cases
only compare parts of that chunk.
with the multi interface and multi-part formposts. The fix from February
22nd could make the Curl_done() function get called twice on the same
connection; it was not designed for that and thus tried to call free() on an
already-freed memory area!
callback" error message so I've now made the setting of that callback not be
as critical as before. The function is only used for additional loggging/
trace anyway so a failure just means slightly less data. It should still be
able to proceed and connect fine to the server.
(http://curl.haxx.se/bug/view.cgi?id=1431750) helped me identify and fix two
different but related bugs:
1) Removing an easy handle from a multi handle before the transfer is done
could leave a connection in the connection cache for that handle that is
in a state that isn't suitable for re-use. A subsequent re-use could then
read from a NULL pointer and segfault.
2) When an easy handle was removed from the multi handle, there could be an
outstanding c-ares DNS name resolve request. When the response arrived,
it caused havoc since the connection struct it "belonged" to could've
been freed already.
Now Curl_done() is called when an easy handle is removed from a multi handle
prematurely (that is, before the transfer is completed). Curl_done() also
makes sure to cancel all (if any) outstanding c-ares requests.
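For illustration, this is roughly the usage pattern that used to trigger the
problem; a minimal sketch with a made-up URL:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *easy;
    CURLM *multi;
    int running;

    curl_global_init(CURL_GLOBAL_ALL);
    easy = curl_easy_init();
    curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
    multi = curl_multi_init();
    curl_multi_add_handle(multi, easy);

    /* drive the transfer a little, then give up early */
    curl_multi_perform(multi, &running);

    /* removing the handle before the transfer has completed is the
       case that used to leave a broken cached connection and possibly
       an outstanding c-ares request behind */
    curl_multi_remove_handle(multi, easy);

    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
  }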
type to the already provided type CURLPROXY_SOCKS4.
I added a --socks4 option that works like the current --socks5 option but
uses the SOCKS4 protocol instead.
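In libcurl terms, using a SOCKS4 proxy looks something like this (the proxy
host name is made up):

  curl_easy_setopt(curl, CURLOPT_PROXY, "socks4proxy.example.com");
  curl_easy_setopt(curl, CURLOPT_PROXYPORT, 1080L);
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS4);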
an app can use to let libcurl only connect to a remote host and then extract
the socket from libcurl. libcurl will then not attempt to do any transfer at
all after the connect is done.
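A rough sketch of the intended use, assuming an already-created easy handle
and the CURLINFO_LASTSOCKET info for the socket extraction:

  long sockfd;

  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L);
  curl_easy_perform(curl); /* connects, then returns without transferring */

  /* extract the connected socket and use it directly from the app */
  curl_easy_getinfo(curl, CURLINFO_LASTSOCKET, &sockfd);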
with re-used FTP connections. If the second request on the same connection
was set not to fetch a "body", libcurl could get confused, consider it an
attempt to use a dead connection, and go acting mighty strange.
curl tool with --local-port. It plainly and simply sets the range of ports
to bind the local end of connections to. Implemented due to popular demand.
Not extensively tested; please let me know how it works.
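The corresponding libcurl options would be used something like this
(assuming CURLOPT_LOCALPORT and CURLOPT_LOCALPORTRANGE, added for this
feature; the range is made up):

  /* bind the local end of the connection to a port in 32000-32099 */
  curl_easy_setopt(curl, CURLOPT_LOCALPORT, 32000L);
  curl_easy_setopt(curl, CURLOPT_LOCALPORTRANGE, 100L);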
connection setup as a follow-redirect. It turned out that 1) this fails when
an FTP connection is re-setup and 2) it makes the max-redirs counter behave
wrongly. This fix was not verified since the reporter vanished, but I believe
it is the right fix nonetheless.
even after EPSV returned a positive response code, if libcurl failed to
connect to the port number given in the EPSV response. Obviously some people
are going through protocol-sensitive firewalls (or similar) that don't
understand EPSV and then don't allow the second connection unless PASV was
used. This also called for a minor fix of test case 238.
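libcurl now falls back to PASV by itself, but an application that already
knows EPSV won't get through can also skip it up front with the pre-existing
CURLOPT_FTP_USE_EPSV option:

  /* don't even try EPSV, go straight to PASV */
  curl_easy_setopt(curl, CURLOPT_FTP_USE_EPSV, 0L);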
(CURLOPT_FTPPORT) didn't work for IPv6-enabled curls if the IP wasn't a
"native" IP, while it worked fine for IPv6-disabled builds!
In the process of fixing this, I removed the support for LPRT since I can't
think of many reasons to keep doing it, and asking on the mailing list didn't
reveal anyone else who could either. The code that sends EPRT and PORT is
now also a lot simpler than before (IMHO).
into a counter, and thus you can now do multiple curl_global_init() calls
and are then supposed to make the same number of calls to
curl_global_cleanup().
Bryan also updated the docs accordingly.
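In other words, init/cleanup calls can now be nested, roughly like this:

  curl_global_init(CURL_GLOBAL_ALL);
  curl_global_init(CURL_GLOBAL_ALL); /* just bumps the counter */

  /* ... use libcurl ... */

  curl_global_cleanup(); /* decrements the counter */
  curl_global_cleanup(); /* counter hits zero: the real cleanup runs */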
given subdirs, libcurl would still "remember" the full path as if it were
the current directory libcurl is in, so that the next curl_easy_perform()
would get really confused if it tried the same path again - as it would not
issue any CWD commands at all, assuming it is already in the "proper" dir.
Starting now, a failed CWD command sets a flag that prevents the path from
being "remembered" after returning.
SSL_CTX_new and others, and also makes functions SSLv23_client_method,
TLSv1_client_method, etc return a 'const' SSL_METHOD pointer. Previous
versions do not use the 'const' qualifier.
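Code that must build against both generations can deal with the const change
along these lines (a sketch, assuming the 'const' qualifier arrived with
OpenSSL 1.0.0, version code 0x10000000L):

  #include <openssl/ssl.h>

  #if OPENSSL_VERSION_NUMBER >= 0x10000000L
    /* newer OpenSSL: method functions return const pointers */
    const SSL_METHOD *req_method = SSLv23_client_method();
  #else
    SSL_METHOD *req_method = SSLv23_client_method();
  #endif
    SSL_CTX *ctx = SSL_CTX_new(req_method);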
http://www.greenend.org.uk/rjk/2001/06/poll.html and further tests by Eugene
Kotlyarov, we now know that cygwin's poll returns only POLLHUP on remote
connection closure so we check for that case (too) and re-enable poll for
cygwin builds.
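The check amounts to treating POLLHUP like readable data; a sketch (the
function and parameter names are made up):

  #include <poll.h>

  /* return non-zero if the socket is readable or the peer hung up */
  static int wait_readable(int sockfd, int timeout_ms)
  {
    struct pollfd pfd;
    pfd.fd = sockfd;
    pfd.events = POLLIN;
    if(poll(&pfd, 1, timeout_ms) <= 0)
      return 0;
    /* cygwin's poll may report only POLLHUP on remote closure, so
       check for it in addition to POLLIN */
    return (pfd.revents & (POLLIN | POLLHUP)) != 0;
  }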
version of libcurl with different Windows versions. The current version of
libcurl imports SSPI functions from secur32.dll. However, under Windows NT
4.0 these functions are located in security.dll, under Windows 9x they are
in secur32.dll, and Windows 2000 and XP contain both DLLs (security.dll
just forwards calls to secur32.dll).
Dmitry's patch loads the proper library dynamically depending on the Windows
version. The function InitSecurityInterface() is used to obtain pointers to
all of the SSPI functions in one structure.
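The mechanism looks roughly like this (a sketch using the ANSI SSPI entry
point; the version check and the function name are illustrative):

  #include <windows.h>
  #define SECURITY_WIN32
  #include <security.h>

  static PSecurityFunctionTableA s_pSecFn;

  static int load_sspi(void)
  {
    OSVERSIONINFOA osver;
    HMODULE lib;
    INIT_SECURITY_INTERFACE_A pInitSecurityInterface;

    osver.dwOSVersionInfoSize = sizeof(osver);
    GetVersionExA(&osver);

    /* NT 4.0 keeps the functions in security.dll, everything else
       in secur32.dll */
    if(osver.dwPlatformId == VER_PLATFORM_WIN32_NT &&
       osver.dwMajorVersion == 4)
      lib = LoadLibraryA("security.dll");
    else
      lib = LoadLibraryA("secur32.dll");
    if(!lib)
      return 0;

    pInitSecurityInterface = (INIT_SECURITY_INTERFACE_A)
      GetProcAddress(lib, "InitSecurityInterfaceA");
    if(!pInitSecurityInterface)
      return 0;

    /* one call hands back a table of pointers to all SSPI functions */
    s_pSecFn = pInitSecurityInterface();
    return s_pSecFn != NULL;
  }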
The LDAP code in libcurl can't handle LDAPv3 servers, nor binary attributes
in LDAP objects. So, I made a quick patch to address these problems.
The solution is simple: if we connect to an LDAP server, first try LDAPv3
(which is the preferred protocol as of now) and then fall back to LDAPv2.
In the case of binary attributes, we first convert them to base64, just like
the OpenLDAP client does. The patch uses ldap_get_values_len() instead of
ldap_get_values() to be able to retrieve binary attributes correctly. I
defined the necessary LDAP macros in lib/ldap.c to be able to compile
libcurl without the presence of libldap.
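The two building blocks of the approach look roughly like this (a sketch
against the standard libldap API; the helper names are made up):

  #include <ldap.h>

  static void try_v3_then_v2(LDAP *ld)
  {
    /* prefer LDAPv3, fall back to LDAPv2 */
    int version = LDAP_VERSION3;
    if(ldap_set_option(ld, LDAP_OPT_PROTOCOL_VERSION, &version)
       != LDAP_SUCCESS) {
      version = LDAP_VERSION2;
      ldap_set_option(ld, LDAP_OPT_PROTOCOL_VERSION, &version);
    }
  }

  static void dump_attr(LDAP *ld, LDAPMessage *entry, const char *attr)
  {
    /* length-aware retrieval keeps binary values intact; the plain
       ldap_get_values() assumes NUL-terminated text */
    struct berval **vals = ldap_get_values_len(ld, entry, attr);
    if(vals) {
      int i;
      for(i = 0; vals[i]; i++) {
        /* vals[i]->bv_val / vals[i]->bv_len: base64-encode if binary */
      }
      ldap_value_free_len(vals);
    }
  }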
(http://curl.haxx.se/bug/view.cgi?id=1338648) which really is more of a
feature request, but anyway. It pointed out that --max-redirs could not be
set to 0, which would then make curl return an error code on the first
Location: found. Based on Nis' patch, libcurl now supports CURLOPT_MAXREDIRS
set to 0, or -1 for infinity. Added test case 274 to verify.
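The two new settings in libcurl terms:

  /* follow no redirects at all: fail on the first Location: header */
  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 0L);

  /* or allow an unbounded number of redirects */
  curl_easy_setopt(curl, CURLOPT_MAXREDIRS, -1L);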
#1334338 (http://curl.haxx.se/bug/view.cgi?id=1334338). When reading an SSL
stream from a server and the server requests a "rehandshake", the current
code simply returns this as an error. I have no good way to test this, but
I've added a crude attempt at dealing with this situation slightly better -
it makes a blocking handshake if this happens. It is done like this because
fixing it the "proper" way (handshaking asynchronously) would require quite
some work, and I really need a good way to test it before making such a
change.
(wrongly) sends *two* WWW-Authenticate headers for Digest. While this should
never happen in a sane world, libcurl previously got into an infinite loop
when this occurred. Dave added test 273 to verify this.
(http://curl.haxx.se/bug/view.cgi?id=1299181) that identified a silly
problem with Content-Range: headers when the 'bytes' keyword was written in
any case other than all lowercase! It would cause a segfault!
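For example, a header like "Content-Range: BYTES 0-99/100" would trigger it.
The fix boils down to a case-insensitive keyword match, along these lines
(POSIX strncasecmp() shown here; libcurl uses its own portable equivalent):

  #include <strings.h>

  /* 'ptr' points just past "Content-Range:" in the header */
  if(strncasecmp(ptr, "bytes", 5) == 0) {
    /* parse the range that follows */
  }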
for Windows, that could lead to an Access Violation when the multi interface
was used, due to an issue with how the resolver thread was (and was not)
terminated.
from the command line tool with --ignore-content-length. This will make it
easier to download files from Apache 1.x (and similar) servers that are
still having problems serving files larger than 2 or 4 GB. When this option
is enabled, curl will simply have to wait for the server to close the
connection to signal the end of the transfer. I wrote test case 269 to run a
simple check that this works.
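On the library side this corresponds to CURLOPT_IGNORE_CONTENT_LENGTH
(assuming the tool option maps straight onto it):

  /* ignore the (possibly wrapped-around) Content-Length header and read
     until the server closes the connection */
  curl_easy_setopt(curl, CURLOPT_IGNORE_CONTENT_LENGTH, 1L);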
that made curl run fine on his end. The key was to make sure we do the
SSL/TLS negotiation immediately after the TCP connect is done, and not after
a few other commands have been sent as we did previously. I don't consider
this change necessary to obey the standards; I think this server is pickier
than the specs allow it to be, but I can't see how the modified libcurl code
can cause any problems for those who interpret the standards more liberally.
CURLOPT_COOKIEFILE), add a cookie (with CURLOPT_COOKIELIST), tell it to
write the result to a given cookie jar and then never actually call
curl_easy_perform() - the given file(s) to read were never read, but the
output file was written and thus it caused a "funny" result.
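The sequence that used to misbehave, sketched with made-up file names:

  CURL *curl = curl_easy_init();
  curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "cookies-in.txt");
  curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "cookies-out.txt");
  curl_easy_setopt(curl, CURLOPT_COOKIELIST,
                   "Set-Cookie: name=value; domain=example.com");

  /* note: no curl_easy_perform() at all */
  curl_easy_cleanup(curl); /* writes the jar; the input file is now
                              read as well, as it should be */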
- While doing some tests for the bug above, I noticed that Firefox generates
large numbers (for the expire time) in the cookies.txt file and libcurl
didn't treat them properly. Now it does.
site responds with a bad HTTP response that doesn't contain any header at
all, only a response body, and the write callback returns 0 to abort the
transfer, the abort didn't take proper effect: the write callback would be
called once more anyway.
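Returning a different number than the number of bytes passed in is how a
write callback aborts a transfer (libcurl then fails with
CURLE_WRITE_ERROR); a minimal aborting callback:

  #include <curl/curl.h>

  static size_t abort_cb(void *ptr, size_t size, size_t nmemb, void *data)
  {
    (void)ptr;
    (void)data;
    (void)size;
    (void)nmemb;
    return 0; /* not size*nmemb: tell libcurl to stop the transfer */
  }

  /* ... */
  curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, abort_cb);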
zone name of a daylight savings time was used, for example PDT vs. PST. This
flaw was introduced with the new date parser (11 Sep 2004 - 7.12.2).
Fortunately, no web server or cookie string etc should be using such time
zone names, thus limiting the effect of this bug.
HTTP proxy if an FTP URL was given. libcurl now properly switches to pure HTTP
internally when an HTTP proxy is used, even for FTP URLs. The problem would
also occur with other multi-pass auth methods.
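For the record, the scenario that failed is simply an FTP URL fetched over
an HTTP proxy; the proxy details here are made up:

  curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
  curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:8080");
  /* a multi-pass method such as NTLM is what exposed the bug */
  curl_easy_setopt(curl, CURLOPT_PROXYAUTH, CURLAUTH_NTLM);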