(http://curl.haxx.se/mail/lib-2006-02/0154.html) by adding the NTLM hash
function in addition to the LM one and making some other adjustments in the
order the different parts of the data block are sent in the Type-2 reply.
Inspiration for this work was taken from the Firefox NTLM implementation.
I edited the existing 21(!) NTLM test cases to run fine with these changes.
Since we now properly include the host name in the Type-2 message, the test
cases now only compare parts of that chunk.
with the multi interface and multi-part formposts. The fix from February
22nd could make the Curl_done() function get called twice on the same
connection; it was not designed for that and thus tried to call free() on
an already freed memory area!
an app can use to let libcurl only connect to a remote host and then extract
the socket from libcurl. libcurl will then not attempt to do any transfer at
all after the connect is done.
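A sketch of the intended usage, assuming the option pair involved is
CURLOPT_CONNECT_ONLY and CURLINFO_LASTSOCKET (hypothetical URL):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    long sockfd = -1;
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L);
    if(curl_easy_perform(curl) == CURLE_OK) {
      /* extract the connected socket; the app owns the traffic on it now */
      curl_easy_getinfo(curl, CURLINFO_LASTSOCKET, &sockfd);
      /* ... send()/recv() on sockfd as desired ... */
    }
    curl_easy_cleanup(curl);
    return 0;
  }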
curl tool with --local-port. It plainly and simply sets the range of ports to
bind the local end of connections to. Implemented due to popular demand.
Not extensively tested. Please let me know how it works.
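For example (hypothetical URL; the argument is a single port or a low-high
range):

  curl --local-port 32000-33000 http://example.com/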
(CURLOPT_FTPPORT) didn't work for ipv6-enabled curls if the IP wasn't a
"native" IP, while it worked fine for ipv6-disabled builds!
In the process of fixing this, I removed the support for LPRT since I can't
think of many reasons to keep doing it, and asking on the mailing list didn't
reveal anyone else who could either. The code that sends EPRT and PORT is
now also a lot simpler than before (IMHO).
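As a reminder of the libcurl knob involved, a minimal sketch (hypothetical
URL; the string may be an IP address, a host name, an interface name or "-"
for the default address):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.txt");
    /* request active mode; "-" makes libcurl pick the default address */
    curl_easy_setopt(curl, CURLOPT_FTPPORT, "-");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return 0;
  }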
given subdirs, libcurl would still "remember" the full path as if it were
the current directory libcurl is in, so that the next curl_easy_perform()
would get really confused if it tried the same path again, as it would not
issue any CWD commands at all, assuming it was already in the "proper" dir.
Starting now, a failed CWD command sets a flag that prevents the path from
being "remembered" after returning.
(http://curl.haxx.se/bug/view.cgi?id=1338648) which really is more of a
feature request, but anyway. It pointed out that --max-redirs could not be
set to 0, which would make curl return an error code on the first Location:
found. Based on Nis' patch, libcurl now supports CURLOPT_MAXREDIRS
set to 0, or -1 for infinity. Added test case 274 to verify.
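A minimal sketch of the new semantics (hypothetical URL):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/redirected");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    /* 0 now means "fail on the first Location:", -1 means infinity */
    curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 0L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return 0;
  }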
from the command line tool with --ignore-content-length. This will make it
easier to download files from Apache 1.x (and similar) servers that are
still having problems serving files larger than 2 or 4 GB. When this option
is enabled, curl will simply have to wait for the server to close the
connection to signal the end of the transfer. I wrote test case 269 that runs
a simple test to verify that this works.
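For example (hypothetical URL):

  curl --ignore-content-length http://example.com/hugefile -o hugefile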
.netrc, and when following a Location: the subsequent requests didn't properly
use the auth as found in the netrc file. Added test case 257 to verify my fix.
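As a reminder, the .netrc entries involved look like this (hypothetical host
and credentials):

  machine example.com
  login myuser
  password mysecret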
internally, with code provided by sslgen.c. All SSL-layer-specific code is
then written in ssluse.c (for OpenSSL) and gtls.c (for GnuTLS).
As far as possible, internals should not need to know what SSL layer is in
use. Building with GnuTLS currently makes two test cases fail.
TODO.gnutls contains a few known outstanding issues for the GnuTLS support.
GnuTLS support is enabled with configure --with-gnutls
USE_WINDOWS_SSPI on Windows, and then libcurl will be built to use the native
way to do NTLM. SSPI also allows libcurl to pass on the current user and its
password in the request.
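A sketch of what the single sign-on case could look like, assuming an
SSPI-enabled build where blank user:password credentials tell libcurl to use
the current user's (hypothetical URL):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/protected");
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_NTLM);
    /* assumption: blank credentials make the SSPI build pass on the
       current user and its password */
    curl_easy_setopt(curl, CURLOPT_USERPWD, ":");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return 0;
  }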
curl_easy_perform() invokes. It was previously unlocked at disconnect, which
could mean that it remained locked between multiple transfers. The DNS cache
may not live as long as the connection cache does, as they are separate.
To deal with the lack of DNS (host address) data availability for re-used
connections, libcurl now keeps a copy of the IP address as a string, to be
able to show it even on subsequent requests on the same connection.
present in RFC959... so now (lib)curl supports it as well. --ftp-account and
CURLOPT_FTP_ACCOUNT set the account string. (The server may ask for an
account string after PASS has been sent; the client responds with "ACCT
[account string]".) Added test cases 228 and 229 to verify the functionality.
Updated the test FTP server to support ACCT somewhat.
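A minimal sketch (hypothetical URL, credentials and account string):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.txt");
    curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");
    /* sent as "ACCT [string]" if the server asks for it after PASS */
    curl_easy_setopt(curl, CURLOPT_FTP_ACCOUNT, "accounting-info");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return 0;
  }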
#1098843. In short, a shared DNS cache was set up for a multi handle, and
when the shared cache was deleted before the individual easy handles, the
subsequent cleanups caused reads and writes to already freed memory.
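A sketch of the ordering that matters here: easy handles must be cleaned up
before the share object they point to (hypothetical URL):

  #include <curl/curl.h>

  int main(void)
  {
    CURLSH *share = curl_share_init();
    CURL *curl = curl_easy_init();
    curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_SHARE, share);
    curl_easy_perform(curl);
    /* clean up the easy handle *before* the share it references */
    curl_easy_cleanup(curl);
    curl_share_cleanup(share);
    return 0;
  }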
If EPSV, EPRT or LPRT is tried and doesn't work, it will not be retried on
the same server again even if a following request is made using a persistent
connection.
If a second request is made to a server, requesting a file from the same
directory as the previous request operated on, libcurl will no longer make
that long series of CWD commands just to end up in the same spot, as
illustrated below. Note that this only applies to *exactly* the same dir;
there is still room for improvement in optimizing the CWD-sending when the
dirs are only slightly different.
Added tests 210, 211 and 212 to verify these changes. Had to improve the
test script too and added a new primitive to the test file format.
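A rough illustration of the FTP command sequences on a persistent connection
(hypothetical host and paths):

  First transfer, ftp://host/a/b/c/file1:
    CWD a
    CWD b
    CWD c
    RETR file1

  Second transfer, ftp://host/a/b/c/file2 (same dir, same connection):
    RETR file2            (no CWD commands sent again)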
app to retrieve the errno variable after a (connect) failure. It will make
sense to provide this for more failures in a more generic way, but let's
start like this.
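Assuming the info option in question is CURLINFO_OS_ERRNO, usage would look
roughly like this (hypothetical URL):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    long oserr = 0;
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    if(curl_easy_perform(curl) == CURLE_COULDNT_CONNECT) {
      /* fetch the errno value saved from the failed connect */
      curl_easy_getinfo(curl, CURLINFO_OS_ERRNO, &oserr);
      fprintf(stderr, "connect failed, errno %ld\n", oserr);
    }
    curl_easy_cleanup(curl);
    return 0;
  }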
replacement, curl only replaced the Host: header on the initial request
and didn't replace it on the following ones. This resulted in requests with
two Host: headers.
Now, curl checks if the location is on the same host as the initial request,
and if so it continues to replace the Host: header. When it moves to another
host, it doesn't replace the Host: header, but it also makes sure the stale
Host: header isn't used in the request.
This change is verified by the two new test cases 184 and 185.
enough. This is most likely the bug Jean-Louis Lemaire reported that makes
2GB FTP uploads report an error when completed.
Also padded comments to get them aligned again, purely for readability.
server doesn't require any auth at all and then we just continue nicely. We
now have an extra bit in the connection struct named 'authprobe' that is TRUE
when doing pure "HTTP authentication probing".
all things up to work with encoded host names internally, as well as keeping
"display names" around to show in debug messages. IDN name resolves work for
me now using ipv6, ipv4 and ares resolving. Even cookies on IDN sites seem
to work right.
stuff added a few weeks ago. Turns out that if you specify --proxy-ntlm and
communicate with a proxy that requires basic authentication, the proxy
properly returns a 407, but the failure detection code doesn't realize it
should give up, so curl returns with exit code 0. Test case 162 verifies
this.
160 shows.
We got no data and we attempted to re-use a connection. This might happen if
the connection was left alive when we were done using it before, but that was
closed when we wanted to read from it again. Bad luck. Retry the same request
on a fresh connect!
Deleted the sockerror variable again; it serves no purpose anymore.
sessionhandle to make the duphandle() function work as intended. Also tried
to start documenting functions the doxygen way (in the headers of the
functions). Can't make it work though...
connectdata struct instead, each being an allocated pointer.
The passwdgiven field was turned into a local variable in the only function
where it was used.
authenticates connections and not single requests. This should make it work
better when we mix requests from multiple hosts. Problem pointed out by
Cris Bailiff.
CURLOPT_SSL_CTX_FUNCTION to set a callback that gets called with OpenSSL's
ssl_ctx pointer passed in, allowing the callback to act on it. If anything
but CURLE_OK is returned, that will also be returned by libcurl all the way
back. If this callback changes CURLOPT_URL, libcurl will detect this and
instead go use the new URL.
CURLOPT_SSL_CTX_DATA is a pointer you set to get passed to the callback set
with CURLOPT_SSL_CTX_FUNCTION.
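A minimal sketch of such a callback (hypothetical URL; the tweak done to the
context is just an example):

  #include <curl/curl.h>
  #include <openssl/ssl.h>

  /* called by libcurl just before the SSL handshake is performed */
  static CURLcode sslctx_cb(CURL *curl, void *sslctx, void *userptr)
  {
    SSL_CTX *ctx = (SSL_CTX *)sslctx;
    /* act on the context; here we just set the work-around option bits */
    SSL_CTX_set_options(ctx, SSL_OP_ALL);
    (void)curl;
    (void)userptr;
    return CURLE_OK; /* anything else makes libcurl fail the transfer */
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_SSL_CTX_FUNCTION, sslctx_cb);
    curl_easy_setopt(curl, CURLOPT_SSL_CTX_DATA, NULL);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return 0;
  }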
header is used, we must wait for a 100 code (or a timeout) before we send
the data. The timeout is merely 1000 ms at this point. We may have reason to
set
a longer timeout in the future.
regular transfer process. This required some new tweaks; for example, we
need to be able to tell the transfer loop not to chunked-encode uploads
while we're transferring the rest of the request...
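To recap what the Expect: dance looks like on the wire (a rough
illustration):

  > POST /upload HTTP/1.1
  > Content-Length: 1234
  > Expect: 100-continue
  >
  < HTTP/1.1 100 Continue
  > [only now is the request body sent]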