curl_easy_setopt() that alters how libcurl functions when following
redirects. It makes libcurl obey RFC 2616 when a 301 response is received
after a non-GET request is made. The default libcurl behaviour is to change
the method to GET in the subsequent request (like it does for response code
302 - because that's what many/most browsers do), but with this
CURLOPT_POST301 option enabled it will do what the spec says and issue the
next request using the same method again. I.e. keep POST after 301.
The curl tool got this option as --post301.
Test cases 1011 and 1012 were added to verify.
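A minimal sketch of how an application might enable this (hypothetical URL,
error handling omitted):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/form");
      curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "name=value");
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
      /* keep the POST method even when a 301 redirect is followed */
      curl_easy_setopt(curl, CURLOPT_POST301, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }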
CURLOPT_NOBODY enabled but not CURLOPT_HEADER, libcurl wouldn't do TYPE
before it did SIZE, which made it less useful. I walked over the code and
made it do this properly, and added test case 542 to verify it.
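A minimal sketch of this use case, fetching only the remote file size over
FTP (hypothetical URL, error handling omitted):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    double size = 0.0;
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.bin");
      /* no body wanted, and no CURLOPT_HEADER set */
      curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);
      if(curl_easy_perform(curl) == CURLE_OK) {
        curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &size);
        printf("remote size: %.0f bytes\n", size);
      }
      curl_easy_cleanup(curl);
    }
    return 0;
  }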
o It looks for the NSS database first in the environment variable SSL_DIR,
then in /etc/pki/nssdb, then it initializes with no database if neither of
those exist.
o If the NSS PKCS#11 libnsspem.so driver is available then PEM files may be
loaded, including the ca-bundle. If it is not available then only
certificates already in the NSS database are used.
o Tries to detect whether a file or nickname is being passed in so the right
thing is done
o Added a bit of code to make the output more like the OpenSSL module,
including displaying the certificate information when connecting in
verbose mode
o Improved handling of certificate errors (expired, untrusted, etc.)
The libnsspem.so PKCS#11 module is currently only available in Fedora
8/rawhide. Work will be done soon to upstream it. The NSS module will work
with or without it, all that changes is the source of the certificates and
keys.
passed to it with curl_easy_setopt()! Previously it always just referred
to the data, forcing the user to keep the data around until libcurl was done
with it. That is now history and libcurl will instead clone the given
strings and keep private copies.
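A minimal sketch of what this allows (hypothetical URL; the buffer is wiped
right after the setopt call to show that libcurl no longer depends on it):

  #include <string.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      char url[64];
      strcpy(url, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_URL, url);
      /* libcurl holds its own copy now, so the buffer may be reused */
      memset(url, 0, sizeof(url));
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }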
and CURLOPT_NEW_DIRECTORY_PERMS. These control the permissions for files
and directories created on the remote server. CURLOPT_NEW_FILE_PERMS
defaults to 0644 and CURLOPT_NEW_DIRECTORY_PERMS defaults to 0755.
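A minimal sketch of an SFTP upload using these options (hypothetical URL,
reads the data to upload from stdin, error handling omitted):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/dir/file.txt");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
      curl_easy_setopt(curl, CURLOPT_READDATA, stdin);
      curl_easy_setopt(curl, CURLOPT_FTP_CREATE_MISSING_DIRS, 1L);
      curl_easy_setopt(curl, CURLOPT_NEW_FILE_PERMS, 0600L);      /* not 0644 */
      curl_easy_setopt(curl, CURLOPT_NEW_DIRECTORY_PERMS, 0700L); /* not 0755 */
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }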
libssh2_sftp_shutdown() and libssh2_session_free() can now return
LIBSSH2_ERROR_EAGAIN.
* Fix the _send() and _recv() return values so non-blocking works
function that deprecates the curl_multi_socket() function. With the new
function, the application tells libcurl what action was found on the socket
it passes in. This gives a significant performance boost as it allows
libcurl to avoid a call to poll()/select() for every call to
curl_multi_socket*().
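A minimal sketch of the idea, assuming the new function is
curl_multi_socket_action() as in later libcurl releases:

  #include <curl/curl.h>

  /* called from the application's own event loop when it has detected
     activity on a socket libcurl asked it to watch; ev is a bitmask of
     CURL_CSELECT_IN and/or CURL_CSELECT_OUT */
  static void on_socket_event(CURLM *multi, curl_socket_t s, int ev)
  {
    int running = 0;
    /* telling libcurl what happened lets it skip its own poll()/select() */
    curl_multi_socket_action(multi, s, ev, &running);
  }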
"case label value exceeds maximum value for type" and
"comparison is always false due to limited range of data type"
Both triggered when using a bool variable as the switch variable
in a switch statement and using enums for the case targets.
to the debug callback.
- Shmulik Regev added CURLOPT_HTTP_CONTENT_DECODING and
CURLOPT_HTTP_TRANSFER_DECODING which, if set to zero, disable libcurl's
internal decoding of content-encoded or transfer-encoded content. This may be
preferable in cases where you use libcurl for proxy purposes or similar. The
command line tool got a --raw option to disable both at once.
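A minimal sketch of passing encoded data through untouched (hypothetical
URL, error handling omitted):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* hand the raw, still-encoded bytes to the write callback, e.g.
         when relaying the response as a proxy would */
      curl_easy_setopt(curl, CURLOPT_HTTP_CONTENT_DECODING, 0L);
      curl_easy_setopt(curl, CURLOPT_HTTP_TRANSFER_DECODING, 0L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }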
and CURLOPT_CONNECTTIMEOUT_MS that, as their names should hint, apply the
timeouts with millisecond resolution instead. The only restriction to that
is the alarm() (sometimes) used to abort name resolves, as that uses full
seconds. I fixed the FTP response timeout part of the patch.
Internally we now count and keep the timeouts in milliseconds, but it also
means we multiply set timeouts by 1000. The effect of this is that no
timeout can be set to more than 2^31 milliseconds (on 32 bit systems), which
equals 24.86 days. We probably couldn't go higher before either, since the
code already did *1000 on the timeout values in several places.
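A minimal sketch (assuming the companion option is CURLOPT_TIMEOUT_MS;
hypothetical URL):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* give up connecting after 2.5 seconds... */
      curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT_MS, 2500L);
      /* ...and abort the whole transfer after 10 seconds */
      curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 10000L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }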
doing an FTP transfer is removed from a multi handle before completion. The
fix also fixed the "alive counter" to be correct on "premature removal" for
all protocols.
curl that uses the new CURLOPT_FTP_SSL_CCC option in libcurl. If enabled, it
will make libcurl shut down SSL/TLS after the authentication is done on an
FTP-SSL operation.
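A minimal sketch, treating the option as a plain on/off flag as described
above (hypothetical URL, error handling omitted):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file");
      /* require SSL/TLS on the control channel for the login... */
      curl_easy_setopt(curl, CURLOPT_FTP_SSL, (long)CURLFTPSSL_CONTROL);
      /* ...then drop it again once authenticated */
      curl_easy_setopt(curl, CURLOPT_FTP_SSL_CCC, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }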
get confused and not acknowledge the 'no_proxy' variable properly once it
had used the proxy and you re-used the same easy handle. I made sure the
proxy name is properly stored in the connect struct rather than the
sessionhandle/easy struct.
(http://curl.haxx.se/bug/view.cgi?id=1618359) and subsequently provided a
patch for it: when downloading 2 zero byte files in a row, curl 7.16.0
enters an infinite loop, while curl 7.16.1-20061218 does one additional
unnecessary request.
Fix: During the "Major overhaul introducing http pipelining support and
shared connection cache within the multi handle." change, headerbytecount
was moved to live in the Curl_transfer_keeper structure. But that structure
is reset in the Transfer method, losing the information that we had about
the header size. This patch moves it back to the connectdata struct.
(http://curl.haxx.se/bug/view.cgi?id=1603712) which is about connections
getting cut off prematurely when --limit-rate is used. While I found no such
problems in my tests nor in my reading of the code, I found that the
--limit-rate code was severely flawed (ever since it was moved into the lib
in 7.15.5) when used with the easy interface and it didn't work as
documented, so I reworked it somewhat and now it works for my tests.
KNOWN_BUGS #25, which happens when a proxy closes the connection when
libcurl has sent CONNECT, as part of an authentication negotiation. Starting
now, libcurl will re-connect accordingly and continue the authentication as
it should.
would crash if a bad function sequence was used when shutting down after
using the multi interface (i.e. using easy_cleanup after multi_cleanup), so
precautions have been added to make sure it doesn't crash any more - test
case 529 was added to verify.
handle that is part of a multi handle first removes the handle from the
stack.
- Added CURLOPT_SSL_SESSIONID_CACHE and --no-sessionid to disable SSL
session-ID re-use on demand, since there obviously are broken servers out
there that misbehave when session-IDs are re-used.
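A minimal sketch of turning the session-ID cache off (hypothetical URL):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      /* zero disables SSL session-ID caching for this handle */
      curl_easy_setopt(curl, CURLOPT_SSL_SESSIONID_CACHE, 0L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }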
problem with it (SIGSEGV-style). It clearly showed that the existing
socket-state and state-difference function wasn't good enough so I rewrote
it and could then re-run Jeff's program without any crash. The previous
version clearly could fail to tell the application when a handle changed
from using one socket to using another.
While I was at it (as I could use this as a means to track this problem
down), I've now added a 'magic' number to the easy handle struct that is
initialized at curl_easy_init() time and cleared at curl_easy_cleanup() time,
which we can use internally to detect that an easy handle seems to be fine,
or at least not closed or freed (freeing in debug builds fills the area with
0x13 bytes, but in normal builds we can of course not assume any particular
data in the freed areas).
the crash was that libcurl internally was a bit confused about who owned the
DNS cache at all times so if you created an easy handle that uses a shared
DNS cache and added that to a multi handle it would crash. Now we keep more
careful internal track of exactly what kind of DNS cache each easy handle
uses: None, Private (allocated for and used only by this single handle),
Shared (points to a cache held by a shared object), Global (points to the
global cache) or Multi (points to the cache within the multi handle that is
automatically shared between all easy handles that are added with private
caches).
CURLOPT_MAX_RECV_SPEED_LARGE that limit the maximum rate libcurl is allowed
to send or receive data. This kind of adds the command line tool's
--limit-rate option to the library.
The rate limiting logic in the curl app is now removed and is instead
provided by libcurl itself. Transfer rate limiting will now also work for -d
and -F, which it didn't before.
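A minimal sketch of capping the transfer rate (assuming the matching
send-side option is named CURLOPT_MAX_SEND_SPEED_LARGE; hypothetical URL):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_off_t cap = 10240; /* bytes per second */
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/big.bin");
      /* note: these options take curl_off_t values, not longs */
      curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE, cap);
      curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE, cap);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }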
(http://curl.haxx.se/mail/lib-2006-02/0154.html) by adding the NTLM hash
function in addition to the LM one and making some other adjustments in the
order the different parts of the data block are sent in the Type-2 reply.
Inspiration for this work was taken from the Firefox NTLM implementation.
I edited the existing 21(!) NTLM test cases to run fine with these changes.
Due to the fact that we now properly include the host name in the Type-2
message, the test cases now only compare parts of that chunk.
with the multi interface and multi-part formposts. The fix from February
22nd could make the Curl_done() function get called twice on the same
connection and it was not designed for that and thus tried to call free() on
an already freed memory area!
an app can use to let libcurl only connect to a remote host and then extract
the socket from libcurl. libcurl will then not attempt to do any transfer at
all after the connect is done.
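A minimal sketch of the connect-then-extract pattern (assuming the option is
CURLOPT_CONNECT_ONLY and the socket is fetched with CURLINFO_LASTSOCKET,
both as in later libcurl releases; hypothetical URL):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    long sockfd = -1;
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L); /* stop after connect */
      if(curl_easy_perform(curl) == CURLE_OK)
        curl_easy_getinfo(curl, CURLINFO_LASTSOCKET, &sockfd);
      /* the application now does its own I/O on sockfd */
      curl_easy_cleanup(curl);
    }
    return 0;
  }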
curl tool with --local-port. It plainly and simply sets the range of ports
to bind the local end of connections to. Implemented due to popular demand.
Not extensively tested. Please let me know how it works.
(CURLOPT_FTPPORT) didn't work for ipv6-enabled curls if the IP wasn't a
"native" IP, while it works fine for ipv6-disabled builds!
In the process of fixing this, I removed the support for LPRT since I can't
think of many reasons to keep doing it and asking on the mailing list didn't
reveal anyone else that could either. The code that sends EPRT and PORT is
now also a lot simpler than before (IMHO).
given subdirs, libcurl would still "remember" the full path as if it is the
current directory libcurl is in so that the next curl_easy_perform() would
get really confused if it tried the same path again - as it would not issue
any CWD commands at all, assuming it is already in the "proper" dir.
Starting now, a failed CWD command sets a flag that prevents the path from
being "remembered" after returning.
(http://curl.haxx.se/bug/view.cgi?id=1338648) which really is more of a
feature request, but anyway. It pointed out that --max-redirs could not be
set to 0, which would then make curl return an error code on the first
Location: found. Based on Nis' patch, libcurl now supports CURLOPT_MAXREDIRS
set to 0, or -1 for infinity. Added test case 274 to verify.
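A minimal sketch of refusing all redirects (hypothetical URL):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
      /* 0 = fail on the first Location: header, -1 = follow forever */
      curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 0L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }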
from the command line tool with --ignore-content-length. This will make it
easier to download files from Apache 1.x (and similar) servers that are
still having problems serving files larger than 2 or 4 GB. When this option
is enabled, curl will simply have to wait for the server to close the
connection to signal end of transfer. I wrote test case 269 that runs a
simple test that this works.
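A minimal sketch (assuming the libcurl counterpart of this command line
option is CURLOPT_IGNORE_CONTENT_LENGTH; hypothetical URL):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/huge.bin");
      /* ignore Content-Length: and read until the server closes */
      curl_easy_setopt(curl, CURLOPT_IGNORE_CONTENT_LENGTH, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }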
.netrc, and when following a Location: the subsequent requests didn't properly
use the auth as found in the netrc file. Added test case 257 to verify my fix.
internally, with code provided by sslgen.c. All SSL-layer-specific code is
then written in ssluse.c (for OpenSSL) and gtls.c (for GnuTLS).
As far as possible, the internals should not need to know which SSL layer is
in use. Building with GnuTLS currently makes two test cases fail.
TODO.gnutls contains a few known outstanding issues for the GnuTLS support.
GnuTLS support is enabled with configure --with-gnutls.
USE_WINDOWS_SSPI on Windows, and then libcurl will be built to use the native
way to do NTLM. SSPI also allows libcurl to pass on the current user and its
password in the request.
curl_easy_perform() invokes. It was previously unlocked at disconnect, which
could mean that it remained locked between multiple transfers. The DNS cache
may not live as long as the connection cache does, as they are separate.
To deal with the lack of DNS (host address) data availability in re-used
connections, libcurl now keeps a copy of the IP address as a string, to be
able to show it even on subsequent requests on the same connection.
present in RFC959... so now (lib)curl supports it as well. --ftp-account and
CURLOPT_FTP_ACCOUNT set the account string. (The server may ask for an
account string after PASS has been sent. The client responds with "ACCT
[account string]".) Added test cases 228 and 229 to verify the functionality.
Updated the test FTP server to support ACCT somewhat.
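A minimal sketch of setting the account string (hypothetical URL,
credentials and account data):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file");
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");
      /* sent as "ACCT my-account" if the server asks after PASS */
      curl_easy_setopt(curl, CURLOPT_FTP_ACCOUNT, "my-account");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }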
#1098843. In short, a shared DNS cache was setup for a multi handle and when
the shared cache was deleted before the individual easy handles, the latter
cleanups caused read/writes to already freed memory.
If EPSV, EPRT or LPRT is tried and doesn't work, it will not be retried on
the same server again even if a following request is made using a persistent
connection.
If a second request is made to a server, requesting a file from the same
directory as the previous request operated on, libcurl will no longer make
that long series of CWD commands just to end up on the same spot. Note that
this is only for *exactly* the same dir. There is still room for improvements
to optimize the CWD-sending when the dirs are only slightly different.
Added test 210, 211 and 212 to verify these changes. Had to improve the
test script too and added a new primitive to the test file format.
app to retrieve the errno variable after a (connect) failure. It will make
sense to provide this for more failures in a more generic way, but let's
start like this.
replacement, curl only replaced the Host: header on the initial request
and didn't replace it on the following ones. This resulted in requests with
two Host: headers.
Now, curl checks if the location is on the same host as the initial request
and then continues to replace the Host: header. And when it moves to another
host, it doesn't replace the Host: header but it also doesn't make the
second Host: header get used in the request.
This change is verified by the two new test cases 184 and 185.
enough. This is most likely the bug Jean-Louis Lemaire reported that made
2GB FTP uploads report an error when completed.
Also padded comments to get them aligned again, only for visibility.
server doesn't require any auth at all and then we just continue nicely. We
now have an extra bit in the connection struct named 'authprobe' that is TRUE
when doing pure "HTTP authentication probing".
all things up to work with encoded host names internally, as well as keeping
'display names' to show in debug messages. IDN resolves work for me now using
ipv6, ipv4 and ares resolving. Even cookies on IDN sites seem to work right.
stuff added a few weeks ago. Turns out that if you specify --proxy-ntlm and
communicate with a proxy that requires basic authentication, the proxy
properly returns a 407, but the failure detection code doesn't realize it
should give up, so curl returns with exit code 0. Test case 162 verifies
this.
160 shows.
We got no data and we attempted to re-use a connection. This might happen if
the connection was left alive when we were done using it before, but that was
closed when we wanted to read from it again. Bad luck. Retry the same request
on a fresh connect!
Deleted the sockerror variable again, it serves no purpose anymore.
sessionhandle to make the duphandle() function work as intended. Also tried
to start documenting functions the doxygen way (in the headers of the
functions). Can't make it work though...
connectdata struct instead. Each being an allocated pointer.
The passwdgiven field was turned into a local variable in the only
function where it was being used.
authenticates connections and not single requests. This should make it work
better when we mix requests from multiple hosts. Problem pointed out by
Cris Bailiff.
CURLOPT_SSL_CTX_FUNCTION to set a callback that gets called with OpenSSL's
ssl_ctx pointer passed in, allowing the callback to act on it. If anything
but CURLE_OK is returned, that will also be returned by libcurl all the way
back. If this function changes CURLOPT_URL, libcurl will detect this and
instead go use the new URL.
CURLOPT_SSL_CTX_DATA is a pointer you set to get passed to the callback set
with CURLOPT_SSL_CTX_FUNCTION.
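A minimal sketch of the callback pair (hypothetical URL; the callback here
only demonstrates the signature and the return-code behaviour):

  #include <curl/curl.h>

  /* called just before the SSL handshake; ssl_ctx is OpenSSL's SSL_CTX
     pointer. Returning anything but CURLE_OK aborts the transfer with
     that code. */
  static CURLcode sslctx_cb(CURL *curl, void *ssl_ctx, void *userptr)
  {
    (void)curl; (void)ssl_ctx; (void)userptr;
    /* e.g. add extra certificates to the SSL_CTX here */
    return CURLE_OK;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_SSL_CTX_FUNCTION, sslctx_cb);
      curl_easy_setopt(curl, CURLOPT_SSL_CTX_DATA, NULL);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }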
header is used, we must wait for a 100-code (or timeout), before we send the
data. The timeout is merely 1000 ms at this point. We may have reason to set
a longer timeout in the future.
regular transfer process. This required some new tweaks, like for example
we need to be able to tell the transfer loop not to chunked-encode uploads
while we're transferring the rest of the request...
counter so that the caller needs to decrease that counter when done with
the returned data.
If compiled with MALLOCDEBUG I've added some extra checking that the counter
is decreased before a handle is closed etc.
any name resolved data in any curl handle struct. That way, we won't mind
if the cache entries are pruned for the next time we need them. We'll just
resolve them again instead.
This changes the Curl_resolv() proto. It modifies the SessionHandle struct,
but perhaps most importantly it makes the internals somewhat dependent on
the DNS cache not being disabled, as disabling it will cripple operations.
Especially for persistent connections.
- Use a global dns cache (by setting the tentatively named
CURLOPT_DNS_USE_GLOBAL_CACHE option to true; see the sketch after this list)
- Use a per-handle dns cache, by default
- Use a pooled dns cache when in the "multi" interface
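A minimal sketch of opting in to the global cache (note that the option name
is tentative, as stated above; hypothetical URL):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* share resolved addresses in the process-global DNS cache
         instead of the default per-handle one */
      curl_easy_setopt(curl, CURLOPT_DNS_USE_GLOBAL_CACHE, 1L);
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }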
is not dealt with when we find an end-of-response line, as there might be
important stuff even after the correct line. So on subsequent invokes, the
cached data must be used!
already been transferred no longer returns an error but instead is OK. The
reasoning behind this is of course that no extra actions need to be taken
and it is as if a transfer had been successfully performed.