fixing some bugs:
o Don't mix GET and POST requests in a pipeline
o Fix the order in which requests are dispatched from the pipeline
o Fix several curl bugs with pipelining when the server is returning
chunked encoding:
* Add states to chunked parsing for the final CRLF
* Rewind the buffer after parsing a chunk with data remaining
* Move chunked header initialization to a spot just before receiving
headers
the multi interface and connection re-use that could make a
curl_multi_remove_handle() ruin a pointer in another handle.
The second problem was less an actual problem than a minor quirk:
the connection re-use code wasn't properly checking if the connection was
marked for closure.
to the debug callback.
- Shmulik Regev added CURLOPT_HTTP_CONTENT_DECODING and
CURLOPT_HTTP_TRANSFER_DECODING which, when set to zero, disable libcurl's
internal decoding of content-encoded or transfer-encoded data. This may be
preferable when you use libcurl for proxy purposes or similar. The
command line tool got a --raw option to disable both at once.
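A minimal sketch of how an application might disable both decodings (plain
easy-interface usage; the URL and surrounding setup are illustrative only):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        /* pass encoded data through untouched, e.g. for proxy use */
        curl_easy_setopt(curl, CURLOPT_HTTP_CONTENT_DECODING, 0L);
        curl_easy_setopt(curl, CURLOPT_HTTP_TRANSFER_DECODING, 0L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }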
and CURLOPT_CONNECTTIMEOUT_MS that, as their names hint, apply the
timeouts with millisecond resolution instead. The only restriction is the
alarm() call (sometimes) used to abort name resolves, as that uses full
seconds. I fixed the FTP response timeout part of the patch.
Internally we now count and keep the timeouts in milliseconds, which also
means that timeouts set in seconds get multiplied by 1000. The effect of
this is that no timeout can be set to more than 2^31 milliseconds (on 32 bit
systems), which equals 24.86 days. We probably couldn't handle longer
timeouts before either, since the code already did *1000 on the timeout
values in several places.
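For illustration, a small sketch of the millisecond variants in use (the
timeout values are arbitrary examples):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        /* 2500 ms total operation timeout, 800 ms connect timeout */
        curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 2500L);
        curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT_MS, 800L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }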
doing an FTP transfer is removed from a multi handle before completion. The
fix also made the "alive counter" correct on "premature removal" for
all protocols.
non-ASCII platforms. It does add some complexity, most notably with more
#ifdefs, but I want to see this support added and I can't see how we could
add it without the extra machinery.
curl that uses the new CURLOPT_FTP_SSL_CCC option in libcurl. If enabled, it
will make libcurl shut down the SSL/TLS layer after the authentication is
done on an FTP-SSL operation.
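A rough sketch of how this might look from an application (assuming the
option takes a plain on/off long here, and using the FTP-SSL options of
this era; the server name is illustrative):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file");
        /* require SSL/TLS for the control connection */
        curl_easy_setopt(curl, CURLOPT_FTP_SSL, CURLFTPSSL_CONTROL);
        /* shut the SSL/TLS layer down again once authenticated */
        curl_easy_setopt(curl, CURLOPT_FTP_SSL_CCC, 1L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }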
get confused and not acknowledge the 'no_proxy' variable properly once it
had used the proxy and you re-used the same easy handle. I made sure the
proxy name is properly stored in the connect struct rather than the
sessionhandle/easy struct.
(http://curl.haxx.se/bug/view.cgi?id=1618359) and subsequently provided a
patch for it: when downloading 2 zero byte files in a row, curl 7.16.0
enters an infinite loop, while curl 7.16.1-20061218 does one additional
unnecessary request.
Fix: During the "Major overhaul introducing http pipelining support and
shared connection cache within the multi handle." change, headerbytecount
was moved to live in the Curl_transfer_keeper structure. But that structure
is reset in the Transfer method, losing the information that we had about
the header size. This patch moves it back to the connectdata struct.
something went wrong like it got a bad response code back from the server,
libcurl would leak memory. Added test case 538 to verify the fix.
I also noted that the connection would get cached in that case, which
doesn't make sense since it cannot be re-used when the authentication has
failed. I fixed that issue too at the same time, and also that the path
would be "remembered" in vain for cases where the connection was about to
get closed.
(http://curl.haxx.se/bug/view.cgi?id=1604956) which identified that setting
CURLOPT_MAXCONNECTS to zero caused libcurl to SIGSEGV. Starting now, libcurl
will always internally use no less than 1 entry in the connection cache.
KNOWN_BUGS #25, which happens when a proxy closes the connection when
libcurl has sent CONNECT, as part of an authentication negotiation. Starting
now, libcurl will re-connect accordingly and continue the authentication as
it should.
Assigning the constant value zero to a pointer to function
results in a null pointer value being assigned to the function
pointer.
Assigning any nonzero value instead gives an implementation- and
compiler-dependent result.
Since what we want to do here is the first case, this should
not trigger compiler warnings related to conversions from
'pointer to data' to 'pointer to function'.
Our autobuild test suite will judge.
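To illustrate the language rule in question (a standalone example, not the
actual libcurl code):

    /* assigning the constant 0 yields a proper null function pointer;
       assigning any other integer value would be implementation-dependent
       and may rightfully draw compiler warnings */
    typedef void (*callback_t)(void);

    callback_t cb = 0;  /* well-defined: null function pointer */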
could very well cause a negative number to get passed in and thus cause
reading outside of the array usually used for this purpose.
We avoid this by using the uppercase macro versions introduced just now,
which do some extra typecasts to keep byte codes > 127 from producing
negative int values.
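A sketch of what such a macro can look like (modeled on the idea described
above; the exact names in the source may differ):

    #include <ctype.h>

    /* the cast to unsigned char keeps byte codes > 127 from turning into
       negative int values before they reach the ctype functions */
    #define ISDIGIT(x) (isdigit((int)((unsigned char)(x))))
    #define ISSPACE(x) (isspace((int)((unsigned char)(x))))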
would crash if a bad function sequence was used when shutting down after
using the multi interface (i.e. using easy_cleanup after multi_cleanup), so
precautions have been added to make sure it no longer does - test case 529
was added to verify.
it basically was that we didn't remove the current connection from the pipe
list when following a redirect. Also in this commit: several pieces of
additional debug code for debug builds, helping to check and track down
signs of run-time trouble.
handle that is part of a multi handle first removes the handle from the
stack.
- Added CURLOPT_SSL_SESSIONID_CACHE and --no-sessionid to disable SSL
session-ID re-use on demand, since there obviously are broken servers out
there that misbehave when session-IDs are re-used.
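Disabling the cache from an application could look like this (a minimal
sketch; everything besides the option itself is illustrative):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* work around servers that misbehave when session-IDs are re-used */
        curl_easy_setopt(curl, CURLOPT_SSL_SESSIONID_CACHE, 0L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }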
problem with it (SIGSEGV-style). It clearly showed that the existing
socket-state and state-difference function wasn't good enough so I rewrote
it and could then re-run Jeff's program without any crash. The previous
version clearly could fail to tell the application when a handle changed
from using one socket to using another.
While I was at it (as I could use this as a means to track this problem
down), I've now added a 'magic' number to the easy handle struct that is
initialized at curl_easy_init() time and cleared at curl_easy_cleanup() time,
which we can use internally to detect that an easy handle seems to be fine,
or at least not closed or freed (freeing in debug builds fills the area with
0x13 bytes, but in normal builds we can of course not assume any particular
data in the freed areas).
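The pattern is roughly this (the names and the magic constant here are
illustrative, not the actual internals):

    #define EASY_MAGIC 0xc0dedbad

    struct easy_handle {
      unsigned int magic;   /* set at init time, cleared at cleanup time */
      /* ... the rest of the handle ... */
    };

    /* returns nonzero if the handle looks like a live, valid easy handle */
    static int handle_looks_ok(struct easy_handle *h)
    {
      return h && (h->magic == EASY_MAGIC);
    }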
while not fixing things very nicely, it does make the SOCKS5 proxy
connection slightly better as it now honors the connect timeout,
and it no longer segfaults when SOCKS requires authentication
and you did not specify username:password.
the crash was that libcurl internally was a bit confused about who owned the
DNS cache at all times so if you created an easy handle that uses a shared
DNS cache and added that to a multi handle it would crash. Now we keep more
careful internal track of exactly what kind of DNS cache each easy handle
uses: None, Private (allocated for and used only by this single handle),
Shared (points to a cache held by a shared object), Global (points to the
global cache) or Multi (points to the cache within the multi handle that is
automatically shared between all easy handles that are added with private
caches).
CURLOPT_MAX_RECV_SPEED_LARGE that limit the maximum rate libcurl is allowed
to send or receive data at. This kind of adds the command line tool's
option --limit-rate to the library.
The rate limiting logic in the curl app is now removed and is instead
provided by libcurl itself. Transfer rate limiting will now also work for -d
and -F, which it didn't before.
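A small sketch of the new options in action (the 10000 bytes/second cap is
an arbitrary example; note that the values are curl_off_t):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/big-file");
        /* cap transfer speed at 10000 bytes/second in each direction */
        curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE,
                         (curl_off_t)10000);
        curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE,
                         (curl_off_t)10000);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }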
with the multi interface and multi-part formposts. The fix from February
22nd could make the Curl_done() function get called twice on the same
connection and it was not designed for that and thus tried to call free() on
an already freed memory area!
(http://curl.haxx.se/bug/view.cgi?id=1431750) helped me identify and fix two
different but related bugs:
1) Removing an easy handle from a multi handle before the transfer is done
could leave a connection in the connection cache for that handle that is
in a state that isn't suitable for re-use. A subsequent re-use could then
read from a NULL pointer and segfault.
2) When an easy handle was removed from the multi handle, there could be an
outstanding c-ares DNS name resolve request. When the response arrived,
it caused havoc since the connection struct it "belonged" to could've
been freed already.
Now Curl_done() is called when an easy handle is removed from a multi handle
prematurely (that is, before the transfer has completed). Curl_done() also
makes sure to cancel all (if any) outstanding c-ares requests.
type to the already provided type CURLPROXY_SOCKS4.
I added a --socks4 option that works like the current --socks5 option, but
uses the SOCKS4 protocol instead.
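For reference, selecting the proxy type from the library side might look
like this (the proxy host name is illustrative):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        curl_easy_setopt(curl, CURLOPT_PROXY, "socksproxy.example.com:1080");
        /* speak SOCKS4 rather than the default HTTP proxy protocol */
        curl_easy_setopt(curl, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS4);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }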
an app can use to let libcurl only connect to a remote host and then extract
the socket from libcurl. libcurl will then not attempt to do any transfer at
all after the connect is done.
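A sketch of the intended usage pattern (error handling trimmed; extracting
the socket via CURLINFO_LASTSOCKET is one way to get at it):

    #include <curl/curl.h>

    int main(void)
    {
      long sockfd = -1;
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        /* connect only - no transfer is attempted */
        curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L);
        if(curl_easy_perform(curl) == CURLE_OK)
          curl_easy_getinfo(curl, CURLINFO_LASTSOCKET, &sockfd);
        /* the app may now use sockfd with send()/recv() directly */
        curl_easy_cleanup(curl);
      }
      return 0;
    }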
curl tool with --local-port. It plainly and simply sets the range of ports
to bind the local end of connections to. Implemented due to popular demand.
Not extensively tested. Please let me know how it works.
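From the library side, the corresponding options can be used roughly like
this (assuming the CURLOPT_LOCALPORT/CURLOPT_LOCALPORTRANGE pair that
accompanies the command line option; the range is an arbitrary example):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        /* bind the local end to a port in the range 32000-32099 */
        curl_easy_setopt(curl, CURLOPT_LOCALPORT, 32000L);
        curl_easy_setopt(curl, CURLOPT_LOCALPORTRANGE, 100L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }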
(CURLOPT_FTPPORT) didn't work for ipv6-enabled curls if the IP wasn't a
"native" IP, while it works fine for ipv6-disabled builds!
In the process of fixing this, I removed the support for LPRT since I can't
think of many reasons to keep doing it and asking on the mailing list didn't
reveal anyone else that could either. The code that sends EPRT and PORT is
now also a lot simpler than before (IMHO).
(http://curl.haxx.se/bug/view.cgi?id=1338648) which really is more of a
feature request, but anyway. It pointed out that --max-redirs could not be
set to 0, which made curl return an error code on the first Location:
header found. Based on Nis' patch, libcurl now supports CURLOPT_MAXREDIRS
set to 0, or -1 for infinity. Added test case 274 to verify.
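In libcurl terms, the two special values now work like this (a minimal
sketch):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        /* 0 = fail on the first redirect; -1 = follow an infinite number */
        curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 0L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }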
from the command line tool with --ignore-content-length. This will make it
easier to download files from Apache 1.x (and similar) servers that are
still having problems serving files larger than 2 or 4 GB. When this option
is enabled, curl will simply have to wait for the server to close the
connection to signal the end of transfer. I wrote test case 269, which runs
a simple test verifying that this works.
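The matching libcurl option can be used like this (assuming the
CURLOPT_IGNORE_CONTENT_LENGTH option that accompanies the command line
flag; the URL is illustrative):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://old-apache.example.com/huge");
        /* ignore Content-Length; wait for the server to close the
           connection to signal end of transfer */
        curl_easy_setopt(curl, CURLOPT_IGNORE_CONTENT_LENGTH, 1L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }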
CURLOPT_COOKIEFILE), add a cookie (with CURLOPT_COOKIELIST), tell it to
write the result to a given cookie jar and then never actually call
curl_easy_perform() - the given file(s) to read were never read, yet the
output file was written, and thus it caused a "funny" result.
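The scenario in question looks roughly like this (the file names and the
cookie contents are illustrative):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "cookies-in.txt");
        /* add a single cookie; Netscape-format lines also work */
        curl_easy_setopt(curl, CURLOPT_COOKIELIST,
                         "Set-Cookie: session=abc123; domain=example.com;");
        curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "cookies-out.txt");
        /* note: curl_easy_perform() is never called */
        curl_easy_cleanup(curl);  /* writes the cookie jar */
      }
      return 0;
    }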
- While doing some tests for the bug above, I noticed that Firefox generates
large numbers (for the expire time) in the cookies.txt file and libcurl
didn't treat them properly. Now it does.
HTTP proxy if an FTP URL was given. libcurl now properly switches to pure HTTP
internally when an HTTP proxy is used, even for FTP URLs. The problem would
also occur with other multi-pass auth methods.
with CURLOPT_PROXY can use a http:// prefix and user + password. The user
and password fields are now also URL decoded properly.
Test case 264 added to verify.
address was not possible to use. It is now, but requires it to be written
RFC2732-style, within brackets - which incidentally is how you enter
numerical IPv6 addresses in URLs. Test case 263 added to verify.
.netrc, and when following a Location: the subsequent requests didn't properly
use the auth as found in the netrc file. Added test case 257 to verify my fix.
internally, with code provided by sslgen.c. All SSL-layer-specific code is
then written in ssluse.c (for OpenSSL) and gtls.c (for GnuTLS).
As far as possible, internals should not need to know what SSL layer that is
in use. Building with GnuTLS currently makes two test cases fail.
TODO.gnutls contains a few known outstanding issues for the GnuTLS support.
GnuTLS support is enabled with configure --with-gnutls
USE_WINDOWS_SSPI on Windows, and then libcurl will be built to use the native
way to do NTLM. SSPI also allows libcurl to pass on the current user and its
password in the request.
curl_easy_perform() invokes. It was previously unlocked at disconnect, which
could mean that it remained locked between multiple transfers. The DNS cache
may not live as long as the connection cache does, as they are separate.
To deal with the lack of DNS (host address) data availability for re-used
connections, libcurl now keeps a copy of the IP address as a string, to be
able to show it even on subsequent requests on the same connection.
present in RFC959... so now (lib)curl supports it as well. --ftp-account and
CURLOPT_FTP_ACCOUNT set the account string. (The server may ask for an
account string after PASS has been sent; the client responds with "ACCT
[account string]".) Added test cases 228 and 229 to verify the
functionality. Updated the test FTP server to support ACCT somewhat.
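From an application, the account string is set like this (a minimal sketch;
the credentials and host are illustrative):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file");
        curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");
        /* sent as "ACCT [string]" if the server asks after PASS */
        curl_easy_setopt(curl, CURLOPT_FTP_ACCOUNT, "account-string");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }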
1) the proxy environment variables are still read and used to set HTTP proxy
2) you couldn't disable http proxy with CURLOPT_PROXY (since the option was
disabled)
assumed this used the DICT protocol. While guessing protocols will remain
fuzzy, I've now made sure that the host names must start with "[protocol]."
for them to be a valid guessable name. I also removed "https" as a prefix that
indicates HTTPS, since we hardly ever see any host names using that.
problem with the version byte and the check for bad versions. Bruce has lots
of clues on this, and based on his suggestion I've now removed the check of
that byte since it seems to be able to contain 1 or 5.
#1098843. In short, a shared DNS cache was setup for a multi handle and when
the shared cache was deleted before the individual easy handles, the latter
cleanups caused read/writes to already freed memory.
libcurl without cookie support. This is mainly useful if you want to build
a minimalistic libcurl with no cookie support at all, for example for
embedded systems or similar.
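Assuming the usual configure spelling for such switches, a cookie-free
build would then be done like:

    ./configure --disable-cookies
    make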
If EPSV, EPRT or LPRT is tried and doesn't work, it will not be retried on
the same server again even if a following request is made using a persistent
connection.
If a second request is made to a server, requesting a file from the same
directory as the previous request operated on, libcurl will no longer make
that long series of CWD commands just to end up on the same spot. Note that
this is only for *exactly* the same dir. There is still room for improvements
to optimize the CWD-sending when the dirs are only slightly different.
Added tests 210, 211 and 212 to verify these changes. I had to improve the
test script too, and added a new primitive to the test file format.
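For illustration, a hypothetical command trace for two downloads from the
same directory over one control connection (details simplified):

    First request  (ftp://host/one/two/file-a):
      CWD one
      CWD two
      RETR file-a
    Second request (ftp://host/one/two/file-b):
      RETR file-b     <- no repeated CWD series any more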
based) IDN conversion fails. This is really a workaround for the lack of a
suitable function in the libidn API, and I hope we can remove it once
libidn gets a function like this.
replacement, curl only replaced the Host: header on the initial request
and didn't replace it on the following ones. This resulted in requests with
two Host: headers.
Now, curl checks if the location is on the same host as the initial request
and then continues to replace the Host: header. When it moves to another
host, it doesn't replace the Host: header, but it also makes sure no
second Host: header ends up in the request.
This change is verified by the two new test cases 184 and 185.
previous difference was neither clear nor properly documented). They can now
both be used interchangeably, but we prefer UPLOAD to PUT since it is a more
generic term.
all things up to work with encoded host names internally, as well as keeping
'display names' to show in debug messages. IDN resolves work for me now using
ipv6, ipv4 and ares resolving. Even cookies on IDN sites seem to work right.
sessionhandle to make the duphandle() function work as intended. Also tried
to start documenting functions the doxygen way (in the headers of the
functions). Can't make it work though...