reported, the define is used by the configure script and is assumed to use
the 0xYYXXZZ format. This made "curl-config --vernum" fail in the 7.15.0
release version.
(http://curl.haxx.se/bug/view.cgi?id=1299181) that identified a silly problem
with Content-Range: headers where the 'bytes' keyword was written in a case
other than all lowercase. It would cause a segfault!
for Windows, that could lead to an Access Violation when the multi interface
was used, due to an issue with how the resolver thread was (and was not)
terminated.
from the command line tool with --ignore-content-length. This will make it
easier to download files from Apache 1.x (and similar) servers that are
still having problems serving files larger than 2 or 4 GB. When this option
is enabled, curl will simply have to wait for the server to close the
connection to signal the end of the transfer. I wrote test case 269 that runs
a simple test to verify that this works.
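For illustration, a minimal sketch of the corresponding libcurl usage
(assuming this command line option maps to a CURLOPT_IGNORE_CONTENT_LENGTH
easy option; the URL is made up):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL,
                       "http://apache1x.example.com/huge.bin");
      /* ignore the bogus Content-Length: header and instead wait for
         the server to close the connection to signal end of transfer */
      curl_easy_setopt(curl, CURLOPT_IGNORE_CONTENT_LENGTH, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }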
that made curl run fine on his end. The key was to make sure we do the
SSL/TLS negotiation immediately after the TCP connect is done and not after
a few other commands have been sent like we did previously. I don't consider
this change necessary to obey the standards, I think this server is pickier
than what the specs allow it to be, but I can't see how this modified
libcurl code can add any problems to those who are interpreting the
standards more liberally.
CURLOPT_COOKIEFILE), add a cookie (with CURLOPT_COOKIELIST), tell it to
write the result to a given cookie jar and then never actually call
curl_easy_perform() - the given file(s) to read were never read, but the
output file was written, and thus it caused a "funny" result.
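For reference, the problematic call sequence looks roughly like this (file
names are made up):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* enable the cookie engine and read cookies from this file */
      curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "cookies-in.txt");
      /* add a single cookie, given in Netscape cookie file format */
      curl_easy_setopt(curl, CURLOPT_COOKIELIST,
                       "example.com\tFALSE\t/\tFALSE\t0\tname\tvalue");
      /* ask for all known cookies to be written to this file */
      curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "cookies-out.txt");
      /* note: no curl_easy_perform() - this is the scenario that used
         to produce the "funny" result at cleanup */
      curl_easy_cleanup(curl);
    }
    return 0;
  }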
- While doing some tests for the bug above, I noticed that Firefox generates
large numbers (for the expire time) in the cookies.txt file and libcurl
didn't treat them properly. Now it does.
zone name of a daylight saving time was used, for example PDT vs PST. This
flaw was introduced with the new date parser (11 sep 2004 - 7.12.2).
Fortunately, no web server or cookie string etc should be using such time
zone names, thus limiting the effect of this bug.
HTTP proxy if an FTP URL was given. libcurl now properly switches to pure HTTP
internally when an HTTP proxy is used, even for FTP URLs. The problem would
also occur with other multi-pass auth methods.
seems the Windows (MSVC) libc time functions may return data one hour off if
TZ is not set and automatic DST adjustment is enabled. This made
curl_getdate() return the wrong value, and it also affected internal cookie
expirations etc.
fix the CONNECT authentication code with multi-pass auth methods (such as
NTLM), as it previously didn't properly ignore response bodies - in fact it
stopped reading after all response headers had been received. This could
lead to libcurl sending the next request and reading the body from the first
request as the response to the second request. (I also renamed the function,
which wasn't strictly necessary but...)
The best fix would be to once and for all make the CONNECT code use the
ordinary request sending/receiving code, treating it as any ordinary request
instead of the special-purpose function we have now. It should make things
better for the multi interface too. And possibly lead to less code...
Added test case 265 for this. It isn't a _really_ good test case since the
test proxy is too stupid, but it helps when running the debugger to verify.
1) findtool looks for each tool in the PATH and thinks ./perl is the perl
executable, while it is just a local directory (I have . in my PATH)
2) I got several warnings about head -1 being deprecated in favour of
head -n 1
3) the ares directory is missing some files (missing is missing :-) ) because
automake and friends were not run.
(Let's hope number 2 doesn't break anywhere "out there"; if so, we can always
search/replace that back.)
address was not possible to use. It is now, but it must be written
RFC2732-style, within brackets - which incidentally is how you enter numerical
IPv6 addresses in URLs. Test case 263 added to verify.
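For illustration, this is the form of URL that now works (address and file
name made up; this assumes the usual easy-handle setup, and from a shell you
may need to quote the brackets):

  /* an FTP URL with a numerical IPv6 host, RFC2732-style */
  curl_easy_setopt(curl, CURLOPT_URL, "ftp://[fe80::1]/file.txt");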
binary zeroes within the headers. They confused libcurl into doing the wrong
thing, so the downloaded headers became incomplete. The fix is now verified
with test case
262.
times, like on my HP-UX 10.20 tests. And then lib/strerror.c badly assumed
the glibc version if the POSIX define wasn't set (since it _had_ found a
strerror_r).
least it should no longer cause a compiler error. However, it does not have
AI_NUMERICHOST so we cannot getaddrinfo() any numerical addresses with it (we
use that for FTP PORT/EPRT)! So, I modified the configure check that tests
whether getaddrinfo() works to use AI_NUMERICHOST, since then it'll fail on
AIX 4.3 and the build will automatically have IPv6 support disabled.
--trace, --trace-ascii and --verbose output. I also made the '>' display
split the output at each linefeed so that HTTP requests etc look nicer in the
-v output.
more places. First, CURL_HOME is a new environment variable that is used
instead of HOME if it is set, to point out where the default config file
lives. If there's no config file in the dir pointed out by one of the
environment variables, the Windows version will instead check the directory
in which the curl executable is located.
.netrc, and when following a Location: the subsequent requests didn't properly
use the auth as found in the netrc file. Added test case 257 to verify my fix.
also affecting NTLM and Negotiate.) It turned out that if the server responded
with 100 Continue before the initial 401 response, libcurl didn't take care of
the response properly. Test cases 245 and 246 were added to verify this.
inet_addr() functions seem to use &255 on all numericals in an IPv4 dotted
address, which makes for a different failure... Now I've modified the IPv4
resolve code to use inet_pton() instead, in an attempt to make these systems
better detect this as a bad IP address rather than creating a totally bogus
address that is then passed on and used.
USE_WINDOWS_SSPI on Windows, and then libcurl will be built to use the native
way to do NTLM. SSPI also allows libcurl to pass on the current user and its
password in the request.
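The request itself is set up the same way as for any NTLM transfer; an
SSPI-enabled build simply performs the handshake via the native Windows API
under the hood (host name and credentials here are made up, and the usual
easy-handle setup is assumed):

  /* build libcurl with USE_WINDOWS_SSPI defined for native NTLM */
  curl_easy_setopt(curl, CURLOPT_URL, "http://intranet.example.com/");
  curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_NTLM);
  curl_easy_setopt(curl, CURLOPT_USERPWD, "domain\\user:secret");
  curl_easy_perform(curl);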
file got a Last-Modified: header written to the data stream, corrupting the
actual data. This was because some conditions from the previous FTP code were
not properly brought into the new FTP code. I fixed this and added test case
520 to verify. (This bug was introduced in 7.13.1)
on the remote side. This then converts the operation to an ordinary STOR
upload. This was requested/pointed out by Ignacio Vazquez-Abrams.
It also proved (and I fixed) a bug in the newly rewritten FTP code (also
present in the 7.13.1 release) when trying to resume an upload and the server
returns an error to the SIZE command: libcurl would then loop and send SIZE
commands indefinitely.
requested data from a host and then followed a redirect to another
host. libcurl then didn't use the proxy-auth properly in the second request,
due to the host-only check for original host name wrongly being extended to
the proxy auth as well. Added test case 233 to verify the flaw and that the
fix removed the problem.
that picks NTLM. Thanks to David Byron letting me test NTLM against his
servers, I could quickly repeat and fix the problem. It turned out to be:
When libcurl POSTs without knowing/using an authentication and it gets back a
list of types from which it picks NTLM, it needs to either continue sending
its data if it keeps the connection alive, or not send the data but close the
connection. Then do the first step in the NTLM auth. libcurl didn't send the
data nor close the connection but simply read the response-body and then sent
the first negotiation step. Which then failed miserably, of course. The fixed
version forces a connection close if there are more than 2000 bytes left to
send.
gets closed just after the request has been sent failed and did not re-issue
a request on a fresh reconnect like the easy interface did. Now it does!
(define CURL_MULTIEASY, run test case 160)
curl_easy_perform() invokes. It was previously unlocked at disconnect, which
could mean that it remained locked between multiple transfers. The DNS cache
may not live as long as the connection cache does, as they are separate.
To deal with the lack of DNS (host address) data availability in re-used
connections, libcurl now keeps a copy of the IP address as a string, to be
able to show it even on subsequent requests on the same connection.
present in RFC959... so now (lib)curl supports it as well. --ftp-account and
CURLOPT_FTP_ACCOUNT set the account string. (The server may ask for an account
string after PASS has been sent. The client responds with "ACCT [account
string]".) Added test cases 228 and 229 to verify the functionality. Updated
the test FTP server to support ACCT somewhat.
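A minimal sketch of setting the account string with libcurl (server name and
account string made up; the usual easy-handle setup is assumed):

  curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
  /* sent as "ACCT [string]" if the server asks for an account
     (typically with a 332 response) after PASS */
  curl_easy_setopt(curl, CURLOPT_FTP_ACCOUNT, "myaccount");
  curl_easy_perform(curl);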
contains %0a or %0d in the user, password or CWD parts. (A future fix would
include doing it for %00 as well - see KNOWN_BUGS for details.) Test cases 225
and 226 were added to verify this.
1) the proxy environment variables are still read and used to set an HTTP
proxy
2) you couldn't disable the HTTP proxy with CURLOPT_PROXY (since the option
was disabled)
assumed this used the DICT protocol. While guessing protocols will remain
fuzzy, I've now made sure that the host names must start with "[protocol]."
for them to be valid guessable names. I also removed "https" as a prefix that
indicates HTTPS, since we hardly ever see any host names using that.
using a custom Host: header and curl fails to send a request on a re-used
persistent connection and thus creates a new connection and resends it. It
would then send two Host: headers. Cyrill's analysis was posted here:
http://curl.haxx.se/mail/archive-2005-01/0022.html
problem with the version byte and the check for bad versions. Bruce has lots
of clues on this, and based on his suggestion I've now removed the check of
that byte since it seems to be able to contain 1 or 5.
#1098843. In short, a shared DNS cache was set up for a multi handle and when
the shared cache was deleted before the individual easy handles, the latter
cleanups caused read/writes to already freed memory.
reported on Solaris) problem where the size_t check fails due to the SSL libs
being found in a dir not searched through by the run-time linker.
patch-tracker entry #1081707.
libcurl always and unconditionally overwrote a stack-based array with 3 zero
bytes. I edited the fix to make it less likely to occur again (and added
a comment explaining the reasoning behind the buffer size).
libcurl without cookie support. This is mainly useful if you want to build a
minimalistic libcurl with no cookie support at all, like for embedded
systems or similar.
response. Previously, libcurl would re-resolve the host name with the new
port number and attempt to connect to that, while it should use the IP from
the control channel. This bug made it hard to EPSV from an FTP server with
multiple IP addresses!
(http://qa.mandrakesoft.com/show_bug.cgi?id=12285), when connecting to an
IPv6 host with FTP, --disable-epsv (or --disable-eprt) effectively disables
the ability to transfer a file. Now, when connected to an FTP server with
IPv6, these FTP commands can't be disabled even if asked to with the
available libcurl options.
If EPSV, EPRT or LPRT is tried and doesn't work, it will not be retried on
the same server again even if a following request is made using a persistent
connection.
If a second request is made to a server, requesting a file from the same
directory as the previous request operated on, libcurl will no longer make
that long series of CWD commands just to end up on the same spot. Note that
this is only for *exactly* the same dir. There is still room for improvements
to optimize the CWD-sending when the dirs are only slightly different.
Added tests 210, 211 and 212 to verify these changes. Had to improve the
test script too and added a new primitive to the test file format.
file that was already completely downloaded caused an error, while it
doesn't if you don't use --fail! I added test case 194 to verify the fix.
Grrr. CURLOPT_FAILONERROR is now added to the list of stuff to remove in
libcurl v8, due to all the kludges needed to support it.
CURLOPT_FOLLOWLOCATION, libcurl reported error if a redirect happened even if
the new URL would provide the resumed file. Test case 188 added to verify the
fix (together with existing test 99).
based) IDN conversion fails. This is really due to the lack of a suitable
function in the libidn API; I hope we can remove this workaround once libidn
gets a function like this.
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=134133) and not to anyone
involved in the curl project! This happens when you try to curl a file from a
proftpd site using SSL. It seems proftpd sends a somewhat unorthodox PASS
response code (232 instead of 230). I relaxed the response code check to deal
with this and similar cases.
formposts no longer include the path part. If you _really_ want them, you
must provide your preferred full file name with CURLFORM_FILENAME.
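A minimal sketch of forcing a full path back into the formpost with
CURLFORM_FILENAME (field and file names made up; the usual easy-handle setup
is assumed):

  struct curl_httppost *post = NULL, *last = NULL;
  /* upload the file, but explicitly report the full path as the
     file name, since curl no longer includes it by itself */
  curl_formadd(&post, &last,
               CURLFORM_COPYNAME, "upload",
               CURLFORM_FILE, "/home/me/file.txt",
               CURLFORM_FILENAME, "/home/me/file.txt",
               CURLFORM_END);
  curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);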
Added detection for libgen.h and basename() to configure. My custom
basename() replacement function for systems without it might be a bit too
naive...
Updated 6 test cases to make them work with the stripped paths.
app to retrieve the errno variable after a (connect) failure. It will make
sense to provide this for more failures in a more generic way, but let's
start like this.
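Assuming this refers to the CURLINFO_OS_ERRNO value for curl_easy_getinfo()
(as exposed in libcurl of this era), usage would look something like this
(the usual includes and easy-handle setup are assumed):

  long oserr = 0;
  CURLcode res = curl_easy_perform(curl);
  if(res == CURLE_COULDNT_CONNECT) {
    /* fetch the errno value saved from the failed connect */
    curl_easy_getinfo(curl, CURLINFO_OS_ERRNO, &oserr);
    fprintf(stderr, "connect failed, errno: %ld\n", oserr);
  }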
replacement, curl only replaced the Host: header on the initial request
and didn't replace it on the following ones. This resulted in requests with
two Host: headers.
Now, curl checks if the location is on the same host as the initial request
and, if so, continues to replace the Host: header as before. When it moves to
another host, it doesn't replace the Host: header, but it also makes sure the
stale custom Host: header isn't used in the request.
This change is verified by the two new test cases 184 and 185.
stuff added a few weeks ago. Turns out that if you specify --proxy-ntlm and
communicate with a proxy that requires basic authentication, the proxy
properly returns a 407, but the failure detection code doesn't realize it
should give up, so curl returns with exit code 0. Test case 162 verifies
this.
as a problem serious enough to skip the final QUIT command before closing
the control connection, to avoid the risk that it will "hang" waiting for
the QUIT response. Added test case 161 to verify this.