(missing file, bad file, etc), GnuTLS will no longer handshake properly;
instead it just loops forever. Therefore, we must return an error if we get
one when setting the CA cert file name. This is not the same behaviour as
with OpenSSL.
Question/report posted to the help-gnutls mailing list, April 8 2005.
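A minimal sketch of the idea, using the public GnuTLS credentials API (the
function and file name handling here are illustrative, not the actual
libcurl code):

  #include <gnutls/gnutls.h>
  #include <stdio.h>

  /* returns 0 on success, -1 if the CA cert file could not be used */
  static int set_ca_file(gnutls_certificate_credentials_t cred,
                         const char *cafile)
  {
    /* gnutls_certificate_set_x509_trust_file() returns the number of
       processed certificates, or a negative error code */
    int rc = gnutls_certificate_set_x509_trust_file(cred, cafile,
                                                    GNUTLS_X509_FMT_PEM);
    if(rc < 0) {
      fprintf(stderr, "CA cert file error: %s\n", gnutls_strerror(rc));
      return -1; /* bail out rather than start a handshake that loops */
    }
    return 0;
  }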
internally, with code provided by sslgen.c. All SSL-layer-specific code is
then written in ssluse.c (for OpenSSL) and gtls.c (for GnuTLS).
As far as possible, the internals should not need to know which SSL layer is
in use. Building with GnuTLS currently makes two test cases fail.
TODO.gnutls contains a few known outstanding issues for the GnuTLS support.
GnuTLS support is enabled with configure --with-gnutls.
also affecting NTLM and Negotiate.) It turned out that if the server responded
with 100 Continue before the initial 401 response, libcurl didn't take care of
the response properly. Test cases 245 and 246 were added to verify this.
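For reference, this is the kind of client setup that triggered the problem
(the URL is hypothetical; CURLAUTH_DIGEST is what makes libcurl act on the
401):

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/protected");
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST);
    curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "data=value");
    /* the server may send "HTTP/1.1 100 Continue" before the 401;
       libcurl must skip past the 100 and still act on the 401 */
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }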
function was fixed to use the proper proxy authentication when multiple ones
were added as accepted. Test 239 and test 243 were added to repeat the
problems and verify the fixes.
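In API terms, "multiple ones accepted" means something like this (proxy host
and credentials made up for the example):

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:3128");
    curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:password");
    /* several schemes allowed; libcurl must pick the proper one from
       those the proxy actually offers in Proxy-Authenticate: */
    curl_easy_setopt(curl, CURLOPT_PROXYAUTH,
                     (long)(CURLAUTH_BASIC|CURLAUTH_DIGEST|CURLAUTH_NTLM));
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }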
inet_addr() functions seem to apply &255 (an 8 bit mask) to each numerical
part of an IPv4 dotted address, which leads to a different failure... I've
now modified the IPv4 resolve code to use inet_pton() instead, in an attempt
to make these systems detect this as a bad IP address rather than creating a
totally bogus address that is then passed on and used.
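A standalone illustration of the difference (not the libcurl code itself):

  #include <stdio.h>
  #include <arpa/inet.h>

  int main(void)
  {
    struct in_addr addr;
    const char *bad = "1.2.3.2562"; /* last part doesn't fit in 8 bits */

    /* inet_addr() on such systems masks each part with &255 and hands
       back a "valid" but bogus address; inet_pton() returns 1 only for
       a well-formed address and 0 for this one */
    if(inet_pton(AF_INET, bad, &addr) != 1)
      printf("correctly rejected: %s\n", bad);
    return 0;
  }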
the correct dynamic library names by default, and provides override switches
--with-ldap-lib, --with-lber-lib and --without-lber-lib. Added
CURL_DISABLE_LDAP to platform-specific config files to disable LDAP
support on those platforms that probably don't have dynamic OpenLDAP
libraries available, in order to avoid compile errors.
USE_WINDOWS_SSPI on Windows, and then libcurl will be built to use the native
way to do NTLM. SSPI also allows libcurl to pass on the current user's name
and password in the request.
that feature 64 bit 'long'.
Some systems have 64 bit time_t and deal with years beyond 2038. However, even
some of the systems with 64 bit time_t return -1 for dates beyond 03:14:07
UTC, January 19, 2038 (such as AIX 5100-06).
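A quick way to probe a system for this, using curl's own date parser (the
date below is one second past the 32 bit overflow point):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    time_t t = curl_getdate("Tue, 19 Jan 2038 03:14:08 GMT", NULL);
    if(t == (time_t)-1)
      printf("dates beyond 2038 are not handled here\n");
    else
      printf("parsed fine: %ld\n", (long)t);
    return 0;
  }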
file got a Last-Modified: header written to the data stream, corrupting the
actual data. This was because some conditions from the previous FTP code were
not properly brought into the new FTP code. I fixed this and added test case
520 to verify it. (This bug was introduced in 7.13.1.)
on the remote side. This then converts the operation to an ordinary STOR
upload. This was requested/pointed out by Ignacio Vazquez-Abrams.
It also proved (and I fixed) a bug in the newly rewritten ftp code (and
present in the 7.13.1 release) when trying to resume an upload and the server
returns an error to the SIZE command. libcurl would then loop, sending SIZE
commands infinitely.
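A sketch of the resumed upload case with the options involved (host and file
names made up; CURLOPT_RESUME_FROM set to -1 is what makes libcurl issue
SIZE to find the resume point):

  #include <stdio.h>
  #include <curl/curl.h>

  FILE *src = fopen("localfile", "rb");
  CURL *curl = curl_easy_init();
  if(curl && src) {
    curl_easy_setopt(curl, CURLOPT_URL,
                     "ftp://ftp.example.com/remotefile");
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
    curl_easy_setopt(curl, CURLOPT_READDATA, src);
    /* -1: ask the server (with SIZE) where to resume; if SIZE fails
       because the file isn't there, fall back to a plain STOR rather
       than looping */
    curl_easy_setopt(curl, CURLOPT_RESUME_FROM, -1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }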
got no notification, no mail, no nothing.
You didn't even bother to mail us when you went public with this. Cool.
NTLM buffer overflow fix, as reported here:
http://www.securityfocus.com/archive/1/391042
requested data from a host and then followed a redirect to another
host. libcurl then didn't use the proxy-auth properly in the second request,
due to the host-only check for the original host name wrongly being extended
to the proxy auth as well. Added test case 233 to verify the flaw and that the
the proxy auth as well. Added test case 233 to verify the flaw and that the
fix removed the problem.
that picks NTLM. Thanks to David Byron, who let me test NTLM against his
servers, I could quickly repeat and fix the problem. It turned out to be:
When libcurl POSTs without knowing/using an authentication and it gets back a
list of types from which it picks NTLM, it needs to either continue sending
its data if it keeps the connection alive, or not send the data but close the
connection. Then do the first step in the NTLM auth. libcurl didn't send the
data nor close the connection but simply read the response-body and then sent
the first negotiation step, which then of course failed miserably. The fixed
version forces a connection close if there are more than 2000 bytes left to
send.
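In API terms, the failing scenario was roughly this (URL and credentials are
examples; CURLAUTH_ANY is what lets libcurl pick NTLM from the server's
list):

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");
    /* no scheme known up front: the POST gets a 401 back with a list
       of schemes and libcurl picks NTLM from it */
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_ANY);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "a large body ...");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }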
operation to the caller. Disconnecting has the disadvantage that the conn
pointer gets completely invalidated and this is not handled in lots of places
in the code.
gets closed just after the request has been sent used to fail, and did not
re-issue a request on a fresh reconnect like the easy interface did. Now it
does!
(define CURL_MULTIEASY, run test case 160)
that uses the multi interface to run the request. It is a great testbed for
the multi interface and I believe we shall do it this way for real in the
future when we have a successor to curl_multi_fdset().
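For reference, a simplified sketch of how an application drives a transfer
with the multi interface today, and thus roughly what such a
curl_easy_perform() does internally (minimal error handling):

  #include <sys/select.h>
  #include <curl/curl.h>

  static void run_with_multi(CURL *easy)
  {
    CURLM *multi = curl_multi_init();
    int running;

    curl_multi_add_handle(multi, easy);
    do {
      fd_set rd, wr, ex;
      int maxfd = -1;
      struct timeval timeout = {1, 0};

      /* drive the transfer as far as it goes right now */
      while(curl_multi_perform(multi, &running) ==
            CURLM_CALL_MULTI_PERFORM)
        ;
      if(!running)
        break;

      /* wait for socket activity using select() + curl_multi_fdset() */
      FD_ZERO(&rd); FD_ZERO(&wr); FD_ZERO(&ex);
      curl_multi_fdset(multi, &rd, &wr, &ex, &maxfd);
      select(maxfd + 1, &rd, &wr, &ex, &timeout);
    } while(running);

    curl_multi_remove_handle(multi, easy);
    curl_multi_cleanup(multi);
  }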
curl_easy_perform() invokes. It was previously unlocked at disconnect, which
could mean that it remained locked between multiple transfers. The DNS cache
may not live as long as the connection cache does, as they are separate.
To deal with the lack of DNS (host address) data availability in re-used
connections, libcurl now keeps a copy of the IP address as a string, to be able
to show it even on subsequent requests on the same connection.
present in RFC959... so now (lib)curl supports it as well. --ftp-account and
CURLOPT_FTP_ACCOUNT set the account string. (The server may ask for an account
string after PASS has been sent. The client responds with "ACCT [account
string]".) Added test cases 228 and 229 to verify the functionality. Updated
the test FTP server to support ACCT somewhat.
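Usage sketch (server and strings made up):

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file");
    curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");
    /* sent as "ACCT my-account" if the server asks for an account
       after PASS */
    curl_easy_setopt(curl, CURLOPT_FTP_ACCOUNT, "my-account");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }

On the command line, the same thing is curl --ftp-account my-account ...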
contains %0a or %0d in the user, password or CWD parts. (A future fix would
include doing it for %00 as well - see KNOWN_BUGS for details.) Test cases 225
and 226 were added to verify this.
1) the proxy environment variables are still read and used to set HTTP proxy
2) you couldn't disable the HTTP proxy with CURLOPT_PROXY (since the option
was disabled)
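With the option back, a proxy picked up from the environment can again be
explicitly overridden (sketch):

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    /* an empty string disables the proxy even if http_proxy is set in
       the environment */
    curl_easy_setopt(curl, CURLOPT_PROXY, "");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }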
assumed this used the DICT protocol. While guessing protocols will remain
fuzzy, I've now made sure that the host names must start with "[protocol]."
for them to be a valid guessable name. I also removed "https" as a prefix that
indicates HTTPS, since we hardly ever see any host names using that.
using a custom Host: header and curl fails to send a request on a re-used
persistent connection and thus creates a new connection and resends it. It
then sends two Host: headers. Cyrill's analysis was posted here:
http://curl.haxx.se/mail/archive-2005-01/0022.html
problem with the version byte and the check for bad versions. Bruce has lots
of clues on this, and based on his suggestion I've now removed the check of
that byte since it seems to be able to contain 1 or 5.
#1098843. In short, a shared DNS cache was set up for a multi handle and when
the shared cache was deleted before the individual easy handles, the latter
cleanups caused read/writes to already freed memory.
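The safe order of operations with a shared DNS cache looks like this (sketch;
the lock/unlock callbacks a threaded application also needs are omitted):

  #include <curl/curl.h>

  CURLSH *share = curl_share_init();
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);

  CURL *easy = curl_easy_init();
  curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
  curl_easy_setopt(easy, CURLOPT_SHARE, share);
  curl_easy_perform(easy);

  /* clean up the easy handle(s) first: deleting the shared cache while
     handles still reference it is what caused the reads/writes to
     already freed memory */
  curl_easy_cleanup(easy);
  curl_share_cleanup(share);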
Here's a stab at a consolidation of the SSL detection heuristics into
configure. Source files aren't changed by this patch, except for setup.h and
the various config*.h files. Within the configure script, OPENSSL_ENABLED is
used to determine if SSL is being used or not, and outside configure,
USE_SSLEAY means the same thing; this could be even further unified some day.
Now, when SSL is not detected, configure skips the various checks that are
dependent on SSL, speeding up the configure process and avoiding complications
with cross compiles. I also updated all the architecture-specific config
files I could see, but I couldn't test them.
- Removal of getdate.c
- Added hostares.c, hostasyn.c, hostip4.c, hostip6.c, hostsync.c,
  hostthre.c, inet_ntop.c, nwlib.c, parsedate.c, strerror.c, strtoofft.c
I have tested the build on 10.3, and will build on 10.2.8 in the next few
days.
libcurl always and unconditionally overwrote a stack-based array with 3 zero
bytes. I edited the fix to make it less likely to occur again (and added a
comment explaining the reasoning behind the buffer size).
libcurl without cookie support. This is mainly useful if you want to build a
minimalistic libcurl with no cookie support at all, such as for embedded
systems or similar.
response. Previously, libcurl would re-resolve the host name with the new
port number and attempt to connect to that, while it should use the IP from
the control channel. This bug made it hard to EPSV from an FTP server with
multiple IP addresses!
(http://qa.mandrakesoft.com/show_bug.cgi?id=12285), when connecting to an
IPv6 host with FTP, --disable-epsv (or --disable-eprt) effectively disables
the ability to transfer a file. Now, when connected to an FTP server with
IPv6, these FTP commands can't be disabled even if asked to with the
available libcurl options.
If EPSV, EPRT or LPRT is tried and doesn't work, it will not be retried on
the same server again even if a following request is made using a persistent
connection.
If a second request is made to a server, requesting a file from the same
directory as the previous request operated on, libcurl will no longer make
that long series of CWD commands just to end up on the same spot. Note that
this is only for *exactly* the same dir. There is still room for improvements
to optimize the CWD-sending when the dirs are only slightly different.
Added tests 210, 211 and 212 to verify these changes. I had to improve the
test script too, and added a new primitive to the test file format.