on FTP errors in the transient 5xx range. Transient FTP errors are in the
4xx range. The code itself only retried on 5xx errors that occurred _at login_.
Now the retry code retries on all FTP transfer failures that ended with a
4xx response.
(http://curl.haxx.se/bug/view.cgi?id=2911279)
accessing already freed memory and thus crash when using HTTPS (with
OpenSSL), the multi interface, the CURLOPT_DEBUGFUNCTION callback and a
certain order
of cleaning things up. I fixed it.
(http://curl.haxx.se/bug/view.cgi?id=2891591)
curl_easy_setopt with CURLOPT_HTTPHEADER, the library should set
data->state.expect100header accordingly - the current code (in 7.19.7 at
least) doesn't handle this properly. Martin Storsjo provided the fix!
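A minimal sketch of the application scenario this fix covers, where the
Expect header is overridden through CURLOPT_HTTPHEADER (the URL and post
data here are just placeholders):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    struct curl_slist *headers = NULL;

    if(curl) {
      /* an empty "Expect:" header stops libcurl from sending
         "Expect: 100-continue"; the fix makes libcurl adjust its internal
         expect100header state when headers are replaced like this */
      headers = curl_slist_append(headers, "Expect:");

      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
      curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "name=daniel");

      curl_easy_perform(curl);

      curl_slist_free_all(headers);
      curl_easy_cleanup(curl);
    }
    return 0;
  }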
rework patch that now integrates TFTP properly into libcurl so that it can
be used non-blocking with the multi interface and more. BLKSIZE also works.
The --tftp-blksize option was added to allow setting the TFTP BLKSIZE from
the command line.
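As a hedged illustration, the library-side equivalent is the
CURLOPT_TFTP_BLKSIZE option (the URL here is just a placeholder):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "tftp://example.com/file.bin");
      /* ask for a 1024-byte TFTP block size instead of the 512-byte default */
      curl_easy_setopt(curl, CURLOPT_TFTP_BLKSIZE, 1024L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }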
though it failed to write a very small download to disk (done in a single
fwrite call). It turned out that fwrite() returned success, but there was
insufficient error-checking of the fclose() call, which tricked curl into
believing things were fine.
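As a plain C sketch of the general point (not curl's actual code): a small
write can sit entirely in the stdio buffer, so the real error only surfaces
when fclose() flushes it, and that return code must be checked too:

  #include <stdio.h>

  int save_to_file(const char *path, const void *data, size_t len)
  {
    FILE *fp = fopen(path, "wb");
    if(!fp)
      return 1;
    if(fwrite(data, 1, len, fp) != len) {
      fclose(fp);
      return 1;
    }
    if(fclose(fp) != 0)  /* the flush may fail here even if fwrite "worked" */
      return 1;
    return 0;
  }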
CURLOPT_HTTPPROXYTUNNEL enabled over a proxy, a subsequent request using the
same proxy with the tunnel option disabled would still wrongly re-use that
previous connection and the outcome would only be badness.
end up with entries that wouldn't time-out:
1. Set up a first web server that redirects (307) to an http://server:port
that's down
2. Have curl connect to the first web server using curl multi
After the curl_easy_cleanup call, there will be curl dns entries hanging
around with in_use != 0.
(http://curl.haxx.se/bug/view.cgi?id=2891591)
its pkg-config file. So -Wl stuff ended up in the .pc file, which is really
bad, and breaks if there are multiple -Wl in our LDFLAGS (which are in
PTXdist). bug #2893592 (http://curl.haxx.se/bug/view.cgi?id=2893592)
--with-nss is set but not "yes".
I think we can still improve that to check for pkg-config in that path etc,
but at least this patch brings back the same functionality we had before.
the client certificate. It also disables the key name test, as some engines
can select a private key/cert automatically (when there is only one key
and/or certificate on the hardware device used by the engine).
(http://curl.haxx.se/bug/view.cgi?id=2891595) which identified how an entry
in the DNS cache would linger too long if the request that added it was in
use that long. He also provided the patch that now makes libcurl capable of
still doing a request while the DNS hash entry may get timed out.
used during the FTP connection phase (after the actual TCP connect), while
it of course should be. I also made the speed check get called correctly so
that really slow servers will trigger that properly too.
wrong percentage for small files, most notably for <1000 bytes, and could
easily end up showing more than 100% at the end. It also didn't show any
percentage, transfer size or estimated transfer times when transferring
less than 100 bytes.
auth is used, as it caused a crash. I failed to repeat the issue, but still
made a change that now forces the TCP connection used for a freed SCP
session to get closed and not be re-used.
POST using a read callback, with Digest authentication and
"Transfer-Encoding: chunked" enforced. I would then cause the first request
to be wrongly sent and then basically hang until the server closed the
connection. I fixed the problem and added test case 565 to verify it.
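A rough sketch of the scenario test case 565 exercises: a POST with a read
callback, Digest auth and chunked transfer encoding. The URL, credentials and
payload are placeholders, and a seek callback is included since the body must
be rewound when the request is re-sent with authentication:

  #include <stdio.h>
  #include <string.h>
  #include <curl/curl.h>

  static const char payload[] = "field=value";

  struct state { size_t offset; };

  static size_t read_cb(char *ptr, size_t size, size_t nmemb, void *userp)
  {
    struct state *s = userp;
    size_t left = sizeof(payload) - 1 - s->offset;
    size_t room = size * nmemb;
    size_t len = left < room ? left : room;
    memcpy(ptr, payload + s->offset, len);
    s->offset += len;
    return len;
  }

  /* rewind support so the body can be re-sent after the Digest 401 */
  static int seek_cb(void *userp, curl_off_t offset, int origin)
  {
    struct state *s = userp;
    if(origin != SEEK_SET)
      return CURL_SEEKFUNC_FAIL;
    s->offset = (size_t)offset;
    return CURL_SEEKFUNC_OK;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    struct curl_slist *headers = NULL;
    struct state s = { 0 };

    if(curl) {
      headers = curl_slist_append(headers, "Transfer-Encoding: chunked");
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/protected");
      curl_easy_setopt(curl, CURLOPT_POST, 1L);
      curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
      curl_easy_setopt(curl, CURLOPT_READDATA, &s);
      curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb);
      curl_easy_setopt(curl, CURLOPT_SEEKDATA, &s);
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_DIGEST);
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");
      curl_easy_perform(curl);
      curl_slist_free_all(headers);
      curl_easy_cleanup(curl);
    }
    return 0;
  }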
unparsable expiry dates and then treat them as session cookies - previously
libcurl would reject cookies with a date format it couldn't parse. Research
shows that the major browsers treat such cookies as session cookies. I
modified tests 8 and 31 to verify this.
by user 'koresh' introduced the --crlfile option to curl, which makes curl
tell libcurl about a file with CRL (certificate revocation list) data to
read.
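For the library side, a minimal example using CURLOPT_CRLFILE; the file path
and URL are only placeholders:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      /* PEM file with one or more certificate revocation lists */
      curl_easy_setopt(curl, CURLOPT_CRLFILE, "/etc/ssl/crl.pem");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }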
(http://curl.haxx.se/bug/view.cgi?id=2873666) which identified a problem which
made libcurl loop infinitely when given incorrect credentials when using HTTP
GSS negotiate authentication.
(http://curl.haxx.se/bug/view.cgi?id=2870221) that libcurl returned an
incorrect return code from the internal trynextip() function which caused
him grief. This is a regression that was introduced in 7.19.1 and I find it
strange it hasn't hit us harder, but I won't pursue figuring out exactly why.
SO_SNDBUF to CURL_WRITE_SIZE even if the SO_SNDBUF starts out larger. The
patch doesn't do a setsockopt if SO_SNDBUF is already greater than
CURL_WRITE_SIZE. This should help folks who have set up their computer with
large send buffers.
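A plain POSIX sketch of the approach described (not curl's actual code);
'wanted' here stands in for CURL_WRITE_SIZE:

  #include <sys/types.h>
  #include <sys/socket.h>

  /* only grow SO_SNDBUF up to the wanted size; leave it alone if the
     OS default is already larger */
  static void adjust_sndbuf(int sockfd, int wanted)
  {
    int curval = 0;
    socklen_t len = sizeof(curval);

    if(getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &curval, &len) == 0 &&
       curval < wanted)
      setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &wanted, sizeof(wanted));
  }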
the define CURL_MAX_HTTP_HEADER which is even exposed in the public header
file to allow users to fairly easily rebuild libcurl with a modified
limit. The rationale for a fixed limit is that libcurl is realloc()ing a
buffer to be able to put a full header into it, so that it can call the
header callback with the entire header, but that also risks getting it into
trouble if a server by mistake or willingly sends a header that is more or
less without an end. The limit is set to 100K.
saving received cookies with no given path, if the path in the request had a
query part. That means a question mark (?) and the characters to the right
of it. I wrote test case 1105 and fixed this problem.
(http://curl.haxx.se/bug/view.cgi?id=2861587) identifying that libcurl used
the OpenSSL function X509_load_crl_file() wrongly and failed if it would
load a CRL file with more than one certificate within. This is now fixed.
powered libcurl in 7.19.6. If there was an X509v3 Subject Alternative Name
field in the certificate it had to match, and so even if a non-DNS and non-IP
entry was present it caused the verification to fail.
start second "Thu Jan 1 00:00:00 GMT 1970" as the date parser then returns 0
which internally then is treated as a session cookie. That particular date
is now made to get the value of 1.
libcurl to resolve 'localhost' whatever name you use in the URL *if* you set
the --interface option to (exactly) "LocalHost". This will enable us to
write tests for custom hosts names but still use a local host server.
when cross-compiling. The key to success is then that you properly set up
PKG_CONFIG_PATH before invoking configure.
I also improved how NSS is detected by trying nss-config if pkg-config isn't
present, and as a last resort just use the lib name and force the user to
set up the LIBS/LDFLAGS/CFLAGS etc properly. The previous last resort would
add a range of various libs that would almost never be quite correct.
QUOTE commands and the request used the same path as the connection had
already changed to, it would decide that no commands would be necessary for
the "DO" action and that was not handled properly but libcurl would instead
hang.
read stdin in a non-blocking fashion. This also brings back -T- (minus) to
the previous blocking behavior since it could break stuff for people at
times.
strdup() that could lead to a segfault if it returned NULL. I extended his
suggested patch to have Curl_retry_request() return a regular return code
and to check it better.
Fix SIGSEGV on free'd easy_conn when pipe unexpectedly breaks
Fix data corruption issue with re-connected transfers
Fix use after free if we're completed but easy_conn not NULL
sending of the TSIZE option. I don't like fixing bugs just hours before
a release, but since it was broken and the patch fixes this for him I decided
to get it in anyway.
each test, so that the test suite can now be used to actually test the
verification of cert names etc. This made an error show up in the OpenSSL-
specific code where it would attempt to match the CN field even if a
subjectAltName exists that doesn't match. This is now fixed and verified
in test 311.
should introduce an option to disable SNI, but as we're in feature freeze
now I've addressed the obvious bug here (pointed out by Peter Sylvester): we
shouldn't try to enable SNI when SSLv2 or SSLv3 is explicitly selected.
Code for OpenSSL and GnuTLS was fixed. NSS doesn't seem to have a particular
option for SNI, or are we simply not using it?
(http://curl.haxx.se/bug/view.cgi?id=2829955) mentioning the recent SSL cert
verification flaw found and exploited by Moxie Marlinspike. The presentation
he did at Black Hat is available here:
https://www.blackhat.com/html/bh-usa-09/bh-usa-09-archives.html#Marlinspike
Apparently at least one CA allowed a subjectAltName or CN that contained a
zero byte, and thus clients that assumed they would never contain zero bytes
could be exploited into OKing a certificate that didn't actually match the
site. For example, if the name in the cert was
"example.com\0theactualsite.com", libcurl would happily verify that cert
for example.com.
libcurl now uses the length of the extracted name rather than assuming it is
zero terminated.
only in some OpenSSL installs - like on Windows) isn't thread-safe and we
agreed that moving it to the global_init() function is a decent way to deal
with this situation.
CURLOPT_PREQUOTE) now accept a preceding asterisk before the command to
send when using FTP, as a sign that libcurl shall simply ignore the response
from the server instead of treating it as an error. Not treating a 400+ FTP
response code as an error means that failed commands will not abort the
chain of commands, nor will they cause the connection to get disconnected.
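A hedged example of such a command list; the commands and URL are only
placeholders:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    struct curl_slist *cmds = NULL;

    if(curl) {
      /* the leading '*' tells libcurl to ignore a failure response for
         this command instead of aborting the whole command chain */
      cmds = curl_slist_append(cmds, "*DELE oldfile.txt");
      cmds = curl_slist_append(cmds, "RNFR upload.tmp");
      cmds = curl_slist_append(cmds, "RNTO upload.txt");

      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/dir/");
      curl_easy_setopt(curl, CURLOPT_POSTQUOTE, cmds);
      curl_easy_perform(curl);

      curl_slist_free_all(cmds);
      curl_easy_cleanup(curl);
    }
    return 0;
  }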
out that OpenSSL-powered libcurl didn't support the SHA-2 digest algorithm,
and provided the solution too: to use OpenSSL_add_all_algorithms() instead
of the older SSLeay_* alternative. OpenSSL_add_all_algorithms was added in
OpenSSL 0.9.5.
(http://curl.haxx.se/bug/view.cgi?id=2813123) and a patch that fixes the
problem:
Url A is accessed using auth. Url A redirects to Url B (on a different
server). Url B re-uses a persistent connection. Url B gets the auth, even though
it's on a different server.
Note: if Url B does not reuse a persistent connection, auth is not sent.
This allows curl(1) to be used as a client-side tunnel for arbitrary stream
protocols by abusing chunked transfer encoding in both the HTTP request and
HTTP response. This requires server support for sending a response while a
request is still being read, of course.
If attempting to read from stdin returns EAGAIN, then we pause our sender.
This leaves curl to attempt to read from the socket while reading from stdin
(and thus sending) is paused.
to detect gnutls build options with pkg-config only and not libgnutls-config
anymore since GnuTLS has stopped distributing that tool. If an explicit path
is given to configure, we will instead guess on how to link and use that
lib. I did not use the patch from the bug report.
is almost always a VERY BAD IDEA. Yet there are still apps out there doing
this, and now recently it triggered a bug/side-effect in libcurl as when
libcurl sends a POST or PUT with NTLM, it sends an empty post first when it
knows it will just get a 401/407 back. If the app then replaced the
Content-Length header, it caused the server to wait for input that libcurl
wouldn't send. Aaron Oneal reported this problem in bug report #2799008
(http://curl.haxx.se/bug/view.cgi?id=2799008) and helped us verify the fix.
out that the cookie parser would leak memory when it parses cookies that are
received with domain, path etc set multiple times in the same header. While
such a cookie is questionable, they occur in the wild and libcurl no longer
leaks memory for them. I added such a header to test case 8.
of streams that had some parts (legitimately) missing. We now provide and use
a proper cleanup function for the content encoding submodule.
http://curl.haxx.se/mail/lib-2009-05/0092.html
as reported by Ebenezer Ikonne (on curl-users) and Laurent Rabret (on
curl-library). The transfer was mistakenly marked to get more data to send
but since it didn't actually have that, it just hung there...
(http://curl.haxx.se/bug/view.cgi?id=2784055) identifying a problem to
connect to SOCKS proxies when using the multi interface. It turned out to
almost not work at all previously. We need to wait for the TCP connect to
be properly verified before doing the SOCKS magic.
There's still a flaw in the FTP code for this.
(http://curl.haxx.se/bug/view.cgi?id=2786255) with a patch, identifying how
libcurl did not deal with SSL session ids properly if the server rejected a
re-use of one. Starting now, it will forget the rejected one and remember
the new one. This change was for OpenSSL only; it is likely that other SSL lib
code needs similar fixes.
I've now made TFTP "connections" not being kept for re-use within libcurl.
TFTP is UDP-based so the benefit was really low (if any) to begin with, so
instead of tracking down and fixing this problem we simply removed the
re-use. I also enabled test case 1099 that I wrote a few days ago to verify
that this change fixes the reported problem.
Chen pointed out how curl couldn't upload with resume when reading from a
pipe.
This ended up with the introduction of a new return code for the
CURLOPT_SEEKFUNCTION callback that basically says that the seek failed but
that libcurl may try to resolve the situation anyway. In our case this means
libcurl will attempt to instead read that much data from the stream instead
of seeking and that way curl can now upload with resume when data is read
from a stream!
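A minimal sketch of a seek callback for an unseekable stream, assuming the
new return code is the CURL_SEEKFUNC_CANTSEEK value from the public header:

  #include <curl/curl.h>

  /* for a pipe that cannot seek: returning CURL_SEEKFUNC_CANTSEEK lets
     libcurl recover by reading and discarding data from the stream
     instead of failing the transfer outright */
  static int seek_cb(void *userp, curl_off_t offset, int origin)
  {
    (void)userp;
    (void)offset;
    (void)origin;
    return CURL_SEEKFUNC_CANTSEEK;
  }

  /* installed with:
     curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb);
     curl_easy_setopt(curl, CURLOPT_SEEKDATA, stream); */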
how it occurs (http://curl.haxx.se/mail/lib-2009-04/0289.html). The
conclusion was that if an error is detected and Curl_done() is called for
the connection, ftp_done() could at times return another error code that
then would take precedence and that new code confused existing logic that
works for the first error code (CURLE_SEND_ERROR) only.
OBJECTPOINT options. Now we've introduced a new function - my_setopt_str -
within the app for setting plain string options to avoid the risk of this
mistake happening.
Storsjo pointed out how setting CURLOPT_NOBODY to 0 could be downright
confusing as it set the method to either GET or HEAD. The example he showed
looked like:
curl_easy_setopt(curl, CURLOPT_PUT, 1);
curl_easy_setopt(curl, CURLOPT_NOBODY, 0);
The new way doesn't alter the method until the request is about to start. If
CURLOPT_NOBODY is then 1 the HTTP request will be HEAD. If CURLOPT_NOBODY is
0 and the request happens to have been set to HEAD, it will then instead be
set to GET. I believe this will be less surprising to users, and hopefully
not hit any existing users badly.
out to be leaking cacerts. Kamil Dudka helped me complete the fix. The issue
is found in Redhat's bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=453612
There are still memory leaks present, but they seem to have other reasons.
non-configured libcurl. In this case the curl_off_t data type was tied to
the off_t data type, which depends on the _FILE_OFFSET_BITS setting. This
configuration is exactly the unwanted one for our curl_off_t data type,
which must not depend on such a setting. This breaks the ABI for
libcurl libraries built with Sun compilers which were built without
having run the configure script with _FILE_OFFSET_BITS different than
64 and using the ILP32 data model.
curl_easy_duphandle did not necessarily duplicate the CURLOPT_COOKIEFILE
option. It only enabled the cookie engine in the destination handle if
data->cookies is not NULL (where data is the source handle). In case of a
newly initialized handle which just had the cookie support enabled by a
curl_easy_setopt(handle, CURLOPT_COOKIEFILE, "") call, handle->cookies was
still NULL because the setopt-call only appends the value to
data->change.cookielist, hence duplicating this handle would not have the
cookie engine switched on.
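A minimal example of the case described, where the cookie engine is switched
on with an empty CURLOPT_COOKIEFILE string and the handle is then duplicated:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *easy = curl_easy_init();
    CURL *copy;

    /* an empty string enables the cookie engine without reading a file;
       after the fix, the duplicated handle keeps the engine enabled too */
    curl_easy_setopt(easy, CURLOPT_COOKIEFILE, "");

    copy = curl_easy_duphandle(easy);

    curl_easy_cleanup(copy);
    curl_easy_cleanup(easy);
    return 0;
  }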
We also concluded that the slist-functionality would be suitable for being
put in its own module rather than simply hanging out in lib/sendf.c so I
created lib/slist.[ch] for them.
scripts to make it detect a bad checkout earlier. People with older
checkouts who don't do cvs update with the -d option won't get the new dirs
and then will get funny outputs that can be a bit hard to understand and
fix.
in the gnutls code where we were checking for negative values for errors,
when the man pages state that GNUTLS_E_SUCCESS is returned on success and
other values indicate error conditions.
curl didn't use sprintf() in a way that is documented to work in POSIX but
since we use our own printf() code (from libcurl) that shouldn't be a
problem. Nonetheless I modified the code to not rely on such particular
features and to not cause further raised eyebrows with no good reason.
(http://curl.haxx.se/docs/adv_20090303.html also known as CVE-2009-0037) in
which previous libcurl versions (by design) can be tricked to access an
arbitrary local/different file instead of a remote one when
CURLOPT_FOLLOWLOCATION is enabled. This flaw is now fixed in this release
together with the addition of two new setopt options for controlling this
new behavior:
o CURLOPT_REDIR_PROTOCOLS controls what protocols libcurl is allowed to
follow redirects to when CURLOPT_FOLLOWLOCATION is enabled. By default, this
option excludes the FILE and SCP protocols and thus you need to explicitly
allow them in your app if you really want that behavior.
o CURLOPT_PROTOCOLS controls what protocol(s) libcurl is allowed to fetch
using the primary URL option. This is useful if you want to allow a user or
other outsiders to control what URL is passed to libcurl, yet not allow all
protocols libcurl may have been built to support. (A small example follows
below.)
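A minimal example restricting both the primary fetch and any redirects; the
URL is only a placeholder:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/start");
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
      /* only fetch the primary URL over HTTP or HTTPS */
      curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                       CURLPROTO_HTTP | CURLPROTO_HTTPS);
      /* and only ever follow redirects to HTTPS */
      curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS, CURLPROTO_HTTPS);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }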
curl_global_init() function to properly keep the performing functions
thread-safe. We previously (28 April 2007) moved the init to a later time
just to avoid it failing very early when libgcrypt dislikes the situation,
but that move was bad and the fix should rather be in libgcrypt or
elsewhere.
CURLINFO_CONTENT_LENGTH_DOWNLOAD and CURLINFO_CONTENT_LENGTH_UPLOAD return
-1 if the sizes aren't known. Previously these returned 0, making it
impossible to detect the difference between an actual zero and unknown.
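A small sketch of checking for the new -1 value after a transfer:

  #include <stdio.h>
  #include <curl/curl.h>

  /* -1 now means the size was unknown, while 0 means a true zero-byte body */
  static void report_size(CURL *curl)
  {
    double dl = 0.0;
    if(curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &dl) ==
       CURLE_OK) {
      if(dl == -1.0)
        printf("download size unknown\n");
      else
        printf("downloaded %.0f bytes\n", dl);
    }
  }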
FTP with the multi interface: when a transfer fails, like when aborted by a
write callback, the control connection was wrongly closed and thus not
re-used properly.
This change is also an attempt to cleanup the code somewhat in this area, as
now the FTP code attempts to keep (better) track on pending responses
necessary to get read in ftp_done().
libcurl did a superfluous 1000ms wait when doing SFTP downloads!
We read data with libssh2 while doing the "DO" operation for SFTP and then
when we were about to start getting data for the actual file part, the
"TRANSFER" part, we waited for socket action (in 1000ms) before doing a
libssh2-read. But in this case libssh2 had already read and buffered the
data so we ended up always just waiting 1000ms before we get working on the
data!
plain FTP connections, and it will then allow MKD to fail once and retry the
CWD afterwards. This is especially useful if you're doing many simultaneous
connections against the same server and they all have this option enabled,
as then the CWD may first fail, another connection does the MKD before this
one so this connection's MKD fails, but retrying the CWD then works! The
numbers can (should?) now be set with the convenience enums called
CURLFTP_CREATE_DIR and CURLFTP_CREATE_DIR_RETRY.
Tests have proven that if you're making an application that uploads a set of
files to an ftp server, you will get a noticeable gain in speed if you're
using multiple connections, and this option will then be very useful.
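A hedged upload example using the retry behavior; the URL is a placeholder
and the upload body defaults to stdin:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL,
                       "ftp://example.com/new/dir/upload.txt");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
      /* retry the CWD once after a failed MKD, useful for parallel
         uploads racing to create the same directory */
      curl_easy_setopt(curl, CURLOPT_FTP_CREATE_MISSING_DIRS,
                       (long)CURLFTP_CREATE_DIR_RETRY);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }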