curl_easy_setopt with CURLOPT_HTTPHEADER, the library should set
data->state.expect100header accordingly - the current code (in 7.19.7 at
least) doesn't handle this properly. Martin Storsjo provided the fix!
rework patch that now integrates TFTP properly into libcurl so that it can
be used non-blocking with the multi interface and more. BLKSIZE also works.
The --tftp-blksize option was added to allow setting the TFTP BLKSIZE from
the command line.
though it failed to write a very small download to disk (done in a single
fwrite call). It turned out that fwrite() returned success, but there was
insufficient error-checking of the fclose() call, which tricked curl into
believing things were fine.
CURLOPT_HTTPPROXYTUNNEL enabled over a proxy, a subsequent request using the
same proxy with the tunnel option disabled would still wrongly re-use that
previous connection and the outcome would only be badness.
end up with entries that wouldn't time-out:
1. Set up a first web server that redirects (307) to a http://server:port
that's down
2. Have curl connect to the first web server using curl multi
After the curl_easy_cleanup call, there will be curl dns entries hanging
around with in_use != 0.
(http://curl.haxx.se/bug/view.cgi?id=2891591)
its pkg-config file. So -Wl stuff ended up in the .pc file, which is really
bad, and breaks if there are multiple -Wl in our LDFLAGS (which are in
PTXdist). bug #2893592 (http://curl.haxx.se/bug/view.cgi?id=2893592)
--with-nss is set but not "yes".
I think we can still improve that to check for pkg-config in that path etc,
but at least this patch brings back the same functionality we had before.
the client certificate. It also disables the key name test, as some engines
can select a private key/cert automatically (when there is only one key
and/or certificate on the hardware device used by the engine).
(http://curl.haxx.se/bug/view.cgi?id=2891595) which identified how an entry
in the DNS cache would linger too long if the request that added it was in
use that long. He also provided the patch that now makes libcurl capable of
still doing a request while the DNS hash entry may get timed out.
used during the FTP connection phase (after the actual TCP connect), while
it of course should be. I also made the speed check get called correctly so
that really slow servers will trigger that properly too.
wrong percentage for small files, most notably for <1000 bytes, and could
easily end up showing more than 100% at the end. It also didn't show any
percentage, transfer size or estimated transfer times when transferring
less than 100 bytes.
auth is used, as it caused a crash. I failed to repeat the issue, but still
made a change that now forces the TCP connection used for a freed SCP
session to get closed and not be re-used.
POST using a read callback, with Digest authentication and
"Transfer-Encoding: chunked" enforced. I would then cause the first request
to be wrongly sent and then basically hang until the server closed the
connection. I fixed the problem and added test case 565 to verify it.
unparsable expiry dates and then treat them as session cookies - previously
libcurl would reject cookies with a date format it couldn't parse. Research
shows that the major browsers treat such cookies as session cookies. I
modified test 8 and 31 to verify this.
by user 'koresh' introduced the --crlfile option to curl, which makes curl
tell libcurl about a file with CRL (certificate revocation list) data to
read.
(http://curl.haxx.se/bug/view.cgi?id=2873666) which identified a problem which
made libcurl loop infinitely when given incorrect credentials when using HTTP
GSS negotiate authentication.
(http://curl.haxx.se/bug/view.cgi?id=2870221) that libcurl returned an
incorrect return code from the internal trynextip() function which caused
him grief. This is a regression that was introduced in 7.19.1 and I find it
strange it hasn't hit us harder, but I won't pursue figuring out
exactly why.
SO_SNDBUF to CURL_WRITE_SIZE even if the SO_SNDBUF starts out larger. The
patch doesn't do a setsockopt if SO_SNDBUF is already greater than
CURL_WRITE_SIZE. This should help folks who have set up their computer with
large send buffers.
the define CURL_MAX_HTTP_HEADER which is even exposed in the public header
file to allow users to fairly easily rebuild libcurl with a modified
limit. The rationale for a fixed limit is that libcurl is realloc()ing a
buffer to be able to put a full header into it, so that it can call the
header callback with the entire header, but that also risks getting it into
trouble if a server, by mistake or willingly, sends a header that is more or
less without an end. The limit is set to 100K.
saving received cookies with no given path, if the path in the request had a
query part. That means a question mark (?) and characters to the right of
it. I wrote test case 1105 and fixed this problem.
(http://curl.haxx.se/bug/view.cgi?id=2861587) identifying that libcurl used
the OpenSSL function X509_load_crl_file() wrongly and failed if it would
load a CRL file with more than one certificate within. This is now fixed.
powered libcurl in 7.19.6. If there was an X509v3 Subject Alternative Name
field in the certificate it had to match, so even a non-DNS and non-IP entry
present would cause the verification to fail.
start second "Thu Jan 1 00:00:00 GMT 1970" as the date parser then returns 0
which internally then is treated as a session cookie. That particular date
is now made to get the value of 1.
libcurl to resolve 'localhost' whatever name you use in the URL *if* you set
the --interface option to (exactly) "LocalHost". This will enable us to
write tests for custom host names but still use a local host server.
when cross-compiling. The key to success is to properly set up
PKG_CONFIG_PATH before invoking configure.
I also improved how NSS is detected by trying nss-config if pkg-config isn't
present, and as a last resort just use the lib name and force the user to
set up the LIBS/LDFLAGS/CFLAGS etc properly. The previous last resort would
add a range of various libs that would almost never be quite correct.
QUOTE commands and the request used the same path as the connection had
already changed to, it would decide that no commands would be necessary for
the "DO" action and that was not handled properly but libcurl would instead
hang.
read stdin in a non-blocking fashion. This also brings back -T- (minus) to
the previous blocking behavior since it could break stuff for people at
times.
strdup() that could lead to a segfault if it returned NULL. I extended his
suggested patch to make Curl_retry_request() return a regular return code
and to check that better.
Fix SIGSEGV on free'd easy_conn when pipe unexpectedly breaks
Fix data corruption issue with re-connected transfers
Fix use after free if we're completed but easy_conn not NULL
sending of the TSIZE option. I don't like fixing bugs just hours before
a release, but since it was broken and the patch fixes this for him I decided
to get it in anyway.
each test, so that the test suite can now be used to actually test the
verification of cert names etc. This made an error show up in the OpenSSL-
specific code where it would attempt to match the CN field even if a
subjectAltName exists that doesn't match. This is now fixed and verified
in test 311.
should introduce an option to disable SNI, but as we're in feature freeze
now I've addressed the obvious bug here (pointed out by Peter Sylvester): we
shouldn't try to enable SNI when SSLv2 or SSLv3 is explicitly selected.
Code for OpenSSL and GnuTLS was fixed. NSS doesn't seem to have a particular
option for SNI, or are we simply not using it?
(http://curl.haxx.se/bug/view.cgi?id=2829955) mentioning the recent SSL cert
verification flaw found and exploited by Moxie Marlinspike. The presentation
he did at Black Hat is available here:
https://www.blackhat.com/html/bh-usa-09/bh-usa-09-archives.html#Marlinspike
Apparently at least one CA allowed a subjectAltName or CN that contained a
zero byte, and thus clients that assumed they would never have zero bytes
were exploited to OK a certificate that didn't actually match the site. Like
if the name in the cert was "example.com\0theactualsite.com", libcurl would
happily verify that cert for example.com.
libcurl now properly uses the length of the extracted name and no longer
assumes it is zero terminated.
only in some OpenSSL installs - like on Windows) isn't thread-safe and we
agreed that moving it to the global_init() function is a decent way to deal
with this situation.
CURLOPT_PREQUOTE) now accept a preceding asterisk before the command to
send when using FTP, as a sign that libcurl shall simply ignore the response
from the server instead of treating it as an error. Not treating a 400+ FTP
response code as an error means that failed commands will not abort the
chain of commands, nor will they cause the connection to get disconnected.
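As a rough sketch of how an application might use this, with a made-up
server and made-up file names (the leading asterisk is the new part):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    struct curl_slist *cmds = NULL;

    /* the leading '*' makes libcurl ignore a failure response to this
       command instead of aborting the whole command chain */
    cmds = curl_slist_append(cmds, "*DELE maybe-missing.txt");
    cmds = curl_slist_append(cmds, "MKD backup");

    curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/");
    curl_easy_setopt(curl, CURLOPT_QUOTE, cmds);
    curl_easy_perform(curl);

    curl_slist_free_all(cmds);
    curl_easy_cleanup(curl);
    return 0;
  }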
out that OpenSSL-powered libcurl didn't support the SHA-2 digest algorithm,
and provided the solution too: to use OpenSSL_add_all_algorithms() instead
of the older SSLeay_* alternative. OpenSSL_add_all_algorithms was added in
OpenSSL 0.9.5.
(http://curl.haxx.se/bug/view.cgi?id=2813123) and a patch that fixes the
problem:
Url A is accessed using auth. Url A redirects to Url B (on a different
server). Url B reuses a persistent connection. Url B has auth, even though
it's on a different server.
Note: if Url B does not reuse a persistent connection, auth is not sent.
This allows curl(1) to be used as a client-side tunnel for arbitrary stream
protocols by abusing chunked transfer encoding in both the HTTP request and
HTTP response. This requires server support for sending a response while a
request is still being read, of course.
If attempting to read from stdin returns EAGAIN, then we pause our sender.
This leaves curl to attempt to read from the socket while reading from stdin
(and thus sending) is paused.
to detect gnutls build options with pkg-config only and not libgnutls-config
anymore since GnuTLS has stopped distributing that tool. If an explicit path
is given to configure, we will instead guess on how to link and use that
lib. I did not use the patch from the bug report.
is almost always a VERY BAD IDEA. Yet there are still apps out there doing
this, and now recently it triggered a bug/side-effect in libcurl as when
libcurl sends a POST or PUT with NTLM, it sends an empty post first when it
knows it will just get a 401/407 back. If the app then replaced the
Content-Length header, it caused the server to wait for input that libcurl
wouldn't send. Aaron Oneal reported this problem in bug report #2799008
(http://curl.haxx.se/bug/view.cgi?id=2799008) and helped us verify the fix.
out that the cookie parser would leak memory when it parses cookies that are
received with domain, path etc set multiple times in the same header. While
such a cookie is questionable, they occur in the wild and libcurl no longer
leaks memory for them. I added such a header to test case 8.
of streams that had some parts (legitimately) missing. We now provide and use
a proper cleanup function for the content encoding submodule.
http://curl.haxx.se/mail/lib-2009-05/0092.html
as reported by Ebenezer Ikonne (on curl-users) and Laurent Rabret (on
curl-library). The transfer was mistakenly marked to get more data to send
but since it didn't actually have that, it just hung there...
(http://curl.haxx.se/bug/view.cgi?id=2784055) identifying a problem to
connect to SOCKS proxies when using the multi interface. It turned out to
almost not work at all previously. We need to wait for the TCP connect to
be properly verified before doing the SOCKS magic.
There's still a flaw in the FTP code for this.
(http://curl.haxx.se/bug/view.cgi?id=2786255) with a patch, identifying how
libcurl did not deal with SSL session ids properly if the server rejected a
re-use of one. Starting now, it will forget the rejected one and remember
the new one. This change was for OpenSSL only; it is likely that other SSL
lib code needs similar fixes.
I've now made TFTP "connections" not be kept for re-use within libcurl.
TFTP is UDP-based, so the benefit was really low (if it existed at all) to
begin with, so instead of tracking down and fixing this problem we removed
the re-use. I also enabled test case 1099 that I wrote a few days ago to verify
that this change fixes the reported problem.
Chen pointed out how curl couldn't upload with resume when reading from a
pipe.
This ended up with the introduction of a new return code for the
CURLOPT_SEEKFUNCTION callback that basically says that the seek failed but
that libcurl may try to resolve the situation anyway. In our case this means
libcurl will instead attempt to read that much data from the stream rather
than seeking, and that way curl can now upload with resume when data is read
from a stream!
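For reference, a minimal sketch of such a seek callback, assuming the new
return code is the CURL_SEEKFUNC_CANTSEEK define from curl.h (next to
CURL_SEEKFUNC_OK and CURL_SEEKFUNC_FAIL):

  #include <curl/curl.h>

  /* seek callback for a non-seekable stream such as a pipe; returning
     CURL_SEEKFUNC_CANTSEEK tells libcurl the seek failed but that it may
     try to work around it, e.g. by reading and discarding data instead */
  static int seek_cb(void *userp, curl_off_t offset, int origin)
  {
    (void)userp;
    (void)offset;
    (void)origin;
    return CURL_SEEKFUNC_CANTSEEK;
  }

  static void setup_seek(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb);
    curl_easy_setopt(curl, CURLOPT_SEEKDATA, NULL);
  }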
how it occurs (http://curl.haxx.se/mail/lib-2009-04/0289.html). The
conclusion was that if an error is detected and Curl_done() is called for
the connection, ftp_done() could at times return another error code that
then would take precedence and that new code confused existing logic that
works for the first error code (CURLE_SEND_ERROR) only.
OBJECTPOINT options. Now we've introduced a new function - my_setopt_str -
within the app for setting plain string options to avoid the risk of this
mistake happening.
Storsjo pointed out how setting CURLOPT_NOBODY to 0 could be downright
confusing as it set the method to either GET or HEAD. The example he showed
looked like:
curl_easy_setopt(curl, CURLOPT_PUT, 1);
curl_easy_setopt(curl, CURLOPT_NOBODY, 0);
The new way doesn't alter the method until the request is about to start. If
CURLOPT_NOBODY is then 1 the HTTP request will be HEAD. If CURLOPT_NOBODY is
0 and the request happens to have been set to HEAD, it will then instead be
set to GET. I believe this will be less surprising to users, and hopefully
not hit any existing users badly.
out to be leaking cacerts. Kamil Dudka helped me complete the fix. The issue
is found in Redhat's bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=453612
There are still memory leaks present, but they seem to have other reasons.
non-configured libcurl. In this case curl_off_t data type was gated
to the off_t data type which depends on the _FILE_OFFSET_BITS. This
configuration is exactly the unwanted configuration for our curl_off_t
data type which must not depend on such setting. This breaks ABI for
libcurl libraries built with Sun compilers which were built without
having run the configure script with _FILE_OFFSET_BITS different than
64 and using the ILP32 data model.
curl_easy_duphandle did not necessarily duplicate the CURLOPT_COOKIEFILE
option. It only enabled the cookie engine in the destination handle if
data->cookies is not NULL (where data is the source handle). In case of a
newly initialized handle which just had the cookie support enabled by a
curl_easy_setopt(handle, CURLOPT_COOKIEFILE, "") call, handle->cookies was
still NULL because the setopt-call only appends the value to
data->change.cookielist, hence duplicating this handle would not have the
cookie engine switched on.
We also concluded that the slist-functionality would be suitable for being
put in its own module rather than simply hanging out in lib/sendf.c so I
created lib/slist.[ch] for them.
scripts to make it detect a bad checkout earlier. People with older
checkouts who don't do cvs update with the -d option won't get the new dirs
and then will get funny outputs that can be a bit hard to understand and
fix.
in the gnutls code where we were checking for negative values for errors,
when the man pages state that GNUTLS_E_SUCCESS is returned on success and
other values indicate error conditions.
curl didn't use sprintf() in a way that is documented to work in POSIX but
since we use our own printf() code (from libcurl) that shouldn't be a
problem. Nonetheless I modified the code to not rely on such particular
features and to not cause further raised eyebrows with no good reason.
(http://curl.haxx.se/docs/adv_20090303.html also known as CVE-2009-0037) in
which previous libcurl versions (by design) can be tricked to access an
arbitrary local/different file instead of a remote one when
CURLOPT_FOLLOWLOCATION is enabled. This flaw is now fixed in this release
together with the addition of two new setopt options for controlling this
new behavior:
o CURLOPT_REDIR_PROTOCOLS controls what protocols libcurl is allowed to
follow when CURLOPT_FOLLOWLOCATION is enabled. By default, this option
excludes the FILE and SCP protocols, so you need to explicitly allow
them in your app if you really want that behavior.
o CURLOPT_PROTOCOLS controls what protocol(s) libcurl is allowed to fetch
using the primary URL option. This is useful if you want to allow a user or
other outsiders control what URL to pass to libcurl and yet not allow all
protocols libcurl may have been built to support.
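A minimal sketch of using the two options together (the bitmask values are
the CURLPROTO_* defines):

  #include <curl/curl.h>

  static void restrict_protocols(CURL *curl)
  {
    /* only allow HTTP and HTTPS for the primary URL... */
    curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                     CURLPROTO_HTTP | CURLPROTO_HTTPS);
    /* ...and for anything a Location: header sends us to */
    curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                     CURLPROTO_HTTP | CURLPROTO_HTTPS);
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  }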
curl_global_init() function to keep the performing functions properly
thread-safe. We had previously (28 April 2007) moved the init to a later
time just to avoid it failing very early when libgcrypt dislikes the
situation,
but that move was bad and the fix should rather be in libgcrypt or
elsewhere.
CURLINFO_CONTENT_LENGTH_DOWNLOAD and CURLINFO_CONTENT_LENGTH_UPLOAD return
-1 if the sizes aren't known. Previously these returned 0, making it
impossible to detect the difference between actually zero and unknown.
FTP with the multi interface: when a transfer fails, like when aborted by a
write callback, the control connection was wrongly closed and thus not
re-used properly.
This change is also an attempt to cleanup the code somewhat in this area, as
now the FTP code attempts to keep (better) track of the pending responses
that need to be read in ftp_done().
libcurl did a superfluous 1000ms wait when doing SFTP downloads!
We read data with libssh2 while doing the "DO" operation for SFTP and then
when we were about to start getting data for the actual file part, the
"TRANSFER" part, we waited for socket action (in 1000ms) before doing a
libssh2-read. But in this case libssh2 had already read and buffered the
data, so we ended up always just waiting 1000ms before getting to work on
the data!
plain FTP connections, and it will then allow MKD to fail once and retry the
CWD afterwards. This is especially useful if you're doing many simultaneous
connections against the same server with this option enabled on all of them:
CWD may first fail because another connection does MKD before this one, so
this connection's MKD fails but the retried CWD works! The numbers can
(should?) now be set with the convenience enums now called
CURLFTP_CREATE_DIR and CURLFTP_CREATE_DIR_RETRY.
Tests have proven that if you're making an application that uploads a set of
files to an ftp server, you will get a noticeable gain in speed if you're
using multiple connections, and this option will then be very useful.
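A minimal sketch of enabling the retry behavior:

  #include <curl/curl.h>

  static void enable_mkd_retry(CURL *curl)
  {
    /* create missing dirs, and retry the CWD once if MKD fails (for
       instance because a parallel connection just created the dir) */
    curl_easy_setopt(curl, CURLOPT_FTP_CREATE_MISSING_DIRS,
                     (long)CURLFTP_CREATE_DIR_RETRY);
  }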
the condition in the previous request was unmet. This is typically a time
condition set with CURLOPT_TIMECONDITION and was previously not possible to
reliably figure out. From bug report #2565128
(http://curl.haxx.se/bug/view.cgi?id=2565128)
interface and setting CURLMOPT_MAXCONNECTS to something less than the number
of handles you add to the multi handle. All the connections that didn't fit
in the cache would not be properly disconnected nor freed!
version 1.1 instead of 1.0 like before. This change also introduces the new
proxy type for libcurl called 'CURLPROXY_HTTP_1_0' that then allows apps to
switch (back) to CONNECT 1.0 requests. The curl tool also got a --proxy1.0
option that works exactly like --proxy but sets CURLPROXY_HTTP_1_0.
I updated all test cases that use CONNECT, and I made some use --proxy1.0
and some do CONNECT 1.1 to get both versions exercised.
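A minimal sketch of switching back to CONNECT 1.0 from an application (the
proxy host name is made up):

  #include <curl/curl.h>

  static void connect_with_http_1_0(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:8080");
    curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
    /* issue the CONNECT request with HTTP 1.0 instead of the
       new 1.1 default */
    curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_HTTP_1_0);
  }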
enabled, we can now take advantage of its brand new AF_UNSPEC support in
ares_gethostbyname(). This makes test case 241 finally run fine for me with
this setup since it now parses the "::1 ip6-localhost" line fine in my
/etc/hosts file!
(http://curl.haxx.se/bug/view.cgi?id=2550061) mentioning that I failed to
properly make sure that the VC9 makefiles got included in the latest
release. I've now fixed the release script and verified it so next release
will hopefully include them properly!
CURLOPT_SOCKS5_GSSAPI_SERVICE and CURLOPT_SOCKS5_GSSAPI_NEC to allow libcurl
to do GSS-style authentication with SOCKS5 proxies. The curl tool got the
options called --socks5-gssapi-service and --socks5-gssapi-nec to enable
these.
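A minimal sketch of the libcurl side (the proxy host name is made up, and
"rcmd" is assumed here to be the customary default service name):

  #include <curl/curl.h>

  static void socks5_gssapi(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_PROXY, "socksproxy.example.com:1080");
    curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS5);
    /* service name used for the GSS-API negotiation */
    curl_easy_setopt(curl, CURLOPT_SOCKS5_GSSAPI_SERVICE, "rcmd");
    /* set to 1L to use the NEC flavor of the protection negotiation */
    curl_easy_setopt(curl, CURLOPT_SOCKS5_GSSAPI_NEC, 0L);
  }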
disable "rfc4507bis session ticket support". rfc4507bis was later turned
into the proper RFC5077 it seems: http://tools.ietf.org/html/rfc5077
The enabled extension concerns the session management. I wonder how often
libcurl stops a connection and then resumes a TLS session. Also, sending the
session data is some overhead. I suggest that you just use your proposed
patch (which explicitly disables TICKET).
If someone writes an application with libcurl and openssl who wants to
enable the feature, one can do this in the SSL callback.
Sharad Gupta brought this to my attention. Peter Sylvester helped me decide
on the proper action.
(http://curl.haxx.se/bug/view.cgi?id=2535504) pointing out that realms with
quoted quotation marks in HTTP Digest headers didn't work. I've now added
test case 1095 that verifies my fix.
They basically offer the same thing the NO_PROXY environment variable only
offered previously: list a set of host names that shall not use the proxy
even if one is specified.
clarity. This does fix one problem that causes ;type=i FTP URLs
to fail in the Turkish locale when CURLOPT_PROXY_TRANSFER_MODE is
used (test case 561).
Added tests 561 and 1092 through 1094 to test various combinations
of ;type= and ;mode= URLs that could potentially fail in the Turkish
locale.
by Daniel Black, I've now added magic to the configure script that makes it
use pkg-config to detect gnutls details as well if the existing method
(using libgnutls-config) fails. While doing this, I cleaned up and unified
the pkg-config usage when detecting openssl and nss as well.
When using the multi interface over HTTP and the server returns a Location
header, the running easy handle will get stuck in the CURLM_STATE_PERFORM
state, leaving the external event loop stuck waiting for data from the
incoming socket (when using the curl_multi_socket_action stuff). While this
bug was pretty hard to find, it seems to require only a one-line fix. The
break statement on line 1374 in multi.c caused the function to skip the call
to multistate().
How to reproduce this bug? Well, that's another question. evhiperfifo.c in
the examples directory chokes on this bug only _sometimes_, probably
depending on how fast the URLs are added. One way of testing the bug out is
writing to hiper.fifo from more than one source at the same time.
curl_easy_reset() by creating Curl_init_userdefined(). This had the side effect
of fixing curl_easy_reset() so it now also resets CURLOPT_FTP_FILEMETHOD and
CURLOPT_SSL_SESSIONID_CACHE.
I have to jump through a few hoops now with the NSS library initialization
since another part of an application may have already initialized NSS by the
time Curl gets invoked. This patch is more careful to only shut down the
NSS library if Curl did the initialization.
It also adds in a bit of code to set the default ciphers if the app that
called NSS_Init* did not call NSS_SetDomesticPolicy() or set specific
ciphers. One might argue that this lets other application developers get
lazy and/or they aren't using the NSS API correctly, and you'd be right.
But still, this will avoid terribly difficult-to-trace crashes and is
generally helpful.
(http://curl.haxx.se/bug/view.cgi?id=2413067) that identified a problem that
would cause libcurl to mark a DNS cache entry "in use" eternally if the
subsequent TCP connect failed. It would thus never get pruned and refreshed
as it should've been.
pipelining, as libcurl could then easily get confused and A) work on the
handle that was not "first in queue" on a pipeline, or even B) tell the app
to REMOVE a socket while it was in use by a second handle in a pipeline. Both
errors caused hanging or stalling applications.
was actually ready to get done, as the internal time resolution is higher
than the returned millisecond timer. Therefore it could cause applications
running on fast processors to do short bursts of busy-loops.
curl_multi_timeout() will now only return 0 if the timeout has actually
already triggered.
now has an improved ability to do right when the multi interface (both
"regular" and multi_socket) is used for SCP and SFTP transfers. This should
result in (much) less busy-loop situations and thus less CPU usage with no
speed loss.
operation didn't complete properly if the EAGAIN equivalent was returned, as
libcurl would simply continue with a half-completed close operation. This
ruined persistent connection re-use and caused some SSH-protocol errors in
general. The correction unfortunately adds a
blocking function - doing it entirely non-blocking should be considered for
a better fix.
duphandle+curl_mutli" (http://curl.haxx.se/bug/view.cgi?id=2416182) showed
that curl_easy_duphandle() wrongly also copied the pointer to the connection
cache, which was plain wrong and caused a segfault if the handle would be
used in a different multi handle than the handle it was duplicated from.
there are servers "out there" that relies on the client doing this broken
Digest authentication. Apache even comes with an option to work with such
broken clients.
The difference is only for URLs that contain a query-part (a '?'-letter and
text to the right of it).
libcurl now supports this quirk, and you enable it by setting the
CURLAUTH_DIGEST_IE bit in the bitmask you pass to the CURLOPT_HTTPAUTH or
CURLOPT_PROXYAUTH options. They are thus individually controllable for
server and proxy.
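A minimal sketch:

  #include <curl/curl.h>

  static void use_ie_digest(CURL *curl)
  {
    /* IE-flavored Digest towards the host... */
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST_IE);
    /* ...and, independently, towards the proxy */
    curl_easy_setopt(curl, CURLOPT_PROXYAUTH, (long)CURLAUTH_DIGEST_IE);
  }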
particular state for the control connection like it did before for implicit
FTPS (libcurl assumed such control connections to be encrypted while some
FTPS servers such as FileZilla assume such connections to be in clear
mode). Use the CURLOPT_USE_SSL option to set your desired level.
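A minimal sketch of asking for encryption explicitly (the URL is made up;
CURLUSESSL_TRY and CURLUSESSL_CONTROL are the milder levels):

  #include <curl/curl.h>

  static void require_ssl(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "ftps://ftps.example.com/file.txt");
    /* fail the transfer if SSL/TLS can't be negotiated */
    curl_easy_setopt(curl, CURLOPT_USE_SSL, (long)CURLUSESSL_ALL);
  }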
researching it, it turned out he got a 550 response back from a SIZE command
and then I stumbled over the text in RFC3659 that says:
The presence of the 550 error response to a SIZE command MUST NOT be taken
by the client as an indication that the file cannot be transferred in the
current MODE and TYPE.
In other words: the change I did on September 30th 2008, which has been
included in the last two releases, was a regression and a bad idea. We MUST
NOT take a 550 response from SIZE as a hint that the file doesn't exist.
(http://curl.haxx.se/bug/view.cgi?id=2221237) that identified an infinite
loop during GSS authentication given some specific conditions. With his
patience and great feedback I managed to narrow down the problem and
eventually fix it although I can't test any of this myself!
(http://curl.haxx.se/bug/view.cgi?id=2351645) that identified a problem with
the multi interface that occurred if you removed an easy handle while in
progress and the handle was used in a HTTP pipeline.
function when built to support SCP and SFTP that helps the library to know
in which direction a particular libssh2 operation would return EAGAIN so
that libcurl knows what socket conditions to wait for before trying the
function call again. Previously (and still when using libssh2 0.18 or
earlier), libcurl would busy-loop in this situation when the easy interface
is used!
when uploading files to a single FTP server using multiple easy handles
with the multi interface. Occasionally a handle would stall in
mysterious ways.
The problem turned out to be a side-effect of the ConnectionExists()
function's eagerness to re-use a handle for HTTP pipelining so it would
select it even if it was already in use, due to an inadequate check of its
chances of being used for pipelining.
(http://curl.haxx.se/bug/view.cgi?id=2255627) which pointed out that a
program using libcurl's multi interface to download a HTTPS page with a
libcurl built to use OpenSSL, would easily get silly and hand over SSL
details as data instead of the actual HTTP headers and body. This
happened because libcurl would consider the connection handshake done too
early. This problem was introduced on September 22nd 2008 with my fix of the
bug #2107377
The correct fix is now instead done within the GnuTLS-handling code, as both
the OpenSSL and the NSS code already deal with this situation in similar
fashion. I added test case 560 in an attempt to verify this fix, but
unfortunately it didn't trigger it even before this fix!
problem with MSVC 6 makefile that caused a build failure. It was noted that
the curl_addrinfo.obj reference was missing. I took the opportunity to sort
the list in which this was missing.
make CURLOPT_PROXYUSERPWD sort of deprecated. The primary motive for adding
these new options is that they have no problems with the colon separator
that the CURLOPT_PROXYUSERPWD option does.
(http://curl.haxx.se/bug/view.cgi?id=2154627) which pointed out that libcurl
uses strcasecmp() in multiple places where it causes failures when the
Turkish locale is used. This is because 'i' and 'I' aren't the same letter
in Turkish, so strcasecmp() on those letters gives a different result there
than in English (or just about all other languages). I thus introduced a
totally new internal function in libcurl (called Curl_ascii_equal) for doing
case insensitive comparisons of English/ASCII-style strings, which will make
"file" and "FILE" match even if the Turkish locale is selected.
systems supporting getifaddrs(). Also fixed a problem where an IPv6
address could be chosen instead of an IPv4 one for --interface when it
involved a name lookup.
fixed a CURLINFO_REDIRECT_URL memory leak and an additional wrong-doing:
Any subsequent transfer with a redirect leaks memory, potentially crashing
the process eventually.
Any subsequent transfer WITHOUT a redirect causes the most recent redirect
that DID occur on some previous transfer to still be reported.
eventually identified a flaw in how the multi_socket interface in some cases
failed to call the timeout callback when easy handles are removed and added
within the same millisecond.
curl_easy_setopt: CURLOPT_USERNAME and CURLOPT_PASSWORD that sort of
deprecates the good old CURLOPT_USERPWD since they allow applications to set
the user name and password independently and perhaps more importantly allow
both to contain colon(s) which CURLOPT_USERPWD doesn't fully support.
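A minimal sketch, with made-up credentials that would be impossible to
express unambiguously with CURLOPT_USERPWD:

  #include <curl/curl.h>

  static void set_credentials(CURL *curl)
  {
    /* colons in either string are no problem here */
    curl_easy_setopt(curl, CURLOPT_USERNAME, "user:name");
    curl_easy_setopt(curl, CURLOPT_PASSWORD, "pass:word");
  }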
the app re-used the handle to do a connection to host B and then again
re-used the handle to host A, it would not update the info with host A's IP
address (due to the connection being re-used) but it would instead report
the info from host B.
gets a 550 response back for the cases where a download (or NOBODY) is
wanted. It still allows a 550 as response if the SIZE is used as part of an
upload process (like if resuming an upload is requested and the file isn't
there before the upload). I also modified the FTP test server and a few test
cases accordingly to match this modified behavior.
date parser function. This makes our function less dependent on system-
provided functions and instead we do all the magic ourselves. We also no
longer depend on the TZ environment variable.
Markus Moeller reported: http://curl.haxx.se/mail/archive-2008-09/0016.html
- recv() errors other than those equal to EAGAIN now cause proper
CURLE_RECV_ERROR to get returned. This made test case 160 fail so I've now
disabled it until we can figure out another way to exercise that logic.
proxy" (http://curl.haxx.se/bug/view.cgi?id=2107377) that showed how a multi
interface using program didn't work when built with GnuTLS and a CONNECT
request was done over a proxy (basically test 502 over a proxy to a HTTPS
site). It turned out the ssl connect function would get called twice which
caused the second call to fail.
sites in cases where the cookie clearly has a very old expiry date. The
condition was simply that libcurl's date parser would fail to convert the
date and it would then count as a (time-based) match. Starting now, a
missed date due to an unsupported date format or date range will cause
the cookie to not match.
CURLOPT_POST301 (but adds a define for backwards compatibility for you who
don't define CURL_NO_OLDIES). This option allows you to now also change the
libcurl behavior for a HTTP response 302 after a POST to not use GET in the
subsequent request (when CURLOPT_FOLLOWLOCATION is enabled). I edited the
patch somewhat before commit. The curl tool got a matching --post302
option. Test case 1076 was added to verify this.
enabling this feature with CURLOPT_CERTINFO for a request using SSL (HTTPS
or FTPS), libcurl will gather lots of server certificate info and that info
can then get extracted by a client after the request has completed with
curl_easy_getinfo()'s CURLINFO_CERTINFO option. Linus Nielsen Feltzing
helped me test and smoothen out this feature.
Unfortunately, this feature currently only works with libcurl built to use
OpenSSL.
This feature was sponsored by networking4all.com - thanks!
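A rough sketch of extracting the info, assuming the struct curl_certinfo
layout with num_of_certs and an array of slists as documented for this
option (the URL is made up):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    struct curl_certinfo *ci = NULL;

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_CERTINFO, 1L);

    if(curl_easy_perform(curl) == CURLE_OK &&
       curl_easy_getinfo(curl, CURLINFO_CERTINFO, &ci) == CURLE_OK && ci) {
      int i;
      for(i = 0; i < ci->num_of_certs; i++) {
        struct curl_slist *field;
        for(field = ci->certinfo[i]; field; field = field->next)
          printf("cert %d: %s\n", i, field->data);
      }
    }
    curl_easy_cleanup(curl);
    return 0;
  }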
file for libcurl, and while doing that fix he unified with curl-config.in
how the supported protocols and features are extracted and used, so both those
tools should now always be synced.
"Connection: close" and actually close the connection after the
response-body, libcurl could still have outstanding data to send and it
would not properly notice this and stop sending. This caused weirdness and
sad faces. http://curl.haxx.se/bug/view.cgi?id=2080222
Note that there are still reasons to consider libcurl's behavior when
getting a >= 400 response code while sending data, as Craig Perras' note
"http upload: how to stop on error" specifies:
http://curl.haxx.se/mail/archive-2008-08/0138.html
an unlock in between) for a certain case and that in fact works when using
regular windows mutexes but not with pthreads'! Locks should of course not
get locked again so this is now fixed.
http://curl.haxx.se/mail/lib-2008-08/0422.html
which caused an error when the second header was dumped due to stdout
being closed. Added test case 1066 to verify. Also fixed a potential
problem where a closed file descriptor might be used for an upload
when more than one URL is given.
memory leak because it never called the OpenSSL function
CRYPTO_cleanup_all_ex_data() as it was supposed to. This was because of a
missing define in config-win32.h!
(http://curl.haxx.se/bug/view.cgi?id=2042430) with a patch. "NTLM Windows
SSPI code is not thread safe". This was due to libcurl using static
variables to tell whether to load the necessary SSPI DLL, but now the loading
has been moved to the more suitable curl_global_init() call.
(http://curl.haxx.se/bug/view.cgi?id=2042440) with a patch. He identified a
problem when using NTLM over a proxy but the end-point does Basic, and then
libcurl would do wrong when the host sent "Connection: close" as the proxy's
NTLM state was erroneously cleared.
connection with the multi interface even if a previous use of it caused a
CURLE_PEER_FAILED_VERIFICATION to get returned. I now make sure that failed
SSL connections properly close the connections.
proved how PUT and POST with a redirect could lead to a "hang" due to the
data stream not being rewound properly when it had to in order to get sent
properly (again) to the subsequent URL. This is now fixed and these test
cases are no longer disabled.
with -C - sent garbage in the Content-Range: header. I fixed this problem by
making sure libcurl always sets the size of the _entire_ upload if an app
attempts to do resumed uploads, since libcurl simply cannot know the size of
what is currently at the server end. Test 1041 is no longer disabled.
when we have been doing this since revision 1.47 of configure.ac 4 years and
5 months ago when cross-compiling a Windows target. We actually don't use any
function from the Windows GDI (Graphics Device Interface) related to drawing
or graphics operations.
incorrectly: the host name is treated as part of the user name and the
port number becomes the password. This can be observed in test 279
(was KNOWN_ISSUE #54).
parser to allow numerical IPv6-addresses to be specified with the scope
given, as per RFC4007 - with a percent letter that itself needs to be URL
escaped. For example, for an address of fe80::1234%1 the HTTP URL is:
"http://[fe80::1234%251]/"
true bug in libcurl built with OpenSSL. It made curl_easy_getinfo() more or
less always return 0 for CURLINFO_SSL_VERIFYRESULT because the function that
would set it to something non-zero would return before the assign in almost
all error cases. The internal variable is now set to non-zero from the start
of the function only to get cleared later on if things work out fine.
overrun" (http://curl.haxx.se/bug/view.cgi?id=2026240) identifying two
problems, and providing the fix for them:
- CURL_READFUNC_PAUSE did in fact not pause the _sending_ of data that it is
designed for but paused _receiving_ of data!
- libcurl didn't internally set the read counter to zero when this return
code was detected, which would potentially lead to junk getting sent to
the server.
is set in fdset.events" (http://curl.haxx.se/bug/view.cgi?id=2015126) which
exactly pinpointed the problem, which only triggered on Windows Vista, provided
reference to docs and also a fix. There is much work behind Peter Lamberg's
excellent bug report. Thank You!
fix for it. It occurred when you did an FTP transfer using
CURLFTPMETHOD_SINGLECWD and then did another one on the same easy handle but
switched to CURLFTPMETHOD_NOCWD, due to the "dir depth" variable not being
cleared properly. Scott's test case is now known as test 539 and it
verifies the fix.
CURLINFO_APPCONNECT_TIME. This is set when the "application layer"
handshake/connection is completed (typically SSL, TLS or SSH). By using this
you can figure out the application layer's own connect time. You can extract
the time stamp using curl's -w option and the new variable named
'time_appconnect'. This feature was sponsored by Lenny Rachitsky at NeuStar.
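A minimal sketch of reading the value from an application after a completed
transfer:

  #include <stdio.h>
  #include <curl/curl.h>

  static void print_appconnect(CURL *curl)
  {
    double appconnect = 0.0;
    /* seconds from transfer start until the SSL/SSH handshake completed */
    curl_easy_getinfo(curl, CURLINFO_APPCONNECT_TIME, &appconnect);
    printf("app-layer connect took %.3f seconds\n", appconnect);
  }

On the command line, the same value comes out with something like
curl -w "%{time_appconnect}\n" as mentioned above.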
some systems" (http://curl.haxx.se/bug/view.cgi?id=1999181). The problem was
that the configure script did not use the _POSIX_MONOTONIC_CLOCK feature test
macro when checking monotonic clock availability. This is now fixed and the
monotonic clock will not be used unless the feature test macro is defined
with a value greater than zero indicating always supported.
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=487567) pointing out that
libcurl used Content-Range: instead of Range when doing a range request with
--head (CURLOPT_NOBODY). This is now fixed and test case 1032 was added to
verify.
handshake with a SSLv2 server, and it turned out to be because it didn't
recognize the cipher named "rc4-md5". In our list that cipher was named
plainly "rc4". I've now added rc4-md5 to work as an alias as Phil reported
that it made things work for him again.
crashed libcurl. This is now addressed by making sure we use "plain send"
internally when doing the socks handshake instead of the Curl_write()
function which is designed to use the "target" protocol. That's then SCP or
SFTP in this case. I also took the opportunity and cleaned up some ssh-
related #ifdefs in the code for readability.
libcurl to not tell the app properly when a socket was closed (when the name
resolution done by c-ares completes) and then immediately re-created and put to
use again (for the actual connection). Since the closure will make the
"watch status" get lost in several event-based systems libcurl will need to
tell the app about this close/re-create case.
multi interface with pipelining enabled as it would wrongly check for,
detect and close "dead connections" even though that connection was already
in use!
All boolean options (such as -O, -I, -v etc), both short and long versions,
now always switch on/enable the option named. Using the same option multiple
times thus makes no difference. To switch off one of those options, you need
to use the long version of the option and type --no-OPTION. Like to disable
verbose mode you use --no-verbose!
- Added --remote-name-all to curl, which if used changes the default for all
given URLs to be dealt with as if -O is used. So if you want to disable that
for a specific URL after --remote-name-all has been used, you must use -o -
or --no-remote-name.
curl_easy_getinfo. It returns a pointer to a string with the most recently
used IP address. Modified test case 500 to also verify this feature. The
implementing of this feature was sponsored by Lenny Rachitsky at NeuStar.
the curl_multi_socket() API with HTTP pipelining enabled and could lead to
the pipeline basically stalling for a very long period of time until it took
off again.
provided excellent repeat recipes. I fixed the cases I managed to reproduce
but Jeff still got some (SCP) problems even after these fixes:
http://curl.haxx.se/mail/lib-2008-05/0342.html
how the HTTP redirect following code didn't properly follow to a new URL if
the new url was but a query string such as "Location: ?moo=foo". Test case
1031 was added to verify this fix.
interface problems:
o with pipelining disabled, the state should never be set to WAITDO but
rather go straight to DO
o we had multiple states for which the internal function returned no socket
at all to wait for, with the effect that libcurl calls the socket callback
(when curl_multi_socket() is used) with REMOVE prematurely (as it would be
added again within very shortly)
o when in DO and DOING states, the HTTP and HTTPS protocol handler functions
didn't return that the socket should be waited on for writing; instead it
was treated as if no socket needed monitoring, so again REMOVE was
called prematurely.
and receive data over a connection previously setup with curl_easy_perform()
and its CURLOPT_CONNECT_ONLY option. The sendrecv.c example was added to
show how they can be used.
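The pair of functions in question are curl_easy_send() and curl_easy_recv().
A rough sketch along the lines of sendrecv.c (error handling and waiting for
socket readiness on CURLE_AGAIN are omitted for brevity; the URL is made up):

  #include <string.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    const char *req = "GET / HTTP/1.0\r\n\r\n";
    char buf[1024];
    size_t n = 0;

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L);
    curl_easy_perform(curl);   /* connects, then hands control back */

    /* now use the connection directly */
    curl_easy_send(curl, req, strlen(req), &n);
    curl_easy_recv(curl, buf, sizeof(buf), &n);

    curl_easy_cleanup(curl);
    return 0;
  }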
when using CURL_AUTH_ANY" (http://curl.haxx.se/bug/view.cgi?id=1945240).
The problem was that when libcurl rewound a stream meant for upload when it
would prepare for a second request, it could accidentally continue the
sending of the rewound data on the first request instead of on the second.
Ben also provided test case 1030 that verifies this fix.
since libcurl used getprotobyname() and that isn't thread-safe. We now
switched to use IPPROTO_TCP unconditionally, but perhaps the proper fix is
to detect the thread-safe version of the function and use that.
http://curl.haxx.se/mail/lib-2008-05/0011.html
redirections and thus cannot use CURLOPT_FOLLOWLOCATION easily, we now
introduce the new CURLINFO_REDIRECT_URL option that lets applications
extract the URL libcurl would've redirected to if it had been told to. This
then enables the application to continue to that URL as it thinks is
suitable, without having to re-implement the magic of creating the new URL
from the Location: header etc. Test 1029 verifies it.
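A minimal sketch of the intended use:

  #include <stdio.h>
  #include <curl/curl.h>

  static void show_redirect(CURL *curl)
  {
    char *url = NULL;
    /* with CURLOPT_FOLLOWLOCATION left disabled, run the transfer and
       then ask where libcurl would have gone */
    curl_easy_perform(curl);
    curl_easy_getinfo(curl, CURLINFO_REDIRECT_URL, &url);
    if(url)
      printf("would have redirected to: %s\n", url);
  }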
Define HAVE_GSSMIT if <gssapi/{gssapi.h,gssapi_generic.h,gssapi_krb5.h}> are
available, otherwise define HAVE_GSSHEIMDAL if <gssapi.h> is available.
Only define GSS_C_NT_HOSTBASED_SERVICE to gss_nt_service_name if
GSS_C_NT_HOSTBASED_SERVICE isn't declared by the gssapi headers. This should
avoid breakage in case we wrongly recognize Heimdal as MIT again.
GET simply because previously when you set CURLOPT_NOBODY to TRUE first and
then FALSE you'd end up in a broken state where a HTTP request would do a
HEAD but still act a lot like a GET and hang waiting for the content etc.
application to provide data for a multipart with the read callback. Note
that the size needs to be provided with CURLFORM_CONTENTSLENGTH when the
stream option is used. This feature is verified by the new test case
554. This feature was sponsored by Xponaut.
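The stream option in question is the CURLFORM_STREAM argument to
curl_formadd(). A rough sketch, feeding the part from an open FILE* (the
field and file names are made up):

  #include <stdio.h>
  #include <curl/curl.h>

  static size_t read_cb(void *ptr, size_t size, size_t nmemb, void *userp)
  {
    /* the CURLFORM_STREAM pointer arrives here as userp */
    return fread(ptr, size, nmemb, (FILE *)userp);
  }

  static void post_stream(CURL *curl, FILE *f, long fsize)
  {
    struct curl_httppost *post = NULL, *last = NULL;

    curl_formadd(&post, &last,
                 CURLFORM_COPYNAME, "file",
                 CURLFORM_STREAM, f,
                 CURLFORM_CONTENTSLENGTH, fsize, /* size is mandatory */
                 CURLFORM_FILENAME, "data.bin",
                 CURLFORM_END);

    curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
    curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);
    curl_easy_perform(curl);
    curl_formfree(post);
  }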
support when curl didn't even have regular LDAP support. It looks like
this could happen when the --enable-ldaps configure switch is given but
configure couldn't find the LDAP headers or libraries.
default instead of a ca bundle. The configure script will also look for a
ca path if no ca bundle is found and no option given.
- Fixed detection of previously installed curl-ca-bundle.crt
CURLOPT_FTP_CREATE_MISSING_DIRS to create a directory, and then re-used the
handle and uploaded another file to another directory that needed to be
created, the second upload would fail. Another case of a state variable that
wasn't properly reset between requests.
- I rewrote the 100-continue code to use a single state variable instead of
the previous two ones. I think it made the logic somewhat clearer.
(http://curl.haxx.se/bug/view.cgi?id=1911069) that identified a race
condition in the name resolver code when the DNS cache is shared between
multiple easy handles, each running in simultaneous threads that could cause
crashes.
does nothing with them, just to make sure libcurl users always use three
arguments to this function. Due to its use of ... for the third argument, it
is otherwise hard to detect abuse.
easy handle if curl_easy_reset() was used between them. I fixed it and Brian
verified that it cured his problem.
- Brian Ulm reported that if you first tried to download a non-existing SFTP
file and then fetched an existing one and re-used the handle, libcurl would
still report the second one as non-existing as well! I fixed it and Brian
verified that it cured his problem.
select/poll calls will only be retried upon EINTR failures as
it previously was in lib/select.c revision 1.29
In this way Curl_socket_ready() and Curl_poll() will again fail
on any select/poll errors different than EINTR.
better control at the exact state of the connection's SSL status so that we
know exactly when it has completed the SSL negotiation or not so that there
won't be accidental re-uses of connections that are wrongly believed to be
in SSL-completed-negotiate state.
such as the CURLOPT_SSL_CTX_FUNCTION one treat that as if it was a Location:
following. The patch that introduced this feature was done for 7.11.0, but
this code and functionality has been broken since about 7.15.4 (March 2006)
with the introduction of non-blocking OpenSSL "connects".
It was a hack to begin with and since it doesn't work and hasn't worked
correctly for a long time and nobody has even noticed, I consider it a very
suitable subject for plain removal. And so it was done.
get a fresh one downloaded and created with 'make ca-bundle' or you can get
one from here => http://curl.haxx.se/docs/caextract.html if you want a fresh
new one extracted from Mozilla's recent list of ca certs.
The configure option --with-ca-bundle now lets you specify what file to use
as default ca bundle for your build. If not specified, the configure script
will check a few known standard places for a global ca cert to use.
connection by force when it was called before the entire request is
completed, simply because we can't know if the connection really can be
re-used safely at that point.
verification is requested. Previously it would even return failure if gnutls
failed to get the server cert even though no verification was asked for.
- Fix my Curl_timeleft() leftover mistake in the gnutls code
out and provides a test program that demonstrates that libcurl might not set
the error description message for error CURLE_COULDNT_RESOLVE_HOST for Windows
threaded name resolver builds. Fixed now.
(http://curl.haxx.se/bug/view.cgi?id=1889856): When using the gnutls ssl
layer, cleaning-up and reinitializing curl ends up with https requests
failing with "ASN1 parser: Element was not found" errors. Obviously a
regression added in 7.16.3.
creates a suitable ca-bundle.crt file in PEM format for use with curl. The
recommended way to run it is to use 'make ca-bundle' in the build tree root.
"HttpOnly" feature introduced by Microsoft and apparently also supported by
Firefox: http://msdn2.microsoft.com/en-us/library/ms533046.aspx . HttpOnly
is now supported when received from servers in HTTP headers, when written to
cookie jars and when read from existing cookie jars.
(http://curl.haxx.se/bug/view.cgi?id=1879375) which describes how libcurl
got lost in this scenario: proxy tunnel (or HTTPS over proxy), ask to do any
proxy authentication and the proxy replies with an auth (like NTLM) and then
closes the connection after that initial informational response.
libcurl would not properly re-initialize the connection to the proxy and
continue the auth negotiation as it is supposed to. It does now however, as it
will
now detect if one or more authentication methods were available and asked
for, and will thus retry the connection and continue from there.
- I made the progress callback get called properly during proxy CONNECT.
CONNECT over a proxy. curl_multi_fdset() didn't report back the socket
properly during that state, due to a missing case in the switch in the
multi_getsock() function.
previously had a number of flaws, perhaps most notably when an application
fired up N transfers at once, as then they wouldn't pipeline anywhere near as
nicely as anyone would think... Test case 530 was also updated to take the
improved functionality into account.
(http://curl.haxx.se/bug/view.cgi?id=1871269) and we could fix his hang-
problem that occurred when doing a large HTTP POST request with the
response-body read from a callback.
--keepalive-time to curl to set the keepalive probe interval. I also took
the opportunity to rename the recently added no-keep-alive option to
no-keepalive to keep a consistent naming and to avoid getting two dashes in
these option names. Eric also provided an update to the man page for the new
option.
(it already skipped /usr/lib before). /usr/lib64 is the default library
directory on many 64bit systems and it's unlikely that anyone would use the
path privately on systems where it's not.
libcurl to seek in a given input stream. This is particularly important when
doing upload resumes when there's already a huge part of the file present
remotely. Before, and still if this callback isn't used, libcurl will read
and throw away the entire file up to the point where the resuming begins
(which of course can be a slow operation depending on file size, I/O
bandwidth and more). This new function will also be preferred to get
used instead of the CURLOPT_IOCTLFUNCTION for seeking back in a stream when
doing multi-stage HTTP auth with POST/PUT.
(http://curl.haxx.se/bug/view.cgi?id=1868255) with a patch. It identifies
and fixes a problem with parsing WWW-Authenticate: headers with additional
spaces in the line that the parser wasn't written to deal with.
(http://curl.haxx.se/bug/view.cgi?id=1863171) where he pointed out that
libcurl's date parser didn't accept a +1300 time zone which actually is used
fairly often (like New Zealand's Daylight Savings Time), so I modified the
parser to now accept up to and including -1400 to +1400.
code to instead introduce support for a new proxy type called
CURLPROXY_SOCKS5_HOSTNAME that is used to send the host name to the proxy
instead of IP address and there's thus no longer any need for a new
curl_easy_setopt() option.
The default SOCKS5 proxy is again back to sending the IP address to the
proxy. The new curl command line option for enabling sending host name to a
SOCKS5 proxy is now --socks5-hostname.
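A minimal sketch of the libcurl equivalent (the proxy host name is made up):

  #include <curl/curl.h>

  static void socks5_remote_resolve(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_PROXY, "socksproxy.example.com:1080");
    /* have the SOCKS5 proxy resolve the name instead of doing it locally */
    curl_easy_setopt(curl, CURLOPT_PROXYTYPE,
                     (long)CURLPROXY_SOCKS5_HOSTNAME);
  }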
proxy do the host name resolving and only if --socks5ip (or
CURLOPT_SOCKS5_RESOLVE_LOCAL) is used we resolve the host name locally and
pass on the IP address only to the proxy.
made it an unsigned int. The type was only used in the curl_sockaddr struct
definition (only used by the curl_opensocket_callback). On all platforms I
could find information about, socklen_t is a 32-bit unsigned type, so I don't
think this will break the API or ABI. The main reason for this change is of
course for all the platforms that don't have a socklen_t definition in their
headers to build fine again. Providing our own configure magic and custom
definition of socklen_t on those systems proved to work but was a lot of
cruft, code and extra magic needed - when this very small change of type seems
harmless and still solves the missing socklen_t problem.
is an unofficial SOCKS4 variant that sends the hostname to the proxy instead
of the resolved address (which is already supported by SOCKS5). --socks4a is
the curl command line option for it and CURLOPT_PROXYTYPE can now be set to
CURLPROXY_SOCKS4A as well.
(http://curl.haxx.se/bug/view.cgi?id=1856628) and provided a fix for the
(small) memory leak in the SSL session ID caching code. It happened when a
previous entry in the cache was re-used.
(http://curl.haxx.se/bug/view.cgi?id=1849764) with an included fix. He
identified a problem for re-used connections that previously had sent
Expect: 100-continue and in some situations the subsequent POST (that didn't
use Expect:) still had the internal flag set for its use. David's fix (which
sets the flag unconditionally in every single request) is fine and is now
used!
callback) over a proxy when NTLM is used as auth with the proxy. The bug
also concerned Digest and was limited to using callback only. Spacen worked
with us to provide a useful patch. I added the test case 547 and 548 to
verify two variations of POST over proxy with NTLM.
the appending of the "type=" thing on FTP URLs when they are passed to a
HTTP proxy. Some proxies just don't like that appending (which is done
unconditionally in 7.17.1), and some proxies treat binary/ascii transfers
better with the appending done!
the same state struct as the host auth, so both could never be used at the
same time! I fixed it (without being able to check) to use two separate
structs to allow authentication using Negotiate on host and proxy
simultaneously.
callback was used, as it could wrongly pass on a bad size for the outgoing
HTTP header. The bad size would be a very large value as it was a wrapped
size_t content. This happened when the whole HTTP request failed to get sent
in one single send. http://curl.haxx.se/mail/lib-2007-11/0165.html
forwarded from the Gentoo bug tracker by Daniel Black and was originally
submitted by Robin Johnson, pointed out that libcurl would do bad memory
references when it failed and bailed out before the handler thing was set
up. My fix is not done like the provided patch does it, but instead I
make sure that there's never any chance for a NULL pointer in that struct
member.
out that SFTP requests didn't use persistent connections. Neither did SCP
ones. I gave the SSH code a good beating and now both SCP and SFTP should
use persistent connections fine. I also did a bunch of indent changes as
well as a bug fix for the "keyboard interactive" auth.
out a problem in curl.h when building C++ apps with MSVC. To fix it, the
inclusion of header files in curl.h is moved outside of the C++ extern "C"
linkage block.
all the numericals at the top phrased "shorter" and I cut out the "number of
releases since the very beginning" since that's just the number curl releases
+ 26 and not a very interesting number anyway.
building with VC8 to get the "manifest" embedded to make fine stand-alone
binaries. The maketgz and the src/Makefile.vc6 files were adjusted
accordingly.
https://bugzilla.novell.com/show_bug.cgi?id=332917 about a HTTP redirect to
FTP that caused memory havoc. His work together with my efforts created two
fixes:
#1 - FTP::file was moved to struct ftp_conn, because it has to be dealt with
at connection cleanup, at which time the struct HandleData could be
used by another connection.
Also, the unused char *urlpath member is removed from struct FTP.
#2 - provide a Curl_reset_reqproto() function that frees
data->reqdata.proto.* on connection setup if needed (that is if the
SessionHandle was used by a different connection).
This happened because the tftp code always unconditionally did a bind()
without caring whether one had already been done, and then it failed. I wrote a
test case (1009) to verify this, but it is a bit error-prone since it will
have to pick a fixed local port number and since the tests are run on so
many different hosts in different situations I add it in disabled state.
CURLOPT_OPENSOCKETDATA to set a callback that allows an application to replace
the socket() call used by libcurl. It basically allows the app to change
address, protocol or whatever of the socket. (I also did some whitespace
indent/cleanups in lib/url.c which kind of hides some of these changes, sorry
for mixing those in.)
CURLOPT_SSH_HOST_PUBLIC_KEY_MD5 and the curl tool --hostpubmd5. They both make
the SCP or SFTP connection verify the remote host's md5 checksum of the public
key before doing a connect, to reduce the risk of a man-in-the-middle attack.
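A minimal sketch of the libcurl side (the md5 string here is made up; it
must be exactly 32 hex digits):

  #include <curl/curl.h>

  static void pin_host_key(CURL *curl)
  {
    /* abort the connection unless the remote key's md5 matches this */
    curl_easy_setopt(curl, CURLOPT_SSH_HOST_PUBLIC_KEY_MD5,
                     "00112233445566778899aabbccddeeff");
  }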
function do wrong on all input bytes that are >= 0x80 (decimal 128) due to a
signed / unsigned mistake in the code. I fixed it and added test case 543 to
verify.
curl_easy_setopt() that alters how libcurl functions when following
redirects. It makes libcurl obey the RFC2616 when a 301 response is received
after a non-GET request is made. Default libcurl behaviour is to change
method to GET in the subsequent request (like it does for response code 302
- because that's what many/most browsers do), but with this CURLOPT_POST301
option enabled it will do what the spec says and do the next request using
the same method again, i.e. keep POST after 301.
The curl tool got this option as --post301
Test case 1011 and 1012 were added to verify.
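A small usage sketch, assuming an already created easy handle 'curl':
  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  curl_easy_setopt(curl, CURLOPT_POST301, 1L); /* keep POST after a 301 */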
CURLOPT_NOBODY enabled but not CURLOPT_HEADER, libcurl wouldn't do TYPE
before it does SIZE, which makes it less useful. I walked over the code and
made it do this properly, and added test case 542 to verify it.
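For instance, getting a remote FTP file's size without transferring it could
look like this (URL made up):
  curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.bin");
  curl_easy_setopt(curl, CURLOPT_NOBODY, 1L); /* no body transfer */
  if(curl_easy_perform(curl) == CURLE_OK) {
    double size;
    curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &size);
  }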
o It looks for the NSS database first in the environment variable SSL_DIR,
then in /etc/pki/nssdb, then it initializes with no database if neither of
those exist.
o If the NSS PKCS#11 libnsspem.so driver is available then PEM files may be
loaded, including the ca-bundle. If it is not available then only
certificates already in the NSS database are used.
o Tries to detect whether a file or nickname is being passed in so the right
thing is done
o Added a bit of code to make the output more like the OpenSSL module,
including displaying the certificate information when connecting in
verbose mode
o Improved handling of certificate errors (expired, untrusted, etc)
The libnsspem.so PKCS#11 module is currently only available in Fedora
8/rawhide. Work will be done soon to upstream it. The NSS module will work
with or without it, all that changes is the source of the certificates and
keys.
key was specified and there was no HOME environment variable, and then it
didn't continue to try the other auth methods. Now it will instead try to
get the files id_dsa.pub and id_dsa from the current directory if neither
of the two conditions was met.
- Bug report #1792649 (http://curl.haxx.se/bug/view.cgi?id=1792649) pointed
out a problem with doing an empty upload over FTP on a re-used connection.
I added test case 541 to reproduce it and to verify the fix.
- I noticed while writing test 541 that the FTP code wrongly did a CWD on the
second transfer as it didn't store and remember the "" path from the
previous transfer so it would instead CWD to the entry path as stored. This
worked, but did a superfluous command. Thus, test case 541 now also verifies
this fix.
and allow reuse by multiple protocols. Several unused error codes were
removed. In all cases, macros were added to preserve source (and binary)
compatibility with the old names. These macros are subject to removal at
a future date, but probably not before 2009. An application can be
tested to see if it is using any obsolete code by compiling it with the
CURL_NO_OLDIES macro defined.
Documented some newer error codes in libcurl-error(3)
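A quick way to run that check is to make sure the macro is defined before
curl.h is included (or pass it on the compiler command line):
  #define CURL_NO_OLDIES /* hide the obsolete compatibility names */
  #include <curl/curl.h>
  /* any remaining use of an old name now fails to compile */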
out that libcurl didn't deal with large responses from server commands, when
the single response was consisting of multiple lines but of a total size of
16KB or more. Dan Fandrich improved the ftp test script and provided test
case 1006 to repeat the problem, and I fixed the code to make sure this new
test case runs fine.
out that doing first a file:// upload and then an FTP upload crashed libcurl
or at best caused furious valgrind complaints. Fixed now by making sure we
free and clear the file-specific struct properly when done with it.
out that libcurl didn't deal with very long (>16K) FTP server response lines
properly. Starting now, libcurl will chop them off (thus the client app will
not get the full line) but survive and deal with them fine otherwise. Test
case 1003 was added to verify this.
(http://curl.haxx.se/bug/view.cgi?id=1776232) about libcurl calling
Curl_client_write(), passing on a const string that the caller may not
modify and yet it does (on some platforms).
(http://curl.haxx.se/bug/view.cgi?id=1776235) about ftp requests with NOBODY
on a directory would do a "SIZE (null)" request. This is now fixed and test
case 1000 was added to verify.
the configure script checks for openldap and friends and we link with those
libs just like we link all other third party libraries, and we no longer
dlopen() those libraries. Our private header file lib/ldap.h was renamed to
lib/curl_ldap.h due to this. I set a tag in CVS (curl-7_17_0-preldapfix)
just before this commit, just in case.
(http://curl.haxx.se/bug/view.cgi?id=1766320) pointing out that the libcurl
code accessed two curl_easy_setopt() options (CURLOPT_DNS_CACHE_TIMEOUT and
CURLOPT_DNS_USE_GLOBAL_CACHE) as ints even though they're documented to be
passed in as longs, and that makes a difference on 64 bit architectures.
after 7.16.2. This is much due to the different treatment file:// gets
internally, but now I added test 231 to make it less likely to happen again
without us noticing!
passed to it with curl_easy_setopt()! Previously it has always just referred
to the data, forcing the user to keep the data around until libcurl is done
with it. That is now history and libcurl will instead clone the given
strings and keep private copies.
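To illustrate, something along these lines is now safe (hypothetical buffer):
  {
    char url[64];
    strcpy(url, "http://example.com/"); /* built at runtime */
    curl_easy_setopt(curl, CURLOPT_URL, url);
  } /* 'url' is gone here, but libcurl kept its own copy */
  curl_easy_perform(curl);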
NTLM, and he provided test code and a test server and we worked out a bug
fix. We failed to count sent body data at times, which then caused internal
confusions when libcurl tried to send the rest of the data in order to
keep the same connection alive.
(and then I did some minor reformatting of code in lib/http.c)
(http://curl.haxx.se/bug/view.cgi?id=1757328) and submitted a patch. It turns
out we broke login to FTP servers that don't require (nor understand) PASS
after the USER command.
(http://curl.haxx.se/bug/view.cgi?id=1750274) and submitted a patch for the
case where libcurl did a connect attempt to a non-listening port and didn't
provide a human readable error string back.
fail to connect if there is no Common Name field found in the remote cert.
We should deprecate the support for setting this to 1 anyway soon, since
the feature is pointless and most likely never really used by anyone.
The tiny patch below fixes a bug (that I introduced :) which happens
when negotiating authentication with a proxy (probably with web
servers as well) that uses chunked transfer encoding for the 407 error
pages. In this case the ''ignorebody'' flag was ignored (no pun
intended).
using one of the so-called 'right' time zones that take into account
leap seconds, which causes the tests to fail (as reported by
Daniel Black in bug report #1745964).
hash function for different hashes, and also expanded the default size for
the socket hash table used in multi handles to greatly enhance speed when
very many connections are added and the socket API is used.
chunked encoding (that also lacks "Connection: close"). It now simply
assumes that the connection WILL be closed to signal the end, as that is how
RFC2616 section 4.4 point #5 says we should behave.
http://curl.haxx.se/mail/lib-2007-06/0238.html, libcurl didn't properly do
no-body requests on FTP files on re-used connections, or at least
it didn't provide the info back in the header callback properly in the
subsequent requests.
(http://curl.haxx.se/bug/view.cgi?id=1740263). Adam discovered that when
getting a large amount of URLs with curl, they were fetched slower and
slower... which turned out to be because the --libcurl data collecting
wrongly was always enabled, but no longer is...
(http://curl.haxx.se/bug/view.cgi?id=1739100) that mentioned that libcurl
could not actually list the contents of the root directory of a given FTP
server if the login directory isn't root. I fixed the problem and added three
test cases (one is disabled for now since I identified KNOWN_BUGS #44, we
cannot use --ftp-method nocwd and list ftp directories).
(http://curl.haxx.se/bug/view.cgi?id=1733119) and we collaborated on the fix.
The problem is that for 64bit HPUX builds, several socket-related functions
would still assume int (32 bit) arguments and not socklen_t (64 bit) ones.
- -s/--silent can now be used to toggle off the silence again if used a second
time.
Daniel S (5 June 2007)
- Added Daniel Black's work that adds the first few SOCKS test cases. I also
fixed two minor SOCKS problems to make the test cases run fine.
to find that it crashed miserably, and this was due to some select()isms
left in the code, a result of API restrictions in c-ares 1.3.x. With the
upcoming c-ares 1.4.0 this is no longer the case, so now libcurl runs much
better with c-ares and the multi interface with > 1024 file descriptors in
use.
overwrite in Curl_select(). While fixing it, I also improved its performance
somewhat by changing calloc to malloc and breaking out of a loop earlier
(when possible).
(http://curl.haxx.se/bug/view.cgi?id=1705802), which was filed by Daniel
Black identifying several FTP-SSL test cases fail when we build libcurl with
NSS for TLS/SSL. Listed as #42 in KNOWN_BUGS.
peer's name in the SSL certificate when built for OpenSSL. The leak happens
for libcurls with CURL_DOES_CONVERSIONS enabled that fail to convert the CN
name from UTF8.
bug report #1715394 (http://curl.haxx.se/bug/view.cgi?id=1715394), and the
transfer-related info "variables" were indeed overwritten with zeroes wrongly
and have now been adjusted. The upload size still isn't accurate.
when CURLOPT_HTTP200ALIASES is used to avoid the problem mentioned below is
not very nice if the client wants to be able to use _either_ a HTTP 1.1
server or one within the aliases list... so starting now, libcurl will
simply consider 200-alias matches to be HTTP 1.0 compliant.
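The option takes a curl_slist of alias status lines, roughly like this (the
"ICY 200 OK" line being the typical shoutcast-style example):
  struct curl_slist *aliases = NULL;
  aliases = curl_slist_append(aliases, "ICY 200 OK");
  curl_easy_setopt(curl, CURLOPT_HTTP200ALIASES, aliases);
  /* ... transfer ... */
  curl_slist_free_all(aliases);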
libcurls, which turned out to be the 25-nov-2006 change which treats HTTP
responses without Content-Length or chunked encoding as without bodies. We
now added the conditional that the above mentioned response is only without
body if the response is HTTP 1.1.
when CURLM_CALL_MULTI_PERFORM is returned from curl_multi_socket*/perform,
so that applications using only curl_multi_socket() function properly
when adding easy handles "on the fly". Bug report and test app provided by
Michael Wallner.
since it then inits libgcrypt and libgcrypt is being evil and EXITS the
application if it fails to get a fine random seed. That's really not a nice
thing to do by a library.
been removed from a multi handle, and then fixed another flaw that prevented
curl_easy_duphandle() from working even after the first fix - the handle was
still marked as using the multi interface.
was 16385 bytes (16K+1) and it turned out we didn't properly always "suck
out" all data from libssh2. The effect being that libcurl would hang on the
socket waiting for data when libssh2 had in fact already read it all...
the CURLOPT_RESUME_FROM or CURLOPT_RANGE options and an existing connection
in the connection cache is closed to make room for the new one when you call
curl_easy_perform(). It would then wrongly free range-related data in the
connection close function.
identifying a double-free problem in the SSL-dealing layer, telling GnuTLS to
free NULL credentials on closedown after a failure and a bad #ifdef for NSS
when closing down SSL.
Curl_socket_ready(), Curl_poll() and Curl_select() when these are called
with a zero timeout or a timeout value indicating a blocking call should
be performed.
These unnecessary calls to gettimeofday() got introduced in 7.16.2 when
fixing 'timeout would restart when signal caught while awaiting socket
events' on 20 March 2007.
- Move some loop breaking logic from the while clause into the loop,
avoiding compiler warning 'assignment within conditional expression'
function that deprecates the curl_multi_socket() function. Using the new
function, the application tells libcurl what action was found on the
socket it passes in. This gives a significant performance boost as it
allows libcurl to avoid a call to poll()/select() for every call to
curl_multi_socket*().
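In rough terms, an event loop that learned that 'sockfd' turned readable
would now call (names made up):
  int still_running;
  curl_multi_socket_action(multi_handle, sockfd, CURL_CSELECT_IN,
                           &still_running);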
by letting configure check for setmode and ifdef on HAVE_SETMODE. NOTE: non-
configure platforms that have setmode() need their hard-coded config.h files
fixed. I fixed the src/config-win32.h.
or Curl_poll() with a non-zero timeout both functions would restart the
specified timeout. This could even lead to the extreme case that if signals
arrived at intervals shorter than the specified timeout, neither function
would ever exit.
Added experimental symbol definition check CURL_ACKNOWLEDGE_EINTR in
Curl_select() and Curl_poll(). When compiled with CURL_ACKNOWLEDGE_EINTR
defined both functions will return as soon as a signal is caught. Use it
at your own risk, all calls to these functions in the library should be
revisited and checked before fully supporting this feature.
more frequently, allowing the same calling frequency for the client progress
callback, while keeping the once-a-second frequency for speed calculations
and internal display of the transfer progress.
makefiles that are included in the source release archives, generated from
the Makefile.vc6 files by the maketgz script. I also modified the root
Makefile to have a VC variable that defaults to vc6 but can be overridden to
allow it to be used for vc8 as well. Like this:
nmake VC=vc8 vc
server through a proxy and have the remote https server port set using the
CURLOPT_PORT option, protocol gets reset to http from https after the first
request.
User defined URL was modified internally by libcurl and subsequent reuse of
the easy handle may lead to a connection using a different protocol (if not
originally http).
I found that libcurl hardcoded the protocol to "http" when it tries to
regenerate the URL if CURLOPT_PORT is set. I tried to fix the problem as
follows, and it's working fine so far.
fixing some bugs:
o Don't mix GET and POST requests in a pipeline
o Fix the order in which requests are dispatched from the pipeline
o Fixed several curl bugs with pipelining when the server is returning
chunked encoding:
* Added states to chunked parsing for final CRLF
* Rewind buffer after parsing chunk with data remaining
* Moved chunked header initializing to a spot just before receiving
headers
the multi interface and connection re-use that could make a
curl_multi_remove_handle() ruin a pointer in another handle.
The second problem was less of an actual problem but more of a minor quirk:
the re-using of connections wasn't properly checking if the connection was
marked for closure.
to the debug callback.
- Shmulik Regev added CURLOPT_HTTP_CONTENT_DECODING and
CURLOPT_HTTP_TRANSFER_DECODING that if set to zero will disable libcurl's
internal decoding of content or transfer encoded content. This may be
preferable in cases where you use libcurl for proxy purposes or similar. The
command line tool got a --raw option to disable both at once.
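A sketch of a pass-through setup, assuming an easy handle 'curl':
  /* hand the body to the app exactly as it arrived off the wire */
  curl_easy_setopt(curl, CURLOPT_HTTP_CONTENT_DECODING, 0L);
  curl_easy_setopt(curl, CURLOPT_HTTP_TRANSFER_DECODING, 0L);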
and CURLOPT_CONNECTTIMEOUT_MS that, as their names should hint, do the
timeouts with millisecond resolution instead. The only restriction to that
is the alarm() (sometimes) used to abort name resolves as that uses full
seconds. I fixed the FTP response timeout part of the patch.
Internally we now count and keep the timeouts in milliseconds but it also
means we multiply the set timeouts by 1000. The effect of this is that no
timeout can be set to more than 2^31 milliseconds (on 32 bit systems), which
equals 24.86 days. We probably couldn't before either since the code did
*1000 on the timeout values in several places already.
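Usage is analogous to the whole-second options, for example:
  curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT_MS, 1500L); /* 1.5 sec */
  curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 2500L);        /* 2.5 sec */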
fail since they used "1 feb 2007"...
- Manfred Schwarb reported that socks5 support was broken and helped us
pinpoint the problem. The code now tries harder to use httproxy and proxy
where appropriate, as not all proxies are HTTP...
ordinary curl command line, and you will get a libcurl-using source code
written to the file that does the equivalent operation of what your command
line operation does!
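For example (file name and URL made up):
  curl --libcurl mycode.c http://example.com/
fetches the URL as usual and also writes the equivalent libcurl source code
to mycode.c.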
doing an FTP transfer is removed from a multi handle before completion. The
fix also made the "alive counter" correct on "premature removal" for
all protocols.
non-ASCII platforms. It does add some complexity, most notably with more
#ifdefs, but I want to see this supported added and I can't see how we can
add it without the extra stuff added.
curl that uses the new CURLOPT_FTP_SSL_CCC option in libcurl. If enabled, it
will make libcurl shutdown SSL/TLS after the authentication is done on a
FTP-SSL operation.
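Roughly like this, assuming an FTPS-enabled easy handle (the CCC value names
are as the option looks in later releases):
  curl_easy_setopt(curl, CURLOPT_FTP_SSL, CURLFTPSSL_CONTROL);
  /* shut the SSL/TLS layer down again once authenticated */
  curl_easy_setopt(curl, CURLOPT_FTP_SSL_CCC, CURLFTPSSL_CCC_PASSIVE);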
downloaded data in two buffers, just to be able to deal with a special HTTP
pipelining case. That is now only activated for pipelined transfers. In
Matt's case, it showed as a considerable performance difference.
(http://curl.haxx.se/bug/view.cgi?id=1603712) (known bug #36) --limit-rate
(CURLOPT_MAX_SEND_SPEED_LARGE and CURLOPT_MAX_RECV_SPEED_LARGE) are broken
on Windows (since 7.16.0, but that's when they were introduced as previous
to that the limiting logic was made in the application only and not in the
library). It was actually also broken on select()-based systems (as opposed
to poll()) but we haven't had any such reports. We now use select(), Sleep()
or delay() properly to sleep a while without waiting for any input or
output when the rate limiting is activated with the easy interface.
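Both options take curl_off_t byte-per-second values, along these lines:
  curl_off_t limit = (curl_off_t)10240; /* cap at 10 KB/sec */
  curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE, limit);
  curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE, limit);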
get confused and not acknowledge the 'no_proxy' variable properly once it
had used the proxy and you re-used the same easy handle. I made sure the
proxy name is properly stored in the connect struct rather than the
sessionhandle/easy struct.
(http://curl.haxx.se/bug/view.cgi?id=1618359) and subsequently provided a
patch for it: when downloading 2 zero byte files in a row, curl 7.16.0
enters an infinite loop, while curl 7.16.1-20061218 does one additional
unnecessary request.
Fix: During the "Major overhaul introducing http pipelining support and
shared connection cache within the multi handle." change, headerbytecount
was moved to live in the Curl_transfer_keeper structure. But that structure
is reset in the Transfer method, losing the information that we had about
the header size. This patch moves it back to the connectdata struct.
something went wrong like it got a bad response code back from the server,
libcurl would leak memory. Added test case 538 to verify the fix.
I also noted that the connection would get cached in that case, which
doesn't make sense since it cannot be re-used when the authentication has
failed. I fixed that issue too at the same time, and also that the path
would be "remembered" in vain for cases where the connection was about to
get closed.
(http://curl.haxx.se/bug/view.cgi?id=1603712) which is about connections
getting cut off prematurely when --limit-rate is used. While I found no such
problems in my tests nor in my reading of the code, I found that the
--limit-rate code was severely flawed (since it was moved into the lib, since
7.15.5) when used with the easy interface and it didn't work as documented so
I reworked it somewhat and now it works for my tests.
passing a curl_off_t argument to the Curl_read_rewind() function which takes
a size_t argument. Curl_read_rewind() also had debug code left in it and it
was put in a different source file with no good reason when only used from
one single spot.
no code present in the library that receives the option. Since it was not
possible to use, we know that no current users exist and thus we simply
removed it from the docs and made the code always take the default code
path.
(http://curl.haxx.se/bug/view.cgi?id=1604956) which identified setting
CURLOPT_MAXCONNECTS to zero caused libcurl to SIGSEGV. Starting now, libcurl
will always internally use no less than 1 entry in the connection cache.
(http://curl.haxx.se/bug/view.cgi?id=1230118) curl_getdate() did not work
properly for all input dates on Windows. It was mostly seen on some TZ time
zones using DST. Luckily, Martin also provided a fix.
(http://curl.haxx.se/bug/view.cgi?id=1600447) in which he noted that active
FTP connections don't work with the multi interface. The problem here is that
the multi interface state machine has a state during which it can wait for the
data connection to connect, but the active connection is not done in the same
step in the sequence as the passive one is so it doesn't quite work for
active. The active FTP code still uses a blocking function to allow the remote
server to connect.
The fix (work-around is a better word) for this problem is to prematurely
set the boolean saying that the data connection is completed, so that the
"wait for connect" phase ends at once.
HTTP upload was disconnected:
"What appears to be happening is that my system (Linux 2.6.17 and 2.6.13) is
setting *only* POLLHUP on poll() when the conditions in my previous mail
occur. As you can see, select.c:Curl_select() does not check for POLLHUP. So
basically what was happening, is poll() was returning immediately (with
POLLHUP set), but when Curl_select() looked at the bits, neither POLLERR or
POLLOUT was set. This still caused Curl_readwrite() to be called, which
quickly returned. Then the transfer() loop kept continuing at full speed
forever."
header in a third format not supported by libcurl, and we agreed that we
could make the parser more forgiving to accept all three found
variations.
responded with a single status line and no headers nor body. Starting now, a
HTTP response on a persistent connection (i.e. not set to be closed after the
response has been taken care of) must have Content-Length or chunked
encoding set, or libcurl will simply assume that there is no body.
To my horror I learned that we had no less than 57(!) test cases that did bad
HTTP responses like this, and even the test http server (sws) responded badly
when the test system queried it to verify that it is the test server. So
although the
actual fix for the problem was tiny, going through all the newly failing test
cases got really painful and boring.
KNOWN_BUGS #25, which happens when a proxy closes the connection when
libcurl has sent CONNECT, as part of an authentication negotiation. Starting
now, libcurl will re-connect accordingly and continue the authentication as
it should.
case when 401 or 407 are returned, *IF* no auth credentials have been given.
The CURLOPT_FAILONERROR option is not possible to make fool-proof for 401
and 407 cases when auth credentials are given, but we've now covered this
somewhat more.
You might get some amounts of headers transferred before this situation is
detected, like for when a "100-continue" is received as a response to a
POST/PUT and a 401 or 407 is received immediately afterwards.
Added test 281 to verify this change.
to the CURLOPT_DEBUGFUNCTION callback has been fixed (it was erroneously
included as part of the header). A message was also added to the
command line tool to show when data is being sent, enabled when
--verbose is used.
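A sketch of a callback making use of this (name made up; the callback is
only invoked when CURLOPT_VERBOSE is enabled):
  #include <stdio.h>
  #include <curl/curl.h>

  static int my_trace(CURL *handle, curl_infotype type, char *data,
                      size_t size, void *userp)
  {
    if(type == CURLINFO_DATA_OUT)
      fprintf(stderr, "=> sent %d bytes of body data\n", (int)size);
    return 0;
  }

  curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, my_trace);
  curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);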
and while doing so it became apparent that the current timeout system for
the socket API really was a bit awkward since it became quite some work to
be sure we have the correct timeout set.
Jeff then provided the new CURLMOPT_TIMERFUNCTION that is yet another
callback the app can set to get to know when the general timeout time
changes and thus for an application like hiperfifo.c it makes everything a
lot easier and nicer. There's a CURLMOPT_TIMERDATA option too of course in
good old libcurl tradition.
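A sketch of how an app might hook it up (the timer arming itself is left as
a stub):
  static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
  {
    /* (re-)arm the app's single timer here; a timeout_ms of -1 means
       the timer can be deleted */
    return 0;
  }

  curl_multi_setopt(multi_handle, CURLMOPT_TIMERFUNCTION, timer_cb);
  curl_multi_setopt(multi_handle, CURLMOPT_TIMERDATA, NULL);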
would crash if a bad function sequence was used when shutting down after
using the multi interface (i.e using easy_cleanup after multi_cleanup) so
precautions have been added to make sure it doesn't any more - test case 529
was added to verify.
handle that is part of a multi handle first removes the handle from the
stack.
- Added CURLOPT_SSL_SESSIONID_CACHE and --no-sessionid to disable SSL
session-ID re-use on demand since there obviously are broken servers out
there that misbehave with session-IDs used.
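For example:
  /* work around servers that mishandle session-ID re-use */
  curl_easy_setopt(curl, CURLOPT_SSL_SESSIONID_CACHE, 0L);
and --no-sessionid does the same from the command line.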
while not fixing things very nicely, it does make the SOCKS5 proxy
connection slightly better as it now acknowledges the timeout for connection
and it no longer segfaults in the case when SOCKS requires authentication
and you did not specify username:password.
send the whole request at once, even though the Expect: header was disabled
by the application. An effect of this change is also that small (< 1024
bytes) POSTs are now always sent without Expect: header since we deem it
more costly to bother about that than the risk that we send the data in
vain.
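An application disables the header with a blank value in its header list,
roughly:
  struct curl_slist *hdrs = NULL;
  hdrs = curl_slist_append(hdrs, "Expect:"); /* blank value removes it */
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);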
CURLOPT_NOBODY is set true. PREQUOTE is then run roughly at the same place
in the command sequence as it would have run if a transfer had taken
place.