date parser function. This makes our function less dependent on system-
provided functions and instead we do all the magic ourselves. We also no
longer depend on the TZ environment variable.
Markus Moeller reported: http://curl.haxx.se/mail/archive-2008-09/0016.html
- recv() errors other than EAGAIN now cause a proper
CURLE_RECV_ERROR to get returned. This made test case 160 fail so I've now
disabled it until we can figure out another way to exercise that logic.
proxy" (http://curl.haxx.se/bug/view.cgi?id=2107377) that showed how a multi
interface-using program didn't work when built with GnuTLS and a CONNECT
request was done over a proxy (basically test 502 over a proxy to a HTTPS
site). It turned out the ssl connect function would get called twice which
caused the second call to fail.
sites in cases where the cookie clearly has a very old expiry date. The
condition was simply that libcurl's date parser would fail to convert the
date and it would then count as a (time-based) match. Starting now, a
date missed due to an unsupported date format or date range will cause
the cookie to not match.
CURLOPT_POST301 (but adds a define for backwards compatibility for you who
don't define CURL_NO_OLDIES). This option allows you to now also change the
libcurl behavior for a HTTP response 302 after a POST to not use GET in the
subsequent request (when CURLOPT_FOLLOWLOCATION is enabled). I edited the
patch somewhat before commit. The curl tool got a matching --post302
option. Test case 1076 was added to verify this.
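For illustration, a minimal fragment (assuming an initialized easy handle
'curl', and assuming the new option is what current libcurl headers call
CURLOPT_POSTREDIR with the CURL_REDIR_POST_* bits; the entry's own name for
it is cut off above) that keeps the POST method across both 301 and 302
redirects:

    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    curl_easy_setopt(curl, CURLOPT_POSTREDIR,
                     CURL_REDIR_POST_301 | CURL_REDIR_POST_302);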
enabling this feature with CURLOPT_CERTINFO for a request using SSL (HTTPS
or FTPS), libcurl will gather lots of server certificate info and that info
can then get extracted by a client after the request has completed with
curl_easy_getinfo()'s CURLINFO_CERTINFO option. Linus Nielsen Feltzing
helped me test and smooth out this feature.
Unfortunately, this feature currently only works with libcurl built to use
OpenSSL.
This feature was sponsored by networking4all.com - thanks!
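A minimal sketch of how a client can use it (error handling trimmed; the
struct curl_certinfo layout is the one current curl.h headers declare):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      struct curl_certinfo *ci = NULL;
      int i;

      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_CERTINFO, 1L); /* gather cert info */

      if(curl_easy_perform(curl) == CURLE_OK &&
         curl_easy_getinfo(curl, CURLINFO_CERTINFO, &ci) == CURLE_OK && ci) {
        for(i = 0; i < ci->num_of_certs; i++) {
          struct curl_slist *s;
          for(s = ci->certinfo[i]; s; s = s->next)
            printf("cert %d: %s\n", i, s->data); /* "name:value" strings */
        }
      }
      curl_easy_cleanup(curl);
      return 0;
    }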
"Connection: close" and actually close the connection after the
response-body, libcurl could still have outstanding data to send and it
would not properly notice this and stop sending. This caused weirdness and
sad faces. http://curl.haxx.se/bug/view.cgi?id=2080222
Note that there are still reasons to consider libcurl's behavior when
getting a >= 400 response code while sending data, as Craig Perras' note
"http upload: how to stop on error" specifies:
http://curl.haxx.se/mail/archive-2008-08/0138.html
an unlock in between) for a certain case and that in fact works when using
regular Windows mutexes but not with pthreads'! Locks should of course not
get locked again so this is now fixed.
http://curl.haxx.se/mail/lib-2008-08/0422.html
firefox-db2pem.sh conversion script that converts a local Firefox db of ca
certs into PEM format, suitable for use with an OpenSSL- or GnuTLS-built
libcurl.
memory leak because it never called the OpenSSL function
CRYPTO_cleanup_all_ex_data() as it was supposed to. This was because of a
missing define in config-win32.h!
when including the OpenSSL header files. This is the recommended setting; it
prevents the undesired inclusion of header files with the same name as those
of OpenSSL but which do not belong to the OpenSSL package. The visible change
from previously released libcurl versions is that now OpenSSL-enabled NetWare
builds also define USE_OPENSSL in config files, and that OpenSSL header files
must be located in a subdirectory named 'openssl'.
remain in use as curl_off_t print formatting strings for the internal
*printf functions, which still cannot handle print formatting directives
such as "I64d", "I64u", and others available on MSVC, MinGW, Intel's ICC, and
other DOS/Windows compilers.
This reverts the part of a previous commit which did:
FORMAT_OFF_T -> CURL_FORMAT_CURL_OFF_T
FORMAT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
the names of the curl_off_t formatting string directives now become
CURL_FORMAT_CURL_OFF_T and CURL_FORMAT_CURL_OFF_TU.
CURL_FMT_OFF_T -> CURL_FORMAT_CURL_OFF_T
CURL_FMT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
Remove the use of an internal name for the curl_off_t formatting string directives
and use the common one available from both inside and outside of the library.
FORMAT_OFF_T -> CURL_FORMAT_CURL_OFF_T
FORMAT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
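For illustration, the public directive gets the '%' prepended by the
application itself; a small self-contained example:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      curl_off_t size = (curl_off_t)1 << 33;  /* wider than 32 bits */
      printf("size=%" CURL_FORMAT_CURL_OFF_T "\n", size);
      return 0;
    }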
when a server responded with long headers and data. Luckily, the buffer
overflowed into another unused buffer, so no actual harm was done.
Added test cases 1060 and 1061 to verify.
line of a multiline FTP response whose last byte landed exactly at the end
of the BUFSIZE-length buffer would be treated as the terminal response
line. The following response code read in would then actually be the
end of the previous response line, and all responses from then on would
correspond to the wrong command. Test case 1062 verifies this.
Stop closing a never-opened ftp socket.
(http://curl.haxx.se/bug/view.cgi?id=2042430) with a patch. "NTLM Windows
SSPI code is not thread safe". This was due to libcurl using static
variables to tell whether to load the necessary SSPI DLL, but now the loading
has been moved to the more suitable curl_global_init() call.
(http://curl.haxx.se/bug/view.cgi?id=2042440) with a patch. He identified a
problem when using NTLM over a proxy but the end-point does Basic, and then
libcurl would misbehave when the host sent "Connection: close", as the
proxy's NTLM state was erroneously cleared.
NetWare curlbuild.h settings depend on whether LIBC or CLIB is used.
The NetWare-specific Makefile knows which target is being built, so the
NetWare Makefile will now take care of generating curlbuild.h
connection with the multi interface even if a previous use of it caused a
CURLE_PEER_FAILED_VERIFICATION to get returned. I now make sure that failed
SSL connections properly close the connections.
proved how PUT and POST with a redirect could lead to a "hang" due to the
data stream not being rewound properly when it had to in order to get sent
properly (again) to the subsequent URL. This is now fixed and these test
cases are no longer disabled.
with -C - sent garbage in the Content-Range: header. I fixed this problem by
making sure libcurl always sets the size of the _entire_ upload if an app
attempts to do resumed uploads since libcurl simply cannot know the size of
what is currently at the server end. Test 1041 is no longer disabled.
incorrectly--the host name is treated as part of the user name and the
port number becomes the password. This can be observed in test 279
(was KNOWN_ISSUE #54).
a URL in a Location: header didn't have the scope ID removed, so an
invalid host name was used. Second, when the scope ID was removed, it
also removed any port number that may have existed in the URL.
parser to allow numerical IPv6-addresses to be specified with the scope
given, as per RFC4007 - with a percent letter that itself needs to be URL
escaped. For example, for an address of fe80::1234%1 the HTTP URL is:
"http://[fe80::1234%251]/"
true bug in libcurl built with OpenSSL. It made curl_easy_getinfo() more or
less always return 0 for CURLINFO_SSL_VERIFYRESULT because the function that
would set it to something non-zero would return before the assignment in
all error cases. The internal variable is now set to non-zero from the start
of the function only to get cleared later on if things work out fine.
overrun" (http://curl.haxx.se/bug/view.cgi?id=2026240) identifying two
problems, and providing the fix for them:
- CURL_READFUNC_PAUSE did in fact not pause the _sending_ of data that it is
designed for but paused _receiving_ of data!
- libcurl didn't internally set the read counter to zero when this return
code was detected, which would potentially lead to junk getting sent to
the server.
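A sketch of the intended use, with a hypothetical source struct (real code
would resume the paused transfer with curl_easy_pause() once more data is
available):

    #include <string.h>
    #include <curl/curl.h>

    struct my_source {           /* hypothetical application state */
      const char *buf;
      size_t bytes_ready;
    };

    static size_t read_cb(void *ptr, size_t size, size_t nmemb, void *userp)
    {
      struct my_source *src = (struct my_source *)userp;
      size_t len = size * nmemb;
      if(!src->bytes_ready)
        return CURL_READFUNC_PAUSE;  /* pause sending; no bytes consumed */
      if(len > src->bytes_ready)
        len = src->bytes_ready;
      memcpy(ptr, src->buf, len);    /* hand over what we have */
      src->buf += len;
      src->bytes_ready -= len;
      return len;
    }

    /* later, when more data has become available: */
    /* curl_easy_pause(curl, CURLPAUSE_CONT); */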
finds out its return type and the types of its arguments. Added definitions
for non-configure systems config files, and introduced macro sreadfrom which
will be used on UDP sockets as a recvfrom() wrapper.
set the attribute that has changed instead of all possible ones. Hopefully,
this will solve the "Permission denied" problem that Nagarajan Sreenivasan
reported when setting some modes, but regardless, it saves a protocol
round trip in the chmod case.
is set in fdset.events" (http://curl.haxx.se/bug/view.cgi?id=2015126) which
exactly pinpointed the problem (triggered only on Windows Vista), provided
reference to docs and also a fix. There is much work behind Peter Lamberg's
excellent bug report. Thank You!
fix for it. It occurred when you did a FTP transfer using
CURLFTPMETHOD_SINGLECWD and then did another one on the same easy handle but
switched to CURLFTPMETHOD_NOCWD, due to the "dir depth" variable not being
cleared properly. Scott's test case is now known as test 539 and it
verifies the fix.
CURLINFO_APPCONNECT_TIME. This is set when the "application layer"
handshake/connection is completed (typically SSL, TLS or SSH). By using this
you can figure out the application layer's own connect time. You can extract
the time stamp using curl's -w option and the new variable named
'time_appconnect'. This feature was sponsored by Lenny Rachitsky at NeuStar.
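From code, the same value is a single getinfo away after the transfer
(fragment, assuming an already-performed easy handle 'curl'):

    double appconnect = 0.0;
    if(curl_easy_getinfo(curl, CURLINFO_APPCONNECT_TIME,
                         &appconnect) == CURLE_OK)
      printf("app layer (SSL/TLS/SSH) connected after %.3f seconds\n",
             appconnect);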
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=487567) pointing out that
libcurl used Content-Range: instead of Range when doing a range request with
--head (CURLOPT_NOBODY). This is now fixed and test case 1032 was added to
verify.
handshake with a SSLv2 server, and it turned out to be because it didn't
recognize the cipher named "rc4-md5". In our list that cipher was named
plainly "rc4". I've now added rc4-md5 to work as an alias as Phil reported
that it made things work for him again.
crashed libcurl. This is now addressed by making sure we use "plain send"
internally when doing the socks handshake instead of the Curl_write()
function which is designed to use the "target" protocol. That's then SCP or
SFTP in this case. I also took the opportunity and cleaned up some ssh-
related #ifdefs in the code for readability.
libcurl to not tell the app properly when a socket was closed (when the
c-ares name resolve completes) and then immediately re-created and put to
use again (for the actual connection). Since the closure will make the
"watch status" get lost in several event-based systems libcurl will need to
tell the app about this close/re-create case.
multi interface with pipelining enabled as it would wrongly check for,
detect and close "dead connections" even though that connection was already
in use!
warning in the code, but we need NSS' base64.h header for that and we
don't currently have a suitable way to include it since our own base64.h header
kind of "blocks" it.
libraries are supported. Starting now, each underlying SSL library support
code does a set of defines for the 16 functions the generic layer (sslgen.c)
uses (all these new function defines use the prefix "curlssl_"). This
greatly improved the readability of the generic layer by involving far fewer
#ifdefs and other preprocessor stuff and should make it easier for people to
make libcurl work with new SSL libraries.
Hopefully I can later on document these 16 functions somewhat as well.
I also made most of the internal SSL-dependent functions (using Curl_ssl_
prefix) #defined to nothing when no SSL support is requested - previously
they would unnecessarily call mostly empty functions.
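To illustrate the pattern (the names below follow the OpenSSL backend as an
example and are a sketch, not a verified or exhaustive list), each backend
header maps the generic names onto its own functions:

    /* per-backend mapping of the generic sslgen.c names, sketched */
    #define curlssl_init      Curl_ossl_init
    #define curlssl_connect   Curl_ossl_connect
    #define curlssl_close_all Curl_ossl_close_all
    #define curlssl_version   Curl_ossl_version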
curl_easy_getinfo. It returns a pointer to a string with the most recently
used IP address. Modified test case 500 to also verify this feature. The
implementation of this feature was sponsored by Lenny Rachitsky at NeuStar.
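Assuming the info in question is what today's headers call
CURLINFO_PRIMARY_IP (the entry's own name for it is cut off above),
extracting it is a plain getinfo call (fragment):

    char *ip = NULL;
    if(curl_easy_getinfo(curl, CURLINFO_PRIMARY_IP, &ip) == CURLE_OK && ip)
      printf("transfer used IP address %s\n", ip);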
the curl_multi_socket() API with HTTP pipelining enabled and could lead to
the pipeline basically stalling for a very long period of time until it took
off again.
provided excellent repeat recipes. I fixed the cases I managed to reproduce
but Jeff still got some (SCP) problems even after these fixes:
http://curl.haxx.se/mail/lib-2008-05/0342.html
how the HTTP redirect following code didn't properly follow to a new URL if
the new URL was just a query string such as "Location: ?moo=foo". Test case
1031 was added to verify this fix.
_ Updated packages/OS400/curl.inc.in with new definitions.
_ New connect/bind/sendto/recvfrom wrappers to support AF_UNIX sockets.
_ Include files line length shortened below 100 chars.
_ Const parameter in lib/qssl.[ch].
_ Typos in packages/OS400/initscript.sh.
go straight to DO
we had multiple states for which the internal function returned no socket at
all to wait for, with the effect that libcurl calls the socket callback (when
curl_multi_socket() is used) with REMOVE prematurely (as it would be added
again very shortly)
handler functions didn't signal that the socket should be monitored for
writing; instead it was treated as if no socket needed monitoring, so REMOVE
was called prematurely
and receive data over a connection previously setup with curl_easy_perform()
and its CURLOPT_CONNECT_ONLY option. The sendrecv.c example was added to
show how they can be used.
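In outline, and assuming the function pair is the curl_easy_send() and
curl_easy_recv() used by the sendrecv.c example, it goes like this (sketch;
real code must wait for the socket to become writable/readable, e.g. with
select(), before each call):

    size_t n = 0;
    char buf[1024];
    const char *req = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L);
    curl_easy_perform(curl);                    /* connects, nothing more */

    curl_easy_send(curl, req, strlen(req), &n); /* raw send */
    curl_easy_recv(curl, buf, sizeof(buf), &n); /* raw receive */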
when function clock_gettime() is available and the monotonic timer is
also available. Otherwise, in some cases, librt or libposix4 could be used
for linking even when finally not using the clock_gettime() function due
to lack of the monotonic clock.
when using CURL_AUTH_ANY" (http://curl.haxx.se/bug/view.cgi?id=1945240).
The problem was that when libcurl rewound a stream meant for upload when it
would prepare for a second request, it could accidentally continue the
sending of the rewound data on the first request instead of on the second.
Ben also provided test case 1030 that verifies this fix.
since libcurl used getprotobyname() and that isn't thread-safe. We now
switched to using IPPROTO_TCP unconditionally, but perhaps the proper fix is
to detect the thread-safe version of the function and use that.
http://curl.haxx.se/mail/lib-2008-05/0011.html
redirections and thus cannot use CURLOPT_FOLLOWLOCATION easily, we now
introduce the new CURLINFO_REDIRECT_URL option that lets applications
extract the URL libcurl would've redirected to if it had been told to. This
then enables the application to continue to that URL as it thinks is
suitable, without having to re-implement the magic of creating the new URL
from the Location: header etc. Test 1029 verifies it.
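Usage sketch (fragment; error handling omitted):

    char *redir = NULL;
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 0L); /* don't follow */
    curl_easy_perform(curl);
    if(curl_easy_getinfo(curl, CURLINFO_REDIRECT_URL,
                         &redir) == CURLE_OK && redir)
      printf("would have redirected to: %s\n", redir);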
server input and response request files of the test harness sws server.
Reintroduce the small <postcheck> delay for test 1001. The delay is needed
even with the accelerated writing of server input and response request files
in the test harness sws server.
http://curl.haxx.se/mail/lib-2008-04/0385.html
Define HAVE_GSSMIT if <gssapi/{gssapi.h,gssapi_generic.h,gssapi_krb5.h}> are
available, otherwise define HAVE_GSSHEIMDAL if <gssapi.h> is available.
Only define GSS_C_NT_HOSTBASED_SERVICE to gss_nt_service_name if
GSS_C_NT_HOSTBASED_SERVICE isn't declared by the gssapi headers. This should
avoid breakage in case we wrongly recognize Heimdal as MIT again.
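The guard then boils down to a few preprocessor lines; a sketch, with a
hypothetical HAVE_GSS_C_NT_HOSTBASED_SERVICE macro standing in for whatever
the configure check defines:

    /* only provide the mapping when the gssapi headers don't declare it */
    #ifndef HAVE_GSS_C_NT_HOSTBASED_SERVICE
    #define GSS_C_NT_HOSTBASED_SERVICE gss_nt_service_name
    #endif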
message when libcurl doesn't get a 220 back immediately on connect, I now
changed it to be more specific about what the problem is. Also worth noting:
while the bug report contains an example where the response is:
421 There are too many connected users, please try again later
we cannot assume that the error message will always be this readable nor
that it fits within a particular boundary etc.
GET simply because previously when you set CURLOPT_NOBODY to TRUE first and
then FALSE you'd end up in a broken state where a HTTP request would do a
HEAD but still act a lot like a GET and hang waiting for the content etc.
application to provide data for a multipart with the read callback. Note
that the size needs to be provided with CURLFORM_CONTENTSLENGTH when the
stream option is used. This feature is verified by the new test case
554. This feature was sponsored by Xponaut.
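A fragment of what that can look like, assuming the stream option is the
CURLFORM_STREAM known from the shipped headers, with my_state, size and
my_read_cb as hypothetical application pieces:

    struct curl_httppost *post = NULL, *last = NULL;

    curl_formadd(&post, &last,
                 CURLFORM_COPYNAME, "file",
                 CURLFORM_STREAM, &my_state,          /* handed to callback */
                 CURLFORM_CONTENTSLENGTH, (long)size, /* mandatory */
                 CURLFORM_FILENAME, "upload.bin",
                 CURLFORM_END);

    curl_easy_setopt(curl, CURLOPT_READFUNCTION, my_read_cb);
    curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);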
default instead of a ca bundle. The configure script will also look for a
ca path if no ca bundle is found and no option given.
- Fixed detection of previously installed curl-ca-bundle.crt
CURLOPT_FTP_CREATE_MISSING_DIRS to create a directory, and then re-used the
handle and uploaded another file to another directory that needed to be
created, the second upload would fail. Another case of a state variable that
wasn't properly reset between requests.
(http://curl.haxx.se/bug/view.cgi?id=1911069) that identified a race
condition in the name resolver code when the DNS cache is shared between
multiple easy handles, each running in a separate thread, which could cause
crashes.
easy handle if curl_easy_reset() was used between them. I fixed it and Brian
verified that it cured his problem.
- Brian Ulm reported that if you first tried to download a non-existing SFTP
file and then fetched an existing one and re-used the handle, libcurl would
still report the second one as non-existing as well! I fixed it and Brian
verified that it cured his problem.
select/poll calls will only be retried upon EINTR failures as
it previously was in lib/select.c revision 1.29.
In this way Curl_socket_ready() and Curl_poll() will again fail
on any select/poll errors different than EINTR.
better control of the exact state of the connection's SSL status so that we
know exactly whether it has completed the SSL negotiation, and so there
won't be accidental re-uses of connections that are wrongly believed to be
in SSL-completed-negotiate state.
such as the CURLOPT_SSL_CTX_FUNCTION one treat that as if it was a Location:
following. The patch that introduced this feature was done for 7.11.0, but
this code and functionality has been broken since about 7.15.4 (March 2006)
with the introduction of non-blocking OpenSSL "connects".
It was a hack to begin with and since it doesn't work and hasn't worked
correctly for a long time and nobody has even noticed, I consider it a very
suitable subject for plain removal. And so it was done.
get a fresh one downloaded and created with 'make ca-bundle' or you can get
one from here => http://curl.haxx.se/docs/caextract.html if you want a fresh
new one extracted from Mozilla's recent list of ca certs.
The configure option --with-ca-bundle now lets you specify what file to use
as default ca bundle for your build. If not specified, the configure script
will check a few known standard places for a global ca cert to use.
DONE before the entire request operation is complete and thus we can't know in
what state it is for re-using, so we're forced to close it. In a perfect world
we can add code that keeps track of whether we really must close it here, but
currently we have no such detail knowledge.
Jerome Muffat-Meridol helped us work this out.
verification is requested. Previously it would even return failure if gnutls
failed to get the server cert even though no verification was asked for.
- Fix my Curl_timeleft() leftover mistake in the gnutls code
out and provides a test program that demonstrates that libcurl might not set
error description message for error CURLE_COULDNT_RESOLVE_HOST for Windows
threaded name resolver builds. Fixed now.
(http://curl.haxx.se/bug/view.cgi?id=1889856): When using the gnutls ssl
layer, cleaning-up and reinitializing curl ends up with https requests
failing with "ASN1 parser: Element was not found" errors. Obviously a
regression added in 7.16.3.
"HttpOnly" feature introduced by Microsoft and apparently also supported by
Firefox: http://msdn2.microsoft.com/en-us/library/ms533046.aspx . HttpOnly
is now supported when received from servers in HTTP headers, when written to
cookie jars and when read from existing cookie jars.
the SingleRequest one to make pipelining better. It is a bit tricky to keep
them in the right place, to keep things related to the actual request or to
the actual connection in the right place.
(http://curl.haxx.se/bug/view.cgi?id=1879375) which describes how libcurl
got lost in this scenario: proxy tunnel (or HTTPS over proxy), ask to do any
proxy authentication and the proxy replies with an auth (like NTLM) and then
closes the connection after that initial informational response.
libcurl would not properly re-initialize the connection to the proxy and
continue the auth negotiation as it should. It does now, however, as it will
now detect if one or more authentication methods were available and asked
for, and will thus retry the connection and continue from there.
- I made the progress callback get called properly during proxy CONNECT.
CONNECT over a proxy. curl_multi_fdset() didn't report back the socket
properly during that state, due to a missing case in the switch in the
multi_getsock() function.
previously had a number of flaws, perhaps most notably when an application
fired up N transfers at once, as they then wouldn't pipeline anywhere near as
nicely as one would think. Test case 530 was also updated to take the
improved functionality into account.
Signalling that a global DNS cache is wanted is done by setting the option,
but the internal variable that marks it as in use must not be set until it
finally actually gets used!
NOTE and WARNING: I noticed that you can't actually switch off the global dns
cache with CURLOPT_DNS_USE_GLOBAL_CACHE but you couldn't do that previously
either and the option is very clearly and loudly documented as DO NOT USE so
I won't bother to fix this bug now.
silly code left from when we switched to let the multi handle "hold" the dns
cache when using the multi interface... Of course this only triggered when a
certain function call returned an error at the correct moment.
(http://curl.haxx.se/bug/view.cgi?id=1871269) and we could fix his hang-
problem that occurred when doing a large HTTP POST request with the
response-body read from a callback.
libcurl to seek in a given input stream. This is particularly important when
doing upload resumes when there's already a huge part of the file present
remotely. Before, and still if this callback isn't used, libcurl will read
and throw away the entire file up to the point where the resuming
begins (which of course can be a slow operation depending on file size,
I/O bandwidth and more). This new function will also be preferred to get
used instead of the CURLOPT_IOCTLFUNCTION for seeking back in a stream when
doing multi-stage HTTP auth with POST/PUT.
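A sketch of such a callback for a plain FILE-backed stream, assuming the
option names are the CURLOPT_SEEKFUNCTION/CURLOPT_SEEKDATA pair found in
today's headers (the entry's own names are cut off above):

    #include <stdio.h>

    /* return 0 on success, non-zero to make libcurl fail the seek;
       the long cast limits this sketch to 2GB files */
    static int my_seek(void *userp, curl_off_t offset, int origin)
    {
      FILE *f = (FILE *)userp;
      return fseek(f, (long)offset, origin) ? 1 : 0;
    }

    /* ... */
    curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, my_seek);
    curl_easy_setopt(curl, CURLOPT_SEEKDATA, f);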
(http://curl.haxx.se/bug/view.cgi?id=1868255) with a patch. It identifies
and fixes a problem with parsing WWW-Authenticate: headers with additional
spaces in the line that the parser wasn't written to deal with.
(http://curl.haxx.se/bug/view.cgi?id=1863171) where he pointed out that
libcurl's date parser didn't accept a +1300 time zone which actually is used
fairly often (like New Zealand's Daylight Savings Time), so I modified the
parser to now accept up to and including -1400 to +1400.
code to instead introduce support for a new proxy type called
CURLPROXY_SOCKS5_HOSTNAME that is used to send the host name to the proxy
instead of IP address and there's thus no longer any need for a new
curl_easy_setopt() option.
The default SOCKS5 proxy is again back to sending the IP address to the
proxy. The new curl command line option for enabling sending host name to a
SOCKS5 proxy is now --socks5-hostname.
proxy do the host name resolving and only if --socks5ip (or
CURLOPT_SOCKS5_RESOLVE_LOCAL) is used we resolve the host name locally and
pass on the IP address only to the proxy.
is an unofficial PROXY4 variant that sends the hostname to the proxy instead
of the resolved address (which is already supported by SOCKS5). --socks4a is
the curl command line option for it and CURLOPT_PROXYTYPE can now be set to
CURLPROXY_SOCKS4A as well.
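Selecting either behavior from code is just a proxy type choice (fragment,
assuming an initialized easy handle 'curl'):

    curl_easy_setopt(curl, CURLOPT_PROXY, "socksproxy.example.com:1080");
    /* let the SOCKS5 proxy resolve the host name: */
    curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS5_HOSTNAME);
    /* or use the SOCKS4a variant: */
    curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS4A);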
(http://curl.haxx.se/bug/view.cgi?id=1856628) and provided a fix for the
(small) memory leak in the SSL session ID caching code. It happened when a
previous entry in the cache was re-used.
and makes wrong assumptions about the build target when it isn't specified. So,
if no build target has been defined we will target WinXP when building
with MSVC 9.0 (VS2008).
(http://curl.haxx.se/bug/view.cgi?id=1849764) with an included fix. He
identified a problem for re-used connections that previously had sent
Expect: 100-continue and in some situations the subsequent POST (that didn't
use Expect:) still had the internal flag set for its use. David's fix (that
makes the setting of the flag in every single request unconditionally) is
fine and is now used!
callback) over a proxy when NTLM is used as auth with the proxy. The bug
also concerned Digest and was limited to using callback only. Spacen worked
with us to provide a useful patch. I added test cases 547 and 548 to
verify two variations of POST over proxy with NTLM.
the appending of the "type=" thing on FTP URLs when they are passed to a
HTTP proxy. Some proxies just don't like that appending (which is done
unconditionally in 7.17.1), and some proxies treat binary/ascii transfers
better with the appending done!
is inited at the start of the DO action. I removed the Curl_transfer_keeper
struct completely, and I had to move out a few struct members (that had to
be set before DO or used after DONE) to the UrlState struct. The SingleRequest
struct is accessed with SessionHandle->req.
One of the biggest reasons for doing this was the bunch of duplicate struct
members in HandleData and Curl_transfer_keeper since it was really messy to
keep track of two variables with the same name and basically the same purpose!
the same state struct as the host auth, so both could never be used at the
same time! I fixed it (without being able to check) to use two separate
structs to allow authentication using Negotiate on host and proxy
simultaneously.
from the other day. It is time to setup the internal SSL libs and treat them
with a "handler" struct similar to how we deal with the protocols these days...
callback was used, as it could wrongly pass on a bad size for the outgoing
HTTP header. The bad size would be a very large value as it was a wrapped
size_t content. This happened when the whole HTTP request failed to get sent
in one single send. http://curl.haxx.se/mail/lib-2007-11/0165.html
do_init() and do_complete() which now are called first and last in the DO
function. It simplified the flow in multi.c and the functions got more
sensible names!
for the cases where there's nothing to do in here, like for SFTP directory
listings that are already complete when this function gets called. The init
stuff clears byte counters which isn't really desired.
forwarded from the Gentoo bug tracker by Daniel Black and was originally
submitted by Robin Johnson, pointed out that libcurl would do bad memory
references when it failed and bailed out before the handler thing was
set up. My fix is not done like the provided patch does it, but instead I
make sure that there's never any chance for a NULL pointer in that struct
member.
out that SFTP requests didn't use persistent connections. Neither did SCP
ones. I gave the SSH code a good beating and now both SCP and SFTP should
use persistent connections fine. I also did a bunch of indent changes as
well as a bug fix for the "keyboard interactive" auth.
connectdata struct. This will in theory enable us to do persistent connections
with SCP+SFTP, but currently the state machine always (and wrongly) cleans up
everything in the 'done' action instead of in 'disconnect'. Also did a bunch
of indent fixes, if () => if() and a few other source cleanups like added
comments etc.
was even mentioned to be bad in a comment! Should make test 2000 and 2001 work
fine.
Also, freedirs() now takes a ftp_conn struct pointer which saves some extra
unnecessary variable assignments.
versions of ws2tcpip.h do not have the definition. It seems that when
the socklen_t definition is missing from ws2tcpip.h the definition for
INET_ADDRSTRLEN is also missing, and that when one definition is present
the other one is also available.
https://bugzilla.novell.com/show_bug.cgi?id=332917 about a HTTP redirect to
FTP that caused memory havoc. His work together with my efforts created two
fixes:
#1 - FTP::file was moved to struct ftp_conn, because it has to be dealt with
at connection cleanup, at which time the struct HandleData could be
used by another connection.
Also, the unused char *urlpath member is removed from struct FTP.
#2 - provide a Curl_reset_reqproto() function that frees
data->reqdata.proto.* on connection setup if needed (that is if the
SessionHandle was used by a different connection).