It helps to prevent a hangup with some FTP servers in case the idle session
timeout has been exceeded, but it may also be useful for other protocols
that send a quit message on disconnect. Currently used by FTP, POP3,
IMAP and SMTP.
While changing Curl_sec_read_msg to accept an enum protection_level
instead of an int, I went ahead and fixed the usage of the associated
fields.
Some code was assuming that prot_clear == 0. Fixed those to use the
proper value. Added assertions prior to any code that would set the
protection level.
The IP version choice was previously only in the UserDefined struct
within the SessionHandle, but since we sometimes alter that option
during a request we need to have it on a per-connection basis.
I also moved more "init conn" code into the allocate_conn() function,
which is more or less designed for that purpose.
CURLOPT_RESOLVE is a new option that sends along a curl_slist with
name:port:address sets that will populate the DNS cache with entries so
that requests can be "fooled" to use another host than what otherwise
would've been used. Previously we've encouraged the use of Host: for
that when dealing with HTTP, but this new feature has the added bonus
that it allows the name from the URL to be used for TLS SNI and server
certificate name checks as well.
This is a first change. Surely more will follow to make it decent.
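For illustration, a minimal use of the new option might look like this (host
name and address are made up):

  CURL *curl = curl_easy_init();
  struct curl_slist *resolve = NULL;

  /* pre-populate the DNS cache: www.example.com:80 will connect to
     127.0.0.1, while the original name is still used for Host:,
     TLS SNI and certificate name checks */
  resolve = curl_slist_append(resolve, "www.example.com:80:127.0.0.1");
  curl_easy_setopt(curl, CURLOPT_RESOLVE, resolve);
  curl_easy_setopt(curl, CURLOPT_URL, "http://www.example.com/");
  curl_easy_perform(curl);

  curl_slist_free_all(resolve);
  curl_easy_cleanup(curl);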
When given a custom host name in a Host: header, we can use it for
several different purposes other than just cookies, so we rename it and
use it for SSL SNI etc.
The URL parser got a little stricter as it now considers a '?' to be a
host name divider, so that slightly sloppier URLs work too. What made me
do this change was a reported problem with a URL like:
www.example.com?email=name@example.com This form of URL is not
really a legal URL (due to the missing slash after the host name) but is
widely accepted by all major browsers and libcurl already accepted
it too; it was just the '@' character that triggered the problem now.
The side-effect of this change is that libcurl no longer accepts the
'?' character as part of a user name or password given in the URL, which
it used to accept (and is tested in test 191). RFC 3986 however
requires that character to be percent-encoded in those fields, since it
is used as a divider.
Bug: http://curl.haxx.se/bug/view.cgi?id=3090268
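For illustration, a user name containing '?' now needs it percent-encoded as
%3F in the URL (made-up host and credentials):

  /* user name "who?me" must be written as who%3Fme in the URL */
  curl_easy_setopt(curl, CURLOPT_URL,
                   "ftp://who%3Fme:secret@ftp.example.com/");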
In order to prevent, for example, the pingpong protocols from issuing
STARTTLS (or equivalent) even though there's no SSL support built in.
Reported by: Sune Ahlgren
Bug: http://curl.haxx.se/mail/archive-2010-10/0045.html
The functions Curl_disconnect() and Curl_done() are both used within the
scope of a single request so they cannot be allowed to use
Curl_expire(... 0) to kill all timeouts as there are some timeouts that
are set before a request that are supposed to remain until the request
is done.
The timeouts are now instead cleared at curl_easy_cleanup() and when the
multi state machine changes a handle to the complete state.
Obviously, browsers ignore a colon without a following port number. Both
Firefox and Chrome just remove the colon from such URLs. This change
does not remove the colon for URLs sent over a HTTP proxy, so we should
consider doing that change as well.
Reported by: github user 'kreshano'
Curl_getconnectinfo() is changed to return a proper curl_socket_t for
the last socket so that it'll work more portably (and cause fewer
compiler warnings).
HTTP allows a server to send trailing headers after all the chunks
have been sent WITHOUT signalling their presence in the first response
headers. The "Trailer:" header is only a SHOULD there, and since we need
to handle the situation even without that header, I made libcurl ignore
Trailer: completely.
Test case 1116 was added to verify this and to make sure we handle more
than one trailer header properly.
Reported by: Patrick McManus
Bug: http://curl.haxx.se/bug/view.cgi?id=3052450
Curl_expire() is now expanded to hold a list of timeouts for each easy
handle. Only the one closest in time is used as the primary timeout for
the handle and as the key for the splay tree (which sorts and lists all
handles within the multi handle).
When the main timeout has triggered/expired, the next timeout in time
that is kept in the list will be moved to the main timeout position and
used as the key to splay with. This way, all timeouts that are set with
Curl_expire() internally will end up as a proper timeout. Previously any
Curl_expire() that set a _later_ timeout than what was already set was
just silently ignored and thus missed.
Setting Curl_expire() with timeout 0 (zero) will cancel all previously
added timeouts.
Corrects known bug #62.
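A rough sketch of the idea, purely illustrative and not the actual libcurl
data structures:

  struct timeout_node {
    long expire_ms;             /* absolute expiry time in milliseconds */
    struct timeout_node *next;
  };

  /* insert sorted with the earliest expiry first; the head entry acts
     as the primary timeout and as the key in the multi handle's splay
     tree */
  static struct timeout_node *add_timeout(struct timeout_node *head,
                                          struct timeout_node *node)
  {
    struct timeout_node **pp = &head;
    while(*pp && (*pp)->expire_ms <= node->expire_ms)
      pp = &(*pp)->next;
    node->next = *pp;
    *pp = node;
    return head;
  }

  /* when the primary timeout fires, the next entry in the list gets
     promoted to become the new splay key */
  static struct timeout_node *pop_expired(struct timeout_node *head)
  {
    return head ? head->next : NULL;
  }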
Test 563 is enabled now and verifies that the combo FTP type=A URL,
CURLOPT_PORT set and proxy work fine. As a bonus I managed to remove the
somewhat odd FTP check in parse_remote_port() and instead converted it
to a better and more generic 'slash_removed' struct field. Checking the
->protocol field isn't right since when an FTP:// URL is sent over a
HTTP proxy, the protocol is HTTP but the URL was handled by the FTP code
and thus slash_removed is set TRUE for this case.
Simply because TCP might already be connected, we cannot skip the
proxy connect procedure. We need to be careful not to overload the
bits.tcpconnect field with more meaning like this.
With this fix, SOCKS proxies work again when the multi interface is
used. I believe this regression was added with commit 4b351d018e,
released as 7.20.1.
Left todo: add a test case that verifies this functionality that
prevents us from breaking it again in the future!
Reported by: Robin Cornelius
Bug: http://curl.haxx.se/bug/view.cgi?id=3033966
... since FTP is using it as well, and potentially other protocols!
Also, an #endif was incorrectly marked as CURL_DISABLE_HTTP when it
actually seems to end the proxy block instead.
makes the LDAP code much cleaner and nicer and in general a
better libcurl citizen. If a new enough OpenLDAP version is
detected, the new and shiny lib/openldap.c code is then used
instead of the old cruft.
Code by Howard, minor cleanups by Daniel.
FTP(S) uses two connections whose recv and send functions can be
set independently, so by introducing recv+send pairs in the same
manner as we already have sockets/connections, we can make FTPS
work fine.
This commit fixes the FTPS regression introduced in change d64bd82.
Dirk Manske reported a regression. When connecting with the multi
interface, there were situations where libcurl wouldn't store
connect time correctly as it used to (and is documented to) do.
Using his fine sample program we could repeat it, and I wrote up
test case 573 using that code. The problem does not easily show
itself using the local test suite though.
The fix, also as suggested by Dirk, is a bit on the ugly side as
it adds yet another call to Curl_verboseconnect() and setting the
TIMER_CONNECT time. That situation is subject for some closer
inspection in the future.
Howard Chu brought the bulk work of this patch that properly
moves out the sending and receiving of data to the parts of the
code that are properly responsible for the various ways of doing
so.
Daniel Stenberg assisted with polishing a few bits and fixed some
minor flaws in the original patch.
Another upside of this patch is that we now abuse CURLcodes less
with the "magic" -1 return codes and instead use CURLE_AGAIN more
consistently.
The main change is to allow input from user-specified methods,
when they are specified with CURLOPT_READFUNCTION.
All calls to fflush(stdout) in telnet.c were removed, which makes
using 'curl telnet://foo.com' painful since prompts and other data
are not always returned to the user promptly. Use
'curl --no-buffer telnet://foo.com' instead. In general,
the user should have their CURLOPT_WRITEFUNCTION do a fflush
for interactive use.
Also fix assumption that reading from stdin never returns < 0.
Old code could crash in that case.
Call progress functions in telnet main loop.
Signed-off-by: Ben Greear <greearb@candelatech.com>
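A minimal write callback in that spirit could look like this (a sketch, not
part of the patch):

  static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userp)
  {
    size_t written = fwrite(ptr, size, nmemb, stdout);
    fflush(stdout); /* make telnet prompts show up promptly */
    (void)userp;
    return written * size;
  }

  curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);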
Ben Greear brought a patch that from now on allows all protocols
to specify a user name and password within the URL, in the same
manner HTTP and FTP have been allowed to in the past - although
far from all of the libcurl-supported protocols actually have that
feature in their URL definition spec.
from hostip.h to setup.h in order to allow proper inclusion in any file.
This represents no functional change at all in which resolver is used,
everything still works as usual, internally and externally there is no
difference in behavior.
command is a special "hack" used by the drftpd server, but even though it is
a custom extension I've deemed it fine to add to libcurl since this server
seems to survive and people keep using it and want libcurl to support
it. The new libcurl option is named CURLOPT_FTP_USE_PRET, and it is also
usable from the curl tool with --ftp-pret. Using this option on a server
that doesn't support this command will make libcurl fail.
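From an application it is a plain boolean option:

  /* send PRET before PASV, as drftpd requires */
  curl_easy_setopt(curl, CURLOPT_FTP_USE_PRET, 1L);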
detects and uses proxies based on the environment variables. If the proxy
was given as an explicit option it worked, but due to the setup order
mistake, proxies picked up from '[protocol]_proxy' would not be used
properly for a few protocols. Obviously this broke after 7.19.4. I now
also added test case 1106 that verifies this functionality.
(http://curl.haxx.se/bug/view.cgi?id=2913886)
CURLOPT_HTTPPROXYTUNNEL enabled over a proxy, a subsequent request using the
same proxy with the tunnel option disabled would still wrongly re-use that
previous connection and the outcome would only be badness.
end up with entries that wouldn't time-out:
1. Set up a first web server that redirects (307) to a http://server:port
that's down
2. Have curl connect to the first web server using curl multi
After the curl_easy_cleanup call, there will be curl dns entries hanging
around with in_use != 0.
(http://curl.haxx.se/bug/view.cgi?id=2891591)
Fix SIGSEGV on free'd easy_conn when pipe unexpectedly breaks
Fix data corruption issue with re-connected transfers
Fix use after free if we're completed but easy_conn not NULL
With the curl memory tracking feature decoupled from the debug build feature,
CURLDEBUG and DEBUGBUILD preprocessor symbol definitions are used as follows:
CURLDEBUG used for curl debug memory tracking specific code (--enable-curldebug)
DEBUGBUILD used for debug enabled specific code (--enable-debug)
(http://curl.haxx.se/bug/view.cgi?id=2784055) identifying a problem to
connect to SOCKS proxies when using the multi interface. It turned out to
almost not work at all previously. We need to wait for the TCP connect to
be properly verified before doing the SOCKS magic.
There's still a flaw in the FTP code for this.
Storsjo pointed out how setting CURLOPT_NOBODY to 0 could be downright
confusing as it set the method to either GET or HEAD. The example he showed
looked like:
curl_easy_setopt(curl, CURLOPT_PUT, 1);
curl_easy_setopt(curl, CURLOPT_NOBODY, 0);
The new way doesn't alter the method until the request is about to start. If
CURLOPT_NOBODY is then 1 the HTTP request will be HEAD. If CURLOPT_NOBODY is
0 and the request happens to have been set to HEAD, it will then instead be
set to GET. I believe this will be less surprising to users, and hopefully
not hit any existing users badly.
(http://curl.haxx.se/docs/adv_20090303.html also known as CVE-2009-0037) in
which previous libcurl versions (by design) can be tricked to access an
arbitrary local/different file instead of a remote one when
CURLOPT_FOLLOWLOCATION is enabled. This flaw is now fixed in this release
together with the addition of two new setopt options for controlling this
new behavior:
o CURLOPT_REDIR_PROTOCOLS controls what protocols libcurl is allowed to
follow to when CURLOPT_FOLLOWLOCATION is enabled. By default, this option
excludes the FILE and SCP protocols, so you need to explicitly allow
them in your app if you really want that behavior.
o CURLOPT_PROTOCOLS controls what protocol(s) libcurl is allowed to fetch
using the primary URL option. This is useful if you want to allow a user or
other outsiders to control what URL to pass to libcurl and yet not allow all
protocols libcurl may have been built to support.
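For example, an application that only wants HTTP(S) transfers and HTTP(S)
redirects could do:

  curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                   CURLPROTO_HTTP | CURLPROTO_HTTPS);
  curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                   CURLPROTO_HTTP | CURLPROTO_HTTPS);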
plain FTP connections, and it will then allow MKD to fail once and retry the
CWD afterwards. This is especially useful if you're doing many simultaneous
connections against the same server and they all have this option enabled,
as then CWD may first fail but then another connection does MKD before this
connection and thus MKD fails but trying CWD works! The numbers can
(should?) now be set with the convenience enums now called
CURLFTP_CREATE_DIR and CURLFTP_CREATE_DIR_RETRY.
Tests have proven that if you're making an application that uploads a set of
files to an ftp server, you will get a noticeable gain in speed if you're
using multiple connections and this option will then be very useful.
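In application code the retrying variant is enabled like this:

  /* create missing dirs, and retry CWD once if MKD fails due to the
     race described above */
  curl_easy_setopt(curl, CURLOPT_FTP_CREATE_MISSING_DIRS,
                   (long)CURLFTP_CREATE_DIR_RETRY);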
interface and setting CURLMOPT_MAXCONNECTS to something less than the number
of handles you add to the multi handle. All the connections that didn't fit
in the cache would not be properly disconnected nor freed!
version 1.1 instead of 1.0 like before. This change also introduces the new
proxy type for libcurl called 'CURLPROXY_HTTP_1_0' that then allows apps to
switch (back) to CONNECT 1.0 requests. The curl tool also got a --proxy1.0
option that works exactly like --proxy but sets CURLPROXY_HTTP_1_0.
I updated all test cases that use CONNECT: some now use --proxy1.0 and
some were updated to do CONNECT 1.1, to get both versions exercised.
CURLOPT_SOCKS5_GSSAPI_SERVICE and CURLOPT_SOCKS5_GSSAPI_NEC to allow libcurl
to do GSS-style authentication with SOCKS5 proxies. The curl tool got the
options called --socks5-gssapi-service and --socks5-gssapi-nec to enable
these.
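From an application they are set like this ("rcmd" being the conventional
default service name):

  curl_easy_setopt(curl, CURLOPT_SOCKS5_GSSAPI_SERVICE, "rcmd");
  curl_easy_setopt(curl, CURLOPT_SOCKS5_GSSAPI_NEC, 1L); /* NEC mode */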
They basically offer the same thing that previously only the NO_PROXY
environment variable offered: a list of host names that shall not use the
proxy even if one is specified.
clarity. This does fix one problem that causes ;type=i FTP URLs
to fail in the Turkish locale when CURLOPT_PROXY_TRANSFER_MODE is
used (test case 561).
Added tests 561 and 1092 through 1094 to test various combinations
of ;type= and ;mode= URLs that could potentially fail in the Turkish
locale.
curl_easy_reset() by creating Curl_init_userdefined(). This had the side effect
of fixing curl_easy_reset() so it now also resets CURLOPT_FTP_FILEMETHOD and
CURLOPT_SSL_SESSIONID_CACHE.
(http://curl.haxx.se/bug/view.cgi?id=2413067) that identified a problem that
would cause libcurl to mark a DNS cache entry "in use" eternally if the
subsequent TCP connect failed. It would thus never get pruned and refreshed
as it should've been.
now has an improved ability to do right when the multi interface (both
"regular" and multi_socket) is used for SCP and SFTP transfers. This should
result in (much) less busy-loop situations and thus less CPU usage with no
speed loss.
there are servers "out there" that rely on the client doing this broken
Digest authentication. Apache even comes with an option to work with such
broken clients.
The difference is only for URLs that contain a query part (a '?' character
and text to the right of it).
libcurl now supports this quirk, and you enable it by setting the
CURLAUTH_DIGEST_IE bit in the bitmask you pass to the CURLOPT_HTTPAUTH or
CURLOPT_PROXYAUTH options. They are thus individually controlled for server
and proxy.
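For example, to enable the quirk toward the server only:

  /* accept the IE-flavored Digest toward the server */
  curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_DIGEST_IE);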
(http://curl.haxx.se/bug/view.cgi?id=2351645) that identified a problem with
the multi interface that occurred if you removed an easy handle while in
progress and the handle was used in a HTTP pipeline.
when uploading files to a single FTP server using multiple easy handles
with the multi interface. Occasionally a handle would stall in
mysterious ways.
mysterious ways.
The problem turned out to be a side-effect of the ConnectionExists()
function's eagerness to re-use a handle for HTTP pipelining so it would
select it even if already being in use, due to an inadequate check for its
chances of being used for pipelining.
problem with my CURLINFO_PRIMARY_IP fix from October 7th that caused a NULL
pointer read. I also took the opportunity to clean up this logic (storing of
the connection's IP address) somewhat as we had it stored in two different
places and ways previously and they are now unified.
Changed checkprefix() to use it and those instances of strnequal() that
compare host names or other protocol strings that are defined to be
independent of case in the C locale. This should fix a few more
Turkish locale problems.
make CURLOPT_PROXYUSERPWD sort of deprecated. The primary motive for adding
these new options is that they have no problems with the colon separator
that the CURLOPT_PROXYUSERPWD option does.
(http://curl.haxx.se/bug/view.cgi?id=2154627) which pointed out that libcurl
uses strcasecmp() in multiple places where it causes failures when the
Turkish locale is used. This is because 'i' and 'I' aren't the same letter
in Turkish, so strcasecmp() on those letters gives a different result there
than in English (or just about all other languages). I thus introduced a
totally new internal function in libcurl (called Curl_ascii_equal) for doing
case insensitive comparisons of English/ASCII style strings, which thus will
make "file" and "FILE" match even if the Turkish locale is selected.
curl_easy_setopt: CURLOPT_USERNAME and CURLOPT_PASSWORD that sort of
deprecates the good old CURLOPT_USERPWD since they allow applications to set
the user name and password independently and perhaps more importantly allow
both to contain colon(s) which CURLOPT_USERPWD doesn't fully support.
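For example, credentials containing colons can now be set cleanly:

  curl_easy_setopt(curl, CURLOPT_USERNAME, "user:name");
  curl_easy_setopt(curl, CURLOPT_PASSWORD, "pass:word");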
the app re-used the handle to do a connection to host B and then again
re-used the handle to host A, it would not update the info with host A's IP
address (due to the connection being re-used) but it would instead report
the info from host B.
switching from one protocol to another in a single request (e.g.
redirecting from HTTP to FTP as in test 1055) by resetting
state.expect100header before every request.
CURLOPT_POST301 (but adds a define for backwards compatibility for you who
don't define CURL_NO_OLDIES). This option allows you to now also change the
libcurl behavior for a HTTP response 302 after a POST to not use GET in the
subsequent request (when CURLOPT_FOLLOWLOCATION is enabled). I edited the
patch somewhat before commit. The curl tool got a matching --post302
option. Test case 1076 was added to verify this.
enabling this feature with CURLOPT_CERTINFO for a request using SSL (HTTPS
or FTPS), libcurl will gather lots of server certificate info and that info
can then get extracted by a client after the request has completed with
curl_easy_getinfo()'s CURLINFO_CERTINFO option. Linus Nielsen Feltzing
helped me test and smooth out this feature.
Unfortunately, this feature currently only works with libcurl built to use
OpenSSL.
This feature was sponsored by networking4all.com - thanks!
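Extracting the info can look roughly like this:

  struct curl_certinfo *ci = NULL;
  curl_easy_setopt(curl, CURLOPT_CERTINFO, 1L);
  /* ... perform the transfer ... */
  if(!curl_easy_getinfo(curl, CURLINFO_CERTINFO, &ci) && ci) {
    int i;
    for(i = 0; i < ci->num_of_certs; i++) {
      struct curl_slist *slist;
      for(slist = ci->certinfo[i]; slist; slist = slist->next)
        printf("%s\n", slist->data);
    }
  }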
an unlock in between) for a certain case and that in fact works when using
regular windows mutexes but not with pthreads'! Locks should of course not
get locked again so this is now fixed.
http://curl.haxx.se/mail/lib-2008-08/0422.html
remain in use as internal curl_off_t print formatting strings for the internal
*printf functions which still cannot handle print formatting string directives
such as "I64d", "I64u", and others available on MSVC, MinGW, Intel's ICC, and
other DOS/Windows compilers.
This reverts previous commit part which did:
FORMAT_OFF_T -> CURL_FORMAT_CURL_OFF_T
FORMAT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
the names of the curl_off_t formatting string directives now become
CURL_FORMAT_CURL_OFF_T and CURL_FORMAT_CURL_OFF_TU.
CURL_FMT_OFF_T -> CURL_FORMAT_CURL_OFF_T
CURL_FMT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
Remove the use of an internal name for the curl_off_t formatting string directives
and use the common one available from the inside and outside of the library.
FORMAT_OFF_T -> CURL_FORMAT_CURL_OFF_T
FORMAT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
(http://curl.haxx.se/bug/view.cgi?id=2042440) with a patch. He identified a
problem when using NTLM over a proxy but the end-point does Basic, and then
libcurl would do wrong when the host sent "Connection: close" as the proxy's
NTLM state was erroneously cleared.
incorrectly--the host name is treated as part of the user name and the
port number becomes the password. This can be observed in test 279
(was KNOWN_ISSUE #54).
an URL in a Location: header didn't have the scope ID removed, so an
invalid host name was used. Second, when the scope ID was removed, it
also removed any port number that may have existed in the URL.
parser to allow numerical IPv6-addresses to be specified with the scope
given, as per RFC4007 - with a percent character that itself needs to be URL
escaped. For example, for an address of fe80::1234%1 the HTTP URL is:
"http://[fe80::1234%251]/"
CURLINFO_APPCONNECT_TIME. This is set when the "application layer"
handshake/connection is completed (typically SSL, TLS or SSH). By using this
you can figure out the application layer's own connect time. You can extract
the time stamp using curl's -w option and the new variable named
'time_appconnect'. This feature was sponsored by Lenny Rachitsky at NeuStar.
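From libcurl the value is read as a double, in seconds:

  double appconnect;
  if(!curl_easy_getinfo(curl, CURLINFO_APPCONNECT_TIME, &appconnect))
    printf("app layer connected after %.3f seconds\n", appconnect);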
multi interface with pipelining enabled as it would wrongly check for,
detect and close "dead connections" even though that connection was already
in use!
redirections and thus cannot use CURLOPT_FOLLOWLOCATION easily, we now
introduce the new CURLINFO_REDIRECT_URL option that lets applications
extract the URL libcurl would've redirected to if it had been told to. This
then enables the application to continue to that URL as it thinks is
suitable, without having to re-implement the magic of creating the new URL
from the Location: header etc. Test 1029 verifies it.
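For example:

  char *redirect = NULL;
  if(!curl_easy_getinfo(curl, CURLINFO_REDIRECT_URL, &redirect) && redirect)
    printf("would have redirected to: %s\n", redirect);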
GET simply because previously when you set CURLOPT_NOBODY to TRUE first and
then FALSE you'd end up in a broken state where a HTTP request would do a
HEAD but still act a lot like a GET and hang waiting for the content etc.
default instead of a ca bundle. The configure script will also look for a
ca path if no ca bundle is found and no option given.
- Fixed detection of previously installed curl-ca-bundle.crt
better control at the exact state of the connection's SSL status so that we
know exactly when it has completed the SSL negotiation or not so that there
won't be accidental re-uses of connections that are wrongly believed to be
in SSL-completed-negotiate state.
such as the CURLOPT_SSL_CTX_FUNCTION one treat that as if it was a Location:
following. The patch that introduced this feature was done for 7.11.0, but
this code and functionality has been broken since about 7.15.4 (March 2006)
with the introduction of non-blocking OpenSSL "connects".
It was a hack to begin with and since it doesn't work and hasn't worked
correctly for a long time and nobody has even noticed, I consider it a very
suitable subject for plain removal. And so it was done.
DONE before the entire request operation is complete and thus we can't know in
what state it is for re-using, so we're forced to close it. In a perfect world
we could add code that keeps track of whether we really must close it here
or not, but currently we have no such detailed knowledge.
Jerome Muffat-Meridol helped us work this out.
the SingleRequest one to make pipelining better. It is a bit tricky to keep
them in the right place, to keep things related to the actual request or to
the actual connection in the right place.
previously had a number of flaws, perhaps most notably when an application
fired up N transfers at once, as then they wouldn't pipeline anywhere near
as nicely as anyone would think... Test case 530 was also updated to take
the improved functionality into account.
Signalling that a global DNS cache is wanted is done by setting the option,
but the internal variable saying it is in use must not be set until it
finally actually gets used!
NOTE and WARNING: I noticed that you can't actually switch off the global
dns cache with CURLOPT_DNS_USE_GLOBAL_CACHE, but you couldn't do that
previously either, and the option is very clearly and loudly documented as
DO NOT USE, so I won't bother to fix this bug now.
silly code left from when we switched to let the multi handle "hold" the dns
cache when using the multi interface... Of course this only triggered when a
certain function call returned error at the correct moment.
libcurl to seek in a given input stream. This is particularly important when
doing upload resumes when there's already a huge part of the file present
remotely. Before, and still if this callback isn't used, libcurl will read
and throw away the entire file up to the point where the resuming
begins (which of course can be a slow operation depending on file size,
I/O bandwidth and more). This new callback will also be preferred over
CURLOPT_IOCTLFUNCTION for seeking back in a stream when doing multi-stage
HTTP auth with POST/PUT.
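A minimal seek callback for a plain FILE * upload might look like this
sketch:

  static int seek_cb(void *userp, curl_off_t offset, int origin)
  {
    FILE *f = (FILE *)userp;
    /* CURL_SEEKFUNC_* are the return codes defined in curl.h */
    if(fseek(f, (long)offset, origin))
      return CURL_SEEKFUNC_FAIL;
    return CURL_SEEKFUNC_OK;
  }

  curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb);
  curl_easy_setopt(curl, CURLOPT_SEEKDATA, f);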
code to instead introduce support for a new proxy type called
CURLPROXY_SOCKS5_HOSTNAME that is used to send the host name to the proxy
instead of IP address and there's thus no longer any need for a new
curl_easy_setopt() option.
The default SOCKS5 proxy is again back to sending the IP address to the
proxy. The new curl command line option for enabling sending host name to a
SOCKS5 proxy is now --socks5-hostname.
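In libcurl terms (proxy host name made up):

  /* let the SOCKS5 proxy do the name resolving */
  curl_easy_setopt(curl, CURLOPT_PROXY, "socksproxy.example.com");
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS5_HOSTNAME);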
proxy do the host name resolving and only if --socks5ip (or
CURLOPT_SOCKS5_RESOLVE_LOCAL) is used we resolve the host name locally and
pass on the IP address only to the proxy.
is an unofficial SOCKS4 variant that sends the host name to the proxy
instead of the resolved address (which is already supported by SOCKS5).
--socks4a is
the curl command line option for it and CURLOPT_PROXYTYPE can now be set to
CURLPROXY_SOCKS4A as well.
the appending of the "type=" thing on FTP URLs when they are passed to a
HTTP proxy. Some proxies just don't like that appending (which is done
unconditionally in 7.17.1), and some proxies treat binary/ascii transfers
better with the appending done!
is inited at the start of the DO action. I removed the Curl_transfer_keeper
struct completely, and I had to move out a few struct members (that had to
be set before DO or used after DONE) to the UrlState struct. The SingleRequest
struct is accessed with SessionHandle->req.
One of the biggest reasons for doing this was the bunch of duplicate struct
members in HandleData and Curl_transfer_keeper since it was really messy to
keep track of two variables with the same name and basically the same purpose!
do_init() and do_complete() which now are called first and last in the DO
function. It simplified the flow in multi.c and the functions got more
sensible names!
forwarded from the Gentoo bug tracker by Daniel Black and was originally
submitted by Robin Johnson, pointed out that libcurl would do bad memory
references when it failed and bailed out before the handler thing was
set up. My fix is not done like the provided patch does it, but instead I
make sure that there's never any chance for a NULL pointer in that struct
member.
https://bugzilla.novell.com/show_bug.cgi?id=332917 about a HTTP redirect to
FTP that caused memory havoc. His work together with my efforts created two
fixes:
#1 - FTP::file was moved to struct ftp_conn, because it has to be dealt with
at connection cleanup, at which time the struct HandleData could be
used by another connection.
Also, the unused char *urlpath member is removed from struct FTP.
#2 - provide a Curl_reset_reqproto() function that frees
data->reqdata.proto.* on connection setup if needed (that is if the
SessionHandle was used by a different connection).
CURLOPT_OPENSOCKETDATA to set a callback that allows an application to replace
the socket() call used by libcurl. It basically allows the app to change
address, protocol or whatever of the socket. (I also did some whitespace
indent/cleanups in lib/url.c which kind of hides some of these changes, sorry
for mixing those in.)
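The callback has this shape; a pass-through sketch that just forwards to
socket():

  static curl_socket_t open_socket_cb(void *clientp, curlsocktype purpose,
                                      struct curl_sockaddr *addr)
  {
    (void)clientp;
    (void)purpose;
    /* the app may modify addr->family, addr->socktype, addr->protocol
       or the address itself before the socket gets created */
    return socket(addr->family, addr->socktype, addr->protocol);
  }

  curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, open_socket_cb);
  curl_easy_setopt(curl, CURLOPT_OPENSOCKETDATA, NULL);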
CURLOPT_SSH_HOST_PUBLIC_KEY_MD5 and the curl tool --hostpubmd5. They both make
the SCP or SFTP connection verify the remote host's md5 checksum of the public
key before doing a connect, to reduce the risk of a man-in-the-middle attack.
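The option takes the md5 checksum as 32 hexadecimal digits; the value below
is just a placeholder:

  curl_easy_setopt(curl, CURLOPT_SSH_HOST_PUBLIC_KEY_MD5,
                   "aabbccddeeff00112233445566778899");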
curl_easy_setopt() that alters how libcurl functions when following
redirects. It makes libcurl obey the RFC2616 when a 301 response is received
after a non-GET request is made. Default libcurl behaviour is to change
method to GET in the subsequent request (like it does for response code 302
- because that's what many/most browsers do), but with this CURLOPT_POST301
option enabled it will do what the spec says and do the next request using
the same method again. I.e keep POST after 301.
The curl tool got this option as --post301
Test case 1011 and 1012 were added to verify.
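In application code:

  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  curl_easy_setopt(curl, CURLOPT_POST301, 1L); /* keep POST after 301 */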
and allow reuse by multiple protocols. Several unused error codes were
removed. In all cases, macros were added to preserve source (and binary)
compatibility with the old names. These macros are subject to removal at
a future date, but probably not before 2009. An application can be
tested to see if it is using any obsolete code by compiling it with the
CURL_NO_OLDIES macro defined.
Documented some newer error codes in libcurl-error(3).
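Checking an application is as simple as defining the macro before including
the header; uses of obsolete names then become compile errors:

  #define CURL_NO_OLDIES
  #include <curl/curl.h>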
the configure script checks for openldap and friends and we link with those
libs just like we link all other third party libraries, and we no longer
dlopen() those libraries. Our private header file lib/ldap.h was renamed to
lib/curl_ldap.h due to this. I set a tag in CVS (curl-7_17_0-preldapfix)
just before this commit, just in case.
(http://curl.haxx.se/bug/view.cgi?id=1766320) pointing out that the libcurl
code accessed two curl_easy_setopt() options (CURLOPT_DNS_CACHE_TIMEOUT and
CURLOPT_DNS_USE_GLOBAL_CACHE) as ints even though they're documented to be
passed in as longs, and that makes a difference on 64 bit architectures.
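Correct usage passes longs, note the L suffix:

  curl_easy_setopt(curl, CURLOPT_DNS_CACHE_TIMEOUT, 120L);
  curl_easy_setopt(curl, CURLOPT_DNS_USE_GLOBAL_CACHE, 0L);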
after 7.16.2. This is much due to the different treatment file:// gets
internally, but now I added test 231 to make it less likely to happen again
without us noticing!
passed to it with curl_easy_setopt()! Previously it had always just referred
to the data, forcing the user to keep the data around until libcurl was done
with it. That is now history and libcurl will instead clone the given
strings and keep private copies.
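For example, this is now perfectly fine:

  char url[] = "http://example.com/";
  curl_easy_setopt(curl, CURLOPT_URL, url);
  /* the buffer can be modified or freed right away since libcurl
     keeps its own copy */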
and CURLOPT_NEW_DIRECTORY_PERMS. These control the permissions for files
and directories created on the remote server. CURLOPT_NEW_FILE_PERMS
defaults to 0644 and CURLOPT_NEW_DIRECTORY_PERMS defaults to 0755.
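For example:

  curl_easy_setopt(curl, CURLOPT_NEW_FILE_PERMS, 0644L);
  curl_easy_setopt(curl, CURLOPT_NEW_DIRECTORY_PERMS, 0755L);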
the CURLOPT_RESUME_FROM or CURLOPT_RANGE options and an existing connection
in the connection cache is closed to make room for the new one when you call
curl_easy_perform(). It would then wrongly free range-related data in the
connection close function.
server through a proxy and have the remote https server port set using the
CURLOPT_PORT option, protocol gets reset to http from https after the first
request.
User defined URL was modified internally by libcurl and subsequent reuse of
the easy handle may lead to connection using a different protocol (if not
originally http).
I found that libcurl hardcoded the protocol to "http" when it tries to
regenerate the URL if CURLOPT_PORT is set. I tried to fix the problem as
follows and it's working fine so far.
fixing some bugs:
o Don't mix GET and POST requests in a pipeline
o Fix the order in which requests are dispatched from the pipeline
o Fixed several curl bugs with pipelining when the server is returning
chunked encoding:
* Added states to chunked parsing for final CRLF
* Rewind buffer after parsing chunk with data remaining
* Moved chunked header initializing to a spot just before receiving
headers