GET simply because previously when you set CURLOPT_NOBODY to TRUE first and
then FALSE you'd end up in a broken state where an HTTP request would do a
HEAD but still act a lot like a GET and hang waiting for the content etc.
default instead of a CA bundle. The configure script will also look for a
CA path if no CA bundle is found and no option is given.
- Fixed detection of previously installed curl-ca-bundle.crt
better control over the exact state of the connection's SSL status, so that
we know exactly whether it has completed the SSL negotiation or not and there
won't be any accidental re-use of connections that are wrongly believed to
have completed their SSL negotiation.
such as the CURLOPT_SSL_CTX_FUNCTION one treat that as if it was a Location:
following. The patch that introduced this feature was done for 7.11.0, but
this code and functionality has been broken since about 7.15.4 (March 2006)
with the introduction of non-blocking OpenSSL "connects".
It was a hack to begin with, and since it doesn't work and hasn't worked
correctly for a long time and nobody has even noticed, I consider it a very
suitable subject for plain removal. And so it was done.
DONE before the entire request operation is complete and thus we can't know
in what state it is for re-use, so we're forced to close it. In a perfect
world we could add code that keeps track of whether we really must close it
here or not, but currently we have no such detailed knowledge.
Jerome Muffat-Meridol helped us work this out.
the SingleRequest one to make pipelining better. It is a bit tricky to keep
them in the right place: things related to the actual request belong in one,
and things related to the actual connection in the other.
previously had a number of flaws, perhaps most notably when an application
fired up N transfers at once, as they then wouldn't pipeline nearly as
nicely as one would expect... Test case 530 was also updated to take the
improved functionality into account.
Signalling that a global DNS cache is wanted is done by setting the option,
but the internal variable that marks it as in use must not be set until the
cache finally, actually gets used!
NOTE and WARNING: I noticed that you can't actually switch off the global DNS
cache with CURLOPT_DNS_USE_GLOBAL_CACHE, but you couldn't do that previously
either, and the option is very clearly and loudly documented as DO NOT USE,
so I won't bother to fix this bug now.
silly code left over from when we switched to letting the multi handle
"hold" the DNS cache when using the multi interface... Of course this only
triggered when a certain function call returned an error at the correct
moment.
libcurl to seek in a given input stream. This is particularly important when
doing upload resumes when there's already a huge part of the file present
remotely. Before, and still if this callback isn't set, libcurl will read
and throw away the entire file up to the point where the resuming begins
(which of course can be a slow operation depending on file size, I/O
bandwidth and more). This new callback is also preferred over
CURLOPT_IOCTLFUNCTION for seeking back in a stream when doing multi-stage
HTTP auth with POST/PUT.
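
A minimal sketch of how such a seek callback could be hooked up; the
callback name and the FILE * user pointer here are illustrative
assumptions, not from this entry:

    #include <stdio.h>
    #include <curl/curl.h>

    /* Illustrative seek callback: repositions the same FILE * that the
       read callback reads from. 'origin' is a SEEK_SET-style whence
       value and returning zero tells libcurl the seek succeeded. Plain
       fseek() takes a long, so files beyond 2GB would need a
       64-bit-aware seek such as fseeko(). */
    static int my_seek(void *userp, curl_off_t offset, int origin)
    {
      FILE *f = (FILE *)userp;
      return fseek(f, (long)offset, origin) ? 1 : 0;
    }

    static void install_seek(CURL *curl, FILE *f)
    {
      curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, my_seek);
      curl_easy_setopt(curl, CURLOPT_SEEKDATA, f);
    }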
code to instead introduce support for a new proxy type called
CURLPROXY_SOCKS5_HOSTNAME that is used to send the host name to the proxy
instead of an IP address, and there's thus no longer any need for a new
curl_easy_setopt() option.
The default SOCKS5 proxy is again back to sending the IP address to the
proxy. The new curl command line option for enabling sending host name to a
SOCKS5 proxy is now --socks5-hostname.
proxy do the host name resolving; only if --socks5ip (or
CURLOPT_SOCKS5_RESOLVE_LOCAL) is used do we resolve the host name locally
and pass on only the IP address to the proxy.
is an unofficial SOCKS4 variant that sends the hostname to the proxy instead
of the resolved address (which is already supported by SOCKS5). --socks4a is
the curl command line option for it and CURLOPT_PROXYTYPE can now be set to
CURLPROXY_SOCKS4A as well.
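
A small sketch of how an application would pick these proxy types; the
proxy address below is a made-up placeholder:

    #include <curl/curl.h>

    static void use_socks5_hostname(CURL *curl)
    {
      /* let the SOCKS5 proxy resolve the target host name itself */
      curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:1080");
      curl_easy_setopt(curl, CURLOPT_PROXYTYPE,
                       (long)CURLPROXY_SOCKS5_HOSTNAME);
    }

    static void use_socks4a(CURL *curl)
    {
      /* the SOCKS4a variant likewise sends the host name to the proxy */
      curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:1080");
      curl_easy_setopt(curl, CURLOPT_PROXYTYPE,
                       (long)CURLPROXY_SOCKS4A);
    }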
the appending of the "type=" thing on FTP URLs when they are passed to an
HTTP proxy. Some proxies just don't like that appending (which is done
unconditionally in 7.17.1), and some proxies treat binary/ascii transfers
better with the appending done!
is initialized at the start of the DO action. I removed the Curl_transfer_keeper
struct completely, and I had to move out a few struct members (that had to
be set before DO or used after DONE) to the UrlState struct. The SingleRequest
struct is accessed with SessionHandle->req.
One of the biggest reasons for doing this was the bunch of duplicate struct
members in HandleData and Curl_transfer_keeper since it was really messy to
keep track of two variables with the same name and basically the same purpose!
do_init() and do_complete(), which are now called first and last in the DO
function. It simplified the flow in multi.c and the functions got more
sensible names!
forwarded from the Gentoo bug tracker by Daniel Black and was originally
submitted by Robin Johnson, pointed out that libcurl would do bad memory
references when it failed and bailed out before the handler thing was set
up. My fix is not done like the provided patch does it, but instead I
make sure that there's never any chance for a NULL pointer in that struct
member.
https://bugzilla.novell.com/show_bug.cgi?id=332917 about a HTTP redirect to
FTP that caused memory havoc. His work together with my efforts created two
fixes:
#1 - FTP::file was moved to struct ftp_conn, because it has to be dealt with
at connection cleanup, at which time the struct HandleData could be
used by another connection.
Also, the unused char *urlpath member is removed from struct FTP.
#2 - provide a Curl_reset_reqproto() function that frees
data->reqdata.proto.* on connection setup if needed (that is if the
SessionHandle was used by a different connection).
CURLOPT_OPENSOCKETDATA to set a callback that allows an application to replace
the socket() call used by libcurl. It basically allows the app to change
address, protocol or whatever of the socket. (I also did some whitespace
indent/cleanups in lib/url.c which kind of hides some of these changes, sorry
for mixing those in.)
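
A sketch of such a callback; this one just mimics the default behaviour (a
real application would tweak the address or set socket options first):

    #include <curl/curl.h>
    #include <sys/socket.h>

    /* Replacement for libcurl's socket() call; 'addr' can be modified
       before creating the socket. Returning the new descriptor hands
       it over to libcurl. */
    static curl_socket_t my_opensocket(void *clientp, curlsocktype purpose,
                                       struct curl_sockaddr *addr)
    {
      (void)clientp; (void)purpose;
      return socket(addr->family, addr->socktype, addr->protocol);
    }

    static void install_opensocket(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, my_opensocket);
      curl_easy_setopt(curl, CURLOPT_OPENSOCKETDATA, NULL);
    }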
CURLOPT_SSH_HOST_PUBLIC_KEY_MD5 and the curl tool --hostpubmd5. They both make
the SCP or SFTP connection verify the remote host's MD5 checksum of the
public key before doing a connect, to reduce the risk of a man-in-the-middle
attack.
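
Usage sketch; the 32-digit hex string below is a made-up placeholder, not a
real key digest:

    #include <curl/curl.h>

    static void pin_host_key(CURL *curl)
    {
      /* must be exactly 32 hex digits; the transfer fails if the
         remote key's MD5 doesn't match (placeholder value shown) */
      curl_easy_setopt(curl, CURLOPT_SSH_HOST_PUBLIC_KEY_MD5,
                       "aabbccddeeff00112233445566778899");
    }

On the command line the same check is done with --hostpubmd5 followed by
the digest.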
curl_easy_setopt() that alters how libcurl functions when following
redirects. It makes libcurl obey RFC 2616 when a 301 response is received
after a non-GET request is made. Default libcurl behaviour is to change
method to GET in the subsequent request (like it does for response code 302
- because that's what many/most browsers do), but with this CURLOPT_POST301
option enabled it will do what the spec says and do the next request using
the same method again, i.e. keep POST after 301.
The curl tool got this option as --post301.
Test case 1011 and 1012 were added to verify.
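
A sketch of enabling the new option together with redirect following:

    #include <curl/curl.h>

    static void post_and_follow(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
      /* obey RFC 2616: keep the POST method when a 301 comes back */
      curl_easy_setopt(curl, CURLOPT_POST301, 1L);
    }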
and allow reuse by multiple protocols. Several unused error codes were
removed. In all cases, macros were added to preserve source (and binary)
compatibility with the old names. These macros are subject to removal at
a future date, but probably not before 2009. An application can be
tested to see if it is using any obsolete code by compiling it with the
CURL_NO_OLDIES macro defined.
Documented some newer error codes in libcurl-error(3)
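
For example, defining CURL_NO_OLDIES before including the header makes the
compatibility macros vanish, so any remaining use of an obsolete name turns
into a compile error:

    /* fail the build on obsolete libcurl names */
    #define CURL_NO_OLDIES
    #include <curl/curl.h>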
the configure script checks for openldap and friends and we link with those
libs just like we link all other third party libraries, and we no longer
dlopen() those libraries. Our private header file lib/ldap.h was renamed to
lib/curl_ldap.h due to this. I set a tag in CVS (curl-7_17_0-preldapfix)
just before this commit, just in case.
(http://curl.haxx.se/bug/view.cgi?id=1766320) pointing out that the libcurl
code accessed two curl_easy_setopt() options (CURLOPT_DNS_CACHE_TIMEOUT and
CURLOPT_DNS_USE_GLOBAL_CACHE) as ints even though they're documented to be
passed in as longs, and that makes a difference on 64 bit architectures.
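
The safe pattern is to always pass an explicit long (note the L suffixes);
the timeout value below is arbitrary:

    #include <curl/curl.h>

    static void set_dns_cache_options(CURL *curl)
    {
      /* curl_easy_setopt() reads these as longs; passing a plain int
         can misbehave where long is wider than int (64-bit systems) */
      curl_easy_setopt(curl, CURLOPT_DNS_CACHE_TIMEOUT, 120L);
      curl_easy_setopt(curl, CURLOPT_DNS_USE_GLOBAL_CACHE, 0L);
    }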
after 7.16.2. This is largely due to the different treatment file:// gets
internally, but now I added test 231 to make it less likely to happen again
without us noticing!
passed to it with curl_easy_setopt()! Previously it has always just referred
to the data, forcing the user to keep the data around until libcurl is done
with it. That is now history and libcurl will instead clone the given
strings and keep private copies.
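
In practice this means a temporary buffer is now fine; a sketch, with a
placeholder URL:

    #include <curl/curl.h>

    static void set_url(CURL *curl)
    {
      char url[] = "http://example.com/";
      curl_easy_setopt(curl, CURLOPT_URL, url);
      /* 'url' may now go out of scope or be overwritten: libcurl has
         cloned the string and keeps its own private copy */
    }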
and CURLOPT_NEW_DIRECTORY_PERMS. These control the permissions for files
and directories created on the remote server. CURLOPT_NEW_FILE_PERMS
defaults to 0644 and CURLOPT_NEW_DIRECTORY_PERMS defaults to 0755.
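
Setting them looks like this (octal long values, shown with the documented
defaults):

    #include <curl/curl.h>

    static void set_new_perms(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_NEW_FILE_PERMS, 0644L);
      curl_easy_setopt(curl, CURLOPT_NEW_DIRECTORY_PERMS, 0755L);
    }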
the CURLOPT_RESUME_FROM or CURLOPT_RANGE options and an existing connection
in the connection cache is closed to make room for the new one when you call
curl_easy_perform(). It would then wrongly free range-related data in the
connection close function.
server through a proxy and have the remote https server port set using the
CURLOPT_PORT option, the protocol gets reset from https to http after the
first request.
The user-defined URL was modified internally by libcurl and subsequent reuse
of the easy handle may lead to a connection using a different protocol (if not
originally http).
I found that libcurl hardcoded the protocol to "http" when it tries to
regenerate the URL if CURLOPT_PORT is set. I tried to fix the problem as
follows and it's working fine so far.
fixing some bugs:
o Don't mix GET and POST requests in a pipeline
o Fix the order in which requests are dispatched from the pipeline
o Fixed several curl bugs with pipelining when the server is returning
chunked encoding:
* Added states to chunked parsing for final CRLF
* Rewind buffer after parsing chunk with data remaining
* Moved chunked header initializing to a spot just before receiving
headers
the multi interface and connection re-use that could make a
curl_multi_remove_handle() ruin a pointer in another handle.
The second problem was less of an actual problem and more of a minor quirk:
the re-using of connections wasn't properly checking if the connection was
marked for closure.
to the debug callback.
- Shmulik Regev added CURLOPT_HTTP_CONTENT_DECODING and
CURLOPT_HTTP_TRANSFER_DECODING that if set to zero will disable libcurl's
internal decoding of content or transfer encoded content. This may be
preferable in cases where you use libcurl for proxy purposes or similar. The
command line tool got a --raw option to disable both at once.
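
A sketch of switching both decodings off from an application:

    #include <curl/curl.h>

    static void disable_decoding(CURL *curl)
    {
      /* zero disables; the body is then delivered exactly as received */
      curl_easy_setopt(curl, CURLOPT_HTTP_CONTENT_DECODING, 0L);
      curl_easy_setopt(curl, CURLOPT_HTTP_TRANSFER_DECODING, 0L);
    }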
and CURLOPT_CONNECTTIMEOUT_MS that, as their names should hint, do the
timeouts with millisecond resolution instead. The only restriction to that
is the alarm() (sometimes) used to abort name resolves as that uses full
seconds. I fixed the FTP response timeout part of the patch.
Internally we now count and keep the timeouts in milliseconds, but it also
means we multiply set timeouts by 1000. The effect of this is that no
timeout can be set to more than 2^31 milliseconds (on 32 bit systems), which
equals 24.86 days. We probably couldn't set longer timeouts before either,
since the code already did *1000 on the timeout values in several places.
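
Usage sketch with arbitrary example values:

    #include <curl/curl.h>

    static void set_ms_timeouts(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT_MS, 1500L);
      curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 2500L);
      /* keep values below 2^31 ms (~24.86 days) on 32-bit systems */
    }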
doing an FTP transfer is removed from a multi handle before completion. The
fix also fixed the "alive counter" to be correct on "premature removal" for
all protocols.
non-ASCII platforms. It does add some complexity, most notably with more
#ifdefs, but I want to see this support added and I can't see how we can
add it without the extra stuff added.
curl that uses the new CURLOPT_FTP_SSL_CCC option in libcurl. If enabled, it
will make libcurl shut down the SSL/TLS layer after the authentication is
done on an FTP-SSL operation.
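
A sketch of enabling it from libcurl:

    #include <curl/curl.h>

    static void enable_ccc(CURL *curl)
    {
      /* PASSIVE waits for the server to start the TLS shutdown,
         ACTIVE makes libcurl initiate it itself */
      curl_easy_setopt(curl, CURLOPT_FTP_SSL_CCC,
                       (long)CURLFTPSSL_CCC_PASSIVE);
    }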
get confused and not acknowledge the 'no_proxy' variable properly once it
had used the proxy and you re-used the same easy handle. I made sure the
proxy name is properly stored in the connect struct rather than the
sessionhandle/easy struct.
(http://curl.haxx.se/bug/view.cgi?id=1618359) and subsequently provided a
patch for it: when downloading 2 zero byte files in a row, curl 7.16.0
enters an infinite loop, while curl 7.16.1-20061218 does one additional
unnecessary request.
Fix: During the "Major overhaul introducing http pipelining support and
shared connection cache within the multi handle." change, headerbytecount
was moved to live in the Curl_transfer_keeper structure. But that structure
is reset in the Transfer method, losing the information that we had about
the header size. This patch moves it back to the connectdata struct.
something went wrong like it got a bad response code back from the server,
libcurl would leak memory. Added test case 538 to verify the fix.
I also noted that the connection would get cached in that case, which
doesn't make sense since it cannot be re-used when the authentication has
failed. I fixed that issue too at the same time, and also that the path
would be "remembered" in vain for cases where the connection was about to
get closed.
(http://curl.haxx.se/bug/view.cgi?id=1604956) which identified that setting
CURLOPT_MAXCONNECTS to zero caused libcurl to SIGSEGV. Starting now, libcurl
will always internally use no less than 1 entry in the connection cache.
KNOWN_BUGS #25, which happens when a proxy closes the connection when
libcurl has sent CONNECT, as part of an authentication negotiation. Starting
now, libcurl will re-connect accordingly and continue the authentication as
it should.