than what's absolutely necessary:
curl will do its best to use what you pass to it as a URL. It is not trying to
validate it as a syntactically correct URL by any means but is instead
VERY liberal with what it accepts.
something beyond ascii but currently libcurl will only pass in the verbatim
string the app provides. There are several browsers that already do this
encoding. The key seems to be the updated draft to RFC2231:
http://tools.ietf.org/html/draft-reschke-rfc2231-in-http-02
With the curl memory tracking feature decoupled from the debug build feature,
CURLDEBUG and DEBUGBUILD preprocessor symbol definitions are used as follows:
CURLDEBUG used for curl debug memory tracking specific code (--enable-curldebug)
DEBUGBUILD used for debug enabled specific code (--enable-debug)
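A minimal sketch of how the two symbols are meant to guard code in the
sources (the comments are only illustrative):

  #ifdef CURLDEBUG
    /* memory tracking specific code only, i.e. what the
       --enable-curldebug configure option switches on */
  #endif

  #ifdef DEBUGBUILD
    /* code for debug-enabled builds only, i.e. what the
       --enable-debug configure option switches on */
  #endif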
not in the mood to fight this now.
65. When doing FTP over a socks proxy or CONNECT through HTTP proxy and the
multi interface is used, libcurl will fail if the (passive) TCP connection
for the data transfer isn't more or less instant as the code does not
properly wait for the connect to be confirmed. See test case 564 for a first
shot at a test case.
If the CURLOPT_PORT option is used on an FTP URL like
"ftp://example.com/file;type=A" the ";type=A" is stripped off.
I added test case 562 to verify, only to find out that I couldn't repeat
this bug so I hereby consider it not a bug anymore!
Chen pointed out how curl couldn't upload with resume when reading from a
pipe.
This ended up with the introduction of a new return code for the
CURLOPT_SEEKFUNCTION callback that basically says that the seek failed but
that libcurl may try to resolve the situation anyway. In our case this means
libcurl will instead attempt to read that much data from the stream rather
than seeking, and that way curl can now upload with resume when data is read
from a stream!
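Presumably the new return code is CURL_SEEKFUNC_CANTSEEK; a minimal sketch of
a seek callback for an application that reads the upload data from a pipe:

  #include <curl/curl.h>

  static int seek_cb(void *userp, curl_off_t offset, int origin)
  {
    (void)userp; (void)offset; (void)origin;
    /* we read from a pipe and simply cannot seek; tell libcurl so and
       let it try to resolve the situation by reading data instead */
    return CURL_SEEKFUNC_CANTSEEK;
  }

  /* hooked up on the easy handle like this: */
  /* curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb); */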
Koenig pointed out that the man page didn't mention that the *_proxy
environment variables can be specified in lower case or UPPER CASE and that
the lower case version takes precedence.
for any further requests or transfers. The work-around is then to close that
handle with curl_easy_cleanup() and create a new one. Some more details:
http://curl.haxx.se/mail/lib-2009-04/0300.html
and 1 on fatal errors. Previously it only mentioned non-zero on fatal
errors. This is a slight change in meaning, but it follows what we've done
elsewhere before and it opens up for LOTS more useful return codes
whenever we can think of them...
(http://curl.haxx.se/docs/adv_20090303.html also known as CVE-2009-0037) in
which previous libcurl versions (by design) can be tricked to access an
arbitrary local/different file instead of a remote one when
CURLOPT_FOLLOWLOCATION is enabled. This flaw is now fixed in this release
together with the addition of two new setopt options for controlling this
new behavior:
o CURLOPT_REDIR_PROTOCOLS controls what protocols libcurl is allowed to
follow to when CURLOPT_FOLLOWLOCATION is enabled. By default, this option
excludes the FILE and SCP protocols and thus you need to explicitly allow
them in your app if you really want that behavior.
o CURLOPT_PROTOCOLS controls what protocol(s) libcurl is allowed to fetch
using the primary URL option. This is useful if you want to allow a user or
other outsiders to control what URL to pass to libcurl and yet not allow all
protocols libcurl may have been built to support.
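A rough sketch of an application that accepts URLs from outsiders and wants
to limit both the primary fetch and any redirects to HTTP(S):

  #include <curl/curl.h>

  /* only HTTP and HTTPS for the URL given by the user... */
  curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                   (long)(CURLPROTO_HTTP | CURLPROTO_HTTPS));

  /* ...and the same set for followed redirects, so a Location: header
     can never send us off to file:// or scp:// */
  curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                   (long)(CURLPROTO_HTTP | CURLPROTO_HTTPS));

  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);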
CURLINFO_CONTENT_LENGTH_DOWNLOAD and CURLINFO_CONTENT_LENGTH_UPLOAD return
-1 if the sizes aren't known. Previously these returned 0, making it
impossible to detect the difference between actually zero and unknown.
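A small sketch of checking for the new 'unknown' value after a completed
transfer:

  #include <stdio.h>
  #include <curl/curl.h>

  double clen = 0.0;
  if(curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD,
                       &clen) == CURLE_OK) {
    if(clen == -1)
      printf("download size unknown\n"); /* previously reported as 0 */
    else
      printf("download size: %.0f bytes\n", clen);
  }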
plain FTP connections, and it will then allow MKD to fail once and retry the
CWD afterwards. This is especially useful if you're doing many simultaneous
connections against the same server and they all have this option enabled,
as then CWD may first fail but then another connection does MKD before this
connection and thus MKD fails but trying CWD works! The numbers can
(should?) now be set with the convenience enums called
CURLFTP_CREATE_DIR and CURLFTP_CREATE_DIR_RETRY.
Tests have shown that if you're making an application that uploads a set of
files to an FTP server, you will get a noticeable gain in speed if you're
using multiple connections, and this option will then be very useful.
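A minimal sketch of switching on the retry behavior for an FTP upload (the
URL is just an example):

  #include <curl/curl.h>

  curl_easy_setopt(curl, CURLOPT_URL,
                   "ftp://ftp.example.com/new/dirs/file.txt");
  curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);

  /* create missing directories, and if MKD fails once (perhaps because a
     parallel connection created the directory first) retry the CWD */
  curl_easy_setopt(curl, CURLOPT_FTP_CREATE_MISSING_DIRS,
                   (long)CURLFTP_CREATE_DIR_RETRY);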
the condition in the previous request was unmet. This is typically a time
condition set with CURLOPT_TIMECONDITION and was previously not possible to
reliably figure out. From bug report #2565128
(http://curl.haxx.se/bug/view.cgi?id=2565128)
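If I read this entry right, the getinfo value in question is
CURLINFO_CONDITION_UNMET; a rough sketch of using it together with
CURLOPT_TIMECONDITION (the 'timestamp' variable is just a placeholder for a
time_t the application already has):

  #include <stdio.h>
  #include <curl/curl.h>

  /* only fetch the document if it changed after the given time stamp */
  curl_easy_setopt(curl, CURLOPT_TIMECONDITION,
                   (long)CURL_TIMECOND_IFMODSINCE);
  curl_easy_setopt(curl, CURLOPT_TIMEVALUE, (long)timestamp);

  if(curl_easy_perform(curl) == CURLE_OK) {
    long unmet = 0;
    /* gets set to 1 if the server said the condition was not met */
    curl_easy_getinfo(curl, CURLINFO_CONDITION_UNMET, &unmet);
    if(unmet)
      printf("not modified since the given time, nothing fetched\n");
  }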
getaddrinfo() sorts the response list
This isn't a libcurl bug since this is how getaddrinfo() is *supposed* to work!
Apparently you deal with this using the /etc/gai.conf file.
version 1.1 instead of 1.0 like before. This change also introduces the new
proxy type for libcurl called 'CURLPROXY_HTTP_1_0' that then allows apps to
switch (back) to CONNECT 1.0 requests. The curl tool also got a --proxy1.0
option that works exactly like --proxy but sets CURLPROXY_HTTP_1_0.
I updated all test cases that use CONNECT and I tried to do some using
--proxy1.0 and some updated to do CONNECT 1.1 to get both versions run.
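From the library side, switching back looks roughly like this (the proxy
address is of course just a placeholder):

  #include <curl/curl.h>

  curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:8080");
  /* issue "CONNECT host:port HTTP/1.0" instead of the new 1.1 default */
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_HTTP_1_0);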
CURLOPT_SOCKS5_GSSAPI_SERVICE and CURLOPT_SOCKS5_GSSAPI_NEC to allow libcurl
to do GSS-style authentication with SOCKS5 proxies. The curl tool got the
options called --socks5-gssapi-service and --socks5-gssapi-nec to enable
these.
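A rough sketch of using them from an application (the proxy address and
service name are placeholders; the NEC flag is only needed for proxies with
that particular protocol deviation):

  #include <curl/curl.h>

  curl_easy_setopt(curl, CURLOPT_PROXY, "socksproxy.example.com:1080");
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS5);

  /* GSS-API service name to authenticate against, if not the default */
  curl_easy_setopt(curl, CURLOPT_SOCKS5_GSSAPI_SERVICE, "rcmd");
  /* 1 allows the unprotected protection-mode exchange that NEC's SOCKS
     implementation uses */
  curl_easy_setopt(curl, CURLOPT_SOCKS5_GSSAPI_NEC, 1L);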
They basically offer the same thing that only the NO_PROXY environment
variable offered previously: list a set of host names that shall not use the proxy
even if one is specified.
there are servers "out there" that rely on the client doing this broken
Digest authentication. Apache even comes with an option to work with such
broken clients.
The difference is only for URLs that contain a query-part (a '?'-letter and
text to the right of it).
libcurl now supports this quirk, and you enable it by setting the
CURLAUTH_DIGEST_IE bit in the bitmask you pass to the CURLOPT_HTTPAUTH or
CURLOPT_PROXYAUTH options. It can thus be controlled individually for server
and proxy.
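A small sketch of enabling the IE-flavored Digest for the server (the
credentials are placeholders):

  #include <curl/curl.h>

  /* compute the Digest response the way old IE versions did it */
  curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST_IE);
  curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");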
make CURLOPT_PROXYUSERPWD sort of deprecated. The primary motive for adding
these new options is that they have no problems with the colon separator
that the CURLOPT_PROXYUSERPWD option does.
curl_easy_setopt: CURLOPT_USERNAME and CURLOPT_PASSWORD that sort of
deprecates the good old CURLOPT_USERPWD since they allow applications to set
the user name and password independently and perhaps more importantly allow
both to contain colon(s) which CURLOPT_USERPWD doesn't fully support.
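A minimal sketch of the point of the split: credentials that themselves
contain colons, which CURLOPT_USERPWD could not express cleanly:

  #include <curl/curl.h>

  /* both strings may contain ':' without any escaping tricks */
  curl_easy_setopt(curl, CURLOPT_USERNAME, "user:name");
  curl_easy_setopt(curl, CURLOPT_PASSWORD, "pass:word");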
CURLOPT_POST301 (but adds a define for backwards compatibility for you who
don't define CURL_NO_OLDIES). This option allows you to now also change the
libcurl behavior for an HTTP 302 response after a POST to not use GET in the
subsequent request (when CURLOPT_FOLLOWLOCATION is enabled). I edited the
patch somewhat before commit. The curl tool got a matching --post302
option. Test case 1076 was added to verify this.
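The superseding option is, if I remember the naming right, CURLOPT_POSTREDIR,
which takes a bitmask; a rough sketch of keeping POST across both 301 and 302
responses:

  #include <curl/curl.h>

  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  /* keep the POST method (do not switch to GET) when the server answers
     the POST with a 301 or 302 redirect */
  curl_easy_setopt(curl, CURLOPT_POSTREDIR,
                   (long)(CURL_REDIR_POST_301 | CURL_REDIR_POST_302));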
enabling this feature with CURLOPT_CERTINFO for a request using SSL (HTTPS
or FTPS), libcurl will gather lots of server certificate info and that info
can then get extracted by a client after the request has completed with
curl_easy_getinfo()'s CURLINFO_CERTINFO option. Linus Nielsen Feltzing
helped me test and smooth out this feature.
Unfortunately, this feature currently only works with libcurl built to use
OpenSSL.
This feature was sponsored by networking4all.com - thanks!
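A sketch of pulling out the gathered certificate data after the transfer
(OpenSSL-built libcurl assumed, as noted above; the URL is just an example):

  #include <stdio.h>
  #include <curl/curl.h>

  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  curl_easy_setopt(curl, CURLOPT_CERTINFO, 1L);

  if(curl_easy_perform(curl) == CURLE_OK) {
    struct curl_certinfo *ci = NULL;
    if(curl_easy_getinfo(curl, CURLINFO_CERTINFO, &ci) == CURLE_OK && ci) {
      int i;
      printf("%d certificates in chain\n", ci->num_of_certs);
      for(i = 0; i < ci->num_of_certs; i++) {
        struct curl_slist *s;
        for(s = ci->certinfo[i]; s; s = s->next)
          printf("  %s\n", s->data);  /* "Subject: ...", "Issuer: ..." */
      }
    }
  }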
Server with the correct content-length. When sending a file with 511 or
fewer bytes, a content-length of 512 is used. When sending a file with 513 -
1023 bytes, a content-length of 1024 is used. Files with a length that is a
multiple of 512 bytes show the correct content-length. Only such files work
for upload.
http://curl.haxx.se/bug/view.cgi?id=2057858
incorrectly--the host name is treated as part of the user name and the
port number becomes the password. This can be observed in test 279
(was KNOWN_ISSUE #54).
parser to allow numerical IPv6-addresses to be specified with the scope
given, as per RFC4007 - with a percent letter that itself needs to be URL
escaped. For example, for an address of fe80::1234%1 the HTTP URL is:
"http://[fe80::1234%251]/"
server using the multi interface, the commands are not being sent correctly
and instead the connection is "cancelled" (the operation is considered done)
prematurely. There is a half-baked (busy-looping) patch provided in the bug
report but it cannot be accepted as-is. See
http://curl.haxx.se/bug/view.cgi?id=2006544
non-zero with the fixed value of 1. We should strive to explicitly document
options as taking '1' to enable them, as that will then allow us to extend
them in the future without breaking older programs.
curl-library list on July 9th 2008 by Mathew Hounsell)
NOTE: the name resolve functions of various libc implementations don't re-read
name server information unless explicitly told to (by for example calling
res_init(3)). This may cause libcurl to keep using the older server even
if DHCP has updated the server info, and this may look like a DNS cache issue
to the casual libcurl-app user.
CURLINFO_APPCONNECT_TIME. This is set when the "application layer"
handshake/connection is completed (typically SSL, TLS or SSH). By using this
you can figure out the application layer's own connect time. You can extract
the time stamp using curl's -w option and the new variable named
'time_appconnect'. This feature was sponsored by Lenny Rachitsky at NeuStar.
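From the library side the value is read like any other time stamp; a minimal
sketch:

  #include <stdio.h>
  #include <curl/curl.h>

  double appconnect = 0.0;
  if(curl_easy_getinfo(curl, CURLINFO_APPCONNECT_TIME,
                       &appconnect) == CURLE_OK)
    /* seconds from transfer start until the SSL/SSH handshake completed */
    printf("app connect time: %.3f s\n", appconnect);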
All boolean options (such as -O, -I, -v etc), both short and long versions,
now always switch on/enable the option named. Using the same option multiple
times thus makes no difference. To switch off one of those options, you need
to use the long version of the option and type --no-OPTION. For example, to
disable verbose mode you use --no-verbose!
- Added --remote-name-all to curl, which if used changes the default for all
given URLs to be dealt with as if -O is used. So if you want to disable that
for a specific URL after --remote-name-all has been used, you must use -o -
or --no-remote-name.
curl_easy_getinfo. It returns a pointer to a string with the most recently
used IP address. Modified test case 500 to also verify this feature. The
implementation of this feature was sponsored by Lenny Rachitsky at NeuStar.
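Presumably this refers to CURLINFO_PRIMARY_IP; a minimal sketch of reading it
after a completed transfer:

  #include <stdio.h>
  #include <curl/curl.h>

  char *ip = NULL;
  if(curl_easy_getinfo(curl, CURLINFO_PRIMARY_IP, &ip) == CURLE_OK && ip)
    /* the string is owned by the handle and must not be freed */
    printf("most recently used IP: %s\n", ip);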
due to KfW's library header files exporting symbols/macros that should be
kept private to the KfW library. See ticket #5601 at http://krbdev.mit.edu/rt/
and receive data over a connection previously set up with curl_easy_perform()
and its CURLOPT_CONNECT_ONLY option. The sendrecv.c example was added to
show how they can be used.
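The functions referred to are, as far as I can tell, curl_easy_send() and
curl_easy_recv(); a compressed sketch (the real sendrecv.c also waits on the
socket and checks every return code):

  #include <string.h>
  #include <curl/curl.h>

  size_t n = 0;
  char buf[1024];

  /* connect only; curl_easy_perform() does no data transfer here */
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L);
  curl_easy_perform(curl);

  /* now the application talks over the connection itself */
  curl_easy_send(curl, "GET / HTTP/1.0\r\n\r\n",
                 strlen("GET / HTTP/1.0\r\n\r\n"), &n);
  curl_easy_recv(curl, buf, sizeof(buf), &n);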
redirections and thus cannot use CURLOPT_FOLLOWLOCATION easily, we now
introduce the new CURLINFO_REDIRECT_URL option that lets applications
extract the URL libcurl would've redirected to if it had been told to. This
then enables the application to continue to that URL as it thinks is
suitable, without having to re-implement the magic of creating the new URL
from the Location: header etc. Test 1029 verifies it.
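A small sketch, with CURLOPT_FOLLOWLOCATION deliberately left disabled:

  #include <stdio.h>
  #include <curl/curl.h>

  if(curl_easy_perform(curl) == CURLE_OK) {
    char *redir = NULL;
    curl_easy_getinfo(curl, CURLINFO_REDIRECT_URL, &redir);
    if(redir)
      /* decide for ourselves whether and how to follow this URL */
      printf("would have redirected to: %s\n", redir);
  }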
application to provide data for a multipart with the read callback. Note
that the size needs to be provided with CURLFORM_CONTENTSLENGTH when the
stream option is used. This feature is verified by the new test case
554. This feature was sponsored by Xponaut.
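The form option in question should be CURLFORM_STREAM; a rough sketch where
my_read_cb, my_stream and stream_size are placeholders the application
provides:

  #include <curl/curl.h>

  struct curl_httppost *post = NULL, *last = NULL;

  /* the pointer given to CURLFORM_STREAM is handed to the read callback;
     the total size must be given up front with CURLFORM_CONTENTSLENGTH */
  curl_formadd(&post, &last,
               CURLFORM_COPYNAME, "file",
               CURLFORM_FILENAME, "upload.bin",
               CURLFORM_STREAM, my_stream,
               CURLFORM_CONTENTSLENGTH, (long)stream_size,
               CURLFORM_END);

  curl_easy_setopt(curl, CURLOPT_READFUNCTION, my_read_cb);
  curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);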
get a fresh one downloaded and created with 'make ca-bundle' or you can get
one from here => http://curl.haxx.se/docs/caextract.html if you want a fresh
new one extracted from Mozilla's recent list of ca certs.
The configure option --with-ca-bundle now lets you specify what file to use
as default ca bundle for your build. If not specified, the configure script
will check a few known standard places for a global ca cert to use.
51. Kevin Reed's reported problem with a proxy when doing CONNECT when the
proxy wants NTLM and closes the connection after the initial CONNECT response:
http://curl.haxx.se/bug/view.cgi?id=1879375
silly code left from when we switched to let the multi handle "hold" the dns
cache when using the multi interface... Of course this only triggered when a
certain function call returned error at the correct moment.
--keepalive-time to curl to set the keepalive probe interval. I also took
the opportunity to rename the recently added no-keep-alive option to
no-keepalive to keep a consistent naming and to avoid getting two dashes in
these option names. Eric also provided an update to the man page for the new
option.
libcurl to seek in a given input stream. This is particularly important when
doing upload resumes when there's already a huge part of the file present
remotely. Before, and still if this callback isn't used, libcurl will read
and throw away the entire file up to the point where the resuming
begins (which of course can be a slow operation depending on file size,
I/O bandwidth and more). This new callback will also be preferred over
the CURLOPT_IOCTLFUNCTION for seeking back in a stream when
doing multi-stage HTTP auth with POST/PUT.
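A minimal sketch of such a seek callback over a plain FILE stream (the FILE
pointer is assumed to be set with CURLOPT_SEEKDATA):

  #include <stdio.h>
  #include <curl/curl.h>

  static int seek_file_cb(void *userp, curl_off_t offset, int origin)
  {
    FILE *f = (FILE *)userp;
    /* 0 means the seek succeeded, non-zero that it failed; note that
       curl_off_t can be wider than long on 32-bit systems */
    return fseek(f, (long)offset, origin) ? 1 : 0;
  }

  /* curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_file_cb); */
  /* curl_easy_setopt(curl, CURLOPT_SEEKDATA, file_pointer);     */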
code to instead introduce support for a new proxy type called
CURLPROXY_SOCKS5_HOSTNAME that is used to send the host name to the proxy
instead of IP address and there's thus no longer any need for a new
curl_easy_setopt() option.
The default SOCKS5 proxy is again back to sending the IP address to the
proxy. The new curl command line option for enabling sending host name to a
SOCKS5 proxy is now --socks5-hostname.
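A small sketch of selecting the new proxy type from the library side (the
proxy address is a placeholder):

  #include <curl/curl.h>

  curl_easy_setopt(curl, CURLOPT_PROXY, "socksproxy.example.com:1080");
  /* send the host name to the SOCKS5 proxy and let it resolve, instead
     of resolving locally and passing the IP address */
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE,
                   (long)CURLPROXY_SOCKS5_HOSTNAME);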
proxy do the host name resolving and only if --socks5ip (or
CURLOPT_SOCKS5_RESOLVE_LOCAL) is used we resolve the host name locally and
pass on the IP address only to the proxy.
is an unofficial SOCKS4 variant that sends the hostname to the proxy instead
of the resolved address (which is already supported by SOCKS5). --socks4a is
the curl command line option for it and CURLOPT_PROXYTYPE can now be set to
CURLPROXY_SOCKS4A as well.
is no current timeout. It does not mean wait forever and it does not mean
do not wait at all. It means there is no timeout value known at this point in
time.
the appending of the "type=" thing on FTP URLs when they are passed to an
HTTP proxy. Some proxies just don't like that appending (which is done
unconditionally in 7.17.1), and some proxies treat binary/ascii transfers
better with the appending done!