strlen() returns size_t, but the ssh libraries expect 'unsigned int'. Add
explicit casts and use _ex versions of the ssh library calls.
Signed-off-by: Ben Greear <greearb@candelatech.com>
If you pass a URL to pop3 that does not contain a message ID as
part of the URL, it will currently ask for 'INBOX' which just
causes the pop3 server to return an error.
The change makes libcurl treat an empty message ID as a request
for LIST (list of pop3 message IDs). User's code could then
parse this and download individual messages as desired.
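A rough sketch of what this looks like from an application, using a
hypothetical host and credentials; since the path carries no message ID,
libcurl now asks for LIST and the message-ID list arrives through the
regular write callback (stdout by default):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* hypothetical server and credentials; no message ID in the path,
         so the response body is the LIST of message IDs to parse */
      curl_easy_setopt(curl, CURLOPT_URL,
                       "pop3://user:secret@pop.example.com/");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }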
Ben Greear brought a patch that from now on allows all protocols
to specify name and user within the URL, in the same manner HTTP
and FTP have been allowed to in the past - although far from all
of the libcurl supported protocols actually have that feature in
their URL definition spec.
Bob Richmond: There's an annoying situation where libcurl will
read new HTTP response data from a socket, then check if it's a
timeout if one is set. If the last packet received constitutes
the end of the response body, libcurl still treats it as a
timeout condition and reports a message like:
"Operation timed out after 3000 milliseconds with 876 out of 876
bytes received"
It should only be a timeout if the timer lapsed and we DIDN'T
receive the end of the response body yet.
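For illustration, a minimal sketch with a hypothetical URL; with the fix,
a transfer whose body completed before the timer lapsed no longer comes
back as CURLE_OPERATION_TIMEDOUT:

  #include <curl/curl.h>
  #include <stdio.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      CURLcode res;
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* a 3000 ms overall timeout, as in the reported message */
      curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 3000L);
      res = curl_easy_perform(curl);
      if(res == CURLE_OPERATION_TIMEDOUT)
        fprintf(stderr, "timed out: %s\n", curl_easy_strerror(res));
      curl_easy_cleanup(curl);
    }
    return 0;
  }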
This commit fixes the cmake build of curl, and cleans up the
cmake code a little. It removes some commented out code and
some trailing whitespace. To get curl to build, the binary
tree's include/curl directory needed to be added to the include
path. Also, SIZEOF_SHORT needed to be added. A check for the
lack of defines of SIZEOF_* for warnless.c was added.
Kenny To filed the bug report #2963679 with patch to fix a
problem he experienced with doing multi interface HTTP POST over
a proxy using PROXYTUNNEL. He found a case where it would connect
fine but bits.tcpconnect was not set correctly so libcurl didn't
work properly.
(http://curl.haxx.se/bug/view.cgi?id=2963679)
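A minimal sketch of the affected setup, with hypothetical proxy and target
URLs: an HTTP POST driven by the multi interface through a CONNECT tunnel
requested with CURLOPT_HTTPPROXYTUNNEL:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *easy = curl_easy_init();
    CURLM *multi = curl_multi_init();
    int still_running = 0;

    curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/upload");
    curl_easy_setopt(easy, CURLOPT_PROXY, "http://proxy.example.com:3128");
    curl_easy_setopt(easy, CURLOPT_HTTPPROXYTUNNEL, 1L); /* CONNECT tunnel */
    curl_easy_setopt(easy, CURLOPT_POSTFIELDS, "name=value");

    curl_multi_add_handle(multi, easy);
    do {
      while(curl_multi_perform(multi, &still_running) ==
            CURLM_CALL_MULTI_PERFORM)
        ;
      /* a real application would wait with curl_multi_fdset()/select() here */
    } while(still_running);

    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    return 0;
  }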
Akos Pasztory filed Debian bug report #572276
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=572276)
mentioning a problem with a resource that returns chunked-encoded
_and_ with a Content-Length header, and libcurl failed to properly ignore
the latter information.
Hauke Duden provided an example program that made the multi
interface crash. His example simply used the multi interface and
first did one FTP transfer and, after completion, used a second
easy handle to do another FTP transfer against the same FTP server.
This triggered a bug in the "delayed easy handle kill" system
that curl uses: when an FTP connection is left alive it must keep
an easy handle around internally - only for the purpose of having
an easy handle when it later disconnects it. The code assumed
that when the easy handle was removed and an internal reference
was made, that version could be killed later on when a new easy
handle came using the same connection. This was wrong as Hauke's
example showed that the removed handle wasn't killed for real
until later. This caused a double close attempt => segfault.
Looking at the code of Curl_resolv_timeout() in hostip.c, I think
that in case of a timeout, the signal handler for SIGALRM never
gets removed. I think that in my case it gets executed at some
point later on when execution has long left Curl_resolv_timeout()
or even the cURL library.
The code that is jumped to with siglongjmp() simply sets the
error message to "name lookup timed out" and then returns with
CURLRESOLV_ERROR. I guess that instead of simply returning
without cleaning up, the code should have a goto that jumps to
the spot right after the call to Curl_resolv().
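A generic sketch of that suggestion (not curl's actual code): the timeout
path should also cancel the pending alarm and restore the previous SIGALRM
handler instead of returning right away:

  #include <setjmp.h>
  #include <signal.h>
  #include <unistd.h>

  static sigjmp_buf jmpenv;

  static void alarm_handler(int sig)
  {
    (void)sig;
    siglongjmp(jmpenv, 1);
  }

  int lookup_with_timeout(const char *host, unsigned int seconds)
  {
    void (*old_handler)(int);
    int rc = 0;

    old_handler = signal(SIGALRM, alarm_handler);
    if(sigsetjmp(jmpenv, 1)) {
      rc = -1;              /* "name lookup timed out" */
      goto cleanup;         /* do NOT just return here */
    }
    alarm(seconds);
    /* ... the blocking name resolve of 'host' would happen here ... */
    (void)host;

  cleanup:
    alarm(0);                     /* cancel any pending alarm */
    signal(SIGALRM, old_handler); /* restore the previous handler */
    return rc;
  }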
Error codes were not properly returned to the main curl code (and on to apps
using libcurl).
tftp was crapping out when tsize == 0 on upload, but I see no reason to fail
to upload just because the remote file is zero-length. Ignore tsize option on
upload.
The problem mentioned on Dec 10 2009
(http://curl.haxx.se/bug/view.cgi?id=2905220) was only partially fixed.
Partially because an easy handle can be associated with many connections in
the cache (e.g. if there is a redirect during the lifetime of the easy
handle). The previous patch only cleaned up the first one. The new fix now
removes the easy handle from all connections, not just the first one.
makes sure that when using sub-second timeouts, there's no final bad 1000ms
wait. Previously, a sub-second timeout would often make the elapsed time end
up being the timeout rounded up to the nearest full second (e.g. 1s for a
200ms timeout).
the global timeout if set. Also, as was reported in the bug report #2956437
by Ryan Chan, the time stamp to use as basis for the per command timeout was
not set properly in the DONE phase for FTP (and not for SMTP) so I fixed
that just now. This was a regression compared to 7.19.7 due to the
conversion of FTP code over to the generic pingpong concepts.
http://curl.haxx.se/bug/view.cgi?id=2956437
- SMTP falls back to RFC821 HELO when EHLO fails (and SSL is not required).
- Use of true local host name (i.e.: via gethostname()) when available, as default argument to SMTP HELO/EHLO.
- Test case 804 for HELO fallback.
properly in angle brackets. Recipients provided with CURLOPT_MAIL_RCPT now
get angle bracket wrapping automatically by libcurl unless the recipient
starts with an angle bracket as then the app is assumed to deal with that
properly on its own.
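A small sketch with hypothetical addresses to show the behavior: the bare
address gets wrapped in angle brackets by libcurl, while the one already
starting with '<' is passed through untouched:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    struct curl_slist *rcpt = NULL;

    rcpt = curl_slist_append(rcpt, "alice@example.com");   /* gets <> added */
    rcpt = curl_slist_append(rcpt, "<bob@example.com>");   /* left alone */

    curl_easy_setopt(curl, CURLOPT_URL, "smtp://mail.example.com");
    curl_easy_setopt(curl, CURLOPT_MAIL_FROM, "<sender@example.com>");
    curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, rcpt);
    /* a read callback would supply the message body before performing */

    curl_slist_free_all(rcpt);
    curl_easy_cleanup(curl);
    return 0;
  }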
full DATA has been sent, and I modified the test SMTP server to also send
that response. As usual, the DONE operation that is made after a completed
transfer is still not doable in a non-blocking way so this waiting for 250
is unfortunately done in a blocking manner.
in the same RCPT TO line, when they should be sent in separate single
commands. I updated test case 802 to verify this.
- I also fixed a bad use of my_setopt_str() of CURLOPT_MAIL_RCPT in the curl
tool, which made it try to output it as a string for the --libcurl feature
which could lead to crashes.
to automatically uncompress it with the CURLOPT_ENCODING option, libcurl
could wrongly provide the callback with more data than the documented
maximum amount. An application could thus get tricked into badness if the
maximum limit was trusted to be enforced by libcurl itself (as it is
documented).
This is further detailed and explained in the libcurl security advisory
20100209 at
http://curl.haxx.se/docs/adv_20100209.html
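For reference, a sketch (hypothetical URL) of an application that relies on
the documented per-call maximum, CURL_MAX_WRITE_SIZE, while asking for
automatic decompression:

  #include <curl/curl.h>
  #include <stdio.h>

  static size_t write_cb(void *ptr, size_t size, size_t nmemb, void *userdata)
  {
    size_t total = size * nmemb;
    (void)ptr;
    (void)userdata;
    /* documented to never exceed CURL_MAX_WRITE_SIZE per call */
    if(total > CURL_MAX_WRITE_SIZE)
      fprintf(stderr, "got %lu bytes, more than expected\n",
              (unsigned long)total);
    return total;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/compressed");
    curl_easy_setopt(curl, CURLOPT_ENCODING, "gzip");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return 0;
  }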
simply check for CURLM_CALL_MULTI_PERFORM internally. This has the added
benefit that this goes in line with my long-term wishes to get rid of the
CURLM_CALL_MULTI_PERFORM altogether from the public API.
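The application-side idiom that this internal handling makes unnecessary
looks roughly like this (hypothetical URL):

  #include <curl/curl.h>

  int main(void)
  {
    CURLM *multi = curl_multi_init();
    CURL *easy = curl_easy_init();
    int running = 0;
    CURLMcode mc;

    curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
    curl_multi_add_handle(multi, easy);

    do {
      /* looping on CURLM_CALL_MULTI_PERFORM is the part that can now be
         done internally instead of in every application */
      do {
        mc = curl_multi_perform(multi, &running);
      } while(mc == CURLM_CALL_MULTI_PERFORM);
      /* wait with curl_multi_fdset()/select() before the next round */
    } while(running);

    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    return 0;
  }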
from hostip.h to setup.h in order to allow proper inclusion in any file.
This represents no functional change at all in which resolver is used,
everything still works as usual, internally and externally there is no
difference in behavior.
HTTP Cookie: header _needs_ to be sorted on the path length in the cases
where two cookies using the same name are set more than once using
(overlapping) paths. Realizing this, identically named cookies must be
sorted correctly. But detecting only identically named cookies and take care
of them individually is harder than just to blindly and unconditionally sort
all cookies based on their path lengths. All major browsers also already do
this, so this makes our behavior one step closer to them in the cookie area.
Test case 8 was the only one that broke due to this change and I updated it
accordingly.
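A standalone sketch of the idea (not the actual cookie engine code): order
the cookies by path length, longest first, so the most specific one ends up
first in the Cookie: header:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  struct cookie {
    const char *name;
    const char *value;
    const char *path;
  };

  static int by_path_length(const void *a, const void *b)
  {
    const struct cookie *ca = a, *cb = b;
    return (int)strlen(cb->path) - (int)strlen(ca->path); /* longest first */
  }

  int main(void)
  {
    /* two cookies with the same name, set for overlapping paths */
    struct cookie jar[] = {
      { "id", "generic",  "/" },
      { "id", "specific", "/account/settings" },
    };
    qsort(jar, 2, sizeof(struct cookie), by_path_length);
    printf("Cookie: %s=%s; %s=%s\r\n",
           jar[0].name, jar[0].value, jar[1].name, jar[1].value);
    return 0;
  }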
again when downloading files over FTP using ASCII and it turns out that the
final size of the file is not the same as the initial size the server
reported. This is very common since servers don't take the newline
conversions into account.
much as possible in one go, as long as it doesn't block and hasn't reached the
end of the state machine.
This avoids spurious -1 returns from curl_multi_fdset() simply because
previously it would return from this function without anything in EWOULDBLOCK
and thus basically it wasn't actually waiting for anything!
state, we return CURLM_CALL_MULTI_PERFORM unconditionally then so that we
can act faster, as in the case where the protocol-specific connect doesn't
block on anything and we can just pursue the next action immediately. It also
then avoids a case where curl_multi_fdset() would return -1.
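For context, a sketch of the application-side wait this matters for,
assuming 'multi' was set up and handles were added elsewhere: when
curl_multi_fdset() hands back maxfd == -1 there is nothing to select() on
yet, so the application should just pause briefly and drive the handles
again:

  #include <curl/curl.h>
  #include <sys/select.h>

  void wait_for_activity(CURLM *multi)
  {
    fd_set rd, wr, ex;
    int maxfd = -1;
    struct timeval tv = { 0, 100000 }; /* 100 ms */

    FD_ZERO(&rd);
    FD_ZERO(&wr);
    FD_ZERO(&ex);
    curl_multi_fdset(multi, &rd, &wr, &ex, &maxfd);

    if(maxfd == -1) {
      select(0, NULL, NULL, NULL, &tv); /* nothing to wait on yet, just sleep */
      return;
    }
    select(maxfd + 1, &rd, &wr, &ex, &tv);
  }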
ossl_connect_step3() increments an SSL session handle reference counter on
each call. When sessions are re-used this reference counter may be
incremented many times, but it will be decremented only once when done (by
Curl_ossl_session_free()); and the internal OpenSSL data will not be freed
if this reference count remains positive. When a session is re-used the
reference counter should be corrected by explicitly calling
SSL_SESSION_free() after each consecutive SSL_get1_session() to avoid
introducing a memory leak.
(http://curl.haxx.se/bug/view.cgi?id=2926284)
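A simplified sketch of that correction, assuming 'ssl' is the connection's
handle and 'cached' points to the session kept in the cache; the helper
name is made up:

  #include <openssl/ssl.h>

  void cache_session(SSL *ssl, SSL_SESSION **cached)
  {
    /* SSL_get1_session() bumps the session's reference count every call */
    SSL_SESSION *sess = SSL_get1_session(ssl);

    if(sess == *cached) {
      /* re-used session already stored: drop the extra reference */
      SSL_SESSION_free(sess);
    }
    else {
      if(*cached)
        SSL_SESSION_free(*cached); /* release the previous session */
      *cached = sess;              /* keep the single new reference */
    }
  }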
command is a special "hack" used by the drftpd server, but even though it is
a custom extension I've deemed it fine to add to libcurl since this server
seems to survive and people keep using it and want libcurl to support
it. The new libcurl option is named CURLOPT_FTP_USE_PRET, and it is also
usable from the curl tool with --ftp-pret. Using this option on a server
that doesn't support this command will make libcurl fail.
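Enabling it from an application is a one-liner; a sketch with a
hypothetical drftpd host:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* send PRET before PASV/EPSV; servers without PRET support will
         make the transfer fail */
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://drftpd.example.com/file.iso");
      curl_easy_setopt(curl, CURLOPT_FTP_USE_PRET, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }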
sequences in uploaded data. The test server doesn't "decode" escaped dot-lines
but instead test cases must be written to take them into account. Added test
case 803 to verify dot-escaping.
detects and uses proxies based on the environment variables. If the proxy
was given as an explicit option it worked, but due to the setup order
mistake, proxies would not be used properly for a few protocols when picked up
from '[protocol]_proxy'. Obviously this broke after 7.19.4. I now also added
test case 1106 that verifies this functionality.
(http://curl.haxx.se/bug/view.cgi?id=2913886)
accessing already freed memory and thus crash when using HTTPS (with
OpenSSL), multi interface and the CURLOPT_DEBUGFUNCTION and a certain order
of cleaning things up. I fixed it.
(http://curl.haxx.se/bug/view.cgi?id=2891591)
curl_easy_setopt with CURLOPT_HTTPHEADER, the library should set
data->state.expect100header accordingly - the current code (in 7.19.7 at
least) doesn't handle this properly. Martin Storsjo provided the fix!
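A sketch (hypothetical URL) of an application providing its own Expect
header through CURLOPT_HTTPHEADER, which is exactly the case the fix makes
libcurl track correctly:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    struct curl_slist *hdrs = NULL;

    /* disable the 100-continue handshake by blanking the header */
    hdrs = curl_slist_append(hdrs, "Expect:");

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "field=value");
    curl_easy_perform(curl);

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    return 0;
  }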
rework patch that now integrates TFTP properly into libcurl so that it can
be used non-blocking with the multi interface and more. BLKSIZE also works.
The --tftp-blksize option was added to allow setting the TFTP BLKSIZE from
the command line.
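From libcurl the block size is set with the corresponding option; a sketch
with a hypothetical TFTP server:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* ask for a larger block size than the 512-byte default; the value
         is negotiated with the server */
      curl_easy_setopt(curl, CURLOPT_URL, "tftp://tftp.example.com/image.bin");
      curl_easy_setopt(curl, CURLOPT_TFTP_BLKSIZE, 1400L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }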
meter/callback during FTP command/response sequences. It turned out it was
really lame before and now the progress meter SHOULD get called at least
once per second.
CURLOPT_HTTPPROXYTUNNEL enabled over a proxy, a subsequent request using the
same proxy with the tunnel option disabled would still wrongly re-use that
previous connection and the outcome would only be badness.
end up with entries that wouldn't time-out:
1. Set up a first web server that redirects (307) to a http://server:port
that's down
2. Have curl connect to the first web server using curl multi
After the curl_easy_cleanup call, there will be curl dns entries hanging
around with in_use != 0.
(http://curl.haxx.se/bug/view.cgi?id=2891591)
the client certificate. It also disables the key name test, as some engines
can select a private key/cert automatically (when there is only one key
and/or certificate on the hardware device used by the engine).
No need for a separate variable ndns.
The memory leak detection will detect code that fails to release a dns reference.
The DEBUGASSERT will detect code that releases too many references.
closed NSPR descriptor. The issue was hard to find, reported several times
before and always closed unresolved. More info at the RH bug:
https://bugzilla.redhat.com/534176
(http://curl.haxx.se/bug/view.cgi?id=2891595) which identified how an entry
in the DNS cache would linger too long if the request that added it was in
use that long. He also provided the patch that now makes libcurl capable of
still doing a request while the DNS hash entry may get timed out.
used during the FTP connection phase (after the actual TCP connect), while
it of course should be. I also made the speed check get called correctly so
that really slow servers will trigger that properly too.
wrong percentage for small files, most notably for <1000 bytes, and could
easily end up showing more than 100% at the end. It also didn't show any
percentage, transfer size or estimated transfer times when transferring
less than 100 bytes.
auth is used, as it caused a crash. I failed to repeat the issue, but still
made a change that now forces the TCP connection used for a freed SCP
session to get closed and not be re-used.
POST using a read callback, with Digest authentication and
"Transfer-Encoding: chunked" enforced. I would then cause the first request
to be wrongly sent and then basically hang until the server closed the
connection. I fixed the problem and added test case 565 to verify it.
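A rough sketch of the reported setup, with a hypothetical URL and
credentials: a POST fed from a read callback, Digest authentication and a
forced chunked transfer-encoding:

  #include <curl/curl.h>
  #include <string.h>

  static const char *payload = "field=value";

  static size_t read_cb(char *buf, size_t size, size_t nitems, void *userp)
  {
    const char **data = (const char **)userp;
    size_t len = strlen(*data);
    if(len > size * nitems)
      len = size * nitems;
    memcpy(buf, *data, len);
    *data += len;
    return len;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    struct curl_slist *hdrs = NULL;
    const char *pos = payload;

    hdrs = curl_slist_append(hdrs, "Transfer-Encoding: chunked");

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/protected");
    curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST);
    curl_easy_setopt(curl, CURLOPT_POST, 1L);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
    curl_easy_setopt(curl, CURLOPT_READDATA, &pos);
    curl_easy_perform(curl);

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    return 0;
  }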
unparsable expiry dates and then treat them as session cookies - previously
libcurl would reject cookies with a date format it couldn't parse. Research
shows that the major browsers treat such cookies as session cookies. I
modified test 8 and 31 to verify this.
fail to build when this happens, and show an appropriate error.
The brave of heart can circumvent this. Defining ALLOW_MSVC6_WITHOUT_PSDK
in lib/config-win32.h, although absolutely discouraged and unsupported,
will allow the die-hard MSVC hacker to build in such a discouraged
environment.
The actually supported 'fix' is to install 'February 2003 Platform SDK'
a.k.a. 'Windows Server 2003 PSDK' which can be freely downloaded from
http://www.microsoft.com/msdownload/platformsdk/sdkupdate/psdk-full.htm
(http://curl.haxx.se/bug/view.cgi?id=2873666) which identified a problem which
made libcurl loop infinitely when given incorrect credentials when using HTTP
GSS negotiate authentication.
(http://curl.haxx.se/bug/view.cgi?id=2870221) that libcurl returned an
incorrect return code from the internal trynextip() function which caused
him grief. This is a regression that was introduced in 7.19.1 and I find it
strange it hasn't hit us harder, but I won't pursue figuring out
exactly why.
SO_SNDBUF to CURL_WRITE_SIZE even if the SO_SNDBUF starts out larger. The
patch doesn't do a setsockopt if SO_SNDBUF is already greater than
CURL_WRITE_SIZE. This should help folks who have set up their computer with
large send buffers.
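The idea in isolation, as a sketch with a made-up constant standing in for
CURL_WRITE_SIZE: only grow the send buffer, never shrink one the system
already made larger:

  #include <sys/types.h>
  #include <sys/socket.h>

  #define DESIRED_SNDBUF (16 * 1024) /* stand-in for CURL_WRITE_SIZE */

  void tune_sndbuf(int sockfd)
  {
    int curr = 0;
    socklen_t len = sizeof(curr);

    /* only call setsockopt() when the current buffer is smaller */
    if(getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &curr, &len) == 0 &&
       curr < DESIRED_SNDBUF) {
      int want = DESIRED_SNDBUF;
      setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &want, sizeof(want));
    }
  }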
the define CURL_MAX_HTTP_HEADER which is even exposed in the public header
file to allow for users to fairly easy rebuild libcurl with a modified
limit. The rationale for a fixed limit is that libcurl is realloc()ing a
buffer to be able to put a full header into it, so that it can call the
header callback with the entire header, but that also risks getting it into
trouble if a server by mistake or willingly sends a header that is more or
less without an end. The limit is set to 100K.
saving received cookies with no given path, if the path in the request had a
query part. That means a question mark (?) and characters on the right
side of that. I wrote test case 1105 and fixed this problem.
transfer.c for blocking. It is currently used only by SCP and SFTP protocols.
This enhancement resolves an issue with 100% CPU usage during SFTP upload,
reported by Vourhey.
(http://curl.haxx.se/bug/view.cgi?id=2861587) identifying that libcurl used
the OpenSSL function X509_load_crl_file() wrongly and failed if it would
load a CRL file with more than one certificate within. This is now fixed.