strlen() returns size_t, but the ssh libraries want 'unsigned int'. Add
explicit casts and use the _ex versions of the ssh library calls.
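A minimal sketch of the kind of change this implies, assuming the
libssh2 backend (the exact call sites in ssh.c differ):

  /* strlen() yields size_t while the _ex variants take unsigned int
     lengths, so cast explicitly */
  rc = libssh2_userauth_password_ex(session, user,
                                    (unsigned int)strlen(user),
                                    passwd,
                                    (unsigned int)strlen(passwd), NULL);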
Signed-off-by: Ben Greear <greearb@candelatech.com>
If you pass a URL to pop3 that does not contain a message ID as
part of the URL, it will currently ask for 'INBOX', which just
causes the pop3 server to return an error.
The change makes libcurl treat an empty message ID as a request
for LIST (the list of pop3 message IDs). The user's code can then
parse this and download individual messages as desired.
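A minimal usage sketch (URL and credentials are made up):

  /* no message ID in the URL: libcurl now asks for the LIST */
  curl_easy_setopt(curl, CURLOPT_URL,
                   "pop3://user:secret@mail.example.com/");
  /* with a message ID: retrieve that particular message */
  curl_easy_setopt(curl, CURLOPT_URL,
                   "pop3://user:secret@mail.example.com/1");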
Ben Greear brought a patch that from now on allows all protocols
to specify a user name and password within the URL, in the same
manner HTTP and FTP have been allowed to in the past - although
far from all of the protocols libcurl supports actually have that
feature in their URL definition spec.
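For example (host and credentials are made up), a URL like this
should now be accepted for a protocol such as SCP as well:

  curl_easy_setopt(curl, CURLOPT_URL,
                   "scp://user:secret@example.com/tmp/file.txt");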
Bob Richmond: There's an annoying situation where libcurl will
read new HTTP response data from a socket and then, if a timeout
is set, check whether it has expired. If the last packet received
constitutes the end of the response body, libcurl still treats it
as a timeout condition and reports a message like:
"Operation timed out after 3000 milliseconds with 876 out of 876
bytes received"
It should only be considered a timeout if the timer lapsed and we
DIDN'T receive the end of the response body yet.
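In rough pseudo-C (timeout_lapsed and body_complete are illustrative
names, not the actual libcurl variables), the intended condition is:

  /* only report a timeout when the timer lapsed AND the end of the
     response body has not been received yet */
  if(timeout_lapsed && !body_complete)
    return CURLE_OPERATION_TIMEDOUT;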
This commit fixes the cmake build of curl and cleans up the
cmake code a little. It removes some commented-out code and
some trailing whitespace. To get curl to build, the binary
tree's include/curl directory needed to be added to the include
path, and SIZEOF_SHORT needed to be added. A check for missing
SIZEOF_* defines, needed by warnless.c, was also added.
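A sketch of what such a guard can look like (illustrative, not
necessarily the exact check that was added):

  #if !defined(SIZEOF_INT) || !defined(SIZEOF_SHORT) || !defined(SIZEOF_LONG)
  #  error "SIZEOF_* defines are missing - check the CMake configuration"
  #endif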
Kenny To filed the bug report #2963679 with a patch to fix a
problem he experienced when doing multi interface HTTP POST over
a proxy using PROXYTUNNEL. He found a case where it would connect
fine but bits.tcpconnect was not set correctly, so libcurl didn't
work properly.
(http://curl.haxx.se/bug/view.cgi?id=2963679)
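A rough sketch of the setup that hits this code path (proxy address
and POST data are made up, error checking omitted):

  CURL *easy = curl_easy_init();
  curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/upload");
  curl_easy_setopt(easy, CURLOPT_PROXY, "http://proxy.example.com:3128");
  curl_easy_setopt(easy, CURLOPT_HTTPPROXYTUNNEL, 1L); /* CONNECT tunnel */
  curl_easy_setopt(easy, CURLOPT_POSTFIELDS, "data=hello");
  curl_multi_add_handle(multi, easy); /* driven with curl_multi_perform() */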
Akos Pasztory filed debian bug report #572276
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=572276)
mentioning a problem with a resource that is returned chunked-encoded
_and_ with a Content-Length header, and libcurl failed to properly
ignore the latter information.
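Illustrative response headers of the kind in question (values made
up); the chunked Transfer-Encoding should win and the Content-Length
be ignored:

  HTTP/1.1 200 OK
  Transfer-Encoding: chunked
  Content-Length: 100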
Hauke Duden provided an example program that made the multi
interface crash. His example simply used the multi interface and
did one FTP transfer first and, after its completion, used a second
easy handle to do another FTP transfer on the same FTP server.
This triggered a bug in the "delayed easy handle kill" system
that curl uses: when an FTP connection is left alive it must keep
an easy handle around internally - solely for the purpose of having
an easy handle available when it later disconnects the connection.
The code assumed that when the easy handle was removed and an
internal reference to it was made, that handle could be killed
later on when a new easy handle came along using the same
connection. This was wrong, as Hauke's example showed that the
removed handle wasn't killed for real until later. This caused a
double close attempt => segfault.
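A condensed sketch of the triggering sequence (URLs made up, error
checking and the event loop omitted):

  CURL *first = curl_easy_init();
  curl_easy_setopt(first, CURLOPT_URL, "ftp://ftp.example.com/a.txt");
  curl_multi_add_handle(multi, first);
  /* ... drive the multi handle until the transfer completes ... */
  curl_multi_remove_handle(multi, first);
  curl_easy_cleanup(first);

  CURL *second = curl_easy_init();
  curl_easy_setopt(second, CURLOPT_URL, "ftp://ftp.example.com/b.txt");
  /* the second transfer reuses the still-alive FTP connection and
     used to trigger the double close / segfault */
  curl_multi_add_handle(multi, second);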
Looking at the code of Curl_resolv_timeout() in hostip.c, I think
that in case of a timeout, the signal handler for SIGALRM never
gets removed. I think that in my case it gets executed at some
point later on when execution has long left Curl_resolv_timeout()
or even the cURL library.
The code that is jumped to with siglongjmp() simply sets the
error message to "name lookup timed out" and then returns with
CURLRESOLV_ERROR. I guess that instead of simply returning
without cleaning up, the code should have a goto that jumps to
the spot right after the call to Curl_resolv().
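A sketch of the suggested shape of the fix (identifiers are
illustrative and not necessarily the actual hostip.c names):

  if(sigsetjmp(curl_jmpenv, 1)) {
    /* we got here from the SIGALRM handler via siglongjmp() */
    failf(data, "name lookup timed out");
    rc = CURLRESOLV_ERROR;
    goto clean_up;            /* instead of returning right away */
  }
  /* ... the actual resolving happens here ... */
clean_up:
  /* restore the previous SIGALRM handler and cancel the alarm, as is
     done on the normal return path after Curl_resolv() */
  sigaction(SIGALRM, &keep_sigact, NULL);
  alarm(0);
  return rc;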
Error codes were not properly returned to the main curl code (and on to apps
using libcurl).
tftp was failing when tsize == 0 on upload, but I see no reason to
fail an upload just because the remote file is zero-length. Ignore
the tsize option on upload.
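A sketch of the idea (close to, but not necessarily identical to,
the actual tftp.c change):

  /* only treat tsize == 0 as an error when downloading; a zero-length
     remote file is fine when we are uploading */
  if(!data->set.upload && !tsize) {
    failf(data, "invalid tsize -:%s:- value in OACK packet", option);
    return CURLE_TFTP_ILLEGAL;
  }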
The problem mentioned on Dec 10 2009
(http://curl.haxx.se/bug/view.cgi?id=2905220) was only partially fixed.
Partially, because an easy handle can be associated with many
connections in the cache (e.g. if there is a redirect during the
lifetime of the easy handle) and the previous patch only cleaned up
the first one. The new fix removes the easy handle from all
associated connections, not just the first.
This makes sure that when using sub-second timeouts, there's no
final bad 1000ms wait. Previously, a sub-second timeout would often
make the elapsed time end up being the timeout rounded up to the
nearest second (e.g. 1s for a 200ms timeout).
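For example (illustrative), a sub-second total timeout such as

  curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 200L);

should now expire after roughly 200 ms instead of the elapsed time
getting rounded up to a full second.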
The timeout handling now also respects the global timeout, if set.
Also, as was reported in bug report #2956437
by Ryan Chan, the time stamp to use as the basis for the per-command
timeout was not set properly in the DONE phase for FTP (but not for
SMTP), so I fixed
that just now. This was a regression compared to 7.19.7 due to the
conversion of FTP code over to the generic pingpong concepts.
http://curl.haxx.se/bug/view.cgi?id=2956437