a response that was larger than 16KB is now improved slightly, so that the
16KB restriction now applies to the headers only; it should be a rare
situation where the response headers exceed 16KB. Thus, I consider #47 fixed
and the header limitation is now known as known bug #48.
This happened because the tftp code always did a bind() unconditionally,
without caring whether one had already been done, and then failed. I wrote a
test case (1009) to verify this, but it is a bit error-prone since it will
have to pick a fixed local port number and since the tests are run on so
many different hosts in different situations, I added it in a disabled state.
CURLOPT_OPENSOCKETDATA to set a callback that allows an application to replace
the socket() call used by libcurl. It basically allows the app to change the
address, protocol or other properties of the socket. (I also did some whitespace
indent/cleanups in lib/url.c which kind of hides some of these changes, sorry
for mixing those in.)
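A minimal sketch of how an application might use this (the handle "curl" and
the callback body are assumptions for illustration):

  #include <sys/socket.h>
  #include <curl/curl.h>

  static curl_socket_t my_opensocket(void *clientp, curlsocktype purpose,
                                     struct curl_sockaddr *addr)
  {
    (void)clientp; (void)purpose;
    /* the app may inspect or alter addr->family, addr->socktype,
       addr->protocol or addr->addr here, before the socket is created */
    return socket(addr->family, addr->socktype, addr->protocol);
  }

  curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, my_opensocket);
  curl_easy_setopt(curl, CURLOPT_OPENSOCKETDATA, NULL);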
CURLE_PEER_FAILED_VERIFICATION (standard CURL_NO_OLDIES style), and made this
return code get used by the previous SSH MD5 fingerprint check in case it
fails.
CURLOPT_SSH_HOST_PUBLIC_KEY_MD5 and the curl tool --hostpubmd5. They both make
the SCP or SFTP connection verify the remote host's md5 checksum of the public
key before doing a connect, to reduce the risk of a man-in-the-middle attack.
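A minimal sketch (the handle "curl" and the hex digest are made up for
illustration; the option takes the expected MD5 as exactly 32 hex digits):

  curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/file");
  curl_easy_setopt(curl, CURLOPT_SSH_HOST_PUBLIC_KEY_MD5,
                   "afe17cd62a0f3b61f1ab9cb22ba269a7");

The transfer fails unless the remote key's MD5 matches the given string.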
function misbehave on all input bytes that are >= 0x80 (decimal 128) due to a
signed/unsigned mistake in the code. I fixed it and added test case 543 to
verify.
curl_easy_setopt() that alters how libcurl functions when following
redirects. It makes libcurl obey RFC2616 when a 301 response is received
after a non-GET request is made. Default libcurl behaviour is to change
method to GET in the subsequent request (like it does for response code 302
- because that's what many/most browsers do), but with this CURLOPT_POST301
option enabled it will do what the spec says and do the next request using
the same method again, i.e. keep POST after 301.
The curl tool got this option as --post301
Test case 1011 and 1012 were added to verify.
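A minimal sketch of enabling it (assuming an existing easy handle "curl"):

  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  curl_easy_setopt(curl, CURLOPT_POST301, 1L); /* keep POST after a 301 */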
CURLOPT_NOBODY enabled but not CURLOPT_HEADER, libcurl wouldn't do TYPE
before it did SIZE, which made it less useful. I walked over the code and
made it do this properly, and added test case 542 to verify it.
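A minimal sketch of the use case this enables (URL and handle are assumed
for illustration):

  double size = 0.0;
  curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.bin");
  curl_easy_setopt(curl, CURLOPT_NOBODY, 1L); /* TYPE/SIZE only, no body */
  if(curl_easy_perform(curl) == CURLE_OK)
    curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &size);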
o It looks for the NSS database first in the environment variable SSL_DIR,
then in /etc/pki/nssdb, then it initializes with no database if neither of
those exist.
o If the NSS PKCS#11 libnsspem.so driver is available then PEM files may be
loaded, including the ca-bundle. If it is not available then only
certificates already in the NSS database are used.
o Tries to detect whether a file or nickname is being passed in so the right
thing is done.
o Added a bit of code to make the output more like the OpenSSL module,
including displaying the certificate information when connecting in
verbose mode.
o Improved handling of certificate errors (expired, untrusted, etc.).
The libnsspem.so PKCS#11 module is currently only available in Fedora
8/rawhide. Work will be done soon to upstream it. The NSS module will work
with or without it, all that changes is the source of the certificates and
keys.
key was specified and there was no HOME environment variable, and then it
didn't continue to try the other auth methods. Now it will instead try to
get the files id_dsa.pub and id_dsa from the current directory if neither of
the two conditions is met.
second transfer as it didn't store and remember the "" path from the
previous transfer so it would instead CWD to the entry path as stored. This
worked, but did a superfluous command. Thus, test case 541 now also verifies
this fix.
and allow reuse by multiple protocols. Several unused error codes were
removed. In all cases, macros were added to preserve source (and binary)
compatibility with the old names. These macros are subject to removal at
a future date, but probably not before 2009. An application can be
tested to see if it is using any obsolete code by compiling it with the
CURL_NO_OLDIES macro defined.
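For example, defining the macro before including the header turns any use of
an obsolete name into a compile error:

  #define CURL_NO_OLDIES
  #include <curl/curl.h>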
Documented some newer error codes in libcurl-error(3).
out that libcurl didn't deal with large responses from server commands, when
a single response consisted of multiple lines with a total size of
16KB or more. Dan Fandrich improved the ftp test script and provided test
case 1006 to repeat the problem, and I fixed the code to make sure this new
test case runs fine.
out that doing first a file:// upload and then an FTP upload crashed libcurl
or at best caused furious valgrind complaints. Fixed now by making sure we
free and clear the file-specific struct properly when done with it.
out that libcurl didn't deal with very long (>16K) FTP server response lines
properly. Starting now, libcurl will chop them off (thus the client app will
not get the full line) but survive and deal with them fine otherwise. Test
case 1003 was added to verify this.
(http://curl.haxx.se/bug/view.cgi?id=1776232) about libcurl calling
Curl_client_write(), passing on a const string that the caller may not
modify and yet it does (on some platforms).
(http://curl.haxx.se/bug/view.cgi?id=1776235) about ftp requests with NOBODY
on a directory would do a "SIZE (null)" request. This is now fixed and test
case 1000 was added to verify.
the configure script checks for openldap and friends and we link with those
libs just like we link all other third party libraries, and we no longer
dlopen() those libraries. Our private header file lib/ldap.h was renamed to
lib/curl_ldap.h due to this. I set a tag in CVS (curl-7_17_0-preldapfix)
just before this commit, just in case.
(http://curl.haxx.se/bug/view.cgi?id=1766320) pointing out that the libcurl
code accessed two curl_easy_setopt() options (CURLOPT_DNS_CACHE_TIMEOUT and
CURLOPT_DNS_USE_GLOBAL_CACHE) as ints even though they're documented to be
passed in as longs, and that makes a difference on 64 bit architectures.
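The documented usage passes long values, which matters on 64 bit systems
since curl_easy_setopt() reads its third argument through varargs (handle
"curl" assumed):

  curl_easy_setopt(curl, CURLOPT_DNS_CACHE_TIMEOUT, 60L);
  curl_easy_setopt(curl, CURLOPT_DNS_USE_GLOBAL_CACHE, 0L);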
after 7.16.2. This is largely due to the different treatment file:// gets
internally, but now I added test 231 to make it less likely to happen again
without us noticing!
passed to it with curl_easy_setopt()! Previously it has always just referred
to the data, forcing the user to keep the data around until libcurl is done
with it. That is now history and libcurl will instead clone the given
strings and keep private copies.
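A minimal sketch of what is now safe (the handle "curl" is assumed; before
this change the buffer would have had to outlive the transfer):

  {
    char url[64];
    strcpy(url, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_URL, url);
  } /* 'url' may go away here - libcurl keeps its own copy */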
NTLM, and he provided test code and a test server and we worked out a bug
fix. We failed to count sent body data at times, which then caused internal
confusion when libcurl tried to send the rest of the data in order to
keep the same connection alive.
(and then I did some minor reformatting of code in lib/http.c)
(http://curl.haxx.se/bug/view.cgi?id=1757328) and submitted a patch. It turns
out we broke login to FTP servers that don't require (nor understand) PASS
after the USER command.
(http://curl.haxx.se/bug/view.cgi?id=1750274) and submitted a patch for the
case where libcurl did a connect attempt to a non-listening port and didn't
provide a human readable error string back.
fail to connect if there is no Common Name field found in the remote cert.
We should deprecate the support for this set to 1 anyway soon, since the
feature is pointless and most likely never really used by anyone.
The tiny patch below fixes a bug (that I introduced :) which happens
when negotiating authentication with a proxy (probably with web
servers as well) that uses chunked transfer encoding for the 407 error
pages. In this case the "ignorebody" flag was ignored (no pun
intended).
message for an scp:// upload failure. If libssh2 has the matching
patch, then the error message returned by the server will be used instead
of a more generic error.
and CURLOPT_NEW_DIRECTORY_PERMS. These control the permissions for files
and directories created on the remote server. CURLOPT_NEW_FILE_PERMS
defaults to 0644 and CURLOPT_NEW_DIRECTORY_PERMS defaults to 0755.
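A minimal sketch (assuming an existing easy handle "curl"; the values are
plain mode bits passed as long):

  curl_easy_setopt(curl, CURLOPT_NEW_FILE_PERMS, 0644L);
  curl_easy_setopt(curl, CURLOPT_NEW_DIRECTORY_PERMS, 0755L);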
hash function for different hashes, and also expanded the default size for
the socket hash table used in multi handles to greatly enhance speed when
very many connections are added and the socket API is used.
chunked encoding (that also lacks "Connection: close"). It now simply
assumes that the connection WILL be closed to signal the end, as that is how
RFC2616 section 4.4 point #5 says we should behave.
http://curl.haxx.se/mail/lib-2007-06/0238.html, libcurl didn't properly do
no-body requests on FTP files on re-used connections, or at least
it didn't provide the info back in the header callback properly in the
subsequent requests.
done if the sys/poll.h file is missing, as we have seen machines with poll()
present but without the header file and machines that don't get HAVE_POLL
defined but that do have the sys/poll.h header file...
complicated work-around for 64bit HPUX compiles. We do the fix using static
inline functions so that they follow the header file properly and thus get
used fine in the test suite too.
libssh2_sftp_shutdown() and libssh2_session_free() can now return
LIBSSH2_ERROR_EAGAIN.
* Fix the _send() and _recv() return values so non-blocking works
* As of (LIBSSH2_APINO >= 200706012030) there are no *nb() functions
* As of (LIBSSH2_APINO >= 200706012030) most libssh2_*() functions
can return LIBSSH2_ERROR_EAGAIN to indicate that the call would block.
To make the code work as previously, blocking, all the code has been
updated so that when (LIBSSH2_APINO >= 200706012030) it loops simulating
blocking. This allows the existing code to function and not hold up
the upcoming release.
to find that it crashed miserably, and this was due to some select()isms left
in the code. This was due to API restrictions in c-ares 1.3.x, but with the
upcoming c-ares 1.4.0 this is no longer the case so now libcurl runs much
better with c-ares and the multi interface with > 1024 file descriptors in
use.
I also switched from calloc() to malloc() as a minor performance boost since
the rest of the code fills in the structs fine anyway - and they must for the
case when we use the stack-based auto variable array instead of the allocated
one.
I made the loop filling in poll_fds[] break when poll_nfds is reached as a
minor speed improvement.
(http://curl.haxx.se/bug/view.cgi?id=1705802), which was filed by Daniel
Black, identifying that several FTP-SSL test cases fail when we build libcurl
with
NSS for TLS/SSL. Listed as #42 in KNOWN_BUGS.
peer's name in the SSL certificate when built for OpenSSL. The leak happens
for libcurls with CURL_DOES_CONVERSIONS enabled that fail to convert the CN
name from UTF8.
bug report #1715394 (http://curl.haxx.se/bug/view.cgi?id=1715394), and the
transfer-related info "variables" were indeed wrongly overwritten with zeroes
and have now been adjusted. The upload size still isn't accurate.
because I just made SCP uploads return this value if the file size of
the upload file isn't given with CURLOPT_INFILESIZE*. Docs updated to
reflect this news, and a define for the old name was added to the public
header file.
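A minimal sketch of avoiding the error by providing the size up front (the
handle, URL and the file_size variable are assumed for illustration):

  curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
  curl_easy_setopt(curl, CURLOPT_URL, "scp://example.com/target");
  curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, (curl_off_t)file_size);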
when CURLOPT_HTTP200ALIASES is used to avoid the problem mentioned below is
not very nice if the client wants to be able to use _either_ an HTTP 1.1
server or one within the aliases list... so starting now, libcurl will
simply consider 200-alias matches to be HTTP 1.0 compliant.
libcurls, which turned out to be the 25-nov-2006 change which treats HTTP
responses without Content-Length or chunked encoding as without bodies. We
have now added the condition that the above mentioned response is only
without body if the response is HTTP 1.1.
when CURLM_CALL_MULTI_PERFORM is returned from curl_multi_socket*/perform,
to make applications using only curl_multi_socket() function properly
when adding easy handles "on the fly". Bug report and test app provided by
Michael Wallner.
since it then inits libgcrypt and libgcrypt is being evil and EXITS the
application if it fails to get a fine random seed. That's really not a nice
thing to do by a library.
been removed from a multi handle, and then fixed another flaw that prevented
curl_easy_duphandle() from working even after the first fix - the handle was
still marked as using the multi interface.
was 16385 bytes (16K+1) and it turned out we didn't always properly "suck
out" all data from libssh2, the effect being that libcurl would hang on the
socket waiting for data when libssh2 had in fact already read it all...
the CURLOPT_RESUME_FROM or CURLOPT_RANGE options and an existing connection
in the connection cache is closed to make room for the new one when you call
curl_easy_perform(). It would then wrongly free range-related data in the
connection close function.
identifying a double-free problem in the SSL-dealing layer, telling GnuTLS to
free NULL credentials on closedown after a failure and a bad #ifdef for NSS
when closing down SSL.
Curl_socket_ready(), Curl_poll() and Curl_select() when these are called
with a zero timeout or a timeout value indicating a blocking call should
be performed.
These unnecessary calls to gettimeofday() got introduced in 7.16.2 when
fixing 'timeout would restart when signal caught while awaiting socket
events' on 20 March 2007.
- Move some loop breaking logic from the while clause into the loop,
avoiding the compiler warning 'assignment within conditional expression'.
function that deprecates the curl_multi_socket() function. Using the new
function, the application tells libcurl what action was found on the
socket that it passes in. This gives a significant performance boost as it
allows libcurl to avoid a call to poll()/select() for every call to
curl_multi_socket*().
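A minimal sketch of the new call (the multi handle, the socket and the event
bitmask come from the application's own event loop):

  int running;
  /* we saw readability on this socket, tell libcurl exactly that */
  curl_multi_socket_action(multi, sockfd, CURL_CSELECT_IN, &running);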
returning an error code, to allow connections to be torn down
cleanly since this function can be called AFTER an OOM situation
has already been reached.
by letting configure check for setmode and ifdef on HAVE_SETMODE. NOTE: non-
configure platforms that have setmode() need their hard-coded config.h files
fixed. I fixed src/config-win32.h.
have not been properly defined to allow this! Instead of changing the defines
and breaking the ABI/API, I opted to modify the code to check for exact type
matches. Identified as CID 10 by the coverity.com scan.
to complete with no delay and actually find out what happened with
the socket, as well as detection of a socket send()able condition.
This also allows removal of a Cygwin-specific block of code.
and/or HAVE_VARIADIC_MACROS_GCC for specific compiler versions that support
variadic macros with C99 style and/or old gcc style in their specific config.h
file.
If the previous definitions are not done, even when applicable, and --disable-verbose
is used, the fallback (void) method will be used to define infof, avoiding the
inclusion of unwanted strings in the resulting library/executable.
or Curl_poll() with a non-zero timeout both functions would restart the
specified timeout. This could even lead to the extreme case that if a
signal arrived at a frequency lower than the specified timeout, neither
function would ever exit.
Added experimental symbol definition check CURL_ACKNOWLEDGE_EINTR in
Curl_select() and Curl_poll(). When compiled with CURL_ACKNOWLEDGE_EINTR
defined both functions will return as soon as a signal is caught. Use it
at your own risk, all calls to these functions in the library should be
revisited and checked before fully supporting this feature.
more frequently, allowing the same calling frequency for the client progress
callback, while keeping the once-a-second frequency for speed calculations
and internal display of the transfer progress.
Curl_poll() which is called whenever not a single valid file descriptor is
passed to these functions.
Improve readability using a poll() macro to replace WSAPoll().
server through a proxy and have the remote https server port set using the
CURLOPT_PORT option, protocol gets reset to http from https after the first
request.
User defined URL was modified internally by libcurl and subsequent reuse of
the easy handle may lead to connection using a different protocol (if not
originally http).
I found that libcurl hardcoded the protocol to "http" when it tries to
regenerate the URL if CURLOPT_PORT is set. I tried to fix the problem as
follows, and it's working fine so far.
"case label value exceeds maximum value for type" and
"comparison is always false due to limited range of data type"
Both triggered when using a bool variable as the switch variable
in a switch statement and using enums for the case targets.
Check for lowercase 'bool' type at configuration stage. If not available,
provide a suitable replacement with a type definition of 'unsigned char'
in setup_once.h.
Move definitions of TRUE and FALSE to setup_once.h.
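A sketch of what the setup_once.h fallback may look like (the guard names
follow the description above; the exact layout in the source may differ):

  #ifndef HAVE_BOOL_T
  typedef unsigned char bool;  /* suitable replacement */
  #endif

  #ifndef TRUE
  #define TRUE 1
  #endif
  #ifndef FALSE
  #define FALSE 0
  #endif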
fixing some bugs:
o Don't mix GET and POST requests in a pipeline
o Fix the order in which requests are dispatched from the pipeline
o Fixed several curl bugs with pipelining when the server is returning
chunked encoding:
* Added states to chunked parsing for final CRLF
* Rewind buffer after parsing chunk with data remaining
* Moved chunked header initializing to a spot just before receiving
headers
the multi interface and connection re-use that could make a
curl_multi_remove_handle() ruin a pointer in another handle.
The second problem was less of an actual problem and more of a minor quirk:
the re-using of connections wasn't properly checking if the connection was
marked for closure.
making them available to any source code file which includes "setup.h".
Macro SOCKERRNO / SET_SOCKERRNO() returns / sets the *socket-related* errno
(or equivalent) on this platform to hide platform details to code using it.
Macro ERRNO / SET_ERRNO() returns / sets the *non-socket-related* errno
(or equivalent) on this platform to hide platform details to code using it.
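A sketch of how the socket variant may be laid out (the Winsock branch is an
assumption for illustration; see lib/setup_once.h for the real definitions,
with ERRNO / SET_ERRNO() following the same pattern):

  #ifdef USE_WINSOCK
  #define SOCKERRNO         ((int)WSAGetLastError())
  #define SET_SOCKERRNO(x)  (WSASetLastError((int)(x)))
  #else
  #define SOCKERRNO         (errno)
  #define SET_SOCKERRNO(x)  (errno = (x))
  #endif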
to the debug callback.
- Shmulik Regev added CURLOPT_HTTP_CONTENT_DECODING and
CURLOPT_HTTP_TRANSFER_DECODING that if set to zero will disable libcurl's
internal decoding of content or transfer encoded content. This may be
preferable in cases where you use libcurl for proxy purposes or similar. The
command line tool got a --raw option to disable both at once.
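A minimal sketch (assuming an existing easy handle "curl"):

  /* pass the content through untouched, e.g. when proxying */
  curl_easy_setopt(curl, CURLOPT_HTTP_CONTENT_DECODING, 0L);
  curl_easy_setopt(curl, CURLOPT_HTTP_TRANSFER_DECODING, 0L);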
and CURLOPT_CONNECTTIMEOUT_MS that, as their names should hint, do the
timeouts with millisecond resolution instead. The only restriction to that
is the alarm() (sometimes) used to abort name resolves as that uses full
seconds. I fixed the FTP response timeout part of the patch.
Internally we now count and keep the timeouts in milliseconds but it also
means we multiply the set timeouts by 1000. The effect of this is that no
timeout can be set to more than 2^31 milliseconds (on 32 bit systems), which
equals 24.86 days. We probably couldn't before either, since the code already
did *1000 on the timeout values in several places.
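A minimal sketch (assuming an existing easy handle "curl"):

  curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT_MS, 1500L);
  curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 2500L);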
#1
There's a compilation error in http_ntlm.c if USE_NTLM2SESSION is NOT
defined. I noticed this while testing various configurations. Line 867 of
the current http_ntlm.c is a closing bracket for an if/else pair that only
gets compiled in if USE_NTLM2SESSION is defined. But this closing bracket
wasn't in an #ifdef so the code fails to compile unless USE_NTLM2SESSION was
defined. Lines 198 and 140 of my patch wrap that closing bracket in an
#ifdef USE_NTLM2SESSION.
#2
I noticed several picky compiler warnings when DEBUG_ME is defined. I've
fixed them with casting. By the way, DEBUG_ME was a huge help in
understanding this code.
#3
Hopefully the last non-ASCII conversion patch for libcurl in a while. I
changed the "NTLMSSP" literal to hex since this signature must always be in
ASCII.
Conversion code was strategically added where necessary. And the
Curl_base64_encode calls were changed so the binary "blobs" http_ntlm.c
creates are NOT translated on non-ASCII platforms.
doing an FTP transfer is removed from a multi handle before completion. The
fix also fixed the "alive counter" to be correct on "premature removal" for
all protocols.
non-ASCII platforms. It does add some complexity, most notably with more
#ifdefs, but I want to see this support added and I can't see how we can
add it without the extra stuff.
curl that uses the new CURLOPT_FTP_SSL_CCC option in libcurl. If enabled, it
will make libcurl shut down the SSL/TLS layer after the authentication is
done on an FTP-SSL operation.
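A minimal sketch (assuming an existing easy handle "curl" already set up for
an FTP-SSL transfer):

  curl_easy_setopt(curl, CURLOPT_FTP_SSL, (long)CURLFTPSSL_ALL);
  /* let the server initiate the TLS shutdown on the control channel */
  curl_easy_setopt(curl, CURLOPT_FTP_SSL_CCC, (long)CURLFTPSSL_CCC_PASSIVE);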
downloaded data in two buffers, just to be able to deal with a special HTTP
pipelining case. That is now only activated for pipelined transfers. In
Matt's case, it showed as a considerable performance difference.
(http://curl.haxx.se/bug/view.cgi?id=1603712) (known bug #36) --limit-rate
(CURLOPT_MAX_SEND_SPEED_LARGE and CURLOPT_MAX_RECV_SPEED_LARGE) are broken
on Windows (since 7.16.0, but that's when they were introduced as previous
to that the limiting logic was made in the application only and not in the
library). It was actually also broken on select()-based systems (as opposed
to poll()-based ones) but we haven't had any such reports. We now use
select(), Sleep() or delay() properly to sleep a while without waiting for
any input or output when the rate limiting is activated with the easy
interface.
get confused and not acknowledge the 'no_proxy' variable properly once it
had used the proxy and you re-used the same easy handle. I made sure the
proxy name is properly stored in the connect struct rather than the
sessionhandle/easy struct.
(http://curl.haxx.se/bug/view.cgi?id=1618359) and subsequently provided a
patch for it: when downloading 2 zero byte files in a row, curl 7.16.0
enters an infinite loop, while curl 7.16.1-20061218 does one additional
unnecessary request.
Fix: During the "Major overhaul introducing http pipelining support and
shared connection cache within the multi handle." change, headerbytecount
was moved to live in the Curl_transfer_keeper structure. But that structure
is reset in the Transfer method, losing the information that we had about
the header size. This patch moves it back to the connectdata struct.
something went wrong like it got a bad response code back from the server,
libcurl would leak memory. Added test case 538 to verify the fix.
I also noted that the connection would get cached in that case, which
doesn't make sense since it cannot be re-used when the authentication has
failed. I fixed that issue too at the same time, and also that the path
would be "remembered" in vain for cases where the connection was about to
get closed.
(http://curl.haxx.se/bug/view.cgi?id=1603712) which is about connections
getting cut off prematurely when --limit-rate is used. While I found no such
problems in my tests nor in my reading of the code, I found that the
--limit-rate code was severely flawed (ever since it was moved into the lib
in 7.15.5) when used with the easy interface, and it didn't work as
documented so
I reworked it somewhat and now it works for my tests.
passing a curl_off_t argument to the Curl_read_rewind() function which takes
a size_t argument. Curl_read_rewind() also had debug code left in it and it
was put in a different source file with no good reason when only used from
one single spot.
(http://curl.haxx.se/bug/view.cgi?id=1604956) which identified setting
CURLOPT_MAXCONNECTS to zero caused libcurl to SIGSEGV. Starting now, libcurl
will always internally use no less than 1 entry in the connection cache.
(http://curl.haxx.se/bug/view.cgi?id=1230118) curl_getdate() did not work
properly for all input dates on Windows. It was mostly seen on some TZ time
zones using DST. Luckily, Martin also provided a fix.
(http://curl.haxx.se/bug/view.cgi?id=1600447) in which he noted that active
FTP connections don't work with the multi interface. The problem here is that
the multi interface state machine has a state during which it can wait for the
data connection to connect, but the active connection is not done in the same
step in the sequence as the passive one is so it doesn't quite work for
active. The active FTP code still uses a blocking function to allow the remote
server to connect.
The fix (work-around is a better word) for this problem is to prematurely set
the boolean that says the data connection is completed, so that the "wait
for connect" phase ends at once.
HTTP upload was disconnected:
"What appears to be happening is that my system (Linux 2.6.17 and 2.6.13) is
setting *only* POLLHUP on poll() when the conditions in my previous mail
occur. As you can see, select.c:Curl_select() does not check for POLLHUP. So
basically what was happening, is poll() was returning immediately (with
POLLHUP set), but when Curl_select() looked at the bits, neither POLLERR or
POLLOUT was set. This still caused Curl_readwrite() to be called, which
quickly returned. Then the transfer() loop kept continuing at full speed
forever."
header in a third format, not supported by libcurl, and we agreed that we
could make the parser more forgiving and accept all three found
variations.
responded with a single status line and no headers nor body. Starting now, an
HTTP response on a persistent connection (i.e. not set to be closed after the
response has been taken care of) must have Content-Length or chunked
encoding set, or libcurl will simply assume that there is no body.
To my horror I learned that we had no less than 57(!) test cases that did bad
HTTP responses like this, and even the test http server (sws) responded badly
when the test system queried it to check that it is the test server. So
although the
actual fix for the problem was tiny, going through all the newly failing test
cases got really painful and boring.
defining HAVE_SIGNAL_H if the header is available.
Added a check in configure that tests if the sig_atomic_t type is
available, defining HAVE_SIG_ATOMIC_T if so, and providing
a suitable default in setup_once.h if not.
Added a check in configure that tests if the sig_atomic_t type is
already defined as volatile, defining HAVE_SIG_ATOMIC_T_VOLATILE
if it is available and already defined as volatile.
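A sketch of what the fallback and the volatile handling may look like (the
exact macro layout in setup_once.h may differ):

  #ifndef HAVE_SIG_ATOMIC_T
  typedef int sig_atomic_t;  /* suitable default */
  #endif

  /* avoid a 'volatile volatile' qualifier if the type already has one */
  #ifdef HAVE_SIG_ATOMIC_T_VOLATILE
  #define SIG_ATOMIC_T static sig_atomic_t
  #else
  #define SIG_ATOMIC_T static volatile sig_atomic_t
  #endif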