and makes wrong assumptions about the build target when it isn't
specified. So,
if no build target has been defined we will target WinXP when building
with MSVC 9.0 (VS2008).
(http://curl.haxx.se/bug/view.cgi?id=1849764) with an included fix. He
identified a problem with re-used connections that previously had sent
Expect: 100-continue and in some situations the subsequent POST (that didn't
use Expect:) still had the internal flag set for its use. David's fix (which
unconditionally sets the flag in every single request) is fine and is now
used!
callback) over a proxy when NTLM is used as auth with the proxy. The bug
also concerned Digest and was limited to using callback only. Spacen worked
with us to provide a useful patch. I added test cases 547 and 548 to
verify two variations of POST over proxy with NTLM.
the appending of the "type=" thing on FTP URLs when they are passed to an
HTTP proxy. Some proxies just don't like that appending (which is done
unconditionally in 7.17.1), and some proxies treat binary/ascii transfers
better with the appending done!
the same state struct as the host auth, so both could never be used at the
same time! I fixed it (without being able to check) to use two separate
structs to allow authentication using Negotiate on host and proxy
simultaneously.
callback was used, as it could wrongly pass on a bad size for the outgoing
HTTP header. The bad size would be a very large value as it was a wrapped
size_t content. This happened when the whole HTTP request failed to get sent
in one single send. http://curl.haxx.se/mail/lib-2007-11/0165.html
forwarded from the Gentoo bug tracker by Daniel Black and was originally
submitted by Robin Johnson, pointed out that libcurl would do bad memory
references when it failed and bailed out before the handler thing was
set up. My fix is not done like the provided patch does it, but instead I
make sure that there's never any chance for a NULL pointer in that struct
member.
out that SFTP requests didn't use persistent connections. Neither did SCP
ones. I gave the SSH code a good beating and now both SCP and SFTP should
use persistent connections fine. I also did a bunch of indent changes as
well as a bug fix for the "keyboard interactive" auth.
out a problem in curl.h when building C++ apps with MSVC. To fix it, the
inclusion of header files in curl.h is moved outside of the C++ extern "C"
linkage block.
building with VC8 to get the "manifest" embedded to make fine stand-alone
binaries. The maketgz and the src/Makefile.vc6 files were adjusted
accordingly.
https://bugzilla.novell.com/show_bug.cgi?id=332917 about an HTTP redirect to
FTP that caused memory havoc. His work together with my efforts created two
fixes:
#1 - FTP::file was moved to struct ftp_conn, because it has to be dealt with
at connection cleanup, at which time the struct HandleData could be
used by another connection.
Also, the unused char *urlpath member is removed from struct FTP.
#2 - provide a Curl_reset_reqproto() function that frees
data->reqdata.proto.* on connection setup if needed (that is if the
SessionHandle was used by a different connection).
a response that was larger than 16KB is now improved slightly so that the
16KB restriction now applies to the headers only, and it should be a rare
situation where the response headers exceed 16KB. Thus, I consider #47 fixed
and the header limitation is now known as known bug #48.
This happened because the tftp code always unconditionally did a bind()
without caring if one had already been done, and then it failed. I wrote a
test case (1009) to verify this, but it is a bit error-prone since it will
have to pick a fixed local port number and since the tests are run on so
many different hosts in different situations I added it in a disabled state.
CURLOPT_OPENSOCKETDATA to set a callback that allows an application to replace
the socket() call used by libcurl. It basically allows the app to change
address, protocol or whatever of the socket. (I also did some whitespace
indent/cleanups in lib/url.c which kind of hides some of these changes, sorry
for mixing those in.)
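A minimal sketch of how an application might use the new callback (the
callback name and the plain pass-through behaviour here are just
illustrative; an app would typically modify the address struct before
creating the socket):

  #include <curl/curl.h>
  #include <sys/socket.h>

  /* called instead of socket(); the app may alter family, protocol etc
     in 'addr' before the socket is created */
  static curl_socket_t my_opensocket(void *clientp, curlsocktype purpose,
                                     struct curl_sockaddr *addr)
  {
    (void)clientp;
    (void)purpose;
    return socket(addr->family, addr->socktype, addr->protocol);
  }

  static void setup(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, my_opensocket);
    curl_easy_setopt(curl, CURLOPT_OPENSOCKETDATA, NULL);
  }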
CURLE_PEER_FAILED_VERIFICATION (standard CURL_NO_OLDIES style), and made this
return code get used by the previous SSH MD5 fingerprint check in case it
fails.
CURLOPT_SSH_HOST_PUBLIC_KEY_MD5 and the curl tool --hostpubmd5. They both make
the SCP or SFTP connection verify the remote host's md5 checksum of the public
key before doing a connect, to reduce the risk of a man-in-the-middle attack.
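For instance (the digest string here is a made-up placeholder; it must be
the 32 hexadecimal digits of the MD5 checksum the app expects):

  /* fail the SCP/SFTP connection unless the remote public key's MD5
     checksum matches this string (placeholder value shown) */
  curl_easy_setopt(curl, CURLOPT_SSH_HOST_PUBLIC_KEY_MD5,
                   "5c7c6c8c9c0c1c2c3c4c5c6c7c8c9c0c");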
function misbehave on all input bytes that are >= 0x80 (decimal 128) due to a
signed / unsigned mistake in the code. I fixed it and added test case 543 to
verify.
curl_easy_setopt() that alters how libcurl functions when following
redirects. It makes libcurl obey the RFC2616 when a 301 response is received
after a non-GET request is made. Default libcurl behaviour is to change
method to GET in the subsequent request (like it does for response code 302
- because that's what many/most browsers do), but with this CURLOPT_POST301
option enabled it will do what the spec says and do the next request using
the same method again. I.e. keep POST after 301.
The curl tool got this option as --post301
Test case 1011 and 1012 were added to verify.
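A minimal sketch of enabling the new behaviour (URL and post data are
illustrative):

  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/form");
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "name=daniel");
  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  /* obey RFC2616: keep the POST method when a 301 is followed */
  curl_easy_setopt(curl, CURLOPT_POST301, 1L);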
CURLOPT_NOBODY enabled but not CURLOPT_HEADER, libcurl wouldn't do TYPE
before it does SIZE which makes it less useful. I walked over the code and
made it do this properly, and added test case 542 to verify it.
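A sketch of the use case that now works (host name illustrative): getting
only the size of a remote FTP file, with neither body nor headers in the
output:

  double size;
  curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.bin");
  curl_easy_setopt(curl, CURLOPT_NOBODY, 1L); /* no CURLOPT_HEADER needed */
  if(curl_easy_perform(curl) == CURLE_OK)
    if(curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD,
                         &size) == CURLE_OK)
      printf("remote file size: %.0f bytes\n", size);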
o It looks for the NSS database first in the environment variable SSL_DIR,
then in /etc/pki/nssdb, then it initializes with no database if neither of
those exist.
o If the NSS PKCS#11 libnsspem.so driver is available then PEM files may be
loaded, including the ca-bundle. If it is not available then only
certificates already in the NSS database are used.
o Tries to detect whether a file or nickname is being passed in so the right
thing is done.
o Added a bit of code to make the output more like the OpenSSL module,
including displaying the certificate information when connecting in
verbose mode
o Improved handling of certificate errors (expired, untrusted, etc.)
The libnsspem.so PKCS#11 module is currently only available in Fedora
8/rawhide. Work will be done soon to upstream it. The NSS module will work
with or without it, all that changes is the source of the certificates and
keys.
key was specified and there was no HOME environment variable, and then it
didn't continue to try the other auth methods. Now it will instead try to
get the files id_dsa.pub and id_dsa from the current directory if none of
the two conditions were met.
- Bug report #1792649 (http://curl.haxx.se/bug/view.cgi?id=1792649) pointed
out a problem with doing an empty upload over FTP on a re-used connection.
I added test case 541 to reproduce it and to verify the fix.
- I noticed while writing test 541 that the FTP code wrongly did a CWD on the
second transfer, as it didn't store and remember the "" path from the
previous transfer and would instead CWD to the entry path as stored. This
worked, but issued a superfluous command. Thus, test case 541 now also
verifies
this fix.
and allow reuse by multiple protocols. Several unused error codes were
removed. In all cases, macros were added to preserve source (and binary)
compatibility with the old names. These macros are subject to removal at
a future date, but probably not before 2009. An application can be
tested to see if it is using any obsolete code by compiling it with the
CURL_NO_OLDIES macro defined.
Documented some newer error codes in libcurl-error(3).
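A sketch of the idea, using the error code rename mentioned above (code
like this only builds as long as the compatibility macro exists, so
compiling with CURL_NO_OLDIES defined flags the obsolete name):

  #include <curl/curl.h>

  static const char *describe(CURLcode rc)
  {
    /* old name - this line breaks the build when CURL_NO_OLDIES is
       defined, pointing out that CURLE_PEER_FAILED_VERIFICATION is the
       name to use instead */
    if(rc == CURLE_SSL_PEER_CERTIFICATE)
      return "the peer certificate could not be verified";
    return curl_easy_strerror(rc);
  }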
out that libcurl didn't deal with large responses from server commands, when
a single response consisted of multiple lines but had a total size of
16KB or more. Dan Fandrich improved the ftp test script and provided test
case 1006 to repeat the problem, and I fixed the code to make sure this new
test case runs fine.
out that doing first a file:// upload and then an FTP upload crashed libcurl
or at best caused furious valgrind complaints. Fixed now by making sure we
free and clear the file-specific struct properly when done with it.
out that libcurl didn't deal with very long (>16K) FTP server response lines
properly. Starting now, libcurl will chop them off (thus the client app will
not get the full line) but survive and deal with them fine otherwise. Test
case 1003 was added to verify this.
(http://curl.haxx.se/bug/view.cgi?id=1776232) about libcurl calling
Curl_client_write(), passing on a const string that the caller may not
modify and yet it does (on some platforms).
(http://curl.haxx.se/bug/view.cgi?id=1776235) about how ftp requests with
NOBODY on a directory would do a "SIZE (null)" request. This is now fixed and
test
case 1000 was added to verify.
the configure script checks for openldap and friends and we link with those
libs just like we link all other third party libraries, and we no longer
dlopen() those libraries. Our private header file lib/ldap.h was renamed to
lib/curl_ldap.h due to this. I set a tag in CVS (curl-7_17_0-preldapfix)
just before this commit, just in case.
(http://curl.haxx.se/bug/view.cgi?id=1766320) pointing out that the libcurl
code accessed two curl_easy_setopt() options (CURLOPT_DNS_CACHE_TIMEOUT and
CURLOPT_DNS_USE_GLOBAL_CACHE) as ints even though they're documented to be
passed in as longs, and that makes a difference on 64 bit architectures.
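For example, the documented long type must really be used, since an int and
a long differ in size on many 64 bit architectures (values illustrative):

  /* pass longs, never ints - it matters on 64 bit architectures */
  curl_easy_setopt(curl, CURLOPT_DNS_CACHE_TIMEOUT, 60L);
  curl_easy_setopt(curl, CURLOPT_DNS_USE_GLOBAL_CACHE, 0L);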
after 7.16.2. This is largely due to the different treatment file:// gets
internally, but now I added test 231 to make it less likely to happen again
without us noticing!
passed to it with curl_easy_setopt()! Previously it has always just referred
to the data, forcing the user to keep the data around until libcurl is done
with it. That is now history and libcurl will instead clone the given
strings and keep private copies.
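A sketch of what the change allows (function and URL are hypothetical): a
short-lived buffer can now safely be handed to curl_easy_setopt():

  #include <stdio.h>
  #include <curl/curl.h>

  static void fetch_item(CURL *curl, int item)
  {
    char url[80];
    snprintf(url, sizeof(url), "http://example.com/item/%d", item);
    /* libcurl clones the string, so 'url' going out of scope is fine */
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_perform(curl);
  }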
NTLM, and he provided test code and a test server and we worked out a bug
fix. We failed to count sent body data at times, which then caused internal
confusion when libcurl tried to send the rest of the data in order to
keep the same connection alive.
(and then I did some minor reformatting of code in lib/http.c)
(http://curl.haxx.se/bug/view.cgi?id=1757328) and submitted a patch. It turns
out we broke login to FTP servers that don't require (nor understand) PASS
after the USER command.
a new directory listing format that newer libssh2's can provide. This
is probably NOT sufficient to handle all directory listing formats that
servers can provide and should be revisited.
(http://curl.haxx.se/bug/view.cgi?id=1750274) and submitted a patch for the
case where libcurl did a connect attempt to a non-listening port and didn't
provide a human readable error string back.
fail to connect if there is no Common Name field found in the remote cert.
We should deprecate support for setting this to 1 soon anyway, since the
feature is pointless and most likely never really used by anyone.
The tiny patch below fixes a bug (that I introduced :) which happens
when negotiating authentication with a proxy (probably with web
servers as well) that uses chunked transfer encoding for the 407 error
pages. In this case the 'ignorebody' flag was ignored (no pun
intended).
using one of the so-called 'right' time zones that take into account
leap seconds, which causes the tests to fail (as reported by
Daniel Black in bug report #1745964).
message for an scp:// upload failure. If libssh2 has his matching
patch, then the error message returned by the server will be used instead
of a more generic error.
hash function for different hashes, and also expanded the default size for
the socket hash table used in multi handles to greatly enhance speed when
very many connections are added and the socket API is used.
chunked encoding (that also lacks "Connection: close"). It now simply
assumes that the connection WILL be closed to signal the end, as that is how
RFC2616 section 4.4 point #5 says we should behave.
http://curl.haxx.se/mail/lib-2007-06/0238.html, libcurl didn't properly do
no-body requests on FTP files on re-used connections, or at least
it didn't provide the info back in the header callback properly in the
subsequent requests.
(http://curl.haxx.se/bug/view.cgi?id=1740263). Adam discovered that when
getting a large amount of URLs with curl, they were fetched slower and
slower... which turned out to be because of the --libcurl data collecting,
which was wrongly always enabled but no longer is...
(http://curl.haxx.se/bug/view.cgi?id=1739100) that mentioned that libcurl
could not actually list the contents of the root directory of a given FTP
server if the login directory isn't root. I fixed the problem and added three
test cases (one is disabled for now since I identified KNOWN_BUGS #44, we
cannot use --ftp-method nocwd and list ftp directories).
(http://curl.haxx.se/bug/view.cgi?id=1733119) and we collaborated on the fix.
The problem is that for 64bit HPUX builds, several socket-related functions
would still assume int (32 bit) arguments and not socklen_t (64 bit) ones.
- -s/--silent can now be used to toggle off the silence again if used a second
time.
Daniel S (5 June 2007)
- Added Daniel Black's work that adds the first few SOCKS test cases. I also
fixed two minor SOCKS problems to make the test cases run fine.
to find that it crashed miserably, and this was due to some select()isms left
in the code, caused by API restrictions in c-ares 1.3.x. With the upcoming
c-ares 1.4.0 this is no longer the case, so now libcurl runs much
better with c-ares and the multi interface with > 1024 file descriptors in
use.
overwrite in Curl_select(). While fixing it, I also improved its performance
somewhat by changing calloc to malloc and breaking out of a loop earlier
(when possible).
(http://curl.haxx.se/bug/view.cgi?id=1705802), which was filed by Daniel
Black identifying several FTP-SSL test cases fail when we build libcurl with
NSS for TLS/SSL. Listed as #42 in KNOWN_BUGS.
pointed out that the warnf() function in the curl tool didn't properly deal
with the cases when excessively long words were used in the string to chop
up.
peer's name in the SSL certificate when built for OpenSSL. The leak happens
for libcurls with CURL_DOES_CONVERSIONS enabled that fail to convert the CN
name from UTF8.
bug report #1715394 (http://curl.haxx.se/bug/view.cgi?id=1715394), and the
transfer-related info "variables" were indeed overwritten with zeroes wrongly
and have now been adjusted. The upload size still isn't accurate.
because I just made SCP uploads return this value if the file size of
the upload file isn't given with CURLOPT_INFILESIZE*. Docs updated to
reflect this news, and a define for the old name was added to the public
header file.
when CURLOPT_HTTP200ALIASES is used to avoid the problem mentioned below is
not very nice if the client wants to be able to use _either_ an HTTP 1.1
server or one within the aliases list... so starting now, libcurl will
simply consider 200-alias matches to be HTTP 1.0 compliant.
libcurls, which turned out to be the 25-nov-2006 change which treats HTTP
responses without Content-Length or chunked encoding as without bodies. We
now added the conditional that the above mentioned response is only without
body if the response is HTTP 1.1.
when CURLM_CALL_MULTI_PERFORM is returned from curl_multi_socket*/perform,
to make applications using only curl_multi_socket() function properly
when adding easy handles "on the fly". Bug report and test app provided by
Michael Wallner.
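The pattern an application should follow, sketched (the socket variable is
illustrative):

  int running;
  CURLMcode mc;
  do
    mc = curl_multi_socket(multi, sockfd, &running);
  while(mc == CURLM_CALL_MULTI_PERFORM); /* call again at once if told so */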
since it then inits libgcrypt and libgcrypt is being evil and EXITS the
application if it fails to get a fine random seed. That's really not a nice
thing to do by a library.
been removed from a multi handle, and then fixed another flaw that prevented
curl_easy_duphandle() from working even after the first fix - the handle was
still marked as using the multi interface.
was 16385 bytes (16K+1) and it turned out we didn't properly always "suck
out" all data from libssh2. The effect being that libcurl would hang on the
socket waiting for data when libssh2 had in fact already read it all...
the CURLOPT_RESUME_FROM or CURLOPT_RANGE options and an existing connection
in the connection cache is closed to make room for the new one when you call
curl_easy_perform(). It would then wrongly free range-related data in the
connection close function.
identifying a double-free problem in the SSL-dealing layer, telling GnuTLS to
free NULL credentials on closedown after a failure and a bad #ifdef for NSS
when closing down SSL.
Curl_socket_ready(), Curl_poll() and Curl_select() when these are called
with a zero timeout or a timeout value indicating a blocking call should
be performed.
These unnecessary calls to gettimeofday() got introduced in 7.16.2 when
fixing 'timeout would restart when signal caught while awaiting socket
events' on 20 March 2007.
- Move some loop breaking logic from the while clause into the loop,
avoiding the compiler warning 'assignment within conditional expression'.
function that deprecates the curl_multi_socket() function. Using the new
function, the application tells libcurl what action was found on the
socket that it passes in. This gives a significant performance boost as it
allows libcurl to avoid a call to poll()/select() for every call to
curl_multi_socket*().
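A sketch of the difference: when the app's event loop already knows what
happened on a socket, it now passes that knowledge along instead of having
libcurl figure it out again (shown for a socket reported readable):

  int running;
  /* CURL_CSELECT_IN tells libcurl the socket is readable, so it can skip
     its own poll()/select() call */
  curl_multi_socket_action(multi, sockfd, CURL_CSELECT_IN, &running);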
by letting configure check for setmode and ifdef on HAVE_SETMODE. NOTE: non-
configure platforms that have setmode() need their hard-coded config.h files
fixed. I fixed the src/config-win32.h.
or Curl_poll() with a non-zero timeout both functions would restart the
specified timeout. This could even lead to the extreme case that if a
signal arrived at a frequency lower than the specified timeout, neither
function would ever exit.
Added experimental symbol definition check CURL_ACKNOWLEDGE_EINTR in
Curl_select() and Curl_poll(). When compiled with CURL_ACKNOWLEDGE_EINTR
defined both functions will return as soon as a signal is caught. Use it
at your own risk, all calls to these functions in the library should be
revisited and checked before fully supporting this feature.
more frequently, allowing the same calling frequency for the client progress
callback, while keeping the once-a-second frequency for speed calculations
and internal display of the transfer progress.
makefiles that are included in the source release archives, generated from
the Makefile.vc6 files by the maketgz script. I also modified the root
Makefile to have a VC variable that defaults to vc6 but can be overridden to
allow it to be used for vc8 as well. Like this:
nmake VC=vc8 vc
server through a proxy and have the remote https server port set using the
CURLOPT_PORT option, protocol gets reset to http from https after the first
request.
User defined URL was modified internally by libcurl and subsequent reuse of
the easy handle may lead to connection using a different protocol (if not
originally http).
I found that libcurl hardcoded the protocol to "http" when it tries to
regenerate the URL if CURLOPT_PORT is set. I tried to fix the problem as
follows and it's working fine so far.
fixing some bugs:
o Don't mix GET and POST requests in a pipeline
o Fix the order in which requests are dispatched from the pipeline
o Fixed several curl bugs with pipelining when the server is returning
chunked encoding:
* Added states to chunked parsing for final CRLF
* Rewind buffer after parsing chunk with data remaining
* Moved chunked header initializing to a spot just before receiving
headers
the multi interface and connection re-use that could make a
curl_multi_remove_handle() ruin a pointer in another handle.
The second problem was less of an actual problem but more of a minor quirk:
the re-using of connections wasn't properly checking if the connection was
marked for closure.
to the debug callback.
- Shmulik Regev added CURLOPT_HTTP_CONTENT_DECODING and
CURLOPT_HTTP_TRANSFER_DECODING that if set to zero will disable libcurl's
internal decoding of content or transfer encoded content. This may be
preferable in cases where you use libcurl for proxy purposes or similar. The
command line tool got a --raw option to disable both at once.
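A minimal sketch of the libcurl side for such a proxy-style application:

  /* hand over the received data exactly as it arrived on the wire */
  curl_easy_setopt(curl, CURLOPT_HTTP_CONTENT_DECODING, 0L);
  curl_easy_setopt(curl, CURLOPT_HTTP_TRANSFER_DECODING, 0L);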
and CURLOPT_CONNECTTIMEOUT_MS that, as their names should hint, do the
timeouts with millisecond resolution instead. The only restriction to that
is the alarm() (sometimes) used to abort name resolves as that uses full
seconds. I fixed the FTP response timeout part of the patch.
Internally we now count and keep the timeouts in milliseconds, which also
means we multiply the set timeouts by 1000. The effect of this is that no
timeout can be set to more than 2^31 milliseconds (on 32 bit systems), which
equals 24.86 days. We probably couldn't go higher before either, since the
code already did *1000 on the timeout values in several places.
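For example (values illustrative):

  /* give up connecting after 2500 ms, and the whole transfer after 9500 */
  curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT_MS, 2500L);
  curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 9500L);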
fail since they used "1 feb 2007"...
- Manfred Schwarb reported that socks5 support was broken and helped us
pinpoint the problem. The code now tries harder to use httproxy and proxy
where appropriate, as not all proxies are HTTP...
ordinary curl command line, and you will get libcurl-using source code
written to the file that does the equivalent operation of what your command
line operation does!
#1
There's a compilation error in http_ntlm.c if USE_NTLM2SESSION is NOT
defined. I noticed this while testing various configurations. Line 867 of
the current http_ntlm.c is a closing bracket for an if/else pair that only
gets compiled in if USE_NTLM2SESSION is defined. But this closing bracket
wasn't in an #ifdef so the code failed to compile unless USE_NTLM2SESSION was
defined. Lines 198 and 140 of my patch wrap that closing bracket in an
#ifdef USE_NTLM2SESSION.
#2
I noticed several picky compiler warnings when DEBUG_ME is defined. I've
fixed them with casting. By the way, DEBUG_ME was a huge help in
understanding this code.
#3
Hopefully the last non-ASCII conversion patch for libcurl in a while. I
changed the "NTLMSSP" literal to hex since this signature must always be in
ASCII.
Conversion code was strategically added where necessary. And the
Curl_base64_encode calls were changed so the binary "blobs" http_ntlm.c
creates are NOT translated on non-ASCII platforms.
are not, due mainly to the lack of support for XML character entities
(e.g. &amp; => & ). This will make it easier to validate test files using
tools like xmllint, as well as edit and view them using XML tools.
doing an FTP transfer is removed from a multi handle before completion. The
fix also fixed the "alive counter" to be correct on "premature removal" for
all protocols.
non-ASCII platforms. It does add some complexity, most notably with more
#ifdefs, but I want to see this support added and I can't see how we can
add it without the extra stuff.
curl that uses the new CURLOPT_FTP_SSL_CCC option in libcurl. If enabled, it
will make libcurl shut down SSL/TLS after the authentication is done on an
FTP-SSL operation.
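A sketch of the libcurl usage (URL illustrative); with CURLFTPSSL_CCC_PASSIVE
libcurl waits for the server to initiate the SSL shutdown, while
CURLFTPSSL_CCC_ACTIVE makes libcurl initiate it:

  curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file");
  curl_easy_setopt(curl, CURLOPT_FTP_SSL, CURLFTPSSL_CONTROL);
  /* drop the SSL/TLS layer on the control connection after auth */
  curl_easy_setopt(curl, CURLOPT_FTP_SSL_CCC, CURLFTPSSL_CCC_PASSIVE);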
downloaded data in two buffers, just to be able to deal with a special HTTP
pipelining case. That is now only activated for pipelined transfers. In
Matt's case, it showed as a considerable performance difference.
(http://curl.haxx.se/bug/view.cgi?id=1603712) (known bug #36) --limit-rate
(CURLOPT_MAX_SEND_SPEED_LARGE and CURLOPT_MAX_RECV_SPEED_LARGE) are broken
on Windows (since 7.16.0, but that's when they were introduced as previous
to that the limiting logic was made in the application only and not in the
library). It was actually also broken on select()-based systems (as opposed
to poll()) but we haven't had any such reports. We now use select(), Sleep()
or delay() properly to sleep a while without waiting for any input or
output when the rate limiting is activated with the easy interface.
get confused and not acknowledge the 'no_proxy' variable properly once it
had used the proxy and you re-used the same easy handle. I made sure the
proxy name is properly stored in the connect struct rather than the
sessionhandle/easy struct.
(http://curl.haxx.se/bug/view.cgi?id=1618359) and subsequently provided a
patch for it: when downloading 2 zero byte files in a row, curl 7.16.0
enters an infinite loop, while curl 7.16.1-20061218 does one additional
unnecessary request.
Fix: During the "Major overhaul introducing http pipelining support and
shared connection cache within the multi handle." change, headerbytecount
was moved to live in the Curl_transfer_keeper structure. But that structure
is reset in the Transfer method, losing the information that we had about
the header size. This patch moves it back to the connectdata struct.
something went wrong like it got a bad response code back from the server,
libcurl would leak memory. Added test case 538 to verify the fix.
I also noted that the connection would get cached in that case, which
doesn't make sense since it cannot be re-used when the authentication has
failed. I fixed that issue too at the same time, and also that the path
would be "remembered" in vain for cases where the connection was about to
get closed.
(http://curl.haxx.se/bug/view.cgi?id=1603712) which is about connections
getting cut off prematurely when --limit-rate is used. While I found no such
problems in my tests nor in my reading of the code, I found that the
--limit-rate code was severely flawed (ever since it was moved into the lib
in 7.15.5) when used with the easy interface, and it didn't work as
documented, so I reworked it somewhat and now it works for my tests.
passing a curl_off_t argument to the Curl_read_rewind() function which takes
a size_t argument. Curl_read_rewind() also had debug code left in it and it
was put in a different source file with no good reason when only used from
one single spot.
no code present in the library that receives the option. Since it was not
possible to use, we know that no current users exist and thus we simply
removed it from the docs and made the code always use the default path.
(http://curl.haxx.se/bug/view.cgi?id=1604956) which identified setting
CURLOPT_MAXCONNECTS to zero caused libcurl to SIGSEGV. Starting now, libcurl
will always internally use no less than 1 entry in the connection cache.
(http://curl.haxx.se/bug/view.cgi?id=1230118) curl_getdate() did not work
properly for all input dates on Windows. It was mostly seen on some TZ time
zones using DST. Luckily, Martin also provided a fix.
(http://curl.haxx.se/bug/view.cgi?id=1600447) in which he noted that active
FTP connections don't work with the multi interface. The problem here is that
the multi interface state machine has a state during which it can wait for the
data connection to connect, but the active connection is not done in the same
step in the sequence as the passive one is so it doesn't quite work for
active. The active FTP code still uses a blocking function to allow the
remote server to connect.
The fix (work-around is a better word) for this problem is to prematurely set
the boolean that says the data connection is completed, so that the "wait
for connect" phase ends at once.
HTTP upload was disconnected:
"What appears to be happening is that my system (Linux 2.6.17 and 2.6.13) is
setting *only* POLLHUP on poll() when the conditions in my previous mail
occur. As you can see, select.c:Curl_select() does not check for POLLHUP. So
basically what was happening, is poll() was returning immediately (with
POLLHUP set), but when Curl_select() looked at the bits, neither POLLERR or
POLLOUT was set. This still caused Curl_readwrite() to be called, which
quickly returned. Then the transfer() loop kept continuing at full speed
forever."
header in a third format, not supported by libcurl, and we agreed that we
could make the parser more forgiving and accept all three found
variations.
responded with a single status line and no headers nor body. Starting now, a
HTTP response on a persistent connection (i.e. not set to be closed after the
response has been taken care of) must have Content-Length or chunked
encoding set, or libcurl will simply assume that there is no body.
To my horror I learned that we had no less than 57(!) test cases that did bad
HTTP responses like this, and even the test http server (sws) responded badly
when the test system queried it to verify that it was the test server. So
although the
actual fix for the problem was tiny, going through all the newly failing test
cases got really painful and boring.
KNOWN_BUGS #25, which happens when a proxy closes the connection when
libcurl has sent CONNECT, as part of an authentication negotiation. Starting
now, libcurl will re-connect accordingly and continue the authentication as
it should.
case when 401 or 407 are returned, *IF* no auth credentials have been given.
The CURLOPT_FAILONERROR option is not possible to make fool-proof for 401
and 407 cases when auth credentials are given, but we've now covered this
somewhat more.
You might get some amounts of headers transferred before this situation is
detected, like for when a "100-continue" is received as a response to a
POST/PUT and a 401 or 407 is received immediately afterwards.
Added test 281 to verify this change.
cross-compiling) in order to detect problems with run-time libraries that
otherwise would occur when the sizeof tests for curl_off_t would run and
thus be much more confusing to users. The check of course should run after
all lib-checks are done and before any other test is used that would run an
executable built for testing-purposes.
to the CURLOPT_DEBUGFUNCTION callback has been fixed (it was erroneously
included as part of the header). A message was also added to the
command line tool to show when data is being sent, enabled when
--verbose is used.
and while doing so it became apparent that the current timeout system for
the socket API really was a bit awkward, since it became quite some work to
be sure we had the correct timeout set.
Jeff then provided the new CURLMOPT_TIMERFUNCTION that is yet another
callback the app can set to get to know when the general timeout time
changes and thus for an application like hiperfifo.c it makes everything a
lot easier and nicer. There's a CURLMOPT_TIMERDATA option too of course in
good old libcurl tradition.
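A sketch of such a timer callback (the app struct and its timer update
helper are hypothetical):

  /* libcurl calls this whenever the time until its nearest timeout
     changes; a timeout_ms of -1 asks the app to delete its timer */
  static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
  {
    struct app *app = userp;
    (void)multi;
    app_update_timer(app, timeout_ms); /* hypothetical app helper */
    return 0;
  }

  /* during setup: */
  curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);
  curl_multi_setopt(multi, CURLMOPT_TIMERDATA, app);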