such as the CURLOPT_SSL_CTX_FUNCTION one treat that as if it were a Location:
following. The patch that introduced this feature was done for 7.11.0, but
this code and functionality have been broken since about 7.15.4 (March 2006)
with the introduction of non-blocking OpenSSL "connects". It was a hack to
begin with, and since it doesn't work, hasn't worked correctly for a long
time and nobody has even noticed, I consider it a very suitable subject for
plain removal. And so it was done.
"HttpOnly" feature introduced by Microsoft and apparently also supported by
Firefox: http://msdn2.microsoft.com/en-us/library/ms533046.aspx . HttpOnly
is now supported when received from servers in HTTP headers, when written to
cookie jars and when read from existing cookie jars.
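As a hedged illustration (cookie name, value and domain made up here), the
attribute shows up both in the response header and, in the Netscape-format
cookie jar, as a "#HttpOnly_" prefix on the domain field:

  Set-Cookie: sessionid=abc123; path=/; HttpOnly

  #HttpOnly_.example.com  TRUE  /  FALSE  0  sessionid  abc123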
previously had a number of flaws, perhaps most notably when an application
fired up N transfers at once, as they then wouldn't pipeline nearly as
nicely as anyone would think. Test case 530 was also updated to take the
improved functionality into account.
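A minimal sketch of that scenario, assuming two already-created and
configured easy handles; pipelining is switched on with the existing
CURLMOPT_PIPELINING option:

  #include <curl/curl.h>

  /* enable pipelining on the multi handle, then add several transfers
     at once; with the improved logic they can share one connection */
  static void start_pipelined(CURLM *multi, CURL *e1, CURL *e2)
  {
    curl_multi_setopt(multi, CURLMOPT_PIPELINING, 1L);
    curl_multi_add_handle(multi, e1);
    curl_multi_add_handle(multi, e2);
    /* ... then drive the transfers with curl_multi_perform() ... */
  }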
(http://curl.haxx.se/bug/view.cgi?id=1850730) I wrote up test case 552. The
test is doing a 70K POST with a read callback and an ioctl callback over a
proxy requiring Digest auth. The test case code is more or less identical to
the test recipe code provided by Spacen Jasset (who submitted the bug report).
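A hedged sketch of what such a transfer looks like with the libcurl API
(proxy address, credentials and data are made up; this is not the actual
test code): the ioctl callback lets libcurl rewind the upload data when
the proxy's Digest challenge forces the POST to be sent a second time.

  #include <string.h>
  #include <curl/curl.h>

  struct upload { const char *data; size_t len; size_t pos; };

  static size_t read_cb(char *buf, size_t size, size_t nitems, void *userp)
  {
    struct upload *u = userp;
    size_t room = size * nitems;
    size_t left = u->len - u->pos;
    if(left > room)
      left = room;
    memcpy(buf, u->data + u->pos, left);
    u->pos += left;
    return left;
  }

  /* called by libcurl when the upload must restart from the beginning */
  static curlioerr ioctl_cb(CURL *handle, int cmd, void *userp)
  {
    struct upload *u = userp;
    (void)handle;
    if(cmd != CURLIOCMD_RESTARTREAD)
      return CURLIOE_UNKNOWNCMD;
    u->pos = 0;
    return CURLIOE_OK;
  }

  int main(void)
  {
    struct upload u = { "field=data", 10, 0 };
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:3128");
      curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:secret");
      curl_easy_setopt(curl, CURLOPT_PROXYAUTH, CURLAUTH_DIGEST);
      curl_easy_setopt(curl, CURLOPT_POST, 1L);
      curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)u.len);
      curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
      curl_easy_setopt(curl, CURLOPT_READDATA, &u);
      curl_easy_setopt(curl, CURLOPT_IOCTLFUNCTION, ioctl_cb);
      curl_easy_setopt(curl, CURLOPT_IOCTLDATA, &u);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }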
callback) over a proxy when NTLM is used as auth with the proxy. The bug
also concerned Digest and was limited to transfers using a read callback
only. Spacen worked with us to provide a useful patch. I added test cases
547 and 548 to verify two variations of POST over proxy with NTLM.
is supposed to repeat the bug report "NTLM proxy authentication with
CURLOPT_READDATA seems broken." posted on the curl-library mailing list on
Dec 3, 2007.
This happened because the tftp code unconditionally did a bind() without
caring whether one had already been done, and then it failed. I wrote a
test case (1009) to verify this, but it is a bit error-prone since it has
to pick a fixed local port number, and since the tests are run on so many
different hosts in different situations I added it in a disabled state.
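A minimal sketch of the kind of guard this implies (struct and function
names made up for illustration; this is not the actual tftp.c code):

  #include <stdbool.h>
  #include <sys/socket.h>

  struct tftp_conn { int sockfd; bool bound; };  /* hypothetical state */

  /* bind the local socket only once per connection, so that a re-used
     connection doesn't fail on a second bind() call */
  static int tftp_bind_once(struct tftp_conn *c,
                            const struct sockaddr *addr, socklen_t len)
  {
    if(c->bound)
      return 0;                /* already bound by a previous transfer */
    if(bind(c->sockfd, addr, len) == -1)
      return -1;
    c->bound = true;
    return 0;
  }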
target called 'filecheck' so that if you run 'make filecheck' in this directory
it'll check if the local files are also mentioned in the Makefile.am so that
they are properly included in release archives!
function do wrong on all input bytes that are >= 0x80 (decimal 128) due to a
signed / unsigned mistake in the code. I fixed it and added test case 543 to
verify.
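A generic illustration of that pitfall (standalone sketch, not the libcurl
code): on platforms where plain 'char' is signed, bytes >= 0x80 become
negative when promoted, so comparisons and table lookups go wrong unless
the byte is cast to unsigned char first.

  #include <stdio.h>

  int main(void)
  {
    char c = (char)0xe4;  /* a high-bit byte, e.g. latin-1 a-umlaut */
    /* with a signed char this prints 0: the comparison goes wrong */
    printf("plain char:    %d\n", c >= 0x80);
    /* casting to unsigned char first prints 1, as intended */
    printf("unsigned char: %d\n", (unsigned char)c >= 0x80);
    return 0;
  }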
curl_easy_setopt() that alters how libcurl functions when following
redirects. It makes libcurl obey RFC2616 when a 301 response is received
after a non-GET request is made. The default libcurl behaviour is to change
method to GET in the subsequent request (like it does for response code 302,
because that's what many/most browsers do), but with this CURLOPT_POST301
option enabled it will do what the spec says and do the next request using
the same method again. I.e. keep POST after 301.
The curl tool got this option as --post301.
Test cases 1011 and 1012 were added to verify.
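A minimal sketch of enabling it (URL and form body made up; error handling
omitted):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/form");
      curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "name=daniel");
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
      /* keep the POST method even when a 301 redirect arrives */
      curl_easy_setopt(curl, CURLOPT_POST301, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }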
CURLOPT_NOBODY enabled but not CURLOPT_HEADER, libcurl wouldn't do TYPE
before it did SIZE, which made it less useful since the SIZE response
depends on the transfer type. I walked over the code and made it do this
properly, and added test case 542 to verify it.
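A sketch of the kind of use this helps, with a made-up URL: probing an FTP
file's size without downloading it.

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      double size;
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.bin");
      curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);  /* no data transfer */
      if(curl_easy_perform(curl) == CURLE_OK &&
         curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD,
                           &size) == CURLE_OK)
        printf("size: %.0f bytes\n", size);
      curl_easy_cleanup(curl);
    }
    return 0;
  }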
- Bug report #1792649 (http://curl.haxx.se/bug/view.cgi?id=1792649) pointed
out a problem with doing an empty upload over FTP on a re-used connection.
I added test case 541 to reproduce it and to verify the fix.
- I noticed while writing test 541 that the FTP code wrongly did a CWD on
the second transfer as it didn't store and remember the "" (empty) path
from the previous transfer, so it would instead CWD to the stored entry
path. This worked, but issued a superfluous command. Thus, test case 541
now also verifies this fix.
and allow reuse by multiple protocols. Several unused error codes were
removed. In all cases, macros were added to preserve source (and binary)
compatibility with the old names. These macros are subject to removal at
a future date, but probably not before 2009. An application can be
tested to see if it is using any obsolete code by compiling it with the
CURL_NO_OLDIES macro defined.
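For example, building an application like this turns any remaining use of
the obsolete names into a compile error, since the compatibility macros in
curl/curl.h are then skipped:

  /* define before including the libcurl header */
  #define CURL_NO_OLDIES
  #include <curl/curl.h>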
Documented some newer error codes in libcurl-error(3).
out that libcurl didn't deal with large responses to server commands, when
a single response consisted of multiple lines but had a total size of 16KB
or more. Dan Fandrich improved the ftp test script and provided test case
1006 to repeat the problem, and I fixed the code to make sure this new
test case runs fine.
out that libcurl didn't deal with very long (>16K) FTP server response lines
properly. Starting now, libcurl will chop them off (thus the client app will
not get the full line) but survive and deal with them fine otherwise. Test
case 1003 was added to verify this.
(http://curl.haxx.se/bug/view.cgi?id=1776235) about how ftp requests with
NOBODY on a directory would do a "SIZE (null)" request. This is now fixed
and test case 1000 was added to verify.
after 7.16.2. This is largely due to the different treatment file:// gets
internally, but now I added test 231 to make it less likely to happen again
without us noticing!
using one of the so-called 'right' time zones that take into account
leap seconds, which causes the tests to fail (as reported by
Daniel Black in bug report #1745964).
chunked encoding (that also lacks "Connection: close"). It now simply
assumes that the connection WILL be closed to signal the end, as that is how
RFC2616 section 4.4 point #5 says we should behave.
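As a made-up illustration, a response like this carries neither a
Content-Length header nor chunked framing, so the server closing the
connection is the only end-of-body signal available:

  HTTP/1.1 200 OK
  Content-Type: text/plain

  (body bytes follow until the server closes the connection)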
supports only ftps:// URLs with --ftp-ssl-control specified, which
implicitly encrypts the control channel but not the data channels. That
allows stunnel to be used with an unmodified ftp server in exactly the
same way that the test https server is set up.
Added test case 400 as a basic FTPS test.
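A hedged example of the sort of command this setup makes testable (host
and port made up):

  curl --ftp-ssl-control ftps://localhost:21000/verify/file.txt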
fixing some bugs:
o Don't mix GET and POST requests in a pipeline
o Fix the order in which requests are dispatched from the pipeline
o Fixed several curl bugs with pipelining when the server is returning
chunked encoding:
* Added states to chunked parsing for the final CRLF (see the sketch
after this list)
* Rewind buffer after parsing chunk with data remaining
* Moved chunked header initializing to a spot just before receiving
headers
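An illustrative sketch of why those states matter (made-up names, not
libcurl's actual parser): without explicit states for the trailing CRLFs,
bytes belonging to the next pipelined response in the same read buffer
would be misattributed to the chunked body.

  /* hypothetical chunked-decoder states, for illustration only */
  typedef enum {
    CHUNK_HEX,       /* reading the hexadecimal chunk-size line */
    CHUNK_DATA,      /* reading that many payload bytes */
    CHUNK_POSTCRLF,  /* the CRLF that ends each chunk's data */
    CHUNK_TRAILER,   /* optional trailer headers after the 0-size chunk */
    CHUNK_FINALCRLF, /* the very last CRLF of the message */
    CHUNK_DONE       /* remaining buffer bytes belong to the next
                        pipelined response and must be rewound */
  } chunk_state;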
are not, due mainly to the lack of support for XML character entities
(e.g. &amp; => &). This will make it easier to validate test files using
tools like xmllint, as well as edit and view them using XML tools.
something went wrong, like it got a bad response code back from the
server, libcurl would leak memory. Added test case 538 to verify the fix.
I also noted that the connection would get cached in that case, which
doesn't make sense since it cannot be re-used when the authentication has
failed. I fixed that issue too at the same time, and also fixed the case
where the path would be "remembered" in vain when the connection was
about to get closed.