It makes no difference from curl's point of view but
makes it more convenient to use the tests with an
lws-normalizing proxy between curl and the test server.
Consistently use CRLF instead. The mixed line endings weren't
documented, so I assume they were unintentional.
This change doesn't matter for curl itself but makes using
the tests with a proxy between curl and the test server
more convenient.
Tests that consistently use no carriage returns were
left unmodified as one can easily work around this.
DNS cache entries populated with CURLOPT_RESOLVE were not properly freed
again when done using the multi interface.
Test case 1502 added to verify.
Bug: http://curl.haxx.se/bug/view.cgi?id=3575448
Reported by: Alex Gruz
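For illustration, a minimal sketch (hypothetical host name and address) of
pre-populating the DNS cache with CURLOPT_RESOLVE and driving the handle
through the multi interface, which is the combination this fix covers:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *easy = curl_easy_init();
    CURLM *multi = curl_multi_init();
    int running = 1;

    /* pre-populate the DNS cache: "host:port:address" (example values) */
    struct curl_slist *resolve =
      curl_slist_append(NULL, "example.com:80:127.0.0.1");

    curl_easy_setopt(easy, CURLOPT_RESOLVE, resolve);
    curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");

    curl_multi_add_handle(multi, easy);
    while(running)
      curl_multi_perform(multi, &running);  /* simplified drive loop */

    /* the cache entries added above must be freed again on cleanup,
       which is what this fix ensures for the multi interface */
    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    curl_slist_free_all(resolve);
    return 0;
  }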
If we use memory functions (malloc, free, strdup, etc.) in C sources in
libcurl and fail to include curl_memory.h or memdebug.h, we either
fail to properly support user-provided memory callbacks or the memory
leak tracking of the test suite fails.
After Ajit's report of a failure in the first category in http_proxy.c,
I spotted a few in the second category as well. These problems are now
tested for by test 1132, which runs a perl program that scans the source
code and checks that the correct include files are used wherever a
memory-related function is used.
Reported by: Ajit Dhumale
Bug: http://curl.haxx.se/mail/lib-2012-11/0125.html
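For illustration, the include pattern in libcurl C sources looks roughly
like this (a sketch of the convention, not any specific file):

  /* libcurl source file layout (sketch) */
  #include "curl_setup.h"

  /* ... system and other libcurl headers ... */

  #include "curl_memory.h"  /* maps malloc/free etc to user callbacks */
  /* The last #include file should be: */
  #include "memdebug.h"     /* memory leak tracking for the test suite */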
The existing logic only cut off the fragment from the separate 'path'
buffer which is used when sending HTTP to hosts. The buffer that held
the full URL used for proxies was not dealt with. It is now.
Test case 5 was updated to use a fragment on a URL over a proxy.
Bug: http://curl.haxx.se/bug/view.cgi?id=3579813
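As an illustration (hypothetical proxy and URL), with the fix the fragment
is stripped from the full URL sent in the proxy request line as well:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();

    /* the "#section" fragment must not be sent on the wire, neither in
       the path nor in the proxy request line
       "GET http://example.com/index.html HTTP/1.1" */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "http://example.com/index.html#section");
    curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:3128");
    curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);

    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return 0;
  }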
As pointed out in Bug report #3579064, curl_multi_perform() would
wrongly use a blocking mechanism internally for some commands, which
could lead to a very long block if, for example, the LIST response never
arrived.
The solution was to make sure to properly continue to use the multi
interface non-blocking state machine.
The new test 1501 verifies the fix.
Bug: http://curl.haxx.se/bug/view.cgi?id=3579064
Reported by: Guido Berhoerster
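For reference, a sketch of driving such a transfer without blocking, using
the multi interface's fdset/select loop (helper name and surrounding setup
are illustrative):

  #include <curl/curl.h>
  #include <sys/select.h>

  static void drive(CURLM *multi)
  {
    int running = 1;

    while(running) {
      fd_set rd, wr, ex;
      int maxfd = -1;
      long timeout_ms = 1000;
      struct timeval tv;

      FD_ZERO(&rd); FD_ZERO(&wr); FD_ZERO(&ex);
      curl_multi_fdset(multi, &rd, &wr, &ex, &maxfd);
      curl_multi_timeout(multi, &timeout_ms);
      if(timeout_ms < 0)
        timeout_ms = 1000;

      tv.tv_sec = timeout_ms / 1000;
      tv.tv_usec = (timeout_ms % 1000) * 1000;

      /* when maxfd is -1, a short sleep is recommended; omitted here */
      if(maxfd >= 0)
        select(maxfd + 1, &rd, &wr, &ex, &tv);

      /* with the fix this returns quickly even while the FTP LIST
         response is still pending, instead of blocking internally */
      curl_multi_perform(multi, &running);
    }
  }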
The test would hang and get aborted with an "ABORTING TEST, since it
seems that it would have run forever." message until I prevented that
from happening.
I also fixed the data file, which got broken CRLF line endings when I
sucked down the patch from Joe's repo == my fault.
Removed #37 from KNOWN_BUGS as this fix and test case verify exactly
this.
I suspect this is a regression introduced in commit 207cf150, included
since 7.24.0.
Avoid showing '(nil)' as hostname in verbose output by making sure the
hostname fixup function is called early enough to set the pointers that
are used for this. The name data is set again for each request, even for
re-used connections, to handle multiple host names over the same
connection (like with a proxy) or the case where the casing etc of the
host name changes between requests (which has proven to be important at
least once in the past).
Test 1011 was modified to use a redirect with a re-used connection
since it then showed the bug and now no longer does. There's currently
no easy way to have the test suite detect 'nil' texts in verbose output
so no test will detect if this problem gets reintroduced.
Bug: http://curl.haxx.se/mail/lib-2012-07/0111.html
Reported by: Gisle Vanem
Fix a bug where closed sockets (fd -1) were left in the all_sockets
list because of missing parens in a pointer arithmetic expression.
Re-enable the tests that were locking up due to this bug.
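Purely as a hypothetical illustration of this bug class (not the actual
test server code): unary '*' binds tighter than '+', so missing parens
turn an intended element access into arithmetic on the first element's
value:

  #include <curl/curl.h>  /* for curl_socket_t */

  /* hypothetical example, not the real code */
  static curl_socket_t pick(curl_socket_t *all_sockets, int i)
  {
    /* wrong: parses as (*all_sockets) + i, arithmetic on entry 0 */
    /* curl_socket_t s = *all_sockets + i; */

    /* intended: element i, i.e. all_sockets[i] */
    return *(all_sockets + i);
  }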
Two commits ago, we fixed a bug where the connection would be closed
prematurely after a HEAD. Now I added connection-monitor to test 48,
added a second HEAD and made sure that both are sent over the same
connection.
This triggered a failure before the bug fix and now works. Will help us
avoid a future regression of this kind.
This makes verifying easier and makes us more sure that curl closes the
connection only at the correct point in time. Adjusted tests 206 and 1008
accordingly and updated the docs for it.
Since cookies are sorted by the length of their paths, having them all
use the same path length makes the test depend on which order the
qsort() implementation puts them in, as seen in the windows/msys output
posted by Guenter in this posting:
http://curl.haxx.se/mail/lib-2012-07/0105.html
Test 1008 and 206 don't show the disconnect since it happens when SWS
awaits a new request, but 503 does and so the verify section needs that
string added.
Using this, the server will output in the protocol log when the
connection gets disconnected and thus we will verify correctly in the
test cases that the connection doesn't get closed prematurely. This is
important for example for NTLM to work.
Documentation added to FILEFORMAT, test 503 updated to use this.
With this commit, checks previously done in test2017 are now done in test2018.
The whole range test2017 to test2022 is DISABLED until configure is capable
of requiring a new-enough metalink library.
Don't try these without the mentioned check in place!
Print "parsing (...) OK" only when no warnings are generated. If
no file is found in Metalink, treat it FAILED.
If no digest is provided, print WARNING in parse_metalink().
Also print validating FAILED after download.
These changes make tests 2012 to 2016 pass.
Verify that the "Saved to filename 'blabla'" message is only displayed when
the 'blabla' filename being used _actually_ has been specified by the server
in the Content-Disposition header.
Use relative path for unintended file creation postcheck.
Introduce SUPPORTCAPA and SUPPORTAUTH config commands to allow further
pop3 test server expansion for tests that require CAPA or AUTH support,
although this will need some extra work to make it fully functional.
Added support for detecting the supported SASL authentication mechanisms
via the AUTH command. There are two ways of detecting them: either by
using the AUTH command, which will return -ERR if not supported, or by
using the CAPA command, which will return SASL and the list of mechanisms
if supported, omit SASL if SASL authentication is not supported, or
return -ERR if the CAPA command itself is not supported. As such it seems
simpler to use the AUTH command and fall back to normal clear text
authentication if the command is not supported.
Additionally updated the test cases to return -ERR when the AUTH command
is encountered. Additional test cases will be added when support for the
individual authentication mechanisms is added.
To achieve this, a new structure HeaderData is first defined to hold the
data necessary to perform header-related work, and tool_header_cb now
receives a HeaderData pointer as userdata. All header-related work
(currently, header dumping and Content-Disposition inspection) is done
in this callback function. HeaderData.outs->config is used to determine
whether each piece of work is done.
Unit tests were also updated because after this change, curl code always
sets CURLOPT_HEADERFUNCTION and CURLOPT_HEADERDATA.
Tested with -O -J -D, -O -J -i and -O -J -D -i and all worked fine.
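A minimal sketch of the pattern (the names below are illustrative, not
the tool's actual code): a user struct is passed via CURLOPT_HEADERDATA
and all header processing happens in the callback:

  #include <stdio.h>
  #include <strings.h>  /* strncasecmp (POSIX) */
  #include <curl/curl.h>

  /* illustrative stand-in for the tool's HeaderData */
  struct header_data {
    FILE *dump;            /* where -D style dumping goes */
    int saw_content_disp;  /* Content-Disposition seen? */
  };

  static size_t header_cb(char *ptr, size_t size, size_t nmemb,
                          void *userdata)
  {
    struct header_data *hd = userdata;
    size_t len = size * nmemb;

    if(hd->dump)
      fwrite(ptr, 1, len, hd->dump);  /* header dumping */

    if(len >= 20 && !strncasecmp(ptr, "Content-Disposition:", 20))
      hd->saw_content_disp = 1;       /* -J style inspection */

    return len;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    struct header_data hd = { stdout, 0 };

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, header_cb);
    curl_easy_setopt(curl, CURLOPT_HEADERDATA, &hd);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return 0;
  }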
When doing a chunked-encoded POST with -d (CURLOPT_POSTFIELDS) and the
size of the POST was zero length, libcurl would first send a zero
chunk and then the terminating one. This could confuse a receiver; it
should rather just send the terminating chunk, as it does with this fix.
Test case 1333 is added to verify.
Bug: http://curl.haxx.se/mail/archive-2012-04/0060.html
Reported by: Arnaud Compan
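A sketch of the scenario (hypothetical URL): a zero-length POST forced to
use chunked encoding should emit only the terminating "0\r\n\r\n" chunk:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();

    /* force chunked transfer-encoding for the request body */
    struct curl_slist *hdrs =
      curl_slist_append(NULL, "Transfer-Encoding: chunked");

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "");  /* zero-length body */

    /* with the fix, only the terminating chunk is sent here */
    curl_easy_perform(curl);

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    return 0;
  }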
With FOLLOWLOCATION enabled, when a 3xx page was downloaded and the
download size was known (like with a Content-Length header), but the
subsequent URL (transferred after the 3xx page) was chunked encoded, the
previous "known download size" would linger and cause the progress
meter to get incorrect information, i.e. the former value would keep
being passed in. This could easily result in downloads that were WAY
larger than "expected" and would cause >100% outputs with the curl
command line tool.
Test case 599 was created and it was used to repeat the bug and then
verify the fix.
Bug: http://curl.haxx.se/bug/view.cgi?id=3510057
Reported by: Michael Wallner
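A sketch of how the problem showed up (hypothetical redirecting URL): a
progress callback observing dltotal across the redirect would see the
stale size from the 3xx response:

  #include <stdio.h>
  #include <curl/curl.h>

  static int progress_cb(void *clientp, double dltotal, double dlnow,
                         double ultotal, double ulnow)
  {
    (void)clientp; (void)ultotal; (void)ulnow;
    /* before the fix, dltotal could remain the 3xx page's Content-Length
       while dlnow counted the chunked follow-up transfer, giving >100% */
    fprintf(stderr, "downloaded %.0f of %.0f bytes\n", dlnow, dltotal);
    return 0;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/redirects-here");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
    curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, progress_cb);

    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return 0;
  }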
The line endings broke when I saved the three recent patches (my fault,
not Colin's) to 'git am' them.
Adjusted the stripping done on the test program's output before comparison
to also exclude the SSH key file name, as that will differ and use a local
path name.
With commit 035ef06bda applied, the test pop3 server needs to send
".\r\n" as the body terminating sequence and there needs to be a final
CRLF in the actual body in the test data file.
The proxy parser function strips trailing slashes off the proxy name,
which could lead to a mistaken zero-length proxy name that would be
treated as no proxy at all by subsequent functions!
This is now detected and an error is returned. Verified by the new test
1329.
Reported by: Chandrakant Bagul
Bug: http://curl.haxx.se/mail/lib-2012-02/0000.html
When CURLOPT_REFERER has been used, curl_easy_reset() did not properly
clear it.
Verified with the new test 598.
Bug: http://curl.haxx.se/bug/view.cgi?id=3481551
Reported by: Michael Day
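A sketch of what test 598 exercises (hypothetical URLs): after
curl_easy_reset(), no Referer: header should be sent on the next request:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/first");
    curl_easy_setopt(curl, CURLOPT_REFERER, "http://example.com/from-here");
    curl_easy_perform(curl);  /* sends a Referer: header */

    curl_easy_reset(curl);    /* must clear CURLOPT_REFERER too */

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/second");
    curl_easy_perform(curl);  /* no Referer: header expected */

    curl_easy_cleanup(curl);
    return 0;
  }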
We want to continue to the next URL to try even on failures returned
from libcurl. This makes -f with ranges still get subsequent URLs even
if occasional ones return errors. This was a regression, as it used to
work and broke in the 7.23.0 release.
Added test case 1328 to verify the fix.
Bug: http://curl.haxx.se/bug/view.cgi?id=3481223
Reported by: Juan Barreto
There's a new 'http-proxy' server for tests that runs on a separate port
and lets clients do HTTP CONNECT to other ports on the same host to
allow us to test HTTP "tunneling" properly.
Test cases now have a <proxy> section in <verify> to check that the
proxy protocol part matches correctly.
Test cases 80, 83, 95, 275, 503 and 1078 have been converted. Test 1316
was added.
When a HTTP connection was re-used for a subsequent request without a
proxy, it would always re-use the Host: header of the first request. As
host names are case insensitive, this would make curl send a host name
with a different casing than what the particular request used.
Now it will instead always use the most recent host name, to always use
the desired casing.
Added test case 1318 to verify.
Bug: http://curl.haxx.se/mail/lib-2011-12/0314.html
Reported by: Alex Vinnik
1- Two new error codes are introduced.
CURLE_FTP_ACCEPT_FAILED is set whenever ACCEPTing the connection from the
FTP server fails.
CURLE_FTP_ACCEPT_TIMEOUT is set whenever ACCEPTing times out.
Neither of these errors is considered fatal and the control connection
remains OK, because it could just be a firewall blocking the server from
connecting to the client.
2- One new setopt option was introduced:
CURLOPT_ACCEPTTIMEOUT_MS
It sets the maximum amount of time the FTP client will wait for the
server to connect. The internal default accept timeout is 60 seconds.
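A sketch of using the new option for an active FTP transfer (hypothetical
URL; "-" asks curl to pick the default interface address for PORT):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    CURLcode res;

    curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
    curl_easy_setopt(curl, CURLOPT_FTPPORT, "-");           /* active mode */
    curl_easy_setopt(curl, CURLOPT_ACCEPTTIMEOUT_MS, 5000L);

    res = curl_easy_perform(curl);
    if(res == CURLE_FTP_ACCEPT_TIMEOUT) {
      /* the server did not connect back within 5 seconds; the control
         connection is still usable */
    }
    curl_easy_cleanup(curl);
    return 0;
  }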
Test case 1315 was added to verify this functionality.
When passing in
multiple files to a single -F, the parser would get all confused if one
of the specified files had a custom type= assigned.
Reported by: Colin Hogben
Keep track of which sockets are the result of accept() calls and
refuse to call the closesocket callback for those sockets. Test case 596
now verifies that the open socket callback is called the same number of
times as the closed socket callback for active FTP connections.
Bug: http://curl.haxx.se/mail/lib-2011-12/0018.html
Reported by: Gokhan Sengun
When the new socket is created for an active connection, it is now done
using the open socket callback.
Test case 596 was modified to run fine, although it hides the fact that
the close callback is still called too many times, as it also gets
called for closing sockets that were created with accept().
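For reference, a sketch of registering the two socket callbacks that test
596 counts (callback names and URL are illustrative; with the newest fix
above, the open and close counts match for active FTP):

  #include <stdio.h>
  #include <unistd.h>      /* close() on POSIX */
  #include <sys/socket.h>  /* socket() */
  #include <curl/curl.h>

  static int opens, closes;

  static curl_socket_t open_cb(void *clientp, curlsocktype purpose,
                               struct curl_sockaddr *address)
  {
    (void)clientp; (void)purpose;
    opens++;
    return socket(address->family, address->socktype, address->protocol);
  }

  static int close_cb(void *clientp, curl_socket_t item)
  {
    (void)clientp;
    closes++;
    return close(item);
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();

    curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
    curl_easy_setopt(curl, CURLOPT_FTPPORT, "-");  /* active FTP */
    curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, open_cb);
    curl_easy_setopt(curl, CURLOPT_CLOSESOCKETFUNCTION, close_cb);

    curl_easy_perform(curl);
    curl_easy_cleanup(curl);

    /* sockets curl obtained via accept() are closed internally and are
       not reported to close_cb, so the two counters should end up equal */
    printf("opened %d, closed %d\n", opens, closes);
    return 0;
  }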