# The curl Test Suite
# Running
## Requires to run
- perl (and a unix-style shell)
- python (and a unix-style shell, for SMB and TELNET tests)
- python-impacket (for SMB tests)
- diff (when a test fails, a diff is shown)
- stunnel (for HTTPS and FTPS tests)
- OpenSSH or SunSSH (for SCP, SFTP and SOCKS4/5 tests)
- nghttpx (for HTTP/2 tests)
- nroff (for --manual tests)
- An available `en_US.UTF-8` locale
### Installation of python-impacket
The Python-based test servers support both recent Python 2 and 3.
You can figure out your default Python interpreter with `python -V`.
Please install python-impacket in the correct Python environment.
You can use pip or your OS' package manager to install 'impacket'.
On Debian/Ubuntu the package names are:
- Python 2: 'python-impacket'
- Python 3: 'python3-impacket'
On FreeBSD the package names are:
- Python 2: 'py27-impacket'
- Python 3: 'py37-impacket'
On any system where pip is available:
- Python 2: 'pip2 install impacket'
- Python 3: 'pip3 install impacket'
You may also need to manually install the Python package 'six'
as that may be a missing requirement for impacket on Python 3.
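
For example, on a system where Python 3 is the default interpreter (package availability and the need for 'six' vary by platform, so treat this as a sketch):

```sh
# confirm which interpreter the test servers will pick up
python -V

# install impacket (and six, in case impacket needs it) for that interpreter
pip3 install impacket six
```
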
### Port numbers used by test servers
All test servers run on "random" port numbers. All tests should be written
to use suitable variables instead of fixed port numbers so that test cases
continue to work independently of which port numbers the test servers
actually use.
See [FILEFORMAT](FILEFORMAT.md) for the port number variables.
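
For example, a test references the HTTP server through variables such as `%HOSTIP` and `%HTTPPORT` instead of a literal address (the path is illustrative):

```
http://%HOSTIP:%HTTPPORT/1234
```
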
### Test servers
The test suite runs stand-alone servers on random ports to which it makes
requests. For SSL tests, it runs stunnel to handle encryption to the regular
servers. For SSH, it runs a standard OpenSSH server. For SOCKS4/5 tests SSH
is used to perform the SOCKS functionality and requires an SSH client and
server.
The listen port numbers for the test servers are picked randomly to allow
users to run multiple test cases concurrently and to not collide with other
existing services that might listen to ports on the machine.
The HTTP server supports listening on a Unix domain socket; the default
location is 'http.sock'.
### Run
`./configure && make && make test`. This builds the test suite support code
and invokes the 'runtests.pl' perl script to run all the tests. Edit the top
variables of that script in case you have some specific needs, or run the
script manually (after the support code has been built).
The script stops at the first test that fails. Use `-a` to prevent the
script from aborting on the first error. Run the script with `-v` for more
verbose output. Use `-d` to run the test servers with debug output enabled
as well. Specifying `-k` keeps all the log files generated by the test
intact.
Use `-s` for shorter output, or pass test numbers to run specific tests only
(like `./runtests.pl 3 4` to test 3 and 4 only). It also supports test case
ranges with 'to', as in `./runtests.pl 3 to 9` which runs the seven tests
from 3 to 9. Any test numbers prefixed with `!` are disabled, as are any
test numbers found in the files `data/DISABLED` or `data/DISABLED.local`
(one per line). The latter is meant for local temporary disables and is
ignored by git.
Test cases mentioned in `DISABLED` can still be run if `-f` is provided.
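
A few example invocations (test numbers are illustrative):

```sh
# run the whole suite, do not abort on failures, keep the log files
./runtests.pl -a -k

# run tests 3 and 4 only, with verbose output
./runtests.pl -v 3 4

# run tests 3 through 9, but skip test 5 (quoted to avoid shell history expansion)
./runtests.pl 3 to 9 '!5'
```
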
When `-s` is not present, each successful test will display on one line the
test number and description and on the next line a set of flags, the test
result, current test sequence, total number of tests to be run and an
estimated amount of time to complete the test run. The flags consist of
these letters describing what is checked in this test:

    s - stdout
    d - data
    u - upload
    p - protocol
    o - output
    e - exit code
    m - memory
    v - valgrind
### Shell startup scripts
Tests which use the ssh test server, i.e. the SCP/SFTP/SOCKS tests, might be
badly influenced by the output of system wide or user specific shell startup
scripts (.bashrc, .profile, /etc/csh.cshrc, .login, /etc/bashrc, etc.) that
print text messages or escape sequences on user login. When such startup
messages or escape sequences are output, they can corrupt the expected
stream of data that flows to the sftp-server or from the ssh client, which
can result in bad test behavior or even prevent the test server from
running.

If the test suite ssh or sftp server fails to start up and logs the message
'Received message too long', then you are almost certainly suffering from
the unwanted output of a shell startup script. Locate, clean up or adjust
the shell script.
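
A quick, hedged way to check for this is to run a non-interactive command over ssh against the affected account and confirm it produces no output at all (the host name is illustrative):

```sh
# anything printed here comes from a shell startup script and can corrupt SCP/SFTP tests
ssh localhost /bin/true
```
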
### Memory test
The test script will check that all allocated memory is freed properly IF
curl has been built with the `CURLDEBUG` define set. The script will
automatically detect if that is the case, and it will use the
'memanalyze.pl' script to analyze the memory debugging output.
Also, if you run tests on a machine where valgrind is found, the script uses
valgrind to run each test (unless you use `-n`) to further verify
correctness.
runtests.pl's `-t` option enables torture testing mode, which runs each test
many times, making a different memory allocation fail on each successive
run. This tests the out of memory error handling code to ensure that memory
leaks do not occur even in those situations. It can help to compile curl
with `CPPFLAGS=-DMEMDEBUG_LOG_SYNC` when using this option, to ensure that
the memory log file is properly written even if curl crashes.
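
A typical sequence to get these checks, assuming `--enable-debug` turns on the `CURLDEBUG` memory tracking (as it does in a default debug build):

```sh
./configure --enable-debug CPPFLAGS=-DMEMDEBUG_LOG_SYNC
make

# normal run; memory checks and valgrind are used where available
make test

# torture-test a single test case (number is illustrative)
cd tests && ./runtests.pl -t 1
```
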
### Debug
If a test case fails, you can conveniently get the script to invoke the
debugger (gdb) for you with the server running and the exact same command
line parameters that failed. Just invoke `runtests.pl <test number> -g` and
then just type 'run' in the debugger to perform the command through the
debugger.
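
For example (the test number is illustrative):

```sh
cd tests
./runtests.pl 46 -g
# then type 'run' at the gdb prompt
```
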
### Logs
All logs are generated in the `log/` subdirectory (which is emptied first by
the runtests.pl script). They remain in there after a test run.
### Test input files
All test cases are put in the `data/` subdirectory. Each test is stored in
the file named according to the test number.
See [FILEFORMAT.md](FILEFORMAT.md) for a description of the test case file
format.
### Code coverage
gcc provides a tool that can determine the code coverage figures for the
test suite. To use it, configure curl with `CFLAGS='-fprofile-arcs
-ftest-coverage -g -O0'`. Make sure you run both the normal and the torture
tests to get fuller coverage, i.e. do:

    make test
    make test-torture
The graphical tool ggcov can be used to browse the source and create
coverage reports on *NIX hosts:

    ggcov -r lib src
The text mode tool gcov may also be used, but it doesn't handle object files
in more than one directory very well.
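
Putting the pieces together, a coverage run could look like this (a sketch; the ggcov step requires that tool to be installed):

```sh
./configure CFLAGS='-fprofile-arcs -ftest-coverage -g -O0'
make

# run both test modes to exercise as much code as possible
make test
make test-torture

# browse the per-line coverage of the library and tool sources
ggcov -r lib src
```
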
### Remote testing
The runtests.pl script provides some hooks to allow curl to be tested on a
machine where perl cannot be run. The test framework in this case runs on
a workstation where perl is available, while curl itself is run on a remote
system using ssh or some other remote execution method. See the comments at
the beginning of runtests.pl for details.
## Test case numbering
Test cases used to be numbered by category ranges, but the ranges filled
up. Subsets of tests can now be selected by passing keywords to the
runtests.pl script via the make `TFLAGS` variable.
New tests are added by finding a free number in `tests/data/Makefile.inc`.
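
For example (the keyword is illustrative; the keywords of each test are listed in its test data file):

```sh
# run only the tests tagged with the FTP keyword
make test TFLAGS="FTP"

# or invoke the script directly
cd tests && ./runtests.pl FTP
```
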
## Write tests
Here's a quick description of how to write test cases. We basically have
three kinds of tests: the ones that test the curl tool, the ones that build
small applications and test libcurl directly, and the unit tests that test
individual (possibly internal) functions.
### test data
Each test has a master file that controls all the test data: what to read,
what the protocol exchange should look like, what exit code to expect, what
command line arguments to use, etc.
These files are `tests/data/test[num]` where `[num]` is just a unique
identifier described above, and the XML-like file format of them is
described in the separate [FILEFORMAT.md](FILEFORMAT.md) document.
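
As a rough sketch of the shape of such a file (the section names and the `%HOSTIP`/`%HTTPPORT` variables follow existing tests, but see [FILEFORMAT.md](FILEFORMAT.md) for the authoritative syntax):

```
<testcase>
<info>
<keywords>
HTTP
HTTP GET
</keywords>
</info>

# the canned response the test HTTP server sends back
<reply>
<data>
HTTP/1.1 200 OK
Content-Length: 6

hello
</data>
</reply>

# which server to start and which command line to run
<client>
<server>
http
</server>
<name>
simple HTTP GET (illustrative)
</name>
<command>
http://%HOSTIP:%HTTPPORT/1234
</command>
</client>

# what the outgoing request must look like for the test to pass
<verify>
<protocol>
GET /1234 HTTP/1.1
Host: %HOSTIP:%HTTPPORT
Accept: */*

</protocol>
</verify>
</testcase>
```
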
### curl tests
A curl test is a test case that runs the curl tool and verifies that it gets
the correct data, sends the correct data, uses the correct protocol
primitives, etc.
### libcurl tests
The libcurl tests are identical to the curl ones, except that they use a
specific and dedicated custom-built program to run instead of "curl". This
tool is built from source code placed in `tests/libtest` and if you want to
make a new libcurl test that is where you add your code.
### unit tests
Unit tests are placed in `tests/unit`. There's a `tests/unit/README`
describing the specific set of checks and macros that may be used when
writing tests that verify behaviors of specific individual functions.
The unit tests depend on curl being built with debug enabled.