start second "Thu Jan 1 00:00:00 GMT 1970" as the date parser then returns 0
which internally then is treated as a session cookie. That particular date
is now made to get the value of 1.
mail posted to the http-state mailing list, from Adam Barth, and is said to be
the set of date formats the Chrome browser code is tested against:
http://www.ietf.org/mail-archive/web/http-state/current/msg00129.html
libcurl parses most of them identically, but not all of them.
each test, so that the test suite can now be used to actually test the
verification of cert names etc. This made an error show up in the OpenSSL-
specific code where it would attempt to match the CN field even if a
subjectAltName exists that doesn't match. This is now fixed and verified
in test 311.
out that the cookie parser would leak memory when it parses cookies that are
received with domain, path etc set multiple times in the same header. While
such a cookie is questionable, they occur in the wild and libcurl no longer
leaks memory for them. I added such a header to test case 8.
as reported by Ebenezer Ikonne (on curl-users) and Laurent Rabret (on
curl-library). The transfer was mistakenly marked to get more data to send
but since it didn't actually have that, it just hung there...
If the CURLOPT_PORT option is used on an FTP URL like
"ftp://example.com/file;type=A" the ";type=A" is stripped off.
I added test case 562 to verify, only to find out that I couldn't repeat
this bug so I hereby consider it not a bug anymore!
I've now made TFTP "connections" no longer be kept for re-use within libcurl.
TFTP is UDP-based so the benefit was really low (if it existed at all) to begin
with, so instead of tracking down and fixing this problem we removed the
re-use. I also enabled test case 1099 that I wrote a few days ago to verify
that this change fixes the reported problem.
proxy. libcurl would then wrongly close the connection after each
request. In his case it had the weird side-effect that it killed NTLM auth
for the proxy, causing an infinite loop!
I added test case 1098 to verify this fix. The test case does however not
properly verify that the transfers are done persistently - I couldn't
think of a clever way to achieve that right now - so you need to read the
stderr output after a test run to see that it truly did the right thing.
version 1.1 instead of 1.0 like before. This change also introduces the new
proxy type for libcurl called 'CURLPROXY_HTTP_1_0' that then allows apps to
switch (back) to CONNECT 1.0 requests. The curl tool also got a --proxy1.0
option that works exactly like --proxy but sets CURLPROXY_HTTP_1_0.
I updated all test cases that use CONNECT and made some use --proxy1.0 and
some do CONNECT 1.1, to get both versions exercised.
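A minimal sketch of how an application could opt back into 1.0-style CONNECT
requests (the proxy host name below is just a placeholder; the proxy type is
set with CURLOPT_PROXYTYPE):

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:3128");
    /* ask for HTTP 1.0 style CONNECT requests instead of the new 1.1 default */
    curl_easy_setopt(curl, CURLOPT_PROXYTYPE, CURLPROXY_HTTP_1_0);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }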
(http://curl.haxx.se/bug/view.cgi?id=2535504) pointing out that realms with
quoted quotation marks in HTTP Digest headers didn't work. I've now added
test case 1095 that verifies my fix.
clarity. This does fix one problem that causes ;type=i FTP URLs
to fail in the Turkish locale when CURLOPT_PROXY_TRANSFER_MODE is
used (test case 561).
Added tests 561 and 1092 through 1094 to test various combinations
of ;type= and ;mode= URLs that could potentially fail in the Turkish
locale.
researching it, it turned out he got a 550 response back from a SIZE command
and then I fell over the text in RFC3659 that says:
The presence of the 550 error response to a SIZE command MUST NOT be taken
by the client as an indication that the file cannot be transferred in the
current MODE and TYPE.
In other words: the change I did on September 30th 2008, which has been
included in the last two releases, was a regression and a bad idea. We MUST
NOT take a 550 response from SIZE as a hint that the file doesn't exist.
This test was added after the HTTPS-using-multi-interface with OpenSSL
regression of 7.19.1 to hopefully prevent this embarrassing mistake from
appearing again... Unfortunately the bug wasn't triggered by this test, which
presumably is because the connect to a local server is too fast/different
compared to the real/distant servers we saw the bug happen with.
fixed a CURLINFO_REDIRECT_URL memory leak and an additional wrong-doing:
Any subsequent transfer with a redirect leaks memory, which can eventually
crash the process.
Any subsequent transfer WITHOUT a redirect causes the most recent redirect
that DID occur on some previous transfer to still be reported.
a fresh connection to be made in such cases and the request retransmitted.
This should fix test case 160. Added test case 1079 in an attempt to
test a similar connection dropping scenario, but as a race condition, it's
hard to test reliably.
gets a 550 response back for the cases where a download (or NOBODY) is
wanted. It still allows a 550 as response if the SIZE is used as part of an
upload process (like if resuming an upload is requested and the file isn't
there before the upload). I also modified the FTP test server and a few test
cases accordingly to match this modified behavior.
date parser function. This makes our function less dependent on system-
provided functions and instead we do all the magic ourselves. We also no
longer depend on the TZ environment variable.
Markus Moeller reported: http://curl.haxx.se/mail/archive-2008-09/0016.html
- recv() errors other than those equal to EAGAIN now cause proper
CURLE_RECV_ERROR to get returned. This made test case 160 fail so I've now
disabled it until we can figure out another way to exercise that logic.
CURLOPT_POST301 (but adds a define for backwards compatibility for you who
don't define CURL_NO_OLDIES). This option allows you to also change the
libcurl behavior for a HTTP response 302 after a POST so that it does not use
GET in the subsequent request (when CURLOPT_FOLLOWLOCATION is enabled). I edited the
patch somewhat before commit. The curl tool got a matching --post302
option. Test case 1076 was added to verify this.
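A rough sketch of the libcurl side, assuming the renamed option is
CURLOPT_POSTREDIR with the CURL_REDIR_POST_301/CURL_REDIR_POST_302 bits as in
current libcurl; curl below is an already-initialized easy handle:

  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  /* keep the POST method (and body) when following 301 and 302 redirects */
  curl_easy_setopt(curl, CURLOPT_POSTREDIR,
                   CURL_REDIR_POST_301 | CURL_REDIR_POST_302);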
to HTTP 1.0 upon receiving a response from the HTTP server. Tests 1072
and 1073 are similar to test 1069 in that they involve the impossible
scenario of sending chunked data to a HTTP 1.0 server. All these currently
fail and are added to DISABLED.
Added test 1075 to test --anyauth with Basic authentication.
"Connection: close" and actually close the connection after the
response-body, libcurl could still have outstanding data to send and it
would not properly notice this and stop sending. This caused weirdness and
sad faces. http://curl.haxx.se/bug/view.cgi?id=2080222
Note that there are still reasons to consider libcurl's behavior when
getting a >= 400 response code while sending data, as Craig Perras' note
"http upload: how to stop on error" specifies:
http://curl.haxx.se/mail/archive-2008-08/0138.html
which caused an error when the second header was dumped due to stdout
being closed. Added test case 1066 to verify. Also fixed a potential
problem where a closed file descriptor might be used for an upload
when more than one URL is given.
was discovered to be problematic while investigating an incident reported by
Von back in May. curl in this case doesn't include a Content-Length: or
Transfer-Encoding: chunked header which is illegal. This test case is
added to DISABLED until a solution is found.
when a server responded with long headers and data. Luckily, the buffer
overflowed into another unused buffer, so no actual harm was done.
Added test cases 1060 and 1061 to verify.
line of a multiline FTP response whose last byte landed exactly at the end
of the BUFSIZE-length buffer would be treated as the terminal response
line. The following response code read in would then actually be the
end of the previous response line, and all responses from then on would
correspond to the wrong command. Test case 1062 verifies this.
Stop closing a never-opened ftp socket.
proved how PUT and POST with a redirect could lead to a "hang" due to the
data stream not being rewound properly when it had to be in order to get sent
(again) to the subsequent URL. This is now fixed and these test
cases are no longer disabled.
with -C - sent garbage in the Content-Range: header. I fixed this problem by
making sure libcurl always sets the size of the _entire_ upload if an app
attempts to do resumed uploads since libcurl simply cannot know the size of
what is currently at the server end. Test 1041 is no longer disabled.
incorrectly--the host name is treated as part of the user name and the
port number becomes the password. This can be observed in test 279
(was KNOWN_ISSUE #54).
by Ben Sutcliffe. The test when run manually shows a problem in curl,
but the test harness web server doesn't run the test correctly so it's
disabled for now.
fix for it. It occurred when you did a FTP transfer using
CURLFTPMETHOD_SINGLECWD and then did another one on the same easy handle but
switched to CURLFTPMETHOD_NOCWD, due to the "dir depth" variable not being
cleared properly. Scott's test case is now known as test 539 and it
verifies the fix.
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=487567) pointing out that
libcurl used Content-Range: instead of Range when doing a range request with
--head (CURLOPT_NOBODY). This is now fixed and test case 1032 was added to
verify.
All boolean options (such as -O, -I, -v etc), both short and long versions,
now always switch on/enable the option named. Using the same option multiple
times thus makes no difference. To switch off one of those options, you need
to use the long version of the option and type --no-OPTION. For example, to
disable verbose mode you use --no-verbose!
- Added --remote-name-all to curl, which if used changes the default for all
given URLs to be dealt with as if -O is used. So if you want to disable that
for a specific URL after --remote-name-all has been used, you must use -o -
or --no-remote-name.
curl_easy_getinfo. It returns a pointer to a string with the most recently
used IP address. Modified test case 500 to also verify this feature. The
implementation of this feature was sponsored by Lenny Rachitsky at NeuStar.
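A small usage sketch, assuming the new info name is CURLINFO_PRIMARY_IP as in
current libcurl; curl is an easy handle and the returned string is owned by
libcurl:

  char *ip = NULL;
  if(curl_easy_perform(curl) == CURLE_OK &&
     curl_easy_getinfo(curl, CURLINFO_PRIMARY_IP, &ip) == CURLE_OK && ip)
    printf("connected to %s\n", ip);
  /* don't free the string and don't use it after curl_easy_cleanup() */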
how the HTTP redirect following code didn't properly follow to a new URL if
the new URL was just a query string such as "Location: ?moo=foo". Test case
1031 was added to verify this fix.
when using CURL_AUTH_ANY" (http://curl.haxx.se/bug/view.cgi?id=1945240).
The problem was that when libcurl rewound a stream meant for upload when it
would prepare for a second request, it could accidentally continue the
sending of the rewound data on the first request instead of on the second.
Ben also provided test case 1030 that verifies this fix.
redirections and thus cannot use CURLOPT_FOLLOWLOCATION easily, we now
introduce the new CURLINFO_REDIRECT_URL option that lets applications
extract the URL libcurl would've redirected to if it had been told to. This
then enables the application to continue to that URL as it thinks is
suitable, without having to re-implement the magic of creating the new URL
from the Location: header etc. Test 1029 verifies it.
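A short sketch of how an application might use it (curl is an easy handle
that has just performed a transfer with CURLOPT_FOLLOWLOCATION left disabled):

  char *redirect = NULL;
  curl_easy_perform(curl);
  if(curl_easy_getinfo(curl, CURLINFO_REDIRECT_URL, &redirect) == CURLE_OK &&
     redirect)
    printf("would have redirected to: %s\n", redirect);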
server input and response request files of the test harness sws server.
Reintroduce the small <postcheck> delay for test 1001. The delay is needed
even with the accelerated writing of server input and response request files
in the test harness sws server.
http://curl.haxx.se/mail/lib-2008-04/0385.html
application to provide data for a multipart with the read callback. Note
that the size needs to be provided with CURLFORM_CONTENTSLENGTH when the
stream option is used. This feature is verified by the new test case
554. This feature was sponsored by Xponaut.
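A sketch of how such a streamed part could be set up, assuming the new
formadd option is CURLFORM_STREAM as in current libcurl; the file pointer,
filesize and part name below are placeholders:

  static size_t read_cb(char *buf, size_t size, size_t nitems, void *userp)
  {
    return fread(buf, size, nitems, (FILE *)userp);
  }

  struct curl_httppost *post = NULL, *last = NULL;
  curl_formadd(&post, &last,
               CURLFORM_COPYNAME, "upload",
               CURLFORM_STREAM, (void *)file,            /* passed to read_cb */
               CURLFORM_CONTENTSLENGTH, (long)filesize,  /* required with STREAM */
               CURLFORM_END);
  curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
  curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);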
such as the CURLOPT_SSL_CTX_FUNCTION one treat that as if it was a Location:
following. The patch that introduced this feature was done for 7.11.0, but
this code and functionality has been broken since about 7.15.4 (March 2006)
with the introduction of non-blocking OpenSSL "connects".
It was a hack to begin with and since it doesn't work and hasn't worked
correctly for a long time and nobody has even noticed, I consider it a very
suitable subject for plain removal. And so it was done.
"HttpOnly" feature introduced by Microsoft and apparently also supported by
Firefox: http://msdn2.microsoft.com/en-us/library/ms533046.aspx . HttpOnly
is now supported when received from servers in HTTP headers, when written to
cookie jars and when read from existing cookie jars.
previously had a number of flaws, perhaps most notably when an application
fired up N transfers at once, as then they wouldn't pipeline nearly as nicely
as one would think... Test case 530 was also updated to take the
improved functionality into account.
(http://curl.haxx.se/bug/view.cgi?id=1850730) I wrote up test case 552. The
test is doing a 70K POST with a read callback and an ioctl callback over a
proxy requiring Digest auth. The test case code is more or less identical to
the test recipe code provided by Spacen Jasset (who submitted the bug report).
callback) over a proxy when NTLM is used as auth with the proxy. The bug
also concerned Digest and was limited to using callback only. Spacen worked
with us to provide a useful patch. I added the test case 547 and 548 to
verify two variations of POST over proxy with NTLM.
is supposed to repeat the bug report "NTLM proxy authentication with
CURLOPT_READDATA seems broken." posted on the curl-library mailing list on
Dec 3 2007.
This happened because the tftp code always unconditionally did a bind()
without caring whether one had already been done, and then it failed. I wrote
a test case (1009) to verify this, but it is a bit error-prone since it has
to pick a fixed local port number and since the tests are run on so many
different hosts in different situations I added it in a disabled state.
target called 'filecheck' so that if you run 'make filecheck' in this directory
it'll check if the local files are also mentioned in the Makefile.am so that
they are properly included in release archives!
function do wrong on all input bytes that are >= 0x80 (decimal 128) due to a
signed / unsigned mistake in the code. I fixed it and added test case 543 to
verify.
curl_easy_setopt() that alters how libcurl functions when following
redirects. It makes libcurl obey the RFC2616 when a 301 response is received
after a non-GET request is made. Default libcurl behaviour is to change
method to GET in the subsequent request (like it does for response code 302
- because that's what many/most browsers do), but with this CURLOPT_POST301
option enabled it will do what the spec says and do the next request using
the same method again. I.e. keep POST after 301.
The curl tool got this option as --post301.
Test cases 1011 and 1012 were added to verify.
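A minimal usage sketch (curl is an already-initialized easy handle):

  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  /* obey RFC2616: keep the POST method after a 301 response instead of
     switching to GET */
  curl_easy_setopt(curl, CURLOPT_POST301, 1L);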
CURLOPT_NOBODY enabled but not CURLOPT_HEADER, libcurl wouldn't do TYPE
before it did SIZE, which made it less useful. I walked over the code and
made it do this properly, and added test case 542 to verify it.
- Bug report #1792649 (http://curl.haxx.se/bug/view.cgi?id=1792649) pointed
out a problem with doing an empty upload over FTP on a re-used connection.
I added test case 541 to reproduce it and to verify the fix.
- I noticed while writing test 541 that the FTP code wrongly did a CWD on the
second transfer as it didn't store and remember the "" path from the
previous transfer so it would instead CWD to the entry path as stored. This
worked, but did a superfluous command. Thus, test case 541 now also verifies
this fix.
and allow reuse by multiple protocols. Several unused error codes were
removed. In all cases, macros were added to preserve source (and binary)
compatibility with the old names. These macros are subject to removal at
a future date, but probably not before 2009. An application can be
tested to see if it is using any obsolete code by compiling it with the
CURL_NO_OLDIES macro defined.
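For example, an application can be checked for uses of the obsolete names
simply by defining the macro before including the header (or by passing
-DCURL_NO_OLDIES on the compiler command line); the build then fails on any
use of the old symbols:

  #define CURL_NO_OLDIES   /* hide the backwards-compatible old names */
  #include <curl/curl.h>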
Documented some newer error codes in libcurl-error(3)
out that libcurl didn't deal with large responses from server commands, when
the single response was consisting of multiple lines but of a total size of
16KB or more. Dan Fandrich improved the ftp test script and provided test
case 1006 to repeat the problem, and I fixed the code to make sure this new
test case runs fine.
out that libcurl didn't deal with very long (>16K) FTP server response lines
properly. Starting now, libcurl will chop them off (thus the client app will
not get the full line) but survive and deal with them fine otherwise. Test
case 1003 was added to verify this.
(http://curl.haxx.se/bug/view.cgi?id=1776235) about how ftp requests with
NOBODY on a directory would do a "SIZE (null)" request. This is now fixed and test
case 1000 was added to verify.
after 7.16.2. This is much due to the different treatment file:// gets
internally, but now I added test 231 to make it less likely to happen again
without us noticing!
using one of the so-called 'right' time zones that take into account
leap seconds, which causes the tests to fail (as reported by
Daniel Black in bug report #1745964).
chunked encoding (that also lacks "Connection: close"). It now simply
assumes that the connection WILL be closed to signal the end, as that is how
RFC2616 section 4.4 point #5 says we should behave.
supports only ftps:// URLs with --ftp-ssl-control specified, which
implicitly encrypts the control channel but not the data channels. That
allows stunnel to be used with an unmodified ftp server in exactly the
same way that the test https server is set up.
Added test case 400 as a basic FTPS test.
fixing some bugs:
o Don't mix GET and POST requests in a pipeline
o Fix the order in which requests are dispatched from the pipeline
o Fixed several curl bugs with pipelining when the server is returning
chunked encoding:
* Added states to chunked parsing for final CRLF
* Rewind buffer after parsing chunk with data remaining
* Moved chunked header initializing to a spot just before receiving
headers
are not, due mainly to the lack of support for XML character entities
(e.g. & => &amp; ). This will make it easier to validate test files using
tools like xmllint, as well as edit and view them using XML tools.
something went wrong like it got a bad response code back from the server,
libcurl would leak memory. Added test case 538 to verify the fix.
I also noted that the connection would get cached in that case, which
doesn't make sense since it cannot be re-used when the authentication has
failed. I fixed that issue too at the same time, and also that the path
would be "remembered" in vain for cases where the connection was about to
get closed.
responded with a single status line and no headers nor body. Starting now, a
HTTP response on a persistent connection (i.e. not set to be closed after the
response has been taken care of) must have Content-Length or chunked
encoding set, or libcurl will simply assume that there is no body.
To my horror I learned that we had no less than 57(!) test cases that did bad
HTTP responses like this, and even the test http server (sws) responded badly
when queried by the test system if it is the test system. So although the
actual fix for the problem was tiny, going through all the newly failing test
cases got really painful and boring.
when more than FD_SETSIZE file descriptors are open.
This means that if for any reason we are not able to
open more than FD_SETSIZE file descriptors then test
518 should not be run.
test 537 is all about testing libcurl functionality
when the system has nearly exhausted the number of
free file descriptors. Test 537 will try to run with
very few free file descriptors.
case when 401 or 407 are returned, *IF* no auth credentials have been given.
The CURLOPT_FAILONERROR option is not possible to make fool-proof for 401
and 407 cases when auth credentials are given, but we've now covered this
somewhat more.
You might get some amounts of headers transferred before this situation is
detected, like for when a "100-continue" is received as a response to a
POST/PUT and a 401 or 407 is received immediately afterwards.
Added test 281 to verify this change.
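A small sketch of the affected setup (curl is an easy handle with no auth
credentials set):

  curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);
  /* with no credentials given, a 401 or 407 response now makes
     curl_easy_perform() return CURLE_HTTP_RETURNED_ERROR */
  CURLcode res = curl_easy_perform(curl);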
would crash if a bad function sequence was used when shutting down after
using the multi interface (i.e. using easy_cleanup after multi_cleanup), so
precautions have been added to make sure it doesn't anymore - test case 529
was added to verify.
(http://curl.haxx.se/bug/view.cgi?id=1561470) that is said to crash when an
FTP upload fails with the multi interface. It did not, but I made a failed
upload still assume the control connection to be fine.
send the whole request at once, even though the Expect: header was disabled
by the application. An effect of this change is also that small (< 1024
bytes) POSTs are now always sent without Expect: header since we deem it
more costly to bother about that than the risk that we send the data in
vain.
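For reference, the usual way an application disables the Expect: header
(which is the scenario this fix is about) is with a custom header list; curl
is an easy handle and postdata is a placeholder buffer:

  struct curl_slist *hdrs = curl_slist_append(NULL, "Expect:");
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);  /* suppress 100-continue */
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, postdata);
  curl_easy_perform(curl);
  curl_slist_free_all(hdrs);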
2 - store the time it took to verify it and allow that time to be used as
%FTPTIME[23] in command lines to allow us to adjust better to slow hosts,
since test 190 failed on my slow Solaris machine just because it hadn't had
time to run all the way the test assumed all machines would reach before the
time-out elapsed.
(http://curl.haxx.se/mail/lib-2006-02/0154.html) by adding the NTLM hash
function in addition to the LM one and making some other adjustments in the
order the different parts of the data block are sent in the Type-2 reply.
Inspiration for this work was taken from the Firefox NTLM implementation.
I edited the existing 21(!) NTLM test cases to run fine with these changes. Due
to the fact that we now properly include the host name in the Type-2 message
the test cases now only compare parts of that chunk.
even after EPSV returned a positive response code, if libcurl failed to
connect to the port number the EPSV response said. Obviously some people are
going through protocol-sensitive firewalls (or similar) that don't understand
EPSV and then they don't allow the second connection unless PASV was
used. This also called for a minor fix of test case 238.
(CURLOPT_FTPPORT) didn't work for ipv6-enabled curls if the IP wasn't a
"native" IP, while it works fine for ipv6-disabled builds!
In the process of fixing this, I removed the support for LPRT since I can't
think of many reasons to keep doing it and asking on the mailing list didn't
reveal anyone else that could either. The code that sends EPRT and PORT is
now also a lot simpler than before (IMHO).
(http://curl.haxx.se/bug/view.cgi?id=1338648) which really is more of a
feature request, but anyway. It pointed out that --max-redirs did not allow
it to be set to 0, which then would return an error code on the first
Location: found. Based on Nis' patch, now libcurl supports CURLOPT_MAXREDIRS
set to 0, or -1 for infinity. Added test case 274 to verify.
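A minimal sketch of the two new extremes (curl is an easy handle):

  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  /* 0 = return an error on the first Location: header,
     -1 = follow redirects without any limit */
  curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 0L);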
(wrongly) sends *two* WWW-Authenticate headers for Digest. While this should
never happen in a sane world, libcurl previously got into an infinite loop
when this occurred. Dave added test 273 to verify this.
from the command line tool with --ignore-content-length. This will make it
easier to download files from Apache 1.x (and similar) servers that are
still having problems serving files larger than 2 or 4 GB. When this option
is enabled, curl will simply have to wait for the server to close the
connection to signal end of transfer. I wrote test case 269 that runs a
simple test verifying that this works.
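The libcurl counterpart is, I believe, CURLOPT_IGNORE_CONTENT_LENGTH; a
sketch of using it on an easy handle:

  /* ignore any Content-Length: header and read until the server closes
     the connection */
  curl_easy_setopt(curl, CURLOPT_IGNORE_CONTENT_LENGTH, 1L);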
fix the CONNECT authentication code with multi-pass auth methods (such as
NTLM) as it didn't previously properly ignore response-bodies - in fact it
stopped reading after all response headers had been received. This could
lead to libcurl sending the next request and reading the body from the first
request as response to the second request. (I also renamed the function,
which wasn't strictly necessary but...)
The best fix would be to once and for all make the CONNECT code use the
ordinary request sending/receiving code, treating it as any ordinary request
instead of the special-purpose function we have now. It should make it
better for multi-interface too. And possibly lead to less code...
Added test case 265 for this. It doesn't work as a _really_ good test case
since the test proxy is too stupid, but the test case helps when running the
debugger to verify.
with CURLOPT_PROXY can use a http:// prefix and user + password. The user
and password fields are now also URL decoded properly.
Test case 264 added to verify.
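A sketch of such a proxy string (host and credentials are placeholders; note
the URL-encoded characters in the user name and password):

  curl_easy_setopt(curl, CURLOPT_PROXY,
                   "http://proxy%20user:secret%2Fpwd@proxy.example.com:3128");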
address was not possible to use. It is now, but requires it to be written
RFC2732-style, within brackets - which incidentally is how you enter numerical
IPv6 addresses in URLs. Test case 263 added to verify.
binary zeroes within the headers. They confused libcurl into doing wrong so the
downloaded headers became incomplete. The fix is now verified with test case
262.
A) Normal non-proxy HTTP:
- no more "Pragma: no-cache" (this only makes sense to proxies)
B) Non-CONNECT HTTP request over proxy:
- "Pragma: no-cache" is used (like before)
- "Proxy-Connection: Keep-alive" (for older style 1.0-proxies)
C) CONNECT HTTP request over proxy:
- "Host: [name]:[port]"
- "Proxy-Connection: Keep-alive"
.netrc, and when following a Location: the subsequent requests didn't properly
use the auth as found in the netrc file. Added test case 257 to verify my fix.
level stuff. The FTP server communicates with sockfilt using perl's open2().
This enables easier IPv6 support and hopefully FTP-SSL support in the future.
Added four test cases for FTP-ipv6.
also affecting NTLM and Negotiate.) It turned out that if the server responded
with 100 Continue before the initial 401 response, libcurl didn't take care of
the response properly. Test case 245 and 246 added to verify this.
function was fixed to use the proper proxy authentication when multiple ones
were added as accepted. Test 239 and test 243 were added to repeat the
problems and verify the fixes.
on a host with a buggy resolver that strips all but the bottom 8 bits of
each octet. The resolved address in this case (192.0.2.127) is guaranteed
never to belong to a real host (see RFC3330).
file got a Last-Modified: header written to the data stream, corrupting the
actual data. This was because some conditions from the previous FTP code were
not properly brought into the new FTP code. I fixed it and added test case 520
to verify. (This bug was introduced in 7.13.1)
on the remote side. This then converts the operation to an ordinary STOR
upload. This was requested/pointed out by Ignacio Vazquez-Abrams.
It also proved (and I fixed) a bug in the newly rewritten ftp code (also
present in the 7.13.1 release) when trying to resume an upload and the server
returns an error to the SIZE command: libcurl would then loop and send SIZE
commands infinitely.
requested data from a host and then followed a redirect to another
host. libcurl then didn't use the proxy-auth properly in the second request,
due to the host-only check for original host name wrongly being extended to
the proxy auth as well. Added test case 233 to verify the flaw and that the
fix removed the problem.
present in RFC959... so now (lib)curl supports it as well. --ftp-account and
CURLOPT_FTP_ACCOUNT set the account string. (The server may ask for an account
string after PASS has been sent. The client responds with "ACCT [account
string]".) Added test cases 228 and 229 to verify the functionality. Updated
the test FTP server to support ACCT somewhat.
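A usage sketch (host, credentials and account string are placeholders):

  curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
  curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");
  /* sent as "ACCT [string]" only if the server asks for it after PASS */
  curl_easy_setopt(curl, CURLOPT_FTP_ACCOUNT, "account-string");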
contains %0a or %0d in the user, password or CWD parts. (A future fix would
include doing it for %00 as well - see KNOWN_BUGS for details.) Test cases 225
and 226 were added to verify this.
If EPSV, EPRT or LPRT is tried and doesn't work, it will not be retried on
the same server again even if a following request is made using a persistent
connection.
If a second request is made to a server, requesting a file from the same
directory as the previous request operated on, libcurl will no longer make
that long series of CWD commands just to end up on the same spot. Note that
this is only for *exactly* the same dir. There is still room for improvements
to optimize the CWD-sending when the dirs are only slightly different.
Added test 210, 211 and 212 to verify these changes. Had to improve the
test script too and added a new primitive to the test file format.
file that was already completely downloaded caused an error, while it
doesn't if you don't use --fail! I added test case 194 to verify the fix.
Grrr. CURLOPT_FAILONERROR is now added to the list of stuff to remove in
libcurl v8 due to all the kludges needed to support it.
CURLOPT_FOLLOWLOCATION, libcurl reported error if a redirect happened even if
the new URL would provide the resumed file. Test case 188 added to verify the
fix (together with existing test 99).
formposts no longer include the path part. If you _really_ want them, you
must provide your preferred full file name with CURLFORM_FILENAME.
Added detection for libgen.h and basename() to configure. My custom
basename() replacement function for systems without it, might be a bit too
naive...
Updated 6 test cases to make them work with the stripped paths.
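A sketch of a formpost that sets an explicit remote file name now that the
local path is stripped (part name and paths below are placeholders):

  struct curl_httppost *post = NULL, *last = NULL;
  curl_formadd(&post, &last,
               CURLFORM_COPYNAME, "file",
               CURLFORM_FILE, "/tmp/uploads/data.txt",
               /* the file name to present in the formpost */
               CURLFORM_FILENAME, "data.txt",
               CURLFORM_END);
  curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);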
replacement, curl only replaced the Host: header on the initial request
and didn't replace it on the following ones. This resulted in requests with
two Host: headers.
Now, curl checks if the location is on the same host as the initial request
and then continues to replace the Host: header. And when it moves to another
host, it doesn't replace the Host: header but it also doesn't make the
second Host: header get used in the request.
This change is verified by the two new test cases 184 and 185.
no fixed port numbers in use anymore. Starting now, the default ports the
servers use are 8990 - 8993. There's no option to modify these yet, other than
changing the $base option at the top of the runtests.pl script.