The improved connection reuse logic would otherwise create a new
connection for each one, which isn't supported by the test
server, nor expected by the test.
To better allow arguments like "1 to 9999" without flooding the terminal
with error messages, the given test case range is now checked and only
test numbers with existing test files are actually run.
The previous test certificate contained an MD5 hash, which is not
supported when using TLSv1.2 with Schannel on Windows 7 or newer.
See the update to this blog post on IEInternals / MSDN:
http://blogs.msdn.com/b/ieinternals/archive/2011/03/25/
misbehaving-https-servers-impair-tls-1.1-and-tls-1.2.aspx
"Update: If the server negotiates a TLS1.2 connection with a
Windows 7 or 8 schannel.dll-using client application, and it
provides a certificate chain which uses the (weak) MD5 hash
algorithm, the client will abort the connection (TCP/IP FIN)
upon receipt of the certificate."
When allowing NTLM, the connection re-use logic was too focused on
finding an existing NTLM connection to use and didn't properly allow
re-use of other connections. This made the logic fail to re-use
perfectly re-usable connections.
Added test cases 1418 and 1419 to verify.
Regression brought in 8ae35102c (curl 7.35.0)
Reported-by: Jeff King
Bug: http://thread.gmane.org/gmane.comp.version-control.git/242213
This one is needed with the gcc options -fstack-protector-all -O2.
That brings the number of suppressions for test 165 to four, and I
suspect I could find another two missing ones without trying very hard.
I'm beginning to think suppressions aren't the best way to handle these
kinds of cases.
Prevent line-endings from being converted to CRLF on Windows by setting
stdout to binary mode, just like the curl tool does when --ascii is not
specified. This should prevent corrupted stdout line-ending output like
CRCRLF.
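A minimal sketch of what forcing stdout into binary mode looks like (the
helper name and where exactly it gets called are assumptions; _setmode
and _O_BINARY are the standard Windows CRT facilities):

    #ifdef _WIN32
    #include <stdio.h>
    #include <io.h>
    #include <fcntl.h>

    /* Switch stdout to binary mode so the CRT does not translate "\n"
       into "\r\n", which would otherwise turn already-correct CRLF
       output into the corrupted CRCRLF sequence mentioned above. */
    static void set_binary_stdout(void)
    {
      _setmode(_fileno(stdout), _O_BINARY);
    }
    #endif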
In order to make the previously naive text-aware tests work with
binary mode on Windows, text-mode is disabled for them unless it is
actually part of the test case, and line-endings are corrected.
According to RFC 2616 and RFC 2326, individual protocol elements such
as headers, but not the actual content, are terminated with CRLF.
Therefore the test data files for these protocols need to contain
mixed line-endings where the protocol elements use CRLF while the
rest of the file uses LF.
gcc 4.7.2 with -O2 will optimize Curl_connect by inlining some
functions two levels deep, which makes the valgrind suppression
fail to match. The underlying reason for these idna suppressions is
a gcc strlen optimization when compiling libidn; compiling it with
-fno-builtin-strlen makes this suppression unnecessary.
It seems the fips config option causes an error if FIPS mode was
not enabled at stunnel compile-time. FIPS support was disabled
by default in stunnel 5.00, so this is probably really only needed
on versions between 4.32 and 5.00.
This was already mostly being done, except that analysis after the
test still assumed that the valgrind log files would be available. An
alternative way to handle the valgrind + gdb combination could be to
enable one of the valgrind debugger hooks.
lib1515.c:38:26 warning: unused parameter 'curl'
lib1515.c:38:81 warning: unused parameter 'ptr'
lib1515.c:38:5 warning: no previous prototype for 'debug_callback'
lib1515.c:46:5 warning: no previous prototype for 'do_one_request'
lib1515.c:120:3 warning: ISO C90 forbids mixed declarations and code
As well as some code policing such as white space and braces.
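A sketch of the kind of fixes warnings like these typically call for
(this is not the literal lib1515.c change): cast unused callback
parameters to void, give file-local functions internal linkage with
static, and keep declarations ahead of the first statement.

    #include <stddef.h>
    #include <curl/curl.h>

    /* 'static' gives the helper internal linkage, which silences the
       "no previous prototype" warning for file-local functions. */
    static int debug_callback(CURL *curl, curl_infotype type,
                              char *data, size_t size, void *ptr)
    {
      /* casting to void silences the 'unused parameter' warnings
         ('curl' and 'ptr' are the ones flagged above; the others are
         cast only to keep this sketch warning-free) */
      (void)curl;
      (void)ptr;
      (void)type;
      (void)data;
      (void)size;
      return 0;
    }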
Not a comma, which is an inconsistency and a mistake probably inherited
from the examples section of RFC 1867.
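For illustration (the field names here are made up), the parameters of
a Content-Disposition header are separated by semicolons:

    Content-Disposition: form-data; name="field"; filename="file.txt"

and emitting a comma in place of one of those semicolons is the mistake
described above.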
This bug has been present since the day curl started to support
multipart formposts, back in the 90s.
Reported-by: Rob Davies
Bug: http://curl.haxx.se/bug/view.cgi?id=1333
Fix for bug #1303 (030a2b8cb) was not complete.
libcurl still pruned DNS entries added manually
after detecting a dead connection. This test
checks such behavior.
Test-case 1515 reproduces bug #1303, where libcurl
would incorrectly prune DNS entries added via
CURLOPT_RESOLVE after the DNS_CACHE_TIMEOUT had
expired.
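A minimal sketch of the kind of setup the test exercises (host name,
address and timeout are made up): an entry injected with CURLOPT_RESOLVE
should keep being used even after the DNS cache timeout has passed.

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      /* map a made-up host name to a fixed address, bypassing real DNS */
      struct curl_slist *resolve =
        curl_slist_append(NULL, "example.com:80:127.0.0.1");

      curl_easy_setopt(curl, CURLOPT_RESOLVE, resolve);
      /* a very short DNS cache timeout so pruning would kick in soon */
      curl_easy_setopt(curl, CURLOPT_DNS_CACHE_TIMEOUT, 1L);
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_perform(curl);

      /* ... after waiting past the timeout, a second transfer on the
         same handle should still resolve to the injected 127.0.0.1
         entry instead of having it pruned away ... */

      curl_slist_free_all(resolve);
      curl_easy_cleanup(curl);
      return 0;
    }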
The test contains a cookie jar file where one of the cookies has an
expiry date of 1391252187 -- Sat, 1 Feb 2014 10:56:27 GMT which has
now expired. Updated to Wed, 14 Oct 2037 16:36:33 GMT as per test
179.
Reported-by: Adam Sampson
Bug: http://curl.haxx.se/bug/view.cgi?id=1330
Since the timer resolution is lower, there are actually cases where
the compared values are equal. Therefore we now check for previous
timestamps being greater than the current one instead.
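A tiny sketch of the relaxed check (variable and function names are
made up):

    #include <stdio.h>

    /* Only a timestamp that goes backwards counts as an error; equal
       consecutive values are acceptable with a coarse timer. */
    static void check_order(double prev, double now)
    {
      if(prev > now)
        fprintf(stderr, "error: timestamp went backwards\n");
    }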
According to section 2.2 of RFC959 the End-of-Line is defined as:
The end-of-line sequence defines the separation of printing
lines. The sequence is Carriage Return, followed by Line Feed.
Verified by sniffing traffic between a Windows FTP client (FileZilla)
and Unix-hosted FTP server (ProFTPD).
It makes more sense to convert the expected output to [CR][LF] on
Windows than to force the actual, probably correct, output to [LF].
This way it is actually possible to see if curl outputs the correct
line-ending expected by a text-aware test case.
Since the previous complex select function, with initial support for
non-socket file descriptors, did not actually work correctly for
Console handles, this change simplifies the whole procedure by using
an internal waiting thread for the stdin console handle.
The previous implementation continuously triggered for the stdin
handle if it was being redirected by a parent process instead of being
an actual Console input window.
This approach supports actual Console input handles as well as
anonymous Pipe handles which are used during input redirection.
It depends on the fact that ReadFile supports trying to read zero bytes
which makes it wait for the handle to become ready for reading.
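A rough sketch of the waiting-thread idea (the structure and names are
assumptions; the zero-byte ReadFile behaviour is the one described
above):

    #ifdef _WIN32
    #include <windows.h>

    struct stdin_waiter {
      HANDLE stdin_handle;  /* e.g. GetStdHandle(STD_INPUT_HANDLE) */
      HANDLE ready_event;   /* signalled once stdin has data */
    };

    static DWORD WINAPI stdin_wait_thread(LPVOID arg)
    {
      struct stdin_waiter *w = (struct stdin_waiter *)arg;
      char unused[1];
      DWORD nread = 0;

      /* A zero-byte read does not consume any input, but it only
         returns once the handle is ready for reading; this covers
         Console input handles as well as the anonymous Pipe handles
         used during input redirection. */
      if(ReadFile(w->stdin_handle, unused, 0, &nread, NULL))
        SetEvent(w->ready_event);

      return 0;
    }
    #endif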
Removed Unix-specific functionality in order to support Windows:
- select.epoll replaced with select.select
- SocketServer.ForkingMixIn replaced with SocketServer.ThreadingMixIn
- socket.MSG_DONTWAIT replaced with socket.setblocking(False)
Even though epoll has better performance and improved socket handling
compared to select, this change should not affect the actual test case.
Also, make the ftp server return a canned response that doesn't
cause XML verification problems. Although the test file format
isn't technically XML, it's still handy to be able to use XML
tools to verify and manipulate them.
Since /dev/stdout is not always emulated on Windows,
just skip the output option on Windows.
MinGW/msys supports /dev/stdout only from a new login shell.
tstunnel on Windows does not support the pid option and is unable
to write to an output log that is already being used as a redirection
target for stdout. Therefore it now outputs all log data to stdout
by default, and secureserver.pl creates a fake pidfile on Windows.
The built-in memory debug system doesn't work with multi-threaded use so
instead of causing annoying false positives, disable the memory tracking
if the threaded resolver is used.
The Windows console version of stunnel is called "tstunnel", while
running "stunnel" on Windows spawns a new console window which
cannot be handled by the testsuite.
Previously LIST always returned a fixed hardcoded list that the ftp
server code knew about, mostly since the server didn't get any test case
number in the LIST scenario. Starting now, doing a CWD to a directory
named test-[number] will make the test server remember that number and
consider it a test case so that a subsequent LIST command will send the
<data> section of that test case back.
It allows LIST tests to be made more similar to how all other tests
work.
Test 100 was updated to provide its own directory listing.
Verify the change brought in commit 8e11731653061. It makes sure that
returning a failure from the progress callback even very early results
in the correct return code.
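A minimal sketch of what the test verifies in spirit (URL and names are
made up): a progress callback that aborts immediately should make the
transfer return CURLE_ABORTED_BY_CALLBACK.

    #include <curl/curl.h>

    /* Returning non-zero from the progress callback asks libcurl to
       abort the transfer. */
    static int abort_progress(void *clientp, double dltotal, double dlnow,
                              double ultotal, double ulnow)
    {
      (void)clientp; (void)dltotal; (void)dlnow;
      (void)ultotal; (void)ulnow;
      return 1;
    }

    int main(void)
    {
      CURL *curl = curl_easy_init();
      CURLcode res;

      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
      curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, abort_progress);

      res = curl_easy_perform(curl);
      /* expected: res == CURLE_ABORTED_BY_CALLBACK, even though the
         callback fired very early in the transfer */

      curl_easy_cleanup(curl);
      return (int)res;
    }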
memdebug.h already contains all required definitions and including
curl_memory.h causes errors like the following:
tests/unit/unit1394.c:119: undefined reference to `Curl_cfree'
tests/unit/unit1394.c:120: undefined reference to `Curl_cfree'
Following commit 0aafd77fa4, replaced the internal usage of
FORMAT_OFF_T and FORMAT_OFF_TU with the external versions that we
expect API programmers to use.
This negates the need for separate definitions which were subtly
different under different platforms/compilers.
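For reference, a small sketch of how the public format macro from
curl.h is meant to be used by API programmers (the variable is made
up):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      curl_off_t size = 1234567890;

      /* CURL_FORMAT_CURL_OFF_T expands to the right printf length
         modifier for curl_off_t on the current platform/compiler */
      printf("size: %" CURL_FORMAT_CURL_OFF_T "\n", size);
      return 0;
    }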
Following the addition of informational commands to the SMTP protocol,
the test server is no longer required to return the verified server
information in responses that curl only outputs in verbose mode.
Instead, a similar detection mechanism to that used by FTP, IMAP and
POP3 can now be used.
This commit replaces that of 9f260b5d66 because according to RFC-2449,
section 6, there is no APOP capability "...even though APOP is an
optional command in [POP3]. Clients discover server support of APOP by
the presence in the greeting banner of an initial challenge enclosed in
angle brackets."
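For illustration, such a greeting banner looks like the RFC 1939
example, with the challenge enclosed in angle brackets:

    +OK POP3 server ready <1896.697170952@dbc.mtview.ca.us>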
SASL downgrade tests: 833, 835, 879, 881, 935 and 937 would fail as
they contained a minus sign in their authentication mechanism and this
would be missed by the custom reply parser.
Added support for downgrading the SASL authentication mechanism when the
decoding of CRAM-MD5, DIGEST-MD5 and NTLM messages fails. This enhances
the previously added support for graceful cancellation by allowing the
client to retry a lesser SASL mechanism such as LOGIN or PLAIN, or even
APOP / clear text (in the case of POP3 and IMAP) when supported by the
server.
To avoid the regression when users pass in passwords containing semi-
colons, we now drop the ability to set the login options with that
same option. Support for login options in CURLOPT_USERPWD was added in
7.31.0.
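A hedged sketch of the resulting split (server and credentials are made
up): the semicolon stays part of the password, while protocol login
options go through CURLOPT_LOGIN_OPTIONS.

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();

      curl_easy_setopt(curl, CURLOPT_URL, "imap://mail.example.com/");
      /* the semicolon is now simply part of the password ... */
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:pa;ss:word");
      /* ... while login options are set with their own option */
      curl_easy_setopt(curl, CURLOPT_LOGIN_OPTIONS, "AUTH=NTLM");

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
      return 0;
    }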
Test case 83 was modified to verify that colons and semi-colons can be
used as part of the password when using -u (CURLOPT_USERPWD).
Bug: http://curl.haxx.se/bug/view.cgi?id=1311
Reported-by: Petr Bahula
Assisted-by: Steve Holme
Signed-off-by: Daniel Stenberg <daniel@haxx.se>
A failure during authentication, which is performed as part of the
CONNECT phase (for IMAP, POP3 and SMTP), is considered by the multi
interface as a prematurely closed connection (aka a dead connection).
As such these protocols cannot issue the relevant QUIT or LOGOUT
command.
Temporarily fixed the test cases until we can fix this properly.
The error code should not be sent as data since it isn't passed on to
the client as body data, so it cannot be compared against expected
data in the test suite.
Although this option should have already been set, the SMTP module can
now download information from, and send instructional commands to, an
SMTP server, requiring the option to be set in order to perform a mail
transfer.
As the IMAP regex could fail and $1 would then not contain the command
id, updated the unrecognised command response to be more generic and
realistic (like those used in the command handlers).
Additionally updated the POP3, SMTP and FTP responses.
A base64 string should be a multiple of 4 characters in length, contain
no more than 2 padding characters, and only contain padding characters
at the end of the string. For example: Y3VybA==
Strings such as the following are considered invalid:
Y= - Invalid length
Y== - Invalid length
Y=== - More than two padding characters
Y=x= - Padding character contained within string
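A minimal sketch of such a validity check (not the actual library code;
character-set checks are omitted):

    #include <string.h>

    /* Return 1 if 's' satisfies the length and padding rules above,
       0 otherwise. */
    static int valid_base64(const char *s)
    {
      size_t len = strlen(s);
      size_t pad = 0;

      if(len == 0 || len % 4 != 0)
        return 0;                  /* "Y=" and "Y==" fail here */

      while(pad < len && s[len - 1 - pad] == '=')
        pad++;
      if(pad > 2)
        return 0;                  /* "Y===" fails here */

      /* no '=' allowed before the trailing padding: "Y=x=" fails */
      return memchr(s, '=', len - pad) == NULL;
    }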
This is a regression since the switch to always-multi internally in
c43127414d.
Test 1316 was modified since we now clearly call the Curl_client_write()
function when doing the LIST transfer part, and then handler->protocol
says FTP and ftpc.transfertype is 'A', which implies text conversion
even though the response is initially an HTTP CONNECT response in this
case.
As the URI, which is contained within the DIGEST-MD5 response, is
constructed from the service and realm, the encoded message differs
from that generated under POP3.
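For illustration (the values are made up), a digest-uri built from the
service and realm as described, which is why the SMTP and POP3
responses encode differently:

    #include <stdio.h>

    int main(void)
    {
      char uri[256];
      const char *service = "smtp";         /* "pop" under POP3 */
      const char *realm   = "example.com";

      /* the digest-uri ends up as e.g. "smtp/example.com" */
      snprintf(uri, sizeof(uri), "%s/%s", service, realm);
      printf("%s\n", uri);
      return 0;
    }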