This fixes the torture test for test 506. The internal cookie API really
ought to be improved to separate cookie parsing errors (which may be
ignored) from OOM errors (which should be fatal).
The ability to do HTTP requests over a UNIX domain socket has been
requested before, in Apr 2008 [0][1] and Sep 2010 [2]. While a
discussion happened, no patch seems to have made it through. I decided
to give it a go since I need to test an nginx HTTP server which listens
on a UNIX domain socket.
One patch [3] seems to make it possible to use the
CURLOPT_OPENSOCKETFUNCTION callback to supply a UNIX domain socket.
Another person wrote a Go program which can do HTTP over a UNIX socket
for Docker [4], using a special URL scheme (though the name contains
cURL, it has no relation to the cURL library).
This patch treats support for UNIX domain sockets at the same level
as HTTP proxies / IPv6: it acts as an intermediate socket provider and
not as a separate protocol. Since this feature affects network
operations, a new feature flag was added ("unix-sockets") with a
corresponding CURL_VERSION_UNIX_SOCKETS macro.
A new CURLOPT_UNIX_SOCKET_PATH option is added and documented. This
option enables UNIX domain sockets support for all requests on the
handle (replacing IP sockets and skipping proxies).
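A minimal sketch of the intended usage (the socket path and URL here are
made up for illustration):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* route the request over the given UNIX domain socket ... */
    curl_easy_setopt(curl, CURLOPT_UNIX_SOCKET_PATH, "/tmp/server.sock");
    /* ... while the host in the URL is only used for the Host: header */
    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}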
A new configure option (--enable-unix-sockets) and CMake option
(ENABLE_UNIX_SOCKETS) can disable this optional feature. Note that I
deliberately did not mark this feature as advanced; this is a
feature/component that should be easily available.
[0]: http://curl.haxx.se/mail/lib-2008-04/0279.html
[1]: http://daniel.haxx.se/blog/2008/04/14/http-over-unix-domain-sockets/
[2]: http://sourceforge.net/p/curl/feature-requests/53/
[3]: http://curl.haxx.se/mail/lib-2008-04/0361.html
[4]: https://github.com/Soulou/curl-unix-socket
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
test1435: a simple test that checks whether an HTTP request can be
performed over the UNIX socket. The hostname/port are interpreted
by sws and should be ignored by cURL.
test1436: test for the ability to do two requests to the same host,
interleaved with one to a different hostname.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
The variable `$ipvnum` can now contain "unix" besides the integers 4
and 6. Functions which receive this parameter have their `$port`
parameter renamed to `$port_or_path` to support a path to the UNIX
domain socket (as a "port" is only meaningful for TCP).
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
If sws is killed it might leave a stale socket file on the filesystem
which would cause an EADDRINUSE error. After this patch, the socket is
checked for staleness; if it is indeed stale, the socket file gets
removed and the bind is attempted again.
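One common way to implement such a check (a sketch of the idea, not
necessarily the exact sws code): try to connect() to the existing socket
file, and if the connection is refused, nothing is listening and the
file can safely be unlinked.

#include <sys/socket.h>
#include <sys/un.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

/* return nonzero if the socket file at 'path' exists but has no
   listener behind it */
static int socket_is_stale(const char *path)
{
  struct sockaddr_un sa;
  int stale = 0;
  int fd = socket(AF_UNIX, SOCK_STREAM, 0);
  if(fd < 0)
    return 0;
  memset(&sa, 0, sizeof(sa));
  sa.sun_family = AF_UNIX;
  strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
  if(connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0 &&
     errno == ECONNREFUSED)
    stale = 1;  /* refused: no listener, safe to unlink() and re-bind */
  close(fd);
  return stale;
}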
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
This extends sws with a --unix-socket option which causes the port to
be ignored (as the server now listens on the path specified by
--unix-socket). This feature will be used by a following patch that
enables checking for UNIX domain socket support.
Proxy support (CONNECT) is not considered nor tested. It would make
little sense anyway: first connecting through a TCP proxy, then letting
that TCP proxy connect to a UNIX socket.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Instead of deriving the socket domain type from use_ipv6, specify the
domain type (AF_INET / AF_INET6) as a variable. An enum is used here
with a switch to avoid compiler warnings in connect_to complaining that
rc may be used uninitialized (which is not possible, as socket_domain
is always set).
Besides abstracting the socket type, make the debugging messages
independent of the IP version (introduce location_str which points to
"port XXXXX").
Rename "ipv_inuse" to "socket_type" and tighten the scope (main).
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Commit curl-7_23_1-143-g8218064 changed the parameter of
responsive_http_server to accept types other than IPv6 (converting
from a boolean to a string), but only considered the lower-case "ipv6"
and not the "IPv6" variant. This caused all servers to start in IPv4
mode instead.
This patch converts the remaining cases to "ipv6". While not strictly
necessary for the run*server variants, these were also converted for
consistency and to prevent future errors.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
This is the only user of the backtick operator in the command. As the
commands will soon not be executed by a shell anymore (but by perl),
replace the command with its output.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Makes test1119 pass when building with cmake.
configurehelp.pm is generated by configure (autotools). As cmake does
not provide a separate variable for the C preprocessor, default to cpp.
Before commit ef24ecde68 ("symbol-scan:
use configure script knowledge about how to run the C preprocessor"),
this tool would also use 'cpp'.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Fix detection of the AsynchDNS feature, which depends not just on
pthreads support, but also on whether USE_POSIX_THREADS is set or not.
Caught by test 1014.
This patch adds a new ENABLE_THREADED_RESOLVER option (corresponding to
--enable-threaded-resolver of autotools) which also needs a check for
HAVE_PTHREAD_H.
For symmetry with autotools, CURL_USE_ARES is renamed to ENABLE_ARES
(--enable-ares). Checks that test for the availability actually use
USE_ARES instead, as that is the result of whether c-ares is available
or not (in practice this does not matter, as CARES is marked as a
required package, but nevertheless it is better to express the intent).
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
I noticed that a patched cmake build would pass tests with a fake local
hostname, but the autotools build skips them:
got unexpected host name back, LD_PRELOAD failed
It turns out that -fvisibility=hidden hides the symbol, and since the
tests are not part of libcurl, the LD_PRELOAD override fails too. Just
remove the LIBCURL guard.
Broken since cURL 7.30 (commit 83a42ee20e,
"curl.h: stricter CURL_EXTERN linkage decorations logic").
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Added !SSPI to the features list of the HTTP digest tests, as SSPI
based builds now use the Windows SSPI messaging API rather than the
internal functions, and we can't control the random numbers that get
used as part of the digest.
Basically, servers often don't respond well to a resume request and
instead send the full contents, and libcurl would then error out with
the assumption that the server doesn't support resume. As the data is
then already transferred, this is now considered fine.
Test case 1434 added to verify this. Test case 1042 slightly modified.
Reported-by: hugo
Bug: http://curl.haxx.se/bug/view.cgi?id=1443
HTTP 1.1 is clearly specified to only allow three-digit response codes,
and libcurl used sscanf("%3d") for that purpose. This made libcurl
support smaller numbers but not larger ones. It does now, but we will
not make any specific promises nor document this further, since it goes
outside of what HTTP specifies.
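For illustration (not the exact source change), the difference amounts
to something like:

#include <stdio.h>

int main(void)
{
  int code = 0;
  /* %3d stops after three digits: "12345" parses as 123 */
  sscanf("HTTP/1.1 12345 Odd", "HTTP/%*[0-9].%*[0-9] %3d", &code);
  /* whereas %d accepts the full number: code becomes 12345 */
  sscanf("HTTP/1.1 12345 Odd", "HTTP/%*[0-9].%*[0-9] %d", &code);
  printf("%d\n", code);
  return 0;
}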
Bug: http://curl.haxx.se/bug/view.cgi?id=1441
Reported-by: Balaji
CURLOPT_COPYPOSTFIELDS with a given CURLOPT_POSTFIELDSIZE does not
require the data to be zero-terminated, and by making sure this test
doesn't use a trailing zero we know it works (combined with valgrind).
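Assuming an already-initialized easy handle 'curl', the essence of the
test setup is something like:

static const char data[3] = { 'a', 'b', 'c' };   /* no trailing zero */

/* set the size first so that libcurl knows how many bytes to copy and
   never goes looking for a zero terminator */
curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)sizeof(data));
curl_easy_setopt(curl, CURLOPT_COPYPOSTFIELDS, data);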
This change allows runtests.pl to be run from the CMake builddir:
export srcdir=/tmp/curl/tests;
perl -I$srcdir $srcdir/runtests.pl -l
In order to make this possible, all test cases have been moved from
Makefile.am to Makefile.inc.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
The 2to3 tool converted socketserver (which I manually fixed up with an
import fallback) and the print(e) line. The xrange call was converted
to range, but it seems better to use the '*' operator here for
simplicity.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
There is no need for such a function. Include directories propagate by
themselves and having a function with one simple link statement makes
little sense.
Option --pinnedpubkey takes a path to a public key in DER format and
only connects if it matches (currently only implemented with OpenSSL).
Provides CURLOPT_PINNEDPUBLICKEY for curl_easy_setopt().
Extract a public RSA key from a website like so:
openssl s_client -connect google.com:443 2>&1 < /dev/null | \
sed -n '/-----BEGIN/,/-----END/p' | openssl x509 -noout -pubkey \
| openssl rsa -pubin -outform DER > google.com.der
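Assuming an already-initialized easy handle 'curl', pinning with the new
option could then look like this sketch (file name carried over from the
example above):

curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY, "google.com.der");
curl_easy_setopt(curl, CURLOPT_URL, "https://google.com/");
/* curl_easy_perform() should now fail unless the server's public key
   matches the pinned DER file */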
By not detecting and rejecting domain names for partial literal IP
addresses properly when parsing received HTTP cookies, libcurl can be
fooled to both send cookies to wrong sites and to allow arbitrary sites
to set cookies for others.
CVE-2014-3613
Bug: http://curl.haxx.se/docs/adv_20140910A.html
Historically the default "unknown" value for progress.size_dl and
progress.size_ul has been zero, since these values are initialized
implicitly by the calloc that allocates the curl handle that these
variables are a part of. Users of curl that install progress
callbacks may expect these values to always be >= 0.
Currently it is possible for progress.size_dl and progress.size_ul
to be set to a value of -1, if Curl_pgrsSetDownloadSize() or
Curl_pgrsSetUploadSize() are passed a "size" of -1 (which a few
places currently do, and a following patch will add more). So
let's update Curl_pgrsSetDownloadSize() and Curl_pgrsSetUploadSize()
so they make sure that these variables always contain a value that
is >= 0.
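In other words, roughly the following (a sketch; the real function also
does more bookkeeping):

void Curl_pgrsSetDownloadSize(struct SessionHandle *data, curl_off_t size)
{
  /* clamp negative "unknown" markers such as -1 to 0 so that progress
     callbacks never observe a negative size */
  data->progress.size_dl = (size >= 0) ? size : 0;
}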
Updates test579 and test599.
Signed-off-by: Brandon Casey <drafnel@gmail.com>
... to handle "*/[total]". Also, removed the strange hack that made
CURLOPT_FAILONERROR on a 416 response after a *RESUME_FROM return
CURLE_OK.
Reported-by: Dimitrios Siganos
Bug: http://curl.haxx.se/mail/lib-2014-06/0221.html
If a non-standard $TESTDIR is used the file may not be necessary.
Previously a "missing" file resulted in the warning:
readline() on closed filehandle D at ./runtests.pl line 4940.
This seems to have become necessary for SRP support to work starting
with GnuTLS ver. 2.99.0. Since support for SRP was added to GnuTLS
before the function that takes this priority string, there should be no
issue with backward compatibility.
Curl_rand() will return a dummy and repeatable random value for this
case, making it possible to write test cases that verify output.
Also, fake the timestamp when CURL_FORCETIME is set.
Only when built debug-enabled, of course.
Curl_ssl_random() was not used anymore so it has been
removed. Curl_rand() is enough.
create_digest_md5_message: generate base64 instead of hex string
curl_sasl: also fix memory leaks in some OOM situations
Added required "debug" feature, missed in commit 1c9aaa0bac, as NTLMv2
calls Curl_rand() which can only be fixed to a specific entropy in
debug builds.
gcc spat out "variable 'x' might be clobbered by 'longjmp' or
'vfork'" warnings for a few variables. These automatic variables are
expected to be modified between the setjmp and the longjmp and to still
hold their values afterwards, so they are now marked volatile.
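The underlying C rule can be illustrated with a generic sketch (not
curl's actual code):

#include <setjmp.h>

static jmp_buf env;

void func(void)
{
  /* without 'volatile', x may live in a register and its value after
     longjmp() is unspecified if it was modified after setjmp() */
  volatile int x = 0;
  if(setjmp(env) == 0) {
    x = 1;
    longjmp(env, 1);
  }
  /* here x reliably holds 1 only because it is volatile */
}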
Follow-up to commit 121bcfee5d. curl-config --features now lists
GSS-API but it is not a listed feature in curl -V. This should probably
be synchronized.
Verifies that the change in 68f0166a92 works as intended and that
different HTTP auth credentials to the same host still re-uses the
connection properly.
In commit 0b3750b5c2 (released in 7.36.0) we fixed a timeout issue
but instead broke the timings.
To fix this, I introduce a new timestamp to use for the timeouts and
restored the previous timestamp and timestamp position so that the old
timer functionality is restored.
In addition to that, that change also broke connection timeouts for when
more than one connect was used (as it would then count the total time
from the first connect and not for the most recent one). Now
Curl_timeleft() has been modified so that it checks against different
start times depending on which timeout it checks.
Test 1303 is updated accordingly.
Bug: http://curl.haxx.se/mail/lib-2014-05/0147.html
Reported-by: Ryan Braud
If the precision is indeed shorter than the string, don't strlen() to
find the end, because that's not how the precision operator works.
I also added a unit test for curl_msnprintf to make sure this works and
that the fix doesn't break a few other basic use cases. I found a POSIX
compliance problem that I marked TODO in the unit test, and I figure we
need to add more tests in the future.
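For illustration of what the precision operator guarantees (a
hypothetical snippet):

#include <curl/mprintf.h>

void demo(void)
{
  char buf[8];
  /* at most three bytes of the input are read; the source need not be
     zero-terminated beyond the precision */
  curl_msnprintf(buf, sizeof(buf), "%.3s", "foobar");  /* buf = "foo" */
}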
Reported-by: Török Edwin
Updated the docs to clarify and the code accordingly, with test 1528 to
verify:
When CURLHEADER_SEPARATE is set and libcurl is asked to send a request
to a proxy but it isn't CONNECT, then _both_ header lists
(CURLOPT_HTTPHEADER and CURLOPT_PROXYHEADER) will be used since the
single request is then made for both the proxy and the server.
Since all present tests now have <keywords> listed, this script will now
refuse to run a given test case if no such section is provided.
Hopefully this will help us make sure new test cases get keywords added
from the start.
This makes it possible to fetch from an IPv6 literal without specifying
the -g option. Globbing remains available elsewhere in the URL.
For example:
curl http://[::1]/file[1-3].txt
This creates no ambiguity, because there is no overlap between the
syntax of valid globs and valid IPv6 literals. Globs contain hyphens
and at most 1 colon, while IPv6 literals have no hyphens, and at least 2
colons.
The peek_ipv6() parser simply whitelists a set of characters and counts
colons, because the real validation happens later on. The character set
includes A-Z, in case someone decides to implement support for scopes
like [fe80::1%25eth0] in the future.
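A sketch of that kind of peek logic (a hypothetical helper, not the
actual peek_ipv6() source):

#include <string.h>

/* return nonzero if the bracketed host part looks like an IPv6
   literal: only characters valid in such literals, and >= 2 colons */
static int looks_like_ipv6(const char *p, size_t len)
{
  static const char ok[] = "0123456789"
                           "abcdefghijklmnopqrstuvwxyz"
                           "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                           ":.%";
  int colons = 0;
  size_t i;
  for(i = 0; i < len; i++) {
    if(!strchr(ok, p[i]))
      return 0;
    if(p[i] == ':')
      colons++;
  }
  return colons >= 2;
}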
Signed-off-by: Paul Marks <pmarks@google.com>
When the protocol part fails, the data usually does too, but the
protocol part is often more fundamental and often provides the clues
you need to fix the test case.
As the email protocols implement SASL authentication rather than IMAP,
POP3 and SMTP specific authentication, updated the authentication
keywords to reflect this.
The improved connection reuse logic would otherwise create a new
connection for each one, which isn't supported by the test
server, nor expected by the test.
To better allow arguments like "1 to 9999" without flooding the terminal
with error messages, the given test cases range is now checked and only
test numbers with existing files are actually run.
The previous test certificate contained a MD5 hash which is not
supported using TLSv1.2 with Schannel on Windows 7 or newer.
See the update to this blog post on IEInternals / MSDN:
http://blogs.msdn.com/b/ieinternals/archive/2011/03/25/misbehaving-https-servers-impair-tls-1.1-and-tls-1.2.aspx
"Update: If the server negotiates a TLS1.2 connection with a
Windows 7 or 8 schannel.dll-using client application, and it
provides a certificate chain which uses the (weak) MD5 hash
algorithm, the client will abort the connection (TCP/IP FIN)
upon receipt of the certificate."
When allowing NTLM, the re-use connection logic was too focused on
finding an existing NTLM connection to use and didn't properly allow
re-use of other ones. This made the logic not re-use perfectly re-usable
connections.
Added test case 1418 and 1419 to verify.
Regression brought in by commit 8ae35102c (curl 7.35.0)
Reported-by: Jeff King
Bug: http://thread.gmane.org/gmane.comp.version-control.git/242213
This one is needed with the gcc options -fstack-protector-all -O2
That brings the number of suppressions for test 165 to four, and I
suspect I could find another two missing without trying very hard. I'm
beginning to think suppressions aren't the best way to handle these
kinds of cases.
Do not try to convert line-endings to CRLF on Windows by setting stdout
to binary mode, just like the curl tool does if --ascii is not specified.
This should prevent corrupted stdout line-ending output like CRCRLF.
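On Windows this typically boils down to something like the following
standard CRT call (a sketch, not necessarily curl's exact code):

#include <stdio.h>
#ifdef _WIN32
#include <io.h>
#include <fcntl.h>
#endif

int main(void)
{
#ifdef _WIN32
  /* disable the CRT's LF -> CRLF translation on stdout */
  _setmode(_fileno(stdout), _O_BINARY);
#endif
  fwrite("HTTP/1.1 200 OK\r\n", 1, 17, stdout);
  return 0;
}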
In order to make the previously naive text-aware tests work with
binary mode on Windows, text-mode is disabled for them if it is not
actually part of the test case and line-endings are corrected.
According to RFC 2616 and RFC 2326, individual protocol elements, such
as headers (but not the actual content), are terminated by CRLF.
Therefore the test data files for these protocols need to contain
mixed line-endings if the actual protocol elements use CRLF while
the file uses LF.
gcc 4.7.2 with -O2 will optimize Curl_connect by inlining some
functions two levels deep, which makes the valgrind suppression
fail to match. The underlying reason for these idna suppressions is
a gcc strlen optimization when compiling libidn; compiling it with
-fno-builtin-strlen makes this suppression unnecessary.
It seems the fips config option causes an error if FIPS mode was
not enabled at stunnel compile-time. FIPS support was disabled
by default in stunnel 5.00, so this is probably really only needed
on versions between 4.32 and 5.00.
This was already mostly being done, except that analysis after the
test still assumed that the valgrind log files would be available. An
alternative way to handle the valgrind + gdb combination could be to
enable one of the valgrind debugger hooks.
lib1515.c:38:26 warning: unused parameter 'curl'
lib1515.c:38:81 warning: unused parameter 'ptr'
lib1515.c:38:5 warning: no previous prototype for 'debug_callback'
lib1515.c:46:5 warning: no previous prototype for 'do_one_request'
lib1515.c:120:3 warning: ISO C90 forbids mixed declarations and code
As well as some code policing such as white space and braces.
Not a comma, which is an inconsistency and a mistake probably inherited
from the examples section of RFC 1867.
This bug has been present since the day curl started to support
multipart formposts, back in the 90s.
Reported-by: Rob Davies
Bug: http://curl.haxx.se/bug/view.cgi?id=1333
Fix for bug #1303 (030a2b8cb) was not complete.
libcurl still pruned DNS entries added manually
after detecting a dead connection. This test
checks such behavior.
Test-case 1515 reproduces bug #1303, where libcurl
would incorrectly prune DNS entries added via
CURLOPT_RESOLVE after the DNS_CACHE_TIMEOUT had
expired.