This repairs cookies for localhost.
Non-PSL builds will now only accept "localhost" without dots, while PSL
builds accept everything not listed in the PSL.
Added test 1258 to verify.
This was a regression brought in a76825a5ef
When the http_proxy env var is in use, the noproxy list was previously
the combination of the --noproxy option and the NO_PROXY env var. Since
this commit, the --noproxy option overrides the NO_PROXY environment
variable even when the http_proxy env var is used.
Closes#1140
The combination of the --noproxy option and the http_proxy env var works
well for both proxied and non-proxied hosts.
However, when combining the NO_PROXY env var with the --proxy option,
non-proxied hosts were not reachable while proxied hosts were OK.
This patch allows us to access non-proxied hosts even when using the
NO_PROXY env var with the --proxy option.
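For reference, a minimal sketch (not part of the patch) of the library
counterparts of these tool options; the proxy host and URL are made up:
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    /* library counterpart of --proxy */
    curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");
    /* library counterpart of --noproxy: hosts that bypass the proxy */
    curl_easy_setopt(curl, CURLOPT_NOPROXY, "example.com");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}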
Follow-up to 3463408.
Prior to 3463408 file:// hostnames were silently stripped.
Prior to this commit it did not work when a schemeless URL was used with
file as the default protocol.
Ref: https://curl.haxx.se/mail/lib-2016-11/0081.html
Closes https://github.com/curl/curl/pull/1124
Also fix for drive letters:
- Support --proto-default file c:/foo/bar.txt
- Support file://c:/foo/bar.txt
- Fail when a file:// drive letter is detected and not MSDOS/Windows.
Bug: https://github.com/curl/curl/issues/1187
Reported-by: Anatol Belski
Assisted-by: Anatol Belski
Follow-up to 82245ea: Fix the example program sendrecv.c (handle
CURLE_AGAIN, handle incomplete send). Improve the documentation
for curl_easy_recv() and curl_easy_send().
Reviewed-by: Frank Meier
Assisted-by: Jay Satiro
See https://github.com/curl/curl/pull/1134
A server MUST NOT send any Transfer-Encoding or Content-Length header
fields in a 2xx (Successful) response to CONNECT. (RFC 7231 section
4.3.6)
Also fixes the three test cases that did this.
If a port number in a "connect-to" entry does not match, skip this
entry instead of connecting to port 0.
If a port number in a "connect-to" entry matches, use this entry
and look no further.
Reported-by: Jay Satiro
Assisted-by: Jay Satiro, Daniel Stenberg
Closes#1148
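For illustration, a sketch of how a "connect-to" entry is expressed
through the libcurl API (hosts and ports are made up); the entry is only
used when both the host and the port of the request match it:
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* format: HOST:PORT:CONNECT-TO-HOST:CONNECT-TO-PORT */
    struct curl_slist *connect_to =
      curl_slist_append(NULL, "example.com:443:server2.example.com:8443");
    curl_easy_setopt(curl, CURLOPT_CONNECT_TO, connect_to);
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_perform(curl);
    curl_slist_free_all(connect_to);
    curl_easy_cleanup(curl);
  }
  return 0;
}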
We're mostly saying just "curl" in lower case these days so here's a big
cleanup to adapt to this reality. A few instances are left as the
project could still formally be considered called cURL.
- Call Curl_initinfo on init and duphandle.
Prior to this change the statistical and informational variables were
simply zeroed by calloc on easy init and duphandle. While zero is the
correct default value for almost all info variables, there is one where
it isn't (filetime initializes to -1).
Bug: https://github.com/curl/curl/issues/1103
Reported-by: Neal Poole
... to make it less likely that we forget that the function actually
does case insensitive compares. Also replaced several invocations of the
function with a plain strcmp when case sensitivity is not an issue (like
comparing with "-").
Cookies with the same domain but different tail-matching property are
now considered different and do not replace each other. If a header
contains the following lines then two cookies will be set:
Set-Cookie: foo=bar; domain=.foo.com; expires=Thu Mar 3 GMT 8:56:27 2033
Set-Cookie: foo=baz; domain=foo.com; expires=Thu Mar 3 GMT 8:56:27 2033
This matches Chrome, Opera, Safari, and Firefox behavior. When sending
stored tokens to foo.com, Chrome, Opera and Firefox send them in the
stored order, while Safari pre-sorts the cookies.
Closes#1050
Add the new option CURLOPT_KEEP_SENDING_ON_ERROR to control whether
sending the request body shall be completed when the server responds
early with an error status code.
This is suitable for manual NTLM authentication.
Reviewed-by: Jay Satiro
Closes https://github.com/curl/curl/pull/904
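A minimal usage sketch (not from the pull request; URL and body are made
up):
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "data=example");
    /* keep sending the request body even if the server responds early
       with an error status code */
    curl_easy_setopt(curl, CURLOPT_KEEP_SENDING_ON_ERROR, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}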
.. and add that --proto-redir and CURLOPT_REDIR_PROTOCOLS do not
override protocols denied by --proto and CURLOPT_PROTOCOLS.
- Add a test to enforce: --proto deny must override --proto-redir allow
Closes https://github.com/curl/curl/pull/1031
... like when a HTTP/0.9 response comes back without any headers at all
and just a body. This now prevents that body from being sent to the
callback etc.
Adapted test 1144 to verify.
Fixes#973
Assisted-by: Ray Satiro
This only excludes building unit tests from the default build (the 'all'
Make target or "Build Solution" in Visual Studio). The projects and Make
targets will still be generated and shown in supporting IDEs.
Fixes https://github.com/curl/curl/issues/981
Reported-by: Randy Armstrong
Closes https://github.com/curl/curl/pull/990
Detect support for compiler symbol visibility flags and apply those
according to CURL_HIDDEN_SYMBOLS option.
It should work true to the autotools build, except it tries to unhide
symbols on Windows when requested and prints a warning if that fails.
Ref: https://github.com/curl/curl/issues/981#issuecomment-242665951
Reported-by: Daniel Stenberg
Since we're using CURLE_FTP_WEIRD_SERVER_REPLY in imap, pop3 and smtp as
more of a generic "failed to parse" introduce an alias without FTP in
the name.
Closes https://github.com/curl/curl/pull/975
With HTTP/2 each transfer is made in an individual logical stream over
the connection, so most errors that previously caused the connection to
be force-closed now just kill the stream and not the connection.
Fixes#941
This fixes tests that were added after 113f04e664 as the tests would
fail otherwise.
We bring back "Proxy-Connection: Keep-Alive" now unconditionally to fix
regressions with old and stupid proxies, but we could possibly switch to
using it only for CONNECT or only for NTLM in the future if we want to
gradually reduce it.
Fixes#954
Reported-by: János Fekete
The CMake build now uses BUILD_TESTING=ON/OFF (default is OFF) to build
tests and enable CTest integration. The options BUILD_CURL_TESTS and
BUILD_DASHBOARD_REPORTS were removed.
Closes#882
Reviewed-by: Brad King
The HTTP/2 tests brought with commit bf05606ef1 were using the internal
name 'http2' for the HTTP/2 server, while in fact that name was already
used for the second instance of the HTTP server. This made tests using
the second instance (like test 2050) fail after a HTTP/2 test had run.
The server is now known as HTTP/2 internally and within the <server>
section in test cases. 1700, 1701 and 1702 were updated accordingly.
It requires that 'nghttpx' is in the PATH, and it will run the tests
using nghttpx as a front-end proxy in front of the standard HTTP/1 test
server. This uses HTTP/2 over plain TCP.
If you, like me, have nghttpx installed in a custom path, you can run test 1700
like this:
$ PATH=$PATH:$HOME/build-nghttp2/bin/ ./runtests.pl 1700
Mostly in order to support broken web sites that redirect to broken URLs
that are accepted by browsers.
Browsers are typically even more lenient than this, as per the WHATWG
URL spec they should allow an _infinite_ amount. I tested 8000 slashes
with Firefox and it just worked.
Added test case 1141, 1142 and 1143 to verify the new parser.
Closes#791
Prior to this change a width arg could be erroneously output, and also
width and precision args could not be used together without crashing.
"%0*d%s", 2, 9, "foo"
Before: "092"
After: "09foo"
"%*.*s", 5, 2, "foo"
Before: crash
After: " fo"
Test 557 is updated to verify this and more
curl_printf.h defines printf to curl_mprintf, etc. This can cause
problems with external headers which may use
__attribute__((format(printf, ...))) markers etc.
To avoid them causing problems with system includes, we include
curl_printf.h after any system headers. That makes these three always
the last headers to be included, and we keep them in this order:
curl_printf.h
curl_memory.h
memdebug.h
None of them include system headers, they all do funny #defines.
Reported-by: David Benjamin
Fixes#743
It does open up a minuscule risk that one of the other protocols that
libcurl could use would send back a Content-Disposition header and then
curl would act on it even if not HTTP.
A future mitigation for this risk would be to allow the callback to ask
libcurl which protocol is being used.
Verified with test 1312
Closes#760
This script now also scans src/tool_getparam.c, docs/curl.1 and
src/tool_help.c and will warn if any of them lists a command line option
not mentioned in one of the other places.
While being debated (in #716) and a violation of RFC 7230 section 5.4,
this test verifies that the existing functionality works as intended. It
strips the dot from the host name and uses the host without dot
throughout the internals.
... for checking ability to receive full HTTP response when POST request
is used with slow read callback function.
This test checks for bug #657 and verifies the work-around from
72d5e144fb.
Closes#720
warning: implicit declaration of function 'sprintf_was_used'
[-Wimplicit-function-declaration]
Follow-up to the modifications made to tests/libtest in commit
55452ebdff, as we prefer not to use sprintf() now.
The define is not in our name space and is therefore not protected by
our API promises.
It was only really used by libcurl internals but was mostly erased from
there already in 8aabbf5 (March 2015). This is supposedly the final
death blow to that define from everywhere.
As a side-effect, making sure _MPRINTF_REPLACE is gone and not used, I
made the lib tests in tests/libtest/ use curl_printf.h for its redefine
magic and then subsequently the use of sprintf() got banned in the tests
as well (as it is in libcurl internals) and I then replaced them all
with snprintf().
In the unlikely event that any user is actually using this define and
is saddened by this change, it is very easily copied to the user's own
code.
Fixed failed redirection of stderr with some options. At least on Msys2,
perl fails to redirect stderr if $value contains newline or other weird
characters.
It seems we may have some autobuild problems after this commit went
in. Trying to see if a revert helps to get them back.
This reverts commit 2716350d1f.
RFC 6265 section 4.1.1 spells out that the first name/value pair in the
header is the actual cookie name and content, while the following are
the parameters.
libcurl previously had a more liberal approach which causes significant
problems when introducing new cookie parameters, like the suggested new
cookie priority draft.
The previous logic read all n/v pairs from left to right and the first
name used that wasn't a known parameter name would be used as the
cookie name, thus accepting "Set-Cookie: Max-Age=2; person=daniel" to be
a cookie named 'person' while an RFC 6265 compliant parser should
consider that to be a cookie named 'Max-Age' with an (unknown) parameter
'person'.
Fixes#709
DSA is no longer supported by OpenSSH 7.0, which causes all SCP/SFTP
test cases to be skipped. Using RSA for host authentication works with
both old and new versions of OpenSSH.
Reported-by: Karlson2k
Closes#676
- Add tests.
- Add an example to CURLOPT_TFTP_NO_OPTIONS.3.
- Add --tftp-no-options to expose CURLOPT_TFTP_NO_OPTIONS.
Bug: https://github.com/curl/curl/issues/481
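A minimal library sketch (the URL is made up):
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "tftp://example.com/file.bin");
    /* do not send TFTP options requests to the server */
    curl_easy_setopt(curl, CURLOPT_TFTP_NO_OPTIONS, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}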
It turns out Firefox and Chrome both allow spaces in cookie names and
there are sites out there using that.
Turned out the code meant to strip off trailing space from cookie names
didn't work. Fixed now.
Test case 8 modified to verify both these changes.
Closes#639
- Add unit test 1604 to test the sanitize_file_name function.
- Use -DCURL_STATICLIB when building libcurltool for unit testing.
- Better detection of reserved DOS device names.
- New flags to modify sanitize behavior:
SANITIZE_ALLOW_COLONS: Allow colons
SANITIZE_ALLOW_PATH: Allow path separators and colons
SANITIZE_ALLOW_RESERVED: Allow reserved device names
SANITIZE_ALLOW_TRUNCATE: Allow truncating a long filename
- Restore sanitization of banned characters from user-specified outfile.
Prior to this commit sanitization of a user-specified outfile was
temporarily disabled in 2b6dadc because there was no way to allow path
separators and colons through while replacing other banned characters.
Now in such a case we call the sanitize function with
SANITIZE_ALLOW_PATH which allows path separators and colons to pass
through.
Closes https://github.com/curl/curl/issues/624
Reported-by: Octavio Schroeder
It isn't used by the code in current conditions but for safety it seems
sensible to at least not crash on such input.
Extended unit test 1395 to verify this too as well as a plain "/" input.
Before this patch, if a URL does not start with the protocol
name/scheme, effective URLs would be prefixed with upper-case protocol
names/schemes. This behavior might not be expected by library users or
end users.
For example, if `CURLOPT_DEFAULT_PROTOCOL` is set to "https" and the
URL is "hostname/path", the effective URL would be
"HTTPS://hostname/path" instead of "https://hostname/path".
After this patch, effective URLs would be prefixed with a lower-case
protocol name/scheme.
Closes#597
Signed-off-by: Mohammad AlSaleh <CE.Mohammad.AlSaleh@gmail.com>
The request needs to be read and sent in binary mode in order to use
CRLF instead of LF. Adding --upload-file - causes curl to read stdin
in binary mode.
Make this the default for the curl tool (if built with HTTP/2 powers
enabled) unless a specific HTTP version is requested on the command
line.
This should allow more users to get HTTP/2 powers without having to
change anything.
Tests 842, 843, 844, 845, 887, 888, 889, 890, 946, 947, 948 and 949 fail
if a custom port number is specified via the -b option of runtests.pl.
Suggested-by: Kamil Dudka
Bug: http://curl.haxx.se/mail/lib-2015-12/0003.html
As POP3 final and continuation responses both begin with a + character,
and both the finalcode and contcode variables in SASLprotoc are set as
such, we cannot tell the difference between them when we are expecting
an optional continuation from the server such as the following:
+ something else from the server
+OK final response
Disabled these tests until such a time we can tell the responses apart.
The hashes can vary between architectures (e.g. Sparc differs from x86_64).
This is not a fatal problem but just reduces the coverage of these white-box
tests, as the assumptions about into which hash bucket each key falls are no
longer valid.
- no point in repeating curl features that are already listed as features
from the curl -V output
- remove the port numbers/unix domain path from the output unless
verbose is used, as that is rarely interesting to users.
The tftpd test server now logs all received options and thus all TFTP
test cases need to match them exactly.
Extended test 283 to use and verify --tftp-blksize.
Apparently there are sites out there that do redirects to URLs they
provide in plain UTF-8 or similar. Browsers and wget %-encode such
headers when doing a subsequent request. Now libcurl does too.
Added test 1138 to verify.
Closes#473
- Add new option CURLOPT_DEFAULT_PROTOCOL to allow specifying a default
protocol for schemeless URLs.
- Add new tool option --proto-default to expose
CURLOPT_DEFAULT_PROTOCOL.
In the case of schemeless URLs libcurl will behave in this way:
When the option is used libcurl will use the supplied default.
When the option is not used, libcurl will follow its usual plan of
guessing from the hostname and falling back to 'http'.
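A minimal sketch of the library option with a made-up schemeless URL:
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* schemeless URL: without a default protocol libcurl would guess
       from the hostname and fall back to 'http' */
    curl_easy_setopt(curl, CURLOPT_URL, "example.com/file.txt");
    curl_easy_setopt(curl, CURLOPT_DEFAULT_PROTOCOL, "https");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}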
New tool option --ssl-no-revoke.
New value CURLSSLOPT_NO_REVOKE for CURLOPT_SSL_OPTIONS.
Currently this option applies only to WinSSL where we have automatic
certificate revocation checking by default. According to the
ssl-compared chart there are other backends that have automatic checking
(NSS, wolfSSL and DarwinSSL) so we could possibly accommodate them at
some later point.
Bug: https://github.com/bagder/curl/issues/264
Reported-by: zenden2k <zenden2k@gmail.com>
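A minimal sketch of the library usage (URL made up):
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* skip the certificate revocation checks (WinSSL) */
    curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS, (long)CURLSSLOPT_NO_REVOKE);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}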
This prevents valgrind from reporting possibly lost memory that NSPR
uses for file descriptor cache and other globally allocated internal
data structures.
Reported-by: Štefan Kremeň
When CURL_SOCKET_BAD is returned in the callback, it should be treated
as an error (CURLE_COULDNT_CONNECT) if no other socket is subsequently
created when trying to connect to a server.
Bug: http://curl.haxx.se/mail/lib-2015-06/0047.html
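A sketch (not part of the fix) of a callback that exercises this path;
after the fix the transfer is expected to fail with
CURLE_COULDNT_CONNECT:
#include <curl/curl.h>

static curl_socket_t opensocket_cb(void *clientp, curlsocktype purpose,
                                   struct curl_sockaddr *address)
{
  (void)clientp; (void)purpose; (void)address;
  /* refuse to create a socket */
  return CURL_SOCKET_BAD;
}

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, opensocket_cb);
    curl_easy_perform(curl); /* expect CURLE_COULDNT_CONNECT */
    curl_easy_cleanup(curl);
  }
  return 0;
}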
This function makes a platform-specific absolute path which uses
backslashes on Windows. This form works when passing it on the
command-line, as well as if the source is on another drive.
This avoids unnecessary dynamic allocs and as this also removed the last
users of *hash_alloc() and *hash_destroy(), those two functions are now
removed.
Make the HTTP headers separated by default for improved security and
reduced risk for information leakage.
Bug: http://curl.haxx.se/docs/adv_20150429.html
Reported-by: Yehezkel Horowitz, Oren Souroujon
This commit fixes a regression introduced in curl-7_41_0-186-g261a0fe.
It also introduces a regression test 1424 based on tests 78 and 1423.
Reported-by: Viktor Szakats
Bug: https://github.com/bagder/curl/issues/237
- cache entries must be also refreshed when they are in use
- have the cache count as inuse reference too, freeing timestamp == 0 special
value
- use timestamp == 0 for CURLOPT_RESOLVE entries which don't get refreshed
- remove CURLOPT_RESOLVE special inuse reference (timestamp == 0 will prevent refresh)
- fix Curl_hostcache_clean - CURLOPT_RESOLVE entries don't have a special
reference anymore, and it would also release non CURLOPT_RESOLVE references
- fix locking in Curl_hostcache_clean
- fix unit1305.c: hash now keeps a reference, need to set inuse = 1
Previously in Curl_http2_switched, we called nghttp2_session_mem_recv to
parse incoming data which were already received while curl was handling
upgrade. But we didn't call nghttp2_session_send, which led to curl not
sending any response to the received frames. Most likely, we
received SETTINGS from server at this point, so we missed opportunity to
send SETTINGS + ACK. This commit adds missing nghttp2_session_send call
in Curl_http2_switched to fix this issue.
Bug: https://github.com/bagder/curl/issues/192
Reported-by: Stefan Eissing
"name =value" is fine and the space should just be skipped.
Updated test 31 to also test for this.
Bug: https://github.com/bagder/curl/issues/195
Reported-by: cromestant
Help-by: Frank Gevaerts
It seems that some systems (e.g. fairly consistently in some recent
Solaris autobuilds) would manage to get to the connect phase before the
progress callback was called, resulting in a CURLE_COULDNT_CONNECT
error. Reworked the test to point at a test server that never returns a
full result so the progress callback always gets a chance to be called
before the transfer can complete in some other way.
The certificates were missing the digitalSignature and keyAgreement
usage types, of which at least digitalSignature was checked by CyaSSL.
This caused the test server in test 310 (among others) to fail the
startup verification and therefore run (see
http://curl.haxx.se/mail/lib-2014-07/0303.html).
Since we just started make use of free(NULL) in order to simplify code,
this change takes it a step further and:
- converts lots of Curl_safefree() calls to good old free()
- makes Curl_safefree() not check the pointer before free()
The (new) rule of thumb is: if you really want a function call that
frees a pointer and then assigns it to NULL, then use Curl_safefree().
But we will prefer just using free() from now on.
The function "free" is documented in the way that no action shall occur for
a passed null pointer. It is therefore not needed that a function caller
repeats a corresponding check.
http://stackoverflow.com/questions/18775608/free-a-null-pointer-anyway-or-check-first
This issue was fixed by using the software Coccinelle 1.0.0-rc24.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
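To illustrate the rule of thumb, a rough sketch; the macro body here is
an approximation, not necessarily libcurl's exact definition:
#include <stdlib.h>

/* frees a pointer and then assigns it to NULL; free(NULL) is a no-op,
   so no pre-check is needed */
#define Curl_safefree(ptr) \
  do { free(ptr); (ptr) = NULL; } while(0)

int main(void)
{
  char *p = malloc(16);
  free(p);          /* preferred: plain free(), no NULL check needed */
  p = NULL;
  Curl_safefree(p); /* frees (NULL is fine) and resets p to NULL */
  return 0;
}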
... by using the regular Curl_http_done() method which checks for
that. This makes test 1801 fail consistently with error 56 (which seems
fine) so that test is also updated here.
Reported-by: Ben Darnell
Bug: https://github.com/bagder/curl/issues/166
...after the method line:
"Since the Host field-value is critical information for handling a
request, a user agent SHOULD generate Host as the first header field
following the request-line." / RFC 7230 section 5.4
Additionally, this will also make libcurl ignore multiple specified
custom Host: headers and only use the first one. Test 1121 has been
updated accordingly
Bug: http://curl.haxx.se/bug/view.cgi?id=1491
Reported-by: Rainer Canavan
When checking for a connection to re-use, a proxy-using request must
check for and use a proxy connection and not one based on the host
name!
Added test 1421 to verify
Bug: http://curl.haxx.se/bug/view.cgi?id=1492
SSLeay was the name of the library that was subsequently turned into
OpenSSL many moons ago (1999). curl has not worked with the old SSLeay
library for years. This is now reflected by only using USE_OPENSSL in
code that depends on OpenSSL.
sockfilt.c:288: warning: conversion to 'DWORD' from 'size_t' may alter
its value
sockfilt.c:291: warning: conversion to 'DWORD' from 'size_t' may alter
its value
sockfilt.c:323: warning: conversion to 'DWORD' from 'size_t' may alter
its value
sockfilt.c:326: warning: conversion to 'DWORD' from 'size_t' may alter
its value
* Missing initialisation of upload status caused a seg fault
* Missing data termination caused corrupt data to be uploaded
* Data verification should be performed in <upload> element
* Added missing recipient list cleanup
For consistency, as we seem to have a bit of a mixed bag, changed all
instances of ipv4 and ipv6 in comments and documentation to use the
correct case.
Merge multiple internal arrays into one, even if some variables
will not be used. They are all created with the number of
file descriptors as their size.
Also fix possible thread handle leak in CloseHandle-loop.
Improves performance of test cases 574 and 575 by 50%.
A value of zero causes the thread to relinquish the remainder
of its time slice to any other thread of equal priority that is
ready to run. If there are no other threads of equal priority
ready to run, the function returns immediately, and the thread
continues execution.
http://msdn.microsoft.com/library/windows/desktop/ms686307.aspx
This fixes the test 506 torture test. The internal cookie API really
ought to be improved to separate cookie parsing errors (which may be
ignored) from OOM errors (which should be fatal).
The ability to do HTTP requests over a UNIX domain socket has been
requested before, in Apr 2008 [0][1] and Sep 2010 [2]. While a
discussion happened, no patch seems to have made it through. I decided
to give it a go since I need to test an nginx HTTP server which listens
on a UNIX domain socket.
One patch [3] seems to make it possible to use the
CURLOPT_OPENSOCKETFUNCTION function to gain a UNIX domain socket.
Another person wrote a Go program which can do HTTP over a UNIX socket
for Docker[4] which uses a special URL scheme (though the name contains
cURL, it has no relation to the cURL library).
This patch considers support for UNIX domain sockets at the same level
as HTTP proxies / IPv6, it acts as an intermediate socket provider and
not as a separate protocol. Since this feature affects network
operations, a new feature flag was added ("unix-sockets") with a
corresponding CURL_VERSION_UNIX_SOCKETS macro.
A new CURLOPT_UNIX_SOCKET_PATH option is added and documented. This
option enables UNIX domain sockets support for all requests on the
handle (replacing IP sockets and skipping proxies).
A new configure option (--enable-unix-sockets) and CMake option
(ENABLE_UNIX_SOCKETS) can disable this optional feature. Note that I
deliberately did not mark this feature as advanced, this is a
feature/component that should easily be available.
[0]: http://curl.haxx.se/mail/lib-2008-04/0279.html
[1]: http://daniel.haxx.se/blog/2008/04/14/http-over-unix-domain-sockets/
[2]: http://sourceforge.net/p/curl/feature-requests/53/
[3]: http://curl.haxx.se/mail/lib-2008-04/0361.html
[4]: https://github.com/Soulou/curl-unix-socket
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
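A minimal usage sketch; the socket path is illustrative (the Docker
daemon socket mentioned above):
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* connect via the UNIX domain socket instead of TCP; the hostname
       in the URL is still used for the Host: header */
    curl_easy_setopt(curl, CURLOPT_UNIX_SOCKET_PATH,
                     "/var/run/docker.sock");
    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/version");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}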
test1435: a simple test that checks whether a HTTP request can be
performed over the UNIX socket. The hostname/port are interpreted
by sws and should be ignored by cURL.
test1436: test for the ability to do two requests to the same host,
interleaved with one to a different hostname.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
The variable `$ipvnum` can now contain "unix" besides the integers 4
and 6. Functions which receive this parameter have their `$port`
parameter renamed to `$port_or_path` to support a path to the UNIX
domain socket (as a "port" is only meaningful for TCP).
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
If sws is killed it might leave a stale socket file on the filesystem
which would cause an EADDRINUSE error. After this patch, it is checked
whether the socket is really stale and if so, the socket file gets
removed and another bind is executed.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
This extends sws with a --unix-socket option which causes the port to
be ignored (as the server now listens on the path specified by
--unix-socket). This feature will be available in the following patch
that enables checking for UNIX domain socket support.
Proxy support (CONNECT) is not considered nor tested. It does not make
sense anyway: first connecting through a TCP proxy, then letting that
TCP proxy connect to a UNIX socket.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Instead of making the socket domain type depend on use_ipv6, specify
the domain type (AF_INET / AF_INET6) as a variable. An enum is used here
with a switch to avoid compiler warnings in connect_to, complaining that
rc is possibly undefined (which is not possible as socket_domain is
always set).
Besides abstracting the socket type, make the debugging messages
independent of IP (introduce location_str which points to "port XXXXX").
Rename "ipv_inuse" to "socket_type" and tighten the scope (main).
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Commit curl-7_23_1-143-g8218064 changed the parameter of
responsive_http_server to accept types other than IPv6 (converting
from a boolean to a string), but only considered the lower-case "ipv6"
and not the "IPv6" variant. This caused all servers to start in IPv4
mode instead.
This patch converts the remaining cases to "ipv6". While not strictly
necessary for the run*server variants, these got also converted for
consistency and to prevent future errors.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
This is the only user of the backtick operator in the command. As the
commands will soon not be executed by a shell anymore (but by perl),
replace the command with its output.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Makes test1119 pass when building with cmake.
configurehelp.pm is generated by configure (autotools). As cmake does
not provide a separate variable for the C preprocessor, default to cpp.
Before commit ef24ecde68 ("symbol-scan:
use configure script knowledge about how to run the C preprocessor"),
this tool would also use 'cpp'.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Fix detection of the AsynchDNS feature which not just depends on
pthreads support, but also on whether USE_POSIX_THREADS is set or not.
Caught by test 1014.
This patch adds a new ENABLE_THREADED_RESOLVER option (corresponding to
--enable-threaded-resolver of autotools) which also needs a check for
HAVE_PTHREAD_H.
For symmetry with autotools, CURL_USE_ARES is renamed to ENABLE_ARES
(--enable-ares). Checks that test for the availability actually use
USE_ARES instead, as that is the result of whether c-ares is available
or not (in practice this does not matter as CARES is marked as a
required package, but nevertheless it is better to write the intent).
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
I noticed that a patched cmake build would pass tests with a fake local
hostname, but the autotools build skips them:
got unexpected host name back, LD_PRELOAD failed
It turns out that -fvisibility=hidden hides the symbol, and since the
tests are not part of libcurl, it fails too. Just remove the LIBCURL
guard.
Broken since cURL 7.30 (commit 83a42ee20e,
"curl.h: stricter CURL_EXTERN linkage decorations logic").
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Added !SSPI to the features list of the HTTP digest tests, as SSPI
based builds now use the Windows SSPI messaging API rather than the
internal functions, and we can't control the random numbers that get
used as part of the digest.
Basically since servers often don't respond well to this and instead
send the full contents, libcurl would then error out with the assumption
that the server doesn't support resume. As the data is then already
transferred, this is now considered fine.
Test case 1434 added to verify this. Test case 1042 slightly modified.
Reported-by: hugo
Bug: http://curl.haxx.se/bug/view.cgi?id=1443
HTTP 1.1 is clearly specified to only allow three digit response codes,
and libcurl used sscanf("%3d") for that purpose. This made libcurl
support smaller numbers but not larger. It does now, but we will not
make any specific promises nor document this further since it is going
outside of what HTTP is.
Bug: http://curl.haxx.se/bug/view.cgi?id=1441
Reported-by: Balaji
CURLOPT_COPYPOSTFIELDS with a given CURLOPT_POSTFIELDSIZE does not
require a trailing zero of the data and by making sure this test doesn't
use one we know it works (combined with valgrind).
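A sketch of the verified behavior (URL made up); note that
CURLOPT_POSTFIELDSIZE is set before CURLOPT_COPYPOSTFIELDS so the copy
knows the length and no trailing zero is needed:
#include <curl/curl.h>

int main(void)
{
  /* deliberately not zero-terminated */
  static const char data[4] = { 'a', 'b', 'c', 'd' };
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)sizeof(data));
    curl_easy_setopt(curl, CURLOPT_COPYPOSTFIELDS, data);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}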
This change allows runtests.pl to be run from the CMake builddir:
export srcdir=/tmp/curl/tests;
perl -I$srcdir $srcdir/runtests.pl -l
In order to make this possible, all test cases have been moved from
Makefile.am to Makefile.inc.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
The 2to3 tool converted socketserver (which I manually fixed up with an
import fallback) and the print(e) line. The xrange option was converted
to range, but it seems better to use the '*' operator here for
simplicity.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
There is no need for such a function. Include directories propagate by
themselves and having a function with one simple link statement makes
little sense.
Option --pinnedpubkey takes a path to a public key in DER format and
curl only connects if it matches (currently only implemented with OpenSSL).
Provides CURLOPT_PINNEDPUBLICKEY for curl_easy_setopt().
Extract a public RSA key from a website like so:
openssl s_client -connect google.com:443 2>&1 < /dev/null | \
sed -n '/-----BEGIN/,/-----END/p' | openssl x509 -noout -pubkey \
| openssl rsa -pubin -outform DER > google.com.der
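And a minimal sketch of the corresponding library usage, pinning the
key file extracted above:
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://google.com/");
    /* abort the connection unless the server's public key matches */
    curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY, "google.com.der");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}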
By not detecting and rejecting domain names for partial literal IP
addresses properly when parsing received HTTP cookies, libcurl can be
fooled to both send cookies to wrong sites and to allow arbitrary sites
to set cookies for others.
CVE-2014-3613
Bug: http://curl.haxx.se/docs/adv_20140910A.html
Historically the default "unknown" value for progress.size_dl and
progress.size_ul has been zero, since these values are initialized
implicitly by the calloc that allocates the curl handle that these
variables are a part of. Users of curl that install progress
callbacks may expect these values to always be >= 0.
Currently it is possible for progress.size_dl and progress.size_ul
to be set to a value of -1, if Curl_pgrsSetDownloadSize() or
Curl_pgrsSetUploadSize() are passed a "size" of -1 (which a few
places currently do, and a following patch will add more). So
let's update Curl_pgrsSetDownloadSize() and Curl_pgrsSetUploadSize()
so they make sure that these variables always contain a value that
is >= 0.
Updates test579 and test599.
Signed-off-by: Brandon Casey <drafnel@gmail.com>
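A sketch of a progress callback that relies on this guarantee (URL made
up); a value of 0 still means "unknown":
#include <stdio.h>
#include <curl/curl.h>

static int xferinfo_cb(void *p, curl_off_t dltotal, curl_off_t dlnow,
                       curl_off_t ultotal, curl_off_t ulnow)
{
  (void)p;
  /* with the clamping in place, all four values are always >= 0 */
  printf("down %ld/%ld up %ld/%ld\n", (long)dlnow, (long)dltotal,
         (long)ulnow, (long)ultotal);
  return 0; /* returning non-zero aborts the transfer */
}

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, xferinfo_cb);
    curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}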
... to handle "*/[total]". Also, removed the strange hack that made
CURLOPT_FAILONERROR on a 416 response after a *RESUME_FROM return
CURLE_OK.
Reported-by: Dimitrios Siganos
Bug: http://curl.haxx.se/mail/lib-2014-06/0221.html
If a non-standard $TESTDIR is used the file may not be necessary.
Previously a "missing" file resulted in the warning:
readline() on closed filehandle D at ./runtests.pl line 4940.
This seems to have become necessary for SRP support to work starting
with GnuTLS ver. 2.99.0. Since support for SRP was added to GnuTLS
before the function that takes this priority string, there should be no
issue with backward compatibility.
Curl_rand() will return a dummy and repeatable random value for this
case. Makes it possible to write test cases that verify output.
Also, fake timestamp with CURL_FORCETIME set.
Only when built debug enabled of course.
Curl_ssl_random() was not used anymore so it has been
removed. Curl_rand() is enough.
create_digest_md5_message: generate base64 instead of hex string
curl_sasl: also fix memory leaks in some OOM situations
Added required "debug" feature, missed in commit 1c9aaa0bac, as NTLMv2
calls Curl_rand() which can only be fixed to a specific entropy in
debug builds.
gcc spit out warning: variable 'x' might be clobbered by 'longjmp' or
'vfork' messages for a few variables. These automatic variables were
expected to be changed between a setjmp/longjmp and hold their values,
so are now marked volatile.
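A minimal illustration of the pitfall (not from the commit):
#include <setjmp.h>
#include <stdio.h>

static jmp_buf env;

int main(void)
{
  /* without 'volatile' the compiler may keep x in a register whose
     value is unspecified after longjmp returns control here */
  volatile int x = 1;
  if(setjmp(env)) {
    printf("x = %d\n", x); /* well-defined only because x is volatile */
    return 0;
  }
  x = 2;
  longjmp(env, 1);
}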
Follow-up to commit 121bcfee5d. curl-config --features now lists
GSS-API but it is not a listed feature in curl -V. This should probably
be synchronized.
Verifies that the change in 68f0166a92 works as intended and that
different HTTP auth credentials to the same host still re-uses the
connection properly.
In commit 0b3750b5c2 (released in 7.36.0) we fixed a timeout issue
but instead broke the timings.
To fix this, I introduce a new timestamp to use for the timeouts and
restored the previous timestamp and timestamp position so that the old
timer functionality is restored.
In addition to that, that change also broke connection timeouts for when
more than one connect was used (as it would then count the total time
from the first connect and not for the most recent one). Now
Curl_timeleft() has been modified so that it checks against different
start times depending on which timeout it checks.
Test 1303 is updated accordingly.
Bug: http://curl.haxx.se/mail/lib-2014-05/0147.html
Reported-by: Ryan Braud
If the precision is indeed shorter than the string, don't strlen() to
find the end because that's not how the precision operator works.
I also added a unit test for curl_msnprintf to make sure this works and
that the fix doesn't break a few other basic use cases. I found a POSIX
compliance problem that I marked TODO in the unit test, and I figure we
need to add more tests in the future.
Reported-by: Török Edwin
Updated the docs to clarify and the code accordingly, with test 1528 to
verify:
When CURLHEADER_SEPARATE is set and libcurl is asked to send a request
to a proxy but it isn't CONNECT, then _both_ header lists
(CURLOPT_HTTPHEADER and CURLOPT_PROXYHEADER) will be used since the
single request is then made for both the proxy and the server.
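A sketch of the documented behavior (proxy and header names made up):
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    struct curl_slist *hdrs = curl_slist_append(NULL, "X-For-Server: yes");
    struct curl_slist *proxy_hdrs =
      curl_slist_append(NULL, "X-For-Proxy: yes");
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");
    /* keep the lists separate; for this non-CONNECT request through a
       proxy, both lists are sent in the single request */
    curl_easy_setopt(curl, CURLOPT_HEADEROPT, (long)CURLHEADER_SEPARATE);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_PROXYHEADER, proxy_hdrs);
    curl_easy_perform(curl);
    curl_slist_free_all(hdrs);
    curl_slist_free_all(proxy_hdrs);
    curl_easy_cleanup(curl);
  }
  return 0;
}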
Since all present tests now have <keywords> listed, this script will now
refuse to run a given test case if no such section is provided.
Hopefully this will help us make sure new test cases get keywords added
at start.
This makes it possible to fetch from an IPv6 literal without specifying
the -g option. Globbing remains available elsewhere in the URL.
For example:
curl http://[::1]/file[1-3].txt
This creates no ambiguity, because there is no overlap between the
syntax of valid globs and valid IPv6 literals. Globs contain hyphens
and at most 1 colon, while IPv6 literals have no hyphens, and at least 2
colons.
The peek_ipv6() parser simply whitelists a set of characters and counts
colons, because the real validation happens later on. The character set
includes A-Z, in case someone decides to implement support for scopes
like [fe80::1%25eth0] in the future.
Signed-off-by: Paul Marks <pmarks@google.com>
When the protocol part fails, the data usually does too, but the
protocol part is often more fundamental and often provides the clues you
need to fix the test case.
As the email protocols implement SASL authentication rather than IMAP,
POP3 and SMTP specific authentication, updated the authentication
keywords to reflect this.
The improved connection reuse logic would otherwise create a new
connection for each one, which isn't supported by the test
server, nor expected by the test.
To better allow arguments like "1 to 9999" without flooding the terminal
with error messages, the given test cases range is now checked and only
test numbers with existing files are actually run.
The previous test certificate contained a MD5 hash which is not
supported using TLSv1.2 with Schannel on Windows 7 or newer.
See the update to this blog post on IEInternals / MSDN:
http://blogs.msdn.com/b/ieinternals/archive/2011/03/25/misbehaving-https-servers-impair-tls-1.1-and-tls-1.2.aspx
"Update: If the server negotiates a TLS1.2 connection with a
Windows 7 or 8 schannel.dll-using client application, and it
provides a certificate chain which uses the (weak) MD5 hash
algorithm, the client will abort the connection (TCP/IP FIN)
upon receipt of the certificate."
When allowing NTLM, the re-use connection logic was too focused on
finding an existing NTLM connection to use and didn't properly allow
re-use of other ones. This made the logic not re-use perfectly re-usable
connections.
Added test case 1418 and 1419 to verify.
Regression brought in 8ae35102c (curl 7.35.0)
Reported-by: Jeff King
Bug: http://thread.gmane.org/gmane.comp.version-control.git/242213
This one is needed with the gcc options -fstack-protector-all -O2
That brings the number of suppressions for test 165 to four, and I
suspect I could find another two missing without trying very hard. I'm
beginning to think suppressions isn't the best way to handle these
kinds of cases.
Do not try to convert line-endings to CRLF on Windows by setting stdout
to binary mode, just like the curl tool does if --ascii is not specified.
This should prevent corrupted stdout line-ending output like CRCRLF.
In order to make the previously naive text-aware tests work with
binary mode on Windows, text-mode is disabled for them if it is not
actually part of the test case and line-endings are corrected.
According to RFC 2616 and RFC 2326 individual protocol elements, like
headers and except the actual content, are terminated by using CRLF.
Therefore the test data files for these protocols need to contain
mixed line-endings if the actual protocol elements use CRLF while
the file uses LF.
gcc 4.7.2 with -O2 will optimize Curl_connect by inlining some
functions two levels deep, which makes the valgrind suppression
fail to match. The underlying reason for these idna suppressions is
a gcc strlen optimization when compiling libidn; compiling it with
-fno-builtin-strlen makes this suppression unnecessary.
It seems the fips config option causes an error if FIPS mode was
not enabled at stunnel compile-time. FIPS support was disabled
by default in stunnel 5.00, so this is probably really only needed
on versions between 4.32 and 5.00.
This was already mostly being done, except that analysis after the
test still assumed that the valgrind log files would be available. An
alternative way to handle the valgrind + gdb combination could be to
enable one of the valgrind debugger hooks.
lib1515.c:38:26 warning: unused parameter 'curl'
lib1515.c:38:81 warning: unused parameter 'ptr'
lib1515.c:38:5 warning: no previous prototype for 'debug_callback'
lib1515.c:46:5 warning: no previous prototype for 'do_one_request'
lib1515.c:120:3 warning: ISO C90 forbids mixed declarations and code
As well as some code policing such as white space and braces.
Not comma, which is an inconsistency and a mistake probably inherited
from the examples section of RFC1867.
This bug has been present since the day curl started to support
multipart formposts, back in the 90s.
Reported-by: Rob Davies
Bug: http://curl.haxx.se/bug/view.cgi?id=1333
Fix for bug #1303 (030a2b8cb) was not complete.
libcurl still pruned DNS entries added manually
after detecting a dead connection. This test
checks such behavior.
Test-case 1515 reproduces bug #1303, where libcurl
would incorrectly prune DNS entries added via
CURLOPT_RESOLVE after the DNS_CACHE_TIMEOUT had
expired.
The test contains a cookie jar file where one of the cookies has an
expiry date of 1391252187 -- Sat, 1 Feb 2014 10:56:27 GMT which has
now expired. Updated to Wed, 14 Oct 2037 16:36:33 GMT as per test
179.
Reported-by: Adam Sampson
Bug: http://curl.haxx.se/bug/view.cgi?id=1330
Since the timer resolution is lower, there are actually cases that
the compared values are equal. Therefore we check for previous
timestamps being greater than the current one instead.
According to section 2.2 of RFC959 the End-of-Line is defined as:
The end-of-line sequence defines the separation of printing
lines. The sequence is Carriage Return, followed by Line Feed.
Verified by sniffing traffic between a Windows FTP client (FileZilla)
and Unix-hosted FTP server (ProFTPD).
It makes more sense to convert the expected output to [CR][LF] on
Windows than to force the actual, probably correct, output to [LF].
This way it is actually possible to see if curl outputs the correct
line-ending expected by a text-aware test case.
Since the previous complex select function with initial support for
non-socket file descriptors did not actually work correctly for Console
handles, this change simplifies the whole procedure by using an internal
waiting thread for the stdin console handle.
The previous implementation made it continuously trigger for the stdin
handle if it was being redirected to a parent process instead of
an actual Console input window.
This approach supports actual Console input handles as well as
anonymous Pipe handles which are used during input redirection.
It depends on the fact that ReadFile supports trying to read zero bytes
which makes it wait for the handle to become ready for reading.
Removed Unix-specific functionality in order to support Windows:
- select.epoll replaced with select.select
- SocketServer.ForkingMixIn replaced with SocketServer.ThreadingMixIn
- socket.MSG_DONTWAIT replaced with socket.setblocking(False)
Even though epoll has a better performance and improved socket handling
than select, this change should not affect the actual test case.
Also, make the ftp server return a canned response that doesn't
cause XML verification problems. Although the test file format
isn't technically XML, it's still handy to be able to use XML
tools to verify and manipulate them.
Since /dev/stdout is not always emulated on Windows,
just skip the output option on Windows.
MinGW/msys support /dev/stdout only from a new login shell.
tstunnel on Windows does not support the pid option and is unable
to write to an output log that is already being used as a redirection
target for stdout. Therefore it does now output all log data to stdout
by default and secureserver.pl creates a fake pidfile on Windows.
The built-in memory debug system doesn't work with multi-threaded use so
instead of causing annoying false positives, disable the memory tracking
if the threaded resolver is used.
The Windows console version of stunnel is called "tstunnel", while
running "stunnel" on Windows spawns a new console window which
cannot be handled by the testsuite.
Previously LIST always returned a fixed hardcoded list that the ftp
server code knew about, mostly since the server didn't get any test case
number in the LIST scenario. Starting now, doing a CWD to a directory
named test-[number] will make the test server remember that number and
consider it a test case so that a subsequent LIST command will send the
<data> section of that test case back.
It allows LIST tests to be made more similar to how all other tests
work.
Test 100 was updated to provide its own directory listing.
Verify the change brought in commit 8e11731653061. It makes sure that
returning a failure from the progress callback even very early results
in the correct return code.
memdebug.h already contains all required definitions and including
curl_memory.h causes errors like the following:
tests/unit/unit1394.c:119: undefined reference to `Curl_cfree'
tests/unit/unit1394.c:120: undefined reference to `Curl_cfree'
Following commit 0aafd77fa4, replaced the internal usage of
FORMAT_OFF_T and FORMAT_OFF_TU with the external versions that we
expect API programmers to use.
This negates the need for separate definitions which were subtly
different under different platforms/compilers.
Following the addition of informational commands to the SMTP protocol,
the test server is no longer required to return the verified server
information in responses that curl only outputs in verbose mode.
Instead, a similar detection mechanism to that used by FTP, IMAP and
POP3 can now be used.
This commit replaces that of 9f260b5d66 because according to RFC-2449,
section 6, there is no APOP capability "...even though APOP is an
optional command in [POP3]. Clients discover server support of APOP by
the presence in the greeting banner of an initial challenge enclosed in
angle brackets."
SASL downgrade tests: 833, 835, 879, 881, 935 and 937 would fail as
they contained a minus sign in their authentication mechanism and this
would be missed by the custom reply parser.