Follow-up to 82245ea: Fix the example program sendrecv.c (handle
CURLE_AGAIN, handle incomplete send). Improve the documentation
for curl_easy_recv() and curl_easy_send().
Reviewed-by: Frank Meier
Assisted-by: Jay Satiro
See https://github.com/curl/curl/pull/1134
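For illustration, a minimal sketch (not the actual example file) of the send-side pattern this implies, assuming an easy handle set up with CURLOPT_CONNECT_ONLY and an already established connection; a real program would wait on the socket before retrying on CURLE_AGAIN:

#include <curl/curl.h>

/* keep calling curl_easy_send() until the whole buffer is out,
   retrying on CURLE_AGAIN and handling short (incomplete) sends */
static CURLcode send_all(CURL *curl, const char *buf, size_t len)
{
  size_t sent_total = 0;
  while(sent_total < len) {
    size_t sent = 0;
    CURLcode res = curl_easy_send(curl, buf + sent_total,
                                  len - sent_total, &sent);
    if(res == CURLE_AGAIN)
      continue; /* socket not writable yet; wait and try again */
    if(res != CURLE_OK)
      return res;
    sent_total += sent;
  }
  return CURLE_OK;
}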
A server MUST NOT send any Transfer-Encoding or Content-Length header
fields in a 2xx (Successful) response to CONNECT. (RFC 7231 section
4.3.6)
Also fixes the three test cases that did this.
If a port number in a "connect-to" entry does not match, skip this
entry instead of connecting to port 0.
If a port number in a "connect-to" entry matches, use this entry
and look no further.
Reported-by: Jay Satiro
Assisted-by: Jay Satiro, Daniel Stenberg
Closes#1148
We're mostly saying just "curl" in lower case these days so here's a big
cleanup to adapt to this reality. A few instances are left as the
project could still formally be considered called cURL.
- Call Curl_initinfo on init and duphandle.
Prior to this change the statistical and informational variables were
simply zeroed by calloc on easy init and duphandle. While zero is the
correct default value for almost all info variables, there is one where
it isn't (filetime initializes to -1).
Bug: https://github.com/curl/curl/issues/1103
Reported-by: Neal Poole
... to make it less likely that we forget that the function actually
does case insensitive compares. Also replaced several invocations of the
function with a plain strcmp when case sensitivity is not an issue (like
comparing with "-").
Cookies with the same domain but a different tailmatching property are now
considered different and do not replace each other. If a header contains
the following lines then two cookies will be set:
Set-Cookie: foo=bar; domain=.foo.com; expires=Thu Mar 3 GMT 8:56:27 2033
Set-Cookie: foo=baz; domain=foo.com; expires=Thu Mar 3 GMT 8:56:27 2033
This matches Chrome, Opera, Safari, and Firefox behavior. When sending
stored tokens to foo.com, Chrome, Opera and Firefox send them in the
stored order, while Safari pre-sorts the cookies.
Closes#1050
Add the new option CURLOPT_KEEP_SENDING_ON_ERROR to control whether
sending the request body shall be completed when the server responds
early with an error status code.
This is suitable for manual NTLM authentication.
Reviewed-by: Jay Satiro
Closes https://github.com/curl/curl/pull/904
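A hedged sketch of enabling the new option (URL and body are placeholder values):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "payload=data");
    /* keep sending the request body even if the server answers early
       with an error status, e.g. during manual NTLM authentication */
    curl_easy_setopt(curl, CURLOPT_KEEP_SENDING_ON_ERROR, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}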
... and add that --proto-redir and CURLOPT_REDIR_PROTOCOLS do not
override protocols denied by --proto and CURLOPT_PROTOCOLS.
- Add a test to enforce: --proto deny must override --proto-redir allow
Closes https://github.com/curl/curl/pull/1031
... like when an HTTP/0.9 response comes back without any headers at all
and just a body, this now prevents that body from being sent to the
callback etc.
Adapted test 1144 to verify.
Fixes#973
Assisted-by: Ray Satiro
This only excludes building unit tests from the default build (the 'all'
Make target or "Build Solution" in Visual Studio). The projects and Make
targets will still be generated and shown in supporting IDEs.
Fixes https://github.com/curl/curl/issues/981
Reported-by: Randy Armstrong
Closes https://github.com/curl/curl/pull/990
Detect support for compiler symbol visibility flags and apply those
according to the CURL_HIDDEN_SYMBOLS option.
It should work true to the autotools build, except it tries to unhide
symbols on Windows when requested and prints a warning if it fails.
Ref: https://github.com/curl/curl/issues/981#issuecomment-242665951
Reported-by: Daniel Stenberg
Since we're using CURLE_FTP_WEIRD_SERVER_REPLY in imap, pop3 and smtp as
more of a generic "failed to parse", introduce an alias without FTP in
the name.
Closes https://github.com/curl/curl/pull/975
With HTTP/2 each transfer is made in an individual logical stream over the
connection, so most errors that previously caused the connection to get
force-closed now just kill the stream and not the connection.
Fixes#941
This fixes tests that were added after 113f04e664 as the tests would
fail otherwise.
We bring back "Proxy-Connection: Keep-Alive" now unconditionally to fix
regressions with old and stupid proxies, but we could possibly switch to
using it only for CONNECT or only for NTLM in the future if we want to
gradually reduce it.
Fixes#954
Reported-by: János Fekete
The CMake build now uses BUILD_TESTING=ON/OFF (default is OFF) to build
the tests and enables CTest integration. The options BUILD_CURL_TESTS and
BUILD_DASHBOARD_REPORTS were removed.
Closes#882
Reviewed-by: Brad King
The HTTP/2 tests brought with commit bf05606ef1 were using the internal
name 'http2' for the HTTP/2 server, while in fact that name was already
used for the second instance of the HTTP server. This made tests using
the second instance (like test 2050) fail after a HTTP/2 test had run.
The server is now known as HTTP/2 internally and within the <server>
section in test cases. Tests 1700, 1701 and 1702 were updated accordingly.
It requires that 'nghttpx' is in the PATH, and it will run the tests
using nghttpx as a front-end proxy in front of the standard HTTP/1 test
server. This uses HTTP/2 over plain TCP.
If you, like me, have nghttpx installed in a custom path, you can run
test 1700 like this:
$ PATH=$PATH:$HOME/build-nghttp2/bin/ ./runtests.pl 1700
Mostly in order to support broken web sites that redirect to broken URLs
that are accepted by browsers.
Browsers are typically even more lenient than this; per the WHATWG URL
spec they should allow an _infinite_ amount. I tested 8000 slashes with
Firefox and it just worked.
Added test cases 1141, 1142 and 1143 to verify the new parser.
Closes#791
Prior to this change a width arg could be erroneously output, and also
width and precision args could not be used together without crashing.
"%0*d%s", 2, 9, "foo"
Before: "092"
After: "09foo"
"%*.*s", 5, 2, "foo"
Before: crash
After: " fo"
Test 557 is updated to verify this and more
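A small illustration of the two cases above, using the public curl_mprintf(), which is assumed here to share the fixed formatting code:

#include <curl/mprintf.h>

int main(void)
{
  curl_mprintf("%0*d%s\n", 2, 9, "foo"); /* prints "09foo" after the fix */
  curl_mprintf("%*.*s\n", 5, 2, "foo");  /* prints "   fo" after the fix */
  return 0;
}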
curl_printf.h defines printf to curl_mprintf, etc. This can cause
problems with external headers which may use
__attribute__((format(printf, ...))) markers etc.
To avoid them causing problems with system includes, we include
curl_printf.h after any system headers. That makes these the last three
headers, kept in this order:
curl_printf.h
curl_memory.h
memdebug.h
None of them include system headers, they all do funny #defines.
Reported-by: David Benjamin
Fixes#743
It does open up a minuscule risk that one of the other protocols that
libcurl could use would send back a Content-Disposition header and then
curl would act on it even if not HTTP.
A future mitigation for this risk would be to allow the callback to ask
libcurl which protocol is being used.
Verified with test 1312
Closes#760
This script now also scans src/tool_getparam.c, docs/curl.1 and
src/tool_help.c and will warn if any of them lists a command line option
not mentioned in one of the other places.
While being debated (in #716) and a violation of RFC 7230 section 5.4,
this test verifies that the existing functionality works as intended. It
strips the dot from the host name and uses the host without dot
throughout the internals.
... for checking ability to receive full HTTP response when POST request
is used with slow read callback function.
This test checks for bug #657 and verifies the work-around from
72d5e144fb.
Closes#720
warning: implicit declaration of function 'sprintf_was_used'
[-Wimplicit-function-declaration]
Follow-up to the modifications made to tests/libtest in commit 55452ebdff,
as we prefer not to use sprintf() now.
The define is not in our name space and is therefore not protected by
our API promises.
It was only really used by libcurl internals but was mostly erased from
there already in 8aabbf5 (March 2015). This is supposedly the final
death blow to that define from everywhere.
As a side-effect, making sure _MPRINTF_REPLACE is gone and not used, I
made the lib tests in tests/libtest/ use curl_printf.h for its redefine
magic and then subsequently the use of sprintf() got banned in the tests
as well (as it is in libcurl internals) and I then replaced them all
with snprintf().
In the unlikely event that any user is actually using this define and
is saddened by this change, it is very easily copied into the user's own
code.
Fixed failing redirection of stderr with some options. At least on MSYS2,
perl fails to redirect stderr if $value contains a newline or other weird
characters.
It seems we may have some autobuild problems after this commit went
in. Trying to see if a revert helps to get them back.
This reverts commit 2716350d1f.
RFC 6265 section 4.1.1 spells out that the first name/value pair in the
header is the actual cookie name and content, while the following are
the parameters.
libcurl previously had a more liberal approach which causes significant
problems when introducing new cookie parameters, like the suggested new
cookie priority draft.
The previous logic read all n/v pairs from left-to-right and the first
name used that wasn't a known parameter name would be used as the
cookie name, thus accepting "Set-Cookie: Max-Age=2; person=daniel" to be
a cookie named 'person' while an RFC 6265 compliant parser should
consider that to be a cookie named 'Max-Age' with an (unknown) parameter
'person'.
Fixes#709
DSA is no longer supported by OpenSSH 7.0, which causes all SCP/SFTP
test cases to be skipped. Using RSA for host authentication works with
both old and new versions of OpenSSH.
Reported-by: Karlson2k
Closes#676
- Add tests.
- Add an example to CURLOPT_TFTP_NO_OPTIONS.3.
- Add --tftp-no-options to expose CURLOPT_TFTP_NO_OPTIONS.
Bug: https://github.com/curl/curl/issues/481
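A hedged sketch of using the new option from libcurl (the tftp URL is a placeholder):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "tftp://example.com/file.bin");
    /* do not send TFTP options (blksize etc.) to the server */
    curl_easy_setopt(curl, CURLOPT_TFTP_NO_OPTIONS, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}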
It turns out Firefox and Chrome both allow spaces in cookie names and
there are sites out there using that.
It turned out the code meant to strip off trailing space from cookie
names didn't work. Fixed now.
Test case 8 modified to verify both these changes.
Closes#639
- Add unit test 1604 to test the sanitize_file_name function.
- Use -DCURL_STATICLIB when building libcurltool for unit testing.
- Better detection of reserved DOS device names.
- New flags to modify sanitize behavior:
SANITIZE_ALLOW_COLONS: Allow colons
SANITIZE_ALLOW_PATH: Allow path separators and colons
SANITIZE_ALLOW_RESERVED: Allow reserved device names
SANITIZE_ALLOW_TRUNCATE: Allow truncating a long filename
- Restore sanitization of banned characters from user-specified outfile.
Prior to this commit sanitization of a user-specified outfile was
temporarily disabled in 2b6dadc because there was no way to allow path
separators and colons through while replacing other banned characters.
Now in such a case we call the sanitize function with
SANITIZE_ALLOW_PATH which allows path separators and colons to pass
through.
Closes https://github.com/curl/curl/issues/624
Reported-by: Octavio Schroeder
It isn't used by the code in current conditions but for safety it seems
sensible to at least not crash on such input.
Extended unit test 1395 to verify this as well as a plain "/" input.
Before this patch, if a URL does not start with the protocol
name/scheme, effective URLs would be prefixed with upper-case protocol
names/schemes. This behavior might not be expected by library users or
end users.
For example, if `CURLOPT_DEFAULT_PROTOCOL` is set to "https" and the
URL is "hostname/path", the effective URL would be
"HTTPS://hostname/path" instead of "https://hostname/path".
After this patch, effective URLs would be prefixed with a lower-case
protocol name/scheme.
Closes#597
Signed-off-by: Mohammad AlSaleh <CE.Mohammad.AlSaleh@gmail.com>
The request needs to be read and sent in binary mode in order to use
CRLF instead of LF. Adding --upload-file - causes curl to read stdin
in binary mode.
Make this the default for the curl tool (if built with HTTP/2 powers
enabled) unless a specific HTTP version is requested on the command
line.
This should allow more users to get HTTP/2 powers without having to
change anything.
Tests 842, 843, 844, 845, 887, 888, 889, 890, 946, 947, 948 and 949 fail
if a custom port number is specified via the -b option of runtests.pl.
Suggested by: Kamil Dudka
Bug: http://curl.haxx.se/mail/lib-2015-12/0003.html
As POP3 final and continuation responses both begin with a + character,
and both the finalcode and contcode variables in SASLproto are set as
such, we cannot tell the difference between them when we are expecting
an optional continuation from the server such as the following:
+ something else from the server
+OK final response
Disabled these tests until such a time we can tell the responses apart.
The hashes can vary between architectures (e.g. Sparc differs from x86_64).
This is not a fatal problem but just reduces the coverage of these white-box
tests, as the assumptions about into which hash bucket each key falls are no
longer valid.
- no point in repeating curl features that are already listed as features
in the curl -V output
- remove the port numbers/unix domain path from the output unless
verbose is used, as that is rarely interesting to users.
The tftpd test server now logs all received options and thus all TFTP
test cases need to match them exactly.
Extended test 283 to use and verify --tftp-blksize.
Apparently there are sites out there that do redirects to URLs they
provide in plain UTF-8 or similar. Browsers and wget %-encode such
headers when doing a subsequent request. Now libcurl does too.
Added test 1138 to verify.
Closes#473
- Add new option CURLOPT_DEFAULT_PROTOCOL to allow specifying a default
protocol for schemeless URLs.
- Add new tool option --proto-default to expose
CURLOPT_DEFAULT_PROTOCOL.
In the case of schemeless URLs libcurl will behave in this way:
When the option is used libcurl will use the supplied default.
When the option is not used, libcurl will follow its usual plan of
guessing from the hostname and falling back to 'http'.
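A hedged sketch of the libcurl option with a schemeless URL (host and path are placeholders):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "example.com/path");
    /* with no scheme in the URL, use https instead of guessing */
    curl_easy_setopt(curl, CURLOPT_DEFAULT_PROTOCOL, "https");
    curl_easy_perform(curl); /* fetches https://example.com/path */
    curl_easy_cleanup(curl);
  }
  return 0;
}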
New tool option --ssl-no-revoke.
New value CURLSSLOPT_NO_REVOKE for CURLOPT_SSL_OPTIONS.
Currently this option applies only to WinSSL where we have automatic
certificate revocation checking by default. According to the
ssl-compared chart there are other backends that have automatic checking
(NSS, wolfSSL and DarwinSSL) so we could possibly accommodate them at
some later point.
Bug: https://github.com/bagder/curl/issues/264
Reported-by: zenden2k <zenden2k@gmail.com>
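A hedged sketch of setting the new value (only honored by backends with automatic revocation checking, currently WinSSL per the text above; URL is a placeholder):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* disable certificate revocation checks for this transfer */
    curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS, (long)CURLSSLOPT_NO_REVOKE);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}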
This prevents valgrind from reporting possibly lost memory that NSPR
uses for file descriptor cache and other globally allocated internal
data structures.
Reported-by: Štefan Kremeň
When CURL_SOCKET_BAD is returned in the callback, it should be treated
as an error (CURLE_COULDNT_CONNECT) if no other socket is subsequently
created when trying to connect to a server.
Bug: http://curl.haxx.se/mail/lib-2015-06/0047.html
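A hedged sketch of the scenario: an open-socket callback that returns CURL_SOCKET_BAD, which the transfer should now report as CURLE_COULDNT_CONNECT (URL is a placeholder):

#include <curl/curl.h>

static curl_socket_t refuse_socket(void *clientp, curlsocktype purpose,
                                   struct curl_sockaddr *address)
{
  (void)clientp; (void)purpose; (void)address;
  return CURL_SOCKET_BAD; /* refuse to provide a socket */
}

int main(void)
{
  CURLcode res;
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, refuse_socket);
    res = curl_easy_perform(curl); /* expected: CURLE_COULDNT_CONNECT */
    (void)res;
    curl_easy_cleanup(curl);
  }
  return 0;
}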
This function makes a platform-specific absolute path which uses
backslashes on Windows. This form works when passing it on the
command-line, as well as if the source is on another drive.
This avoids unnecessary dynamic allocs and as this also removed the last
users of *hash_alloc() and *hash_destroy(), those two functions are now
removed.
Make the HTTP headers separated by default for improved security and a
reduced risk of information leakage.
Bug: http://curl.haxx.se/docs/adv_20150429.html
Reported-by: Yehezkel Horowitz, Oren Souroujon
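A hedged sketch of the separated-header model this makes the default: headers set with CURLOPT_HTTPHEADER go to the server only, and proxy-bound headers have to be given with CURLOPT_PROXYHEADER (proxy, URL and header values are placeholders):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    struct curl_slist *server_hdrs =
      curl_slist_append(NULL, "X-Token: secret");
    struct curl_slist *proxy_hdrs =
      curl_slist_append(NULL, "User-Agent: example");

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");
    curl_easy_setopt(curl, CURLOPT_HEADEROPT, (long)CURLHEADER_SEPARATE);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, server_hdrs); /* server only */
    curl_easy_setopt(curl, CURLOPT_PROXYHEADER, proxy_hdrs); /* proxy only */
    curl_easy_perform(curl);

    curl_slist_free_all(server_hdrs);
    curl_slist_free_all(proxy_hdrs);
    curl_easy_cleanup(curl);
  }
  return 0;
}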
This commit fixes a regression introduced in curl-7_41_0-186-g261a0fe.
It also introduces a regression test 1424 based on tests 78 and 1423.
Reported-by: Viktor Szakats
Bug: https://github.com/bagder/curl/issues/237
- cache entries must be also refreshed when they are in use
- have the cache count as inuse reference too, freeing timestamp == 0 special
value
- use timestamp == 0 for CURLOPT_RESOLVE entries which don't get refreshed
- remove CURLOPT_RESOLVE special inuse reference (timestamp == 0 will prevent refresh)
- fix Curl_hostcache_clean - CURLOPT_RESOLVE entries don't have a special
reference anymore, and it would also release non CURLOPT_RESOLVE references
- fix locking in Curl_hostcache_clean
- fix unit1305.c: hash now keeps a reference, need to set inuse = 1
Previously in Curl_http2_switched, we called nghttp2_session_mem_recv to
parse incoming data which was already received while curl was handling the
upgrade. But we didn't call nghttp2_session_send, which led to curl not
sending any response to the received frames. Most likely, we received
SETTINGS from the server at this point, so we missed the opportunity to
send SETTINGS + ACK. This commit adds the missing nghttp2_session_send
call in Curl_http2_switched to fix this issue.
Bug: https://github.com/bagder/curl/issues/192
Reported-by: Stefan Eissing
"name =value" is fine and the space should just be skipped.
Updated test 31 to also test for this.
Bug: https://github.com/bagder/curl/issues/195
Reported-by: cromestant
Help-by: Frank Gevaerts
It seems that some systems (e.g. fairly consistently in some recent
Solaris autobuilds) would manage to get to the connect phase before the
progress callback was called, resulting in a CURLE_COULDNT_CONNECT
error. Reworked the test to point at a test server that never returns a
full result so the progress callback always gets a chance to be called
before the transfer can complete in some other way.
The certificates were missing the digitalSignature and keyAgreement
usage types, of which at least digitalSignature was checked by CyaSSL.
This caused the test server in test 310 (among others) to fail the
startup verification and therefore run (see
http://curl.haxx.se/mail/lib-2014-07/0303.html).
Since we just started making use of free(NULL) in order to simplify code,
this change takes it a step further and:
- converts lots of Curl_safefree() calls to good old free()
- makes Curl_safefree() not check the pointer before free()
The (new) rule of thumb is: if you really want a function call that
frees a pointer and then assigns it to NULL, then use Curl_safefree().
But we will prefer just using free() from now on.
The function "free" is documented in the way that no action shall occur for
a passed null pointer. It is therefore not needed that a function caller
repeats a corresponding check.
http://stackoverflow.com/questions/18775608/free-a-null-pointer-anyway-or-check-first
This issue was fixed by using the software Coccinelle 1.0.0-rc24.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
... by using the regular Curl_http_done() method which checks for
that. This makes test 1801 fail consistently with error 56 (which seems
fine) so that test is also updated here.
Reported-by: Ben Darnell
Bug: https://github.com/bagder/curl/issues/166
...after the method line:
"Since the Host field-value is critical information for handling a
request, a user agent SHOULD generate Host as the first header field
following the request-line." / RFC 7230 section 5.4
Additionally, this will also make libcurl ignore multiple specified
custom Host: headers and only use the first one. Test 1121 has been
updated accordingly.
Bug: http://curl.haxx.se/bug/view.cgi?id=1491
Reported-by: Rainer Canavan
When checking for a connection to re-use, a proxy-using request must
check for and use a proxy connection and not one based on the host
name!
Added test 1421 to verify
Bug: http://curl.haxx.se/bug/view.cgi?id=1492
SSLeay was the name of the library that was subsequently turned into
OpenSSL many moons ago (1999). curl has not worked with the old SSLeay
library for years. This is now reflected by only using USE_OPENSSL in
code that depends on OpenSSL.
sockfilt.c:288: warning: conversion to 'DWORD' from 'size_t' may alter
its value
sockfilt.c:291: warning: conversion to 'DWORD' from 'size_t' may alter
its value
sockfilt.c:323: warning: conversion to 'DWORD' from 'size_t' may alter
its value
sockfilt.c:326: warning: conversion to 'DWORD' from 'size_t' may alter
its value
* Missing initialisation of upload status caused a seg fault
* Missing data termination caused corrupt data to be uploaded
* Data verification should be performed in <upload> element
* Added missing recipient list cleanup
For consistency, as we seem to have a bit of a mixed bag, changed all
instances of ipv4 and ipv6 in comments and documentation to use the
correct case.
Merge multiple internal arrays into one, even if some variables
will not be used. They are all created with the number of
file descriptors as their size.
Also fix possible thread handle leak in CloseHandle-loop.
Improves performance of test cases 574 and 575 by 50%.
A value of zero causes the thread to relinquish the remainder
of its time slice to any other thread of equal priority that is
ready to run. If there are no other threads of equal priority
ready to run, the function returns immediately, and the thread
continues execution.
http://msdn.microsoft.com/library/windows/desktop/ms686307.aspx
This fixes the test 506 torture test. The internal cookie API really
ought to be improved to separate cookie parsing errors (which may be
ignored) from OOM errors (which should be fatal).
The ability to do HTTP requests over a UNIX domain socket has been
requested before, in Apr 2008 [0][1] and Sep 2010 [2]. While a
discussion happened, no patch seems to have made it through. I decided to
give it a go since I need to test an nginx HTTP server which listens on a UNIX
domain socket.
One patch [3] seems to make it possible to use the
CURLOPT_OPENSOCKETFUNCTION function to gain a UNIX domain socket.
Another person wrote a Go program which can do HTTP over a UNIX socket
for Docker[4] which uses a special URL scheme (though the name contains
cURL, it has no relation to the cURL library).
This patch considers support for UNIX domain sockets at the same level
as HTTP proxies / IPv6, it acts as an intermediate socket provider and
not as a separate protocol. Since this feature affects network
operations, a new feature flag was added ("unix-sockets") with a
corresponding CURL_VERSION_UNIX_SOCKETS macro.
A new CURLOPT_UNIX_SOCKET_PATH option is added and documented. This
option enables UNIX domain sockets support for all requests on the
handle (replacing IP sockets and skipping proxies).
A new configure option (--enable-unix-sockets) and CMake option
(ENABLE_UNIX_SOCKETS) can disable this optional feature. Note that I
deliberately did not mark this feature as advanced; this is a
feature/component that should easily be available.
[0]: http://curl.haxx.se/mail/lib-2008-04/0279.html
[1]: http://daniel.haxx.se/blog/2008/04/14/http-over-unix-domain-sockets/
[2]: http://sourceforge.net/p/curl/feature-requests/53/
[3]: http://curl.haxx.se/mail/lib-2008-04/0361.html
[4]: https://github.com/Soulou/curl-unix-socket
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
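A hedged sketch of the new option (socket path and URL are placeholders; the host in the URL is only used for the Host: header since the connection goes to the socket path):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/info");
    /* connect over the UNIX domain socket instead of TCP */
    curl_easy_setopt(curl, CURLOPT_UNIX_SOCKET_PATH, "/var/run/app.sock");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}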
test1435: a simple test that checks whether a HTTP request can be
performed over the UNIX socket. The hostname/port are interpreted
by sws and should be ignored by cURL.
test1436: test for the ability to do two requests to the same host,
interleaved with one to a different hostname.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
The variable `$ipvnum` can now contain "unix" besides the integers 4
and 6. Functions which receive this parameter have their `$port`
parameter renamed to `$port_or_path` to support a path to the UNIX
domain socket (as a "port" is only meaningful for TCP).
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
If sws is killed it might leave a stale socket file on the filesystem
which would cause an EADDRINUSE error. After this patch, it is checked
whether the socket is really stale and if so, the socket file gets
removed and another bind is executed.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
This extends sws with a --unix-socket option which causes the port to
be ignored (as the server now listens on the path specified by
--unix-socket). This feature will be available in the following patch
that enables checking for UNIX domain socket support.
Proxy support (CONNECT) is not considered nor tested. It does not make
sense anyway, first connecting through a TCP proxy and then letting that
TCP proxy connect to a UNIX socket.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Instead of making the socket domain type depend on use_ipv6, specify the
domain type (AF_INET / AF_INET6) as a variable. An enum is used here with
a switch to avoid compiler warnings in connect_to complaining that rc
is possibly undefined (which is not possible as socket_domain is
always set).
Besides abstracting the socket type, make the debugging messages
independent of the IP version (introduce location_str which points to
"port XXXXX").
Rename "ipv_inuse" to "socket_type" and tighten the scope (main).
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Commit curl-7_23_1-143-g8218064 changed the parameter of
responsive_http_server to accept types other than IPv6 (converting
from a boolean to a string), but only considered the lower-case "ipv6"
and not the "IPv6" variant. This caused all servers to start in IPv4
mode instead.
This patch converts the remaining cases to "ipv6". While not strictly
necessary for the run*server variants, these got also converted for
consistency and to prevent future errors.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
This is the only user of the backtick operator in the command. As the
commands will soon not be executed by a shell anymore (but by perl),
replace the command with its output.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Makes test1119 pass when building with cmake.
configurehelp.pm is generated by configure (autotools). As cmake does
not provide a separate variable for the C preprocessor, default to cpp.
Before commit ef24ecde68 ("symbol-scan:
use configure script knowledge about how to run the C preprocessor"),
this tool would also use 'cpp'.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Fix detection of the AsynchDNS feature, which does not just depend on
pthreads support but also on whether USE_POSIX_THREADS is set or not.
Caught by test 1014.
This patch adds a new ENABLE_THREADED_RESOLVER option (corresponding to
--enable-threaded-resolver of autotools) which also needs a check for
HAVE_PTHREAD_H.
For symmetry with autotools, CURL_USE_ARES is renamed to ENABLE_ARES
(--enable-ares). Checks that test for the availability actually use
USE_ARES instead, as that is the result of whether c-ares is available or
not (in practice this does not matter as CARES is marked as a required
package, but nevertheless it is better to state the intent).
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
I noticed that a patched cmake build would pass tests with a fake local
hostname, but the autotools build skips them:
got unexpected host name back, LD_PRELOAD failed
It turns out that -fvisibility=hidden hides the symbol, and since the
tests are not part of libcurl, it fails too. Just remove the LIBCURL
guard.
Broken since cURL 7.30 (commit 83a42ee20e,
"curl.h: stricter CURL_EXTERN linkage decorations logic").
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Added !SSPI to the features list of the HTTP digest tests, as SSPI
based builds now use the Windows SSPI messaging API rather than the
internal functions, and we can't control the random numbers that get
used as part of the digest.
Basically, since servers often don't respond well to this and instead
send the full contents, libcurl would then error out with the assumption
that the server doesn't support resume. As the data is then already
transferred, this is now considered fine.
Test case 1434 added to verify this. Test case 1042 slightly modified.
Reported-by: hugo
Bug: http://curl.haxx.se/bug/view.cgi?id=1443
HTTP 1.1 is clearly specified to only allow three digit response codes,
and libcurl used sscanf("%3d") for that purpose. This made libcurl
support smaller numbers but not larger. It does now, but we will not
make any specific promises nor document this further since it is going
outside of what HTTP is.
Bug: http://curl.haxx.se/bug/view.cgi?id=1441
Reported-by: Balaji
CURLOPT_COPYPOSTFIELDS with a given CURLOPT_POSTFIELDSIZE does not
require a trailing zero in the data, and by making sure this test doesn't
use one we know it works (combined with valgrind).
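A hedged sketch of the behavior the test covers: with CURLOPT_POSTFIELDSIZE set before CURLOPT_COPYPOSTFIELDS, the copied data may be binary and needs no trailing zero (URL and data are placeholders):

#include <curl/curl.h>

int main(void)
{
  static const char data[4] = { 'a', 'b', 'c', 'd' }; /* not zero-terminated */
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)sizeof(data));
    curl_easy_setopt(curl, CURLOPT_COPYPOSTFIELDS, data);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}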
This change allows runtests.pl to be run from the CMake builddir:
export srcdir=/tmp/curl/tests;
perl -I$srcdir $srcdir/runtests.pl -l
In order to make this possible, all test cases have been moved from
Makefile.am to Makefile.inc.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
The 2to3 tool converted socketserver (which I manually fixed up with an
import fallback) and the print(e) line. The xrange option was converted
to range, but it seems better to use the '*' operator here for
simplicity.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
There is no need for such a function. Include directories propagate by
themselves, and having a function with one simple link statement makes
little sense.
Option --pinnedpubkey takes a path to a public key in DER format and
only connects if it matches (currently only implemented with OpenSSL).
Provides CURLOPT_PINNEDPUBLICKEY for curl_easy_setopt().
Extract a public RSA key from a website like so:
openssl s_client -connect google.com:443 2>&1 < /dev/null | \
sed -n '/-----BEGIN/,/-----END/p' | openssl x509 -noout -pubkey \
| openssl rsa -pubin -outform DER > google.com.der
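A hedged sketch of pinning against the DER file extracted above (the transfer fails unless the server's public key matches):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://google.com/");
    /* abort the TLS handshake if the public key differs from the pinned one */
    curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY, "google.com.der");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}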
By not detecting and rejecting domain names for partial literal IP
addresses properly when parsing received HTTP cookies, libcurl can be
fooled to both send cookies to wrong sites and to allow arbitrary sites
to set cookies for others.
CVE-2014-3613
Bug: http://curl.haxx.se/docs/adv_20140910A.html
Historically the default "unknown" value for progress.size_dl and
progress.size_ul has been zero, since these values are initialized
implicitly by the calloc that allocates the curl handle that these
variables are a part of. Users of curl that install progress
callbacks may expect these values to always be >= 0.
Currently it is possible for progress.size_dl and progress.size_ul
to be set to a value of -1, if Curl_pgrsSetDownloadSize() or
Curl_pgrsSetUploadSize() are passed a "size" of -1 (which a few
places currently do, and a following patch will add more). So
let's update Curl_pgrsSetDownloadSize() and Curl_pgrsSetUploadSize()
so they make sure that these variables always contain a value that
is >= 0.
Updates test579 and test599.
Signed-off-by: Brandon Casey <drafnel@gmail.com>
... to handle "*/[total]". Also, removed the strange hack that made
CURLOPT_FAILONERROR on a 416 response after a *RESUME_FROM return
CURLE_OK.
Reported-by: Dimitrios Siganos
Bug: http://curl.haxx.se/mail/lib-2014-06/0221.html
If a non-standard $TESTDIR is used the file may not be necessary.
Previously a "missing" file resulted in the warning:
readline() on closed filehandle D at ./runtests.pl line 4940.