It turns out Firefox and Chrome both allow spaces in cookie names and
there are sites out there using that.
It turned out that the code meant to strip trailing space from cookie names
didn't work. Fixed now.
Test case 8 modified to verify both these changes.
Closes #639
- Add unit test 1604 to test the sanitize_file_name function.
- Use -DCURL_STATICLIB when building libcurltool for unit testing.
- Better detection of reserved DOS device names.
- New flags to modify sanitize behavior:
  SANITIZE_ALLOW_COLONS:   Allow colons
  SANITIZE_ALLOW_PATH:     Allow path separators and colons
  SANITIZE_ALLOW_RESERVED: Allow reserved device names
  SANITIZE_ALLOW_TRUNCATE: Allow truncating a long filename
- Restore sanitization of banned characters from user-specified outfile.
Prior to this commit sanitization of a user-specified outfile was
temporarily disabled in 2b6dadc because there was no way to allow path
separators and colons through while replacing other banned characters.
Now in such a case we call the sanitize function with
SANITIZE_ALLOW_PATH which allows path separators and colons to pass
through.
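As a rough illustration of how such a flag can gate the replacement logic
(a hypothetical sketch; the real tool function and its signature live in
the tool sources and may differ):

    #include <string.h>

    #define SANITIZE_ALLOW_PATH (1<<1)  /* hypothetical flag value */

    /* illustrative only: replace banned characters unless a flag
       explicitly allows them through */
    static void sanitize_sketch(char *name, int flags)
    {
      char *p;
      for(p = name; *p; p++) {
        if(strchr("|<>\"?*", *p))
          *p = '_';                    /* always banned on Windows */
        else if((*p == '\\' || *p == '/' || *p == ':') &&
                !(flags & SANITIZE_ALLOW_PATH))
          *p = '_';                    /* banned unless paths are allowed */
      }
    }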
Closes https://github.com/curl/curl/issues/624
Reported-by: Octavio Schroeder
It isn't used by the code under current conditions, but for safety it
seems sensible to at least not crash on such input.
Extended unit test 1395 to verify this, as well as a plain "/" input.
Before this patch, if a URL does not start with the protocol
name/scheme, effective URLs would be prefixed with upper-case protocol
names/schemes. This behavior might not be expected by library users or
end users.
For example, if `CURLOPT_DEFAULT_PROTOCOL` is set to "https" and the
URL is "hostname/path", the effective URL would be
"HTTPS://hostname/path" instead of "https://hostname/path".
After this patch, effective URLs would be prefixed with a lower-case
protocol name/scheme.
Closes #597
Signed-off-by: Mohammad AlSaleh <CE.Mohammad.AlSaleh@gmail.com>
The request needs to be read and sent in binary mode in order to use
CRLF instead of LF. Adding --upload-file - causes curl to read stdin
in binary mode.
Make this the default for the curl tool (if built with HTTP/2 powers
enabled) unless a specific HTTP version is requested on the command
line.
This should allow more users to get HTTP/2 powers without having to
change anything.
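On the library side this presumably corresponds to requesting
CURL_HTTP_VERSION_2TLS, which negotiates HTTP/2 for HTTPS and stays on
1.1 for plain HTTP (a sketch, assuming a libcurl built with HTTP/2
support):

    #include <curl/curl.h>

    static void prefer_h2(CURL *curl)
    {
      /* try HTTP/2 over TLS, fall back to 1.1 when unavailable */
      curl_easy_setopt(curl, CURLOPT_HTTP_VERSION,
                       (long)CURL_HTTP_VERSION_2TLS);
    }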
Tests 842, 843, 844, 845, 887, 888, 889, 890, 946, 947, 948 and 949 fail
if a custom port number is specified via the -b option of runtests.pl.
Suggested-by: Kamil Dudka
Bug: http://curl.haxx.se/mail/lib-2015-12/0003.html
As POP3 final and continuation responses both begin with a + character,
and both the finalcode and contcode variables in SASLprotoc are set as
such, we cannot tell the difference between them when we are expecting
an optional continuation from the server such as the following:
  + something else from the server
  +OK final response
Disabled these tests until such time as we can tell the responses apart.
The hashes can vary between architectures (e.g. Sparc differs from x86_64).
This is not a fatal problem but just reduces the coverage of these white-box
tests, as the assumptions about which hash bucket each key falls into
are no longer valid.
- no point in repeating curl features that are already listed as features
from the curl -V output
- remove the port numbers/unix domain path from the output unless
verbose is used, as that is rarely interesting to users.
The tftpd test server now logs all received options and thus all TFTP
test cases need to match them exactly.
Extended test 283 to use and verify --tftp-blksize.
Apparently there are sites out there that do redirects to URLs they
provide in plain UTF-8 or similar. Browsers and wget %-encode such
headers when doing a subsequent request. Now libcurl does too.
Added test 1138 to verify.
Closes #473
- Add new option CURLOPT_DEFAULT_PROTOCOL to allow specifying a default
protocol for schemeless URLs.
- Add new tool option --proto-default to expose
CURLOPT_DEFAULT_PROTOCOL.
In the case of schemeless URLs libcurl will behave in this way:
When the option is used libcurl will use the supplied default.
When the option is not used, libcurl will follow its usual plan of
guessing from the hostname and falling back to 'http'.
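A minimal sketch of the new option from the library side (error
handling omitted for brevity):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        /* schemeless URL: with the option set, libcurl uses the supplied
           default instead of guessing from the host name */
        curl_easy_setopt(curl, CURLOPT_URL, "example.com/path");
        curl_easy_setopt(curl, CURLOPT_DEFAULT_PROTOCOL, "https");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }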
New tool option --ssl-no-revoke.
New value CURLSSLOPT_NO_REVOKE for CURLOPT_SSL_OPTIONS.
Currently this option applies only to WinSSL where we have automatic
certificate revocation checking by default. According to the
ssl-compared chart there are other backends that have automatic checking
(NSS, wolfSSL and DarwinSSL) so we could possibly accommodate them at
some later point.
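Library-side usage is a single bit in CURLOPT_SSL_OPTIONS (sketch):

    #include <curl/curl.h>

    static void no_revoke(CURL *curl)
    {
      /* skip the automatic (WinSSL) certificate revocation checks */
      curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS,
                       (long)CURLSSLOPT_NO_REVOKE);
    }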
Bug: https://github.com/bagder/curl/issues/264
Reported-by: zenden2k <zenden2k@gmail.com>
This prevents valgrind from reporting possibly lost memory that NSPR
uses for file descriptor cache and other globally allocated internal
data structures.
Reported-by: Štefan Kremeň
When CURL_SOCKET_BAD is returned in the callback, it should be treated
as an error (CURLE_COULDNT_CONNECT) if no other socket is subsequently
created when trying to connect to a server.
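For reference, the callback in question is the one installed with
CURLOPT_OPENSOCKETFUNCTION; a minimal sketch of a callback that
triggers this path:

    #include <curl/curl.h>

    static curl_socket_t opensocket_cb(void *clientp, curlsocktype purpose,
                                       struct curl_sockaddr *addr)
    {
      (void)clientp;
      (void)purpose;
      (void)addr;
      /* refuse to provide a socket; the transfer now fails with
         CURLE_COULDNT_CONNECT instead of continuing in a broken state */
      return CURL_SOCKET_BAD;
    }

    /* installed with:
       curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, opensocket_cb); */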
Bug: http://curl.haxx.se/mail/lib-2015-06/0047.html
This function makes a platform-specific absolute path which uses
backslashes on Windows. This form works when passing it on the
command-line, as well as if the source is on another drive.
This avoids unnecessary dynamic allocs, and as this also removed the
last users of *hash_alloc() and *hash_destroy(), those two functions
are now removed.
Make the HTTP headers separated by default for improved security and
reduced risk of information leakage.
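With separate headers as the default, headers meant for a proxy have to
be set explicitly; a sketch, assuming a transfer that goes through a
proxy (the header itself is made up):

    #include <curl/curl.h>

    static struct curl_slist *set_proxy_headers(CURL *curl)
    {
      struct curl_slist *proxy_hdrs =
        curl_slist_append(NULL, "X-Tracking: proxy-only");
      /* CURLHEADER_SEPARATE is now the default: CURLOPT_HTTPHEADER headers
         go to the server only, CURLOPT_PROXYHEADER headers to the proxy */
      curl_easy_setopt(curl, CURLOPT_HEADEROPT, (long)CURLHEADER_SEPARATE);
      curl_easy_setopt(curl, CURLOPT_PROXYHEADER, proxy_hdrs);
      return proxy_hdrs;  /* keep alive during the transfer, then free */
    }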
Bug: http://curl.haxx.se/docs/adv_20150429.html
Reported-by: Yehezkel Horowitz, Oren Souroujon
This commit fixes a regression introduced in curl-7_41_0-186-g261a0fe.
It also introduces a regression test 1424 based on tests 78 and 1423.
Reported-by: Viktor Szakats
Bug: https://github.com/bagder/curl/issues/237
- cache entries must be also refreshed when they are in use
- have the cache itself count as an inuse reference too, freeing up the
timestamp == 0 special value
- use timestamp == 0 for CURLOPT_RESOLVE entries which don't get refreshed
- remove CURLOPT_RESOLVE special inuse reference (timestamp == 0 will prevent refresh)
- fix Curl_hostcache_clean - CURLOPT_RESOLVE entries don't have a special
reference anymore, and the old code would also release non-CURLOPT_RESOLVE
references
- fix locking in Curl_hostcache_clean
- fix unit1305.c: hash now keeps a reference, need to set inuse = 1
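For context, the CURLOPT_RESOLVE entries discussed above enter the
cache like this from the API side (sketch):

    #include <curl/curl.h>

    static struct curl_slist *pin_address(CURL *curl)
    {
      /* such entries are stored with timestamp == 0, so the refresh
         logic above leaves them alone */
      struct curl_slist *host =
        curl_slist_append(NULL, "example.com:443:127.0.0.1");
      curl_easy_setopt(curl, CURLOPT_RESOLVE, host);
      return host;  /* keep alive for the transfer; free afterwards */
    }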
Previously in Curl_http2_switched, we called nghttp2_session_mem_recv to
parse incoming data that had already been received while curl was
handling the upgrade. But we didn't call nghttp2_session_send, so curl
never sent any response to the received frames. Most likely we had
received SETTINGS from the server at this point, so we missed the
opportunity to send SETTINGS + ACK. This commit adds the missing
nghttp2_session_send call in Curl_http2_switched to fix this issue.
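The essence of the fix, simplified (the real code in http2.c carries
more state and error mapping):

    #include <nghttp2/nghttp2.h>

    /* feed the bytes that arrived during the upgrade to nghttp2, then
       flush whatever it queued in response (e.g. the SETTINGS ACK) */
    static int switched_recv_then_send(nghttp2_session *h2,
                                       const uint8_t *buf, size_t buflen)
    {
      ssize_t nread = nghttp2_session_mem_recv(h2, buf, buflen);
      if(nread < 0)
        return -1;
      return nghttp2_session_send(h2);  /* the previously missing call */
    }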
Bug: https://github.com/bagder/curl/issues/192
Reported-by: Stefan Eissing
"name =value" is fine and the space should just be skipped.
Updated test 31 to also test for this.
Bug: https://github.com/bagder/curl/issues/195
Reported-by: cromestant
Help-by: Frank Gevaerts
It seems that some systems (e.g. fairly consistently in some recent
Solaris autobuilds) would manage to get to the connect phase before the
progress callback was called, resulting in a CURLE_COULDNT_CONNECT
error. Reworked the test to point at a test server that never returns a
full result so the progress callback always gets a chance to be called
before the transfer can complete in some other way.
The certificates were missing the digitalSignature and keyAgreement
usage types, of which at least digitalSignature was checked by CyaSSL.
This caused the test server in test 310 (among others) to fail the
startup verification and therefore not run (see
http://curl.haxx.se/mail/lib-2014-07/0303.html).
Since we just started making use of free(NULL) in order to simplify code,
this change takes it a step further and:
- converts lots of Curl_safefree() calls to good old free()
- makes Curl_safefree() not check the pointer before free()
The (new) rule of thumb is: if you really want a function call that
frees a pointer and then assigns it to NULL, then use Curl_safefree().
But we will prefer just using free() from now on.
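After this change, Curl_safefree() boils down to roughly this (a sketch
of the idea, not necessarily the verbatim header):

    #include <stdlib.h>

    /* free a pointer and reset it to NULL -- no pre-check needed since
       free(NULL) is a defined no-op */
    #define Curl_safefree(ptr) \
      do { free(ptr); (ptr) = NULL; } while(0)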
The function "free" is documented in the way that no action shall occur for
a passed null pointer. It is therefore not needed that a function caller
repeats a corresponding check.
http://stackoverflow.com/questions/18775608/free-a-null-pointer-anyway-or-check-first
This issue was fixed by using the software Coccinelle 1.0.0-rc24.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
... by using the regular Curl_http_done() method which checks for
that. This makes test 1801 fail consistently with error 56 (which seems
fine), so that test is also updated here.
Reported-by: Ben Darnell
Bug: https://github.com/bagder/curl/issues/166
...after the method line:
"Since the Host field-value is critical information for handling a
request, a user agent SHOULD generate Host as the first header field
following the request-line." / RFC 7230 section 5.4
Additionally, this will also make libcurl ignore multiple specified
custom Host: headers and only use the first one. Test 1121 has been
updated accordingly.
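A sketch of the client-side effect: with two custom Host: headers set,
only the first is used now (host names are made up):

    #include <curl/curl.h>

    static struct curl_slist *set_host_headers(CURL *curl)
    {
      struct curl_slist *hdrs = curl_slist_append(NULL, "Host: one.example");
      hdrs = curl_slist_append(hdrs, "Host: two.example");  /* now ignored */
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
      return hdrs;  /* free with curl_slist_free_all() after the transfer */
    }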
Bug: http://curl.haxx.se/bug/view.cgi?id=1491
Reported-by: Rainer Canavan
When checking for a connection to re-use, a proxy-using request must
check for and use a proxy connection and not one based on the host
name!
Added test 1421 to verify
Bug: http://curl.haxx.se/bug/view.cgi?id=1492
SSLeay was the name of the library that was subsequently turned into
OpenSSL many moons ago (1999). curl has not worked with the old SSLeay
library for years. This is now reflected by only using USE_OPENSSL in
code that depends on OpenSSL.
sockfilt.c:288: warning: conversion to 'DWORD' from 'size_t' may alter
its value
sockfilt.c:291: warning: conversion to 'DWORD' from 'size_t' may alter
its value
sockfilt.c:323: warning: conversion to 'DWORD' from 'size_t' may alter
its value
sockfilt.c:326: warning: conversion to 'DWORD' from 'size_t' may alter
its value
* Missing initialisation of upload status caused a seg fault
* Missing data termination caused corrupt data to be uploaded
* Data verification should be performed in <upload> element
* Added missing recipient list cleanup
For consistency, as we seem to have a bit of a mixed bag, changed all
instances of ipv4 and ipv6 in comments and documentation to use the
correct case.
Merge multiple internal arrays into one, even if some variables
will not be used. They are all created with the number of
file descriptors as their size.
Also fix possible thread handle leak in CloseHandle-loop.
Improves performance of test cases 574 and 575 by 50%.
A value of zero causes the thread to relinquish the remainder
of its time slice to any other thread of equal priority that is
ready to run. If there are no other threads of equal priority
ready to run, the function returns immediately, and the thread
continues execution.
http://msdn.microsoft.com/library/windows/desktop/ms686307.aspx
This fixes the test 506 torture test. The internal cookie API really
ought to be improved to separate cookie parsing errors (which may be
ignored) from OOM errors (which should be fatal).
The ability to do HTTP requests over a UNIX domain socket has been
requested before, in Apr 2008 [0][1] and Sep 2010 [2]. While a
discussion happened, no patch seems to have made it through. I decided
to give it a go since I need to test an nginx HTTP server which listens
on a UNIX domain socket.
One patch [3] seems to make it possible to use the
CURLOPT_OPENSOCKETFUNCTION function to gain a UNIX domain socket.
Another person wrote a Go program which can do HTTP over a UNIX socket
for Docker[4] which uses a special URL scheme (though the name contains
cURL, it has no relation to the cURL library).
This patch considers support for UNIX domain sockets at the same level
as HTTP proxies / IPv6, it acts as an intermediate socket provider and
not as a separate protocol. Since this feature affects network
operations, a new feature flag was added ("unix-sockets") with a
corresponding CURL_VERSION_UNIX_SOCKETS macro.
A new CURLOPT_UNIX_SOCKET_PATH option is added and documented. This
option enables UNIX domain sockets support for all requests on the
handle (replacing IP sockets and skipping proxies).
A new configure option (--enable-unix-sockets) and CMake option
(ENABLE_UNIX_SOCKETS) can disable this optional feature. Note that I
deliberately did not mark this feature as advanced, this is a
feature/component that should easily be available.
[0]: http://curl.haxx.se/mail/lib-2008-04/0279.html
[1]: http://daniel.haxx.se/blog/2008/04/14/http-over-unix-domain-sockets/
[2]: http://sourceforge.net/p/curl/feature-requests/53/
[3]: http://curl.haxx.se/mail/lib-2008-04/0361.html
[4]: https://github.com/Soulou/curl-unix-socket
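A minimal use of the new option (sketch; the socket path and URL are
made up):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        /* the connection goes to the unix socket; the URL host is
           still used for the Host: header and URL handling */
        curl_easy_setopt(curl, CURLOPT_UNIX_SOCKET_PATH, "/tmp/server.sock");
        curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }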
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
test1435: a simple test that checks whether an HTTP request can be
performed over the UNIX socket. The hostname/port are interpreted
by sws and should be ignored by cURL.
test1436: test for the ability to do two requests to the same host,
interleaved with one to a different hostname.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
The variable `$ipvnum` can now contain "unix" besides the integers 4
and 6. Functions which receive this parameter have their `$port`
parameter renamed to `$port_or_path` to support a path to the UNIX
domain socket (as a "port" is only meaningful for TCP).
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
If sws is killed it might leave a stale socket file on the filesystem
which would cause an EADDRINUSE error. After this patch, it is checked
whether the socket is really stale and if so, the socket file gets
removed and another bind is executed.
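The stale-socket check amounts to something like this (a sketch of the
approach, not the exact sws code):

    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>
    #include <errno.h>

    /* if bind() fails with EADDRINUSE, probe the socket: a failed
       connect() means no server is alive, so the file is stale and
       can be removed before binding again */
    static int bind_unix_socket(int fd, struct sockaddr_un *sa,
                                socklen_t len)
    {
      if(bind(fd, (struct sockaddr *)sa, len) == 0)
        return 0;
      if(errno != EADDRINUSE)
        return -1;
      int probe = socket(AF_UNIX, SOCK_STREAM, 0);
      int alive = probe >= 0 &&
                  connect(probe, (struct sockaddr *)sa, len) == 0;
      if(probe >= 0)
        close(probe);
      if(alive)
        return -1;            /* a live server still owns the socket */
      unlink(sa->sun_path);   /* stale leftover: remove and retry */
      return bind(fd, (struct sockaddr *)sa, len);
    }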
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
This extends sws with a --unix-socket option which causes the port to
be ignored (as the server now listens on the path specified by
--unix-socket). This feature will be available in the following patch
that enables checking for UNIX domain socket support.
Proxy support (CONNECT) is not considered nor tested. It does not make
sense anyway: first connecting through a TCP proxy, then letting that
TCP proxy connect to a UNIX socket.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Instead of making the socket domain type depend on use_ipv6, specify
the domain type (AF_INET / AF_INET6) as a variable. An enum is used
here with a switch to avoid compiler warnings in connect_to complaining
that rc is possibly undefined (which is not possible, as socket_domain
is always set).
Besides abstracting the socket type, make the debugging messages
independent of the IP version (introduce location_str, which points to
"port XXXXX"). Rename "ipv_inuse" to "socket_type" and tighten the
scope (main).
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Commit curl-7_23_1-143-g8218064 changed the parameter of
responsive_http_server to accept types other than IPv6 (converting
from a boolean to a string), but only considered the lower-case "ipv6"
and not the "IPv6" variant. This caused all servers to start in IPv4
mode instead.
This patch converts the remaining cases to "ipv6". While not strictly
necessary for the run*server variants, these were also converted for
consistency and to prevent future errors.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
This is the only user of the backtick operator in the command. As the
commands will soon not be executed by a shell anymore (but by perl),
replace the command with its output.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Makes test1119 pass when building with cmake.
configurehelp.pm is generated by configure (autotools). As cmake does
not provide a separate variable for the C preprocessor, default to cpp.
Before commit ef24ecde68 ("symbol-scan:
use configure script knowledge about how to run the C preprocessor"),
this tool would also use 'cpp'.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Fix detection of the AsynchDNS feature, which depends not just on
pthreads support but also on whether USE_POSIX_THREADS is set.
Caught by test 1014.
This patch adds a new ENABLE_THREADED_RESOLVER option (corresponding to
--enable-threaded-resolver of autotools) which also needs a check for
HAVE_PTHREAD_H.
For symmetry with autotools, CURL_USE_ARES is renamed to ENABLE_ARES
(--enable-ares). Checks that test for the availability actually use
USE_ARES instead, as that is the result of whether c-ares is available
or not (in practice this does not matter as CARES is marked as a
required package, but nevertheless it is better to write the intent).
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
I noticed that a patched cmake build would pass tests with a fake local
hostname, but the autotools build skips them:
  got unexpected host name back, LD_PRELOAD failed
It turns out that -fvisibility=hidden hides the symbol, and since the
tests are not part of libcurl, it fails too. Just remove the LIBCURL
guard.
Broken since cURL 7.30 (commit 83a42ee20e,
"curl.h: stricter CURL_EXTERN linkage decorations logic").
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Added !SSPI to the features list of the HTTP digest tests, as SSPI
based builds now use the Windows SSPI messaging API rather than the
internal functions, and we can't control the random numbers that get
used as part of the digest.
Basically, servers often don't respond well to this and instead send
the full contents, and libcurl would then error out with the
assumption that the server doesn't support resume. As the data is
then already transferred, this is now considered fine.
Test case 1434 added to verify this. Test case 1042 slightly modified.
Reported-by: hugo
Bug: http://curl.haxx.se/bug/view.cgi?id=1443
HTTP 1.1 is clearly specified to only allow three-digit response codes,
and libcurl used sscanf("%3d") for that purpose. This made libcurl
support smaller numbers but not larger ones. It does now, but we will
not make any specific promises nor document this further, since it goes
outside of what HTTP is.
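A standalone sketch of the old limitation (not curl's actual parser):
sscanf("%3d") stops after three digits, so longer codes were truncated:

    #include <stdio.h>

    int main(void)
    {
      int old_style = 0;
      long new_style = 0;
      sscanf("2000", "%3d", &old_style);  /* old parsing: yields 200 */
      sscanf("2000", "%ld", &new_style);  /* larger codes now pass: 2000 */
      printf("%d %ld\n", old_style, new_style);
      return 0;
    }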
Bug: http://curl.haxx.se/bug/view.cgi?id=1441
Reported-by: Balaji