configure script now provides conditional definitions for Makefile.am
that result in CURL_HIDDEN_SYMBOLS being defined by resulting makefiles
when appropriate.
Additionally, the configure script option for symbol hiding control is now
named --enable-symbol-hiding / --disable-symbol-hiding. While still valid,
the old option names --enable-hidden-symbols / --disable-hidden-symbols will
be deprecated in some future release.
BUILDING_LIBCURL and CURL_STATICLIB are no longer defined in curl_config.h;
configure will generate appropriate conditionals so that the mentioned symbols
get defined and used in Makefiles at compilation time.
Don't set the "has_openssl" variable if yassl or polarssl is found as
they will simply not work as 100% drop-in replacements for some of the
stuff the "OpenSSL" feature is used for.
I spotted this problem when doing test runs with PolarSSL builds.
With FOLLOWLOCATION enabled, when a 3xx page is downloaded and the
download size was known (like with a Content-Length header), but the
subsequent URL (transferred after the 3xx page) was chunked encoded, then
the previous "known download size" would linger and cause the progress
meter to get incorrect information, i.e. the former value would remain
being sent in. This could easily result in downloads that were WAY
larger than "expected" and would cause >100% outputs with the curl
command line tool.
Test case 599 was created and it was used to repeat the bug and then
verify the fix.
Bug: http://curl.haxx.se/bug/view.cgi?id=3510057
Reported by: Michael Wallner
The line endings broke when I saved the three recent patches (my fault,
not Colin's) to 'git am' them.
Adjusted the stripping of the test program for comparing to also exclude
the SSH key file name as that will differ and use a local path name.
The intention is to take the output of curl's --libcurl option,
as exercised in test 14xx, and generate a corresponding test15xx
in which the generated code is compiled and run. This will verify
that the generated code behaves equivalently to the original
invocation of the curl command.
The script is not yet integrated into the configure / makefile
machinery.
With commit 035ef06bda applied, the test pop3 server needs to send
".\r\n" as the body terminating sequence and there needs to be a final
CRLF in the actual body in the test data file.
The proxy parser function strips trailing slashes off the proxy name,
which could lead to a mistaken zero-length proxy name that would be
treated as no proxy at all by subsequent functions!
This is now detected and an error is returned. Verified by the new test
1329.
Reported by: Chandrakant Bagul
Bug: http://curl.haxx.se/mail/lib-2012-02/0000.html
When CURLOPT_REFERER has been used, curl_easy_reset() did not properly
clear it.
Verified with the new test 598
Bug: http://curl.haxx.se/bug/view.cgi?id=3481551
Reported by: Michael Day
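A hedged sketch of the scenario test 598 covers; the URLs are illustrative:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/first");
      curl_easy_setopt(curl, CURLOPT_REFERER, "http://example.com/from");
      curl_easy_perform(curl);

      /* after this, no Referer: header may be sent on the next request */
      curl_easy_reset(curl);

      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/second");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }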
We want to continue to the next URL to try even on failures returned
from libcurl. This makes -f with ranges still get subsequent URLs even
if occasional ones return an error. This was a regression as it used to
work and broke in the 7.23.0 release.
Added test case 1328 to verify the fix.
Bug: http://curl.haxx.se/bug/view.cgi?id=3481223
Reported by: Juan Barreto
Allows tests from the libtest subdir to generate log traces
similar to those of curl with --tracetime and --trace-ascii
options but with output going to stderr.
Make the 'pidfile' and 'logfile' options appear first on the command line in
order to ensure that processing of other options which write to the logfile
does so to the intended file and not the default one.
There's a new 'http-proxy' server for tests that runs on a separate port
and lets clients do HTTP CONNECT to other ports on the same host to
allow us to test HTTP "tunneling" properly.
Test cases now have a <proxy> section in <verify> to check that the
proxy protocol part matches correctly.
Test cases 80, 83, 95, 275, 503 and 1078 have been converted. Test 1316
was added.
When an HTTP connection is re-used for a subsequent request without
proxy, it would always re-use the Host: header of the first request. As
host names are case insensitive, this would make curl send a different
host name casing than what the particular request used.
Now it will instead always use the most recent host name, to always use
the desired casing.
Added test case 1318 to verify.
Bug: http://curl.haxx.se/mail/lib-2011-12/0314.html
Reported by: Alex Vinnik
Initial step in order to allow our pingpong server to better support arbitrary
application data splitting among TCP packets. This first commit only addresses
reassembly of data that sockfilter processes read from sockets and the
pingpong server later reads from the sockfilters' stdout.
Make testcurl.pl ignore messages pertaining to third party m4 files we neither
care about nor use, on a per-file basis, while retaining all other warnings.
This closes temporary commit e71e226f
1- Two new error codes are introduced.
CURLE_FTP_ACCEPT_FAILED is set whenever accepting the FTP server's data
connection fails.
CURLE_FTP_ACCEPT_TIMEOUT is set whenever accepting the connection times out.
Neither of these errors is considered fatal and the control connection
remains OK, because it could just be a firewall blocking the server from
connecting to the client.
2- One new setopt option was introduced.
CURLOPT_ACCEPTTIMEOUT_MS
It sets the maximum amount of time the FTP client will wait for the
server to connect. The internal default accept timeout is 60 seconds.
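A minimal sketch of how an application might use the new option on an active
(PORT) transfer; the URL and timeout value are illustrative:

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      CURLcode res;
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.txt");
      /* ask for an active transfer; "-" means use the default address */
      curl_easy_setopt(curl, CURLOPT_FTPPORT, "-");
      /* wait at most 10 seconds for the server to connect back */
      curl_easy_setopt(curl, CURLOPT_ACCEPTTIMEOUT_MS, 10000L);
      res = curl_easy_perform(curl);
      if(res == CURLE_FTP_ACCEPT_TIMEOUT)
        fprintf(stderr, "server never connected back in time\n");
      curl_easy_cleanup(curl);
    }
    return 0;
  }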
As commit ce896875f8 fixed a timer that accidentally had been moved in
code and then returned a bad timer, the lib500.c code (used in test 500
and some others) now checks 5 timers against each other to verify that
they have the correct relative values. We cannot compare against
absolute values as the timings will vary a lot.
When passing in multiple files to a single -F, the parser would get all
confused if one of the specified files had a custom type= assigned. Test
case 1315 was added to verify this functionality.
Reported by: Colin Hogben
Keep track of which sockets are the result of accept() calls and
refuse to call the closesocket callback for those sockets. Test case 596
now verifies that the open socket callback is called the same number of
times as the closed socket callback for active FTP connections.
Bug: http://curl.haxx.se/mail/lib-2011-12/0018.html
Reported by: Gokhan Sengun
When the new socket is created for an active connection, it is now done
using the open socket callback.
Test case 596 was modified to run fine, although it hides the fact that
the close callback is still called too many times, as it also gets
called for closing sockets that were created with accept().
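A hedged sketch of the callback pairing involved (assuming POSIX sockets);
the counter names and URL are illustrative, not taken from the actual test:

  #include <stdio.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <curl/curl.h>

  static int opened, closed_count;

  static curl_socket_t open_cb(void *clientp, curlsocktype purpose,
                               struct curl_sockaddr *addr)
  {
    (void)clientp; (void)purpose;
    opened++;
    return socket(addr->family, addr->socktype, addr->protocol);
  }

  static int close_cb(void *clientp, curl_socket_t sock)
  {
    (void)clientp;
    closed_count++;  /* with the fix this should end up equal to 'opened' */
    return close(sock);
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.txt");
      curl_easy_setopt(curl, CURLOPT_FTPPORT, "-");  /* active FTP */
      curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, open_cb);
      curl_easy_setopt(curl, CURLOPT_CLOSESOCKETFUNCTION, close_cb);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
      printf("opened %d, closed %d\n", opened, closed_count);
    }
    return 0;
  }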
Previously the log function would just filter out all CR and LF
occurrences from the log to make it more readable. This had the downside
that it made it very hard to see CR LFs when they actually matter.
Now they're instead converted to "[CR]" and "[LF]" in the log to become
apparent to readers.
Curl_pop3_write() now has a state machine that scans for the end of a
POP3 body so that the CR LF '.' CR LF sequence can come in anything
from one up to five subsequent packets.
Test case 810 is modified to use SLOWDOWN which makes the server pause
between each single byte and thus makes the POP3 body get sent to curl
basically one byte at a time.
"Active FTP hangs if server does not open data connection"
The server first sends a 150 and then when libcurl waits for the data
transfer, the server sends a 425.
The protocol parts for these tests do not include QUIT simply because
the error is CURLE_OPERATION_TIMEDOUT (28), which is a generic timeout
error without specifically saying which connection it concerns, and
for timeouts libcurl marks the control channel as "invalid". As this
test case times out for the data connection, it could still use the
control channel.
Skip a floating point addition operation when the integral part of the time
difference is zero. This avoids potential floating point addition rounding
problems while preserving the decimal part value.
By setting PROTOPT_NOURLQUERY in the protocol handler struct, the
protocol will get the "query part" of the URL cut off before the data is
handled by the protocol-specific code. This makes libcurl adhere to
RFC3986 section 2.2.
Test 1220 is added to verify a file:// URL with query-part.
A regression between 7.22.0 and 7.23.0 -- downloading a file with the
flags -O and -J results in the content being written to stdout if and
only if there was no Content-Disposition header in the http response. If
there is a C-D header with a filename attribute, the output is correctly
written.
Reported by: Dave Reisner
Bug: http://curl.haxx.se/mail/archive-2011-11/0030.html
591 -> FTP multi PORT and 425 on upload
592 -> FTP multi PORT and 421 on upload
593 -> FTP multi PORT upload, no data conn and no transient neg. reply
594 -> FTP multi PORT upload, no data conn and no positive prelim. reply
1206 -> FTP PORT and 425 on download
1207 -> FTP PORT and 421 on download
1208 -> FTP PORT download, no data conn and no transient negative reply
1209 -> FTP PORT download, no data conn and no positive preliminary reply
This test is created to verify Rene Bernhardt's patch which makes sure
libcurl properly does _not_ deal with Negotiate if not asked to, even if
the proxy says it can serve it.
Make NODATACONN425 and NODATACONN421 return a 150 positive preliminary reply
before 425 or 421.
New NODATACONN150 returns 150 without any further positive or negative reply.
Now NODATACONN doesn't reply anything at all.
Some torture tests left the FTP test server in an unresponsive state, resulting
in torture tests that actually completed following unexpected code paths.
Changes in this commit solely address this issue, plus some adjustments to
ftpserver.pl logging related to data channel establishment and tear down.
Pending NODATACONN-related adjustments are reserved for a further commit.
Ensure verification takes place with no server commands file.
Ignore verbose setting for running server precheck.
Tweak unresponsive server message, to allow detection by haxx.se scripts.
NODATACONN421: applies only to active FTP mode, instructs server to not
establish data connection back to client and reply with FTP 421.
NODATACONN425: applies only to active FTP mode, instructs server to not
establish data connection back to client and reply with FTP 425.
NODATACONN: applies to both active and passive FTP modes, instructs server
to neither establish nor accept a data channel, and fools the client into
believing that the data channel connection is possible.
Some polishing probably required.
When running torture tests, verify before each test case that required
pingpong servers which are supposed to be alive are actually responsive.
If found not responsive then restart them.
EPRT is now supported by default by the server. To disable it, use the
generic REPLY instruction in the <servercmd> tag. Test 116 now has it
disabled. All other existing active FTP port tests already strip out the
port commands from the logs, so the server change isn't that noticeable.
As commit 5850cc4808 clarifies, libcurl can deliver header lines that
are longer than CURL_MAX_WRITE_SIZE, only body data is limited to that
size. The curl tool had a check (when built debug-enabled) that made the
wrong checks, and this new test 1205 verifies that larger headers work.
The fix is pretty much the one Nick Zitzmann provided, just edited to do
the right indent levels and with test case 1204 added to verify the fix.
Bug: http://curl.haxx.se/mail/lib-2011-10/0190.html
Reported by: Nick Zitzmann
When doing a multipart formpost with a read callback, and that callback
returns CURL_READFUNC_ABORT, that return code must be properly
propagated back and handled accordingly. Previously it would be handled
as a zero byte read which would cause a hang!
Added test case 587 to verify. It uses the lib554.c source code with a
small ifdef.
Reported by: Anton Bychkov
Bug: http://curl.haxx.se/mail/lib-2011-10/0097.html
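A hedged sketch of the failing scenario: a formpost fed through a read
callback (here via CURLFORM_STREAM) that aborts; all names and sizes are
illustrative:

  #include <curl/curl.h>

  static size_t read_cb(void *ptr, size_t size, size_t nmemb, void *userp)
  {
    (void)ptr; (void)size; (void)nmemb; (void)userp;
    /* pretend something went wrong; this must abort the whole transfer
       instead of being treated as a zero-byte read (which used to hang) */
    return CURL_READFUNC_ABORT;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    struct curl_httppost *post = NULL, *last = NULL;
    if(curl) {
      curl_formadd(&post, &last,
                   CURLFORM_COPYNAME, "sendfile",
                   CURLFORM_STREAM, (void *)"dummy", /* passed to read_cb */
                   CURLFORM_CONTENTSLENGTH, 12L,
                   CURLFORM_FILENAME, "file.txt",
                   CURLFORM_END);
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);
      curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
      /* expect CURLE_ABORTED_BY_CALLBACK rather than a hang */
      curl_easy_perform(curl);
      curl_formfree(post);
      curl_easy_cleanup(curl);
    }
    return 0;
  }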
When, for a given test, the server is instructed to close the connection
after the server reply, we now wait a very small amount of time (50ms)
before doing so. This is done to allow the client to, at least partially,
read the server reply before getting an ECONNRESET.
The above is required to make test cases 1070, 1200, 1201 and 1202 pass
with Cygwin 1.5.X on W2K.
The GOPHER test server closes the connection after _every_ server reply; as
such, at some point it could require a bigger delay or the use of shutdown()
before a server-side initiated disconnection.
Added missing memoryTracking to test cases 560 and 583. If this triggers
leak detection on these, it only means that previously it was going unnoticed.
Configure script option --enable-wb-ntlm-auth renamed to --enable-ntlm-wb
Configure script option --disable-wb-ntlm-auth renamed to --disable-ntlm-wb
Preprocessor symbol WINBIND_NTLM_AUTH_ENABLED renamed to NTLM_WB_ENABLED
Preprocessor symbol WINBIND_NTLM_AUTH_FILE renamed to NTLM_WB_FILE
Test harness env var CURL_NTLM_AUTH renamed to CURL_NTLM_WB_FILE
Static function wb_ntlm_close renamed to ntlm_wb_cleanup
Static function wb_ntlm_initiate renamed to ntlm_wb_init
Static function wb_ntlm_response renamed to ntlm_wb_response
Feature string literal NTLM_SSO renamed to NTLM_WB.
Preprocessor symbol USE_NTLM_SSO renamed to WINBIND_NTLM_AUTH_ENABLED.
curl's 'long' option 'ntlm-sso' renamed to 'ntlm-wb'.
Fix some comments to make clear that this is actually an NTLM delegation.
Previous interfaces for these libcurl internal functions did not allow telling
apart a legitimate zero-size result from an error condition. These functions
now return a CURLcode indicating success or a specific error. The output size
is returned using a pointer argument.
All usage of these two functions, and others closely related, has been adapted
to the new interfaces. Related error and OOM handling was adapted or added
where missing. Unit test 1302 was also adapted.
Two problems were fixed:
GET_PARAMETER responses that have no body must use a 204 response or
properly set the length to 0.
One of the <data> sections had the wrong content-length for its
GET_PARAMETER response.
Enabled test 572 again.
There are two keywords in cookie headers that don't follow the regular
name=value style: secure and httponly. Still we must support that they
are written like 'secure=' and then treat them as if they were written
'secure'. Test case 31 was much extended by Rob Ward to test this.
Bug: http://curl.haxx.se/bug/view.cgi?id=3349227
Reported by: "gnombat"
A regression where CURLFORM_BUFFER stopped properly inserting the file
name part in the formpart. Bug introduced in commit f851f76857.
Added CURLFORM_BUFFER use to test 554 to verify this.
Bug: http://curl.haxx.se/mail/lib-2011-07/0176.html
Reported by: Henry Ludemann
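A hedged sketch of the kind of CURLFORM_BUFFER use that regressed; the field
name, file name and URL are illustrative:

  #include <curl/curl.h>

  int main(void)
  {
    static const char data[] = "hello from a memory buffer";
    CURL *curl = curl_easy_init();
    struct curl_httppost *post = NULL, *last = NULL;
    if(curl) {
      /* "data.txt" is what must show up as filename= in the form part */
      curl_formadd(&post, &last,
                   CURLFORM_COPYNAME, "upload",
                   CURLFORM_BUFFER, "data.txt",
                   CURLFORM_BUFFERPTR, data,
                   CURLFORM_BUFFERLENGTH, (long)(sizeof(data) - 1),
                   CURLFORM_END);
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);
      curl_easy_perform(curl);
      curl_formfree(post);
      curl_easy_cleanup(curl);
    }
    return 0;
  }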
Content-disposition headers can provide file names with semicolons, which
previously would be cut off at that point.
Added test cases 1311 and 1312 to verify -J.
Bug: http://curl.haxx.se/bug/view.cgi?id=3375603
Reported by: Peter Hjalmarsson
Use preprocessor symbols WINBIND_NTLM_AUTH_ENABLED and WINBIND_NTLM_AUTH_FILE
for Samba's winbind daemon ntlm_auth helper code implementation and filename.
Retain preprocessor symbol USE_NTLM_SSO for NTLM single-sign-on feature
availability, independent of the implementation.
For test harness, prefix NTLM_AUTH environment vars with CURL_
Refactor and rename configure option --with-ntlm-auth to --enable-wb-ntlm-auth[=FILE]
When libcurl has said to the server that there's a POST or PUT coming
(with a content-length and all) it has to either deliver that amount of
data or it needs to close the connection before trying a second request.
Adds test cases 1129, 1130 and 1131
The bug report is about when used with 100-continue, but the change is
more generic.
Bug: http://curl.haxx.se/mail/lib-2011-06/0191.html
Reported by: Steven Parkes
CURLM_CALL_MULTI_PERFORM stopped being a valid return code from
curl_multi_perform back in 7.20.0. All the libcurl tests are adjusted to
this and no longer check for this return code. Makes them simpler.
Autobuild submitters can use this to add some text to their
setup files to describe issues they've found with the build
or tests. This could include laying blame on test failures on
network issues or dependent libraries, explaining away compiler
warnings or providing any additional information that could be
useful to people reviewing and investigating problems with the
publicly available autobuild logs. Note that persistent test
failures that are not issues with curl itself should normally be
fixed by excluding them from the test run instead.
This is an entirely optional field that is not entered by the
user the first time a new build is created.
adding unit test for Curl_llist_move, documenting unit-tested functions
in llist.c, changing unit-test to unittest, replacing assert calls with
abort_unless calls
The CURLFORM_STREAM option is documented to only insert a file name (and thus
look like a file upload) in the part if CURLFORM_FILENAME is set, but in
reality it always inserted a filename="" and if CURLFORM_FILENAME wasn't
set, it would insert rubbish (or possibly crash).
This is now fixed to work as documented, and test 554 has been extended
to verify this.
Reported by: Sascha Swiercy
Bug: http://curl.haxx.se/mail/lib-2011-06/0070.html
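A hedged sketch contrasting the two documented cases; the names are
illustrative and a read callback set via CURLOPT_READFUNCTION is assumed:

  #include <curl/curl.h>

  void add_parts(struct curl_httppost **post, struct curl_httppost **last)
  {
    /* with CURLFORM_FILENAME: the part carries filename="upload.bin" */
    curl_formadd(post, last,
                 CURLFORM_COPYNAME, "with_name",
                 CURLFORM_STREAM, (void *)"userdata",
                 CURLFORM_CONTENTSLENGTH, 10L,
                 CURLFORM_FILENAME, "upload.bin",
                 CURLFORM_END);

    /* without CURLFORM_FILENAME: the part must now be sent without any
       filename= at all, instead of the rubbish it used to get */
    curl_formadd(post, last,
                 CURLFORM_COPYNAME, "without_name",
                 CURLFORM_STREAM, (void *)"userdata",
                 CURLFORM_CONTENTSLENGTH, 10L,
                 CURLFORM_END);
  }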
Properly deal with the fact that the last fread() call most probably is
a short read, and when using callbacks in fact all calls can be short
reads. No longer consider a file read done until it returns a 0 from the
read function.
Reported by: Aaron Orenstein
Bug: http://curl.haxx.se/mail/lib-2011-06/0048.html
When a time condition isn't met, so that no body is delivered to the
application even though a 2xx response is being read from the server, we
must close the connection to avoid having a re-use of the connection get
completely tricked.
Added test 1128 to verify.
Added test 1126 and 1127 to verify curl's behaviour when If-Modified-Since
is used and a 200 is returned.
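A hedged sketch of a time-conditional request of the kind these tests
exercise; the URL and timestamp are illustrative:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      long unmet = 0;
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/doc.html");
      /* only fetch the body if it changed after the given epoch time */
      curl_easy_setopt(curl, CURLOPT_TIMECONDITION,
                       (long)CURL_TIMECOND_IFMODSINCE);
      curl_easy_setopt(curl, CURLOPT_TIMEVALUE, 1306408800L);
      curl_easy_perform(curl);
      /* 1 means the condition was not met and no body was delivered */
      curl_easy_getinfo(curl, CURLINFO_CONDITION_UNMET, &unmet);
      curl_easy_cleanup(curl);
    }
    return 0;
  }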
The list of test cases in Makefile.am is now sorted numerically.
When connecting to a socks or similar proxy we do the proxy handshake at
once when we know the TCP connect is completed and we only consider the
"connection" complete after the proxy handshake. This fixes test 564
which is now no longer considered disabled.
Reported by: Dmitri Shubin
Bug: http://curl.haxx.se/mail/lib-2011-04/0127.html
Added CURLOPT_TRANSFER_ENCODING as the option to set to request Transfer
Encoding in HTTP requests (if built zlib enabled). I also renamed
CURLOPT_ENCODING to CURLOPT_ACCEPT_ENCODING (while keeping the old name
around) to reduce the confusion when we have two encoding options for
HTTP.
--tr-encoding is now the new command line option for curl to request
this, and thus I updated the test cases accordingly.
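A hedged sketch of using the two options together; the URL is illustrative:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* "" asks for all supported content encodings (this option was
         previously CURLOPT_ENCODING, which is kept as an alias) */
      curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "");
      /* request a compressed transfer encoding (TE:); needs zlib */
      curl_easy_setopt(curl, CURLOPT_TRANSFER_ENCODING, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }

On the command line, --compressed maps to the former and the new
--tr-encoding to the latter.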
When TE: is inserted in the request, we must add a "Connection: TE" as
well to be HTTP 1.1 compliant. If a custom Connection: header is passed
in, we must use that and only append TE to it. Test case 1125 verifies
TE: + custom Connection:.
Transfer-Encoding differs from Content-Encoding in a few subtle ways,
but primarily it concerns the transfer only and not the content, so when
it is discovered to be compressed we know we have to uncompress it.
Compressed transfers will only arrive in a response after we have requested
them with the appropriate TE: header.
Test cases 1122 and 1123 verify this.
curl-config --version didn't output the correct version string (bug
introduced in commit 0355e33b5f), and unfortunately the test
case 1022 that was supposed to check for this was broken.
This change fixes the test to detect this problem and it fixes the
output.
Bug: http://curl.haxx.se/bug/view.cgi?id=3288727
This test case is meant to verify that the logic in commit
60172a0446 actually works. This test failed for me before that
change and it works after it.
All C and H files now (should) feature the proper project curl source
code header, which includes basic info, a copyright statement and some
basic disclaimers.
Add test 582 for uploading a file using sftp and the multi interface.
(Patch and test slightly tweaked by Daniel Stenberg)
Initially marked as disabled until it is fixed in the source.
The stopserver function would append pids to kill and could append them
without separating them with space properly. The result would be a very
large number that by (some implementations of) kill would be interpreted
as a negative number and that process group would be wiped...
Bug: http://curl.haxx.se/bug/view.cgi?id=3188836
Reported by: Greg Pratt
Removed the "netrc_debug" keyword replaced with --netrc-file additions.
Removed the debug code from Curl_parsenetrc as it is superseeded by
--netrc-file.
The HTTP parser allocated memory on each received Location: header
without properly freeing old data. Starting now, the code only considers
the first Location: header and will blissfully ignore subsequent ones.
Bug: http://curl.haxx.se/bug/view.cgi?id=3165129
Reported by: Martin Lemke
This makes it possible to skip the call to unit_stop() in such
cases. Also use Curl_safefree() in unit test 1302 so it will
pass the memory torture test.
The UNITTEST_START and UNITTEST_STOP defines needed to do a new brace
level so that test cases can declare variables fine and still remain
fine C89 code.
The test runner script now knows if unittests can run and the unit test
setup file says it is one. I also made runtests.pl deal with no
<command> tag set, so that the description file can get even simpler.
When configure --enable-debug has been used, all files in lib/ are now
built twice and a separate static library crafted for unit-testing will
be linked. The unit tests in the tests/unit subdir will use that
library.
configure.ac: Test harness libhostname library will not be built for Windows.
runtests.pl: LD_PRELOAD mechanism will not be used to load libhostname
library on operating systems which lack LD_PRELOAD support.
Providing multiple dots in a series in the domain field (domain=..com) could
trick the cookie engine to wrongly accept the cookie believing it to be
fine. Since the tailmatching would then match all .com sites, the cookie would
then be sent to all of them.
The code now requires at least one letter between each dot for them to be
counted. Edited test case 61 to verify this.
They were all wrong previously since none used the <brackets> they
should for MAIL FROM. Now libcurl adds them itself if the app doesn't, so
they end up wrong less easily.
Failed HTTPS tests: 301, 306, 311, 312, 313, 560
311, 312 need more detailed error reporting from axTLS.
313 relates to CRL, which hasn't been implemented yet.
Test 580 is removed again for two reasons:
1) Some compilers aren't satisfied by just a data variable called 'test'
when first.o wants a function called 'test'. The Solaris compiler says
"ld: warning: symbol `test' has differing types:" while the AIX compiler
downright rejects it.
2) Test case 1119 that was added after this test is way more complete
and cover everything test 580 does and more without introducing the same
problems.
If a command is set type="perl", it can now specify a perl program that will
be run instead of an ordinary curl or built tool.
A perl test automatically disables memory and valgrind debugging.
This new script scans for all enums and #defines used by the curl/curl.h
and curl/multi.h headers. Then it reads all symbols mentioned in
symbols-in-versions and makes sure that there are no entries missing in
there. It then proceeds to verify that the entries that
symbols-in-versions mentions but aren't found in the sources are truly
documented as removed.
This script is used in the new test case 1119
The new perl script mk580.pl generates a C table in a fresh source file
named lib580.c and if that compiles fine we know that the file
docs/libcurl/symbols-in-versions at least doesn't include any symbols
that are misspelled.
An additional feature would be to somehow scan curl/curl.h and compare
with symbols-in-versions to see if there are symbols missing.
Some FTP servers (e.g. Pure-ftpd) end up hanging if we close the data
connection before transferring all the requested data. If we send ABOR
in that case, it prevents the server from hanging.
Bug: https://bugzilla.redhat.com/643656
Reported by: Pasi Karkkainen, Patrick Monnerat
The URL parser got a little stricter as it now considers a ? to be a
host name divider, so that the slightly sloppier URLs work too. The
problem that made me do this change was the reported problem with a URL
like: www.example.com?email=name@example.com This form of URL is not
really a legal URL (due to the missing slash after the host name) but is
widely accepted by all major browsers and libcurl also already accepted
it; it was just the '@' letter that triggered the problem now.
The side-effect of this change is that now libcurl no longer accepts the
? letter as part of user-name or password when given in the URL, which
it used to accept (and is tested in test 191). That letter is however
mentioned in RFC3986 as required to be percent-encoded since it is used
as a divider.
Bug: http://curl.haxx.se/bug/view.cgi?id=3090268
It was pointed out that the special case libcurl did for 416 was
incorrect and wrong. 416 is not really different from other errors so the
response body must be handled like for other errors/http responses.
Reported by: Chris Smowton
Bug: http://curl.haxx.se/bug/view.cgi?id=3076808
This delays between write operations, hopefully making it easier
to spot problems where libcurl doesn't flush the socket properly
before waiting for the next response.
The date format in RFC822 allows the seconds part of HH:MM:SS to be
left out, but this function didn't allow it. This change also includes a
modified test case that makes sure that this now works.
Reported by: Matt Ford
Bug: http://curl.haxx.se/bug/view.cgi?id=3076529
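A minimal sketch of what should now parse, with illustrative dates; both
calls should yield the same time_t since the omitted seconds default to zero:

  #include <time.h>
  #include <curl/curl.h>

  int main(void)
  {
    time_t with_seconds = curl_getdate("Sun, 06 Nov 1994 08:49:00 GMT", NULL);
    time_t no_seconds   = curl_getdate("Sun, 06 Nov 1994 08:49 GMT", NULL);
    /* both return -1 on parse failure; previously the second call failed */
    return (with_seconds == no_seconds) ? 0 : 1;
  }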
When curl calls a function from that library then it needs to
explicitly link to the library instead of piggybacking on
libcurl's own dependency. Without this, GNU ld with the
--no-add-needed flag fails when linking (which Fedora now does
by default).
Reported by: Quanah Gibson-Mount
Bug: http://curl.haxx.se/mail/lib-2010-09/0085.html
Fixed some issues that caused xmllint failures, added features
and keywords, fixed some quotes and removed some <strip> sections
that unnecessarily limited test checking.
Introduced in the initial gopher commits, there was added logic to do
GOPHER test serving in the pingpong server but as it resembles HTTP much
more than FTP or SMTP, the gopher testing has been moved over to instead
use the sws (HTTP) server. This change simply removes unused code.
HTTP allows a server to send trailing headers after all the chunks
have been sent WITHOUT signalling their presence in the first response
headers. The "Trailer:" header is only a SHOULD there, and as we need to
handle the situation even without that header I made libcurl ignore
Trailer: completely.
Test case 1116 was added to verify this and to make sure we handle more
than one trailer header properly.
Reported by: Patrick McManus
Bug: http://curl.haxx.se/bug/view.cgi?id=3052450
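An illustrative, made-up wire example of the situation: trailers following
the terminating chunk without any prior "Trailer:" announcement:

  HTTP/1.1 200 OK
  Transfer-Encoding: chunked

  4
  data
  0
  X-Trailer-One: value1
  X-Trailer-Two: value2

(every line ends in CRLF on the wire, and a final empty line follows the
trailers; the two X-Trailer-* headers are the ones libcurl must handle even
though the initial headers never announced them)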
It was introduced in commit eeb2cb05 along with the -F type=
change. Also fixed a typo in the name of the magic filename=
parameter. Tweaked tests 39 and 173 to better test this path.
The -F option allows some custom parameters within the given string, and
those strings are separated with semicolons. You can for example specify
"name=daniel;type=text/plain" to set content-type for the
field. However, the use of semicolons like that made it not work fine if
you specified one within the content-type, like for:
"name=daniel;type=text/plain;charset=UTF-8"
... as the second one would be seen as a separator and "charset" is no
parameter curl knows anything about so it was just silently discarded.
The new logic now checks if the semicolon and the following keyword look
like a parameter it knows about, and if they don't, they are assumed to be
meant to be used within the content-type string itself.
I modified test case 186 to verify that this works as intended.
Reported by: Larry Stone
Bug: http://curl.haxx.se/bug/view.cgi?id=3048988
It seems that it's time to look at some better ideas for the win32
non-configure builds; probably a prebuild target which copies
config-win32.h to curl_config.h and then also appends feature
defines like USE_ARES.
The 66 bytes checked are those 38 bytes with the chunked encoding
headers added: 8+8+10+35+5 = 66
The three-letter words become 8 bytes on the wire because they are sent
like: "3\r\none\r\n"
... and there's the trailing 5-byte write after the four lines since
the final chunk is sent (which is "0\r\n\r\n").
In some situations, libtool will change directories and perform
a link step before executing the libtest test app. Since
LD_PRELOAD is in effect for this entire process, the path to the
binary must be absolute so it will be valid no matter in which
directory the app is running.
Due to the layout of the singletest function there are situations where
it returns before it clears the environment variables that were
especially set for the single specific test case. That could lead to
subsequent tests getting executed with environment variables sticking
around from a previous test which could lead to badness.
This change makes sure to clear all custom variables that may be lying
around from a previous round, before running a test case.
Reported by: Kamil Dudka
Bug: http://curl.haxx.se/mail/lib-2010-08/0141.html
When the progress callback is called during the TCP connection, an error
return would accidentally not abort the operation as intended but would
instead be counted as a failure to connect to that particular IP and
libcurl would just continue to try the next. I made singleipconnect()
and trynextip() return CURLcode properly.
Added bonus: it corrected the error code for bad --interface usages,
like tested in test 1084 and test 1085.
Reported by: Adam Light
Bug: http://curl.haxx.se/mail/lib-2010-08/0105.html
Test 563 is enabled now and verifies that the combo FTP type=A URL,
CURLOPT_PORT set and proxy work fine. As a bonus I managed to remove the
somewhat odd FTP check in parse_remote_port() and instead converted it
to a better and more generic 'slash_removed' struct field. Checking the
->protocol field isn't right since when an FTP:// URL is sent over an
HTTP proxy, the protocol is HTTP but the URL was handled by the FTP code
and thus slash_removed is set TRUE for this case.
A shared library tests/libtest/.libs/libhostname.so is preloaded in NTLM
test cases to override the system implementation of gethostname(). It
makes it possible to test the NTLM authentication for an exact match, and
this way test the implementation of MD4 and DES.
If LD_PRELOAD doesn't work, a debug build will also work, as debug
builds are now made to prefer a specific environment variable and will
then return that content as host name instead of the actual one.
Kamil wrote the bulk of this, Daniel Stenberg polished it.