When doing a chunked-encoded POST with -d (CURLOPT_POSTFIELDS) and the
POST body was zero bytes long, libcurl would first send a zero-length
chunk and then the terminating one. This could confuse a receiver; with
this fix, only the terminating chunk is sent.
Test case 1333 is added to verify.
Bug: http://curl.haxx.se/mail/archive-2012-04/0060.html
Reported by: Arnaud Compan
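For illustration, a minimal application-side sketch of the case; the URL
is a placeholder and the custom "Transfer-Encoding: chunked" header is
one way to force chunked encoding on a POST:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* force chunked encoding on the request */
      struct curl_slist *hdrs =
        curl_slist_append(NULL, "Transfer-Encoding: chunked");

      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

      /* a zero-length POST body; with the fix only the terminating
         chunk ("0\r\n\r\n") goes on the wire */
      curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "");
      curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, 0L);

      curl_easy_perform(curl);
      curl_slist_free_all(hdrs);
      curl_easy_cleanup(curl);
    }
    return 0;
  }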
With FOLLOWLOCATION enabled, when a 3xx page was downloaded with a known
download size (like with a Content-Length header) but the subsequent URL
(transferred after the 3xx page) was chunked-encoded, the previous
"known download size" would linger and feed the progress meter incorrect
information, i.e. the former value kept being passed in. This could
easily make downloads appear way larger than expected and cause >100%
outputs with the curl command line tool.
Test case 599 was created and it was used to repeat the bug and then
verify the fix.
Bug: http://curl.haxx.se/bug/view.cgi?id=3510057
Reported by: Michael Wallner
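A minimal sketch of where the stale size showed up, using the
double-based progress callback available at the time; the URL is a
placeholder:

  #include <stdio.h>
  #include <curl/curl.h>

  static int on_progress(void *clientp, double dltotal, double dlnow,
                         double ultotal, double ulnow)
  {
    (void)clientp; (void)ultotal; (void)ulnow;
    /* before the fix, dltotal could still hold the Content-Length of
       the 3xx page after a redirect to a chunked-encoded resource */
    fprintf(stderr, "downloaded %.0f of %.0f bytes\n", dlnow, dltotal);
    return 0; /* non-zero aborts the transfer */
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/redirect-me");
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
      curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
      curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, on_progress);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }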
The proxy parser function strips trailing slashes off the proxy name,
which could result in a mistaken zero-length proxy name that subsequent
functions would treat as no proxy at all!
This is now detected and an error is returned. Verified by the new test
1329.
Reported by: Chandrakant Bagul
Bug: http://curl.haxx.se/mail/lib-2012-02/0000.html
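A minimal sketch of the failure mode, assuming a proxy string that
reduces to nothing once trailing slashes are stripped; the exact error
code returned is not stated in this entry:

  #include <curl/curl.h>

  int main(void)
  {
    CURLcode res = CURLE_OK;
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* stripping the trailing slash leaves a zero-length proxy name */
      curl_easy_setopt(curl, CURLOPT_PROXY, "/");
      /* previously this silently went without a proxy; now it fails */
      res = curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return (int)res;
  }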
We want to continue to the next URL to try even on failures returned
from libcurl. This makes -f with ranges still get subsequent URLs even
if occasional ones return errors. This was a regression, as it used to
work and broke in the 7.23.0 release.
Added test case 1328 to verify the fix.
Bug: http://curl.haxx.se/bug/view.cgi?id=3481223
Reported by: Juan Barreto
There's a new 'http-proxy' server for tests that runs on a separate port
and lets clients do HTTP CONNECT to other ports on the same host to
allow us to test HTTP "tunneling" properly.
Test cases now have a <proxy> section in <verify> to check that the
proxy protocol part matches correctly.
Test cases 80, 83, 95, 275, 503 and 1078 have been converted. Test 1316
was added.
When an HTTP connection is re-used for a subsequent request without a
proxy, it would always re-use the Host: header of the first request. As
host names are case insensitive, this could make curl send a host name
with different casing than what the particular request used.
Now it instead always uses the most recent host name, to always use the
desired casing.
Added test case 1318 to verify.
Bug: http://curl.haxx.se/mail/lib-2011-12/0314.html
Reported by: Alex Vinnik
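A sketch of the scenario on a re-used easy handle; example.com stands in
for the real host:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* first request opens the connection; Host: EXAMPLE.com is sent */
      curl_easy_setopt(curl, CURLOPT_URL, "http://EXAMPLE.com/first");
      curl_easy_perform(curl);

      /* the connection is re-used; with the fix the Host: header now
         says "example.com" instead of repeating the first casing */
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/second");
      curl_easy_perform(curl);

      curl_easy_cleanup(curl);
    }
    return 0;
  }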
When passing in multiple files to a single -F, the parser would get all
confused if one of the specified files had a custom type= assigned. Test
case 1315 was added to verify this functionality.
Reported by: Colin Hogben
"Active FTP hangs if server does not open data connection"
The server first sends a 150 and then when libcurl waits for the data
transfer, the server sends a 425.
By setting PROTOPT_NOURLQUERY in the protocol handler struct, the
protocol will get the "query part" of the URL cut off before the data is
handled by the protocol-specific code. This makes libcurl adhere to
RFC3986 section 2.2.
Test 1220 is added to verify a file:// URL with query-part.
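For example, a file:// URL with a query part now has the query stripped
before the file code sees the path; the path here is hypothetical:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* the file protocol code now sees only "/tmp/test.txt"; the
         "?format=raw" query part is cut off beforehand */
      curl_easy_setopt(curl, CURLOPT_URL, "file:///tmp/test.txt?format=raw");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }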
A regression between 7.22.0 and 7.23.0: downloading a file with the
flags -O and -J resulted in the content being written to stdout if and
only if there was no Content-Disposition header in the HTTP response. If
there was a C-D header with a filename attribute, the output was
correctly written.
Reported by: Dave Reisner
Bug: http://curl.haxx.se/mail/archive-2011-11/0030.html
591 -> FTP multi PORT and 425 on upload
592 -> FTP multi PORT and 421 on upload
593 -> FTP multi PORT upload, no data conn and no transient neg. reply
594 -> FTP multi PORT upload, no data conn and no positive prelim. reply
1206 -> FTP PORT and 425 on download
1207 -> FTP PORT and 421 on download
1208 -> FTP PORT download, no data conn and no transient negative reply
1209 -> FTP PORT download, no data conn and no positive preliminary reply
This test was created to verify Rene Bernhardt's patch, which makes sure
libcurl properly does _not_ deal with Negotiate if not asked to, even if
the proxy says it can serve it.
As commit 5850cc4808 clarifies, libcurl can deliver header lines that
are longer than CURL_MAX_WRITE_SIZE; only body data is limited to that
size. The curl tool had a check (when built debug-enabled) that did this
wrong, and the new test 1205 verifies that larger headers work.
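A sketch of a header callback on the receiving end; each header line
arrives in a single callback invocation and, unlike body data, may
exceed CURL_MAX_WRITE_SIZE:

  #include <stdio.h>
  #include <curl/curl.h>

  /* invoked once per header line; the line may be longer than
     CURL_MAX_WRITE_SIZE, which only caps body writes */
  static size_t on_header(char *buffer, size_t size, size_t nitems,
                          void *userdata)
  {
    (void)userdata;
    fwrite(buffer, size, nitems, stderr);
    return size * nitems;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, on_header);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }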
The fix is pretty much the one Nick Zitzmann provided, just edited to do
the right indent levels and with test case 1204 added to verify the fix.
Bug: http://curl.haxx.se/mail/lib-2011-10/0190.html
Reported by: Nick Zitzmann
Content-Disposition headers can provide file names containing
semicolons, which previously would be cut off at that point.
Added test cases 1311 and 1312 to verify -J.
Bug: http://curl.haxx.se/bug/view.cgi?id=3375603
Reported by: Peter Hjalmarsson
When libcurl has told the server that there's a POST or PUT coming (with
a content-length and all), it has to either deliver that amount of data
or close the connection before trying a second request.
Adds test cases 1129, 1130 and 1131.
The bug report is about use with 100-continue, but the change is more
generic.
Bug: http://curl.haxx.se/mail/lib-2011-06/0191.html
Reported by: Steven Parkes
When a time condition isn't met, so that no body is delivered to the
application even though a 2xx response is being read from the server, we
must close the connection so that a subsequent re-use of it doesn't get
completely tricked.
Added test 1128 to verify.
Added tests 1126 and 1127 to verify curl's behaviour when
If-Modified-Since is used and a 200 is returned.
The list of test cases in Makefile.am is now sorted numerically.
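As for the time condition behaviour above, a sketch of how an
application sets one up and checks whether it was met; the URL and
timestamp are placeholders:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      long unmet = 0;

      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/file");
      /* only deliver the body if it changed after the given time */
      curl_easy_setopt(curl, CURLOPT_TIMECONDITION,
                       (long)CURL_TIMECOND_IFMODSINCE);
      curl_easy_setopt(curl, CURLOPT_TIMEVALUE, 1325376000L); /* 2012-01-01 */
      curl_easy_perform(curl);

      /* non-zero means the condition failed and no body was delivered */
      curl_easy_getinfo(curl, CURLINFO_CONDITION_UNMET, &unmet);
      curl_easy_cleanup(curl);
    }
    return 0;
  }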
When TE: is inserted in the request, we must add a "Connection: TE" as
well to be HTTP 1.1 compliant. If a custom Connection: header is passed
in, we must use that and only append TE to it. Test case 1125 verifies
TE: + custom Connection:.
Transfer-Encoding differs from Content-Encoding in a few subtle ways,
but primarily it concerns the transfer only and not the content, so when
a transfer is discovered to be compressed we know we have to uncompress
it. Compressed transfers will only arrive in a response after we have
requested them with the appropriate TE: header.
Test cases 1122 and 1123 verify this.
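A sketch of asking for a compressed transfer encoding from an
application; with this option libcurl adds the TE: request header (plus
"Connection: TE") and uncompresses the arriving transfer:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* request a compressed *transfer* encoding, as opposed to
         CURLOPT_ACCEPT_ENCODING which concerns the content */
      curl_easy_setopt(curl, CURLOPT_TRANSFER_ENCODING, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }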
This test case is meant to verify that the logic in commit
60172a0446 actually works. This test failed for me before that
change and it works after it.
Add test 582 for uploading a file using sftp and the multi interface.
(Patch and test slightly tweaked by Daniel Stenberg)
Initially marked as disabled until it is fixed in the source.
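A minimal sketch of such an upload, assuming placeholder credentials,
host and file names:

  #include <stdio.h>
  #include <sys/select.h>
  #include <curl/curl.h>

  int main(void)
  {
    FILE *src = fopen("upload.txt", "rb");   /* local file to send */
    CURL *easy = curl_easy_init();
    CURLM *multi = curl_multi_init();
    int running = 1;

    curl_easy_setopt(easy, CURLOPT_URL,
                     "sftp://user:secret@example.com/~/upload.txt");
    curl_easy_setopt(easy, CURLOPT_UPLOAD, 1L);
    curl_easy_setopt(easy, CURLOPT_READDATA, src);
    curl_multi_add_handle(multi, easy);

    /* classic fdset/select driving loop for the multi interface */
    while(running) {
      struct timeval timeout = {1, 0};
      fd_set r, w, e;
      int maxfd = -1;
      FD_ZERO(&r); FD_ZERO(&w); FD_ZERO(&e);
      curl_multi_fdset(multi, &r, &w, &e, &maxfd);
      select(maxfd + 1, &r, &w, &e, &timeout);
      curl_multi_perform(multi, &running);
    }

    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    fclose(src);
    return 0;
  }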
The URL parser got a little stricter as it now considers a ? to be a
host name divider, so that slightly sloppier URLs work too. The problem
that made me do this change was the reported problem with a URL like:
www.example.com?email=name@example.com
This form of URL is not really a legal URL (due to the missing slash
after the host name) but is widely accepted by all major browsers, and
libcurl already accepted it; it was just the '@' letter that triggered
the problem here.
The side-effect of this change is that libcurl no longer accepts the ?
letter as part of the user name or password when given in the URL, which
it used to accept (and which is tested in test 191). RFC 3986 requires
that letter to be percent-encoded in those fields, since it is used as a
divider.
Bug: http://curl.haxx.se/bug/view.cgi?id=3090268
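A sketch of the now-accepted form; with the stricter parser the '?' ends
the host name even without the slash:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* despite the missing slash, this parses as host
         "www.example.com" with query "email=name@example.com" rather
         than as a user name containing '@' */
      curl_easy_setopt(curl, CURLOPT_URL,
                       "www.example.com?email=name@example.com");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }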
HTTP allows a server to send trailing headers after all the chunks have
been sent WITHOUT signalling their presence in the initial response
headers. The "Trailer:" header is only a SHOULD there, and since we need
to handle the situation even without that header, I made libcurl ignore
Trailer: completely.
Test case 1116 was added to verify this and to make sure we handle more
than one trailer header properly.
Reported by: Patrick McManus
Bug: http://curl.haxx.se/bug/view.cgi?id=3052450
The 66 bytes checked are those 38 bytes with the chunked encoding
headers added: 8+8+10+35+5 = 66.
The three-letter words become 8 bytes on the wire because they are sent
like: "3\r\none\r\n"
... and there's the trailing 5-byte write after the four lines, since
the final chunk sent is "0\r\n\r\n".
Dirk Manske reported a regression. When connecting with the multi
interface, there were situations where libcurl wouldn't store
connect time correctly as it used to (and is documented to) do.
Using his fine sample program we could repeat it, and I wrote up
test case 573 using that code. The problem does not easily show
itself using the local test suite though.
The fix, also as suggested by Dirk, is a bit on the ugly side as it
adds yet another call to Curl_verboseconnect() and sets the
TIMER_CONNECT time there. That situation is subject to closer
inspection in the future.
- SMTP falls back to RFC 821 HELO when EHLO fails (and SSL is not required).
- Use of the true local host name (i.e. via gethostname()) when available, as the default argument to SMTP HELO/EHLO.
- Test case 804 for HELO fallback.
libcurl no longer complains when downloading files over FTP using ASCII
mode and it turns out that the final size of the file is not the same as
the initial size the server reported. This is very common since servers
don't take the newline conversions into account.
The test suite now checks that each test file is present in the
tests/data/Makefile.am and outputs a notice message on the screen if
not. Each test file has to be included in that Makefile.am to get
included in release archives, and forgetting to add files there is a
common mistake. This is an attempt to make it harder to forget.
command is a special "hack" used by the drftpd server, but even though it is
a custom extension I've deemed it fine to add to libcurl since this server
seems to survive and people keep using it and want libcurl to support
it. The new libcurl option is named CURLOPT_FTP_USE_PRET, and it is also
usable from the curl tool with --ftp-pret. Using this option on a server
that doesn't support this command will make libcurl fail.
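Usage is a single boolean option; a sketch with a placeholder URL:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.txt");
      /* send PRET to the server before PASV/EPSV; servers without
         support for the command will make the transfer fail */
      curl_easy_setopt(curl, CURLOPT_FTP_USE_PRET, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }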
The SMTP code now escapes dot sequences in uploaded data. The test
server doesn't "decode" escaped dot-lines; instead, test cases must be
written to take them into account. Added test case 803 to verify
dot-escaping.
Fixed a setup order mistake in the code that detects and uses proxies
based on the environment variables. If the proxy was given as an
explicit option it worked, but due to the setup order mistake, proxies
would not be used properly for a few protocols when picked up from
'[protocol]_proxy'. Obviously this broke after 7.19.4. I now also added
test case 1106 that verifies this functionality.
(http://curl.haxx.se/bug/view.cgi?id=2913886)
A POST using a read callback, with Digest authentication and
"Transfer-Encoding: chunked" enforced, would cause the first request to
be wrongly sent and then basically hang until the server closed the
connection. I fixed the problem and added test case 565 to verify it.
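A sketch of the triggering setup, with a placeholder URL and credentials
(the server is expected to answer with a Digest challenge first):

  #include <string.h>
  #include <curl/curl.h>

  static const char payload[] = "field=value";

  struct readctx { const char *p; size_t left; };

  /* feed the request body to libcurl piece by piece */
  static size_t on_read(char *buf, size_t size, size_t nitems,
                        void *userdata)
  {
    struct readctx *ctx = userdata;
    size_t room = size * nitems;
    size_t n = ctx->left < room ? ctx->left : room;
    memcpy(buf, ctx->p, n);
    ctx->p += n;
    ctx->left -= n;
    return n;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      struct readctx ctx = { payload, sizeof(payload) - 1 };
      struct curl_slist *hdrs =
        curl_slist_append(NULL, "Transfer-Encoding: chunked");

      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/post");
      curl_easy_setopt(curl, CURLOPT_POST, 1L);
      curl_easy_setopt(curl, CURLOPT_READFUNCTION, on_read);
      curl_easy_setopt(curl, CURLOPT_READDATA, &ctx);
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST);
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");

      curl_easy_perform(curl);
      curl_slist_free_all(hdrs);
      curl_easy_cleanup(curl);
    }
    return 0;
  }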