the SingleRequest one to make pipelining better. It is a bit tricky to keep
things related to the actual request and things related to the actual
connection each in their proper place.
(http://curl.haxx.se/bug/view.cgi?id=1879375) which describes how libcurl
got lost in this scenario: a proxy tunnel (or HTTPS over a proxy), libcurl
asked to do any proxy authentication, and a proxy that replies with an auth
method (like NTLM) and then closes the connection after that initial
informational response.
libcurl would not properly re-initialize the connection to the proxy and
continue the auth negotiation as it is supposed to. It does now, however, as
it will detect if one or more authentication methods were available and asked
for, and will thus retry the connection and continue from there.
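For illustration, a minimal sketch of the scenario this fix concerns (the
proxy address and credentials are made up): tunnelling through an HTTP proxy
while letting libcurl negotiate whichever proxy-auth method the proxy offers:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:3128");
      /* tunnel through the proxy with CONNECT instead of plain proxying */
      curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
      /* "any" auth: libcurl picks from what the proxy offers (NTLM, Digest,
         Basic, ...), the negotiation that now survives the proxy closing
         the connection after its first 407 response */
      curl_easy_setopt(curl, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
      curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:secret");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }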
- I made the progress callback get called properly during proxy CONNECT.
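  A minimal sketch of hooking up the progress callback (the callback name and
  proxy details here are illustrative); with this fix it also fires while the
  CONNECT request to the proxy is being handled:

    #include <curl/curl.h>

    /* return non-zero from here to abort the transfer */
    static int on_progress(void *clientp, double dltotal, double dlnow,
                           double ultotal, double ulnow)
    {
      (void)clientp; (void)dltotal; (void)dlnow;
      (void)ultotal; (void)ulnow;
      return 0;
    }

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:3128");
        curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
        curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L); /* enable callback */
        curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, on_progress);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }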
(http://curl.haxx.se/bug/view.cgi?id=1871269) and we could fix his hang
problem that occurred when doing a large HTTP POST request with the
request body read from a callback.
libcurl to seek in a given input stream. This is particularly important when
doing upload resumes where there's already a huge part of the file present
remotely. Before, and still if this callback isn't used, libcurl will read
and throw away the entire file up to the point where the resuming
begins (which of course can be a slow operation depending on file size,
I/O bandwidth and more). This new callback will also be preferred over
CURLOPT_IOCTLFUNCTION for seeking back in a stream when doing multi-stage
HTTP auth with POST/PUT.
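A sketch of what such a seek callback can look like, assuming a plain FILE *
as the input stream (the names are illustrative; the callback returns zero
for success and non-zero for failure, and later libcurl versions added named
CURL_SEEKFUNC_* constants for these values):

  #include <stdio.h>
  #include <curl/curl.h>

  static int my_seek(void *userp, curl_off_t offset, int origin)
  {
    FILE *f = (FILE *)userp;
    /* origin is SEEK_SET etc, just like for fseek() */
    return fseek(f, (long)offset, origin) ? 1 /* fail */ : 0 /* ok */;
  }

  /* ... with 'f' being the same FILE * given as CURLOPT_READDATA: */
  curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, my_seek);
  curl_easy_setopt(curl, CURLOPT_SEEKDATA, f);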
(http://curl.haxx.se/bug/view.cgi?id=1849764) with an included fix. He
identified a problem with re-used connections that had previously sent
Expect: 100-continue, where in some situations a subsequent POST (that didn't
use Expect:) still had the internal flag set for its use. David's fix (which
sets the flag unconditionally in every single request) is fine and is now
used!
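As background to the flag in question: the Expect: 100-continue header is
also under application control, e.g. it can be suppressed for a POST by
passing an empty Expect header (this is not the fix itself, just how the
header is commonly toggled):

  struct curl_slist *hdrs = curl_slist_append(NULL, "Expect:");
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
  /* ... perform the transfer, then curl_slist_free_all(hdrs); */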
callback) over a proxy when NTLM is used as auth with the proxy. The bug
also concerned Digest and only occurred when a callback was used. Spacen
worked with us to provide a useful patch. I added test cases 547 and 548 to
verify two variations of POST over a proxy with NTLM.
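Roughly what those tests exercise, as a hedged sketch with made-up proxy
details (this is not the actual test code): a POST whose body is fed from a
read callback, sent over a proxy demanding NTLM:

  #include <string.h>
  #include <curl/curl.h>

  static const char *body = "field=value";

  static size_t read_cb(char *ptr, size_t size, size_t nmemb, void *userp)
  {
    size_t *off = (size_t *)userp;
    size_t left = strlen(body) - *off;
    size_t room = size * nmemb;
    size_t n = left < room ? left : room;
    memcpy(ptr, body + *off, n);
    *off += n;
    return n;
  }

  /* ... setup; NTLM makes libcurl send the request more than once, which
     is why rewinding the body (see the seek callback above) matters: */
  size_t off = 0;
  curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:3128");
  curl_easy_setopt(curl, CURLOPT_PROXYAUTH, CURLAUTH_NTLM);
  curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:secret");
  curl_easy_setopt(curl, CURLOPT_POST, 1L);
  curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
  curl_easy_setopt(curl, CURLOPT_READDATA, &off);
  curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)strlen(body));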
the appending of the "type=" part on FTP URLs when they are passed to an
HTTP proxy. Some proxies just don't like that appending (which was done
unconditionally in 7.17.1), while other proxies treat binary/ascii transfers
better when the appending is done!
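Presumably this is controlled with the CURLOPT_PROXY_TRANSFER_MODE option
(the option name is my assumption; the line naming it is not part of this
excerpt). A sketch of asking for an ascii FTP transfer through an HTTP proxy
with the appending enabled:

  curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/README");
  curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:3128");
  curl_easy_setopt(curl, CURLOPT_TRANSFERTEXT, 1L);  /* ascii transfer */
  /* 1 = append ";type=a" or ";type=i" to the URL sent to the proxy */
  curl_easy_setopt(curl, CURLOPT_PROXY_TRANSFER_MODE, 1L);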
is inited at the start of the DO action. I removed the Curl_transfer_keeper
struct completely, and I had to move out a few struct members (that had to
be set before DO or used after DONE) to the UrlState struct. The SingleRequest
struct is accessed with SessionHandle->req.
One of the biggest reasons for doing this was the bunch of duplicated struct
members in HandleData and Curl_transfer_keeper, since it was really messy to
keep track of two variables with the same name and basically the same
purpose!
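Conceptually the layout now looks something like this (a rough sketch only,
not the actual libcurl source; the member comments just restate the split
described above):

  struct SingleRequest {
    /* request-scoped members, re-initialized at the start of DO */
  };

  struct UrlState {
    /* members that must be set before DO or survive past DONE */
  };

  struct SessionHandle {
    struct UrlState state;
    struct SingleRequest req;  /* accessed as SessionHandle->req */
  };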
callback was used, as it could wrongly pass on a bad size for the outgoing
HTTP header. The bad size would be a very large value, since it was the
content of a size_t variable that had wrapped around below zero. This
happened when the whole HTTP request failed to get sent
in one single send. http://curl.haxx.se/mail/lib-2007-11/0165.html
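The wrap itself is the usual unsigned-arithmetic trap; a tiny standalone
illustration (not curl code, just the arithmetic):

  #include <stdio.h>

  int main(void)
  {
    size_t header_len = 100;
    size_t sent = 150;  /* more counted as sent than the header held */
    size_t remaining = header_len - sent; /* unsigned: wraps to huge value */
    printf("%lu\n", (unsigned long)remaining);
    return 0;
  }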
https://bugzilla.novell.com/show_bug.cgi?id=332917 about a HTTP redirect to
FTP that caused memory havoc. His work together with my efforts created two
fixes:
#1 - FTP::file was moved to struct ftp_conn, because it has to be dealt with
at connection cleanup, at which time the struct HandleData could be
used by another connection.
Also, the unused char *urlpath member is removed from struct FTP.
#2 - provide a Curl_reset_reqproto() function that frees
data->reqdata.proto.* on connection setup if needed (that is if the
SessionHandle was used by a different connection).
a response that was larger than 16KB is now improved slightly, so that the
16KB restriction now applies to the headers only, and it should be a rare
situation where the response headers exceed 16KB. Thus, I consider #47 fixed,
and the header limitation is now known as known bug #48.
passed to it with curl_easy_setopt()! Previously it had always just referred
to the data, forcing the user to keep the data around until libcurl was done
with it. That is now history and libcurl will instead clone the given
strings and keep private copies.
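The practical effect, in a sketch: the string passed in no longer has to
outlive the call (function and buffer names are illustrative):

  #include <string.h>
  #include <curl/curl.h>

  static void set_url(CURL *curl)
  {
    char url[64];
    strcpy(url, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_URL, url);
    /* 'url' dies here; libcurl now holds its own copy, whereas before
       this change the handle kept a pointer into this (soon dead) buffer */
  }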
NTLM, and he provided test code and a test server and we worked out a bug
fix. We failed to count sent body data at times, which then caused internal
confusion when libcurl tried to send the rest of the data in order to
keep the same connection alive.
(and then I did some minor reformatting of code in lib/http.c)
the multi interface and connection re-use that could make a
curl_multi_remove_handle() ruin a pointer in another handle.
The second problem was less of an actual problem and more of a minor quirk:
the re-use of connections wasn't properly checking whether the connection was
marked for closure.
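For context, a minimal sketch of the multi-interface pattern the first fix
concerns: two easy handles on one multi handle, where removing one must not
corrupt state kept by the other (a real program would wait with
curl_multi_fdset()/select() between perform calls instead of spinning):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *a = curl_easy_init();
    CURL *b = curl_easy_init();
    CURLM *m = curl_multi_init();
    int running = 0;

    curl_easy_setopt(a, CURLOPT_URL, "http://example.com/1");
    curl_easy_setopt(b, CURLOPT_URL, "http://example.com/2");
    curl_multi_add_handle(m, a);
    curl_multi_add_handle(m, b);

    while(curl_multi_perform(m, &running) == CURLM_CALL_MULTI_PERFORM)
      ;

    curl_multi_remove_handle(m, a);  /* must leave 'b' fully intact */
    curl_easy_cleanup(a);
    curl_multi_remove_handle(m, b);
    curl_easy_cleanup(b);
    curl_multi_cleanup(m);
    return 0;
  }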