Since curl 7.55.0, NetworkManager almost always failed its connectivity
check with a timeout. I bisected this to 5113ad04 (http-proxy: do the HTTP
CONNECT process entirely non-blocking).
This patch replaces !Curl_connect_complete with Curl_connect_ongoing,
which returns false if the CONNECT state was left uninitialized and lets
the connection continue.
Closes #1803
Fixes #1804
Also-fixed-by: Gergely Nagy
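For reference, a hedged sketch of the new predicate (modeled on
lib/http_proxy.c; struct and field names abbreviated):

    /* Returns true only while a CONNECT handshake is actually in
       progress; a NULL (uninitialized) state no longer blocks the
       connection from continuing. */
    bool Curl_connect_ongoing(struct connectdata *conn)
    {
      return conn->connect_state != NULL &&
             conn->connect_state->tunnel_state != TUNNEL_COMPLETE;
    }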
Add a new type of callback to Curl_handler which performs checks on
the connection. Alter RTSP so that it uses this callback to do its
own check on connection health.
... to enable sending "OPTIONS *", which wasn't possible previously.
This option currently only works for HTTP.
Added test cases 1298 and 1299 to verify.
Fixes #1280
Closes #1462
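Assuming this refers to the --request-target option introduced with this
change, a usage sketch (hostname illustrative):

    # send "OPTIONS *" to ask about server-wide capabilities
    curl --request-target "*" -X OPTIONS http://example.com/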
... with a strlen() if no size was set, and do this in the pretransfer
function so that the info is set early. Otherwise, the default strlen()
done on the POSTFIELDS data never sets state.infilesize.
Reported-by: Vincas Razma
Bug: #1294
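For illustration, the two ways of sizing a POST via the public API (URL
and data illustrative; the explicit-size variant is unaffected by this
change):

    #include <curl/curl.h>

    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* no CURLOPT_POSTFIELDSIZE set: libcurl strlen()s the data,
         which now happens in pretransfer so infilesize is set early */
      curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "name=daniel");
      /* alternatively, state the size explicitly:
         curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, 11L); */
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }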
... since the total amount is low, this is faster, easier, and reduces
memory overhead.
Also, Curl_expire_done() can now mark an expire timeout as done so that
it never times out.
Closes #1472
If we use FTPS over CONNECT, the TLS handshake for the FTPS control
connection needs to be initiated in the SENDPROTOCONNECT state, not
the WAITPROXYCONNECT state. Otherwise, if the TLS handshake completed
without blocking, the information about the completed TLS handshake
would be saved to a wrong flag. Consequently, the TLS handshake would
be initiated in the SENDPROTOCONNECT state once again on the same
connection, resulting in a failure of the TLS handshake. I was able to
observe the failure with the NSS backend if curl ran through valgrind.
Note that this commit partially reverts curl-7_21_6-52-ge34131d.
When using basic-auth, connections and proxy connections
can be re-used with different Authorization headers since
it does not authenticate the connection (like NTLM does).
For instance, the below command should re-use the proxy
connection, but it currently doesn't:
curl -v -U alice:a -x http://localhost:8181 http://localhost/
--next -U bob:b -x http://localhost:8181 http://localhost/
This is a regression since the refactoring of ConnectionExists() done
as part of cb4e2be7c6.
Fix the above by removing the username and password compare when
re-using a proxy connection in proxy_info_matches().
However, this fix brings back another bug that would make curl re-send
the old Proxy-Authorization header of the previous proxy basic-auth
connection because it wasn't cleared.
For instance, in the below command the second request should
fail if the proxy requires authentication, but would succeed
after the above fix (and before aforementioned commit):
curl -v -U alice:a -x http://localhost:8181 http://localhost/
--next -x http://localhost:8181 http://localhost/
Fix this by clearing conn->allocptr.proxyuserpwd after use
unconditionally, same as we do for conn->allocptr.userpwd.
Also fix test 540 to not expect the digest auth header to be re-sent
when the connection is reused.
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Closes https://github.com/curl/curl/pull/1350
This flag is meant for the current request based on authentication
state; once the request is done we can clear the flag.
Also change auth.multi to auth.multipass for better readability.
Fixes https://github.com/curl/curl/issues/1095
Closes https://github.com/curl/curl/pull/1326
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Reported-by: Michael Kaufmann
This fixes an assertion error which occurs when a redirect is done with
a 0-length body via HTTP/2, the easy handle is reused, and a new
connection is established due to a hostname change:
curl: http2.c:1572: ssize_t http2_recv(struct connectdata *,
int, char *, size_t, CURLcode *):
Assertion `httpc->drain_total >= data->state.drain' failed.
To fix this bug, ensure that http2_handle_stream is called.
Fixes #1286
Closes #1302
- While negotiating auth during PUT/POST, if a user-specified
Content-Length header is set, send 'Content-Length: 0'.
This is what we do already in HTTPREQ_POST_FORM and what we did in the
HTTPREQ_POST case (regression since afd288b).
Prior to this change no Content-Length header would be sent in such a
case.
Bug: https://curl.haxx.se/mail/lib-2017-02/0006.html
Reported-by: Dominik Hölzl
Closes https://github.com/curl/curl/pull/1242
Replace use of the fixed macro BUFSIZE to define the size of the
receive buffer. Repurpose CURLOPT_BUFFERSIZE to also allow enlarging
the receive buffer. Upon setting, resize the buffer if the requested
size is larger than the current default, up to MAX_BUFSIZE (512KB).
This can benefit protocols like SFTP.
Closes #1222
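A usage sketch with the public API (URL and size illustrative; sizes
above the old default are honored after this change, up to MAX_BUFSIZE):

    #include <curl/curl.h>

    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/file.bin");
      /* request a 256KB receive buffer */
      curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 256L * 1024L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }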
* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications as opposed to HTTP proxies that receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well.
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent from their --foo counterparts,
except --proxy-crlfile which defaults to --crlfile and --proxy-capath
which defaults to --capath.
Curl now also supports %{proxy_ssl_verify_result} --write-out variable,
similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
... to make it less likely that we forget that the function actually
does case insensitive compares. Also replaced several invocations of
the function with a plain strcmp when case sensitivity is not an issue
(like comparing with "-").
Previously it only held references to them, which was reckless as the
thread lock was released so the cookies could get modified by other
handles that share the same cookie jar over the share interface.
CVE-2016-8623
Bug: https://curl.haxx.se/docs/adv_20161102I.html
Reported-by: Cure53
Add the new option CURLOPT_KEEP_SENDING_ON_ERROR to control whether
sending the request body shall be completed when the server responds
early with an error status code.
This is suitable for manual NTLM authentication.
Reviewed-by: Jay Satiro
Closes https://github.com/curl/curl/pull/904
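A minimal usage sketch (public API; URL and body illustrative):

    #include <curl/curl.h>

    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "data");
      /* keep sending the request body even if the server responds
         early with an error status, e.g. during manual NTLM auth */
      curl_easy_setopt(curl, CURLOPT_KEEP_SENDING_ON_ERROR, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }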
With HTTP/2, each transfer is made in an individual logical stream over
the connection, so most errors that previously caused the connection to
get force-closed now instead just kill the stream and not the
connection.
Fixes #941
Hooked up the HTTP authentication layer to query the new 'is mechanism
supported' functions when deciding what mechanism to use.
As per commit 00417fd66c existing functionality is maintained for now.
- the expression of an 'if' was always true
- a 'while' contained a condition that was always true
- use 'if(k->exp100 > EXP100_SEND_DATA)' instead of 'if(k->exp100)'
- fixed a typo
Closes #889
- Change the parser to not require a minor version for HTTP/2.
HTTP/2 connection reuse broke when we changed from HTTP/2.0 to HTTP/2
in 8243a95 because the parser still expected a minor version.
Bug: https://github.com/curl/curl/issues/855
Reported-by: Andrew Robbins, Frank Gevaerts
Only protocols that actually have a protocol registered for ALPN and NPN
should try to get that negotiated in the TLS handshake. That is only
HTTPS (well, http/1.1 and http/2) right now. Previously ALPN and NPN
would wrongly be used in all handshakes if libcurl was built with it
enabled.
Reported-by: Jay Satiro
Fixes #789
curl_printf.h defines printf to curl_mprintf, etc. This can cause
problems with external headers which may use
__attribute__((format(printf, ...))) markers etc.
To avoid having them cause problems with system includes, we include
curl_printf.h after any system headers. That way the last three headers
are always kept in this order:
curl_printf.h
curl_memory.h
memdebug.h
None of them include system headers, they all do funny #defines.
Reported-by: David Benjamin
Fixes #743
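As an illustration, the tail of a source file's include section then
looks like this:

    /* ... system and dependency headers above ... */

    /* the last three includes, in this order: */
    #include "curl_printf.h"
    #include "curl_memory.h"
    #include "memdebug.h"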
Previously, connections were closed immediately before the user had a
chance to extract the socket when the proxy required Negotiate
authentication.
This regression was brought in with the security fix in commit
79b9d5f1a4.
Closes #655
Supports HTTP/2 over clear TCP
- Optimize switching to HTTP/2 by removing calls to init and setup
before switching. Switching will eventually call setup and setup calls
init.
- Supports a new way to "force" the use of HTTP/2 over clean TCP
- Add the command line parameter "--http2-prior-knowledge" to the curl
command line tool.
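Usage sketches, command line and the (assumed equivalent) library
option:

    curl --http2-prior-knowledge http://example.com/

    /* library equivalent */
    curl_easy_setopt(curl, CURLOPT_HTTP_VERSION,
                     CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE);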
Renamed the header and source files for this module as they are HTTP
specific and, as such, they should use the same naming convention as
other HTTP authentication source files do - this reverts commit
260ee6b7bf.
Note: We could also rename curl_ntlm_wb.[c|h], however, the Winbind
code needs separating from the HTTP protocol and migrating into the
vauth directory, thus adding support for Winbind to the SASL based
protocols such as IMAP, POP3 and SMTP.
At one point during the development of HTTP/2, the commit 133cdd29ea
introduced automatic decompression of Content-Encoding as that was what
the spec said then. Now however, HTTP/2 should work the same way as
HTTP/1 in this regard.
Reported-by: Kazuho Oku
Closes #661
The nghttp2 callback deals with the TLS layer and therefore the header
does not need to be broken into chunks.
Bug: https://github.com/curl/curl/issues/659
Reported-by: Kazuho Oku
This commit adds trailer support in HTTP/2. In HTTP/1.1, chunked
encoding must be used to send trailer fields. HTTP/2 deprecated any
transfer-encoding, including chunked, but trailer fields are now
always available.
Since trailer fields are relatively rare these days (though gRPC uses
them extensively), allocating a buffer for trailer fields is done when
we detect that a HEADERS frame containing trailer fields has started.
We use the Curl_add_buffer_* functions to buffer all trailers, just
like we do for regular header fields, and then deliver them when the
stream is closed. We have to be careful here so that all data is
delivered to the upper layer before sending trailers to the
application.
We could deliver trailer fields one by one using the NGHTTP2_ERR_PAUSE
mechanism, but the current method is far simpler.
Another possibility is to use chunked encoding internally for HTTP/2
traffic. I have not tested it, but it could add more overhead.
Closes #564
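Since trailers are delivered through the same path as response headers,
an application can observe them with the ordinary header callback (a
sketch; URL illustrative):

    #include <stdio.h>
    #include <curl/curl.h>

    static size_t on_header(char *buf, size_t size, size_t nitems,
                            void *ud)
    {
      (void)ud;
      /* trailer fields arrive here after the response body */
      fwrite(buf, size, nitems, stderr);
      return size * nitems;
    }

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, on_header);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }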
The push headers are freed after the push callback has been invoked,
meaning this code should only free the headers if the callback was never
invoked and thus the headers weren't freed at that time.
Reported-by: Davey Shafik
They tend to never get updated, so they're frequently inaccurate, and
we never go back to revisit them. We document issues to work on
properly in KNOWN_BUGS and TODO instead.
... and assign it from the set.fread_func_set pointer in the
Curl_init_CONNECT function. This A) avoids having code that assigns
fields in the 'set' struct (which we always knew was bad) and, more
importantly, B) makes it impossible to accidentally leave the wrong
value in place when the handle is re-used etc.
Introducing a state-init functionality in multi.c, so that we can set a
specific function to get called when we enter a state. The
Curl_init_CONNECT is thus called when switching to the CONNECT state.
Bug: https://github.com/bagder/curl/issues/346
Closes #346
Return 0 instead of NGHTTP2_ERR_CALLBACK_FAILURE if we can't locate the
SessionHandle. Apparently mod_h2 will sometimes send a frame for a
stream_id we're finished with.
Use nghttp2_session_get_stream_user_data and
nghttp2_session_set_stream_user_data to identify SessionHandles instead
of a hash.
Closes #372
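A sketch of the nghttp2 calls involved (public nghttp2 API; surrounding
code simplified):

    /* when the stream is created, attach the easy handle 'data' to it */
    nghttp2_session_set_stream_user_data(h2, stream_id, data);

    /* in a frame callback, recover it; NULL means we no longer track
       this stream, so ignore the frame instead of failing */
    struct SessionHandle *data =
      nghttp2_session_get_stream_user_data(h2, frame->hd.stream_id);
    if(!data)
      return 0; /* not NGHTTP2_ERR_CALLBACK_FAILURE */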
Otherwise it would never be called for an HTTP/2 connection, which has
its own disconnect handler.
I spotted this while debugging <https://bugzilla.redhat.com/1248389>
where the http_disconnect() handler was called on an FTP session handle
causing 'dnf' to crash. conn->data->req.protop of type (struct FTP *)
was reinterpreted as type (struct HTTP *) which resulted in SIGSEGV in
Curl_add_buffer_free() after printing the "Connection cache is full,
closing the oldest one." message.
A previously working version of libcurl started to crash after it was
recompiled with HTTP/2 support, even though the HTTP/2 protocol was not
actually used. This commit makes it work again, although I suspect the
root cause (reinterpreting session handle data of incompatible protocol)
still has to be fixed. Otherwise the same will happen when mixing FTP
and HTTP/2 connections and exceeding the connection cache limit.
Reported-by: Tomas Tomecek
Bug: https://bugzilla.redhat.com/1248389
Currently, libcurl rejects responses with "Content-Encoding: compress"
when CURLOPT_ACCEPT_ENCODING is set to "". I think that libcurl should
treat the Content-Encoding "compress" the same as other
Content-Encodings that it does not support, e.g. "bzip2". That means
just ignoring it.
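For reference, the configuration in question (public API); after this
change a "compress"-encoded response is passed through rather than
rejected:

    /* "" asks libcurl to advertise all encodings it supports */
    curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "");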
With many easy handles using the same connection for multiplexing, it is
important we store and keep the transfer-oriented stuff in the
SessionHandle so that callbacks and callback data work fine even when
many easy handles share the same physical connection.
Error: CLANG_WARNING:
lib/http.c:173:16: warning: Value stored to 'http' during its initialization is never read
Error: COMPILER_WARNING:
lib/http.c: scope_hint: In function ‘http_disconnect’
lib/http.c:173:16: warning: unused variable ‘http’ [-Wunused-variable]
to allow code to act differently on the situation.
Also added some more info messages for the connection re-use function
to make it clearer when connections are not re-used.
... from the connection struct. The stream one being the 'struct HTTP'
which is kept in the SessionHandle struct (easy handle).
Look up streams for incoming frames in the stream hash; hashing is
based on the stream id, and we get the SessionHandle for the incoming
stream that way.
All the existing Curl_bundle* functions were only ever used from within
the conncache.c file, so I moved them over and made them static (and
removed the Curl_ prefix).
When doing HTTP requests authenticated with Negotiate, the entire
connection may become authenticated and not just the specific HTTP
request, which is otherwise how HTTP works, as Negotiate can basically
use NTLM under the hood. curl was not adhering to this fact but would
assume that such requests would also be authenticated per request.
CVE-2015-3148
Bug: http://curl.haxx.se/docs/adv_20150422B.html
Reported-by: Isaac Boukris
This header file must be included after all header files except
memdebug.h, as it does similar memory function redefinitions and can be
similarly affected by conflicting definitions in system or dependent
library headers.
Since we just started making use of free(NULL) in order to simplify
code, this change takes it a step further and:
- converts lots of Curl_safefree() calls to good old free()
- makes Curl_safefree() not check the pointer before free()
The (new) rule of thumb is: if you really want a function call that
frees a pointer and then assigns it to NULL, then use Curl_safefree().
But we will prefer just using free() from now on.
The function "free" is documented in the way that no action shall occur for
a passed null pointer. It is therefore not needed that a function caller
repeats a corresponding check.
http://stackoverflow.com/questions/18775608/free-a-null-pointer-anyway-or-check-first
This issue was fixed by using the software Coccinelle 1.0.0-rc24.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
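A hedged sketch of the two idioms after this change (the macro body
paraphrases libcurl's, not a verbatim quote):

    /* plain free(): no NULL check needed, free(NULL) is a no-op */
    free(ptr);

    /* Curl_safefree(): for when the pointer must also be reset */
    #define Curl_safefree(ptr) \
      do { free(ptr); (ptr) = NULL; } while(0)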
...after the method line:
"Since the Host field-value is critical information for handling a
request, a user agent SHOULD generate Host as the first header field
following the request-line." / RFC 7230 section 5.4
Additionally, this will also make libcurl ignore multiple specified
custom Host: headers and only use the first one. Test 1121 has been
updated accordingly.
Bug: http://curl.haxx.se/bug/view.cgi?id=1491
Reported-by: Rainer Canavan
SSLeay was the name of the library that was subsequently turned into
OpenSSL many moons ago (1999). curl has not worked with the old SSLeay
library for years. This is now reflected by only using USE_OPENSSL in
code that depends on OpenSSL.
Sending NTLM/Negotiate header again after successful authentication
breaks the connection with certain Proxies and request types (POST to MS
Forefront).
This commit disables pipelining for HTTP/2 and upgraded connections.
For HTTP/2, we do not support multiplexing. In general, requests cannot
be pipelined in an upgraded connection, since it is now a different
protocol.
Previously, if HTTP/2 traffic was appended to the HTTP Upgrade response
header (thus they were in the same buffer), the trailing HTTP/2 traffic
was not processed and was lost. The appended data is most likely a
SETTINGS frame. If it is lost, the nghttp2 library complains that the
server does not obey the HTTP/2 protocol, issues a GOAWAY frame, and
curl eventually drops the connection.
This commit fixes this problem; trailing data is now processed.
To provide consistent behaviour between the various HTTP authentication
functions use CURLcode based error codes for Curl_input_digest()
especially as the calling code doesn't use the specific error code just
that it failed.
HTTP 1.1 is clearly specified to only allow three digit response codes,
and libcurl used sscanf("%3d") for that purpose. This made libcurl
support smaller numbers but not larger. It does now, but we will not
make any specific promises nor document this further since it is going
outside of what HTTP is.
Bug: http://curl.haxx.se/bug/view.cgi?id=1441
Reported-by: Balaji
... for the local variable name in functions holding the return
code. Using the same name universally makes code easier to read and
follow.
Also, unify code for checking for CURLcode errors with:
if(result) or if(!result)
instead of
if(result == CURLE_OK), if(CURLE_OK == result) or if(result != CURLE_OK)
Historically the default "unknown" value for progress.size_dl and
progress.size_ul has been zero, since these values are initialized
implicitly by the calloc that allocates the curl handle that these
variables are a part of. Users of curl that install progress
callbacks may expect these values to always be >= 0.
Currently it is possible for progress.size_dl and progress.size_ul
to be set to a value of -1, if Curl_pgrsSetDownloadSize() or
Curl_pgrsSetUploadSize() are passed a "size" of -1 (which a few
places currently do, and a following patch will add more). So
let's update Curl_pgrsSetDownloadSize() and Curl_pgrsSetUploadSize()
so they make sure that these variables always contain a value that
is >= 0.
Updates test579 and test599.
Signed-off-by: Brandon Casey <drafnel@gmail.com>
... to handle "*/[total]". Also, removed the strange hack that made
CURLOPT_FAILONERROR on a 416 response after a *RESUME_FROM return
CURLE_OK.
Reported-by: Dimitrios Siganos
Bug: http://curl.haxx.se/mail/lib-2014-06/0221.html
"Expect: 100-continue", which was once deprecated in HTTP/2, is now
resurrected in HTTP/2 draft 14. This change adds its support to
HTTP/2 code. This change also includes stricter header field
checking.
1 - fixes the warnings when built without http2 support
2 - adds CURLE_HTTP2, a new error code for errors detected by nghttp2
basically when they are about http2 specific things.
- Replace CURLAUTH_GSSNEGOTIATE with CURLAUTH_NEGOTIATE
- CURL_VERSION_GSSNEGOTIATE is deprecated which
is served by CURL_VERSION_SSPI, CURL_VERSION_GSSAPI and
CURL_VERSION_SPNEGO now.
- Remove display of feature 'GSS-Negotiate'
It's wrong to assume that we can send a single SPNEGO packet which will
complete the authentication. It's a *negotiation* — the clue is in the
name. So make sure we handle responses from the server.
Curl_input_negotiate() will already handle bailing out if it thinks the
state is GSS_S_COMPLETE (or SEC_E_OK on Windows) and the server keeps
talking to us, so we should avoid endless loops that way.
Make all code use connclose() and connkeep() when changing the "close
state" for a connection. These two macros take a string argument with an
explanation, and debug builds of curl will include that in the debug
output. Helps tracking connection re-use/close issues.
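A hedged sketch of the macro shape (paraphrased, not verbatim; debug
builds log the reason string):

    #ifdef DEBUGBUILD
    #define connclose(x,y) \
      do { (x)->bits.close = TRUE; \
           infof((x)->data, "Marked for [closure]: %s\n", y); } while(0)
    #else
    #define connclose(x,y) (x)->bits.close = TRUE
    #endif
    /* connkeep(x,y) is the same with close = FALSE */

    /* typical call site: */
    connclose(conn, "HTTP/1.0 without Content-Length");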
Commit 517b06d657 (in 7.36.0) that brought the CREDSPERREQUEST flag
only set it for HTTPS, making HTTP less good at doing connection re-use
than it should be. Now set it for HTTP as well.
Simple test case
"curl -v -u foo:bar localhost --next -u bar:foo localhos"
Bug: http://curl.haxx.se/mail/lib-2014-05/0127.html
Reported-by: Kamil Dudka
set.infilesize in this case was modified in several places, which could
lead to repeated requests using the same handle having unintended/wrong
consequences based on what the previous request did!
This makes the findprotocol() function work as intended so that libcurl
can properly be restricted to not support HTTP while still supporting
HTTPS - since the HTTPS handler previously set both the HTTP and HTTPS
bits in the protocol field.
This fixes --proto and --proto-redir for most SSL protocols.
This is done by adding a few new convenience defines that groups HTTP
and HTTPS, FTP and FTPS etc that should then be used when the code wants
to check for both protocols at once. PROTO_FAMILY_[protocol] style.
Bug: https://github.com/bagder/curl/pull/97
Reported-by: drizzt
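A hedged sketch of what the family defines look like (per the
PROTO_FAMILY_[protocol] pattern named above):

    /* group the plain and TLS-secured variants of a protocol */
    #define PROTO_FAMILY_HTTP (CURLPROTO_HTTP|CURLPROTO_HTTPS)
    #define PROTO_FAMILY_FTP  (CURLPROTO_FTP|CURLPROTO_FTPS)

    /* a check then covers both protocols at once */
    if(conn->handler->protocol & PROTO_FAMILY_HTTP) { /* ... */ }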
Updated the docs to clarify and the code accordingly, with test 1528 to
verify:
When CURLHEADER_SEPARATE is set and libcurl is asked to send a request
to a proxy but it isn't CONNECT, then _both_ header lists
(CURLOPT_HTTPHEADER and CURLOPT_PROXYHEADER) will be used since the
single request is then made for both the proxy and the server.
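An illustrative setup of the two lists (public API; header values are
examples):

    struct curl_slist *hdrs = curl_slist_append(NULL, "X-For: server");
    struct curl_slist *phdrs = curl_slist_append(NULL, "X-For: proxy");

    curl_easy_setopt(curl, CURLOPT_HEADEROPT, (long)CURLHEADER_SEPARATE);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);   /* for the server */
    curl_easy_setopt(curl, CURLOPT_PROXYHEADER, phdrs); /* for the proxy */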
In addition to FTP, other connection based protocols such as IMAP, POP3,
SMTP, SCP, SFTP and LDAP require a new connection when different log-in
credentials are specified. Fixed the detection logic to include these
other protocols.
Bug: http://curl.haxx.se/docs/adv_20140326A.html
Because the socket is non-blocking, PolarSSL needs a call to getsock to
get the action to perform in a multi environment.
In some cases it might happen that we have not yet received all the
data needed to perform the handshake. ssl_handshake() returns
POLARSSL_ERR_NET_WANT_READ and the state is updated, but because
getsock lacked the proper #define macro, the library never asked to
select the socket for input, so the socket would never be woken up
when the last data was available. This led to a timeout.
This patch enables HTTP POST/PUT in HTTP2.
We disabled the Expect header field and chunked transfer encoding
since HTTP2 forbids them.
In HTTP1, curl sends small upload data together with the request
headers, but HTTP2 requires that upload data be sent in separate DATA
frames. So we added some conditionals to achieve this.
A server might respond with a content-encoding header and a response
that was encoded accordingly in HTTP-draft-09/2.0 mode, even if the
client did not send an accept-encoding header earlier. The server might
not send a content-encoding header if the identity encoding was used to
encode the response.
See:
http://tools.ietf.org/html/draft-ietf-httpbis-http2-09#section-9.3
This patch chooses a different approach to integrating HTTP2 into the
HTTP curl stack. The idea is that we insert an HTTP2 layer between the
HTTP code and the
socket(TLS) layer. When HTTP2 is initialized (either in NPN or Upgrade),
we replace the Curl_recv/Curl_send callbacks with HTTP2's, but keep the
original callbacks in http_conn struct. When sending serialized data by
nghttp2, we use original Curl_send callback. Likewise, when reading data
from network, we use original Curl_recv callback. In this way we can
treat both TLS and non-TLS connections.
With this patch, one can transfer contents from https://twitter.com and
from nghttp2 test server in plain HTTP as well.
The code still has rough edges. The notable one is that I could not
figure out how to call nghttp2_session_send() when the underlying
socket is writable.
Check the NPN result before preparing an HTTP request and switch into
HTTP/2.0 mode if necessary. This is a work in progress, the actual code
to prepare and send the request using nghttp2 is still missing from
Curl_http2_send_request().
This prevents sending a `Content-Length: -1` header, e.g. this occurred
with the following combination:
* standard HTTP POST (no chunked encoding),
* user-defined read function set,
* `CURLOPT_POSTFIELDSIZE(_LARGE)` NOT set.
With this fix it now behaves like HTTP PUT.
Following commit 0aafd77fa4, replaced the internal usage of
FORMAT_OFF_T and FORMAT_OFF_TU with the external versions that we
expect API programmers to use.
This negates the need for separate definitions which were subtly
different under different platforms/compilers.
Renamed copy_header_value() to Curl_copy_header_value() as this
function is now non-static.
Simplified proxy flag in Curl_http_input_auth() when calling
sub-functions.
Removed unnecessary white space removal when using negotiate as it had
been missed in commit cdccb42267.
...following recent changes to Curl_base64_decode() rather than trying
to parse a header line for the authentication mechanisms which is CRLF
terminated and inline zero terminate it.
All protocol handler structs are now opaque (void *) in the
SessionHandle struct and moved into the request-specific sub-struct
'SingleRequest'. The intention is to keep the protocol-specific
knowledge in their own dedicated source files [protocol].c etc.
There's some "leakage" where this policy is violated, to be addressed at
a later point in time.
1 - always allocate the struct in protocol->setup_connection. Some
protocol handlers had to get this function added.
2 - always free at the end of a request. This is also an attempt to keep
less memory in the handle after it is completed.
Introducing a number of options to the multi interface that
allows for multiple pipelines to the same host, in order to
optimize the balance between the penalty for opening new
connections and the potential pipelining latency.
Two new options for limiting the number of connections:
CURLMOPT_MAX_HOST_CONNECTIONS - Limits the number of running connections
to the same host. When adding a handle that exceeds this limit,
that handle will be put in a pending state until another handle is
finished, so we can reuse the connection.
CURLMOPT_MAX_TOTAL_CONNECTIONS - Limits the number of connections in total.
When adding a handle that exceeds this limit,
that handle will be put in a pending state until another handle is
finished. The free connection will then be reused, if possible, or
closed if the pending handle can't reuse it.
Several new options for pipelining:
CURLMOPT_MAX_PIPELINE_LENGTH - Limits the pipelining length. If a
pipeline is "full" when a connection is to be reused, a new connection
will be opened if the CURLMOPT_MAX_xxx_CONNECTIONS limits allow it.
If not, the handle will be put in a pending state until a connection is
ready (either free or a pipe got shorter).
CURLMOPT_CONTENT_LENGTH_PENALTY_SIZE - A pipelined connection will not
be reused if it is currently processing a transfer with a content
length that is larger than this.
CURLMOPT_CHUNK_LENGTH_PENALTY_SIZE - A pipelined connection will not
be reused if it is currently processing a chunk larger than this.
CURLMOPT_PIPELINING_SITE_BL - A blacklist of hosts that don't allow
pipelining.
CURLMOPT_PIPELINING_SERVER_BL - A blacklist of server types that don't allow
pipelining.
See the curl_multi_setopt() man page for details.
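A configuration sketch using the new options (public multi API; the
numeric limits are arbitrary examples):

    #include <curl/curl.h>

    CURLM *multi = curl_multi_init();
    curl_multi_setopt(multi, CURLMOPT_PIPELINING, 1L);
    curl_multi_setopt(multi, CURLMOPT_MAX_HOST_CONNECTIONS, 2L);
    curl_multi_setopt(multi, CURLMOPT_MAX_TOTAL_CONNECTIONS, 8L);
    curl_multi_setopt(multi, CURLMOPT_MAX_PIPELINE_LENGTH, 5L);
    /* don't pipeline behind transfers larger than 1MB */
    curl_multi_setopt(multi, CURLMOPT_CONTENT_LENGTH_PENALTY_SIZE,
                      1024L * 1024L);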
Remove the internal separated behavior of the easy vs multi interface.
curl_easy_perform() is now using the multi interface itself.
Several minor multi interface quirks and bugs have been fixed in the
process.
Much help with debugging this has been provided by: Yang Tse
This commit renames lib/setup.h to lib/curl_setup.h and
renames lib/setup_once.h to lib/curl_setup_once.h.
Removes the need and usage of a header inclusion guard foreign
to libcurl. [1]
Removes the need and presence of an alarming notice we carried
in old setup_once.h [2]
----------------------------------------
1 - lib/setup_once.h used __SETUP_ONCE_H macro as header inclusion guard
up to commit ec691ca3 which changed this to HEADER_CURL_SETUP_ONCE_H,
this single inclusion guard is enough to ensure that inclusion of
lib/setup_once.h done from lib/setup.h is only done once.
Additionally lib/setup.h has always used __SETUP_ONCE_H macro to
protect inclusion of setup_once.h even after commit ec691ca3, this
was to avoid a circular header inclusion triggered when building a
c-ares enabled version with c-ares sources available which also has
a setup_once.h header. Commit ec691ca3 exposes the real nature of
__SETUP_ONCE_H usage in lib/setup.h: it is a header inclusion guard
foreign to libcurl, belonging to c-ares's setup_once.h.
The renaming this commit does fixes the circular header inclusion,
and as such removes the need and usage of a header inclusion guard
foreign to libcurl. The __SETUP_ONCE_H macro is no longer used in
libcurl.
2 - Due to the circular interdependency of old lib/setup_once.h and the
c-ares setup_once.h header, old file lib/setup_once.h carried, from
2006 until now, an alarming and prominent notice about the need to
keep libcurl's and c-ares's setup_once.h in sync.
Given that this commit fixes the circular interdependency, the need
and presence of mentioned notice is removed.
All mentioned interdependencies date back to the old days when the
c-ares project lived inside a curl subdirectory. This commit removes
the last traces of that fact.
This reverts renaming and usage of lib/*.h header files done
28-12-2012, reverting 2 commits:
f871de0... build: make use of 76 lib/*.h renamed files
ffd8e12... build: rename 76 lib/*.h files
This also reverts removal of redundant include guard (redundant thanks
to changes in above commits) done 2-12-2013, reverting 1 commit:
c087374... curl_setup.h: remove redundant include guard
This also reverts renaming and usage of lib/*.c source files done
3-12-2013, reverting 3 commits:
13606bb... build: make use of 93 lib/*.c renamed files
5b6e792... build: rename 93 lib/*.c files
7d83dff... build: commit 13606bbfde follow-up 1
Start of related discussion thread:
http://curl.haxx.se/mail/lib-2013-01/0012.html
Asking for confirmation on pushing this reversion commit:
http://curl.haxx.se/mail/lib-2013-01/0048.html
Confirmation summary:
http://curl.haxx.se/mail/lib-2013-01/0079.html
NOTICE: The list of 2 files that have been modified by other
intermixed commits, while renamed, and also by at least one
of the 6 commits this one reverts follows below. These 2 files
will exhibit a hole in history unless git's '--follow' option
is used when viewing logs.
lib/curl_imap.h
lib/curl_smtp.h
A bundle is a list of all persistent connections to the same host.
The connection cache consists of a hash of bundles, with the
hostname as the key.
The benefits may not be obvious, but they are two:
1) Faster search for connections to reuse, since the hash
lookup only finds connections to the host in question.
2) It lays the groundwork for an upcoming patch,
which will introduce multiple HTTP pipelines.
This patch also removes the awkward list of "closure handles",
which were needed to send QUIT commands to the FTP server
when closing a connection.
Now we allocate a separate closure handle and use that
one to close all connections.
This has been tested in a live system for a few weeks, and of
course passes the test suite.
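A hedged sketch of the resulting layout (modeled on the conncache
code, simplified):

    /* one bundle: all persistent connections to a single host */
    struct connectbundle {
      size_t num_connections;
      struct curl_llist *conn_list;  /* of struct connectdata */
    };

    /* the cache itself: a hash keyed on the hostname,
       mapping to a struct connectbundle */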
.. that are sent when auth-negotiating before a chunked
upload or when setting the 'Transfer-Encoding: chunked'
header and intentionally sending no content.
Adjust test565 and test1333 accordingly.
The logic previously checked for a started NTLM negotiation only for
the host and not also for the proxy, leading to problems doing POSTs
larger than 2000 bytes over a proxy using NTLM. Now the check includes
the proxy as well.
Bug: http://curl.haxx.se/bug/view.cgi?id=3582321
Reported-by: John Suprock
Previously the curl_multi interface would freeze if darwinssl was
enabled and at least one of the handles tried to connect to a Web site
using HTTPS. Removed the "wouldblock" state darwinssl was using because
I figured out a solution for our "would block but in which direction?"
dilemma.
A HEAD response has no body length and gets the headers like the
corresponding GET would, so it should not get closed after the response
based on the same rules. This mistake caused connections that did HEAD
to get closed too often without a valid reason.
Bug: http://curl.haxx.se/bug/view.cgi?id=3542731
Reported-by: Eelco Dolstra
The function https_getsock was only implemented properly when
USE_SSLEAY or USE_GNUTLS was defined, but it is also necessary for
USE_SCHANNEL.
The problem occurs when Curl_read_plain or Curl_write_plain returns
CURLE_AGAIN. In that case CURLE_OK is returned to the multi interface,
the used socket is set to state CURL_POLL_REMOVE, and the easy state is
set to CURLM_STATE_PROTOCONNECT. This is fine, because later the socket
should be set to CURL_POLL_IN or CURL_POLL_OUT via multi_getsock.
That's where https_getsock is called and doesn't return any sockets.
When doing a chunked-encoded POST with -d (CURLOPT_POSTFIELDS) and the
size of the POST was zero length, it made libcurl first send a zero
chunk and then the terminating one. This could confuse a receiver and it
should rather just send the terminating chunk as it does with this fix.
Test case 1333 is added to verify.
Bug: http://curl.haxx.se/mail/archive-2012-04/0060.html
Reported-by: Arnaud Compan
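Schematically, the wire difference for the zero-length chunked POST
(CRLFs written out):

    before: 0\r\n\r\n0\r\n\r\n   (a zero chunk, then the terminating chunk)
    after:  0\r\n\r\n            (only the terminating chunk)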
The commit e650dbde86 that stripped off [brackets] from ipv6-only host
headers for the sake of cookie parsing wrongly incremented the host
pointer which would cause a bad free() call later on.
The refactoring of HTTP CONNECT handling in commit 41b0237834 that
made it protocol independent broke it for the multi interface. This fix
introduces better state handling and moves some logic to the
http_proxy.c source file.
Reported-by: Yang Tse
Bug: http://curl.haxx.se/mail/lib-2012-03/0162.html
Curl_protocol_connect() now does the tunneling through the HTTP proxy if
requested instead of letting each protocol specific connection function
do it.