
pipelining: removed

As previously planned and documented in DEPRECATE.md, all pipelining
code is removed.

Closes #3651
Daniel Stenberg 2019-04-05 16:38:36 +02:00
parent aba1c51553
commit 2f44e94efb
GPG Key ID: 5CC908FDB71E12C2
28 changed files with 190 additions and 1401 deletions

View File

@@ -5,46 +5,6 @@ email the curl-library mailing list as soon as possible and explain to us why
this is a problem for you and how your use case can't be satisfied properly
using a work around.
## HTTP pipelining

HTTP pipelining is badly supported by curl in the sense that we have bugs and
it is a fragile feature without enough tests. Also, when something turns out
to have problems it is really tricky to debug due to the timing sensitivity, so
very often enabling debug output or similar completely changes the nature of
the behavior and the problem no longer reproduces.

HTTP pipelining was never enabled by default by the large desktop browsers due
to all the issues with it. Both Firefox and Chrome dropped pipelining support
entirely a long time ago. Over time we have in fact become more and more
lonely in supporting pipelining.
The bad state of HTTP pipelining was a primary driving factor behind HTTP/2
and its multiplexing feature. HTTP/2 multiplexing is truly "pipelining done
right". It is far more solid and practical, and it solves the use case in a
better way with better performance and fewer downsides and problems.

In 2018, pipelining *should* be abandoned and HTTP/2 should be used instead.

### State
In 7.62.0, we will add code that ignores the "enable pipeline" option
setting. The *setopt() function would still return "OK" though, so the
application couldn't tell that this is happening.

Users who truly need pipelining from that version will need to modify the
code (ever so slightly) and rebuild.
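
For illustration only (not part of DEPRECATE.md): a minimal sketch, assuming
libcurl 7.62.0 or later, of why an application cannot detect this change from
the return code alone; the program is hypothetical and only uses documented
calls.

    #include <curl/curl.h>
    #include <stdio.h>

    int main(void)
    {
      CURLM *m;
      CURLMcode rc;

      curl_global_init(CURL_GLOBAL_DEFAULT);
      m = curl_multi_init();

      /* Ask for HTTP/1 pipelining: since 7.62.0 the CURLPIPE_HTTP1 bit is
         silently ignored, yet the call still reports success, so the return
         code gives the application no hint that behavior changed. */
      rc = curl_multi_setopt(m, CURLMOPT_PIPELINING, CURLPIPE_HTTP1);
      printf("setopt returned %d (CURLM_OK is %d)\n", (int)rc, (int)CURLM_OK);

      curl_multi_cleanup(m);
      curl_global_cleanup();
      return 0;
    }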

### Removal

Six months later, in sync with the release planned for April 2019 (might be
7.66.0), and assuming no major riots have occurred over this in the meantime,
we rip out the pipelining code. It is in the order of 1000 lines of libcurl
code.
Left to answer: should the *setopt() functions start to return an error when
these options are set, so that applications can tell they are trying to use
options that are no longer around, or should we keep the behavior as
unchanged as possible?

## `CURLOPT_DNS_USE_GLOBAL_CACHE`

This option makes libcurl use a global non-thread-safe cache for DNS if

View File

@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2017, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -28,6 +28,8 @@ CURLMOPT_CHUNK_LENGTH_PENALTY_SIZE \- chunk length threshold for pipelining
CURLMcode curl_multi_setopt(CURLM *handle, CURLMOPT_CHUNK_LENGTH_PENALTY_SIZE, long size);
.SH DESCRIPTION
+No function since pipelining was removed in 7.62.0.
Pass a long with a \fBsize\fP in bytes. If a pipelined connection is currently
processing a chunked (Transfer-encoding: chunked) request with a current chunk
length larger than \fICURLMOPT_CHUNK_LENGTH_PENALTY_SIZE(3)\fP, that pipeline

View File

@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2017, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -28,6 +28,8 @@ CURLMOPT_CONTENT_LENGTH_PENALTY_SIZE \- size threshold for pipelining penalty
CURLMcode curl_multi_setopt(CURLM *handle, CURLMOPT_CONTENT_LENGTH_PENALTY_SIZE, long size);
.SH DESCRIPTION
+No function since pipelining was removed in 7.62.0.
Pass a long with a \fBsize\fP in bytes. If a pipelined connection is currently
processing a request with a Content-Length larger than this
\fICURLMOPT_CONTENT_LENGTH_PENALTY_SIZE(3)\fP, that pipeline will then not be

View File

@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2017, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -28,6 +28,8 @@ CURLMOPT_MAX_PIPELINE_LENGTH \- maximum number of requests in a pipeline
CURLMcode curl_multi_setopt(CURLM *handle, CURLMOPT_MAX_PIPELINE_LENGTH, long max);
.SH DESCRIPTION
+No function since pipelining was removed in 7.62.0.
Pass a long. The set \fBmax\fP number will be used as the maximum amount of
outstanding requests in an HTTP/1.1 pipelined connection. This option is only
used for HTTP/1.1 pipelining, not for HTTP/2 multiplexing.
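
For illustration (not taken from the man page): a sketch of how such a call
looks in application code; `m` is assumed to be a multi handle from
curl_multi_init(). Per the curl_multi_setopt() handling later in this commit,
the option is still accepted but has no effect since 7.62.0.

    /* formerly capped an HTTP/1.1 pipeline at 5 outstanding requests;
       now a documented no-op that still returns CURLM_OK */
    curl_multi_setopt(m, CURLMOPT_MAX_PIPELINE_LENGTH, 5L);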

View File

@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2018, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -71,12 +71,12 @@ HTTP(S)
.SH EXAMPLE
.nf
CURLM *m = curl_multi_init();
-/* try HTTP/1 pipelining and HTTP/2 multiplexing */
-curl_multi_setopt(m, CURLMOPT_PIPELINING, CURLPIPE_HTTP1 |
-                  CURLPIPE_MULTIPLEX);
+/* try HTTP/2 multiplexing */
+curl_multi_setopt(m, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX);
.fi
.SH AVAILABILITY
-Added in 7.16.0. Multiplex support bit added in 7.43.0.
+Added in 7.16.0. Multiplex support bit added in 7.43.0. HTTP/1 Pipelining
+support was disabled in 7.62.0.
.SH RETURN VALUE
Returns CURLM_OK if the option is supported, and CURLM_UNKNOWN_OPTION if not.
.SH "SEE ALSO"

View File

@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2014, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -28,6 +28,8 @@ CURLMOPT_PIPELINING_SERVER_BL \- pipelining server blacklist
CURLMcode curl_multi_setopt(CURLM *handle, CURLMOPT_PIPELINING_SERVER_BL, char **servers);
.SH DESCRIPTION
+No function since pipelining was removed in 7.62.0.
Pass a \fBservers\fP array of char *, ending with a NULL entry. This is a list
of server types prefixes (in the Server: HTTP header) that are blacklisted
from pipelining, i.e server types that are known to not support HTTP

View File

@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
-.\" * Copyright (C) 1998 - 2014, Daniel Stenberg, <daniel@haxx.se>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -28,6 +28,8 @@ CURLMOPT_PIPELINING_SITE_BL \- pipelining host blacklist
CURLMcode curl_multi_setopt(CURLM *handle, CURLMOPT_PIPELINING_SITE_BL, char **hosts);
.SH DESCRIPTION
+No function since pipelining was removed in 7.62.0.
Pass a \fBhosts\fP array of char *, ending with a NULL entry. This is a list
of sites that are blacklisted from pipelining, i.e sites that are known to not
support HTTP pipelining. The array is copied by libcurl.
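
For illustration (not from the man page): the NULL-terminated array form
described above, with made-up host names; since 7.62.0 the call is accepted
but has no effect. `m` is assumed to be a multi handle.

    char *site_blacklist[] = {
      "www.example.com",        /* plain host name */
      "host.example.org:8080",  /* host name with an explicit port */
      NULL                      /* terminates the list */
    };
    curl_multi_setopt(m, CURLMOPT_PIPELINING_SITE_BL, site_blacklist);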

View File

@@ -52,7 +52,7 @@ LIB_CFILES = file.c timeval.c base64.c hostip.c progress.c formdata.c \
openldap.c curl_gethostname.c gopher.c idn_win32.c \
http_proxy.c non-ascii.c asyn-ares.c asyn-thread.c curl_gssapi.c \
http_ntlm.c curl_ntlm_wb.c curl_ntlm_core.c curl_sasl.c rand.c \
-curl_multibyte.c hostcheck.c conncache.c pipeline.c dotdot.c \
+curl_multibyte.c hostcheck.c conncache.c dotdot.c \
x509asn1.c http2.c smb.c curl_endian.c curl_des.c system_win32.c \
mime.c sha256.c setopt.c curl_path.c curl_ctype.c curl_range.c psl.c \
doh.c urlapi.c altsvc.c
@@ -72,7 +72,7 @@ LIB_HFILES = arpa_telnet.h netrc.h file.h timeval.h hostip.h progress.h \
curl_gethostname.h gopher.h http_proxy.h non-ascii.h asyn.h \
http_ntlm.h curl_gssapi.h curl_ntlm_wb.h curl_ntlm_core.h \
curl_sasl.h curl_multibyte.h hostcheck.h conncache.h \
-curl_setup_once.h multihandle.h setup-vms.h pipeline.h dotdot.h \
+curl_setup_once.h multihandle.h setup-vms.h dotdot.h \
x509asn1.h http2.h sigpipe.h smb.h curl_endian.h curl_des.h \
curl_printf.h system_win32.h rand.h mime.h curl_sha256.h setopt.h \
curl_path.h curl_ctype.h curl_range.h psl.h doh.h urlapi-int.h \

View File

@@ -7,7 +7,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
-* Copyright (C) 2015 - 2018, Daniel Stenberg, <daniel@haxx.se>, et al.
+* Copyright (C) 2015 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 2012 - 2014, Linus Nielsen Feltzing, <linus@haxx.se>
*
* This software is licensed as described in the file COPYING, which
@@ -40,7 +40,6 @@ struct conncache {
#define BUNDLE_NO_MULTIUSE -1
#define BUNDLE_UNKNOWN 0 /* initial value */
-#define BUNDLE_PIPELINING 1
#define BUNDLE_MULTIPLEX 2
struct connectbundle {

View File

@@ -73,7 +73,6 @@
#include "http_proxy.h"
#include "warnless.h"
#include "non-ascii.h"
-#include "pipeline.h"
#include "http2.h"
#include "connect.h"
#include "strdup.h"
@@ -1280,7 +1279,6 @@ CURLcode Curl_add_buffer_send(Curl_send_buffer **inp,
This needs FIXing.
*/
return CURLE_SEND_ERROR;
-Curl_pipeline_leave_write(conn);
}
}
Curl_add_buffer_free(&in);
@@ -3722,16 +3720,9 @@ CURLcode Curl_http_readwrite_headers(struct Curl_easy *data,
}
else if(conn->httpversion >= 11 &&
!conn->bits.close) {
-/* If HTTP version is >= 1.1 and connection is persistent
-server supports pipelining. */
+/* If HTTP version is >= 1.1 and connection is persistent */
DEBUGF(infof(data,
-"HTTP 1.1 or later with persistent connection, "
-"pipelining supported\n"));
+"HTTP 1.1 or later with persistent connection\n"));
-/* Activate pipelining if needed */
-if(conn->bundle) {
-if(!Curl_pipeline_site_blacklisted(data, conn))
-conn->bundle->multiuse = BUNDLE_PIPELINING;
-}
}
switch(k->httpcode) {
@@ -3816,19 +3807,6 @@ CURLcode Curl_http_readwrite_headers(struct Curl_easy *data,
data->info.contenttype = contenttype;
}
}
-else if(checkprefix("Server:", k->p)) {
-if(conn->httpversion < 20) {
-/* only do this for non-h2 servers */
-char *server_name = Curl_copy_header_value(k->p);
-/* Turn off pipelining if the server version is blacklisted */
-if(conn->bundle && (conn->bundle->multiuse == BUNDLE_PIPELINING)) {
-if(Curl_pipeline_server_blacklisted(data, server_name))
-conn->bundle->multiuse = BUNDLE_NO_MULTIUSE;
-}
-free(server_name);
-}
-}
else if((conn->httpversion == 10) &&
conn->bits.httpproxy &&
Curl_compareheader(k->p,

View File

@@ -620,7 +620,7 @@ static int push_promise(struct Curl_easy *data,
/*
* multi_connchanged() is called to tell that there is a connection in
-* this multi handle that has changed state (pipelining become possible, the
+* this multi handle that has changed state (multiplexing become possible, the
* number of allowed streams changed or similar), and a subsequent use of this
* multi handle should move CONNECT_PEND handles back to CONNECT to have them
* retry.

View File

@@ -41,7 +41,6 @@
#include "speedcheck.h"
#include "conncache.h"
#include "multihandle.h"
-#include "pipeline.h"
#include "sigpipe.h"
#include "vtls/vtls.h"
#include "connect.h"
@@ -136,12 +135,10 @@ static void mstate(struct Curl_easy *data, CURLMstate state
NULL, /* WAITPROXYCONNECT */
NULL, /* SENDPROTOCONNECT */
NULL, /* PROTOCONNECT */
-NULL, /* WAITDO */
Curl_connect_free, /* DO */
NULL, /* DOING */
NULL, /* DO_MORE */
NULL, /* DO_DONE */
-NULL, /* WAITPERFORM */
NULL, /* PERFORM */
NULL, /* TOOFAST */
NULL, /* DONE */
@@ -349,9 +346,6 @@ struct Curl_multi *Curl_multi_handle(int hashsize, /* socket hash */
Curl_llist_init(&multi->msglist, multi_freeamsg);
Curl_llist_init(&multi->pending, multi_freeamsg);
-multi->max_pipeline_length = 5;
-multi->pipelining = CURLPIPE_MULTIPLEX;
/* -1 means it not set by user, use the default value */
multi->maxconnects = -1;
return multi;
@@ -440,12 +434,7 @@ CURLMcode curl_multi_add_handle(struct Curl_multi *multi,
data->psl = &multi->psl;
#endif
-/* This adds the new entry at the 'end' of the doubly-linked circular
-list of Curl_easy structs to try and maintain a FIFO queue so
-the pipelined requests are in order. */
-/* We add this new entry last in the list. */
+/* We add the new entry last in the list. */
data->next = NULL; /* end of the line */
if(multi->easyp) {
struct Curl_easy *last = multi->easylp;
@@ -538,8 +527,6 @@ static CURLcode multi_done(struct Curl_easy *data,
/* Stop the resolver and free its own resources (but not dns_entry yet). */
Curl_resolver_kill(conn);
-Curl_getoff_all_pipelines(data, conn);
/* Cleanup possible redirect junk */
Curl_safefree(data->req.newurl);
Curl_safefree(data->req.location);
@@ -573,12 +560,12 @@ static CURLcode multi_done(struct Curl_easy *data,
process_pending_handles(data->multi); /* connection / multiplex */
-if(conn->send_pipe.size || conn->recv_pipe.size) {
-/* Stop if pipeline is not empty . */
-detach_connnection(data);
-DEBUGF(infof(data, "Connection still in use %zu/%zu, "
+detach_connnection(data);
+if(CONN_INUSE(conn)) {
+/* Stop if still used. */
+DEBUGF(infof(data, "Connection still in use %zu, "
"no more multi_done now!\n",
-conn->send_pipe.size, conn->recv_pipe.size));
+conn->easyq.size));
return CURLE_OK;
}
@@ -652,7 +639,6 @@ static CURLcode multi_done(struct Curl_easy *data,
data->state.lastconnect = NULL;
}
-detach_connnection(data);
Curl_free_request_state(data);
return result;
}
@@ -698,9 +684,6 @@ CURLMcode curl_multi_remove_handle(struct Curl_multi *multi,
/* Set connection owner so that the DONE function closes it. We can
safely do this here since connection is killed. */
data->conn->data = easy;
-/* If the handle is in a pipeline and has started sending off its
-request but not received its response yet, we need to close
-connection. */
streamclose(data->conn, "Removed with partial response");
easy_owns_conn = TRUE;
}
@@ -723,9 +706,6 @@ CURLMcode curl_multi_remove_handle(struct Curl_multi *multi,
nothing really useful to do with it anyway! */
(void)multi_done(data, data->result, premature);
}
-else
-/* Clear connection pipelines, if multi_done above was not called */
-Curl_getoff_all_pipelines(data, data->conn);
}
if(data->connect_queue.ptr)
@@ -803,16 +783,19 @@ CURLMcode curl_multi_remove_handle(struct Curl_multi *multi,
return CURLM_OK;
}
-/* Return TRUE if the application asked for a certain set of pipelining */
-bool Curl_pipeline_wanted(const struct Curl_multi *multi, int bits)
+/* Return TRUE if the application asked for multiplexing */
+bool Curl_multiplex_wanted(const struct Curl_multi *multi)
{
-return (multi && (multi->pipelining & bits)) ? TRUE : FALSE;
+return (multi && (multi->multiplexing));
}
/* This is the only function that should clear data->conn. This will
occasionally be called with the pointer already cleared. */
static void detach_connnection(struct Curl_easy *data)
{
+struct connectdata *conn = data->conn;
+if(conn)
+Curl_llist_remove(&conn->easyq, &data->conn_queue, NULL);
data->conn = NULL;
}
@@ -821,7 +804,10 @@ void Curl_attach_connnection(struct Curl_easy *data,
struct connectdata *conn)
{
DEBUGASSERT(!data->conn);
+DEBUGASSERT(conn);
data->conn = conn;
+Curl_llist_insert_next(&conn->easyq, conn->easyq.tail, data,
+&data->conn_queue);
}
static int waitconnect_getsock(struct connectdata *conn,
@@ -935,7 +921,6 @@ static int multi_getsock(struct Curl_easy *data,
to waiting for the same as the *PERFORM
states */
case CURLM_STATE_PERFORM:
-case CURLM_STATE_WAITPERFORM:
return Curl_single_getsock(data->conn, socks, numsocks);
}
@@ -1203,7 +1188,7 @@ CURLMcode Curl_multi_add_perform(struct Curl_multi *multi,
* do_complete is called when the DO actions are complete.
*
* We init chunking and trailer bits to their default values here immediately
-* before receiving any header data for the current request in the pipeline.
+* before receiving any header data for the current request.
*/
static void do_complete(struct connectdata *conn)
{
@@ -1216,6 +1201,9 @@ static CURLcode multi_do(struct Curl_easy *data, bool *done)
CURLcode result = CURLE_OK;
struct connectdata *conn = data->conn;
+DEBUGASSERT(conn);
+DEBUGASSERT(conn->handler);
if(conn->handler->do_it) {
/* generic protocol-specific function pointer set in curl_connect() */
result = conn->handler->do_it(conn, done);
@@ -1266,7 +1254,6 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
bool done = FALSE;
CURLMcode rc;
CURLcode result = CURLE_OK;
-struct SingleRequest *k;
timediff_t timeout_ms;
timediff_t recv_timeout_ms;
timediff_t send_timeout_ms;
@@ -1293,7 +1280,7 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
if(multi_ischanged(multi, TRUE)) {
DEBUGF(infof(data, "multi changed, check CONNECT_PEND queue!\n"));
-process_pending_handles(multi); /* pipelined/multiplexed */
+process_pending_handles(multi); /* multiplexed */
}
if(data->conn && data->mstate > CURLM_STATE_CONNECT &&
@@ -1308,7 +1295,7 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
/* we need to wait for the connect state as only then is the start time
stored, but we must not check already completed handles */
timeout_ms = Curl_timeleft(data, &now,
-(data->mstate <= CURLM_STATE_WAITDO)?
+(data->mstate <= CURLM_STATE_DO)?
TRUE:FALSE);
if(timeout_ms < 0) {
@@ -1322,7 +1309,7 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
" milliseconds",
Curl_timediff(now, data->progress.t_startsingle));
else {
-k = &data->req;
+struct SingleRequest *k = &data->req;
if(k->size != -1) {
failf(data, "Operation timed out after %" CURL_FORMAT_TIMEDIFF_T
" milliseconds with %" CURL_FORMAT_CURL_OFF_T " out of %"
@@ -1392,31 +1379,24 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
}
if(!result) {
-/* Add this handle to the send or pend pipeline */
-result = Curl_add_handle_to_pipeline(data, data->conn);
-if(result)
-stream_error = TRUE;
-else {
-if(async)
-/* We're now waiting for an asynchronous name lookup */
-multistate(data, CURLM_STATE_WAITRESOLVE);
-else {
-/* after the connect has been sent off, go WAITCONNECT unless the
-protocol connect is already done and we can go directly to
-WAITDO or DO! */
-rc = CURLM_CALL_MULTI_PERFORM;
-if(protocol_connect)
-multistate(data, Curl_pipeline_wanted(multi, CURLPIPE_HTTP1)?
-CURLM_STATE_WAITDO:CURLM_STATE_DO);
-else {
-#ifndef CURL_DISABLE_HTTP
-if(Curl_connect_ongoing(data->conn))
-multistate(data, CURLM_STATE_WAITPROXYCONNECT);
-else
-#endif
-multistate(data, CURLM_STATE_WAITCONNECT);
-}
-}
-}
+if(async)
+/* We're now waiting for an asynchronous name lookup */
+multistate(data, CURLM_STATE_WAITRESOLVE);
+else {
+/* after the connect has been sent off, go WAITCONNECT unless the
+protocol connect is already done and we can go directly to
+WAITDO or DO! */
+rc = CURLM_CALL_MULTI_PERFORM;
+if(protocol_connect)
+multistate(data, CURLM_STATE_DO);
+else {
+#ifndef CURL_DISABLE_HTTP
+if(Curl_connect_ongoing(data->conn))
+multistate(data, CURLM_STATE_WAITPROXYCONNECT);
+else
+#endif
+multistate(data, CURLM_STATE_WAITCONNECT);
+}
+}
}
@@ -1429,6 +1409,7 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
struct connectdata *conn = data->conn;
const char *hostname;
+DEBUGASSERT(conn);
if(conn->bits.httpproxy)
hostname = conn->http_proxy.host.name;
else if(conn->bits.conn_to_host)
@@ -1472,8 +1453,7 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
/* call again please so that we get the next socket setup */
rc = CURLM_CALL_MULTI_PERFORM;
if(protocol_connect)
-multistate(data, Curl_pipeline_wanted(multi, CURLPIPE_HTTP1)?
-CURLM_STATE_WAITDO:CURLM_STATE_DO);
+multistate(data, CURLM_STATE_DO);
else {
#ifndef CURL_DISABLE_HTTP
if(Curl_connect_ongoing(data->conn))
@@ -1496,6 +1476,7 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
#ifndef CURL_DISABLE_HTTP
case CURLM_STATE_WAITPROXYCONNECT:
/* this is HTTP-specific, but sending CONNECT to a proxy is HTTP... */
+DEBUGASSERT(data->conn);
result = Curl_http_connect(data->conn, &protocol_connect);
if(data->conn->bits.proxy_connect_closed) {
@@ -1521,6 +1502,7 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
case CURLM_STATE_WAITCONNECT:
/* awaiting a completion of an asynch TCP connect */
+DEBUGASSERT(data->conn);
result = Curl_is_connected(data->conn, FIRSTSOCKET, &connected);
if(connected && !result) {
#ifndef CURL_DISABLE_HTTP
@@ -1552,8 +1534,7 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
multistate(data, CURLM_STATE_PROTOCONNECT);
else if(!result) {
/* protocol connect has completed, go WAITDO or DO */
-multistate(data, Curl_pipeline_wanted(multi, CURLPIPE_HTTP1)?
-CURLM_STATE_WAITDO:CURLM_STATE_DO);
+multistate(data, CURLM_STATE_DO);
rc = CURLM_CALL_MULTI_PERFORM;
}
else if(result) {
@@ -1569,8 +1550,7 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
result = Curl_protocol_connecting(data->conn, &protocol_connect);
if(!result && protocol_connect) {
/* after the connect has completed, go WAITDO or DO */
-multistate(data, Curl_pipeline_wanted(multi, CURLPIPE_HTTP1)?
-CURLM_STATE_WAITDO:CURLM_STATE_DO);
+multistate(data, CURLM_STATE_DO);
rc = CURLM_CALL_MULTI_PERFORM;
}
else if(result) {
@@ -1581,15 +1561,6 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
}
break;
-case CURLM_STATE_WAITDO:
-/* Wait for our turn to DO when we're pipelining requests */
-if(Curl_pipeline_checkget_write(data, data->conn)) {
-/* Grabbed the channel */
-multistate(data, CURLM_STATE_DO);
-rc = CURLM_CALL_MULTI_PERFORM;
-}
-break;
case CURLM_STATE_DO:
if(data->set.connect_only) {
/* keep connection open for application to use the socket */
@@ -1696,6 +1667,7 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
case CURLM_STATE_DOING:
/* we continue DOING until the DO phase is complete */
+DEBUGASSERT(data->conn);
result = Curl_protocol_doing(data->conn,
&dophase_done);
if(!result) {
@@ -1719,10 +1691,9 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
/*
* When we are connected, DO MORE and then go DO_DONE
*/
+DEBUGASSERT(data->conn);
result = multi_do_more(data->conn, &control);
-/* No need to remove this handle from the send pipeline here since that
-is done in multi_done() */
if(!result) {
if(control) {
/* if positive, advance to DO_DONE
@@ -1745,38 +1716,28 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
break;
case CURLM_STATE_DO_DONE:
-/* Move ourselves from the send to recv pipeline */
-Curl_move_handle_from_send_to_recv_pipe(data, data->conn);
-if(data->conn->bits.multiplex || data->conn->send_pipe.size)
+DEBUGASSERT(data->conn);
+if(data->conn->bits.multiplex)
/* Check if we can move pending requests to send pipe */
-process_pending_handles(multi); /* pipelined/multiplexed */
+process_pending_handles(multi); /* multiplexed */
/* Only perform the transfer if there's a good socket to work with.
Having both BAD is a signal to skip immediately to DONE */
if((data->conn->sockfd != CURL_SOCKET_BAD) ||
(data->conn->writesockfd != CURL_SOCKET_BAD))
-multistate(data, CURLM_STATE_WAITPERFORM);
+multistate(data, CURLM_STATE_PERFORM);
else {
if(data->state.wildcardmatch &&
((data->conn->handler->flags & PROTOPT_WILDCARD) == 0)) {
data->wildcard.state = CURLWC_DONE;
}
multistate(data, CURLM_STATE_DONE);
}
rc = CURLM_CALL_MULTI_PERFORM;
break;
-case CURLM_STATE_WAITPERFORM:
-/* Wait for our turn to PERFORM */
-if(Curl_pipeline_checkget_read(data, data->conn)) {
-/* Grabbed the channel */
-multistate(data, CURLM_STATE_PERFORM);
-rc = CURLM_CALL_MULTI_PERFORM;
-}
-break;
case CURLM_STATE_TOOFAST: /* limit-rate exceeded in either direction */
+DEBUGASSERT(data->conn);
/* if both rates are within spec, resume transfer */
if(Curl_pgrsUpdate(data->conn))
result = CURLE_ABORTED_BY_CALLBACK;
@@ -1850,18 +1811,6 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
/* read/write data if it is ready to do so */
result = Curl_readwrite(data->conn, data, &done, &comeback);
-k = &data->req;
-if(!(k->keepon & KEEP_RECV)) {
-/* We're done receiving */
-Curl_pipeline_leave_read(data->conn);
-}
-if(!(k->keepon & KEEP_SEND)) {
-/* We're done sending */
-Curl_pipeline_leave_write(data->conn);
-}
if(done || (result == CURLE_RECV_ERROR)) {
/* If CURLE_RECV_ERROR happens early enough, we assume it was a race
* condition and the server closed the re-used connection exactly when
@@ -1924,13 +1873,6 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
/* call this even if the readwrite function returned error */
Curl_posttransfer(data);
-/* we're no longer receiving */
-Curl_removeHandleFromPipeline(data, &data->conn->recv_pipe);
-/* expire the new receiving pipeline head */
-if(data->conn->recv_pipe.head)
-Curl_expire(data->conn->recv_pipe.head->ptr, 0, EXPIRE_RUN_NOW);
/* When we follow redirects or is set to retry the connection, we must
to go back to the CONNECT state */
if(data->req.newurl || retry) {
@@ -1988,12 +1930,9 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
if(data->conn) {
CURLcode res;
-/* Remove ourselves from the receive pipeline, if we are there. */
-Curl_removeHandleFromPipeline(data, &data->conn->recv_pipe);
-if(data->conn->bits.multiplex || data->conn->send_pipe.size)
+if(data->conn->bits.multiplex)
/* Check if we can move pending requests to connection */
-process_pending_handles(multi); /* pipelined/multiplexing */
+process_pending_handles(multi); /* multiplexing */
/* post-transfer command */
res = multi_done(data, result, FALSE);
@@ -2003,7 +1942,7 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
result = res;
/*
-* If there are other handles on the pipeline, multi_done won't set
+* If there are other handles on the connection, multi_done won't set
* conn to NULL. In such a case, curl_multi_remove_handle() can
* access free'd data, if the connection is free'd and the handle
* removed before we perform the processing in CURLM_STATE_COMPLETED
@@ -2052,12 +1991,6 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
process_pending_handles(multi); /* connection */
if(data->conn) {
-/* if this has a connection, unsubscribe from the pipelines */
-Curl_pipeline_leave_write(data->conn);
-Curl_pipeline_leave_read(data->conn);
-Curl_removeHandleFromPipeline(data, &data->conn->send_pipe);
-Curl_removeHandleFromPipeline(data, &data->conn->recv_pipe);
if(stream_error) {
/* Don't attempt to send data over a connection that timed out */
bool dead_connection = result == CURLE_OPERATION_TIMEDOUT;
@@ -2218,12 +2151,6 @@ CURLMcode curl_multi_cleanup(struct Curl_multi *multi)
Curl_hash_destroy(&multi->hostcache);
Curl_psl_destroy(&multi->psl);
-/* Free the blacklists by setting them to NULL */
-(void)Curl_pipeline_set_site_blacklist(NULL, &multi->pipelining_site_bl);
-(void)Curl_pipeline_set_server_blacklist(NULL,
-&multi->pipelining_server_bl);
free(multi);
return CURLM_OK;
@@ -2576,19 +2503,6 @@ static CURLMcode multi_socket(struct Curl_multi *multi,
/* bad bad bad bad bad bad bad */
return CURLM_INTERNAL_ERROR;
-/* If the pipeline is enabled, take the handle which is in the head of
-the pipeline. If we should write into the socket, take the
-send_pipe head. If we should read from the socket, take the
-recv_pipe head. */
-if(data->conn) {
-if((ev_bitmask & CURL_POLL_OUT) &&
-data->conn->send_pipe.head)
-data = data->conn->send_pipe.head->ptr;
-else if((ev_bitmask & CURL_POLL_IN) &&
-data->conn->recv_pipe.head)
-data = data->conn->recv_pipe.head->ptr;
-}
if(data->conn && !(data->conn->handler->flags & PROTOPT_DIRLOCK))
/* set socket event bitmask if they're not locked */
data->conn->cselect_bits = ev_bitmask;
@@ -2695,7 +2609,7 @@ CURLMcode curl_multi_setopt(struct Curl_multi *multi,
multi->push_userp = va_arg(param, void *);
break;
case CURLMOPT_PIPELINING:
-multi->pipelining = va_arg(param, long) & CURLPIPE_MULTIPLEX;
+multi->multiplexing = va_arg(param, long) & CURLPIPE_MULTIPLEX;
break;
case CURLMOPT_TIMERFUNCTION:
multi->timer_cb = va_arg(param, curl_multi_timer_callback);
@@ -2709,26 +2623,20 @@ CURLMcode curl_multi_setopt(struct Curl_multi *multi,
case CURLMOPT_MAX_HOST_CONNECTIONS:
multi->max_host_connections = va_arg(param, long);
break;
-case CURLMOPT_MAX_PIPELINE_LENGTH:
-multi->max_pipeline_length = va_arg(param, long);
-break;
-case CURLMOPT_CONTENT_LENGTH_PENALTY_SIZE:
-multi->content_length_penalty_size = va_arg(param, long);
-break;
-case CURLMOPT_CHUNK_LENGTH_PENALTY_SIZE:
-multi->chunk_length_penalty_size = va_arg(param, long);
-break;
-case CURLMOPT_PIPELINING_SITE_BL:
-res = Curl_pipeline_set_site_blacklist(va_arg(param, char **),
-&multi->pipelining_site_bl);
-break;
-case CURLMOPT_PIPELINING_SERVER_BL:
-res = Curl_pipeline_set_server_blacklist(va_arg(param, char **),
-&multi->pipelining_server_bl);
-break;
case CURLMOPT_MAX_TOTAL_CONNECTIONS:
multi->max_total_connections = va_arg(param, long);
break;
+/* options formerly used for pipelining */
+case CURLMOPT_MAX_PIPELINE_LENGTH:
+break;
+case CURLMOPT_CONTENT_LENGTH_PENALTY_SIZE:
+break;
+case CURLMOPT_CHUNK_LENGTH_PENALTY_SIZE:
+break;
+case CURLMOPT_PIPELINING_SITE_BL:
+break;
+case CURLMOPT_PIPELINING_SERVER_BL:
+break;
default:
res = CURLM_UNKNOWN_OPTION;
break;
@@ -3080,26 +2988,6 @@ size_t Curl_multi_max_total_connections(struct Curl_multi *multi)
return multi ? multi->max_total_connections : 0;
}
-curl_off_t Curl_multi_content_length_penalty_size(struct Curl_multi *multi)
-{
-return multi ? multi->content_length_penalty_size : 0;
-}
-curl_off_t Curl_multi_chunk_length_penalty_size(struct Curl_multi *multi)
-{
-return multi ? multi->chunk_length_penalty_size : 0;
-}
-struct curl_llist *Curl_multi_pipelining_site_bl(struct Curl_multi *multi)
-{
-return &multi->pipelining_site_bl;
-}
-struct curl_llist *Curl_multi_pipelining_server_bl(struct Curl_multi *multi)
-{
-return &multi->pipelining_server_bl;
-}
static void process_pending_handles(struct Curl_multi *multi)
{
struct curl_llist_element *e = multi->pending.head;

View File

@@ -7,7 +7,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
-* Copyright (C) 1998 - 2018, Daniel Stenberg, <daniel@haxx.se>, et al.
+* Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -46,18 +46,16 @@ typedef enum {
CURLM_STATE_SENDPROTOCONNECT, /* 6 - initiate protocol connect procedure */
CURLM_STATE_PROTOCONNECT, /* 7 - completing the protocol-specific connect
phase */
-CURLM_STATE_WAITDO, /* 8 - wait for our turn to send the request */
-CURLM_STATE_DO, /* 9 - start send off the request (part 1) */
-CURLM_STATE_DOING, /* 10 - sending off the request (part 1) */
-CURLM_STATE_DO_MORE, /* 11 - send off the request (part 2) */
-CURLM_STATE_DO_DONE, /* 12 - done sending off request */
-CURLM_STATE_WAITPERFORM, /* 13 - wait for our turn to read the response */
-CURLM_STATE_PERFORM, /* 14 - transfer data */
-CURLM_STATE_TOOFAST, /* 15 - wait because limit-rate exceeded */
-CURLM_STATE_DONE, /* 16 - post data transfer operation */
-CURLM_STATE_COMPLETED, /* 17 - operation complete */
-CURLM_STATE_MSGSENT, /* 18 - the operation complete message is sent */
-CURLM_STATE_LAST /* 19 - not a true state, never use this */
+CURLM_STATE_DO, /* 8 - start send off the request (part 1) */
+CURLM_STATE_DOING, /* 9 - sending off the request (part 1) */
+CURLM_STATE_DO_MORE, /* 10 - send off the request (part 2) */
+CURLM_STATE_DO_DONE, /* 11 - done sending off request */
+CURLM_STATE_PERFORM, /* 12 - transfer data */
+CURLM_STATE_TOOFAST, /* 13 - wait because limit-rate exceeded */
+CURLM_STATE_DONE, /* 14 - post data transfer operation */
+CURLM_STATE_COMPLETED, /* 15 - operation complete */
+CURLM_STATE_MSGSENT, /* 16 - the operation complete message is sent */
+CURLM_STATE_LAST /* 17 - not a true state, never use this */
} CURLMstate;
/* we support N sockets per easy handle. Set the corresponding bit to what
@@ -66,7 +64,7 @@ typedef enum {
#define GETSOCK_READABLE (0x00ff)
#define GETSOCK_WRITABLE (0xff00)
-#define CURLPIPE_ANY (CURLPIPE_HTTP1 | CURLPIPE_MULTIPLEX)
+#define CURLPIPE_ANY (CURLPIPE_MULTIPLEX)
/* This is the struct known as CURLM on the outside */
struct Curl_multi {
@@ -112,8 +110,8 @@ struct Curl_multi {
same actual socket) */
struct curl_hash sockhash;
-/* pipelining wanted bits (CURLPIPE*) */
-long pipelining;
+/* multiplexing wanted */
+bool multiplexing;
bool recheckstate; /* see Curl_multi_connchanged */
@@ -129,24 +127,6 @@ struct Curl_multi {
long max_total_connections; /* if >0, a fixed limit of the maximum number
of connections in total */
-long max_pipeline_length; /* if >0, maximum number of requests in a
-pipeline */
-long content_length_penalty_size; /* a connection with a
-content-length bigger than
-this is not considered
-for pipelining */
-long chunk_length_penalty_size; /* a connection with a chunk length
-bigger than this is not
-considered for pipelining */
-struct curl_llist pipelining_site_bl; /* List of sites that are blacklisted
-from pipelining */
-struct curl_llist pipelining_server_bl; /* List of server types that are
-blacklisted from pipelining */
/* timer callback and user data pointer for the *socket() API */
curl_multi_timer_callback timer_cb;
void *timer_userp;

View File

@@ -30,10 +30,11 @@ void Curl_updatesocket(struct Curl_easy *data);
void Curl_expire(struct Curl_easy *data, time_t milli, expire_id);
void Curl_expire_clear(struct Curl_easy *data);
void Curl_expire_done(struct Curl_easy *data, expire_id id);
-bool Curl_pipeline_wanted(const struct Curl_multi* multi, int bits);
void Curl_detach_connnection(struct Curl_easy *data);
void Curl_attach_connnection(struct Curl_easy *data,
struct connectdata *conn);
+bool Curl_multiplex_wanted(const struct Curl_multi *multi);
void Curl_multi_handlePipeBreak(struct Curl_easy *data);
void Curl_set_in_callback(struct Curl_easy *data, bool value);
bool Curl_is_in_callback(struct Curl_easy *easy);

View File

@@ -1,404 +0,0 @@
/***************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 2013, Linus Nielsen Feltzing, <linus@haxx.se>
* Copyright (C) 2013 - 2018, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
* are also available at https://curl.haxx.se/docs/copyright.html.
*
* You may opt to use, copy, modify, merge, publish, distribute and/or sell
* copies of the Software, and permit persons to whom the Software is
* furnished to do so, under the terms of the COPYING file.
*
* This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
* KIND, either express or implied.
*
***************************************************************************/
#include "curl_setup.h"
#include <curl/curl.h>
#include "urldata.h"
#include "url.h"
#include "progress.h"
#include "multiif.h"
#include "pipeline.h"
#include "sendf.h"
#include "strcase.h"
#include "curl_memory.h"
/* The last #include file should be: */
#include "memdebug.h"
struct site_blacklist_entry {
struct curl_llist_element list;
unsigned short port;
char hostname[1];
};
static void site_blacklist_llist_dtor(void *user, void *element)
{
struct site_blacklist_entry *entry = element;
(void)user;
free(entry);
}
static void server_blacklist_llist_dtor(void *user, void *element)
{
(void)user;
free(element);
}
bool Curl_pipeline_penalized(struct Curl_easy *data,
struct connectdata *conn)
{
if(data) {
bool penalized = FALSE;
curl_off_t penalty_size =
Curl_multi_content_length_penalty_size(data->multi);
curl_off_t chunk_penalty_size =
Curl_multi_chunk_length_penalty_size(data->multi);
curl_off_t recv_size = -2; /* Make it easy to spot in the log */
/* Find the head of the recv pipe, if any */
if(conn->recv_pipe.head) {
struct Curl_easy *recv_handle = conn->recv_pipe.head->ptr;
recv_size = recv_handle->req.size;
if(penalty_size > 0 && recv_size > penalty_size)
penalized = TRUE;
}
if(chunk_penalty_size > 0 &&
(curl_off_t)conn->chunk.datasize > chunk_penalty_size)
penalized = TRUE;
infof(data, "Conn: %ld (%p) Receive pipe weight: (%"
CURL_FORMAT_CURL_OFF_T "/%" CURL_FORMAT_CURL_OFF_T
"), penalized: %s\n",
conn->connection_id, (void *)conn, recv_size,
conn->chunk.datasize, penalized?"TRUE":"FALSE");
return penalized;
}
return FALSE;
}
static CURLcode addHandleToPipeline(struct Curl_easy *data,
struct curl_llist *pipeline)
{
Curl_llist_insert_next(pipeline, pipeline->tail, data,
&data->pipeline_queue);
return CURLE_OK;
}
CURLcode Curl_add_handle_to_pipeline(struct Curl_easy *handle,
struct connectdata *conn)
{
struct curl_llist_element *sendhead = conn->send_pipe.head;
struct curl_llist *pipeline;
CURLcode result;
pipeline = &conn->send_pipe;
result = addHandleToPipeline(handle, pipeline);
if((conn->bundle->multiuse == BUNDLE_PIPELINING) &&
(pipeline == &conn->send_pipe && sendhead != conn->send_pipe.head)) {
/* this is a new one as head, expire it */
Curl_pipeline_leave_write(conn); /* not in use yet */
Curl_expire(conn->send_pipe.head->ptr, 0, EXPIRE_RUN_NOW);
}
#if 0 /* enable for pipeline debugging */
print_pipeline(conn);
#endif
return result;
}
/* Move this transfer from the sending list to the receiving list.
Pay special attention to the new sending list "leader" as it needs to get
checked to update what sockets it acts on.
*/
void Curl_move_handle_from_send_to_recv_pipe(struct Curl_easy *handle,
struct connectdata *conn)
{
struct curl_llist_element *curr;
curr = conn->send_pipe.head;
while(curr) {
if(curr->ptr == handle) {
Curl_llist_move(&conn->send_pipe, curr,
&conn->recv_pipe, conn->recv_pipe.tail);
if(conn->send_pipe.head) {
/* Since there's a new easy handle at the start of the send pipeline,
set its timeout value to 1ms to make it trigger instantly */
Curl_pipeline_leave_write(conn); /* not used now */
#ifdef DEBUGBUILD
infof(conn->data, "%p is at send pipe head B!\n",
(void *)conn->send_pipe.head->ptr);
#endif
Curl_expire(conn->send_pipe.head->ptr, 0, EXPIRE_RUN_NOW);
}
/* The receiver's list is not really interesting here since either this
handle is now first in the list and we'll deal with it soon, or
another handle is already first and thus is already taken care of */
break; /* we're done! */
}
curr = curr->next;
}
}
bool Curl_pipeline_site_blacklisted(struct Curl_easy *handle,
struct connectdata *conn)
{
if(handle->multi) {
struct curl_llist *blacklist =
Curl_multi_pipelining_site_bl(handle->multi);
if(blacklist) {
struct curl_llist_element *curr;
curr = blacklist->head;
while(curr) {
struct site_blacklist_entry *site;
site = curr->ptr;
if(strcasecompare(site->hostname, conn->host.name) &&
site->port == conn->remote_port) {
infof(handle, "Site %s:%d is pipeline blacklisted\n",
conn->host.name, conn->remote_port);
return TRUE;
}
curr = curr->next;
}
}
}
return FALSE;
}
CURLMcode Curl_pipeline_set_site_blacklist(char **sites,
struct curl_llist *list)
{
/* Free the old list */
if(list->size)
Curl_llist_destroy(list, NULL);
if(sites) {
Curl_llist_init(list, (curl_llist_dtor) site_blacklist_llist_dtor);
/* Parse the URLs and populate the list */
while(*sites) {
char *port;
struct site_blacklist_entry *entry;
entry = malloc(sizeof(struct site_blacklist_entry) + strlen(*sites));
if(!entry) {
Curl_llist_destroy(list, NULL);
return CURLM_OUT_OF_MEMORY;
}
strcpy(entry->hostname, *sites);
port = strchr(entry->hostname, ':');
if(port) {
*port = '\0';
port++;
entry->port = (unsigned short)strtol(port, NULL, 10);
}
else {
/* Default port number for HTTP */
entry->port = 80;
}
Curl_llist_insert_next(list, list->tail, entry, &entry->list);
sites++;
}
}
return CURLM_OK;
}
struct blacklist_node {
struct curl_llist_element list;
char server_name[1];
};
bool Curl_pipeline_server_blacklisted(struct Curl_easy *handle,
char *server_name)
{
if(handle->multi && server_name) {
struct curl_llist *list =
Curl_multi_pipelining_server_bl(handle->multi);
struct curl_llist_element *e = list->head;
while(e) {
struct blacklist_node *bl = (struct blacklist_node *)e;
if(strncasecompare(bl->server_name, server_name,
strlen(bl->server_name))) {
infof(handle, "Server %s is blacklisted\n", server_name);
return TRUE;
}
e = e->next;
}
DEBUGF(infof(handle, "Server %s is not blacklisted\n", server_name));
}
return FALSE;
}
CURLMcode Curl_pipeline_set_server_blacklist(char **servers,
struct curl_llist *list)
{
/* Free the old list */
if(list->size)
Curl_llist_destroy(list, NULL);
if(servers) {
Curl_llist_init(list, (curl_llist_dtor) server_blacklist_llist_dtor);
/* Parse the URLs and populate the list */
while(*servers) {
struct blacklist_node *n;
size_t len = strlen(*servers);
n = malloc(sizeof(struct blacklist_node) + len);
if(!n) {
Curl_llist_destroy(list, NULL);
return CURLM_OUT_OF_MEMORY;
}
strcpy(n->server_name, *servers);
Curl_llist_insert_next(list, list->tail, n, &n->list);
servers++;
}
}
return CURLM_OK;
}
static bool pipe_head(struct Curl_easy *data,
struct curl_llist *pipeline)
{
if(pipeline) {
struct curl_llist_element *curr = pipeline->head;
if(curr)
return (curr->ptr == data) ? TRUE : FALSE;
}
return FALSE;
}
/* returns TRUE if the given handle is head of the recv pipe */
bool Curl_recvpipe_head(struct Curl_easy *data,
struct connectdata *conn)
{
return pipe_head(data, &conn->recv_pipe);
}
/* returns TRUE if the given handle is head of the send pipe */
bool Curl_sendpipe_head(struct Curl_easy *data,
struct connectdata *conn)
{
return pipe_head(data, &conn->send_pipe);
}
/*
* Check if the write channel is available and this handle as at the head,
* then grab the channel and return TRUE.
*
* If not available, return FALSE.
*/
bool Curl_pipeline_checkget_write(struct Curl_easy *data,
struct connectdata *conn)
{
if(conn->bits.multiplex)
/* when multiplexing, we can use it at once */
return TRUE;
if(!conn->writechannel_inuse && Curl_sendpipe_head(data, conn)) {
/* Grab the channel */
conn->writechannel_inuse = TRUE;
return TRUE;
}
return FALSE;
}
/*
* Check if the read channel is available and this handle as at the head, then
* grab the channel and return TRUE.
*
* If not available, return FALSE.
*/
bool Curl_pipeline_checkget_read(struct Curl_easy *data,
struct connectdata *conn)
{
if(conn->bits.multiplex)
/* when multiplexing, we can use it at once */
return TRUE;
if(!conn->readchannel_inuse && Curl_recvpipe_head(data, conn)) {
/* Grab the channel */
conn->readchannel_inuse = TRUE;
return TRUE;
}
return FALSE;
}
/*
* The current user of the pipeline write channel gives it up.
*/
void Curl_pipeline_leave_write(struct connectdata *conn)
{
conn->writechannel_inuse = FALSE;
}
/*
* The current user of the pipeline read channel gives it up.
*/
void Curl_pipeline_leave_read(struct connectdata *conn)
{
conn->readchannel_inuse = FALSE;
}
#if 0
void print_pipeline(struct connectdata *conn)
{
struct curl_llist_element *curr;
struct connectbundle *cb_ptr;
struct Curl_easy *data = conn->data;
cb_ptr = conn->bundle;
if(cb_ptr) {
curr = cb_ptr->conn_list->head;
while(curr) {
conn = curr->ptr;
infof(data, "- Conn %ld (%p) send_pipe: %zu, recv_pipe: %zu\n",
conn->connection_id,
(void *)conn,
conn->send_pipe->size,
conn->recv_pipe->size);
curr = curr->next;
}
}
}
#endif

View File

@ -1,56 +0,0 @@
#ifndef HEADER_CURL_PIPELINE_H
#define HEADER_CURL_PIPELINE_H
/***************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 2015 - 2017, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 2013 - 2014, Linus Nielsen Feltzing, <linus@haxx.se>
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
* are also available at https://curl.haxx.se/docs/copyright.html.
*
* You may opt to use, copy, modify, merge, publish, distribute and/or sell
* copies of the Software, and permit persons to whom the Software is
* furnished to do so, under the terms of the COPYING file.
*
* This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
* KIND, either express or implied.
*
***************************************************************************/
CURLcode Curl_add_handle_to_pipeline(struct Curl_easy *handle,
struct connectdata *conn);
void Curl_move_handle_from_send_to_recv_pipe(struct Curl_easy *handle,
struct connectdata *conn);
bool Curl_pipeline_penalized(struct Curl_easy *data,
struct connectdata *conn);
bool Curl_pipeline_site_blacklisted(struct Curl_easy *handle,
struct connectdata *conn);
CURLMcode Curl_pipeline_set_site_blacklist(char **sites,
struct curl_llist *list_ptr);
bool Curl_pipeline_server_blacklisted(struct Curl_easy *handle,
char *server_name);
CURLMcode Curl_pipeline_set_server_blacklist(char **servers,
struct curl_llist *list_ptr);
bool Curl_pipeline_checkget_write(struct Curl_easy *data,
struct connectdata *conn);
bool Curl_pipeline_checkget_read(struct Curl_easy *data,
struct connectdata *conn);
void Curl_pipeline_leave_write(struct connectdata *conn);
void Curl_pipeline_leave_read(struct connectdata *conn);
bool Curl_recvpipe_head(struct Curl_easy *data,
struct connectdata *conn);
bool Curl_sendpipe_head(struct Curl_easy *data,
struct connectdata *conn);
#endif /* HEADER_CURL_PIPELINE_H */

View File

@ -48,7 +48,6 @@
* -server CSeq counter
* -digest authentication
* -connect through proxy
* -pipelining?
*/

View File

@ -724,10 +724,6 @@ CURLcode Curl_read(struct connectdata *conn, /* connection data */
char *buffertofill = NULL;
struct Curl_easy *data = conn->data;
/* if HTTP/1 pipelining is both wanted and possible */
bool pipelining = Curl_pipeline_wanted(data->multi, CURLPIPE_HTTP1) &&
(conn->bundle->multiuse == BUNDLE_PIPELINING);
/* Set 'num' to 0 or 1, depending on which socket that has been sent here.
If it is the second socket, we set num to 1. Otherwise to 0. This lets
us use the correct ssl handle. */
@ -735,40 +731,13 @@ CURLcode Curl_read(struct connectdata *conn, /* connection data */
*n = 0; /* reset amount to zero */ *n = 0; /* reset amount to zero */
/* If session can pipeline, check connection buffer */ bytesfromsocket = CURLMIN(sizerequested, (size_t)data->set.buffer_size);
if(pipelining) { buffertofill = buf;
size_t bytestocopy = CURLMIN(conn->buf_len - conn->read_pos,
sizerequested);
/* Copy from our master buffer first if we have some unread data there*/
if(bytestocopy > 0) {
memcpy(buf, conn->master_buffer + conn->read_pos, bytestocopy);
conn->read_pos += bytestocopy;
conn->bits.stream_was_rewound = FALSE;
*n = (ssize_t)bytestocopy;
return CURLE_OK;
}
/* If we come here, it means that there is no data to read from the buffer,
* so we read from the socket */
bytesfromsocket = CURLMIN(sizerequested, MASTERBUF_SIZE);
buffertofill = conn->master_buffer;
}
else {
bytesfromsocket = CURLMIN(sizerequested, (size_t)data->set.buffer_size);
buffertofill = buf;
}
nread = conn->recv[num](conn, num, buffertofill, bytesfromsocket, &result); nread = conn->recv[num](conn, num, buffertofill, bytesfromsocket, &result);
if(nread < 0) if(nread < 0)
return result; return result;
if(pipelining) {
memcpy(buf, conn->master_buffer, nread);
conn->buf_len = nread;
conn->read_pos = nread;
}
*n += nread; *n += nread;
return CURLE_OK; return CURLE_OK;

View File

@ -506,35 +506,6 @@ static int data_pending(const struct connectdata *conn)
#endif
}
static void read_rewind(struct connectdata *conn,
size_t thismuch)
{
DEBUGASSERT(conn->read_pos >= thismuch);
conn->read_pos -= thismuch;
conn->bits.stream_was_rewound = TRUE;
#ifdef DEBUGBUILD
{
char buf[512 + 1];
size_t show;
show = CURLMIN(conn->buf_len - conn->read_pos, sizeof(buf)-1);
if(conn->master_buffer) {
memcpy(buf, conn->master_buffer + conn->read_pos, show);
buf[show] = '\0';
}
else {
buf[0] = '\0';
}
DEBUGF(infof(conn->data,
"Buffer after stream rewind (read_pos = %zu): [%s]\n",
conn->read_pos, buf));
}
#endif
}
/*
* Check to see if CURLOPT_TIMECONDITION was met by comparing the time of the
* remote document with the time provided by CURLOPT_TIMEVAL
@ -609,9 +580,7 @@ static CURLcode readwrite_data(struct Curl_easy *data,
conn->httpversion == 20) && conn->httpversion == 20) &&
#endif #endif
k->size != -1 && !k->header) { k->size != -1 && !k->header) {
/* make sure we don't read "too much" if we can help it since we /* make sure we don't read too much */
might be pipelining and then someone else might want to read what
follows! */
curl_off_t totalleft = k->size - k->bytecount; curl_off_t totalleft = k->size - k->bytecount;
if(totalleft < (curl_off_t)bytestoread) if(totalleft < (curl_off_t)bytestoread)
bytestoread = (size_t)totalleft; bytestoread = (size_t)totalleft;
@ -693,20 +662,11 @@ static CURLcode readwrite_data(struct Curl_easy *data,
/* We've stopped dealing with input, get out of the do-while loop */ /* We've stopped dealing with input, get out of the do-while loop */
if(nread > 0) { if(nread > 0) {
if(Curl_pipeline_wanted(conn->data->multi, CURLPIPE_HTTP1)) { infof(data,
infof(data, "Excess found:"
"Rewinding stream by : %zd" " excess = %zd"
" bytes on url %s (zero-length body)\n", " url = %s (zero-length body)\n",
nread, data->state.up.path); nread, data->state.up.path);
read_rewind(conn, (size_t)nread);
}
else {
infof(data,
"Excess found in a non pipelined read:"
" excess = %zd"
" url = %s (zero-length body)\n",
nread, data->state.up.path);
}
} }
break; break;
@ -837,19 +797,12 @@ static CURLcode readwrite_data(struct Curl_easy *data,
/* There are now possibly N number of bytes at the end of the /* There are now possibly N number of bytes at the end of the
str buffer that weren't written to the client. str buffer that weren't written to the client.
We DO care about this data if we are pipelining.
Push it back to be read on the next pass. */ Push it back to be read on the next pass. */
dataleft = conn->chunk.dataleft; dataleft = conn->chunk.dataleft;
if(dataleft != 0) { if(dataleft != 0) {
infof(conn->data, "Leftovers after chunking: %zu bytes\n", infof(conn->data, "Leftovers after chunking: %zu bytes\n",
dataleft); dataleft);
if(Curl_pipeline_wanted(conn->data->multi, CURLPIPE_HTTP1)) {
/* only attempt the rewind if we truly are pipelining */
infof(conn->data, "Rewinding %zu bytes\n",dataleft);
read_rewind(conn, dataleft);
}
} }
} }
/* If it returned OK, we just keep going */ /* If it returned OK, we just keep going */
@ -868,25 +821,13 @@ static CURLcode readwrite_data(struct Curl_easy *data,
excess = (size_t)(k->bytecount + nread - k->maxdownload); excess = (size_t)(k->bytecount + nread - k->maxdownload);
if(excess > 0 && !k->ignorebody) { if(excess > 0 && !k->ignorebody) {
if(Curl_pipeline_wanted(conn->data->multi, CURLPIPE_HTTP1)) { infof(data,
infof(data, "Excess found in a read:"
"Rewinding stream by : %zu" " excess = %zu"
" bytes on url %s (size = %" CURL_FORMAT_CURL_OFF_T ", size = %" CURL_FORMAT_CURL_OFF_T
", maxdownload = %" CURL_FORMAT_CURL_OFF_T ", maxdownload = %" CURL_FORMAT_CURL_OFF_T
", bytecount = %" CURL_FORMAT_CURL_OFF_T ", nread = %zd)\n", ", bytecount = %" CURL_FORMAT_CURL_OFF_T "\n",
excess, data->state.up.path, excess, k->size, k->maxdownload, k->bytecount);
k->size, k->maxdownload, k->bytecount, nread);
read_rewind(conn, excess);
}
else {
infof(data,
"Excess found in a non pipelined read:"
" excess = %zu"
", size = %" CURL_FORMAT_CURL_OFF_T
", maxdownload = %" CURL_FORMAT_CURL_OFF_T
", bytecount = %" CURL_FORMAT_CURL_OFF_T "\n",
excess, k->size, k->maxdownload, k->bytecount);
}
} }
nread = (ssize_t) (k->maxdownload - k->bytecount); nread = (ssize_t) (k->maxdownload - k->bytecount);

283
lib/url.c
View File

@ -116,7 +116,6 @@ bool curl_win32_idn_to_ascii(const char *in, char **out);
#include "http_proxy.h" #include "http_proxy.h"
#include "conncache.h" #include "conncache.h"
#include "multihandle.h" #include "multihandle.h"
#include "pipeline.h"
#include "dotdot.h" #include "dotdot.h"
#include "strdup.h" #include "strdup.h"
#include "setopt.h" #include "setopt.h"
@ -739,14 +738,10 @@ static void conn_free(struct connectdata *conn)
Curl_safefree(conn->secondaryhostname); Curl_safefree(conn->secondaryhostname);
Curl_safefree(conn->http_proxy.host.rawalloc); /* http proxy name buffer */ Curl_safefree(conn->http_proxy.host.rawalloc); /* http proxy name buffer */
Curl_safefree(conn->socks_proxy.host.rawalloc); /* socks proxy name buffer */ Curl_safefree(conn->socks_proxy.host.rawalloc); /* socks proxy name buffer */
Curl_safefree(conn->master_buffer);
Curl_safefree(conn->connect_state); Curl_safefree(conn->connect_state);
conn_reset_all_postponed_data(conn); conn_reset_all_postponed_data(conn);
Curl_llist_destroy(&conn->easyq, NULL);
Curl_llist_destroy(&conn->send_pipe, NULL);
Curl_llist_destroy(&conn->recv_pipe, NULL);
Curl_safefree(conn->localdev); Curl_safefree(conn->localdev);
Curl_free_primary_ssl_config(&conn->ssl_config); Curl_free_primary_ssl_config(&conn->ssl_config);
Curl_free_primary_ssl_config(&conn->proxy_ssl_config); Curl_free_primary_ssl_config(&conn->proxy_ssl_config);
@ -843,28 +838,21 @@ static bool SocketIsDead(curl_socket_t sock)
} }
/* /*
* IsPipeliningPossible() * IsMultiplexingPossible()
* *
* Return a bitmask with the available pipelining and multiplexing options for * Return a bitmask with the available multiplexing options for the given
* the given requested connection. * requested connection.
*/ */
static int IsPipeliningPossible(const struct Curl_easy *handle, static int IsMultiplexingPossible(const struct Curl_easy *handle,
const struct connectdata *conn) const struct connectdata *conn)
{ {
int avail = 0; int avail = 0;
/* If a HTTP protocol and pipelining is enabled */ /* If a HTTP protocol and multiplexing is enabled */
if((conn->handler->protocol & PROTO_FAMILY_HTTP) && if((conn->handler->protocol & PROTO_FAMILY_HTTP) &&
(!conn->bits.protoconnstart || !conn->bits.close)) { (!conn->bits.protoconnstart || !conn->bits.close)) {
if(Curl_pipeline_wanted(handle->multi, CURLPIPE_HTTP1) && if(Curl_multiplex_wanted(handle->multi) &&
(handle->set.httpversion != CURL_HTTP_VERSION_1_0) &&
(handle->set.httpreq == HTTPREQ_GET ||
handle->set.httpreq == HTTPREQ_HEAD))
/* didn't ask for HTTP/1.0 and a GET or HEAD */
avail |= CURLPIPE_HTTP1;
if(Curl_pipeline_wanted(handle->multi, CURLPIPE_MULTIPLEX) &&
(handle->set.httpversion >= CURL_HTTP_VERSION_2)) (handle->set.httpversion >= CURL_HTTP_VERSION_2))
/* allows HTTP/2 */ /* allows HTTP/2 */
avail |= CURLPIPE_MULTIPLEX; avail |= CURLPIPE_MULTIPLEX;
@ -872,84 +860,6 @@ static int IsPipeliningPossible(const struct Curl_easy *handle,
return avail; return avail;
} }
/* Returns non-zero if a handle was removed */
int Curl_removeHandleFromPipeline(struct Curl_easy *handle,
struct curl_llist *pipeline)
{
if(pipeline) {
struct curl_llist_element *curr;
curr = pipeline->head;
while(curr) {
if(curr->ptr == handle) {
Curl_llist_remove(pipeline, curr, NULL);
return 1; /* we removed a handle */
}
curr = curr->next;
}
}
return 0;
}
#if 0 /* this code is saved here as it is useful for debugging purposes */
static void Curl_printPipeline(struct curl_llist *pipeline)
{
struct curl_llist_element *curr;
curr = pipeline->head;
while(curr) {
struct Curl_easy *data = (struct Curl_easy *) curr->ptr;
infof(data, "Handle in pipeline: %s\n", data->state.path);
curr = curr->next;
}
}
#endif
static struct Curl_easy* gethandleathead(struct curl_llist *pipeline)
{
struct curl_llist_element *curr = pipeline->head;
#ifdef DEBUGBUILD
{
struct curl_llist_element *p = pipeline->head;
while(p) {
struct Curl_easy *e = p->ptr;
DEBUGASSERT(GOOD_EASY_HANDLE(e));
p = p->next;
}
}
#endif
if(curr) {
return (struct Curl_easy *) curr->ptr;
}
return NULL;
}
/* remove the specified connection from all (possible) pipelines and related
queues */
void Curl_getoff_all_pipelines(struct Curl_easy *data,
struct connectdata *conn)
{
if(!conn->bundle)
return;
if(conn->bundle->multiuse == BUNDLE_PIPELINING) {
bool recv_head = (conn->readchannel_inuse &&
Curl_recvpipe_head(data, conn));
bool send_head = (conn->writechannel_inuse &&
Curl_sendpipe_head(data, conn));
if(Curl_removeHandleFromPipeline(data, &conn->recv_pipe) && recv_head)
Curl_pipeline_leave_read(conn);
if(Curl_removeHandleFromPipeline(data, &conn->send_pipe) && send_head)
Curl_pipeline_leave_write(conn);
}
else {
(void)Curl_removeHandleFromPipeline(data, &conn->recv_pipe);
(void)Curl_removeHandleFromPipeline(data, &conn->send_pipe);
}
}
static bool static bool
proxy_info_matches(const struct proxy_info* data, proxy_info_matches(const struct proxy_info* data,
const struct proxy_info* needle) const struct proxy_info* needle)
@ -974,10 +884,8 @@ proxy_info_matches(const struct proxy_info* data,
static bool extract_if_dead(struct connectdata *conn, static bool extract_if_dead(struct connectdata *conn,
struct Curl_easy *data) struct Curl_easy *data)
{ {
size_t pipeLen = conn->send_pipe.size + conn->recv_pipe.size; if(!CONN_INUSE(conn) && !conn->data) {
if(!pipeLen && !CONN_INUSE(conn) && !conn->data) { /* The check for a dead socket makes sense only if the connection isn't in
/* The check for a dead socket makes sense only if there are no
handles in pipeline and the connection isn't already marked in
use */ use */
bool dead; bool dead;
if(conn->handler->connection_check) { if(conn->handler->connection_check) {
@ -1047,13 +955,6 @@ static void prune_dead_connections(struct Curl_easy *data)
} }
} }
static size_t max_pipeline_length(struct Curl_multi *multi)
{
return multi ? multi->max_pipeline_length : 0;
}
/* /*
* Given one filled in connection struct (named needle), this function should * Given one filled in connection struct (named needle), this function should
* detect if there already is one that has all the significant details * detect if there already is one that has all the significant details
@ -1063,8 +964,7 @@ static size_t max_pipeline_length(struct Curl_multi *multi)
* connection as 'in-use'. It must later be called with ConnectionDone() to * connection as 'in-use'. It must later be called with ConnectionDone() to
* return back to 'idle' (unused) state. * return back to 'idle' (unused) state.
* *
* The force_reuse flag is set if the connection must be used, even if * The force_reuse flag is set if the connection must be used.
* the pipelining strategy wants to open a new connection instead of reusing.
*/ */
static bool static bool
ConnectionExists(struct Curl_easy *data, ConnectionExists(struct Curl_easy *data,
@ -1076,7 +976,7 @@ ConnectionExists(struct Curl_easy *data,
struct connectdata *check; struct connectdata *check;
struct connectdata *chosen = 0; struct connectdata *chosen = 0;
bool foundPendingCandidate = FALSE; bool foundPendingCandidate = FALSE;
int canpipe = IsPipeliningPossible(data, needle); bool canmultiplex = IsMultiplexingPossible(data, needle);
struct connectbundle *bundle; struct connectbundle *bundle;
#ifdef USE_NTLM #ifdef USE_NTLM
@ -1092,59 +992,43 @@ ConnectionExists(struct Curl_easy *data,
*force_reuse = FALSE; *force_reuse = FALSE;
*waitpipe = FALSE; *waitpipe = FALSE;
/* We can't pipeline if the site is blacklisted */
if((canpipe & CURLPIPE_HTTP1) &&
Curl_pipeline_site_blacklisted(data, needle))
canpipe &= ~ CURLPIPE_HTTP1;
/* Look up the bundle with all the connections to this particular host. /* Look up the bundle with all the connections to this particular host.
Locks the connection cache, beware of early returns! */ Locks the connection cache, beware of early returns! */
bundle = Curl_conncache_find_bundle(needle, data->state.conn_cache); bundle = Curl_conncache_find_bundle(needle, data->state.conn_cache);
if(bundle) { if(bundle) {
/* Max pipe length is zero (unlimited) for multiplexed connections */ /* Max pipe length is zero (unlimited) for multiplexed connections */
size_t max_pipe_len = (bundle->multiuse != BUNDLE_MULTIPLEX)?
max_pipeline_length(data->multi):0;
size_t best_pipe_len = max_pipe_len;
struct curl_llist_element *curr; struct curl_llist_element *curr;
infof(data, "Found bundle for host %s: %p [%s]\n", infof(data, "Found bundle for host %s: %p [%s]\n",
(needle->bits.conn_to_host ? needle->conn_to_host.name : (needle->bits.conn_to_host ? needle->conn_to_host.name :
needle->host.name), (void *)bundle, needle->host.name), (void *)bundle,
(bundle->multiuse == BUNDLE_PIPELINING ? (bundle->multiuse == BUNDLE_MULTIPLEX ?
"can pipeline" : "can multiplex" : "serially"));
(bundle->multiuse == BUNDLE_MULTIPLEX ?
"can multiplex" : "serially")));
/* We can't pipeline if we don't know anything about the server */ /* We can't multiplex if we don't know anything about the server */
if(canpipe) { if(canmultiplex) {
if(bundle->multiuse <= BUNDLE_UNKNOWN) { if(bundle->multiuse <= BUNDLE_UNKNOWN) {
if((bundle->multiuse == BUNDLE_UNKNOWN) && data->set.pipewait) { if((bundle->multiuse == BUNDLE_UNKNOWN) && data->set.pipewait) {
infof(data, "Server doesn't support multi-use yet, wait\n"); infof(data, "Server doesn't support multiplex yet, wait\n");
*waitpipe = TRUE; *waitpipe = TRUE;
Curl_conncache_unlock(data); Curl_conncache_unlock(data);
return FALSE; /* no re-use */ return FALSE; /* no re-use */
} }
infof(data, "Server doesn't support multi-use (yet)\n"); infof(data, "Server doesn't support multiplex (yet)\n");
canpipe = 0; canmultiplex = FALSE;
} }
if((bundle->multiuse == BUNDLE_PIPELINING) && if((bundle->multiuse == BUNDLE_MULTIPLEX) &&
!Curl_pipeline_wanted(data->multi, CURLPIPE_HTTP1)) { !Curl_multiplex_wanted(data->multi)) {
/* not asked for, switch off */
infof(data, "Could pipeline, but not asked to!\n");
canpipe = 0;
}
else if((bundle->multiuse == BUNDLE_MULTIPLEX) &&
!Curl_pipeline_wanted(data->multi, CURLPIPE_MULTIPLEX)) {
infof(data, "Could multiplex, but not asked to!\n"); infof(data, "Could multiplex, but not asked to!\n");
canpipe = 0; canmultiplex = FALSE;
} }
} }
curr = bundle->conn_list.head; curr = bundle->conn_list.head;
while(curr) { while(curr) {
bool match = FALSE; bool match = FALSE;
size_t pipeLen; size_t multiplexed;
/* /*
* Note that if we use a HTTP proxy in normal mode (no tunneling), we * Note that if we use a HTTP proxy in normal mode (no tunneling), we
@ -1163,29 +1047,14 @@ ConnectionExists(struct Curl_easy *data,
continue; continue;
} }
pipeLen = check->send_pipe.size + check->recv_pipe.size; multiplexed = CONN_INUSE(check);
if(canpipe) { if(canmultiplex) {
if(check->bits.protoconnstart && check->bits.close) if(check->bits.protoconnstart && check->bits.close)
continue; continue;
if(!check->bits.multiplex) {
/* If not multiplexing, make sure the connection is fine for HTTP/1
pipelining */
struct Curl_easy* sh = gethandleathead(&check->send_pipe);
struct Curl_easy* rh = gethandleathead(&check->recv_pipe);
if(sh) {
if(!(IsPipeliningPossible(sh, check) & CURLPIPE_HTTP1))
continue;
}
else if(rh) {
if(!(IsPipeliningPossible(rh, check) & CURLPIPE_HTTP1))
continue;
}
}
} }
else { else {
if(pipeLen > 0) { if(multiplexed) {
/* can only happen within multi handles, and means that another easy /* can only happen within multi handles, and means that another easy
handle is using this connection */ handle is using this connection */
continue; continue;
@ -1210,13 +1079,6 @@ ConnectionExists(struct Curl_easy *data,
to get closed. */ to get closed. */
infof(data, "Connection #%ld isn't open enough, can't reuse\n", infof(data, "Connection #%ld isn't open enough, can't reuse\n",
check->connection_id); check->connection_id);
#ifdef DEBUGBUILD
if(check->recv_pipe.size > 0) {
infof(data,
"BAD! Unconnected #%ld has a non-empty recv pipeline!\n",
check->connection_id);
}
#endif
continue; continue;
} }
} }
@ -1287,15 +1149,15 @@ ConnectionExists(struct Curl_easy *data,
} }
} }
if(!canpipe && check->data) if(!canmultiplex && check->data)
/* this request can't be pipelined but the checked connection is /* this request can't be multiplexed but the checked connection is
already in use so we skip it */ already in use so we skip it */
continue; continue;
if(CONN_INUSE(check) && check->data && if(CONN_INUSE(check) && check->data &&
(check->data->multi != needle->data->multi)) (check->data->multi != needle->data->multi))
/* this could be subject for pipeline/multiplex use, but only if they /* this could be subject for multiplex use, but only if they belong to
belong to the same multi handle */ * the same multi handle */
continue; continue;
if(needle->localdev || needle->localport) { if(needle->localdev || needle->localport) {
@ -1424,55 +1286,32 @@ ConnectionExists(struct Curl_easy *data,
continue; continue;
} }
#endif #endif
if(canpipe) { if(canmultiplex) {
/* We can pipeline if we want to. Let's continue looking for /* We can multiplex if we want to. Let's continue looking for
the optimal connection to use, i.e the shortest pipe that is not the optimal connection to use. */
blacklisted. */
if(pipeLen == 0) { if(!multiplexed) {
/* We have the optimal connection. Let's stop looking. */ /* We have the optimal connection. Let's stop looking. */
chosen = check; chosen = check;
break; break;
} }
/* We can't use the connection if the pipe is full */
if(max_pipe_len && (pipeLen >= max_pipe_len)) {
infof(data, "Pipe is full, skip (%zu)\n", pipeLen);
continue;
}
#ifdef USE_NGHTTP2 #ifdef USE_NGHTTP2
/* If multiplexed, make sure we don't go over concurrency limit */ /* If multiplexed, make sure we don't go over concurrency limit */
if(check->bits.multiplex) { if(check->bits.multiplex) {
/* Multiplexed connections can only be HTTP/2 for now */ /* Multiplexed connections can only be HTTP/2 for now */
struct http_conn *httpc = &check->proto.httpc; struct http_conn *httpc = &check->proto.httpc;
if(pipeLen >= httpc->settings.max_concurrent_streams) { if(multiplexed >= httpc->settings.max_concurrent_streams) {
infof(data, "MAX_CONCURRENT_STREAMS reached, skip (%zu)\n", infof(data, "MAX_CONCURRENT_STREAMS reached, skip (%zu)\n",
pipeLen); multiplexed);
continue; continue;
} }
} }
#endif #endif
/* We can't use the connection if the pipe is penalized */ /* When not multiplexed, we have a match here! */
if(Curl_pipeline_penalized(data, check)) { chosen = check;
infof(data, "Penalized, skip\n"); infof(data, "Multiplexed connection found!\n");
continue; break;
}
if(max_pipe_len) {
if(pipeLen < best_pipe_len) {
/* This connection has a shorter pipe so far. We'll pick this
and continue searching */
chosen = check;
best_pipe_len = pipeLen;
continue;
}
}
else {
/* When not pipelining (== multiplexed), we have a match here! */
chosen = check;
infof(data, "Multiplexed connection found!\n");
break;
}
} }
else { else {
/* We have found a connection. Let's stop searching. */ /* We have found a connection. Let's stop searching. */
@ -1929,17 +1768,8 @@ static struct connectdata *allocate_conn(struct Curl_easy *data)
conn->response_header = NULL; conn->response_header = NULL;
#endif #endif
if(Curl_pipeline_wanted(data->multi, CURLPIPE_HTTP1) && /* Initialize the easy handle list */
!conn->master_buffer) { Curl_llist_init(&conn->easyq, (curl_llist_dtor) llist_dtor);
/* Allocate master_buffer to be used for HTTP/1 pipelining */
conn->master_buffer = calloc(MASTERBUF_SIZE, sizeof(char));
if(!conn->master_buffer)
goto error;
}
/* Initialize the pipeline lists */
Curl_llist_init(&conn->send_pipe, (curl_llist_dtor) llist_dtor);
Curl_llist_init(&conn->recv_pipe, (curl_llist_dtor) llist_dtor);
#ifdef HAVE_GSSAPI #ifdef HAVE_GSSAPI
conn->data_prot = PROT_CLEAR; conn->data_prot = PROT_CLEAR;
@ -1962,10 +1792,7 @@ static struct connectdata *allocate_conn(struct Curl_easy *data)
return conn; return conn;
error: error:
Curl_llist_destroy(&conn->send_pipe, NULL); Curl_llist_destroy(&conn->easyq, NULL);
Curl_llist_destroy(&conn->recv_pipe, NULL);
free(conn->master_buffer);
free(conn->localdev); free(conn->localdev);
#ifdef USE_SSL #ifdef USE_SSL
free(conn->ssl_extra); free(conn->ssl_extra);
@ -3614,11 +3441,7 @@ static void reuse_conn(struct connectdata *old_conn,
Curl_safefree(old_conn->http_proxy.passwd); Curl_safefree(old_conn->http_proxy.passwd);
Curl_safefree(old_conn->socks_proxy.passwd); Curl_safefree(old_conn->socks_proxy.passwd);
Curl_safefree(old_conn->localdev); Curl_safefree(old_conn->localdev);
Curl_llist_destroy(&old_conn->easyq, NULL);
Curl_llist_destroy(&old_conn->send_pipe, NULL);
Curl_llist_destroy(&old_conn->recv_pipe, NULL);
Curl_safefree(old_conn->master_buffer);
#ifdef USE_UNIX_SOCKETS #ifdef USE_UNIX_SOCKETS
Curl_safefree(old_conn->unix_domain_socket); Curl_safefree(old_conn->unix_domain_socket);
@ -3933,12 +3756,12 @@ static CURLcode create_conn(struct Curl_easy *data,
reuse = ConnectionExists(data, conn, &conn_temp, &force_reuse, &waitpipe); reuse = ConnectionExists(data, conn, &conn_temp, &force_reuse, &waitpipe);
/* If we found a reusable connection that is now marked as in use, we may /* If we found a reusable connection that is now marked as in use, we may
still want to open a new connection if we are pipelining. */ still want to open a new connection if we are multiplexing. */
if(reuse && !force_reuse && IsPipeliningPossible(data, conn_temp)) { if(reuse && !force_reuse && IsMultiplexingPossible(data, conn_temp)) {
size_t pipelen = conn_temp->send_pipe.size + conn_temp->recv_pipe.size; size_t multiplexed = CONN_INUSE(conn_temp);
if(pipelen > 0) { if(multiplexed > 0) {
infof(data, "Found connection %ld, with requests in the pipe (%zu)\n", infof(data, "Found connection %ld, with %zu requests on it\n",
conn_temp->connection_id, pipelen); conn_temp->connection_id, multiplexed);
if(Curl_conncache_bundle_size(conn_temp) < max_host_connections && if(Curl_conncache_bundle_size(conn_temp) < max_host_connections &&
Curl_conncache_size(data) < max_total_connections) { Curl_conncache_size(data) < max_total_connections) {
@ -3988,7 +3811,7 @@ static CURLcode create_conn(struct Curl_easy *data,
} }
if(waitpipe) if(waitpipe)
/* There is a connection that *might* become usable for pipelining /* There is a connection that *might* become usable for multiplexing
"soon", and we wait for that */ "soon", and we wait for that */
connections_available = FALSE; connections_available = FALSE;
else { else {
@ -4201,7 +4024,7 @@ CURLcode Curl_connect(struct Curl_easy *data,
if(!result) { if(!result) {
if(CONN_INUSE(conn)) if(CONN_INUSE(conn))
/* pipelining */ /* multiplexed */
*protocol_done = TRUE; *protocol_done = TRUE;
else if(!*asyncp) { else if(!*asyncp) {
/* DNS resolution is done: that's either because this is a reused /* DNS resolution is done: that's either because this is a reused
@ -4219,7 +4042,7 @@ CURLcode Curl_connect(struct Curl_easy *data,
connectdata struct, free those here */ connectdata struct, free those here */
Curl_disconnect(data, conn, TRUE); Curl_disconnect(data, conn, TRUE);
} }
else if(!data->conn) else if(!result && !data->conn)
/* FILE: transfers already have the connection attached */ /* FILE: transfers already have the connection attached */
Curl_attach_connnection(data, conn); Curl_attach_connnection(data, conn);
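With pipelining gone, the pipe-wait logic above only matters for HTTP/2 multiplexing: a transfer can opt to wait for an existing connection to confirm multiplexing support instead of opening a second one. A minimal public-API sketch of that replacement workflow (placeholder URLs, error checking omitted):

#include <curl/curl.h>

int main(void)
{
  CURLM *multi = curl_multi_init();
  CURL *h1 = curl_easy_init();
  CURL *h2 = curl_easy_init();
  int running;

  /* multiplexing is what remains of the old pipelining option */
  curl_multi_setopt(multi, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX);

  curl_easy_setopt(h1, CURLOPT_URL, "https://example.com/a");
  curl_easy_setopt(h2, CURLOPT_URL, "https://example.com/b");

  /* prefer waiting for a multiplexed connection over opening a new one */
  curl_easy_setopt(h1, CURLOPT_PIPEWAIT, 1L);
  curl_easy_setopt(h2, CURLOPT_PIPEWAIT, 1L);

  curl_multi_add_handle(multi, h1);
  curl_multi_add_handle(multi, h2);

  do {
    curl_multi_perform(multi, &running);
    curl_multi_wait(multi, NULL, 0, 1000, NULL);
  } while(running);

  curl_multi_remove_handle(multi, h1);
  curl_multi_remove_handle(multi, h2);
  curl_easy_cleanup(h1);
  curl_easy_cleanup(h2);
  curl_multi_cleanup(multi);
  return 0;
}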

View File

@ -71,14 +71,7 @@ int Curl_doing_getsock(struct connectdata *conn,
CURLcode Curl_parse_login_details(const char *login, const size_t len,
char **userptr, char **passwdptr,
char **optionsptr);
void Curl_close_connections(struct Curl_easy *data);
int Curl_removeHandleFromPipeline(struct Curl_easy *handle,
struct curl_llist *pipeline);
/* remove the specified connection from all (possible) pipelines and related
queues */
void Curl_getoff_all_pipelines(struct Curl_easy *data,
struct connectdata *conn);
CURLcode Curl_upkeep(struct conncache *conn_cache, void *data);
const struct Curl_handler *Curl_builtin_scheme(const char *scheme);

View File

@ -144,10 +144,6 @@ typedef ssize_t (Curl_recv)(struct connectdata *conn, /* connection data */
#include <libssh2_sftp.h>
#endif /* HAVE_LIBSSH2_H */
/* The "master buffer" is for HTTP pipelining */
#define MASTERBUF_SIZE 16384
/* Initial size of the buffer to store headers in, it'll be enlarged in case
of need. */
#define HEADERSIZE 256
@ -796,11 +792,10 @@ struct connectdata {
void *closesocket_client; void *closesocket_client;
/* This is used by the connection cache logic. If this returns TRUE, this /* This is used by the connection cache logic. If this returns TRUE, this
handle is being used by one or more easy handles and can only used by any handle is still used by one or more easy handles and can only used by any
other easy handle without careful consideration (== only for other easy handle without careful consideration (== only for
pipelining/multiplexing) and it cannot be used by another multi multiplexing) and it cannot be used by another multi handle! */
handle! */ #define CONN_INUSE(c) ((c)->easyq.size)
#define CONN_INUSE(c) ((c)->send_pipe.size + (c)->recv_pipe.size)
/**** Fields set when inited and not modified again */ /**** Fields set when inited and not modified again */
long connection_id; /* Contains a unique number to make it easier to long connection_id; /* Contains a unique number to make it easier to
@ -950,16 +945,7 @@ struct connectdata {
struct kerberos5data krb5; /* variables into the structure definition, */ struct kerberos5data krb5; /* variables into the structure definition, */
#endif /* however, some of them are ftp specific. */ #endif /* however, some of them are ftp specific. */
struct curl_llist send_pipe; /* List of handles waiting to send on this struct curl_llist easyq; /* List of easy handles using this connection */
pipeline */
struct curl_llist recv_pipe; /* List of handles waiting to read their
responses on this pipeline */
char *master_buffer; /* The master buffer allocated on-demand;
used for pipelining. */
size_t read_pos; /* Current read position in the master buffer */
size_t buf_len; /* Length of the buffer?? */
curl_seek_callback seek_func; /* function that seeks the input */ curl_seek_callback seek_func; /* function that seeks the input */
void *seek_client; /* pointer to pass to the seek() above */ void *seek_client; /* pointer to pass to the seek() above */
@ -1727,8 +1713,8 @@ struct UserDefined {
bit ssl_enable_npn:1; /* TLS NPN extension? */ bit ssl_enable_npn:1; /* TLS NPN extension? */
bit ssl_enable_alpn:1;/* TLS ALPN extension? */ bit ssl_enable_alpn:1;/* TLS ALPN extension? */
bit path_as_is:1; /* allow dotdots? */ bit path_as_is:1; /* allow dotdots? */
bit pipewait:1; /* wait for pipe/multiplex status before starting a bit pipewait:1; /* wait for multiplex status before starting a new
new connection */ connection */
bit suppress_connect_headers:1; /* suppress proxy CONNECT response headers bit suppress_connect_headers:1; /* suppress proxy CONNECT response headers
from user callbacks */ from user callbacks */
bit dns_shuffle_addresses:1; /* whether to shuffle addresses before use */ bit dns_shuffle_addresses:1; /* whether to shuffle addresses before use */
@ -1769,8 +1755,8 @@ struct Curl_easy {
struct connectdata *conn; struct connectdata *conn;
struct curl_llist_element connect_queue; struct curl_llist_element connect_queue;
struct curl_llist_element pipeline_queue;
struct curl_llist_element sh_queue; /* list per Curl_sh_entry */ struct curl_llist_element sh_queue; /* list per Curl_sh_entry */
struct curl_llist_element conn_queue; /* list per connectdata */
CURLMstate mstate; /* the handle's state */ CURLMstate mstate; /* the handle's state */
CURLcode result; /* previous result */ CURLcode result; /* previous result */

View File

@ -156,8 +156,6 @@ auth_required if this is set and a POST/PUT is made without auth, the
idle do nothing after receiving the request, just "sit idle"
stream continuously send data to the client, never-ending
writedelay: [secs] delay this amount between reply packets
pipe: [num] tell the server to expect this many HTTP requests before
sending back anything, to allow pipelining tests
skip: [num] instructs the server to ignore reading this many bytes from a PUT
or POST request
@ -188,7 +186,6 @@ ftp-ipv6
ftps
http
http-ipv6
http-pipe
http-proxy
http-unix
https
@ -354,7 +351,6 @@ Available substitute variables include:
%HOST6IP - IPv6 address of the host running this test
%HOSTIP - IPv4 address of the host running this test
%HTTP6PORT - IPv6 port number of the HTTP server
%HTTPPIPEPORT - Port number of the HTTP pipelining server
%HTTPUNIXPATH - Path to the Unix socket of the HTTP server
%HTTPPORT - Port number of the HTTP server
%HTTPSPORT - Port number of the HTTPS server

View File

@ -70,7 +70,7 @@ test500 test501 test502 test503 test504 test505 test506 test507 test508 \
test509 test510 test511 test512 test513 test514 test515 test516 test517 \ test509 test510 test511 test512 test513 test514 test515 test516 test517 \
test518 test519 test520 test521 test522 test523 test524 test525 test526 \ test518 test519 test520 test521 test522 test523 test524 test525 test526 \
test527 test528 test529 test530 test531 test532 test533 test534 test535 \ test527 test528 test529 test530 test531 test532 test533 test534 test535 \
test536 test537 test538 test539 test540 test541 test542 test543 test544 \ test537 test538 test539 test540 test541 test542 test543 test544 \
test545 test546 test547 test548 test549 test550 test551 test552 test553 \ test545 test546 test547 test548 test549 test550 test551 test552 test553 \
test554 test555 test556 test557 test558 test559 test560 test561 test562 \ test554 test555 test556 test557 test558 test559 test560 test561 test562 \
test563 test564 test565 test566 test567 test568 test569 test570 test571 \ test563 test564 test565 test566 test567 test568 test569 test570 test571 \

View File

@ -1,74 +0,0 @@
<testcase>
<info>
<keywords>
HTTP
HTTP GET
pipelining
multi
</keywords>
</info>
<reply>
<data>
HTTP/1.1 404 Badness
Date: Thu, 09 Nov 2010 14:49:00 GMT
ETag: "21025-dc7-39462498"
Content-Length: 6
Content-Type: text/html
Funny-head: yesyes
hejsan
</data>
<data1>
HTTP/1.1 200 Fine
Date: Thu, 09 Nov 2010 14:49:00 GMT
Content-Length: 13
Connection: close
Content-Type: text/html
fine content
</data1>
<datacheck>
fine content
Finished!
</datacheck>
<servercmd>
pipe: 1
</servercmd>
</reply>
# Client-side
<client>
<server>
http
</server>
# tool is what to use instead of 'curl'
<tool>
lib536
</tool>
<name>
HTTP GET multi two files with FAILONERROR and pipelining
</name>
<command>
http://%HOSTIP:%HTTPPORT/536 http://%HOSTIP:%HTTPPORT/5360001
</command>
</client>
#
# Verify data after the test has been "shot"
<verify>
<protocol>
GET /536 HTTP/1.1
Host: %HOSTIP:%HTTPPORT
Accept: */*
GET /5360001 HTTP/1.1
Host: %HOSTIP:%HTTPPORT
Accept: */*
</protocol>
</verify>
</testcase>

View File

@ -16,7 +16,7 @@ noinst_PROGRAMS = chkhostname libauthretry libntlmconnect \
lib500 lib501 lib502 lib503 lib504 lib505 lib506 lib507 lib508 lib509 \ lib500 lib501 lib502 lib503 lib504 lib505 lib506 lib507 lib508 lib509 \
lib510 lib511 lib512 lib513 lib514 lib515 lib516 lib517 lib518 lib519 \ lib510 lib511 lib512 lib513 lib514 lib515 lib516 lib517 lib518 lib519 \
lib520 lib521 lib523 lib524 lib525 lib526 lib527 lib529 lib530 lib532 \ lib520 lib521 lib523 lib524 lib525 lib526 lib527 lib529 lib530 lib532 \
lib533 lib536 lib537 lib539 lib540 lib541 lib542 lib543 lib544 lib545 \ lib533 lib537 lib539 lib540 lib541 lib542 lib543 lib544 lib545 \
lib547 lib548 lib549 lib552 lib553 lib554 lib555 lib556 lib557 lib558 \ lib547 lib548 lib549 lib552 lib553 lib554 lib555 lib556 lib557 lib558 \
lib559 lib560 lib562 lib564 lib565 lib566 lib567 lib568 lib569 lib570 \ lib559 lib560 lib562 lib564 lib565 lib566 lib567 lib568 lib569 lib570 \
lib571 lib572 lib573 lib574 lib575 lib576 lib578 lib579 lib582 \ lib571 lib572 lib573 lib574 lib575 lib576 lib578 lib579 lib582 \
@ -160,10 +160,6 @@ lib533_SOURCES = lib533.c $(SUPPORTFILES) $(TESTUTIL) $(WARNLESS)
lib533_LDADD = $(TESTUTIL_LIBS)
lib533_CPPFLAGS = $(AM_CPPFLAGS)
lib536_SOURCES = lib536.c $(SUPPORTFILES) $(TESTUTIL) $(WARNLESS)
lib536_LDADD = $(TESTUTIL_LIBS)
lib536_CPPFLAGS = $(AM_CPPFLAGS)
lib537_SOURCES = lib537.c $(SUPPORTFILES) $(WARNLESS)
lib537_CPPFLAGS = $(AM_CPPFLAGS)

View File

@ -1,142 +0,0 @@
/***************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2011, 2017, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
* are also available at https://curl.haxx.se/docs/copyright.html.
*
* You may opt to use, copy, modify, merge, publish, distribute and/or sell
* copies of the Software, and permit persons to whom the Software is
* furnished to do so, under the terms of the COPYING file.
*
* This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
* KIND, either express or implied.
*
***************************************************************************/
#include "test.h"
#include <fcntl.h>
#include "testutil.h"
#include "warnless.h"
#include "memdebug.h"
#define TEST_HANG_TIMEOUT 60 * 1000
static int perform(CURLM *multi)
{
int handles;
fd_set fdread, fdwrite, fdexcep;
int res = 0;
for(;;) {
struct timeval interval;
int maxfd = -99;
interval.tv_sec = 0;
interval.tv_usec = 100000L; /* 100 ms */
res_multi_perform(multi, &handles);
if(res)
return res;
res_test_timedout();
if(res)
return res;
if(!handles)
break; /* done */
FD_ZERO(&fdread);
FD_ZERO(&fdwrite);
FD_ZERO(&fdexcep);
res_multi_fdset(multi, &fdread, &fdwrite, &fdexcep, &maxfd);
if(res)
return res;
/* At this point, maxfd is guaranteed to be greater or equal than -1. */
res_select_test(maxfd + 1, &fdread, &fdwrite, &fdexcep, &interval);
if(res)
return res;
res_test_timedout();
if(res)
return res;
}
return 0; /* success */
}
int test(char *URL)
{
CURLM *multi = NULL;
CURL *easy = NULL;
int res = 0;
start_test_timing();
global_init(CURL_GLOBAL_ALL);
multi_init(multi);
easy_init(easy);
multi_setopt(multi, CURLMOPT_PIPELINING, 1L);
easy_setopt(easy, CURLOPT_WRITEFUNCTION, fwrite);
easy_setopt(easy, CURLOPT_FAILONERROR, 1L);
easy_setopt(easy, CURLOPT_URL, URL);
res_multi_add_handle(multi, easy);
if(res) {
printf("curl_multi_add_handle() 1 failed\n");
goto test_cleanup;
}
res = perform(multi);
if(res) {
printf("retrieve 1 failed\n");
goto test_cleanup;
}
curl_multi_remove_handle(multi, easy);
curl_easy_reset(easy);
easy_setopt(easy, CURLOPT_FAILONERROR, 1L);
easy_setopt(easy, CURLOPT_URL, libtest_arg2);
res_multi_add_handle(multi, easy);
if(res) {
printf("curl_multi_add_handle() 2 failed\n");
goto test_cleanup;
}
res = perform(multi);
if(res) {
printf("retrieve 2 failed\n");
goto test_cleanup;
}
curl_multi_remove_handle(multi, easy);
test_cleanup:
/* undocumented cleanup sequence - type UB */
curl_easy_cleanup(easy);
curl_multi_cleanup(multi);
curl_global_cleanup();
printf("Finished!\n");
return res;
}

View File

@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___ * | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____| * \___|\___/|_| \_\_____|
* *
* Copyright (C) 1998 - 2018, Daniel Stenberg, <daniel@haxx.se>, et al. * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
* *
* This software is licensed as described in the file COPYING, which * This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms * you should have received as part of this distribution. The terms
@ -111,15 +111,12 @@ struct httprequest {
bool ntlm; /* Authorization ntlm header found */ bool ntlm; /* Authorization ntlm header found */
int writedelay; /* if non-zero, delay this number of seconds between int writedelay; /* if non-zero, delay this number of seconds between
writes in the response */ writes in the response */
int pipe; /* if non-zero, expect this many requests to do a "piped"
request/response */
int skip; /* if non-zero, the server is instructed to not read this int skip; /* if non-zero, the server is instructed to not read this
many bytes from a PUT/POST request. Ie the client sends N many bytes from a PUT/POST request. Ie the client sends N
bytes said in Content-Length, but the server only reads N bytes said in Content-Length, but the server only reads N
- skip bytes. */ - skip bytes. */
int rcmd; /* doing a special command, see defines above */ int rcmd; /* doing a special command, see defines above */
int prot_version; /* HTTP version * 10 */ int prot_version; /* HTTP version * 10 */
bool pipelining; /* true if request is pipelined */
int callcount; /* times ProcessRequest() gets called */ int callcount; /* times ProcessRequest() gets called */
bool connmon; /* monitor the state of the connection, log disconnects */ bool connmon; /* monitor the state of the connection, log disconnects */
bool upgrade; /* test case allows upgrade to http2 */ bool upgrade; /* test case allows upgrade to http2 */
@ -426,14 +423,6 @@ static int parse_servercmd(struct httprequest *req)
logmsg("swsclose: close this connection after response"); logmsg("swsclose: close this connection after response");
req->close = TRUE; req->close = TRUE;
} }
else if(1 == sscanf(cmd, "pipe: %d", &num)) {
logmsg("instructed to allow a pipe size of %d", num);
if(num < 0)
logmsg("negative pipe size ignored");
else if(num > 0)
req->pipe = num-1; /* decrease by one since we don't count the
first request in this number */
}
else if(1 == sscanf(cmd, "skip: %d", &num)) { else if(1 == sscanf(cmd, "skip: %d", &num)) {
logmsg("instructed to skip this number of bytes %d", num); logmsg("instructed to skip this number of bytes %d", num);
req->skip = num; req->skip = num;
@ -706,11 +695,6 @@ static int ProcessRequest(struct httprequest *req)
} }
} }
if(req->pipe)
/* we do have a full set, advance the checkindex to after the end of the
headers, for the pipelining case mostly */
req->checkindex += (end - line) + strlen(end_of_headers);
/* **** Persistence **** /* **** Persistence ****
* *
* If the request is a HTTP/1.0 one, we close the connection unconditionally * If the request is a HTTP/1.0 one, we close the connection unconditionally
@ -844,8 +828,7 @@ static int ProcessRequest(struct httprequest *req)
if(strstr(req->reqbuf, "Connection: close")) if(strstr(req->reqbuf, "Connection: close"))
req->open = FALSE; /* close connection after this request */ req->open = FALSE; /* close connection after this request */
if(!req->pipe && if(req->open &&
req->open &&
req->prot_version >= 11 && req->prot_version >= 11 &&
end && end &&
req->reqbuf + req->offset > end + strlen(end_of_headers) && req->reqbuf + req->offset > end + strlen(end_of_headers) &&
@ -855,19 +838,6 @@ static int ProcessRequest(struct httprequest *req)
/* If we have a persistent connection, HTTP version >= 1.1 /* If we have a persistent connection, HTTP version >= 1.1
and GET/HEAD request, enable pipelining. */ and GET/HEAD request, enable pipelining. */
req->checkindex = (end - req->reqbuf) + strlen(end_of_headers); req->checkindex = (end - req->reqbuf) + strlen(end_of_headers);
req->pipelining = TRUE;
}
while(req->pipe) {
if(got_exit_signal)
return 1; /* done */
/* scan for more header ends within this chunk */
line = &req->reqbuf[req->checkindex];
end = strstr(line, end_of_headers);
if(!end)
break;
req->checkindex += (end - line) + strlen(end_of_headers);
req->pipe--;
} }
/* If authentication is required and no auth was provided, end now. This /* If authentication is required and no auth was provided, end now. This
@ -951,13 +921,8 @@ storerequest_cleanup:
static void init_httprequest(struct httprequest *req) static void init_httprequest(struct httprequest *req)
{ {
/* Pipelining is already set, so do not initialize it here. Only initialize req->checkindex = 0;
checkindex and offset if pipelining is not set, since in a pipeline they req->offset = 0;
need to be inherited from the previous request. */
if(!req->pipelining) {
req->checkindex = 0;
req->offset = 0;
}
req->testno = DOCNUMBER_NOTHING; req->testno = DOCNUMBER_NOTHING;
req->partno = 0; req->partno = 0;
req->connect_request = FALSE; req->connect_request = FALSE;
@ -967,7 +932,6 @@ static void init_httprequest(struct httprequest *req)
req->cl = 0; req->cl = 0;
req->digest = FALSE; req->digest = FALSE;
req->ntlm = FALSE; req->ntlm = FALSE;
req->pipe = 0;
req->skip = 0; req->skip = 0;
req->writedelay = 0; req->writedelay = 0;
req->rcmd = RCMD_NORMALREQ; req->rcmd = RCMD_NORMALREQ;
@ -991,17 +955,6 @@ static int get_request(curl_socket_t sock, struct httprequest *req)
char *pipereq = NULL; char *pipereq = NULL;
size_t pipereq_length = 0; size_t pipereq_length = 0;
if(req->pipelining) {
pipereq = reqbuf + req->checkindex;
pipereq_length = req->offset - req->checkindex;
/* Now that we've got the pipelining info we can reset the
pipelining-related vars which were skipped in init_httprequest */
req->pipelining = FALSE;
req->checkindex = 0;
req->offset = 0;
}
if(req->offset >= REQBUFSIZ-1) { if(req->offset >= REQBUFSIZ-1) {
/* buffer is already full; do nothing */ /* buffer is already full; do nothing */
overflow = 1; overflow = 1;
@ -1051,11 +1004,6 @@ static int get_request(curl_socket_t sock, struct httprequest *req)
req->done_processing = ProcessRequest(req); req->done_processing = ProcessRequest(req);
if(got_exit_signal) if(got_exit_signal)
return -1; return -1;
if(req->done_processing && req->pipe) {
logmsg("Waiting for another piped request");
req->done_processing = 0;
req->pipe--;
}
} }
if(overflow || (req->offset == REQBUFSIZ-1 && got > 0)) { if(overflow || (req->offset == REQBUFSIZ-1 && got > 0)) {
@ -1075,7 +1023,7 @@ static int get_request(curl_socket_t sock, struct httprequest *req)
/* at the end of a request dump it to an external file */ /* at the end of a request dump it to an external file */
if(fail || req->done_processing) if(fail || req->done_processing)
storerequest(reqbuf, req->pipelining ? req->checkindex : req->offset); storerequest(reqbuf, req->offset);
if(got_exit_signal) if(got_exit_signal)
return -1; return -1;
@ -1598,7 +1546,6 @@ static void http_connect(curl_socket_t *infdp,
logmsg("====> TCP_NODELAY for client DATA connection failed"); logmsg("====> TCP_NODELAY for client DATA connection failed");
} }
#endif #endif
req2.pipelining = FALSE;
init_httprequest(&req2); init_httprequest(&req2);
while(!req2.done_processing) { while(!req2.done_processing) {
err = get_request(datafd, &req2); err = get_request(datafd, &req2);
@ -2281,7 +2228,6 @@ int main(int argc, char *argv[])
the pipelining struct field must be initialized previously to FALSE the pipelining struct field must be initialized previously to FALSE
every time a new connection arrives. */ every time a new connection arrives. */
req.pipelining = FALSE;
init_httprequest(&req); init_httprequest(&req);
for(;;) { for(;;) {