<testcase>
<info>
<keywords>
HTTP
pipelining
multi
</keywords>
</info>

# Server-side
# The silly division of the responses across data1-data4 is solely to appease the server, which expects n_data_items == n_requests
<reply>
<data1>
HTTP/1.1 200 OK
Server: test-server/fake
Content-Length: 4

584
</data1>
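# data2, data3 and the first two lines of data4 make up the zero-length
# second response; the rest of data4 carries the third and fourth responses,
# all sent back-to-back on the pipelined connection.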
<data2>
HTTP/1.1 200 OK
</data2>
<data3>
Server: test-server/fake
</data3>
<data4>
Content-Length: 0

HTTP/1.1 200 OK
Server: test-server/fake
Content-Length: 5

585

HTTP/1.1 200 OK
Server: test-server/fake
Content-Length: 4

586
</data4>
</reply>

# Client-side
<client>
<server>
http
</server>
<tool>
lib530
</tool>
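# lib530 fetches the URL with the suffixes 0001-0004 (see <protocol> below);
# the four transfers are pipelined on one connection, so the client reads the
# responses as the single combined stream checked in <stdout>.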
<name>
HTTP GET using pipelining (nonzero length after zero length)
</name>
<command>
http://%HOSTIP:%HTTPPORT/path/584
</command>
</client>

# Verify data after the test has been "shot"
<verify>
<protocol>
GET /path/5840001 HTTP/1.1
Host: %HOSTIP:%HTTPPORT
Accept: */*

GET /path/5840002 HTTP/1.1
Host: %HOSTIP:%HTTPPORT
Accept: */*

GET /path/5840003 HTTP/1.1
Host: %HOSTIP:%HTTPPORT
Accept: */*

GET /path/5840004 HTTP/1.1
Host: %HOSTIP:%HTTPPORT
Accept: */*

</protocol>
<stdout>
HTTP/1.1 200 OK
Server: test-server/fake
Content-Length: 4

584
HTTP/1.1 200 OK
Server: test-server/fake
Content-Length: 0

HTTP/1.1 200 OK
Server: test-server/fake
Content-Length: 5

585

HTTP/1.1 200 OK
Server: test-server/fake
Content-Length: 4

586
</stdout>
</verify>
</testcase>