When libcurl has told the server that a POST or PUT is coming (with a
content-length and all), it has to either deliver that amount of data or
close the connection before trying a second request.
Added test cases 1129, 1130 and 1131.
The bug report is about when used with 100-continue, but the change is
more generic.
Bug: http://curl.haxx.se/mail/lib-2011-06/0191.html
Reported by: Steven Parkes
Made several functions static
Made one function a no-op #define when RTSP is disabled, to avoid
#ifdefs in the code.
Removed explicit rtsp.h includes
libcurl failed to check the correct struct for HTTPS after CONNECT was
issued to the proxy, so it didn't do the TLS handshake and subsequently
failed the connection. A regression released in 7.21.5 (introduced
around commit 8831000bc0).
Bug: http://curl.haxx.se/mail/lib-2011-04/0134.html
Reported by: Josue Andrade Gomes
Added CURLOPT_TRANSFER_ENCODING as the option to set to request Transfer
Encoding in HTTP requests (if built with zlib enabled). I also renamed
CURLOPT_ENCODING to CURLOPT_ACCEPT_ENCODING (while keeping the old name
around) to reduce the confusion now that we have two encoding options for
HTTP.
--tr-encoding is now the new command line option for curl to request
this, and thus I updated the test cases accordingly.
When TE: is inserted in the request, we must add a "Connection: TE" as
well to be HTTP 1.1 compliant. If a custom Connection: header is passed
in, we must use that and only append TE to it. Test case 1125 verifies
TE: + custom Connection:.
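For illustration (hypothetical host, header values approximate), a
request combining a custom Connection: header with the TE request could
look like:

  GET /path HTTP/1.1
  Host: example.com
  Connection: Keep-Alive, TE
  TE: gzip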
Since this struct member is used in the code to determine what and how
to decode automatically, and since it is now also used for compressed
Transfer-Encodings, I renamed it to the more suitable 'auto_decoding'.
Transfer-Encoding differs from Content-Encoding in a few subtle ways,
but primarily it concerns the transfer only and not the content, so when
it is discovered to be compressed we know we have to uncompress it.
Compressed transfers will only arrive in a response after we have
requested them with the appropriate TE: header.
Test cases 1122 and 1123 verify this.
The new http_proxy.* files now host the HTTP proxy specific code (500+
lines moved out from http.c), and as a consequence there is a macro
introduced for the Curl_proxyCONNECT() function so that code can use it
even in builds without proxy (or HTTP) support.
The PROT_* set of internal defines for the protocols is no longer
used. We now use the same bits internally as we have defined in the
public header using the CURLPROTO_ prefix. This is for simplicity and
because the PROT_* prefix was already used internally for a set of KRB4
values as well.
The PROTOPT_* defines were moved up to just below the struct definition
within which they are used.
The protocol handler struct got a 'flags' field for special information
and characteristics of the given protocol.
This now enables us to move central protocol information such as
CLOSEACTION and DUALCHANNEL out of single defines in a central place and
into each protocol's definition. It also made us stop abusing the
protocol field for other info than the protocol, and we could start
cleaning up other protocol-specific things by adding flag bits to set in
the handler struct.
The "protocol" field connectdata struct was removed as well and the code
now refers directly to the conn->handler->protocol field instead. To
make things work properly, the code now always store a conn->given
pointer that points out the original handler struct so that the code can
learn details from the original protocol even if conn->handler is
modified along the way - for example when switching to go over a HTTP
proxy.
Make GSS authentication work when a curl handle is reused for multiple
authenticated requests, by always setting negdata->state in
output_auth_headers().
Signed-off-by: Marcus Sundberg <marcus.sundberg@aptilo.com>
Instead of polluting many places with #ifdefs, we create a single place
for this function, and also check return code properly so that a NULL
pointer returned won't cause problems.
The HTTP parser allocated memory on each received Location: header
without properly freeing old data. Starting now, the code only considers
the first Location: header and will blissfully ignore subsequent ones.
Bug: http://curl.haxx.se/bug/view.cgi?id=3165129
Reported by: Martin Lemke
Added axTLS to autotool files and glue code to misc other files.
axtls.h maps SSL API functions, but may change.
axtls.c is just a stub file and will definitely change.
When given a custom host name in a Host: header, we can use it for
several different purposes other than just cookies, so we rename it and
use it for SSL SNI etc.
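A sketch of how an application passes such a custom Host: header
(hypothetical name, assuming an existing easy handle 'curl'):

  struct curl_slist *headers = NULL;
  headers = curl_slist_append(headers, "Host: internal.example.com");
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
  /* the given name is now also used for cookies, SSL SNI etc */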
When failing to build a form post due to an error, the code now does a
proper failf(). Previously libcurl would report an error like "failed
creating formpost data" when a file couldn't be opened, which was not
easy for users to figure out.
I also lowercased a function name to follow the curl naming style and
removed some unnecessary code.
It was pointed out that the special case libcurl did for 416 was
incorrect. 416 is not really different from other errors, so the
response body must be handled as it is for other errors/HTTP responses.
Reported by: Chris Smowton
Bug: http://curl.haxx.se/bug/view.cgi?id=3076808
HTTP allows a server to send trailing headers after all the chunks have
been sent WITHOUT signalling their presence in the first response
headers. The "Trailer:" header is only a SHOULD there, and as we need to
handle the situation even without that header, I made libcurl ignore
Trailer: completely.
Test case 1116 was added to verify this and to make sure we handle more
than one trailer header properly.
Reported by: Patrick McManus
Bug: http://curl.haxx.se/bug/view.cgi?id=3052450
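An illustrative chunked response with an unannounced trailer (header
name made up); libcurl now handles the trailer with or without a
preceding "Trailer:" header:

  HTTP/1.1 200 OK
  Transfer-Encoding: chunked

  4
  data
  0
  X-Example-Trailer: value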
On Win64, long is 32 bits but size_t is 64 bits, which caused a warning
that we now avoid with a typecast. A small whitespace indent fix was
also applied.
Reported by: Adam Light
As mentioned in bug report #2956968, the HTTP code wouldn't send the
first empty chunk during the auth negotiation phase of the HTTP request
sending, so the server would wait for data to come and libcurl would
wait for data to arrive... I've made the code not enable chunked
encoding until the auth negotiation is done and thus this scenario
doesn't occur anymore.
Reported by: Sidney San Martín
Bug: http://curl.haxx.se/bug/view.cgi?id=2956968
Howard Chu brought the bulk of the work of this patch, which properly
moves out the sending and receiving of data to the parts of the code
that are responsible for the various ways of doing so.
Daniel Stenberg assisted with polishing a few bits and fixed some
minor flaws in the original patch.
Another upside of this patch is that we now abuse CURLcodes less
with the "magic" -1 return codes and instead use CURLE_AGAIN more
consistently.
This is Hoi-Ho Chan's patch with some minor fixes by me. There
are some potential issues in this, but none worse than we can
sort out on the list and over time.
Akos Pasztory filed Debian bug report #572276
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=572276)
mentioning a problem with a resource that is returned chunked-encoded
_and_ with a Content-Length header, and libcurl failed to properly
ignore the latter information.
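An illustrative response of the problematic kind; libcurl now ignores
the Content-Length and obeys the chunked encoding:

  HTTP/1.1 200 OK
  Content-Length: 100
  Transfer-Encoding: chunked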
curl_easy_setopt with CURLOPT_HTTPHEADER, the library should set
data->state.expect100header accordingly - the current code (in 7.19.7 at
least) doesn't handle this properly. Martin Storsjo provided the fix!
POST using a read callback, with Digest authentication and
"Transfer-Encoding: chunked" enforced. It would then cause the first
request to be wrongly sent and then basically hang until the server
closed the connection. I fixed the problem and added test case 565 to
verify it.
(http://curl.haxx.se/bug/view.cgi?id=2873666) which identified a problem
that made libcurl loop infinitely when given incorrect credentials when
using HTTP GSS negotiate authentication.
(http://curl.haxx.se/bug/view.cgi?id=2813123) and a patch that fixes the
problem:
Url A is accessed using auth. Url A redirects to Url B (on a different
server). Url B reuses a persistent connection. Url B has auth, even though
it's on a different server.
Note: if Url B does not reuse a persistent connection, auth is not sent.
is almost always a VERY BAD IDEA. Yet there are still apps out there
doing this, and recently it triggered a bug/side-effect in libcurl: when
libcurl sends a POST or PUT with NTLM, it sends an empty post first when
it knows it will just get a 401/407 back. If the app then replaced the
Content-Length header, it caused the server to wait for input that
libcurl wouldn't send. Aaron Oneal reported this problem in bug report
#2799008 (http://curl.haxx.se/bug/view.cgi?id=2799008) and helped us
verify the fix.
of streams that had some parts (legitimately) missing. We now provide and use
a proper cleanup function for the content encoding submodule.
http://curl.haxx.se/mail/lib-2009-05/0092.html
as reported by Ebenezer Ikonne (on curl-users) and Laurent Rabret (on
curl-library). The transfer was mistakenly marked to get more data to send
but since it didn't actually have that, it just hung there...
Chen pointed out how curl couldn't upload with resume when reading from a
pipe.
This ended up with the introduction of a new return code for the
CURLOPT_SEEKFUNCTION callback that basically says that the seek failed
but that libcurl may try to resolve the situation anyway. In our case
this means libcurl will instead attempt to read that much data from the
stream rather than seeking, and that way curl can now upload with resume
when data is read from a stream!
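A sketch of such a seek callback for an unseekable stream; the new
"seek failed, but carry on" return code is presumably
CURL_SEEKFUNC_CANTSEEK:

  static int seek_cb(void *userp, curl_off_t offset, int origin)
  {
    (void)userp; (void)offset; (void)origin;
    /* cannot seek; libcurl may read and discard data instead */
    return CURL_SEEKFUNC_CANTSEEK;
  }
  /* ... */
  curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb);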
version 1.1 instead of 1.0 like before. This change also introduces the
new proxy type for libcurl called 'CURLPROXY_HTTP_1_0' that then allows
apps to switch (back) to CONNECT 1.0 requests. The curl tool also got a
--proxy1.0 option that works exactly like --proxy but sets
CURLPROXY_HTTP_1_0.
I updated all test cases that use CONNECT, and I tried to make some use
--proxy1.0 and some do CONNECT 1.1 so that both versions get exercised.
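A sketch of forcing the old behavior from an application (hypothetical
proxy address, assuming an existing easy handle 'curl'):

  curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:8080");
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, CURLPROXY_HTTP_1_0);

or, with the command line tool:

  curl --proxy1.0 proxy.example.com:8080 https://example.com/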
clarity. This does fix one problem that causes ;type=i FTP URLs
to fail in the Turkish locale when CURLOPT_PROXY_TRANSFER_MODE is
used (test case 561).
Added tests 561 and 1092 through 1094 to test various combinations
of ;type= and ;mode= URLs that could potentially fail in the Turkish
locale.
now has an improved ability to do right when the multi interface (both
"regular" and multi_socket) is used for SCP and SFTP transfers. This should
result in (much) less busy-loop situations and thus less CPU usage with no
speed loss.
(http://curl.haxx.se/bug/view.cgi?id=2221237) that identified an infinite
loop during GSS authentication given some specific conditions. With his
patience and great feedback I managed to narrow down the problem and
eventually fix it although I can't test any of this myself!
(http://curl.haxx.se/bug/view.cgi?id=2255627) which pointed out that a
program using libcurl's multi interface to download a HTTPS page with a
libcurl built powered by OpenSSL, would easily get silly and instead hand
over SSL details as data instead of the actual HTTP headers and body. This
happened because libcurl would consider the connection handshake done too
early. This problem was introduced on September 22nd 2008 with my fix of
bug #2107377.
The correct fix is now instead done within the GnuTLS-handling code, as both
the OpenSSL and the NSS code already deal with this situation in similar
fashion. I added test case 560 in an attempt to verify this fix, but
unfortunately it didn't trigger it even before this fix!
Changed checkprefix() to use it and those instances of strnequal() that
compare host names or other protocol strings that are defined to be
independent of case in the C locale. This should fix a few more
Turkish locale problems.
(http://curl.haxx.se/bug/view.cgi?id=2154627) which pointed out that
libcurl uses strcasecmp() in multiple places where it causes failures
when the Turkish locale is used. This is because 'i' and 'I' aren't the
same letter, so strcasecmp() on those letters behaves differently in
Turkish than in English (or just about all other languages). I thus
introduced a totally new internal function in libcurl (called
Curl_ascii_equal) for doing case insensitive comparisons for
English/ASCII style strings, which thus will make "file" and "FILE"
match even if the Turkish locale is selected.
proxy" (http://curl.haxx.se/bug/view.cgi?id=2107377) that showed how a multi
interface using program didn't work when built with GnuTLS and a CONNECT
request was done over a proxy (basically test 502 over a proxy to a HTTPS
site). It turned out the ssl connect function would get called twice which
caused the second call to fail.
remain in use as internal curl_off_t print formatting strings for the internal
*printf functions which still cannot handle print formatting string directives
such as "I64d", "I64u", and others available on MSVC, MinGW, Intel's ICC, and
other DOS/Windows compilers.
This reverts the part of a previous commit which did:
FORMAT_OFF_T -> CURL_FORMAT_CURL_OFF_T
FORMAT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
the names of the curl_off_t formatting string directives now become
CURL_FORMAT_CURL_OFF_T and CURL_FORMAT_CURL_OFF_TU.
CURL_FMT_OFF_T -> CURL_FORMAT_CURL_OFF_T
CURL_FMT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
Remove the use of an internal name for the curl_off_t formatting string directives
and use the common one available from the inside and outside of the library.
FORMAT_OFF_T -> CURL_FORMAT_CURL_OFF_T
FORMAT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
when a server responded with long headers and data. Luckily, the buffer
overflowed into another unused buffer, so no actual harm was done.
Added test cases 1060 and 1061 to verify.
connection with the multi interface even if a previous use of it caused a
CURLE_PEER_FAILED_VERIFICATION to get returned. I now make sure that failed
SSL connections properly close the connections.
proved how PUT and POST with a redirect could lead to a "hang" due to the
data stream not being rewound properly when it had to in order to get sent
properly (again) to the subsequent URL. This is now fixed and these test
cases are no longer disabled.
with -C - sent garbage in the Content-Range: header. I fixed this
problem by making sure libcurl always sets the size of the _entire_
upload if an app attempts to do resumed uploads, since libcurl simply
cannot know the size of what is currently at the server end. Test 1041
is no longer disabled.
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=487567) pointing out that
libcurl used Content-Range: instead of Range when doing a range request with
--head (CURLOPT_NOBODY). This is now fixed and test case 1032 was added to
verify.
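For illustration, the combination in question (assuming an existing easy
handle 'curl'); the request now carries Range: rather than
Content-Range:

  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/file");
  curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);    /* like --head */
  curl_easy_setopt(curl, CURLOPT_RANGE, "0-99");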
handler functions didn't signal that the socket should be waited on for
writing; instead it was treated as if no socket needed monitoring, so
REMOVE was called prematurely.
application to provide data for a multipart with the read callback. Note
that the size needs to be provided with CURLFORM_CONTENTSLENGTH when the
stream option is used. This feature is verified by the new test case
554. This feature was sponsored by Xponaut.
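A sketch of the new usage, assuming the stream option is CURLFORM_STREAM
and using a hypothetical read callback 'read_cb', stream pointer
'stream' and size 'stream_len':

  struct curl_httppost *post = NULL, *last = NULL;
  curl_formadd(&post, &last,
               CURLFORM_COPYNAME, "file",
               CURLFORM_STREAM, stream,                   /* handed to the read callback */
               CURLFORM_CONTENTSLENGTH, (long)stream_len, /* size must be provided */
               CURLFORM_FILENAME, "upload.bin",
               CURLFORM_END);
  curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
  curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);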
the SingleRequest one to make pipelining better. It is a bit tricky to
keep things related to the actual request and things related to the
actual connection in their right places.
(http://curl.haxx.se/bug/view.cgi?id=1879375) which describes how
libcurl got lost in this scenario: proxy tunnel (or HTTPS over proxy),
ask to do any proxy authentication and the proxy replies with an auth
(like NTLM) and then closes the connection after that initial
informational response.
libcurl would not properly re-initialize the connection to the proxy and
continue the auth negotiation as it is supposed to. It does now,
however, as it will detect if one or more authentication methods were
available and asked for, and will thus retry the connection and continue
from there.
I made the progress callback get called properly during proxy CONNECT.
(http://curl.haxx.se/bug/view.cgi?id=1871269) and we could fix his hang-
problem that occurred when doing a large HTTP POST request with the
response-body read from a callback.
libcurl to seek in a given input stream. This is particularly important
when doing upload resumes when there's already a huge part of the file
present remotely. Before, and still if this callback isn't used, libcurl
will read and throw away the entire file up to the point where the
resuming begins (which of course can be a slow operation depending on
file size, I/O bandwidth and more). This new function will also be
preferred over the CURLOPT_IOCTLFUNCTION for seeking back in a stream
when doing multi-stage HTTP auth with POST/PUT.
(http://curl.haxx.se/bug/view.cgi?id=1849764) with an included fix. He
identified a problem for re-used connections that previously had sent
Expect: 100-continue: in some situations the subsequent POST (that
didn't use Expect:) still had the internal flag set for its use. David's
fix (which makes the flag get set unconditionally in every single
request) is fine and is now used!
callback) over a proxy when NTLM is used as auth with the proxy. The
bug also concerned Digest and was limited to using a callback only.
Spacen worked with us to provide a useful patch. I added test cases 547
and 548 to verify two variations of POST over proxy with NTLM.
the appending of the "type=" thing on FTP URLs when they are passed to a
HTTP proxy. Some proxies just don't like that appending (which is done
unconditionally in 7.17.1), and some proxies treat binary/ascii transfers
better with the appending done!
is inited at the start of the DO action. I removed the Curl_transfer_keeper
struct completely, and I had to move out a few struct members (that had to
be set before DO or used after DONE) to the UrlState struct. The SingleRequest
struct is accessed with SessionHandle->req.
One of the biggest reasons for doing this was the bunch of duplicate struct
members in HandleData and Curl_transfer_keeper since it was really messy to
keep track of two variables with the same name and basically the same purpose!
callback was used, as it could wrongly pass on a bad size for the
outgoing HTTP header. The bad size would be a very large value as it was
a wrapped-around size_t. This happened when the whole HTTP request
failed to get sent in one single send. http://curl.haxx.se/mail/lib-2007-11/0165.html
https://bugzilla.novell.com/show_bug.cgi?id=332917 about a HTTP redirect to
FTP that caused memory havoc. His work together with my efforts created two
fixes:
#1 - FTP::file was moved to struct ftp_conn, because it has to be dealt with
at connection cleanup, at which time the struct HandleData could be
used by another connection.
Also, the unused char *urlpath member is removed from struct FTP.
#2 - provide a Curl_reset_reqproto() function that frees
data->reqdata.proto.* on connection setup if needed (that is if the
SessionHandle was used by a different connection).
a response that was larger than 16KB is now improved slightly so that
the restriction at 16KB now applies to the headers only, and it should
be a rare situation where the response headers exceed 16KB. Thus, I
consider #47 fixed and the header limitation is now known as known bug
#48.
passed to it with curl_easy_setopt()! Previously it had always just
referred to the data, forcing the user to keep the data around until
libcurl is done with it. That is now history and libcurl will instead
clone the given strings and keep private copies.
NTLM, and he provided test code and a test server and we worked out a bug
fix. We failed to count sent body data at times, which then caused internal
confusions when libcurl tried to send the rest of the data in order to
maintain the same connection alive.
(and then I did some minor reformatting of code in lib/http.c)
the multi interface and connection re-use that could make a
curl_multi_remove_handle() ruin a pointer in another handle.
The second problem was less of an actual problem and more of a minor
quirk: the re-using of connections wasn't properly checking if the
connection was marked for closure.
and CURLOPT_CONNECTTIMEOUT_MS that, as their names should hint, do the
timeouts with millisecond resolution instead. The only restriction to
that is the alarm() (sometimes) used to abort name resolves, as that
uses full seconds. I fixed the FTP response timeout part of the patch.
Internally we now count and keep the timeouts in milliseconds, but it
also means we multiply the set timeouts by 1000. The effect of this is
that no timeout can be set to more than 2^31 milliseconds (on 32 bit
systems), which equals 24.86 days. We probably couldn't before either,
since the code already did *1000 on the timeout values in several
places.
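For example (assuming an existing easy handle 'curl'):

  curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT_MS, 2500L);  /* 2.5 second connect timeout */
  curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 15000L);        /* 15 second total timeout */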
doing an FTP transfer is removed from a multi handle before completion. The
fix also fixed the "alive counter" to be correct on "premature removal" for
all protocols.
non-ASCII platforms. It does add some complexity, most notably with
more #ifdefs, but I want to see this support added and I can't see how
we can add it without the extra stuff.
(http://curl.haxx.se/bug/view.cgi?id=1618359) and subsequently provided a
patch for it: when downloading 2 zero byte files in a row, curl 7.16.0
enters an infinite loop, while curl 7.16.1-20061218 does one additional
unnecessary request.
Fix: During the "Major overhaul introducing http pipelining support and
shared connection cache within the multi handle." change, headerbytecount
was moved to live in the Curl_transfer_keeper structure. But that structure
is reset in the Transfer method, losing the information that we had about
the header size. This patch moves it back to the connectdata struct.
KNOWN_BUGS #25, which happens when a proxy closes the connection when
libcurl has sent CONNECT, as part of an authentication negotiation. Starting
now, libcurl will re-connect accordingly and continue the authentication as
it should.
could very well cause a negative number to get passed in and thus cause
reading outside of the array usually used for this purpose.
We avoid this by using the uppercase macro versions introduced just now,
which do some extra crazy typecasts to prevent byte codes > 127 from
causing negative int values.
to the CURLOPT_DEBUGFUNCTION callback has been fixed (it was erroneously
included as part of the header). A message was also added to the
command line tool to show when data is being sent, enabled when
--verbose is used.
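A sketch of a debug callback that tells outgoing body data apart from
the header (assuming an existing easy handle 'curl'; needs <stdio.h>):

  static int debug_cb(CURL *handle, curl_infotype type, char *data,
                      size_t size, void *userp)
  {
    (void)handle; (void)data; (void)userp;
    if(type == CURLINFO_DATA_OUT)
      fprintf(stderr, "=> sending %lu bytes of body data\n", (unsigned long)size);
    return 0;
  }
  /* ... */
  curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, debug_cb);
  curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);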
send the whole request at once, even though the Expect: header was disabled
by the application. An effect of this change is also that small (< 1024
bytes) POSTs are now always sent without Expect: header since we deem it
more costly to bother about that than the risk that we send the data in
vain.
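For reference, the usual way an application disables the Expect: header
is by passing an empty one:

  struct curl_slist *hdrs = curl_slist_append(NULL, "Expect:");
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);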
like this in this source file. The quickfix for now is to provide a simple
version for GnuTLS builds. The GnuTLS version of libcurl doesn't yet allow
fully non-blocking connects anyway so this function doesn't get used.
(http://curl.haxx.se/bug/view.cgi?id=1481217), with follow-ups by Michele Bini
and David Byron. libcurl previously wrongly used GetLastError() on Windows to
get error details after socket-related function calls, when it really should
use WSAGetLastError() instead.
When changing to this, the former function Curl_ourerrno() is now instead
called Curl_sockerrno() as it is necessary to only use it to get errno from
socket-related functions as otherwise it won't work as intended on Windows.
an app can use to let libcurl only connect to a remote host and then extract
the socket from libcurl. libcurl will then not attempt to do any transfer at
all after the connect is done.
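A sketch of that usage, assuming the option is CURLOPT_CONNECT_ONLY and
the socket is fetched with CURLINFO_LASTSOCKET:

  long sockfd;
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L);
  curl_easy_perform(curl);  /* connects, then returns without transferring */
  curl_easy_getinfo(curl, CURLINFO_LASTSOCKET, &sockfd);
  /* the application can now use 'sockfd' directly */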
(wrongly) sends *two* WWW-Authenticate headers for Digest. While this should
never happen in a sane world, libcurl previously got into an infinite loop
when this occurred. Dave added test 273 to verify this.
fix the CONNECT authentication code with multi-pass auth methods (such as
NTLM) as it didn't previously properly ignore response-bodies - in fact it
stopped reading after all response headers had been received. This could
lead to libcurl sending the next request and reading the body from the first
request as response to the second request. (I also renamed the function,
which wasn't strictly necessary but...)
The best fix would be to once and for all make the CONNECT code use the
ordinary request sending/receiving code, treating it as any ordinary request
instead of the special-purpose function we have now. It should make it
better for multi-interface too. And possibly lead to less code...
Added test case 265 for this. It doesn't work as a _really_ good test case
since the test proxy is too stupid, but the test case helps when running the
debugger to verify.
A) Normal non-proxy HTTP:
- no more "Pragma: no-cache" (this only makes sense to proxies)
B) Non-CONNECT HTTP request over proxy:
- "Pragma: no-cache" is used (like before)
- "Proxy-Connection: Keep-alive" (for older style 1.0-proxies)
C) CONNECT HTTP request over proxy:
- "Host: [name]:[port]"
- "Proxy-Connection: Keep-alive"
.netrc, and when following a Location: the subsequent requests didn't properly
use the auth as found in the netrc file. Added test case 257 to verify my fix.
internally, with code provided by sslgen.c. All SSL-layer-specific code is
then written in ssluse.c (for OpenSSL) and gtls.c (for GnuTLS).
As far as possible, internals should not need to know what SSL layer is
in use. Building with GnuTLS currently makes two test cases fail.
TODO.gnutls contains a few known outstanding issues for the GnuTLS support.
GnuTLS support is enabled with configure --with-gnutls
also affecting NTLM and Negotiate.) It turned out that if the server
responded with 100 Continue before the initial 401 response, libcurl
didn't take care of the response properly. Test cases 245 and 246 were
added to verify this.
function was fixed to use the proper proxy authentication when multiple
ones were added as accepted. Test 239 and test 243 were added to repeat
the problems and verify the fixes.
USE_WINDOWS_SSPI on Windows, and then libcurl will be built to use the native
way to do NTLM. SSPI also allows libcurl to pass on the current user and its
password in the request.
requested data from a host and then followed a redirect to another
host. libcurl then didn't use the proxy-auth properly in the second request,
due to the host-only check for original host name wrongly being extended to
the proxy auth as well. Added test case 233 to verify the flaw and that the
fix removed the problem.
that picks NTLM. Thanks to David Byron letting me test NTLM against his
servers, I could quickly repeat and fix the problem. It turned out to be:
When libcurl POSTs without knowing/using an authentication and it gets back a
list of types from which it picks NTLM, it needs to either continue sending
its data if it keeps the connection alive, or not send the data but close the
connection. Then do the first step in the NTLM auth. libcurl didn't send the
data nor close the connection but simply read the response-body and then sent
the first negotiation step, which then failed miserably of course. The
fixed version forces a connection close if there are more than 2000
bytes left to send.
libcurl without cookie support. This is mainly useful if you want to
build a minimalistic libcurl with no cookie support at all, such as for
embedded systems or similar.
file that was already completely downloaded caused an error, while it
doesn't if you don't use --fail! I added test case 194 to verify the fix.
Grrr. CURLOPT_FAILONERROR is now added to the list of stuff to remove
in libcurl v8 due to all the kludges needed to support it.
replacement, curl only replaced the Host: header on the initial request
and didn't replace it on the following ones. This resulted in requests with
two Host: headers.
Now, curl checks if the location is on the same host as the initial
request and then continues to replace the Host: header. And when it
moves to another host, it doesn't replace the Host: header, but it also
makes sure the custom Host: header isn't used in the request.
This change is verified by the two new test cases 184 and 185.
server doesn't require any auth at all and then we just continue nicely. We
now have an extra bit in the connection struct named 'authprobe' that is TRUE
when doing pure "HTTP authentication probing".
all things up to work with encoded host names internally, as well as keeping
'display names' to show in debug messages. IDN resolves work for me now using
ipv6, ipv4 and ares resolving. Even cookies on IDN sites seem to do right.
stuff added a few weeks ago. Turns out that if you specify --proxy-ntlm and
communicate with a proxy that requires basic authentication, the proxy
properly returns a 407, but the failure detection code doesn't realize it
should give up, so curl returns with exit code 0. Test case 162 verifies
this.
connects and otherwise enforced tunnel-thru-proxy requests), the same
authentication header is also wrongly sent to the remote host.
The name and password can then be captured by an evil host and possibly get
used for malicious purposes.
CURLOPT_SSL_CTX_FUNCTION to set a callback that gets called with
OpenSSL's ssl_ctx pointer passed in, allowing the callback to act on it.
If anything but CURLE_OK is returned, that will also be returned by
libcurl all the way back. If this function changes the CURLOPT_URL,
libcurl will detect this and instead go use the new URL.
CURLOPT_SSL_CTX_DATA is a pointer you set to get passed to the callback set
with CURLOPT_SSL_CTX_FUNCTION.
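A minimal sketch of such a callback (assuming an OpenSSL build;
'my_data' is a hypothetical application pointer):

  static CURLcode sslctx_cb(CURL *curl, void *ssl_ctx, void *userp)
  {
    /* 'ssl_ctx' is OpenSSL's SSL_CTX pointer; act on it here, for
       example to add extra certificates taken from 'userp' */
    (void)curl; (void)ssl_ctx; (void)userp;
    return CURLE_OK; /* anything else gets returned by libcurl */
  }
  /* ... */
  curl_easy_setopt(curl, CURLOPT_SSL_CTX_FUNCTION, sslctx_cb);
  curl_easy_setopt(curl, CURLOPT_SSL_CTX_DATA, my_data);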