In many states the easy_conn pointer is referenced and simply assumed to be
valid. This adds an extra check, since analysis indicates there is a risk
that we can end up in these states with a NULL pointer there. I also found
that "connmon" did not get initialized properly before use, so I use the big
hammer and make sure we always clear the entire struct, to avoid any problem
like this in the future.
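To make the "clear the entire struct" approach concrete, here is a minimal
sketch of the pattern; the struct and field names are illustrative stand-ins,
not curl's actual connmon definition:

    /* Sketch of the defensive pattern: zero the whole monitoring struct
       before use so every member starts in a known state. The names here
       are illustrative, not curl's actual definitions. */
    #include <string.h>

    struct conn_monitor {
      int inuse;          /* non-zero while monitoring is active */
      long bytes_seen;    /* counters start at 0 after the memset */
    };

    static void connmon_init(struct conn_monitor *mon)
    {
      memset(mon, 0, sizeof(*mon));  /* the "big hammer": clear everything */
    }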
Two commits ago we fixed a bug where the connection would be closed
prematurely after a HEAD request. Now I added connection-monitor to test 48
and added a second HEAD, to verify that both are sent over the same
connection.
This triggered a failure before the bug fix and works now. It will help us
avoid a future regression of this kind.
This makes verification easier and gives us more confidence that curl closes
the connection only at the correct point in time. Adjusted tests 206 and 1008
accordingly and updated the docs for it.
A HEAD response has no body, but it gets the same headers the corresponding
GET response would, so it should not cause the connection to be closed
afterwards based on the same rules. This mistake caused connections that
performed a HEAD request to get closed far too often, without a valid reason.
Bug: http://curl.haxx.se/bug/view.cgi?id=3542731
Reported by: Eelco Dolstra
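The rule can be illustrated with a small sketch; this is a hypothetical
helper, not curl's actual code, showing that the missing body after a HEAD
is expected and must not be treated as a reason to close the connection:

    /* Hypothetical illustration: decide whether to close after a response.
       For HEAD, the absence of a body is normal, so only the headers (such
       as an explicit Connection: close) may force a close. */
    #include <stdbool.h>
    #include <string.h>

    static bool must_close(const char *method, bool connection_close_header)
    {
      if(connection_close_header)
        return true;               /* server asked for it explicitly */
      if(strcmp(method, "HEAD") == 0)
        return false;              /* missing body is expected, not an error */
      /* for other methods, body/length checks would follow here */
      return false;
    }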
1 - str2offset() no longer accepts negative numbers, since offsets are by
nature positive.
2 - introduced str2unum() for the command line parser, for numerical values
that are not supposed to be negative, so that it properly complains about
apparent bad uses and mistakes (see the sketch below).
Bug: http://curl.haxx.se/mail/archive-2012-07/0013.html
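A hedged sketch of what a str2unum()-style parser looks like; this
illustrates the idea rather than reproducing the exact code added to curl:

    /* Illustrative parser: rejects negative input outright and reports
       overflow or trailing junk, so obviously bad command line values get
       a proper error instead of being silently accepted. */
    #include <errno.h>
    #include <stdlib.h>

    /* returns 0 on success and stores the value, non-zero on bad input */
    static int str2unum(long *val, const char *str)
    {
      char *end;
      long num;
      errno = 0;
      num = strtol(str, &end, 10);
      if(errno == ERANGE || end == str || *end)
        return 1;      /* overflow, empty input or trailing junk */
      if(num < 0)
        return 1;      /* negative values are not accepted */
      *val = num;
      return 0;
    }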
Since the cookies are sorted by the length of their paths, having several
cookies with the same path length makes the test depend on the order in
which the qsort() implementation happens to leave them, as seen in the
windows/msys output posted by Guenter in this posting:
http://curl.haxx.se/mail/lib-2012-07/0105.html
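Why ties are unpredictable, in a nutshell: qsort() is not required to be
stable, so a comparator that only looks at path length leaves the relative
order of equal-length paths up to the implementation. A simplified
illustration, not curl's actual cookie code:

    #include <stdlib.h>
    #include <string.h>

    struct cookie {
      const char *name;
      const char *path;
    };

    /* longer paths sort first; equal lengths compare as 0, so their
       relative order is whatever the qsort() implementation produces */
    static int by_path_length(const void *a, const void *b)
    {
      size_t la = strlen(((const struct cookie *)a)->path);
      size_t lb = strlen(((const struct cookie *)b)->path);
      if(la < lb)
        return 1;
      if(la > lb)
        return -1;
      return 0;
    }

    /* usage: qsort(cookies, count, sizeof(struct cookie), by_path_length); */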
The function https_getsock was only implemented properly when USE_SSLEAY or
USE_GNUTLS is defined, but it is also necessary for USE_SCHANNEL.
The problem occurs when Curl_read_plain or Curl_write_plain returns
CURLE_AGAIN. In that case CURLE_OK is returned to the multi interface, the
used socket is set to state CURL_POLL_REMOVE and the easy state is set to
CURLM_STATE_PROTOCONNECT. This is fine, because later the socket should be
set to CURL_POLL_IN or CURL_POLL_OUT via multi_getsock. That is where
https_getsock is called, but it does not return any sockets.
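A rough sketch of the shape such a getsock function has in curl's TLS
backends of that era; the signature, state names and macros here are
assumptions based on the OpenSSL backend, the code relies on curl's internal
headers, and it is not the actual Schannel patch:

    /* Report the connection's socket with a read or write bit while the
       TLS handshake is still in progress, so the multi interface keeps
       polling it instead of removing it. Requires curl's internal headers
       (urldata.h etc.) for these types and macros. */
    static int https_getsock(struct connectdata *conn,
                             curl_socket_t *socks,
                             int numsocks)
    {
      struct ssl_connect_data *connssl = &conn->ssl[FIRSTSOCKET];

      if(numsocks < 1)
        return GETSOCK_BLANK;

      if(connssl->connecting_state == ssl_connect_2_writing) {
        socks[0] = conn->sock[FIRSTSOCKET];
        return GETSOCK_WRITESOCK(0);  /* wait until we can write again */
      }
      if(connssl->connecting_state == ssl_connect_2_reading) {
        socks[0] = conn->sock[FIRSTSOCKET];
        return GETSOCK_READSOCK(0);   /* wait for more handshake data */
      }
      return GETSOCK_BLANK;
    }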
Since WinSSL cannot be built without SSPI being enabled,
USE_WINSSL now defaults to the value of USE_SSPI.
The makefile now raises an error if WinSSL is enabled
while SSPI is disabled.
Renamed the external parameter USE_SSPI = yes/no to ENABLE_SSPI = yes/no.
Backwards compatible change: USE_SSPI can still be passed as an external
parameter with a yes/no value as long as ENABLE_SSPI is not given.
USE_x defines are passed around with true/false values internally; USE_SSPI
is now aligned with this approach, but still accepts external yes/no values,
just like the other defines.
- Changed whitespace usage to line up with the rest of the file
- Renamed SSPI/IPV6_CFLAGS to CFLAGS_SSPI/IPV6 to be
  consistent with the other CFLAGS_x variables
- Make use of the existing CFLAGS_IPV6 (previously IPV6_CFLAGS)
  instead of appending directly to CFLAGS
The code was printing a warning when SNI was set up successfully. Oops.
Printing the cipher number in verbose mode was something only TLS/SSL
programmers might understand, so I had it print the name of the cipher,
just like in the OpenSSL code. That'll be at least a little bit easier
to understand. The SecureTransport API doesn't have a method of getting
a string from a cipher like OpenSSL does, so I had to generate the
strings manually.
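For illustration, a trimmed sketch of such a manual lookup, mapping IANA
cipher suite code points to their names; the actual change covers far more
suites and works from the numeric value Secure Transport reports for the
negotiated cipher:

    #include <stdint.h>

    /* Map a numeric cipher suite value to a human-readable name.
       Only a few well-known IANA code points are shown here. */
    static const char *cipher_name(uint16_t suite)
    {
      switch(suite) {
      case 0x002F:
        return "TLS_RSA_WITH_AES_128_CBC_SHA";
      case 0x0035:
        return "TLS_RSA_WITH_AES_256_CBC_SHA";
      case 0x000A:
        return "TLS_RSA_WITH_3DES_EDE_CBC_SHA";
      default:
        return "TLS_UNKNOWN_CIPHER";
      }
    }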