Fixes a namespace pollution, at the cost that programs using one of these
defines will no longer compile. However, the vast majority of libcurl
programs that do multipart formposts use curl_formadd() to build this
list.
Closes #506
This reverts commit 370ee919b3.
Issue #509 has all the details but it was confirmed that the crash was
not due to this, so the previous commit was wrong.
Removed wrong assert()s
The 'conn' passed in as userdata can be used and there can be other
sessionhandles ('data') than the single one this checked for.
introduced in c6aedf680f. It needs to be CURLM_STATE_LAST big since it
must handle the range 0 .. CURLM_STATE_MSGSENT (18) and CURLM_STATE_LAST
is 19 right now.
Reported-by: Dan Fandrich
Bug: http://curl.haxx.se/mail/lib-2015-10/0069.html
... and assign it from the set.fread_func_set pointer in the
Curl_init_CONNECT function. This A) avoids that we have code that
assigns fields in the 'set' struct (which we always knew was bad) and
more importantly B) it makes it impossible to accidentally leave the
wrong value for when the handle is re-used etc.
Introducing a state-init functionality in multi.c, so that we can set a
specific function to get called when we enter a state. The
Curl_init_CONNECT is thus called when switching to the CONNECT state.
Bug: https://github.com/bagder/curl/issues/346
Closes #346
sk_X509_pop will decrease the size of the stack which means that the loop would
end after having added only half of the certificates.
Also make sure that the X509 certificate is freed in case
SSL_CTX_add_extra_chain_cert fails.
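For illustration, a sketch of the resulting pattern (variable names
illustrative, not the literal patch): since sk_X509_pop() shrinks the
stack, the count has to be taken before the loop instead of being
re-evaluated in the loop condition:

  int i, count = sk_X509_num(chain);
  for(i = 0; i < count; i++) {
    X509 *x = sk_X509_pop(chain);
    if(!SSL_CTX_add_extra_chain_cert(ctx, x)) {
      /* ownership was not taken on failure, so free it here */
      X509_free(x);
      break;
    }
  }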
- If a CURLINFO option is unknown return CURLE_UNKNOWN_OPTION.
Prior to this change CURLE_BAD_FUNCTION_ARGUMENT was returned on
unknown. That return value is contradicted by the CURLINFO option
documentation which specifies a return of CURLE_UNKNOWN_OPTION on
unknown.
- Change algorithm init to happen after OpenSSL config load.
Additional algorithms may be available due to the user's config so we
initialize the algorithms after the user's config is loaded.
Bug: https://github.com/bagder/curl/issues/447
Reported-by: Denis Feklushkin
For a single-stream download from localhost, we managed to increase
transfer speed from 1.6MB/sec to around 400MB/sec, mostly because of
this single fix.
... only call it when there is data arriving for another handle than the
one that is currently driving it.
Improves single-stream download performance quite a lot.
Thanks-to: Tatsuhiro Tsujikawa
Bug: http://curl.haxx.se/mail/lib-2015-09/0097.html
If GnuTLS fails to read the certificate then include whatever reason it
provides in the failure message reported to the client.
Signed-off-by: Mike Crowe <mac@mcrowe.com>
The gnutls vtls back-end was previously ignoring any password set via
CURLOPT_KEYPASSWD. Presumably this was because
gnutls_certificate_set_x509_key_file did not support encrypted keys.
gnutls now has a gnutls_certificate_set_x509_key_file2 function that
does support encrypted keys. Let's determine at compile time whether the
available gnutls supports this new function. If it does then use it to
pass the password. If it does not then emit a helpful diagnostic if a
password is set. This is preferable to the previous behaviour of just
failing to read the certificate without giving a reason in that case.
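A sketch of the resulting compile-time split (the HAVE_ macro and the
variable names are illustrative, not the literal patch):

  #ifdef HAVE_GNUTLS_CERTIFICATE_SET_X509_KEY_FILE2
    rc = gnutls_certificate_set_x509_key_file2(cred, certfile, keyfile,
                                               GNUTLS_X509_FMT_PEM,
                                               key_passwd, 0);
  #else
    if(key_passwd) {
      failf(data, "gnutls lacks support for encrypted key files");
      return CURLE_SSL_CONNECT_ERROR;
    }
    rc = gnutls_certificate_set_x509_key_file(cred, certfile, keyfile,
                                              GNUTLS_X509_FMT_PEM);
  #endif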
Signed-off-by: Mike Crowe <mac@mcrowe.com>
... even for those that don't support providing anything in the
'internals' struct member since it offers a convenient way for
applications to figure this out.
- Change the designator name we use to show the base64 encoded sha256
hash of the server's public key from 'pinnedpubkey' to
'public key hash'.
Though the server's public key hash is only shown when comparing pinned
public key hashes, the server's hash may not match any of the pinned
ones.
Without this workaround, NSS re-uses a session cache entry even though
the server name does not match. This causes the SNI host name to differ
from the actual host name. Consequently, certain servers (e.g.
github.com) respond with 400 to such requests.
Bug: https://bugzilla.mozilla.org/1202264
If the port number in the proxy string ends weirdly or the number is
too large, skip it. Mostly as a means to bail out early if a "bare" IPv6
numerical address is used without enclosing brackets.
Also mention the bracket requirement for IPv6 numerical addresses to the
man page for CURLOPT_PROXY.
Closes #415
Reported-by: Marcel Raad
In some timing-dependent cases when a 4xx response immediately followed
after a 150 when a STOR was issued, this function would wrongly return
'complete == true' while 'wait_data_conn' was still set.
Closes #405
Reported-by: Patricia Muscalu
RFC 7540 section 8.1.2.2 states: "An endpoint MUST NOT generate an
HTTP/2 message containing connection-specific header fields; any message
containing connection-specific header fields MUST be treated as
malformed"
Closes #401
Introduced in commit 59f3f92ba6 this function is only implemented when
CURL_DISABLE_CRYPTO_AUTH is not defined. As such we shouldn't define
the function in the header file either.
This patch addresses known bug #76, where on 64-bit Windows SOCKET is 64
bits wide, but long is only 32, making CURLINFO_LASTSOCKET unreliable.
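A minimal usage sketch of the curl_socket_t-based getinfo this patch
introduces (CURLINFO_ACTIVESOCKET), which avoids the truncation:

  curl_socket_t sockfd;
  if(curl_easy_getinfo(curl, CURLINFO_ACTIVESOCKET, &sockfd) == CURLE_OK
     && sockfd != CURL_SOCKET_BAD) {
    /* sockfd carries the full 64-bit SOCKET on Windows, untruncated */
  }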
Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
- Add new option CURLOPT_DEFAULT_PROTOCOL to allow specifying a default
protocol for schemeless URLs.
- Add new tool option --proto-default to expose
CURLOPT_DEFAULT_PROTOCOL.
In the case of schemeless URLs libcurl will behave in this way:
When the option is used libcurl will use the supplied default.
When the option is not used, libcurl will follow its usual plan of
guessing from the hostname and falling back to 'http'.
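A minimal usage sketch:

  curl_easy_setopt(curl, CURLOPT_DEFAULT_PROTOCOL, "https");
  /* no scheme in the URL, so the default above is used */
  curl_easy_setopt(curl, CURLOPT_URL, "example.com");

and equivalently from the tool:

  curl --proto-default https example.com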
If strict certificate checking is disabled (CURLOPT_SSL_VERIFYPEER
and CURLOPT_SSL_VERIFYHOST are disabled) do not fail if the server
doesn't present a certificate at all.
Closes #392
The multi state machine would otherwise go into the DO_MORE state after
DO, even for the case when the FTP state machine had already performed
those duties, which caused libcurl to get stuck in that state and fail
miserably. This occurred for active FTP uploads.
Reported-by: Patricia Muscalu
Visual Studio complains with a message box:
"Run-Time Check Failure #1 - A cast to a smaller data type has caused a
loss of data. If this was intentional, you should mask the source of
the cast with the appropriate bitmask.
For example:
char c = (i & 0xFF);
Changing the code in this way will not affect the quality of the
resulting optimized code."
This is because only 'val' is cast to unsigned char, so the "& 0xff" has
no effect.
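Sketched with illustrative names, the difference is only in where the
parentheses put the mask:

  c = (unsigned char) val & 0xff;   /* cast binds tighter: the mask is a
                                       no-op and the check fires on the
                                       cast itself */
  c = (unsigned char) (val & 0xff); /* mask first, then a lossless cast */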
Closes #387
Return 0 instead of NGHTTP2_ERR_CALLBACK_FAILURE if we can't locate the
SessionHandle. Apparently mod_h2 will sometimes send a frame for a
stream_id we're finished with.
Use nghttp2_session_get_stream_user_data and
nghttp2_session_set_stream_user_data to identify SessionHandles instead
of a hash.
Closes #372
Currently, when the server responds with 401 on an NTLM authenticated
connection (re-used), we consider it to have failed. However, this is
legitimate and may happen when, for example, IIS is configured for
'authPersistSingleRequest' or when the request goes through a proxy
(with a 'via' header).
Implemented by employing an additional state once a connection is
re-used, to indicate that if we receive a 401 we need to restart
authentication.
Closes #363
The SSH state machine didn't clear the 'rc' variable appropriately in
two places, which prevented it from looping the way it should. And it
lacked an 'else' statement that made it possible to erroneously get
stuck in the SSH_AUTH_AGENT state.
Reported-by: Tim Stack
Closes #357
connect.c:953:5: warning: initializer element is not computable at load
time
connect.c:953:5: warning: missing initializer for field 'dwMinorVersion'
of 'OSVERSIONINFOEX'
curl_sspi.c:97:5: warning: initializer element is not computable at load
time
curl_sspi.c:97:5: warning: missing initializer for field 'szCSDVersion'
of 'OSVERSIONINFOEX'
Otherwise it would never be called for an HTTP/2 connection, which has
its own disconnect handler.
I spotted this while debugging <https://bugzilla.redhat.com/1248389>
where the http_disconnect() handler was called on an FTP session handle
causing 'dnf' to crash. conn->data->req.protop of type (struct FTP *)
was reinterpreted as type (struct HTTP *) which resulted in SIGSEGV in
Curl_add_buffer_free() after printing the "Connection cache is full,
closing the oldest one." message.
A previously working version of libcurl started to crash after it was
recompiled with HTTP/2 support, even though the HTTP/2 protocol was not
actually used. This commit makes it work again, although I suspect the
root cause (reinterpreting session handle data of incompatible protocol)
still has to be fixed. Otherwise the same will happen when mixing FTP
and HTTP/2 connections and exceeding the connection cache limit.
Reported-by: Tomas Tomecek
Bug: https://bugzilla.redhat.com/1248389
Currently, libcurl rejects responses with "Content-Encoding: compress"
when CURLOPT_ACCEPT_ENCODING is set to "". I think that libcurl should
treat the Content-Encoding "compress" the same as other
Content-Encodings that it does not support, e.g. "bzip2". That means
just ignoring it.
MSVC 12 complains:
lib\vtls\openssl.c(1554): warning C4701: potentially uninitialized
local variable 'verstr' used
It's a false positive, but as it normally is not, I have enabled
warning-as-error for that warning.
New tool option --ssl-no-revoke.
New value CURLSSLOPT_NO_REVOKE for CURLOPT_SSL_OPTIONS.
Currently this option applies only to WinSSL where we have automatic
certificate revocation checking by default. According to the
ssl-compared chart there are other backends that have automatic checking
(NSS, wolfSSL and DarwinSSL) so we could possibly accommodate them at
some later point.
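A minimal usage sketch:

  curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS, (long)CURLSSLOPT_NO_REVOKE);

and equivalently from the tool:

  curl --ssl-no-revoke https://example.com/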
Bug: https://github.com/bagder/curl/issues/264
Reported-by: zenden2k <zenden2k@gmail.com>
Static analysis indicated that my commit 9008f3d564 ("ntlm_wb: Fix
hard-coded limit on NTLM auth packet size") introduced a potential
memory leak on an error path, because we forget to free the buffer
before returning an error.
Fix this.
Although actually, it never happens in practice because we never *get*
here with state == NTLMSTATE_TYPE1. The state is always zero. That
might want cleaning up in a separate patch.
Reported-by: Terri Oda
setup-vms.h: More symbols for SHA256, hacks for older VAX
openssl.h: Use OpenSSL OPENSSL_NO_SHA256 macro to allow building on VAX.
openssl.c: Use OpenSSL version checks and OPENSSL_NO_SHA256 macro to
allow building on VAX and 64 bit VMS.
setup-vms.h: Symbol case fixups submitted by Michael Steve
build_gnv_curl_pcsi_desc.com: VSI, aka VMS Software, is now the
supplier of new versions of VMS. The install kit needs to accept
VSI as a producer.
Since application code does a prefix match of the given header against
header name/value pairs in the format "NAME:VALUE", and the VALUE part
can contain ":", we have to be careful about the existence of ":" in
the header parameter. ":" should be allowed to match an HTTP/2
pseudo-header field, but any other use of ":" in a header must be
treated as an error, making curl_pushheader_byname return NULL. This
commit implements this behaviour.
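A sketch of the resulting contract, as seen from a server push callback
(callback and variable names illustrative):

  static int push_cb(CURL *parent, CURL *easy, size_t num_headers,
                     struct curl_pushheaders *headers, void *userp)
  {
    /* a leading ':' matches an HTTP/2 pseudo-header field */
    char *path = curl_pushheader_byname(headers, ":path");
    /* a ':' inside the name must make the lookup return NULL, even
       though it would prefix-match a stored "NAME:VALUE" pair */
    char *bogus = curl_pushheader_byname(headers, "host:example.com");
    return CURL_PUSH_OK;
  }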
In 3013bb6 I had changed cookie export to ignore any-domain cookies,
however the logic I used to do so was incorrect, and would lead to a
busy loop in the case of exporting a cookie list that contained
any-domain cookies. Worse, in that case the other cookies would not be
written either, resulting in an empty file once the application is
terminated to stop the busy loop.
Make sure that the error buffer is always initialized and simplify the
use of it to make the logic easier.
Bug: https://github.com/bagder/curl/issues/318
Reported-by: sneis
The symbol SSL3_MT_NEWSESSION_TICKET appears to have been introduced at
around openssl 0.9.8f, and the use of it in lib/vtls/openssl.c breaks
builds with older openssls (certainly with 0.9.8b, which is the latest
older version I have to try with).
** WORK-AROUND **
The introduced non-blocking general behaviour for Curl_proxyCONNECT()
didn't work for the data connection establishment unless it was very
fast. The newly introduced function argument makes it operate in a more
blocking manner, more like it used to work in the past. This blocking
approach is only used when the FTP data connection goes through an HTTP
proxy.
Blocking like this is bad. A better fix would make it work more
asynchronously.
Bug: https://github.com/bagder/curl/issues/278
This commit is several drafts squashed together. The changes from each
draft are noted below. If any changes are similar and possibly
contradictory the change in the latest draft takes precedence.
Bug: https://github.com/bagder/curl/issues/244
Reported-by: Chris Araman
%%
%% Draft 1
%%
- return 0 if len == 0. that will have to be documented.
- continue on and process the caches regardless of raw recv
- if decrypted data will be returned then set the error code to CURLE_OK
and return its count
- if decrypted data will not be returned and the connection has closed
(eg nread == 0) then return 0 and CURLE_OK
- if decrypted data will not be returned and the connection *hasn't*
closed then set the error code to CURLE_AGAIN --only if an error code
isn't already set-- and return -1
- narrow the Win2k workaround to only Win2k
%%
%% Draft 2
%%
- Trying out a change in flow to handle corner cases.
%%
%% Draft 3
%%
- Back out the lazier decryption change made in draft2.
%%
%% Draft 4
%%
- Some formatting and branching changes
- Decrypt all encrypted cached data when len == 0
- Save connection closed state
- Change special Win2k check to use connection closed state
%%
%% Draft 5
%%
- Default to CURLE_AGAIN in cleanup if an error code wasn't set and the
connection isn't closed.
%%
%% Draft 6
%%
- Save the last error only if it is an unrecoverable error.
Prior to this I saved the last error state in all cases; unfortunately
the logic to cover that in all cases would lead to some muddle and I'm
concerned that could then lead to a bug in the future so I've replaced
it by only recording an unrecoverable error and that state will persist.
- Do not recurse on renegotiation.
Instead we'll continue on to process any trailing encrypted data
received during the renegotiation only.
- Move the err checks in cleanup after the check for decrypted data.
In either case decrypted data is always returned but I think it's easier
to understand when those err checks come after the decrypted data check.
%%
%% Draft 7
%%
- Regardless of len value go directly to cleanup if there is an
unrecoverable error or a close_notify was already received. Prior to
this change we only acknowledged those two states if len != 0.
- Fix a bug in connection closed behavior: Set the error state in the
cleanup, because we don't know for sure it's an error until that time.
- (Related to above) In the case the connection is closed go "greedy"
with the decryption to make sure all remaining encrypted data has been
decrypted even if it is not needed at that time by the caller. This is
necessary because we can only tell if the connection closed gracefully
(close_notify) once all encrypted data has been decrypted.
- Do not renegotiate when an unrecoverable error is pending.
%%
%% Draft 8
%%
- Don't show 'server closed the connection' info message twice.
- Show an info message if server closed abruptly (missing close_notify).
Some servers will request a client certificate, but not require one.
This change allows libcurl to connect to such servers when using
schannel as its ssl/tls backend. When a server requests a client
certificate, libcurl will now continue the handshake without one,
rather than terminating the handshake. The server can then decide
if that is acceptable or not. Prior to this change, libcurl would
terminate the handshake, reporting a SEC_I_INCOMPLETE_CREDENTIALS
error.
and a conversion to markdown. Removed the lib/README.* files. The idea
being to move toward having INTERNALS as the one and only "book" of
internals documentation.
Added a TOC to top of the document.
When CURL_SOCKET_BAD is returned in the callback, it should be treated
as an error (CURLE_COULDNT_CONNECT) if no other socket is subsequently
created when trying to connect to a server.
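A sketch of the callback in question (presumably the
CURLOPT_OPENSOCKETFUNCTION callback; names illustrative):

  static curl_socket_t open_cb(void *clientp, curlsocktype purpose,
                               struct curl_sockaddr *addr)
  {
    /* returning CURL_SOCKET_BAD here now results in
       CURLE_COULDNT_CONNECT if no other socket gets created */
    return socket(addr->family, addr->socktype, addr->protocol);
  }

  curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, open_cb);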
Bug: http://curl.haxx.se/mail/lib-2015-06/0047.html
- Try building a chain using issuers in the trusted store first to avoid
problems with server-sent legacy intermediates.
Prior to this change server-sent legacy intermediates with missing
legacy issuers would cause verification to fail even if the client's CA
bundle contained a valid replacement for the intermediate and an
alternate chain could be constructed that would verify successfully.
https://rt.openssl.org/Ticket/Display.html?id=3621&user=guest&pass=guest
Prior to this change any-domain cookies (cookies without a domain that
are sent to any domain) were exported with domain name "unknown".
Bug: https://github.com/bagder/curl/issues/292
Follow-up to e8423f9ce1 with discussion in
https://github.com/bagder/curl/pull/258
This check scans for fopen() with a mode string without 'b' present, as
it may indicate that an FOPEN_* define should rather be used.
- Change fopen calls to use FOPEN_READTEXT instead of "r" or "rt"
- Change fopen calls to use FOPEN_WRITETEXT instead of "w" or "wt"
This change is to explicitly specify when we need to read/write text.
Unfortunately 't' is not part of POSIX fopen so we can't specify it
directly. Instead we now have FOPEN_READTEXT, FOPEN_WRITETEXT.
Prior to this change we had an issue on Windows if an application that
uses libcurl overrides the default file mode to binary. The default file
mode in Windows is normally text mode (translation mode) and that's what
libcurl expects.
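A sketch of the defines (the exact conditionals in curl_setup.h may
differ; the point is the Windows/POSIX split):

  #ifdef WIN32
  #define FOPEN_READTEXT  "rt"
  #define FOPEN_WRITETEXT "wt"
  #else
  #define FOPEN_READTEXT  "r"
  #define FOPEN_WRITETEXT "w"
  #endif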
Bug: https://github.com/bagder/curl/pull/258#issuecomment-107093055
Reported-by: Orgad Shaneh
Previously, after seeing the upgrade to HTTP/2, we fed data followed by
the upgrade response headers directly to nghttp2_session_mem_recv() in
Curl_http2_switched(). But it turns out that the passed buffer, mem, is
part of stream->mem, and callbacks called by nghttp2_session_mem_recv()
will write stream-specific data into stream->mem, overwriting the input
data. This corrupts the input, and most likely a frame length error is
detected by the nghttp2 library. The fix is to first copy the passed
data to the HTTP/2 connection buffer, httpc->inbuf, and then call
nghttp2_session_mem_recv().
Coverity CID 1299424 identified dead code because of checks that could
never equal true (if the mechanism's name was NULL).
Simplified the function by removing a level of pointers and removing the
loop and array that weren't used.
Replace use of assert with code that properly catches bad input at
run-time even in non-debug builds.
This flaw was sort of detected by Coverity CID 1299425 which claimed the
"case RTSPREQ_NONE" was dead code.
Coverity CID 1299426 warned about possible NULL dereference otherwise,
but that would only ever happen if we get invalid HTTP/2 data with
frames for stream 0. Avoid this risk by returning early when stream 0 is
used.
Prior to this change the description for SEC_E_ILLEGAL_MESSAGE was OS
and language specific, and invariably translated to something not very
helpful like: "The message received was unexpected or badly formatted."
Bug: https://github.com/bagder/curl/issues/267
Reported-by: Michael Osipov
With many easy handles using the same connection for multiplexing, it is
important we store and keep the transfer-oriented stuff in the
SessionHandle so that callbacks and callback data work fine even when
many easy handles share the same physical connection.
Previously, when we had sent all the given buffer in
data_source_callback, we returned NGHTTP2_ERR_DEFERRED, and the nghttp2
library removed this stream temporarily from writing. This by itself is
good. But if this is the sole stream in the session,
nghttp2_session_want_write() returns zero, which means that libcurl
does not check writeability of the underlying socket. This leads to
very slow uploads, because curl then only seems to upload some 16KB per
second. To fix this, if we still have data to send, call
nghttp2_session_resume_data after nghttp2_session_send. This makes
nghttp2_session_want_write() return nonzero (if the connection window
is still open), and as a result socket writeability is checked and the
upload speed becomes normal.
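A sketch of the added call flow (the condition and struct member names
are illustrative):

  rv = nghttp2_session_send(httpc->h2);
  if(rv == 0 && upload_left)
    /* take the stream out of its deferred state so that
       nghttp2_session_want_write() reports nonzero again */
    nghttp2_session_resume_data(httpc->h2, stream->stream_id);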
Stop curl from failing when a non-fatal alert is received during
handshake. This e.g. fixes lots of problems when working with https
sites through proxies.
BoringSSL removed support for direct callers of SSL_CTX_callback_ctrl
and SSL_CTX_ctrl, so move to a way that should work on BoringSSL and
OpenSSL.
re #275
Error: CLANG_WARNING:
lib/http.c:173:16: warning: Value stored to 'http' during its initialization is never read
Error: COMPILER_WARNING:
lib/http.c: scope_hint: In function ‘http_disconnect’
lib/http.c:173:16: warning: unused variable ‘http’ [-Wunused-variable]
.. also make __func__ replacement in multi.
Prior to this change debug builds would fail to build if the compiler
was building pre-c99 and didn't support __func__.
We could get a stream ID not in the hash in on_stream_close. For
example, if we decided to reject a stream (e.g., PUSH_PROMISE), then we
don't create a stream and store it in the hash with its stream ID.
This commit requires nghttp2 v1.0.0 to compile, migrates to v1.0.0,
and utilizes the recent version of nghttp2 to simplify the code.
First, we use the nghttp2_option_set_no_recv_client_magic function to
detect nghttp2 v1.0.0. That function only exists since v1.0.0.
Since nghttp2 v0.7.5, nghttp2 ensures header field ordering and
validates received header fields. If it finds an error, RST_STREAM
with PROTOCOL_ERROR is issued. Since we require v1.0.0, we can utilize
this feature to simplify the libcurl code. This commit does this.
Migration from the 0.7 series is done based on the nghttp2 migration
document. For libcurl, we removed the code sending the first 24 bytes
of client magic. It is now done by the nghttp2 library.
The on_invalid_frame_recv callback signature changed and is updated
accordingly.
to allow code to act differently on the situation.
Also added some more info messages for the connection re-use function
to make it clearer when connections are not re-used.
Previously, when we paused because we ran out of buffer, we just threw
away unread data in the connection buffer. This broke protocol framing,
and I saw occasional FRAME_SIZE_ERROR. This commit fixes the issue by
remembering how much data was read, and in the next iteration we
process the remaining data.
This commit fixes the bug that streams get stuck if a stream gets some
DATA and stream->closed becomes true at the same time. Previously, in
this condition, after we processed the DATA we would go on to try to
read data from the underlying transport, but there is no data, and we
get EAGAIN. There was no code path to evaluate stream->closed.
... from the connection struct. The stream one being the 'struct HTTP'
which is kept in the SessionHandle struct (easy handle).
lookup streams for incoming frames in the stream hash; hashing is based
on the stream id and we get the SessionHandle for the incoming stream
that way.
Previously we counted all connections to a specific host name and that
would be used for the CURLMOPT_MAX_HOST_CONNECTIONS check for example,
while servers on different port numbers are normally considered
different "origins" on the web and should thus be considered different
hosts.
All the existing Curl_bundle* functions were only ever used from within
the conncache.c file, so I moved them over and made them static (and
removed the Curl_ prefix).
This avoids unnecessary dynamic allocs and as this also removed the last
users of *hash_alloc() and *hash_destroy(), those two functions are now
removed.
The OpenSSL trace callback is wonderfully undocumented but given a
journey in the source code, it seems the cases where ssl_ver is zero
don't follow the same pattern and thus turned out confusing and
misleading. For now, we skip doing any CURLINFO_TEXT logging on those
but keep sending them as CURLINFO_SSL_DATA_OUT/IN.
Also, I added direction to the text info and I edited some functions
slightly.
Bug: https://github.com/bagder/curl/issues/219
Reported-by: Jay Satiro, Ashish Shukla
- update default versions of dependencies (except for rare/old platforms)
- update urls
- sync examples makefiles with main ones
- remove line ending space
Make the HTTP headers separated by default for improved security and
reduced risk for information leakage.
Bug: http://curl.haxx.se/docs/adv_20150429.html
Reported-by: Yehezkel Horowitz, Oren Souroujon
When doing HTTP requests Negotiate authenticated, the entire connection
may become authenticated and not just the specific HTTP request which is
otherwise how HTTP works, as Negotiate can basically use NTLM under the
hood. curl was not adhering to this fact but would assume that such
requests would also be authenticated per request.
CVE-2015-3148
Bug: http://curl.haxx.se/docs/adv_20150422B.html
Reported-by: Isaac Boukris
If a URL is given with a zero-length host name, like in "http://:80" or
just ":80", `fix_hostname()` will index the host name pointer with a -1
offset (as it blindly assumes a non-zero length) and both read and
assign that address.
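A sketch of the kind of flawed pattern described (stripping a trailing
dot; not the literal code):

  size_t len = strlen(host);
  if(host[len - 1] == '.')   /* with len == 0 this reads host[-1]... */
    host[len - 1] = 0;       /* ...and writes to that address too */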
CVE-2015-3144
Bug: http://curl.haxx.se/docs/adv_20150422D.html
Reported-by: Hanno Böck
The internal libcurl function called sanitize_cookie_path() that cleans
up the path element as given to it from a remote site or when read from
a file, did not properly validate the input. If given a path that
consisted of a single double-quote, libcurl would index a newly
allocated memory area with index -1 and assign a zero to it, thus
destroying heap memory it wasn't supposed to.
CVE-2015-3145
Bug: http://curl.haxx.se/docs/adv_20150422C.html
Reported-by: Hanno Böck
At some point, Firefox has changed and generates different directory
names for the default profile that made this script fail to find them.
Bug: https://github.com/bagder/curl/issues/207
Reported-by: sneakyimp
Add 'gdi32' and 'crypt32' Windows implibs to avoid failure
while building libcurl.dll using the mingw compiler.
The same logic is used in 'src/makefile.m32' when
building curl.exe.
The factor of 8 is a bytes-to-bits conversion factor, but pkt_size and
rate_bps are both in bytes. When using the rate limiting option, curl
waits 8 times too long, and then transfers very quickly until the
average rate reaches the limit. The average rate follows the limit over
time, but the actual traffic is bursty.
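Sketched with illustrative names, the wait computation before and
after:

  wait_ms = 1000 * pkt_size * 8 / rate_bps;  /* 8x too long: both values
                                                are already in bytes */
  wait_ms = 1000 * pkt_size / rate_bps;      /* corrected */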
Thanks-to: Benjamin Gilbert
The key length in bits will always fit in an unsigned long so the
loss-of-data warning assigning the result of x64 pointer arithmetic to
an unsigned long is unnecessary.
Prior to this change libcurl could show multiple 'CyaSSL: Connecting to'
messages since cyassl_connect_step2 is called multiple times, typically.
The message is superfluous even once since libcurl already informs the
user elsewhere in code that it is connecting.
- cache entries must also be refreshed when they are in use
- have the cache count as an inuse reference too, freeing the
  timestamp == 0 special value
- use timestamp == 0 for CURLOPT_RESOLVE entries, which don't get
  refreshed
- remove the CURLOPT_RESOLVE special inuse reference (timestamp == 0
  will prevent refresh)
- fix Curl_hostcache_clean - CURLOPT_RESOLVE entries don't have a
  special reference anymore, and it would also release
  non-CURLOPT_RESOLVE references
- fix locking in Curl_hostcache_clean
- fix unit1305.c: hash now keeps a reference, need to set inuse = 1
This change is to allow the user's CTX callback to change the minimum
protocol version in the CTX without us later overriding it, as we did
prior to this change.
SSL_CTX_load_verify_locations can return negative values on fail,
therefore to check for failure we check if load is != 1 (success)
instead of if load is == 0 (failure), the latter being incorrect given
that behavior.
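That is, sketched:

  int rc = SSL_CTX_load_verify_locations(ctx, ca_file, ca_path);
  if(rc != 1) {
    /* failure: covers both 0 and the possible negative returns */
  }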
Previously in Curl_http2_switched, we called nghttp2_session_mem_recv to
parse incoming data which were already received while curl was handling
upgrade. But we didn't call nghttp2_session_send, which made curl not
send any response to the received frames. Most likely, we
received SETTINGS from server at this point, so we missed opportunity to
send SETTINGS + ACK. This commit adds missing nghttp2_session_send call
in Curl_http2_switched to fix this issue.
Bug: https://github.com/bagder/curl/issues/192
Reported-by: Stefan Eissing
"name =value" is fine and the space should just be skipped.
Updated test 31 to also test for this.
Bug: https://github.com/bagder/curl/issues/195
Reported-by: cromestant
Help-by: Frank Gevaerts
(Curl_cyassl_init)
- Return 1 on success, 0 on failure.
Prior to this change the fail path returned an incorrect value and the
evaluation to determine whether CyaSSL_Init had succeeded was incorrect.
Ironically that combined with the way curl_global_init tests SSL library
initialization (!Curl_ssl_init()) meant that CyaSSL having been
successfully initialized would be seen as such, even though the code path
and return value in Curl_cyassl_init were wrong.
If the handle removed from the multi handle happens to be the one
"owning" the pipeline other transfers will be waiting indefinitely. Now
we move such handles back to connect to have them race (again) for
getting the connection and thus avoid hanging.
Bug: http://curl.haxx.se/bug/view.cgi?id=1465
Reported-by: Jiri Dvorak
... even if they don't have an associated connection anymore. It could
leave the waiting transfers pending with no active one on the
connection.
Bug: http://curl.haxx.se/bug/view.cgi?id=1465
Reported-by: Jiri Dvorak
(cyassl_connect_step1)
- Use TLS 1.0-1.2 by default when available.
CyaSSL/wolfSSL >= v3.3.0 supports setting a minimum protocol downgrade
version.
cyassl/cyassl@322f79f
This header file must be included after all header files except
memdebug.h, as it does similar memory function redefinitions and can be
similarly affected by conflicting definitions in system or dependent
library headers.
CID 1202732 warns on the previous use, although I cannot find any
problems with it. I'm doing this change only to make the code use a more
familiar approach to accomplish the same thing.
We prematurely changed the protocol handler to HTTP/2, which made
things very slow (and wrong).
Reported-by: Stefan Eissing
Bug: https://github.com/bagder/curl/issues/169
Since we just started make use of free(NULL) in order to simplify code,
this change takes it a step further and:
- converts lots of Curl_safefree() calls to good old free()
- makes Curl_safefree() not check the pointer before free()
The (new) rule of thumb is: if you really want a function call that
frees a pointer and then assigns it to NULL, then use Curl_safefree().
But we will prefer just using free() from now on.
The following functions return immediately if a null pointer was passed.
* Curl_cookie_cleanup
* curl_formfree
It is therefore not necessary for a caller to repeat a corresponding check.
This issue was fixed by using the software Coccinelle 1.0.0-rc24.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
The function "free" is documented in the way that no action shall occur for
a passed null pointer. It is therefore not needed that a function caller
repeats a corresponding check.
http://stackoverflow.com/questions/18775608/free-a-null-pointer-anyway-or-check-first
This issue was fixed by using the software Coccinelle 1.0.0-rc24.
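The transformation, in its simplest form:

  if(ptr)      /* redundant: free(NULL) is defined to do nothing */
    free(ptr);

becomes

  free(ptr);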
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Bug: https://github.com/bagder/curl/pull/168
(trynextip)
- Don't try the "other" protocol family unless IPv6 is available. In an
IPv4-only build the other family can only be IPv6 which is unavailable.
This change essentially stops IPv4-only builds from attempting the
"happy eyeballs" secondary parallel connection that is supposed to be
used by the "other" address family.
Prior to this change in IPv4-only builds that secondary parallel
connection attempt could be erroneously used by the same family (IPv4)
which caused a bug where every address after the first for a host could
be tried twice, often in parallel. This change fixes that bug. An
example of the bug is shown below.
Assume MTEST resolves to 3 addresses 127.0.0.2, 127.0.0.3 and 127.0.0.4:
* STATE: INIT => CONNECT handle 0x64f4b0; line 1046 (connection #-5000)
* Rebuilt URL to: http://MTEST/
* Added connection 0. The cache now contains 1 members
* STATE: CONNECT => WAITRESOLVE handle 0x64f4b0; line 1083
(connection #0)
* Trying 127.0.0.2...
* STATE: WAITRESOLVE => WAITCONNECT handle 0x64f4b0; line 1163
(connection #0)
* Trying 127.0.0.3...
* connect to 127.0.0.2 port 80 failed: Connection refused
* Trying 127.0.0.3...
* connect to 127.0.0.3 port 80 failed: Connection refused
* Trying 127.0.0.4...
* connect to 127.0.0.3 port 80 failed: Connection refused
* Trying 127.0.0.4...
* connect to 127.0.0.4 port 80 failed: Connection refused
* connect to 127.0.0.4 port 80 failed: Connection refused
* Failed to connect to MTEST port 80: Connection refused
* Closing connection 0
* The cache now contains 0 members
* Expire cleared
curl: (7) Failed to connect to MTEST port 80: Connection refused
The bug was born in commit bagder/curl@2d435c7.
In function Curl_closesocket() in connect.c the call to
Curl_multi_closed() was wrongly omitted if a socket close function
(CURLOPT_CLOSESOCKETFUNCTION) is registered.
That would lead to not removing the socket from the internal hash table
and not calling the multi socket callback appropriately.
Bug: http://curl.haxx.se/bug/view.cgi?id=1493
A signal handler for SIGALRM is installed in Curl_resolv_timeout. It is
configured to interrupt system calls and uses siglongjmp to return into
the function if alarm() goes off.
The signal handler is installed before curl_jmpenv is initialized.
This means that an already installed alarm timer could trigger the
newly installed signal handler, leading to undefined behavior when it
accesses the uninitialized curl_jmpenv.
Even if there is no previously installed alarm available, the code in
Curl_resolv_timeout itself installs an alarm before the environment is
fully set up. If the process is sent into suspend right after that, the
signal handler could be called too early as in previous scenario.
To fix this, the signal handler should only be installed and the alarm
timer only be set after sigsetjmp has been called.
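A sketch of the corrected ordering (handler and variable names
illustrative):

  if(sigsetjmp(curl_jmpenv, 1)) {
    /* interrupted by SIGALRM; curl_jmpenv is guaranteed valid here */
    rc = CURLRESOLV_TIMEDOUT;
    goto clean_up;
  }
  /* only now is it safe to install the handler and arm the timer */
  sigaction(SIGALRM, &sigact, &keep_sigact);
  alarm((unsigned int)timeout);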
... by using the regular Curl_http_done() method which checks for
that. This makes test 1801 fail consistently with error 56 (which
seems fine) so that test is also updated here.
Reported-by: Ben Darnell
Bug: https://github.com/bagder/curl/issues/166
This makes curl pick better (stronger) ciphers by default. The strongest
available ciphers are fine according to the HTTP/2 spec so an OpenSSL
built curl is no longer rejected by strict HTTP/2 servers.
Bug: http://curl.haxx.se/bug/view.cgi?id=1487
...after the method line:
"Since the Host field-value is critical information for handling a
request, a user agent SHOULD generate Host as the first header field
following the request-line." / RFC 7230 section 5.4
Additionally, this will also make libcurl ignore multiple specified
custom Host: headers and only use the first one. Test 1121 has been
updated accordingly.
Bug: http://curl.haxx.se/bug/view.cgi?id=1491
Reported-by: Rainer Canavan
When checking for a connection to re-use, a proxy-using request must
check for and use a proxy connection and not one based on the host
name!
Added test 1421 to verify
Bug: http://curl.haxx.se/bug/view.cgi?id=1492