- Set the user info param to the socket returned by Curl_getconnectinfo,
regardless of whether the socket is bad. Effectively this means the user
info param will now receive CURL_SOCKET_BAD instead of -1 on a bad socket.
- Remove incorrect comments.
CURLINFO_ACTIVESOCKET is documented to write CURL_SOCKET_BAD to the user
info param, but prior to this change it wrote -1.
Bug: https://github.com/bagder/curl/pull/518
Reported-by: Marcel Raad
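For reference, a minimal sketch of how an application reads the socket via
CURLINFO_ACTIVESOCKET after this change (the helper name is made up):

    #include <curl/curl.h>

    /* Sketch: fetch the connection's socket and test against
       CURL_SOCKET_BAD rather than against -1. */
    static curl_socket_t get_active_socket(CURL *curl)
    {
      curl_socket_t sockfd = CURL_SOCKET_BAD;
      if(curl_easy_getinfo(curl, CURLINFO_ACTIVESOCKET, &sockfd) != CURLE_OK)
        return CURL_SOCKET_BAD;
      return sockfd; /* CURL_SOCKET_BAD when no connection is active */
    }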
Rationale: when starting up a curl-using app, all cookies from the jar
are checked against each other. This was causing a startup delay in the
Fifth browser.
All tests pass.
Signed-off-by: Lauri Kasanen <cand@gmx.com>
Apparently there are sites out there that do redirects to URLs they
provide in plain UTF-8 or similar. Browsers and wget %-encode such
headers when doing a subsequent request. Now libcurl does too.
Added test 1138 to verify.
Closes#473
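A rough sketch of the idea, not libcurl's actual code: copy a redirect URL
and %-encode every byte outside the visible ASCII range before re-using it.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sketch only: %-encode control bytes, space and non-ASCII bytes in a
       redirect URL. Caller frees the returned string. */
    static char *escape_url_bytes(const char *url)
    {
      size_t len = strlen(url);
      char *out = malloc(len * 3 + 1); /* worst case: every byte escaped */
      char *p = out;
      if(!out)
        return NULL;
      for(; *url; url++) {
        unsigned char c = (unsigned char)*url;
        if(c <= 0x20 || c >= 0x7f) {
          snprintf(p, 4, "%%%02X", (unsigned int)c);
          p += 3;
        }
        else
          *p++ = (char)c;
      }
      *p = 0;
      return out;
    }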
Fixes a namespace pollution at the cost that programs using one of these
defines will no longer compile. However, the vast majority of libcurl
programs that do multipart formposts use curl_formadd() to build this
list.
Closes#506
This reverts commit 370ee919b3.
Issue #509 has all the details but it was confirmed that the crash was
not due to this, so the previous commit was wrong.
Removed wrong assert()s
The 'conn' passed in as userdata can be in use, and there can be other
session handles ('data') than the single one this checked for.
introduced in c6aedf680f. It needs to be CURLM_STATE_LAST big since it
must handle the range 0 .. CURLM_STATE_MSGSENT (18) and CURLM_STATE_LAST
is 19 right now.
Reported-by: Dan Fandrich
Bug: http://curl.haxx.se/mail/lib-2015-10/0069.html
... and assign it from the set.fread_func_set pointer in the
Curl_init_CONNECT function. This A) avoids having code that assigns
fields in the 'set' struct (which we always knew was bad) and, more
importantly, B) makes it impossible to accidentally leave the wrong
value behind when the handle is re-used etc.
Introducing a state-init functionality in multi.c, so that we can set a
specific function to get called when we enter a state. The
Curl_init_CONNECT is thus called when switching to the CONNECT state.
Bug: https://github.com/bagder/curl/issues/346
Closes#346
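A simplified sketch of the state-init idea (names invented, not curl's
actual multi.c code): a per-state callback table that runs whenever the
state machine enters a state.

    /* Sketch only: per-state init hooks keyed by state. */
    typedef enum { ST_INIT, ST_CONNECT, ST_DONE, ST_LAST } state_t;

    typedef void (*state_init_fn)(void *handle);

    static void init_connect(void *handle)
    {
      /* e.g. copy the configured read callback into the transfer state */
      (void)handle;
    }

    static const state_init_fn init_funcs[ST_LAST] = {
      NULL,          /* ST_INIT: nothing to do */
      init_connect,  /* ST_CONNECT */
      NULL           /* ST_DONE */
    };

    static void switch_state(void *handle, state_t *current, state_t next)
    {
      *current = next;
      if(init_funcs[next])
        init_funcs[next](handle); /* run the state's init hook on entry */
    }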
sk_X509_pop will decrease the size of the stack, which means that the
loop would end after having added only half of the certificates.
Also make sure that the X509 certificate is freed in case
SSL_CTX_add_extra_chain_cert fails.
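A sketch of the corrected pattern (not the actual patch; assumes OpenSSL
1.1.0 or newer for X509_up_ref()): iterate the stack by index instead of
popping it, and free the certificate when SSL_CTX_add_extra_chain_cert()
fails, since the context only takes ownership on success.

    #include <openssl/ssl.h>
    #include <openssl/x509.h>

    static int add_extra_chain(SSL_CTX *ctx, STACK_OF(X509) *chain)
    {
      int i;
      /* sk_X509_pop() would shrink the stack as we go and stop the loop
         halfway; indexing visits every certificate */
      for(i = 0; i < sk_X509_num(chain); i++) {
        X509 *x = sk_X509_value(chain, i);
        if(!X509_up_ref(x))
          return 0;
        if(!SSL_CTX_add_extra_chain_cert(ctx, x)) {
          X509_free(x); /* the context did not take ownership on failure */
          return 0;
        }
      }
      return 1;
    }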
- If a CURLINFO option is unknown return CURLE_UNKNOWN_OPTION.
Prior to this change CURLE_BAD_FUNCTION_ARGUMENT was returned for an
unknown option. That return value is contradicted by the CURLINFO option
documentation, which specifies a return of CURLE_UNKNOWN_OPTION for
unknown options.
- Change algorithm init to happen after OpenSSL config load.
Additional algorithms may be available due to the user's config so we
initialize the algorithms after the user's config is loaded.
Bug: https://github.com/bagder/curl/issues/447
Reported-by: Denis Feklushkin
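A sketch of the resulting init order, using the OpenSSL 1.0.x-era calls
(not necessarily the exact functions curl uses):

    #include <openssl/conf.h>
    #include <openssl/evp.h>

    static void init_openssl(void)
    {
      OPENSSL_config(NULL);         /* load the user's config first */
      OpenSSL_add_all_algorithms(); /* then register ciphers/digests so
                                       anything the config enables is
                                       picked up */
    }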
For a single-stream download from localhost, we managed to increase
transfer speed from 1.6MB/sec to around 400MB/sec, mostly because of
this single fix.
... only call it when there is data arriving for a different handle than
the one that is currently driving it.
Improves single-stream download performance quite a lot.
Thanks-to: Tatsuhiro Tsujikawa
Bug: http://curl.haxx.se/mail/lib-2015-09/0097.html
If GnuTLS fails to read the certificate then include whatever reason it
provides in the failure message reported to the client.
Signed-off-by: Mike Crowe <mac@mcrowe.com>
The gnutls vtls back-end was previously ignoring any password set via
CURLOPT_KEYPASSWD. Presumably this was because
gnutls_certificate_set_x509_key_file did not support encrypted keys.
gnutls now has a gnutls_certificate_set_x509_key_file2 function that
does support encrypted keys. Let's determine at compile time whether the
available gnutls supports this new function. If it does then use it to
pass the password. If it does not then emit a helpful diagnostic if a
password is set. This is preferable to the previous behaviour of just
failing to read the certificate without giving a reason in that case.
Signed-off-by: Mike Crowe <mac@mcrowe.com>
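A compile-time sketch of the approach; the version-number test is my own
shorthand (curl's real check comes from the build system), and
gnutls_certificate_set_x509_key_file2() appeared in GnuTLS 3.1.11:

    #include <gnutls/gnutls.h>

    static int load_client_cert(gnutls_certificate_credentials_t cred,
                                const char *certfile, const char *keyfile,
                                const char *passwd)
    {
    #if GNUTLS_VERSION_NUMBER >= 0x03010b
      /* encrypted keys supported: pass the password straight through */
      return gnutls_certificate_set_x509_key_file2(cred, certfile, keyfile,
                                                   GNUTLS_X509_FMT_PEM,
                                                   passwd, 0);
    #else
      if(passwd)
        return -1; /* tell the caller encrypted keys cannot be handled */
      return gnutls_certificate_set_x509_key_file(cred, certfile, keyfile,
                                                  GNUTLS_X509_FMT_PEM);
    #endif
    }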
... even for those that don't support providing anything in the
'internals' struct member since it offers a convenient way for
applications to figure this out.
- Change the designator name we use to show the base64 encoded sha256
hash of the server's public key from 'pinnedpubkey' to
'public key hash'.
Though the server's public key hash is only shown when comparing pinned
public key hashes, the server's hash may not match any of the pinned
hashes.
Without this workaround, NSS re-uses a session cache entry even though
the server name does not match. This causes the SNI host name to differ
from the actual host name. Consequently, certain servers (e.g.
github.com) respond with 400 to such requests.
Bug: https://bugzilla.mozilla.org/1202264
If the port number in the proxy string ends weirdly or the number is too
large, skip it. Mostly as a means to bail out early if a "bare" IPv6
numerical address is used without enclosing brackets.
Also mention the bracket requirement for IPv6 numerical addresses in the
man page for CURLOPT_PROXY.
Closes#415
Reported-by: Marcel Raad
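A usage sketch of the bracket requirement (the address is just a
documentation-range example):

    #include <curl/curl.h>

    static void set_ipv6_proxy(CURL *curl)
    {
      /* a numerical IPv6 proxy needs brackets so the port separator is
         unambiguous */
      curl_easy_setopt(curl, CURLOPT_PROXY, "[2001:db8::1]:8080");
    }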
In some timing-dependent cases, when a 4xx response immediately followed
a 150 after a STOR was issued, this function would wrongly return
'complete == true' while 'wait_data_conn' was still set.
Closes#405
Reported-by: Patricia Muscalu
RFC 7540 section 8.1.2.2 states: "An endpoint MUST NOT generate an
HTTP/2 message containing connection-specific header fields; any message
containing connection-specific header fields MUST be treated as
malformed"
Closes#401
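For reference, a sketch (not curl's actual header handling) of the
connection-specific fields that RFC 7540 section 8.1.2.2 forbids in
HTTP/2 messages:

    #include <stddef.h>
    #include <strings.h>

    /* Sketch: return nonzero for header fields that must be dropped from
       an HTTP/2 request. */
    static int is_connection_specific(const char *name)
    {
      static const char *const fields[] = {
        "Connection", "Keep-Alive", "Proxy-Connection",
        "Transfer-Encoding", "Upgrade"
      };
      size_t i;
      for(i = 0; i < sizeof(fields)/sizeof(fields[0]); i++)
        if(strcasecmp(name, fields[i]) == 0)
          return 1;
      return 0;
    }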
Introduced in commit 59f3f92ba6, this function is only implemented when
CURL_DISABLE_CRYPTO_AUTH is not defined. As such we shouldn't define
the function in the header file either.
This patch addresses known bug #76, where on 64-bit Windows SOCKET is 64
bits wide, but long is only 32, making CURLINFO_LASTSOCKET unreliable.
Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
- Add new option CURLOPT_DEFAULT_PROTOCOL to allow specifying a default
protocol for schemeless URLs.
- Add new tool option --proto-default to expose
CURLOPT_DEFAULT_PROTOCOL.
In the case of schemeless URLs libcurl will behave in this way:
When the option is used libcurl will use the supplied default.
When the option is not used, libcurl will follow its usual plan of
guessing from the hostname and falling back to 'http'.
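A libcurl usage sketch (protocol and URL chosen arbitrarily):

    #include <curl/curl.h>

    static void fetch_schemeless(CURL *curl)
    {
      /* schemeless URLs now default to HTTPS instead of the usual guess */
      curl_easy_setopt(curl, CURLOPT_DEFAULT_PROTOCOL, "https");
      curl_easy_setopt(curl, CURLOPT_URL, "example.com/index.html");
    }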
If strict certificate checking is disabled (CURLOPT_SSL_VERIFYPEER
and CURLOPT_SSL_VERIFYHOST are disabled) do not fail if the server
doesn't present a certificate at all.
Closes#392
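A usage sketch of the case in question:

    #include <curl/curl.h>

    static void disable_strict_checking(CURL *curl)
    {
      /* with both checks off, a server presenting no certificate at all
         no longer fails the transfer */
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L);
    }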
The multi state machine would otherwise go into the DO_MORE state after
DO, even for the case when the FTP state machine had already performed
those duties, which caused libcurl to get stuck in that state and fail
miserably. This occurred for active ftp uploads.
Reported-by: Patricia Muscalu
Visual Studio complains with a message box:
"Run-Time Check Failure #1 - A cast to a smaller data type has caused a
loss of data. If this was intentional, you should mask the source of
the cast with the appropriate bitmask.
For example:
char c = (i & 0xFF);
Changing the code in this way will not affect the quality of the
resulting optimized code."
This is because only 'val' is cast to unsigned char, so the "& 0xff" has
no effect.
Closes#387
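A minimal illustration of the fix: mask first, then cast, so the
run-time check does not fire.

    /* was: (unsigned char)val & 0xff  -- the mask ran after the
       truncating cast and therefore did nothing */
    static unsigned char low_byte(int val)
    {
      return (unsigned char)(val & 0xff); /* mask before the cast */
    }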
Return 0 instead of NGHTTP2_ERR_CALLBACK_FAILURE if we can't locate the
SessionHandle. Apparently mod_h2 will sometimes send a frame for a
stream_id we're finished with.
Use nghttp2_session_get_stream_user_data and
nghttp2_session_set_stream_user_data to identify SessionHandles instead
of a hash.
Closes#372
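A sketch of the lookup in an nghttp2 on-frame-received callback
(simplified; curl attaches its own handle type as the stream user data):

    #include <stddef.h>
    #include <nghttp2/nghttp2.h>

    static int on_frame_recv(nghttp2_session *session,
                             const nghttp2_frame *frame, void *userp)
    {
      void *handle =
        nghttp2_session_get_stream_user_data(session, frame->hd.stream_id);
      (void)userp;
      if(!handle)
        return 0; /* stream already finished; not a callback failure */
      /* ... process the frame for 'handle' ... */
      return 0;
    }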
Currently when the server responds with 401 on an NTLM-authenticated
(re-used) connection we consider it to have failed. However this is
legitimate and may happen for example when IIS is configured with
'authPersistSingleRequest' or when the request goes through a proxy
(with a 'via' header).
Implemented by employing an additional state once a connection is
re-used to indicate that if we receive a 401 we need to restart
authentication.
Closes#363
The SSH state machine didn't clear the 'rc' variable appropriately in
two places, which prevented it from looping the way it should. It also
lacked an 'else' statement, which made it possible to erroneously get
stuck in the SSH_AUTH_AGENT state.
Reported-by: Tim Stack
Closes#357
connect.c:953:5: warning: initializer element is not computable at load
time
connect.c:953:5: warning: missing initializer for field 'dwMinorVersion'
of 'OSVERSIONINFOEX'
curl_sspi.c:97:5: warning: initializer element is not computable at load
time
curl_sspi.c:97:5: warning: missing initializer for field 'szCSDVersion'
of 'OSVERSIONINFOEX'
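One way to avoid both warnings is to zero the structure at run time
instead of using a partial static initializer; a sketch, not necessarily
the exact change made:

    #ifdef _WIN32
    #include <windows.h>
    #include <string.h>

    static BOOL get_os_version(OSVERSIONINFOEX *osver)
    {
      memset(osver, 0, sizeof(*osver));            /* no initializer list */
      osver->dwOSVersionInfoSize = sizeof(*osver); /* required size field */
      return GetVersionEx((OSVERSIONINFO *)osver);
    }
    #endif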
Otherwise it would never be called for an HTTP/2 connection, which has
its own disconnect handler.
I spotted this while debugging <https://bugzilla.redhat.com/1248389>
where the http_disconnect() handler was called on an FTP session handle
causing 'dnf' to crash. conn->data->req.protop of type (struct FTP *)
was reinterpreted as type (struct HTTP *) which resulted in SIGSEGV in
Curl_add_buffer_free() after printing the "Connection cache is full,
closing the oldest one." message.
A previously working version of libcurl started to crash after it was
recompiled with HTTP/2 support, even though the HTTP/2 protocol was not
actually used. This commit makes it work again, although I suspect the
root cause (reinterpreting session handle data of incompatible protocol)
still has to be fixed. Otherwise the same will happen when mixing FTP
and HTTP/2 connections and exceeding the connection cache limit.
Reported-by: Tomas Tomecek
Bug: https://bugzilla.redhat.com/1248389
Currently, libcurl rejects responses with "Content-Encoding: compress"
when CURLOPT_ACCEPT_ENCODING is set to "". I think that libcurl should
treat the Content-Encoding "compress" the same as other
Content-Encodings that it does not support, e.g. "bzip2". That means
just ignoring it.
MSVC 12 complains:
lib\vtls\openssl.c(1554): warning C4701: potentially uninitialized local
variable 'verstr' used
It's a false positive here, but since this warning normally isn't one, I
have enabled warning-as-error for that warning.