Make the old and new GnuTLS code paths consistent: always specify the
desired protocol, cipher and certificate type in both modes. Disable
insecure ciphers
as reported by howsmyssl.com. Honor not only --sslv3, but also the
--tlsv1[.N] switches.
Related Bug: http://curl.haxx.se/bug/view.cgi?id=1323
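A minimal sketch of the new-style approach, assuming a GnuTLS priority
string is used to pin the protocol and drop weak ciphers (the strings and
the helper name below are illustrative, not the actual lib/gtls.c code):

  /* Sketch only: priority strings are how the new-style GnuTLS API
   * expresses protocol and cipher restrictions. */
  #include <gnutls/gnutls.h>

  static int pin_protocol(gnutls_session_t session, int want_sslv3)
  {
    const char *err_pos = NULL;
    /* default: TLS only, drop a weak cipher (illustrative choices) */
    const char *prio = "NORMAL:-ARCFOUR-128:-VERS-SSL3.0";

    if(want_sslv3) /* honor a --sslv3 style request */
      prio = "NORMAL:-ARCFOUR-128:-VERS-TLS-ALL:+VERS-SSL3.0";

    if(gnutls_priority_set_direct(session, prio, &err_pos) !=
       GNUTLS_E_SUCCESS)
      return -1; /* err_pos points at the offending token */
    return 0;
  }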
Fixed the compiler warning "conversion from 'curl_off_t' to 'size_t',
possible loss of data", seen where curl_off_t is a 64-bit word and size_t
is 32 bits - for example with 32-bit Windows builds.
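A sketch of the kind of explicit, checked narrowing that silences this
warning (the helper name is hypothetical, not the actual curl change):

  #include <curl/curl.h>  /* curl_off_t */
  #include <stddef.h>     /* size_t */

  static size_t off_t_to_size_t(curl_off_t len)
  {
    if(len < 0)
      return 0;
    if((curl_off_t)(size_t)len != len)
      return (size_t)-1;  /* caller treats this as "too large" */
    return (size_t)len;   /* explicit, checked narrowing */
  }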
1 - allow >31 bit max-age values
2 - don't overflow on extremely large max-age values when we add the
value to the current time
3 - make sure max-age takes precedence over expires as dictated by
RFC6265
Bug: http://curl.haxx.se/mail/lib-2014-01/0130.html
Reported-by: Chen Prog
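The three points can be sketched like this (illustrative code only, not
lib/cookie.c; assumes a 64-bit time_t for simplicity):

  #include <stdint.h>
  #include <stdlib.h>
  #include <time.h>

  static time_t cookie_expiry(const char *maxage, time_t expires,
                              time_t now)
  {
    if(maxage) {
      /* 1: strtoll() keeps values above 31 bits intact */
      long long secs = strtoll(maxage, NULL, 10);
      /* 2: clamp so that now + secs cannot overflow */
      if(secs > INT64_MAX - (long long)now)
        secs = INT64_MAX - (long long)now;
      /* 3: Max-Age takes precedence over Expires (RFC 6265) */
      return (time_t)(now + secs);
    }
    return expires;  /* fall back to the Expires attribute */
  }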
Starting with Visual Studio 2013 (VC12) and Windows 8.1, the
GetVersionEx() function has been marked as deprecated and its
return value altered. Updated connect.c and curl_sspi.c to use
VerifyVersionInfo() where possible, which has been available since
Windows 2000.
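A minimal sketch of the VerifyVersionInfo() pattern (not the exact
connect.c/curl_sspi.c code), asking whether the running system is at
least Windows Vista:

  #include <windows.h>

  static BOOL at_least_vista(void)
  {
    OSVERSIONINFOEX osvi;
    DWORDLONG mask = 0;

    ZeroMemory(&osvi, sizeof(osvi));
    osvi.dwOSVersionInfoSize = sizeof(osvi);
    osvi.dwMajorVersion = 6;   /* Vista / Server 2008 */
    osvi.dwMinorVersion = 0;

    mask = VerSetConditionMask(mask, VER_MAJORVERSION, VER_GREATER_EQUAL);
    mask = VerSetConditionMask(mask, VER_MINORVERSION, VER_GREATER_EQUAL);

    /* ask "is it at least this version?" instead of reading the
     * (now deprecated, shimmed) GetVersionEx() result */
    return VerifyVersionInfo(&osvi, VER_MAJORVERSION | VER_MINORVERSION,
                             mask);
  }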
A transfer timeout could result in an error message such as "Operation
timed out after 3000 milliseconds with 19 bytes of -1 received". This
patch removes the nonsensical "of -1" when the size of the transfer
is unknown, mirroring the logic in lib/transfer.c.
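A sketch of the corrected formatting, assuming an expected size of -1
means "unknown" (names are illustrative, not the actual libcurl code):

  #include <stdio.h>
  #include <curl/curl.h>

  static void fmt_timeout_msg(char *buf, size_t len, long ms,
                              curl_off_t received, curl_off_t expected)
  {
    if(expected == -1)   /* size of the transfer is unknown */
      snprintf(buf, len, "Operation timed out after %ld milliseconds "
               "with %" CURL_FORMAT_CURL_OFF_T " bytes received",
               ms, received);
    else
      snprintf(buf, len, "Operation timed out after %ld milliseconds "
               "with %" CURL_FORMAT_CURL_OFF_T " out of "
               "%" CURL_FORMAT_CURL_OFF_T " bytes received",
               ms, received, expected);
  }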
By default, even recent versions of OpenSSL support and accept
"export strength" ciphers and small-bitsize ciphers, as well as downright
deprecated ones.
This change sets a default cipher set that avoids the worst ciphers, and
subsequently makes https://www.howsmyssl.com/a/check no longer grade
curl/OpenSSL connects as 'Bad'.
Bug: http://curl.haxx.se/bug/view.cgi?id=1323
Reported-by: Jeff Hodges
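A hedged sketch of the idea on an OpenSSL context; the exact cipher
string curl ended up with may differ:

  #include <openssl/ssl.h>

  static int set_sane_ciphers(SSL_CTX *ctx)
  {
    /* rule out export-strength, low-bitsize and anonymous ciphers;
     * returns 1 on success, 0 if no cipher could be selected */
    return SSL_CTX_set_cipher_list(ctx, "ALL:!EXPORT:!LOW:!aNULL:!eNULL");
  }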
With the recently added timeout "reminder" functionality, there's no
reason left for us to execute timeout code before the time is
ripe. Simplifies the handling too.
This will make the *TIMEOUT and *CONNECTTIMEOUT options more accurate
again, which probably is most important when the *_MS versions are used.
In multi_socket, make sure to update 'now' after having handled activity
on a socket.
BACKGROUND:
We have learned that on some systems timeout timers are inaccurate and
might occasionally fire off too early. To make the multi_socket API work
with this, we made libcurl execute timeout actions a bit early too if
they are within our MULTI_TIMEOUT_INACCURACY. (added in commit
2c72732ebf, present since 7.21.0)
Switching everything to the multi API made this inaccuracy problem
slightly more notable as now everyone can be affected.
Recently (commit 21091549c0) we tweaked that inaccuracy value to make
timeouts more accurate and made it platform specific. We also figured
out that we have code in places that checks for fixed timeout values, so
it MUST NOT run too early as then it would not trigger at all (see
commits be28223f35 and a691e04470) - so there are definite problems
with running timeouts before they're supposed to run. (We've handled
that so far by adding the inaccuracy margin to those specific timeouts.)
The libcurl multi_socket API tells the application with a callback that
a timeout expires in N milliseconds (and it explicitly will not tell it
again for the same timeout), and the application is then supposed to
call libcurl when that timeout expires. When libcurl subsequently gets
called with curl_multi_socket_action(...CURL_SOCKET_TIMEOUT...), it
knows that the application thinks the timeout expired - and if it
is within the inaccuracy level, libcurl will run the timeout code for
that handle.
If the application says CURL_SOCKET_TIMEOUT to libcurl but the timeout
_isn't_ within the inaccuracy level, libcurl will not consider the
timeout expired and it will not tell the application again since the
timeout value is still the same.
NOW:
This change introduces a modified behavior here. If the application says
CURL_SOCKET_TIMEOUT and libcurl finds no timeout code to run, it will
inform the application about the timeout value - *again*, even if it is
the same timeout it already told the application about (although libcurl
will of course tell it the updated time so that it'll still get the
correct remaining time). This way, we will not risk that the application
believes it has done its job and libcurl thinks the time hasn't come yet
to run any code and both just sit waiting. This also allows us to
decrease the MULTI_TIMEOUT_INACCURACY margin, but that will be handled
in a separate commit.
A repeated timeout update to the application risks that the timeout will
then fire again immediately and we have what basically is a busy-loop
until the time is fine even for libcurl. If that becomes a problem, we
need to address it.
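For context, a minimal sketch of the application side of this contract
(illustrative code; the event loop itself is omitted). The callback would
be installed with curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION,
timer_cb):

  #include <curl/curl.h>

  /* set by timer_cb(), consumed by the application's event loop */
  static long pending_timeout_ms = -1;

  /* CURLMOPT_TIMERFUNCTION: libcurl tells us when to call it back */
  static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
  {
    (void)multi;
    (void)userp;
    pending_timeout_ms = timeout_ms;  /* -1 means "delete the timer" */
    return 0;
  }

  /* called by the event loop when the stored timeout expires */
  static void on_timeout(CURLM *multi)
  {
    int running;
    curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running);
    /* with this change, if no timeout code ran, timer_cb() is invoked
     * again with the updated remaining time instead of going silent */
  }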
The net effect of this bug, as it appeared to users, was that libcurl
would time out in the connect phase.
When IPv6 use was disabled but getaddrinfo was still used, libcurl would
wrongly fail to init the "hints" struct in init_thread_sync(), which
would subsequently lead to a getaddrinfo() call with zeroed hints where
ai_socktype was 0 instead of SOCK_STREAM. This led to different
behaviors on different platforms, but basically incorrect output.
This code was introduced in 483ff1ca75, released in curl 7.20.0.
This bug became a problem now due to the happy eyeballs code and how
libcurl now traverses the getaddrinfo() results differently.
Bug: http://curl.haxx.se/mail/lib-2014-01/0061.html
Reported-by: Fabian Frank
Debugged-by: Fabian Frank
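For reference, a sketch of the intended hints setup (illustrative only,
not a copy of init_thread_sync()):

  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netdb.h>
  #include <string.h>

  static int resolve(const char *host, const char *port,
                     struct addrinfo **result, int ipv6_enabled)
  {
    struct addrinfo hints;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = ipv6_enabled ? AF_UNSPEC : AF_INET;
    hints.ai_socktype = SOCK_STREAM;  /* the field that was left at 0 */

    return getaddrinfo(host, port, &hints, result);
  }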
Removed some of the infof() calls that were added with the recent
pipeline improvements; they're not useful to the vast majority of
readers and the pipelining seems to fundamentally work. The debug
output can easily be added back if debugging these functions is needed
again.
When the requested authentication bitmask includes NTLM, we cannot
re-use a connection for another username/password as we then risk
re-using NTLM (connection-based auth).
This has the unfortunate downside that if you include NTLM as a possible
auth, you cannot re-use connections for other usernames/passwords even
if NTLM doesn't end up being the auth type used.
Reported-by: Paras S
Patched-by: Paras S
Bug: http://curl.haxx.se/mail/lib-2014-01/0046.html
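From the application side, the practical consequence is that excluding
NTLM from the allowed set keeps connection re-use across credentials
possible, for example:

  #include <curl/curl.h>

  static void allow_auth_without_ntlm(CURL *curl)
  {
    /* allow any auth scheme except NTLM so connections can still be
     * re-used for other usernames/passwords */
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH,
                     (long)(CURLAUTH_ANY & ~CURLAUTH_NTLM));
  }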
When the progress callback returned 1 at a very early stage, the code
would not return CURLE_ABORTED_BY_CALLBACK even though the transfer was
still interrupted. In the HTTP case, this would then cause a
CURLE_GOT_NOTHING to erroneously get returned instead.
Reported-by: Petr Novak
Bug: http://curl.haxx.se/bug/view.cgi?id=1318
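For reference, aborting from a progress callback is expected to surface
as CURLE_ABORTED_BY_CALLBACK, as in this sketch:

  #include <curl/curl.h>

  static int xferinfo_cb(void *p, curl_off_t dltotal, curl_off_t dlnow,
                         curl_off_t ultotal, curl_off_t ulnow)
  {
    (void)p; (void)dltotal; (void)dlnow; (void)ultotal; (void)ulnow;
    return 1;  /* non-zero aborts the transfer */
  }

  /* usage: expect CURLE_ABORTED_BY_CALLBACK, not CURLE_GOT_NOTHING */
  static CURLcode run(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, xferinfo_cb);
    curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
    return curl_easy_perform(curl);
  }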
This is a debug function only and serves no purpose in production code;
it only slows things down. I left the code #ifdef'ed for possible future
pipeline debugging.
Also, this was a global function without proper namespace usage.
Reported-by: He Qin
Bug: http://curl.haxx.se/bug/view.cgi?id=1320
If OpenSSL is built to support SSLv2 this brings back the ability to
explicitly select that as a protocol level.
Reported-by: Steve Holme
Bug: http://curl.haxx.se/mail/lib-2014-01/0013.html
Some feedback provided by byte_bucket on IRC pointed out that commit
db11750cfa wasn't really correct because it allowed "upgrading" to a
newer protocol when it should only allow SSLv3.
This change fixes that.
When an SSLv3 connection is forced, don't allow negotiation of newer
versions. Feedback provided by byte_bucket in #curl. This behavior is
also consistent with the other force flags, like --tlsv1.1 which doesn't
allow TLSv1.2 negotiation, etc.
Feedback-by: byte_bucket
Bug: http://curl.haxx.se/bug/view.cgi?id=1319
Since ad34a2d5c8 (present in the 7.34.0 release), forcing SSLv3 would
always return the error "curl: (35) Unsupported SSL protocol version".
This can be replicated with `curl -I -3 https://www.google.com/`.
This fix simply allows SSLv3 to be forced.
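The libcurl equivalent of forcing a protocol level looks like this
(illustrative; SSLv2/SSLv3 availability also depends on how the TLS
backend was built):

  #include <curl/curl.h>

  static void force_sslv3(CURL *curl)
  {
    /* the equivalent of the command line's -3/--sslv3 */
    curl_easy_setopt(curl, CURLOPT_SSLVERSION, (long)CURL_SSLVERSION_SSLv3);
  }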
Following commit 0aafd77fa4, replaced the internal usage of
FORMAT_OFF_T and FORMAT_OFF_TU with the external versions that we
expect API programmers to use.
This negates the need for separate definitions which were subtly
different under different platforms/compilers.
Added support to the built-in printf() replacement functions for these
non-ANSI extensions when compiling under Visual Studio, Borland, Watcom
and MinGW.
This fixes problems when generating libcurl source code that contains
curl_off_t variables.
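For API programmers, that external form is the public
CURL_FORMAT_CURL_OFF_T macro, for example together with curl's own
*printf replacements:

  /* the macro expands to the right (possibly non-ANSI, e.g. I64d)
   * format specifier for the compiler in use */
  #include <curl/curl.h>
  #include <curl/mprintf.h>

  static void show_size(curl_off_t size)
  {
    curl_mprintf("size: %" CURL_FORMAT_CURL_OFF_T "\n", size);
  }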
Fixes a bug where, when all addresses in the first family fail
immediately (due to "Network unreachable" for example), curl would hang
and never try the next address family.
Iterate through all address families when trying to establish the first
connection attempt.
Bug: http://curl.haxx.se/bug/view.cgi?id=1315
Reported-by: Michal Górny and Anthony G. Basile
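A sketch of the intended iteration (illustrative, not the actual
lib/connect.c code): if every address in one family fails at once, move
on to the next family instead of giving up:

  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netdb.h>
  #include <unistd.h>

  static int connect_any(struct addrinfo *list)
  {
    struct addrinfo *ai;

    for(ai = list; ai; ai = ai->ai_next) { /* spans AF_INET6 and AF_INET */
      int fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
      if(fd < 0)
        continue;
      if(connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
        return fd;     /* success */
      close(fd);       /* e.g. "Network unreachable": try the next one */
    }
    return -1;         /* all families exhausted */
  }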
Introduced in commit 2a4ee0d221, sending of data via the FILE
protocol would always return CURLE_WRITE_ERROR regardless of whether
CURL_WRITEFUNC_PAUSE was returned from the callback function or not.
Make sure that we detect such attempts and return a proper error code
instead of silently handling this in problematic ways.
Updated the documentation to mention this limitation.
Bug: http://curl.haxx.se/bug/view.cgi?id=1286
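In application terms, returning CURL_WRITEFUNC_PAUSE from the write
callback of a file:// transfer is now rejected with an error instead of
being mishandled; a sketch:

  #include <curl/curl.h>

  static int want_pause = 0;  /* illustrative application state */

  static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userp)
  {
    (void)ptr; (void)userp;
    if(want_pause)
      return CURL_WRITEFUNC_PAUSE;  /* not supported for FILE transfers */
    return size * nmemb;
  }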
Previously this memdebug free() replacement didn't work properly with a
NULL argument, which has made us write code that avoids calling
free(NULL) - causing some extra nuisance and unnecessary code.
Starting now, we should allow free(NULL) even when built with the
memdebug system enabled.
free(NULL) is permitted by POSIX.
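A minimal sketch of the behavior (not the actual lib/memdebug.c code):
the tracking free() replacement simply tolerates NULL, matching what
free() itself guarantees:

  #include <stdlib.h>
  #include <stdio.h>

  static void debug_free(void *ptr, int line, const char *source)
  {
    if(source && ptr)   /* log only real frees */
      fprintf(stderr, "FREE %s:%d %p\n", source, line, ptr);
    free(ptr);          /* free(NULL) is a no-op per POSIX/ISO C */
  }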