Enabling the SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG option allowed
successful interoperability with the Netscape Enterprise Server 2.0.1 web
server, released back in 1996, more than 15 years ago.
Due to CVE-2010-4180, option SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG has
become ineffective as of OpenSSL 0.9.8q and 1.0.0c. In order to mitigate
CVE-2010-4180 when using previous OpenSSL versions we no longer enable
this option regardless of OpenSSL version and SSL_OP_ALL definition.
Some functions using getaddrinfo and gethostbyname were still
mistakenly being used/linked even if c-ares was selected as the resolver
backend.
Reported by: Arthur Murray
Bug: http://curl.haxx.se/mail/lib-2012-01/0160.html
Previously the code would create a dummy socket while resolving, just to
have curl_multi_fdset() return something. The non-win32 version doesn't do
it this way, and the creation and use of a socket that isn't made with the
common create-socket callback can be confusing to apps using the
multi_socket API etc.
This change removes the dummy socket and thus will cause
curl_multi_fdset() to return with maxfd == -1 more often.
Fixed a problem in POP3 and IMAP where a connection would fail, rather
than continue, when CURLUSESSL_TRY was specified for a server that didn't
support SSL/TLS connections.
The STARTTLS response code in SMTP, POP3 and IMAP would return
CURLE_LOGIN_DENIED rather than CURLE_USE_SSL_FAILED when SSL/TLS
was not available on the server.
Reported by: Gokhan Sengun
Bug: http://curl.haxx.se/mail/lib-2012-01/0018.html
Unfortunately we have no test cases for this and I have no SSPI build or
server to verify this with. The change seems simple enough though.
Bug: http://curl.haxx.se/bug/view.cgi?id=3466497
Reported by: Patrice Guerin
When the buffer gets reallocated to hold the file name in the
SSH_SFTP_READDIR_LINK state, the counter was not bumped accordingly.
Reported by: Armel Asselin
Patch by: Armel Asselin
Bug: http://curl.haxx.se/mail/lib-2011-12/0249.html
When an HTTP connection is re-used for a subsequent request without a
proxy, it would always re-use the Host: header of the first request. As
host names are case insensitive, this would make curl send the host name
in a different case than what the particular request used.
Now it instead always uses the most recent host name, preserving the
desired casing.
Added test case 1318 to verify.
Bug: http://curl.haxx.se/mail/lib-2011-12/0314.html
Reported by: Alex Vinnik
The function that loads host names into the DNS cache was moved to
hostip.c and it now makes sure not to add host names that are already
present in the cache. Previously this would lead to memory leaks when, for
example, using --resolve with multiple URLs on the command line.
The commit 9dd85bc unintentionally changed the way we compute the time
spent waiting for 100-continue. In particular, when using an SSL client
certificate, the time spent by the SSL handshake was included and could
cause the CURL_TIMEOUT_EXPECT_100 timeout to be fired mistakenly.
Bug: https://bugzilla.redhat.com/767490
Reported by: Mamoru Tasaka
ftp_do_more() returns after accepting the server connect, but it needs to
fall through and set "*complete" to TRUE before exiting the function.
Bug: http://curl.haxx.se/mail/lib-2011-12/0250.html
Reported by: Gokhan Sengun
In the recent do_more fix the new logic was mistakenly checking the
pointer instead of what it points to.
Reported by: Gokhan Sengun
Bug: http://curl.haxx.se/mail/lib-2011-12/0250.html
When sending a quote command to an SFTP server and 'mkdir' was used, it
would send fixed permissions and not use CURLOPT_NEW_DIRECTORY_PERMS as it
should.
Reported by: Armel
Patch by: Armel
Bug: http://curl.haxx.se/mail/lib-2011-12/0249.html
CURLOPT_RESOLVE populates the DNS cache with entries that are marked as
eternally in use. Those entries need to be taken care of when the cache
is killed off.
Bug: http://curl.haxx.se/bug/view.cgi?id=3463121
Reported by: "tw84452852"
First off, the timeout for accepting a server connect back must of course
respect the global timeout. Then, the timeleft function is only used by
the ftp code, so it was moved to ftp.c and made static.
"wait_data_conn" was added to the connectionbits in commit c834213ad5 for
handling active FTP connections but as it is purely FTP specific and now
only ever accessed by ftp.c I moved it into the FTP connection struct.
Backpedaled out the funny double-change of state in the multi state
machine by adding a new argument to the do_more() function to signal
completion. This way it can remain in the DO_MORE state properly until
done. Long term, the entire DO_MORE logic should be moved into the FTP
code and be hidden from the multi code as the logic is only used for
FTP.
1- Two new error codes are introduced.
CURLE_FTP_ACCEPT_FAILED is set whenever accepting the FTP server's connect
back fails.
CURLE_FTP_ACCEPT_TIMEOUT is set whenever accepting the connect back times
out.
Neither of these errors is considered fatal and the control connection
remains OK, because the cause could simply be a firewall blocking the
server from connecting back to the client.
2- One new setopt option was introduced.
CURLOPT_ACCEPTTIMEOUT_MS
It sets the maximum amount of time the FTP client will wait for the server
to connect back. The internal default accept timeout is 60 seconds.
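For illustration, a minimal sketch of how an application could use the new
option for an active FTP transfer; the URL and the 40000 ms value are
made-up examples:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
      /* request an active (PORT) transfer so the server connects back */
      curl_easy_setopt(curl, CURLOPT_FTPPORT, "-");
      /* wait at most 40 seconds for the server's data connection instead
         of the internal 60 second default */
      curl_easy_setopt(curl, CURLOPT_ACCEPTTIMEOUT_MS, 40000L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }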
It makes it easier to introduce debug outputs in this function, and
everything in the function is using the value anyway so it might even be
more efficient.
Regression introduced in 7.23.0 with commit 9dd85bce. The function in
which the PRETRANSFER time stamp was recorded was moved in time, causing
it to be stored very quickly after the start timestamp. On most systems
the difference is shorter than 1 millisecond and thus it wouldn't even
show with -w "%{time_pretransfer}" using the command line tool.
Bug: http://curl.haxx.se/mail/archive-2011-12/0022.html
Reported by: Toni Moreno
Allow, at configure time, the production of versioned symbols. The
symbols will look like "CURL_<FLAVOUR>_<VERSION> <SYMBOL>", where
<FLAVOUR> represents the SSL flavour (e.g. OPENSSL, GNUTLS, NSS, ...),
<VERSION> is the major SONAME version and <SYMBOL> is the actual symbol
name. If no SSL library is enabled the symbols will be just
"CURL_<VERSION> <SYMBOL>".
This gets the appconnect time right for SSL backends that don't support
non-blocking connects.
Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
Do not try to resolve interface names via DNS, by recognizing interface
names in a few ways. If the interface option argument has a prefix of
"if!" then treat the argument as only an interface. Similarly, if the
interface argument is the name of an interface (even if it does not have
an IP address assigned), treat it as an interface name. Finally, if the
interface argument is prefixed by "host!", treat it as a hostname that
must be resolved by /etc/hosts or DNS.
These changes allow a client using the multi interfaces to avoid
blocking on name resolution if the interface loses its IP address or
disappears.
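For illustration, assuming an interface named eth0 and a host name that
both exist, an application would select the behaviour like this sketch:

  /* bind to the interface only; never resolve "eth0" as a host name */
  curl_easy_setopt(curl, CURLOPT_INTERFACE, "if!eth0");

  /* or: force the argument to be treated as a host name, resolved via
     /etc/hosts or DNS */
  curl_easy_setopt(curl, CURLOPT_INTERFACE, "host!client.example.com");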
Fixed the connection reuse detection in ConnectionExists() when comparing
a new non-SSL connection against an SSL-based connection that has become
so by being upgraded via TLS.
This is a regression since who knows when. When spotting that an HTTP
proxy is used we must not unconditionally enable the HTTP protocol, since
if we do tunneling through the proxy we're still using the target
protocol.
Reported by: Naveen Chandran
If no SSLv2 was detected in OpenSSL by configure, then we enforce the
OPENSSL_NO_SSL2 define as it seems some people report it not being
defined properly in the OpenSSL headers.
When a 32 digit hex key is given as a hostkey md5 checksum, the code
would still run it against the knownhost check and not properly
acknowledge that the md5 should then be the sole guide for verification.
The verbose output now includes the evaluated MD5 hostkey checksum.
Some related source code comments were also updated.
Bug: http://curl.haxx.se/bug/view.cgi?id=3451592
Reported by: Reza Arbab
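For reference, the checksum is supplied like this (the hex string below is
a made-up example):

  /* a 32 digit hex md5 checksum: when set, this alone now decides whether
     the host key is accepted; the known_hosts check is skipped */
  curl_easy_setopt(curl, CURLOPT_SSH_HOST_PUBLIC_KEY_MD5,
                   "00112233445566778899aabbccddeeff");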
As there are different return codes for host vs proxy errors, this
function now returns the proper code depending on whether a host or a
proxy was being resolved.
Bug: http://curl.haxx.se/mail/archive-2011-12/0010.html
Reported by: Jason Liu
When making a distinction about which return code to return, the code
previously only regarded HTTP proxies as proxies and thus returned
host-related errors for failures on proxy types other than HTTP. Now all
proxy types will be considered proxies...
Keep track of which sockets that are the result of accept() calls and
refuse to call the closesocket callback for those sockets. Test case 596
now verifies that the open socket callback is called the same number of
times as the closed socket callback for active FTP connections.
Bug: http://curl.haxx.se/mail/lib-2011-12/0018.html
Reported by: Gokhan Sengun
When the new socket is created for an active connection, it is now done
using the open socket callback.
Test case 596 was modified to run fine, although it hides the fact that
the close callback is still called too many times, as it also gets
called for closing sockets that were created with accept().
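A sketch of the callback pair involved, to show why the open and close
counts are expected to match (the callback and helper names are made up):

  #include <unistd.h>
  #include <sys/socket.h>
  #include <curl/curl.h>

  static curl_socket_t my_opensocket(void *clientp, curlsocktype purpose,
                                     struct curl_sockaddr *addr)
  {
    /* an application typically registers or counts the socket here */
    return socket(addr->family, addr->socktype, addr->protocol);
  }

  static int my_closesocket(void *clientp, curl_socket_t item)
  {
    /* with this fix, sockets libcurl obtained via accept() are closed
       internally and are never passed to this callback */
    return close(item);
  }

  static void install_socket_callbacks(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, my_opensocket);
    curl_easy_setopt(curl, CURLOPT_CLOSESOCKETFUNCTION, my_closesocket);
  }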
If the first name server is not available, the multi interface does not
invoke the socket_cb when the DNS request to the first name server times
out. Ensure that the list of sockets is always updated after calling
Curl_resolver_is_resolved.
This bug can be reproduced if curl is compiled with --enable-ares and your
code uses the multi socket interface and the CURLMOPT_SOCKETFUNCTION
option. To test, try:
iptables -I INPUT \
-s $(sed -n -e '/name/{s/.* //p;q}' /etc/resolv.conf)/32 \
-j REJECT
and then run a program which uses the multi-interface.
Changed the eob detection to work across the whole of the buffer so that
lines that begin with a dot (which the server will have escaped) are
passed to the client application correctly.
Curl_pop3_write() now has a state machine that scans for the end of a
POP3 body so that the CR LF '.' CR LF sequence can arrive in anything from
one up to five subsequent packets.
Test case 810 is modified to use SLOWDOWN which makes the server pause
between each single byte and thus makes the POP3 body get sent to curl
basically one byte at a time.
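Not the actual Curl_pop3_write() code, but the idea of the scan is roughly
this: remember how much of the five byte CR LF '.' CR LF marker has
matched so far, so a marker split across packets is still found:

  #include <stddef.h>

  static const char POP3_EOB[] = "\x0d\x0a\x2e\x0d\x0a"; /* CRLF "." CRLF */

  /* 'matched' persists between calls, e.g. in the protocol state struct */
  static int body_complete(size_t *matched, const char *data, size_t len)
  {
    size_t i;
    for(i = 0; i < len; i++) {
      if(data[i] == POP3_EOB[*matched])
        (*matched)++;
      else
        *matched = (data[i] == POP3_EOB[0]) ? 1 : 0;
      if(*matched == sizeof(POP3_EOB) - 1)
        return 1; /* end of the POP3 body reached */
    }
    return 0; /* keep passing data to the client and wait for more */
  }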
Added convenience macro to use to check if a handle is using a shared
SSL session, and fixed so that Curl_ssl_close_all() doesn't lock when
the session isn't shared.
Skip a floating point addition operation when the integral part of the
time difference is zero. This avoids potential floating point addition
rounding problems while preserving the decimal part value.
Macros that look like function calls need to be made so that semicolons
work properly after them, both for the sake of indentation and to reduce
the risk of mistakes when using them.
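The usual way to get that property is the do-while(0) idiom, sketched here
with made-up names:

  /* expands to one statement, so the trailing semicolon works and the
     macro is safe inside an un-braced if/else */
  #define CLOSE_AND_CLEAR(fd)  \
    do {                       \
      close(fd);               \
      (fd) = -1;               \
    } while(0)

  if(done)
    CLOSE_AND_CLEAR(sockfd);  /* the semicolon ends the statement */
  else
    keep_waiting();           /* hypothetical */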
1) enables the Window Size option
2) allows the server to enable the echo mode
3) allows an app using libcurl to disable the default binary mode
Signed-off-by: Laurent Rabret
By setting PROTOPT_NOURLQUERY in the protocol handler struct, the
protocol will get the "query part" of the URL cut off before the data is
handled by the protocol-specific code. This makes libcurl adhere to
RFC3986 section 2.2.
Test 1220 is added to verify a file:// URL with query-part.
Bugfix: https handshake fails using gnutls 3 on windows
http://sourceforge.net/tracker/index.php?func=detail&aid=3441084&group_id=976&atid=100976
New gnutls versions have an error handler that knows about Winsock
errors, which is why gnutls_transport_set_global_errno() was deprecated
and then removed.
This is a correction of commit f5bb370 (blame me) which meant to
reimplement gnutls_transport_set_global_errno(), which is not necessary.
Regression: commit b998d95b (shipped first in release 7.22.0) made the
condition always equal false that should reset the TIMER_CONNECT timer
and call the Curl_verboseconnect() function.
Reported by: "Captain Basil"
Bug: http://curl.haxx.se/mail/archive-2011-11/0035.html
When the user requests PORT with a specific port or port range, the code
could lock up in an endless loop. There's now an extra conditional that
makes sure to treat the error specially and try the local address only
once, so a second failure will abort the loop correctly.
Bug: http://curl.haxx.se/bug/view.cgi?id=3433968
Reported by: Gokhan Sengun
If a proxy offers several Authentication schemes where NTLM and
Negotiate are offered by the proxy and you tell libcurl not to use the
Negotiate scheme then the request never returns when the proxy answers
with its HTTP 407 reply.
It is reproducible by the following steps:
- Use a proxy that offers NTLM and Negotiate (CURLOPT_PROXY and
CURLOPT_PROXYPORT)
- Tell libcurl NOT to use Negotiate: set CURLOPT_PROXYAUTH to
CURLAUTH_BASIC | CURLAUTH_DIGEST | CURLAUTH_NTLM
- Start the request
The call to curl_easy_perform() never returns. If you switch on debug
logging you can see that libcurl issues a new request as soon as it
receives the 407 reply. Instead it should return and set the response code
to 407.
Bug: http://curl.haxx.se/mail/lib-2011-10/0323.html
Move calling of ERR_remove_state(0) a.k.a ERR_remove_thread_state(NULL)
from Curl_ossl_close_all() to Curl_ossl_cleanup().
In this way ERR_remove_state(0) is now only called in libcurl by
curl_global_cleanup(). Previously it would get called by
curl_easy_cleanup(), curl_multi_cleanup() and potentially each time a
connection was removed from a connection cache, leading to premature
destruction of OpenSSL's thread local state hash.
Multi-threaded apps using OpenSSL-enabled libcurl should still call
ERR_remove_state(0) or ERR_remove_thread_state(NULL) at the very end of
threads that do not call curl_global_cleanup().
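A sketch of what the end of such a thread could look like (names other
than the OpenSSL calls are made up):

  #include <openssl/err.h>

  static void *transfer_thread(void *arg)
  {
    /* ... perform transfers with an OpenSSL-enabled libcurl ... */

    /* free this thread's OpenSSL error-queue state; use
       ERR_remove_state(0) with pre-1.0.0 OpenSSL versions */
    ERR_remove_thread_state(NULL);
    return NULL;
  }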
Now called 'use_ssl' instead, which better matches the current CURLOPT
name and since the option is used for all pingpong protocols (at least)
it makes sense to not use 'ftp' in the name.
Use gnutls_priority_set_direct() instead of gnutls_protocol_set_priority().
Remove the gnutls_certificate_type_set_priority() use since x509 is the
default certificate type anyway.
Reported by: Vincent Torri
This extends the fix from commit d7934b8bd4
When the multi state is changed within the multi_runsingle from DOING to
DO_MORE, we didn't immediately start the FTP state machine again. That
then left the FTP state in FTP_STOP. When curl_multi_fdset() was
subsequently called, the ftp_domore_getsock() function would return the
wrong fd info.
Reported by: Gokhan Sengun
After a PORT has been issued, and the multi handle would switch to the
CURLM_STATE_DO_MORE state (which is unique for FTP), libcurl would
return the wrong fdset to wait for when curl_multi_fdset() is
called. The code would blindly assume that it was waiting for a connect
of the second connection, while that isn't true immediately after the
PORT command.
Also, the function multi.c:domore_getsock() was highly FTP-centric and
therefore ugly to keep in protocol-agnostic code. I solved this problem
by introducing a new function pointer in the Curl_handler struct called
domore_getsock() which is only called during the DOMORE state for
protocols that set that pointer.
The new ftp.c:ftp_domore_getsock() function now returns fdset info about
the control connection's command/response handling while such a state is
in use, and goes over to waiting for a writable second connection first
once the commands are done.
The original problem could be seen by running test 525 and checking the
time stamps in the FTP server log. I can verify that this fix at least
fixes this problem.
Bug: http://curl.haxx.se/mail/lib-2011-10/0250.html
Reported by: Gokhan Sengun
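Roughly, the handler struct gains a pointer along these lines (a
simplified sketch of the internal change, not the exact declaration):

  struct Curl_handler {
    /* ... existing members ... */

    /* called by multi.c only while in the DO_MORE state, so the
       FTP-specific "what to wait for" logic can live in ftp.c */
    int (*domore_getsock)(struct connectdata *conn,
                          curl_socket_t *socks, int numsocks);
  };

  /* the FTP handler points this at ftp_domore_getsock(); protocols
     without a DO_MORE phase leave it unset */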
The fix is pretty much the one Nick Zitzmann provided, just edited to do
the right indent levels and with test case 1204 added to verify the fix.
Bug: http://curl.haxx.se/mail/lib-2011-10/0190.html
Reported by: Nick Zitzmann
The default lowat level for gnutls-2.12* is set to zero to avoid
unnecessary system calls and the gnutls_transport_set_lowat function has
been totally removed in >=gnutls-3 which causes build failures.
Therefore, the function shouldn't be used except for versions that
require it, <gnutls-2.12.0.
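So the call ends up guarded by a version check, roughly like this sketch:

  #if GNUTLS_VERSION_NUMBER < 0x020c00 /* gnutls before 2.12.0 */
    /* newer versions default to a zero lowat and drop the function */
    gnutls_transport_set_lowat(session, 0);
  #endif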
Previously the bit was set before the connection was found working, so if
it would first fail on an IPv6 address and then connect fine to an IPv4
address, the variable would still be TRUE.
Reported by: Thomas L. Shinnick
Bug: http://curl.haxx.se/bug/view.cgi?id=3421912
When doing a multipart formpost with a read callback, and that callback
returns CURL_READFUNC_ABORT, that return code must be properly
propagated back and handled accordingly. Previously it would be handled
as a zero byte read which would cause a hang!
Added test case 587 to verify. It uses the lib554.c source code with a
small ifdef.
Reported by: Anton Bychkov
Bug: http://curl.haxx.se/mail/lib-2011-10/0097.html
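For reference, a read callback that aborts looks roughly like this (the
helper names are hypothetical):

  static size_t read_cb(char *ptr, size_t size, size_t nitems, void *userp)
  {
    if(should_abort(userp))        /* hypothetical application condition */
      return CURL_READFUNC_ABORT;  /* now propagated instead of hanging */

    /* otherwise copy up to size*nitems bytes into ptr and return the
       number of bytes actually provided */
    return fill_formpost_chunk(ptr, size * nitems, userp); /* hypothetical */
  }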
Save the errno value immediately after a connect() failure so that it
won't get reset to something else before we read it.
Bug: http://curl.haxx.se/mail/lib-2011-10/0066.html
Reported by: Frank Van Uffelen and Fabian Hiernaux
Set the ACK timeout to 5 seconds.
If we are waiting for block X and receive block Y that is the expected
one, we should send an ACK and increase X (which is already implemented).
Otherwise drop the packet and don't increase the retry counter.
Prevent modification of the easy handle being added with
curl_multi_add_handle() unless this function actually succeeds.
Run Curl_posttransfer() to allow restoring of SIGPIPE handler when
Curl_connect() fails early in multi_runsingle().
It makes for much nicer and less convoluted code everywhere if this struct
member is always present, even when libcurl is built without SSL support.
This reverts parts of commit 15e3e45170
Modified smtp_endofresp() to detect NTLM from the server specified list
of supported authentication mechanisms.
Modified smtp_authenticate() to start the sending of the NTLM data.
Added smtp_auth_ntlm_type1_message() which creates an NTLM type-1
message. This function is used by smtp_authenticate() to start the sending
of data and by smtp_state_auth_ntlm_resp() when the AUTH command
doesn't contain the type-1 message as part of the initial response.
This lack of an initial response can happen if an OOM error occurs or the
type-1 message is longer than 504 characters. As the main AUTH command
is limited to 512 characters, the data has to be transmitted in two
parts; one containing the AUTH NTLM and the second containing the
type-1 message.
Added smtp_state_auth_ntlm_type2msg_resp() which handles the incoming
type-2 message and sends an outgoing type-3 message. This type-2
message is sent by the server in response to our type-1 message.
Modified smtp_state_auth_resp() to handle the response to: the AUTH
NTLM without the initial response and the type-2 response.
Modified smtp_disconnect() to cleanup the NTLM SSPI stack.
Added the output message length as a parameter to both
Curl_ntlm_create_type1_message() and Curl_ntlm_create_type3_message()
for use by future functions that require it.
Updated curl_ntlm.c to cater for the extra parameter on these two
functions.
Changed the name of variable l, in several functions, which represents
the length of strings being sent to the server, to len which is more
meaningful and consistent with other code in smtp.c and elsewhere.
Reworked smtp_authenticate() to be simpler and easier to follow.
Variables are now initialised in their definitions, and if no username
and password are specified the function sets the state to SMTP_STOP and
returns immediately, rather than being part of a huge if statement.
Don't even declare the struct members for disabled features
Introducing the CURLSHE_NOT_BUILT_IN return code for the share interface
when trying to set a sharing option that has been disabled (or not
enabled) in the library.
When the progress function returns non-zero to cancel the request, we must
mark the connection to get closed and it must go to the DONE state.
do_init() must be called as early as possible so that state variables
for new connections are reset early. We could otherwise see that the old
values were still there when a connection was to be disconnected very
early and it would make it behave wrongly.
Bug: http://curl.haxx.se/mail/lib-2011-10/0006.html
Reported by: Vladimir Grishchenko
The size of the email can now be set via CURLOPT_INFILESIZE. This allows
the email to be rejected by the server, if supported, when a maximum size
has been configured on the server.
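A sketch of an SMTP upload that passes the size along (the read callback
and size variable are illustrative):

  curl_easy_setopt(curl, CURLOPT_URL, "smtp://mail.example.com");
  curl_easy_setopt(curl, CURLOPT_MAIL_FROM, "<sender@example.com>");
  curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
  curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
  /* announce the message size so a server with a configured maximum
     can reject the mail up front */
  curl_easy_setopt(curl, CURLOPT_INFILESIZE, (long)message_size);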
Removed the code that stripped off the domain name when Curl_gethostname
returned the fully qualified domain name, as the function has been updated
to return the un-qualified host name.
Replaced the use of HOSTNAME_MAX as the size of the buffer in the call
to Curl_gethostname with sizeof(host) as this is safer should the buffer
size ever be changed.
Allow (*curl_write_callback) write callbacks to return
CURL_WRITEFUNC_OUT_OF_MEMORY to properly indicate to libcurl OOM
conditions inside the callback itself.
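Assuming the CURL_WRITEFUNC_OUT_OF_MEMORY value described above is visible
to the application callback, its use would look roughly like this sketch
(the buffer type is made up):

  #include <stdlib.h>
  #include <string.h>
  #include <curl/curl.h>

  struct membuf { char *mem; size_t len; };   /* hypothetical buffer */

  static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userp)
  {
    struct membuf *b = userp;
    char *p = realloc(b->mem, b->len + size * nmemb);
    if(!p)
      return CURL_WRITEFUNC_OUT_OF_MEMORY; /* report OOM to libcurl */
    b->mem = p;
    memcpy(b->mem + b->len, ptr, size * nmemb);
    b->len += size * nmemb;
    return size * nmemb;
  }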
If a socket is larger than FD_SETSIZE, avoid using FD_SET() on the
platforms where this is possible.
Bug: http://curl.haxx.se/bug/view.cgi?id=3413274
Reported by: Tim Starling
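The guard amounts to something like this sketch before any FD_SET() call
(the helper function is hypothetical):

  #include <sys/select.h>
  #include <curl/curl.h>

  static int add_readable_socket(curl_socket_t sockfd, fd_set *readfds)
  {
    if(sockfd >= FD_SETSIZE)
      return -1;            /* refuse rather than write past the fd_set */
    FD_SET(sockfd, readfds);
    return 0;
  }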
To avoid the progress meter headers getting output between each transfer,
make sure the bits get kept when (re-)initialized.
Reported by: Christopher Stone
I think curl should ignore this case and smtp.c should test for it, since
RFC 2821 seems to allow a "null reverse-path". Ref. "MAIL FROM:<>" in
section 3.7, page 25.
Fixed Curl_gethostname() so that it always returns the un-qualified
machine name rather than being dependent on the socket provider.
Note: The return of getenv("CURL_GETHOSTNAME") is also parsed in case the
developer / test harness provided a fully qualified domain name as its
value as well.
With this fix, it should work for PolarSSL-1.0.0 (and SVN-1091 trunk)
and retain compatibility with earlier versions. (Tested with 0.14.1)
PolarSSL still doesn't play nicely with curl's CA bundle (we discussed
this before) but I was at least able to retrieve the
https://www.gmail.com/ login page using a modified ca-certificates.crt
file with all 3 versions of PolarSSL.
Renamed the variable from 'proto' to 'level' simply because it is not the
protocol you set but the level, and that is the name of the argument used
in man pages and in the POSIX documentation of the setsockopt function.
This works around old libssh2 versions not properly initializing some ssh
session variables, which would trigger memory debugger warnings on memory
being used without having been initialized.
The current version of speedcheck.c may disable the timeout by passing
zero to Curl_expire. That is fine when using curl_multi_perform, because
it rechecks all timeout internals, but when using a custom event poller
(like hiperfifo.c) it may keep a stalled connection forever.
Calling sclose() both in the child and the parent fools the
socket leak detector into thinking it's been closed twice.
Calling close() in the child instead overcomes this problem. It's
not as portable as the sclose() macro, but this code is highly
POSIX-specific, anyway.
Just internal stuff...
Curl_safefree is now a macro defined in memdebug.h instead of a function
prototyped in url.h and implemented in url.c, so inclusion of url.h is no
longer required in order to simply use Curl_safefree.
Provide definition of macro WHILE_FALSE in setup_once.h in order to allow
other macros such as DEBUGF and DEBUGASSERT, and code using it, to compile
without 'conditional expression is constant' warnings.
The WHILE_FALSE stuff fixes 150+ MSVC compiler warnings.
Ensure existing logic in Curl_resolv_timeout() is not subverted upon getting a
negative timeout from resolve_server(). The timeout in resolve_server() could
be checked to avoid calling Curl_resolv_timeout() with an expired timeout, but
fixing this in this way allows existing logic in resolve_server() to be kept
unchanged.
Configure script option --enable-wb-ntlm-auth renamed to --enable-ntlm-wb
Configure script option --disable-wb-ntlm-auth renamed to --disable-ntlm-wb
Preprocessor symbol WINBIND_NTLM_AUTH_ENABLED renamed to NTLM_WB_ENABLED
Preprocessor symbol WINBIND_NTLM_AUTH_FILE renamed to NTLM_WB_FILE
Test harness env var CURL_NTLM_AUTH renamed to CURL_NTLM_WB_FILE
Static function wb_ntlm_close renamed to ntlm_wb_cleanup
Static function wb_ntlm_initiate renamed to ntlm_wb_init
Static function wb_ntlm_response renamed to ntlm_wb_response
Feature string literal NTLM_SSO renamed to NTLM_WB.
Preprocessor symbol USE_NTLM_SSO renamed to WINBIND_NTLM_AUTH_ENABLED.
curl's 'long' option 'ntlm-sso' renamed to 'ntlm-wb'.
Fix some comments to make clear that this is actually a NTLM delegation.
Fixed the order of the preferred SMTP authentication method to:
AUTH CRAM-MD5, AUTH LOGIN then AUTH PLAIN.
AUTH PLAIN should be last as it is slightly more insecure than AUTH LOGIN,
because the username and password are sent together - there is no
handshaking between the client and server like there is with AUTH LOGIN.
Previous interfaces for these libcurl internal functions did not allow
telling apart a legitimate zero-size result from an error condition. These
functions now return a CURLcode indicating success or a specific error.
Output size is returned using a pointer argument.
All usage of these two functions, and others closely related, has been
adapted to the new interfaces. Related error and OOM handling was adapted
or added where missing. Unit test 1302 was also adapted.
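Given that unit test 1302 covers base64, the two functions are presumably
the base64 helpers; their new shape is roughly as follows (a sketch, see
the headers in the tree for the exact prototypes):

  /* returns CURLE_OK on success; the decoded/encoded size goes via
     *outlen, so a zero-size result no longer looks like an error */
  CURLcode Curl_base64_decode(const char *src,
                              unsigned char **outptr, size_t *outlen);

  CURLcode Curl_base64_encode(struct SessionHandle *data,
                              const char *inputbuff, size_t insize,
                              char **outptr, size_t *outlen);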
* Added function comments:
- Curl_ntlm_decode_type2_message
- Curl_ntlm_create_type1_message
- Curl_ntlm_create_type3_message
* Modification of ntlm processing state to NTLMSTATE_TYPE2 is now done
only when Curl_ntlm_decode_type2_message() has fully succeeded.
As a bonus, this lets our MemoryTracking subsystem track zlib operations.
It also fixes a shortcut some zlib 1.2.x versions took using malloc()
instead of calloc(), which would trigger memory debugger warnings on
memory being used without having been initialized.
As I modified conn->bits.tcpconnect to become an array that holds one
bool for each potential connection, all uses of that struct field must
index it correctly.
When using the multi interface, a SOCKS proxy, and a connection that
wouldn't immediately consider itself connected (which my Linux tests do by
default), libcurl would be tricked into doing _two_ connects to the SOCKS
proxy when it set up the data connection, and then of course the second
attempt would fail miserably and cause an error.
This problem is a regression introduced by commit 4a42e5cdaa, which
shipped in the 7.21.7 release.
Bug: http://curl.haxx.se/mail/lib-2011-08/0199.html
Reported by: Fabian Keil
Until 2011-08-17 libcurl's Memory Tracking feature also performed
automatic malloc and free filling operations using 0xA5 and 0x13
values. Our own preinitialization of dynamically allocated memory
might be useful when not using third party memory debuggers, but
on the other hand this would fool memory debuggers into thinking
that all dynamically allocated memory is properly initialized.
As a default setting, libcurl's Memory Tracking feature no longer
performs preinitialization of dynamically allocated memory on its
own. If you know what you are doing, and really want to retain the old
behavior, you can achieve this by compiling with the preprocessor symbols
CURL_MT_MALLOC_FILL and CURL_MT_FREE_FILL defined with appropriate
values.
"release-ssl-ssh2-zlib" and "debug-ssl-ssh2-zlib" are two new makefile
targets that build libcurl with MSVC and link with libssh2
Bug: http://curl.haxx.se/bug/view.cgi?id=3388920
Reported by: "kdekker"
Strict splitting of http_ntlm.[ch] may trigger 8 compiler warnings when
building with some compilers with strict compiler warnings enabled;
depending on other specific configuration options, some may or may not be
triggered. Seven are related to 'unused function parameters' and another
one to 'var may be used before its value is set'.