When using only 1 second precision, curl doesn't create new cnonce
values quickly enough for all uses.
For example, issuing the following command multiple times to a recent
Tomcat causes authentication failures:
curl --digest -utest:test http://tomcat.test.com:8080/manager/list
This is because curl uses the same cnonce for several seconds, but
doesn't increment the nonce counter. Tomcat correctly interprets
this as a replay attack and rejects the request.
When microsecond precision is available, this commit causes curl to
change cnonce values much more frequently. In addition, the cnonce length
used in the headers was increased to 32 characters to further reduce the
risk of duplication.
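For illustration only (this is not the code the commit adds), a cnonce
built with microsecond resolution could look like the sketch below; the
make_cnonce helper and the exact format are made up for the example:

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

/* write a 32-character hex cnonce into 'out' (at least 33 bytes), mixing
   in microseconds so two requests within the same second still get
   distinct values; a real implementation would use a stronger random
   source than rand() */
static void make_cnonce(char *out)
{
  struct timeval now;
  gettimeofday(&now, NULL);
  snprintf(out, 33, "%08lx%08lx%08x%08x",
           (unsigned long)now.tv_sec, (unsigned long)now.tv_usec,
           (unsigned int)rand(), (unsigned int)rand());
}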
axTLS:
This will make the axTLS backend perform the RFC2818 checks, honoring
the VERIFYHOST setting similar to the OpenSSL backend.
Generic for OpenSSL and axTLS:
Move the hostcheck and cert_hostcheck functions out of the lib/ssluse.c
file to make them generically available to the OpenSSL, axTLS and other
SSL backends. They are now in the new lib/hostcheck.c file.
CyaSSL:
CyaSSL now also has the RFC2818 checks enabled by default. There is a
limitation in that verifyhost cannot be restricted to the Subject CN
field comparison alone. This SSL backend will thus behave like the NSS
and GnuTLS backends (meaning: RFC2818 ok, or bust). In other words:
setting verifyhost to 0 or 1 disables the Subject Alt Names checks too.
Schannel:
Updated the schannel information messages: Split the IP address usage
message from the verifyhost setting and changed the message about
disabling SNI (Server Name Indication, used in HTTP virtual hosting)
into a message stating that the Subject Alternative Names checks are
being disabled when verifyhost is set to 0 or 1. As a side effect of
switching off the RFC2818 related servername checks with
SCH_CRED_NO_SERVERNAME_CHECK
(http://msdn.microsoft.com/en-us/library/aa923430.aspx) the SNI feature
is being disabled. This effect is not documented in MSDN, but Wireshark
output clearly shows it (details on the libcurl mailing list).
PolarSSL:
Fix the prototype change in PolarSSL of ssl_set_session() and the move
of the peer_cert from the ssl_context to the ssl_session. Found this
change in the PolarSSL SVN between r1316 and r1317 where the
POLARSSL_VERSION_NUMBER was at 0x01010100. But to accommodate the Ubuntu
PolarSSL version 1.1.4, the check discriminates between PolarSSL versions
lower than 1.2.0 and versions 1.2.0 and higher. Note: the PolarSSL SVN
trunk jumped from version 1.1.1 to 1.2.0.
Generic:
All the SSL backends are fixed and checked to work with the
ssl.verifyhost as a boolean, which is an internal API change.
The text "additional stuff not fine" text was added for debug purposes a
while ago, but it isn't really helping anyone and for some reason some
Linux distributions provide their libcurls built with debug info still
present and thus (far too many) users get to read this info.
The logic previously checked for a started NTLM negotiation only for the
host and not for the proxy, leading to problems doing POSTs larger than
2000 bytes over a proxy using NTLM. Now the check includes the proxy as
well.
Bug: http://curl.haxx.se/bug/view.cgi?id=3582321
Reported by: John Suprock
The existing logic only cut off the fragment from the separate 'path'
buffer which is used when sending HTTP to hosts. The buffer that held
the full URL used for proxies was not dealt with. It is now.
Test case 5 was updated to use a fragment on a URL over a proxy.
Bug: http://curl.haxx.se/bug/view.cgi?id=3579813
As a handle can be re-used after having done HTTP auth in a previous
request, it must make sure to clear out the HTTP auth types that aren't
wanted in this new request.
This reverts commit ce8311c7e4.
The commit made test 2024 work but caused a regression with repeated
Digest authentication. We need to fix this differently.
After a research team wrote a document[1] showing several live code
bases out there in the wild that misused the CURLOPT_SSL_VERIFYHOST
option thinking it was a boolean, this change bans 1 as a value and makes
libcurl return an error for it.
1 was never a sensible value to use in production but was introduced
back in the days to help debugging. It was always documented clearly
this way.
1 was never supported by all SSL backends in libcurl, so this cleanup
makes the treatment of it unified.
The report's list of mistakes for this option was all PHP code, and
while there's a binding layer between libcurl and PHP, the PHP team has
decided to keep as thin a layer as possible on top of libcurl, so they
will not alter or specifically filter a 'TRUE' value for this particular
option. I sympathize with that position.
[1] = http://daniel.haxx.se/blog/2012/10/25/libcurl-claimed-to-be-dangerous/
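A hypothetical application snippet showing the values that remain
sensible after this change, assuming 'curl' is an already initialized
easy handle:

#include <curl/curl.h>

static void verify_host(CURL *curl)
{
  /* full certificate and host name verification; do NOT pass 1 (or TRUE),
     libcurl now returns an error for that value */
  curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
  curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);
}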
Since automake 1.12.4, warnings like this are issued when running automake:
warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS' (or '*_CPPFLAGS')
Avoid INCLUDES and roll these flags into AM_CPPFLAGS.
Compile tested on:
Ubuntu 10.04 (automake 1:1.11.1-1)
Ubuntu 12.04 (automake 1:1.11.3-1ubuntu2)
Arch Linux (automake 1.12.4)
As pointed out in Bug report #3579064, curl_multi_perform() would
wrongly use a blocking mechanism internally for some commands which
could lead to for example a very long block if the LIST response never
showed.
The solution was to make sure to properly continue to use the multi
interface non-blocking state machine.
The new test 1501 verifies the fix.
Bug: http://curl.haxx.se/bug/view.cgi?id=3579064
Reported by: Guido Berhoerster
When the string was given as 'srp' it didn't work; it required 'SRP'.
Starting now, the check disregards casing.
Bug: http://curl.haxx.se/bug/view.cgi?id=3578418
Reported by: Jeff Connelly
Previously the Metalink code used Apple's CommonCrypto library only if
curl was built using the --with-darwinssl option. Now we use CommonCrypto
on all Apple operating systems, i.e. Mac OS X Tiger or later and iOS 5 or
later, so you don't need to build with --with-darwinssl anymore. Also rolled
out this change to libcurl's md5 code.
The iOS build was broken by a reference to a function that only existed
under OS X; fixed. Also fixed a hard-to-reproduce problem where, if the
server disconnected before libcurl got the chance to hang up first and
SecureTransport was in use, then we'd raise an error instead of failing
gracefully.
This is a minor change in behavior after having been pointed out by Mark
Tully and discussed on the list. Initially this case would internally
call poll() with no sockets and a timeout which would equal a sleep for
that specified time.
Bug: http://curl.haxx.se/mail/lib-2012-10/0076.html
Reported by: Mark Tully
Since there are servers that seem to return very big encrypted
data packages, we need to be able to handle those without having
an internal size limit. To avoid the buffer growing too fast too early,
the initial size was decreased and the minimum free space in the buffer
was decreased as well.
During the periods of rate limitation, the speedcheck function wasn't
called and thus the values weren't updated accordingly and it would then
easily trigger wrongly once data got transferred again.
Also, the progress callback's return code was not acknowledged in this
state, so an "abort" return code could get ignored and not have the
documented effect of aborting an ongoing transfer.
Bug: http://curl.haxx.se/mail/lib-2012-09/0081.html
Reported by: Jie He
The Curl_reconnect_request() function could end up returning a pointer
to a free()d struct when Curl_done() failed inside. Clearing the pointer
unconditionally after Curl_done() avoids this risk.
Reported by: Ho-chi Chen
Bug: http://curl.haxx.se/mail/lib-2012-09/0188.html
Selecting a socks proxy in Google's Chrome browser results in the
following environment variables:
NO_PROXY=localhost,127.0.0.0/8
ALL_PROXY=socks://localhost:1080/
all_proxy=socks://localhost:1080/
no_proxy=localhost,127.0.0.0/8
... and libcurl didn't treat 'socks://' as socks but instead picked an
HTTP proxy.
Reported by: Scott Bailey
Bug: http://curl.haxx.se/bug/view.cgi?id=3566860
Each certificate section of the input certdata.txt file has a trust
section following it with details.
This script failed to detect the start of the trust for at least one
cert[*], which made the script continue past that section into the next
one where it found an 'untrusted' marker and as a result that certificate
was not included in the output.
[*] = "Hellenic Academic and Research Institutions RootCA 2011"
Bug: http://curl.haxx.se/mail/lib-2012-09/0019.html
The SMTP client will now send the SIZE parameter in the MAIL FROM
command only if the server supports it. Without this patch the server
might say "504 Command parameter not implemented" and reject the message.
Bug: http://curl.haxx.se/bug/view.cgi?id=3564114
/*
 * Name:    curl_multi_wait()
 *
 * Desc:    Poll on all fds within a CURLM set as well as any
 *          additional fds passed to the function.
 *
 * Returns: CURLMcode type, general multi error code.
 */
CURL_EXTERN CURLMcode curl_multi_wait(CURLM *multi_handle,
                                      struct curl_waitfd extra_fds[],
                                      unsigned int extra_nfds,
                                      int timeout_ms);
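A hypothetical usage sketch against the prototype as declared above
('multi' is assumed to be a CURLM handle with easy handles already
added):

#include <curl/curl.h>

static void drive_transfers(CURLM *multi)
{
  int still_running = 1;
  while(still_running) {
    /* wait at most 1000 ms for activity on the handles in 'multi';
       no extra file descriptors are passed here */
    if(curl_multi_wait(multi, NULL, 0, 1000) != CURLM_OK)
      break;
    curl_multi_perform(multi, &still_running);
  }
}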
In Mountain Lion, Apple added TLS 1.1 and 1.2, and deprecated a number
of SecureTransport functions, some of which we were using. We now check
to see if the replacement functions are present, and if so, we use them
instead. The old functions are still present for users of older
cats. Also fixed a build warning that started to appear under Mountain
Lion.
Commit b91d29a28e170c16d65d956db79f2cd3a82372d2 introduced a bug and
broke the Curl_closesocket function. The sock_accepted flag for the
second socket should be tagged as TRUE before the sockopt callback is
called, because if the callback returns an error, the Curl_closesocket
function is going to call the fclosesocket callback for the accept()ed
socket.
For active FTP connections, applications may need to set socket options
after the accept() call returns successfully. This fix invokes the
callback registered with the CURLOPT_SOCKOPTFUNCTION option in that case.
Also, a new sock type - CURLSOCKTYPE_ACCEPT - is added. This type is
passed to application callbacks in the 'purpose' parameter, which
applications may use to distinguish between socket types.
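A hypothetical POSIX-style callback showing how the new purpose value can
be used (the SO_KEEPALIVE tweak is just an example; on Windows the
setsockopt() value argument is a char *):

#include <curl/curl.h>
#include <sys/socket.h>

static int sockopt_cb(void *clientp, curl_socket_t curlfd,
                      curlsocktype purpose)
{
  (void)clientp;
  if(purpose == CURLSOCKTYPE_ACCEPT) {
    /* this is the socket returned by accept() for the active FTP
       data connection */
    int keepalive = 1;
    if(setsockopt(curlfd, SOL_SOCKET, SO_KEEPALIVE,
                  (void *)&keepalive, sizeof(keepalive)))
      return CURL_SOCKOPT_ERROR;
  }
  return CURL_SOCKOPT_OK;
}

/* registered with:
   curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockopt_cb); */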
Commit e351972bc8 brought in the ssh agent support but some uses of
the libssh2 agent API were done unconditionally, which wasn't good enough
since that API hasn't always been present.
By reading the ->head pointer and using that instead of the ->size
number to figure out if there's a list remaining we avoid the (false
positive) clang-analyzer warning that we might dereference a null
pointer.
I suspect this is a regression introduced in commit 207cf150, included
since 7.24.0.
Avoid showing '(nil)' as hostname in verbose output by making sure the
hostname fixup function is called early enough to set the pointers that
are used for this. The name data is set again for each request even for
re-used connections to handle multiple hostnames over the same
connection (like with a proxy) or cases where the casing etc of the host
name changes between requests (which has proven to be important at least
once in the past).
Test1011 was modified to use a redirect with a re-used connection since
it then showed the bug and now no longer does. There's currently no easy
way to have the test suite detect 'nil' texts in verbose outputs so no
tests will detect if this problem gets reintroduced.
Bug: http://curl.haxx.se/mail/lib-2012-07/0111.html
Reported by: Gisle Vanem
We found a problem with ftp transfer using libcurl (7.23 and 7.25)
inside an application which is receiving unix signals (SIGUSR1,
SIGUSR2...) almost continuously. (Linux 2.4, PowerPC, HAVE_POLL_FINE
defined).
Curl_socket_check() uses poll() to wait for the socket, and retries it
when a signal is received (EINTR). However, if a signal is received and
it also happens that the timeout has been reached, Curl_socket_check()
returns -1 instead of 0 (indicating an error instead of a timeout).
In our case, the result is an aborted connection even before the ftp
banner is received from the server, and a return value of
CURLE_OUT_OF_MEMORY from curl_easy_perform() (Curl_pp_multi_statemach(),
in pingpong.c, actually returns OOM if Curl_socket_check() fails :-)
Funny to debug on a system on which OOM is a possible cause).
Bug: http://curl.haxx.se/mail/lib-2012-07/0122.html
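A sketch of the intended behaviour (not the actual Curl_socket_check()
code): retry poll() on EINTR, but once the deadline has passed report a
timeout (0) rather than an error (-1).

#include <poll.h>
#include <errno.h>
#include <sys/time.h>

static long elapsed_ms(struct timeval start)
{
  struct timeval now;
  gettimeofday(&now, NULL);
  return (long)(now.tv_sec - start.tv_sec) * 1000 +
         (long)(now.tv_usec - start.tv_usec) / 1000;
}

static int wait_on_socket(int fd, int timeout_ms)
{
  struct pollfd pfd;
  struct timeval start;
  pfd.fd = fd;
  pfd.events = POLLIN;
  gettimeofday(&start, NULL);
  for(;;) {
    long left = timeout_ms - elapsed_ms(start);
    int rc;
    if(left <= 0)
      return 0;                 /* deadline passed: timeout, not an error */
    rc = poll(&pfd, 1, (int)left);
    if(rc >= 0)
      return rc;                /* 0 == timeout, >0 == socket is ready */
    if(errno != EINTR)
      return -1;                /* a real error */
    /* interrupted by a signal: loop and retry with the remaining time */
  }
}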
Due to WSAPoll bugs, libcurl does not work as intended. When the cURL
library is used to set up a connection to an incorrect port, normally the
result is CURLE_COULDNT_CONNECT, /* 7 */, but due to the bug in WSAPoll,
the result now is CURLE_OPERATION_TIMEDOUT, /* 28 - the timeout time was
reached */.
On August 1, Jan Koen Annot opened a case for this to Microsoft Premier
Online (https://premier.microsoft.com/). The support engineer handling
the case wrote that the case description is quite clear. He will try to
reproduce the issue and then proceed with troubleshooting it.
Reported by: Jan Koen Annot
Bug: http://curl.haxx.se/mail/lib-2012-07/0310.html
When figuring out if the data stream needs to be rewound when the
request is to be resent, we must not access the HTTP struct unless the
protocol used is indeed HTTP...
Bug: http://curl.haxx.se/bug/view.cgi?id=3544688
Previously the curl_multi interface would freeze if darwinssl was
enabled and at least one of the handles tried to connect to a Web site
using HTTPS. Removed the "wouldblock" state darwinssl was using because
I figured out a solution for our "would block but in which direction?"
dilemma.
In many states the easy_conn pointer is referenced and just assumed to
be working. This is an added extra check since analyzing indicates
there's a risk we can end up in these states with a NULL pointer there.
A HEAD response has no body length and gets the headers like the
corresponding GET would so it should not get closed after the response
based on the same rules. This mistake caused connections that did HEAD
to get closed too often without a valid reason.
Bug: http://curl.haxx.se/bug/view.cgi?id=3542731
Reported by: Eelco Dolstra
The function https_getsock was only implemented properly when USE_SSLEAY
or USE_GNUTLS is defined, but it is also necessary for USE_SCHANNEL.
The problem occurs when Curl_read_plain or Curl_write_plain returns
CURLE_AGAIN. In that case CURLE_OK is returned to the multi interface and
the used socket is set to state CURL_POLL_REMOVE and the easy state is
set to CURLM_STATE_PROTOCONNECT. This is fine, because later the socket
should be set to CURL_POLL_IN or CURL_POLL_OUT via multi_getsock. That's
where https_getsock is called and doesn't return any sockets.
The code was printing a warning when SNI was set up successfully. Oops.
Printing the cipher number in verbose mode was something only TLS/SSL
programmers might understand, so I had it print the name of the cipher,
just like in the OpenSSL code. That'll be at least a little bit easier
to understand. The SecureTransport API doesn't have a method of getting
a string from a cipher like OpenSSL does, so I had to generate the
strings manually.
When doing CONNECT requests, libcurl must make sure the connection is
alive as much as possible. NTLM requires it and it is generally good for
other cases as well.
NTLM over CONNECT requests has been broken since this regression I
introduced in my CONNECT cleanup commits that started with 41b0237834,
included since 7.25.0.
Bug: http://curl.haxx.se/bug/view.cgi?id=3538625
Reported by: Marcel Raad
Allow NTLM authentication when building using SecureTransport (Darwin) for SSL.
This uses CommonCrypto, a cryptography library that ships with all versions of
iOS and Mac OS X. It's like OpenSSL's libcrypto, except that it's missing a few
less-common cyphers and doesn't have a big number data structure.
Before commit 2dded8fedb (dec 2010) there was logic that used
RAND_screen() at times and now I remove the leftover #ifdef check for
it.
The seeding code that uses Curl_FormBoundary() in ossl_seed() is dubious
to keep since it hardly increases randomness but I fear I'll break
something if I remove it now...
- Renamed st_ function prefix to darwinssl_
- Renamed Curl_st_ function prefix to Curl_darwinssl_
- Moved the duplicated ssl_connect_done out of the #ifdef in lib/urldata.h
- Fixed a teensy little bug that made non-blocking connection attempts block
- Made it so that it builds cleanly against the iOS 5.1 SDK
Removed two RESOURCE declarations that were not intended to exist.
Bug: http://curl.haxx.se/bug/view.cgi?id=3535977
And sorted configuration hunks to reflect the same internal order as the
one shown in the usage message.
Increase decrypted and encrypted cache buffers using a limited
doubling strategy. More information on the mailing list:
http://curl.haxx.se/mail/lib-2012-06/0255.html
It updates the two remaining reallocations that have already been there
and fixes the other one to use the same "do we need to increase the
buffer"-condition as the other two. CURL_SCHANNEL_BUFFER_STEP_SIZE was
renamed to CURL_SCHANNEL_BUFFER_FREE_SIZE since that is actually what it
is now. Since we don't know how much more data we are going to read
during the handshake, CURL_SCHANNEL_BUFFER_FREE_SIZE is used as the
minimum free space required in the buffer for the next operation.
CURL_SCHANNEL_BUFFER_STEP_SIZE was used for that before, too, but since
we don't have a step size now, the define was renamed.
Process extra data buffer before returning from schannel_connect_step2.
Without this change I've seen WinCE hang when schannel_connect_step2
returns and calls Curl_socket_ready.
If the encrypted handshake does not fit in the initial buffer (seen with
a large certificate chain), increasing the encrypted data buffer is necessary.
Fixed warning in curl_schannel.c line 1215.
Implemented timeout loop in schannel_send while sending data. This
is as close as I think we can get to write buffering; I put a big
comment in to explain my thinking.
With some committer adjustments
Make the Schannel implementation use libcurl's default buffer size
for the initial received encrypted and decrypted data cache buffers.
The implementation still needs to handle more data since more data
might have already been received or decrypted during the handshake
or a read operation which needs to be cached for the next read.
curl_schannel.c - implemented graceful SSL shutdown. If we fail to
shutdown the connection gracefully, I've seen schannel try to use a
session ID for future connects and the server aborts the connection
during the handshake.
curl_schannel.c - auto certificate validation doesn't seem to work
right on CE. I added a method to perform the certificate validation
which uses CertGetCertificateChain and manually handles the result.
Coverity actually pointed out flawed logic in the previous call to
Curl_strntoupper() where the code used sizeof() of a pointer to pass in
a size argument. That code still worked since it only needed to
uppercase 4 letters. Still, the entire malloc/uppercase/free sequence
was pointless since the code has already matched the string once in the
condition that starts the block of code.
As spotted by Coverity, va_end() was not used previously. To make it
used I took away a bunch of return statements and made them into
assignments instead.
SSPI related code now compiles with ANSI and WCHAR versions of security
methods (WinCE requires WCHAR versions of methods).
Pulled UTF8 to WCHAR conversion methods out of idn_win32.c into their own file.
curl_sasl.c - include curl_memory.h to use correct memory functions.
getenv.c and telnet.c - WinCE compatibility fix
With some committer adjustments
Building with CyaSSL failed to compile. The reason being that OCSP_REQUEST
and OCSP_RESPONSE are enum values in CyaSSL but defines in <wincrypt.h>,
included via <winldap.h> in ldap.c.
http://curl.haxx.se/mail/lib-2012-06/0196.html
Version number is removed in order to make this info consistent with
how we do it with other MS and Linux system libraries for which we don't
provide this info.
Identifier changed from 'WinSSPI' to 'schannel' given that this is the
actual provider of the SSL/TLS support. libcurl can still be built with
SSPI and without SCHANNEL support.
Removed the obsolete minor status variable and parameter of the status
function, which was never used or set at all. Also, Curl_sspi_strerror
supports only one status and there is no need for a second sub status.
Added Windows SSPI version information to the curl version string when
SCHANNEL SSL is not enabled, as the version of the library should also
be included when SSPI is used to generate security contexts.
Removed SSPI from the feature list as the features are GSS-Negotiate,
NTLM and SSL depending on the usage of the SSPI library.
Removed duplicate blank lines.
Removed spaces between the not and test in various if statements.
Removed explicit test of NULL in an if statement.
Placed function returns on same line as function declarations.
Replaced the use of curl_maprintf() with aprintf() as it is the
preprocessor's job to do this substitution if ENABLE_CURLX_PRINTF
is set.
curl_sspi.c: Fixed mingw32-gcc compiler warnings
curl_sspi.c: Fixed length of error code hex output
The hex value was printed as a signed 64-bit value on 64-bit systems:
SEC_E_WRONG_PRINCIPAL (0xFFFFFFFF80090322)
It is now correctly printed as the following:
SEC_E_WRONG_PRINCIPAL (0x80090322)
curl_sspi.c: Fallback to security function table version number
Instead of reporting an unknown version, the interface version is used.
curl_sspi.c: Removed SSPI/ version prefix from Curl_sspi_version
curl_schannel: Replaced static buffer sizes with defined names
curl_schannel.c: First brace when declaring functions on column 0
curl_schannel.c: Put the pointer sign directly at variable name
curl_schannel.c: Use structs directly instead of typedef'ed structs
curl_schannel.c: Removed space before opening brace
curl_schannel.c: Fixed lines being longer than 80 chars
Moved the error constant switch to curl_sspi.c and added two new helper
functions to curl_sspi.[ch] which either return the constant or a fully
translated message representing the SSPI security status.
Updated socks_sspi.c and curl_schannel.c to use the new functions.
Windows 2000 Professional: Schannel returns SEC_E_OK instead
of SEC_I_CONTEXT_EXPIRED. If the length of the output buffer
is zero and the first byte of the encrypted packet is 0x15,
the application can safely assume that the message was a
close_notify message and change the return value to
SEC_I_CONTEXT_EXPIRED.
Connection shutdown does not mean that there is no data to read
Correctly handle incomplete message and ask curl to re-read
Fixed buffer for decrypted data being too small
Re-structured read condition to be more effective
Removed obsolete verbose messages
Changed memory reduction method to keep a minimum buffer of size 4096
Fixed warning: dereferencing pointer does break strict-aliasing rules
by using a union instead of separate pointer variables.
Internal union sockaddr_u could probably be moved to generic header.
Thanks to Paul Howarth for the hint about using unions for this.
Important for winbuild: Separate declaration of sockaddr_u pointer.
The pointer variable *sock cannot be declared and initialized right
after the union declaration. Therefore it has to be a separate statement.
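A reduced sketch of the approach (not the exact libcurl code): going
through the members of a union named like the one above, instead of
through separately casted pointers, is what silences the warning.

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

union sockaddr_u {
  struct sockaddr sa;
  struct sockaddr_in sa4;
  struct sockaddr_in6 sa6;
};

static unsigned short remote_port(const union sockaddr_u *addr)
{
  /* note the winbuild caveat above: declare any pointer to the union in
     its own statement, separate from the union declaration */
  if(addr->sa.sa_family == AF_INET)
    return ntohs(addr->sa4.sin_port);
  return ntohs(addr->sa6.sin6_port);
}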
Since Curl_pgrsDone() itself calls Curl_pgrsUpdate() which may return an
abort instruction or similar we need to return that info back and
subsequently properly handle return codes from Curl_pgrsDone() where
used.
(Spotted by a Coverity scan)
Previously it would use a 256 byte buffer and thus cut off very long
subject names. The limit is now upped to the receive buffer size, 16K.
Bug: http://curl.haxx.se/bug/view.cgi?id=3533045
Reported by: Anthony G. Basile
Re-factored the smtp_state_*_resp() functions to 1) Match the constants
that were refactored in commit 00fddba672, 2) To be more readable and
3) To match their counterparts in pop3.c.
Corrected lines longer than 78 characters.
Removed unnecessary braces in smtp_state_helo_resp().
Introduced some comments in data sending functions.
Tidied up comments to match changes made in pop3.c.
Corrected lines longer than 78 characters.
Changed POP3_AUTH_FINAL to POP3_AUTH to match SMTP code now that the
AUTH command is no longer sent on its own.
Introduced some comments in data sending functions.
Another attempt at trying to rationalise code and comment style.
Added a service type parameter to Curl_sasl_create_digest_md5_message()
to allow the function to be used by different services rather than being
hard coded to "smtp".
Not all SASL enabled POP3 servers support the AUTH command on its own
when trying to detect the supported mechanisms. As such changed the
mechanism detection to use the CAPA command instead.
Because pop3_endofresp() is called for each line of data yet is not
passed the line and line length, we have to use the data pointed to by
pp->linestart_resp, which contains the whole packet. As a result the
mechanisms were being detected in one call even though the function would
be called for each line of data.
Using curl with verbose mode enabled would show that one line of data
was received in response to the AUTH command before the AUTH <mechanism>
command was sent to the server, and then the next few lines of the
original AUTH response would be displayed before the response to the
AUTH <mechanism> command. This would then cause problems when parsing
the CRAM-MD5 challenge data as extra data was contained in the buffer.
Changed the parsing so that each line is checked for the mechanisms
and the function returns FALSE until the whole of the AUTH response has
been processed.
Previously it wasn't possible to connect to POP3 without specifying the
user name, as a CURLE_ACCESS_DENIED error would be returned. This error
occurred because, if no mailbox user was specified, USER would be sent to
the server with a blank user name and the server would reply with -ERR.
This wasn't a problem prior to the 7.26.0 release but with the
introduction of custom commands the user and/or application developer
might want to issue a CAPA command without having to log in as a
specific mailbox user.
Additionally this fix won't send the newly introduced AUTH command if no
user name is specified.
Rather than encoding the password message itself the
smtp_state_authpasswd_resp() function now delegates the work to the same
function that smtp_state_authlogin_resp() and smtp_authenticate() use
when constructing the encoded user name.
In preparation for moving to the SASL module re-factored the
smtp_auth_login_user() function to smtp_auth_login() so that it can be
used for both user names and passwords as sending both of these under
the login authentication mechanism is the same.
The POP3 protocol doesn't really have the concept of error codes and
uses +, +OK and -ERR in response to commands to indicate continue,
success and error.
The AUTH command is one of those commands that requires multiple pieces
of data to be sent to the server where the server will respond with + as
part of the handshaking. This meant changing the values before
continuing with the next stage of adding authentication support.
Changed the order of the state machine to match the order of actual
events.
Reworked some comments and function parameter positioning that I missed
the other day.
Added support for detecting the supported SASL authentication mechanisms
via the AUTH command. There are two ways of detecting them: either by
using the AUTH command, which will return -ERR if not supported, or by
using the CAPA command, which will return SASL and the list of mechanisms
if supported, will not include SASL if SASL authentication is not
supported, or will return -ERR if the CAPA command itself is not
supported. As such it seems simpler to use the AUTH command and fall back
to normal clear text authentication if the command is not supported.
Additionally updated the test cases to return -ERR when the AUTH command
is encountered. Additional test cases will be added when support for the
individual authentication mechanisms is added.
Moved EOB definition into header file.
Switched the logic around in pop3_endofresp() to allow for the
introduction of auth-mechanism detection.
Repositioned second and third function variables where they will fit
within the 78 character line limit.
Tidied up some comments.
Move the SMTP_AUTH constants into a separate header file in
preparation for adding SASL based authentication to POP3 as the two
protocols will need to share them.
Due to the result code being reset to CURLE_OK when smtp_dophase_done()
was called, postdata would incorrectly be sent to the server when the
MAIL FROM or RCPT command was rejected.
As such, libcurl would return the wrong result code from performing the
operation and additionally set CURLINFO_RESPONSE_CODE to be that
returned by the postdata command.
Bug: http://curl.haxx.se/mail/lib-2012-05/0108.html
Reported by: Gokhan Sengun
In nettle/md5.h, md5_init and md5_update are defined as macros to
nettle_md5_init and nettle_md5_update respectively. This causes errors
when using the MD5_params members md5_init and md5_update. This patch
renames these members to md5_init_func and md5_update_func to avoid the
name conflict. For completeness, MD5_params.md5_final was also renamed to
md5_final_func.
The change in curl_ntlm_core.c fixes a conversion error by casting to the
proper type.
A dot character at the beginning of a line would not be escaped to a
double dot as required by RFC-2821; instead it would be deleted by the
mail server. Please see section 4.5.2 of the RFC for more information.
Note: This fix also simplifies the detection of repeated CRLF.CRLF
combinations, such as CRLF.CRLF.CRLF, a little rather than having to
advance the eob counter to 2.
Roman Mamedov spotted (in
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=670126) that curl would
not complain when given a URL with an IPv6 numerical address without
brackets. It would simply cut off the last ":[hex]" part and thus not
work correctly.
That's a URL using an illegal syntax and now libcurl will instead return
a clear error code and error message detailing the error.
The above mentioned bug report claims this to be a regression but
libcurl does not guarantee functionality when given URLs that aren't
following the URL spec (RFC3986 mostly). I consider the fact that it
used to handle this differently a mere coincidence.
When doing a chunked-encoded POST with -d (CURLOPT_POSTFIELDS) and the
size of the POST was zero length, it made libcurl first send a zero
chunk and then the terminating one. This could confuse a receiver and it
should rather just send the terminating chunk as it does with this fix.
Test case 1333 is added to verify.
Bug: http://curl.haxx.se/mail/archive-2012-04/0060.html
Reported by: Arnaud Compan
Commit 9109cdec11 brought this regression (shipped since 7.24.0).
The singleipconnect() function must not return an error if Curl_socket()
returns an error. It should then simply return OK and pass SOCKET_BAD
back, because that is how the user of this function expects it to work;
anything else is not fine.
Reported by: Blaise Potard
Bug: http://curl.haxx.se/bug/view.cgi?id=3516508
Include stdbool.h only when it is available and configure is capable of
detecting a proper 'bool' data type when the header is included.
Compilation fix for old or unpatched versions of XL C compiler.
Report: http://curl.haxx.se/mail/archive-2012-04/0022.html
NSS_InitContext() was introduced in NSS 3.12.5 and helps to prevent
collisions on NSS initialization/shutdown with other libraries.
Bug: https://bugzilla.redhat.com/738456
The configure script now provides conditional definitions for Makefile.am
that result in CURL_HIDDEN_SYMBOLS being defined by the resulting
makefiles when appropriate.
Additionally, the configure script option for controlling symbol hiding
is now named --enable-symbol-hiding / --disable-symbol-hiding. While
still valid, the old option names --enable-hidden-symbols /
--disable-hidden-symbols will be deprecated in some future release.
BUILDING_LIBCURL and CURL_STATICLIB are no longer defined in curl_config.h.
configure will generate appropriate conditionals so that the mentioned
symbols get defined and used in Makefiles at compilation time.
Configuration files such as curl_config.h and all config-*.h no longer
exist nor are generated/copied into the 'src' directory; now these only
exist in the 'lib' directory, from where the curl tool sources use them.
Additionally the old src/setup.h has been refactored into src/tool_setup.h
which now pulls in lib/setup.h.
The possibility of a makefile needing an include path adjustment exists.
Curl_socket returns CURLE_COULDNT_CONNECT when the opensocket callback
returns CURL_SOCKET_BAD. The previous return value, CURLE_FAILED_INIT,
conveyed incorrect information to the user.
Reworked the command sending from two specific LIST and RETR command
functions into a single command based function as well as the two
associated response handlers into a generic command handler.
If an empty string is passed to CURLOPT_SSH_PUBLIC_KEYFILE, libcurl will
pass no public key to libssh2 which then tries to compute it from the
private key. This is known to work when libssh2 1.4.0+ is linked against
OpenSSL.
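A hypothetical setup relying on that behaviour (the paths and host are
placeholders, 'curl' an initialized easy handle; requires libssh2 1.4.0+
linked against OpenSSL):

#include <curl/curl.h>

static void setup_sftp(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_URL, "sftp://sftp.example.com/file.txt");
  /* empty public key file name: let libssh2 derive it from the private key */
  curl_easy_setopt(curl, CURLOPT_SSH_PUBLIC_KEYFILE, "");
  curl_easy_setopt(curl, CURLOPT_SSH_PRIVATE_KEYFILE, "/home/user/.ssh/id_rsa");
}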
This change replaces RFC 2818 based hostname check in OpenSSL build with
RFC 6125 [1] based one.
The hostname check in RFC 2818 is ambiguous and each project implements
it in their own way, with slight differences. I checked curl, gnutls,
Firefox and Chrome and they are all different.
I don't think there is a bug in the current implementation of the
hostname check, but it is not as strict as what the modern browsers do.
Currently, curl allows multiple wildcard characters '*' and lets them
match '.' (as described in the comment in ssluse.c).
Firefox implementation is also based on RFC 2818 but it only allows at
most one wildcard character and it must be in the left-most label in the
pattern and the wildcard must not be followed by any character in the
label.[2] Chromium implementation is based on RFC 6125 as my patch does.
Firefox and Chromium both require wildcard in the left-most label in the
presented identifier.
This patch is more strict than the current implementation, so there may
be some cases where the old curl works but the new one does not. But at
the same time I think it is good practice to follow what the modern
browsers do and follow the newer RFC.
[1] http://tools.ietf.org/html/rfc6125#section-6.4.3
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=159483
With FOLLOWLOCATION enabled, when a 3xx page is downloaded and the
download size is known (like with a Content-Length header), but the
subsequent URL (transferred after the 3xx page) is chunked encoded, then
the previous "known download size" would linger and cause the progress
meter to get incorrect information, ie the former value would keep being
used. This could easily result in downloads that were WAY larger than
"expected" and would cause >100% outputs with the curl command line
tool.
Test case 599 was created and it was used to repeat the bug and then
verify the fix.
Bug: http://curl.haxx.se/bug/view.cgi?id=3510057
Reported by: Michael Wallner
It is now possible to calculate the md5 sum as the stream of buffers
becomes known, whereas previously it was only possible to calculate the
md5 sum of a pre-prepared buffer.
This feature allows the user to specify and use additional POP3
commands such as UIDL and DELE via libcurl's CURLOPT_CUSTOMREQUEST or
curl's -X command line option.
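A hypothetical example (host and credentials are placeholders, 'curl' an
initialized easy handle):

#include <curl/curl.h>

static void list_unique_ids(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com/");
  curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
  curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");
  /* send UIDL instead of the default LIST command */
  curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "UIDL");
}

/* curl tool equivalent: curl -u user:secret -X UIDL pop3://pop.example.com/ */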
Simplified the code to remove the need for a separate "LIST <msg id>"
command handler and state machine and instead use the LIST command
handler for both operations.
Moved the server greeting response handling code from the statemach_act
functions to separate response functions. This makes the code simpler
to follow and provides consistency with the other responses that are
handled here.
The commit e650dbde86 that stripped off [brackets] from ipv6-only host
headers for the sake of cookie parsing wrongly incremented the host
pointer which would cause a bad free() call later on.
The refactoring of HTTP CONNECT handling in commit 41b0237834 that
made it protocol independent broke it for the multi interface. This fix
now introduces better state handling and moves some logic to the
http_proxy.c source file.
Reported by: Yang Tse
Bug: http://curl.haxx.se/mail/lib-2012-03/0162.html
Changed the returned curl error codes for EHLO and HELO responses from
CURLE_LOGIN_DENIED to CURLE_REMOTE_ACCESS_DENIED as a negative response
from these commands represents no service as opposed to a login error.
An alternative would be:
1. specify HTTPS_CA_DIR and/or HTTPS_CA_FILE
2. ensure that Net::SSL is being used, and IO::Socket::SSL is NOT being
used
This question and answer explain:
http://stackoverflow.com/questions/74358/
Curl_protocol_connect() now does the tunneling through the HTTP proxy if
requested instead of letting each protocol specific connection function
do it.
Commit 466150bc64 fixed the Host: header with CONNECT, but I then
forgot the preceding request-line. Now this too uses [brackets]
properly if an IPv6 numerical address was given.
Bug: http://curl.haxx.se/bug/view.cgi?id=3493129
Reported by: "Blacat"
Set the conn->data->info.httpcode variable in smtp_statemach_act() to
allow Curl_getinfo() to return the SMTP response code via the
CURLINFO_RESPONSE_CODE action.
Curl_pop3_write() would drop the final CRLF of a message as it was
considered part of the EOB as opposed to part of the message. Whilst
the EOB sequence needs to be searched for by the function, only the
final 3 characters should be removed, as per RFC-1939 section 3.
Reported by: Rich Gray
Bug: http://curl.haxx.se/mail/lib-2012-02/0051.html
Curl_smtp_escape_eob() would leave off final CRLFs from emails ending
in multiple blank lines, additionally leaving the smtpc->eob variable
holding the character count, which would cause problems for additional
emails sent through multiple calls to curl_easy_perform() after a
CURLOPT_CONNECT_ONLY.
Fixed the use of angled brackets "<>" in the optional AUTH parameter as
per RFC-2554 section 5. The address should not include them but an
empty address should be replaced by them.
Added a new CURLOPT_MAIL_AUTH option that allows the calling program to
set the optional AUTH parameter in the MAIL FROM command.
When this option is specified and an authentication mechanism is used
to communicate with the mail server then the AUTH parameter will be
included in the MAIL FROM command. This is particularly useful when the
calling program is acting as a relay in a trusted environment and
performing server to server communication, as it allows the relaying
server to specify the address of the mailbox that was used to
authenticate and send the original email.
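A hypothetical relay scenario (addresses and credentials are
placeholders, 'curl' an initialized easy handle):

#include <curl/curl.h>

static void setup_relay(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_URL, "smtp://relay.example.com");
  curl_easy_setopt(curl, CURLOPT_USERNAME, "relayuser");
  curl_easy_setopt(curl, CURLOPT_PASSWORD, "relaypass");
  curl_easy_setopt(curl, CURLOPT_MAIL_FROM, "<sender@example.com>");
  /* identity that was originally authenticated, forwarded as
     MAIL FROM:<sender@example.com> AUTH=<original@example.com> */
  curl_easy_setopt(curl, CURLOPT_MAIL_AUTH, "<original@example.com>");
}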
Modify configure.ac to test for new CyaSSL Init function and remove
default install path to system. Change to CyaSSL OpenSSL header and
proper Init in code as well.
Note that this no longer detects or works with CyaSSL before v2.
Fixed incorrect behavior in smtp_done() which would cause the end of
block data to be sent to the SMTP server if libcurl was operating in
connect only mode. This would cause the server to return an error as
data would not be expected which in turn caused libcurl to return
CURLE_RECV_ERROR.
... by making sure that the string is always freed after the invocation,
as parse_proxy will always copy the data; this way there's a single
free() instead of multiple ones.
The proxy parser function strips trailing slashes off the proxy name,
which could lead to a mistaken zero-length proxy name that would be
treated as no proxy at all by subsequent functions!
This is now detected and an error is returned. Verified by the new test
1329.
Reported by: Chandrakant Bagul
Bug: http://curl.haxx.se/mail/lib-2012-02/0000.html
Allow an application to set libcurl-specific SSL options. The first and
only option supported right now is CURLSSLOPT_ALLOW_BEAST.
It will make libcurl disable any work-arounds the underlying SSL
library may have to address a known security flaw in the SSL3 and TLS1.0
protocol versions.
This is a reaction to us unconditionally removing that behavior after
this security advisory:
http://curl.haxx.se/docs/adv_20120124B.html
... it did however cause a lot of programs to fail because of old
servers not liking this work-around. Now programs can opt to decrease
the security in order to interoperate with old servers better.
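A hypothetical opt-in for such programs, assuming 'curl' is an
initialized easy handle:

#include <curl/curl.h>

static void allow_beast(CURL *curl)
{
  /* explicitly trade security for interoperability with old servers */
  curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS, (long)CURLSSLOPT_ALLOW_BEAST);
}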
This adds three new options to control the behavior of TCP keepalives:
- CURLOPT_TCP_KEEPALIVE: enable/disable probes
- CURLOPT_TCP_KEEPIDLE: idle time before sending first probe
- CURLOPT_TCP_KEEPINTVL: delay between successive probes
While not all operating systems support the TCP_KEEPIDLE and
TCP_KEEPINTVL knobs, the library will still allow these options to be
set by clients, silently ignoring the values.
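A hypothetical use of the three options (the values are examples only,
'curl' an initialized easy handle):

#include <curl/curl.h>

static void enable_keepalive(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_TCP_KEEPALIVE, 1L);   /* enable probes */
  curl_easy_setopt(curl, CURLOPT_TCP_KEEPIDLE, 120L);  /* seconds idle before
                                                          the first probe */
  curl_easy_setopt(curl, CURLOPT_TCP_KEEPINTVL, 60L);  /* seconds between
                                                          probes */
}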
When CURLOPT_REFERER has been used, curl_easy_reset() did not properly
clear it.
Verified with the new test 598
Bug: http://curl.haxx.se/bug/view.cgi?id=3481551
Reported by: Michael Day
When the target host was given as an IPv6 numerical address, it was not
properly put within square brackets for the Host: header in the CONNECT
request. The "normal" request did fine.
Reported by: "zooloo"
Bug: http://curl.haxx.se/bug/view.cgi?id=3482093
When connecting to a domain with multiple IP addresses, allow different,
decreasing connection timeout values. This should guarantee some
connection attempts with sufficiently long timeouts, while still
providing fallback.
With advice from Nikos Mavrogiannopoulos, changed the priority string to
add "actual priorities" and favour ARCFOUR. This makes libcurl work
better when enforcing SSLv3 with GnuTLS. Both in the sense that the
libmicrohttpd test is now working again but also that it mitigates a
weakness in the older SSL/TLS protocols.
Bug: http://curl.haxx.se/mail/lib-2012-01/0225.html
Reported by: Christian Grothoff
Protocols (IMAP, POP3 and SMTP) that use the path part of a URL in a
decoded manner now use the new Curl_urldecode() function to reject URLs
with embedded control codes (anything that is or decodes to a byte value
less than 32).
URLs containing such codes could easily otherwise be used to do harm and
allow users to do unintended actions with otherwise innocent tools and
applications. Like for example using a URL like
pop3://pop3.example.com/1%0d%0aDELE%201 when the app wants a URL to get
a mail and instead this would delete one.
This flaw is considered a security vulnerability: CVE-2012-0036
Security advisory at: http://curl.haxx.se/docs/adv_20120124.html
Reported by: Dan Fandrich
OpenSSL added a work-around for a SSL 3.0/TLS 1.0 CBC vulnerability
(http://www.openssl.org/~bodo/tls-cbc.txt). In 0.9.6e they added a bit
to SSL_OP_ALL that _disables_ that work-around despite the fact that
SSL_OP_ALL is documented to do "rather harmless" workarounds.
The libcurl code uses the SSL_OP_ALL define and thus logically always
disables the OpenSSL fix.
In order to keep the secure work-around working, the
SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS bit must not be set and this change
makes sure of this.
Reported by: product-security at Apple
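A sketch of the idea (not necessarily the exact libcurl code), for an
initialized SSL_CTX:

#include <openssl/ssl.h>

static void set_ctx_options(SSL_CTX *ctx)
{
  long ctx_options = SSL_OP_ALL;
#ifdef SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS
  /* keep the CBC countermeasure enabled by clearing the bit that would
     disable it */
  ctx_options &= ~SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS;
#endif
  SSL_CTX_set_options(ctx, ctx_options);
}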
Using a URL with embedded user name and password didn't work if the host
was given as a numerical IPv6 string, like ftp://user:password@[::1]/
Reported by: Brandon Wang
Bug: http://curl.haxx.se/mail/archive-2012-01/0047.html
Enabling the SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG option allowed
successful interoperability with the web server Netscape Enterprise
Server 2.0.1, released back in 1996, more than 15 years ago.
Due to CVE-2010-4180, option SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG has
become ineffective as of OpenSSL 0.9.8q and 1.0.0c. In order to mitigate
CVE-2010-4180 when using previous OpenSSL versions we no longer enable
this option regardless of OpenSSL version and SSL_OP_ALL definition.
Some functions using getaddrinfo and gethostbyname were still
mistakenly being used/linked even if c-ares was selected as the resolver
backend.
Reported by: Arthur Murray
Bug: http://curl.haxx.se/mail/lib-2012-01/0160.html
Previously the code would create a dummy socket while resolving just to
have curl_multi_fdset() return something but the non-win32 version
doesn't do it this way and the creation and use of a socket that isn't
made with the common create-socket callback can be confusing to apps
using the multi_socket API etc.
This change removes the dummy socket and thus will cause
curl_multi_fdset() to return with maxfd == -1 more often.
Fixed a problem in POP3 and IMAP where a connection would fail, rather
than continue, when CURLUSESSL_TRY was specified for a server that didn't
support SSL/TLS connections.
The STARTTLS response code in SMTP, POP3 and IMAP would return
CURLE_LOGIN_DENIED rather than CURLE_USE_SSL_FAILED when SSL/TLS
was not available on the server.
Reported by: Gokhan Sengun
Bug: http://curl.haxx.se/mail/lib-2012-01/0018.html
Unfortunately we have no test cases for this and I have no SSPI build or
server to verify this with. The change seems simple enough though.
Bug: http://curl.haxx.se/bug/view.cgi?id=3466497
Reported by: Patrice Guerin
When the buffer gets realloced to hold the file name in the
SSH_SFTP_READDIR_LINK state, the counter was not bumped accordingly.
Reported by: Armel Asselin
Patch by: Armel Asselin
Bug: http://curl.haxx.se/mail/lib-2011-12/0249.html
When a HTTP connection is re-used for a subsequent request without
proxy, it would always re-use the Host: header of the first request. As
host names are case insensitive, it would make curl send a different
host name casing than what the particular request used.
Now it will instead always use the most recent host name to keep the
desired casing.
Added test case 1318 to verify.
Bug: http://curl.haxx.se/mail/lib-2011-12/0314.html
Reported by: Alex Vinnik
The load host names to DNS cache function was moved to hostip.c and it
now makes sure to not add host names that already are present in the
cache. It would previously lead to memory leaks when for example using
the --resolve and multiple URLs on the command line.
The commit 9dd85bc unintentionally changed the way we compute the time
spent waiting for 100-continue. In particular, when using an SSL client
certificate, the time spent by the SSL handshake was included and could
cause the CURL_TIMEOUT_EXPECT_100 timeout to mistakenly fire.
Bug: https://bugzilla.redhat.com/767490
Reported by: Mamoru Tasaka
ftp_do_more() returns after accepting the server connect; however, it
needs to fall through and set "*complete" to TRUE before exiting the
function.
Bug: http://curl.haxx.se/mail/lib-2011-12/0250.html
Reported by: Gokhan Sengun
In the recent do_more fix the new logic was mistakenly checking the
pointer instead of what it points to.
Reported by: Gokhan Sengun
Bug: http://curl.haxx.se/mail/lib-2011-12/0250.html
When sending quote command to a SFTP server and 'mkdir' was used, it
would send fixed permissions and not use the CURLOPT_NEW_DIRECTORY_PERMS
as it should.
Reported by: Armel
Patch by: Armel
Bug: http://curl.haxx.se/mail/lib-2011-12/0249.html
CURLOPT_RESOLVE populates the DNS cache with entries that are marked as
eternally in use. Those entries need to be taken care of when the cache
is killed off.
Bug: http://curl.haxx.se/bug/view.cgi?id=3463121
Reported by: "tw84452852"
First off the timeout for accepting a server connect back must of course
respect a global timeout. Then the timeleft function is only used by ftp
code so it was moved to ftp.c and made static.
"wait_data_conn" was added to the connectionbits in commit c834213ad5 for
handling active FTP connections but as it is purely FTP specific and now
only ever accessed by ftp.c I moved it into the FTP connection struct.
Backpedaled out the funny double-change of state in the multi state
machine by adding a new argument to the do_more() function to signal
completion. This way it can remain in the DO_MORE state properly until
done. Long term, the entire DO_MORE logic should be moved into the FTP
code and be hidden from the multi code as the logic is only used for
FTP.
1- Two new error codes are introduced.
CURLE_FTP_ACCEPT_FAILED is set whenever accepting the connection from
the FTP server fails.
CURLE_FTP_ACCEPT_TIMEOUT is set whenever accepting the connection times
out.
Neither of these errors is considered fatal and the control connection
remains OK, because it could just be a firewall blocking the server from
connecting to the client.
2- One new setopt option was introduced.
CURLOPT_ACCEPTTIMEOUT_MS
It sets the maximum amount of time the FTP client is going to wait for
the server to connect back. The internal default accept timeout is 60
seconds.
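A hypothetical active-mode transfer using the new option (the URL is a
placeholder, 'curl' an initialized easy handle):

#include <curl/curl.h>

static void setup_active_ftp(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
  curl_easy_setopt(curl, CURLOPT_FTPPORT, "-");  /* active mode, default address */
  /* wait at most 10 seconds for the server to connect back */
  curl_easy_setopt(curl, CURLOPT_ACCEPTTIMEOUT_MS, 10000L);
}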
It makes it easier to introduce debug outputs in this function, and
everything in the function is using the value anyway so it might even be
more efficient.
Regression introduced in 7.23.0 with commit 9dd85bce. The function in
which the PRETRANSFER time stamp was recorded was moved in time, causing
it to be stored very quickly after the start timestamp (on most systems
less than 1 millisecond later) and thus it wouldn't even show with -w
"%{time_pretransfer}" using the command line tool.
Bug: http://curl.haxx.se/mail/archive-2011-12/0022.html
Reported by: Toni Moreno
Allow, at configure time, the production of versioned symbols. The
symbols will look like "CURL_<FLAVOUR>_<VERSION> <SYMBOL>", where
<FLAVOUR> represents the SSL flavour (e.g. OPENSSL, GNUTLS, NSS, ...),
<VERSION> is the major SONAME version and <SYMBOL> is the actual symbol
name. If no SSL library is enabled the symbols will be just
"CURL_<VERSION> <SYMBOL>".
This gets the appconnect time right for SSL backends that don't
support non-blocking connects.
Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
Do not try to resolve interface names via DNS, by recognizing interface
names in a few ways. If the interface option argument has a prefix of
"if!" then treat the argument as only an interface. Similarly, if the
interface argument is the name of an interface (even if it does not have
an IP address assigned), treat it as an interface name. Finally, if the
interface argument is prefixed by "host!" treat it as a hostname that
must be resolved by /etc/hosts or DNS.
These changes allow a client using the multi interfaces to avoid
blocking on name resolution if the interface loses its IP address or
disappears.
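Hypothetical examples of the recognized forms (the interface and host
names are placeholders, 'curl' an initialized easy handle):

#include <curl/curl.h>

static void bind_to_interface(CURL *curl)
{
  /* prefix "if!": the argument is an interface name only, never resolved */
  curl_easy_setopt(curl, CURLOPT_INTERFACE, "if!eth0");
  /* other accepted forms:
       "host!my.example.com" - a host name only, resolved via /etc/hosts or DNS
       "eth0"                - tried as an interface name first, as before */
}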
Fixed the connection reuse detection in ConnectionExists() when
comparing a new connection that is non-SSL based against that of a SSL
based connection that has become so by being upgraded via TLS.
This is a regression since who knows when. When spotting that a HTTP
proxy is used we must not unconditionally enable the HTTP protocol
since if we do tunneling through the proxy we're still using the target
protocol.
Reported by: Naveen Chandran
If no SSLv2 was detected in OpenSSL by configure, then we enforce the
OPENSSL_NO_SSL2 define as it seems some people report it not being
defined properly in the OpenSSL headers.
When a 32 digit hex key is given as a hostkey md5 checksum, the code
would still run it against the knownhost check and not properly
acknowledge that the md5 checksum should then be the sole guide.
The verbose output now includes the evaluated MD5 hostkey checksum.
Some related source code comments were also updated.
Bug: http://curl.haxx.se/bug/view.cgi?id=3451592
Reported by: Reza Arbab
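A hypothetical use of the option this affects (the fingerprint shown is a
placeholder, 'curl' an initialized easy handle):

#include <curl/curl.h>

static void pin_host_key(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_URL, "sftp://sftp.example.com/file.txt");
  /* 32 hex digits: with this fix the MD5 checksum alone decides whether
     the host key is accepted, the knownhost check no longer interferes */
  curl_easy_setopt(curl, CURLOPT_SSH_HOST_PUBLIC_KEY_MD5,
                   "aabbccddeeff00112233445566778899");
}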