The PROT_* set of internal defines for the protocols is no longer
used. We now use the same bits internally as we have defined in the
public header using the CURLPROTO_ prefix. This is for simplicity and
because the PROT_* prefix was already in use internally for a set of KRB4
values.
The PROTOPT_* defines were moved up to just below the struct definition
within which they are used.
The protocol handler struct got a 'flags' field for special information
and characteristics of the given protocol.
This now enables us to move central protocol information such as
CLOSEACTION and DUALCHANNEL away from single defines in a central place,
out to each protocol's definition. It also made us stop abusing the protocol
field for other info than the protocol, and we could start cleaning up
other protocol-specific things by adding flags bits to set in the
handler struct.
The "protocol" field connectdata struct was removed as well and the code
now refers directly to the conn->handler->protocol field instead. To
make things work properly, the code now always store a conn->given
pointer that points out the original handler struct so that the code can
learn details from the original protocol even if conn->handler is
modified along the way - for example when switching to go over a HTTP
proxy.
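For illustration, a simplified sketch of the handler idea (the field set
and the PROTOPT_* values here are illustrative, not copied from the
source):

  /* each protocol describes itself; flags carry its characteristics */
  #define PROTOPT_NONE        0       /* no special characteristics */
  #define PROTOPT_CLOSEACTION (1<<0)  /* needs action before socket close */
  #define PROTOPT_DUAL        (1<<1)  /* uses a second (data) connection */

  struct Curl_handler {
    const char *scheme;       /* URL scheme, e.g. "ftp" */
    /* ... function pointers for connect, do, done, disconnect ... */
    long defport;             /* default port number */
    unsigned int protocol;    /* the CURLPROTO_* bit for this protocol */
    unsigned int flags;       /* PROTOPT_* bits */
  };

A central check like CLOSEACTION then becomes a flag test, e.g.
(conn->handler->flags & PROTOPT_CLOSEACTION), instead of comparing the
protocol field against a hard-coded list.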
Instead of polluting many places with #ifdefs, we create a single place
for this function, and also check the return code properly so that a NULL
pointer returned won't cause problems.
It helps to prevent a hangup with some FTP servers in case the idle session
timeout has been exceeded. But it may also be useful for other protocols
that send any quit message on disconnect. Currently used by FTP, POP3,
IMAP and SMTP.
While changing Curl_sec_read_msg to accept an enum protection_level
instead of an int, I went ahead and fixed the usage of the associated
fields.
Some code was assuming that prot_clear == 0. Fixed those places to use the
proper value. Added assertions prior to any code that would set the
protection level.
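A sketch of the pattern (the enum members follow the prot_* naming above;
the surrounding struct and function are illustrative only):

  #include <assert.h>

  enum protection_level {
    prot_clear, prot_safe, prot_confidential, prot_private, prot_cmd
  };

  /* hypothetical stand-in for the connection's security state */
  struct sec_state {
    enum protection_level data_prot;
  };

  static void set_protection_level(struct sec_state *s,
                                   enum protection_level level)
  {
    /* assert before every assignment instead of assuming prot_clear == 0 */
    assert(level >= prot_clear && level <= prot_cmd);
    s->data_prot = level;
  }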
Some FTP servers (e.g. Pure-ftpd) end up hanging if we close the data
connection before transferring all the requested data. If we send ABOR
in that case, it prevents the server from hanging.
Bug: https://bugzilla.redhat.com/643656
Reported by: Pasi Karkkainen, Patrick Monnerat
This fixes a memory leak related to the GSS-API code.
Added krb5_init and krb5_end functions. Also removed a work-around for
the lack of proper initialization of the GSS-API context.
Curl_sec_login was returning the opposite result from what the code in
ftp.c expected. Simplified the return code (using a CURLcode) to make it
clearer what is going on.
Test 563 is enabled now and verifies that the combination of an FTP type=A
URL, CURLOPT_PORT set and a proxy works fine. As a bonus I managed to remove
the somewhat odd FTP check in parse_remote_port() and instead converted it
to a better and more generic 'slash_removed' struct field. Checking the
->protocol field isn't right since when an FTP:// URL is sent over an
HTTP proxy, the protocol is HTTP but the URL was handled by the FTP code
and thus slash_removed is set TRUE for this case.
The FTP implementation was missing a timestamp reset point, making the
waiting for responses after sending a post-transfer "QUOTE" command not
work as intended. This bug was introduced in 7.20.0.
Prefixing the FTP quote commands with an asterisk really only
worked for the postquote actions. This is now fixed and test case
227 has been extended to verify.
the global timeout if set. Also, as was reported in the bug report #2956437
by Ryan Chan, the time stamp to use as basis for the per command timeout was
not set properly in the DONE phase for FTP (and not for SMTP) so I fixed
that just now. This was a regression compared to 7.19.7 due to the
conversion of FTP code over to the generic pingpong concepts.
http://curl.haxx.se/bug/view.cgi?id=2956437
again when downloading files over FTP using ASCII and it turns out that the
final size of the file is not the same as the initial size the server
reported. This is very common since servers don't take the newline
conversions into account.
command is a special "hack" used by the drftpd server, but even though it is
a custom extension I've deemed it fine to add to libcurl since this server
seems to survive and people keep using it and want libcurl to support
it. The new libcurl option is named CURLOPT_FTP_USE_PRET, and it is also
usable from the curl tool with --ftp-pret. Using this option on a server
that doesn't support this command will make libcurl fail.
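A minimal usage sketch (the URL is a placeholder):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.bin");
      /* send PRET before PASV; only drftpd-style servers understand it
         and other servers will make the transfer fail */
      curl_easy_setopt(curl, CURLOPT_FTP_USE_PRET, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }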
meter/callback during FTP command/response sequences. It turned out it was
really lame before and now the progress meter SHOULD get called at least
once per second.
used during the FTP connection phase (after the actual TCP connect), while
it of course should be. I also made the speed check get called correctly so
that really slow servers will trigger that properly too.
QUOTE commands and the request used the same path as the connection had
already changed to, it would decide that no commands would be necessary for
the "DO" action; that case was not handled properly and libcurl would
instead hang.
CURLOPT_PREQUOTE) now accept a preceding asterisk before the command to
send when using FTP, as a sign that libcurl shall simply ignore the response
from the server instead of treating it as an error. Not treating a 400+ FTP
response code as an error means that failed commands will not abort the
chain of commands, nor will they cause the connection to get disconnected.
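For example (host and commands are placeholders):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      struct curl_slist *quote = NULL;
      /* the leading '*' makes libcurl ignore a failure response to this
         command instead of aborting the whole chain */
      quote = curl_slist_append(quote, "*DELE no-such-file");
      quote = curl_slist_append(quote, "PWD");
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/");
      curl_easy_setopt(curl, CURLOPT_QUOTE, quote);
      curl_easy_perform(curl);
      curl_slist_free_all(quote);
      curl_easy_cleanup(curl);
    }
    return 0;
  }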
to use the "standard" ENABLE_IPV6 one. Also, if the port number to connect
to cannot be figured out after a name resolve (due to the address not being
IPv4 or IPv6), that particular address will now simply be skipped.
With the curl memory tracking feature decoupled from the debug build feature,
CURLDEBUG and DEBUGBUILD preprocessor symbol definitions are used as follows:
CURLDEBUG used for curl debug memory tracking specific code (--enable-curldebug)
DEBUGBUILD used for debug enabled specific code (--enable-debug)
Chen pointed out how curl couldn't upload with resume when reading from a
pipe.
This ended up with the introduction of a new return code for the
CURLOPT_SEEKFUNCTION callback that basically says that the seek failed but
that libcurl may try to resolve the situation anyway. In our case this means
libcurl will attempt to read and discard that much data from the stream
instead of seeking, and that way curl can now upload with resume when data
is read from a stream!
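A sketch of such a callback for a non-seekable stream (the function name is
arbitrary):

  #include <curl/curl.h>

  /* for a pipe we cannot seek, but returning CURL_SEEKFUNC_CANTSEEK
     lets libcurl read and discard data up to the resume point instead
     of failing the transfer */
  static int my_seek(void *userp, curl_off_t offset, int origin)
  {
    (void)userp; (void)offset; (void)origin;
    return CURL_SEEKFUNC_CANTSEEK;
  }

  /* installed on the handle with:
     curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, my_seek); */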
how it occurs (http://curl.haxx.se/mail/lib-2009-04/0289.html). The
conclusion was that if an error is detected and Curl_done() is called for
the connection, ftp_done() could at times return another error code that
then would take precedence and that new code confused existing logic that
works for the first error code (CURLE_SEND_ERROR) only.
proxy. libcurl would then wrongly close the connection after each
request. In his case it had the weird side-effect that it killed NTLM auth
for the proxy, causing an infinite loop!
I added test case 1098 to verify this fix. The test case does however not
properly verify that the transfers are done persistently - as I couldn't
think of a clever way to achieve it right now - but you need to read the
stderr output after a test run to see that it truly did the right thing.
FTP with the multi interface: when a transfer fails, like when aborted by a
write callback, the control connection was wrongly closed and thus not
re-used properly.
This change is also an attempt to clean up the code somewhat in this area,
as now the FTP code attempts to keep (better) track of pending responses
that need to be read in ftp_done().
plain FTP connections, and it will then allow MKD to fail once and retry the
CWD afterwards. This is especially useful if you're doing many simultaneous
connections against the same server and they all have this option enabled,
as then CWD may first fail but then another connection does MKD before this
connection and thus MKD fails but trying CWD works! The numbers can
(should?) now be set with the convenience enums called
CURLFTP_CREATE_DIR and CURLFTP_CREATE_DIR_RETRY.
Tests have proven that if you're making an application that uploads a set of
files to an ftp server, you will get a noticeable gain in speed if you're
using multiple connections and this option will then be very useful.
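A usage sketch (URL is a placeholder; a real upload would also set a read
callback):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL,
                       "ftp://example.com/new/dir/file.txt");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
      /* retry CWD after a failed MKD, for parallel uploads racing to
         create the same directory */
      curl_easy_setopt(curl, CURLOPT_FTP_CREATE_MISSING_DIRS,
                       (long)CURLFTP_CREATE_DIR_RETRY);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }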
the condition in the previous request was unmet. This is typically a time
condition set with CURLOPT_TIMECONDITION and was previously not possible to
reliably figure out. From bug report #2565128
(http://curl.haxx.se/bug/view.cgi?id=2565128)
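The new readout (presumably CURLINFO_CONDITION_UNMET; the entry's opening
line is cut off here) can be queried like this sketch:

  #include <stdio.h>
  #include <curl/curl.h>

  static void report_unmet(CURL *curl)
  {
    long unmet = 0;
    /* set to 1 if the previous transfer was skipped because the
       CURLOPT_TIMECONDITION condition was not met */
    if(curl_easy_getinfo(curl, CURLINFO_CONDITION_UNMET, &unmet) == CURLE_OK
       && unmet)
      printf("document not fetched: time condition unmet\n");
  }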
version 1.1 instead of 1.0 like before. This change also introduces the new
proxy type for libcurl called 'CURLPROXY_HTTP_1_0' that then allows apps to
switch (back) to CONNECT 1.0 requests. The curl tool also got a --proxy1.0
option that works exactly like --proxy but sets CURLPROXY_HTTP_1_0.
I updated all test cases that use CONNECT: some now use --proxy1.0 and some
were updated to do CONNECT 1.1, to get both versions run.
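A minimal sketch of switching back to CONNECT 1.0 (URLs are placeholders):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:8080");
      /* fall back to CONNECT 1.0 for proxies that misbehave with 1.1 */
      curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_HTTP_1_0);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }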
clarity. This does fix one problem that causes ;type=i FTP URLs
to fail in the Turkish locale when CURLOPT_PROXY_TRANSFER_MODE is
used (test case 561).
Added tests 561 and 1092 through 1094 to test various combinations
of ;type= and ;mode= URLs that could potentially fail in the Turkish
locale.
now has an improved ability to do right when the multi interface (both
"regular" and multi_socket) is used for SCP and SFTP transfers. This should
result in (much) less busy-loop situations and thus less CPU usage with no
speed loss.
particular state for the control connection like it did before for implicit
FTPS (libcurl assumed such control connections to be encrypted while some
FTPS servers such as FileZilla assume such connections to be clear
mode). Use the CURLOPT_USE_SSL option to set your desired level.
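For instance (a sketch; CURLUSESSL_* are the level names in current
headers):

  #include <curl/curl.h>

  static void require_ssl(CURL *curl)
  {
    /* explicit FTPS on an ftp:// URL: require TLS for control and data;
       CURLUSESSL_CONTROL or CURLUSESSL_TRY select weaker levels */
    curl_easy_setopt(curl, CURLOPT_USE_SSL, (long)CURLUSESSL_ALL);
  }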
researching it, it turned out he got a 550 response back from a SIZE command
and then I came across the text in RFC3659 that says:
The presence of the 550 error response to a SIZE command MUST NOT be taken
by the client as an indication that the file cannot be transferred in the
current MODE and TYPE.
In other words: the change I did on September 30th 2008, which has been
included in the last two releases, was a regression and a bad idea. We MUST
NOT take a 550 response from SIZE as a hint that the file doesn't exist.
Changed checkprefix() to use it, and also those instances of strnequal()
that compare host names or other protocol strings defined to be independent
of case in the C locale. This should fix a few more Turkish locale
problems.
systems supporting getifaddrs(). Also fixed a problem where an IPv6
address could be chosen instead of an IPv4 one for --interface when it
involved a name lookup.
gets a 550 response back for the cases where a download (or NOBODY) is
wanted. It still allows a 550 as response if the SIZE is used as part of an
upload process (like if resuming an upload is requested and the file isn't
there before the upload). I also modified the FTP test server and a few test
cases accordingly to match this modified behavior.
remain in use as curl_off_t print formatting strings for the internal
*printf functions, which still cannot handle print formatting string
directives such as "I64d", "I64u", and others available on MSVC, MinGW,
Intel's ICC, and other DOS/Windows compilers.
This reverts the part of a previous commit which did:
FORMAT_OFF_T -> CURL_FORMAT_CURL_OFF_T
FORMAT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
the names of the curl_off_t formatting string directives now become
CURL_FORMAT_CURL_OFF_T and CURL_FORMAT_CURL_OFF_TU.
CURL_FMT_OFF_T -> CURL_FORMAT_CURL_OFF_T
CURL_FMT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
Remove the use of an internal name for the curl_off_t formatting string
directives and use the common one available from both inside and outside
the library.
FORMAT_OFF_T -> CURL_FORMAT_CURL_OFF_T
FORMAT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
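Usage then looks the same from both sides of the library, for example:

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    curl_off_t size = (curl_off_t)1 << 33;  /* a value beyond 32 bits */
    /* the public macro expands to the right directive for the
       platform, e.g. "lld" or "I64d" */
    printf("size: %" CURL_FORMAT_CURL_OFF_T "\n", size);
    return 0;
  }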
line of a multiline FTP response whose last byte landed exactly at the end
of the BUFSIZE-length buffer would be treated as the terminal response
line. The following response code read in would then actually be the
end of the previous response line, and all responses from then on would
correspond to the wrong command. Test case 1062 verifies this.
Stop closing a never-opened ftp socket.
fix for it. It occurred when you did an FTP transfer using
CURLFTPMETHOD_SINGLECWD and then did another one on the same easy handle but
switched to CURLFTPMETHOD_NOCWD, due to the "dir depth" variable not being
cleared properly. Scott's test case is now known as test 539 and it
verifies the fix.
message when libcurl doesn't get a 220 back immediately on connect, I now
changed it to be more specific on what the problem is. Also worth noticing:
while the bug report contains an example where the response is:
421 There are too many connected users, please try again later
we cannot assume that the error message will always be this readable nor
that it fits within a particular boundary etc.
the SingleRequest one to make pipelining better. It is a bit tricky to keep
them in the right place, to keep things related to the actual request or to
the actual connection in the right place.
libcurl to seek in a given input stream. This is particularly important when
doing upload resumes when there's already a huge part of the file present
remotely. Before, and still if this callback isn't used, libcurl will read
and throw away the entire file up to the point where the resuming
begins (which of course can be a slow operation depending on file size,
I/O bandwidth and more). This new function will also be preferred over
CURLOPT_IOCTLFUNCTION for seeking back in a stream when doing multi-stage
HTTP auth with POST/PUT.
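A sketch of a file-backed seek callback (using the CURL_SEEKFUNC_* return
names from later headers; originally returning 0 meant success):

  #include <stdio.h>
  #include <curl/curl.h>

  /* jump straight to the resume offset instead of reading and
     discarding data; fseeko would be needed for >2GB files */
  static int seek_cb(void *userp, curl_off_t offset, int origin)
  {
    FILE *f = (FILE *)userp;
    return fseek(f, (long)offset, origin) == 0 ?
           CURL_SEEKFUNC_OK : CURL_SEEKFUNC_FAIL;
  }

  /* hooked up with:
     curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb);
     curl_easy_setopt(curl, CURLOPT_SEEKDATA, f); */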
is an unofficial SOCKS4 variant that sends the host name to the proxy
instead of the resolved address (which is already supported by SOCKS5).
--socks4a is the curl command line option for it and CURLOPT_PROXYTYPE can
now be set to CURLPROXY_SOCKS4A as well.
is inited at the start of the DO action. I removed the Curl_transfer_keeper
struct completely, and I had to move out a few struct members (that had to
be set before DO or used after DONE) to the UrlState struct. The SingleRequest
struct is accessed with SessionHandle->req.
One of the biggest reasons for doing this was the bunch of duplicate struct
members in HandleData and Curl_transfer_keeper since it was really messy to
keep track of two variables with the same name and basically the same purpose!
was even mentioned to be bad in a comment! Should make tests 2000 and 2001
work fine.
Also, freedirs() now takes an ftp_conn struct pointer, which saves some
extra unnecessary variable assignments.
https://bugzilla.novell.com/show_bug.cgi?id=332917 about a HTTP redirect to
FTP that caused memory havoc. His work together with my efforts created two
fixes:
#1 - FTP::file was moved to struct ftp_conn, because it has to be dealt with
at connection cleanup, at which time the struct HandleData could be
used by another connection.
Also, the unused char *urlpath member is removed from struct FTP.
#2 - provide a Curl_reset_reqproto() function that frees
data->reqdata.proto.* on connection setup if needed (that is if the
SessionHandle was used by a different connection).
CURLOPT_NOBODY enabled but not CURLOPT_HEADER, libcurl wouldn't do TYPE
before it did SIZE, which made it less useful. I walked over the code and
made it do this properly, and added test case 542 to verify it.
second transfer as it didn't store and remember the "" path from the
previous transfer so it would instead CWD to the entry path as stored. This
worked, but did a superfluous command. Thus, test case 541 now also verifies
this fix.
and allow reuse by multiple protocols. Several unused error codes were
removed. In all cases, macros were added to preserve source (and binary)
compatibility with the old names. These macros are subject to removal at
a future date, but probably not before 2009. An application can be
tested to see if it is using any obsolete code by compiling it with the
CURL_NO_OLDIES macro defined.
Documented some newer error codes in libcurl-error(3)
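For example:

  /* define before including the header (or build with -DCURL_NO_OLDIES)
     to turn any use of the old, deprecated names into a compile error */
  #define CURL_NO_OLDIES
  #include <curl/curl.h>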
out that libcurl didn't deal with large responses from server commands, when
the single response was consisting of multiple lines but of a total size of
16KB or more. Dan Fandrich improved the ftp test script and provided test
case 1006 to repeat the problem, and I fixed the code to make sure this new
test case runs fine.
out that libcurl didn't deal with very long (>16K) FTP server response lines
properly. Starting now, libcurl will chop them off (thus the client app will
not get the full line) but survive and deal with them fine otherwise. Test
case 1003 was added to verify this.
(http://curl.haxx.se/bug/view.cgi?id=1776232) about libcurl calling
Curl_client_write(), passing on a const string that the caller may not
modify and yet it does (on some platforms).
(http://curl.haxx.se/bug/view.cgi?id=1776235) about ftp requests with NOBODY
on a directory would do a "SIZE (null)" request. This is now fixed and test
case 1000 was added to verify.
passed to it with curl_easy_setopt()! Previously it has always just referred
to the data, forcing the user to keep the data around until libcurl is done
with it. That is now history and libcurl will instead clone the given
strings and keep private copies.
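This makes patterns like the following safe (a sketch; the helper is
hypothetical):

  #include <stdio.h>
  #include <curl/curl.h>

  static void set_url(CURL *curl, const char *host)
  {
    char url[256];
    snprintf(url, sizeof(url), "http://%s/", host);
    /* libcurl copies the string, so this stack buffer may safely go
       out of scope after the call */
    curl_easy_setopt(curl, CURLOPT_URL, url);
  }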
(http://curl.haxx.se/bug/view.cgi?id=1757328) and submitted a patch. It turns
out we broke login to FTP servers that don't require (nor understand) PASS
after the USER command.
http://curl.haxx.se/mail/lib-2007-06/0238.html, libcurl didn't properly do
no-body requests on FTP files on re-used connections, or at least it didn't
provide the info back in the header callback properly in the subsequent
requests.
returning an error code, to allow connections to be torn down
cleanly since this function can be called AFTER an OOM situation
has already been reached.
and CURLOPT_CONNECTTIMEOUT_MS that, as their names should hint, do the
timeouts with millisecond resolution instead. The only restriction to that
is the alarm() (sometimes) used to abort name resolves as that uses full
seconds. I fixed the FTP response timeout part of the patch.
Internally we now count and keep the timeouts in milliseconds but it also
means we multiply set timeouts with 1000. The effect of this is that no
timeout can be set to more than 2^31 milliseconds (on 32 bit systems), which
equals 24.86 days. We probably couldn't before either, since the code did
*1000 on the timeout values in several places already.
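A usage sketch (URL and values are placeholders):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* sub-second resolution: 2500 ms total, 800 ms for the connect */
      curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 2500L);
      curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT_MS, 800L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }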
doing an FTP transfer is removed from a multi handle before completion. The
fix also fixed the "alive counter" to be correct on "premature removal" for
all protocols.
curl that uses the new CURLOPT_FTP_SSL_CCC option in libcurl. If enabled, it
will make libcurl shut down SSL/TLS after the authentication is done on an
FTP-SSL operation.
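In libcurl terms (a sketch; the CURLUSESSL_*/CURLFTPSSL_CCC_* names are
those found in current headers):

  #include <curl/curl.h>

  static void enable_ccc(CURL *curl)
  {
    /* use TLS for the FTP login... */
    curl_easy_setopt(curl, CURLOPT_USE_SSL, (long)CURLUSESSL_CONTROL);
    /* ...then have libcurl shut it down again once authenticated */
    curl_easy_setopt(curl, CURLOPT_FTP_SSL_CCC,
                     (long)CURLFTPSSL_CCC_PASSIVE);
  }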
get confused and not acknowledge the 'no_proxy' variable properly once it
had used the proxy and you re-used the same easy handle. I made sure the
proxy name is properly stored in the connect struct rather than the
sessionhandle/easy struct.