                                  _   _ ____  _
                              ___| | | |  _ \| |
                             / __| | | | |_) | |
                            | (__| |_| |  _ <| |___
                             \___|\___/|_| \_\_____|

                                  TODO

Things to do in project cURL. Please tell us what you think, contribute and
send us patches that improve things! Also check the http://curl.haxx.se/dev
web section for various technical development notes.

All bugs documented in the KNOWN_BUGS document are candidates for fixing!

LIBCURL

 * Introduce an interface to libcurl that makes it easier for applications to
   find out what cookies have been received: CURLINFO_COOKIELIST to get a
   curl_slist with cookies (netscape/mozilla cookie file formatted), and
   CURLOPT_COOKIELIST to set a list of cookies (using the same format). A
   sketch follows below.
   http://curl.haxx.se/mail/lib-2004-12/0195.html

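   A minimal sketch of how such an interface could be used, assuming the
   proposed option names and that the list is returned as a curl_slist of
   netscape-format lines (error handling omitted):

     #include <stdio.h>
     #include <curl/curl.h>

     int main(void)
     {
       CURL *curl = curl_easy_init();
       struct curl_slist *cookies = NULL;
       struct curl_slist *each;

       curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
       curl_easy_setopt(curl, CURLOPT_COOKIEFILE, ""); /* enable cookie engine */
       curl_easy_perform(curl);

       /* proposed: extract all known cookies as a curl_slist */
       if(curl_easy_getinfo(curl, CURLINFO_COOKIELIST, &cookies) == CURLE_OK) {
         for(each = cookies; each; each = each->next)
           printf("%s\n", each->data);  /* netscape/mozilla cookie file line */
         curl_slist_free_all(cookies);
       }

       /* proposed: inject a cookie using the same format */
       curl_easy_setopt(curl, CURLOPT_COOKIELIST,
                        ".example.com\tTRUE\t/\tFALSE\t0\tname\tvalue");

       curl_easy_cleanup(curl);
       return 0;
     }
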
 * Introduce another callback interface for upload/download that requires one
   less copy of the data and thus makes the operation faster.
   [http://curl.haxx.se/dev/no_copy_callbacks.txt]

 * More data sharing. curl_share_* functions already exist and work, and they
   can be extended to share more. For example, enable sharing of the ares
   channel and the connection cache. (A sketch of the existing share API
   follows below.)

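   For reference, a minimal sketch of the share interface as it exists today,
   sharing cookies and DNS data between two easy handles; extending it to the
   ares channel and the connection cache would presumably add new
   CURL_LOCK_DATA_* values (locking callbacks for threaded use are omitted):

     #include <curl/curl.h>

     int main(void)
     {
       CURLSH *share = curl_share_init();
       CURL *one = curl_easy_init();
       CURL *two = curl_easy_init();

       /* share what can be shared today */
       curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);
       curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);

       /* both handles use the same share object */
       curl_easy_setopt(one, CURLOPT_SHARE, share);
       curl_easy_setopt(two, CURLOPT_SHARE, share);

       /* ... perform transfers ... */

       curl_easy_cleanup(one);
       curl_easy_cleanup(two);
       curl_share_cleanup(share);
       return 0;
     }
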
 * Introduce a new way to signal authentication problems (for example a 407
   response to a proxy CONNECT). This cannot be an error code, as we must not
   return informational results as errors; consider a new info returned by
   curl_easy_getinfo() instead. #845941

 * Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
   SIOCGIFADDR on newer Solaris versions, as they claim the latter is
   obsolete, in order to support IPv6 interface addresses properly. (A sketch
   follows below.)

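   An untested, illustration-only sketch of the newer ioctl, assuming the
   Solaris lifreq layout with a lifr_name field and a lifr_addr field of type
   struct sockaddr_storage:

     #include <string.h>
     #include <sys/types.h>
     #include <sys/socket.h>
     #include <sys/sockio.h>   /* SIOCGLIFADDR */
     #include <sys/ioctl.h>
     #include <net/if.h>       /* struct lifreq */

     /* fetch an interface address; holds either an IPv4 or IPv6 address */
     static int get_if_addr(int sockfd, const char *ifname,
                            struct sockaddr_storage *out)
     {
       struct lifreq lifr;

       memset(&lifr, 0, sizeof(lifr));
       strncpy(lifr.lifr_name, ifname, sizeof(lifr.lifr_name) - 1);

       if(ioctl(sockfd, SIOCGLIFADDR, &lifr) < 0)
         return -1;

       *out = lifr.lifr_addr;  /* sockaddr_storage, unlike ifreq's sockaddr */
       return 0;
     }
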
 * Add the following to curl_easy_getinfo(): GET_HTTP_IP, GET_FTP_IP and
   GET_FTP_DATA_IP. Return a string with the IP address used. Suggested by
   Alan. (A sketch follows below.)

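   A minimal sketch of how an application might use such an info. The name
   CURLINFO_GET_HTTP_IP is hypothetical (taken from the wording above), and
   the returned string is assumed to be owned by libcurl:

     #include <stdio.h>
     #include <curl/curl.h>

     int main(void)
     {
       CURL *curl = curl_easy_init();
       char *ip = NULL;

       curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
       curl_easy_perform(curl);

       /* hypothetical: ask which IP address the HTTP transfer actually used */
       if(curl_easy_getinfo(curl, CURLINFO_GET_HTTP_IP, &ip) == CURLE_OK && ip)
         printf("connected to %s\n", ip);

       curl_easy_cleanup(curl);
       return 0;
     }
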
LIBCURL - multi interface

 * Add a curl_multi_fdset() alternative. This would allow apps to avoid the
   FD_SETSIZE problem with select().

 * Add curl_multi_timeout() to make libcurl's ares functionality better. (A
   sketch of the select()-based loop both items relate to follows below.)

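   For context, a minimal sketch of the select()-based multi loop the two
   items above relate to: curl_multi_fdset() is limited by FD_SETSIZE, and
   the wait time is currently guessed by the application; curl_multi_timeout()
   appears here as the proposed call, error handling omitted:

     #include <sys/select.h>
     #include <curl/curl.h>

     /* drive all transfers on 'multi' until they are done */
     static void drive(CURLM *multi)
     {
       int running = 1;

       while(running) {
         fd_set rd, wr, ex;
         int maxfd = -1;
         long timeout_ms = 1000;     /* fallback guess */
         struct timeval tv;

         FD_ZERO(&rd);
         FD_ZERO(&wr);
         FD_ZERO(&ex);

         /* descriptors above FD_SETSIZE cannot be represented here, hence
            the wish for an alternative interface */
         curl_multi_fdset(multi, &rd, &wr, &ex, &maxfd);

         /* proposed: let libcurl tell us how long we may wait */
         curl_multi_timeout(multi, &timeout_ms);
         if(timeout_ms < 0)
           timeout_ms = 1000;

         tv.tv_sec = timeout_ms / 1000;
         tv.tv_usec = (timeout_ms % 1000) * 1000;

         select(maxfd + 1, &rd, &wr, &ex, &tv);

         while(curl_multi_perform(multi, &running) == CURLM_CALL_MULTI_PERFORM)
           ;
       }
     }
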
 * Make sure we don't ever loop just because non-blocking sockets return
   EWOULDBLOCK or similar. This concerns the FTP command sending, the SSL
   connection setup etc.

 * Make transfers treated more carefully. We need a way to tell libcurl we
   have data to write, as the current system expects us to upload data each
   time the socket is writable and there is no way to say that we want to
   upload data soon, just not right now, without aborting the upload. The
   opposite situation should be possible as well: to tell libcurl when we're
   ready to accept read data. Today libcurl feeds the data as soon as it is
   available for reading, no matter what.

DOCUMENTATION

 * More and better documentation.

FTP

 * Make the detection of (bad) %0d and %0a codes in FTP URL parts happen
   earlier in the process, to avoid doing a resolve and connect in vain.

 * Support GSS/Kerberos 5 for FTP file transfer. This will allow user
   authentication and file encryption. Possible libraries and example clients
   are available from MIT or Heimdal. Requested by Markus Moeller.

 * REST fix for servers not behaving well with >2GB requests. This should
   fail if the server doesn't set the file pointer to the requested index.
   The tricky (impossible?) part is to figure out whether the server did the
   right thing or not.

 * Support the most common FTP proxies. Philip Newton provided a list
   allegedly from ncftp:
   http://curl.haxx.se/mail/archive-2003-04/0126.html

 * Make CURLOPT_FTPPORT support an additional port number on the IP/if/name,
   like "blabla:[port]" or possibly even "blabla:[portfirst]-[portsecond]".

 * FTP ASCII transfers do not follow RFC959: the data is not converted to and
   from the NVT-ASCII (CRLF line ending) form accordingly. (A sketch follows
   below.)

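   A minimal sketch of the kind of conversion an ASCII-mode upload would
   need, turning native LF line endings into the CRLF form the protocol
   expects; buffer sizing, CRLF-preserving input and the download direction
   are left out:

     #include <stddef.h>

     /* convert a block of outgoing ASCII-mode data from LF to CRLF;
        'out' must have room for up to 2*inlen bytes, returns bytes written;
        input is assumed to use bare LF line endings */
     static size_t lf_to_crlf(const char *in, size_t inlen, char *out)
     {
       size_t o = 0;
       size_t i;

       for(i = 0; i < inlen; i++) {
         if(in[i] == '\n')
           out[o++] = '\r';   /* insert the carriage return RFC959 asks for */
         out[o++] = in[i];
       }
       return o;
     }
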
 * Since USERPWD always overrides the user and password specified in URLs, we
   might need another way to specify user+password for anonymous FTP logins.

 * The FTP code should get a way of returning errors that are known to leave
   the control connection alive and sound. Currently, an error returned from
   within the FTP functions does not tell whether the control connection is
   still OK to use. This causes libcurl to fail to re-use connections
   slightly too often.

HTTP

 * Pipelining. Send multiple requests before the previous one(s) have
   completed. This could possibly be implemented using the multi interface to
   queue requests and the response data.

 * When doing CONNECT to an HTTP proxy, libcurl always uses HTTP/1.0. This
   has never been reported as causing trouble to anyone, but using the HTTP
   version the user has chosen should be considered. (See the example after
   this item.)

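   For illustration, roughly what the tunnel request line looks like today
   compared to honouring a user-selected HTTP/1.1; remote.example.com is a
   placeholder and all other headers are left out:

     today:               CONNECT remote.example.com:443 HTTP/1.0
     user selected 1.1:   CONNECT remote.example.com:443 HTTP/1.1
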
TELNET

 * Reading input (to send to the remote server) on stdin is a crappy solution
   for library purposes. We need to invent a good way for the application to
   be able to provide the data to send.

 * Make the telnet support's network select() loop go away and merge the code
   into the main transfer loop. Until this is done, the multi interface won't
   work for telnet.

SSL

 * Anton Fedorov's "dumpcert" patch:
   http://curl.haxx.se/mail/lib-2004-03/0088.html

 * Evaluate/apply Gertjan van Wingerde's SSL patches:
   http://curl.haxx.se/mail/lib-2004-03/0087.html

* "Look at SSL cafile - quick traces look to me like these are done on every
|
|
request as well, when they should only be necessary once per ssl context
|
|
(or once per handle)". The major improvement we can rather easily do is to
|
|
make sure we don't create and kill a new SSL "context" for every request,
|
|
but instead make one for every connection and re-use that SSL context in
|
|
the same style connections are re-used. It will make us use slightly more
|
|
memory but it will libcurl do less creations and deletions of SSL contexts.
|
|
|
|
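   A minimal OpenSSL sketch of the intended structure: the context, and the
   CA file loaded into it, is created once per connection (or handle) and
   then re-used, rather than being rebuilt for every request; error checking
   is omitted:

     #include <openssl/ssl.h>

     /* create the context once (per connection or per handle), loading the
        CA file only at this point - not for every request */
     static SSL_CTX *setup_ssl_ctx(const char *cafile)
     {
       SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());

       SSL_CTX_load_verify_locations(ctx, cafile, NULL);
       return ctx;
     }

     /* later SSL connections keep using the same ctx instead of creating
        and killing a new one each time */
     static SSL *start_ssl(SSL_CTX *ctx, int sockfd)
     {
       SSL *ssl = SSL_new(ctx);

       SSL_set_fd(ssl, sockfd);
       SSL_connect(ssl);
       return ssl;
     }
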
 * Add an interface to libcurl that enables "session IDs" to get
   exported/imported. Cris Bailiff said: "OpenSSL has functions which can
   serialise the current SSL state to a buffer of your choice, and
   recover/reset the state from such a buffer at a later date - this is used
   by mod_ssl for apache to implement an SSL session ID cache". (A sketch of
   those OpenSSL functions follows below.)

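   A minimal sketch of the OpenSSL serialisation functions referred to,
   storing a session to a caller-provided buffer and restoring it later; how
   libcurl would expose this is the open question, and error handling is
   omitted:

     #include <stdlib.h>
     #include <openssl/ssl.h>

     /* serialise the current session; returns a malloc()ed buffer, sets *len */
     static unsigned char *export_session(SSL *ssl, int *len)
     {
       SSL_SESSION *sess = SSL_get1_session(ssl);
       unsigned char *buf, *p;

       *len = i2d_SSL_SESSION(sess, NULL);   /* ask for the needed size */
       buf = p = malloc(*len);
       i2d_SSL_SESSION(sess, &p);            /* write the DER-encoded session */
       SSL_SESSION_free(sess);
       return buf;
     }

     /* restore a previously exported session before SSL_connect() */
     static void import_session(SSL *ssl, const unsigned char *buf, int len)
     {
       const unsigned char *p = buf;
       SSL_SESSION *sess = d2i_SSL_SESSION(NULL, &p, len);

       SSL_set_session(ssl, sess);
       SSL_SESSION_free(sess);   /* SSL_set_session took its own reference */
     }
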
 * OpenSSL supports a callback for customised verification of the peer
   certificate, but this doesn't seem to be exposed in the libcurl APIs.
   Could it be? There's so much that could be done if it were! (brought up by
   Chris Clark) (A sketch of the OpenSSL side follows below.)

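   For reference, a minimal sketch of the OpenSSL callback in question; how
   an application would hand such a callback to libcurl is exactly the part
   that is not exposed today:

     #include <openssl/ssl.h>
     #include <openssl/x509.h>

     /* called by OpenSSL once per certificate in the peer's chain */
     static int verify_cb(int preverify_ok, X509_STORE_CTX *store)
     {
       X509 *cert = X509_STORE_CTX_get_current_cert(store);

       /* a real callback could inspect 'cert' and overrule the result */
       (void)cert;
       return preverify_ok;   /* keep OpenSSL's own verdict */
     }

     static void install_callback(SSL_CTX *ctx)
     {
       SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, verify_cb);
     }
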
 * Make curl's SSL layer capable of using other free SSL libraries, such as
   the Mozilla Security Services:
   http://www.mozilla.org/projects/security/pki/nss/

LDAP

 * Look over the implementation. The looping will have to "go away" from the
   lib/ldap.c source file and get moved to the main network code so that the
   multi interface and friends will work for LDAP as well.

NEW PROTOCOLS

 * TFTP - RFC1350 (protocol) and RFC3617 (URI format)

   Dan Fandrich: I wrote a tftp protocol module as part of the I-Boot
   bootloader under a BSD-style license with attribution clause
   http://download.intrinsyc.com/supported/tools/i-boot-lite/i-boot-lite-1.8/src/libs/net/tftp.c

 * RTSP - RFC2326 (protocol - very HTTP-like, also contains URL description)

 * SFTP/SCP/SSH (no RFCs for protocol nor URI/URL format). An implementation
   should most probably use an existing ssh library, such as OpenSSH.

 * RSYNC (no RFCs for protocol nor URI/URL format). An implementation should
   most probably use an existing rsync library, such as librsync.

CLIENT

 * "curl --sync http://example.com/feed[1-100].rss" or
   "curl --sync http://example.net/{index,calendar,history}.html"

   Downloads a range or set of URLs using the remote name, but only if the
   remote file is newer than the local file. A Last-Modified HTTP date header
   should also be used to set the mod date on the downloaded file.
   (idea from "Brianiac")

 * Globbing support for -d and -F, as in 'curl -d "name=foo[0-9]" URL'.
   Requested by Dane Jensen and others. This is easily scripted, though.

 * Add an option that prevents cURL from overwriting existing local files.
   When used, and there already is an existing file with the target file name
   (either -O or -o), a number should be appended (and increased if already
   existing), so that index.html becomes first index.html.1 and then
   index.html.2 etc. Jeff Pohlmeyer suggested. (A sketch follows below.)

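   A minimal sketch of the numbering logic, assuming access() is good enough
   to test for existence; next_free_name() is a hypothetical helper for
   illustration, not a patch against the curl tool:

     #include <stdio.h>
     #include <unistd.h>

     /* pick a non-clobbering name: index.html, index.html.1, index.html.2...
        writes the result into 'out' (of size 'outlen') and returns it */
     static char *next_free_name(const char *wanted, char *out, size_t outlen)
     {
       int i;

       if(access(wanted, F_OK) != 0) {
         snprintf(out, outlen, "%s", wanted);   /* original name is free */
         return out;
       }
       for(i = 1; ; i++) {
         snprintf(out, outlen, "%s.%d", wanted, i);
         if(access(out, F_OK) != 0)
           return out;                          /* first free numbered name */
       }
     }
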
* "curl ftp://site.com/*.txt"
|
|
|
|
* The client could be told to use maximum N simultaneous transfers and then
|
|
just make sure that happens. It should of course not make more than one
|
|
connection to the same remote host. This would require the client to use
|
|
the multi interface.
|
|
|
|
 * Extending the capabilities of the multipart formposting. How about leaving
   the ';type=foo' syntax as it is and adding an extra tag (headers) which
   works like this: curl -F "coolfiles=@fil1.txt;headers=@fil1.hdr" where
   fil1.hdr contains extra headers like

     Content-Type: text/plain; charset=KOI8-R
     Content-Transfer-Encoding: base64
     X-User-Comment: Please don't use browser specific HTML code

   which should override the program's reasonable defaults (text/plain,
   8bit...). (Idea brought to us by kromJx)

 * Ability to specify the classic computing suffixes on the range
   specifications. For example, to download the first 500 Kilobytes of a
   file, be able to specify the following for the -r option: "-r 0-500K", or
   for the first 2 Megabytes of a file: "-r 0-2M". (Mark Smith suggested)

 * --data-encode that URL encodes the data before posting
   http://curl.haxx.se/mail/archive-2003-11/0091.html (Kevin Roth suggested)

 * Provide a way to make options bound to a specific URL among several on the
   command line. Possibly by letting ':' separate options between URLs,
   similar to this:

     curl --data foo --url url.com : \
         --url url2.com : \
         --url url3.com --data foo3

   (More details: http://curl.haxx.se/mail/archive-2004-07/0133.html)

   The example would do a POST-GET-POST combination on a single command line.

BUILD

 * Consider extending 'roffit' to produce decent ASCII output, and use that
   instead of (g)nroff when building src/hugehelp.c

TEST SUITE

 * Make the test servers able to serve multiple running test suites, for the
   case when two users run 'make test' at once.

 * If perl wasn't found by the configure script, don't attempt to run the
   tests but explain nicely why they can't be run.

 * Extend the test suite to include more protocols. The telnet tests could
   just do ftp or http operations (for which we have test servers).

 * Make the test suite work on more platforms, such as OpenBSD and Mac OS.
   Removing the fork()s should make it even more portable.

NEXT MAJOR RELEASE

 * curl_easy_cleanup() returns void, but curl_multi_cleanup() returns a
   CURLMcode. These should be changed to be the same.

 * remove obsolete defines from curl/curl.h

 * make several functions use size_t instead of int in their APIs

 * remove the following functions from the public API:

     curl_getenv
     curl_mprintf (and variations)
     curl_strequal
     curl_strnequal

   They will instead become curlx_ alternatives, so that the curl application
   can still be built with them from source.

 * Remove support for CURLOPT_FAILONERROR as it has gotten too kludgy and
   weird internally. Let the app judge success or not for itself.