
[svn] Add more annotations to the TODO list.

This commit is contained in:
hniksic 2005-06-22 15:32:33 -07:00
parent 5379a71966
commit f00a78aa39

TODO

@@ -11,9 +11,45 @@ The items are not listed in any particular order (except that
recently-added items may tend towards the top). Not all of these
represent user-visible changes.
* Honor `Content-Disposition: XXX; filename="FILE"' when creating the
file name. If possible, try not to break `-nc' and friends when
doing that.
* Change the file name generation logic so that redirects can't dictate
file names (but redirects should still be followed). By default, file
names should be generated only from the URL the user provided. However,
with an appropriate flag, Wget will allow the remote server to specify
the file name, either through redirection (as is always the case now)
or via the increasingly popular header `Content-Disposition: XXX;
filename="FILE"'. The file name should be generated and displayed
*after* processing the server's response, not before, as it is done now.
This will allow trivial implementation of -nc, of O_EXCL when opening
the file, --html-extension will stop being a horrible hack, and so on.
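  A minimal sketch of pulling the name out of such a header (function
  name and buffer handling are illustrative, not Wget's actual code;
  quoting variants and RFC 2231 encoding are glossed over):

    #include <string.h>

    /* Extract FILE from `Content-Disposition: XXX; filename="FILE"'.
       Returns 1 and copies the name into NAME on success, 0 otherwise. */
    static int
    content_disposition_filename (const char *hdr, char *name, size_t size)
    {
      const char *p = strstr (hdr, "filename=\"");
      if (!p)
        return 0;
      p += strlen ("filename=\"");
      const char *q = strchr (p, '"');
      if (!q || (size_t) (q - p) >= size)
        return 0;
      memcpy (name, p, q - p);
      name[q - p] = '\0';
      return 1;
    }

  (A real implementation would also strip any directory components
  from FILE before using it, for safety.)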
* -O should be respected, with no exceptions. It should work in
conjunction with -N and -k. (This is hard to achieve in the current
code base.) Ancillary files, such as directory listings and such,
should be downloaded either directly to memory, or to /tmp.
* Implement digest and NTLM authorization for proxies. This is harder
than it seems because it requires some rethinking of the HTTP code.
* Rethink the interaction between recur.c (the recursive download code)
and HTTP/FTP code. Ideally, the downloading code should have a way
to retrieve a file and, optionally, to specify a list of URLs for
continuing the "recursive" download. FTP code will surely benefit
from such a restructuring because its current incarnation is way too
smart for its own good.
* Both HTTP and FTP connections should be first-class objects that can
be reused after a download is done. Currently information about both
is kept implicitly on the stack, and forgotten after each download.
* Restructure the FTP code to remove massive amounts of code duplication
and repetition. Remove all the "intelligence" and make it work as
outlined in the previous bullet.
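  One possible shape for such a first-class connection object (field
  and type names are hypothetical, not taken from the current source):

    /* A reusable connection, kept alive between downloads instead of
       being left implicitly on the stack and forgotten. */
    struct connection {
      int fd;                  /* open socket, or -1 if disconnected */
      char *host;              /* peer this connection talks to */
      int port;
      enum { PROTO_HTTP, PROTO_FTP } proto;
      int logged_in;           /* FTP: USER/PASS already performed */
      char *cwd;               /* FTP: current server directory */
    };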
* Add support for SFTP. Teach Wget about newer features of FTP servers
in general.
* Use FTP features for checking MD5 sums and implementing truly robust
downloads.
* Wget shouldn't delete rejected files that were not downloaded, but
just found on disk because of `-nc'. For example, `wget -r -nc
@@ -21,15 +57,28 @@ represent user-visible changes.
removing any of the existing HTML files.
* Be careful not to lose username/password information given for the
URL on the command line. For example,
wget -r http://username:password@server/path/ should send that
username and password to all content under /path/ (this is apparently
what browsers do).
* Don't send credentials using "Basic" authorization before the server
has a chance to tell us that it supports Digest or NTLM!
* Add a --range parameter allowing you to explicitly specify a range
of bytes to get from a file over HTTP (FTP only supports ranges
ending at the end of the file, though forcibly disconnecting from
the server at the desired endpoint would work). For example,
--range=n-m would specify inclusive range (a la the Range header),
and --range=n:m would specify exclusive range (a la Python's
slices). -c should work with --range by assuming the range is
partially downloaded on disk, and continuing from there (effectively
requesting a smaller range).
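  The inclusive form would map directly onto the HTTP Range header;
  a sketch (the helper name is hypothetical):

    #include <stdio.h>

    /* Build the request header for --range=FIRST-LAST.  With -c,
       FIRST would first be advanced past the bytes already on disk. */
    static void
    build_range_header (long first, long last, char *buf, size_t size)
    {
      snprintf (buf, size, "Range: bytes=%ld-%ld\r\n", first, last);
      /* --range=500-999 -> "Range: bytes=500-999\r\n" (500 bytes) */
    }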
* If multiple FTP URLs are specified that are on the same host, Wget should
re-use the connection rather than opening a new one for each file.
This should be easy given the above restructuring of the FTP code,
which would make the FTP connection a first-class object.
* Try to devise a scheme so that, when password is unknown, Wget asks
the user for one.
@@ -53,6 +102,7 @@ represent user-visible changes.
* --retr-symlinks should cause wget to traverse links to directories too.
* Make wget return non-zero status in more situations, like incorrect HTTP auth.
Create and document different exit statuses for different errors.
* Make -K compare X.orig to X and move the former on top of the latter if
they're the same, rather than leaving identical .orig files laying around.
@@ -60,31 +110,37 @@ represent user-visible changes.
* Make `-k' check for files that were downloaded in the past and convert links
to them in newly-downloaded documents.
* Devise a way for options to have effect on a per-URL basis. This is very
natural for some options, such as --post-data. It could be implemented
simply by having more than one struct options.
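  A sketch of the more-than-one-struct-options idea, assuming the
  existing global `opt' and the xmalloc/xstrdup helpers (the job
  structure itself is hypothetical):

    /* Each queued URL carries its own copy of the options, with any
       per-URL overrides applied on top of the global defaults. */
    struct url_job {
      char *url;
      struct options opt;        /* starts as a copy of global `opt' */
    };

    static struct url_job *
    make_job (const char *url, const char *post_data)
    {
      struct url_job *job = xmalloc (sizeof *job);
      job->url = xstrdup (url);
      job->opt = opt;                    /* copy global defaults */
      if (post_data)
        job->opt.post_data = xstrdup (post_data);  /* per-URL override */
      return job;
    }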
* Add option to clobber existing file names (no `.N' suffixes).
* Add option to only list wildcard matches without doing the download. The same
could be generalized to support something like apt's --print-uri.
* Handle MIME types correctly. There should be an option to (not)
retrieve files based on MIME types, e.g. `--accept-types=image/*'.
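  A sketch of the accept check using fnmatch (a real version would
  lower-case the type, strip parameters such as `;charset=...', and
  allow a comma-separated list of patterns):

    #include <fnmatch.h>

    /* Does the response's Content-Type pass `--accept-types=image/*'? */
    static int
    accept_mime_type (const char *content_type, const char *pattern)
    {
      return fnmatch (pattern, content_type, 0) == 0;
    }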
* Allow time-stamping by arbitrary date. For example,
wget --if-modified-after DATE URL.
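  That would amount to sending an If-Modified-Since header built from
  DATE; a sketch (a real version would pin the C locale so the day and
  month names come out in English):

    #include <stdio.h>
    #include <time.h>

    static void
    if_modified_since_header (time_t date, char *buf, size_t size)
    {
      char datestr[40];
      strftime (datestr, sizeof datestr,
                "%a, %d %b %Y %H:%M:%S GMT", gmtime (&date));
      snprintf (buf, size, "If-Modified-Since: %s\r\n", datestr);
    }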
* Allow a size limit on files (perhaps with an option to download oversize
  files up through the limit or not at all), to get more functionality than
  [u]limit.
* Make quota apply to single files, preferably so that the download of an
oversized file is not attempted at all.
* When updating an existing mirror, download to temporary files (such as .in*)
and rename the file after the download is done.
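  The rename makes the update atomic on POSIX systems; a sketch
  (names are illustrative):

    #include <stdio.h>
    #include <unistd.h>

    /* Called after the body has been written to TMPNAME in full, so
       an interrupted run never leaves a truncated FINALNAME behind. */
    static int
    commit_download (const char *tmpname, const char *finalname)
    {
      if (rename (tmpname, finalname) < 0)
        {
          unlink (tmpname);     /* don't leave the .in* file around */
          return -1;
        }
      return 0;
    }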
* Add an option to delete or move no-longer-existent files when mirroring.
* Rewrite FTP code to allow for easy addition of new commands. It
should probably be coded as a simple DFA engine.
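  A sketch of the DFA idea, with transitions driven by the first digit
  of the server's three-digit reply (states and table are illustrative):

    enum ftp_state {
      ST_USER, ST_PASS, ST_TYPE, ST_CWD, ST_RETR, ST_DONE, ST_FAIL
    };

    static const struct {
      enum ftp_state state;
      int reply_class;           /* first digit of the FTP reply */
      enum ftp_state next;
    } ftp_dfa[] = {
      { ST_USER, 3, ST_PASS },   /* 331 need password */
      { ST_USER, 2, ST_TYPE },   /* 230 logged in without password */
      { ST_PASS, 2, ST_TYPE },   /* 230 logged in */
      { ST_TYPE, 2, ST_CWD  },   /* 200 type set */
      { ST_CWD,  2, ST_RETR },   /* 250 directory changed */
      { ST_RETR, 1, ST_DONE },   /* 150 opening data connection */
    };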
* Implement uploading (--upload=FILE URL?) in FTP and HTTP. A beginning of
this is available in the form of --post-file, but it should be expanded to
be really useful.
* Make HTTP timestamping use If-Modified-Since facility.
* Add more protocols (such as news or possibly some of the streaming
protocols), implementing them in a modular fashion.
* Add a "rollback" option to have continued retrieval throw away a
configurable number of bytes at the end of a file before resuming