wget/TODO
Hey Emacs, this is -*- outline -*- mode
This is the to-do list for Wget. There is no timetable of when we plan to
implement these features -- this is just a list of things it'd be nice to see in
Wget. Patches to implement any of these items would be gladly accepted. The
items are not listed in any particular order (except that recently-added items
may tend towards the top). Not all of these represent user-visible
changes.
* It would be nice to have a simple man page for wget that refers you to the
.info files for more information. It could be as simple as the output of wget
--help plus some boilerplate. This should stop wget re-packagers like RedHat
who include the out-of-date 1.4.5 man page in order to have one. Perhaps we
can automatically generate a man page from the .texi file like gcc does?
* Try to devise a scheme so that, when a password is required but unknown, Wget
asks the user for one.
* Limit the number of successive redirections to 20 or so.
* If -c used on a file that's already completely downloaded, don't re-download
it (unless normal --timestamping processing would cause you to do so).
* If -c used with -N, check to make sure a file hasn't changed on the server
before "continuing" to download it (preventing a bogus hybrid file).
* Take a look at
<http://info.webcrawler.com/mak/projects/robots/norobots-rfc.html>
and support the new directives.
* Generalize --html-extension to something like --mime-extensions and have it
look at mime.types/mimecap file for preferred extension. Non-HTML files with
filenames changed this way would be re-downloaded each time despite -N unless
.orig files were saved for them. Since .orig would contain the same data as
non-.orig, the latter could be just a link to the former. Another possibility
would be to implement a per-directory database called something like
.wget_url_mapping containing URLs and their corresponding filenames.
* When spanning hosts, there's no way to say that you are only interested in
files in a certain directory on _one_ of the hosts (-I and -X apply to all).
Perhaps -I and -X should take an optional hostname before the directory?
* Add an option to not encode special characters like ' ' and '~' when saving
local files. Would be good to have a mode that encodes all special characters
(as now), one that encodes none (as above), and one that only encodes a
character if it was encoded in the original URL (e.g. %20 but not %7E).
* --retr-symlinks should cause wget to traverse links to directories too.
* Make wget return non-zero status in more situations, like incorrect HTTP auth.
* Timestamps are sometimes not copied over on files retrieved by FTP.
* Make -K compare X.orig to X and move the former on top of the latter if
they're the same, rather than leaving identical .orig files laying around.
* If CGI output is saved to a file, e.g. cow.cgi?param, -k needs to change the
'?' to a "%3F" in links to that file to avoid passing part of the filename as
a parameter.
* Make `-k' convert <base href=...> too.
* Make `-k' check for files that were downloaded in the past and convert links
to them in newly-downloaded documents.
* Add option to clobber existing file names (no `.N' suffixes).
* Introduce a concept of "boolean" options. For instance, every
boolean option `--foo' would have a `--no-foo' equivalent for
turning it off. Get rid of `--foo=no' stuff. Short options would
be handled as `-x' vs. `-nx'.
* Implement "thermometer" display (not all that hard; use an
alternative show_progress() if the output goes to a terminal).
* Add option to only list wildcard matches without doing the download.
* Add case-insensitivity as an option.
* Handle MIME types correctly. There should be an option to (not)
retrieve files based on MIME types, e.g. `--accept-types=image/*'.
* Implement "persistent" retrieving. In "persistent" mode Wget should
treat most of the errors as transient.
* Allow time-stamping by arbitrary date.
* Fix Unix directory parser to allow for spaces in file names.
* Allow a size limit on files (perhaps with an option to download oversize files
up through the limit or not at all, to get more functionality than [u]limit).
* Implement breadth-first retrieval.
* Download to .in* when mirroring.
* Add an option to delete or move no-longer-existent files when mirroring.
* Implement a switch to avoid downloading multiple files (e.g. x and x.gz).
* Implement uploading (--upload URL?) in FTP and HTTP.
* Rewrite FTP code to allow for easy addition of new commands. It
should probably be coded as a simple DFA engine.
* Recognize more FTP servers (VMS).
* Make HTTP timestamping use If-Modified-Since facility.
* Implement better spider options.
* Add more protocols (e.g. gopher and news), implementing them in a
modular fashion.
* Implement a concept of "packages" a la mirror.
* Implement correct RFC1808 URL parsing.
* Implement HTTP cookies.
* Implement more HTTP/1.1 bells and whistles (ETag, Content-MD5 etc.)
* Add a "rollback" option to have --continue throw away a configurable number of
bytes at the end of a file before resuming download. Apparently, some stupid
proxies insert a "transfer interrupted" string we need to get rid of.
* When using --accept and --reject, you can end up with empty directories. Have
Wget remove any such directories at the end.