That's because wget is invoked once for each URL. Instead, use the -i option to feed wget a list of URLs in a single invocation:
$ wget -i urls.txt --wait=30
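Here urls.txt is just a plain-text file with one URL per line, for example (hypothetical addresses):

    https://example.com/file1.tar.gz
    https://example.com/file2.tar.gz

Because all the URLs are handled by one wget process, --wait=30 takes effect and wget pauses 30 seconds between retrievals.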
From the manual:
-i file
--input-file=file
Read URLs from a local or external file. If - is specified as file,
URLs are read from the standard input. (Use ./- to read from a file
literally named -.) If this function is used, no URLs need be present
on the command line. If there are URLs both on the command line and in
an input file, those on the command lines will be the first ones to be
retrieved. If --force-html is not specified, then file should consist
of a series of URLs, one per line.
However, if you specify --force-html, the document will be regarded as
html. In that case you may have problems with relative links, which
you can solve either by adding "<base href="url">" to the documents or
by specifying --base=url on the command line.
If the file is an external one, the document will be automatically
treated as html if the Content-Type matches text/html. Furthermore,
the file's location will be implicitly used as base href if none was
specified.
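Two quick illustrations of the behaviors the manual describes (a sketch; the file names and example.com are placeholders):

    # Read URLs from standard input by passing '-' as the file
    $ grep '^https://' some-list.txt | wget -i - --wait=30

    # Treat a saved HTML page as the input; --base resolves its relative links
    $ wget --force-html --base=https://example.com/ -i saved-page.html --wait=30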