I generally use these options when I want to spider a site using wget:
$ wget -r -l4 --spider -D unix.stackexchange.com https://unix.stackexchange.com/
This tells wget to recursively (-r) spider (--spider) up to 4 levels deep (-l4). The -D option tells wget to only follow links that fall within this domain.
Running it (wrapped in timeout 1 here just to keep the example short) will look like this:
$ timeout 1 wget -r -l4 --spider -D unix.stackexchange.com https://unix.stackexchange.com/
--2018-07-31 20:28:40-- https://unix.stackexchange.com/
Resolving unix.stackexchange.com (unix.stackexchange.com)... 151.101.65.69, 151.101.193.69, 151.101.129.69, ...
Connecting to unix.stackexchange.com (unix.stackexchange.com)|151.101.65.69|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 106032 (104K) [text/html]
Saving to: ‘unix.stackexchange.com/index.html’
unix.stackexchange.com/index.html 100%[====================================================================================================>] 103.55K --.-KB/s in 0.1s
2018-07-31 20:28:40 (1.02 MB/s) - ‘unix.stackexchange.com/index.html’ saved [106032/106032]
Loading robots.txt; please ignore errors.
--2018-07-31 20:28:40-- https://unix.stackexchange.com/robots.txt
Reusing existing connection to unix.stackexchange.com:443.
HTTP request sent, awaiting response... 200 OK
Length: 2148 (2.1K) [text/plain]
Saving to: ‘unix.stackexchange.com/robots.txt’
unix.stackexchange.com/robots.txt 100%[====================================================================================================>] 2.10K --.-KB/s in 0s
2018-07-31 20:28:40 (228 MB/s) - ‘unix.stackexchange.com/robots.txt’ saved [2148/2148]
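If the goal is to hunt for broken links rather than just walk the site, a small variation is to send wget's output to a log file with -o and then grep it afterwards. This is only a sketch: the log file name spider.log and the pattern I grep for are my own choices, not anything wget requires.
$ wget -r -l4 --spider -D unix.stackexchange.com -o spider.log https://unix.stackexchange.com/
$ grep -B 3 '404 Not Found' spider.log
The -B 3 shows a few lines of context above each 404 response, which is where wget prints the URL it was requesting.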