I have wget set up to download pages from a site with the following parameters:
wget -r --level=400 --retry-connrefused --waitretry=2 --read-timeout=15 --timeout=15 -t 0 [ site ]

The timeout options work correctly some of the time, but at other times they do not: the program prints
HTTP request sent, awaiting response...
and then hangs indefinitely, so the whole download process has to be cancelled.
Is there a fix for this, or some way to resume the recursive download from the last page it fetched successfully? That is, skip the pages already saved to disk and simply carry on from where it stopped.
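From skimming the wget manual, `-nc` (`--no-clobber`) looks like it might cover the resume part: in recursive mode, files that already exist locally are not re-fetched, and saved `.html` files are parsed from disk so the crawl can continue following their links. This is a sketch of the invocation I would try, not something I have verified against the hang itself:

```shell
# Hypothetical resume attempt: -nc (--no-clobber) tells wget to skip any file
# that already exists locally; with -r, previously saved .html files are read
# from disk and parsed for links rather than downloaded again.
wget -r --level=400 -nc --retry-connrefused --waitretry=2 \
     --read-timeout=15 --timeout=15 -t 0 [ site ]
```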
I'm certain this isn't a problem with my internet connection, since I can open each page in a web browser even while wget is stuck on the HTTP request.