
I have wget set up to download pages from a site with the following parameters:

wget -r --level=400 --retry-connrefused --waitretry=2 --read-timeout=15 --timeout=15 -t 0 [ site ]

The timeout usually works, but sometimes wget hangs at

HTTP request sent, awaiting response...

and never times out, so the whole download has to be cancelled.
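One variant I'm considering, in case the blanket --timeout isn't covering every phase: setting each timeout explicitly and capping the retries so a stuck URL is eventually skipped (flags as documented in the wget manual; https://example.com stands in for the real site):

```shell
# Same recursive crawl, but with each timeout phase set explicitly
# and a finite retry count instead of -t 0 (infinite retries),
# so a single stalled URL is eventually given up on.
wget -r --level=400 \
     --retry-connrefused \
     --dns-timeout=15 \
     --connect-timeout=15 \
     --read-timeout=15 \
     --waitretry=2 \
     --tries=20 \
     https://example.com/
```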

Is there a fix for this, or some way to resume a recursive download from the last page that was fetched successfully, i.e. skipping the pages already saved to disk and carrying on from where it stopped?
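For reference, this is the kind of invocation I'm imagining, assuming -nc (--no-clobber) behaves as the manual describes for recursive retrievals, where existing local files are re-parsed for links rather than re-downloaded (https://example.com is a placeholder):

```shell
# Re-run with -nc: files already on disk are not fetched again;
# in recursive mode wget reads the local copies to find further
# links and only downloads pages it does not already have.
wget -r --level=400 -nc \
     --retry-connrefused --waitretry=2 \
     --read-timeout=15 --timeout=15 -t 0 \
     https://example.com/
```

(-N timestamping would be an alternative if the pages change and stale copies should be refreshed.)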

I'm certain this isn't a problem with my internet connection, since I can open each page in a browser even while wget is stuck on the HTTP request.
