Options

Here are the options available with the wget command in Linux.

| Option | Description | Syntax |
|---|---|---|
| -V / --version | Display the version of Wget installed on your system. | $ wget -V |
| -h / --help | Print a help message listing all of Wget's command-line options. | $ wget -h |
| -o logfile | Write all messages to the specified logfile instead of standard error. | $ wget -o logfile [URL] |
| -b / --background | Go to the background immediately after startup, allowing other processes to continue. If no logfile is specified with -o, output is written to 'wget-log' by default. | $ wget -b [URL] |
| -a logfile | Append messages to the logfile instead of overwriting it. This preserves the log of previous commands, with the current log added after them. | $ wget -a logfile [URL] |
| -i file | Read URLs from a file. If '-' is specified as the file, URLs are read from standard input. If URLs appear both on the command line and in the input file, those on the command line are retrieved first. The file need not be an HTML document. | $ wget -i inputfile or $ wget -i inputfile [URL] |
| -t number / --tries=number | Set the number of retry attempts. Specify '0' or 'inf' for infinite retrying. The default is 20 tries, with the exception of fatal errors like "connection refused" or "not found" (404), which are not retried. | $ wget -t number [URL] |
| -c | Resume a partially downloaded file, provided the server supports resuming. If the server does not, the download cannot be resumed. | $ wget -c [URL] |
| -w seconds | Wait the specified number of seconds between retrievals. This helps reduce server load by spacing out requests. The time can also be given in minutes (m), hours (h), or days (d). | $ wget -w number_in_seconds [URL] |
| -r | Download recursively: follow the links found at the given URL and retrieve the linked pages and files as well. | $ wget -r [URL] |
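
To show how these options combine in practice, here is a minimal sketch; the URL, logfile, and input-file names are hypothetical placeholders you would replace with your own.

# All URLs and filenames below are placeholders for illustration.

# Start a background download with its own logfile, 5 retries,
# and a 2-second wait between retrievals:
$ wget -b -o download.log -t 5 -w 2 https://example.com/file.iso

# Resume the same download later, appending to the existing log
# (resuming works only if the server supports it):
$ wget -c -a download.log https://example.com/file.iso

# Read a list of URLs from a file, one per line:
$ wget -i urls.txt

# Or pipe the list in on standard input ('-'):
$ printf 'https://example.com/file.txt\n' | wget -i -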

Wget Command in Linux/Unix

Wget is a non-interactive network downloader: it can download files from a server even when the user is not logged on, and it can work in the background without interfering with other running processes.

  • GNU wget is a free utility for non-interactive download of files from the Web. It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies. 
     
  • wget is non-interactive, meaning that it can work in the background while the user is not logged on. This allows you to start a retrieval and disconnect from the system, letting wget finish the work. By contrast, most Web browsers require the user's constant presence, which can be a great hindrance when transferring a lot of data. 
     
  • wget can follow links in HTML and XHTML pages and create local versions of remote web sites, fully recreating the directory structure of the original site. This is sometimes referred to as recursive downloading. While doing that, wget respects the Robot Exclusion Standard (/robots.txt). wget can also be instructed to convert the links in downloaded HTML files to point to the local files for offline viewing (see the mirroring example after this list). 
     
  • wget has been designed for robustness over slow or unstable network connections; if a download fails due to a network problem, it will keep retrying until the whole file has been retrieved. If the server supports resuming, it will instruct the server to continue the download from where it left off.  
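
Putting the recursive retrieval, link conversion, and resume behavior together, a basic offline-mirroring command might look like the sketch below. The site URL is a hypothetical placeholder; -k (--convert-links) and -p (--page-requisites) are the standard wget flags for offline viewing.

# Mirror a site for offline viewing: -r follows links recursively,
# -k converts links in the saved pages to point at the local copies,
# -p also fetches the images and stylesheets each page needs,
# and -w 1 waits one second between requests to spare the server:
$ wget -r -k -p -w 1 https://example.com/

# If the mirror was interrupted, add -c to resume partially
# downloaded files where the server supports it:
$ wget -c -r -k -p https://example.com/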

Conclusion

In this article, we discussed the wget command, a handy Linux tool for downloading files from the internet without needing user interaction. It works quietly in the background, which means you can start a download and do other things while it runs. Wget can handle various types of web addresses and can even copy entire websites. It is helpful on slow or unreliable internet connections because it keeps retrying until a download succeeds, and it offers useful features like resuming interrupted downloads and setting wait times between retrievals. By learning its simple commands and options, users can efficiently manage their downloads and save time.