A crawler, also called a spider, is a program that systematically browses the World Wide Web (WWW), discovering publicly accessible content such as articles and video clips. Search engines use crawlers to build and maintain as complete an index as possible of content on the Web, including websites and newsgroups.

Crawling is distinct from searching: a search returns a small set of relevant results in just a few pages, whereas a crawl aims to traverse the Web and amass as much information as possible.
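The "systematic browsing" a crawler performs is essentially a graph traversal: start from seed URLs, fetch each page, extract its links, and queue any link not yet seen. The sketch below illustrates that breadth-first logic over a hypothetical in-memory link graph (`PAGES` is an assumption standing in for live HTTP fetching and HTML parsing, which a real crawler would do):

```python
from collections import deque

# Hypothetical in-memory "web": page URL -> list of outgoing links.
# A real crawler would fetch each URL over HTTP and parse <a href> tags.
PAGES = {
    "https://example.com/":  ["https://example.com/a", "https://example.com/b"],
    "https://example.com/a": ["https://example.com/b"],
    "https://example.com/b": ["https://example.com/"],
}

def crawl(seed):
    """Breadth-first crawl: visit every reachable page exactly once."""
    seen = {seed}             # URLs already discovered, to avoid re-crawling
    frontier = deque([seed])  # queue of URLs waiting to be processed
    order = []                # pages in the order they were crawled
    while frontier:
        url = frontier.popleft()
        order.append(url)
        for link in PAGES.get(url, []):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return order

print(crawl("https://example.com/"))
```

The `seen` set is what lets the crawl terminate even though the link graph contains cycles (page `b` links back to the seed); production crawlers add politeness delays, `robots.txt` checks, and URL normalization on top of this core loop.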