
Web crawler

A web crawler, or spider, is a computer program that automatically fetches the contents of web pages. The program then analyses each page's content, for example to index it by search terms. Search engines commonly use web crawlers.[1]
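The "analyse the content" step above can be sketched with the Python standard library: a parser that collects every link on a fetched page, which is how a crawler discovers further pages to visit. The sample page string is a hypothetical stand-in for a document retrieved over HTTP.

```python
# Minimal sketch of a crawler's analysis step: extract links from HTML
# using only the Python standard library (no third-party dependencies).
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page; these links form
    the frontier of URLs the crawler visits next."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical sample page standing in for fetched content.
page = '<html><body><a href="/about">About</a> <a href="https://example.com">Ext</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # → ['/about', 'https://example.com']
```

A real crawler would fetch each discovered URL in turn (e.g. with `urllib.request`), track visited pages to avoid loops, and respect the site's robots.txt rules.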

See also

  • HTTrack – a web crawler released in 1998

References

  1. Masanès, Julien (February 15, 2007). Web Archiving. Springer. p. 1. ISBN 978-3-540-46332-0. Retrieved April 24, 2014.
