Summary
Information retrieval is a vital aspect of online search. The rate of data retrieval depends heavily on
the search engine, especially in academia, where online research is paramount. All search engines use a
web crawler, sometimes called a spider: an internet bot that systematically browses the internet and is
also used to update website content. Crawlers work by copying every page visited for later processing by
a search engine, which indexes the downloaded pages for efficient search. This paper brings to the fore
various search engines and their mean capacities, and proposes a model for fast data retrieval based on
ranking position using discounted cumulative gain (DCG).
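The ranking metric named in the abstract, discounted cumulative gain, scores a ranked result list by summing each result's relevance graded down by the logarithm of its rank position. A minimal sketch of the standard DCG formula (the function names and relevance values below are illustrative, not taken from the paper):

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k results:
    DCG@k = sum_i rel_i / log2(i + 1), with ranks i starting at 1."""
    return sum(rel / math.log2(i + 2)  # i is 0-based here, so rank = i + 1
               for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """Normalized DCG: DCG divided by the DCG of the ideal ordering."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Hypothetical graded relevance judgments for one ranked result list
ranking = [3, 2, 3, 0, 1, 2]
print(round(dcg_at_k(ranking, 6), 4))   # → 6.8611
print(round(ndcg_at_k(ranking, 6), 4))  # → 0.9608
```

Because the log discount penalizes relevant documents that appear low in the list, DCG directly rewards the fast-retrieval goal the paper describes: putting the most relevant pages at the top ranks.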
Index Terms
Search Engine, Network, Information Retrieval, Crawler.
How to cite this article
- Published: February 28, 2018
- Volume/Issue: Volume 2, Issue 1
- Pages: 1-7