It all starts with search engines. There are hundreds of them, but the four main ones are Google, Bing, Yahoo and Ask. For any search engine, the person searching is the customer, and providing that customer with the most relevant results from the web is the search engine's primary function. To achieve this, search engines crawl the entire world wide web and build a searchable index, ready to serve results to searchers. The engines then need to calculate relevance and ranking in order to produce the most useful results from that index. Optimizing web pages, documents and other content for ranking position has become known as Search Engine Optimization (SEO).
Crawling to Index
There are hundreds of millions of websites, so how do the search engines manage to index them all? They have a method for reaching every website, page, PDF and image: links. Bots, spiders, crawlers (call them what you will) work their way around the web by following its link structure. If you are imagining a spider web, you are not far off, although it is a rather random spider web, with stops all over the place and no true center. These interconnected links give the search engines the opportunity to index billions of documents.
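The idea of following links to discover pages can be sketched as a simple graph traversal. This is only an illustration, not how any real search engine is implemented: the "web" here is a hypothetical dictionary of pages and their outgoing links, whereas a real crawler would fetch pages over HTTP and extract links from the HTML.

```python
from collections import deque

# A tiny simulated "web": each page maps to the pages it links to.
# (Hypothetical page names for illustration only.)
web = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "post1", "post2"],
    "post1": ["blog"],
    "post2": ["blog", "post1"],
}

def crawl(start):
    """Breadth-first traversal of the link graph, the way a crawler follows links."""
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        page = queue.popleft()
        order.append(page)            # "index" this page
        for link in web.get(page, []):
            if link not in seen:      # skip pages already discovered
                seen.add(link)
                queue.append(link)
    return order

crawled = crawl("home")
```

Starting from one page, the traversal reaches every page that is linked, directly or indirectly, which is exactly why unlinked pages are invisible to crawlers.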
This link structure binds the web together in a way that allows the search engines to find every document and page (assuming they can be crawled). The next part of the process is storing the gathered information so it can be searched. To achieve this huge undertaking, the search engines run enormous data centers in cities around the world, where literally thousands of machines process information at an incredible rate. The speed and capacity of these machines, combined with the depth of the index, allow a complex algorithm to return results in a fraction of a second. So go ahead and try a search.
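A common way to make a pile of crawled documents searchable is an inverted index: a mapping from each word to the documents containing it. Here is a toy sketch under obvious assumptions (three made-up documents, naive whitespace tokenization, no ranking); real engines index billions of pages and layer sophisticated ranking on top.

```python
# Hypothetical crawled documents (id -> text).
docs = {
    "doc1": "search engines crawl the web",
    "doc2": "links bind the web together",
    "doc3": "search results come from the index",
}

# Build the inverted index: word -> set of documents containing it.
index = {}
for doc_id, text in docs.items():
    for word in text.lower().split():
        index.setdefault(word, set()).add(doc_id)

def search(query):
    """Return the documents containing every word in the query."""
    words = query.lower().split()
    results = [index.get(w, set()) for w in words]
    return set.intersection(*results) if results else set()
```

Because lookups hit the prebuilt index rather than rescanning every document, results come back almost instantly, which is the basic reason searches feel so fast despite the scale involved.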