Key tips to test Web Crawlers


What is a crawler and how can it help?

Crawlers are Internet robots that systematically browse entire sites or specific pages in order to index and gather information from them. They probably gained their name because they navigate through a site one page at a time, following the links to other pages until all pages have been read.

Using Web Crawlers can save precious time and effort when large sets of information have to be handled, and the data they gather can also be used to optimize a page and drive more users to it. In the case of Search Engines, crawlers copy the pages they visit for later processing by the search engine, which indexes the downloaded pages so that users can search them much more quickly. There are several types of Crawlers, such as Batch and Incremental Crawlers.

Scraping Crawler: A Rising Method

Web Scraping, also known as Web Data Extraction, refers to an application that processes the HTML of a Web page to extract data for manipulation, such as converting the page to another format or saving it to a local database. Examples could be scraping the result of a football match or the available sizes of a pair of shoes.
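To make the idea concrete, here is a minimal sketch in Python that pulls a football result out of a page's HTML using only the standard library. The markup, class names, and the score itself are invented for the example; a real scraper would first download the page and adapt the parser to the site's actual structure.

```python
from html.parser import HTMLParser

# Invented markup; a real scraper would download this HTML from the target page.
HTML = """
<html><body>
  <div class="match">
    <span class="home">Home FC</span>
    <span class="score">2 - 1</span>
    <span class="away">Away United</span>
  </div>
</body></html>
"""

class ScoreParser(HTMLParser):
    """Collects the text of every <span class="score"> element."""
    def __init__(self):
        super().__init__()
        self._in_score = False
        self.scores = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "score") in attrs:
            self._in_score = True

    def handle_endtag(self, tag):
        if tag == "span":
            self._in_score = False

    def handle_data(self, data):
        if self._in_score and data.strip():
            self.scores.append(data.strip())

parser = ScoreParser()
parser.feed(HTML)
print(parser.scores)  # ['2 - 1']
```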

How can Crawlers capture information?

There are several techniques used for Scraping. They range from a human being manually copying and pasting information from a website to fully automated software tools. Here are some of these techniques:

  • HTML Parsers: This method is used when the targeted website uses templates to display information. The Crawler extracts the information into a relational form using template-specific extraction rules known as “Wrappers” (a minimal sketch of this approach follows the list).
  • Web-Scraping Software: Uses software tools that attempt to recognize the data structure of the website and capture information. This can also be done with scripts written to target a specific site.
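The sketch below illustrates the “wrapper” idea on an invented templated listing: every repeated block of markup is mapped to one relational row (one dict per product). It assumes the third-party beautifulsoup4 package, and a real wrapper would be generated for, or tuned to, the target site's template.

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# Invented templated listing; each <li class="item"> follows the same structure.
HTML = """
<ul>
  <li class="item"><span class="name">Runner X</span><span class="size">42</span><span class="price">79.90</span></li>
  <li class="item"><span class="name">Trail Y</span><span class="size">43</span><span class="price">95.00</span></li>
</ul>
"""

def wrap(html):
    """Map each templated block to a relational row (a dict of fields)."""
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for item in soup.select("li.item"):
        rows.append({
            "name": item.select_one(".name").get_text(strip=True),
            "size": int(item.select_one(".size").get_text(strip=True)),
            "price": float(item.select_one(".price").get_text(strip=True)),
        })
    return rows

print(wrap(HTML))
# [{'name': 'Runner X', 'size': 42, 'price': 79.9},
#  {'name': 'Trail Y', 'size': 43, 'price': 95.0}]
```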

Implications of Using a Crawler

When you’re writing a Web crawler, it’s important to understand that your crawler is using other people’s resources. Whenever you download a file from somebody else’s server, you’re consuming their bandwidth and server resources. Most site operators welcome well-behaved crawlers because those crawlers provide exposure, which means potentially more visitors. Ill-behaved crawlers are not welcome and are often banned.

Good practices include rotating user agents from a pool of well-known browser user agents and using download delays so that the server is not hit with many requests at once, but all of these methods should be used openly and responsibly.
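As an example of how such good practices are usually configured, here is a sketch of Scrapy project settings that add a download delay, respect robots.txt, and enable throttling; the user-agent string and the concrete values are only illustrative. Rotating user agents is not shown because in Scrapy it typically requires a custom downloader middleware.

```python
# settings.py of a Scrapy project -- illustrative values only.

# Identify the bot honestly so site operators know who is crawling them
# (the contact URL below is hypothetical).
USER_AGENT = "example-crawler/1.0 (+https://example.com/bot-info)"

# Respect the site's robots.txt rules.
ROBOTSTXT_OBEY = True

# Wait between requests instead of hitting the server many times at once.
DOWNLOAD_DELAY = 2.0            # seconds between requests to the same site
CONCURRENT_REQUESTS_PER_DOMAIN = 1

# Let Scrapy adapt the delay to the server's observed response times.
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1.0
AUTOTHROTTLE_MAX_DELAY = 10.0
```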

Crawler Testing

Depending on the techniques used, different paths can be followed for testing. The most basic form of testing consists of having someone check that the raw information captured by the crawler matches the information on the source website. There are also software tools that provide a complete testing platform, displaying the captured information and the original website side by side so that they are easy to read and compare.
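Even that basic check can be partly automated. The sketch below uses made-up record IDs and field names: it compares the crawler's stored records against values freshly extracted from the source pages and reports any field that no longer matches.

```python
def compare_records(crawled, fresh):
    """Report fields where the crawled copy differs from the source extraction.

    Both arguments are dicts keyed by record id, mapping to dicts of fields;
    the ids and field names below are made up for illustration.
    """
    mismatches = []
    for record_id, fresh_fields in fresh.items():
        crawled_fields = crawled.get(record_id)
        if crawled_fields is None:
            mismatches.append((record_id, "missing from crawl", None, None))
            continue
        for field, expected in fresh_fields.items():
            actual = crawled_fields.get(field)
            if actual != expected:
                mismatches.append((record_id, field, expected, actual))
    return mismatches

# Toy example: the price has changed at the source, so it is reported.
crawled = {"shoe-42": {"size": 42, "price": 79.90}}
fresh   = {"shoe-42": {"size": 42, "price": 84.90}}
for mismatch in compare_records(crawled, fresh):
    print("MISMATCH:", mismatch)
```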

A common approach to testing crawlers is to create a checklist summarizing key tests that should be performed on every crawler, together with hints for troubleshooting (a simplified code sketch of some of these checks follows the list):

➔    Test the entire crawl depth: Confirm that captured data is structured correctly at every level. If there is any problem, check the filters on the target folders. If nothing is returned, check the authentication settings in the associated Data Source and Crawler Web Service.

➔    Check the metadata: Is it stored in the appropriate properties? Does it match the metadata in the source? If there are problems, check the Data Type settings in the Crawler editor, and check the mappings for each associated Data Type.

➔    Click through to crawled documents from each crawled directory: If there are problems, check the gateway settings in the Crawler Web Service editor.

➔    Test refreshing information: Confirm that refreshed data reflects modifications at the source. If there are problems, make sure you are providing the correct signature. Keep in mind that sometimes data is updated very frequently at the source and the crawler does not correctly detect those changes.

➔    Check logs after every crawl: Logs can reveal problems even when they report a successful crawl, or can help you find the source of an issue if the crawl failed.
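Several of the checklist items above can be wired into an automated smoke test. The sketch below assumes the crawl output has already been loaded into a list of dicts with hypothetical url, depth, metadata, and last_crawled fields; the field names and the one-day freshness threshold are invented, and the product-specific settings mentioned in the checklist (Data Sources, Crawler Web Service, gateways) would still be reviewed in the crawler's own administration tools.

```python
from datetime import datetime, timedelta, timezone

def check_crawl(records, expected_depth, source_metadata):
    """Run a few checklist-style checks over crawled records.

    `records` is a list of dicts with hypothetical keys 'url', 'depth',
    'metadata', and 'last_crawled'; `source_metadata` maps url -> the
    metadata expected from the source. Returns human-readable problems.
    """
    problems = []

    # 1. Crawl depth: every level up to expected_depth should be represented.
    seen_depths = {r["depth"] for r in records}
    missing = set(range(expected_depth + 1)) - seen_depths
    if missing:
        problems.append(f"no documents captured at depth(s) {sorted(missing)}")

    # 2. Metadata: each record's metadata should match the source.
    for r in records:
        expected = source_metadata.get(r["url"], {})
        for key, value in expected.items():
            if r["metadata"].get(key) != value:
                problems.append(f"{r['url']}: metadata '{key}' does not match source")

    # 3. Refresh: every record should have been re-crawled recently.
    stale_cutoff = datetime.now(timezone.utc) - timedelta(days=1)
    for r in records:
        if r["last_crawled"] < stale_cutoff:
            problems.append(f"{r['url']}: not refreshed since {r['last_crawled']:%Y-%m-%d}")

    # 4. Logs are best scanned separately: warnings can appear even when
    #    the crawl is reported as successful.
    return problems
```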

Conclusion

Big Data’s momentum makes it easy to foresee immense growth in the importance of web crawling in the coming years. Nonetheless, we shouldn’t forget that there are still challenges in using and testing these sophisticated computer programs.

We hope to have shed some light on this subject. We’ll gladly answer any questions based on our experience with crawlers.

 
