WEB SCRAPING

Web scraping is data scraping used to extract data from websites. It is also known as web harvesting or web data extraction. In essence, specific data is gathered and copied from the web into a database or a spreadsheet for later data analysis.

Web scraping a web page involves fetching it and then extracting data from it. Fetching is downloading the page (which a browser does whenever a user views it), so web crawling is a core component of web scraping: it fetches pages for later processing. Once a page has been fetched, extraction can take place. The content of the page may be parsed, searched, and reformatted, and its data copied into a spreadsheet or loaded into a database. Web scrapers typically take something out of a page in order to use it for another purpose elsewhere. A typical example is finding and copying names and telephone numbers, companies and their URLs, or e-mail addresses into a list (contact scraping).
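For concreteness, here is a minimal sketch of that fetch-then-extract cycle in Python, using the widely used requests and BeautifulSoup libraries. The URL and the div.contact / span.name / span.phone selectors are made-up placeholders; substitute a page you are actually allowed to scrape and its real markup.

# Minimal fetch-and-extract sketch (assumed URL and markup, for illustration only)
import csv

import requests
from bs4 import BeautifulSoup

url = "https://example.com/contacts"          # hypothetical target page

# Fetch: download the raw HTML, just as a browser would.
response = requests.get(url, timeout=10)
response.raise_for_status()

# Extract: parse the HTML and pull out the pieces we care about.
soup = BeautifulSoup(response.text, "html.parser")
rows = []
for item in soup.select("div.contact"):       # assumed markup structure
    name = item.select_one("span.name")
    phone = item.select_one("span.phone")
    if name and phone:
        rows.append([name.get_text(strip=True), phone.get_text(strip=True)])

# Load: copy the extracted data into a spreadsheet-friendly CSV file.
with open("contacts.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "phone"])
    writer.writerows(rows)

The resulting contacts.csv can then be opened in a spreadsheet or loaded into a database, which is exactly the "copy into a spreadsheet or database" step described above.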

Click here to go through the example of Web Scraping in Jupyter Notebook

Click here to go through another example of Web Scraping in Jupyter Notebook 

Web scraping is used for web indexing, web mining and data mining, online price-change monitoring and price comparison, product review scraping (to watch the competition), gathering real estate listings, weather data monitoring, website change detection, research, tracking online presence and reputation, web mashups, and web data integration.

Methods to prevent web scraping (source: Wikipedia)

The administrator of a website can use various measures to stop or slow a bot. Some techniques include:

  • Blocking an IP address either manually or based on criteria such as geolocation and DNSBL (Domain Name System Blacklist). This will also block all browsing from that address.
  • Disabling any web service API that the website's system might expose.
  • Bots sometimes declare who they are (using user-agent strings) and can be blocked on that basis, for example via robots.txt; 'googlebot' is one such declared identity. Other bots do not distinguish themselves from a human using a browser.
  • Bots can be blocked by monitoring excess traffic.
  • Bots can sometimes be blocked with tools to verify that it is a real person accessing the site, like a CAPTCHA. Bots are sometimes coded to explicitly break specific CAPTCHA patterns or may employ third-party services that utilize human labor to read and respond in real-time to CAPTCHA challenges.
  • Commercial anti-bot services: Companies offer anti-bot and anti-scraping services for websites. A few web application firewalls have limited bot detection capabilities as well. However, many such solutions are not very effective.[27]
  • Locating bots with a honeypot or other method to identify the IP addresses of automated crawlers.
  • Obfuscation using CSS sprites to display such data as telephone numbers or email addresses, at the cost of accessibility to screen reader users.
  • Because bots rely on consistency in the front-end code of a target website, adding small variations to the HTML/CSS surrounding important data and navigation elements requires more human involvement when a bot is initially set up. Done effectively, this can make the target website too difficult to scrape, since the scraping process can no longer be reliably automated.
  • Websites can declare in the robots.txt file whether crawling is allowed, and can allow partial access, limit the crawl rate, specify the optimal time to crawl, and more; a well-behaved scraper checks these rules before fetching (see the sketch after this list).
  • Loading database data straight into the HTML DOM via AJAX and using DOM methods to display it. Because the data never appears in the initial page source, simple scrapers that only read that source cannot see it (though scrapers that drive a full browser still can).
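To make the robots.txt point above concrete, here is a minimal sketch using Python's standard urllib.robotparser module to check whether a page may be fetched and whether a crawl delay is requested. The site URL, page URL, and bot name are hypothetical placeholders.

# Minimal robots.txt check (standard library only; URLs and bot name are assumed)
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()                                   # fetch and parse robots.txt

user_agent = "MyScraperBot"                 # assumed bot name
page = "https://example.com/listings"       # assumed page to scrape

if rp.can_fetch(user_agent, page):
    delay = rp.crawl_delay(user_agent)      # None if no Crawl-delay directive is set
    print("Allowed to fetch; requested crawl delay:", delay)
else:
    print("robots.txt disallows fetching this page for", user_agent)

A scraper that respects these rules (and any returned crawl delay) is far less likely to be blocked by the traffic-monitoring and IP-blocking measures listed above.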
