Web Scraper — Lenskart Retail Store Locations

Using BeautifulSoup to scrape store data and save it to a CSV file

Bhupesh Singh Rathore | Cruio
4 min read · May 16, 2023

Web Scraping Lenskart Retail Store Locations

In this tutorial, we will explore the process of web scraping and demonstrate its application in extracting retail store location data. Specifically, we will be scraping Lenskart’s website to gather information about their physical stores. Lenskart is a popular online retailer specializing in eyewear, and they have numerous physical stores across different regions in India. By scraping their website, we can obtain valuable insights into their store locations, timings, and contact details.

Understanding Web Scraping

Web scraping is the process of automatically extracting data from websites. It involves writing code that navigates through the HTML structure of a webpage, identifies relevant data elements, and extracts the desired information. This technique is widely used in various domains, including market research, competitive analysis, and data mining.

To perform web scraping, we’ll utilize the following libraries in Python:

  • requests: A library for sending HTTP requests to the website.
  • BeautifulSoup: A powerful library for parsing HTML and extracting data from web pages.
  • os: A library for handling file operations.
  • csv: A module for reading and writing CSV files.
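As a minimal illustration of how these pieces fit together, here is a short sketch that parses a small HTML fragment with BeautifulSoup. The fragment and its class names are made up for demonstration; they are not Lenskart's actual markup:

```python
from bs4 import BeautifulSoup

# A made-up HTML fragment standing in for a downloaded store page
html = """
<div class="store">
  <a class="store-name">Lenskart Connaught Place</a>
  <a class="store-address">Block A, Connaught Place, New Delhi</a>
</div>
"""

# Parse the fragment and pull out text by CSS class
soup = BeautifulSoup(html, 'html.parser')
name = soup.find('a', class_='store-name').text
address = soup.find('a', class_='store-address').text
print(name, '|', address)
```

In a real run, the `html` string would come from `requests.get(url).content`, exactly as we do below.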

Project Setup

To get started, make sure you have the necessary libraries installed. You can easily install them using the pip package manager:

pip install requests
pip install beautifulsoup4

Once the libraries are installed, we can proceed with our project.

Scraping Lenskart Store Locations

Our goal is to scrape Lenskart’s website and extract information about their retail store locations. We will focus on specific regions in India, such as Delhi, Uttar Pradesh, Rajasthan, Maharashtra, and Karnataka.

First, we define a list of locations we are interested in:

locations = ['delhi', 'uttar pradesh', 'rajasthan', 'maharashtra', 'karnataka']

Next, we iterate over each location and perform the scraping process. We construct the URL for each location and send an HTTP GET request to retrieve the corresponding webpage. Then, we use BeautifulSoup to parse the HTML content and extract the relevant information, including store names, addresses, timings, and phone numbers.

Here’s the code snippet that performs the scraping:

import requests
from bs4 import BeautifulSoup
import os
import csv

locations = ['delhi', 'uttar pradesh', 'rajasthan', 'maharashtra', 'karnataka']

# Accumulate stores across all locations so they can be written out in one pass
stores = []
for location in locations:
    # Build the location page URL and fetch it
    url = 'https://www.lenskart.com/stores/location/' + location
    response = requests.get(url)

    soup = BeautifulSoup(response.content, 'html.parser')

    # Each store card holds the name, address, timings, and phone number
    for store in soup.find_all('div', class_='StoreCard_imgContainer__P6NMN'):
        name = store.find('a', {'class': 'StoreCard_name__mrTXJ'}).text
        address = store.find('a', {'class': 'StoreCard_storeAddress__PfC_v'}).text
        # Slice off the "Timing:" prefix and trailing character
        timings = store.find('div', {'class': 'StoreCard_storeAddress__PfC_v'}).text[7:-1]
        phone = store.find('div', {'class': 'StoreCard_wrapper__xhJ0A'}).a.text[1:]

        # Latitude and longitude are left blank for now
        stores.append([name, address, location.title(), timings, '', '', phone])
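One caveat: `find()` returns `None` when a class is missing or renamed, and Lenskart's class names look auto-generated, so they may change at any time. A hedged variant of the per-card extraction, using a hypothetical `safe_text` helper, avoids crashing on incomplete cards:

```python
from bs4 import BeautifulSoup

def safe_text(tag):
    """Return the stripped text of a tag, or '' if the tag was not found."""
    return tag.text.strip() if tag is not None else ''

# A card missing its phone wrapper, as might happen after a site redesign
card_html = '<div><a class="StoreCard_name__mrTXJ">Lenskart Jaipur</a></div>'
card = BeautifulSoup(card_html, 'html.parser')

name = safe_text(card.find('a', {'class': 'StoreCard_name__mrTXJ'}))
phone_wrapper = card.find('div', {'class': 'StoreCard_wrapper__xhJ0A'})
phone = safe_text(phone_wrapper.a) if phone_wrapper and phone_wrapper.a else ''
print(name, repr(phone))
```

With guards like these, a redesigned page yields empty fields instead of an `AttributeError` halfway through the run.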

Note that the latitude and longitude columns are left empty in the snippet above. If you have a valid Google Maps API key, you can fill them in by geocoding each store's address.
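If you do enable geocoding, one option is Google's Geocoding REST endpoint via `requests`. The sketch below separates the HTTP call from the response parsing so the parsing can be tested offline; the API key is a placeholder you must supply yourself:

```python
import requests

GEOCODE_URL = 'https://maps.googleapis.com/maps/api/geocode/json'

def parse_geocode(data):
    """Extract (lat, lng) from a Geocoding API JSON response, or ('', '') on failure."""
    if data.get('status') == 'OK' and data.get('results'):
        loc = data['results'][0]['geometry']['location']
        return loc['lat'], loc['lng']
    return '', ''

def geocode(address, api_key):
    """Look up an address; returns ('', '') if the API finds no match."""
    resp = requests.get(GEOCODE_URL, params={'address': address, 'key': api_key})
    return parse_geocode(resp.json())

# lat, lng = geocode(address, 'YOUR_API_KEY')  # requires a billing-enabled Google Cloud project
```

The returned pair can then replace the two empty strings in each `stores.append(...)` row.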

Finally, we write the scraped data to a CSV file named lenskart_stores.csv. We check if the file already exists, and if it does, we append the store information to it. Otherwise, we create a new file and write the header row along with the store data.

# Write the header only when the file is new or empty
file_has_data = os.path.exists('lenskart_stores.csv') and os.stat('lenskart_stores.csv').st_size > 0

with open('lenskart_stores.csv', mode='a', newline='') as file:
    writer = csv.writer(file)
    if not file_has_data:
        writer.writerow(['Store Name', 'Address', 'Location', 'Timings', 'Latitude', 'Longitude', 'Phone'])
    for store in stores:
        writer.writerow(store)

By running this code, you will generate a CSV file (lenskart_stores.csv) containing the scraped Lenskart store information for the specified locations.
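To sanity-check the output, you can read the file back with `csv.DictReader`. The snippet below writes a tiny sample file first so it runs on its own; the sample store row is made up:

```python
import csv

# Write a minimal sample file so the read-back can be demonstrated standalone
with open('sample_stores.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Store Name', 'Address', 'Location', 'Timings', 'Latitude', 'Longitude', 'Phone'])
    writer.writerow(['Lenskart CP', 'Connaught Place', 'Delhi', '10am-9pm', '', '', '9999999999'])

# DictReader maps each row to the header names, which makes spot-checks easy
with open('sample_stores.csv', newline='') as f:
    rows = list(csv.DictReader(f))

print(len(rows), rows[0]['Store Name'])
```

Point the same two lines at `lenskart_stores.csv` to inspect your real scrape.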

Conclusion

Web scraping is a powerful technique for extracting data from websites. In this project, we demonstrated how to scrape Lenskart’s website to gather information about their retail store locations. By leveraging libraries such as BeautifulSoup, we navigated the HTML structure of the website, extracted relevant data elements, and stored the information in a CSV file.

Web scraping can be applied to various use cases, such as market research, competitor analysis, and data collection for analysis and visualization. However, it’s essential to be mindful of the website’s terms of service and legal restrictions when scraping data.

Feel free to explore and modify the code to suit your specific requirements and scraping targets. Happy scraping!

If you add the geocoding functionality, make sure to replace the placeholder API key with your own valid Google Maps API key. Remember that Geocoding API usage may incur charges on your Google Cloud account.

Bhupesh Singh Rathore — Portfolio

Follow me on — LinkedIn | YouTube

Enjoy Data Science ’n’ Coding 😎🐍.
