Web Scraping Wallpaper from Socwall

In this article, we will show you how to scrape wallpapers from Socwall by using ScrapeStorm’s “Smart Mode”.

Introduction to the scraping tool

ScrapeStorm (www.scrapestorm.com) is a new generation of Web Scraping Tool based on artificial intelligence technology. It is the first scraper to support the Windows, Mac and Linux operating systems.

Introduction to the scraping object

Socwall is a Digg-style website that offers high-resolution wallpapers, with a focus on scenery, plants and flowers. You can vote for your favorite pictures on Socwall by clicking the arrow below each picture, and you can also upload your own high-resolution wallpapers.

Official Website: https://www.socwall.com/

Scraping fields

title, title_link, image, save to favorite, by, views, tags
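To make the output shape concrete, here is a minimal sketch of what one scraped record might look like, written as a Python dataclass. The field names follow the list above; the class and types are assumptions for illustration only and are not something ScrapeStorm itself exposes.

```python
from dataclasses import dataclass

# Illustrative record shape for the fields listed above (assumed, not ScrapeStorm's API).
@dataclass
class WallpaperRecord:
    title: str             # wallpaper title shown on the list page
    title_link: str        # URL of the wallpaper's detail page
    image: str             # URL of the image file
    save_to_favorite: str  # "save to favorite" value from the list page
    by: str                # uploader, taken from the detail page
    views: str             # view count, taken from the detail page
    tags: str              # tag list from the detail page
```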

Function point directory

How to extract the list page plus the detail page

How to download images

Preview of the scraped result

Export to Excel 2007:

Export images to local:

Let’s take a closer look at how to scrape wallpaper from Socwall. The specific steps are as follows:

1. Download and install ScrapeStorm, then register and log in

(1) Open the ScrapeStorm official website, then download and install the latest version.

(2) Click Register/Login to register a new account and then log in to ScrapeStorm.

Tips: You can use this web scraping software without registering, but tasks created under the anonymous account will be lost when you switch to a registered account, so it is recommended that you register before use.

2. Create a task

(1) Copy the URL of Socwall

Click here to learn more about how to enter the URL correctly.

(2) Create a new smart mode task

You can create a new scraping task directly in the software, or you can create a task by importing rules.

Click here to learn how to import and export scraping rules.

3. Configure the scraping rules

(1) Set the fields

Smart Mode automatically recognizes the fields on the page. You can right-click a field to rename it, add or delete fields, modify data, and so on. If you only need the images, you can delete all the other fields.

Click here to learn how to configure the extracted fields.

Add or remove fields as needed and rename them. The field settings are as follows:
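For readers who want to see what this list-page extraction corresponds to under the hood, here is a rough hand-written sketch using requests and BeautifulSoup. The CSS selectors are assumptions about Socwall’s markup and will likely need adjusting; ScrapeStorm detects these fields automatically, so this is an analogy rather than the tool’s actual mechanism.

```python
import requests
from bs4 import BeautifulSoup

LIST_URL = "https://www.socwall.com/"

def scrape_list_page(url: str) -> list[dict]:
    """Fetch the list page and pull title, title_link and image for each wallpaper."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    rows = []
    for item in soup.select("div.wallpaper"):  # assumed list-item selector
        link = item.select_one("a")
        img = item.select_one("img")
        rows.append({
            "title": img.get("alt", "").strip() if img else "",
            "title_link": requests.compat.urljoin(url, link["href"]) if link else "",
            "image": requests.compat.urljoin(url, img["src"]) if img else "",
        })
    return rows

if __name__ == "__main__":
    for row in scrape_list_page(LIST_URL):
        print(row)
```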

(2) Use the “Scrape into” feature to scrape the detail page data

The list page contains only part of the data, so you can use the “Scrape into” function to enter each detail page and scrape the rest.

Click here to learn how to extract the list page plus the detail page.

On the detail page, we add the required fields: by, views and tags. A hand-written sketch of this list-page plus detail-page pattern is shown below.
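The sketch below is a hypothetical equivalent of “Scrape into”: for each list-page row, open its detail page and pull the extra fields. The selectors are assumptions about the page markup, shown only to illustrate the pattern.

```python
import requests
from bs4 import BeautifulSoup

def scrape_detail_page(detail_url: str) -> dict:
    """Fetch a wallpaper detail page and extract by, views and tags."""
    response = requests.get(detail_url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    def text_of(selector: str) -> str:
        node = soup.select_one(selector)
        return node.get_text(strip=True) if node else ""

    return {
        "by": text_of(".uploader a"),    # assumed selector
        "views": text_of(".views"),      # assumed selector
        "tags": ", ".join(t.get_text(strip=True)
                          for t in soup.select(".tags a")),  # assumed selector
    }

# Usage: merge detail-page fields into each list-page row.
# for row in rows:
#     row.update(scrape_detail_page(row["title_link"]))
```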

4. Set up and start the scraping task

(1) Running and Anti-block settings

Click “Setting” and set the waiting time based on how quickly the web page opens. You can check “Block Images” and “Block Ads”. Keep the anti-block settings at the system defaults, then click “Save”.

Click here to learn more about how to configure the scraping task.

P.S. “Block Images” reduces page load time and speeds up the scraping process, and it does not affect the scraping or downloading of images.
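If you were scraping by hand, the waiting-time and anti-block settings would roughly correspond to pausing between requests and sending a browser-like User-Agent header, as in the sketch below. The header value and delay are arbitrary examples, not ScrapeStorm’s actual defaults.

```python
import time
import requests

HEADERS = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}  # example value
WAIT_SECONDS = 2  # increase if the pages open slowly

def polite_get(url: str) -> requests.Response:
    """Fetch a page with a browser-like header, then wait before the next request."""
    response = requests.get(url, headers=HEADERS, timeout=30)
    response.raise_for_status()
    time.sleep(WAIT_SECONDS)
    return response
```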

(2) Start scraping data

Users on the Premium Plan and above can use “Scheduled Job” and “Sync to Database”. If you want to download images, check “Download images while running”, then click “Start”.

Click here to learn about scheduled job.

Click here to learn about sync to database.

Click here to learn about download images.
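For reference, a minimal hand-rolled analogue of “Download images while running” would save each image URL from the scraped rows to a local folder. The folder name and file-naming scheme below are assumptions for illustration.

```python
import os
import requests

def download_image(image_url: str, out_dir: str = "socwall_images") -> str:
    """Download one image URL into a local folder and return the saved path."""
    os.makedirs(out_dir, exist_ok=True)
    filename = image_url.rstrip("/").split("/")[-1] or "wallpaper.jpg"
    path = os.path.join(out_dir, filename)

    response = requests.get(image_url, timeout=60)
    response.raise_for_status()
    with open(path, "wb") as f:
        f.write(response.content)
    return path

# Usage:
# for row in rows:
#     download_image(row["image"])
```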

(3) Wait a moment and you will see the data being scraped.

5. Export and view data

(1) Click “Export” to download your data.

If you don’t need any data other than the images, you don’t have to export the data; you can choose “Export Later” instead.

(2) Choose the format to export according to your needs.

ScrapeStorm provides a variety of local export formats, such as Excel, CSV, HTML and TXT, as well as export to a database. Users on the Professional Plan and above can also post directly to WordPress.
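As a rough analogue of the export step, the sketch below writes the scraped rows to CSV and Excel 2007 (.xlsx) with pandas. It assumes `rows` is the list of dicts built in the earlier sketches and that openpyxl is installed for the Excel writer.

```python
import pandas as pd

def export_rows(rows: list[dict]) -> None:
    """Write the scraped rows to CSV and Excel files in the current folder."""
    df = pd.DataFrame(rows)
    df.to_csv("socwall.csv", index=False, encoding="utf-8-sig")
    df.to_excel("socwall.xlsx", index=False)  # requires openpyxl

# export_rows(rows)
```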

Click here to learn more about how to view the extraction results and clear the extracted data.

Click here to learn more about how to export the result of extraction.