Storing Scraped Data for Scalability and Continuous Processing

Arnoldchris
3 min read · Apr 14, 2024


I prefer storing data scraped by Scrapy and other tools in the JSONL format. I’ll even go out of my way and say this is the secret sauce that lets me automate my scraping process more easily and scale it.

See how I use it in this GitHub repository:

https://github.com/arnoldchrisoduor1/Scrapers_Crawlers_Spiders_Bots
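If you’re using Scrapy, its feed exports can write JSONL for you directly. Here’s a minimal sketch of what that could look like in settings.py — the output file name and the option values are placeholders, so adjust them to your own project:

```
# settings.py -- minimal sketch of a Scrapy feed export writing JSONL.
# "items.jsonl" is a placeholder path; pick whatever fits your project.
FEEDS = {
    "items.jsonl": {
        "format": "jsonlines",   # one JSON object per line
        "encoding": "utf8",
        "overwrite": False,      # keep the existing file and append on new runs
    },
}
```

On recent Scrapy versions you can get the same effect from the command line with something like `scrapy crawl myspider -o items.jl` (the spider name here is hypothetical).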

Before I get my rant on, know that this article goes hand in hand with my other articles on how to automatically convert this JSONL data to CSV and Excel formats and then send it via email to clients or to yourself.

Here are the reasons why I think the JSONL format is a game changer for storing data while scraping, or for any frequently updated data.

NOTE: I’ll mainly be comparing JSONL against the single JSON file format that most people tend to use.

Scalability:

  • Memory limitations: A single JSON file can become very large when dealing with a lot of scraped data. This can consume significant memory and potentially lead to crashes or slowdowns, especially if you’re working with limited resources.
  • Incremental processing: JSONL allows you to process data incrementally. Each line represents a separate JSON object, enabling you to parse and store data one line (record) at a time. This is more memory-efficient and avoids loading the entire dataset into memory at once.
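To make that concrete, here’s a minimal sketch of incremental processing in Python (the file name products.jsonl is a placeholder): the generator yields one parsed record at a time, so memory use stays flat no matter how large the file grows.

```
import json

def iter_records(path):
    # Stream a JSONL file one record at a time instead of loading it all.
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:                      # skip blank lines
                yield json.loads(line)    # only one record in memory at once

# Example: count records without ever holding the full dataset in memory.
total = sum(1 for _ in iter_records("products.jsonl"))
print(f"processed {total} records")
```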

Ease of appending data:

  • Simple appending: With JSONL, adding new scraped data is straightforward. You simply append new JSON objects (one per line) to the existing file. This is much simpler than rewriting or merging an entire JSON file with each new batch of data.
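In Python this is just opening the file in append mode and writing one serialized object per line — a minimal sketch, with the file name and items as placeholders:

```
import json

# Freshly scraped items to add to the dataset (placeholder data).
new_items = [
    {"name": "Widget A", "price": 9.99},
    {"name": "Widget B", "price": 4.50},
]

# "a" = append mode: no need to read, merge, and rewrite the whole file.
with open("products.jsonl", "a", encoding="utf-8") as f:
    for item in new_items:
        f.write(json.dumps(item, ensure_ascii=False) + "\n")
```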

Error handling:

  • Partial data recovery: If there’s an error during the scraping process while using JSONL, you might only lose the data from the specific line where the error occurred. In contrast, with a single JSON file, an error could corrupt the entire file, making data recovery more difficult.
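A sketch of that recovery pattern: try to parse each line, keep the good records, and note the line numbers that failed instead of discarding the whole file (again, the file name is a placeholder):

```
import json

good, bad = [], []
with open("products.jsonl", "r", encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        try:
            good.append(json.loads(line))
        except json.JSONDecodeError:
            bad.append(lineno)            # only this one line is lost

print(f"recovered {len(good)} records, {len(bad)} malformed lines: {bad}")
```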

Streaming:

  • Streaming applications: JSONL is well-suited for streaming applications where data is continuously scraped and processed in real-time. By processing data line by line, you can avoid delays incurred in loading a massive JSON file.

Other advantages:

  • Simpler parsing: In some cases, parsing individual JSON objects from a JSONL file can be easier than parsing a large nested structure within a single JSON file.
  • Easier Debugging: If there’s a malformed JSON object in your JSONL file, it’s easier to identify and remove because each object is on a separate line. Debugging a single large JSON file with a malformed object can be more challenging.
  • Version Control: Version control systems like Git can track changes to JSONL files more efficiently. Each line represents a data point, making it easier to see what has been added, removed, or modified compared to tracking changes within a single large JSON file.
  • Flexibility: JSONL allows you to store different types of data structures within the same file. Each line can hold a complete JSON object representing a product, review, or any other data point you scrape, regardless of its complexity.
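To illustrate the streaming and flexibility points together, here’s a small sketch that reads mixed record types from one JSONL file and dispatches each record to a handler as it arrives. The "type" field, the handler names, and the file name are all assumptions for illustration, not anything the format prescribes:

```
import json

def handle_product(record):
    print("product:", record.get("name"))

def handle_review(record):
    print("review:", record.get("rating"))

# Map each assumed record type to its handler.
handlers = {"product": handle_product, "review": handle_review}

with open("scraped.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Unknown types are silently ignored in this sketch.
        handlers.get(record.get("type"), lambda r: None)(record)
```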

Of course, there are other formats close to this, such as CSV and Avro, the latter normally used with tools like Apache Spark or Hadoop, but they come with trade-offs and a learning curve that really isn’t necessary for a simple scraping project.

I hope this article helped you in your program optimization journey :)


Arnoldchris

I'm a passionate computer engineering student, a Full-Stack Software Developer, and an ML Engineer with a huge interest in Finance.