Last year I picked up a project that needed some extra functionality. The project took a small portion of our product data, manually added some partner-specific data, and made it available to the partner through a RESTful API. The partner relationship was far from a done deal, and we were doing the project as a proof of concept to test whether we should move forward.
As I dug into the code, I found what at first appeared to be an ugly hack, but as I understood it better I was impressed by the clever solution and thought I’d share it.
To get started, I examined the curating tool that allowed a user to search our product catalog and then add the partner-specific information. In a traditional setup, the curating tool would push the data into a new table linking the partner-specific data with the product data, and an API exposed to the partner would query that table for a complete solution.
The changes I needed to make included extending the API schema. Examining the curating tool for insight into database access, I was surprised to find the tool pushing the data as JSON FILES to an S3 bucket instead of hitting a backend service to store the changes in a database. I assumed there must be a process that picked up the JSON files from the S3 bucket and updated the database, but I figured I’d get to that when needed. It seemed like an unnecessary step, but I knew the code had been written quickly, and maybe the author thought this was the easiest way to split tasks and move fast.
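The write path can be sketched roughly like this; the function and field names are hypothetical, since the post doesn’t show the tool’s actual code:

```python
import json

def build_partner_record(product_id, product_data, partner_data):
    """Merge catalog data with partner-specific fields into one JSON document.

    The record is keyed by product ID, so the S3 object path doubles as
    an API path later on.
    """
    record = dict(product_data)
    record["partner"] = partner_data
    key = f"product/{product_id}"
    return key, json.dumps(record)

# Uploading is then a single put (boto3 assumed; not executed here):
# boto3.client("s3").put_object(
#     Bucket="api.website.com", Key=key, Body=body,
#     ContentType="application/json")
```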
After I made the changes and verified that the JSON files had my new fields, it was time to figure out how to push the changes to the database. At this point I hadn’t seen anything about the database, so it was time to go looking for it directly.
Unfortunately, I couldn’t find any code that dealt with a database anywhere in the code base. I reached out to the original author and asked where the database code lived. He was clearly confused by my request and responded with “there is no database, it’s all static files.”
Which confused me, “then where does the REST API pull the data from?”
Then it hit me. The directory structure of the S3 bucket was all the structure we needed to simulate the resulting API. We were storing the data associated with each product in a JSON file named with the product ID, and these files were kept in a subdirectory called “product.” If someone wanted the data associated with product 45623 and could hit the S3 bucket as a website, they just had to request an endpoint like http://api.website.com/product/45623. With the S3 directory structure and files named the way they were, the interface would be almost indistinguishable from a RESTful API.
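The read side is just the mirror image of the write side: the REST-style path is the object key. A minimal sketch, using the article’s example domain as a stand-in for a real S3 website endpoint or CNAME:

```python
def url_for(resource, resource_id, host="api.website.com"):
    """Build the website-endpoint URL for a stored object.

    With static website hosting enabled, the bucket serves the key
    "<resource>/<resource_id>" directly over HTTP -- no application
    server involved.
    """
    return f"http://{host}/{resource}/{resource_id}"

# Fetching the partner data is then a plain HTTP GET, e.g.:
#   from urllib.request import urlopen
#   import json
#   data = json.load(urlopen(url_for("product", 45623)))
```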
Which is exactly what had been designed! Do more with less indeed.
Sure, there are differences between this solution and a fully RESTful API (errors such as “file not found” return HTML instead of JSON, for example), but for a lean approach to vetting a partnership, it was the ideal solution.
P.S. For anyone implementing this for their quick and dirty API solution, here are some notes:
- S3’s static website hosting supports routing rules that could facilitate a more complex API than the one described here. Please check out the documentation on routing rules.
- A default S3 bucket will give you “403 Access Denied” errors. To make the whole bucket publicly readable for serving a website, you need to enable static website hosting and attach a bucket policy that allows anonymous reads.
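For illustration, this is the shape of the standard public-read bucket policy; the bucket name here is a placeholder:

```python
import json

def public_read_policy(bucket_name):
    """Standard S3 bucket policy allowing anonymous GETs on every object."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
        }],
    }

# Write the JSON out and apply it with the AWS CLI, e.g.:
#   aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json
print(json.dumps(public_read_policy("my-bucket"), indent=2))
```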