How to Create a Pinterest Clone Part III: Display and Searching

Jim Chen
Published in Geek Culture · Jul 16, 2022

Overview and Objective

In the last two blogs (part 1 and part 2), we built a web application that includes a webpage for users to upload photos and an API endpoint that tags a photo with a trained neural network, stores the photo file in MongoDB, and saves the object ID generated by MongoDB along with the tags into the TigerGraph database. In this blog, we will wrap things up by adding two functionalities: display and searching.

Section I: Implement the backend

We need two new endpoints in the backend. One will retrieve a photo file from MongoDB by an object ID; the other will return a list of photo IDs given an optional search text.

Let’s start with the first one. With the Motor module, it is straightforward to get an entry in MongoDB by its object ID.

# from bson.objectid import ObjectId
####################
# Connection to MongoDB
# client = motor.motor_asyncio.AsyncIOMotorClient("mongodb://localhost:27017")
# database = client.photo_manager
# photo_collection = database.get_collection("photos")
# Get a specific photo from MongoDB by ID
async def retrieve_photo_from_MongoDB(id: str):
    try:
        entry = await photo_collection.find_one({"_id": ObjectId(id)})
        if entry:
            return entry
        else:
            return False
    except Exception:
        return False

The function’s return value is either False or an entry in the form of a Python dictionary: {"_id": <ObjectId>, "photo": <photo file in bytes>}. We can then use this function in the API endpoint.
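Concretely, a successful lookup yields something like the following (the values here are hypothetical and truncated; the real _id is a bson ObjectId rather than a string):

```python
# Hypothetical shape of a successful MongoDB lookup (values truncated)
entry = {
    "_id": "62d2c0f1a5b4e6d7c8f90123",  # a bson ObjectId in the real response
    "photo": b"\x89PNG\r\n\x1a\n",      # raw photo bytes (truncated here)
}

assert isinstance(entry["photo"], bytes)
```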

# from fastapi.responses import Response
# import base64
@app.put("/retrievePhoto/{id}")
async def retrieve_photo(id: str):
    data = await retrieve_photo_from_MongoDB(id)
    if not data:
        return {"code": 401, "message": "Failed to get photo."}
    contents = base64.b64encode(data["photo"])
    return Response(content=contents, media_type="image/png")

One thing to notice here is that if we transmit the raw bytes of a file directly, they may contain reserved bytes for network protocols that interfere with the connection, so we encode the bytes with Base64 data encoding. The result of encoding is also a byte array, and we pass it back to the frontend without further processing because it is easier to reconstruct the file from the encoded bytes in the frontend. The media_type parameter is only a hint of the file’s true type; we could put media_type="image/jpeg" or another type instead.
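As a small standalone illustration (not part of the endpoint itself), Base64 encoding is lossless and reversible, which is why the frontend can recover the exact original bytes:

```python
import base64

# Raw photo bytes can contain values that clash with text-based transports,
# e.g. NUL bytes or CR/LF sequences; Base64 maps them to a safe ASCII alphabet.
raw = bytes([0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x00, 0xFF])
encoded = base64.b64encode(raw)    # still a bytes object, safe to transmit
decoded = base64.b64decode(encoded)

assert decoded == raw              # encoding is fully reversible
```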

Another thing to notice is that we need to add some CORS settings in order to return a media-typed response.

# from fastapi.middleware.cors import CORSMiddleware
####################
# API endpoints
# app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

Let’s move on to the other endpoint that can return a list of photo IDs for a search text, and we will write a query on TigerGraph to help with this!

You can go to the TigerGraph Cloud Portal at https://tgcloud.io/, log in your account, start your solution, and open the GraphStudio as we did last time.

Then, switch to the ‘photos’ graph on the top-left corner, click on the ‘Write Queries’ tab, and create a new query with the ‘+’ button.

In the pop-up window, enter FetchAllPhotos for the name and click CREATE.

Replace the default query content with the code below.

CREATE QUERY FetchAllPhotos(STRING text, FLOAT threshold) FOR GRAPH photos {
    STRING pattern = "%" + lower(text) + "%";
    seed = SELECT u FROM Type:u WHERE lower(u.name) LIKE pattern;
    result = SELECT v FROM seed:u - (PHOTO_HAS_TYPE:e) - Photo:v
             WHERE e.probability >= threshold;
    PRINT result;
}

Basically, it is a GSQL query that takes two parameters (the search text and a probability threshold), finds all Type vertices whose names are similar to the search text, and returns all Photo vertices connected to one of those types with probability of at least the threshold. If the parameter text is an empty string, every type matches the pattern.
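To make the matching behavior concrete, here is a rough Python analogue of the LIKE pattern (illustrative only; the real filtering happens inside TigerGraph):

```python
# "%" + lower(text) + "%" in GSQL behaves like a case-insensitive substring
# match; an empty search text therefore matches every Type name.
def matches_type(type_name: str, text: str) -> bool:
    return text.lower() in type_name.lower()

assert matches_type("T-Shirt", "shirt")   # partial, case-insensitive match
assert matches_type("Dress", "")          # empty text accepts all types
assert not matches_type("Dress", "shirt")
```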

There is much more we can accomplish with the GSQL language, such as sorting and filtering. If you are interested, please check out this doc for more examples!

Now we can save, install, and run the query with the buttons on the top bar! Feel free to test it with different inputs once your TigerGraph database is populated with some data.

With the query installed on the TigerGraph database, we are ready to complete the last API endpoint. We need a function that runs the query and returns a list of object IDs corresponding to the photos to show.

async def retrieve_all_photo_info_from_TigerGraph(text: str):
    try:
        threshold = 0.5
        results = conn.runInstalledQuery("FetchAllPhotos", params={
            "text": text,
            "threshold": threshold
        })
        ret = []
        for photo_info in results[0]["result"]:
            ret.append(photo_info["attributes"]["id"])
        return ret
    except Exception:
        return False
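For reference, the loop over results[0]["result"] assumes a response of roughly the following shape (a simplified, hypothetical pyTigerGraph payload; the field names follow the PRINT result statement and our Photo vertex attributes, and real ObjectId strings will differ):

```python
# Hypothetical (simplified) pyTigerGraph response for FetchAllPhotos
results = [{
    "result": [
        {"v_id": "1", "v_type": "Photo", "attributes": {"id": "photo-id-1"}},
        {"v_id": "2", "v_type": "Photo", "attributes": {"id": "photo-id-2"}},
    ]
}]

# Same extraction as in retrieve_all_photo_info_from_TigerGraph
ids = [info["attributes"]["id"] for info in results[0]["result"]]
assert ids == ["photo-id-1", "photo-id-2"]
```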

We empirically set a threshold of 0.5, so that if a user searches ‘shirt’, only photos classified as shirts with probability of at least 50% will be shown. We can then implement the API endpoint easily.

# from typing import Union
@app.put("/retrieveAllPhotoInfo")
async def retrieve_all_photo_info(text: Union[str, None] = None):
    if text is None:
        data = await retrieve_all_photo_info_from_TigerGraph("")
    else:
        data = await retrieve_all_photo_info_from_TigerGraph(text)
    # `data` is False on error; an empty list is a valid (empty) result.
    if data is False:
        return {"code": 401, "message": "Failed to retrieve photo."}
    else:
        return {"code": 200, "data": data, "message": "Photo fetched."}

Just notice that the search text might be missing (None), in which case we display all photos.
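The two branches in the endpoint boil down to one rule; a hypothetical helper (not part of the project code) makes the behavior explicit:

```python
from typing import Union

# A missing query parameter behaves like an empty search, which the
# GSQL pattern "%%" matches against every Type name.
def normalize_search_text(text: Union[str, None]) -> str:
    return "" if text is None else text

assert normalize_search_text(None) == ""          # no ?text=... -> show all photos
assert normalize_search_text("shirt") == "shirt"
```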

Now you’ve completed the entire backend for this project! The full code is given below.

Section II: Implement the frontend

We will add two components to the frontend: the search bar and the display section. The search bar is simple with the Quasar framework.

<div>
  <q-input
    outlined
    v-model="searchText"
    label="Search"
    debounce="500"
    class="text-body1 text-weight-bolder"
    @update:model-value="retrieveAllPhotos()"
  >
    <template v-slot:prepend>
      <q-icon name="search" />
    </template>
  </q-input>
</div>

It would look like this:

We store the input in the variable searchText and refresh the display section with the function retrieveAllPhotos (implemented later in this section) every time the input changes.

For the display section, we show all the photos stored in the libraryPhotoURLs array, which is an array of Data URLs generated from the photo files. The libraryPhotoURLs array is refreshed by the function retrieveAllPhotos. The photos’ sizes vary thanks to the function getSize(), which randomly returns one of the classes card-small, card-medium, and card-large.

<div class="library-content">
  <div
    v-for="idx in libraryPhotoURLs.length"
    :key="idx"
    :class="getSize()"
  >
    <div class="pin-modal">
      <div class="modal-head">
        <div class="save-card">Save</div>
      </div>
    </div>
    <div class="pin-photo">
      <img
        :src="libraryPhotoURLs[idx-1]"
        style="opacity: 1; max-width: 100%;"
      />
    </div>
  </div>
</div>

We use the CSS Grid Layout and the hover effects to replicate the style of Pinterest. The CSS code related to this part is given below.

The function retrieveAllPhotos makes a request to the backend to get an array of object IDs that correspond to all photos to show, resets libraryPhotoURLs to a new empty array of the same length, and calls the function retrievePhoto for each photo.

retrieveAllPhotos() {
  const api = axios.create({ baseURL: 'http://localhost:8000' });
  api.put(`/retrieveAllPhotoInfo?text=${this.searchText}`, {})
    .then((resp) => {
      // an array of object IDs
      const all_photo_info = resp.data.data;
      this.libraryPhotoURLs = (new Array(all_photo_info.length)).fill('');
      for (let i = 0; i < all_photo_info.length; i++) {
        this.retrievePhoto(all_photo_info[i], i);
      }
    })
    .catch((err) => {
      console.log(err);
    });
}

The function retrievePhoto requests the Base64-encoded photo bytes from the backend by the object ID, decodes the bytes into a buffer, reconstructs the photo file, generates the Data URL, and stores it at the indicated index of the libraryPhotoURLs array.

retrievePhoto(id: string, idx: number) {
  const api = axios.create({ baseURL: 'http://localhost:8000' });
  api.put(`/retrievePhoto/${id}`, {})
    .then((resp) => {
      // Decode the Base64 payload back into raw bytes
      const buff = Buffer.from(resp.data, 'base64');
      const blob = new Blob([buff], { type: 'image/png' });
      const photo = new File([blob], 'test', { type: 'image/png' });
      const reader = new FileReader();
      reader.onload = (e) => {
        this.libraryPhotoURLs[idx] = e.target?.result as string;
      };
      reader.readAsDataURL(photo);
    })
    .catch((err) => {
      console.log(err);
    });
}

All the frontend changes are made in frontend/src/pages/IndexPage.vue; we put the full code of this file here.

Section III: Summary and Resources

Congratulations on completing this blog series! You can run the application, find some photos of clothes, and upload them to this application. As a reminder, you can run the application by typing these in the terminal:

# Start the frontend
$ cd photo_library/frontend
$ quasar dev
# Start the backend
$ cd photo_library/backend
$ source venv/bin/activate
$ python3 main.py
# Remember to start the TigerGraph solution and the MongoDB service

Hope you get the same result as in this picture:

In this blog series, you’ve learned how to do these things:

  • Build a frontend application with Pinterest’s style
  • Build a backend server with an image classification model
  • Set up and interact with TigerGraph and MongoDB

Armed with these skills, you can start on other full-stack or machine learning projects more easily!

All the code for this project is uploaded here with instructions to run it. Feel free to join the TigerGraph Discord and the Developer Forum if you have any questions!
