Now that we have our backend API and frontend React app deployed, we can gather user data. Great. Now comes the final piece: a new lambda function to email all those users their weekly lunch match!
The matcher function should be a simple function that is invoked on a cron schedule (which Serverless supports easily). It's important that this service (and any other service our backend provides in the future) performs NO MUTATIONS of its own. This is an opinionated approach. The idea is that ALL mutations (i.e. side effects) that affect external systems (databases, emails, APIs, etc.) must be performed in the GraphQL API. This restriction, along with the restriction that all GraphQL mutations must be fully tested, gives us a very strong assurance that our system does what it is supposed to. As soon as you allow arbitrary lambda functions to send emails, mutate the db, and so on, you open up a can of worms. Let's avoid the worms.
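For reference, a weekly cron-triggered function in the Serverless Framework looks something like this. The handler path and exact schedule below are made-up placeholders, not the project's actual config:

```yaml
# serverless.yml (sketch): invoke the matcher once a week on a cron schedule.
functions:
  matcher:
    handler: src/matcher.handler      # hypothetical handler path
    events:
      - schedule: cron(0 9 ? * MON *) # example: every Monday at 09:00 UTC
```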
This also has the pleasant side effect that all services are essentially just compute tasks. They request some data from GraphQL, perform some logic, then execute some GraphQL mutation based on the result. In the case of the matcher function, it queries for all the users that are requesting a match that week, in the shape that it needs to perform its calculation. It then performs its calculation to create as many matches as possible, then tells GraphQL to match and email all those users.
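That read-compute-write shape can be sketched roughly as follows. Everything here is a hypothetical stand-in (`runMatcher`, the injected `query`/`mutate` helpers, and the GraphQL field names), not the project's actual API:

```javascript
// Rough sketch of the matcher's read-compute-write shape. All names here
// (runMatcher, usersRequestingMatch, matchUsers) are hypothetical.
async function runMatcher({ query, mutate, computeMatches }) {
  // 1. Read: fetch everyone requesting a match this week via GraphQL.
  const { usersRequestingMatch } = await query(
    '{ usersRequestingMatch { id } }'
  );

  // 2. Compute: pure logic, no side effects of its own.
  const pairs = computeMatches(usersRequestingMatch);

  // 3. Write: hand ALL side effects (db writes, emails) to one mutation.
  return mutate(
    'mutation ($pairs: [[ID!]!]!) { matchUsers(pairs: $pairs) { count } }',
    { pairs }
  );
}
```

Because the GraphQL client is injected rather than imported, the whole service can be exercised with in-memory stubs.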
By breaking the task up into separate chunks like this, we not only make it easier to reason about, but easier to test. I could easily have emailed the users from the matcher function, for example, but that would have made testing more complicated. In the world of databases we have the concept of ACID transactions, which basically means that a series of mutations should be treated as a single unit: either they all execute successfully or none do. A GraphQL mutation can also be used in this way (though not necessarily; partial failures are well supported). For example, our “matchUsers” mutation knows not to send an email to a user if the database mutation to update that user's match set has failed. This logic is all encapsulated in a single mutation, which can be well tested in isolation.
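A minimal sketch of that guard, assuming hypothetical injected `saveMatch` and `sendMatchEmail` helpers (this is not the real resolver, just the idea):

```javascript
// Sketch: the email side effect only runs if the database write succeeded,
// so a partial failure never emails users who were not actually matched.
// saveMatch and sendMatchEmail are hypothetical injected helpers.
async function matchUsers(pairs, { saveMatch, sendMatchEmail }) {
  const results = [];
  for (const pair of pairs) {
    try {
      await saveMatch(pair); // database mutation first
    } catch (err) {
      results.push({ pair, matched: false, emailed: false });
      continue; // db write failed: do NOT email this pair
    }
    await sendMatchEmail(pair); // only reached on db success
    results.push({ pair, matched: true, emailed: true });
  }
  return results;
}
```

Testing this in isolation is easy: stub `saveMatch` to fail for one pair and assert no email was attempted for it.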
I should spend some time describing the actual matching algorithm. The algorithm itself is not super important, but the development process is worth explaining. When doing the initial data modeling, if we were using a simple NoSQL store like DynamoDB, it might be tempting to just store all the user's previous matches as an embedded array on the user record, as well as all the Slack teams that user belongs to. You could definitely do this, but it would make the getUsersRequestingMatch database query much more complicated. By storing users as vertices and using matched and member edges, we can efficiently answer, for each user, “Who are all the possible users that I can match with?” We simply traverse our member edges to find all users that share a Slack team with us, and our matched edges to see which users we have already been matched with. This is natural and intuitive data modeling.
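In ArangoDB's query language, that traversal might look roughly like the AQL sketch below. The collection and edge names (`users`, `member`, `matched`) are assumptions drawn from the description above, not the project's actual schema:

```aql
// For a given user, find teammates we have not yet been matched with.
LET alreadyMatched = (
  FOR m IN 1..1 ANY @user matched RETURN m._id
)
FOR team IN 1..1 OUTBOUND @user member        // teams the user belongs to
  FOR candidate IN 1..1 INBOUND team member   // other members of those teams
    FILTER candidate._id != @user
    FILTER candidate._id NOT IN alreadyMatched
    RETURN DISTINCT candidate
```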
If we used DynamoDB, we would first have to query for all the Slack team records that we belong to, then query all the users for those teams, then remove any users found in our embedded “matched” array. Not so bad, right? Well, where it gets interesting is if we wanted to make our matcher smart by assigning a weight to each possible match based on some criteria (mutual friends, interests, etc.). Because we are using a graph database, ranking the users by mutual interests is fairly straightforward and provides real added value to the service. Without a graph database this algorithm is still achievable, but much more difficult to implement and less efficient. You always want to use the right tool for the job, and a social network recommendation app is a perfect use case for a graph database like ArangoDB.
Once we have all the users and their possible matches, we then have to actually generate the matches. I started writing an algorithm that tried to accomplish this, while maximizing the total number of matches, but soon realized that this was non-trivial. I paused, took a step back, and then did some googling. Sure enough, this is actually a tricky graph theory problem, one that Jack Edmonds solved in 1961 with his blossom algorithm. What is even sweeter is that there already existed an npm module implementing the algorithm! The npm ecosystem is wonderful. When you find yourself writing some complex code, ask yourself: “Am I reinventing the wheel here?” and “Has someone else already solved this problem?” This is actually a super important lesson to learn as a developer, especially an indie developer, as your time is limited and you should try to avoid writing as much code as possible. Code is a liability that needs to be tested, maintained, and deployed, and we want to ship only what is absolutely necessary to achieve our business goals. Use open source solutions whenever you can, and contribute back to open source whenever you can!
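To see what the problem even is, here is a deliberately naive brute-force maximum matching. This is illustrative only, and only feasible for tiny graphs; in production you would use an existing npm implementation of the blossom algorithm rather than code like this:

```javascript
// Illustrative only: brute-force maximum matching over a list of edges.
// Exponential time; the blossom algorithm solves this in polynomial time.
function maximumMatching(edges) {
  let best = [];
  const search = (remaining, chosen, used) => {
    if (chosen.length > best.length) best = chosen.slice();
    remaining.forEach(([a, b], i) => {
      if (!used.has(a) && !used.has(b)) {
        used.add(a); used.add(b);
        search(remaining.slice(i + 1), [...chosen, [a, b]], used);
        used.delete(a); used.delete(b);
      }
    });
  };
  search(edges, [], new Set());
  return best;
}

// A triangle a-b-c plus a pendant user d: the maximum matching
// pairs up all four users instead of leaving c and d unmatched.
maximumMatching([['a', 'b'], ['b', 'c'], ['a', 'c'], ['c', 'd']]);
// → [['a', 'b'], ['c', 'd']]
```

The point stands: once I recognized this as a known problem, pulling in a vetted implementation was the right call.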
Now on to the final Part 7.