Node.js app with AWS (Prisma, Apollo, Lambda)

Nazarii Marko
Aug 4, 2022 · 11 min read


Good time of day, fellow techie! How’s life? I’ll take the liberty of assuming it’s not going well, considering you are searching for a way to put 3 technologies from the title together. Rough spot, I’ve been there. But fear not! I’ll share my experience and hopefully someone will find it useful (or at least entertaining).

So let’s start at the beginning. I’ve been doing AWS stuff for some time now and gathered a few DOs and DONTs. And by some twisted coincidence (I’d say lucky, but it’s up to you, friend) I happened to watch “Fear and Loathing in Las Vegas (1998)” a few hours ago (not really a few hours, more like half a year now. I started this article back in February, but the funniest thing happened. Fucking ivans decided to perform a special soil fertilisation operation in my country) and I’m still under its impression. Thus, the article is gonzo! By the way, did you know the word “gonzo” in South Boston Irish slang stands for the last man standing after an all-night drinking marathon? Fascinating! You’re asking what’s fascinating about this fact? I’ll answer your question! Fascinating how you will feel that you’re the last person in this world who needs to deploy an API to fucking Lambda! All righty, now that we’re through with the intro shit, let’s get to the point. And, please, don’t let me get carried away again.

And to address the most obvious question (which is “what’s with the profanity?”), I’d like to stress that I believe in freedom of speech and expression (not for ruskies, fuck them), so I’d love to see more technical articles that convey the emotions and feelings of the author at the time of writing or exploring the technology. I’d read the docs if I didn’t want any emotions, wouldn’t I? Consider this article a first step to achieving my life-long dream. And there’s only 8 fucks (oops, 9 now) and 5 shits (yup, 6), so you’ll get over it.

Iteration #1: Initial setup

For the sake of demonstration, I’ll start with something basic and add complexity iteratively. Neither of us wants to get overwhelmed, right? First, let’s set up a basic Apollo+Prisma application. I’ll use yarn for dependency management, but you can pick whichever package manager you prefer, it’s not like I give two shits.

$ yarn init --yes
$ yarn add apollo-server-express apollo-server-core express graphql @prisma/client
$ yarn add -D prisma typescript ts-node @types/node
$ yarn prisma init
$ yarn tsc --init

So far so good, let’s create a /src directory (you didn’t expect me to skip this step, did you?), connect the DB and start our server up:

./docker-compose.yml

version: "3"
services:
  postgres:
    image: postgres:14.0
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
    ports:
      - "5432:5432"
    volumes:
      - postgres:/var/lib/postgresql/data
volumes:
  postgres:

$ docker compose up -d to start up the DB.

./.env

DATABASE_URL="postgresql://postgres:postgres@localhost:5432/mydb?schema=public"

A moment of revelation! Let’s create our first DB model. What do you prefer? Is it Post, Book, User or something else? Show me your wild OSINT skills, let me know by email. Anyways, I’ll stick with books because I’d like to.

./prisma/schema.prisma

generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model Book {
  id     String @id @default(cuid())
  title  String
  author String
}

$ yarn prisma migrate dev to migrate the schema & generate types. You can also provide a name parameter if you’d like to. How? RTFM. Now to our code. Apollo Server has an option to define context, which is passed into every single resolver. How cool is that? Let’s define this bad boy:

./src/context.ts

import { PrismaClient } from "@prisma/client";
import { Request, Response } from "express";
import { Context } from "./typings";

export const createContext = ({
  req,
  res,
}: {
  req: Request;
  res: Response;
}): Context => {
  const prisma = new PrismaClient();
  return {
    prisma,
    req, // just in case 🙂
    res, // and this one as well
  };
};
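Btw, you probably want to define the Context type somewhere. The exact shape is up to you; here’s a minimal sketch of ./src/typings.ts that matches what createContext returns (my guess, not gospel):

./src/typings.ts

import { PrismaClient } from "@prisma/client";
import { Request, Response } from "express";

// whatever every single resolver should have access to
export interface Context {
  prisma: PrismaClient;
  req: Request;
  res: Response;
}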

And let’s finally put everything together (Did you expect me to split the app for you? Not gonna happen, everything goes straight into index.ts):

./src/index.ts

import { ApolloServer } from "apollo-server-express";
import { ApolloServerPluginDrainHttpServer, gql } from "apollo-server-core";
import * as express from "express";
import * as http from "http";
import { createContext } from "./context";
import { Context } from "./typings";
import { Prisma } from "@prisma/client";

(async () => {
  const typeDefs = gql`
    type Book {
      id: ID!
      title: String
      author: String
    }
    input BookWhereUniqueInput {
      id: ID!
    }
    input BookCreateInput {
      title: String!
      author: String!
    }
    type Query {
      books: [Book]
      book(where: BookWhereUniqueInput!): Book
    }
    type Mutation {
      createBook(data: BookCreateInput): Book
    }
  `;

  const resolvers = {
    Query: {
      books: (parent: unknown, args: unknown, ctx: Context) => {
        return ctx.prisma.book.findMany();
      },
      book: (
        parent: unknown,
        { where }: { where: Prisma.BookWhereUniqueInput },
        ctx: Context
      ) => {
        return ctx.prisma.book.findUnique({ where });
      },
    },
    Mutation: {
      createBook: (
        parent: unknown,
        { data }: { data: Prisma.BookCreateInput },
        ctx: Context
      ) => {
        return ctx.prisma.book.create({ data });
      },
    },
  };

  const app = express();
  const httpServer = http.createServer(app);
  const server = new ApolloServer({
    typeDefs,
    resolvers,
    context: createContext,
    plugins: [ApolloServerPluginDrainHttpServer({ httpServer })],
  });

  await server.start();
  server.applyMiddleware({ app });
  await new Promise<void>((resolve) =>
    httpServer.listen({ port: 4000 }, resolve)
  );
  console.log(`🚀 Server ready at http://localhost:4000${server.graphqlPath}`);
})();

Go on, fire it up and check if everything works, don’t trust my word (ts-node ./src/index.ts, in case you were wondering). But… Do you also have this strange feeling that something’s off? No, I don’t mean my lazy ass putting the schema, resolvers and server startup into a single file. What’s that? Context? Yup, mate, you are right, thanks for spotting. Our context will be created every time the API is called, so we want to create a new Prisma client only when it’s needed. Why? Good question! RTFM. Let’s fix the Prisma instantiation and move it outside the createContext function:

./src/context.ts

import { PrismaClient } from "@prisma/client";
import { Request, Response } from "express";
import { Context } from "./typings";

let prisma: PrismaClient;

export const createContext = ({
  req,
  res,
}: {
  req: Request;
  res: Response;
}): Context => {
  if (!prisma) {
    prisma = new PrismaClient();
  }
  return {
    prisma,
    req,
    res,
  };
};

Looks way better now. Also, it’s a very important point to keep in mind: RDS would say “fuck off, I need to rest for a few minutes” if you were creating a new Prisma instance every time createContext was triggered. But don’t worry, it’s already fixed and behind us. Alrighty, but what if it happens to me anyways? In that case, my curious friend, you should RTFM and act accordingly! How fascinating is that?!
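One more knob worth knowing about while we’re here: Prisma lets you cap its connection pool straight from the connection string via connection_limit, and we’ll lean on exactly that later in serverless.yml. Locally you don’t really need it, this is just to show the syntax:

./.env

DATABASE_URL="postgresql://postgres:postgres@localhost:5432/mydb?schema=public&connection_limit=1"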

Iteration #2: Apollo-Server-Lambda

Yes, that’s cool that you can copy snippets from Prisma & Apollo docs, I admire you, but when are we gonna get to the fun stuff?

Good Lord, have some fucking patience! I’m getting to it, can’t you see? Alright, here’s a concept for you to digest: Lambda doesn’t work the same way a simple HTTP server does. It consumes something called an event, not a classic HTTP request. And that leads us to the conclusion that apollo-server-express won’t work all that well. Fortunately, Apollo provides another package, called apollo-server-lambda. And the good news is that it’s fairly easy to migrate from one to the other (The bad news, on the other hand, is that the documentation for this particular package is kinda bad. But it’s built on top of apollo-server-express, so we should be fine). Shall we?

$ yarn remove apollo-server-express
$ yarn add apollo-server-lambda

Now we need to update index.ts a bit. The type definitions and resolvers shouldn’t change; we only need to update a few things here and there.

./src/index.ts

import {
  ApolloServerPluginLandingPageGraphQLPlayground,
  gql,
} from "apollo-server-core";
import * as express from "express";
import { createContext } from "./context";
import { Context } from "./typings";
import { Prisma } from "@prisma/client";
import { ApolloServer } from "apollo-server-lambda";

const typeDefs = gql`
  ...
`;

const resolvers = {
  ...
};

const app = express();

const server = new ApolloServer({
  typeDefs,
  resolvers,
  context: createContext,
  introspection: true,
  plugins: [
    ApolloServerPluginLandingPageGraphQLPlayground({
      settings: {
        "schema.polling.enable": false,
      },
    }),
  ],
});

exports.handler = server.createHandler({
  expressAppFromMiddleware: (middleware) => {
    app.use(middleware);
    return app;
  },
});

Surely, you can spot a few differences. Let’s run through them together, better safe than sorry:

  • Playground plugin with disabled polling.
  • No need to await server.start(), hence the slightly different approach to wiring up express.
  • exports.handler instead of starting an HTTP server ourselves.

Also, createContext now receives express as an argument, instead of Request and Response directly, so let’s quickly adjust it:

./src/context.ts

export const createContext = ({ express }: { express: any }): Context => {
  const { req, res }: { req: Request; res: Response } = express;
  ...
};

Ew, you used any? Come on, why would you do that?

Yup, sue me.
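Fine, fine. If the any genuinely bothers you, here’s one way to type it yourself. A sketch only, based on my understanding that apollo-server-lambda v3 hands the context function an object shaped like { event, context, express: { req, res } }:

./src/context.ts

import { PrismaClient } from "@prisma/client";
import { Request, Response } from "express";
import { Context } from "./typings";

let prisma: PrismaClient;

// same logic as before, just with the express bag typed instead of any
export const createContext = ({
  express,
}: {
  express: { req: Request; res: Response };
}): Context => {
  const { req, res } = express;
  if (!prisma) {
    prisma = new PrismaClient();
  }
  return { prisma, req, res };
};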

Look at this piece of fine, finely written software! We’ve done a great job here and there’s not much more left to finish, I promise.

Iteration #3: Fun stuff

Wait, you had me test the app after the first iteration, but you’re not offering to try it out now?

Yup, you are right. So, the next step would be to finally deploy that fine piece of software to AWS. How do we do that? Serverless!

What’s that?

Well THAT, my friend, is another framework to put on your CV! Ain’t it exciting? Sure is! So let’s hop into the action.

./serverless.yml

service: my-awesome-application

plugins:
  - serverless-offline
  - serverless-webpack
  - serverless-dotenv-plugin
  - serverless-plugin-scripts

custom:
  serverless-offline:
    useChildProcesses: true
  webpack:
    webpackConfig: ./webpack.config.js
    keepOutputDirectory: true
    packager: npm
    packagerOptions:
      scripts:
        - npx prisma generate
        - rm -rf ./node_modules/@prisma/engines

package:
  individually: true

provider:
  name: aws
  runtime: nodejs14.x
  region: eu-central-1
  httpApi:
    cors:
      allowedOrigins:
      allowCredentials: true
      allowedMethods:
        - GET
        - POST
  environment:
    DATABASE_URL: !Join
      - ""
      - - "postgresql://"
        - "RolfDoe"
        - ":"
        - "SuperStrongPassword"
        - "@"
        - !GetAtt rdsPostgres.Endpoint.Address
        - ":5432/mydb?schema=public&connection_limit=1"
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - rds:*
          Resource: "arn:aws:rds:eu-central-1:*:*"
  lambdaHashingVersion: 20201221

useDotenv: true

functions:
  graphql:
    handler: src/index.handler
    timeout: 15
    vpc:
      securityGroupIds:
        - !Ref lambdaSg
      subnetIds:
        - !Ref lambdaRdsPrivateSnA
    events:
      - httpApi:
          path: /graphql
          method: GET
      - httpApi:
          path: /graphql
          method: POST

resources:
  Resources:
    rdsPostgres:
      Type: AWS::RDS::DBInstance
      Properties:
        MasterUsername: "RolfDoe"
        MasterUserPassword: "SuperStrongPassword" # come on, man, use your imagination
        AllocatedStorage: 20
        DBName: mydb
        DBInstanceClass: db.t2.micro
        VPCSecurityGroups:
          - !GetAtt rdsSg.GroupId
        DBSubnetGroupName:
          Ref: rdsSnG
        Engine: postgres
        EngineVersion: 14.0
    lambdaSgEgress:
      Type: AWS::EC2::SecurityGroupEgress
      Properties:
        IpProtocol: tcp
        FromPort: 0
        ToPort: 65535
        CidrIp: 0.0.0.0/0
        GroupId:
          Fn::GetAtt:
            - lambdaSg
            - GroupId
    lambdaSgIngress:
      Type: AWS::EC2::SecurityGroupIngress
      Properties:
        IpProtocol: tcp
        FromPort: 0
        ToPort: 65535
        GroupId:
          Fn::GetAtt:
            - lambdaSg
            - GroupId
    rdsSgEgress:
      Type: AWS::EC2::SecurityGroupEgress
      Properties:
        IpProtocol: tcp
        FromPort: 0
        ToPort: 65535
        CidrIp: 0.0.0.0/0
        GroupId:
          Fn::GetAtt:
            - rdsSg
            - GroupId
    rdsSgIngress:
      Type: AWS::EC2::SecurityGroupIngress
      Properties:
        IpProtocol: tcp
        FromPort: 5432
        ToPort: 5432
        GroupId:
          Fn::GetAtt:
            - rdsSg
            - GroupId
        SourceSecurityGroupId:
          Fn::GetAtt:
            - lambdaSg
            - GroupId
    rdsSg:
      Type: AWS::EC2::SecurityGroup
      Properties:
        GroupDescription: rds security group
        VpcId:
          Ref: lambdaRdsVpc
    rdsSnG:
      Type: AWS::RDS::DBSubnetGroup
      Properties:
        DBSubnetGroupDescription: rds subnet group # required by CloudFormation
        SubnetIds:
          - Ref: lambdaRdsPrivateSnA
          - Ref: lambdaRdsPrivateSnB
    lambdaRdsPrivateSnA:
      Type: AWS::EC2::Subnet
      Properties:
        AvailabilityZone: eu-central-1a
        CidrBlock: 10.10.12.0/24
        VpcId: !Ref lambdaRdsVpc
        MapPublicIpOnLaunch: false
    lambdaRdsPrivateSnB:
      Type: AWS::EC2::Subnet
      Properties:
        AvailabilityZone: eu-central-1b
        CidrBlock: 10.10.13.0/24
        VpcId: !Ref lambdaRdsVpc
        MapPublicIpOnLaunch: false
    lambdaSg:
      Type: AWS::EC2::SecurityGroup
      Properties:
        GroupDescription: lambda security group
        VpcId:
          Ref: lambdaRdsVpc
    lambdaRdsVpc:
      Type: AWS::EC2::VPC
      Properties:
        CidrBlock: 10.10.0.0/16
        EnableDnsHostnames: true
        EnableDnsSupport: true

Looks pretty easy, doesn’t it? One thing to mention — you can use environment variables, but I can’t show you the syntax, because Medium thinks I’m crafting some kind of SSTI payload, so go on, google the shit out of serverless-dotenv-plugin.

Back to our yml. Let’s see what we’ve got here: some plugins, webpack configs, CORS, a shitty IAM role (it’s not like I give a fuck, it’s a demo. Btw, the role is bad because I’m allowing all actions on all DBs in the selected region. Smells bad. Don’t do that on real projects unless you have a death wish) and a lambda function. Indeed, that’s not hard, but let’s run through the file and figure out what’s happening on each level:

Plugins are going to make your life easier, but you’ll need to install them as regular npm packages. Btw, I guess we forgot to install Serverless itself, so let’s fix that in this step:

$ yarn add -D serverless serverless-offline serverless-webpack serverless-dotenv-plugin serverless-plugin-scripts

As I already said, plugins will make your life easier. Basically, serverless-offline will let us run the app locally, while webpack and scripts will be used to reduce the final package size.
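And since serverless-offline is in the mix, spinning the whole thing up locally should be as simple as:

$ yarn serverless offline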

The custom section includes configs for the plugins. The important piece here is webpack, take a close look. I bet you already noticed this row: webpackConfig: ./webpack.config.js! Let me show you what a typical webpack config looks like:

./webpack.config.js

const nodeExternals = require('webpack-node-externals');
const serverlessWebpack = require('serverless-webpack');
const CopyWebpackPlugin = require('copy-webpack-plugin');

module.exports = {
  devtool: 'inline-cheap-module-source-map',
  entry: serverlessWebpack.lib.entries,
  mode: serverlessWebpack.lib.webpack.isLocal ? 'development' : 'production',
  module: {
    rules: [
      {
        test: /\.ts$/,
        loader: 'ts-loader',
        options: { transpileOnly: true },
      },
    ],
  },
  plugins: [
    new CopyWebpackPlugin({
      patterns: [
        './prisma/schema.prisma',
        './prisma/migrate.sql',
      ],
    }),
  ],
  node: false,
  externals: [nodeExternals()],
  resolve: {
    extensions: ['.ts'],
  },
  target: 'node',
};
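One thing this config quietly assumes: webpack itself, webpack-node-externals, copy-webpack-plugin and ts-loader aren’t installed yet, so throw them into the dev dependencies:

$ yarn add -D webpack webpack-node-externals copy-webpack-plugin ts-loader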

You may have noticed CopyWebpackPlugin here (I mean, this piece is quite large, idk how you could miss it). It’s pretty straightforward: it simply copies the files that wouldn’t be included in the final package otherwise (webpack tree shaking is a bitch). What’s tree shaking? ffs, RTFM!

Also, you may have noticed that we are copying an SQL file. Spot on, mate! As you know, there’s no fucking way Lambda would interact with any CLI tools. And guess what Prisma’s migration tooling is! So this leaves us with no way to apply our migrations, which sucks, because our app won’t run without them. I came up with an easy way around this limitation. In two words, we’ll create another lambda to apply our schema changes and we’ll run it automagically during deployments. This approach lays the Prisma migration tool to rest, since it’s not utilised at runtime, but at least it works. Another issue is that gradual schema deployment is off the table. That sucks as well.

./src/migrate.ts

import * as fs from 'fs';
const pg = require('pg'); // don't forget to add pg to your dependencies

export const handler = () => {
  // read the bundled SQL file (the concatenated migrations, see below)
  const sql = fs.readFileSync(__dirname + '/../prisma/migrate.sql').toString();
  const pool = new pg.Pool({
    connectionString: process.env.DATABASE_URL,
  });
  pool.connect((err: any, client: any, done: any) => {
    if (err) {
      console.log('error: ', err);
      process.exit(1);
    }
    // apply the whole migration script in one go
    client.query(sql, (err: any, result: any) => {
      done();
      if (err) {
        console.log('error: ', err);
        process.exit(1);
      }
      process.exit(0);
    });
  });
};

And to get the migrate.sql file we just need to run
$ cat ./prisma/migrations/**/*.sql >> ./prisma/migrate.sql
(heads up: >> appends, so if you regenerate the file later, use > or delete the old one first).

To create the lambda we need to modify functions section in our yml file:

  graphql:
    ...
  migrate:
    handler: src/migrate.handler
    vpc:
      securityGroupIds:
        - !Ref lambdaSg
      subnetIds:
        - !Ref lambdaRdsPrivateSnA
    events:
      - schedule:
          rate: cron(0 0 1 1 ? *)
          enabled: false

This way it can only be triggered manually and will not be publicly accessible. If you’d like to automate this step and put it into the pipeline, try this link.
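For the record, triggering it by hand once everything is deployed is roughly this (plain old Serverless invoke, nothing fancy):

$ yarn serverless invoke --function migrate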

And the last important piece is resources. It’s simply a CloudFormation template; you can read further here. Sounds like we’re done with the template. Let’s try running
$ yarn serverless package
to see if our app bundles. If you’d like to know how to deploy the app to AWS Lambda, you could just google it, could you not? Anyways, I’m feeling generous today, so feel free to use this link (or this one, whichever you like, i don’t really care).
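And since I’m apparently in a giving mood, the deploy itself boils down to something like this (stage/region flags and AWS credentials are on you):

$ yarn serverless deploy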

Conclusions

So you’ve seen the process of setting up the basic project and deployment (or packaging, who cares) to AWS Lambda. So let me ask you, was it worth it? What are you doing with your life? What am I doing with my life? Why would anyone think of deploying full-fucking-fledged API to a single Lambda? Why not micro-services? Why not EC2? Why not ECS? Why not Elastic Beanstalk?

So many questions, so few answers. On a serious note, this approach would work for simple APIs, at least those that don’t need a DB. If you think about it, the idea of the serverless approach is to have the application running only when there’s demand. But the more services you plug into your app, the harder it gets to stick to that initial idea. Let’s assume you’d like to add an S3 bucket to your app. Since your DB and Lambda are in a private subnet, you can’t easily access S3; that’s how VPCs work, and it’s a logical approach. So now you have to set up a gateway endpoint, which will be sitting there 24/7. Not really serverless-ish, is it?
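If you’re curious what that endpoint looks like in practice, it’s roughly the snippet below, dropped into the resources section of the yml. A sketch only: it assumes you’ve created a route table for the private subnets (the lambdaRdsPrivateRt below is hypothetical, the template above doesn’t define one):

    s3GatewayEndpoint:
      Type: AWS::EC2::VPCEndpoint
      Properties:
        VpcEndpointType: Gateway
        ServiceName: com.amazonaws.eu-central-1.s3
        VpcId: !Ref lambdaRdsVpc
        RouteTableIds:
          - !Ref lambdaRdsPrivateRt # hypothetical route table for the private subnets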

All in all, AWS provides a shitload of decent services. It just so happens that not all of them were designed to run APIs. So you should be fine as long as you understand what the service was intended for.

All the best, mate, (maybe) see you again!

P.S. In case you were wondering what’s RTFM
P.P.S. FFS, learn how to use google
