Serverless architecture for Deep Learning

Manjeet Singh Nagi
Jan 13

Serverless architecture

Serverless architecture is a way to build and run applications without having to manage the processing part of the infrastructure. You still need to provision any transient or persistent storage your application needs, but you do not need to buy, rent, or provision processing servers.

Serverless architecture makes your application more scalable, decoupled, and modular. It reduces the operational overhead of managing servers and lets developers focus on what matters most to them: the functionality.

Deep Learning

Deep Learning involves passing huge amounts of data (the more the better) through your neural network multiple times (sometimes hundreds of times). The volume of data and the repeated passes through the network demand huge processing power. Deep learning programs are generally monolithic applications running on single machines with expensive, GPU-class processing power. The sections below propose a decoupled, serverless architecture on AWS (using Lambda, S3, DynamoDB, and SNS) that can run your neural network over huge data and multiple iterations/epochs/passes.
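
To make the workload concrete, here is a minimal, self-contained NumPy sketch of the kind of training loop being described: a toy two-layer network trained over several epochs and mini batches. The data, layer sizes, and learning rate are illustrative assumptions; the point is that every iteration of the nested epoch/mini-batch/layer loops is an independent unit of work, which is exactly what the architecture below farms out to Lambda functions.

```python
import numpy as np

# Toy data: 1,000 samples, 20 features, binary labels (purely illustrative).
X = np.random.randn(1000, 20)
y = (np.random.rand(1000, 1) > 0.5).astype(float)

# One hidden layer of 16 units; the weights the Initializer step would produce.
W1, b1 = np.random.randn(20, 16) * 0.01, np.zeros((1, 16))
W2, b2 = np.random.randn(16, 1) * 0.01, np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr, batch_size = 0.1, 100
for epoch in range(10):                          # multiple passes/epochs
    for start in range(0, len(X), batch_size):   # mini batches
        xb, yb = X[start:start + batch_size], y[start:start + batch_size]
        # Forward pass, layer by layer.
        a1 = sigmoid(xb @ W1 + b1)
        a2 = sigmoid(a1 @ W2 + b2)
        # Backward pass, layer by layer (binary cross-entropy gradient).
        dz2 = a2 - yb
        dW2, db2 = a1.T @ dz2 / len(xb), dz2.mean(axis=0, keepdims=True)
        dz1 = (dz2 @ W2.T) * a1 * (1 - a1)
        dW1, db1 = xb.T @ dz1 / len(xb), dz1.mean(axis=0, keepdims=True)
        W2, b2 = W2 - lr * dW2, b2 - lr * db2
        W1, b1 = W1 - lr * dW1, b1 - lr * db1
```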

Benefits

In addition to the benefits generally derived from serverless architecture, this architecture has the following benefits specifically for deep learning:

  1. You do not need to learn any new frameworks such as TensorFlow or PyTorch.
  2. Even for huge data and multiple passes of data through a deep network, you do not need a GPU.
  3. You can take advantage of the low cost of AWS Lambda for processing.

Proposed Architecture

[Figure: Proposed architecture]

Important points about the architecture

Mini batch — The architecture is feasible only with the mini-batch mode of processing, rather than processing all the data together in each pass. You need to divide your data into mini batches such that each execution of each Lambda function stays within 15 minutes (the limit set by AWS).
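
As an illustration, here is one way the mini batches could be prepared up front: the full dataset is split and each slice written to S3 as its own object, so a single Lambda invocation only ever loads one slice. The bucket name and batch size here are assumptions for the sketch.

```python
import io

import boto3
import numpy as np

s3 = boto3.client("s3")
BUCKET = "deep-learning-data"  # hypothetical bucket name

def split_into_mini_batches(X, y, application_id, batch_size=256):
    """Write each mini batch to S3 as its own object so that every Lambda
    invocation only ever loads a slice it can finish well under 15 minutes."""
    for i, start in enumerate(range(0, len(X), batch_size)):
        buf = io.BytesIO()
        np.savez(buf, X=X[start:start + batch_size], y=y[start:start + batch_size])
        s3.put_object(
            Bucket=BUCKET,
            Key=f"{application_id}/mini-batch-{i:05d}.npz",
            Body=buf.getvalue(),
        )
```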

API Gateway / Application creator (Lambda function) / Application config (DynamoDB table) — The user kick-starts the execution by calling the API. In the payload of the API call the user provides the following details: Application ID (a unique ID to identify this execution), number of layers in the network, number of nodes in each layer, activation function for each layer, learning rate, regularization parameter, etc. The Application creator (Lambda function) stores all these details in the Application config (DynamoDB table), which all other Lambda functions refer to.
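
A minimal sketch of what the Application creator Lambda might look like, assuming a DynamoDB table named ApplicationConfig and the payload fields listed above (the field names are illustrative):

```python
import json
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
config_table = dynamodb.Table("ApplicationConfig")  # hypothetical table name

def handler(event, context):
    """Application creator: persist the network definition sent through
    API Gateway so every other Lambda can look it up by Application ID."""
    payload = json.loads(event["body"])
    config_table.put_item(
        Item={
            "application_id": payload["application_id"],
            "num_layers": payload["num_layers"],
            "nodes_per_layer": payload["nodes_per_layer"],
            "activations": payload["activations"],
            # DynamoDB rejects Python floats, hence Decimal via a string.
            "learning_rate": Decimal(str(payload["learning_rate"])),
            "regularization": Decimal(str(payload["regularization"])),
        }
    )
    return {
        "statusCode": 200,
        "body": json.dumps({"application_id": payload["application_id"]}),
    }
```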

SNS notifications — The different Lambda functions send SNS notifications to each other to kick-start the next step.
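
For instance, a forward-pass Lambda might hand off to the next step with a small helper like this (the topic ARN is a placeholder):

```python
import json

import boto3

sns = boto3.client("sns")
FORWARD_PASS_TOPIC = "arn:aws:sns:us-east-1:123456789012:forward-pass"  # placeholder ARN

def notify_next_step(application_id, epoch, mini_batch, layer):
    """Publish the coordinates of the next unit of work; the subscribed
    Lambda picks the message up and runs that single layer/mini-batch step."""
    sns.publish(
        TopicArn=FORWARD_PASS_TOPIC,
        Message=json.dumps({
            "application_id": application_id,
            "epoch": epoch,
            "mini_batch": mini_batch,
            "layer": layer,
        }),
    )
```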

Forward Pass / Backward Pass — Very generic Lambda functions that process the forward pass and backward pass respectively for a given Application ID, mini batch, layer, and epoch. Each can kick-start another instance of itself if the layer it is currently processing is not the last layer in the network (by comparing the current layer with the maximum layer in the network from the Application config). The forward pass saves its output, which feeds the next layer, in a DynamoDB table named Forward Pass; the backward pass Lambda function does the same in a DynamoDB table named Backward Pass.
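
Since the original diagram is not reproduced here, the following is only a rough sketch of the forward-pass handler under the assumptions already made: it is triggered by an SNS message carrying the Application ID, epoch, mini batch, and layer; it reads the Application config; and it chains another instance of itself while the current layer is below the last layer. compute_layer_output is a hypothetical helper standing in for the actual per-layer math.

```python
import json
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
config_table = dynamodb.Table("ApplicationConfig")  # hypothetical table names
forward_table = dynamodb.Table("ForwardPass")
sns = boto3.client("sns")
FORWARD_PASS_TOPIC = "arn:aws:sns:us-east-1:123456789012:forward-pass"  # placeholder ARN

def handler(event, context):
    """Forward pass for one (application, epoch, mini batch, layer) tuple."""
    msg = json.loads(event["Records"][0]["Sns"]["Message"])
    config = config_table.get_item(Key={"application_id": msg["application_id"]})["Item"]

    # compute_layer_output is a hypothetical helper that loads the mini batch
    # (or the previous layer's activations) plus this layer's weights and
    # applies the configured activation function.
    activations = compute_layer_output(msg, config)

    # Persist this layer's output so the next layer / the backward pass can read it.
    forward_table.put_item(Item={
        "application_id": msg["application_id"],
        "step": f'{msg["epoch"]}#{msg["mini_batch"]}#{msg["layer"]}',
        "activations": [Decimal(str(v)) for v in activations],
    })

    # Not the last layer yet? Kick off another instance of this function.
    if msg["layer"] < int(config["num_layers"]):
        sns.publish(
            TopicArn=FORWARD_PASS_TOPIC,
            Message=json.dumps({**msg, "layer": msg["layer"] + 1}),
        )
```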

Initializer — Initializes the weights for each execution, before the first pass of a mini batch.
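
A possible shape for the Initializer, assuming the weights are kept in the same DynamoDB table the forward pass reads from (table and attribute names are illustrative):

```python
from decimal import Decimal

import boto3
import numpy as np

dynamodb = boto3.resource("dynamodb")
weights_table = dynamodb.Table("ForwardPass")  # hypothetical: weights stored alongside activations

def initialize_weights(application_id, nodes_per_layer, seed=0):
    """Write small random weights for every layer, once per application,
    so the first forward-pass Lambda has something to start from."""
    rng = np.random.default_rng(seed)
    for layer, (fan_in, fan_out) in enumerate(zip(nodes_per_layer, nodes_per_layer[1:]), start=1):
        W = rng.normal(scale=0.01, size=(fan_in, fan_out))
        weights_table.put_item(Item={
            "application_id": application_id,
            "step": f"init#layer-{layer}",
            "weights": [Decimal(str(v)) for v in W.ravel()],
        })
```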

Cost calculator — Calculates the loss function and its derivative with respect to the output of the last layer.
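
For a network whose last layer is a sigmoid producing probabilities, the cost calculator could compute binary cross-entropy and its derivative with respect to that output, for example:

```python
import numpy as np

def cost_and_gradient(y_hat, y, eps=1e-12):
    """Binary cross-entropy loss for the last layer's output, plus its
    derivative with respect to that output (dL/dy_hat), which the first
    backward-pass step consumes."""
    y_hat = np.clip(y_hat, eps, 1 - eps)
    loss = -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
    grad = (y_hat - y) / (y_hat * (1 - y_hat)) / len(y)
    return loss, grad
```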

Do let me know your questions, concerns, or comments on this.

Go serverless for your Deep Learning!
