Deploying a Structural Engineering Application to the Cloud with AWS Lambda
In my previous Medium posts, we first learned how to build a structural engineering web application using Flask and then explored how to deploy it to the cloud for broader user access using fully managed services. In this article, we continue our exploration by delving into a serverless approach for deploying a structural engineering function to the cloud using the Amazon Web Services (AWS) Lambda service. AWS Lambda handles code execution and dynamically scales instances based on incoming traffic, eliminating the need for server provisioning or management; this is where the term "serverless" comes from.
AWS Lambda functions are a preferred choice for deploying Python functions and machine learning models and for connecting them to other applications through REST APIs. In contrast, AWS Elastic Beanstalk offers a straightforward method for deploying web applications to the cloud, yet the application remains active at all times regardless of usage. The cost of running a Lambda function, on the other hand, is determined solely by the compute time the function consumes.
For this example, we will keep it simple with a solitary Lambda function that computes the capacity of a steel column, and an API Gateway serving as the front-end for the Lambda function. The API Gateway enables the deployment of REST APIs using standard HTTP methods such as GET, POST, PUT, and DELETE to perform operations on resources.
AWS suggests separating the Lambda handler from the core logic function for better organization. Our example Lambda function calculates the axial load capacity of a steel column. The core logic for this function was discussed in my aforementioned Medium article.
import json
import math


def calculate_phi_pn(moment_of_inertia, section_area, kl, steel_modulus_of_elasticity=29000, f_y=50):
    """
    Core logic function calculating the column axial load capacity (phi*Pn).
    The full implementation is discussed in the aforementioned Medium article.
    """
    # ... core calculation omitted here ...
    return phi_pn


def lambda_handler(event, context):
    response_object = {}

    # Get the input variables for the calculate_phi_pn function from the event.
    moment_of_inertia = event['queryStringParameters']['moment_of_inertia']
    section_area = event['queryStringParameters']['section_area']
    kl = event['queryStringParameters']['kl']
    steel_modulus_of_elasticity = event['queryStringParameters']['steel_modulus_of_elasticity']
    steel_yield_stress = event['queryStringParameters']['steel_yield_stress']

    # Test whether all string inputs can be converted to float.
    try:
        [float(value) for value in [moment_of_inertia, section_area, kl,
                                    steel_modulus_of_elasticity, steel_yield_stress]]
    except ValueError:
        output_response = "Please provide numeric values."
        response_object['statusCode'] = 400  # statusCode is required by API Gateway's Lambda proxy integration
        response_object['body'] = json.dumps(output_response)
        return response_object

    # Run the calculate_phi_pn function to get the axial load capacity.
    output_response = calculate_phi_pn(abs(float(moment_of_inertia)),
                                       abs(float(section_area)),
                                       abs(float(kl)),
                                       abs(float(steel_modulus_of_elasticity)),
                                       abs(float(steel_yield_stress)))

    # Return the output response in JSON format.
    response_object['statusCode'] = 200  # statusCode is required by API Gateway's Lambda proxy integration
    response_object['body'] = json.dumps(output_response)
    return response_object
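Before wiring the function to API Gateway, it can be helpful to exercise the handler locally. The snippet below is a minimal sketch, assuming the code above is saved as lambda_function.py; it builds a mock API Gateway proxy event containing only the queryStringParameters key that the handler actually reads, using the same example values that appear later in this article.

# local_test.py: a minimal sketch of invoking the handler with a mock API Gateway proxy event.
# Assumes the handler above is saved in a module named lambda_function.py.
import json

from lambda_function import lambda_handler

mock_event = {
    "queryStringParameters": {
        "moment_of_inertia": "56.3",
        "section_area": "14.6",
        "kl": "20",
        "steel_modulus_of_elasticity": "29000",
        "steel_yield_stress": "50",
    }
}

# The second argument (context) is not used by the handler, so None is sufficient here.
response = lambda_handler(mock_event, None)
print(response["statusCode"], json.loads(response["body"]))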
When using AWS Lambda, it's crucial to be mindful of the size of the deployment package. While only the math and json libraries are imported in our example, more complex functions may require additional packages such as Pandas, NumPy, or Scikit-Learn. For large, size-intensive packages, adding layers to the function configuration becomes necessary. AWS offers pre-packaged layers and the ability to create custom layers that can be reused across multiple functions; however, layers will not be covered in this article. Keep in mind that AWS Lambda limits deployment packages to 50 MB for direct uploads (zipped) and 250 MB once unzipped, even when uploading through AWS S3. This limit applies to all uploaded files, including layers and custom runtimes.
It's important to keep in mind that the first request to a Lambda function may experience a slower response time, depending on its complexity; this is known as a "cold start." When invoked through API Gateway, AWS Lambda must spin up a new instance of the function to handle the request. Subsequent requests are handled by the already-active container until the idle timeout is reached, at which point AWS terminates the container. To maintain consistent performance, it is possible to keep the function "warm," but this incurs additional costs.
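One way to keep a function warm is provisioned concurrency, which keeps a specified number of execution environments initialized. The boto3 snippet below is only an illustrative sketch; the function name matches the one used later in this article, but the "prod" alias is an assumption (provisioned concurrency cannot target $LATEST), and this feature is billed even while the function sits idle.

# A rough sketch of keeping the function warm with provisioned concurrency.
# Assumes a published version or alias named "prod" exists for the function.
import boto3

lambda_client = boto3.client("lambda")
lambda_client.put_provisioned_concurrency_config(
    FunctionName="simplecolumncapacity",   # function name used in this article
    Qualifier="prod",                      # hypothetical alias; cannot be $LATEST
    ProvisionedConcurrentExecutions=1,     # keep one execution environment initialized
)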
Our simple column load capacity function takes in a query string and outputs a JSON variable called response_object. To send a request and receive a response, I recommend using AWS API Gateway for added control over communication with the Lambda function. Now, let's explore API Gateway further.
Adding API Gateway
Start with creating a REST API in AWS API Gateway by selecting “Create API” and choosing “REST API”.
The next step is to configure the API. In this example, the following settings were used.
Then, create a 'GET' method and provide the name of the Lambda function, in this case, simplecolumncapacity.
Select “Method Request”.
Configure the "Method Request" by setting the "Request Validator" to "Validate query string parameters and headers". This ensures that the API returns an error message if any required parameter is missing. By entering the required URL Query String Parameters, you ensure that the Lambda function receives the inputs it needs to run correctly.
We are now ready to deploy the API Gateway. Using stages in API Gateway deployment is considered best practice, as it allows for the deployment of multiple versions of the same API, each with its own distinct environment such as development, testing, and production.
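For reference, the same deployment step can also be scripted. The snippet below is a rough sketch using boto3; the REST API id "abc123xyz" is a placeholder you would replace with the id of your own API, and the stage name is just an example.

# A rough sketch of deploying the API to a named stage with boto3.
# "abc123xyz" is a placeholder REST API id; replace it with your own.
import boto3

apigateway = boto3.client("apigateway")
apigateway.create_deployment(
    restApiId="abc123xyz",
    stageName="dev",  # e.g. separate "dev", "test", and "prod" stages
    description="Initial deployment of the column capacity API",
)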
The API Gateway has been successfully deployed. However, if we test it by visiting the stage URL without any query string parameters, we will encounter the error message produced by the request validator we configured in the method request, since the required query string parameters are missing.
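With the request validator enabled, the error response for a missing parameter typically looks something like the following (the exact parameter listed depends on which ones were omitted):

{"message": "Missing required request parameters: [moment_of_inertia]"}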
The image below depicts the correct URL format with all the necessary parameters required by the Lambda function. After entering the correct URL, the function calculates and returns the axial capacity of the steel column.
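For illustration, the full request URL follows the pattern below. The API id, region, and stage name are placeholders to be replaced with your own values, and the parameter values match the worked example used later in this article.

https://<api-id>.execute-api.<region>.amazonaws.com/<stage>?moment_of_inertia=56.3&section_area=14.6&kl=20&steel_modulus_of_elasticity=29000&steel_yield_stress=50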
Pasting a URL into a browser, however, is not the most practical way to use the Lambda function. Let's start by examining how to call it from Python.
import requests

# Function to build the request URL from api_endpoint and query_string and return the JSON response
def send_api_request(api_endpoint, query_string):
    url = api_endpoint + "?" + query_string
    response = requests.get(url)
    return response.json()

i_s = 56.3    # in4: moment of inertia
a_s = 14.6    # in2: section area
kl = 20       # ft: effective length
e_s = 29000   # ksi: modulus of elasticity
f_y = 50      # ksi: steel yield stress

api_endpoint = "## API ADDRESS HIDDEN, ENTER YOUR OWN ##"

query_string = ("moment_of_inertia=" + str(i_s) +
                "&section_area=" + str(a_s) +
                "&kl=" + str(kl) +
                "&steel_modulus_of_elasticity=" + str(e_s) +
                "&steel_yield_stress=" + str(f_y))

response_data = send_api_request(api_endpoint, query_string)
print(response_data)
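As a side note on the design, requests can also build the query string for you through its params argument, which avoids manual string concatenation and its escaping pitfalls. The sketch below assumes the same hidden API endpoint as above.

# Alternative sketch: let requests encode the query string parameters.
import requests

api_endpoint = "## API ADDRESS HIDDEN, ENTER YOUR OWN ##"
params = {
    "moment_of_inertia": 56.3,
    "section_area": 14.6,
    "kl": 20,
    "steel_modulus_of_elasticity": 29000,
    "steel_yield_stress": 50,
}
response = requests.get(api_endpoint, params=params)
print(response.json())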
As shown above, the Lambda function can be leveraged from Python, for example in Visual Studio. Another practical approach for utilizing the Lambda function and its REST API in the field of structural engineering is through Grasshopper. Grasshopper is a visual programming platform that runs within Rhinoceros 3D, a widely used computer-aided design software in architecture, engineering, and product design. To further enhance its functionality, consider Sergey Pigach's Swiftlet plugin for Grasshopper.
Conclusion
In conclusion, deploying a structural engineering function in the cloud using the AWS Lambda service offers a serverless approach that eliminates the need for server provisioning or management. Python functions and machine learning models can both be deployed as Lambda functions, making them a popular choice.
This article demonstrated a simple Lambda function for calculating the capacity of a steel column, with the API Gateway serving as the front-end for the function and providing REST API deployment. The utilization of this Lambda function can be accomplished through Python in Visual Studio or through the visual programming platform Grasshopper.
References
- Swiftlet, Grasshopper plugin, https://github.com/enmerk4r/Swiftlet