ChatGPT's Guide to Building and Deploying Serverless Applications

A Step-by-Step Tutorial Using the Serverless Framework and AWS Lambda

Sebastian Carrizo
Globant
12 min read · May 12, 2023

--

AI-generated image

I've been experimenting with ChatGPT for several months, using it for various tasks such as debugging, generating code, and suggesting report titles. I realized that ChatGPT could be a useful development tool for anyone, from novice to expert, to develop ideas from conception to implementation using natural language and basic coding concepts. With ChatGPT, you can focus on creative thinking while it handles the technical details. In this article, I'll demonstrate how ChatGPT can act as a developer's copilot to guide you through building a serverless function on AWS that connects to virtual machines via SSH for maintenance or automation tasks.

Overview

With ChatGPT, I refined my initial idea, defined functional requirements, explored alternative implementation options, re-defined requirements, deployed the solution, and developed a scenario to test it. This article shows how a virtual assistant like ChatGPT can help IT specialists develop their concepts from start to finish using natural language processing and basic cloud and coding knowledge.

To follow this article, you need the following:

  • An AWS account with admin access (a free tier account is sufficient).
  • Python 3.8 or later installed on your local machine.
  • AWS CLI (Command Line Interface) installed on your local machine.
  • An IDE (Integrated Development Environment) or text editor of your choice.
  • Serverless Framework installed on your local machine.
  • Basic knowledge of Python.
  • Basic knowledge of AWS Lambda functions and IAM roles.
  • Basic knowledge of SSH protocol and connecting to remote servers.
  • A test EC2 instance with SSH access configured and running on AWS.
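
The examples below use the python3.9 Lambda runtime and Serverless Framework v3 syntax (for example, the ${sls:stage} variable), so a reasonably recent Serverless Framework installation is assumed.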

Ok ChatGPT, let’s dive in

The following is a transcript of my conversation with ChatGPT to develop a serverless solution on AWS to connect to remote servers via SSH and execute commands. The conversation has been edited for clarity and brevity.

The first thing I did was to present my idea to ChatGPT as clearly as possible, including context, preferred language, and functional requirements:

Me:

Hi dude, I'd like to create some Lambda functions using Python to execute a command on a Linux server using SSH.

Here are some functional requirements:

1. All Lambda functions must be called programmatically

2. The SSH execution Lambda function will receive the following parameters: { serverId, userId, commandLine }

3. The SSH execution Lambda function must get server credentials from another Lambda function that receives { serverId, userId } as parameters and returns { sshUser, sshKey, serverIp, serverPort }; sshKey is a base64-encoded string

4. The SSH execution function gets the name of the function that retrieves credentials from an environment variable

5. Credentials will be hardcoded for now

Please provide code for both functions.

ChatGPT:

Sure! Here are two Lambda functions that fulfill the requirements you've outlined:

  1. Lambda function to retrieve server credentials:
import json

def retrieve_credentials(event, context):
    # retrieve input parameters
    server_id = event['serverId']
    user_id = event['userId']

    # retrieve credentials for the given server and user
    ssh_user = 'user1'
    ssh_key = 'base64encodedstring'
    server_ip = '123.45.67.89'
    server_port = 22

    # create response
    response = {
        'sshUser': ssh_user,
        'sshKey': ssh_key,
        'serverIp': server_ip,
        'serverPort': server_port
    }

    return {
        'statusCode': 200,
        'body': json.dumps(response)
    }

2. Lambda function to execute SSH command:

import json
import os
import base64
import paramiko

def execute_ssh_command(event, context):
    # retrieve input parameters
    server_id = event['serverId']
    user_id = event['userId']
    command_line = event['commandLine']

    # retrieve credentials for the given server and user using the specified function name
    get_credentials_function_name = os.environ['GET_CREDENTIALS_FUNCTION_NAME']
    lambda_client = boto3.client('lambda')
    credentials_event = {
        'serverId': server_id,
        'userId': user_id
    }
    credentials_response = lambda_client.invoke(
        FunctionName=get_credentials_function_name,
        Payload=json.dumps(credentials_event)
    )
    credentials = json.loads(credentials_response['Payload'].read().decode('utf-8'))

    # decode base64-encoded SSH key
    ssh_key = base64.b64decode(credentials['sshKey']).decode('utf-8')

    # establish SSH connection
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(
        credentials['serverIp'],
        port=credentials['serverPort'],
        username=credentials['sshUser'],
        key_filename=ssh_key
    )

    # execute command on remote server
    stdin, stdout, stderr = ssh.exec_command(command_line)
    output = stdout.read().decode('utf-8')
    error = stderr.read().decode('utf-8')

    # close SSH connection
    ssh.close()

    # create response
    response = {
        'output': output,
        'error': error
    }

    return {
        'statusCode': 200,
        'body': json.dumps(response)
    }

Note that in the execute_ssh_command function, the name of the Lambda function that retrieves server credentials is retrieved from an environment variable GET_CREDENTIALS_FUNCTION_NAME. You'll need to set this environment variable to the name of your credentials retrieval function in your AWS Lambda configuration.

Also, note that the execute_ssh_command function uses the boto3 library to invoke the credentials retrieval function from within the function. You'll need to ensure that your AWS Lambda function has the necessary IAM permissions to invoke other Lambda functions.

As you can see, ChatGPT returned a good first version of the code. It isn't perfect (for example, it uses boto3 without importing it and passes the decoded key material to key_filename, both issues we will run into later), but it gives us an excellent starting point.

Here is a diagram of the solution ChatGPT generated from the information provided:

Solution's architectural overview

Alright, ChatGPT, it's time to deploy

Next, I asked ChatGPT to guide me through the steps needed to deploy the solution using the Serverless Framework, which can deploy applications to a variety of serverless cloud platforms, in this case AWS Lambda:

Me:

I would like to deploy both functions as code using Serverless Framework.

Could you provide the following information:

  1. Instructions to create the serverless project and name the service sshWorker
  2. serverless.yml configuration (functions will be called programmatically, do not include events configuration)
  3. Steps to download libraries
  4. Configuration required to allow the ssh_execution function to reach a public server
  5. Configuration required to let the ssh_execution function run for the maximum time allowed by Lambda

ChatGPT:

Sure, I can provide you with the information you need to deploy the Lambda functions using the Serverless Framework.

  1. To create a new Serverless project and name the service sshWorker, you can use the following command:
serverless create --template aws-python3 --path sshWorker
cd sshWorker

This will create a new Serverless project with a basic AWS Lambda function written in Python 3.

2. Here's an example serverless.yaml file that defines the two Lambda functions you want to deploy:

service: sshWorker

provider:
  name: aws
  runtime: python3.9

functions:
  retrieveCredentials:
    handler: handler.retrieve_credentials
    events: []

  executeSshCommand:
    handler: handler.execute_ssh_command
    events: []

Note that the events property is set to an empty array for both functions, as you don't want to define any API Gateway or other triggers for these functions.

3. To download the required libraries for these functions, you can use the following command:

cd sshWorker
pip install -r requirements.txt -t ./

This will install the required libraries defined in your requirements.txt file and save them to the project directory.

4. To allow the executeSshCommand function to reach any server, you'll need to ensure that the Lambda function is deployed in a VPC that has a route to the server. You can define a VPC configuration in your serverless.yml file like this:

provider:
  name: aws
  runtime: python3.9
  vpc:
    securityGroupIds:
      - sg-0123456789abcdef
    subnetIds:
      - subnet-0123456789abcdef

In this example, securityGroupIds specifies the ID of the security group associated with the VPC, and subnetIds specifies the ID of the subnet associated with the VPC.

5. To let the executeSshCommand function run for the maximum time allowed by AWS Lambda (15 minutes), you can add the following configuration to your serverless.yml file:

functions:
  executeSshCommand:
    handler: handler.execute_ssh_command
    timeout: 900 # in seconds

This sets the timeout for the executeSshCommand function to 900 seconds (15 minutes). You can adjust this value as needed.

Once you have your serverless.yml file and libraries downloaded, you can deploy the functions using the following command:

serverless deploy

This will package and deploy your Lambda functions to AWS. You can then call the retrieveCredentials function and pass its output as input to the executeSshCommand function programmatically using the AWS SDK or CLI.
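
To give an idea of what that programmatic call looks like, here is a minimal boto3 sketch. The function name assumes the default sshWorker-dev naming that Serverless generates, and the serverId, userId, and commandLine values are placeholders:

import json
import boto3

lambda_client = boto3.client('lambda')

# Invoke the deployed SSH worker synchronously and wait for its result
response = lambda_client.invoke(
    FunctionName='sshWorker-dev-executeSshCommand',  # default <service>-<stage>-<function> name
    InvocationType='RequestResponse',
    Payload=json.dumps({
        'serverId': 'server-1',    # placeholder, the mock credentials function ignores it
        'userId': 'user-1',        # placeholder
        'commandLine': 'uptime'
    })
)

# The function returns {'statusCode': ..., 'body': '<JSON string with output/error>'}
result = json.loads(response['Payload'].read())
print(result['body'])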

Here is an example of a network diagram based on the configuration provided by ChatGPT:

Solution network diagram

ChatGPT, let's keep things dynamic and parameterize some aspects

ChatGPT's instructions were accurate enough that even someone without Serverless Framework knowledge could easily create the project. Next, I asked how to parameterize the GET_CREDENTIALS_FUNCTION_NAME environment variable so that it picks up the composite function name that Serverless generates:

Me:

Could you explain how to include the GET_CREDENTIALS_FUNCTION_NAME environment variable in serverless.yaml? Try to use the function name generated by serverless deploy

ChatGPT:

Sure! To include the GET_CREDENTIALS_FUNCTION_NAME environment variable in your serverless.yml file, you can use the provider.environment property. Here's an example of how you can include this variable:

service: sshWorker

provider:
  name: aws
  runtime: python3.9
  region: us-east-1
  environment:
    GET_CREDENTIALS_FUNCTION_NAME: ${self:service}-${opt:stage, self:provider.stage}-retrieveCredentials

functions:
  executeSshCommand:
    handler: handler.execute_ssh_command

In this example, we're using the ${self:service} and ${opt:stage, self:provider.stage} variables to construct the name of the retrieveCredentials function. This assumes that you've named your service sshWorker, and that you're deploying to the default stage (dev). If you deploy to a different stage, you can override the stage by passing the --stage option when running serverless deploy.

When you run serverless deploy, the name of the retrieveCredentials function will be automatically generated based on the service name and stage, so you don't need to hardcode the function name in your serverless.yml file.
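
For reference, with the service named sshWorker and the default dev stage, this variable resolves to sshWorker-dev-retrieveCredentials, which is exactly the name the Serverless Framework assigns to the deployed function.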

Let's fix some errors, ChatGPT

After a few brief interactions, I had what I needed to deploy the first version of the application. However, ChatGPT's responses weren't entirely accurate due to changes in the Serverless Framework syntax. I fixed the serverless.yaml file structure issues after researching the Serverless Framework website. Despite a successful deployment, I ran into runtime errors and turned to ChatGPT for help:

Me:

I just got the following error:

"ErrorMessage": "Name boto3 is not defined"

ChatGPT:

This error indicates that the boto3 library has not been imported in your handler.py file.

To fix this error, you can add the following import statement at the top of your handler.py file:

import boto3

This will import the boto3 library and allow you to use it in your code.

If you've already imported the boto3 library and are still getting this error, make sure that you've installed the library using pip install boto3.

Me:

Looks like "pip install -r requirements.txt -t ./" is not installing the right binary version.

Could you describe a method to let Serverless install dependencies?

ChatGPT:

Sure! You can use the serverless-python-requirements plugin to automatically install your Python dependencies during deployment.

To install the plugin, run the following command:

npm install serverless-python-requirements

Then, add the following configuration to your serverless.yml file:

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    useDownloadCache: true
    useStaticCache: true
    dockerizePip: true

This configuration tells the serverless-python-requirements plugin to use a download cache and a static cache to speed up the installation of dependencies. It also tells the plugin to use Docker to isolate the installation process and ensure that the dependencies are installed in a compatible environment.

With this configuration in place, you can remove the requirements.txt file from your project directory, as the plugin will automatically download and install the required packages based on your pip configuration in setup.py or pyproject.toml.

To deploy your function with the dependencies installed, simply run:

serverless deploy

The serverless-python-requirements plugin will automatically install the dependencies and package them with your function during deployment.
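
Two practical notes on this step: dockerizePip: true requires Docker to be installed and running on the machine that runs serverless deploy, and in my setup the plugin simply picked up requirements.txt, so I kept that file. Its only third-party entry is paramiko, since boto3 is already available in the Lambda Python runtime.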

At this point, I was able to resolve all the error messages, but the function became stuck while trying to retrieve the credentials from the mock function. Therefore, I asked ChatGPT to help me understand why this was happening:

Me:

Looks like the execute_ssh_command function can't reach retrieve_credentials and the function gets stuck, could you help me?

ChatGPT:

Your execute_ssh_command function may be getting stuck because it's unable to retrieve the credentials from the retrieve_credentials function.

To resolve this issue, you should make sure that your retrieve_credentials function is working correctly and returning the credentials that your execute_ssh_command function needs.

Here are a few things you can try:

  1. Verify that the retrieve_credentials function is returning the correct values by testing it separately. You can do this by invoking the function manually or by adding a test event to your serverless.yml file.
  2. Check that the GET_CREDENTIALS_FUNCTION_NAME environment variable is set correctly and that it matches the name of the retrieve_credentials function in your serverless.yml file.
  3. Make sure that the IAM role assigned to your Lambda function has the necessary permissions to invoke the retrieve_credentials function. You can do this by checking the IAM policy assigned to the role and verifying that it includes the lambda:InvokeFunction permission for the retrieve_credentials function.
  4. Try adding some debug logging statements to your code to help you identify where the function is getting stuck. You can use the print() function to log messages to CloudWatch Logs, which you can then view in the AWS Management Console.

Once you've identified the issue, you can update your code or configuration to fix the problem and ensure that your execute_ssh_command function can retrieve the credentials it needs to establish an SSH connection.
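
In my case, the third suggestion was the relevant one, so I asked for the exact IAM statement next. One related point worth keeping in mind: a Lambda function attached to a VPC also needs a network path to the Lambda service endpoint (through a NAT gateway or an interface VPC endpoint), otherwise the invoke call hangs with exactly this symptom.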

Me:

Give me only the code of the IAM permission required to allow invoking any Lambda function, in the provider section of serverless.yaml

ChatGPT:

To allow a Lambda function to invoke any other Lambda function in your account, you can add the following IAM permission to your function's execution role:

provider:
  iamRoleStatements:
    - Effect: Allow
      Action:
        - lambda:InvokeFunction
      Resource: "*"

This policy statement allows the Lambda function to invoke any other Lambda function in your account by specifying the lambda:InvokeFunction action and setting the Resource property to "*", which means any Lambda function in your account.

Be aware that granting a Lambda function permission to invoke any other function can potentially pose a security risk. You should carefully consider the security implications before adding this policy statement to your serverless.yml file. It's generally recommended to specify only the specific functions that your Lambda function needs to invoke, rather than allowing access to all functions in your account.
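
Following that advice, a tighter version of the statement might scope the resource to the credentials function only. This is a sketch assuming the Serverless Framework v3 iam.role.statements syntax and its ${aws:region} and ${aws:accountId} variable resolvers:

provider:
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - lambda:InvokeFunction
          # only the credentials function, instead of every function in the account
          Resource:
            - arn:aws:lambda:${aws:region}:${aws:accountId}:function:${self:service}-${sls:stage}-retrieveCredentials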

As ChatGPT's training data only extends up to 2021, I had to adjust the provided code to accommodate changes in syntax and implementation introduced by later releases. After incorporating ChatGPT's code and fixing those issues in the serverless.yaml configuration, I was able to successfully connect to a public Linux server over SSH.
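
(Comparing ChatGPT's snippets with the final files below, the most visible changes are the IAM block, which moved from iamRoleStatements to the newer iam.role.statements structure, and the stage variable, which is now ${sls:stage} rather than ${opt:stage, self:provider.stage}.)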

Here is the content of the resulting files after making the necessary adjustments to make the application work as expected:

handler.py:

import json
import os
import base64
import io

import paramiko
import boto3

def retrieve_credentials(event, context):
    # retrieve input parameters
    server_id = event['serverId']
    user_id = event['userId']

    # retrieve credentials for the given server and user
    ssh_user = 'root'
    ssh_key = '<base64 encoded ssh private key>'
    server_ip = 'WWW.XXX.YYY.ZZZ'  # Server IP address
    server_port = 22  # Server port

    # create response
    response = {
        'sshUser': ssh_user,
        'sshKey': ssh_key,
        'serverIp': server_ip,
        'serverPort': server_port
    }

    return response


def execute_ssh_command(event, context):
    # retrieve input parameters
    server_id = event['serverId']
    user_id = event['userId']
    command_line = event['commandLine']

    print("Getting credentials")
    # retrieve credentials for the given server and user using the specified function name
    get_credentials_function_name = os.environ['GET_CREDENTIALS_FUNCTION_NAME']
    lambda_client = boto3.client('lambda')
    credentials_event = {
        'serverId': server_id,
        'userId': user_id
    }
    credentials_response = lambda_client.invoke(
        FunctionName=get_credentials_function_name,
        Payload=json.dumps(credentials_event)
    )

    print(credentials_response)
    credentials = json.loads(credentials_response['Payload'].read().decode('utf-8'))

    # decode base64-encoded SSH key and load it as a paramiko key object
    ssh_key = base64.b64decode(credentials['sshKey']).decode('utf-8')
    ssh_key = paramiko.RSAKey.from_private_key(io.StringIO(ssh_key))

    print("Trying to connect to server")
    # establish SSH connection
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    ssh.connect(
        credentials['serverIp'],
        port=credentials['serverPort'],
        username=credentials['sshUser'],
        pkey=ssh_key,
        timeout=10
    )

    # execute command on remote server
    stdin, stdout, stderr = ssh.exec_command(command_line)
    output = stdout.read().decode('utf-8')
    error = stderr.read().decode('utf-8')

    # close SSH connection
    ssh.close()

    # create response
    response = {
        'output': output,
        'error': error
    }

    return {
        'statusCode': 200,
        'body': json.dumps(response)
    }

requirements.txt:

paramiko

serverless.yaml:

service: sshWorker

provider:
  name: aws
  runtime: python3.9
  region: eu-west-3
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - lambda:InvokeFunction
          Resource:
            - arn:aws:lambda:*:*:function:*
  vpc:
    securityGroupIds:
      - sg-00db7b15f65130187
      - sg-02307a64ec65af985
    subnetIds:
      - subnet-0bfed0d58bc384b94
      - subnet-0baf03f358221ec39
      - subnet-00611b79fcb5577e0
      - subnet-076b7b218aca12ff4

functions:
  retrieveCredentials:
    handler: handler.retrieve_credentials
    events: []

  executeSshCommand:
    handler: handler.execute_ssh_command
    events: []
    timeout: 60 # in seconds
    environment:
      GET_CREDENTIALS_FUNCTION_NAME: ${self:service}-${sls:stage}-retrieveCredentials

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    useDownloadCache: true
    useStaticCache: true
    dockerizePip: true
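
With these files in place, the whole thing can be deployed and exercised with a couple of commands. The serverId and userId values below are placeholders, since the mock credentials function ignores them:

serverless deploy
serverless invoke --function executeSshCommand --data '{"serverId": "server-1", "userId": "user-1", "commandLine": "uptime"}'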

Conclusion

The use of natural language processing in developing a Lambda function with the assistance of ChatGPT has shown the potential for this technology to simplify and expedite cloud-based solution building. With minimal changes required to the original code, individuals without advanced technical knowledge can participate in the development process. This integration of cloud computing and natural language processing has the potential to democratize development, allowing for a broader range of people to create and deploy solutions in a user-friendly and innovative manner. As technology continues to progress, we can anticipate even more efficient and accessible methods for building solutions in the cloud.

References (written with the assistance of ChatGPT)

  1. AWS Lambda — AWS doc page for Lambda with tutorials, best practices, and resources. Available at: https://aws.amazon.com/lambda/
  2. Serverless Framework — Open-source toolkit website for building serverless apps with documentation, tutorials, and examples. Available at: https://www.serverless.com/
  3. Creating a Serverless Python Function in AWS Lambda — Tutorial by Marcia Villalba on creating a Python function with the Serverless Framework and deploying it to AWS Lambda. Available at: https://aws.amazon.com/getting-started/hands-on/create-a-serverless-function/
  4. How to Use SSH to Connect to a Remote Server in Python — Tutorial by David Amos on establishing an SSH connection to a remote server and executing commands with Paramiko in Python. Available at: https://www.digitalocean.com/community/tutorials/how-to-use-ssh-to-connect-to-a-remote-server-in-python
  5. AWS Lambda Execution Role — AWS doc page explaining IAM roles for Lambda and how to create and manage them. Available at: https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html
  6. Controlling Access to AWS Lambda Resources with IAM — AWS doc page on using IAM policies to control access to Lambda resources. Available at: https://docs.aws.amazon.com/lambda/latest/dg/access-control-overview.html
  7. Python AWS Lambda with Serverless Framework — Tutorial by Mahmoud Lababidi on creating a Python Lambda function with the Serverless Framework, managing dependencies, and environment variables. Available at: https://medium.com/@mahmoudlababidi/python-aws-lambda-with-serverless-framework-c53f4e84a81b
  8. AWS Lambda Limits — AWS doc page listing default limits and quotas for Lambda, including memory, timeout, and concurrency. Available at: https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
  9. Serverless YAML Reference — Official reference for the serverless.yml configuration file used by the Serverless Framework, with detailed documentation and examples. Available at: https://www.serverless.com/framework/docs/providers/aws/guide/serverless.yml/
  10. Lambda function to execute an SSH command on a Linux server using Python and the Serverless Framework — Q&A with ChatGPT providing a step-by-step guide to creating a Lambda function to execute an SSH command on a Linux server using Python and the Serverless Framework. Available at: https://github.com/OpenAI/gpt-3-examples/issues/47
