On integrating SaaS Contract Products with AWS Marketplace

Dirk Michel
16 min read · Mar 11, 2024


AWS Partner Network (APN) companies can list products on the AWS Marketplace, enabling customers to find, buy, deploy, and manage their software, data, and service offerings that run on AWS Cloud.

AWS Marketplace supports an increasing number of product types, such as Amazon Machine Image (AMI), container-based, machine-learning, and SaaS-based products and can be used as a transaction, billing, and fulfilment platform for free and paid products.

APN Partners start by completing the seller registration process on the AWS Marketplace Management Portal, the primary interface for managing seller information, product listings, and dashboards. Paid products extend the seller registration process to include payee details and tax information.

Registered sellers can then commence the product listing process, where AWS Marketplace sets out a range of listing requirements depending on (a) the product type, (b) the billing type, and (c) the pricing type. This blog focuses on the SaaS-based product type, which enables various billing options: Contract or upfront billing, Contract with Consumption (metered) billing, and Subscription pay-as-you-go billing. Finally, AWS Marketplace allows the seller to define public and private offer models.

For those on a tight time budget: The TL;DR of the following sections is to show an integration approach with AWS Marketplace for a SaaS, contract-based, private offer listing with a fulfilment website that customers are sent to after subscribing. The listing requirements can be implemented as a serverless architecture using Amazon Route53, AWS Certificate Manager, Amazon CloudFront, AWS Lambda@Edge, Amazon DynamoDB, and an Amazon S3 web endpoint.

All product listings are managed through the AWS Marketplace Management Portal, not the AWS Management Console. Each product listing is guided by a sequence of listing lifecycle stages, beginning with staging, followed by limited, and then public. These stages are helpful as they group the requirements and activities into bite-sized chunks.

During the staging phase, we provide our product information assets through a guided, self-service “wizard-based” workflow for entering our product description, logo, and other details, such as product support, contact information, refund handling, terms of use (EULA), and pricing dimensions; all of which AWS Marketplace then uses to render our product representation on the Marketplace. The wizard workflow also asks us to choose an access method through which customers reach our product and fulfil a purchase: we choose the Fulfilment URL option, aka the Product registration URL, which determines the SaaS Contract integration requirements we need to comply with during the limited phase.

In the limited phase, we implement the fulfilment website (which is interchangeably referred to as the “registration website” or the “landing page” in the AWS documentation) and integrate it with AWS Marketplace APIs to handle several customer purchase, registration, and onboarding scenarios. This phase also enables testing our website through allowlisted AWS Account IDs we can define, from which we initiate and validate the customer registration and onboarding flow.

The key integration requirement is configuring the SaaS product through a “registration landing page” that handles and accepts new buyers. This is where the magic happens, as AWS Marketplace can redirect customer purchase actions to our software’s registration fulfilment website.

When a buyer completes a product purchase, AWS Marketplace redirects the customer, and the customer’s browser sends a POST request containing a temporary token with the customer’s identifier to our registration website. From that point onwards, we must comply with a range of post-processing requirements for our product to be approved and launched on AWS Marketplace:

  1. Our fulfilment URL domain name must be resolvable and capable of accepting a POST redirect issued by AWS Marketplace.
  2. Extract the registration token, redeem it by calling the ResolveCustomer API, and exchange it for a customer identifier, customer AWS account ID, and product code. Call the GetEntitlements API to validate the customer's purchase entitlements. Entitlements map onto the pricing dimensions that we set during the staging phase.
  3. The fulfilment website needs to present contact and support information to the buyer and indicate the next steps to access the product.
  4. Define an accounts database to store an entry for each customer, with a column for the AWS customer identifier. Create an Amazon SNS topic that issues notifications when customers subscribe or unsubscribe to our products.

The following diagram illustrates this.

Serverless fulfilment website and integration with AWS Marketplace

Once our integration testing cycle is complete, we request the transition to the public phase through the AWS Marketplace Management Portal. The product visibility status will stay in the “under review” state until a member from the AWS Marketplace Ops team completes their cross-validation by verifying that we have successfully called the relevant API operations and sufficiently onboarded new customers.

Now we build! To follow along, install a Python environment manager such as Anaconda and the AWS Cloud Development Kit (CDK). CDK lets us use Python and other supported programming languages to write compact code that generates AWS CloudFormation. The Python snippets provide working code to illustrate some of the configuration options.

Let’s do it.

Describing the CDK project layout is a good way to start, as it shows the buildout of the fulfilment website and integration to AWS Marketplace.

We structure the CDK Python project with multiple stacks, beginning with a top-level app.py module. CDK will initialise the application when executing “cdk deploy” from the command line. The following snippet shows the Stacks and how they are organised and sequenced. A single stack could handle all the resources we need, but grouping resources into multiple smaller stacks can be a useful habit.

#!/usr/bin/env python3
import aws_cdk as cdk

from stacks.amp_fulfillment_stack import AmpFulfillmentStack
from stacks.amp_edge_stack import AmpEdgeStack

app = cdk.App()

account_id = "YOUR SELLER ACCOUNT ID"
region = "us-east-1"

stack1 = AmpEdgeStack(app, "AmpEdgeStack",
    description="Provision TLS certs",
    termination_protection=False,
    cross_region_references=True,
    tags={"marketplace": "my_product"},
    env=cdk.Environment(region=region, account=account_id),
)

stack2 = AmpFulfillmentStack(app, "AmpFulfillmentStack",
    description="Provision fulfilment site and backend for AWSMP API integration",
    termination_protection=False,
    cross_region_references=True,
    tags={"marketplace": "my_product"},
    env=cdk.Environment(region=region, account=account_id),
    cert=stack1.acm_certificate,
    web_bucket=stack1.website_bucket,
)

cdk.Tags.of(stack1).add(key="project", value="awsmp")
cdk.Tags.of(stack2).add(key="project", value="awsmp")

stack2.add_dependency(stack1)

app.synth()

The following sections reference the Stacks and show how to validate our steps incrementally.

As shown in the app.py snippet, we deliberately choose which AWS Account and Region host the fulfilment site and supporting resources, as some subtleties need to be navigated. For example, the AWS Marketplace APIs must be called from the seller account ID used to publish the SaaS application through the AWS Marketplace Management Portal. Equally, the AWS Marketplace APIs are hosted in the us-east-1 Region, and some of the other cloud resources must be provisioned there as well: using Amazon CloudFront with a custom domain name and HTTPS requires ACM-issued TLS certificates located in us-east-1, and AWS Lambda@Edge functions, used in conjunction with Amazon CloudFront, must also be created in us-east-1, from where they are replicated into Amazon CloudFront edge locations. As a result, we prefer to deploy the CDK stack resources directly into us-east-1.

Hosting the fulfilment website

One way to provision the fulfilment website and Amazon S3 bucket with CDK is to use the Amazon S3 Construct Library. When using S3 web hosting, the bucket name is defined as the domain name, and we can keep the bucket private and unreachable directly and enforce TLS. Connectivity to the website on S3 will be securely handled via Amazon CloudFront.

The website (an index.html and some image and styling files) should contain further instructions and contact information for completing the onboarding process. The website assets can be conveniently uploaded to the Amazon S3 bucket with the s3deploy.BucketDeployment construct. The below snippet illustrates this.

from aws_cdk import (
    Duration,
    Stack,
    RemovalPolicy,
    aws_s3 as s3,
    aws_s3_deployment as s3deploy,
    aws_certificatemanager as acm,
    aws_iam as iam,
)
from constructs import Construct


class AmpEdgeStack(Stack):

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        domain = "awsmp.example.com"

        ### defining the s3 bucket used for web hosting the fulfillment website
        website_bucket = s3.Bucket(self, "AmpFulfillementSite",
            bucket_name=domain,
            website_index_document="index.html",
            auto_delete_objects=True,
            versioned=True,
            removal_policy=RemovalPolicy.DESTROY,
            public_read_access=False,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            enforce_ssl=True,
            intelligent_tiering_configurations=[
                s3.IntelligentTieringConfiguration(
                    name="my_s3_tiering",
                    prefix="prefix",
                    tags=[s3.Tag(
                        key="owner",
                        value="the owner name"
                    )]
                )],
            lifecycle_rules=[
                s3.LifecycleRule(
                    noncurrent_version_expiration=Duration.days(7)
                )
            ],
        )

        ### uploading the fulfilment website to the S3 website bucket
        s3deploy.BucketDeployment(self, "DeployWebsite",
            sources=[s3deploy.Source.asset("./assets")],
            destination_bucket=website_bucket,
            destination_key_prefix="/"
        )

        self.website_bucket = website_bucket  # Reference for a downstream Stack

        ### defining the acm cert in us-east-1 for cloudfront
        acm_cert = acm.Certificate(self, "AmpFulfillementSiteCertificate",
            domain_name=domain,
            validation=acm.CertificateValidation.from_dns(),
        )

        self.acm_certificate = acm_cert  # Reference for a downstream Stack

Then, the TLS certificate for the bucket domain name is defined, which Amazon CloudFront uses to enable HTTPS. Amazon S3 bucket hosting does not support HTTPS directly.

Issuing a TLS certificate for a domain name requires validating that we own it. This validation step can be automated by passing a HostedZone object into the .from_dns() method; calling it without one, as shown in the snippet, means manual DNS validation is required before the certificate is issued and the CDK stack can run to completion.

Notice how the snippet defines objects for downstream Stacks: the bucket and certificate objects are exposed as instance attributes, referenced in app.py, and then passed as keyword arguments into the downstream Stack class.

Fronting the website

Amazon CloudFront is an excellent choice for our stack as it handles custom domain names, supports HTTPS and Amazon S3-backed websites as origins, and enables various options for handling and processing incoming HTTP(S) requests before they are forwarded to the origin fulfilment website, in effect fronting the site. The content delivery and caching features of Amazon CloudFront can also become important, depending on the expected customer interaction volume.

The CDK library for Amazon CloudFront defines the CloudFrontWebDistribution, the original construct for working with CloudFront distributions. Alternatively, the Distribution construct provides a somewhat newer method for working with Amazon CloudFront distributions. The OriginAccessIdentity enables the distribution to access the S3 origin securely and adds the required resource policy statements to our bucket. An important concept with Amazon CloudFront distributions is the behaviour configuration, which defines many of the details we care about most, for example, the allowed request methods, the AWS Lambda function version association, and the TLS certificate that should be used. The following snippet shows how.

from aws_cdk import (
    Duration,
    Stack,
    RemovalPolicy,
    aws_iam as iam,
    aws_cloudfront as cloudfront,
    aws_lambda as lambda_,
    aws_lambda_destinations as destinations,
    aws_sns as sns,
    aws_dynamodb as dynamodb,
)
from constructs import Construct


class AmpFulfillmentStack(Stack):

    def __init__(self, scope: Construct, construct_id: str, cert, web_bucket, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # defining a cloudfront OAI, so the bucket website can be accessed by cloudfront
        cf_origin_access_identity = cloudfront.OriginAccessIdentity(self, "AmpFulfillementSiteOAI",
            comment="AWSMP Fulfillment Site OAI"
        )

        # defining the cloudfront distribution
        cf_distribution = cloudfront.CloudFrontWebDistribution(self, "AmpFulfillementSiteDistribution",
            origin_configs=[
                cloudfront.SourceConfiguration(
                    s3_origin_source=cloudfront.S3OriginConfig(
                        s3_bucket_source=web_bucket,  # passing in the bucket object created in stack 1
                        origin_access_identity=cf_origin_access_identity
                    ),
                    behaviors=[
                        cloudfront.Behavior(
                            is_default_behavior=True,
                            allowed_methods=cloudfront.CloudFrontAllowedMethods.ALL
                        )
                    ]
                )
            ],
            error_configurations=[
                {
                    "errorCode": 403,
                    "responseCode": 200,
                    "responsePagePath": "/error.html"
                }
            ],
            viewer_protocol_policy=cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
            price_class=cloudfront.PriceClass.PRICE_CLASS_100,
            viewer_certificate=cloudfront.ViewerCertificate.from_acm_certificate(
                certificate=cert,  # passing in the acm cert object created in stack 1
                aliases=["awsmp.example.com"],
                security_policy=cloudfront.SecurityPolicyProtocol.TLS_V1_2_2021,
            )
        )

        [...]

Amazon CloudFront creates the public distribution endpoint, which we can reference from within the Amazon Route53 Public Hosted Zone for the corporate domain. Once we add an alias A record pointing at the distribution's domain name, our custom domain name will resolve to the distribution with a valid TLS certificate.

Setting the allowed methods for the cache behaviour is important, as AWS Marketplace redirects customer registrations to the fulfilment site. That redirect comes in as a POST method, containing information we need to extract before we call the AWS Marketplace APIs to complete the customer registration and onboarding. Hence, we set cloudfront.CloudFrontAllowedMethods.ALL to include the POST method; otherwise, Amazon CloudFront will drop the POST request and log an error.

AWS Marketplace redirects as a POST method for every customer purchase action, and we can use an AWS Lambda function to process incoming Amazon CloudFront viewer request events.

Event-based processing

Event processing at the edge with Amazon CloudFront can be achieved with CloudFront Functions and Lambda@Edge functions. We use Lambda@Edge due to its Python runtime and full application logic support.

With that, we define a “straight-up” AWS Lambda function for event-based processing that should run whenever AWS Marketplace issues POST requests towards our fulfilment URL.

As with any AWS Lambda function, we define its Execution Role and attach the applicable IAM permission policies for it to call other AWS services. In our case, we need the function to call two different AWS Marketplace APIs: the MarketplaceMetering API and the MarketplaceEntitlementService API. Additionally, we want the Lambda function to write its invocation logs into Amazon CloudWatch, send messages to an Amazon SNS topic, and write to an Amazon DynamoDB table; more on these later.

We then pass the role object and the Amazon SNS topic object into the CDK AWS Lambda function construct to create the function. The Lambda Destination is defined with the on_success parameter, which tells the function to send invocation records to the Amazon SNS topic we defined. With this, we implement another integration requirement for AWS Marketplace customer onboarding.

Notice how we define the function timeout duration: we intend to use it as a CloudFront Lambda@Edge function, which enforces a maximum runtime duration of 5 seconds for viewer-request functions.

The AWS Lambda function also needs the function code it should execute during invocation. We can conveniently pass in the directory name containing the function code with lambda_.Code.from_asset("<directory name>"), as shown in the following snippet.


[...]
        # creating the lambda execution role
        lambda_role = iam.Role(
            self,
            "lambda_role",
            assumed_by=iam.CompositePrincipal(
                iam.ServicePrincipal("lambda.amazonaws.com"),
                iam.ServicePrincipal("edgelambda.amazonaws.com")
            ),
            managed_policies=[
                iam.ManagedPolicy.from_aws_managed_policy_name("service-role/AWSLambdaBasicExecutionRole")
            ],
        )

        # adding dynamodb permissions to the lambda execution role
        lambda_role.add_to_policy(
            iam.PolicyStatement(
                effect=iam.Effect.ALLOW,
                actions=[
                    "dynamodb:DescribeTable",
                    "dynamodb:Query",
                    "dynamodb:Scan",
                    "dynamodb:GetItem",
                    "dynamodb:PutItem",
                    "dynamodb:UpdateItem",
                    "dynamodb:DeleteItem",
                ],
                resources=["arn:aws:dynamodb:*:*:table/*"],
            )
        )

        # adding entitlement permissions to the lambda execution role
        lambda_role.add_managed_policy(
            iam.ManagedPolicy.from_aws_managed_policy_name(
                "AWSMarketplaceGetEntitlements"
            )
        )

        # adding the resolvecustomer permission to the lambda execution role
        lambda_role.add_to_policy(
            iam.PolicyStatement(
                effect=iam.Effect.ALLOW,
                actions=[
                    "aws-marketplace:ResolveCustomer",
                ],
                resources=["*"],
            )
        )

        # creating an sns topic that we then use as a lambda destination
        awsmp_sns_topic = sns.Topic(self, "awsmp_sns_topic")

        # creating the lambda function
        new_lambda_function = lambda_.Function(
            self,
            "new_awsmp_lambda_function",
            description="the Lambda function we will define as lambda@edge for cloudfront",
            runtime=lambda_.Runtime.PYTHON_3_11,
            handler="lambda.lambda_handler",
            code=lambda_.Code.from_asset("lambda"),
            role=lambda_role,
            timeout=Duration.seconds(5),
            on_success=destinations.SnsDestination(awsmp_sns_topic),
            current_version_options=lambda_.VersionOptions(
                description="lambda version publication",
            )
        )

        # creating a lambda function version so it can be referenced by a cloudfront behaviour as a lambda@edge function
        numbered_version = new_lambda_function.current_version

        # now we create a backend dynamodb table to persist customer onboarding details
        dynamodb_table = dynamodb.Table(self, "AmpFulfilmentTable",
            table_name="awsmp",
            partition_key=dynamodb.Attribute(name="customer-id", type=dynamodb.AttributeType.STRING),
            billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
            removal_policy=RemovalPolicy.DESTROY,
        )

The Lambda function should be invoked every time our Amazon CloudFront distribution receives a redirect POST request from AWS Marketplace. We associate the AWS Lambda function with our Amazon CloudFront distribution behaviour to do this. This association turns a regular AWS Lambda function into an Amazon CloudFront Lambda@Edge function that CloudFront replicates into its edge locations.

The function association must reference a numbered AWS Lambda version and does not accept function aliases or the $LATEST version. Therefore we create an AWS Lambda version with the .current_version property of the Function CDK construct. The numbered version can then be passed into the behaviour as a viewer-request function, with the parameter to include the event body. The AWS Lambda function needs access to the request body to parse the registration token.

The last block of the snippet creates an Amazon DynamoDB table as persistent backend storage for the customer onboarding details we receive from the AWS Marketplace POST redirect and the responses we obtain from the AWS Marketplace APIs. Again, this completes another integration requirement of AWS Marketplace.
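As a side note, the low-level DynamoDB API used later in the Lambda function expects items in DynamoDB's attribute-value format, where each string value is wrapped as {"S": ...}. The following is a minimal sketch of a hypothetical helper that builds such an item from a plain dictionary; the helper name and field names are illustrative and not part of the stack:

```python
def to_ddb_item(record: dict) -> dict:
    """Wrap plain string values in DynamoDB's low-level {'S': ...} attribute format."""
    return {key: {"S": str(value)} for key, value in record.items()}

# Example: shape a customer record the way put_item expects it.
item = to_ddb_item({"customer-id": "cust-1234", "product-code": "prod-5678"})
print(item)  # {'customer-id': {'S': 'cust-1234'}, 'product-code': {'S': 'prod-5678'}}
```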

Lambda@Edge function code

Now, we look at the AWS Lambda function code we need to process the incoming customer registration redirects from AWS Marketplace. Amazon CloudFront passes the request object into Lambda@Edge functions with a specific event structure. The event structure is important to define our parser details, including the body. The following snippet shows the JSON structure the Lambda handler receives:

{
    "Records": [
        {
            "cf": {
                "config": {
                    ...
                },
                "request": {
                    "body": {
                        "data": "eC1hbXpuLW1hcmtldHBsYWNlLXRva2VuPU1BaG9uSTBQYXN6WkU0eWwlMkJiN3NXU3FsU2F3d1dsRjNDTVZrWFlFaEdnamRMRzBhMGpjbEwycG5OYTg4eSUyRkU0aGJjZVVQNnp3OFNIRXdVd3F1Zk1IZCUyQnk4S2JiYlFkVUZ3d2QlMkZkcFJoZ2dBcHJXbFkyWTlkM3RhZ0pJV1pSSVdzaWNUYXZCUEF0ekVZRGN5QkclMkZUWVpWaEFkbyUyRnhDaFQ1UjI5VFR1VUJjc3JDODMxUm1aUDhnJTNEJTNE",
                        ...
                    },
                    "clientIp": "a.b.c.d",
                    "headers": {
                        "host": [
                            {
                                ...
                            }
                        ],
                        "content-type": [
                            {
                                "key": "content-type",
                                "value": "application/x-www-form-urlencoded"
                            }
                        ],
                        ...
                    },
                    "method": "POST",
                    "querystring": "",
                    "uri": "/"
                }
            }
        }
    ]
}

A sample event that follows the above structure can be used to configure Lambda test events, which is very useful during initial functional testing of the code.

Now we can access the fields in the event structure, including the [Records][0][cf][request][body][data] field, which Amazon CloudFront passes into Lambda@Edge as a base64-encoded sequence.
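As a quick sanity check, the decode-and-parse sequence can be exercised locally with the standard library alone. The token value below is a made-up placeholder; only the field name x-amzn-marketplace-token comes from the AWS Marketplace redirect:

```python
import base64
import urllib.parse

# Build a body the way CloudFront would deliver it: a base64-encoded,
# form-urlencoded string (the token value here is a made-up placeholder).
data = base64.b64encode(b"x-amzn-marketplace-token=EXAMPLETOKEN%3D%3D").decode("ascii")

# Decode the base64 layer, then parse the form body;
# parse_qs also percent-decodes the values for us.
body = base64.b64decode(data).decode("ascii")
fields = urllib.parse.parse_qs(body)
reg_token = fields["x-amzn-marketplace-token"][0]
print(reg_token)  # EXAMPLETOKEN==
```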

The Lambda function code then parses out the fields we need into variables, and, being good AWS Lambda citizens, we log them out and handle exceptions for our future selves and others who may need to work with the function code later on. Additionally, errors such as invalid registration tokens can be made visible to the customer directly by returning an error page from within the function.
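The function code below returns the same CloudFront-shaped error response from several except blocks. For readability, that dict could be built by a small helper; the following sketch is illustrative and not part of the original code, assuming we always serve a static HTML body with a short cache lifetime:

```python
def html_response(body: str, status: str = "200", description: str = "OK") -> dict:
    """Build a CloudFront Lambda@Edge viewer response that serves an HTML body."""
    return {
        "status": status,
        "statusDescription": description,
        "headers": {
            # header names are lowercase keys; each maps to a list of key/value pairs
            "cache-control": [{"key": "Cache-Control", "value": "max-age=100"}],
            "content-type": [{"key": "Content-Type", "value": "text/html"}],
        },
        "body": body,
    }

resp = html_response("<p>Oh no, something went wrong!</p>")
print(resp["status"])  # 200
```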

When looking for the AWS Lambda function logs, remember that Lambda@Edge logs are regionalised. Lambda@Edge functions are replicated into AWS edge locations where they are invoked; hence, they log into the serving AWS Region's Amazon CloudWatch log group named
/aws/lambda/us-east-1.<function-name>. Additionally, CloudFront logs invalid Lambda@Edge function responses by default and, perhaps confusingly, creates an additional regionalised log group named /aws/cloudfront/LambdaEdge/<distribution-id>.
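As a memory aid, both log group names can be derived from the function name and the distribution id; the values below are placeholders:

```python
function_name = "new_awsmp_lambda_function"
distribution_id = "E2EXAMPLE"  # placeholder CloudFront distribution id

# Lambda@Edge invocation logs land in the Region of the serving edge location,
# but the log group name keeps the originating us-east-1 prefix.
lambda_log_group = f"/aws/lambda/us-east-1.{function_name}"

# CloudFront's log group for invalid Lambda@Edge responses is also regionalised.
cloudfront_log_group = f"/aws/cloudfront/LambdaEdge/{distribution_id}"

print(lambda_log_group)      # /aws/lambda/us-east-1.new_awsmp_lambda_function
print(cloudfront_log_group)  # /aws/cloudfront/LambdaEdge/E2EXAMPLE
```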

We start by parsing out a set of event records, which we can then use throughout the function code, followed by the if condition block containing the main steps that handle the POST request:

a/ We access the [body][data] field and then decode the data to arrive at the registration token, free of the bytes-literal prefix (b'...') and percent-hex encoding, which we assign to the regToken variable.

b/ We call the metering marketplace API to obtain the customer ID, and then, once in hand, we use that customer ID to call the marketplace-entitlement API to retrieve the entitlement, i.e. the pricing dimensions that the buyer selected as part of the purchase.

c/ We call the Amazon DynamoDB API to persist the customer purchase records for future use. The Amazon DynamoDB table is defined with the customer ID as its primary key.

d/ We add a final redirect clause at the end of the if block that answers the initial AWS Marketplace POST with a redirect, so the browser follows up with a GET request to the website. In that way, the initial viewer POST request never arrives at our site, which is important in our case, as S3 website endpoints only support GET and HEAD methods.

The following function code snippet shows the steps.

import base64
import urllib.parse as urlparse

import boto3

CONTENT = """
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Your Marketplace Registration</title>
</head>
<body>
<p>Oh no, something went wrong! Please contact us on info@awsmp.example.com !</p>
</body>
</html>
"""

client = boto3.client("dynamodb", region_name="us-east-1")


def lambda_handler(event, context):

    config = event["Records"][0]["cf"]["config"]
    request = event["Records"][0]["cf"]["request"]

    headers = request["headers"]
    host = headers["host"][0]["value"]

    method = request["method"]
    client_ip = request["clientIp"]
    request_id = config["requestId"]
    uri = request["uri"]

    if method == "POST":

        try:
            print("LOG FULL_REQUEST_OBJECT:", request)
            data = request["body"]["data"]
            print("LOG DATA:", data)
            PostBody = base64.b64decode(data)
            print("LOG POST_BODY:", PostBody)
            kv = PostBody.decode("ascii")
            print("LOG KV:", kv)
            kvp = urlparse.parse_qs(kv)
            print("LOG KVP:", kvp)
            regToken = kvp["x-amzn-marketplace-token"][0]
            print("LOG TOKEN:", regToken)
        except Exception as e:
            print("FAIL: No POST body found!")
            print("FAIL: Exception:", e)
            response = {
                "status": "200",
                "statusDescription": "OK",
                "headers": {
                    "cache-control": [
                        {
                            "key": "Cache-Control",
                            "value": "max-age=100"
                        }
                    ],
                    "content-type": [
                        {
                            "key": "Content-Type",
                            "value": "text/html"
                        }
                    ]
                },
                "body": CONTENT
            }
            return response

        if regToken:
            try:
                # calling the meteringmarketplace api to exchange the token for a customer id
                print("LOG CALLING_AWSMP_WITH_TOKEN: ", regToken)
                marketplaceClient = boto3.client("meteringmarketplace", region_name="us-east-1")
                customerData = marketplaceClient.resolve_customer(
                    RegistrationToken=regToken
                )
                print("FULL_RESOLVE_CUSTOMER_RESPONSE: ", customerData)

                customer_id = customerData["CustomerIdentifier"]
                print("LOG CUSTOMER_ID: ", customer_id)
                product_code = customerData["ProductCode"]
                print("LOG PRODUCT_CODE: ", product_code)
                customer_account_id = customerData["CustomerAWSAccountId"]
                print("LOG CUSTOMER_ACCOUNT_ID: ", customer_account_id)

            except Exception as e:
                print("FAIL: Registration token invalid or expired!")
                print("FAIL: Exception:", e)
                response = {
                    "status": "200",
                    "statusDescription": "OK",
                    "headers": {
                        "cache-control": [
                            {
                                "key": "Cache-Control",
                                "value": "max-age=100"
                            }
                        ],
                        "content-type": [
                            {
                                "key": "Content-Type",
                                "value": "text/html"
                            }
                        ]
                    },
                    "body": CONTENT
                }
                return response

            try:
                # calling the marketplace-entitlement api to obtain entitlements for the resolved customer id
                marketplaceClient = boto3.client("marketplace-entitlement", region_name="us-east-1")
                productCode = product_code
                customerID = customer_id

                entitlement = marketplaceClient.get_entitlements(
                    ProductCode=productCode,
                    Filter={
                        "CUSTOMER_IDENTIFIER": [
                            customerID,
                        ]
                    }
                )
                print("ENTITLEMENT_API_RESPONSE: ", entitlement)

            except Exception as e:
                print("FAIL: No valid entitlements received for customer id!")
                response = {
                    "status": "200",
                    "statusDescription": "OK",
                    "headers": {
                        "cache-control": [
                            {
                                "key": "Cache-Control",
                                "value": "max-age=100"
                            }
                        ],
                        "content-type": [
                            {
                                "key": "Content-Type",
                                "value": "text/html"
                            }
                        ]
                    },
                    "body": CONTENT
                }
                return response

            try:
                # persisting successful customer subscription data (resolved customer id and entitlements) into ddb (not writing entitlements to ddb; TODO)
                response = client.put_item(
                    Item={
                        "customer-id": {
                            "S": customer_id,
                        },
                        "account-id": {
                            "S": customer_account_id,
                        },
                        "product-code": {
                            "S": product_code,
                        },
                        "method": {
                            "S": method,
                        },
                        "client-ip": {
                            "S": client_ip,
                        },
                        "request-id": {
                            "S": request_id,
                        },
                        "uri": {
                            "S": uri,
                        },
                        "host": {
                            "S": host,
                        },
                    },
                    ReturnConsumedCapacity="TOTAL",
                    TableName="awsmp",
                )
                print("LOG DATA_WRITTEN_TO_DDB:", customer_id, customer_account_id, product_code, method, client_ip, request_id, uri, host)

            except Exception as e:
                print("FAIL: Cannot write to DDB!")
                print("FAIL: Exception:", e)
                response = {
                    "status": "200",
                    "statusDescription": "OK",
                    "headers": {
                        "cache-control": [
                            {
                                "key": "Cache-Control",
                                "value": "max-age=100"
                            }
                        ],
                        "content-type": [
                            {
                                "key": "Content-Type",
                                "value": "text/html"
                            }
                        ]
                    },
                    "body": CONTENT
                }
                return response

            # when everything runs through, we send a redirect back to the customer's browser to load the fulfillment website
            response = {
                "status": "302",
                "statusDescription": "Found",
                "headers": {
                    "location": [{
                        "key": "Location",
                        "value": "https://awsmp.example.com/"
                    }]
                }
            }
            return response

    else:
        # for all other methods (e.g. the follow-up GET), pass the request
        # through to the origin website unchanged
        return request

This completes our integration work, and we are ready for an end-to-end dry run using the allowlisted AWS accounts. We subscribe to our own product on AWS Marketplace, just as a customer would, and test that the integration works as expected. AWS Marketplace can temporarily reduce the product price to $0.01, facilitating end-to-end validation with negligible charges.

Iterating on the validation is possible by allowlisting more than one of our accounts. However, it's unlikely to be needed, as we have plenty of validation opportunities with the AWS Lambda test feature.

Upon completion of the validation, we can finally use the AWS Marketplace Management Portal to request the product visibility status change to public. The AWS Marketplace Operations team will then be notified to execute a final verification that we can successfully call the AWS Marketplace APIs and “sufficiently” onboard new customers.

Conclusions

AWS Partner Network companies can list SaaS Contract products on AWS Marketplace by complying with a range of integration requirements for sellers, designed to achieve a consistent experience for buyers. The requirements centre around creating a fulfilment website integrated with the AWS Marketplace APIs to process customer registrations appropriately. The website can be implemented as a serverless architecture defined in an AWS CDK application, using frontend services such as Amazon Route53, AWS Certificate Manager, Amazon CloudFront, AWS Lambda@Edge, and an Amazon S3 website endpoint; an Amazon DynamoDB table can handle the backend storage of customer data. Deploying the CDK app into us-east-1 avoids cross-region complexities and simplifies the implementation. Testing the AWS Lambda function code with a sample JSON event can reduce the validation time and help reduce iterations with the AWS Marketplace Ops team during the final publication stage of the product listing process.


Dirk Michel

SVP SaaS and Digital Technology | AWS Ambassador. Talks Cloud Engineering, Platform Engineering, Release Engineering, and Reliability Engineering.