Running Quantum Jobs in AWS Braket From Your Laptop

Fernando de la Iglesia · Published in The Startup · Dec 6, 2020 · 8 min read

As you probably know, some months ago Amazon Web Services (AWS) released its fully managed quantum computing service, Braket (yes, bra-ket), as generally available. This is great because, with just your existing AWS account, the service gives you access to several actual quantum computing devices, in addition to simulators. The currently supported quantum devices are:

  • Rigetti Aspen-8 with 31 qubits
  • IonQ device with 11 qubits
  • D-Wave quantum annealers 2000Q_6 (2000+ qubits) and Advantage 1.1 (5000+ qubits)

It is quite easy to run quantum jobs using Amazon Braket Notebook instances, and the instructions for that are very clear. But if you want to run quantum jobs from your laptop, or from any other system, using the Braket SDK and the corresponding API, I found the instructions less clear. I therefore think it is worth explaining the end-to-end process here, using very simple example algorithms.

AWS access key and secret key

Of course the first step is to have an AWS account and, in order to use the API, to create an access key ID and secret access key pair. The process is very simple and well documented in https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html. For the examples below I will use AKIAIOSFODFF7EXAMPLE as the access key ID and wJalrXUtnFerm/K7MDENG/bPxRfiCYEXAMPLEKEY as the secret key.

You need to configure your environment so that the Python scripts, including the SDK (see next), can use these keys. If you are using the AWS CLI you can configure the environment with:

> aws configure

and in the process the CLI will prompt for the keys and write them to the proper file. If you have not installed or are not using the AWS CLI, you can manually create the text file where the AWS SDK will look for the keys. Depending on the operating system, the file is ~/.aws/credentials on Linux and macOS or C:\Users\USERNAME\.aws\credentials on Windows, and its content must be:

> cat ~/.aws/credentials
[default]
aws_access_key_id=AKIAIOSFODFF7EXAMPLE
aws_secret_access_key=wJalrXUtnFerm/K7MDENG/bPxRfiCYEXAMPLEKEY

Alternatively, you can configure the keys directly in the script that is going to run the quantum job (see later), but hardcoding credentials is not a good practice and therefore I will not rely on it here.
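For completeness, this is roughly what that discouraged option would look like: a boto3 session built with explicit keys and handed to the Braket SDK through an AwsSession object. This is a sketch only and is not used in the rest of the post:

# Discouraged sketch: hardcoded credentials passed explicitly to the SDK
import boto3
from braket.aws import AwsSession, AwsDevice

boto_session = boto3.Session(
    aws_access_key_id="AKIAIOSFODFF7EXAMPLE",
    aws_secret_access_key="wJalrXUtnFerm/K7MDENG/bPxRfiCYEXAMPLEKEY",
    region_name="us-east-1",
)
aws_session = AwsSession(boto_session=boto_session)
device = AwsDevice("arn:aws:braket:::device/qpu/ionq/ionQdevice", aws_session=aws_session)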

Enable Braket

Before you can run quantum jobs in Braket you have to enable it. Again the process is straightforward and very well described in https://docs.aws.amazon.com/braket/latest/developerguide/braket-enable-overview.html.

In the process of enabling Braket you create (or select a previously existing) S3 bucket and folder to store the results of the quantum jobs you execute. You are required to declare this bucket and folder when running quantum jobs on actual quantum devices, because that is where the results will be stored. The bucket must be named in the form amazon-braket-uniqueString.
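If you prefer the command line to the console, a bucket with a valid name can also be created with the AWS CLI (the name below is only a placeholder; bucket names must be globally unique):

> aws s3 mb s3://amazon-braket-your-unique-string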

Boto3 and the AWS Braket SDK

The AWS Braket SDK supports Python, and therefore you need Boto3, the AWS SDK for Python, to be able to configure and manage AWS services, including AWS Braket.

A note before going on: when installing the required python packages to run quantum annealing jobs (see the corresponding section below), I had some problems using Python 3.8, therefore I’m using Python 3.7.

The installation of both SDKs is easily accomplished using pip:

> pip install boto3
> pip install amazon-braket-sdk
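If you want a quick check that both packages landed in the active environment (a minimal sanity check of my own, not part of the official instructions), you can run:

# Quick check that the SDKs are importable
import boto3
import braket.circuits
print("boto3 version:", boto3.__version__)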

Creating the Python script

Just a warning before starting to describe the scripts: the intention of this post is only to show how to run them from your laptop (or another remote system), not to deal with the complexity of managing the job status, retrieving the results once the job is completed, or adapting the algorithm to the gates available in each real device (in the gate-based devices, of course). That said, let us proceed.

As usual the first thing to write is the import of the required modules:

# AWS import Boto3
import boto3
# AWS imports: Import Braket SDK modules
from braket.circuits import Circuit
from braket.aws import AwsDevice
# OS import to load the region to use
import os
# The region name must be configured
os.environ['AWS_DEFAULT_REGION'] = "us-east-1"

The default region can be configured in the script as shown above, or, if you prefer, in the corresponding config file (or via the AWS CLI): ~/.aws/config on Linux and macOS or C:\Users\USERNAME\.aws\config on Windows.
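For reference, the config file only needs to contain the region, for example:

> cat ~/.aws/config
[default]
region=us-east-1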

Now we can declare the S3 bucket and folder created before, when onboarding to Braket:

# When running in real QPU you must enter the S3 bucket you created
# during on boarding to Braket in the code as follows
my_bucket = f"amazon-braket-your-bucket" # the name of the bucket
my_folder = "YourFolder" # the name of the folder in the bucket
s3_folder = (my_bucket, my_folder)

And the device where our algorithm will be executed:

# Set up device
device = AwsDevice("arn:aws:braket:::device/qpu/ionq/ionQdevice")

For this example I will be using the IonQ device. The available devices, including the State Vector simulator, and their corresponding ARNs are listed at https://docs.aws.amazon.com/braket/latest/developerguide/braket-devices.html. ARN stands for Amazon Resource Name and is the way in which AWS uniquely identifies resources. We will come back to the D-Wave devices later in this post.
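If you first want to debug the circuit without consuming QPU time, the same code works against the SV1 managed simulator, and the SDK also bundles a local simulator that needs neither an S3 bucket nor any AWS call. A quick sketch:

# Managed State Vector simulator (SV1) instead of a QPU
device = AwsDevice("arn:aws:braket:::device/quantum-simulator/amazon/sv1")
# Or the local simulator shipped with the SDK
from braket.devices import LocalSimulator
local_device = LocalSimulator()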

Time to write our algorithm code. In this simple example I will use the well-known quantum teleportation algorithm:

# Create the Teleportation Circuit
circ = Circuit()
# Put the qubit to teleport in some superposition state, very simple
# in this example
circ.h(0)
# Create the entangled state (qubit 1 remains with Alice while qubit 2
# is sent to Bob)
circ.h(1).cnot(1, 2)
# Teleportation algorithm
circ.cnot(0, 1).h(0)
# Do the trick with deferred measurement
circ.h(2).cnot(0, 2).h(2) # Control Z 0 -> 2 (decomposed because
# IonQ does not have a native Ctrl-Z)
circ.cnot(1, 2) # Control X 1 -> 2

We can print the circuit:

print(circ)

And finally we send the job to run, indicating the circuit, the S3 folder where the results will be stored, and the number of repetitions or shots:

# Run circuit
result = device.run(circ, s3_folder, shots=1000)

The jobs are not executed immediately because there may be reservations and queues on the device. To check the availability and execution windows of the device we can do the following:

execution_windows = device.properties.service.executionWindows
print(f'{device.name} availability windows are:\n{execution_windows}\n')
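You can also check whether the device is currently online before submitting; AwsDevice exposes a status property for that:

# Print the device status (e.g. ONLINE / OFFLINE)
print(f'{device.name} status: {device.status}')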

We can obtain the id of the job and check the status of the task:

# Get id and status of submitted task
result_id = result.id
result_status = result.state()
print('ID of task:', result_id)
print('Status of task:', result_status)

With this, if required, we can set up a loop or some other mechanism to retrieve and print the results once the task is completed. As I said before, showing how to poll the task status and retrieve the results is not the focus of this post.

if result_status == "COMPLETED":
    # get measurement shots
    counts = result.result().measurement_counts
    print(counts)
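Still, as a minimal sketch of what such a loop could look like (my own addition, using the task state strings returned by the SDK):

import time

# Poll the task until it reaches a terminal state, then print the counts
while result.state() in ("CREATED", "QUEUED", "RUNNING"):
    time.sleep(30)  # the QPU may only run inside its execution window
if result.state() == "COMPLETED":
    print(result.result().measurement_counts)
else:
    print("Task finished with status:", result.state())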

If we execute the script described we will get:

T  : |0|1|2|3|4|5|6|

q0 : -H---C-H-C-----
          |   |
q1 : -H-C-X---|---C-
        |     |   |
q2 : ---X-H---X-H-X-

T  : |0|1|2|3|4|5|6|

IonQ Device availability windows are:
[DeviceExecutionWindow(executionDay=<ExecutionDay.WEEKDAYS: 'Weekdays'>, windowStartHour=datetime.time(13, 0), windowEndHour=datetime.time(21, 0))]
ID of task: e7298eb6-8741-4afa-a091-7117b131eb81
Status of task: COMPLETED
Counter({'101': 181, '011': 169, '111': 140, '110': 132, '001': 118, '010': 91, '100': 89, '000': 80})

This shows the circuit we defined, the execution windows of the selected device, the task ID and status, and finally the counts of the measurement results for the 1000 shots.

Using D-Wave quantum annealers

Let us go over a similar example, but this time using the D-Wave annealers. In this case, in addition to the boto3 and amazon-braket-sdk packages, we need to install the D-Wave Ocean SDK and the Amazon Braket Ocean Plugin in our environment. As before, this is an easy job:

> pip install dwave-ocean-sdk
> pip install amazon-braket-ocean-plugin

As a reminder, keep in mind the note above about using Python 3.7.
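One simple way to keep such a Python 3.7 environment isolated (just a suggestion, assuming python3.7 is available on your system) is a virtual environment:

> python3.7 -m venv braket-env
> source braket-env/bin/activate
> pip install boto3 amazon-braket-sdk dwave-ocean-sdk amazon-braket-ocean-plugin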

Let us go over the script. As before, we first need to import the packages, but in this case, instead of the Braket circuit modules, we import the Braket Ocean Plugin modules:

# AWS import Boto3
import boto3
# AWS imports: Import Braket SDK modules
from braket.ocean_plugin import BraketSampler, BraketDWaveSampler
from braket.aws import AwsDevice
# OS import to load the region to use
import os
# The region name must be configured
os.environ['AWS_DEFAULT_REGION'] = "us-west-2"

We also need to import the Ocean modules and, if we want to create some graphics, the corresponding plotting modules:

# Import D-Wave stuff
import networkx as nx
import dwave_networkx as dnx
from dwave.system.composites import EmbeddingComposite
# Import the popular matplotlib for graphics
import matplotlib.pyplot as plt

The same requirement to define the S3 bucket and folder applies here:

# When running in real QPU you must enter the S3 bucket you created
# during onboarding to Braket in the code as follows
my_bucket = f"amazon-braket-your-bucket" # the name of the bucket
my_folder = "YourFolder" # the name of the folder in the bucket
s3_folder = (my_bucket, my_folder)

And of course to define the device to be used, in this case one of the D-Wave quantum annealers:

# Set up device
device = AwsDevice("arn:aws:braket:::device/qpu/d-wave/DW_2000Q_6")

We can also check the execution windows for this device:

execution_windows = device.properties.service.executionWindows
print(f'{device.name} availability windows are:\n{execution_windows}\n')

Now we can write down the algorithm that we will run. In this case I am taking it from the D-Wave examples: creating a graph and finding its maximum independent set, i.e. the largest set of nodes such that no two of them are connected by an edge.

First we define the graph and visualize it:

# Define the graph
# Create empty graph
G = nx.Graph()
# Add edges to graph - this also adds the nodes
G.add_edges_from([(1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (4, 5), (4, 6), (5, 6), (6, 7)])
# Visualize the original graph
pos = nx.spring_layout(G)
plt.figure()
nx.draw_networkx(G, pos=pos, with_labels=True)
plt.show()

After that we can instantiate the sampler and do the usual D-Wave magic. Note the small change required with respect to the usual D-Wave code: the second line instantiates BraketDWaveSampler instead of the usual DWaveSampler, and sets the S3 folder where the results will be stored and the ARN of the device to be used:

# Instantiate the sampler and do the magic
sampler = BraketDWaveSampler(s3_folder,'arn:aws:braket:::device/qpu/d-wave/DW_2000Q_6')
sampler = EmbeddingComposite(sampler)

Finally we run the job, in this case using the dwave_networkx (dnx) routine for finding a maximum independent set, passing the graph, the sampler instantiated above, and the number of reads or shots:

# Find the maximum independent set, S
S = dnx.maximum_independent_set(G, sampler=sampler, num_reads=1000)

Once the job is completed we can print the result and visualize it:

# Print the solution for the user
print('Maximum independent set size found is', len(S))
print(S)
# Visualize the results
k = G.subgraph(S)
notS = list(set(G.nodes()) - set(S))
othersubgraph = G.subgraph(notS)
plt.figure()
nx.draw_networkx(G, pos=pos, with_labels=True)
nx.draw_networkx(k, pos=pos, with_labels=True, node_color='r', font_color='k')
nx.draw_networkx(othersubgraph, pos=pos, with_labels=True, node_color='b', font_color='w')
plt.show()
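As a small sanity check (my addition, not part of the D-Wave example), you can verify classically that the returned set is indeed independent, i.e. that no edge of G joins two of its nodes:

# Every pair of nodes in S must be non-adjacent in G
assert all(not G.has_edge(u, v) for u in S for v in S if u != v)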

As you can see below, the D-Wave devices are online every day and at all hours, so you can run jobs at any time. If you run the script described above, you will first get the device availability windows:

DW_2000Q_6 availability windows are:
[DeviceExecutionWindow(executionDay=<ExecutionDay.EVERYDAY: 'Everyday'>, windowStartHour=datetime.time(0, 0), windowEndHour=datetime.time(23, 59, 59))]

The initial graph:

The results:

Maximum independent set size found is 3
[2, 5, 7]

and the colored graph with the results:
