Update to the Amentum Gravity API and an efficient way to call it

Introduction

B. Lough
amentumspace
4 min read · Jun 23, 2020


Amentum are pleased to announce a new feature of our Gravity API. In addition to being able to calculate the geoid height for a given latitude/longitude, users are now able to calculate gravity anomalies.
Both calculations use the GeographicLib implementation of the EGM2008 model.

As part of the release process, we have done extensive testing of this implementation against the Fortran implementation provided by the National Geospatial-Intelligence Agency (NGA). We find very good agreement in all test cases.

This post, in addition to introducing the new anomaly calculations, describes an efficient way in which users can access all of the Amentum web APIs.

Motivation

We recognise that users may need to call the Gravity API for a large grid of latitude and longitude pairs. Due to the inherent latency of a single web call, the overhead of calling a web API for many individual calculations can become prohibitive.

In this note, we describe a solution provided by the python requests-futures add-on to the requests HTTP library. There are, of course, many solutions to this problem in many different languages; we suggest one in python due to its ubiquity.

The requests-futures add-on allows many HTTP requests to be in flight at once: all desired requests are dispatched to the server without waiting for their responses, and the results are collected as they arrive. The round-trip latencies therefore overlap rather than add up, so we do not need to wait for each round trip to complete before issuing the next request.

In our experience we have found that the overhead of calling a web API in this manner becomes almost negligible.

Timing metrics of a single web API calculation

Consider making a call to the gravity anomaly endpoint using curl:
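A minimal call might look like the following. The endpoint URL and query-parameter names here are assumptions for illustration; check the Amentum API documentation for the exact form:

```shell
# Illustrative call; endpoint URL and parameter names are assumptions,
# not the documented Amentum API schema.
curl "https://gravity.amentum.space/egm2008/gravity_anomaly?latitude=-45.0&longitude=125.0"
```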

This will perform one calculation and print the result to the command line in JSON format:
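The response body has a shape along these lines; the field names and values below are purely illustrative (gravity anomalies are conventionally quoted in mGal), so consult the API documentation for the actual schema:

```json
{
  "anomaly": {
    "units": "mGal",
    "value": -5.1
  }
}
```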

To have curl generate some timing metrics for us, we can create a new file, curl-format.txt:
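A write-out template built from curl's standard timing variables works well here; for example:

```shell
# Create curl-format.txt containing curl's -w (--write-out) timing variables.
# curl expands each %{...} variable and interprets \n as a newline.
cat > curl-format.txt <<'EOF'
   time_namelookup:  %{time_namelookup}s\n
      time_connect:  %{time_connect}s\n
   time_appconnect:  %{time_appconnect}s\n
  time_pretransfer:  %{time_pretransfer}s\n
     time_redirect:  %{time_redirect}s\n
time_starttransfer:  %{time_starttransfer}s\n
        time_total:  %{time_total}s\n
EOF
```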

And then run the following:
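With that file in place, the timed call becomes the following (same assumed endpoint as before; `-o /dev/null` discards the response body and `-s` silences the progress meter so only the timings are printed):

```shell
# Timed call; the URL is an assumed example, not the documented endpoint.
curl -w "@curl-format.txt" -o /dev/null -s \
  "https://gravity.amentum.space/egm2008/gravity_anomaly?latitude=-45.0&longitude=125.0"
```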

A typical output from running this command is:
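The numbers below are illustrative only, not a real measurement; the shape of the output, with a total on the order of half a second, is what matters:

```
   time_namelookup:  0.012s
      time_connect:  0.145s
   time_appconnect:  0.298s
  time_pretransfer:  0.299s
     time_redirect:  0.000s
time_starttransfer:  0.512s
        time_total:  0.513s
```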

These numbers will vary, largely depending on network conditions (and to a lesser extent Amentum server load), but from this we can see that a single round-trip calculation can take around half a second. This will be prohibitive if we wish to make hundreds, if not thousands, of calculations.

Implementation in python using the requests library (not recommended when many API calls are required)

A common library used in python to make HTTP requests is the requests library. We start by creating a new python file, synchronous_calc.py:
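A sketch of such a script is below. The endpoint URL and parameter names are assumptions (check the Amentum documentation for the real schema), and the loop simply repeats one call to expose the per-call overhead:

```python
# synchronous_calc.py
# Sketch only: the URL and parameter names below are assumptions, not the
# documented Amentum API schema.
import time

import requests

URL = "https://gravity.amentum.space/egm2008/gravity_anomaly"  # assumed endpoint


def make_params(lat, lon):
    """Build the query parameters for one calculation."""
    return {"latitude": lat, "longitude": lon}


def fetch_anomaly(lat, lon):
    """One blocking round trip: send the request and wait for the response."""
    response = requests.get(URL, params=make_params(lat, lon))
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    start = time.time()
    for _ in range(50):  # 50 sequential calls, purely to time the overhead
        fetch_anomaly(-45.0, 125.0)
    print(f"50 synchronous calls took {time.time() - start:.1f} s")
```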

In this example we loop 50 times for illustrative purposes. Similar to the curl example above, this python script makes a single HTTP request and waits for the response before making the next call. As expected, the total time scales linearly with N, the number of calculations performed: 50 calculations take around 25 seconds.

Implementation with asynchronous calls using the requests-futures library

The recommended way to call the Amentum APIs when many calls are required is with asynchronous requests. With this method, all requests are dispatched to the Amentum servers at once and the responses are collected as they arrive, rather than one round trip at a time.

This minimises the overhead of waiting for each individual calculation to complete a round trip.

Ideally, the average time for each calculation to complete would approach the time of the actual calculation on the server alone. This is close to what we observe when using the asynchronous requests-futures add-on to the requests library.

Creating a new python script, asynchronous_calc.py:
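A sketch of the script follows; as before, the endpoint URL and parameter names are assumptions. The FuturesSession dispatches every request immediately and returns a Future for each, and we then collect the responses as they arrive:

```python
# asynchronous_calc.py
# Sketch only: the URL and parameter names are assumptions, not the
# documented Amentum API schema.
import time

URL = "https://gravity.amentum.space/egm2008/gravity_anomaly"  # assumed endpoint


def make_grid(nlat=10, nlon=20):
    """Build the 10 x 20 grid of latitude/longitude pairs (200 points)."""
    lats = [-90.0 + i * 180.0 / (nlat - 1) for i in range(nlat)]
    lons = [-180.0 + j * 360.0 / (nlon - 1) for j in range(nlon)]
    return [(lat, lon) for lat in lats for lon in lons]


def fetch_all(points, workers=20):
    """Dispatch all requests at once, then gather the responses."""
    # Imported here so make_grid() is usable without the dependency installed.
    from requests_futures.sessions import FuturesSession

    session = FuturesSession(max_workers=workers)
    # Each .get() returns immediately with a Future; nothing blocks here.
    futures = [
        session.get(URL, params={"latitude": lat, "longitude": lon})
        for lat, lon in points
    ]
    # .result() blocks until that response arrives; the round trips overlap.
    return [future.result().json() for future in futures]


if __name__ == "__main__":
    points = make_grid()
    start = time.time()
    results = fetch_all(points)
    print(f"{len(results)} calls took {time.time() - start:.1f} s")
```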

This time we perform 200 calculations for a range of latitudes and longitudes (10 points in latitude and 20 in longitude), to illustrate how the API might be called in practice.

Using asynchronous HTTP calls, we can complete all calculations in around 14 seconds (again depending on network conditions).

This averages out to around 70 ms per calculation, which is close to the calculation time on the server. By calling the API in this manner we virtually eliminate the effect of network latency and approach bare-metal performance via the web API.
