Analyzing Data with IBM Cloud SQL Query

Easily make SQL queries on your object storage data

At IBM Think 2018, IBM introduced IBM Cloud SQL Query (SQL Query), a serverless, no-ETL solution for easily querying data stored in IBM Cloud Object Storage. SQL Query uses Apache Spark SQL as its underlying query engine, so you can compose queries in standard SQL to analyze and transform structured and semi-structured data in object storage.

No additional setup is involved. Once you have SQL Query running on IBM Cloud, you can immediately start querying your data using the SQL Query user interface, or programmatically using either the REST API or the Python ibmcloudsql library.

In this article, I’ll show you how to set up and start using SQL Query from the UI. After that, we’ll look at a practical example using ibmcloudsql to query traffic accident data around Los Angeles that’s stored in Cloud Object Storage, and then visualize accidents in a Jupyter Notebook with PixieDust using Mapbox.


Getting set up

To get started with SQL Query, you'll first need a Cloud Object Storage service. If you don't have one, you can provision a Lite plan instance from the IBM Cloud catalog. It's a limited plan, but it's free and provides more than enough storage for what we need. When that's done, you'll need to create a bucket to store the data that we'll use for our example.

To create a bucket, click Create Bucket and a window will appear asking you to select a name and region for the bucket. If you’re interested in what regions and endpoints are available, or curious about each region in particular, there’s some documentation explaining regions and endpoints for Cloud Object Storage.

You can give your bucket any name, but keep the rest of the default settings as they appear. I've called this bucket data-bucket-032018. Since bucket names have to be unique across the IBM Cloud platform, choose a unique name for yours.

A note on universally unique buckets: If you plan on reproducing the queries below using your own IBM Cloud Object Storage bucket, be sure to replace data-bucket-032018 where it appears in my data_source variable and FROM clause with your own unique bucket name.

The data we’ll use for this example is a CSV file of Los Angeles Traffic Collision Data from 2010 to Present. Download the file and upload it to the object storage bucket you just created.

Now onto setting up SQL Query. Select the SQL Query service from the IBM Cloud catalog. It’s currently in beta and free to use.

Currently, SQL Query can run queries on data that's stored as CSV, Parquet, or JSON in Cloud Object Storage. To interact with SQL Query, you can write SQL queries in its UI, or work programmatically using the REST API, the ibmcloudsql Python library, or a serverless function in IBM Cloud Functions.

After provisioning SQL Query from the catalog, click on the Manage tab in the left sidebar. From that page, click Open UI to access the SQL Query UI and start making queries.

Start querying your data

When you click Open UI, the SQL Query service will automatically generate a unique Cloud Object Storage bucket that stores the results of your SQL queries as CSV files. You can also specify other buckets to store results in, but to keep things simple, we'll use the generated object storage bucket.

The SQL Query UI is an editor that lets you immediately start composing SQL queries. Since SQL Query uses Spark SQL, you can use Spark SQL functions and ANSI SQL to compose both simple and complex queries that involve large amounts of data.

SQL Query UI

At the bottom of the editor, you’ll see the target Cloud Object Storage URL. This URL points to the object storage bucket generated by SQL Query to store your results.

/result is added by default as the suffix of the URL. result will be attached to each file name that contains the results of a SQL query inside this bucket.
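For example, with the bucket that was generated for my SQL Query instance (you'll see it again later in this article), the target URL looks something like this:

cos://us-south/sql-58216eb4-8ac6-43d1-94a2-98c3aebbbc71/result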

To try out the editor, let's compose a query against the traffic collision data. The query selects the traffic collision ID, the date and time the collision occurred, the area where the collision happened, the victim's age and sex, and the location coordinates. Additionally, we'll filter the data so that it only includes victims between the ages of 20 and 35 and incidents that occurred between 5pm and 8pm. The SQL code is below:
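Here's a sketch of that query. The column names follow the headers in the published LA collision dataset, and the object name assumes the CSV kept its original file name and was uploaded to a bucket on the us-geo endpoint, so adjust anything that differs for you:

-- Times in the dataset are 24-hour HHMM values, so 1700-2000 covers 5pm-8pm
SELECT
  `DR Number` AS id,
  `Date Occurred` AS date,
  `Time Occurred` AS time,
  `Area Name` AS area,
  `Victim Age` AS age,
  `Victim Sex` AS sex,
  `Location` AS location
FROM cos://us-geo/data-bucket-032018/Traffic_Collision_Data_from_2010_to_Present.csv
WHERE `Victim Age` BETWEEN 20 AND 35
  AND `Time Occurred` BETWEEN 1700 AND 2000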

Since SQL Query creates a table from CSV, Parquet, or JSON files behind the scenes, in the FROM statement you have to provide a table unique resource identifier in the form of

cos://region/bucket-name/file-name

that points to the file(s) you're querying — similar to defining the table you're querying in SQL. The identifier comprises the bucket endpoint, bucket name, and the object file name. SQL Query accepts wildcards, so, for example, if you have lots of files in a bucket that share a base name and are distinguished by month and year (e.g., example-032018), you could combine them all into a single table by selecting cos://us-geo/example-bucket/example-* or cos://us-geo/example-bucket/example-.

Now, click Run to start the SQL query. Once it runs, two columns will appear in the results panel. The Jobs column lists all the SQL queries that have completed or failed. To the right of that, you'll have two tabs: Result and Query Details. Result shows the result of the query as a table.

Query Details will include a copy of the SQL query, the query status, the run time (or time that it took to get the result), the user who made the query, and the URL location of the results in the object storage bucket.

If you click on the square icon next to the Result Location URL, it will take you to the Cloud Object Storage bucket and provide you with the CSV file with the results, which you can download or use as the table unique resource identifier for another query.

And that's all you have to do to get started using SQL Query. Now that we have the basics of running a query, let's look at how we can use SQL Query and PixieDust to create visualizations for the data we have in Cloud Object Storage.

Using SQL Query in Watson Studio with PixieDust

Querying data within the SQL Query UI is a great way to explore what your data looks like and to test whether your SQL query generates the right results. However, if you want to visualize the results, you'll have to interact with SQL Query programmatically. One way to do this is to use a Jupyter Notebook to run the code for the SQL queries and PixieDust to visualize the results.

In this part, we’ll use a Jupyter Notebook running Python 3.x and PixieDust to visualize data provided by SQL Query. We’ll modify a version of the SQL query above to divide the Location column into longitude and latitude coordinates. Then we’ll use PixieDust to visualize these points using Mapbox to see where most of the traffic accidents occur in Los Angeles.

For this example, I'm using a Jupyter Notebook on Watson Studio. A local Jupyter Notebook instance should work as well, but I can't guarantee you'll have all the updated packages that come with Watson Studio. If you want to follow along with the pre-built example, you can download the notebook from GitHub.

PixieDust is an open-source tool that’s an extension to Jupyter Notebooks. It adds a wide range of functionality to easily create customized visualizations from your data sets with little code involved. The Welcome to PixieDust notebook on Watson Studio is a great primer to help you become acquainted with the tool and its capabilities.

To interact with SQL Query, we'll use ibmcloudsql, an open-source Python library that sends SQL queries to SQL Query and retrieves the results from your object storage bucket. A starter notebook for SQL Query is also available, which includes the steps to get started, the authentication configuration you'll need, and how to run PixieDust to create charts based on the data in that notebook.

When you open up a notebook, you’ll need to install and import PixieDust and ibmcloudsql. To install them, you can run the following in the first cell:

!pip -q install ibmcloudsql
!pip install --upgrade pixiedust

Now, import the libraries in the next cell:

import ibmcloudsql
import pixiedust

Now, you’ll need to get an API key from IBM Cloud to have access to Cloud Object Storage and SQL Query. This is needed to write SQL Query results back to object storage. Go to Manage > Security > Platform API Keys at the top of your IBM Cloud account. Click the Create button and a window will appear asking you to enter a custom name for the API key.

Click Create at the bottom of that window. After that, you can download or Show the API key. You’ll need to save it so that you can use it in the notebook.

To save the Cloud API key securely in the notebook, we'll use the getpass module. Import the module and create a variable to store the API key. When the cell runs, a prompt will appear; paste in your API key and press Return.

import getpass
cloud_api_key = getpass.getpass('Enter your IBM Cloud API Key')

The next requirement is the SQL Query service instance's Cloud Resource Name (CRN). You can get that from the Manage panel of your SQL Query service page. Click the Instance CRN button under REST API, and the CRN will be copied for you.

In another notebook cell, create a variable and assign it the CRN you copied.

sql_crn = '<your_crn>'

Now, all you need is the full URL of the generated object storage bucket that will store and retrieve the results of the queries. An easy way to do this is to go back to your Cloud Object Storage service and find the bucket that was created with your SQL Query service. Click on the kebab menu icon and select Bucket SQL URL.

A window will open up with the bucket’s URL. Copy that URL.

Create a variable in your notebook to store that URL, like the following:

sql_cos_endpoint = 'cos://us-south/sql-58216eb4-8ac6-43d1-94a2-98c3aebbbc71/accidents'

Note that I’ve added the suffix /accidents at the end of my URL. This is so we can distinguish these queries from others that we might have made that will appear in this bucket. The suffix /accidents will become the prefix for the results generated by SQL Query.

With the API key, CRN, and Cloud Object Storage endpoint in place, all we need to do is run SQL Query. To do that, we'll create an instance of the ibmcloudsql library's SQLQuery client, providing the API key, CRN, and object storage endpoint.

sqlClient = ibmcloudsql.SQLQuery(cloud_api_key, sql_crn, sql_cos_endpoint)

After that, we can use the run_sql function provided by the library to run a SQL query. The following code example shows one way to do that using the previous query we ran in the SQL Query UI. This time, I've separated the bucket URL containing the file we want to query from the SQL query string; you should change it to point to your own object storage bucket.
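Here's a sketch of that cell. The data_source URL points at my bucket and the column names again follow the published dataset's headers, so substitute your own bucket and adjust any headers that differ:

# Bucket and object that hold the collision CSV; replace with your own
data_source = 'cos://us-geo/data-bucket-032018/Traffic_Collision_Data_from_2010_to_Present.csv'

# The same query we ran in the SQL Query UI
query = """
SELECT `DR Number` AS id, `Date Occurred` AS date, `Time Occurred` AS time,
       `Area Name` AS area, `Victim Age` AS age, `Victim Sex` AS sex, `Location` AS location
FROM {}
WHERE `Victim Age` BETWEEN 20 AND 35
  AND `Time Occurred` BETWEEN 1700 AND 2000
""".format(data_source)

# run_sql submits the query and returns the results as a pandas DataFrame
traffic_collisions = sqlClient.run_sql(query)
traffic_collisions.head()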

Just copy the code and run it with your object storage bucket that contains the CSV file. You’ll get the same results that you got in the UI, but as a data frame in a notebook.

Since we want to get the location of each accident, we’ll need to refine the data a little more so that the latitude and longitude coordinates are in separate columns, rather than grouped together in the location column. To do that, I’ve written a CTE (common table expression). Copy and paste the following code and run it in a cell:
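Below is a sketch of that cell. It assumes the Location column is stored as a string like "(34.0522, -118.2437)" and reuses the data_source variable from the previous cell; adjust the parsing if your CSV stores coordinates differently:

# Parse the Location string into latitude and longitude columns in a CTE,
# then join back to the CSV data on the accident ID
query = """
WITH location AS (
    SELECT
        `DR Number` AS id,
        CAST(split(regexp_replace(`Location`, '[()]', ''), ',')[0] AS float) AS latitude,
        CAST(split(regexp_replace(`Location`, '[()]', ''), ',')[1] AS float) AS longitude
    FROM {0}
)
SELECT
    d.`DR Number` AS id,
    d.`Date Occurred` AS date,
    d.`Time Occurred` AS time,
    d.`Area Name` AS area,
    d.`Victim Age` AS age,
    d.`Victim Sex` AS sex,
    l.latitude,
    l.longitude
FROM {0} d
JOIN location l ON l.id = d.`DR Number`
WHERE d.`Victim Age` BETWEEN 20 AND 35
  AND d.`Time Occurred` BETWEEN 1700 AND 2000
  AND l.latitude != 0.000
  AND l.longitude != 0.000
""".format(data_source)

traffic_location = sqlClient.run_sql(query)
traffic_location.head()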

The CTE splits the coordinates into two columns (latitude and longitude) and uses the accident ID for reference. Using the table called location that's generated from the CTE, we can then join the CSV data and that table together using the accident ID. We'll then filter by accident time (5pm-8pm) and victim age (20-35), as well as remove accidents that have 0.000 as coordinates.

A sample of the results of that query should look like:

Now that we have the coordinates in separate columns, we can use PixieDust to view the location of each accident on a map inside another cell. To do that, we'll run PixieDust's display() function with our traffic_location data.

display(traffic_location)

When you run this function, PixieDust will start and all of the results will be shown as a table.

Click the chart button at the top and select Map.

Once Map is selected, select the id and age as the values and latitude and longitude as the keys. By default, PixieDust will show only 100 results on the map, but you can add all 30,000+ results without any issues.

Once you click OK, you should see a map with each accident on it.

If there is an error, make sure that the renderer is mapbox. PixieDust comes with a Mapbox API Key built in that you can use.
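If you'd rather set these options in code than through the chart dialog, PixieDust also accepts display options as keyword arguments. The option names below are assumptions taken from the metadata PixieDust saves into the cell, so double-check them against your PixieDust version:

# Hypothetical programmatic equivalent of the chart dialog settings above;
# option names come from PixieDust's saved cell metadata and may vary by version
display(traffic_location,
        handlerId='mapView',
        rendererId='mapbox',
        keyFields='latitude,longitude',
        valueFields='id,age',
        rowCount=30000)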

Once you see the map, you can zoom in and view individual points, which will, for instance, give you something like this:

Get querying!

We began this article with a brief introduction to getting set up with IBM Cloud SQL Query and using its UI. Then, we composed and ran queries in a Jupyter Notebook on Watson Studio with the SQL Query Python library and PixieDust. If you want to download the notebook to follow along, or play with the SQL queries we've made here, click here for the GitHub repository.

Once you get these services working together, lots of possibilities open up. For example, in my sample notebook you could keep experimenting with the query string to fully explore the data set. Granted, we’re not building predictive models or crunching petabytes of data, but for situations where you need to share a sizable amount of data and test some SQL queries, I can see it being very useful.

Written by Dr. Abdullah J. Alger, Developer Evangelist @ IBM Cloud | I write about building things, breaking things, database things, cloud things, and thing things.

Thanks to Mike Broberg.

IBM CODAIT: Things we made with data at IBM's Center for Open Source Data and AI Technologies.
