# SQL for Data Science
Dataset: GitHub Repositories & Stack Overflow
Example: How many files are covered by each type of software license?
GitHub is the most popular place to collaborate on software projects. A GitHub repository (or repo) is a collection of files associated with a specific project.
Most repos on GitHub are shared under a specific legal license, which determines the legal restrictions on how they are used. For our example, we’re going to look at how many different files have been released under each license.
We’ll work with two tables in the database. The first table is the `licenses` table, which provides the name of each GitHub repo (in the `repo_name` column) and its corresponding license. Here's a view of the first five rows.
The second table is the `sample_files` table, which provides, among other information, the GitHub repo that each file belongs to (in the `repo_name` column). The first several rows of this table are printed below.
Next, we write a query that uses information in both tables to determine how many files are released under each license.
It’s a big query, so we’ll investigate each piece separately.
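For reference, here is the full query, reconstructed from the walkthrough that follows (the `bigquery-public-data.github_repos` dataset path refers to the public GitHub dataset on BigQuery; the original color highlighting is replaced by comments):

```python
# Reconstructed sketch -- each piece discussed in the text is tagged with a comment.
file_count_query = """
                   SELECT L.license, COUNT(1) AS number_of_files
                   FROM `bigquery-public-data.github_repos.sample_files` AS sf
                   INNER JOIN `bigquery-public-data.github_repos.licenses` AS L
                       ON sf.repo_name = L.repo_name   -- the JOIN
                   GROUP BY L.license                  -- one group per license
                   ORDER BY number_of_files DESC       -- most files first
                   """
```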
We’ll begin with the JOIN. This specifies the sources of data and how to join them. We use ON to specify that we combine the tables by matching the values in the `repo_name` columns in both tables.
Next, we’ll talk about SELECT and GROUP BY. The GROUP BY breaks the data into a different group for each license, before we COUNT the number of rows in the `sample_files` table that correspond to each license. (Remember that you can count the number of rows with `COUNT(1)`.)
Finally, the ORDER BY sorts the results so that licenses with more files appear first.
It was a big query, but it gave us a nice table summarizing how many files have been committed under each license:
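We can’t run the BigQuery query in this copy, but the same JOIN + GROUP BY + ORDER BY shape can be exercised on an in-memory sqlite3 database. The toy rows below are made up for illustration; only the table and column names mirror the BigQuery ones:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE licenses (repo_name TEXT, license TEXT);
    CREATE TABLE sample_files (repo_name TEXT, path TEXT);

    INSERT INTO licenses VALUES
        ('octo/app', 'mit'), ('octo/lib', 'apache-2.0'), ('octo/tool', 'mit');
    INSERT INTO sample_files VALUES
        ('octo/app', 'main.py'), ('octo/app', 'util.py'),
        ('octo/lib', 'lib.c'), ('octo/tool', 'cli.py');
""")

# Count files per license: join on repo_name, group by license,
# and sort so licenses with the most files come first.
rows = conn.execute("""
    SELECT L.license, COUNT(1) AS number_of_files
    FROM sample_files AS sf
    INNER JOIN licenses AS L
        ON sf.repo_name = L.repo_name
    GROUP BY L.license
    ORDER BY number_of_files DESC
""").fetchall()

print(rows)  # the two 'mit' repos hold 3 files between them, 'apache-2.0' holds 1
```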
Dataset: Stack Overflow
Introduction
Stack Overflow is a widely beloved question and answer site for technical questions. You’ll probably use it yourself as you keep using SQL (or any programming language).
Their data is publicly available. What cool things do you think it would be useful for?
Here’s one idea: You could set up a service that identifies the Stack Overflow users who have demonstrated expertise with a specific technology by answering related questions about it, so someone could hire those experts for in-depth help.
In this exercise, you’ll write the SQL queries that might serve as the foundation for this type of service.
As usual, run the following cell to set up our feedback system before moving on.
1) Explore the data
Before writing queries or JOIN clauses, you’ll want to see what tables are available.
Hint: Tab completion is helpful whenever you can’t remember a command. Type `client.` and then hit the tab key. Don't forget the period before hitting tab.
2) Review relevant tables
If you are interested in people who answer questions on a given topic, the `posts_answers` table is a natural place to look. Run the following cell, and look at the output.
3) Selecting the right questions
A lot of this data is text.
We’ll explore one last technique in this course which you can apply to this text.
A WHERE clause can limit your results to rows containing certain text by using the LIKE feature. For example, to select just the third row of the `pets` table from the tutorial, we could match the pet's name exactly with LIKE.
You can also use `%` as a "wildcard" for any number of characters. So you can also get the third row with:
```python
query = """
        SELECT *
        FROM `bigquery-public-data.pet_records.pets`
        WHERE Name LIKE '%ipl%'
        """
```
Try this yourself. Write a query that selects the `id`, `title` and `owner_user_id` columns from the `posts_questions` table.
- Restrict the results to rows that contain the word "bigquery" in the `tags` column.
- Include rows where there is other text in addition to the word "bigquery" (e.g., if a row has a tag "bigquery-sql", your results should include that too).
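One possible answer, sketched as a BigQuery Standard SQL string (the variable name `questions_query` is our own):

```python
questions_query = """
                  SELECT id, title, owner_user_id
                  FROM `bigquery-public-data.stackoverflow.posts_questions`
                  WHERE tags LIKE '%bigquery%'
                  """
```

The `%` wildcards on both sides are what pick up tags such as "bigquery-sql" as well as plain "bigquery".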
4) Your first join
Now that you have a query to select questions on any given topic (in this case, you chose “bigquery”), you can find the answers to those questions with a JOIN.
Write a query that returns the `id`, `body` and `owner_user_id` columns from the `posts_answers` table for answers to "bigquery"-related questions.
- You should have one row in your results for each answer to a question that has "bigquery" in the tags.
- Remember you can get the tags for a question from the `tags` column in the `posts_questions` table.
Here’s a reminder of what a JOIN looked like in the tutorial:

```python
query = """
        SELECT p.Name AS Pet_Name, o.Name AS Owner_Name
        FROM `bigquery-public-data.pet_records.pets` AS p
        INNER JOIN `bigquery-public-data.pet_records.owners` AS o
            ON p.ID = o.Pet_ID
        """
```
It may be useful to scroll up and review the first several rows of the `posts_answers` and `posts_questions` tables.
Hint: Do an INNER JOIN between `bigquery-public-data.stackoverflow.posts_questions` and `bigquery-public-data.stackoverflow.posts_answers`.
Give `posts_questions` an alias of `q`, and use `a` as an alias for `posts_answers`. The ON part of your join is `q.id = a.parent_id`.
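Putting the hint together, one possible query (the variable name `answers_query` is ours):

```python
answers_query = """
                SELECT a.id, a.body, a.owner_user_id
                FROM `bigquery-public-data.stackoverflow.posts_questions` AS q
                INNER JOIN `bigquery-public-data.stackoverflow.posts_answers` AS a
                    ON q.id = a.parent_id
                WHERE q.tags LIKE '%bigquery%'
                """
```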
5) Answer the question
You have the merge you need. But you want a list of users who have answered many questions… which requires more work beyond your previous result.
Write a new query that has a single row for each user who answered at least one question with a tag that includes the string “bigquery”. Your results should have two columns:
- `user_id` - contains the `owner_user_id` column from the `posts_answers` table
- `number_of_answers` - contains the number of answers the user has written to "bigquery"-related questions
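One way to write it, reusing the join from the previous step and aggregating with COUNT and GROUP BY (the variable name `bigquery_experts_query` is ours):

```python
bigquery_experts_query = """
                         SELECT a.owner_user_id AS user_id, COUNT(1) AS number_of_answers
                         FROM `bigquery-public-data.stackoverflow.posts_questions` AS q
                         INNER JOIN `bigquery-public-data.stackoverflow.posts_answers` AS a
                             ON q.id = a.parent_id
                         WHERE q.tags LIKE '%bigquery%'
                         GROUP BY a.owner_user_id
                         """
```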
6) Building a more generally useful service
How could you convert what you’ve done to a general function a website could call on the backend to get experts on any topic?
Think about it and then check the solution below.
```python
def expert_finder(topic, client):
    '''
    Returns a DataFrame with the user IDs who have written Stack Overflow
    answers on a topic.

    Inputs:
        topic: A string with the topic of interest
        client: A Client object that specifies the connection to the
            Stack Overflow dataset

    Outputs:
        results: A DataFrame with columns for user_id and number_of_answers.
            Follows similar logic to bigquery_experts_results shown above.
    '''
    my_query = f"""
               SELECT a.owner_user_id AS user_id, COUNT(1) AS number_of_answers
               FROM `bigquery-public-data.stackoverflow.posts_questions` AS q
               INNER JOIN `bigquery-public-data.stackoverflow.posts_answers` AS a
                   ON q.id = a.parent_id
               WHERE q.tags LIKE '%{topic}%'
               GROUP BY a.owner_user_id
               """

    # Set up the query (a real service would have good error handling for
    # queries that scan too much data)
    safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)
    my_query_job = client.query(my_query, job_config=safe_config)

    # API request - run the query, and return a pandas DataFrame
    results = my_query_job.to_dataframe()

    return results
```