How we built the environment for the Ideathon on our first anniversary

Pranay Nanda
GDG Cloud New Delhi
10 min read · Jun 10, 2019

Google Cloud Developer Community celebrated its first anniversary on May 25, 2019. It was a remarkable event for us as well: a new format and a different kind of learning experience from our usual Study Jams and talk sessions.

Here’s what happened on the day:

  • Participants were split into 12 teams randomly.
  • A common base problem was given to all teams: come up with ideas to digitise and enable a fictitious community of craftswomen. These women have been producing handicraft goods for decades and now want to go online and scale.
  • The ideas were then presented on AMP Stories.

AMP Stories immerse readers in fast-loading, full-screen experiences. The format is free, part of the open web and available for everyone to try on their websites. Stories can be shared and embedded across the web without being confined to a closed ecosystem or platform.

AMP Stories provide content publishers with a mobile-focused format for delivering news and information as visually rich, tap-through stories. They also offer a robust set of advertising opportunities for advertisers and publishers to reach a unique audience on the web with immersive experiences.

Thanks to Saurabh Rajpal, who conducted an engaging workshop on creating AMP Stories and enabled our attendees to present their ideas in a novel fashion.

The winning team had an interesting take on the problem along with a beautiful presentation. Their solution was economically feasible and easy to operate, and it saw an opportunity for a platform wherein both the community and its buyers could leverage GPS data for an immersive customer experience.

Of course, we are a community all about Google Cloud. While the ideas presented during the Ideathon were innovative and presenting them on AMP Stories was out-of-the-box, the infrastructure pipeline backing those stories was nothing short of intriguing: it was hosted entirely on Google Cloud Platform. There were two major wins with this pipeline:

  1. The project clearly counted as open source: the source code was made available for the audience to contribute to the base repos. They had to fork these repos and clone them to their laptops to start development, so anyone in the audience who hadn't previously contributed to an open-source project could count this as their first.
  2. The deployment process was frictionless: all anyone had to do was a git push and a pull request (sketched below).
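
As a sketch, the whole contributor workflow was plain git (repository and user names here are placeholders):

# Fork the team's base repo on GitHub first, then:
git clone https://github.com/<your-username>/<team-repo>.git
cd <team-repo>
# ...edit the AMP Story files...
git add .
git commit -m "Update our team's story"
git push origin master
# Finally, open a pull request against the base repo on GitHub.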

The basic process looked like this: a contributor pushes code to GitHub and opens a pull request, the pull request is merged automatically, and a build pipeline packages and deploys the result to the web.

After pondering over a couple of ideas for making the deployment process as free of human interaction as possible, we settled on Cloud Run as our compute option. Although Cloud Run is still a beta product, it fit our use case perfectly. Here are some pointers on Cloud Run:

  • Cloud Run was recently launched at Next '19 (Google's largest cloud-computing conference) and is an excellent blend of containers and serverless computing.
  • It runs on the Knative platform and can serve a maximum of 80 concurrent requests per container.
  • It also scales automatically, and typical use cases involve deploying custom tooling without having to worry about infrastructure.
  • It is an implementation of event-driven computing and currently supports HTTP requests sent on port 8080, with support for more event types expected to be added soon.
  • It supports custom domain mapping and the endpoint is automatically SSL encrypted using a free certificate from Let’s Encrypt!
  • It scales to zero when not in use.
  • Cloud Run can also be deployed to a Google Kubernetes Engine cluster, giving more control to users who need it.
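
To give a flavour of the deployment step (a sketch, not our exact commands), deploying an already-pushed container image from the CLI during the beta looked roughly like this, with illustrative names throughout:

# Deploy a pushed image as a Cloud Run service (service, project, image and region are illustrative).
gcloud beta run deploy my-service \
    --image gcr.io/my-project/my-image \
    --region us-central1 \
    --allow-unauthenticated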

Cloud Run was the optimal solution for the following reasons:

  1. We were looking for a solution that gave absolute creative freedom to the presenters and was also easy for us, as organisers, to deploy, without having to worry about parameters like security, encryption or scale.
  2. Because Cloud Run is all about serverless containers, it guaranteed that an application would run on the web exactly as it ran on the presenter's laptop.
  3. A major motivation was also its native compatibility with Cloud Build, which allowed for seamless CI/CD.

From an oversimplified angle, AMP Stories are simply HTML pages. With Google Compute Engine or Google Kubernetes Engine, we would have had to manage clusters and other aspects of traditional VMs. Cloud Functions and App Engine are also serverless solutions and offer the same benefits over GCE and GKE, but Cloud Functions simply didn't fit the use case, and App Engine would have required us to write a needless app.yaml file (or equivalent) and forced a dependency on a web framework in one of the languages it supports. With Cloud Run, all we had to do was package an NGINX image with whatever content a team wanted to present into a container image and deploy it.
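
As a minimal sketch (the file layout is assumed), a team's Dockerfile could be as small as this:

# Serve the team's static AMP Story with NGINX.
FROM nginx:alpine
# Override the default server block so NGINX listens on port 8080 (config shown below).
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Copy the static site into NGINX's web root.
COPY . /usr/share/nginx/html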

A minor change, from listen 80 to listen 8080 in the default NGINX configuration file, let the deployed container listen on port 8080, the default port Cloud Run sends requests to. The configuration file we used was along these lines (a minimal sketch of the default server block with only the port changed):
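
# Default NGINX server block, with the listen port changed from 80 to 8080 for Cloud Run.
server {
    listen 8080;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html;
    }
}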

The magic of Cloud Run was complemented by Cloud Build, Google Cloud Platform's integrated serverless CI/CD solution. Here are some of the advantages Cloud Build brings:

  • Cloud Build is Google Cloud Platform's built-in CI/CD tool and offers a generous 120 minutes of free build time per day.
  • By default, Cloud Build is bundled with popular industry builders such as Docker, Jenkins and Gradle, along with git and a few more.
  • You may also create your own builders or provide custom commands for generating a build.
  • Integration with GitHub and Bitbucket is supported, besides Google Cloud Source Repositories (Google's private repository hosting offering).

It was Cloud Build's native integration with GitHub that instilled even more confidence in the solution. The fact that almost anything can be done from the UI made the job simpler still. Cloud Build can be set up with GitHub either from the GCP console or as a GitHub app. Because we were experimenting, we implemented both, and will therefore cover both.

The GitHub app feels more native to this setup, as it checks for merge conflicts before building. It lets you create fast, consistent, reliable builds across all languages and automatically builds containers or non-container artifacts on commits to your GitHub repository.

What's more, the user has complete control over defining custom workflows for building, testing and deploying across multiple environments such as VMs, serverless, Kubernetes or Firebase. In fact, as of this writing, Cloud Build is one of only two apps available on GitHub for container CI. The app can be set up like any other GitHub application by installing it and granting access to the appropriate repositories.

With the GitHub app, the user can either configure all repositories in the organisation or select a few for access.

The alternative is to set it up from the GCP Console.

1. We start by going to the Cloud Build tab in the hamburger menu on the console.

2. On the Cloud Build console, click ‘Add Trigger’.

3. Select GitHub as the source and consent to storing the authentication token.

4. On clicking 'Continue', you'll be taken to GitHub's OAuth 2.0 authentication screen. Enter your username and password to proceed.

5. On successful authentication, you'll be brought back to the Cloud Build console on GCP. Select the appropriate repository to connect to. Note that you may need to request access from the administrator if the repo belongs to an organisation; this was true in our case, since all the repos belonged to the organisation 'gcdc-nd'.

6. On clicking 'Continue', you'll be taken to the final screen, where you configure the kind of trigger you'd like to create. The trigger can be set to a branch or a tag, which can be narrowed with a regex filter. We chose to fire the trigger on a push to any branch to avoid complexity.

The most important aspect of this screen is the choice of file to use for the build: either a Dockerfile or a cloudbuild.yaml file for more composite builds. The path to the Dockerfile or cloudbuild.yaml is configurable and relative to the root of the repo. It's always advisable to keep both at the root.

7. When you click 'Create Trigger', a webhook is automatically created in your GitHub repo.

Voila! We have a functioning pipeline ready. Now, as soon as a commit is made to the repo, the build pipeline will trigger.

We used a cloudbuild.yaml for our Cloud Build configuration along the lines of the sketch below, consistent with the snippets discussed afterwards (the final deploy step, its region and flags are illustrative):
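
steps:
# Build the Docker image, tagged with the short commit SHA.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/gcdc-nd1yr-team11:${SHORT_SHA}', '.']
# Push the built image to Google Container Registry.
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/gcdc-nd1yr-team11:${SHORT_SHA}']
# Deploy the new image to Cloud Run (an assumed step; service name, region and flags are illustrative).
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'run', 'deploy', 'gcdc-nd1yr-team11',
         '--image', 'gcr.io/$PROJECT_ID/gcdc-nd1yr-team11:${SHORT_SHA}',
         '--region', 'us-central1', '--allow-unauthenticated']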

If you read through the file, you'll realise that, when each builder name is paired with its arguments, these are simply standard command-line instructions. Essentially, the docker and gcloud builder containers hosted in Google Container Registry are invoked with the arguments given under the args key. This might seem familiar if you're well versed in writing Dockerfiles or have executed shell commands from traditional programming languages like Python or Go.

It gets more interesting. The gcloud CLI wraps traditional Docker commands under the gcloud builds banner (among other builders), and therefore the first two steps can be resolved into one.

# Build the Docker image.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/gcdc-nd1yr-team11:${SHORT_SHA}', '.']
# Push the built image to Google Container Registry.
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/gcdc-nd1yr-team11:${SHORT_SHA}']

What that means is that the two steps above are equivalent in output to the single one below.

# Build the image and push the built image to Google Container Registry.
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['builds', 'submit', '--tag', 'gcr.io/$PROJECT_ID/gcdc-nd1yr-team11']
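
The reason the separate push step disappears is that gcloud builds submit uploads the build context, builds the image with the supplied tag and pushes the result to Container Registry as a single operation.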

As stated earlier, we wanted the pipeline to be as free of human interaction as possible. We also wanted to automate the monotonous task of approving pull requests while the event was in motion. To reduce friction for participants pushing their code to the master branch of the main repository, we used a GitHub app called Mergify. Mergify is a pull-request automation service that accepts its configuration as a YAML file; it also supports integration with Travis CI. It is installed from the GitHub Marketplace and requires a .mergify.yaml file at the root of the repo. Ours looked along these lines (a minimal sketch of Mergify's pull_request_rules syntax, matching the behaviour described below):
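
pull_request_rules:
  - name: automatic merge
    conditions:
      # Zero approving reviews required, i.e. merge immediately. Do not copy this!
      - "#approved-reviews-by>=0"
    actions:
      merge:
        method: merge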

This is the most elementary Mergify configuration. Read closely, it says to merge after receiving approval from zero reviewers. PLEASE DO NOT USE THIS AS IS IN YOUR PROJECTS, AS IT WILL ALLOW ANYONE TO PUSH ANY PIECE OF CODE TO YOUR MASTER BRANCH. As soon as anyone creates a pull request, Mergify adds it to the repo's merge queue, and because the number of required approvals is defined as greater than or equal to zero, the code goes straight to the master branch. Guilty, definitely not good practice, but it was what we needed for the day.

Combined, all these configurations offered a seamless experience to the contributors. The detailed pipeline, therefore, was: a contributor pushes to their fork and opens a pull request; Mergify merges it into master; the merge fires the Cloud Build trigger; Cloud Build builds the NGINX-based container image and pushes it to Container Registry; and the fresh image is deployed to Cloud Run.

In the end, each team got its own secure, scalable and unique website that was easy to navigate, without having to delve into the complexities of opening random ports for deployment, configuring servers or fiddling with DNS.

A sample URL given by Cloud Run: https://gcdc-nd1yr-team11-3ivvcrwyqq-uc.a.run.app

It’s worth mentioning that beyond their ideas, the teams also got creative with implementing short URLs for their default Cloud Run endpoints.
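
One way to get a friendlier address than the generated run.app endpoint is the custom domain mapping mentioned earlier. As a sketch, assuming a domain whose ownership you have verified (service and domain names are illustrative):

# Map a verified custom domain to a Cloud Run service.
gcloud beta run domain-mappings create \
    --service gcdc-nd1yr-team11 \
    --domain team11.example.com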

That’s all folks!

It'd be unfair if we didn't pay our due gratitude: thank you to Ashish Arora, who helped us set up this pipeline.

With this, we'd like to announce that our platform is open for collaboration with community members interested in open-source projects. Members can connect with one another to share ideas, learn and collaborate, and we as organisers will ensure visibility for your projects via all our social media platforms. This extends to this Medium publication as well: if any member is interested in writing a blog on machine learning, artificial intelligence or cloud computing, the organisers will extend publishing rights to you and share your blog via all social media channels.

Our first year has been an exciting ride, and we look forward to an even more enthralling second.
