MongoDB Atlas in Google Cloud accessed by Private Service Connect

Barry Searle
Google Cloud - Community
8 min read · Feb 3, 2022

Intro

MongoDB is a NoSQL database that has gathered a large following since launching in 2009, as organizations have recognized the need to store all of their data without first having to understand every nuance of it. The community edition can be found everywhere, often outside the purview of the usual IT management folks, as developers have incorporated it into applications for its relatively low administrative overhead and low design barrier to entry.

MongoDB launched Atlas, their cloud offering, in 2016. It provides a managed MongoDB service with “Global Clusters”, which use the underlying infrastructure of the Cloud Service Provider (CSP) to get your data closer to the consumer and to provide high availability and disaster recovery, with minimal operational overhead.

Initially, if you needed to access the database from outside of the Google Cloud project, the MongoDB cluster IP addresses were public, and you could limit inbound traffic to specific, allowed IP addresses. Then came support for network peering, which required peering your network to the network on which the hosted MongoDB database was sitting, but this meant potential exposure to a range of IP addresses.

Next, and the topic of this guide, came Private Service Connect (PSC), which allows you to connect to a MongoDB cluster in Google Cloud via an RFC1918 IP address on your own VPC. No traffic goes over the internet, and you have no exposure to IP addresses in the MongoDB Atlas “producer” project.

Glossary of important terms:

Producer Project — the one producing the managed service.

Consumer project — the one consuming the managed service.

RFC1918 address — The usual private ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16)

Service Attachment — Service producers expose their services through a service attachment that gets traffic routed from an internal load balancer (link)

Shared VPC — VPC that is created and administered in a “host project”, but used by infrastructure in different “service projects”.

Where to start?

  • I am working on a Mac from which I will access the Google Cloud and Atlas web consoles.
  • I will create a Debian Linux instance, which I will refer to as test/dev, from which I shall test the accessibility of the MongoDB service via a local IP address. Since we will have to allow API access from an address, this instance is likely to be less ephemeral than the machine from which we are accessing the consoles. For this reason I will allow the external IP address of this instance for API access, since I will have to run a privileged script as part of this process.

Configure the Atlas side in the Atlas project

  • Have a project set up under an organization at https://cloud.mongodb.com/. I won’t go into too much detail on MongoDB Atlas itself but will rather focus on getting connected.
  • Create a cluster in the project — We need at least an M10 to support the network peering/PSC features (Link)
  • Now on the menu on the left you will find a “Network Access” tab. This is where you configure which IP addresses may access the data in your cluster. I will be testing this with a Google Cloud VPC with a subnet of 10.0.0.0/23. Note that this is a private RFC1918 range to which Atlas will now open the firewall.
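If you prefer scripting this step rather than clicking through the console, the Atlas CLI can add the same access-list entry. This is only a sketch: the command shape and flags are assumptions to verify against the CLI version you have installed.

```shell
# Allow the VPC subnet through the Atlas project firewall
# (flags are assumptions; check `atlas accessLists create --help`)
atlas accessLists create 10.0.0.0/23 --type cidrBlock --comment "atlas-test-subnet"
```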

In your Google Cloud project

  • Create a VPC called atlas-test-vpc with a subnet of 10.0.0.0/23 called atlas-test-subnet, in the same region as your MongoDB Atlas cluster.
  • Create a test/dev VM called atlas-test-vm on atlas-test-subnet.
  • Allow full access to all Cloud APIs.
  • Give the VM a network tag of dev, so you can allow traffic through the firewall by tag.
  • Give the VM an ephemeral external IP address and an ephemeral internal IP address on the subnet you just created.
  • Add a firewall rule on atlas-test-vpc allowing all traffic from the machine on which you are accessing the web interfaces (check your address at www.whatismyip.com) to instances tagged dev (your test/dev instance).
  • Your test/dev instance is now ready to be connected to.
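The steps above can be sketched with gcloud. The resource names are the ones used in this guide, while the us-central1 region, the Debian 11 image, and the workstation address 203.0.113.7 are assumptions you should substitute for your own.

```shell
# Create the custom-mode VPC and its subnet (region is an assumption)
gcloud compute networks create atlas-test-vpc --subnet-mode=custom
gcloud compute networks subnets create atlas-test-subnet \
  --network=atlas-test-vpc --region=us-central1 --range=10.0.0.0/23

# Create the test/dev VM with full Cloud API access, the "dev" tag,
# and ephemeral internal and external IPs (the defaults)
gcloud compute instances create atlas-test-vm \
  --zone=us-central1-a --subnet=atlas-test-subnet \
  --image-family=debian-11 --image-project=debian-cloud \
  --scopes=cloud-platform --tags=dev

# Allow traffic from your workstation's public IP (placeholder address
# below) to instances tagged "dev"
gcloud compute firewall-rules create allow-workstation-to-dev \
  --network=atlas-test-vpc --direction=INGRESS --allow=all \
  --source-ranges=203.0.113.7/32 --target-tags=dev
```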

Run the following to connect to your test instance:

gcloud beta compute ssh --zone "us-central1-a" "atlas-test-vm" --project "medium-atlas"

On your test/dev instance on your VPC

sudo apt-get install -y mongodb-mongosh
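Note that mongodb-mongosh is served from MongoDB’s own apt repository, not from the stock Debian archives, so on a fresh instance the install above will fail until you add that repository first. A sketch follows; the 5.0 release series and Debian 11 “bullseye” are assumptions, so check MongoDB’s install docs for current values.

```shell
# Add MongoDB's apt signing key and repository
# (release series and Debian codename below are assumptions)
wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | sudo apt-key add -
echo "deb http://repo.mongodb.org/apt/debian bullseye/mongodb-org/5.0 main" | \
  sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list
sudo apt-get update
```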

Set up a Private Endpoint Group

  • In the Atlas project click “Network access” on the left
  • “Private endpoint” → “Add private endpoint”
  • Choose the region where your subnet is, and a service attachment will be created.
  • Next fill in your Google Cloud project-specific information.
  • Click “Next” and a script will be generated for you, containing your project specifications.
  • Copy the script to a file called setup_psc.sh on your test/dev machine that has the Google Cloud SDK installed. Make it executable by running chmod +x setup_psc.sh.
  • Now run it, and it will reserve the necessary IP addresses, even if they are not contiguous, and create the forwarding rules from the 50 IP addresses on your Google Cloud VPC to the 50 endpoints reserved in the Atlas producer project for your endpoint service.
  • This script will take a couple of minutes to run.
  • This script runs entirely against Google Cloud and so does not need the Atlas API key.

The script looks as follows in my environment (atlas-test is my custom PSC endpoint prefix):

#!/bin/bash

gcloud config set project medium-atlas

for i in {0..49}
do
  gcloud compute addresses create atlas-test-ip-$i --region=us-central1 --subnet=atlas-test-subnet
done

for i in {0..49}
do
  if [ $(gcloud compute addresses describe atlas-test-ip-$i --region=us-central1 --format="value(status)") != "RESERVED" ]; then
    echo "atlas-test-ip-$i is not RESERVED";
    exit 1;
  fi
done

for i in {0..49}
do
  gcloud compute forwarding-rules create atlas-test-$i --region=us-central1 --network=atlas-test-vpc --address=atlas-test-ip-$i --target-service-attachment=projects/p-fkosb42io2cg3nz1a11pzmwd/regions/us-central1/serviceAttachments/sa-us-central1-61f437e1b488fd5c7d9b9ec9-$i
done

if [ $(gcloud compute forwarding-rules list --regions=us-central1 --format="csv[no-heading](name)" --filter="name:atlas-test" | wc -l) -gt 50 ]; then
  echo "Project has too many forwarding rules that match prefix atlas-test. Either delete the competing resources or choose another endpoint prefix."
  exit 2;
fi

gcloud compute forwarding-rules list --regions=us-central1 --format="json(IPAddress,name)" --filter="name:atlas-test" > atlasEndpoints-atlas-test.json

  • The script will write a list of endpoints to a file called atlasEndpoints-atlas-test.json.
  • I uploaded it to a Google Cloud Storage bucket as follows, and downloaded it from there to the machine on which I was stepping through the connection wizard, but I will leave it to you to determine the best way to get the file uploaded via the web console.

# create the file and paste the script into it
vim setup_psc.sh

# make it executable
chmod +x setup_psc.sh

# run the setup script
./setup_psc.sh

# it returns the following file of IP address to service attachment maps
cat atlasEndpoints-atlas-test.json

# make a bucket for the file
gsutil mb gs://medium-atlas

# copy the file to the bucket
gsutil cp atlasEndpoints-atlas-test.json gs://medium-atlas

# now, on the computer on which you are browsing the Atlas console,
# copy the file down, ready to upload it into the setup wizard
gsutil cp gs://medium-atlas/* .

# delete the bucket to reduce clutter
gsutil rm -r gs://medium-atlas
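For reference, the file is just a JSON array of name/IPAddress pairs. Here is a quick way to inspect it, shown against a made-up two-entry sample since the real output has 50 entries:

```shell
# A made-up two-entry sample of the file's shape (real output has 50 entries)
cat > atlasEndpoints-sample.json <<'EOF'
[
  {"IPAddress": "10.0.0.2", "name": "atlas-test-0"},
  {"IPAddress": "10.0.0.3", "name": "atlas-test-1"}
]
EOF

# Count entries and print the name -> IP mapping with python3 (jq works too)
python3 - <<'EOF'
import json

with open("atlasEndpoints-sample.json") as f:
    endpoints = json.load(f)

print(len(endpoints))           # prints 2 for the sample above
for e in endpoints:
    print(e["name"], e["IPAddress"])
EOF
```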

  • After uploading the file, Atlas will step through each of the forwarding rules, mapping each to its endpoint. This can take a significant amount of time (about 30 minutes to create the service attachments and another 30 to apply the change to the cluster).

Once the endpoint status is “Available”, we are ready to connect.

  • Under “Network Services” → “Private Service Connect” you will find a list of forwarding rules describing how IP addresses are being forwarded to service attachments in the producer project.
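The same list can be pulled with gcloud; this sketch assumes the atlas-test endpoint prefix and us-central1 region used earlier.

```shell
# List the PSC endpoint forwarding rules and the service attachments
# they target in the producer project
gcloud compute forwarding-rules list \
  --regions=us-central1 \
  --filter="name:atlas-test" \
  --format="table(name,IPAddress,target)"
```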


  • Even once the service attachment mapping has completed, it takes a while longer for the servers to be updated, so wait for the blue banner at the top to go away.
  • At this point, you can go to “Databases” on the left side→ “Connect”
  • “Private Endpoint” will be grayed out until the configuration changes have been deployed to all servers in the cluster.
  • Be patient, a lot of plumbing is being done for you!
  • It took about an hour for my 3 servers to become available, and clicking the Connect button did not show a clickable “Private Endpoint” connection method until I refreshed the page.
  • Click “Choose connection method”
  • Click “Connect with the MongoDB Shell”
  • Click “I have the MongoDB Shell Installed” since we installed it earlier on our test instance.
  • Choose your shell version, and copy out the connection string.

mongosh "mongodb+srv://medium-cluster-pl-0.oqryx.mongodb.net/myFirstDatabase" --username {user}

  • Copy it to the command line on your test instance.
  • Run it in the command line and you will get something like the following:

mongosh "mongodb+srv://medium-cluster-pl-0.oqryx.mongodb.net/myFirstDatabase" --username {name}

Enter password: ******

Current Mongosh Log ID: 61f475f5ef14ae5b8d86301c

Connecting to: mongodb+srv://medium-cluster-pl-0.oqryx.mongodb.net/myFirstDatabase?appName=mongosh+1.1.9

Using MongoDB: 4.4.12

Using Mongosh: 1.1.9

For mongosh info see: https://docs.mongodb.com/mongodb-shell/

To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).

You can opt-out by running the disableTelemetry() command.

Atlas atlas-2gcksc-shard-0 [primary] myFirstDatabase>
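From the test instance you can also run a quick non-interactive smoke test to confirm round-trip connectivity over the private endpoint. The smokeTest collection name is made up for illustration; the connection string is the one from the wizard.

```shell
# Round-trip a document over the private endpoint
# (smokeTest is a hypothetical collection name)
mongosh "mongodb+srv://medium-cluster-pl-0.oqryx.mongodb.net/myFirstDatabase" \
  --username {user} \
  --eval 'db.smokeTest.insertOne({ping: 1}); db.smokeTest.findOne({ping: 1})'
```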

Conclusion:

You have now configured access from your Google Cloud project and VPC to a managed Atlas MongoDB service, and it is available on IP addresses that are local to your VPC.

You can grow your cluster up to 50 MongoDB nodes, and you can connect using a single, simple connection string that is provided for you in the console.

If the VPC to which you have mapped your service endpoints is a shared VPC in a host project, then you will be able to access MongoDB from any service projects that are attached.



These are my personal learnings; the views expressed in these pages are mine alone and not those of my employer, Google