How to Pick the Best Self-Hosted Sentry Option for Your Project?

Leo Lin
3 min read · Apr 7, 2023


Which self-hosted Sentry should you use?

WHAT is Sentry?

As engineers, we want to solve issues quickly and effectively, but without a service to track errors, it can be hard to get the information we need to fix them. Sentry is a powerful error-tracking service that helps engineers debug and fix issues faster and more easily. You can use Sentry in two ways:

  • Subscribe to Sentry Cloud, which gives you access to the latest features and support.
  • Install Sentry on your own server, which gives you more control and customization.
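
From an application's point of view, both options look the same: the SDK just needs a DSN pointing at whichever Sentry instance you run. A minimal sketch of the difference (the DSN values here are purely hypothetical):

# Hypothetical DSNs; only the host portion differs between the two options.
sentry-cloud:
  dsn: "https://<public-key>@o<org-id>.ingest.sentry.io/<project-id>"
self-hosted:
  dsn: "https://<public-key>@sentry.internal.example.com/<project-id>"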

Below are three choices for self-hosting Sentry: the official getsentry/self-hosted setup (docker-compose), the community-maintained sentry-kubernetes/charts Helm chart, and the deprecated stable/sentry Helm chart.

Comparison of different solutions.

Each option has pros and cons, depending on your needs and preferences. Here are some things to consider when choosing a self-hosted Sentry option:

  1. You might be better off using the official cloud solution if you have high traffic, concurrency, or security requirements, especially for frontend environments.
  2. If you want to stay up to date with the newest features and improvements, use the official or community-maintained versions.
  3. If you want to minimize the dependencies and resources needed to run Sentry, stable/sentry is the only option.

WHAT do the dependencies mentioned above mean?

Let’s talk about dependencies. Both Kubernetes and docker-compose can bundle every service an application needs, but the more services are bundled, the more resources the deployment consumes. From an infrastructure perspective, that footprint is an important evaluation metric.

Look at the official self-hosted Sentry in the figure below: it needs a ton of services. Besides datastores such as Redis and PostgreSQL, Sentry also depends on components like Kafka, ZooKeeper, ClickHouse, Snuba, Relay, and Symbolicator. (Since sentry-kubernetes/charts is essentially the Kubernetes version of the same stack, it needs similar dependencies.)

On the other hand, stable/sentry has far fewer dependent services, as shown in the figure below, so it is clearly much more lightweight than the other options.
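
Concretely, the two footprints compare roughly as follows. This is only an illustrative sketch rather than a runnable manifest; the service names are taken from the upstream projects at the time of writing and may change between releases:

# Rough dependency footprint of each option (illustrative only)
official-and-community-charts:   # getsentry/self-hosted, sentry-kubernetes/charts
  - web
  - worker
  - cron
  - relay
  - snuba              # plus its own consumer processes
  - symbolicator
  - kafka
  - zookeeper
  - clickhouse
  - postgresql
  - redis
  - memcached
stable-sentry:                   # the deprecated stable/sentry chart
  - web
  - worker
  - cron
  - postgresql
  - redis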

Try the lite version — stable/sentry

Next, let me take my side project as an example. It has the following characteristics:

  • Backend service
  • Internal use only, i.e., it won't be exposed to the public
  • The traffic is not huge
  • The official and community versions have heavy dependencies that would take up too many system resources
  • Tight budget…

Thus, I decided to use stable/sentry. However, the chart is deprecated, and after testing I found that the Slack integration doesn't work properly, so I forked and fixed the Sentry Python package and built a newer image for stable/sentry.

GitHub repo: CoyoteLeo/sentry

Image: karta0910489/sentry:9.1.4

values.yaml example for stable/sentry (the chart is now only available from the archived stable repository at https://charts.helm.sh/stable):

image:
  repository: karta0910489/sentry
  tag: "9.1.4"

web:
  env:
    - name: SENTRY_SLACK_CLIENT_ID
      value: "<client id>"
    - name: SENTRY_SLACK_CLIENT_SECRET
      value: "<client secret>"
    - name: SENTRY_SLACK_VERIFICATION_TOKEN
      value: "<ver>"

# How many worker instances to run
worker:
  replicacount: 1

# Admin user to create
user:
  # Whether to create the admin user.
  # Defaults to true for the initial installation.
  create: true
  email: admin@sentry.local
  password: admin-password

postgresql:
  enabled: false
  nameOverride: sentry-postgresql
  postgresqlDatabase: sentry
  postgresqlUsername: postgres
  postgresqlHost: database
  existingSecret: password
  existingSecretKey: postgres

redis:
  enabled: false
  nameOverride: sentry-redis
  host: redis-master.redis
  existingSecret: password
  existingSecretKey: redis

hooks:
  dbInit:
    enabled: true
    resources:
      # The memory limit is raised to 5000Mi because a Sentry instance needs
      # at least 3 GB of RAM to perform the migration process.
      # Reference: https://github.com/helm/charts/issues/15296
      limits:
        memory: 5000Mi
      requests:
        memory: 5000Mi

service:
  name: sentry
  type: ClusterIP
  externalPort: 9000
  internalPort: 9000

config:
  configYml: ""
  sentryConfPy: ""

sentrySecret: <secret>

I hope this helps! Feel free to contribute to CoyoteLeo/sentry!
