Building a Church Website: Constraints, Considerations, Choices

Siegfriedson
Sort of Like a Tech Diary
15 min read · Mar 9


When you’re one of the ‘tech people’ in your small church community, and when you have friends in just the right places, it’s only a matter of time before you’re asked to build a site.

That was how some colleagues and I found ourselves with the task of helping our church do what we know best. This slimmed-down, sanitised retelling highlights the major decision points, public and private, that marked my involvement in an ongoing project that might be more significant than I’ll ever let on.

Before we begin

The first question that came up when the idea of a website for my local community was floated was: do we actually need one? Social media services do a lot of what websites have traditionally done, without the additional overhead of maintaining any infra associated with a site. When determining a digital strategy, small institutions have to prioritise based on expected outcomes and available resources.

Ignoring the small fact that there were a few tech people capable of delivering a website, the fundamental question we needed to confront was whether the things we wanted the site to do could be achieved by simpler means.

It turned out that we had big plans, and although our current social media channels were doing a good job of representing us on the Web, we needed to plant a flag in our own corner of the Internet.

What’s a website anyway?

The second question I posed is one we’re still figuring out. Far from simply being a domain we own, a website could be a number of things. From where I sat, there were three outcomes to pick from: the site could be an Internet billboard, telling the public who we are, what we do and why they should care. It could be our very own Web publication, releasing press statements, news and other publishable content. Or it could be a super sophisticated application parishioners used to get church-related things done.

Or it could be all three.

The WordPress site

We picked number two, because we had plans of maintaining (and perhaps even expanding) an existing editorial board for the weekly church bulletin. WordPress was a no-brainer: it allowed us to simply deploy the application to some host and let the editors manage the content.

For a community that publishes on the regular, any traditional CMS designed for blogs should do; they’re built for that. Except we overestimated our capacity to produce written content. Turns out running a publication is hard: you need people to consistently write more stuff.

Picking a WordPress host

The next decision was who to host with. Despite the slew of services offering WordPress hosting (it’s basically a commodity), we went with a local provider for just one reason: ease of payment.

It irks me that many small organisations in Ghana have not come round to the fact that they need ways to pay for online services from anywhere in the world. I’m not sure why this hesitance to allow themselves the goodness of all that’s available to purchase and use exists, but it is a thing, and it’s ridiculous.

Changes to our local payments landscape have transformed how people and organisations think about online commercial activity. Mobile Money’s transformative power has had the unintended effect of creating a sort of silo of online payment methods; the unaware have been lulled into thinking that once they can pay with Mobile Money, they’ve figured out online payments and can call it a day.

The technology is not there yet. I hope it gets there, because that would be such a validation that an African solution to an African problem is globally applicable. But until then, depending solely on Mobile Money payments restricts the number of vendors an organisation can do business with.

So anyway we went with a local web host so we could actually pay them.

Hitting reality

A number of things became clear once we went down the route of maintaining a WordPress site on a shared hosting account: there was no editorial team to speak of (it was one guy, really), content was not going to be forthcoming, and a blog-oriented framework was overkill.

Our hosting provider put us on their lowest tier: five subdomains, four free databases (a fifth was taken up by WordPress), a five-gigabyte SSD and five hundred gigabytes of bandwidth. That, too, was overkill.

The WordPress template thing that lets you design your page in a WYSIWYG visual editor was an unmitigated dev-experience disaster for someone who prefers writing their markup and stylesheets by hand.

This left me in a bind, and I needed solutions. Development experience was going to be important, because the lack of an editorial team meant I could not simply install WordPress and hand it over to the writers to do their business. I would have to spend more time on the site, so it was best to make it feel like home. I had two options: ditch the clunky visual editor and build a WordPress template from scratch, or run with a single-page static site. Either choice would let me keep depending on the CMS goodness that is a WordPress installation, in the hope that some editors would find their way to my door.

The second area of concern was the gross underutilisation of resources we were paying for. I hate wastage. I could have kept quiet about this since we could afford the expense, but throwing money away isn’t part of my constitution. This is also why I don’t work for government.

Since we didn’t have the personnel needed to fully utilise our hosting tier, we were going to have to downscale. So, maybe opt for an even lower tier, or ditch WordPress entirely and go ̶n̶a̶k̶e̶d̶ static? After all, the editors would never come.

It’s just index.html!

Before going truly native, we built a Jekyll site that consumed our WordPress site’s API. It didn’t work out for a number of reasons, two of which were that it was a bit of a pain to update on our shared hosting account, and that it depended on content that was never coming. You see, at that time we hadn’t divorced ourselves entirely from the idea of going with option two, the online publication (see: What’s a website anyway?), and that was never going to work.

Eventually, we settled on customising a ready-made template to our needs, and just git pushing to a directory on our shared host. That’s as simple as it was ever going to get.

The choice of tech implicitly reverted the decision we had taken when asking ourselves what kind of site we were after. This was an Internet billboard, and that was all it was ever going to be. Until it wasn’t!

Hooking up with Firebase RTDB

It turns out our church does put out a steady stream of content on the regular, in the form of the unassuming weekly announcements. That kind of content already has process backing it, ensuring that it is consistently fresh. It is also essential to the community: announcements are how we know what’s going to happen, and what’s not going to happen, and what we’re to do where, how and with whom.

Unlike long-form content, these bite-sized chunks of relevant, timely info are not exactly suited to the typical blog format. They could, however, be just another section of a long, single web page that updates dynamically from some third-party service.

I’ve used the Realtime Database from Firebase for this exact purpose before, when I needed to keep a webpage updated in real time with content that would change often. Thanks to this wonderful bit of tech, coupled with VueJS, our simple HTML file acquired new superpowers.
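
Here’s a minimal sketch of that wiring, assuming the no-build-step CDN builds of Vue 3 and the namespaced Firebase SDK; the element id, database path and internals are illustrative, not our actual production code:

    // Keep an "announcements" section in sync with the Realtime Database.
    // Assumes firebase.initializeApp(config) has already run with the
    // project's config from the Firebase console.
    const announcementsApp = Vue.createApp({
      data() {
        return { announcements: [] };
      },
      mounted() {
        // Re-render whenever anything under /announcements changes.
        firebase.database().ref('announcements').on('value', (snapshot) => {
          this.announcements = Object.values(snapshot.val() || {});
        });
      }
    });

    announcementsApp.mount('#announcements');

With a v-for in the mounted element’s markup, an edit made in the Firebase console shows up on the page without a redeploy.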

Concerning VueJS

The choice of JavaScript library followed from my want of simplicity in the development experience. It also had a lot to do with familiarity: not simply with VueJS, but with the kind of web development that doesn’t require a build step.

This is a big selling point for VueJS, and it’s the reason I moved to it when AngularJS died. Yes, I do recognise the value of the more “professional” way of building modern front-ends. But I also value the freedom to choose the right tool for the right job.

The folks at VueJS get this. That’s why they call it the “progressive” web framework: it adapts to how deep you want to get. Given the overall preference for single-file .vue components, the production-grade tooling from Evan You and the team, as well as the existence of petite-vue, though, I think the “progressive” days of VueJS are numbered.

I have heard of AlpineJS, and I hope it succeeds. There is a use-case for more laidback front-end engineering that risks getting lost in the drive towards the future, and projects like these are the ones I adore. There’s a zen to these attempts at keeping front-end engineering simple, a breath of fresh air and a distance from the noise that keeps me happy, so good luck to the teams behind them!

It’s just index.html, with superpowers!

Our site “regressed” from a full publishing engine to an HTML document, and then grew wings with real-time updates of announcements. It also looked prettier than it ever had, because, let’s face it, I’m a mediocre designer. Relying on strong hints from a young engineer’s brief stint as our go-to web guy, this new site looked more “production-ready” than anything we had attempted before.

There was still the question of all the unused resources we were paying our local host (*wink*) for. An answer to that would come from another member of the church’s media team. Among other things, we were exploring a service to receive online payments, an online shop for merch and other goodies, and a reliable database to help with church administration: big dreams needing big resources, and our fancy new web page was just the start.

But long before all this actually came up, the transition toward something more than a simple web page was well under way. Having real-time announcements backed by Firebase gave us a sort of CMS, where a technically-oriented editor could input key:value data to update the site, without the need for a new deployment. We were straddling two worlds with respect to the first question we asked about what the website should be (see: What’s a website anyway?).

Once you go Vue, you don’t go back. The announcementsApp that powered the real-time goodness was never going to be the end of it. A small image gallery, I realised, could easily be powered by VueJS, allowing us to update the list of images displayed in that section of our supercharged Internet billboard with little effort. And so galleryApp was built. I built a similar app for the big slider (we still use those!) called, you guessed it, sliderApp, and our website started to look fancier from behind than I’d imagined.
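
The pattern generalises nicely: each dynamic section is its own tiny app, mounted on its own element, while the rest of the page stays plain HTML. A rough sketch, with hypothetical element ids and database paths:

    // One root app per dynamic section; everything else is static markup.
    Vue.createApp({
      data() { return { images: [] }; },
      mounted() {
        firebase.database().ref('gallery').on('value', (snap) => {
          this.images = Object.values(snap.val() || {});
        });
      }
    }).mount('#gallery');

    // sliderApp followed the same shape, listening on its own path.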

Things got more interesting when the need for documentsApp arose.

Handling documents

Remember how we described announcements? They were bite-sized chunks of useful information that could fit in their own section of a single page. In the real world, announcements of that sort aren’t always that simple: sometimes they’re pointers to some greater detail that needs elaboration somewhere else.

Given the design we had gone with, there was only enough space for pre-Elon-Tweet-length text for any announcement, and then some. Anything beyond the length of a single SMS page would look bad. We needed to handle the “read more” case before it showed up.

My solution to this was Bootstrap’s Offcanvas component. It’s a pretty sheet that slides out when called, and back in when dismissed. It’s the sort of component you might use for a hidden drawer menu or a bottom sheet, both popular patterns in mobile apps. Our Offcanvas implementation would pop up from the bottom of the screen, populated with relevant text, when an announcement item with a button was clicked.

The content was to be mainly text, written in Markdown for simplicity: I did not want to spend minutes wrapping paragraphs in <p> tags like I do when publishing on another of my sites. Ease of use was the overarching goal, and this was its latest moment.

Given that we had dispensed with the idea of running a publication, I wasn’t about to co-opt bloggy terms for this feature. We weren’t dealing with blog posts or articles; rather, we were handling documents: infrequently published texts, surfaced only where they were needed, and totally not an inherent part of the site’s use. It was still a single-page Internet billboard.

Architecture

Here’s what it looked like: documents lived in /documents/. Each filename was a kebab-case string with an .md extension. The path documents/name-of-file is what we call a document slug. documentsApp has an openDocument() method that attempts to fetch() a document based on documentSlug, a state variable with a watch set on it.

Whenever documentSlug is updated, a fetch() is attempted. If successful, the text of the file is parsed by a Markdown parser (markdown-it, in this case) and set as the content of our Offcanvas.

Triggering this mechanism is a hash-change listener set on window, which checks whether the current URL hash is a valid slug. So churchdomain.com/#documents/name-of-file will, on load (after the app has mounted), cause the URL to be parsed, the slug found and documentSlug set, which kicks off the fetch(). If all goes well, a bottom sheet pops out with the content of the linked-to document.
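
Here’s a rough sketch of how those pieces could fit together, assuming the CDN builds of Vue 3, markdown-it and Bootstrap 5; the element id and error handling are illustrative:

    const documentsApp = Vue.createApp({
      data() {
        return { documentSlug: '', html: '' };
      },
      methods: {
        // Fetch the Markdown file behind the current slug, render it,
        // and reveal the bottom sheet.
        openDocument(slug) {
          fetch('/' + slug + '.md')
            .then((res) => (res.ok ? res.text() : Promise.reject(res.status)))
            .then((text) => {
              this.html = window.markdownit().render(text);
              bootstrap.Offcanvas
                .getOrCreateInstance(document.getElementById('document-sheet'))
                .show();
            })
            .catch((err) => console.error('Could not load document:', err));
        }
      },
      watch: {
        // The watch described above: any update to documentSlug
        // triggers an attempt to open the document.
        documentSlug(slug) {
          if (slug) this.openDocument(slug);
        }
      },
      mounted() {
        // '#documents/name-of-file' -> 'documents/name-of-file'
        const readHash = () => {
          const hash = window.location.hash.slice(1);
          if (hash.startsWith('documents/')) this.documentSlug = hash;
        };
        window.addEventListener('hashchange', readHash);
        readHash(); // handle a slug that is present on initial load
      }
    });

    documentsApp.mount('#document-sheet');

The rendered HTML gets bound into the sheet’s body (with v-html), so the content appears just as the Offcanvas slides out.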

Leaving our shared host

Development experience being a priority led to a falling out with our web host. Now that we had more fancy on our app, I (perhaps unjustifiably) felt the need to go the extra mile and have separate staging and live environments to deploy to. We didn’t want to be updating our live site in real-time with random crap while testing locally. All this while, our site had lived on a beta domain, and I wanted to keep that playground open for more experiments after we’d launched.

On our shared hosting account, our site lived in PUBLIC_DIR/beta/, while a placeholder HTML page sat in the document root. I had hoped to map our base domain name to PUBLIC_DIR/prod/ for the aforementioned reason. That would not be possible, however, for several reasons I still do not entirely appreciate.

After a long conversation during which I learned, among other things, that I am a high-maintenance guest, we decided to move our static site to a more flexible hosting service. That was how we ended up on Firebase Hosting for a few weeks.

On Firebase

I’ve been a Firebase junkie since before Google bought them. It’s been a remarkable service since day one, and the acquisition did not change that for the most part. The real-time database service was (I think) the primary selling point of the original Firebase service, and it’s been the one thing I’ve always turned to for a little splash of magic in my apps.

The thing, though (and you saw this coming, didn’t you?), with managed services such as these is that the more you rely on them, the more you come to appreciate a fundamental truth of computing: abstractions will eventually leak.

Firebase is an abstraction over the overwhelming complexity of massive distributed services running at web scale, on the edge, bla bla bla. Its ease of use, practically its main selling point, is a drug that is hard to wean yourself off. Managed services of this sort are a promise to the developer: do as we say, and we’ll keep our end of the bargain. The corollary (the fine print Marketing leaves out) is often painfully true: do what you will, and you’re on your own, sorry lol.

I’ve had my share of Firebase-related horror in my past life, where hard-to-test flows were supposed to Just Work™. And then they didn’t. And then, miraculously, they did. When you live in a world of opaque, managed services that promise and mostly deliver, you live under pre-Enlightenment superstition, where spooks can show up in the dark and chew the wiring of your apps and whatnot, where the gods of near-perfect uptime will smile on you if you Do The Things® just right.

I detail the rest of the story of how I ended up in Firebase limbo about two weeks after transitioning to its hosting service, and how I discovered Render.com, in a previous essay.

Reduce, reduce, reduce

My short-lived experiment with Firebase Hosting reinforced one good habit, so that was a positive I’m happy to live with. Firebase Hosting is billed based on data transfer and hosting storage space. With the generous monthly limits promised by the service, I felt we were in safe hands. Then I uploaded the first release and realised just how quickly we were going to rack up costs if I did nothing to reduce the size of the site.

The unmetered environment of our previous shared host had made me too careless. This allowed our simple Internet billboard to remain bloated for longer than it needed to be. Now, as tenants on a new host, we lived by the megabyte. The usage graphs on the Hosting dashboard added more colour to this drama.

I set about removing unused image resources, reducing the sizes of relevant images to the barest minimum without sacrificing quality, and getting rid of unused files. To the best of my memory, this process halved our release sizes, and also reduced our data transfer any time the site was accessed for the first time.

A build step for production deployment would have solved this. Sue me.

Jekyll

Now that hosting had been fixed, our site had more confident footing, and some of the process had been run through and refined, I felt like we were on our way to public launch and full usage by the community. But they weren’t done with me.

The first request came from our priest, who asked that a document from a bishop be published on the site. Unlike what we’d done to augment announcements, this looked like a bigger deal: a more serious publication assignment. documentsApp could handle it, sure, but having the document slug live behind a URL hash left one with the impression that sharing links to content was not a priority. I’ve written about prioritising the small things in a previous essay, so you can guess it mattered to me.

There was also the matter of this setup not playing nice with Open Graph tags, which cannot be dynamically updated on a single-page site with no server-side rendering. This was fine weeks before, when the entire strategy around the website was to not optimise for content; the editors weren’t coming.

The second salvo came from a parishioner who does publish content frequently, a weekly in fact, with a request that it be made available on the site. In a moment, we had transitioned from our Internet billboard to an outfit guaranteed to put out fresh content at least fifty times a year. With no place for all that content to live for easy reference, the message was clear: we had to go back.

And that was how, three days ago, I found myself adapting the single HTML document to a Jekyll project structure.

Architecture II

To avoid getting into the weeds of Jekyll, read this. The matter at hand was building a home for posts on the site, while trying not to change too much in the process. The current visual structure of the website was not optimised for long-form text, but the document canvas I built for announcements was particularly suited to it.

However, it required some user action to reveal the hidden content; otherwise, it was out of the way. Online publications sort of require the exact opposite: the content must be in the way; that’s the point. Marrying these two required a bit of thought, but I settled on a compromise I was happy with. I hope the others buy into it.

Our site was to have a Reader, available at churchdomain.com/reader, which would be the home of all published posts. Tapping on a link opened the post’s URL (where Open Graph tags are now set to post-specific content) and immediately popped up the Offcanvas, displaying the post’s content.

Post content was not inserted into the page with {{ content }}, as expected of Jekyll projects. It was to be driven by documentsApp, which needed a little modification to make this work: our VueJS app would no longer search a URL hash for a document slug, since the path itself could contain it.

This led to a somewhat unusual use-case for Jekyll’s posts that I want to highlight. Regular use of Jekyll for blogging requires a post file with YAML front matter on top and the rest of the post’s content below. Given the architecture discussed in the previous section, our Jekyll posts did not need any “content”, since the content lived in /documents/, where documentsApp could find and render it.

Our posts now contained only YAML front matter declaring what they were, so that post pages could be generated with the relevant Open Graph tags, and so that site.posts could populate /reader with what was available.
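
As an illustration, a post might be a file like _posts/2023-03-07-name-of-post.md containing nothing but front matter (the exact fields here are hypothetical, not our actual schema):

    ---
    layout: post
    title: Name of Post
    category: uncategorized
    description: Feeds the page's Open Graph description tag.
    ---

No body follows the closing delimiter; the post exists only to generate a page with the right metadata and to show up in site.posts.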

What tied both worlds together was this: the slug end of the post’s filename had to be the same as the name of the related document. A link like churchdomain.com/uncategorized/2023/03/07/name-of-post told documentsApp that we were reading a post in the uncategorized category, which was linked to some document at the path documents/name-of-post.md.
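
On the client, that mapping could be as simple as taking the last segment of the path (a sketch, assuming the date-style permalink shown above; slugFromPath is a hypothetical helper):

    // '/uncategorized/2023/03/07/name-of-post' -> 'documents/name-of-post'
    function slugFromPath(path) {
      const name = path.replace(/\/$/, '').split('/').pop();
      return 'documents/' + name;
    }

documentsApp could then fetch() /documents/name-of-post.md exactly as it does for hash-linked documents.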

This setup allowed us to have the best of both worlds: on the one hand, we had “first-class” handling of published content; on the other, we could still pull up documents by linking to the relevant slug with a URL hash from anywhere on the site. We could publish an article to the Reader, and “link” to the backing document in an announcement. Posts were decoupled from their content. At a small price, I hope.

What now?

I really don’t know. The site is mostly done, and barring any curveballs, we should be good to launch. Our content handling is just days old, so it may take time to work out the inconveniences and optimise our process for keeping the site updated with new content. Some training of the rest of the team is required. Let them ̶e̶a̶t̶ ̶c̶a̶k̶e̶ learn Markdown!

The journey to this version of our site has been fun, to say the least, and I hope all the effort will prove worthwhile. There’s an amazing team of people driving the way our community adapts to technology, and I’m just one part of the machine that’s been chugging along since late 2019.

There may be more church-dev-related content on this site once I get round to building our payments service.

If you enjoyed this, let me know.
