Beerway Oriented Programming in F# [ 2 ]


In this blog post we continue where we left off in the previous one. We aim to do the following:

  1. Configuring the literals so none of the excessive hardcoded mumbo-jumbo remains. We’ll persist our configuration in MongoDb via MLab and load it on startup of the process. The only hardcoded value will be the connection string for the Mongo server.
  2. Generalizing the scheduler to run the pipeline for multiple breweries.
  3. Scheduling the process to run at regular intervals.
  4. Persisting Scrapes via MongoDb.

We’ll be going over testing with Expecto and adding logging via Logary in the third blog post of this series.


I received a great pointer from Atle Rudshaug about handling the NoDifference case: we should treat NoDifference as a Success, i.e. if no difference is found, don’t send a text out.

Our changed Error Module now looks like:
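A minimal sketch of the reworked module follows; the exact failure-case names are illustrative, not necessarily the ones in the repository:

```fsharp
// Sketch of the reworked Error module: NoDifference is gone as a failure
// case, since an empty difference now flows through the Success track.
module Error =

    type Result<'TSuccess, 'TFailure> =
        | Success of 'TSuccess
        | Failure of 'TFailure

    // Illustrative failure cases; names are assumptions.
    type ErrorType =
        | ScrapeFailure of string
        | AlertFailure  of string

    // Railway-oriented bind: only apply f while on the Success track.
    let bind f result =
        match result with
        | Success s -> f s
        | Failure e -> Failure e
```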

The Compare function now simply pipes through the difference between the current and previous scrapes.
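In sketch form (a plain list difference; the original may use sets):

```fsharp
// compare just produces the set difference between the current and previous
// scrapes; deciding what to do with an empty difference is no longer its job.
let compare (previousBeers : string list) (currentBeers : string list) =
    currentBeers
    |> List.filter (fun beer -> not (List.contains beer previousBeers))
```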

The Alert function now has the onus of deciding whether or not to send a text, based on the cardinality of the difference set.
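Roughly like so, where sendText is a hypothetical stand-in for the Twilio-backed helper:

```fsharp
// alert owns the decision of whether to text at all: an empty difference
// set (the old NoDifference case) is a Success that sends nothing.
let alert (differences : string list) =
    match differences with
    | []    -> Success ()     // no difference -> Success, stay quiet
    | beers ->
        beers
        |> String.concat ", "
        |> sprintf "New beers on tap: %s"
        |> sendText           // hypothetical Twilio-backed helper
```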

Additionally, we’ll add a new string member called “Name” to our record, a step toward generalizing the pipeline that’ll be highlighted later on.

Our BeerInfo.fs, with the updated record type and Chiron-related static members, now looks like:
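A sketch of the shape, with assumed field and JSON key names; Chiron drives the (de)serialization through the static ToJson / FromJson members:

```fsharp
module BeerInfo

open Chiron

// Record with the new Name member; field names are assumptions.
type BeerInfo =
    { Name  : string
      Beers : string list }

    static member ToJson (b : BeerInfo) = json {
        do! Json.write "name"  b.Name
        do! Json.write "beers" b.Beers
    }

    static member FromJson (_ : BeerInfo) = json {
        let! name  = Json.read "name"
        let! beers = Json.read "beers"
        return { Name = name; Beers = beers }
    }
```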

Configuration and Generalizing

Let’s get rid of all the hardcoded literals and generalize the pipeline over a list of breweries rather than just the Tired Hands one. So far we have done a good job separating the Common components; we can do better! Let’s move all the configuration to the cloud and generalize the pipeline.


We’ll be using MLab’s free tier to persist all the configuration details. We start by creating a database called “beerwayorientedprogramming” and adding the configurations collection; this should be a pretty straightforward process. The UI of MLab is awesome! Feel free to reach out to me if you have any problems.

The configuration collection, for now, should contain a document with our Twilio details. We can decide later on if we want to add other fields here.

The configuration collection, once persisted, will look similar to the following:

{
  "_id": {
    "$oid": "5976bcc1734d1d6202aa1556"
  },
  "MyPhoneNumber": "your phone number",
  "AccountSID": "your twilio account sid",
  "AuthToken": "your twilio auth token",
  "SendingPhoneNumber": "your twilio sending phone number"
}

Communicating with the Database

Next, we’ll add the mongocsharpdriver and MongoDB.FSharp references via PAKET. If you are unsure how to do this, please refer to the previous post, which contains information on how to use PAKET, and double-check that the dependencies have been successfully referenced.

We’ll create a new module called Db in the Common.fs file, before the Error module, to hold all our database-related functionality. Additionally, we’ll gut out all the code that serializes / deserializes the JSON file we previously worked on in the Compare module.

The only literal to be hardcoded is that of the connection string [ if you want to be creative, you can keep this in a configuration file using the FSharp.Configuration library ].

All in all, the Db Module looks like:
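A sketch using the legacy mongocsharpdriver API plus MongoDB.FSharp’s serializers; the configuration record’s field names mirror the mLab document above, and the connection-string placeholder is just that, a placeholder:

```fsharp
module Db =

    open MongoDB.Bson
    open MongoDB.Driver
    open MongoDB.FSharp

    // The one hardcoded literal in the whole app.
    [<Literal>]
    let ConnectionString =
        "mongodb://<user>:<password>@<mlab-host>:<port>/beerwayorientedprogramming"

    // Teach the C# driver about F# records, lists and options.
    do Serializers.Register()

    let client = MongoClient(ConnectionString)
    let db     = client.GetServer().GetDatabase("beerwayorientedprogramming")

    // Shape of the configuration document persisted on mLab.
    type Configuration =
        { Id                 : BsonObjectId
          MyPhoneNumber      : string
          AccountSID         : string
          AuthToken          : string
          SendingPhoneNumber : string }

    // Grab the single configuration document on startup.
    let getConfiguration () =
        db.GetCollection<Configuration>("configurations").FindAll()
        |> Seq.head
```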

More details about Mongo + F# CRUD operations can be found in my previous blog post here. And the changed Alert module, now configuration-driven, looks like:
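Sketched with the Twilio 5.x helper library (if you’re on the older 4.x TwilioRestClient the calls differ); the point is that every Twilio literal is now read from the Mongo-backed configuration:

```fsharp
module Alert =

    open Twilio
    open Twilio.Rest.Api.V2010.Account
    open Twilio.Types

    // Loaded once on startup from the configurations collection.
    let private config = Db.getConfiguration ()

    let sendText body =
        TwilioClient.Init(config.AccountSID, config.AuthToken)
        MessageResource.Create(
            ``to`` = PhoneNumber config.MyPhoneNumber,
            from   = PhoneNumber config.SendingPhoneNumber,
            body   = body)
        |> ignore
```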


The only brewery-specific code will live in the brewery-specific parser and in the file containing the main function with that brewery’s pipeline. We’ll need to change up the Compare module to create the JSON file based on the name of the brewery.

The changed BeerwayOrientedProgramming module now looks like:
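As a sketch, each brewery contributes one scrape-compare-alert pipeline, and new breweries just get appended to the list:

```fsharp
module BeerwayOrientedProgramming =

    // One pipeline per brewery: scrape, diff against the previous scrape,
    // then decide whether to text.
    let tiredHandsPipeline () =
        TiredHandsScraper.scrape ()
        |> Compare.compare
        |> Alert.alert

    // Add new breweries here as their parsers get written.
    let breweryPipelines = [ tiredHandsPipeline ]

    [<EntryPoint>]
    let main argv =
        breweryPipelines |> List.iter (fun run -> run () |> ignore)
        0
```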

And the changed compare function in the Compare module now looks like:
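In sketch form, where toJsonString / fromJsonString are hypothetical Chiron-backed helpers, the scrape file is now named after the brewery so every pipeline keeps its own history:

```fsharp
open System.IO

// Diff the current scrape against the brewery's own JSON file, then
// overwrite that file with the current scrape for next time.
let compare (current : BeerInfo) =
    let fileName = sprintf "%s.json" current.Name
    let previousBeers =
        if File.Exists fileName
        then (File.ReadAllText fileName |> fromJsonString).Beers
        else []
    File.WriteAllText(fileName, toJsonString current)
    current.Beers
    |> List.filter (fun beer -> not (List.contains beer previousBeers))
```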


The next step is to set up a scheduler to run the breweryPipelines on a timer. For this, we’ll download Quartz.NET via PAKET.

Following this F# Snippet, we can easily set up a scheduled process that goes through all the breweries and parses the details every 2 seconds, forever.
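The scheduling sketch below follows the Quartz.NET 2.x API (the 3.x API is async and differs slightly):

```fsharp
open Quartz
open Quartz.Impl

// A Quartz job that runs every brewery pipeline once per firing.
type BreweryPipelineJob () =
    interface IJob with
        member __.Execute (_ : IJobExecutionContext) =
            BeerwayOrientedProgramming.breweryPipelines
            |> List.iter (fun run -> run () |> ignore)

let runScheduler () =
    let scheduler = StdSchedulerFactory().GetScheduler()
    scheduler.Start()

    let job = JobBuilder.Create<BreweryPipelineJob>().Build()

    // Fire immediately, then every 2 seconds, forever.
    let trigger =
        TriggerBuilder.Create()
            .StartNow()
            .WithSimpleSchedule(fun s ->
                s.WithIntervalInSeconds(2).RepeatForever() |> ignore)
            .Build()

    scheduler.ScheduleJob(job, trigger) |> ignore
```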

We don’t f#ck around with our beer acquisition, we make the process some enterprise level beer getting bazooka.

Persisting Scrapes

Finally, let’s add the ability to persist our scrapes to the same MongoDb “beerwayorientedprogramming” database.

In the same spirit of generalizing our process so that other brewery parsers are easy to add, we’ll name the database collections after the brewery, after gutting out the JSON serialization and deserialization to and from a file.

We’ll start off by gutting all the old JSON serialization and deserialization components: we revisit our BeerInfo record type, remove the Chiron-based static members, and add a MongoDb Id of type BsonObjectId.

The new BeerInfo module looks like:
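A sketch of the Mongo-friendly record (field names are assumptions): the Chiron members are gone, an Id has appeared, and Beers has changed type:

```fsharp
module BeerInfo

open System.Collections.Generic
open MongoDB.Bson

// Plain record the MongoDb driver can persist directly; no Chiron members.
type BeerInfo =
    { Id    : BsonObjectId
      Name  : string
      Beers : List<string> }   // generic List for the C# driver's sake
```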

If you notice, we changed the type of ‘Beers’ from an F# list to a System.Collections.Generic one for the sake of conforming to the C# MongoDb driver that the F# one is built on top of.

We’ll now remove the reference to Chiron since we don’t need it any more. This is done by opening up the Command Palette [ Cmd + Shift + P ] and navigating into PAKET’s remove reference in the following way after opening up the fsproj file:

Once the reference to Chiron is removed, we’ll add a couple of methods to our Db module pertinent to creating new Ids as well as getting the previous scrape.
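Sketched as additions inside the Db module (reusing the db handle from earlier); sorting descending by _id works because ObjectIds grow with insertion time, and an insert helper is included since persistence now lives here:

```fsharp
open System.Linq
open MongoDB.Bson
open MongoDB.Driver.Builders

// Mint a fresh ObjectId for each scrape.
let createId () = BsonObjectId(ObjectId.GenerateNewId())

// Insert a scrape into the collection named after the brewery.
let insertScrape (info : BeerInfo) =
    db.GetCollection<BeerInfo>(info.Name).Insert(info) |> ignore

// Most recent previous scrape for a brewery, or null if there is none.
let getPreviousScrape (breweryName : string) : BeerInfo =
    try
        db.GetCollection<BeerInfo>(breweryName)
            .FindAll()
            .SetSortOrder(SortBy.Descending("_id"))
            .FirstOrDefault()
    with _ ->
        // If getting the collection fails, recreate it and report no scrape.
        db.CreateCollection(breweryName) |> ignore
        Unchecked.defaultof<BeerInfo>
```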

If there is an exception while trying to get the collection by the name of the brewery, we’ll try to recreate it in the with block.

We have moved the complexity of persisting the scrapes out of the Compare module and into the Db one, where we grab the last scrape. We check if the last scrape is null [ after casting it to an object to check for nullability, since we are using FirstOrDefault() ].

Our updated TiredHandsScraper.scrape function will now look like:
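Roughly like this, assuming a Db.insertScrape helper that writes the document into the brewery-named collection:

```fsharp
// Build the BeerInfo record straight from the site, persist it,
// and hand it to the rest of the pipeline.
let scrape () =
    let info : BeerInfo =
        { Id    = Db.createId ()
          Name  = "TiredHands"
          Beers = getBeerNamesFromTiredHands () }
    Db.insertScrape info
    info
```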

with the getBeerNamesFromTiredHands function looking like:
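A sketch using FSharp.Data’s HTML parser; the URL and the CSS class used to pick out beer names are assumptions about the Tired Hands markup, not the site’s actual structure:

```fsharp
open FSharp.Data

// Scrape the tap list and hand back a generic List for the Mongo driver.
let getBeerNamesFromTiredHands () =
    let beers =
        HtmlDocument.Load("https://www.tiredhands.com/")   // assumed URL
            .Descendants ["div"]
        |> Seq.filter (fun node -> node.AttributeValue "class" = "beer-title")
        |> Seq.map (fun node -> node.InnerText().Trim())
    System.Collections.Generic.List<string>(beers)
```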

Additionally, our Compare module will be significantly simplified:
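In sketch form, mirroring the null check described above (boxing the record to test for nullability, since FirstOrDefault() may hand back null):

```fsharp
module Compare =

    // Diff the current scrape against the last one persisted in Mongo.
    // No previous document means every beer counts as new.
    let compare (current : BeerInfo) =
        let previous = Db.getPreviousScrape current.Name
        match box previous with
        | null -> List.ofSeq current.Beers
        | _    ->
            current.Beers
            |> Seq.filter (fun beer -> not (previous.Beers.Contains beer))
            |> List.ofSeq
```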

It’s awesome to have our scrapes persisting, which can be confirmed by checking out the documents in the TiredHands collection:


We have definitely come a long way by adding configuration, generalization, scheduling and persistence. As mentioned before, the next and last post in this series will add testing and logging to fully bolster this once simple application into a fully evolved one.

As always, I do appreciate your feedback!
