Migrating our trusty ol’ .NET Framework applications to AWS, hold on! One foot at a time!

Breaking the monolith!

François Bouteruche
My Local Farmer Engineering
9 min read · Nov 9, 2021


If you read my previous post about our first contact (👽) with .NET on AWS, you know that my team and I had more than mixed feelings about it. If you also have serious doubts about how AWS could support a .NET developer, I really encourage you to read it.

Once we overcame our unconscious bias, we faced a mountain. Where to start? I will try to share with you what we’ve learnt from the trenches. I hope it helps you climb your own mountain.

Disclaimer
I Love My Local Farmer is a fictional company inspired by customer interactions with AWS Solutions Architects. Any stories told in this blog are not related to a specific customer. Similarities with any real companies, people, or situations are purely coincidental. Stories in this blog represent the views of the authors and are not endorsed by AWS.

So, over the years, we’ve built a large monolithic system. We added feature upon feature. You can even tell that several generations of developers have worked on the code base just from the differences in programming style. Today, our longest-standing team members have the deepest knowledge of the history of our source code. However, from time to time, we land in some obscure part of the code base that even our wise and venerable team members fail to explain (do you see how careful I am to avoid saying our elders? 😊). Our system works like a charm BUT it is more and more complex to evolve. If you read our post presenting our engineering org, you know that we have a quarterly release cadence. How could it be any different with such complexity? If you don’t feel our pain, you’re blessed.

I remember our first meetings discussing how we could migrate our system to AWS. We rapidly ended up with two camps. The first camp was saying: “This system is a monolith. We can’t break it up, so we have to move it all at once to AWS or leave it on premises, where it’ll live a good life until we replace it.” The second camp was saying: “Calm down, young Padawans. This system is Swiss horology. It took years to fine-tune it and it works perfectly well. We can’t afford the risk of moving it all at once to AWS, and rewriting it would take years. We need to find a way to move a small part first.”

TL;DR: the second camp won. We agreed that we needed to get our hands on AWS with a small part of the system first and then go for the big bang. The challenge is how to do this when you have a monolith. Well, that’s the rest of this story. I’ll start with a bit of history (or archaeology, depending on how old you are). Then I will explain how, with a pick, we detached a small rock from our monolith. This small part will be launched to the cloud in our next post. Today, I really want to focus on how we dealt with our monolith, because it is not an easy one.

Once upon a time…

You probably don’t know that when I Love My Local Farmer was founded in the late 2000s, it was just a website with a simple directory listing farmers in our area. There were three people to run it: Firmine Yzdee, our CEO, Jean Dupont, our COO, and Inès Adberrahmane, our CTO. To quickly grow the number of indexed farmers, they decided to leverage collective intelligence. In more pragmatic terms, they offered a web form so that visitors could submit new farms to the directory. Each submission was recorded in a database. Then, they would review the submission to ensure it was not crappy data and validate it for posting in the directory.

Inès quickly built a simple yet very effective system for three people. The web form was an ASP.NET Web Forms website plugged into a SQL Server database. On the backend, they would connect to the database with Excel to review and validate the data. That was it. The first version of our Collaborative Farm Discovery system, aka Disco, was born.

As our farmer directory transformed into an online marketplace connecting consumers with local farmers, our business grew and so did Disco. Disco became an acquisition funnel for farms. The more farms are registered on the marketplace, the more consumers we can connect to them. At the beginning of the funnel, our digital marketing teams lead consumers or farmers to the form to get new farm submissions. Then our B2B sales teams qualify submissions, connect with farmers, and try to convert submissions into new farmer subscriptions. Once a farmer subscribes to our marketplace, the whole contract and billing lifecycle is managed through Disco.

So, over time, as sales and finance teams were established, the Excel spreadsheet was replaced by a handmade system mixing CRM with contract and billing management features. We purpose-built it to fit the exact needs of our stakeholders, and we call it Disco CRM.

Nowadays, the core of Disco is our SQL Server database. We often represent Disco with the database at the center of the diagram, the farm submission form on the left, and Disco CRM on the right, as on the diagram below. Our database is the source of truth for everything related to farms and their subscriptions to our marketplace. We own a good old relational model where you can navigate from a record in the FarmSubmissions table to a record in the Subscriptions table through the relations between entities. I can’t disclose the number of tables we have in this model. I can, however, give you a sense of where we are. Imagine a model where even phone numbers have their own table, with a one-to-many relation between the Farms table and the Phones table. We also have the same relation between the Farmers table and the Phones table. Get it?
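To give a concrete (and entirely made-up) flavor of that model, here is how such a one-to-many relation could look as Entity Framework 6 code-first classes. The class and property names below are illustrative only, not our real schema:

```csharp
using System.Collections.Generic;

// Hypothetical sketch: even phone numbers live in their own table,
// with a one-to-many relation from Farms to Phones.
public class Farm
{
    public int FarmId { get; set; }
    public string Name { get; set; }

    // Navigation property: one farm, many phone numbers.
    public virtual ICollection<Phone> Phones { get; set; }
}

public class Phone
{
    public int PhoneId { get; set; }
    public string Number { get; set; }

    // Foreign key back to the owning farm.
    public int FarmId { get; set; }
    public virtual Farm Farm { get; set; }
}
```

Now multiply this level of normalization across the whole domain and you get a sense of the navigation paths, and the coupling, in our model.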

The farm submission form is still based on ASP.NET Web Forms today. It was the early days of ASP.NET MVC and Entity Framework when we started to build Disco CRM, and we decided to follow the trend. Over the years, we upgraded our code to each new version of each framework. We now use ASP.NET Web Forms 4.8, ASP.NET MVC 5, and Entity Framework 6 running on top of the .NET Framework 4.8 runtime.

Now that you know what we had, it’s time to write about how we broke it into two pieces.

…we used a sledgehammer to break our monolith

First, we discussed our main pain points with Disco. We realized that the biggest one we had never really solved in a satisfactory manner was the impact of traffic peaks on the form during TV ad campaigns or when we are mentioned in TV shows or TV news. The number of connections goes up, as does the number of inserts into the FarmSubmissions table. We reached the breaking point of our web servers and our database servers many times. Each time, we solved it by adding servers to our web farms and by buying even more powerful database servers. We are now the happy owners of a most-of-the-time-oversized infrastructure 👍

We saw an opportunity to solve this issue elegantly by decoupling the public-facing part of Disco from the internal one. But we needed to find a crack in our monolithic system, a spot we could hit with a sledgehammer to break off a small piece without undermining the whole structure. From the outside, we obviously had two components: the ASP.NET Web Forms website and the Disco CRM. We should have been able to deploy them independently. We were not, because of our monolithic database relational model.

So we started a deep analysis of our relational model to determine which tables were mainly used by the ASP.NET Web Forms website and which were mainly used by Disco CRM. We found that a lot of tables are only used by Disco CRM, but all the tables used by the website are also used by Disco CRM. We also found that the website only feeds the tables it uses; it doesn’t read or update them afterwards. We also realized that the FarmSubmissions table lets us know whether a submission record has already been processed on the Disco CRM side. When the website adds a new record to this table, it sets its status to 0, meaning it’s a brand-new record. We have a Windows service that scans the table for new records every minute. It analyzes the new records for obvious junk. It deletes the crappy records and sets the status of legitimate records to 1, meaning they are now visible to human operators in Disco CRM.
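As a sketch of that flow, the service loop boils down to something like the following. The names are made up for this post: LooksLikeJunk stands in for our real validation rules, and DiscoContext for our Entity Framework context.

```csharp
using System.Linq;

public static class SubmissionScanner
{
    const int StatusNew = 0;       // just written by the web form
    const int StatusValidated = 1; // visible to operators in Disco CRM

    // Called every minute from the Windows service.
    public static void ScanOnce(DiscoContext db)
    {
        var fresh = db.FarmSubmissions
            .Where(s => s.Status == StatusNew)
            .ToList();

        foreach (var submission in fresh)
        {
            if (LooksLikeJunk(submission))
                db.FarmSubmissions.Remove(submission); // drop obvious junk
            else
                submission.Status = StatusValidated;   // hand over to Disco CRM
        }

        db.SaveChanges();
    }

    // Placeholder for the real validation rules.
    static bool LooksLikeJunk(FarmSubmission s) =>
        string.IsNullOrWhiteSpace(s.FarmName);
}
```

Nothing fancy: a polling loop and a status column acting as a tiny state machine between the website and the CRM.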

It finally became evident where our crack was: the FarmSubmissions table and this Windows service. We could host our ASP.NET Web Forms website and a small part of the database model on AWS. The Windows service would now fetch the new records from the database hosted on AWS and then transfer the legitimate records to our database in our corporate datacenter.
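Under the same made-up names as before, the reworked service could be sketched like this. One assumption to flag: here, processed records are removed from the cloud-side table once transferred, which is one possible design, not a statement of how ours works.

```csharp
using System.Linq;

public static class SubmissionTransfer
{
    const int StatusNew = 0;
    const int StatusValidated = 1;

    // Hypothetical sketch: the service now reads from the AWS-hosted
    // database and writes legitimate submissions to the on-premises one.
    public static void TransferOnce(DiscoContext cloudDb, DiscoContext onPremDb)
    {
        var pending = cloudDb.FarmSubmissions
            .Where(s => s.Status == StatusNew)
            .ToList();

        foreach (var submission in pending)
        {
            if (!LooksLikeJunk(submission))
            {
                submission.Status = StatusValidated;      // visible in Disco CRM
                onPremDb.FarmSubmissions.Add(submission); // copy on premises
            }
            cloudDb.FarmSubmissions.Remove(submission);   // processed either way
        }

        onPremDb.SaveChanges();
        cloudDb.SaveChanges();
    }

    // Placeholder for the real validation rules.
    static bool LooksLikeJunk(FarmSubmission s) =>
        string.IsNullOrWhiteSpace(s.FarmName);
}
```

The same little state machine, just stretched across two databases: the cloud side absorbs the traffic peaks, and the on-premises side keeps its pace.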

What we’ve learnt

That was it! We had found a way to put one foot on AWS without taking too many risks. Here is what we learned at that point: migrating to AWS is all about deciding how much to tackle while managing risk.

On one side, we have hundreds of sales and finance people using Disco CRM on a daily basis. We couldn’t afford the risk of migrating Disco all at once with very little knowledge of AWS and of how to operate our system on it. Can you imagine the money the company could lose if everything went wrong?

On the other side, the company, as a whole, couldn’t afford the luxury of immobility. The COVID-19 crisis has deeply shaken our business, and with the growing interest in healthy food, new competitors are entering the market. We need to reduce our operating costs and accelerate innovation at the same time, and AWS has proven it could help there in our first projects (implementing a VPN solution in a few days, using serverless to build up a new service in 4 weeks).

So we found our way to move forward. We saw this little crack in our monolith that helped us break the system into two parts. I don’t mean that you will find a crack in your monolith, or that you even need one. Maybe you already have some pieces of software that operate more independently than the rest, like batch processing. You may have a system already built with a service-oriented or microservices architecture. You may already have REST APIs that you can move independently. My take here is that you need to balance the risks you have to deal with and find your own way to move forward.

I’ve recently discovered a quote from Jeff Bezos that we could apply to our .NET development team:

“Day 2 is stasis. Followed by irrelevance. Followed by excruciating, painful decline. Followed by death.”

My team was slowly falling into stasis. A brand new world was opening up in front of us with the new open-source and cross-platform .NET runtime, and we were still stuck on .NET Framework with no one willing to move. It was time to start moving forward.

In our next post, we will discuss which AWS services we selected to host our ASP.NET Web Forms site and our fragment of Disco database.

In the meantime, if you have comments, questions, or feedback, feel free to reply to this post and follow us to be notified of our next posts.
