Our application is a monolithic application written over many years in .NET and MySQL. We run several standalone services, web services, and API endpoints, but they all communicate through our database, so in the hip, young lingo they do not qualify as microservices. They serve us well nonetheless.
Our system is fully hosted on AWS. It runs on EC2 in two regions and several availability zones. We started with AWS before VPCs, before ELBs had a configurable timeout, before Beanstalk, and before many of the tools abundant today. Thus, we developed our own tools to manage our system: we manage our app servers ourselves, we run our databases on EC2 instead of RDS, we manage our own Redis clusters, and we use SimpleDB instead of DynamoDB.
It all works beautifully: scalable, resilient, and fault tolerant. It has one downside, however. When our back office needs enterprise reports across the whole system, they have to log in to different servers in different regions and availability zones, run the reports there, and then consolidate them manually. It works, but it is not ideal. One reason we just got nominated as one of the Best Places to Work in Illinois is that we try not to settle for "works, but not ideal".
So, when our back office requested a new enterprise report, it was time to put some new technologies to work. Instead of queueing the report locally when it is requested, we now use the AWS SDK for .NET to post a message to an AWS SQS queue. Since we only need one queue for this purpose, I created it using the AWS console in a few clicks.
To process the messages I wrote a microservice in Go. The first order of business was to connect to MySQL. This was not hard, but we use stored procedures heavily in our system, and most of the Go MySQL libraries did not handle stored procedures because the built-in sql driver interface does not handle multiple result sets. Initially this was going to be a showstopper, until I discovered "github.com/ziutek/mymysql/mysql", which handles stored procedures nicely.
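Calling a stored procedure and walking its result sets with mymysql looks roughly like this. This is a sketch: the connection details and the procedure name are placeholders, and error handling is abbreviated.

```go
package main

import (
	"fmt"

	"github.com/ziutek/mymysql/mysql"
	_ "github.com/ziutek/mymysql/native" // registers the pure-Go protocol engine
)

func main() {
	// Placeholder connection details.
	db := mysql.New("tcp", "", "127.0.0.1:3306", "user", "password", "reports")
	if err := db.Connect(); err != nil {
		panic(err)
	}
	defer db.Close()

	// Start (unlike a plain Query) lets us walk each result set in turn,
	// which is what a stored procedure call produces.
	res, err := db.Start("CALL enterprise_report(?)", "2015-03")
	if err != nil {
		panic(err)
	}
	for {
		rows, err := res.GetRows()
		if err != nil {
			panic(err)
		}
		for _, row := range rows {
			fmt.Println(row)
		}
		if !res.MoreResults() {
			break
		}
		if res, err = res.NextResult(); err != nil {
			panic(err)
		}
	}
}
```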
The next step was to connect to AWS SQS. To do that I used the "github.com/goamz/goamz/aws" package. Amazon recently announced they are working on their own Go SDK; I will probably end up using it for production when it is released, but for now the goamz package works great. When we post the message to SQS we specify which databases we want the report to run in. On the Go side, we pull the message from SQS and iterate over the databases, connecting, running the reports, and assembling the output. For good measure, we use a local Redis install to check the message keys so we do not process an SQS message twice for one reason or another. Once the report is executed we delete the message from SQS and continue polling to process any messages received in the future.
For each report we generate, we upload the output to S3 and produce a signed URL to access it; the URL is valid for only 24 hours. The only thing that remains is to provide the back office with the URLs for those reports. For that we use another AWS service, AWS SNS. We created a topic in SNS and our back office subscribed to it. The microservice posts a message to the topic with the signed URLs, AWS SNS notifies our back office, and they click the links to download the enterprise reports for all the environments they seek.
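Producing the 24-hour signed URL with goamz is a local signing operation, no extra API call needed. A sketch, assuming credentials in the environment and a placeholder bucket and key:

```go
package main

import (
	"fmt"
	"time"

	"github.com/goamz/goamz/aws"
	"github.com/goamz/goamz/s3"
)

func main() {
	// Reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment.
	auth, err := aws.EnvAuth()
	if err != nil {
		panic(err)
	}
	client := s3.New(auth, aws.USEast)
	bucket := client.Bucket("enterprise-reports") // placeholder bucket name

	// After uploading the report output to this key, sign a URL that
	// expires 24 hours from now.
	url := bucket.SignedURL("reports/monthly-2015-03.csv", time.Now().Add(24*time.Hour))
	fmt.Println(url) // this URL goes into the SNS notification
}
```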
What makes this rock is not only that our back office now presses one button and gets an enterprise-wide report, but that the whole thing is completely decoupled from our system. We run it all on a spot instance. The instance can go away completely and it will not impact our end users or our system in any way. Conversely, events occurring on our main system do not affect this new microservice. It is decoupled, scalable, and beautiful.
Next step: dockerize the Go microservice and the Redis install. Giddy up!