This is a short post on debugging web workers.

One important aspect of being a software engineer is being able to quickly isolate errors between layers (browser/backend/database). In the browser, we are used to the Developer Tools, which tell us what errors happened, let us debug JavaScript, or spy on the Network tab to see which requests and responses went over the wire. But in some applications the work is done inside a Web Worker, which the Developer Tools don't seem to debug by default. Or do they?

With Chrome, you can navigate to chrome://inspect/#workers.

There you go, you have the Developer Tools…


In my previous posts, I've shown how GraphQL can speed up development by enabling flexible, efficient APIs and how it fits into a microservices architecture. But up to this point I was developing without any tests, as it was just example code; if we want to write production-quality code, we must be able to test the APIs.

A basic in-memory test

HotChocolate, the .NET Core framework for GraphQL, is kind enough to provide us with a simple way to execute a GraphQL query in memory, so we can run the query, get the JSON response and then assert on the response properties as we like.
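To make that concrete, here is a minimal sketch of such a test, assuming a recent HotChocolate version, xUnit, and a trivial, hypothetical Query type (not the schema from the previous posts):

```csharp
// A minimal in-memory HotChocolate test sketch (HotChocolate 11+ and xUnit assumed;
// the Query type below is a hypothetical example, not the schema from the posts).
using System.Threading.Tasks;
using HotChocolate.Execution;
using Microsoft.Extensions.DependencyInjection;
using Xunit;

public class Query
{
    // A trivial field so the schema has something to resolve.
    public string Hello() => "world";
}

public class GraphQLInMemoryTests
{
    [Fact]
    public async Task Hello_Query_Returns_Expected_Json()
    {
        // Build a request executor entirely in memory; no web server is needed.
        IRequestExecutor executor = await new ServiceCollection()
            .AddGraphQLServer()
            .AddQueryType<Query>()
            .BuildRequestExecutorAsync();

        // Execute the query exactly as a client would send it.
        IExecutionResult result = await executor.ExecuteAsync("{ hello }");

        // Serialise the result to JSON and assert on the payload.
        string json = result.ToJson();
        Assert.Contains("world", json);
    }
}
```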


Just as the industrial revolution dramatically increased the productivity of our societies, taking us to the next level, the cloud is making software development cheaper, faster and more resilient. Let me show you how:

The old world

If your company needed a website or other systems, you would have to buy your own physical servers and manage the operating system and networking yourselves. That process was expensive, slow and inefficient.

Let me tell you a true story from a friend who worked for a bank in Brazil around 2005.

This bank had a process for requesting a new server that took months…


As software engineers, we love developing features that make our customers' lives easier, but until those features are available to end users, they add no value to the business. Therefore the deployment process is an essential part of succeeding as a development team.

With six years of experience in continuous delivery, this is how I would classify the maturity levels for deployments:

Level 0 – YOLO (You only live once)

The deployment process is quite simple: the developer builds the code on their machine, then copies and pastes it onto the production server. …


In my last post, I explained why you could consider using a GraphQL API instead of REST. I used the example of an order summary page, where a mobile application might have to make multiple HTTP requests to fetch all the data. Those were individual calls, as this was a microservices architecture:


REST has put us in a much better position than the days of sharing XML contracts through SOAP. It is a good standard and has brought us a long way, but what makes it easy to understand and consume can also be a problem.

Resource-focused

REST is focused on single entities, with URLs such as:

  • GET /profile
  • GET /orders?customerId={customerId}
  • GET /product/1

Although it's a nice standard, single-resource retrievals can be expensive.

The e-commerce order summary example:

Imagine you are building an order summary page for an e-commerce site. For that you need the customer's orders, and for each order its total price and the items purchased, including quantity, unit cost and product name.
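To show why that gets expensive over plain REST, here is a rough sketch of the client-side fan-out the page would need; the endpoint shapes match the URLs above, but the DTOs and client class are illustrative assumptions:

```csharp
// Illustrative sketch of the REST calls an order summary page would have to make.
// The DTOs and endpoints below are assumptions, not the post's actual contracts.
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record OrderItem(int ProductId, int Quantity, decimal UnitCost);
public record Order(int Id, decimal TotalPrice, List<OrderItem> Items);
public record Product(int Id, string Name);

public class OrderSummaryClient
{
    private readonly HttpClient _http;
    public OrderSummaryClient(HttpClient http) => _http = http;

    public async Task BuildSummaryAsync(string customerId)
    {
        // One request for the customer's orders...
        var orders = await _http.GetFromJsonAsync<List<Order>>(
            $"/orders?customerId={customerId}");

        // ...plus one request per purchased product just to resolve its name.
        foreach (var order in orders!)
        {
            foreach (var item in order.Items)
            {
                var product = await _http.GetFromJsonAsync<Product>(
                    $"/product/{item.ProductId}");
                // Combine product!.Name with item.Quantity and item.UnitCost here.
            }
        }
    }
}
```

A GraphQL API lets the client ask for that whole shape in a single request instead of one round trip per resource.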


Azure resources generally have good integration with Application Insights for logs and APM (application performance monitoring). But sometimes your monitoring stack is with a third-party company, like New Relic, Zabbix or Datadog.

In this guide, we are going to see how to log every single request hitting an Azure API Management instance into Datadog.

What is Datadog?

Datadog is a monitoring service for cloud-scale applications, providing monitoring of servers, databases, tools, and services, through a SaaS-based data analytics platform. [Wikipedia]
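Just to give a concrete feel for what "into Datadog" means before we look at the API Management side, below is a minimal sketch of pushing a single log entry to Datadog's HTTP log intake. The endpoint, site/region and payload fields are assumptions based on Datadog's public log intake, and this is not the API Management pipeline itself:

```csharp
// Illustrative sketch only: a raw log submission to Datadog's HTTP log intake.
// Endpoint, region (datadoghq.com vs datadoghq.eu) and payload fields are
// assumptions; check Datadog's current docs. This is not the APIM flow itself.
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public static class DatadogLogSender
{
    public static async Task SendAsync(string apiKey, string message)
    {
        using var http = new HttpClient();
        // DD-API-KEY authenticates the request against your Datadog account.
        http.DefaultRequestHeaders.Add("DD-API-KEY", apiKey);

        var logEntry = new
        {
            ddsource = "azure-apim",       // assumed source tag for filtering
            service = "api-management",    // hypothetical service name
            message
        };

        // The logs intake accepts an array of entries.
        var response = await http.PostAsJsonAsync(
            "https://http-intake.logs.datadoghq.com/api/v2/logs",
            new[] { logEntry });

        response.EnsureSuccessStatusCode();
    }
}
```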

Logging from API Management

This is the base documentation where I learned this logging flow.

The steps


When starting a new development project you need to think about continuous delivery: you've got to have automated deployments. Manual deployments can get you a quick start but will cost you in the long run, even if the project isn't your normal Web API deployment.

I recently had to build an Azure API Management instance, which is Azure's version of an API gateway. Rather than having one client that has to know about many backend services, you can add an API gateway layer as:

It simplifies the life of the mobile app, but as an indirection layer it also lets you achieve:

  • Log…


Over the years you might have worked with either AWS or Azure as your cloud provider. Both offer fairly similar services, so experience in one of them roughly translates into the other, as long as you know the basics. So here is a map of the services and their brothers from another mother.

This is a shallow comparison of the main services, covering each service's purpose and the key differences I could think of, so once you know the service name you can do a full investigation.

Compute

Azure Virtual Machines / EC2 (Elastic Compute Cloud)

When you want to manage your own virtual machines (IaaS), this is the…


As my previous posts on TDD show, TDD can be great for greenfield projects, where a project starts with it and lets you reap all the benefits. But a greenfield project is a luxury we won't have a fair amount of the time in our careers; sometimes we will have to extend legacy codebases. How can we do this safely?

This post will add another technique to your TDD toolbelt: adding test coverage before extending legacy applications.
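One common way to get that coverage first is a characterization test: run the legacy code with representative inputs and pin whatever it does today, so any accidental behaviour change shows up while you extend it. A minimal sketch, using xUnit and a hypothetical LegacyPriceCalculator rather than anything from a real codebase:

```csharp
// Characterization-test sketch (xUnit assumed). LegacyPriceCalculator is a
// hypothetical stand-in for untested legacy code we want to cover before changing it.
using Xunit;

public class LegacyPriceCalculator
{
    public decimal CalculateTotal(int quantity, decimal unitPrice)
        => quantity * unitPrice; // imagine tangled, untested logic here
}

public class LegacyPriceCalculatorCharacterizationTests
{
    [Fact]
    public void Pins_Current_Behaviour_Before_We_Extend_It()
    {
        var calculator = new LegacyPriceCalculator();

        // Exercise the legacy code with representative inputs...
        var total = calculator.CalculateTotal(quantity: 3, unitPrice: 9.99m);

        // ...and assert whatever it returns today, even if it looks odd.
        // The test's job is to detect accidental behaviour changes, not correctness.
        Assert.Equal(29.97m, total);
    }
}
```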

The two ways to extend legacy codebases

Edit and Pray

Raphael Yoshiga

C#/Azure consultant at RYoshiga Consultancy ltd. TDD evangelist and passionate full-stack developer.
