Startups are some of the best launchpads for interns, fresh graduates, and other early-career professionals to gain valuable experience while working with the latest technologies and leading teams and processes.
In my experience, there’s a very consistent trend among startups: the earlier you get in, the more decisions you affect or contribute to. Trust me, this can be both good and bad, depending on the expectations you set when you join.
I’ve worked at three startups, two of them as an intern, and most recently in a full-time role as Technology Lead. I can easily say that the learning from working at startups far exceeds anything I got from an online course or college degree. …
Moving beyond the http package in Flutter for a streamlined approach to network requests and clean code practices.
Every mobile application has to communicate with an external API over the Internet to provide additional functionality that enhances the user experience and adds to the feature set of the application. This can include authentication, custom business logic, file uploads, etc.
Most Flutter developers use the http package to achieve this. While that works, there’s a better, lesser-known package out there called Dio.
Dio is a powerful HTTP client for Dart that supports interceptors, global configuration, FormData, request cancellation, file downloading, timeouts, and more.
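To give a quick taste of what that looks like in practice, here’s a minimal sketch of a GET request with Dio (assuming dio 5.x; the URL is a placeholder, not a real endpoint):

```dart
import 'package:dio/dio.dart';

Future<void> fetchData() async {
  // Global configuration lives in BaseOptions — here, a connection timeout.
  final dio = Dio(BaseOptions(
    connectTimeout: const Duration(seconds: 5),
  ));

  try {
    // Placeholder URL — substitute your own API endpoint.
    final response = await dio.get('https://api.example.com/data');
    print(response.data);
  } on DioException catch (e) {
    // Dio wraps failures (timeouts, non-2xx status codes, etc.) in DioException.
    print('Request failed: ${e.message}');
  }
}
```

Notice that timeouts and error handling come built in, rather than being something you bolt on around the http package.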
If you haven’t read Part 2, I highly encourage you to do so first, so you have the relevant context for this article. You can read it here —
Okay, so if you’ve followed along with the series, we now have our Node.js application load-balanced, with NGINX serving as our reverse proxy. In this part, we’ll optimize our server with gzip compression and secure all endpoints by enforcing HTTPS, using our own SSL certificate from Let’s Encrypt.
So, why compress? Compression lets us use network bandwidth more effectively and reduces the size of the payload a client has to ingest. But why compress at the NGINX layer when Express has a handy-dandy middleware module that you literally just have to install using —
npm install --save compression and then use it like…
This is Part 2 of the series on deploying Node.js applications to production environments, building a robust pipeline from development to deployment. In this part, we’re going to set up NGINX as a reverse proxy and do some basic load balancing.
If you haven’t read Part 1, I highly encourage you to do so first, so you have the relevant context for this article. You can read it here:
First, let’s clear up the jargon I threw into the description of this post. What is a reverse proxy? And before that, what is a proxy?
A proxy server is a go‑between or intermediary server that forwards requests for content from multiple clients to different servers across the Internet. A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server. A reverse proxy provides an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers.¹ …
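In NGINX terms, a minimal reverse-proxy-plus-load-balancing setup looks something like this sketch (the domain, ports, and upstream name are illustrative, not the article’s actual config):

```nginx
# Pool of backend Node.js instances; NGINX round-robins between them by default.
upstream node_backend {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

server {
    listen 80;
    server_name example.com;  # illustrative domain

    location / {
        proxy_pass http://node_backend;
        # Forward the original host and client IP to the backend.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Clients only ever talk to NGINX on port 80; the backend servers stay hidden behind it.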
This is Part 1 of the series, where we’re going to look at PM2, a process manager that helps scale our application.
While this is easy, it isn’t scalable, due to the single-threaded nature of Node.js. In most cases, the deployment system or virtual machine has more than one usable thread, and running the default command results in suboptimal usage of the available resources, since only one thread runs this Node.js process. Ideally, you’d want to run as many Node.js …