Image source: https://pngtree.com/so/new-year-2024

2024: New Year, New Hopes, New Paradigms & Technology Stacks

Hiraq Citra M
Published in lifefunk
7 min read · Jan 1, 2024

Happy New Year to everyone!

Day 1, January 2024

Today is the first day of 2024. I’m sure everyone has their own hopes and plans for this year, including me. So, what kind of plans do I have?

As a software engineer, I’ll focus on the new paradigms and technology stacks that I need to learn and use this year:

  • Rust
  • Decentralization & P2P
  • IPFS & IPLD
  • WASM (WebAssembly)

Why?

Quick answer:

Why not?

No, I’m just kidding 😀. Let me share why I decided on this paradigm and these technology stacks.

Why Rust?

Source: https://www.freecodecamp.org/news/rust-getting-started-with-the-most-loved-programming-language/

I’ve been following Rust since its initial release, and day by day the language and the community have grown exponentially. Although there have been some “debates” about things like the trademark policy and community toxicity, I think the language, the foundation, and especially the community will find their way to grow better.

I used to think that Rust was a hard language to learn. I gave up on it twice in the beginning; fortunately, my third attempt was successful.

Rust has its own unique characteristics, such as ownership & borrowing and lifetimes. The problems only appear when we learn the language for the first time and try to map it onto our own idealism and the languages we are used to.

Everything became “acceptable” only after I decided to set aside my previous knowledge and learn from scratch; then everything started to “make sense”.

Then there is the myth: “I will not be productive using this language due to its high barrier of complexity.” But how can we be productive using something that still holds so many unknowns for us? At first, I thought that way too. After changing my perspective, I have a different opinion now.

When we have to model complex business domain logic or a complex system, we become very productive and, more importantly, *safe*. The Rust compiler actively reminds us of anything that may cause a “glitch” in our system, even before we compile the codebase (try rust-analyzer if you want to see what I mean), as long as we follow its rules (no unsafe).
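A tiny sketch of what ownership & borrowing look like in practice (the names here are just illustrative):

```rust
fn main() {
    // Ownership: a String has exactly one owner.
    let name = String::from("lifefunk");

    // Borrowing: `greet` takes a shared reference, so `name` is not moved.
    let greeting = greet(&name);
    println!("{greeting}");

    // `name` is still usable here because we only lent it out.
    // If `greet` took `String` by value instead, the compiler (and
    // rust-analyzer, before we even build) would reject this next line
    // with "borrow of moved value".
    println!("still own: {name}");
}

// The return value is a new owned String, so no lifetime annotation is
// needed; if we returned a `&str` borrowed from the argument, the
// signature would have to carry that lifetime explicitly.
fn greet(name: &str) -> String {
    format!("Hello, {name}!")
}
```

The point is that these rules are checked statically: the “glitches” (use-after-move, dangling references) never make it past the compiler.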

I see Rust as a language like Java but with the performance of C++ and a concurrency story like Erlang’s. There is a common assumption that everything built with Rust should, or even must, be low-level, something like an OS kernel. That’s not wrong, but I’m not a “low-level guy”, and I think it’s perfectly acceptable to build high-level systems and applications with Rust as well.

That’s why I think it’s really acceptable to build such applications using Rust. The good news is that Rust already has an advantage here: “zero-cost abstraction”. It doesn’t matter if we want to build high-level abstractions, because in Rust they won’t affect performance. That’s why using Rust feels to me like using Java with a performance bonus from C++.
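A small sketch of what “zero-cost abstraction” means here: the high-level iterator chain below expresses the intent declaratively, yet the compiler optimizes it down to roughly the same machine code as the hand-written loop.

```rust
// High-level: iterator adapters and closures, no intermediate allocations.
fn sum_of_even_squares(limit: u64) -> u64 {
    (1..=limit).filter(|n| n % 2 == 0).map(|n| n * n).sum()
}

// Low-level: the explicit loop the iterator version compiles down to.
fn sum_of_even_squares_loop(limit: u64) -> u64 {
    let mut total = 0;
    let mut n = 1;
    while n <= limit {
        if n % 2 == 0 {
            total += n * n;
        }
        n += 1;
    }
    total
}

fn main() {
    // Same result, same performance class; only the abstraction differs.
    assert_eq!(sum_of_even_squares(10), sum_of_even_squares_loop(10));
    println!("{}", sum_of_even_squares(10)); // 4 + 16 + 36 + 64 + 100 = 220
}
```

We get Java-like expressiveness without paying a runtime tax for it.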

Why Decentralization & P2P?

There are multiple reasons I’ve chosen as my basis for moving to this paradigm.

Privacy. This is the most important reason why I chose decentralization and P2P. I don’t like it when strangers are able to send me a chat just because they know my “address”, such as my phone number.

Right now, my phone number acts like a “key” that lets anyone contact me under any conditions. It’s the same with an email address: once it spreads to the public, any entity on this planet can send us a ton of unnecessary emails.

Phone numbers and email addresses are examples of today’s privacy problems. Even if we ask the people who know our “address” not to share it, there is no guarantee, especially with a centralized entity. There is no guarantee that the centralized entity will not share our “key address”, and the worst part is that we have no power or control over our own “key”.

Decentralization and P2P can help solve this problem. There is no “centralized entity” that controls or even owns our “things”. We as users should control and own our own “things”, not them. It’s okay to share something with them, but the control stays in our hands.

The creator & distributed economy. What exactly is the creator economy? The creator economy, also known as the influencer economy, is a software-facilitated economy that allows content creators and influencers to earn revenue from their creations.

Source: https://en.wikipedia.org/wiki/Creator_economy

Let’s take the example of a “YouTuber” or “influencer” on a platform like YouTube or TikTok. Many people rally on these platforms and compete to create content that gets traction from specific audiences. Did you know that we actually don’t have control over our own content, and that the platform can remove it anytime they like, based on their own conditions?

Even with the revenue-sharing model, there is no guarantee that the numbers are fair, right? Whatever revenue we get from a platform, we have to accept it, whatever the conditions.

Distributed power & control. This topic is strongly related to the previous two, privacy and the creator economy. I have a strong opinion here: if we live in an environment where we have no control and no power, it means we are already controlled and owned by that “thing”.

As long as we don’t have any control or power, there is no fairness; all we can do is “accept” whatever the conditions are. That’s why distributed power & control is my reason here.

If someone or something holds too much power, that’s when the balance breaks.

Why IPFS & IPLD?

Why IPFS?

Our peer-to-peer content delivery network is built around the innovation of content addressing: store, retrieve, and locate data based on the fingerprint of its actual content rather than its name or location

There are three important concepts from IPFS that I agree with:

  • Open
  • Verifiable
  • Resilient

Why IPLD?

A data model for interoperable protocols

Content addressing through hashes has become a widely used means of connecting data in distributed systems, from the blockchains that run your favorite cryptocurrencies to the commits that back your code, to the web’s content at large

IPLD is a single namespace for all hash-inspired protocols. Through IPLD, links can be traversed across protocols, allowing you to explore data regardless of the underlying protocol
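The core idea behind both is content addressing: the address of a piece of data is derived from the data itself, so anyone can verify what they received. A minimal conceptual sketch, assuming nothing about the real IPFS API; note that real IPFS uses a cryptographic multihash (e.g. SHA-256) wrapped in a CID, while the standard-library hasher below is not cryptographic and only illustrates the idea:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Derive an "address" from the content itself (illustrative only;
// IPFS would produce a CID from a cryptographic hash instead).
fn content_address(content: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    content.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // A toy content-addressed store: the key is computed from the value,
    // not chosen by a server or tied to a location.
    let mut store: HashMap<u64, String> = HashMap::new();

    let data = "hello, decentralized world";
    let cid = content_address(data);
    store.insert(cid, data.to_string());

    // Anyone who has the content can recompute the same address, and
    // anyone who fetches by address can verify the bytes they got back.
    let fetched = store.get(&content_address(data)).unwrap();
    assert_eq!(content_address(fetched), cid); // verifiable
    println!("{cid:x} -> {fetched}");
}
```

This is exactly why IPFS can be open and resilient: the same content yields the same address no matter which peer serves it, and IPLD generalizes that linking model across hash-based protocols.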

Why WASM (WebAssembly)?

It provides a way to run code written in multiple languages on the web at near-native speed, with client apps running on the web that previously couldn’t have done so.

The original WebAssembly runtime was, of course, the browser: browser engines added the ability to execute WebAssembly programs from JavaScript.

These days, however, WASM runtimes go way beyond browsers

The reason I chose `WASM (WebAssembly)` as my next stack is portability. Since current WASM implementations can run beyond the browser, we can use WASM on the backend too.
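A sketch of what that portability looks like from the Rust side. The function name and the discount logic are made up for illustration; the point is that the same source compiles both natively and to a `.wasm` module that any WASM runtime can load and call:

```rust
// Exported with a C ABI so a WASM host (a browser via JS, or a
// server-side runtime) can look it up by name after instantiating
// the module. Compiled to WASM with something like:
//   rustc --target wasm32-unknown-unknown --crate-type cdylib lib.rs
// (the exact target/toolchain invocation depends on your setup).
#[no_mangle]
pub extern "C" fn apply_discount(price_cents: u64, percent: u64) -> u64 {
    // Pure business logic: no networking, no database wiring.
    price_cents - (price_cents * percent / 100)
}

fn main() {
    // Called natively here; a WASM host would instantiate the module
    // and invoke the exported function instead.
    println!("{}", apply_discount(10_000, 25)); // 7500
}
```

The module carries only the business logic; everything else (I/O, scheduling, persistence) is the host’s job.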

Last year we only talked about Monolith vs Microservices; today I see a different vision in which WASM will change most of that. I’m inspired by smart contract development, like using Solidity on EVM network platforms: all we need to do is code the business logic and upload it to the network, and the contract will run forever (as long as it is not destroyed).

Imagine if we had a similar system on the backend. It doesn’t mean we have to move everything to cryptocurrency networks, but I think it’s really possible that someday we will only need to build the network once, while being able to change the logic anytime we need.

Fewer things to work on means our productivity increases. Compare that with the current situation, where a backend engineer needs to build everything from scratch: networking, wiring up a database, magical frameworks, business abstraction models, and so on. It’s just so many things to do.

By implementing `WASM`, backend engineers would split into two groups:

  • Core / platform engineers: the group of engineers building the composable network that supports WASM
  • Product engineers: the people who use WASM to provide business logic abstractions and upload them to the network

Outro

This year will be a very interesting year for me. A lot of new things will happen, and I hope the same for you.
