Data Science and Machine Learning
Microsoft Power BI Guide — Part 3 (Preparing the Environment and Getting Started)
Before diving into our Power BI project, it’s essential to set up the environment correctly to ensure consistency and avoid potential…
Sandaruwan Herath
Jun 7
Microsoft Power BI Guide — Part 2 (Getting Familiar with Power BI through a Mini Project)
Before diving into complex projects, it’s crucial to become familiar with the Power BI interface and understand its core functionalities…
Sandaruwan Herath
Jun 7
Microsoft Power BI Guide — Part 1
Introduction to Power BI
Sandaruwan Herath
Jun 6
The decoder stack in the Transformer model
The decoder stack in the Transformer model, much like its encoder counterpart, consists of several layers, each featuring three main…
Sandaruwan Herath
May 1
Lung Localization with PyTorch Lightning
Over the last decade, deep learning, particularly Convolutional Neural Networks (CNNs), has significantly advanced the field of medical…
Sandaruwan Herath
Apr 23
The Feedforward Network (FFN) in The Transformer Model
The Transformer model revolutionizes language processing with its unique architecture, which includes a crucial component known as the…
Sandaruwan Herath
Apr 19
Post-Layer Normalization
In the intricate architecture of the Transformer, Post-Layer Normalization (Post-LN) plays a pivotal role in stabilizing the learning…
Sandaruwan Herath
Apr 17
Exploring the Multi-head Attention Sublayer in the Transformer
The multi-head attention mechanism is a hallmark of the Transformer model’s innovative approach to handling sequential data. It enhances…
Sandaruwan Herath
Apr 17
Positional Encoding in the Transformer Model
The positional encoding component of the Transformer model is vital as it adds information about the order of words in a sequence to the…
Sandaruwan Herath
Apr 17
Input Embedding Sublayer in the Transformer Model
The input embedding sublayer is crucial in the Transformer architecture as it converts input tokens into vectors of a specified dimension…
Sandaruwan Herath
Apr 17