<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Omar Mohamed on Medium]]></title>
        <description><![CDATA[Stories by Omar Mohamed on Medium]]></description>
        <link>https://medium.com/@omarmohamed286?source=rss-cb5be1d65da6------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/0*HYpm8e8DWb2ZR5nX</url>
            <title>Stories by Omar Mohamed on Medium</title>
            <link>https://medium.com/@omarmohamed286?source=rss-cb5be1d65da6------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Fri, 15 May 2026 16:03:05 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@omarmohamed286/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Write better React code: minimize looking up]]></title>
            <link>https://medium.com/@omarmohamed286/write-better-react-code-minimize-looking-up-94963f4ae65b?source=rss-cb5be1d65da6------2</link>
            <guid isPermaLink="false">https://medium.com/p/94963f4ae65b</guid>
            <category><![CDATA[react]]></category>
            <category><![CDATA[front-end-development]]></category>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[software-engineering]]></category>
            <dc:creator><![CDATA[Omar Mohamed]]></dc:creator>
            <pubDate>Wed, 10 Dec 2025 21:00:58 GMT</pubDate>
            <atom:updated>2025-12-10T21:00:58.863Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/960/1*xy6Ixl2hDTIijPQqAiq65w.jpeg" /><figcaption>source: <a href="https://www.bitsbyamel.com/posts/How-to-write-clean-React-code-13-tips-for-better-readability-and-maintainability">bitsbyamel</a>.com</figcaption></figure><p>So I was reading <a href="https://r.je/encapsulation-dont-look-up">this article</a> and found a really great statement: <strong>Don’t look up.</strong> It means that, in order to write flexible, reusable, encapsulated, loosely coupled code, each entity in your code (a class or a function) should be self-contained and unaware of the implementation details of other entities. The author phrased it as “don’t look up”, and I liked that a lot; it made sense to me.</p><p>One example is why global variables are considered a bad practice: a function should only be aware of what’s inside its own scope and the arguments it defines. Using a global variable forces the function to <strong>look up</strong>, outside of its own world.</p><p>When I tried to project that onto the framework I use the most nowadays, React (yes, I called it a framework, I hope that doesn’t make you nauseous), I found that dependencies between parents and children are extremely common. That’s the whole point of components: they are functions that depend on each other to render UI, so they are obligated to look up by definition. Yet you can minimize looking up by keeping your components small and generic, and by letting each one know as little about its parent as possible.</p><p>In some situations, though, we can eliminate the coupling between the parent and the child completely by using the <strong>elements as props</strong> pattern. So instead of:</p><pre>type ButtonProps = {<br>  isLoading: boolean;<br>  isError: boolean;<br>};<br><br>const Button = ({ isLoading, isError }: ButtonProps) =&gt; {<br>  return (<br>    &lt;button&gt;<br>      Submit {isLoading ? &lt;Loading /&gt; : isError ? &lt;Error /&gt; : null}<br>    &lt;/button&gt;<br>  );<br>};</pre><p>We can do:</p><pre>type ButtonProps = {<br>  icon: ReactNode;<br>};<br><br>const Button = ({ icon }: ButtonProps) =&gt; {<br>  return &lt;button&gt;Submit {icon}&lt;/button&gt;;<br>};</pre><p>Or:</p><pre>const Button = ({ children }: PropsWithChildren) =&gt; {<br>  return &lt;button&gt;Submit {children}&lt;/button&gt;;<br>};</pre><p>This solves the tight coupling problem so well: the button is self-contained, it <strong>doesn’t look up</strong>, and we don’t need to add 10 props for 10 different states. It does leave a lot of flexibility to whoever implements the component, though; they can pass whatever they want. Choose the appropriate approach for your use case.</p><p>This can also help us understand one thing React borrowed from the functional programming world: <strong>pure</strong> functions.</p><p>In math, a function is a mapping between two spaces (a space is a set of values). For instance, (1, 2) =&gt; (2, 4) is a mapping between two spaces. Not all mappings are functions, though: for a mapping to be a function, each input from the first space must <strong>always</strong> map to the same output in the second space. Our mapping could be represented by the function y = 2x; for this to be a function, giving it x = 2 should produce 4 every time.</p><p>In programming, this is not generally true. In most languages that I know, functions aren’t required to be deterministic, or “pure”; take a function that returns a random number as an example.</p><p>That’s why a pure function is the closest thing to the mathematical definition of a function: a pure function should always return the same output for the same input, and it shouldn’t perform any side effects.</p>
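<p>As a minimal sketch of the difference (the names here are just for illustration), compare an impure function that looks up and mutates state outside its scope with a pure one that depends only on its arguments:</p><pre>// Impure: reads and mutates state outside its own scope,<br>// so the same call can produce different results over time.<br>let taxRate = 0.14;<br><br>const addTaxImpure = (price: number): number =&gt; {<br>  taxRate += 0.01; // side effect: mutates outer state<br>  return price * (1 + taxRate);<br>};<br><br>// Pure: depends only on its arguments and performs no side effects,<br>// so addTax(100, 0.14) returns 114 every single time.<br>const addTax = (price: number, rate: number): number =&gt; price * (1 + rate);</pre><p>The pure version is predictable and trivially testable, precisely because it never looks up.</p>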
<p>If you think about it, this aligns so well with the statement <strong>don’t look up</strong>: if our function doesn’t perform any side effects, it is concerned with its own scope only, which forces it not to look up and not to depend on the implementation details of other entities. That makes it predictable, reusable, and testable.</p><p>The content of this article may look random and disconnected; these were just some thoughts in my mind that I put down here, and I’m not sure they make sense together. Thanks for reading anyway.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How I Improved a Website’s Performance by 85%]]></title>
            <link>https://medium.com/@omarmohamed286/how-i-improved-a-websites-performance-by-85-e22b0555307a?source=rss-cb5be1d65da6------2</link>
            <guid isPermaLink="false">https://medium.com/p/e22b0555307a</guid>
            <category><![CDATA[web-performance]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[html]]></category>
            <category><![CDATA[front-end-development]]></category>
            <dc:creator><![CDATA[Omar Mohamed]]></dc:creator>
            <pubDate>Mon, 17 Nov 2025 00:43:02 GMT</pubDate>
            <atom:updated>2025-11-17T11:00:19.122Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3zZVeSsTQP84pG9enKRpog.png" /></figure><p>So I had this static website built with <strong>Astro</strong>, and one of its pages had an LCP of 10.89 seconds! It was almost the only interactive page on the whole website, and its interactive parts were written in <strong>React</strong>. In this quick story I’ll demonstrate how I managed to reduce that LCP to 1.55 seconds.</p><p>First of all, the numbers were measured on Vercel’s free plan, without caching, and in my normal browser (which means that some of my Chrome extensions contribute to the initial loading time). Now let’s get started.</p><h3>When does “rel=preload” improve performance? (and when it doesn’t)</h3><p>Preload is a resource hint that tells the browser to download a resource as soon as possible.</p><p>As I mentioned, the website is built with <strong>Astro</strong>. What I found out is that, in any client component other than the ones with the “client:only” directive, Astro was adding preload link tags for all the images in that component. I have no idea about the cause of this; I don’t know if it was something I was doing wrong or if it’s Astro itself. If you have any idea, please share it!</p><p>One might say: good, it’s a performance boost. Actually, it’s not. Here is why:</p><h4>1. If everything is important, then nothing is.</h4><p>If you prioritize all the images, then they are all equal and preload becomes pointless. The whole page had 20+ images, and preloading all of them can lead to <strong>bandwidth contention</strong>.</p><h4>2. Use it without “as” and you lose it.</h4><p>The link tags looked like this:</p><pre>&lt;link rel=&quot;preload&quot; href=&quot;/image.png&quot;&gt;</pre><p>When you use preload, you need to specify the “as” attribute to declare the resource type. If you don’t, the browser can end up downloading the resource twice, so you’d be creating the bottleneck yourself. A better version of this link tag looks like this:</p><pre>&lt;link rel=&quot;preload&quot; href=&quot;/image.png&quot; as=&quot;image&quot;&gt;</pre><p>And of course, if the image isn’t on the same origin, you must also set the “crossorigin” attribute.</p><p>So this behavior was literally destroying the performance: a lot of unnecessary preloads, and even worse, without an “as” attribute, which led to the images loading twice.</p><p>I ended up converting the client components from client:visible and client:load to client:only=”react” to prevent this behavior, and I didn’t add any preloads myself, because preload is most useful for late-discovered resources, like CSS background images for instance.</p><p>Notice that the client:only directive makes the component fully client-side rendered, which means its place will be empty until it loads. This could have hurt my CLS, but I added a predefined width and height for the components’ area. Alternatively, you can use a loading fallback slot; just make sure it covers the component’s area when the component loads, to avoid any layout shifts.</p><h3>Just use WebP.</h3><p>The preload problem delayed my LCP image because it had to load twice before it could be rendered, and removing the preload solved that. The second problem was the image format. I converted the image right away from PNG to WebP using lossless conversion: the original size was something like 300KB, and after conversion it was 15KB. I could have compressed it even further, but I didn’t want to mess with the image’s quality.</p><h3>Font Awesome CDN taking too long.</h3><p>The Font Awesome CDN request was spending more than 300ms just waiting for a response. I don’t know if that’s normal, but I think it’s a lot; my guess is that the CDN request pulls in the whole library or something. Anyway, I had two options:</p><h4>1. Dump the library and use images for the icons.</h4><h4>2. Defer the request.</h4><p>Well, I chose the second one because it’s easier. Deferring the request was a reasonable decision because I was only using the icons at the very end of the page, in the footer. I just used <strong>media=print</strong>, which is an old trick to load CSS asynchronously. I don’t know if there are any modern solutions.</p>
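<p>For reference, this is roughly what the media=print trick looks like (the stylesheet URL here is just a placeholder): the stylesheet is requested with a non-matching media type, so it doesn’t block rendering, and it is switched to “all” once it has loaded:</p><pre>&lt;link rel=&quot;stylesheet&quot; href=&quot;/fontawesome.css&quot; media=&quot;print&quot; onload=&quot;this.media='all'&quot;&gt;</pre><p>Browsers still download non-matching stylesheets, but at a low priority and without blocking rendering, which is exactly what we want for icons that only appear in the footer.</p>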
<p>That was actually enough to get a much better LCP. I had one more problem, YouTube scripts blocking rendering because I was using YouTube iframes, but I was already tired at that point. I might look into it later.</p><p>So yeah, that’s everything I wanted to say. I hope it was beneficial. Thanks for your time.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Principal Component Analysis (PCA) Clearly Explained]]></title>
            <link>https://medium.com/@omarmohamed286/principal-component-analysis-pca-clearly-explained-7956f71824cf?source=rss-cb5be1d65da6------2</link>
            <guid isPermaLink="false">https://medium.com/p/7956f71824cf</guid>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[data-analysis]]></category>
            <category><![CDATA[dimensionality-reduction]]></category>
            <dc:creator><![CDATA[Omar Mohamed]]></dc:creator>
            <pubDate>Sun, 17 Mar 2024 17:31:39 GMT</pubDate>
            <atom:updated>2024-03-17T17:31:39.363Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BpgowaeeN-C2oqyB9GmadQ.png" /></figure><p>Principal component analysis is a <strong><em>dimensionality reduction</em></strong> algorithm that is used a lot when dealing with data. In this story I’m going to explain PCA and how it works while keeping things as intuitive as possible. It’s also my first story, so your feedback will be appreciated a lot!</p><h3>Dimensionality Reduction — What?</h3><p>There are several algorithms for dimensionality reduction, and PCA is one of them. But what is dimensionality reduction?</p><p>Simply put, dimensionality reduction is the process of transforming our data from a high-dimensional space to a lower-dimensional space.</p><p>For instance, if we have a dataset with 100 features, or “columns”, then with the help of an appropriate dimensionality reduction technique we can reduce the number of features to only 3 or 2. We have thereby performed dimensionality reduction, because we had 100 dimensions and we brought them down to only 3 or 2.</p><p>Notice that this process will inevitably cause a bit of information loss. Maybe after applying dimensionality reduction we lose 5% of the original information, which isn’t a big deal, right? (Later on we will see this part in detail, and how the information loss is calculated exactly.)</p><h3>Dimensionality Reduction — Why?</h3><p>As we said, after applying dimensionality reduction we can go from 100 features to only 3 or 2. Why would we do that?</p><ol><li>One obvious use is <strong>visualization</strong>. With 100 features we can’t visualize how the features behave together, since we can’t plot more than 3 features on a graph. After transforming our 100 features into only 3 or 2, it’s easy to plot the data on a 3D or 2D graph.</li><li>The more features a model has, the longer it takes to train. A model with hundreds or thousands of features takes much longer than a model with a small number of features. Of course, the small number of features won’t describe 100% of the original data, but you can weigh the trade-off between model complexity and information loss.</li><li>High-dimensional data can in some cases lead to <strong>overfitting</strong>; reducing the dimensionality of the data can mitigate that.</li></ol><h3>Dimensionality Reduction — How?</h3><p>One way to apply dimensionality reduction is the PCA algorithm. To understand how PCA works, I will first go over its mathematical building blocks. If you are already familiar with these concepts (covariance, correlation, the covariance matrix, eigenvectors, eigenvalues), feel free to skip this part.</p><h3>PCA Mathematical Building Blocks</h3><h4>Covariance</h4><p>Covariance is a measure of how two variables change together. To see what that means, look at this image:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/795/1*nudPwGJR4l3hfFRhj3eiQA.png" /><figcaption>Covariance</figcaption></figure><p>The graph on the right indicates positive covariance between two variables, meaning that whenever one variable increases, the second increases as well. The one on the left indicates negative covariance: whenever one variable increases, the other decreases. The one in the middle indicates near-zero covariance, i.e. no obvious relation between the variables. This is how to calculate the covariance between two variables:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/598/1*8KGlYScvjTyjkaA5leo1XQ.png" /><figcaption>Covariance Formula</figcaption></figure>
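<p>As a quick sketch of the formula above (the function name is just for illustration), here is the sample covariance in plain TypeScript:</p><pre>// Sample covariance of two equally sized number arrays:<br>// the average product of deviations from each mean, over n - 1.<br>const covariance = (x: number[], y: number[]): number =&gt; {<br>  const n = x.length;<br>  const meanX = x.reduce((a, b) =&gt; a + b, 0) / n;<br>  const meanY = y.reduce((a, b) =&gt; a + b, 0) / n;<br>  let sum = 0;<br>  for (let i = 0; i &lt; n; i++) {<br>    sum += (x[i] - meanX) * (y[i] - meanY);<br>  }<br>  return sum / (n - 1);<br>};<br><br>console.log(covariance([1, 2, 3, 4], [2, 4, 6, 8])); // positive, ≈ 3.33</pre>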
<h4>Correlation</h4><p>Correlation is nothing but a standardized form of covariance, and it tells us the strength of the relationship directly: while covariance can be any large positive or negative number, or zero, correlation varies between -1 and 1. A correlation of 1 means a strong positive relationship, a correlation of -1 means a strong negative relationship, and a correlation of zero means no correlation whatsoever.</p><h4>Covariance Matrix</h4><p>When dealing with high-dimensional data we have many variables, not just 2, so we use the covariance matrix: a matrix that contains the covariance between every pair of our variables. It looks like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/416/1*InDuPA-L9PCnwRQAQt2GjQ.png" /><figcaption>Covariance Matrix</figcaption></figure><p>The one on the left is the covariance matrix for 3 variables A, B, and C. As you can see, it’s a square matrix, and each column is just the covariance between one variable and all the other variables. The one on the right is the same covariance matrix after standardization, which is why all the values are no greater than 1 (it may contain negative values, but in this example the correlation between all the variables is positive). Also notice that the diagonal is all ones, because the correlation between a variable and itself is 1.</p><p>Now you may ask why it would be useful to build this matrix. I will talk about the intuition behind the covariance matrix and what it actually tells us later on.</p><h4>Eigenvectors and Eigenvalues</h4><p>This is just a quick recap; you have probably studied this in linear algebra before.</p><p>In linear algebra, each matrix can be seen as a linear transformation: when we multiply the matrix by a vector, it <strong>“transforms”</strong> that vector into another one, and in fact it transforms every vector in the space, not just that one. This transformation can be a rotation, a scaling, or anything else.</p><p>When a linear transformation is applied to a space, all the vectors in the space are transformed. The interesting thing is that there are special vectors that don’t change their direction under the transformation; they only get scaled. These vectors are called eigenvectors, and the magnitude of the scaling factor applied to each of them is its eigenvalue.</p><p>Here is an imaginary example:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/819/1*Xz95r7d50VLxmk1G2YaanQ.png" /></figure><p>Say we have two vectors A and B, and we apply some linear transformation to the space they live in. We can see that vector A has the same direction (the same angle with the positive x-axis) before and after the transformation, while vector B changed its direction. Here, vector A is an eigenvector, and the magnitude of the scaling that happened to it is its eigenvalue. For instance, if A was (2,3) and after the transformation it became (4,6), then its eigenvalue is 2, because it got scaled by 2.</p>
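<p>To make the (2,3) to (4,6) example concrete, here is a tiny sketch (the matrix is made up for illustration) that applies a 2×2 matrix to a vector and shows which vectors only get scaled:</p><pre>type Vec2 = [number, number];<br>type Mat2 = [Vec2, Vec2];<br><br>// Apply a 2×2 matrix (a linear transformation) to a 2D vector.<br>const apply = (m: Mat2, v: Vec2): Vec2 =&gt; [<br>  m[0][0] * v[0] + m[0][1] * v[1],<br>  m[1][0] * v[0] + m[1][1] * v[1],<br>];<br><br>// A made-up matrix whose eigenvectors are (2,3) and (1,1).<br>const t: Mat2 = [[-1, 2], [-3, 4]];<br><br>console.log(apply(t, [2, 3])); // [4, 6] = 2 * (2,3): eigenvector, eigenvalue 2<br>console.log(apply(t, [1, 1])); // [1, 1] = 1 * (1,1): eigenvector, eigenvalue 1<br>console.log(apply(t, [1, 0])); // [-1, -3]: direction changed, not an eigenvector</pre>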
<p>Important note: each matrix has an associated set of eigenvectors. If the matrix is symmetric, and hence diagonalizable, it has as many eigenvectors as its dimension. This means that a 5×5 covariance matrix of standardized data points (so that its diagonal is all ones) will have 5 associated eigenvectors and eigenvalues. Keep that in mind, because it will help you understand PCA better.</p><p>Now, after this quick recap of the math, I think we are ready to delve into the actual steps of PCA.</p><h3>PCA Intuition</h3><p>What do we want from PCA? We want to go from a high-dimensional space to a lower-dimensional space. To get the intuition, say we want to go from 2D to 1D, and consider this example:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/928/1*3doNm0edRPwT5JuQBas4TQ.png" /></figure><p>The goal of PCA is to find “principal components” that we can project our data onto while retaining as much information as possible.</p><p>(From now on I assume you are familiar with what projection is.)</p><p>In the example above, projecting our data onto vector A is much better than projecting it onto B: A retains much more of the information in the original data than B, because the data projected onto A is more spread out. So here, A is a principal component. Say we manage to preserve 95% of the information; we can then project all the data points onto A (using the dot product), and we end up with a 1D line that represents the 2D data with only a 5% loss of information. Isn’t that cool? This generalizes as well: you can go from 3D to 2D by projecting onto a 2D hyperplane.</p><p>In a nutshell, PCA finds the principal components (which are vectors) that we can project the data onto, taking us from a high-dimensional space to a low-dimensional one. We just went from 2D to 1D by projecting the data onto a 1D vector. Practically speaking, we can have 100 features in a 100D space, use PCA to find 3 principal components, project the data onto them, and go from 100D to 3D. How does PCA find these principal components? Let’s see how it really works.</p><h3>PCA Steps</h3><ol><li>Data standardization: remember what we said about covariance and correlation? Correlation is a standardized form of covariance, so in the first step we standardize all the data points so that each feature has zero mean and unit variance, by applying this formula:</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/553/1*wVhJB6wP762g7uSqoPwW0w.png" /><figcaption>Data Standardization</figcaption></figure><p>2. Covariance matrix: in the second step we build the covariance matrix, which will be a square matrix with values between -1 and 1 and a diagonal of all ones (as we saw when I talked about the covariance matrix above). We will understand the intuition behind this later.</p><p>3. Eigenvectors and eigenvalues: now we calculate the eigenvectors and eigenvalues of the covariance matrix, and guess what? These eigenvectors are the principal components we are looking for (we will see why).</p><p>To be clearer: if the covariance matrix is 100×100, meaning we have 100 features, we will get 100 eigenvectors and eigenvalues (remember the note from above). Each eigenvector carries some of the information in the data, and the one with the biggest eigenvalue is the one that best represents the data. To get exactly how much of the information each eigenvector (principal component) retains, we divide its eigenvalue by the sum of all the eigenvalues. For instance, if we have one eigenvector with eigenvalue 2.23 and another with eigenvalue 0.19, the first carries 92% of the information (2.23 / (2.23 + 0.19)) and the second carries 8% (0.19 / (2.23 + 0.19)).</p>
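<p>That last calculation is easy to sketch in code (a minimal illustration, assuming the eigenvalues have already been computed by whatever linear algebra routine you use):</p><pre>// Share of information retained by each principal component:<br>// its eigenvalue divided by the sum of all eigenvalues.<br>const explainedVarianceRatio = (eigenvalues: number[]): number[] =&gt; {<br>  const total = eigenvalues.reduce((a, b) =&gt; a + b, 0);<br>  return eigenvalues.map((v) =&gt; v / total);<br>};<br><br>console.log(explainedVarianceRatio([2.23, 0.19]));<br>// [0.921..., 0.078...] -&gt; roughly 92% and 8%</pre>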
<p>After calculating the eigenvectors and eigenvalues, and depending on your case, you can project your data onto the eigenvectors that represent most of the information in the original data. Say you had 3 features and you want to reduce them to 2. After running the PCA steps you will get 3 principal components; maybe the first carries 97% of the information, the second 2%, and the third 1%. To go from 3 features to 2, you project your original data onto the first two principal components and retain 99% of the information (97% from the first and 2% from the second).</p><h3>Eigenvectors Of The Covariance Matrix Are The Principal Components — Why?</h3><p>To get the intuition behind this, we have to see the covariance matrix as a linear transformation. The covariance matrix describes the relations between all the variables in the data, so when it is applied to any vector, it rotates that vector toward the direction of highest variance in the data. What is the direction of highest variance? It’s exactly the direction that the covariance between variables tells us about!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/740/1*DFk9UXLwejSusTLaGca-jw.png" /></figure><p>This line is the direction of highest variance, because all the data points stretch in that direction. And what I meant by “it takes any vector and rotates it toward the direction of highest variance” is this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/958/1*LDERoAYfE00XKW7i1wzkrQ.png" /></figure><p>The graph on the right shows vector A after applying the covariance matrix to it (that is, multiplying the covariance matrix by vector A to transform it). As we can see, after the transformation, vector A is closer to the direction of highest variance. If we apply this transformation over and over (infinitely many times), vector A converges to the direction of highest variance itself (this is essentially the idea behind power iteration), which means vector A becomes the best vector to project the data onto.</p><p>This is why in PCA we say that the eigenvectors of the covariance matrix are the principal components: if applying the covariance matrix to a vector doesn’t change its direction but only scales it, then that vector already points in the direction of highest variance, because the covariance matrix didn’t rotate it. And such a vector is, by definition, an eigenvector.</p><h3>Conclusion</h3><p>I hope this was clear. If anything is confusing, feel free to ask in the comments. And if you want to code PCA yourself, you can find plenty of code snippets out there.</p>]]></content:encoded>
        </item>
    </channel>
</rss>