<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Guneet Kohli on Medium]]></title>
        <description><![CDATA[Stories by Guneet Kohli on Medium]]></description>
        <link>https://medium.com/@guneet-kohli?source=rss-accb7af71a6e------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*LlepV8eOVMxSvimieI3HjA.jpeg</url>
            <title>Stories by Guneet Kohli on Medium</title>
            <link>https://medium.com/@guneet-kohli?source=rss-accb7af71a6e------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 11 May 2026 14:51:10 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@guneet-kohli/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[AI Teamwork: Can Multiple LLMs Work Better Than One?]]></title>
            <link>https://guneet-kohli.medium.com/ai-teamwork-can-multiple-llms-work-better-than-one-720b14115f03?source=rss-accb7af71a6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/720b14115f03</guid>
            <category><![CDATA[rags]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[genai]]></category>
            <category><![CDATA[transformers]]></category>
            <category><![CDATA[llm]]></category>
            <dc:creator><![CDATA[Guneet Kohli]]></dc:creator>
            <pubDate>Mon, 24 Mar 2025 01:37:24 GMT</pubDate>
            <atom:updated>2025-03-24T01:43:25.720Z</atom:updated>
<content:encoded><![CDATA[<h3><strong>Is combining multiple LLMs a way to unlock AI’s full potential?</strong></h3><p>We often hear the saying, ‘Teamwork makes the dream work.’ But does this concept apply to AI as well? Is it possible that combining multiple LLMs could result in better and faster responses? Could LLMs operate on the principle of ‘Divide and Conquer’ to optimize their performance? These questions have been on my mind recently.</p><p>Curious, I ran a small experiment. While working on a set of questions, I wanted faster responses. So, I tried using different LLMs for different parts of the task — and surprisingly, the results came in quicker. Each model handled a specific block, and together, they outpaced a single model working alone. Fascinating, right?</p><h3><strong>Why Did This Work?</strong></h3><p>The success of the experiment got me thinking — was it due to token-per-minute limitations, or do multiple LLMs actually complement each other? It turns out that most LLMs, although trained on vast datasets, still develop distinct strengths and weaknesses. Some are better at factual recall, while others excel at creative writing or problem-solving. By dividing the task into smaller parts and assigning each model to the area it handles best, I unintentionally created an AI team — where each “teammate” played to its strengths. Isn’t that interesting? This realization was striking: even AI, like humans, performs better when tasks are divided efficiently.</p><p>For the same reason, it is actually advisable to keep prompts for an LLM short and to the point. When a prompt exceeds a certain length, the model appears more prone to hallucination. This could be because LLMs are essentially transformers, and their attention gets spread too thin when we pack too many instructions into a single prompt.</p><h3>The Divide and Conquer Strategy in AI</h3><p>Ahh, we are back to our DSA class and all those algorithms. Evidently, dividing a complex task into smaller, more manageable pieces works in AI as well. This could be due to multiple reasons:</p><ul><li><strong>Smaller prompts tailored to specific tasks result in more relevant, high-quality responses.</strong></li><li><strong>Different models can work simultaneously on separate parts of a task, reducing response time.</strong></li></ul><p>Basically, we created a powerful team — like the <em>Avengers</em> — where superheroes combine their unique powers for a greater outcome. By leveraging different types of intelligence, I created an architecture with multiple models:</p><ul><li><strong>Model A</strong> — Great at summarizing complex information.</li><li><strong>Model B</strong> — Strong at creative generation and phrasing.</li><li><strong>Model C</strong> — Expert at fact-checking or providing technical details.</li></ul><p>Individually, each model has blind spots. Together, they cover for one another, producing responses that are not only faster but more accurate and nuanced.</p>
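<p>To make the pattern concrete, here is a minimal sketch of how sub-tasks can be farmed out to different models in parallel. The summarize, rewrite and fact_check functions are stand-ins for calls to three different LLM APIs, not the exact code from my experiment:</p><pre>
import concurrent.futures
import time

# Placeholder "models": in a real setup these would call three different LLM APIs.
def summarize(text):
    time.sleep(0.5)  # simulate model latency
    return "summary: " + text

def rewrite(text):
    time.sleep(0.5)
    return "polished draft: " + text

def fact_check(text):
    time.sleep(0.5)
    return "fact-check notes: " + text

# Divide and conquer: each sub-task goes to the model best suited for it.
sub_tasks = [
    (summarize, "part 1 of the question set"),
    (rewrite, "part 2 of the question set"),
    (fact_check, "part 3 of the question set"),
]

# All three calls run concurrently instead of queuing behind a single model.
with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(fn, part) for fn, part in sub_tasks]
    results = [f.result() for f in futures]

# Stitch the partial answers back into one response.
print("\n".join(results))
</pre><p>Swapping the placeholders for real API clients keeps the structure the same: split the task, send each piece to the model that handles it best, then stitch the partial answers back together.</p>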
<p>Here is the Python code for plotting the comparison of average response times for Case 1, where all three slots used the same model, against Case 2, which used a combination of different models.</p><pre>import matplotlib.pyplot as plt<br>import numpy as np<br><br># Data for Case 1 and Case 2<br>x_labels_case1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]<br>model1_case1 = [0.43, 0.986466, 0.190241, 0.173561, 0.336204, 0.253013, 0.204127, 5.047971, 0.310801, 0.213484]<br>model2_case1 = [23.401517, 4.092776, 18.999459, 5.986792, 9.521706, 13.6195, 43.721286, 12.749018, 7.95277, 9.120764]<br>model3_case1 = [2.110472, 4.574461, 4.597078, 2.824468, 16.169483, 2.062155, 3.891042, 3.305207, 5.735195, 57.138562]<br><br>x_labels_case2 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]<br>model1_case2 = [0.286384, 0.232897, 0.237311, 0.25821, 0.216052, 0.243387, 0.158544, 0.352585, 0.246721, 0.252649]<br>model2_case2 = [17.495792, 9.731949, 6.361989, 4.899729, 5.199543, 6.430908, 64.843112, 4.980411, 12.679339, 11.60478]<br>model3_case2 = [1.868402, 1.726916, 3.097865, 2.197605, 1.498171, 1.592312, 3.160078, 2.001775, 2.075913, 3.977281]<br><br># Average of response times for Case 1<br>avg_case1_model1 = np.mean(model1_case1)<br>avg_case1_model2 = np.mean(model2_case1)<br>avg_case1_model3 = np.mean(model3_case1)<br>avg_case1 = (avg_case1_model1 + avg_case1_model2 + avg_case1_model3) / 3<br><br># Average of response times for Case 2<br>avg_case2_model1 = np.mean(model1_case2)<br>avg_case2_model2 = np.mean(model2_case2)<br>avg_case2_model3 = np.mean(model3_case2)<br>avg_case2 = (avg_case2_model1 + avg_case2_model2 + avg_case2_model3) / 3<br><br># Create x labels for average comparisons<br>x_labels_avg = [&#39;Case 1&#39;, &#39;Case 2&#39;]<br><br># Data for plotting<br>averages = [avg_case1, avg_case2]<br><br># Plotting the comparison<br>plt.figure(figsize=(8, 5))<br>plt.plot(x_labels_avg, averages, marker=&#39;o&#39;, linestyle=&#39;-&#39;, color=&#39;b&#39;, label=&quot;Average Response Time&quot;)<br>plt.fill_between(x_labels_avg, 0, averages, color=&#39;skyblue&#39;, alpha=0.3)<br><br># Customize the plot<br>plt.title(&quot;Comparison of Average Response Times (Case 1 vs Case 2)&quot;)<br>plt.xlabel(&quot;Case Type&quot;)<br>plt.ylabel(&quot;Average Response Time (s)&quot;)<br>plt.grid(True, axis=&#39;y&#39;, linestyle=&#39;--&#39;, alpha=0.7)<br>plt.legend(loc=&#39;upper right&#39;)<br>plt.tight_layout()<br>plt.show()</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/790/1*KqxywQmF6fIZO0A5kHtxzg.png" /><figcaption>Comparison of Average Response Times: Same Models ( Case 1 ) vs Different Models ( Case 2 )</figcaption></figure><p>As represented by the graph, the average response time came down in the case where a combination of models was used.</p><h3>Potential Applications: Where Can This Approach Shine?</h3><p>Use cases for this approach include Research Assistance, Customer Support and Code Generation.</p><ul><li><strong>Customer Support:</strong> A fast-response bot handles FAQs, while a more empathetic model deals with complex or emotional queries.</li><li><strong>Research Assistance:</strong> One model summarizes studies, another analyzes data, and a third helps formulate insights.</li><li><strong>Code Generation:</strong> One model writes code, another optimizes for performance, and a third checks for errors.</li></ul>
<p>Given that this industry grows every single day and the models keep getting better, this approach can be applied almost anywhere.</p><h3>The Challenges: Is It All Smooth Sailing?</h3><p>Well, is there anything we can get without a little tradeoff? This approach demands more compute power and, yes, higher costs. But the extra reliability becomes essential when dealing with healthcare or finance data. Furthermore, while multiple models can process tasks in parallel, managing their outputs and stitching them into a coherent final answer requires smart orchestration.</p><h3>Final Thoughts: Is the Future of AI Collaborative?</h3><p>My experiment left me with more questions than answers — but one thing is clear: combining LLMs taps into something powerful. It shifts the narrative from <em>“Which model is the best?”</em> to <em>“How can different models work together to achieve more?”</em></p><p>So, could AI teamwork truly make the dream work? Based on what I’ve seen so far, <strong>it’s not just possible — it might be inevitable.</strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=720b14115f03" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How Large Language Models Work: A Beginner’s Guide with Lazy Learner Mike]]></title>
            <link>https://guneet-kohli.medium.com/how-large-language-models-work-a-beginners-guide-with-lazy-learner-mike-b3fcc1505301?source=rss-accb7af71a6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/b3fcc1505301</guid>
            <category><![CDATA[genai]]></category>
            <category><![CDATA[vectorization]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[transformers]]></category>
            <dc:creator><![CDATA[Guneet Kohli]]></dc:creator>
            <pubDate>Sun, 16 Mar 2025 23:54:20 GMT</pubDate>
            <atom:updated>2025-03-16T23:54:20.423Z</atom:updated>
<content:encoded><![CDATA[<h3>The Hype Around Google: A Comparison to Today’s LLMs</h3><p>Assume we are in the late 90s, when Google was just released. The hype surrounding Google was overwhelming, with many viewing it as a god-like entity; there are even pages online arguing why Google should be considered “God.”</p><p><a href="https://churchofgoogle.org/Proof_Google_Is_God.html">Is Google God?</a></p><p>Large Language Models (LLMs) are deep-learning models trained on vast amounts of data, often described as being trained on an “infinite” amount of information. This is the standard definition you’ll come across on the web, though usually phrased more elegantly. But how do we really understand what an LLM is? Today, much like the overwhelming hype around Google in its early days, people are treating LLMs almost like divine entities, believing they can answer any question. But what’s the science and reasoning behind this widespread belief? Let’s break it down.</p><h3>Introducing Lazy Learner Mike: A Curious Mind</h3><p>To understand what an <strong>LLM</strong> is, let’s imagine it as a person — <strong>Lazy Learner Mike</strong>. Mike is a curious guy who wants to know what is happening all over the world without scouring newspapers or watching the news with his Dad all night; he would rather play video games and still have some way of fetching all that information. How exactly does Mike plan on doing this?</p><h3>Mike’s Plan: Building a Deep Learning Model</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BAjZjChMkaglVZHK2hKkkw.png" /><figcaption>Transformer Architecture (comprising Encoder, Decoder)</figcaption></figure><p>Mike is no ordinary guy. He’s incredibly smart, skilled in coding, math, and stats, and has a sharp eye for patterns. Instead of reading everything manually, Mike decides to create a deep learning model based on the <strong>Transformer architecture</strong> (don’t worry, not the robots from the movies!). His goal is to use this model to perform pattern matching and retrieve information in response to his questions.</p><h3>Discovering Vectorization: Turning Words into Numbers</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*fYhj-wNnLxKbUjWkM2JXxQ.png" /><figcaption>Documents converted into vector embeddings and then saved in a vector database</figcaption></figure><p>To his surprise, Mike stumbles upon <strong>vectorization</strong> — a technique introduced by one of his university seniors. Basically, it means turning words into numbers so a computer can understand them, just like a student turning notes into formulas for solving problems. Vectorization is what makes pattern matching practical: the model converts words and sentences into numerical data (vectors), making it easier to identify and process patterns in language. With this, Mike realizes that pattern matching is an ideal way to fetch relevant responses to his questions, based on a vast amount of data.</p><h3>The Learning Process: How the Model Gets Smarter</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/412/1*CAiFjE8TtKFRi_VHRYQIyw.png" /><figcaption>Learning based on user questions</figcaption></figure><p>As Mike continues his journey with his deep learning model, he realizes that the key to getting better responses isn’t just about gathering more data, but also about understanding how the model “learns” from that data. Just like Mike uses vectorization to turn the world’s knowledge into patterns he can easily process, LLMs also take vast amounts of text, break it down into patterns, and then “learn” from them.</p>
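<p>To get a feel for what “turning words into numbers” looks like, here is a toy sketch using TF-IDF vectors. This is far simpler than the learned embeddings real LLMs use, but the idea carries over: once text becomes a vector, finding related text becomes a numerical comparison.</p><pre>
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Three tiny "documents" Mike might want to search through
docs = [
    "The stock market fell sharply today",
    "Share prices dropped across global markets",
    "Mike stayed up all night playing video games",
]

# Turn each document into a numeric vector (one dimension per word)
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

# Vectorize a question and score it against every document
question = ["What happened in the markets today?"]
scores = cosine_similarity(vectorizer.transform(question), doc_vectors)[0]

# The highest-scoring document is the closest match to the question
best_match = scores.argmax()
print(docs[best_match], round(float(scores[best_match]), 3))
</pre>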
<h3>Trial and Error: Continuous Improvement of the Model</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/894/1*DAc45Au_WFX2ROB1O8ukgQ.png" /><figcaption>Will our LLM Hit or Miss in this iteration?</figcaption></figure><p>The more Mike practices asking his model questions, the more it begins to refine its responses, understanding context, nuances, and even more complex queries over time. Similarly, LLMs are trained on huge datasets to recognize language patterns, structure, and meaning, allowing them to generate responses that are coherent and contextually relevant. While Mike’s curiosity drives his learning, LLMs use their training and patterns to continually improve, making them powerful tools for finding information, just like Mike’s model was designed to help him avoid reading everything manually.</p><h3>The Final Outcome: A Model That Fetches Information Like a Pro</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Uc37Aw35sC2NCF_F3Hajqw.png" /><figcaption>Information Retrieved successfully</figcaption></figure><p>In the end, Mike’s model, much like LLMs, isn’t perfect at first. It learns through trial and error, continually improving its ability to understand and respond based on the patterns it has identified. And just as Mike can now rely on his model to fetch answers to his questions without sifting through piles of information, we can rely on LLMs to help us process and generate human-like responses based on the vast knowledge they’ve absorbed.</p><h3>Conclusion: How This Relates to LLMs Today</h3><p>In conclusion, Large Language Models (LLMs) are much like Mike’s deep learning model — constantly learning from the vast amounts of data they’re trained on and improving over time. Just as Mike’s model helps him find answers more easily, LLMs help us by understanding and responding to our questions based on patterns they’ve learned. While these models are still evolving, they hold the potential to make information retrieval faster and more efficient, just like having a super-smart assistant at your fingertips that gets smarter every time you talk to it.</p><h3>End Note: The Future of LLMs</h3><p>As with any technology, LLMs are still evolving. While they’re powerful tools today, they will only continue to improve as they learn from more data and refine their understanding. Who knows? In the future, LLMs might be just as integral to our daily lives as Google has become since the late 90s.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b3fcc1505301" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Mobile Price Range Classification using AWS SageMaker]]></title>
            <link>https://guneet-kohli.medium.com/mobile-price-range-classification-using-aws-sagemaker-5ddaf9d59777?source=rss-accb7af71a6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/5ddaf9d59777</guid>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[aws-sagemaker]]></category>
            <category><![CDATA[cloud-computing]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[classification]]></category>
            <dc:creator><![CDATA[Guneet Kohli]]></dc:creator>
            <pubDate>Sun, 11 Feb 2024 18:16:44 GMT</pubDate>
            <atom:updated>2024-02-11T18:16:44.932Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*X8VUbouoI5JQVpGG.png" /></figure><p>Learning machine learning alongside cloud computing often seems daunting. However, Amazon SageMaker is a platform where innovation meets simplicity: a cloud-based machine-learning service that simplifies model creation, training, and deployment. It accelerates workflows, offers cost-efficiency, and scales seamlessly.</p><p>The goal of this project was to build and deploy a Random Forest multi-class classifier on AWS SageMaker to predict mobile phone price ranges. The first step of the project was to understand the given dataset, which was a collection of mobile phone features along with their corresponding price ranges.</p><p>Each row represents a different mobile phone, and the columns contain various attributes and specifications for each phone. The label that we need to predict is the price_range of the phone.</p><ul><li>battery_power: Battery capacity.</li><li>blue: Bluetooth availability (0 or 1).</li><li>clock_speed: Processor clock speed.</li><li>dual_sim: Dual SIM card support (0 or 1).</li><li>fc: Front camera megapixels.</li><li>four_g: 4G network support (0 or 1).</li><li>int_memory: Internal memory (GB).</li><li>m_dep: Mobile depth (thickness).</li><li>mobile_wt: Mobile phone weight.</li><li>n_cores: Number of processor cores.</li><li>pc: Primary camera megapixels.</li><li>px_height: Pixel resolution height.</li><li>px_width: Pixel resolution width.</li><li>ram: RAM capacity.</li><li>sc_h: Screen height.</li><li>sc_w: Screen width.</li><li>talk_time: Talk time (hours).</li><li>three_g: 3G network support (0 or 1).</li><li>touch_screen: Touch screen availability (0 or 1).</li><li>wifi: Wi-Fi availability (0 or 1).</li><li>price_range: Target variable representing mobile phone price range.</li></ul><p>In summary, the dataset includes various features that characterize mobile phones, and the goal is to predict the price range based on these features. The target variable (price_range) is categorical, indicating different price ranges as follows:</p><ul><li>Low Price Range (Label 0)</li><li>Medium Price Range (Label 1)</li><li>High Price Range (Label 2)</li><li>Very High Price Range (Label 3)</li></ul><p>The dataset is well suited to a classification task where the machine learning model aims to predict the price range category of a mobile phone.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4gqZPTcep6rqEumS4cEd8A.png" /><figcaption>Dataset</figcaption></figure><p>Tools Used: VS Code, Anaconda, AWS SageMaker, AWS S3, AWS IAM User, AWS IAM Role.</p><p>The project can be divided into three parts: Setup, Training and Deployment.</p><h3><strong>SETUP</strong></h3><p>Step 1: Installed the AWS CLI to communicate with the Management Console from VS Code. Make sure to give the IAM User administrative access.
Download the access keys and keep them somewhere secure; avoid sharing them with anyone.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*U0gOvr7oXZ3U54MswrHrSQ.png" /><figcaption>Communicating with AWS CLI from Terminal</figcaption></figure><p>Step 2: Created a user with Administrator access so that interaction between the user and the local machine is seamless.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*icDrqQzgXPZj84l0CCLzjw.png" /></figure><p>Step 3: Created a new environment in VS Code, listing all the requirements in a text file. Packages included boto3, sagemaker, scikit-learn, pandas, numpy and ipykernel.</p><p>Step 4: Set up an S3 bucket to store the train and test files in the cloud.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ixc46RziDwKc7AOIycULJw.png" /></figure><h3>TRAINING PHASE</h3><p>The training phase was a series of steps: data ingestion, feature engineering, writing a script file to get the tasks done, creating an IAM Role and then performing the actual training.</p><h4>Data ingestion</h4><p>Sent the train and test files to the S3 bucket.</p><h4>Script.py</h4><p>Wrote a script that uses the Random Forest Classifier from sklearn.</p><p>%%writefile script.py was used to create the script from within the notebook.</p><h4>Creating an IAM Role</h4><p>An IAM Role (not user) was created and its ARN was used in the script. Made sure to add the SageMaker policy to the role to prevent errors later in the code.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LixyJl-wMC8SX-NzNaiPPw.png" /><figcaption>IAM Role</figcaption></figure><h4>SageMaker using the script.py file</h4><p>The script.py file serves as the entry point for our sklearn model. Here the ARN of the role comes into play.</p><pre># Importing sagemaker&#39;s default SKLearn library<br>from sagemaker.sklearn.estimator import SKLearn<br><br>FRAMEWORK_VERSION = &quot;0.23-1&quot;<br><br>sklearn_estimator = SKLearn(<br>    entry_point=&quot;script.py&quot;,<br><br>    # ARN of a new sagemaker role (ARN of user does not work)<br>    role=&quot;arn:aws:iam::905418303768:role/sagemaker-role&quot;,<br><br>    # creates instance inside the Sagemaker machine<br>    instance_count=1,<br>    instance_type=&quot;ml.m5.large&quot;,<br><br>    # framework version present in the documentation, declared above<br>    framework_version=FRAMEWORK_VERSION,<br><br>    # name of folder after model has been trained<br>    base_job_name=&quot;RF-custom-sklearn&quot;,<br><br>    # hyperparameters to the RF classifier<br>    hyperparameters={<br>        &quot;n_estimators&quot;: 100,<br>        &quot;random_state&quot;: 0,<br>    },<br>    use_spot_instances = True,<br>    max_wait = 7200,<br>    max_run = 3600<br>)</pre><p>Before deploying the model, the .fit() method is called to ensure model training runs to completion.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*SFo1Z9LwHVdwv6oYMZACWw.png" /><figcaption>Training job status</figcaption></figure><p>The model accuracy on the testing data is 88.33%.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JpXRQiQWuI642O1DAE0UIQ.png" /><figcaption>Accuracy</figcaption></figure><h3>DEPLOYMENT PHASE</h3><p>To ensure that a copy of the model is created, another location is specified. This step is taken to ensure availability, so that the copy of the model can be used for deployment.</p>
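<p>One detail the snippets skip over is how the deployable model object used in the next section is obtained. A rough sketch, assuming the sklearn_estimator defined above and the same role ARN (job names and paths will differ in your account), could look like this:</p><pre>
import boto3
from sagemaker.sklearn.model import SKLearnModel

sm_client = boto3.client("sagemaker")

# Find the S3 location of the model.tar.gz produced by the training job
job_name = sklearn_estimator.latest_training_job.name
artifact = sm_client.describe_training_job(
    TrainingJobName=job_name)["ModelArtifacts"]["S3ModelArtifacts"]

# Wrap the artifact into a deployable model; script.py also supplies
# the model_fn that SageMaker calls at inference time
model = SKLearnModel(
    model_data=artifact,
    role="arn:aws:iam::905418303768:role/sagemaker-role",
    entry_point="script.py",
    framework_version="0.23-1",
)
</pre>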
<h4>End-point Deployment</h4><p>This is done by calling the model.deploy() function.</p><pre>endpoint_name = &quot;Custom-sklearn-model-&quot; + strftime(&quot;%Y-%m-%d-%H-%M-%S&quot;, gmtime())<br>print(&quot;EndpointName={}&quot;.format(endpoint_name))<br><br>predictor = model.deploy(<br>    initial_instance_count=1,<br><br>    # deploy in this specific instance as an endpoint<br>    instance_type=&quot;ml.m4.xlarge&quot;,<br>    endpoint_name=endpoint_name,<br>)</pre><h4>Testing the deployment</h4><p>A sample example was used to test the deployment. Since this is a multi-class classification problem, the prediction depends on all 20 feature dimensions. For the sample set of values below, the phone was classified as Very High Price Range.</p><pre>| Feature         | Value |<br>|-----------------|-------|<br>| Battery Power   | 1454  |<br>| Bluetooth       | 1.0   |<br>| Clock Speed     | 0.5   |<br>| Dual SIM        | 1.0   |<br>| Front Camera    | 1.0   |<br>| 4G Support      | 0.0   |<br>| Internal Memory | 34.0  |<br>| Depth           | 0.7   |<br>| Weight          | 83.0  |<br>| Cores           | 4.0   |<br>| PC              | 3.0   |<br>| Pixel Height    | 250.0 |<br>| Pixel Width     | 1033.0|<br>| RAM             | 3419.0|<br>| Screen Height   | 7.0   |<br>| Screen Width    | 5.0   |<br>| Talk Time       | 5.0   |<br>| 3G Support      | 1.0   |<br>| Touch Screen    | 1.0   |<br>| WiFi            | 0.0   |<br><br> ---------------------------------------------------------------------<br>| Price Range     | * MODEL DETERMINED AS 3 =&gt; VERY HIGH PRICE RANGE* |<br> ---------------------------------------------------------------------</pre><p>Learning Resources:</p><ul><li>Krish Naik’s <a href="https://youtu.be/Le-A72NjaWs?si=1P0km7dOrezirBKq">tutorial</a></li><li><a href="https://www.linkedin.com/learning/learning-amazon-sagemaker/machine-learning-with-amazon-sagemaker?u=36051636">LinkedIn Learning </a>on AWS SageMaker</li></ul><p>Links to code and files: <a href="https://github.com/guneet-kohli/AWS-Sagemaker-Mobile-Price-Classification">GitHub</a></p><p><a href="https://github.com/guneet-kohli/AWS-Sagemaker-Mobile-Price-Classification">GitHub - guneet-kohli/AWS-Sagemaker-Mobile-Price-Classification: Deployed a mobile price prediction model using scikit-learn on AWS SageMaker. Trained a Random Forest classifier for accurate predictions based on key features. Streamlined deployment with a concise guide and seamless integration of AWS services.</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5ddaf9d59777" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Clearing Microsoft DP-900 in First attempt within a week]]></title>
            <link>https://guneet-kohli.medium.com/clearing-microsoft-dp-900-in-first-attempt-within-a-week-b2b95d4a1338?source=rss-accb7af71a6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/b2b95d4a1338</guid>
            <category><![CDATA[cloud-computing]]></category>
            <category><![CDATA[microsoft]]></category>
            <category><![CDATA[microsoft-azure]]></category>
            <category><![CDATA[azure]]></category>
            <category><![CDATA[data]]></category>
            <dc:creator><![CDATA[Guneet Kohli]]></dc:creator>
            <pubDate>Wed, 07 Sep 2022 13:49:33 GMT</pubDate>
            <atom:updated>2022-09-07T13:49:33.108Z</atom:updated>
<content:encoded><![CDATA[<p>Data and Cloud have always fascinated the analyst in me, but I specifically came across two subjects in my 7th semester that piqued my interest: “Cloud Computing” and “Data Warehousing and Data Management”. After completing my 7th semester, I landed an opportunity to intern at Cognizant. The organisation was familiar with my interest in both Cloud and Data, and hence I was assigned the role of Azure Data Engineer Intern. During this internship, I formed enriching bonds with my fellow interns, who actually inspired me to take the DP-900 exam.</p><p>And here I am, after one week of continuously searching for and solving dumps I found online, clearing my first ever Microsoft Certification after an entire week of rigorous training and focus. Azure Data Fundamentals validates foundational knowledge of core data concepts and how these concepts are implemented using Microsoft Azure data services. To prepare in a time-bound situation, one needs a step-by-step analysis and a thorough understanding of the components required to ace the certification.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/999/1*wVXUJF6aSctiuhmiPUbKNQ.png" /><figcaption>Certification Journey</figcaption></figure><h3>Day 1</h3><p>(Monday, 29th August)</p><p>Firstly, I went through the Microsoft Learn website to understand the components and skills required for clearing this certification. The percentage breakup of the skills measured was clearly mentioned on the website; the outcomes were as follows:</p><ul><li>Describing core data concepts (25-30%)</li><li>Identifying considerations for relational data on Azure (20-25%)</li><li>Describing considerations for working with non-relational data on Azure (15-20%)</li><li>Describing an analytics workload on Azure (25-30%)</li></ul><p>After gaining a thorough understanding of the syllabus, I found an entire section on the Microsoft Learn portal dedicated to these learning outcomes. Under each outcome, two or three modules along with quizzes were present, which played a crucial role in helping me ace my certification. On Day 1, I thoroughly prepared the fundamentals via Microsoft’s official learning platform.</p><h3>Day 2</h3><p>(Tuesday, 30th August)</p><p>With the advent of the second day, I revised the contents of the Udemy course, “DP-900: Microsoft Azure Data Fundamentals in a Weekend” by in28minutes Official.</p><p>The course helped me gain knowledge about:</p><ul><li>Core data concepts</li><li>Working with relational data on Azure</li><li>Working with non-relational data on Azure</li><li>Working with analytics workloads on Azure.</li></ul><p>Here’s the <a href="https://www.udemy.com/share/1054Cw/">link</a> to the Udemy course: “https://www.udemy.com/share/1054Cw/”.</p><h3>Day 3</h3><p>(Wednesday, 31st August)</p><p>After going through five sections of the Udemy course on the second day, the third day started with the target of covering the next six sections. The summarized notes and presentations shared by the instructor helped in recapitulating concepts learned on the second day. After completing most of the coursework, I came across a YouTube <a href="https://youtu.be/wi3PkLK_gNc">video</a>, shared by my friend, which helped me practice and put my skills to the test.
As the third day came to an end, I had finished 11 sections of the Udemy course and 20 questions from the YouTube video.</p><h3>Day 4</h3><p>(Thursday, 1st September)</p><p>This day was all about Practice, Practice and Practice. The more practice I did, the more I felt the need to learn more about Azure. On this day, I covered all 80 questions in the video and curated some handwritten notes to help me build a thorough understanding.</p><p>Feeling confident about my preparation, I booked my exam in the evening. But as luck would have it, no slot was available for the next day, so I booked the slot for the 3rd of September.</p><h3>Day 5</h3><p>(Friday, 2nd September)</p><p>On the fifth day, I decided to go for the mock test that came with the Udemy course. The test was quite easy to crack and tested conceptual understanding of various topics. It was enough to brush up my skills.</p><p>After working through and reverse engineering the contents of the mock test, I decided to take a break and relax for a while. However, while relaxing I forgot to keep an eye on my inbox, and only then got to know that almost 24 hours after booking, Microsoft shares an official mock test.</p><h3>Day 6 — The exam day</h3><p>(Saturday, 3rd September)</p><p>As I woke up on the exam day with the intent to relax, I came across the official mock test mail. I immediately felt the need to attempt it, and hence decided to go for it. To save some time, I modified the test and, instead of attempting 40-45 questions at a time, went for all 119 questions in one go, which actually turned out to be quite hectic. After attempting the questions, I felt apprehensive, as I had scored just 59% and 70% was the cut-off required to clear the exam. So, I got determined to review the questions I had got wrong. I wanted to reschedule the test and prepare more for it, but unfortunately the exam was in the evening, and rescheduling requires informing at least 24 hours prior to the exam. So, I was stuck in a situation that felt like a swamp. To my rescue came the idea of curating last-minute handouts to revise before the exam. These last-minute handouts helped me clear my exam.</p><p>At 5pm, I checked in for my exam, and by 6.30pm I had cleared the certification and got my <a href="https://www.credly.com/badges/62509d32-0af6-430e-9688-b27785a8fac2/public_url">Badge</a> of Azure Data Fundamentals. I felt elated at that moment. I hope my strategy will help you clear your DP-900 certification. GOOD LUCK!</p><p><a href="https://www.credly.com/badges/62509d32-0af6-430e-9688-b27785a8fac2/public_url">Microsoft Certified: Azure Data Fundamentals was issued by Microsoft to Guneet Kohli.</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b2b95d4a1338" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Visualising COVID-19 country-wise data using Python and Dash]]></title>
            <link>https://guneet-kohli.medium.com/visualising-covid-19-country-wise-data-using-python-and-dash-5239b0a66191?source=rss-accb7af71a6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/5239b0a66191</guid>
            <category><![CDATA[dashboard]]></category>
            <category><![CDATA[plotly]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[web-applications]]></category>
            <dc:creator><![CDATA[Guneet Kohli]]></dc:creator>
            <pubDate>Tue, 02 Aug 2022 11:34:41 GMT</pubDate>
            <atom:updated>2022-08-02T11:34:41.009Z</atom:updated>
<content:encoded><![CDATA[<h4>Creating a Dashboard Web App for COVID-19 case data</h4><p>Let’s start by creating a folder which’ll hold all the components needed to create this web app. From the virtual environment and .gitignore file to the final app.py file, everything will be stored in this dedicated directory.</p><p>This is what our dashboard will look like:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*d99cwvux9ymntey8-XLnaA.png" /></figure><h3>So, Let’s Begin</h3><p>The first step of any Python project is “Downloading &amp; Importing Libraries” 😃. Since we are using libraries like Plotly and Dash, we need to ensure that they are present in our virtual environment. All the libraries required during the creation of this web app are listed in the requirements.txt file in the GitHub repository (link shared at the end). The steps to create a dedicated app directory and a virtual environment are:</p><blockquote><em>cd &lt;initial-folder&gt; # The folder selected initially</em></blockquote><blockquote><em>mkdir dash_app_example # Creating a dedicated app directory.</em></blockquote><blockquote><em>cd dash_app_example # Moving to the dedicated app directory.</em></blockquote><blockquote><em>virtualenv venv # Create the virtual environment ‘venv’.</em></blockquote><blockquote>venv\Scripts\activate # Activates the virtual environment on Windows</blockquote><blockquote>pip install &lt;package-name&gt; # Helps in installing packages</blockquote><p>After making sure all the packages are installed, let us move to Jupyter Notebook and create a new notebook in the dedicated folder. I’m working in a Jupyter notebook, but you can use any other text editor as well, be it Atom, VS Code or anything that works for you. Just make sure that your file name is app.py.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/337/1*zG5GI9oJwWeHJypc_TD4JA.png" /><figcaption>Importing Various Packages</figcaption></figure><h3>Dataset Used</h3><p>The data repository for the 2019 Novel Coronavirus Visual Dashboard is operated by the Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE), supported by the ESRI Living Atlas Team and the Johns Hopkins University Applied Physics Lab (JHU APL).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DL1CNL0awNgJsLayMGH7VQ.png" /></figure><p>Three different URLs were used, one for each type of case: Confirmed, Recovered and Deaths, all in CSV format.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PAKauPSpSwyQ4hgCdO9QLA.png" /><figcaption>Confirmed cases Dataset</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*CkCARhaF_D3J_Bl6UBPlJA.png" /><figcaption>Recovered Cases Dataset</figcaption></figure><h3>Data Cleaning</h3><ul><li>Data was cleaned country-wise, in time-series format, for the selected country.</li><li>Overall case counts for confirmed, recovered and dead cases were calculated using a block of code.</li><li>At this stage, values for the COVID-19 case count were also evaluated.</li></ul><h3>Generating the line graph</h3><p>After performing country-wise and case-wise data cleaning on the dataset, the user interface development phase began with the first and foremost step: creating a line graph representing the everyday data of COVID-19 patients.</p>
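<p>The plotting function shown next calls a process_data helper that is not listed in this post. As a rough sketch, assuming the JHU wide-format CSV (one row per region, one column per date), such a helper could filter the chosen country, convert cumulative counts into daily cases and apply the moving-average window:</p><pre>
def process_data(data, cntry='US', window=3):
    # Keep only rows for the selected country and drop the metadata columns
    country_df = data[data['Country/Region'] == cntry]
    country_df = country_df.drop(['Province/State', 'Country/Region', 'Lat', 'Long'], axis=1)

    # Sum over provinces, then turn cumulative totals into daily new cases
    daily_cases = country_df.sum().diff().fillna(0)

    # Smooth with a rolling mean and return a one-column frame named 'Total'
    return daily_cases.rolling(window=window).mean().to_frame('Total')
</pre>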
The following code block represents both country-wise data processing and creation of a line graph</p><pre><strong>def</strong> fig_world_trend(cntry<strong>=</strong>&#39;US&#39;,window<strong>=</strong>3):<br>    df <strong>=</strong> process_data(data<strong>=</strong>covid_conf_ts,cntry<strong>=</strong>cntry,window<strong>=</strong>window)<br>    df<strong>.</strong>head(10)<br>    <strong>if</strong> window<strong>==</strong>1:<br>        yaxis_title <strong>=</strong> &quot;Daily Cases&quot; <br>    <strong>else</strong>:<br>        yaxis_title <strong>=</strong> &quot;Daily Cases ({}-days MA)&quot;<strong>.</strong>format(window)<br>    fig <strong>=</strong> px<strong>.</strong>line(df, y<strong>=</strong>&#39;Total&#39;, x<strong>=</strong>df<strong>.</strong>index, title<strong>=</strong>&#39;Daily confirmed cases trend for {}&#39;<strong>.</strong>format(cntry),height<strong>=</strong>600,color_discrete_sequence <strong>=</strong>[&#39;indigo&#39;])<br>    fig<strong>.</strong>update_layout(title_x<strong>=</strong>0.5,plot_bgcolor<strong>=</strong>&#39;#cfd7fa&#39;,paper_bgcolor<strong>=</strong>&#39;#ffffff&#39;,xaxis_title<strong>=</strong>&quot;Date&quot;,yaxis_title<strong>=</strong>yaxis_title)<br>    <strong>return</strong> fig</pre><h3>Dash Application</h3><p>Most of the work in Dash is conceptually based on fundamental principles of Web Development. Basic understanding about HTML5, CSS and Javascript comes in handy while creating the UI. To create a secure dashboard, username and password pairs were generated, followed by creation of a dropdown list for every unique country.</p><pre><strong>def</strong> get_country_list():<br>    <strong>return</strong> covid_conf_ts[&#39;Country/Region&#39;]<strong>.</strong>unique()<br><br><strong>def</strong> create_dropdown_list(cntry_list):<br>    dropdown_list <strong>=</strong> []<br>    <strong>for</strong> cntry <strong>in</strong> sorted(cntry_list):<br>        tmp_dict <strong>=</strong> {&#39;label&#39;:cntry,&#39;value&#39;:cntry}<br>        dropdown_list<strong>.</strong>append(tmp_dict)<br>    <strong>return</strong> dropdown_list<br><br><strong>def</strong> get_country_dropdown(id):<br>    <strong>return</strong> html<strong>.</strong>Div([html<strong>.</strong>Label(&#39;Select Country&#39;),<br>                        dcc<strong>.</strong>Dropdown(id<strong>=</strong>&#39;my-id&#39;<strong>+</strong>str(id),<br>                            options<strong>=</strong>create_dropdown_list(get_country_list()),<br>                            value<strong>=</strong>&#39;US&#39;<br>                        ),<br>                        html<strong>.</strong>Div(id<strong>=</strong>&#39;my-div&#39;<strong>+</strong>str(id))<br>                    ])</pre><p>Furthermore, features like Graph Container for Dash, Moving Window Slider were added before generating App Layout and assigning dash callbacks.</p><p>To run the Application on local host, the following command needs to be executed as it will share the Dash URL where the web application is deployed in the production environment.</p><pre>app<strong>.</strong>run_server(host<strong>=</strong> &#39;0.0.0.0&#39;,debug<strong>=False</strong>)</pre><h3>An Overview of the Application</h3><iframe 
src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FUlHeJcmIqYA%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DUlHeJcmIqYA&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FUlHeJcmIqYA%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/40cef2e7ca165e3a4af8bd9b219b6333/href">https://medium.com/media/40cef2e7ca165e3a4af8bd9b219b6333/href</a></iframe><p>The above You Tube video will give an overview of the application and its features. For instant access to code, link to my Github profile has also been shared.</p><h3>Source Code</h3><p>The repository is made available on Github at the following <a href="https://github.com/guneet-coder/COVID19/blob/main/app1.ipynb">link</a>:</p><p><a href="https://github.com/guneet-coder/COVID19/blob/main/app1.ipynb">COVID19/app1.ipynb at main · guneet-coder/COVID19</a></p><p>For any queries, kindly report in the issues section of my Github profile. Hope this helped you!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5239b0a66191" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[My Experience as Microsoft Azure Data Engineer Intern at Cognizant Technology Solutions]]></title>
            <link>https://guneet-kohli.medium.com/my-experience-as-microsoft-azure-data-engineer-intern-at-cognizant-technology-solutions-1e6a4ce3cc2b?source=rss-accb7af71a6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/1e6a4ce3cc2b</guid>
            <dc:creator><![CDATA[Guneet Kohli]]></dc:creator>
            <pubDate>Sun, 19 Jun 2022 17:16:17 GMT</pubDate>
            <atom:updated>2022-06-19T17:16:17.201Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*7A3rOz7BeaEY7yVu.jpg" /></figure><p>I started my internship with Cognizant Technology Solutions in mid-January 2022. My tenure as an intern at Cognizant was an out-of-this-world experience. From enhancing my persona to developing my technical skills, the exposure I got from Cognizant was the kickstart that every fresher in this rapidly growing IT sector demands. Indeed, it was with tons of hard work and a stroke of luck that I got selected as an intern at such a prestigious organisation. These six months flew by, and here I’m sharing my amazing experience with this multinational corporation.</p><h3><strong>JANUARY</strong></h3><p><strong>Week 1: (18th January- 21st January, 2022)</strong></p><p>My first week at Cognizant was indeed a thrilling one. Despite having a work-from-home internship, the level of interaction we had during our Teams meets was unparalleled. Interns from all over the country were present in the induction session, where various company policies were shared. Diversified sessions were held from time to time, focusing especially on interns’ mental wellness and heartfulness. These programs helped us connect with our inner selves and declutter the mind. A boot camp was also organised that focused on minimising waste; it was a simple series of steps delivered via an online pragmatic course for gaining knowledge, resources and guidance, and it influenced us to drive change at a personal level within our families and community.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/639/1*rfgyQJj-tygi_S04eZPCBg.png" /></figure><p>Moreover, in this week vital topics for thriving in the corporate world were also discussed, including Data Security, Code of Ethics, Acceptable Use and prevention of sexual harassment at the workplace.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sPxK4wwTYMpUumJHZgEcfQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*870A62wiG2G3p-lJgJHFXw.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*oHFdpXZVmY7mzzVFtx8jEg.png" /></figure><p><strong>Week 2: (24th January — 28th January, 2022)</strong></p><p>My second week started off with behavioural training sessions and remote-working ethics. To help us succeed as individuals, training and tips, especially related to a growth mindset, were shared from time to time. Another big change that Cognizant brought into my life was the introduction of the Cognizant Health Challenge. Its perks included:</p><p>· Eight weeks of fun, health and fitness</p><p>· Exciting rewards and giveaways each week</p><p>· Personalised, AI-powered diet and fitness plan</p><p>· Engaging live webinars</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MBW4JdpFpba1hIxvi60DPA.png" /></figure><p>In the words of the CEO — Brian Humphries, “You are working for a company with a big heart that cares deeply about you. Together, we’ve come through a two-year, COVID-induced humanitarian crisis that has also caused severe stress and fatigue. I would encourage all of you to pay close attention to your mental and emotional well-being as well as your physical health.
The leadership team and I want you to know that your health and well-being take priority over absolutely everything else.”</p><h3><strong>FEBRUARY</strong></h3><p><strong>Week 3: (31st January — 4th February, 2022)</strong></p><p>First week of February began with a test, it was used to assess our reading, listening, grammar and speaking skills for ensuring that we are business ready. Apart from these tests, a series of well-being sessions were conducted from time to time in this epoch.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WmCT_i4KKzqyAHEZmKcPaA.png" /></figure><p><strong>Week 5: (7th February — 11th February)</strong></p><p>From the second week of February, our technical training started, it began with brushing up our skills to get industry ready with courses like: <strong>User Interface Programming</strong> (<strong>HTML 5, CSS 3 &amp; JavaScript</strong>), <strong>Database Programming</strong> <strong>using</strong> <strong>ANSI SQL</strong>. This week was dedicated to SQL Learning specifically.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/828/1*uXCbhYGFz3RWeazaAHBJBQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/828/1*MeQfiQVF1jceeS63bYdBaQ.png" /></figure><p><strong>Week 6: (13th February- 18th February)</strong></p><p>With almost over a month and a half in the company, working in the corporate sector had me in exponentially growing my learning curve. The main focus of this week was learning about Responsive Web Design, and connecting SQL with Java Databases.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/896/1*aKweOyUXEOAnlWGBd1t6qQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/888/1*tYor5XE_8H-rqZD-C2kqtA.png" /></figure><p><strong>Week 7: (21st February- 25th February)</strong></p><p>This week was allotted to us as a buffer week to complete any backlogs that are left. So, most of the sessions in this week were Outreach.</p><h3><strong>MARCH</strong></h3><p><strong>Week 8: (28th February- 4th March)</strong></p><p>As we ushered into March, the celebration of International Women’s Day (IWD) in the coming week encouraged us to take advantage of <a href="https://be.cognizant.com/redir/t?s=https%3a%2f%2fbe.cognizant.com%2fsites%2fbusiness-resource-group-women-empowered%2fSitePage%2f399565%2fwomen-empowered-global&amp;t=U9SNiJg0GQTL7hVYOSWdyJuc%2fgGnWD4VMSwo7%2by4NfkA1kd2MqmJGffoONiqaUeTdgNvL4YBPB%2by%2fkAoxurgWX1vbMs86Epxm5xwUuum7MmD7R2Cf81Hx%2fFvSVksm49toCFPdIsEmQ2hCRVsne1Q9XG4Pn%2fCHLfO3r4nu1lwp9DNw%2bd3Cv9mFPXzaQgZXevp">Women Empowered (WE)</a>-sponsored events that shared women’s contributions across Cognizant, the importance of male ally-ship, mentorship opportunities, and ways you can help “Break the Bias.” Participation of Women in Technology was highly encouraged especially to #BreakTheBias.</p><p><strong>Week 9: (7th March — 11th March)</strong></p><p>After getting trained on a vast range of technology, this week began with assignment of a domain. 
The technology allotted to me was Data Engineering with Azure.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/531/1*Sd26yGgmxS27kL7aGYMZKw.png" /></figure><p>A week-wise plan was allocated to us, and it resulted in completing Python for Beginners: Learning Python Programming.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ECp61cqAvqv99T7noBwKrA.png" /></figure><p><strong>Week 10: (14th March — 18th March)</strong></p><p>With the advent of the tenth week, multiple Python courses started in parallel: Python for Programmers and Python Bible, along with hands-on practice of around 100 problems in Python.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/795/1*aAswNSMRuXu_hTih2EFZPw.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/795/1*D1HUTiX4fZp9p31SaoKXiA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/798/1*v66k8l0RDt8o1egtJ0ztMA.png" /></figure><p><strong>Week 11: (21st March — 25th March)</strong></p><p>The week was flooded with knowledge, from learning to do visualizations using Python libraries to creating dashboards. It was a fun-filled week of completing courses: Python Enablement, Data Analysis, and complete courses on Data Visualization, Matplotlib and SQL, with a deadline to finish them by the end of the week.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/808/1*4rLeYQL0FUQtad7ptWR1EA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XgsrXbUUqs3jXk46lXK08Q.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wKzc7pwh_mjRSerV-LYKnA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GEQXEbVOfXWG3VjC4ZjtdQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FmVcCs39jQuzDCP8gxbtiA.png" /></figure><p><strong>Week 12: (28th March — 1st April)</strong></p><p>This week focused on Databricks and its various aspects. Introductory knowledge about Spark was an enlightening experience. Another English proficiency test was held by EFSET to ensure our business communication skills are up to the mark.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QGEqlId9JO7dPn7zqmLwcA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8JSBAMBc0sauf_Jq6qY0KQ.png" /></figure><h3><strong>APRIL</strong></h3><p><strong>Week 13: (4th April- 8th April)</strong></p><p>After performing miscellaneous self-learning tasks, a Python trainer was allocated. Most of the sessions conducted during this period were doubt sessions. Over the start of April, everyday assignments and sessions were held, which helped in grasping most of the concepts through a pragmatic learning approach. Attached are the snippets of all the assignments.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3tN_37aEALO9Tuhx6OJtuQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lXhs-IkAopVb7AMHOnC2_Q.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*AhJxQHjvju1ULY2pCcTchQ.png" /></figure><p><strong>Week 14: (11th April — 15th April)</strong></p><p>Week 14 consisted of Python doubt sessions with assignments. The snapshots of the assignments are attached alongside.
Concepts like exception handling, file handling, working with libraries, etc. were practised frequently and helped in building strong fundamental knowledge of Python programming.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*H9hfvRQ6dQb7q9ZGOppjVA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xzmQspweHVixkEll3pVSBA.png" /></figure><p><strong>Week 15: (18th April — 22nd April)</strong></p><p>Over the course of these three weeks, we were required to acquire knowledge of programming with Databricks. Since Apache Spark has lately conquered the big data world, special emphasis was placed on grasping the concepts of today’s “Big-Data King” platform.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XihB6iyliVHrlrb_bVNUXw.png" /></figure><p><strong>Week 16: (25th April- 29th April)</strong></p><p>In this week, a programming test for SQL was scheduled. The test focused mostly on SQL joins, also assessed our knowledge while applying queries, and covered a few concepts of normalization, namely 1st Normal Form. Furthermore, the rest of the week was focused on practicing SQL hands-on problems on HackerRank.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*yQ0ipb7XckR4ltvyYhHwpg.png" /></figure><p><strong>Week 17: (25th April- 29th April)</strong></p><p>As this week started, BH sessions began along with the daily connect meets. These sessions were usually for understanding the corporate culture. Alongside them, the Python classes went on, became more interactive and turned into a real source of learning.</p><h3><strong>MAY</strong></h3><p><strong>Week 18: (2nd May- 6th May)</strong></p><p>In this week, we were assigned a case study to evaluate our skills in a business unit. The case study was about creating a Python-based web app for analysing COVID patients all around the world. Herewith, I have attached the GitHub link of my implementation. Looking forward to writing a blog about that as well :p . Make sure to star and follow me on GitHub as well.</p><p><a href="https://github.com/guneet-coder/COVID19">GitHub - guneet-coder/COVID19: Dashboard/ Web App</a></p><p><strong>Week 19: (9th May -13th May)</strong></p><p>This week started with Azure training: learning the concepts of cloud computing, i.e., creating a virtual machine and a resource group, and learning about datacenters, policies, tags, Azure SQL, Dynamic Data Masking, etc.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*llnwwEVxJnMKybbKX_JIeQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*NihNHUvDmXiSYr5MkpOYJQ.png" /></figure><p><strong>Week 20: (16th May- 20th May)</strong></p><p>In week 20, the major tasks allocated included creating a blob storage and understanding the differences between blob storage and a data lake. In the diagrams, “maysa” is a blob storage and “ADFmay2022” is a data factory created as a sample.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kl0sL9un57mWAkZEYbWhHQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*R0OzWCS-TbHXiu1zi_G0Nw.png" /></figure><p><strong>Week 21: (23rd May- 27th May)</strong></p><p>As week 21 started, the creation of pipelines in Azure Data Factory began.
Here is a summary of a pipeline in Data Factory which copies data from one Azure Blob Storage to another Blob Storage.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*oSz_Q18GbwGe4Jwh3daUOA.png" /></figure><p><strong>Week 22: (30th May- 3rd June)</strong></p><p>This week was allotted for the completion of self-learning tasks and Udemy courses, especially for preparing for Microsoft certifications like DP-900 and DP-100.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Vkcd6tUHwvftnqAteMikeA.png" /></figure><h3>JUNE</h3><p><strong>Week 23: (6th June-10th June)</strong></p><p>With the advent of the 23rd week, concepts such as:</p><p>· Parameterized pipeline,</p><p>· ARM template,</p><p>· Automation,</p><p>· Triggers,</p><p>· Datalake,</p><p>· Self-hosted ADF, etc. were covered.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*pzTEL-nwWhC5RThP0xVLCA.png" /></figure><p><strong>Week 24: (13th June-17th June)</strong></p><p>In the final week of my internship, most of the focus was on covering the concepts of the Udemy course Data Engineering DP-203 by Microsoft through pragmatic learning. Conceptual clarity and implementation specific to Azure Data Factory were covered. Our target was to create pipelines that help in efficiently managing and transforming data so that assorted and unstructured data files become business-ready.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*VG0j3kgmDFhAuE9rCzgjnA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*or_Iaagjfx2S4kR_6-kvMQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*yBrGfODzUr4D5TVE" /></figure><p>As my internship experience comes to an end, I’m highly grateful for this experience. Through the cycles of ups and downs, I have almost completed this internship. It’s a huge milestone for me in this world of technology, with many other paths to explore. I’m indebted to my family and friends who went with me through this work-from-home phase, as it wasn’t really easy. Also, I would like to thank Sachin Bagga sir, my college professor, who boosted my morale in these tough times and asked me to stay in high spirits. Looking forward, this portal has opened a world of opportunities for me as a Data Engineer, hopefully…🤞😃</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1e6a4ce3cc2b" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[My Journey as “Data Science Club Lead” at my College so far…]]></title>
            <link>https://guneet-kohli.medium.com/my-journey-as-data-science-club-lead-at-my-college-so-far-d02d72b0a216?source=rss-accb7af71a6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/d02d72b0a216</guid>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[college]]></category>
            <category><![CDATA[data]]></category>
            <category><![CDATA[community]]></category>
            <category><![CDATA[data-science]]></category>
            <dc:creator><![CDATA[Guneet Kohli]]></dc:creator>
            <pubDate>Wed, 20 Apr 2022 16:22:28 GMT</pubDate>
            <atom:updated>2022-04-20T16:39:51.082Z</atom:updated>
            <content:encoded><![CDATA[<h3>How it all began</h3><p>Amidst the lockdown, when all our hopes were stranded, I was encouraged by my professor, Dr. Sumeet Kaur Sehra, to set up a platform for all the tech enthusiasts, specifically for the upcoming Data Scientists of our college who, I sincerely hope, will rule the industry. This erudite journey has been an extremely enriching experience for me. I learnt the ins and outs of becoming an entrepreneur while brushing up my technical skills. When Sehra ma’am asked me to set up the club, I was highly elated: what better opportunity to give back to society than sharing knowledge?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/200/0*iyXS9rBoevNkNURI" /><figcaption>Data Science Club, GNDEC</figcaption></figure><h3>What was the need to bring together a new group?</h3><p>The purpose of the club was to bring together a group who’d like to contribute and work in the field of Data Science at Guru Nanak Dev Engineering College, Ludhiana. The first few months of setting up the club were quite exhausting, as it was almost impossible to gather students. It was a time-consuming process, but once a panel was formed, the learning curve took off. The club hasn’t stopped ever since.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/550/1*qtjcxk5StV8QWeyF0OmAxg.png" /><figcaption>Data Science Masterclass at GNDEC, Ludhiana</figcaption></figure><p>Since the club was set up amidst the lockdown, most of the communication was through social media. From an Instagram page to a club website built by its own members, the club proved to be a source of motivation for many students who’d love to kick-start their journey of becoming a Data Scientist.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/967/1*Z9pa8mb1f-NjT1om-E4GgQ.jpeg" /><figcaption>A glimpse of various technical and fun events at DSC, GNDEC</figcaption></figure><h3>What inspired students to join the club?</h3><p>Let’s hear from these amazing students who joined the club about what motivated them and how they landed commanding positions.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FnWuxI6Mr-Jc%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DnWuxI6Mr-Jc&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FnWuxI6Mr-Jc%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/350795eb366ae14078c768d911dd66c0/href">https://medium.com/media/350795eb366ae14078c768d911dd66c0/href</a></iframe><p>I am really grateful to be part of such an amazing and inspiring community. I hope to see the value this club brings to my juniors down the line. This was a beautiful chapter in my college life, and I hope this platform brings out the best in all the academicians who wish to be a part of this tech team :)</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d02d72b0a216" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[My Interview experience at Accolite, Inc.]]></title>
            <link>https://guneet-kohli.medium.com/my-interview-experience-at-accolite-inc-6d1b4a5a8d71?source=rss-accb7af71a6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/6d1b4a5a8d71</guid>
            <category><![CDATA[interview]]></category>
            <category><![CDATA[software-engineer]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[jobs]]></category>
            <category><![CDATA[internships]]></category>
            <dc:creator><![CDATA[Guneet Kohli]]></dc:creator>
            <pubDate>Sun, 02 Jan 2022 17:35:46 GMT</pubDate>
            <atom:updated>2022-01-02T17:35:46.200Z</atom:updated>
            <content:encoded><![CDATA[<p>Being a woman in the tech sector is an arduous journey. Recently, I participated in an event organized by Accolite Digital, the “Women in Tech Event”, which was held to promote gender equality in the industry. It was a diversity hiring event for the Software Engineer role. Accolite Digital is an American company, focused on innovation, that provides best-in-class digital transformation services.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/1*gurTLjeU6u5yU-e1gWypUg.jpeg" /></figure><p>The hiring process had 5 rounds, each round being an elimination round. The eligibility criteria for appearing in the selection process were as follows:<br>• Applicable to diverse candidates from the 2022 graduation batch with no active<br>backlog &amp; CGPA&gt;6/10 or %&gt;60% throughout.<br>• Eligible Courses — B.E/B.Tech, MTech, MCA, Integrated Course.</p><h4>ROUND 1</h4><p>The first round comprised a technical aptitude test with MCQs from a variety of topics like OS, DBMS, SQL, Software Engineering, etc. It was held on the EduThrill platform. The test duration was 30 minutes. I took the first test on 9th November 2021. The results were announced through email within an hour of submitting the test.</p><h4>ROUND 2</h4><p>It was a coding round, and the time allotted was 60 minutes. The second test was held on 9th November in the latter part of the day. I received an email from Accolite Digital on 14th November confirming that I had cleared the second round of assessment as well. Due to the high number of applicants, my next round was scheduled almost a month later, on 10th December.</p><h4>ROUND 3</h4><p>I was quite nervous before the third round. Before entering the meeting, I closed my eyes and took a deep breath, and I think that is what helped me clear it. This round was a <strong>live coding round</strong>; a panel of interviewers watched me code the solution to the problem statement. Being really tired from all the college classes, I got stuck in the coding part of a question (here comes the best part), but the interviewer guided me and eventually I solved it. (I really felt proud of myself at that moment. Thank you, sir, for your guidance.)</p><h4>ROUND 4</h4><p>This was the final technical round, and it was similar to the third round. The only difference was that the coding questions were tougher this time. It was again a mix of coding questions and a technical interview. The discussion lasted for more than an hour. I solved the questions and also answered questions related to software engineering, like the SDLC models (Waterfall Model, Spiral Model, etc.). The final technical round, i.e. the fourth round, was held on 15th December.</p><h4>ROUND 5</h4><p>This was the HR round. In this round, most of our questions regarding joining, work culture, etc. were answered by their team. It was an exhaustive discussion, and everything was explained very well.</p><p>After round 5, they shared the employment offer via email. Accolite Digital was also offering an internship along with the full-time job offer. The whole process was a great learning experience for me. I didn’t accept the employment offer due to some personal reasons, but I really liked the interview experience.</p><p>The best part about this interview was that Accolite even invited me to their Virtual Christmas concert 😍. 
I think they really believe in Michelle Kwan’s quote, “Work Hard, Be Yourself and Have Fun!”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*r6j1lGkgGlxmn59EcR9JLQ.jpeg" /></figure><p>That was my experience interviewing with Accolite Digital. I hope it helped and answered any questions you had in mind. 😄</p><h3>Wishing you luck!</h3><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6d1b4a5a8d71" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Understanding Blockchain Technology]]></title>
            <link>https://guneet-kohli.medium.com/understanding-blockchain-15da46fbe0f6?source=rss-accb7af71a6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/15da46fbe0f6</guid>
            <category><![CDATA[cryptocurrency]]></category>
            <category><![CDATA[mining]]></category>
            <category><![CDATA[blockchain]]></category>
            <category><![CDATA[computer-science]]></category>
            <category><![CDATA[bitcoin]]></category>
            <dc:creator><![CDATA[Guneet Kohli]]></dc:creator>
            <pubDate>Thu, 02 Dec 2021 17:30:37 GMT</pubDate>
            <atom:updated>2021-12-02T17:38:53.194Z</atom:updated>
            <content:encoded><![CDATA[<p><strong>What is a Blockchain?</strong></p><p>A blockchain is nothing but a chain of blocks (generally records), as the name suggests. It is not quite what you might picture: it is a P2P network used to store records, whose main purpose is to maintain system integrity by eliminating all kinds of counterfeiting techniques. From a Peer-to-Peer Network (P2P) one can understand that all the users have the same powers and all transactions are visible at all stages, thus eliminating the need for a central authority.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/423/0*ekXAC2lvPY0xqtiw.png" /><figcaption>Peer to Peer Network [Blockchain]</figcaption></figure><p>Blockchain technology is the essence of cryptocurrency, which is nothing but a collection of data designed to work as a medium of exchange. These digital assets are the foundation of today’s capital and tomorrow’s finances. Thus, the need to understand how this technology works is quite relevant.</p><h3><strong>How do transactions take place in a blockchain?</strong></h3><p>Blockchain uses the concept of hashing, the purpose of which is to fingerprint the details so that a hacker is not able to alter the data without being detected. A hash is similar to a person’s fingerprint, except that it changes every time a change is made to the block, so that integrity is maintained.</p><p>Let’s consider a scenario:</p><p>Suppose Chandler and Phoebe want to send Joey 5 Bitcoins.</p><p>Transaction t1: Chandler → Joey</p><p>Transaction t2: Phoebe → Joey</p><p>For a transaction to take place, a block needs to be created. The transaction details are permanently inscribed in this block. Since it is a decentralized network, the changes made are visible to all users in the network, which makes it difficult to hack. Here two transactions are taking place, hence two different blocks will be generated. These consecutive blocks are tied together, hence the name “chain”, forming a public distributed ledger. This ledger is therefore available to Chandler, Phoebe and Joey.</p><p>The block contains the transaction details, the hash of the current block and the hash of the previous block. Each block is connected to the previous block via its hash.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/300/0*4aF07p7padgwzgTj" /><figcaption>Connection of Blocks</figcaption></figure><p>Whenever a transaction is made, or a change is done, the hash gets altered, thereby breaking the chain for a short duration. The process of verification happens during that time; it is technically termed “Proof of Work”. After the verification is done, the hash value gets updated and the connection between the blocks is re-established.</p><h3><strong>Who can perform this validation?</strong></h3><p>Such transactions are made all over the globe, and the people who validate these transactions are known as miners. For a block to be validated and added to the chain, miners need to solve a complex mathematical problem. The one who solves that problem quickest gets newly minted Bitcoin as a reward (12.5 BTC until the 2020 halving, 6.25 BTC at the time of writing). This process is known as mining.</p>
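<p>To make the idea of hash-linked blocks concrete, here is a small, self-contained Python sketch (a toy illustration, not code from any real blockchain): each block stores the hash of the block before it, so tampering with one block breaks the link to every block that follows.</p><pre>
# Toy illustration of hash-linked blocks (not a real blockchain implementation).
import hashlib
import json


def block_hash(block: dict) -> str:
    """Fingerprint a block by hashing its contents, including the previous hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def add_block(chain: list, transaction: str) -> None:
    """Append a new block that records the hash of the block before it."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"transaction": transaction, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)


chain = []
add_block(chain, "Chandler pays Joey 5 BTC")
add_block(chain, "Phoebe pays Joey 5 BTC")

# Tampering with the first block invalidates the link stored in the second block.
chain[0]["transaction"] = "Chandler pays Joey 500 BTC"
recomputed = block_hash({k: v for k, v in chain[0].items() if k != "hash"})
print(recomputed == chain[1]["prev_hash"])  # False: the chain is broken
</pre>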
<p>Many of you may remember how, in the show “The Big Bang Theory”, the guys forgot about the bitcoins they had mined seven years earlier, when they were not even worth a penny. Years later, they wish they had understood the value of cryptocurrency back then.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FCc-Hbklizzk%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DCc-Hbklizzk&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FCc-Hbklizzk%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/28e883736bded836661ab5906c9cebc8/href">https://medium.com/media/28e883736bded836661ab5906c9cebc8/href</a></iframe><h3><strong>Understanding the concept of keys</strong></h3><p>Every user in a blockchain has two types of keys. One of them is a private key, which is confidential, and the other is a public key, which is visible to the world.</p><p>Let’s suppose the guys had been successful in getting back some Bitcoins, and that Leonard wants to make a transaction of 1 BTC to Penny.</p><p>Transaction: Leonard → Penny</p><p>To make this transaction, Leonard runs his and Penny’s wallet addresses and the transaction details through a hashing algorithm and signs the result with <strong>Leonard’s private key</strong>, which proves that the transaction was made by Leonard. The message is then encrypted using Penny’s public key and transmitted across the world. She can decrypt it using her own key, i.e. <strong>Penny’s private key</strong>, which only she knows.</p><p>Thus, the need of the hour is to ensure transparency in the digital era. Blockchain will help in maintaining integrity and making sure that unethical practices are kept at bay.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=15da46fbe0f6" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Creating a Virtual Machine on Google Cloud]]></title>
            <link>https://guneet-kohli.medium.com/creating-a-virtual-machine-on-google-cloud-9eb852d7419b?source=rss-accb7af71a6e------2</link>
            <guid isPermaLink="false">https://medium.com/p/9eb852d7419b</guid>
            <category><![CDATA[virtual-machine]]></category>
            <category><![CDATA[linux]]></category>
            <category><![CDATA[virtualization]]></category>
            <category><![CDATA[cloud-computing]]></category>
            <category><![CDATA[google-cloud-platform]]></category>
            <dc:creator><![CDATA[Guneet Kohli]]></dc:creator>
            <pubDate>Tue, 30 Nov 2021 07:52:48 GMT</pubDate>
            <atom:updated>2021-11-30T07:52:48.729Z</atom:updated>
            <content:encoded><![CDATA[<p>Google Cloud Platform, or GCP, is a platform offered by Google which provides cloud computing services, collectively known as the SPI model of cloud computing. The SPI model is the most common service model, covering Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS). GCP has various elements under the navigation menu which provide these services to users.</p><p><strong>Elements Of Google Cloud: </strong>GCP has various elements like Compute Engine, Google Cloud App Engine, Google Cloud Container Engine, Google Cloud Storage, Google Cloud Dataflow, Google BigQuery, Google Cloud Machine Learning Engine, etc. Each element has its own unique purpose. For instance, Google BigQuery can be used for analysis, providing the user with a SQL workspace, as well as for administration, which provides facilities like Monitoring, BI Engine, etc.</p><p>For creating a Virtual Machine, Compute Engine under the Navigation Menu will be employed. Compute Engine is an IaaS service which enables users to work with virtual instances for workload hosting.</p><h3>Steps for creation of a Virtual Machine</h3><ol><li>Under the Navigation Menu, select Compute Engine and then click on VM instances.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*iAirDwtbZt_zXVX6-xMtYg.png" /></figure><p>2. Under the dropdown menu, click <strong>CREATE INSTANCE</strong> to create a new VM instance, configuring various parameters like Machine configuration, Boot disk, Identity and API access, along with firewalls.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Kp5EwCMhwhLHo_64xkTfqA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ro4or1AyKwq0UOKf6GPLYg.png" /></figure><p>3. Click <strong>CREATE</strong>. It will take a minute for the machine to be created. To check whether the machine was created successfully, check the VM instances page. The green mark against the status shows that the VM instance was successfully created.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ht-IGjHaSCPE6A7CWgqGnQ.png" /></figure><p>4. <strong>SSH (Secure Shell)</strong>, also known as <strong>Secure Socket Shell</strong>, is a network protocol that gives users, particularly system administrators, a secure way to access a computer over an unsecured network. It is used to connect to the virtual machine from the row for your machine. To do this, click on SSH. A pop-up will then appear asking the user whether they want to initiate the SSH connection.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ht-IGjHaSCPE6A7CWgqGnQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*nkoEDz-DXXssKk_PGIhouQ.png" /></figure><p>
5. After successfully connecting, SSH keys are transferred to the VM, and our VM (here running Debian GNU/Linux) is created.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*CNLJ1gQqHS7rhjJsu6qSvg.png" /></figure><h3>CREATING A NEW INSTANCE USING THE COMMAND LINE</h3><p>Instead of using the Cloud Console to create a virtual machine instance, one can use the command-line tool, “gcloud”.</p><p>The command used for creation of a VM instance is:</p><p>“gcloud compute instances create &lt;new-instance&gt; --machine-type &lt;machine-details&gt; --zone &lt;zone-name&gt;”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*fEvdPlQq_2ZKVp1-WCKe-g.png" /></figure><p>THE NEW INSTANCE, GUNEET-KOHLI2, HAS THESE DEFAULT VALUES:</p><ul><li>The latest <a href="https://cloud.google.com/compute/docs/images">Debian 10 (buster)</a> image.</li><li>The n1-standard-2 machine type.</li><li>A root persistent disk with the same name as the instance; the disk is automatically attached to the instance.</li></ul><p>SSH is used to connect to this newly created instance via gcloud. The command used is “gcloud compute ssh &lt;new-instance&gt; --zone &lt;zone-name&gt;”.</p><p>After connecting over SSH, a public/private RSA key pair will be generated, and the instance details will be presented.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*trGmlVjdV5tcoQwlR0xX1w.png" /></figure><p>To check whether the new instance was created successfully, go to the VM instances page and check that the instances you created are present.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jR0UxeEtYOcnd5S1pp0dZQ.png" /></figure><p>After creating a virtual machine, one can map existing server infrastructure to GCP, host a web page, create load balancers, design network topology, and much more.</p><p>The ascendancy of virtualization reflects the indispensable need to create VMs for quicker desktop provisioning and deployment. This technology is not only cost-effective, but it also helps the environment by reducing the carbon footprint, thanks to its ability to cut down on the number of servers and to address one’s needs logically rather than physically.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9eb852d7419b" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>