<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Mohammad jomaa on Medium]]></title>
        <description><![CDATA[Stories by Mohammad jomaa on Medium]]></description>
        <link>https://medium.com/@jomaajob?source=rss-6a59297ae215------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*MQ9vRWPECK1fJUm00vujOg@2x.jpeg</url>
            <title>Stories by Mohammad jomaa on Medium</title>
            <link>https://medium.com/@jomaajob?source=rss-6a59297ae215------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 16 May 2026 10:21:02 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@jomaajob/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Integrating AWS with Azure AD Using IAM SAML Federation (Without IAM Identity Center)]]></title>
            <link>https://faun.pub/integrating-aws-with-azure-ad-using-iam-saml-federation-without-iam-identity-center-bae676dbc9e0?source=rss-6a59297ae215------2</link>
            <guid isPermaLink="false">https://medium.com/p/bae676dbc9e0</guid>
            <category><![CDATA[iam-roles]]></category>
            <category><![CDATA[sso]]></category>
            <category><![CDATA[azure-ad]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[identity-provider]]></category>
            <dc:creator><![CDATA[Mohammad jomaa]]></dc:creator>
            <pubDate>Tue, 16 Dec 2025 11:58:13 GMT</pubDate>
            <atom:updated>2026-01-30T07:33:19.852Z</atom:updated>
<content:encoded><![CDATA[<h3>Integration of Amazon Web Services and Microsoft Entra ID Using AWS Identity and Access Management SAML 2.0 Federation Without AWS IAM Identity Center</h3><p>Modern enterprises often standardize identity management on <strong>Microsoft Entra ID (Azure AD)</strong> while operating workloads on <strong>AWS</strong>. A secure and scalable way to integrate both platforms is by using AWS Identity and Access Management (IAM)<strong> SAML federation</strong>, allowing Azure AD to act as the trusted Identity Provider (IdP).</p><p>This article walks through a <strong>complete, production-ready integration</strong> between <strong>AWS and Azure AD</strong> using AWS Identity and Access Management (IAM)<strong> SAML Identity Providers</strong>, <strong>IAM roles</strong>, and <strong>Azure AD provisioning</strong>, without using <strong>AWS IAM Identity Center</strong>.</p><h4>Why This Approach?</h4><p>This design enables centralized access control while following security best practices:</p><ul><li>Azure AD is the <strong>single source of identity</strong></li><li>AWS uses <strong>IAM roles</strong>, not IAM users</li><li>No passwords stored in AWS</li><li>Role-based access managed centrally</li><li>Automatic role discovery and provisioning</li></ul><p>This approach is ideal for enterprises that:</p><ul><li>Already use Azure AD as their corporate directory</li><li>Need fine-grained access control in AWS</li><li>Want to avoid IAM Identity Center</li><li>Require auditability and least-privilege access</li></ul><h4>High-Level Architecture</h4><p>The integration works as follows:</p><ol><li><strong>Azure AD (Microsoft Entra ID)</strong> authenticates the user</li><li>Azure AD sends a <strong>SAML assertion</strong> to AWS</li><li>AWS validates the assertion using an <strong>IAM SAML identity provider</strong></li><li>The user assumes an <strong>IAM role</strong> with defined permissions</li><li>Azure AD provisioning syncs available AWS roles automatically</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kmmjjYTOgwY9M3FK-oJ6Gg.png" /></figure><h3>Step 1: Create the Enterprise Application in Azure AD</h3><p>Start by creating an Azure AD Enterprise Application:</p><ul><li>Go to <strong>Microsoft Entra Admin Center</strong></li><li>Navigate to <strong>Enterprise Applications</strong></li><li>Create a new application named <strong>AWS Single-Account Access</strong></li></ul><p>This application represents AWS as a service provider in Azure AD.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IrBuL0Er6I9bFF0n4hSMeg.png" /></figure><h3>Step 2: Configure SAML Single Sign-On</h3><p>Enable <strong>SAML-based SSO</strong> for the application.</p><h4>Key SAML Settings</h4><ul><li><strong>Identifier (Entity ID)</strong>:<br> Use a custom value such as:</li></ul><pre>aws-test-saml</pre><ul><li><strong>Reply URL (ACS)</strong>:</li></ul><pre>https://signin.aws.amazon.com/saml</pre><p>The Entity ID uniquely identifies Azure AD to AWS and must match the identity provider name you create later in AWS IAM.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*bwhkKUKAM-6_Wdjt4aW2ew.png" /></figure><h3>Step 3: Download the Federation Metadata XML</h3><p>From the SAML configuration section, download the <strong>Federation Metadata XML</strong>.</p><p>This file contains:</p><ul><li>Azure AD entity ID</li><li>SAML endpoints</li><li>Token signing certificate</li></ul><p>AWS uses this file to establish trust with Azure AD.</p><figure><img alt="" 
src="https://cdn-images-1.medium.com/max/1024/1*VeBJk7ouksRrF5lYntckwA.png" /></figure><h3>Step 4: Create the IAM SAML identity provider in AWS</h3><p>In the AWS console:</p><ul><li>Go to <strong>IAM → Identity providers</strong></li><li>Create a new provider of type <strong>SAML</strong></li><li>Upload the Federation Metadata XML</li><li>Assign a provider same previous <strong>Entity ID </strong>name :</li></ul><pre>aws-test-saml</pre><p>At this point, AWS trusts Azure AD as an external identity provider.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uZ06AtyKWOll2KwkvV1Ijg.png" /></figure><h3>Step 5: Create Roles for SAML 2.0–based federation</h3><p>IAM roles define <strong>what users can do in AWS</strong>.</p><p>To create a role:</p><ul><li>Choose <strong>SAML 2.0 federation</strong> as the trusted entity</li><li>Select the previously created SAML provider</li><li>Allow <strong>AWS Management Console access</strong></li></ul><p>Attach permissions such as:</p><ul><li>AdministratorAccess</li><li>ReadOnlyAccess</li><li>Custom least-privilege policies</li></ul><p>Each role represents a permission set that Azure AD users can assume.</p><p>make sure to edit the trust IAM customer managed policy to make it as this</p><pre>{<br> &quot;Version&quot;: &quot;2012-10-17&quot;,<br> &quot;Statement&quot;: [<br>  {<br>   &quot;Effect&quot;: &quot;Allow&quot;,<br>   &quot;Principal&quot;: {<br>    &quot;Federated&quot;: &quot;arn:aws:iam::1111111111:saml-provider/aws-test-saml&quot;<br>   },<br>   &quot;Action&quot;: &quot;sts:AssumeRoleWithSAML&quot;,<br>   &quot;Condition&quot;: {<br>    &quot;StringEquals&quot;: {<br>     &quot;SAML:aud&quot;: &quot;https://signin.aws.amazon.com/saml&quot;<br>    }<br>   }<br>  }<br> ]<br>}</pre><h3>Step 6: Create an IAM User for Azure AD Provisioning</h3><p>Azure AD provisioning requires AWS credentials to <strong>discover available roles</strong>.<br> This user is <strong>not used for login</strong>.</p><h4>Create a IAM customer managed policy</h4><pre>{<br>  &quot;Version&quot;: &quot;2012-10-17&quot;,<br>  &quot;Statement&quot;: [<br>    {<br>      &quot;Effect&quot;: &quot;Allow&quot;,<br>      &quot;Action&quot;: [<br>        &quot;iam:ListRoles&quot;<br>      ],<br>      &quot;Resource&quot;: &quot;*&quot;<br>    }<br>  ]<br>}</pre><h3>Create the IAM User</h3><ul><li>Username: AzureAdUser</li><li>Access: Programmatic only</li><li>No console access</li><li>Attach the IAM customer managed policy above</li><li>Generate Access Key and Secret Key</li></ul><p>This user only allows Azure AD to list roles.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*04Eig_BZ8c6rLwn0xi3vPA.png" /></figure><h3>Step 7: Configure Azure AD Provisioning</h3><p>Back in Azure AD:</p><ul><li>Open <strong>AWS Single-Account Access</strong></li><li>Go to <strong>Provisioning</strong></li><li>Select <strong>AWS Credentials</strong> as the authentication method</li><li>Enter the Access Key and Secret Key</li><li>Test the connection</li><li>Enable provisioning</li></ul><p>Azure AD will now automatically sync AWS roles.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Hr0eiJHzchnAWneppt16Sw.png" /></figure><blockquote>click on Start provisioning and wait until the initial sync done</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RlD0d8ao8oD9VQ2GReXxMA.png" /></figure><h3>Step 8: Assign Users or Groups to AWS Roles</h3><p>To grant access:</p><ul><li>Go to <strong>Users and groups</strong> in the 
<h3>Step 6: Create an IAM User for Azure AD Provisioning</h3><p>Azure AD provisioning requires AWS credentials to <strong>discover available roles</strong>.<br> This user is <strong>not used for login</strong>.</p><h4>Create an IAM customer managed policy</h4><pre>{<br>  &quot;Version&quot;: &quot;2012-10-17&quot;,<br>  &quot;Statement&quot;: [<br>    {<br>      &quot;Effect&quot;: &quot;Allow&quot;,<br>      &quot;Action&quot;: [<br>        &quot;iam:ListRoles&quot;<br>      ],<br>      &quot;Resource&quot;: &quot;*&quot;<br>    }<br>  ]<br>}</pre><h3>Create the IAM User</h3><ul><li>Username: AzureAdUser</li><li>Access: Programmatic only</li><li>No console access</li><li>Attach the IAM customer managed policy above</li><li>Generate Access Key and Secret Key</li></ul><p>This user only allows Azure AD to list roles.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*04Eig_BZ8c6rLwn0xi3vPA.png" /></figure><h3>Step 7: Configure Azure AD Provisioning</h3><p>Back in Azure AD:</p><ul><li>Open <strong>AWS Single-Account Access</strong></li><li>Go to <strong>Provisioning</strong></li><li>Select <strong>AWS Credentials</strong> as the authentication method</li><li>Enter the Access Key and Secret Key</li><li>Test the connection</li><li>Enable provisioning</li></ul><p>Azure AD will now automatically sync AWS roles.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Hr0eiJHzchnAWneppt16Sw.png" /></figure><blockquote>Click <strong>Start provisioning</strong> and wait until the initial sync completes</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RlD0d8ao8oD9VQ2GReXxMA.png" /></figure><h3>Step 8: Assign Users or Groups to AWS Roles</h3><p>To grant access:</p><ul><li>Go to <strong>Users and groups</strong> in the Enterprise Application</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2syFw5pIUtTpuuPrqZIpSQ.png" /></figure><ul><li>Add a user or group</li><li>Select an <strong>App Role</strong> (mapped to an AWS IAM role)</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kfuwZi1TdMe1HwjwOO-yag.png" /></figure><ul><li>Assign</li></ul><p>Within minutes, the user can access AWS with the assigned permissions.</p><h3>User Login Experience</h3><p>End users log in through Azure AD:</p><p>1. Open:</p><pre>https://myapps.microsoft.com</pre><p>2. Select <strong>AWS Single-Account Access</strong></p><p>3. AWS Console opens with the assigned role</p><p>No AWS credentials are required.</p><h3>Security Best Practices</h3><ul><li>Do not create IAM users for people</li><li>Use groups instead of individual users</li><li>Apply least-privilege IAM policies</li><li>Rotate provisioning access keys regularly</li><li>Use one SAML provider per Azure tenant</li><li>Monitor access via <strong><em>AWS CloudTrail</em></strong></li></ul><h3>Common Troubleshooting Tips</h3><ul><li>If roles don’t appear, restart provisioning</li><li>Ensure the iam:ListRoles permission exists</li><li>Verify SAML claims include Role and RoleSessionName</li><li>Confirm the role’s trust policy references the correct SAML provider</li></ul>
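<p>To verify the federation end to end without the browser flow, you can exchange a captured SAML assertion for temporary credentials through STS. This is a diagnostic sketch only; the base64-encoded SAMLResponse would come from a browser SAML trace, and the ARNs are placeholders:</p><pre>import boto3<br><br># AssumeRoleWithSAML is an unsigned call, so no AWS credentials are needed<br>sts = boto3.client(&#39;sts&#39;)<br><br>with open(&#39;saml-assertion.b64&#39;) as f:<br>    assertion = f.read()<br><br>resp = sts.assume_role_with_saml(<br>    RoleArn=&#39;arn:aws:iam::111111111111:role/AzureAD-ReadOnly&#39;,<br>    PrincipalArn=&#39;arn:aws:iam::111111111111:saml-provider/aws-test-saml&#39;,<br>    SAMLAssertion=assertion<br>)<br><br>creds = resp[&#39;Credentials&#39;]<br>print(creds[&#39;AccessKeyId&#39;], creds[&#39;Expiration&#39;])</pre><p>If this call succeeds but console login fails, the problem usually lies in the Role and RoleSessionName claims rather than in the trust policy.</p>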
src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=bae676dbc9e0" width="1" height="1" alt=""><hr><p><a href="https://faun.pub/integrating-aws-with-azure-ad-using-iam-saml-federation-without-iam-identity-center-bae676dbc9e0">Integrating AWS with Azure AD Using IAM SAML Federation (Without IAM Identity Center)</a> was originally published in <a href="https://faun.pub">FAUN.dev() 🐾</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Bridging AI and AWS: A Deep Dive into Using Model Context Protocol (MCP) for Intelligent Cloud…]]></title>
            <link>https://faun.pub/bridging-ai-and-aws-a-deep-dive-into-using-model-context-protocol-mcp-for-intelligent-cloud-f464ae62a407?source=rss-6a59297ae215------2</link>
            <guid isPermaLink="false">https://medium.com/p/f464ae62a407</guid>
            <category><![CDATA[python]]></category>
            <category><![CDATA[mcp-server]]></category>
            <category><![CDATA[mcp-protocol]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[aws]]></category>
            <dc:creator><![CDATA[Mohammad jomaa]]></dc:creator>
            <pubDate>Tue, 18 Nov 2025 06:44:25 GMT</pubDate>
            <atom:updated>2026-01-30T07:42:58.276Z</atom:updated>
            <content:encoded><![CDATA[<h3>Bridging AI and AWS: A Deep Dive into Using Model Context Protocol (MCP) for Intelligent Cloud Infrastructure Management</h3><h3>Introduction:</h3><blockquote>When Your Infrastructure Starts Talking Back</blockquote><p>Imagine asking your AWS infrastructure a question in plain English and getting an intelligent, contextual answer — not raw JSON or <strong>Amazon CloudWatch </strong>graphs, but actual insights. That’s what becomes possible when you combine the Model Context Protocol (MCP) with AWS.</p><p>I recently built CloudWhisper, an AI-powered chatbot that uses MCP to connect AI models like ChatGPT and Claude directly to AWS services. In this article, I’ll explain how MCP works with AWS, why it matters, and how to build your own MCP-powered AWS integration.</p><h3>What is Model Context Protocol (MCP)?</h3><p>MCP is a standardized protocol that lets AI models securely connect to external data sources and tools. Think of it as a translator between AI models and your infrastructure.</p><p><strong><em>Without MCP, you’d need to:</em></strong></p><ul><li>Manually format data for AI models</li><li>Handle authentication and security yourself</li><li>Build custom integrations for each service</li><li>Manage complex API interactions</li></ul><p><strong><em>With MCP, you get:</em></strong></p><ul><li>A standardized way for AI to access your data</li><li>Built-in security and authentication</li><li>Real-time data access without exposing credentials</li><li>Contextual responses based on live infrastructure data</li></ul><h3>Why MCP Matters for AWS</h3><p>AWS exposes many services and APIs. MCP provides a clean bridge so AI models can:</p><ul><li>Query <strong>Amazon Elastic Compute Cloud (Amazon EC2)</strong> instances</li><li>Analyze <strong>Amazon Simple Storage Service (Amazon S3)</strong> bucket configurations</li><li>Review <strong>Amazon CloudWatch</strong> alarms</li><li>Provide recommendations based on real-time data</li><li>Switch between multiple AWS accounts seamlessly</li></ul><p>The key benefit: AI models get structured, real-time data instead of generic responses, enabling more accurate and actionable insights.</p><h3>How CloudWhisper Uses MCP: The Architecture</h3><p><a href="https://github.com/MohammadJomaa/cloudwhisper">CloudWhisper</a> demonstrates a practical MCP implementation. 
Here’s how it works:</p><blockquote>GitHub repo link: <a href="https://github.com/MohammadJomaa/cloudwhisper">https://github.com/MohammadJomaa/cloudwhisper</a></blockquote><h4>The Three-Layer Architecture</h4><h4>1- AI Layer (MCP Client)</h4><ul><li>ChatGPT or Claude receives user questions</li><li>Formats requests using MCP protocol</li><li>Sends requests to the MCP server</li></ul><h4>2- MCP Server Layer (The Bridge)</h4><ul><li>Receives AI requests via JSON-RPC</li><li>Translates requests into AWS API calls</li><li>Fetches real-time data from AWS</li><li>Returns structured data to the AI</li></ul><h4>3- AWS Layer (The Data Source)</h4><ul><li>Returns infrastructure data</li><li>Provides real-time metrics and status</li></ul><h3>The Data Flow</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DTGHR153_oKG6QawmcxLKQ.png" /><figcaption>The Data Flow</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kZ0K7MOf5Zg9zpHGjK_3qg.png" /><figcaption>CloudWhisper image</figcaption></figure><h3>Building Your Own MCP Server for AWS</h3><p>Let me walk you through building an MCP server for AWS, using CloudWhisper as a reference.</p><h4>Step 1: Understanding the MCP Protocol</h4><p>MCP uses JSON-RPC 2.0. The basic structure looks like this:</p><p>Request:</p><pre>{<br>  &quot;jsonrpc&quot;: &quot;2.0&quot;,<br>  &quot;id&quot;: 1,<br>  &quot;method&quot;: &quot;tools/call&quot;,<br>  &quot;params&quot;: {<br>    &quot;name&quot;: &quot;list_instances&quot;,<br>    &quot;arguments&quot;: {}<br>  }<br>}</pre><p>Response:</p><pre>{<br>  &quot;jsonrpc&quot;: &quot;2.0&quot;,<br>  &quot;id&quot;: 1,<br>  &quot;result&quot;: {<br>    &quot;content&quot;: [<br>      {<br>        &quot;type&quot;: &quot;text&quot;,<br>        &quot;text&quot;: &quot;{\&quot;success\&quot;: true, \&quot;instances\&quot;: [...]}&quot;<br>      }<br>    ]<br>  }<br>}</pre><h4>Step 2: Setting Up Your Project Structure</h4><p>Here’s the structure I used for CloudWhisper:</p><pre>cloudwhisper/<br>├── src/<br>│   ├── mcp_server/<br>│   │   └── multi_cloud_mcp_server.py    # MCP server implementation<br>│   ├── aws_integration/<br>│   │   └── aws_client.py                # AWS API wrapper<br>│   ├── chatbot/<br>│   │   └── multi_cloud_chatbot.py       # MCP client (AI integration)<br>│   └── config/<br>│       ├── cloud_accounts.yaml          # AWS account config<br>│       └── ai_integration_config.yaml   # AI API keys</pre><h4>Step 3: Creating the AWS Client</h4><p>First, create a wrapper around AWS APIs. 
This abstracts AWS SDK for Python (boto3) and provides clean methods (the two stub methods below are filled in with minimal example implementations; the full project returns more detail):</p><pre>import os<br><br>import boto3<br>from typing import Dict, Any<br><br>class AWSClient:<br>    def __init__(self, account_id: str = &quot;default&quot;):<br>        self.account_id = account_id<br>        self.session = boto3.Session(<br>            aws_access_key_id=os.getenv(&#39;AWS_ACCESS_KEY_ID&#39;),<br>            aws_secret_access_key=os.getenv(&#39;AWS_SECRET_ACCESS_KEY&#39;),<br>            region_name=os.getenv(&#39;AWS_DEFAULT_REGION&#39;, &#39;us-east-1&#39;)<br>        )<br>    <br>    def list_instances(self) -&gt; Dict[str, Any]:<br>        &quot;&quot;&quot;List EC2 instances with detailed information.&quot;&quot;&quot;<br>        ec2 = self.session.client(&#39;ec2&#39;)<br>        response = ec2.describe_instances()<br>        <br>        instances = []<br>        for reservation in response[&#39;Reservations&#39;]:<br>            for instance in reservation[&#39;Instances&#39;]:<br>                instances.append({<br>                    &#39;instance_id&#39;: instance.get(&#39;InstanceId&#39;),<br>                    &#39;status&#39;: instance.get(&#39;State&#39;, {}).get(&#39;Name&#39;),<br>                    &#39;instance_type&#39;: instance.get(&#39;InstanceType&#39;),<br>                    &#39;private_ip&#39;: instance.get(&#39;PrivateIpAddress&#39;),<br>                    &#39;public_ip&#39;: instance.get(&#39;PublicIpAddress&#39;),<br>                    # ... more fields<br>                })<br>        <br>        return {<br>            &quot;success&quot;: True,<br>            &quot;instances&quot;: instances,<br>            &quot;count&quot;: len(instances)<br>        }<br>    <br>    def list_storage_buckets(self) -&gt; Dict[str, Any]:<br>        &quot;&quot;&quot;List S3 buckets with size and configuration.&quot;&quot;&quot;<br>        s3 = self.session.client(&#39;s3&#39;)<br>        # Minimal example implementation; the full project also<br>        # gathers bucket size, encryption, and cost details<br>        buckets = [<br>            {&#39;name&#39;: b[&#39;Name&#39;], &#39;created&#39;: b[&#39;CreationDate&#39;].isoformat()}<br>            for b in s3.list_buckets()[&#39;Buckets&#39;]<br>        ]<br>        return {<br>            &quot;success&quot;: True,<br>            &quot;buckets&quot;: buckets,<br>            &quot;count&quot;: len(buckets)<br>        }<br>    <br>    def get_monitoring_alerts(self) -&gt; Dict[str, Any]:<br>        &quot;&quot;&quot;Get CloudWatch alarms.&quot;&quot;&quot;<br>        cloudwatch = self.session.client(&#39;cloudwatch&#39;)<br>        # Minimal example implementation<br>        alarms = cloudwatch.describe_alarms()[&#39;MetricAlarms&#39;]<br>        return {<br>            &quot;success&quot;: True,<br>            &quot;alarms&quot;: [{&#39;name&#39;: a[&#39;AlarmName&#39;], &#39;state&#39;: a[&#39;StateValue&#39;]}<br>                       for a in alarms],<br>            &quot;count&quot;: len(alarms)<br>        }</pre><h4>Step 4: Implementing the MCP Server</h4><p>The MCP server handles JSON-RPC requests and translates them to AWS API calls:</p><pre>import json<br>import sys<br>from typing import Any, Dict<br><br>from aws_client import AWSClient<br><br>class CloudWhisperMCPServer:<br>    def __init__(self):<br>        self.aws_client = AWSClient()<br>    <br>    def run(self):<br>        &quot;&quot;&quot;Main server loop - reads from stdin, writes to stdout.&quot;&quot;&quot;<br>        request: Dict[str, Any] = {}  # keeps the error handler safe if parsing fails<br>        while True:<br>            try:<br>                request_line = input()<br>                if not request_line:<br>                    continue<br>                <br>                request = json.loads(request_line)<br>                response = self._handle_request(request)<br>                <br>                print(json.dumps(response))<br>                sys.stdout.flush()<br>                <br>            except EOFError:<br>                break<br>            except Exception as e:<br>                error_response = {<br>                    &quot;jsonrpc&quot;: &quot;2.0&quot;,<br>                    &quot;id&quot;: request.get(&quot;id&quot;, 1),<br>                    &quot;error&quot;: {<br>                        &quot;code&quot;: -32603,<br>                        &quot;message&quot;: f&quot;Internal error: {str(e)}&quot;<br>                    }<br>                
}<br>                print(json.dumps(error_response))<br>                sys.stdout.flush()<br>    <br>    def _handle_request(self, request: Dict[str, Any]) -&gt; Dict[str, Any]:<br>        &quot;&quot;&quot;Route requests to appropriate handlers.&quot;&quot;&quot;<br>        method = request.get(&quot;method&quot;, &quot;&quot;)<br>        params = request.get(&quot;params&quot;, {})<br>        request_id = request.get(&quot;id&quot;, 1)<br>        <br>        if method == &quot;initialize&quot;:<br>            return self._handle_initialize(request_id)<br>        elif method == &quot;tools/list&quot;:<br>            return self._handle_tools_list(request_id)<br>        elif method == &quot;tools/call&quot;:<br>            return self._handle_tools_call(request_id, params)<br>        else:<br>            return {<br>                &quot;jsonrpc&quot;: &quot;2.0&quot;,<br>                &quot;id&quot;: request_id,<br>                &quot;error&quot;: {<br>                    &quot;code&quot;: -32601,<br>                    &quot;message&quot;: f&quot;Method not found: {method}&quot;<br>                }<br>            }<br>    <br>    def _handle_initialize(self, request_id: int) -&gt; Dict[str, Any]:<br>        &quot;&quot;&quot;Handle initialization request.&quot;&quot;&quot;<br>        return {<br>            &quot;jsonrpc&quot;: &quot;2.0&quot;,<br>            &quot;id&quot;: request_id,<br>            &quot;result&quot;: {<br>                &quot;protocolVersion&quot;: &quot;2024-11-05&quot;,<br>                &quot;capabilities&quot;: {<br>                    &quot;tools&quot;: {}<br>                },<br>                &quot;serverInfo&quot;: {<br>                    &quot;name&quot;: &quot;CloudWhisper MCP Server&quot;,<br>                    &quot;version&quot;: &quot;1.0.0&quot;<br>                }<br>            }<br>        }<br>    <br>    def _handle_tools_list(self, request_id: int) -&gt; Dict[str, Any]:<br>        &quot;&quot;&quot;List available tools (AWS operations).&quot;&quot;&quot;<br>        tools = [<br>            {<br>                &quot;name&quot;: &quot;list_instances&quot;,<br>                &quot;description&quot;: &quot;List EC2 instances&quot;,<br>                &quot;inputSchema&quot;: {<br>                    &quot;type&quot;: &quot;object&quot;,<br>                    &quot;properties&quot;: {}<br>                }<br>            },<br>            {<br>                &quot;name&quot;: &quot;list_storage_buckets&quot;,<br>                &quot;description&quot;: &quot;List S3 buckets&quot;,<br>                &quot;inputSchema&quot;: {<br>                    &quot;type&quot;: &quot;object&quot;,<br>                    &quot;properties&quot;: {}<br>                }<br>            },<br>            {<br>                &quot;name&quot;: &quot;get_monitoring_alerts&quot;,<br>                &quot;description&quot;: &quot;Get CloudWatch alarms&quot;,<br>                &quot;inputSchema&quot;: {<br>                    &quot;type&quot;: &quot;object&quot;,<br>                    &quot;properties&quot;: {}<br>                }<br>            }<br>        ]<br>        <br>        return {<br>            &quot;jsonrpc&quot;: &quot;2.0&quot;,<br>            &quot;id&quot;: request_id,<br>            &quot;result&quot;: {<br>                &quot;tools&quot;: tools<br>            }<br>        }<br>    <br>    def _handle_tools_call(self, request_id: int, params: Dict[str, Any]) -&gt; Dict[str, Any]:<br>        &quot;&quot;&quot;Execute tool calls (AWS 
API operations).&quot;&quot;&quot;<br>        tool_name = params.get(&quot;name&quot;, &quot;&quot;)<br>        arguments = params.get(&quot;arguments&quot;, {})<br>        <br>        if tool_name == &quot;list_instances&quot;:<br>            result = self.aws_client.list_instances()<br>            return {<br>                &quot;jsonrpc&quot;: &quot;2.0&quot;,<br>                &quot;id&quot;: request_id,<br>                &quot;result&quot;: {<br>                    &quot;content&quot;: [<br>                        {<br>                            &quot;type&quot;: &quot;text&quot;,<br>                            &quot;text&quot;: json.dumps(result)<br>                        }<br>                    ]<br>                }<br>            }<br>        # ... handle other tools<br>        <br>        return {<br>            &quot;jsonrpc&quot;: &quot;2.0&quot;,<br>            &quot;id&quot;: request_id,<br>            &quot;error&quot;: {<br>                &quot;code&quot;: -32601,<br>                &quot;message&quot;: f&quot;Tool not found: {tool_name}&quot;<br>            }<br>        }</pre><h4>Step 5: Connecting the AI Client</h4><p>The AI client (ChatGPT or Claude) communicates with the MCP server via subprocess:</p><pre>import json<br>import subprocess<br>import sys<br>from typing import Any, Dict<br><br># Assumes the OpenAI provider; the project also supports Anthropic<br>from openai import OpenAI<br><br>class AWSChatbot:<br>    def __init__(self):<br>        self.server_process = None<br>        self.start_mcp_server()<br>        # AI client used by ask_ai below; reads OPENAI_API_KEY from the environment<br>        self.ai_client = OpenAI()<br>    <br>    def start_mcp_server(self):<br>        &quot;&quot;&quot;Start MCP server as subprocess.&quot;&quot;&quot;<br>        server_path = &quot;src/mcp_server/multi_cloud_mcp_server.py&quot;<br>        self.server_process = subprocess.Popen(<br>            [sys.executable, server_path, &quot;--subprocess&quot;],<br>            stdin=subprocess.PIPE,<br>            stdout=subprocess.PIPE,<br>            text=True,<br>            bufsize=1<br>        )<br>    <br>    def call_tool(self, tool_name: str, arguments: Dict[str, Any] = None) -&gt; str:<br>        &quot;&quot;&quot;Call a tool on the MCP server.&quot;&quot;&quot;<br>        request = {<br>            &quot;jsonrpc&quot;: &quot;2.0&quot;,<br>            &quot;id&quot;: 1,<br>            &quot;method&quot;: &quot;tools/call&quot;,<br>            &quot;params&quot;: {<br>                &quot;name&quot;: tool_name,<br>                &quot;arguments&quot;: arguments or {}<br>            }<br>        }<br>        <br>        # Send request<br>        self.server_process.stdin.write(json.dumps(request) + &quot;\n&quot;)<br>        self.server_process.stdin.flush()<br>        <br>        # Read response<br>        response_line = self.server_process.stdout.readline()<br>        response = json.loads(response_line)<br>        <br>        if &quot;result&quot; in response:<br>            return response[&quot;result&quot;][&quot;content&quot;][0][&quot;text&quot;]<br>        <br>        return None<br>    <br>    def ask_ai(self, question: str):<br>        &quot;&quot;&quot;Get cloud data and ask AI about it.&quot;&quot;&quot;<br>        # Get data from AWS via MCP<br>        instances_data = json.loads(self.call_tool(&quot;list_instances&quot;))<br>        storage_data = json.loads(self.call_tool(&quot;list_storage_buckets&quot;))<br>        alerts_data = json.loads(self.call_tool(&quot;get_monitoring_alerts&quot;))<br>        <br>        # Prepare context for AI<br>        context = self._prepare_context(instances_data, storage_data, alerts_data)<br>        <br>        # Ask AI (ChatGPT or Claude)<br>        response = self.ai_client.chat.completions.create(<br>            model=&quot;gpt-4&quot;,<br>            messages=[<br>                {&quot;role&quot;: &quot;system&quot;, &quot;content&quot;: &quot;You are an AWS infrastructure expert.&quot;},<br>                {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: f&quot;{context}\n\nQuestion: {question}&quot;}<br>            ]<br>        )<br>        <br>        return response.choices[0].message.content</pre>
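<p>The _prepare_context helper referenced above is not shown here; a minimal illustrative version, assuming it simply flattens the MCP tool results into prompt text (the repository’s version is richer), could look like this:</p><pre>def _prepare_context(self, instances, storage, alerts) -&gt; str:<br>    &quot;&quot;&quot;Flatten MCP tool results into a text context for the AI prompt.&quot;&quot;&quot;<br>    lines = [&#39;Current AWS environment:&#39;]<br>    for inst in instances.get(&#39;instances&#39;, []):<br>        lines.append(<br>            f&quot;- EC2 {inst.get(&#39;instance_id&#39;)} ({inst.get(&#39;instance_type&#39;)}): &quot;<br>            f&quot;{inst.get(&#39;status&#39;)}, private IP {inst.get(&#39;private_ip&#39;)}&quot;<br>        )<br>    lines.append(f&quot;S3 buckets: {storage.get(&#39;count&#39;, &#39;unknown&#39;)}&quot;)<br>    lines.append(f&quot;CloudWatch alarms: {alerts.get(&#39;count&#39;, &#39;unknown&#39;)}&quot;)<br>    return &#39;\n&#39;.join(lines)</pre>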
<h3>Configuration and Setup</h3><h4>Prerequisites</h4><p>Before you start, you’ll need:</p><ol><li>Python 3.8+ installed</li><li>AWS Account with appropriate permissions</li><li>AI API Key (OpenAI or Anthropic)</li><li>AWS Credentials configured</li></ol><h4>Installation Steps</h4><h4>1. Clone and Install Dependencies</h4><pre>git clone https://github.com/MohammadJomaa/cloudwhisper.git<br>cd cloudwhisper<br>pip install -r requirements.txt</pre><h4>The key dependencies are:</h4><ul><li>boto3 — AWS SDK for Python (boto3)</li><li>openai or anthropic — AI model APIs</li><li>flask — Web interface (optional)</li><li>pyyaml — Configuration file parsing</li></ul><h4>2. Configure AWS Credentials</h4><p><strong>Option A: Environment Variables</strong></p><pre>export AWS_ACCESS_KEY_ID=&quot;your-access-key&quot;<br>export AWS_SECRET_ACCESS_KEY=&quot;your-secret-key&quot;<br>export AWS_DEFAULT_REGION=&quot;us-east-1&quot;</pre><p><strong>Option B: Configuration File</strong></p><pre>./setup_config.sh<br># Then edit src/config/cloud_accounts.yaml</pre><p><strong>Option C: AWS Profile</strong></p><pre>export AWS_PROFILE=&quot;your-profile-name&quot;</pre><h4>3. Configure AI Integration</h4><p>Set your AI API key:</p><pre># For OpenAI<br>export OPENAI_API_KEY=&quot;sk-your-openai-key&quot;<br><br># For Anthropic<br>export ANTHROPIC_API_KEY=&quot;sk-ant-api03-your-anthropic-key&quot;</pre><p>Or use configuration files:</p><pre># src/config/ai_integration_config.yaml<br>ai_integration:<br>  chatgpt:<br>    enabled: true<br>    api_key: &quot;sk-your-openai-key&quot;<br>    model: &quot;gpt-4&quot;<br>  <br>  claude:<br>    enabled: true<br>    api_key: &quot;sk-ant-api03-your-key&quot;<br>    model: &quot;claude-3-sonnet-20240229&quot;</pre><h4>4. Verify Configuration</h4><pre>python3 validate_config.py</pre><p>This checks:</p><ul><li>AWS credentials are valid</li><li>AI API keys are set</li><li>Configuration files are properly formatted</li><li>All required permissions are available</li></ul>
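<p>For reference, the kind of fail-fast checks validate_config.py performs can be sketched in a few lines; this is a simplified assumption about the script, not its actual contents:</p><pre>import os<br><br>import boto3<br><br>def validate_aws_credentials():<br>    # Fails fast if the configured credentials are invalid or expired<br>    identity = boto3.client(&#39;sts&#39;).get_caller_identity()<br>    print(f&quot;AWS OK - account {identity[&#39;Account&#39;]}&quot;)<br><br>def validate_ai_keys():<br>    # At least one AI provider key must be configured<br>    if not (os.getenv(&#39;OPENAI_API_KEY&#39;) or os.getenv(&#39;ANTHROPIC_API_KEY&#39;)):<br>        raise RuntimeError(&#39;No AI API key configured&#39;)<br><br>if __name__ == &#39;__main__&#39;:<br>    validate_aws_credentials()<br>    validate_ai_keys()</pre>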
<h4>5. Start the Application</h4><p>Web Interface:</p><pre>python3 chat_ui.py<br># Open http://localhost:5001</pre><p>Command Line:</p><pre>python3 ai_chatbot.py</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kZ0K7MOf5Zg9zpHGjK_3qg.png" /></figure><h3>Real-World Usage Examples</h3><p>Here are some practical examples of what you can do:</p><h4>Example 1: Infrastructure Overview</h4><p>Question: “Give me an overview of my AWS infrastructure”</p><p>What Happens:</p><ol><li>MCP server calls list_instances(), list_storage_buckets(), get_monitoring_alerts()</li><li>AI receives structured data about all resources</li><li>AI generates a comprehensive summary</li></ol><p>Response:</p><pre>🖥️ AWS Infrastructure Overview<br><br>## 📊 Summary<br>• Total EC2 Instances: 5<br>  - Running: 3<br>  - Stopped: 2<br>• S3 Buckets: 8<br>  - Total Storage: 245.67 GB<br>  - Encrypted: 6/8<br>• CloudWatch Alarms: 12<br>  - Active: 8<br>  - In Alarm: 2<br><br>## 💰 Cost Insights<br>• Estimated Monthly Cost: $234.56<br>• Top Cost Driver: EC2 instances (65%)<br>• Recommendation: Consider Reserved Instances for production workloads<br><br>## 🔒 Security Status<br>• 2 S3 buckets without encryption<br>• 1 instance without monitoring enabled<br>• Recommendation: Enable encryption and monitoring for all resources</pre><h4>Example 2: Specific Resource Query</h4><p>Question: “What are the IP addresses of my EC2 instances?”</p><p>What Happens:</p><ol><li>MCP server calls list_instances()</li><li>Extracts network interface information</li><li>AI formats the response with IP addresses</li></ol><p>Response:</p><pre>🖥️ EC2 Instance IP Addresses<br><br>## Instance #1: web-server-prod<br>• Instance ID: `i-1234567890abcdef0`<br>• Private IP: 10.0.1.45<br>• Public IP: 54.123.45.67<br>• VPC: vpc-abc123<br>• Subnet: subnet-xyz789<br><br>## Instance #2: app-server-staging<br>• Instance ID: `i-0987654321fedcba0`<br>• Private IP: 10.0.2.12<br>• Public IP: None (no public IP assigned)<br>• VPC: vpc-def456<br>• Subnet: subnet-uvw456<br><br>## Instance #3: db-server-prod<br>• Instance ID: `i-1122334455aabbcc0`<br>• Private IP: 10.0.3.78<br>• Public IP: None (intentionally private)<br>• VPC: vpc-ghi789<br>• Subnet: subnet-rst123</pre><h4>Example 3: Cost Analysis</h4><p>Question: “How can I reduce my AWS costs?”</p><p>What Happens:</p><ol><li>MCP server gathers instance data, storage sizes, and usage patterns</li><li>AI analyzes the data for cost optimization opportunities</li><li>AI provides specific, actionable recommendations</li></ol><p>Response:</p><pre>💰 AWS Cost Optimization Analysis<br><br>## Current Spending Breakdown<br>• EC2 Instances: $156.78/month (67%)<br>• S3 Storage: $45.23/month (19%)<br>• Data Transfer: $23.45/month (10%)<br>• Other Services: $9.12/month (4%)<br><br>## 💡 Optimization Recommendations<br><br>### 1. EC2 Instance Optimization<br>• **Stopped Instances**: You have 2 stopped instances that are still incurring EBS costs<br>  - Action: Terminate `i-0987654321` and `i-1122334455` if not needed<br>  - Potential Savings: $23.45/month<br><br>• **Instance Right-Sizing**: `i-1234567890` is a t3.xlarge but only using 15% CPU<br>  - Action: Consider downgrading to t3.large<br>  - Potential Savings: $18.67/month<br><br>• **Reserved Instances**: For production instances, consider Reserved Instances<br>  - Action: Purchase 1-year Reserved Instance for `i-1234567890`<br>  - Potential Savings: $12.34/month (upfront payment required)<br><br>
### 2. S3 Storage Optimization<br>• **Lifecycle Policies**: 3 buckets have old data that could be moved to Glacier<br>  - Action: Set up lifecycle policies for `logs-bucket`, `backups-bucket`, `archive-bucket`<br>  - Potential Savings: $8.90/month<br><br>• **Unused Buckets**: `test-bucket-old` has 0 objects but still exists<br>  - Action: Delete unused buckets<br>  - Potential Savings: $0.50/month (minimal, but good practice)<br><br>### 3. Data Transfer Optimization<br>• **CloudFront**: Consider using CloudFront for static content<br>  - Action: Set up CloudFront distribution for `static-assets-bucket`<br>  - Potential Savings: $5.67/month<br><br>## 📊 Total Potential Savings<br>• Monthly: $69.53 (30% reduction)<br>• Annual: $834.36<br><br>## ⚠️ Important Notes<br>• Review all recommendations before implementing<br>• Test changes in staging environment first<br>• Monitor costs after implementing changes<br>• Some optimizations require upfront investment (Reserved Instances)</pre><h4>Example 4: Security Analysis</h4><p>Question: “What security issues do I have?”</p><p>What Happens:</p><ol><li>MCP server checks encryption status, monitoring, and access patterns</li><li>AI analyzes security configurations</li><li>AI provides prioritized security recommendations</li></ol><p>Response:</p><pre>🔒 AWS Security Analysis<br><br>## 🚨 Critical Issues<br><br>### 1. Unencrypted S3 Buckets<br>• **Buckets**: `logs-bucket`, `temp-uploads`<br>• **Risk**: Data exposure if bucket is accidentally made public<br>• **Action**: Enable server-side encryption immediately<br>• **Priority**: HIGH<br><br>### 2. Missing CloudWatch Monitoring<br>• **Instance**: `i-1122334455` (db-server-prod)<br>• **Risk**: No visibility into performance or security events<br>• **Action**: Enable detailed monitoring<br>• **Priority**: HIGH<br><br>## ⚠️ Medium Priority Issues<br><br>### 3. Overly Permissive Amazon EC2 security group<br>• **Amazon EC2 security group**: `sg-web-server` allows SSH from 0.0.0.0/0<br>• **Risk**: Potential unauthorized access<br>• **Action**: Restrict SSH access to your IP or VPN<br>• **Priority**: MEDIUM<br><br>### 4. No MFA on Root Account<br>• **Risk**: Root account compromise could affect entire account<br>• **Action**: Enable MFA for root account<br>• **Priority**: MEDIUM<br><br>## ✅ Security Best Practices Checklist<br><br>### Implemented<br>✅ CloudTrail logging enabled<br>✅ S3 versioning enabled on production buckets<br>✅ AWS Identity and Access Management (IAM) roles in use<br>✅ Amazon Virtual Private Cloud (Amazon VPC) with private subnets configured<br><br>### Recommended<br>1. Enable encryption on all S3 buckets<br>2. Enable Amazon CloudWatch monitoring on all instances<br>3. Review and restrict Amazon EC2 security group rules<br>4. Audit AWS Identity and Access Management (IAM) users and remove unused credentials<br>5. Set up AWS Config for compliance monitoring<br>6. Implement Amazon GuardDuty for threat detection<br>7. Regular security audits (quarterly recommended)<br><br>## 📋 Action Plan<br>1. **This Week**: Enable encryption on unencrypted buckets<br>2. **This Week**: Enable monitoring on db-server-prod<br>3. **This Month**: Review and update all Amazon EC2 security groups<br>4. **This Month**: Enable MFA for root account<br>5. **Ongoing**: Regular security reviews and audits</pre><h3>Conclusion: The Future of Infrastructure Management</h3><p>MCP with AWS enables natural language interaction with your infrastructure. 
Instead of navigating consoles or writing scripts, you can ask questions and get intelligent answers.</p><p>The combination of:</p><ul><li>MCP for standardized AI-cloud communication</li><li>AWS for comprehensive cloud services</li><li>AI models for intelligent analysis</li></ul><p>creates a powerful tool for infrastructure management.</p><p>Whether you’re building your own MCP server or using CloudWhisper as a starting point, the key is understanding how these pieces fit together. Start simple with EC2 and S3, then expand to other services as needed.</p><p>The future of cloud management isn’t just about automation — it’s about making infrastructure accessible, understandable, and manageable through natural conversation. MCP is the bridge that makes this possible.</p><hr><p><a href="https://faun.pub/bridging-ai-and-aws-a-deep-dive-into-using-model-context-protocol-mcp-for-intelligent-cloud-f464ae62a407">Bridging AI and AWS: A Deep Dive into Using Model Context Protocol (MCP) for Intelligent Cloud…</a> was originally published in <a href="https://faun.pub">FAUN.dev() 🐾</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[CloudWhisper: Revolutionizing AWS Infrastructure Management with AI and MCP]]></title>
            <link>https://faun.pub/cloudwhisper-revolutionizing-aws-infrastructure-management-with-ai-and-mcp-b125811983cc?source=rss-6a59297ae215------2</link>
            <guid isPermaLink="false">https://medium.com/p/b125811983cc</guid>
            <category><![CDATA[python]]></category>
            <category><![CDATA[mcps]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[openai]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[Mohammad jomaa]]></dc:creator>
            <pubDate>Wed, 15 Oct 2025 20:42:37 GMT</pubDate>
            <atom:updated>2025-10-15T20:42:37.982Z</atom:updated>
<content:encoded><![CDATA[<p>How I built an AI-powered chatbot that whispers intelligent insights about your AWS infrastructure using the Model Context Protocol</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kZ0K7MOf5Zg9zpHGjK_3qg.png" /><figcaption>CloudWhisper’s modern interface for conversational AWS management</figcaption></figure><h4>The Problem: Infrastructure Management is Too Complex</h4><p>If you’ve ever managed AWS infrastructure, you know the struggle:</p><h4>The AWS Console Dance</h4><p>• Log in to AWS Console</p><p>• Navigate through countless menus</p><p>• Click through multiple pages to find one EC2 instance IP</p><p>• Repeat for every simple question</p><p>• Switch accounts? Start over.</p><h4>The Script Fatigue</h4><p>• Need to know S3 storage costs? Write a script.</p><p>• Want to check CloudWatch alarms? Another script.</p><p>• Different account? Modify the script.</p><p>• It’s exhausting.</p><h4>The Context Switching Hell</h4><p>• Console for viewing</p><p>• CLI for querying</p><p>• Third-party tools for analysis</p><p>• Mental model constantly shifting</p><p>I realized: We’re managing 2024 cloud infrastructure with 2014 tools.</p><h4>The Vision: What If Infrastructure Could Talk?</h4><p>Imagine asking your infrastructure questions like you’d ask a colleague:</p><p>“What are my EC2 instance IPs?”</p><p>“How much am I spending on S3?”</p><p>“Are there any security vulnerabilities?”</p><p>And getting instant, intelligent answers. Not raw JSON. Not CloudWatch graphs. Actual insights.</p><p>That’s the vision behind CloudWhisper.</p><h4>What is <a href="https://github.com/MohammadJomaa/cloudwhisper">CloudWhisper</a>?</h4><p><a href="https://github.com/MohammadJomaa/cloudwhisper">CloudWhisper</a> is an AI-powered infrastructure chatbot that provides real-time insights about your AWS environment through natural language conversations.</p><p>But it’s more than just a chatbot. It’s a new paradigm for infrastructure management.</p><h4>The Core Innovation: MCP Integration</h4><p>CloudWhisper is built on the Model Context Protocol (MCP) — a standardized protocol that enables AI models to securely connect to external data sources.</p><h4>Traditional Approach:</h4><p>User → AWS Console → Manual Analysis → Decision</p><h4>CloudWhisper Approach:</h4><p>User → Natural Language Question → AI + Real-Time AWS Data → Intelligent Answer</p><p>The difference? Context and intelligence.</p><h3>The Technology Stack: How CloudWhisper Works</h3><p>Architecture Overview:</p><pre>User asks: “What are my EC2 IPs?”<br>  ↓<br>CloudWhisper Web UI (Beautiful interface + Natural Language Processing)<br>  ↓<br>AI Integration (ChatGPT or Claude)<br>  ↓<br>MCP Server (Model Context Protocol — Translates AI → AWS API)<br>  ↓<br>AWS Services (Real-time data from EC2, S3, CloudWatch)<br>  ↓<br>Data Retrieved and Formatted<br>  ↓<br>AI Analyzes Data<br>  ↓<br>Intelligent Response Generated<br>  ↓<br>User Receives Insights</pre><h3>The MCP Magic</h3><p>Model Context Protocol (MCP) is the secret sauce that makes CloudWhisper possible:</p><p>1. Secure Communication: AI models can’t directly access AWS — MCP acts as a secure intermediary</p><p>2. Real-Time Data: Every query fetches live data from your AWS account</p><p>3. Contextual Understanding: AI receives formatted, relevant data for accurate analysis</p>
<p>4. Controlled Access: You define what data AI can access</p><h3>Technology Choices</h3><h4>Backend:</h4><blockquote>• Python 3.8+ for robust backend logic</blockquote><blockquote>• Flask for lightweight web framework</blockquote><blockquote>• Boto3 for official AWS SDK</blockquote><blockquote>• Structlog for structured logging</blockquote><h4>AI Integration:</h4><blockquote>• OpenAI GPT-4 for conversational intelligence</blockquote><blockquote>• Anthropic Claude as alternative AI provider</blockquote><blockquote>• MCP Protocol for secure AI-cloud communication</blockquote><h4>Frontend:</h4><blockquote>• Modern HTML/CSS/JS for responsive, beautiful interface</blockquote><blockquote>• Real-time updates for instant AI responses</blockquote><blockquote>• Loading animations for professional UX</blockquote><h3>Key Features: What Makes CloudWhisper Special</h3><h4>1. Natural Language Queries</h4><p>Instead of navigating AWS Console or writing scripts, just ask:</p><p>Example Queries:</p><p>“What are the IP addresses of my EC2 instances?”</p><p>→ Gets all running instances with public/private IPs</p><p>“How much storage am I using in S3?”</p><p>→ Lists all buckets with sizes and costs</p><p>“Show me my CloudWatch alarms”</p><p>→ Displays all alarms with status and metrics</p><p>“Are there any security issues?”</p><p>→ AI-powered security analysis with recommendations</p><h4>2. Real-Time AWS Data</h4><p>Unlike static dashboards or scheduled reports, CloudWhisper queries your AWS infrastructure in real-time.</p><p>Every time you ask a question:</p><p>• Fresh API calls to AWS services</p><p>• Latest instance states</p><p>• Current resource usage</p><p>• Up-to-date alarm statuses</p><p>This means you’re always working with current data.</p><h4>3. AI-Powered Intelligence</h4><p>CloudWhisper doesn’t just return data — it analyzes and explains it.</p><blockquote>Example:</blockquote><p>You ask: “How can I reduce AWS costs?”</p><p>CloudWhisper responds:</p><p>Cost Optimization Analysis</p><p>Current Infrastructure</p><p>• Total EC2 Instances: 5 ($250/month)</p><p>• S3 Storage: 500GB ($12/month)</p><p>• Running but Unused: 2 instances</p><h4>Recommendations</h4><p>1. Stop 2 unused instances → Save $100/month</p><p>2. Move old S3 data to Glacier → Save $8/month</p><p>3. Use Reserved Instances → Save $50/month</p><p>Total Potential Savings: $158/month (60%)</p><h4>4. Multi-Account Support</h4><p>Managing multiple AWS accounts? CloudWhisper makes it easy:</p><p>• Quick switching between accounts</p><p>• Account-specific analysis</p><p>• Cross-account comparisons</p><p>• Centralized management</p><h4>Perfect for:</h4><p>• Development/Staging/Production environments</p><p>• Multi-client management</p><p>• Enterprise AWS setups</p><p>• Agency infrastructure management</p><h4>5. Beautiful Interface</h4><p>Web UI:</p><p>• Modern gradient design</p><p>• Real-time chat interface</p><p>• Responsive layout</p><p>• Loading animations</p><p>• Status indicators</p><p>• Quick action buttons</p><h4>CLI:</h4><p>• Terminal-based interface</p><p>• Fast responses</p><p>• Full feature parity</p><p>• Perfect for automation</p><h3>Final Thoughts</h3><p>Building CloudWhisper has been an incredible journey. From a simple idea — “what if I could just ask my infrastructure questions?” — to a fully featured AI-powered chatbot using cutting-edge MCP technology.</p><p>But this is just the beginning.</p><p>The future of infrastructure management is conversational, intelligent, and real-time. 
CloudWhisper is a step toward that future.</p><p>🎁 I’m Open-Sourcing It!<br>Because the best tools are built by communities, not individuals.</p><p>🤝 Calling All Contributors:<br>Whether you’re a Python dev, AWS expert, AI enthusiast, or just curious — your contributions are welcome!</p><p>🔗 Check it out:<br><a href="https://github.com/MohammadJomaa/cloudwhisper">https://github.com/MohammadJomaa/cloudwhisper</a></p><p>If you find it useful, please ⭐ star the repo and share with your network!</p><p>Let’s make infrastructure management more conversational! 💬</p><hr><p><a href="https://faun.pub/cloudwhisper-revolutionizing-aws-infrastructure-management-with-ai-and-mcp-b125811983cc">CloudWhisper: Revolutionizing AWS Infrastructure Management with AI and MCP</a> was originally published in <a href="https://faun.pub">FAUN.dev() 🐾</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Implementing Automated Resource Tagging Across AWS Organizations]]></title>
            <link>https://faun.pub/implementing-automated-resource-tagging-across-aws-organizations-247e70d384ae?source=rss-6a59297ae215------2</link>
            <guid isPermaLink="false">https://medium.com/p/247e70d384ae</guid>
            <category><![CDATA[eventbridge]]></category>
            <category><![CDATA[tagging]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[aws-config]]></category>
            <category><![CDATA[aws-lambda]]></category>
            <dc:creator><![CDATA[Mohammad jomaa]]></dc:creator>
            <pubDate>Tue, 16 Sep 2025 08:44:48 GMT</pubDate>
            <atom:updated>2025-09-22T06:04:30.630Z</atom:updated>
<content:encoded><![CDATA[<p>This article presents a comprehensive solution for implementing automated resource tagging across AWS Organizations using CloudFormation, Lambda functions, and AWS Config. The solution addresses the common challenge of maintaining consistent resource tags across multiple AWS accounts by providing an automated system that tags resources based on their account’s organizational tags, monitors compliance through AWS Config, and propagates tags from organizational units to child accounts. The implementation consists of two CloudFormation templates deployed across different regions, with the core infrastructure in the primary region and OU tag inheritance components in the us-east-1 region. The solution includes cross-account role management, automated StackSet deployment, and comprehensive error handling. This approach has been successfully implemented and tested in production environments, providing significant improvements in resource governance, compliance monitoring, and operational efficiency.</p><h3>GitHub Repository</h3><p>The complete implementation, including all CloudFormation templates, documentation, and deployment scripts, is available in the following GitHub repository:</p><p><a href="https://github.com/MohammadJomaa/AWS_ORG_TAGGING">AWS_ORG_TAGGING Repository</a></p><p>The repository contains:</p><ul><li><strong><em>complete-tagging-strategy.yaml:</em></strong> Main CloudFormation template for core infrastructure</li><li><strong><em>virginia-region-components.yaml:</em></strong> Virginia region template for OU tag inheritance</li></ul><h3>Introduction</h3><p>Managing resource tags across multiple AWS accounts can quickly become a nightmare. When you’re dealing with dozens or hundreds of accounts in an AWS Organization, manually ensuring consistent tagging becomes impossible. This is where automated tagging strategies come into play.</p><p>In this article, I’ll walk you through a real-world solution we implemented for automated resource tagging using AWS CloudFormation, Lambda functions, and Organization Config Rules. The approach we developed handles both existing resource tagging and ensures new resources inherit proper tags from their organizational units.</p><h3>The Challenge</h3><p>Most organizations struggle with:</p><ul><li>Inconsistent tagging across accounts</li><li>Manual tag management overhead</li><li>Compliance requirements for resource governance</li><li>Difficulty tracking costs and resources without proper tags</li></ul><p>Our solution addresses these pain points by creating an automated system that:</p><ul><li>Tags resources based on their account’s organizational tags</li><li>Monitors compliance through AWS Config</li><li>Propagates tags from organizational units to child accounts</li><li>Works seamlessly across multiple regions</li></ul><h3>Architecture Overview</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/751/1*YZLFj3PTdJi0tFsmSbZuCQ.png" /><figcaption>Org Tagging Architecture Overview</figcaption></figure><p>The solution consists of two main components deployed across different regions:</p><h4>Core Infrastructure (Primary Region)</h4><p>The main deployment includes Lambda functions that handle resource tagging and compliance evaluation. 
These functions work with AWS Config to monitor resource compliance and automatically apply missing tags.</p><h4>OU Tag Inheritance (Virginia Region)</h4><p>Since AWS Organizations events are only published in the us-east-1 region, we deploy a separate component there to handle organizational unit tag inheritance. This ensures that when OUs are tagged, those tags automatically propagate to child accounts and OUs.</p><h4>Implementation Details</h4><p>Before diving into the CloudFormation templates, you’ll need to ensure your AWS Organization is properly configured. This includes:</p><ul><li>Enabling AWS Organizations</li><li>Setting up CloudFormation StackSets</li><li>Configuring AWS Config in your target regions</li><li>Having appropriate IAM permissions</li></ul><p>The deployment requires several key pieces of information:</p><ul><li>Your management account ID</li><li>Your organization ID (in the format o-xxxxxxxxxx)</li><li>Target regions for deployment</li><li>Specific configuration preferences</li></ul><h4>Repository Structure and Files</h4><p>The <a href="https://github.com/MohammadJomaa/AWS_ORG_TAGGING">GitHub repository</a> is organized as follows:</p><pre>AWS_ORG_TAGGING/<br>├── complete-tagging-strategy.yaml      # Main CloudFormation template<br>├── virginia-region-components.yaml     # Virginia region template<br>├── readme.md                          # Deployment documentation<br>└── [Additional files as needed]</pre><h4>Key Files Explained</h4><p><strong>1. complete-tagging-strategy.yaml</strong> is the primary template. It deploys:</p><ul><li>Lambda functions for resource tagging and compliance evaluation</li><li>IAM roles with appropriate permissions</li><li>AWS Config Organization Config Rule</li><li>CloudFormation StackSet for cross-account role deployment</li><li>Custom resources for automated StackSet instance creation</li></ul><p><strong>2. virginia-region-components.yaml</strong> deploys:</p><ul><li>OU tag inheritance Lambda function</li><li>EventBridge rules for Organizations events</li><li>Necessary permissions for cross-region functionality</li></ul><h3>Core Tagging Logic</h3><p>The heart of the solution lies in two Lambda functions, described next and sketched in simplified form after the descriptions:</p><h4><strong><em>OrgAccountTagger</em></strong></h4><p>This function identifies resources missing required tags and applies them automatically. It works by:</p><ul><li>Retrieving the account’s organizational tags</li><li>Scanning resources in the account</li><li>Comparing existing tags with required tags</li><li>Applying missing tags using the appropriate AWS service APIs</li></ul><h4><strong><em>ConfigOrgAccountTagEvaluator</em></strong></h4><p>This function evaluates resource compliance for AWS Config. It:</p><ul><li>Receives configuration change notifications</li><li>Checks if resources have all required tag keys</li><li>Reports compliance status back to Config</li><li>Triggers the tagger function when non-compliant resources are found</li></ul>
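<p>The repository contains the full Lambda code; conceptually, the tagger boils down to comparing an account’s organization tags against each resource’s tags and patching the gaps. The following is a simplified sketch using assumed example tag keys, not the repository’s actual code:</p><pre>import boto3<br><br>REQUIRED_TAG_KEYS = [&#39;CostCenter&#39;, &#39;Environment&#39;, &#39;Owner&#39;]  # example keys<br><br>def get_account_tags(account_id):<br>    # Organizations APIs are called from the management account<br>    org = boto3.client(&#39;organizations&#39;)<br>    tags = org.list_tags_for_resource(ResourceId=account_id)[&#39;Tags&#39;]<br>    return {t[&#39;Key&#39;]: t[&#39;Value&#39;] for t in tags}<br><br>def tag_missing_resources(session, account_tags):<br>    # The session is scoped to the member account (see the next section)<br>    tagging = session.client(&#39;resourcegroupstaggingapi&#39;)<br>    for page in tagging.get_paginator(&#39;get_resources&#39;).paginate():<br>        for res in page[&#39;ResourceTagMappingList&#39;]:<br>            existing = {t[&#39;Key&#39;] for t in res.get(&#39;Tags&#39;, [])}<br>            missing = {k: v for k, v in account_tags.items()<br>                       if k in REQUIRED_TAG_KEYS and k not in existing}<br>            if missing:<br>                tagging.tag_resources(<br>                    ResourceARNList=[res[&#39;ResourceARN&#39;]],<br>                    Tags=missing)</pre>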
This role allows the management account’s Lambda functions to tag resources in member accounts.</p><p>The StackSet deployment is automated through a custom CloudFormation resource that:</p><ul><li>Creates the StackSet</li><li>Automatically provisions instances in all member accounts</li><li>Handles deployment failures gracefully</li><li>Provides visibility into deployment status</li></ul><h4>Organizational Unit Tag Inheritance</h4><p>The OU tag inheritance component is particularly useful for maintaining consistency. When you tag an organizational unit, the system automatically:</p><ul><li>Applies those tags to all direct child accounts</li><li>Propagates tags to direct child OUs</li><li>Handles account moves between OUs</li><li>Manages new account creation</li></ul><p>This is implemented through EventBridge rules that listen for Organizations API calls and trigger the appropriate Lambda function.</p>
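<p>As a rough illustration of that flow, the sketch below (Python/boto3; the CloudTrail-style event shape is an assumption and pagination is omitted for brevity) shows how such a handler might copy an OU’s tags to its direct children. The repository’s actual function will differ in its details.</p><pre>import boto3<br><br>org = boto3.client("organizations")<br><br>def lambda_handler(event, context):<br>    # EventBridge delivers Organizations API calls via CloudTrail;<br>    # for TagResource, the tagged OU is in requestParameters.resourceId<br>    ou_id = event["detail"]["requestParameters"]["resourceId"]<br>    ou_tags = org.list_tags_for_resource(ResourceId=ou_id)["Tags"]<br>    if not ou_tags:<br>        return<br>    # Propagate the OU's tags to direct child accounts and child OUs<br>    children = org.list_accounts_for_parent(ParentId=ou_id)["Accounts"]<br>    child_ous = org.list_organizational_units_for_parent(ParentId=ou_id)["OrganizationalUnits"]<br>    for child in children + child_ous:<br>        org.tag_resource(ResourceId=child["Id"], Tags=ou_tags)</pre><h3>Deployment Process</h3><h4>Step 1: Main Template Deployment</h4><p>The first step involves deploying the core infrastructure in your primary region. This template creates:</p><ul><li>IAM roles with appropriate permissions</li><li>Lambda functions for tagging and evaluation</li><li>AWS Config rules for compliance monitoring</li><li>StackSet for cross-account role deployment</li></ul><p><strong><em>Required Parameters for Main Template</em></strong></p><pre>- `ManagementAccountId`: Your AWS Organizations management account ID (12-digit number)<br>- `OrganizationId`: Your AWS Organizations ID (format: o-xxxxxxxxxx)<br>- `ConfigRegion`: Region where AWS Config is enabled (e.g., me-central-1)<br>- `DeploymentRegions`: Comma-separated list of regions for StackSet deployment<br>- `EnforceValues`: Boolean flag for tag value enforcement (default: false)<br>- `TaggerFunctionName`: Name for the tagging Lambda function (default: OrgAccountTagger)<br>- `EvaluatorFunctionName`: Name for the evaluator Lambda function (default: ConfigOrgAccountTagEvaluator)<br>- `MemberRoleName`: Name for cross-account role (default: TagPropagatorRole)<br>- `FailureTolerancePercentage`: Percentage of accounts that can fail during deployment (default: 0)<br>- `MaxConcurrentPercentage`: Maximum percentage of accounts to process concurrently (default: 10)</pre><p><strong><em>The deployment command requires several parameters:</em></strong></p><pre>aws cloudformation deploy \<br>  --template-file complete-tagging-strategy.yaml \<br>  --stack-name AutomatedTaggingStrategy \<br>  --parameter-overrides \<br>    ManagementAccountId=YOUR_ACCOUNT_ID \<br>    OrganizationId=o-YOUR_ORG_ID \<br>    ConfigRegion=me-central-1 \<br>    DeploymentRegions=me-central-1 \<br>    EnforceValues=false \<br>    TaggerFunctionName=OrgAccountTagger \<br>    EvaluatorFunctionName=ConfigOrgAccountTagEvaluator \<br>    MemberRoleName=TagPropagatorRole \<br>    FailureTolerancePercentage=0 \<br>    MaxConcurrentPercentage=10 \<br>  --capabilities CAPABILITY_NAMED_IAM \<br>  --region me-central-1</pre><h4>Step 2: Virginia Region Components</h4><p>The second deployment handles the OU tag inheritance components in <strong>us-east-1</strong>. This step is crucial because AWS Organizations events are only published in the <strong>us-east-1</strong> region.</p><p><strong><em>Required Parameters for Virginia Template:</em></strong></p><pre>- `LambdaTagPropagatorRoleArn`: ARN of the Lambda execution role from the main template output<br>- `StackName`: Name of the main CloudFormation stack (used for resource naming)</pre><blockquote>Important 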
Notes</blockquote><blockquote>- This template must be deployed in the us-east-1 region</blockquote><blockquote>- The `LambdaTagPropagatorRoleArn` parameter must be obtained from the main template’s output</blockquote><blockquote>- The template creates EventBridge rules that listen for Organizations API events</blockquote><pre>aws cloudformation deploy \<br>  --template-file virginia-region-components.yaml \<br>  --stack-name AutomatedTaggingStrategy-Virginia \<br>  --parameter-overrides \<br>    LambdaTagPropagatorRoleArn=arn:aws:iam::ACCOUNT:role/LambdaTagPropagatorRole \<br>    StackName=AutomatedTaggingStrategy \<br>  --capabilities CAPABILITY_NAMED_IAM \<br>  --region us-east-1</pre><h3>Security Best Practices</h3><p>All IAM roles follow the principle of least privilege. Cross-account roles have minimal permissions, and Lambda execution roles are scoped to specific functions. The solution doesn’t require any public internet access and uses AWS internal networks for all communication.</p><h4>Testing and Validation</h4><p>After deployment, it’s crucial to test the solution thoroughly:</p><ol><li>Manual Lambda Testing: Invoke the functions directly to verify they work correctly (see the sketch after this list)</li><li>Config Rule Validation: Check that the Organization Config Rule is evaluating resources properly</li><li>OU Tag Inheritance: Test tagging organizational units and verifying tag propagation</li><li>Cross-Account Functionality: Ensure the StackSet deployed roles correctly in member accounts</li></ol>
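<p>For the first check, you can invoke the tagger directly with boto3. The payload shape here is hypothetical; it depends on how the deployed function is actually triggered:</p><pre>import json<br>import boto3<br><br>lam = boto3.client("lambda")<br>resp = lam.invoke(<br>    FunctionName="OrgAccountTagger",  # default name from the template parameters<br>    Payload=json.dumps({"accountId": "111122223333"}).encode(),  # illustrative payload<br>)<br>print(json.loads(resp["Payload"].read()))</pre><h4>Common Pitfalls and Solutions</h4><p><strong><em>StackSet Deployment Issues</em></strong></p><p>One common issue is StackSet instance creation failing due to empty account lists. This usually happens when the organization ID is incorrect or when there are no accounts in the specified organizational units. The solution includes logic to automatically detect the root OU and use it as the deployment target.</p><p><strong><em>Cross-Region Permission Issues</em></strong></p><p>Lambda functions in one region can appear to struggle to assume roles used from another region. Since IAM roles are global, this is usually not a regional problem at all: it is resolved by ensuring the role ARNs are correct and that the role was actually deployed into the target accounts.</p><p><strong><em>Config Rule Evaluation Problems</em></strong></p><p>Config rules might fail to evaluate resources if the Lambda function doesn’t have proper permissions or if the function code has bugs. 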
The solution includes comprehensive error handling and logging to identify and resolve these issues.</p><h3>Maintenance and Updates</h3><h4>Regular Monitoring</h4><p>The solution requires ongoing monitoring:</p><ul><li>Check Config rule compliance regularly</li><li>Review Lambda function logs for errors</li><li>Monitor StackSet operation status</li><li>Verify tag inheritance is working correctly</li></ul><h4>Updates and Changes</h4><ul><li>Use CloudFormation change sets for template updates</li><li>Test changes in a development environment first</li><li>Update Lambda function code through the CloudFormation template</li><li>Modify parameters through the CloudFormation console</li></ul><h3>Results and Benefits</h3><p>Implementing this automated tagging strategy provides several key benefits:</p><ul><li><strong>Consistency</strong>: All resources across the organization have consistent tagging</li><li><strong>Compliance</strong>: Automated monitoring ensures ongoing compliance with tagging policies</li><li><strong>Efficiency</strong>: Reduces manual overhead and human error</li><li><strong>Cost Management</strong>: Better resource tracking and cost allocation</li><li><strong>Governance</strong>: Improved visibility into resource usage and ownership</li></ul><h3>Conclusion</h3><p>Automated resource tagging across AWS Organizations doesn’t have to be complex. With the right combination of CloudFormation, Lambda functions, and AWS Config, you can create a robust solution that handles both existing and new resources automatically.</p><p>The key to success is understanding your organization’s specific requirements and adapting the solution accordingly. Start with the core functionality and gradually add more sophisticated features like OU tag inheritance and cross-region deployment.</p><p>Remember that this is an ongoing process. Regular monitoring, testing, and updates are essential to maintaining an effective automated tagging strategy. The investment in setting up this system pays dividends in improved governance, compliance, and operational efficiency.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/700/0*sWLwv4ZHJTK2ddbe.png" /></figure><h4>👋 If you find this helpful, please click the clap 👏 button below a few times to show your support for the author 👇</h4><h4>🚀<a href="http://from.faun.to/r/8zxxd">Join FAUN Developer Community &amp; Get Similar Stories in your Inbox Each Week</a></h4><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=247e70d384ae" width="1" height="1" alt=""><hr><p><a href="https://faun.pub/implementing-automated-resource-tagging-across-aws-organizations-247e70d384ae">Implementing Automated Resource Tagging Across AWS Organizations</a> was originally published in <a href="https://faun.pub">FAUN.dev() 🐾</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to decrease the size of AWS EBS in Red Hat Enterprise Linux with ext4 & xfs file System]]></title>
            <link>https://medium.com/@jomaajob/how-to-decrease-the-size-of-aws-ebs-in-red-hat-enterprise-linux-with-ext4-xfs-file-system-305c0f7e73dc?source=rss-6a59297ae215------2</link>
            <guid isPermaLink="false">https://medium.com/p/305c0f7e73dc</guid>
            <category><![CDATA[system]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[linux]]></category>
            <category><![CDATA[ebs-volume]]></category>
            <dc:creator><![CDATA[Mohammad jomaa]]></dc:creator>
            <pubDate>Sat, 03 May 2025 09:35:45 GMT</pubDate>
            <atom:updated>2025-05-25T16:51:47.001Z</atom:updated>
            <content:encoded><![CDATA[<h4>Introduction</h4><p>Amazon Elastic Block Store (EBS) provides block-level persistent storage volumes for EC2 instances. By shrinking an EBS volume down to the minimum storage your application actually needs, you can save a significant amount on storage costs. Here’s a step-by-step guide:</p><ol><li>Take a new AMI snapshot of your existing EC2 instance.</li><li>Create a new EC2 instance from that AMI snapshot.</li><li>Create a new EBS volume of your desired size, making sure it is larger than the space actually used in the original EBS volume.</li><li>Attach the newly created EBS volume to the EC2 instance you have just launched.</li><li>Log in to the new EC2 instance and prepare the new EBS volume with a suitable file system.</li><li>Copy all the data from the / directory to the new EBS volume.</li><li>Install and configure GRUB2 on the new EBS volume.</li><li>Shut down the new EC2 instance.</li><li>Detach both EBS volumes (new and old) from the new EC2 instance.</li><li>Attach the new EBS volume as /dev/sda1 so it becomes the bootable root volume.</li><li>Boot your EC2 instance with the resized EBS volume and have a lot of fun! ^_^</li></ol><h4>Solution implementation</h4><ol><li>Take a new AMI snapshot of your existing EC2 instance.</li><li>Create a new EC2 instance from that AMI snapshot.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MX3JQSSEBPYPg35U9F3M1A.png" /><figcaption>New EC2</figcaption></figure><p>3. Create a new EBS volume of your desired size, making sure it is larger than the space actually used in the original EBS volume.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*f5rCtlKHMRGzRM5BHYJPlw.png" /><figcaption>EBSs</figcaption></figure><p>4. Attach the newly created EBS volume to the EC2 instance you have just launched.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0xMK0Wc-_CRK_3w-o5Qi0w.png" /><figcaption>Attach the new EBS</figcaption></figure><p>5. Log in to the new EC2 instance and prepare the new EBS volume with a suitable file system.</p><pre>#Create a mount point at /mnt/new-volume001<br>sudo mkdir /mnt/new-volume001<br>#List all block devices<br>lsblk</pre><pre>#Create a partition on the new volume<br>sudo fdisk /dev/nvme1n1<br>#for XFS<br>sudo mkfs -t xfs /dev/nvme1n1p1</pre><pre>#for Ext4<br>#sudo mkfs -t ext4 /dev/nvme1n1p1</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*14COlzmHDJZDGjy3BPn5kQ.png" /><figcaption>Prepare the new EBS</figcaption></figure><p>6. Copy all the data from the / directory to the new EBS volume.</p><pre>sudo mount /dev/nvme1n1p1 /mnt/new-volume001</pre><pre>sudo rsync -axv / /mnt/new-volume001/</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rhAVGLcvKRaxG9vwNAH9vQ.png" /><figcaption>After copying the / dir to the new EBS</figcaption></figure><p>7. Install and configure GRUB2 on the new EBS volume.</p><pre>grub2-install --root-directory=/mnt/new-volume001/ --force /dev/nvme1n1</pre><pre># you might face an error like this:<br># grub2-install: error: /usr/lib/grub/x86_64-efi/modinfo.sh doesn&#39;t exist. 
Please specify --target or --directory.<br># you can add --target to your command:<br>grub2-install --root-directory=/mnt/new-volume001/ --force /dev/nvme1n1 --target i386-pc</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mcQsbt7TzhToa4cw1-kxqw.png" /></figure><p>The new root file system must carry the same UUID that /etc/fstab and the GRUB configuration reference; otherwise the instance will not boot from it. Find the UUID of the old root volume and stamp it onto the new one:</p><pre>lsblk</pre><pre># get the device name of the OS partition on the old EBS volume<br>blkid /dev/&lt;devname&gt;</pre><pre>cat /etc/fstab</pre><pre># Copy the UUID of the old EBS volume</pre><pre>UUID=&lt;UUID&gt;<br>umount /mnt/new-volume001<br>#XFS<br>xfs_admin -U &lt;UUID&gt; /dev/&lt;devname&gt;</pre><pre>#ext4<br># e2fsck -f /dev/nvme1n1p1<br># sudo tune2fs -U &lt;UUID&gt; /dev/nvme1n1p1<br># blkid<br># e2label /dev/nvme1n1p1 root</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3pK-KCV83zlzVYrZhT_BQA.png" /></figure><p>8. Shut down the new EC2 instance.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zUOr5VbufMGs95Vb52r6gg.png" /></figure><p>9. Detach both EBS volumes (new and old) from the new EC2 instance.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-PLCTwgfC1qe9VGpdJqXKg.png" /></figure><p>10. Attach the new EBS volume as /dev/sda1 so it becomes the bootable root volume.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wvvDzNzCmUuHwDBKkfi_xw.png" /></figure><p>11. Boot your EC2 instance with the resized EBS volume and have a lot of fun! ^_^</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/700/0*30jwy7yKgPsGeCVf.png" /></figure><h4>👋 If you find this helpful, please click the clap 👏 button below a few times to show your support for the author 👇</h4><h4>🚀<a href="http://from.faun.to/r/8zxxd">Join FAUN Developer Community &amp; Get Similar Stories in your Inbox Each Week</a></h4><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=305c0f7e73dc" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Convert .NET Monolithic app to Microservices using AWS Microservice Extractor [Part 2]]]></title>
            <link>https://medium.com/@jomaajob/convert-net-monolithic-app-to-microservices-using-aws-microservice-extractor-part-2-557700daac98?source=rss-6a59297ae215------2</link>
            <guid isPermaLink="false">https://medium.com/p/557700daac98</guid>
            <category><![CDATA[aws-eks]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[microservices]]></category>
            <category><![CDATA[refactoring]]></category>
            <dc:creator><![CDATA[Mohammad jomaa]]></dc:creator>
            <pubDate>Sat, 26 Apr 2025 05:11:27 GMT</pubDate>
            <atom:updated>2025-05-25T17:04:47.935Z</atom:updated>
            <content:encoded><![CDATA[<h3>Convert .NET Monolithic app to Microservices using AWS Microservice Extractor [Part 2]</h3><blockquote><strong>Part 1</strong>: <a href="https://medium.com/faun/convert-net-monolithic-app-to-microservices-using-aws-microservice-extractor-part-1-b5fe48b10ae4">Setup Microservice Extractor, ASP.NET sample app review, ASP.NET application Onboarding</a></blockquote><blockquote><strong>Part 2</strong>: <a href="https://medium.com/@jomaajob/convert-net-monolithic-app-to-microservices-using-aws-microservice-extractor-part-2-3472a305dc5b"><em>Launch visualization, AWS Microservice Extractor for .NET AI-Powered Recommendations, Microservice extraction, Application refactoring</em></a></blockquote><blockquote><strong>Part 3</strong>: Testing, Deploy the new micro-services app on EKS &amp; ECS (<strong>in progress</strong>)</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ssILV4NekEz_1Ikl0yaINg.jpeg" /></figure><h3>Agenda</h3><blockquote>1. Introduction</blockquote><blockquote>2. Launch visualization</blockquote><blockquote>3. AWS Microservice Extractor for .NET AI-Powered Recommendations</blockquote><blockquote>4. Microservice extraction</blockquote><blockquote>5. Application refactoring</blockquote><blockquote>6. Conclusion</blockquote><h3>1. Introduction</h3><p>We already discussed the following points in Part 1:</p><ol><li><em>Setup Microservice Extractor</em></li><li><em>ASP.NET sample app review</em></li><li><em>ASP.NET application Onboarding</em></li></ol><p>In this <strong>Part 2</strong>, we cover ASP.NET class visualization, AI-powered recommendations, microservice extraction, and application refactoring.</p><h3>2. Launch visualization</h3><p>Once the onboarding status changes to <strong>Success</strong>, you can open the visualization either by clicking <strong>View dependency graph</strong> in the top green banner or by selecting Launch Visualization from the Applications list.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*DxKKEXNAQJs89tkd.png" /></figure><p>Microservice Extractor isolates and identifies the logically grouped components to be extracted as independent services, which are visible in the Visualization tab (node dependencies). A link between nodes indicates a connection between them.<br>By default, nodes in the visualization are represented at the project level. Filters allow you to view nodes at the project or namespace level. You can also double-click specific nodes to navigate namespace-level or class-level aggregations.</p><h3>3. AWS Microservice Extractor for .NET AI-Powered Recommendations</h3><h3>What are AI-powered recommendations?</h3><blockquote>The AI-powered recommendations system relies on an ML model that inspects the source code of your project. Once Microservice Extractor finishes its analysis, it organizes your classes into prospective candidates for microservice creation.</blockquote><p>This feature is especially useful for customers that do not have the required skills to update their applications. 
This is typically the situation for many companies that have applications with a long ‘shelf’ life in the market, where the original developers are no longer accessible, or where the applications were developed by third parties, which makes upgrading them challenging.</p><h3>Choosing the right recommendation option</h3><p>Microservice Extractor provides three extraction options: manual classification, heuristic analysis, and AI-powered recommendations. These are three ways of grouping your classes into candidate microservices, so you can pick the approach that matches how well you know the application. Crucially, this first choice of extraction does not restrict you: you can select a different approach in the user interface later as the needs of the new microservices become clearer.</p><p>If you have a well-developed domain understanding of the application planned for refactoring, manual classification suits best for building the microservice that will serve it. It requires a deep understanding of the application’s class structure and of how the classes are related to each other.</p><p>If you are less familiar with the application but can still recognize logical starting points in the source code, heuristic analysis is a good fit. In this kind of analysis, candidate starting points are grouped by class-name conventions. For instance, in an MVC application, a controller class may act as the starting point from which to pull out an order-related microservice.</p><p>On the other hand, if you have limited or no knowledge of the application being modernized, the AI-powered recommendations engine is invaluable. These recommendations go beyond heuristic analysis because they clearly distinguish the entry and exit points of the services. Microservice Extractor scans every source file in the application with its AI recommendation algorithms and produces recommendations that you can use directly as the list of candidates.</p><p>To use AI-powered recommendations, click <strong><em>Generate automated groups</em></strong> as below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*X_4uLfql8gAn5H1y.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*yPbjb4jcwYJ2Gzfy.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*11vDvb-4HZp44iOt.png" /></figure><h3>4. Microservice extraction</h3><p>In this example, you will extract Inventory as a standalone service.</p><ul><li>Right-click the Inventory node and select <strong>Add node to group.</strong></li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Mr2ISm7zbcyhW3Cp.png" /></figure><ul><li>Select <strong>create new</strong> group. Set the Group name to <strong>InventoryGroup</strong>, 
then click Add.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*9pIzfJQWySaxp6j4.png" /></figure><ul><li>Right-click the Inventory class, then select <strong>Go to group</strong></li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/719/0*-FDBSyuUlah5bfJ6.png" /></figure><ul><li>Click <strong>View group details</strong>.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*UFsDjDyrvD-O4hls.png" /></figure><blockquote><em>On the right section, you can find the </em><strong><em>Group details</em></strong></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*eGrA9sbjoH7PwnIZ.png" /></figure><blockquote><em>Note that the Nodes list refers to the InventoryGroup class. This class will be extracted as a standalone service and exposed as a </em><strong><em>REST API</em></strong><em> after extraction. Check the list of Dependencies. All dependencies in this list will be included or referenced in the extracted standalone service.</em></blockquote><ul><li>Choose <strong>Extract group</strong> from the <strong>Extract and Port</strong> menu to begin the extraction process.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/499/0*8EZ5qw8_DhpQ1Ou0.png" /></figure><ul><li>Set a name for your standalone service, “<strong>InventoryService1</strong>”</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*MKU0M1lN6P9IljF4.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*u0CWsqozb495XS2D.png" /></figure><ul><li>After completing the extraction process, you can check the extraction path as below:</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*EwYRvlokC8H8ExTv.png" /></figure><p>Extraction path</p><ul><li>Once extraction has succeeded, you can open the <strong>Output path </strong>of the <strong>Modified application code &amp; Extracted service</strong></li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/900/0*grmi64aL0eUrzqMe.png" /></figure><ul><li>Open the <strong>Extracted service code </strong>to identify the changes made</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*pEJQ2REEuv9Nstsq.png" /></figure><p>Extracted service codes</p><ul><li>You’ll discover that it created an <strong>ApiController</strong> called <strong>InventoryController</strong>. This controller is closely linked to the methods of the <strong>Inventory</strong> class, exposing them as REST APIs.</li><li>It also copied all the Entity Framework data model types and the DBContext referenced by the extracted Inventory service under the Models folder.</li><li>In addition, it relocated the Inventory class to the Services folder.</li></ul><blockquote><em>Please note that although AWS Microservice Extractor for .NET makes every effort to ensure that the created service compiles, there is no guarantee. 
You may still need to fix references or update NuGet packages in order to successfully compile the Web API project.</em></blockquote><p>Change the port of the <strong>Extracted service </strong>to <a href="http://localhost:8081/">http://localhost:8081/</a> as below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*vXHBOsxViF-vL0Pk.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*F7llaIrItogm8qin.png" /></figure><ul><li>After you finish this step, press F5 to test the <strong>Extracted service</strong> (a quick smoke test follows the screenshot below)</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*u6s1W6EIYGRcn9xx.png" /></figure>
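<p>As an optional sanity check from outside the browser, a few lines of Python can hit the new service. The route below is hypothetical; check the actual route exposed by the generated <strong>InventoryController</strong> in your project:</p><pre># hypothetical smoke test for the extracted Inventory service<br>import requests<br><br>resp = requests.get(<br>    "http://localhost:8081/api/inventory/getbestsellers",  # adjust to your real route<br>    params={"count": 6},<br>    timeout=10,<br>)<br>resp.raise_for_status()<br>for product in resp.json():<br>    print(product)</pre><ul><li>Open the <strong>Modified application code </strong>to identify the changes made to the monolith app</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YIGaCwL_m4H8-7_yS1inFQ.png" /></figure><blockquote><em>Please take a look at the </em><strong><em>HomeController.cs</em></strong><em>, </em><strong><em>ShoppingCartController.cs</em></strong><em> and </em><strong><em>StoreController.cs</em></strong><em> files in the Controllers folder. The local Inventory class calls have been modified to make remote API calls using the </em><strong><em>EndpointAdapter</em></strong><em>.</em></blockquote><ul><li>The AWS Microservice Extractor for .NET efficiently converts local class calls to remote API calls, but there may be times when manual adjustments or refactoring are necessary.</li><li>Take a look at the <strong>EndpointAdapter</strong> folder, which was generated during the extraction process, where you’ll find the <strong>InventoryEndpointFactory</strong> class. This handy class allows for a seamless switch between making remote REST API calls or local Inventory class calls, depending on the setting of the <strong>RemoteRoutingToMicroservice</strong> flag in the local <strong>web.config</strong> file.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*vLkNEAae8MiHAVFd.png" /></figure><p>Modified application Code / web.config</p><p>In line 11, change the URL to point to the new extracted service [ <a href="http://localhost:8081/"><strong>http://localhost:8081</strong></a> ] as below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*wm9-4TDHser6N3f4.png" /></figure><p>Modified application Code / web.config</p><h3>5. Application refactoring</h3><p>In the preceding sections, you were introduced to the capabilities of AWS Microservice Extractor for .NET and how it can aid in extracting microservices from your existing .NET monolithic applications. Although the subsequent section of this tutorial is not mandatory, it offers valuable insights, as most of the tasks involved do not require the use of AWS Microservice Extractor. It is highly recommended to follow these steps if your objective is to successfully refactor the <strong>extracted service </strong>and integrate it with the remaining monolith through REST APIs.</p><h3>Refactor the extracted service</h3><ul><li>Go to <strong>Tools</strong> &gt;&gt; <strong>NuGet Package Manager</strong> &gt;&gt; <strong>Package Manager Console</strong></li></ul><blockquote><em>To update the Microsoft.AspNet.WebApi.Client NuGet package, enter the following command in the Package Manager Console. 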
This package will provide support for formatting and content negotiation for </em><strong><em>System.Net.Http</em></strong><em>.</em></blockquote><pre>Update-Package -reinstall Microsoft.AspNet.WebApi.Client</pre><blockquote><em>To optimize communication efficiency, it’s beneficial to transform the Entity Framework data entities from the Inventory class into simplified “plain old CLR objects” (POCOs), also referred to as Data Transfer Objects (DTOs), before transmitting them over the network.</em></blockquote><ul><li>Create a new class by right-clicking on the <strong>GadgetsOnline</strong> project in the Visual Studio solution explorer and name it <strong>DTO.cs (copy from step 2–1a from this </strong><a href="https://catalog.us-east-1.prod.workshops.aws/workshops/c8d702fa-5eb6-4b3e-98a6-539b1785cec0/en-US/8-refactor"><strong>link</strong></a><strong>)</strong></li><li>Create a new class by right-clicking on the <strong>GadgetsOnline</strong> project in the Visual Studio solution explorer and name it <strong>DTOHelper.cs (copy from step 2–1b from this </strong><a href="https://catalog.us-east-1.prod.workshops.aws/workshops/c8d702fa-5eb6-4b3e-98a6-539b1785cec0/en-US/8-refactor"><strong>link</strong></a><strong>)</strong></li><li>Navigate to “<strong>Models/GadgetsOnlineEntities.cs</strong>” and disable the code highlighted in the following section. This particular code is responsible for <strong>seeding</strong> the data and is unnecessary for the extracted service.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*0NpRUFQBvV7f1HUz.png" /></figure><ul><li>Open <strong>Controllers/InventoryController.cs</strong> and make the code changes below.</li></ul><ol><li>In line 39</li></ol><pre>//Remove<br>                //return Ok(myInstance.GetBestSellers(count));</pre><pre>//Add<br>                return Ok(DTOHelper.GetDTOProductList(myInstance.GetBestSellers(count)));</pre><p>2. In line 99</p><pre>//Remove<br>      // return Ok(myInstance.GetAllProductsInCategory(category));</pre><pre>//Add<br>      return Ok(DTOHelper.GetDTOProductList(myInstance.GetAllProductsInCategory(category)));</pre><p>3. In line 130</p><pre>//Remove<br>          //return Ok(myInstance.GetProductById(id));</pre><pre>//Add<br>          return Ok(DTOHelper.GetDTOProduct(myInstance.GetProductById(id)));</pre><ul><li>To initiate the service in debug mode, simply press F5. Once activated, a browser window will automatically launch as below:</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*DLsw20PenEdOypWE.png" /></figure><p>Launch the Extracted service</p><h3>Refactor the modified application code</h3><ul><li>Go to <strong>Tools</strong> &gt;&gt; <strong>NuGet Package Manager</strong> &gt;&gt; <strong>Package Manager Console</strong></li></ul><blockquote><em>To update the Microsoft.AspNet.WebApi.Client NuGet package, enter the following command in the Package Manager Console. 
This package will provide support for formatting and content negotiation for </em><strong><em>System.Net.Http</em></strong><em>.</em></blockquote><pre>Update-Package -reinstall Microsoft.AspNet.WebApi.Client</pre><ul><li>Open <strong>Controllers/ShoppingCartController.cs</strong> and update line 15 as indicated below</li></ul><pre>//Remove<br>    //Inventory inventory;</pre><pre>//Add<br>    IInventoryEndpoint inventory;</pre><ul><li>Open <strong>Controllers/StoreController.cs</strong> and update line 14 as indicated below.</li></ul><pre>//Remove<br>   //Inventory inventory;</pre><pre>//Add<br>   IInventoryEndpoint inventory;<br>   // GET: Store</pre><p>Simply hit the <strong>F5</strong> button to activate the application’s debug mode. Keep in mind that it may take a few minutes for the web app to fully load, and you may see a message saying “This site can’t be reached” in the meantime. If that occurs, just give it some time to start up completely. Once the application is up and running, you’ll be directed to the same home page you started with. However, take note that the categories displayed on the left side and the list of top-selling products at the bottom are now being fetched from the <strong>Inventory Web API through REST API requests.</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*SQ8z9GI4k7jkBcfw.png" /></figure><p>The ASP.NET Application after refactoring</p><h3>6. Conclusion</h3><p>To sum it up, implementing the AWS Microservices Extractor for .NET has split the monolithic application into two separate services that seamlessly communicate through REST APIs. This structural change not only amplifies scalability and adaptability, but also maintains a consistent end-user experience. Adopting microservices has been a game-changing move, enhancing efficiency and flexibility in the development and operation of the application.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=557700daac98" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Convert .NET Monolithic app to Microservices using AWS Microservice Extractor [Part 1]]]></title>
            <link>https://faun.pub/convert-net-monolithic-app-to-microservices-using-aws-microservice-extractor-part-1-52e36d2e5212?source=rss-6a59297ae215------2</link>
            <guid isPermaLink="false">https://medium.com/p/52e36d2e5212</guid>
            <category><![CDATA[amazon-eks]]></category>
            <category><![CDATA[refactoring]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[microservices]]></category>
            <dc:creator><![CDATA[Mohammad jomaa]]></dc:creator>
            <pubDate>Wed, 02 Apr 2025 08:06:11 GMT</pubDate>
            <atom:updated>2025-05-31T07:35:14.528Z</atom:updated>
            <content:encoded><![CDATA[<h3>Convert .NET Monolithic app to Microservices using AWS Microservice Extractor [Part 1]</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ssILV4NekEz_1Ikl0yaINg.jpeg" /></figure><h3>Abstract</h3><p>Running your app on Amazon EKS or ECS brings a lot of benefits and built-in features. In this series, we will convert an ASP.NET application into a microservices architecture with AWS Microservice Extractor for .NET, then run the services on EKS &amp; ECS.</p><p>This solution, built specifically for .NET apps, lets us move from a monolith toward a microservices architecture, gaining more scalability, better fault isolation, and cost efficiency.</p><p>Throughout the series, we will not only show how to extract the services but also fully test the code and functionality of the newly organized microservices.</p><blockquote><strong>Part 1</strong>: <a href="https://medium.com/faun/convert-net-monolithic-app-to-microservices-using-aws-microservice-extractor-part-1-b5fe48b10ae4">Setup Microservice Extractor, ASP.NET sample app review, ASP.NET application Onboarding</a></blockquote><blockquote><strong>Part 2</strong>: <a href="https://medium.com/@jomaajob/convert-net-monolithic-app-to-microservices-using-aws-microservice-extractor-part-2-3472a305dc5b"><em>Launch visualization, AWS Microservice Extractor for .NET AI-Powered Recommendations, Microservice extraction, Application refactoring</em></a></blockquote><blockquote><strong>Part 3</strong>: Testing, Deploy the new micro-services app on EKS &amp; ECS <em>(</em><strong><em>in progress</em></strong><em>)</em></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*g87Rz2vENEcLTP_-.png" /><figcaption>AWS Microservice Extractor for .NET workflow</figcaption></figure><h3>Agenda</h3><blockquote>1. Introduction</blockquote><blockquote>2. Setup Microservice Extractor</blockquote><blockquote>3. ASP.NET sample app review</blockquote><blockquote>4. ASP.NET application Onboarding</blockquote><h3>1. Introduction</h3><p>AWS Microservice Extractor for .NET simplifies the process of decomposing applications into individual services. Use this tool to improve and modernize .NET applications: it analyzes source code and runtime metrics to produce a graphical representation of an application and its relationships. It gives a full view of applications and eases code refactoring as well as extracting different projects &amp; services from existing codebases. This allows teams to develop, build, and operate these projects in a self-service fashion, improving agility, usability, and scalability.</p><h3>2. Setup Microservice Extractor</h3><blockquote>Check the prerequisites to use AWS Microservice Extractor for .NET from this link: <a href="https://docs.aws.amazon.com/microservice-extractor/latest/userguide/microservice-extractor-prerequisites.html">https://docs.aws.amazon.com/microservice-extractor/latest/userguide/microservice-extractor-prerequisites.html</a></blockquote><p>AWS Microservice Extractor for .NET runs on the Microsoft Windows operating system. 
If you are running this tutorial on a development environment that you control (such as your laptop or a virtual machine), you will need to ensure that you have the following required prerequisites:</p><ol><li>Visual Studio 2022 (<a href="https://visualstudio.microsoft.com/vs/community/">Download VS 2022 Community Edition</a>) with the following features enabled.</li></ol><ul><li>In the <strong>workloads</strong> section, select “ASP.NET and Web Development”.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/417/0*UX1M1HuXGeiBtiZJ.png" /></figure><ul><li>In the <strong>Individual components</strong> section, ensure the following are selected:</li></ul><p>a. .NET 4.7.1 Targeting Pack</p><p>b. SQL Server Express 2019 LocalDB</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/314/0*tdqK2DeQrhtUWTDe.png" /></figure><p>2. Download and install Git from <a href="https://git-scm.com/downloads">here</a>.</p><p>3. In your AWS account, choose to work in the us-west-2 region and create a new S3 bucket as below:</p><blockquote>I tried another region, but the tool did not work correctly, so just choose the us-west-2 region</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sk73cXroFNGDDWQf3fMlPg.png" /><figcaption>AWS s3 bucket for tool</figcaption></figure><p>4. In your AWS account, create a user, then create a secret key &amp; access key (for example <strong><em>MicroserviceUser</em></strong>)</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JLLvY5ZtBGUqgiyn5sLGVA.png" /><figcaption>AWS IAM user to use by tool</figcaption></figure><blockquote>Create the IAM policy described at this <a href="https://docs.aws.amazon.com/microservice-extractor/latest/userguide/microservice-extractor-prerequisites.html">link</a> and attach it to your user (a scripted version of steps 3 and 4 is sketched below)</blockquote>
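<p>If you prefer to script these AWS prerequisites rather than clicking through the console, a rough boto3 equivalent of steps 3 and 4 might look like this. The bucket name and policy ARN are placeholders; the policy itself is the one from the prerequisites link above, created beforehand:</p><pre>import boto3<br><br># Step 3: create the artifacts bucket in us-west-2<br>s3 = boto3.client("s3", region_name="us-west-2")<br>s3.create_bucket(<br>    Bucket="my-extractor-artifacts-bucket",  # placeholder name<br>    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},<br>)<br><br># Step 4: create the IAM user, its access key, and attach the policy<br>iam = boto3.client("iam")<br>iam.create_user(UserName="MicroserviceUser")<br>key = iam.create_access_key(UserName="MicroserviceUser")["AccessKey"]<br>print("Access key:", key["AccessKeyId"])  # store the secret key safely<br>iam.attach_user_policy(<br>    UserName="MicroserviceUser",<br>    PolicyArn="arn:aws:iam::111122223333:policy/MicroserviceExtractorPolicy",  # your policy's ARN<br>)</pre><p>5. Download the tool from this link: <a href="https://aws.amazon.com/microservice-extractor/">https://aws.amazon.com/microservice-extractor/</a></p><p>6. Install the tool and open it as below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*n01270qUDndjSQ-stWAAhw.png" /><figcaption>AWS Microservice Extractor for .NET</figcaption></figure><p>7. Click on settings and configure the tool as below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*E32ZVMLzWMe0taNGTX6qng.png" /><figcaption>AWS Microservice Extractor for .NET configurations</figcaption></figure><h3>3. ASP.NET sample app review</h3><blockquote>You can clone the sample ASP.NET MVC app by running the command below:</blockquote><pre>git clone -b extractor-lab https://github.com/aws-samples/dotnet-modernization-gadgetsonline.git</pre><p>The repository can be found at <a href="https://github.com/aws-samples/dotnet-modernization-gadgetsonline">https://github.com/aws-samples/dotnet-modernization-gadgetsonline</a></p><blockquote>Open the downloaded sample app in Visual Studio by double-clicking the solution file, located at <strong>dotnet-modernization-gadgetsonline\GadgetsOnline\GadgetsOnline.sln</strong></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/300/0*yQihCgNvFDsnvZFJ.png" /><figcaption><strong>GadgetsOnline Code</strong></figcaption></figure><p>Then, check the structure of the project in Visual Studio. MVC Controllers are classes that receive requests from users and create an instance of a business service class. 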
Each service class creates an instance of the <strong>GadgetsOnlineEntities</strong> DbContext and passes one or more data model objects (<strong>Product, Order, Category, etc.</strong>) back to the calling controller.</p><p>This is a typical three-layer app, as commonly seen with the ASP.NET MVC framework: controllers call business services to complete user transactions, and those services in turn rely on the database access layer for further processing of information.</p><p><strong><em>In Visual Studio, press F5 to launch the sample app in debug mode as below</em></strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*SeB81559g0zK8J9lWsO3qQ.png" /></figure><blockquote>The port (8080) in your case will be different, and the home page may look different because I changed some of the HTML.</blockquote><p>Before going into the code details, you can check the high-level diagram as below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/291/1*_zDy3X4unvyzrSN0w6MbNg.png" /></figure><blockquote>For the more complicated relations between all classes, you can check the diagram as below:</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/621/1*M7tOT0gvF4TaXLExXNGUHw.png" /></figure><p>Review the <strong>Index()</strong> method in the <strong>Controllers\HomeController.cs</strong> file.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*V6lTr-640CQ7C5RkLSqLkg.png" /></figure><ul><li>It calls the <strong>GetBestSellers(6)</strong> method to obtain the top 6 selling products that will be shown on the home page.<br>The <strong>GetBestSellers</strong> method returns a collection of data model objects as a List&lt;Product&gt;.</li></ul><p>Examine the <strong>Services\Inventory.cs</strong> file in the project folders. There are several methods that use an instance of the <strong>GadgetsOnlineEntities</strong> DbContext to fetch data models and filter them based on one or more conditions.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RcUVByg1st_30WFcCLHVxg.png" /></figure><h3>4. ASP.NET application Onboarding</h3><ol><li>Choose <strong>Applications</strong> in the left pane.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/260/0*rfRXWzXBSGBmexsr.png" /></figure><p>2. In the right pane, click <strong>Onboard application</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*p2fx2yB0RSk7j1md.png" /></figure><p>3. On the Application details page:</p><ul><li>For <strong>Name</strong>, enter “Any name”.</li><li>In the <strong>Source Code</strong> part, click <strong>Choose file</strong> and find <strong>GadgetsOnline.sln</strong> in the file picker, then 
click <strong>Open</strong>.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/945/0*FdrMUD1-7brveBKL.png" /></figure><ul><li>For <strong>MSBuild path</strong>, select the version of MSBuild to be used to build your app.</li></ul><blockquote>Make sure that Visual Studio 2022 is selected (it will be the default option).</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/790/0*FzKXB_1C-sZ050Lp.png" /></figure><blockquote>The rest of the settings in this section can be left at their default values.</blockquote><ul><li><strong>Runtime profiling data — optional</strong> allows the inclusion of runtime usage measurements, which can appear as an overlay on the visualization chart. This is optional and can be omitted for this experiment.</li><li><strong>Analyze .NET Core Portability</strong>: this setting can be enabled after installing <a href="https://aws.amazon.com/porting-assistant-dotnet/">Porting Assistant for .NET</a> on the same machine as AWS Microservice Extractor. It connects the source code analyzer with Porting Assistant to analyze portability from .NET Framework to .NET Core.</li><li>Click <strong>Onboard application</strong> to begin the analysis.</li><li>The analysis will create a directed dependency graph indicating the relationships between the various classes of the application.</li><li>This process will take a few moments to finish. Once it has completed, the message “GadgetsOnline is ready for visualization” will be displayed as a banner. You can then proceed to the next step to visualize the dependency graph created by AWS Microservice Extractor for .NET.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/816/0*Lsve3fxnKBlLzwb8.png" /></figure><p>To view [Part 2], please check the link below:</p><p><a href="https://medium.com/@jomaajob/convert-net-monolithic-app-to-microservices-using-aws-microservice-extractor-part-2-3472a305dc5b">https://medium.com/@jomaajob/convert-net-monolithic-app-to-microservices-using-aws-microservice-extractor-part-2-3472a305dc5b</a></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/700/0*OJOpQ9GIqyFQfzXW.png" /></figure><p>👋 If you find this helpful, please click the clap 👏 button below a few times to show your support for the author 👇</p><p>🚀<a href="http://from.faun.to/r/8zxxd">Join FAUN Developer Community &amp; Get Similar Stories in your Inbox Each Week</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=52e36d2e5212" width="1" height="1" alt=""><hr><p><a href="https://faun.pub/convert-net-monolithic-app-to-microservices-using-aws-microservice-extractor-part-1-52e36d2e5212">Convert .NET Monolithic app to Microservices using AWS Microservice Extractor [Part 1]</a> was originally published in <a href="https://faun.pub">FAUN.dev() 🐾</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Protect AWS EKS from Regional Disasters using Kasten10]]></title>
            <link>https://medium.com/@jomaajob/how-to-protect-aws-eks-from-regional-disasters-using-kasten10-d320231bfb55?source=rss-6a59297ae215------2</link>
            <guid isPermaLink="false">https://medium.com/p/d320231bfb55</guid>
            <category><![CDATA[kubernetes]]></category>
            <category><![CDATA[aws-eks]]></category>
            <category><![CDATA[backup]]></category>
            <category><![CDATA[kasten]]></category>
            <category><![CDATA[aws]]></category>
            <dc:creator><![CDATA[Mohammad jomaa]]></dc:creator>
            <pubDate>Tue, 04 Feb 2025 01:26:31 GMT</pubDate>
            <atom:updated>2025-05-25T17:08:22.384Z</atom:updated>
            <content:encoded><![CDATA[<h4>Abstract:</h4><blockquote>In this article, we will discuss how you can use Kasten10 to protect EKS (object configurations and EBS persistent volumes) from regional disasters. We will create two EKS clusters (us-east-1, us-east-2), regularly back up the object configurations of the primary EKS cluster to S3, and take EBS snapshots of persistent volumes, so that the secondary EKS cluster can import and restore the backup.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*Eucifpr37pylCqhng914JA.png" /></figure><h4>Contents:</h4><ol><li>Introduction</li><li>Overview of AWS EKS</li><li>Overview of Kasten10</li><li>Environment preparation</li><li>Kasten10 Installation</li><li>Backup and restore Ghost application</li><li>Conclusion</li></ol><h4>Prerequisites:</h4><blockquote><em>We assume that the reader has basic knowledge of Kubernetes, Helm, and AWS.</em></blockquote><h4>1. Introduction:</h4><p>Kubernetes backup refers to the process of creating a copy of Kubernetes resources and data to protect against data loss and to ensure business continuity. Backing up Kubernetes resources, such as deployments, statefulsets, and services, is critical to ensure that your applications can be quickly restored in case of a catastrophic failure.</p><p>There are several Kubernetes backup tools available, including open-source solutions like Velero and commercial solutions from vendors like <strong>Kasten10 by Veeam</strong>. These tools provide an easy and efficient way to back up and restore Kubernetes resources and data.</p><p>Kubernetes backup can be performed at the cluster level, namespace level, or even at the resource level. This provides granular control over the backup process and enables you to create backups that meet specific business requirements.</p><p>When implementing Kubernetes backup, it is important to consider factors such as the frequency and scope of backups, recovery point objectives (RPOs), and recovery time objectives (RTOs). Testing backups regularly is also critical to ensure that they can be successfully restored in case of a failure.</p><p>To deploy Kasten10 on AWS, users can choose from a variety of deployment options, including self-managed Kubernetes clusters, Amazon Elastic Kubernetes Service (EKS), and Amazon EKS Anywhere. Regardless of the deployment option, Kasten10 provides a seamless experience for backup and recovery, with support for various storage providers, including Amazon S3, Amazon EBS, and Amazon EFS.</p><h4>2. Overview of AWS EKS</h4><p>AWS EKS (Elastic Kubernetes Service) is a fully managed service that allows you to easily run, scale, and manage Kubernetes clusters on AWS. Kubernetes is an open-source platform for container orchestration that is widely used for deploying and managing containerized applications.</p><p>With AWS EKS, you can quickly provision a Kubernetes cluster in a few simple steps, and the service takes care of the underlying infrastructure and management tasks, such as scaling, patching, and upgrading the cluster. This means you can focus on deploying and managing your applications, rather than worrying about the underlying infrastructure.</p><p>AWS EKS integrates with other AWS services, such as Amazon Elastic Container Registry (ECR) for storing and managing container images, and AWS Identity and Access Management (IAM) for managing access to your Kubernetes resources. 
Additionally, EKS provides a number of built-in integrations with other AWS services and third-party tools, such as AWS CloudFormation for infrastructure as code and Grafana for monitoring and observability.</p><h4>3. Overview of Kasten10</h4><p>The K10 data management platform, purpose-built for Kubernetes, provides enterprise operations teams an easy-to-use, scalable, and secure system for backup/restore, disaster recovery, and mobility of Kubernetes applications.</p><p>Thanks to K10’s application-centric approach and deep integrations with relational and NoSQL databases, K10 provides a native Kubernetes API and includes features such as full spectrum consistency, database integrations, automatic application discovery, multi-cloud mobility, and a powerful web-based user interface.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Sg4cH72XqimYVZjj.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xWYIiOt31NVYBEjrSMSQaQ.png" /><figcaption>Kasten user interface</figcaption></figure><p>Deploying this Quick Start for a new virtual private cloud (VPC) with default parameters builds the following K10 platform in the AWS Cloud. The diagram shows three Availability Zones, leveraging multiple AWS services.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mPzKfD_yYdspcIs0AS2a6g.png" /></figure><p>A more detailed K10 architecture diagram is shown below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xPY-gwruHxZE5DyKbxyMtw.png" /></figure><p>Kasten K10 provides application backup and mobility capabilities with the following tenets:</p><ul><li><strong>Create scalable and resilient backups</strong>. Kasten K10 integrates with Amazon S3 (and other target stores) so that your applications can be stored as a true backup in a fault domain that is separated from primary storage, with the cost efficiencies to afford long-term retention. Data is transferred efficiently by K10 using techniques like deduplication and change-block tracking.</li><li><strong>Seamless Migration:</strong> The ability to move an application across clusters is an extremely powerful feature that enables a variety of use cases including Disaster Recovery (DR), Test/Dev with realistic data sets, and performance testing in isolated environments. In particular, the K10 platform is built to support application migration and mobility in a variety of different and overlapping contexts:</li></ul><blockquote>1. Cross-Namespace</blockquote><blockquote>2. Cross-Cluster</blockquote><blockquote>3. Cross-Account: (e.g., AWS accounts, Google Cloud projects)</blockquote><blockquote>4. Cross-Region: (e.g., US-East-1 to US-East-2)</blockquote><blockquote>5. Cross-Cloud: (e.g., Azure to AWS)</blockquote><ul><li><strong>Treat the application as the operational unit.</strong> This balances the needs of operations and development teams in cloud-native environments. Kasten’s data management solution works with an entire application and not just the infrastructure or storage layers. This allows your operations team to scale by ensuring business policy compliance at the application level instead of having to think about the hundreds of components that make up a modern app. At the same time, working with the application gives your developers power and control when needed without slowing them down.</li></ul><h4>4. 
Environment preparation:</h4><p>Our target is to install Kasten10 on two EKS clusters as below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/proxy/1*zIzil7C_LkejV0Jab7Cnuw.png" /></figure><blockquote>All the instructions are for Linux; if you are using Mac or Windows, please check out the links provided with each step.</blockquote><ul><li>AWS CLI version 2. See<a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html"> Installing, updating, and uninstalling the AWS CLI version 2</a>.</li></ul><pre>curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br>unzip awscliv2.zip<br>sudo ./aws/install</pre><ul><li>Install eksctl on your desktop machine. See <a href="https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html#installing-eksctl">Installing or upgrading eksctl</a> for other operating systems.</li></ul><pre>curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp</pre><pre>sudo mv /tmp/eksctl /usr/local/bin</pre><pre>eksctl version</pre><ul><li>Helm. See <a href="https://helm.sh/docs/intro/install/">Installing Helm</a>.</li></ul><pre>curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3<br>chmod 700 get_helm.sh<br>./get_helm.sh</pre><ul><li>kubectl. See<a href="https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html"> Installing kubectl</a>.</li></ul><h4>4.2 Create two EKS Clusters as below:</h4><p>Two EKS clusters in the same AWS account. See <a href="https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html">Creating an EKS Cluster</a>. (This blog post was tested with EKS running Kubernetes version 1.24)</p><p>The two clusters will be referred to as the Primary {pk} and Recovery {rk} clusters.</p><p>Configure all the required environment variables as below:</p><pre>REGION=us-east-1<br>REGION2=us-east-2<br>BUCKET={Name of AWS S3}<br>PRIMARY_EKS=pk<br>RECOVERY_EKS=rk<br>PRIMARY_CONTEXT=pkc<br>RECOVERY_CONTEXT=rkc<br>ACCOUNT=$(aws sts get-caller-identity --query Account --output text)</pre><p>Create two EKS clusters in separate regions {us-east-1 &amp; us-east-2}</p><pre>eksctl create cluster --name=$PRIMARY_EKS --nodes=3 --node-type=t3.medium --region $REGION<br>eksctl create cluster --name=$RECOVERY_EKS --nodes=3 --node-type=t3.medium --region $REGION2</pre><pre>#Add two contexts to your .kube file so you can deal with them easily<br>#For easier management of kubectl config, we add our clusters to kubeconfig with an alias:</pre><pre>aws eks --region $REGION update-kubeconfig --name $PRIMARY_EKS --alias $PRIMARY_CONTEXT<br>aws eks --region $REGION2 update-kubeconfig --name $RECOVERY_EKS --alias $RECOVERY_CONTEXT</pre><pre>kubectl config use-context $PRIMARY_CONTEXT<br># In a production environment, be careful: use kubectl config get-contexts to check the current context</pre><h4>4.3 Configure OIDC</h4><p>Each cluster must be configured with an EKS IAM OIDC Provider. See <a href="https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html">Create an IAM OIDC provider for your cluster</a>. 
<h4>4.3 Configure OIDC</h4><p>Each cluster must be configured with an EKS IAM OIDC provider. See <a href="https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html">Create an IAM OIDC provider for your cluster</a>. This is a requirement for <a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html">IAM roles for service accounts</a>, which is used to grant the required AWS permissions to the Kasten K10 deployments.</p><pre>kubectl config use-context $PRIMARY_CONTEXT<br>eksctl utils associate-iam-oidc-provider --cluster $PRIMARY_EKS --approve --region $REGION</pre><pre>kubectl config use-context $RECOVERY_CONTEXT<br>eksctl utils associate-iam-oidc-provider --cluster $RECOVERY_EKS --approve --region $REGION2</pre><pre>kubectl config use-context $PRIMARY_CONTEXT</pre><pre>oidc_id_primary=$(aws eks describe-cluster --name $PRIMARY_EKS --region $REGION --query &quot;cluster.identity.oidc.issuer&quot; --output text | cut -d &#39;/&#39; -f 5)<br>oidc_id_recovery=$(aws eks describe-cluster --name $RECOVERY_EKS --region $REGION2 --query &quot;cluster.identity.oidc.issuer&quot; --output text | cut -d &#39;/&#39; -f 5)</pre><pre>echo oidc_id_recovery=$oidc_id_recovery<br>echo ACCOUNT=$ACCOUNT<br>echo oidc_id_primary=$oidc_id_primary</pre><figure><img alt="" src="https://cdn-images-1.medium.com/proxy/1*GhakrJRdriqbtbXm7AJlnQ.png" /></figure><h4>4.4 Set up persistent storage in Amazon EKS using the EBS CSI driver:</h4><ul><li>Download an example IAM policy with permissions that allow your worker nodes to create and modify Amazon EBS volumes:</li></ul><pre>curl -o example-iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/v0.9.0/docs/example-iam-policy.json</pre><ul><li>Create an IAM policy named AmazonEKS_EBS_CSI_Driver_Policy:</li></ul><pre>aws iam create-policy --policy-name AmazonEKS_EBS_CSI_Driver_Policy --policy-document file://example-iam-policy.json</pre><ul><li>View your cluster’s OIDC provider URL:</li></ul><pre>aws eks describe-cluster --name $PRIMARY_EKS --region $REGION --query &quot;cluster.identity.oidc.issuer&quot; --output text | cut -d &#39;/&#39; -f 5<br>aws eks describe-cluster --name $RECOVERY_EKS --region $REGION2 --query &quot;cluster.identity.oidc.issuer&quot; --output text | cut -d &#39;/&#39; -f 5</pre><ul><li>To deploy the Amazon EBS CSI driver, run the following commands in each cluster:</li></ul><pre># PRIMARY_Cluster<br>kubectl config use-context $PRIMARY_CONTEXT</pre><pre>kubectl apply -k &quot;github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master&quot;</pre><pre>eksctl create iamserviceaccount --name ebs-csi-controller-sa --namespace kube-system --cluster $PRIMARY_EKS --region $REGION --role-name &quot;AmazonEKS_EBS_CSI_DriverRole&quot; \<br>    --attach-policy-arn arn:aws:iam::$ACCOUNT:policy/AmazonEKS_EBS_CSI_Driver_Policy --approve</pre><pre>kubectl delete pods -n kube-system -l=app=ebs-csi-controller</pre><pre># Make sure that the sa is annotated with the role ARN<br>kubectl describe serviceAccount ebs-csi-controller-sa -n kube-system</pre><pre># RECOVERY_Cluster</pre><pre>kubectl config use-context $RECOVERY_CONTEXT</pre><pre>kubectl apply -k &quot;github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master&quot;</pre><pre>eksctl create iamserviceaccount --name ebs-csi-controller-sa --namespace kube-system --cluster $RECOVERY_EKS --region $REGION2 --role-name &quot;AmazonEKS_EBS_CSI_DriverRole_recovery&quot; \<br>--attach-policy-arn arn:aws:iam::$ACCOUNT:policy/AmazonEKS_EBS_CSI_Driver_Policy --approve</pre><pre>kubectl delete pods -n kube-system -l=app=ebs-csi-controller</pre><pre># Make sure that the sa is annotated with the role ARN<br>kubectl describe serviceAccount ebs-csi-controller-sa -n kube-system</pre><blockquote>eksctl annotates the ebs-csi-controller-sa Kubernetes service account with the Amazon Resource Name (ARN) of the IAM role that it also creates.</blockquote>
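<p>Optionally, verify that the CSI controller pods are running and that the IAM role annotation landed on the service account. A minimal sketch, using the label and names already introduced above:</p><pre># The controller pods should be Running, and the annotation should print the role ARN<br>kubectl get pods -n kube-system -l app=ebs-csi-controller<br>kubectl get sa ebs-csi-controller-sa -n kube-system -o jsonpath=&#39;{.metadata.annotations.eks\.amazonaws\.com/role-arn}&#39;</pre>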
<h4>4.5 Prepare S3 to save Kasten K10’s backups:</h4><pre>aws s3 mb s3://$BUCKET --region $REGION</pre><p>Although Amazon S3 stores your data across multiple geographically distant <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/#Availability_Zones">Availability Zones</a> by default, compliance requirements might dictate that you store data at even greater distances. <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-how-setup.html">Cross-Region Replication</a> allows you to replicate data between distant AWS Regions to satisfy these requirements.</p><h4>4.6 Prepare the IAM policy for the Kasten K10 deployment:</h4><p>Kasten K10 performs a number of API calls to resources in EC2 and S3 to take snapshots and save the backups to the S3 bucket. The following IAM policy will grant Kasten K10 the necessary permissions.</p><pre>cat &gt; Kasten10.json &lt;&lt;EOF<br>{<br>    &quot;Version&quot;: &quot;2012-10-17&quot;,<br>    &quot;Statement&quot;: [<br>        {<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Action&quot;: [<br>                &quot;ec2:CopySnapshot&quot;,<br>                &quot;ec2:CreateSnapshot&quot;,<br>                &quot;ec2:CreateTags&quot;,<br>                &quot;ec2:CreateVolume&quot;,<br>                &quot;ec2:DeleteTags&quot;,<br>                &quot;ec2:DeleteVolume&quot;,<br>                &quot;ec2:DescribeSnapshotAttribute&quot;,<br>                &quot;ec2:ModifySnapshotAttribute&quot;,<br>                &quot;ec2:DescribeAvailabilityZones&quot;,<br>                &quot;ec2:DescribeRegions&quot;,<br>                &quot;ec2:DescribeSnapshots&quot;,<br>                &quot;ec2:DescribeTags&quot;,<br>                &quot;ec2:DescribeVolumeAttribute&quot;,<br>                &quot;ec2:DescribeVolumesModifications&quot;,<br>                &quot;ec2:DescribeVolumeStatus&quot;,<br>                &quot;ec2:DescribeVolumes&quot;<br>            ],<br>            &quot;Resource&quot;: &quot;*&quot;<br>        },<br>        {<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Action&quot;: &quot;ec2:DeleteSnapshot&quot;,<br>            &quot;Resource&quot;: &quot;*&quot;,<br>            &quot;Condition&quot;: {<br>                &quot;StringLike&quot;: {<br>                    &quot;ec2:ResourceTag/Name&quot;: &quot;Kasten: Snapshot*&quot;<br>                }<br>            }<br>        },<br>        {<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Action&quot;: [<br>                &quot;s3:CreateBucket&quot;,<br>                &quot;s3:PutObject&quot;,<br>                &quot;s3:GetObject&quot;,<br>                &quot;s3:PutBucketPolicy&quot;,<br>                &quot;s3:ListBucket&quot;,<br>                &quot;s3:DeleteObject&quot;,<br>                &quot;s3:DeleteBucketPolicy&quot;,<br>                &quot;s3:GetBucketLocation&quot;,<br>                &quot;s3:GetBucketPolicy&quot;<br>            ],<br>            &quot;Resource&quot;: &quot;*&quot;<br>        }<br>    ]<br>}<br>EOF</pre><pre># Create the Kasten K10 IAM policy<br>aws iam create-policy --policy-name KastenPolicy --policy-document file://Kasten10.json</pre>
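<p>As an optional check (a minimal sketch using the $ACCOUNT variable defined earlier), confirm that the policy now exists and note its ARN, since the Kasten service account role attaches it in the next section:</p><pre>aws iam get-policy --policy-arn arn:aws:iam::$ACCOUNT:policy/KastenPolicy</pre>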
<h4>5. Kasten K10 Installation:</h4><p>We will install Kasten K10 on both EKS clusters; you can check out the diagram below for more details:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/772/1*zIzil7C_LkejV0Jab7Cnuw.png" /><figcaption>Solution architecture</figcaption></figure><pre>helm repo add kasten https://charts.kasten.io/</pre><pre>kubectl config use-context $PRIMARY_CONTEXT<br>helm install k10 kasten/k10 --namespace=kasten-io --create-namespace # --set serviceAccount.create=false</pre><pre>kubectl config use-context $RECOVERY_CONTEXT<br>helm install k10 kasten/k10 --namespace=kasten-io --create-namespace # --set serviceAccount.create=false</pre><p>To establish a connection to the dashboard, use the following kubectl commands:</p><pre># PRIMARY_Cluster<br>kubectl config use-context $PRIMARY_CONTEXT</pre><pre>kubectl --namespace kasten-io port-forward service/gateway 8080:8000</pre><pre># The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`</pre><pre># Open a new terminal and run these commands for the RECOVERY cluster</pre><pre>kubectl config use-context $RECOVERY_CONTEXT<br>kubectl --namespace kasten-io port-forward service/gateway 8090:8000</pre><pre># The Kasten dashboard will be available at: `http://127.0.0.1:8090/k10/#/`</pre><p>Annotate the k10-k10 Kubernetes service account with the Amazon Resource Name (ARN) of the IAM role that will also be created by eksctl:</p><pre># PRIMARY_EKS</pre><pre># Change the context<br>kubectl config use-context $PRIMARY_CONTEXT</pre><pre># Delete the sa<br>kubectl delete serviceAccount k10-k10 -n kasten-io</pre><pre># Create the iamserviceaccount linked to a new k10-k10 sa<br>eksctl create iamserviceaccount --name k10-k10 --namespace kasten-io --cluster $PRIMARY_EKS --region $REGION --role-name &quot;KastenRole&quot; --attach-policy-arn arn:aws:iam::$ACCOUNT:policy/KastenPolicy --approve</pre><pre>kubectl describe serviceAccount k10-k10 -n kasten-io</pre><pre># RECOVERY_EKS</pre><pre># Change the context<br>kubectl config use-context $RECOVERY_CONTEXT</pre><pre># Delete the sa<br>kubectl delete serviceAccount k10-k10 -n kasten-io</pre><pre># Create the iamserviceaccount linked to a new k10-k10 sa<br>eksctl create iamserviceaccount --name k10-k10 --namespace kasten-io --cluster $RECOVERY_EKS --region $REGION2 --role-name &quot;KastenRecoverRole&quot; --attach-policy-arn arn:aws:iam::$ACCOUNT:policy/KastenPolicy --approve</pre><pre># Make sure that the sa is annotated with the role ARN<br>kubectl describe serviceAccount k10-k10 -n kasten-io</pre><blockquote>eksctl annotates the k10-k10 Kubernetes service account with the Amazon Resource Name (ARN) of the IAM role that it also creates.</blockquote>
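<p>Optionally, in each context, confirm that the K10 pods in kasten-io settle into a Running state and that the annotation is in place. A minimal sketch, assuming the names used above:</p><pre>kubectl get pods -n kasten-io<br># The annotation should print the Kasten role ARN<br>kubectl get sa k10-k10 -n kasten-io -o jsonpath=&#39;{.metadata.annotations.eks\.amazonaws\.com/role-arn}&#39;</pre>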
<h4>6. Backup and restore the Ghost application:</h4><p>Ghost is an open-source publishing platform designed to create blogs, magazines, and news sites. It includes a simple markdown editor with preview, theming, and SEO built in to simplify editing.</p><p>We will use the <a href="https://github.com/bitnami/charts/tree/master/bitnami/ghost">Bitnami Helm chart</a> as it’s commonly deployed and well-tested. This chart depends on the <a href="https://github.com/bitnami/charts/tree/master/bitnami/mariadb">Bitnami MariaDB chart</a>, which will serve as the persistent data store for the blog application. The MariaDB data will be stored in an EBS volume that will be snapshotted by Kasten K10 as part of performing the backup.</p><p>Now we switch to the Primary cluster’s context and install Ghost (ignore the notification <em>ERROR: you did not provide an external host</em> that appears when you install Ghost; this will be resolved with the following commands):</p><pre>helm repo add bitnami https://charts.bitnami.com/bitnami</pre><pre>kubectl config use-context $PRIMARY_CONTEXT<br>helm install ghost bitnami/ghost \<br>    --create-namespace \<br>    --namespace ghost</pre><pre>export APP_HOST=$(kubectl get svc --namespace ghost ghost --template &quot;{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}&quot;)<br>export GHOST_PASSWORD=$(kubectl get secret --namespace &quot;ghost&quot; ghost -o jsonpath=&quot;{.data.ghost-password}&quot; | base64 -d)<br>export MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace &quot;ghost&quot; ghost-mysql -o jsonpath=&quot;{.data.mysql-root-password}&quot; | base64 -d)<br>export MYSQL_PASSWORD=$(kubectl get secret --namespace &quot;ghost&quot; ghost-mysql -o jsonpath=&quot;{.data.mysql-password}&quot; | base64 -d)</pre><pre>helm upgrade ghost bitnami/ghost \<br>  --namespace ghost \<br>  --set service.type=LoadBalancer,ghostHost=$APP_HOST,ghostPassword=$GHOST_PASSWORD,mysql.auth.rootPassword=$MYSQL_ROOT_PASSWORD,mysql.auth.password=$MYSQL_PASSWORD</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*FZTPRTgCgvUUeSMz.png" /></figure><p>We can check that the installation was successful by running this command:</p><pre>kubectl get pod -A</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*5VNtMpjHrRtloO1i.png" /></figure>
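<p>To make signing in easier, you can print the blog and admin URLs together with the generated admin password from the variables exported above. A hypothetical helper; the default admin user for the Bitnami chart is assumed to be user@example.com unless you overrode it:</p><pre># Print the URLs and credentials gathered by the export commands above<br>echo &quot;Blog URL:  http://$APP_HOST/&quot;<br>echo &quot;Admin URL: http://$APP_HOST/ghost&quot;<br>echo &quot;Admin password: $GHOST_PASSWORD&quot;</pre>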
<p>In the Ghost Admin console, you can create an example blog post that will be included in the backup and restore process by signing in (using the admin URL displayed above). As a result, the backup includes not only the application deployment configuration but also the posts in the blog database that are saved in the PV (EBS).</p><h4>6.1 Back up the Ghost application</h4><p>Open a new terminal and run these commands:</p><pre>kubectl config use-context $PRIMARY_CONTEXT</pre><pre>kubectl --namespace kasten-io port-forward service/gateway 8080:8000</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uSzxNe23r9LMPUC9yhcaWA.png" /></figure><p>Open <a href="http://127.0.0.1:8080/k10/#/"><strong><em>http://127.0.0.1:8080/k10/#/</em></strong></a> in your browser.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IJ0XRsY5B5XggU40sbvOWg.png" /></figure><p>Go to Settings and choose Locations; on this page you will configure the S3 bucket that was already created to store your EKS backup files.</p><pre>echo $BUCKET</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*H5kuKRoHSRkecPzBegw4ag.png" /></figure><p>Now go to Dashboard → Applications → Ghost app → Create a Policy</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*S6vTZUBsvL_uAGoitKLyGw.png" /></figure><p>and configure your policy to take a frequent snapshot as below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*aH2y-TfMwszA2UUCvOw-Gw.png" /></figure><p>Go to the Policies section, check out your new policy, and try running it once.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rBc_eSFNJ8UysF8Vd-ajMw.png" /></figure><p>In this policy, click on “<strong><em>Show more details</em></strong>” as below and save the token, because we need it to configure the restore policy on the recovery EKS cluster.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WCp9_pWIDUFaCaYSKyLPZg.png" /></figure><p>After the job finishes, go to the S3 bucket and the Snapshots section in the AWS console to follow the changes, as below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wvCpPVsINJweofanZYh1_A.png" /><figcaption>Kasten K10 Dashboard</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ggZtmAPSEAtMQjNxPLLOhQ.png" /><figcaption>Snapshots in AWS Console</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*5zRaTu-CbrweYJmx4bbf5A.png" /><figcaption>S3 Bucket in AWS Console</figcaption></figure><h4>6.2 Restore the Ghost application</h4><p>Open a new terminal and run these commands:</p><pre>kubectl config use-context $RECOVERY_CONTEXT</pre><pre>kubectl --namespace kasten-io port-forward service/gateway 8090:8000</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*npcmmXlavxYgVNI2lhOFoA.png" /></figure><p>Open <a href="http://127.0.0.1:8090/k10/#/"><strong><em>http://127.0.0.1:8090/k10/#/</em></strong></a> in your browser.</p><p>Go to Settings and choose Locations; on this page you will configure the S3 bucket that was already created, in order to restore your EKS backup files.</p><pre>echo $BUCKET</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*H5kuKRoHSRkecPzBegw4ag.png" /></figure><p>Now go to Dashboard → Applications → Ghost app → Create a Policy</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*pQctEIC-ZhhzcOPNMOHnuw.png" /></figure><p>Now go to Dashboard → Applications → Ghost app → Restore</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Y-wBOwXvj8vofmZGaaAJKw.png" /></figure>
src="https://cdn-images-1.medium.com/max/1024/1*Y-wBOwXvj8vofmZGaaAJKw.png" /></figure><p>choose one of PIT to restore your application and restore it</p><blockquote>Note: you can configure the policy to restore after import</blockquote><h4>Summary :</h4><p>In conclusion, Kasten10 is a powerful data management solution that simplifies backup, recovery, and mobility of Kubernetes applications. By deploying Kasten10 on AWS, users can take advantage of the scalability and flexibility of the cloud to protect their applications and data. With Kasten10’s comprehensive features, including backup scheduling, policy-based automation, and efficient data storage, users can ensure the availability and integrity of their Kubernetes workloads.</p><p>To deploy Kasten10 on AWS, users can choose from a variety of deployment options, including Amazon EKS, self-managed Kubernetes clusters, and managed Kubernetes services such as Amazon Elastic Kubernetes Service (EKS) and Amazon EKS Anywhere. Regardless of the deployment option, Kasten10 provides a seamless experience for backup and recovery, with support for various storage providers, including Amazon S3, Amazon EBS, and Amazon EFS.</p><p>Overall, implementing Kasten10 on AWS provides a robust solution for Kubernetes data management, with the flexibility and scalability of the cloud. Whether you’re managing a small-scale deployment or a large-scale enterprise cluster, Kasten10 and AWS have you covered.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/700/0*V3q4LSwfeS0simt6.png" /></figure><h4>👋 If you find this helpful, please click the clap 👏 button below a few times to show your support for the author 👇</h4><h4>🚀<a href="http://from.faun.to/r/8zxxd">Join FAUN Developer Community &amp; Get Similar Stories in your Inbox Each Week</a></h4><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d320231bfb55" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Backup & Protect AWS EKS using Velero from vmware-tanzu]]></title>
            <link>https://faun.pub/how-to-backup-protect-aws-eks-using-velero-from-vmware-tanzu-0871539eba4c?source=rss-6a59297ae215------2</link>
            <guid isPermaLink="false">https://medium.com/p/0871539eba4c</guid>
            <category><![CDATA[amazon-eks]]></category>
            <category><![CDATA[backup]]></category>
            <category><![CDATA[kubernetes]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[disaster-recovery]]></category>
            <dc:creator><![CDATA[Mohammad jomaa]]></dc:creator>
            <pubDate>Wed, 29 Jan 2025 05:20:23 GMT</pubDate>
            <atom:updated>2025-05-31T07:35:08.905Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/277/0*RN1G2O2i9egn3SWb.png" /></figure><h3>Abstract:</h3><blockquote>The purpose of this document is to back up AWS EKS (object configurations &amp; persistent data on EBS) in the same region. In a Cross-Region scenario, we encountered this <a href="https://github.com/vmware-tanzu/velero/issues/4799">issue</a>, and you can implement the Cross-Region solution using this <a href="https://github.com/jglick/velero-plugin-for-aws/pkgs/container/velero-plugin-for-aws">branch by (Jesse Glick)</a>.</blockquote><h3>Prerequisites:</h3><blockquote><em>We assume that the reader has basic knowledge of Kubernetes, Helm, and AWS.</em></blockquote><h3>Contents:</h3><blockquote>1. Introduction</blockquote><blockquote>2. Overview of Velero</blockquote><blockquote>3. Overview of AWS EKS</blockquote><blockquote>4. Implementing Velero on EKS</blockquote><blockquote>5. Cleaning up</blockquote><blockquote>6. Conclusion</blockquote><h3>1. Introduction:</h3><p>Kubernetes backup refers to the process of creating a copy of the Kubernetes resources and data to protect against data loss and to ensure business continuity. Backing up Kubernetes resources, such as deployments, statefulsets, and services, is critical to ensure that your applications can be quickly restored in case of a catastrophic failure.</p><p>There are several Kubernetes backup tools available, including open-source solutions like Velero and commercial solutions from vendors like VMware and Trilio. These tools provide an easy and efficient way to backup and restore Kubernetes resources and data.</p><p>Kubernetes backup can be performed at the cluster level, namespace level, or even at the resource level. This provides granular control over the backup process and enables you to create backups that meet specific business requirements.</p><p>When implementing Kubernetes backup, it is important to consider factors such as the frequency and scope of backups, recovery point objectives (RPOs), and recovery time objectives (RTOs). Testing backups regularly is also critical to ensure that they can be successfully restored in case of a failure.</p><h3>2. Overview of Velero:</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/300/0*G8jT9BhVuyv7fpU-.png" /></figure><p>Velero is an open-source tool that enables backup and disaster recovery of Kubernetes clusters and their persistent volumes. It can be used to back up your Kubernetes resources, including namespace, deployment, statefulset, cronjob, and others, as well as the persistent volumes associated with them.</p><p>Velero consists of two components:</p><ul><li>A <em>Velero server</em> pod that runs in your Amazon EKS cluster</li><li>A command-line client (<em>Velero CLI</em>) that runs locally</li></ul><h4>2.1 How Velero Backup works:</h4><p>When you run velero backup create test-backup:</p><ol><li>The Velero client makes a call to the Kubernetes API server to create a Backup object.</li><li>The BackupController notices the new Backup object and performs validation.</li><li>The BackupController begins the backup process. It collects the data to back up by querying the API server for resources.</li><li>The BackupController makes a call to the object storage service (for example, AWS S3) to upload the backup file.</li></ol><p>By default, velero backup create makes disk snapshots of any persistent volumes. You can adjust the snapshots by specifying additional flags. Run velero backup create --help to see available flags. Snapshots can be disabled with the option --snapshot-volumes=false.</p>
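<p>For example (a minimal sketch, assuming the Velero CLI from section 4.7.1 is installed and connected to your cluster), the default snapshot behavior can be scoped or disabled with flags:</p><pre># Back up everything, snapshotting persistent volumes (the default)<br>velero backup create test-backup<br># Back up only one namespace<br>velero backup create ghost-only --include-namespaces ghost<br># Skip volume snapshots entirely<br>velero backup create config-only --snapshot-volumes=false</pre>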
<figure><img alt="" src="https://cdn-images-1.medium.com/max/771/0*8b3l8xO6JKvRWOgQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ogHD95enYzuRp4YH.png" /></figure><h4>2.2 How Velero Restore works:</h4><ol><li>The Velero CLI makes a call to the Kubernetes API server to create a restore CRD that will restore from an existing backup.</li><li>The restore controller:</li></ol><p>2.1. Validates the restore CRD object.</p><p>2.2. Makes a call to Amazon S3 to retrieve the backup files.</p><p>2.3. Initiates the restore operation.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*9Z0qqHbG3DnrBBA5.png" /></figure><p>The restore operation allows you to restore all of the objects and persistent volumes from a previously created backup. You can also restore only a filtered subset of objects and persistent volumes.</p><p>By default, backup storage locations are created in read-write mode. However, during a restore, you can configure a backup storage location to be in read-only mode, which disables backup creation and deletion for the storage location. This is useful to ensure that no backups are inadvertently created or deleted during a restore scenario.</p><h3>3. Overview of AWS EKS</h3><p>AWS EKS (Elastic Kubernetes Service) is a fully managed service that allows you to easily run, scale, and manage Kubernetes clusters on AWS. Kubernetes is an open-source platform for container orchestration that is widely used for deploying and managing containerized applications.</p><p>With AWS EKS, you can quickly provision a Kubernetes cluster in a few simple steps, and the service takes care of the underlying infrastructure and management tasks, such as scaling, patching, and upgrading the cluster. This means you can focus on deploying and managing your applications, rather than worrying about the underlying infrastructure.</p><p>AWS EKS integrates with other AWS services, such as Amazon Elastic Container Registry (ECR) for storing and managing container images, and AWS Identity and Access Management (IAM) for managing access to your Kubernetes resources. Additionally, EKS provides a number of built-in integrations with other AWS services and third-party tools, such as AWS CloudFormation for infrastructure as code and Grafana for monitoring and observability.</p><h3>4. Implementing Velero on EKS</h3><p>When it comes to using Velero on Amazon Web Services (AWS) Elastic Kubernetes Service (EKS), there are a few steps that need to be taken. Here’s a general outline of the process:</p><ol><li>Install Velero: Velero can be installed on EKS using the Helm chart.</li><li>Configure Velero: After installing Velero, you’ll need to configure it to specify which resources you want to back up and where to store the backups. This can be done by creating a Velero custom resource definition (CRD).</li><li>Create a storage location: To store your backups, you’ll need to create a storage location. 
This can be done using an Amazon S3 bucket, which can be created using the AWS Management Console.</li><li>Backup your resources: Once Velero is configured, you can create a backup of your Kubernetes resources using the command velero backup create &lt;backup-name&gt;.</li><li>Restore your resources: If you need to restore your resources, you can do so using the command velero restore create &lt;restore-name&gt; --from-backup &lt;backup-name&gt;.</li></ol><p>These are just the basic steps for using Velero on AWS EKS. There are many additional options and features available with Velero that can be customized to fit your specific needs.</p><h4>4.1 Prerequisites</h4><p>All the instructions are for Linux; if you are using Mac or Windows, please check out the provided links with each step.</p><ul><li>AWS CLI version 2. See<a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html"> Installing, updating, and uninstalling the AWS CLI version 2</a>.</li></ul><pre>curl &quot;https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip&quot; -o &quot;awscliv2.zip&quot;<br>unzip awscliv2.zip<br>sudo ./aws/install</pre><ul><li>Install eksctl on your desktop machine: See <a href="https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html#installing-eksctl">Installing or upgrading eksctl</a> for other operating systems.</li></ul><pre>curl --silent --location &quot;https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz&quot; | tar xz -C /tmp</pre><pre>sudo mv /tmp/eksctl /usr/local/bin</pre><pre>eksctl version</pre><ul><li>Helm. See <a href="https://helm.sh/docs/intro/install/">Installing Helm</a>.</li></ul><pre>curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3<br>chmod 700 get_helm.sh<br>./get_helm.sh</pre><ul><li>kubectl. See<a href="https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html"> Installing kubectl</a>.</li></ul><h4>4.2 Create two EKS clusters as below:</h4><p>Two EKS clusters in the same AWS account. See <a href="https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html">Creating an EKS Cluster</a>. (This blog post was tested with EKS running Kubernetes version 1.24.)</p><p>The two clusters will be referred to as the Primary and Recovery clusters.</p><p>Configure all the required environment variables as below:</p><pre>BUCKET=&lt;BUCKETNAME&gt;<br>REGION=&lt;Your AWS region&gt;<br>PRIMARY_EKS=&lt;PRIMARY CLUSTERNAME&gt;<br>RECOVERY_EKS=&lt;RECOVERY CLUSTERNAME&gt;</pre><pre>eksctl create cluster --name=$PRIMARY_EKS --nodes=3 --node-type=t3.small --region $REGION<br>eksctl create cluster --name=$RECOVERY_EKS --nodes=3 --node-type=t3.small --region $REGION</pre><pre># Add two contexts to your .kube file so you can deal with them easily.<br># For easier management of kubectl config, we add our clusters to kubeconfig with an alias:</pre><pre>PRIMARY_CONTEXT=PRIMARY_velero<br>RECOVERY_CONTEXT=RECOVERY_velero<br>aws eks --region $REGION update-kubeconfig --name $PRIMARY_EKS --alias $PRIMARY_CONTEXT<br>aws eks --region $REGION update-kubeconfig --name $RECOVERY_EKS --alias $RECOVERY_CONTEXT</pre><pre>kubectl config use-context $PRIMARY_CONTEXT<br># In a production environment, be careful: run kubectl config get-contexts to check what the current context is</pre><ul><li>Each cluster must be configured with an EKS IAM OIDC provider. See <a href="https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html">Create an IAM OIDC provider for your cluster</a>. 
This is a requirement for <a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html">IAM roles for service accounts</a>, which is used to grant the required AWS permissions to the Velero deployments.</li></ul><pre>kubectl config use-context $PRIMARY_CONTEXT<br>eksctl utils associate-iam-oidc-provider --cluster $PRIMARY_EKS --approve</pre><pre>kubectl config use-context $RECOVERY_CONTEXT<br>eksctl utils associate-iam-oidc-provider --cluster $RECOVERY_EKS --approve</pre><pre>kubectl config use-context $PRIMARY_CONTEXT</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GhakrJRdriqbtbXm7AJlnQ.png" /></figure><h4>4.3 Set up persistent storage in Amazon EKS using the EBS CSI driver:</h4><ul><li>Download an example IAM policy with permissions that allow your worker nodes to create and modify Amazon EBS volumes:</li></ul><pre>curl -o example-iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/v0.9.0/docs/example-iam-policy.json</pre><ul><li>Create an IAM policy named AmazonEKS_EBS_CSI_Driver_Policy:</li></ul><pre>aws iam create-policy --policy-name AmazonEKS_EBS_CSI_Driver_Policy --policy-document file://example-iam-policy.json</pre><ul><li>View your cluster’s OIDC provider URL:</li></ul><pre>oidc_id_primary=$(aws eks describe-cluster --name $PRIMARY_EKS --query &quot;cluster.identity.oidc.issuer&quot; --output text | cut -d &#39;/&#39; -f 5)<br>oidc_id_recovery=$(aws eks describe-cluster --name $RECOVERY_EKS --query &quot;cluster.identity.oidc.issuer&quot; --output text | cut -d &#39;/&#39; -f 5)<br>ACCOUNT=$(aws sts get-caller-identity --query Account --output text)<br>echo $oidc_id_primary<br>echo $oidc_id_recovery<br>echo $ACCOUNT</pre><ul><li>Create the following IAM trust policy files:</li></ul><pre>cat &lt;&lt;EOF &gt; trust-policy-primary.json<br>{<br>  &quot;Version&quot;: &quot;2012-10-17&quot;,<br>  &quot;Statement&quot;: [<br>    {<br>      &quot;Effect&quot;: &quot;Allow&quot;,<br>      &quot;Principal&quot;: {<br>        &quot;Federated&quot;: &quot;arn:aws:iam::$ACCOUNT:oidc-provider/oidc.eks.$REGION.amazonaws.com/id/$oidc_id_primary&quot;<br>      },<br>      &quot;Action&quot;: &quot;sts:AssumeRoleWithWebIdentity&quot;,<br>      &quot;Condition&quot;: {<br>        &quot;StringEquals&quot;: {<br>          &quot;oidc.eks.$REGION.amazonaws.com/id/$oidc_id_primary:sub&quot;: &quot;system:serviceaccount:kube-system:ebs-csi-controller-sa&quot;<br>        }<br>      }<br>    }<br>  ]<br>}<br>EOF</pre><pre>cat &lt;&lt;EOF &gt; trust-policy-recovery.json<br>{<br>  &quot;Version&quot;: &quot;2012-10-17&quot;,<br>  &quot;Statement&quot;: [<br>    {<br>      &quot;Effect&quot;: &quot;Allow&quot;,<br>      &quot;Principal&quot;: {<br>        &quot;Federated&quot;: &quot;arn:aws:iam::$ACCOUNT:oidc-provider/oidc.eks.$REGION.amazonaws.com/id/$oidc_id_recovery&quot;<br>      },<br>      &quot;Action&quot;: &quot;sts:AssumeRoleWithWebIdentity&quot;,<br>      &quot;Condition&quot;: {<br>        &quot;StringEquals&quot;: {<br>          &quot;oidc.eks.$REGION.amazonaws.com/id/$oidc_id_recovery:sub&quot;: &quot;system:serviceaccount:kube-system:ebs-csi-controller-sa&quot;<br>        }<br>      }<br>    }<br>  ]<br>}<br>EOF</pre><ul><li>Create the IAM roles:</li></ul><pre>aws iam create-role \<br>  --role-name AmazonEKS_EBS_CSI_DriverRole \<br>  --assume-role-policy-document file://&quot;trust-policy-primary.json&quot;</pre><pre>aws iam create-role \<br>  --role-name AmazonEKS_EBS_CSI_DriverRole_Recovery \<br>  
--assume-role-policy-document file://&quot;trust-policy-recovery.json&quot;</pre><ul><li>Attach your new IAM policies to the roles:</li></ul><pre>aws iam attach-role-policy \<br>--policy-arn arn:aws:iam::$ACCOUNT:policy/AmazonEKS_EBS_CSI_Driver_Policy \<br>--role-name AmazonEKS_EBS_CSI_DriverRole</pre><pre>aws iam attach-role-policy \<br>--policy-arn arn:aws:iam::$ACCOUNT:policy/AmazonEKS_EBS_CSI_Driver_Policy \<br>--role-name AmazonEKS_EBS_CSI_DriverRole_Recovery</pre><ul><li>To deploy the Amazon EBS CSI driver, run the following commands in each cluster:</li></ul><pre>kubectl config use-context $PRIMARY_CONTEXT<br>kubectl apply -k &quot;github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master&quot;</pre><pre>kubectl config use-context $RECOVERY_CONTEXT<br>kubectl apply -k &quot;github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master&quot;</pre><ul><li>Annotate the ebs-csi-controller-sa Kubernetes service account with the Amazon Resource Name (ARN) of the IAM role that you created earlier:</li></ul><pre># PRIMARY_CONTEXT<br>kubectl config use-context $PRIMARY_CONTEXT</pre><pre>kubectl annotate serviceaccount ebs-csi-controller-sa \<br>  -n kube-system \<br>  eks.amazonaws.com/role-arn=arn:aws:iam::$ACCOUNT:role/AmazonEKS_EBS_CSI_DriverRole</pre><pre>kubectl delete pods -n kube-system -l=app=ebs-csi-controller</pre><pre># ---------------------------------------------------</pre><pre># RECOVERY_CONTEXT<br>kubectl config use-context $RECOVERY_CONTEXT</pre><pre>kubectl annotate serviceaccount ebs-csi-controller-sa \<br>  -n kube-system \<br>  eks.amazonaws.com/role-arn=arn:aws:iam::$ACCOUNT:role/AmazonEKS_EBS_CSI_DriverRole_Recovery</pre><pre>kubectl delete pods -n kube-system -l=app=ebs-csi-controller</pre><pre># Return back to PRIMARY_CONTEXT<br>kubectl config use-context $PRIMARY_CONTEXT</pre><p>In this step, make sure that you annotated the service account <strong><em>ebs-csi-controller-sa</em></strong> correctly (optional check):</p><pre>kubectl edit serviceaccount ebs-csi-controller-sa -n kube-system</pre><h4>4.4 Prepare S3 to save Velero’s backups:</h4><pre>aws s3 mb s3://$BUCKET --region $REGION</pre><p>Although Amazon S3 stores your data across multiple geographically distant <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/#Availability_Zones">Availability Zones</a> by default, compliance requirements might dictate that you store data at even greater distances. <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-how-setup.html">Cross-Region Replication</a> allows you to replicate data between distant AWS Regions to satisfy these requirements; see the sketch below.</p>
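<p>As a hedged illustration only (not part of the original walkthrough): Cross-Region Replication requires versioning on both buckets, a destination bucket, and a replication role. The bucket name <em>$BUCKET-replica</em> and the role <em>s3-crr-role</em> below are hypothetical placeholders; the role must already exist with S3 replication permissions.</p><pre># Versioning is a prerequisite for replication on both buckets<br>aws s3api put-bucket-versioning --bucket $BUCKET --versioning-configuration Status=Enabled<br>aws s3api put-bucket-versioning --bucket $BUCKET-replica --versioning-configuration Status=Enabled</pre><pre>cat &gt; replication.json &lt;&lt;EOF<br>{<br>  &quot;Role&quot;: &quot;arn:aws:iam::$ACCOUNT:role/s3-crr-role&quot;,<br>  &quot;Rules&quot;: [<br>    {<br>      &quot;ID&quot;: &quot;velero-backups-crr&quot;,<br>      &quot;Status&quot;: &quot;Enabled&quot;,<br>      &quot;Priority&quot;: 1,<br>      &quot;Filter&quot;: {},<br>      &quot;DeleteMarkerReplication&quot;: { &quot;Status&quot;: &quot;Disabled&quot; },<br>      &quot;Destination&quot;: { &quot;Bucket&quot;: &quot;arn:aws:s3:::$BUCKET-replica&quot; }<br>    }<br>  ]<br>}<br>EOF<br>aws s3api put-bucket-replication --bucket $BUCKET --replication-configuration file://replication.json</pre>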
<h4>4.5 Prepare the IAM policy for the Velero deployment:</h4><p>Velero performs a number of API calls to resources in EC2 and S3 to take snapshots and save the backup to the S3 bucket. The following IAM policy will grant Velero the necessary permissions.</p><pre>cat &gt; velero_policy.json &lt;&lt;EOF<br>{<br>    &quot;Version&quot;: &quot;2012-10-17&quot;,<br>    &quot;Statement&quot;: [<br>        {<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Action&quot;: [<br>                &quot;ec2:DescribeVolumes&quot;,<br>                &quot;ec2:DescribeSnapshots&quot;,<br>                &quot;ec2:CreateTags&quot;,<br>                &quot;ec2:CreateVolume&quot;,<br>                &quot;ec2:CreateSnapshot&quot;,<br>                &quot;ec2:DeleteSnapshot&quot;<br>            ],<br>            &quot;Resource&quot;: &quot;*&quot;<br>        },<br>        {<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Action&quot;: [<br>                &quot;s3:GetObject&quot;,<br>                &quot;s3:DeleteObject&quot;,<br>                &quot;s3:PutObject&quot;,<br>                &quot;s3:AbortMultipartUpload&quot;,<br>                &quot;s3:ListMultipartUploadParts&quot;<br>            ],<br>            &quot;Resource&quot;: [<br>                &quot;arn:aws:s3:::${BUCKET}/*&quot;<br>            ]<br>        },<br>        {<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Action&quot;: [<br>                &quot;s3:ListBucket&quot;<br>            ],<br>            &quot;Resource&quot;: [<br>                &quot;arn:aws:s3:::${BUCKET}&quot;<br>            ]<br>        }<br>    ]<br>}<br>EOF</pre><pre>aws iam create-policy \<br>    --policy-name VeleroAccessPolicy \<br>    --policy-document file://velero_policy.json</pre>
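<p>Optionally, confirm that the policy was created before wiring it to the service accounts in the next step:</p><pre>aws iam get-policy --policy-arn arn:aws:iam::$ACCOUNT:policy/VeleroAccessPolicy</pre>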
<h4>4.6 Create Service Accounts for Velero:</h4><p>The best practice for providing AWS policies to applications running on EKS clusters is to use <a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html">IAM Roles for Service Accounts</a>. eksctl provides an easy way to create the required IAM role and scope the trust relationship to the velero-server Service Account.</p><pre>eksctl create iamserviceaccount \<br>--cluster=$PRIMARY_EKS \<br>--name=velero-server \<br>--namespace=velero \<br>--role-name=eks-velero-backup \<br>--role-only \<br>--attach-policy-arn=arn:aws:iam::$ACCOUNT:policy/VeleroAccessPolicy \<br>--approve</pre><pre>eksctl create iamserviceaccount \<br>--cluster=$RECOVERY_EKS \<br>--name=velero-server \<br>--namespace=velero \<br>--role-name=eks-velero-recovery \<br>--role-only \<br>--attach-policy-arn=arn:aws:iam::$ACCOUNT:policy/VeleroAccessPolicy \<br>--approve</pre><blockquote>The <em>--namespace=velero</em> flag ensures that only workloads running in the <em>velero</em> namespace will be able to access the IAM policy (VeleroAccessPolicy).</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PNmg5OqY9fmXsoJG2KEl7A.png" /></figure><h4>4.7 Install Velero in both EKS Clusters</h4><pre>helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts</pre><pre>cat &gt; values.yaml &lt;&lt;EOF<br>configuration:<br>  backupStorageLocation:<br>    bucket: $BUCKET<br>  provider: aws<br>  volumeSnapshotLocation:<br>    config:<br>      region: $REGION<br>credentials:<br>  useSecret: false<br>initContainers:<br>- name: velero-plugin-for-aws<br>  image: velero/velero-plugin-for-aws:v1.6.1<br>  volumeMounts:<br>  - mountPath: /target<br>    name: plugins<br>serviceAccount:<br>  server:<br>    annotations:<br>      eks.amazonaws.com/role-arn: &quot;arn:aws:iam::${ACCOUNT}:role/eks-velero-backup&quot;<br>EOF</pre><pre>cat &gt; values_recovery.yaml &lt;&lt;EOF<br>configuration:<br>  backupStorageLocation:<br>    bucket: $BUCKET<br>  provider: aws<br>  volumeSnapshotLocation:<br>    config:<br>      region: $REGION<br>credentials:<br>  useSecret: false<br>initContainers:<br>- name: velero-plugin-for-aws<br>  image: velero/velero-plugin-for-aws:v1.6.1<br>  volumeMounts:<br>  - mountPath: /target<br>    name: plugins<br>serviceAccount:<br>  server:<br>    annotations:<br>      eks.amazonaws.com/role-arn: &quot;arn:aws:iam::${ACCOUNT}:role/eks-velero-recovery&quot;<br>EOF</pre><p>We need to install the Velero server twice: once in the Primary cluster and again in the Recovery cluster.</p><p>We can check that we have these new contexts with the following command:</p><pre>kubectl config get-contexts</pre><ul><li>Change the context to your Primary cluster and install Velero:</li></ul><pre>kubectl config use-context $PRIMARY_CONTEXT<br>helm install velero vmware-tanzu/velero \<br>    --create-namespace \<br>    --namespace velero \<br>    -f values.yaml</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XBEUYihyhRO3M3EYdoPLuQ.png" /></figure><p>We can check that the Velero server was successfully installed by running this command in each context:</p><pre>kubectl get pods -n velero</pre><ul><li>Now change the context to your Recovery cluster and proceed to install Velero:</li></ul><pre>kubectl config use-context $RECOVERY_CONTEXT<br>helm install velero vmware-tanzu/velero \<br>    --create-namespace \<br>    --namespace velero \<br>    -f values_recovery.yaml</pre>
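<p>Optionally, verify in each context that the Velero deployment is available and that the BackupStorageLocation object was created from the Helm values above (a minimal sketch, assuming the release name velero used here):</p><pre>kubectl -n velero get deployment velero<br>kubectl -n velero get backupstoragelocations</pre>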
<h4>4.7.1 Install the Velero CLI</h4><p>Velero operates by submitting commands as CRDs. To take a backup of the cluster, you submit a backup CRD to the cluster. These can be difficult to create by hand, so the Velero team has created a CLI that makes it easy to perform backups and restores. We will be using the Velero CLI to create a backup of the Primary cluster and restore it to the Recovery cluster.</p><p>Installation instructions vary depending on your operating system. Follow the instructions to install Velero <a href="https://velero.io/docs/v1.10/basic-install/#install-the-cli">here</a>.</p><h4>4.8 Backup and restore the Ghost application</h4><p>Ghost is an open-source publishing platform designed to create blogs, magazines, and news sites. It includes a simple markdown editor with preview, theming, and SEO built in to simplify editing.</p><p>We will use the <a href="https://github.com/bitnami/charts/tree/master/bitnami/ghost">Bitnami Helm chart</a> as it’s commonly deployed and well-tested. This chart depends on the <a href="https://github.com/bitnami/charts/tree/master/bitnami/mariadb">Bitnami MariaDB chart</a>, which will serve as the persistent data store for the blog application. The MariaDB data will be stored in an EBS volume that will be snapshotted by Velero as part of performing the backup.</p><p>Now we switch to the Primary cluster’s context and install Ghost (ignore the notification <em>ERROR: you did not provide an external host</em> that appears when you install Ghost; this will be resolved with the following commands):</p><pre>helm repo add bitnami https://charts.bitnami.com/bitnami</pre><pre>kubectl config use-context $PRIMARY_CONTEXT<br>helm install ghost bitnami/ghost \<br>    --create-namespace \<br>    --namespace ghost</pre><pre>export APP_HOST=$(kubectl get svc --namespace ghost ghost --template &quot;{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}&quot;)<br>export GHOST_PASSWORD=$(kubectl get secret --namespace &quot;ghost&quot; ghost -o jsonpath=&quot;{.data.ghost-password}&quot; | base64 -d)<br>export MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace &quot;ghost&quot; ghost-mysql -o jsonpath=&quot;{.data.mysql-root-password}&quot; | base64 -d)<br>export MYSQL_PASSWORD=$(kubectl get secret --namespace &quot;ghost&quot; ghost-mysql -o jsonpath=&quot;{.data.mysql-password}&quot; | base64 -d)</pre><pre>helm upgrade ghost bitnami/ghost \<br>  --namespace ghost \<br>  --set service.type=LoadBalancer,ghostHost=$APP_HOST,ghostPassword=$GHOST_PASSWORD,mysql.auth.rootPassword=$MYSQL_ROOT_PASSWORD,mysql.auth.password=$MYSQL_PASSWORD</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2vi6Ldph4rOW415417etHA.png" /></figure><p>We can check that the installation was successful by running this command:</p><pre>kubectl get pod -A</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9lZ1muNkHolj7wUaDB-aAA.png" /></figure><p>In the Ghost Admin console, you can create an example blog post that will be included in the backup and restore process by signing in (using the admin URL displayed above). As a result, the backup includes not only the application deployment configuration but also the posts in the blog database that are saved in the PV (EBS).</p><h4>4.9 Backup the Primary Cluster</h4><p>Create a backup of the Primary cluster. 
Be sure to switch your kubectl context back to the Primary cluster before running the command below.</p><p>We can see what a Velero backup CRD looks like by using the -o flag, which outputs the backup CRD YAML without actually submitting the backup creation to the Velero server.</p><pre>kubectl config use-context $PRIMARY_CONTEXT</pre><pre># Check out the generated backup configuration first<br>velero backup create ghost-backup -o yaml</pre><pre>velero backup create ghost-backup</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LIQRQaOBMzullm-Orf7aCQ.png" /></figure><p>The backup CRD shows that we are backing up all namespaces, since the includedNamespaces array includes the star wildcard. By using selectors, we can choose individual components of the cluster, even though we are backing up the entire cluster. As a result, we are able to back up a single namespace, which may include a single application.</p><blockquote>We can also see the backup files created by Velero in the Amazon S3 bucket we previously created:</blockquote><pre>aws s3 ls $BUCKET/backups/ghost-backup/</pre><h4>4.10 Validate the backup:</h4><p>Let’s check the status of the backup and validate that the backup has completed successfully.</p><pre>velero backup describe ghost-backup</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mAEaUxMXwlXWKsTst7RP0g.png" /></figure><p>Check the Phase: field in the output. If the current phase is InProgress, wait a few seconds and try again until you see Phase: Completed.</p><h4>4.11 Restore the app into the Recovery cluster</h4><pre># Switch your kubectl context to your Recovery cluster.<br>kubectl config use-context $RECOVERY_CONTEXT</pre><pre>velero restore create ghost-restore \<br>    --from-backup ghost-backup \<br>    --include-namespaces ghost</pre><p>You can check the service in the ghost namespace as below:</p><pre>kubectl -n ghost get svc ghost</pre><ul><li>Validate that the restore has completed by visiting the URL under EXTERNAL-IP, and check whether your previous post exists.</li></ul><blockquote>You might need to change the DNS for your production environment and assign it to the new EKS cluster.</blockquote><h4>4.12 Schedule a Backup</h4><p>The schedule operation allows you to create a backup of your data at a specified time, defined by a <a href="https://en.wikipedia.org/wiki/Cron">Cron expression</a>.</p><pre>velero schedule create NAME --schedule=&quot;* * * * *&quot; [flags]</pre><ul><li>Cron schedules use the following format.</li></ul><pre># ┌───────────── minute (0 - 59)<br># │ ┌───────────── hour (0 - 23)<br># │ │ ┌───────────── day of the month (1 - 31)<br># │ │ │ ┌───────────── month (1 - 12)<br># │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;<br># │ │ │ │ │                                   7 is also Sunday on some systems)<br># │ │ │ │ │<br># │ │ │ │ │<br># * * * * *</pre><ul><li>For example, the command below creates a backup that runs every 30 minutes.</li></ul><pre>velero schedule create ghost-schedule --schedule=&quot;*/30 * * * *&quot;</pre><p>This command will create the schedule ghost-schedule within Velero, but the first backup will not be taken until the next scheduled time (every 30 minutes).</p><p>Backups created by a schedule are saved with the name &lt;SCHEDULE NAME&gt;-&lt;TIMESTAMP&gt;, where &lt;TIMESTAMP&gt; is formatted as <em>YYYYMMDDhhmmss</em>.</p>
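<p>You can inspect the schedule and the backups it produces with the CLI:</p><pre>velero schedule get<br>velero backup get</pre>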
<p>For a full list of available configuration flags, use the Velero CLI help command.</p><p>For more details, check the <a href="https://velero.io/docs/v1.10/backup-reference/">Velero Backup Reference</a>.</p><h3>5. Cleaning up</h3><p>To avoid incurring future charges, delete the resources. If you used eksctl to create your clusters, you can use eksctl delete cluster &lt;clustername&gt; to delete the clusters.</p><pre># Delete the PRIMARY_EKS cluster<br>eksctl delete cluster $PRIMARY_EKS</pre><pre># Delete the RECOVERY_EKS cluster<br>eksctl delete cluster $RECOVERY_EKS</pre><pre># Delete the S3 bucket<br>aws s3 rb s3://$BUCKET --force</pre><h3>6. Conclusion</h3><p>In conclusion, Velero is a powerful tool for managing backups and restores of Kubernetes applications, and it’s a great fit for running on EKS. With Velero, you can easily back up your Kubernetes resources, including your applications, volumes, and configuration data, to an S3 bucket, and restore them in case of a disaster or data loss. With Velero on EKS, you can also easily migrate your applications across clusters or regions, and ensure your data is securely stored and protected. Additionally, Velero provides advanced features like scheduling backups, specifying backup retention policies, and validating backups, making it a versatile tool for managing your Kubernetes workloads on EKS.</p><p>Overall, Velero on EKS is a great solution for anyone looking to simplify their backup and restore process for Kubernetes applications, while taking advantage of the scalability and flexibility of EKS. Whether you’re a developer, a DevOps engineer, or a cloud administrator, Velero can help you ensure your Kubernetes workloads are always available and protected, so you can focus on delivering value to your users.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/700/0*JPrib4L0ueUpn_59.png" /></figure><h4>👋 If you find this helpful, please click the clap 👏 button below a few times to show your support for the author 👇</h4><h4>🚀<a href="http://from.faun.to/r/8zxxd">Join FAUN Developer Community &amp; Get Similar Stories in your Inbox Each Week</a></h4><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=0871539eba4c" width="1" height="1" alt=""><hr><p><a href="https://faun.pub/how-to-backup-protect-aws-eks-using-velero-from-vmware-tanzu-0871539eba4c">How to Backup &amp; Protect AWS EKS using Velero from vmware-tanzu</a> was originally published in <a href="https://faun.pub">FAUN.dev() 🐾</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to migrate Linux & Windows On-prem workloads to AWS using Migration-Hub Orchestrate]]></title>
            <link>https://faun.pub/how-to-migrate-linux-windows-on-prem-workloads-to-aws-using-migration-hub-orchestrate-eeb12b20909f?source=rss-6a59297ae215------2</link>
            <guid isPermaLink="false">https://medium.com/p/eeb12b20909f</guid>
            <category><![CDATA[aws-migration-hub]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[cloud]]></category>
            <category><![CDATA[migration]]></category>
            <dc:creator><![CDATA[Mohammad jomaa]]></dc:creator>
            <pubDate>Wed, 01 Jan 2025 11:13:58 GMT</pubDate>
            <atom:updated>2025-05-31T07:35:18.703Z</atom:updated>
            <content:encoded><![CDATA[<h4>Abstract:</h4><p>This article discusses the features and capabilities of AWS Migration-Hub Orchestrate, accompanied by examples that illustrate its technical aspects. We consider a scenario where we have an on-prem application comprising two virtual machines (Linux and Windows). We will explore how these VMs can be seamlessly migrated to AWS as EC2 instances using Migration Hub’s Orchestrate functionality.</p><h4>Prerequisites</h4><blockquote>We assume that the reader has <strong><em>basic</em></strong> knowledge of AWS, AWS migration strategies, AWS MGN, and system administration (Linux, Windows).</blockquote><h4>The content:</h4><blockquote>1- Introduction</blockquote><blockquote>2- What is Migration-hub</blockquote><blockquote>3- What is Migration-hub orchestrator</blockquote><blockquote>4- Solution Architecture</blockquote><blockquote>5- Implementation</blockquote><blockquote>6- Conclusion</blockquote><h4>1. Introduction:</h4><p>The existing approaches for migrating on-prem workloads to AWS suffer from several issues.</p><p><strong>1- Central Management Dashboard</strong>: The first is the lack of an integrated dashboard that facilitates managing and orchestrating the migration journey; there is no single interface providing unified control and oversight.<br>2- <strong>Automated Standards:</strong> Another major issue is the lack of automated standards, which becomes acute when working with large, demanding customers that have many servers. It leads to time lost on repeated operations and a greater chance of human error, particularly in the day-to-day technical activities associated with migration.</p><p>Migration Hub addresses these challenges with an effective solution by offering further functionality to automate the migration of on-prem workloads to AWS. With Migration Hub, you can bypass many such challenges and achieve a more seamless migration process.</p><h4>2. What is Migration-hub?</h4><p>With Migration Hub, you get a clear view of your application lineup, making planning and monitoring a breeze. It doesn’t matter which migration tool you’re using — you can visualize the status of each connection, server, or database for the applications in your portfolio.</p><figure><img alt="Aws migration hub network visualization" src="https://cdn-images-1.medium.com/max/1024/0*_NRcAUS4NptK3hNx.png" /><figcaption>Aws migration hub network visualization</figcaption></figure><p>Migration Hub gives you the flexibility to either jump right into migration and create groups as your servers move, or you can first identify servers and then group applications. The choice is yours! And no matter which approach you take, you can migrate each server in an application and keep an eye on the progress using any tool in AWS Migration Hub.</p><figure><img alt="AWS migration hub application groups" src="https://cdn-images-1.medium.com/max/892/0*pxZXVvOyLzfaiqyr.png" /><figcaption>AWS migration hub application groups</figcaption></figure><ul><li>For those considering migrating their applications through lift-and-shift on AWS, the Application Migration Service is the tool to employ. 
Refer to the AWS Application Migration Service <a href="https://docs.aws.amazon.com/mgn/">documentation</a> for a deeper understanding.</li><li>For database migration to AWS, we have chosen the Database Migration Service, also known as AWS DMS. If you need more details to make sure that your database move is successful, do not hesitate to consult the AWS Database Migration Service <a href="http://aws.amazon.com/dms/">documentation</a>.</li></ul><h4>3. What is Migration-hub Orchestrate?</h4><p>Think of AWS Migration Hub Orchestrator as your personal assistant, simplifying and automating the whole process of shifting your servers and enterprise apps onto Amazon Web Services. It serves as a one-stop station where you can manage and track your migration path hassle-free.</p><p>SAP NetWeaver-based applications, such as S/4HANA, can be shifted to AWS very easily using Migration Hub Orchestrator. It achieves this by rehosting them, along with supported custom applications, on Amazon EC2. The best part? It includes many templates that you can conveniently edit to match the unique demands of your migration.</p><figure><img alt="Migration Hub Orchestrator templates" src="https://cdn-images-1.medium.com/max/1024/1*j-2o8XZo1kpErT1JWqw1aQ.png" /><figcaption>Migration Hub Orchestrator templates</figcaption></figure><p>Migration Hub Orchestrator automates the steps in your chosen workflow and displays the status of the migration.</p><figure><img alt="Migration Hub Orchestrator workflow steps" src="https://cdn-images-1.medium.com/max/1024/1*gCEG7dP6aYUdzCp97M_WkA.jpeg" /><figcaption>Migration Hub Orchestrator workflow steps</figcaption></figure><h4>4. Solution Architecture:</h4><figure><img alt="Migration-Hub | Solution Architecture" src="https://cdn-images-1.medium.com/max/886/1*wS-rZa-zJ5jFVEjwXnxtyw.png" /><figcaption>Migration-Hub | Solution Architecture</figcaption></figure><blockquote><strong>Step 1:</strong></blockquote><p>In this step, we will install the AWS Application Discovery agent to gather information about the VMs (CPU, memory, disk, network, IPs, etc.). Once you finish this step, you will be able to see those servers in the Migration-Hub servers console.</p><blockquote><strong>Step 2:</strong></blockquote><p>From the Migration-Hub console Tools tab, you can download a plugin image, boot it in the on-prem environment, and configure the connectivity &amp; credentials for the (Linux/SSH-22) and (Windows/WinRM-5986) VMs.</p><blockquote><strong>Step 3:</strong></blockquote><p>In this step, you are ready to build the workflow that will automate the VM migration and launch the instances on AWS.</p><blockquote><strong>Step 4:</strong></blockquote><p>During the workflow, one crucial step involves installing an MGN agent on the virtual machines (VMs) to facilitate the replication of their data to the replication server on AWS. This step ensures seamless data transfer and is executed automatically without requiring any manual intervention. The installation process is initiated by sending commands from the plugin VM to the target VMs, streamlining the setup without necessitating any direct involvement on your part.</p><blockquote><strong>Step 5:</strong></blockquote><p>Once replication is done, the Migration-Hub workflow will ask you to launch a test instance and test it. 
<h4>5. Implementation</h4><p>We set up all the required resources in a test environment on AWS, but it should be emphasized that you can carry out the following setup both on premises and on AWS. We therefore start by installing the Migration Hub Orchestrator plugin.</p><h4>5.1.1 Prepare MHO-Plugin Image</h4><blockquote>If you intend to install the plugin on premises, you may treat this section as optional.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9AcmgthTGvdSRwCxM7B9XA.png" /></figure><blockquote>This workflow needs the vmimport role, created as described in this <a href="https://docs.aws.amazon.com/vm-import/latest/userguide/required-permissions.html">link</a>.</blockquote><p>Create a role with the required name “vmimport” and attach the policy and trust relationship shown below.</p>
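<p>For reference, the trust relationship is the standard VM Import/Export trust policy from the AWS documentation linked above, and the role can also be created from the CLI. A sketch, with an illustrative file name; the S3/EC2 permissions policy from the linked docs still needs to be attached afterwards:</p><pre>cat &gt; vmimport-trust.json &lt;&lt;'EOF'<br>{<br>  &quot;Version&quot;: &quot;2012-10-17&quot;,<br>  &quot;Statement&quot;: [<br>    {<br>      &quot;Effect&quot;: &quot;Allow&quot;,<br>      &quot;Principal&quot;: { &quot;Service&quot;: &quot;vmie.amazonaws.com&quot; },<br>      &quot;Action&quot;: &quot;sts:AssumeRole&quot;,<br>      &quot;Condition&quot;: {<br>        &quot;StringEquals&quot;: { &quot;sts:Externalid&quot;: &quot;vmimport&quot; }<br>      }<br>    }<br>  ]<br>}<br>EOF<br># The role name must be exactly &quot;vmimport&quot;<br>aws iam create-role --role-name vmimport --assume-role-policy-document file://vmimport-trust.json</pre>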
<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wc_XeRetXMC1jW5j7YxovA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*AmcmwCKoaW12s70EhlMp4w.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*bd60F3g5_ifMBMBaYJVUGg.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*U43lHIwVU5rpgAcWQN4v1A.png" /><figcaption>MHO — <strong>Orchestrate</strong></figcaption></figure><p>After completing the workflow, you will find the Amazon Machine Image listed. This AMI serves as a template from which you can create and launch new instances.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YYmkuo6_0NsLmgd3gXa3Vw.png" /></figure><h4>5.1.2 Prepare IAM user for MHO-Plugin</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3l6gaxwd_7pQXaKji-tl_g.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*a30SqsTiY4-P0wRydycttQ.png" /><figcaption>IAM user | MHO-plugin User</figcaption></figure><p>Then open the IAM console → <strong>MHO plugin user</strong> → <strong>Security credentials</strong> → <strong>Access keys</strong> → <strong>Create access key</strong> → Other.</p><blockquote>Save the key as a CSV file because you will need it in the next step.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*V1f18CFC6zJWBml3OYllAw.png" /><figcaption>MHO user | Access key</figcaption></figure><h4>5.1.3 MHO-Plugin Configuration</h4><p>You can access the plugin over SSH.</p><blockquote>You can also use <strong><em>SSM</em></strong>, because the SSM agent is already installed and the test environment is on AWS.</blockquote><pre>ssh ec2-user@PluginIPAddress<br># default password: plugin@123</pre><p>To set up the Migration Hub Orchestrator plugin using the plugin setup commands, start a bash session inside the plugin Docker container with the following command:</p><pre>docker exec -it mhub-orchestrator-plugin bash</pre><p>Inside the container, run the commands below to configure AWS access for the plugin:</p><pre>aws configure --profile &lt;profile-name&gt;<br>####<br>plugin setup --aws-configurations</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lyq1vQthIFPi1VkVuUu75A.png" /><figcaption>MHO — Plugin Machine Console</figcaption></figure><p>You can check the plugin list under Migration Hub → Orchestrate → Plugins:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*e2rVgvhV2g9rSWr8kXImog.png" /><figcaption>MHO — Plugins list</figcaption></figure><p><a href="https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/configure-plugin.html">For more details about the MHO plugin</a></p><h4>5.2 Linux &amp; Windows Workloads Setup</h4><p>In this POC we create two workloads and install the AWS Discovery Agent on them:</p><blockquote><strong>Linux</strong>: amzn2-ami-kernel-5.10-hvm-2.0.20230515.0-x86_64-gp2</blockquote><blockquote><strong>Windows</strong>: Windows_Server-2016-English-Full-Base-2023.05.10</blockquote><p>After preparing the environment you should see something like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QwYKXFUfJoEYre72V-truA.png" /><figcaption>Test env</figcaption></figure><p>For the security group configuration, check the table below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MkjIbSzE82TDTizOSIPL6g.png" /><figcaption>SG- Workloads</figcaption></figure>
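<p>If you prefer to script the security group rules rather than click through the console, the following AWS CLI sketch opens the two management ports to the plugin only; the group IDs are placeholders for your workload and plugin security groups:</p><pre># Allow SSH (Linux) and WinRM over HTTPS (Windows) from the plugin's security group<br>aws ec2 authorize-security-group-ingress --group-id sg-WORKLOAD \<br>    --protocol tcp --port 22 --source-group sg-PLUGIN<br>aws ec2 authorize-security-group-ingress --group-id sg-WORKLOAD \<br>    --protocol tcp --port 5986 --source-group sg-PLUGIN</pre>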
<h4>5.2.1 AWS Discovery Agent Installation (Linux)</h4><p>These agents harvest not only technical parameters and general performance information but also in-process operations and active network connections. The Discovery Agent is a kind of local detective that works right inside the environment, and it requires substantial authority (root rights) to accomplish its mission. After installation and activation, the agent establishes a secure connection to your home region and introduces itself to the Application Discovery Service.</p><p>Before that, we should create an IAM user for this agent and attach the policy below:</p><ul><li><a href="https://us-east-1.console.aws.amazon.com/iamv2/home?region=us-east-1#/policies/details/arn%3Aaws%3Aiam%3A%3Aaws%3Apolicy%2FAWSApplicationDiscoveryAgentAccess">AWSApplicationDiscoveryAgentAccess</a></li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*x8PmFD3VDYgi0nx530Bdrw.png" /><figcaption>MHO-Discovery agent</figcaption></figure><p>Open <a href="https://us-east-1.console.aws.amazon.com/migrationhub/home?region=us-east-1#"><strong><em>Migration Hub</em></strong></a> → <strong>Discovery → Tools</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DKV4PYFveQX9sdCeEEOljw.png" /></figure><pre>curl -o ./aws-discovery-agent.tar.gz https://s3-us-west-2.amazonaws.com/aws-discovery-agent.us-west-2/linux/latest/aws-discovery-agent.tar.gz<br>tar -xzf aws-discovery-agent.tar.gz<br># Use the access key pair of the Discovery Agent IAM user (never publish real keys)<br>sudo bash install -r us-east-1 -k &lt;access-key-id&gt; -s &lt;secret-access-key&gt;</pre><p>After registration, the agent goes into action and begins collecting data on behalf of its host VM. Like an attentive assistant, it pings the Application Discovery Service for configuration information every 15 minutes. For more <a href="https://docs.aws.amazon.com/application-discovery/latest/userguide/install_on_linux.html">details</a>.</p><h4>5.2.2 AWS Discovery Agent Installation (Windows)</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*z5pRdsqwQVmzauACUoVUSA.png" /></figure><pre>powershell -command &quot;&amp; { iwr https://s3-us-west-2.amazonaws.com/aws-discovery-agent.us-west-2/windows/latest/AWSDiscoveryAgentInstaller.exe -OutFile AWSDiscoveryAgentInstaller.exe }&quot;<br>.\AWSDiscoveryAgentInstaller.exe REGION=&quot;us-east-1&quot; KEY_ID=&quot;&lt;access-key-id&gt;&quot; KEY_SECRET=&quot;&lt;secret-access-key&gt;&quot; /q</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XLbehVgj9kwV03BSxfcPGg.png" /><figcaption>MHO-Windows-workload</figcaption></figure><p>For more <a href="https://docs.aws.amazon.com/application-discovery/latest/userguide/install_on_windows.html">details</a>.</p><p>After installing the AWS Discovery Agent you will see both machines in the AWS Migration Hub console.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PmtyPZPy8WAQ9TTYqTD0Qw.png" /><figcaption>MHO Servers console</figcaption></figure>
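<p>You can also confirm the registration from the CLI via the Application Discovery Service API; a quick sketch, run from any machine with suitable credentials:</p><pre># List registered Discovery Agents and their health (region as used in this walkthrough)<br>aws discovery describe-agents --region us-east-1 \<br>    --query 'agentsInfo[].[agentId,health]' --output table</pre>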
src="https://cdn-images-1.medium.com/max/1024/1*JwfnB_meHGLfFvzcngA7aQ.png" /><figcaption>MHO-Linux-Workload</figcaption></figure><pre># Restart SSHd service <br>sudo systemctl restart sshd<br># set password for ec2-user and a save it to use in next step<br>sudo passwd ec2-user</pre><p>At this point, you can supply the MHO-Plugin with Linux credentials. This can be done by logging into the MHO-Plugin machine and executing the following command:</p><pre># Login MHO-Plugin<br>docker exec -it mhub-orchestrator-plugin bash<br>plugin setup --remote-server-configurations</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2cfvbhDT1kktHxapzMJ98w.png" /><figcaption>MHO-plugin machine</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*C737tYZej0V3okACu0msvw.png" /></figure><p>Copy the public key form MHO-Plugin to MHO-Linux-Workload as below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*L2LU6_udKhCKrFcmIP7uZQ.png" /><figcaption>MHO -Plugin machine</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uX1Ua2hHeWd8z0aK6r_KDA.png" /><figcaption>MHO-Linux-Workload</figcaption></figure><blockquote><strong>Windows</strong></blockquote><p>To prepare your Windows workload to work with the MHO-Plugin, some specific commands need to be executed. For more detailed instructions, please refer to the <a href="https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/configure-plugin.html">link</a> provided</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*iquK5SJUDjYjYCQy9Wt0Dg.png" /></figure><pre>#1 Download the setup <br>Invoke-WebRequest -Uri &quot;https://medium.com/r/?url=https%3A%2F%2Fapplication-data-collector-release.s3.us-west-2.amazonaws.com%2Fscripts%2FWinRMSetup.ps1&quot;<br>#2 Download the New-SelfSignedCertificateEx.ps1<br>Invoke-WebRequest -Uri &quot;https://medium.com/r/?url=https%3A%2F%2Fgithub.com%2FAzure%2Fazure-libraries-for-net%2Fblob%2Fmaster%2FSamples%2FAsset%2FNew-SelfSignedCertificateEx.ps1&quot;</pre><pre>.\WinRMSetup.ps1</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*nwKzD34vaq6WqAKs87xxfw.png" /><figcaption>MHO-Windows-workload</figcaption></figure><blockquote>Remember to ensure that port <strong>5986</strong> is open on the MHO-Windows-workload, and that the MHO-plugin can access it.</blockquote><p>You’re now able to configure the MHO-plugin using the Windows credentials as depicted below:</p><pre># Login MHO-Plugin<br>docker exec -it mhub-orchestrator-plugin bash<br>plugin setup --remote-server-configurations</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Y7Cq1l2wpp5491J7xA6Mag.png" /><figcaption>MHO-Plugin machine</figcaption></figure><h4>6. <a href="https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/rehost-on-ec2.html">Build Migration-hub Workflow to Rehost applications on Amazon EC2</a></h4><p>Let’s check what we have done until now :</p><figure><img alt="Migration-Hub | Solution Architecture" src="https://cdn-images-1.medium.com/proxy/1*wS-rZa-zJ5jFVEjwXnxtyw.png" /></figure><ol><li>We prepared MHO plugin and all requirements.</li><li>We prepared our test environment as required.</li></ol><p>If you look at the diagram, you will notice a section that refers to MGN features such as the replication server, the target subnet, the staging subnet, and so on. 
<h4>6. <a href="https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/rehost-on-ec2.html">Build a Migration Hub Workflow to Rehost Applications on Amazon EC2</a></h4><p>Let’s review what we have done so far:</p><figure><img alt="Migration-Hub | Solution Architecture" src="https://cdn-images-1.medium.com/proxy/1*wS-rZa-zJ5jFVEjwXnxtyw.png" /></figure><ol><li>We prepared the MHO plugin and all its requirements.</li><li>We prepared our test environment as required.</li></ol><p>If you look at the diagram, you will notice a section that refers to MGN features such as the replication server, the target subnet, the staging subnet, and so on. These elements are configured in the next section.</p><p>Before building the workflow, let’s take care of a few <strong>required actions</strong>.</p><h4>1- Group the servers into applications</h4><p>Open the <strong><em>Migration Hub console → Applications</em></strong>, create App1, and add our servers to it as shown below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kVG29Inx9vsJJ-wrB642rg.png" /><figcaption>MHO — App1</figcaption></figure><h4>2- Prepare the MGN service settings</h4><p>AWS Application Migration Service (MGN) is a highly automated lift-and-shift (rehost) solution that simplifies, expedites, and reduces the cost of migrating applications to AWS.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Bnqt4jTsf50M0vED8-H5Rg.png" /><figcaption>AWS MGN service</figcaption></figure><p>For more details, follow this <a href="https://catalog.us-east-1.prod.workshops.aws/workshops/c6bdf8dc-d2b2-4dbd-b673-90836e954745/en-US/server-migration-overview/app-mig-service/setup">guide</a>.</p><blockquote>Ensure that the necessary service roles have been created by clicking the Reinitialize service permissions button on the Replication settings page of the Application Migration Service console.</blockquote><h4>3- Provide credentials to install the AWS Replication Agent on your remote servers</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7VW4AZF9wODJUPLL9TRzYQ.png" /></figure><pre>{<br>    &quot;Version&quot;: &quot;2012-10-17&quot;,<br>    &quot;Statement&quot;: [<br>        {<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Action&quot;: [<br>                &quot;mgn:StartCutover&quot;,<br>                &quot;mgn:StartTest&quot;<br>            ],<br>            &quot;Resource&quot;: &quot;*&quot;<br>        },<br>        {<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Action&quot;: &quot;iam:PassRole&quot;,<br>            &quot;Resource&quot;: &quot;*&quot;,<br>            &quot;Condition&quot;: {<br>                &quot;StringEquals&quot;: {<br>                    &quot;iam:PassedToService&quot;: &quot;ec2.amazonaws.com&quot;<br>                }<br>            }<br>        }<br>    ]<br>}</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Djhe7o_ca9VGM9-bYemBTQ.png" /><figcaption>MHO-MGN</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-2XHOnc29PNJ5bpcRlSlww.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JWP8JXYEphjIR3KXg75_1w.png" /><figcaption>MHO — AWS Secrets Manager</figcaption></figure>
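<p>If you would rather script this part, the same credentials can be stored with the AWS CLI. A minimal sketch, in which the secret name and JSON key names are illustrative; check the Migration Hub Orchestrator documentation for the exact format it expects:</p><pre>aws secretsmanager create-secret \<br>    --name MGNAgentInstallUser \<br>    --secret-string '{&quot;accessKeyId&quot;:&quot;&lt;access-key-id&gt;&quot;,&quot;secretAccessKey&quot;:&quot;&lt;secret-access-key&gt;&quot;}'</pre>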
<p>With the environment now prepared, we are ready to migrate App1 (consisting of both the Linux and the Windows VM) to AWS, as detailed below:</p><ul><li>Open the Migration Hub console → Rehost applications on Amazon EC2</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*E4t_hDg0nhYTIks9NvYDHQ.png" /><figcaption>MHO- Rehost applications on Amazon EC2</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*5wwjYUa2EfoViPvRZAiXdw.png" /><figcaption>MHO — workflow</figcaption></figure><p>Once the workflow is created, it needs to be executed. You can then review the subsequent steps as depicted below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4Is8D7nGaabggTX0K0-dIA.png" /><figcaption>MHO- Workflow- Steps</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uOppUxLPpu1_9XgpAC9ZuA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*t8jVTWKzS4TIQsb9mUMmhA.png" /></figure><p>If you need to change the migrated instance type, VPC, subnets, etc. for the launched instances, you can do so by modifying the launch settings of each source server, as illustrated below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2Qw3tvsnM0M4JbWLHhOpBg.png" /><figcaption>MGN → <a href="https://eu-west-1.console.aws.amazon.com/mgn/home?region=eu-west-1#/sourceServers">Active source servers</a> → MHO-Linux-Workload</figcaption></figure>
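<p>The same launch settings can be inspected and adjusted from the CLI. A rough sketch, where the source server ID is a placeholder and the exact flags may vary across CLI versions, so verify them with aws mgn help first:</p><pre># Inspect the current launch configuration of a source server<br>aws mgn get-launch-configuration --source-server-id s-0123456789abcdef0<br># Example tweak: keep the source machine's private IP on launch<br>aws mgn update-launch-configuration --source-server-id s-0123456789abcdef0 \<br>    --copy-private-ip</pre>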
<p>Upon finishing the replication process, you’ll encounter a few manual steps requiring user intervention, such as “<strong><em>Marking the instance ‘Ready for test’</em></strong>”:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8eVtgWfJEpnK_Wxn-TTDPg.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*78XWeQNTJm8PjG_Vcx7BZQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gYi-J52Uomw-IHsTKanw_g.png" /></figure><p>Once you’ve completed all the steps, you’ll see that your on-premises workloads have been successfully migrated to AWS, as illustrated below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QSRplI_q8f2E9D0qnOMGfw.png" /><figcaption>Migrated machines</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YlodZw5tAf6PHco_EzvFTA.png" /><figcaption>On-prem machines</figcaption></figure><h4>7. Conclusion</h4><p>In conclusion, AWS Migration Hub overcomes the limitations and challenges commonly faced when migrating on-premises workloads to AWS.</p><p>By providing a central management dashboard, it enables efficient control and orchestration of the migration journey. Moreover, its automated, repeatable procedures reduce wasted time and minimize the risk of human error, especially in complex migration tasks.</p><p>Ultimately, we thoroughly examined the capabilities and features of AWS Migration Hub Orchestrator, using a realistic workload scenario to illuminate the technical aspects. Our case study was an on-premises application made up of two virtual machines, one Linux and one Windows, which we migrated to AWS as EC2 instances under the orchestration of Migration Hub.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/700/0*NDaQ7lVOrIQVhEGs.png" /></figure><hr><p><a href="https://faun.pub/how-to-migrate-linux-windows-on-prem-workloads-to-aws-using-migration-hub-orchestrate-eeb12b20909f">How to migrate Linux &amp; Windows On-prem workloads to AWS using Migration-Hub Orchestrate</a> was originally published in <a href="https://faun.pub">FAUN.dev() 🐾</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>