<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Anunay Bhatt on Medium]]></title>
        <description><![CDATA[Stories by Anunay Bhatt on Medium]]></description>
        <link>https://medium.com/@ab-lumos?source=rss-4456e8332d99------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*xoohxsTAsCiDoIbhUJ3Pzw.jpeg</url>
            <title>Stories by Anunay Bhatt on Medium</title>
            <link>https://medium.com/@ab-lumos?source=rss-4456e8332d99------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 24 Jul 2021 01:49:50 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@ab-lumos/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Embedding Security into SDLC using Reference Architectures for developers]]></title>
            <link>https://ab-lumos.medium.com/embedding-security-into-sdlc-using-reference-architectures-for-developers-29403c00fb3d?source=rss-4456e8332d99------2</link>
            <guid isPermaLink="false">https://medium.com/p/29403c00fb3d</guid>
            <category><![CDATA[sdlc]]></category>
            <category><![CDATA[microservices]]></category>
            <category><![CDATA[security-architecture]]></category>
            <category><![CDATA[reference-architecture]]></category>
            <category><![CDATA[security]]></category>
            <dc:creator><![CDATA[Anunay Bhatt]]></dc:creator>
            <pubDate>Fri, 26 Mar 2021 22:05:10 GMT</pubDate>
            <atom:updated>2021-04-06T04:16:41.540Z</atom:updated>
            <content:encoded><![CDATA[<h3>Uncomplicate Security for developers using Reference Architectures</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*m_zozbmGUf1RTEcnYaPz4g.png" /><figcaption>Figure 1: Security Blueprint for a microservices app on AWS EKS cluster</figcaption></figure><p>So you want your developers to build secure applications during SDLC and not as an afterthought after being dinged by Security reviewers. You got this idea of highlighting security controls via reference architectures that can be easily consumed by developers early on in the SDLC life cycle.</p><p>But you are concerned! Would this reference architecture be another “lengthy”, “preachy” document with languages like “Thou shall do this, thou shall not do this…” and not really something that can be easily adopted by developers? If you develop a visual reference architecture infographic and give it to your developers, it will still be a “visual” form of a checklist. Not saying that such architectures don’t have a place, they are very helpful for a 10,000 foot view of the security landscape. But that does not really help your developers, does it?</p><p>In this blog, we will walk through some of the salient features of a meaningful security reference architecture and the process required to develop one. 
We will also look at the challenges that one might expect to face while launching a successful security reference architecture program.</p><h3>Salient features of a successful reference architecture</h3><p>These are the 7 jewels that must adorn your reference architecture program; otherwise, there is a high likelihood that your work will not be adopted by developers.</p><p><strong>#1 Targeted to the developers </strong>— Always think from a developer’s perspective while framing the mission, vision and goals of the reference architecture, and also while creating the technical artifacts.</p><p><strong>#2 Relatable </strong>— Developers are not looking for another security guidance document. Use a demo application, sample code or other relatable means to demonstrate the reference architecture.</p><p><strong>#3 Easily adoptable</strong> — Automation and scaffolding must be your favorite tools for making your reference architecture reachable. The reference architecture program must also be integrated at key entry points of the SDLC for easy consumption.</p><p><strong>#4 Offering benefit to developers</strong> — Why would a developer follow your reference architecture? One example of a <em>cherry on top of the cake</em> for developers: processes are set in place to fast-track reference-architecture-based apps through the Security review process.</p><p><strong>#5 Offering holistic and mappable security</strong> — Security controls demonstrated in your reference architecture must be mapped to your company’s GRC standards, or in some cases may go above and beyond to create new GRC standards.</p><p><strong>#6 Iterative</strong> — Multiple deployment models demand multiple “iterative” reference architectures. It’s important not to waste effort on a reference architecture that is not useful to developers. 
Your process for developing a reference architecture must include a lightweight feasibility-analysis phase for Go/No-Go decisions.</p><p><strong>#7 Measurable</strong> — Any measure that demonstrates the adoption of the reference architecture program would work here, but one metric that conveys a good story about security maturity is: how many apps (or what percentage of your apps) have baked in security using the security reference architecture?</p><p>So there you have it — the 7 salient features that make your reference architecture more adoptable by developers. Next we will see the process of creating a sample reference architecture.</p><h3>Process for creating reference architecture</h3><p>Let’s take an example so that it’s easier for you to follow along. Most developers today are using a modular form of application development called microservices. Microservices have also become very popular with the advent of managed Kubernetes clusters offered by public cloud platforms like AWS, GCP and Azure.</p><p>So consider a scenario where you are a Security Architect and want your developers to follow your prescribed reference architecture to develop secure microservices apps on a public cloud platform like AWS.</p><h4>Phase #1 Start with a security blueprint</h4><p>You can call this something else, but when I say “blueprint”, I am referring to the paper exercise of creating a visual reference architecture. The objective here is to “quickly” develop a good starting point highlighting key security topics without deep-diving into any of the technical areas. 
I put “quickly” in quotes because this phase lets you discuss the efficacy of the reference architecture with minimal effort.</p><p>For our example, let us look at the paper architecture of our e-commerce microservices application:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1022/1*p2tJ0VPO9-zGPr7sh3PYQg.png" /><figcaption>Figure 2: Application architecture for a demo ecommerce app — Sock Shop by Weaveworks</figcaption></figure><p>Overlay the paper architecture with your security controls. These controls can be open source, cloud-provider specific or specific to your organization. This, of course, requires good knowledge of your organization’s current security tooling.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1021/1*swAQ_6DpOVkWfkZVCxld7Q.png" /><figcaption>Figure 3: Application architecture overlaid with security controls</figcaption></figure><p><strong><em>Quick tip →</em></strong><em> Before proceeding to the next phase, talk to your Security Engineers, Security Architects and other Security Leaders, and discuss the efficacy of the security blueprint. Don’t waste your time and effort on something that is not going to be useful to your developers.</em></p><h4>Phase #2 Build a sample application or use an open source one</h4><p>Developers love code samples, and our objective with the reference architecture is exactly that: to offer them meaningful security-integration code samples. But for that, we need an app.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/400/1*LLqiTvrznxFnbwUWrsAJnA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/400/1*1LY08rfeKQTRmVbvzZ438Q.png" /></figure><p>For the microservices example, we can select from a number of sample apps that are available online — <a href="https://www.weave.works/docs/cloud/latest/tasks/deploy/sockshop-deploy/">Sock Shop by Weaveworks</a>, <a href="https://github.com/GoogleCloudPlatform/microservices-demo">Online Boutique by GCP</a>, etc. 
The example app architecture we used in the blueprint above is from the Sock Shop demo app by Weaveworks. Sock Shop is a cloud-native microservices demo application: a web-based e-commerce app that allows users to browse different varieties of socks, add them to the cart and buy them.</p><p><strong><em>Quick tip → </em></strong><em>While onboarding the sample app to your organization, take care to follow all the developer best practices of your organization, like Git setup, CI/CD tools, third-party package onboarding, golden AMIs, etc. As your sample code will be adopted by developers, you should not employ “shortcuts” for application onboarding.</em></p><p><strong><em>Quick tip → </em></strong>Where possible, reduce the operational overhead on your team by using managed services for infrastructure.</p><h4>Phase #3 Integrate security controls with the sample application</h4><p>This is the phase where you start to secure the sample application and offer guidance to developers while doing so. The outcome of this phase is a set of “How to” guides illustrated via the sample code of your demo application. Developers can directly copy your code and integrate it with their own applications, which makes the guides easy to follow. At the same time, the guides must also be mapped to your organization’s GRC standards for backward traceability.</p><p>The security choices you make at this phase are critical, and your thought process (pros-and-cons comparisons between tools) must be extensively documented. 
Some examples are — comparisons between container CNIs like Cilium vs. Calico, between service meshes like Istio and Linkerd, between managed and unmanaged security tools, etc.</p><p>Some examples of security controls applicable to our example scenario, also highlighted in Figure 3:</p><blockquote><strong><em>1</em></strong><em>- Create and use secure CI/CD pipelines for your application and infrastructure code, integrating with security scanning tools like static analysis, credential scanning, third-party product scanning, a secure Git repo, etc.</em></blockquote><blockquote><strong><em>2</em></strong><em>- Inject trust certificates into application containers using the organization’s PKI server</em></blockquote><blockquote><strong><em>3</em></strong><em>- Integrate the front-end with a single sign-on server</em></blockquote><blockquote><strong><em>4</em></strong><em>- Create least-privilege IAM roles in AWS using IAM Access Analyzer</em></blockquote><blockquote><strong><em>5</em></strong><em>- Encrypt service-to-service traffic with mTLS using a service mesh like Istio</em></blockquote><blockquote><strong><em>6</em></strong><em>- Create service-to-service authorization using service mesh authorization policies</em></blockquote><blockquote><strong><em>7</em></strong><em>- Store, retrieve and rotate secrets using Vault</em></blockquote><blockquote><strong><em>8</em></strong><em>- Encrypt cloud storage with AWS KMS</em></blockquote><blockquote><strong><em>9</em></strong><em>- Monitor services using Zipkin</em></blockquote><blockquote><strong><em>10</em></strong><em>- Centralize the storage of infrastructure logs (CloudTrail, VPC flow logs, EKS control plane) and OS/app logs</em></blockquote><blockquote><strong><em>11</em></strong><em>- Restrict network access with AWS security groups, NACLs, WAF, etc.</em></blockquote><blockquote><strong><em>12</em></strong><em>- Secure connectivity to on-premises using AWS Direct Connect</em></blockquote><blockquote><strong><em>13</em></strong><em>- Enable 
fine-grained access control in Kubernetes using Kubernetes RBAC</em></blockquote><blockquote><strong><em>14</em></strong><em>- Detect and remediate cloud resource misconfigurations</em></blockquote><p><strong><em>Quick tip → </em></strong>Hand over the maintenance of security-control documentation to the respective security engineering teams in your organization (Vault, IAM, PKI, etc.) to reduce the operational overhead of such documentation on your team. Also, hand over the mapping of compliance solutions to your organization’s GRC team.</p><h4>Phase #4 Make it easy for developers to use your reference architecture and track adoption</h4><p>Use automation to convert your “How to” guides into one-click templates, or create “Hello World” templates that are fully integrated with all the security controls proposed in your reference architecture. You can use any of the following to accomplish this, starting with the least resource-intensive:</p><ul><li>Use already existing scaffolding tooling in your company. Partner with the scaffolding team engineers and automate your security controls</li><li>Offer your controls as infrastructure-as-code (IaC) scripts</li><li>Create your own scaffolding templates</li></ul><p>Use metrics to track adoption. Some example metrics include — new teams adopting your scaffolded reference architecture templates, increase in documentation views for “How to” guides, increase in Git forks, etc.</p><p><strong><em>Quick tip → </em></strong>Do not reinvent the wheel with automation or scaffolding. Use your existing organizational tooling where applicable.</p><h3>Challenges that you might expect to face</h3><ul><li><strong>Operational and maintenance challenges for the sample app</strong> — You are a security architecture team, not an application deployment team. Your sample app needs to follow organizational CI/CD best practices and also needs to maintain patching and other ongoing operational requirements. 
This means extra overhead, but it also gives your team a better understanding of developers’ daily routine and thus <em>increases your team’s developer empathy</em>.</li><li><strong>Security controls becoming stale over time — </strong>Your team wants to develop new reference architectures but is stuck updating and maintaining the previous one. One possible solution is handing over maintenance to the individual security engineering teams.</li><li><strong>Not enough developers using your reference architectures</strong> — It is important to understand developer requirements before jumping into a reference architecture. Talk to developers who have gone through the process and ask for their lessons learned from integrating security. Then make it easy to adopt your reference architecture through tooling or sample code. Refrain from using only theory to educate developers.</li><li><strong>Handling scope </strong>— If your team is not full-stack security, scope your reference architectures accordingly and make clear which aspects of security your reference architecture solves for, while caveating the missing areas.</li></ul><p>So there you have it — some guidance for Security Architects looking to roll out their own reference architectures for developers. Please feel free to leave comments if you have questions or suggestions to improve this read.</p><h3>References</h3><p><a href="https://www.weave.works/docs/cloud/latest/tasks/deploy/sockshop-deploy/">Sock Shop by Weaveworks</a></p><p><a href="https://github.com/GoogleCloudPlatform/microservices-demo">Online Boutique by GCP</a></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[SSRF attack on AWS: Replaying Capital One hack for stealing EC2 metadata]]></title>
            <description><![CDATA[<div class="medium-feed-item"><p class="medium-feed-image"><a href="https://ab-lumos.medium.com/ssrf-attack-on-aws-technical-demo-for-stealing-ec2-metadata-4910dafafdee?source=rss-4456e8332d99------2"><img src="https://cdn-images-1.medium.com/max/1039/1*j4HJNKucPVApeFOAQvFmcw.jpeg" width="1039"></a></p><p class="medium-feed-snippet">This is a technical demo of performing an SSRF attack on your test AWS account in order to remotely retrieve the metadata stored on EC2</p><p class="medium-feed-link"><a href="https://ab-lumos.medium.com/ssrf-attack-on-aws-technical-demo-for-stealing-ec2-metadata-4910dafafdee?source=rss-4456e8332d99------2">Continue reading on Medium »</a></p></div>]]></description>
            <link>https://ab-lumos.medium.com/ssrf-attack-on-aws-technical-demo-for-stealing-ec2-metadata-4910dafafdee?source=rss-4456e8332d99------2</link>
            <guid isPermaLink="false">https://medium.com/p/4910dafafdee</guid>
            <category><![CDATA[capital-one]]></category>
            <category><![CDATA[demo]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[hacks]]></category>
            <category><![CDATA[cybersecurity]]></category>
            <dc:creator><![CDATA[Anunay Bhatt]]></dc:creator>
            <pubDate>Sat, 17 Aug 2019 18:05:30 GMT</pubDate>
            <atom:updated>2019-09-17T18:17:40.534Z</atom:updated>
        </item>
        <item>
            <title><![CDATA[Introduction to Hashing and how to retrieve Windows 10 password hashes]]></title>
            <link>https://ab-lumos.medium.com/introduction-to-hashing-and-how-to-retrieve-windows-10-password-hashes-9c8637decaef?source=rss-4456e8332d99------2</link>
            <guid isPermaLink="false">https://medium.com/p/9c8637decaef</guid>
            <category><![CDATA[security]]></category>
            <category><![CDATA[hashing]]></category>
            <category><![CDATA[windows-server-2016]]></category>
            <category><![CDATA[windows]]></category>
            <category><![CDATA[hashing-and-salting]]></category>
            <dc:creator><![CDATA[Anunay Bhatt]]></dc:creator>
            <pubDate>Wed, 03 Jul 2019 15:44:04 GMT</pubDate>
            <atom:updated>2019-07-03T15:59:30.732Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*k7nURKt5QW03SXeSA-IgnQ.jpeg" /></figure><p>In the security world, you might have heard of the exploit used by hackers to reveal passwords from their hashed counterparts. We call this technique password cracking or, in practice, ‘password guessing’. Even with the complexity of password controls put in place by organizations today, this threat is very much real. This tutorial is intended for any security-minded individual who wants to learn more about how hackers are able to crack Windows-stored user passwords.</p><h3><strong>Introduction to hashing, rainbow tables</strong></h3><p>Hashing is a software process that generates a fixed-length hash value from an input such as a text file. It is a <em>one-way</em> function, meaning the original text cannot be recovered from the hash value. The hash value is used to verify the integrity of the original text when it is sent over a communication medium. For example, when A sends a text message to B, A first creates a SHA-2 (<em>a popular hashing algorithm</em>) hash of the message and sends it along with the message. When B receives the message, B also creates a hash of the text message using the same SHA-2 algorithm and compares it with the hash provided by A. If the hashes match, B can rest assured that the original message has not been corrupted along the way.</p><p>Application engineers also use this technique for securing the passwords of users logging into their systems. Instead of storing passwords in the back-end database in clear text, <strong>password hashes</strong> are stored. This protects clear-text passwords from internal application developers and also from hackers in case they are able to breach the database. Hackers are cognizant of this process and have a lot of tools in their arsenal to efficiently <em>guess</em> the passwords from the hashes. 
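To make the guess-and-compare idea concrete, here is a minimal Python sketch of dictionary-style password guessing (SHA-256 stands in for whatever hash the target system uses, and the target password and wordlist are made up for illustration):

```python
import hashlib

def crack(target_hash, wordlist):
    """Hash each candidate password and compare it against the target hash."""
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate  # guessed correctly
    return None  # no candidate matched

# Example: a "stolen" hash of a weak password
stolen = hashlib.sha256(b"socks123").hexdigest()
print(crack(stolen, ["password", "letmein", "socks123"]))  # socks123
```

Real cracking tools such as John the Ripper do exactly this, only with heavily optimized hashing and far larger wordlists.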
I use the word ‘guess’ because, remember, hashes are one-way functions; you can’t decode them the way you can decrypt an encrypted string. You would need to create a hash of a guessed password and compare it to the extracted hash to determine whether you have guessed correctly.</p><p>Free online tables are available that store the password hashes of common passwords, which can make a hacker’s job a lot easier if people are not serious about password complexity. These tables are called <strong>rainbow tables</strong> or hash tables. For complex passwords, there are free tools that use a brute-force approach of comparing hashes of multiple combinations of text. Regardless of the approach being used, it is fair to state that password hashes are NOT SAFE in the hands of a malicious hacker.</p><h3>Windows hashing basics</h3><p>You really need to know only the following three basic concepts before extracting Windows hashes:</p><p><strong>LM hash</strong></p><p>The LAN Manager (LM) hash is an old and weak Windows technique for creating hashed passwords, which has been disabled by default in current Windows environments. But it can still be enabled manually on current systems — see the Microsoft documentation on how to protect your systems from using it:</p><p><a href="https://docs.microsoft.com/en-us/windows/security/threat-protection/security-policy-settings/network-security-do-not-store-lan-manager-hash-value-on-next-password-change">Network security Do not store LAN Manager hash value on next password change (Windows 10)</a></p><p>The reason the LM hash is easier to break is that passwords are not case-sensitive, password length is at most 14 characters and, more importantly, the password is split into two seven-character halves that are hashed separately and concatenated. So if your password is less than seven characters, it should be a breeze for a hacker to guess the password. 
[1]</p><p><strong>NT hash or NTLM hash</strong></p><p>The New Technology (NT) LAN Manager hash is the newer and more secure way of hashing passwords used by current Windows operating systems. It first encodes the password using UTF-16-LE and then hashes it with the MD4 hashing algorithm.</p><p>If you need to know more about Windows hashes, the following article makes them easy to understand [2].</p><p><strong>SAM database file</strong></p><p>The Security Account Manager (SAM) is the database file that stores users’ passwords in hashed format. You need access to this file in order to retrieve hashes from your local or remote Windows machine [3].</p><h3>Extracting local hashes from Windows Server 2016</h3><p>In this section, I will show you how to extract hashed passwords from your Windows desktops using a very popular and powerful tool — mimikatz. The screenshots are from Windows Server 2016.</p><p><em>Step 1: Download mimikatz</em></p><p>Binaries are available at — <a href="https://github.com/gentilkiwi/mimikatz/releases">https://github.com/gentilkiwi/mimikatz/releases</a></p><p><em>Step 2: Run regedit</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/609/1*eIjCrl59pmq6wXe0rO3flQ.png" /></figure><p><em>Step 3: Navigate to HKEY_LOCAL_MACHINE and export the SAM registry file and the SYSTEM registry file to the same directory as the mimikatz installation. 
Save the files as “Registry hive files”</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/550/1*FHp1KGqyrYDXwInzuQzHeQ.png" /></figure><p>Your mimikatz directory should look as below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/872/1*IZF6LQjK6nkezPrmkxQgkw.png" /></figure><p><em>Step 4: Run mimikatz.exe and type the “lsadump::sam” command followed by the file paths of the sam and system files:</em></p><blockquote>lsadump::sam sam3.hiv system.hiv</blockquote><p>If you get an error as below, you will need to elevate the permissions of mimikatz</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7ppcJv_dvEcPjT4KJCCbnQ.png" /></figure><p><em>Step 5: Type “token::elevate” to elevate the permissions</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3q_rDiTNhODjIMw6uMKIYg.png" /></figure><p><em>Step 6: Type the lsadump command again and you should now see the hash values of local users</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/662/1*eNGnvks30byRBDv_pLzJeg.png" /></figure><h3>Confirm if you got the right hash</h3><p>Use Windows commands to create local users and extract the generated NTLM hash using the above process. 
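Since an NTLM hash is, as described above, just MD4 over the UTF-16-LE encoding of the password, you can also compute it locally to cross-check the value mimikatz printed. A minimal Python sketch (note: some modern OpenSSL builds ship with MD4 disabled, in which case hashlib cannot provide it):

```python
import hashlib

def ntlm_hash(password):
    """NTLM = MD4(UTF-16-LE(password)). Returns None if MD4 is unavailable."""
    try:
        md4 = hashlib.new("md4")
    except ValueError:  # MD4 disabled in this OpenSSL build
        return None
    md4.update(password.encode("utf-16-le"))
    return md4.hexdigest().upper()

# The NTLM hash of the literal string "password" is a widely published
# test vector: 8846F7EAEE8FB117AD06BDD830B7586C
print(ntlm_hash("password"))
```

This is handy for a quick sanity check against the hashes you extracted above, without relying on an online utility.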
Once you have the hash, use the online utility below to generate hashes yourself and confirm that they match.</p><p><a href="https://www.browserling.com/tools/ntlm-hash">https://www.browserling.com/tools/ntlm-hash</a> [4]</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YfjIcml9E9g0_WhbmD5agw.png" /></figure><p><strong>Windows commands for user and password modifications:</strong></p><p>List all users → net user</p><p>Add a user → net user username password /add</p><p>Update a user’s password → net user username newpassword</p><p><strong>Other tools that can be used in place of mimikatz:</strong></p><p>HashSuite, fgdump, pwdump2</p><p><strong>Password cracking/guessing tools</strong>:</p><p>L0phtCrack, Cain and Abel, John the Ripper</p><h3>A quick note on <strong>Salting</strong></h3><p>Salting is a quick way of increasing the security of your hashed passwords. Passwords are first concatenated with a randomly generated set of bits (the salt) and then the hash is calculated. Even if two users have the same password, they will have different hashes, since the salt is randomly generated for each user. Salting also protects against rainbow tables, since a table would now have to contain hashes of every “salt.password” combination, which is impractical for a long and random salt value. [5]</p><h3><strong>Sources</strong></h3><ol><li><a href="https://en.wikipedia.org/wiki/LAN_Manager">https://en.wikipedia.org/wiki/LAN_Manager</a></li><li><a href="https://medium.com/@petergombos/lm-ntlm-net-ntlmv2-oh-my-a9b235c58ed4">https://medium.com/@petergombos/lm-ntlm-net-ntlmv2-oh-my-a9b235c58ed4</a></li></ol><p>3. <a href="https://en.wikipedia.org/wiki/Security_Account_Manager">https://en.wikipedia.org/wiki/Security_Account_Manager</a></p><p>4. <a href="https://www.browserling.com/tools/ntlm-hash">https://www.browserling.com/tools/ntlm-hash</a></p><p>5. 
<a href="https://en.wikipedia.org/wiki/Salt_(cryptography)">https://en.wikipedia.org/wiki/Salt_(cryptography)</a></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[AWS-Azure-GCP Services comparison (2019)]]></title>
            <link>https://ab-lumos.medium.com/aws-azure-gcp-services-comparison-2019-9565f060afbb?source=rss-4456e8332d99------2</link>
            <guid isPermaLink="false">https://medium.com/p/9565f060afbb</guid>
            <category><![CDATA[gcp]]></category>
            <category><![CDATA[solution-architect]]></category>
            <category><![CDATA[multi-cloud]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[azure]]></category>
            <dc:creator><![CDATA[Anunay Bhatt]]></dc:creator>
            <pubDate>Thu, 16 May 2019 00:40:23 GMT</pubDate>
            <atom:updated>2019-05-21T19:48:39.609Z</atom:updated>
            <content:encoded><![CDATA[<h3>AWS Azure GCP Services comparison (2019)</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*NozHvtwPdtyAOtKpmdj_vw.jpeg" /></figure><p>Hello multi-clouders! I recently started working on a Microsoft Azure engagement and wanted a quick introduction to the Azure service landscape. So I started mapping Azure services to the two public cloud platforms I am most familiar with — AWS and Google Cloud.</p><p>I have initially started with the key cloud computing areas — Compute, Storage, Database, Networking, DevOps, Governance and Security. Hopefully in future updates I can incorporate other areas — IoT, Big Data, Machine Learning and Analytics. But for now, I want to share with you the baby steps I have taken towards mapping cloud services across the three public cloud platforms.</p><p>I am providing screenshots of the Excel sheet since Medium does not have good table support, but if you like the content, feel free to use the Airtable link below to download the CSV.</p><blockquote><a href="https://airtable.com/shrKu5pUNq1Fx4XBb"><strong>Airtable with raw CSV data</strong></a></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1OXiDIz_jBnBDHkLru2uAA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*yZXheZp_7UvQq_rccz0WWQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Ta-kQOC60PePUB2tWWc6TQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MTfwvVi900d9k3zk4jQ8Cw.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*npPpSv6ssBDUhT9HoKCTsQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BlHQdSV-NpTDLhsEkA5INQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*oC1j97L7EccxISYYllgFKg.png" /></figure>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Cloud Migration Strategy — Networking (Part 1)]]></title>
            <link>https://ab-lumos.medium.com/cloud-migration-strategy-networking-part-1-a5eb916c41fe?source=rss-4456e8332d99------2</link>
            <guid isPermaLink="false">https://medium.com/p/a5eb916c41fe</guid>
            <category><![CDATA[connectivity]]></category>
            <category><![CDATA[data-migration]]></category>
            <category><![CDATA[cloud-migration]]></category>
            <category><![CDATA[cloud-networking]]></category>
            <category><![CDATA[public-cloud]]></category>
            <dc:creator><![CDATA[Anunay Bhatt]]></dc:creator>
            <pubDate>Mon, 14 Jan 2019 05:26:32 GMT</pubDate>
            <atom:updated>2019-01-14T05:59:18.460Z</atom:updated>
            <content:encoded><![CDATA[<h3>Cloud Migration Strategy — Networking (Part 1)</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uQApsofAAktELyXr5j5thA.jpeg" /></figure><p>Cloud migration can be a nerve-wracking experience for organizations looking to move their on-premises resources to the cloud. In this article I will talk about some of the most important networking considerations to keep in mind before you start developing your own strategy for cloud migration (specifically targeted towards AWS and GCP). This is a two-part series, with the first part focusing on cloud connectivity and the second geared towards networking in the cloud.</p><h3><strong>Cloud Migration background</strong></h3><p>Organizations today have multiple use cases for choosing the cloud over their own infrastructure. Some examples are:</p><ol><li>Utilizing cloud compute and storage services for Big Data processing jobs, which can be pretty expensive to run on-premises.</li><li>Backing up or archiving data to the cloud for cheap, durable and highly available storage.</li><li>Developing a Disaster Recovery solution in the cloud to act as an active-active or active-passive replication system.</li><li>Migrating the entire infrastructure to the cloud for the long haul, or employing a hybrid approach.</li></ol><p>Whatever your cloud migration use case may be, one thing is clear: networking is the most important capability for making the migration happen seamlessly and, most importantly, securely.</p><h3><strong>Connectivity and Data Migration to the Cloud</strong></h3><p>One of the first and most critical pieces of any cloud migration is the connectivity between the on-premises data center and the cloud environment. Efficient and reliable connectivity helps solve the data migration challenges that are critical to any cloud migration project. Let’s take the cloud migration use case where an organization wants to leverage the cloud’s compute capability to run big data solutions. 
In this scenario, possibly petabytes of data need to be transferred from the on-premise network to cloud services like an AWS S3 or Google Cloud Storage bucket. A good connectivity strategy helps ensure that this data is transferred quickly and reliably. Let us first review some of the options that cloud service providers (CSPs) offer today to fulfill such connectivity needs:</p><ol><li><strong>CSP-managed VPN:</strong> Both AWS and GCP allow the creation of a managed VPN connection over the public Internet using the IPSec protocol suite. A virtual private gateway is created on the cloud provider’s side and is then connected to an on-premise router using the authentication and encryption configurations of IPSec. The cloud provider takes care of redundancy and high availability on its side by automatically replicating VPN endpoints to two different data centers. The virtual private gateway also supports dynamic routing via the BGP routing protocol, so the router automatically learns new routes and does not need to be reconfigured when the network topology changes. Amazon also offers the <em>VPN CloudHub</em> service to create a hub-and-spoke model by connecting multiple on-premise data centers through a single cloud gateway.</li><li><strong>Customer-managed VPN:</strong> Customers can also deploy their own VPN solutions on virtual machines in the cloud to create an IPSec VPN tunnel to their on-premise network. It goes without saying that customers are then responsible for building redundancy into their design by deploying the VPN endpoints across multiple availability zones.</li><li><strong>Private Network Connection:</strong> This is the fastest (lowest latency) and most reliable option: a dedicated fiber connection to the CSP’s endpoint. Network speeds range from 50 Mbps up to 10 Gbps per connection.
The AWS service for dedicated network connections is called <em>Direct Connect</em>; in the GCP world, it is called <em>Google Cloud Interconnect</em>.</li><li><strong>Data transfer via connection with cloud storage endpoints:</strong> For customers looking to transfer data to cloud storage services like an AWS S3 or GCS bucket, there are various tools available to make this a smooth process. Both S3 and GCS offer a GUI and CLI commands to upload files directly to a bucket in your desired cloud region, using SSL to encrypt data in transit. On the AWS side these include the <em>S3 CLI</em> and <em>Glacier CLI</em> (often used in rsync-style sync workflows); in the GCP world, we have <em>gsutil</em> and the <em>Storage Transfer Service</em>. While this works well for short-distance transfers, it is not a good solution for moving data across long distances. Amazon offers <em>S3 Transfer Acceleration</em>, which speeds up long-distance transfers by routing data over an optimized network path. Amazon also provides a service called <em>Storage Gateway</em>, which establishes a permanent and seamless gateway between your on-premise applications and AWS S3, EBS and Glacier; this is most suitable for organizations working towards a hybrid cloud storage model.</li><li><strong>Data transfer using physical transport (offline):</strong> For customers wanting a fast and secure way to transfer petabyte-scale data without creating a connection to the cloud provider, there are options to get the data physically transported to the cloud provider’s edge location. Even with the high-speed online connections described above, it can take days or even weeks to transfer humongous amounts of data to the cloud, which is a pretty common scenario with big data solutions today. For example, a 1 Gbps network connection would take around 12 days to transfer 100 TB of data.
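As a back-of-the-envelope check of that figure, a small sketch (the 80% sustained-utilization factor is an assumption, not a number from any CSP):

```python
def transfer_days(data_tb, link_gbps, utilization=0.8):
    """Estimated days to move data_tb terabytes over a link_gbps connection."""
    bits = data_tb * 1e12 * 8                        # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * utilization) # effective throughput
    return seconds / 86400                           # seconds -> days

print(round(transfer_days(100, 1)))  # 100 TB over 1 Gbps -> about 12 days
```

Even a tenfold faster link only brings this down to about a day, which is why offline transport remains attractive at petabyte scale.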
Using tamper-resistant, encrypted physical devices to securely transport the data can significantly shorten the data migration time frame. The AWS service for physical data migration is called <em>AWS Snowball</em>; in the GCP world, it is called the <em>Google Transfer Appliance</em>.</li></ol><p>In the next part of this series, we will focus on some of the things to keep in mind once connectivity to the cloud is established and we are ready for the next level: <strong>Networking in the Cloud</strong>. This is the fun world where vital networking functions are created with a few clicks on the interface!</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building Serverless Websites in AWS]]></title>
            <link>https://ab-lumos.medium.com/building-serverless-websites-in-aws-eaede9055d88?source=rss-4456e8332d99------2</link>
            <guid isPermaLink="false">https://medium.com/p/eaede9055d88</guid>
            <category><![CDATA[lambda]]></category>
            <category><![CDATA[serverless]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[api-integration]]></category>
            <category><![CDATA[rest-api]]></category>
            <dc:creator><![CDATA[Anunay Bhatt]]></dc:creator>
            <pubDate>Tue, 08 Jan 2019 02:26:08 GMT</pubDate>
            <atom:updated>2019-01-08T02:40:16.335Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ScfYXUbKpWTctMr42P0Rlw.jpeg" /></figure><p>While researching RESTful APIs, I found the use case of AWS services for building serverless websites very appealing. It can help small to mid-size organizations reduce their operational overhead while delivering highly scalable and reliable services to customers. In this blog post, I will document the steps to develop such a serverless website in AWS.</p><h4>Step 1: Creating a static web front-end</h4><p>We will use the unique public URL of an AWS S3 bucket as our web front-end. The S3 bucket simply hosts the static content of the website: HTML files, image files, CSS files, etc.</p><ol><li>Log in to AWS and select your preferred AWS region.</li><li>Create a bucket with the default configuration in that region.</li><li>Upload the contents of your static website into the S3 bucket — CSS, JavaScript, HTML, and images. Your S3 bucket should look like this after the procedure:</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hz-Zof5nriUk0ZKia7R0ZA.png" /><figcaption>S3 bucket after content upload</figcaption></figure><p>4. Add a bucket policy to allow reads from the public. Below is the JSON your bucket policy must contain to allow public READ access:</p><pre>{<br>    &quot;Version&quot;: &quot;2012-10-17&quot;,<br>    &quot;Statement&quot;: [<br>        {<br>            &quot;Effect&quot;: &quot;Allow&quot;, <br>            &quot;Principal&quot;: &quot;*&quot;, <br>            &quot;Action&quot;: &quot;s3:GetObject&quot;, <br>            &quot;Resource&quot;: &quot;arn:aws:s3:::[YOUR_BUCKET_NAME]/*&quot; <br>        } <br>    ] <br>}</pre><p><em>Note: Depending on your bucket settings, you may or may not be able to do the above operation.
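The policy above can also be generated and sanity-checked programmatically; a minimal sketch ("my-static-site" is a placeholder bucket name, not one from this walkthrough):

```python
import json

def public_read_policy(bucket_name):
    """Build the public-read S3 bucket policy for the given bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            }
        ],
    }

# Print the policy ready to paste into the bucket-policy editor:
print(json.dumps(public_read_policy("my-static-site"), indent=4))
```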
If an error like ‘Access Denied’ comes up while updating the bucket policy, first update the </em><strong><em>Public Access Settings</em></strong><em> to remove the block on changing the bucket policy.</em></p><p>5. To let users see the contents of the bucket as if they were served from a website, you need to <strong>enable static website hosting</strong> on your S3 bucket. This can be done from the Properties tab of the bucket:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/752/1*qg4W2FEg6j0cqsI8aKdMMA.png" /><figcaption>Enabling static website hosting on S3 bucket</figcaption></figure><p>6. Visit your S3 website endpoint to see your static website at:</p><p><a href="http://serverless-website-test-retful.s3-website-us-west-1.amazonaws.com">http://&lt;bucket-name&gt;.s3-website-&lt;region-name&gt;.amazonaws.com</a></p><h4>Step 2: Creating a serverless backend process</h4><p>Amazon offers a fully managed SQL database through RDS and a fully managed NoSQL database through DynamoDB. You can use either one for your serverless solution, depending on your use case. For application-level logic like getting or putting data from these databases, Amazon offers a very neat and powerful solution: <strong>AWS Lambda</strong>. In this section, we will create a new NoSQL table in DynamoDB and test inserting data into it through AWS Lambda.</p><ol><li>Go to DynamoDB, select your region, and create a table.</li><li>Create an IAM role for AWS Lambda and grant it the policies below:</li></ol><p>a. AWS managed policy: <a href="https://console.aws.amazon.com/iam/home?region=us-west-1#/policies/arn%3Aaws%3Aiam%3A%3Aaws%3Apolicy%2Fservice-role%2FAWSLambdaBasicExecutionRole">AWSLambdaBasicExecutionRole</a></p><p>b.
Inline policy with the below JSON, allowing PUT permission on the DynamoDB table:</p><pre>{<br>    &quot;Version&quot;: &quot;2012-10-17&quot;,<br>    &quot;Statement&quot;: [<br>        {<br>            &quot;Sid&quot;: &quot;VisualEditor0&quot;,<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Action&quot;: &quot;dynamodb:PutItem&quot;,<br>            &quot;Resource&quot;: &quot;&lt;dynamoDB table ARN&gt;&quot;<br>        }<br>    ]<br>}</pre><p>3. Create a Lambda function in your preferred region, attaching the role created above and selecting your preferred programming language.</p><p>4. Code your Lambda function to receive a POST request, capture the necessary details in variables, and put them into DynamoDB/RDS using the AWS SDKs.</p><p>5. To test this implementation, create a Test Event matching the details of the POST request. Run the Lambda function against the test event and verify that the values appear in the back-end database.</p><h4>Step 3: Expose the Lambda function as a RESTful API</h4><p>The back-end process we created in Step 2 is stand-alone and is not connected to the front-end from Step 1. Before the birth of API Gateway, AWS Lambda was just a cloud event-driven coding platform; API Gateway has revolutionized the way Lambda can deliver application logic outside of cloud events. Here, we will use API Gateway to expose the functionality of the Lambda function from Step 2 as a RESTful API.</p><ol><li>Inside the AWS console, click on API Gateway and create a new REST API.</li><li>Create a new resource inside the REST API and assign it the desired resource path.</li><li>Create a new method on that resource to specify the REST API method type (POST, GET, etc.). Then select Lambda as the integration type and choose the function you created in Step 2.</li><li>Deploy your API and copy the generated Invoke URL for use in your website configuration files.</li></ol><p>For the purpose of this demonstration, we have set the API’s authorization to None.
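A Lambda handler along the lines described in Step 2 might look like the following sketch (the field names are hypothetical, and the boto3 put_item call is left as a comment so the snippet runs locally without AWS credentials):

```python
import json

def lambda_handler(event, context):
    # With API Gateway's Lambda proxy integration, the POST body arrives
    # as a JSON string under the "body" key of the event.
    body = json.loads(event.get("body") or "{}")
    item = {"id": body.get("id"), "name": body.get("name")}
    # In the real function, persist the item with the AWS SDK, e.g.:
    # boto3.resource("dynamodb").Table("my-table").put_item(Item=item)
    return {"statusCode": 200, "body": json.dumps({"saved": item})}

# Local equivalent of the console Test Event from step 5:
event = {"body": json.dumps({"id": "42", "name": "widget"})}
print(lambda_handler(event, None))
```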
However, in most cases you will need to protect your APIs using JWT authorization, API keys, or other methods as necessary.</p>]]></content:encoded>
        </item>
    </channel>
</rss>