<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by M. G. on Medium]]></title>
        <description><![CDATA[Stories by M. G. on Medium]]></description>
        <link>https://medium.com/@prowler?source=rss-16c2355d770------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/0*3HulVFI4IxtNQSNK</url>
            <title>Stories by M. G. on Medium</title>
            <link>https://medium.com/@prowler?source=rss-16c2355d770------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Fri, 15 May 2026 09:01:15 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@prowler/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Game of Encapsulations]]></title>
            <link>https://medium.com/@prowler/game-of-encapsulations-e0385a11f6ff?source=rss-16c2355d770------2</link>
            <guid isPermaLink="false">https://medium.com/p/e0385a11f6ff</guid>
            <category><![CDATA[vxlan]]></category>
            <category><![CDATA[nsx]]></category>
            <category><![CDATA[geneve]]></category>
            <category><![CDATA[cisco]]></category>
            <category><![CDATA[vmware]]></category>
            <dc:creator><![CDATA[M. G.]]></dc:creator>
            <pubDate>Sun, 26 Apr 2020 17:55:26 GMT</pubDate>
            <atom:updated>2020-04-26T17:55:26.999Z</atom:updated>
<content:encoded><![CDATA[<p>First there was a big, flat network full of broadcast packets running around, which was not practical as they consumed precious bandwidth. To mitigate the problem, VLANs were invented: each VLAN was an independent broadcast domain, which reduced broadcasts to a single subnet (VLAN) and allowed isolation and segmentation of resources. One of the biggest problems with traditional L2 networks is that, due to the nature of L2 networks, half of the links will always be blocked. The introduction of VLANs brought new protocols (trunking, DTP, VTP, per-VLAN STP, etc.) which increased complexity and made L2 networks even harder to troubleshoot and monitor. There was a size limitation as well: the VLAN ID field is 12 bits, which limits the number of VLANs to 4094. Service providers started using double VLAN tagging to be able to use (tunnel) one core VLAN for multiple customer VLANs (QinQ). MPLS did its part by tunneling whatever the customers wanted via MPLS tunnels, hence making VLANs inside service provider networks irrelevant. All of that gave some breathing space and postponed the introduction of new technologies.</p><p>As big data centers and cloud providers started to take off, VLANs were simply not good enough: too small, too complex, and deterministic traffic engineering was difficult. To overcome the problems of VLANs, VXLAN was introduced. Nothing better than solving L2 problems by tunneling L2 traffic over L3. VXLAN standardization was proposed by representatives of different hardware and software companies (Cumulus Networks, Arista, Cisco, VMware, RedHat, etc.). VXLAN has a 24-bit identifier which gives the option to create over 16 million Layer 2 segments (VXLAN Network Identifiers — VNIs). 
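The ID-space numbers above follow directly from the width of the identifier field; a quick sketch (the helper function is illustrative, not from any library):

```python
# Usable segment IDs for a given ID field width.
def segment_count(id_bits: int, reserved: int = 0) -> int:
    return 2 ** id_bits - reserved

# 802.1Q reserves VLAN IDs 0 and 4095, leaving 4094 usable VLANs.
print(segment_count(12, reserved=2))   # 4094
# A 24-bit VNI field gives over 16 million Layer 2 segments.
print(segment_count(24))               # 16777216
```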
A VTEP (VXLAN tunnel endpoint) would check the frame destination, encapsulate the frame in a VXLAN header and send it across the Layer 3 network to the destination VTEP, which would strip off the VXLAN header and send the frame to the destination host using traditional protocols. As with VLANs, virtual machines on the same VNI can communicate directly with each other, whereas virtual machines on different VNIs need a router to communicate. VTEPs can be physical devices (hardware VTEPs) or they can run in a hypervisor (software VTEPs). One of the biggest problems VXLAN had was the lack of an intelligent control plane: it used the flood-and-learn method to map VTEPs to MAC addresses and/or sent all information to a centralized controller (in the case of SDN), which made VXLAN extremely difficult to scale.</p><p>To solve the problem, an intelligent, scalable control plane had to be introduced, and of course the solution was to use the “Trashcan of the Internet” — BGP. Combining EVPN as a control plane with VXLAN as a data plane has made the solution more scalable, as an EVPN address family in BGP is used to populate both VTEP IP addresses and end-host MAC addresses. Now you could have millions of route entries across thousands of devices and at the same time utilize all the nice features that EVPN has to offer, like active/active setups, load balancing, mass withdrawal, route reflectors, etc. The EVPN+VXLAN RFC was created by Cisco, Juniper, Nokia, and AT&amp;T. As one can notice, no “software” companies were involved: for this combination to work properly, basic and advanced routing features had to be in place, which most of the software solutions still lack. <br> Soon after it was standardized, some shortcomings of VXLAN started to be exposed, like insufficient flexibility in the header, lack of OAM features, single-protocol support, etc., and a new encapsulation appeared, called GENEVE (Generic Network Virtualization Encapsulation). 
Its specification explains that this is a purely data plane protocol, leaving control plane integration unspecified. It was designed to offer maximum flexibility and it covered all the shortcomings of VXLAN, introducing protocol, OAM and other fields inside the header, as well as TLVs for adding extra information and passing it between tunnel endpoints. It sounds like a great protocol, proposed by VMware, RedHat, Intel and Microsoft, but “hardware” companies were not so ecstatic about it. As GENEVE does not have a fixed header length, hardware implementations are not efficient. While “software” companies don’t really care about that, as frames are processed in software anyway and TLVs give them the flexibility to send any metadata across, “hardware” companies were losing one of the biggest advantages they have had, i.e. fast processing of data in hardware.</p><p>Around the same time GENEVE was proposed, “hardware” companies came up with their own proposal, VXLAN-GPE (Generic Protocol Extension for VXLAN). This was an extension of the VXLAN protocol that included a next-protocol field, an OAM flag bit, BUM traffic handling, etc. The protocol field supports IPv4, IPv6, Ethernet and NSH. NSH (Network Service Header) is a mechanism for carrying metadata. The biggest difference is that VXLAN-GPE with NSH must include all information inside fixed-size fields.</p><p>Two similar protocols, one better adapted to hardware usage, the other to software-processed packets. Which protocol will get more exposure and become the standard, nobody can say at the moment. Even so, <a href="https://www.theregister.co.uk/2020/04/17/open_letter_to_internet_engineering/">some things</a> might hint at who stands the greater chance.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e0385a11f6ff" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[GCP, AZURE, AWS Virtual Networking overview]]></title>
            <link>https://medium.com/@prowler/gcp-azure-aws-virtual-networking-overview-a282db89468b?source=rss-16c2355d770------2</link>
            <guid isPermaLink="false">https://medium.com/p/a282db89468b</guid>
            <category><![CDATA[networking]]></category>
            <category><![CDATA[cloud-computing]]></category>
            <category><![CDATA[gcp]]></category>
            <category><![CDATA[azure]]></category>
            <category><![CDATA[aws]]></category>
            <dc:creator><![CDATA[M. G.]]></dc:creator>
            <pubDate>Wed, 08 Apr 2020 08:53:08 GMT</pubDate>
            <atom:updated>2020-04-08T08:53:08.355Z</atom:updated>
<content:encoded><![CDATA[<p>The aim of this article is to provide an overview of the virtual networking concepts of the three biggest cloud platforms: Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP). The table below contains a summary of these concepts.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WKYbcqvSN5mEMEfVTusW0w.jpeg" /></figure><h3>Virtual Network</h3><p>Cloud providers like to think of a Virtual Network as a virtual data center. It is the foundation for any user or company who wants to build or migrate IT services into the cloud.</p><p>GCP, Azure and AWS all provide a virtual network service that can be used for launching an on-demand pool of shared compute resources, running them in an isolated, secure environment, and connecting them to the Internet, to other cloud services and to services running on-premises. All the platforms give users some level of flexibility, letting them control part of the virtual networking environment, including IP address ranges, subnets, access control rules and routing. There are certain restrictions imposed on users, which will be highlighted in the paragraphs below.</p><h3>Google VPC</h3><p>Unlike other providers, Google Virtual Private Cloud networks are global resources. There are two different types of GCP VPC:</p><p>- Auto VPC</p><p>- Custom VPC</p><p>In auto mode, GCP will automatically allocate IP ranges for the subnets in each region, while a custom VPC lets the user choose the IP range for each subnet. The drawback of auto mode is that if you have multiple VPCs and need to peer them, it will not be possible, as they will have the same IP subnets.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/478/1*CjMEd3ywvaExmCN2s9nvHA.jpeg" /></figure><p>A VPC can contain multiple subnets, but each subnet can span only a specific region. 
IP address ranges are not defined at the VPC level; instead, each subnet can have any IP address range allocated. There is a possibility to define secondary IP ranges as well. The minimum subnet size supported by Google is /29 while the maximum is /8. Google VPC reserves 4 IP addresses (the first two and the last two) in each subnet that are not allowed to be used by customers.</p><p>Only unicast traffic is supported within a VPC; no broadcast or multicast support is available. VPC currently supports IPv4 traffic only; IPv6 is not supported.</p><p>Creation of a VPC is free of cost.</p><h3>Azure VNet</h3><p>An Azure Virtual Network (VNet) is always inside a single region. Each VNet has an IP address range that is specified when the VNet is created. Azure offers a single type of VNet deployment.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Oozd5etJbQ2rYSsW_3OdHw.png" /></figure><p>A subnet can span the region just like the VNet, and the IP addresses allocated for a subnet need to be a subset of the IP address range defined for the VNet. All subnets are manually created. Both the IPv4 and IPv6 protocols are supported. The minimum subnet size is /29 and the maximum is /8 for IPv4, while IPv6 subnets have a fixed size of /64. Azure takes 5 IP addresses (the first 4 and the last one) in each subnet for its own operations.</p><p>Just like in GCP, only unicast traffic can run inside the VNet, as broadcast and multicast are not supported.</p><p>Creation of a VNet is free of cost.</p><h3>AWS VPC</h3><p>A VPC is a logically isolated virtual network that is created in an AWS region and spans all the availability zones in the region. There are two types of VPC in AWS:</p><p>- Default</p><p>- Non-default</p><p>The default VPC is a network which is automatically created the first time account resources are provisioned. It has public IP addressing (Internet access) by default and it is used for fast provisioning of services. Only one default VPC is possible. 
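The per-subnet reservations described above (GCP keeps the first two and last two addresses, Azure the first four and the last one) noticeably shrink small subnets; a sketch using Python's standard ipaddress module, with a hypothetical helper:

```python
from ipaddress import ip_network

def usable_addresses(cidr: str, reserved: int) -> int:
    """Addresses left for workloads after the provider's reservations."""
    return ip_network(cidr).num_addresses - reserved

# GCP reserves 4 addresses per subnet (the first two and the last two).
print(usable_addresses("10.0.0.0/29", reserved=4))  # 4
# Azure reserves 5 (the first four and the last one).
print(usable_addresses("10.0.0.0/29", reserved=5))  # 3
```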
A nondefault VPC comes with private IP addressing only; it is created by the customer, not AWS, and needs to be configured before it can be used. 5 nondefault VPCs are allowed per region.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/699/1*GW-sU8NzIZxYILlQO42Etg.png" /></figure><p>Subnets are defined per availability zone and cannot be used in more than one AZ. Both the IPv4 and IPv6 protocols are supported. All subnets must be a subset of the range defined for the VPC. AWS offers a minimum of /28 and a maximum of /16 for IPv4 subnets and a fixed /64 subnet for IPv6. AWS takes 5 IP addresses (the first 4 and the last one) in each subnet for its own operations.</p><p>There is no broadcast support in AWS; only unicast and multicast (via Transit Gateway) traffic is supported.</p><h3>Routing</h3><h3>GCP</h3><p>Google Cloud routes define the paths that network traffic takes from a virtual machine (VM) instance to other destinations. These destinations can be inside your VPC network or outside it. The routing table for a VPC network is defined at the VPC network level. VPC networks have two different types of routes:</p><p>- System-generated routes</p><p>- Custom routes</p><p>System routes are automatically generated when you create a VPC network or add or expand a subnet. They apply to instances inside the VPC.</p><p>A default route is created when you create a VPC. It is created with a low priority (1000) and can be replaced by a custom route with a higher priority. Subnet routes are system-generated routes that define paths to each subnet in the VPC network. Custom routes cannot override subnet routes, as subnet routes always have higher priority; custom routes can only be used for destinations broader than the subnet ranges. Subnet routes are created automatically upon creation of a subnet.</p><p>Custom routes can be divided into static and dynamic (BGP) routes. Static routes can use any static next hop in case you want to do traffic engineering. Dynamic routes can be regional or global. 
Dynamic routes are managed by one or more Cloud Routers. Their destinations always represent IP ranges outside your VPC network, and their next hops are always BGP peer addresses. A Cloud Router can manage dynamic routes for:</p><p>- Cloud VPN tunnels that use dynamic routing</p><p>- Cloud Interconnect</p><h3>AZURE</h3><p>Azure routes traffic between subnets, connected virtual networks, on-premises networks, and the Internet by default.</p><p>Azure has two different types of routes:</p><p>- System routes</p><p>- Custom routes</p><p>Azure automatically creates a route table for each subnet within an Azure virtual network and adds system default routes to the table. You can override some of Azure’s system routes with custom routes. Azure automatically routes traffic between subnets using the routes created for each address range, and there is no need to define gateways for Azure to route traffic between subnets.</p><p>The system default route specifies the 0.0.0.0/0 address prefix. Unless overridden, Azure routes traffic for any address not specified by an address range within a virtual network to the Internet, with the exception of the destination addresses of Azure’s services, which Azure routes via the Azure backbone network. You create custom routes by either creating user-defined routes or by exchanging border gateway protocol (BGP) routes between your on-premises network gateway and an Azure virtual network gateway.</p><p>You can create custom, or user-defined (static), routes in Azure to override Azure’s default system routes or to add additional routes to a subnet’s route table.</p><p>Using BGP with an Azure virtual network gateway depends on the type you selected when you created the gateway. You must use BGP with the ExpressRoute service, while it is optional with the VPN service.</p><p>The route selection algorithm in Azure prefers the route with the longest prefix match. 
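Longest-prefix-match selection can be sketched with Python's standard ipaddress module (the route list and helper below are hypothetical):

```python
from ipaddress import ip_address, ip_network

def best_route(routes, dest):
    """Pick the route whose prefix matches dest with the longest length."""
    addr = ip_address(dest)
    matches = [ip_network(p) for p in routes if addr in ip_network(p)]
    return str(max(matches, key=lambda n: n.prefixlen)) if matches else None

routes = ["0.0.0.0/0", "10.0.0.0/16", "10.0.1.0/24"]
print(best_route(routes, "10.0.1.7"))    # 10.0.1.0/24 (most specific match)
print(best_route(routes, "192.0.2.1"))   # 0.0.0.0/0 (default route)
```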
If multiple routes contain the same prefix, then a user-defined (static) route has the highest priority, followed by BGP routes and system routes.</p><h3>AWS</h3><p>Each VPC has an implicit router and you use route tables to control where network traffic is directed. There are two types of route tables in AWS:</p><p>- Main route table</p><p>- Custom route table</p><p>Each subnet in your VPC must be associated with a route table.</p><p>The main route table automatically comes with your VPC. It controls the routing for all subnets that are not explicitly associated with any other route table. A subnet can only be associated with one route table at a time. By default, when you create a nondefault VPC, the main route table contains only a local route, as nondefault VPCs don’t have Internet connectivity.</p><p>By default, a custom route table is empty and can be populated as required. You can add, remove, and modify routes in a custom route table. A custom route table can be deleted only if it has no associations.</p><p>You can associate a route table with an internet gateway or a virtual private gateway. When a route table is associated with a gateway, it’s referred to as a gateway route table.</p><p>The route selection algorithm in AWS prefers the route with the longest prefix match. Static routes take priority over propagated routes in case both advertise the same prefix.</p><h3>Traffic control</h3><h3>GCP</h3><p>Every Virtual Private Cloud (VPC) network functions as a stateful distributed firewall. While firewall rules are defined at the network level, connections are allowed or denied on a per-instance basis. GCP firewall rules let you allow or deny traffic to and from your virtual machine (VM) instances based on a configuration that you specify. Each firewall rule applies to incoming (ingress) or outgoing (egress) traffic, not both. After a session has been established, firewall rules allow bidirectional communication. 
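GCP's priority model (lower number wins; implied allow-egress and deny-ingress rules sit at the lowest priority, 65535) can be sketched as follows; the rule format here is hypothetical:

```python
# Hypothetical rule model for GCP-style firewall evaluation:
# lower priority number wins; the implied rules (allow all egress,
# deny all ingress) sit at the lowest priority, 65535.
def evaluate(rules, direction, port):
    """Return the action of the best-matching rule for a packet."""
    implied = {"ingress": "deny", "egress": "allow"}
    matching = [r for r in rules
                if r["direction"] == direction and port in r["ports"]]
    if not matching:
        return implied[direction]
    return min(matching, key=lambda r: r["priority"])["action"]

rules = [
    {"direction": "ingress", "ports": {22}, "priority": 1000, "action": "allow"},
    {"direction": "ingress", "ports": {22}, "priority": 2000, "action": "deny"},
]
print(evaluate(rules, "ingress", 22))  # allow (priority 1000 wins)
print(evaluate(rules, "ingress", 80))  # deny (implied ingress rule)
```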
Every VPC network has two implied firewall rules:</p><p>- Allow egress rule</p><p>- Deny ingress rule</p><p>The implied allow egress rule has the lowest priority (65535) and allows all traffic to all destinations, except the traffic blocked by GCP (GRE, SMTP on port 25, etc.). A higher-priority rule can be used to change the default behavior.</p><p>The implied deny ingress rule is likewise configured with the lowest possible priority (65535) and blocks all incoming traffic; some traffic (ICMP, SSH, RDP) is allowed by default rules, but those rules can be deleted or modified.</p><h3>AZURE</h3><p>Azure offers two components to protect traffic inside the VNet:</p><p>- Network Security Groups</p><p>- Application Security Groups</p><p>A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. By default, all incoming communication within VNet subnets is allowed, as is any traffic coming from the AzureLoadBalancer, while all other incoming traffic is blocked. Augmented security rules simplify security definition for virtual networks, allowing you to combine service tags or application security groups. For inbound traffic, Azure processes the rules in a network security group associated to a subnet first and then the rules in a network security group associated to the network interface. 
For outbound traffic, Azure processes the rules in a network security group associated to a network interface first and then the rules in a network security group associated to the subnet.</p><p>Application security groups enable you to group virtual machines and define network security policies based on those groups.</p><h3>AWS</h3><p>Inbound and outbound traffic can be controlled using two different mechanisms:</p><p>- Security groups</p><p>- Network Access Control Lists (ACLs)</p><p>A security group acts as a virtual stateful firewall for your instance to control inbound and outbound traffic. Security groups act at the instance level and are associated with network interfaces, not the subnet level. For each security group you can add rules that control the inbound traffic to instances and a separate set of rules that control the outbound traffic. You can specify allow rules, but not deny rules. By default, security groups deny all inbound and permit all outbound traffic.</p><p>A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. By default, an ACL allows all inbound and outbound IPv4 and IPv6 traffic. Each subnet in your VPC must be associated with a network ACL. If you don’t explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL. The differences between security groups and network ACLs are shown in the table below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/774/1*qO4XUcEcFLLg1nICyYVEcA.jpeg" /></figure><h3>External Network Peering</h3><h3>GCP</h3><h3>Private Google Access</h3><p>Google Cloud provides several private access options. 
Each option allows virtual machine (VM) instances with internal (RFC 1918) IP addresses to reach certain APIs and services:</p><p>- Private Google Access</p><p>- Private Google Access for on-prem hosts</p><p>- Private Service Access</p><p>- Serverless VPC Access</p><p>VM instances that only have internal IP addresses can use Private Google Access. They can reach the external IP addresses of Google APIs and services.</p><p>Private Google Access for on-premises hosts gives on-premises hosts the possibility to reach Google APIs and services by using Cloud VPN or Cloud Interconnect from your on-premises network to Google Cloud. On-premises hosts can send traffic from the following types of source IP addresses:</p><p>- a private IP address, such as an RFC 1918 address</p><p>- a privately used public IP address, except for a Google-owned public IP address</p><p>Private services access enables you to reach Google and third-party (service producer) services with internal IP addresses that are hosted in a VPC network.</p><p>Serverless VPC Access enables you to connect from the App Engine standard environment and Cloud Functions directly to your VPC network. This connection makes it possible for your App Engine standard environment apps and Cloud Functions to access resources in your VPC network via internal IP addresses.</p><h3>VPC Network Peering</h3><p>Google Cloud VPC Network Peering allows private RFC 1918 connectivity across two Virtual Private Cloud (VPC) networks regardless of whether they belong to the same project or the same organization. VPC Network Peering enables you to peer VPC networks so that workloads in different VPC networks can communicate in private RFC 1918 space. 
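The RFC 1918 ranges referenced above can be checked with Python's standard ipaddress module (the helper name is hypothetical):

```python
from ipaddress import ip_address, ip_network

# The three RFC 1918 private address ranges.
RFC1918 = [ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    """True if addr falls inside one of the RFC 1918 private ranges."""
    return any(ip_address(addr) in net for net in RFC1918)

print(is_rfc1918("10.128.0.2"))   # True
print(is_rfc1918("172.32.0.1"))   # False: 172.32.0.0 is outside 172.16.0.0/12
print(is_rfc1918("8.8.8.8"))      # False: public address
```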
VPC Network Peering gives you several advantages over using external IP addresses or VPNs to connect networks, including:</p><p>- Network Latency: All peering traffic stays within Google’s network.</p><p>- Network Security: Service owners do not need to have their services exposed to the public Internet.</p><p>- Network Cost: Google Cloud charges egress bandwidth pricing for networks using external IPs to communicate, even if the traffic is within the same zone.</p><p>VPC peers always exchange all subnet routes, and you can also exchange custom routes (static and dynamic routes).</p><h3>Cloud VPN</h3><p>Cloud VPN securely connects your peer network to your Virtual Private Cloud (VPC) network through an IPsec VPN connection. Google Cloud offers two types of Cloud VPN gateways:</p><p>- HA VPN</p><p>- Classic VPN</p><p>HA VPN is a high-availability (HA) Cloud VPN solution that lets you securely connect your on-premises network to your Virtual Private Cloud network through an IPsec VPN connection in a single region. HA VPN provides an SLA of 99.99% service availability. Cloud VPN can be deployed in an active/active or active/passive routing configuration.</p><p>Classic VPN gateways have a single external IP address and support tunnels using dynamic (BGP) or static routing (route based or policy based). They provide an SLA of 99.9% service availability. Each Cloud VPN tunnel can support up to 3 Gbps.</p><h3>Cloud Interconnect</h3><p>Cloud Interconnect provides low-latency, highly available connections using private IP addresses that enable you to reliably transfer data between your on-premises and Virtual Private Cloud networks. Cloud Interconnect offers two options for extending your on-premises network:</p><p>- Dedicated Interconnect</p><p>- Partner Interconnect</p><p>Dedicated Interconnect provides direct physical connections between your on-premises network and Google’s network and enables you to transfer large amounts of data between networks. 
Cloud Router dynamically exchanges routes between your VPC network and your on-premises network through BGP. To achieve a specific level of reliability, Google has two prescriptive configurations, one for 99.99% availability and another for 99.9% availability.</p><p>For Dedicated Interconnect, connection capacity is delivered over one or more 10 Gbps or 100 Gbps Ethernet connections, with the following maximum capacities supported per interconnect:</p><p>- 8 x 10 Gbps connections (80 Gbps total)</p><p>- 2 x 100 Gbps connections (200 Gbps total)</p><p>Partner Interconnect provides connectivity between your on-premises network and your VPC network through a supported service provider. For Partner Interconnect, the following connection capacities are supported:</p><p>- From 50 Mbps to 10 Gbps per interconnect attachment (VLAN), with up to 8 x 10 Gbps attachments (80 Gbps total)</p><p>Supported service providers offer layer 2 connectivity, layer 3 connectivity, or both.</p><h3>Direct Peering</h3><p>Direct Peering allows you to establish a direct peering connection between your business network and Google’s edge network and exchange high-throughput cloud traffic. Direct Peering exists outside of Google Cloud, and it is recommended to use it in case you need access to G Suite applications.</p><h3>AZURE</h3><h3>Service Endpoints</h3><p>Service endpoints extend your virtual network’s private address space. The endpoints also extend the identity of your VNet to the Azure services over a direct connection. Endpoints allow you to secure your critical Azure service resources to only your virtual networks. 
Traffic from your VNet to the Azure service always remains on the Microsoft Azure backbone network.</p><p>Service endpoints provide the following benefits:</p><p>- Improved security by using private IP addresses to connect</p><p>- Optimal routing, as the traffic uses the Azure backbone</p><p>- Simplified management, as there are no public IPs, NAT, etc.</p><p>By default, Azure service resources secured to virtual networks aren’t reachable from on-premises networks. If you want to allow traffic from on-premises, you must also allow public (typically, NAT) IP addresses from your on-premises network or ExpressRoute.</p><h3>Virtual network peering</h3><p>Virtual network peering enables you to connect Azure virtual networks so that they appear as one. The traffic between virtual machines uses the Microsoft backbone infrastructure. Network traffic between peered virtual networks is private and kept on the Microsoft backbone network.</p><p>Azure supports the following types of peering:</p><p>- Virtual network peering: Connect virtual networks within the same Azure region.</p><p>- Global virtual network peering: Connect virtual networks across Azure regions.</p><p>For peered virtual networks, resources in either virtual network can directly connect with resources in the peered virtual network. Service chaining enables you to direct traffic from one virtual network to a virtual appliance or gateway in a peered network through user-defined routes.</p><h3>VPN Gateway</h3><p>A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet. You can also use a VPN gateway to send encrypted traffic between Azure virtual networks over the Microsoft network. Each virtual network can have only one VPN gateway. 
There are different deployment models for Azure VPN Gateway:</p><p>- Site-to-Site VPN</p><p>- Multi-site</p><p>- Point-to-Site</p><p>- VNet-to-VNet</p><h3>ExpressRoute</h3><p>ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a private connection facilitated by a connectivity provider. ExpressRoute connections do not go over the public Internet. Microsoft uses BGP to exchange routes between your on-premises network, your instances in Azure, and Microsoft public addresses. Each ExpressRoute circuit consists of two connections to two Microsoft Enterprise edge routers at an ExpressRoute Location from the connectivity provider/your network edge.</p><p>ExpressRoute Premium allows you to extend connectivity across geopolitical boundaries.</p><h3>ExpressRoute Direct</h3><p>ExpressRoute Direct provides customers the opportunity to connect directly into Microsoft’s global network at peering locations across the world. ExpressRoute Direct provides dual 100Gbps connectivity, which supports Active/Active connectivity at scale.</p><h3>AWS</h3><h3>VPC endpoints (PrivateLink)</h3><p>A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. There are two types of VPC endpoints:</p><p>- Interface endpoints</p><p>- Gateway endpoints</p><p>An interface endpoint is a network interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service. 
Interface endpoints are powered by AWS PrivateLink.</p><p>A gateway endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service.</p><h3>VPC Peering</h3><p>A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. All inter-region traffic is encrypted, with no single point of failure or bandwidth bottleneck. Traffic always stays on the global AWS backbone and never traverses the public internet, which reduces threats such as common exploits and DDoS attacks.</p><h3>AWS Site-to-Site VPN</h3><p>You can enable access to your remote network from your VPC by creating an AWS Site-to-Site VPN (Site-to-Site VPN) connection and configuring routing to pass traffic through the connection. IPv6 traffic is not supported for this deployment. A Site-to-Site VPN connection offers two VPN tunnels between a virtual private gateway or a transit gateway on the AWS side, and a customer gateway on the remote (on-premises) side. A virtual private gateway is the VPN concentrator on the Amazon side of the Site-to-Site VPN connection. A transit gateway is a transit hub that you can use to interconnect your virtual private clouds (VPCs) and on-premises networks.</p><h3>AWS Direct Connect</h3><p>AWS Direct Connect makes it easy to establish a dedicated connection from an on-premises network to Amazon VPC. Using Direct Connect, you can establish private connectivity between AWS and your data center, office, or colocated environment. 
Direct Connect lets you establish 1 Gbps or 10 Gbps dedicated network connections (or multiple connections) between AWS networks and one of the AWS Direct Connect locations.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a282db89468b" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Global Cloud Infrastructure overview]]></title>
            <link>https://medium.com/@prowler/global-cloud-infrastructure-overview-ad2b19dd6052?source=rss-16c2355d770------2</link>
            <guid isPermaLink="false">https://medium.com/p/ad2b19dd6052</guid>
            <category><![CDATA[infrastructure]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[gcp]]></category>
            <category><![CDATA[azure]]></category>
            <category><![CDATA[cloud-computing]]></category>
            <dc:creator><![CDATA[M. G.]]></dc:creator>
            <pubDate>Wed, 08 Apr 2020 07:53:15 GMT</pubDate>
            <atom:updated>2020-04-08T07:53:15.452Z</atom:updated>
            <content:encoded><![CDATA[<p>Cloud computing, as Gartner defines it, is a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using internet technologies. To offer services at global scale, the global infrastructure is crucial: the way it is built defines the high availability, latency, and connectivity options between clients and providers.</p><p>All the big cloud players have a similar global infrastructure design when it comes to logical separation: the infrastructure is divided into regions, which are further divided into zones. The definitions vary mostly for marketing purposes.</p><p><strong>AWS</strong></p><p>In the AWS world, a region is a physical location around the world where data centers are clustered. Each group of logical data centers is an Availability Zone. An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in a region. Each region contains at least two availability zones. Every data center, AZ, and AWS Region is interconnected via highly available, low-latency 100 GbE global network infrastructure.</p><p><strong>AZURE</strong></p><p>When it comes to Microsoft, a region is a set of datacenters deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network. Availability Zones are physically separate locations within an Azure region. Each Availability Zone is made up of one or more datacenters equipped with independent power, cooling, and networking.</p><p>Azure uses the term Geographies for two or more regions that preserve data residency and compliance boundaries.</p><p><strong>GCP</strong></p><p>With Google, regions are independent geographic areas that consist of zones. Each region has one or more zones; most regions have three or more zones. A <em>zone</em> is a deployment area for Google Cloud resources within a region. 
Each zone should be considered a single failure domain within a region; zones can be separate buildings, or can share a building while using separate power, cooling, networking, and control planes. Zones have high-bandwidth, low-latency network connections to other zones in the same region.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/975/1*d1oG3MDYmfYXr8NPbNWnZg.jpeg" /></figure><p>All cloud providers have their best coverage in North America, Europe, and the Far East, while other regions are served from a few main hubs. However, all providers are constantly expanding their global presence, so this map will look very different in the next few years.</p><h3>Regions</h3><p>Based on official numbers, Azure leads with 52 regions and 4 more announced. However, most Azure regions have a single data center and very few availability zones (multiple segregated data centers), as can be seen in the charts below. Microsoft has designed its cloud network differently than AWS and GCP: the goal is to cover (advertise) as many parts of the world as possible, so enterprise customers have reachability and proximity to Azure services wherever they are located.</p><p>AWS has at least two AZs inside each region to make sure that every region offers proper high availability. Google zones can sometimes share a location while using separate power, cooling, and network sources to keep the data centers outside each other's failure domain.</p><p>All providers are planning further expansion of their regions, with the following announcements: <br> AWS:</p><p>- Milan, Italy</p><p>- Osaka, Japan</p><p>- Madrid, Spain</p><p>- Cape Town, SA</p><p>- Jakarta, Indonesia</p><p>Azure:</p><p>- Mexico Central</p><p>- Spain Central</p><p>- Israel Central</p><p>- Qatar Central</p><p>GCP:</p><p>- Las Vegas, USA</p><p>- Jakarta, Indonesia</p><p>- Warsaw, Poland</p><p>- Doha, Qatar</p><p>- Toronto, Canada</p><p>- Melbourne, Australia</p><p>- Delhi, India</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/483/1*Rx6qnaVYjqh_76PEg-1F2A.jpeg" /></figure><p>In terms of availability zones, AWS and GCP have far more than Azure: Azure has only 10, while AWS and GCP have almost 7 times as many. But, as explained above, different providers use different terms for the same thing, so these numbers should be taken with a grain of salt.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/478/1*xuLkne_3RUErfaG6yJiCfQ.jpeg" /></figure><h3>Network Connectivity</h3><p>All providers offer multiple options to connect to and use their cloud services, but in general these can be grouped as:</p><p>- Direct connectivity for high-throughput, low-latency interconnection with a cloud provider</p><p>- Points of Presence (POPs) for placing services as close as possible to clients</p><p>In this segment, AWS, with 97 dedicated interconnections and 216 POPs, has an obvious lead, with more direct connections as well as more POPs placed around the world than Azure and GCP.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/478/1*ZpsI9tfDqgN9hjWGkJ1o2A.jpeg" /></figure><h3>Physical network</h3><p>Even though some companies try to convince their customers that the overlay is all you need to care about, common sense tells us that underlay networks are still quite important for running your operations. While all providers have global presence and connectivity to every part of the world, they achieve it in different ways. When it comes to owning the physical network the data runs across, Google has a huge advantage over Azure and AWS, both in the number of cables owned and in the total length of its subsea infrastructure. 
Google's subsea network spans over 100,412 km, with some cables being fully private and owned solely by Google, while other cables are part of a consortium in which Google has partial ownership.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/477/1*m6tNy393CRY6s4EAQgoXCg.jpeg" /></figure><p>Of the three, only Google has full ownership of some of its cables, while both Azure's and AWS's subsea infrastructure is built entirely through consortia. Microsoft is a proud co-owner of the MAREA cable, the highest-capacity submarine cable in the world with a capacity of 200 terabits per second.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/483/1*yhBJieKjorfS_j3m6uOmWQ.jpeg" /></figure><p>As can be seen from the charts above, each provider has unique strengths when it comes to global infrastructure, be it full high availability inside regions, broad global coverage, or a high-throughput data network, and it is up to customers to choose which of these matters most to them.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ad2b19dd6052" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>