Mapping Your As-Is to Google Cloud, To Leverage Cloud Value: Infrastructure

Dazbo (Darren Lester)
Google Cloud - Community
21 min read · Nov 28, 2023

Welcome to the continuation of the Google Cloud Adoption: From Strategy to Operation series.

In the previous part we looked at how you can use Google Cloud Migration Center to support your as-is discovery process, and then map those existing workloads to Google Cloud using IaaS.

But, as you’ve heard me say time and time again, IaaS is not the way to leverage maximum value from cloud. And with that in mind, this article will recommend some target state Google Cloud products and services, for an extensive set of as-is workloads.

White tiers are in scope for this article

Throughout, you’ll see how these product recommendations connect back to the set of Cloud Adoption and Cloud Consumption principles that I proposed here:

  1. Host in public cloud
  2. Use managed services
  3. Use cloud-native services
  4. Avoid commercially licensed software
  5. Automate deployments and installs
  6. Immutable infrastructure (“cattle, not kittens”)
  7. Document everything
  8. Manage lock-in
  9. Systemic FinOps

Establishing Your Common Workloads

The first thing you need to do is establish the common as-is workloads that you need to map to cloud products.

If you have an existing Technical Reference Model (TRM), then this is a trivial exercise. If you’re unfamiliar with the concept: the TRM is a way of organising and categorising your technology, infrastructure and platform services. The TRM maps technology capabilities to specific services and products. Organisations will often have multiple vendors and products providing the same capability, and the TRM helps you to identify this. (I’ll do an article on the subject of the TRM sometime soon.)

If you don’t have a TRM, then you’ll need to pull together some of your estate data — typically from your CMDB — to get a view of the common hosting stacks that you have in your estate today. By the way, when I say hosting stack, I mean the venue, hardware, software and infrastructure services that your business applications depend on.

Establishing Hosting Stack Capability Tiers

So traditional (i.e. on-prem) hosting stacks are typically made up of the following broad tiers of hosting capability:

  • Hosting Venue — such as data centres and colos.
  • WAN — including connectivity to Internet providers, SDWAN, inter-data centre connectivity, and any private or peered connectivity to cloud providers.
  • Network and Network Services — including switching and routing, reverse proxies, site selectors, load balancers, and network virtualisation.
  • Perimeter and Access — including traditional network zoning, Layer 2 / Layer 3 firewalls, web application firewalls, network intrusion detection and prevention (IDS/IPS), Next-Gen firewalls, DDoS protection, VPN, forward proxies, and zero trust network access (ZTNA) capabilities.
  • Storage, Backup and Archive — including block storage (e.g. SAN), file storage (i.e. NAS), and backup appliances.
  • Physical Compute — including physical x86 servers, physical RISC servers, HCI, and mainframes.
  • Compute Virtualisation — Hypervisor platforms.
  • O/S and Containers — operating systems, and container orchestration platforms.
  • Middleware and Database — relational and NoSQL databases, application and web servers, content management platforms, messaging and integration, and workload scheduling and orchestration.
  • Infrastructure Provisioning and CI/CD — including compute, storage and network provisioning, VM deployments, image baking, configuration management, source management, CI/CD.
  • Operations and Visibility — including monitoring, logging, alerting, security information and event management (SIEM), and APM.

These tiers are just my preferred way of organising products and services. You might choose to carve them up differently.

Visualising Your Tiers and As-Is Vendors, Products and Services

Having established these tiers, I then like to identify the key vendors, products and services used by an organisation in their as-is on-prem / legacy estate, and build a visual view of this information. I like to call it “Hosting-on-a-Page”, or HoaP. (Catchy, right?)

For many organisations, the view might end up looking something like this:

A typical enterprise legacy hosting stack

This view is incredibly useful:

  • It gives you an at-a-glance view of all the infrastructure and hosting services your organisation provides today.
  • It shows the main vendors and vendor products that you are using to provide those hosting services. It doesn’t show all the vendors and products. That’s what your TRM is for. It just shows the most significant ones.
  • It helps you do your as-is versus to-be mapping and gap analysis.

Outside of cloud adoption strategy, I’ve also found the HoaP view to be extremely useful for:

  • Explaining to stakeholders what I hold strategic responsibility for. (E.g. I’ve been an enterprise architect responsible for the organisation’s strategy across everything in this picture.)
  • Explaining to stakeholders which services are in-scope (or affected by) a particular strategic initiative, project or solution. For example, it can be useful to highlight elements of a picture like this, as part of the introduction to a solution design that is changing or implementing a hosting capability.

Using the HoaP View for Mapping

Now that we have this view of our capabilities, we can use it to map our most important as-is capabilities to preferred services in cloud. As we go through the as-is capabilities and attempt to map them, we can usually categorise each as one of the following (see the sketch after this list):

  • Replace — replace this capability with cloud service(s) and then remove the as-is product.
  • Maintain — this existing product will be used in the cloud, and potentially also maintained on-prem. (Depending on your strategic goals for on-prem.)
  • Tactical Retain (Sunset) — where we don’t want to use this product in the long term, but it’s likely we still need it in the short and medium term. Most likely, the product will only remain for existing on-prem workloads and we will not make use of it in the cloud.
  • Eliminate — this is no longer needed.
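If it helps to keep the mapping exercise honest, you can capture it as structured data rather than as slideware. Here is a minimal Python sketch of such a mapping table; the class, field names and example rows are purely illustrative assumptions on my part, and your own HoaP and TRM drive the real content.

```python
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    REPLACE = "Replace"
    MAINTAIN = "Maintain"
    TACTICAL_RETAIN = "Tactical Retain (Sunset)"
    ELIMINATE = "Eliminate"

@dataclass
class CapabilityMapping:
    tier: str            # hosting tier from the HoaP view
    as_is_product: str   # current vendor / product
    disposition: Disposition
    target: str          # preferred Google Cloud service, if any

# Illustrative rows only -- not a recommendation in themselves.
mappings = [
    CapabilityMapping("Network and Network Services", "On-prem load balancer appliances",
                      Disposition.REPLACE, "Cloud Load Balancing"),
    CapabilityMapping("Storage, Backup and Archive", "Backup appliances",
                      Disposition.REPLACE, "Cloud Storage with lifecycle policies"),
    CapabilityMapping("Compute Virtualisation", "VMware vSphere",
                      Disposition.TACTICAL_RETAIN, "Compute Engine (GCE)"),
]

for m in mappings:
    print(f"{m.tier}: {m.as_is_product} -> {m.disposition.value} -> {m.target}")
```

A simple table like this is easy to diff, report on, and review with service owners as the migration progresses.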

In the rest of this article and the next, I will go through these as-is capabilities and products, and recommend some ideal replacements in Google Cloud. (I’d normally use a table for this, but Medium doesn’t handle tables well. So I’ll do a separate heading per hosting tier.)

Hosting Venue

Depending on your strategic intent with respect to your data centres, you will aim to either replace or sunset them. Either way, the target hosting venue is obvious: Google Cloud!

Whether or not you retain your data centres in the medium or long term depends on:

  • The extent to which you expect to retain workloads that can’t be migrated to public cloud.
  • The viability of moving those workloads to a colo, such that you don’t have to maintain your own data centres.

When you run your workloads in Google Cloud, you’re using Google’s data centres.

Google’s Data Centre in the Netherlands

Here are a few reasons why that’s a good thing:

  • Google Cloud operates data centres in 39 regions (split across 118 zones) across the world, and the number keeps growing.
  • Their data centres are significantly more efficient than a typical enterprise data centre.
  • Google has been 100% carbon-neutral since 2007.
  • They were the first cloud company to achieve 100% energy consumption from renewable sources.
  • Google is one of the world’s largest purchasers of wind and solar energy.
  • Google Cloud is committed to sustainability. So, if you’re switching from your own data centres to Google’s, you’re almost certainly improving your organisation’s green credentials.

WAN

Artistic rendition from DALL-E: the cost of connectivity

In your target state:

  • Many of your workloads will end up in cloud, and can all be accessed over the Internet.
  • Off-the-shelf offerings will increasingly be provided in the form of SaaS, which can also all be accessed over the Internet.
  • The legacy applications that you currently host in your data centres will hopefully be eradicated; the goal is to eliminate or replace all of those existing workloads over time.

Consequently, in the long term you should be able to eliminate your reliance on WAN infrastructure in and out of your existing data centres. E.g.

  • Dedicated data centre interconnects — i.e. between your existing data centres.
  • Enterprise Internet connectivity for your data centre hosted workloads, e.g. using enterprise offerings from providers like BT and Vodafone.

Note: if your cloud adoption strategy depends on either short term or long term hybrid cloud capability — i.e. where you have extended your on-premises network into Google Cloud using private connectivity — then you will likely need some sort of Cloud interconnect between your existing data centres and Google Cloud. If you are using a colo, then your colo provider likely has everything in place to facilitate an interconnect.

Google Cloud Interconnect provides SLA-backed, highly available, high-bandwidth, low-latency private connectivity between your data centres and Google Cloud.

Overview of a Google Cloud dedicated Interconnect — from Google’s documentation

Network and Network Services

Networks, Switching and Routing

With Google Cloud, there is no longer any need for physical networks, routers, and switches. Google Cloud provides networks for you, called virtual private clouds (VPCs). Unlike AWS VPCs, Google VPCs are global, and you are free to provision your resources in any of Google’s regions across the globe.

Within each VPC, you have the ability to provision subnetworks (which align to your chosen regions), allocate internal and external IP addresses, and provide routes between networks and to the Internet.

Advantages of using Google VPCs:

  • You can provision entire networks in seconds (see the sketch after this list).
  • They are fully managed by Google.
  • You have no physical hardware to deploy; no cabling; and no switches or routers to manage.
  • Behind the scenes, you are utilising Google’s super-fast transcontinental fibre network.
  • You can instantly enable private connectivity between resources, even across continents.
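Provisioning one of these networks really is a matter of seconds. As an illustration, here is a minimal sketch using the google-cloud-compute Python client to create a custom-mode VPC and a regional subnet. The project ID, names, region and CIDR range are placeholder assumptions; check the client library documentation for the full API surface.

```python
from google.cloud import compute_v1  # pip install google-cloud-compute

PROJECT = "my-project"    # placeholder project ID
REGION = "europe-west2"   # any Google Cloud region

# A custom-mode VPC: we define our own subnets rather than using auto mode.
network = compute_v1.Network()
network.name = "demo-vpc"
network.auto_create_subnetworks = False

networks_client = compute_v1.NetworksClient()
networks_client.insert(project=PROJECT, network_resource=network).result()

# A regional subnet within the new VPC.
subnet = compute_v1.Subnetwork()
subnet.name = "demo-subnet"
subnet.ip_cidr_range = "10.0.0.0/24"
subnet.network = f"projects/{PROJECT}/global/networks/{network.name}"

subnets_client = compute_v1.SubnetworksClient()
subnets_client.insert(project=PROJECT, region=REGION,
                      subnetwork_resource=subnet).result()
```

In practice you would wrap this sort of thing in Terraform or another IaC tool (see the provisioning and CI/CD tier), but the point stands: there is no hardware, cabling or change window involved.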

Load Balancers and Reverse Proxies

Traditionally, on-premises networks deploy load balancers and reverse proxies to manage ingress traffic. I.e. to handle requests from users or other services into applications that you host in your data centre.

Typically, load balancers and reverse proxies exist as the same physical or virtual appliance in your data centres. In a nutshell, they provide these capabilities (sketched in code after this list):

  • They proxy requests from a client to your servers. Thus, the client only sees the proxy; it has no visibility of the servers that sit behind your proxy.
  • As a proxy, they provide a single point of connection for the client. Typically, the proxy provides a virtual IP address which is exposed to the client, and hides the IP addresses of any machines behind it.
  • They provide load balancing. I.e. requests come into the load balancer / proxy, and the LB then distributes the requests to the servers it sits in front of.
  • The load balancer typically performs some sort of health check of the servers it is fronting, when deciding where to send requests to.
  • The load balancer / proxy often handles TLS offload, being the termination point for secure incoming traffic. The proxy then typically forms a new connection (which may or may not be encrypted) to the servers it sits in front of.
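To make those capabilities concrete, here is a toy sketch (plain Python, standard library only) of the core load-balancing behaviour: health-check the backends, then pick the next healthy one round-robin. It deliberately omits the proxying and TLS-offload parts, and all of the names are made up for illustration.

```python
import itertools
import socket

class ToyLoadBalancer:
    """Conceptual sketch only: health checks plus round-robin selection."""

    def __init__(self, backends):
        self.backends = backends             # list of (host, port) tuples
        self._round_robin = itertools.cycle(backends)

    def is_healthy(self, backend, timeout=1.0):
        """Crude TCP health check: can we open a connection to the backend?"""
        try:
            with socket.create_connection(backend, timeout=timeout):
                return True
        except OSError:
            return False

    def pick_backend(self):
        """Return the next healthy backend. The client never sees these
        addresses; it only ever talks to the proxy's virtual IP."""
        for _ in range(len(self.backends)):
            candidate = next(self._round_robin)
            if self.is_healthy(candidate):
                return candidate
        raise RuntimeError("no healthy backends available")

# Example usage (with placeholder backend addresses):
# lb = ToyLoadBalancer([("10.0.0.11", 8080), ("10.0.0.12", 8080)])
# print(lb.pick_backend())
```

Real appliances (and Google's managed load balancers) add far more: connection draining, session affinity, TLS termination, and so on. The sketch is just to show the shape of the logic.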

Within a typical enterprise with an on-premises network, such proxy / LB capability is often provided using dedicated physical or virtual appliances, such as those from F5:

Overview of a load balancer — from F5 documentation

In Google Cloud you no longer need these appliances. They can be entirely replaced with the fully distributed, software-defined, fully-managed load balancers that are provided by Google, and which exist as a VPC resource.

Advantages of using Google’s load balancing:

  • You have no appliances to procure, deploy, patch or manage.
  • Load balancers can be provisioned in seconds and support a wide variety of use cases.
  • They can be internally or externally facing.
  • They are incredibly scalable.
  • They are automatically highly available, with a >99.99% SLA.
  • Unlike on-prem LBs, there is no need to deploy them within separate DCs and then synchronise configuration between them.
  • They can route traffic based on geographic origin.
  • They can be deployed alongside managed instance groups (MIG) to allow automatic healing and scaling of a group of VMs behind the load balancer, based on demand.
  • They natively integrate with other Google Cloud services, such as Cloud CDN (to cache content close to users), Cloud Armor (for DDoS protection, web application firewall), and Identity-Aware Proxy (to provide application-level authentication and authorisation).

Egress / Forward Proxy

Enterprises typically deploy a forward-proxy (typically just called “proxy”) or secure web gateway (SWG), in order to:

  • Provide proxying — to hide the identity of an internal client (e.g. a user’s machine, or a server) from a resource it is connecting to on the Internet.
  • Apply content filtering.
  • Apply data loss protection (DLP) capabilities.
  • Apply allow/deny lists.
  • Cache the responses from remote machines. (Which is useful if many internal machines need the same resource.)
  • Provide access auditing.
  • Apply policies based on the source client that the request originates from.

Common products used by enterprises include Zscaler Internet Access (ZIA), Blue Coat (Symantec) Secure Gateway, F5 Web Gateway, and Check Point Secure Web Gateway.

It is important to note that these proxies/SWGs often cater for two completely different categories of use:

  1. Egress traffic from your servers. E.g.
    - Connecting to remote servers to pull down updates and patches.
    - Interfacing with remote servers on the Internet.
  2. Internet access for users.

Google offers a couple of options, depending on the capabilities you require.

The first is Google Cloud NAT. This is a fully managed, distributed network address translation service. It allows resources in your Google Cloud environment (such as GCE instances, GKE nodes, Cloud Run, and Cloud Functions) — even those without an external IP address — to create outbound connections to the Internet or another network. This is super useful, because these internal resources CANNOT be reached directly from the outside (which is great from a security perspective), but they CAN connect to the outside.

Furthermore, all traffic routed through Cloud NAT can be logged, thus meeting any audit requirements you might have.
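As a rough illustration of how little there is to manage, here is a sketch that configures Cloud NAT for a VPC using the google-cloud-compute Python client: you create a Cloud Router in a region and attach a NAT configuration to it. The project, network, names and the exact field values are assumptions for illustration; consult the Cloud NAT documentation for the authoritative options.

```python
from google.cloud import compute_v1  # pip install google-cloud-compute

PROJECT, REGION = "my-project", "europe-west2"   # placeholders

# The Cloud Router is the anchor point for the NAT configuration.
router = compute_v1.Router()
router.name = "demo-nat-router"
router.network = f"projects/{PROJECT}/global/networks/demo-vpc"

nat = compute_v1.RouterNat()
nat.name = "demo-nat"
nat.nat_ip_allocate_option = "AUTO_ONLY"   # let Google allocate the NAT IPs
nat.source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
nat.log_config = compute_v1.RouterNatLogConfig(enable=True, filter="ALL")  # for audit
router.nats = [nat]

compute_v1.RoutersClient().insert(
    project=PROJECT, region=REGION, router_resource=router
).result()
```

From that point on, instances with no external IP in that region can make outbound connections, and every translation can be logged.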

For more egress protection, Google have now introduced Secure Web Proxy (SWP). It is a fully-managed serverless, autoscaling service. It allows workloads within the Google Cloud environment — e.g. VMs, containers, serverless workloads using the serverless connector, and even workloads outside of Google Cloud that are connected by VPN or Interconnect — to both proxy and secure HTTP/S traffic. You can use it in conjunction with Cloud NAT.

With Google Secure Web Proxy, you get the following capabilities:

  • Block access to URLs and domains.
  • Granular policy-based access control, both from Google Cloud workloads to the Internet, and between Google Cloud workloads. E.g. based on source IP addresses, service accounts or secure tags. (Just like with the Google Cloud Firewall.)
  • TLS content inspection, such that you can inspect encrypted web traffic and then enforce policies based on the content of the request payload.
  • Native integration with Cloud NAT.
  • Native integration with Google Cloud Logging, e.g. for audit logs.

Of course, an enterprise will still want to route user requests through a proxy, to continue to provide the capabilities listed in category #2. Recall that in your target state, your enterprise users will be increasingly accessing business resources over the Internet; access to services provided from your data centres will be diminishing. But even so, you still want to enforce outbound access controls, like identity-based access, allow-listing, DLP, and logging. But you want to reduce your reliance on your data centres, and you DON’T want to route all your users’ Internet traffic through your own data centres. So in this scenario, you might consider migrating any existing on-premises proxies to a cloud-based proxy. For example Zscaler Internet Access (ZIA) is a cloud-based proxy.

Perimeter and Access

Legacy enterprises spend a lot of time and money securing the perimeter of the corporate network. The paradigm here is that anything outside the network is inherently untrusted, but anything already in the corporate network is trusted and safe. Once a client has authenticated and authorised “to the network”, then that client is given very broad access within the corporate network. This is known as perimeter-based security.

In order to protect the perimeter, you typically see the following technologies in use:

  • Enterprise VPN software — like Cisco AnyConnect and Palo Alto GlobalProtect — is installed on corporate laptops, to allow broad access to the corporate network. These products work by establishing a secure tunnel between an end-user device (e.g. a corporate-issued laptop) and the corporate network.
  • Perimeter firewalls — like those from Cisco, Palo Alto and Check Point — which restrict traffic in and out of the network. They are also often used to firewall traffic between subnets within the perimeter, but at a fairly broad level of granularity. For example, an enterprise might have a subnet for application servers and a different subnet for database servers. Traffic must pass through a firewall to get from one subnet to the other, thus ensuring that only allowed traffic can pass through. But traffic within a given subnet does not have to pass through any firewalls.
Check Point Quantum Nextgen Firewalls

Perimeter-Based Security is Bad; ZTNA is Good!

There, I said it. Perimeter-based security is an outdated and flawed paradigm. The modern paradigm is zero trust network access, or ZTNA. Sometimes you’ll hear people say things like:

“ZTNA is just a phrase.”

“ZTNA is nothing new.”

People that say this don’t understand it.

Here’s the Zero Trust paradigm, in a nutshell (see the toy policy check after this list):

  • No implicit trust is granted to anything!
  • Being “inside the network” means nothing. The corporate network is no longer trusted. If you’re on the corporate network, this doesn’t grant you carte blanche to everything. Threats can come from anywhere, and malicious activity can (and often does) originate from within the corporate network.
  • The default is to DENY access to everything, and only allow access based on your identity.
  • You must continuously authenticate your identity, and you must be authorised, to access any given service.
  • Authentication typically requires some sort of strong, multi-factor, context-aware process.
  • Client devices are typically also checked to ensure they meet a minimum set of security requirements, such as minimum OS level, and an up-to-date antivirus program. This is called a security posture check.
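If it helps, here is a toy sketch of what a zero trust access decision looks like in code. Everything here (the field names, the posture thresholds, the entitlements) is invented purely for illustration; real ZTNA products evaluate far richer signals, continuously.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_passed: bool            # strong, multi-factor authentication succeeded
    device_os_version: int      # posture signal reported by the device
    antivirus_up_to_date: bool  # another posture signal
    source_network: str         # "corporate", "home", "cafe"... deliberately ignored

MIN_OS_VERSION = 14             # illustrative posture requirement

# Entitlements are explicit, per identity and per service. Nothing is implicit.
ENTITLEMENTS = {
    "alice@example.com": {"payroll-app"},
    "bob@example.com": {"wiki"},
}

def authorise(request: AccessRequest, service: str) -> bool:
    """Default deny. Identity, authentication strength and device posture are
    evaluated on every request; being 'inside the network' buys you nothing."""
    if not request.mfa_passed:
        return False
    if request.device_os_version < MIN_OS_VERSION or not request.antivirus_up_to_date:
        return False    # posture check failed
    return service in ENTITLEMENTS.get(request.user, set())

print(authorise(AccessRequest("alice@example.com", True, 15, True, "cafe"), "payroll-app"))      # True
print(authorise(AccessRequest("bob@example.com", True, 15, True, "corporate"), "payroll-app"))   # False
```

Note that `source_network` plays no part in the decision, which is the whole point.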

Conclusion:

  • In the short term: there’s still a requirement to protect data centre-hosted workloads. VPNs are a poor choice in the modern environment. Enterprises should replace VPN with a ZTNA solution, if they haven’t already. (Zscaler offer the very cool Zscaler Private Access, to provide ZTNA access to users trying to access corporate resources, wherever they are.)
  • In the long term: you may plan to eliminate your data centres. You will have no on-premises workloads to access, but you still need to protect access to resources and services that you access over the Internet, based on identity and authorisation. For Google Cloud services, you can do this using Google tools such as Identity-Aware Proxy and the Zero Trust product BeyondCorp Enterprise. See my previous blog for a bit more information on how BeyondCorp Enterprise delivers ZTNA for access to any resource.

Storage, Backup and Archive

In traditional enterprises, we typically have:

Block Storage

Block storage is typically provided in the form of a Storage Area Network (SAN).

Typical use cases for SAN:

  • For VMs — i.e. the OS disk and any additional data disks.
  • Storage for databases.

The SAN is usually provisioned as storage arrays connected to the network. The storage array itself contains many redundant disks. Most organisations have moved away from spinning disks (traditional HDDs) for SAN, and are either using all-flash arrays (storage arrays composed entirely of SSDs) or hybrid arrays composed of SSDs and HDDs.

Typical enterprise block storage array vendors include Dell EMC, IBM, HPE, and Pure Storage.

A Pure Storage All-Flash Array

File Storage

File storage is typically provided in the form of Network Attached Storage (NAS).

File storage is used to persist unstructured files, such as: documents, images, videos, executables, and other binary files. The NAS then makes these resources available to multiple consumers over the network, i.e. as some sort of shared file resource.

Typical use cases for NAS:

  • Providing traditional “shared drives” to users in the corporate environment.
  • Providing shared access to applications. For example, consider a cluster of stateless web servers, which are all pointing to a common folder containing static resources.

NAS is usually provisioned as a file storage appliance (which itself contains a number of highly redundant disks), connected to the network. The NAS appliance exposes the resources on the disks using protocols like NFS or SMB/CIFS.

Typical enterprise file storage vendors include NetApp (e.g. NetApp ONTAP and NetApp FAS), Dell EMC, HPE and IBM.

Backup

In the enterprise, backup appliances usually rely on block storage, but they add additional intelligence and capabilities, e.g. backup scheduling, data redundancy, file versioning, in-line encryption, data deduplication, and backup integrity verification. A crucial requirement of a backup solution is the ability to restore data quickly, and often at significant scale.

Common enterprise backup vendors include Veeam, Veritas, Commvault, Dell EMC, and IBM.

Archive

It’s worth mentioning that traditional on-premises enterprises are often not very good at archiving.

Archiving is somewhat distinct from backup. Backup is about being able to recover data from a particular point in time. Archive is about storing long-term data that is not actively accessed, but may need to be accessed for specific reasons (e.g. compliance).

Many enterprises use their backup solution as an archive solution. But backup solutions are expensive, and using backup for archive is therefore an expensive way to provide this capability.

Cloud Target State

In the cloud, we need to replace these capabilities. Here are some general recommendations:

  • In the cloud, block storage is generally only required to provide persistent disks for virtual machines. Thus, it is relegated to your IaaS workloads. In Google Cloud, block storage is provided to GCE instances in the form of persistent disks. Persistent disks offer a range of performance points, and they can be either zonal or regional. Regional disks are useful if you need block-level replication of storage across zones, e.g. to support failover between VMs. (Usually this is a requirement that can be avoided with good design.)
  • In the cloud, the requirement for file storage has significantly reduced. This is because most of the use cases for a shared file system can now be satisfied using other forms of storage, particularly object storage. For example, in the Google ecosystem, end users can store all of their files on Google Drive, rather than on a shared drive hosted on a NAS appliance. And similarly, where application servers historically needed some sort of static content (e.g. web servers), this can now be satisfied using object storage in the form of Google Cloud Storage (GCS). GCS is highly available and redundant, has virtually unlimited scale, and offers low latency and high throughput. Furthermore, GCS buckets can even be presented as local file systems on a VM, using Cloud Storage FUSE.
  • It’s worth noting that Google Cloud does offer a managed file storage solution, called Cloud Filestore. This allows existing clients to access resources using NFS.
  • In the cloud, both backup and archive can be satisfied using GCS. In fact, GCS is an ideal solution for these use cases, because GCS offers different storage classes, depending on how frequently data needs to be accessed. For example, the Coldline Storage class is intended for data that will not be accessed more than once in three months, and Archive Storage is intended for data that will not be accessed more than once a year. These storage classes have different costs; Archive storage is typically less than a tenth of the cost of Standard storage. Furthermore, GCS supports lifecycle policies which can — for example — automatically move rarely accessed data to a cheaper storage class! GCS also provides object versioning, so you can keep historical versions of an object. So you can see how an enterprise can massively reduce its backup and storage costs, by switching from traditional on-prem backup appliances to Google Cloud Storage. (See the sketch below.)
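To give a feel for how simple this is, here is a minimal sketch using the google-cloud-storage Python client: enable object versioning on a bucket, and add lifecycle rules that demote ageing data to cheaper storage classes and eventually delete it. The bucket name and the retention periods are placeholder assumptions; set them to match your own backup and compliance requirements.

```python
from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()
bucket = client.get_bucket("my-backup-archive-bucket")   # placeholder bucket name

# Keep historical versions of every object (the "point in time" part of backup).
bucket.versioning_enabled = True

# Lifecycle rules: demote rarely accessed data, then expire it after ~7 years.
bucket.add_lifecycle_set_storage_class_rule(storage_class="COLDLINE", age=90)
bucket.add_lifecycle_set_storage_class_rule(storage_class="ARCHIVE", age=365)
bucket.add_lifecycle_delete_rule(age=2555)

bucket.patch()   # apply the new configuration to the bucket
```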

Storage, Backup and Archive Conclusions

  • Where VMs are being migrated to cloud, replace on-prem VM block storage with GCE persistent disks.
  • Replace legacy end user file storage (e.g. shared drives) with an end-user object storage solution, like Google Drive.
  • Replace server file storage use cases (e.g. web servers) with GCS.
  • Try to eliminate any other file storage requirements; modern applications can be architected such that file storage is not required.
  • Replace legacy on-prem backup and archive with object storage, using GCS. Leverage storage policies, automatic storage classes, and object versioning.

Physical Compute and Compute Virtualisation

In a typical medium to large enterprise, it’s common to find these flavours of compute:

  • Dell / Cisco / HPE / IBM x86 (CPU architecture) + VMware or Hyper-V (hypervisor)
  • IBM Power (RISC CPU architecture) + PowerVM (hypervisor)
  • IBM Z (mainframe), with or without z/VM (hypervisor)

At this stage, I’m going to ignore the middleware and application software running on these platforms, since I’ll cover those in the next article. Here, I’ll just cover some target state options in Google Cloud.

x86 / VMware

I’ve covered this fairly extensively already, in this part. But to summarise:

  • The preferred target state is to eliminate the need for VMs altogether.
  • For any on-prem VMs hosting software like databases, Hadoop, and Kubernetes, the goal will be to migrate these to fully managed and/or cloud-native Google Cloud solutions. But more on this in the next part.
  • Some existing workloads that run on VMs might be easily containerised, and can therefore be moved to services like Google Kubernetes Engine, or Cloud Run. This will remove the VM dependency, and make these applications more lightweight, more portable, faster, and (in many cases) more scalable.
  • But for some legacy workloads — particularly commercial off-the-shelf (COTS) packages — it is not cost effective or even possible to modernise these applications to remove their dependency on VMs. So, there are certain workloads where you’ll need VMs, even in the cloud. (At least, until you can eliminate, modernise, or replace those legacy applications.)
  • Where you need to keep VMs, your goal should be to eliminate your dependency on any third party hypervisor software, once you’re in Google Cloud. So, look to use GCE rather than GCVE, if you can.
  • Look to leverage Google services that add significant value to your cloud-hosted VM estate. For example, make use of Google Cloud Load Balancer and Managed Instance Groups (MIGs). By doing this, you can turn your fixed-size legacy applications into an autoscaling, elastic group of VMs, where VMs are provisioned and destroyed in response to actual demand. (See the sketch below.)
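As a sketch of that last point: with the google-cloud-compute Python client you can attach an autoscaler to an existing managed instance group in a few lines. The project, zone, group name and scaling thresholds below are assumptions for illustration only; the MIG itself (built from an instance template) must already exist.

```python
from google.cloud import compute_v1  # pip install google-cloud-compute

PROJECT, ZONE = "my-project", "europe-west2-a"   # placeholders
MIG_NAME = "legacy-app-mig"                      # an existing managed instance group

autoscaler = compute_v1.Autoscaler()
autoscaler.name = "legacy-app-autoscaler"
autoscaler.target = f"projects/{PROJECT}/zones/{ZONE}/instanceGroupManagers/{MIG_NAME}"
autoscaler.autoscaling_policy = compute_v1.AutoscalingPolicy(
    min_num_replicas=2,    # never fewer than two VMs
    max_num_replicas=10,   # scale out under load
    cpu_utilization=compute_v1.AutoscalingPolicyCpuUtilization(utilization_target=0.6),
)

compute_v1.AutoscalersClient().insert(
    project=PROJECT, zone=ZONE, autoscaler_resource=autoscaler
).result()
```

The load balancer's health checks, combined with the MIG's autohealing, then take care of replacing unhealthy VMs without human intervention.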

IBM Power

A long time ago (in a galaxy far, far away), enterprise-class IBM Power servers offered levels of performance, availability and scalability that could not be matched in the x86 world. And so it was common for older enterprises to run their most demanding workloads on IBM Power.

Move forward several years (to around 2000–2010), and x86 servers — coupled with VMware — were able to offer levels of performance, availability and scalability that were sufficient for large, demanding workloads in the enterprise. And at a much lower price point. And so, Power servers in the enterprise tend to be relegated to very large workloads like SAP and Oracle databases. In Google Cloud, there is no good reason to run cloud-hosted Power platforms for either of these. There are much better target states!

So generally, you should be aiming to:

  • Modernise / replace workloads running on IBM Power, where possible.
  • Eliminate the Power footprint.

Mainframe

IBM Mainframe

Mainframe migration is tricky. No doubt about it!

I’ve discussed this already in a previous article. Although a few vendors exist, the most common mainframe vendor for large enterprises is IBM. IBM brands its mainframe systems as “System z” or “IBM Z”. Mainframe workloads typically take the form of:

  • Batch COBOL applications. It is common for enterprises with legacy IT to have COBOL applications that are 30 or more years old! These applications often perform bespoke transaction processing, and are often linked to legacy business processes.
  • Online transactional COBOL applications.
  • Databases, typically in the form of IBM Db2 for z/OS. It is common for enterprises with legacy IT to have central Db2 databases, with tables often shared amongst many of the aforementioned COBOL applications. This often creates a one-to-many mapping between groups of tables and legacy applications, making it difficult to decouple and independently migrate applications.
  • z/VM-hosted Linux operating systems. As compute virtualisation became prevalent in the 2000s, many organisations with mainframes found themselves stuck with these machines, but wanting to get more value from such expensive units. IBM offered a way to do this, in the form of their z/VM hypervisor. This enabled mainframes to host virtualised operating systems, just like any other hypervisor platform. Thus, many organisations started running Linux-based applications on their mainframes, via z/VM.

Broadly, your options are:

  1. Rehost:
    Lift-and-shift your COBOL applications to a Micro Focus environment running on Google Cloud. Google’s Dual Run provides a way to safely migrate the workloads and run them side-by-side, until you’re ready to decommission the mainframe workload. It doesn’t get rid of your legacy code problem, but it does move you to a platform with a lower TCO, and without the 5-year hardware refresh cycles!
    Lift-and-shift your z/VM workloads to GCE, thus fully eliminating the need for the z/VM platform.
  2. Refactor and Replatform:
    Typically by converting legacy COBOL applications to Java services. This can largely be automated, using tools like Google’s G4. This approach helps you to modernise your workloads, removing your dependency on an increasingly niche (and increasingly expensive) pool of COBOL programmers. You can also use this opportunity to API-enable existing applications, and to make them friendlier for users.
    And for legacy applications running on z/VM, you can modernise them just as you would with VMware-based workloads. E.g. by containerising.
  3. Replace: typically with a modern off-the-shelf (often SaaS) solution.

A few years ago, it was close to impossible for any medium or large organisation to exit a mainframe. But today, the migration technology has moved on significantly, and such a migration is possible. Take a look at this success story from Santander. (These are slides I pulled together during the 2023 Google Next event.)

Before You Go

  • Please share this with anyone that you think will be interested. It might help them, and it really helps me!
  • Feel free to leave a comment 💬.
  • Follow and subscribe, so you don’t miss my content. Go to my Profile Page, and click on these icons:
Follow and Subscribe

Useful Links and References

Series Navigation
