
Migrating to a single cloud has hidden costs and can fail for four reasons.


On-premise infrastructure remains inflexible, slow to configure, and poorly utilised. As companies seek to accelerate digital transformations with software at the heart of new products and services, public cloud migrations have been positioned as the obvious answer.

Data sovereignty and security are still major legal concerns, and the risk of experiencing performance issues or being unable to access your data in another country remains significant.


Companies that have migrated applications to the public cloud are running into escalating cloud costs, performance degradation and outages. As a result, over 72% of them have repatriated at least one major application group back on-premise or into an MSP private cloud.


Why has this happened?


Reason 1: Costs in the cloud spiral out of control.


Complex, multi-tier applications are made up of databases, several application logic servers and many web or front-end servers. The database and application logic tiers need to run constantly, which means the underlying infrastructure is dedicated to them. That is fine when a company owns the infrastructure on-premise. In a shared service such as the public cloud, however, these continuously running workloads end up reserving or dedicating a portion of the provider's infrastructure.


Any shared service such as a toll road, a railway line or the public cloud is cheaper for an individual customer only when they utilise the asset for a limited period of time. Otherwise, the reserved or dedicated share of the asset's cost is passed on to the customer. The problem with rehosting or re-platforming complex applications into the cloud is that their continuously running workloads require exactly such reserved or dedicated resources.
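
To make the economics concrete, here is a minimal back-of-the-envelope sketch in Python. The hourly rate and usage hours are illustrative assumptions, not quoted prices from any provider; the point is only the gap between intermittent and always-on consumption.

```python
# Back-of-the-envelope comparison of running one application-tier server
# continuously in the public cloud versus using it intermittently.
# All figures below are illustrative assumptions, not real provider prices.

HOURS_PER_MONTH = 730

on_demand_rate = 0.20          # assumed $/hour for an on-demand instance
hours_intermittent = 200       # e.g. a part-time dev/test workload
hours_continuous = HOURS_PER_MONTH  # a database tier never stops

intermittent_cost = on_demand_rate * hours_intermittent
continuous_cost = on_demand_rate * hours_continuous

print(f"Intermittent workload: ${intermittent_cost:,.0f}/month")  # ~$40
print(f"Always-on workload:    ${continuous_cost:,.0f}/month")    # ~$146
```

The always-on tier effectively reserves the underlying capacity 24x7, so the shared-service discount disappears; multiplied across tens of servers and several years, the bill approaches or exceeds the cost of owning equivalent infrastructure.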


Reason 2: Performance degrades in the cloud.

The cloud is not an omnipresent and unlimited computing capability. It is merely an incredibly large (hyperscale) data centre in a remote location. Companies connect to such a data centre through a series of telecom carrier network hops.


When a company migrates an application with distributed end-points in the real world into the public cloud, all of its hundreds or thousands of end-users must now reach that single, remote facility from a multitude of locations via multiple telecom carrier hand-offs (called interconnection or peering hops). Each hop adds latency, jitter and the possibility of packet loss, so response times degrade and vary unpredictably across the user base.

IT teams can rarely fix such application performance failures because it is well-nigh impossible to replicate transient network conditions across hundreds or thousands of end-users.
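
As a rough illustration of the arithmetic involved, the sketch below adds up hypothetical per-hop latencies for a user reaching a remote cloud region. All figures are assumptions chosen to show the mechanism, not measurements of any real network.

```python
# Illustration of how carrier hand-offs between an end-user and a remote
# hyperscale region add up. Latency figures are hypothetical assumptions.

last_mile_ms = 10          # user to local ISP
peering_hops = 4           # assumed number of carrier interconnection hops
per_hop_ms = 8             # assumed added latency per hop
region_distance_ms = 25    # propagation to the remote cloud region

one_way_ms = last_mile_ms + peering_hops * per_hop_ms + region_distance_ms
round_trip_ms = 2 * one_way_ms

# A chatty application screen that makes 20 sequential round trips:
requests_per_screen = 20
screen_load_ms = requests_per_screen * round_trip_ms

print(f"Round trip:  {round_trip_ms} ms")               # 134 ms
print(f"Screen load: {screen_load_ms / 1000:.1f} s")    # ~2.7 s
```

Because each user traverses a different combination of hops, the numbers vary per user and per hour, which is exactly why these failures are so hard to reproduce in a test environment.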


Reason 3: Some complex applications just cannot be migrated to the cloud.


Complex applications can require highly specific configurations of storage, network, DNS and interconnectivity amongst tiers. Such highly specific configurations require direct access to the underlying infrastructure. Public cloud providers, on the other hand, abstract away the underlying infrastructure and present a software interface. This means that those specific configurations are impossible to replicate.


The only alternative is to rewrite such applications completely, but that then becomes a strategic application transformation with its attendant business process reengineering, customer and user migrations and data migration – all of which can take several quarters to deliver.


Reason 4: Cloud vendor lock-in, aka not all cloud providers are equal.


Once a company migrates the bulk of its application portfolio to the cloud, it encounters a paradox: having spent upwards of three years migrating applications and infrastructure to a cloud provider, it is then locked into that provider and at the mercy of any changes to pricing and commercial terms. Cloud regions are also known to suffer the occasional outage.

However, having already spent several tens or, in extreme cases, hundreds of millions of dollars, companies balk at making their applications work with yet another cloud provider.

What companies really need: the ability to target application workloads to the best location for security, cost, performance or data sovereignty.


All of this establishes that not all application workloads are best suited to the public cloud. Some (distributed) application workloads need to be deployed in edge locations, neither on-premise nor in the public cloud. Instead of the one-size-fits-all-workloads approach of public cloud providers, a company needs to be able to choose the best possible location amongst on-premise, managed service provider private clouds, multiple public clouds and edge locations.

It also means that an application workload needs to be deployed onto the best-suited infrastructure type, whether that is bare metal, VM or container based, rather than be shoehorned into a single type of infrastructure because that is all a location or vendor tooling can support.

Pushing this to its logical conclusion, it is not just separate applications that need to be deployed into the best possible combination of location and infrastructure type. Complex applications, which can result in deployments of upwards of 100 individual (but configured and interconnected) server workloads per environment, also need to be deployed this way: each server workload placed in the best possible location and on the best possible infrastructure type, yet working as part of the overall application.
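
As a purely hypothetical illustration of what per-workload placement decisions might look like, the sketch below assigns each server workload of a three-tier application to a location based on a few constraints. The workload names, constraints and placement rules are invented for this example and do not represent any vendor's actual logic.

```python
# Hypothetical sketch of per-workload placement: each server workload in a
# complex application is matched to the location and infrastructure type
# that satisfies its constraints. Names and rules are illustrative only.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    needs_data_residency: bool   # must data stay in-country / on-premise?
    latency_sensitive: bool      # serves distributed end-users?
    runs_continuously: bool      # always-on tier (e.g. database)?
    infra_type: str              # "bare-metal", "vm" or "container"

def place(w: Workload) -> str:
    if w.needs_data_residency:
        return "on-premise"          # sovereignty constraint comes first
    if w.latency_sensitive:
        return "edge location"       # move close to end-users
    if w.runs_continuously:
        return "MSP private cloud"   # always-on capacity is cheaper dedicated
    return "public cloud"            # bursty, elastic work suits shared capacity

app = [
    Workload("database", True, False, True, "bare-metal"),
    Workload("app-logic", False, False, True, "vm"),
    Workload("web-frontend", False, True, False, "container"),
]

for w in app:
    print(f"{w.name:<13} -> {place(w)} ({w.infra_type})")
```

The point of the sketch is that the placement decision is made workload by workload, while the workloads remain configured and interconnected as one application.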

Making applications work across multiple locations requires several technology building blocks and is not easy to achieve. In our next note, we will explore how an enterprise can achieve this, and how a managed service provider can uplift its private cloud capabilities to support hybrid cloud architectures.


 

About Author

Adi has over 20 years of experience in the TMT sector and has held a variety of roles across M&A, Strategy and Technology P&L leadership. He brings seed to scale expertise in launching and growing technology platforms and revenue streams. In his last corporate role at TelecityGroup/Equinix, Adi was responsible for organic growth strategy and M&A. At Telecity/Equinix, Adi launched and scaled Cloud-IX, which became Europe’s leading SDN cloud orchestration platform with over 92 resellers and 125+ direct customers.


Adi has worked with several CIOs and CTOs in the course of their business transformation, application delivery, data centre infrastructure deployment and cloud migration initiatives. The idea for ProtoCloud was born out of the realisation that current infrastructure-only deployment solutions address only 5%-10% of the overall deployment needs of an enterprise; deploying complex applications on top of that infrastructure is time-consuming and requires significant manual effort. The complexity of application deployments has increased exponentially in the last few years, given the multitude of on-premise, managed service provider private cloud, public cloud and now edge locations that CIOs must integrate into their IT architectures. ProtoCloud was founded in 2018 to solve for the entirety of the applications+infrastructure problem across hybrid cloud architectures.



