Terrence Chou for AWS Community Builders


Is Host-based Cloud Platform Useless?

For most organisations, one of the motivations for kicking off a cloud journey is to stop maintaining underlying infrastructure entirely, for a variety of reasons such as the tedious hardware lifecycle or a different technology focus. However, does that mean a host-based cloud platform cannot present any value to organisations? Not at all! As always, everything depends on the use case.

Before We Jump Into The Topic...

Agility and elasticity are what get most organisations started on their cloud journey and deeply involved in the wider cloud ecosystem. Beyond those factors, the most fascinating point is the pay-as-you-go model, which gives organisations a way to stretch their service capacity for short-lived or temporary situations without investing in traditional infrastructure the way they used to. Everything looks quite rational, doesn't it? However, one key question is sometimes missed from the outset: which framework will you adopt for either cloud extension or cloud migration? Rehost, Replatform, or Refactor?

We have often heard that embracing the cloud is an inevitable trend, but its influence goes well beyond a trend. The cloud ecosystem not only breaks the traditional boundaries (roles and responsibilities) but also forms a brand-new working model. This new norm changes each technical team's ownership significantly, because each of them is able to provision resources, grant access, expose services, and more without involving other teams as they used to. But this transition also creates inconsistency and confusion, which can result in side effects such as increased operational difficulty and unwanted spending, especially when the organisation is a large-scale enterprise.

From the situations above, we can distil one extremely prominent question: do we just want to launch or move every single workload to the cloud with as few changes as possible? Or are we keen to refactor the service framework completely?

Is Anything Different About Cost Across A Decade?

From my personal perspective, moving to the cloud cannot be just about reducing cost; "the cloud is cheap" is a tremendous misunderstanding to carry in your mindset. If it really were cheap, why would the FinOps discipline come into play at all?

However, human nature is an interesting thing, especially in the cloud era. Back when we operated everything ourselves, we paid for the hardware and software lifecycle annually; in other words, we only needed to debate the payment once per cycle. For that reason, we were not regularly confronted by the expenditure until the next cycle arrived. The story can be totally different in the cloud, even if the total cost stays exactly the same. Why? The bill! You can monitor the cost generated by each deployment on a daily basis and be shocked by invisible spending, unexpected consumption, or both. Because of this transparency, most organisations are keen to reduce overall spending before they decide to move on to the next stage.

On the other hand, every CSP encourages customers to embrace more cloud-native services and features instead of building self-managed frameworks, for a variety of reasons: solution integrity, product familiarity, modernised architecture, or cost optimisation. If you still do not have a clear cloud blueprint, you will eventually get stuck in the concern I raised previously.

Cloud-native Is Not Really A Must!

Is a cloud-native service architecture required? The feedback could be positive (Yes, that is a milestone we aim to achieve!) or negative (Well... that is certainly a goal, but it depends on whether we are eager to revamp our service framework...), but either way we should treat the whole process as an evolution rather than an enforcement. We do not have the Infinity Gauntlet, so we cannot rewrite anything immediately by snapping our fingers. In practice, we must classify which service architectures will stay unchanged and which will be revamped. After that classification, you will find that the most cost-effective way to host the unchanged frameworks is a host-based platform, for instance Dedicated Hosts or VMware Cloud, and here are the reasons.

  • Optimise your license fee - Let me use Microsoft SQL Server as an example. There are two purchasing models, per-vCPU and per-core, and which one to adopt depends on how many SQL Servers you have. Here is some high-level guidance: if you can fully pack your SQL Servers onto a single host, adopt the per-core model, which lowers your license fee. In contrast, if a host cannot be fully allocated, compare the two purchasing models, because the per-vCPU model is not necessarily the more expensive one. (A quick cost sketch follows this list.)

(Figure: License Optimisation)

  • Retain operational consistency (VMware Cloud) - Most organisations started their virtualisation journey with the VMware vSphere suite, including vCenter (management console), vSAN (storage), and NSX (networking and network security). These components are also the VMware Cloud fundamentals across AWS, Azure, Google, and other CSP platforms. Beyond optimising the license fee, if a customer aims not only to keep their service frameworks unchanged but also to retain their operational excellence as much as possible, then VMware Cloud will be the most ideal choice.

  • Gain more capacity (VMware Cloud) - Why is VMware so powerful? Because of the over-provisioning principle! What is it? In short, you can allocate more compute resources (vCPU, memory, and storage) than a single host physically has. How come? In practice, the resources you allocate to each workload are rarely fully utilised; in most cases the real usage sits below 50%, sometimes far lower. What VMware does is dynamically reallocate those idle resources to whichever workload really needs them, ensuring that everything a single host has can be completely used. Take the r5 Dedicated Host as an example: it can hold 24 r5.xlarge instances at a 1:1 allocation; in other words, with VMware's over-provisioning it can afford much more than just 24 r5.xlarge-sized workloads. (A rough capacity sketch also follows this list.)
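To make the licensing guidance concrete, here is a minimal break-even sketch in Python. The 48 physical cores of an r5 Dedicated Host and the 4 vCPUs of an r5.xlarge are public EC2 figures, but the per-core and per-vCPU prices below are placeholders I made up for illustration; substitute your own contract prices and host sizes.

```python
# A minimal sketch of the per-core vs per-vCPU comparison described above.
# All prices are placeholders for illustration, not real Microsoft or AWS list prices.

CORES_PER_HOST = 48        # physical cores on an r5 Dedicated Host
VCPUS_PER_VM = 4           # an r5.xlarge-sized SQL Server VM
PRICE_PER_CORE = 1.0       # assumed licence cost per physical core (placeholder unit)
PRICE_PER_VCPU = 0.6       # assumed licence cost per vCPU (placeholder unit)


def per_core_cost() -> float:
    """License the whole host once, no matter how many VMs run on it."""
    return CORES_PER_HOST * PRICE_PER_CORE


def per_vcpu_cost(vm_count: int) -> float:
    """License every VM individually by its vCPU count."""
    return vm_count * VCPUS_PER_VM * PRICE_PER_VCPU


if __name__ == "__main__":
    for vms in (4, 12, 24):
        core, vcpu = per_core_cost(), per_vcpu_cost(vms)
        winner = "per-core" if core < vcpu else "per-vCPU"
        print(f"{vms:>2} SQL Server VMs: per-core={core:5.1f}  per-vCPU={vcpu:5.1f}  -> {winner}")
```

With these placeholder numbers, a lightly loaded host favours per-vCPU, while a fully packed host tips towards per-core, which is exactly the decision point described in the first bullet.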
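And to illustrate the "more than 24 instances" point from the last bullet, here is a back-of-the-envelope capacity sketch. The host and instance sizes are public EC2 figures; the over-commit ratios are purely my assumptions for illustration, not VMware sizing guidance.

```python
# A rough capacity sketch for over-provisioning on a single r5 Dedicated Host.
# The over-commit ratios are assumptions for illustration only.

HOST_VCPUS = 96          # r5 Dedicated Host: 48 physical cores / 96 vCPUs
HOST_MEMORY_GIB = 768    # r5 Dedicated Host memory
VM_VCPUS = 4             # r5.xlarge-sized VM
VM_MEMORY_GIB = 32


def max_vms(cpu_overcommit: float = 1.0, mem_overcommit: float = 1.0) -> int:
    """Return how many VMs fit, given CPU and memory over-commit ratios."""
    by_cpu = (HOST_VCPUS * cpu_overcommit) // VM_VCPUS
    by_mem = (HOST_MEMORY_GIB * mem_overcommit) // VM_MEMORY_GIB
    return int(min(by_cpu, by_mem))


if __name__ == "__main__":
    print("1:1 allocation       :", max_vms())                    # 24 VMs, the EC2 baseline
    print("2:1 vCPU over-commit :", max_vms(cpu_overcommit=2.0))  # still 24, memory-bound
    print("2:1 CPU and memory   :", max_vms(2.0, 2.0))            # 48 VMs on paper
```

The point of the sketch is that over-commitment only pays off where real utilisation is low, and that memory usually becomes the binding constraint before CPU does.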

You could also refer to my post Migrate On-premises Workloads To AWS, which introduces the VMware Cloud on AWS architecture in depth.

Slow Down Your Pace

Essentially, the cloud itself is a journey of transformation; it could be an evolution (It is the right time to get rid of legacy architectures) or even a revolution (Why do we have to change?). As I mentioned earlier, nothing can be revamped simply by snapping our fingers. Every intention has a background, and in order to carry out our intentions we must have a blueprint (How will we get there?), define checkpoints (Are we on track?), review the whole progress (Did we miss anything?), and so on. If you look at this journey carefully, you will notice that this framework is not a single cycle; it takes place over and over again, because features, services, and even partnerships are published in the cloud world faster than you can imagine. That is the reason you should always keep your mindset in the Day-1 state.

Slowing down does not mean pausing everything; instead, it gives you space to verify what goal you aim to fulfil and whether you are still aligned with it.
