Not all clouds are equal. More specifically, not all of the elements, components, mechanics, attributes and optimisations inside different clouds are provisioned, planned, built, deployed or even maintained equally.
But to call all the different clouds in some way inherently imbalanced would be a mischaracterisation; every instance of cloud is (or at least should be) commensurate with the tasks that it has been brought into being for. If there is any inherent difference, it is simply that there are different engine sizes out there driving around the virtualised supercomputing highways that are now defined by the existence of cloud itself.
Why Clouds are Diversely Differentiated
Many of the reasons clouds come in different shapes are down to the way they are tuned and optimised for a specific type of task. Heavily transactional clouds have extra Input/Output (I/O) capability, others are pumped for processing power with extra CPU juice, some are memory-optimised and others are built specifically for big data analytics functions, and so on.
Other key reasons for cloud computing service differentiation include geography. This is a reality clearly borne out in Europe where, with so many closely partnering nations nestled right up next to each other, many enterprises need to build regional clouds to serve regional teams. Sometimes this is done to gain specific language support, but more typically it is simply a matter of lower latency.
Closely allied with these geographic disparities are cloud differentiations built as a result of different billing management nuances, especially across multinational firms. Following directly on from these drivers – and again with particular relevance for mainland Europe and its island neighbours – regional security and compliance regulations also create differentiated clouds.
Kubernetes as a Cloud Container Connector
In order to achieve agility across this disparate landscape, many organisations are embracing the open source cloud container orchestration technology Kubernetes.
Pronounced ‘koo-ber-net-ees’ and deriving from the ancient Greek for pilot or helmsman, Kubernetes is one of the fastest-growing elements of cloud. This technology is helping enterprises to connect resources across the increasingly complex, diverse and sometimes fragmented fabric of the modern cloud.
When we look at how we need to try to work effectively across these different cloudscapes, we must look for consistent software tooling that can be applied across multi-clustered cloud environments. Kubernetes can be part of that consistent toolset, but only if we know how to use it – and the global skills base for this technology does need a significant boost.
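To make the idea of consistent tooling concrete, here is a minimal sketch – using the official Kubernetes Python client – that runs the same simple check against every cluster defined in a kubeconfig file, whichever cloud each cluster happens to live in. The context names come from the reader’s own kubeconfig; nothing provider-specific is assumed.

```python
# Minimal sketch: run the same check against every cluster in a kubeconfig,
# regardless of which cloud provider hosts it. Context names come from the
# reader's own kubeconfig; nothing provider-specific is assumed.
from kubernetes import client, config

def survey_clusters() -> None:
    contexts, _active = config.list_kube_config_contexts()
    for ctx in contexts:
        name = ctx["name"]
        api_client = config.new_client_from_config(context=name)
        version = client.VersionApi(api_client).get_code()
        nodes = client.CoreV1Api(api_client).list_node().items
        print(f"{name}: Kubernetes {version.git_version}, {len(nodes)} node(s)")

if __name__ == "__main__":
    survey_clusters()
```

The same few lines work against a cluster on Linode, a hyperscaler or an on-premises rack, which is precisely the consistency argument being made here.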
Aiming to bolster the planet’s collective Kubernetes capabilities and competencies is the Cloud Native Computing Foundation (CNCF). The organisation used its KubeCon + CloudNativeCon North America conference and convention this month in Los Angeles to formally launch its own Kubernetes and Cloud Native Associate (KCNA) exam.
Described as a pre-professional certification, this exam is designed for candidates interested in advancing to a higher professional level through a demonstrated understanding of Kubernetes foundational knowledge and skills.
“The KCNA certification is a game-changer for everyone new to cloud-native. It is built to be beginner-friendly and inclusive for everyone, from engineers to program managers to business teams. KCNA is the first step to demonstrate knowledge of cloud-native fundamentals and tools, such as Kubernetes. We see it becoming a bridge that connects different teams within the organisation and simplifies the introduction for all cloud-native enthusiasts,” said Katie Gamanji, ecosystem advocate, CNCF.
This type of skills qualification will arguably be widely welcomed, not least because almost every cloud-native Kubernetes industry commentary or study highlights the skills challenge.
“This [Kubernetes skills] challenge ranked very high in our most recent container studies. There isn’t a large enough talent pool of qualified applicants who are experienced in managing Kubernetes to meet the growing need for its management,” said Paul Nashawaty, senior analyst at ESG.
Template-ised Abstraction
Part of the reason this technology appears to be so tough to master is that different cloud service providers deliver different Kubernetes implementations. This often means that applications need to be template-ised so that they can be deployed on a multi-region, multi-cloud basis via a layer of abstraction.
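As a rough illustration of that template-isation (in practice teams typically reach for tools such as Helm or Kustomize), the sketch below keeps one base Deployment manifest and fills in the cloud-specific details – container image, replica count, kubeconfig context – from a per-provider values table before applying it to each cluster. All of the names, registries and contexts shown are illustrative assumptions.

```python
# Minimal sketch of "template-ised" deployment: one base manifest, with the
# cloud-specific details filled in per provider before it is applied to that
# provider's cluster. All names, registries and kubeconfig contexts below are
# illustrative assumptions.
import copy
from kubernetes import client, config

BASE_MANIFEST = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": None,  # filled in per cloud
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {"containers": [{"name": "web", "image": None}]},
        },
    },
}

# Hypothetical per-cloud values: kubeconfig context -> settings
CLOUD_VALUES = {
    "linode-eu-west": {"image": "registry.example.eu/web:1.4", "replicas": 3},
    "other-cloud-us": {"image": "registry.example.com/web:1.4", "replicas": 5},
}

def render(values: dict) -> dict:
    manifest = copy.deepcopy(BASE_MANIFEST)
    manifest["spec"]["replicas"] = values["replicas"]
    manifest["spec"]["template"]["spec"]["containers"][0]["image"] = values["image"]
    return manifest

for context_name, values in CLOUD_VALUES.items():
    api_client = config.new_client_from_config(context=context_name)
    client.AppsV1Api(api_client).create_namespaced_deployment(
        namespace="default", body=render(values)
    )
```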
When we also remember that the configuration of applications across different clouds is not always consistent – backend database credentials, for example, can be configured differently – it becomes clear that getting Kubernetes to run containers properly across modern cloud-native instances is rarely a click-and-go affair.
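To make that credentials point concrete, a similarly hedged sketch: the same application Secret is created in each cluster, but populated with that cloud’s own backend database settings. Again, the contexts, names and keys are purely illustrative, and real credentials would come from a secrets manager or CI pipeline rather than an inline dictionary.

```python
# Minimal sketch: the same application Secret, populated with different backend
# database settings per cluster. Contexts, names and keys are illustrative;
# real credentials would come from a secrets manager, not an inline dict.
from kubernetes import client, config

# Hypothetical per-cluster database settings
DB_CONFIG = {
    "linode-eu-west": {"DB_HOST": "db.eu.example.internal", "DB_USER": "web_eu"},
    "other-cloud-us": {"DB_HOST": "db.us.example.internal", "DB_USER": "web_us"},
}

for context_name, creds in DB_CONFIG.items():
    api_client = config.new_client_from_config(context=context_name)
    secret = client.V1Secret(
        metadata=client.V1ObjectMeta(name="web-db-credentials"),
        string_data=creds,  # plain strings; the API server stores them base64-encoded
    )
    client.CoreV1Api(api_client).create_namespaced_secret(
        namespace="default", body=secret
    )
```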
Among the companies being particularly vocal on the state of Kubernetes at KubeCon + CloudNativeCon 2021 was ‘alternative cloud’ provider (i.e. not AWS, Microsoft or Google) Linode. Keen to share her views on the state of the cloud nation this year, Linode director of product marketing Hilary Wilmoth said that even before the event, she expected much of the discussion at KubeCon to touch on how Kubernetes continues to develop and become mainstream.
“By the shift to mainstream, I mean that Kubernetes is moving beyond the original primary audience of infrastructure developers and more into the wider IT ecosystem. We’re seeing adoption and interest by teams at small and mid-size businesses that want to do more cutting-edge things, but also don’t have huge amounts of resources to throw at new technologies to completely overhaul their business. They are looking for ways to take advantage of the good things Kubernetes can deliver, without the perceived required skills overhead,” said Wilmoth.
Linode’s own Kubernetes Engine is designed to make it easier for developers to build and run containerised applications. The company is currently ‘excited’ about related aspects of the technology, including horizontal cluster autoscaling: tooling that automatically scales applications up and down in real time according to workload levels and resource limits.
“We want to add things that will keep Kubernetes simple to adopt, while also closing some of the gaps that still exist, like availability, reliability and getting insight into what is going on in real time,” added Wilmoth.
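Node-level cluster autoscaling of the kind described here is configured on the provider’s side, but the closely related pod-level mechanism – the standard Kubernetes HorizontalPodAutoscaler – can be sketched in a few lines. The deployment name and thresholds below are illustrative assumptions, not Linode defaults.

```python
# Minimal sketch: a pod-level HorizontalPodAutoscaler that scales a Deployment
# between 2 and 10 replicas based on CPU utilisation. Node-level cluster
# autoscaling is configured on the provider side and is not shown here.
# The deployment name and thresholds are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```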
Establishing Cloud Democracy
Cloud complexity and diversity show no signs of slowing down, and Kubernetes shows little sign of losing its lustre as a key facilitating bridge for gaining agility and control inside cloud computing software engines everywhere.
If we can address the skills challenge and take a cloud-native approach where applications and services are designed for the cloud from the ground up, then we can perhaps establish the IT democracy that we all seek to build.
Cloud is meant to convey freedom, flexibility and perhaps even a little forgiveness. Organisations like the Linux Foundation and the CNCF put open source freedom and knowledge sharing at the heart of how they operate. We can get to a more democratic cloud and be able to control complexity, but some of us will need to go back to school and skill up. Who wouldn’t vote for that?