Nobody today argues with the notion that open source is now enterprise-grade. With the stalwarts of the early open road now firmly established as fully-fledged platform companies, the previously proprietary-only behemoths of Silicon Valley and beyond have also adopted the open mantra.
With a history dating back to the start of the 1990s, not unlike Red Hat's, the Germany-headquartered SUSE (pronounced ‘sue-sah’) is now known for more than its core enterprise-grade Linux operating system (OS). Today SUSE is also understood to be a company with an arsenal of open source software-defined infrastructure technologies and a set of cloud-centric application delivery tools.
With so much of modern cloud now being compartmentalised and containerised into smaller and theoretically more easily digestible components of technology, the industry has collectively embraced the open source Kubernetes container orchestration system as a key route to control.
Given containers’ abilities to act as discrete components of defined application functionality, the Lego building block ideal of cloud composability may finally be coming of age. So what shapes can we build next and how strong will they be?
Homestead Harvesting & Ranching
Now big enough to make sizeable acquisitions to bolster its own research and development function, SUSE bought Rancher back in 2020. A dedicated Kubernetes specialist, the Rancher team is well versed in all manner of deployment, scaling, maintenance, scheduling and operation across multiple application containers.
When the Rancher ranchers aren’t wrangling ‘cattle’ (by which we figuratively mean ‘data and applications’, obviously) they’re following a more plant-based diet of crops and grains (by which we mean IT infrastructure substrates). This has led to project Harvester, an open source Hyperconverged Infrastructure (HCI) technology designed to provide cloud-based storage and networking services.
Keen to create happy families right across the homestead, SUSE has now established a newly formalised integration between SUSE Rancher and Harvester. So in less technical terms, what we have here is cloud container management now more closely aligned with the cloud infrastructure layer on which it will need to exist.
Reaching Production Quality
This move is intended to provide cloud engineers with a more usable and more easily deployable way of building what we would call ‘production quality’ software i.e. fit for purpose and ready to run in live business operations. Software that has ‘hit production’ is generally agreed to have gone through stress, penetration and user acceptance testing procedures and been exposed to real-world data streams and use cases.
Because Harvester unifies the delivery of virtual machines and containers in cloud computing environments, the journey to production quality software is (in theory at least) faster.
Talking about the drive to bridge from legacy technologies to cloud-native, SUSE’s president of engineering and innovation Sheng Liang says that Harvester is designed to leverage SUSE Rancher’s GitOps-powered continuous delivery capabilities to manage potentially thousands of HCI clusters running a mix of virtual machines (VMs) and containerised workloads from core to edge.
“SUSE Rancher users can now create Kubernetes clusters on Harvester VMs. Harvester, on the other hand, can leverage SUSE Rancher to provide centralised user authentication and multi-cluster management,” notes Liang and team.
Wider Kubernetes Developments
The company points out that although the installation of Kubernetes is designed to be simple, it can often require additional knowledge if a company needs to reset a cluster to test an app in different Kubernetes versions. To combat this complexity and get businesses closer to production quality cloud-native deployments faster, SUSE has further developed Rancher Desktop to make running Kubernetes and Docker workloads on a local development PC or Mac easier.
“[We have also developed] project Epinio. Designed to allow engineers to write code that will be deployed on Kubernetes without spending the time or money to teach everyone a new platform, Epinio allows users to bring an application from source code to deployment. It does this by providing the correct abstractions to developers while allowing operators to continue working in an environment they are comfortable with,” explains Liang and team.
There’s no point in getting to production-quality cloud (or indeed cloud-native) deployments if the IT team is unable to see what’s going on and gain the visibility it needs into the systems being deployed. Cognizant of this challenge, SUSE has tabled project Opni for cloud observability.
Observability data is part of every Kubernetes environment, but few organisations use it effectively to gather available insights about the health of their operating systems and potential downtime for clusters and applications. SUSE has provided anomaly detection by applying artificial intelligence (AI) to Kubernetes through Opni, which provides log and metric anomaly detection for Kubernetes clusters.
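Opni’s machine learning models are considerably more sophisticated than this, but the basic shape of metric anomaly detection can be sketched in a few lines of Python. This is a generic z-score illustration of the idea, not Opni’s actual algorithm, and the latency figures below are invented for the example:

```python
from statistics import mean, stdev

def detect_anomalies(values, threshold=2.5):
    """Return indices of points whose z-score (distance from the
    mean, measured in standard deviations) exceeds the threshold."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # a perfectly flat series has no outliers
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Hypothetical request latencies in milliseconds; index 6 is a spike.
latencies = [12, 11, 13, 12, 11, 12, 95, 12, 11]
print(detect_anomalies(latencies))  # → [6]
```

Production systems like Opni apply this kind of statistical baselining continuously across thousands of log and metric streams, which is precisely why doing it by hand rarely happens.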
Kubernetes Keys in Kubewarden
As we move closer to cloud-native, we perhaps need to think about our initial misgivings when cloud was first really postulated somewhere around the start of this millennium.
“You mean we hand over our data, our applications and all manner of our digital intellectual property (IP) to a services provider who will keep our information in a data centre nestled up in close proximity beside our competitors’ IT assets!?,” said almost every IT manager and CIO when first offered the opportunity to embrace cloud.
We have of course moved on somewhat from that point in the year 2000, but that doesn’t mean we can let security provisioning become an afterthought. SUSE says it has developed the creatively named project Kubewarden to address contemporary container-centric cloud security concerns. This technology allows security policies to be written in any software coding language that can compile to WebAssembly (WASM), an open standard for software portability.
Kubewarden allows operations and governance teams to codify the rules of what can and cannot be run in their environments. Policies are distributed through container registries, so workloads and policies can be distributed and secured in the same way.
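By way of illustration, a Kubewarden policy is bound to a cluster through a ClusterAdmissionPolicy resource that points at a WASM module stored in a container registry. The sketch below is indicative only: the field names follow Kubewarden’s ClusterAdmissionPolicy CRD (the exact apiVersion varies by Kubewarden release) and the module tag shown is an assumed example rather than a pinned recommendation:

```yaml
# Reject privileged pods cluster-wide. The policy logic itself lives in
# the WASM module pulled from the registry, not in this manifest.
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: no-privileged-pods
spec:
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.2.5
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations: ["CREATE", "UPDATE"]
  mutating: false
```

Because the policy module is just another registry artifact, the same signing and scanning pipeline an organisation already applies to its workload images can be applied to its policies.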
What is happening here overall is engineering. It is software engineering, obviously. But more specifically it is a process of quality control and service management much like you would expect to see on any manufacturing line designed to produce a production-quality final product.
Whether you’re making cakes or cloud, you need the right ingredients, you need a continuous production line with a secure view into the mixture being used and just occasionally a few German chocolate sprinkles on top.