
Charmed Kubeflow 1.7 Beta is here. Try it now!




Canonical is happy to announce that Charmed Kubeflow 1.7 is now available in Beta. Kubeflow is a foundational part of the MLOps ecosystem that has been evolving over the years. With Charmed Kubeflow 1.7, users benefit from the ability to run serverless workloads and perform model inference regardless of the machine learning framework they use.

We are looking for data scientists, machine learning engineers and AI enthusiasts to take Charmed Kubeflow 1.7 Beta for a test drive and share their feedback with us.

What’s new in Kubeflow 1.7?

Kubeflow 1.7 is the latest version of the upstream project, scheduled to go live very soon. The release roadmap includes many improvements, such as:

  • Testing on the latest versions of Kubernetes.
  • Improved user isolation.
  • Simplified hyperparameter trial and log access.
  • Distributed Training Operator support for PaddlePaddle, a newly supported ML framework (a job manifest is sketched after this list).
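
To illustrate the PaddlePaddle addition, a distributed job is described with the same CRD pattern as the other training operators. The sketch below is adapted from the upstream training-operator examples and applied with kubectl; the job name and image tag are illustrative rather than prescribed:

kubectl apply -f - <<EOF
apiVersion: kubeflow.org/v1
kind: PaddleJob
metadata:
  name: paddle-simple-cpu
spec:
  paddleReplicaSpecs:
    Worker:
      replicas: 2
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: paddle
              # Illustrative image; any PaddlePaddle image with your training code works
              image: registry.baidubce.com/paddlepaddle/paddle:2.4.0rc0-cpu
              command: ["python", "-m", "paddle.distributed.launch", "run_check"]
EOF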

Google's Data and AI Trends 2023 report suggests that organisations are rethinking their business intelligence (BI) strategy, moving away from a dashboard-focused model towards an action-focused approach. To achieve this, enterprises need to look for solutions with better capabilities for handling structured data, simplified methods to tune models, and lower operational costs.

In addition to the features introduced upstream, Canonical's Charmed Kubeflow adds Knative and KServe support, bringing new enhancements for both serving and inference. Furthermore, Charmed Kubeflow offers more possibilities to run machine learning workloads across clouds.

Run serverless machine learning workloads

Serverless computing supports DevOps adoption by saving developers from having to describe the underlying infrastructure explicitly. On the one hand, it increases developer productivity by reducing routine tasks. On the other hand, it reduces operational costs.

In the machine learning operations (MLOps) space, Knative is an open-source project that allows users to deploy, run and manage serverless cloud-native applications on Kubernetes. In practice, it enables machine learning workloads to run in a serverless manner, taking away the burden of provisioning and managing servers from machine learning engineers and data scientists. The Pareto principle applies to data science too: professionals spend roughly 80% of their time gathering data, cleaning messy data or planning infrastructure usage, rather than doing actual analysis or generating insights. Knative addresses this problem by letting professionals focus on their code.

Charmed Kubeflow 1.7 Beta includes Knative as part of the default bundle. This enables data scientists to allocate more time to their own activities, rather than struggling with the infrastructure itself. Together with KServe, which has also been added to the default bundle, it addresses three main areas of working with machine learning models: building, serving and eventing.
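
As a rough sketch of what this enables, once the 1.7 Beta bundle is up, a trained model can be exposed through a KServe InferenceService. The namespace, name and storage URI below are illustrative (the URI points at the scikit-learn sample model from the upstream KServe examples), so treat this as a sketch rather than a prescribed workflow:

kubectl apply -n <your-namespace> -f - <<EOF
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      # Illustrative sample model from the upstream KServe examples
      storageUri: gs://kfserving-examples/models/sklearn/1.0/model
EOF

With the serverless deployment mode, Knative Serving scales the predictor on demand, including down to zero when the endpoint is idle.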

Have MLOps everywhere. Run on private, public, hybrid or multi-cloud

Depending on your company policy, the computing power available, and security and compliance restrictions, you may prefer running machine learning workflows on private or public clouds. However, working with datasets that live in different clouds is often difficult: connecting the dots means connecting different data sources.

To address this, companies need machine learning tooling that works across cloud environments, both private and public. It should allow them to complete most of the machine learning workflow within one tool, so they do not spend even more time connecting those dots.

Charmed Kubeflow is an end-to-end MLOps platform that allows professionals to perform the entire machine learning lifecycle within one tool. Once data is ingested, activities such as training, automation, model monitoring and model serving can all be performed inside it. Charmed Kubeflow was designed from the start to run on any cloud platform and supports various scenarios, including hybrid cloud and multi-cloud. The latest additions to the default bundle enable data scientists and machine learning engineers to benefit from inference and serving, regardless of the chosen ML framework.

Join us live: tech talk on Charmed Kubeflow 1.7

Today, 8 March at 5 PM GMT, Canonical will host a live stream about Charmed Kubeflow 1.7 Beta. Together with Daniela Plasencia and Noha Ihab, we will continue the tradition that started with the previous release. We will answer your questions and talk about:

  • The latest release: Kubeflow 1.7 and how our distribution handles it
  • Key features covered in Charmed Kubeflow 1.7
  • The differences between the upstream release and Canonical’s Charmed Kubeflow

The live stream will be available on both LinkedIn and YouTube, so pick your platform and meet us there.

Charmed Kubeflow 1.7 Beta: try it out

Are you already a Charmed Kubeflow user?

If you are already familiar with Charmed Kubeflow, you only need to upgrade to the latest version. We have prepared a guide with all the steps you need to take.
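
The guide is the authoritative reference, since the upgrade is performed charm by charm. As a rough illustration of the command shape only (the actual charm names and ordering are in the guide), each charm is moved to the new channel with Juju's refresh command:

juju refresh <charm-name> --channel 1.7/beta

repeating this for every charm in the bundle.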

Please be mindful that this is not a stable version, so there is always a risk that something might go wrong. Save your work and proceed with caution. If you encounter any difficulties, Canonical's MLOps team is here to hear your feedback and help you out. Since this is a Beta version, Canonical does not recommend running it, or upgrading to it, in any production environment.

Are you new to Charmed Kubeflow?

If you are a real adventurer, you can go ahead and start directly with the Beta version, although it might bring a few more challenges. For all the prerequisites, follow the available tutorial and check out the section “Deploying Charmed Kubeflow”.

Once you have installed MicroK8s and Juju, you need to add the Kubeflow model and then deploy the latest version. Follow the instructions below to get it up and running:
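
If the Kubeflow model does not exist yet, it can be created first; this is a minimal sketch, assuming Juju has already been bootstrapped on MicroK8s as described in the tutorial:

juju add-model kubeflow

Then deploy the bundle from the Beta channel: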

juju deploy kubeflow --channel 1.7/beta --trust
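
The charms take a while to download and settle, so it is worth following the deployment progress, for example with:

juju status

and waiting until all units report an active status before moving on.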

Now, you can go back to the tutorial to finish the configuration of Charmed Kubeflow or read the documentation to learn more about it.

The stable version will be released soon, so please report any bugs or submit your improvement ideas on Discourse. The known issues are also listed there. 

Don’t be shy. Share your feedback.

Charmed Kubeflow is an open-source project that grows because of the care, time and feedback that our community gives. The latest release in beta is no exception, so if you have any feedback or questions about Charmed Kubeflow 1.7, please don’t hesitate to let us know. 

Learn more about MLOps



Run Kubeflow anywhere, easily

With Charmed Kubeflow, deployment and operations of Kubeflow are easy for any scenario.

Charmed Kubeflow is a collection of Python operators that define integration of the apps inside Kubeflow, like katib or pipelines-ui.

Use Kubeflow on-prem, desktop, edge, public cloud and multi-cloud.

Learn more about Charmed Kubeflow ›


What is Kubeflow?

Kubeflow makes deployments of Machine Learning workflows on Kubernetes simple, portable and scalable.

Kubeflow is the machine learning toolkit for Kubernetes. It extends Kubernetes' ability to run independent and configurable steps with machine learning-specific frameworks and libraries.

Learn more about Kubeflow ›


Install Kubeflow

The Kubeflow project is dedicated to making deployments of machine learning workflows on Kubernetes simple, portable and scalable.

You can install Kubeflow on your workstation, local server or public cloud VM. It is easy to install with MicroK8s on any of these environments and can be scaled to high-availability.

Install Kubeflow ›
