by Canonical on 29 March 2023
Run serverless ML workloads. Optimise deep learning models. Expand your data science tooling.
Canonical, the publisher of Ubuntu, announced today the general availability of Charmed Kubeflow 1.7. Charmed Kubeflow is an open-source, end-to-end MLOps platform that can run on any cloud, including hybrid cloud or multi-cloud scenarios. This latest release adds the ability to run serverless machine learning workloads and to serve models regardless of the framework professionals use. These capabilities increase developer productivity by reducing routine tasks, helping organisations lower their operational costs, and free developers from having to describe the underlying infrastructure explicitly.
Based on a poll run by Canonical, open source and ease of use are the most important factors professionals consider when selecting AI/ML tooling. Charmed Kubeflow 1.7 expands its spectrum of open-source frameworks and libraries and makes the model development and deployment process easier with a new set of capabilities.
Serverless workloads and new model serving capabilities
In a recent MLOps report by the Deloitte AI Institute, 74% of respondents indicated that they plan to integrate artificial intelligence (AI) into all enterprise applications within three years. To achieve this, companies need to find ways to scale their AI projects in a reproducible, portable and reliable manner. Charmed Kubeflow 1.7 brings new capabilities for enterprise AI:
- The introduction of Knative in the Kubeflow bundle allows organisations to run serverless machine learning workloads.
- The addition of KServe enables users to perform model serving regardless of the framework (see the sketch after this list).
- Support for new model serving runtimes, such as NVIDIA Triton Inference Server.
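As an illustration of this serving path, the minimal sketch below uses the KServe Python SDK to deploy a scikit-learn model as an InferenceService; KServe provisions a Knative service underneath it, so the endpoint scales with traffic and can scale to zero when idle. The service name, namespace and model URI are hypothetical placeholders, and SDK class names may vary between KServe versions.

```python
from kubernetes import client
from kserve import (
    KServeClient,
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1PredictorSpec,
    V1beta1SKLearnSpec,
    constants,
)

# Define an InferenceService: KServe backs it with a Knative service,
# so the predictor autoscales with request load, including down to zero.
isvc = V1beta1InferenceService(
    api_version=constants.KSERVE_GROUP + "/v1beta1",
    kind="InferenceService",
    metadata=client.V1ObjectMeta(
        name="sklearn-iris",        # hypothetical service name
        namespace="kubeflow-user",  # hypothetical user namespace
    ),
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            sklearn=V1beta1SKLearnSpec(
                # Hypothetical model location; any storage URI KServe supports works here.
                storage_uri="gs://kfserving-examples/models/sklearn/1.0/model"
            )
        )
    ),
)

# Submit the service and wait for the endpoint to become ready.
kserve_client = KServeClient()
kserve_client.create(isvc)
kserve_client.wait_isvc_ready("sklearn-iris", namespace="kubeflow-user")
```

The same InferenceService spec accepts other predictor runtimes in place of the scikit-learn one, which is what makes serving framework-agnostic.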
While observability features have been available in the product since last year, Charmed Kubeflow 1.7 comes with new dashboards for an improved user experience and easier infrastructure monitoring. More information about these capabilities can be found in Canonical’s recently published guide: Integrate with Observability Stack using COS.
More development tooling
Charmed Kubeflow 1.7 supports PaddlePaddle, an industrial platform with a rich set of features that help data scientists develop deep learning models. Deep learning is a subset of machine learning that uses neural networks to mimic the human brain; it requires a tremendous amount of computing power and very large volumes of data. PaddlePaddle addresses this challenge by enabling parallel, distributed deep learning.
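For context, the minimal sketch below shows what a single PaddlePaddle training step looks like; the layer sizes and random stand-in data are placeholders for illustration, not part of the Charmed Kubeflow release itself.

```python
import paddle
from paddle import nn

# A small feed-forward classifier; Paddle runs eagerly ("dygraph" mode) by default.
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
optimizer = paddle.optimizer.Adam(learning_rate=1e-3, parameters=model.parameters())
loss_fn = nn.CrossEntropyLoss()

# One training step on random stand-in data (a real job would read a dataset).
features = paddle.randn([32, 784])
labels = paddle.randint(0, 10, [32])

logits = model(features)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
optimizer.clear_grad()
print(float(loss))
```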
Deep learning is gaining popularity, and PaddlePaddle itself has more than 1.9 million users. With the introduction of PaddlePaddle, Charmed Kubeflow expands its library of open-source frameworks and gives professionals the flexibility to choose what suits them best.
Improved model optimisation features
Data scientists spend a lot of time optimising their models and need to stay up to date with the latest AI advancements, frameworks and libraries. Katib addresses this by simplifying log access and hyperparameter tuning. Charmed Kubeflow’s Katib component has a new user interface (UI) that reduces the number of low-level commands needed to inspect logs and find correlations between trials. Furthermore, Katib includes new features such as the tune API, which makes tuning experiments easier to build and simplifies how users access trial metrics from the Katib database.
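As a sketch of that workflow, the snippet below uses the Katib Python SDK’s tune API to launch a hyperparameter tuning experiment from a plain Python objective. The experiment name, objective function and search ranges are hypothetical, and exact method signatures may differ between SDK versions.

```python
import kubeflow.katib as katib

# Hypothetical objective: Katib collects metrics printed to stdout as "name=value".
def objective(parameters):
    result = 4 * int(parameters["a"]) - float(parameters["b"]) ** 2
    print(f"result={result}")

client = katib.KatibClient()
client.tune(
    name="quick-tune",  # hypothetical experiment name
    objective=objective,
    parameters={
        "a": katib.search.int(min=10, max=20),
        "b": katib.search.double(min=0.1, max=0.2),
    },
    objective_metric_name="result",
    max_trial_count=12,
)

# Once the trials complete, fetch the best hyperparameters found.
print(client.get_optimal_hyperparameters("quick-tune"))
```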
With these Katib enhancements, data scientists can reach better performance metrics, reduce time spent on optimisation and experiment quickly. This results in faster project delivery, shorter machine learning lifecycles and a smoother path to optimised decision-making with AI projects.
Other highlights in Charmed Kubeflow 1.7
Charmed Kubeflow 1.7 also supports statistical analysis, addressing a new category of professionals who work with statistics. It can analyse both structured and unstructured data, providing access to packages such as R Shiny and libraries such as Plotly.
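As a small illustration of that kind of work, the sketch below is the sort of Plotly snippet a data scientist might run in a Charmed Kubeflow notebook; the bundled iris sample dataset stands in for real data.

```python
import plotly.express as px

# Load Plotly's bundled iris sample dataset and plot two of its measurements.
df = px.data.iris()
fig = px.scatter(
    df,
    x="sepal_width",
    y="sepal_length",
    color="species",
    title="Sepal dimensions by species",
)
fig.show()
```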
Charmed Kubeflow also recently became NVIDIA DGX-software certified, accelerating at-scale deployments of AI and data science projects on the highest-performing hardware.
Further reading: