Managing ML Workflows with Advanced Data on Kubeflow

By Advanced Data & Analytics Team | Posted on February 12, 2020 | Posted in AI/ML, Featured, Data & Analytics, Kubernetes

Arrikto, a Palo Alto startup focused on cloud-native storage, recently held a webinar demonstrating how multi-cloud Machine Learning (ML) workflows can be managed on Kubeflow, Google’s open-source ML project.

You can watch the hour-long demonstration here. In it, you’ll learn how data scientists can use Kubeflow to easily set up their own ML development environment, work in that environment on-premises, then seamlessly move their workflow into the public cloud.

We’ve written about the promise of Kubeflow in the past, and Arrikto’s webinar reinforces this belief. But as you watch the webinar, here are some things you should keep in mind:

1. Kubeflow is not quite ready for prime time

Technically, Kubeflow is still in beta, which means it’s likely not the best tool for most use cases yet.

Still, the project’s trajectory is definitely encouraging. Kubeflow already abstracts away low-level Kubernetes concerns (hardware, resources, scheduling), and its plan to similarly abstract away the details of ML itself is a great next step.
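
To make that abstraction concrete, here’s a minimal sketch, using the Kubeflow Pipelines Python SDK (kfp), of a training step declaring its hardware needs directly in Python. The image, script path, and resource figures are illustrative assumptions, not something shown in the webinar:

    from kfp import dsl

    @dsl.pipeline(name='train-on-gpu', description='Single training step with resource requests.')
    def train_on_gpu():
        # A containerized training step; the image and entrypoint are placeholders.
        train = dsl.ContainerOp(
            name='train',
            image='tensorflow/tensorflow:2.1.0-gpu-py3',
            command=['python', '/app/train.py'],
        )
        # Declare hardware needs in Python; Kubeflow translates these into
        # Kubernetes resource requests and handles the scheduling details.
        train.set_cpu_request('4')
        train.set_memory_request('16G')
        train.set_gpu_limit(1)

The data scientist never writes a pod spec or node selector; those Kubernetes details stay behind the SDK.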

2. Creating a CI/CD pipeline for ML is still a challenge

IT organizations will succeed with ML by creating an experience that lets data scientists spend their time on data science problems rather than on IT or ML infrastructure.

One of the end goals of Kubeflow is achieving that experience. However, when it comes to maintaining a full pipeline for ML workflows, one that mirrors the development processes teams already use for software, all the pieces are still being put together.
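
As a rough illustration of the direction, here’s a minimal sketch of how a CI job might compile and submit a pipeline using the kfp SDK. The module name, endpoint URL, and pipeline function are hypothetical placeholders:

    import kfp
    from kfp import compiler

    # Hypothetical module containing the pipeline definition (built in an earlier CI step).
    from pipeline_def import demo_pipeline

    # Compile the pipeline function into a deployable archive that can be kept as a build artifact.
    compiler.Compiler().compile(demo_pipeline, 'demo_pipeline.tar.gz')

    # Connect to a Kubeflow Pipelines endpoint (hypothetical host) and kick off a run,
    # for example as a smoke test on every merge to the main branch.
    client = kfp.Client(host='http://ml-pipeline.example.com')
    client.create_run_from_pipeline_func(demo_pipeline, arguments={})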

3. There’s a lot to learn inside the Kubeflow package

Kubeflow isn’t really a single product, but rather, a collection of tools working in concert toward the ultimate goal of making scaling and deploying ML models easier.

Already included in the pre-release version are:

  • Jupyter notebooks for experimentation and sharing
  • Katib for tuning hyperparameters on Kubernetes
  • Kubeflow Pipelines for building and deploying container-based ML workflows (a minimal sketch follows below)
  • Metadata tracking for ML workflows
  • Nuclio functions for serverless data processing and ML

While each of these elements is a powerful tool, each also comes with its own learning curve and requires its own resources to put to work.
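
As an example of the Pipelines piece, here’s a minimal sketch of a two-step, container-based workflow defined with the kfp DSL. The images and commands are illustrative placeholders:

    from kfp import dsl

    def preprocess_op():
        # Placeholder container step for data preparation.
        return dsl.ContainerOp(
            name='preprocess',
            image='python:3.7',
            command=['python', '-c', 'print("preprocessing data")'],
        )

    def train_op():
        # Placeholder container step for model training.
        return dsl.ContainerOp(
            name='train',
            image='tensorflow/tensorflow:2.1.0-py3',
            command=['python', '-c', 'print("training model")'],
        )

    @dsl.pipeline(name='demo-pipeline', description='Preprocess the data, then train a model.')
    def demo_pipeline():
        preprocess = preprocess_op()
        train_op().after(preprocess)  # run training only after preprocessing finishes

Even a toy workflow like this touches containers, the DSL, and the Pipelines backend, which gives a sense of where the learning curve comes from.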

4. Building IT operations around ML is hard

It’s one thing to provide high-end workstations, GPUs, and data to data scientists. But as you try to do that across an entire enterprise, a lot of problems quickly emerge.

By the time Kubeflow reaches its first official release, those working on the project will hopefully have cracked the code on integrating the data scientist’s user experience into the IT workflow as smoothly as possible.

Want to gain clarity on AI adoption? Download our free eBook The Enterprise Guide to Kicking Off the AI Adoption Process.
