There’s a lot of potential in Machine Learning (ML). Unfortunately, there are also a number of obstacles companies hit when it comes to realizing that potential.
During a panel at last summer’s Transform 2019 conference, it was pointed out that nearly 90% of ML models cooked up by data scientists never actually make it into production.
Why do ML projects fail?
Why is this the case? Why are nine out of ten ML projects doomed to failure? There are a couple of reasons.
One is that the technology is new, and most IT organizations are simply unfamiliar with the software tools and specialized hardware, such as Nvidia GPUs, that are required to effectively deploy ML models.
The other reason is the disconnect between IT and data science. IT tends to stay focused on making things available and stable. They want uptime at all costs. Data scientists, on the other hand, are focused on iteration and experimentation. They want to break things.
Learn more about your path to AI/ML adoption by reading our free in-depth guide, Accelerating Your Success with Artificial Intelligence and Machine Learning.
Enter Kubeflow
Originally developed by Google, Kubeflow is an open-source project designed to facilitate the end-to-end process of developing and deploying ML models.
Kubeflow sits atop Kubernetes in your development workflow, providing data scientists with a self-service playground to conduct ML model experiments. Then, once those experiments are completed, it packages the model up and publishes it in a way that can be used by production systems.
Right now, Kubeflow is still in its infancy; version 1.2 was only released in November 2020. But some companies have already put it to work, and so far the results are promising.
But while Kubeflow is a tool that can potentially solve the ML deployment problem, it won’t be of much use unless companies looking to unlock the potential of ML address their biggest hurdle: not understanding where they're at in their technological evolution.
Benefits of Kubeflow
If your organization is looking to grow its ML capabilities, Kubeflow can make the process much easier.
For data scientists working on ML models, it provides them with a self-service environment for experimentation.
It also accelerates their ability to publish those models to a production environment by managing the workflow: packaging the model, deploying it to clusters, and making it available for use by other applications.
Beyond the data science team, Kubeflow also facilitates the training of ML models: it takes a known set of inputs and expected outputs and runs them through a model so that the model can learn.
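Training in this supervised sense can be sketched in a few lines. The example below is a generic illustration of the idea, not Kubeflow-specific: it fits a simple linear model to known input/output pairs by gradient descent, using only the Python standard library.

```python
# Minimal supervised-training sketch: fit y = w*x + b to known data
# via gradient descent. Generic illustration, not Kubeflow-specific.

data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0), (4.0, 9.0)]  # known inputs/outputs (y = 2x + 1)

w, b = 0.0, 0.0          # model parameters, start untrained
lr = 0.01                # learning rate
for _ in range(5000):    # training loop: repeatedly show the model the data
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y     # prediction error on this example
        grad_w += 2 * err * x     # gradient of squared error w.r.t. w
        grad_b += 2 * err         # gradient w.r.t. b
    w -= lr * grad_w / len(data)  # nudge parameters against the gradient
    b -= lr * grad_b / len(data)

print(round(w, 2), round(b, 2))  # learned parameters approach 2 and 1
```

Real training runs differ in scale, not in kind: the same show-data, measure-error, adjust-model loop, run on GPUs across a cluster, which is exactly the workload Kubeflow schedules on Kubernetes.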
ML differs from the traditional software development life cycle in that once a model is developed and ready to use, there are steps beyond the usual packaging and deployment.
These steps include gathering feedback on how the model is running in production and how accurate its results are, measures that are necessary to help an ML model keep learning.
Before Kubeflow, these steps were often beyond most IT organizations' capabilities, and as a result, projects often stalled before they could be completed. That's starting to change.
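That feedback step can be sketched concretely. The snippet below is a hypothetical, simplified illustration (the function names and the 0.9 threshold are placeholders, not part of any Kubeflow API): it compares the model's logged production predictions against ground-truth labels that arrive later, and flags the model for retraining when accuracy drops.

```python
# Sketch of the production-feedback step: compare logged predictions
# against ground-truth labels that arrive later, and flag the model
# for retraining if accuracy falls below a threshold.
# All names and thresholds here are hypothetical placeholders.

def accuracy(predictions, actuals):
    """Fraction of predictions that matched the eventual true outcome."""
    correct = sum(p == a for p, a in zip(predictions, actuals))
    return correct / len(predictions)

def needs_retraining(predictions, actuals, threshold=0.9):
    """The feedback signal: below-threshold accuracy triggers retraining."""
    return accuracy(predictions, actuals) < threshold

# Example: predictions logged in production, matched with later labels.
logged_preds = ["spam", "ham", "spam", "ham", "spam"]
true_labels  = ["spam", "ham", "ham",  "ham", "spam"]

print(accuracy(logged_preds, true_labels))          # 0.8
print(needs_retraining(logged_preds, true_labels))  # True (0.8 < 0.9)
```

The hard part in practice isn't this arithmetic; it's wiring the loop into production infrastructure so the retraining actually happens, which is the gap Kubeflow aims to close.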
Features of Kubeflow
Kubeflow is not really a single product, but more of a collection of tools working in concert to make scaling and deploying ML models easier and more efficient. With it, you can put to work:
- Jupyter notebooks for experimentation and sharing
- Katib for tuning hyperparameters on Kubernetes
- Kubeflow Pipelines for building and deploying ML workflows based on containers
- Tracking of metadata of ML workflows
- Nuclio functions for serverless data processing and ML
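Pipelines are the piece most directly aimed at the deployment problem. In the Kubeflow 1.x era, a pipeline authored with the Python SDK compiles down to an Argo Workflow that Kubernetes can execute. As a rough sketch of what a two-step train-then-deploy pipeline looks like at that level (the image names and commands are hypothetical placeholders):

```yaml
# Hedged sketch of a compiled two-step pipeline as an Argo Workflow.
# Image names and commands are hypothetical placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: train-and-deploy-
spec:
  entrypoint: train-and-deploy
  templates:
    - name: train-and-deploy
      dag:
        tasks:
          - name: train
            template: train
          - name: deploy
            template: deploy
            dependencies: [train]   # deploy runs only after training succeeds
    - name: train
      container:
        image: registry.example.com/ml/train:latest   # hypothetical image
        command: [python, train.py]
    - name: deploy
      container:
        image: registry.example.com/ml/deploy:latest  # hypothetical image
        command: [python, deploy.py]
```

Because each step is just a container, the same pipeline definition runs identically on a laptop cluster or in production, which is what makes the handoff from data science to IT tractable.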
Kubeflow is still in its infancy, but its promise for increased innovation in the ML space is already clear. As more companies become familiar with it, more ML projects are going to reach the finish line.
Looking to increase your company’s ML capabilities and presence in the cloud? Download our free eBook, 3 Simple Steps to Applying the Technical Maturity Framework When Going Cloud-Native.
Enter the Technology Evolution Playbook
Effectively putting ML to work means understanding a number of technical variables from the outset, including:
- Where your data is currently located
- Whether that data is clean
- What data you need from elsewhere in order to drive your ML process
- Where your ML workloads will be running
Getting to the bottom of these and other variables requires using a technical maturity framework. Without knowing whether you are even ready to utilize ML, you’re going to go nowhere fast.
The Redapt Technology Evolution Playbook will help you nail down exactly what your business is trying to achieve, and whether actually using something like ML makes sense in the first place.
So what’s the solution to the ML model problem?
There's no single solution yet for the logjam between data scientists' ML models and production. But there are steps that can improve the percentage of models making it to deployment.
One of those steps is adopting Kubeflow, which is tailor-made for simplifying ML deployment across the board.
The other step—the one every company looking to utilize ML can take right now—is to thoroughly assess their technical maturity.
That way, they can get a handle on their current capabilities, learn whether ML is even something they need to pursue, and pinpoint ways to bridge the gap between how data scientists dream up ML models and how IT puts those models to work.
If you’re interested in learning more about ML and assessing the technical evolution of your business, download our free eBook The Redapt Technical Maturity Framework.