
When Do I Need to Use Deep Learning?

By Lev Selector | Posted on July 9, 2021 | Posted in AI/ML, Data & Analytics

The most routine data science tasks don’t need the sheer horsepower that deep learning models provide.

But as enterprises start working with larger data sets, a growing number of complex tasks are arriving on the scene.



In recent years, there’s been tremendous growth in deep learning.

The area of artificial intelligence (AI) in particular has received a substantial amount of attention and investment.

AI systems perform operations usually associated only with human intelligence — operations like playing games, understanding speech, extracting information from text, and piloting autonomous vehicles. 

And deep learning (DL) is the core technology behind many of these AI systems.

Key to the rise of DL has been the cloud, which allows models to train on massive datasets. 

But beyond the ability to sift through oceans of data, deep learning offers enterprises three important traits for remaining competitive: speed, scalability, and flexibility. Let’s unpack how:


Speed

By design, deep learning algorithms are meant to learn quickly. This is made possible by using clusters of GPUs and CPUs to spread compute-intensive training tasks across many machines, producing models that can then be deployed.


Scalability

In addition to spreading out resources, deep learning can leverage the on-demand nature of the cloud for virtually unlimited resources. This makes it possible for models of any size to be deployed as needed without having to invest in ever-growing amounts of infrastructure.


Flexibility

As deep learning has matured, there has been an explosion in pre-trained models available to enterprises, with the major cloud providers all offering “plug-and-play” options for tasks such as speech-to-text, language translation, chatbots, and anomaly detection.


Deep learning vs. traditional machine learning

Despite the rapid growth of deep learning, most of the common data science tasks that enterprises need assistance with can still be tackled by traditional machine learning.

This is because widespread data science initiatives like predictive analytics, trend forecasting, business optimization, and recommendation systems rely upon a tabular data format (think Excel or relational databases). 
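As a toy illustration of how far simple, tabular-friendly techniques can go, here is a minimal k-nearest-neighbors classifier in plain Python. The data set, feature names, and labels are invented for the example; a real project would more likely reach for a library such as scikit-learn:

```python
import math
from collections import Counter

# Toy tabular data: (monthly_spend, visits_per_month) -> outcome label.
# These rows are invented example data, not real customer records.
rows = [
    ((20.0, 1), "churned"),
    ((25.0, 2), "churned"),
    ((90.0, 8), "retained"),
    ((75.0, 6), "retained"),
    ((30.0, 2), "churned"),
    ((85.0, 7), "retained"),
]

def knn_predict(query, rows, k=3):
    """Classify `query` by majority vote among its k nearest rows (Euclidean distance)."""
    dists = sorted((math.dist(query, features), label) for features, label in rows)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

print(knn_predict((28.0, 2), rows))  # a low-spend, low-visit customer → churned
```

No GPUs, no training loop: for small tabular problems like this, classical methods are often fast, cheap, and good enough.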

When larger, unstructured data sets enter the picture, however, deep learning shines. For example, consider image recognition: deep learning models can be used to identify objects and individuals in scores of photographs and videos, making rapid facial recognition possible.

Similarly, deep learning algorithms are what power speech recognition and natural language processing, allowing for advances in digital personal assistants like Alexa and Siri, as well as helping computers better understand context and speech patterns.

Even recommendation engines, which are ubiquitous today on commerce and streaming sites like Amazon and Netflix, increasingly rely on deep learning algorithms.

Keeping deep learning affordable

One important thing to keep in mind is that deep learning is still in its relative infancy. And like most new technologies, it can be prohibitively expensive to employ.

One major reason for this is the usage-based pricing of the public cloud. Training deep learning models is compute-intensive, which means smaller enterprises may quickly run into sticker shock as their deep learning capabilities mature.
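To make the sticker-shock point concrete, here is a back-of-the-envelope training-cost estimate in Python. The hourly rate and GPU-hour figures are invented placeholders, not real cloud prices:

```python
def training_cost(gpu_hours: float, price_per_gpu_hour: float, runs: int = 1) -> float:
    """Rough cloud bill for `runs` training runs billed at a usage-based hourly GPU rate."""
    return gpu_hours * price_per_gpu_hour * runs

# Hypothetical numbers: a 72 GPU-hour training job at $3.00 per GPU-hour,
# retrained 10 times while tuning hyperparameters.
total = training_cost(gpu_hours=72, price_per_gpu_hour=3.00, runs=10)
print(f"${total:,.2f}")  # → $2,160.00
```

The multiplier that surprises teams is usually `runs`: experimentation means retraining many times, so a modest-looking per-run cost compounds quickly.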

To avoid this, we recommend organizations take some preventative measures. These include:

  • Using pre-trained models provided by cloud providers and third parties whenever possible
  • Leveraging the public cloud for model prototyping, then moving the models to a private cloud with GPU-enabled servers for ongoing learning
  • Whenever possible, using open-source software like Kubeflow to deploy models into Kubernetes

These measures can go a long way to keeping deep learning costs down, allowing organizations to reap the benefits of the technology for specific tasks without overwhelming their budgets.

To learn more about deep learning, machine learning, and AI, contact one of our experts today.