It used to be that working on monolithic applications was a tedious process, vulnerable to catastrophic crashes that made an entire application inoperable for an extended period of time.
It also made the actual work of developing an application slow and plodding, with limited deployments and the need for never-ending communication between teams.
Then microservices and containers arrived, and with them the ability to break an application into separate components. Each of these parts is independent of the others, which greatly speeds up development since developers are no longer waiting on other developers just to move their part of a project forward.
In addition, separating an application into multiple components builds resilience. If one part of the application were to fail, the entire application doesn’t fail with it.
As much as this approach has simplified the development process, it’s not without its own complications.
Eventually, an enterprise using microservices architecture and containers will scale to the point where juggling hundreds, even thousands, of containers becomes untenable from both a resource and a budget perspective.
Thankfully, there’s a process for an enterprise to orchestrate and monitor their environments. Here’s what it can look like:
Step 1: Utilize Kubernetes
Developed by the minds at Google, Kubernetes acts as a conductor for all the containers used in development.
In a nutshell, Kubernetes gathers a large number of containers into a cluster and ensures they play well with each other. This also makes monitoring those containers easier as there’s just one cluster to pay attention to at a time.
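In Kubernetes terms, each application component is described declaratively and the cluster keeps it running. As a rough sketch (the names, image, and port below are hypothetical), a minimal Deployment might look like:

```yaml
# Minimal Deployment sketch -- names, image, and port are hypothetical
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 3                     # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: example-service
          image: registry.example.com/example-service:1.0
          ports:
            - containerPort: 8080
```

If one of those pods crashes, Kubernetes replaces it automatically, which is exactly the resilience described above.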
Step 2: Deploy a Kubernetes Manager
Just like with containers, as an enterprise grows, so does the number of clusters. This is where a Kubernetes manager such as Rancher comes in.
Continuing with the orchestration theme, tools like Rancher act as the conductor of Kubernetes by monitoring all of the clusters holding containers.
Step 3: Manage Your Kubernetes Manager
Eventually, even a tool like Rancher will need to be maintained and upgraded as new capabilities and security patches arrive. Thankfully, there are automation tools available to make this happen without downtime for containers.
Step 4: Break Stuff
Now that our tech stack is in place, it’s time to test its resiliency. That’s where something like Chaos Monkey is useful.
Developed by Netflix, Chaos Monkey is designed to wreak havoc on containers by randomly selecting some of them for destruction at any given time. This not only tests an application's ability to recover during a failure, but also exposes potential weaknesses in how automation tools are deployed.
Put another way, it’s like firefighters creating a controlled burn to ward off potential large-scale fires and test containment times all at once.
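The core idea can be sketched in a few lines of Python. This is not Chaos Monkey itself, just an illustration of its random-victim approach; in a real cluster, the chosen pods would then be deleted through the Kubernetes API:

```python
import random

def pick_victims(pods, fraction=0.2, rng=None):
    """Randomly choose a fraction of pods to terminate, Chaos Monkey-style.

    pods     -- list of pod names (hypothetical identifiers)
    fraction -- share of the fleet to kill each round
    rng      -- optional random.Random, useful for reproducible tests
    """
    rng = rng or random.Random()
    count = max(1, int(len(pods) * fraction))  # always kill at least one
    return rng.sample(pods, count)

# Example: from ten running pods, select two at random for destruction.
victims = pick_victims([f"web-{i}" for i in range(10)])
```

A resilient stack should absorb these deletions without user-visible downtime; if it cannot, the experiment has found a weakness worth fixing.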
Step 5: Keep an Eye on Things
With a tech stack in place and a tool like Chaos Monkey running wild, it’s time to overlay a monitoring solution to ensure containers stay up and running.
For this, you want to deploy something like Prometheus, an open-source monitoring service that provides you with powerful metrics and precision alerts for when failures occur.
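As a rough sketch of what that looks like in practice (the job name and target address below are hypothetical), a minimal Prometheus scrape configuration might be:

```yaml
# Minimal prometheus.yml sketch -- job name and target are hypothetical
global:
  scrape_interval: 15s        # how often Prometheus polls each target

scrape_configs:
  - job_name: "example-service"
    static_configs:
      - targets: ["example-service:8080"]  # endpoint exposing /metrics
```

Alerting rules can then fire when a scrape fails or a metric crosses a threshold, turning raw measurements into the precision alerts mentioned above.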
This is just one possible solution
In the end, how an enterprise builds, orchestrates, and monitors a container and microservices environment will depend on their specific needs and capabilities.
But regardless of whether they do the work in-house or with a partner, the benefits of embracing microservices and containers — and a DevOps culture as a whole — are well worth the effort.
For more information on microservices and containers, download our free eBook: 3 Simple Steps to Applying the Technical Maturity Framework When Going Cloud-Native.