Digital Transformation – Container-Based Platforms for Microservice Applications

Digital transformation is fundamentally changing how enterprise IT operates and delivers value to its customers. A cloud-native application platform is the preferred foundation in an enterprise's digital journey, enabling it to transform, deploy, and operate application workloads in an optimized manner.

Migrating workloads to a cloud platform as-is, and deploying and operating them the same way as before, does not qualify as digital transformation.

Digital transformation means:

  • Deploying and managing applications at scale.
  • Recreating the entire platform, including infrastructure, from code (IaC – Infrastructure as Code).
  • Deploying and releasing modules and code to production in an automated, seamless manner.
  • Automating product component upgrades using blue-green deployments.
  • Continuous integration and delivery.
  • Moving workloads without having to redesign your applications or completely rethink your infrastructure, which lets you standardize on a platform and avoid vendor lock-in.
  • Easily performing canary deployments and rollbacks.
  • Performing zero-downtime platform infrastructure maintenance (upgrades/patching) via rolling updates and migration with node pools (see the sketch after this list).
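As a minimal illustration of the rolling-update item above, the sketch below patches a Kubernetes Deployment (Kubernetes is covered later in this post) to a new image tag and lets the platform replace its pods gradually. It uses the official kubernetes Python client; the deployment name, namespace, and image tag are hypothetical placeholders.

```python
# Sketch: trigger a zero-downtime rolling update by changing a Deployment's image.
# Assumes a Deployment named "web" already exists in namespace "prod" (hypothetical names).
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context
apps = client.AppsV1Api()

new_image = "registry.example.com/web:1.4.2"  # hypothetical image tag

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{"name": "web", "image": new_image}]
            }
        }
    }
}

# Kubernetes replaces pods gradually (a rolling update), so the service stays available.
apps.patch_namespaced_deployment(name="web", namespace="prod", body=patch)
```

The same primitive underpins blue-green and canary releases: instead of patching one Deployment in place, you run two and shift traffic between them.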

Microservice application architecture plays an important role in the enterprise IT transformation journey.

Let's move on and discuss the monolithic architecture pattern and its issues.

Monolithic Architecture

  • A monolithic application is built as a single unit. Enterprise applications contain three parts: a database, a client-side user interface, and a server-side application. The server-side application handles HTTP requests, executes domain-specific logic, retrieves and updates data in the database, and populates the HTML views sent to the browser. To make any alteration to the system, a developer must build and deploy an updated version of the server-side application.
  • In a monolithic architecture, all of the REST endpoints and the business and data layers are wired together behind a single REST interface. The only physically separate component is the front end.
Monolithic App

Monolithic Application Issues

  • The application is too large and complex to fully understand, making it hard to change quickly and correctly.
  • The application's size can slow down start-up time.
  • The entire application must be redeployed on each update.
  • Continuous deployment is difficult.
  • It is difficult to scale when different modules have conflicting resource requirements.
  • Because all instances of the application are identical, a single bug can impact the availability of the entire application.
  • Changes in frameworks or languages affect the entire application.

Moving on, let's discuss the microservice architecture framework and how it helps enterprises carry out digital transformation.

Microservice architecture

Microservice architecture design principles help enterprises modernize their application platforms. Application modernization using microservice design principles means splitting your application into a set of smaller, interconnected services instead of building a single monolithic application.

Each microservice is a small application with its own hexagonal architecture consisting of business logic along with various adapters. Some microservices expose a REST, RPC, or message-based API, and most services consume APIs provided by other services. Other microservices might implement a web UI.
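To make this concrete, below is a minimal sketch of one such microservice exposing a REST API with Flask; the service, route, and data are hypothetical and stand in for real business logic and adapters.

```python
# Sketch: a single, independently deployable microservice exposing a REST API.
# The business logic sits behind a thin HTTP adapter; all names and data are illustrative.
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for the service's own data store.
CATALOG = {"1": {"id": "1", "name": "notebook", "price": 3.50}}

@app.route("/products/<product_id>", methods=["GET"])
def get_product(product_id):
    product = CATALOG.get(product_id)
    if product is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(product)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```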

The Microservice architecture pattern significantly impacts the relationship between the application and the database.

Some APIs are also exposed to mobile, desktop, and web apps. The apps don't, however, have direct access to the back-end services. Instead, communication is mediated by an intermediary known as an API Gateway. The API Gateway is responsible for tasks such as load balancing, caching, access control, API metering, and monitoring.
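The sketch below is a highly simplified illustration of that mediation: a gateway receives client calls and forwards them to internal services, which is the point where a real gateway (Apigee, for example) would add caching, access control, metering, and load balancing. The routes and upstream addresses are hypothetical.

```python
# Sketch: an API Gateway that mediates between client apps and back-end services.
# A production gateway (e.g. Apigee) would add caching, access control, and metering here.
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)

# Hypothetical internal service addresses, normally resolved via service discovery.
ROUTES = {
    "products": "http://product-service:8080",
    "orders": "http://order-service:8080",
}

@app.route("/api/<service>/<path:rest>", methods=["GET"])
def proxy(service, rest):
    upstream = ROUTES.get(service)
    if upstream is None:
        return jsonify({"error": "unknown service"}), 404
    # Forward the request to the internal service and relay its response.
    resp = requests.get(f"{upstream}/{rest}", params=request.args, timeout=5)
    return resp.content, resp.status_code, {"Content-Type": resp.headers.get("Content-Type", "application/json")}

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```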

Microservice Reference Architecture


Microservice architecture tackles the problem of complexity by decomposing the application into a set of manageable services, which are much faster to develop and much easier to understand and maintain.

Microservice architecture enables each service to be developed independently by a team that is focused on that service.

Microservice architecture reduces the barrier to adopting new technologies, since developers are free to choose whatever technologies make sense for their service and are not bound to choices made at the start of the project.

Microservice architecture enables each Microservice to be deployed independently. As a result, it makes continuous deployment possible for complex applications. Microservice architecture enables each service to be scaled independently.

Microservice platform using open source packages as platform components

A microservice application platform can be built using open source components:

An API layer where all the REST API services reside, and a proxy layer that acts as an intermediary connecting the APIs to the centralized data streams (the Kafka layer) – Apigee.

A centralized logging system that aggregates logs from the different microservice components and makes them available in a central location – Splunk.

A distributed tracing system that provides analytics on latency across the microservice architecture – Zipkin.

Service registry – the phone book for your microservices. Each service registers itself with the service registry and tells the registry where it lives (host, port, node name), along with other service-specific metadata – ZooKeeper.
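As a rough sketch of that registration step, the snippet below uses the kazoo client to publish an ephemeral ZooKeeper node describing where a service instance lives; the path, host, and port are hypothetical.

```python
# Sketch: a service instance registering itself in ZooKeeper (the "phone book").
# Uses the kazoo client; the path, host, and metadata below are illustrative only.
import json
from kazoo.client import KazooClient

zk = KazooClient(hosts="zookeeper:2181")
zk.start()

instance = {"host": "10.0.1.17", "port": 8080, "node": "payments-1"}

zk.ensure_path("/services/payments")
# An ephemeral node disappears automatically if this instance dies, keeping the registry honest.
zk.create("/services/payments/instance-",
          value=json.dumps(instance).encode("utf-8"),
          ephemeral=True,
          sequence=True)

# A consumer can now discover the live instances of the payments service.
print(zk.get_children("/services/payments"))
```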

Data streams – Apache Kafka.
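A minimal sketch of a service publishing an event onto a Kafka data stream, using the kafka-python client; the broker address and topic name are hypothetical.

```python
# Sketch: one microservice publishing an event onto a centralized Kafka data stream.
# Uses kafka-python; the broker address and topic name are illustrative.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send("order-events", {"order_id": "1001", "status": "CREATED"})
producer.flush()  # block until the event has actually been delivered
```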

Configuration management – server- and client-side support for externalized configuration in a distributed system – Spring Cloud Config.

CI/CD and infrastructure configuration management – Jenkins, Ansible, and CloudFormation templates.

However, automating and operating this type of platform becomes much more complicated. Nearly all applications nowadays need answers for things like replication of components, auto-scaling, load balancing, rolling updates, logging across components, monitoring and health checking, service discovery, and authentication.

As application development moves towards a container-based approach, the need to orchestrate and manage resources grows. Kubernetes is the leading platform for reliable scheduling of fault-tolerant application workloads.

The marriage of Kubernetes with containers for a microservices architecture makes a lot of sense, and clears the path for efficient software delivery.

As a container orchestration tool, Kubernetes helps to automate many aspects of microservices development, including:

  • Container deployment.
  • Elasticity (scaling up and down to meet demand; see the sketch after this list).
  • The logical grouping of containers.
  • Management of containers and applications that use them.
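For illustration, the sketch below covers the first two items: it creates a Deployment for a container image and then scales it, using the official kubernetes Python client. The image, names, and replica counts are hypothetical placeholders.

```python
# Sketch: deploying a container to Kubernetes and then scaling it, via the Python client.
# The image, labels, names, and replica counts are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="orders", labels={"app": "orders"}),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "orders"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "orders"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="orders",
                    image="registry.example.com/orders:1.0.0",
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

# Container deployment: pod creation, grouping, and management are handled by Kubernetes.
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Elasticity: scale the same Deployment up when demand grows (or down when it falls).
apps.patch_namespaced_deployment_scale(
    name="orders", namespace="default", body={"spec": {"replicas": 5}})
```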

Kubernetes – Container Orchestration Platform

  • An open source container orchestration platform that allows large numbers of containers to work together, reducing operational burden.
  • Built on top of Docker.
  • Provides a platform for automating deployment, scaling, and operations of application containers across clusters of hosts.
  • Scales up or down by adding or removing containers as demand changes.
  • It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).
Kubernetes Reference Architecture

Kubernetes – Key Components

  • Container – a sealed application package that can be run in isolation.
  • Pod – a small group of tightly coupled containers (for example, a web server).
  • Controller – a loop that drives current state toward desired state (for example, the replication controller, run by kube-controller-manager).
  • Service – a set of running pods that work together, such as load-balanced back ends.
  • Labels – metadata attached to objects (Phase = Canary, Phase = Prod); see the sketch below.
  • API server – validates and configures data for the API objects, which include pods, services, replication controllers, and others. The API server services REST operations and provides the front end to the cluster's shared state through which all other components interact.
  • etcd – a reliable, distributed key-value store.
  • Scheduler – the Kubernetes scheduler is one of the critical components of the platform. It runs on the master nodes, working closely with the API server and controllers. The scheduler is responsible for matchmaking: pairing a pod with a node.
Kubernetes -Components
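To make the label and selector mechanics concrete, the sketch below asks the API server which pods a given label selector matches; this is exactly how a Service (or a canary phase) groups its back ends. The namespace and labels are hypothetical.

```python
# Sketch: labels are how Kubernetes groups pods. A Service (or a canary rollout) selects
# pods purely by label selector; here we ask the API server the same question directly.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# All pods that a Service with selector "app=web,phase=canary" would load-balance across.
canary_pods = core.list_namespaced_pod(
    namespace="default", label_selector="app=web,phase=canary")

for pod in canary_pods.items:
    print(pod.metadata.name, pod.status.phase, pod.status.pod_ip)
```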

Managed Kubernetes services from the public cloud providers remove the need to install and integrate many of these components in the microservices platform yourself.

A cloud provider's managed Kubernetes platform helps enterprises deploy and operate modernized workloads in an easy and seamless way.

Google Kubernetes Engine (GKE) – Reference Architecture

Kubernetes Engine is a managed Kubernetes service offered by Google. It allows customers to easily create and maintain Kubernetes clusters that use the open-source version of Kubernetes as a base. Kubernetes Engine also adds components (add-ons) to the cluster that help the applications running in it use other Google products and services, such as Google Container Registry, Stackdriver monitoring and logging, and integration with Identity and Access Management (IAM). Google also manages the Kubernetes control plane, sometimes referred to as the Kubernetes master, and guarantees a certain level of uptime for the control plane.

GKE Reference Architecture

AWS Kubernetes (EKS) Reference Architecture

Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes.

Amazon EKS runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Amazon EKS automatically detects and replaces unhealthy control plane instances, and it provides automated version upgrades and patching for them.

Amazon EKS is also integrated with many AWS services to provide scalability and security for your applications, including the following:

  • Amazon ECR for container images.
  • Elastic Load Balancing for load distribution.
  • IAM for authentication.
  • Amazon VPC for isolation.

Amazon EKS runs up-to-date versions of the open-source Kubernetes software, so you can use all the existing plugins and tooling from the Kubernetes community.

AWS Kubernetes (EKS) architecture

How EKS works

EKS Deployment workflow

AWS ECS Reference Architecture – Non-Managed Container Platform

You can use an AWS ECS-based platform for deploying microservice applications, but you need to manage the servers and clusters yourself. AWS now also provides Fargate, a compute engine for Amazon ECS that allows you to run containers without having to manage servers or clusters.

AWS ECS reference architecture

Azure AKS Reference Architecture

Azure Kubernetes Service (AKS) is the quickest way to use Kubernetes on Azure. AKS provides capabilities to deploy and manage Docker containers using Kubernetes. Azure DevOps helps in creating Docker images for faster deployments and reliability using the continuous build option.

Like the other cloud providers' offerings, AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure. As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance. The Kubernetes masters are managed by Azure.

Azure Managed Kubernetes (AKS) Platform Reference Architecture

It's easy to create an AKS platform and deploy applications using Azure DevOps.

Azure AKS Platform Key Components

Azure Kubernetes Service (AKS). AKS is an Azure service that deploys a managed Kubernetes cluster.

Kubernetes cluster. AKS is responsible for deploying the Kubernetes cluster and for managing the Kubernetes masters. You only manage the agent nodes.

Virtual network. By default, AKS creates a virtual network to deploy the agent nodes into. For more advanced scenarios, you can create the virtual network first, which lets you control things like how the subnets are configured, on-premises connectivity, and IP addressing.

Ingress. An ingress exposes HTTP(S) routes to services inside the cluster.
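As a rough sketch, an ingress route could be created programmatically as shown below; the host, service name, and port are hypothetical, and on AKS you would more commonly apply an equivalent YAML manifest or Helm chart.

```python
# Sketch: exposing an in-cluster service over HTTP(S) routes via an Ingress resource.
# Host, service name, and port are illustrative; an ingress controller must be installed.
from kubernetes import client, config

config.load_kube_config()
networking = client.NetworkingV1Api()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="web-ingress"),
    spec=client.V1IngressSpec(rules=[
        client.V1IngressRule(
            host="shop.example.com",
            http=client.V1HTTPIngressRuleValue(paths=[
                client.V1HTTPIngressPath(
                    path="/",
                    path_type="Prefix",
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name="web",
                            port=client.V1ServiceBackendPort(number=80),
                        )
                    ),
                )
            ]),
        )
    ]),
)

networking.create_namespaced_ingress(namespace="default", body=ingress)
```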

External data stores. Microservices are typically stateless and write state to external data stores, such as Azure SQL Database or Cosmos DB.

Azure Active Directory. AKS uses an Azure Active Directory (Azure AD) identity to create and manage other Azure resources such as Azure load balancers. Azure AD is also recommended for user authentication in client applications.

Azure Container Registry. Use Container Registry to store private Docker images, which are deployed to the cluster. AKS can authenticate with Container Registry using its Azure AD identity. Note that AKS does not require Azure Container Registry. You can use other container registries, such as Docker Hub.

Azure Pipelines. Pipelines is part of Azure DevOps Services and runs automated builds, tests, and deployments. You can also use third-party CI/CD solutions such as Jenkins.

Helm. Helm is a package manager for Kubernetes – a way to bundle Kubernetes objects into a single unit that you can publish, deploy, version, and update.

Azure Monitor. Azure Monitor collects and stores metrics and logs, including platform metrics for the Azure services in the solution and application telemetry. Use this data to monitor the application, set up alerts and dashboards, and perform root cause analysis of failures. Azure Monitor integrates with AKS to collect metrics from controllers, nodes, and containers, as well as container logs and master node logs.

We'll discuss how to secure the Kubernetes platform in the next blog post.