AKS Q/A (1)- AKS interview questions and answers

One-word difference between Docker and Kubernetes

Docker is a container platform and Kubernetes is a container orchestration platform.

Why are containers considered ephemeral?

  1. Short-lived by Design
    Containers are typically created to run a specific task or service and can be stopped, restarted, or destroyed without affecting the overall system. They don’t store persistent data by default.
  2. Stateless Architecture
    Most containerized applications follow a stateless model, meaning they don’t retain data or state between restarts. This makes scaling and recovery easier.
  3. Volatile Storage
    Any data written inside a container (unless explicitly mounted to a volume) is lost when the container stops or crashes. This reinforces their temporary nature.
  4. Immutable Infrastructure
    Containers are built from images that define their configuration. Instead of modifying a running container, you rebuild and redeploy a new one — promoting consistency and repeatability.
  5. Rapid Deployment & Termination
    Containers can be spun up and torn down quickly, which is ideal for CI/CD pipelines, microservices, and cloud-native applications.

🧠 How to Handle Ephemerality

To manage the ephemeral nature of containers effectively:

  • Use volumes or persistent storage for data that must survive container restarts.
  • Implement state management outside the container (e.g., databases, cloud storage).
  • Use orchestration tools like Kubernetes to handle container lifecycle and resilience.
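As a sketch of the first point, a Pod manifest can mount a PersistentVolumeClaim so that data written to the mount path outlives container restarts (the names `app-data` and `/var/lib/data` here are illustrative, not from any particular setup):

```yaml
# Pod that mounts a PersistentVolumeClaim so data outlives the container
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /var/lib/data   # writes here persist across restarts
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data          # PVC must be created separately
```

Anything written outside the mounted path still lives in the container's writable layer and disappears when the container is removed.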

That's why, if a Docker container stops because of a memory issue or another failure, it stays down: Docker has no built-in auto-healing, so a human has to restart it. This is where Kubernetes comes into the picture.

With plain containers we need human intervention, while AKS restarts failed containers automatically.

1. If you delete the cluster, does all the data get deleted?

No, a Persistent Volume is not necessarily deleted, because it is stored separately from the Pod. This is one of the main purposes of using a Persistent Volume: to retain data even if the Pod is deleted or recreated. Whether the underlying disk actually survives depends on the reclaim policy (Retain keeps the disk, Delete removes it along with the claim).
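On AKS, the reclaim policy comes from the StorageClass that provisioned the volume. A minimal sketch using the Azure Disk CSI provisioner (the class name is illustrative):

```yaml
# StorageClass with Retain: the Azure disk is kept even if the PVC is deleted
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-retain          # illustrative name
provisioner: disk.csi.azure.com
reclaimPolicy: Retain           # default is Delete, which removes the disk with the PVC
```

With `Retain`, the released disk must be cleaned up or re-bound manually; with the default `Delete`, deleting the PVC deletes the disk too.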

2. Difference between kubectl apply and kubectl create

Command        | Description                   | Use Case
kubectl create | Creates a new resource        | Use only when the resource does not exist
kubectl apply  | Creates or updates a resource | Use when you may modify configs and want to re-apply

kubectl create – One-time creation

kubectl create -f deployment.yaml
  • Creates the resource from deployment.yaml
  • ❌ Will fail if the resource already exists:

Error from server (AlreadyExists): deployments.apps "my-app" already exists

kubectl apply – Declarative & idempotent

kubectl apply -f deployment.yaml
  • If the resource doesn’t exist, it creates it
  • If it exists, it updates it based on the changes in the YAML
  • Supports declarative configuration (preferred in GitOps, CI/CD)

Best Practice

Use kubectl apply for all YAML-based deployments in production and CI/CD pipelines — it’s declarative, idempotent, and update-friendly.

Why don't we use ReplicationController in AKS?

We don’t use ReplicationController in AKS (Azure Kubernetes Service) or modern Kubernetes clusters because it has been superseded by more capable controllers: Deployments and ReplicaSets.

Here’s a breakdown of why ReplicationController is not used in AKS or modern Kubernetes environments:


🔄 1. ReplicationController is Obsolete

  • ReplicationController (RC) is the original controller to manage the number of pod replicas.
  • It is now replaced by ReplicaSet, which offers more features and better support in current Kubernetes versions.

🚀 2. Deployment is the Standard Way

  • In AKS (and all modern Kubernetes setups), we use Deployments which manage:
    • Rolling updates
    • Rollbacks
    • Declarative pod updates
  • Deployments automatically use ReplicaSet internally to manage pod replicas.

⚙️ 3. Advanced Features Missing in RC

ReplicationController does not support:

  • Rollback of versions
  • Canary or blue/green deployments
  • Strategies for zero-downtime upgrades
  • Set-based label selectors (RC supports only equality-based selectors, unlike ReplicaSet)

✅ Summary: Use Deployment Instead

Feature              | ReplicationController | ReplicaSet              | Deployment
Manages pod replicas | ✅                    | ✅                      | ✅
Rolling updates      | ❌                    | ❌                      | ✅
Rollbacks            | ❌                    | ❌                      | ✅
Recommended in AKS   | ❌                    | ⚠️ (used by Deployment) | ✅

What is an Annotation?

Annotations are key-value pairs, like labels.

Unlike labels, they are not used for selection (e.g., no selector.matchAnnotations).

Instead, they are used to store extra metadata that might be:

  • Used by tools
  • Referenced by controllers or policies
  • Just informational

🆚 Labels vs Annotations

Feature                      | Labels                          | Annotations
Used for selecting/filtering | ✅ Yes                          | ❌ No
Max size                     | Small (~63 chars per key/value) | Larger (up to ~256KB total)
Purpose                      | Identify objects                | Store metadata/config for tools
Used by controllers/tools    | ✅ Sometimes                    | ✅ Frequently

🛠️ How to Add an Annotation

Example in YAML:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  annotations:
    example.com/maintainer: "team@example.com"
spec:
  containers:
    - name: nginx
      image: nginx

Q: What specific problems with Docker does Kubernetes solve?

A: Kubernetes addresses three major limitations of Docker:

1. Single Host Nature of Docker

  • Problem: Docker runs on a single host, so containers on the same host can impact each other. If one container consumes too many resources, it can cause other containers to fail.
  • Kubernetes Solution: Kubernetes operates as a cluster with multiple nodes, allowing distribution of containers across different hosts to minimize resource conflicts.

2. Lack of Auto-healing

  • Problem: If a container dies in Docker, it stays down until someone manually restarts it.
  • Kubernetes Solution: Kubernetes includes auto-healing capabilities through controllers like ReplicaSets that automatically restart failed containers without human intervention.

3. Limited Auto-scaling Capabilities

  • Problem: Docker doesn’t natively scale containers up or down based on load.
  • Kubernetes Solution: Kubernetes provides both manual scaling through replica configuration in YAML files and automatic scaling through Horizontal Pod Autoscalers (HPA) that respond to resource utilization thresholds.

How does Kubernetes handle multiple hosts?

A: Kubernetes uses a cluster architecture:

  • It consists of at least one master node and multiple worker nodes
  • This allows for distributing applications across different machines
  • If one node has problems, applications can run on other healthy nodes

How does Kubernetes implement auto-scaling?

A: Kubernetes offers two approaches to scaling:

  • Manual scaling: Update the number of replicas in YAML configuration files
  • Automatic scaling: Use Horizontal Pod Autoscaler (HPA) to automatically scale based on metrics like CPU utilization
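As a sketch of the automatic approach, a minimal HorizontalPodAutoscaler targeting a Deployment named `my-app` (the name, replica bounds, and 70% threshold are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # illustrative target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```

The HPA adjusts the Deployment's replica count between the min and max bounds based on observed CPU utilization (the metrics server must be running in the cluster, which AKS provides by default).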

How does Kubernetes achieve auto-healing?

A: Through continuous monitoring and controllers:

  • Kubernetes continuously compares the actual state of containers against the desired state (the kubelet via probes on each node, and controllers via the API server)
  • If a container starts to fail, Kubernetes restarts it or replaces the Pod, often before users notice an outage
  • This ensures continuous availability of applications
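One common way this failure detection is configured in practice is with a liveness probe: if the probe fails repeatedly, the kubelet restarts the container automatically (the `/healthz` path and timings below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: nginx
      livenessProbe:
        httpGet:
          path: /healthz       # illustrative health endpoint
          port: 80
        initialDelaySeconds: 5 # give the app time to start
        periodSeconds: 10      # probe every 10 seconds
        failureThreshold: 3    # restart after 3 consecutive failures
```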
