luminary.blog
by Oz Akan

Containers and Orchestration Refresher

Refresher for Docker containers and Kubernetes orchestration, with interview questions

18 min read


What are Containers?

Containers are lightweight, portable, executable packages that include everything needed to run an application: code, runtime, system tools, libraries, and settings. They provide consistent environments across development, testing, and production.

Containers vs Virtual Machines

Aspect         | Containers           | Virtual Machines
---------------|----------------------|--------------------
OS             | Share host OS kernel | Each has a full OS
Size           | Lightweight (MBs)    | Heavy (GBs)
Startup        | Seconds              | Minutes
Resource Usage | Low overhead         | High overhead
Isolation      | Process-level        | Hardware-level
Portability    | High                 | Medium

Key Benefits

  • Consistency: Eliminates the “works on my machine” problem
  • Portability: Run anywhere containers are supported
  • Efficiency: Better resource utilization than VMs
  • Scalability: Quick startup and lightweight
  • DevOps: Simplified CI/CD pipelines

Docker Fundamentals

Core Components

1. Docker Image

Read-only template used to create containers. Built using layers.

# Example Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]

2. Docker Container

Running instance of an image.

# Create and run container
docker run -d -p 3000:3000 --name myapp my-node-app
# List running containers
docker ps
# Stop container
docker stop myapp
# Remove container
docker rm myapp

3. Dockerfile

Text file with instructions to build an image.

# Multi-stage build example
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
FROM node:18-alpine AS runtime
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
USER node
CMD ["npm", "start"]

4. Docker Registry

Storage and distribution system for Docker images.

# Pull image from registry
docker pull nginx:latest
# Tag image
docker tag myapp:latest myregistry.com/myapp:v1.0
# Push image to registry
docker push myregistry.com/myapp:v1.0

Docker Architecture

Docker Client → Docker Daemon → Containers
                     │           Images
                     ▼           Volumes
              Docker Registry    Networks

Docker Commands Reference

Image Management

# Build image
docker build -t myapp:latest .
# List images
docker images
# Remove image
docker rmi myapp:latest
# Image history
docker history myapp:latest
# Inspect image
docker inspect myapp:latest

Container Management

# Run container
docker run -d --name myapp -p 8080:80 nginx
# Execute command in running container
docker exec -it myapp /bin/bash
# View container logs
docker logs myapp
# Copy files to/from container
docker cp file.txt myapp:/path/to/destination
# Container stats
docker stats myapp

Volume Management

# Create volume
docker volume create myvolume
# List volumes
docker volume ls
# Mount volume
docker run -v myvolume:/data myapp
# Remove volume
docker volume rm myvolume

Network Management

# Create network
docker network create mynetwork
# List networks
docker network ls
# Connect container to network
docker network connect mynetwork myapp
# Inspect network
docker network inspect mynetwork

Docker Compose

Multi-Container Applications

Define and run multi-container applications using YAML.

docker-compose.yml
version: '3.8'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
    depends_on:
      - db
      - redis
    volumes:
      - ./logs:/app/logs
    networks:
      - app-network
  db:
    image: postgres:13
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app-network
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
    networks:
      - app-network
volumes:
  postgres_data:
networks:
  app-network:
    driver: bridge

Compose Commands

# Start services
docker-compose up -d
# Stop services
docker-compose down
# View logs
docker-compose logs -f
# Scale service
docker-compose scale web=3
# Build services
docker-compose build
# Execute command
docker-compose exec web bash

Container Best Practices

Dockerfile Optimization

1. Use Multi-Stage Builds

# Build stage
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
# Runtime stage
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
USER node
CMD ["npm", "start"]

2. Minimize Layers

# Bad - Multiple layers
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y vim
# Good - Single layer
RUN apt-get update && \
apt-get install -y curl vim && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*

3. Use .dockerignore

node_modules
npm-debug.log
.git
.gitignore
README.md
.env
coverage

4. Run as Non-Root User

FROM node:18-alpine
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
USER nextjs

Security Best Practices

1. Use Official Base Images

# Prefer official images
FROM node:18-alpine
# Over custom or unknown images
FROM some-random-user/node

2. Keep Images Updated

# Use specific versions, not latest
FROM node:18.17.0-alpine

3. Scan for Vulnerabilities

# Docker vulnerability scanning
docker scan myapp:latest
# Trivy scanning
trivy image myapp:latest

4. Limit Container Capabilities

# Run with limited capabilities
docker run --cap-drop=ALL --cap-add=NET_ADMIN myapp

Container Orchestration

Why Orchestration?

  • Service Discovery: Find and connect services
  • Load Balancing: Distribute traffic across instances
  • Auto-scaling: Scale based on demand
  • Health Checks: Monitor and restart failed containers
  • Rolling Updates: Deploy without downtime
  • Resource Management: CPU and memory allocation

Kubernetes (K8s)

Core Concepts

Cluster Architecture

Master Node (Control Plane)
├─ API Server
├─ etcd (Key-Value Store)
├─ Scheduler
└─ Controller Manager
Worker Nodes
├─ kubelet
├─ kube-proxy
└─ Container Runtime (Docker/containerd)

Key Objects

1. Pod

Smallest deployable unit, contains one or more containers.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: web
      image: nginx:1.20
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox
      command: ['sh', '-c', 'sleep 3600']

2. Deployment

Manages ReplicaSets and provides declarative updates.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.20
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"

3. Service

Exposes pods to network traffic.

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP  # ClusterIP, NodePort, LoadBalancer

4. ConfigMap

Stores configuration data.

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database_url: "postgresql://localhost:5432/mydb"
  debug: "true"

5. Secret

Stores sensitive data.

apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  username: dXNlcm5hbWU=  # base64 encoded
  password: cGFzc3dvcmQ=  # base64 encoded

6. Ingress

Manages external access to services.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80

Kubernetes Commands

Cluster Management

# Cluster info
kubectl cluster-info
# Node status
kubectl get nodes
# Cluster events
kubectl get events

Pod Management

# List pods
kubectl get pods
# Pod details
kubectl describe pod my-pod
# Pod logs
kubectl logs my-pod
# Execute command in pod
kubectl exec -it my-pod -- /bin/bash
# Port forwarding
kubectl port-forward pod/my-pod 8080:80

Deployment Management

# Create deployment
kubectl create deployment web --image=nginx
# Apply YAML file
kubectl apply -f deployment.yaml
# Scale deployment
kubectl scale deployment web --replicas=5
# Rolling update
kubectl set image deployment/web web=nginx:1.21
# Rollback
kubectl rollout undo deployment/web
# Deployment status
kubectl rollout status deployment/web

Service Management

# Expose deployment
kubectl expose deployment web --port=80 --type=LoadBalancer
# List services
kubectl get services
# Service endpoints
kubectl get endpoints

Advanced Kubernetes Concepts

1. Namespaces

Logical isolation within cluster.

apiVersion: v1
kind: Namespace
metadata:
  name: production
# Create namespace
kubectl create namespace production
# List resources in namespace
kubectl get pods -n production
# Set default namespace
kubectl config set-context --current --namespace=production

2. Resource Quotas

Limit resource consumption.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: production
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "10"

3. Horizontal Pod Autoscaler (HPA)

Automatically scale pods based on metrics.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70

4. Persistent Volumes

Manage storage.

# Persistent Volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  hostPath:
    path: /data
---
# Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard

5. StatefulSets

For stateful applications.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database
spec:
  serviceName: "database"
  replicas: 3
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
        - name: postgres
          image: postgres:13
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi

Container Security

1. Image Security

# Scan image for vulnerabilities
docker scan nginx:latest
# Use distroless images
FROM gcr.io/distroless/java:11
# Multi-stage builds to reduce attack surface
FROM maven:3.8-openjdk-11 AS build
# ... build steps ...
FROM gcr.io/distroless/java:11
COPY --from=build /app/target/app.jar /app.jar

2. Runtime Security

# Security Context
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
    - name: app
      image: myapp:latest
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop:
            - ALL

3. Network Policies

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress

4. Pod Security Standards

apiVersion: v1
kind: Namespace
metadata:
  name: secure-namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted

Container Monitoring & Logging

Monitoring Stack

1. Prometheus + Grafana

# Prometheus ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod

2. Metrics Collection

# Install metrics-server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# View resource usage
kubectl top nodes
kubectl top pods

Logging

1. Centralized Logging (ELK Stack)

# Fluentd DaemonSet for log collection
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers

2. Application Logging

# Structured logging in container
FROM node:18-alpine
COPY . .
# Application logs to stdout/stderr
CMD ["node", "app.js"]

Alternative Orchestration Platforms

1. Docker Swarm

# Initialize swarm
docker swarm init
# Create service
docker service create --name web --replicas 3 -p 80:80 nginx
# Scale service
docker service scale web=5
# Update service
docker service update --image nginx:1.21 web

2. Amazon ECS

{
  "family": "web-app",
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "portMappings": [
        {
          "containerPort": 80,
          "protocol": "tcp"
        }
      ]
    }
  ]
}

3. HashiCorp Nomad

job "web" {
datacenters = ["dc1"]
type = "service"
group "web" {
count = 3
task "nginx" {
driver = "docker"
config {
image = "nginx:latest"
ports = ["http"]
}
resources {
cpu = 500
memory = 256
}
}
}
}

CI/CD with Containers

Build Pipeline

# GitHub Actions example
name: Build and Deploy
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build Docker image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Run tests
        run: docker run --rm myapp:${{ github.sha }} npm test
      - name: Push to registry
        run: |
          docker tag myapp:${{ github.sha }} myregistry.com/myapp:${{ github.sha }}
          docker push myregistry.com/myapp:${{ github.sha }}
      - name: Deploy to Kubernetes
        run: |
          kubectl set image deployment/myapp app=myregistry.com/myapp:${{ github.sha }}

GitOps with ArgoCD

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/myapp-config
    path: kubernetes
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Interview Questions and Answers

Q1: What is the difference between a container and a virtual machine?

A1: Containers share the host OS kernel and isolate at the process level, making them lightweight (MBs) with second-level startup times. Virtual machines include a full OS, requiring a hypervisor, making them heavy (GBs) with minute-level startup times. Containers provide process-level isolation while VMs provide hardware-level isolation. Containers are more portable and efficient but VMs offer stronger isolation.

Q2: Explain Docker architecture and its main components.

A2: Docker uses a client-server architecture:

  • Docker Client: CLI that sends commands to Docker daemon
  • Docker Daemon: Background service that manages containers, images, volumes, and networks
  • Docker Registry: Storage for Docker images (e.g., Docker Hub)
  • Docker Images: Read-only templates with layers
  • Docker Containers: Running instances of images

The client communicates with the daemon via REST API, and the daemon pulls images from registries and manages container lifecycle.

Q3: What is a Dockerfile and how does layering work?

A3: A Dockerfile is a text file containing instructions to build a Docker image. Each instruction (FROM, RUN, COPY, etc.) creates a new layer. Layers are cached and reused, making builds faster. For example:

FROM node:18            # Layer 1
WORKDIR /app            # Layer 2
COPY package.json ./    # Layer 3
RUN npm install         # Layer 4
COPY . .                # Layer 5

Only changed layers and subsequent layers are rebuilt. This makes Docker builds efficient.

Q4: What is the difference between CMD and ENTRYPOINT in Dockerfile?

A4:

  • CMD: Provides default arguments that can be overridden at runtime. Used for default commands.
  • ENTRYPOINT: Defines the executable that always runs. Arguments are appended to it.

Example:

ENTRYPOINT ["python"]
CMD ["app.py"]

Running docker run myapp test.py executes python test.py (CMD overridden), but the ENTRYPOINT remains. Best practice: use ENTRYPOINT for the executable and CMD for default arguments.

Q5: Explain multi-stage builds and their benefits.

A5: Multi-stage builds use multiple FROM statements in a Dockerfile, allowing you to separate build and runtime environments. Benefits:

  • Smaller final images: Build tools aren’t included in final image
  • Better security: Fewer attack vectors
  • Cleaner separation: Build stage separate from runtime

Example: Use a full Node.js image to build, then copy only the build artifacts to a smaller Alpine image.

Q6: What are Docker volumes and why are they important?

A6: Docker volumes are persistent storage mechanisms that exist outside the container filesystem. They’re important because:

  • Data persistence: Data survives container deletion
  • Sharing data: Multiple containers can share volumes
  • Performance: Better I/O performance than bind mounts
  • Backup/restore: Easier to backup and migrate

Types: Named volumes (managed by Docker), bind mounts (host filesystem path), and tmpfs (memory-only).
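A minimal sketch of the three mount types (the image name myapp and the paths are just placeholders):

# Named volume managed by Docker
docker volume create appdata
docker run -d -v appdata:/var/lib/data myapp
# Bind mount of a host directory (read-only)
docker run -d -v "$(pwd)/config:/etc/myapp:ro" myapp
# tmpfs mount kept in memory only
docker run -d --tmpfs /tmp/cache myapp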

Q7: What is Docker Compose and when would you use it?

A7: Docker Compose is a tool for defining and running multi-container applications using YAML files. Use it when:

  • Running multiple related containers (web app + database + cache)
  • Need to define service dependencies
  • Want reproducible development environments
  • Need to manage container configuration as code

It simplifies starting/stopping entire application stacks with single commands (docker-compose up/down).

Q8: Explain the concept of a Kubernetes Pod.

A8: A Pod is the smallest deployable unit in Kubernetes, containing one or more tightly coupled containers that share:

  • Network namespace: Same IP address and port space
  • Storage volumes: Shared volumes
  • Lifecycle: Started/stopped together

Pods are ephemeral and designed to run a single instance of an application. Multiple containers in a pod typically include a main container and helper “sidecar” containers for logging, monitoring, or proxying.

Q9: What is a Kubernetes Deployment and what advantages does it provide?

A9: A Deployment is a Kubernetes object that manages ReplicaSets and provides declarative updates for Pods. Advantages:

  • Desired state management: Maintains specified number of replicas
  • Rolling updates: Zero-downtime deployments
  • Rollback capability: Revert to previous versions
  • Scaling: Easy horizontal scaling
  • Self-healing: Automatically replaces failed pods

Deployments are ideal for stateless applications.

Q10: What is the difference between a Deployment and a StatefulSet?

A10: Deployment:

  • For stateless applications
  • Pods are interchangeable
  • Random pod names (web-7d4f8-xyz)
  • No guaranteed ordering
  • Shared storage

StatefulSet:

  • For stateful applications (databases)
  • Pods have unique identities
  • Predictable names (db-0, db-1, db-2)
  • Ordered creation/deletion
  • Dedicated storage per pod

Use StatefulSets when pods need stable network identities or persistent storage.

Q11: Explain Kubernetes Services and their types.

A11: A Service exposes Pods to network traffic and provides stable endpoints. Types:

  • ClusterIP (default): Internal-only access within cluster
  • NodePort: Exposes service on each node’s IP at a static port
  • LoadBalancer: Creates external load balancer (cloud providers)
  • ExternalName: Maps service to external DNS name

Services use label selectors to route traffic to matching Pods, providing load balancing and service discovery.
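As a sketch, the earlier ClusterIP service could be exposed externally by changing the type; the nodePort value below is an assumed example:

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080  # must fall in the cluster's NodePort range (default 30000-32767)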

Q12: What are ConfigMaps and Secrets? When would you use each?

A12:

  • ConfigMap: Stores non-sensitive configuration data (database URLs, feature flags, config files)
  • Secret: Stores sensitive data (passwords, API keys, certificates); values are base64-encoded

Use ConfigMap for environment-specific settings. Use Secrets for credentials and sensitive data. Both can be injected as environment variables or mounted as volumes. Secrets offer additional protection through encryption at rest (when configured).
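A minimal sketch of both injection styles, reusing the app-config and app-secret objects defined earlier (the pod name and busybox command are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ['sh', '-c', 'env && sleep 3600']
      env:
        - name: DATABASE_URL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: database_url
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: password
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config   # each key becomes a file in this directory
  volumes:
    - name: config-volume
      configMap:
        name: app-config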

Q13: What is a Kubernetes Namespace and why use it?

A13: Namespaces provide logical isolation within a cluster, creating virtual clusters. Benefits:

  • Resource isolation: Separate dev, staging, production
  • Access control: Apply RBAC policies per namespace
  • Resource quotas: Limit CPU/memory per namespace
  • Organization: Group related resources

Default namespaces: default, kube-system, kube-public, kube-node-lease. Create custom namespaces for multi-tenancy and environment separation.

Q14: Explain Kubernetes Ingress and how it differs from a Service.

A14:

  • Service: Layer 4 (TCP/UDP) load balancing, internal routing
  • Ingress: Layer 7 (HTTP/HTTPS) routing, external access management

Ingress provides:

  • Path-based routing: /api → service-a, /web → service-b
  • Host-based routing: api.example.com → service-a
  • TLS termination: SSL/TLS certificate management
  • Single entry point: One load balancer for multiple services

Requires an Ingress Controller (nginx, traefik, etc.) to function.

Q15: What is a DaemonSet and when would you use it?

A15: A DaemonSet ensures a copy of a Pod runs on all (or selected) nodes. Use cases:

  • Log collection: Fluentd/Filebeat on every node
  • Monitoring agents: Prometheus node exporters
  • Network plugins: CNI agents
  • Storage daemons: Ceph, GlusterFS agents

When a node joins the cluster, the DaemonSet automatically schedules a Pod on it. Ideal for node-level services.
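A minimal DaemonSet sketch for the monitoring-agent case (the exporter image is illustrative; a real node exporter usually also needs host mounts):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
        - name: node-exporter
          image: prom/node-exporter:latest
          ports:
            - containerPort: 9100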

Q16: Explain Kubernetes resource requests and limits.

A16:

  • Requests: Minimum guaranteed resources (used for scheduling)
  • Limits: Maximum resources a container can use (enforced)

resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"
  • The scheduler uses requests to decide where a pod can be placed
  • Exceeding memory limit → pod killed (OOMKilled)
  • Exceeding CPU limit → throttled, not killed
  • Set requests based on typical usage, limits for safety

Q17: What is a Horizontal Pod Autoscaler (HPA)?

A17: HPA automatically scales the number of Pods based on observed metrics (CPU, memory, or custom metrics). It:

  • Monitors metrics via Metrics Server
  • Compares current vs target utilization
  • Adjusts replica count within min/max bounds
  • Rechecks every 15 seconds (default)

Example: Scale from 2 to 10 replicas when CPU exceeds 70%. HPA removes the need for manual scaling and handles traffic spikes automatically. It requires resource requests to be defined on the target pods.

Q18: What are liveness and readiness probes in Kubernetes?

A18:

  • Liveness Probe: Checks if the container is alive. If it fails, the kubelet kills and restarts the container.
  • Readiness Probe: Checks if the container is ready to serve traffic. If it fails, the pod is removed from service endpoints.

Types: HTTP GET, TCP Socket, Exec command

Example use: Liveness ensures hung app restarts; Readiness ensures traffic only goes to fully initialized pods. Don’t confuse them - liveness restarts, readiness gates traffic.
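A hedged sketch of both probe types on a container; the /healthz path, port, and timings are example values, not defaults:

containers:
  - name: web
    image: nginx:1.20
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5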

Q19: What is the difference between a .dockerignore and .gitignore?

A19:

  • .dockerignore: Excludes files from the Docker build context (what is sent to the daemon)
  • .gitignore: Excludes files from the Git repository

.dockerignore improves build performance by reducing context size. Common entries:

node_modules
.git
*.md
.env
coverage/

A smaller build context means faster uploads to the daemon, faster builds, and smaller images (if the excluded files would otherwise be pulled in by COPY).

Q20: Explain Docker networking modes.

A20: Docker networking modes:

  • Bridge (default): Private network, containers communicate via internal IPs
  • Host: Container shares host network namespace, no isolation
  • None: No networking, isolated container
  • Overlay: Multi-host networking (e.g., for Docker Swarm)
  • Macvlan: Assigns MAC address, container appears as physical device

Use bridge for most cases, host for performance (bypasses NAT), overlay for multi-host orchestration.
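For illustration, the mode is selected with --network; nginx and the network name are just examples:

# Default bridge network
docker run -d --name web1 nginx
# User-defined bridge with DNS-based container discovery
docker network create appnet
docker run -d --name web2 --network appnet nginx
# Share the host's network stack (no port mapping needed)
docker run -d --network host nginx
# No networking at all
docker run -d --network none nginx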

Q21: What is a Kubernetes PersistentVolume (PV) and PersistentVolumeClaim (PVC)?

A21:

  • PersistentVolume (PV): Cluster resource representing storage (created by an admin or provisioner)
  • PersistentVolumeClaim (PVC): A user's request for storage (referenced by a pod)

Workflow:

  1. Admin creates PV with capacity/access modes
  2. User creates PVC requesting storage
  3. Kubernetes binds PVC to matching PV
  4. Pod references PVC in volume mount

Decouples storage provisioning from consumption. Supports dynamic provisioning via StorageClasses.
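Step 4 as a sketch, mounting the my-pvc claim defined earlier (pod name and mount path are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo
spec:
  containers:
    - name: app
      image: nginx:1.20
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc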

Q22: What are Init Containers in Kubernetes?

A22: Init Containers run before app containers in a Pod, completing before app containers start. Use cases:

  • Wait for dependencies: Check if database is ready
  • Setup tasks: Clone git repo, download config
  • Security: Fetch secrets, set permissions

Multiple init containers run sequentially. If any fails, kubelet restarts the Pod. Main containers only start after all init containers succeed.
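A sketch of the "wait for dependencies" case; the db-service name, port, and myapp image are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
    - name: wait-for-db
      image: busybox
      # Block until the database service accepts TCP connections
      command: ['sh', '-c', 'until nc -z db-service 5432; do echo waiting for db; sleep 2; done']
  containers:
    - name: app
      image: myapp:latest
      ports:
        - containerPort: 3000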

Q23: Explain Kubernetes RBAC (Role-Based Access Control).

A23: RBAC controls who can access which resources. Components:

  • Role/ClusterRole: Defines permissions (verbs: get, list, create, delete)
  • RoleBinding/ClusterRoleBinding: Grants role to users/groups/service accounts
  • ServiceAccount: Identity for pods

Role is namespace-scoped; ClusterRole is cluster-wide. Example: Give developer read-only access to pods in dev namespace. RBAC follows principle of least privilege.
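The developer example from the answer as a sketch (the dev namespace and the user name are assumptions):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
  - kind: User
    name: developer
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io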

Q24: What is a sidecar container pattern?

A24: A sidecar container runs alongside the main container in the same Pod, sharing resources. Common patterns:

  • Logging: Sidecar collects and forwards logs
  • Monitoring: Exports metrics from main container
  • Proxy: Envoy sidecar for service mesh
  • Data synchronization: Syncs files between containers

Example: Web server (main) + log shipper (sidecar). Sidecars extend functionality without modifying main application.
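A sketch of the web server + log shipper pairing, sharing logs through an emptyDir volume (images, paths, and the tail command are illustrative only):

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: web
      image: nginx:1.20
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-shipper
      image: busybox
      # Stand-in for a real log shipper: stream the shared log file
      command: ['sh', '-c', 'touch /logs/access.log && tail -f /logs/access.log']
      volumeMounts:
        - name: logs
          mountPath: /logs
  volumes:
    - name: logs
      emptyDir: {}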

Q25: What is container orchestration and why is it needed?

A25: Container orchestration automates deployment, scaling, networking, and management of containerized applications. Needed for:

  • High availability: Automatic restarts and failover
  • Scaling: Handle increased load automatically
  • Load balancing: Distribute traffic across instances
  • Service discovery: Containers find each other
  • Rolling updates: Zero-downtime deployments
  • Resource management: Optimal container placement

Without orchestration, managing hundreds of containers manually is impractical. Kubernetes, Docker Swarm, and ECS are popular orchestrators.

Q26: What are Kubernetes labels and selectors?

A26:

  • Labels: Key-value pairs attached to objects for organization
  • Selectors: Queries over labels used to identify resources

labels:
  app: web
  environment: production
  version: v1.2

Selectors filter objects: app=web,environment=production

Services use selectors to route traffic to pods. Deployments use them to manage pods. Labels enable loose coupling - change labels to redirect traffic without modifying pods.
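Selectors in practice, matching the labels above:

# Equality-based selector
kubectl get pods -l app=web,environment=production
# Set-based selector
kubectl get pods -l 'environment in (production, staging)'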

Q27: Explain the concept of immutable infrastructure with containers.

A27: Immutable infrastructure means containers are never modified after deployment - replaced entirely with new versions. Benefits:

  • Consistency: Eliminates configuration drift
  • Reliability: Same image across environments
  • Easy rollback: Redeploy previous image
  • Security: No runtime patches, rebuild instead

Instead of SSHing into a container and patching it, build a new image with the fix and redeploy. Containers are cattle, not pets.

Q28: What is a Kubernetes Job and CronJob?

A28:

  • Job: Runs pods to completion, ensuring a specified number of successful completions
  • CronJob: Runs Jobs on a schedule (cron syntax)

Use cases:

  • Job: Data migration, batch processing, one-time tasks
  • CronJob: Scheduled backups, report generation, cleanup tasks
schedule: "0 2 * * *" # Daily at 2 AM

Jobs handle retries and parallelism. CronJobs create Jobs at scheduled times.
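A CronJob sketch using the schedule above; the backup image and command are placeholders:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"   # Daily at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: backup
              image: backup-tool:latest
              command: ['sh', '-c', 'run-backup.sh']
          restartPolicy: OnFailure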

Q29: What are Kubernetes Taints and Tolerations?

A29: Mechanism to control pod scheduling on nodes:

  • Taint: Applied to nodes; repels pods that lack a matching toleration
  • Toleration: Applied to pods; allows (but does not require) scheduling on tainted nodes

Use cases:

  • Dedicated nodes for specific workloads (GPU nodes)
  • Isolate production from dev
  • Prevent scheduling on nodes with issues

Effects: NoSchedule, PreferNoSchedule, NoExecute

Example: Taint GPU nodes, only pods with GPU toleration can schedule there.
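The GPU example as a sketch; the node name, taint key/value, and image are assumptions:

# Taint the node first:
#   kubectl taint nodes gpu-node-1 gpu=true:NoSchedule
# Pod toleration that allows scheduling onto the tainted node:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  tolerations:
    - key: "gpu"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  containers:
    - name: trainer
      image: my-gpu-image:latest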

Q30: What are the benefits of using multi-stage builds in Docker?

A30: Multi-stage builds create smaller, more secure images by separating build and runtime:

Benefits:

  • Smaller images: 500MB build image → 50MB final image (10x reduction)
  • Improved security: No build tools in production image
  • Single Dockerfile: No need for separate build/runtime Dockerfiles
  • Faster deployments: Smaller images transfer faster
  • Better layer caching: Build dependencies cached separately

Example: Compile Go app in builder stage (has compiler), copy binary to scratch image (no OS). Final image is ~10MB vs ~800MB with build tools.
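The Go example as a Dockerfile sketch, assuming a main package at the repository root (binary name and Go version are placeholders):

# Build stage: full Go toolchain
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: empty base image, binary only
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]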