This blog post is a collection of Kubernetes interview questions that will help you get the job of your dreams. Make sure you can give a short answer from memory and provide a detailed explanation in your own words.
Our goal was to make this guide as technical and detailed as possible. We assume you already know what Kubernetes is, are familiar with the general ideas of containerization, and understand the benefits of these technologies. Therefore, we omit generic questions and dive right into detailed, practical, and low-level questions, which are more likely to be asked during a Kubernetes interview.
You can use this as a roadmap for your interview preparation or as a last-minute knowledge check.
Selecting a Suitable Number of Deployment Configuration Repositories
It is important to consider how many repos should house an organization’s deployment configurations. Organizations have varying requirements based on their scale and complexity.
Generally, small companies that don’t rely heavily on automation, and where all employees are trusted, can use a mono-repo. Mid-sized companies that use some automation should use a repository for each team, while larger organizations that require greater control and rely significantly on automation should use repositories for each service.
Teams can often manage themselves if they have their own repository. Each team can decide who has release access, so there is no need for a central team granting write access, which could otherwise become a release bottleneck.
Software engineers often commit changes to manifests and let the GitOps agent attempt to deploy the application, thus validating the changes. Testing changes locally before pushing them to a manifest helps prevent introducing issues into pre-production environments.
Typically, the agent uses a Helm chart or other template to generate the manifests. Engineers can run commands locally to test their manifests before they commit any changes.
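For example, with a Helm-based setup (the chart path and release name below are hypothetical), the rendered manifests can be checked locally before committing:

```shell
# Render the chart locally without touching the cluster
helm template my-release ./chart --values values.yaml > rendered.yaml

# For Kustomize-based setups, the equivalent is:
kustomize build ./overlays/staging

# Optionally validate the rendered manifests against the API server (dry run)
kubectl apply --dry-run=server -f rendered.yaml
```

Because these commands only render and validate, they are safe to run at any time and catch templating mistakes before the GitOps agent ever sees them.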
Configuration drift has long been an issue for production deployments. Configuration differences between the target machines of a CI/CD deployment can cause it to fail. Developers sometimes use staging environments to test applications before deploying them to production. A staging environment should ideally have the same configuration as the production environment, ensuring that any tests reflect the real conditions of the live application.
However, teams often change Kubernetes clusters using ad-hoc commands that are not part of the CI/CD process. These changes contribute to configuration drift and affect application deployments. For example, an application might pass all of the tests in the staging environment but fail when deployed to production.
Argo CD helps prevent configuration drift and maintain state traceability by using Git as a single source of truth for all current and past deployments. The Git history enables retrospective investigation. However, all manifest changes must go through Argo CD to maintain a clean history. If, for example, a developer uses kubectl to make changes directly, Argo CD can detect this and mark the application as OutOfSync.
Argo CD offers an auto-sync capability that, when enabled, eliminates configuration drift for Kubernetes applications. It is important to ensure that all changes to a manifest are committed to the Git repository. Note that Kustomize and Helm can render different manifests from the same commit if they pull in remote dependencies, so developers using these tools should pin those dependencies to specific versions or commits.
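As an illustrative sketch (the application name, repository URL, and paths are hypothetical), an Argo CD Application with auto-sync enabled might look like this:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/deploy-configs.git
    targetRevision: main          # or a specific commit SHA to pin the manifests
    path: apps/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual kubectl changes, countering drift
```

With `selfHeal` enabled, Argo CD does not just mark drifted applications as OutOfSync; it actively reverts the cluster back to the state declared in Git.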
2 What is a flaky test?
A test that intermittently fails for no apparent reason is called a flaky test. Flaky tests usually work correctly on the developer’s machine but fail on the CI server. Flaky tests are difficult to debug and are a major source of frustration.
Common sources of flakiness are:

- Asynchronous behavior and timing: tests that rely on fixed sleeps or timeouts instead of explicit waits.
- Test order dependence and shared state: one test leaves behind data that another test depends on.
- Concurrency issues such as race conditions.
- External dependencies: networks, databases, or third-party services that behave differently between runs.
- Environment differences between the developer's machine and the CI server.
Test-Driven Development (TDD) is a software design practice in which a developer writes tests before code. By inverting the usual order in which software is written, a developer can think of a problem in terms of inputs and outputs and write more testable (and thus more modular) code.
The TDD cycle consists of three steps:

1. Red: write a failing test that describes the desired behavior.
2. Green: write the minimum amount of code needed to make the test pass.
3. Refactor: clean up the code while keeping all tests passing.
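As a minimal sketch of the cycle (the `add` function is a hypothetical example, using Python's built-in unittest):

```python
import unittest

# Red: this test is written first and fails until `add` exists.
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negatives(self):
        self.assertEqual(add(-1, -1), -2)

# Green: the simplest implementation that makes the tests pass.
def add(a, b):
    return a + b

# Refactor: with the tests as a safety net, the code can now be
# cleaned up without fear of silently changing its behavior.
result = unittest.TextTestRunner().run(
    unittest.TestLoader().loadTestsFromTestCase(TestAdd)
)
```

The point is the order of operations: the tests define the expected inputs and outputs before a single line of production code is written.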
1 What is "Heapster" in Kubernetes?
In this Kubernetes interview question, the interviewer would expect a thorough explanation. You can explain what Heapster is and how it has been useful to you (if you have used it in your work so far!). Heapster is a performance monitoring and metrics collection system that aggregates data collected by the kubelet on each node. This aggregator is natively supported and runs like any other pod within a Kubernetes cluster, which allows it to discover and query usage data from all nodes within the cluster. It is also worth mentioning that Heapster has since been deprecated in favor of metrics-server and dedicated monitoring solutions.
With the help of Minikube, users can run Kubernetes locally. Minikube runs a single-node Kubernetes cluster on a personal computer, including Windows, macOS, and Linux PCs. With this, users can try out Kubernetes and also use it for daily development work.
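As a quick sketch (assuming minikube and kubectl are already installed, and a container runtime or hypervisor is available), a local cluster session might look like this:

```shell
# Start a local single-node cluster (downloads images on first run)
minikube start

# Verify the node is up
kubectl get nodes

# Try a sample deployment
kubectl create deployment hello --image=nginx
kubectl get pods

# Tear the cluster down when finished
minikube delete
```

This makes Minikube a convenient sandbox: everything above runs on one machine and can be thrown away and recreated in minutes.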