kubectl-ai
kubectl-ai acts as an intelligent interface, translating user intent into
precise Kubernetes operations, making Kubernetes management more accessible and
efficient.
Quick Start
First, ensure that kubectl is installed and configured.
Installation
Quick Install (Linux & macOS only)
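For example, assuming the install script lives at the root of the repository (verify the URL before piping it to your shell):

```bash
curl -sSL https://raw.githubusercontent.com/GoogleCloudPlatform/kubectl-ai/main/install.sh | bash
```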
Manual Installation (Linux, macOS, and Windows)
- Download the latest release from the releases page for your target machine.
- Untar the release, make the binary executable, and move it to a directory in your $PATH (as shown below).
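For example, on an Apple Silicon Mac (the archive name below is illustrative; substitute the asset that matches your OS and architecture):

```bash
tar -zxvf kubectl-ai_Darwin_arm64.tar.gz
chmod a+x kubectl-ai
sudo mv kubectl-ai /usr/local/bin/
```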
Install with Krew (Linux/macOS/Windows)
First, make sure krew is installed; refer to the krew documentation for more details. Then install the plugin with krew:
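Assuming the plugin is published in the default krew index under the name ai:

```bash
kubectl krew install ai
```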
Now you can invoke kubectl-ai as a kubectl plugin like this: kubectl ai.
Usage
Using Gemini (Default)
Set your Gemini API key as an environment variable. If you don’t have a key, get one from Google AI Studio.
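For example (the specific model name is illustrative; use the models keyword to list what is available):

```bash
export GEMINI_API_KEY=your_api_key_here
kubectl-ai

# Use a specific model (illustrative model name)
kubectl-ai --model gemini-2.5-pro-preview-03-25
```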
Using AI models running locally (ollama or llama.cpp)
You can use kubectl-ai with AI models running locally; it supports both ollama and llama.cpp.
An example of using Google’s gemma3 model with ollama:
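A sketch, assuming the model has already been pulled with ollama and that your build supports the --enable-tool-use-shim flag (the model tag is illustrative):

```bash
# Pull the model first, e.g.: ollama pull gemma3:12b-it-qat
kubectl-ai --llm-provider ollama --model gemma3:12b-it-qat --enable-tool-use-shim
```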
Using Grok
You can use X.AI’s Grok model by setting your X.AI API key:
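For example (the model name is illustrative):

```bash
export GROK_API_KEY=your_xai_api_key_here
kubectl-ai --llm-provider=grok --model=grok-3-beta
```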
Using Azure OpenAI
You can also use an Azure OpenAI deployment by setting your Azure OpenAI API key and endpoint and specifying the provider:
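A sketch, assuming the provider reads AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT; the deployment name is a placeholder:

```bash
export AZURE_OPENAI_API_KEY=your_azure_openai_api_key_here
export AZURE_OPENAI_ENDPOINT=https://your_azure_openai_endpoint_here
kubectl-ai --llm-provider=azopenai --model=your_azure_openai_deployment_name@latest
```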
Using OpenAI
You can also use OpenAI models by setting your OpenAI API key and specifying the provider:
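For example (the model name is illustrative):

```bash
export OPENAI_API_KEY=your_openai_api_key_here
kubectl-ai --llm-provider=openai --model=gpt-4.1
```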
Using an OpenAI-Compatible API
For example, you can use Aliyun's qwen-xxx models as follows:
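A sketch, assuming the openai provider honors an OPENAI_ENDPOINT override; the endpoint URL and model name below are illustrative for Aliyun's compatible mode:

```bash
export OPENAI_API_KEY=your_openai_api_key_here
export OPENAI_ENDPOINT=https://dashscope.aliyuncs.com/compatible-mode/v1
kubectl-ai --llm-provider=openai --model=qwen-plus
```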
- Note: `kubectl-ai` supports AI models from `gemini`, `vertexai`, `azopenai`, `openai`, `grok`, and local LLM providers such as `ollama` and `llama.cpp`.
Run interactively:
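With no arguments, kubectl-ai starts an interactive shell:

```bash
kubectl-ai
```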
The interactive mode allows you to have a chat with kubectl-ai, asking multiple questions in sequence while maintaining context from previous interactions. Simply type your queries and press Enter to receive responses. To exit the interactive shell, type exit or press Ctrl+C.
Or, run with a task as input:
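For example, a one-shot task (the task text is free-form; --quiet keeps output non-interactive, assuming your build supports it):

```bash
kubectl-ai --quiet "fetch logs for nginx app in hello namespace"
```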
Combine it with other Unix commands:
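For example, reading the query from a file or a pipe:

```bash
cat query.txt | kubectl-ai --quiet
# or
echo "list pods in the default namespace" | kubectl-ai --quiet
```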
You can even combine a positional argument with stdin input. The positional argument will be used as a prefix to the stdin content:
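For example, prefixing piped log output with an instruction (the file name is illustrative):

```bash
cat error.log | kubectl-ai "explain the error"
```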
Extras
You can use the following special keywords for specific actions:
- `model`: Display the currently selected model.
- `models`: List all available models.
- `version`: Display the `kubectl-ai` version.
- `reset`: Clear the conversational context.
- `clear`: Clear the terminal screen.
- `exit` or `quit`: Terminate the interactive shell (Ctrl+C also works).
Invoking as kubectl plugin
Use it via the kubectl plugin interface like this: kubectl ai. kubectl will find kubectl-ai as long as it’s in your PATH. For more information about plugins, please see: https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/
Examples
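A few illustrative queries (the phrasing is free-form):

```bash
kubectl-ai --quiet "show me all pods in the default namespace"
kubectl-ai --quiet "create a deployment named nginx with 3 replicas"
kubectl-ai --quiet "double the capacity for the nginx app"
```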
kubectl-ai will process your query, execute the appropriate kubectl commands, and provide you with the results and explanations.
MCP server
You can also use kubectl-ai as an MCP server that exposes kubectl as one of its tools for interacting with your locally configured Kubernetes environment. See the MCP docs for more details.
k8s-bench
The kubectl-ai project includes k8s-bench, a benchmark that evaluates the performance of different LLM models on Kubernetes-related tasks. Here is a summary from our last run:
| Model | Success | Fail |
|---|---|---|
| gemini-2.5-flash-preview-04-17 | 10 | 0 |
| gemini-2.5-pro-preview-03-25 | 10 | 0 |
| gemma-3-27b-it | 8 | 2 |
| Total | 28 | 2 |
See the full report for more details.
Start Contributing
We welcome contributions to kubectl-ai from the community. Take a look at our
contribution guide to get started.
Note: This is not an officially supported Google product. This project is not eligible for the Google Open Source Software Vulnerability Rewards Program.