Troubleshoot Kubernetes Agentically
AI-powered conversational debugging with real-time cluster insights.
Real-time Debugging
~50ms command delivery via Server-Sent Events (SSE) for near-instant execution
Secure by Default
Read-only operations with comprehensive command validation and RBAC integration
AI-Native Design
Native A2A (Agent-to-Agent) protocol support with pluggable integration for any LLM provider
Simple Deployment
Single API service with lightweight executors - deploy anywhere Kubernetes runs
Auto-scaling Performance
Horizontal scaling with Redis pub/sub - add as many API pods as your load requires
Flexible Integration
REST API, Node.js CLI, and comprehensive test automation framework
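The "secure by default" guarantee above typically rests on validating every command against a read-only allowlist before it ever reaches a cluster. A minimal sketch of that pattern, assuming a kubectl-verb allowlist (the verb set and function name here are illustrative assumptions, not Kubently's actual API):

```python
# Illustrative sketch of read-only command validation.
# READ_ONLY_VERBS and is_read_only() are hypothetical names,
# not Kubently's real implementation.
import shlex

# kubectl verbs that only read cluster state (assumed allowlist)
READ_ONLY_VERBS = {"get", "describe", "logs", "top", "explain", "api-resources"}

def is_read_only(command: str) -> bool:
    """Accept only kubectl invocations whose verb is on the allowlist."""
    parts = shlex.split(command)
    if len(parts) < 2 or parts[0] != "kubectl":
        return False
    return parts[1] in READ_ONLY_VERBS

print(is_read_only("kubectl get pods -n kube-system"))  # True
print(is_read_only("kubectl delete pod broken-pod"))    # False
```

An allowlist is preferable to a denylist here: any verb not explicitly known to be safe is rejected, so new or unexpected mutating commands fail closed.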
Architecture
Core Components
- Kubently API: Horizontally scalable FastAPI service with A2A server for multi-agent communication
- Kubently Executor: Lightweight agent deployed in each target cluster with configurable RBAC rules
- Redis: Pub/Sub for command distribution, session persistence, and conversation state
- SSE Connection: Persistent real-time server-to-client stream for instant command delivery (~50ms latency)
- LLM Integration: Supports multiple LLM providers through LLMFactory for intelligent troubleshooting
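The command-distribution flow described above can be sketched in-process: an API pod publishes a command to a per-cluster channel, and the executor subscribed to that channel receives it over its long-lived stream. This is a stand-in sketch only - the channel naming, class names, and the queue used to model the SSE connection are assumptions; a real deployment would use Redis pub/sub and an HTTP SSE stream:

```python
# Minimal in-process sketch of pub/sub command distribution.
# CommandBus, the "commands:<cluster>" channel naming, and the
# queue-based SSE stand-in are illustrative assumptions.
import json
import queue
from collections import defaultdict

class CommandBus:
    """Routes commands from any API pod to executors subscribed to the
    target cluster's channel (a stand-in for Redis pub/sub)."""

    def __init__(self) -> None:
        self._channels: dict[str, list[queue.Queue]] = defaultdict(list)

    def subscribe(self, cluster_id: str) -> queue.Queue:
        # An executor's long-lived SSE connection, modeled as a queue.
        q: queue.Queue = queue.Queue()
        self._channels[f"commands:{cluster_id}"].append(q)
        return q

    def publish(self, cluster_id: str, command: dict) -> None:
        # Any API pod may publish; all subscribers on the channel receive it.
        payload = json.dumps(command)
        for q in self._channels[f"commands:{cluster_id}"]:
            q.put(payload)

bus = CommandBus()
stream = bus.subscribe("prod-east")                    # executor connects
bus.publish("prod-east", {"cmd": "kubectl get pods"})  # API pod publishes
print(json.loads(stream.get(timeout=1)))               # {'cmd': 'kubectl get pods'}
```

Because publishers and subscribers only share a channel name, any number of API pods can publish without knowing which pod holds a given executor's connection - which is what makes the API tier horizontally scalable.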
Use Cases
Intelligent Troubleshooting
Systematic debugging with LLM-powered analysis and todo tracking for thorough investigations
Multi-Agent Systems
Full A2A protocol implementation with tool call interception and streaming responses
Enterprise Ready
OAuth/OIDC authentication, TLS support with cert-manager, and comprehensive test automation
Getting Started
Ready to start debugging your Kubernetes clusters with AI-powered insights?
Community & Support
Join the Kubently community and get help from other users and maintainers:
- GitHub: Source code and issues
- Documentation: Comprehensive guides and API reference
- Discussions: Ask questions and share ideas