OpenLIT | OpenTelemetry-native GenAI and LLM Application Observability
https://openlit.io/
Open Source Platform for AI Engineering. Monitor, debug, and improve your LLM applications with comprehensive observability, tracing, and evaluation tools. Built for production workloads.
Features
Powerful Features for Modern Teams
Everything you need to build, ship, and scale your AI applications
Tracing · Evaluation · Prompt Hub · Experiments · Dashboards · Fleet Hub
Distributed Tracing
Monitor and trace your LLM applications in real-time. Visualize request flows, identify bottlenecks, and understand the complete lifecycle of every AI interaction with OpenTelemetry-powered distributed tracing.
AI Model Evaluation
Run evaluations online or offline: experiment with prompts and models in the UI, or use the SDKs to evaluate your end-to-end application.
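To make the offline-eval idea concrete, here is a minimal sketch that scores a model's answers against a labeled dataset. The metric, dataset, and stand-in model are all made up for illustration; OpenLIT's own evaluators are configured through its SDK and UI:

```python
def exact_match(predicted: str, expected: str) -> float:
    """Toy metric: 1.0 if answers match after normalization, else 0.0."""
    return float(predicted.strip().lower() == expected.strip().lower())

def run_offline_eval(model, dataset, metric=exact_match):
    """Score a model callable against (prompt, expected_answer) pairs."""
    scores = [metric(model(prompt), expected) for prompt, expected in dataset]
    return sum(scores) / len(scores)

# A stand-in "model" so the sketch runs without an API key.
fake_model = {"2+2?": "4", "Capital of France?": "Paris"}.get
dataset = [("2+2?", "4"), ("Capital of France?", "paris")]
accuracy = run_offline_eval(fake_model, dataset)
```

Swapping `fake_model` for a real client call and `exact_match` for a semantic or LLM-as-judge metric gives the shape of a production eval run.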
Prompt Management
Centrally manage, version, and deploy prompts across your applications. Experiment with different prompt variations, track performance, and iterate faster with version control for your prompts.
Experiment with prompts and models
OpenGround is a built-in playground for experimenting with different prompts and models to find the best-performing combination.
Real-time Monitoring
Monitor your LLM applications in real time. Write custom SQL queries against your AI telemetry data, build and resize custom widgets with flexible configurations and layouts, and visualize telemetry from any OpenTelemetry-instrumented tool.
Multi-Deployment Management
Get a unified view of all your LLM applications across different environments. Monitor multiple deployments, compare performance metrics, and manage your entire AI fleet from a single dashboard.
Integration
Get started in minutes
Add comprehensive observability to your LLM applications with just a few lines of code. No code changes required for existing applications.
Quick Setup
Get OpenLIT running in your environment in less than a minute
main.py
# Install OpenLIT
pip install openlit

# Initialize in your application
import openlit
openlit.init()
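Auto-instrumentation of this kind generally works by wrapping a client library's call sites so application code stays unchanged. A toy, dependency-free illustration of the wrapping idea (this is not OpenLIT's actual implementation; the recorded dict stands in for an OpenTelemetry exporter):

```python
import functools
import time

RECORDED_SPANS = []  # stand-in for an exporter/collector

def instrument(fn):
    """Wrap a callable so every call records a timed 'span'."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return fn(*args, **kwargs)
        finally:
            RECORDED_SPANS.append({
                "name": fn.__name__,
                "duration_ms": (time.monotonic() - start) * 1000,
            })
    return wrapper

# Application code is unchanged; only the reference is patched.
def chat_completion(prompt: str) -> str:
    return f"echo: {prompt}"

chat_completion = instrument(chat_completion)
reply = chat_completion("hello")
```

Because the wrapper records even when the wrapped call raises (the `finally` block), failed LLM calls show up in telemetry too.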
Zero-Code Kubernetes Observability
Automatically inject AI observability into your Kubernetes workloads without touching your code
Automatic Instrumentation
Deploy the OpenLIT Operator and it automatically instruments your AI applications - no code changes, no rebuilds
OpenTelemetry Native
Built entirely on OpenTelemetry standards for seamless integration with your existing observability stack
Simple Configuration
Just create an AutoInstrumentation CR and select your workloads - that's it!
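A custom resource of roughly this shape selects the workloads to instrument. The field names below are illustrative assumptions based on the description above, not the operator's exact schema; consult the Kubernetes Operator documentation for the real CRD:

```yaml
apiVersion: openlit.io/v1alpha1
kind: AutoInstrumentation
metadata:
  name: instrument-ai-apps
spec:
  # Illustrative selector: instrument pods carrying this label
  selector:
    matchLabels:
      openlit.io/instrument: "true"
```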
Perfect for:
LLM Applications
AI Agents
Vector Databases
AI Frameworks
Learn About Kubernetes Operator
Supported Integrations
Works with all major LLM providers and frameworks out of the box
Why Choose OpenLit?
Open Source & Free
Always free, self-hosted, no vendor lock-in
Privacy First
Your data never leaves your infrastructure
Production Ready
Built for scale with minimal performance overhead
Community
Join our growing community
OpenLIT is built by developers, for developers. Join thousands of engineers building better LLM applications with open-source observability.
Open Source Project
Contribute to the future of LLM observability
2071
GitHub Stars
1.4M
SDK Downloads
217
Forks
45
Open Issues
Support Us
Help us build the future of open-source LLM observability
Token Supporter
You're literally counting tokens with us! Your support keeps our servers running, docs updated, and community thriving.
- ❤️ Our eternal gratitude
- 🌟 Help keep OpenLIT open-source
- 🚀 Support faster development
Context Window Hero
You've expanded our context window! Help us ship faster integrations, squash bugs, and build features the community loves.
- 🏆 Your logo on our GitHub README
- 🌐 Your logo on our official website
- 🎯 Shape the future of AI observability
Get Involved
Ways to contribute to OpenLIT
Contribute Code
Help improve OpenLIT with bug fixes and new features
Report Issues
Help us improve by reporting bugs and suggesting features
Write Documentation
Help others get started with better docs and guides
Share Your Story
Tell us how OpenLIT helps your team
Latest from the Blog
Monitoring LLM usage in OpenWebUI
February 27, 2025
How to protect your OpenAI/LLM Apps from Prompt Injection Attacks
October 22, 2024
Unlocking Seamless GenAI & LLM Observability with OpenLIT
August 14, 2024
Ready to Transform Your AI Observability?
Join thousands of developers using OpenLIT to build better, more reliable LLM applications. Get started in less than a minute.
Documentation
Introduction · SDK Overview · Kubernetes Operator · Integrations · Destinations
Features
GPU Monitoring · Evaluations · Fleet Hub · Prompt Hub · Vault Hub
About
Legal
Security · Contributing · README · License · Code of Conduct
OpenLIT