The best open source alternative to Galileo is Langfuse. If that doesn't suit you, we've compiled a ranked list of other open source Galileo alternatives to help you find a suitable replacement. Other interesting options are Arize Phoenix, OpenLLMetry, Helicone, and OpenLIT.
Galileo alternatives are mainly LLM Application Frameworks, but may also be AI Integration Platforms or Machine Learning Infrastructure Tools. Browse these categories if you want a narrower list of alternatives or are looking for a specific piece of Galileo's functionality.
Langfuse provides tracing, evaluations, prompt management, and analytics to debug and improve LLM applications.

Langfuse is an open source LLM engineering platform designed to help teams build, debug, and improve AI-powered applications. With its comprehensive suite of tools, Langfuse empowers developers to gain deep insights into their LLM applications and optimize performance.
Key features of Langfuse include:
Tracing: Capture detailed production traces to quickly identify and resolve issues in your LLM applications. Visualize the entire request flow and pinpoint bottlenecks.
Evaluations: Collect user feedback, annotate data, and run custom evaluation functions to assess the quality and performance of your AI models.
Prompt Management: Collaboratively version and deploy prompts, with low-latency retrieval for production use. Streamline your prompt engineering workflow (a minimal retrieval sketch follows this list).
Analytics: Track key metrics like cost, latency, and quality to optimize your LLM application's performance and efficiency.
Playground: Test different prompts and models directly within the Langfuse UI, enabling rapid experimentation and iteration.
Datasets: Derive high-quality datasets from production data to fine-tune models and thoroughly test your LLM applications.
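As referenced in the prompt management item above, here is a minimal sketch of fetching and compiling a versioned prompt with the Langfuse Python SDK; the prompt name and template variable are hypothetical placeholders, and the client reads its keys from environment variables.

```python
from langfuse import Langfuse

# The client picks up LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST from the environment.
langfuse = Langfuse()

# Fetch the current production version of a prompt; the SDK caches it client-side for low-latency reuse.
prompt = langfuse.get_prompt("support-reply")  # "support-reply" is a hypothetical prompt name

# Fill in the template's variables at runtime before sending the prompt to your model.
compiled_prompt = prompt.compile(customer_name="Ada")  # "customer_name" is a hypothetical template variable
print(compiled_prompt)
```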
Langfuse integrates seamlessly with popular LLM frameworks and libraries, including LangChain, LlamaIndex, and OpenAI. It offers SDKs for Python and JavaScript/TypeScript, making it easy to incorporate into your existing workflow.
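As a rough sketch of the Python SDK and OpenAI integration mentioned above, the snippet below wraps a call so it appears as a trace in Langfuse; the function name and prompt are illustrative, and the decorator import path can differ between SDK versions.

```python
from langfuse.decorators import observe  # decorator import used by Langfuse SDK v2; newer versions may expose it differently
from langfuse.openai import openai       # drop-in wrapper around the OpenAI client that records each generation

@observe()  # everything inside this function is grouped into a single Langfuse trace
def answer_question(question: str) -> str:
    response = openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(answer_question("What does Langfuse capture in a trace?"))
```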
Built for teams of all sizes, Langfuse can be self-hosted or used as a cloud service. It's designed with enterprise-grade security in mind, offering SOC 2 Type II and ISO 27001 certifications for the cloud version.
By providing a comprehensive toolkit for LLM engineering, Langfuse helps teams build more reliable, efficient, and high-quality AI applications. Whether you're just starting with LLMs or scaling a complex AI system, Langfuse offers the observability and tools needed to succeed in the rapidly evolving field of AI engineering.
Looking for open source alternatives to other popular services? Check out other posts in the alternatives series and openalternative.co, a directory of open source software with filters for tags and alternatives for easy browsing and discovery.
Open-source platform for LLM tracing, evaluation, and optimization. Features automatic instrumentation, prompt playground, and real-time AI application monitoring.

Arize Phoenix is an open-source LLM tracing and evaluation platform designed for AI teams who need complete visibility into their applications. Built on OpenTelemetry standards, it offers vendor-agnostic monitoring without lock-in restrictions.
The platform has gained significant traction with 2.5M+ monthly downloads, 8k+ GitHub stars, and adoption by top AI teams. Users praise its ability to identify root causes of problematic responses, debug LLM workflows, and integrate observability directly into development processes.
Completely self-hostable with no feature restrictions, making it ideal for teams requiring full control over their AI monitoring infrastructure while maintaining transparency in model decision-making.
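To give a feel for the OpenTelemetry-based setup described above, here is a hedged sketch of launching Arize Phoenix locally and auto-instrumenting OpenAI calls; the package and function names follow the phoenix and openinference distributions, but exact entry points may differ by version.

```python
import phoenix as px
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

px.launch_app()  # start the local Phoenix UI (served at http://localhost:6006 by default)

tracer_provider = register()  # configure an OpenTelemetry tracer provider that exports to Phoenix
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)  # trace OpenAI calls automatically

# From here, any OpenAI client usage in this process shows up as traces in the Phoenix UI.
```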
Open-source observability platform for LLMs using OpenTelemetry. Monitor performance, track costs, and debug AI applications with just 2 lines of code.

OpenLLMetry lets you monitor and optimize your LLM applications with comprehensive observability built on OpenTelemetry standards. This open-source platform provides deep insights into your AI systems with minimal setup complexity.
Integration takes just two lines of code to start collecting telemetry data, and because everything is built on OpenTelemetry standards it stays compatible with existing monitoring infrastructure and avoids vendor lock-in.
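OpenLLMetry is maintained by Traceloop and ships as the traceloop-sdk package; the oft-cited two-line setup looks roughly like this sketch, where the app name is a placeholder.

```python
from traceloop.sdk import Traceloop

Traceloop.init(app_name="my-llm-app")  # "my-llm-app" is a placeholder; traces are exported via OpenTelemetry

# After init, supported LLM and vector database libraries used in this process are instrumented automatically.
```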
Perfect for developers building production LLM applications who need reliable monitoring without complex setup or proprietary dependencies.
Open-source platform for logging, monitoring, and debugging LLM applications. Route, debug, and analyze AI apps with comprehensive observability tools.
Helicone is the open-source platform that helps developers build reliable AI applications through comprehensive observability. Trusted by the world's fastest-growing AI companies, it provides essential tools for routing, debugging, and analyzing LLM applications.
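Helicone's simplest integration is routing OpenAI traffic through its proxy with a base URL swap plus an auth header, as in the hedged sketch below; the keys are environment-variable placeholders.

```python
import os
from openai import OpenAI

# Route requests through Helicone's OpenAI proxy so they are logged and analyzed automatically.
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://oai.helicone.ai/v1",  # Helicone's proxy endpoint for OpenAI-compatible requests
    default_headers={"Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}"},
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello via Helicone"}],
)
print(response.choices[0].message.content)
```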
The platform offers a comprehensive dashboard for monitoring AI application performance, with detailed request tracking and user analytics. Developers can experiment with prompts, run evaluations, and manage datasets all within one unified interface.
Getting Started: There's a 7-day free trial with no credit card required. The platform is designed to help developers quickly identify issues, optimize performance, and ensure their AI applications run reliably at scale.
Open-source observability platform for GenAI and LLM applications. Real-time monitoring, distributed tracing, prompt management, and AI model evaluation built on OpenTelemetry.

OpenLIT monitors and optimizes your LLM applications with comprehensive observability tools designed for production AI workloads. It is built entirely on OpenTelemetry standards for seamless integration with existing infrastructure.
Quick setup requires just a few lines of code with zero application changes. The platform supports automatic Kubernetes instrumentation through the OpenLIT Operator, making it perfect for containerized environments.
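For the SDK path (as opposed to the Kubernetes Operator), setup is roughly the sketch below; openlit.init() is the documented entry point, and the OTLP endpoint shown is an illustrative local collector address.

```python
import openlit

# Initialize OpenLIT so supported LLM SDK calls in this process emit OpenTelemetry data.
openlit.init(otlp_endpoint="http://127.0.0.1:4318")  # illustrative local OTLP endpoint; can also be set via env vars

# From here, calls made with supported providers (OpenAI, Anthropic, and others) are instrumented automatically.
```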
Privacy-first approach ensures your data never leaves your infrastructure, while the open-source nature eliminates vendor lock-in concerns. Compatible with all major LLM providers and frameworks including OpenAI, Anthropic, Google, AWS Bedrock, and popular vector databases.
Production-ready with minimal performance overhead, designed to scale with your AI applications from development to enterprise deployment.