OpenLIT reposted this
For anyone who has struggled with debugging Ollama, OpenLIT is worth checking out! It runs in Docker and Kubernetes as well :)
OpenLIT allows you to simplify your AI development workflow, especially for Generative AI and LLMs. It streamlines essential tasks like experimenting with LLMs, organizing and versioning prompts, and securely handling API keys. With just one line of code, you can enable OpenTelemetry-native observability, offering full-stack monitoring that includes LLMs, vector databases, and GPUs. This enables developers to confidently build AI features and applications, transitioning smoothly from testing to production.
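To make the "full-stack monitoring" claim concrete, here is a minimal, self-contained sketch of the kind of per-request telemetry such an instrumentation layer records around an LLM call. The `record_llm_call` wrapper and its field names are hypothetical illustrations for this post, not OpenLIT's actual API.

```python
import time

def record_llm_call(model, prompt_tokens, completion_tokens, call):
    """Run an LLM call and capture latency plus token usage.

    Illustrative only: a real OpenTelemetry-native SDK would emit
    spans and metrics instead of returning a plain dict.
    """
    start = time.perf_counter()
    response = call()
    latency_s = time.perf_counter() - start
    return response, {
        "model": model,
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "total_tokens": prompt_tokens + completion_tokens,
        "latency_s": latency_s,
    }

# Stand-in for a real client call (e.g. to Ollama or OpenAI).
response, span = record_llm_call(
    model="llama3",
    prompt_tokens=42,
    completion_tokens=128,
    call=lambda: "stubbed model output",
)
print(span["total_tokens"])  # → 170
```

The point of the "one line of code" pitch is that the wrapper above is what you do *not* have to write yourself: the SDK hooks the client libraries and records these fields automatically.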
New Delhi, New Delhi, IN
Thanks, Amin Espinoza and Joaquin Alejandro Rodriguez, for the mention!
✨ And that’s a wrap! Still buzzing from The Linux Foundation's AI Summit in Amsterdam. This event was a perfect mix of deep technical dives, inspiring keynotes, and practical insights into the future of open source AI.

Day 1 kicked off with Stephen Chin’s warm welcome and crisp framing of the summit, followed by Hilary Carter’s thoughtful session that set a collaborative, community-first tone. The emphasis on openness and building responsibly created the ideal context for everything that followed.

One of the highlights was Mark Collier’s keynote on “The 3 Pillars of Open Source AI Today”: training, inference, and agents. His framing gave a clear picture of how these pillars form the foundation for open ecosystems and why shared innovation is critical to scaling AI responsibly.

⚡ I really enjoyed Diego Braga’s session on “When Kubernetes Meets MLflow”. His practical walkthrough showed how cloud-native tooling accelerates real-world MLOps workflows, making experimentation and the scaling of AI systems far more efficient.

✈️🤯 Another favorite was Peter Friese’s “Beyond Prompts: Building Intelligent Applications With Genkit and the Model Context Protocol.” His live demo illustrated how context transforms LLMs from simple responders into true applications. The flight example drove the point home perfectly.

🚀 Equally valuable was the joint talk by Paras Mamgain and Anmol Krishan Sachdeva, “From Hours to Milliseconds: Scaling AI Inference 10x With Serverless on Kubernetes.” Their playbook of async frameworks, ONNX quantization, caching, and benchmarking was dense with practical techniques for scaling inference at production speed.

🔒📊 I also joined Amin Espinoza and Joaquin Alejandro Rodriguez’s workshop on privacy-conscious telemetry for LLMs. Their stack of OpenLIT, OpenTelemetry, Prometheus, Tempo, and Grafana showed how observability can go hand in hand with PII masking and ethical data handling.

Day 2 kept the momentum going. 🧩 Vaibhav Gupta's “Context Engineering” was a standout. His clarity made complex ideas tangible, and I was especially impressed with his work on BAML, a domain-specific programming language that brings schema-based, type-safe prompt engineering to life. It’s a game-changer for developers designing reliable, production-ready AI agents.

🎭 David vonThenen’s talk on multi-modal sentiment analysis was equally fascinating. By combining NLP with micro-expressions, he showed how models can finally detect nuances like sarcasm, a leap toward more human-like AI understanding.

A huge thank you to The Linux Foundation for hosting such a well-structured, engaging summit. It’s rare to see so many brilliant minds come together to push forward the boundaries of open source AI.

#AIDevSummit #OpenSourceAI #GenAISummit #Amsterdam #Netherlands #MachineLearning #MLOps #GenAI #CloudNative #Kubernetes #AIObservability
OpenLIT was mentioned last night by AIMon's Preetam Joshi. Nice example of dashboarding AI. https://lnkd.in/gAPV5Uvs
Principal Product Marketing Manager @ Redpanda. Experienced in Data Streaming and Stream Processing, Distributed Real-Time Databases, Observability.
We are about to get started at the Silicon Valley GenAI / The AI Alliance meetup at Beckhoff Automation here in San Jose, CA. Sponsored by Nebius. Hosted by Sujee Maniyam & Dave Nielsen. Sign up at https://lu.ma/svgenai
Show us your love & support on Product Hunt: https://lnkd.in/gkdxjx7j
This project proudly follows and maintains the Semantic Conventions with the #OpenTelemetry community, consistently updating to align with the latest standards in #Observability. https://lnkd.in/ehqTywcB
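As a hedged sketch of what those Semantic Conventions look like in practice, the dictionary below mimics span attributes using `gen_ai.*` keys from the OpenTelemetry GenAI conventions. The convention is still evolving, so treat the exact attribute names as an assumption rather than a stable contract; the values are made up for the example.

```python
# Span attributes shaped after the OpenTelemetry GenAI semantic
# conventions (gen_ai.* namespace). Values are illustrative.
span_attributes = {
    "gen_ai.system": "openai",
    "gen_ai.request.model": "gpt-4o",
    "gen_ai.usage.input_tokens": 512,
    "gen_ai.usage.output_tokens": 187,
}

def total_tokens(attrs):
    """Derive total token usage from semconv-style usage attributes."""
    return (attrs["gen_ai.usage.input_tokens"]
            + attrs["gen_ai.usage.output_tokens"])

print(total_tokens(span_attributes))  # → 699
```

Because every compliant instrumentation emits the same attribute names, any OpenTelemetry backend can aggregate token usage across tools without custom mapping, which is the practical payoff of following the shared convention.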
🚀 Added automatic #OpenTelemetry metrics to our OpenLIT TypeScript SDK! #LLM apps & #AI agents built in TypeScript / JavaScript now get usage, latency, and cost metrics out of the box. Thanks to Gerard van Engelen for getting this added!
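Sketched here in Python for brevity rather than in the SDK's own TypeScript, this shows the kind of usage and latency aggregates such out-of-the-box metrics boil down to. The metric names are illustrative, not the SDK's actual instrument names.

```python
# Per-request latencies as an instrumentation layer would collect them (ms).
latencies_ms = [120.0, 95.5, 310.2, 88.7]

# Aggregates a metrics backend would typically derive from a histogram.
metrics = {
    "llm.requests": len(latencies_ms),
    "llm.latency.avg_ms": sum(latencies_ms) / len(latencies_ms),
    "llm.latency.max_ms": max(latencies_ms),
}
print(metrics["llm.requests"])  # → 4
```

"Out of the box" means the SDK records each request into instruments like these automatically; you only point it at an OTLP endpoint and read the dashboard.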
More control for your data in OpenLIT! Launching on #producthunt on 21st August. See ya there! https://lnkd.in/gRAaaHqT #observability #openlit #devtools #genai
🚨 New Episode Alert – Is it Observable? 🚨
🎥 How Much Energy Does Your Prompt Use? Measuring AI Impact with Ecologits

As LLMs become part of our daily workflows, it's time to ask a critical question: what’s the environmental cost of our AI usage?

In this episode, I explore the energy footprint of LLMs and how we can observe and estimate their impact using open-source tools like:
🔍 OpenLLMetry – Distributed tracing for LLMs
📊 OpenLit – GPU usage, cost estimation & evaluation
🌱 Ecologits – Estimating energy, GHG emissions, and resource depletion
⚡ CodeCarbon – Real energy tracking for self-hosted models

💻 As always, I’ve prepared a GitHub repo with all the examples and code used in the episode:
👉 https://lnkd.in/dwTRNGWc

Whether you're running hosted models or self-hosting your own, this episode will help you observe your AI workloads responsibly and understand their environmental impact.

📺 Watch the full episode here:
👉 https://lnkd.in/dcBfPzHk

Let’s build smarter, and greener, AI systems.

#Observability #LLM #Sustainability #OpenTelemetry #AI #GreenTech #Ecologits #OpenLLMetry #OpenLit #CodeCarbon
Relying on language models introduces unpredictable costs. Without detailed telemetry, you lack visibility into token usage, request patterns, or real-time expenditure. This uncertainty stalls budget planning and complicates optimization, especially as usage scales.

Dev Proxy intercepts OpenAI-compatible requests and responses, logging comprehensive usage telemetry in the OpenTelemetry format. The OpenAITelemetryPlugin collects token counts, request details, and session-level costs. You can analyze this data using any OpenTelemetry-compatible dashboard, such as .NET Aspire or OpenLIT, for granular insight.

You gain immediate visibility into how your application utilizes language models. With clear, session-level usage metrics and up-to-date cost estimates, you can optimize prompt strategies, confidently manage budgets, and prevent overspending. Dev Proxy equips you with actionable data to develop, monitor, and scale language model solutions efficiently.

Try Dev Proxy to make your LLM usage transparent and predictable. Learn more: https://lnkd.in/e6Us2vPY
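As a concrete sketch of what session-level cost accounting from token telemetry reduces to, the snippet below totals per-request costs across a session. The per-1K-token prices are placeholder assumptions, not any provider's actual rates, and the plugin's real pricing data is configurable rather than hard-coded like this.

```python
# Placeholder prices in USD per 1K tokens -- assumed, not real rates.
PRICE_PER_1K = {"input": 0.0005, "output": 0.0015}

def request_cost(input_tokens, output_tokens):
    """Cost of one request derived from its token counts."""
    return ((input_tokens / 1000) * PRICE_PER_1K["input"]
            + (output_tokens / 1000) * PRICE_PER_1K["output"])

# (input_tokens, output_tokens) for each intercepted request in a session.
session = [(1200, 400), (800, 250)]
session_cost = sum(request_cost(i, o) for i, o in session)
print(round(session_cost, 6))  # → 0.001975
```

Because the proxy sees every OpenAI-compatible request, this running total can be kept per session without touching application code, which is what makes the spend predictable.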