langfuse-core-workflow-a
jeremylongshore/claude-code-plugins-plus-skills
Implement comprehensive end-to-end tracing for Large Language Model (LLM) calls, chains, and complex agents using Langfuse. This workflow demonstrates the OpenAI drop-in wrapper, manual span creation for RAG pipelines, tracing of streamed responses, and integration with multiple LLM providers (OpenAI, Anthropic). Use it to monitor latency, track token usage, and debug complex AI features in production.