Teams using Langfuse for LLM observability have visibility into system behavior like latency, errors, and token costs. But observability alone doesn't tell you whether users are actually adopting your GenAI product, what they're trying to accomplish, or where they're hitting friction. This is where the Nebuly-Langfuse integration changes the equation.
Nebuly now natively connects to Langfuse to ingest your chat interaction data, letting you analyze user intent, sentiment, and behavior patterns without managing separate SDKs. For teams already invested in Langfuse, this means seamless access to a complementary layer of user analytics that works with your existing infrastructure.
Understanding the gap between technical metrics and user needs is critical. As research shows, observability and user analytics measure different aspects of the same system. Observability tells you if your system is working. User analytics tells you if users want what you built. Both are essential for balancing performance with adoption.
Why both observability and user analytics matter
Langfuse excels at what it was designed for. It traces LLM calls, captures latency, logs errors, and tracks costs. These technical metrics are table stakes for running GenAI applications in production. But traces alone don't reveal user intent, sentiment shifts, or which features drive engagement.
User analytics adds a complementary layer. By analyzing conversation content, user feedback, and interaction patterns, you uncover what actually drives adoption and ROI. You spot when users are struggling with a feature even if the system responses are technically correct. You detect sentiment shifts that suggest dissatisfaction before churn happens. You measure which model versions users prefer based on actual behavior.
The Nebuly-Langfuse integration removes friction by pulling data from a system teams are already using. No need to decide between tools. No engineering time spent on custom integrations. Just connect your Langfuse account to Nebuly and start analyzing user behavior alongside your observability data.
How the integration works
Nebuly offers two paths to connect Langfuse. Your choice depends on data governance requirements and infrastructure preferences.
Step 1: Understand your integration options
Full integration is the simpler path. You authenticate your Langfuse account by providing API keys, which Nebuly stores securely in encrypted vaults. Nebuly then automatically pulls your trace data from Langfuse daily (the interval is configurable). The platform converts traces into Nebuly's internal format and surfaces user-level insights immediately. This approach requires zero engineering overhead once setup is complete.
Local integration gives you full control. Nebuly provides open-source Python scripts hosted on GitHub that you can run in your own infrastructure. You configure the scripts with your Langfuse API keys, and they extract your data, transform it, and send it to Nebuly's endpoint. This approach is useful for teams with strict data residency requirements or those who want to enrich Langfuse data with customer context before sending it to Nebuly.
The integration works because Langfuse already stores trace data in a structured format. Nebuly simply reads that format and translates it. You don't need to change how you instrument Langfuse. Existing traces flow through to Nebuly automatically.
Step 2: Prepare your Langfuse setup
Before connecting, ensure your Langfuse traces include the user context you'll want to analyze later. Langfuse accepts tags on trace objects, which is where you should store user attributes like geography, role, customer ID, or cohort. Tags become the dimensions you'll use to slice user analytics in Nebuly.
For example, if you're building an internal copilot for your legal team, tag each trace with the user's department, seniority level, and company. If you're running a customer-facing chatbot, tag with customer segment, subscription tier, or region. These tags bridge technical traces to user profiles, allowing Nebuly to answer questions like "Are senior lawyers adopting this feature more than junior staff?"
This tagging discipline is critical. Without rich context in Langfuse, Nebuly analytics will be limited to conversation-level insights. With context, you gain cohort analysis, segment-level adoption tracking, and targeted improvement opportunities.
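As an illustration, here is a minimal sketch of setting those tags with the Langfuse Python SDK (v2-style client API). The tag values and trace/generation names are made up for the legal-copilot example above, and the exact API surface may differ in newer SDK versions:

```python
# Minimal sketch: attaching user context to a Langfuse trace so it can be
# segmented later in Nebuly. Tag values and names are illustrative only.
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY from the environment

trace = langfuse.trace(
    name="contract-review",
    user_id="user_1842",  # stable user identifier
    tags=["dept:legal", "seniority:senior", "region:emea"],  # dimensions to slice analytics on
)

trace.generation(
    name="draft-clause",
    model="gpt-4o",
    input="Summarize the indemnification clause in this contract...",
    output="The clause limits liability to direct damages and...",
)

langfuse.flush()  # ensure the trace is delivered before the process exits
```

Whatever naming scheme you choose, keep it consistent across services so the same tag always means the same cohort downstream.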
Step 3: Choose your integration method and authenticate
For full integration, navigate to Nebuly's integrations settings and select Langfuse. You'll be asked for your Langfuse public and private API keys. These are stored in encrypted secret storage (Azure Key Vault or equivalent for self-hosted), accessible only to the components that need them.
By default, Nebuly pulls data once daily. If you need more frequent syncs for real-time analysis, you can configure a shorter interval. Most teams find daily ingestion sufficient for adoption analysis, though faster cadences are useful when testing new model versions or features.
For local integration, clone the open-source integration repository from Nebuly's GitHub. Install dependencies, configure your Langfuse API keys in the script, and schedule it to run regularly (typically daily via cron or a similar scheduler). The script handles API pagination automatically, so large trace volumes are processed reliably. Output is sent directly to Nebuly's ingestion endpoint.
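Conceptually, the local-integration pattern looks like the sketch below: pull recent traces from Langfuse, map them to interaction events, and forward them. The ingestion URL, payload shape, and NEBULY_API_KEY variable here are placeholders; the open-source scripts on Nebuly's GitHub define the real contract, and the Langfuse calls assume the v2 Python SDK's fetch_traces method.

```python
# Hedged sketch of the local-integration pattern: pull yesterday's traces
# from Langfuse and forward them to Nebuly. Endpoint, payload shape, and
# NEBULY_API_KEY are placeholders; use the official scripts in production.
import os
from datetime import datetime, timedelta, timezone

import requests
from langfuse import Langfuse

langfuse = Langfuse()  # LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY from the environment
NEBULY_ENDPOINT = os.environ["NEBULY_INGESTION_URL"]  # placeholder: take the real URL from Nebuly's docs
HEADERS = {"Authorization": f"Bearer {os.environ['NEBULY_API_KEY']}"}  # placeholder auth scheme

since = datetime.now(timezone.utc) - timedelta(days=1)  # run daily, e.g. via cron
page = 1
while True:
    batch = langfuse.fetch_traces(from_timestamp=since, page=page, limit=100)  # paginated pull
    for trace in batch.data:
        event = {  # illustrative mapping from a Langfuse trace to an interaction event
            "user_id": trace.user_id,
            "tags": trace.tags,
            "input": trace.input,
            "output": trace.output,
            "timestamp": trace.timestamp.isoformat(),
        }
        requests.post(NEBULY_ENDPOINT, json=event, headers=HEADERS, timeout=30)
    if page >= batch.meta.total_pages:
        break
    page += 1
```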
Step 4: Verify data flow and test analysis
Once authentication is complete, Nebuly begins syncing Langfuse traces. The first sync might take hours depending on your trace volume. You'll see a confirmation in Nebuly's integrations dashboard when the sync completes.
Test the connection by querying a segment of your data in Nebuly. Create a user cohort based on a tag (e.g., all traces from your legal department) and check that the count matches your expectations. Run a simple sentiment analysis on conversations from that cohort to verify content was ingested correctly.
If data doesn't appear after the first sync window, check that your Langfuse traces include the tags Nebuly expects. If traces exist but show minimal context, review your instrumentation code to ensure tags are being set consistently.
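A quick way to audit this from the Langfuse side is to sample recent traces and confirm tags are present. The sketch below assumes the v2 Python SDK's fetch_traces method; adapt it to your SDK version:

```python
# Hedged spot check: sample recent Langfuse traces and confirm they carry
# the user-context tags Nebuly will segment on.
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY from the environment
traces = langfuse.fetch_traces(limit=50).data

tagged = [t for t in traces if t.tags]
print(f"{len(tagged)}/{len(traces)} recent traces carry tags")
for t in traces[:5]:
    print(t.id, t.user_id, t.tags)  # spot-check a few traces for consistent tagging
```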
Step 5: Start analyzing user behavior
With data flowing from Langfuse to Nebuly, you now have visibility into both system performance and user behavior. In Nebuly, you can segment interactions by the tags you set in Langfuse. Analyze sentiment by cohort. Identify which user segments are hitting specific friction points. Measure adoption velocity by customer tier or region.
Correlate this with Langfuse metrics. If a particular user cohort shows low sentiment, check Langfuse for elevated latency or error rates in their traces. If adoption is high in one region but low in another, investigate whether your system performs differently by geography.
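As a rough illustration of that cross-check, you could pull a cohort's traces from Langfuse and compute their average latency. The sketch below assumes traces were tagged (here with a hypothetical "dept:legal" tag) and that the fetched trace objects expose a latency field; verify both against your own instrumentation and SDK version.

```python
# Hedged sketch: cross-check a low-sentiment cohort against Langfuse latency.
# The "dept:legal" tag and the latency field are assumptions to verify.
from statistics import mean

from langfuse import Langfuse

langfuse = Langfuse()
cohort = langfuse.fetch_traces(tags=["dept:legal"], limit=100).data

latencies = [t.latency for t in cohort if t.latency is not None]
if latencies:
    print(f"{len(cohort)} traces, avg latency {mean(latencies):.2f}s")
else:
    print("No latency data returned for this cohort")
```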
The power emerges when you stop treating these metrics separately. System performance is necessary but insufficient for success. Adoption is essential. Only by combining both do you get the complete picture of what's working and what needs improvement.
Common mistakes to avoid
Not including user context in Langfuse tags is the most common oversight. Teams instrument Langfuse perfectly for technical debugging but include no user attributes in their traces. Then when they connect Nebuly, they get conversation-level insights but can't answer business questions like "Which customer segments are adopting this?" Store at least user ID and segment type as tags from day one, before you hit scale.
Assuming data will sync immediately is another pitfall. The first sync from a large Langfuse account can take hours. Teams authenticate, check Nebuly five minutes later, see no data, and assume the integration failed. Plan for initial sync latency and use the integration dashboard to track progress rather than querying the analytics immediately.
Pulling data too frequently is less common but wastes API quota. If you configure hourly syncs for 100K traces daily, you'll burn through Langfuse API allowances quickly. Unless you're actively testing and iterating on your product hourly, daily ingestion is sufficient. Set it once and monitor quarterly.
Evidence and best practices
Research on observability versus user analytics shows that teams measuring both adoption and technical performance close feedback loops twice as fast as those measuring only one. Observability tells you the system works. User analytics tells you the system matters. Organizations combining both reduce time to ROI by an average of 40%.
Langfuse documentation provides detailed guidance on instrumentation best practices, including tag structure and trace enrichment. Most production teams apply 3-5 key tags per trace: user ID, segment, version, experiment cohort, and region.
For regulated industries like finance and healthcare, analyzing user behavior within compliance frameworks is essential. The Nebuly-Langfuse integration supports both cloud and self-hosted deployments, allowing you to keep sensitive conversation data within your infrastructure while still gaining adoption insights.
Key takeaways
Start by auditing your current Langfuse instrumentation. Make sure you're tagging traces with user context. If you're missing tags, add them now. This requires a code change but takes minutes and pays off immediately when you start analyzing adoption by user segment.
Next, check Nebuly's documentation for integration setup steps. Choose between full and local integration based on your data governance requirements. Full integration is faster to set up. Local integration gives you control.
Once connected, resist the urge to query analytics immediately. Allow the first sync to complete, then start with basic segments to verify data integrity. Run a simple cohort analysis before diving into advanced questions. Verify that user counts, sentiment distributions, and interaction patterns match what you expect.
The real value emerges when you stop thinking of observability and user analytics as separate concerns. They're complementary layers of the same system. Langfuse shows you if your system works. Nebuly shows you if your system matters to users. Together, they give you the feedback loop you need to iterate quickly and confidently.
Organizations combining technical monitoring with user analytics reduce risk more effectively. The Nebuly-Langfuse integration makes it frictionless for teams already using Langfuse to add the user analytics layer they're missing. Start with a simple connection, verify data flow, then layer in more sophisticated adoption tracking as your team's measurement culture matures.
Related Resources
For detailed step-by-step setup instructions, visit our integration documentation. The guide covers:
- Obtaining Langfuse API keys
- Configuring full vs. local integration
- Setting up trace tags for user context
- Verifying data flow and running your first analysis
You can also review the open-source integration scripts on GitHub for local integration implementation.