Heimdall
Heimdall is an open-source observability platform for monitoring Model Context Protocol (MCP) servers and AI/LLM applications, providing real-time tracing, metrics, and analytics built on OpenTelemetry.
What is Heimdall?
Heimdall is an open-source observability platform purpose-built for Model Context Protocol (MCP) servers and AI/LLM applications. It gives developers, platform teams, and AI engineers deep visibility into how tools, resources, and prompts are being executed inside their AI infrastructure.
Built on OpenTelemetry standards, Heimdall collects and visualizes real-time traces, metrics, and analytics from your MCP-based tools and AI services. It is designed to be self-hosted, putting you in full control of your data while enabling detailed monitoring of latency, error rates, usage patterns, and tool behavior.
By integrating lightweight SDKs into your Python or JavaScript/TypeScript MCP servers, Heimdall makes it easy to trace each tool call and resource request with minimal changes to existing code. It is ideal for teams building AI agents, tooling, and LLM-powered backends who want production-grade observability without vendor lock-in.
Key Features
- Real-time tracing for MCP servers: Track every tool call, resource access, and prompt-related execution in real time. Heimdall surfaces detailed traces that show when a tool was invoked, how long it took, what parameters were passed, and whether it succeeded or failed.
- Dashboard analytics and visualizations: Use the built-in frontend dashboard to analyze latency, error rates, and usage patterns across your AI services. Quickly spot slow tools, failing endpoints, or unusual usage spikes.
- OpenTelemetry-native architecture: Heimdall uses OTLP/HTTP as its ingestion protocol and is fully aligned with OpenTelemetry standards. This makes it easier to integrate with existing observability stacks or extend your telemetry pipeline.
- Easy SDK integration for Python and JavaScript/TypeScript: Heimdall provides official Python and JavaScript/TypeScript SDKs (both published as hmdl) with simple decorators/wrappers:
  - Python: @trace_mcp_tool() decorator to trace MCP tools.
  - JS/TS: traceMCPTool() wrapper for async tool functions.
- Automatic parameter capture: The Python SDK automatically inspects function signatures so parameters are captured as named objects (e.g., { "query": "test", "limit": 5 }). The JS/TS SDK lets you specify paramNames to achieve the same, making traces more readable and useful for debugging.
- Self-hosted deployment: Heimdall currently supports only self-hosted deployment, giving you full control over infrastructure, security, and data residency. You run both the backend and frontend on your own servers or local development environment.
- Organization and project model: Heimdall uses an Organization → Project structure:
  - Organizations group related projects and teams.
  - Projects represent individual applications or environments, each with a unique Project ID for trace collection.
- Environment-based configuration: Configure the Heimdall SDKs via environment variables (e.g., HEIMDALL_ENDPOINT, HEIMDALL_ORG_ID, HEIMDALL_PROJECT_ID, HEIMDALL_SERVICE_NAME, HEIMDALL_ENVIRONMENT, HEIMDALL_ENABLED) for easy deployment across development, staging, and production.
- Future roadmap (planned features): The repository outlines several upcoming capabilities:
  - User tracking: Associate traces with user identities for user-level analytics (currently all requests are anonymous).
  - LLM evaluation: Built-in model quality scoring and human evaluation workflows.
  - Managed cloud host: Optional managed cloud deployment for teams that prefer not to self-host.
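The "Automatic parameter capture" feature above relies on a general Python mechanism: inspect.signature can bind a call's positional and keyword arguments to their declared parameter names. The sketch below is an illustrative stand-in for how such capture can work, not the actual hmdl implementation:

```python
import inspect

def capture_params(func, args, kwargs):
    """Bind call arguments to the function's declared parameter names."""
    bound = inspect.signature(func).bind(*args, **kwargs)
    bound.apply_defaults()  # include parameters left at their default values
    return dict(bound.arguments)

def search_tool(query: str, limit: int = 10) -> dict:
    return {"results": [], "query": query, "limit": limit}

params = capture_params(search_tool, ("test",), {"limit": 5})
print(params)  # {'query': 'test', 'limit': 5}
```

Binding by name is what turns an opaque argument tuple into the readable { "query": "test", "limit": 5 } objects shown in traces.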
How to Use Heimdall
Using Heimdall involves three main steps: running the platform, creating an organization and project, and integrating the SDKs into your MCP servers or AI applications.
1. Set up prerequisites
Ensure you have the required runtime dependencies:
- Node.js 18+ (for Heimdall backend and frontend)
- Python 3.9+ (if you plan to use the Python SDK)
Clone the Heimdall repository from GitHub and navigate into it.
2. Start the backend
From the project root:
cd backend
npm install
npm run dev
This starts the Heimdall backend, which exposes an OTLP/HTTP endpoint (default: http://localhost:4318) for trace ingestion.
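For reference, OTLP/HTTP trace exports are posted as JSON to the standard /v1/traces path on that port. Assuming Heimdall follows the standard OTLP/HTTP route, a minimal export body looks like the following (field names come from the OTLP spec; the IDs, timestamps, and the mcp.tool.query attribute are placeholder values for illustration):

```python
import json

# Minimal OTLP/HTTP JSON trace export (placeholder IDs and timestamps).
payload = {
    "resourceSpans": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "my-mcp-server"}}
        ]},
        "scopeSpans": [{
            "scope": {"name": "hmdl"},
            "spans": [{
                "traceId": "0af7651916cd43dd8448eb211c80319c",
                "spanId": "b7ad6b7169203331",
                "name": "search_tool",
                "kind": 1,  # SPAN_KIND_INTERNAL
                "startTimeUnixNano": "1700000000000000000",
                "endTimeUnixNano": "1700000000500000000",
                "attributes": [
                    {"key": "mcp.tool.query", "value": {"stringValue": "test"}}
                ],
            }],
        }],
    }],
}

body = json.dumps(payload)
# POST this body with Content-Type: application/json to
# http://localhost:4318/v1/traces
```

In normal use the SDKs build and send these payloads for you; the shape is only relevant if you integrate another language via plain OpenTelemetry.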
3. Start the frontend
In a separate terminal:
cd frontend
npm install
npm run dev
The frontend is typically available at http://localhost:5173, providing the web UI for organizations, projects, and analytics.
4. Create an account, organization, and project
- Open http://localhost:5173 in a browser.
- Create an account using your email and password.
- Create an Organization to group your projects.
- Create a Project inside the organization. Each project gets a unique Project ID used to associate incoming traces.
Then, go to the Settings page for your project to locate:
- Organization ID
- Project ID
You will use these values in your SDK configuration.
5. Integrate the Python SDK
Install the Python package:
pip install hmdl
Basic integration example:
from hmdl import HeimdallClient, trace_mcp_tool

# Initialize client
client = HeimdallClient(
    endpoint="http://localhost:4318",
    org_id="your-org-id",          # From Settings
    project_id="your-project-id",  # From Settings
    service_name="my-mcp-server",
    environment="development"
)

@trace_mcp_tool()
def search_tool(query: str, limit: int = 10) -> dict:
    return {"results": [], "query": query, "limit": limit}

result = search_tool("test", limit=5)
client.flush()
The decorator automatically creates traces for each invocation of search_tool, capturing arguments and execution metadata.
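Conceptually, such a decorator wraps the tool function, times the call, and records whether it succeeded or raised. The toy stand-in below shows that wrapping pattern (illustrative only; the record callback and its signature are invented here and are not the real hmdl internals):

```python
import functools
import time

def trace_tool_sketch(record):
    """Toy tracing decorator: calls record(name, duration_s, status, params)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = func(*args, **kwargs)
                record(func.__name__, time.perf_counter() - start, "ok", kwargs)
                return result
            except Exception:
                record(func.__name__, time.perf_counter() - start, "error", kwargs)
                raise  # re-raise so callers still see the failure
        return wrapper
    return decorator

spans = []  # collected (name, duration, status, params) tuples

@trace_tool_sketch(lambda *span: spans.append(span))
def search_tool(query, limit=10):
    return {"results": [], "query": query, "limit": limit}

search_tool("test", limit=5)
print(spans[0][0], spans[0][2])  # search_tool ok
```

The real SDK additionally buffers these spans and exports them over OTLP/HTTP when you call client.flush().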
6. Integrate the JavaScript/TypeScript SDK
Install the JS/TS package:
npm install hmdl
Basic integration example (TypeScript-style):
import { HeimdallClient, traceMCPTool } from 'hmdl';

const client = new HeimdallClient({
  endpoint: "http://localhost:4318",
  orgId: "your-org-id",         // From Settings
  projectId: "your-project-id", // From Settings
  serviceName: "my-mcp-server",
  environment: "development",
});

const searchTool = traceMCPTool(
  async (query: string, limit: number = 10) => ({
    results: [],
    query,
    limit,
  }),
  {
    name: "search-tool",
    paramNames: ["query", "limit"],
  }
);

await searchTool("test", 5);
await client.flush();
7. Configure via environment variables (optional)
You can configure the Heimdall client using environment variables instead of hardcoding values:
export HEIMDALL_ENDPOINT="http://localhost:4318"
export HEIMDALL_ORG_ID="your-org-id"
export HEIMDALL_PROJECT_ID="your-project-id"
export HEIMDALL_SERVICE_NAME="my-mcp-server"
export HEIMDALL_ENVIRONMENT="development"
export HEIMDALL_ENABLED="true"
Then initialize the client without arguments:
# Python
from hmdl import HeimdallClient
client = HeimdallClient()
// JavaScript/TypeScript
import { HeimdallClient } from 'hmdl';
const client = new HeimdallClient();
Once integrated, your MCP tools will emit telemetry to the Heimdall backend, and you can analyze traces and metrics in the web UI.
Use Cases
1. Observability for AI agents and toolchains
Teams building complex AI agents that rely on MCP tools need to understand how each tool performs in real workloads. Heimdall provides:
- Per-tool latency tracking and error monitoring.
- Visibility into which tools are used most frequently.
- Insights into how prompts and resources are accessed through those tools.
This helps teams optimize their toolchain, deprecate underused tools, and debug agent behavior more effectively.
2. Production monitoring for LLM-powered backends
For organizations running LLM-backed APIs or AI microservices in production, Heimdall acts as a critical observability layer:
- Monitor request throughput and response times.
- Pinpoint slow paths caused by external dependencies or specific tools.
- Detect error spikes or regressions after deployments.
With Heimdall integrated via OpenTelemetry, you can align AI observability with your broader monitoring stack.
3. Development and debugging of MCP servers
During development, Heimdall helps engineers debug new MCP servers or tools:
- Trace every call to a new tool during tests.
- Inspect parameters and returned data to verify correctness.
- Quickly see where exceptions are thrown and under what conditions.
This accelerates feedback loops and reduces the time needed to identify issues in multi-tool, multi-step workflows.
4. Internal AI platform and infrastructure teams
Platform teams who provide internal AI infrastructure to multiple product teams can use Heimdall to offer standardized observability:
- Create separate projects for each application or tenant.
- Monitor organization-wide usage and performance patterns.
- Use forthcoming user-level tracking (planned) to understand consumption by team or customer.
This enables better capacity planning, chargeback/showback models, and service reliability.
5. Privacy-conscious or regulated environments
Because Heimdall is self-hosted and open source, it is well-suited for organizations operating in highly regulated or security-sensitive environments:
- Keep all traces and metadata within your own infrastructure.
- Align deployments with internal security and compliance requirements.
- Customize the platform or integrate it with existing monitoring and alerting tools.
FAQ
Is Heimdall free to use?
Yes. Heimdall is released under the MIT license, making it free and open source for both personal and commercial use. You are responsible for hosting and running the platform on your own infrastructure. The roadmap also mentions a potential managed cloud offering in the future, but the core project is open source.
What technologies and standards does Heimdall use?
Heimdall is built on OpenTelemetry and uses the OTLP/HTTP protocol (default port 4318) for ingesting telemetry data. The backend and frontend are Node.js-based services, and the official client SDKs are available for Python and JavaScript/TypeScript.
What environments and languages are supported?
Heimdall currently provides official SDKs for:
- Python (via the hmdl package; requires Python 3.9+)
- JavaScript/TypeScript (via the hmdl npm package; runs with Node.js 18+)
Because Heimdall speaks OpenTelemetry/OTLP, you can potentially integrate other languages using standard OpenTelemetry libraries, though the most seamless experience is with the official MCP-focused SDKs.
Do I have to self-host Heimdall?
Yes, at present Heimdall is designed as a self-hosted platform. You run both the backend and frontend services yourself (e.g., on local machines, VMs, Kubernetes, or your preferred hosting provider). The project roadmap mentions a managed cloud offering as a future enhancement, but it is not part of the current release.
How do I configure multiple environments (dev, staging, production)?
Heimdall supports environment-aware configuration via SDK parameters and environment variables. A common pattern is:
- Use different Projects (and possibly Organizations) for different environments.
- Set environment variables such as HEIMDALL_ENDPOINT, HEIMDALL_ORG_ID, HEIMDALL_PROJECT_ID, HEIMDALL_SERVICE_NAME, and HEIMDALL_ENVIRONMENT separately in each environment.
This keeps traces logically separated and makes it easy to compare behavior across development, staging, and production.
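One way to wire this pattern up is a small helper that assembles SDK settings from the HEIMDALL_* variables, with per-environment values injected by your deployment tooling. The variable names below match the conventions documented above; the config dict itself and its defaults are an illustrative sketch:

```python
import os

def heimdall_config():
    """Assemble SDK settings from the HEIMDALL_* environment variables."""
    return {
        "endpoint": os.environ.get("HEIMDALL_ENDPOINT", "http://localhost:4318"),
        "org_id": os.environ.get("HEIMDALL_ORG_ID"),
        "project_id": os.environ.get("HEIMDALL_PROJECT_ID"),
        "service_name": os.environ.get("HEIMDALL_SERVICE_NAME", "my-mcp-server"),
        "environment": os.environ.get("HEIMDALL_ENVIRONMENT", "development"),
        "enabled": os.environ.get("HEIMDALL_ENABLED", "true") == "true",
    }

# Simulate a staging deployment's environment:
os.environ["HEIMDALL_ENVIRONMENT"] = "staging"
os.environ["HEIMDALL_PROJECT_ID"] = "staging-project-id"
cfg = heimdall_config()
print(cfg["environment"])  # staging
```

Because each environment exports its own Project ID, traces from dev, staging, and production land in separate projects automatically.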
Can Heimdall associate traces with specific end users?
User-level tracking is listed as a TODO/roadmap feature. Currently, all requests are treated as anonymous, and user identities are not associated with traces by default. The project plans to add support for tracking users and providing user-level analytics in future versions.
Alternatives
AakarDev AI
AakarDev AI is a powerful platform that simplifies the development of AI applications with seamless vector database integration, enabling rapid deployment and scalability.
PromptLayer
PromptLayer is a platform for prompt management, evaluations, and LLM observability, designed to enhance AI engineering workflows.
PingPulse
PingPulse provides AI agent observability, allowing you to track agent handoffs, detect issues like stalls and loops, and receive alerts for misbehavior with minimal code integration.
BookAI.chat
BookAI allows you to chat with your books using AI by simply providing the title and author.
Devin
Devin is an AI coding agent and software engineer that helps developers build better software faster.
imgcook
imgcook is an intelligent tool that converts design mockups into high-quality, production-ready code with a single click.