Jentic Mini
Jentic Mini is a free, self-hosted open source API execution layer that brokers AI agent requests and injects credentials at runtime.
What is Jentic Mini?
Jentic Mini is a free, self-hosted API execution layer that sits between AI agents (such as OpenClaw, NemoClaw, and others) and external APIs. Instead of requiring the agent to handle authentication details or maintain custom “glue code” for each service, Jentic Mini brokers the requests and injects credentials at runtime.
The core purpose is to help developers connect general-purpose agents to real systems while preventing secrets from being exposed back to the agent. It does this by using a curated, machine-readable catalog of APIs and workflows (described as 10,000+), so the agent can discover and execute actions without managing API specifics in prompts.
Key Features
- Encrypted credentials vault: Stores API keys, OAuth tokens, and secrets in an encrypted local vault; credentials are injected at execution time and are not returned through the API.
- Self-hosted deployment: Runs entirely in your environment using a single Docker instance, exposing a FastAPI server backed by SQLite, with Swagger docs and hot reload for development.
- Toolkit-scoped permissions with a killswitch: Uses one toolkit key per agent; each toolkit bundles credentials and an access policy so each agent receives only the permissions it needs. Access can be revoked instantly with a single killswitch.
- Runtime request brokering: Brokers API requests on the agent’s behalf, including finding the right API and handling the mechanics of the request.
- Large curated catalog of APIs and workflows: Provides an AI-curated catalog described as 10,000+ APIs and workflows; after credentials are added for a catalog API, its specs and workflows can be auto-imported.
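To make the brokering and credential-injection model concrete, the sketch below builds a hypothetical agent-side execution request. The endpoint path, header scheme, and payload fields are illustrative assumptions, not documented API details; the actual routes are exposed in the server's Swagger docs.

```python
# Hypothetical sketch of an agent-side call to the Jentic Mini broker.
# The endpoint path, header name, and payload fields are illustrative
# assumptions; check the live Swagger docs for the real API shape.

def build_execution_request(toolkit_key: str, tool_id: str, params: dict) -> dict:
    """Assemble a brokered-execution request.

    The agent supplies only its toolkit key and the action parameters;
    it never sees the underlying API credentials, which the broker
    injects server-side at execution time.
    """
    return {
        "url": "http://localhost:8000/execute",        # assumed local FastAPI server
        "headers": {
            "Authorization": f"Bearer {toolkit_key}",  # assumed header scheme
            "Content-Type": "application/json",
        },
        "json": {"tool_id": tool_id, "params": params},
    }


request = build_execution_request(
    toolkit_key="tk_demo_agent",      # one toolkit key per agent
    tool_id="github.create_issue",    # hypothetical catalog entry
    params={"title": "Bug report"},
)
```

Note that no secret appears anywhere in the request the agent constructs: the toolkit key is an opaque handle, and the real credential stays inside the vault.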
How to Use Jentic Mini
- Install and run it in your environment (the page indicates you can get started with an install command that downloads an installer script from the Jentic “quick claw” repository).
- Configure an agent toolkit so Jentic Mini knows which credentials and access policy apply to that agent (toolkit keys are described as scoped per agent).
- Add credentials for APIs in the catalog you want the agent to use; Jentic Mini can then auto-import the relevant API specs and workflows.
- Connect your AI agent to Jentic Mini so that when the agent requests an action, Jentic Mini brokers the call and injects credentials at execution time.
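The four steps above can be sketched as a minimal in-memory model of the broker, with no network calls or encryption. Every class, method, and field name here is an illustrative assumption, not the actual Jentic Mini client API; the point is only to show how toolkit scoping, credential storage, and execution-time injection fit together.

```python
# Conceptual sketch of the setup flow described above. All names are
# illustrative assumptions; no real network calls or encryption occur.

class MiniBrokerSketch:
    def __init__(self):
        self.vault = {}     # stands in for the encrypted credentials vault
        self.toolkits = {}  # toolkit key -> access policy

    def create_toolkit(self, key: str, allowed_tools: set) -> None:
        """Step 2: scope a toolkit key to one agent's permitted tools."""
        self.toolkits[key] = {"allowed": allowed_tools, "active": True}

    def add_credential(self, api: str, secret: str) -> None:
        """Step 3: store a credential (encrypted at rest in the real system)."""
        self.vault[api] = secret

    def execute(self, key: str, tool_id: str) -> str:
        """Step 4: broker a call, injecting the credential server-side."""
        policy = self.toolkits.get(key)
        if not policy or not policy["active"]:
            raise PermissionError("toolkit revoked or unknown")
        if tool_id not in policy["allowed"]:
            raise PermissionError(f"{tool_id} not permitted for this toolkit")
        api = tool_id.split(".")[0]
        secret = self.vault[api]  # injected here, never returned to the agent
        return f"executed {tool_id} (credential length {len(secret)})"

    def killswitch(self, key: str) -> None:
        """Revoke an agent's access instantly."""
        self.toolkits[key]["active"] = False
```

A quick walk-through: `create_toolkit("tk1", {"github.create_issue"})`, then `add_credential("github", ...)`, then `execute("tk1", "github.create_issue")` succeeds, while `killswitch("tk1")` makes any further `execute` raise `PermissionError`.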
Use Cases
- Connecting an agent to multiple third-party services without embedding secrets in prompts: Jentic Mini handles authentication and request brokering while keeping credentials out of the agent.
- Managing agent access safely across teams or workflows: Use separate toolkit keys and access policies so different agents have different permission sets, and revoke access quickly via the killswitch.
- Rapid development with a local, self-hosted API layer: Run the FastAPI server in Docker with Swagger docs and hot reload to test integrations during development.
- Building agent workflows around a broad catalog of tools: Rely on the provided API and workflow catalog to let agents discover and execute actions for many services.
- Integrating with existing agent frameworks: The page specifically references OpenClaw and NemoClaw as examples of agents that can sit behind the Jentic Mini layer.
FAQ
Is Jentic Mini free to use and can it be self-hosted?
Yes. The page describes Jentic Mini as free, self-hosted, and licensed under Apache 2.0.
Does Jentic Mini send secrets back to the AI agent?
No. The page states that secrets never touch the agent and that credentials are injected at execution time and never returned through the API.
What does “toolkit-scoped permissions” mean?
The page describes one toolkit key per agent. Each toolkit includes credentials and an access policy, so the agent only receives the permissions it needs; access can be revoked instantly using a killswitch.
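The policy check this implies can be illustrated in a few lines. The `active` and `allowed` field names are assumptions made for the sketch, not real Jentic Mini schema:

```python
# Minimal sketch of toolkit-scoped permission checking with a killswitch.
# The field names are illustrative assumptions, not real schema.

def is_allowed(toolkit: dict, tool_id: str) -> bool:
    """A toolkit grants access only while active, and only to its scoped tools."""
    return toolkit["active"] and tool_id in toolkit["allowed"]


toolkit = {"active": True, "allowed": {"slack.post_message"}}
assert is_allowed(toolkit, "slack.post_message")
assert not is_allowed(toolkit, "github.create_issue")  # out of scope

toolkit["active"] = False                              # killswitch flipped
assert not is_allowed(toolkit, "slack.post_message")   # revoked instantly
```

Because every request passes through this gate at the broker, revocation takes effect on the very next call, with no agent-side changes required.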
What APIs does Jentic Mini support?
The page describes a curated catalog of “10,000+ APIs and workflows.” It also notes that after adding credentials for a catalog API, Jentic Mini auto-imports that API’s specs and workflows.
How is Jentic Mini deployed?
It runs entirely in your infrastructure via a single Docker instance, serving a FastAPI server backed by SQLite.
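A deployment of that shape could be sketched with Docker Compose. The image name, port, and volume path below are assumptions for illustration, not the project's published values:

```yaml
# Hypothetical compose sketch: single container, FastAPI + SQLite.
# Image name, port, and paths are illustrative assumptions.
services:
  jentic-mini:
    image: jentic/mini:latest   # assumed image name
    ports:
      - "8000:8000"             # FastAPI server (Swagger UI typically at /docs)
    volumes:
      - ./data:/app/data        # SQLite database and encrypted vault persist here
```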
Alternatives
- Use a custom agent-side integration layer: Instead of a centralized broker, you can write per-service glue code and manage authentication in your agent workflow. This typically increases maintenance burden and raises the risk of mishandled secrets.
- Build a generic self-hosted “API gateway” for agent calls: An API gateway or middleware layer can broker requests and manage credentials, but it may not provide the same AI-curated catalog of API specs and workflows described for Jentic Mini.
- Use a hosted integration service instead of self-hosting: The page contrasts Mini with a hosted option (Jentic Hosted / VPC). Hosted solutions shift operational responsibility away from your environment, while a self-hosted layer keeps execution and credential handling inside your infrastructure.
- Implement a sandboxed simulation workflow: If your main goal is testing without calling real external APIs, consider a simulation/sandbox approach. The page mentions Jentic Mini supports a “simulate mode,” though details beyond that are not provided here.
Alternative Tools
AakarDev AI
AakarDev AI is a powerful platform that simplifies the development of AI applications with seamless vector database integration, enabling rapid deployment and scalability.
Arduino VENTUNO Q
Arduino VENTUNO Q is an edge AI computer for robotics, combining AI inference hardware and a microcontroller for deterministic control. Arduino App Lab-ready.
Devin
Devin is an AI coding agent that helps software teams complete code migrations and large refactoring by running subtasks in parallel.
BenchSpan
BenchSpan runs AI agent benchmarks in parallel, captures scores and failures in run history, and uses commit-tagged executions to improve reproducibility.
Edgee
Edgee is an edge-native AI gateway that compresses prompts before they reach LLM providers, routing 200+ models through one OpenAI-compatible API.
LobeHub
LobeHub is an open-source platform designed for building, deploying, and collaborating with AI agent teammates, functioning as a universal LLM Web UI.