
OpenMolt

OpenMolt lets you build programmatic AI agents in Node.js that think, plan, and act with tools, integrations, and memory from your codebase.

What is OpenMolt?

OpenMolt lets you build programmatic AI agents in Node.js that think, plan, and act using tools, integrations, and memory directly from your codebase. Instead of running agent logic in a separate product UI, you define agents, connect tools and integrations, and orchestrate agent behavior directly from application code.

The core purpose is to help you build production-oriented agent workflows (tool-using, structured-output, and stateful agents) while keeping API credentials on your server.

Key Features

  • Programmatic agent creation for Node.js: Create agents from your code using a JavaScript/TypeScript-friendly API.
  • Multi-provider LLM support via unified model strings: Use a consistent model format to switch between providers such as OpenAI GPT-4o, Anthropic Claude, and Google Gemini.
  • Security model with scope-based permissions: Credentials are stored server-side; tool calls are rendered into HTTP requests so the LLM receives tool results and not raw API keys or tokens.
  • Declarative tools and integrations: Define tools as data (endpoint, auth template, and schemas) to avoid writing boilerplate HTTP code.
  • Structured output with Zod schemas: Provide a Zod schema and receive a validated, typed object rather than manually parsing LLM responses.
  • Scheduling and cron-style automation: Run agents on interval schedules or cron-style daily schedules with timezone support.
  • Event-driven visibility into the reasoning loop: Hook into steps of the loop to observe tool calls, plan updates, LLM outputs, and final results.
  • Persistent memory with callbacks: Maintain long-term and short-term memory stores and use onUpdate callbacks to persist memory externally; agents can update memory mid-run.

How to Use OpenMolt

  1. Install the package in your Node.js project.
  2. Initialize OpenMolt with your chosen LLM provider configuration (for example, set the OpenAI API key via an environment variable).
  3. Create an agent with a name, model identifier (using the unified model string format), and instructions.
  4. Run the agent with a user prompt from your application code.

Example flow shown on the site:

  • Install: npm install openmolt
  • Create and run an agent: instantiate OpenMolt, call createAgent(...), then agent.run('...') and log the result.
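That flow can be sketched as follows. Because the exact signatures are not documented here, the snippet stubs a minimal `OpenMolt` class with the `createAgent`/`run` shape the site describes; in real usage you would `import { OpenMolt } from 'openmolt'`, and the option names should be treated as assumptions.

```typescript
// Minimal mock reproducing the call shape described above, so the
// flow is runnable without the package or a live LLM. A real agent
// would call the configured model; this one just echoes the prompt.
class OpenMolt {
  constructor(private config: { apiKey?: string }) {}

  createAgent(opts: { name: string; model: string; instructions: string }) {
    return {
      run: async (prompt: string) => `[${opts.name}] handled: ${prompt}`,
    };
  }
}

async function main() {
  const molt = new OpenMolt({ apiKey: process.env.OPENAI_API_KEY });
  const agent = molt.createAgent({
    name: 'reporter',
    model: 'openai/gpt-4o', // unified model string: provider/model
    instructions: 'Summarize metrics into a short daily report.',
  });
  const result = await agent.run('Summarize yesterday signups.');
  console.log(result);
}

main();
```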

Use Cases

  • Daily reporting automation: Schedule an agent to pull metrics (e.g., from Stripe) every morning, generate a summary, and post the report to a Slack channel.
  • Multi-step content pipelines: Use an agent to write content based on a strategy description, generate related assets, and save outputs to disk as part of an end-to-end workflow.
  • Email drafting with human review: Draft replies to incoming Gmail messages based on provided guidelines, while keeping review and sending within Gmail.
  • Developer workflow automation: Trigger GitHub-related tasks such as triaging issues, applying labels, posting release notes to Slack, and generating changelogs as part of CI/CD.
  • Commerce operations and reporting: Monitor Shopify orders, update records in Airtable, send notifications via Twilio, and publish daily revenue summaries to a Notion dashboard.
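Multi-step pipelines like these are easier to debug with the event hooks mentioned under Key Features. A rough sketch of the idea using Node's built-in EventEmitter; the event names (`plan_update`, `tool_call`, `final`) and payload shapes are illustrative, not OpenMolt's actual hook API.

```typescript
import { EventEmitter } from 'node:events';

// Illustrative reasoning-loop events; real hook names may differ.
type StepEvent =
  | { type: 'plan_update'; plan: string[] }
  | { type: 'tool_call'; tool: string; args: unknown }
  | { type: 'final'; output: string };

class LoopObserver extends EventEmitter {
  emitStep(step: StepEvent) {
    this.emit(step.type, step);
  }
}

const observer = new LoopObserver();
const log: string[] = [];

// Subscribe to individual steps of the (simulated) agent loop.
observer.on('tool_call', (s: Extract<StepEvent, { type: 'tool_call' }>) =>
  log.push(`tool: ${s.tool}`));
observer.on('final', (s: Extract<StepEvent, { type: 'final' }>) =>
  log.push(`done: ${s.output}`));

// Simulated run of a daily-report pipeline.
observer.emitStep({ type: 'plan_update', plan: ['fetch metrics', 'summarize'] });
observer.emitStep({ type: 'tool_call', tool: 'stripe_metrics', args: {} });
observer.emitStep({ type: 'final', output: 'Revenue up 4% day-over-day.' });

console.log(log); // ['tool: stripe_metrics', 'done: Revenue up 4% day-over-day.']
```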

FAQ

What does OpenMolt mean by “programmatic AI agents”?

OpenMolt is designed so you define agents, tools, and workflows from your Node.js/TypeScript codebase, rather than configuring and running agents through a separate UI.

Can I use multiple LLM providers with the same agent code?

The documentation states that OpenMolt supports multiple LLM providers (including OpenAI, Anthropic Claude, and Google Gemini) through a unified model string format, so you can switch providers by changing only the model string rather than your agent code.

How does OpenMolt handle API keys and agent access to tools?

OpenMolt uses a scope-based permission model: credentials are stored server-side and inserted into HTTP requests via Liquid templates. The LLM receives tool results (tool outputs) rather than raw API keys or tokens.

What kind of outputs can my agent return?

OpenMolt supports structured output using Zod schemas; you can provide a schema and receive a validated, typed object.
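The mechanism behind this can be sketched without the library: validate the model's raw JSON against a schema and return a typed object, or fail loudly. The validator below is hand-rolled in place of Zod to keep the snippet dependency-free; with Zod the equivalent would be `z.object({ title: z.string(), score: z.number() }).parse(JSON.parse(raw))`.

```typescript
// Dependency-free stand-in for the Zod pattern: parse untrusted LLM
// output into a typed object, or throw instead of propagating bad data.
type Report = { title: string; score: number };

function parseReport(raw: string): Report {
  const data: unknown = JSON.parse(raw);
  if (
    typeof data === 'object' && data !== null &&
    typeof (data as any).title === 'string' &&
    typeof (data as any).score === 'number'
  ) {
    return { title: (data as any).title, score: (data as any).score };
  }
  throw new Error('LLM output did not match the Report schema');
}

// Valid model output parses into a typed value...
const ok = parseReport('{"title":"Daily summary","score":0.92}');
console.log(ok.title); // "Daily summary"

// ...while malformed output is rejected at the boundary.
try {
  parseReport('{"title":"Daily summary"}');
} catch (e) {
  console.log('rejected malformed output');
}
```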

Does OpenMolt support recurring runs and automation?

Yes. It supports scheduling with interval-based runs and cron-style daily schedules, including timezone support.
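A cron-style daily schedule ultimately reduces to computing the next run time. A minimal sketch in UTC; the schedule shape is an assumption, and real schedulers (including the timezone support the docs mention) also handle named zones and DST, which are omitted here for clarity.

```typescript
// Compute the next occurrence of a daily HH:MM schedule in UTC.
function nextDailyRun(hour: number, minute: number, now: Date): Date {
  const next = new Date(Date.UTC(
    now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate(), hour, minute, 0, 0,
  ));
  if (next <= now) {
    // Today's slot already passed; schedule for tomorrow.
    next.setUTCDate(next.getUTCDate() + 1);
  }
  return next;
}

const now = new Date(Date.UTC(2024, 0, 15, 10, 30)); // 10:30 UTC, Jan 15
console.log(nextDailyRun(9, 0, now).toISOString());  // 2024-01-16T09:00:00.000Z
console.log(nextDailyRun(12, 0, now).toISOString()); // 2024-01-15T12:00:00.000Z
```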

Alternatives

  • Low-code agent workflow platforms: Tools that provide visual builders for integrating LLMs, prompts, and actions. They can be faster to prototype but typically shift configuration away from your application code.
  • General-purpose workflow/orchestration tools with LLM calls: Alternatives that focus on building workflows (steps, scheduling, retries) while you implement LLM/tool calling yourself. Compared to OpenMolt, you may need more glue code for structured output, tool definitions, and memory patterns.
  • Open-source agent frameworks in other ecosystems: Agent libraries in Python or other languages that provide similar concepts (tools, memory, structured outputs). Differences typically come down to language/runtime integration (Node.js vs other stacks) and the level of built-in integrations and security patterns.
  • Custom-built tool-calling services: Building your own agent runner and tool registry may provide maximum control but usually requires more engineering effort for scheduling, structured output validation, and memory persistence.