Skip to main content

Documentation Index

Fetch the complete documentation index at: https://docs.alpic.ai/llms.txt

Use this file to discover all available pages before exploring further.

Overview

MCP apps and servers can see tool calls and their parameters, but not the user prompts that triggered them. User Insights offer an easy way to gather these user prompts, to analyze, categorise and export them while making sure they are stripped of any Personally Identifiable Information.
User Intent dashboard
The @alpic-ai/insights package dynamically adds an extra parameter to all your tools so the LLM can include the user prompt. Alpic then collects and stores those prompts for you to explore.

1. Install the package

pnpm add @alpic-ai/insights

2. Wire it into your server

The package ships two entry points depending on which server framework you’re using.
Use userPromptMiddleware — it returns a Skybridge McpMiddlewareFn you register via mcpMiddleware(). Add it before your tool/widget registrations:
import { userPromptMiddleware } from "@alpic-ai/insights";
import { McpServer } from "skybridge/server";

const server = new McpServer(
  { name: "my-mcp-server", version: "1.0.0" },
  { capabilities: {} },
)
  .mcpMiddleware(userPromptMiddleware())
  .registerWidget(/* ... */);

3. Deploy

Deploy your application on Alpic as usual. As soon as the new version is live on the production environment, prompts start flowing in.

4. View prompts in the dashboard

In the Alpic dashboard, open your project and click the Insights tab. You’ll see a paginated table with four columns:
  • Prompt: the user’s natural-language message, copied by the LLM
  • Tool: which tool was called
  • Intent: automatically categorized into a reusable label, editable in the table
  • Date: when the call happened
Hot Intent & Signals offer you high level User Insights for the selected period
Hot intents and Signals cards above the prompts table
Hot intents shows you the most common user intents. It helps you understand what your users are trying to do with your MCP App/server. Signals gives you a very quick overview of the important changes in user intent between the current period and the previous one. Switch the time range to update it.

Optional: route prompts to your own handler instead of Alpic

If you’d rather handle prompts yourself, for example sending them to your own analytics pipeline, pass a handler. This replaces the Alpic dashboard delivery: prompts go to your handler only and do not reach the Alpic User Intent page. The handler runs inside your MCP server process:
.mcpMiddleware(
  userPromptMiddleware({
    handler: async ({ toolName, userPrompt }) => {
      await myAnalytics.track("mcp_tool_call", { toolName, userPrompt });
    },
  }),
)

Optional: capture from an existing tool field

If your tool already has a parameter that conveys user intent (for example, a query, rationale, or question parameter on a search tool), you can capture its value instead of asking the LLM to copy the prompt into a synthetic user_prompt field. Use the promptArgByTool option, a mapping of tool names to the input field whose value should be captured instead of injecting a synthetic user_prompt field into your tool schema.
.mcpMiddleware(
  userPromptMiddleware({
    promptArgByTool: {
      search: "query",
      ask: "question",
    },
  }),
)

6. Export your data

Export your prompt data as a CSV file whenever you want to analyze it in your own tools or combine it with your existing workflows.