This page covers the most common issues you may encounter when deploying and running your MCP server on Alpic. Each section describes what you see, why it happens, and how to fix it.
Before diving in: verify your project builds and runs locally first. Most deployment issues stem from differences between your local environment and Alpic’s serverless runtime.

Build fails during install

What you see: Build fails during the install phase with errors related to a missing lock file or pyproject.toml. Why it happens: Alpic’s default install commands expect a lock file in your repository. Without one, the install step fails. How to fix it:
Run your package manager’s install command locally (e.g., pnpm install) to generate a lock file, then commit it (pnpm-lock.yaml, yarn.lock, or package-lock.json) to your repository. This is the recommended approach. Alternatively, override the install command in your alpic.json:
{
  "$schema": "https://assets.alpic.ai/alpic.json",
  "installCommand": "npm install"
}
You can also set the install command from Settings > Build Settings in the dashboard.

Custom HTTP endpoints are not exposed

Alpic only routes MCP protocol traffic and standard OAuth endpoints; custom REST API routes on your server are not reachable from the outside. What you see: HTTP requests to custom paths like /api/search or /rag-stream return 404. Why it happens: Alpic acts as an MCP gateway, not a generic HTTP proxy. It only forwards MCP protocol messages to your server; the only exposed paths are /mcp (with / as an alias) and /assets/*. How to fix it: You have two options:
  1. Convert your endpoint into an MCP tool. The tool is invoked through the standard MCP protocol and has full access to your server’s logic.
- @app.get("/api/search")
- def search(q: str):
-     return {"results": do_search(q)}

+ @mcp.tool()
+ def search(q: str) -> list[str]:
+     """Search the knowledge base."""
+     return do_search(q)
  2. Call the endpoint logic directly from an existing tool. If you already have a tool that needs data from your endpoint, import and call the underlying function directly instead of making an HTTP request.
Note: for MCP apps, you can invoke tools directly from your widget, using window.openai.callTool(name, args) for ChatGPT, or the portable tools/call JSON-RPC method over postMessage for Claude. No custom HTTP endpoint needed.
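For the portable Claude path, the tools/call request is a plain JSON-RPC 2.0 message sent over postMessage. A minimal sketch of constructing one (the buildToolCallMessage helper and id counter are illustrative, not part of any SDK):

```typescript
// Sketch: build a portable JSON-RPC 2.0 tools/call request, as sent
// over postMessage. The helper name and id counter are illustrative.
let nextId = 1;

interface ToolCallMessage {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function buildToolCallMessage(
  name: string,
  args: Record<string, unknown>
): ToolCallMessage {
  return {
    jsonrpc: "2.0",
    id: nextId++,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// In a widget: window.parent.postMessage(buildToolCallMessage("search", { q: "mcp" }), "*");
```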

Assets return 404

What you see: Static files (HTML, images, CSS, JavaScript) referenced from your MCP server or ChatGPT App return 404. Why it happens: Assets are served from a separate CDN path, not from your server runtime. They need to be placed in a specific directory to be picked up at build time. How to fix it: Place your static files in an assets/ directory at your project root. They become available at:
https://<your-server>.alpic.live/assets/<filename>
For Node.js projects, assets generated during the build step in <buildOutputDir>/assets/ are also picked up. If both locations contain files, built assets take priority on conflict. Reference these files in your MCP tools and resources using the relative path /assets/<filename>.
Learn more about asset hosting in our Hosting Assets guide.
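As a quick sanity check, both URL forms can be derived from the filename alone. A small sketch (the server subdomain is a placeholder for your own):

```typescript
// Sketch: derive the relative and absolute URLs for a file placed in
// assets/. The subdomain argument is a placeholder for your server's.
function assetPath(filename: string): string {
  return `/assets/${filename}`;
}

function assetUrl(serverSubdomain: string, filename: string): string {
  return `https://${serverSubdomain}.alpic.live${assetPath(filename)}`;
}
```

Use assetPath when referencing files from tools and resources, and assetUrl when an absolute URL is needed (for example, in HTML served to a ChatGPT App).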

Deployment doesn’t reflect my latest changes

What you see: Pushes to your branch don’t trigger a deployment, your latest changes aren’t live, or the build uses the wrong package.json / pyproject.toml. Why it happens: Each environment tracks a specific Git branch. Pushes to other branches are silently ignored; no deployment is triggered. Similarly, if your project lives in a subdirectory (monorepo) and the root directory is misconfigured, the build reads configuration from the wrong location. How to fix it:
  1. Check your environment’s tracked branch in Environments > [your environment]. Click the edit icon next to the branch name to update it.
  2. Check the root directory in Settings > Build Settings. For monorepos, set it to the subdirectory containing your MCP server (e.g., packages/my-mcp-server).

New environment crashes on startup

What you see: Your new environment deploys but crashes at runtime with missing API keys, database URLs, or other configuration errors. Why it happens: Environment variables are per-environment, not shared across a project. When you create a new environment, it starts with an empty set of variables, even if your production environment has them all configured. How to fix it: When creating a new environment, manually add all required environment variables for that environment. You can use different values per environment (e.g., a staging API key vs a production one). A few things to keep in mind:
  • Runtime changes are instant: Updating an environment variable takes effect immediately, no redeploy needed.
  • Build-time variables need a redeploy: If a variable is used during the install or build step (e.g., a private registry token), you need to redeploy after changing it.
  • 4 KB total limit: The combined size of all keys and values is capped at 4 KB per environment.
For more on managing environments, see Environments.
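One way to surface this class of error early is to validate required variables at startup and fail with an explicit message instead of a cryptic runtime crash. A minimal sketch (the requireEnv helper and variable names are illustrative):

```typescript
// Sketch: fail fast with a clear message when an environment is
// missing required variables. Helper and names are illustrative.
function requireEnv(
  names: string[],
  env: Record<string, string | undefined> = process.env
): Record<string, string> {
  const missing = names.filter((n) => !env[n]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
  return Object.fromEntries(names.map((n) => [n, env[n] as string]));
}

// e.g. const { API_KEY, DATABASE_URL } = requireEnv(["API_KEY", "DATABASE_URL"]);
```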

Tool calls fail with a timeout error

What you see: Tool calls return an Internal Server Error or stop responding after approximately 30 seconds. Why it happens: Each tool invocation has a 30-second timeout. Any single tool call that exceeds this limit is terminated. How to fix it:
  • Optimize the tool: Reduce API call latency, use smaller models, cache results, or paginate large responses.
  • Use long-running tasks: For operations that genuinely need more time (data processing, complex API orchestrations, ML inference), implement your tool using the MCP Tasks API. Long-running tasks run on a separate compute path with a default TTL of up to 6 hours.
The 30-second limit applies to each individual tool invocation, not to the entire MCP session. If you’re hitting this limit on a specific tool, consider breaking it into smaller, sequential tool calls.
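Inside a tool, you can also enforce your own shorter budget so a slow upstream call fails with a clear error instead of being killed at the 30-second mark. A sketch using Promise.race (withTimeout is a hypothetical helper, not part of the MCP SDK):

```typescript
// Sketch: race a slow operation against a deadline shorter than the
// platform's 30-second limit. withTimeout is a hypothetical helper.
async function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`timed out after ${ms} ms`)),
      ms
    );
  });
  try {
    return await Promise.race([work, deadline]);
  } finally {
    clearTimeout(timer);
  }
}
```

Failing at, say, 25 seconds with a descriptive error gives the client a message it can act on, rather than an opaque Internal Server Error.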

Deployment fails with 404 on /mcp

What you see: Deployment fails with Server returned status 404 on /mcp after 3 attempts in build logs. Why it happens: Alpic expects your MCP server to listen on /mcp (Streamable HTTP) or /sse (SSE). If your server uses a different path (e.g., /), the deployment health check fails. How to fix it:
Ensure your server uses the default MCP SDK paths:
// Streamable HTTP, listens on /mcp by default
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
If you’re using Express or another framework, mount the MCP handler on /mcp:
app.post("/mcp", handleMcpRequest);

Server shows as “Public” despite OAuth configuration

Your server has OAuth configured locally, but the Alpic dashboard shows “Public” and clients can’t authenticate. What you see: The Authentication section in your project settings shows “Public”. Clients like ChatGPT report that OAuth is not enabled for your server. Why it happens: Alpic detects OAuth at deploy time by sending an unauthenticated initialize request to your server. If your server returns 200 instead of 401, or returns 401 without the correct WWW-Authenticate header, Alpic classifies it as public. How to fix it: Your server must return the following on unauthenticated requests:
  1. HTTP 401 status code
  2. A WWW-Authenticate header with this format:
WWW-Authenticate: Bearer resource_metadata="http://localhost:<port>/.well-known/oauth-protected-resource"
Your server must also expose /.well-known/oauth-protected-resource locally, returning valid OAuth Protected Resource Metadata. After making changes, redeploy; OAuth status is only evaluated during deployment.
For a complete guide on configuring OAuth with Alpic, see OAuth Setup.
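A minimal sketch of the response shape the probe checks for, using Node’s built-in http module (the port is illustrative, and a real server would only answer 401 after its actual auth check fails):

```typescript
import { createServer } from "node:http";

// Sketch: the 401 + WWW-Authenticate shape Alpic's OAuth detection
// expects on unauthenticated requests. Port is illustrative; a real
// server would check credentials before returning 401.
const PORT = 3000;

function unauthorizedHeader(port: number): string {
  return `Bearer resource_metadata="http://localhost:${port}/.well-known/oauth-protected-resource"`;
}

const server = createServer((req, res) => {
  res.writeHead(401, { "WWW-Authenticate": unauthorizedHeader(PORT) });
  res.end();
});
// server.listen(PORT);
```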

“Server already initialized” error

What you see: The first tool call succeeds, but subsequent calls fail with Invalid Request: Server already initialized. Why it happens: Your server creates a single StreamableHTTPServerTransport at module scope and reuses it across requests. The MCP SDK rejects the second initialization attempt. How to fix it: Create a fresh transport per incoming HTTP request:
import { randomUUID } from "node:crypto";

// Wrong, reuses transport across warm Lambda invocations
const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: () => randomUUID() });

// Right, fresh transport per request
app.post("/mcp", async (req, res) => {
  const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: () => randomUUID() });
  const server = getServer();
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});
Alternatively, use stdio transport and let Alpic handle transport management entirely. With stdio, you don’t manage any HTTP server or transport; you just expose your tools.

Deployment fails after a successful build

What you see: The build completes successfully, but the deployment fails during the post-build health check with a timeout error. Why it happens: After building, Alpic verifies your server starts correctly. If your server takes too long to boot (approximately 10 seconds), the health check times out. Common causes: heavy dependencies (PyTorch, sentence-transformers, Playwright) or loading large models at import time. How to fix it:
  1. Trim dependencies: Remove non-critical packages. Every megabyte adds to cold start time.
  2. Defer expensive initialization: Don’t load ML models or establish database connections at import time. Do it lazily on the first tool call instead.
  3. Pre-download assets at build time: If your server needs large files (models, datasets), download them during the install command, not at runtime.
{
  "$schema": "https://assets.alpic.ai/alpic.json",
  "installCommand": "uv sync && uv run python download_models.py"
}
  4. Minimize import chains: In Python, importing torch or transformers at the top level triggers heavy initialization. Use lazy imports inside tool functions.
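Deferred initialization (point 2 above) can be sketched as a memoized async getter, so the expensive setup runs on the first tool call rather than at module load; loadHeavyClient here stands in for your real model or database setup:

```typescript
// Sketch: lazy, memoized initialization so expensive setup happens on
// the first tool call, not at module load time. loadHeavyClient is a
// stand-in for your real model/DB setup.
type HeavyClient = { ready: boolean };

let clientPromise: Promise<HeavyClient> | null = null;

async function loadHeavyClient(): Promise<HeavyClient> {
  // e.g. open a database connection or load model weights here
  return { ready: true };
}

function getClient(): Promise<HeavyClient> {
  if (!clientPromise) {
    clientPromise = loadHeavyClient();
  }
  return clientPromise;
}
```

Memoizing the promise (rather than the resolved value) also ensures concurrent first calls share a single initialization instead of triggering it twice.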

Still stuck?

If none of the above resolves your issue:
  • Check the build logs in your Alpic Dashboard under Deployments for your environment
  • Reach out on Discord for community help
  • Email support@alpic.ai with your project ID and a description of the issue