Build fails during install
What you see: Build fails during the install phase with errors related to a missing lock file or pyproject.toml.
Why it happens: Alpic’s default install commands expect a lock file in your repository. Without one, the install step fails.
How to fix it:
- Node.js
- Python (requirements.txt)
Run your package manager’s install command locally (e.g., pnpm install) to generate a lock file, then commit it (pnpm-lock.yaml, yarn.lock, or package-lock.json) to your repository. This is the recommended approach. Alternatively, override the install command in your alpic.json.

Custom HTTP endpoints are not exposed
Alpic only routes MCP protocol traffic and standard OAuth endpoints. Custom REST API routes on your server are not reachable from the outside.
What you see: HTTP requests to custom paths like /api/search or /rag-stream return 404.
Why it happens: Alpic acts as an MCP gateway, not a generic HTTP proxy. It only forwards MCP protocol messages to your server. The exposed paths are /mcp (with / as an alias) and /assets/*.
How to fix it: You have two options:
- Convert your endpoint into an MCP tool. The tool is invoked through the standard MCP protocol and has full access to your server’s logic.
- Call the endpoint logic directly from an existing tool. If you already have a tool that needs data from your endpoint, import and call the underlying function directly instead of making an HTTP request.
Tools converted this way remain callable from app widgets: use window.openai.callTool(name, args) in ChatGPT, or the portable tools/call JSON-RPC method over postMessage in Claude. No custom HTTP endpoint is needed.
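As a sketch of the second option, suppose the logic behind a hypothetical /api/search route lives in its own function; an MCP tool can then call it directly instead of making an HTTP request. All names here (searchDocuments, DOCS) are illustrative, not part of Alpic's API:

```typescript
// Shared logic that previously backed the (now unreachable) /api/search route.
type Doc = { id: number; title: string };

const DOCS: Doc[] = [
  { id: 1, title: "Getting started" },
  { id: 2, title: "Hosting assets" },
];

// Extracted into a plain function so any caller can use it without HTTP.
export function searchDocuments(query: string): Doc[] {
  const q = query.toLowerCase();
  return DOCS.filter((d) => d.title.toLowerCase().includes(q));
}

// Inside an MCP tool handler, call the function directly instead of fetch():
//
//   server.registerTool("search", { /* schema */ }, async ({ query }) => ({
//     content: [{ type: "text", text: JSON.stringify(searchDocuments(query)) }],
//   }));
```

The key point is that the tool handler and the former route handler share one plain function, so no network round-trip (and no exposed endpoint) is involved.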
Assets return 404
What you see: Static files (HTML, images, CSS, JavaScript) referenced from your MCP server or ChatGPT App return 404.
Why it happens: Assets are served from a separate CDN path, not from your server runtime. They need to be placed in a specific directory to be picked up at build time.
How to fix it: Place your static files in an assets/ directory at your project root. They become available at /assets/<filename>. Files placed in <buildOutputDir>/assets/ are also picked up. If both locations contain files, built assets take priority on conflict.
Reference these files in your MCP tools and resources using the relative path /assets/<filename>.
Learn more about asset hosting in our Hosting Assets guide.
Deployment doesn’t reflect my latest changes
What you see: Pushes to your branch don’t trigger a deployment, your latest changes aren’t live, or the build uses the wrong package.json / pyproject.toml.
Why it happens: Each environment tracks a specific Git branch. Pushes to other branches are silently ignored; no deployment is triggered. Similarly, if your project lives in a subdirectory (monorepo) and the root directory is misconfigured, the build reads configuration from the wrong location.
How to fix it:
- Check your environment’s tracked branch in Environments > [your environment]. Click the edit icon next to the branch name to update it.
- Check the root directory in Settings > Build Settings. For monorepos, set it to the subdirectory containing your MCP server (e.g.,
packages/my-mcp-server).
New environment crashes on startup
What you see: Your new environment deploys but crashes at runtime with missing API keys, database URLs, or other configuration errors.
Why it happens: Environment variables are per-environment, not shared across a project. When you create a new environment, it starts with an empty set of variables, even if your production environment has them all configured.
How to fix it: When creating a new environment, manually add all required environment variables for that environment. You can use different values per environment (e.g., a staging API key vs a production one). A few things to keep in mind:
- Runtime changes are instant: Updating an environment variable takes effect immediately, no redeploy needed.
- Build-time variables need a redeploy: If a variable is used during the install or build step (e.g., a private registry token), you need to redeploy after changing it.
- 4 KB total limit: The combined size of all keys and values is capped at 4 KB per environment.
For more on managing environments, see Environments.
Tool calls fail with a timeout error
What you see: Tool calls return an Internal Server Error or stop responding after approximately 30 seconds.
Why it happens: Each tool invocation has a 30-second timeout. Any single tool call that exceeds this limit is terminated.
How to fix it:
- Optimize the tool: Reduce API call latency, use smaller models, cache results, or paginate large responses.
- Use long-running tasks: For operations that genuinely need more time (data processing, complex API orchestrations, ML inference), implement your tool using the MCP Tasks API. Long-running tasks run on a separate compute path with a default TTL of up to 6 hours.
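One way to act on the caching suggestion is a small in-memory TTL cache wrapped around the slow call, so repeated invocations with the same arguments stay well under the 30-second limit. This is a generic sketch, not an Alpic API; the class name and TTL value are invented:

```typescript
// Minimal in-memory cache with per-entry expiry.
type Entry<T> = { value: T; expiresAt: number };

export class TtlCache<T> {
  private store = new Map<string, Entry<T>>();

  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const e = this.store.get(key);
    if (!e) return undefined;
    if (Date.now() > e.expiresAt) {
      this.store.delete(key); // expired: evict and miss
      return undefined;
    }
    return e.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  // Wrap a slow async call: only compute on a cache miss.
  async getOrCompute(key: string, compute: () => Promise<T>): Promise<T> {
    const hit = this.get(key);
    if (hit !== undefined) return hit;
    const value = await compute();
    this.set(key, value);
    return value;
  }
}
```

A tool handler would call `cache.getOrCompute(queryKey, () => slowUpstreamCall())` so only the first call per key pays the full latency. Note the cache lives in process memory, so it resets on each cold start.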
Deployment fails with 404 on /mcp
What you see: Deployment fails with “Server returned status 404 on /mcp after 3 attempts” in the build logs.
Why it happens: Alpic expects your MCP server to listen on /mcp (Streamable HTTP) or /sse (SSE). If your server uses a different path (e.g., /), the deployment health check fails.
How to fix it:
- TypeScript
- Python
Ensure your server uses the default MCP SDK paths. If you’re using Express or another framework, mount the MCP handler on
/mcp.

Server shows as “Public” despite OAuth configuration
Your server has OAuth configured locally, but the Alpic dashboard shows “Public” and clients can’t authenticate.
What you see: The Authentication section in your project settings shows “Public”. Clients like ChatGPT report that OAuth is not enabled for your server.
Why it happens: Alpic detects OAuth at deploy time by sending an unauthenticated initialize request to your server. If your server returns 200 instead of 401, or returns 401 without the correct WWW-Authenticate header, Alpic classifies it as public.
How to fix it: Your server must return the following on unauthenticated requests:
- HTTP 401 status code
- A WWW-Authenticate header referencing your OAuth Protected Resource Metadata
- An endpoint at /.well-known/oauth-protected-resource, served locally, returning valid OAuth Protected Resource Metadata
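Under the MCP authorization model (which builds on RFC 9728, OAuth 2.0 Protected Resource Metadata), the unauthenticated 401 challenge typically looks like the following sketch; the domain is a placeholder for your deployment URL:

```
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer resource_metadata="https://your-server.example.com/.well-known/oauth-protected-resource"
```

The resource_metadata URL must resolve to your Protected Resource Metadata document so clients can discover your authorization server.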
After making changes, redeploy: OAuth status is only evaluated during deployment.
For a complete guide on configuring OAuth with Alpic, see OAuth Setup.
“Server already initialized” error
What you see: The first tool call succeeds, but subsequent calls fail with Invalid Request: Server already initialized.
Why it happens: Your server creates a single StreamableHTTPServerTransport at module scope and reuses it across requests. The MCP SDK rejects the second initialization attempt.
How to fix it: Create a fresh transport per incoming HTTP request:
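A minimal sketch with Express and the TypeScript MCP SDK, assuming the SDK's stateless Streamable HTTP pattern; the server name and tool registrations are placeholders:

```typescript
import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const app = express();
app.use(express.json());

app.post("/mcp", async (req, res) => {
  // A fresh server + transport per request avoids the
  // "Server already initialized" rejection: nothing is reused across requests.
  const server = new McpServer({ name: "example-server", version: "1.0.0" });
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined, // stateless mode: no session reuse
  });

  // Clean up when the response closes.
  res.on("close", () => {
    transport.close();
    server.close();
  });

  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(process.env.PORT ?? 3000);
```

The anti-pattern is constructing the transport once at module scope: the second client to connect then replays initialize against an already-initialized server and is rejected.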
Deployment fails after a successful build
What you see: The build completes successfully, but the deployment fails during the post-build health check with a timeout error.
Why it happens: After building, Alpic verifies your server starts correctly. If your server takes too long to boot (approximately 10 seconds), the health check times out. Common causes: heavy dependencies (PyTorch, sentence-transformers, Playwright) or loading large models at import time.
How to fix it:
- Trim dependencies: Remove non-critical packages. Every megabyte adds to cold start time.
- Defer expensive initialization: Don’t load ML models or establish database connections at import time. Do it lazily on the first tool call instead.
- Pre-download assets at build time: If your server needs large files (models, datasets), download them during the install command, not at runtime.
- Minimize import chains: In Python, importing torch or transformers at the top level triggers heavy initialization. Use lazy imports inside tool functions.
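The deferred-initialization advice can be sketched as caching a promise so the expensive setup runs exactly once, on first use (shown in TypeScript; the same idea applies to lazy imports in Python). loadModel here is a placeholder for whatever heavy startup step your server performs:

```typescript
// Placeholder for an expensive startup step (model load, DB connect, ...).
async function loadModel(): Promise<{ predict: (x: string) => string }> {
  return { predict: (x) => `echo:${x}` };
}

// Cache the promise, not the value: concurrent first calls share one load,
// and nothing heavy runs at import time.
let modelPromise: ReturnType<typeof loadModel> | null = null;

export function getModel() {
  if (!modelPromise) modelPromise = loadModel();
  return modelPromise;
}

// Tool handlers await getModel() on demand, keeping boot time fast.
export async function predictTool(input: string): Promise<string> {
  const model = await getModel();
  return model.predict(input);
}
```

Because the module only defines functions at import time, the server boots quickly and the first tool call pays the one-time initialization cost instead.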
Still stuck?
If none of the above resolves your issue:
- Check the build logs in your Alpic Dashboard under Deployments for your environment
- Reach out on Discord for community help
- Email support@alpic.ai with your project ID and a description of the issue