Frequently Asked Questions

General

Why would I want to use the Nautobot MCP Server?

It lets you query your network infrastructure using natural language instead of writing API calls or navigating the UI — ask a question, get an answer directly from your Source of Truth.

Is this different than the NautobotGPT MCP Server?

Yes! The Nautobot MCP Server connects AI assistants directly to a Nautobot instance, while the NautobotGPT MCP Server routes queries through NautobotGPT for processing and inference. This Nautobot MCP Server is completely decoupled from NautobotGPT, making it an ideal alternative if you cannot or do not want to use NautobotGPT but still want a way to chat with your Nautobot data using your own LLM or AI assistant.

What is the Model Context Protocol (MCP)?

MCP is an open standard that defines how AI assistants communicate with external tools and data sources. It provides a consistent interface so that any MCP-compatible client — whether Claude Desktop, VS Code, Cursor, or others — can connect to any MCP server without custom integration work. The Nautobot MCP Server implements this standard to expose your Nautobot data to AI assistants.

What version of Nautobot is required?

The Nautobot MCP Server requires Nautobot 2.x or higher with API access enabled. Your Nautobot instance must be reachable over the network from wherever the MCP server is deployed, and you need a valid API token with appropriate permissions.


Setup

What is required to configure the Nautobot MCP Server?

You must install the package via pip from Network to Code's private Artifactory, then follow the documentation to configure environment variables (Nautobot URL, API token, transport settings, etc.) and deploy it wherever suits your needs — on a server, locally, or alongside your existing Nautobot infrastructure. Both deploying the server and integrating it with MCP clients are self-service.
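As a concrete sketch, installation and configuration might look like the following. The Artifactory index URL, the package name, and the `NAUTOBOT_TOKEN` variable name are illustrative assumptions; substitute the values given in the official documentation.

```shell
# Install from Network to Code's private Artifactory.
# The index URL and package name below are placeholders.
pip install nautobot-mcp-server \
  --index-url "https://<your-artifactory-host>/api/pypi/pypi/simple"

# Configure the server via environment variables.
# NAUTOBOT_BASE_URL and MCP_TRANSPORT appear elsewhere in these docs;
# NAUTOBOT_TOKEN is an assumed name for the API token variable.
export NAUTOBOT_BASE_URL="https://nautobot.example.com"
export NAUTOBOT_TOKEN="<your-api-token>"
export MCP_TRANSPORT="streamable-http"   # or "sse" for legacy clients
```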

Do I need to set up TLS/HTTPS?

The MCP server itself runs on plain HTTP (port 8000 by default). For production deployments, you should place a reverse proxy — such as nginx, Caddy, Apache, or Traefik — in front of the server to handle TLS termination. This is the same pattern used by most web applications and gives you full control over certificate management, renewal, and cipher configuration.
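A minimal reverse-proxy setup with nginx might look like the sketch below. The hostname, certificate paths, and config file location are assumptions to adapt to your environment; the upstream port 8000 is the server's documented default.

```shell
# Sketch: write a minimal nginx site config that terminates TLS and
# forwards to the MCP server on localhost:8000 (the default port).
cat > /etc/nginx/conf.d/nautobot-mcp.conf <<'EOF'
server {
    listen 443 ssl;
    server_name mcp.example.com;

    ssl_certificate     /etc/ssl/certs/mcp.example.com.crt;
    ssl_certificate_key /etc/ssl/private/mcp.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        # Streaming transports need buffering disabled so responses
        # are not held back by the proxy.
        proxy_buffering off;
        proxy_read_timeout 300s;
    }
}
EOF
nginx -t && systemctl reload nginx   # validate config, then reload
```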

Where should I deploy the Nautobot MCP Server?

The server should be deployed somewhere with reliable network access to your Nautobot instance. Common options include the same server or VM that runs Nautobot, a separate host in the same network, or a container in your existing orchestration platform. It is lightweight and does not need to be co-located with Nautobot — it just needs to reach Nautobot's API over the network.

Can I run the Nautobot MCP Server locally on my laptop?

Yes. If you have network access to a Nautobot instance (e.g., over VPN or on the same network), you can install and run the server directly on your laptop for development or testing. Just install the package, set your environment variables, and start it — your MCP client connects to localhost and TLS is not required since traffic never leaves your machine. This is a great way to try it out before setting up a shared production deployment.
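A local run could look like this sketch. It assumes the package installs a console entry point; the command name `nautobot-mcp-server` is illustrative, so check the documentation for the actual invocation.

```shell
# Point the server at a reachable Nautobot instance.
# NAUTOBOT_TOKEN is an assumed name for the API token variable.
export NAUTOBOT_BASE_URL="https://nautobot.example.com"
export NAUTOBOT_TOKEN="<your-api-token>"

# Start the server (command name is a placeholder);
# it listens on http://localhost:8000 by default.
nautobot-mcp-server &

# Point your MCP client at the local streamable-http endpoint:
#   http://localhost:8000/mcp
```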


Security

What security does the Nautobot MCP Server have?

Every request is authenticated by validating the bearer token against Nautobot's API, so users only have access to what their Nautobot RBAC permissions allow. Query safety limits enforce max fields, nesting depth, and result set sizes to prevent resource exhaustion, and per-user concurrency controls limit simultaneous API requests. Encrypted transport (TLS) can be added by placing a reverse proxy in front of the server.

Does my data leave my network?

No. The Nautobot MCP Server runs entirely within your infrastructure and communicates directly with your Nautobot instance over your local network. No data is sent to Network to Code or any other third party by the server. The only external communication is between your MCP client (e.g., Claude Desktop) and its LLM provider, which is separate from and not controlled by this server.

How are API tokens handled?

The server never stores your Nautobot API token. Each time a client connects, the bearer token is validated against Nautobot. When a cached session expires or is evicted, the token must be re-validated on the next connection.

Can users see data they don't have access to in Nautobot?

No. The server authenticates each user with their own Nautobot API token, and all queries are executed using that token. Nautobot's built-in Role-Based Access Control (RBAC) is fully enforced — users can only see and query data their Nautobot permissions allow. The MCP server does not elevate or bypass any permissions.


Compatibility

What tools, applications, LLMs, and systems can I hook this Nautobot MCP Server into?

Any application that supports the Model Context Protocol can connect — this includes Claude Desktop, VS Code (via Copilot or extensions), Cursor, Windsurf, and other MCP-compatible AI coding assistants. The server is LLM-agnostic on the client side since MCP is the standard interface, so as more tools adopt MCP support, they can plug in without changes to the server.

Can I use this Nautobot MCP Server with a local LLM?

Yes, as long as the local LLM is running through a client or framework that supports MCP.

What considerations should be made for which LLM is used?

The quality of results depends heavily on the LLM driving the interaction. Since you choose your own model, there are no guarantees on output quality — a small or underpowered local model may struggle to formulate correct queries or interpret results accurately. We recommend using a leading model from a major AI lab (Anthropic, OpenAI, Google, etc.) for the best experience.

What transport protocols are supported?

The server supports two MCP transport protocols: streamable-http (default, recommended) and sse (Server-Sent Events). Streamable-http uses the /mcp endpoint and offers modern bidirectional streaming with lower latency. SSE uses the /sse endpoint and is available for compatibility with older MCP clients. Set the MCP_TRANSPORT environment variable to choose your transport, and make sure your client configuration points to the matching endpoint path.
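Putting the transport setting and endpoint pairing together, the choice looks like this (port 8000 is the documented default):

```shell
# Default and recommended: streamable-http, served at /mcp
export MCP_TRANSPORT="streamable-http"
# Client endpoint: http://<server>:8000/mcp

# For older MCP clients: SSE, served at /sse
# export MCP_TRANSPORT="sse"
# Client endpoint: http://<server>:8000/sse
```

The transport setting and the endpoint path must agree; a client pointed at /mcp while the server runs in sse mode (or vice versa) will fail to connect.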


Usage

Can the Nautobot MCP Server make changes to my Nautobot data?

Not at this time. The server is currently read-only: it supports querying and retrieving data and metadata, but exposes no mutations or POST, PUT, PATCH, or DELETE operations. Your Nautobot data cannot be modified through this server.

Why am I getting fewer results than expected?

The server enforces a configurable hard cap on the number of objects returned per GraphQL query (MAX_GRAPHQL_LIMIT, default 250). If your query requests more than the limit, the results are automatically clamped. If you need larger result sets, you can increase this limit in the server configuration.


Performance & Limits

What query safety limits are in place?

The server enforces several limits to prevent resource exhaustion. MAX_FIELDS (default 100) limits the number of fields per GraphQL query. MAX_DEPTH (default 5) limits how deeply nested a query can be. MAX_GRAPHQL_LIMIT (default 250) caps the number of objects returned per list query. MAX_PAGE_LIMIT (default 5) limits automatic REST API pagination. These defaults strike a balance between usefulness and protecting your Nautobot instance — you can tune them as needed.
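Assuming these limits are set as environment variables like the other settings described in this FAQ, the defaults can be restated (and tuned) as:

```shell
# Query safety limits -- defaults shown; tune to balance usefulness
# against load on your Nautobot instance.
export MAX_FIELDS=100         # fields per GraphQL query
export MAX_DEPTH=5            # maximum GraphQL nesting depth
export MAX_GRAPHQL_LIMIT=250  # objects returned per list query
export MAX_PAGE_LIMIT=5       # automatic REST API pagination pages
```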


Troubleshooting

I'm getting authentication errors when connecting. What should I check?

The server validates your Nautobot API token by calling Nautobot. If that call fails, your connection is rejected. Verify that your token is correct and has not expired, that NAUTOBOT_BASE_URL points to the right Nautobot instance, and that the server can reach Nautobot over the network. If your Nautobot uses a self-signed certificate, you may need to set NAUTOBOT_VERIFY_SSL=false during testing (though this is not recommended for production).
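One quick way to isolate token problems is to call Nautobot's REST API directly with the same token the MCP server will use. This is a generic check against Nautobot's API root, not a feature of the MCP server itself:

```shell
# Verify the token and base URL directly against Nautobot's REST API.
curl -sS -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Token $NAUTOBOT_TOKEN" \
  -H "Accept: application/json" \
  "$NAUTOBOT_BASE_URL/api/"
# 200 -> token and URL are good; 403 -> invalid or expired token;
# connection errors -> network, DNS, or certificate reachability issues.
# Add -k only for self-signed-cert testing (not for production).
```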

My MCP client can't connect to the server. What should I check?

First, confirm the server is running and listening on the expected host and port. Make sure your client is using the correct endpoint path — /mcp for streamable-http transport or /sse for SSE transport. If you're connecting through a reverse proxy, verify that the proxy is forwarding to the correct upstream port and that TLS is configured properly.
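These checks can be run from the shell; the commands below assume the default port 8000 and a local server, so adjust host and port for your deployment:

```shell
# 1. Is the server listening? (assumes default port 8000)
ss -tlnp | grep 8000 || echo "server not listening on 8000"

# 2. Can you reach the endpoint? Any HTTP status code proves basic
#    connectivity; "connection refused" or a timeout does not.
curl -sS -o /dev/null -w "%{http_code}\n" http://localhost:8000/mcp
```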

My query returned an error about fields or depth. What does that mean?

The server parses every GraphQL query before execution and checks it against configured safety limits. If your query requests more fields than MAX_FIELDS or nests deeper than MAX_DEPTH, the tool returns a message explaining what was exceeded. Your AI assistant can usually handle this automatically by simplifying the query or requesting confirmation to proceed. If you regularly hit these limits, you can adjust them in the server configuration.