Connect clients to MCP servers
Overview
After deploying MCP servers in your Kubernetes cluster, you need to connect clients to use them. This guide covers two main connection scenarios:
- External clients - Connecting from outside the cluster using Ingress or Gateway API to expose MCP servers
- Internal clients - Connecting from applications running within the same Kubernetes cluster
Prerequisites
- A Kubernetes cluster with MCP servers deployed (see Run MCP servers in Kubernetes)
- An Ingress controller or Gateway API implementation installed in your cluster (for external access)
- kubectl configured to communicate with your cluster
Connect from outside the cluster
To make your MCP servers accessible to external clients like the ToolHive UI, ToolHive CLI, or other MCP clients, you need to expose the proxy service using an Ingress resource or Gateway API.
When exposing MCP servers externally, you should:
- Always use HTTPS with valid TLS certificates
- Configure authentication to control access (see Authentication and authorization)
- Consider network policies to restrict traffic
Running MCP servers without authentication on public networks is a security risk.
Option 1: Using Ingress
Ingress provides a stable API for exposing HTTP/HTTPS services. This example shows a generic Ingress configuration that works with popular Ingress controllers like Traefik, Contour, and HAProxy, as well as cloud provider implementations like AWS Load Balancer Controller, Google Cloud Load Balancer, and Azure Application Gateway.
First, ensure you have an MCP server deployed. This example uses the fetch
server:
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPServer
metadata:
  name: fetch
  namespace: toolhive-system
spec:
  image: ghcr.io/stackloklabs/gofetch/server:latest
  transport: streamable-http
  mcpPort: 8080
  proxyPort: 8080
Create an Ingress resource to expose the MCP server proxy. You can use either host-based routing (separate subdomain per server) or path-based routing (single domain with paths):
- Host-based routing
- Path-based routing
Each MCP server gets its own subdomain:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fetch-mcp-ingress
  namespace: toolhive-system
  annotations:
    cert-manager.io/cluster-issuer: 'letsencrypt-prod'
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - fetch-mcp.example.com
      secretName: fetch-mcp-tls
  rules:
    - host: fetch-mcp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mcp-fetch-proxy
                port:
                  number: 8080
The MCP server is accessible at https://fetch-mcp.example.com/mcp.
Multiple MCP servers share a single domain using path prefixes. This approach requires URL rewriting to strip the path prefix before forwarding to the backend service.
Path rewriting syntax varies by Ingress controller. Check your controller's documentation for the correct annotations or resources.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mcp-servers-ingress
  namespace: toolhive-system
  annotations:
    cert-manager.io/cluster-issuer: 'letsencrypt-prod'
    # Traefik example: strip path prefix
    traefik.ingress.kubernetes.io/router.middlewares: toolhive-system-strip-mcp-prefix@kubernetescrd
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - mcp.example.com
      secretName: mcp-tls
  rules:
    - host: mcp.example.com
      http:
        paths:
          - path: /fetch
            pathType: Prefix
            backend:
              service:
                name: mcp-fetch-proxy
                port:
                  number: 8080
          - path: /weather
            pathType: Prefix
            backend:
              service:
                name: mcp-weather-proxy
                port:
                  number: 8080
---
# Traefik Middleware to strip path prefixes
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: strip-mcp-prefix
  namespace: toolhive-system
spec:
  stripPrefix:
    prefixes:
      - /fetch
      - /weather
The MCP servers are accessible at https://mcp.example.com/fetch/mcp and
https://mcp.example.com/weather/mcp.
The ToolHive operator automatically creates a Kubernetes Service for each
MCPServer following the naming pattern mcp-<name>-proxy. For example, an
MCPServer named fetch gets a Service named mcp-fetch-proxy.
Apply the resources:
kubectl apply -f fetch-server.yaml
kubectl apply -f fetch-ingress.yaml # or mcp-ingress.yaml for path-based
Verify the Ingress is configured:
kubectl get ingress -n toolhive-system
Option 2: Using Gateway API
The Gateway API is a more expressive way to expose services and is the successor to Ingress. This example works with Gateway API implementations like Cilium, Istio, Envoy Gateway, and Traefik, as well as cloud provider implementations like AWS Gateway API Controller, Google Kubernetes Engine (GKE) Gateway controller, and Azure Application Gateway for Containers. See the full list of implementations.
For a complete working example using the ngrok Gateway API implementation, see the Configure secure ingress for MCP servers on Kubernetes tutorial.
Check for an existing Gateway
Many Gateway API implementations create a Gateway resource automatically during
installation. For example, Traefik's Helm chart creates a traefik-gateway in
the default namespace when enabled. Check if a Gateway already exists:
kubectl get gateway --all-namespaces
If a Gateway exists, note its name and namespace to use in your HTTPRoute. If you need to create a new Gateway, use this example:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: mcp-gateway
  namespace: toolhive-system
spec:
  gatewayClassName: traefik # Change to match your Gateway implementation
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - name: mcp-gateway-cert
      allowedRoutes:
        namespaces:
          from: Same
Create an HTTPRoute to expose your MCP server. You can use either host-based routing (separate subdomain per server) or path-based routing (single domain with paths):
- Host-based routing
- Path-based routing
Each MCP server gets its own subdomain:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: fetch-mcp-route
  namespace: toolhive-system
spec:
  parentRefs:
    - name: mcp-gateway # Reference your Gateway name (e.g., traefik-gateway if using existing)
      # namespace: default # Uncomment if Gateway is in a different namespace
  hostnames:
    - fetch-mcp.example.com # Change to your domain
  rules:
    - backendRefs:
        - name: mcp-fetch-proxy # Format: mcp-<mcpserver-name>-proxy
          port: 8080 # This matches the proxyPort
The MCP server is accessible at https://fetch-mcp.example.com/mcp.
Multiple MCP servers share a single domain using path prefixes. This approach uses URL rewriting to strip the path prefix before forwarding to the backend service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: mcp-servers-route
  namespace: toolhive-system
spec:
  parentRefs:
    - name: mcp-gateway # Reference your Gateway name (e.g., traefik-gateway if using existing)
      # namespace: default # Uncomment if Gateway is in a different namespace
  hostnames:
    - mcp.example.com # Change to your domain
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /fetch
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /
      backendRefs:
        - name: mcp-fetch-proxy # Format: mcp-<mcpserver-name>-proxy
          port: 8080 # This matches the proxyPort
    - matches:
        - path:
            type: PathPrefix
            value: /weather
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /
      backendRefs:
        - name: mcp-weather-proxy
          port: 8080
The MCP servers are accessible at https://mcp.example.com/fetch/mcp and
https://mcp.example.com/weather/mcp.
The URLRewrite filter removes the path prefix (e.g., /fetch) before
forwarding requests to the backend service, so the MCP server receives requests
at /mcp as expected.
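The effect of this rewrite can be modeled in a few lines. This is an illustrative sketch of the observable behavior described above, not the Gateway implementation itself:

```python
def replace_prefix_match(path: str, prefix: str, replacement: str = "/") -> str:
    """Rough model of the Gateway API ReplacePrefixMatch rewrite, for illustration."""
    if not path.startswith(prefix):
        return path  # no match: the path passes through unchanged
    remainder = path[len(prefix):]
    # Join the replacement and the remainder without doubling the slash
    rewritten = replacement.rstrip("/") + remainder
    return rewritten if rewritten else "/"

# /fetch/mcp is rewritten to /mcp before it reaches the backend service
print(replace_prefix_match("/fetch/mcp", "/fetch"))    # /mcp
print(replace_prefix_match("/weather/mcp", "/weather"))  # /mcp
```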
Apply the resources:
kubectl apply -f mcp-gateway.yaml # If creating a new Gateway
kubectl apply -f fetch-route.yaml # or mcp-routes.yaml for path-based
Verify the route is configured:
kubectl get httproute -n toolhive-system
TLS certificates
For production deployments, use valid TLS certificates from a trusted certificate authority. The most common approaches are:
- cert-manager
- Manual certificates
cert-manager automates certificate management in Kubernetes. Install cert-manager and create a ClusterIssuer:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your-email@example.com # Change to your email
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: traefik # Change to match your Ingress controller
Apply it:
kubectl apply -f letsencrypt-issuer.yaml
The Ingress example above already includes the cert-manager annotation. Once cert-manager is installed, it will automatically provision and renew certificates.
If you have existing certificates, create a Kubernetes Secret:
kubectl create secret tls fetch-mcp-tls \
--cert=path/to/tls.crt \
--key=path/to/tls.key \
-n toolhive-system
Reference this secret in your Ingress or Gateway configuration as shown in the examples above.
Connect with ToolHive UI or CLI
Once your MCP server is exposed with HTTPS, you can connect to it as a remote MCP server from the ToolHive UI or CLI.
- ToolHive UI
- ToolHive CLI
In the ToolHive UI:
- Click Add an MCP server on the MCP Servers page
- Select Add a remote MCP server
- Enter the connection details:
- Name: A friendly name for the server
- Server URL: Use the appropriate URL based on your routing approach:
  - Host-based: https://fetch-mcp.example.com/mcp
  - Path-based: https://mcp.example.com/fetch/mcp
- Transport: Streamable HTTP (or SSE if your server uses SSE)
- If authentication is configured, select the method and enter the required OAuth or OIDC details
- Click Install server
The MCP server appears in your server list and you can use it with any connected MCP client.
Use the thv run command to connect:
# Host-based routing: separate subdomain per server
thv run --name fetch-k8s https://fetch-mcp.example.com/mcp
# Path-based routing: single domain with paths
thv run --name fetch-k8s https://mcp.example.com/fetch/mcp
If authentication is configured, add the appropriate flags. See the ToolHive CLI guide for details.
The MCP server is now available to your configured MCP clients.
For more details on using remote MCP servers, see Proxy remote MCP servers.
Connect from within the cluster
Applications running inside your Kubernetes cluster can connect directly to MCP server proxy services using Kubernetes service discovery. This is more efficient and secure than routing through an external Ingress.
Service DNS names
Each MCPServer automatically gets a Kubernetes Service that other pods can use
to connect. The service name follows the pattern mcp-<name>-proxy, and the
full DNS name follows the standard Kubernetes pattern:
mcp-<mcpserver-name>-proxy.<namespace>.svc.cluster.local:<proxyPort>
For example, if you have an MCPServer named fetch in the toolhive-system
namespace with proxyPort: 8080, the full URL would be:
http://mcp-fetch-proxy.toolhive-system.svc.cluster.local:8080
Within the same namespace, you can use the short form:
http://mcp-fetch-proxy:8080
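The naming convention above can be captured in a small helper. The function name `mcp_service_url` is hypothetical, shown only to make the pattern concrete:

```python
from typing import Optional

def mcp_service_url(server: str, namespace: Optional[str] = None,
                    proxy_port: int = 8080) -> str:
    """Build the in-cluster URL for a ToolHive proxy Service (mcp-<name>-proxy).

    Pass a namespace for cross-namespace access (full DNS name); omit it
    when the client runs in the same namespace (short form).
    """
    service = f"mcp-{server}-proxy"
    host = service if namespace is None else f"{service}.{namespace}.svc.cluster.local"
    return f"http://{host}:{proxy_port}/mcp"

print(mcp_service_url("fetch", "toolhive-system"))
# http://mcp-fetch-proxy.toolhive-system.svc.cluster.local:8080/mcp
print(mcp_service_url("fetch"))
# http://mcp-fetch-proxy:8080/mcp
```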
Example: Configuring applications
When deploying applications like AI agents in the same cluster, configure them to use the service DNS name. This example shows how to pass the connection URL as an environment variable:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-agent-app
  namespace: my-app
spec:
  # ... other deployment configuration ...
  template:
    spec:
      containers:
        - name: app
          image: my-agent-app:latest
          env:
            - name: MCP_SERVER_URL
              # Different namespace: use full DNS name
              value: 'http://mcp-fetch-proxy.toolhive-system.svc.cluster.local:8080/mcp'
              # Same namespace: use short name
              # value: 'http://mcp-fetch-proxy:8080/mcp'
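Inside the container, the application reads MCP_SERVER_URL and sends JSON-RPC requests to it. A minimal standard-library sketch, with illustrative function names that are not part of any SDK:

```python
import json
import os
import urllib.request

def tools_list_payload(request_id: int = 1) -> bytes:
    """JSON-RPC 2.0 payload for the MCP tools/list method."""
    return json.dumps(
        {"jsonrpc": "2.0", "method": "tools/list", "id": request_id}
    ).encode()

def list_tools(url: str) -> dict:
    """POST a tools/list request to the MCP proxy; only reachable in-cluster."""
    req = urllib.request.Request(
        url,
        data=tools_list_payload(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage inside a pod (requires cluster DNS, so not runnable locally):
#   url = os.environ["MCP_SERVER_URL"]
#   print(list_tools(url))
```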
Network policies for cross-namespace access
If your cluster uses network policies, you may need to create a policy to allow traffic between namespaces:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-to-mcp
  namespace: toolhive-system
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: toolhive-proxy
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: my-app # Your app's namespace must have this label
Authentication for external clients
When exposing MCP servers externally, configure authentication to control access. ToolHive supports multiple authentication methods:
- OIDC authentication - Use external identity providers like Google, GitHub, Okta, or Microsoft Entra ID
- Kubernetes service accounts - For service-to-service authentication within the cluster
See the Authentication and authorization guide for detailed setup instructions.
Check connection status
Test external connectivity
If you have the ToolHive CLI installed, you can test connectivity to your MCP server:
thv mcp list tools --server https://fetch-mcp.example.com/mcp
Or use curl to send a JSON-RPC request:
# Host-based routing
curl -X POST https://fetch-mcp.example.com/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"tools/list","id":1}'
# Path-based routing
curl -X POST https://mcp.example.com/fetch/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"tools/list","id":1}'
You should receive a JSON response with a list of available tools.
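You can sanity-check the shape of the reply in a few lines. This sketch assumes the MCP convention of a result.tools array on success and an error object on failure:

```python
import json

def looks_like_tools_list(raw: str) -> bool:
    """Rough check that a reply resembles a successful JSON-RPC tools/list result."""
    try:
        body = json.loads(raw)
    except json.JSONDecodeError:
        return False
    result = body.get("result")
    return (
        body.get("jsonrpc") == "2.0"
        and isinstance(result, dict)
        and isinstance(result.get("tools"), list)
    )

# A success reply carries result.tools; an error reply carries an error object
print(looks_like_tools_list('{"jsonrpc":"2.0","id":1,"result":{"tools":[]}}'))    # True
print(looks_like_tools_list('{"jsonrpc":"2.0","id":1,"error":{"code":-32600}}'))  # False
```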
If you've configured authentication on your MCP server, see the Authentication and authorization guide for how to test authenticated connections.
Test internal connectivity
Test connectivity from within the cluster:
# Port-forward to test locally
kubectl port-forward -n toolhive-system service/mcp-fetch-proxy 8080:8080
# In another terminal, test the connection
curl -X POST http://localhost:8080/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"tools/list","id":1}'
Verify the connection path
Check that the Ingress or Gateway is properly configured and the Service has running pods:
# For Ingress: verify it exists and has an address
kubectl get ingress -n toolhive-system fetch-mcp-ingress
# For Gateway API: check HTTPRoute status (look for "Accepted: True" in conditions)
kubectl describe httproute -n toolhive-system fetch-mcp-route
# Verify the Service exists
kubectl get service -n toolhive-system mcp-fetch-proxy
# Check that the proxy pod is running
kubectl get pods -n toolhive-system -l app.kubernetes.io/instance=fetch
If the Ingress shows an address or the HTTPRoute status shows "Accepted: True", and the pod is running, the connection path is properly configured.
Next steps
Learn how to secure your MCP servers with Authentication and authorization.
Configure Telemetry and metrics to monitor your MCP server usage and performance.
Set up logging to track requests and audit MCP server activity.
Related information
- Run MCP servers in Kubernetes - Deploy MCP servers in your cluster
- Proxy remote MCP servers - Create proxies for external MCP servers
- Client compatibility - Supported MCP clients and configuration
- Kubernetes CRD reference - Full MCPServer specification
- Configure secure ingress tutorial - Complete tutorial using ngrok and Gateway API
Troubleshooting
Ingress returns 503 Service Unavailable
If your Ingress returns a 503 error:
# Check if the service exists
kubectl get service -n toolhive-system mcp-fetch-proxy
# Check the proxy pod is running
kubectl get pods -n toolhive-system -l app.kubernetes.io/instance=fetch
# Check pod logs for errors
kubectl logs -n toolhive-system -l app.kubernetes.io/instance=fetch
Common causes:
- Proxy pod not running: Ensure the MCPServer resource was created successfully
- Wrong service name: The Ingress backend service name must follow the pattern mcp-<mcpserver-name>-proxy
- Wrong port: The Ingress backend port must match the proxyPort in the MCPServer spec
- Pod health check failing: Check the proxy pod logs for errors
TLS certificate issues
If you see certificate errors when connecting:
# Check the certificate secret exists
kubectl get secret -n toolhive-system fetch-mcp-tls
# Describe the secret to verify it contains tls.crt and tls.key
kubectl describe secret -n toolhive-system fetch-mcp-tls
# If using cert-manager, check certificate status
kubectl get certificate -n toolhive-system
# Check cert-manager logs
kubectl logs -n cert-manager -l app=cert-manager
Common causes:
- Certificate not ready: Wait for cert-manager to provision the certificate (can take a few minutes)
- DNS not configured: Ensure your domain points to the Ingress load balancer
- Challenge validation failing: Check cert-manager logs for ACME challenge errors
- Wrong ClusterIssuer: Verify the cert-manager annotation references an existing ClusterIssuer
DNS resolution fails
If your domain does not resolve to your cluster:
# Check Ingress external IP
kubectl get ingress -n toolhive-system fetch-mcp-ingress
# Test DNS resolution
nslookup fetch-mcp.example.com
# Or using dig
dig fetch-mcp.example.com
Solutions:
- Configure your DNS provider to create an A record pointing to the Ingress external IP
- If using a cloud provider load balancer, create a CNAME record instead
- Wait for DNS propagation (can take minutes to hours)
Cannot connect from within cluster
If pods cannot connect to the MCP server service:
# Verify service exists
kubectl get service -n toolhive-system mcp-fetch-proxy
# Check that pods are running
kubectl get pods -n toolhive-system -l app.kubernetes.io/instance=fetch
# Test DNS resolution from a pod
kubectl run test-dns -n toolhive-system --image=busybox --restart=Never -- \
nslookup mcp-fetch-proxy.toolhive-system.svc.cluster.local
# Check the DNS test results
kubectl logs -n toolhive-system test-dns
# Clean up the test pod
kubectl delete pod -n toolhive-system test-dns
# Test connectivity from a pod
kubectl run test-curl -n toolhive-system --image=curlimages/curl --restart=Never -- \
curl -X POST http://mcp-fetch-proxy:8080/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"tools/list","id":1}'
# Check the connectivity test results
kubectl logs -n toolhive-system test-curl
# Clean up the test pod
kubectl delete pod -n toolhive-system test-curl
Common causes:
- Network policies blocking traffic: Check for network policies that might prevent pod-to-pod communication
- Wrong namespace: Ensure you're using the correct service DNS name for cross-namespace access
- Service not created: The operator automatically creates services, but verify it exists
- Wrong port: Ensure you're using the proxyPort value from the MCPServer spec
- Wrong service name: Remember the service name is mcp-<name>-proxy, not just <name>
Gateway API not working
If using Gateway API and connections fail:
# Check Gateway status
kubectl get gateway -n toolhive-system mcp-gateway
# Check HTTPRoute status
kubectl get httproute -n toolhive-system fetch-mcp-route
# Describe the HTTPRoute for detailed status
kubectl describe httproute -n toolhive-system fetch-mcp-route
# Check Gateway implementation logs (example for Istio)
kubectl logs -n istio-system -l app=istio-ingressgateway
Common causes:
- Gateway not ready: Wait for the Gateway to be accepted and programmed by the controller
- Wrong gateway class: Ensure gatewayClassName matches your installed Gateway API implementation
- Listener configuration issues: Verify the Gateway listener configuration matches the HTTPRoute requirements
- Certificate issues: For HTTPS, ensure the certificate reference exists and is valid
Cross-namespace access denied
If cross-namespace connections fail:
# Check network policies in the MCP server namespace
kubectl get networkpolicy -n toolhive-system
# Describe network policies to see rules
kubectl describe networkpolicy -n toolhive-system
# Check if the app namespace has the required labels
kubectl get namespace my-app --show-labels
Solutions:
- Create or update network policies to allow traffic from your app namespace
- Add required labels to your application namespace
- Test connectivity using a debug pod in the app namespace