MCPTotal
Connecting AgentKit and Agent Builder to Your MCPs
October 8, 2025
Gil Dabah

Tags: agent builder, agentkit, mcp hosting, mcp servers, openai

This week, OpenAI introduced its new agent-building platform, featuring AgentKit and Agent Builder. Agent Builder provides a visual canvas interface, similar to Zapier and n8n, where users can drag and drop control flows to build automated workflows.

The big deal is that creating an agent on your own is now considerably simpler thanks to OpenAI’s ready-made components, which save developers time.
This lets builders focus on their workflow logic while benefiting from built-in guardrails and evals for robustness testing, ensuring consistency and improved performance.

Whether you prefer to draw your workflow or write code to automate it, MCP is easy to integrate.

Creating and running MCP servers

First things first, we need to run MCP servers in order to connect them as part of the workflow.
With our platform at mcptotal.io, you can sign up for free and immediately create a new space for running MCPs for your agent:

A space is an aggregation of MCP tools running as one MCP endpoint in a cloud-hosted, isolated container.
Taking this endpoint URL, you can connect it to any AI client that speaks MCP.
The next step is to add the MCP servers you want to run under this space, dedicated to your agent workflow.
In this case we’re adding the ‘Gmail’ MCP server and authorizing it to use our inbox, so we can read and write emails:

We added ‘PDF Maker’ too, so our agent can create cool PDFs from the markdown it already knows how to write, and the space is ready for use:

Note that you can also run any MCP server you wish on our platform by adding a custom Python or Node package (uvx/npx) or a Docker image; just click the first card in the homepage’s catalog.

Connecting the MCP server

Back on the spaces page, clicking ‘Connect’ fetches the endpoint URL of the MCP server that our platform provides, which aggregates all the tools (Gmail and PDF Maker in this example) together.
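
Since the endpoint is a standard MCP server, any MCP-speaking client can talk to it. As a quick sanity check before wiring it into OpenAI's stack, here is a minimal sketch using the official MCP Python SDK (the 'mcp' package) that connects to the space and lists the aggregated tools; the URL is a placeholder for the one shown in your own 'Connect' dialog, using the key-in-URL form described below:

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint; paste the URL from your space's 'Connect' dialog.
SPACE_URL = "https://mcp.mcptotal.io/<your-space>/mcp?key=<your-key>"

async def main():
    # Open a Streamable HTTP connection to the space endpoint.
    async with streamablehttp_client(SPACE_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            # Expect to see the Gmail and PDF Maker tools aggregated by the space.
            print([tool.name for tool in tools.tools])

asyncio.run(main())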

OpenAI’s various SDKs don’t yet support OAuth for MCP directly.
However, in some cases they do accept an authorization bearer token, which is great; in other cases, they don’t.
Fortunately, our platform lets you connect to a space’s MCP server over multiple protocols and with multiple authentication methods.
While OpenAI’s documentation shows using SSE, our tests show that “Streamable HTTP” also works reasonably well.
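
As an illustration, here is a minimal sketch of what the two transports look like with the Agents SDK (the same 'agents' package used in the AgentKit example further down); the URLs, the '/sse' path, and the token are placeholders and assumptions, so use whatever your space's 'Connect' dialog provides:

from agents.mcp import MCPServerSse, MCPServerStreamableHttp

# SSE transport, as shown in OpenAI's documentation examples.
# The URL path and token below are placeholders.
sse_server = MCPServerSse(
    name="MCPTotal space over SSE",
    params={
        "url": "https://mcp.mcptotal.io/<your-space>/sse",
        "headers": {"Authorization": "Bearer <your-space-key>"},
    },
)

# Streamable HTTP transport, which also worked well in our tests.
http_server = MCPServerStreamableHttp(
    name="MCPTotal space over Streamable HTTP",
    params={
        "url": "https://mcp.mcptotal.io/<your-space>/mcp",
        "headers": {"Authorization": "Bearer <your-space-key>"},
    },
)

# Either server object can be passed to an Agent via mcp_servers=[...],
# as shown in the full AgentKit example below.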

Using the OpenAI Responses API with MCP

If you’re already using their SDK, you can extend it to use MCP by adding an entry of type "mcp" to the 'tools' list of a Responses API call.
Note that we pass the full URL for the MCP server without separate authentication; unfortunately, they don't support any for this call at the moment.
With MCPTotal, however, the URL itself carries the secret as a query parameter, so the connection is still authorized.
Here's the Python snippet that works; just make sure to replace the server address with your own:


from openai import OpenAI

client = OpenAI()

# The MCP server URL below is our space endpoint; replace it with your own.
resp = client.responses.create(
    model="gpt-5",
    tools=[
        {
            "type": "mcp",
            "server_label": "agent-1-mcp",
            "server_description": "An MCP server with both Gmail and PDF Maker ready.",
            "server_url": "https://mcp.mcptotal.io/mcp-zhjaz2lop3fk5htdcf7k/mcp?key=jxlJOXdyPDvXf8ByODb8",
            "require_approval": "never",
        },
    ],
    input="Create a pdf saying 'hello world' and send it to johndoe@gmail.com, figure out subject and other details on your own please.",
)

Note how we keep the ‘require_approval’ field set to 'never', so the model can invoke the MCP tools directly without approval prompts.
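
If you want to confirm which MCP tools the model listed and called, you can walk the response's output items; a minimal sketch, assuming the 'resp' object from the snippet above:

# MCP-related items (tool listings and tool calls) appear in the output
# list alongside the model's message.
for item in resp.output:
    print(item.type)

# The assistant's final text is also available directly.
print(resp.output_text)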

AgentKit Integration for MCP

Now, let's see how it's done with the new AgentKit, keeping things simple; for more information, you can learn from their example here.
The full code snippet below shows how to use the Pythonic Agent class integrated with our MCP server.
This time we can safely use the "Streamable HTTP" protocol together with an HTTP bearer token, as you can see in the 'headers' parameter of the server configuration, which makes the setup more secure.


import asyncio

from agents import Agent, Runner, gen_trace_id, trace
from agents.mcp import MCPServer, MCPServerStreamableHttp
from agents.model_settings import ModelSettings


async def run(mcp_server: MCPServer):
    agent = Agent(
        name="Assistant",
        instructions="Use the tools to answer the questions.",
        mcp_servers=[mcp_server],
        model_settings=ModelSettings(tool_choice="required"),
    )

    message = "send an email to nir.haas@piiano.com saying hello from Dabah and agentkit"
    print(f"Running: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)


async def main():
    # Connect to the space over Streamable HTTP, authorizing with a bearer token.
    async with MCPServerStreamableHttp(
        name="Streamable HTTP Python Server",
        params={
            "url": "https://mcp.mcptotal.io/mcp-a0keutiwgnb32kjgykwx/mcp",
            "headers": {"Authorization": "Bearer n2Y0khGnWmaorFbLwopO"},
        },
    ) as server:
        trace_id = gen_trace_id()
        with trace(workflow_name="Streamable HTTP Example", trace_id=trace_id):
            print(f"View trace: https://platform.openai.com/traces/trace?trace_id={trace_id}\n")
            await run(server)


asyncio.run(main())

A few notes regarding using MCPs with OpenAI:

  1. The GPT model may ask several follow-up questions before completing a request, so give it enough details up front (this obviously depends on the prompt/message too); otherwise it will keep coming back with more questions.
  2. As of now, chatgpt.com seems to support MCPs that work with files, but the SDKs (both the Responses API and AgentKit) don't work well with MCP servers that expose files as resources.
  3. Make sure you don't forget to set your API key environment variable. For example, in a terminal: export OPENAI_API_KEY="<key here>"
  4. OpenAI's documentation warns that connecting to a malicious server can cause harm; it can try to steal data or fool the agent into doing unintended things. So make sure you only connect to trusted servers.
  5. Some SDKs allow authentication and others don't; prefer using the HTTP header when possible.

Agent Builder Integration for MCP

When using the builder dashboard with an MCP component, an access token must be used for authentication, which is good.
To connect it, select "Streamable HTTP" in the space's "Connect to" dialog.
Then choose "HTTP header" under Security, and a key will be displayed; this key is sent as an HTTP header to authorize access to your server.

Back in the Agent Builder MCP configuration dialog, we now have a matching pair of URL and secret that you can copy and paste.
This is more secure because the access token reaches your MCP server in an HTTP header rather than in the URL.
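
If you want to double-check the pair before pasting it into Agent Builder, the same kind of check as earlier works, this time sending the key as an HTTP header; a minimal sketch with placeholder values, assuming the same Authorization: Bearer pattern as the AgentKit example above:

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholders: use the URL and key from the space's 'Connect' dialog.
URL = "https://mcp.mcptotal.io/<your-space>/mcp"
HEADERS = {"Authorization": "Bearer <your-space-key>"}

async def check():
    async with streamablehttp_client(URL, headers=HEADERS) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Tools Agent Builder will see:", [tool.name for tool in tools.tools])

asyncio.run(check())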

Summary

MCPTotal simplifies the creation of an MCP server, enabling you to host various underlying tools (MCP servers) and securely expose them to OpenAI’s agent platform with the appropriate URLs and credentials. We also make it easy to run custom MCPs, so you don't need to go through the hassle of deploying and managing them on your own.

Our architecture prioritizes isolated (single-tenant) and sandboxed server operations, along with auditing and logging for security and diagnostics that are accessible to you.
For further details on our security architecture, please refer to our blog here.

Last updated: October 8, 2025