
I asked Claude to build my MCP server.
Here's what I actually got.

Standing up an MCP server took a day. Understanding what I built took longer. The interesting part wasn't the engineering.

§ 01 · The obvious next step

Last week I published an essay arguing that the future of content isn't better destinations. It's enabling agentic access, exposing your content and capabilities so they work in any agent context, not just the surfaces you control. MCP servers, structured knowledge, agent-readable interfaces. The whole argument.

Then I went and implemented it on my own site. Hard to argue for something you haven't tried.

I used Claude Code to build it. I described what I wanted (a public, read-only MCP server for sambr.com that would let any MCP-aware client query my photo gallery, my writing, my bio, and my contact info without scraping), and Claude Code wrote the implementation. It took about a day of iteration. It works.

What I want to do here is open the hood. Not because the engineering is complicated (it isn't), but because the interesting decisions weren't technical. They were about data, and what you learn when you try to make your own content queryable by machines.

§ 02 · What Claude Code actually built

The server runs as a Vercel serverless function at https://sambr.com/api/mcp. Each request is stateless: no sessions, no memory between calls. A request comes in, gets handled, and the function terminates. Vercel's runtime is short-lived by design, so stateless is the right fit.

The protocol is JSON-RPC 2.0, which is what MCP runs on. A client sends a method name and parameters, the server returns a result. The outer layer of the implementation is essentially a router. It receives a method like tools/call or resources/read, dispatches to the right handler, and returns a structured response. That routing logic is about 120 lines of clean, unsurprising code.
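
To make that concrete, here's a minimal sketch of the routing layer as a stateless Vercel function, in TypeScript. The dispatch table and stub bodies are hypothetical, for illustration; the actual implementation's internals aren't reproduced here.

    import type { VercelRequest, VercelResponse } from '@vercel/node';

    // Hypothetical dispatch table. The stubs only show the shape; the real
    // handlers return actual tool, resource, and prompt data.
    // ('initialize' and other protocol lifecycle methods elided for brevity.)
    const handlers: Record<string, (params?: unknown) => Promise<unknown>> = {
      'tools/list': async () => ({ tools: [] /* seven definitions, elided */ }),
      'tools/call': async () => ({ content: [] }),
      'resources/list': async () => ({ resources: [] }),
      'resources/read': async () => ({ contents: [] }),
      'prompts/list': async () => ({ prompts: [] }),
    };

    export default async function handler(req: VercelRequest, res: VercelResponse) {
      const { id, method, params } = req.body ?? {};
      const handle = handlers[method];
      if (!handle) {
        // JSON-RPC 2.0 reserves -32601 for "method not found".
        return res.status(200).json({
          jsonrpc: '2.0',
          id,
          error: { code: -32601, message: `Method not found: ${method}` },
        });
      }
      try {
        return res.status(200).json({ jsonrpc: '2.0', id, result: await handle(params) });
      } catch (err) {
        // -32603 is JSON-RPC's generic internal error.
        return res.status(200).json({
          jsonrpc: '2.0',
          id,
          error: { code: -32603, message: String(err) },
        });
      }
    }

The function runs top to bottom and nothing survives past the response, which is exactly the fit with Vercel's short-lived runtime.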

The server exposes three things:

Tools, callable functions: find_photos, get_photo, find_hero, list_writing, get_essay, about_me, contact. These are what an agent actually invokes when it wants to do something.

Resources, named content at URIs like sambr://gallery and sambr://writing/dont-build-the-agent. Resources are readable blobs rather than callable functions, closer to a document than an API endpoint. (There's a sketch of what reading one looks like just after this list.)

Prompts, suggested templates that tell a client how to use the server well. summarize_photographer, recommend_photos, cite_essay. These are hints, not commands. An agent can follow them or ignore them.
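
Resources are the least API-like of the three, so here's roughly what reading one looks like on the wire. The request and response shapes follow MCP's resources/read convention; the payload text is a placeholder, not the real gallery data.

    // What a client sends to read the gallery resource named above.
    const readRequest = {
      jsonrpc: '2.0',
      id: 2,
      method: 'resources/read',
      params: { uri: 'sambr://gallery' },
    };

    // The server answers with content blobs, not a function result.
    const readResponse = {
      jsonrpc: '2.0',
      id: 2,
      result: {
        contents: [
          {
            uri: 'sambr://gallery',
            mimeType: 'application/json',
            text: '[ ... photo array, placeholder ... ]',
          },
        ],
      },
    };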

Underneath the tools, two data stores. Photo data lives in Vercel KV, a Redis-based key-value store, under two keys: gallery (the full photo array) and hero (the homepage selection). Essays, bio, and contact info are hardcoded in a data.js file, static text that doesn't need a database.
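
The KV layer can be as small as this sketch suggests. The two key names are the real ones; the Photo shape is trimmed down for illustration, and note what it deliberately leaves out: no location field, which becomes the whole story in § 04.

    import { kv } from '@vercel/kv';

    // Trimmed-down, illustrative shape. Deliberately no location field; see § 04.
    interface Photo {
      id: string;
      title: string;
      year: number;
    }

    // 'gallery' holds the full photo array.
    async function getGallery(): Promise<Photo[]> {
      return (await kv.get<Photo[]>('gallery')) ?? [];
    }

    // 'hero' holds the homepage selection.
    async function getHero(): Promise<Photo[]> {
      return (await kv.get<Photo[]>('hero')) ?? [];
    }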

That's the whole thing. It's genuinely not complicated. The protocol handles the rest.

§ 03 · Where it's different from a standard API

The obvious question when you first look at an MCP server is: why not just a REST API? The tools look like endpoints. The JSON-RPC envelope looks like overhead. What's actually different?

The difference isn't in the transport. It's in who the client is.

A REST API is designed to be called by code that a developer wrote, code that knows exactly which endpoint to hit, what parameters to send, and how to parse the response. The developer read the docs, wrote the integration, and everything is predetermined.

An MCP server is designed to be called by an agent that has never seen your server before. The agent reads the tool descriptions at runtime, decides which tool to call based on what it's trying to accomplish, constructs the parameters from context, and interprets the result. There's no developer in the loop. The tool description is the documentation, and the agent reads it dynamically every time.

With a REST API, you write for a developer who will read your docs once. With MCP, you write for an agent that will read your tool descriptions on every single call.

This changes how you write the descriptions. Look at the find_photos tool definition and you'll find a line that reads: "Place / location metadata is NOT indexed. There is no way to filter by city, country, or region." That sentence exists because without it, an agent asked to find photos from California would try a location filter, get nothing, and either fail silently or hallucinate. The description is a behavioral guardrail, not just documentation.
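
Concretely, the definition looks something like this. The guardrail sentence is verbatim; the rest of the description text and the schema details are illustrative.

    // Follows the MCP tools/list shape: name, description, inputSchema.
    const findPhotosTool = {
      name: 'find_photos',
      description:
        'Search the photo gallery by subject or year. ' +
        // The guardrail, verbatim:
        'Place / location metadata is NOT indexed. There is no way to ' +
        'filter by city, country, or region.',
      inputSchema: {
        type: 'object',
        properties: {
          query: { type: 'string', description: 'Free-text subject search' },
          year: { type: 'integer', description: 'Limit results to a single year' },
        },
      },
    };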

The other structural difference is discoverability. A client connecting to the MCP server for the first time calls tools/list and gets back everything it needs to understand what's available. No docs site, no SDK, no prior knowledge required. The server describes itself, and the agent figures out what to do with it. That's the protocol doing real work. It's not just a convention; it's what makes the whole thing composable across tools and clients that have never met before.
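
That first contact is a single round trip. A sketch of the exchange, with the response abridged to two of the seven tools:

    // The entire onboarding: one JSON-RPC call, no prior knowledge.
    const discoveryRequest = { jsonrpc: '2.0', id: 1, method: 'tools/list' };

    // Abridged; the full result carries all seven definitions, including
    // the findPhotosTool sketched earlier.
    const discoveryResponse = {
      jsonrpc: '2.0',
      id: 1,
      result: {
        tools: [
          { name: 'find_photos', description: '...', inputSchema: { type: 'object' } },
          { name: 'get_essay', description: '...', inputSchema: { type: 'object' } },
          // ...five more
        ],
      },
    };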

§ 04 · The decision that taught me the most

When I first tested the server, I asked it to find photos taken in California. It returned nothing. Not a hallucination, just an honest empty result, because location isn't indexed.

That wasn't a bug. It was a decision I'd made deliberately, and then forgotten I'd made.

The photo metadata doesn't include location data. My Leica M11 doesn't embed GPS coordinates in the EXIF. I could add location manually (go back through several years of photographs, tag each one with a place), but I chose not to. The effort wasn't worth it for a personal site, and I didn't want to introduce inconsistent or half-complete data into a queryable system. Better to scope it out cleanly than to have an agent return partial results and present them as complete.

So the tool description says explicitly that location isn't indexed. An agent that reads it won't try. An agent that ignores it will fail gracefully rather than be confidently wrong.
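
One way to get that graceful failure (my sketch of the pattern; the actual handler may differ): check incoming filters against the supported set, and when something unsupported shows up, return an honest, explained empty result instead of a guess.

    type Photo = { id: string; title: string; year: number }; // as in the § 02 sketch

    const SUPPORTED_FILTERS = new Set(['query', 'year']);

    function findPhotos(photos: Photo[], params: Record<string, unknown>) {
      const unsupported = Object.keys(params).filter((k) => !SUPPORTED_FILTERS.has(k));
      if (unsupported.length > 0) {
        // Don't guess. Say what can't be answered and return nothing.
        return { photos: [], note: `Unsupported filter(s): ${unsupported.join(', ')}` };
      }
      const query = typeof params.query === 'string' ? params.query.toLowerCase() : undefined;
      const year = typeof params.year === 'number' ? params.year : undefined;
      return {
        photos: photos.filter(
          (p) =>
            (query === undefined || p.title.toLowerCase().includes(query)) &&
            (year === undefined || p.year === year),
        ),
      };
    }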

The quality of your MCP server is directly constrained by the quality of your underlying data. You can't query what you didn't capture.

This is where the engineering stops being the interesting part. The server is fine. The protocol is fine. The constraint is the data model: what exists, what's structured, what's queryable, and what decisions were made years ago when nobody was thinking about agents at all.

Most organizations are in a version of this situation, at much larger scale. The content exists. The data exists. But it was captured for human consumption, not machine retrieval: embedded in page layouts, split across CMS fields that made sense for publishing workflows, tagged inconsistently, or simply never tagged at all. Standing up an MCP server on top of that doesn't fix it. It exposes it.

§ 05 · What this means for the argument I made last week

The essay I published last week argued that the shift from destination-based content to agentic access is a distribution problem, not a content problem. I still believe that. But building this server added a layer I undersold.

It's also a data model problem. And the data model problem is harder, because it requires going back through existing content, existing metadata, and existing publishing workflows, and making decisions about structure that nobody thought to make at the time. That's not a technical project. It's an organizational one. Someone has to own it, resource it, and make the call that it's worth doing.

For my photography site, the scope was manageable and the tradeoffs were clear. For an enterprise content library with fifteen years of accumulated pages, PDFs, and CMS records, the same audit is a different kind of undertaking entirely.

But the direction is the same. Figure out what your content actually is, not as pages, but as entities with attributes and relationships. Decide what's worth making queryable and what isn't. Be explicit about the gaps. Then build the interface that exposes it cleanly.

Claude Code can build you the server in a day. The data model is still your problem. Which is, I think, exactly right. The engineering is the easy part. It always was.