Universal connectivity for AI systems. Access the 6-dimension ontology, semantic search, and RAG pipelines. Native Claude Code integration.
MCP (Model Context Protocol) is an open standard, introduced by Anthropic, for connecting AI models to external resources. It lets AI systems access data, tools, and knowledge securely and efficiently through a standardized protocol.
In the ONE Platform, MCP provides the bridge between your AI and the 6-dimension ontology, making all your things, connections, and knowledge accessible to Claude and other AI systems.
```
Claude                  MCP Protocol                 ONE Platform
  │                          │                            │
  ├──→ List Resources ─────→ [list_resources] ──────────→ Query things
  │                          │                            │
  ├──→ Read Resource ──────→ [read_resource] ───────────→ Fetch thing
  │                          │                            │
  ├──→ Call Tool ──────────→ [call_tool] ───────────────→ Execute action
  │                          │                            │
  └──← Results ←──────────── [results] ←───────────────── Return data
```
MCP is the primary interface for AI access to the ontology. It also enforces security at multiple levels: resources are scoped to groups, role-based access control governs who can reach them, and every access is logged as an event.
MCP provides semantic search through vector embeddings:
```
// Knowledge record stored with its vector embedding
{
  _id: "knowledge_123",
  type: "documentation",
  content: "How to authenticate users",
  embedding: [0.1, -0.2, 0.5, ...], // vector (768 or 1536 dimensions)
  groupId: "org_abc",
  metadata: { topic: "auth" }
}

// Search by semantic similarity
query: "user authentication methods"
query_embedding: [0.11, -0.19, 0.52, ...]
// Results ranked by cosine similarity
```
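The ranking step relies on cosine similarity between the query vector and each stored embedding. A minimal implementation (a sketch, not the platform's internal code) looks like this:

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1] for non-zero vectors.
// Identical directions score 1, orthogonal vectors score 0.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Search then reduces to computing this score between the query embedding and every candidate document, and sorting descending.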
MCP provides direct access to your data when working in Claude Code:
```typescript
// Automatically available in Claude Code
// (`mcp.resource.read` / `mcp.tool.call` are illustrative client calls)
const doc = await mcp.resource.read({
  uri: "one://knowledge/doc_123"
});

const results = await mcp.tool.call({
  name: "semantic_search",
  arguments: { query: "authentication" }
});
```
MCP includes several performance features, such as streaming responses and context-scoped retrieval that returns only the data relevant to a request. The table below compares MCP with the platform's other protocols, ACP and A2A:
| Aspect | MCP | ACP | A2A |
|---|---|---|---|
| Primary Use | Resource access | Agent messaging | Task coordination |
| AI Integration | Native | Via agents | Via agents |
| Query/Search | Full semantic | Message-based | Task-based |
| Real-time | Streaming | Async queue | Sync & async |
How this protocol maps to the 6-dimension ontology:

- **Groups** — MCP resources are scoped to groups for multi-tenant isolation
- **People** — role-based access control determines which users can access which resources
- **Things** — resources expose things from the database for AI access
- **Connections** — relationships between resources are tracked as connections
- **Events** — resource access creates events for audit logging
- **Knowledge** — vector embeddings enable semantic search across knowledge
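Because resource access creates events for audit logging, each MCP read or tool call can append a record to the events dimension. The shape below is illustrative (field names are assumptions, not the platform's schema):

```typescript
// Illustrative audit event appended when an AI reads a resource via MCP.
// All field names here are hypothetical examples.
const auditEvent = {
  type: "resource_accessed",
  actorId: "claude_session_42",              // hypothetical AI session identifier
  thingId: "doc_123",                        // the thing exposed as the resource
  groupId: "org_abc",                        // group scoping for multi-tenant isolation
  timestamp: Date.now(),
  metadata: { uri: "one://knowledge/doc_123" }
};
```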
- Built-in support for vector search and semantic retrieval
- Query the knowledge dimension using embeddings
- Expose resources (databases, APIs, files) to AI systems
- Define tools that AI can invoke with type safety
- Semantic search across company documents, with embeddings updated in real time
- Retrieve only the data relevant to the AI's request context
- Give Claude direct access to your codebase, docs, and data
- Implement Retrieval-Augmented Generation (RAG) with semantic search
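The RAG pattern above boils down to retrieving the most similar documents and grounding the model's prompt in them. A minimal sketch of the prompt-assembly step, assuming documents have already been retrieved (the `KnowledgeDoc` shape is a simplification):

```typescript
// Simplified retrieved-document shape (assumed, not the platform schema)
interface KnowledgeDoc {
  content: string;
  similarity: number; // cosine similarity to the query embedding
}

// Build a grounded prompt from the top-ranked documents,
// keeping only `limit` entries to respect the context budget.
function buildRagPrompt(question: string, docs: KnowledgeDoc[], limit = 3): string {
  const context = docs
    .sort((a, b) => b.similarity - a.similarity)
    .slice(0, limit)
    .map((d, i) => `[${i + 1}] ${d.content}`)
    .join("\n");

  return `Answer using only the context below.\n\nContext:\n${context}\n\nQuestion: ${question}`;
}
```

The resulting string is then sent as the user message, so the model answers from retrieved knowledge rather than memory alone.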
```typescript
// Create an MCP server exposing things
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListResourcesRequestSchema,
  ListToolsRequestSchema,
  CallToolRequestSchema
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "one-ontology-server", version: "1.0.0" },
  { capabilities: { resources: {}, tools: {} } }
);

// Expose things as MCP resources
// (`ctx.db` is a Convex-style database handle assumed to be in scope)
server.setRequestHandler(ListResourcesRequestSchema, async () => {
  const things = await ctx.db
    .query("things")
    .withIndex("by_group", q => q.eq("groupId", groupId))
    .collect();

  return {
    resources: things.map(thing => ({
      uri: `one://things/${thing._id}`,
      name: thing.name,
      description: `${thing.type}: ${thing.name}`,
      mimeType: "application/json"
    }))
  };
});

// Advertise the semantic search tool
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "semantic_search",
        description: "Search knowledge using semantic embeddings",
        inputSchema: {
          type: "object",
          properties: {
            query: { type: "string" },
            limit: { type: "number", default: 10 },
            groupId: { type: "string" }
          },
          required: ["query"]
        }
      }
    ]
  };
});

// Implement the semantic search tool
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "semantic_search") {
    const { query, limit = 10, groupId } = request.params.arguments;

    // Call your embedding API
    const embedding = await getEmbedding(query);

    // Fetch the group's knowledge and rank by cosine similarity
    const docs = await ctx.db
      .query("knowledge")
      .withIndex("by_embedding", q => q.eq("groupId", groupId))
      .collect();

    const results = docs
      .map(doc => ({
        ...doc,
        similarity: cosineSimilarity(embedding, doc.embedding)
      }))
      .sort((a, b) => b.similarity - a.similarity)
      .slice(0, limit);

    return { content: [{ type: "text", text: JSON.stringify(results) }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

const transport = new StdioServerTransport();
await server.connect(transport);
```
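Once built, the server can be registered with an MCP client such as Claude Desktop or Claude Code via its MCP configuration. The entry below is a sketch; the config file location varies by client, and the script path is a placeholder for wherever your compiled server lives:

```json
{
  "mcpServers": {
    "one-ontology-server": {
      "command": "node",
      "args": ["./dist/one-ontology-server.js"]
    }
  }
}
```

After registration, the client launches the server over stdio and the `semantic_search` tool and `one://` resources become available in the session.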
```typescript
// Ask Claude a question, offering the MCP-backed semantic_search tool.
// The tool schema mirrors the MCP server's definition; your MCP client
// executes the resulting tool_use calls against the server.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

const response = await client.messages.create({
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 1024,
  tools: [
    {
      name: "semantic_search",
      description: "Search knowledge using semantic embeddings",
      input_schema: {
        type: "object",
        properties: {
          query: { type: "string" },
          limit: { type: "number" }
        },
        required: ["query"]
      }
    }
  ],
  messages: [
    {
      role: "user",
      content: "Find documentation about authentication patterns"
    }
  ]
});

// Claude decides when to invoke semantic_search via a tool_use block
console.log(response.content);
```
```typescript
// Access ONE Platform things via MCP
// (`mcp.resource.read` / `mcp.tool.call` are illustrative client calls)
const response = await mcp.resource.read({
  uri: "one://things/user_123"
});

const thing = JSON.parse(response.contents[0].text);
console.log(`Loaded: ${thing.name} (${thing.type})`);

// Create a new thing via a tool call
await mcp.tool.call({
  name: "create_thing",
  arguments: {
    type: "document",
    name: "Architecture Plan",
    properties: { version: "1.0" }
  }
});
```