Technology May 01, 2026 · 6 min read

"The CEO Wrote This with MCP" — How I Used an AI Agent to Examine the Translation Industry's Pain Points

DEV Community
by Kozo-KI
"The CEO Wrote This with MCP" — How I Used an AI Agent to Examine the Translation Industry's Pain Points

Kawamura International / LDX Lab

Starting with the Pain Points

There are some quietly troublesome problems in the translation field.

When IR or legal documents are run through a general-purpose MT system, number formatting falls apart. Honorific language does not match the writing style. Company-specific terminology gets replaced with different translations. The feeling of "Having to fix this again?" can end up accounting for most of the post-translation work.

This is less a problem with translation tools than a limitation of processing unstructured text without context. The same thing happens with meeting minutes: who is supposed to do what, and by when, is buried in the text, and you cannot tell without reading the whole thing.

What I set out to do today was test what would happen if that process were handed over to an AI agent.

My Background

Let me be honest from the start. I am not an engineer.

We have a CTO. We also have people who can read code. But I conducted today's test entirely on my own. What I used was the Claude.ai chat interface and our in-house API (LDX hub), connected via MCP. I did not write a single line of code.

This also serves as a record of where non-engineers get stuck when using AI agents.

Architecture: Connecting APIs with MCP

The setup is simple.

Claude (chat)
  └── Zuplo MCP (API gateway)
        └── LDX hub
              ├── StructFlow   ← Turns unstructured text into structured JSON
              ├── RefineLoop   ← Post-translation terminology and quality refinement
              └── RenderOCR    ← Text extraction from PDFs and images

LDX hub is an API service developed in-house. It handles translation, OCR, and data structuring. It runs on an API gateway called Zuplo, and Zuplo has MCP server functionality.

In other words, if you connect Zuplo MCP to Claude, Claude itself will decide when to call the API.

Getting It Connected (Where I Got Stuck)

The connection process has three steps.

  1. Generate an API key in the Zuplo dashboard
  2. Add the MCP server in Claude.ai settings (URL + API key)
  3. Verify the connection
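If you want to sanity-check step 3 outside the chat UI, you can hit the MCP endpoint directly with a `tools/list` call, the standard JSON-RPC method MCP servers use to advertise their tools. This is a minimal sketch; the endpoint URL and the Bearer-style API-key header are placeholders, so check your own Zuplo project for the actual values.

```python
# Sketch: verifying an MCP connection by asking the server to list its tools.
# MCP_URL and the auth header format are assumptions, not the real values.
import json
import urllib.request

MCP_URL = "https://example.zuplo.app/mcp"  # hypothetical endpoint
API_KEY = "zpka_..."                       # issued from the Zuplo dashboard


def list_tools_request() -> urllib.request.Request:
    """Build a JSON-RPC 'tools/list' request, the standard MCP call
    for discovering which tools a server exposes."""
    payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
    return urllib.request.Request(
        MCP_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )


req = list_tools_request()
print(req.get_method(), req.full_url)
```

If the key and URL are right, the response should enumerate tools like createStructFlowJob; if not, you fail fast here instead of inside a chat session.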

It looked simple, but I got stuck at step 1.

It wasn't immediately obvious where in the Zuplo dashboard to generate the key. I ended up reading through the documentation to find it. Later, while doing the VOC analysis (described below), I came across a user review that described exactly the same experience:

"Honestly struggled with the initial setup.
 Figuring out that API keys are issued from the Zuplo
 dashboard wasn't obvious. But once it was running,
 StructFlow's accuracy was satisfying."

This was one of the fictional review data items I processed with StructFlow today, but it felt very real. I'll make a tutorial video.

Verification 1: Extracting Action Items from Meeting Minutes

The first thing I tried once connected was structuring meeting minutes.

I simply described what I wanted: "Please extract the person responsible, the task, and the deadline from five sets of meeting minutes."

Claude assembled the job on its own.

{
  "model": "anthropic/claude-sonnet-4-6",
  "system_prompt": "Extract all action items from the meeting minutes...",
  "example_output": {
    "action_items": [
      { "assignee": "Tanaka", "task": "Fix the bug", "due_date": "2026-05-02" }
    ]
  },
  "inputs": [
    { "id": "minutes-001", "data": { "text": "..." } }
    // All 5 submitted at once
  ]
}

I did not write any of this JSON. Claude selected the createStructFlowJob tool on its own and called it.

Result: 13 action items extracted from 5 sets of minutes in 11 seconds. 100% success rate.

What I found interesting: from the sentence "Tanaka will fix this by April 22. (Note: confirmed as already resolved as of today)," it separately extracted the deadline and the completion flag. It is reading context, not simply extracting keywords.
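For that Tanaka sentence, the structured output would plausibly look something like the record below. The assignee/task/due_date fields mirror the example schema shown earlier; the name of the completion flag is my assumption, not the actual API output.

```json
{
  "assignee": "Tanaka",
  "task": "Fix the bug",
  "due_date": "2026-04-22",
  "status": "completed"
}
```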

Verification 2: VOC Analysis of 100 Reviews

Next, I tried a VOC (Voice of Customer) analysis of app reviews.

When I said "I want to analyze 100 mixed Japanese-English reviews for a task management app tentatively called Flowra," Claude:

  1. Generated the 100 review data items itself (70 in Japanese, 30 in English)
  2. Submitted 2 parallel jobs to StructFlow
  3. Once the results came back, formatted them into an HTML report

All I did was describe what I wanted in three sentences.

The five fields it extracted per review:

Each review → StructFlow → Structured JSON
               ├─ sentiment (positive / negative / neutral)
               ├─ nps_score (integer 0–10)
               ├─ evaluation_axes (axis / score / comment)
               ├─ mentioned_features (feature names mentioned)
               └─ improvement_requests (requested improvements)

Result: All 100 items completed successfully. Processing time was 69 seconds for Job 1 and 59 seconds for Job 2. Output character count was 4.3× the input (33,791 characters).

The 4.3× figure is because comments are automatically generated for each evaluation axis, not simply classified. Something like "Ease of use: Task management has genuinely become easier (positive)" is generated for each of the 100 reviews, across multiple axes.

When I aggregated the data, a polarization emerged: 37% were promoters (NPS 9–10), and 37% were also detractors (0–6). It is a typical product pattern where dissatisfaction with pricing coexists with enthusiasm for features.
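The promoter/detractor split is a simple aggregation over the structured records. A minimal sketch, assuming each record carries the nps_score field listed above (the sample data here is illustrative, not the actual reviews):

```python
# Sketch: bucketing structured review records into NPS segments.
def nps_segments(reviews):
    """Bucket reviews into promoters (9-10), passives (7-8),
    and detractors (0-6) by nps_score; return percentages."""
    counts = {"promoters": 0, "passives": 0, "detractors": 0}
    for r in reviews:
        score = r["nps_score"]
        if score >= 9:
            counts["promoters"] += 1
        elif score >= 7:
            counts["passives"] += 1
        else:
            counts["detractors"] += 1
    total = len(reviews)
    return {k: round(100 * v / total) for k, v in counts.items()}


# Illustrative scores, not the real dataset
sample = [{"nps_score": s} for s in (10, 9, 8, 3, 0, 6, 9, 2)]
print(nps_segments(sample))
```

Once the output is clean JSON, this kind of roll-up is trivial; the hard part (extracting a consistent nps_score from free text) is what StructFlow did.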

What I Noticed: What "Understanding Intent" Actually Means

The key thing I realized from spending the day on this: once MCP gives Claude access to tools, Claude decides on its own how to call them.

I had not read the StructFlow API reference. I had no idea how to submit jobs. But it called createStructFlowJob, polled with getStructFlowJob, and moved on once the job was done.
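The submit-then-poll pattern Claude followed can be sketched as below. The tool names come from what I observed; call_tool stands in for the real MCP transport, and the job_id/state field names are my assumptions.

```python
# Sketch: the create -> poll -> done loop Claude executed via MCP.
# call_tool is a stand-in for the actual MCP tool-call transport.
import time


def run_structflow_job(call_tool, job_spec, interval=2.0, timeout=120.0):
    """Submit a StructFlow job, then poll its status until it finishes."""
    job = call_tool("createStructFlowJob", job_spec)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = call_tool("getStructFlowJob", {"job_id": job["job_id"]})
        if status["state"] in ("completed", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError("StructFlow job did not finish in time")
```

The point is that nobody wrote this loop: Claude inferred it from the tool schema alone.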

This is different from code completion. It is decomposing a task and executing autonomously within the constraints of the tools.

However, there is a prerequisite. The tool schema must be accurately defined. Because Zuplo exposes parameters accurately, Claude can pass the correct arguments. If the schema is ambiguous, things stop before "understanding intent" even becomes relevant.
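Concretely, an MCP tool is declared with a name, a description, and a typed inputSchema, and that typed schema is what lets Claude pass correct arguments. Below is a hypothetical sketch of what a createStructFlowJob declaration might look like, mirroring the job payload shown earlier; it is not the actual Zuplo-exposed schema.

```json
{
  "name": "createStructFlowJob",
  "description": "Submit unstructured text and receive structured JSON.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "model": { "type": "string" },
      "system_prompt": { "type": "string" },
      "example_output": { "type": "object" },
      "inputs": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "id": { "type": "string" },
            "data": { "type": "object" }
          },
          "required": ["id", "data"]
        }
      }
    },
    "required": ["system_prompt", "inputs"]
  }
}
```

If `inputs` were declared as a bare `object` with no item shape, the agent would have to guess, and guessing is where these workflows break.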

Honest Assessment

What worked:

  • Once I described what I wanted, Claude handled job assembly, submission, polling, and formatting
  • Quality did not drop even with mixed Japanese-English text
  • Even as a non-engineer, once connected, I was able to produce something that actually works

What did not work / areas for improvement:

  • Initial setup takes real effort. The API key location in particular was not obvious
  • I ended up pasting an API key into the chat, which was a security issue (I reissued it immediately)
  • Direct calls to the Anthropic API from the browser were blocked by CORS (I worked around it a different way)

I am including the failures here. A record of where things broke is more reproducible than an "everything went perfectly" account.

Today's Summary

Verification                      Volume      Processing Time   Success Rate
Meeting minutes → Action items    5 items     11 seconds        100%
Reviews → VOC structuring         100 items   69s + 59s         100%

In both cases, all I did was describe what I wanted.

What's Next

  • Putting "Isn't a plain LLM enough?" to the test: comparing the same 100 reviews submitted directly to the Anthropic API (Part 2)

MCP takes a bit of effort to get connected. But once it's connected, Claude takes over. All you need to do is tell it what you want. That holds true even for non-engineers.

LDX Lab case studies: https://ldxlab.io/ldxhub/case
StructFlow / RefineLoop / RenderOCR / TermWeave (coming soon)

Source

This article was originally published by DEV Community and written by Kozo-KI.
