Technology · Apr 22, 2026 · 5 min read

Stop Manually Fixing Your Agent’s Output: How We Built a Custom Skill for Monday.com


DEV Community · by David Shimon

I'm using my AI agent to create Monday.com tasks. I ask Claude Code to create a task with a proper description, and it does - except when I open the task, the description field is empty. The text I wanted is sitting in the updates thread. Like a comment. Not a description.

The reason I'm doing this from Claude Code at all: context. When you're mid-task - debugging something, writing a spec, reviewing code - and you realize something needs to go on the backlog, the last thing you want is to context-switch to a browser, find the right board, fill in the form, and try to reconstruct what you were thinking. With Claude Code, I just say "add this to the backlog" and it already knows what we've been working on. Nothing gets lost. Then we keep going.

I assumed it was a bug.
It wasn’t. It was worse: the system was working exactly as designed.

What actually happened

The Monday MCP server exposes a bunch of tools. When Claude looks for "how do I add content to a task", it scans the tool list and finds create_update. That sounds like "update an item", but it actually creates an entry in the updates thread. Think comment section, not description field.

It's a naming problem, but also a missing tool problem. There was simply no tool for setting an item's description.

Finding the right mutation

I wanted to understand what the correct path was. So I let Claude read an existing task from my board first. That's when it noticed: "The description is stored as item_description."

Now we knew the field existed. But how do you set it? Claude's next move was to dump the full Monday GraphQL schema via [all_monday_api](https://github.com/mondaycom/mcp#-dynamic-api-tools-beta) and grep through it. The schema came back as 51KB of JSON. We filtered it down to mutations with "item", "description", "block", or "doc" in the name — and there it was:

set_item_description_content: Sets an item description document's content with new markdown data.

The right tool existed. It was just completely invisible.
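The filtering step above is simple once you have the schema dump. A minimal sketch of the keyword filter, assuming the dump follows the standard GraphQL introspection shape (`__schema.mutationType.fields`) — the sample data here is illustrative, not the real 51KB payload:

```python
import json

# Keywords we grepped for in the mutation names
KEYWORDS = ("item", "description", "block", "doc")

def find_candidate_mutations(schema_json: str) -> list[str]:
    """Return mutation names that mention any of the keywords."""
    schema = json.loads(schema_json)
    fields = schema["__schema"]["mutationType"]["fields"]
    return [f["name"] for f in fields
            if any(k in f["name"].lower() for k in KEYWORDS)]

# Tiny stand-in for the real introspection result
sample = json.dumps({
    "__schema": {"mutationType": {"fields": [
        {"name": "create_update"},
        {"name": "create_item"},
        {"name": "set_item_description_content"},
        {"name": "archive_board"},
    ]}}
})
print(find_candidate_mutations(sample))
# -> ['create_item', 'set_item_description_content']
```

Note that create_update doesn't even match the filter — which is exactly the point: nothing about its name suggests it has anything to do with item content.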

Why it was invisible

No agent is going to figure this out on its own.

set_item_description_content was added to the Monday API on January 26, 2026. The MCP server hasn't caught up yet — so the only way to reach it is through all_monday_api, a generic escape hatch that lets you run raw GraphQL. The mutation is in there, but you'd never know unless you already knew to look.

Monday stores item descriptions separately from item metadata, which is why the correct flow requires two calls:

  1. create_item: sets name, owner, status, etc.
  2. all_monday_api with set_item_description_content: sets the description as markdown
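As raw GraphQL, the two calls look roughly like this. The create_item arguments (board_id, item_name) follow Monday's documented API; the set_item_description_content arguments (item_id, markdown) are my assumption based on what the schema dump suggested — verify the real parameter names against the schema before relying on them:

```python
def build_create_item(board_id: str, name: str) -> dict:
    """Step 1: create the item with its metadata (name, owner, status...)."""
    return {
        "query": """
        mutation ($boardId: ID!, $name: String!) {
          create_item(board_id: $boardId, item_name: $name) { id }
        }""",
        "variables": {"boardId": board_id, "name": name},
    }

def build_set_description(item_id: str, markdown: str) -> dict:
    """Step 2: set the description. Sent through all_monday_api as raw
    GraphQL, since the MCP server has no dedicated tool for this yet.
    Argument names here are assumptions, not confirmed API."""
    return {
        "query": """
        mutation ($itemId: ID!, $markdown: String!) {
          set_item_description_content(item_id: $itemId, markdown: $markdown) { id }
        }""",
        "variables": {"itemId": item_id, "markdown": markdown},
    }

payload = build_create_item("1234567890", "Fix login bug")
desc = build_set_description("9876543210", "# Details\nRepro steps...")
```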

The fix: encode the right behavior

I created a /create-monday-task skill for Claude Code that wraps this two-step flow. The skill does two things the agent wouldn't figure out alone. It explicitly forbids create_update, with a comment explaining why. And it sequences the two API calls correctly: create_item first, then set_item_description_content via raw GraphQL.

The "NEVER use create_update" guardrail is the part that matters most. Without it, any future agent - or future me - would fall into the same trap.
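A skill is just a markdown file of instructions the agent loads on demand. A minimal sketch of what mine looks like — the frontmatter fields follow Claude Code's skill conventions, but the exact wording below is illustrative, not a copy of my file:

```markdown
---
name: create-monday-task
description: Create a Monday.com task with a properly populated description field
---

# Creating a Monday.com task

1. Call `create_item` to create the task with name, owner, status, etc.
2. Call `all_monday_api` with a raw `set_item_description_content`
   mutation to write the description as markdown.

NEVER use `create_update` for descriptions. It posts to the updates
thread (the comment section), not the description field.
```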

Now when I ask Claude to create a task, it uses the skill, and the description ends up in the right place.

The upstream fix

The real fix is simpler: create_item should just accept a description parameter. Internally it would call set_item_description_content after creating the item. One call, no schema exploration needed.

I filed it here: github.com/mondaycom/mcp/issues/314 — no response yet, but the issue documents the expected behavior if you want to follow along.

The broader lesson

When something goes wrong, the instinct is to fix it manually and move on. Go into Monday, cut the text from the update, paste it into the description, done. I've done this more times than I'd like to admit.

But that's the wrong response — and the leverage gets even bigger when you're on a team. A manual fix helps you once. A shared skill helps everyone, every time, and nobody else has to dig through 51KB of schema to understand why.

The better question is: if I can figure out the right way to do this, why can't the agent? Usually, if there's a path — even a two-step, buried-in-raw-GraphQL path — the agent can follow it once you show it the way. The fix isn't to correct the output by hand. It's to invest a little time understanding why it went wrong, and then encode that understanding: a skill, a guardrail, a better tool description.

That investigation is rarely wasted either. You come out the other side with a working solution and a clearer mental model of how agents actually select and use tools — which makes the next debugging session faster.

Manual corrections disappear. A well-written skill sticks around.

So next time your agent does something slightly off — before you reach for the manual fix — ask: is there a way to make the agent do this right? There probably is. It's almost always worth finding out.

Source

This article was originally published by DEV Community and written by David Shimon.
