AI in Practice, No Fluff — Day 10/10
Last week I needed to generate cover images for a blog series. Ten posts, two sizes each. I opened an AI design tool, described what I wanted, and waited.
The results were unusable. Garbled text, wrong colors, layouts that ignored every parameter I gave it. I spent an hour trying different prompts, adjusting descriptions, regenerating. Nothing worked.
Then I wrote an HTML template. Loaded our exact fonts, plugged in the hex colors, added a CSS gradient. Rendered 20 images in under a minute. Every one was exactly right on the first pass.
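A sketch of what that template approach looks like. The fonts, colors, and sizes below are placeholders, not the actual brand assets from this story; the point is that the whole thing is little more than string substitution:

```python
from string import Template

# Placeholder font URL and hex colors -- swap in your own brand assets.
COVER = Template("""<!DOCTYPE html>
<html><head><style>
  @font-face { font-family: 'BrandFont'; src: url('$font_url'); }
  body { margin: 0; width: ${width}px; height: ${height}px;
         background: linear-gradient(135deg, $color_a, $color_b);
         display: flex; align-items: center; justify-content: center; }
  h1 { font-family: 'BrandFont', sans-serif; color: #fff; }
</style></head>
<body><h1>$title</h1></body></html>""")

def render_cover(title: str, width: int, height: int) -> str:
    """Fill the template for one post. Same input, same output, every time."""
    return COVER.substitute(
        title=title, width=width, height=height,
        font_url="fonts/brand.woff2",
        color_a="#1a1a2e", color_b="#16213e",
    )

# 10 posts x 2 sizes = 20 identical-structure pages, generated in one loop.
pages = [render_cover(t, w, h)
         for t in ["Few-shot prompting", "RAG"]
         for (w, h) in [(1200, 630), (800, 418)]]
```

Turning each HTML page into a PNG is a separate step (a headless browser screenshot works); generating the markup is the easy, deterministic part.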
That is the moment this post is about. Not the failure of AI image generation (it will get better), but the instinct to reach for AI when a simpler tool would have worked from the start.
The first series was about which AI to use. This one has been about how to use it well. Today is about when not to use it at all.
The hammer problem
This series has spent nine days teaching you techniques. Few-shot prompting. Chain-of-thought reasoning. Structured output. Tool use. Embeddings. RAG. Serious tools for actual problems.
The risk now is the hammer problem. When you have spent time learning what AI can do, the instinct is to use it for everything. That instinct will be right much of the time, but it's good to know when you actually need a screwdriver.
When code is the better answer
There is a test I use. I call it the 30-line test.
If you could solve this problem in 30 lines of straightforward code, AI is probably not the right tool. Not because AI cannot do it, but because code will do it faster, more reliably, and without the overhead of prompt engineering. That said, having AI help you write that code is still a great option.
Here is what that looks like in practice:
Deterministic logic. If the answer is always the same given the same input, write a function. "Convert this date to ISO format." "Calculate sales tax for this state." "Validate that this string is a valid email address." These are if-then problems. Code does not hallucinate a wrong tax rate. Code does not occasionally decide that "user@.com" looks close enough.
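All three of those examples fit in a handful of lines. A sketch (the tax rates here are made up for illustration; real rates belong in a table you maintain):

```python
import re
from datetime import datetime

def to_iso(date_str: str) -> str:
    """'03/15/2025' -> '2025-03-15'. Deterministic: same input, same output."""
    return datetime.strptime(date_str, "%m/%d/%Y").date().isoformat()

# Hypothetical rates for illustration only.
SALES_TAX = {"CA": 0.0725, "TX": 0.0625, "OR": 0.0}

def sales_tax(amount: float, state: str) -> float:
    """A lookup never hallucinates a rate; an unknown state fails loudly."""
    return round(amount * SALES_TAX[state], 2)

# Deliberately simple email check; it still rejects 'user@.com'.
EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(s: str) -> bool:
    return EMAIL.match(s) is not None
```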
Exact matching. Pattern matching, lookups, filtering. "Find all rows where the status is 'overdue'." "Extract phone numbers from this text." A regex takes milliseconds and costs nothing. An API call takes seconds and costs money. The regex will be right every time.
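Both of those bullet examples are one-liners in practice. A sketch, assuming US-style phone numbers (real formats vary far more widely):

```python
import re

rows = [
    {"id": 1, "status": "overdue"},
    {"id": 2, "status": "paid"},
    {"id": 3, "status": "overdue"},
]

# Exact matching: a list comprehension, not an API call.
overdue = [r for r in rows if r["status"] == "overdue"]

# US-style numbers only, for illustration.
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

text = "Call 555-867-5309 or 555 123 4567 before Friday."
numbers = PHONE.findall(text)
```

Milliseconds, zero cost, and the same answer every run.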
Math. Spreadsheets exist. I've watched people paste data into ChatGPT to calculate averages. The model will probably get it right. "Probably" is the problem. When you need exact answers, use exact tools.
Formatting and templates. If you need the same output structure every time with different data plugged in, that is a template engine, not a language model. The cover image problem from my opening was exactly this. I did not need creativity. I needed precision and repetition.
When AI is the right tool
The flip side is just as important. There are problems where writing the code would be either impossible or absurdly expensive, and AI handles them naturally.
Ambiguity. When the input doesn't have clean structure and you need to make sense of it anyway. A customer writes "this thing broke again smh" and you need to classify it as a billing issue, a technical issue, or a feature request. Good luck writing that with if-then rules. An LLM reads the intent behind the words.
Natural language. Summarizing a 20-page document. Translating between languages with cultural nuance. Writing a professional reply to a frustrated customer. These are language tasks, and language models are built for them.
Judgment calls. "Is this resume a good fit for this role?" "Does this code review comment sound too harsh?" "Should this support ticket be escalated?" These are decisions with gray areas, where reasonable people would disagree. AI handles gray areas well because it was trained on millions of examples of human judgment.
Creative variation. Brainstorming product names. Generating test data that feels realistic. Writing variations of marketing copy to A/B test. When you need variety and exploration, not precision and repetition.
The hybrid pattern
The best systems I have built use both. AI for the fuzzy step, code for the precise one.
Here is a real example. I built a system that processes incoming messages and routes them to the right handler. The routing decision is fuzzy. A message about "can't log in" might be an authentication issue, a password reset, or a session timeout. AI classifies the intent. Once the intent is classified, code takes over. Code routes to the correct handler, updates the database, sends the confirmation email. The fuzzy step needed judgment. Everything after it needed reliability.
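A sketch of that split. The intent labels and handlers below are illustrative, and `classify_intent` is stubbed with keywords; in a real system it would be an LLM call constrained to a fixed set of labels:

```python
def classify_intent(message: str) -> str:
    """The fuzzy step. In production this is an LLM call returning one
    label from a fixed set; stubbed here with keyword matching."""
    text = message.lower()
    if "log in" in text or "password" in text:
        return "auth_issue"
    return "general"

# From here on, everything is deterministic code.
def handle_auth_issue(message: str) -> str:
    return "routed to auth team"

def handle_general(message: str) -> str:
    return "routed to general queue"

HANDLERS = {"auth_issue": handle_auth_issue, "general": handle_general}

def route(message: str) -> str:
    intent = classify_intent(message)               # AI: judgment
    handler = HANDLERS.get(intent, handle_general)  # code: guarantees
    return handler(message)
    # ...followed by the database update and confirmation email,
    # all plain code.
```

The dispatch table is the guarantee: whatever label the model returns, the message ends up in exactly one known handler.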
The memory system from yesterday's post is another one. Semantic search uses embeddings to find entries by meaning, not just keywords. AI powers the search. The storage, retrieval, indexing, and deduplication are all code. I would never trust a language model to manage a database. I would absolutely trust it to understand what I am looking for.
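The same division sketched in code. The `embed` function here is a toy stand-in (a character hash, not a real embedding model), but the storage, dedup, and ranking around it are the parts I mean when I say code manages the database:

```python
import math

def embed(text: str) -> list[float]:
    """Toy stand-in for a real embedding call. In production this hits
    an embedding model; here a character hash keeps the sketch runnable."""
    vec = [0.0] * 8
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch)
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

store: list[tuple[str, list[float]]] = []  # storage is plain code

def remember(entry: str) -> None:
    """Deduplication is plain code too: exact duplicates are dropped."""
    if all(existing != entry for existing, _ in store):
        store.append((entry, embed(entry)))

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def search(query: str, k: int = 2) -> list[str]:
    """AI (the embedding) powers ranking; code does the sorting."""
    q = embed(query)
    ranked = sorted(store, key=lambda ev: cosine(q, ev[1]), reverse=True)
    return [entry for entry, _ in ranked[:k]]

remember("deploy checklist for the API service")
remember("deploy checklist for the API service")  # duplicate, dropped
remember("notes from the March planning meeting")
```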
The pattern is the same every time. AI handles the parts that require understanding. Code handles the parts that require guarantees.
The 30-line test, revisited
I want to come back to this because it is the most practical takeaway in the post.
Before reaching for AI, ask: could I solve this in about 30 lines of straightforward code? If yes, write the code. It will be faster to write, faster to run, cheaper to operate, and more reliable to maintain.
If the answer is no, if the problem involves natural language, ambiguity, judgment, or creative variation, AI is probably the right tool. You now have nine days of techniques to apply.
If the answer is "sort of," if some parts are straightforward and some parts are fuzzy, you are looking at a hybrid. Let AI handle the fuzzy step. Let code handle the rest.
The series, in perspective
Ten days ago, this series opened with few-shot prompting. Show, do not describe. That was a technique.
Today we end with judgment. When to apply the techniques, and when to close the chat window and write the code instead.
That isn't something a tutorial teaches. It comes from building things, watching what works, and being honest about what doesn't. From getting excited about a tool and then catching yourself before you over-apply it. (I still catch myself. The cover image hour was last week.)
Getting better with AI isn't about reaching for it everywhere. It's also about knowing when not to use it.
If there is anything I left out or could have explained better, tell me in the comments.
This article was originally published by DEV Community and written by Jeff Reese.