
Why AI Writing Sounds Robotic (And What We Did About It)


The problem we didn't expect to have

We spent the first few months of MintMyStory on safety filters, image generation, and getting the page layout right. What we didn't budget time for was making the writing not sound like a robot wrote it.

But the feedback kept coming. Parents would generate a story, read the first page out loud, and say something like: "It's fine. But it doesn't sound like a real book."

They were right. And most of them couldn't quite say why.


What AI prose actually sounds like

Here is a sentence from an early MintMyStory story:

"In a vibrant, breathtaking world nestled at the heart of the Enchanted Forest, young Mia embarked on a pivotal journey that would serve as a testament to her enduring courage."

Grammatically clean. Nobody would call it wrong. But try reading that out loud at 8pm after a long day. It sounds like a press release for a seven-year-old.

The problem is structural. Language models are trained to predict the most statistically likely next word. The result is a sort of averaged-out prose that drifts toward the same words, the same sentence lengths, the same rhetorical moves — over and over.

Wikipedia has a project that catalogs this precisely. Its editors put it simply: "LLMs guess what should come next. The result tends toward the most statistically likely result that applies to the widest variety of cases."

That's it. That's the whole problem. A children's book shouldn't apply to the widest variety of cases. It should feel specific, a little weird, and like it was written for this kid.


The patterns that kept appearing

We read a lot of generated stories. A lot. And the same culprits kept showing up.

Monument building. Every minor scene got framed as a pivotal moment in Mia's journey. Characters didn't just do things — they undertook endeavors that underscored their enduring spirit. The AI couldn't let a moment just exist. It had to explain why it mattered.

The "-ing" tack-on. Sentences would end with fake depth: "She opened the door, highlighting her curiosity and showcasing her bravery." A human would have stopped at "opened the door." Or had her hesitate. Not explained the door's thematic significance.

The word problem. Certain words appeared constantly: vibrant, tapestry, delve, foster, crucial, pivotal, testament. We ran a check across a hundred outputs and found "vibrant" in 78% of them. It's fine on its own. Appearing in three out of four stories makes it a tell.

Dead-flat rhythm. Every sentence the same length. Every paragraph three points. The cadence of a PowerPoint, not a story.
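The word-frequency check mentioned above is easy to reproduce. This is a minimal sketch, not our actual tooling: it counts how many stories in a batch contain each flagged word at least once (the sample stories and the `word_frequency` helper are illustrative).

```python
from collections import Counter
import re

def word_frequency(stories, words):
    """Count how many stories contain each flagged word at least once."""
    counts = Counter()
    for story in stories:
        # Tokenize once per story; set() so repeats within a story count once.
        tokens = set(re.findall(r"[a-z]+", story.lower()))
        for w in words:
            if w in tokens:
                counts[w] += 1
    return counts

stories = [
    "A vibrant tapestry of color filled the vibrant sky.",
    "Mia walked home. The street was quiet.",
    "The forest was a vibrant place, a testament to spring.",
]
flagged = ["vibrant", "tapestry", "testament", "delve"]
counts = word_frequency(stories, flagged)
print({w: f"{100 * counts[w] / len(stories):.0f}%" for w in flagged})
# "vibrant" appears in 2 of the 3 sample stories → 67%
```

Run against a real batch of outputs, a table like this makes the tells impossible to argue with.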


What we changed

We added a layer of instructions to our story generation prompt specifically designed to fight these patterns. Not vague guidance like "write naturally" — actual rules.

- Do not explain why a scene is significant. Show it.
- Vary sentence length deliberately. Short sentences matter.
- Banned words: delve, tapestry, vibrant, testament, pivotal, foster, underscore.
- No "-ing" analysis phrases at the end of sentences.
- The narrator needs a personality. Be specific. Be a little weird.
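Rules like these are mechanically checkable. Here is a hedged sketch of a draft linter, assuming a small banned-word set and a regex for the "-ing" tack-on pattern; the participle list in `ING_TACK_ON` is illustrative, not exhaustive.

```python
import re

BANNED = {"delve", "tapestry", "vibrant", "testament", "pivotal", "foster", "underscore"}

# Sentence-final participial tack-ons like ", highlighting her curiosity."
ING_TACK_ON = re.compile(
    r",\s+(?:highlighting|showcasing|underscoring|emphasizing|demonstrating)"
    r"\b[^.!?]*[.!?]"
)

def lint_story(text):
    """Return a list of rule violations found in a draft."""
    problems = []
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    for word in sorted(BANNED & tokens):
        problems.append(f"banned word: {word}")
    for match in ING_TACK_ON.finditer(text):
        problems.append(f"-ing tack-on: {match.group(0).strip()}")
    return problems

draft = ("She opened the door, highlighting her curiosity. "
         "The vibrant hall stretched ahead.")
print(lint_story(draft))
# ['banned word: vibrant', '-ing tack-on: , highlighting her curiosity.']
```

A checker like this is cheap to run on every draft and gives the model concrete feedback instead of "write more naturally."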

We also added a self-audit step. After generating a draft, the model is asked one question: "Does this sound like a chatbot or a parent reading at bedtime?" If it leans chatbot, it rewrites.
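The self-audit step amounts to a small generate-audit-rewrite loop. The sketch below uses stand-in callables (`generate`, `audit`, and the stub `fake_*` functions are hypothetical, not our production model calls) to show the control flow under that assumption.

```python
def self_audit_rewrite(draft, generate, audit, max_passes=2):
    """Regenerate a draft until the audit says it reads like a person wrote it.

    `generate(prompt)` and `audit(text)` stand in for whatever model calls
    the real pipeline makes; here they are plain callables.
    """
    text = draft
    for _ in range(max_passes):
        verdict = audit(text)  # e.g. "chatbot" or "parent"
        if verdict == "parent":
            break
        text = generate("Rewrite so it sounds like a parent reading aloud:\n" + text)
    return text

# Stub model calls for illustration only.
def fake_audit(text):
    return "chatbot" if "vibrant" in text else "parent"

def fake_generate(prompt):
    return prompt.splitlines()[-1].replace("vibrant", "quiet")

result = self_audit_rewrite("The vibrant forest waited.", fake_generate, fake_audit)
print(result)  # The quiet forest waited.
```

Capping the passes matters: without `max_passes` a stubborn draft could loop forever, and in practice one rewrite pass catches most of the chatbot tells.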

The rules come from Wikipedia's "Signs of AI Writing" guide — a living document that Wikipedia editors maintain, cataloging 29 distinct patterns that make AI text recognizable. We read through all 29 and turned them into prompt instructions.


Before and after

Same story, same scene.

Before:

"In a vibrant, breathtaking world nestled at the heart of the Enchanted Forest, young Mia embarked on a pivotal journey that would serve as a testament to her enduring courage."

After:

"Mia had been told the Enchanted Forest smelled like pine and old rain. She'd expected it to feel bigger. It mostly just felt quiet."

The second one has a character with an expectation that doesn't quite pan out. She notices something specific. The reader leans in because something is slightly off and they want to know what happens next. The first version just announces the story's themes and moves on.


Why it matters more with kids

Adults can feel when something is off in a text. They've read enough to sense when a sentence is working too hard. Kids haven't built that instinct yet.

But they're sensitive to it differently. They stop asking you to re-read the same page. They squirm. The story loses them.

A really good children's book gets read fifteen times in a row and the kid still says "one more time." That's the bar. The AI is good at plot and pictures — but the words have to feel like a person is in the room with you.

We still don't hit that every time. We're getting there.


Try MintMyStory and see what the stories actually feel like.
