The Missing Piece in Writing Advice: How to Recognize When Work Is Done
In 2010, something shifted in writing advice.
Brené Brown's TED talk on vulnerability went viral, eventually reaching 25 million views. Eric Ries published "The Lean Startup," introducing the concept of the Minimum Viable Product to mainstream audiences. Amazon's self-publishing platform matured, enabling indie authors to publish as quickly as they could write. Facebook plastered "Done is better than perfect" on office walls as a cultural mantra.
Within two years, the conversation changed. Writing advice that once emphasized endless polishing and perfectionism pivoted hard in the opposite direction: just ship it. Finish fast. Don't overthink it. The anti-perfectionism movement had arrived.
To understand this shift, we analyzed 12 landmark writing advice sources spanning three decades—from Anne Lamott's "Bird by Bird" (1994) to Seth Godin's "The Practice" (2020). We looked at bestselling books, influential movements like NaNoWriMo and 20BooksTo50K, and the dominant themes in #WritingCommunity across platforms.
Here's what surprised us: Only 3 of the 12 sources—just 25%—provided explicit criteria for recognizing when work is actually complete.
The rest focused on mindset. They told us perfectionism was the enemy, or that endless refinement was the path to mastery. They encouraged us to ship fearlessly or polish relentlessly. But neither approach taught the operational skill that matters most: how to recognize when work is done.
This isn't about choosing sides in the perfectionism debate. It's about identifying what both sides leave out, and what research from psychology and decision science can teach us about developing this crucial skill.
Table of Contents
- The Gap in the Conversation
- The Metacognitive Gap: What Research Shows
- Why Experience Creates a Blind Spot
- The Metacognitive Completion Framework
- Reframing Perfectionism
- Practical Implementation: Start Here
- How Timed Writing Trains This Skill
- The Missing Piece
- Further Reading
The Gap in the Conversation
Let's start by acknowledging what each approach gets right.
What Perfectionism-Focused Advice Gets Right
The traditional emphasis on craft and revision isn't wrong. High standards do matter for quality work. Revision does improve writing; research on expert writers consistently shows they spend more time revising than novices do. Attention to craft develops skill through deliberate practice.
When Anne Lamott writes about her "dental draft" where "you check every tooth to see if it's loose or cramped or decayed," she's describing genuine quality control. When MFA programs emphasize workshopping and multiple drafts, they're recognizing that first attempts rarely represent our best work.
What It Leaves Out
But here's what perfectionism-focused advice doesn't answer: When do you stop?
"Keep polishing" offers no endpoint. "Real writers revise endlessly" provides no definition of "endless." "Good enough isn't good enough" never defines what would be good enough.
The result? Writers who revise the same chapter 47 times, cycling between versions without clear improvement. Novels that live in drawers for years because they're "not quite ready." The paralysis that comes from standards without stopping points.
What Anti-Perfectionism Advice Gets Right
The anti-perfectionism movement correctly identified a real problem. Overthinking does block progress. Perfectionism can be a disguise for fear. Shipping work builds confidence and competence in ways that endless private revision cannot.
When Elizabeth Gilbert writes that "at some point, you really just have to finish your work and release it as is—if only so that you can go on to make other things," she's articulating an important truth. When Seth Godin emphasizes that "if it doesn't ship, it doesn't count," he's highlighting the value of putting work into the world.
The movement successfully removed psychological barriers. It normalized imperfect first drafts (Lamott's "shitty first drafts"). It created permission to finish (Gilbert's "done is better than good"). It built momentum culture through initiatives like NaNoWriMo, which has helped over 400,000 participants write 50,000 words in 30 days.
What It Leaves Out
But anti-perfectionism advice has its own gap: How do you know if something is actually ready?
"Just ship it" doesn't tell you whether "it" meets a minimum threshold. "Done is better than perfect" assumes you can recognize "done." "Trust your gut" presumes your gut is calibrated to make accurate judgments.
And here's the problem: For many writers—especially those earlier in their development—that assumption doesn't hold.
The Pattern Across Both Eras
When we analyzed landmark sources, we found the same pattern repeatedly:
Books (Traditional Publishing Era):
- 72% emphasized perfectionist language
- Common phrases: "revise until it shines," "never settle," "real writers rewrite"
- Completion criteria: Largely absent
Social Media (#WritingCommunity, BookTok, Instagram):
- 58% emphasized anti-perfectionist language
- Common phrases: "done is better than perfect," "just write," "stop overthinking"
- Completion criteria: Equally absent
The gap exists in both camps. One says "keep going," the other says "you're done"—but neither teaches you how to make that judgment yourself.
Both approaches focus on mindset (perfectionism is bad/good; shipping is scary/liberating) rather than the operational skill of recognizing completion.
The Metacognitive Gap: What Research Shows
So why does this matter? Because recognizing completion is a specific cognitive skill that develops over time and can be explicitly taught.
Finding 1: Self-Assessment Accuracy Develops with Expertise
In a now-famous study, researchers Justin Kruger and David Dunning asked participants to complete tests and then estimate their performance. Those who scored in the bottom quartile estimated they'd performed at the 62nd percentile. They weren't lying or being arrogant: they genuinely couldn't tell how they'd done.
The researchers found something counterintuitive: Improving your skill is what enables you to assess your skill. The competence needed to perform well is the same competence needed to recognize good performance.
For writers, this creates a bootstrapping problem. When you're early in your development, you need accurate self-assessment most, but that's exactly when your self-assessment is least reliable.
Telling a novice writer to "trust your gut when it's done" is asking them to use a judgment tool they haven't yet calibrated.
Finding 2: Metacognitive Monitoring Predicts 87% of Performance Variance
Recent research on academic writing found something striking: students' ability to monitor their own understanding (their metacognitive accuracy) explained 87% of the variance in their writing performance.
Not their vocabulary. Not their grammar knowledge. Not even their general intelligence. Their ability to accurately assess what they knew and didn't know was the dominant predictor of quality.
Metacognitive strategy training (teaching students to explicitly monitor and evaluate their work) dramatically improved both their self-assessment accuracy and their actual performance.
This suggests completion recognition isn't a mystical "writer's intuition" you either have or don't. It's a skill that can be developed through explicit training.
Finding 3: Clear Completion Criteria Enable Goal Achievement
Research on implementation intentions (such as "if-then" plans for goal pursuit) shows that people are three times more likely to complete difficult goals when they specify clear conditions for completion.
The effect size is large and robust (d = .65). It works across domains from exercise to academic performance. The mechanism seems straightforward: clear criteria remove ambiguity from the decision of whether to continue or stop.
But here's the catch: The research assumes you have clear completion criteria to work with. For many creative tasks, including writing, those criteria need to be constructed.
Finding 4: "Good Enough" Requires Thresholds
Decision science research distinguishes between "maximizers" (people who seek the best possible outcome) and "satisficers" (people who seek outcomes that meet a threshold of "good enough").
Satisficers consistently report higher happiness, lower regret, and less depression than maximizers—despite maximizers often achieving objectively better outcomes (they find jobs with 20% higher salaries, for instance).
But here's the crucial detail: The satisficing advantage depends on having clear "good enough" thresholds. You need to know what standard you're trying to meet.
Writing pedagogy rarely provides these thresholds. We're told to make our work "good enough to publish" or "the best we can make it." What does that mean operationally?
The Synthesis
Four separate research findings point to the same conclusion:
- Novices can't accurately self-assess (Dunning-Kruger)
- Metacognitive skill predicts performance more than other factors (writing studies)
- Clear criteria enable goal completion (implementation intentions)
- "Good enough" thresholds enable both achievement and well-being (satisficing)
Yet 75% of influential writing advice sources provide no explicit completion criteria. They assume the very skill the research shows needs to be explicitly developed.
Why Experience Creates a Blind Spot
If completion recognition is learnable, why don't more experienced writers teach it explicitly?
The answer lies in what researchers call the "expert blind spot."
The Expertise Paradox
As people develop expertise, they compress multiple steps into automatic single actions. A beginning driver thinks about each component: check mirrors, signal, check blind spot, begin turn, adjust speed. An experienced driver does all of this in what feels like one fluid motion.
This automatization is the hallmark of expertise. But it creates a genuine teaching problem: automatized processes aren't accessible to conscious inspection. When experts try to explain their process, they often can't break it back down into the steps a beginner would need.
What This Means for Writing Advice
When a successful author with 15 years of experience says "you'll know when it's done; you'll just feel it," they're drawing on:
- Years of internalized completion criteria developed through hundreds of writing projects
- Automatic quality assessment calibrated by thousands of hours of practice and feedback
- Metacognitive monitoring they've trained but may not consciously access
The advice is true—for them. They do just know. Their "gut feeling" is actually a highly sophisticated pattern-matching system trained on extensive experience.
But telling someone early in their development to "trust your gut" is like telling a beginner driver to "just feel when to brake." In reality, the feeling develops through practice and feedback, not by being told to trust it.
The Gap This Creates
The pattern we found in our research makes sense in this light:
- Novice writers need explicit criteria most (their metacognitive monitoring isn't yet calibrated)
- Experienced writers have the hardest time providing them (their processes have become automatic)
- This creates a genuine teaching gap that arises from the nature of expertise itself
When an established author writes "perfectionism is fear in fancy shoes" (Gilbert) or describes completion as a "dragon dropping dead" (Pressfield), they're accurately describing their subjective experience. But they're not providing the operational guidance a beginner could follow.
The Exception That Proves the Rule
The one fully explicit source we found—Eric Ries's "The Lean Startup"—comes from product development, not creative writing. Ries provides clear completion criteria for MVPs: When you've completed one Build-Measure-Learn cycle with customer feedback, the MVP is done.
Why is this clearer? Product development deals with measurable outcomes (customer behavior) and business contexts that demand operational definitions. The discipline requires explicit criteria.
But not all forms of writing have this forcing function. For creative writing, for example, we can rely on subjective judgment—until we realize many writers don't yet have the calibrated judgment to rely on.
The Metacognitive Completion Framework
Rather than choosing between perfectionism or anti-perfectionism, research from psychology and decision science points to a different approach: explicitly developing completion recognition as a learnable skill.
Here are four components supported by research:
Component 1: Explicit Completion Criteria
Instead of "trust your gut," create if-then rules based on objective markers.
The key is specifying multiple observable conditions that together indicate "done" rather than relying on a vague feeling.
For a blog post:
- IF the main argument is supported with evidence AND
- IF you've answered the "so what?" question (why readers should care) AND
- IF structure has clear introduction, body, and conclusion AND
- IF you've revised 2-3 times after a cooling-off period
- THEN publish
For a short story:
- IF plot has beginning, middle, and end AND
- IF character arc is complete (character has changed or revealed something essential) AND
- IF beta readers have identified the same strengths and weaknesses (reaching saturation) AND
- IF you've addressed all structural feedback from trusted readers
- THEN submit
For a dissertation chapter:
- IF argument aligns with overall thesis AND
- IF literature review is comprehensive for this section AND
- IF your advisor has approved the outline AND
- IF you've met the word count target (±10%)
- THEN move to next chapter
Notice the pattern: multiple objective conditions that together signal completion, not a single subjective feeling.
This approach draws from implementation intentions research, which shows if-then rules dramatically improve goal completion precisely because they remove ambiguity from the stopping decision.
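Because these if-then rules are built from observable conditions, they are mechanical enough to write down literally. As a rough sketch (the criteria names and structure here are invented for illustration, not drawn from any of the cited sources), a completion rule for a blog post might look like:

```python
# Hypothetical sketch: a completion rule as an explicit checklist.
# Criterion names and values are illustrative, not prescriptive.

def ready_to_publish(criteria: dict[str, bool]) -> bool:
    """Return True only when every completion criterion is met (the AND rule)."""
    return all(criteria.values())

blog_post = {
    "argument_supported_with_evidence": True,
    "answers_so_what_question": True,
    "clear_intro_body_conclusion": True,
    "revised_after_cooling_off": True,
}

if ready_to_publish(blog_post):
    print("THEN: publish")
else:
    unmet = [name for name, met in blog_post.items() if not met]
    print("Keep working on:", ", ".join(unmet))
```

The point of the sketch is the shape of the rule: multiple binary conditions joined by AND, with a single unambiguous stopping action when they all hold.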
Component 2: Metacognitive Calibration Training
Research shows you can improve self-assessment accuracy through deliberate practice. Here's how:
The calibration cycle:
- Write a piece with your completion criteria in mind
- Self-evaluate against those criteria (rate 0-10 on each dimension)
- Get external evaluation (from a peer, mentor, editor, or beta reader)
- Compare your judgment to the external judgment
- Identify patterns in your over- or under-confidence
- Adjust your internal sense and repeat
Over time, your internal sense of "done" calibrates to external reality. The gap between your self-assessment and others' assessment narrows. You develop a "gut feeling" grounded in systematic feedback rather than hope.
This is metacognitive strategy training in action.
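The comparison step in the calibration cycle can be made concrete. As a minimal illustration (the dimension names and scores are invented examples, and a 0-10 scale is assumed as in the cycle above), you might compute the gap between your ratings and an external reader's:

```python
# Illustrative calibration check: compare 0-10 self-ratings against an
# external reader's ratings on the same dimensions. Scores are toy data.

self_scores     = {"argument": 8, "structure": 9, "clarity": 7}
external_scores = {"argument": 6, "structure": 8, "clarity": 7}

# Per-dimension gap: positive means you rated yourself higher than the reader did.
gaps = {dim: self_scores[dim] - external_scores[dim] for dim in self_scores}
mean_gap = sum(gaps.values()) / len(gaps)

for dim, gap in gaps.items():
    print(f"{dim:10s} gap: {gap:+d}")

# A positive mean gap suggests over-confidence on this piece;
# a negative one suggests under-confidence.
print(f"mean gap: {mean_gap:+.2f}")
```

Tracked across several pieces, the mean gap is the pattern the calibration cycle asks you to notice and correct.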
Component 3: Strategic Stopping Rules
You can also track patterns across revision passes to identify diminishing returns.
Research on optimal stopping (the mathematical theory of when to stop searching) suggests most decisions benefit from structured exploration followed by a commitment threshold. For writing, this might look like:
Revision pass tracking:
- Pass 1 (Structural revision): Major changes to argument, organization, flow → Large improvements
- Pass 2 (Paragraph-level revision): Strengthening examples, improving transitions → Moderate improvements
- Pass 3 (Sentence-level editing): Clarity, concision, word choice → Small improvements
- Pass 4 (Copyediting): Grammar, punctuation, typos → Minimal improvements
- Pass 5+: If you're making changes at this point, check whether they're actually improvements
The warning signs:
- You're cycling (making a change, reverting it, then making it again)
- You can't read through without compulsively editing
- Changes are cosmetic rather than substantive
- New versions aren't clearly better than old versions
When you notice these patterns, you've likely entered diminishing or negative returns territory. This is an observable, behavioral marker for "time to stop."
Editorial professionals (Jane Friedman, Mary Kole) teach this explicitly: set pass limits, notice cycling, recognize the two-revision rule (after changing something, you get two more chances to refine it, then it stays).
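The diminishing-returns pattern above can also be expressed as a simple stopping rule. In this hedged sketch (the 0-10 improvement scores, the threshold, and the window size are all invented for illustration), you log how much each revision pass improved the piece and stop once recent passes stall:

```python
# Hypothetical strategic stopping rule: stop revising when the last few
# passes each produced negligible improvement. Thresholds are illustrative.

def should_stop(improvement_log: list[int], threshold: int = 2,
                window: int = 2) -> bool:
    """Stop once the last `window` passes each scored below `threshold`."""
    if len(improvement_log) < window:
        return False
    return all(score < threshold for score in improvement_log[-window:])

# Rough improvement per pass, structural revision through copyediting.
passes = [9, 5, 3, 1, 1]
print("Stop revising?", should_stop(passes))
```

The scores are subjective, but the rule itself is not: it converts "am I in diminishing-returns territory?" into a condition you can check rather than a feeling you have to trust.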
Component 4: Context-Specific Standards
Completion criteria should scale to project importance and audience expectations, not arbitrary perfectionism or blanket "ship it" advice.
For a social media post:
- Standard: Good enough to start a conversation
- Time investment: 1 revision pass, maybe none
- Quality bar: Authentic, timely, engaging
For a blog post:
- Standard: Clear argument, readable, few typos
- Time investment: 2-3 revision passes
- Quality bar: Informative and accessible
For an academic article:
- Standard: Novel contribution, rigorous methodology, comprehensive literature review
- Time investment: 10-15+ revision passes with peer feedback
- Quality bar: Publication-ready for peer-reviewed journal
For a novel:
- Standard: Complete character arcs, satisfying plot resolution, genre expectations met
- Time investment: Highly variable (often 5-10+ full drafts)
- Quality bar: Competitive with published books in the genre
The point is that context determines what "good enough" means operationally, and being explicit about this helps you know when you're actually done.
Reframing Perfectionism
This framework also helps us understand perfectionism more precisely.
Recent research distinguishes between two types:
Maladaptive perfectionism (perfectionistic concerns):
- Self-doubt, fear of mistakes, shame about imperfection
- Correlates with anxiety, depression, and procrastination
- This is what causes harm
Adaptive perfectionism (perfectionistic strivings):
- High personal standards, organized approach, pursuit of excellence
- Correlates with better outcomes and less stress than having no standards
- This is beneficial
Researcher Patrick Gaudreau argues we should go further and distinguish "excellencism" (pursuing excellence with clear completion criteria and self-compassion) from perfectionism (pursuing perfection with harsh self-criticism and no endpoint).
The reframe:
Old framing: Perfectionism is bad → adopt anti-perfectionism
Research-based framing:
- Target perfectionistic concerns (self-doubt, fear) with self-compassion and realistic standards
- Preserve perfectionistic strivings (high standards) with explicit completion criteria
- Train metacognitive accuracy to bridge the gap between aspiration and assessment
Practical application:
Instead of fighting perfectionism OR giving up and "just shipping," you:
- Set appropriately high standards for the context (excellencism)
- Define explicit completion criteria so you know when you've met them (enables satisficing)
- Train your judgment through systematic calibration (builds metacognition)
- Use if-then rules to trigger the stopping decision (implementation intentions)
You pursue excellence, but with a finish line.
Practical Implementation: Start Here
If you want to try this approach with your current writing project, here's where to begin:
The Completion Criteria Exercise
Step 1: Identify your project type and stakes
- What are you working on? (Blog post, dissertation, novel chapter, article)
- What are the consequences of publishing too soon? Too late?
- Who is the audience and what do they expect?
Step 2: List 3-5 objective completion criteria
Not "it feels good" or "I'm satisfied." Instead, measurable markers:
- Specific structural elements present (character arc complete, argument supported with evidence)
- Feedback thresholds reached (3 beta readers agree on strengths, editor approved)
- Revision passes completed (3 full revision cycles done)
- Observable quality markers (no plot holes, transitions smooth, genre expectations met)
Step 3: Create your if-then rule
"IF [criterion 1] AND [criterion 2] AND [criterion 3], THEN I [publish/submit/move on]."
Make it specific enough that you could explain it to someone else and they could verify whether the conditions are met.
Step 4: Test and calibrate
Apply your criteria to current work. Get external feedback. Compare your assessment to that feedback. Refine your criteria based on what you learn.
The goal is to develop judgment that gets more accurate with practice.
The Completion Recognition Audit
For deeper insight, consider your last 3-5 completed projects:
- What made you decide each one was "done"?
- In retrospect, did you stop too early? Too late? Just right?
- What patterns emerge in your stopping behavior?
- What criteria would have helped you stop at the optimal point?
Look for patterns in your over-confidence (consistently think you're done earlier than you are) or under-confidence (struggle to stop even when work is ready).
These patterns reveal where your metacognitive calibration needs adjustment.
How Timed Writing Trains This Skill
This is where approaches like timed writing sessions become interesting as completion recognition training, rather than a productivity hack.
Traditional advice asks: "Is this good enough?" and leaves you to figure out the answer.
Timed writing provides an external completion criterion: When the timer stops, you stop.
This might seem arbitrary, and within a single session it is. But across repeated practice, something interesting happens:
1. It removes the judgment burden while you're learning
Instead of agonizing over whether you're "done enough" to stop or "not done enough" to justify continuing, the decision is made for you. This frees cognitive resources for the actual writing.
2. It calibrates your sense of "complete enough for now"
Over time, you learn how much you can accomplish in constrained time. You develop intuition for what constitutes a reasonable stopping point. You build confidence that "done for today" doesn't mean "abandoned forever."
3. It builds the if-then muscle
"IF timer ends, THEN I stop" is an implementation intention. Practicing this simple rule trains the mental habit of connecting observable conditions to stopping actions.
4. It provides consistent practice with completion
Every session ends. You practice the experience of finishing. For writers who struggle with endless revision, this repeated experience of completion can recalibrate their stopping threshold.
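The "IF timer ends, THEN I stop" rule is simple enough to sketch directly. In this toy illustration (the function name, the unit-of-writing callback, and the durations are all invented; a real session would use minutes, not fractions of a second), the session ends when the clock says so, not when the work "feels done":

```python
# Toy sketch of an external completion criterion: the timer, not your
# judgment, decides when the session is over.

import time

def timed_session(minutes: float, write_one_unit) -> int:
    """Call write_one_unit repeatedly until time runs out; return units done."""
    deadline = time.monotonic() + minutes * 60
    units = 0
    while time.monotonic() < deadline:
        write_one_unit()  # stand-in for one small unit of drafting
        units += 1
    return units

# Shrunken "session" so the sketch runs quickly: ~0.06 seconds of no-op work.
done = timed_session(0.001, lambda: time.sleep(0.01))
print(f"Session over after {done} units. IF timer ends, THEN stop.")
```

The design point is that the stopping condition is observable and external, which is exactly what makes it usable before your internal judgment is calibrated.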
This approach acts as training wheels for completion recognition. As your metacognitive accuracy improves through the calibration cycle (write, self-assess, get feedback, compare, adjust), you can graduate to more nuanced internal criteria.
But the timer provides scaffolding while you're developing that judgment.
The Missing Piece
Here's what we've learned:
Most writing advice focuses on mindset: whether to aim for perfection or embrace "good enough." But there's a gap both approaches leave unfilled: learning to recognize when work is actually complete.
Research reveals why this matters:
- Self-assessment accuracy is challenging, especially early in skill development (Dunning-Kruger)
- Metacognitive monitoring is a major predictor of writing performance (writing studies)
- Clear completion criteria significantly improve goal completion rates (implementation intentions)
- "Good enough" thresholds enable both achievement and well-being (satisficing)
Yet when we analyzed landmark writing advice sources, we found that 75% provide mindset and platitudes rather than operational guidance. The completion recognition gap exists across both the perfectionism and anti-perfectionism camps.
Completion recognition is a specific cognitive skill that can be explicitly developed through:
- Creating objective if-then completion criteria for different project types
- Practicing metacognitive calibration (comparing your assessments to external feedback)
- Using strategic stopping rules to identify diminishing returns
- Matching standards to context rather than applying blanket rules
The shift:
From "Should I keep working or am I overthinking this?"
To "Have I met the completion criteria I set for this project?"
From relying on uncalibrated gut feelings
To systematically training judgment that gets more accurate with practice
The goal is to develop judgment you can trust because you've tested and refined it through explicit practice.
Starting Points
- This week: Try the Completion Criteria Exercise with your current project
- This month: Track your completion decisions and notice your patterns
- This quarter: Experiment with external completion cues (like timed writing sessions) while building internal calibration
The question isn't whether to be a perfectionist or embrace "done is better than perfect."
The question is: Do you have the skill to recognize when work is actually done?
And if not—can you develop it?
The research suggests you can.
Further Reading
On Metacognition and Self-Assessment:
- Kruger, J., & Dunning, D. (1999). "Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments."
- Research showing metacognitive strategies explain 87% of variance in writing performance (PMC8438561)
On Goal Completion and Decision-Making:
- Gollwitzer, P. M. Implementation intentions meta-analysis (d = .65 effect size for completion rates)
- Schwartz, B., et al. (2002). "Maximizing versus satisficing: Happiness is a matter of choice."
On Perfectionism:
- Gaudreau, P. (2018). On the distinction between excellencism and perfectionism
- Research on adaptive vs. maladaptive perfectionism and mental health outcomes
On Expert-Novice Differences:
- Nathan, M. J., & Koedinger, K. R. On the expert blind spot in teaching
On the Anti-Perfectionism Movement:
- Historical analysis of sources from Lamott's "Bird by Bird" (1994) through Godin's "The Practice" (2020)
- Tracking the 2010-2012 cultural tipping point (Brown, Ries, indie publishing convergence)
Tags: Writing Advice, Metacognition, Perfectionism, Productivity
Internal links: [Timed Writing for Productivity], [Types of Writer's Block], [Breaking Through Writer's Block]