Writing Practice Builds AI Prompting Skills
Knowing AI techniques doesn't guarantee effective use. A 2025 meta-analysis of 51 studies found that students using ChatGPT procedurally (taking outputs at face value) performed worse than those using it constructively with critical evaluation.[1]
Controlled experiments with human-AI teams show similar patterns: collaborative training on AI tools reduced performance compared to independent training, suggesting that simply learning techniques together doesn't build the skills needed for effective collaboration.[2]
In This Article
Reading time: 12 minutes
This article explores the unexpected connection between daily writing practice and effective AI collaboration:
- The Metacognitive Demands of AI - What makes AI collaboration challenging (3 min)
- What Writing Research Teaches Us - 40 years of research on expert writers (3 min)
- The Theoretical Bridge - Why two fields reached the same conclusion (2 min)
- Putting It Into Practice - Four high-leverage writing practices (3 min)
- What We Know and Don't Know - Research gaps and opportunities (1 min)
The gap stems from how people think while using AI, not from technical knowledge. AI literacy's impact on job performance operates almost entirely through creative self-efficacy and metacognitive skills (β = 0.680 total effect, with 79% mediated by creative confidence).[3]
Studies of university students confirm this pattern: self-regulated learning skills (including planning, monitoring, and self-reflection) predict AI writing performance more strongly than technical AI literacy (β = 0.237 vs. β = 0.153).[4]
Knowing AI techniques matters less than metacognitive skills
The cognitive demands of effective AI collaboration map almost perfectly to skills writing researchers have studied for over forty years. Two independent research streams (separated by decades and domains) have converged on the same insight: metacognitive control is what makes someone effective at complex cognitive tasks.
Preliminary interviews with professional creative writers suggest similar patterns: writers with established craft foundations evaluate AI contributions more confidently, employ iterative refinement, and maintain careful oversight of each decision, though this exploratory work awaits peer review.[5]
The Metacognitive Demands of AI
AI collaboration imposes significant metacognitive demands on users: it requires awareness and regulation of your own thinking processes. Recent research from Microsoft Research and the University of Cambridge (Tankelevitch et al., 2024) identified why this is hard: the metacognitive burden, not technical complexity, is the primary barrier.[6]
Their research identified three core metacognitive demands:
Prompt Formulation requires self-awareness of your actual goals. You need to decompose complex tasks into clear sub-goals, anticipate where ambiguity might creep in, and articulate intent with precision. People who struggle here often lack the metacognitive skill of making their own thinking explicit, even when their domain knowledge is strong.
Prompt Iteration demands metacognitive flexibility: the ability to recognize when your approach isn't working and adapt your strategy. It requires well-calibrated confidence in your prompting ability: enough to experiment, but not so much that you persist with failing approaches. This is a metacognitive skill, not a technical capability.
Output Evaluation requires critical judgment about AI responses. You need to detect errors, identify hallucinations, and assess whether the output actually serves your goals. This demands confidence in your evaluation abilities and awareness of your own knowledge boundaries.
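The three demands above can be read as a loop: formulate, get output, evaluate, reformulate. A minimal Python sketch under that reading; every function here is an illustrative stand-in (no real AI API is assumed), and the evaluation step is reduced to a toy check that in practice would be human judgment:

```python
def draft_prompt(goal):
    """Prompt formulation: make the goal explicit as an instruction."""
    return f"Explain, step by step: {goal}"

def model_response(prompt):
    """Stand-in for a model call; a real system would query an AI API."""
    return f"(model output for: {prompt})"

def meets_goal(output, goal):
    """Output evaluation: does the result actually serve the goal?
    A toy substring check here; in practice, critical human judgment."""
    return goal in output

def refine(prompt, output):
    """Prompt iteration: adapt the strategy when the output falls short."""
    return prompt + " Be specific about each step."

def collaborate(goal, max_rounds=3):
    """Run the formulate -> evaluate -> reformulate cycle."""
    prompt = draft_prompt(goal)
    output = model_response(prompt)
    for _ in range(max_rounds):
        if meets_goal(output, goal):
            break
        prompt = refine(prompt, output)  # metacognitive flexibility
        output = model_response(prompt)
    return output
```

The point is not the code but its shape: each step in the loop is a metacognitive act (making intent explicit, judging the result, revising the approach), which is exactly what the research above identifies as the real skill.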
What makes this challenging: educational research shows that fewer than half of students regularly engage in basic metacognitive practices like asking for help when confused or connecting new problems to prior knowledge.[7] Despite its importance, explicit instruction in metacognition remains rare across all levels of education. Communication skills (including the ability to articulate what you want clearly) develop through practice, not passive study.
Given that these skills develop through practice, the question becomes: what kind of practice develops the metacognitive abilities AI collaboration requires?
What Writing Research Teaches Us: The Evolution of a Key Insight
Expert writers move recursively through planning, translating, and reviewing; they constantly monitor their progress and adjust their approach. This insight, first articulated by Hayes and Flower in their 1981 cognitive process theory of writing, remains one of the most influential frameworks in writing research, with over 3,800 citations.[8]
Hayes spent thirty years refining this model. With each revision (in 1996[9] and again in 2012[10]), one element became increasingly prominent: metacognitive control.
Hayes' 2012 model reorganizes the entire writing framework around metacognition. At the top of his hierarchy sits the Control Level: the metacognitive executive function that coordinates all writing processes. As Hayes explicitly stated: "It is the metacognitive control of writing that helps coordinate both the writer's knowledge and the strategies that help achieve the objectives."[10]
This evolution matters because it reflected accumulated evidence that metacognitive control distinguishes expert writers from novices.
Research across multiple contexts confirms this pattern. Negretti's 2021 study of doctoral writers found that writing practice develops metacognitive awareness: the ability to monitor your thinking, identify gaps in your reasoning, and adjust strategies.[11] A meta-analysis of metacognitive strategy training confirmed that helping writers become more aware of their own cognitive processes improves writing outcomes (effect size d = 0.69).[12]
Expert writers tend to have developed six transferable cognitive skills through regular practice:
- Task decomposition - Breaking complex goals into manageable steps
- Clarity and precision - Eliminating ambiguity from communication
- Iterative refinement - Treating revision as core practice, not failure
- Error detection - Critical evaluation of their own output
- Metacognitive monitoring - Constant awareness of what's working
- Working memory management - Handling cognitive load across complex processes
Notice that these are exactly the skills research has identified as necessary for AI collaboration[6]: domain-general metacognitive abilities that apply across all complex cognitive tasks.
The Theoretical Bridge: Convergence from Two Directions
This convergence is clear: Hayes spent three decades refining understanding of writing cognition, progressively emphasizing metacognitive control as central to the process. Independently, AI researchers studying human-AI collaboration in 2024 identified metacognitive demands as users' primary challenge.
Two research streams, separated by decades and domains, identified the same bottleneck: metacognitive control is the limiting factor in complex cognitive tasks.
The metacognitive through-line is unmistakable. When expert writers plan, they exercise metacognitive control: "What's my actual goal? How do I break this down? What might be unclear?" When effective AI users formulate prompts, they do the same: "What do I actually want? How do I make this unambiguous? Where might the AI misunderstand?"
The recursive pattern matters, too. Expert writers move flexibly through planning, translating, and reviewing. Iterative writing practice trains comfort with recursive processes. Effective AI users do exactly the same: formulate, articulate, evaluate the output, reformulate based on results, articulate again.
The profile of who excels at AI prompting reinforces this connection. Industry observations suggest that the most effective prompters tend to be people who express ideas clearly and articulate their intentions precisely, regardless of engineering background. Hayes' research shows that writing practice directly develops this articulacy through repeated cycles of making thinking explicit.
Working memory demands tell the same story. Both writing and AI collaboration require managing multiple goals simultaneously while tracking context and monitoring progress. Kellogg's 1996 research identified this pattern in writing;[13] Tankelevitch et al.'s 2024 research found it in AI collaboration.[6]
A caveat: this framework is compelling but unproven. No empirical studies have tested whether writing practice directly transfers to better AI collaboration. The theoretical case is strong (two independent research traditions converging on metacognitive control is suggestive), but direct empirical evidence is absent.
This research gap represents both a limitation and an opportunity. The theoretical parallels are strong enough to warrant practical experimentation, though we must acknowledge we don't yet know if transfer occurs.
Putting It Into Practice: What Kind of Writing?
If the hypothesis holds, not all writing practice should be equally effective for developing AI collaboration skills. The most leveraged practices emphasize planning, clarity, revision, and metacognitive awareness.
Timed Freewriting forces clear articulation under constraints. Set a timer for 15 minutes and write continuously; this builds fluency in translating thoughts to language and develops metacognitive awareness as you notice your own thinking patterns. Daily rhythm matters more than session length.
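As a small practical aid (my own addition, not from the research above), a timed session can be scripted. The minute-by-minute nudges and the 15-minute default are illustrative choices:

```python
import time

def freewriting_session(minutes=15, interval_s=60, sleep=time.sleep):
    """Count down a freewriting session, nudging you to keep writing.

    `sleep` is injectable so the schedule can be tested without waiting.
    Returns the list of minute marks announced.
    """
    marks = []
    for i in range(1, int(minutes * 60 // interval_s) + 1):
        sleep(interval_s)
        remaining = minutes - i * interval_s / 60
        marks.append(remaining)
        print(f"{remaining:g} min left: keep writing, don't edit")
    print("Time! Stop, even mid-sentence.")
    return marks
```

Run it with the defaults for a standard 15-minute session; the only rule is that the pen (or cursor) keeps moving until the final message.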
Outlining Exercises train task decomposition. Before writing any document, create a structured outline. This practices breaking complex goals into hierarchical components; the same cognitive skill is required for effective prompt formulation.
Revision-Focused Writing develops critical evaluation skills. Write a first draft, wait 24 hours, then revise. This builds comfort with iterative refinement across multiple passes; it exercises exactly the metacognitive flexibility needed for prompt iteration.
Explanatory Writing trains clarity and precision. Practice explaining complex concepts to non-expert audiences. This requires anticipating confusion and making implicit knowledge explicit; these are the skills needed when prompting AI systems that lack your context.
Regular practice develops procedural fluency (the automaticity that comes from repeated exercise). Just as daily physical practice builds muscle memory, daily writing practice builds metacognitive habits.
What We Know and Don't Know
Established:
✓ Writing practice develops metacognitive skills
✓ AI collaboration requires metacognitive skills
✓ Theoretical overlap is substantial
✓ Hayes (30 years) and AI researchers (2024) independently identified metacognition as central
Unknown:
? Whether writing practice improves AI collaboration
? Which practices transfer most effectively
? Transfer timeline
? Individual differences in transfer
Writing practice has independent value: it clarifies thinking, improves communication, and builds discipline. The convergence of Hayes' research evolution with 2024 AI findings suggests this hypothesis merits empirical investigation. Consider trying the writing practices to see how they translate for you.
Conclusion
Most AI training focuses on technique: chain-of-thought prompting, model capabilities, specific patterns. But if the bottleneck is metacognitive skill, technique training focuses on symptoms while leaving root causes unaddressed.
You can teach someone chain-of-thought prompting in five minutes. But if they lack the metacognitive awareness to decompose their goals clearly or recognize when their approach isn't working, the technique won't help.
Writing might be the most underrated AI skill development strategy exactly because it develops the foundational abilities that make techniques effective.
This is a testable theory grounded in forty years of writing research and recent AI findings, built on two independent research streams converging on the same insight.
Start your first timed writing session today.
Endnotes
This article presents a theoretical synthesis grounded in established research. While the parallel between writing practice and AI collaboration skills is compelling, empirical studies testing this transfer have not yet been conducted. Readers are encouraged to experiment and share their findings.
1. Ng, D. T. K., Su, J., Leung, J. K. L., Zhang, Y., Chu, S. K. W., & Qiao, M. S. (2025). "The effect of ChatGPT on students' learning performance: A meta-analysis." Humanities and Social Sciences Communications, 12, Article 95. Meta-analysis of 51 studies showing students using ChatGPT procedurally (taking outputs at face value) performed worse than those using it constructively with critical evaluation. Nature Portfolio Q1 journal.
2. Flathmann, C., McNeese, N., Canonico, L. B., & Knijnenburg, B. (2024). "Empirical impacts of independent and collaborative training on task performance and improvement in human-AI teams." Journal of Cognitive Engineering and Decision Making, 68(1), 23-42. Controlled experimental study finding collaborative training on AI tools reduced performance compared to independent training. Q2 journal (SAGE).
3. Liu, X., Zhang, L., & Wei, X. (2025). "Generative artificial intelligence literacy: Scale development and its effect on job performance." Behavioral Sciences, 15(6), 811. Scale development study (N = 584 total) showing AI literacy's impact on job performance operates almost entirely through creative self-efficacy and metacognitive skills (β = 0.680 total effect, with 79% mediated by creative confidence). Q2 journal (MDPI).
4. Chu, S. K. W., Mok, S. S., Liu, X., Zhang, Y., Zou, E., So, W. M. W., Chan, L. K., Lee, C. W. Y., & Sit, S. K. (2024). "The effects of AI literacy, self-regulated learning strategies and writing self-efficacy on AI-supported writing performance." Education and Information Technologies. Correlational study of 257 university students showing self-regulated learning predicts AI writing performance (β = 0.237) more strongly than AI literacy (β = 0.153). Q1 journal.
5. Gero, K. I., Ashktorab, Z., Dugan, C., Pan, Q., Johnson, J., Geyer, W., Ruiz, M., Miller, S., Millen, D. R., Murray, M., Brachman, M., & Houde, S. (2024). "From Pen to Prompt: How Creative Writers Integrate AI into their Writing Practice." arXiv preprint. Qualitative study with 18 professional creative writers. Not yet peer-reviewed.
6. Tankelevitch, L., Kewenig, V., Simkute, A., Scott, A., Sarkar, A., Rintel, S., Inie, N., & Inkpen, K. (2024). "The UI is the model: Exploring the role of user interfaces in supporting metacognitive processes during AI-assisted writing." In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (Article 1010, pp. 1-26). ACM. Microsoft Research & University of Cambridge collaboration identifying metacognitive demands as the primary challenge in AI collaboration. CHI is a top-tier, peer-reviewed HCI conference.
7. International Baccalaureate Organization. (2021). "Making the abstract explicit: The role of metacognition in teaching and learning." IB Policy Research. Synthesis of research on metacognitive practices in education.
8. Hayes, J. R., & Flower, L. (1981). "A Cognitive Process Theory of Writing." College Composition and Communication, 32(4), 365-387. Foundational paper with over 3,800 citations establishing the recursive model of writing processes.
9. Hayes, J. R. (1996). "A New Framework for Understanding Cognition and Affect in Writing." In C. M. Levy & S. Ransdell (Eds.), The Science of Writing: Theories, Methods, Individual Differences, and Applications (pp. 1-27). Mahwah, NJ: Lawrence Erlbaum Associates.
10. Hayes, J. R. (2012). "Modeling and Remodeling Writing." Written Communication, 29(3), 369-388. Reorganizes the writing framework with metacognitive control at the top of the hierarchy.
11. Negretti, R. (2021). "Searching for Metacognitive Generalities in Learning to Write for Publication." Written Communication, 38(4), 455-485. Study of doctoral writers showing writing practice develops metacognitive awareness.
12. Ohtani, K., & Hisasaka, T. (2018). "Beyond intelligence: A meta-analytic review of the relationship among metacognition, intelligence, and academic performance." Metacognition and Learning, 13(2), 179-212. https://doi.org/10.1007/s11409-018-9183-8. Meta-analysis showing metacognitive training improves academic outcomes with moderate to large effect sizes (d = 0.69).
13. Kellogg, R. T. (1996). "A Model of Working Memory in Writing." In C. M. Levy & S. Ransdell (Eds.), The Science of Writing: Theories, Methods, Individual Differences, and Applications (pp. 57-71). Mahwah, NJ: Lawrence Erlbaum Associates.