"Prompt engineering" is often presented as a technical discipline reserved for those with coding backgrounds. The evidence points the other way. Examining the determinants of effective Large Language Model (LLM) interaction, researchers identified a moderate-to-strong positive correlation (r = 0.444) between a user's lexical diversity and the quality of the generated output [1]. That is to say, the ability to generate high-quality outputs from AI models is largely a function of linguistic precision. Further supporting this, subsequent research found that 83.7% of users agreed that successful prompt interactions depend not on complex coding syntax, but on the clarity and structural coherence of the natural language input [2].

For academic writers and researchers, this leads to a significant conclusion: you do not need to learn a new, alien skill set to utilize AI effectively. The skills required to structure a dissertation, articulate a thesis, or refine a grant proposal are the exact competencies that drive high-performance prompting.

As we explored in [Post 01: Why Writing Matters More in the Age of AI], the cognitive demands of writing facilitate deep thinking. Here, we demonstrate how those same demands facilitate deep interaction with artificial intelligence.

Figure: Research evidence summary, four key statistics — r = 0.444 correlation between lexical diversity and output quality; 83.7% user agreement on the importance of prompt clarity; 17% of variance in programming learning explained by language aptitude; 77.5% less grounding in LLM generations versus human conversation.

In This Post, We'll Look At

  • The Evidence: Examining the statistical link between language aptitude and prompt efficacy.
  • Skill Transfer: Five core writing competencies that directly translate to prompt engineering.
  • Cognitive Frameworks: Why writing operates as a "high road" transfer mechanism for AI interaction.
  • Practical Application: Strategies to leverage your existing rhetorical knowledge.
  • Daily Practice: How timed writing sessions reinforce prompting capabilities.

Evidence: Writing Skills Predict AI Success

While understanding the underlying transformer architecture of an LLM is useful for developers, the user experience is fundamentally linguistic.

A study of predictors of success in learning programming languages (specifically Python) identified language aptitude as explaining 17% of variance in learning outcomes, compared to 34% for fluid reasoning and working memory, and only 2% for numeracy [3]. If language aptitude predicts the ability to write code, it is reasonable to expect it to also predict the ability to interface with models that process natural language directly.

When researchers analyzed the "lexical richness" (a measure of vocabulary variety and precision) of prompts, they found that users who employed a broader, more precise lexicon consistently extracted higher-fidelity responses from the model. The correlation coefficient of r = 0.444 is statistically significant, suggesting that the nuance of the input shapes the nuance of the output [1].
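
The simplest form of lexical richness is the type-token ratio: unique words divided by total words. The sketch below is illustrative only — the `lexical_richness` helper is not the metric used in [1], and published studies use length-corrected measures — but it makes the intuition concrete: a precise prompt varies its vocabulary more than a vague one.

```python
def lexical_richness(text: str) -> float:
    """Type-token ratio: unique words divided by total words.

    A rough proxy for the lexical-richness measures used in prompt
    research; real studies use length-corrected metrics, but the
    intuition is the same.
    """
    tokens = [t.lower().strip(".,;:!?\"'()") for t in text.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

vague = "Make this writing better and make it good."
precise = ("Edit this passage to improve syntactic variety "
           "and eliminate nominalizations.")
```

Here `lexical_richness(precise)` is higher than `lexical_richness(vague)`: the vague request repeats its few generic words, while the precise request spends each word on a distinct instruction.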

Furthermore, 83.7% of users agreed that clearer and more specific prompts lead to better AI results, with clarity emerging as a fundamental determinant of prompt success [2]. As models become more sophisticated, they rely less on rigid formatting tricks and more on semantic intent.

This aligns with the distinction we made in [Post 04: The Missing Piece in Writing Advice]; recognizing the underlying cognitive structures of communication is more valuable than memorizing surface-level tactics. The evidence is clear: the most effective "engineers" of text generation are those who have spent years engineering text themselves. The findings of Prat and colleagues suggest that the same linguistic competencies that facilitate human-to-human communication also facilitate human-to-AI collaboration.

Figure: What predicts programming success? Language aptitude explains 17% of variance in learning outcomes, fluid reasoning and working memory 34%, and numeracy only 2%.

The Five Writing Skills That Transfer

We can map specific, established writing competencies directly to the requirements of effective prompting. These are not metaphors; they are functional cognitive equivalents.

Figure: The five writing skills that transfer to AI prompting — common ground establishment, task decomposition, audience design, iterative refinement, and lexical precision — each mapped directly to its AI application.

1. Establishing Common Ground (Context Setting)

The Writing Skill: In academic discourse, a writer must establish the "state of the field" before introducing a new argument. This aligns with the theory of grounding in communication, defined as the collaborative process by which participants in a conversation establish mutual knowledge, beliefs, and assumptions [4].

The Prompting Transfer: LLMs suffer from what researchers identify as "grounding gaps": they possess vast information but lack specific context unless explicitly provided [5]. An audience (human or machine) cannot process a complex request without established parameters.

Novice Prompt: "Summarize this paper."

Writer's Prompt: "Acting as a peer reviewer for the Journal of Higher Education, summarize the methodological limitations of the attached manuscript. Focus specifically on the sample selection bias relative to the findings in Smith (2020)."

The writer applies Clark and Brennan's principle of positive evidence, explicitly stating the context (peer reviewer), the scope (methodological limitations), and the comparative framework (Smith, 2020).

2. Task Decomposition (Structure)

The Writing Skill: Experienced researchers do not write a paper from start to finish in one breath. They utilize task decomposition, a cognitive strategy described as breaking a complex goal into a hierarchy of sub-goals [6]. This is evident in outlining, sectioning, and organizing a literature review before drafting the methodology.

The Prompting Transfer: Large Language Models often hallucinate or lose coherence when faced with multi-faceted, monolithic requests. Techniques such as "chain of thought" prompting are essentially task decomposition applied to AI. Writers are already trained to structure arguments linearly and logically.

Novice Prompt: "Write a research proposal about microplastics."

Writer's Prompt: "We will develop a research proposal in stages. Step 1: Define three potential gaps in the current literature regarding microplastics in freshwater systems. Step 2: Once we select a gap, outline a quantitative methodology to address it. Step 3: Draft the abstract."

By managing the cognitive load of the AI through decomposition, the writer ensures higher fidelity outputs.
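
The staged approach above can be made mechanical. The sketch below is a minimal illustration (the `build_staged_prompts` helper and its wording are hypothetical, not from any library): each prompt restates the overall goal, names only the current sub-task, and is sent one at a time so each stage's output can inform the next.

```python
def build_staged_prompts(goal: str, stages: list[str]) -> list[str]:
    """Decompose one goal into a sequence of single-step prompts.

    Mirrors how a writer outlines before drafting: the overall goal
    stays visible, but only one sub-task is active per prompt.
    """
    prompts = []
    for i, stage in enumerate(stages, start=1):
        prompts.append(
            f"We are developing: {goal}\n"
            f"Step {i} of {len(stages)}: {stage}\n"
            "Complete only this step."
        )
    return prompts

stages = [
    "Define three potential gaps in the current literature regarding "
    "microplastics in freshwater systems.",
    "Outline a quantitative methodology to address the selected gap.",
    "Draft the abstract.",
]
prompts = build_staged_prompts("a research proposal", stages)
```

Sending these prompts sequentially, rather than as one monolithic request, is the programmatic equivalent of the writer's prompt above.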

3. Audience Design (Persona Adoption)

The Writing Skill: Ede and Lunsford famously conceptualized "audience addressed" versus "audience invoked" [7]. Skilled writers constantly adjust their register, tone, and complexity based on who they are addressing. This is the rhetorical concept of addressivity.

The Prompting Transfer: In prompting, this skill translates to "persona adoption." Because an LLM is a probabilistic engine, it predicts the next token based on the training data associated with a specific context; when a writer specifies an audience or a persona, they are effectively narrowing the probability distribution to a specific slice of the model's training data.

Novice Prompt: "Explain quantum entanglement."

Writer's Prompt: "Explain quantum entanglement using analogies suitable for an undergraduate sociology student who understands systems theory but lacks a background in physics."

The writer utilizes their rhetorical knowledge of audience design to constrain the output, ensuring the level of complexity matches the intended use case.

4. Iterative Refinement (Revision)

The Writing Skill: Writing is rewriting. Driscoll et al. emphasize that genre knowledge (understanding the expectations of a specific form of writing) is developed through iterative cycles of drafting and feedback [8]. Experienced writers view a first draft as a "discovery draft," not a final product.

The Prompting Transfer: Novices often abandon a prompt if the first output is unsatisfactory. Writers, accustomed to the revision process, engage in iterative refinement. They treat the AI's output as a first draft that requires critique.

The Process:

  1. Generate text.
  2. Analyze against genre conventions (e.g., "This abstract lacks a clear statement of the problem").
  3. Reprompt: "The previous output captured the methodology but failed to articulate the significance. Rewrite the conclusion to emphasize the policy implications."

This dialogue mirrors the internal monologue of a writer refining their own work.
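
The critique-and-reprompt cycle can be sketched as a loop. This is a schematic only: `generate` is a placeholder for whatever model call you use, and `critique` stands in for your own genre judgment, returning a named problem or `None` when the draft is acceptable.

```python
from typing import Callable, Optional

def iterative_refinement(
    prompt: str,
    generate: Callable[[str], str],        # placeholder for your LLM call
    critique: Callable[[str], Optional[str]],  # your genre judgment
    max_rounds: int = 3,
) -> str:
    """Treat each output as a discovery draft: critique, reprompt, repeat."""
    draft = generate(prompt)
    for _ in range(max_rounds):
        problem = critique(draft)
        if problem is None:
            break
        draft = generate(
            f"{prompt}\nThe previous draft had this problem: {problem} "
            "Revise to address it."
        )
    return draft
```

The design choice worth noting is that each reprompt names a specific problem ("lacks a clear statement of the problem") rather than issuing a vague "try again" — exactly the difference between a novice's retry and a writer's revision.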

5. Precision in Language (Lexical Diversity)

The Writing Skill: As noted in the opening, precise vocabulary allows for precise thinking; academic writers are trained to distinguish between similar concepts (e.g., "correlation" vs. "causation," "method" vs. "methodology").

The Prompting Transfer: This is the direct application of the findings by Santana Junior et al. [1]. Vague language leads to generic outputs (the "regression to the mean" problem in AI). High lexical diversity forces the model away from generic responses.

Novice Prompt: "Make this writing better."

Writer's Prompt: "Edit the following passage to improve syntactic variety and eliminate nominalizations. Ensure the tone remains objective but persuasive."

The use of specific linguistic terms ("syntactic variety," "nominalizations") acts as a control mechanism, steering the AI with high-resolution instructions.

Figure: Side-by-side comparison of novice versus writer prompts in three scenarios — summarization, task requests, and editing. Writer prompts include the context, decomposition, and precision that novice prompts lack.

Writing as a Domain-General Cognitive Skill

Why does this transfer happen so seamlessly? The answer lies in the nature of writing itself: it is not merely a communicative tool; it is a cognitive technology.

A meta-analysis on cross-linguistic transfer found that skills developed in one language (L1) strongly predict proficiency in a second language (L2) [9]. We can view "prompting" as a second language (L2) where the L1 is academic English; the underlying cognitive mechanisms (planning, monitoring, and evaluating) are largely the same.

Perkins and Salomon describe this as "High Road Transfer" [10]. This occurs when a skill is abstracted from its original context and applied to a new one through mindful abstraction. When a researcher realizes, "I am not coding; I am constructing a rhetorical argument for a machine audience," they are engaging in high road transfer.

Beaufort categorizes writing expertise into five knowledge domains [11]:

  1. Discourse Community Knowledge
  2. Subject Matter Knowledge
  3. Genre Knowledge
  4. Rhetorical Knowledge
  5. Writing Process Knowledge

Every single one of these domains is relevant to AI interaction: you need Subject Matter Knowledge to fact-check the AI; you need Genre Knowledge to tell the AI what structure to follow; you need Rhetorical Knowledge to frame the prompt; you need Process Knowledge to iterate.

The data suggest that writing is a domain-general skill that underpins success in a wide range of information-processing tasks [9]. By viewing AI interaction through this lens, we demystify the technology and re-empower the human agent.


How to Leverage Writing Skills

Recognizing you have the skills is the first step; applying them requires a deliberate shift in strategy for AI contexts:

1. Apply Rhetorical Analysis to Prompts

Before hitting "enter," subject your prompt to the same scrutiny you would apply to a thesis statement; ask: Is the claim clear? Is the scope defined? Is the tone appropriate? Treat the prompt as a micro-essay where the "thesis" is the specific instruction you want the model to execute.

2. Utilize Constraint Setting

Writers thrive on constraints (word counts, style guides, citation formats); LLMs also perform better with constraints. Instead of open-ended queries, use your knowledge of academic constraints to guide the AI.

Example: "Summarize the findings in 300 words, using APA 7 citation style, avoiding bullet points, and prioritizing quantitative data over qualitative descriptions."
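
Constraint setting can be made systematic with a small template helper. The sketch below is illustrative (the `constrained_prompt` function is hypothetical, not a library API): it appends each constraint as an explicit, labeled line, which keeps nothing implicit.

```python
def constrained_prompt(task: str, **constraints: str) -> str:
    """Append explicit constraints to a task, one per line.

    Writers already think in constraints (word counts, style guides,
    citation formats); stating them explicitly narrows the model's
    output space.
    """
    lines = [task, "Constraints:"]
    lines += [f"- {name.replace('_', ' ')}: {value}"
              for name, value in constraints.items()]
    return "\n".join(lines)

prompt = constrained_prompt(
    "Summarize the findings.",
    length="300 words",
    citation_style="APA 7",
    formatting="no bullet points",
    priority="quantitative data over qualitative descriptions",
)
```

The result is the example above, restated as a reusable pattern rather than a one-off sentence.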

3. Engage in Metacognitive Review

When an LLM fails, use the metacognitive skills developed through editing; do not simply try again at random. Analyze the failure: Was the context (grounding) missing? Was the vocabulary too ambiguous?

4. Collaborative Dialogue

View the AI not as a search engine, but as a co-author; use the "Track Changes" mentality. If the AI produces a hallucination, correct it: "You cited Smith (2019), but that study focuses on biology, not physics. Please search for Smith (2021) regarding quantum mechanics." This leverages the iterative refinement skill (Skill #4).


The Writing Practice Connection

If prompt engineering is fundamentally a writing task, then improving your writing skills also improves your prompting skills. The cognitive muscles required for Task Decomposition and Precision in Language atrophy without use.

In [Post 02: Writing Practice Builds AI Prompting Skills], we explored the feedback loop between daily generation and cognitive sharpness. Regular, timed writing practice forces you to access vocabulary quickly, structure thoughts under pressure, and articulate complex ideas clearly: precisely the competencies identified as predictive of AI success [1].

Maintaining a daily writing practice is not just about preserving human expression; it is the most effective training regimen for high-level AI collaboration.


Conclusion

The narrative that "prompt engineering" requires a technical background is a misconception that disempowers the very people best equipped to use these tools. The research by Santana Junior, Anam, and Prat indicates that linguistic aptitude, lexical diversity, and structural clarity are the true drivers of performance.

You have spent years, perhaps decades, honing the craft of academic writing; you have mastered the art of establishing common ground, designing for an audience, and refining language with surgical precision. These are not legacy skills to be discarded; they are the keys to the engine.

Recognize the transfer. Apply your rhetorical knowledge. The machine is waiting for a writer.


Notes & References

[1] Santana Junior, E. G., Benjamin, G., Araujo, M., Santos, H., Freitas, D., Almeida, E., Neto, P. A. da M. S., Li, J., Chun, J., & Ahmed, I. (2025). Which Prompting Technique Should I Use? An Empirical Investigation of Prompting Techniques for Software Engineering Tasks. arXiv preprint arXiv:2506.05614.

[2] Anam, R. K. (2025). Prompt Engineering and the Effectiveness of Large Language Models in Enhancing Human Productivity. arXiv preprint arXiv:2507.18638.

[3] Prat, C. S., Madhyastha, T. M., Mottarella, M. J., & Kuo, C. H. (2020). Relating Natural Language Aptitude to Individual Differences in Learning Programming Languages. Scientific Reports, 10, 3817. https://doi.org/10.1038/s41598-020-60661-8

[4] Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. In L. B. Resnick, J. M. Levine, & S. D. Teasley (Eds.), Perspectives on Socially Shared Cognition (pp. 127–149). American Psychological Association.

[5] Shaikh, O., Gligorić, K., Khetan, A., Gerstgrasser, M., Yang, D., & Jurafsky, D. (2023). Grounding Gaps in Language Model Generations. arXiv preprint arXiv:2311.09144.

[6] Coffey, E. B. J., & Herholz, S. C. (2013). Task Decomposition: A Framework for Peri-operative Cognitive Skills Analysis. Frontiers in Human Neuroscience, 7, 640. https://doi.org/10.3389/fnhum.2013.00640

[7] Ede, L., & Lunsford, A. (1984). Audience Addressed/Audience Invoked: The Role of Audience in Composition Theory and Pedagogy. College English, 46(2), 155-171.

[8] Driscoll, D. L., Paszek, J., Gorzelsky, G., Hayes, C., & Jones, E. (2020). Genre Knowledge and Writing Development: Results from the Writing Transfer Project. Written Communication, 37(1), 69-103. https://doi.org/10.1177/0741088319882313

[9] Kim, Y. S. G., Al Otaiba, S., Wanzek, J., & Gatlin, B. (2015). Toward an Understanding of Dimensions, Predictors, and the Gender Gap in Written Composition. Journal of Educational Psychology, 107(1), 79-95. https://doi.org/10.1037/a0037210

[10] Perkins, D. N., & Salomon, G. (1992). Transfer of Learning. International Encyclopedia of Education, 2, 6452-6457.

[11] Beaufort, A. (2007). College Writing and Beyond: A New Framework for University Writing Instruction. Utah State University Press.