Generative AI and Voice

I’m a writer. I am writing right now! I have written journal articles, book chapters, (unpublished) fiction, (unpublished) poetry, materials, reflections (blog posts), combination summaries/reflections of talks/workshops I attend (blog posts), emails, feedback on students’ work, the occasional Facebook update, Whatsapp/messenger/Google chat messages, and so the list goes on. Writing is a form of expression, as are speaking and drawing. These, including all the different kinds of writing I have done and do, are all forms of expression that AI is now capable of approximating. However, until fairly recently (when suddenly it was showing up everywhere!), I had not explicitly considered the relationship between AI-generated production and a person’s ‘voice’. Examples of ‘voice’ vs AI can be seen in the two screenshots below:

Via an email from Pavilion ELT – abstract of a forthcoming webinar.
Via Sandy Millin’s summary of Ciaran Lynch’s MaW SIG PCE talk at IATEFL 2025.

Both of these screenshots set voice against AI-generated content. The first one (which looks like an interesting webinar – Wednesday 14th May between 1600 and 1700 London time in case you might like to attend!) seems to be about helping learners develop their own voice in another language and suggests that this aspect of language learning is of greater importance in a world full of AI output. The second is in the context of materials writing, and highlights an issue that arises in the use of AI in creating materials – “lacks teachers’ unique voice”. The speaker goes on to offer a framework for using AI to help with materials writing while avoiding the problems listed in the above screenshot. (See Sandy Millin’s write-up for further information! The post actually collects all of her write-ups of the MaW SIG 2025 PCE talks in a single post – good value! 🙂 )

I teach academic skills, including writing, primarily to foundation students and occasionally to pre-masters students who want to go on and study at Sheffield University. In the last year, we’ve been overhauling our syllabus, partially in response to one of our assessments being retired and partially in response to the proliferation of generative AI. Our goal is to move from complete prohibition of AI to responsible use of it. One thing we hope to achieve through that is to reach a point where students may choose to use AI in certain elements of their assessment but actively avoid it in others. This, I think, has some overlap with Ciaran Lynch’s framework for writing materials:

Via Sandy Millin’s summary of Ciaran Lynch’s MaW SIG PCE talk at IATEFL 2025.

Maybe we need a similar framework/workflow for our students that succinctly captures when and how AI use might be helpful and when it is to be avoided. And I think voice is part of the key to that! But what exactly is voice? In terms of writing, according to Mhilli (2023),

“authorial voice is the identity of the author reflected in written discourse, where written discourse should be understood as an ever evolving and dynamic source of language features available to the writer to choose from to express their voice. To clarify further, authorial identity may encapsulate such extra-discoursal features as race, national origin, age, or gender. Authorial voice, in contrast, comprises, only those aspects of identity that can be traced in a piece of writing”.

[I recommend having a read of this article, if you are interested in the concept of voice! Especially regarding the tension between writers’ authentic L1 voice and the constraints of academic writing in terms of genre and linguistic features (which vary across fields).]

In terms of essay writing, and our students (who are only doing secondary research), if they are copying large chunks of text from generative AI, then they are not manipulating available language features to express meaning/their voice; they are merely doing the written equivalent of lip-synching. I think this is still the case if they use it for paraphrasing, because paraphrasing is influenced by your stance towards what you are paraphrasing and how you are using the information. I suppose students could in theory prompt AI to take a particular stance in writing a paraphrase, or explain how they plan to use the information, but they would also need to be able to evaluate the output and assess whether it meets that brief sufficiently. In which case, would it save them much time or effort? Would the outcome be truer to the student’s own voice? I wonder. Of course, the assessment’s purpose and criteria would influence whether or not that use was acceptable.

On the other hand, if students use AI to help them come up with keywords for searches, and then look at titles and abstracts, choose which sources to read in more depth, select ideas, engage with those ideas, evaluate them, synthesise them and organise it all into an essay using language features available to them, then that incorporates use of AI but definitely doesn’t obscure their voice, and the ownership of the essay is still very much with the student rather than with AI. They could even get AI to list relevant ideas for the essay title (with full awareness that any individual idea might be partly or fully a hallucination), thereby giving them a starting point of possible things to consider, and compare those with what they find in the literature. This (and the greyer area around paraphrasing explored above) suggests that a key element underpinning voice is criticality. Perhaps we could also describe it as active (and informed) use rather than passive use.

Another issue regarding voice in a world of AI-generated output, which I have also come across recently, lies in the use of AI detection tools:

From “AI, Academic Integrity and Authentic Assessment: An Ethical Path Forward for Education”

If ESL and autistic voices are more likely to be flagged as AI-generated content, then our AI detection tools do not allow space for these authentic voices. These findings point to a need to be very careful in the assumptions we make. I’m sure we’ve all looked at a piece of work and gone “this was definitely written by AI, it’s so obvious!” at some point. Hopefully our conclusions are based on our knowledge of our students: their linguistic abilities, previous work produced under various conditions and so on. However, for submissions that are anonymised, this is no longer possible. I think, rather than relying on detection tools, we need to work towards making our assessments, and the criteria by which we assess, robust enough to negate the need for such tools. Either way, the findings would also suggest that the webinar described in screenshot no. 1 may be very pertinent for teachers in our field. (I wonder if the speakers have come across instances of that line of research too?! I increasingly get the impression that, schedule-willing, I may be attending that webinar!)

Finally, I think this excerpt from a Guardian article about AI and human intelligence provides perhaps the most important reason for helping students to develop their voice rather than sidestep this through use of AI:

“‘Don’t ask what AI can do for us, ask what it is doing to us’: are ChatGPT and co harming human intelligence?” – Helen Thomson writing for The Guardian, Saturday 19th April 2025

We want those Eureka moments! We want the richness that diversity of thought brings to the table. (It is baffling to see Diversity, Equality and Inclusion initiatives being dismantled willy-nilly in the U.S. – everybody loses out from that. But then, so much of what goes on these days is baffling.) Maybe something small we can do is help our students realise that their voice, like every voice, is important, and that diluting it and losing it through ineffective use of AI makes the world a poorer place. I haven’t even touched on AI and image production or AI and spoken production, but this blog post is long enough already (maybe I should have got AI to summarise it for me! 😉 ) so I will leave that for another post!