Teacher Identity

This blog post was inspired by Sandy Millin’s write-up of an IATEFL 2025 panel on the subject of Teacher Identity

I think opportunities to discuss and reflect on teacher identity, such as the IATEFL 2025 panel written up by Sandy, are invaluable, as identity is constantly evolving and growing. In the first talk Sandy summarised, the speaker, Robyn Stewart, adapted Barkhuizen and Mendieta’s (2020) facets of language teacher professional identity to highlight the influence of the world – the external influences – on identity. It also shows the interplay between personal and professional identity and the elements that can be considered part of our professional identity:

Via Sandy Millin’s write-up of Robyn Stewart’s talk in the IATEFL 2025 teacher identity panel.

There are so many things that influence who we are in the classroom! One of the lessons Robyn Stewart drew from her dissertation research was “Don’t underestimate the role of context”. I’m inclined to agree:

On a personal level, I’m not that interested in generative AI; I generally distrust it, disapprove of the resource consumption it represents, and feel the money, time, expertise and so on being ploughed into it everywhere could be better spent elsewhere (e.g. on the use of AI in medical contexts) than on generating infinite quantities of text.

As a language learner, if I had the time, energy and spare brain, or was as driven as summer 2014 me, such that I could overcome the lack of all the aforementioned (and could override my concerns about unnecessary resource consumption!), I would perhaps explore the possibilities of communicating with it in Italian/French/German and using it to help me improve my production. I could get *well* into a project like that. (And if I were teaching general English, I could use the knowledge and skills I might develop in the process to help my students benefit from using the English version.)

However, my professional identity has the greatest influence on my interaction with AI: I have to embrace AI’s existence and figure out ways to work with students in a world which it is now very much a part of. In terms of context, I work specifically in higher education, preparing students to study at university by teaching them an Academic English skills course which they do alongside subject modules. Assessments are high stakes in terms of scores, but they also need to ensure that students develop the skills necessary to succeed, including that of correctly treading the line between fair use of tools and academic misconduct regulations – a line that has shifted as AI has evolved. We used to mutter about Grammarly and translation tools but largely ignored them, beyond prohibiting students from using them and putting a handful of students forward for misconduct each assessment cycle; then generative AI came along and blew all that out of the water and onto a whole other level. We have been grappling with it ever since. However, it will not be until September of this year that I engage with it fully as a teacher in the classroom, beyond warning students off it. Until then, my engagement remains from the perspective of course coordination, course/materials development (as in, integrating the teaching of AI-related skills into our materials, currently in progress, rather than developing materials using AI) and misconduct evaluation.

The young Vietnamese participants in the study carried out by Hang Vu, the third speaker of the IATEFL 2025 panel on teacher identity, demonstrated a high level of insight into, and awareness of, the issues they face in developing their professional identity as teachers in a world dominated by AI, and what kind of training they need in order to do that successfully. Sandy described Hang Vu’s idea of “emerging identities”, as summarised on the slide below:

Via Sandy Millin’s write-up of Hang Vu’s talk in the IATEFL 2025 panel on Teacher Identity.

There’s a lot to think about there! I suppose I have mainly been teacher/coordinator as AI inspector in professional terms, but also teacher as learner: despite my personal misgivings, I have made an effort to attend (whether live or via recording) all the training available to us regarding AI. I have been teacher as AI user when I have used it to generate discussion questions (and then teacher as critical thinker when I have deleted half of them as unsuitable and edited/adapted others!). Teacher as AI instructor/facilitator, of course, as mentioned above, is still in the “coming soon to a classroom near you” stage. I suppose I will also have to be “teacher as AI supporter” within the “teacher as instructor/facilitator” side of things – regarding what we decide are acceptable uses of AI…but I predict it will be more along the lines of channelling inevitable use rather than encouraging use vs non-use! And I think alongside that, I will definitely be encouraging critical discussion in my classroom regarding the use of AI and surrounding issues. It will be interesting to see what the students think. It seems to me that just as much as the youngsters in the Vietnamese study, we old fossils who have been teaching a good while also need to regularly engage with our professional identities and figure out how we are going to move with the times professionally, regardless of (although obviously also interlinked with and influenced by!) our personal feelings towards the various changes (which, as Catherine Walter’s plenary discussed, have been many and varied over the last 50 years!).

Sandy’s post finished with some of the questions posed by the audience, one of which was “Should we proactively work with learners about how to do AI? Maybe we should ask learners for the whole AI conversation, not just the final result.” It’s an interesting one. I definitely want critical discussion and to find out the students’ take on it, and, as with other things, potentially their feedback/ideas/thoughts can feed into future iterations of the course, but ultimately, in terms of assessment, what is and isn’t acceptable has to align with university and college policy on AI use. One thing I do hope is that I will be able to persuade students of the importance of developing their own voice, as I think if I can do that, then reasonable/acceptable use (with the appropriate guidance on how) will be a natural progression. For sure, all this thinking I am doing at the moment (I’m on annual leave – I have time to think!!) will be a useful form of preparation for the task ahead!

This blog post is plenty long enough already, yet I haven’t even scratched the surface of identity, personal and professional, and the interplays between identity and classroom. But, another time… 🙂

Generative AI and Voice

I’m a writer. I am writing right now! I have written journal articles, book chapters, (unpublished) fiction, (unpublished) poetry, materials, reflections (blog posts), combination summary/reflections of talks/workshops I attend (blog posts), emails, feedback on students’ work, the occasional Facebook update, Whatsapp/Messenger/Google Chat messages, and so the list goes on. Writing is a form of expression, as are speaking and drawing. All of these, including all the different kinds of writing I have done and do, are forms of expression that AI is now capable of approximating. However, until fairly recently (when suddenly it was showing up everywhere!), I had not explicitly considered the relationship between AI-generated production and a person’s ‘voice’. Examples of ‘voice’ vs AI can be seen in the two screenshots below:

Via an email from Pavilion ELT – abstract of a forthcoming webinar.
Via Sandy Millin’s summary of Ciaran Lynch’s MaW SIG PCE talk at IATEFL 2025.

Both of these screenshots set voice against AI-generated content. The first one (which looks like an interesting webinar – Wednesday 14th May between 1600 and 1700 London time, in case you might like to attend!) seems to be about helping learners develop their own voice in another language and suggests that this aspect of language learning is of greater importance in a world full of AI output. The second is in the context of materials writing, and highlights an issue that arises in the use of AI in creating materials – it “lacks teachers’ unique voice”. The speaker goes on to offer a framework for using AI to help with materials writing while avoiding the problems listed in the above screenshot. (See Sandy Millin’s write-up for further information! The post actually collects all of her write-ups of the MaW SIG 2025 PCE talks in a single post – good value! 🙂 )

I teach academic skills, including writing, to primarily foundation and occasionally pre-masters students who want to go on and study at Sheffield University. In the last year, we’ve been overhauling our syllabus, partially in response to one of our assessments being retired and partially in response to the proliferation of generative AI. Our goal is to move from complete prohibition of AI to responsible use of it. And I suppose one thing we hope to achieve from that is to reach a point where students may or may not choose to use AI in certain elements of their assessment but actively avoid it in others. This, I think, has some overlap with Ciaran Lynch’s framework for writing materials:

Via Sandy Millin’s summary of Ciaran Lynch’s MaW SIG PCE talk at IATEFL 2025.

Maybe we need a similar framework/workflow for our students that succinctly captures when and how AI use might be helpful and when it is to be avoided. And I think voice is part of the key to that! But what exactly is voice? In terms of writing, according to Mhilli (2023),

“authorial voice is the identity of the author reflected in written discourse, where written discourse should be understood as an ever evolving and dynamic source of language features available to the writer to choose from to express their voice. To clarify further, authorial identity may encapsulate such extra-discoursal features as race, national origin, age, or gender. Authorial voice, in contrast, comprises, only those aspects of identity that can be traced in a piece of writing”.

[I recommend having a read of this article, if you are interested in the concept of voice! Especially regarding the tension between writers’ authentic L1 voice and the constraints of academic writing in terms of genre and linguistic features (which vary across fields).]

In terms of essay writing, and our students (who are only doing secondary research), if they are copying large chunks of text from generative AI, then they are not manipulating available language features to express meaning/their voice; they are merely doing the written equivalent of lip-synching. I think this is still the case if they use it for paraphrasing, because paraphrasing is influenced by your stance towards what you are paraphrasing and how you are using the information. I suppose students could in theory prompt AI to take a particular stance in writing a paraphrase or explain how they plan to use the information, but they would also need to be able to evaluate the output and assess whether it meets that brief sufficiently. In which case, would it save them much time or effort? Would the outcome be truer to the student’s own voice? I wonder. Of course, the assessment’s purpose and criteria would influence whether or not that use was acceptable.

On the other hand, if students use AI to help them come up with keywords for searches and then look at titles and abstracts, and choose which sources to read in more depth, select ideas, engage with those ideas, evaluate them, synthesise them and organise it all into an essay, using language features available to them, then that incorporates use of AI but definitely doesn’t obscure their voice and the ownership of the essay is still very much with the student rather than with AI. They could even get AI to list relevant ideas for the essay title (with full awareness that any individual idea might be partly or fully a hallucination), thereby giving them a starting point of possible things to consider, and compare those with what they find in the literature. This (and the greyer area around paraphrasing explored above) suggests that a key element that underpins voice is that of criticality. Perhaps we could also describe it as active (and informed) use rather than passive use.

Another issue regarding voice in a world of AI generated output, which I have also come across recently lies in the use of AI detection tools:

From “AI, Academic Integrity and Authentic Assessment: An Ethical Path Forward for Education”

If ESL and autistic voices are more likely to be flagged as AI-generated content, then our AI detection tools do not allow space for these authentic voices. These findings point to a need to be very careful in the assumptions we make. I’m sure we’ve all looked at a piece of work and gone “this was definitely written by AI, it’s so obvious!” at some point. Hopefully our conclusions are based on our knowledge of our students – their linguistic abilities, previous work produced under various conditions and so on. However, for submissions that are anonymised this is no longer possible. I think, rather than relying on detection tools, we need to work towards making our assessments and the criteria by which we assess robust enough to negate the need for such tools. Either way, the findings would also suggest that the webinar described in screenshot no. 1 may be very pertinent for teachers in our field. (I wonder if the speakers have come across instances of that line of research too?! I increasingly get the impression that, schedule-willing, I may be attending that webinar!)

Finally, this excerpt from a Guardian article about AI and human intelligence provides, I think, perhaps the most important reason for helping students to develop their voice rather than sidestep doing so through use of AI:

“‘Don’t ask what AI can do for us, ask what it is doing to us’: are ChatGPT and co harming human intelligence?” – Helen Thomson writing for The Guardian, Saturday 19th April 2025

We want those Eureka moments! We want the richness of what diversity of thought brings to the table. (It is baffling to see Diversity, Equity and Inclusion initiatives being dismantled willy-nilly in the U.S. – everybody loses out from that. But then, so much of what goes on these days is baffling.) Maybe something small we can do is help our students realise that their voice, like every voice, is important and that diluting it and losing it through ineffective use of AI makes the world a poorer place. I haven’t even touched on AI and image production or AI and spoken production, but this blog post is long enough already (maybe I should have got AI to summarise it for me! 😉 ) so I will leave that for another post!