“Let me hear the real you” M.E.T. webinar by Mark Heffernan and David Byrne

This double-act webinar was delivered by Mark Heffernan and David Byrne. You may have come across this duo at IATEFL if you have attended. They also have a column in Modern English Teacher, which hosted this webinar. I hadn’t encountered them before, but it was a really good webinar – if I were to attend an IATEFL in the future, I would totally look out for a session of theirs in the programme!

If you are one of them, or you attended the webinar, and you see any mistakes in my notes-based summary, please comment and let me know!

The outline was as follows:

David particularly highlighted the idea of “Help your learners to find/make decisions”, saying that the role of teachers has changed over the years. We used to be arbiters of right and wrong, but now we are facilitators of learning and discussion: our role isn’t to say what is right or wrong but to show possibilities and allow learners to make choices.

Writing

  • Has AI changed how we write?
  • Has AI changed how students write?

Yes.

Everyone (well, many people) uses it, to varying degrees of success, appropriateness and responsibility. If you don’t use it responsibly and effectively, it does wash out your personality/voice. In order to maintain your voice, you need to know what your voice is.

We have to train our learners on responsible, appropriate, effective use.

Questions we need to ask are: Who is the audience? What is the need (Why are you writing this?)? What role do you play in it? What role should/could AI play in this process?

E.g. a letter of complaint – if you tend to be all hedging/not cantankerous enough, you could use AI to write it and prompt it to add in some extra cantankerousness. If you are cantankerous enough, you probably want your voice in there and will write it yourself. You have choices.

If we’re doing a test, AI is not appropriate unless it is built into the test. However, you could use it for brainstorming, ideation, feedback, suggested language chunks. It can be a learning tool. Most universities acknowledge and accept students using it in that way. What is generally prohibited is using it to produce text and submitting that. This is a change from two years ago and shows how things have evolved.

How do writers come across? How do you want to come across? It’s all about tone and voice.

The question becomes not “did you get the grammar/vocabulary correct?” but “is the text produced undeniably written by AI?” If it is, it is not successful. If you have just pulled little language chunks from AI, then it could be.

You can teach a whole lesson on voice/tone, but David/Mark suggest that it is better to embed it throughout the course. Syllabuses tend to be spiral-shaped. Give students chances at multiple stages during the course to reflect and make choices. If we give them chances to do that, they have choices. It’s not a one-and-done lesson; appropriateness and AI can’t be a one-off. It needs to be woven through. It needs to be scaffolded. The rise of AI has made it even more important than before to do this (teach about voice), but it was always important.

Speaking

When you speak, you portray a version of yourself, you make choices.

English learning and using depends on context: I need to be able to… so that I can… .

There is more than one correct way to structure an essay but we teach maybe the most foolproof way, the easiest way.

Hedging – it’s partly using modals, so it’s grammar but it’s also functional (you signal how sure or unsure, how strongly or otherwise you feel towards what you are saying).

David and Mark shared some possible activities for working with voice/persona by weaving it into existing activities:

If you don’t show interest in what someone is saying – you just listen and don’t say anything/interject etc. – the speaker may feel a lack of interest and lose confidence. If you see this happen in a discussion between students of yours, facilitate discussion of these kinds of moments – e.g. “This happened (X didn’t say or do anything while you were talking) – why is that, X? How did you feel about it, Y?”

My take-away:

We have seminar discussion exam preparation and then the exams coming up, and I want to try taking this approach to evaluating the example discussion recording (e.g. how did X respond, or not? How do you think Y felt?), and to feedback on students’ discussions, and link it back to the language we teach them in order to enable participation. I want to get them thinking about what kind of persona they want to portray in a seminar discussion exam (e.g. engaged, knowledgeable etc.) and how to achieve that, as well as get them thinking about how to participate effectively in a real seminar. I might get them to repeat a practice discussion while playing different personas, to give them a chance to experiment.

In terms of writing (we are about to embark on extended essay writing on Monday!), I want to include more discussion of voice and, again, showing them that they have choices over how to express themselves in their essays and how those choices affect the outcome.

I feel I’ve come away with a load of ideas for how to slightly tweak what I already do, and hopefully thereby increase the value of it to my students: I call that a win! 🙂 Thank you Mark and David!

Generative AI and Voice

I’m a writer. I am writing right now! I have written journal articles, book chapters, (unpublished) fiction, (unpublished) poetry, materials, reflections (blog posts), combination summary/reflections of talks/workshops (blog posts) I attend, emails, feedback on students’ work, the occasional Facebook update, Whatsapp/messenger/Google chat messages, and so the list goes on. It is a form of expression, as is speaking, and drawing. These, including all the different kinds of writing I have done and do, are all forms of expression that AI is now capable of approximating. However, until fairly recently (when suddenly it was showing up everywhere!), I had not explicitly considered the relationship between AI generated production and a person’s ‘voice’. Examples of ‘voice’ vs AI can be seen in the two screenshots below:

Via an email from Pavilion ELT – abstract of a forthcoming webinar.
Via Sandy Millin’s summary of Ciaran Lynch’s MaW SIG PCE talk at IATEFL 2025.

Both of these screenshots set voice against AI-generated content. The first one (which looks like an interesting webinar – Wednesday 14th May between 1600 and 1700 London time in case you might like to attend!) seems to be about helping learners develop their own voice in another language and suggests that this aspect of language learning is of greater importance in a world full of AI output. The second is in the context of materials writing, and highlights an issue that arises in the use of AI in creating materials – “lacks teachers’ unique voice”. The speaker goes on to offer a framework for using AI to help with materials writing while avoiding the problems listed in the above screenshot. (See Sandy Millin’s write up for further information! The post actually collects all of her write-ups of the MaW SIG 2025 PCE talks in a single post – good value! 🙂 )

I teach academic skills including writing to primarily foundation and occasionally pre-masters students who want to go on to study at Sheffield University. In the last year, we’ve been overhauling our syllabus, partially in response to one of our assessments being retired and partially in response to the proliferation of generative AI. Our goal is to move from complete prohibition of AI to responsible use of it. And I suppose one thing we hope to achieve from that is to reach a point where students may choose to use AI in certain elements of their assessment but actively avoid it in others. This, I think, has some overlap with Ciaran Lynch’s framework for writing materials:

Via Sandy Millin’s summary of Ciaran Lynch’s MaW SIG PCE talk at IATEFL 2025.

Maybe we need a similar framework/workflow for our students that succinctly captures when and how AI use might be helpful and when it is to be avoided. And I think voice is part of the key to that! But what exactly is voice? In terms of writing, according to Mhilli (2023),

“authorial voice is the identity of the author reflected in written discourse, where written discourse should be understood as an ever evolving and dynamic source of language features available to the writer to choose from to express their voice. To clarify further, authorial identity may encapsulate such extra-discoursal features as race, national origin, age, or gender. Authorial voice, in contrast, comprises, only those aspects of identity that can be traced in a piece of writing”.

[I recommend having a read of this article, if you are interested in the concept of voice! Especially regarding the tension between writers’ authentic L1 voice and the constraints of academic writing in terms of genre and linguistic features (which vary across fields).]

In terms of essay writing, and our students (who are only doing secondary research), if they are copying large chunks of text from generative AI, then they are not manipulating available language features to express meaning/their voice; they are merely doing the written equivalent of lip-synching. I think this is still the case if they use it for paraphrasing, because paraphrasing is influenced by your stance towards what you are paraphrasing and how you are using the information. I suppose students could in theory prompt AI to take a particular stance in writing a paraphrase or explain how they plan to use the information, but they would also need to be able to evaluate the output and assess whether it meets that brief sufficiently. In which case, would it save them much time or effort? Would the outcome be truer to the student’s own voice? I wonder. Of course, the assessment’s purpose and criteria would influence whether or not that use was acceptable.

On the other hand, if students use AI to help them come up with keywords for searches and then look at titles and abstracts, and choose which sources to read in more depth, select ideas, engage with those ideas, evaluate them, synthesise them and organise it all into an essay, using language features available to them, then that incorporates use of AI but definitely doesn’t obscure their voice and the ownership of the essay is still very much with the student rather than with AI. They could even get AI to list relevant ideas for the essay title (with full awareness that any individual idea might be partly or fully a hallucination), thereby giving them a starting point of possible things to consider, and compare those with what they find in the literature. This (and the greyer area around paraphrasing explored above) suggests that a key element that underpins voice is that of criticality. Perhaps we could also describe it as active (and informed) use rather than passive use.

Another issue regarding voice in a world of AI generated output, which I have also come across recently lies in the use of AI detection tools:

From “AI, Academic Integrity and Authentic Assessment: An Ethical Path Forward for Education”

If ESL and autistic voices are more likely to be flagged as AI generated content, then our AI detection tools do not allow space for these authentic voices. These findings point to a need to be very careful in the assumptions we make. I’m sure we’ve all looked at a piece of work and gone “this was definitely written by AI, it’s so obvious!” at some point. Hopefully our conclusions are based on our knowledge of our students, and their linguistic abilities, previous work produced under various conditions and so on. However, for submissions that are anonymised this is no longer possible. I think, rather than relying on detection tools, we need to work towards making our assessments and the criteria by which we assess robust enough to negate the need for such tools. Either way, the findings would also suggest that the webinar described in screenshot no. 1 may be very pertinent for teachers in our field. (I wonder if the speakers have come across instances of that line of research too?! I increasingly get the impression that schedule-willing, I may be attending that webinar!)

Finally, I think this excerpt from a Guardian article about AI and human intelligence provides perhaps the most important reason for helping students to develop their voice and not sidestep this through use of AI:

“‘Don’t ask what AI can do for us, ask what it is doing to us’: are ChatGPT and co harming human intelligence?” – Helen Thomson writing for The Guardian, Saturday 19th April 2025

We want those Eureka moments! We want the richness of what diversity of thought brings to the table. (It is baffling to see Diversity, Equality and Inclusion initiatives being dismantled willy nilly in the U.S. – everybody loses out from that. But then, so much of what goes on these days is baffling.) Maybe something small we can do is help our students realise that their voice, as every voice, is important and that diluting it and losing it through ineffective use of AI makes the world a poorer place. I haven’t even touched on AI and image production or AI and spoken production but this blog post is long enough already (maybe I should have got AI to summarise it for me! 😉 ) so I will leave that for another post!

Using Adobe Firefly for Image Generation

Have you used Adobe Firefly before? Me neither. But we have free access to it via the University, and the TEL team has used it, so they ran a session for us on it. It can be used to generate images to put in lesson handouts and slides, and also on online platforms like Wooclap and Quizlet.

You write a prompt in a box and it generates images.

This was a scenario given to us:

Prompt 1: an image of 4 students in a discussion. This was the result:

Issues: There are three students and a teacher. They look quite young, whereas we teach university-age students. Three of them are blonde, so it isn’t a good representation of our students. This is an example of the bias that exists in AI when an automatic result is generated with no detail specified in the prompt.

Prompt 2: an image of 4 university students from diverse background in a discussion. This was the result:

Problems: They are not in a classroom.

Adding “seated” (to be more typical of a classroom):

Not a perfect picture (looks a bit like an airport…) but better than the first picture! In terms of the purpose of generating the image, this would probably work. Prompt writing/editing for Adobe Firefly tends to take multiple iterations before you get something you might be happy to use.

We were given the following tips:

  • add more detail to get better results;
  • be aware of bias as you engineer prompts and evaluate the outcome;
  • be picky – it may take several iterations to get what you want. Sometimes a fairly simple prompt immediately yields a satisfactory outcome, but usually it takes a bit more effort, particularly to produce an outcome that is suitably representative of an international student population.

Adobe Firefly draws on a large bank of stock images, which means the quality is better than that of similar tools.

Once you have generated an image, you can also edit it to a certain extent, which is good, as the first images you get can have melded arms, funny-shaped heads and so forth! It’s not very good with limbs. A central human figure may be fine, but for anyone in the background, or if you require groups of people, problems abound! Despite these issues, Firefly handles this better than Gemini.

So, all very cool, but actually stock image sites like Pixabay (and Creative Commons-licensed sources like Flickr – in particular ELTpics – if the context is suitable), i.e. human-generated images, are much less resource-intensive to use. So, don’t get too carried away by the “it’s so cool” thing. I tend to use Google image search with the appropriate licence filter, personally.

My general impression: I can’t currently see an Adobe Firefly-shaped hole in my life that needs filling. I wonder if in 5 years’ time I will look back on this post through an “oh, you innocent child” type lens or not?! Time will tell! It was a good session though: after being shown the prompts and pitfalls, we went into breakout groups and had to come up with prompts for another scenario. Unfortunately, in my group, none of us had our access sorted out yet, so we couldn’t test the prompts we wrote.