“Let me hear the real you” M.E.T. webinar by Mark Heffernan and David Byrne

This double-act webinar was delivered by Mark Heffernan and David Byrne. You may have come across this duo at IATEFL if you attended. They also have a column in Modern English Teacher, which hosted this webinar. I hadn't encountered them before, but it was a really good webinar – if I were to attend an IATEFL in the future, I would totally look out for a session of theirs in the programme!

If you are one of them, or you attended the webinar, and you spot any mistakes in my notes-based summary, please comment and let me know!

The outline was as follows:

David particularly highlighted the idea of "Help your learners to find/make decisions", saying that the role of teachers has changed over the years. We used to be arbiters of right and wrong; now we are facilitators of learning and discussion, whose role isn't to say what is right or wrong but to show possibilities and allow learners to make choices.

Writing

  • Has AI changed how we write?
  • Has AI changed how students write?

Yes.

Everyone (well, many people) uses it, to varying degrees of success, appropriateness and responsibility. If you don’t use it responsibly and effectively, it does wash out your personality/voice. In order to maintain your voice, you need to know what your voice is.

We have to train our learners on responsible, appropriate, effective use.

Questions we need to ask are: Who is the audience? What is the need (Why are you writing this?)? What role do you play in it? What role should/could AI play in this process?

E.g. a letter of complaint – if you tend to hedge and aren't cantankerous enough, you could use AI to write it and prompt it to add in some extra cantankerousness. If you are cantankerous enough, you probably want your voice in there and will write it yourself. You have choices.

If we’re doing a test, AI is not appropriate unless it is built into the test. However, you could use it for brainstorming, ideation, feedback, suggested language chunks. It can be a learning tool. Most universities acknowledge and accept students using it in that way. What is generally prohibited is using it to produce text and submitting that. This is a change from two years ago and shows how things have evolved.

How do writers come across? How do you want to come across? It’s all about tone and voice.

The question becomes not "did you get the grammar/vocabulary correct?" but "is the text produced undeniably written by AI?" If it is, it is not successful. If you have just pulled little language chunks from AI, then it could be.

You can teach a whole lesson on voice/tone, but David and Mark suggest that it is better to embed it throughout the course. Syllabuses tend to be spiral-shaped: give students chances at multiple stages during the course to reflect and make choices. Appropriateness and AI can't be a one-and-done, one-off lesson; it needs to be woven through the course and scaffolded. The rise of AI has made teaching about voice even more important than before, but it was always important.

Speaking

When you speak, you portray a version of yourself, you make choices.

Learning and using English depends on context: I need to be able to… so that I can… .

There is more than one correct way to structure an essay but we teach maybe the most foolproof way, the easiest way.

Hedging – it's partly using modals, so it's grammar, but it's also functional (you signal how sure or unsure you are, and how strongly or otherwise you feel about what you are saying).

David and Mark shared some possible activities for working with voice/persona by weaving it into existing activities:

If you don't show interest in what someone is saying – you just listen and don't say anything, interject, etc. – the speaker may sense a lack of interest and lose confidence. If you see this happen in a discussion between your students, facilitate discussion of these kinds of moments – e.g. "This happened (X didn't say or do anything while you were talking) – why is that, X? How did you feel about it, Y?"

My take-away:

We have seminar discussion exam preparation and then the exams coming up, and I want to try taking this approach to evaluating the example discussion recording (e.g. how did X respond, or not? how do you think Y felt?), and to feedback on students' discussions, and link it back to the language we teach them in order to enable participation. I want to get them thinking about what kind of persona they want to portray in a seminar discussion exam (e.g. engaged, knowledgeable etc.) and how to achieve that, as well as about how to participate effectively in a real seminar. I might get them to repeat a practice discussion while playing different personas, to give them a chance to experiment.

In terms of writing (we are about to embark on extended essay writing on Monday!), I want to include more discussion of voice and, again, showing them that they have choices over how to express themselves in their essays and how those choices affect the outcome.

I feel I’ve come away with a load of ideas for how to slightly tweak what I already do, and hopefully thereby increase the value of it to my students: I call that a win! 🙂 Thank you Mark and David!

Teacher Identity

This blog post was inspired by Sandy Millin's write-up of an IATEFL 2025 panel on the subject of Teacher Identity.

I think opportunities to discuss and reflect on teacher identity, such as the IATEFL 2025 panel written up by Sandy, are invaluable, as identity is constantly evolving and growing. In the first talk Sandy summarised, the speaker, Robyn Stewart, adapted Barkhuizen and Mendieta's (2020) facets of language teacher professional identity to highlight the influence of the world – external influences – on identity. It also shows the interplay between personal and professional identity, and the elements that can be considered part of our professional identity:

Via Sandy Millin’s write-up of Robyn Stewart’s talk in the IATEFL 2025 teacher identity panel.

There are so many things that influence who we are in the classroom! One of the lessons Robyn Stewart drew from her dissertation research was “Don’t underestimate the role of context”. I’m inclined to agree:

On a personal level, I’m not that interested in generative AI, generally distrust it, disapprove of the resource consumption it represents and feel the amount of money, time, expertise and so on being ploughed into it everywhere could be better spent elsewhere (e.g. use of AI in medical contexts) rather than generating infinite quantities of text.

As a language learner, if I had the time, energy and spare brain, or were as driven as summer-2014 me, such that I could overcome the lack of all the aforementioned (and could override my concerns about unnecessary resource consumption!), I would perhaps explore the possibilities of communicating with it in Italian/French/German and using it to help me improve my production. I could get *well* into a project like that. (And if I were teaching general English, I could use the knowledge and skills I might develop in the process to help my students benefit from using the English version.)

However, my professional identity has the greatest influence on my interaction with AI: I have to embrace AI's existence and figure out ways to work with students in a world which it is now very much a part of. In terms of context, I work specifically in higher education, preparing students to study at university by teaching them an Academic English skills course which they do alongside subject modules. Assessments are high stakes in terms of scores, but they also need to ensure that students develop the skills necessary to succeed, including that of correctly treading the line between fair use of tools and academic misconduct regulations – a line that has been evolving with the evolution of AI. We used to mutter about Grammarly and translation tools, but ignore them other than prohibiting students from using them and putting a handful forward for misconduct each assessment cycle; then generative AI came along and blew all that out of the water and onto a whole other level. We have been grappling with it ever since. However, it will only be from September of this year that I will engage with it fully as a teacher in the classroom, beyond warning students off it – rather than only from the perspective of course coordination, course/materials development (as in, integrating the teaching of AI-related skills into our materials, currently in progress, rather than developing materials using AI) and misconduct evaluation.

The young Vietnamese participants in the study carried out by Hang Vu, the third speaker of the IATEFL 2025 panel on teacher identity, demonstrated a high level of insight into and awareness of the issues they face in developing their professional identity as teachers in a world dominated by AI, and of what kind of training they need in order to do that successfully. Sandy described Hang Vu's idea of "emerging identities", as summarised on the slide below:

Via Sandy Millin’s write up of Hang Vu’s talk in the IATEFL 2025 panel on Teacher Identity.

There's a lot to think about there! I suppose I have mainly been teacher/coordinator as AI inspector in professional terms, but also teacher as learner because, despite my personal misgivings, I have made an effort to attend (whether live or via recording) all the training available to us regarding AI. I have been teacher as AI user when I have used it to generate discussion questions (and then teacher as critical thinker when I have deleted half of them as unsuitable and edited/adapted others!). Teacher as AI instructor/facilitator, of course, as mentioned above, is still at the "coming soon to a classroom near you" stage. I suppose I will also have to be "teacher as AI supporter" within the "teacher as instructor/facilitator" side of things – regarding what we decide are acceptable uses of AI… but I predict it will be more along the lines of channelling inevitable use rather than encouraging use vs non-use! And I think, alongside that, I will definitely be encouraging critical discussion in my classroom regarding the use of AI and surrounding issues. It will be interesting to see what the students think. It seems to me that, just as much as the youngsters in the Vietnamese study, we old fossils who have been teaching a good while also need to regularly engage with our professional identities and figure out how we are going to move with the times professionally, regardless of (although obviously also interlinked with and influenced by!) our personal feelings towards the various changes (which, as Catherine Walters' plenary discussed, have been many and varied over the last 50 years!).

Sandy’s post finished with some of the questions posed by the audience, one of which was “Should we proactively work with learners about how to do AI? Maybe we should ask learners for the whole AI conversation, not just the final result.” – It’s an interesting one. I definitely want critical discussion and to find out the students’ take on it, and as with other things potentially their feedback/ideas/thoughts can feed into future iterations of the course, but ultimately, in terms of assessment, what is and isn’t acceptable has to align with university and college policy on AI use. One thing I do hope is that I will be able to persuade students of the importance of developing their own voice, as I think if I can do that, then reasonable/acceptable use (with the appropriate guidance on how) will be a natural progression. For sure, all this thinking I am doing at the moment (I’m on annual leave – I have time to think!!) will be a useful form of preparation for the task ahead!

This blog post is plenty long enough already, yet I haven’t even scratched the surface of identity, personal and professional, and the interplays between identity and classroom. But, another time… 🙂

Generative AI and Voice

I'm a writer. I am writing right now! I have written journal articles, book chapters, (unpublished) fiction, (unpublished) poetry, materials, reflections (blog posts), combination summary/reflections of talks/workshops I attend (blog posts), emails, feedback on students' work, the occasional Facebook update, WhatsApp/Messenger/Google Chat messages, and so the list goes on. Writing is a form of expression, as are speaking and drawing. These, including all the different kinds of writing I have done and do, are all forms of expression that AI is now capable of approximating. However, until fairly recently (when suddenly it was showing up everywhere!), I had not explicitly considered the relationship between AI-generated production and a person's 'voice'. Examples of 'voice' vs AI can be seen in the two screenshots below:

Via an email from Pavilion ELT – abstract of a forthcoming webinar.
Via Sandy Millin’s summary of Ciaran Lynch’s MaW SIG PCE talk at IATEFL 2025.

Both of these screenshots set voice against AI-generated content. The first one (which looks like an interesting webinar – Wednesday 14th May between 1600 and 1700 London time, in case you might like to attend!) seems to be about helping learners develop their own voice in another language, and suggests that this aspect of language learning is of greater importance in a world full of AI output. The second is in the context of materials writing, and highlights an issue that arises in the use of AI in creating materials – it "lacks teachers' unique voice". The speaker goes on to offer a framework for using AI to help with materials writing while avoiding the problems listed in the above screenshot. (See Sandy Millin's write-up for further information! It actually collects all of her write-ups of the MaW SIG 2025 PCE talks in a single post – good value! 🙂 )

I teach academic skills, including writing, primarily to foundation and occasionally pre-masters students who want to go on and study at Sheffield University. In the last year, we've been overhauling our syllabus, partially in response to one of our assessments being retired and partially in response to the proliferation of generative AI. Our goal is to move from complete prohibition of AI to responsible use of it. And I suppose one thing we hope to achieve from that is to reach a point where students may choose to use AI in certain elements of their assessment but actively avoid it in others. This, I think, has some overlap with Ciaran Lynch's framework for writing materials:

Via Sandy Millin’s summary of Ciaran Lynch’s MaW SIG PCE talk at IATEFL 2025.

Maybe we need a similar framework/workflow for our students that succinctly captures when and how AI use might be helpful and when it is to be avoided. And I think voice is part of the key to that! But what exactly is voice? In terms of writing, according to Mhilli (2023),

“authorial voice is the identity of the author reflected in written discourse, where written discourse should be understood as an ever evolving and dynamic source of language features available to the writer to choose from to express their voice. To clarify further, authorial identity may encapsulate such extra-discoursal features as race, national origin, age, or gender. Authorial voice, in contrast, comprises only those aspects of identity that can be traced in a piece of writing”.

[I recommend having a read of this article, if you are interested in the concept of voice! Especially regarding the tension between writers’ authentic L1 voice and the constraints of academic writing in terms of genre and linguistic features (which vary across fields).]

In terms of essay writing, and our students (who are only doing secondary research), if they are copying large chunks of text from generative AI, then they are not manipulating available language features to express meaning/their voice; they are merely doing the written equivalent of lip-synching. I think this is still the case if they use it for paraphrasing, because paraphrasing is influenced by your stance towards what you are paraphrasing and how you are using the information. I suppose students could in theory prompt AI to take a particular stance in writing a paraphrase, or explain how they plan to use the information, but they would also need to be able to evaluate the output and assess whether it meets that brief sufficiently. In which case, would it save them much time or effort? Would the outcome be truer to the student's own voice? I wonder. Of course, the assessment's purpose and criteria would influence whether or not that use was acceptable.

On the other hand, if students use AI to help them come up with keywords for searches and then look at titles and abstracts, and choose which sources to read in more depth, select ideas, engage with those ideas, evaluate them, synthesise them and organise it all into an essay, using language features available to them, then that incorporates use of AI but definitely doesn’t obscure their voice and the ownership of the essay is still very much with the student rather than with AI. They could even get AI to list relevant ideas for the essay title (with full awareness that any individual idea might be partly or fully a hallucination), thereby giving them a starting point of possible things to consider, and compare those with what they find in the literature. This (and the greyer area around paraphrasing explored above) suggests that a key element that underpins voice is that of criticality. Perhaps we could also describe it as active (and informed) use rather than passive use.

Another issue regarding voice in a world of AI-generated output, which I have also come across recently, lies in the use of AI detection tools:

From “AI, Academic Integrity and Authentic Assessment: An Ethical Path Forward for Education”

If ESL and autistic voices are more likely to be flagged as AI-generated content, then our AI detection tools do not allow space for these authentic voices. These findings point to a need to be very careful in the assumptions we make. I'm sure we've all looked at a piece of work and gone "this was definitely written by AI, it's so obvious!" at some point. Hopefully our conclusions are based on our knowledge of our students: their linguistic abilities, previous work produced under various conditions and so on. However, for submissions that are anonymised, this is no longer possible. I think, rather than relying on detection tools, we need to work towards making our assessments, and the criteria by which we assess, robust enough to negate the need for such tools. Either way, the findings would also suggest that the webinar described in screenshot no. 1 may be very pertinent for teachers in our field. (I wonder if the speakers have come across that line of research too?! I increasingly get the impression that, schedule-willing, I may be attending that webinar!)

Finally, this excerpt from a Guardian article about AI and human intelligence I think provides perhaps the most important reason for helping students to develop their voice and not sidestep this through use of AI:

“‘Don’t ask what AI can do for us, ask what it is doing to us’: are ChatGPT and co harming human intelligence?” – Helen Thomson writing for The Guardian, Saturday 19th April 2025

We want those Eureka moments! We want the richness that diversity of thought brings to the table. (It is baffling to see Diversity, Equality and Inclusion initiatives being dismantled willy-nilly in the U.S. – everybody loses out from that. But then, so much of what goes on these days is baffling.) Maybe something small we can do is help our students realise that their voice, like every voice, is important, and that diluting or losing it through ineffective use of AI makes the world a poorer place. I haven't even touched on AI and image production or AI and spoken production, but this blog post is long enough already (maybe I should have got AI to summarise it for me! 😉 ) so I will leave that for another post!

Using Adobe Firefly for Image Generation

Have you used Adobe Firefly before? Me neither. But we have free access to it via the University, and the TEL team has used it, so they ran a session for us on it. It can be used to generate images to put in lesson handouts and slides, but also in online platforms like Wooclap and Quizlet.

You write a prompt in a box and it generates images.

This was a scenario given to us:

Prompt 1: an image of 4 students in a discussion. This was the result:

Issues: There are 3 students and a teacher. They look quite young, while we teach university-age students. Three of them are blonde, so it isn't a good representation of our students. This is an example of the bias that exists in AI, in an automatic result with no detail specified in the prompt.

Prompt 2: an image of 4 university students from diverse backgrounds in a discussion. This was the result:

Problems: They are not in a classroom.

Adding “seated” (to be more typical of a classroom):

Not a perfect picture (looks a bit like an airport…) but better than the first picture! In terms of the purpose of generating the image, this would probably work. Prompt writing/editing for Adobe Firefly tends to take multiple iterations before you get something you might be happy to use.

We were given the following tips:

  • add more detail to get better results;
  • be aware of bias as you engineer prompts and evaluate the outcome;
  • be picky – it may take several iterations to get what you want. Sometimes a fairly simple prompt immediately yields a satisfactory outcome, but usually it takes a bit more effort, particularly to produce an outcome that is suitably representative of an international student population.

Adobe Firefly draws on a large bank of stock images, which means the quality is better than that of similar tools.

Once you have generated an image, you can also edit it to a certain extent, which is good, as the first images you get can have melded arms, funny-shaped heads and so forth! It's not very good with limbs. A central human image may be fine, but if anyone is in the background, or if you require groups/more people, then problems abound! Despite these issues, Firefly is better at it than Gemini.

So, all very cool, but actually stock images like those on Pixabay (and Creative Commons-licensed ones, like on Flickr – in particular ELTpics – if the context is suitable), i.e. human-generated images, are much less resource-intensive to use. So, don't get too carried away by the "it's so cool" thing. I tend to use Google image search with the appropriate licence filter, personally.

My general impression: I can't currently see an Adobe Firefly-shaped hole in my life that needs filling. I wonder if in 5 years' time I will look back on this post with an "oh, you innocent child" type lens or not?! Time will tell! It was a good session though: after being shown the prompts and pitfalls, we went into a breakout group and had to come up with prompts for another scenario. Unfortunately, in my group, none of us had access sorted out yet, so we couldn't test the prompts we wrote.

Generative AI and Assessment

A session about Generative AI and EAP that I attended recently provided the above quote for our consideration. I think one of the things that is challenging about the Generative AI landscape and its presence in the context of higher education is that it evolves so rapidly. This rapid evolution contrasts starkly with much slower-moving policy-making and curriculum development processes. Certainly in my current context, this issue of being "left behind" is one that we have been grappling with for a few years now. Initially, once generative AI had emerged, there was a period where all we could do was watch, as it became increasingly apparent that students were using it in their assessments, while we awaited a university policy to inform our response. An extra layer of waiting then ensued because, as well as being university policy-informed, we are Studygroup policy-informed. During that wait, our response to generative AI had to be: "No. You can't use this tool. It is against the rules. It will result in academic misconduct." Of course, since assessment in pathway colleges is high stakes (the deciding factor in whether or not a student can access their chosen university course), students use it anyway – due to running out of time, due to desperation, due to self-perceived inadequacy.

Now, we have the university policy which centres on ethical and appropriate use of AI, and acknowledging how and where it is used, and, in cooperation with Studygroup, are figuring out how to integrate AI use into our programme. We started by focusing on one of our coursework assessments, an extended essay, and discussing what aspects we thought were and weren’t suitable for students to use AI to help them with. So, for example, we thought it acceptable for students to do the following in their use of AI:

  • generate ideas around a topic, which they could then research using suitable resources e.g. the university library website and Google Scholar.
  • ask AI to suggest keywords to help them find information about the topics they want to research.
  • ask AI to suggest possible essay structures (but not paragraph level structure)
  • generate ideas for possible paragraph topics
  • get AI to proofread the essay, but only at surface level, to suggest language corrections (this would only be acceptable if we no longer gave scores for grammar and vocabulary, so it will require rubric-level change)

Of course, we can't just implement this: we need to go through the process of getting approval from Studygroup and then building it into our materials. We can't just expect learners to meet our expectations with no guidance other than the above list embedded in an assignment brief. Much as was discussed in the AI and Independent Learning webinar, we need to help students develop the skills they need in order to use AI appropriately and effectively. This will include things as basic as how to access the university-approved AI (Gemini) and how to use it (including how to write prompts that get it to do things that are helpful and appropriate, and equally how to avoid accidentally getting it to do things that aren't helpful or acceptable). Also important will be raising their awareness of the ethical issues surrounding the use of AI and of its inbuilt bias, as its output depends on what it has been trained on and there is always the risk of "hallucination", or false output. They will need to be cognisant of its strengths and weaknesses, and to develop an ability to evaluate its output so that they don't blindly use, or base actions on, output which is flawed. Their ability to evaluate will also need to extend to being able to assess when and when not to use it, and how to proceed with its output.

All of the above is far from straightforward! When you look at it like that, it's little wonder that, left to their own devices, students use it in the wrong way. So, in order to have an effective policy regarding the use of AI, a lot of preparation is required. That skill-development and awareness-raising needs to be built into all relevant lessons throughout the course. And that means a lot of (wo)man hours, given that our course materials are developed by people who are also teaching, coordinating and so on. In addition, teachers will need sufficient training to ensure they have the level of knowledge and skill necessary to successfully guide students through the materials/lessons where AI features. The other complicating factor is that the extent of the changes means that new materials/lessons cannot be implemented part way through an academic year, as all cohorts of a given year need the same input and need to take assessments that are assessed consistently through the year. So, if we are not ready by a given September, then we are immediately looking at a delay of another year. It is a complex business!

So, I absolutely agree with the quote at the start of this post, but I think it is also a LOT easier said than done, as developing an approach in a high-stakes environment takes time, but generative AI, like time and tide, waits for no man. By the time we reach the stage of being able to implement our plans fully, they will probably need adapting to whatever new developments have arisen in the meantime (already there is the question of Google Notebook and similar tools, which we have not yet addressed!). For sure, the assessment landscape is changing and will continue to change, but I do believe that we can't rely on "catching students out", e.g. with AI detection tools and the like. We need to support them in using AI effectively and acceptably, so that they can benefit from its strengths and use it in such a way as to mitigate its weaknesses and avoid misuse. Of course, as mentioned earlier, to be able to do that, we, ourselves, as teachers, need to develop our own knowledge and skills in the use of AI so that we can guide them through this decidedly tricky terrain. Providing training is a means of ensuring a base level of competence, rather than relying on teachers to learn what is required independently. Training objectives would need to mirror the objectives for students, but with an extra layer that addresses how to assist students in their use of AI and how to help them develop their criticality in relation to it. Obviously there will be skills and knowledge that teachers already have that will be transferable, e.g. around criticality, metacognition and so on, but support and collaboration that enables them to explore the application of these in the context of AI would be beneficial.

Apart from the issue of addressing AI use in the context of learning and assessments, in terms of not getting left behind we also need to ensure that what we are offering students is sufficiently worthwhile that they continue to come and do our courses, rather than deciding to rely on AI to support them through their studies, from application through to completion, and side-stepping what we offer. But that's for another blog post!

I would be interested to hear how your workplace has integrated use of AI into materials and lessons, and recognised its existence (for better and for worse) in the context of assessment. I would also be interested to hear how teachers have been supported in negotiating teaching, learning and assessment in an AI world. Please use the comments to let me know! 🙂

Gen AI and Independent Learning

This was the title of the English with Cambridge webinar that I watched today (linked so you can watch it too – recommended!). It's divided into 3 parts – what autonomy is, activities learners can do with Gen AI to learn autonomously, and risks to avoid. This post will offer a brief summary of that, followed by some ideas and thoughts of my own.

The first activity was to design an autonomous learner, sharing ideas in the chat. The usual kind of things came up – motivation, confidence, agency, enthusiasm. These were compared with the literature, e.g. Holec (1981) – "the autonomous learner can take charge of their own learning" – but the speaker said we need to unpack and update this. Unpacked, it does involve the ideas that were put in the chat, as well as the ability to manage time and resources, awareness of learning strategies, resourcefulness (e.g. thinking to ask an AI chatbot) but also criticality (not just accepting the response without evaluating it). However, teachers are also very important in the process – autonomous learners aren't born but are made, with support from teachers. This matters because if you are autonomous, you will achieve better results and improve more quickly. Also, autonomy is important beyond language learning – in the workplace, in personal lives etc. – it is a lifelong learning and living skill. It goes hand in hand with critical thinking, which is also a key skill. You are also likely to have better confidence and self-esteem.

The other speaker reminded us that most AI tools require users to be a certain age, e.g. ChatGPT is not for under-13s, and 13-18-year-olds need parental consent. So, if you do any activities with students, ensure they are old enough to use the tools and check whether you need parental consent. Then some activities:

  1. Using the chatbot as a writing tutor. This is a back-and-forth process, where the student asks the chatbot to highlight the mistakes but not correct them. The student then tries to correct the mistakes and repeats the activity. They need to tell the chatbot explicitly not to correct them. This could go through several iterations until the learner has had enough, at which point they prompt the chatbot to explain the mistakes: "What about this sentence? What is wrong with it? <sentence>" NB: the chatbot can make mistakes – it can say there are mistakes when there aren't. (There's a little code sketch of this loop after this list.)
  2. We were shown a sort of tabulated study plan for improving writing and asked what we thought the prompt might have been to generate it. The key point: you need to be very detailed in your prompt to get something useful back from the chatbot. It was something along the lines of "My teacher says my writing has xyz problems, and I want to take a B1 writing test in 4 weeks. I will have to write x and y. Can you make a study plan for me in a table? Can you include information about what I should do and what resources I should use?"
  3. Similar to the above, we were shown a visual idiom guide and asked what we thought the prompt was. It was something along the lines of "I have to learn these phrases for next week. I'm not a patient student and I think I have dyslexia. Can you suggest some study guides? <Phrases>"
  4. Intonation – voice chat in ChatGPT. You speak into your phone and you get audio back: "I've got to do a presentation. I think my intonation is flat. Can you help me? <Short extract from presentation>" ChatGPT can then make suggestions, and you can keep going back and forth – say it again and ask for further suggestions.
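As a little aside from me (not from the webinar): if you wanted to prototype the activity-1 loop outside the chat interface – say, as a tiny practice tool – it could look something like the minimal Python sketch below. This assumes the OpenAI Python client; the model name ("gpt-4o-mini") and the learner's draft sentence are purely illustrative choices of mine.

```python
# A minimal sketch of the "writing tutor" loop from activity 1:
# the chatbot is told to highlight mistakes but NOT correct them,
# and the learner resubmits drafts until they ask for explanations.
# Assumes the OpenAI Python client; "gpt-4o-mini" is an illustrative choice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def tutor_reply(messages):
    """Send the running conversation to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )
    return response.choices[0].message.content


# The system prompt encodes the crucial instruction: highlight, don't correct.
messages = [{
    "role": "system",
    "content": ("You are a writing tutor. Highlight the mistakes in the "
                "learner's text but do NOT correct them or give the answers."),
}]

draft = "Yesterday I have went to the library for study my exam."  # illustrative
while True:
    messages.append({"role": "user", "content": draft})
    feedback = tutor_reply(messages)
    messages.append({"role": "assistant", "content": feedback})
    print(feedback)
    draft = input("Revised draft (or type 'explain' to finish): ")
    if draft.strip().lower() == "explain":
        # The learner has had enough: now ask for explanations of the mistakes.
        messages.append({"role": "user",
                         "content": "Please explain the remaining mistakes."})
        print(tutor_reply(messages))
        break
```

The same back-and-forth works in the normal ChatGPT interface, of course – the point is just that the "don't correct me" instruction has to be explicit, and it is easy to encode up front.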

(I recommend watching the webinar to receive a full presentation of these ideas!)

The final part of the webinar dealt with the risks of using AI and how to avoid them. There was a poll asking "Has AI ever misunderstood you?" – there were a lot of "yes" answers. AI is not faultless and doesn't always understand. Then we were asked to think about what overreliance on AI might look like. Lack of creativity, quite formulaic answers and repetitiveness were ideas that came up from the audience. To avoid these risks, we need to train learners not to use AI too much. This is also where critical thinking comes in – learners need to be able to make effective choices in their use of AI. We want learners to be confident users of AI, but in a critical way: thinking and reflecting on things like whether AI is useful and whether it is doing what it needs to do. This means questioning them regularly and getting them to keep a journal of their AI use – when they used it, why, the result, whether they would use it again – to get them to think about how effective it is. It also means offering yourself as a resource in terms of support in using AI, so that learners can talk to you and get advice when they want to. The Cambridge Life Competencies Framework was also mentioned – there are freely available activities to use with students.

An example activity from this:

This can be used on a text that Generative AI has produced, to encourage students to question what is produced.

Another activity was to ask students to use AI for a chosen stage of a task. They should explain where they will use it and why they decided to use it for that stage of the task, and then reflect on the outcome. This should be a supportive, encouraging environment; the key thing is encouraging reflection.

The final question was "Are you an autonomous learner?", directed at us teachers. We need to build up our knowledge and understanding of things like AI, which will enable us to give support and advice to students. Make activities your own, adapted to your own context. We should also be a learning community in terms of AI, as it is new for us all. This would create a supportive environment rather than one of fear of using it in the wrong way.

The webinar concluded with 3 things to keep in mind. Purpose – you need a reason for using AI; don't use it for the sake of it or because you think you should. Have a plan, and make sure the tool fits the purpose. Privacy – any data that you put into a GenAI chat becomes part of the data that the chatbot uses, so anything you put in can be repeated to other users. Therefore, don't enter personal data about you, your learners or anyone else into it. You should also not put copyrighted material into it if you don't own the copyright. Planet – the use of GenAI has an effect on sustainability, in terms of the environment and society as a whole.

My thoughts and ideas

The first thing that I couldn't help thinking was that when I was learning Italian intensively and autonomously in the summer of 2014, I would have LOVED to have had access to GenAI! Being able to get instant basic feedback on my writing would have been very cool. I wonder how competent I would have been at handling the feedback, i.e. at identifying which parts were valid and which parts were sketchy.

There's also an AI tool we learnt about in one of the AI professional development sessions delivered at work, Google Notebook: you can feed it a bunch of content and it converts it into a podcast – a discussion between 2 "people" in passably natural spoken language – called a "Deep Dive". The usual AI caveats apply, in that what it churns out in the podcast may not be accurate to what was fed to it, and it might make stuff up. Personally, I would have loved using it for Italian learning, though. It would be really good for generating content to listen to, using topics and vocabulary that you have some familiarity with – you could read the texts in preparation. I don't believe this is the intended purpose of the tool (it is supposed to be a research assistant, with the reading and summarising of texts effectively outsourced to AI), but it would be a very good use of it! It would also make the issue of accuracy less acute, given that the purpose of listening to the podcast/summary would be to practise listening rather than to make high-stakes decisions based on the output!

Where I work, we've mostly been coming at it from the perspective of how to conduct assessments in a world where AI exists and students use it in the production of their written work. Being part of a university, we first had to wait for there to be a university policy on it. Now we are at the stage of being able to integrate the policy into our programme. It is still a slow process, as there is a lot of procedure to follow when you bring in new things. We are shifting from a zero-tolerance policy, which obviously was not very effective but was all we had to go on, to identifying how and when AI can be used effectively in students' learning and where the boundaries are. We want to integrate positive use into lessons, which echoes what this webinar was saying. By modelling effective use, giving students opportunities to use it with support, and highlighting its limitations, we hope to help them become more AI literate and therefore less likely to use it in detrimental ways. Maybe at some point we will have to teach them about Google Notebook and its limitations, since it is likely something that they could use at university as part of their process.

It is nice to be moving towards a position in which we can acknowledge the positive elements of AI. Of course, as quickly as we adapt, so quickly will it continue to evolve. (The tools we learnt about in the session where we learnt about the "Deep Dive" – wow! I may turn my notes, or at least some of them, from that session into a future blog post…) Going back to the webinar at the root of this post, I think one of the great things about it is that the skills and criticality that were presented, and the ideas for teaching them, will continue to be relevant even as the ideas for using the AI itself change and evolve. As for the part about learner autonomy, in my view they nailed it – it was so good to see autonomy discussed as something to bring into the classroom and develop (I have done a lot of work on that in my career – through classroom research, publication, conference presentations and webinars) rather than something that learners are or aren't. So, as I said before, it IS definitely worth a watch! It's also worth taking some time to look at the Cambridge Life Competencies Framework and the resources attached to it.