A session about Generative AI and EAP that I attended recently provided the above quote for our consideration. I think one of the things that is challenging about the Generative AI landscape and its presence in higher education is that it evolves so rapidly. This rapid evolution contrasts starkly with much slower-moving policy-making and curriculum development processes. Certainly in my current context, this issue of being “left behind” is one that we have been grappling with for a few years now. Initially, once generative AI had emerged, all we could do was watch as it became increasingly apparent that students were using it in their assessments, while we awaited a university policy to inform our response. An extra layer of waiting then ensued because, as well as being university policy-informed, we are Studygroup policy-informed. During that wait, our response to generative AI had to be “No. You can’t use this tool. It is against the rules. It will result in academic misconduct.” Of course, because assessment in pathway colleges is high stakes (the deciding factor in whether or not a student can access their chosen university course), students use it anyway: due to running out of time, due to desperation, due to self-perceived inadequacy.
Now we have the university policy, which centres on ethical and appropriate use of AI and on acknowledging how and where it is used, and, in cooperation with Studygroup, we are figuring out how to integrate AI use into our programme. We started by focusing on one of our coursework assessments, an extended essay, and discussing which aspects we thought were and weren’t suitable for students to use AI to help them with. So, for example, we thought it acceptable for students to do the following in their use of AI:
- generate ideas around a topic, which they could then research using suitable resources, e.g. the university library website and Google Scholar
- ask AI to suggest keywords to help them find information about the topics they want to research
- ask AI to suggest possible essay structures (but not paragraph-level structure)
- generate ideas for possible paragraph topics
- get AI to proofread the essay, but only at surface level, to suggest language corrections (this would only be acceptable if we no longer gave scores for grammar and vocabulary, so it would require a rubric-level change)
Of course, we can’t just implement this; we need to go through the process of getting approval from Studygroup and then building it into our materials. We can’t expect learners to meet our expectations with no guidance other than the above list embedded in an assignment brief. Much as was discussed in the AI and Independent Learning webinar, we need to help students develop the skills they need in order to use AI appropriately and effectively. This will include things as basic as how to access the university-approved AI (Gemini) and how to use it, including how to write prompts that elicit output which is helpful and appropriate, and avoid accidentally eliciting output which isn’t. Also important will be raising their awareness of the ethical issues surrounding AI use and of its inbuilt bias, since its output depends on what it has been trained on and there is always the risk of “hallucination”, or false output. They will need to be cognisant of its strengths and weaknesses, and to develop the ability to evaluate its output so that they don’t blindly use, or base actions on, output which is flawed. Their ability to evaluate will also need to extend to assessing when and when not to use it, and how to proceed with its output.
All of the above is far from straightforward! When you look at it like that, it’s little wonder that, left to their own devices, students use it in the wrong way. So, in order to have an effective policy regarding the use of AI, a lot of preparation is required. That skill-development and awareness-raising needs to be built into all relevant lessons throughout the course. And that means a lot of (wo)man hours, given that our course materials are developed by people who are also teaching, coordinating and so on. In addition, teachers will need sufficient training to ensure they have the level of knowledge and skill necessary to guide students successfully through the materials and lessons where AI features. The other complicating factor is that the extent of the changes means that new materials and lessons cannot be introduced part way through an academic year, as all cohorts of a given year need the same input and need to take assessments that are marked consistently through the year. So, if we are not ready by a September, we are immediately looking at a delay of another year. It is a complex business!
So, I absolutely agree with the quote at the start of this post, but I also think it is a LOT easier said than done. Developing an approach in a high-stakes environment takes time, but generative AI, like time and tide, waits for no man. By the time we reach the stage of being able to implement our plans fully, they will probably need adapting to whatever new developments have arisen in the meantime (already there is the question of Google Note and similar tools, which we have not yet addressed!). For sure, the assessment landscape is changing and will continue to change, but I do believe that we can’t rely on “catching students out”, e.g. with AI detection tools and the like. We need to support them in using AI effectively and acceptably, so that they can benefit from its strengths and use it in such a way as to mitigate its weaknesses and avoid misuse. Of course, as mentioned earlier, to be able to do that, we, as teachers, need to develop our own knowledge and skills in the use of AI so that we can guide them through this decidedly tricky terrain. Providing training is a means of ensuring a base level of competence, rather than relying on teachers to learn what is required independently. Training objectives would need to mirror the objectives for students, but with an extra layer that addresses how to assist students in their use of AI and how to help them develop their criticality in relation to it. Obviously teachers will already have transferable skills and knowledge, e.g. around criticality, metacognition and so on, but support and collaboration that enables them to explore their application in the context of AI would be beneficial.
Apart from addressing AI use in the context of learning and assessment, in terms of not getting left behind we also need to ensure that what we offer students is sufficiently worthwhile that they continue to come and do our courses, rather than deciding to rely on AI to support them through their studies, from application to completion, side-stepping what we offer. But that’s for another blog post!
I would be interested to hear how your workplace has integrated use of AI into materials and lessons, and recognised its existence (for better and for worse) in the context of assessment. I would also be interested to hear how teachers have been supported in negotiating teaching, learning and assessment in an AI world. Please use the comments to let me know!