Adapting to online teaching – EAP (3)

This is the third and final post that involves me wittering away about what I have done in my weekly 2hr online lessons with the pre-masters group that I share with my co-ADoS.

Week 5

After the low-point that was Week 4’s lesson (which you can read about in the second post of this series, which covers Weeks 3 and 4 – update: the students also didn’t do their homework/preparation for my co-ADoS’s session with them, so at least it wasn’t personal 😉 ), I changed my approach in terms of lesson focus. I shifted from trying to tap into and build on the asynchronous content to a straightforward focus on CW2, the students’ speaking coursework: a presentation based on their CW3, which is an extended writing coursework. (However, it is worth mentioning that this shift would have taken place regardless of how Week 4 went, as at this stage in the term students need help with their speaking coursework!)

My lesson had 4 objectives. In the event, we only completed 3 of them. This was fine because the final one was only there in case the main task took less time than I’d anticipated, which it didn’t. The final objective will feature in Week 6’s lesson.

At the start of the week, students had received an email about CW2 with all the important information about it in terms of what it is, how it works and a timeline of tasks and deadlines. I started the lesson with a task based on that email (essentially to make sure they had read it and understood it rather than ignored it!) – working in groups to answer a set of questions based on the email on a pre-prepared Padlet:

I know – a lot of questions. However, they were quick and easy to answer so the task did not take too long. This was the follow-up:

Some questions came up and I was able to respond to those, as well as reiterating key information.

Positives: The task forced them to read the email. (Students are good at not reading emails!) They had the opportunity to ask questions. They engaged!

To improve: I think I would probably do this the same way in future! Beats talking at them about it.

For Pre-Masters students, CW2, like CW3, is synoptic. They work on and submit the same pieces of work for their Research Project (Humanities) or Literature Review (Science and Engineering) module and their AES (Academic English Skills – ours!) module. So in theory they should already have been working on it in their other modules (which focus on content and structure, where we focus on language skills). The next step in this lesson, then, was to find out where they were at with it. I used Padlet again, but this time as an individual task:

The goal of this task was twofold – as well as finding out what students had done so far, I wanted students to have a clearer idea of where they were headed next. The questions were based on things they need to do as part of their CW2 preparation, leading them to question 8, where their answers to 1-7 guide them as to what they need to do. Some students had done loads already, some had started, some hadn’t started at all. Fairly typical! (They have been advised that next lesson will start with a progress check and I will want to know what they have done since this lesson! We shall see…)  This was the follow-up:

There were a few worries that I was able to address.

Positives: It gave me a snapshot of where they were at, and the opportunity to set up an expectation, based on the task, for next week’s lesson.

To improve: Their answers to question 8 were a bit vague. Next time I would give an example answer to push them to give more useful (to themselves) answers.

The final task of the lesson was completing the practice submission. This was what they were told about it in the information email:

I figured it would be less daunting if we did as much as possible during the lesson and they just had to finish for homework. We did it step by step:

It took them a fair bit of time! In fact, they didn’t quite manage to finish the final stage, hence there wasn’t time to embark on the assessment criteria side of things. However, we will now be looking at the criteria at the start of Week 6 and their submission deadline is not til the end of Week 8, so it’s ok.

Positives: It scaffolded an important task (the practice submission) for them. Giving them time in class alleviates (at least slightly) the time pressure they are under currently, which is important.

To improve: I would make more use of the individual chat feature, to prod them/check on them, rather than only the main everyone chatbox.

Overall: Admittedly this wasn’t the most exciting lesson in the world, but it did what it needed to do and they stayed with me! I deliberately over-planned because I just had no idea how long doing the practice task would take them so I wanted to be prepared for whichever eventuality.

Week 6

The final lesson for this term! I started with a chat box warmer, one I’ve used previously – tell me using one adjective how you feel right now. The adjectives were more positive than Week 4 (when I last used this warmer) on the whole, which was encouraging!

These were my lesson objectives:

For the first, I did a similar task to last week – a set of questions to answer on a pre-prepared Padlet:

The answers were more encouraging this time round – there were still some who hadn’t started but they were in the minority rather than the majority this week! I had to cajole some of them into responding – by the end of the task I had won 11/15, having started with about 5. Having responded verbally to some of their answers – to acknowledge their progress, to pick up on answers that indicated confusion and to encourage them to keep working hard/not leave it til the last minute – I followed up with this:

There were some concerns that came out, which I was able to address.

Positives about this stage: Students knew they would be expected to give me a ‘progress report’, as I had told them at the end of last week’s lesson. Hopefully more work got done as a result! Knowing that homework (in this case CW2 work) will be revisited in the next class rather than forgotten about is supposed to be more motivating for students. I am getting better at talking into empty space. I think each week since the start of this way of doing things, I have improved and become more comfortable with it little by little (because I only teach one lesson per week, it’s a slow learning curve!). I had thought through feedback and the feedback elements felt less haphazard than they have been known to feel in past lessons.

To improve: I still don’t know what to do with the students who just don’t respond whatever I do or say! Given the stage in the course and the age of the students, though, to an extent I think all I can do is provide opportunities for participation as best I can and make sure they are clearly set up and scaffolded.

 

Then we moved onto the next stage, which I had carried over from last lesson.

This stage was a preparatory stage for the following evaluation stage and the two in combination were to ensure that students have a clear idea of what they need to do in order to get good marks for their presentations. I introduced the 4 criteria and their subheadings, giving a brief explanation of what each one meant.

 

To try and make it clearer for students and to check understanding, I then did a little matching task. The example below is one of the items. It was a series of sentences starting “I should…” and students had to match each one to the correct criterion. I asked them to write their answers (e.g. for this example, they would write 2a).

Positives: Links the things students need to do with the criteria they need to do them for. Doesn’t require a lot of student writing.

To improve: Next time I would insert a breakout room stage and have a task with the 4 criteria and a list of the statements and get the students to discuss and match them, then use what I actually did as the feedback stage. On the plus side, the way I did it didn’t have a negative impact on the next (important) task, which was the final part of this stage of the lesson – the example presentation evaluation:

The first step was getting them all to watch it individually rather than playing it and sharing screen, to avoid bandwidth and audio quality issues. I asked them to write “done” in the chatbox once they had finished. Once they were done, I put them into breakout rooms in groups to discuss the presentation in terms of the criteria and add to the pre-prepared Padlet.

Positives: they did the task and showed understanding of the criteria and how the presentation mapped to the criteria.

To improve: I think the instruction slide above should have been two slides. One for watching the presentation and evaluating it individually and one for doing the group task. Fortunately, used as above it didn’t impact the task negatively! Next time, I would also include an element of getting them to engage with the content (which was quite humorous!) rather than only the quality. A couple of them spontaneously mentioned things about it in the chatbox as they watched which was nice! When I planned the lesson, I was too focused on the main task and forgot to allow for personalisation.

The final stage of the lesson focused on the Q&A. As students are submitting recorded presentations rather than doing them live, we need a live element to address the answering questions part of the criteria (2b!). These will take place in Week 8 and involve use of a list of questions which students are able to look at in advance of their slot (they are already on Blackboard!).

They’ve already had this information (the first 3 questions) on multiple occasions from multiple sources but it bears repetition! (Inevitably, some got it wrong!) Once clarified, we could focus on the fourth question – useful language.

Because we were running out of time a bit, I displayed the above slide and got them to add examples, before getting them to download the list of questions (most of them hadn’t as yet) and putting them into breakout rooms for a bit of practice. Finally, we came back to the main room and I asked each of them one of the questions, just to give them a feel for it.

Positives: They had a chance to practise in groups and a chance to “try it out” in the main room subsequently. They now all have the questions downloaded, have looked at them and realised that it’s not as easy as they had assumed, so might actually do some preparation work towards it!

To improve: Next time, rather than bring them back to the main room, I’d do the “giving a feel for it” element in each breakout room in turn. That way, there would be less waiting time for students and they could continue practising after I move to the next group. The final main room stage could then focus on task reflection.

Overall: I finally won at timing! Ok, not quite but much closer than was the case at the start of this term! Nothing took wildly longer than I had anticipated, everything I had planned was done, just in time. The final stage could have used a bit more time but didn’t suffer unduly for it. So, I’m pleased! It means I am getting the hang of estimating how much time it will take to do stuff. As ever plenty to work on and ways to improve but that’s the joy of it. Anyway that is it, for me, for teaching, till September! When it will be a brand new class who come directly to remote learning (the earliest we will do face to face is January and that’s very much dependent on the state of the world by then – anything could happen!). In the meantime, 3 crazy weeks of assessment and then 4 weeks of MUCH-NEEDED downtime are on the way. (I was sick for the whole of the Christmas holiday, my Easter holiday was a stress fest rather than a trip to Sicily thanks to the pandemic, so really, **really** looking forward to some downtime! And then using what I’ve learned this year come the start of next year. 🙂 )

 

Adapting to online teaching 2 (EAP)

After my first two weeks of whole group online teaching this term, I published this post about my experience of adapting to this way of teaching (behind the curve because we didn’t do any whole group teaching on our course last term, only short small group tutorials, which I mentioned briefly in my post about our experience of throwing an EAP course online at short notice). Another two weeks have passed so here is the next instalment! (It’s ok, we only have 6 teaching weeks this term before the final 3 weeks become all about assessment, so there will only be 3 of these posts in total!)

Week 3

The theme for this week was “Overpopulation – myth or problem?”. Having established in Week 2 that I can do breakout rooms (woo!), I decided to try a speaking-focused lesson with a focus on paraphrasing and summarising sources when speaking (which they will need to do for their Coursework 2 presentations). In preparation for the lesson, students had to find a source to support the position they had been assigned (half the class were assigned ‘myth’, half were assigned ‘problem’). In total, there were 4 breakout room tasks, of which the final one was the main discussion task. The first 3 tasks used random groupings, while for the main task I used customised groupings, because groups had to have a balance of “myth” and “problem” viewpoints and I had to take into account attendance patterns thus far (i.e. I wanted to make sure that, as well as being balanced viewpoint-wise, no group had more than one student with patchy attendance!)

This was the first task (yes, somehow I forgot about “A”…! Students didn’t say anything about it, if they noticed. Of course they may have thought the chat box warmer task was “A”!)

This task reviews the skills learners developed and were tested on in Coursework 1 Source Report. In all the breakout room tasks for this lesson, I included times on the slides to give students an indication of how long they would have in their breakout room to complete the task.

Positives of this task: clear and achievable for students; provided opportunity for speaking and for warming up to working in breakout-room mode!

Problem with this task: no tangible output = room for students to slack off. In future I would do something like get groups to report back in the main room, answering questions such as “In your group, whose source was the most current? What different search methods did your group discuss?”

This was primarily a preparatory task for the main discussion but also paraphrasing skill practice. As well as review and practice of written paraphrasing, it encouraged students to pick out key arguments that they could use in the main discussion task. By now, students are used to using Padlet in our whole group sessions, both with and without the breakout room/group component.

Positives of this task: useful skill practice, a preparation step for the main discussion, has a tangible/monitorable output (student posts on the padlet)

Problems with this task: my instructions weren’t clear enough – in hindsight I should have included an example post on the padlet!; it took even longer than I had anticipated, which probably also relates to the instructions not being clear enough (fortunately, as has been mentioned previously, timing is very flexible in these sessions this term!); I used the comment function on Padlet to give live feedback/guide students but not all groups noticed the comments as they are not as immediately visually evident as the equivalent on a Google doc would be (I dealt with this by going into breakout rooms and drawing students’ attention to the comments!); my post-task feedback again needed more thought (work in progress!).

This was the final preparation task before the main discussion task. The goal was to give students time to consider the arguments linked to the alternative viewpoint and possible responses to these, so that the main task discussion could be of a higher quality.

Positives of this task: It used the output of the previous task (the arguments on the padlet) with a focus on how they would be used in the subsequent task, which adds coherence to the lesson arc and hopefully means students can see why they are doing what they are doing – there is a clear direction to the tasks.

Problems with this task: students could think “I’ll manage with the discussion, I don’t need to do this task”; any given student’s experience of this task would vary depending on how forthcoming or not their group-mates were. Group dynamics in the online setting are something I need to think about more – how to help students to work well together in groups, in breakout rooms. Maybe add more structure to breakout room tasks, e.g. start them with some kind of mini-activity where students have to write something in the chat box, before moving onto using the audio and doing the actual task at hand.

(No, I don’t know what happened to my grasp of the alphabet in these lesson materials! I think I was so focused on the task content that I forgot to pay attention to numbering/lettering!)

So, the main task! Group discussion requiring use of the sources found for homework (research skills), the key arguments identified, paraphrased and considered in the course of this lesson, and language for referring to sources verbally.

Positives of this task: Brings together everything the students have done from homework through to final discussion preparation

Problems with this task: As far as I was able to tell, only one out of 4 groups did the task properly! I think again what was missing was a clear feedback stage which students would be made aware of in advance of starting the task and which required them to DO the task fully in order to complete it; students who want to do the task properly but are in a group with students who are more interested in slacking off lose out (I had one student who, when I was in the breakout room monitoring/checking on them, tried to give her opinion and elicit others’ opinions, but radio silence followed!).

This evaluative element of the lesson comes from Sandy’s recent blog post about conversation shapes. (Although it might be hard to see in this screenshot of the slide, depending on the resolution of your screen, when displayed as a pdf of a ppt in Blackboard collaborate, the credits were clearly visible!) Unsurprisingly, for the group who did have their discussion, it looked most like conversation 2. As a class, we identified that conversation 3 would be most effective – contributions of varying length, responding to the other speakers’ contributions, building on other speakers’ contributions. Obviously in groups, there would be more than 2 speakers but the students didn’t seem to have any problems applying the visuals to a group discussion.

Positives about this task: It was great to have a visual way to think about the discussions the students had had (those who had had them!! But I figure for those who bothered less, this was still useful and could be considered in terms of previous discussions). Having identified that 3 would be the most effective, this can be revisited in future speaking lessons as a prompt in advance of discussion tasks. Could also consider what language and cues would help to build a discussion like this e.g. agreeing and disagreeing language that allows connection to what has been said (that’s a good point, but…/yes, I completely agree, also…), back-channeling etc.

Problems with this task: I probably didn’t go far enough with it. Although, possibly this is not a problem but rather a slow-burn thing that bears plenty of revisiting and therefore doesn’t require lengthy input around it straight away. I think in future I will introduce this after the first suitable seminar discussion practice that students do in the course and revisit it and build on it regularly e.g. have example discussions to match to each shape, the language input as mentioned above etc. (Thank you, Sandy!)

The final task of the lesson was a reflective task, with the output going onto a padlet. Reflection is a key component of learning, of course, and actually these students by and large did a good job of this. This is something I need to capitalise on more in future lessons.

Positives of this task: made students think about what they’d done and evaluate it; those who didn’t speak recognised it in their answers (it’s something!)

Problems with this task: Too many closed questions – I need to push them further than that. Closed questions are fine, but a follow-up question after each would be good.

This task reflected weekly lesson content for Week 3. In practice, the students had very little in-class time to start it, because all the teacher-led tasks (as above) took a fair amount of time to do, but students are accustomed to fairly substantial homework tasks and, as this was part of Lesson 3CD, it also factors into their asynchronous learning time.

Overall, Week 3 was a useful learning curve for me. There were plenty of positives, there are plenty of things to work on. I find it really useful to consider each lesson in these terms, think about what went well, what didn’t work and how you’d do it differently next time to make it work better, and think about how to reflect what you’ve learned more immediately in subsequent lessons – I guess that is what reflective teaching and learning is all about!

Week 4

Well… you know those lessons where you think you’ve made a really quite good lesson plan and have high hopes for how the lesson will go, but the reality turns out… rather differently? That was Week 4’s lesson for me. The theme for Week 4 was Scientific Controversy. The asynch materials included a listening practice based on a panel discussion about genetic modification, which I asked the students to complete in advance of the class as preparation. Though it was homework, it wasn’t extra, in the sense that it was part of the core asynch materials for the week.

I began the lesson in the usual way – with a chat box warmer. Today I asked them to pick one adjective that best describes them right now and write it in the chat box. 9/14 responded – tired, exhausted, sleepy, blue, sleepy, energetic, sleepy too, calm, hungry. I acknowledged and responded to all their responses. Then we looked at the lesson objectives. In this lesson, I put extra effort into making sure the lesson objectives were clear and carried through the lesson, so that students could see where they were in relation to the objectives, see progress being made and see how tasks relate to the lesson objectives (I’d read, or watched, I forget which, about the importance of doing this). I did this by repeating the objectives slide at appropriate intervals, highlighting each objective as it was focused on and putting a tick by each objective as it was met. Here is an example:

The first stage of the lesson was a language review stage. 

This stage included a definition check for controversy and scientific controversy and a series of pictures of example scientific controversies for which students had to guess what scientific controversy was being illustrated. Here is an example:

The students responded, and a good pace was maintained. I could perhaps have done more with the second question, tried to get students to share more ideas, but knowing I had some meatier tasks later in the lesson, I didn’t want to spend too long on this one. The final task of the first stage was a quick Quizlet review of some vocabulary from the homework asynch materials. 11/14 did it, which was an improvement on Week 2! I haven’t tried the team/breakout room version yet – that may be for next week!

Positives for this stage: Pacing, student response, topic and activities connected to asynch materials so provide review opportunities, use of pictures.

Problems with this stage: The second question on the picture slides got neglected. I think, as it unfolded, I worried that if I pushed the second question, the amount of time they spent typing would negatively affect the pace/mean too long was spent on the activity.

The next stage of the lesson was reviewing the listening homework.

I started with these questions:

As you can see, I messed up the formatting for this slide, so the “Write yes or no” looks like it only relates to question 3. I corrected it verbally but only got ‘no’s, for those who responded. Hoping this was for the third question, I reminded them about the online mock exams available, the importance of practice and that there would be opportunity for practice during this lesson too.

This next task was supposed to be a fairly quick and easy way of getting them to show their understanding of the opinions voiced in the panel discussion:

Nobody did it. Nobody responded when I asked why nobody had started doing anything a few minutes later. Eventually I said ok give me a smile emoji if you did the listening homework and a sad face emoji if you didn’t. I only got sad faces. So this task flopped completely. The next one was also not going to be possible as it reviewed the target language from the aforementioned homework:

So I skipped to the point where I displayed the target language and we related it to the conversation shapes we’d looked at in Week 3, and then moved on to the final review task:

(The opinions referred to are those of the panel speakers again.) Obviously this needed a workaround due to the lack of homework issue, so I had them open up the relevant powerpoint which had notes relating to each panellist’s views and got them to tell me via the chatbox when they had done so.

Positives about this stage: It had a mixture of chatbox and breakout room activities, and focused on the content and the language of the listening homework. I had some workarounds for lack of homework.

Problems with this stage: It relied on students having done the homework! The padlet task had no workaround for the zero homework completion (I was working on the basis that at least SOME of them would have done it and be able to post on the padlet, and the rest could interact with that using the comments).

The next and final stage of the lesson was the speaking/live listening stage:

I made this slide a) to give students an overview of this stage of the lesson and b) to insert at the relevant intervals to show which phase of the task we were moving on to. More detailed instructions for each step came at the start of each step. I had hoped this overview would motivate the students to carry out each step as they would know the following steps relied on it and have a clear picture of what they were working towards.

In practice, I put the students into breakout rooms, having set up the task, and went into each room to check on the students. Group A gave me radio silence. No response. No audio, nothing in the chatbox, whatever I said. So I reiterated what they needed to do and said I would be back in 10 minutes to check on them (the preparation stage was 20 minutes). Group B had some students who did engage and some who did radio silence. Thank God for the ones who did! They asked questions about their topic, I checked their understanding of the task and then I left them to it for a bit (again promising to return in 10 minutes to check on them). At the relevant point I went back to Group A, knowing full well that the chances of them having done anything since I left (no activated mics had appeared at any point) were slim (they could have used the chatbox… they hadn’t!). I tried again; more radio silence. Group B, again, had made progress when I went in to check on them. Then I brought everyone back to the main room. Except… most of Group A didn’t appear/reconnect. (So, presumably, they had done the log on and bugger off thing!) Obviously the plan in the slide above was a write-off (the members of Group A that did show up were still radio silence when addressed/instructed!). In the event, Group B did their discussion and I gave them some feedback, again referring to conversation shapes.

Positives of this stage: It was clearly staged. The group that did the parts that they were able to do made a good effort. (I feel for them, being so outnumbered by ones who won’t participate…)

Problems with this stage: It relied on student participation! Step 3 relied on Step 2 being carried out to some degree of success. Too ambitious? But these ARE pre-masters students, it shouldn’t be! There again, they are all knackered (see chatbox warmer – though Mr Energetic? Group A. Just saying.) If the stage had worked as planned, students may have struggled to summarise the other group’s discussion because poor audio quality makes it harder to follow what is being said.

What am I taking away from these 2 weeks? That I want an article/book/video about classroom management with online platforms! Though quite what can be done if students are completely unresponsive, I’m not sure. I have worked really hard on making everything as clear and as meaningful as possible, in terms of tasks and objectives, which I am pleased with. I continue to try different task types and see what does and doesn’t work (with this group). Possibly I approached it wrongly overall – I tried to connect to the asynchronous material and give students engaging tasks that would help them develop their academic skills and prepare for exams, but maybe I should have focused more on their coursework. The next and final big thing students have to do in terms of course work is prepare and submit a presentation recording, so my final 2 lessons will focus on that! I can but do my best. Importantly, I seem much better able to accept things going wrong, take what I can from it and not beat myself up over it than I have been in the past. I think this links with having had a really supportive line manager/programme leader for a year now – work-related anxiety levels are a lot lower than they used to be – and also, of course, that it has been 1.5yrs now of using Mindfulness to cope better with life, including work.

Watch this space to find out what happens in the last instalment of my teaching reflections for this term. The main purpose of these posts is to be my memory, outsourced, when I come to planning lessons next term with a new group of students! Space and time will make it easier to incorporate what I have been learning these last 4 weeks (lots of learning, hard to keep up but I am doing my best!). The course will look a bit different, and is still under construction, but since it will be what it is from the start, rather than a change being thrust on students part way through, there will be a lot more scope for setting clear expectations and instilling good habits etc from the beginning AND the university will have made it so that students can access Google suite from China yayyy (I forget the technical details but it is some kind of VPN they are purchasing that enables it) – so, exciting times ahead!

 

 

Adapting to online teaching (EAP)

Things got a little busy around the middle of March, what with the small issue of a lockdown and a complete shift to remote teaching and learning to deal with. We are now starting our second term of this scenario and where last term was a frantic race to lay down enough track for us all to get from start to end of term somewhat intact, this term (for me) there is more brain space available to shift the focus from how to survive to how to thrive and actually blog about it too! (Why isn’t the noun for thrive thrival? From survival to thrival would make a great blog post title, not that I am there yet!)

This term, we have introduced more synchronous contact time per week. Last term, in addition to all the asynchronous content, we had 2hrs per class per week, which was broken into 4 half-hour slots across which the class was divided, with each small group attending one slot for a short tutorial. By the end of the term, mine looked something like this:

00-05 General chat

Making sure everyone is there, some kind of simple chatbox warmer while students are getting logged in, linked to topic of the week.

05-10 Review of week

Ask students to review how the week has gone, what work they have done, whether they have understood everything etc. (I found the most time-efficient way of doing this was having the review questions on a slide and asking each student to answer all the questions on the slide (up to 3) in one go, rather than one question at a time or via the chatbox. This saved the faff of mics going on and off, and of slow typing, both of which I also trialled and errored, so to speak!)

10-25 Tasks

A combination of short discussions/debates/vocabulary review tasks. I try to flip as much as possible to leave more time for these.

25-30 H/W

Make sure students understand any homework they have to do that week and are clear what the requirements for the next week are in terms of asynchronous materials.

This term, as well as these small group tutorials, we have introduced a 2hr whole-class session. To start with, these were to be 1hr teacher-led and 1hr guided study, in which the students are set a task and the teacher is on hand to help. Two weeks in, we have decided to leave the structuring of the 2hr slot up to teachers, to use however best suits what they are doing with their students. Due to remission hours, I am sharing a group with my co-ADoS: I am doing the 2hr whole-group slot while she does the small group tutorials. I’m as happy as the proverbial pig in you-know-what: I have these 2hr slots, with weekly learning materials and assessment requirements to draw on for content, and all the freedom in the world to experiment with this new teaching medium. It’s really funny being back in that position of things feeling so new.

I have done two sessions so far.

Session 1

The weekly materials on the VLE for Week 1 focused on Term 3 requirements and reading/writing exam practice. Back in the old days, the fifth hour each week used to be a workshop hour, guiding students on aspects of their writing and speaking coursework. This was my first session with this group of students, as last term I taught a different group; these students are the group my co-ADoS has taught for the last two terms. Thus, the first thing I needed to do was some kind of getting-to-know-you activity.

I experimented with using Padlet:

After going through some important course-related information with the students, I also used Padlet to get information from them about their coursework, which they started work on last term but which we only focus on this term (this is a Pre-Masters group and this is the final year that we are running a synoptic writing coursework, in which we look at the language skills aspect while their Research Project module tutor [humanities] or Literature Review module tutor [science and engineering] focuses on the content):

I also experimented with Quizlet Live’s individual mode, which, like the team mode, allows Quizlet use in class, but doesn’t require breakout rooms etc. to do so, making it more straightforward.

It worked! It’s a way to review vocabulary in an online setting with a competitive element. My next job is to come up with a few alternatives so it doesn’t get tired (I used it in Week 2 as well!). I might even give the team version, with breakout rooms, a go at some point if I am feeling brave.

I followed up with this, having them use the chatbox:

Those three tasks + feedback (e.g. in the GTKY task I had to answer all those questions, most of which related to the course and to learning English online effectively), plus going through the important course information, took up the whole first hour. For the second hour, they had a choice of two tasks: one, work on their coursework; two, do a practice writing exam (they have the real thing in Week 7 this term). The latter required them to have already looked at some of the asynchronous materials, so if they hadn’t yet (it was only Tuesday!), they could start by doing that.

I asked them, where possible (most of them are in China), to share their work with me on a Google doc so I could see what they were doing. None of them did. Some of them have since submitted the writing practice for feedback (it was optional – we will give them feedback if they give us their work to give feedback on, but they could also have opted to use the model and analysis provided in the materials). Their coursework in its entirety will be submitted at the end of Week 4 for first draft feedback, so whether or not they used that hour for it, it will have to be done at some point!

Things I took away from session 1:

  • Allow extra time for tasks;
  • Padlet is useful for giving tasks tangible outcomes that you can monitor and give feedback on;
  • yay, I still have Quizlet Live in my arsenal;
  • the second hour definitely needs tangible and meaningful outcomes;
  • it’s really clear when you do tasks who is participating and who has logged on and then buggered off to do something else on the assumption (perhaps based on other subjects’ whole-class sessions) that the teacher will talk for the whole time and won’t notice if someone isn’t actually there!;
  • the chatbox is versatile but I need to get students speaking as well (time to get to grips with breakout rooms! Only doing small-group tutorials meant I hadn’t up til that point, but I used them for the first time in Week 2).

Session 2

This time, I wanted to use breakout rooms and get the students speaking. I also wanted to connect to the topic of the asynchronous materials (Surveillance) and aim to make the session complement the asynchronous component of the course. In terms of skills, the asynchronous weekly lesson material focused on listening/note-taking and paraphrasing/synthesising different viewpoints in a presentation.

I decided to start with a two-part dictogloss. To make it more topical, rather than using the one provided in the lesson materials, I found a couple of Guardian articles about surveillance in the context of Covid-19 and the contact tracing scheme, in particular the still-absent app. For the first two sentences, having ensured everyone had pen and paper to hand (they told me via the chatbox when they did), I read them out a few times for the students to note down key ideas (I read them an extra time and went slightly more slowly than I would have done in a face-to-face classroom, to mitigate potential audio quality issues). That done, I put them into breakout rooms in small groups with the task of reconstructing the text and choosing one group member to write their reconstruction on the padlet I had prepared for the task. (I have two padlets for use during lessons which I wipe between uses; each can be a whiteboard for students to use, a substitute Google doc or a combination of the two.) Once they were in their rooms, I went from room to room and made sure they were on task. Each group duly put their reconstruction on the padlet and was able to compare theirs with other groups’ and the original.

For the second two sentences, back in the main room, the students had to make notes and then use their notes to complete a gapped summary that I displayed for them. They gave their answers in the chatbox.

In hindsight, I would a) have spent more time on the feedback element for the first two sentences and b) used the breakout rooms for students to discuss and decide their answers for the gapped summary rather than going directly for the chatbox. Following the two dictoglosses, I displayed 3 reflective questions for students to think about and answer in the chatbox. Again, breakout rooms could have been used here.

We then moved on to another round of Quizlet Live with vocabulary relating to surveillance, which, again, would either be review or preparation depending on how far through the asynchronous materials students were. This was the final teacher-led task. Timing-wise, I ran slightly over for that initial hour, but that wasn’t a problem (even more so given that the requirement for that structure was abandoned at a meeting the following day!). The guided study task for Week 2 was based on something we are trying with our asynchronous padlet – the weekly speaking challenge. The purpose of this weekly challenge is to increase the amount of speaking practice students do per week and to get them used to recording themselves speaking, as this is what they will have to do for their coursework presentations later this term. As with introducing anything new (e.g. these students did a weekly paraphrase challenge in the last two terms and uptake was slow there too, but it happened with perseverance!), they need a lot of encouraging. So, given that most of them hadn’t done the one from Week 1 and that the Week 2 one was an extension of my lesson, this was the task:

 

These were the questions:

(The PEE structure is Point, Evidence, Evaluation and it is the structure we teach them for presenting, supporting and evaluating their ideas in both writing and speaking.) This task requires them to practise the “paraphrasing/synthesising different viewpoints in a presentation” element of the weekly asynchronous materials in a way that enables me to check and give feedback on their output.

Things I took away from session 2:

  • A little really does go a long way, so less = more, especially if I want to start building in more effective scaffolding and feedback elements;
  • I can do breakout rooms, yay! Now I need to think about how best to use them in a way that maximises potential benefits;
  • activities from face-to-face classrooms can be done online with some adaptation, but I need to think carefully about how best to adapt them – what needs adding, what needs removing etc.;
  • teaching online is different but…that’s ok!
  • the more confident I get with it all, the more I can adapt what I do to be as inclusive as possible (obviously that is always an aim, but it helps to have some experience with the medium of teaching and how everything works or doesn’t work in the bag when working towards it).

Session 3 is tomorrow, so I am looking forward to using what I have learnt from sessions 1 and 2 to inform what I do. Watch this space!

I hope this has been of interest to some of you out there, though I suspect I am rather behind the curve because of how things have worked with our course! Hope you are enjoying the remote way of doing things, wherever you are at with it! I would love to hear about tasks you have adapted and tried in your online classrooms and how it went – if you have blogged about it please drop a link in the comments for me! Otherwise, please do use the comments to share. 🙂

 

Taking an EAP course online – what we’ve done so far!

Like most of the rest of the educational world, I have been thrust headlong into the world of online teaching and learning. Both from the teaching perspective and the coordinating one. It’s now week 7 of our first term in this brave new world and I have come up for air very briefly before assessments rain down on us between now and the end of the term. I thought I would share a bit of my experience of this term so far and how things are working because I’ve found it useful looking at others’ experiences!

Though we are in Week 7, I have so far taught only 4 synchronous sessions as, being an ADoS, I “only” have one group, we didn’t have any synchronous learning in Week 1 (it got up and running from Week 2), and Week 3 got wiped out by a University closure day tacked onto the Easter weekend. My Week 7 session is tomorrow!

I use the term ‘taught’ fairly loosely as our approach is not the traditional whole-class online lesson. Instead, we have a two-hour slot and the class of (on average) 20 is divided across 4 half-hour slots within that (we change these groupings each week). It’s been interesting coming to terms with the new set-up and figuring out what works (and, indeed, what doesn’t!). We are using Blackboard Collaborate and, like most platforms of this kind, it has some useful features like allowing students to raise hands, chat in a chat box, be put into breakout rooms and so forth. Of course, with half an hour and a small handful of students, as a whole we haven’t been using the breakout rooms much. That will change next term though! My half hours tend to take the structure of: check on previous week’s learning, task, discussion. It seems to work best when:

  • you nominate students clearly so that they know when to speak (sounds so obvious but in slot one on day one I had to learn that the hard way!).
  • you get used to speaking into the ether and include prompts to get students writing in the chat box or raising their hands within what you say.
  • you use visual instructions to back up the oral ones, so there is no ambiguity about what you want students to do, when, in what order and for how long, and how they are going to return/signal their return to the next whole-group learning phase.
  • you get students to prepare thoroughly for the discussion in advance of the session.

As well as our online slots, we (continue to) use Blackboard for asynchronous content. Given that we had 2 weeks to get our course up and running, we were fortunate in that we already had all lesson materials on Blackboard in the form of PowerPoints and worksheets, which previously served to enable students to review content. The challenge, then, has been to make it all more suitable for online learning. We have done this in the following ways:

  • Recording start-of-week and end-of-week videos. The former review the previous week of learning and talk the students through the lesson content for the current week, while the latter review the week’s content. This has been a laying-the-track-as-we-go kind of team effort, with everyone contributing – teachers and ADoSes writing scripts and finding additional materials to support the week’s topic and skills, ADoSes checking and editing scripts as well as adding the additional resources to the relevant lesson padlet on Blackboard, and the odd teacher but mainly the TEL (Technology-Enhanced Learning) team recording the videos using Kaltura. Given that there were three cohorts and sets of teachers to manage, this required a complex project management Google Sheet to keep track of who was doing what by when. By hook or by crook, though, we have managed to do it! Script checking is complete, script recording ongoing. Materials are released on a weekly basis.
  • Using individual class padlets. Teachers have set up a padlet for each of their groups and this provides a means of generating student interaction (with each other, with tasks, with the teacher) outside of the synchronous learning slots. My students have engaged most with the paraphrase challenge – this is the brainchild of one of my colleagues, not me, so I don’t take credit! It involves putting a sentence or a short paragraph together with source information on the padlet for students to paraphrase – either the entirety, in the case of the sentence-level ones, or a selected idea, in the paragraph-level ones. Of course they need to include correctly formatted citations. It’s a good way to provide regular paraphrasing practice – a skill that students tend to need a lot of practice with in order to master, regardless of L1 background!
  • As alluded to in point one, supplementing what already existed with extra content for the students to use for skills practice – videos, website links, extra practice activities etc.
  • In Week 5, and ongoing, end-of-week quizzes were introduced, using Blackboard’s quizzing tool. These contain questions based on the week’s content to check students’ learning, but also serve as a means for the institution to monitor participation. Script writers have written the questions at the end of the end-of-week video script, and the TEL team have created the quizzes in Blackboard. I don’t know what we would do without the TEL team!!

Student feedback has been positive, but the main thing they want more of is teacher contact points within a week. Thus, next term we will be keeping the short tutorials slot and adding another two-hour slot, where one hour is more traditional teacher-led input and the second hour can be used for tasks with the teacher on hand to provide support. We are also looking to add more interactive content to the lesson padlets on Blackboard for next term and for the new academic year (although we have just learnt that there will also be more content being prescribed from higher up than our centre, so how that all pans out remains to be seen!).

In terms of asynchronous learning, my students were struggling to keep track of which tasks they had and hadn’t done, and were therefore forgetting to do some things. Being younger foundation students, unlike the pre-masters students they haven’t yet learned how to study effectively independently and are used to a lot more structure and hand-holding. So, I made them a record of work to alleviate this issue! Some are even using it 😉

I hope this is of interest to some of you out there and would be interested to hear via comments what you are doing with your students and how that is working out!

Right, see you at the other end of this term (maybe!) <fills lungs and prepares for the next wave to break>

Scholarship Circle: Giving formative feedback on student writing (2.2)

For more information about what scholarship circles involve, please look here, and for write-ups of previous scholarship circles, here.

You might also be interested in session 1 / session 2 / session 3 and 4 / session 5-8 / session 9 / session 2.1 of this particular circle.

In this week’s session of the scholarship circle, we started by doing a pilot text analysis. In order to do this, we needed a first draft and a final draft of a piece of CW3 essay coursework, and a method of analysis. Here is what it looked like:

So…

  • QM code refers to the error correction code (the QuickMark), and there we had to note down the symbol given to each mistake in the first draft.
  • Focus/criterion refers to the marking criteria we use to assess the essay. There are five criteria – Task achievement (core elements and supported position), Organisation (cohesive lexis and meta-structures), Grammar (range and accuracy), Vocabulary (range and accuracy) and Academic conventions (presentation of source content and citations/references). Each QM can be attached to a criterion, so that when the student looks at the criteria-based feedback, it also shows them how many QMs are attached to each criterion. The more QMs there are, the more that criterion needs work!
  • Error in first draft and Revision in final draft require exact copying from the student’s work, unless they have removed the word/s that prompted the QM code.

Revision status is where the method comes in. Ours, shared with us by our M.A. researcher whose project our scholarship circle was borne out of, is based on Storch and Wigglesworth. Errors are assigned a status as follows:

  • Successful: the revision made has corrected the problem
  • Unsuccessful: the revision made has not corrected the problem
  • Unverifiable: the QM was wrongly used by the teacher, and the student has either made an incorrect change in the final draft based on that QM, or has made no change because none was in reality required
  • Unattempted: the QM is correctly used but the student does not make any change in the final draft.

Doing the pilot threw up some interesting issues that we will need to keep in mind if we use this approach in our data collection:

  • As there is a group of us rather than just one, there needs to be consistency with regard to what is considered successful and what is considered unsuccessful. E.g. if the student removes a problem word/phrase rather than correcting it, is that successful? If the student corrects the issue identified by the QM but the sentence is still grammatically incorrect, is that successful? The key here is that we make a decision as a group and stick by it, as otherwise our data will not be reliable/useful due to inconsistency.
  • We need to beware of making assumptions about what students were thinking when they revised their work. One thing a QM does, regardless of the student’s understanding of the code, is draw their attention to that section of writing and encourage them to focus closely on it. Thus, the revision may go beyond the QM because the student has a different idea of how to express something.
  • It is better to do the text analysis on a piece of writing that you HAVEN’T done the feedback on, as it enables you to be more objective in your analysis.
  • When doing a text analysis based on someone else’s feedback, however, we need to avoid getting sucked into questioning why a teacher has used a particular code and whether or not it was the most effective correction to suggest. These whys and wherefores are a separate study!

Another thing that was discussed was the need to get ethical approval before we can start doing anything. This consists of a 250 word overview of the project, and we need to state the research aims as well as how we will collect data. As students and teachers will need to consent to the research being done (i.e. to use of their information), we need to include a blank copy of the consent form we intend to use in our ethical approval application. By submitting that ethical approval form, we will be committing to carrying out the project so we need to be really sure at this point that this is going to happen. Part of the aim of today’s session, in doing a pilot text analysis, was to give us some idea of what we would be letting ourselves in for!

Interesting times ahead, stay tuned… 🙂

Scholarship Circle: Giving formative feedback on student writing (2.1)

It’s a brand new term (well, sort of, it’s actually the third week of it now!), the second of our four terms here at the college, and today (Monday 21st January, though I won’t be able to publish this post on the same day!) we managed our first scholarship circle session of the term.

For more information about what scholarship circles involve, please look here, and for write-ups of previous scholarship circles, here.

You might also be interested in session 1 / session 2 / session 3 and 4 / session 5-8 / session 9 of this particular circle.

The biggest challenge we faced was remembering where we had got to in the final session BC (Before Christmas!). What were our research questions that we had decided on again? Do we still like them? What was the next step we were supposed to take this term?

Who?

We talked again about which students we wanted to participate – did we want IFY (Foundation) or PMP (Pre-Masters)? We considered the fact that it’s not only linguistic ability which influences response to feedback (our focus) – things like age, study pathway, past learning experiences and educational culture in country of origin will all play their part. Eventually, we decided to focus on IFY students, because with PMPs their coursework may alter dramatically between first and final draft submissions due to feedback from their content tutor, which would affect our ability to do text analysis regarding their response to our first draft feedback. Within the IFY cohort we have decided to focus on the c and d level groups (which are the two bottom sets, if you will), as these students are most at risk of not progressing, so any data which enables us to refine the feedback we give them and others like them will be valuable.

What?

It is notoriously tricky to pin down a specific focus and design a tool which enables you to collect data that will provide the information you need in order to address that focus. Last term, we identified two research questions:

This session, we decided that this was actually too big, and we have narrowed the focus to no. 2. Of course, having made that decision (and, in fact, in the process of making it), we discussed what specifically to focus on. Here are some of the ideas:

  • Recognition – which of the Quickmarks are students able to recognise and identify without further help/guidance?
  • Process – are they using the Quickmarks as intended? (When they don’t recognise one, do they use the guidance provided with it, that appears when you click on the symbol? If they do that, do they use the links provided within that information to further inform themselves and equip themselves to address the issue? You may assume students know what the symbols mean/read the information if they don’t but anecdotal evidence suggests otherwise – e.g. a student who was given a wrong word class symbol and changed the word to a different word rather than changing the class of it!)
  • Application – do they go on to be able to correct other instances of the error in their work?

Despite our interest in the potential responses, we shelved the following lines of enquiry for the time being:

  • How long do they spend altogether looking at their feedback?
  • How do they split that time between Quickmarks, general comments and copy-pasted criteria?

We are mindful that we only have 6 weeks of sessions this term (and that includes this one!) as this term’s week 10, unlike the final week of last term, is going to be, er, a tad busy! (An extra cohort and 4 exams being done between them vs one cohort and one exam last time round!) As we want to collect data next term, that gives us limited time for preparation.

How?

We are going to collect data in two ways.

Text analysis

We will each look at a first draft and a final draft of an essay from a different student and do a text analysis to find out if they have applied the Quickmark feedback to the rest of their text. This will involve picking a couple of Quickmarks that were given to the student on their first draft, identifying and highlighting any other instances of that error type, and then looking at the final draft to find the highlighted errors, so that we can see whether they have been corrected and, if they have, whether successfully or not.

We are going to have a go at this in our session next week, to practise what we will need to do and agree on the process.

Questionnaire

Designing an effective questionnaire is very difficult and we are still in the very early stages. We are still leaning towards Google Forms as the medium. Key things we need to keep in mind are:

  • How many questions can we realistically expect students to answer? The answer is probably fewer than we think, and this means that we have to be selective in what questions to include.
  • How can we ask the questions most clearly? As well as using graded language, this means thinking about question types – will we use a Likert scale? will we use tick boxes? will we use any open questions?
  • How can we ensure that the questions generate useful, relevant data? The data needs to answer the research questions. Again, this requires considering different question types and what sort of data they will yield. Additionally, knowing that we need to analyse all the data that we collect, in terms of our research question, we might want to avoid open questions as that data will be more difficult and time-consuming to analyse, interesting though it might be.

The questions will obviously relate to the focuses we identified earlier – recognition, process and application. One of our jobs for the next couple of sessions is to write our questions. It’s easy (ish!) to talk around what we want to know, but writing clear questions that elicit that information will be significantly more challenging!

Another thing we acknowledged, finally, is that research-wise we are not doing anything that hasn’t been done before, BUT the “newness” comes from doing it in our particular context. And that is absolutely fine! 🙂

Homework: 

Well, those of us who haven’t got round to doing the reading set at the end of the previous session (cough cough) will hopefully manage to finish that. (That was Goldstein, L., ‘Questions and answers about teacher written commentary and student revision: teachers and students working together’, in Journal of Second Language Writing, and Ene, E. & Upton, T.A., ‘Learner uptake of teacher electronic feedback in ESL composition’.) Otherwise, thinking about possible questions and how to formulate them!

Scholarship Circle: Giving formative feedback on student writing (5-8)

Last time I blamed time and workload for the lack of updates, but this time the reason there is only one post representing four sessions is in part a question of time but more importantly a question of content. This will hopefully make more sense as I go on to explain below!

(For more information about what scholarship circles involve, please look here and for write-ups of previous scholarship circles, here

You might also be interested in session 1 / session 2 / session 3 and 4 of this particular circle.)

Session 5 saw us finishing off what we started in Session 4 – i.e. editing the error correction code to make it clearer and more student-friendly. So, nothing to add for that, really! It was what it was – see write-up of Session 4 for an insight.

Sessions 6 and 7 were very interesting – we talked about potential research directions for our scholarship circle. We started with two possibilities. I suggested that we replicate the M.A. research regarding response to feedback that started the whole scholarship circle off and see if the changes we are making have had any effect. At the same time as I had that idea, another of our members brought forward the idea of participating in a study that is going to be carried out by a person who works in the Psychology department at Sheffield University, regarding reflection on feedback and locus of control. What both of these have in common is that they are not mine to talk about in any great depth on a public platform given that one has not yet been published and the other is still in its planning stages.

Session 6

So, in Session 6, the M.A. researcher told us, in depth, all about her methodology, since in theory, if we were to replicate that study, we would be using that methodology; we then also heard about the ideas and tools involved in the Psychology department research. From the former, it was absolutely fascinating to hear about how everything was done, and also straightforward enough to identify that replicating that study would take up too much time at critical assessment points when people are already pressed for time: it’s one thing to give up sleeping if you are trying to do your M.A. dissertation to distinction level (congratulations!), but another if you are just working full time and don’t necessarily want to take on that level of workload out of the goodness of your heart! We want to do research, but we also want to be realistic. With regards to the latter, it sounded potentially interesting, but while we heard about the idea, we didn’t see the tools it would involve using until Session 7. The only tool that we contributed was the reflection task that we have newly integrated into our programme, which students have to complete after they receive feedback on the first draft of their assignments.

Session 7

Between Session 6 and 7, we got hold of the tools (emailed to us by the member in touch with the research in the Psychology department) and were able to have a look in advance of Session 7. In Session 7, we discussed the tools (questionnaires) and agreed that while some elements of them were potentially workable and interesting, there were enough issues regarding the content, language and length that it perhaps wasn’t the right direction for us to take after all. The tools had been produced for a different context (first year undergraduate psychology students). We decided that what we needed was to be able to use questionnaires that were geared a) towards our context and students and b) towards finding out what we want to know. We also talked about the aim of our research, as obviously the aim of a piece of research has a big impact on how you go about doing that research. Broadly, we want to better understand our students’ response to feedback and from that be able to adapt what we do with our feedback to be as useful as it possibly can be for the students. We spent some time discussing what kinds of questions might be included in such a questionnaire.

So, at this point, we began the shift away from focusing on those two studies (one existing, complete but unpublished, and one proposed) and towards deciding on our own way forward, which became the focus of Session 8.

Session 8

Between Session 7 and Session 8, our M.A. Researcher sent us an email pointing out that in order to think about what we want to include in our questionnaires, we first need to have a clear idea of what our research questions are. So that was the first thing we discussed.

One fairly important thing that we decided today, as part of that discussion about research questions, was that it would be better to focus on one thing at a time. So, rather than focusing on all the types of feedback that Turnitin has to offer within one project, this time round we would focus specifically on the Quickmarks (which, of course, we have recently been working on!). Then, next time round, we could shift the focus to another aspect. This is in keeping with our recognition of the need to be realistic about what we can achieve, so as to avoid setting ourselves up for failure. (I think this is a key thing to bear in mind for anybody wanting to set up a scholarship circle like this!) The questions we decided on were:

  1. Do students understand the purpose of feedback and our expectations of them when responding to feedback?
  2. How do students respond to the Quickmarks?

Questions that got thrown around in the course of this discussion were:

  • Do students prioritise some codes over others? E.g. do they go for the ones they think are more treatable?
  • What codes do students recognise immediately?
  • If they don’t immediately recognise the codes, do they read the descriptions offered?
  • Do they click on the links in the descriptions?
  • Do they do anything with those links after opening them? (One of the students in the M.A. research opened all the links but then never did anything with them!)
  • How much time do they believe they should spend on this feedback?
  • How long are students spending on looking at the feedback in total?
  • How do students split their time between Quickmarks (/”In-text feedback” so includes comments and text-on-text a.k.a. the “T” option, which some of us haven’t previously used!) and general comments and the grade form?

Of course, these questions will feed into the tool that we go on to design.

We identified that our learner training ideas (e.g. the reflection form; improving the video that introduces students to Turnitin feedback; developing a task to go with the video in which they answer questions and, in so doing, create a record of the important information that they can refer back to) can and should be worked on without waiting for the research. That way, having done what we can to improve things based on our current understanding, we can use the research to highlight any gaps.

We also realised that for the data regarding Quickmarks to be useful, it would be good for it to be specific. So, one thing on our list of things to find out is whether Google Forms would allow us to have an item in which students identify which Quickmarks they were given in their text and then answer questions regarding their attitude to those Quickmarks, how clear they were, etc. Currently we are planning to use Google Forms to collect data, as it is easy to administer and organises the results in a visually useful way. Of course, that decision may change depending on whether or not it allows us to do what we want to do.

Lots more to discuss, and hopefully we will be able to squeeze in one more meeting next week before the Christmas holidays begin (it is marking week, but, most unusually, there is only one exam to mark; in a normal marking week it just would not be possible)… we shall see! Overall, I think it will be great to carry out research as a scholarship group and use it to inform what we do (hence my initial idea, which turned out to be overambitious…). Exciting times! 🙂


Using Google+ Communities with classes (2)

All of a sudden we are 5 weeks into term. This week, also known as 5+1 (so as not to get it mixed up with teaching week 6, which is next week), is Learning Conversations week (the closest we get to half term, and only in the September term!), so it seemed a good time to take stock and see how things are going with Google Communities, following my introductory post from many moons ago.

Firstly, it must be said that the situation has changed since I wrote that first post: now, all teachers are required to use GC instead of My Group on MOLE (the university's brand of Blackboard VLE), because we had trouble setting up groups on MOLE at the start of this term. Nevertheless, I am carrying on with my original plan of reflecting on and evaluating my use of GC with my students, because I think it is a valuable thing to do!

In order to evaluate effectively, I wanted to have the students' perspective as well as my own, so I posted a few evaluative questions in the discussion category of each of my classes' GC pages.

So, no science involved, no Likert scales, no anonymity, just some basic questions. (The third question was there because I thought I might as well get their views on how the lessons are going so far at the same time!) I'm well aware of the limitations of this approach, BUT then again I'm not planning to make any great claims based on the feedback I get, and I'm not planning to send a write-up to the ELTJ or anything like that either (that would need all manner of ethical approval!). I did try to frame the questions positively, e.g. "What do you think would improve the way we use GC?" rather than "What don't you like about GC?", so that the students wouldn't feel that responding to the question was a form of criticism and therefore feel inhibited. An added benefit is that this pushes them to be constructive regarding future use rather than just saying how they feel about the current use of it.

Before I go into the responses I've had from students, however, it would make sense to summarise how I've been using the GCs with them. I recently wrote about GCs for the British Council TeachingEnglish page (soon to be published), and the description I came up with in that post was "a one-stop shop for everything to do with their [students'] AES classes", and that is basically what it has become:

Speaking Category extract


Writing Category extract


Vocabulary Category extract


Listening Category extract

I would say the main use I have made of it is to share materials relating to lessons, mostly in advance of the lessons – TED Talks, newspaper articles etc. – but also useful websites and tools, for individual or class use – the AWL highlighter, Quizlet, Vocab.com etc. Finally, it is great for sharing editable links to Google Docs, which we use quite often in class for various writing tasks. Other than these key uses, I have also used it to raise students' awareness of mental health issues and the mental health services offered to students by the university, during Mental Health Week here (which coincided with World Mental Health Day), and to raise their awareness of the Students' Union and what it offers them.

In terms of student feedback, they think it's "convenient", "easy to use" and they "enjoy using" it. They also mention the ability to comment on posts (not present with My Group on MOLE) and to communicate outside the classroom as well as in it. In terms of suggestions for improvement, one student said students should use it to interact more frequently, but that it should be clear which posts are class content and which are sharing/interaction. A couple of students also said they'd like the PowerPoints used in class to be uploaded there; however, those are available on MOLE. The trouble, of course, is that in using GC rather than My Group (which is on MOLE), students are a lot more tuned into GC (which we use all the time) than into MOLE. I have no scientific evidence to back this up, but I suspect that, be it academically or personally, if you have to use multiple platforms you tend to gravitate towards one, or some, more than others rather than using them all equally, particularly if time is very limited, as it is for busy students! (I could be wrong – if you know of any relevant studies, let me know!) Unfortunately, GC cannot fully replace MOLE: students need to learn how to use MOLE in preparation for going to university here, and they need to submit coursework assignments to Turnitin via it. Perhaps, then, I need to come up with ways to encourage them to go from one to the other and back, so they don't forget about 'the other'…

In terms of future use, I have set up a little experiment: as part of the Learning Conversations that are taking place this week, we have to decide on Smart Actions that the students are supposed to carry out. E.g.


Go to Useful Websites on MOLE and explore the ‘Learning Vocabulary’ websites available. Tell your teacher which websites you visited and what you learnt from them by the final AES lesson of Week 6.

Some of them, like the above, lend themselves to posting on GC. In this way, not only do the students tell me what they have learnt, but they also share that learning with the rest of their classmates. So, in their learning conversations, whenever the Smart Action(s) were amenable to this plan, I have been encouraging students to use GC to communicate the outcome to me and share the learning with the rest of the class. We will see how it goes and whether they do post their findings. It will be interesting to see what happens! Another idea I've had is to do something along the lines of "academic words of the week", where I provide a few choice academic words along with definitions, collocations, examples of use and a little activity that gives them a bit of practice using them, and get them to make a Quizlet vocabulary set collaboratively (I have a Quizlet class set up for each class). Then, perhaps every couple of weeks, we could do an in-class vocabulary review activity to see what they can remember.

Finally, it seems to me that Monday, being the first day of the second half of the term, is a crucial opportunity to build on student feedback by getting them to discuss ways in which we could use the GC for more interactive activities and find out what they’d be interested in having me share other than class-related materials and the occasional forays into awareness-raising that I have attempted. The key thing that I want them to take away is that I want the GC to work for them and that I am very much open to ideas from them as to how that should be, so that it becomes a collaborative venture rather than a teacher-dominated one.

We shall see what the next five weeks hold… Do you have any other ideas for how I could use GCs more effectively? Would love to hear them if you do!


Scholarship Circle: Giving formative feedback on student writing (3+4)

Time and workload have dictated that I combine two weekly scholarship sessions into one post, so this “double digest” is my write-up of sessions 3 and 4.

(For more information about what scholarship circles involve, please look here and for write-ups of previous scholarship circles, here

You might also be interested in session 1 and session 2 of this particular circle.)

Session 3

In Session 3, we started by discussing the type of feedback we give students on their coursework. For CW1 (an essay outline), we give them structural feedback as well as pointing out where sources are insufficiently paraphrased, while for CW3 they get structural feedback and language feedback using the error correction code. We also talked more about direct feedback. We questioned where the line between direct feedback and collusion lies, and decided that it's ok for students to use teacher feedback to improve their work, but that if they hired another tutor to correct their work, it would be collusion. We also came to the conclusion that direct feedback can be useful for certain things and that you could use it to scaffold learners:

  • in the first instance of a mistake, provide the correct form as a model;
  • in the second instance, provide the start of the correct form;
  • in the third instance, just highlight the type of mistake and let the learner correct it by themselves, using the previous instances and feedback to help them;
  • if there are any further instances of that mistake type, indicate to learners that they need to find and correct them.

We also talked more about this issue of correcting mistakes beyond those pointed out by the teacher i.e. proofreading work for more instances of the same mistake. In our experience, it frequently does not happen. In the masters research done by one of our number, the main reasons for that, given by the students when they were asked, were:

  • the belief that no comments = no mistakes
  • not knowing how to find/correct mistakes

However, with regard to the Quickmarks (i.e. the error correction code on Turnitin), for the students who participated in the study, 80-100% of Quickmarks resulted in successful revisions. Thus, on the whole, mistakes are only corrected when they are pointed out. This brought us back to the question of proofreading and learner training, which we had touched on in previous sessions, identifying it as a definite need.

We acknowledged that we expect proofreading but that it doesn't happen. This is partly because our learners are not used to it: they are used to having all errors pointed out to them. In some cases, as with one of the participants in the M.A. study, learners are not able to identify mistakes. In that case, the ideal would be to help those learners to find and correct the errors they ARE able to deal with at their level. We decided that, in order to help learners in both cases, more proofreading-related lessons are needed. They already have "Grammar Guru", an online interactive grammar tutoring tool, within which are activities that prompt proofreading for mistakes related to the specific focus of a given tutorial, e.g. articles.

However, the only time they do it with their own work is with CW3, and so we wondered whether there would be scope for using work produced for writing exam practice as the basis for proofreading activities too.

We also looked at 2 tools for encouraging students to engage with their feedback:

1. A Google Form, adapted from something similar used at Nottingham Trent, that encourages students to find examples of particular mistakes in their text, correct them and make a note of the materials used to make each correction:

The idea is that students complete it between receiving their feedback and attending their tutorial, so that during the tutorial the tutor can, amongst other things, check their corrections and suggest alternative sources.

2. A form for students to complete that pushes them to reflect on their feedback:

As with the first one, this is intended to be completed between receiving the feedback on Turnitin and attending the tutorial, thus making the tutorial more effective than the common scenario in which the student comes in not having even opened the feedback. We also wondered about the possibility of combining the two, i.e. combining focused error identification and correction with reflection on other aspects of the feedback.

Session 4

This week, in Session 4, we mainly focused on the error correction code that we use. We looked at each symbol and its accompanying notes, first deciding whether it was necessary to keep and then refining it. The code, used on Turnitin, works as follows: we highlight mistakes and attach symbols to them. When the student subsequently looks at their text, they see the symbols, and when they click on a symbol, the accompanying notes appear. Our notes include, depending on the mistake, an explanation of the mistake, examples of incorrect and corrected use, and links to sources that students can use to learn more about the language point in question. Here is an example:

We paid particular attention to the clarity of the language used in the accompanying notes, getting rid of anything unnecessary (e.g. modals, repetition, etc.), and to the links provided to help students. The code also exists in Google Doc format, so we all had our Chromebooks out and were working on it collaboratively. There are a lot of symbols and there was plenty to say, so we actually only got as far as "C"!! (They are ordered alphabetically…!) This job will continue in the next session, which will be the week after next, as next week we have Learning Conversations, which are off timetable, so our availability is very different from normal.

I would be interested to hear what approaches you use where you work in terms of error correction, codes, proofreading training, pre-tutorial requirements, engaging learners with feedback and so on. Please do share any thoughts using the comments box below… 🙂

Scholarship Circle: Giving formative feedback on student writing (2)

Before we had time to turn around twice, Tuesday rolled around again, and with it our weekly scholarship circle meeting, with its name and focus of "Giving formative feedback on student writing". (For more information about what scholarship circles involve, please look here; for write-ups of previous scholarship circles, here; and to see what we discussed last week – in session 1 of this circle – here.)

A week is a short turnaround time, but a number (9, in fact!) of eager beavers, who'd all managed to read the article "Sugaring the Pill: Praise and criticism in written feedback" by Fiona Hyland and Ken Hyland, in the Journal of Second Language Writing, turned up to discuss it and relate it to our context. The article is, in its own words, "a detailed text analysis of the written feedback given by two teachers to ESL students over a complete proficiency course". The authors categorise all the feedback by function (praise, criticism and suggestions) and analyse it accordingly. It's a very interesting and thought-provoking article. However, the purpose of this post is not to summarise the article itself but rather our discussion of it. This is not as easy a task as it might sound!

Praise

We started by talking about praise. Something we found interesting, in both the article and a similar piece of research done for a masters dissertation by one of our number, was that the students in these studies were able to identify when praise was insincere/formulaic/there for the sake of being there. (Here we are talking about the general comments at the end of a text rather than specific in-text comments.) Additionally, in terms of general end-of-text comments, students who receive substantial formulaic praise may automatically mentally downgrade it, particularly if the balance of feedback overall is in favour of praise, i.e. more positive comments than suggestions for improvement. In connection with this, students were also found not to believe positive general comments if they did not reflect the in-text feedback, which, being more directly connected to the text, held more weight for them. Finally, both the article and the masters research highlighted the danger of the suggestions for improvement in a praise-criticism sandwich being ignored/missed by the student, and the danger of hedged comments (e.g. using modals) being misunderstood.

Another aspect of feedback which we thought might lead to misunderstanding is our feedback guidelines here at the college, which stipulate that our general comments should include 3 positive points and 3 areas to work on. We discussed the possibility that this might be (mis)interpreted by students to mean that the piece of writing was good and in need of improvement in equal measure, when in fact that may not be the case. We also discussed the importance of framing the negative points as suggestions rather than criticism, as well as of avoiding hedging and the aforementioned dangers of miscommunication that may go with it:

Compare

“Your writing does not have enough linkers so it is confusing” (highlighting a negative)

with:

“You should include more linkers in your work to make it clearer” (making a suggestion for improvement)

This would, in turn, be easier to understand for a student than:

“I wonder if you could include more linkers in this paragraph? This might help the reader.” (hedged)

or:

“This is a good introduction with a clear thesis statement and scope, however, you need to look at coherence. Go back to …. and consider… . I think you could also benefit from having a look at…  …it is quite advanced but I think you are ready to take your AW to the next level!” (Praise-criticism sandwich: the student in question ignored all the suggestions because the teacher had said it was good so they didn’t feel the need to make any changes!) 

Of course, as discussed in the journal article, teachers do use phrases such as "I wonder if", and questions rather than direct instructions, to avoid appropriating the piece of work and to avoid being overly authoritative, in order to meet what Hyland and Hyland describe as the "interpersonal goal" of feedback (in contrast with its pedagogic and informational goals). Our conclusion, based on the masters findings, our experience and the journal article, was that teachers possibly worry too much about being polite in their feedback, which ends up confusing the student more than anything else. As here:

When the message gets lost…

Still on the subject of praise, we agreed that it is most effective when specific, i.e. when it directly highlights something in the text that the student is doing well, a view supported by both the article and the masters research. Carrying this over to general end-of-text comments, we wondered whether 'repeating' what you have said in specific in-text comments (which I admitted to doing quite a bit, hence my raising the issue), whether positive or negative, might actually be a way of reinforcing the importance of the in-text comments in question, rather than being redundant, and of making the general comments more personalised/less formulaic.

Finally, one issue I raised was that on Turnitin, if all the in-text comments (both positive and "negative", the latter including suggestions for improvement, not just criticisms) are highlighted in a single colour, a student might look at their text and assume their essay was terrible because of the sheer quantity of highlighting. I wondered whether using different colours of highlighting for positive and negative comments would alleviate that. However, it was also pointed out that it might be even worse if students knew that code and had very few things highlighted in the positive colour!

Improving feedback

As well as identifying the potential issues with praise discussed above, we also discussed possible solutions:

Reframing general comments

We agreed that:

  • short, personalised comments would be most useful, to avoid misunderstandings and identifiable insincerity. (Our comments bank – a google doc of generic comments – does not currently fit this bill.)
  • in Turnitin, we could make more use of the "T" option (which sits alongside the QM and comment bubble options, and which most of us were unaware of!). This allows you to write directly on the text in 'blue ink', which might be more personalised and allow more flexibility than the general comments in the comments box. It might also allow for less in-text highlighting for comment bubbles.
  • having a "3 positive things and 3 'negative'/to-improve" one-size-fits-all guideline is problematic, as students are all different (though if you have 60+ students' work to look at in a short space of time, is carefully tailored, individualised feedback realistically feasible?)

Learner Training

We decided that learner training was crucial for enabling students to make full use of the feedback and therefore for making giving it worth our time and theirs. Firstly, for in-text comments to be truly useful, it was suggested that we need to explicitly train students to look for further examples of the mistakes we highlight using the Quickmarks (i.e. the error correction code), as otherwise they will correct what we highlight but won't automatically apply it to the rest of their text. Part of learner training might be to bring them to the point where they can do that without being continually prompted in comments or tutorials. We also considered the need to recognise and differentiate between "treatable" errors (e.g. articles, where there are rules that can be followed) and "non-treatable" errors (e.g. word choice), and to give appropriate feedback for each. For non-treatable errors, direct feedback, i.e. giving students the correction, is better, while for treatable errors we can use indirect feedback, i.e. identifying the error and asking students to correct it themselves, using clues such as error correction coding. Currently, most of our feedback is indirect, so this is something we may need to reconsider.

Another aspect of learner training that we discussed was how to train learners to make the most of their very brief (10-15 minute) tutorials. For these tutorials to be truly beneficial, we agreed that it was imperative for students to look at their feedback BEFORE coming to the tutorial. In fact, they need not only to look at it but also to attempt to respond to it, so that during the tutorial the tutor can check their attempts and help them with the areas they were unable to address independently. We wondered about using a pre-tutorial sheet to encourage them to do this: something they would need to engage with the feedback in order to complete. A couple of teachers have already experimented with this kind of thing, with encouraging results, so it is worth looking into.

All in all, we managed to discuss a lot in an hour – or just over, as we lost track of time! (You know it’s a good scholarship circle when the participants just can’t drag themselves away at the end! I think the reason this scholarship circle is going so well is that it has a very specific focus and it is one that is equally important to all of us.)

Homework for next week: to read a chapter by Dana Ferris called "Does error feedback help student writers? New evidence on the short- and long-term effects of written error correction" in Feedback in Second Language Writing, a book published by Cambridge University Press (and edited by the same Hylands who wrote last week's article!). Just from the title, I am very curious about what Ferris will say, but I won't have time to find out till at least the weekend!

Feel free to join in the discussion by commenting on this post! 🙂