Adapting to online teaching – EAP (3)

This is the third and final post that involves me wittering away about what I have done in my weekly 2hr online lessons with the pre-masters group that I share with my co-ADoS.

Week 5

After the low-point that was Week 4’s lesson (which you can read about in the second post of this series which covers Weeks 3 and 4 – update: the students also didn’t do their homework/preparation for my co-ADoS’s session with them so at least it wasn’t personal 😉 ), I changed my approach in terms of lesson focus. I shifted from trying to tap into and build on the asynchronous content to a straightforward focus on CW2, the students’ speaking coursework: a presentation based on CW3, their extended writing coursework. (However, it is worth mentioning that this shift would have taken place regardless of how Week 4 went, as at this stage in the term students need help with their speaking coursework!)

My lesson had 4 objectives. In the event, we only completed 3 of them. This was fine because the final one was only there in case the main task took less time than I’d anticipated, which it didn’t. The final objective will feature in Week 6’s lesson.

At the start of the week, students had received an email about CW2 with all the important information about it in terms of what it is, how it works and a timeline of tasks and deadlines. I started the lesson with a task based on that email (essentially to make sure they had read it and understood it rather than ignored it!) – working in groups to answer a set of questions based on the email on a pre-prepared Padlet:

I know – a lot of questions. However, they were quick and easy to answer so the task did not take too long. This was the follow-up:

Some questions came up and I was able to respond to those, as well as reiterating key information.

Positives: The task forced them to read the email. (Students are good at not reading emails!) They had the opportunity to ask questions. They engaged!

To improve: I think I would probably do this the same way in future! Beats talking at them about it.

For Pre-Masters students, CW2, like CW3, is synoptic. They work on and submit the same pieces of work for their Research Project (Humanities) or Literature Review (Science and Engineering) module and their AES (Academic English Skills – ours!) module. So in theory they should already have been working on it in their other modules (which focus on content and structure where we focus on language skills). The next step in this lesson, then, was to find out where they were with it. I used Padlet again, but this time as an individual task:

The goal of this task was two-fold – as well as finding out what students had done so far, I wanted students to have a clearer idea of where they were headed next. The questions were based on things they need to do as part of their CW2 preparation, leading them to question 8, where their answers to 1-7 guide them as to what they need to do. Some students had done loads already, some had started, some hadn’t started at all. Fairly typical! (They have been advised that next lesson will start with a progress check and I will want to know what they have done since this lesson! We shall see…) This was the follow-up:

There were a few worries that I was able to address.

Positives: It gave me a snapshot of where they were at, and the opportunity to set up an expectation, based on the task, for next week’s lesson.

To improve: Their answers to question 8 were a bit vague. Next time I would give an example answer to push them to give more useful (to themselves) answers.

The final task of the lesson was completing the practice submission. This was what they were told about it in the information email:

I figured it would be less daunting if we did as much as possible during the lesson and they just had to finish for homework. We did it step by step:

It took them a fair bit of time! In fact, they didn’t quite manage to finish the final stage, hence there wasn’t time to embark on the assessment criteria side of things. However, we will now be looking at the criteria at the start of Week 6 and their submission deadline is not til the end of Week 8, so it’s ok.

Positives: It scaffolded an important task (the practice submission) for them. Giving them time in class alleviates (at least slightly) the time pressure they are under currently, which is important.

To improve: I would make more use of the individual chat feature, to prod them/check on them, rather than only the main everyone chatbox.

Overall: Admittedly this wasn’t the most exciting lesson in the world, but it did what it needed to do and they stayed with me! I deliberately over-planned because I just had no idea how long doing the practice task would take them so I wanted to be prepared for whichever eventuality.

Week 6

The final lesson for this term! I started with a chat box warmer, one I’ve used previously – tell me using one adjective how you feel right now. The adjectives were more positive than Week 4 (when I last used this warmer) on the whole, which was encouraging!

These were my lesson objectives:

For the first, I did a similar task to last week – a set of questions to answer on a pre-prepared Padlet:

The answers were more encouraging this time round – there were still some who hadn’t started but they were in the minority rather than the majority this week! I had to cajole some of them into responding – by the end of the task I had responses from 11 of the 15 students, having started with about 5. Having responded verbally to some of their answers – to acknowledge their progress, to pick up on answers that indicated confusion and to encourage them to keep working hard/not leave it til the last minute – I followed up with this:

There were some concerns that came out, which I was able to address.

Positives about this stage: Students knew they would be expected to give me a ‘progress report’, as I had told them at the end of last week’s lesson. Hopefully more work got done as a result! Knowing that homework (in this case CW2 work) will be revisited in the next class rather than forgotten about is supposed to be more motivating for students. I am getting better at talking into empty space. I think each week since the start of this way of doing things, I have improved and become more comfortable with it little by little (because I only teach one lesson per week, it’s a slow learning curve!). I had thought through feedback and the feedback elements felt less haphazard than they have been known to feel in past lessons.

To improve: I still don’t know what to do with the students who just don’t respond whatever I do or say! Given the stage in the course and the age of the students, though, to an extent I think all I can do is provide opportunities for participation as best I can and make sure they are clearly set up and scaffolded.

 

Then we moved onto the next stage, which I had carried over from last lesson.

This stage was a preparatory stage for the following evaluation stage and the two in combination were to ensure that students have a clear idea of what they need to do in order to get good marks for their presentations. I introduced the 4 criteria and their subheadings, giving a brief explanation of what each one meant.

 

To try and make it clearer for students and to check understanding, I then did a little matching task. The example below is one of the items. It was a series of sentences starting “I should…” and students had to match each one to the correct criterion. I asked them to write their answers (e.g. for this example, they would write 2a).

Positives: Links the things students need to do with the criteria they need to do them for. Doesn’t require a lot of student writing.

To improve: Next time I would insert a breakout room stage and have a task with the 4 criteria and a list of the statements and get the students to discuss and match them, then use what I actually did as the feedback stage. On the plus side, the way I did it didn’t have a negative impact on the next (important) task, which was the final part of this stage of the lesson – the example presentation evaluation:

The first step was getting them all to watch it individually rather than playing it and sharing screen, to avoid bandwidth and audio quality issues. I asked them to write “done” in the chatbox once they had finished. Once they were done, I put them into breakout rooms in groups to discuss the presentation in terms of the criteria and add to the pre-prepared Padlet.

Positives: they did the task and showed understanding of the criteria and how the presentation mapped to the criteria.

To improve: I think the instruction slide above should have been two slides. One for watching the presentation and evaluating it individually and one for doing the group task. Fortunately, used as above it didn’t impact the task negatively! Next time, I would also include an element of getting them to engage with the content (which was quite humorous!) rather than only the quality. A couple of them spontaneously mentioned things about it in the chatbox as they watched which was nice! When I planned the lesson, I was too focused on the main task and forgot to allow for personalisation.

The final stage of the lesson focused on the Q&A. As students are submitting recorded presentations rather than doing them live, we need a live element to address the answering questions part of the criteria (2b!). These will take place in Week 8 and involve use of a list of questions which students are able to look at in advance of their slot (they are already on Blackboard!).

They’ve already had this information (the first 3 questions) on multiple occasions from multiple sources but it bears repetition! (Inevitably, some got it wrong!) Once clarified, we could focus on the fourth question – useful language.

Because we were running out of time a bit, I displayed the above slide and got them to add examples, before getting them to download the list of questions (most of them hadn’t as yet) and putting them into breakout rooms for a bit of practice. Finally, we came back to the main room and I asked each of them one of the questions, just to give them a feel for it.

Positives: They had a chance to practise in groups and a chance to “try it out” in the main room subsequently. They now all have the questions downloaded and have looked at them and realised that it’s not as easy as they had assumed so might actually do some preparation work towards it!

To improve: Next time, rather than bring them back to the main room, I’d do the “giving a feel for it” element in each breakout room in turn. That way, there would be less waiting time for students and they could continue practising after I move to the next group. The final main room stage could then focus on task reflection.

Overall: I finally won at timing! Ok, not quite but much closer than was the case at the start of this term! Nothing took wildly longer than I had anticipated, everything I had planned was done, just in time. The final stage could have used a bit more time but didn’t suffer unduly for it. So, I’m pleased! It means I am getting the hang of estimating how much time it will take to do stuff. As ever plenty to work on and ways to improve but that’s the joy of it. Anyway that is it, for me, for teaching, till September! When it will be a brand new class who come directly to remote learning (the earliest we will do face to face is January and that’s very much dependent on the state of the world by then – anything could happen!). In the meantime, 3 crazy weeks of assessment and then 4 weeks of MUCH-NEEDED downtime are on the way. (I was sick for the whole of the Christmas holiday, my Easter holiday was a stress fest rather than a trip to Sicily thanks to the pandemic, so really, **really** looking forward to some downtime! And then using what I’ve learned this year come the start of next year. 🙂 )

 

Adapting to online teaching 2 (EAP)

After my first two weeks of whole group online teaching this term, I published this post about my experience of adapting to this way of teaching (behind the curve because we didn’t do any whole group teaching on our course last term, only short small group tutorials, which I mentioned briefly in my post about our experience of throwing an EAP course online at short notice). Another two weeks have passed so here is the next instalment! (It’s ok, we only have 6 teaching weeks this term before the final 3 weeks become all about assessment, so there will only be 3 of these posts in total!)

Week 3

The theme for this week was “Overpopulation – myth or problem?”. Having established in Week 2 that I can do break-out rooms (woo!), I decided to try a speaking-focused lesson with a focus on paraphrasing and summarising sources when speaking (which they will need to do for their Coursework 2 presentations). In preparation for the lesson, students had to find a source to support the position they had been assigned (half the class were assigned ‘myth’, half were assigned ‘problem’). In total, there were 4 breakout room tasks, of which the final one was the main discussion task. The first 3 tasks involved random groupings, while for the main task I used customised groupings, because groups had to have a balance of “myth” and “problem” viewpoints and I had to take into account attendance patterns thus far (i.e. I wanted to make sure that as well as being balanced viewpoint-wise, no group had more than one student with patchy attendance!)

This was the first task (yes, somehow I forgot about “A”…! Students didn’t say anything about it, if they noticed. Of course they may have thought the chat box warmer task was “A”!)

This task reviews the skills learners developed and were tested on in Coursework 1, the Source Report. In all the breakout room tasks for this lesson, I included times on the slides to give students an indication of how long they would have in their breakout room to complete the task.

Positive of this task: clear and achievable for students; provided opportunity for speaking/warming up their working in a breakout room mode!

Problem with this task: no tangible output = room for students to slack off. In future I would do something like get groups to report back in the main room, answering questions such as “In your group, whose source was the most current? What different search methods did your group discuss?”

This was primarily a preparatory task for the main discussion but also paraphrasing skill practice. As well as review and practice of written paraphrasing, it encouraged students to pick out key arguments that they could use in the main discussion task. By now, students are used to using Padlet in our whole group sessions both with and without the breakout room/group component.

Positives of this task: useful skill practice, a preparation step for the main discussion, has a tangible/monitorable output (student posts on the padlet)

Problems with this task: my instructions weren’t clear enough – in hindsight I should have included an example post on the padlet!; it took even longer than I had anticipated, which probably also relates to the instructions not being clear enough (fortunately, as has been mentioned previously, timing is very flexible in these sessions this term!); I used the comment function on Padlet to give live feedback/guide students but not all groups noticed the comments as they are not as immediately visually evident as the equivalent on a Google doc would be (I dealt with this by going into breakout rooms and drawing students’ attention to the comments!); my post-task feedback again needed more thought (work in progress!).

This was the final preparation task before the main discussion task. The goal was to give students time to consider the arguments linked to the alternative viewpoint and possible responses to these, so that the main task discussion could be of a higher quality.

Positives of this task: It used the output of the previous task (the arguments on the padlet) with a focus on how they would be used in the subsequent task, which adds coherence to the lesson arc and hopefully means students can see why they are doing what they are doing – there is a clear direction to the tasks;

Problems with this task: students could think “I’ll manage with the discussion, I don’t need to do this task”; any given student’s experience of this task would vary depending on how forthcoming or not their group-mates were. Group dynamics in the online setting is something I need to think about more – how to help students to work well together in groups, in breakout rooms. Maybe add more structure to breakout room tasks e.g. start them with some kind of mini-activity where students have to write something in the chat box, before moving onto using the audio and doing the actual task at hand.

(No, I don’t know what happened to my grasp of the alphabet in these lesson materials! I think I was so focused on the task content that I forgot to pay attention to numbering/lettering!)

So, the main task! Group discussion requiring use of the sources found for homework (research skills), the key arguments identified, paraphrased and considered in the course of this lesson and language for referring to sources verbally.

Positives of this task: Brings together everything the students have done from homework through to final discussion preparation

Problems with this task: As far as I was able to tell, only one out of 4 groups did the task properly! I think again what was missing was a clear feedback stage which students would be made aware of in advance of starting the task and which would require them to DO the task fully in order to complete it; students who want to do the task properly but are in a group with students who are more interested in slacking off lose out (I had one student who, when I was in the breakout room monitoring/checking on them, tried to give her opinion and elicit others’ opinions but radio silence followed!).

This evaluative element of the lesson comes from Sandy’s recent blog post about conversation shapes. (Although it might be hard to see in this screenshot of the slide, depending on the resolution of your screen, when displayed as a pdf of a ppt in Blackboard collaborate, the credits were clearly visible!) Unsurprisingly, for the group who did have their discussion, it looked most like conversation 2. As a class, we identified that conversation 3 would be most effective – contributions of varying length, responding to the other speakers’ contributions, building on other speakers’ contributions. Obviously in groups, there would be more than 2 speakers but the students didn’t seem to have any problems applying the visuals to a group discussion.

Positives about this task: It was great to have a visual way to think about the discussions the students had had (those who had had them!! But I figure for those who bothered less, this was still useful and could be considered in terms of previous discussions). Having identified that 3 would be the most effective, this can be revisited in future speaking lessons as a prompt in advance of discussion tasks. Could also consider what language and cues would help to build a discussion like this e.g. agreeing and disagreeing language that allows connection to what has been said (that’s a good point, but…/yes, I completely agree, also…), back-channeling etc.

Problems with this task: I probably didn’t go far enough with it. Although, possibly this is not a problem but rather a slow-burn thing that bears plenty of revisiting and therefore doesn’t require lengthy input around it straight away. I think in future I will introduce this after the first suitable seminar discussion practice that students do in the course and revisit it and build on it regularly e.g. have example discussions to match to each shape, the language input as mentioned above etc. (Thank you, Sandy!)

The final task of the lesson was a reflective task, with the output going onto a padlet. Reflection is a key component of learning, of course, and actually these students by and large did a good job of this. This is something I need to capitalise on more in future lessons.

Positives of this task: made students think about what they’d done and evaluate it; those who didn’t speak recognised it in their answers (it’s something!);

Problems with this task: Too many closed questions – closed questions are fine, but follow-up questions are needed to push the students further than that.

This task reflected weekly lesson content for week 3. In practice, the students had very little in-class time to start it, because all the teacher-led tasks (as above) took a fair amount of time to do, but students are accustomed to fairly substantial homework tasks and, as this was part of Lesson 3CD, it also factors into their asynchronous learning time.

Overall, Week 3 was a useful learning curve for me. There were plenty of positives, there are plenty of things to work on. I find it really useful to consider each lesson in these terms, think about what went well, what didn’t work and how you’d do it differently next time to make it work better, and think about how to reflect what you’ve learned more immediately in subsequent lessons – I guess that is what reflective teaching and learning is all about!

Week 4

Well…you know those lessons where you think you’ve made a really quite good lesson plan and have high hopes for how the lesson will go, but the reality turns out… rather differently? That was week 4’s lesson for me. The theme for Week 4 was Scientific Controversy. The asynch materials included a listening practice based on a panel discussion about genetic modification, which I asked the students to do in advance of the class as preparation. Though it was homework, it wasn’t extra in the sense that it was part of the core asynch materials for the week.

I began the lesson in the usual way – with a chat box warmer. Today I asked them to pick one adjective that best describes them right now and write it in the chat box. 9/14 responded – tired, exhausted, sleepy, blue, sleepy, energetic, sleepy too, calm, hungry. I acknowledged and responded to all their responses. Then we looked at the lesson objectives. In this lesson, I put extra effort into making sure the lesson objectives were clear and carried through the lesson, so that students could see where they were in relation to the objectives, see progress being made and see how tasks relate to the lesson objectives (I’d read, or watched, I forget which, about the importance of doing this). I did this by repeating the objectives slide at appropriate intervals, highlighting each objective as it was focused on and putting a tick by each objective as it was met. Here is an example:

The first stage of the lesson was a language review stage. 

This stage included a definition check for controversy and scientific controversy and a series of pictures of example scientific controversies for which students had to guess what scientific controversy was being illustrated. Here is an example:

The students responded, and a good pace was maintained. I could perhaps have done more with the second question, tried to get students to share more ideas, but knowing I had some meatier tasks later in the lesson, I didn’t want to spend too long on this one. The final task of the first stage was a quick Quizlet review of some vocabulary from the homework asynch materials. 11/14 did it, which was an improvement on Week 2! I haven’t tried the team/breakout room version yet – that may be for next week!

Positives for this stage: Pacing, student response, topic and activities connected to asynch materials so provide review opportunities, use of pictures.

Problems with this stage: The second question on the picture slides got neglected. I think, as it was unfolding, I worried that if I pushed the second question, the amount of time they spent typing would negatively affect the pace/mean that too long was spent on the activity.

The next stage of the lesson was reviewing the listening homework.

I started with these questions:

As you can see, I messed up the formatting for this slide so the “Write yes or no” instruction looks like it only relates to question 3. I corrected it verbally but only got ‘no’s, for those who responded. Hoping this was for the third question, I reminded them about the online mock exams available, the importance of practice and that there would be opportunity for practice during this lesson too.

This next task was supposed to be a fairly quick and easy way of getting them to show their understanding of the opinions voiced in the panel discussion:

Nobody did it. Nobody responded when I asked why nobody had started doing anything a few minutes later. Eventually I said ok give me a smile emoji if you did the listening homework and a sad face emoji if you didn’t. I only got sad faces. So this task flopped completely. The next one was also not going to be possible as it reviewed the target language from the aforementioned homework:

So I skipped to the point where I displayed the target language and we related it to the conversation shapes we’d looked at in Week 3 and then moved on to the final review task:

(The opinions referred to are those of the panel speakers again.) Obviously this needed a workaround due to the lack of homework issue, so I had them open up the relevant powerpoint which had notes relating to each panellist’s views and got them to tell me via the chatbox when they had done so.

Positives about this stage: It had a mixture of chatbox and breakout room activities, and focused on the content and the language of the listening homework. I had some workarounds for lack of homework.

Problems with this stage: It relied on students having done the homework! The padlet task had no workaround for the zero homework completion (I was working on the basis that at least SOME of them would have done it and be able to post on the padlet, and the rest could interact with that using the comments).

The next and final stage of the lesson was the speaking/live listening stage:

I made this slide a) to give students an overview of this stage of the lesson and b) to insert at the relevant intervals to show which phase of the task we were moving on to. More detailed instructions for each step came at the start of each step. I had hoped this overview would motivate the students to carry out each step as they would know the following steps relied on it and have a clear picture of what they were working towards.

In practice, I put the students into breakout rooms, having set up the task, and went into each room to check on the students. Group A gave me radio silence. No response. No audio, nothing in the chatbox, whatever I said. So I reiterated what they needed to do and said I would be back in 10 minutes to check on them (the preparation stage was 20 minutes). Group B had some students who did engage and some who did radio silence. Thank God for the ones who did! They asked questions about their topic, I checked their understanding of the task and then I left them to it for a bit (again promising to return in 10 minutes to check on them). At the relevant point I went back to Group A, knowing full well that the chances of them having done anything since I left (no activated mics had appeared at any point) were slim (they could have used the chatbox…they hadn’t!). I tried again, more radio silence. Group B, again, had made progress when I went in to check on them. Then I brought everyone back to the main room. Except…most of Group A didn’t appear/reconnect. (So, presumably, they had done the log on and bugger off thing!) Obviously the plan in the slide above was a write-off (the members of Group A that did show up were still radio silent when addressed/instructed!). In the event, Group B did their discussion and I gave them some feedback, again referring to conversation shapes.

Positives of this stage: It was clearly staged. The group that did the parts that they were able to do made a good effort. (I feel for them, being so outnumbered by ones who won’t participate…)

Problems with this stage: It relied on student participation! Step 3 relied on Step 2 being carried out to some degree of success. Too ambitious? But these ARE pre-masters students, it shouldn’t be! There again, they are all knackered (see chatbox warmer – though Mr Energetic? Group A. Just saying.) If the stage had worked as planned, students may have struggled to summarise the other group’s discussion because poor audio quality makes it harder to follow what is being said.

What am I taking away from these 2 weeks? That I want an article/book/video about classroom management with online platforms! Though quite what can be done if students are completely unresponsive, I’m not sure. I have worked really hard on making everything as clear and as meaningful as possible, in terms of tasks and objectives, which I am pleased with. I continue to try different task types and see what does and doesn’t work (with this group). Possibly I approached it wrongly overall – I tried to connect to the asynchronous material and give students engaging tasks that would help them develop their academic skills and prepare for exams, but maybe I should have focused more on their coursework. The next and final big thing students have to do in terms of course work is prepare and submit a presentation recording, so my final 2 lessons will focus on that! I can but do my best. Importantly, I seem much better able to accept things going wrong, take what I can from it and not beat myself up over it than I have been in the past. I think this links with having had a really supportive line manager/programme leader for a year now – work-related anxiety levels are a lot lower than they used to be – and also, of course, that it has been 1.5yrs now of using Mindfulness to cope better with life, including work.

Watch this space to find out what happens in the last instalment of my teaching reflections for this term. The main purpose of these posts is to be my memory, outsourced, when I come to planning lessons next term with a new group of students! Space and time will make it easier to incorporate what I have been learning these last 4 weeks (lots of learning, hard to keep up but I am doing my best!). The course will look a bit different, and is still under construction, but since it will be what it is from the start, rather than a change being thrust on students part way through, there will be a lot more scope for setting clear expectations and instilling good habits etc from the beginning AND the university will have made it so that students can access Google suite from China yayyy (I forget the technical details but it is some kind of VPN they are purchasing that enables it) – so, exciting times ahead!

 

 

What does an ADoS do?

Following Sandy’s post about a busy week in her life as a DoS of IH Bydgoszcz in Poland, which I found very interesting, and attending a Learning and Teaching Professional Scheme introductory meeting and learning that to become a SFHEA one of the things I need to do is write a personal statement about who I am and what I do here at the university,  I was inspired to write a bit about what I do as an ADoS in Sheffield University ELTC’s USIC arm. So here it is! This is what an ADoS does!

(Caveat: every ADoS position is different and depends on the type and size of the institution, as well as institutional requirements – this post is just about what an ADoS does here, where I am – aka what I do! Perhaps the title should be “What does *this* ADoS do?”!)

  • I teach. (Yay!) Currently 6hrs per week plus 3-4 WAS’s (1hr Writing Advisory Service appointments), as of next week 9hrs per week plus 1 WAS. Along with that, of course, comes all the usual planning, prepping, marking and admin. Am also timetabled 6hrs of cover slots per week.
  • I write meeting notes. Well, I co-write meeting notes with my fellow January ADoS. (At this point, I should explain – I am ADoS for the January Foundation cohort of students. We currently have 4 cohorts of students – September Foundation and Pre-Masters, and January Foundation and Pre-Masters – but will go up to 5 in April. The April cohort is always smaller, so although it is also a mixture of Foundation and Pre-Masters students, it is counted as one cohort.) We do this using Google Docs and share the notes with our teachers towards the end of one week, ready for the meeting at the start of the next. This means that teachers have a written record to refer back to without having to write copious notes on a scrap of paper that then gets lost or something! We give them a print-out in the meeting, so they can write down anything extra that comes up, or anything that wasn’t clear to them that they asked about.
  • I run…co-run…weekly module meetings (in previous terms we did the meetings independently but this term about 95% of our teachers are teaching both January cohorts so it made sense to combine it; this may revert to separate meetings next term, depends on timetables and teachers!). These meetings are about what’s got to happen in the immediate future and looking forward to next week’s lessons. (So, as ADoSes, say it’s week 5, we write meeting notes for week 6’s meeting in which we are talking about week 7 lessons!)
  • I make materials. Last term, that included materials for the workbook, as we adapted some lessons based on teacher feedback and student response from previous use of them. This involves not just creating the new materials and putting them into the workbook but also updating the PowerPoints, teachers’ notes and student worksheets that live in our shared drive resources folder so that everything matches up with the changes that have been made. Examples this term include independent listening development materials, and in-class or self-study materials for using www.wordandphrase.info/academic. (Here I have linked to copies of the materials in my personal Google Drive so that you can see them, but the originals live on my work Google Drive and are set to be usable only by people with sheffield.ac.uk email addresses.)
  • Relating to the above, I seek feedback regarding the materials in order to use it to improve them for the next time around.
  • I make sure the tracker is up to date and correct. The tracker is an Excel spreadsheet with marks and progression rating colours for all students; there is a separate tracker for each cohort. This involves inputting data (e.g. the diagnostic test results), reminding teachers when data they are responsible for inputting is due, helping teachers when they have trouble inputting data, correcting mistakes with student information (e.g. when students change groups due to changing pathway), and fixing it when random things happen, like a student ending up with two lines that correspond to their name/number but with non-identical scores (cue checking scripts to work out which is the correct row and delete the other). I have learnt what a VLOOKUP is and what filters are. Either way, we hate the tracker… 😉
  • I make sure all the other admin happens when it’s meant to. This includes transferring progression colours from the tracker to the student management system at certain points, and generating learning conversation documents (even if we don’t actually have the conversations, as this term, the data is needed so that academic success tutors can discuss it with students). This term the document generation has been mostly automated, but teachers still need to select SMART targets in a Google Sheet and copy and paste the resultant data from Google Docs to a certain spreadsheet that will then be used for a mail merge, and stuff like that. Teachers need to be told it’s coming up, taught how to do it (in the case of new teachers), supported through it (i.e. troubleshooting if/when they struggle), and we have to check everything at the end to make sure all is in order.
  • I deal with unforeseen situations that come up e.g. a teacher being off sick for longer than a day or two when there is a tight marking deadline and other admin too – between us the ADoSes have to cover that teacher’s marking and admin.
  • I make sure everything is ready for assessments. This includes sending mock tests/seminar discussion exam sheets/etc off to be printed well in advance of when the assessment will take place (printing has a two week turnaround and may take longer in busy periods), setting up Turnitin buttons on MOLE, putting coursework templates on MOLE, doing summative assessment papers myself as part of pre-standardisation etc.
  • I am the first point of contact when teachers have any questions, problems or issues with January IFY students and teaching (and basically anything relating to anything they have to do here, e.g. the admin, the tools used to do the admin etc.). This is mostly done in person, in the staffroom, but also involves emails. Where relevant, we then liaise with the person or people who need to be involved in resolving the issue. Otherwise, we offer support/guidance as necessary. The main skills this requires are patience, supportiveness and the ability to be interrupted, provide the help needed and seamlessly pick up the thread of what you were doing when help was needed! I am currently trying to devise a way of providing more support to new teachers than we currently do – watch this space!
  • I run…co-run…standardisation for all summative assessments. This involves us marking several samples of a given assessment, rationalising our scores (which are under the influence of the centre-level standardisation that Studygroup centres do), agreeing together what the official scores are and then getting teachers to do the same. With exam marking standardisation, we will then all be in a big room while the teachers are looking at and marking the samples and the discussion follows directly. Once complete, marking commences. With coursework, we send out the samples in advance of a given weekly meeting and in that meeting share and discuss scores. We also have to do this for the speaking exams (the seminar discussion and the presentation), which are both done by sending out recordings in advance for teachers to watch and grade, after which scores are discussed as with the written exams.
  • I double-mark speaking exams. In order to increase reliability, we double-mark a couple of groups (seminar discussion) or a few students (individual presentations) with each teacher.
  • I sign marks off and prepare module boxes. Once all marks have been inputted into the spreadsheet and student management system, everything needs double-checking. Errors get picked up and changed, and then, when everything is in order, we sign off the marks for a given cohort for a given exam. The paperwork goes into the module box along with some samples of high, medium and low-scoring papers and evidence of standardisation. The resultant module box is stored ready to be audited by the external examiner when s/he pays a visit, so it is important that everything is in order.
  • I randomly spot-check first draft feedback on coursework to make sure we as a team are being consistent in the amount and quality of feedback that is given, and advise where any changes/tweaks are necessary.
  • I do naughty student meetings. These meetings are 1-1 with the student and their teacher, and are held when students plagiarise in the first draft of their coursework. The idea is to find out what’s gone on and why, and to ensure that it will be addressed before the piece of work is submitted finally. (Otherwise, the student will have to go to a misconduct panel hearing and that makes more paperwork for us and more stress for the student!)
  • I prepare academic misconduct case paperwork. If a student’s final draft submission has high levels of plagiarism or it is clear they have received help because the work submitted is too far above their normal level, we need to prepare paperwork for academic misconduct panel hearings. This mostly involves filling in forms and providing evidence (past pieces of written work, which necessitates digitised work folders, which we also set up for teachers to use).
  • I invigilate listening exams. Mostly Studygroup provide invigilators for exams but our listening exams are complicated enough that we provide a chief invigilator per exam room. Generally that’s around 4 chief invigilators per exam. One of those things that is terrifying the first time you do it and then subsequently you wonder what all the fuss was about!
  • I send next term’s workbook off to the printers. Each term, at some point sufficiently in advance of the end of term, next term’s workbook has to be sent off to print. This involves making any changes that have been flagged up, altering or replacing lessons, proofreading, editing, checking formatting hasn’t altered, sometimes throwing in an alternative syllabus at the last minute because we have been told that due to timetabling we will have to deliver a 2hr-1hr-2hr delivery pattern as well as the default 2-2-1 delivery pattern. That kind of thing.
  • I am supposed to do 3hrs CPD a week, but often it gets relegated to the weekend other than an hr of scholarship circle most weeks (unless stuff comes up which needs dealing with pronto, in which case that takes priority!).

So that’s the kind of thing (there is more, but that is all I can think of for now!)… except rather than “I”, it’s “we”, really! Each of the five cohorts mentioned towards the start of this post (bullet point two) has an ADoS and together we are a team. Within that, some of us also operate in sub-teams: I am part of Team Jan ADoS, and the two September ADoSes work together closely too. For me, the teamwork aspect is the best part of it! We bounce off each other, we support each other, between us we have more brains to cope with remembering everything that has to be done, we commiserate with each other (when the tracker plays up, for example!), we help each other out when there’s lots to be done (e.g. the example of covering the sick teacher’s marking and admin, we all took on some of it and between us got it done) and so on.

I like my job, when it isn’t driving me crazy 😉 If you have ADoSes where you work, what similarities and differences are there between my ADoS role and those where you are?

Another and final question I want to leave you with: How do you support new teachers where you work? Will be interested to hear any replies… please comment!

WAS (Writing Advisory Service) at Sheffield University ELTC

Amongst many other things (e.g. pre-sessional programmes, general English classes, IELTS and CAE preparation, foundation programmes, in-department support, in-sessional programmes and credit-bearing modules) the ELTC also provides a Writing Advisory Service (WAS) to all students studying at the University of Sheffield. It is not only international students who use this service, home students use it too. In terms of levels, we get a mixture of bachelors students, masters students, PhD students and lifelong learning students. This post is going to talk a bit about what a WAS appointment offers and my experience of doing them.

What is “a WAS”?

It is a writing advisory service appointment which lasts for one hour. Any student studying at the university can book an appointment. Teachers are timetabled WAS slots and these appear on our timetabling system. When a student books an appointment, we are able to access their information by logging in to this system and clicking on the relevant slot. In advance of the appointment, we are able to see a student’s name, their department and course, their nationality and an appointment history. So, if students have been before, we can see a record of what they brought (i.e. what type of writing) and what advice they were given. If it is their first appointment, then obviously this part will be blank. These are not “our” students; in most cases you see a different student every appointment. Occasionally you get needy students who try to book the same tutor every time, but this is discouraged as we don’t want to encourage over-dependence on a particular person.

How does it work?

Students have to report to reception so that reception can mark them as attended, which unlocks the appointment history so that we are able to edit it. As teachers, we have to be at reception just before the session is due to start, to meet the student and take them to the allocated room, which always has a computer in it. Students have to bring a print out of whatever piece of writing they want help with. We are not expected to read stuff on screen, thankfully! Before I look at the piece of writing, I ask the student about it – what is it? what problems do they think they have with it? is there anything in particular they want me to look at (e.g. structure, referencing etc.) – so that I have a context to start from. Then the student has to sit and wait while I read through their writing and identify issues with it.

Once I have had a chance to look through the piece of writing, what follows is a discussion of it with the student. Generally I focus on structural issues first – so problems with the introduction, thesis statement, paragraph topic and concluding sentences, conclusion. Next would be other aspects of cohesion like linking language, demonstratives and catch-all nouns, lexical chains, etc. Then issues of academic style e.g. formality/appropriate vocabulary and referencing. Finally, I’ll pick out a few persistent grammar issues to discuss. The idea is that it’s NOT a proofreading service, it’s an opportunity for students to learn how to write better, based on a piece of their writing. Therefore, ideally, we need to equip students to deal with their issues independently. One way of doing this, for example, is using www.wordandphrase.info/academic to model how to use it to answer questions relating to what word to use and how to use it. We also direct them to various websites such as the Manchester Phrasebank.

The final stage of the appointment is writing it up in the student’s appointment record notes as they have access to these notes. The notes are written to the student, as they are for the student to refer back to, rather than being written in lesson record style. I usually get the student to tell me what we’ve talked about, as a way to reinforce what we have done, and write that into their records, pasting in any links we have used in the course of the session too.

This is a recording which lives on the Writing Advisory Service web page. Students can watch it in advance of their appointment, in order to know what to expect.

My experience of WASs

  • It’s not uncommon to get a no-show! Students are encouraged to cancel in advance if they can’t make it but sometimes that doesn’t happen. They may get caught up in whatever else they are doing or forget they made the appointment etc. Repeat offenders get banned from making appointments for a period of time.
  • When students do show up (which is most of the time, to be fair!), they are very enthusiastic and appreciative. They want to do well in whatever assignment it is they are working on and recognise that what you are discussing with them can help them with this.
  • The first one you ever do is terrifying and difficult, but as with so many things, with experience it gets much easier. You learn what to look out for and how to help students get to grips with those issues. You learn not to be daunted by whatever is put in front of you, however obscure it may seem at first.
  • Because I teach EAP generally, it’s easy to pick out materials from our electronic stores of them, to illustrate what I am trying to explain to students. This is very helpful!
  • You get to see a wide range of different types of writing from different subjects. It can be a bit scary to be faced with an essay full of legalese, especially if you are a bit tired anyway (as with my slot last thing on a Friday!), but you get used to looking beyond the subject specific stuff (which we aren’t expected to be experts on!).
  • They are enjoyable! It’s a bit of a faff because my colleagues and I are in a different building from where the appointments happen, so though it’s an hour’s appointment, with the walking there and back etc. it’s nearer an hour and a half of time gone, but once you’re there and doing it, the hour flies!

Do you have anything like this where you work? How does it work?

Scholarship Circle: Giving formative feedback on student writing (2.2)

For more information about what scholarship circles involve, please look here and for write-ups of previous scholarship circles, here

You might also be interested in session 1 / session 2 / session 3 and 4 / session 5-8 / session 9 / session 2.2 of this particular circle.

In this week’s session of the scholarship circle, we started by doing a pilot text analysis. In order to do this, we needed a first draft and a final draft of a piece of CW3 essay coursework and a method of analysis. Here is what it looked like:

So…

  •  QM code refers to the error correction code and there we had to note down the symbol given to each mistake in the first draft.
  • Focus/criterion refers to the marking criteria we use to assess the essay. There are five criteria – Task achievement (core elements and supported position), Organisation (cohesive lexis and meta-structures), Grammar (range and accuracy), Vocabulary (range and accuracy) and Academic conventions (presentation of source content and citations/references). Each QM can be attached to a criterion, so that when the student looks at the criteria-based feedback, it also shows them how many QMs are attached to each criterion. The more QMs there are, the more that criterion needs work!
  • Error in first draft and Revision in final draft require exact copying from the student’s work unless they have removed the word/s that prompted the QM code.

Revision status is where the method comes in. Ours, shared with us by our M.A. researcher whose project our scholarship circle was borne out of, is based on Storch and Wigglesworth. Errors are assigned a status as follows:

  • Successful: the revision made has corrected the problem
  • Unsuccessful: the revision made has not corrected the problem
  • Unverifiable: the QM was wrongly used by the teacher, and the student has either made an incorrect change in the final draft based on it, or has made no change where none was in reality required
  • Unattempted: the QM is correctly used but the student does not make any change in the final draft.
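To give a concrete picture of the sort of data this coding produces, here is a minimal sketch in Python of tallying revision statuses per QuickMark code. This is purely illustrative and not part of our actual method – the QM codes (“WW”, “Art”) and the pair-based record format are hypothetical.

```python
# Hypothetical sketch: tally revision statuses per QuickMark (QM) code,
# assuming each coded error is recorded as a (qm_code, status) pair.
from collections import Counter

# The four statuses from Storch and Wigglesworth, as used in our pilot.
STATUSES = {"successful", "unsuccessful", "unverifiable", "unattempted"}

def tally(coded_errors):
    """Count how often each revision status occurs for each QM code."""
    counts = {}
    for qm_code, status in coded_errors:
        assert status in STATUSES, f"unknown status: {status}"
        counts.setdefault(qm_code, Counter())[status] += 1
    return counts

# Example: three errors coded from one (imaginary) student's drafts.
sample = [("WW", "successful"), ("WW", "unattempted"), ("Art", "unsuccessful")]
print(tally(sample))
```

A consistent record like this would make it easy to see, for example, which codes students most often leave unattempted.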

Doing the pilot threw up some interesting issues that we will need to keep in mind if we use this approach in our data collection:

  • As there are a group of us rather than just one of us, there needs to be consistency with regards to what is considered successful and what is considered unsuccessful. E.g. if the student removes a problem word/phrase rather than correcting it, is that successful? If the student corrects the issue identified by the QM but the sentence is grammatically incorrect, is that successful? The key here is that we make a decision as a group and stick by that as otherwise our data will not be reliable/useful due to inconsistency.
  • We need to beware making assumptions about what students were thinking when they revised their work. One thing a QM does, regardless of the student’s understanding of the code, is draw their attention to that section of writing and encourage them to focus closely on it. Thus, the revision may go beyond the QM as the student has a different idea of how to express something.
  • It is better to do the text analysis on a piece of writing that you HAVEN’T done the feedback on, as it enables you to be more objective in your analysis.
  • When doing a text analysis based on someone else’s feedback, however, we need to avoid getting sucked in to questioning why a teacher has used a particular code and whether it was the most effective correction to suggest or not. These whys and wherefores are a separate study!

Another thing that was discussed was the need to get ethical approval before we can start doing anything. This consists of a 250 word overview of the project, and we need to state the research aims as well as how we will collect data. As students and teachers will need to consent to the research being done (i.e. to use of their information), we need to include a blank copy of the consent form we intend to use in our ethical approval application. By submitting that ethical approval form, we will be committing to carrying out the project so we need to be really sure at this point that this is going to happen. Part of the aim of today’s session, in doing a pilot text analysis, was to give us some idea of what we would be letting ourselves in for!

Interesting times ahead, stay tuned… 🙂

Scholarship Circle: Giving formative feedback on student writing (2.1)

It’s a brand new term (well, sort of, it’s actually the third week of it now!), the second of our four terms here at the college, and today (Monday 21st January, though I won’t be able to publish this post on the same day!) we managed our first scholarship circle session of the term.

For more information about what scholarship circles involve, please look here and for write-ups of previous scholarship circles, here

You might also be interested in session 1 / session 2 / session 3 and 4 / session 5-8 / session 9 of this particular circle.

The biggest challenge we faced was remembering where we had got to in the final session BC (Before Christmas!). What were our research questions that we had decided on again? Do we still like them? What was the next step we were supposed to take this term?

Who?

We talked again about which students we wanted to participate – did we want IFY (Foundation) or PMP (Pre-Masters)? We considered the fact that it’s not only linguistic ability which influences response to feedback (our focus) – things like age, study pathway, past learning experiences and educational culture in country of origin will all play their part. Eventually, we decided to focus on IFY students, as with PMPs their coursework may alter dramatically between first and final draft submissions due to feedback from their content tutor, which would affect our ability to do text analysis regarding their response to our first draft feedback. Within the IFY cohort we have decided to focus on the c and d level groups (the two bottom sets, if you will), as these students are most at risk of not progressing, so any data which enables us to refine the feedback we give them and others like them will be valuable.

What?

It is notoriously tricky to pin down a specific focus and design a tool which enables you to collect data that will provide the information you need in order to address that focus. Last term, we identified two research questions:

This session, we decided that this was actually too big and chose to focus on no. 2. Of course, having made that decision (and, in fact, in the process of making it), we discussed what specifically to focus on. Here are some of the ideas:

  • Recognition – which of the Quickmarks are students able to recognise and identify without further help/guidance?
  • Process – are they using the Quickmarks as intended? (When they don’t recognise one, do they use the guidance provided with it, which appears when you click on the symbol? If they do that, do they use the links provided within that information to further inform themselves and equip themselves to address the issue? You might assume that students know what the symbols mean, or that they read the information if they don’t, but anecdotal evidence suggests otherwise – e.g. a student who was given a wrong word class symbol and changed the word to a different word rather than changing the class of it!)
  • Application – do they go on to be able to correct other instances of the error in their work?

Despite our interest in the potential responses, we shelved the following lines of enquiry for the time being:

  • How long do they spend altogether looking at their feedback?
  • How do they split that time between Quickmarks, general comments and copy-pasted criteria?

We are mindful that we only have 6 weeks of sessions this term (and that included this one!) as this term’s week 10, unlike the final week of last term, is going to be, er, a tad busy! (An extra cohort and 4 exams being done between them vs one cohort and one exam last time round!) As we want to collect data next term, that gives us limited time for preparation.

How?

We are going to collect data in two ways.

Text analysis

We each will look at a first draft and a final essay draft of a different student and do a text analysis to find out if they have applied the Quickmark feedback to the rest of their text. This will involve picking a couple of Quickmarks that have been given to the student in their first draft, identifying and highlighting any other instances of that error type, and then looking at the final draft in order to find the highlighted errors so that we can see if they have been corrected, and if they have, how – successfully or not.

We are going to have a go at this in our session next week, to practise what we will need to do and agree on the process.

Questionnaire

Designing an effective questionnaire is very difficult and we are still in the very early stages. We are still leaning towards Google Forms as the medium. Key things we need to keep in mind are:

  • How many questions can we realistically expect students to answer? The answer is probably fewer than we think, and this means that we have to be selective in what questions to include.
  • How can we ask the questions most clearly? As well as using graded language, this means thinking about question types – will we use a Likert scale? will we use tick boxes? will we use any open questions?
  • How can we ensure that the questions generate useful, relevant data? The data needs to answer the research questions. Again, this requires considering different question types and what sort of data they will yield. Additionally, knowing that we need to analyse all the data that we collect, in terms of our research question, we might want to avoid open questions as that data will be more difficult and time-consuming to analyse, interesting though it might be.

The questions will obviously relate to the focuses identified earlier – recognition, process and application. One of our jobs for the next couple of sessions is to write our questions. It’s easy (ish!) to talk around what we want to know, but writing clear questions that elicit that information will be significantly more challenging!

Another thing we acknowledged, finally, is that research-wise we are not doing anything new that hasn’t been done before, BUT the “newness” comes from doing it in our particular context. And that is absolutely fine! 🙂

Homework: 

Well, those of us who haven’t got round to doing the reading set at the end of the previous session (cough cough) will hopefully manage to finish that. (That was Goldstein, L., “Questions and answers about teacher written commentary and student revision: teachers and students working together”, in the Journal of Second Language Writing, and Ene, E. & Upton, T. A., “Learner uptake of teacher electronic feedback in ESL composition”.) Otherwise, thinking about possible questions and how to formulate them!

Scholarship Circle: Giving formative feedback on student writing (9)

It’s the last week of term, exam week, and we have managed to squeeze in a final scholarship circle meeting for the term. How amazing are we? 😉 I also have no excuse not to write it up shortly afterwards – nothing sensitive content-wise and, for once in a way, I have a wee bit of time. Sort of. (By the time you factor in meetings, WAS and ADoS stuff for next term, not as much as you might think…!)

For more information about what scholarship circles involve, please look here and for write-ups of previous scholarship circles, here

You might also be interested in session 1 / session 2 / session 3 and 4 / session 5-8 of this particular circle.

So, session 9. The first thing we recognised in this session is that we won’t be collecting data until term 3 for September students and term 4 for January students (which will be their term 3). This is a good thing! It means we have next term to plan out what we are going to do and how we are going to do it. It sounds like a lot of time but there is a lot we have to do and elements of it are, by their nature, time-consuming.

Firstly, we need to decide exactly who our participants will be and why. “You just said term 3/4 September/January students!” I hear you say. Yes…generally, that is the focus. In other words, students who are doing a coursework essay and therefore receiving QuickMark feedback. However, within those two broad groups (September Term 3/January Term 4), we have IFY (foundation) and PMP (Pre-masters) students and the IFY cohorts are streamed by IELTS score into a, b, c and (numbers depending) d groups. So, we need to decide exactly who our participants will be. This choice is affected by things like the age of the participants (some of our students are under 18 which makes the ethical approval process, which is already time-consuming, markedly more difficult) and what exactly we want to be able to find out from our data. For example, if we want to know the effect of the streaming group on the data, then we need to collect the data in such a way that it is marked for streaming group. (NB: as I learnt last term in the context of a plagiarism quiz that had to be disseminated to all students, it is a bad idea for this information to rely on student answers – having a field/question such as “What group are you in?” might seem innocuous but oh my goodness the random strangeness it can throw up is amazing! See pic below…)

“Bad” and “g’d” are other examples of responses given! …Students will be students? We need to make sure that our Google Form collects the information we want to collect and allows us to analyse it in the way that we want to analyse it. Obviously, we need to know what we want to collect and how we want to analyse it before we can design an effective tool. Additionally, however pesky they might be, participant students will also need to be a) fully informed regarding the research as well as b) aware that it is voluntary and that they have the right to cease participation and withdraw their data at any point.

Developing our research is just one of the directions that our scholarship circle might take next term. We also discussed the possibility of further investigation into how to teach proofreading more effectively. We are hoping to do some secondary research into this and refine our practice accordingly. While we will do what we can, we recognised that time constraints may affect what we can do. For example, we discussed the following activity to encourage proofreading after students receive feedback on their drafts:

  • Put students in groups of four and have them look at the feedback, specifically QuickMarks, on their essays
  • Students, in their groups, to work out what is wrong and what the correction should be. Teacher checks their correction and ensures that it is correct.
  • Students to pick a mistake or two (up to four sentences) and copy them onto a piece of flip-chart paper with the mistakes still in place
  • Each group passes their flip-chart paper to another group, who should try to correct it.
  • The flip-chart paper passes from group to group, with the idea that they look at the mistake and the first correction group’s edits and see if they think it is now correct or want to make additional changes (in a different colour)
  • Finally, the original group gets their flip-chart paper with corrections and edits back and compares it with their correct version.

This is a really nice little activity. However, after students receive their first draft feedback, they do not have any more lesson time (what time remains of the term, after they get their feedback, is taken up by tutorials, mocks and exams!), so it wouldn’t be possible to do it using that particular feedback. Perhaps what we need to do is use the activity with a different piece of work (for example a writing exam practice essay), and integrate other proofreading activities at intervals through the course, so that when they do get their first draft feedback for their coursework, they know what to do with it!

Another thing we discussed in relation to proofreading and helping students to develop this skill is the importance of scaffolding. I attempted to address the issue of scaffolding the proofreading process in a lesson I wrote for my foundation students last term. In that lesson, students had to brainstorm the types of errors that they commonly make in their writing – grammar, vocabulary, register, cohesion-related things like pronouns etc. – and then I handed out a paragraph with some of those typical errors sown in, and they had some time to try and find the errors. After that, I gave them the same paragraph but with the mistakes underlined, and, having checked which ones they had found correctly, they had to identify the type of error for each one that had been underlined. Finally, I gave them a version with the mistakes underlined and identified using our code, and they had to try and correct them. All of this was group work. The trouble was the lesson wasn’t long enough for them (as a low-level foundation group) to have as much time as they could have done with for each stage of the lesson. I had hoped there would be time for them to then look at their coursework essays (this was the last lesson before first draft submission) and try to find and correct some mistakes, but in reality we only just got through the final paragraph activity.

Other ideas for scaffolding the development of proofreading skills were to prepare paragraphs that had only one type of mistake sown in so that students only had to identify other errors of that particular type, with the idea that they could have practice at identifying different errors separately before trying to bring it together in a general proofreading activity. That learning process would be spread over the course rather than concentrated into one (not quite long enough) lesson. There is also a plan to integrate such activities into the Grammar Guru interactive/electronic grammar programmes that students are given to do as part of their independent study. Finally, we thought it would be good to be more explicit about the process we want students to follow when they proofread their work. This could be done in the general feedback summary portion of the feedback. E.g. cue them to look first at the structural feedback and then at the language feedback etc. That support would hopefully avoid them being overwhelmed by the feedback they receive. One of our tasks for scholarship circle sessions next term is to bring in the course syllabus and identify where proofreading focuses could be integrated.

Another issue regarding feedback that we discussed in this session was the pre-masters students’ coursework task, which is synoptic – they work on it with their academic success tutor with a focus on content and with us with a focus on language. Unfortunately, with the set-up as it is, students do not work on it with a subject tutor, so there is no content “expert” to guide them, and there is a constant tension with regards to the timing of feedback. Our team give feedback on language at the same time as the other team give feedback on content (which, as non-experts, they find a struggle, exacerbated by not being able to comment on language, especially as the two are fairly entwined!). Content feedback may necessitate rewriting of chunks of text, rendering our language feedback useless at that point in time. However, there is not enough time in the term for feedback to be staggered appropriately. We don’t have a solution for this, other than more collaboration with Academic Success tutors, which again time constraints on both sides may render difficult, but it did lead us onto the question of whether we should, in general, focus our QuickMarks only on parts of text that are structurally sound. (Again, there isn’t time for a round of structural feedback followed by a round of linguistic feedback once the structural feedback has been implemented.)

Suffice it to say, it is clear that we still have plenty to get our teeth into in future scholarship circle sessions – our focus, and closely related areas, are far from exhausted. Indeed we still have a lot to do, with our research in its early stages. We are not sure what will happen next term with regards to when the sessions will take place, as it is timetable-dependent, but we are keeping our current time-slot pencilled in as a starting point. Fingers crossed a good number of us will be able to make it or find an alternative time that more of us can do!

Thank you to all my lovely colleagues who have participated in the scholarship circle this term, it has been a brilliant thing to do and I am looking forward to the continuation next term!

 

 

Scholarship Circle: Giving formative feedback on student writing (5-8)

Last time I blamed time and workload for the lack of updates, but this time the reason there is only one post representing four sessions is in part a question of time but more importantly a question of content. This will hopefully make more sense as I go on to explain below!

(For more information about what scholarship circles involve, please look here and for write-ups of previous scholarship circles, here

You might also be interested in session 1 / session 2 / session 3 and 4 of this particular circle.)

Session 5 saw us finishing off what we started in Session 4 – i.e. editing the error correction code to make it clearer and more student-friendly. So, nothing to add for that, really! It was what it was – see write-up of Session 4 for an insight.

Sessions 6 and 7 were very interesting – we talked about potential research directions for our scholarship circle. We started with two possibilities. I suggested that we replicate the M.A. research regarding response to feedback that started the whole scholarship circle off and see if the changes we are making have had any effect. At the same time as I had that idea, another of our members brought forward the idea of participating in a study that is going to be carried out by a person who works in the Psychology department at Sheffield University, regarding reflection on feedback and locus of control. What both of these have in common is that they are not mine to talk about in any great depth on a public platform given that one has not yet been published and the other is still in its planning stages.

Session 6

So, in Session 6, the M.A. researcher told us, in depth, all about her methodology (in theory, if we were to replicate her study, we would be using that methodology), and then we heard about the ideas and tools involved in the Psychology department research. From the former, it was absolutely fascinating to hear how everything was done, and also straightforward enough to identify that replicating that study would take up too much time at critical assessment points when people are already pressed for time: it’s one thing to give up sleeping if you are trying to do your M.A. dissertation to distinction level (congratulations!) but another if you are just working full time and don’t necessarily want to take on that level of workload out of the goodness of your heart! We want to do research, but we also want to be realistic. With regards to the latter, it sounded potentially interesting, but while we heard about the idea, we didn’t see the tools it would involve until Session 7. The only tool that we contributed was the reflection task that we have newly integrated into our programme, which students have to complete after they receive feedback on the first draft of their assignments.

Session 7

Between Session 6 and 7, we got hold of the tools (emailed to us by the member in touch with the research in the Psychology department) and were able to have a look in advance of Session 7. In Session 7, we discussed the tools (questionnaires) and agreed that while some elements of them were potentially workable and interesting, there were enough issues regarding the content, language and length that it perhaps wasn’t the right direction for us to take after all. The tools had been produced for a different context (first year undergraduate psychology students). We decided that what we needed was to be able to use questionnaires that were geared a) towards our context and students and b) towards finding out what we want to know. We also talked about the aim of our research, as obviously the aim of a piece of research has a big impact on how you go about doing that research. Broadly, we want to better understand our students’ response to feedback and from that be able to adapt what we do with our feedback to be as useful as it possibly can be for the students. We spent some time discussing what kinds of questions might be included in such a questionnaire.

So, at this point, we began the shift away from focusing on those two studies, one existing, complete but unpublished, and one proposed, and towards deciding on our own way forward, which became the focus of Session 8.

Session 8

Between Session 7 and Session 8, our M.A. Researcher sent us an email pointing out that in order to think about what we want to include in our questionnaires, we first need to have a clear idea of what our research questions are. So that was the first thing we discussed.

One fairly important thing that we decided today as part of that discussion about research questions was that it would be better to focus on one thing at a time. So, rather than focusing on all the types of feedback that Turnitin has to offer within one project, this time round we will focus specifically on the QuickMarks (which, of course, we have recently been working on!). Then, next time round we could shift the focus to another aspect. This is in keeping with our recognition of the need to be realistic regarding what we can achieve, so as to avoid setting ourselves up for failure. (I think this is a key thing to bear in mind for anybody wanting to set up a scholarship circle like this!) The questions we decided on were:

  1. Do students understand the purpose of feedback and our expectations of them when responding to feedback?
  2. How do students respond to the QuickMarks?

Questions that got thrown around in the course of this discussion were:

  • Do students prioritise some codes over others? E.g. do they go for the ones they think are more treatable?
  • What codes do students recognise immediately?
  • If they don’t immediately recognise the codes, do they read the descriptions offered?
  • Do they click on the links in the descriptions?
  • Do they do anything with those links after opening them? (One of the students in the M.A. research opened all the links but then never did anything with them!)
  • How much time do they believe they should spend on this feedback?
  • How long are students spending on looking at the feedback in total?
  • How do students split their time between QuickMarks (or “in-text feedback” more broadly, which includes comments and text-on-text, a.k.a. the “T” option, which some of us haven’t previously used!) and the general comments and the grade form?

Of course, these questions will feed into the tool that we go on to design.

We identified that our learner training ideas (e.g. the reflection form; improving the video that introduces students to Turnitin feedback; developing a task to go with the video in which they answer questions and, in so doing, create a record of the important information that they can refer back to) can and should be worked on without waiting to do the research. That way, having done what we can to improve things based on our current understanding, we can use the research to highlight any gaps.

We also realised that for the data regarding QuickMarks to be useful, it would be good for it to be specific. So, one thing on our list of things to find out is whether Google Forms would allow us to have an item in which students identify which QuickMarks they were given in their text and then answer questions regarding their attitude to those QuickMarks, how clear they were, etc. Currently we are planning on using Google Forms to collect data, as it is easy to administer and organises the results in a visually useful way. Of course, that decision may change based on whether or not it allows us to do what we want to do.
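If Google Forms does end up being the tool, one workable pattern might be a checkbox item listing the QuickMark codes, since checkbox selections typically come out of the CSV export as a single comma-separated cell, which is easy to split and tally. A minimal sketch, assuming a hypothetical export with made-up codes (WF, SVA, Art, WO):

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical extract of a Google Forms CSV export, where a checkbox item
# ("Which QuickMarks did you receive?") appears as one comma-separated cell.
SAMPLE_CSV = '''Timestamp,Which QuickMarks did you receive?,Were they clear?
01/12 09:00,"WF, SVA, Art",Mostly
01/12 09:05,"Art, WO",Yes
01/12 09:11,"WF, Art",No
'''

def tally_quickmarks(csv_text: str, column: str) -> Counter:
    """Count how often each QuickMark code appears across responses."""
    counts = Counter()
    for row in csv.DictReader(StringIO(csv_text)):
        for code in row[column].split(","):
            counts[code.strip()] += 1
    return counts
```

The same tally could then be broken down by streaming group, provided the group field is collected reliably (i.e. not as free text!).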

Lots more to discuss, and hopefully we will be able to squeeze in one more meeting next week (marking week, but, most unusually, with only one exam to mark – in a normal marking week it just would not be possible) before the Christmas holidays begin… we shall see! Overall, I think it will be great to carry out research as a scholarship group and use it to inform what we do (hence my initial idea, overambitious as it turns out…). Exciting times! 🙂

 

CUP Online Academic Conference 2018: Motivation in EAP – Using intrinsically interesting ‘academic light’ topics and engaging tasks (Adrian Doff)

This is the first session of this online conference that I have been able to attend live this week, hoping to catch up with some of the others via recordings…

Part of a series of academic webinars running this week, this is the 5th session out of 8. Apparently recordings will be available in about a week’s time. Adrian Doff has worked as a teacher and teacher trainer in various countries and is co-author of the Meanings into Words and Language in Use series, amongst other things. He is talking to us from Munich, Germany.

We are going to look at what topics and tasks might be appropriate in EAP teaching, especially for students who need academic skills in English but also need to improve their general language ability. For most of his ELT life, Adrian has been involved in general ELT as a teacher and materials writer, and he has recently moved into EAP, mainly through supplementary material creation.

Our starting point for this webinar: look at some of the differences between GE and EAP. In the literature of EAP quite a lot is made of these differences, partly as a way to define EAP in contrast to GE.

Firstly, the contrast between needs and wants: to what extent do we define the content of the course in terms of the perceived needs of learners versus what we think students want to do? In all teaching and learning there is a balance between these two things.

  • In GE, needs/outcomes define the syllabus, skills and general contexts, and they are seen as fairly long-term outcomes and goals, often expressed in terms of the CEFR. E.g. language used in restaurants/cafes, which we think will be useful for learners of English. Equally, we consider what students want, and the topics, tasks and texts are based more on interest, engagement and variety. E.g. a common classroom activity is a class survey: mingling, asking questions and reporting back. This is not really related to needs (we don’t expect students to get a job doing surveys), but it is interesting, lively and generates interaction, so it is motivating for them to do.
  • If we think about EAP, the needs are more pressing and clearer; they dictate the skills, genre and language we look at, and that dominates the choice of topics, texts and tasks.

Two differences come out of this first one:

  • Firstly, in GE the overt focus of the lesson is on a topic, while in EAP the overt focus is on the skills being developed.
  • Secondly, teachers’ assumptions about motivation in class differ.

Adrian shows us a quote from De Chazal (2014), saying that in GE motivation is teacher-led, while in EAP the stakes are high and students are very self-motivated, with clear intrinsic motivation from a clear goal. In GE students may not necessarily see tasks/topics as relevant to what they need, while in EAP they do.

Next we looked at example materials from GE and EAP, based around the same topic area of climate change.

  • EAP – “Selecting and prioritising what you need” – students are taken through a series of skills: choosing sources, thinking about what they know, looking at the text, looking at the language of cause and effect, leading into writing an essay. The assumption is that students will be motivated by the knowledge that they need these skills. The page looks sober, black and white, reflecting the seriousness of EAP.
  • GE – Cambridge Empower also leads to writing an essay, but first there is a focus on the topic: listening to news items about extreme weather events, and discussion. Then a reading text leads into a writing-skills focus on reporting opinions, which leads into the essay. It arouses interest in the topic through strong use of visual support, active discussion of the topic, and listening and speaking tasks (although it is a reading and writing lesson), with lots of variety of interaction and general fluency practice.

These reflect the different needs of GE and EAP learners, and the more serious nature of academic study. This is fine if we can assume that learners in EAP classes are in fact motivated and have a clear idea of their needs and of how what is being done relates to them. Note De Chazal’s hedging: “can be self motivated” and “are more likely to be working towards a clear goal” – not definite.

Adrian puts forward a spectrum with GE, GEAP and SEAP on it, but says that many students occupy a place somewhere in the middle of the scale, i.e. learning English for study purposes but also needing GE, and possibly without clear study aims. E.g. in Turkey: students who study English in addition to their subject of study in a university context. They need to get to B1+, preparing for a programme where some content is in English, but they are not aiming to study at an English-speaking university, so they don’t need full-on EAP and may not necessarily be motivated. In the UK, students need an improved IELTS score and EAP skills in addition to general skills, and are more motivated. In both of these cases, EAP ‘light’ may be useful.

For the rest of the session, he says, we will look at how this might come out in practice. It is clearly possible to focus on academic skills in a way that is engaging for learners who may not be highly motivated, while still providing the skills that they need to master.

Approach 1

E.g. skills for writing an academic essay, specifically the opening part, the introduction, where students may need to define abstract concepts. Students might be shown an example which provides examples of the language needed.

It isn’t in itself a particularly engaging text, but it seems to Adrian that there are ways in which this topic could be made to be more interesting and engaging for less motivated students:

  • a lead-in to get ss thinking about the topic – brainstorming
  • discussion with concrete examples e.g. in what ways might courage be an asset in these occupations?
  • personalisation: think of a courageous person you know, what did they do which was courageous
  • prediction: get ss to write a definition of courage without using a dictionary

THEN look at the text.

So this is an example of bringing features of General English methodology into EAP. It helps to provide motivation, generated by the task and the teacher, bringing interest to a topic which does not HAVE to be dry.

Approach 2:

To actually choose topics which have general interest even if not related to learners’ areas of study.

Listening to lectures: identifying what the lecturer will talk about using the signals given (EAP focus: outlining the content of a presentation). This can be done with a general-interest topic, e.g. male and female communication.

  • Start off with a topic focus: think about the way men or boys talk together and the way women or girls talk together. Do you think there are any differences? Think about…
  • Leads into a focus on listening skills: students listen to an introduction to a class seminar on this topic; identify how speaker uses signalling language, stress and intonation to make it clear what he is going to talk about

So those are a couple of examples of directions that EAP light could take. This is a crossover between GE and EAP: skills and language are defined by needs, but the initial focus is on the topic itself rather than on the skills. Topics are selected as academic in nature but with intrinsic interest. Motivation is enhanced through visuals, engaging tasks, personalisation etc.

Q and A

What is a good source of EAP light topics?

Adrian plugs his Academic Skills development worksheets – generally academic in nature but of general interest. (They accompany Empower.) If you are developing your own, look at the kinds of topics in GE coursebooks and see if any would lend themselves to EAP.

What about letting students choose their own topics?

A good idea if this is EAP where students are already engaged in academic study, as they will have a good idea of what they need. In GEAP it is important to choose topics which lend themselves to whatever academic skill you are developing as well.

What were the textbooks used in the examples?

EAP – Cambridge Academic English, B2 level; GE – Empower, B2 level

 

Using Google+ Communities with classes (2)

All of a sudden we are 5 weeks into term. This week, also known as 5+1 (so as not to get it mixed up with teaching week 6, which is next week), is Learning Conversations week (the closest we get to half term, and only in the September term!), so it seemed a good time to take stock and see how things are going with Google Communities, following my introductory post from many moons ago.

Firstly, it must be said that the situation has changed since I wrote that first post: now, all teachers are required to use GC instead of My Group on MOLE (the university’s brand of Blackboard VLE) because we had trouble setting up groups on MOLE at the start of this term. Nevertheless, I am carrying on with my original plan of reflecting on and evaluating my use of GC with my students because I think it is a valuable thing to do!

In order to evaluate effectively, I wanted to have the students’ perspective as well as my own, so I posted a few evaluative questions in the discussion category of each of my classes’ GC pages.

So, no science involved, no Likert scales, no anonymity, just some basic questions. (The third question was because I thought I might as well get their views on how the lessons are going so far at the same time!) I’m well aware of the limitations of this approach, BUT then again I’m not planning to make any great claims based on the feedback I get, and I’m not after sending a write-up to the ELTJ or anything like that either (I would need all manner of ethical approval to do that!). I did try to frame the questions positively, e.g. “What do you think would improve the way we use GC?” rather than “What don’t you like about GC?”, so that the students wouldn’t feel that responding to the question was a form of criticism and therefore feel inhibited. An added benefit is that it pushes them to be constructive regarding future use rather than just say how they feel about the current use of it.

Before I go into the responses I’ve had from students, however, it would make sense to summarise how I’ve been using the GCs with them. I recently wrote about GCs for the British Council TeachingEnglish page (soon to be published), and the description I came up with in that post was “a one-stop shop for everything to do with their [students’] AES classes”, and that is basically what it has become:

Speaking Category extract

Writing Category extract

Vocabulary Category extract

Listening Category extract

I would say the main use I have made of it is to share materials relating to lessons, mostly in advance of the lessons – TED Talks, newspaper articles etc. – but also useful websites and tools for individual or class use – the AWL highlighter, Quizlet, Vocab.com etc. It is also great for sharing editable links to Google Docs, which we use quite often in class for various writing tasks. Other than these key uses, I have used it to raise students’ awareness of mental health issues and the mental health services offered to students by the university, during Mental Health Week here (which coincided with World Mental Health Day), and to raise their awareness of the students’ union and what it offers them.

In terms of student feedback, they think it’s “convenient”, “easy to use” and they “enjoy using” it. They also mention the ability to comment on posts (not present with My Group on MOLE) and to communicate outside of the classroom as well as in it. In terms of suggestions for improvement, one student said students should use it to interact more frequently, but that it should be clear which posts are class content and which are sharing/interaction. A couple of students also said they’d like the PowerPoints used in class to be uploaded there. However, those are available on MOLE. The trouble, of course, is that in using GC rather than My Group (which is on MOLE), students are a lot more tuned into GC (which we use all the time) than MOLE. I have no scientific evidence to back this up, but I suspect that, be it academically or personally, if you have to use multiple platforms you tend to gravitate towards one, or some, more than others rather than using them all equally, particularly if time is very limited, as it is for busy students! (I could be wrong – if you know of any relevant studies, let me know!) Unfortunately GC cannot fully replace MOLE, as students need to learn how to use it in preparation for going to university here, and they need to submit coursework assignments to Turnitin via MOLE. Perhaps, then, I need to come up with ways to encourage them to go from one to the other and back, so they don’t forget about ‘the other’…

In terms of future use, I have set up a little experiment in that, as part of the Learning Conversations taking place this week, we have to decide on Smart Actions that the students are supposed to carry out. E.g.

 

Go to Useful Websites on MOLE and explore the ‘Learning Vocabulary’ websites available. Tell your teacher which websites you visited and what you learnt from them by the final AES lesson of Week 6.

Some of them, like the one above, lend themselves to posting on GC. In this way, not only do they tell me what they have learnt, but they also share that learning with the rest of their classmates. So, in their learning conversations, whenever the Smart Action(s) were amenable to this plan, I have been encouraging students to use GC to communicate the outcome to me and share the learning with the rest of the class. It will be interesting to see whether they do post their findings! Another idea I’ve had is to do something along the lines of “academic words of the week”, where I provide a few choice academic words along with definitions, collocations, examples of use and a little activity that gives them a bit of practice using them, and get them to make a Quizlet vocabulary set collaboratively (I have a Quizlet class set up for each class). Then perhaps every couple of weeks we could do an in-class vocabulary review activity to see what they can remember.

Finally, it seems to me that Monday, being the first day of the second half of the term, is a crucial opportunity to build on student feedback by getting them to discuss ways in which we could use the GC for more interactive activities, and to find out what they’d be interested in having me share beyond class-related materials and the occasional forays into awareness-raising that I have attempted. The key thing I want them to take away is that I want the GC to work for them and that I am very much open to their ideas as to how that should happen, so that it becomes a collaborative venture rather than a teacher-dominated one.

We shall see what the next five weeks hold… Do you have any other ideas for how I could use GCs more effectively? Would love to hear them if you do!