Scholarship Circle: Giving formative feedback on student writing (2.1)

It’s a brand new term (well, sort of, it’s actually the third week of it now!), the second of our four terms here at the college, and today (Monday 21st January, though I won’t be able to publish this post on the same day!) we managed our first scholarship circle session of the term.

(For more information about what scholarship circles involve, please look here and for write-ups of previous scholarship circles, here

You might also be interested in session 1 / session 2 / session 3 and 4 / session 5-8 / session 9 of this particular circle.)

The biggest challenge we faced was remembering where we had got to in the final session BC (Before Christmas!). What were the research questions we had decided on, again? Do we still like them? What was the next step we were supposed to take this term?

Who?

We talked again about which students we wanted to participate – did we want IFY (Foundation) or PMP (Pre-Masters)? We considered the fact that it’s not only linguistic ability which influences response to feedback (our focus) – things like age, study pathway, past learning experiences and educational culture in country of origin will all play their part. Eventually, we decided to focus on IFY students: with PMPs, coursework may alter dramatically between first and final draft submissions due to feedback from their content tutor, which would affect our ability to do text analysis of their response to our first-draft feedback. Within the IFY cohort we have decided to focus on the c and d level groups (the two bottom sets, if you will), as these students are most at risk of not progressing, so any data which enables us to refine the feedback we give them and others like them will be valuable.

What?

It is notoriously tricky to pin down a specific focus and design a tool which enables you to collect data that will provide the information you need in order to address that focus. Last term, we identified two research questions:

  1. Do students understand the purpose of feedback and our expectations of them when responding to feedback?
  2. How do students respond to the Quickmarks?

This session, we decided that this was actually too big and chose to focus on no. 2 alone. Of course, having made that decision – and, in fact, in the process of making it – we discussed what specifically to focus on. Here are some of the ideas:

  • Recognition – which of the Quickmarks are students able to recognise and identify without further help/guidance?
  • Process – are they using the Quickmarks as intended? (When they don’t recognise one, do they use the guidance provided with it, which appears when you click on the symbol? If they do that, do they use the links provided within that information to inform themselves further and equip themselves to address the issue? You might assume that students know what the symbols mean, or that they read the information if they don’t, but anecdotal evidence suggests otherwise – e.g. a student who was given a wrong word class symbol and changed the word to a different word rather than changing its class!)
  • Application – do they go on to be able to correct other instances of the error in their work?

Despite our interest in the potential responses, we shelved the following lines of enquiry for the time being:

  • How long do they spend altogether looking at their feedback?
  • How do they split that time between Quickmarks, general comments and copy-pasted criteria?

We are mindful that we only have 6 weeks of sessions this term (and that includes this one!) as this term’s week 10, unlike the final week of last term, is going to be, er, a tad busy! (An extra cohort, and 4 exams between them, vs one cohort and one exam last time round!) As we want to collect data next term, that gives us limited time for preparation.

How?

We are going to collect data in two ways.

Text analysis

We will each look at a first draft and a final essay draft from a different student and do a text analysis to find out whether they have applied the Quickmark feedback to the rest of their text. This will involve picking a couple of Quickmarks that the student was given on their first draft, identifying and highlighting any other instances of those error types, and then looking at the final draft to find the highlighted errors, so that we can see whether they have been corrected and, if so, how – successfully or not.
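
To make the tallying concrete, here is a minimal sketch of the kind of record each analysis could produce. This is purely illustrative: the student labels, error codes and numbers are all invented, and keeping the counts in code like this is just one possibility, not an agreed part of our process.

```python
# Hypothetical sketch: one record per (student, Quickmark) pair, tracking
# how many further instances of the error appeared in draft 1 and how many
# of those were corrected by the final draft. All data below is invented.

from dataclasses import dataclass

@dataclass
class QuickmarkTally:
    student: str
    quickmark: str          # e.g. "word class"
    draft1_instances: int   # other instances of this error found in draft 1
    final_corrected: int    # of those, how many were fixed in the final draft

    def correction_rate(self) -> float:
        """Proportion of the highlighted error type the student went on to fix."""
        if self.draft1_instances == 0:
            return 0.0
        return self.final_corrected / self.draft1_instances

tallies = [
    QuickmarkTally("Student A", "word class", draft1_instances=5, final_corrected=4),
    QuickmarkTally("Student A", "articles", draft1_instances=8, final_corrected=2),
    QuickmarkTally("Student B", "word class", draft1_instances=3, final_corrected=3),
]

for t in tallies:
    print(f"{t.student} / {t.quickmark}: {t.correction_rate():.0%} corrected")
```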

We are going to have a go at this in our session next week, to practise what we will need to do and agree on the process.

Questionnaire

Designing an effective questionnaire is very difficult and we are still in the very early stages. We are still leaning towards Google Forms as the medium. Key things we need to keep in mind are:

  • How many questions can we realistically expect students to answer? The answer is probably fewer than we think, and this means that we have to be selective in what questions to include.
  • How can we ask the questions most clearly? As well as using graded language, this means thinking about question types – will we use a Likert scale? Tick boxes? Any open questions?
  • How can we ensure that the questions generate useful, relevant data? The data needs to answer the research questions. Again, this requires considering different question types and what sort of data they will yield. Additionally, knowing that we need to analyse all the data we collect in terms of our research question, we might want to avoid open questions, as that data will be more difficult and time-consuming to analyse, interesting though it might be (see the sketch below).
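
To illustrate that last point about analysability (a hedged sketch only – we haven’t built the form yet, and the file and column names are invented): Google Forms responses can be downloaded as a CSV, and a closed Likert-style item practically summarises itself, whereas open answers have to be read and coded by hand before any counting can happen.

```python
# Hypothetical sketch of analysing a Google Forms CSV export with pandas.
# The file name and column names are invented for illustration.

import pandas as pd

df = pd.read_csv("responses.csv")

# A closed Likert item (1 = strongly disagree ... 5 = strongly agree)
# summarises itself in one line:
print(df["I understand the Quickmark symbols"].value_counts().sort_index())

# An open question yields free text: every answer has to be read and
# manually categorised before any counting can happen.
for answer in df["What do you do when you don't recognise a symbol?"]:
    print(answer)
```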

The questions will obviously relate to the focuses identified earlier – recognition, process and application. One of our jobs for the next couple of sessions is to write our questions. It’s easy(ish!) to talk around what we want to know, but writing clear questions that elicit that information will be significantly more challenging!

Another thing we acknowledged, finally, is that research-wise we are not doing anything that hasn’t been done before, BUT the “newness” comes from doing it in our particular context. And that is absolutely fine! 🙂

Homework: 

Well, those of us who haven’t got round to doing the reading set at the end of the previous session (cough cough) will hopefully manage to finish that. (That was Goldstein, L., “Questions and answers about teacher written commentary and student revision: teachers and students working together”, in Journal of Second Language Writing, and Ene, E. & Upton, T.A., “Learner uptake of teacher electronic feedback in ESL composition”.) Otherwise, thinking about possible questions and how to formulate them!


Scholarship Circle: Giving formative feedback on student writing (9)

It’s the last week of term, exam week, and we have managed to squeeze in a final scholarship circle meeting for the term. How amazing are we? 😉 I also have no excuse not to write it up shortly afterwards – nothing sensitive content-wise and, for once, I have a wee bit of time. Sort of. (By the time you factor in meetings, WAS and ADoS stuff for next term, not as much as you might think…!)

(For more information about what scholarship circles involve, please look here and for write-ups of previous scholarship circles, here

You might also be interested in session 1 / session 2 / session 3 and 4 / session 5-8 of this particular circle.)

So, session 9. The first thing we recognised in this session is that we won’t be collecting data until term 3 for September students and term 4 for January students (which will be their term 3). This is a good thing! It means we have next term to plan out what we are going to do and how we are going to do it. It sounds like a lot of time but there is a lot we have to do and elements of it are, by their nature, time-consuming.

Firstly, we need to decide exactly who our participants will be and why. “You just said term 3/4 September/January students!” I hear you say. Yes… generally, that is the focus – in other words, students who are doing a coursework essay and therefore receiving QuickMark feedback. However, within those two broad groups (September Term 3/January Term 4), we have IFY (Foundation) and PMP (Pre-Masters) students, and the IFY cohorts are streamed by IELTS score into a, b, c and (numbers depending) d groups, so we need to narrow it down further. This choice is affected by things like the age of the participants (some of our students are under 18, which makes the ethical approval process, already time-consuming, markedly more difficult) and what exactly we want to be able to find out from our data. For example, if we want to know the effect of the streaming group on the data, then we need to collect the data in such a way that it is marked for streaming group. (NB: as I learnt last term in the context of a plagiarism quiz that had to be disseminated to all students, it is a bad idea for this information to rely on student answers – having a field/question such as “What group are you in?” might seem innocuous but oh my goodness, the random strangeness it can throw up is amazing! See pic below…)

“Bad” and “g’d” are other examples of responses given! …Students will be students? We need to make sure that our Google Form collects the information we want to collect and allows us to analyse it in the way that we want to analyse it. Obviously, we need to know what we want to collect and how we want to analyse it before we can design an effective tool. Additionally, however pesky they might be, participant students will also need to be a) fully informed regarding the research and b) aware that it is voluntary and that they have the right to cease participation and withdraw their data at any point.
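
To show why that free-text field caused trouble, here is a tiny, hypothetical sketch of the clean-up it forces on you afterwards – whereas a dropdown or multiple-choice item in the form would rule out the junk answers at the point of entry. (The code is invented; only the stray answers are real examples from the quiz.)

```python
# Hypothetical clean-up code for a free-text "What group are you in?" field.
# Tidy answers can be salvaged; answers like "Bad" or "g'd" are simply lost.

VALID_GROUPS = {"a", "b", "c", "d"}

def normalise_group(raw: str) -> str | None:
    """Map a free-text answer onto a real streaming group, or None if unusable."""
    cleaned = raw.strip().lower().rstrip(".")
    return cleaned if cleaned in VALID_GROUPS else None

for answer in ["C", "d ", "Bad", "g'd"]:
    print(repr(answer), "->", normalise_group(answer))
# 'C' -> 'c' and 'd ' -> 'd', but 'Bad' and "g'd" come back as None.
```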

Developing our research is just one of the directions that our scholarship circle might take next term. We also discussed the possibility of further investigation into how to teach proofreading more effectively. We are hoping to do some secondary research into this and refine our practice accordingly. We recognised, though, that time constraints may limit what we can do. For example, we discussed the following activity to encourage proofreading after students receive feedback on their drafts:

  • Put students in groups of four and have them look at the feedback, specifically QuickMarks, on their essays.
  • Students, in their groups, work out what is wrong and what the correction should be. The teacher checks their correction and ensures that it is correct.
  • Students pick a mistake or two (up to four sentences) and copy them onto a piece of flip-chart paper with the mistakes still in place.
  • Each group passes their flip-chart paper to another group, who try to correct it.
  • The flip-chart paper passes from group to group, with the idea that each group looks at the mistake and the first correction group’s edits and decides whether they think it is now correct or want to make additional changes (in a different colour).
  • Finally, the original group gets their flip-chart paper with corrections and edits back and compares it with their correct version.

This is a really nice little activity. However, after students receive their first draft feedback, they do not have any more lesson time (what time remains of the term, after they get their feedback, is taken up by tutorials, mocks and exams!), so it wouldn’t be possible to do it using that particular feedback. Perhaps what we need to do is use the activity with a different piece of work (for example a writing exam practice essay), and integrate other proofreading activities at intervals through the course, so that when they do get their first draft feedback for their coursework, they know what to do with it!

Another thing we discussed in relation to proofreading and helping students to develop this skill is the importance of scaffolding. I attempted to address the issue of scaffolding the proofreading process in a lesson I wrote for my foundation students last term. In that lesson, students had to brainstorm the types of errors that they commonly make in their writing – grammar, vocabulary, register, cohesion-related things like pronouns etc. – and then I handed out a paragraph with some of those typical errors sown in, and they had some time to try and find the errors. After that, I gave them the same paragraph but with the mistakes underlined, and, having checked which ones they had found correctly, they then had to identify the type of error for each one that had been underlined. Finally, I gave them a version with the mistakes underlined and identified using our code, and they had to try and correct them. All of this was group work. The trouble was that the lesson wasn’t long enough for them (as a low-level foundation group) to have as much time as they could have done with for each stage. I had hoped there would be time for them to then look at their coursework essays (this was the last lesson before first draft submission) and try to find and correct some mistakes, but in reality we only just got through the final paragraph activity.

Other ideas for scaffolding the development of proofreading skills were to prepare paragraphs with only one type of mistake sown in, so that students could practise identifying each error type separately before trying to bring it all together in a general proofreading activity. That learning process would be spread over the course rather than concentrated into one (not quite long enough) lesson. There is also a plan to integrate such activities into the Grammar Guru interactive/electronic grammar programmes that students are given to do as part of their independent study. Finally, we thought it would be good to be more explicit about the process we want students to follow when they proofread their work. This could be done in the general feedback summary portion of the feedback, e.g. cueing them to look first at the structural feedback and then at the language feedback etc. That support would hopefully stop them being overwhelmed by the feedback they receive. One of our tasks for scholarship circle sessions next term is to bring in the course syllabus and identify where proofreading focuses could be integrated.

Another issue regarding feedback that we discussed in this session was the pre-masters students’ coursework task, which is synoptic – they work on it with their academic success tutor with a focus on content and with us with a focus on language. Unfortunately, with the set-up as it is, as students do not work on it with a subject tutor, there is no content “expert” to guide them, and there is a constant tension with regards to the timing of feedback. Our team give feedback on language at the same time as the other team give feedback on content (which, as they are not subject experts, is a struggle for them, exacerbated by their not being able to give feedback on language, especially as the two are fairly entwined!). Content feedback may necessitate rewriting of chunks of text, rendering our language feedback useless at that point in time. However, there is not enough time in the term for feedback to be staggered appropriately. We don’t have a solution for this, other than more collaboration with Academic Success tutors, which, again, time constraints on both sides may render difficult, but it did lead us onto the question of whether we should, in general, focus our QuickMarks only on parts of text that are structurally sound. (Again, there isn’t time for a round of structural feedback followed by a round of linguistic feedback once the structural feedback has been implemented.)

Suffice it to say, it is clear that we still have plenty to get our teeth into in future scholarship circle sessions – our focus, and areas closely related to it, are far from exhausted. Indeed, we still have a lot to do, with our research in its early stages. We are not sure what will happen next term with regards to when the sessions will take place, as it is timetable-dependent, but we are keeping our current time-slot pencilled in as a starting point. Fingers crossed a good number of us will be able to make it, or we’ll find an alternative time that more of us can do!

Thank you to all my lovely colleagues who have participated in the scholarship circle this term – it has been a brilliant thing to do and I am looking forward to the continuation next term!


Scholarship Circle: Giving formative feedback on student writing (5-8)

Last time I blamed time and workload for the lack of updates; this time, the reason there is only one post representing four sessions is partly a question of time but more importantly one of content. This will hopefully make more sense as I explain below!

(For more information about what scholarship circles involve, please look here and for write-ups of previous scholarship circles, here

You might also be interested in session 1 / session 2 / session 3 and 4 of this particular circle.)

Session 5 saw us finishing off what we started in Session 4 – i.e. editing the error correction code to make it clearer and more student-friendly. So, nothing to add for that, really! It was what it was – see write-up of Session 4 for an insight.

Sessions 6 and 7 were very interesting – we talked about potential research directions for our scholarship circle. We started with two possibilities. I suggested that we replicate the M.A. research regarding response to feedback that started the whole scholarship circle off, to see if the changes we are making have had any effect. Around the same time, another of our members brought forward the idea of participating in a study to be carried out by a researcher in the Psychology department at Sheffield University, regarding reflection on feedback and locus of control. What both of these have in common is that they are not mine to talk about in any great depth on a public platform, given that one has not yet been published and the other is still in its planning stages.

Session 6

So, in session 6, the M.A. researcher told us, in depth, all about her methodology, since in theory, if we were to replicate that study, we would be using the same methodology. We also heard about the ideas and tools involved in the Psychology department research. Regarding the former, it was absolutely fascinating to hear how everything was done, and also straightforward enough to identify that replicating that study would take up too much time at critical assessment points when people are already pressed for time: it’s one thing to give up sleeping if you are trying to do your M.A. dissertation to distinction level (congratulations!), but another if you are working full time and don’t necessarily want to take on that level of workload out of the goodness of your heart! We want to do research, but we also want to be realistic. With regards to the latter, it sounded potentially interesting, but while we heard about the idea, we didn’t see the tools it would involve using until Session 7. The only tool that we contributed was the reflection task that we have newly integrated into our programme, which students have to complete after they receive feedback on the first draft of their assignments.

Session 7

Between Session 6 and 7, we got hold of the tools (emailed to us by the member in touch with the research in the Psychology department) and were able to have a look in advance of Session 7. In Session 7, we discussed the tools (questionnaires) and agreed that while some elements of them were potentially workable and interesting, there were enough issues regarding the content, language and length that it perhaps wasn’t the right direction for us to take after all. The tools had been produced for a different context (first year undergraduate psychology students). We decided that what we needed was to be able to use questionnaires that were geared a) towards our context and students and b) towards finding out what we want to know. We also talked about the aim of our research, as obviously the aim of a piece of research has a big impact on how you go about doing that research. Broadly, we want to better understand our students’ response to feedback and from that be able to adapt what we do with our feedback to be as useful as it possibly can be for the students. We spent some time discussing what kinds of questions might be included in such a questionnaire.

So, at this point, we began the shift away from focusing on those two studies – one existing, complete but unpublished, and one proposed – and towards deciding on our own way forward, which became the focus of session 8.

Session 8

Between Session 7 and Session 8, our M.A. Researcher sent us an email pointing out that in order to think about what we want to include in our questionnaires, we first need to have a clear idea of what our research questions are. So that was the first thing we discussed.

One fairly important thing that we decided today as part of that discussion about research questions was that it would be better to focus on one thing at a time. So, rather than focusing on all the types of feedback that Turnitin has to offer within one project, this time round we would focus specifically on the Quickmarks (which, of course, we have recently been working on!). Then, next time round, we could shift the focus to another aspect. This is in keeping with our recognition of the need to be realistic regarding what we can achieve, so as to avoid setting ourselves up for failure. (I think this is a key thing to bear in mind for anybody wanting to set up a scholarship circle like this!) The questions we decided on were:

  1. Do students understand the purpose of feedback and our expectations of them when responding to feedback?
  2. How do students respond to the Quickmarks?

Questions that got thrown around in the course of this discussion were:

  • Do students prioritise some codes over others? E.g. do they go for the ones they think are more treatable?
  • What codes do students recognise immediately?
  • If they don’t immediately recognise the codes, do they read the descriptions offered?
  • Do they click on the links in the descriptions?
  • Do they do anything with those links after opening them? (One of the students in the M.A. research opened all the links but then never did anything with them!)
  • How much time do they believe they should spend on this feedback?
  • How long are students spending on looking at the feedback in total?
  • How do students split their time between in-text feedback (Quickmarks, comments and text-on-text, a.k.a. the “T” option, which some of us hadn’t previously used!), the general comments and the grade form?

Of course, these questions will feed into the tool that we go on to design.

We identified that our learner training ideas – e.g. the reflection form; improving the video that introduces them to Turnitin feedback; developing a task to go with the video in which they answer questions and, in so doing, create a record of the important information that they can refer back to – can and should be worked on without waiting to do the research. That way, having done what we can to improve things based on our current understanding, we can use the research to highlight any gaps.

We also realised that for the data regarding Quickmarks to be useful, it would be good for it to be specific. So, one thing on our list of things to find out is whether Google Forms would allow us to have an item in which students identify which QMs they were given in their text and then answer questions regarding their attitude to those Quickmarks, how clear they were, etc. Currently we are planning to use Google Forms to collect data, as it is easy to administer and organises the results in a visually useful way. Of course, that decision may change based on whether or not it allows us to do what we want to do.
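
For what it’s worth, Google Forms does offer checkbox items, so an “identify your QMs” question looks feasible in principle. If the export behaves the way checkbox exports usually do – multiple selections landing in one comma-separated cell, something we would need to verify – the analysis would just need a splitting step. A hedged sketch with an invented column name and invented data:

```python
# Hypothetical sketch: splitting a checkbox-style column (one comma-separated
# cell per student) into per-Quickmark counts. All data here is invented.

import pandas as pd

df = pd.DataFrame({
    "Which Quickmarks did you receive?": [
        "word class, articles",
        "articles",
        "word class, subject-verb agreement",
    ]
})

qm_counts = (
    df["Which Quickmarks did you receive?"]
    .str.split(", ")
    .explode()          # one row per selected Quickmark
    .value_counts()     # how many students received each one
)
print(qm_counts)
```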

Lots more to discuss, and hopefully we will be able to squeeze in one more meeting next week (marking week, but, most unusually, with only one exam to mark – in a normal marking week it just would not be possible) before the Christmas holidays begin… we shall see! Overall, I think it will be great to carry out research as a scholarship group and use it to inform what we do (hence my initial idea, overambitious as it turns out…). Exciting times! 🙂


CUP Online Academic Conference 2018: Motivation in EAP – Using intrinsically interesting ‘academic light’ topics and engaging tasks (Adrian Doff)

This is the first session of this online conference that I have been able to attend live this week, hoping to catch up with some of the others via recordings…

Part of a series of academic webinars running this week, this is the 5th session out of 8. Apparently recordings will be available in about a week’s time. Adrian Doff has worked as a teacher and teacher trainer in various countries and is co-author of Meanings into Words and the Language in Use series, amongst other things. He is talking to us from Munich, Germany.

We are going to look at what topics and tasks might be appropriate in EAP teaching, especially with students who need academic skills in English but also need to improve their general language ability. For most of his ELT life, Adrian has been involved in general ELT as a teacher and materials writer and has recently moved into EAP, mainly through supplementary material creation.

Our starting point for this webinar: look at some of the differences between GE and EAP. In the literature of EAP quite a lot is made of these differences, partly as a way to define EAP in contrast to GE.

Firstly, the contrast between needs and wants: to what extent do we define the content of the course in terms of what learners need to do versus what we think they want to do? In all teaching and learning there is a balance between these two things.

  • In GE, needs/outcomes define the syllabus, skills and general contexts, and they are seen as fairly long-term outcomes and goals, often expressed in terms of the CEF – e.g. language used in restaurants/cafes, which we think will be useful for learners of English. Equally, we consider what students want, and the topics, tasks and texts are based more on interest, engagement and variety. E.g. a common classroom activity is a class survey – mingling, asking questions and reporting back. This is not really related to needs (we don’t expect students to get a job doing surveys), but it is interesting and lively, generates interaction etc., so it is motivating for them to do.
  • If we think about EAP, the needs are more pressing and clearer; they dictate the skills, genre and language we look at, and that dominates the choice of topics, texts and tasks.

Two differences come out of this first one:

  • Firstly, in GE, the overt focus of the lesson is a topic, while in EAP the overt focus is the skills being developed.
  • Secondly, teachers’ assumptions about motivation in class differ.

Adrian shows us a quote from De Chazal (2014), saying that in GE motivation is teacher-led, while in EAP the stakes are high and students are very self-motivated, with clear intrinsic motivation from a clear goal. In GE students may not necessarily see tasks/topics as relevant in terms of what they need, while in EAP they do.

Next we looked at example materials from GE and EAP, based around the same topic area of climate change.

  • EAP – “Selecting and prioritising what you need” – students are taken through a series of skills: choosing sources, thinking about what they know, looking at the text, looking at the language of cause and effect, leading into writing an essay. The assumption is that students will be motivated by the knowledge that they need these skills. The page looks sober, black and white, reflecting the seriousness of EAP.
  • GE – Cambridge Empower – also leads to writing an essay, but first there is a focus on the topic: listening to news items about extreme weather events and discussion. Then a reading text leads into a writing skills focus on reporting opinions, and that leads into the essay. It arouses interest in the topic through strong use of visual support, active discussion of the topic, and listening and speaking tasks, even though it’s a reading and writing lesson. Lots of variety of interaction and general fluency practice.

These materials reflect the different needs of GE and EAP learners and the more serious nature of academic study. This is fine if we can assume that learners in EAP classes are in fact motivated and have a clear idea of their needs and of how what is being done relates to them. Note that De Chazal says “can be self motivated” and “are more likely to be working towards a clear goal” – not definite.

Adrian puts forward a spectrum with GE, GEAP and SEAP on it, but says that many students occupy a place somewhere in the middle of the scale, i.e. learning English for study purposes but also needing GE, and possibly without clear study aims. E.g. in Turkey, students study English in addition to their subject of study in a university context. They need to get to B1+, preparing for a programme where some content is in English, but as they are not aiming to study in an English-speaking university they don’t need full-on EAP and may not necessarily be motivated. In the UK, students need an improved IELTS score, need EAP skills in addition to general skills, and are more motivated. In both of these contexts, EAP ‘light’ may be useful.

For the rest of this session, he says we will look at what this might look like and how it might come out in practice. It is clearly possible to focus on academic skills in a way that is engaging for learners who may not be highly motivated while still providing the skills that they need to master.

Approach 1

E.g. skills for writing an academic essay, specifically the opening part, the introduction, where students may need to define abstract concepts. Students might be shown an example introduction which models the language needed.

It isn’t in itself a particularly engaging text, but it seems to Adrian that there are ways in which this topic could be made more interesting and engaging for less motivated students:

  • a lead-in to get ss thinking about the topic – brainstorming
  • discussion with concrete examples, e.g. in what ways might courage be an asset in these occupations?
  • personalisation: think of a courageous person you know – what did they do that was courageous?
  • prediction: get ss to write a definition of courage without using a dictionary

THEN look at the text.

So this is an example of bringing features of General English methodology into EAP. This helps to provide motivation; it is generated by the task and the teacher, bringing interest to a topic which does not HAVE to be dry.

Approach 2

To actually choose topics which have general interest even if not related to learners’ areas of study.

Listening to lectures: identifying what the lecturer will talk about using the signals given (EAP focus: outlining content of a presentation). Can be done with a general interest topic e.g. male and female communication.

  • Start off with a topic focus: think about the way men or boys talk together and the way women or girls talk together. Do you think there are any differences? Think about…
  • Leads into a focus on listening skills: students listen to an introduction to a class seminar on this topic; identify how speaker uses signalling language, stress and intonation to make it clear what he is going to talk about

So those are a couple of examples of directions that EAP light could take. This is a crossover between GE and EAP, skills and language defined by needs, but the initial focus is on the topic itself rather than on the skills. Topics selected as academic in nature but have intrinsic interest. Motivation is enhanced through visuals, engaging tasks, personalisation etc.

Q and A

What is a good source of EAP light topics?

Adrian plugs his Academic Skills development worksheets – generally academic in nature but of general interest. (They accompany “Empower”.) If you are developing your own, look at the kinds of topics in GE coursebooks and see if there are any that would lend themselves to EAP.

What about letting students choose their own topics?

A good idea if this is EAP where students are already engaged in academic study, as they will have a good idea of what they need. In GEAP it is important to choose topics which lend themselves to whatever academic skill you are developing as well.

What were the textbooks used in the examples?

EAP – Cambridge Academic English, B2 level; GE – Empower, B2 level.


Using Google+ Communities with classes (2)

All of a sudden we are 5 weeks into term. This week, also known as 5+1 (so as not to get it mixed up with teaching week 6, which is next week), is Learning Conversations week (the closest we get to half term, and only in the September term!), so it seemed a good time to take stock and see how things are going with Google Communities, following my introductory post from many moons ago.

Firstly, it must be said that the situation has changed since I wrote that first post: now, all teachers are required to use GC instead of My Group on MOLE (the university’s brand of the Blackboard VLE) because we had trouble setting up groups on MOLE at the start of this term. Nevertheless, I am carrying on with my original plan of reflecting on and evaluating my use of GC with my students because I think it is a valuable thing to do!

In order to evaluate effectively, I wanted to have the students’ perspective as well as my own, so I posted a few evaluative questions in the discussion category of each of my classes’ GC pages.

So, no science involved, no Likert scales, no anonymity, just some basic questions. (The third question was because I thought I might as well get their views on how the lessons are going so far at the same time!) I’m well aware of the limitations of this approach, BUT then again I’m not planning to make any great claims based on the feedback I get, and I’m not after sending a write-up to the ELTJ or anything like that either (that would need all manner of ethical approval!). I did try to frame the questions positively, e.g. “What do you think would improve the way we use GC?” rather than “What don’t you like about GC?”, so that the students wouldn’t feel that responding was a form of criticism and therefore feel inhibited. An added benefit is that it pushes them to be constructive regarding future use rather than just say how they feel about the current use of it.

Before I go into the responses I’ve had from students, however, it would make sense to summarise how I’ve been using the GCs with them. I recently wrote about GCs for the British Council TeachingEnglish page (soon to be published), and the description I came up with in that post was “a one-stop shop for everything to do with their [the students’] AES classes”, and that is basically what it has become:

Speaking Category extract

 

Writing Category extract

 

Vocabulary Category extract

 

Listening Category extract

I would say the main use I have made of it is to share materials relating to lessons, mostly in advance of the lessons – TED talks, newspaper articles etc. – but also useful websites and tools for individual or class use – the AWL highlighter, Quizlet, Vocab.com etc. Finally, it is great for sharing editable links to Google Docs, which we use quite often in class for various writing tasks. Other than these key uses, I have also used it to raise students’ awareness of mental health issues and the mental health services offered to students by the university, during Mental Health Week here (which coincided with World Mental Health Day), and to raise their awareness of the students’ union and what it offers them.

In terms of student feedback, they think it’s “convenient”, “easy to use” and they “enjoy using” it. They also mention the ability to comment on posts (not present with My Group on MOLE) and to communicate outside of the classroom as well as in it. In terms of suggestions for improvement, one student said students should use it to interact more frequently, but that it should be clear which posts are class content and which are sharing/interaction. A couple of students also said they’d like the PowerPoints used in class to be uploaded there. However, those are available on MOLE. The trouble, of course, is that in using GC rather than My Group (which is on MOLE), students are a lot more tuned into GC (which we use all the time) than MOLE. I have no scientific evidence to back this up, but I suspect that, be it academically or personally, if you have to use multiple platforms you tend to gravitate towards one, or some, more than others rather than using them all equally, particularly if time is very limited, as it is for busy students! (I could be wrong – if you know of any relevant studies, let me know!) Unfortunately, GC cannot fully replace MOLE, as students need to learn how to use it in preparation for going to university here and they need to submit coursework assignments to Turnitin via MOLE. Perhaps, then, I need to come up with ways to encourage them to go from one to the other and back, so they don’t forget about ‘the other’…

In terms of future use, I have set up a little experiment: as part of the Learning Conversations that are taking place this week, we have to decide on Smart Actions that the students are supposed to carry out. E.g.


Go to Useful Websites on MOLE and explore the ‘Learning Vocabulary’ websites available. Tell your teacher which websites you visited and what you learnt from them by the final AES lesson of Week 6.

Some of them, like the above, lend themselves to posting on GC. In this way, not only do they tell me what they have learnt but they also share that learning with the rest of their classmates. So, in their learning conversations, whenever the Smart Action(s) were amenable to this plan, I have been encouraging students to use GC to communicate the outcome to me and share the learning with the rest of the class. We will see how it goes – it will be interesting to see whether they do post their findings! Another idea I’ve had is to do something along the lines of “academic words of the week”, where I provide a few choice academic words along with definitions, collocations, examples of use and a little activity that gives them a bit of practice using them, and get them to also make a Quizlet vocabulary set collaboratively (I have a Quizlet class set up for each class). Then, perhaps every couple of weeks, we could do an in-class vocabulary review activity to see what they can remember.

Finally, it seems to me that Monday, being the first day of the second half of the term, is a crucial opportunity to build on student feedback by getting them to discuss ways in which we could use the GC for more interactive activities, and to find out what they’d be interested in having me share beyond class-related materials and the occasional forays into awareness-raising that I have attempted. The key thing that I want them to take away is that I want the GC to work for them and that I am very much open to their ideas as to how that should be, so that it becomes a collaborative venture rather than a teacher-dominated one.

We shall see what the next five weeks hold… Do you have any other ideas for how I could use GCs more effectively? Would love to hear them if you do!


Scholarship Circle: Giving formative feedback on student writing (3+4)

Time and workload have dictated that I combine two weekly scholarship sessions into one post, so this “double digest” is my write-up of sessions 3 and 4.

(For more information about what scholarship circles involve, please look here and for write-ups of previous scholarship circles, here

You might also be interested in session 1 and session 2 of this particular circle.)

Session 3

In session 3, we started by discussing the type of feedback we give students on their coursework. In CW1 (an essay outline), we give them structural feedback as well as pointing out where sources are insufficiently paraphrased, while in CW3 they get structural feedback and language feedback using the error correction code. We also talked more about direct feedback. We questioned where the line between direct feedback and collusion lies, and decided that it’s OK for students to use teacher feedback to improve their work, but that if they hired another tutor to correct their work, it would be collusion. We also came to the conclusion that direct feedback can be useful for certain things and that you could use it to scaffold learners, e.g. in the first instance of a mistake, provide the correct form as a model; in the second instance, provide the start of the correct form; in the third instance, just highlight the type of mistake and let the learner correct it by themselves, using the previous instances and feedback to help them. If there are any further instances of that mistake type, indicate to learners that they need to find and correct them.
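
Seen one way, that scaffolded sequence is just a rule keyed on how many times the learner has met the mistake. Here is a toy sketch of it – the wording and function are entirely hypothetical, and this is an illustration of the logic, not something we plan to automate:

```python
# Toy sketch of the scaffolded direct-to-indirect feedback sequence described
# above, keyed on how many instances of the mistake the learner has seen.

def feedback_for(instance_number: int, correction: str, error_type: str) -> str:
    if instance_number == 1:
        return f"Model: the correct form is '{correction}'."        # full model
    if instance_number == 2:
        return f"Prompt: the correct form starts '{correction[:3]}...'."  # partial
    if instance_number == 3:
        return f"This is a {error_type} error - correct it yourself."    # code only
    return "Find and correct any further examples of this error yourself."

for n in range(1, 5):
    print(n, feedback_for(n, correction="depends on", error_type="word choice"))
```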

We also talked more about this issue of correcting mistakes beyond those pointed out by the teacher i.e. proofreading work for more instances of the same mistake. In our experience, it frequently does not happen. In the masters research done by one of our number, the main reasons for that, given by the students when they were asked, were:

  • the belief that no comments = no mistakes
  • not knowing how to find/correct mistakes

However, with regards to the QuickMarks (i.e. the error correction code on Turnitin), for the students who participated in the study, 80-100% of QuickMarks resulted in successful revisions. Thus, on the whole, mistakes are only corrected when they are pointed out. This brought us back to the question of proofreading and learner training, which we had touched on in previous sessions, identifying it as a definite need.

We acknowledged that we expect proofreading but that it doesn’t happen. This is partly because our learners are not used to it – they are used to having all errors pointed out to them. In some cases, as with one of the participants in the M.A. study, learners are not able to identify mistakes. In that case, the ideal situation would be helping those learners to find and correct the errors they ARE able to deal with at their level. We decided that, in order to help learners in both cases, more proofreading-related lessons are needed. They already have “Grammar Guru”, an online interactive grammar tutoring tool, within which are activities that prompt proofreading for mistakes related to the specific focus of a given tutorial, e.g. articles.

However, the only time they do it with their own work is with CW3 and so we wondered if there would be scope for using work produced for writing exam practices as the basis for proofreading activities too.

We also looked at 2 tools for encouraging students to engage with their feedback:

1. A Google Form, adapted from something similar used at Nottingham Trent, that encourages students to find examples of particular mistakes in their text, correct them, and make a note of the materials used in order to make each correction:

The idea is that students complete it between receiving their feedback and attending their tutorial, so that during the tutorial the tutor can, amongst other things, check their corrections and suggest alternative sources.

2. A form for students to complete that pushes them to reflect on their feedback:

As with the first one, this is intended to be completed between receiving the feedback on Turnitin and attending the tutorial, thus making the tutorial more effective than the common scenario where the student comes in not having even opened the feedback. We also wondered about the possibility of combining the two, so in other words combining focused error identification and correction with reflection on other aspects of the feedback.

Session 4

This week, in session 4, we mainly focused on the error correction code that we use. We looked at each symbol and accompanying notes, firstly deciding if it was a necessary one to keep and then refining it. The code, used on Turnitin, works as follows: We highlight mistakes and attach symbols to them. When the student subsequently looks at their text, they see the symbols and then when they click on the symbol, the accompanying notes appear. Our notes include, depending on the mistake, an explanation of the mistake, examples of incorrect use and corrected use, and links to sources that students can use to help them to learn more about the language point in question. Here is an example:

We paid particular attention to the clarity of the language used in the accompanying notes, getting rid of anything unnecessary, e.g. modals, repetition etc., and to the links provided to help students. The code also exists in GoogleDoc format, so we all had Chromebooks out and were working on it collaboratively. There are a lot of symbols and there was plenty to say, so we actually only got as far as “C”! (They are ordered alphabetically…!) This job will continue in the next session, which will be the week after next, as next week we have Learning Conversations, which are off timetable, so our availability is very different from normal.

I would be interested to hear what approaches you use where you work in terms of error correction, codes, proofreading training, pre-tutorial requirements, engaging learners with feedback and so on. Please do share any thoughts using the comments box below… 🙂

Scholarship Circle: Giving formative feedback on student writing (2)

Before we had time to turn around twice, Tuesday rolled around again, and with it our weekly scholarship circle meeting, with its name and focus of “Giving formative feedback on student writing”. (For more information about what scholarship circles involve, please look here; for write-ups of previous scholarship circles, here; and to see what we discussed last week – in session 1 of this circle – here.)

A week is a short turnaround time, but a number (9, in fact!) of eager beavers, who’d all managed to read the article “Sugaring the Pill: Praise and criticism in written feedback” by Fiona Hyland and Ken Hyland in the Journal of Second Language Writing, turned up to discuss it and relate it to our context. The article is, in its own words, “a detailed text analysis of the written feedback given by two teachers to ESL students over a complete proficiency course”. The authors categorise all the feedback by function – namely praise, criticism and suggestions – and analyse it accordingly. It’s a very interesting and thought-provoking article. However, the purpose of this post is not to summarise it but rather to capture the discussion which arose from it. This is not as easy a task as it might sound!

Praise

We started by talking about praise. Something we found interesting, in both the article and a similar piece of research done for a masters dissertation by one of our number, was that the students in these studies were able to identify when praise was insincere/formulaic/there for the sake of being there. (Here we are talking about the general comments at the end of a text rather than specific in-text comments.) Additionally, also in terms of general end-of-text comments, students who receive substantial formulaic praise may automatically mentally downgrade it, particularly if the balance of feedback overall is in favour of praise, i.e. more positive comments than suggestions for improvement. In connection with this, students were also found not to believe the positive general comments if they did not reflect the in-text feedback, which, being more directly connected to the text, held more weight for them. Finally, both the article and the masters research highlighted the danger of the suggestions for improvement in a praise-criticism sandwich being ignored/missed by a student, and the danger of hedged comments (e.g. using modals) being misunderstood.

Another aspect of feedback which it was thought might lead to misunderstanding relates to our feedback guidelines here at the college, which stipulate that in our general comments we should include 3 positive points and 3 areas to work on. We discussed the possibility that this might be (mis)interpreted by students to mean that the piece of writing was good and in need of improvement in equal measure, when in fact that may not be the case. We also discussed the importance of framing the negative points as suggestions rather than criticism, as well as of avoiding hedging and the aforementioned dangers of miscommunication that may go with it:

Compare

“Your writing does not have enough linkers so it is confusing” (highlighting a negative)

with:

“You should include more linkers in your work to make it clearer” (making a suggestion for improvement)

This would, in turn, be easier to understand for a student than:

“I wonder if you could include more linkers in this paragraph? This might help the reader.” (hedged)

or:

“This is a good introduction with a clear thesis statement and scope, however, you need to look at coherence. Go back to …. and consider… . I think you could also benefit from having a look at…  …it is quite advanced but I think you are ready to take your AW to the next level!” (Praise-criticism sandwich: the student in question ignored all the suggestions because the teacher had said it was good so they didn’t feel the need to make any changes!) 

Of course, as discussed in the journal article, teachers do use phrases such as “I wonder if”, and questions rather than direct instructions, to avoid appropriation of the piece of work and also to avoid being overly authoritative, in order to meet what Hyland and Hyland describe as the “interpersonal goal” aspect of feedback (in contrast with pedagogic and informational goals). Our conclusion, based on the masters findings, our experience and our reading of the journal article, was that teachers possibly worry too much about being polite in their feedback, which ends up confusing the student more than anything else. As here:

When the message gets lost…

Still relating to praise, we agreed that it is most effective when specific, i.e. when it directly highlights something in the text that the student is doing well – a view supported by the article and the masters research. Carrying this over to general end-of-text comments, we wondered whether ‘repeating’ what you have said in specific in-text comments (which I admitted to doing quite a bit, hence raising the issue), whether positive or negative, might actually reinforce the importance of the in-text comments in question, rather than being redundant or otherwise negative, and might make the general comments more personalised/less formulaic.

Finally, one issue I raised was that on Turnitin, if you have all the in-text comments highlighted in a single colour – both positive and “negative”, the latter obviously including suggestions for improvement, not just criticisms – a student might look at that and assume their essay was terrible because of the quantity of highlighting. I wondered if using different colours of highlighting for positive and negative comments would alleviate that situation. However, it was also put forward that it might be even worse if students knew that code and had very few things highlighted in the positive colour!

Improving feedback

As well as identifying the potential issues with praise discussed above, we also discussed possible solutions:

Reframing general comments

We agreed that:

  • short, personalised comments would be most useful, to avoid misunderstandings and identifiable insincerity. (Our comments bank – a Google Doc of generic comments – does not currently fit this bill.)
  • in Turnitin we could make more use of the “T” option (which sits alongside the QM and comment bubble options and which most of us were unaware of!). This allows you to write directly on the text in ‘blue ink’, which might be more personalised and allow more flexibility than the general comments in the comments box. It might also allow for less in-text highlighting for comment bubbles.
  • having a one-size-fits-all guideline of “3 positive things and 3 ‘negative’/to improve” is problematic, as students are all different (though if you have 60+ students’ work to look at in a short space of time, is carefully tailored, individualised feedback realistically feasible?)

Learner Training

We decided that learner training was crucial for enabling students to make full use of the feedback and therefore make doing it worth our time and theirs. Firstly, for in-text comments to be truly useful, it was suggested that we need to explicitly train students to look for further examples of the mistakes we highlight using the Quick Marks (i.e. error correction code) as otherwise they will correct what we highlight but they won’t automatically apply it to the rest of their text. Perhaps part of learner training would be to train them towards the point where they can do that without being continually prompted in comments or tutorials. We also considered the need for recognising and differentiating between “treatable” errors (e.g. articles – there are rules that can be followed) and “non-treatable” errors (e.g. word choice), and giving appropriate feedback. For non-treatable errors, direct feedback, i.e. giving students the correction, is better, while for treatable errors we can use indirect feedback, i.e. identifying the error and asking students to correct it themselves, using clues such as error correction coding. Currently, most of our feedback is indirect, so this is something we may need to reconsider.

Another aspect of learner training that we discussed was how to train learners to make the most of their very brief (10-15 minute) tutorials. For these tutorials to be truly beneficial, we agreed that it was imperative for students to look at their feedback BEFORE coming to the tutorial. In fact, they need not only to look at it but also to attempt to respond to it, so that during the tutorial the tutor can check their attempts and help them with the areas they were unable to address independently. We wondered about using a pre-tutorial sheet to encourage them to do this, something that in order to complete they need to engage with the feedback. A couple of teachers have already experimented with this kind of thing with encouraging results so it is worth looking into.

All in all, we managed to discuss a lot in an hour – or just over, as we lost track of time! (You know it’s a good scholarship circle when the participants just can’t drag themselves away at the end! I think the reason this scholarship circle is going so well is that it has a very specific focus and it is one that is equally important to all of us.)

Homework for next week: to read a chapter by Dana Ferris called “Does error feedback help student writers? New evidence on the short- and long-term effects of written error correction” in a book published by Cambridge University Press called Feedback in Second Language Writing (edited by the same Hylands who wrote last week’s article!). Just from the title I am very curious about what Ferris will say, but I won’t have time to find out till at least the weekend!

Feel free to join in the discussion by commenting on this post! 🙂