Scholarship Circle: Giving formative feedback on student writing (3.1)

I would say it’s the start of a new term (College term 3) and a new wave of scholarship circle sessions, but in reality it’s actually week 5! We had our first session of the term last week, and this is me playing catch-up. The beginning of a new term is a notoriously busy time, particularly for the January cohort: a scattergun of coursework draft submissions and feedback (plus, for me as ADoS, supporting teachers with their feedback on top of doing my own) and speaking exams (plus, for me as ADoS, double-marking a portion of those with each of my teachers). So I’m actually pretty glad the scholarship circle didn’t get going ’til last week!

Our agenda was as follows:

  1. Revisit our research on Quickmarks to see where we are at and figure out our timeline.
  2. Decide on a focus for this term’s scholarship circle sessions.
  3. Set ourselves some reading homework.

This term’s research project update

The consent forms are ready to go and will be sent to:

  • the centre manager
  • the teachers of the students we have identified as the sample – those who will receive the questionnaire and from whom participants for the text analysis will be selected (there are only a few of us and we will each only be doing a small number, i.e. 1-2, of text analyses!)
  • the students themselves. (We only need to send a consent form to those selected for text analysis, as consent for the questionnaire will be built into the questionnaire itself.)

We reconfirmed that we will be focusing on International Foundation Year (IFY) students rather than PMP (Pre-Masters) students, as PMP students’ coursework tends to change dramatically between first draft feedback and final submission due to content tutor feedback, which would affect text analysis possibilities. We are aware that a range of factors influence response to feedback – e.g. age, pathway, language level, past learning experiences, educational culture in country of origin – so we have picked IFY students with a particular language level (as defined by IELTS scores) and over the age of 18. This minimises the influence of age and language level on response, and avoids the ethical/consent/safeguarding issues that arise when minors are involved.

The text analysis will be done in the early part of next term. There won’t be time this term, as once final drafts are submitted, teachers will be busy with coursework marking and then exam marking extraordinaire (biggest cohort of students ever this term). It will have to be the early part, i.e. before the end of week 4, as beyond that, teachers will be busy doing first draft feedback for next term’s students. For next term’s students, if we are repeating the research cycle, we can do the analysis in the autumn term.

Focus for this term’s sessions

This term there will be six sessions, including the current one. (Week 10 will be an impossibility due to the above-mentioned exam marking extraordinaire!) We have decided to focus on comments, as a logical next step from the focus on Quickmarks that our current research is based on.

At the moment, we have a generic comments bank which teachers can copy comments from in order to paste them into a student’s assignment. The aim of this is to save time and to help teachers by providing them with ideas of what they can write. In practice, fast typists ignore the bank, as it is quicker to type what you want to say than it is to read through a bank of comments, decide which one is the best fit and then do the copy-pasting. The comment bank also gets ignored because it is generic rather than specific to a given student’s piece of work. It was noted that, either way, it is useful for new teachers as an extra point of support.

Going forward, we discussed the possibility of going through the bank of comments as we did with the Quickmarks and making them more user-friendly (for students and teachers alike!). One idea was to have a base comment, with space to make it specific by referring to an example from the given student’s work (see the sketch below). Another idea was to refine the categorisation of the comments so that it is easier to find the ones you need. We also talked about slimming down the bank by selecting the best and most widely applicable comments and editing or culling any that seemed less useful (much as we did with some of the Quickmarks).
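To illustrate the base-comment idea, here is a minimal sketch (in Python; the comment wording and placeholders are my own invention, not anything from our actual bank):

```python
# Hypothetical base comment with slots for student-specific detail.
BASE_COMMENT = (
    "Your topic sentences don't always signal the focus of the paragraph. "
    "For example, in paragraph {para}, \"{example}\" doesn't tell the reader "
    "that the paragraph is about {focus}."
)

# The teacher fills the slots while marking, keeping the time-saving of a
# bank but making the comment specific to the student's piece of work.
print(BASE_COMMENT.format(
    para=2,
    example="There are many factors.",
    focus="economic factors",
))
```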

Another issue that came up is the importance of familiarity – be it with the Quickmarks or with the comments, the only way for these resources to be used effectively and efficiently is if teachers are familiar with them, so that time isn’t wasted through not being sure which Quickmark/comment to use, whether there is an appropriate Quickmark/comment available, etc. Familiarity is also important for students, so that they are better able to recognise what their feedback means and what they need to do. To address this, we had the idea of a “Quickmark auction”. This would involve a list of sentences, each with a different mistake underlined, a set of corresponding Quickmarks and a set of Quickmark meanings. By the end of the activity, students (and new teachers!) would have identified what each Quickmark means and which one to use with each error example. We have set up a Google Doc so that we can create this resource collaboratively:

Obviously no one has added anything to it yet – work in progress! It will happen…
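Just to make the shape of the activity concrete, here is a rough sketch (Python; the sentences, symbols and meanings are all invented placeholders, not our actual code):

```python
import random

# Invented placeholders: each item pairs an error sentence with the
# Quickmark symbol that flags it and that symbol's meaning.
ITEMS = [
    {"sentence": "He suggested me to revise the draft.",
     "quickmark": "WW", "meaning": "wrong word / wrong pattern"},
    {"sentence": "The informations were very useful.",
     "quickmark": "C", "meaning": "countability problem"},
]

# For the auction, the three columns are presented shuffled independently,
# so students (and new teachers!) have to match them back up.
sentences = [item["sentence"] for item in ITEMS]
quickmarks = random.sample([item["quickmark"] for item in ITEMS], len(ITEMS))
meanings = random.sample([item["meaning"] for item in ITEMS], len(ITEMS))
print(sentences, quickmarks, meanings, sep="\n")
```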

As we did with the Quickmarks, we aim to inform what we do with what we read in relevant literature and discuss in our weekly sessions. Which brings me on to…

Homework

Our reading homework for this week (which I haven’t done yet – yikes!) is:

  • Nicol, D. J. and Macfarlane-Dick, D. (2006) ‘Formative assessment and self-regulated learning: a model and seven principles of good feedback practice’, Studies in Higher Education, 31(2), pp. 199-218.
  • Burke, D. and Pieterick, J. (2010) Giving Students Effective Written Feedback. McGraw-Hill Education.

I’d better get to it!

Scholarship Circle: Giving formative feedback on student writing (2.2)

For more information about what scholarship circles involve, please look here, and for write-ups of previous scholarship circles, here.

You might also be interested in session 1 / session 2 / sessions 3 and 4 / sessions 5-8 / session 9 / session 2.1 of this particular circle.

In this week’s session of the scholarship circle, we started by doing a pilot text analysis. In order to do this, we needed a first draft and a final draft of a piece of CW3 essay coursework, and a method of analysis. Here is what it looked like:

So…

  • QM code refers to the error correction code; in this column we noted down the symbol given to each mistake in the first draft.
  • Focus/criterion refers to the marking criteria we use to assess the essay. There are five criteria – Task achievement (core elements and supported position), Organisation (cohesive lexis and meta-structures), Grammar (range and accuracy), Vocabulary (range and accuracy) and Academic conventions (presentation of source content and citations/references). Each QM can be attached to a criterion, so that when the student looks at the criteria-based feedback, it also shows them how many QMs are attached to each criterion. The more QMs there are, the more that criterion needs work!
  • Error in first draft and Revision in final draft require exact copying from the student’s work, unless they have removed the word(s) that prompted the QM code.

Revision status is where the method comes in. Ours, shared with us by the M.A. researcher whose project our scholarship circle was born out of, is based on Storch and Wigglesworth. Errors are assigned a status as follows (a rough sketch of how this coding might be tallied appears after the list):

  • Successful: the revision made has corrected the problem
  • Unsuccessful: the revision made has not corrected the problem
  • Unverifiable: the QM was wrongly used by the teacher, so the revision cannot be judged – e.g. the student has changed something incorrectly in the final draft on the basis of that QM, or has made no change where, in reality, none was required
  • Unattempted: the QM is correctly used but the student has not made any change in the final draft.
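To make the scheme concrete, here is a minimal sketch (Python; the field names, symbols and example data are all invented for illustration – our actual analysis lives in a table like the one pictured above):

```python
from collections import Counter
from dataclasses import dataclass

# One record per QM-coded error, mirroring the columns described above.
@dataclass
class CodedError:
    qm_code: str      # symbol given to the mistake (invented examples below)
    criterion: str    # which of the five marking criteria the QM attaches to
    first_draft: str  # exact text that prompted the QM
    final_draft: str  # corresponding text in the final draft
    status: str       # Successful / Unsuccessful / Unverifiable / Unattempted

records = [
    CodedError("WW", "Vocabulary", "make a research", "do research", "Successful"),
    CodedError("Art", "Grammar", "the society", "the society", "Unattempted"),
]

# Tally revision statuses, and QMs per criterion - the more QMs
# attached to a criterion, the more that criterion needs work.
print(Counter(r.status for r in records))
print(Counter(r.criterion for r in records))
```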

Doing the pilot threw up some interesting issues that we will need to keep in mind if we use this approach in our data collection:

  • As we are a group rather than a single researcher, there needs to be consistency with regard to what is considered successful and what is considered unsuccessful. E.g. if the student removes a problem word/phrase rather than correcting it, is that successful? If the student corrects the issue identified by the QM but the sentence is still grammatically incorrect, is that successful? The key here is that we make a decision as a group and stick to it, as otherwise our data will not be reliable or useful, due to inconsistency.
  • We need to beware of making assumptions about what students were thinking when they revised their work. One thing a QM does, regardless of the student’s understanding of the code, is draw their attention to that section of writing and encourage them to focus closely on it. Thus, the revision may go beyond the QM because the student has had a different idea of how to express something.
  • It is better to do the text analysis on a piece of writing that you HAVEN’T done the feedback on, as this enables you to be more objective in your analysis.
  • When doing a text analysis based on someone else’s feedback, however, we need to avoid getting sucked into questioning why a teacher has used a particular code and whether or not it was the most effective correction to suggest. These whys and wherefores are a separate study!

Another thing that was discussed was the need to get ethical approval before we can start doing anything. This consists of a 250-word overview of the project, in which we need to state the research aims as well as how we will collect data. As students and teachers will need to consent to the research being done (i.e. to the use of their information), we need to include a blank copy of the consent form we intend to use in our ethical approval application. By submitting that ethical approval form, we will be committing to carrying out the project, so we need to be really sure at this point that this is going to happen. Part of the aim of today’s session, in doing a pilot text analysis, was to give us some idea of what we would be letting ourselves in for!

Interesting times ahead, stay tuned… 🙂

Scholarship Circle: Giving formative feedback on student writing (2.1)

It’s a brand new term (well, sort of – it’s actually the third week of it now!), the second of our four terms here at the college, and today (Monday 21st January, though I won’t be able to publish this post on the same day!) we managed our first scholarship circle session of the term.

For more information about what scholarship circles involve, please look here, and for write-ups of previous scholarship circles, here.

You might also be interested in session 1 / session 2 / session 3 and 4 / session 5-8 / session 9 of this particular circle.

The biggest challenge we faced was remembering where we had got to in the final session BC (Before Christmas!). What were our research questions that we had decided on again? Do we still like them? What was the next step we were supposed to take this term?

Who?

We talked again about which students we wanted to participate – did we want IFY (Foundation) or PMP (Pre-Masters)? We considered the fact that it’s not only linguistic ability which influences response to feedback (our focus) – things like age, study pathway, past learning experiences and educational culture in country of origin will all play their part. Eventually, we decided to focus on IFY students rather than PMPs, as PMP students’ coursework may alter dramatically between first and final draft submissions due to feedback from their content tutor, which would affect our ability to do text analysis on their response to our first draft feedback. Within the IFY cohort we have decided to focus on the c and d level groups (the two bottom sets, if you will), as these students are most at risk of not progressing, so any data which enables us to refine the feedback we give them, and others like them, will be valuable.

What?

It is notoriously tricky to pin down a specific focus and design a tool which enables you to collect data that will provide the information you need in order to address that focus. Last term, we identified two research questions:

This session, we decided that this was actually too big and chose to focus on no. 2. Of course, having made that decision – and, in fact, in the process of making it – we discussed what specifically to focus on. Here are some of the ideas:

  • Recognition – which of the Quickmarks are students able to recognise and identify without further help/guidance?
  • Process – are they using the Quickmarks as intended? (When they don’t recognise one, do they use the guidance provided with it, which appears when you click on the symbol? If they do that, do they use the links provided within that information to further inform themselves and equip themselves to address the issue? You might assume students know what the symbols mean, or read the information if they don’t, but anecdotal evidence suggests otherwise – e.g. a student who was given a wrong word class symbol changed the word to a different word rather than changing the class of it!)
  • Application – do they go on to be able to correct other instances of the error in their work?

Despite our interest in the potential responses, we shelved the following lines of enquiry for the time being:

  • How long do they spend altogether looking at their feedback?
  • How do they split that time between Quickmarks, general comments and copy-pasted criteria?

We are mindful that we only have 6 weeks of sessions this term (and that includes this one!), as this term’s week 10, unlike the final week of last term, is going to be, er, a tad busy! (An extra cohort, and 4 exams being done between them, vs one cohort and one exam last time round!) As we want to collect data next term, that gives us limited time for preparation.

How?

We are going to collect data in two ways.

Text analysis

We will each look at a first draft and a final essay draft of a different student and do a text analysis to find out if they have applied the Quickmark feedback to the rest of their text. This will involve picking a couple of Quickmarks that have been given to the student in their first draft, identifying and highlighting any other instances of those error types, and then looking at the final draft to find the highlighted errors, so that we can see whether they have been corrected and, if so, whether successfully or not.

We are going to have a go at this in our session next week, to practise what we will need to do and agree on the process.

Questionnaire

Designing an effective questionnaire is very difficult and we are in the very early stages. We are leaning towards Google Forms as the medium. Key things we need to keep in mind are:

  • How many questions can we realistically expect students to answer? The answer is probably fewer than we think, and this means that we have to be selective in what questions to include.
  • How can we ask the questions most clearly? As well as using graded language, this means thinking about question types – will we use a Likert scale? Tick boxes? Any open questions?
  • How can we ensure that the questions generate useful, relevant data? The data needs to answer the research questions. Again, this requires considering different question types and what sort of data they will yield. Additionally, knowing that we need to analyse all the data that we collect in terms of our research question, we might want to avoid open questions, as that data will be more difficult and time-consuming to analyse, interesting though it might be. (A sketch of why closed questions are easier to analyse follows this list.)
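Since we are leaning towards Google Forms, which can export its responses as a spreadsheet/CSV, here is a minimal sketch (Python with pandas; the file name, column name and scale labels are invented for illustration) of how a closed Likert-style question could be tallied – the kind of quick analysis that open questions don’t allow:

```python
import pandas as pd

# Hypothetical export of our Google Form responses.
df = pd.read_csv("questionnaire_responses.csv")

likert_order = ["Strongly disagree", "Disagree", "Not sure",
                "Agree", "Strongly agree"]

# One line per closed question: count each response option.
# Open questions, by contrast, would need manual coding first.
counts = (df["I understand what each Quickmark symbol means"]
          .value_counts()
          .reindex(likert_order, fill_value=0))
print(counts)
```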

The questions will obviously relate to the focuses identified earlier – recognition, process and application. One of our jobs for the next couple of sessions is to write our questions. It’s easy(ish!) to talk around what we want to know, but writing clear questions that elicit that information will be significantly more challenging!

Finally, another thing we acknowledged is that, research-wise, we are not doing anything that hasn’t been done before, BUT the “newness” comes from doing it in our particular context. And that is absolutely fine! 🙂

Homework: 

Well, those of us who haven’t got round to doing the reading set at the end of the previous session (cough cough) will hopefully manage to finish that. (That was Goldstein, L., ‘Questions and answers about teacher written commentary and student revision: teachers and students working together’, in Journal of Second Language Writing, and Ene, E. and Upton, T. A., ‘Learner uptake of teacher electronic feedback in ESL composition’.) Otherwise, the homework is thinking about possible questions and how to formulate them!

Scholarship Circle: Giving formative feedback on student writing (9)

It’s the last week of term, exam week, and we have managed to squeeze in a final scholarship circle meeting for the term. How amazing are we? 😉 I also have no excuse not to write it up shortly afterwards – nothing sensitive content-wise and, for once, I have a wee bit of time. Sort of. (By the time you factor in meetings, WAS and ADoS stuff for next term, not as much as you might think…!)

For more information about what scholarship circles involve, please look here, and for write-ups of previous scholarship circles, here.

You might also be interested in session 1 / session 2 / sessions 3 and 4 / sessions 5-8 of this particular circle.

So, session 9. The first thing we recognised in this session is that we won’t be collecting data until term 3 for September students and term 4 for January students (which will be their term 3). This is a good thing! It means we have next term to plan out what we are going to do and how we are going to do it. It sounds like a lot of time but there is a lot we have to do and elements of it are, by their nature, time-consuming.

Firstly, we need to decide exactly who our participants will be and why. “You just said term 3/4 September/January students!” I hear you say. Yes…generally, that is the focus. In other words, students who are doing a coursework essay and therefore receiving QuickMark feedback. However, within those two broad groups (September Term 3/January Term 4), we have IFY (foundation) and PMP (Pre-masters) students and the IFY cohorts are streamed by IELTS score into a, b, c and (numbers depending) d groups. So, we need to decide exactly who our participants will be. This choice is affected by things like the age of the participants (some of our students are under 18 which makes the ethical approval process, which is already time-consuming, markedly more difficult) and what exactly we want to be able to find out from our data. For example, if we want to know the effect of the streaming group on the data, then we need to collect the data in such a way that it is marked for streaming group. (NB: as I learnt last term in the context of a plagiarism quiz that had to be disseminated to all students, it is a bad idea for this information to rely on student answers – having a field/question such as “What group are you in?” might seem innocuous but oh my goodness the random strangeness it can throw up is amazing! See pic below…)

“Bad” and “g’d” are other examples of responses given! …Students will be students! We need to make sure that our Google Form collects the information we want to collect and allows us to analyse it in the way that we want to analyse it. Obviously, we need to know what we want to collect and how we want to analyse it before we can design an effective tool. Additionally, however pesky they might be, participant students will also need to be a) fully informed about the research and b) aware that participation is voluntary and that they have the right to cease participation and withdraw their data at any point.
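To show why a constrained question (e.g. a dropdown of groups) beats a free-text field, here is a minimal sketch (Python; the group names and example answers are invented, echoing the quiz responses above) of the clean-up that free text forces on you:

```python
# Hypothetical clean-up of free-text answers to "What group are you in?".
# A dropdown in the form makes this step unnecessary - which is the point.
VALID_GROUPS = {"a", "b", "c", "d"}

def normalise_group(raw: str) -> str | None:
    """Map a messy free-text answer to a streaming group, if possible."""
    for token in raw.strip().lower().replace("-", " ").split():
        if token in VALID_GROUPS:
            return token
    return None  # unrecoverable - flag for manual follow-up

answers = ["Group C", "d", "Bad", "g'd"]
print([normalise_group(a) for a in answers])  # ['c', 'd', None, None]
```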

Developing our research is just one of the directions that our scholarship circle might take next term. We also discussed the possibility of further investigation into how to teach proofreading more effectively. We are hoping to do some secondary research into this and refine our practice accordingly, though we recognised that time constraints may limit what we can do. For example, we discussed the following activity to encourage proofreading after students receive feedback on their drafts:

  • Put students in groups of four and have them look at the feedback, specifically QuickMarks, on their essays.
  • Students, in their groups, work out what is wrong and what the correction should be. The teacher checks their corrections.
  • Students pick a mistake or two (up to four sentences) and copy them onto a piece of flip-chart paper with the mistakes still in place.
  • Each group passes their flip-chart paper to another group, who try to correct it.
  • The flip-chart paper passes from group to group, the idea being that each group looks at the mistakes and the first group’s edits and decides whether they think it is now correct or whether they want to make additional changes (in a different colour).
  • Finally, the original group gets their flip-chart paper back, with corrections and edits, and compares it with their correct version.

This is a really nice little activity. However, after students receive their first draft feedback, they do not have any more lesson time (what time remains of the term, after they get their feedback, is taken up by tutorials, mocks and exams!), so it wouldn’t be possible to do it using that particular feedback. Perhaps what we need to do is use the activity with a different piece of work (for example a writing exam practice essay), and integrate other proofreading activities at intervals through the course, so that when they do get their first draft feedback for their coursework, they know what to do with it!

Another thing we discussed in relation to proofreading and helping students to develop this skill is the importance of scaffolding. I attempted to address the issue of scaffolding the proofreading process in a lesson I wrote for my foundation students last term. In that lesson, students had to brainstorm the types of errors that they commonly make in their writing – grammar, vocabulary, register, cohesion-related things like pronouns, etc. – and then I handed out a paragraph with some of those typical errors sown in, and they had some time to try and find the errors. After that, I gave them the same paragraph but with the mistakes underlined and, having checked which ones they had found correctly, they had to identify the type of error for each one that had been underlined. Finally, I gave them a version with the mistakes underlined and identified using our code, and they had to try and correct them. All of this was group work. The trouble was, the lesson wasn’t long enough for them (as a low-level foundation group) to have as much time as they could have done with for each stage. I had hoped there would be time for them to then look at their coursework essays (this was the last lesson before first draft submission) and try to find and correct some mistakes, but in reality we only just got through the final paragraph activity.

Other ideas for scaffolding the development of proofreading skills were to prepare paragraphs with only one type of mistake sown in, so that students only had to identify errors of that particular type – the idea being that they could practise identifying different error types separately before trying to bring it all together in a general proofreading activity. That learning process would be spread over the course rather than concentrated into one (not quite long enough) lesson. There is also a plan to integrate such activities into the Grammar Guru interactive/electronic grammar programmes that students are given to do as part of their independent study. Finally, we thought it would be good to be more explicit about the process we want students to follow when they proofread their work. This could be done in the general feedback summary portion of the feedback, e.g. cueing them to look first at the structural feedback and then at the language feedback, etc. That support would hopefully stop them being overwhelmed by the feedback they receive. One of our tasks for scholarship circle sessions next term is to bring in the course syllabus and identify where proofreading focuses could be integrated.

Another issue regarding feedback that we discussed in this session was the pre-masters students’ coursework task, which is synoptic – they work on it with their academic success tutor with a focus on content, and with us with a focus on language. Unfortunately, with the set-up as it is, as students do not work on it with a subject tutor, there is no content “expert” to guide them, and there is a constant tension with regards to the timing of feedback. Our team give feedback on language at the same time as the other team give feedback on content (which, not being experts, is a struggle for them, exacerbated by not being able to give feedback on language, especially as the two are fairly entwined!). Content feedback may necessitate rewriting of chunks of text, rendering our language feedback useless at that point in time. However, there is not enough time in the term for feedback to be staggered appropriately. We don’t have a solution for this, other than more collaboration with Academic Success tutors, which, again, time constraints on both sides may render difficult, but it did lead us on to the question of whether we should, in general, focus our QuickMarks only on parts of the text that are structurally sound. (Again, there isn’t time for a round of structural feedback followed by a round of linguistic feedback once the structural feedback has been implemented.)

Suffice to say, it is clear that we still have plenty to get our teeth into in future scholarship circle sessions – our focus, and closely related areas, are far from exhausted. Indeed, we still have a lot to do, with our research in its early stages. We are not sure what will happen next term with regards to when the sessions will take place, as it is timetable-dependent, but we are keeping our current time-slot pencilled in as a starting point. Fingers crossed a good number of us will be able to make it, or we’ll find an alternative time that more of us can do!

Thank you to all my lovely colleagues who have participated in the scholarship circle this term, it has been a brilliant thing to do and I am looking forward to the continuation next term!


Scholarship Circle: Giving formative feedback on student writing (3+4)

Time and workload have dictated that I combine two weekly scholarship sessions into one post, so this “double digest” is my write-up of sessions 3 and 4.

(For more information about what scholarship circles involve, please look here and for write-ups of previous scholarship circles, here

You might also be interested in session 1 and session 2 of this particular circle.)

Session 3

In session 3, we started by discussing the type of feedback we give students on their coursework. In CW1 (an essay outline), we give them structural feedback as well as pointing out where sources are insufficiently paraphrased, while in CW3 they get structural feedback and language feedback using the error correction code. We also talked more about direct feedback. We questioned where the line between direct feedback and collusion lies, and decided that it’s OK for students to use teacher feedback to improve their work, but that if they hired another tutor to correct their work, it would be collusion. We also came to the conclusion that direct feedback can be useful for certain things and that you could use it to scaffold learners, e.g. in the first instance of a mistake, provide the correct form as a model; in the second instance, provide the start of the correct form; in the third instance, just highlight the type of mistake and let the learner correct it by themselves, using previous instances and feedback to help them. If there are any further instances of that mistake type, indicate to learners that they need to find and correct them. (A sketch of this graduated approach follows.)
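Purely as an illustrative sketch (Python; the function and wording are mine, not an agreed procedure), the graduated scaffolding idea looks like this:

```python
# Hypothetical sketch of graduated direct feedback: teacher support
# fades with each repeated instance of the same mistake type.
def support_for_instance(occurrence: int) -> str:
    """Level of support for the nth instance (1-based) of a mistake type."""
    if occurrence == 1:
        return "provide the full correct form as a model"
    if occurrence == 2:
        return "provide the start of the correct form"
    if occurrence == 3:
        return "highlight the mistake type only; learner self-corrects"
    return "tell the learner further instances exist; they find and fix them"

for n in range(1, 5):
    print(n, "->", support_for_instance(n))
```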

We also talked more about the issue of correcting mistakes beyond those pointed out by the teacher, i.e. proofreading work for more instances of the same mistake. In our experience, this frequently does not happen. In the masters research done by one of our number, the main reasons for that, given by the students when they were asked, were:

  • the belief that no comments = no mistakes
  • not knowing how to find/correct mistakes

However, with regard to the quick marks (i.e. the error correction code on Turnitin), for the students who participated in the study, 80-100% of quick marks resulted in successful revisions. Thus, on the whole, mistakes are only corrected when they are pointed out. This brought us back to the question of proofreading and learner training, which we had touched on in previous sessions, identifying it as a definite need.

We acknowledged that we expect proofreading but that it doesn’t happen. This is partly because our learners are not used to it – they are used to having all errors pointed out to them. In some cases, as with one of the participants in the M.A. study, learners are not able to identify mistakes. In that case, the ideal would be helping those learners to find and correct the errors they ARE able to deal with at their level. We decided that, in order to help learners in both cases, more proofreading-related lessons are needed. They already have “Grammar Guru”, an online interactive grammar tutoring tool, within which are activities that prompt proofreading for mistakes with the specific focus of a given tutorial, e.g. articles.

However, the only time they do it with their own work is with CW3 and so we wondered if there would be scope for using work produced for writing exam practices as the basis for proofreading activities too.

We also looked at two tools for encouraging students to engage with their feedback:

1. A Google Form, adapted from something similar used at Nottingham Trent, that encourages students to find examples of particular mistakes in their text, correct them and make a note of the materials used in order to make that correction:

The idea is that students complete it between receiving their feedback and attending their tutorial, so that during the tutorial the tutor can, amongst other things, check their corrections and suggest alternative sources.

2. A form for students to complete that pushes them to reflect on their feedback:

As with the first one, this is intended to be completed between receiving the feedback on Turnitin and attending the tutorial, thus making the tutorial more effective than the common scenario where the student comes in not having even opened the feedback. We also wondered about the possibility of combining the two – in other words, combining focused error identification and correction with reflection on other aspects of the feedback.

Session 4

This week, in session 4, we mainly focused on the error correction code that we use. We looked at each symbol and its accompanying notes, firstly deciding whether it was necessary to keep and then refining it. The code, used on Turnitin, works as follows: we highlight mistakes and attach symbols to them. When the student subsequently looks at their text, they see the symbols, and when they click on a symbol, the accompanying notes appear. Our notes include, depending on the mistake, an explanation of the mistake, examples of incorrect and corrected use, and links to sources that students can use to help them learn more about the language point in question. Here is an example:

We paid particular attention to the clarity of the language used in the accompanying notes, getting rid of anything unnecessary (e.g. modals, repetition, etc.) and refining the links provided to help students. The code also exists in Google Doc format, so we all had Chromebooks out and were working on it collaboratively. There are a lot of symbols and there was plenty to say, so we actually only got as far as “C”!! (They are ordered alphabetically…!) This job will continue in the next session, which will be the week after next, as next week we have Learning Conversations, which are off-timetable, so our availability is very different from normal.

I would be interested to hear what approaches you use where you work in terms of error correction, codes, proofreading training, pre-tutorial requirements, engaging learners with feedback and so on. Please do share any thoughts using the comments box below… 🙂

Scholarship Circle: Giving formative feedback on student writing (2)

Before we had time to turn around twice, Tuesday rolled around again, and with it our weekly scholarship circle meeting, with its name and focus of “Giving formative feedback on student writing”. (For more information about what scholarship circles involve, please look here; for write-ups of previous scholarship circles, here; and to see what we discussed last week – in session 1 of this circle – here.)

A week is a short turnaround time, but a number (9, in fact!) of eager beavers, who’d all managed to read the article “Sugaring the Pill: Praise and criticism in written feedback” by Fiona Hyland and Ken Hyland in the Journal of Second Language Writing, turned up to discuss it and relate it to our context. The article is, in its own words, “a detailed text analysis of the written feedback given by two teachers to ESL students over a complete proficiency course”. The authors categorise all the feedback by function – namely praise, criticism and suggestions – and analyse it accordingly. It’s a very interesting and thought-provoking article. However, the purpose of this post is not to summarise the article but rather the discussion which arose from it. (This is not as easy a task as it might sound!)

Praise

We started by talking about praise. Something we found interesting, in both the article and a similar piece of research done for a masters dissertation by one of our number, was that the students in these studies were able to identify when praise was insincere/formulaic/there for the sake of being there. (Here we are talking about the general comments at the end of a text rather than specific in-text comments.) Additionally, also in terms of general end-of-text comments, students who receive substantial formulaic praise may automatically mentally downgrade it, particularly if the balance of feedback overall is in favour of praise, i.e. more positive comments than suggestions for improvement. In connection with this, students were also found not to believe positive general comments if they did not reflect the in-text feedback, which, being more directly connected to the text, held more weight for them. Finally, both the article and the masters research highlighted the danger of the suggestions for improvement in a praise-criticism sandwich being ignored/missed by a student, and the danger of hedged comments (e.g. using modals) being misunderstood.

Another aspect of feedback which we thought might lead to misunderstanding is our feedback guideline here at the college, which stipulates that in our general comments we should include 3 positive points and 3 areas to work on. We discussed the possibility that this might be (mis)interpreted by students to mean that the piece of writing was good and in need of improvement in equal measure, when in fact that may not be the case. We also discussed the importance of framing the negative points as suggestions rather than criticism, as well as of avoiding hedging and the aforementioned dangers of miscommunication that may go with it:

Compare

“Your writing does not have enough linkers so it is confusing” (highlighting a negative)

with:

“You should include more linkers in your work to make it clearer” (making a suggestion for improvement)

This would, in turn, be easier to understand for a student than:

“I wonder if you could include more linkers in this paragraph? This might help the reader.” (hedged)

or:

“This is a good introduction with a clear thesis statement and scope, however, you need to look at coherence. Go back to …. and consider… . I think you could also benefit from having a look at…  …it is quite advanced but I think you are ready to take your AW to the next level!” (Praise-criticism sandwich: the student in question ignored all the suggestions because the teacher had said it was good so they didn’t feel the need to make any changes!) 

Of course, as discussed in the journal article, teachers do use phrases such as “I wonder if” and questions rather than direct instructions, to avoid appropriating the piece of work and to avoid being overly authoritative, in order to meet what Hyland and Hyland describe as the “interpersonal goal” of feedback (in contrast with its pedagogic and informational goals). Our conclusion, based on the masters findings, our experience and having read the journal article, was that teachers possibly worry too much about being polite in their feedback, which ends up confusing the student more than anything else. As here:

When the message gets lost…

Still relating to praise, we agreed that it is most effective when specific, i.e. when it directly highlights something in the text that the student is doing well – a view supported by the article and the masters research. Carrying this over to general end-of-text comments, we wondered whether ‘repeating’ what you have said in specific in-text comments (which I admitted to doing quite a bit, hence raising the issue), whether positive or negative, might actually reinforce the importance of the in-text comments in question, rather than being redundant, and make the general comments more personalised/less formulaic.

Finally, one issue I raised was that on Turnitin, if all the in-text comments (both positive and “negative” – the latter including suggestions for improvement, not just criticisms, obviously) are highlighted in a single colour, a student might look at that and assume their essay was terrible because of the sheer quantity of highlighting. I wondered if using different colours of highlighting for positive and negative comments would alleviate that. However, it was also put forward that it might be even worse if students knew that code and had very few things highlighted in the positive colour!

Improving feedback

As well as identifying the potential issues with praise discussed above, we also discussed possible solutions:

Reframing general comments

We agreed that:

  • Short, personalised comments would be most useful, to avoid misunderstandings and identifiable insincerity. (Our comments bank – a Google Doc of generic comments – does not currently fit this bill.)
  • In Turnitin we could make more use of the “T” option (which sits alongside the QM and comment bubble options, and which most of us were unaware of!). This allows you to write directly on the text in ‘blue ink’, which might be more personalised and allow more flexibility than the general comments in the comments box. It might also allow for less in-text highlighting for comment bubbles.
  • Having a one-size-fits-all guideline of “3 positive things and 3 ‘negative’/to improve” is problematic, as students are all different (though if you have 60+ students’ work to look at in a short space of time, is carefully tailored, individualised feedback realistically feasible?).

Learner Training

We decided that learner training is crucial for enabling students to make full use of the feedback, and therefore for making it worth our time and theirs. Firstly, for in-text comments to be truly useful, it was suggested that we need to explicitly train students to look for further examples of the mistakes we highlight using the Quick Marks (i.e. error correction code), as otherwise they will correct what we highlight but won’t apply it to the rest of their text. Perhaps part of learner training would be training them towards the point where they can do that without being continually prompted in comments or tutorials. We also considered the need to recognise and differentiate between “treatable” errors (e.g. articles – there are rules that can be followed) and “non-treatable” errors (e.g. word choice), and to give appropriate feedback for each. For non-treatable errors, direct feedback, i.e. giving students the correction, is better, while for treatable errors we can use indirect feedback, i.e. identifying the error and asking students to correct it themselves, using clues such as error correction coding. Currently, most of our feedback is indirect, so this is something we may need to reconsider. (A sketch of this distinction follows.)
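As a minimal sketch of the decision rule (Python; the category lists are invented beyond the articles/word-choice examples mentioned above):

```python
# Hypothetical mapping from error type to feedback strategy:
# treatable (rule-governed) errors get indirect feedback; non-treatable
# (idiosyncratic) errors get direct feedback, i.e. the correction itself.
TREATABLE = {"articles", "subject-verb agreement", "verb tense"}
NON_TREATABLE = {"word choice", "unidiomatic phrasing"}

def feedback_strategy(error_type: str) -> str:
    if error_type in TREATABLE:
        return "indirect: mark with the error correction code; student self-corrects"
    if error_type in NON_TREATABLE:
        return "direct: supply the correction"
    return "unclassified: teacher judgement needed"

print(feedback_strategy("articles"))     # indirect ...
print(feedback_strategy("word choice"))  # direct ...
```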

Another aspect of learner training that we discussed was how to train learners to make the most of their very brief (10-15 minute) tutorials. For these tutorials to be truly beneficial, we agreed that it is imperative for students to look at their feedback BEFORE coming to the tutorial. In fact, they need not only to look at it but also to attempt to respond to it, so that during the tutorial the tutor can check their attempts and help them with the areas they were unable to address independently. We wondered about using a pre-tutorial sheet to encourage this – something that students can only complete by engaging with the feedback. A couple of teachers have already experimented with this kind of thing, with encouraging results, so it is worth looking into.

All in all, we managed to discuss a lot in an hour – or just over, as we lost track of time! (You know it’s a good scholarship circle when the participants just can’t drag themselves away at the end! I think the reason this scholarship circle is going so well is that it has a very specific focus and it is one that is equally important to all of us.)

Homework for next week: to read a chapter by Dana Ferris called “Does error feedback help student writers? New evidence on the short- and long-term effects of written error correction”, in Feedback in Second Language Writing (Cambridge University Press), edited by the same Hylands who wrote last week’s article! Just from the title I am very curious about what Ferris will say, but I won’t have time to find out till at least the weekend!

Feel free to join in the discussion by commenting on this post! 🙂


Scholarship Circle: Giving formative feedback on student writing (1)

Today, Tuesday 2nd October, was the inaugural meeting of the newly formed “Giving formative feedback on student writing” scholarship circle, which will take place weekly on Tuesdays here at the USIC arm of the ELTC. (For more information about what scholarship circles involve, please look here, and for write-ups of previous scholarship circles, here.)

With a healthy turnout of ten teachers, the main goal for this initial meeting was to pin down what we want to get out of the sessions and how we are going to achieve it. We started with these questions:

  • What are we all here for?
  • What do we want to learn?
  • What shall we do with this scholarship circle?

We established that we were all there because we want to be able to give students better feedback. By better, we mean the right kind of feedback: feedback that they will a) be able to understand and b) benefit from. We therefore want to avoid the situation in which we put a lot of effort into producing feedback on their work and they don’t use it.

Our particular focus for this is first drafts of coursework assignments. We have CW1 which is an essay outline and CW3 which is either an essay based on the essay outline (foundation students) or a synoptic assignment research proposal (pre-masters students).

These are some of the questions that we want to answer, or respond to, in the course of this scholarship circle (the list may grow or change over its course; this is just our starting point):

  • How much feedback can students cope with? What is the right amount to give them?
  • What language should be used? (H’obviously this doesn’t translate as should we give feedback in English or Mandarin…)
  • How can we help students to access/use feedback more effectively? This includes Quick Marks (i.e. the error correction code on Turnitin), in-text comments and general comments, as well as helping students use them in combination. We also have some evidence from research done on our students suggesting that they prefer specific in-text comments, which are more memorable in the long term than Quick Marks – something to keep in mind.
  • How can we help teachers use Quick Marks more effectively and consistently?
  • How and when do we praise students’ work? How do we do this most effectively, without seeming insincere?

From these questions, we settled on a short list of things to do:

  • Read “Sugaring the Pill: Praise and criticism in written feedback” by Fiona Hyland and Ken Hyland in Journal of Second Language Writing, so that we can discuss it next week. Dana Ferris was also recommended as a good author for sources about feedback.
  • Discuss and standardise our use of Quick Marks in a future Scholarship Circle meeting.
  • Discuss designing/creating learner training materials/classes to help our students develop independent use of formative feedback to correct their errors.

It was a short but fruitful session, setting us up nicely for our future weekly meetings. Watch this space for future posts tracking our progress and my reflections on our journey! 🙂