Scholarship Circle: Giving formative feedback on student writing (3.1)

I would say it’s the start of a new term (College term 3) and a new wave of scholarship circle sessions, but, in reality, it’s actually week 5! We had our first session for the term last week and this is me playing catch-up. The beginning of a new term is a notoriously busy time, and for the January cohort in particular it’s a scattergun of coursework draft submission/feedback (+ for me as ADoS, in addition to doing my own feedback, providing support to teachers doing theirs) and speaking exams (+ for me as ADoS, double-marking a portion of those with each of my teachers), so I’m actually pretty glad the scholarship circle didn’t get going ’til last week!

Our agenda was as follows:

  1. Revisit our research on Quickmarks to see where we are at and figure out our timeline.
  2. Decide on a focus for this term’s scholarship circle sessions.
  3. Set ourselves some reading homework.

This term’s research project update

The consent forms are ready to go and will be sent to:

  • the centre manager
  • the teachers of the students we have identified as the sample who will receive the questionnaire and from which participants for the text analysis will be selected (there are a small number of us and we will only be doing a small number i.e. 1-2 of text analyses each!)
  • the students themselves. (We only need to send a consent form to those selected for text analysis as the consent form for the questionnaire will be built into the questionnaire).

We reconfirmed that we will be focusing on International Foundation Year (IFY) students rather than PMP (Pre-masters) students, as PMP students’ coursework tends to change dramatically between first draft feedback and final submission due to content tutor feedback, which would affect text analysis possibilities. We are aware that a range of factors influence response to feedback, e.g. age, pathway, language level, past learning experiences, educational culture in country of origin, so have picked IFY students with a particular language level (as defined by IELTS scores) and over the age of 18. This minimises the influence of age and language level on response, and avoids the ethical/consent/safeguarding issues that arise when minors are involved.

The text analysis will be done in the early part of next term. There won’t be time this term as once final drafts are submitted, teachers will be busy with coursework marking and then exam marking extraordinaire (biggest cohort of students ever this term). It will have to be the early part i.e. before the end of week 4, as beyond then, teachers will be busy doing first draft feedback for next term’s students. For next term’s students, if we are repeating the research cycle, we can do the analysis in the autumn term.

Focus for this term’s sessions

This term, including the current session, there will be 6 sessions. (Week 10 will be an impossibility due to above-mentioned exam marking extraordinaire!) We have decided to focus on comments, as a logical next step to the focus on Quickmarks that our current research is based on.

At the moment, we do have a generic comments bank which teachers can copy comments from in order to paste them into a student’s assignment. The aim of this is to save time and help teachers by providing them with ideas of what they can put. In practice, fast typists ignore the bank as it is quicker to type what you want to say than it is to read through a bank of comments, decide which one is the best fit and then do the copy-pasting. The comment bank also gets ignored due to it being generic rather than specific to a given student’s piece of work. It was noted that either which way, it is useful for new teachers as an extra point of support.

Going forward, we discussed the possibility of going through the bank of comments as we did with the Quickmarks and making them more user-friendly (for students and teachers alike!). One idea was to have a base comment, with space to make it specific by referring to a given student example. Another idea was to refine the categorisation of the comments so that it is easier to find the ones you need. We also talked about refining the bank by selecting the most useful and widely applicable comments and editing or culling any that seemed less useful (much as some of the Quickmarks were edited or culled).

Another issue that came out is the importance of familiarity – be it with the Quickmarks or with the comments, the only way for these resources to be used effectively and efficiently is if teachers are familiar with them, so that time isn’t wasted through not being sure which Quickmark/comment to use, whether there is an appropriate Quickmark/comment available, etc. Familiarity is also important for students, so that they are better able to recognise what their feedback means and what they need to do. To address this, we had the idea of a “Quickmark auction”. This would involve a list of sentences, each with a different mistake underlined, a set of corresponding Quickmarks and a set of Quickmark meanings. By the end of the activity, students (and new teachers!) would have identified what each Quickmark means and which one to use with each error example. We have set up a Google Doc so that we can create this resource collaboratively:

Obviously no one has added anything to it yet – work in progress! It will happen…

As we did with the Quickmarks, we aim to inform what we do with what we read in relevant literature and discuss in our weekly sessions. Which brings me on to…


Our reading homework for this week (which I haven’t done yet – yikes!) is:

  • Nicol, D.J. and Macfarlane-Dick, D. (2006) ‘Formative assessment and self-regulated learning: a model and seven principles of good feedback practice’, Studies in Higher Education, 31(2), pp. 199-218.
  • Burke, D. and Pieterick, J. (2010) Giving Students Effective Written Feedback. McGraw-Hill Education.

I’d better get to it!


Feedback forms and follow-up

This post is inspired by the coincidence of happening across this post by Anthony and its having recently been that time of term where I work – only, instead of end-of-course questionnaire evaluation forms, we’ve just done mid-course questionnaire evaluation forms. These were anonymous to the extent that the teacher was required to leave the room while the students completed the forms, so that they were able to discuss their responses together freely.

Anthony makes a good point about the issue of lack of context with such anonymous data: I suppose regardless of the number of learners in the class, with anonymity the data becomes quantitative rather than qualitative – i.e. x students think the course is satisfactory, while y students think it better than satisfactory and z think it is below standard. X students think there is too much grammar, y students think there is too little grammar and z students think the amount of grammar is just right. And so on. If, in the first instance, x represents the majority, and in the second instance, z represents the majority, then one is relieved.


Which colour stands for satisfied? (Cuisenaire Rods – taken from wiki commons, labeled for commercial reuse with modification)

Would it be better to have feedback with names attached? Anthony describes the potential benefits of this. As I see it, the main one would be the potential for tailoring your response(s) to it and gaining more understanding of what’s behind it. As he mentioned, the cons are generally thought to be the issue of reliability – will a student give genuine feedback if they have to put their name to it? Thinking back to my time as student representative, while I was doing my courses at Leeds Met, there was definitely a strong preference for anonymous feedback as far as the students were concerned – I ended up with a system whereby another student collected and collated feedback and gave it to me to pass on to the tutors at the evaluation meeting – but the tutors welcomed informal feedback at any time too, which was really nice. Why the preference for anonymity amongst students? As far as I can tell, the thinking goes something along the lines of, “If I complain, they will hate me, which will make things difficult henceforth and may mean I get a lower grade in the next assignment”. There may be a cultural aspect at play too. In some cultures, complaining just isn’t done. Britain would be one of those – everyone has seen that infographic regarding “what we say” and “what we really mean”! Stiff upper lip and all that, what! 😉 Perhaps anonymity makes the complaining less direct and therefore more acceptable/comfortable for people to undertake.

Perhaps the length of the course also influences things: If you have a group of students for a whole year, then possibly there is more potential for building a rapport with them and creating a situation whereby channels of communication are open and students are comfortable in using these and telling you when something isn’t working or, on the other hand, if something is particularly helpful. They may then feel more comfortable putting their names to a feedback form too, if you wanted that. Whereas, if you have a class for a couple of hours a week for 12 weeks, depending on the students and teacher involved, you may not reach the point where they would want to put their name to a form, even though you have a perfectly good rapport with them. (And of course, you can’t make it optional – within any class you probably have some learners who would be happy to put their name on their form, but then the “anonymous” ones would no longer be anonymous: we’d be back to pseudo-anonymity.)

So what of the anonymous feedback forms in all this? They are an evaluation tool. From the institutional point of view, you can get some statistics out of them. From the teacher’s point of view, you can identify trends – i.e. if within one class, the majority of students are ticking the “too little” box for writing, then you can address that. Then, of course, if the class is very small, probably you know who filled out each form anyway! So it becomes pseudo-anonymity. Either which way, as long as they aren’t the only tool used to evaluate, they serve a purpose. (Even if, as Anthony pointed out, the information is all somewhat decontextualised.) However, from the student’s point of view, having gone to the effort of filling out the form, they probably want some kind of response to it, some kind of acknowledgement…

My question is, what is the best way to do that effectively when the forms are anonymous or “anonymous”? Here, I’m wondering about the feedback directed at the teacher, rather than the institution (I don’t have a magic wand for making classes smaller or at a different time of the day etc.!) What can we do other than, say, doing more writing, if that was the general consensus (as per one of the examples given above), to respond effectively? If you try to talk to the learners as a group about any issues within the feedback forms, then it threatens the anonymity, in that to get anything meaningful out of it, the students who had raised the issues would have to speak, thereby putting their name to it. Which they also may not want to do in front of the class. Getting them to come and speak to you, or to write to you, of their own volition, defeats the point of anonymity too (or does it?) Another issue, of course, is time – whatever method you use to deal with this can’t be too time-consuming in terms of lesson time, because that is at a premium already, or in terms of out-of-class teacher preparation time, because there are only so many hours in a day and a teacher can only physically do so much.

Answers on a postcard! (Or comment below…) 🙂 (And apologies for the somewhat rambling nature of this post…you may complain using the comments box below – anonymously or not as you please 😉 )