Scholarship Circle: Giving formative feedback on student writing (2.2)

For more information about what scholarship circles involve, please look here, and for write-ups of previous scholarship circles, here.

You might also be interested in session 1 / session 2 / session 3 and 4 / session 5-8 / session 9 / session 2.1 of this particular circle.

In this week’s session of the scholarship circle, we started by doing a pilot text analysis. In order to do this, we needed a first draft and a final draft of a piece of CW3 essay coursework, and a method of analysis. Here is what it looked like:

So…

  • QM code refers to the error correction code: in this column we noted down the symbol given to each mistake in the first draft.
  • Focus/criterion refers to the marking criteria we use to assess the essay. There are five criteria – Task achievement (core elements and supported position), Organisation (cohesive lexis and meta-structures), Grammar (range and accuracy), Vocabulary (range and accuracy) and Academic conventions (presentation of source content and citations/references). Each QM can be attached to a criterion, so that when the student looks at the criteria-based feedback, it also shows them how many QMs are attached to each criterion. The more QMs there are, the more that criterion needs work!
  • Error in first draft and Revision in final draft require exact copying from the student’s work unless they have removed the word/s that prompted the QM code.

Revision status is where the method comes in. Ours, shared with us by the M.A. researcher whose project our scholarship circle was born out of, is based on Storch and Wigglesworth. Errors are assigned a status as follows (a rough sketch of how this coding might be recorded and tallied follows the list below):

  • Successful: the revision made has corrected the problem
  • Unsuccessful: the revision made has not corrected the problem
  • Unverifiable: the QM was wrongly used by the teacher, so either the student has made an incorrect change in the final draft based on that QM, or they have made no change because none was in fact required
  • Unattempted: the QM is correctly used but the student does not make any change in the final draft.
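For the sake of concreteness, here is a minimal sketch (in Python) of how one coded error might be recorded and how QMs could be tallied per criterion and per revision status. It is purely illustrative – not a tool we used – and the code symbol, field names and sample record are all invented:

    from collections import Counter
    from dataclasses import dataclass
    from enum import Enum

    class RevisionStatus(Enum):
        SUCCESSFUL = "successful"      # the revision corrected the problem
        UNSUCCESSFUL = "unsuccessful"  # the revision did not correct the problem
        UNVERIFIABLE = "unverifiable"  # the QM was wrongly used, so no judgement possible
        UNATTEMPTED = "unattempted"    # the QM was correct but no change was made

    @dataclass
    class CodedError:
        qm_code: str    # symbol attached to the mistake, e.g. "WW" (invented)
        criterion: str  # one of the five marking criteria
        error: str      # exact wording from the first draft
        revision: str   # exact wording from the final draft (empty if unchanged/removed)
        status: RevisionStatus

    def summarise(coded):
        # The more QMs attached to a criterion, the more that criterion needs work
        print("QMs per criterion:", Counter(e.criterion for e in coded))
        print("Revision outcomes:", Counter(e.status.value for e in coded))

    summarise([CodedError("WW", "Vocabulary", "make a research",
                          "conduct research", RevisionStatus.SUCCESSFUL)])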

Doing the pilot threw up some interesting issues that we will need to keep in mind if we use this approach in our data collection:

  • As there is a group of us rather than just one, there needs to be consistency with regard to what is considered successful and what is considered unsuccessful. E.g. if the student removes a problem word/phrase rather than correcting it, is that successful? If the student corrects the issue identified by the QM but the sentence is still grammatically incorrect, is that successful? The key is that we make a decision as a group and stick to it, as otherwise our data will not be reliable/useful due to inconsistency.
  • We need to beware of making assumptions about what students were thinking when they revised their work. One thing a QM does, regardless of the student’s understanding of the code, is draw their attention to that section of writing and encourage them to focus closely on it. Thus, the revision may go beyond the QM because the student has a different idea of how to express something.
  • It is better to do the text analysis on a piece of writing that you HAVEN’T done the feedback on, as it enables you to be more objective in your analysis.
  • When doing a text analysis based on someone else’s feedback, however, we need to avoid getting sucked into questioning why a teacher has used a particular code and whether or not it was the most effective correction to suggest. These whys and wherefores are a separate study!

Another thing that was discussed was the need to get ethical approval before we can start doing anything. This consists of a 250-word overview of the project, in which we need to state the research aims as well as how we will collect data. As students and teachers will need to consent to the research being done (i.e. to the use of their information), we need to include a blank copy of the consent form we intend to use in our ethical approval application. By submitting that ethical approval form, we will be committing to carrying out the project, so we need to be really sure at this point that this is going to happen. Part of the aim of today’s session, in doing a pilot text analysis, was to give us some idea of what we would be letting ourselves in for!

Interesting times ahead, stay tuned… 🙂

Scholarship Circle: Giving formative feedback on student writing (2.1)

It’s a brand new term (well, sort of, it’s actually the third week of it now!), the second of our four terms here at the college, and today (Monday 21st January, though I won’t be able to publish this post on the same day!) we managed our first scholarship circle session of the term.

For more information about what scholarship circles involve, please look here, and for write-ups of previous scholarship circles, here.

You might also be interested in session 1 / session 2 / session 3 and 4 / session 5-8 / session 9 of this particular circle.

The biggest challenge we faced was remembering where we had got to in the final session BC (Before Christmas!). What were the research questions we had decided on, again? Do we still like them? What was the next step we were supposed to take this term?

Who?

We talked again about which students we wanted to participate – did we want IFY (Foundation) or PMP (Pre-Masters)? We considered the fact that it’s not only linguistic ability which influences response to feedback (our focus) – things like age, study pathway, past learning experiences and educational culture in country of origin will all play their part. Eventually, we decided to focus on IFY students, as with PMPs their coursework may alter dramatically between first and final draft submissions due to feedback from their content tutor, which would affect our ability to do text analysis regarding their response to our first draft feedback. Within the IFY cohort we have decided to focus on the c and d level groups (the two bottom sets, if you will), as these students are most at risk of not progressing, so any data which enables us to refine the feedback we give them and others like them will be valuable.

What?

It is notoriously tricky to pin down a specific focus and design a tool which enables you to collect data that will provide the information you need in order to address that focus. Last term, we identified two research questions:

  1. Do students understand the purpose of feedback and our expectations of them when responding to feedback?
  2. How do students respond to the Quickmarks?

This session, we decided that this was actually too big and have decided to focus on no. 2. Of course, having made that decision (and indeed in the process of making it), we discussed what specifically to focus on. Here are some of the ideas:

  • Recognition – which of the Quickmarks are students able to recognise and identify without further help/guidance?
  • Process – are they using the Quickmarks as intended? (When they don’t recognise one, do they use the guidance provided with it, which appears when you click on the symbol? If they do, do they use the links provided within that information to further inform themselves and equip themselves to address the issue? You might assume students know what the symbols mean, or read the information if they don’t, but anecdotal evidence suggests otherwise – e.g. a student who was given a wrong word class symbol changed the word to a different word rather than changing its class!)
  • Application – do they go on to be able to correct other instances of the error in their work?

Despite our interest in the potential responses, we shelved the following lines of enquiry for the time being:

  • How long do they spend altogether looking at their feedback?
  • How do they split that time between Quickmarks, general comments and copy-pasted criteria?

We are mindful that we only have 6 weeks of sessions this term (and that includes this one!) as this term’s week 10, unlike the final week of last term, is going to be, er, a tad busy! (An extra cohort and 4 exams being done between them vs one cohort and one exam last time round!) As we want to collect data next term, that gives us limited time for preparation.

How?

We are going to collect data in two ways.

Text analysis

We will each look at a first draft and a final draft of an essay from a different student and do a text analysis to find out if the student has applied the Quickmark feedback to the rest of their text. This will involve picking a couple of Quickmarks given to the student in their first draft, identifying and highlighting any other instances of that error type, and then looking at the final draft to find the highlighted errors, so that we can see whether they have been corrected and, if so, whether the correction was successful. A rough sketch of the kind of figure this could produce follows.
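To illustrate, here is a minimal Python sketch of the kind of “applied beyond the QMs” figure this analysis could yield. The error code “ART”, the helper names and the sample data are all invented; it is a sketch of the idea, not a tool we have built:

    from dataclasses import dataclass

    @dataclass
    class ErrorInstance:
        error_type: str   # Quickmark code, e.g. "ART" for an article error (invented)
        flagged: bool     # did the teacher attach a QM to this instance?
        corrected: bool   # was it fixed in the final draft?

    def application_rate(instances, error_type):
        """Share of unflagged instances of an error type that were still corrected,
        i.e. how far the student applied the feedback beyond the QMs themselves."""
        unflagged = [i for i in instances if i.error_type == error_type and not i.flagged]
        return sum(i.corrected for i in unflagged) / len(unflagged) if unflagged else 0.0

    data = [
        ErrorInstance("ART", flagged=True,  corrected=True),   # given a QM
        ErrorInstance("ART", flagged=True,  corrected=True),   # given a QM
        ErrorInstance("ART", flagged=False, corrected=True),   # found by us, fixed anyway
        ErrorInstance("ART", flagged=False, corrected=False),  # found by us, not fixed
        ErrorInstance("ART", flagged=False, corrected=False),  # found by us, not fixed
    ]
    print(f"Applied beyond the QMs: {application_rate(data, 'ART'):.0%}")  # -> 33%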

We are going to have a go at this in our session next week, to practise what we will need to do and agree on the process.

Questionnaire

Designing an effective questionnaire is very difficult and we are still in the very early stages. We are still leaning towards Google Forms as the medium. Key things we need to keep in mind are:

  • How many questions can we realistically expect students to answer? The answer is probably fewer than we think, and this means that we have to be selective in what questions to include.
  • How can we ask the questions most clearly? As well as using graded language, this means thinking about question types – will we use a Likert scale? Tick boxes? Any open questions?
  • How can we ensure that the questions generate useful, relevant data? The data needs to answer the research questions, so again this requires considering different question types and what sort of data they will yield. Additionally, knowing that we need to analyse all the data that we collect in terms of our research question, we might want to avoid open questions, as that data will be more difficult and time-consuming to analyse, interesting though it might be (see the sketch after this list).
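As a side note on analysability: closed question types can be summarised mechanically, whereas open answers each need reading and coding by hand. Here is a small sketch, assuming the Google Forms responses are exported (via Google Sheets) as tabular data – the questions and answers below are invented:

    import pandas as pd

    # Invented sample standing in for a CSV exported from Google Forms
    responses = pd.DataFrame({
        "I understood the Quickmark symbols": ["Agree", "Agree", "Neutral", "Disagree"],
        "Did you click the links in the Quickmark descriptions?": ["Yes", "No", "No", "Yes"],
    })

    # Closed items (Likert scales, tick boxes) tally themselves;
    # open questions would each need reading and coding by hand.
    for question in responses.columns:
        print(question)
        print(responses[question].value_counts().to_string())
        print()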

The questions will obviously relate to the foci we identified earlier – recognition, process and application. One of our jobs for the next couple of sessions is to write our questions. It’s easy(ish!) to talk around what we want to know, but writing clear questions that elicit that information will be significantly more challenging!

Another thing we acknowledged, finally, is that research-wise we are not doing anything that hasn’t been done before, BUT the “newness” comes from doing it in our particular context. And that is absolutely fine! 🙂

Homework: 

Well, those of us who haven’t got round to doing the reading set at the end of the previous session (cough cough) will hopefully manage to finish that. (That was Goldstein, L., “Questions and answers about teacher written commentary and student revision: teachers and students working together”, in the Journal of Second Language Writing, and Ene, E. & Upton, T.A., “Learner uptake of teacher electronic feedback in ESL composition”.) Otherwise, the homework is thinking about possible questions and how to formulate them!

Scholarship Circle: Giving formative feedback on student writing (5-8)

Last time I blamed time and workload for the lack of updates, but this time the reason there is only one post representing four sessions is in part a question of time but more importantly a question of content. This will hopefully make more sense as I go on to explain below!

(For more information about what scholarship circles involve, please look here, and for write-ups of previous scholarship circles, here.

You might also be interested in session 1 / session 2 / session 3 and 4 of this particular circle.)

Session 5 saw us finishing off what we started in Session 4 – i.e. editing the error correction code to make it clearer and more student-friendly. So, nothing to add for that, really! It was what it was – see write-up of Session 4 for an insight.

Sessions 6 and 7 were very interesting – we talked about potential research directions for our scholarship circle. We started with two possibilities. I suggested that we replicate the M.A. research regarding response to feedback that started the whole scholarship circle off and see if the changes we are making have had any effect. At the same time as I had that idea, another of our members brought forward the idea of participating in a study that is going to be carried out by a person who works in the Psychology department at Sheffield University, regarding reflection on feedback and locus of control. What both of these have in common is that they are not mine to talk about in any great depth on a public platform given that one has not yet been published and the other is still in its planning stages.

Session 6

So, in session 6, the M.A. researcher told us, in depth, all about her methodology, since in theory if we were to replicate that study we would be using that methodology; we then also heard about the ideas and tools involved in the Psychology department research. From the former, it was absolutely fascinating to hear how everything was done, and also straightforward enough to identify that replicating that study would take up too much time at critical assessment points when people are already pressed for time: it’s one thing to give up sleeping if you are trying to do your M.A. dissertation to distinction level (congratulations!) but another if you are just working full time and don’t necessarily want to take on that level of workload out of the goodness of your heart! We want to do research, but we also want to be realistic. With regard to the latter, it sounded potentially interesting, but while we heard about the idea, we didn’t see the tools it would involve using until Session 7. The only tool that we contributed was the reflection task that we have newly integrated into our programme, which students have to complete after they receive feedback on the first draft of their assignments.

Session 7

Between Session 6 and 7, we got hold of the tools (emailed to us by the member in touch with the research in the Psychology department) and were able to have a look in advance of Session 7. In Session 7, we discussed the tools (questionnaires) and agreed that while some elements of them were potentially workable and interesting, there were enough issues regarding the content, language and length that it perhaps wasn’t the right direction for us to take after all. The tools had been produced for a different context (first year undergraduate psychology students). We decided that what we needed was to be able to use questionnaires that were geared a) towards our context and students and b) towards finding out what we want to know. We also talked about the aim of our research, as obviously the aim of a piece of research has a big impact on how you go about doing that research. Broadly, we want to better understand our students’ response to feedback and from that be able to adapt what we do with our feedback to be as useful as it possibly can be for the students. We spent some time discussing what kinds of questions might be included in such a questionnaire.

So, at this point, we began the shift away from focusing on those two studies – one existing, complete but unpublished, and one proposed – and towards deciding on our own way forward, which became the focus of Session 8.

Session 8

Between Session 7 and Session 8, our M.A. Researcher sent us an email pointing out that in order to think about what we want to include in our questionnaires, we first need to have a clear idea of what our research questions are. So that was the first thing we discussed.

One fairly important thing that we decided today as part of that discussion about research questions was that it would be better to focus on one thing at a time. So, rather than focusing on all the types of feedback that Turnitin has to offer within one project, this time round we will focus specifically on the Quickmarks (which, of course, we have recently been working on!). Then, next time round we could shift the focus to another aspect. This is in keeping with our recognition of the need to be realistic regarding what we can achieve, so as to avoid setting ourselves up for failure. (I think this is a key thing to bear in mind for anybody wanting to set up a scholarship circle like this!) The questions we decided on were:

  1. Do students understand the purpose of feedback and our expectations of them when responding to feedback?
  2. How do students respond to the Quickmarks?

Questions that got thrown around in the course of this discussion were:

  • Do students prioritise some codes over others? E.g. do they go for the ones they think are more treatable?
  • What codes do students recognise immediately?
  • If they don’t immediately recognise the codes, do they read the descriptions offered?
  • Do they click on the links in the descriptions?
  • Do they do anything with those links after opening them? (One of the students in the M.A. research opened all the links but then never did anything with them!)
  • How much time do they believe they should spend on this feedback?
  • How long are students spending on looking at the feedback in total?
  • How do students split their time between Quickmarks (or “in-text feedback” more broadly, which includes comment bubbles and text-on-text, a.k.a. the “T” option, which some of us hadn’t previously used!), general comments and the grade form?

Of course, these questions will feed into the tool that we go on to design.

We identified that our learner training ideas – e.g. the reflection form, improving the video that introduces students to Turnitin feedback, and developing a task to go with the video in which they answer questions and in so doing create a record of the important information that they can refer back to – can and should be worked on without waiting to do the research. That way, having done what we can to improve things based on our current understanding, we can use the research to highlight any gaps.

We also realised that for the data regarding Quickmarks to be useful, it would be good for it to be specific. So, one thing on our list of things to find out is whether Google Forms would allow us to have an item in which students identify which QMs they were given in their text and then answer questions regarding their attitude to those Quickmarks, how clear they were, etc. Currently we are planning on using Google Forms to collect data, as it is easy to administer and organises the results in a visually useful way. Of course, that decision may change based on whether or not it allows us to do what we want to do.

Lots more to discuss, and hopefully we will be able to squeeze in one more meeting next week (marking week, but only one exam to mark, most unusually! – in a normal marking week, it just would not be possible) before the Christmas holidays begin… we shall see! Overall, I think it will be great to carry out research as a scholarship group and use it to inform what we do (hence my overambitious, as it turns out, initial idea…). Exciting times! 🙂


Using Google+ Communities with classes (2)

All of a sudden we are 5 weeks into term. This week, also known as 5+1 (so as not to get it mixed up with teaching week 6, which is next week), is Learning Conversations week (the closest we get to half term, and only in the September term!) so it seemed a good time to take stock and see how things are going with Google Communities, following my introductory post from many moons ago.

Firstly, it must be said that the situation has changed since I wrote that first post: now, all teachers are required to use GC instead of My Group on MOLE (the university’s brand of Blackboard VLE) because we had trouble setting up groups on MOLE at the start of this term. Nevertheless, I am carrying on with my original plan of reflecting on and evaluating my use of GC with my students because I think it is a valuable thing to do!

In order to evaluate effectively, I wanted to have the students’ perspective as well as my own, so I posted a few evaluative questions in the discussion category of each of my classes’ GC pages.

So, no science involved, no Likert scales, no anonymity, just some basic questions. (The third question was because I thought I might as well get their views on how the lessons are going so far at the same time!) I’m well aware of the limitations of this approach, BUT then again I’m not planning to make any great claims based on the feedback I get, and I’m not after sending a write-up to the ELTJ or anything like that either (I would need all manner of ethical approval to do that!). I did try to frame the questions positively, e.g. “What do you think would improve the way we use GC?” rather than “What don’t you like about GC?”, so that the students wouldn’t feel that responding to the question was a form of criticism and therefore feel inhibited. An added benefit is that it pushes them to be constructive regarding future use rather than just saying how they feel about the current use of it.

Before I go into the responses I’ve had from students, however, it would make sense to summarise how I’ve been using the GCs with them. I recently wrote about GCs for the British Council TeachingEnglish page (soon to be published), and the way I came up with to describe them in that post was “a one-stop shop for everything to do with their [students’] AES classes”, and that is basically what it has become:

[Screenshots: extracts from the Speaking, Writing, Vocabulary and Listening categories of a class Community]

I would say the main use I have made of it is to share materials relating to lessons, mostly in advance of the lessons – TED Talks, newspaper articles etc. – but also useful websites and tools for individual or class use – the AWL highlighter, Quizlet, Vocab.com etc. Finally, it is great for sharing editable links to Google Docs, which we use quite often in class for various writing tasks. Other than these key uses, I have also used it to raise students’ awareness of mental health issues and the mental health services offered to students by the university, during Mental Health Week here (which coincided with World Mental Health Day), and to raise their awareness of the students’ union and what it offers them.

In terms of student feedback, they think it’s “convenient”, “easy to use” and they “enjoy using” it. They also mention the ability to comment on posts (not present with My Group on MOLE) and to communicate outside of the classroom as well as in it. In terms of suggestions for improvement, one student said students should use it to interact more frequently, but that it should be clear which posts are class content and which are sharing/interaction. A couple of students also said they’d like the PowerPoints used in class to be uploaded there. However, those are available on MOLE. The trouble, of course, is that in using GC rather than My Group (which is on MOLE), students are a lot more tuned into GC (which we use all the time) than MOLE. I have no scientific evidence to back this up, but I suspect that, be it academically or personally, if you have to use multiple platforms you tend to gravitate towards one, or some, more than others rather than using them all equally, particularly if time is very limited, as it is for busy students! (I could be wrong – if you know of any relevant studies, let me know!) Unfortunately GC cannot fully replace MOLE, as students need to learn how to use it in preparation for going to university here, and they need to submit coursework assignments to Turnitin via MOLE. Perhaps, then, I need to come up with ways to encourage them to go from one to the other and back, so they don’t forget about ‘the other’…

In terms of future use, I have set up a little experiment: as part of the Learning Conversations that are taking place this week, we have to decide on Smart Actions that the students are supposed to carry out. E.g.


Go to Useful Websites on MOLE and explore the ‘Learning Vocabulary’ websites available. Tell your teacher which websites you visited and what you learnt from them by the final AES lesson of Week 6.

Some of them, like the one above, lend themselves to posting on GC. In this way, not only do students tell me what they have learnt but they also share that learning with the rest of their classmates. So, in their Learning Conversations, whenever the Smart Action(s) were amenable to this plan, I have been encouraging students to use GC to communicate the outcome to me and share the learning with the rest of the class. It will be interesting to see whether they do post their findings! Another idea I’ve had is to do something along the lines of “academic words of the week”, where I provide a few choice academic words along with definitions, collocations, examples of use and a little activity that gives them a bit of practice using them, and get them to also make a Quizlet vocabulary set collaboratively (I have a Quizlet class set up for each class). Then perhaps every couple of weeks we could do an in-class vocabulary review activity to see what they can remember.

Finally, it seems to me that Monday, being the first day of the second half of the term, is a crucial opportunity to build on student feedback by getting them to discuss ways in which we could use the GC for more interactive activities and find out what they’d be interested in having me share other than class-related materials and the occasional forays into awareness-raising that I have attempted. The key thing that I want them to take away is that I want the GC to work for them and that I am very much open to ideas from them as to how that should be, so that it becomes a collaborative venture rather than a teacher-dominated one.

We shall see what the next five weeks hold… Do you have any other ideas for how I could use GCs more effectively? Would love to hear them if you do!


Scholarship Circle: Giving formative feedback on student writing (3+4)

Time and workload have dictated that I combine two weekly scholarship sessions into one post, so this “double digest” is my write-up of sessions 3 and 4.

(For more information about what scholarship circles involve, please look here, and for write-ups of previous scholarship circles, here.

You might also be interested in session 1 and session 2 of this particular circle.)

Session 3

In session 3, we started by discussing the type of feedback we give students on their coursework. In CW1 (an essay outline), we give them structural feedback as well as pointing out where sources are insufficiently paraphrased, while in CW3 they get structural feedback and language feedback using the error correction code. We also talked more about direct feedback. We questioned where the line between direct feedback and collusion lies, and decided that it’s OK for students to use teacher feedback to improve their work, but that if they hired another tutor to correct their work, it would be collusion. We also came to the conclusion that direct feedback can be useful for certain things and that you could use it to scaffold learners, e.g.: in the first instance of a mistake, provide the correct form as a model; in the second instance, provide the start of the correct form; in the third instance, just highlight the type of mistake and let the learner correct it by themselves, using the previous instances and feedback to help them. If there are any further instances of that mistake type, indicate to learners that they need to find and correct them. (A toy sketch of this escalation is below.)
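Purely to make that escalation explicit, here is a toy Python sketch of the scaffolding logic. The error code “VF” and the example strings are invented; this illustrates the idea rather than anything we have implemented:

    def scaffolded_feedback(occurrence, correct_form, error_code):
        """Feedback escalates as the same mistake type recurs (occurrence is 1-based)."""
        if occurrence == 1:
            return f"Model: '{correct_form}'"                # full correction as a model
        if occurrence == 2:
            return f"Starts: '{correct_form.split()[0]} …'"  # start of the correct form
        if occurrence == 3:
            return f"{error_code}: correct this yourself"    # error type only
        return "Find and correct any further instances of this error type yourself"

    for n in range(1, 5):
        print(n, scaffolded_feedback(n, "has been shown", "VF"))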

We also talked more about this issue of correcting mistakes beyond those pointed out by the teacher, i.e. proofreading work for more instances of the same mistake. In our experience, it frequently does not happen. In the masters research done by one of our number, the main reasons for that, given by the students when they were asked, were:

  • the belief that no comments = no mistakes
  • not knowing how to find/correct mistakes

However, with regard to the quick marks (i.e. the error correction code on Turnitin), for the students who participated in the study, 80-100% of quick marks resulted in successful revisions. Thus, on the whole, mistakes are only corrected when they are pointed out. This brought us back to the question of proofreading and learner training, which we had touched on in previous sessions, identifying it as a definite need.

We acknowledged that we expect proofreading but that it doesn’t happen. This is partly because our learners are not used to it – they are used to having all errors pointed out to them. In some cases, as with one of the participants in the M.A. study, learners are not able to identify mistakes. In that case, the ideal would be helping those learners to find and correct the errors they ARE able to deal with at their level. We decided that in order to help learners in both cases, more proofreading-related lessons are needed. They already have “Grammar Guru”, an online interactive grammar tutoring tool, within which are activities that prompt proofreading for mistakes related to the specific focus of a given tutorial, e.g. articles.

However, the only time they do this with their own work is with CW3, so we wondered if there would be scope for using work produced for writing exam practice as the basis for proofreading activities too.

We also looked at two tools for encouraging students to engage with their feedback:

1. A Google Form, adapted from something similar used at Nottingham Trent, which encourages students to find examples of particular mistakes in their text, correct them, and make a note of the materials used in order to make each correction:

The idea is that students complete it between receiving their feedback and attending their tutorial, so that during the tutorial the tutor can, amongst other things, check their corrections and suggest alternative sources.

2. A form for students to complete that pushes them to reflect on their feedback:

As with the first one, this is intended to be completed between receiving the feedback on Turnitin and attending the tutorial, thus making the tutorial more effective than the common scenario where the student comes in not having even opened the feedback. We also wondered about the possibility of combining the two – in other words, combining focused error identification and correction with reflection on other aspects of the feedback.

Session 4

This week, in session 4, we mainly focused on the error correction code that we use. We looked at each symbol and accompanying notes, firstly deciding if it was a necessary one to keep and then refining it. The code, used on Turnitin, works as follows: We highlight mistakes and attach symbols to them. When the student subsequently looks at their text, they see the symbols and then when they click on the symbol, the accompanying notes appear. Our notes include, depending on the mistake, an explanation of the mistake, examples of incorrect use and corrected use, and links to sources that students can use to help them to learn more about the language point in question. Here is an example:

We paid particular attention to the clarity of the language used in the accompanying notes, getting rid of anything unnecessary e.g. modals, repetition etc, and the links provided to help students. The code also exists in GoogleDoc format so we all had Chromebooks out and were working on it collaboratively. There are a lot of symbols and there was plenty to say, so actually we only got as far as “C”!! (They are ordered alphabetically….!) This job will continue in the next session, which will be the week after next, as next week we have Learning Conversations which are off timetable so our availability is very different from normal.

I would be interested to hear what approaches you use where you work in terms of error correction, codes, proofreading training, pre-tutorial requirements, engaging learners with feedback and so on. Please do share any thoughts using the comments box below… 🙂

Scholarship Circle: Giving formative feedback on student writing (2)

Before we had time to turn around twice, Tuesday rolled around again, and with it our weekly scholarship circle meeting, with its name and focus of “Giving formative feedback on student writing”. (For more information about what scholarship circles involve, please look here; for write-ups of previous scholarship circles, here; and to see what we discussed last week – in session 1 of this circle – here.)

A week is a short turnaround time, but a number (9, in fact!) of eager beavers, who’d all managed to read the article “Sugaring the Pill: Praise and criticism in written feedback” by Fiona Hyland and Ken Hyland in the Journal of Second Language Writing, turned up to discuss it and relate it to our context. The article is, in its own words, “a detailed text analysis of the written feedback given by two teachers to ESL students over a complete proficiency course”. The authors categorise all the feedback by function – namely praise, criticism and suggestions – and analyse it accordingly. It’s a very interesting and thought-provoking article. However, the purpose of this post is not to summarise the article but rather the discussion which arose from it. This is not as easy a task as it might sound!

Praise

We started by talking about praise. Something we found interesting, in both the article and a similar piece of research done for a masters dissertation by one of our number, was that the students in these studies were able to identify when praise was insincere/formulaic/there for the sake of being there. (Here we are talking about the general comments at the end of a text rather than specific in-text comments.) Additionally, in terms of general end-of-text comments, students who receive substantial formulaic praise may automatically mentally downgrade it, particularly if the balance of feedback overall is in favour of praise, i.e. more positive comments than suggestions for improvement. In connection with this, students were also found not to believe positive general comments if they did not reflect the in-text feedback, which, being more directly connected to the text, held more weight for them. Finally, both the article and the masters research highlighted the danger of the suggestions for improvement in a praise-criticism sandwich being ignored/missed by a student, and the danger of hedged comments (e.g. using modals) being misunderstood.

Another aspect of feedback which we thought might lead to misunderstanding is our feedback guidelines here at the college, which stipulate that in our general comments we should include 3 positive points and 3 areas to work on. We discussed the possibility that this might be (mis)interpreted by students to mean that the piece of writing was good and in need of improvement in equal measure, when in fact that may not be the case. We also discussed the importance of framing the negative points as suggestions rather than criticism, as well as of avoiding hedging and the aforementioned dangers of miscommunication that may go with it:

Compare

“Your writing does not have enough linkers so it is confusing” (highlighting a negative)

with:

“You should include more linkers in your work to make it clearer” (making a suggestion for improvement)

This would, in turn, be easier to understand for a student than:

“I wonder if you could include more linkers in this paragraph? This might help the reader.” (hedged)

or:

“This is a good introduction with a clear thesis statement and scope, however, you need to look at coherence. Go back to …. and consider… . I think you could also benefit from having a look at…  …it is quite advanced but I think you are ready to take your AW to the next level!” (Praise-criticism sandwich: the student in question ignored all the suggestions because the teacher had said it was good so they didn’t feel the need to make any changes!) 

Of course, as discussed in the journal article, teachers do use phrases such as “I wonder if” and questions rather than direct instructions, to avoid appropriation of the piece of work and also to avoid being overly authoritative, in order to meet what Hyland and Hyland describe as the “interpersonal goal” aspect of feedback (in contrast with pedagogic and informational goals). Our conclusion, based on the masters findings, our experience and the journal article, was that teachers possibly worry too much about being polite in their feedback, which ends up confusing the student more than anything else. As here:

When the message gets lost…

Still relating to praise, we agreed that it is most effective when specific, i.e. when it directly highlights something in the text that the student is doing well – a view supported by the article and the masters research. Carrying this over to general end-of-text comments, we wondered whether ‘repeating’ what you have said in specific in-text comments (which I admitted to doing quite a bit, hence raising the issue), whether positive or negative, might actually reinforce the importance of the in-text comments in question and make the general comments more personalised/less formulaic, rather than being redundant or otherwise negative.

Finally, one issue I raised was that on Turnitin, if all the in-text comments (both positive and negative – “negative” obviously including suggestions for improvement, not just criticisms) are highlighted in a single colour, a student might look at that and assume their essay was terrible because of the quantity of highlighting. I wondered if using different colours of highlighting for positive and negative would alleviate that situation. However, it was also put forward that it might be even worse if students knew that code and had very few things highlighted in the positive colour!

Improving feedback

As well as identifying the potential issues with praise discussed above, we also discussed possible solutions:

Reframing general comments

We agreed that:

  • short, personalised comments would be most useful, to avoid misunderstandings and identifiable insincerity. (Our comments bank – a Google Doc of generic comments – does not currently fit this bill.)
  • in Turnitin we could make more use of the “T” option (which sits alongside the QM and comment bubble options and which most of us were unaware of!). This allows you to write directly on the text in ‘blue ink’, which might be more personalised and allow more flexibility than the general comments in the comments box. It might also allow for less in-text highlighting for comment bubbles.
  • having a one-size-fits-all guideline of “3 positive things and 3 ‘negative’/to improve” is problematic, as students are all different (though if you have 60+ students’ work to look at in a short space of time, is carefully tailored, individualised feedback realistically feasible?)

Learner Training

We decided that learner training was crucial for enabling students to make full use of the feedback, and therefore for making it worth our time and theirs. Firstly, for in-text comments to be truly useful, it was suggested that we need to explicitly train students to look for further examples of the mistakes we highlight using the Quickmarks (i.e. the error correction code), as otherwise they will correct what we highlight but won’t automatically apply it to the rest of their text. Perhaps part of learner training would be to train them towards the point where they can do that without being continually prompted in comments or tutorials. We also considered the need to recognise and differentiate between “treatable” errors (e.g. articles – there are rules that can be followed) and “non-treatable” errors (e.g. word choice), and to give appropriate feedback for each. For non-treatable errors, direct feedback, i.e. giving students the correction, is better, while for treatable errors we can use indirect feedback, i.e. identifying the error and asking students to correct it themselves, using clues such as error correction coding. Currently, most of our feedback is indirect, so this is something we may need to reconsider.

Another aspect of learner training that we discussed was how to train learners to make the most of their very brief (10-15 minute) tutorials. For these tutorials to be truly beneficial, we agreed that it was imperative for students to look at their feedback BEFORE coming to the tutorial. In fact, they need not only to look at it but also to attempt to respond to it, so that during the tutorial the tutor can check their attempts and help them with the areas they were unable to address independently. We wondered about using a pre-tutorial sheet to encourage them to do this – something that they would need to engage with the feedback in order to complete. A couple of teachers have already experimented with this kind of thing, with encouraging results, so it is worth looking into.

All in all, we managed to discuss a lot in an hour – or just over, as we lost track of time! (You know it’s a good scholarship circle when the participants just can’t drag themselves away at the end! I think the reason this scholarship circle is going so well is that it has a very specific focus and it is one that is equally important to all of us.)

Homework for next week: to read a chapter by Dana Ferris called “Does error feedback help student writers? New evidence on the short- and long-term effects of written error correction” in a book published by Cambridge University Press called Feedback in Second Language Writing (edited by the same Hylands who wrote last week’s article!). Just from the title I am very curious about what Ferris will say, but I won’t have time to find out till at least the weekend!

Feel free to join in the discussion by commenting on this post! 🙂


Using Google+ Communities with classes (1)

Have you used Google+ communities before in any capacity? I hadn’t, until another ADoS colleague and I were asked to pilot it with our classes this term to see if it’s something we want to roll out for all teachers next term. As it is new to me, I have decided to blog periodically about it (probably this post as the initial setting-up/first-impressions/early-days post, one or at most two posts during the term, and a post at the end of term evaluating my use of it), primarily as a means of reflecting on and developing my use but also as a means of memory outsourcing, so that I can refer back to these posts when it comes time to give feedback about it!

In the past I have used other platforms with students, e.g. Edmodo, Google Classroom, WordPress blogs and, more recently, MOLE, which is the University’s branded Blackboard VLE. We still have MOLE (and will very much still be using it for lesson materials, assessment submission etc.) but the Google Community is to replace the “My Group” folder we have on it, which was for sharing additional resources and information like tutorial timetables with students.

Setting up/first impressions:

A Google+ community is basically a.n.other social media platform. Communities can be public or private; you can ask to join existing public ones or create your own and invite people to join you. When setting up, you give it a name and boom, it exists. You can then edit it to give it a tagline (if you want to), a ‘banner’ picture (I used a picture of Sheffield University for obvious reasons) and categories. The categories are used for organising posts. The category it comes with is “Discussion”. To this, I have added a category for each of the four skills and a category for vocabulary. So far that is enough, but later I can add more categories as the need arises (it’s a continuously editable set-up rather than a one-off fixed set-up).

To “register” students, you need to “invite” them. This can be done via a link or via Google+. As far as I understand, you need to be registered with Google in some way, i.e. have Google+/Gmail, in order to join. Our students all have a university email address which is a Gmail account and can access G+, so that isn’t a problem for us. However, it is something to bear in mind for situations where students may use a range of email providers and may not be registered with any Google app/product. I have access to all my students’ university email addresses via our system, so I sent them an email inviting them to join, using a link generated by “invite a member” in the community I had set up for their class:

I sent it during the lesson just before I was going to introduce the G+ community, so that I could monitor while they joined and make sure everybody was able to do so etc. Prior to the lesson I added some content so that the students could immediately have a taste of how we would be using it, including a post with a question for them to answer by commenting on it (to highlight that it is their space as well as mine – supported by the set-up email, which also encourages them to write a post of their own, which some of them did) and a link to a TED Talk that I was going to set for homework at the end of that lesson.

Other things I’ve posted so far are a link to a Google Doc that we will be using in class next week, a link to the OALD, and links to some articles and another TED Talk which I want them to read and watch in advance of one of next week’s lessons. I could have waited and posted these next week, but I wanted the community not to be empty when the students registered, in the hope that they will engage more if they immediately see the use/relevance of the platform. We shall see…

What I like about it so far (admittedly it’s early days, but…): 

  • It’s pretty! (I think so anyway…) I like that it looks nice, which is also helped by…
  • The ability to ‘categorise’ content, so it is easier to direct students to it, and if they want to go back to something, it’s much easier to look by category than to scroll through a single stream of posts.
  • It’s an easy way to share links to Google Docs for them to use in class for collaborative tasks (which we will do quite a bit of)
  • It’s easy to set up and ‘invite’ students
  • It’s easier to use than Blackboard, fewer steps to go through to share materials.

One thing I would like it to have that it doesn’t is the ability to schedule posts to appear at a given time in the future. I know Google Classroom has that function, amongst others, but having used Google Classroom previously, I otherwise prefer the Google+ Community. To deal with not being able to schedule posts, I have a sticky note on my computer desktop which is dedicated to reminders about when to post this, that or the other link or file. This is what it looks like so far:

Hopefully that will help me keep on top of things!

Looking ahead:

One thing I want to continue working on immediately is getting students to engage with the platform so that it can become more than just a repository for information and links. I’m thinking, for starters, a weekly discussion thread relating to something relevant to them and their lives as prospective Sheffield University students. Off the top of my head, I think it would be useful to raise awareness of the student mental health services at the university, the students’ union with all its clubs and societies, and the library with all the services it offers, but also to have discussions that capitalise on the range of nationalities represented (in one of my classes at least, which is very multicultural). I think I also need to revisit the posts I wrote about using Edmodo, as some of the ideas there will be usable/adaptable too, even though the context of use is very different!

Overall, I think it has a lot of potential and I am looking forward to trying to tap into that this term. Watch this space!