What does an ADoS do?

Following Sandy’s post about a busy week in her life as DoS of IH Bydgoszcz in Poland, which I found very interesting, and after attending a Learning and Teaching Professional Scheme introductory meeting, where I learnt that to become an SFHEA one of the things I need to do is write a personal statement about who I am and what I do here at the university, I was inspired to write a bit about what I do as an ADoS in Sheffield University ELTC’s USIC arm. So here it is! This is what an ADoS does!

(Caveat: every ADoS position is different and depends on the type and size of the institution, as well as institutional requirements – this post is just about what an ADoS does here, where I am – aka what I do! Perhaps the title should be “What does *this* ADoS do?”!)

  • I teach. (Yay!) Currently 6hrs per week plus 3-4 WASs (1hr Writing Advisory Service appointments); as of next week, 9hrs per week plus 1 WAS. Along with that, of course, comes all the usual planning, prepping, marking and admin. I am also timetabled 6hrs of cover slots per week.
  • I write meeting notes. Well, I co-write meeting notes with my fellow January ADoS. (At this point, I should explain – I am ADoS for the January Foundation cohort of students. We currently have 4 cohorts of students – September Foundation and Pre-Masters, and January Foundation and Pre-Masters – but will go up to 5 in April. The April lot is always smaller, so though it also contains a mixture of Foundation and Pre-Masters students, they are counted as one cohort.) We do this using Google Docs and share them with our teachers towards the end of one week, ready for the meeting at the start of the next week. This means that teachers have a written record to refer back to without having to write copious notes on a scrap of paper that then gets lost or something! We give them a print-out in the meeting, so they can write down anything extra that comes up/anything that wasn’t clear to them that they asked about etc.
  • I run…co-run…weekly module meetings (in previous terms we did the meetings independently, but this term about 95% of our teachers are teaching both January cohorts so it made sense to combine them; this may revert to separate meetings next term, depending on timetables and teachers!). These meetings are about what has to happen in the immediate future and about next week’s lessons. (So, as ADoSes, say it’s week 5, we write meeting notes for week 6’s meeting in which we are talking about week 7 lessons!)
  • I make materials. Last term, that included materials for the workbook, as we adapted some lessons based on teacher feedback and student response from previous use of them. This involves not just creating the new materials and putting them into the workbook but also updating the PowerPoints, teachers’ notes and student worksheets that live in our shared drive resources folder, so that everything matches up to the changes that have been made. Examples this term include independent listening development materials, and in-class or self-study materials for using www.wordandphrase.info/academic. (Here I have linked to copies of the materials in my personal Google Drive so that you can see them, but the originals live on my work Google Drive and are set to be usable only by people with sheffield.ac.uk email addresses.)
  • Relating to the above, I seek feedback on the materials so that I can improve them for the next time around.
  • I make sure the tracker is up to date and correct. The tracker is an Excel spreadsheet with marks and progression rating colours for all students; there is a separate tracker for each cohort. This involves inputting data (e.g. the diagnostic test results), reminding teachers when data they are responsible for needs inputting, helping teachers when they have trouble inputting data, correcting mistakes with student information (e.g. when a student changes groups due to changing pathway), and fixing things when random problems occur, like a student ending up with two lines that correspond to their name/number but with non-identical scores (cue checking scripts to work out which is the correct row and deleting the other; there is a little illustration of this after the list). I have learnt what a VLOOKUP is and what filters are. Either way, we hate the tracker… 😉
  • I make sure all the other admin happens when it is meant to. This includes transferring progression colours from the tracker to the student management system at certain points, and generating learning conversation documents (even if we don’t actually have the conversations, as this term, the data is needed so that academic success tutors can discuss it with students). This term the document generation has been mostly automated, but teachers still need to select SMART targets in a Google sheet and copy and paste the resultant data from Google Docs to a certain spreadsheet that will then be used for a mail merge, and suchlike. Teachers need to be told it’s coming up, taught how to do it (in the case of new teachers), supported through it (i.e. troubleshooting if/when they struggle), and we have to check everything at the end to make sure all is in order.
  • I deal with unforeseen situations that come up e.g. a teacher being off sick for longer than a day or two when there is a tight marking deadline and other admin too – between us the ADoSes have to cover that teacher’s marking and admin.
  • I make sure everything is ready for assessments. This includes sending mock tests/seminar discussion exam sheets/etc off to be printed well in advance of when the assessment will take place (printing has a two week turnaround and may take longer in busy periods), setting up Turnitin buttons on MOLE, putting coursework templates on MOLE, doing summative assessment papers myself as part of pre-standardisation etc.
  • I am the first point of contact when teachers have any questions, problems or issues with January IFY students and teaching (and basically anything relating to anything they have to do here, e.g. the admin and the tools used to do it). This is mostly done in person, in the staffroom, but also involves emails. Where relevant, we then liaise with the person or people who need to be involved in resolving the issue. Otherwise, we offer support/guidance as necessary. The main skills this requires are patience, supportiveness and the ability to be interrupted, provide the help needed and seamlessly pick up the thread of what you were doing when the help was needed! I am currently trying to devise a way of providing more support to new teachers than what we currently do – watch this space!
  • I run…co-run…standardisation for all summative assessments. This involves us marking several samples of a given assessment, rationalising our scores (which are informed by the centre-level standardisation that Studygroup centres do), agreeing together what the official scores are and then getting teachers to do the same. With exam marking standardisation, we are all in a big room while the teachers look at and mark the samples, and the discussion follows directly; once complete, marking commences. With coursework, we send out the samples in advance of a given weekly meeting and in that meeting share and discuss scores. We also have to do this for the speaking exams (the seminar discussion and the presentation), both of which are done by sending out recordings in advance for teachers to watch and grade, after which scores are discussed as with the written exams.
  • I double mark speaking exams. In order to increase reliability, we double-mark a couple of groups (seminar discussion) or a few students (individual presentations) with each teacher.
  • I sign marks off and prepare module boxes. Once all marks have been inputted into the spreadsheet and the student management system, everything needs double-checking. Errors get picked up and corrected, and then, when everything is in order, we sign off the marks for a given cohort for a given exam. The paperwork goes into the module box along with some samples of high, medium and low-scoring papers and evidence of standardisation. The resultant module box is stored ready to be audited by the external examiner when s/he pays a visit, so it is important that everything is in order.
  • I randomly spot-check first draft feedback on coursework to make sure we as a team are being consistent in the amount and quality of feedback that is given, and advise where any changes/tweaks are necessary.
  • I do naughty student meetings. These meetings are 1-1 with the student and their teacher, and are held when students plagiarise in the first draft of their coursework. The idea is to find out what’s gone on and why, and to ensure that it will be addressed before the piece of work is finally submitted. (Otherwise, the student will have to go to a misconduct panel hearing, and that makes more paperwork for us and more stress for the student!)
  • I prepare academic misconduct case paperwork. If a student’s final draft submission has high levels of plagiarism or it is clear they have received help because the work submitted is too far above their normal level, we need to prepare paperwork for academic misconduct panel hearings. This mostly involves filling in forms and providing evidence (past pieces of written work, which necessitates digitised work folders, which we also set up for teachers to use).
  • I invigilate listening exams. Mostly Studygroup provide invigilators for exams but our listening exams are complicated enough that we provide a chief invigilator per exam room. Generally that’s around 4 chief invigilators per exam. One of those things that is terrifying the first time you do it and then subsequently you wonder what all the fuss was about!
  • I send next term’s workbook off to the printers. Each term, at some point sufficiently in advance of the end of term, next term’s workbook has to be sent off to print. This involves making any changes that have been flagged up, altering or replacing lessons, proofreading, editing, checking formatting hasn’t altered, and sometimes throwing in an alternative syllabus at the last minute because we have been told that, due to timetabling, we will have to deliver a 2hr-1hr-2hr pattern as well as the default 2-2-1 delivery pattern. That kind of thing.
  • I am supposed to do 3hrs of CPD a week, but it often gets relegated to the weekend, other than an hour of scholarship circle most weeks (unless stuff comes up which needs dealing with pronto, in which case that takes priority!).
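(A wee illustration of the tracker bullet above, for anyone curious: the kind of duplicate-row check that the filters and VLOOKUPs get used for. This is just a minimal Python sketch with made-up file and column names, not anything we actually run; in practice it’s done by eye in Excel, but the logic is the same.)

```python
# Minimal sketch of the "student with two non-identical rows" tracker problem.
# The file name and column names are invented for illustration.
import csv
from collections import defaultdict

rows_by_student = defaultdict(list)

with open("tracker_export.csv", newline="") as f:  # hypothetical CSV export of the tracker
    for row in csv.DictReader(f):
        rows_by_student[row["student_number"]].append(row)

for student, rows in rows_by_student.items():
    scores = {row["diagnostic_score"] for row in rows}
    if len(rows) > 1 and len(scores) > 1:
        # Two lines for one student but different scores: cue checking scripts!
        print(f"{student}: {len(rows)} rows, conflicting scores {sorted(scores)}")
```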

So that’s the kind of thing (there is more, but that is all I can think of for now!)… except rather than “I”, it’s “we”, really! Each of the five cohorts mentioned towards the start of this post (bullet point two) has an ADoS, and together we are a team. Within that, some of us also operate in sub-teams: I am part of Team Jan ADoS, and the two September ADoSes work together closely too. For me, the teamwork aspect is the best part of it! We bounce off each other, we support each other, between us we have more brains to cope with remembering everything that has to be done, we commiserate with each other (when the tracker plays up, for example!), we help each other out when there’s lots to be done (e.g. covering the sick teacher’s marking and admin: we all took on some of it and between us got it done) and so on.

I like my job, when it isn’t driving me crazy 😉 If you have ADoSes where you work, what similarities and differences are there between my ADoS role and those where you are?

Another and final question I want to leave you with: how do you support new teachers where you work? I will be interested to hear any replies… please comment!

WAS (Writing Advisory Service) at Sheffield University ELTC

Amongst many other things (e.g. pre-sessional programmes, general English classes, IELTS and CAE preparation, foundation programmes, in-department support, in-sessional programmes and credit-bearing modules), the ELTC also provides a Writing Advisory Service (WAS) to all students studying at the University of Sheffield. It is not only international students who use this service; home students use it too. In terms of levels, we get a mixture of bachelors students, masters students, PhD students and lifelong learning students. This post is going to talk a bit about what a WAS appointment offers and my experience of doing them.

What is “a WAS”?

It is a writing advisory service appointment which lasts for one hour. Any student studying at the university can book an appointment. Teachers are timetabled WAS slots and these appear on our timetabling system. When a student books an appointment, we are able to access their information by logging in to this system and clicking on the relevant slot. In advance of the appointment, we are able to see a student’s name, their department and course, their nationality and an appointment history. So, if students have been before, we can see a record of what they brought (i.e. what type of writing) and what advice they were given. If it is their first appointment, then obviously this part will be blank. These are not “our” students; in most cases you see a different student every appointment. Occasionally you get needy students who try to book the same tutor every time, but this is discouraged as we don’t want to encourage over-dependence on a particular person.

How does it work?

Students have to report to reception so that reception can mark them as attended, which unlocks the appointment history so that we are able to edit it. As teachers, we have to be at reception just before the session is due to start, to meet the student and take them to the allocated room, which always has a computer in it. Students have to bring a printout of whatever piece of writing they want help with. We are not expected to read stuff on screen, thankfully! Before I look at the piece of writing, I ask the student about it (What is it? What problems do they think they have with it? Is there anything in particular they want me to look at, e.g. structure, referencing?) so that I have a context to start from. Then the student has to sit and wait while I read through their writing and identify issues with it.

Once I have had a chance to look through the piece of writing, what follows is a discussion of it with the student. Generally I focus on structural issues first – so problems with the introduction, thesis statement, paragraph topic and concluding sentences, and the conclusion. Next would be other aspects of cohesion like linking language, demonstratives and catch-all nouns, lexical chains, etc. Then issues of academic style, e.g. formality/appropriate vocabulary and referencing. Finally, I’ll pick out a few persistent grammar issues to discuss. The idea is that it’s NOT a proofreading service; it’s an opportunity for students to learn how to write better, based on a piece of their writing. Therefore, ideally, we need to equip students to deal with their issues independently. One way of doing this, for example, is modelling how to use www.wordandphrase.info/academic to answer questions about which word to use and how to use it. We also direct them to various websites such as the Manchester Phrasebank.

The final stage of the appointment is writing it up in the student’s appointment record notes, which the student has access to. The notes are written to the student, as they are for the student to refer back to, rather than being written in lesson record style. I usually get the student to tell me what we’ve talked about, as a way to reinforce what we have done, and write that into their records, pasting in any links we have used in the course of the session too.

There is a recording which lives on the Writing Advisory Service web page. Students can watch it in advance of their appointment, in order to know what to expect.

My experience of WASs

  • It’s not uncommon to get a no-show! Students are encouraged to cancel in advance if they can’t make it but sometimes that doesn’t happen. They may get caught up in whatever else they are doing or forget they made the appointment etc. Repeat offenders get banned from making appointments for a period of time.
  • When students do show up (which is most of the time, to be fair!), they are very enthusiastic and appreciative. They want to do well in whatever assignment it is they are working on and recognise that what you are discussing with them can help them with this.
  • The first one you ever do is terrifying and difficult, but as with so many things, with experience it gets much easier. You learn what to look out for and how to help students get to grips with those issues. You learn not to be daunted by whatever is put in front of you, however obscure it may seem at first.
  • Because I teach EAP generally, it’s easy to pick out materials from our electronic stores of them, to illustrate what I am trying to explain to students. This is very helpful!
  • You get to see a wide range of different types of writing from different subjects. It can be a bit scary to be faced with an essay full of legalese, especially if you are a bit tired anyway (as with my slot last thing on a Friday!), but you get used to looking beyond the subject specific stuff (which we aren’t expected to be experts on!).
  • They are enjoyable! It’s a bit of a faff because my colleagues and I are based in a different building from where the appointments happen, so though it’s an hour’s appointment, with the walking there and back etc. it’s nearer an hour and a half of time gone, but once you’re there and doing it, the hour flies!

Do you have anything like this where you work? How does it work?

Scholarship Circle: Giving formative feedback on student writing (2.2)

For more information about what scholarship circles involve, please look here and for write-ups of previous scholarship circles, here

You might also be interested in session 1 / session 2 / session 3 and 4 / session 5-8 / session 9 / session 2.2 of this particular circle.

In this week’s session of the scholarship circle, we started by doing a pilot text analysis. In order to do this, we needed a first draft and a final draft of a piece of CW3 essay coursework and a method of analysis. Here is what it looked like:

So…

  •  QM code refers to the error correction code and there we had to note down the symbol given to each mistake in the first draft.
  • Focus/criterion refers to the marking criteria we use to assess the essay. There are five criteria – Task achievement (core elements and supported position), Organisation (cohesive lexis and meta-structures), Grammar (range and accuracy), Vocabulary (range and accuracy) and Academic conventions (presentation of source content and citations/references). Each QM can be assigned a criterion to attach to, so that when the student looks at the criteria-based feedback, it also shows them how many QMs they have attached to each criterion. The more QMs there are, the more that criterion needs work!
  • Error in first draft and Revision in final draft require exact copying from the student’s work unless they have removed the word/s that prompted the QM code.

Revision status is where the method comes in. Ours, shared with us by our M.A. researcher (whose project our scholarship circle was born out of), is based on Storch and Wigglesworth. Errors are assigned a status as follows (a rough sketch of what an analysed record might look like follows the list):

  • Successful: the revision made has corrected the problem
  • Unsuccessful: the revision made has not corrected the problem
  • Unverifiable: the QM was wrongly used by the teacher, so either the student has made an incorrect change to the final draft based on that QM, or has made no change because none was in reality required
  • Unattempted: the QM is correctly used but the student does not make any change in the final draft.
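To make the analysis record concrete, here is a rough sketch of what one analysed error might look like and how the statuses and criteria could be tallied. This is purely illustrative (the QM codes, example errors and field names are invented); our actual analysis lives in a shared spreadsheet, not in code.

```python
# Illustrative only: one record per QuickMark found in the first draft.
from collections import Counter

STATUSES = {"successful", "unsuccessful", "unverifiable", "unattempted"}

analysis = [
    {"qm": "WW", "criterion": "Vocabulary",  # invented code/criterion pairing
     "error": "make a research", "revision": "do research", "status": "successful"},
    {"qm": "T", "criterion": "Grammar",
     "error": "has went", "revision": "has went", "status": "unattempted"},
]

# Guard against typos in the status column.
assert all(record["status"] in STATUSES for record in analysis)

print(Counter(r["status"] for r in analysis))     # how revisions fared overall
print(Counter(r["criterion"] for r in analysis))  # which criteria attract most QMs
```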

Doing the pilot threw up some interesting issues that we will need to keep in mind if we use this approach in our data collection:

  • As we are a group rather than a single analyst, there needs to be consistency with regards to what is considered successful and what is considered unsuccessful. E.g. if the student removes a problem word/phrase rather than correcting it, is that successful? If the student corrects the issue identified by the QM but the sentence is still grammatically incorrect, is that successful? The key here is that we make a decision as a group and stick by it, as otherwise our data will not be reliable/useful due to inconsistency.
  • We need to beware of making assumptions about what students were thinking when they revised their work. One thing a QM does, regardless of the student’s understanding of the code, is draw their attention to that section of writing and encourage them to focus closely on it. Thus, the revision may go beyond the QM, as the student has a different idea of how to express something.
  • It is better to do the text analysis on a piece of writing that you HAVEN’T done the feedback on, as it enables you to be more objective in your analysis.
  • When doing a text analysis based on someone else’s feedback, however, we need to avoid getting sucked into questioning why a teacher has used a particular code and whether or not it was the most effective correction to suggest. These whys and wherefores are a separate study!

Another thing that was discussed was the need to get ethical approval before we can start doing anything. This consists of a 250-word overview of the project, and we need to state the research aims as well as how we will collect data. As students and teachers will need to consent to the research being done (i.e. to the use of their information), we need to include a blank copy of the consent form we intend to use in our ethical approval application. By submitting that ethical approval form, we will be committing to carrying out the project, so we need to be really sure at this point that this is going to happen. Part of the aim of today’s session, in doing a pilot text analysis, was to give us some idea of what we would be letting ourselves in for!

Interesting times ahead, stay tuned… 🙂

Scholarship Circle: Giving formative feedback on student writing (2.1)

It’s a brand new term (well, sort of, it’s actually the third week of it now!), the second of our four terms here at the college, and today (Monday 21st January, though I won’t be able to publish this post on the same day!) we managed our first scholarship circle session of the term.

For more information about what scholarship circles involve, please look here and for write-ups of previous scholarship circles, here

You might also be interested in session 1 / session 2 / session 3 and 4 / session 5-8 / session 9 of this particular circle.

The biggest challenge we faced was remembering where we had got to in the final session BC (Before Christmas!). What were our research questions that we had decided on again? Do we still like them? What was the next step we were supposed to take this term?

Who?

We talked again about which students we wanted to participate – did we want IFY (Foundation) or PMP (Pre-Masters)? We considered the fact that it’s not only linguistic ability which influences response to feedback (our focus) – things like age, study pathway, past learning experiences and educational culture in country of origin will all play their part. Eventually, we decided to focus on IFY students, as with PMPs their coursework may alter dramatically between first and final draft submissions due to feedback from their content tutor, which would affect our ability to do text analysis regarding their response to our first draft feedback. Within the IFY cohort we have decided to focus on the c and d level groups (which are the two bottom sets, if you will), as these students are most at risk of not progressing, so any data which enables us to refine the feedback we give them and others like them will be valuable.

What?

It is notoriously tricky to pin down a specific focus and design a tool which enables you to collect data that will provide the information you need in order to address that focus. Last term, we identified two research questions (see the session 8 write-up further down this page):

  1. Do students understand the purpose of feedback and our expectations of them when responding to feedback?
  2. How do students respond to the Quickmarks?

This session, we decided that this was actually too big and chose to focus on no. 2. Of course, having made that decision (and, in fact, also in the process of making it), we discussed what specifically to focus on. Here are some of the ideas:

  • Recognition – which of the Quickmarks are students able to recognise and identify without further help/guidance?
  • Process – are they using the Quickmarks as intended? (When they don’t recognise one, do they use the guidance provided with it, which appears when you click on the symbol? If they do that, do they use the links provided within that information to further inform themselves and equip themselves to address the issue? You might assume students know what the symbols mean, or read the information if they don’t, but anecdotal evidence suggests otherwise – e.g. a student who was given a wrong word class symbol and changed the word to a different word rather than changing the class of it!)
  • Application – do they go on to be able to correct other instances of the error in their work?

Despite our interest in the potential responses, we shelved the following lines of enquiry for the time being:

  • How long do they spend altogether looking at their feedback?
  • How do they split that time between Quickmarks, general comments and copy-pasted criteria?

We are mindful that we only have 6 weeks of sessions this term (and that includes this one!), as this term’s week 10, unlike the final week of last term, is going to be, er, a tad busy! (An extra cohort and 4 exams between them, vs one cohort and one exam last time round!) As we want to collect data next term, that gives us limited time for preparation.

How?

We are going to collect data in two ways.

Text analysis

We will each look at a first draft and a final essay draft of a different student and do a text analysis to find out if they have applied the Quickmark feedback to the rest of their text. This will involve picking a couple of Quickmarks that have been given to the student in their first draft, identifying and highlighting any other instances of that error type, and then looking for those highlighted errors in the final draft to see if they have been corrected and, if so, whether successfully. (A toy sketch of the idea follows.)
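As a thought experiment, here is one crude way the cross-draft check could be mechanised, assuming we have copied out the exact erroneous phrases (as in the “Error in first draft” column). It only catches phrases left completely unchanged; anything altered still needs a human eye, which is of course the interesting part. All the examples and the file name are invented.

```python
# Toy heuristic: flag first-draft errors that survive verbatim into the final draft.
first_draft_errors = {
    "article": ["a informations", "the both methods"],  # invented examples
    "word form": ["economical growth"],
}

with open("final_draft.txt") as f:  # hypothetical plain-text export of the final draft
    final_draft = f.read().lower()

for error_type, phrases in first_draft_errors.items():
    for phrase in phrases:
        if phrase.lower() in final_draft:
            print(f"{error_type}: '{phrase}' appears unchanged (unattempted?)")
        else:
            print(f"{error_type}: '{phrase}' gone; check the revision by hand")
```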

We are going to have a go at this in our session next week, to practise what we will need to do and agree on the process.

Questionnaire

Designing an effective questionnaire is very difficult and we are still in the very early stages. We are still leaning towards Google Forms as the medium. Key things we need to keep in mind are:

  • How many questions can we realistically expect students to answer? The answer is probably fewer than we think, and this means that we have to be selective in what questions to include.
  • How can we ask the questions most clearly? As well as using graded language, this means thinking about question types – will we use a Likert scale? will we use tick boxes? will we use any open questions?
  • How can we ensure that the questions generate useful, relevant data? The data needs to answer the research questions. Again, this requires considering different question types and what sort of data they will yield. Additionally, knowing that we need to analyse all the data that we collect, in terms of our research question, we might want to avoid open questions as that data will be more difficult and time-consuming to analyse, interesting though it might be.

The questions will obviously relate to the focuses identified and discussed earlier: recognition, process and application. One of our jobs for the next couple of sessions is to write our questions. It’s easy (ish!) to talk around what we want to know, but writing clear questions that elicit that information will be significantly more challenging!

Another thing we acknowledged, finally, is that research-wise we are not doing anything that hasn’t been done before, BUT the “newness” comes from doing it in our particular context. And that is absolutely fine! 🙂

Homework: 

Well, those of us who haven’t got round to doing the reading set at the end of the previous session (cough cough) will hopefully manage to finish that. (That was Goldstein, L., “Questions and answers about teacher written commentary and student revision: teachers and students working together”, in Journal of Second Language Writing, and Ene, E. & Upton, T.A., “Learner uptake of teacher electronic feedback in ESL composition”.) Otherwise, thinking about possible questions and how to formulate them!

Scholarship Circle: Giving formative feedback on student writing (9)

It’s the last week of term, exam week, and we have managed to squeeze in a final scholarship circle meeting for the term. How amazing are we? 😉 I also have no excuse not to write it up shortly afterwards – nothing sensitive content-wise and, for once in a way, I have a wee bit of time. Sort of. (By the time you factor in meetings, WAS and ADoS stuff for next term, not as much as you might think…!)

For more information about what scholarship circles involve, please look here and for write-ups of previous scholarship circles, here

You might also be interested in session 1 / session 2 / session 3 and 4 / session 5-8 of this particular circle.

So, session 9. The first thing we recognised in this session is that we won’t be collecting data until term 3 for September students and term 4 for January students (which will be their term 3). This is a good thing! It means we have next term to plan out what we are going to do and how we are going to do it. It sounds like a lot of time but there is a lot we have to do and elements of it are, by their nature, time-consuming.

Firstly, we need to decide exactly who our participants will be and why. “You just said term 3/4 September/January students!” I hear you say. Yes…generally, that is the focus. In other words, students who are doing a coursework essay and therefore receiving QuickMark feedback. However, within those two broad groups (September Term 3/January Term 4), we have IFY (foundation) and PMP (Pre-masters) students and the IFY cohorts are streamed by IELTS score into a, b, c and (numbers depending) d groups. So, we need to decide exactly who our participants will be. This choice is affected by things like the age of the participants (some of our students are under 18 which makes the ethical approval process, which is already time-consuming, markedly more difficult) and what exactly we want to be able to find out from our data. For example, if we want to know the effect of the streaming group on the data, then we need to collect the data in such a way that it is marked for streaming group. (NB: as I learnt last term in the context of a plagiarism quiz that had to be disseminated to all students, it is a bad idea for this information to rely on student answers – having a field/question such as “What group are you in?” might seem innocuous but oh my goodness the random strangeness it can throw up is amazing! See pic below…)

“Bad” and “g’d” are other examples of responses given! …Students will be students? We need to make sure that our Google Form collects the information we want to collect and allows us to analyse it in the way that we want to analyse it. Obviously, we need to know what we want to collect and how we want to analyse it before we can design an effective tool. Additionally, however pesky they might be, participant students will also need to be a) fully informed regarding the research as well as b) aware that it is voluntary and that they have the right to cease participation and withdraw their data at any point.

Developing our research is just one of the directions that our scholarship circle might take next term. We also discussed the possibility of further investigation into how to teach proofreading more effectively. We are hoping to do some secondary research into this and refine our practice accordingly. While we will do what we can, we recognised that time constraints may affect what we can do. For example, we discussed the following activity to encourage proofreading after students receive feedback on their drafts:

  • Put students in groups of four and have them look at the feedback, specifically QuickMarks, on their essays
  • Students, in their groups, to work out what is wrong and what the correction should be. Teacher checks their correction and ensures that it is correct.
  • Students to pick a mistake or two (up to four sentences) and copy them onto a piece of flip-chart paper with the mistakes still in place
  • Each group passes their flip-chart paper to another group, who should try to correct it.
  • The flip-chart paper passes from group to group, with the idea that they look at the mistake and the first correction group’s edits and see if they think it is now correct or want to make additional changes (in a different colour)
  • Finally, the original group gets their flip-chart paper with corrections and edits back and compares it with their correct version.

This is a really nice little activity. However, after students receive their first draft feedback, they do not have any more lesson time (what time remains of the term, after they get their feedback, is taken up by tutorials, mocks and exams!), so it wouldn’t be possible to do it using that particular feedback. Perhaps what we need to do is use the activity with a different piece of work (for example a writing exam practice essay), and integrate other proofreading activities at intervals through the course, so that when they do get their first draft feedback for their coursework, they know what to do with it!

Another thing we discussed in relation to proofreading and helping students to develop this skill is the importance of scaffolding. I attempted to address the issue of scaffolding the proofreading process in a lesson I wrote for my foundation students last term. In that lesson, students had to brainstorm the types of errors that they commonly make in their writing – grammar, vocabulary, register, cohesion-related things like pronouns etc. – and then I handed out a paragraph with some of those typical errors sown in, and they had some time to try and find the errors. After that, I gave them the same paragraph but with the mistakes underlined, and having checked which ones they had found correctly, they then had to identify the type of error for each one that had been underlined. Finally, I gave them a version with the mistakes underlined and identified using our code, and they had to try and correct them. All of this was group work. The trouble was that the lesson wasn’t long enough for them (as a low-level foundation group) to have as much time as they could have done with for each stage. I had hoped there would be time for them to then look at their coursework essays (this was the last lesson before first draft submission) and try to find and correct some mistakes, but in reality we only just got through the final paragraph activity.

Other ideas for scaffolding the development of proofreading skills were to prepare paragraphs that had only one type of mistake sown in so that students only had to identify other errors of that particular type, with the idea that they could have practice at identifying different errors separately before trying to bring it together in a general proofreading activity. That learning process would be spread over the course rather than concentrated into one (not quite long enough) lesson. There is also a plan to integrate such activities into the Grammar Guru interactive/electronic grammar programmes that students are given to do as part of their independent study. Finally, we thought it would be good to be more explicit about the process we want students to follow when they proofread their work. This could be done in the general feedback summary portion of the feedback. E.g. cue them to look first at the structural feedback and then at the language feedback etc. That support would hopefully avoid them being overwhelmed by the feedback they receive. One of our tasks for scholarship circle sessions next term is to bring in the course syllabus and identify where proofreading focuses could be integrated.

Another issue regarding feedback that we discussed in this session was the pre-masters students’ coursework task, which is synoptic – they work on it with their academic success tutor with a focus on content, and with us with a focus on language. Unfortunately, with the set-up as it is, as students do not work on it with a subject tutor, there is no content “expert” to guide them, and there is a constant tension with regards to the timing of feedback. Our team give feedback on language at the same time as the other team give feedback on content (which, not being experts, is a struggle for them, exacerbated by not being able to give feedback on language, especially as the two are fairly entwined!). Content feedback may necessitate rewriting of chunks of text, rendering our language feedback useless at that point in time. However, there is not enough time in the term for feedback to be staggered appropriately. We don’t have a solution for this, other than more collaboration with Academic Success tutors, which again time constraints on both sides may render difficult, but it did lead us onto the question of whether we should, in general, focus our QuickMarks only on parts of the text that are structurally sound. (Again, there isn’t time for there to be a round of structural feedback followed by a round of linguistic feedback once the structural feedback has been implemented.)

Suffice to say, it is clear that we still have plenty to get our teeth into in future scholarship circle sessions – our focus, and areas closely related to it, are far from exhausted. Indeed, we have a lot to do still, with our research still in its early stages. We are not sure what will happen next term with regards to when the sessions will take place, as it is timetable dependent, but we are keeping our current time-slot pencilled in as a starting point. Fingers crossed a good number of us will be able to make it, or we will find an alternative time that more of us can do!

Thank you to all my lovely colleagues who have participated in the scholarship circle this term, it has been a brilliant thing to do and I am looking forward to the continuation next term!

Scholarship Circle: Giving formative feedback on student writing (5-8)

Last time I blamed time and workload for the lack of updates, but this time the reason there is only one post representing four sessions is in part a question of time but more importantly a question of content. This will hopefully make more sense as I go on to explain below!

(For more information about what scholarship circles involve, please look here and for write-ups of previous scholarship circles, here

You might also be interested in session 1 / session 2 / session 3 and 4 of this particular circle.)

Session 5 saw us finishing off what we started in Session 4 – i.e. editing the error correction code to make it clearer and more student-friendly. So, nothing to add for that, really! It was what it was – see write-up of Session 4 for an insight.

Sessions 6 and 7 were very interesting – we talked about potential research directions for our scholarship circle. We started with two possibilities. I suggested that we replicate the M.A. research regarding response to feedback that started the whole scholarship circle off and see if the changes we are making have had any effect. At the same time as I had that idea, another of our members brought forward the idea of participating in a study that is going to be carried out by a person who works in the Psychology department at Sheffield University, regarding reflection on feedback and locus of control. What both of these have in common is that they are not mine to talk about in any great depth on a public platform given that one has not yet been published and the other is still in its planning stages.

Session 6

So, in session 6, the M.A. researcher told us, in depth, all about her methodology, as in theory, if we were to replicate that study, we would be using that methodology; we also heard about the ideas and tools involved in the Psychology department research. From the former, it was absolutely fascinating to hear about how everything was done, and also straightforward enough to identify that replicating that study would take up too much time at critical assessment points when people are already pressed for time: it’s one thing to give up sleeping if you are trying to do your M.A. dissertation to distinction level (congratulations!), but another if you are just working full time and don’t necessarily want to take on that level of workload out of the goodness of your heart! We want to do research, but we also want to be realistic. With regards to the latter, it sounded potentially interesting, but while we heard about the idea, we didn’t see the tools it would involve using until Session 7. The only tool that we contributed was the reflection task that we have newly integrated into our programme, which students have to complete after they receive feedback on the first draft of their assignments.

Session 7

Between Session 6 and 7, we got hold of the tools (emailed to us by the member in touch with the research in the Psychology department) and were able to have a look in advance of Session 7. In Session 7, we discussed the tools (questionnaires) and agreed that while some elements of them were potentially workable and interesting, there were enough issues regarding the content, language and length that it perhaps wasn’t the right direction for us to take after all. The tools had been produced for a different context (first year undergraduate psychology students). We decided that what we needed was to be able to use questionnaires that were geared a) towards our context and students and b) towards finding out what we want to know. We also talked about the aim of our research, as obviously the aim of a piece of research has a big impact on how you go about doing that research. Broadly, we want to better understand our students’ response to feedback and from that be able to adapt what we do with our feedback to be as useful as it possibly can be for the students. We spent some time discussing what kinds of questions might be included in such a questionnaire.

So, at this point, we began the shift away from focusing on those two studies, one existing, complete but unpublished, and one proposed, and towards deciding on our own way forward, which became the focus of Session 8.

Session 8

Between Session 7 and Session 8, our M.A. Researcher sent us an email pointing out that in order to think about what we want to include in our questionnaires, we first need to have a clear idea of what our research questions are. So that was the first thing we discussed.

One fairly important thing that we decided today as part of that discussion about research questions was that it would be better to focus on one thing at a time. So, rather than focusing on all the types of feedback that Turnitin has to offer within one project, this time round we will focus specifically on the QuickMarks (which, of course, we have recently been working on!). Then, next time round, we could shift the focus to another aspect. This is in keeping with our recognition of the need to be realistic regarding what we can achieve, so as to avoid setting ourselves up for failure. (I think this is a key thing to bear in mind for anybody wanting to set up a scholarship circle like this!) The questions we decided on were:

  1. Do students understand the purpose of feedback and our expectations of them when responding to feedback?
  2. How do students respond to the Quickmarks?

Questions that got thrown around in the course of this discussion were:

  • Do students prioritise some codes over others? E.g. do they go for the ones they think are more treatable?
  • What codes do students recognise immediately?
  • If they don’t immediately recognise the codes, do they read the descriptions offered?
  • Do they click on the links in the descriptions?
  • Do they do anything with those links after opening them? (One of the students in the M.A. research opened all the links but then never did anything with them!)
  • How much time do they believe they should spend on this feedback?
  • How long are students spending on looking at the feedback in total?
  • How do students split their time between Quickmarks (or rather “in-text feedback”, so including comments and text-on-text, a.k.a. the “T” option, which some of us haven’t previously used!), general comments and the grade form?

Of course, these questions will feed into the tool that we go on to design.

We identified that our learner training ideas (e.g. the reflection form; improving the video that introduces students to Turnitin feedback; developing a task to go with the video in which they answer questions and in so doing create a record of the important information that they can refer back to) can and should be worked on without waiting to do the research. That way, having done what we can to improve things based on our current understanding, we can use the research to highlight any gaps.

We also realised that for the data regarding Quickmarks to be useful, it would be good for it to be specific. So, one thing on our list of things to find out is whether Google Forms would allow us to have an item in which students identify which QMs they were given in their text and then answer questions regarding their attitude to those Quickmarks, how clear they were, etc. Currently we are planning on using Google Forms to collect data, as it is easy to administer and organises the results in a visually useful way. Of course, that decision may be changed based on whether or not it allows us to do what we want to do.

Lots more to discuss, and hopefully we will be able to squeeze in one more meeting next week (marking week, but only one exam to mark, most unusually! – in a normal marking week, it just would not be possible) before the Christmas holidays begin… we shall see! Overall, I think it will be great to carry out research as a scholarship group and use it to inform what we do (hence my overambitious, as it turns out, initial idea…). Exciting times! 🙂

CUP Online Academic Conference 2018: Motivation in EAP – Using intrinsically interesting ‘academic light’ topics and engaging tasks (Adrian Doff)

This is the first session of this online conference that I have been able to attend live this week, hoping to catch up with some of the others via recordings…

Part of a series of academic webinars running this week, this is the 5th session out of 8. Apparently recordings will be available in about a week’s time. Adrian Doff has worked as a teacher and teacher trainer in various countries and is co-author of the Meanings into Words and Language in Use series, amongst other things. He is talking to us from Munich, Germany.

We are going to look at what topics and tasks might be appropriate in EAP teaching, especially for students who need academic skills in English but also need to improve their general language ability. For most of his ELT life, Adrian has been involved in general ELT as a teacher and materials writer, and has recently moved into EAP, mainly through supplementary material creation.

Our starting point for this webinar: look at some of the differences between GE and EAP. In the literature of EAP quite a lot is made of these differences, partly as a way to define EAP in contrast to GE.

Firstly, the contrast between needs and wants: to what extent do we define the content of the course in terms of the perceived needs of learners (what they need to do) vs what we think students want to do? In all teaching and learning there is a balance between these two things.

  • In GE, needs/outcomes define the syllabus, skills and general contexts, and they are seen as fairly long-term outcomes and goals, often expressed in terms of the CEFR – e.g. language used in restaurants/cafes, which we think will be useful for learners of English. Equally we consider what students want, and the topics, tasks and texts are based more on interest, engagement and variety. E.g. a common classroom activity is a class survey, mingling and asking questions and reporting back. It is not really related to needs, i.e. we don’t expect students to get a job doing surveys, but it is interesting and lively, and generates interaction etc., so it is motivating for them to do.
  • If we think about EAP, the needs are more pressing and clearer; they dictate the skills, genre and language we look at, and that dominates the choice of topics, texts and tasks.

Two differences come out of this first one:

  • Firstly, in GE the overt focus of the lesson is a topic, while in EAP the overt focus is the skills being developed.
  • Secondly, teachers’ assumptions about motivation in class differ.

Adrian shows us a quote from De Chazal (2014), saying that in GE motivation is teacher-led, while in EAP the stakes are high and students are very self-motivated, with clear intrinsic motivation deriving from a clear goal. In GE students may not necessarily see tasks/topics as relevant in terms of what they need, while in EAP they do.

Next we looked at example materials from GE and EAP, based around the same topic area of climate change.

  • EAP – “Selecting and prioritising what you need” – students are taken through a series of skills: choosing sources, thinking about what they know, looking at the text, looking at the language of cause and effect, leading into writing an essay. The assumption is that students will be motivated by the knowledge that they need these skills. The page looks sober, black and white, reflecting the seriousness of EAP.
  • GE – Cambridge Empower also leads to writing an essay, but first there is a focus on the topic: listening to news items about extreme weather events, and discussion. Then comes a reading text that leads into a writing skills focus on reporting opinions, and that leads into the essay. It arouses interest in the topic through strong use of visual support, active discussion of the topic, and listening and speaking tasks (although it’s a reading and writing lesson), with lots of variety of interaction and general fluency practice.

These differences reflect the different needs of GE and EAP learners, and the more serious nature of academic study. This is fine if we can assume that learners in EAP classes are in fact motivated and have a clear idea of their needs and of how what is being done relates to them. Note that De Chazal says students “can be self motivated” and “are more likely to be working towards a clear goal” – not definite.

Adrian puts forward a spectrum with GE, GEAP and SEAP on it, but says that many students occupy a place somewhere in the middle of the scale, i.e. learning English for study purposes but also needing GE, and possibly without clear study aims. E.g. in Turkey, students who study English in addition to their subject of study in a university context: they need to get to B1+ and are preparing for a programme where some content is in English, but they are not aiming to study in an English-speaking university, so they don’t need full-on EAP and may not necessarily be motivated. In the UK, students need an improved IELTS score and need EAP skills in addition to general skills, and are more motivated. In both of these cases, EAP ‘light’ may be useful.

For the rest of this session, he says we will look at what this might look like and how it might come out in practice. It is clearly possible to focus on academic skills in a way that is engaging for learners who may not be highly motivated while still providing the skills that they need to master.

Approach 1

E.g. Skills for writing an academic essay, specifically in the opening part, the introduction, where they may need to define abstract concepts. Students might be shown an example which provides examples of the language needed.

It isn’t in itself a particularly engaging text, but it seems to Adrian that there are ways in which this topic could be made to be more interesting and engaging for less motivated students:

  • a lead-in to get ss thinking about the topic – brainstorming
  • discussion with concrete examples, e.g. in what ways might courage be an asset in these occupations
  • personalisation: think of a courageous person you know, what did they do which was courageous
  • prediction: get ss to write a definition of courage without using a dictionary

THEN look at the text.

So this is an example of bringing features of General English methodology into EAP. This helps to provide motivation: it is generated by the task and the teacher, bringing interest to a topic which does not HAVE to be dry.

Approach 2

To actually choose topics which have general interest even if not related to learners’ areas of study.

Listening to lectures: identifying what the lecturer will talk about using the signals given (EAP focus: outlining content of a presentation). Can be done with a general interest topic e.g. male and female communication.

  • Start off with a topic focus: think about the way men or boys talk together and the way women or girls talk together. Do you think there are any differences? Think about…
  • Leads into a focus on listening skills: students listen to an introduction to a class seminar on this topic; identify how speaker uses signalling language, stress and intonation to make it clear what he is going to talk about

So those are a couple of examples of directions that EAP light could take. This is a crossover between GE and EAP, skills and language defined by needs, but the initial focus is on the topic itself rather than on the skills. Topics selected as academic in nature but have intrinsic interest. Motivation is enhanced through visuals, engaging tasks, personalisation etc.

Q and A

What is a good source of EAP light topics?

Adrian plugs his Academic Skills development worksheets – generally academic in nature but of general interest. (They accompany “Empower”.) If you are developing your own, look at the kinds of topics in GE coursebooks and see if there are any that would lend themselves to EAP.

What about letting students choose their own topics?

A good idea if this is EAP where students are already engaged in academic study, as they will have a good idea of what they need. In GEAP it is important to choose topics which lend themselves to whatever academic skill you are developing as well.

What were the textbooks used in the examples?

EAP – Cambridge Academic English B2 level; GE – Empower B2 level