If not, the first marker and moderator would discuss the work and try to reach a consensus decision regarding the mark, using the same options. The degree of checking should be proportionate to the weighting of the assessment. Where marking is undertaken by computers, for example for Multiple Choice Questions, the Course Organiser should put manual checks in place to confirm that the software is functioning correctly. Item analysis statistics should also be reviewed to check for any anomalies.
In cases where assessment does not involve the production of written work or other physical artefacts, other forms of checking can be applied. For example, a Board of Examiners could review mark profiles for courses with similar components of assessment and similar student cohorts. In the event that apparently similar courses lead to substantially different mark profiles, the Board of Examiners should investigate whether the differences are justified.
If any differences in mark profiles are not justified, the Board should consider remarking the relevant course. Where it is necessary for a second moderator to resolve disagreements, the School should include the relevant assessments and information regarding the moderation process and resolution in the sample seen by the External Examiner, so that the External Examiner can comment on how these processes operated.
External Examiners should not normally be asked to intervene in resolving individual cases in the event of markers and moderators disagreeing.
Typically, Course Organisers will organise more robust moderation processes when marking is undertaken by tutors and demonstrators, for example by having a larger sample than would normally be the case.
Face-to-face sessions should be scheduled to ensure a shared understanding of how the criteria should be applied. The Course Organiser is responsible for ensuring that appropriate arrangements are in place to record the moderation process; for example, some Schools use moderation forms to provide a record.

Reliability means that different assessors, acting independently using the same task description, come to the same judgment about a given piece of work or student response. Reliability, therefore, is about fairness to students based on comparability between assessors. Rubrics associated with tasks must also be reliable.
This is tested when assessors use them to make judgments about grades. Even though complete objectivity between assessors is impossible to achieve, you should aim to make rubrics as reliable as you can; hence the crucial role of well-written and unambiguous descriptors. Assessors also need to be trained to use rubrics to judge student work, so that they come to the same understanding of the descriptors as other assessors.
A cornerstone of criterion-referenced assessment is the practice of moderation. This practice is very important in ensuring that assessment is fair, transparent, valid and reliable.
It is also essential in ensuring that the complexity of learning outcomes increases through a degree course. There are three foci for moderation of assessment at the University of Tasmania, and each has processes which can be followed. Heads of School, or their delegate, should ensure that all staff involved in marking, including casual academic staff, are prepared. Teachers vary in their beliefs, understandings, expectations about, and judgments of, student learning.
When they discuss samples of work with other teachers during moderation processes, their own knowledge deepens, and they establish collaborative practices. Making consistent, reliable and valid decisions across different points in time is important when schools report on student progress, make decisions on school targets and resourcing, or compare cohort data with historical information.
All schools experience variables that challenge consistency of practice, such as staff changes, changes in student numbers or changing educational demands. Consistent moderation over time can mitigate these challenges in a number of ways.

For example, a provider may send their internal moderator on a surprise visit to a workshop, ask the facilitator to leave the class, and question the learners about the facilitator's performance, delivery and so on.
Unfortunately, some SETA Verifiers fail to check on the assessment and moderation process, resulting in the failures you refer to. Some providers are also to blame in that they do not know, or do not care to adhere to, what is required. All I can say is shame on them; this is what gives our industry a bad name. Thank you Wilma and Lance for your kind comments. I just hope my blogs assist others and help to root out the unscrupulous.
A very good summary of the process. Excellent, informative post! I am an experienced assessor and moderator, and I found that some of the work I assessed was not physically moderated. I do not understand how they managed to get their qualifications acknowledged through the relevant SETAs. I tried numerous times to rectify the situation but just lost my job. Maybe it is due to a lack of knowledge on the side of the client (large government departments in some cases).
I am extremely passionate about the development and growth of people, with an excellent success record. Currently I am disappointed in the SAQA processes, while I believed its structured education could work so well to address many of the challenges in South Africa.
Thank you Marie. Yes, it comes back to competence and understanding the purpose and outcome of moderation. As Walter says, it is assessment of the assessor and the assessment process.
Well written post, Des. I agree that moderation should be a rather painless process, especially if moderation of design took place at the beginning of the process and competent assessors with subject-matter expertise are being used. The new system clearly indicates that pre-designed guides must be used and Assessors will not be expected to design assessments as such. Designers do not necessarily cover all the types of contingencies and special needs that may arise!
This process unfortunately means that very few providers will ever dare to make changes to assessments without prior approval, as the threat of deregistration is very real. Unfortunately, the learner with the special need is then left at the short end. Based on the current system, moderation would then actually just need to cover the following:

Implementation: Did the provider and the Assessor use the pre-approved programme and assessments?
Evaluation: Did the Assessors actually complete the sections as set out in the Guide?