Abstract Title
Study to validate the assessment checklist used by different facilitators in a Communications Workshop

Authors

Madhavi Suppiah
Celia Tan

Theme

9AA Teaching and assessing communication skills

Institution

Singapore General Hospital 

Background

Be it doctors, nurses, allied health professionals, clinicians or frontline staff, effective healthcare communication is the key to exceptional patient care and safety. Developing an effective communication teaching programme and an effective evaluation tool is a challenge.

The 2-day workshop at the Singapore General Hospital has participants demonstrate their current skill level in a role play with patient actors, using standardized case scenarios and a checklist, and then repeat the same scenario and checklist post-workshop to assess their improvement.

Communication tools were taught in three broad categories: attending behaviours (verbal and non-verbal skills), listening skills, and goal-setting & closure. These categories formed the assessment checklist. Results showed a marked improvement in participants' communication skills between the pre- and post-workshop assessments.

Summary of Work

This pilot study evaluated the inter-rater reliability of four facilitators using video clips of participants recorded both pre- and post-workshop.

Hypothesis

The grading scale used for evaluating communication skills shows no significant differences between facilitators or assessors, i.e. it has inter-rater reliability (consistency).

  • 4 senior facilitators were selected and each given, in random order, 10 (out of 36) video clips of the participants. The clips were graded on a 3-point scale (0, 1 and 2).
  • A one-hour training session was given to the identified facilitators to review 2 video recordings and agree on the various skills in the assessment checklist.
  • A period of 1 month was given so that facilitators did not grade all clips in one sitting, reducing instructor fatigue and bias (i.e. not being influenced by the previous video clip).
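The clip-assignment step above can be sketched as follows. This is a minimal illustration only: the clip IDs, facilitator labels, and the assumption that all four raters graded the same 10 clips (as the inter-rater comparison implies) are inferred, not taken from the abstract.

```python
import random

ALL_CLIPS = list(range(1, 37))           # clip IDs 1..36 (hypothetical labels)
FACILITATORS = ["F1", "F2", "F3", "F4"]  # anonymised facilitator IDs (hypothetical)

def assign_clips(seed=None):
    """Sample 10 of the 36 clips once (so every rater grades the same set,
    as an inter-rater comparison requires), then shuffle the viewing order
    independently per facilitator."""
    rng = random.Random(seed)
    sample = rng.sample(ALL_CLIPS, 10)
    assignments = {}
    for f in FACILITATORS:
        order = sample[:]
        rng.shuffle(order)  # per-rater random viewing order
        assignments[f] = order
    return assignments
```

Shuffling the viewing order per rater, rather than fixing one sequence, helps spread any order or fatigue effects evenly across the clips.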

 

Summary of Results

The kappa (κ) coefficient was calculated for inter-rater reliability (Table 1) and intra-rater reliability (Table 2).
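The kappa values themselves appear in Tables 1 and 2 (not reproduced here). To illustrate how the coefficient is computed, below is a minimal Cohen's kappa implementation for two raters scoring items on the workshop's 3-point scale; the example ratings are invented, not taken from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    who scored the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal frequencies.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Invented example scores on the 0/1/2 scale:
kappa = cohens_kappa([0, 1, 2, 1, 0, 2, 1, 1],
                     [0, 1, 2, 0, 0, 2, 1, 2])  # ≈ 0.64
```

A kappa of 1 indicates perfect agreement, 0 indicates agreement no better than chance, and values below about 0.4 are conventionally read as poor to fair agreement.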

Conclusion

There was poor inter-rater agreement on the assessment checklist: the grading scale used for evaluating communication skills was not consistent between the different facilitators.

Intra-rater reliability was slightly better, especially for attending and summarizing skills. This suggests that facilitator training is critical to ensure a common understanding of the grading scale, particularly for a subjective evaluation tool such as a communication skills checklist.

Take-home Messages

The poor inter-rater agreement may reflect subjectivity arising from the facilitators' experiences, expectations and biases. However, inter-rater reliability has to be established to ensure that the results generated are useful.

For an effective measurement, the checklist has to be "tight" so that the influence of individual facilitators' experiences, expectations and biases is limited. The following will be implemented to improve results in subsequent rounds:

  • Tighter checklist with more clarity & standardization
  • Facilitators to be given clear and concise instructions
  • Facilitators to undergo training on the marking scheme and adhere to the same standards
Acknowledgement

Huge thanks to the team of 4 facilitators, Celia Tan, Karen Koh, Goh Soo Cheng and Leila Ilmami for their invaluable time and effort in this project.
