Guidance on Program Evaluation

1. Policy Title

College of Medicine and Health Sciences – Program Evaluation Policy

 

2. Definitions, Abbreviations and Acronyms

2.1 Definitions:

Curricular Objectives

The set of program outcomes (in terms of knowledge, skills and attitudes) that are intended to be attained by students who successfully complete an academic program or part thereof.

Curriculum

The description of information, learning outcomes, learning experiences, learning environment, structure and contents of courses, and methods used to assess students’ performance during or at the end of an academic program.

Evaluation Process

The process of collecting data to assess student learning (during and at the end of an academic program) with reference to identified outcomes, in order to improve program efficiency.

Guidelines

A set of explanatory notes to assist in directing the evaluation process toward successfully achieving its planned aim.

Instrument of Measurement

Any type of measurement (objective or subjective) that is used to compare the level of attainment of students/graduates in achieving curricular outcomes against a standard reference (examples of instruments of measurement are listed in Appendix D).

Outcomes

The measured [post-program] graduate competencies in terms of knowledge, skills and attitudes.

Policy Owner

An individual/entity (within the College of Medicine and Health Sciences) that is responsible for developing, overseeing, reviewing and/or updating the Policy.

Principle

An existing or professed rule of action or conduct.

Program [Academic]

A planned curricular combination of integrated courses/modules (or other equivalent units of educational activities) that, when successfully completed, leads to the awarding of an academic degree.

Stakeholders

Individuals or entities who must adhere to the Policy, or who may be affected or governed by it.

The Policy

College of Medicine and Health Sciences Program Evaluation Policy.

 

2.2 Abbreviations:

BMS: BioMedical Science

CB: College Board

CIPP: The Context, Input, Process, Product [program evaluation] Model

COMHS: College of Medicine and Health Sciences

CTS: Course and Teaching Survey

EP: Evaluation Process

fGPA: Focused Grade Point Average

GDC: [Omani] Graduate Diploma Certificate

IB: International Baccalaureate

MD: Doctor of Medicine [Degree]

MEID: Medical Education and Informatics Department

QAAAU: Quality Assurance and Academic Accreditation Unit

SQU: Sultan Qaboos University

 

2.3 Acronyms:

PEC: Program Evaluation Committee

PEP: Program Evaluation Policy

 

3. Policy Statement

This policy governs and regulates all activities concerning the evaluation of any academic program offered by the College of Medicine and Health Sciences (COMHS).

 

4. Purpose/Reasons for Policy

This policy is intended to provide the basis for an evaluation process that is founded on a sound conceptual framework and that measures the extent to which the intended curricular learning objectives of the program in question have been attained. The evaluation process should be designed to:

4.1 improve program efficiency.

4.2 provide recommendations to enhance COMHS graduates’ professional competencies.

4.3 identify the necessary issues/actions to make college programs ready for national and international accreditation.

 

5. General Principles:

The following principles were derived from related SQU policies and the conceptual content provided in Appendix A:

5.1 The Policy should be in line with SQU policies.

5.2 The evaluation process should be an ongoing, systematic and data-based inquiry that utilizes the results of both direct and indirect evaluation measures.

5.3 All stakeholders’ opinions should be sought appropriately and in a timely manner.

5.4 The evaluation process must assure stakeholders that their feedback will be received with respect, irrespective of their views.

5.5 The evaluation process should be designed to appropriately measure program performance according to the four levels of Kirkpatrick’s framework, viz. Reaction (satisfaction), Learning, Behavior and Results (practice of graduates).

5.6 All instruments of measurement used should be recognized, valid and designed to provide reliable results.

5.7 Each instrument of measurement should be specific to the program outcome/goal/objective it is used to measure.

5.8 The evaluation process should be decided in the context of program delivery and the feasibility of using a particular measurement instrument.

5.9 The evaluators (the team that carries out the evaluation process) must ensure the honesty and integrity of the evaluation process; they are therefore bound not to provide personal opinions on program performance in their capacity as stakeholders.

5.10 The evaluation report should provide a critical analysis of the collected data and recommend means of overcoming inadequacies and affirming strengths, with a realistic approach to context.

 

6. Scope of Application

All stakeholders, such as the Curriculum Committee, the Examination Committee, the Quality Assurance and Academic Accreditation Unit, the Medical Education and Informatics Department, staff, students (undergraduate and postgraduate), interns, employers, alumni, etc.

 

7. Policy Owner

College Quality Assurance and Academic Accreditation Unit (QAAAU).

 

8. Approval Body

The College of Medicine and Health Sciences Board.

 

9. Procedure

The final aim of the program evaluation process is to improve the quality of the academic program in question. A set of guidelines (Appendix B) has been formulated to aid in achieving this aim. In addition, the organization of the process and its management by the Program Evaluation Committee (PEC; see Appendix C) are equally important to ensuring the success of the process.

The Evaluation Process (EP) will be conducted in two independent, though closely related, parts:

9.1 Part 1: dedicated to organizing the EP for a particular program. The PEC should decide on the appropriate measurement tool (Appendix D) and determine who will conduct the measurement, where within the educational program the tool should be applied, what the measurement indicator/target will be, and for what purpose the measurement will be used.

9.2 Part 2: conducted after receiving the results and feedback collected in Part 1. The PEC should collate, analyze and interpret the data, formulate recommendations derived from it, and submit these to the College Board (CB) in the form of a regular report.

Stated stepwise, the procedure is as follows:

PART 1

The Program Evaluation Committee is to:

i. plan the EP, working closely and in consultation with the College QAAAU.

ii. decide on the educational delivery component/element to be evaluated.

iii. select the indirect measurement tool/s to be utilized for a particular EP item.

iv. assign who is to conduct the evaluation of a specific EP item; THEN,

the assigned body/individual is to conduct the EP item/s.

PART 2

i. Assigned bodies/individuals are to provide the PEC with the results of the evaluation as per PART 1.

ii. The PEC shall request and receive data (other than those mentioned in item i above) on direct and indirect measurements of students’ characteristics, previous performance, etc. from various bodies/individuals as appropriate.

iii. The PEC, in concert with the assigned bodies/individuals, is to discuss, analyze and interpret the results as a whole.

iv. The PEC is to prepare an annual report and submit it to the College Board and the QAAAU.

 

10. Related Policies/Documents

University Program Review Policy

University Assessment Policy

College Assessment Policy

Program Curriculum (of the program in question)

 

11. Responsibility for Implementation

Program Evaluation Committee

12. Issuing Office/Body

College Quality Assurance and Academic Accreditation Unit (QAAAU)

13. Review

To be conducted by the Program Evaluation Committee every five years, or whenever a significant amendment is needed.

14. Key Risks

14.1 Inappropriate selection of measurement tool/s.

14.2 Lack of resources to conduct measurements.

14.3 Lack of commitment, or unwillingness to participate, among stakeholders involved in collecting data.

14.4 Lack of seriousness among stakeholders in providing accurate information on program performance.

14.5 Incorrect interpretation of collected data.

14.6 Inappropriate or untimely application of recommended changes.

 

Last updated 27 January 2018