ORIGINAL ARTICLE
Year : 2018  |  Volume : 5  |  Issue : 4  |  Page : 116-123

Assessment of dental clinical simulation skills: Recommendations for implementation
Abubaker S Qutieshat


Department of Conservative Dentistry, Faculty of Dentistry, Jordan University of Science and Technology, Irbid, Jordan

Date of Web Publication: 25-Jan-2019

Correspondence Address:
Abubaker S Qutieshat
Department of Conservative Dentistry, Faculty of Dentistry, Jordan University of Science and Technology, Irbid 22110
Jordan

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jdrr.jdrr_56_18

  Abstract 


Undergraduate dental students acquire their clinical skills through hands-on training on phantom heads. This is essential to develop their skills and experience and to ensure they can undertake safe and competent dental procedures on patients. However, the literature lacks a comprehensive approach capable of assessing dental clinical simulation skills objectively while incorporating all the tools necessary for effective learning. A brief overview of the literature on clinical skills assessment was performed to define specific recommendations for the implementation of dental clinical skills assessment tools. Selected best theories and methods for a successful package of tools were gleaned from the existing medical and dental literature. This paper defines specific recommendations for implementing the dental clinical skills assessment tools necessary for effective teaching of tooth cavity preparation in clinical simulation laboratories. The recommended approach of comprehensive assessment was broken down into three main components: (1) clinical simulation skill assessment, (2) self-assessment, and (3) student feedback and reflection. The conclusions and recommendations offered herein are intended to enhance course design and current teaching methods in dentistry rather than replace them. The following recommendations were made: (1) Dental schools need to ensure that valid and reliable standard settings are applied to their assessments, which, if accomplished, is very likely to enhance their educational outcomes. (2) It is important to get students to understand and respond positively to feedback, which can be achieved by creating a module culture in which students are encouraged to share and discuss their mistakes openly. Proper feedback will ensure better performance and improved self-assessment skills. (3) Staff members should employ a more consistent pattern of feedback and review their techniques periodically to continually improve the teaching process. Such recommendations, if enforced thoroughly, would help tutors and course instructors assess dental clinical simulation skill components objectively and identify the clinically weaker students.

Keywords: Clinical simulation, dental education, feedback, reflection, self-assessment, standard setting


How to cite this article:
Qutieshat AS. Assessment of dental clinical simulation skills: Recommendations for implementation. J Dent Res Rev 2018;5:116-23

How to cite this URL:
Qutieshat AS. Assessment of dental clinical simulation skills: Recommendations for implementation. J Dent Res Rev [serial online] 2018 [cited 2019 Apr 19];5:116-23. Available from: http://www.jdrr.org/text.asp?2018/5/4/116/250788




  Introduction


Junior dental students, during their clinical skills laboratory module, are expected to develop several dental skills before dealing with real patients. Tooth cavity preparation, a core skill for this module, is one of the most basic elements of dentistry and is usually acquired through hands-on training on phantom heads in clinical simulation laboratories. Acquiring this skill is central to clinical competence, as it ensures dental students can undertake safe and competent dental procedures.

Clinical simulation instruction, in general, focuses on learners aspiring to become highly skilled professionals. Learners are more motivated to learn material delivered through hands-on training, with specific operative tasks to perform as the main instructional vehicle, than material delivered in a classroom setting.[1] Those tasks are carried out under the supervision of highly skilled tutors, operative dentistry specialists, who coordinate efforts to transition students' learning from the classroom into hands-on practice in simulation laboratories and, later, implementation in clinical settings.

Adult learners usually tend to take charge of their learning because they view themselves as self-directed individuals.[2] Systematic instruction presents a rigid environment to many learners, in which they have little contribution to the learning process, if any. Such a limitation can hinder learners' motivation and willingness to learn.[3] Conversely, allowing learners to become prime members and key contributors in the course of learning will enhance their motivation and probably increase their chances of obtaining the anticipated skill and knowledge.[4] Ideally, learners would contribute to and participate in constructing and designing the outline of the training program.[5] Nevertheless, it is important to highlight that a highly learner-customized outline may not be practical, or even achievable, in a large group of learners or in a single training workshop or session.[6]

Learners' contribution to constructing and designing the curriculum outline can be facilitated by identifying their learning needs with respect to the aims and objectives of the course and by finding out their perspective on the ideal way to plan and deliver the material.[6] The anticipated degree of self-direction differs between learners: while some will favor an entirely self-directed approach, others will favor no self-direction at all.[7]

Self-directed learning can be facilitated by the use of learning modules that distribute a rather large group of learners into smaller groups (i.e., breakout sessions). This gives learners the opportunity to engage with their tutor on a personal level via closed discussions, focus groups, and case studies, which should give them a satisfactory sense of control over the learning process.[8]

It is vitally important to assess the extent of control learners believe they have and the level of improvement achieved during their course. The assessment criteria include evaluating the learner's amount of input and level of control over the learning material, evaluating the role of the tutor throughout the course, and weighing the learner's actual participation against the ideal advised level of control.[8]

The theory of self-directed learning strongly features this principle. It is considered a method of coordinating learning and teaching such that learning duties are largely within the control of the person receiving the education. This, in turn, can be considered a target that learners try to achieve in order to become committed to holding responsibility for their independence, self-determination, and self-learning.[9]

Around 100 characteristics related to self-direction have been identified in the literature, such as creativity, ambition, accountability, confidence, skilfulness, thoughtfulness, knowledge, openness, curiosity, and self-awareness.[10] To improve self-directed learning and bring out the potential of these characteristics, learners must be given the chance to gain several skills and, at the same time, enough room to maneuver and apply them. These skills include the ability to diagnose their own weaknesses, critically appraise new content, and reflect on their own progress, to name but a few.[9]

The literature lacks a comprehensive assessment approach capable of quantifying the components of dental clinical simulation skills while incorporating all the tools necessary for effective learning, namely, clinical simulation skill assessment, self-assessment, and student feedback and reflection. Such an approach, if implemented and enforced thoroughly, would help tutors and course instructors better identify and assist the academically and/or clinically weaker students.

A brief overview of the literature on clinical skills assessment was performed to define specific recommendations for the implementation of dental clinical skills assessment tools. Selected best theories and methods for a successful package of tools were gleaned from the existing medical and dental literature. In a paper such as this, however, it is impossible to do justice to all the different perspectives that exist on the topic of clinical assessment. Instead, this overview of the available methods and their underpinning theories focuses on the studies most relevant to dental clinical simulation skills assessment, and therefore the likeliest to provide the best recommendations for teaching tooth cavity preparation in clinical simulation laboratories.


  Clinical Simulation Skill Assessment


To assess whether students have acquired tooth cavity preparation skills, a valid and reliable assessment should be developed that employs an appropriate standard setting.[11] This will ensure that students who pass this assessment are “patient ready” and can undertake basic dental procedures safely and adequately, while students who fail will need to retake the module, for they might potentially jeopardize patient safety.

Because this particular assessment acts as the gateway to practicing dentistry on real patients as clinicians, carefully assigned and fair pass marks are necessary. However, establishing a consensus on the appropriate pass mark is not an easy task in view of the complexity of evaluating such an assessment.

Conventionally, cavity preparations are evaluated subjectively by visual inspection, aided sometimes by measuring instruments.[11] Such a method is best accompanied by other analytical methods utilizing a checklist, which is effective in determining whether the minimum requirement of the skill is met.[12] However, this evaluation method can easily fail to identify “borderline students,” which, in turn, might lead to unfair evaluation. This is mostly attributable to assessor bias and misinterpretation of the checklist.[13]

Thus, to avoid such a problem, a standard has to be set to determine the minimum pass grade that separates the students who deserve promotion to the next level from those who do not. This, by and large, will indicate whether an assessment performance is good enough for its designated purpose.[14]

Given the main elements of the assessment's objectives and content, as discussed earlier, a definite standard setting established through a justified methodology is a must. Several standards have been developed and set for dental clinical assessments.[15] These can be classified into two groups, “relative” and “absolute.”[16],[17] Only the latter can be used in such an assessment, because our concern is the performance of students against the skill rather than a comparison of the performance of different students undertaking the practical assessment.

Assessment takers will pass or fail depending on their clinical skills and how adequately they meet the requirements of an ideal cavity preparation regardless of the performance of other students. Therefore, all students potentially could pass or fail. For a credible absolute standard setting to be achieved in this case, one or more standard setting techniques should also be used.[14]

In the scenario presented herein, where the skill of tooth cavity preparation is to be assessed, an absolute standard setting can be achieved using two techniques, “test-centered” and “examinee-centered.”[18] In the test-centered technique, a panel of expert staff members estimate how they perceive students would fulfil the minimum requirements of a successful cavity preparation. A cut-off mark is then discussed and decided, below which students will not be considered competent in the skill and will therefore need to retake the assessment. Yet, it might be difficult to reach a consensus on a definitive cut-off mark due to differences of expert opinion.[16]

For this to be achieved, the so-called modified Ebel's method can play a central role in providing the desired setting. It considers the relevance and importance of each step in the skill to be assessed by categorizing each step into groups such as essential, important, or indicated.[18] Moreover, the characteristics that the prepared cavity needs to possess in order for it to be considered “ready to be filled” can be used as a guide in this case. In other words, if there are, say, 15 characteristics in total (five essential, five important, and five indicated), the student might be expected to achieve at least three essential, three important, and two indicated to pass the assessment.
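Using the illustrative numbers above (15 characteristics, with minimums of three essential, three important, and two indicated), the pass/fail logic of such a modified Ebel-style standard can be sketched as follows. The category names and thresholds are the hypothetical ones from the example, not a prescribed clinical standard.

```python
# Sketch of a pass/fail check under a modified Ebel-style standard.
# Thresholds follow the hypothetical example in the text: of 15
# characteristics (5 per category), a student must achieve at least
# 3 essential, 3 important, and 2 indicated to pass.

THRESHOLDS = {"essential": 3, "important": 3, "indicated": 2}

def meets_standard(achieved):
    """achieved maps each category to the number of characteristics met."""
    return all(achieved.get(cat, 0) >= minimum
               for cat, minimum in THRESHOLDS.items())

# A preparation meeting 4 essential, 3 important, and 2 indicated passes,
# while one falling short in any single category fails:
print(meets_standard({"essential": 4, "important": 3, "indicated": 2}))  # True
print(meets_standard({"essential": 5, "important": 2, "indicated": 5}))  # False
```

The point of the categorized thresholds is that a deficit in one category (e.g., an essential characteristic) cannot be compensated for by a surplus in another, unlike a single aggregate cut-off mark.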

Applying an appropriate standard setting requires the involvement not only of full-time staff but also of part-time staff and, sometimes, the students themselves.[14] The staff members chosen need to possess thorough academic knowledge and understanding of the skill being assessed, and they also need to be familiar with the students and the evaluation process. However, only a few staff members may be qualified to serve as members of the panel. A standard setting cannot be reliably achieved with a limited number of experts, for the process might be greatly influenced by one or more experts who hold standards that are too rigid or too flexible (hawk vs. dove bias); therefore, a panel of more than five staff members is usually recommended.[16],[19]

This test-centered method can be used alone or in conjunction with an examinee-centered one, in which expert staff members identify an actual borderline group rather than a hypothetical one. Because test-centered methods are hypothetical in nature, supplemental information about the actual performance of real assessment-takers is highly advisable, which can be achieved by incorporating an examinee-centered method into the setting.[16] This will ensure that the suggested pass/fail mark has served its purpose.

For this to be implemented, the borderline regression method can be used, whereby another panel of experts grades the performance of the assessment-takers using a subjective score based on how well students performed overall (i.e., a global score).[20],[21] The global score should be independent of the numerical score adopted previously in the aforementioned modified Ebel's method. Such a global score usually comprises four grade descriptors, namely, “good,” “pass,” “borderline,” and “fail.”[14]

Cavity preparations that are not good enough to be considered a pass, but at the same time not bad enough to be considered a fail, are given the “borderline” grade. Subsequently, the global scores are collected along with the assessment's original grades and plotted graphically to compute the statistical regression using a statistical software package.[20],[21] Doing so will generate a cut-off pass mark, which will, in turn, indicate whether the original standard setting assigned to the assessment is appropriate. The borderline regression method has, in fact, been shown to provide a high level of credibility and reliability even when used on its own.[22]
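To make the computation concrete, the borderline regression just described can be sketched in a few lines: the numerical checklist scores are regressed on the coded global grades, and the fitted value at the “borderline” grade is read off as the cut-off mark. The grade coding and the data below are invented for illustration; in practice a statistical package would be used.

```python
# Sketch of the borderline regression method. Each student has a
# numerical checklist score and an examiner's global grade, coded here
# (an assumed coding) as fail=0, borderline=1, pass=2, good=3. The
# cut-off pass mark is the regression line's predicted score at
# the "borderline" grade.

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Invented example data for eight students:
global_grades = [0, 0, 1, 1, 2, 2, 3, 3]             # coded global grades
checklist_scores = [35, 42, 50, 55, 62, 68, 80, 88]  # checklist scores (%)

slope, intercept = fit_line(global_grades, checklist_scores)
BORDERLINE = 1
cutoff = intercept + slope * BORDERLINE  # predicted score at "borderline"
print(cutoff)
```

The resulting cut-off can then be compared with the mark produced by the test-centered standard setting; a large discrepancy suggests the original standard needs revisiting.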

It is worth mentioning that setting a standard based on a hypothetical borderline student's performance via a test-centered method is usually time-consuming, for the experts are required to meticulously set the desired standard, while actual observation via an examinee-centered method is usually more time-efficient because it can be undertaken simultaneously throughout the assessment.[18] On the other hand, applying an examinee-centered setting can be a complex process.[22] The borderline regression method, in particular, requires an advanced level of statistical calculation which, in many instances, necessitates the collaboration of a statistician.

Both of the aforementioned techniques share a common weak point: they both require judgment of a subjective nature.[15],[23],[24] Moreover, no unified approach exists that can objectively determine the ideal cavity preparation.[11] In addition, the mere determination of a cut-off mark remains, by and large, a subjective process.[23]

To overcome the potential limitations of the aforementioned standards, the same setting method can be repeated with the inclusion of more experts as panel members or, if feasible, by asking different experts to repeat the procedure. This will determine the reliability of the assessment, which can also be calculated using certain statistical procedures.[14]

The General Dental Council in the United Kingdom (UK) has stated that several dentistry assessments in the UK appear to operate at a very basic level of standard setting.[25] This is alarming given how crucial these assessments can be for both dental schools and students. In sum, all dental schools need to ensure that valid and reliable standard settings are applied to their assessments, which, if accomplished, is very likely to enhance their educational outcomes.


  Self-Assessment


For tutors and clinical demonstrators, providing detailed and consistent individual feedback on student performance cannot be done without difficulty, owing to the multi-step nature of this particular skill. Therefore, several self-assessment methods have been suggested to reduce the amount of dependence on the instructor and increase the level of students' performance.[26]

Herein, two self-assessment tools are proposed, one for the student [Figure 1] and another for the tutor [Figure 2]. The former is based on a five-point rating scale (very good to poor), while the latter is based on a two-point rating scale (yes or no).
Figure 1: Student self-assessment model which can be applied for clinical simulation skill modules in dental education

Figure 2: Tutor self-assessment model which can be applied for clinical simulation skill modules in dental education



Student self-assessment

The student self-assessment tool consists of three sections, namely, communication and interaction, accessibility, and cavity preparation (i.e., the skill itself). The first two sections were designed to identify the uninformed aspects of students' understanding and performance, where a poor rating might indicate a deficient teaching method and/or a technical problem. The skill itself consists of more than the mere drilling of a tooth; several problems can be encountered while performing it, such as visibility and accessibility issues, using the proper grip technique, and maintaining a healthy posture throughout. Students may well be aware of task-relevant aspects that tutors are not, and unless students provide their feedback, these aspects will most probably be overlooked.[27]

The third section was designed to help students build self-regulation skills and acquire the ability to become independent (i.e., self-directed) learners, which will collectively give structure to their own personal development strategy. Self-assessment has been considered a very powerful tool that equips its user with the ability to execute tasks more proficiently and confidently.[28] It has also been shown that self-assessment in dental education can improve performance outcomes.[29],[30],[31] In addition, as the procedure of preparing a cavity is ideally followed by restoring the tooth with a filling material, emphasis should be given to the appropriateness of the prepared cavity to receive such a filling. The third section consists of distinct, rather than general, categories.[32] In general, faulty cavity preparations can be either too destructive or too conservative; in the former case, the vitality of the tooth would be endangered, while in the latter, the placement of a filling would be a problem. This necessitates a more specific description, rendering the use of general criteria impractical in this assessment. Student scores from the third section can then be compared with their actual results to assess how well the students rate their level of performance.[29],[30] It is also recommended that students get the chance to modify, improve, or repeat the task (preferably supervised) immediately after a deficiency has been reported via the self-assessment tool.
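One simple, purely illustrative way to carry out the comparison in the last step is to correlate students' self-ratings with the marks their tutors actually awarded; a high positive correlation suggests students judge their own performance in line with their tutors. The scale coding and the data below are assumptions for the sake of the sketch, not part of the proposed tool.

```python
# Sketch: comparing students' third-section self-ratings (coded 1-5,
# poor to very good -- an assumed coding) with the tutors' actual marks,
# using a Pearson correlation as a rough index of self-assessment accuracy.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data for eight students:
self_ratings = [5, 4, 4, 3, 2, 3, 5, 2]         # students' own ratings
tutor_marks = [80, 72, 75, 60, 55, 58, 85, 40]  # tutors' marks (%)

r = pearson(self_ratings, tutor_marks)
# r near +1: students rate themselves much as their tutors do;
# r near 0 or negative: self-assessment is poorly calibrated.
```

Patterns such as the over- and under-marking discussed later in this section would show up here as systematic offsets for particular groups of students rather than as a low overall correlation.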

The decision a dentist makes on treating a patient is crucial, and the consequences of such decisions must be borne in mind. Therefore, the lack of a certain skill should not be underestimated; it necessitates further skill development through training and/or attending courses. Accordingly, the ability to perform a particular skill should be continuously appraised to identify any limitations and difficulties, especially as almost all dental procedures, techniques, and materials are continuously developing. In view of this, students must be better informed of the implications of adopting self-assessment strategies for their future professional practice.[33]

Self-assessment can be a productive challenge in that it encourages students to think more and learn more.[34] Yet, the procedure itself is time-consuming and difficult to master,[35] not to mention the prolonged period required to achieve the perceived improvement in performance.[31]

In addition, there are several other difficulties associated with this method of self-assessment. The assessment will not necessarily reflect students' actual performance, for it may correspond instead with their level of confidence: better students tend to under-mark themselves,[36] while poorer students tend to be overconfident and overrate their performance.[11] Moreover, creating a module culture in which students feel comfortable rating their own performance is not an easy task.[37] An explanation might be that they are more inclined to have their performance rated by an expert.[38]

In sum, a valid strategy of self-assessment can help students to acquire the reflective habits of mind which play an important role in facilitating the development of their ongoing capacity to perform.[27]

Tutor self-assessment

The tutor self-assessment tool consists of four sections, namely, communication and interaction, clinical simulation, accessibility, and future improvements, along with a subsection where the staff-to-student ratio is to be calculated.

For the tutor, students' performance and the training method are mutually interconnected. In other words, if performance is unsatisfactory, this does not necessarily indicate a weak student, for the training method might be deficient. On the other hand, modifying a training method (e.g., introducing new technology) does not necessarily lead to improved performance. Therefore, a dual strategy of analyzing students' performance and self-assessing the tutor's approaches and techniques in delivering the desired skill is highly recommended.[39]

Explicit criteria and expectations have to be set and fully explained, preferably using three-dimensional jumbo models, so that students can identify anatomical landmarks and sub-ideal cavities and understand the consequences of making a mistake during vital aspects of the procedure. Thorough knowledge of the anatomical landmarks is essential in teaching dental practical skills and is therefore strongly recommended here as a prerequisite for a successful lesson plan and the subsequent hands-on training session.[40]

Due to the sensitivity of this module, where only successful students can be considered “patient ready” and able to undertake basic dental procedures safely and adequately, it is important to explain that self-assessment is only an adjunctive method to facilitate developing the required skill; otherwise, achieving personal objectivity and honesty would probably be impossible.[28] In other words, this assessment is a formative rather than a summative one, directed only toward improving the quality of students' performance.

Tutors are expected to demonstrate how an ideal self-assessment is undertaken; in addition, following up students while they assess themselves is mandatory, especially during the earlier stages of the module. Therefore, extra effort from tutors is necessary to provide more consistent individual feedback. It is not uncommon for students to face serious difficulties in describing their own performance properly;[41] therefore, early in the course, the proper self-assessment language and methodology should be explicitly explained and piloted on a student or two so that a clearer image of “how to do” self-assessment is delivered.[33] With time, students will become more accustomed to the process.[42] Moreover, students who are not in the habit of reflecting on their own performance should be identified, encouraged, and instructed.[27]

Self-assessment will orient the tutor toward a valid purpose that takes into account the combined role of both the tutor and the student.[39],[43] This will enhance the tutor's teaching and personal development from both a theoretical and a practical perspective. The data collected from the students' self-assessments (i.e., the first two sections) should be taken into consideration, for they can indicate where the deficient aspects of the teaching method lie. The more data there are, and the more relevant they are to the purpose, the better the tutor's self-assessment will be.

Nevertheless, there are several difficulties; the most common is that tutors tend to present a better image of themselves than merited.[44] In addition, applying irrelevant criteria in the self-assessment form is not uncommon. This can be resolved by discussing the criteria with colleagues; however, it is difficult to agree on which criteria should be integrated.

A good rapport between tutors and their students is recommended. Furthermore, quality feedback is a must for the strategy of self-assessment to succeed; quick and relevant feedback is therefore required, with emphasis on the student's performance.[39]

Students can be encouraged to participate in group discussions and/or discussion forums examining any difficulties encountered during the process of self-assessment. In addition, students can ask each other to peer-review their assessments, which, along with the feedback received from their tutors, will fine-tune their self-assessment skills. As for the tutors, this assessment can be reviewed by a colleague, from whom structured and focused comments on aspects of teaching practice can be received and discussed. It is important to get students to understand and respond positively to feedback, which can be achieved by creating a module culture in which students are encouraged to share and discuss their mistakes openly. Proper feedback will ensure better performance and improved self-assessment skills; however, this is a process that cannot be achieved without difficulty.


  Student Feedback and Reflection


Our dental school provides an open and relaxed culture that fosters constructive discussions between staff and students. Yet, student feedback is not fully embraced in the clinical simulation modules and usually tends to end up as an overall performance feedback session once or twice a year. Nevertheless, since the introduction of student feedback in 2016, there has been greater consistency in the feedback received on assessed work and an increased effort in giving feedback in general. This was a huge step forward, since beforehand (i.e., before 2016) the application of student feedback was limited. To overcome this challenging situation, and to enhance reflection, face-to-face feedback was recently introduced to the clinical simulation skills laboratory.

Students' weaknesses should be continuously diagnosed, as this helps to make the training experience more efficient: students' performance is improved, and their self-reflection is promoted.[45] Moreover, continuous monitoring ensures that the student has not gone off track, for the earlier underperformance is diagnosed, the easier it is to remedy within the timescale of the module.

Descriptive, nonpersonalized, and specific feedback was given immediately after the activity, or as soon as possible afterwards.[46],[47] This feedback adopted the “competency model” strategy.[48],[49] Staff members discussed weaknesses and areas for development with students who demonstrated low levels of competence. Alternative approaches were then suggested, and a relaxed conversation was encouraged by engaging the student in reflective discussion assisted by open questions.[47],[49] Competent students, on the other hand, were advised on how to refine their skills and raise their awareness of detail. Moreover, competent students are more prone to developing bad habits, which are difficult to change if they go unnoticed; such habits are best dealt with through thorough face-to-face feedback. This would, therefore, enhance the interaction between the instructor and the student as well as help students develop the ability to critically evaluate their own performance and skills to achieve professional autonomy.[47]

On many occasions, areas of students' strength were overlooked, while only areas of weakness were highlighted. Such an approach has the potential to be destructive,[50] especially for “problem learners” who might be going through complicated and emotionally charged circumstances.[46] Thus, in such cases more emphasis should be placed on a student's strengths while still highlighting their weaknesses. An approach proposed by Pendleton et al.[51] demonstrates this: the instructor and the student first discuss what was performed well, then what they believe could be improved. This approach, however, leaves very little time for discussing the negative aspects properly.[50] In addition, it has previously been reported in the literature that positive feedback merely overprotects certain underperformers, while criticism generates the most effective learning experiences.[52]

An alternative approach would be to allow negative feedback to be “sandwiched” between two episodes of positive feedback (i.e., the feedback sandwich).[46] This model starts and ends with a praise item, while in between, thorough constructive criticism should be provided.

Many of the aforementioned drawbacks could be surmounted by updating the adopted strategies according to the feedback models available in the literature. It is therefore recommended that all staff members employ a more consistent pattern of feedback and review their techniques periodically to continually improve the teaching process, in view of the importance of these feedback sessions in enhancing students' performance. This can be achieved by participating in training courses with a focus on medical education.


  Conclusion


This paper defines specific recommendations for implementing the dental clinical skills assessment tools necessary for effective teaching of tooth cavity preparation in clinical simulation laboratories, namely, clinical simulation skill assessment, self-assessment, and student feedback and reflection. Such recommendations, if enforced thoroughly, would help tutors and course instructors assess dental clinical simulation skill components objectively and identify the clinically weaker students.

All in all, medical education is not only the delivery of knowledge, skills, and attitudes but also the construction of a professional identity. Learners must understand their future job responsibilities and how to be professional and successful in dealing with and treating patients. To achieve this, tutors must prepare their students for the professional roles they will occupy and develop professionals who are competent, self-aware, able to self-monitor and self-assess their performance, and able to continue the journey of learning throughout their practice lifetimes. Gagné, a key theorist and contributor to the field, held that instruction cannot be explained easily by theories;[53] still, the broad range of instructional theories available to us as educators, even if it does not fully prescribe the ideal method of instruction, is an invaluable source from which we can extract teaching principles to fine-tune our methods.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
References

1. Shahady EJ, Stritter FT, Baker RM. Clinical instruction. In: Handbook for the Academic Physician. New York, NY: Springer; 1986. p. 89.
2. Knowles MS. Andragogy in Action. The Adult Learner: A Neglected Species. Houston, TX: Gulf; 1984.
3. Lawler PA. Teachers as adult learners: A new perspective. New Dir Adult Contin Educ 2003;2003:15-22.
4. Garrison DR. Self-directed learning: Toward a comprehensive model. Adult Educ Q 1997;48:18-33.
5. Knowles MS, Holton EF, Swanson RA. The Adult Learner: The Definitive Classic in Adult Education and Human Resource Development. Houston, TX: Gulf Publishing Company; 1998.
6. Bryan RL, Kreuter MW, Brownson RC. Integrating adult learning principles into training for public health practice. Health Promot Pract 2009;10:557-63.
7. Pratt DD. Andragogy as a relational construct. Adult Educ Q 1988;38:160-72.
8. Mann KV. Theoretical perspectives in medical education: Past experience and future possibilities. Med Educ 2011;45:60-8.
9. Kaufman DM. ABC of learning and teaching in medicine: Applying educational theory in practice. Br Med J 2003;326:213-6.
10. Candy PC. Self-Direction for Lifelong Learning: A Comprehensive Guide to Theory and Practice. San Francisco, CA: Jossey-Bass; 1991.
11. Taylor CL, Grey N, Satterthwaite JD. Assessing the clinical skills of dental students: A review of the literature. J Educ Learn 2013;2:20-31.
12. Goepferd SJ, Kerber PE. A comparison of two methods for evaluating primary class II cavity preparations. J Dent Educ 1980;44:537-42.
13. Feil PH. An analysis of the reliability of a laboratory evaluation system. J Dent Educ 1982;46:489-94.
14. Puryer J, O'Sullivan D. An introduction to standard setting methods in dentistry. Br Dent J 2015;219:355-8.
15. Cizek GJ, Bunch MB. Standard Setting: A Guide to Establishing and Evaluating Performance Standards on Tests. Thousand Oaks, CA: SAGE Publications Ltd.; 2007.
16. Livingston SA, Zieky MJ. Passing Scores: A Manual for Setting Standards of Performance on Educational and Occupational Tests. Princeton, NJ: Educational Testing Service; 1982.
17. Ben-David MF. AMEE Guide No. 18: Standard setting in student assessment. Med Teach 2000;22:120-30.
18. Case SM, Swanson DB. Constructing Written Test Questions for the Basic and Clinical Sciences. Philadelphia: National Board of Medical Examiners; 1998.
19. Fowell SL, Fewtrell R, McLaughlin PJ. Estimating the minimum number of judges required for test-centred standard setting on written assessments. Do discussion and iteration have an influence? Adv Health Sci Educ Theory Pract 2008;13:11-24.
20. Smee SM, Blackmore DE. Setting standards for an objective structured clinical examination: The borderline group method gains ground on Angoff. Med Educ 2001;35:1009-10.
21. Schoonheim-Klein M, Muijtjens A, Habets L, Manogue M, van der Vleuten C, van der Velden U, et al. Who will pass the dental OSCE? Comparison of the Angoff and the borderline regression standard setting methods. Eur J Dent Educ 2009;13:162-71.
22. Kramer A, Muijtjens A, Jansen K, Düsman H, Tan L, van der Vleuten C, et al. Comparison of a rational and an empirical standard setting procedure for an OSCE. Objective structured clinical examinations. Med Educ 2003;37:132-9.
23. Zieky M, Perie M, Livingston S. A Primer on Setting Cut Scores on Tests of Educational Achievement. Princeton, NJ: Educational Testing Service; 2006. p. 320.
24. Nichols P, Twing J, Mueller CD, O'Malley K. Standard-setting methods as measurement processes. Educ Meas Issues Pract 2010;29:14-24.
25. The General Dental Council. Annual Review of Education. London: GDC; 2015.
26. Bose S, Oliveras E, Edson WN. How can self-assessment improve the quality of healthcare? Oper Res Issue Pap 2001;2:1-27.
27. Ross JA. The reliability, validity, and utility of self-assessment. Pract Assess Res Eval 2006;11:1-13. Available from: http://pareonline.net/getvn.asp?v=11&n=10. [Last accessed on 2018 Sep 12].
28. Fuhrmann BS, Weissburg MJ. Self-assessment. In: Evaluation of Clinical Competence in the Health Professions. St. Louis: C.V. Mosby; 1978.
29. Geissler PR. Student self-assessment in dental technology. J Dent Educ 1973;37:19-21.
30. Knight GW, Guenzel PJ, Fitzgerald M. Teaching recognition skills to improve products. J Dent Educ 1990;54:739-42.
31. Curtis DA, Lind SL, Dellinges M, Setia G, Finzen FC. Dental students' self-assessment of preclinical examinations. J Dent Educ 2008;72:265-77.
32. Stock PL, Robinson JL. Taking on testing: Teachers as tester-researchers. Engl Educ 1987;19:93-121.
33. Boud D. Assessment and learning: Contradictory or complementary? In: Assessment for Learning in Higher Education. London: Kogan Page; 1995. p. 35-48.
34. Cowan J. Struggling with self-assessment. In: Developing Student Autonomy in Learning. London: Routledge and Kegan Paul; 1988. p. 192-210.
35. Stefani LA. Comparison of collaborative self, peer and tutor assessment in a biochemistry practical. Biochem Educ 1992;20:148-51.
36. Gordon MJ. A review of the validity and accuracy of self-assessments in health professions training. Acad Med 1991;66:762-9.
37. Evans AW, McKenna C, Oliver M. Self-assessment in medical practice. J R Soc Med 2002;95:511-3.
38. Evans AW, McKenna C, Oliver M. Towards better understanding of self-assessment in oral and maxillofacial surgery. Med Educ 2001;35:1077.
39. Stalmeijer RE, Dolmans DH, Wolfhagen IH, Peters WG, van Coppenolle L, Scherpbier AJ, et al. Combined student ratings and self-assessment provide useful feedback for clinical teachers. Adv Health Sci Educ Theory Pract 2010;15:315-28.
40. Qutieshat A. Using Gagne's theory and Peyton's four-step approach to teach inferior alveolar nerve block injection. J Dent Res Rev 2018;5:75-9.
41. Stefani LA. Peer, self and tutor assessment: Relative reliabilities. Stud High Educ 1994;19:69-75.
42. Brown GA, Bull J, Pendlebury M. Assessing Student Learning in Higher Education. London: Routledge; 2013.
43. Kagan DM. Ways of evaluating teacher cognition: Inferences concerning the Goldilocks principle. Rev Educ Res 1990;60:419-69.
44. Hashweh MZ. Teacher accommodative change. Teach Teach Educ 2003;19:421-34.
45. Atkinson JW. Motivational determinants of risk-taking behavior. Psychol Rev 1957;64:359-72.
46. Milan FB, Parish SJ, Reichgott MJ. A model for educational feedback based on clinical communication skills strategies: Beyond the “feedback sandwich”. Teach Learn Med 2006;18:42-7.
47. McKimm J, Swanwick T. Clinical Teaching Made Easy: A Practical Guide to Teaching and Learning in Clinical Settings. London: Andrews UK Limited; 2013.
48. Proctor B. Training for the supervision alliance: Attitude, skills and intention. In: Routledge Handbook of Clinical Supervision. London: Routledge; 2010. p. 51-62.
49. Hill F. Feedback to enhance student learning: Facilitating interactive feedback on clinical skills. Int J Clin Skills 2007;1:21-4.
50. Brown N, Cooke L. Giving effective feedback to psychiatric trainees. Adv Psychiatr Treat 2009;15:123-8.
51. Pendleton D, Schofield T, Tate P. The Consultation: An Approach to Teaching and Learning. Oxford: Oxford Medical Publications; 1984.
52. Boehler ML, Rogers DA, Schwind CJ, Mayforth R, Quin J, Williams RG, et al. An investigation of medical student reactions to feedback: A randomised controlled trial. Med Educ 2006;40:746-9.
53. Gagné R. The Conditions of Learning. New York: Holt, Rinehart and Winston; 1965.


Figures

  [Figure 1], [Figure 2]
