Joni Lybbert and Stacie Hunsaker, Nursing Department
High-fidelity simulation refers to the use of computerized manikins to simulate real-life situations. High-fidelity simulation is now a crucial component of nursing programs, helping students gain necessary critical-thinking skills. The College of Nursing (CON) at Brigham Young University (BYU) includes simulation in its curriculum to assist students in developing decision-making skills in a modified clinical setting. However, it is the interaction between students and the simulation facilitator, more than the simulation technology itself, that helps achieve these aims.
One essential part of simulation is the debrief, which occurs after the simulation. It is when the simulation facilitator and students process the experience, speak openly about their performance, and receive feedback. The deciding factor in successful simulation debriefing is the facilitators' ability to guide discussion. Facilitators do this by creating a learning environment that allows for vulnerability, permitting students to learn and participate without fear of making mistakes.
In our research, we studied the effect that formal simulation training for facilitators had on students' perceptions. The purpose of this paper is to explain the methodology of the research, discuss the results, and evaluate whether formal simulation training improves students' perceptions of simulation effectiveness and improves the facilitators' ability to lead debriefing as perceived by the students.
Brigham Young University students who had simulation as part of their curriculum had the opportunity to participate in the study. We first obtained approval from the Institutional Review Board (IRB). Then, as the various simulations took place throughout the semester, we distributed two validated instruments to the students along with an informed consent form. The two surveys used were the Simulation Effectiveness Tool (SET) and the Debriefing Assessment for Simulation in Healthcare (DASH). The surveys were given to one BYU nursing cohort in Winter of 2016 (pre-test) and then to a different cohort in Fall of 2016 (post-test). The cohorts included students from the 3rd, 4th, and 5th semesters of the nursing program, which means that many students were part of the study in both Winter and Fall, but in different simulations. Each cohort included nine different simulations that addressed various aspects of healthcare, such as caring for a patient with a gastrointestinal bleed or responding to a code blue. In July of 2016, between the two semesters, the simulation facilitators attended a formal simulation training.
In the SET survey, students responded to a series of 13 statements to determine their perceptions of simulation effectiveness. The DASH tool measured the quality of the debriefing by asking the students to respond to a series of statements on a scale from one, extremely ineffective/detrimental, to seven, extremely effective/outstanding. The statements fall under six categories: 1) establishes an engaging learning environment; 2) maintains an engaging learning environment; 3) structures the debriefing in an organized way; 4) provokes engaging discussion; 5) identifies and explores performance gaps; and 6) helps participants achieve or sustain good future performance. As the students completed the surveys, we entered the data into Qualtrics for easy data collection. The quantitative data was then transferred to SPSS, a statistical software package, to test the significance of the results.
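The analysis described above was carried out in SPSS; for readers more familiar with open-source tools, a pre/post comparison of this kind can be sketched in Python. The score values below are illustrative placeholders, not data from the study.

```python
# Minimal sketch of an independent-samples comparison of mean DASH
# ratings (1-7 scale) from two cohorts, one surveyed before and one
# after the facilitator training. The study itself used SPSS; the
# numbers here are hypothetical, chosen only to illustrate the test.
from scipy import stats

# Hypothetical mean DASH rating per respondent in each cohort
pre_scores = [5.2, 5.8, 6.0, 4.9, 5.5, 6.1, 5.0, 5.7]   # Winter 2016 cohort
post_scores = [6.0, 6.3, 5.9, 6.4, 5.8, 6.5, 6.1, 6.2]  # Fall 2016 cohort

# Independent-samples t-test: did post-training ratings differ
# significantly from pre-training ratings?
t_stat, p_value = stats.ttest_ind(post_scores, pre_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# By convention, p < 0.05 is treated as statistically significant.
```

With real data, one test would be run per simulation scenario, matching the per-simulation significance results reported below.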
A total of 262 students participated in the DASH pre-test and 282 completed the DASH post-test; 322 students participated in the SET pre-test and 382 in the SET post-test. Four simulations produced a statistically significant improvement in DASH scores, and two simulations showed statistically significant improvements in SET scores. One simulation improved on both SET and DASH scores. Although there were slight improvements in the other simulations, they were not statistically significant.
From this data, we concluded that in some cases, formal simulation training improves students' perceptions of the effectiveness of simulations and improves the facilitators' ability to lead debriefing as perceived by the students. There are many factors that are not accounted for when looking only at the statistics. First, survey fatigue may have influenced the results. The students were asked to fill out both the DASH and SET surveys for each simulation taken during the semester, and some students took the surveys multiple times across both semesters. We noticed that the responses to the surveys became less specific as time went on. Second, because of the timing of the formal simulation training, the nursing students had different facilitators from the pre-test to the post-test.
It is interesting to note that of the five simulations with improved scores, three take place in the 4th and 5th semesters of the nursing program. This could mean that students who had taken the surveys during their 3rd or 4th semester perceived the simulations as more effective than they had the previous semester. The improvement could also be due to a change in content or a change in facilitator.
In conclusion, in some cases formal simulation training improves students' perceptions of the effectiveness of simulations. Formal simulation training may also improve the facilitators' ability to lead debriefing as perceived by the students. Further research could be conducted to see if students' perceptions change when the formal training occurs during the semester. We have had the opportunity to present a portion of this research at the Utah Conference on Undergraduate Research. We plan to publish a paper based on this research and to apply to present at the International Meeting on Simulation in Healthcare in January 2018.